With employee demand for mental health support at an all-time high—and access to traditional care often limited—employers are increasingly exploring AI-powered tools to help close the gap.
In this fireside chat with Dr. Jessica Watrous, Chief Clinical Officer at Modern Health, we'll examine how innovations like predictive models, on-demand digital platforms, AI tools for providers and benefits leaders, and even chatbots could transform how organizations approach employee well-being.
Dr. Watrous will share how these tools can help deliver personalized support, encourage early intervention, reduce stigma, support providers, and help manage rising healthcare costs—while also addressing the challenges of data privacy, ethics, and benefit integration.
Join us for a forward-looking conversation on how AI is reshaping mental health benefits—and what it means for the future of workplace wellness.
Transcription:
Transcripts are generated using a combination of speech recognition software and human transcribers, and may contain errors. Please check the corresponding audio for the authoritative record.
Paola Peralta (00:14):
Hello and welcome, everyone, to today's fireside chat about how AI is reshaping mental health. I am here with Dr. Jessica Watrous, Chief Clinical Officer at Modern Health and a licensed clinical psychologist. Before we get to our talk today, just a reminder that this is a SHRM-certified event, so everyone make sure you check whether or not you qualify. We welcome questions, and they will most likely be held until the end if we have time. So without further ado, hi Jessica, how are you?
Dr. Jessica Watrous (00:49):
Hi, Paola. I'm good. How are you? Thanks for having me today.
Paola Peralta (00:53):
Of course. This topic is dear to my heart, and I think it's something everyone in the workforce is raving about. To kick us off, give me a quick overview of how and when AI started disrupting the mental health space.
Dr. Jessica Watrous (01:11):
Yeah, absolutely. There are a couple of things I can ground us in. We'll start with the idea that innovation in mental health is not new. Each generation of clinicians has dealt with some level of innovation, whether that was new psychiatric drugs in the fifties or the introduction of evidence-based therapies like Cognitive Behavioral Therapy in the seventies. Even telemedicine and the rapid expansion of telehealth over the COVID years—we've been doing innovation for decades. AI really came on the scene over the past several years. We've had things like machine learning playing a role in how we're thinking about triaging people to care or understanding mental health in research.
(02:06):
But this current boom in AI usually refers to the rapid expansion of generative AI. If you think about the timeline, ChatGPT was released at the end of 2022. We started to see some adoption through 2023, but in my sphere, we've seen more and more of it in the past year to year and a half. We have seen very quickly that people are using these tools for companionship, to talk about their mental and physical health, and to get recommendations and care. This conversation has been bubbling for the past year or so, with increased attention since the beginning of 2025. The reality is that some of that attention comes from really sad and scary anecdotes about what has happened when people engaged with a generative AI chatbot about their mental health.
(03:02):
We've seen some really concerning narratives coming out of that, but that's how we got to where we are. The truth of the matter is that healthcare domains are actually becoming some of the quickest adopters of AI in their workflows and how they're thinking about things clinically. So, I think it's a really appropriate time for us to have this conversation about why AI is in a mental health conversation and what we are doing here.
Paola Peralta (03:36):
Where did this demand for AI come from on the company side? I think you touched on where it's coming from on the individual side, but what about the organizational side?
Dr. Jessica Watrous (03:49):
Ultimately, getting access to mental health care can be really challenging. Globally, we don't have enough clinicians for the number of people who need care. We see people using a variety of tools to get the support they're looking for. From an employer's perspective, they're hearing this from their employees and seeing them use AI in new and interesting ways. Our healthcare systems are sometimes overburdened; there's a lot of work to be done, many patients to be seen, and often limited resources. So anything that increases efficiency is always going to be something healthcare organizations are interested in.
(04:44):
When we're thinking about ways that we can optimize outcomes, as clinicians, that's something that we care very deeply about. If I can help someone get better faster or demonstrate even stronger outcomes through an AI tool, obviously that's going to be very appealing to me. We also see generational differences. Within the workforce, Gen Z and millennial employees may be more likely to engage with an AI, whether to get mental health advice or for companionship. It makes sense that employers are thinking through this given how their employees are interacting with these tools.
Paola Peralta (05:40):
Absolutely. To get the definition out there, what is the difference between standard AI and AI designed specifically for mental health?
Dr. Jessica Watrous (05:56):
The easiest way to understand that is to look at the underlying purpose of a tool like ChatGPT or Claude. The goal of those generative AI tools is to drive engagement. They want you to keep talking with them. If you've used ChatGPT, there's always a follow-up about whether you'd like it to do more or if it would be helpful to go in a certain direction. From the tech perspective, the stickier their product is, the better. That's fine, but that's not necessarily consistent with what we want for a mental health intervention.
(07:00):
To juxtapose that with evidence-based therapies: when I start working with a patient, I often say my goal is actually to put myself out of business in your life. We have a set number of sessions where I want you to learn new coping skills so you can navigate issues independently—not so you see me as your therapist for the rest of your life. You can see the difference in how we think about engagement between an evidence-based therapy and a generative AI.
(07:50):
The other piece is that those generative AI models weren't necessarily built with clinicians in the loop to spot signs and symptoms of a mental health crisis, like suicidality. As we see more mental health-specific AI tools, the best ones are being built with clinicians alongside to think through how we identify crisis, how we escalate, and how we make sure people are connected to a human. Another point we hear about in the press is that these AI models may be overly sycophantic. Validation is important, but it's also important to challenge people's beliefs or perceptions. Generative AI may not be at a place where it does that as well as a human.
(08:49):
That being said, OpenAI is doing a ton of work to think through this. They've got a committee involved, they're improving the algorithm, and they're thinking through how to recognize these things. That contrasts with the more mental health-specific tools, where people have been thinking about this from the beginning.
(10:10):
One other callout: this is all happening hand in hand with regulatory changes. Just last week, the FDA met to talk about what AI looks like in digital health. We are figuring out, if a generative AI makes a diagnostic recommendation or delivers an intervention, whether we treat that as a medical device and whether it needs to go through FDA approval.
Paola Peralta (10:33):
What are some of the consequences that organizations or their employees can experience if they don't invest in a mental health-centered AI?
Dr. Jessica Watrous (10:58):
The simplest one is knowing whether they actually work. That is what keeps me up at night. Cognitive Behavioral Therapy has decades of randomized clinical trials and research proving that people get better. At Modern Health, we also have peer-reviewed scientific evidence showing that when people use our digital content or coaches, they actually get better. We don't have enough data to know which of these Gen AI models works that way yet. We know people are using them, but we don't know yet whether they demonstrate the same level of outcomes.
(11:56):
I am optimistic, but we need to acknowledge there's something deeply human about therapy or coaching, and it's still to be determined whether a Gen AI model can replace that. If an employer offers something that doesn't work, it's not going to deliver the return on investment they'd like to see. To get an ROI on mental health care, it needs to actually help people's mental health.
(12:52):
Other risks: we need to make sure we're investing in models where crisis can be identified, where there are clear guardrails for what the AI can say, and clear processes for sending people to appropriate levels of care.
Paola Peralta (13:38):
Shifting gears to the brighter side of things, tell me about how AI platforms for mental health can actually be successful.
Dr. Jessica Watrous (13:49):
There are so many ways AI can be useful beyond direct interaction with an employee. At Modern Health, we're talking about where we plug AI in. One way is supporting benefits leaders. Generative AI is great for identifying clinical resources we've created. You could ask, "I have employees in X, Y, and Z regions experiencing this problem—do you have resources for that?" AI is a great option to surface those.
(14:37):
Another component is how we're supporting providers. Generative AI can be used for note-taking, which almost every therapist complains about. We're thinking through "Session AI" at Modern Health to get a good session summary and turn that into a note. The human should be in the loop to check for accuracy, but if that cuts note-taking time from 15 minutes to 5, the clinician gets a real break.
(16:17):
Regarding direct member interaction, Modern Health is thinking about how we can increase engagement. Before we build a therapy bot, how can we help members understand their benefit better? Can we help them understand the assessment or what coaching and therapy are? We're building a companion called Sky that will launch early next year, focused on those domains—helping people feel more confident as they navigate their care.
Paola Peralta (17:17):
I'm interested in your take on how these AI tools help reach the demographic of employees who may not need "sit-down" therapy but need something else.
Dr. Jessica Watrous (17:46):
Modern Health uses an adaptive care model founded on a population health approach. When we look at any group of employees, we see a vast spectrum of needs. I was trained to treat the far end of the spectrum—addictions or PTSD—where clinical therapy is the best intervention. But that leaves out a significant portion of the population. Maybe 20 to 25% of people have a diagnosable disorder, but the other 75 to 80% could be experiencing burnout, holiday stress, or a new promotion.
(19:35):
Does it need to be therapy? Not necessarily. Coaching, digital content, and podcasts are all excellent options. AI can be one of those options on a menu. Younger generations are using it for companionship or mental health support in surprising numbers. We want to build something for them that works and feels safe. For others, it may not resonate, but that points to why we need diverse options. If we get it right, AI offers incredible accessibility. For example, if we can get AI to encourage people to connect with friends and family in meaningful ways, that is a powerful opportunity.
Paola Peralta (21:29):
What is the benefits leader's role in this integration? What should the vetting process look like?
Dr. Jessica Watrous (21:48):
First, check your organization's security policies regarding AI. From my perspective, you also want to understand the guiding principles behind a vendor's use of AI. We often talk about a "human in the loop." It is incredibly important to understand what role clinicians play in the building and launching of these products. How are they thinking through escalation processes? Also, make sure companies are tracking regulatory implications. If a company is building a therapy bot but isn't tracking FDA guidelines, there's a risk that it could be yanked away from employees later.
Paola Peralta (23:29):
Are there any big missteps or red flags that leaders should avoid?
Dr. Jessica Watrous (23:43):
Confidentiality is a big one. In therapy, there are specific standards, but with a broad AI, that's not necessarily the case. Data put into a general ChatGPT is not protected. Ask companies how they are thinking about and using this data. Again, I keep coming back to the human-guided piece. I want to know there are clinicians involved in pressure-testing what is being built so that someone who understands crisis is involved in the conversation.
Paola Peralta (25:07):
Tell me a little more about Sky and how you are working those solutions into it.
Dr. Jessica Watrous (25:20):
It highlights the healthy tension between tech and clinical teams. At Modern, our product team and engineers work closely with my clinical team. Our clinicians need to know the guardrails: What happens when a member says X, Y, or Z? How does this get raised to crisis care? What things will the AI stop short of?
(26:17):
Consent is also incredibly important. Even with something like Session AI, both the provider and the member need to be able to consent to that summary being created. Sky is intended to be an engagement driver—not a therapy bot. It's helping people navigate options, understand assessments, and bridge the gap between sessions. Building this ethically and rigorously takes a lot of hard work.
Paola Peralta (27:52):
To close this out, what does the future of AI in mental health look like, and are you optimistic?
Dr. Jessica Watrous (28:04):
I absolutely am optimistic. I have devoted my career to addressing mental health at scale, which requires thinking outside the therapy box. I don't think AI is going to replace human coaches or therapists because people will still prefer that human intervention. But it has the opportunity to make care more accessible and efficient. There are things AI will be really good at, like helping you with the "homework" your therapist gave you or providing gentle encouragement for new health behaviors. There are some really cool things we're going to be able to do.
Paola Peralta (29:33):
Jessica, thank you so much for this talk. This was wonderful. Thank you everyone for joining. Just a reminder to exit back to the Main Event Hub for your next session. I hope everyone has a great rest of their day.
Dr. Jessica Watrous (29:51):
Thank you so much for having me. This was wonderful.