Dartmouth researchers look to meld therapy apps with modern AI 

Therabot, currently in its first clinical trial, uses generative AI trained on therapy scripts in an effort to create technology that brings mental health services to underserved populations.

An experimental, artificial intelligence-powered therapeutic app that its creators hope will drastically improve access to mental health care began its first clinical trial last month.

Therabot, a text-based AI app in development at Dartmouth College, launched in a clinical trial in March with 210 participants. In its conversations with users, the app uses generative AI, the same technology that powers OpenAI’s ChatGPT, to come up with answers and responses. The app also uses a form of AI that learns patterns and has been designed to enable Therabot to get to know and remember a user and provide personalized advice or recommendations based on what it has learned.

There are already a handful of script-based therapy apps and broader “wellness” apps that use AI, but Therabot’s creators say theirs would be the first clinically tested app powered entirely by generative AI that has been specifically designed for digital therapy. 

Woebot, a mental health app that says it has served 1.5 million people worldwide, launched in 2017 in collaboration with interventional scientists and clinicians. Wysa, another popular AI therapy app, in 2022 received a Food and Drug Administration Breakthrough Device designation, a voluntary program designed to speed up development, assessment and review of a new technology. But these apps generally rely on rules-based AI with preapproved scripts.

Nicholas Jacobson, an assistant professor at Dartmouth College and a clinically trained psychologist, spearheaded the development of Therabot. His team has been building and finessing the AI program for nearly five years, working to ensure responses are safe and responsible. 

“We had to develop something that really is trained in the broad repertoire that a real therapist would be, which is a lot of different content areas. Thinking about all of the common mental health problems that folks might manifest and be ready to treat those,” Jacobson said. “That is why it took so long. There are a lot of things people experience.”

The team first trained Therabot on data derived from online peer support forums, such as cancer support pages. But Therabot initially replied by reinforcing the difficulty of daily life. They then turned to traditional psychotherapist training videos and scripts. Based on that data, Therabot’s replies leaned heavily on stereotypical therapy tropes like “go on” and “mhmm.” 

The team ultimately pivoted to a more creative approach: writing their own hypothetical therapy transcripts that reflected productive therapy sessions, and training the model on that in-house data. 

Jacobson estimated that more than 95% of Therabot’s replies now match that “gold standard,” but the team has spent the better part of two years refining the responses that fall short of it.

“It could say anything. It really could, and we want it to say certain things and we’ve trained it to act in certain ways. But there’s ways that this could certainly go off the rails,” Jacobson said. “We’ve been essentially patching all of the holes that we’ve been systematically trying to probe for. Once we got to the point where we were not seeing any more major holes, that’s when we finally felt like it was ready for a release within a randomized controlled trial.”

The dangers of digital therapeutic apps have been subject to intense debate in recent years, especially because of those edge cases. AI-based apps in particular have been scrutinized.

Last year, the National Eating Disorders Association pulled Tessa, an AI-powered chatbot designed to provide support for people with eating disorders. Although the app was designed to be rules-based, users reported receiving advice from the chatbot on how to count calories and restrict their diets. 

“If [users] get the wrong messages, that could lead to even more mental health problems and disability in the future,” said Vaile Wright, senior director of the Office of Health Care Innovation at the American Psychological Association. “That frightens me as a provider.”

With recruitment for Therabot’s trial now complete, the research team is reviewing every one of the chatbot’s replies, monitoring for deviant responses. The replies are stored on servers compliant with health privacy laws. Jacobson said his team has been impressed with the results so far.

“We’ve heard ‘I love you, Therabot’ multiple times already,” Jacobson said. “People are engaging with it at times that I would never respond if I were engaging with clients. They’re engaging with it at 3 a.m. when they can’t sleep, and it responds immediately.”

In that sense, the team behind Therabot says, the app could expand access and availability rather than replacing human therapists.

Jacobson believes that generative AI apps like Therabot could play a role in combating the mental health crisis in the United States. The nonprofit Mental Health America estimates that more than 28 million Americans have a mental health condition but do not receive treatment, and 122 million people in the U.S. live in federally designated mental health shortage areas, according to the Health Resources and Services Administration.

“No matter what we do, we will never have a sufficient workforce to meet the demand for mental health care,” Wright said. 

“There needs to be multiple solutions, and one of those is clearly going to be technology,” she added.

During a demonstration for NBC News, Therabot validated feelings of anxiety and nervousness before a hypothetical big exam, then offered techniques to mitigate that anxiety tailored to the user’s worries about the test. In another case, when asked for advice on combating pre-party nerves, Therabot encouraged the user to try imaginal exposure, a technique for alleviating anxiety that involves envisioning participation in an activity before doing it in real life. Jacobson noted this is a common therapeutic treatment for anxiety.

Other responses were mixed. When asked for advice about a breakup, Therabot warned that crying and eating chocolate might provide temporary comfort but would “weaken you in the long run.”

With eight weeks left in the clinical trial, Jacobson said that the smartphone app could be poised for additional trials soon and then broader open enrollment by the end of the year if all goes well. Beyond other apps essentially repurposing ChatGPT, Jacobson believes this would be a first-of-its-kind generative AI digital therapeutic tool. The team ultimately hopes to gain FDA approval. The FDA said in an email that it has not approved any generative AI app or device. 

With the explosion of ChatGPT’s popularity, some people online have taken to testing the generative AI app’s therapeutic skills, even though it was not designed to provide that support. 

Daniel Toker, a neuroscience student at UCLA, has been using ChatGPT to supplement his regular therapy sessions for more than a year. He said his initial experiences with traditional therapy AI chatbots were less helpful.

“It seems to know what I need to hear sometimes. If I have a challenging thing that I’m going through or a challenging emotion, it knows what words to say to validate how I’m feeling,” Toker said. “And it does it in a way that an intelligent human would,” he added.

He posted on Instagram in February about his experiences and said he was surprised by the number of responses.

On message forums like Reddit, users also offer advice on how to use ChatGPT as a therapist. One safety employee at OpenAI, which owns ChatGPT, posted on X last year about how impressed she was by the generative AI tool’s warmth and listening skills.

“For these particularly vulnerable interactions, we trained the AI system to provide general guidance to the user to seek help. ChatGPT is not a replacement for mental health treatment, and we encourage users to seek support from professionals,” OpenAI said in a statement to NBC News.

Experts warn that ChatGPT could provide inaccurate information or bad advice when treated like a therapist. Generative AI tools like ChatGPT are not regulated by the FDA since they are not therapeutic tools.

“The fact that consumers don’t understand that this isn’t a good replacement is part of the problem and why we need more regulation,” Wright said. “Nobody can track what they’re saying or what they’re doing and if they’re making false claims or if they’re selling your data without your knowledge.”

Toker said the personal benefits of his experience with ChatGPT outweigh the cons.

“If some employee at OpenAI happens to read about my random anxieties, that doesn’t bother me,” Toker said. “It’s been helpful for me.”

This story first appeared on NBCNews.com.

Copyright NBC News