How Safe Is AI Therapy?

Can AI therapy apps like Rosebud, Therapist GPT and Woebot bridge the gap in mental health care — offering comfort and support in an era of stress, loneliness and anxiety? (Anna Vignet/KQED)

After a divorce, KQED health reporter Lesley McClurg felt anxious over the prospect of dating again. On a whim, she turned to ChatGPT for a little emotional support — and found herself unexpectedly comforted. That experience launched her investigation into the fast-growing world of AI therapy. In this episode, Lesley joins Morgan to explore the promise and pitfalls of mental health chatbots — and what users should know before sharing their deepest feelings with an algorithm.


Want to give us feedback on the show? Shoot us an email at CloseAllTabs@KQED.org

Follow us on Instagram

Episode Transcript

This is a computer-generated transcript. While our team has reviewed it, there may be errors.

Morgan Sung: A quick heads up before we start. This episode includes discussions of suicide and mental health conditions, which may be distressing for some listeners. If you or someone you know needs support, we’ll have links to resources in the episode description. 

Lesley McClurg: So, I was going through a divorce and started dating after my divorce and hadn’t dated in many years and came home after a date one night and was just really anxious and kind of disheveled and needed some advice. 

Morgan Sung: This is KQED health reporter Lesley McClurg. 

Lesley McClurg: It was late at night and I had used ChatGPT for, you know, other things and found it pretty helpful and I thought, what about for this moment in my life? And so I asked Chat whether or not I should reach out to this person that I had just dated because I was feeling like the night hadn’t gone that well.

Morgan Sung: It was late at night. She didn’t want to bug a friend about this, and really, she was feeling pretty vulnerable. She didn’t want to be judged. And so, ChatGPT was right there, ready to cheerfully answer her questions. 

Lesley McClurg: I was surprised that it was so good. I just remember after, you know, a few back and forths, I realized that really I was just nervous, really I just needed to take a deep breath. Basically I had created a big storm in my head. And Chat basically was like, “hey, chill, relax, it could have gone well. There’s another way this could have played out, not the sort of devastating reality that you’re playing out right now. Maybe give it a day or two and then reach out.” And so in that moment, it just sort of helped me take the gas off and come back into myself. 

Morgan Sung: It was exactly what she needed to hear at the time. 

Lesley McClurg: I didn’t text the person, which was the right call, and kind of used it as I warmed myself back up into the dating world, and it was really helpful. And so it made me then, as a reporter, start asking, “should I be telling this thing all about my love life? Is this a good idea, privacy-wise, et cetera?” And so that’s where it sort of seeded my reporting going forward. 

Morgan Sung: Lesley isn’t the only one turning to ChatGPT for therapy. If you’ve ever dealt with any health insurance company, you’re probably familiar with the hassle of getting care. And mental health care is especially inaccessible. AI chatbots, though, are convenient, cost little to nothing to use, and, in Lesley’s case, can actually be pretty helpful. But a lot of people are also wary of turning to AI for therapy. Can you trust it? What are you risking when you share your most vulnerable thoughts with a chatbot? 

Morgan Sung: This is Close All Tabs. I’m Morgan Sung, tech journalist and your chronically online friend, here to open as many browser tabs as it takes to help you understand how the digital world affects our real lives. Let’s get into it. 

Access to actual mental health resources has become so limited. Cost and insurance aside, there’s a shortage of licensed human mental health professionals across the country. But can AI therapy really replace actual therapists? Okay, new tab. Does AI therapy work? 

Morgan Sung: Over the course of your reporting, did you meet anyone who actually used an AI chatbot for therapy? 

Lesley McClurg: I actually talked to quite a few people who used AI therapy and I went online and read a lot of Reddit threads because this is quite the popular topic.  I heard more positive stories than negatives. As a reporter, I wanted to illustrate someone who kind of had a nuanced experience, you know, good and bad. 

Morgan Sung: So, Lesley found a woman named Lilly Payne. 

Lesley McClurg: She had kind of the ideal story to illustrate that, yes, it helped her, but it wasn’t ideal. And so that was sort of like the character that I ended up, you know, focusing on. 

Morgan Sung: In your story, you mentioned that Lilly had turned to AI therapy um during the COVID lockdowns, which were a terrible time for a lot of us. But Lilly wasn’t just experiencing, you know, anxiety and depression and loneliness. Her situation was a little more complicated, right? Can you talk about that? 

Lesley McClurg: Yeah, I mean in her words, her life basically fell apart. She graduated from college, she had moved to New York City to pursue an arts career, was very excited. And if we can remember, you know, New York was sort of the epicenter of the early days of COVID. It was really bad. Lockdown was really scary and the hospitals were overflowing and it was not a good scene. And so she left her arts career, abandoned her dreams and moved back home, which was pretty painful, to her parents’ home in Kentucky. And she is sort of tucked away, and it just felt like a big failure. And she was really struggling with like, what’s next for my life? Where do I go from here? 

Lilly Payne: It was such a lonely time for so many people. 

Morgan Sung: This is Lilly. 

Lilly Payne: I was not at a breaking point, but I wasn’t doing awesome. So I was like, “the more help, the better.” 

Lesley McClurg: And so in all of that anxiety, she, you know, initially reached out and leaned on a lot of friends, but eventually she felt like she’d kind of worn those supports thin. And so she read about Woebot, this AI therapy platform in a health newsletter. 

Lilly Payne: So, I gave it a shot because I was like, why not? Everyone’s cooped up in their house. I will talk to this robot. 

Lesley McClurg: Initially it was really helpful. It did help her calm herself. I think she said she, you know, even just having it in her pocket helped her feel more in control in her life. I think she relied on it quite a bit in those early days to kind of find her ground again and be able to focus on, you know, re-imagining a new life from there while she was back at home with her parents in Kentucky. 

Morgan Sung: It’s worth noting that Woebot is a therapy-specific AI chatbot, and it doesn’t use generative AI to respond to users the way that other tools like ChatGPT, or Claude, or DeepSeek do. This means that its interactions with users are a bit more predictable. It’s also engineered to respond the way that a therapist might. So instead of immediately jumping into offering advice, Woebot asks specific questions to encourage users to reflect and do the inner work themselves. 
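
To make that architectural difference concrete, here is a minimal, hypothetical sketch in Python. It is not Woebot’s or ChatGPT’s actual code; the prompts, the function names, and the call_llm hook are all invented for illustration. The point is simply that a scripted bot walks every user through a fixed, reviewable question sequence, while a generative bot hands the raw message to a model whose reply is open-ended.

```python
# Hypothetical sketch only: contrasts a scripted, rules-based responder with a
# generative one. The prompts, names, and call_llm hook are invented; this is
# not Woebot's or any vendor's actual implementation.

SCRIPTED_CBT_PROMPTS = [
    "What's the thought that's bothering you right now?",
    "What evidence supports that thought?",
    "What evidence suggests it might not be the whole story?",
    "How could you restate the thought in a more balanced way?",
]

def scripted_reply(turn_index: int) -> str:
    """A rules-based bot follows the same fixed question sequence for every
    user, so its behavior is predictable and can be reviewed in advance."""
    return SCRIPTED_CBT_PROMPTS[min(turn_index, len(SCRIPTED_CBT_PROMPTS) - 1)]

def generative_reply(user_message: str, call_llm) -> str:
    """A generative bot passes the user's raw message to a large language
    model, so the reply is open-ended and much harder to predict or audit."""
    return call_llm(f"Respond supportively to: {user_message}")
```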

Lesley McClurg: Well, it was designed by a psychologist. And so, you know, from that perspective, it really is designed to focus on your mental health. The goal of Woebot, you know, is to be a mental health tool, a wellness tool. I think that’s how they market themselves. 

Morgan Sung: Woebot is designed to use a set of techniques called cognitive behavioral therapy. 

Lesley McClurg: You know, cognitive behavioral therapy helps you reframe your negative thoughts using specific exercises. And, you know, I think, as with any CBT, which is its acronym, it can feel a little forced, but she did say it helped her reframe those negative thoughts and that she was able to think more positively. 

Morgan Sung: Yeah. Can you talk about uh Lilly’s uh other diagnosis that maybe complicated this form of treatment? 

Lesley McClurg: She has obsessive-compulsive disorder, and sometimes that makes her fixate on worst-case scenarios. 

Lilly Payne: Most of the time when people think about OCD they think of, just the very cliche like, “oh, you can’t stop washing your hands, you’re afraid of germs.” While that is a very real subtype that people experience, typically OCD like manifests in really taboo intrusive thoughts, and then the physical compulsions stem from trying to keep those themes away. And so, logically, you can know that, like, this doesn’t make sense, it’s not actually happening, but it just, it, it’s not just in your head, like physically it feels so real. 

Morgan Sung: Lilly is also diagnosed with anxiety and depression. 

Lesley McClurg: A symptom of depression can eventually be suicidal ideation, right? So she fixated on the idea that eventually, because of her depression, she might think about killing herself. 

Lilly Payne: My brain would be like,  “Oh, you’ve struggled with depression in the past. There’s no saying that one day you won’t want to go through with suicide.”

Lesley McClurg: And so she mentioned that she was worried about suicide in a session with Woebot. And Woebot came back and had a crisis alert and said, “hey, you better call the suicide hotline.” And she said, “no, no no, wait a second.”

Lilly Payne: I’m not experiencing suicidal inclinations, I’m just terrified that I will. 

Lesley McClurg: And luckily she knew that, she understood her disorder enough to know that nuance and to know what was happening in her brain because she had done so much previous therapy. But she said, you know, if she hadn’t really understood her disease, having that crisis alert come up may have even added more stress. 

Lilly Payne: I would have freaked out and been like, “oh my gosh, this thing that is supposed to have this mental health knowledge thinks that I am suicidal. I must be suicidal, I must be a danger to myself.” 

Lesley McClurg: So, you know, in defense of Woebot, they came back and said, “hey, we’re not, you know, specifically targeting or designed for people who have OCD. We really are just a wellness tool.” But her story illustrates where AI doesn’t necessarily have the nuance, the understanding that a human therapist would have. A human therapist would have picked up on that. They would have understood that she had OCD and really understood the nuances of that, whereas in this case, Woebot didn’t. 

Morgan Sung: Right. It sounds like Woebot was inadvertently validating this intrusive thought that she was having because she has OCD. And when you’re really depressed or anxious, it might be helpful for your feelings to be validated like that. But how does that compare to the recommended treatment for OCD? 

Lesley McClurg: I mean, the recommended treatment for OCD is generally exposure therapy. So you expose yourself to whatever you’re scared of. And so in this case, a therapist would work with her on exposing herself to those ideas, probably walk her through, you know, reality, et cetera, in a way that allows her to lean into her fears so that they’re not as scary and wound up and don’t sort of overtake her. Whereas a therapist wouldn’t stand up with a red flag and say, “Oh my God, you really are suicidal. Therefore you should call a hotline.” Right? Which is basically what Woebot did. Yeah.
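
As a rough illustration of the gap Lesley is describing, here is a hypothetical sketch of a naive keyword-based crisis check. It is not Woebot’s actual safety logic; the keywords and the function name are invented. It shows why a simple, context-free trigger can’t distinguish active suicidal ideation from an OCD fear about one day becoming suicidal: both messages trip the same alert.

```python
# Hypothetical sketch only: a naive keyword trigger, not Woebot's real safety
# system. It illustrates why context-free matching misses the nuance a human
# therapist would catch.

CRISIS_KEYWORDS = {"suicide", "suicidal", "kill myself"}

def naive_crisis_flag(message: str) -> bool:
    """Flags any message containing a crisis keyword, regardless of context."""
    text = message.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)

# Both messages raise the same alert, even though only the first describes
# active ideation; the second is the fear-of-a-fear pattern Lilly describes.
print(naive_crisis_flag("I want to kill myself"))                        # True
print(naive_crisis_flag("I'm terrified that one day I'll be suicidal"))  # True
```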

Morgan Sung: Lilly’s case is just one example of the limits of AI therapy. Responding with a crisis alert wasn’t helpful for her specific needs, but it’s probably good that Woebot even has those guardrails in place. But what happens when AI chatbots go off script? How bad can it get? We’ll get into that when we come back. 

New tab. AI therapy … worst case scenarios. 

So Woebot can’t necessarily respond with the nuance of an actual human therapist. But it seems like it wasn’t giving Lilly bad advice. Um but let’s talk about examples of AI therapy doing the exact opposite of what it’s supposed to do. What happened with the National Eating Disorder Association hotline? 

Lesley McClurg: Yeah, that didn’t play out very well. They created a bot named Tessa and some of the users found that Tessa was giving them dieting advice. So these are folks 

Morgan Sung: Oh god. 

Lesley McClurg: …who have, you know, anorexia, bulimia, and somehow Tessa’s wires got crossed and people were getting the exact advice that would be really dangerous for their eating disorders. 

Sharon Maxwell: The recommendations that Tessa gave me was that I could lose one to two pounds per week, that I should eat no more than 2,000 calories in a day, that I should have a calorie deficit of 500 to 1,000 calories per day. All of which might sound benign to the general listener, however, to an individual with an eating disorder, the focus of weight loss really fuels the eating disorder. 

Morgan Sung: That was Sharon Maxwell, an eating disorder recovery educator, speaking to NPR about her experience with Tessa. 

Lesley McClurg: So, NEDA, the National Eating Disorder Association, you know, pulled Tessa down and said, “this isn’t working very well.”

Morgan Sung: And it sounds like they just didn’t have that kind of guardrail in place. Like they didn’t anticipate that. Um, so even if Lilly didn’t really need Woebot to immediately jump into crisis mode, at least it had that guardrail to say, like, “hey, crisis.” But in the past, other AI chatbots have gotten into serious trouble for not responding to users’ red flags and just validating their responses. 

And that happened in the case of Character AI, this AI app that lets users personalize an AI companion based on fictional characters, celebrities, historical figures, all that. Until a recent lawsuit, Character AI did not have any safety measures or disclaimers warning users that they weren’t talking to a real person. What led to this lawsuit? 

Lesley McClurg: Yeah, there was a 14-year-old who grew really attached to his character that he had created. Like you said, Character AI lets you create a character and then interact with that character. And, you know, not surprisingly, kind of like I did in my first experience with ChatGPT, it feels so good that you develop a little bit of an emotional connection. 

And so this 14-year-old did that over the course of several months. And then he started opening up about some of the distress that he was feeling. And the character, instead of steering, you know, this 14-year-old towards help, unfortunately the bot allegedly reinforced some suicidal thoughts and eventually the boy ended up taking his life. And so the lawsuit, 

Morgan Sung: That’s terrible. 

Lesley McClurg: Exactly, it was really kind of horrific, and it’s not the only one like this. There are only a handful at this point, but it really is raising a red flag: these very empathetic responses that sort of, you know, parrot back what you say, which is, again, what some AI does, uh, can play out really, really poorly. 

Morgan Sung: So what happened with the eating disorder hotline and Character AI, those are pretty extreme cases. Will most people actually experience those worst case scenarios? In your research, did you find anything about that? Or is it just like, are these just edge cases? 

Lesley McClurg: I mean we don’t have numbers yet. I think it’s really early in the arc of this technology. I think the experts are most worried about platforms that are like Character AI, where you are building a relationship with a character. In their defense, they’re not built as mental health tools, right? These are not marketing themselves as mental health tools. They are, you know, marketing themselves as, “hey, here, we’re going to give you a friend.” 

Yet, you know, like a friend, like you and I probably do with our friends, we lean on our friends. We talk to our friends. We build emotional connections with our friends. We trust our friends for the right advice, right? And these are robots. So that relationship is not uh, you know, built on human connection. And like we can see it can go wrong. 

Morgan Sung: Another concern that I have, you know, as a tech reporter is, uh, privacy. ChatGPT, for example, isn’t HIPAA compliant. Could you explain what HIPAA is and why it’s necessary with medical information? 

Lesley McClurg: Yeah, I mean, HIPAA is the regulation that keeps all of our medical data safe. So when you go to the doctor, a doctor is required to keep all of your medical information, you know, totally private. It’s not going to be given anywhere. It’s not going to leak away. Those are the privacy regulations. Now, some of these platforms, you know, for example Woebot, or Rosebud, which is a platform that’s more like a journaling service, uh, you know, they say they’re HIPAA compliant, but there’s no one regulating them. It’s not like the American Medical Association is regulating them. 

So, that data, you don’t really know where it’s going. You’re trusting these companies, who are profit driven. You know, I mean, hopefully Woebot and Rosebud are following their own promises to their consumers. But there might be other companies that aren’t, and ChatGPT is definitely not, you know, promising that it’s HIPAA compliant. And, you know, that information is being used, is being put out there to retrain the model. And so, you know, hopefully they’re not gonna sell your data to advertisers. 

You know, also, I mean, kind of a worst-case scenario, and this fortunately hasn’t happened yet, but, you know, what if your mental health information gets out there, an insurance company gets wind of it, and your premiums start going up because they know that you’re struggling with something. 

Morgan Sung: Oh wow. 

Lesley McClurg: So, you know, again, that hasn’t happened yet. Those are sort of the worst-case scenarios. 

Morgan Sung: But again, worst-case scenarios. Right.  

Lesley McClurg: Exactly. 

Morgan Sung: Yeah. Obviously, the priority of pretty much any for-profit company is to monetize. But do AI companies have any incentive to improve as more people turn to their products for therapy, even if they aren’t necessarily mental health-specific chatbots? Um, you know, are there better safety measures, more transparency about data collection, especially given the Character AI lawsuit? 

Lesley McClurg: I think they have that incentive. They also have the incentive to keep you hooked. So I think that’s the sort of fine line. We’ve seen that with all social media, right? They’re getting a lot better at keeping our attention. AI companies have the same needs and incentives to keep people coming back. And so, you know, I think it’s gonna be a gray area, and unfortunately, like with the social media companies, it’s gonna be really up to the creators of these products whether or not they’re gonna have a really ethical orientation. 

Morgan Sung: Despite all of these issues, therapy is so inaccessible that unfortunately, AI chatbots might feel like the only immediate tool that people have when seeking treatment. How did we get here? Let’s open one last tab.  The mental healthcare crisis. 

Morgan Sung: You had mentioned earlier the state of mental health care. Why is it so hard to see a therapist? 

Lesley McClurg: Yeah, I mean, the demand for mental health services is really at an all-time high, and it’s surged even more, you know, since the pandemic began and continues to do so. I think something like one in five Americans has some kind of mental health issue, and yet they face significant barriers to getting to a therapist. You know, I think in about 55% of counties, people don’t have access to a psychotherapist or a social worker or a psychologist. There just aren’t any in that area. 

And so, you know, I think because of these sorts of mental health deserts, AI is a kind of natural fill-in. You know, it’s available 24-7. You don’t need insurance to get there. You don’t have a high deductible. Uh, you don’t have to prove to anyone, you know, that you have a mental health condition. 

Morgan Sung: Yeah. 

Lesley McClurg: You don’t have to get accepted. Uh, so it’s easy and accessible. And I think that will mean that more and more people are going to use this, and hopefully they’ll be well-informed consumers. 

Morgan Sung: Yeah. You know, given the shortage of providers and, like you mentioned, insurance issues. Um, since the pandemic started, telehealth therapy has become pretty popular. But I’ve seen a lot of complaints about these kinds of quick, one-size-fits-all mental health care platforms like BetterHelp, which matches users with licensed therapists, or Cerebral, which sets users up with a psychiatrist who can prescribe medications like antidepressants or ADHD meds. Both of these services were created to kind of fill this void that you’re talking about, but at the same time, they’re kind of plagued with their own issues. It seems like making therapy quick and accessible isn’t always as easy as it seems. What do you think? 

Lesley McClurg: I think there’s absolutely a role for telehealth. I think there’s absolutely a role for AI therapy. I think anyone would probably say that having a really heartfelt connection with a therapist in an office, a live human, feels different than if you are talking to a screen. And the emotional repair that can happen in that session with a live human, I think, is different and potentially more profound than with a robot. That might change over time. You know, I don’t know how good these things are going to get. They already feel a little bit too good for my own comfort. 

Morgan Sung: Yeah. 

Lesley McClurg: Uh, but they might, they might get even better. You know, same thing, I think the telehealth model at this point is pretty early. I think they are still refining how well those things work. I think it’s similar with AI therapy. And, you know, I think the tricky thing here as well, with any of these technological solutions, is that we are also living in a pretty isolated way in our lives right now. If you’re taking even, like, your therapy to a computer, that’s one less human that you’re interacting with. And maybe your, you know, mental health issues are because you’re dealing with isolation, with estrangement, with disconnection. Those feelings might even become more escalated if you’re, you know, using telehealth or using AI therapy. 

I saw both responses and reflections when I was reading these Reddit threads, you know, from people who were in rural places. Some said they were feeling more isolated using AI therapy, and others said, “you know, it was a godsend because I was so alone, at least someone was listening to me.” 

Morgan Sung: For your story on AI therapy, you talked to a bunch of psychologists, you know, real-life human psychologists, um, and, you know, someone from the American Psychological Association. Are human therapists concerned about being replaced by AI? 

Lesley McClurg: I don’t hear that from them yet. Number one, they’re still really in high demand. 

Lesley McClurg: So I don’t think they’re feeling that crunch yet. 

Morgan Sung: There’s still a shortage, right? 

Lesley McClurg: There’s still a huge shortage. And I think they’re, they’re fairly confident that what they offer is different than what AI therapy offers. And, you know, they can pick up on subtle cues that AI, you know, can’t, say, like, body language or, you know, pace of speech. These things can reveal a lot about our mental health state, and AI can’t pick up on that stuff. And in the deeper bonds, the deeper attachment work that you might do in therapy, I think therapists are quite confident that they’re still better at that. Uh, so in this moment, I would say they’re not, they’re not especially worried. 

Morgan Sung: We’ve talked about the downsides of AI therapy pretty extensively. Um but, you had mentioned like that they can kind of be a tool in a bigger treatment plan while also seeing a real therapist. If someone is going to use AI therapy, how should they approach it? 

Lesley McClurg: Yeah. Yeah. I think, that’s the message I hope comes across in my reporting, is that, you know, there are these worst case scenarios. Again, I think that the consumer should be educated on how their data is going to be used and understand how the company operates so that they’re not sharing uh really vulnerable information. But I think as a sort of, you know, addition to your yoga, your meditation, your, uh, you know,  walks in nature, I think AI can really be a self-regulation tool. And I think it can be used quite well. 

You know, I talked to one company, Rosebud, which is a kind of journaling platform. It asks you questions to kind of inspire you to express whatever’s going on and help you reflect. And it can follow a thread. So if you mentioned something two weeks ago about your relationship and what was going wrong, it will check in with you about what is happening and help you make sense of that. And I was on it. You know, I’m not a huge pen-and-paper person. You know, I don’t write anything by hand anymore, so my arm hurts really quickly. And so, you know, I would just pick up my phone and journal, just, you know, talking to it, and it would ask me questions, and it felt, you know, fairly similar to a conversation with a friend. And I would always feel quite a bit better afterwards. 

So, in that sense, I think it can be quite helpful because, you know, maybe I’m in therapy once a week, but I’m having a panic attack on Monday night and my, you know, appointment is not until Thursday. I think in that sense, you know, it’s four o’clock in the morning. I can’t call a human therapist no matter what, even if I do have one. You know, to sit down and have the opportunity to have something that’s engaging me, um, I think can be really helpful. 

Morgan Sung: I’m really curious, since you started reporting on this story, have you used ChatGPT, uh, not necessarily as a therapist, but you know, as this kind of mental health tool that you’re talking about since? 

Lesley McClurg: I wish I had the positive spin to be like, “yes, I’m relying on it all the time.” You know, I didn’t and I don’t. Um, it felt a little bit like one more thing to do. 

Morgan Sung: Right. 

Lesley McClurg: And I felt similarly. You know, we talked about Lilly at the beginning of the story, and the reason that she stopped using Woebot was not because, you know, it had the crisis alert or it sort of poorly dealt with her OCD; it was that she got tired of being on her phone. She was like, “I didn’t want to be on my phone anymore. I wanted to talk to someone.” 

Morgan Sung: Yeah. 

Lesley McClurg: And I feel that. You know, that was, that was, kind of my reasoning, you know, because of my job, I’m on a computer, you know, nearly all day long, and I didn’t want one more thing on the computer or one more thing on my phone. I can imagine, you know, if I was going through a really tough time again, you know, turning to it. Um, luckily, I’m in a bit of a good moment, so I haven’t been using it. 

Morgan Sung: Yeah. You can unplug now. 

Lesley McClurg: Exactly. I’m going to enjoy this moment and ride the wave of goodness. 

Morgan Sung: Thanks again to KQED’s Lesley McClurg for talking with us about this story. You can check out more of her reporting on health care, including this story on AI therapy, at KQED.org. Again, AI therapy tools work best when they’re used in addition to treatment under a licensed professional. But if it’s the only option accessible to you right now, there are AI tools specifically designed for mental health and wellness that might be more useful than general chatbots like ChatGPT or Claude. For now, let’s close these tabs. 

Morgan Sung: Close All Tabs is a production of KQED Studios and is reported and hosted by me, Morgan Sung. Our producer is Maya Cueva. Chris Egusa is our Senior Editor. Jen Chien is KQED’s Director of Podcasts and helps edit the show. Sound design by Maya Cueva, Chris Egusa, and Brendan Willard. Original music by Chris Egusa. Additional music by APM. Mixing and mastering by Brendan Willard. 

Audience engagement support from Maha Sanad and Alana Walker. Katie Sprenger is our Podcast Operations Manager, and Holly Kernan is our Chief Content Officer. Support for this program comes from Birong Hu and supporters of the KQED Studios Fund. Some members of the KQED podcast team are represented by the Screen Actors Guild, American Federation of Television and Radio Artists, San Francisco, Northern California Local. 

Keyboard sounds were recorded on my purple and pink Dustsilver K-84 wired mechanical keyboard with Gateron Red switches. If you have feedback, or a topic you think we should cover, hit us up at CloseAllTabs@kqed.org. Follow us on Instagram at “close all tabs pod.” Or drop it on Discord; we’re in the Close All Tabs channel at discord.gg/KQED.

And if you’re enjoying the show, give us a rating on Apple Podcasts or whatever platform you use. Thanks for listening. 

 
