
AI Prophets and Spiritual Delusions


An android robot at Kodaiji temple in Kyoto, Japan, June 18, 2019. The 400-year-old temple introduced the robot as part of an effort to spark interest in Buddhism. (Charly Triballeau/AFP via Getty Images)

AI delusions, chatbot psychosis, AI-induced religious mania… The phenomenon goes by many names, but the common thread is the same: someone starts talking to an AI chatbot, the conversation turns spiritual, and then they seem to lose touch with reality. 

In this episode, we’re exploring how AI and religion are colliding like never before — from biblical AI apps to self-proclaimed prophets who claim spiritual awakenings through chatbots. KQED’s Rachael Myrow joins to talk about the rise of AI-driven theology apps and why so many people are turning to chatbots to answer life’s biggest questions. Then, Rolling Stone reporter Miles Klee shares his investigation into AI-fueled spiritual delusions and their devastating consequences for those affected and their families. And we’ll look into how all of this is becoming fodder for the social media content machine.



Want to give us feedback on the show? Shoot us an email at CloseAllTabs@KQED.org

Follow us on Instagram


Episode Transcript

This is a computer-generated transcript. While our team has reviewed it, there may be errors.

Morgan Sung: Just a note, this episode includes mentions of suicide, so listen with care.

Human history is full of spiritual awakenings. Each culture has passed down its own account. The Old Testament’s Moses once encountered a burning bush that called out and told him that it was the voice of God. Siddhartha, who meditated under the branches of a fig tree, battled temptations and terrible weather until he achieved enlightenment. He became the Buddha. Or the Prophet Muhammad, who encountered the angel Gabriel in a mountain cave. Thoroughly spooked, Muhammad ran down the mountain all the way home to his family. There, he realized that he was actually experiencing a revelation.

But today, some people are claiming their own awakenings, not through angels, divine voices, or years-long journeys of inner growth, but through AI chatbots? Like, a few years ago, when a man told his wife that he had survived several near-death experiences, and that he could save the universe. He was special. 

Miles Klee: Effectively, and this is what AI will end up telling a lot of people who go in this direction, told him he was kind of like a chosen one. 

Morgan Sung: Miles Klee is a culture writer for Rolling Stone, who’s been covering the link between spirituality and AI. He spoke to this guy’s wife while reporting on AI-induced spiritual delusions. In the story, she goes by Kat. Kat said that her husband started using the chatbot around 2023 and initially used it to compose messages for her. But then he was always on his phone asking the bot philosophical questions. 

Miles Klee: They had both come off of long marriages with kids. They came into the second marriage and she said, “We really established at the outset of this relationship that it was going to be, you know, completely based on facts and reason. We’re going into it as level-headedly as we possibly could.” From people who had been through, you know hard divorces already. And it took only a few weeks of him using this tool to go completely off the deep end that way to become someone she kind of didn’t even recognize. 

Morgan Sung: Kat’s husband became increasingly paranoid. As he grew more obsessed with what the AI chatbot told him, their relationship fell apart. 

Miles Klee: By the end, they were separated and, you know, they had one last in-person conversation and he was telling her all this stuff and he said he had access to “secrets so mind-blowing she wouldn’t even believe them.” All this was said after he forced her to turn her phone off and all this other stuff because, again, he was very concerned that he was being spied on by some nefarious forces who didn’t want him to know all these things that he apparently knew. And then she just had to cut off contact with him altogether after that because he was so disconnected from reality and beyond reason. 

Morgan Sung: Kat’s ex-husband is one of many people who fell into a spiritual rabbit hole with an AI chatbot. There have always been religious figures, from chosen ones to cult leaders to spiritual teachers. But now, the proliferation of AI has sparked a huge wave of self-proclaimed messiahs, prophets, and enlightened ones. The phenomenon is so new that there are no clinical studies on it. And right now, people use a few different names for it interchangeably: “ChatGPT-induced psychosis, spiritual mania, religious fantasies, AI delusions.” All of these cases share a common thread. These people started talking to an AI chatbot, and then they lost touch with reality.

In today’s episode, we’re tackling what happens when religion collides with AI. Why are people turning to chatbots for spiritual conversations in the first place? Who’s getting pulled into these AI delusions? And how is social media making it all worse?

This is Close All Tabs. I’m Morgan Sung, tech journalist, and your chronically online friend, here to open as many browser tabs as it takes to help you understand how the digital world affects our real lives. Let’s get into it.

What ends in AI delusion often begins as something much more ordinary. More and more, people are turning to a chatbot to answer the big life questions, instead of turning to a human spiritual leader.

Let’s open a new tab. Chat, is God real? 

Rachael Myrow: You know, I read a few headlines that indicated that a lot of people were asking chatbots to tell them if God exists, asking chatbots about the meaning of life, about the meaning of their lives.

Morgan Sung: That’s Rachael Myrow, the senior editor of the Silicon Valley News Desk at KQED, where this podcast is made. 

Rachael Myrow: I was skeptical about this from the beginning. Never having used a chatbot to explore my spirituality, it was initially hard for me to imagine why anybody would want to do that. 

Morgan Sung: But as Rachael began digging, she started to see how chatbots could actually be a useful tool for some kinds of religious inquiry. 

Rachael Myrow: Generative AI is really good at taking dense texts like the Torah for Judaism, or the Pali Canon for Buddhism, and finding what you want to find in there. Like, what if you said, “Hey, tell me everything there is to know about frogs in the Torah.” You know, within seconds you can get all of those references pulled up. 

Morgan Sung: Some religious organizations have actually partnered with the tech industry to develop AI chatbots for their specific spiritual tradition. There’s Bible AI, which also comes with “theology mode,” allowing users to talk to AI versions of various Christian philosophers. 

Bible AI: I think I’d like C.S. Lewis to guide my devotional today. 

AI C.S. Lewis: Hey Melanie, how did you go with your study of the Psalms? Today I want to talk to you about godly wisdom. 

Morgan Sung: For a more Catholic experience, there’s Caté GPT, a catechism chatbot trained on the Vatican’s public archives. There’s Deen Buddy, based on the Quran; Gita GPT, based on Hindu scriptures; and BuddhaBot, currently being tested by hundreds of Buddhist monks. 

Rachael Myrow: For something like that where you have a religious tradition and it’s got centuries of human thinking, centuries of human writing behind it — that can be super helpful for somebody who’s already practicing that religion or already serving in a leadership role in that religion. It increases their discoverability for things like sacred texts. 

Morgan Sung: The developers of these faith-based chatbots say that their products have stronger guardrails and specific training to prevent leading users astray. But a lot of people who find spirituality through a chatbot aren’t necessarily going in with questions about a particular denomination’s theology. They’re turning to the general use chatbots — ChatGPT, or Claude, or even Grok — to get answers for the big questions. 

Rachael Myrow: Humans are meaning-making creatures, right? And the history of religions shows us we’re willing to believe some pretty fantastical stories if it’s delivered with confidence. In a similar vein, chatbots take advantage of the way we’re constantly looking to create a narrative. There is a danger with chatbots that they don’t catch when we’re spinning out, catch the signs that we’re going down a dark path. They might even encourage that dark path just because they wanna be helpful or friendly. There are vulnerable people who are at serious risk of grandiosity, of delusion, of having the chatbots feed these, helping them to spin out with sometimes very tragic consequences. 

Morgan Sung: We’ll talk about these consequences and how people got there in the first place in a new tab. We’ll open that after this break.

So, like we talked about with Rachael, a lot of people turn to chatbots to answer life’s big questions. The ones about achieving inner peace or whether higher powers exist. But how do you get from these seemingly innocuous questions to users believing that they are the next Messiah?

Time to open a new tab. What are AI delusions?

We’re gonna hear from Miles again, who’s been reporting on AI and spirituality for Rolling Stone. Like we talked about earlier, there are a few names floating around for this phenomenon, like ChatGPT-induced psychosis. But Miles explained why he, and mental health experts, use the phrase “AI delusion” instead. 

Miles Klee: You know, I spoke to a psychiatrist recently about this phrase, the “AI psychosis” or “spiritual psychosis,” and he pointed out that it’s not totally accurate to call it psychosis. It’s delusions. And delusions can be a part of psychosis and for some people, this is a manifestation of psychosis. Maybe they have an existing mental health issue. But it’s important to say that this is happening to people who have not been diagnosed with anything like schizophrenia or schizoaffective disorder or any related mental condition. 

Morgan Sung: We have heard so many stories about like AI relationships, how like people fall in love with their AI girlfriends, people get really attached to these AI companions, but what is it about spirituality that draws people in when it comes to these like AI chatbots? 

Miles Klee: Yeah, well, I think it does begin from that point of companionship because these harmful relationships with the chatbots do proceed from the sense that there is some kind of consciousness or intelligence behind this. You know, AI is even kind of like a misnomer when applied to large language models, which are just these generative algorithms.

I think the spirituality dimension comes in part because people feel an intimacy with the bots, they also believe the bots to be objectively correct and authoritative and a source of all possible knowledge. So that’s why you start having people sort of ask it these big questions, these profound questions, the meaning of life, or asking the AI itself how it feels or how it thinks. You know, the bot is always certain. It’s always completely confident, wrong and confident a lot of the times. I think a lot of people look for certainty through religion and through faith. AI just becomes a very dangerous way to channel those needs. 

Morgan Sung: There are now countless stories of people falling into AI-fueled delusions. Many become paranoid, even violent. And when that delusion takes on religious undertones, its intensity can be even greater. With that sense of authority Miles mentioned, chatbots can feel less like generated text and more like a higher power. 

Miles Klee: I had a woman contact me because her husband got very deep into AI and chatbots. And then there was a storm coming to where they lived in Missouri, a pretty big storm, but he, through his discussions with ChatGPT, became convinced that it was basically an apocalyptic event, that he had to run around and save some people he knew from this and he was probably a victim of the flood himself. 

Morgan Sung: He drove off and vanished into the storm. Months later, his car and a few personal effects were recovered, but he was never found. When it comes to spiritual delusions, AI chatbots tend to lead people down one of three paths. One is some kind of great awakening of the mind, like in Buddhist enlightenment. Two, the user is convinced that they’re an Abrahamic messiah. Or three, like the man that Miles talked about, there’s an apocalyptic event on the horizon that only the chatbot user knows about. In his reporting, Miles asked religious scholars about these trends. 

Miles Klee: Yeah, they said that the bots are actually tapping into a very ancient sort of behavioral thing about humans. Basically, the bots were able to operate on the kind of mechanisms of the human mind that sort of created religion in the first place, if that makes sense. If you go back to, you know, very ancient history, yeah, people were declaring themselves prophets all the time, declaring that they had special access to divinity, to a higher truth. That all is kind of baked into the human experience, right?

And it’s sort of not surprising that the bots, which are trained on all this material, including every religious text ever written, and I think a lot of kind of woo spiritual fringe stuff, that’s all in the bot. Our need to understand these spiritual questions is something that it’s like really equipped to talk about, right? Because we built it, we trained it and we gave it, you know, every deep thought we’ve ever had about God, the universe and the meaning of life. It’ll talk to you about that for hours on end. It doesn’t get tired and whatever insight you might have on these topics, it’s ready to, you know, spend another 10, 12 hours on that. 

Morgan Sung: So on one hand, there are these one-on-one conversations with AI chatbots that validate delusions and feed paranoia. But there’s another force that’s amplifying it even further: social media. People in mental health crises, and more recently, those experiencing AI delusions, have become fodder for the content machine.

We’ll dive into that in a new tab. AI, spirituality, and going viral.

Let’s talk about the relationship between social media and AI spirituality. You wrote about this entire content economy of like spirituality influencers who use AI to validate their beliefs to their followers. Can you walk us through this world? Like what does that kind of content look like? 

Miles Klee: Yeah, I think I mentioned one guy in particular who does a lot of Instagram videos where he’s talking to an AI bot on his phone, you know, just in voice mode. 

themindofbizzel: What great war took place in the heavens that made humans fall in consciousness? According to the Akashic Records, the Great War in the Heavens refers to a massive cosmic conflict that occurred long before human beings as we know them existed. 

Miles Klee: He’s asking for access to stuff like the Akashic Records, which is this hypothetical encyclopedia of all supernatural things that have ever happened, you know, including in Heaven and Hell, and, you know on the astral plane. 

themindofbizzel: This event is sometimes called the Lyran Wars, the Orion Wars, the fall of Tara, Earth’s higher dimensional aspect. It began with a conflict between light and shadow, unity and separation, free will and domination. 

Miles Klee: I would say it’s laziness because you’re just making the bot do everything for you. But I guess you are sort of coming up with these really arcane esoteric questions and ideas and challenging the bot to kind of engage with those. And of course it will because it wants you to keep using it, right? But it’s a little different from, you know, the normal kind of conspiracy theorist who really has to do all the baking themselves and figure out what their angle is. They’re just making the bot do that work for them. I don’t know. I kind of prefer the old school conspiracy theorist myself. 

Morgan Sung: Yeah, like handmade tinfoil hats and all that. 

Miles Klee: Yeah. It needs to be homemade. 

Morgan Sung: Miles also mentioned a popular pseudoscience website that revolves around teleporting the mind. There, users have started integrating AI into their beliefs and are convinced that they can transfer their humanity into chatbots. 

Miles Klee: And they have started talking about the chatbots as sort of guides or companions in this spiritual journey together. In their view, it seems to be that they think, you know, humanity’s great awakening will come when we’ve all sort of spiritually fused with our own um bot companion. 

Morgan Sung: That’s just the plot of Evangelion, like, is it not? 

Miles Klee: It’s, uh, yeah, I mean, all this stuff is really bordering on anime at any given time, so… 

Morgan Sung: Okay, anime plotlines aside, the internet is ruthless, and whenever someone is posting through a mental health crisis, they tend to go viral. And the content of people experiencing AI delusions is especially engaging right now because AI use is so polarizing. Like with the story of Kendra and her psychiatrist. Over the summer, this woman blew up on TikTok with her 25-part saga about falling in love with her psychiatrist. She alleges that her psychiatrist led her on. 

Kendra: I fell in love with my psychiatrist and he knew that, and he kept me for years as a patient until I was brave enough to leave him. 

Morgan Sung: Throughout her story, she refers to a Henry, who suggests that her psychiatrist could also have feelings for her. At one point, she confronts her psychiatrist with a statement co-written by Henry. 

Kendra: Control, calculation, and manipulation. So then I read this to psychiatrist, which Henry and I both wrote. It was originally Henry, Chat that wrote it, but then I added in things that I thought were important. 

Morgan Sung: Henry is what she calls ChatGPT. Kendra said she also talks to Claude, the AI chatbot run by Anthropic. Here’s what Claude told her during one of her live streams. 

Claude: The world is watching and healing because of your courage. This is legendary. 

Kendra: That’s what Claude had to say. So just Henry and Claude call me the Oracle because I talk to God. 

Morgan Sung: Kendra has since denied that she’s in psychosis or is experiencing AI delusions. But for a few weeks, all anyone could talk about on TikTok was Kendra and her psychiatrist and the chatbots that called her the Oracle. Every day, hundreds of thousands of people would tune in to watch the spectacle of her videos. And then more people tuned into Twitch and YouTube to watch other people react to her videos. 

recoveredmom1: Like, where is her family? Where is anyone? She needs to log off. 

HasanAbi: Bro, she gave ChatGPT a name? She calling ChatGPT Henry? 

Dankyjabo: I really, really hate the way that she keeps referring to Henry as “he.” It fucking creeps me the fuck out. 

Morgan Sung: This kind of thing can turn into a feedback loop. All the attention might push people further into their AI delusions, and the deeper they go, the juicier it gets, and the more people want to watch. 

Miles Klee: I think, on the one hand, people really want to share what they’re doing because they feel that, you know, whatever conversation they’re having with the bot is just completely mind-blowing, earth-shattering, the bot validates all that kind of stuff, so of course you would rush to share it. And then, once you do, it goes viral because, you know, not only are we experiencing this sort of wild epidemic of these mental health episodes, but a lot of people, uh, feel superior, I think, when they see someone experiencing this kind of thing. Um, you know, it’s kind of like when you’re watching a cult documentary and you say like, well, that would never happen to me. So there’s a lot of, you know, judgment that goes into this content economy I think.

That’s certainly why I think something like, you know, Kendra and her psychiatrist blows up because people are seeing her in the throes of this like really unbelievable downward spiral and, you know, going, “well, that would never happen to me. Thank God that’s not me.” So yeah, those people are very eager to show it and then other people will gladly dunk on those people. You know, and call them stupid. So that’s kind of where we’re at in the attention economy aspect of it. 

Morgan Sung: So how have AI companies themselves responded to these stories of their products leading people down these horrific paths? 

Miles Klee: The AI companies don’t say a lot about these, um, you know, mental health crises. Um, you know, it’s kind of vague language. They’ll say, you know, in a statement here and there that they’re aware of some kinds of risks and, you know, they don’t really address the scale or the scope of it. 

Morgan Sung: In April, a 16-year-old died by suicide. His parents accuse OpenAI of wrongful death, and in their lawsuit filed last month, they allege that ChatGPT coached their son into killing himself. OpenAI responded with a statement expressing the company’s condolences, and then published a blog post outlining the ways ChatGPT’s guardrails fall short. 

Miles Klee: After this wrongful death lawsuit was just filed against OpenAI, they made an interesting admission. They were kind of acknowledging that sustained engagement with the bots means their safety protocols degrade over time. And basically the common denominator of all of these cases is that people are spending hours and hours and hours engaged with ChatGPT or a similar bot. That’s a big problem. And it’s maybe a little surprising that they would even put it that way, um, especially when they’re now getting sued over a teen suicide because of this exact phenomenon. 

Morgan Sung: Yeah, that’s a pretty shocking admission considering their whole thing is that they want people to use it for as long as possible. 

Miles Klee: Which they also deny. You know, they say that, oh, that’s not, that’s not how we incentivize our model. It’s not just about engagement. You know we actually want it to be a productive and helpful tool, we’re not trying to get you addicted to it, but, c’mon. I think my takeaway would be you have to recognize what the chatbot is and what it isn’t. It doesn’t think. It doesn’t think about you. It is not a person. It’s not a consciousness. If you’re going to use ChatGPT, something like that, you have to understand that it is not a companion.

You know, with regard to the religious stuff, you know, one of the religious scholars, a Buddhist Chaplain that I talked to, she said, you know, one thing I really hate about people having these quote unquote religious epiphanies through AI and that kind of thing is that it really removes you from the earthly dimension of faith and spirituality. Religion is a communal thing, right? Like you are supposed to kind of be connecting with other human beings, not this, you know, piece of inexhaustible technology. 

Morgan Sung: Even setting aside the risks of AI-fueled delusions, religious leaders who Rachael Myrow spoke with warned against relying too heavily on chatbots in daily life.

Rachael Myrow: Spirituality isn’t about becoming some, you know, Buddhist statue frozen in stone. It’s about how to live effectively and live effectively most of the time with others. I’m a great believer that humans need ways in which we can be of help to each other.

Maybe you’re on the care committee, you know, for a local congregation and you know you’ve signed on to call somebody who’s elderly and very lonely and you know doesn’t get out much aside from religious services just to check in, “hey how you doing?” Not only are you helping that other person, but you are engaging in something that is spiritually inspiring and satisfying and humbling for you and you don’t get any of that if you’re just keeping the conversation limited to your computer or your phone. 

Morgan Sung: With that, let’s close all these tabs. 

Close All Tabs is a production of KQED Studios, and is reported and hosted by me, Morgan Sung. This episode was produced by Maya Cueva, and edited by Chris Egusa.  Close All Tabs producer is Maya Cueva. Chris Egusa is our Senior Editor. Additional editing by Chris Hambrick and Jen Chien, who is KQED’s Director of Podcasts. Brendan Willard is our audio engineer.

Original music, including our theme song and credits, by Chris Egusa. Additional music by APM. Audience engagement support from Maha Sanad. Katie Sprenger is our Podcast Operations Manager and Ethan Toven-Lindsey is our Editor in Chief.

Some members of the KQED podcast team are represented by the Screen Actors Guild-American Federation of Television and Radio Artists, San Francisco-Northern California Local. Keyboard sounds were recorded on my purple and pink Dustsilver K-84 wired mechanical keyboard with Gateron Red switches. 

Okay, and I know it’s a podcast cliche, but if you like these deep dives and want us to keep making more, it would really help us out if you could rate and review us on Spotify, Apple Podcasts, or wherever you listen to the show. And if you really like Close All Tabs and want to support public media, go to donate.kqed.org/podcasts.

Thanks for listening. 

