ChatGPT Will Soon Allow Adults to Generate Erotica. Is This the Future We Want?

OpenAI CEO Sam Altman speaks at OpenAI DevDay, the company's annual conference for developers, in San Francisco, California, on Oct. 6, 2025. OpenAI’s announcement this week that erotic content will soon be available to adults reflects a growing trend. Some researchers and Bay Area politicians are worried about the effects. (Benjamin Legendre/AFP via Getty Images)

OpenAI isn’t the first developer to announce plans to offer erotic content on its chatbot. But the blowback against the tech company’s decision to loosen restrictions this week has been bigger, given the San Francisco-based company’s promise to ensure its AI benefits all of humanity.

The most significant change will roll out in December, when OpenAI introduces more comprehensive age-gating that will allow verified adults to generate erotic content using the tool — “as part of our ‘treat adult users like adults’ principle,” OpenAI CEO Sam Altman posted Tuesday on the social media platform X.

Consumer advocates say OpenAI is following the lead of xAI’s Grok, which offers loosely moderated “adult” modes with minimal age verification, raising concerns that teenage users may have access to explicit content. Meta AI is believed to be following xAI’s lead as well, and conflicting reports over whether it is intentionally pushing mature content to minors have prompted U.S. Sen. Josh Hawley, R-Missouri, to investigate.

“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue, we wanted to get this right,” Altman wrote.

The announcement came less than two months after the company was sued by the parents of Adam Raine, a teenager who died by suicide earlier this year, for ChatGPT allegedly providing him with specific advice on how to kill himself — setting off a firestorm of news coverage and comment.

The OpenAI ChatGPT logo. (Jaap Arriens/NurPhoto via Getty Images)

Altman delivered a follow-up on Wednesday. “We will still not allow things that cause harm to others, and we will treat users who are having mental health crises very different from users who are not … But we are not the elected moral police of the world. In the same way that society differentiates other appropriate boundaries (R-rated movies, for example), we want to do a similar thing here,” Altman wrote, although it remains unclear whether OpenAI will extend erotica to its AI voice, image and video generation tools.

“Comparing content moderation of chatbot interactions with movie ratings is not really useful,” wrote Irina Raicu, director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University. “It downplays both the nature and the extent of the problems that we’re seeing when people get more and more dependent on and influenced by chatbot ‘relationships.’”

Mark Cuban, the entrepreneur, investor and media personality, argued much the same in a string of posts on X.

“I don’t see how OpenAI can age-gate successfully enough. I’m also not sure that it can’t psychologically damage young adults. We just don’t know yet how addictive LLMs can be. Which, in my OPINION, means that parents and schools, that would otherwise want to use ChatGPT because of its current ubiquity, will decide not to use it,” Cuban wrote.

Others see the drive for paying subscribers and increased profit behind the move. As a private company, OpenAI does not release its shareholder reports publicly. However, Bloomberg recently reported that OpenAI has completed a deal to help employees sell shares in the company at a $500 billion valuation. According to Altman, ChatGPT is already used by 800 million weekly active users. With so much investment at stake, OpenAI is under pressure to grow its subscriber base. The company has also raised billions of dollars for a historic infrastructure buildout, an investment OpenAI eventually needs to pay back.

“It is no secret that sexual content is one of the most popular and lucrative aspects of the internet,” wrote Jennifer King, a privacy and data policy fellow at the Stanford University Institute for Human-Centered Artificial Intelligence. She noted that nearly 20 U.S. states have passed laws requiring age verification for online adult content sites.

“By openly embracing business models that allow access to adult content, mainstream providers like OpenAI will face the burden of demonstrating that they have robust methods for excluding children under 18 and potentially adults under the age of 21,” King said.

AI chatbots appear to be going the way of social media, said California Assemblymember Rebecca Bauer-Kahan, D-San Ramon, whose bill to require child safety guardrails for companion chatbots was vetoed earlier this week.

Assemblymember Rebecca Bauer-Kahan on Feb. 18, 2020. (Eli Walsh/Bay City News)

“My fear is that we are on a path to creating the next, frankly, more addictive, more harmful version of social media for our children,” Bauer-Kahan told KQED. “I do not think that the addictive features in these chatbots that result in our children having relationships with a chatbot instead of their fellow humans is a positive thing, and the experts confirm that.”

OpenAI did not comment for this story, but the company has written that it’s working on an under-18 version of ChatGPT, which will redirect minors to age-appropriate content. A couple of weeks ago, OpenAI announced it’s rolling out safety features for minors, including an age prediction system and a way for parents to control their teens’ ChatGPT accounts. This week, OpenAI announced the formation of an expert council of mental health professionals to advise the company on well-being and AI.

In mid-September, the Federal Trade Commission launched an inquiry into seven AI chatbot developers, including xAI, Meta and OpenAI, “seeking information on how these firms measure, test, and monitor potentially negative impacts of this technology on children and teens.”

For the most part, a couple of dozen states and their attorneys general have taken the lead on regulation, enacting age-verification measures that require many online platforms to verify users’ identities before granting access. East Bay Assemblymember Buffy Wicks won the support of major tech companies for her measure, AB 1043, which was just signed into law by Gov. Gavin Newsom.

But any parent knows it’s easy for children to sidestep those controls, or to enlist older siblings or friends to help them, Bauer-Kahan said. She also pointed to the timing: the veto of her toughest bill was announced on Monday, and Altman’s announcement was posted on Tuesday.

“Here was a bill that was really requiring very clear, safe-by-design AI for children with real liability. And I think that was further than the industry wanted California to go. I just found the timing of the veto and then this announcement about access to erotica too coincidental not to call out,” she said.
