Roblox is one of the most popular gaming platforms for kids, with millions of young gamers playing user-created games. It’s also been heavily criticized for its track record on child safety, and is now facing more than 80 lawsuits alleging child abuse and grooming. In response, the company recently rolled out a new safety measure: AI-powered facial age verification that restricts who players can talk with. The reception from players has been anything but warm.
In this episode, host Morgan Sung is joined by youth mental health reporter Rachel Hale, who explains how predators operate on the platform, why everyone seems to hate Roblox’s new AI age verification feature, and the lengths some users will go to in order to get around it. And while Roblox says age verification is about improving safety, questions have emerged about the system’s accuracy, its privacy implications, and what this move means for the broader push for age verification across the internet.
Guest:
- Rachel Hale, youth mental health reporter at USA Today
Further Reading/Listening:
- I got an up-close look at Roblox’s new safety feature. Here’s what I found. — Rachel Hale, USA Today
- She just wanted to play Roblox with friends. Then the messages from a predator began. — Rachel Hale, USA Today
- Can social media age verification really protect kids? — Rina Chandran, Rest of World
- Roblox’s age verification system is reportedly a trainwreck — Will Shanklin, Engadget
Want to give us feedback on the show? Shoot us an email at CloseAllTabs@KQED.org
Follow us on Instagram and TikTok

