Deepfake Videos Just Got More Realistic…and More Dangerous

 (NurPhoto/Getty Images)

Airdate: Wednesday, October 8 at 9AM

AI video creation software is advancing rapidly, and some of its output is alarming. OpenAI’s Sora, currently the most downloaded app in the App Store, allows users to create strikingly realistic deepfake videos with minimal effort. One viral example? A fake video of OpenAI CEO Sam Altman shoplifting in a department store. With technology this convincing, how can we trust what we see online? And what kind of destabilizing impact could this have on our society?

Guests:

Max Read, journalist, screenwriter, and former editor at Gawker & Select All

Alice Marwick, director of research, Data & Society

Jason Koebler, co-founder, 404 Media


This partial transcript was computer-generated. While our team has reviewed it, there may be errors.

Alexis Madrigal: Welcome to Forum. I’m Alexis Madrigal.

Deepfake videos — that is, videos that look real but are synthesized by AI systems — have long troubled anyone who’s thought about the relationship between technology and society for more than about three minutes. In our world, video is the gold standard for evidence. It’s how we think we know something happened: seeing and hearing are believing.

But now, with just a few swipes on a phone, you can create a surveillance video of Sam Altman, the CEO of OpenAI, stealing stuff at Target — or SpongeBob SquarePants getting arrested, or a friend caught in a compromising position. What could go wrong?

While OpenAI says it’s put guardrails in place to protect people from misuse, new reporting by 404 Media suggests at least one of those measures has already been broken. And even if some of these videos are janky now, this new level of realism suggests we’re well on our way to a world where real and fake videos are impossible to distinguish.

Here to discuss what that could mean: Max Read, journalist, screenwriter, and former editor at Gawker and Select All. Welcome, Max.

Max Read: Thanks for having me.

Alexis Madrigal: Also joining us, Jason Koebler, co-founder of 404 Media. Welcome.

Jason Koebler: Hey, happy to be here.

Alexis Madrigal: And Alice Marwick, director of research at Data & Society. Welcome.

Alice Marwick: Thanks for having me.

Alexis Madrigal: Jason, let’s start with you. Introduce us to Sora a little bit. Where does it fit? We know other companies have launched similar tools, and OpenAI’s had a video model for a while — but this feels like something new.

Jason Koebler: Yeah, I think the big change here is that it’s an app that allows you to scan your face into it. The first thing you do when you open the app is take a selfie video of your face — look left, right, up — and then say three random numbers, like “44, 85, 73.” That’s enough for the system to capture your likeness into its model.

Then you can put yourself into videos. Deepfakes have been around for quite some time, but it used to take a lot of effort to put yourself into an AI video, and the results were usually pretty bad. Now, in just a few seconds, you can tag yourself or your friends in videos that look really realistic — and, importantly, have synced audio.

That had been one of the big challenges before. I actually went to an AI film festival about a year ago, and even films that people had spent many hours on, using state-of-the-art tech at the time, were incredibly janky. The lip syncing was bad, the audio was bad, the movements were awkward. And now, with Sora, pretty much anyone can make something better than what I saw there — in about three seconds.

Alexis Madrigal: Yeah. Max, this is also a social app, right? Something you use with friends?

Max Read: Yeah, the interface is a lot like TikTok. When you open it, you get a feed of videos other people have posted. As Jason said, you upload what they call a “cameo” — a short video of yourself that allows you to appear as a character in other people’s AI-generated clips.

Depending on your settings, you can allow your friends — or anyone — to make videos with your likeness. That’s something Sam Altman and YouTuber Jake Paul have both done.

And I’ll admit, my first thought was: how can I prank my friends? How can I make them do silly TikTok dances or say ridiculous things? However skeptical we might feel about it, there’s this burst of creative energy when you first start making these short, funny videos — without having to do much more than type a sentence.

Alexis Madrigal: Yeah, there’s a part of me that’s just technically impressed. Over the last decade, I’ve watched video generators go from predicting that a moving train would keep moving across the frame — to this. On a purely technical level, it’s impressive.

Max Read: Totally. It reminds me of the first few months after ChatGPT or DALL·E 2 came out — those early frontier models that made the leap from bizarre and uncanny to something where the rough edges almost add to the charm.

Learning how the system works and how to prompt it for better videos is part of the fun, just like early adopters of any new social network — like Facebook or Twitter in their early years — figuring out what it’s good for and who the funny people are. It feels a lot like that late-2000s, early-2010s moment when new social networks kept popping up and people rushed in to explore.

Alexis Madrigal: Alice, you’re also a professor at UNC and co-director of the Center for Information, Technology, and Public Life. Back in 2017, you wrote a report called Media Manipulation and Disinformation Online. When you were writing that, did you imagine a day like this — when you could create realistic video at the push of a button?

Alice Marwick: No, not at all. These tools have progressed much faster than many of us predicted — which is part of what’s driven the AI hype cycle.

If you think about the amount of mayhem and mischief — and even harm — caused just by Photoshop, or by people spreading false information or memes online, that’s nothing compared to what we’ll see with tools like this.

It’s one thing to hear that someone was arrested for drunk driving or said something off-color — it’s another thing entirely to see what looks like video proof. We’ve always believed that “seeing is believing.” Even though we know digital media can be manipulated, there’s something very human about watching something and assuming it’s true.

Alexis Madrigal: Right — it’s hard to override what your brain’s evolved to trust.

Alice Marwick: Exactly. Ideally, we’d have regulation in place — some kind of check on how these tools develop. But tech companies argue that regulation stifles innovation, and right now we’re in a very anti-regulatory moment.

What worries me about Sora isn’t just the technology itself but the fact that it’s also a social network. And from what I’ve seen, they’re adding guardrails hastily — after launch — instead of learning from the history of social media.

When platforms like Facebook came out, everyone was excited about democratization and global connection. But we soon learned they require massive moderation and strong guidelines — and we still struggle with harassment and disinformation today. So if you’re launching a new social network, you should be learning from those lessons.

Alexis Madrigal: Yeah. Max, I want to talk about where OpenAI is as a company right now. Chris Beckey had a tweet that summed it up: “OpenAI in 2021: We want to cure brain cancer. OpenAI in 2025: We’re becoming brain cancer.” Why are they doing this?

Max Read: OpenAI’s a fascinating company. It started as a nonprofit dedicated to developing ethical superintelligence — but over time, as it became a leader in AI research and development, it evolved into a hybrid structure: a for-profit public benefit corporation under a nonprofit.

The real turning point was the success of ChatGPT. It created a clear path to revenue through subscriptions, but the scale of OpenAI’s growth — faster than any previous website or social network — has set it on a new trajectory. Instead of focusing on a few million paying customers, they’re now chasing billions of users, like Facebook or Google.

Sora could be an experiment in what an OpenAI-run social network might look like. If you can get people addicted to an endless video feed, you can insert ads between those videos — and make billions, just like Mark Zuckerberg did. So this could be the other side of the AI boom: figuring out how to turn all that hype and investment into revenue.

Alexis Madrigal: Yeah.

Jason Koebler: One of the most jarring things for me opening Sora was how similar its timeline looked to Instagram Reels or TikTok. AI-generated “slop,” as people call it, has already flooded those spaces and performs really well there.

Up until now, people were using OpenAI’s tools to make content and then posting it elsewhere — on TikTok, YouTube, Instagram. What Max said is really smart: Sora is OpenAI’s way of keeping people on its own platform.

And weirdly, I actually found it a little wholesome. On Sora, I knew everything I was seeing was AI-generated. On Instagram, it’s this unsettling mix of real and fake — and it’s exhausting to tell which is which. As someone who’s covered this nonstop for the last year and a half, it was almost refreshing to just know: okay, it’s all AI.

Alexis Madrigal: This is slop, and I’m eating it.

We’re talking with Jason Koebler, co-founder of 404 Media; Max Read, journalist and screenwriter; and Alice Marwick, director of research at Data & Society, a nonprofit research institute focused on tech policy.

Of course, we want to hear from you too. Do you think the rise of deepfakes will change how you use the internet? Have you tried any of these tools? If you work in AI, what do you think about Sora?

Give us a call at 866-733-6786, email us at forum@kqed.org, or find us on all the social media platforms.

I’m Alexis Madrigal — stay tuned.

