'Deepfake' Videos: How to Spot Them and Why They're Dangerous

An AFP journalist views a video on January 25, 2019, at his newsdesk in Washington, D.C., that was manipulated with artificial intelligence to potentially deceive viewers — a "deepfake." Deepfake videos that manipulate reality are becoming more sophisticated and realistic as a result of advances in artificial intelligence, creating the potential for new kinds of misinformation.

Image doctoring is nothing new: Joseph Stalin ordered his enemies airbrushed out of official photos, and Cuba altered images of Fidel Castro to remove his hearing aid. But national security experts are worried about a new frontier in manipulated content: deepfakes. Deceptively realistic, deepfakes are AI-generated videos that use techniques like face swaps, lip syncs, and even "digital puppeteering" to show people saying things they never said or doing things they never did. We'll talk about how to spot deepfakes and the potential threats they pose to democratic institutions.

Guests:

Hany Farid, professor of computer science, Dartmouth College

Bobby Chesney, professor of law, University of Texas at Austin; co-author, with Danielle Citron, of "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security"