A doctored video of US House Speaker Nancy Pelosi, a clip of Barack Obama apparently speaking comedian Jordan Peele’s words, a talking Mona Lisa: deepfakes are making their way into popular culture. Built with generative adversarial networks, an AI technique that can synthesize very convincing video from scratch, deepfake tools are a fancy toy, and yet fairly accessible to everyone. You too could use one to combine or superimpose any number of images and videos onto a source image or video.
Here’s how deepfaking works. First, a machine learning model (the generator) is trained on a dataset of the target. It then creates video forgeries, while a second model (the discriminator) tries to detect them. Forgeries are refined and fed back into the discriminator until it can no longer tell them from genuine footage. Creating deepfakes was once the bailiwick of sci-fi films and propaganda-producing agencies, but now one only needs an app or a free-to-download piece of software.
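The adversarial loop above can be sketched in a few lines of numpy. This is a toy illustration on 1-D data rather than video frames; the model shapes, learning rate and step count are assumptions for the demo, not a real deepfake pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data: samples from N(4, 1), standing in for genuine footage.
    return rng.normal(4.0, 1.0, size=(n, 1))

# Generator: an affine map from noise to a sample. Discriminator: a logistic unit.
g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)

lr, n = 0.05, 64
for _ in range(2000):
    z = rng.normal(size=(n, 1))
    fake = z @ g_w + g_b                 # the generator's forgeries
    real = real_batch(n)

    # Discriminator step: push real samples toward label 1, fakes toward 0.
    p_real = sigmoid(real @ d_w + d_b)
    p_fake = sigmoid(fake @ d_w + d_b)
    d_w -= lr * (real.T @ (p_real - 1) + fake.T @ p_fake) / n
    d_b -= lr * np.mean((p_real - 1) + p_fake)

    # Generator step: adjust forgeries so the discriminator labels them real.
    p_fake = sigmoid(fake @ d_w + d_b)
    dfake = ((p_fake - 1) @ d_w.T) / n   # gradient of -log D(fake) w.r.t. fake
    g_w -= lr * (z.T @ dfake)
    g_b -= lr * dfake.sum(axis=0)

# Fraction of fresh forgeries the discriminator now rates as likely real.
fooled = sigmoid((rng.normal(size=(256, 1)) @ g_w + g_b) @ d_w + d_b).mean()
```

Each side improves against the other, which is exactly why the finished forgeries are hard to detect: training only stops once the in-house detector is fooled.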
The key here is the dataset: the larger it is, the easier it becomes to create deepfakes. Since most people believe what they see, deepfakes hack this human tendency to spread fake news and disinformation. They can easily be used to manipulate and discredit a target, shift global policy outlooks, undermine elections or throw a country into crisis. Consider, for example, the video from the White House that surfaced in November last year. Apparently, just a tad of editing was enough to make it appear that CNN reporter Jim Acosta had struck a female White House intern as she attempted to take the microphone from him.
Deepfakes: The Risks
Remember George Orwell’s 1984, where Winston Smith rewrites news and adjusts (read: incinerates) historical records to fit the goals of the state? Today, everything from minimising errors in shapes, light and shadows to changing the angle of facial features or the softness and weight of clothing and hair can be done with basic technical know-how.
At a time when deepfakes hew so close to reality, they act as gospel truth for partisans (who barely agree on facts): audiences can rely on them to an unprecedented degree while being spared the need to trust anything else. Most media platforms available today compress videos into smaller formats that make them quicker to upload and easier to share. This lossy compression can strip away the very clues that would help detect a fake. The problem is that, right now, there isn’t much funding for tools that detect deepfakes, but there sure is immense potential in creating them.
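A small numpy sketch can show how lossy compression destroys detection cues. The faint checkerboard "forgery artifact", the 2x2 box average standing in for codec quantization, and the residual-based detector are all illustrative assumptions, not a real forensic method.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 64)
clean = np.outer(x, x) * 255.0                           # smooth "genuine" frame
# Hypothetical forgery artifact: a ~2 gray-level checkerboard a detector might key on.
artifact = 2.0 * (np.indices((64, 64)).sum(axis=0) % 2)
forged = clean + artifact

def lossy(img):
    # Crude stand-in for lossy compression: average each 2x2 block, then
    # upsample back. High spatial frequencies are discarded, much as
    # coarse DCT quantization discards them in real codecs.
    return img.reshape(32, 2, 32, 2).mean(axis=(1, 3)).repeat(2, axis=0).repeat(2, axis=1)

def hf_energy(residual):
    # Simple detector cue: strength of pixel-to-pixel alternation in the residual.
    return np.abs(np.diff(residual, axis=1)).mean()

before = hf_energy(forged - clean)                # ~2.0: artifact clearly visible
after = hf_energy(lossy(forged) - lossy(clean))   # ~0.0: the cue is gone
```

The period-2 artifact survives the original frame intact but averages out entirely under the block filter, which is why a video that has been re-encoded for sharing can defeat detectors that rely on such fine-grained traces.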
In a world primed with prejudice, deepfakes can mediate every aspect of our lives, threaten political systems and religious beliefs, and may trigger the true beginning of a post-truth era. These videos are tough to debunk and could even lead journalists to sit on fake evidence or fabricate stories. They gradually erode truth as the number of people creating fakes outstrips the number detecting them. After all, when one cannot distinguish between what is real and what is not, how can there be objectivity and rationality?
We’re rapidly approaching a point where deepfakes deliver visuals with so much detail that an untrained eye would never suspect a thing. As deepfake tech becomes more readily available, it will draw more people towards an alternate reality and, most probably, into the post-truth era. There is a need to hold broader societal discussions, govern generative AI, establish new codes of practice and develop synthetic media responsibly, to restore truth and ensure that deepfakes do not cause harm. However, there is no need to dismiss deepfakes outright when we can exploit them for their potential benefits.