"Deepfake" is a buzzword you've probably heard. The term, coined on Reddit in 2017, refers both to the technology used (deep learning) and to what it produces (a fake).
A deepfake most often takes the form of a video, but it can also be an altered audio clip. The GAN (Generative Adversarial Network), a neural network architecture introduced in 2014 by Ian Goodfellow, is the basis of most deepfake algorithms. It pits two networks against each other, training them toward opposite goals so that each improves in response to the other. Simply put, the first one aims to produce a convincing fake, and the second one aims to detect the counterfeit.
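To make that two-player idea concrete, here is a deliberately tiny sketch of the adversarial training loop (the one-dimensional "data", the linear generator, and the logistic discriminator are our own toy choices, not how any real deepfake system is built): a generator learns to mimic samples from a "real" distribution while a discriminator learns to tell real from fake, each nudging the other.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1).
# Generator: maps noise z ~ N(0, 1) to a sample via g(z) = g_w*z + g_b.
g_w, g_b = 1.0, 0.0
# Discriminator: logistic classifier d(x) = sigmoid(d_a*x + d_c),
# outputting the probability that x is real.
d_a, d_c = 0.1, 0.0

lr = 0.02
for step in range(2000):
    real = rng.normal(4.0, 1.0)   # one genuine sample
    z = rng.normal()
    fake = g_w * z + g_b          # the generator's counterfeit

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    p_real = sigmoid(d_a * real + d_c)
    p_fake = sigmoid(d_a * fake + d_c)
    d_a += lr * ((1 - p_real) * real - p_fake * fake)
    d_c += lr * ((1 - p_real) - p_fake)

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    p_fake = sigmoid(d_a * fake + d_c)
    grad = (1 - p_fake) * d_a     # gradient of log d(fake) w.r.t. fake
    g_w += lr * grad * z
    g_b += lr * grad

# After training, the generator's samples should have drifted toward the
# real distribution (mean near 4 in this toy setup).
fakes = g_w * rng.normal(size=1000) + g_b
print(fakes.mean())
```

Real GANs replace these scalar parameters with deep convolutional networks over images, but the alternating update (discriminator step, then generator step) is exactly this shape.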
Users of deepfake algorithms most often resort to identity swapping, the most prominent recent example being the famous Tom Cruise deepfake.
This video caused an avalanche of buzz, but it's certainly not the only one of its kind. The estimated number of deepfakes online doubled between 2018 and 2019, and 96% of them were pornographic in nature, overwhelmingly targeting women. In short, these tools are becoming more accessible, and some are using them for nefarious ends: manipulation, disinformation, humiliation and defamation.
With the concurrent explosion of fake news, deepfakes raise real questions in our communities: if social networks are already struggling to regulate the spread of fake news, how will they curb this kind of content as well?
Facebook took a notable step in 2019 by organising a competition rewarding the best deepfake detection algorithm. At the time, the winner had an 83% success rate. Since then, other experts have examined the issue, including researchers from the State University of New York who managed to achieve an accuracy of 94%. To do this, they looked at the light reflections on the corneas of both eyes. As humans, we rarely notice these reflections when watching a video, but algorithms can pick them apart: in a legitimate video, the two eyes show almost identical reflections, while in a modified video there are significant differences in shape and angle.
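The core of that cue can be sketched in a few lines (this is a heavy simplification of the researchers' method; the toy patches, the brightness threshold and the IoU score are our own illustrative choices): isolate the bright specular highlight in each eye, then measure how well the two highlight shapes agree.

```python
import numpy as np

def highlight_mask(eye_patch, thresh=0.8):
    """Binary mask of pixels bright enough to be a light reflection."""
    return eye_patch >= thresh

def reflection_similarity(left_eye, right_eye, thresh=0.8):
    """Intersection-over-union of the two highlight masks:
    close to 1 when the reflections match, close to 0 when they don't."""
    a = highlight_mask(left_eye, thresh)
    b = highlight_mask(right_eye, thresh)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # no highlight in either eye: nothing to contradict
    return np.logical_and(a, b).sum() / union

# Toy 8x8 grayscale eye patches. A genuine face, lit by one light source,
# has near-identical highlights in both eyes...
left = np.zeros((8, 8)); left[3:5, 3:5] = 1.0
right_real = left.copy()
# ...while a synthesized face often has a mismatched highlight.
right_fake = np.zeros((8, 8)); right_fake[1:2, 6:8] = 1.0

print(reflection_similarity(left, right_real))  # 1.0
print(reflection_similarity(left, right_fake))  # 0.0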
Nevertheless, there is no doubt that deepfake algorithms will evolve to produce ever more plausible content that erases imperfections like mismatched corneal reflections. It's therefore the responsibility of tech companies to innovate and offer new ways of identifying fake content.
Deepfake technology undoubtedly presents dangers, that much is for certain, but isn't there another side to the coin? More positive ramifications? Let's take a look...
The GANs on which deepfakes are based have their dangers, like any new technology, but they also bring their share of positive initiatives along with them. The future of this technology lies primarily in the hands of digital companies; it's now up to them to use and master these new tools for the betterment of society.
Want to discuss the topic of deepfakes with us? Have you had any experience with them, positive or negative? Drop us a line anytime, we'd love to have a chat.