As technology advances, the line between what is real and what is not is becoming blurred. Video gaming, once the domain of children and cartoon figures, is now so realistic that it is regularly used to train new personnel entering active military service. As the unreal becomes more realistic, realism itself becomes hackable. Audio and video can now be manipulated to such an extent that average people will soon not be able to trust what they see and hear – a basic human expectation since the beginning of time.
In a video posted by Buzzfeed, former US President Barack Obama talks to the camera, warning people not to believe everything they see or read on the internet and instead “to rely on trusted news sources”. This video is not real; rather, it is what is known as a “deep fake”. With advances in A.I., it is now possible to swap someone’s face or voice in a video and make it look like they did or said something that never actually happened. As skill and technology advance, a day is easily foreseeable when digital technologists using readily available tools can manipulate or spontaneously generate a video depicting anything imaginable. The ramifications of this technology are terrifying at worst, and even under the least extreme predictions, they are fundamentally challenging for privacy, democracy and national security.
Disinformation is the norm
Before the advent of social media, information was manipulated through propaganda efforts, with the ultimate malicious potential being the control of an entire communication ecosystem. Joseph Goebbels, Minister of Propaganda for the Nazi party in the 1930s and 1940s, used movies, radio, television, newspapers and even the theatre to stifle divergence from his warped central narrative. In 2018, this type of control would be almost impossible. The printed press, radio and television broadcasting media are independent and generally free of control by governments or over-concentration by private entities. Further diversification online adds new independent voices, but the completely democratic flow of information comes with its own new problems.
A deep (fake) history
The manipulation of digital files is nothing new. Photographers and graphic designers can edit and completely change digital images using Adobe Lightroom and Photoshop. Examples of this type of manipulation in movies and on television have existed for years. Actors’ faces have been replaced or augmented, or, as in the famous case of Oliver Reed, who died during the filming of Gladiator, faces have been mapped onto other people’s bodies.
Today, advanced video manipulation software is moving beyond just Hollywood. High-quality video editing technology has proliferated beyond research facilities and entertainment production environments, and the results are becoming so good that it is now difficult to tell the difference between what is real and what is fake. As users of the new technology move beyond comedy and satire to “weaponise” video manipulation, the implications of this technology in the Internet age cannot be overstated.
How did this technology spread?
According to a Motherboard report, this technology first leaked to the masses via Reddit when a user named “deepfakes” made the algorithm public. Prior to this, the technology was largely confined to the academic world. The dark corners of the internet now churn out an abundance of bogus celebrity pornography. Needless to say, these are some of the most disturbing applications of this emerging technology.
The consequences of deep fake technology
It’s hard to see the benefits of deep fake technology, particularly in this moment of fake news and disinformation campaigns. Deep fakes will only exacerbate the erosion of public trust. Imagine a hypothetical deep fake video of a famous actress. In it, the actress makes racist comments, the type that would be regarded as horrifying by the mainstream public. The video is published, and without any requirement for its authenticity to be verified, it is shared millions of times before the actress, her publicity team or the mainstream news media can question its validity. It could take days or weeks for the actress to convince the public that she never made the comments or that the video was doctored. In the time elapsed, potentially irreparable damage has been done, and her career may be over or at least seriously harmed.
Now imagine that this type of video is published in a country without a responsible news media or other gatekeepers. What if fake content is used as a tool to destroy the reputation of a private citizen, someone without the resources to do anything more than profess their innocence? With this technology, any targeted person could be transposed into any scenario to appear anywhere, at any time, doing anything – all without consent. Will anyone remain untouched?
Currently, this technology is still far from perfect. Even though nearly anyone can access deep fake tools, it is important to note two key points:
- First, just because someone has access to this technology does not mean they will be good at it. Currently anyone can use Photoshop to retouch a photo or to cut and paste a person from one photo to another, but very few people have the skill to do this on a large scale and make it believable.
- Second, deep fake videos are labour intensive. Buzzfeed’s Obama deep fake, while impressive, is far from perfect and was reported to have taken over 56 hours to complete. The video also relied on a human comedian’s impression of Obama’s voice.
Adobe, which created some of the tools used in video manipulation, is also exploring how AI and machine learning could be part of the solution, automatically spotting videos and images that have been manipulated. Is it any surprise that machine learning may be part of the solution to the very problem it helped create?
In the age of social media, with rising demand for instantaneous information and increasingly sophisticated manipulation of streaming video, solutions are needed to help identify what is real and what is not. Otherwise, the value of streaming video is wholly undermined: confidence erodes, and viewers can no longer trust what they see and hear.
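One complementary safeguard – an illustrative sketch only, not a technique described in this article – is provenance verification: a publisher releases a cryptographic fingerprint of the original footage, and any viewer can re-hash the copy they received to confirm it has not been altered since publication. The file names and byte strings below are hypothetical stand-ins for real media files.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest of a media file's bytes."""
    return hashlib.sha256(data).hexdigest()

# The publisher computes and releases the fingerprint with the footage.
original = b"original interview footage"          # hypothetical file contents
published_digest = fingerprint(original)

# A viewer re-hashes their copy and compares it to the published digest.
tampered = b"original footage with one face swapped"

print(fingerprint(original) == published_digest)  # True: copy is untouched
print(fingerprint(tampered) == published_digest)  # False: file was altered
```

A hash proves only that a file is byte-for-byte unchanged since it was fingerprinted; it cannot say whether the original itself was authentic. That is why detection efforts like Adobe’s, which analyse the content rather than the container, would still be needed alongside provenance checks.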
Digital video would devolve to a walking shadow that struts and frets upon an empty online stage and then is heard no more: a tale told by an idiot, full of sound and fury, signifying nothing.
This story is featured in the 15 November 2018 edition of The Warren Centre’s Prototype newsletter.