Have you seen the video of US House Speaker Nancy Pelosi appearing to be drunk and slurring her words? Or perhaps you’ve seen the one of Facebook CEO Mark Zuckerberg joking about knowing the public’s secrets? Or the videos of Rasputin singing Beyoncé, Andy Warhol eating a Burger King burger, or Salvador Dalí being brought back to life?
While seeing long-dead artists eating modern food is fun, it hides a darker problem about authenticity on the web. With heightened concerns about the ‘fake news’ label being used to undermine information, deepfakes are set to accelerate a crisis in trust.
Check out my thoughts on deepfakes in this CMO piece (below, or read it on CMO).
In 2018, former US president Barack Obama warned in a video that adversaries could make it look like anyone is saying anything at any time.
Deepfakes are essentially videos, and in some cases audio, which purportedly show someone doing or saying something they haven’t in real life. They may feature real people such as politicians, historical figures or anonymous individuals. And they may be entirely manufactured, such as the ones showing long-dead figures like Andy Warhol or Salvador Dalí speaking or interacting with things they never could have during their lifetimes.
MACHINE LEARNING BRINGS PHOTOS TO LIFE
Simon Smith, cyber forensic investigator and cybercrime expert witness, told CMO the term ‘deepfake’ comes from the deep learning tech used to manufacture the fake videos.
“The very best technology is used to map out every muscle movement of a person’s face [if looking at the face only] and replicated into a learning algorithm and associated with a word, phrase, attitude or feeling,” he explained.
“Once enough learning has been attained, it is possible to [achieve] an almost life-like effect with the assistance of morphing graphical technology that takes into account the person’s age, muscles that move when other muscles move, stretching and expressions to give a realistic approach.”
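Smith’s description boils down to a simple idea: track landmark points on a face and blend them between expressions. Real deepfake systems do this with deep neural networks, but the core mapping can be illustrated with plain interpolation. This is a purely illustrative sketch; the landmark coordinates and the `blend_expressions` function are made up for the example.

```python
# Illustrative sketch only: deepfake pipelines use deep neural networks,
# but the idea of mapping facial landmarks and blending between two
# expressions can be shown with linear interpolation. All coordinates
# below are hypothetical.

def blend_expressions(neutral, target, weight):
    """Linearly interpolate each tracked landmark between two expressions.

    neutral, target: lists of (x, y) landmark positions for the same face.
    weight: 0.0 -> fully neutral, 1.0 -> fully the target expression.
    """
    return [
        (nx + weight * (tx - nx), ny + weight * (ty - ny))
        for (nx, ny), (tx, ty) in zip(neutral, target)
    ]

# Two mouth-corner landmarks: neutral face vs. smiling face.
neutral = [(30.0, 60.0), (70.0, 60.0)]
smiling = [(28.0, 55.0), (72.0, 55.0)]

# A half-strength smile, blended from the two expressions.
half_smile = blend_expressions(neutral, smiling, 0.5)
```

A learned model effectively replaces the fixed `weight` with predictions driven by the source footage, which is what lets the muscle movements of one face drive another.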
The answer to why we are seeing and hearing about deepfakes now lies in a confluence of advances in technology and the work of the darker parts of the Web.
There’s an exponential growth in computing power behind the surge of deepfakes, according to Allan Waddell, founder and co-CEO of cloud-based enterprise software outfit Kablamo. Adding fuel to the fire is the sophistication of artificial intelligence (AI) and pattern recognition on image datasets.
“It’s taking a set of images, and it’s been a large number of images, to create models to overlay on existing people. Traditionally, the more images you have, the more accurate it becomes,” he said. “There’s been breakthroughs in the number of images and datasets needed to create these [deepfakes].”
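The intuition behind “more images, more accuracy” is the same one behind averaging noisy measurements: each extra training image reduces the estimation error of the model. As a toy sketch only (real deepfake training is far more complex; `TRUE_VALUE` and the noise model here are hypothetical stand-ins for a facial feature learned from data):

```python
import random

# Toy illustration of why larger image datasets help: averaging more
# noisy "observations" of a feature gives a more accurate estimate.
# TRUE_VALUE and the Gaussian noise are hypothetical.
random.seed(42)  # fixed seed so the sketch is reproducible

TRUE_VALUE = 5.0

def estimation_error(n_images):
    """Average n noisy observations of TRUE_VALUE and return the
    absolute error of that estimate."""
    samples = [TRUE_VALUE + random.gauss(0.0, 1.0) for _ in range(n_images)]
    return abs(sum(samples) / n_images - TRUE_VALUE)

small_error = estimation_error(10)      # error from a small dataset
large_error = estimation_error(10_000)  # error from a large dataset
```

The error typically shrinks roughly as one over the square root of the sample count, which is why the breakthroughs Waddell mentions, needing fewer images for the same fidelity, matter so much.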
THE RISE OF FAKE MARKETING?
Once upon a time, fake videos might have been created for a bit of humour, such as fake lip-reading dubs that had politicians or celebrities say something self-parodying. Mostly they were harmless, because they were easily identified as fake and exaggerated enough to defy believability.
However, such advances in machine learning technology have enabled the creation of realistic-looking videos. Combine that with the pervasiveness of social media, where fake news and videos can spread without proper scrutiny or verification, and you bring the issue of deepfakes to the fore.
“The technology has been used for many years to help animatronics by mapping out joints and movements in cartoons. This is one step above that and [in the wrong hands] could cause identity theft, false impersonation, setup for crimes a person did not commit and much more serious repercussions,” Smith told CMO.