96pc of deepfakes online are pornographic in nature

8 Oct 2019

Image: © ParamePrizma/Stock.adobe.com

New research by cybersecurity firm Deeptrace has found that the overwhelming majority of deepfakes online are non-consensual porn.

“The rise of synthetic media and deepfakes is forcing us towards an important and unsettling realisation: our historical belief that video and audio are reliable records of reality is no longer tenable.”

Amsterdam-based cybersecurity company Deeptrace Labs began its latest white paper, The State of Deepfakes: Landscape, Threats and Impact, with that solemn reminder of how potentially destructive deepfake technology has already become, despite being a relatively recent phenomenon.

‘Deepfakes’ as a term, a portmanteau of ‘deep learning’ and ‘fakes’, first emerged on a Reddit thread devoted to the technology in 2017, and was quickly brought to public attention by a story on Motherboard. Using a variety of machine learning techniques, those with the right skills can manipulate video footage to make people appear to say and do things they never actually said or did.

Since then, the Deeptrace research explains, the technology has exploded on the internet. The number of deepfake videos online has almost doubled in the last seven months, with the firm finding a total of 14,678.

‘A phenomenon that harms women’

The research found that as much as 96pc of all deepfake content online was non-consensual pornography, nearly all of which featured female actors and musicians.

Though the paper declined to name the individuals featured in these videos, the most-targeted individuals were found to be (in descending order) a British actress, a South Korean musician, another South Korean musician and an American actress. Between these four women, some 684 non-consensual pornographic videos were found, generating a combined total of 19.1m views.

“Deepfake pornography is a phenomenon that exclusively targets and harms women,” the report said. “Despite being a relatively new phenomenon, deepfake pornography has already attracted a large viewership on the top four dedicated deepfake pornography websites alone.”

Most of this material is hosted on dedicated websites, which the researchers said suggests that deepfake pornography could “represent a growing business opportunity” for website hosts, as all the sites in question featured some form of advertising.

The app that undresses women in photos

The report also looked into the controversial app DeepNude as a case study. DeepNude is a computer app that allows users to ‘undress’ photos of clothed women using deep learning image translation algorithms. “These algorithms cannot perform similar translations on images of men, having been specifically trained on images of women,” the report noted.

The app’s website launched in June 2019, but the developers quickly shut it down amid fears that it would be misused. Copies of the app continued to linger on the internet, initially hosted on GitHub before the platform stepped in and banned copies of the app’s code entirely.

However, the vastness of the internet, coupled with the ability to create digital copies in perpetuity, has made the app nigh impossible to stamp out entirely.

“The moment DeepNude was made available to download it was out of the creators’ control, and is now highly difficult to remove from circulation. The software will likely continue to spread and mutate like a virus, making a popular tool for creating non-consensual deepfake pornography of women easily accessible and difficult to counter.”

The political implications

The report notes that much of the media attention surrounding the negative implications of deepfakes has focused on their potential to undermine democratic processes and enhance cyberattacks against both individuals and businesses.

Many of the profiled instances where deepfakes have arisen in the political sphere – such as a sex scandal involving a Malaysian minister and allegations that a video of Gabon’s president Ali Bongo had been doctored to cover up his ill health – have proved inconclusive. In both cases, digital experts could not find sufficient evidence that the videos had been manipulated.

However, the use of what are known as ‘shallowfakes’ in the realm of politics is well documented, most notably in the form of a scandal earlier this year surrounding a doctored video of US Speaker of the House Nancy Pelosi, which was slowed down to make her appear drunk and slurring her words. The video gained massive prominence after it was shared by US president Donald Trump.

Similarly, the report notes that a doctored video of CNN correspondent Jim Acosta, which was manipulated to make it look like he had been aggressive with a White House intern, was used as justification to revoke the reporter’s press pass. The situation, however, was rectified two weeks later when it emerged that the video had been digitally altered.

Eva Short was a journalist at Silicon Republic

editorial@siliconrepublic.com