Google has created a slew of deepfake videos for a dataset designed to help train AIs to detect fraudulent content.
Google has created thousands of manipulated deepfake videos to help tackle the latest weapon in the arsenal of those looking to spread disinformation online.
Deepfakes commonly use artificial intelligence software to combine and superimpose existing images and videos of a person to make it look like they have said or done something they have not. While there have been examples of deepfakes used as a source of humour, there have also been fears that they could be used to discredit individuals or to interfere in elections.
US congresswoman Nancy Pelosi is one of the more notable victims – a manipulated video was circulated online earlier this year making her speech appear slurred. Facebook declined to remove the video, a move that inspired artists to put together a derisive deepfake of Mark Zuckerberg declaring his allegiance to a James Bond-esque organisation named Spectre – which Facebook also declined to remove.
Google recruited 28 actors to build its large dataset, which will be available to researchers looking to build and train automated detection systems to spot manipulated clips. The actors were filmed in a variety of scenes, and the footage was then edited to create more than 3,000 deepfakes. Some of the videos in the dataset are the original recordings, while others are fakes in which one actor’s face has been superimposed onto another’s.
In collaboration with @Jigsaw and in partnership w/ the FaceForensics video benchmark team, we are excited to release a large dataset of visual deepfakes to directly support deepfake detection efforts. Learn more and find the data at https://t.co/0faXdciuxC pic.twitter.com/m8vM3GGbdY
— Google AI (@GoogleAI) September 24, 2019
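As a rough illustration of how researchers might use such a dataset, the sketch below trains a simple real-versus-fake frame classifier in PyTorch. It is a minimal example, not Google’s or the FaceForensics team’s method: it assumes frames have already been extracted from the released videos into a hypothetical frames/real and frames/fake folder layout, and the model choice and hyperparameters are purely illustrative.

```python
# Minimal sketch: binary real/fake frame classifier trained on a deepfake dataset.
# Assumption (not from the article): video frames have been extracted into
#   frames/real/...  and  frames/fake/...
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder maps each subfolder name ("fake", "real") to a class label.
dataset = datasets.ImageFolder("frames", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# A standard ResNet-18 backbone with a two-class head; any image classifier would do here.
model = models.resnet18(num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

In practice, published detectors trained on FaceForensics-style data typically work frame by frame in this way and then aggregate per-frame scores into a single verdict for each video; the point of releasing thousands of paired real and fake clips is to give such classifiers enough labelled examples to learn from.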
“Google considers these issues seriously,” said Nick Dufour from Google Research and Andrew Gully of Jigsaw, a technology incubator created by Google, in a blog post.
“Since the field is moving quickly, we’ll add to this dataset as deepfake technology evolves over time, and we’ll continue to work with partners in this space.
“We firmly believe in supporting a thriving research community around mitigating potential harms from misuses of synthetic media, and today’s release of our deepfake dataset in the FaceForensics benchmark is an important step in that direction.”
– PA Media