FakeCatcher can detect deepfakes in real time by analysing pixels in a video to look for signs of blood flow.
Intel has developed an AI that it says can detect in real time whether a video has been manipulated using deepfake technology.
FakeCatcher, part of the chipmaker’s responsible AI work, is said to detect deepfakes within milliseconds and with a 96pc accuracy rate.
“Deepfake videos are everywhere now,” said Intel scientist Ilke Demir, who designed FakeCatcher with Umur Ciftci from the State University of New York at Binghamton.
“You have probably already seen them; videos of celebrities doing or saying things they never actually did.”
Deepfake videos can be difficult for humans to detect. They are increasingly being used to mislead people with fake news or by cybercriminals to infiltrate organisations.
One report by VMware in August found that two out of three respondents saw malicious deepfakes being used as part of cyberattacks – a 13pc increase from a 2021 report.
FakeCatcher aims to help organisations solve this problem much faster and with greater accuracy than existing detection methods that require uploads and hours to produce results.
Using Intel hardware and software, it runs on a server and interfaces through a web-based platform. On the software side, an orchestra of specialist Intel tools forms the optimised FakeCatcher architecture.
While most existing deepfake detectors based on deep learning technology look at a video’s raw data to find signs of manipulation, FakeCatcher looks for clues “by assessing what makes us human” such as subtle signs of blood flow in the pixels of a video.
“Blood flow signals are collected from all over the face and algorithms translate these signals into spatiotemporal maps. Then, using deep learning, we can instantly detect whether a video is real or fake,” the company said.
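Intel has not published FakeCatcher’s internals, but the description above matches a known research technique called remote photoplethysmography (rPPG). As a rough illustration of the idea, the sketch below averages the green channel over a grid of face regions, band-passes each trace to the human pulse range, and stacks the results into a spatiotemporal map. The grid layout, filter band and the stand-in coherence score are all assumptions for illustration; a real system would feed the maps to a trained deep network.

```python
# Illustrative sketch of an rPPG-style pipeline in the spirit of
# FakeCatcher's public description. NOT Intel's implementation: the
# region grid, filter band and scoring heuristic are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def region_signals(frames, grid=(4, 4)):
    """Average the green channel over a grid of face regions per frame.

    frames: array of shape (T, H, W, 3) holding a face crop per frame.
    Returns an array of shape (T, grid_rows * grid_cols).
    """
    t, h, w, _ = frames.shape
    rows, cols = grid
    sigs = np.empty((t, rows * cols))
    for r in range(rows):
        for c in range(cols):
            patch = frames[:, r*h//rows:(r+1)*h//rows,
                              c*w//cols:(c+1)*w//cols, 1]  # green channel
            sigs[:, r*cols + c] = patch.mean(axis=(1, 2))
    return sigs

def spatiotemporal_map(sigs, fps=30.0, band=(0.7, 4.0)):
    """Band-pass each region's trace to the human pulse range
    (~42-240 bpm) and stack the results into a (regions, time) map."""
    nyq = fps / 2.0
    b, a = butter(3, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, sigs, axis=0)
    # Normalise each region so the map is amplitude-invariant.
    filtered = (filtered - filtered.mean(axis=0)) / (filtered.std(axis=0) + 1e-8)
    return filtered.T

def toy_score(st_map):
    """Placeholder for the deep-learning classifier: genuine pulse
    signals tend to be coherent across face regions, so low mean
    inter-region correlation is a crude 'fake' indicator."""
    corr = np.corrcoef(st_map)
    return float(np.mean(corr[np.triu_indices_from(corr, k=1)]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Random noise stands in for video frames; real input would be a
    # per-frame face crop from a face detector.
    frames = rng.integers(0, 256, size=(150, 64, 64, 3)).astype(float)
    st_map = spatiotemporal_map(region_signals(frames))
    print(f"coherence score: {toy_score(st_map):.3f}")  # near 0 for noise
```

In a production system such as the one Intel describes, the spatiotemporal maps would be classified by a deep network trained on real and manipulated video rather than by a hand-written heuristic.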
There are plenty of potential use cases for this technology. Video-heavy social media platforms such as TikTok and Facebook could leverage detectors like this to stop users from uploading harmful deepfakes, while media companies could use it to avoid inadvertently amplifying manipulated content.
Organisations could also use such technology to prevent deepfakes from compromising their systems and gaining access to their environments.