A new deepfake detection tool should keep world leaders safe—for now
https://www.technologyreview.com/s/613846/a-new-deepfake-detection-tool-should-keep-world-leaders-safefor-no
An AI-produced video could show Donald Trump saying or doing something extremely outrageous and inflammatory. It would be only too believable, and in a worst-case scenario it might sway an election, trigger violence in the streets, or spark an international armed conflict.
Fortunately, a new digital forensics technique promises to protect President Trump, other world leaders, and celebrities against such deepfakes—for the time being, at least. The new method uses machine learning to analyze a specific individual’s style of speech and movement, what the researchers call a “soft biometric signature.”
The team trained the system on authentic video of each individual, using machine learning to isolate the head and face movements that characterize the real person. These subtle signals—the way Bernie Sanders nods while saying a particular word, perhaps, or the way Trump smirks after a comeback—are not currently modeled by deepfake algorithms.
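The article doesn’t spell out the pipeline, but a minimal sketch of the soft-biometric idea might look like the following, assuming per-frame head-pose and facial-movement measurements have already been extracted by an off-the-shelf face tracker (a hypothetical input here). The pairwise-correlation features and the one-class SVM are illustrative assumptions, not necessarily the researchers’ exact method.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def clip_signature(features):
    """features: (n_frames, n_channels) array of per-frame measurements,
    e.g. head rotation angles and facial-muscle activations (hypothetical
    tracker outputs). The signature is the upper triangle of the Pearson
    correlation matrix: one value per pair of channels, capturing how the
    person's movements tend to co-occur."""
    corr = np.corrcoef(features.T)          # (n_channels, n_channels)
    rows, cols = np.triu_indices_from(corr, k=1)
    return corr[rows, cols]

def train_detector(real_clips):
    """Fit a one-class model on clips of the genuine person only; any clip
    whose signature falls outside the learned region is a candidate fake."""
    X = np.stack([clip_signature(c) for c in real_clips])
    return OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X)

def looks_fake(detector, clip):
    return detector.predict(clip_signature(clip)[None, :])[0] == -1
```

One appeal of the one-class framing in this sketch is that the detector needs only genuine footage of its target, not examples of every possible forgery—an assumption of the sketch, though it fits the person-specific signature the article describes.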
In experiments, the technique was at least 92% accurate at spotting several varieties of deepfake, including face swaps and “puppet-master” fakes, in which an impersonator’s performance drives a digital rendering of the target. It also coped with the artifacts introduced when a video is recompressed, which can confuse other detection techniques. The researchers plan to improve the method by accounting for the characteristics of a person’s speech as well.

The research, presented at a computer vision conference in California this week, was funded by Google and by DARPA, a research wing of the Pentagon, which is funding a program to devise better detection techniques.
The problem facing world leaders (and everyone else) is that it has become ridiculously simple to generate video forgeries with artificial intelligence. False news reports, bogus social-media accounts, and doctored videos have already undermined political news coverage and discourse. Politicians are especially concerned that fake media could be used to sow misinformation during the 2020 presidential election.
Some tools for catching deepfake videos have been produced already, but forgers have quickly adapted. For a while, for example, it was possible to spot a deepfake by tracking the speaker’s eye movements and blinking, which tended to be unnatural: early algorithms, trained largely on photos of people with their eyes open, generated subjects that rarely blinked. Shortly after this tell was identified, however, deepfake algorithms were tweaked to include better blinking.
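That earlier blink heuristic was simple enough to sketch. Assuming eye landmarks from a standard 68-point face tracker (an assumption here, not something the article specifies), one can compute an eye-aspect ratio per frame and flag clips whose blink rate falls far below the human norm of roughly 15 to 20 blinks per minute.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of one eye's landmark coordinates in the common
    68-point layout. The ratio collapses toward zero when the eye closes."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def blink_rate(ear_series, fps, closed_thresh=0.2):
    """Count blinks as dips of the per-frame eye-aspect ratio below a
    threshold and return blinks per minute. A rate far below the human
    norm was a red flag for early deepfakes."""
    closed = np.asarray(ear_series) < closed_thresh
    blinks = np.count_nonzero(closed[1:] & ~closed[:-1])  # open -> closed transitions
    minutes = len(ear_series) / fps / 60.0
    return blinks / minutes
```

As the article notes, this cue is already obsolete: once generators produced natural blinking, any detector built on it stopped working.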
“We are witnessing an arms race between digital manipulations and the ability to detect those, and the advancements of AI-based algorithms are catalyzing both sides,” says Hao Li, a professor at the University of Southern California who helped develop the new technique. For this reason, his team has not yet released the code behind the method.
Li says it will be particularly difficult for deepfake-makers to adapt to the new technique, but he concedes that they probably will eventually. “The next step to go around this form of detection would be to synthesize motions and behaviors based on prior observations of this particular person,” he says.
Li also says that as deepfakes get easier to use and more powerful, it may become necessary for everyone to consider protecting themselves. “Celebrities and political figures have been the main targets so far,” he says. “But I would not be surprised if in a year or two, artificial humans that look indistinguishable from real ones can be synthesized by any end user.”