You may have seen news stories last week about researchers developing tools that can detect deepfakes with greater than 90 percent accuracy. It’s comforting to think that with research like this, the harm caused by AI-generated fakes will be limited. Simply run your content through a deepfake detector and bang, the misinformation is gone!
But software that can spot AI-manipulated videos will only ever provide a partial fix to this problem, say experts. As with computer viruses or biological weapons, the threat from deepfakes is now a permanent feature of the landscape. And although it’s arguable whether deepfakes are a major danger from a political perspective, they’re certainly damaging the lives of women here and now through the spread of fake nudes and pornography.
Hao Li, associate professor at the University of Southern California and CEO of Pinscreen, tells The Verge that any deepfake detector is only going to work for a short while. In fact, he says, “at some point it’s likely that it’s not going to be possible to detect [AI fakes] at all. So a different type of approach is going to need to be put in place to resolve this.”
Li should know: he’s part of the team that helped design one of those recent deepfake detectors. He and his colleagues built an algorithm capable of spotting AI edits of videos of famous politicians like Donald Trump and Elizabeth Warren by tracking small facial movements unique to each individual.
These markers, known as “soft biometrics,” are too subtle for current AI to mimic. They include how Trump purses his lips before answering a question, or how Warren raises her eyebrows to emphasize a point. The algorithm learns to spot these movements by studying past footage of individuals, and the result is a tool that’s at least 92 percent accurate at spotting several different types of deepfakes.
Li, though, says it won’t be long until the work is useless. As he and his colleagues outlined in their paper, deepfake technology is developing with a virus / anti-virus dynamic.
Take blinking. Back in June 2018, researchers found that because deepfake systems weren’t trained on footage of people with their eyes closed, the videos they produced featured unnatural blinking patterns. AI clones didn’t blink frequently enough or, sometimes, didn’t blink at all: characteristics that could be easily spotted with a simple algorithm.
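To get a feel for how simple such a check could be, here is a hypothetical sketch (not the researchers’ actual code): given a per-frame “eye openness” signal, such as the eye aspect ratio from a facial landmark detector, it counts blinks and flags clips whose blink rate falls below a plausible human rate. The threshold values are illustrative assumptions.

```python
def count_blinks(eye_openness, closed_threshold=0.2):
    """Count transitions from open to closed eyes across frames."""
    blinks = 0
    was_closed = False
    for value in eye_openness:
        is_closed = value < closed_threshold
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks


def looks_synthetic(eye_openness, fps=30, min_blinks_per_minute=4):
    """Flag footage whose blink rate is implausibly low for a human."""
    minutes = len(eye_openness) / fps / 60
    return count_blinks(eye_openness) / minutes < min_blinks_per_minute
```

A clip of a real speaker would typically show well over four blinks per minute, while early deepfakes would trip this check immediately — which is also why, as the next paragraph describes, the check was so easy to defeat.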
But what happened next was somewhat predictable. “Shortly after this forensic technique was made public, the next generation of synthesis techniques incorporated blinking into their systems,” wrote Li and his colleagues. In other words: bye-bye, deepfake detector.
Ironically, this back-and-forth mimics the technology at the heart of deepfakes: the generative adversarial network, or GAN. This is a type of machine learning system composed of two neural networks operating in concert. One network generates the fake and the other tries to detect it, with the content bouncing back and forth and improving with each volley. This dynamic is replicated in the wider research landscape, where each new deepfake detection paper gives the deepfake makers a new challenge to overcome.
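The escalation can be sketched structurally. The toy loop below is not a trainable GAN; the “skill” numbers are abstract stand-ins for model quality, there only to show how each side’s improvement forces the other to catch up.

```python
def train_round(generator_skill, detector_skill):
    """One volley: whichever side is behind leapfrogs the other."""
    if detector_skill >= generator_skill:
        # Detector currently wins, so the generator adapts to evade it.
        generator_skill = detector_skill + 1
    else:
        # Generator currently wins, so the detector adapts to catch it.
        detector_skill = generator_skill + 1
    return generator_skill, detector_skill


gen, det = 0, 0
for _ in range(6):
    gen, det = train_round(gen, det)
# Neither side ever stays ahead by more than one step, and both keep
# improving indefinitely -- the virus / anti-virus dynamic in miniature.
```

In a real GAN the two networks are trained jointly by gradient descent on opposing losses; in the research landscape the same arms race plays out between independent teams publishing papers.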
Delip Rao, VP of research at the AI Foundation, agrees that the challenge is far greater than simple detection, and says that these papers need to be put in perspective.
One deepfake detection algorithm unveiled last week boasted 97 percent accuracy, for example, but as Rao notes, the remaining 3 percent could still be damaging at the scale of internet platforms. “Say Facebook deploys that [algorithm] and assuming Facebook gets around 350 million images a day, that’s a LOT of misidentified images,” says Rao. “With every false positive from the model, you are compromising the trust of the users.”
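Rao’s point survives a quick back-of-the-envelope check. The 350 million images per day figure is his; the simplifying assumption here is that all of the 3 percent error counts as misidentification:

```python
# Rough scale estimate: errors from a 97%-accurate detector at
# platform volume (350M images/day, per Rao's figure).
images_per_day = 350_000_000
error_rate = 0.03  # 97 percent accuracy
misidentified = images_per_day * error_rate
print(f"{misidentified:,.0f} misidentified images per day")
# prints "10,500,000 misidentified images per day"
```

Even if only a fraction of those errors are false positives on genuine content, that is millions of wrongly flagged images every day.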