Deepfake detection algorithms will never be enough

You may have seen news stories last week about researchers developing tools that can detect deepfakes with greater than 90 percent accuracy. It’s comforting to think that with research like this, the harm caused by AI-generated fakes will be limited. Simply run your content through a deepfake detector and bang, the misinformation is gone!

But software that can spot AI-manipulated videos will only ever provide a partial fix to this problem, experts say. As with computer viruses or biological weapons, the threat from deepfakes is now a permanent feature of the landscape. And while it’s arguable whether deepfakes are a huge danger from a political perspective, they’re certainly damaging the lives of women here and now through the spread of fake nudes and pornography.

Hao Li, associate professor at the University of Southern California and CEO of Pinscreen, tells The Verge that any deepfake detector is only going to work for a short while. In fact, he says, “at some point it’s likely that it’s not going to be possible to detect [AI fakes] at all. So a different type of approach is going to need to be put in place to resolve this.”

Li should know: he’s part of the team that helped design one of those recent deepfake detectors. He and his colleagues built an algorithm capable of spotting AI edits of videos of famous politicians like Donald Trump and Elizabeth Warren by tracking small facial movements unique to each individual.

These markers are known as “soft biometrics,” and they’re currently too subtle for AI to mimic: how Trump purses his lips before answering a question, for instance, or how Warren raises her eyebrows to emphasize a point. The algorithm learns to spot these movements by studying past footage of each individual, and the result is a tool that’s at least 92 percent accurate at spotting several different types of deepfakes.
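
To make that concrete, here’s a minimal sketch of the general idea: summarize how a person’s facial-movement signals correlate over a clip, fit a model on authentic footage of that person only, and flag clips that fall outside the learned pattern. The per-frame features, the file names, and the one-class SVM are illustrative assumptions here, not the team’s published pipeline.

```python
# Hypothetical sketch of a soft-biometric check. Assumes per-frame facial
# features (e.g., action-unit intensities) have already been extracted;
# the file names and the one-class SVM are assumptions for illustration.
import numpy as np
from sklearn.svm import OneClassSVM

def clip_descriptor(frame_features: np.ndarray) -> np.ndarray:
    """frame_features: (n_frames, n_features). Correlate every pair of
    feature tracks to capture how this person's expressions move together."""
    corr = np.corrcoef(frame_features.T)        # (n_features, n_features)
    upper = np.triu_indices_from(corr, k=1)     # unique pairs only
    return corr[upper]                          # fixed-length clip vector

# Fit on authentic clips of one individual only (hypothetical files).
real_clips = [np.load(f"real_clip_{i}.npy") for i in range(200)]
X_real = np.stack([clip_descriptor(c) for c in real_clips])
model = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(X_real)

def is_suspect(clip_features: np.ndarray) -> bool:
    """Flag clips whose movement signature falls outside the learned region."""
    return model.predict(clip_descriptor(clip_features)[None, :])[0] == -1
```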

Li, though, says it won’t be long until the work is useless. As he and his colleagues outlined in their paper, deepfake technology is developing with a virus/anti-virus dynamic.

Take blinking. Back in June 2018, researchers found that because deepfake systems weren’t trained on footage of people with their eyes closed, the videos they produced featured unnatural blinking patterns. AI clones didn’t blink frequently enough or, sometimes, didn’t blink at all: characteristics that could be easily spotted with a simple algorithm.
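
As an illustration of how simple such a check can be, here’s a sketch of one common heuristic: compute an eye-aspect ratio from per-frame eye landmarks, count blinks, and flag videos whose blink rate is implausibly low. The landmark input, thresholds, and cutoff are all assumptions for the example, not the researchers’ exact method.

```python
# Illustrative blink-rate heuristic. Assumes six (x, y) eye landmarks per
# frame from some face tracker; thresholds are assumptions for the sketch.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) landmarks. The ratio drops sharply when the eye closes."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blinks_per_minute(ears: list[float], fps: float, closed: float = 0.2) -> float:
    """Count closed-to-open transitions and normalize by video length."""
    shut = [e < closed for e in ears]
    blinks = sum(1 for prev, cur in zip(shut, shut[1:]) if prev and not cur)
    minutes = len(ears) / fps / 60.0
    return blinks / minutes if minutes else 0.0

def looks_synthetic(ears: list[float], fps: float) -> bool:
    """People blink roughly 15-20 times a minute at rest; a talking head
    well below that is a candidate fake under this heuristic."""
    return blinks_per_minute(ears, fps) < 5.0   # assumed cutoff
```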

But what happened next was somewhat predictable. “Shortly after this forensic technique was made public, the next generation of synthesis techniques incorporated blinking into their systems,” wrote Li and his colleagues. In other words: bye bye, deepfake detector.

Ironically, this back-and-forth mimics the technology at the heart of deepfakes: the generative adversarial network, or GAN. This is a type of machine learning system made up of two neural networks operating in concert. One network generates the fake and the other tries to detect it, with the content bouncing back and forth and improving with each volley. This dynamic is replicated in the wider research landscape, where each new deepfake detection paper gives deepfake makers a new challenge to overcome.
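
For a concrete picture of that tug-of-war, here’s a minimal GAN training step in PyTorch. The toy architecture and hyperparameters are arbitrary choices for illustration, not taken from any actual deepfake system.

```python
# Toy GAN training step: the generator G learns to fool the detector D,
# and D learns to catch G, each improving against the other.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real: torch.Tensor) -> None:
    batch = real.size(0)

    # Detector's move: score real samples as 1, generated fakes as 0.
    fake = G(torch.randn(batch, 64)).detach()   # freeze G this half
    d_loss = (loss_fn(D(real), torch.ones(batch, 1)) +
              loss_fn(D(fake), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator's move: produce fakes the updated detector calls real.
    fake = G(torch.randn(batch, 64))
    g_loss = loss_fn(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Every improvement to D hands G a sharper training signal, which is precisely the dynamic the detection papers are caught in.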

Delip Rao, VP of research at the AI Foundation, agrees that the challenge is far greater than simple detection, and says that these papers need to be put in perspective.

One deepfake detection algorithm unveiled last week boasted 97 percent accuracy, for example, but as Rao notes, that 3 percent could still be damaging when thinking at the scale of internet platforms. “Say Facebook deploys that [algorithm] and assuming Facebook gets around 350 million images a day, that’s a LOT of misidentified images,” says Rao. “With every false positive from the model, you are compromising the trust of the users.”
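
Rao’s arithmetic is easy to check, treating the full 3 percent error as misidentifications and using the figures quoted above:

```python
# Back-of-the-envelope scale check using the figures quoted above.
images_per_day = 350_000_000
accuracy = 0.97
errors = images_per_day * (1 - accuracy)
print(f"{errors:,.0f} misidentified images per day")   # 10,500,000
```

That’s more than ten million wrongly flagged or wrongly cleared images every day from a detector that sounds nearly perfect on paper.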