Artificial Intelligences make deepfakes so perfect even other AIs can't detect them

Deepfake propaganda is already being used to try to affect the outcome of the US election – but tomorrow the technology could be used to empty your bank account

Artificial Intelligences are now so good at creating ultra-realistic deepfake videos that not even another AI can detect the deception.

Researcher Nina Schick, whose book Deep Fakes and the Infocalypse: What You Urgently Need To Know is a wake-up call about the danger AI poses to democracy, tells Daily Star Online: “Humans will never be able to detect deepfakes… it’s already there – to the naked eye they’re perfect.”

And even AI tools won’t be able to spot the fakes. She warns: “You might get to the point where the generators are so perfect that even an AI won’t be able to tell the difference between a real video and a fake video.

"That’s already the case with text written by artificial intelligences. No-one can tell…”

AI deepfakes creating fake humans who 'don't exist' to spread misinformation

A "Swiss security analyst" named Martin Aspen was behind the alleged leak of secret data from Hunter Biden – son of US vice-president Joe Biden. But "Martin Aspen" and his company don't exist.

The photos of Aspen were – as NBC News revealed – generated by a computer. With the US election on a knife edge, something like the Hunter Biden "scandal" could quite possibly tip the balance – if it weren't just based on some photos created by an AI.
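For context, here is a rough sketch of one simple class of check that researchers have tried against computer-generated faces like the "Aspen" photo: GAN-made images often carry subtle artefacts in their frequency spectrum. The function names and the threshold below are purely hypothetical assumptions for illustration, and a fake of this quality may well pass far more sophisticated tests – which is exactly the problem the article describes.

import numpy as np

def high_frequency_ratio(grayscale_image: np.ndarray) -> float:
    """Share of spectral energy outside the low-frequency centre of the image."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(grayscale_image)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8   # "low-frequency" radius, an arbitrary assumption
    low_energy = spectrum[cy - r:cy + r, cx - r:cx + r].sum()
    return float(1.0 - low_energy / spectrum.sum())

def looks_generated(grayscale_image: np.ndarray, threshold: float = 0.35) -> bool:
    # Hypothetical decision rule for illustration only; not a validated detector.
    return high_frequency_ratio(grayscale_image) > threshold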

The threat is immense. On a personal level, people stand to lose their life savings to rogue AIs; on a global level, political careers can be made or lost, and wars could be started by convincing faked video footage.

AI deepfake videos to make up '90% of online content' in just five years

Researchers like Nina might yet provide some defence against the coming information apocalypse. But equally, she herself admits, they might not.

Luckily, as far as we know, the only people making content with state-of-the-art deepfake technology right now are using it for fun. Sassy Justice is a new viral video show from the creators of South Park.

The show is “hosted” by a reporter named Fred Sassy, who appears to be a dead ringer for US president Donald Trump.

The fakery is so convincing that it’s easy to forget there’s any technology involved at all, and to assume the programme makers have simply found an actor who’s a long-lost relative of the Trumps.

Matt Stone told the New York Times that the point of the show is to demystify the emerging technology and make it less frightening. He said: “Before the big scary thing of coronavirus showed up, everyone was so afraid of deepfakes.

“We just wanted to make fun of it because it makes it less scary.”

All the cutting-edge technology didn’t come cheap. Co-creator Trey Parker calls Sassy Justice “probably the single most expensive YouTube video ever made.”

But the danger is that the technology is getting cheaper, and quickly. Nvidia’s Maxine uses deepfake tech to make video calls look more “natural”, and new startup Pinscreen creates entire digital avatars so that the user can take part in video chats, as Nina puts it, “without bothering to do their hair or whatever”.

But not every use of the technology is so innocent or useful. In 2016, a gang of conmen stole over €50 million by posing as French Defence Minister Jean-Yves Le Drian.

They wore simple rubber masks as they made video calls to wealthy individuals and asked them to fund “secret” French government missions. They depended on the comparatively lo-fi resolution of video calls to hide the fact that they were wearing disguises.

Today, deepfakes would allow criminals to make photorealistic video calls that even an artificial intelligence couldn’t tell from the real thing.

It’s already happening. In March 2019, cybersecurity firm Symantec reported that three major companies had fallen victim to deepfake fraud, with AI being used to clone voices and call senior financial officers requesting urgent money transfers. While Symantec didn’t reveal the names of the businesses, they confirmed that millions of dollars had been stolen.

The greatest danger, warns Nina, may not be to huge companies with billions of dollars to lose – and therefore millions to spend on deepfake detection.

“You know scammers are going to use it,” she says, “and as it becomes more accessible it won’t be CEOs of big energy companies being defrauded out of millions of euros, it’ll be just ordinary people like you and me.”

https://www.dailystar.co.uk/news/latest-news/artificial-intelligences-make-deepfakes-perfect-22932413
