'Deep fake' videos that can make anyone say anything worry U.S. intelligence agencies
By Joe Toohey | May 6, 2019, 10:31 PM EDT
NEW YORK - A video of a seemingly real news anchor, reading a patently false script saying things like the "subways always run on time" and "New York City pizza is definitely not as good as Chicago," gives a whole new meaning to the term fake news.
But that fake news anchor is a real example of a fascinating new technology with frightening potential uses.
I was stunned watching the Frankenstein mix of Steve Lacy's voice coming out of what looks like my mouth.
"That's how well the algorithm knows your face," Professor Siwei Lyu told me.
The video is what is known as a deep fake: a computer-generated clip using an algorithm that learned my face so well that it can recreate it with remarkable accuracy.
My generated face can be swapped onto someone else's head (like that original video with Steve), or it can be used to make me look like I'm saying things I've never said.
For this piece, I worked with Lyu and his team at the College of Engineering and Applied Sciences at the University at Albany.
For many people, seeing is believing.
"I would say it's not 100% true anymore," Lyu said.
Their deep fake research is funded by the Defense Advanced Research Projects Agency, or DARPA, which acts as the research and development wing of the U.S. Defense Department. They're working to develop a set of tools the government and public can use to detect and combat the rise of deep fakes.
"What we're doing here is providing a kind of detection method to authenticate these videos," Lyu said.
What's more, deep fakes technically aren't that hard to make. All it takes is a few seconds of video of someone, a powerful computer, and some code, which Lyu and his team don't release publicly.
"The real danger, I believe, is the fact that the line between what is real and what is fake is blurred because of the existence of this kind of technology," Lyu said.
But it is about more than just a news anchor face-swap experiment. The power to make a video of anybody saying anything is alarming.
Even the former president is raising red flags. The funny thing is (as you see in the video) that is not Barack Obama. The video is a deep fake. Actor Jordan Peele is impersonating Obama's voice; the algorithm is doing the rest. It's meant to be a PSA about the dangers of deep fakes.
"Moving forward we need to be more vigilant with what we trust from the internet," fake Obama warns.
It is not hard to imagine how a deep fake video could quickly create a very scary real-world scenario.
Say, for instance, a video of a world leader, such as Vladimir Putin, pops up on the internet declaring war on another country, or, maybe, the head of a major company announces his or her abrupt resignation, putting the markets in a tailspin.
Videos like that can spread like wildfire before fact checkers, journalists, and governments even have the chance to authenticate them.
And the U.S. government is paying attention. Deep fakes were a topic at the recent worldwide threats hearing in front of the Senate Intelligence Committee.
"Are we organized in a way where we could possibly respond fast enough to a catastrophic deep fakes attack?" Sen. Ben Sasse, a Nebraska Republican, asked a panel of the heads of the nation's intelligence agencies.
Director of National Intelligence Dan Coats responded by saying emerging technology like deep fakes pose "a major threat to the United States and it's something the intelligence community needs to be restructured to address."
House Intelligence Committee member Sean Patrick Maloney told Fox 5 News, "You ain't seen nothing yet."
"Because when you start to get a look at these deep fake videos, which are compelling but false, it's an order of magnitude more serious than anything we've seen to date," Maloney added. "That's why it's important we get there first, deter the bad guys, and ask the private sector to step up."
That's where Lyu's research comes in. His main focus is actually detecting and preventing deep fakes.
"It's a cat-and-mouse kind of game. Each side wants to get a little bit of an edge over the other," he said. "And this actually, for the good part, gives us motivation and incentive to grow this research field."
But as the research grows, so will the quality of the deep fake videos being produced, which will make differentiating between what's real and what isn't increasingly difficult. That is why they're working to develop this set of tools right now, to be deployed down the road.
"Our challenge is: how do you build the algorithm to identify the anomaly?" Lt. Gen. Robert Ashley, the director of the Defense Intelligence Agency, offered at that recent Senate threats hearing. "Because every deep fake has a flaw. At least now they do."
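To give a flavor of what "every deep fake has a flaw" can mean in practice: one cue Lyu's group has published on is eye blinking, since early face-swap models were trained mostly on photos of open eyes and produced subjects who rarely blink. The sketch below is an illustrative toy of that general idea, not the team's actual detector. It assumes someone has already measured a per-frame "eye aspect ratio" (EAR), a standard landmark-based openness measure that drops sharply when the eye closes; the threshold values and blink-rate cutoff here are made up for illustration.

```python
# Toy blink-rate check, illustrating one published class of deepfake cue.
# Input: a per-frame eye-aspect-ratio (EAR) series from some upstream
# face-landmark step (not shown). All numeric thresholds are illustrative.

def count_blinks(ear_series, closed_thresh=0.2):
    """Count distinct closed-eye episodes (EAR dipping below threshold)."""
    blinks = 0
    eye_closed = False
    for ear in ear_series:
        if ear < closed_thresh and not eye_closed:
            blinks += 1          # a new dip below threshold = one blink
            eye_closed = True
        elif ear >= closed_thresh:
            eye_closed = False   # eye reopened; ready to count the next dip
    return blinks

def looks_suspicious(ear_series, fps=30, min_blinks_per_min=6):
    """Flag a clip whose blink rate is implausibly low for a real person."""
    minutes = len(ear_series) / fps / 60
    if minutes == 0:
        return False
    return count_blinks(ear_series) / minutes < min_blinks_per_min

# A simulated real clip blinks every 5 seconds; a simulated fake never blinks.
real_clip = [0.05 if i % 150 < 3 else 0.3 for i in range(1800)]  # 60 s @ 30 fps
fake_clip = [0.3] * 1800
```

Here `looks_suspicious(real_clip)` is `False` while `looks_suspicious(fake_clip)` is `True`. This also illustrates Ashley's caveat, "at least now they do": once generators are trained to blink naturally, this particular flaw disappears and detectors must move to the next one.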