The Newest AI-Enabled Weapon: ‘Deep-Faking’ Photos of the Earth
By Patrick Tucker, Technology Editor, Defense One
April 1, 2019
Step 1: Use AI to make undetectable changes to outdoor photos. Step 2: Release them into the open-source world and enjoy the chaos.
Worries about deep fakes—machine-manipulated videos of
celebrities and world leaders purportedly saying or doing things that they
really didn’t—are quaint compared to a new threat: doctored images of the Earth
itself.
China is the acknowledged leader in using an emerging technique called generative adversarial networks to trick computers into seeing objects in landscapes or in satellite images that aren’t there, says Todd Myers, automation lead for the CIO-Technology Directorate at the National Geospatial-Intelligence Agency.
“The Chinese are well ahead of us. This is not classified
info,” Myers said Thursday at the second annual Genius Machines summit, hosted
by Defense One and Nextgov. “The Chinese have already designed; they’re already
doing it right now, using GANs—which are generative adversarial networks—to
manipulate scenes and pixels to create things for nefarious reasons.”
For example, Myers said, an adversary might fool your
computer-assisted imagery analysts into reporting that a bridge crosses an
important river at a given point.
“So from a tactical perspective or mission planning, you
train your forces to go a certain route, toward a bridge, but it’s not there.
Then there’s a big surprise waiting for you,” he said.
First described in 2014, GANs represent a big evolution
in the way neural networks learn to see and recognize objects and even detect
truth from fiction.
Say you ask a conventional neural network to figure out which objects are which in satellite photos. The network will break the image into multiple pieces, or pixel clusters, calculate how those pieces relate to one another, and then make a determination about what the final product is, or whether the photos are real or doctored. It’s all based on the experience of looking at lots of satellite photos.
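In code, that conventional, single-network approach looks roughly like the PyTorch sketch below. Everything in it is an illustrative assumption, from the 64-by-64 tile size to the two-class real-versus-doctored output; it is not any agency’s actual pipeline.

```python
# A minimal sketch of a conventional classifier: one network inspects
# pixel clusters in a satellite tile and renders a judgment.
# All names, sizes, and classes here are illustrative assumptions.
import torch
import torch.nn as nn

class TileClassifier(nn.Module):
    """Labels a 64x64 RGB satellite tile as real (0) or doctored (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(   # extract local pixel-cluster features
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(       # relate the clusters, then decide
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, x):
        return self.head(self.features(x))

tile = torch.randn(1, 3, 64, 64)         # stand-in for one image tile
logits = TileClassifier()(tile)          # scores for "real" vs. "doctored"
```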
GANs reverse that process by pitting two networks against
one another—hence the word “adversarial.” A conventional network might say,
“The presence of x, y, and z in these pixel clusters means this is a picture of
a cat.” But a GAN might say, “This is a picture of a cat, so x, y, and
z must be present. What are x, y, and z and how do they relate?” The
adversarial network learns how to construct, or generate, x, y, and z in a way
that convinces the first neural network, or the discriminator, that something
is there when, perhaps, it is not.
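That generator-versus-discriminator contest can be sketched in a few lines. The toy loop below, again in PyTorch with made-up shapes and flattened 64-by-64 tiles, shows the core mechanic: the discriminator learns to call generated tiles fake, while the generator learns to produce tiles the discriminator accepts as real.

```python
# Toy GAN training loop. Shapes, learning rates, and data are
# illustrative assumptions, not a production imagery pipeline.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),
                  nn.Linear(256, 64 * 64), nn.Tanh())    # noise -> fake tile
D = nn.Sequential(nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                     # tile -> real/fake logit

loss = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
real = torch.rand(32, 64 * 64)            # stand-in for real image tiles

for step in range(100):
    fake = G(torch.randn(32, 100))

    # Discriminator: learn to tell real tiles from generated ones.
    d_loss = (loss(D(real), torch.ones(32, 1)) +
              loss(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to make tiles the discriminator calls real,
    # the "convince the discriminator something is there" step.
    g_loss = loss(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```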
Researchers have found GANs useful for spotting objects and sorting valid images from fake ones. In 2017, Chinese scholars used GANs to identify roads, bridges, and other features in satellite photos.
The concern, as AI technologists told Quartz last year,
is that the same technique that can discern real bridges from fake ones can
also help create fake bridges that AI can’t tell from the real thing.
Myers worries that as the world comes to rely more and more
on open-source images to understand the physical terrain, just a handful of
expertly manipulated data sets entered into the open-source image supply line
could create havoc. “Forget about the [Department of Defense] and the
[intelligence community]. Imagine Google Maps being infiltrated with that,
purposefully? And imagine five years from now when the Tesla [self-driving]
semis are out there routing stuff?” he said.
When it comes to deep fake videos of people, biometric
indicators like pulse and speech can defeat the fake effect. But faked landscapes aren’t vulnerable to the same techniques.
Even if you can defeat GANs, many image-recognition systems can be fooled by small visual changes to the physical objects in the environment themselves, such as stickers added to stop signs that are barely noticeable to human drivers but can throw off machine vision systems, as DARPA program manager Hava Siegelmann has demonstrated.
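Siegelmann’s stop-sign stickers are physical, but a digital version of the same attack, the fast gradient sign method, makes the mechanics concrete: nudge each pixel slightly in the direction that most increases the classifier’s error. The sketch below uses an untrained stand-in model and an illustrative epsilon; a real attack would target an actual vision system.

```python
# FGSM-style perturbation: a nearly invisible pixel shift crafted to
# raise a classifier's error. The model is an untrained stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in
image = torch.rand(1, 3, 32, 32, requires_grad=True)   # e.g., a stop sign
label = torch.tensor([0])                               # its true class

loss = nn.CrossEntropyLoss()(model(image), label)
loss.backward()                                         # gradient w.r.t. pixels

epsilon = 0.03                           # small enough to look unchanged
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)
# Against a trained model, this tiny shift often flips the prediction.
print(model(adversarial).argmax().item(), "vs. true class", label.item())
```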
Myers says the military and intelligence community can defeat GANs, but it’s time-consuming and costly, requiring multiple, duplicate collections of satellite images and other pieces of corroborating evidence.
“For every collect, you have to have a duplicate collect of what occurred from
different sources,” he said.
“Otherwise, you’re trusting the one source.”
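The duplicate-collect check Myers describes amounts to corroboration: compare independent images of the same scene and flag anything that disagrees. A toy version, assuming two already co-registered collects and an illustrative disagreement threshold, might look like this:

```python
# Toy corroboration check: do two independent collects of one scene agree?
# Assumes the images are co-registered; thresholds are illustrative.
import numpy as np

def collects_agree(img_a: np.ndarray, img_b: np.ndarray,
                   max_disagreement: float = 0.05) -> bool:
    """True if two co-registered collects of the same scene roughly match."""
    if img_a.shape != img_b.shape:
        raise ValueError("collects must be co-registered to the same grid")
    # Fraction of pixels that differ meaningfully between the two sources.
    diff = np.abs(img_a.astype(float) - img_b.astype(float)) / 255.0
    return float((diff > 0.1).mean()) < max_disagreement

source_a = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # provider 1
source_b = source_a.copy()                                      # provider 2
source_b[10:30, 10:30] = 255         # a "bridge" present in only one source
print(collects_agree(source_a, source_b))    # False -> flag for human review
```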
The challenge is both a technical and a financial one.
“The biggest thing is the funding required to make sure
you can do what I just talked about,” he said.
On Thursday, U.S. officials confirmed that data integrity
is a rising concern. “It’s something we care about in terms of protecting our
data because if you can get to the data you can do the poisoning, the
corruption, the deceiving and the denials and all of those other things,” said
Lt. Gen. Jack Shanahan, who runs the Pentagon’s new Joint Artificial
Intelligence Center. “We have a strong program protection plan to protect the
data. If you get to the data, you can get to the model.”
But when it comes to protecting open-source data and
images, used by everybody from news organizations to citizens to human rights
groups to hedge funds to make decisions about what is real and what isn’t, the
question of how to protect it is frighteningly open. The gap between the
“truth” that the government can access and the “truth” that the public can
access may soon become unbridgeable, which would further erode the public
credibility of the national security community and the functioning of
democratic institutions.
Andrew Hallman, who heads the CIA’s Digital Directorate,
framed the question in terms of epic conflict. “We are in an existential battle
for truth in the digital domain,” Hallman said. “That’s, again, where the help
of the private sector is important and these data providers. Because that’s
frankly the digital conflict we’re in, in that battle space…This is one of my
highest priorities.”
When asked if he felt the CIA had a firm grasp of the
challenge of fake information in the open-source domain, Hallman said, “I think
we are starting to. We are just starting to understand the magnitude of the
problem.”