I create fake videos. Here’s why people believe even the obvious ones
People
will accept anything as true if it confirms their beliefs—regardless of whether
a video or image has obviously been manipulated.
BY CHRISTYE SISSON 09.14.19 9:00 AM
Lots of people—including Congress—are
worried about fake videos and imagery distorting the truth, purporting to show
people saying and doing things they never said or did.
I’m part of a larger U.S. government project that is
working on developing ways to detect images and videos that
have been manipulated. My team’s work, though, is to play the role of the bad
guy. We develop increasingly devious, and convincing, ways to generate fakes—in
hopes of giving other researchers a good challenge when they’re testing their
detection methods.
For the past three years, we’ve
been having a bit of fun dreaming up new ways to try to change the meaning of
images and video. We’ve created some scenarios ourselves, but we’ve also had
plenty of inspiration from current events and the circumstances of actual bad guys trying to twist
public opinion.
I’m proud of the work we’ve
done, and hope it will help people keep track of the truth in a media-flooded
world. But we’ve found that a key element of the battle between truth and
propaganda has nothing to do with technology. It has to do with how people are
much more likely to accept something if it confirms their beliefs.
FINDING, AND PUSHING, TECHNICAL BOUNDARIES
When we make our fakes, we
start by collecting original, undoctored images and videos. Those not only
offer raw material for us to manipulate but also include the data stored in authentic media files—sort of like a technical fingerprint that accompanies every piece of media, describing how and when it was taken and with what tools.
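For readers curious what that fingerprint looks like, here is a minimal sketch (my own illustration, not my team's actual tooling) of reading the embedded EXIF metadata from a photo with the Python Pillow library; the file name is just a placeholder.

    from PIL import Image, ExifTags

    # Hypothetical example file; any JPEG straight off a camera will do.
    path = "original_photo.jpg"

    img = Image.open(path)
    exif = img.getexif()  # metadata the camera embedded, if any

    # Translate numeric EXIF tag IDs into readable names and print them.
    for tag_id, value in exif.items():
        name = ExifTags.TAGS.get(tag_id, tag_id)
        print(f"{name}: {value}")

    # Typical fields include the camera make and model, the capture time and
    # the software used -- the kind of fingerprint a convincing fake also has
    # to get right.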
That information helps us craft
fakes that look and act as much as possible like real material, in both visual
evidence and digital artifacts. It’s an ever-changing challenge, as new cameras
go on the market and as researchers develop new techniques for digital forensic
analysis.
What we create is then sent to
other research partners in the larger effort, to see if they can tell what
we’ve done and how we’ve done it. Their job is not just to determine whether
it’s authentic or fake—but also, if possible, to explain how the fakes were
made. Then we compare the results to what we actually did, and everyone learns;
we learn how to make better fakes, and they learn to detect them.
BAD VIDEOS CAN BE PERSUASIVE, TOO
While my team and I were being
as exhaustive, technical and methodical as we could be, I couldn’t help but
notice the terrible quality of manipulated images and videos that were
spreading online and in the media. We prided ourselves on making our work as convincing as
possible, but what we were seeing—like fuzzy images and slowed audio of Nancy
Pelosi—wouldn’t come close to passing our standards.
As someone with a background in
the nuts and bolts of photographic technology, I was truly shocked that people
seemed to be persuaded by images and video that I could easily identify as
altered.
Seeking to understand what was
going on, I took very unscientific straw polls of family and friends. I learned
anecdotally what sociologists and social psychologists have shown in more
scholarly explorations: If the image or manipulation supports what someone
already believes, they often accept it unquestioningly.
Fake photos are common, purporting
to show an NFL player burning a U.S. flag in a locker room,
a Parkland student tearing up the Constitution,
a shark swimming down a highway and
much more. They are all terrible manipulations, technically speaking. But they
are sensational images and often have a specific political angle. That has
helped them gain tremendous traction on social media and, as a result, news coverage.
ADAPTING TO THE MODERN MEDIA DELUGE
There may be another reason
people believe what they see online. When I asked my teenage son why he thought people fell for these awful fakes while I was working so hard on the effort to detect better ones, his answer was straightforward: “You can’t trust anything
on the internet. Of course I wouldn’t think it’s real, because nothing is.”
I was surprised by his
response, and suppressed a motherly comment about cynicism when I realized he
has grown up digesting imagery at a pace unmatched in human history. Skepticism
is not only a healthy response to that level of inundation but likely key to surviving and navigating modern media.
For my generation and
generations before, particularly those of us who saw the transition from film
to digital photography, the trust in the image is there to be broken. For my
son and subsequent generations raised on media, the trust, it seems, was never
there in the first place.
When people talk about fake
imagery, they often leave out the basic concepts of media literacy. Fear and
panic grow as people imagine watching fake videos where someone says or does
something that never actually happened. That fear is founded on the
longstanding principle that seeing is believing. But it seems as though that
old axiom may not be true anymore, given how quick people are to believe phony
imagery. In fact, some research indicates fake news may be driven by
those more likely to accept weak or
sensational claims—who also, ironically, tend to be overconfident in
their own knowledge.
SKEPTICISM OVER TECHNOLOGICAL PROWESS
I do have faith that my group’s
work and that of our research collaborators will help detect technologically
advanced fakes. But I am also developing a growing faith, based on both my
son’s experience and the students I work with, that today’s young people, and
future generations, may just be better at consuming and responding to imagery
and video.
The skepticism they have been
raised on is a far more sophisticated type of media literacy than what many of
us are used to, and could even herald a cultural shift away from relying on
images or video as “proof.” They don’t believe it until they have proof that
it’s real, instead of the other way around.
In the meantime, while
researchers get better at detection and adults try to catch up with what the
kids already know, it’s best to be skeptical. Before reacting, find out where
an image came from and in what context. When you see someone share an awesome
or sensational or world-changing image or video on social media, take a moment
before sharing it yourself. Perform a reverse-image search to
identify where else that image has appeared. You might even stumble on a trusted
source reporting that it’s actually a fake.
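A reverse-image search engine does the heavy lifting for you, but the underlying idea is approachable: such tools often build on something like perceptual hashing, which reduces an image to a short signature that survives resizing and recompression. Here is a toy sketch (my own illustration, using the third-party Python imagehash library; the file names are placeholders) of comparing a shared image against a suspected original.

    from PIL import Image
    import imagehash

    # Hypothetical files: the image as shared online and a suspected original.
    shared = imagehash.phash(Image.open("shared_online.jpg"))
    original = imagehash.phash(Image.open("suspected_original.jpg"))

    # Hamming distance between the perceptual hashes: small values mean the
    # images are near-duplicates even after resizing or recompression.
    distance = shared - original
    print(f"Hash distance: {distance}")
    if distance <= 8:  # the cutoff of 8 is an arbitrary rule of thumb
        print("Likely the same underlying image, possibly re-edited.")
    else:
        print("Probably different images.")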
Christye Sisson is
an associate professor of photographic sciences at the Rochester Institute of Technology. She
has received funding from PAR Government Systems, a U.S. Department of Defense
contractor, for the creation of a dataset of high provenance images and video,
and the generation of manipulated media from this dataset.
This article is republished from The
Conversation under a Creative Commons license.