Jennifer Buscemi is the deepfake that should seriously frighten you
Mikael Thalen · Jan 29 at 11:48PM
Creating high-quality fake videos is becoming much easier.
While the nation grapples with concerns over the spread of inaccurate and deceptive information online, deepfakes, videos in which one person's face is superimposed onto another's, continue to advance at a quickening pace.
Now, the new ones are downright terrifying.
I've gone down a black hole of the latest DeepFakes and
this mashup of Steve Buscemi and Jennifer Lawrence is a sight to behold
pic.twitter.com/sWnU8SmAcz
— Mikael Thalen (@MikaelThalen) January 29, 2019
What are deepfakes?
The technology, which relies on machine learning and
artificial intelligence, was once largely relegated to researchers at
prestigious universities. But over the past few years, a growing online
community has democratized the practice, bringing powerful and easy-to-use tools
to the masses.
One of the public's first introductions to deepfakes came in late 2017, when a Reddit group devoted to superimposing the faces of prominent female actresses onto those of porn performers gained attention.
As reported by Motherboard’s Samantha Cole, members of
the now-banned subreddit explained how they would first gather stock photos and
videos of celebrities such as Hollywood star Scarlett Johansson. That media
content would then be fed into specialized, open-source tools and combined with
graphic adult content.
The quality of a deepfake depends on several factors but relies heavily on practice, time, and the source material it is derived from. Initially, deepfakes were more shocking than convincing, but readily available programs and tutorials continue to lower the bar for new creators.
One such video, posted by Reddit user VillainGuy earlier
this month, has highlighted how far the technology has come. That video—which
combines actor Steve Buscemi with actress Jennifer Lawrence at the 2016 Golden
Globe awards—is turning heads. Not because anyone believes it is real, but
because of the video’s implications.
Utilizing a free, open-source tool known as “faceswap,” VillainGuy trained the AI on high-quality media content of Buscemi. With the aid of a high-end graphics card and processor, “Jennifer Lawrence-Buscemi” was born. VillainGuy says hours of additional coding also helped achieve the level of detail.
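For readers curious how such tools work under the hood: faceswap and similar open-source projects are built around a shared encoder paired with one decoder per identity. The encoder learns facial features common to both people; each decoder learns to reconstruct one specific face. The swap happens when footage of person A is pushed through person B's decoder. The sketch below is a minimal, illustrative PyTorch version of that idea, not VillainGuy's actual pipeline; the network sizes, 64x64 face crops, and single loss step are assumptions chosen for brevity.

```python
# Minimal sketch (assumptions: layer sizes, 64x64 aligned face crops,
# random tensors standing in for real data) of the shared-encoder /
# two-decoder design behind open-source face-swap tools.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # shared latent face code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training objective: reconstruct each person's faces through the
# shared encoder (real pipelines run many such steps over face crops
# extracted from source video).
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for person A's crops
faces_b = torch.rand(8, 3, 64, 64)  # stand-in for person B's crops
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) \
     + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)

# The "swap": encode person A's face, decode it as person B.
swapped = decoder_b(encoder(faces_a))
```

Because both decoders read from the same latent code, the network effectively learns to redraw any expression and pose in the other person's likeness, which is why more source footage and longer training yield more convincing results.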
The video’s viral spread online Tuesday comes as numerous
U.S. lawmakers sound the alarm over the potential of deepfakes to disrupt the
2020 election. A report from CNN indicates that the Department of Defense has
begun commissioning researchers to find ways to detect when a video has been
altered.
Late last year, Rep. Adam Schiff (D-Calif.) and other members of the House of Representatives wrote a letter to Director of National Intelligence Dan Coats to raise concerns over the possible use of the technology by foreign adversaries.
“As deep fake technology becomes more advanced and more
accessible, it could pose a threat to United States public discourse and
national security, with broad and concerning implications for offensive active
measures campaigns targeting the United States,” the letter stated.
Researchers have already developed some methods for
detecting deepfakes. One technique, which is said to have a 95 percent success
rate in catching altered videos, relies on analyzing how often an individual in
a video blinks.
“Healthy adult humans blink somewhere between every 2 and 10 seconds, and a single blink takes between one-tenth and four-tenths of a second,” Siwei Lyu, associate professor of computer science at the University at Albany, wrote in Fast Company last year. “That’s what would be normal to see in a video of a person talking. But it’s not what happens in many deepfake videos.”
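As a rough illustration of the heuristic Lyu describes (an assumption-laden sketch, not his published detector), one can track a per-frame eye aspect ratio (EAR) derived from facial landmarks, count dips below a closed-eye threshold, and flag clips whose blink rate falls outside the normal range the article cites:

```python
# Hedged sketch of blink-rate analysis. Assumes an EAR series has
# already been computed per frame from eye landmarks (e.g. via dlib or
# mediapipe); the 0.2 threshold and 2-frame minimum are illustrative.
import numpy as np

def count_blinks(ear, closed_thresh=0.2, min_frames=2):
    """Count dips of EAR below closed_thresh lasting >= min_frames frames."""
    closed = np.asarray(ear) < closed_thresh
    blinks, run = 0, 0
    for c in closed:
        run = run + 1 if c else 0
        if run == min_frames:  # count each closure once, when it qualifies
            blinks += 1
    return blinks

def looks_suspicious(ear, fps=30.0):
    """Flag clips outside the normal one-blink-per-2-to-10-seconds range."""
    rate = count_blinks(ear) / (len(ear) / fps)  # blinks per second
    return not (1 / 10 <= rate <= 1 / 2)

# Usage: two simulated 10-second clips at 30 fps.
normal = np.full(300, 0.3)
normal[100:106] = 0.1          # one ~0.2 s blink: a plausible human rate
frozen = np.full(300, 0.3)     # eyes never close, as in many deepfakes
print(looks_suspicious(normal), looks_suspicious(frozen))  # -> False True
```

As the next paragraph notes, heuristics like this are temporary: once a tell is known, generators can be trained to avoid it.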
Deepfakes, unfortunately, will only become harder to
catch as time progresses. Lyu notes that the race between those generating and
those detecting fake videos will only intensify in the coming years.
While lawmakers have focused heavily on the potential
national security ramifications of deepfakes, some experts remain skeptical.
Thomas Rid, professor of strategic studies at Johns Hopkins University’s School
of Advanced International Studies, remarked on Twitter this month that fake
news and conspiracy theories already thrive based on far less than altered
videos. Rid, an expert on the history of disinformation, argues, however, that
deepfakes could lead some to deny legitimate information based entirely on the
fact that such technology exists.
“The most concerning aspect is, *possibly*, ‘deep
denials,’ the ability to dispute previously uncontested evidence, even when the
denial flies in the face of forensic artifacts,” Rid wrote.
Although fears concerning deepfakes and subversion from
malicious foreign actors draw attention in the nation’s capital, fake videos
could potentially cause much more damage to individuals. Granted, a fake video
of a politician engaged in some sort of devious behavior could spread rapidly
online before being debunked. But if a similar altered video is used to
blackmail a vulnerable person, it’s likely no credible fact-checkers will be
there to put out the fire.
The practice of targeting ordinary women with fabricated
videos has already begun. In one such example, a woman in her 40s told the
Washington Post that just last year someone had used photos from her social
media accounts to create and spread a fake sexual video of her online.
“I feel violated—this icky kind of violation,” the woman
said. “It’s this weird feeling, like you want to tear everything off the
internet. But you know you can’t.”
And those unwilling to take the time to learn how to
develop their own deepfakes can simply pay to have it done for them. A
now-banned community on Reddit known as “r/deepfakeservice” was found to be
selling such content in early 2018 to anyone willing to provide at least two
minutes of source video.
Obviously, no one thinks Steve Buscemi and Jennifer
Lawrence morphed together at the Golden Globes. But videos based on more
believable premises with even higher quality are coming, and the damage they do
will depend on how we react.