U.S. Unleashes Military to Fight Fake News, Disinformation
Pete Norman, Bloomberg • August 31, 2019
(Bloomberg) -- Fake news and social media posts are such a threat to U.S. security that the Defense Department is launching a project to repel “large-scale, automated disinformation attacks,” as the top Republican in Congress blocks efforts to protect the integrity of elections.
The Defense Advanced Research Projects Agency wants custom software that can unearth fakes hidden among more than 500,000 stories, photos, video and audio clips. If successful, the system may, after four years of trials, expand to detect malicious intent and prevent viral fake news from polarizing society.
“A decade ago, today’s state-of-the-art would have registered as sci-fi — that’s how fast the improvements have come,” said Andrew Grotto at the Center for International Security at Stanford University. “There is no reason to think the pace of innovation will slow any time soon.”
U.S. officials have been working on plans to prevent outside hackers from flooding social channels with false information ahead of the 2020 election. The drive has been hindered by Senate Majority Leader Mitch McConnell’s refusal to consider election-security legislation. Critics have labeled him #MoscowMitch, saying he left the U.S. vulnerable to meddling by Russia, prompting his retort of “modern-day McCarthyism.”
Risk Factor
President Donald Trump has repeatedly rejected allegations that dubious content on platforms like Facebook, Twitter and Google aided his election win. Hillary Clinton supporters claimed a flood of fake items may have helped sway the results in 2016.
“The risk factor is social media being abused and used to influence the elections,” Syracuse University assistant professor of communications Jennifer Grygiel said in a telephone interview. “It’s really interesting that Darpa is trying to create these detection systems but good luck is what I say. It won’t be anywhere near perfect until there is legislative oversight. There’s a huge gap and that’s a concern.”
False news stories and so-called deepfakes are increasingly sophisticated, making them harder for data-driven software to spot. AI imagery has advanced in recent years and is now used by Hollywood, the fashion industry and facial recognition systems. Researchers have shown that the generative adversarial networks -- or GANs -- behind much of this imagery can be used to create fake videos.
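To make the adversarial dynamic concrete, here is a minimal toy GAN training loop in Python (PyTorch). Everything in it (the dimensions, the random stand-in data, the hyperparameters) is an illustrative assumption; it sketches the general technique, not any system built by DARPA or the researchers cited.

```python
# Toy GAN sketch: a generator learns to produce samples that a
# discriminator cannot tell apart from "real" data. All shapes and
# hyperparameters are arbitrary, for illustration only.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32  # assumed toy sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(200):
    real = torch.randn(batch, data_dim)  # stand-in for real media features
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: push real toward label 1, generated toward 0.
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call fakes real.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```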
Famously, Oscar-winning filmmaker Jordan Peele created a fake video of former President Barack Obama talking about the Black Panthers, Ben Carson, and making an alleged slur against Trump, to highlight the risk of trusting material online.
After the 2016 election, Facebook Chief Executive Officer Mark Zuckerberg played down fake news as a challenge for the world’s biggest social media platform. He later signaled that he took the problem seriously and would let users flag content and enable fact-checkers to label stories in dispute. These judgments subsequently prevented stories from being turned into paid advertisements, which were one key avenue toward viral promotion.
In June, Zuckerberg said Facebook made an “execution mistake” when it didn’t act fast enough to identify a doctored video of House Speaker Nancy Pelosi in which her speech was slurred and distorted.
“Where things get especially scary is the prospect of malicious actors combining different forms of fake content into a seamless platform,” Grotto said. “Researchers can already produce convincing fake videos, generate persuasively realistic text, and deploy chatbots to interact with people. Imagine the potential persuasive impact on vulnerable people that integrating these technologies could have: an interactive deepfake of an influential person engaged in AI-directed propaganda on a bot-to-person basis.”
By increasing the number of algorithmic checks, the military research agency hopes it can spot fake news with malicious intent before it goes viral.
“A comprehensive suite of semantic inconsistency detectors would dramatically increase the burden on media falsifiers, requiring the creators of falsified media to get every semantic detail correct, while defenders only need to find one, or a very few, inconsistencies,” the agency said in its Aug. 23 concept document for the Semantic Forensics program.
Semantic Errors
The agency added: “These SemaFor technologies will help identify, deter, and understand adversary disinformation campaigns.”
Current surveillance systems are prone to “semantic errors.” An example, according to the agency, is software not noticing mismatched earrings in a fake video or photo. Other indicators, which may be noticed by humans but missed by machines, include weird teeth, messy hair and unusual backgrounds.
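The asymmetry described in the agency’s concept document can be sketched in a few lines of Python: each semantic check is one more detail a falsifier must get right, while the defender flags an item the moment any single check fires. The detectors below are hypothetical stand-ins named after the agency’s own examples, not real SemaFor components.

```python
# Hypothetical sketch of a "suite of semantic inconsistency detectors":
# an item is flagged if ANY detector finds a problem, so a falsifier
# must pass every check while a defender needs only one hit.
from typing import Callable

Detector = Callable[[dict], bool]  # True means an inconsistency was found

def mismatched_earrings(media: dict) -> bool:
    return media.get("left_earring") != media.get("right_earring")

def implausible_background(media: dict) -> bool:
    return media.get("background_plausible") is False

DETECTORS: list[Detector] = [mismatched_earrings, implausible_background]

def flag_as_suspect(media: dict) -> bool:
    # Every detector added here raises the falsifier's burden.
    return any(check(media) for check in DETECTORS)

print(flag_as_suspect({"left_earring": "hoop", "right_earring": "stud"}))  # True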
The algorithm testing process will include an ability to scan and evaluate 250,000 news articles and 250,000 social media posts, with 5,000 fake items in the mix. The program has three phases over 48 months, initially covering news and social media, before an analysis begins of technical propaganda. The project will also include week-long “hackathons.”
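Those numbers imply a base rate of 1% (5,000 fakes among 500,000 items), which is itself a hurdle: even a detector with strong hit and false-alarm rates will flag roughly as many genuine items as fakes. A back-of-the-envelope sketch in Python, with the detector’s rates assumed purely for illustration:

```python
# Base-rate arithmetic for the test set described in the article:
# 250,000 articles + 250,000 posts, seeded with 5,000 fakes (1%).
total_items = 250_000 + 250_000
fakes = 5_000
genuine = total_items - fakes

true_positive_rate = 0.95   # assumed: catches 95% of fakes
false_positive_rate = 0.01  # assumed: wrongly flags 1% of genuine items

true_positives = fakes * true_positive_rate       # 4,750 fakes caught
false_positives = genuine * false_positive_rate   # 4,950 genuine items flagged

precision = true_positives / (true_positives + false_positives)
print(f"Share of flagged items that are actually fake: {precision:.1%}")  # ~49.0%
```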
Program manager Matt Turek discussed the program on Thursday in Arlington, Virginia, with potential software designers. Darpa didn’t provide an on-the-record comment.
Tech Gap
The agency also has an existing research program underway, called MediFor, which is trying to plug a technological gap in image authentication, as no end-to-end system can verify manipulation of images taken by digital cameras and smartphones.
“Mirroring this rise in digital imagery is the associated ability for even relatively unskilled users to manipulate and distort the message of the visual media,” according to the agency’s website. “While many manipulations are benign, performed for fun or for artistic value, others are for adversarial purposes, such as propaganda or misinformation campaigns.”
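One classic, publicly documented cue for this kind of image authentication is error level analysis, which exploits the fact that regions edited after a JPEG’s original compression often re-compress differently. The sketch below (Python, using the Pillow library) is a generic textbook technique offered for illustration; it is not a claim about MediFor’s actual methods, which the article does not detail.

```python
# Error level analysis (ELA), a generic image-forensics heuristic:
# re-save the image at a known JPEG quality and diff it against the
# original; edited regions often show a different error level.
import io
from PIL import Image, ImageChops

def error_level(image_path: str, quality: int = 90) -> Image.Image:
    original = Image.open(image_path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer).convert("RGB")
    return ImageChops.difference(original, resaved)

# Usage (hypothetical file names):
# error_level("photo.jpg").save("ela.png")  # brighter regions = possible edits
```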
With a four-year project scale for SemaFor, the next election will have come and gone before the system is operational.
“This timeline is too slow and I wonder if it is a bit of PR,” Grygiel said. “Educating the public on media literacy, along with legislation, is what is important. But elected officials lack motivation themselves for change, and there is a conflict of interest as they are using these very platforms to get elected.”
©2019 Bloomberg L.P.