Facebook, Google tell Congress they're fighting extremist content with counterpropaganda
Executives from the three largest social media companies
shared their latest methods of fighting extremism during testimony before the
Senate Commerce Committee.
By John Shinal | January 17, 2018 | CNBC.com
Facebook, Google and Twitter told Congress Wednesday that
they've gone beyond screening and removing extremist content and are creating
more anti-terror propaganda to pre-empt violent messages at the source.
Representatives from the three companies told the Senate
Committee on Commerce, Science and Transportation that they are, among other
things, targeting people likely to be swayed by extremist messages and pushing
content aimed at countering that message. Several senators criticized their
past efforts as not going far enough.
"We believe that a key part of combating extremism
is preventing recruitment by disrupting the underlying ideologies that drive
people to commit acts of violence. That's why we support a variety of
counterspeech efforts," said Monika Bickert, Facebook's head of global
policy management, according to an advance copy of her testimony obtained by
CNBC.
Bickert said that in addition to using image matching and
language analysis to identify terror content before it's posted, the company is
ramping up what it calls "counterspeech."
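Image matching of this kind typically works by comparing a perceptual hash of each new upload against a database of fingerprints from previously removed material. The short Python sketch below illustrates the idea using the open-source Pillow and imagehash libraries; the hash set, the distance threshold, and the function are hypothetical stand-ins, since Facebook has not published its implementation.

    # Minimal sketch of perceptual-hash image matching. The known-hash
    # set and distance threshold are hypothetical; a production system
    # would load millions of fingerprints from a database.
    from PIL import Image
    import imagehash

    KNOWN_HASHES = {imagehash.hex_to_hash("ffd7918181c9ffff")}
    MAX_DISTANCE = 5  # assumed Hamming-distance threshold for a match

    def matches_known_content(image_path: str) -> bool:
        """Return True if an upload is perceptually close to a known image."""
        upload_hash = imagehash.phash(Image.open(image_path))
        return any(upload_hash - known <= MAX_DISTANCE
                   for known in KNOWN_HASHES)

Unlike a cryptographic hash, a perceptual hash changes only slightly when an image is resized or re-encoded, so near-duplicates of removed content still fall within the match threshold.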
Facebook is also working with universities,
nongovernmental organizations and community groups around the world "to
empower positive and moderate voices," Bickert said.
Google's YouTube, meanwhile, says it will continue to use what it calls the "Redirect Method," developed by Google's Jigsaw research group, to send anti-terror messages, through what is essentially targeted advertising, to people likely to seek out extremist content. If YouTube determines that a person may be headed toward extremism based on their search history, it will serve them ads that subtly contradict the propaganda they might see from ISIS or similar groups. YouTube also supports "Creators for Change," a group of creators who use their channels to counteract hate.
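Jigsaw has described the Redirect Method publicly only in outline: curated keyword lists identify searches that signal interest in extremist material, and matching queries trigger counter-narrative ads instead of ordinary results. The Python sketch below captures that keyword-trigger logic in miniature; the keywords, playlist names, and function are hypothetical illustrations, not Jigsaw's actual system.

    # Illustrative keyword-trigger logic for redirect-style advertising.
    # All terms and playlist names are hypothetical placeholders; the
    # real curated lists are not public.
    RISK_KEYWORDS = {"join the caliphate", "martyrdom operations"}
    COUNTER_CONTENT = ["defector-testimonies", "cleric-rebuttals"]

    def counter_ads_for(query: str) -> list[str]:
        """Serve counter-narrative playlists when a search query signals
        interest in extremist material; otherwise serve nothing special."""
        normalized = query.lower()
        if any(term in normalized for term in RISK_KEYWORDS):
            return COUNTER_CONTENT
        return []

    print(counter_ads_for("How to join the caliphate"))  # both playlists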
"We believe that a key part of combating extremism
is preventing recruitment by disrupting the underlying ideologies that drive
people to commit acts of violence."
-Monika Bickert, Facebook's head of global policy
management
The video site is also adapting how it deals with videos
that are offensive but don't technically violate its community guidelines,
putting this so-called borderline content behind interstitials and removing comments,
according to the testimony of Juniper Downs, YouTube's head of public policy.
Downs said that over the past year YouTube's algorithms,
in concert with human reviewers, have been able to remove hateful content
faster than before.
"Our advances in machine learning let us now take
down nearly 70% of violent extremism content within 8 hours of upload and
nearly half of it in 2 hours," Downs said.
Twitter's Carlos Monje Jr., director of public policy and philanthropy in the U.S. and Canada, said the company has participated in more than 100 training events since 2015 on countering extremist content.
Those training sessions included events in Beirut,
Bosnia, Belfast and Brussels and summits at the White House, the United Nations
and in London and Sydney, Monje said in his prepared testimony.
Tech companies under fire from lawmakers
The U.S.-based tech giants have come under fire in the
U.S. and Europe for allowing their websites to be used by Islamic terrorists
and other extremists for recruiting and propaganda.
The German government passed a law last year that fines
internet companies for allowing hate speech to remain on a site for more than
24 hours. Leaders in France and the U.K., which have suffered a series of
terrorist attacks, have threatened similar action.
Now the companies are feeling the heat in Washington
after revelations that extremists are using their services to recruit and
target Americans.
A November report from New York University's Stern Center
for Business and Human Rights estimated the Islamic terrorist group ISIS
generated 200,000 social media messages every day.
An investigation by CNBC, meanwhile, found dozens of
accounts on Facebook and Google Plus being used by terrorists to promote their
message. Some of those accounts had been taken over by hackers first.
All three firms said last year they were adding more
workers to screen content and boosting investment in software that uses
artificial intelligence to find and remove violent posts and videos.
They also created an industry-wide group, the Global Internet Forum to Counter Terrorism, to share data on extremist groups. The forum said in December that its database held 40,000 images and videos, which member companies were using to screen content on their sites.
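A hedged sketch of how such a shared database could work is below, again in Python: one member contributes a fingerprint of content it has removed, and another screens new uploads against the pooled set. The SHA-256 fingerprinting and the function names are assumptions for illustration; the forum has not published its internals.

    # Sketch of a cross-company shared hash database: members contribute
    # fingerprints of removed content and screen uploads against the pool.
    # SHA-256 and these function names are assumptions, not the forum's API.
    import hashlib

    SHARED_DATABASE: set[str] = set()  # pooled across member companies

    def contribute(removed_content: bytes) -> None:
        """One member adds the fingerprint of content it has removed."""
        SHARED_DATABASE.add(hashlib.sha256(removed_content).hexdigest())

    def should_review(upload: bytes) -> bool:
        """Another member screens a new upload against the pooled set."""
        return hashlib.sha256(upload).hexdigest() in SHARED_DATABASE

Exact cryptographic hashes like these catch only byte-identical copies; a production system would more plausibly pool perceptual hashes, like the ones sketched above, so altered re-uploads still match.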
"Social media companies realize the damage of these
bad actors far too late," says Clint Watts, the Robert A. Fox Fellow at
the Foreign Policy Research Institute, who is also scheduled to testify at
Wednesday's hearing.