Facebook, Airbnb Go on Offense Against Nazis After Violence
Charlottesville attack a ‘moment of reckoning,’ SPLC says
Companies in past have avoided becoming arbiters of morality
By Sarah Frier, Jeff Green, and Olivia Zaleski
August 17, 2017, 2:00 AM PDT; updated August 17, 2017, 8:42 AM PDT
When white supremacists plan rallies like the one a few
days ago in Charlottesville, Virginia, they often organize their events on
Facebook, pay for supplies with PayPal, book their lodging with Airbnb and ride
with Uber. Technology companies, for their part, have been taking pains to
distance themselves from these customers.
But sometimes it takes more than automated systems or
complaints from other users to identify and block those who promote hate speech
or violence, so companies are finding novel ways to spot and shut down content
they deem inappropriate or dangerous. People don’t tend to share their views on
their Airbnb Inc. accounts, for example. But after matching user names to posts
on social-media profiles, the company canceled dozens of reservations made by
self-identified Nazis who were using its app to find rooms in Charlottesville,
where they were heading to protest the removal of a Confederate statue.
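In outline, that kind of cross-referencing is simple name matching. The Python sketch below shows the general idea only; the normalize() rule and all of the data in it are invented for illustration, since Airbnb has not described how its matching actually works.

    # Illustrative only: a toy version of matching guest names on bookings
    # against handles found on public rally-organizing posts. The data and
    # the normalize() rule are invented for this sketch.
    def normalize(name: str) -> str:
        """Fold case and drop punctuation so 'J. Smith' can match 'jsmith'."""
        return "".join(ch for ch in name.lower() if ch.isalnum())

    # Hypothetical inputs.
    reservations = {"r1": "J. Smith", "r2": "A. Jones"}
    flagged_handles = {"jsmith", "some_flagged_handle"}

    to_cancel = [rid for rid, guest in reservations.items()
                 if normalize(guest) in flagged_handles]
    print(to_cancel)  # prints ['r1']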
At Facebook Inc., which relies on community feedback to
flag hateful content for removal, the social network’s private groups meant for
like-minded people can be havens for extremists, falling through gaps in the
content-moderation system. The company is working quickly to improve its
machine-learning systems so they can automatically flag posts for review by
human moderators.
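In shape, such a pipeline is a classifier feeding a review queue: a model scores each post, and anything above a threshold is routed to a person rather than removed automatically. The Python sketch below shows that routing logic only; score_hate_speech(), REVIEW_THRESHOLD and review_queue are stand-ins, not Facebook's actual system.

    # Illustrative only: the routing logic of an automated-flagging pipeline.
    # A real deployment would plug in a trained classifier; this stub always
    # returns 0.0 so the control flow is runnable as written.
    from dataclasses import dataclass
    from queue import Queue

    REVIEW_THRESHOLD = 0.8  # assumed cutoff, not a published figure

    @dataclass
    class Post:
        post_id: str
        text: str

    def score_hate_speech(post: Post) -> float:
        """Stand-in for a trained model returning P(hate speech)."""
        return 0.0

    review_queue: Queue = Queue()  # posts awaiting a human decision

    def triage(post: Post) -> None:
        # The model only flags; a human moderator makes the final call.
        if score_hate_speech(post) >= REVIEW_THRESHOLD:
            review_queue.put(post)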
These more aggressive actions mark a shift in how
companies view their responsibilities. Virtually all these services have long
maintained rules on how users should behave, but in the past they’d mostly
enforce these policies in response to bad behavior. After the violence in
Charlottesville, which resulted in the death of a counter-protester, their
approach has become more proactive, in anticipation of future events. While
social-media companies have been grappling for years with how to rid their
sites of hateful speech and images, the events of the last several days served
as a stark reminder of just how real, present and local the threat posed by
white supremacists can be.
Ride-hailing app Uber Technologies Inc. told drivers they
don’t have to pick up racists; PayPal Inc. said it has the ability to cancel
relationships with sites that promote racial intolerance. Even Discover
Financial Services, the credit card company, said this week that it was ending
its agreements with hate groups. Apple Inc. has also moved to block hate sites
from using Apple Pay, and Chief Executive Officer Tim Cook said the company
will donate $1 million each to the Southern Poverty Law Center and the
Anti-Defamation League, which track hate groups. Facebook shut down eight group
pages that it said violated hate-speech policies, including “Right Wing
Death Squad” and “White Nationalists United.”
“It’s one thing to say, we do not allow hate groups --
it’s another thing to actually go and hunt down the groups, make those
decisions, and kick those people off,” said Gerald Kane, a professor of
information systems at the Boston College Carroll School of Management. “It’s
something most of these companies have avoided intentionally and fervently over
the past 10 years.”
Companies historically have steered clear of trying to
determine what is good and what is evil, Kane said. But given the increasingly
heated public debate in the U.S., they may feel they need to act, he said.
There’s some precedent. Globally, tech firms have been
criticized by governments for their role in the spread of Islamic State
ideology, particularly on Facebook and Twitter Inc. Both of the social-media
companies have stepped up their efforts to remove extremist content, deleting
hundreds of thousands of accounts, as well as group pages on Facebook.
“People have wondered, why are they so focused on Islamic
extremism, and not white nationalism or white supremacy in their own
backyard?" said Emma Llanso, director of the Center for Democracy &
Technology’s Free Expression Project. “Now extremists in the United States are
getting swept up in the same policies.”
Tech companies have no legal obligation in the U.S. to
respond to calls to censor racist content online. Under the Communications
Decency Act of 1996, intermediaries are immunized from most litigation that
claims material on their pages is unlawful.
That doesn’t mean these companies aren’t feeling the
pressure from advertisers and users who fear that pages belonging to alt-right
publications like the Daily Stormer could incite violence, said Daphne Keller,
Director of Intermediary Liability at Stanford Law School’s Center for Internet
and Society. GoDaddy and then Google revoked the Daily Stormer’s domain
registration this week, and Twitter suspended several associated accounts.
Technology companies are likely to be evaluating their options in consultation
with organizations including the Anti-Defamation League before shaping their
policy, Keller said.
“What’s pushing them is probably a mix of people being
revolted by the content, plus the public and advertising pressure,” said
Keller, who is also a former associate general counsel at Google. “Everything
they’re doing is because they want to, or because of public pressure. But not
because of the law.”
In March, Google agreed to give marketers more
control over their online ads after a flurry of brands halted spending in the
U.K. amid concerns about offensive content. The company also agreed to expand
its definition of hate speech under its advertising policy to include
vulnerable racial and socioeconomic groups. The policies marked a sharp turn
for Alphabet Inc.’s Google, which had hewed to its position as a neutral
content host.
Google, Twitter and Facebook also continue to face
increasing pressure to amend their user terms to comply with European Union
law pertaining to illegal content on their websites.
Facebook hired thousands more human moderators this year
to try to help it tackle violent content, hate speech and extremism on its
platform. Meanwhile, Chief Executive Officer Mark Zuckerberg has in the past touted Facebook’s
product for groups as a key to improving empathy around the world. But when
groups are used to silence others or threaten violence, Facebook will remove
them, he said Wednesday.
“With the potential for more rallies, we’re watching the
situation closely and will take down threats of physical harm,” Zuckerberg
wrote on his Facebook page. “We won’t always be perfect, but you have my
commitment that we’ll keep working to make Facebook a place where everyone can
feel safe.”
A Facebook page remains active for one upcoming rally
that has raised concerns among local officials about potential violence -- set
to be hosted by Patriot Prayer at Crissy Field in San Francisco on Aug. 26.
Facebook said it was aware of the event, but hasn’t yet found a reason to take
it down. The company has to weigh public pressure against its own assessment
of the real-world threat.
Because all the decisions are subjective, it’s going to
be important for technology companies to make it clear what standards they’re
applying when they’re reacting to public outrage, Llanso said.
“When does extra scrutiny kick in, if there are other
standards, or if it’s a special case?” she said. “They have a lot of leeway,
but they still have a responsibility to their user base to explain, what are
the terms, when is the company going to weigh in with a values-based judgment?”
Cloudflare Inc., a web-security company that has
protected the networks of several neo-Nazi sites, including the Daily Stormer,
faced criticism in May from ProPublica for doing so, and has been one of the
“worst offenders when it comes to protecting white-supremacist propaganda,”
said Heidi Beirich, who monitors hate groups for the Southern Poverty Law
Center. The company has defended itself by saying service providers shouldn’t
be censoring content on the internet. But on Wednesday, Cloudflare decided to
end its business with the Daily Stormer, saying it could no longer remain
neutral because the neo-Nazi website was claiming the company secretly
supported its ideology.
“Maybe even they are waking up to this problem,” Beirich
said. “Maybe this is a moment of reckoning and change -- and it sure seems
serious right now.”
Still, Cloudflare CEO Matthew Prince warned that even as
he chose to sever ties with the Daily Stormer, the move could set a dangerous
precedent.
"After today, make no mistake, it will be a little
bit harder for us to argue against a government somewhere pressuring us into
taking down a site they don’t like," Prince wrote.
— With assistance by Kartikay Mehrotra