For Facebook, erasing hate speech proves a daunting challenge
By Tracy Jan and Elizabeth Dwoskin | July 31 at 6:02 PM
Francie Latour was picking out produce in a suburban
Boston grocery store when a white man leaned toward her two young sons and,
just loudly enough for the boys to hear, unleashed a profanity-laced racist
epithet.
Reeling, Latour, who is black, turned to Facebook to vent, in a post that was explicit about the hateful words hurled at her 8- and 12-year-olds on a Sunday evening in July.
“I couldn’t tolerate just sitting with it and being
silent,” Latour said in an interview. “I felt like I was going to jump out of
my skin, like my kids’ innocence was stolen in the blink of an eye.”
But within 20 minutes, Facebook deleted her post, sending
Latour a cursory message that her content had violated company standards. Only
two friends had gotten the chance to voice their disbelief and outrage.
Experiences like Latour’s exemplify the challenges
Facebook chief executive Mark Zuckerberg confronts as he tries to rebrand his
company as a safe space for community, expanding on its earlier goal of
connecting friends and family.
But in making decisions about the limits of free speech,
Facebook often fails the racial, religious and sexual minorities Zuckerberg
says he wants to protect.
The 13-year-old social network is wrestling with the
hardest questions it has ever faced as the de facto arbiter of speech for the
third of the world’s population that now logs on each month.
In February, amid mounting concerns over Facebook’s role
in the spread of violent live videos and fake news, Zuckerberg said the
platform had a responsibility to “mitigate the bad” effects of the service in a
more dangerous and divisive political era. In June, he officially changed Facebook’s
mission from connecting the world to community-building.
The company says it now deletes about 288,000 hate-speech
posts a month.
But activists say that Facebook’s censorship standards
are so unclear and biased that it is impossible to know what one can or cannot
say.
The result: Minority groups say they are
disproportionately censored when they use the social-media platform to call
out racism or start dialogues. In the case of Latour and her family, she was
simply repeating what the man who verbally assaulted her children said: “What
the f--- is up with those f---ing n----r heads?”
Compounding their pain, Facebook will often go from
censoring posts to locking users out of their accounts for 24 hours or more,
without explanation — a punishment known among activists as “Facebook jail.”
“In the era of mass incarceration, you come into this
digital space — this one space that seems safe — and then you get attacked by
the trolls and put in Facebook jail,” said Stacey Patton, a journalism
professor at Morgan State University, a historically black university in
Baltimore. “It totally contradicts Mr. Zuckerberg’s mission to create a public
square.”
In June, the company said that nearly 2 billion people
now log onto Facebook each month. With the company’s dramatic growth comes the
challenge of maintaining internally consistent standards as its content
moderators are faced with a growing number of judgment calls.
“Facebook is regulating more human speech than any
government does now or ever has,” said Susan Benesch, director of the Dangerous
Speech Project, a nonprofit group that researches the intersection of harmful
online content and free speech. “They are like a de facto body of law, yet that
law is a secret.”
The company recently admitted, in a blog post, that “too
often we get it wrong,” particularly in cases when people are using certain
terms to describe hateful experiences that happened to them. The company has
promised to hire 3,000 more content moderators before the year’s end, bringing
the total to 7,500, and is looking to improve the software it uses to flag hate
speech, a spokeswoman said.
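The article does not detail how Facebook’s flagging software works, but the failure mode it describes, in which a victim quoting a slur is removed along with the slur itself, is what context-blind keyword matching produces. A minimal sketch in Python, with placeholder tokens standing in for a real term list and a hypothetical flag_post function:

```python
# A minimal sketch of a context-blind keyword flagger. This is not
# Facebook's actual system, whose internals are not public; the
# blocked terms below are placeholder tokens, not real entries.
BLOCKED_TERMS = {"slur_a", "slur_b"}

def flag_post(text: str) -> bool:
    """Flag a post if it contains any blocked term, regardless of intent."""
    words = {w.strip('.,!?"\'').lower() for w in text.split()}
    return bool(words & BLOCKED_TERMS)

# An attack and a victim's report quoting that attack get the same outcome:
attack = "Go away, slur_a!"
report = 'A stranger called my sons "slur_a" at the store.'
assert flag_post(attack) and flag_post(report)
```

Under this kind of matching, Latour’s post repeating her attacker’s words is indistinguishable from the attack itself, which is consistent with the error Facebook acknowledged.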
“We know this is a problem,” said Facebook spokeswoman Ruchika
Budhraja, adding that the company has been meeting with community activists for
several years. “We’re working on evolving not just our policies but our tools.
We are listening.”
Two weeks after Donald Trump won the presidency, Zahra
Billoo, executive director of the Council on American-Islamic Relations’ office
for the San Francisco Bay area, posted to Facebook an image of a handwritten
letter mailed to a San Jose mosque and quoted from it: “He’s going to do to you
Muslims what Hitler did to the Jews.”
The post — made to four Facebook accounts — contained a
notation clarifying that the statement came from hate mail sent to the mosque,
as Facebook guidelines advise.
Facebook removed the post from two of the accounts —
Billoo’s personal page and the council’s local chapter page — but allowed
identical posts to remain on two others — the organization’s national page and
Billoo’s public one. The civil rights attorney was baffled. After she re-posted
the message on her personal page, it was again removed, and Billoo received a
notice saying she would be locked out of Facebook for 24 hours.
“How am I supposed to do my work of challenging hate if I
can’t even share information showing that hate?” she said.
Billoo eventually received an automated apology from
Facebook, and the post was restored to the local chapter page — but not her
personal one.
Being put in “Facebook jail” has become a regular
occurrence for Shannon Hall-Bulzone, a San Diego photographer. In June 2016,
Hall-Bulzone was shut out for three days after posting an angry screed when she
and her toddler were called lazy “brown people” as they walked to day care and
her sister was called a “lazy n----r” as she walked to work. Within hours,
Facebook removed the post.
Many activists who write about race say they break
Facebook rules and keep multiple accounts in order to play a cat-and-mouse game
with the company’s invisible censors, some of whom are third-party contractors
working on teams based in the United States or in Germany or the Philippines.
Others have started using alternate spellings for “white
people,” such as “wypipo,” “Y.P. Pull,” or “yt folkx” to evade being flagged by
the platform activists have nicknamed “Racebook.”
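Respellings like these work because exact string matching, the simplest automated approach, treats “wypipo” and “white people” as unrelated tokens. A minimal illustration (the mentions_phrase function is hypothetical):

```python
def mentions_phrase(text: str, phrase: str) -> bool:
    # Exact substring matching: the crudest detection strategy.
    return phrase.lower() in text.lower()

assert mentions_phrase("White people did this.", "white people")
assert not mentions_phrase("Wypipo did this.", "white people")  # evades the match
```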
In January, a coalition of more than 70 civil rights
groups wrote a letter urging Facebook to fix its “racially-biased” content
moderation system. The groups asked Facebook to enable an appeals process,
offer explanations for why posts are taken down, and publish data on the types
of posts that get taken down and restored. Facebook has not done these things.
The coalition has gathered 570,000 signatures urging Facebook to acknowledge that discriminatory censorship exists on its platform, that it harbors white supremacist pages even though it says it forbids hate speech in all forms, and that black and Muslim communities are especially endangered because the hate directed against them translates into violence in the streets, said Malkia Cyril, a Black Lives Matter activist in Oakland, Calif., who was part of a group that first met with Facebook about their concerns in 2014.
Cyril, executive director for the Center for Media
Justice, said the company has a double standard when it comes to deleting
posts. She has flagged numerous white supremacist pages to Facebook for removal
and said she was told that none was initially found to have violated the
company’s community standards even though they displayed offensive content. One
featured a picture of a skeleton with the caption, “Ever since Trayvon became
white, he’s been a good boy,” in reference to Trayvon Martin, the unarmed black
teenager killed by a volunteer neighborhood watchman in Florida in 2012.
Like most social media companies in Silicon Valley,
Facebook has long resisted being a gatekeeper for speech. For years, Zuckerberg
insisted that the social network had only minimal responsibilities for policing
content.
In its early years, Facebook’s internal guidelines for
moderating and censoring content amounted to only a single page. The
instructions included prohibitions on nudity and images of Hitler, according to
a trove of documents published by the investigative news outlet ProPublica.
(Holocaust denial was allowed.)
By 2015, the internal censorship manual had grown to
15,000 words, according to ProPublica.
In Facebook’s guidelines for moderators, obtained by ProPublica in June and confirmed by the social network, the rules protect broad classes of people but not subgroups. Posts criticizing white or black people would be prohibited, while posts attacking white or black children, or radicalized Muslim suspects, may be allowed to stay up because the company sees “children” and “radicalized Muslims” as subgroups.
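Reduced to logic, the rule ProPublica described is an exact-membership test: an attack is removable only when its target names a protected class verbatim, and any qualifier demotes the target to an unprotected subgroup. A sketch of that reported logic, with an assumed class list and function name:

```python
# Illustrative only: the class list and function are assumptions based
# on ProPublica's description of the rules, not Facebook's actual code.
PROTECTED_CLASSES = {"white people", "black people", "muslims"}

def target_is_protected(target: str) -> bool:
    """Protection applies only to a broad class named exactly; adding a
    qualifier ('children', 'radicalized') creates an unprotected subgroup."""
    return target.lower() in PROTECTED_CLASSES

assert target_is_protected("white people")             # attack removed
assert not target_is_protected("white children")       # may stay up
assert not target_is_protected("radicalized Muslims")  # may stay up
```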
Facebook says it prohibits direct attacks on protected
characteristics, defined in U.S. law as race, ethnicity, national origin,
religious affiliation, sexual orientation, sex, gender, gender identity,
serious disability or disease.
But the guidelines have never been publicly released, and
as recently as last summer Zuckerberg continued to insist Facebook was “a tech
company, not a media company.”
Unlike media companies, technology platforms that host speech are not legally responsible for the content that appears on them.
The chief executive has shifted his stance this year. At
the company’s “Communities Summit,” a first-ever live gathering for members of
Facebook groups held in Chicago in June, Zuckerberg changed the mission
statement.
Earlier in the year, he had said the company would become, over the next decade, a “social infrastructure” for “keeping us safe, for informing us, for civic engagement, and for inclusion of all.”
The company acknowledged that minorities feel
disproportionately targeted but said it could not verify those claims because
it does not categorize the types of hate speech that appear or tally which
groups are targeted.
In June, for example, Facebook removed a video posted by
Ybia Anderson, a black woman in Toronto who was outraged by the prominent
display of a car decorated with the Confederate flag at a community festival.
The social network did not remove dozens of other posts in which Anderson was
attacked with racial slurs.
Benesch, who herself has tried to build a software tool
to flag hate speech, said she sympathizes with Facebook’s predicament. “It is
authentically difficult to make consistent decisions because of the huge
variety of content out there,” she said. “That doesn’t, however, excuse the
fact they sometimes make some very stupid decisions.”
As for Latour, the Boston mother was surprised when
Facebook restored her post about the hateful words spewed at her sons, less
than 24 hours after it disappeared. The company sent her an automated notice
that a member of its team had removed her post in error. There was no further
explanation.
The initial censoring of Latour’s experience “felt almost
exactly like what happened to my sons writ large,” she said. The man had unleashed
the racial slur so quietly that for everyone else in the store, the verbal
attack never happened. But it had terrified her boys, who froze, unable to
immediately respond or tell their mother.
“They were left with all that ugliness and hate,” she said,
“and when I tried to share it so that people could see it for what it is, I was
shut down.”