Facebook Disabled 1 Billion Fake Accounts in the Last Year
Social network also removed millions of posts that violate its community standards, according to new report
Last Updated: May 15, 2018 @ 10:30 AM
Facebook continued to give the public a peek behind the curtain on Tuesday, releasing a major report announcing that the Silicon Valley company removed more than one billion fake accounts. Facebook also said it purged millions of posts that violated its rules in the last year.
The first-ever “Community Standards Enforcement Report,” a robust 81 pages, details the company’s efforts to weed out unsavory content, including violence and terrorist propaganda. The report covers the fourth quarter of 2017 and the first quarter of 2018.
Here’s a snapshot of the six areas Facebook cracked down on.
Bogus Accounts: Facebook disabled 583 million fake accounts during Q1 of 2018, and 694 million the quarter before. In Q1, the social network caught 98.5 percent of these accounts before users reported them.
Sexual Stuff: Facebook’s relationship with nudity is tricky. The company restricts sexual content and nudity because some users “may be sensitive to this type of content,” according to its guidelines. There are some allowances, however, including protests and works of art. Still, the company removed roughly 42 million pieces of racy content across the two quarters, accounting for less than a tenth of a percent of content viewed on Facebook.
Graphic Violence: Facebook took action on 1.2 million pieces of graphic violence during Q4 2017, and 3.4 million during the first quarter of 2018. The company said the spike was due largely to better tools for finding inappropriate content.
“We aim to reduce violations to the point that our community doesn’t regularly experience them,” said Facebook VPs Guy Rosen and Alex Schultz in the report. “We use technology, combined with people on our teams, to detect and act on as much violating content as possible before users see and report it. The rate at which we can do this is high for some violations, meaning we find and flag most content before users do.”
Spam: Nobody likes spam, especially Facebook. The company axed more than 1.5 billion pieces of spam during Q4 of 2017 and Q1 of 2018.
Hate Speech: Facebook removed 2.5 million comments that violated its hate speech rules so far this year, up from 1.6 million at the end of 2017. Facebook defines hate speech “as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease. We also provide some protections for immigration status.”
Terrorist Propaganda: Facebook took down 3 million combined posts of terrorist propaganda during Q4 2017 and Q1 2018. The company said 99.5 percent of these posts were flagged and removed by its internal team before users reported them.
The report comes on the heels of the Cambridge Analytica data leak, in which up to 87 million users had their profiles unknowingly compromised. The company, and CEO Mark Zuckerberg in particular, has been looking to assuage worried users over the last two months, offering a series of mea culpas. Facebook has said it will be more transparent with its 2.2 billion users, offering tools to check with whom the company has shared user data, and a look at its guidelines for banning content.
You can read the full report here.