Twitter turns to algorithms to clamp down on abusive content
By Dustin Volz March 1, 2017
WASHINGTON (Reuters) - Twitter Inc on Wednesday launched
a wider effort to use algorithms to identify accounts as potentially engaging
in abusive behavior, a departure from its practice of relying on users to
report accounts that should be reviewed for possible violation of its rules.
Twitter and rivals like Facebook have long relied on
users reporting potential abuse for review, sometimes to the chagrin of groups
that accused them of doing too little to thwart hateful speech or harassment.
Twitter, which already uses technology to try to limit some communications,
will still review user reports of potential abuse.
Twitter said it will limit the functionality of accounts
flagged by its technology as abusive for an unspecified amount of time, a
restriction that could include allowing only followers to see that user's
tweets. Currently, accounts are deleted or suspended when marked as abusive.
"We aim to only act on accounts when we’re
confident, based on our algorithms, that their behavior is abusive," Ed
Ho, vice president of engineering, wrote in a blog post. "Since these
tools are new we will sometimes make mistakes, but know that we are actively
working to improve and iterate on them every day."
Twitter is also introducing new filtering options for
notifications to allow users to limit what they see from certain types of
accounts, such as those that lack a profile photo, and said it would alert
users when it received abuse reports and inform them if further action against
certain accounts takes place.
The updates announced Wednesday are the latest in a
series of changes Twitter has implemented in recent months to combat abuse.
Early in February the social media company said it would make it harder for
abusive users to create new accounts, launch a "safe search" function
and begin collapsing tweet replies deemed abusive or low-quality so they are
hidden from immediate view.
Twitter, Facebook and other internet companies have faced
growing complaints in recent years over how they monitor and police their
content, as users and governments have stepped up pressure on Silicon Valley to
prevent violent extremist propaganda, curtail harassment and bullying, and
limit fake news.
Those efforts have often clashed with free-speech
activists who have warned about internet censorship and some political groups
that claim they are being unfairly targeted.
Ho on Wednesday acknowledged that protecting users
continues to be a challenge for the San Francisco-based company.
"We’re learning a lot as we continue our work to
make Twitter safer – not just from the changes we ship but also from the
mistakes we make, and of course, from feedback you share," Ho said.
(Reporting by Dustin Volz; Editing by Leslie Adler)