Google launches robo-tool to flag hate speech online
New York Times and other news media testing AI software
that spots abusive comments
February 23, 2017 by: Madhumita Murgia, European
Technology Correspondent
Google has launched an artificial intelligence tool that
identifies abusive comments online, helping publishers respond to growing
pressure to clamp down on hate speech.
Google’s freely available software, known as Perspective,
is being tested by a range of news organisations, including The New York Times,
The Guardian and The Economist, as a way to ease the work of the human
moderators who review comments on their stories.
“News organisations want to encourage engagement and
discussion around their content, but find that sorting through millions of
comments to find those that are trolling or abusive takes a lot of money,
labour and time,” said Jared Cohen, president of Jigsaw, the Google social
incubator that built the tool.
“As a result, many sites have shut down comments
altogether. But they tell us that isn’t the solution they want.”
Currently, the software is available to a range of
publications that are part of Google’s Digital News Initiative, including the
BBC, the Financial Times, Les Echos and La Stampa, and theoretically to
third-party social media platforms including YouTube, Twitter and Facebook.
“We are open to working with anyone from small developers
to the biggest platforms on the internet. We all have a shared interest and
benefit from healthy online discussions,” said CJ Adams, product manager at
Jigsaw.
Perspective helps to filter abusive comments more quickly
for human review. The algorithm was trained on hundreds of thousands of user
comments that had been labelled as “toxic” by human reviewers, on sites such as
Wikipedia and the New York Times.
It works by scoring online comments based on how similar
they are to comments tagged as “toxic” or likely to make someone leave a
conversation.
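For publishers experimenting with the tool, that scoring is exposed through a web API. The sketch below is a minimal illustration of how a developer might request a toxicity score for a single comment; it assumes the publicly documented comments:analyze endpoint and a placeholder API key, neither of which is described in this article.

import requests

# Illustrative sketch: request a TOXICITY score from the Perspective API.
# The endpoint and request shape follow Google's public documentation for
# the comments:analyze method; the API key below is a placeholder.
API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(comment_text: str) -> float:
    """Return Perspective's TOXICITY summary score (0.0 to 1.0) for a comment."""
    payload = {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    # Comments scoring above a chosen threshold could be queued for human review.
    score = toxicity_score("You are such an idiot.")
    print(f"Toxicity: {score:.2f}")

In practice a publisher would run every incoming comment through a call like this and route only the high-scoring ones to a moderator, which is the filtering workflow the article describes.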
“All of us are familiar with increased toxicity around
comments in online conversations,” Mr Cohen said. “People are leaving
conversations because of this, and we want to empower publications to get those
people back.”
The New York Times trial resulted in reviewers being able
to check twice as many comments in the same amount of time, as the algorithm
helped to narrow down the pool of comments needing human attention.
“Their goal is to be able to [improve] review speed by
10x, so the project is ongoing,” said Lucas Dixon, Jigsaw’s chief research
scientist.
Google is not the first to attempt to curb trolling
online. Earlier this month, Twitter stepped up its efforts by making tweaks to
hide abuse from its users, rather than remove content from the platform
completely.
Its chief executive, Jack Dorsey, tweeted at the time that
Twitter was measuring its progress against abuse on a daily basis.
In May, US tech groups including Google, Facebook,
Twitter and Microsoft signed a “code of conduct” with Brussels that required
them to “review the majority” of flagged hate speech within 24 hours, remove it
if necessary and even develop “counter narratives” to confront the problem.
Copyright The Financial Times Limited 2017. All rights
reserved.