Tech giants are still stumbling in the social world they created
The Associated Press August 10, 2018
Who knew connecting the world could get so complicated?
Perhaps some of technology's brightest minds should have seen that coming.
Social media bans of conspiracy theorist Alex Jones have
thrust Facebook, YouTube, Twitter and others into a role they never wanted — as
gatekeepers of discourse on their platforms, deciding what should and shouldn't
be allowed and often angering almost everyone in the process. Jones, a
right-wing provocateur, suddenly found himself banned from most major social
platforms this week, after years in which he was free to use them to promulgate
a variety of false claims.
Twitter, which one of its executives once called the
"free speech wing of the free speech party," remains a lonely holdout
on Jones. The resulting backlash suggests that no matter what the tech
companies do, "there is no way they can please everyone," as Scott
Shackelford, a business law and ethics professor at Indiana University,
observed.
Facebook's Mark Zuckerberg, Twitter's Jack Dorsey and
crew, and Google's stewards of YouTube gave little thought to such consequences
as they built their empires with lofty goals to connect the world and
democratize discourse. At the time, they were the rebels aiming to bypass the
stodgy old gatekeepers — newspaper editors, television programmers and other
establishment types — and let people talk directly to one another.
"If you go back a decade or so, the whole idea of
speech on social media was seen as highly positive light," said Tim
Cigelske, who teaches social media at Marquette University in Wisconsin. There
was the Arab Spring. There were stories of gay, lesbian and transgender teens
from small towns finding support online.
At the same time, of course, the companies were racing to
build the largest audiences possible, slice and dice their user data and make
big profits by turning that information into lucrative targeted advertisements.
The dark side of untrammeled discourse, the thinking
went, would sort itself out as online communities moderated themselves, aided
by fast-evolving computer algorithms and, eventually, artificial intelligence.
"They scaled, they built, they wanted to drive
revenue as well as user base," said technology analyst Tim Bajarin,
president of consultancy Creative Strategies. "That was priority one and
controlling content was priority two. It should have been the other way
around."
That all got dicier once the election of President Donald
Trump focused new attention on fake news and organized misinformation campaigns
— not to mention the fact that some of the people grabbing these new
social-media megaphones were wild conspiracy theorists who falsely call mass
shootings a hoax, white nationalists who organize violent rallies and men who
threaten women with rape and murder.
While the platforms may not have anticipated the influx
of hate speech and meddling from foreign powers like Russia, North Korea and
China, Bajarin said, they should have acted more quickly once they found it.
"The fact is we're dealing with a brave new world that they've allowed to
happen, and they need to take more control to keep it from spreading," he
said.
That's easier said than done, of course. But it's
particularly difficult for huge tech companies to balance public goods such as
free speech with the need to protect their users from harassment, abuse, fake
news and manipulation, especially given that their business models require them
to alienate as few of their users as possible, lest they put the flood of
advertising money at risk.
"Trying to piece together a framework for speech
that works for everyone — and making sure we effectively enforce that framework
— is challenging," wrote Richard Allan, Facebook's vice president of
policy, in a blog post Thursday. "Every policy we have is grounded in
three core principles: giving people a voice, keeping people safe, and treating
people equitably. The frustrations we hear about our policies — outside and
internally as well — come from the inevitable tension between these three
principles."
Such tensions force some of the largest corporations in
the world to decide, for instance, if banning Nazis also means banning white
nationalists — and to figure out how to tell them apart if not. Or whether
kicking off Jones means they need to ban all purveyors of false conspiracy
theories. Or whether racist comments should be allowed if they are posted, to
make a point, by the people who received them.
"I don't think the platforms in their heart of
hearts would like to keep Alex Jones on," said Nathaniel Persily, a professor
at Stanford Law School. "But it's difficult to come up with a principle to
say why Alex Jones and not others would be removed."
While most companies have policies against "hate
speech," defining what constitutes hate speech can be difficult, he added.
Even governments have trouble with it. One country's free speech is another
country's hate speech, punishable by jail time.
Facebook, Twitter, Google, Reddit and others face these
questions millions of times a day, as human moderators and algorithms decide
which posts, which people, which photos or videos to allow, to kick off or
simply make less visible and harder to find. If they allow too much harmful
content, they risk losing users and advertisers. If they go too far and remove
too much, they face charges of censorship and ideological bias.
"My sense is that they are throwing everything at
the wall and seeing what sticks," Persily said. "It's a whack-a-mole
problem. It's not the same threats that are continuing, and they have to be
nimble enough to deal with new problems."