How Hate Groups Forced Online Platforms to Reveal Their True Nature

By JOHN HERRMAN AUG. 21, 2017

White supremacist marchers had not yet lit their torches when the deletions began. The ‘‘Unite the Right’’ Facebook page, which had been used to organize the rally in Charlottesville, was removed the day before the event was scheduled, forcing planners to disperse to other platforms to organize. And then, in the hours and days after a participant drove his car into a crowd of counterprotesters, killing 32-year-old Heather Heyer and injuring at least 19 others, internet companies undertook a collective purge.

Facebook banned a range of pages with names like ‘‘Right Wing Death Squad’’ and ‘‘White Nationalists United.’’ Reddit banned, among others, a hard-right community called ‘‘Physical Removal,’’ an organizer of which had called the weekend’s killing ‘‘a morally justified action.’’ Twitter suspended an unknown number of users, including popular accounts associated with 4chan’s openly fascistic Politically Incorrect message board, or /pol/. Discord, a chat app for gamers that doubled as an organizing tool for the event, and where a prominent white supremacist had called for disrupting Heyer’s funeral, rushed to do cleanup.

The clampdown extended beyond the walled gardens of social platforms to a wide array of online services. The Daily Stormer, a neo-Nazi site that promoted the march and celebrated its fatal outcome, was banned by the domain registrar and hosting service GoDaddy, then hours later by Google’s hosting service, then lost access to SendGrid, which it had used to deliver its newsletter; PayPal cut off the white nationalist Richard Spencer’s organization, which later lost access to its web host, Squarespace; Airbnb removed the accounts of a number of Charlottesville attendees before the event, and released a statement saying that ‘‘violence, racism and hatred demonstrated by neo-Nazis, the alt-right and white supremacists should have no place in this world’’; by Wednesday, Spotify was even expunging ‘‘white supremacist’’ music from its library.

The platforms’ sudden action in response to an outpouring of public grief and rage resembles, at first glance, a moral awakening and suggests a mounting sense of responsibility to the body politic. You could be forgiven for seeing this as a turning point for these sites, away from a hands-off approach to the communities they host and toward something with more oversight and regulation. An inside-out version of this analysis has been embraced by right-wing users, who have wasted no time declaring these bans a violation of their free speech. But this is an incomplete accounting of what happened and one that serves two parties and two parties alone: the companies themselves and the people they’ve just banned.

The recent rise of all-encompassing internet platforms promised something unprecedented and invigorating: venues that unite all manner of actors — politicians, media, lobbyists, citizens, experts, corporations — under one roof. These companies promised something that no previous vision of the public sphere could offer: real, billion-strong mass participation; a means for affinity groups to find one another and mobilize, gain visibility and influence. This felt and functioned like freedom, but it was always a commercial simulation. This contradiction is foundational to what these internet companies are. Nowhere was this tension more evident than in the case of Cloudflare, a web-infrastructure company. Under sustained pressure to drop The Daily Stormer as a client, the company’s chief executive, Matthew Prince, eventually assented. It was an arbitrary decision, and one that was out of step with the company’s stated policies. This troubled Prince. ‘‘I woke up in a bad mood and decided someone shouldn’t be allowed on the internet,’’ he wrote in an email to his staff. ‘‘No one should have that power.’’

Social platforms tend to refer to their customers in euphemistic, almost democratic terms: as ‘‘users’’ or ‘‘members of a community.’’ Their leaders are prone to statesmanlike posturing, and some, like Mark Zuckerberg, even seem to have statesmanlike ambitions. Content moderation and behavioral guidelines are likewise rendered in the terms of legal governance, as are their systems for dispute and recourse (as in the ubiquitous post-ban ‘‘appeal’’). Questions about how platforms like Twitter and Reddit deal with disruptive users and offensive content tend to be met with defensive language invoking free speech.

In the process of building private communities, these companies had put on the costumes of liberal democracies. They borrowed the language of rights to legitimize arbitrary rules, creating what the technology lawyer Kendra Albert calls ‘‘legal talismans.’’

This was first and foremost operationally convenient or even necessary: What better way to avoid liability and responsibility for how customers use your product? It was also good marketing. It’s easier to entrust increasingly large portions of your private and public life to an advertising and data-mining firm if you’re led to believe it’s something more. But as major internet platforms have grown to compose a greater share of the public sphere, playing host to consequential political organization — not to mention media — their internal contradictions have become harder to ignore. Far before Charlottesville, they had already become acute.

In a bracing Vice documentary about the rally, a man identified as a writer for The Daily Stormer told the reporter Elle Reeve, ‘‘As you can see, we’re stepping off the internet in a big way.’’ He saw the turnout as confirmation that what he’d been a part of online was real. ‘‘We have been spreading our memes, we’ve been organizing on the internet, and so now they’re coming out,’’ he said, before digressing into a rant about ‘‘anti-white, anti-American filth.’’ This sentiment was echoed in active and longstanding far-right communities on Reddit and 4chan and adjacent communities on Facebook and Twitter.

It is worth noting that the platforms most flamboyantly dedicated to a borrowed idea of free speech and assembly are the same ones that have struggled most intensely with groups of users who seek to organize and disrupt their platforms. A community of trolls on an internet platform is, in political terms, not totally unlike a fascist movement in a weak liberal democracy: It engages with and uses the rules and protections of the system it inhabits with the intent of subverting it and eventually remaking it in their image or, if that fails, merely destroying it.

But what gave these trolls power on platforms wasn’t just their willingness to act in bad faith and to break the rules and norms of their environment. It was their understanding that the rules and norms of platforms were self-serving and cynical in the first place. After all, these platforms draw arbitrary boundaries constantly and with much less controversy — against spammers, concerning profanity or in response to government demands. These fringe groups saw an opportunity in the gap between the platforms’ strained public dedication to discourse stewardship and their actual existence as profit-driven entities, free to do as they please. Despite their participatory rhetoric, social platforms are closer to authoritarian spaces than democratic ones. It makes some sense that people with authoritarian tendencies would have an intuitive understanding of how they work and how to take advantage of them.

This was also a moment these hate groups were anticipating; getting banned in an opaque, unilateral fashion was always the way out and, to some degree, it suits them. In the last year, hard-right communities on social platforms have cultivated a pre-emptive identity as platform refugees and victims of censorship. They’ve also been preparing for this moment or one like it: There are hard-right alternatives to Twitter, to Reddit and even to the still-mostly-lawless 4chan. There are alternative fund-raising sites in the mold of GoFundMe or Kickstarter; there’s an alternative to Patreon called Hatreon. Like most of these new alternatives, it has cynically borrowed a cause — it calls itself a site that ‘‘stands for free speech absolutism’’ — that the more mainstream platforms borrowed first. Their persecution narrative, which is the most useful narrative they have, and one that will help spread their cause beyond the fringes, was written for them years ago by the same companies that helped give them a voice.

John Herrman is a David Carr Fellow at The New York Times.

A version of this article appears in print on August 27, 2017, on Page MM18 of the Sunday Magazine with the headline: Online platforms annexed much of our public sphere, playacting as little democracies — until extremists made them reveal their true nature.


