Internet Platforms Want Centralized Censorship. That Should Scare You

In the immediate aftermath of the horrific attacks at the Al Noor Mosque and Linwood Islamic Centre in Christchurch, New Zealand, internet companies faced intense scrutiny over their efforts to control the proliferation of the shooter's propaganda. Responding to many questions about the speed of their reaction and the continued availability of the shooting video, several companies published posts or gave interviews that revealed new information about their content moderation efforts and capacity to respond to such a high-profile incident.

 

This kind of transparency and information sharing from these companies is a positive development. If we're going to have coherent discussions about the future of our information environment, we—the public, policymakers, the media, website operators—need to understand the technical realities and policy dynamics that shaped the response to the Christchurch massacre. But some of these responses have also included ideas that point in a disturbing direction: toward increasingly centralized and opaque censorship of the global internet.


Facebook, for example, describes plans for an expanded role for the Global Internet Forum to Counter Terrorism, or GIFCT. The GIFCT is an industry-led self-regulatory effort launched in 2017 by Facebook, Microsoft, Twitter, and YouTube. One of its flagship projects is a shared database of hashes of files that the participating companies have identified as “extreme and egregious” terrorist content. The hash database allows participating companies (which range from giants like YouTube to one-man operations like JustPaste.it) to automatically identify when a user is trying to upload content already in the database.
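To make that mechanism concrete, here is a minimal sketch of what an upload check against a shared hash database might look like. The function names and database entries are hypothetical, and real systems, including the GIFCT's, rely on perceptual fingerprints of images and video that survive re-encoding and cropping, rather than the plain cryptographic hash used here to keep the example self-contained.

```python
import hashlib

# Hypothetical shared database of fingerprints contributed by member companies.
# The entry below is a placeholder; the real database is not publicly visible.
SHARED_HASH_DATABASE = {
    "0" * 64,  # placeholder fingerprint
}


def fingerprint(upload_bytes: bytes) -> str:
    """Compute a fingerprint for an uploaded file (SHA-256 as a stand-in)."""
    return hashlib.sha256(upload_bytes).hexdigest()


def handle_upload(upload_bytes: bytes) -> str:
    """Decide what to do when an upload matches the shared database.

    GIFCT membership does not mandate automatic removal on a match, so each
    platform chooses its own action; smaller sites often default to blocking.
    """
    if fingerprint(upload_bytes) in SHARED_HASH_DATABASE:
        return "block"  # or "flag_for_human_review"
    return "accept"


print(handle_upload(b"some uploaded video bytes"))  # -> accept
```

The lookup works the same way regardless of how an entry got into the database, which is exactly why the lack of auditing and appeals discussed below matters.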

In Facebook's post-Christchurch updates, the company discloses that it added 800 new hashes to the database, all related to the Christchurch video. It also mentions that the GIFCT is "experimenting with sharing URLs systematically rather than just content hashes"—that is, creating a centralized blacklist of URLs that would facilitate widespread blocking of videos, accounts, and potentially entire websites or forums.
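For comparison, a systematically shared URL list might be consulted along the lines of the following sketch. The list entries and matching rules are again hypothetical, and a real implementation would need far more careful URL canonicalization; the point is that a single entry can name anything from one page to an entire site.

```python
from urllib.parse import urlsplit

# Hypothetical centrally shared blacklist. An entry might name a single page
# or a whole host, which is what makes URL-level blocking such a blunt tool.
SHARED_URL_BLACKLIST = {
    "forum.example/thread/12345",  # one specific page
    "videohost.example",           # an entire site
}


def is_blocked(url: str) -> bool:
    """Check a URL against the shared list by host and by host + path."""
    parts = urlsplit(url)
    host = parts.netloc.lower()
    host_and_path = host + parts.path.rstrip("/")
    return host in SHARED_URL_BLACKLIST or host_and_path in SHARED_URL_BLACKLIST


print(is_blocked("https://videohost.example/watch?v=abc"))  # True: whole host listed
print(is_blocked("https://forum.example/thread/67890"))     # False: only one thread listed
```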

In a recent post urging industry-wide action, Microsoft president Brad Smith also calls for building on the GIFCT. He suggests a "joint virtual command center" that would enable tech companies to coordinate during major events and decide what content to block and what content is in "the public interest." (There has been considerable debate among journalists and media organizations about how to cover the Christchurch event in the public interest. Smith does not explain how tech companies would be better able to reach a consensus view, but unilateral decisions on that point, made from a corporate and US-based perspective, will likely not satisfy a global user base.)

One major problem with expanding the hash database is that the initiative has long-standing transparency and accountability deficits. No one outside the consortium of companies knows what is in the database. There is no established mechanism for an independent audit of its contents, nor any appeal process for getting content removed from the database. People whose posts are removed or whose accounts are disabled on participating sites aren't even notified when the hash database was involved. So there's no way to know, from the outside, whether content has been added inappropriately and no way to remedy the situation if it has.

The risk of overbroad censorship from automated filtering tools has been clear since the earliest days of the internet, and the hash database is undoubtedly vulnerable to the same risks. We know that content moderation aimed at terrorist propaganda can sweep in news reporting, political protest, documentary footage, and more. The GIFCT does not require members to automatically remove content that appears in the database, but in practice, smaller platforms do not have the resources to do nuanced human analysis of large volumes of content and will tend to streamline moderation where they can. Indeed, even YouTube was overwhelmed by a one-video-per-second upload rate. In the days after the shooting, it circumvented its own human-review processes to take videos down en masse.

The post-Christchurch push for centralizing censorship goes well beyond the GIFCT hash database. Smith raises the specter of browser-based filters that would prevent users from accessing or downloading forbidden content; if such in-browser filters were mandatory or turned on by default, content control would be pushed a level deeper into the web. Three ISPs in Australia took the blunt step of blocking websites that hosted the shooting video until those sites removed the copies. While the ISPs acknowledged that this was an extraordinary circumstance, the decision was a stark reminder of the power of internet providers to exercise ultimate control over what users can access and post.

When policymakers and industry leaders talk about how to manage insidious content that takes advantage of virality for horrific aims, their focus typically falls on how to ensure that content removal is swift and comprehensive. But proposals for quick and widespread takedown, with no safeguards or even discussion of the risks of overbroad censorship, are incomplete and irresponsible. Self-regulatory initiatives like the GIFCT function not only to address a particular policy issue, but also to stave off more sweeping government regulation. We've already seen governments, including the European Union, look to co-opt the hash database and transform it from a voluntary initiative into a legislative mandate, without meaningful safeguards for protected speech. Any self-regulatory effort will face this same problem. Safeguards against censorship must be an integral part of any proposed solution.

Beyond that, though, there's a fundamental threat posed by solutions that rely on centralizing content control: The strength of the internet for fostering free expression lies in its decentralized nature, which can support a diversity of platforms. This decentralization allows some sites to focus on providing an experience that feels safe, or entertaining, or suitable for kids, while others aim to foster debate, or create an objective encyclopedia, or maintain an archive of videos documenting war crimes. Each of these is a distinct and laudable goal, but each requires different content standards and moderation practices. As we debate where to go after Christchurch, we must be wary of one-size-fits-all solutions and work to preserve the diversity of an open internet.

WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints. Op-eds can be submitted to opinion@wired.com.
