A call to arms for tech companies: Get extremists off your platforms
BY DAVID IBSEN, OPINION CONTRIBUTOR — 02/17/18 12:00 PM EST
Fort Hood shooter Nidal Hasan. Boston Marathon bombers
Dzhokhar and Tamerlan Tsarnaev. Underwear bomber Umar Farouk Abdulmutallab.
Garland, Texas, gunman Elton Simpson. All of these terrorists were deeply
influenced by the American-born al-Qaeda operative and propagandist Anwar
al-Awlaki, who continued to radicalize and encourage indiscriminate killing
long after his 2011 death, thanks to the easy accessibility of his sermons and
lectures on YouTube and other Internet platforms.
Last November, Google took the exemplary step of removing
most Awlaki content from its video-sharing platform YouTube. That action
was commendable, but much more work remains. Sadly, Awlaki was just
one voice in a sea of similar radical propagandists. The tech giant must set an
example for the rest of the industry by blocking not just Awlaki but other
noxious extremists from all of its platforms, and by encouraging other tech
companies to do the same.
The new Guide to Online Propagandists from my
organization, the Counter Extremism Project, highlights more than a dozen
radical propagandists whose hateful rhetoric remains freely accessible across
social media platforms. These include:
Abdullah Faisal, an internationally banned Islamist
propagandist convicted in the United Kingdom of soliciting murder and inciting
racial hatred. Faisal has been linked to the radicalization of shoe bomber Richard
Reid, 7/7 London bomber Germaine Lindsay, would-be Times Square bomber Faisal
Shahzad, and 9/11 conspirator Zacarias Moussaoui. A February 2017 YouTube
search returned almost 4,000 results, including two channels devoted to Faisal.
Yusuf Qaradawi, a Muslim Brotherhood ideologue who has
justified suicide bombings and the killing of Islamic apostates. Qaradawi lives
in Qatar under the government’s protection. Qaradawi’s name returns more than
10,000 results on YouTube.
Ahmad Musa Jibril, a Michigan-based Islamist preacher
popular with ISIS fighters and supporters. Though Jibril has retreated from his
social media accounts in recent years, the accounts remain active and his name
still returns more than 1,000 hits on YouTube, including two channels devoted
to him. Khuram Butt, one of the perpetrators of the June 3, 2017, London Bridge
terrorist attack, reportedly was radicalized by watching Jibril’s videos
online.
Nathan Damigo, founder of the white nationalist movement
Identity Evropa. Damigo has called for the preservation of white European
culture, which he believes is under attack from the forces of multiculturalism.
He has appeared alongside noted white nationalist Richard Spencer and spreads
his xenophobic message on university campuses. Damigo stepped down as leader
of Identity Evropa last year, but he maintains an active presence on YouTube,
where his own channel has almost 4,000 subscribers, and Twitter, where he has
almost 20,000 followers.
Some of this material now carries YouTube's offensive-content
warning label, but a warning that leaves the content online and one click away
from vulnerable viewers will not deter anyone seeking it. Google's Awlaki
decision set a clear precedent: content tied to entities and individuals
sanctioned by the U.S., EU, and U.N., as well as individuals with demonstrable
links to violence, can and should be removed. Following that example, all tech
firms must deny these virulent propagandists a platform; a consistent policy on
dangerous, hate-filled content demands nothing less.
The tech companies have already given themselves the
necessary tools to do so. In 2016, Facebook, Twitter, Microsoft, and YouTube
created the Shared Industry Hash Database to collect digital “fingerprints” of
extremist content that had been removed from platforms. In June 2017, these
same tech companies launched the Global Internet Forum to Counter Terrorism in
order to share best practices and lessons learned about countering the threat
of terrorist content online.
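The hash-sharing arrangement described above can be illustrated in miniature: a platform that removes a piece of extremist content computes a digital fingerprint of it and contributes that fingerprint to a shared set, which other member platforms can check new uploads against. The sketch below is a simplified illustration, not the consortium's actual system; it assumes a plain SHA-256 fingerprint, whereas the real database is understood to rely on perceptual hashes that can also match slightly altered copies, and its internals are not public.

```python
import hashlib

# Hypothetical in-memory stand-in for the Shared Industry Hash Database:
# a set of hex digests of content already removed by member platforms.
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Compute a digital fingerprint of a piece of content.

    SHA-256 is used here for simplicity; an exact-match hash like this
    only catches byte-identical re-uploads.
    """
    return hashlib.sha256(content).hexdigest()

def report_removed(content: bytes) -> None:
    """One platform removes extremist content and shares its fingerprint."""
    shared_hash_db.add(fingerprint(content))

def is_known_extremist_content(upload: bytes) -> bool:
    """Another platform checks a new upload against the shared set."""
    return fingerprint(upload) in shared_hash_db

# Platform A removes a video and contributes its hash.
report_removed(b"example propaganda video bytes")

# Platform B can now flag a byte-identical re-upload...
assert is_known_extremist_content(b"example propaganda video bytes")
# ...but an exact-match hash misses even trivially altered copies,
# which is why perceptual hashing matters in practice.
assert not is_known_extremist_content(b"example propaganda video bytes v2")
```

The value of the shared set is that a removal decision made once, by one company, propagates to every participant without each platform having to rediscover the same content independently.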
Tech companies must remove the confusion over what
is and is not acceptable content. Speaking with one voice, these companies must
reject content promoting groups or individuals on international sanctions
lists, as well as those who direct hateful or violent rhetoric at ethnic or
religious groups. Social media companies whose terms of service do not already
ban extremist content should make the necessary changes immediately.
To shape these policies, the tech companies should look
to the State Department’s Foreign Terrorist Organizations list, the Treasury
Department’s Specially Designated Nationals and Blocked Persons list, and the
U.N. Security Council Sanctions List. Tech firms should refuse to host content
produced by groups and individuals on U.K., EU, U.S., and U.N. sanctions lists,
or by individuals with clear links to violence, and should use the Shared
Industry Hash Database and the Global Internet Forum to Counter Terrorism to
better target and consistently remove these voices from all digital platforms.
Google made a landmark decision in removing Awlaki from
YouTube, but many more like him remain online. The tech companies must now act
categorically and universally to ensure that those with similar messages of
hate and murder can no longer abuse their platforms.