UK academics set to launch ‘virus’ software for online ‘hate speech’ in time for 2020 election



Joshua-Caleb Barton on Dec 30, 2019 at 2:10 PM EST

Researchers at the University of Cambridge have proposed a software program that treats online “hate speech” like a computer virus.

Users would be presented with a warning and a “Hate O’Meter” rating before deciding whether or not to view content that may be regarded as “hate speech.”

Thanks to researchers at the University of Cambridge, the largest social media companies in the world may soon have the ability to preemptively quarantine content classified by an algorithm as "hate speech." On October 14, 2019, researcher Stephanie Ullmann and professor Marcus Tomalin published a proposal in the journal Ethics and Information Technology promoting an invention that they claim could accomplish this goal without infringing on individuals' free speech rights. Their proposal involves software that uses an algorithm to identify "hate speech" in much the same way an antivirus program detects malware. It would then be up to the viewer of such content either to leave it in quarantine or to view it.
Ullmann and Tomalin argue that exposure to online "hate speech" is a type of harm that “is [as] serious as other sub-types [of harm] (e.g., physical, financial)” and that social media users deserve protection from it. The proposal states that social media companies' attempts to combat "hate speech" have been inaccurate and untimely, and leave the companies open to claims of free speech violations. Tomalin argues that a middle ground can be found between those who wish to stop all "hate speech" and those who want to protect uninhibited First Amendment speech.
Currently, social media companies combat "hate speech" primarily through a report-and-review method: one user reports another for "hate speech," and the company reviews the complaint and decides whether or not to censor the poster. Tomalin believes this is not ideal, as it “does not undo the harm that such material has already caused when posted online . . . it would be far better to intercept potentially offensive posts at an earlier stage of the process, ideally before the intended recipient has read them.”
Tomalin's proposal would use a sophisticated algorithm to evaluate not just the content itself but also all content previously posted by the user, in order to determine whether a post might be classifiable as "hate speech." If it is not, the post appears in the social media feed like any regular post. If the algorithm flags it as possible "hate speech," the post is quarantined and readers must opt in to view it. A graph from the proposal illustrates this process.
The alert to the reader will identify the type of "hate speech" potentially detected in the content, along with a “Hate O’Meter” rating showing how offensive the post is likely to be.
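The paper itself does not include code, but the opt-in flow it describes is easy to sketch. The Python below is a minimal illustration only, not the researchers' implementation: the keyword-based scoring stand-in, the 0-100 "Hate O'Meter" scale, and the 50-point quarantine threshold are all assumptions made for this example.

```python
# Illustrative sketch of the quarantine flow described in the proposal.
# The scoring heuristic, the 0-100 scale, and the threshold are assumptions;
# the real system would use a trained classifier, which is not public.

from dataclasses import dataclass

QUARANTINE_THRESHOLD = 50  # assumed cutoff on a 0-100 "Hate O'Meter" scale


@dataclass
class Post:
    author: str
    text: str


def hate_o_meter(post: Post, author_history: list[str]) -> int:
    """Toy stand-in for the proposal's classifier.

    The paper envisages a model that weighs both the post and the author's
    earlier posts; here a placeholder word list is counted so the sketch
    runs end to end.
    """
    flagged = {"vermin", "subhuman"}  # placeholder vocabulary
    words = post.text.lower().split()
    for earlier in author_history:
        words += earlier.lower().split()
    hits = sum(1 for word in words if word in flagged)
    return min(100, hits * 25)


def deliver(post: Post, author_history: list[str]) -> dict:
    """Quarantine flow: benign posts pass straight through, while flagged
    posts are held behind an opt-in warning that reports the score."""
    score = hate_o_meter(post, author_history)
    if score < QUARANTINE_THRESHOLD:
        # Below the threshold: the post appears in the feed like any other.
        return {"status": "visible", "text": post.text}
    # Above the threshold: hold the post and show a warning instead.
    # The reader must opt in before the content is revealed.
    return {
        "status": "quarantined",
        "warning": f"Possible hate speech (Hate O'Meter: {score}/100). View anyway?",
    }
```

In a real deployment the scoring function would be a trained model and the warning would be rendered by the platform's interface; the point of the sketch is only that the final view-or-discard decision stays with the reader.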
With this functionality, Tomalin explains, the decision of whether or not to view the content rests in the reader's hands. Asked how "hate speech" is defined, Tomalin cites a 2018 survey that defined the term as “language that attacks or diminishes, that incites violence or hate against groups, based on specific characteristics such as physical appearance, religion, descent, national or ethnic origin, sexual orientation, gender identity or other, and it can occur with different linguistic styles, even in subtle forms or when humor is used.”
Tomalin asserts that the automatic quarantine of such content, coupled with the reader's subsequent ability to view or delete it, would make it unlikely that anyone could bring credible claims of free speech violations against social media companies. Quarantining potential "hate speech," he says, would ease the fears of those worried about free speech and Big Tech censorship by leaving final censorship power in the hands of individual readers. In theory, it would also place the ability to preemptively block "hate speech" entirely in the hands of the social media companies themselves, which would help satisfy those worried about the spread of "hate speech" and its harms.
Tomalin and Ullmann are developing this project with researchers from ‘Giving Voice to Digital Democracies,’ an organization for which Tomalin is a senior research associate. They aim to have a working prototype available in early 2020. If it is successful and subsequently adopted by major social media companies, users could see content quarantined as "hate speech" in their feeds before the November 2020 elections.
