UK's Tech Backlash Could Change the Internet



04.09.19 07:00 AM

British officials took a swipe at global internet giants Monday, proposing rules that would require the companies to proactively remove content the government views as illegal or “harmful,” and give the government the right to shut down offending sites.

The proposals, contained in a 102-page white paper, are aimed at combating the spread of disinformation, hate speech, online extremism, and child exploitation. If enacted as described, they would constitute some of the most stringent and far-reaching restrictions on internet speech by a major Western democracy. But critics said the proposals fail to balance curbing harmful speech with protecting free expression.

Under current UK law, social media platforms and other online companies are shielded from liability for potentially illegal content posted by users until they’re notified of it. Monday’s proposals, drafted by the UK’s Department for Digital, Culture, Media and Sport and the Home Office, and backed by Prime Minister Theresa May, aim to change that. They would penalize companies like Facebook and Google that UK lawmakers believe have turned a blind eye to the spread of harmful content in favor of maximizing growth, by making them more responsible for the content on their platforms.

Companies that allow users to post or share content online will be required to proactively police content that the UK government deems illegal—like child sexual exploitation and abuse, the sale of illegal goods, or terrorist activity—as well as legal activity that the government has categorized as harmful, such as disinformation, promotion of serious violence, harassment, and hateful extremist content, among many others.
An as-yet-undefined internet regulator would be created to enforce the rules, with tools that go beyond typical fines. The regulator could block offending sites from being accessed in the UK, and force other companies—like app stores, social media sites, and search engines—to stop doing business with offenders. It could even hold executives personally accountable for offenses, which could mean civil fines or criminal liability.
Among the new requirements, the regulator would be expected to specify “an expedient timeframe for the removal of terrorist content” in cases like the Christchurch, New Zealand, shootings, and outline steps for companies “to prevent searches which lead to terrorist activity and/or content.” There would be similar rules for dealing with hate crimes. The rules also would require that companies “ensure that algorithms selecting content do not skew towards extreme and unreliable material in the pursuit of sustained user engagement.”
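The white paper does not say how platforms should meet that last requirement, but the concern it names is engagement-optimized ranking. Below is a minimal, purely illustrative sketch of the idea in Python; the `Post` fields, the `reliability_score` signal, and the weighting are hypothetical assumptions for illustration, not anything specified in the white paper or used by any platform.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # model-estimated chance of a click/share (0-1)
    reliability_score: float     # hypothetical signal: 0.0 = flagged extreme/unreliable, 1.0 = trusted

def rank_feed(posts, reliability_weight=0.5):
    """Rank posts by a blend of predicted engagement and reliability.

    Sorting on predicted_engagement alone is the "skew towards extreme and
    unreliable material in the pursuit of sustained user engagement" the
    white paper warns about; blending in a reliability term is one
    hypothetical way a platform could counteract it.
    """
    def score(p):
        return ((1 - reliability_weight) * p.predicted_engagement
                + reliability_weight * p.reliability_score)
    return sorted(posts, key=score, reverse=True)

# With the blended score, a highly engaging but unreliable post no longer tops the feed.
feed = [
    Post("viral-but-flagged", predicted_engagement=0.9, reliability_score=0.1),
    Post("sober-reporting", predicted_engagement=0.6, reliability_score=0.9),
]
print([p.post_id for p in rank_feed(feed)])  # ['sober-reporting', 'viral-but-flagged']
```

Real ranking systems involve learned models and human review rather than a single static score; the sketch only shows that the regulator’s wording targets the ranking objective itself.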

Critics, while endorsing some of the goals of the proposals, say the rules may be unattainable in practice. Among other things, they pointed to the lack of specifics in the definition of “harmful.” In a statement, Jim Killock, executive director of the UK’s Open Rights Group, a nonprofit that advocates for privacy and free speech online, said the proposal would unfairly regulate “the speech of millions of British citizens” and would have “serious implications for legal content that is deemed potentially risky, whether it really is or not.”
“Establishing a relationship between harm and content is in practice incredibly difficult. Assumptions abound. If the evidence standard is low, we get over-reaction. If it is set reasonably, it may be impossible for the regulator to require action despite public demand,” Killock continued on Twitter.

Privacy International cautioned against any quick decision-making on the matter, which it said would “introduce, rather than reduce, online harms.” The group suggested that lawmakers assess the privacy implications of proactively monitoring user content, and think carefully before giving companies more responsibility for policing the internet. “This would empower corporate judgment over content, [which] would have implications for human rights, particularly freedom of expression and privacy,” the statement said.

Monday’s report was the first stage in a process. The proposals must be turned into legislation, which would have to be approved by Parliament. The government said it would seek advice over the next 12 weeks from “legal, regulatory, technical, online safety and law enforcement experts,” and run a series of workshops with civil society organizations and users historically subject to increased online abuse.
The proposals would apply to a wide swath of tech companies, but the report suggests enforcing them most stringently against bigger companies, to avoid imposing undue burdens on burgeoning startups. The first order of business for the internet regulator, the report says, will be to take on “those companies which pose the biggest and most obvious risk of harm to users, either because of the scale of the service’s size or because of known issues with serious harms.”
In a statement received early Tuesday, Facebook said, “New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech. These are complex issues to get right and we look forward to working with the government and Parliament to ensure new regulations are effective.” Twitter said that it “will continue to engage in the discussion between industry and the UK government, as well as work to strike an appropriate balance between keeping users safe and preserving the internet's open, free nature.” Google did not respond to requests for comment.
