Artificial intelligence runs wild while humans dither

Some algorithmic interactions hit a level of complexity beyond our comprehension

John Thornhill, March 6, 2017

As an experiment, Tunde Olanrewaju messed around one day with the Wikipedia entry of his employer, McKinsey. He edited the page to say that he had founded the consultancy firm. A friend took a screenshot to preserve the revised record.

Within minutes, Mr Olanrewaju received an email from Wikipedia saying that his edit had been rejected and that the true founder’s name had been restored. Almost certainly, one of Wikipedia’s computer bots that police the site’s 40m articles had spotted, checked and corrected his entry.

It is reassuring to know that an army of such clever algorithms is patrolling the frontline of truthfulness — and can outsmart a senior partner in McKinsey’s digital practice. In 2014, bots were responsible for about 15 per cent of all edits made on Wikipedia.
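
Such anti-vandalism bots broadly follow the same loop: poll Wikipedia’s recent-changes feed, score each edit against rules or a model, and revert the ones that fail. The Python sketch below is a toy illustration of that loop, not the code of any real Wikipedia bot; it assumes only the public MediaWiki recentchanges API, and its single heuristic (flagging edits that delete a large block of text) stands in for the much richer checks production bots apply.

import requests

API = "https://en.wikipedia.org/w/api.php"

def recent_changes(limit=50):
    # Ask the MediaWiki API for the latest edits to articles (namespace 0).
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcprop": "title|ids|user|comment|sizes",
        "rctype": "edit",
        "rcnamespace": 0,
        "rclimit": limit,
        "format": "json",
    }
    resp = requests.get(API, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()["query"]["recentchanges"]

def looks_suspect(change):
    # Toy heuristic: treat edits that remove a large chunk of text as suspect.
    return change["newlen"] - change["oldlen"] < -2000

for change in recent_changes():
    if looks_suspect(change):
        # A real bot would fetch the diff, run its full checks and,
        # if the edit fails them, revert to the previous revision.
        print("would review:", change["title"], "edited by", change["user"])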

But, as is the way of the world, algos can be used for offence as well as defence. And sometimes they can interact with each other in unintended and unpredictable ways. The need to understand such interactions is becoming ever more urgent as algorithms become central to areas as varied as social media, financial markets, cyber security, autonomous weapons systems and networks of self-driving cars.

A study published last month in the research journal Plos One, analysing the use of bots on Wikipedia over a decade, found that even those designed for wholly benign purposes could spend years duelling with each other.

In one such battle, Xqbot and Darknessbot disputed 3,629 entries, undoing and correcting each other’s edits on subjects ranging from Alexander the Great to Aston Villa football club.
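
The analysis behind that finding can be sketched in a few lines of Python: take a log of reverts and count how often the same pair of bots keep undoing one another on the same pages. The revert records below are invented for illustration; the study itself worked from complete edit histories across many language editions.

from collections import Counter

# Hypothetical revert log: (article, reverting bot, reverted bot).
reverts = [
    ("Alexander the Great", "Xqbot", "Darknessbot"),
    ("Alexander the Great", "Darknessbot", "Xqbot"),
    ("Aston Villa F.C.", "Xqbot", "Darknessbot"),
    ("Aston Villa F.C.", "Darknessbot", "Xqbot"),
    ("Aston Villa F.C.", "Xqbot", "Darknessbot"),
]

# Count reverts per unordered pair of bots: a pair that repeatedly
# undoes each other's edits is a candidate "duel".
pair_counts = Counter(frozenset((reverter, reverted))
                      for _, reverter, reverted in reverts)

for pair, n in pair_counts.most_common():
    if n >= 2:  # threshold separating a conflict from a one-off correction
        print(" vs ".join(sorted(pair)), "-", n, "reverts")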

The authors, from the Oxford Internet Institute and the Alan Turing Institute, were surprised by the findings, concluding that we need to pay far more attention to these bot-on-bot interactions. “We know very little about the life and evolution of our digital minions.”

Wikipedia’s bot ecosystem is gated and monitored. But that is not the case in many other reaches of the internet where malevolent bots, often working in collaborative botnets, can run wild.

The authors highlighted the dangers of such bots mimicking humans on social media to “spread political propaganda or influence public discourse”. Such is the threat of digital manipulation that a group of European experts has even questioned whether democracy can survive the era of Big Data and Artificial Intelligence.

It may not be too much of an exaggeration to say we are reaching a critical juncture. Is truth, in some senses, being electronically determined? Are we, as the European academics fear, becoming the “digital slaves” of our one-time “digital minions”? The scale, speed and efficiency of some of these algorithmic interactions are reaching a level of complexity beyond human comprehension.

If you really want to scare yourself on a dark winter’s night you should read Susan Blackmore on the subject. The psychologist has argued that, by creating such computer algorithms, we may have inadvertently unleashed a “third replicator”, which she originally called a teme, later modified to treme.

The first replicators were genes that determined our biological evolution. The second were human memes, such as language, writing and money, that accelerated cultural evolution. But now, she believes, our memes are being superseded by non-human tremes, which fit her definition of a replicator as being “information that can be copied with variation and selection”.
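
Her definition maps onto the basic loop of any evolutionary algorithm: copy, vary, select. The Python toy below is there only to make those three ingredients concrete; the target string, population size and mutation rate are arbitrary choices, and nothing in it is specific to Blackmore’s argument about tremes.

import random

TARGET = "information"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    # Selection criterion: how many characters already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def copy_with_variation(candidate, rate=0.1):
    # Copying with occasional random mutation.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Start from random strings and repeatedly copy, vary and select.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(50)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    if fitness(survivors[0]) == len(TARGET):
        print("target reached in generation", generation)
        break
    population = survivors + [copy_with_variation(random.choice(survivors))
                              for _ in range(40)]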

“We humans are being transformed by new technologies,” she said in a recent lecture. “We have let loose the most phenomenal power.”

For the moment, Prof Blackmore’s theory remains on the fringes of academic debate. Tremes may be an interesting concept, says Stephen Roberts, professor of machine learning at the University of Oxford, but he does not think we have lost control.

“There would be a lot of negative consequences of AI algos getting out of hand,” he says. “But we are a long way from that right now.”

The more immediate concern is that political and commercial interests have learnt to “hack society”, as he puts it. “Falsehoods can be replicated as easily as truth. We can be manipulated as individuals and groups.”

His solution? To establish the knowledge equivalent of the Millennium Seed Bank, which aims to preserve plant life at risk of extinction.

“As we de-speciate the world we are trying to preserve these species’ DNA. As truth becomes endangered we have the same obligation to record facts.”

But, as we have seen with Wikipedia, that is not always such a simple task.

Copyright The Financial Times Limited 2017. All rights reserved.

