AI apocalypse where robots take over and ‘treat humans like guinea pigs’ could become reality, say experts



The very real dangers posed by technology include rampaging robots and a super-intelligent AI that could turn against its masters

By Saqib Shah | 26th February 2018, 3:21 pm | Updated: 26th February 2018, 5:35 pm

SCI-FI movies have long portrayed a frightening world where robots take over and use humans as slaves.

But the bleak dystopian vision seen in movies such as I, Robot and The Terminator may not be as far-fetched as it first appears, according to a string of experts.

The rapid acceleration of AI means more safeguards are needed, say experts
Futurologist Dr Ian Pearson told The Sun: "We'll have trained it to be like us, trained it to feel emotions like us, but it won't be like us. It will be a bit like aliens off Star Trek – smarter and more calculated in its actions.

"It will be insensitive to humans, viewing us as barbaric. So when it decides to carry out its own experiments, with viruses that it's created, it will treat us like guinea pigs."

This terrifying vision of the future isn't a fringe theory, but one that's gaining traction.


A recent report by 26 experts warned that our future is under threat from AI and more must be done to keep the world safe.

The report, titled ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’, was compiled by representatives from Oxford University’s Future of Humanity Institute and Cambridge University’s Centre for the Study of Existential Risk.

Attacks could range from the misuse of drones to bots being used to spread fake news on social media, its authors claimed.

Those warnings are especially pertinent now, as the technology is accelerating at a faster pace than ever before.

We're already seeing more human-AI interaction than at any time in the past.

Take a look around you: digital assistants (from Amazon's Alexa to Apple's Siri) are in our homes and in the phones we carry with us.

AI is also powering the self-driving cars that are predicted to hit our roads as soon as 2025.

And, in some cases, the bots are already outsmarting humans.

In May of last year, Google's AlphaGo AI triumphed over the world's top-ranked player at the ancient Chinese board game Go.

Then in August, an AI built by research lab OpenAI beat some of the world's best pros at the video game Dota 2.

This isn't a new phenomenon either – AI has been wiping the floor with us puny humans for years now.

Who can forget IBM's Watson AI beating two champs to win the popular US game show Jeopardy! back in 2011?

Still, there are some who believe that the technology's benefits far outweigh any negatives. They include Microsoft co-founder Bill Gates, who claimed that "AI can be our friend."

"AI is just the latest in technologies that allow us to produce a lot more goods and services with less labor," Gates said recently.

"And overwhelmingly, over the last several hundred years, that has been great for society."

But this current breed of AI – built by companies like Google, Microsoft and Apple – isn't the type that experts are losing sleep over.

Instead, they fear what these building blocks could lead to – a conscious, mega-intelligent system of the sort you see in sci-fi films (think The Terminator or I, Robot).

It's a concern shared by SpaceX and Tesla chief Elon Musk.

The billionaire entrepreneur has repeatedly urged governments to start regulating AI, warning that it poses a "fundamental risk to the existence of civilization."

Musk's comments led to a public spat with Facebook founder and CEO Mark Zuckerberg, who claimed that such "doomsday scenarios" are "irresponsible."

Meanwhile, in the more immediate future, there's also the risk of hackers manipulating the AI systems we use in our places of work.

Last year, research by Deloitte suggested that 85% of UK businesses plan to invest in AI by 2020.

But are there safeguards in place to keep these systems, and our cyber-infrastructure, secure?

Dr Pearson claims that bad actors will always be on the hunt for flaws in the software.

"That's definitely the type of thing rogue states and terrorists are going to be interested in," he told The Sun. "[They'll] try to find weaknesses and exploit them for their own purposes."

Ultimately, the burden rests on the shoulders of AI's creators, according to Oxford University's Professor Luciano Floridi.

“The real risks with AI are entirely human: misuses, wrong choices, bad design, and missed opportunities,” Floridi, Director of the uni's Digital Ethics Lab, told The Sun.

"If something goes wrong the responsibility will be ours. The only threat to humanity is humanity itself. The rest is science fiction.”


