Humanity is already losing control of artificial intelligence and it could spell disaster for our species


Researchers highlight the 'dark side' of AI and question whether humanity can ever truly understand its most advanced creations

By Margi Murphy 11th April 2017, 12:34 pm  Updated: 11th April 2017, 3:32 pm

WHAT sets humans apart from machines is the speed at which we can learn from our surroundings.

But scientists have successfully trained computers to use artificial intelligence to learn from experience – and one day they will be smarter than their creators.

Now scientists have admitted they are already baffled by the mechanical brains they have built, raising the prospect that we could lose control of them altogether.

Computers are already performing incredible feats – like driving cars and predicting diseases – but their makers say they aren’t entirely in control of their creations.

This could have catastrophic consequences for civilisation, tech experts have warned.

Take the strange driverless car which appeared on the streets of New Jersey, US, last year.

It differed from Google, Tesla or Uber’s autonomous vehicles, which follow rules set by tech developers on how to react to scenarios on the road.

This car could make its own decisions after watching humans drive.

And its creators, researchers at chip-making company Nvidia (which supplies some of the biggest car makers with supercomputer chips), said they weren’t 100 per cent sure how it did so, MIT Technology Review reported.

Its mysterious mind could be a sign of dark times to come, sceptics fear.

The car’s underlying technology, dubbed “deep learning”, is a powerful tool for solving problems.

It helps us tag our friends on Facebook and powers smartphone assistants like Siri, Cortana and Google Assistant.

Deep learning has helped computers get better than people at recognising objects.

The military is pouring millions into the technology so it can be used to steer ships, control drones and destroy targets.

And there’s hope it will be able to diagnose deadly diseases, make traders billionaires by reading the stock market and totally transform the world we live in.

But if we don’t make sure creators have a full understanding of how it works, we’re in deep trouble, scientists claim.

Tommi Jaakkola, a professor at MIT who works on applications of machine learning, warns: “If you had a very small neural network [deep learning algorithm], you might be able to understand it.”

“But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”
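To put Prof Jaakkola’s numbers in perspective, here is a rough back-of-the-envelope count for a hypothetical fully connected network (an illustration, not any specific real system):

```python
# Rough count of the connections ("weights") in a hypothetical
# fully connected neural network, to show how quickly the numbers
# Prof Jaakkola describes become impossible to inspect by hand.

def weight_count(units_per_layer, layers):
    # Every unit in one layer connects to every unit in the next,
    # and there are (layers - 1) gaps between consecutive layers.
    return units_per_layer * units_per_layer * (layers - 1)

small = weight_count(10, 3)       # a very small network
large = weight_count(2000, 200)   # thousands of units, hundreds of layers

print(small)   # 200 connections - a person could trace these
print(large)   # 796,000,000 connections - far beyond human inspection
```

With thousands of units per layer and hundreds of layers, the connection count runs into the hundreds of millions – which is why researchers say such systems become “quite un-understandable”.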

That means a driverless car, like Nvidia’s, could plough headfirst into a tree and we would have no idea why it decided to do so.

Just imagine if artificial intelligence was given control of the stock market or military systems.

Another computer was tasked with analysing patient records to predict disease.

Joel Dudley, who led the project at New York’s Mount Sinai Hospital, said the machine was inexplicably good at recognising schizophrenia – but no-one knew why.

“We can build these models, but we don’t know how they work,” he said.

Several big technology firms have been asked to be more transparent about how they create and apply deep learning.

This includes Google, which said it would create an AI ethics board but has kept mysteriously quiet about its existence.

A top British astronomer recently warned that humans will be wiped out by robots that will take over the Earth in a matter of centuries.

How do computers 'think'?

Scientists have been training computers to learn, like humans, since the 1970s. But recent advances in data storage and processing power mean the process has sped up exponentially in recent years. Interest in the field peaked when Google paid hundreds of millions to buy British “deep learning” company DeepMind in 2014. Deep learning – a form of machine learning built on so-called neural networks – effectively trains a computer to figure out natural language and instructions for itself. It is fed information and then quizzed on it, so it can learn, much like a child in the early years at school.
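The “fed information, then quizzed on it” loop described in the box above can be sketched with a toy example – a single artificial “neuron”, vastly simpler than any real deep-learning system, learning the made-up rule y = 2x purely from examples:

```python
# A toy "neuron" that learns the rule y = 2x from examples alone.
# It is shown inputs, quizzed on the answer, and nudges its single
# internal weight whenever it is wrong - a crude sketch of how a
# neural network learns from experience rather than being programmed.

examples = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (input, correct answer)
weight = 0.0        # the neuron's one adjustable parameter
learning_rate = 0.05

for _ in range(200):                 # repeat the "lessons" many times
    for x, target in examples:
        guess = weight * x           # the neuron's answer
        error = target - guess       # how wrong it was
        weight += learning_rate * error * x  # nudge toward the answer

print(round(weight, 2))  # ends up very close to 2.0 - learned, not coded
```

No one ever typed the rule “multiply by two” into this program; the weight simply drifts towards it through trial and error. Multiply that one adjustable number by hundreds of millions and it becomes clear why even the creators of deep-learning systems struggle to say how they work.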

If scientists can’t figure out how the algorithms (the formulas which keep computers performing the tasks we ask of them) work, they won’t be able to predict when those algorithms will fail.


