Rise of the machines: Google AI experiment may lead to robots that can learn WITHOUT human input

Generative Adversarial Networks create digital content based on real-life data
Google project pits AI algorithms against each other to refine this output
The results could one day lead to machines that can learn without human input

By TIM COLLINS FOR MAILONLINE | PUBLISHED: 07:13 EDT, 18 April 2017 | UPDATED: 09:21 EDT, 18 April 2017

Machines that can think for themselves - and perhaps turn on their creators as a result - have long been a fascination of science fiction.

And creating robots that can learn without any input from humans is moving ever closer, thanks to the latest developments in artificial intelligence.

One such project pits two AI algorithms against each other, with results that could one day lead to the emergence of such intelligent machines.

BATTLE OF THE BOTS

Google's Generative Adversarial Network works by pitting two algorithms against each other, in an attempt to create convincing representations of the real world.

These 'imagined' digital creations - which can take the form of images, videos, sounds and other content - are based on data fed to the system.

One AI bot creates new content based upon what it has been taught, while a second critiques these creations - pointing out imperfections and inaccuracies.

And the process could one day allow robots to learn new information without any input from people.

Researchers at the Google Brain AI lab have developed a system known as a Generative Adversarial Network (GAN).

Conventional AI 'teaches' an algorithm about a particular subject by feeding it massive amounts of information.

This knowledge can then be employed for a specific task - facial recognition being just one example.
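
To make that conventional approach concrete, the short sketch below shows supervised learning in miniature: an algorithm is shown many labelled examples (here, small images of handwritten digits from scikit-learn's built-in dataset) and can then be applied to a specific recognition task. It is an illustrative toy, not Google's code, and the choice of dataset and model is an assumption made for brevity.

```python
# Illustrative only: a tiny supervised-learning example, not Google's system.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 pixel images of handwritten digits, with known labels

# Split the labelled data into a 'teaching' set and an unseen test set.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0
)

model = LogisticRegression(max_iter=2000)  # a simple stand-in for a deep network
model.fit(X_train, y_train)                # 'teaching' phase: learn from labelled examples

print("Accuracy on unseen digits:", model.score(X_test, y_test))
```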

GANs use this learned information to generate new digital content, such as pictures and video, based on their understanding of similar real-life images and footage.

Google's approach is to set two algorithms against each other, to further refine these 'imaginings'.

One AI bot creates new content based upon what it has been taught about the real world, while a second critiques these creations - pointing out imperfections and inaccuracies.

This back-and-forth allows the system to produce images, sounds and other original creations that are far more realistic than the first bot could achieve working alone.
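
The rough sketch below (written in PyTorch) shows how that two-player set-up can be wired together in practice: a 'generator' network invents samples from random noise, while a 'discriminator' network plays the critic, and each is trained against the other. The network sizes, learning rates and the toy two-dimensional 'real' data are assumptions made for illustration; this is not Google's actual code.

```python
# A minimal GAN sketch (illustrative only): a generator learns to produce
# samples resembling a target distribution while a discriminator learns to
# tell real samples from generated ones.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim)
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    # 'Real' data: points from a Gaussian blob (a stand-in for real images).
    real = torch.randn(64, data_dim) * 0.5 + 2.0
    fake = generator(torch.randn(64, latent_dim))

    # Critic (discriminator) step: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the critic into outputting 1 for fakes.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```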

And this approach could one day allow robots to learn new information without any input from people - a technique known as 'unsupervised learning' that would represent a giant leap forward in AI technology.

Speaking to Wired, Dr Ian Goodfellow, who works at Google Brain, said: 'If an AI can imagine the world in realistic detail—learn how to imagine realistic images and realistic sounds—this encourages the AI to learn about the structure of the world that actually exists.

'You can think of this like an artist and an art critic.

'The generative model wants to fool the art critic—trick the art critic into thinking the images it generates are real.'

MACHINE LEARNING

Artificial intelligence systems rely on neural networks, which try to simulate the way the brain works in order to learn.

These networks can be trained to recognise patterns in information - including speech, text data, or visual images - and are the basis for a large number of the developments in AI over recent years.

They use input from the digital world to learn, with practical applications like Google's language translation services, Facebook's facial recognition software and Snapchat's image-altering live filters.

But the process of inputting this data can be extremely time-consuming, and is limited to one type of knowledge.
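
For readers curious what such a network looks like in code, here is a very rough sketch (again in PyTorch, and purely illustrative): a small stack of layers whose connection weights are nudged, one batch of labelled examples at a time, until the network maps inputs such as pixel values to the right pattern labels. The layer sizes and the random stand-in data are assumptions made for brevity.

```python
# Illustrative only: a tiny feed-forward neural network and one training step.
import torch
import torch.nn as nn

# 64 inputs (an 8x8 image flattened), one hidden layer, 10 possible labels.
network = nn.Sequential(
    nn.Linear(64, 32),  # weighted connections, loosely analogous to synapses
    nn.ReLU(),          # a simple non-linearity, loosely like a neuron 'firing'
    nn.Linear(32, 10),
)

optimizer = torch.optim.SGD(network.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# One training step on a batch of labelled examples (random stand-in data here).
inputs = torch.randn(16, 64)          # 16 fake 'images'
labels = torch.randint(0, 10, (16,))  # their (random) labels
loss = loss_fn(network(inputs), labels)

optimizer.zero_grad()
loss.backward()    # work out how each weight contributed to the error
optimizer.step()   # nudge the weights to reduce that error
```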

This is not the first time Google has set AI bots against each other to push the limits of this type of machine learning.

In February, a Google team used a game they designed to examine whether competing algorithms would work together or turn on each other.

These experiments showed that AI may be more or less likely to work together depending on the situation.

The results could add to our understanding and control of complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet – all of which depend on our continued cooperation.

