Google's DeepMind creates an AI with 'imagination'

The AI firm is developing algorithms that simulate the human ability to construct plans

By LIBBY PLUMMER Wednesday 26 July 2017

Google's DeepMind is developing an AI capable of 'imagination', enabling machines to see the consequences of their actions before they take them.

In two new research papers, the British AI firm, which was acquired by Google in 2014, describes new approaches for adding "imagination-based planning" to AI.

Its attempt to create algorithms that simulate the distinctly human ability to construct a plan could eventually help to produce software and hardware capable of solving complex tasks more efficiently.

DeepMind's previous research in this area has been incredibly successful, with its AlphaGo AI managing to beat a series of human champions at the notoriously tricky board game Go. However, AlphaGo relies on a clearly defined set of rules to provide likely outcomes, with relatively few factors to consider.

"The real world is complex, rules are not so clearly defined and unpredictable problems often arise," explain the DeepMind researchers in a blog post. "Even for the most intelligent agents, imagining in these complex environments is a long and costly process."

The researchers have developed "imagination-augmented agents" (I2As) – neural networks that learn to extract information that might be useful for future decisions, while ignoring anything irrelevant. These I2As can learn different ways to construct plans, choosing from a broad spectrum of strategies.
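To make the idea concrete, here is a rough, purely illustrative sketch of how such an agent could be structured: a learned environment model imagines a few steps ahead for each candidate action, a rollout encoder compresses each imagined trajectory into features, and a policy scores actions using both the raw observation and those imagined features. All class and function names here are invented for illustration and do not reflect DeepMind's actual implementation.

```python
# Minimal, hypothetical sketch of an imagination-augmented agent.
# This is NOT DeepMind's I2A code; the environment model, rollout encoder
# and policy below are toy stand-ins with made-up names.

import numpy as np

rng = np.random.default_rng(0)

class EnvironmentModel:
    """Learned model that predicts the next state and reward for an action."""
    def __init__(self, state_dim, n_actions):
        # In practice this would be trained from experience; here it is
        # randomly initialised purely for illustration.
        self.transitions = rng.normal(0, 0.1, (n_actions, state_dim, state_dim))
        self.reward_w = rng.normal(0, 0.1, (n_actions, state_dim))

    def predict(self, state, action):
        next_state = self.transitions[action] @ state
        reward = float(self.reward_w[action] @ state)
        return next_state, reward

class RolloutEncoder:
    """Summarises an imagined trajectory into a fixed-size feature vector."""
    def encode(self, trajectory):
        states, rewards = zip(*trajectory)
        return np.concatenate([np.mean(states, axis=0), [sum(rewards)]])

class ImaginationAugmentedAgent:
    def __init__(self, state_dim, n_actions, rollout_depth=3):
        self.model = EnvironmentModel(state_dim, n_actions)
        self.encoder = RolloutEncoder()
        self.n_actions = n_actions
        self.rollout_depth = rollout_depth
        # Policy weights over [raw state | imagined features], one row per action.
        feat_dim = state_dim + n_actions * (state_dim + 1)
        self.policy_w = rng.normal(0, 0.1, (n_actions, feat_dim))

    def imagine(self, state, first_action):
        """Roll the learned model forward, starting with a candidate action."""
        trajectory, s, a = [], state, first_action
        for _ in range(self.rollout_depth):
            s, r = self.model.predict(s, a)
            trajectory.append((s, r))
            a = rng.integers(self.n_actions)  # trivial rollout policy for illustration
        return trajectory

    def act(self, state):
        # One imagined rollout per candidate action, encoded and concatenated
        # with the raw state (the "model-free" path), then scored by the policy.
        imagined = [self.encoder.encode(self.imagine(state, a))
                    for a in range(self.n_actions)]
        features = np.concatenate([state] + imagined)
        logits = self.policy_w @ features
        return int(np.argmax(logits))

agent = ImaginationAugmentedAgent(state_dim=4, n_actions=5)
print("chosen action:", agent.act(rng.normal(size=4)))
```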

"This work complements other model-based AI systems, like AlphaGo, which can also evaluate the consequences of their actions before they take them," the DeepMind research team told WIRED.

"What differentiates these agents is that they learn a model of the world from noisy sensory data, rather than rely on privileged information such as a pre-specified, accurate simulator. Imagination-based approaches are particularly helpful in situations where the agent is in a new situation and has little direct experience to rely on, or when its actions have irreversible consequences and thinking carefully is desirable over spontaneous action."

DeepMind tested these agents using puzzle game Sokoban and a spaceship navigation game, both of which require forward planning and reasoning. "For both tasks, the imagination-augmented agents outperform the imagination-less baselines considerably: they learn with less experience and are able to deal with the imperfections in modelling the environment," explains the blog post.

A video shows an AI agent playing Sokoban, without knowing the rules of the game. It shows the agent's five imagined outcomes for each move, with the chosen route highlighted.
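For a flavour of what that visualisation corresponds to, the toy snippet below imagines an outcome for each of five candidate moves and picks the best-scoring one, the analogue of the highlighted route. It is again purely illustrative, with made-up names and a trivial grid "model" rather than anything from the paper.

```python
# Toy illustration (not DeepMind's code): score each candidate move on a tiny
# grid by imagining where it leads, then pick the move with the best outcome.

import numpy as np

GOAL = np.array([3, 3])
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1), "stay": (0, 0)}

def imagined_outcome(pos, move, steps=3):
    """Pretend-play a move a few steps ahead with a trivial 'model' of the grid."""
    pos = np.array(pos, dtype=float)
    for _ in range(steps):
        pos = np.clip(pos + MOVES[move], 0, 4)  # stay inside a 5x5 board
    # Higher score means the imagined end position is closer to the goal.
    return -float(np.linalg.norm(pos - GOAL))

position = (0, 0)
scores = {m: imagined_outcome(position, m) for m in MOVES}  # five imagined outcomes
best = max(scores, key=scores.get)                          # the "highlighted" route
print(scores, "->", best)
```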

"This is initial research, but as AI systems become more sophisticated and are required to operate in more complex environments, this ability to imagine could enable our systems to learn the rules governing their environment and thus solve tasks more efficiently," the researchers told WIRED.

Earlier this year, researchers from DeepMind and Imperial College London added memory to their AI so that it could learn to play multiple Atari computer games. Previous iterations of the technology had only been able to learn one game at a time, and while they could beat human players, they could not 'remember' how they had done so.

Just last month, research from DeepMind and OpenAI revealed developments that could help an AI to learn about the world around it based on minimal, non-technical feedback – mimicking the human trait of inference.


