It Begins: Bots Are Learning to Chat in Their Own Language
By CADE METZ 03.16.17 7:30 AM
Igor Mordatch is working to build machines that can carry
on a conversation. Plenty of other people are working on the same thing. In Silicon
Valley, chatbot is now a bona fide buzzword. But Mordatch is different. He’s
not a linguist. He doesn’t deal in the AI techniques that typically reach for
language. He’s a roboticist who began his career as an animator. He spent time
at Pixar and worked on Toy Story 3, in between stints as an academic at places
like Stanford and the University of Washington, where he taught robots to move
like humans. “Creating movement from scratch is what I was always interested
in,” he says. Now, all this expertise is coming together in an unexpected way.
Born in Ukraine and raised in Toronto, the 31-year-old is
now a visiting researcher at OpenAI, the artificial intelligence lab started by
Tesla founder Elon Musk and Y Combinator president Sam Altman. There, Mordatch
is exploring a new path to machines that can converse not only with humans but
also with each other. He's building virtual worlds where software bots learn to
create their own language out of necessity.
As detailed in a research paper published by OpenAI this
week, Mordatch and his collaborators created a world where bots are charged
with completing certain tasks, like moving themselves to a particular landmark.
The world is simple, just a big white square—all of two dimensions—and the bots
are colored shapes: a green, red, or blue circle. But the point of this
universe is more complex. The world allows the bots to create their own
language as a way of collaborating, helping each other complete those tasks.
All this happens through what’s called reinforcement
learning, the same fundamental technique that underpinned AlphaGo, the machine
from Google’s DeepMind AI lab that cracked the ancient game of Go. Basically,
the bots navigate their world through extreme trial and error, carefully
keeping track of what works and what doesn’t as they reach for a reward, like
arriving at a landmark. If a particular action helps them achieve that reward,
they know to keep doing it. In this same way, they learn to build their own
language. Telling each other where to go helps them all get places more
quickly.
As Mordatch says: “We can reduce the success of dialogue
to: Did you end up getting to the green can or not?”
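To make that trial-and-error loop concrete, here is a minimal Python sketch of tabular reinforcement learning in a toy grid world. It only illustrates the general technique the article describes; the grid size, reward values, and learning parameters are invented for the example and are not OpenAI's actual setup.

```python
# Toy sketch of reinforcement learning by trial and error: an agent on a
# small grid learns which moves bring it to a landmark (the "green can").
# This is a generic tabular Q-learning example, not OpenAI's code; the grid
# size, rewards, and hyperparameters are all invented for illustration.
import random
from collections import defaultdict

GRID = 5                                       # 5x5 world
LANDMARK = (4, 4)                              # the goal the agent must reach
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # right, left, down, up

q = defaultdict(float)                         # q[(state, action)] -> value
alpha, gamma, epsilon = 0.5, 0.9, 0.1          # learning rate, discount, exploration

def step(state, action):
    """Apply a move, clamp it to the grid, and hand back a reward."""
    x, y = state
    dx, dy = action
    nxt = (min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))
    reward = 1.0 if nxt == LANDMARK else -0.01  # reward only for arriving
    return nxt, reward, nxt == LANDMARK

for episode in range(2000):
    state, done = (0, 0), False
    while not done:
        # occasionally explore at random, otherwise repeat what has worked
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # keep track of what works: nudge the estimate toward the observed reward
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt
```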
To build their language, the bots assign random abstract
characters to simple concepts they learn as they navigate their virtual world.
They assign characters to each other, to locations or objects in the virtual
world, and to actions like “go to” or “look at.” Mordatch and his colleagues
hope that as these bot languages become more complex, related techniques can
then translate them into languages like English. That is a long way off—at
least as a practical piece of software—but another OpenAI researcher is already
working on this kind of “translator bot.”
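As a rough illustration of that symbol-assignment idea, the Python sketch below hand-builds the kind of mapping the bots end up with: random abstract characters bound to concepts such as other agents, landmarks, and actions. In the actual research the mapping is learned through reinforcement rather than assigned by hand, and the concept names and utterance format here are invented for the example.

```python
# Rough sketch of the kind of lexicon the bots converge on: random abstract
# symbols bound to concepts (agents, landmarks, actions like "go to").
# The mapping is hand-assigned here purely for illustration; in the research
# it is learned, and these concept names and the utterance format are invented.
import random
import string

CONCEPTS = ["agent_red", "agent_green", "agent_blue",
            "landmark_1", "landmark_2", "goto", "look_at"]

def make_lexicon(seed):
    """Bind one random abstract character to each concept."""
    rng = random.Random(seed)
    symbols = rng.sample(string.ascii_lowercase, len(CONCEPTS))
    return dict(zip(CONCEPTS, symbols))

lexicon = make_lexicon(seed=0)

def utter(action, target, destination):
    """Compose a short utterance, e.g. 'agent_green, go to landmark_2'."""
    return [lexicon[action], lexicon[target], lexicon[destination]]

message = utter("goto", "agent_green", "landmark_2")
print(message)          # three abstract characters, meaningless to a human reader
# A bot sharing the same lexicon can decode the message back into concepts:
reverse = {symbol: concept for concept, symbol in lexicon.items()}
print([reverse[symbol] for symbol in message])
```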
Ultimately, Mordatch says, these methods can give
machines a deeper grasp of language by showing them why language exists in the
first place. That provides a springboard to real conversation, an interface
computer scientists have long dreamed of but never actually pulled off.
These methods are a significant departure from most of
the latest AI research related to language. Today, top researchers typically
explore methods that seek to mimic human language, not create a new language.
One example is work centered on deep neural networks. In recent years, deep
neural nets—complex mathematical systems that can learn tasks by finding
patterns in vast amounts of data—have proven to be an enormously effective way
of recognizing objects in photos, identifying commands spoken into smartphones,
and more. Now, researchers at places like Google, Facebook, and Microsoft are
applying similar methods to language understanding, looking to identify
patterns in English conversation, so far with limited success.
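For a concrete picture of what "finding patterns in data" means, the short NumPy sketch below trains a tiny two-layer network on the XOR pattern. It is a deliberately miniature stand-in for the far larger deep networks the article refers to; the layer sizes, learning rate, and iteration count are arbitrary choices for the illustration.

```python
# Miniature illustration of a neural network learning a pattern from data:
# a tiny 2-8-1 network trained on XOR with plain NumPy. This is a toy
# stand-in for the much larger "deep" networks discussed above; the layer
# sizes, learning rate, and iteration count are arbitrary choices here.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # the XOR pattern

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 1.0

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)            # hidden layer
    out = sigmoid(h @ W2 + b2)          # network's current guess
    # backpropagation: adjust weights to shrink the gap between guess and data
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))   # with enough training this typically approaches [0, 1, 1, 0]
```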
Mordatch and his collaborators, including OpenAI
researcher and University of California, Berkeley professor Pieter Abbeel,
question whether that approach can ever work, so they’re starting from a
completely different place. “For agents to intelligently interact with humans,
simply capturing the statistical patterns is insufficient,” their paper reads.
“An agent possesses an understanding of language when it can use language
(along with other tools such as non-verbal communication or physical acts) to
accomplish goals in its environment.”
With early humans, language came from necessity. They
learned to communicate because it helped them get other things done and gave
them an advantage over other animals. These OpenAI researchers want to create the same
dynamic for bots. In their virtual world, the bots not only learn their own
language, they also use simple gestures and actions to communicate—pointing in
a particular direction, for instance, or actually guiding each other from place
to place—much like babies do. That too is language, or at least a path to
language.
Still, many AI researchers think the deep neural network
approach, figuring out language through statistical patterns in data, will
eventually work. "They're essentially also capturing statistical patterns but in a
simple, artificial environment,” says Richard Socher, an AI researcher at
Salesforce, of the OpenAI team. “That’s fine to make progress in an interesting
new domain, but the abstract claims a bit too much.”
Nonetheless, Mordatch’s project shows that analyzing vast
amounts of data isn’t the only path. Systems can also learn through their own
actions, and that may ultimately provide very different benefits. Other
researchers at OpenAI teased much the same idea when they unveiled a much
larger and more complex virtual world they call Universe. Among other things,
Universe is a place where bots can learn to use common software applications,
like a web browser. This too happens through a form of reinforcement learning,
and for Ilya Sutskever, one of the founders of OpenAI, the arrangement is yet
another path to language understanding. An AI can only browse the internet if it
understands the natural way humans talk. Meanwhile, Microsoft is tackling
language through other forms of reinforcement learning, and researchers at
Stanford are exploring their own methods that involve collaboration between
bots.
In the end, success will likely come from a combination
of techniques, not just one. And Mordatch is proposing yet another
technique—one where bots don’t just learn to chat. They learn to chat in a
language of their own making. As humans have shown, that is a powerful idea.