INSIDE FACEBOOK'S NEW ROBOTICS LAB, WHERE AI AND MACHINES FRIEND ONE ANOTHER
AUTHOR: MATT SIMON, 05.20.19, 06:30 AM
What Facebook is experimenting with is a bit
different. “What we wanted to try out is to instill this notion of curiosity,”
says Franziska Meier, an AI research scientist at Facebook. That’s how humans
learn to manipulate objects: Children are driven by curiosity about their
world. They don’t try something new, like yanking a cat’s tail, because they have to, but because they wonder what might
happen if they do, much to the detriment of poor old Whiskers.
So
whereas a robot like Brett refines its motions bit by bit—drawing closer to its
target, resetting, and drawing closer still with the next try—Facebook’s robot
arm might get closer and then veer way off course. That’s because the
researchers aren’t rewarding it for incremental success, but instead giving it
freedom to try non-optimal movements. It’s trying new things, like a baby, even
if those things don’t seem particularly rational in the moment.
Each
movement provides data for the system: What did this application of torque in each joint do to move
the arm to that particular spot?
“Although it didn't achieve the task, it gave us more data, and the variety of
data we get by exploring like this is bigger than if we weren't exploring,”
says Meier. This concept is known as self-supervised learning—the robot tries
new things and updates a software model, which can help it predict the
consequences of its actions.
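The loop Meier describes can be sketched in a few lines of Python. The structure is the point, not the specifics: the linear "arm" dynamics and least-squares model below are toy assumptions for illustration, not Facebook's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden "true" dynamics of a toy two-joint arm: torques -> displacement.
# In the real world this mapping is unknown; the robot must discover it.
TRUE_DYNAMICS = np.array([[0.8, 0.1],
                          [0.2, 0.9]])

def apply_torque(torque):
    """The physical world: where the arm actually ends up."""
    return TRUE_DYNAMICS @ torque

# Self-supervised exploration: try random, unrewarded torques and
# record every (action, outcome) pair -- even the "non-optimal" ones.
actions, outcomes = [], []
for _ in range(50):
    torque = rng.uniform(-1, 1, size=2)    # curiosity, not reward-seeking
    actions.append(torque)
    outcomes.append(apply_torque(torque))  # every try yields data

# Fit a forward model from the collected data by least squares.
A = np.vstack(actions)
Y = np.vstack(outcomes)
solution, *_ = np.linalg.lstsq(A, Y, rcond=None)
learned = solution.T

# The model now predicts the consequences of actions it never tried.
test_torque = np.array([0.5, -0.3])
prediction = learned @ test_torque
```

Because every movement, optimal or not, produces an action–outcome pair, the exploring arm recovers the dynamics without any external reward signal, and its predictions match what the "world" would actually do.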
The idea is to make machines more flexible and
less single-minded about a task. Think of it like completing a maze. Maybe a
robot knows the direction it needs to head to find the exit. It might try over
and over to get there, even if it inevitably hits a dead end in that pursuit.
“Since you're so focused on moving in that single direction, you might walk
yourself into corners,” says University of Oslo roboticist Tønnes Nygaard, who
has developed a four-legged robot that learns to walk on its own.
(Facebook is also experimenting with getting a six-legged robot to walk on its
own, but wasn’t able to demonstrate that research for my visit to the lab.)
“Instead of being so focused on saying, I want to go in the direction I know the solution is in, instead I try to focus
on just going to explore. I'm going to try finding new solutions.”
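Nygaard's maze analogy is easy to make concrete. In the toy Python maze below (a hypothetical illustration, not code from either lab), a greedy agent that only ever steps toward the exit walks itself into a corner, while a "curious" agent that ignores the goal's direction and simply keeps trying unvisited cells finds the way around.

```python
# Toy 5x5 maze: '#' is a wall. Start at (0, 0), exit at (0, 4).
MAZE = [
    "S.#.E",
    "..#..",
    "..#..",
    "..#..",
    ".....",
]
START, EXIT = (0, 0), (0, 4)

def neighbors(cell):
    """Legal moves: stay on the grid, don't walk into walls."""
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 5 and 0 <= nc < 5 and MAZE[nr][nc] != "#":
            yield (nr, nc)

def greedy(start, goal, max_steps=50):
    """Only take steps that reduce distance to the exit."""
    dist = lambda a: abs(a[0] - goal[0]) + abs(a[1] - goal[1])
    cell = start
    for _ in range(max_steps):
        if cell == goal:
            return True
        better = [n for n in neighbors(cell) if dist(n) < dist(cell)]
        if not better:
            return False  # walked itself into a corner
        cell = better[0]
    return False

def curious(start, goal):
    """Ignore the goal's direction; just keep trying unvisited cells."""
    stack, seen = [start], {start}
    while stack:
        cell = stack.pop()
        if cell == goal:
            return True
        for n in neighbors(cell):
            if n not in seen:
                seen.add(n)
                stack.append(n)
    return False
```

The greedy agent stalls at (0, 1), pressed against the wall, because every remaining legal move looks like a step backward; the curious one reaches the exit by first wandering "away" from it, down and around the wall.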
So
those incoherent movements Facebook’s robot arm is making are really a form of
curiosity, and it’s that kind of curiosity that could lead to machines that
more readily adapt to their environment. Think of a home robot that’s trying to
load a dishwasher. Maybe it thinks the most efficient way to put a mug on the
top rack is to come at it sideways, in which case it bumps the edge of the
rack. It’s deterministic, in a sense: Trial and error, over and over, leads it
down this less-than-ideal path, where it’s trying to get better at loading the
rack sideways, and now it can’t back up and try something new. A robot loaded
with curiosity, on the other hand, can experiment and learn that it’s actually
best to come in from above. It’s flexible, not deterministic, which in theory
would allow it to adapt more easily to dynamic human environments.
NOW,
AN EASIER, faster way to teach robots how to do stuff is with
simulations. That is, build a digital world for, say, an animated stick figure,
and let it teach itself to run through
the same kind of trial and error. The method is relatively fast, because the
iterations happen much quicker when the digital “machines” aren’t constrained
by real-world laws of physics.
But while simulation might be more efficient,
it’s an imperfect representation of the real world—there’s just no way you can
fully simulate the complexities of dynamic human environments. So while
researchers have been able to train robots to do something first in simulation,
then port that knowledge to robots in the real world, the transition is extremely messy,
because the digital and physical worlds are mismatched.
Doing everything in the physical world may be
slower and more laborious, but the data you get is more pure, in a sense. “If
it works in the real world, it actually works,” says Roberto Calandra, an AI
research scientist at Facebook. If you’re designing supremely complex robots,
you can’t simulate the chaos of the human world that they’ll be tackling.
They’ve got to live it. This will be
particularly important as the tasks we give robots get more complex. A robot
lifting car doors on a factory line is relatively easy to just code, but to
navigate the chaos of a home (clutter on the floor, children, children on the
floor…) a robot will have to adapt on its own with creativity, so it doesn’t
get stuck in feedback loops. A coder can’t hold its hand for every obstacle.
Facebook’s project is
part of a great coming-together of AI and robots. Traditionally, these worlds
have largely kept to themselves. Yes, robots have always needed AI to operate
autonomously, like using machine vision to sense the world. But while tech
giants like Google and Amazon and Facebook have pushed major advances in the
development of AI in purely digital contexts—getting computers to recognize
objects in images, for example, by having humans label those objects
first—robots have remained fairly dumb as researchers have focused on getting
the things to move without falling on their faces.
That’s
beginning to change, as AI researchers start using robots as platforms to
refine software algorithms. Facebook, for instance, might want to teach a robot
to solve a series of tasks on its own. That, in turn, might inform the
development of AI assistants that can better plan a sequence of actions for
you, the user. “It's the same problem,” says Yann LeCun, Facebook’s chief AI scientist. “If you solve it in one
context, you'll solve it in the other context.”
In
other words, AI is making robots smarter, but robots are also now helping
advance AI. “A lot of the interesting problems and interesting questions that
are connected with AI—particularly the future of AI, how can we get to
human-level AI—are currently being addressed by people who work in robotics,”
says LeCun. “Because you can't cheat with robots. You can't have thousands of
people labeling images for you.”
Still:
What would a digital behemoth like Facebook want with robots? At the moment,
the company says this research isn’t connected to a particular product
pipeline.
But keep in mind that Facebook is in the
connecting-people business (well, and in the
ad-selling business). “We think robotics is going to be an important
component of this—think about things like telepresence,” says LeCun. Facebook
is already a hardware company, after all, what with the Oculus VR system and
Portal, its video conference device. “The logical succession of this is perhaps
things that you can control from a distance.” (Which, if you’ve been reading WIRED recently, will
certainly bring up questions of privacy and security.)
But we’re getting ahead of ourselves. Every home
robot so far, save for the Roomba, has failed, in
part because the machines just aren’t smart or useful enough. No robot is particularly smart. But maybe
Facebook’s flailing robot arm can help fix that.