Robots are getting better at teaching other robots how to do things. Oh.

BY STAN SCHROEDER May 10, 2017

Teaching a robot how to do something is usually done either by programming it to perform a specific task or by demonstrating the task for the robot to observe and imitate. So far, however, the latter method hasn't been accurate enough for robots to transfer their knowledge to other robots.

That's changing, however, thanks to researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and their new teaching method, called C-LEARN. It could have far-reaching consequences by making it easier for non-programmers to teach robots how to perform certain tasks. Even better, it allows robots to teach other robots how to perform the same tasks.

The system does this by giving the robot a knowledge base with information on how to reach and grab different objects. Then, using a 3D interface, the robot is shown a single demonstration of how to, say, pick up a cylinder or open a door. The task is divided into important moments called "keyframes": the steps the robot needs to take in order to perform the task correctly.
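
To make the keyframe idea concrete, here is a minimal, purely illustrative Python sketch. All names, poses, and constraints below are hypothetical stand-ins, not the actual C-LEARN code or data: a single demonstration is reduced to a few keyframes, each keyframe is matched against a small knowledge base of prior motion constraints, and the resulting plan can then be handed to another robot.

# Illustrative sketch only; hypothetical names, not MIT CSAIL's C-LEARN code.
# A demonstration is reduced to a handful of keyframes, each matched against a
# small knowledge base of known motion constraints before being replayed.

from dataclasses import dataclass

@dataclass
class Keyframe:
    name: str    # e.g. "approach", "grasp", "lift"
    pose: tuple  # target end-effector position (x, y, z) in meters

# Prior knowledge base: constraints the robot already knows for each step.
KNOWLEDGE_BASE = {
    "approach": "keep gripper open, move straight toward the object",
    "grasp":    "close gripper, keep wrist orientation fixed",
    "lift":     "move vertically, keep the object level",
}

def learn_from_demo(demo_keyframes):
    """Match each demonstrated keyframe to prior knowledge, producing a plan."""
    plan = []
    for kf in demo_keyframes:
        constraint = KNOWLEDGE_BASE.get(kf.name, "no prior knowledge; copy pose only")
        plan.append((kf, constraint))
    return plan

def transfer(plan, robot_name):
    """Hand the same keyframes and constraints to another robot."""
    print(f"Transferring {len(plan)} keyframes to {robot_name}:")
    for kf, constraint in plan:
        print(f"  {kf.name} at {kf.pose}: {constraint}")

# Single demonstration of picking up a cylinder, reduced to three keyframes.
demo = [
    Keyframe("approach", (0.40, 0.10, 0.30)),
    Keyframe("grasp",    (0.40, 0.10, 0.12)),
    Keyframe("lift",     (0.40, 0.10, 0.35)),
]

plan = learn_from_demo(demo)
transfer(plan, "Optimus")   # learned by one robot...
transfer(plan, "Atlas")     # ...and handed to another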

The test bed for C-LEARN is a small, two-armed bomb-disposal robot called Optimus. Once Optimus learns how to perform a task, it can transfer that knowledge to Atlas, a six-foot-tall, 400-pound robot we've written about several times in the past.

“By combining the intuitiveness of learning from demonstration with the precision of motion-planning algorithms, this approach can help robots do new types of tasks that they haven’t been able to learn before, like multi-step assembly using both of their arms,” Claudia Pérez-D’Arpino, a PhD student who wrote a paper on C-LEARN together with MIT professor Julie Shah, said in a statement.

The approach, Pérez-D’Arpino says, is similar to how humans learn: they take what they already know and tie that information to a demonstration of how something is done. “We can’t magically learn from a single demonstration, so we take new information and match it to previous knowledge about our environment,” she said.

And that's the point: teaching robots the way we teach humans would make the process faster and easier. Right now, the C-LEARN method can't handle certain problems, such as collision avoidance, but the team hopes to advance the system further by adding more human-like learning capabilities.


