BERKELEY, Calif.

In an engineering laboratory here, a robot has learned to screw the cap on a bottle, even figuring out the need to apply a subtle backward twist to find the thread before turning it the right way.

This and other activities — including putting a clothes hanger on a rod, inserting a block into a tight space and placing a hammer at the correct angle to remove a nail from a block of wood — may seem like pedestrian actions. But they represent significant advances in robotic learning, by a group of researchers at the University of California, Berkeley, who have trained a two-armed machine to match human dexterity and speed in performing these tasks.

The significance of the work lies in its machine-learning approach, which links powerful software techniques so that the robot can learn new tasks rapidly with a relatively small amount of training.

The new approach includes a powerful artificial intelligence technique known as "deep learning," which has previously been used to achieve major advances in both computer vision and speech recognition. Now the researchers have found that it can also be used to improve the actions of robots working on tasks that require both machine vision and touch.

'Whole new momentum'

The group, led by the roboticist Pieter Abbeel and the computer vision specialist Trevor Darrell, with Sergey Levine, a postdoctoral researcher, and Chelsea Finn, a graduate student, said they were surprised by how well the approach worked compared with previous efforts. By combining several types of pattern-recognition algorithms known as neural networks, the researchers were able to train a robot to perfect an action, such as correctly inserting one Lego block into another, in a relatively small number of attempts.

"I would argue this is what has given artificial intelligence the whole new momentum it has right now," Abbeel said.

Roboticists said that the value of the Berkeley technology would lie in quickly training robots for new tasks and, ultimately, in developing machines that learn independently.

"It used to take hours on up to months of careful programming to give a robot the hand-eye coordination necessary to do a task," said Gary Bradski, a roboticist and computer vision specialist who founded OpenCV, a freely available software library for machine vision. "This new work enables robots to just learn the task by doing it."

The way a player catches a baseball

Previously, the Berkeley lab had received attention for training a robot to fold laundry. Although that demonstration was viewed almost 1 million times on YouTube, the video had been sped up more than 50 times. The new videos show the robots performing tasks at human speed.

The researchers acknowledge that they are still far away — perhaps more than a decade — from their goal of building an autonomous robot such as a home worker or elder care machine.

To explain the new approach, the researchers draw the analogy of how baseball players track and then catch balls. Humans do not do mathematical calculations to discern the trajectory of the ball. Rather, they fix the ball in their field of vision and adjust their running speed until they arrive at the spot where the ball lands.

This, in effect, short-circuits a complicated set of relations between perception and motion control, substituting a simple technique that works in a wide variety of situations.
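The article does not describe the researchers' code, but the baseball analogy can be illustrated with a toy feedback loop: rather than computing where the ball will land, the "fielder" simply corrects a fraction of the gap between itself and the ball at each observation. The function name, gain value, and numbers below are purely illustrative.

```python
def catch(ball_positions, start=0.0, gain=0.5):
    """Move toward each observed ball position using only feedback,
    never predicting the trajectory."""
    pos = start
    for ball in ball_positions:
        error = ball - pos   # where the ball appears relative to us
        pos += gain * error  # correct a fraction of the error
    return pos

# The fielder converges on the landing point without ever modeling it.
final = catch([10.0] * 20)
print(final)
```

The same closed-loop idea, applied with learned corrections instead of a fixed gain, is what lets a controller work across a wide variety of situations.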

Until now, robots have generally relied on a variety of techniques laboriously programmed for each specific case. The researchers instead connected the neural networks, which learn from both visual and sensory information, directly to the controller software that oversees the robot's motions. As a result, they achieved a significant advance in the speed and accuracy of learning.
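In outline, wiring perception directly to control means a single network maps camera features and joint sensors to motor commands in one forward pass. The sketch below, with invented sizes and random weights standing in for trained parameters, shows only the shape of such a policy, not the Berkeley group's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, for illustration only.
IMG_FEATURES = 32    # features from a stand-in vision network
JOINT_SENSORS = 7    # joint-angle readings from the arm
MOTOR_COMMANDS = 7   # one torque command per joint

# Random weights stand in for what training would normally learn.
W_vision = rng.normal(size=(IMG_FEATURES, 64)) * 0.1
W_joint = rng.normal(size=(JOINT_SENSORS, 64)) * 0.1
W_out = rng.normal(size=(64, MOTOR_COMMANDS)) * 0.1

def policy(image_features, joint_angles):
    """Map perception directly to motor commands in one forward pass."""
    hidden = np.tanh(image_features @ W_vision + joint_angles @ W_joint)
    return hidden @ W_out  # torque commands, one per joint

torques = policy(rng.normal(size=IMG_FEATURES), rng.normal(size=JOINT_SENSORS))
print(torques.shape)
```

Because the whole mapping is one differentiable function, training can adjust the vision and control weights together, which is what allows a new task to be learned in relatively few attempts.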

"We are trying to come up with a general learning framework that allows the robot to learn new things on its own," Abbeel said.