Sitting on a stool several feet from a long-armed robot, Dr. Danyal Fer wrapped his fingers around two metal handles.

As he moved the handles, the robot mimicked each small motion. This is how surgeons like Fer have long used robots when operating on patients. They can remove a prostate, for example, while sitting at a computer console.

But Fer and his fellow researchers at the University of California, Berkeley, hope to advance the technology. Fer let go of the handles, and a new kind of computer software took over. The robot started to move entirely on its own.

With one claw, the machine lifted a tiny plastic ring from an equally tiny peg on the table, passed the ring from one claw to the other, moved it across the table and hooked it onto a new peg. The robot completed the task as quickly as it had when guided by Fer.

The project is a part of a wider effort to bring artificial intelligence into the operating room. Using many of the same technologies that underpin self-driving cars, autonomous drones and warehouse robots, researchers are working to automate surgical robots, too. These methods are a long way from everyday use, but progress is accelerating.

"It is an exciting time," said Russell Taylor, a professor at Johns Hopkins University and former IBM researcher known in the academic world as the father of robotic surgery.

The aim is not to remove surgeons from the operating room but to ease their load and perhaps even raise success rates by automating particular phases of surgery.

Robots can already exceed human accuracy on some surgical tasks, like placing a pin into a bone. The hope is that automated robots can bring greater accuracy to other tasks, such as incisions or suturing, and reduce the risks that come with overworked surgeons.

Five years ago, researchers with the Children's National Health System in Washington, D.C., designed a robot that could automatically suture the intestines of a pig during surgery. But it came with an asterisk: The researchers had implanted tiny markers in the pig's intestines that emitted a near-infrared light and helped guide the robot. The method is far from practical.

In recent years, researchers have improved the power of computer vision, which could allow robots to perform surgical tasks without such markers.

The change is driven by what are called neural networks, mathematical systems that can learn skills by analyzing vast amounts of data. By analyzing thousands of cat photos, for instance, a neural network can learn to recognize a cat. In much the same way, a neural network can learn from images captured by surgical robots.
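That learning process can be made concrete with a small sketch. What follows is a minimal, illustrative example in Python using the PyTorch library, with random stand-in tensors in place of real photographs; it is not the researchers' code, and every shape and name in it is assumed.

```python
import torch
import torch.nn as nn

# Tiny convolutional network: an image goes in, a score for each
# class ("cat" vs. "not cat") comes out.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 2),  # for 64x64 RGB inputs
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: eight 64x64 RGB "photos" with 0/1 labels. A real
# system would train on thousands of labeled photos instead.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

# One training step: predict, measure the error, nudge the weights.
logits = model(images)
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Repeated over many thousands of such steps, the network's internal weights gradually settle on patterns that distinguish one class of image from another.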

Surgical robots are equipped with cameras that record each operation. By analyzing images that show how a surgeon guides the robot, a neural network can learn the same skills.
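In machine-learning terms this approach is called imitation learning, or behavior cloning: a network is trained to predict the operator's control action for each recorded camera frame. Here is a hedged sketch of the idea, with all shapes, names and data illustrative rather than taken from the Berkeley system:

```python
import torch
import torch.nn as nn

# Policy network: a camera frame goes in, a predicted control action
# comes out (here, an illustrative six-number motion command).
policy = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=5, stride=2),
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=5, stride=2),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 13 * 13, 6),  # for 64x64 RGB inputs
)

# Stand-in demonstration data: frames recorded while a surgeon
# teleoperated the robot, paired with the commands the surgeon gave.
frames = torch.randn(8, 3, 64, 64)
actions = torch.randn(8, 6)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# One training step: make the network imitate the recorded actions.
predicted = policy(frames)
loss = nn.functional.mse_loss(predicted, actions)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Once trained, the same network can be fed live camera frames and asked to produce the control commands itself.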

This is how the Berkeley researchers have been working to automate their robot, which is based on the da Vinci Surgical System, a two-armed machine that helps surgeons perform more than 1 million procedures a year.

But over months and years of use, the metal cables in the robot's arms stretched and bent in small ways. Human operators could compensate. The automated system could not.

The team decided to build a new neural network that analyzed the robot's mistakes and learned how much precision it was losing each day. "It learns how the robot's joints evolve over time," said Brijen Thananjeyan, a doctoral student. Once the automated system could account for this change, the robot could match the performance of human operators.
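Conceptually, that compensation can be sketched as a small regression problem: learn the gap between where the arm is commanded to go and where it actually ends up, then subtract the predicted gap from each command. The code below is a hypothetical illustration of that scheme, not the Berkeley team's model; every name, shape and number is assumed.

```python
import torch
import torch.nn as nn

n_joints = 6  # assumed number of controllable joints

# Small network that predicts, for a commanded joint position, how far
# off the arm will land as its cables stretch and bend.
corrector = nn.Sequential(
    nn.Linear(n_joints, 64),
    nn.ReLU(),
    nn.Linear(64, n_joints),  # predicted position error per joint
)

# Stand-in calibration data: commanded positions and the measured gap
# between commanded and actual positions for each one.
commanded = torch.randn(256, n_joints)
measured_error = 0.05 * torch.randn(256, n_joints)

optimizer = torch.optim.Adam(corrector.parameters(), lr=1e-3)
for _ in range(200):  # simple regression to fit the current drift
    predicted_error = corrector(commanded)
    loss = nn.functional.mse_loss(predicted_error, measured_error)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At run time, offset each command by the predicted error before
# sending it to the arm.
target = torch.randn(1, n_joints)
corrected_command = target - corrector(target)
```

Because the drift keeps changing, such a corrector would need to be refit periodically, which matches the researchers' description of a system that tracks how the joints evolve over time.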