MIT researchers find a simpler way to teach robots new skills

27 Apr 2022

Concept image of the study, showing how the robotic arms could pick up an object at different angles. Image: MIT

A machine learning method helped a robot pick up objects it had never encountered, and had it ready for a new task within 10 to 15 minutes.

Researchers at MIT say they have developed a simpler way to teach robots new skills after only a few physical demonstrations, which could improve their effectiveness in manual labour tasks.

In a study, the research team said machine learning systems usually find it difficult to handle object orientations. For example, a robot can be trained to pick up a specific item – but if the object is placed at a different angle, the robot perceives it as a completely new scenario.


To deal with this challenge, the researchers said they created a new type of neural network model called a neural descriptor field (NDF), which can learn the geometry of classes of items.

Using this NDF model, the team was able to teach a robot a new skill of picking up a never-before-seen object with only a few physical examples.

“Our major contribution is the general ability to much more efficiently provide new skills to robots that need to operate in more unstructured environments where there could be a lot of variability,” co-lead author of the paper Anthony Simeonov said.

“The concept of generalisation by construction is a fascinating capability because this problem is typically so much harder.”

With the help of a depth camera, the NDF model was able to compute the geometric representation for a specific item using a 3D point cloud, which is a set of coordinates in three dimensions.
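A point cloud of this kind is typically produced by back-projecting each pixel of a depth image through the camera's intrinsics. The sketch below shows the standard back-projection, assuming a simple pinhole camera model with illustrative focal lengths and principal point (the article does not specify the camera setup used in the study).

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into an (N, 3) point cloud.

    fx, fy: focal lengths in pixels; (cx, cy): principal point.
    These intrinsics are assumed values for illustration only.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # pinhole model: x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop pixels with no depth reading (z == 0)
    return points[points[:, 2] > 0]

# Toy 2x2 depth image with 1 m depth everywhere -> 4 points in 3D
cloud = depth_to_point_cloud(np.ones((2, 2)), fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(cloud.shape)
```

The resulting array of XYZ coordinates is the kind of input the NDF model consumes when computing an object's geometric representation.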

The model was also designed with a property called equivariance. This means that if the model is shown an image of a mug and then another image of the mug on its side, it understands that the second mug is the same object that has been rotated.

“This equivariance is what allows us to much more effectively handle cases where the object you observe is in some arbitrary orientation,” Simeonov said.
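Equivariance can be stated concretely: rotating the input point cloud rotates the model's output in the same way. The toy check below uses a trivially equivariant "descriptor" (points relative to their centroid) to illustrate the property; the real NDF is a learned neural field, so this is a sketch of the concept, not of the researchers' model.

```python
import numpy as np

def centred(points):
    """Toy rotation-equivariant descriptor: each point relative to the
    object's centroid. Rotating the input rotates the output identically."""
    return points - points.mean(axis=0)

# Rotation of 90 degrees about the z-axis
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

rng = np.random.default_rng(0)
mug = rng.random((50, 3))   # stand-in point cloud of a mug
tilted = mug @ R.T          # the same mug, rotated onto its side

# Equivariance: describing the rotated mug equals rotating the description
print(np.allclose(centred(tilted), centred(mug) @ R.T))
```

An upright mug and the same mug on its side therefore yield descriptions related by the known rotation, which is what lets the system recognise them as one object.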

The researchers said their new method had a success rate of 85pc on pick-and-place tasks in testing, while the best baseline comparisons could only achieve 45pc. Success in these tests meant picking up a new object and placing it on a target location. Within 10 to 15 minutes, the robot was also ready to perform a new pick-and-place task.

While these tests were seen as a success, the researchers noted that the current method only works within particular object categories. For example, a robot trained to pick up mugs won't be able to pick up headphones, since their shape is too different from the objects the robot was trained on.

“In the future, scaling it up to many categories or completely letting go of the notion of category altogether would be ideal,” Simeonov said.


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com