New algorithm developed to make house robots better cleaners

12 Jan 2015


A new algorithm that helps robots manipulate and interact with objects could bring about the kind of robotic home help long imagined in science fiction films.

The breakthrough in object recognition algorithms was made at the Massachusetts Institute of Technology’s (MIT) Computer Science and Artificial Intelligence Laboratory, where researchers found that home robots could make the best test beds for future robot artificial intelligence.

The MIT researchers determined that the mobility of current home robots, combined with their interaction with static objects, gives them a significant ability to master object recognition by capturing images of objects from multiple angles.

This multi-angle imaging is crucial to a robot’s depth perception, in much the same way that we humans use both of our eyes to judge how far away an object is.
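To illustrate the binocular principle at work, the standard stereo-vision relation recovers depth from the disparity between two camera views. The numbers below are hypothetical and this snippet is not part of the MIT algorithm; it is just a minimal sketch of how two viewpoints yield distance.

```python
# Illustrative stereo-depth calculation: depth from binocular disparity.
# For a rectified stereo pair, depth Z = f * B / d, where f is focal
# length (pixels), B is the baseline between cameras (metres) and d is
# the disparity (pixels) of a feature between the two images.

def depth_from_disparity(focal_length_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Return depth in metres for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A feature seen 40 px apart by two cameras 0.1 m apart (f = 800 px)
# lies about 2 m away.
print(depth_from_disparity(800, 0.1, 40))  # 2.0
```

The further away an object is, the smaller the disparity, which is exactly why extra viewpoints become valuable when a single pair of views is ambiguous.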

Illustration of manipulating robot arm via Christine Daniloff and Jose-Luis Olivares/MIT

The groundwork for future home-help robots

Lead author of the study, Lawson Wong, and his team showed that, even with a currently available algorithm, imaging an object from multiple perspectives made a robot four times as likely to recognise it as imaging from a single perspective, the approach used by many current home robots.

With the addition of the new algorithm developed by Wong and his team, however, a robot’s object recognition ability improved by as much as tenfold.

While current household robots are most commonly autonomous vacuum cleaners, a future of bipedal home helpers with working arms will only be useful if their object recognition and depth perception reach the level now being developed by the MIT team.

Speaking about the research, Wong said: “If you just took the output of looking at it from one viewpoint, there’s a lot of stuff that might be missing, or it might be the angle of illumination or something blocking the object that causes a systematic error in the detector. One way around that is just to move around and go to a different viewpoint.”
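The intuition Wong describes, that independent views wash out single-view errors, can be sketched with a simple probabilistic fusion of per-view detector scores. This is a minimal naive-Bayes-style illustration with made-up confidence numbers, not the MIT team’s actual algorithm, which is considerably more sophisticated.

```python
# Minimal sketch: fusing per-view detector confidences for one object
# hypothesis (e.g. "this is a mug"), assuming each view gives a
# conditionally independent probability estimate.
import math

def combine_views(per_view_probs: list[float]) -> float:
    """Fuse per-view probabilities by summing log-odds (naive Bayes)."""
    log_odds = sum(math.log(p / (1 - p)) for p in per_view_probs)
    return 1 / (1 + math.exp(-log_odds))

# One ambiguous view at 60% confidence stays at 60%, but three such
# independent views push the combined confidence noticeably higher.
print(round(combine_views([0.6]), 3))            # 0.6
print(round(combine_views([0.6, 0.6, 0.6]), 3))  # 0.771
```

The independence assumption is the weak point in practice: if an occlusion or lighting problem affects several views the same way, as Wong notes, simply adding more of those views helps less, which is why physically moving to a genuinely different viewpoint matters.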

Home-help robot image via Shutterstock


Colm Gorey is a journalist with Siliconrepublic.com

editorial@siliconrepublic.com