Latest AI has ‘imagination’ to see into the future


Image: Adrian Candela/Shutterstock


In the effort to build truly safe and smart AI for autonomous cars, researchers have given a new technology the ability to ‘see’ into the future.

The classic scenario presented when discussing the safety of autonomous cars is a child running in front of the vehicle, and the artificial intelligence (AI) deciding whether it should try to save the life of the child or the driver it is carrying.

But what about an object it has never seen before, one that it could never expect to come across?

That is the problem that researchers are attempting to solve at the University of California, Berkeley, with help from a new AI that can ‘imagine’ what it expects to see in the future – something its developers are calling ‘visual foresight’.

The Berkeley team, which plans to present its findings at an upcoming conference, has built a system that works by taking images from a robot’s cameras and predicting what it will see when it performs a particular sequence of movements.

These robot imaginations are still modest in scope, as predictions can only be made seven seconds into the future, but that is enough for the robot to figure out how to move objects around on a table without disturbing obstacles.

Most importantly, the robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment or what the objects are.
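The planning loop described above can be sketched in a few lines: predict the outcome of several candidate action sequences, then pick the one whose predicted image looks most like the goal. This is purely an illustrative outline, not the Berkeley team’s code; the toy `predict_frames` stand-in (which just shifts the image by the accumulated action) and the pixel-distance scoring are assumptions made for the sketch.

```python
import numpy as np

def predict_frames(frame, actions):
    # Stand-in for a learned video-prediction model: shift the image
    # by the accumulated (dx, dy) of the action sequence. A real model
    # would be a neural network trained on the robot's own experience.
    dx = int(round(sum(a[0] for a in actions)))
    dy = int(round(sum(a[1] for a in actions)))
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

def plan(frame, goal, candidates):
    """Pick the action sequence whose predicted outcome best matches the goal."""
    def score(actions):
        predicted = predict_frames(frame, actions)
        return np.abs(predicted - goal).sum()  # pixel-wise distance to goal image
    return min(candidates, key=score)

# Toy example: a bright blob at column 2 should end up at column 5.
frame = np.zeros((8, 8)); frame[2, 2] = 1.0
goal = np.zeros((8, 8)); goal[2, 5] = 1.0
candidates = [[(1, 0)] * 3, [(0, 1)] * 3, [(-1, 0)] * 3]
best = plan(frame, goal, candidates)
print(best)  # the three-step rightward sequence wins
```

In the published visual-foresight work the candidate sequences are refined iteratively rather than scored once, but the principle is the same: the robot never needs a physics model, only a learned predictor of its own camera images.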

A different kind of DNA

“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualise how different behaviours will affect the world around it,” said Sergey Levine, whose lab developed the technology.

“This can enable intelligent planning of highly flexible skills in complex real-world situations.”

This development is based on a deep-learning technology called dynamic neural advection (DNA), which predicts how pixels in an image will move from one frame to the next based on the robot’s actions. Recent iterations have greatly improved its accuracy.
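The ‘advection’ idea – moving the pixels already in the image rather than synthesising new ones – can be illustrated with a minimal sketch. The uniform, hand-written flow field here is an assumption for the example; in the actual model the per-pixel motion is predicted by a neural network conditioned on the robot’s action.

```python
import numpy as np

def advect(frame, flow):
    """Move each pixel along its predicted (dy, dx) displacement.

    frame: (H, W) image; flow: (H, W, 2) integer per-pixel displacement.
    Pixels advected outside the image boundary are dropped.
    """
    h, w = frame.shape
    out = np.zeros_like(frame)
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y, x]
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny, nx] = frame[y, x]
    return out

frame = np.zeros((4, 4)); frame[1, 1] = 1.0
flow = np.tile(np.array([0, 1]), (4, 4, 1))  # every pixel moves one step right
next_frame = advect(frame, flow)
print(next_frame[1, 2])  # 1.0
```

Because the next frame is assembled from where pixels came from rather than invented from scratch, the predictions stay grounded in what the camera actually saw.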

The technology is remarkably similar to the recent finding that human brains also ‘see’ around walls and objects by predicting upcoming obstacles, albeit at a significantly faster rate than this new AI.

“Children can learn about their world by playing with toys, moving them around, grasping and so forth,” Levine explained. “Our aim with this research is to enable a robot to do the same: to learn about how the world works through autonomous interaction.”

Colm Gorey is a journalist with Siliconrepublic.com

editorial@siliconrepublic.com