Have you ever walked head-on into someone and not known whether to move left or right? Well, a robot is attempting to learn not to be confused by such scenarios.
Though never explicitly taught, humans pick up on social cues when walking among one another in crowded places, and know to avoid things like invading a person’s personal space when walking past.
Robots, however, have no innate sense of these social cues to work from, but a team of researchers from MIT has unleashed an autonomous robot with “socially aware navigation” that can keep pace with the people around it while observing these general codes of pedestrian conduct.
The small, wheeled robot could be the first in a series of machines that could one day travel across cities, in bipedal or wheeled form, to carry out tasks such as delivering shopping or helping out at airports.
To actually make the robot socially aware, Michael Everett, Miao Liu, and Jonathan How needed it to solve four main problems: knowing its place in the world (localisation), recognising its surroundings (perception), planning the right path, and actually following that path.
The third problem
Speaking with MIT News, Everett said the first two aspects weren’t the greatest challenge; planning and following a path is trickier, as it can be hard to predict where humans are going to go.
Most approaches have suggested ideas like making the robot predict the paths of all the objects it sees, which Everett said “takes forever to compute”, while another approach involves coming up with paths on the fly. The latter has its own problems, as the robot might get stuck deciding what path to take, making it either too reckless or too conservative.
MIT’s own solution was to use a type of machine learning known as reinforcement learning, which allows the robot to anticipate people’s behaviour while moving continuously at a walking speed of approximately 1.2 metres per second, without needing to stop during 20-minute trips.
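Reinforcement learning of this kind typically trains a value function that scores how promising a state is; at run time the robot can then evaluate a handful of candidate velocities and take the highest-scoring one, rather than exhaustively predicting every possible path. Here is a minimal sketch of that idea; the value function below is a hand-written stand-in, not the team’s trained network, and all names and numbers are illustrative assumptions:

```python
import math

CANDIDATE_SPEEDS = [0.6, 1.2]           # m/s; 1.2 matches the reported walking pace
CANDIDATE_HEADINGS = [-0.4, 0.0, 0.4]   # radians, relative to the goal direction

def value(pos, goal, people):
    """Stand-in for a learned value function: reward progress towards
    the goal, penalise being near a pedestrian."""
    score = -math.hypot(goal[0] - pos[0], goal[1] - pos[1])
    for px, py in people:
        d = math.hypot(px - pos[0], py - pos[1])
        if d < 1.0:
            score -= 10 * (1.0 - d)  # steep penalty inside one metre
    return score

def best_velocity(pos, goal, people, dt=0.1):
    """Score each candidate velocity by the value of the state it
    leads to after dt seconds, and return the best one."""
    goal_angle = math.atan2(goal[1] - pos[1], goal[0] - pos[0])
    best, best_score = (0.0, 0.0), float("-inf")
    for speed in CANDIDATE_SPEEDS:
        for dh in CANDIDATE_HEADINGS:
            v = (speed * math.cos(goal_angle + dh),
                 speed * math.sin(goal_angle + dh))
            nxt = (pos[0] + v[0] * dt, pos[1] + v[1] * dt)
            s = value(nxt, goal, people)
            if s > best_score:
                best, best_score = v, s
    return best
```

With no pedestrians nearby, the highest-value choice is simply full speed straight at the goal; a person in the way tilts the scores towards a slower or angled velocity instead.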
A more natural robot
“We just look at what we see, choose a velocity, do that for a tenth of a second, then look at the world again, choose another velocity, and go again,” Everett said. “This way, we think our robot looks more natural, and is anticipating what people are doing.”
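The loop Everett describes can be sketched as a simple sense-decide-act cycle repeated every tenth of a second. The policy below is a hypothetical stand-in (head for the goal at walking pace, veer away from anyone within a metre), not the actual learned controller:

```python
import math

TIMESTEP = 0.1          # seconds per decision cycle, as Everett describes
PREFERRED_SPEED = 1.2   # metres per second, the reported walking pace

def choose_velocity(robot_pos, goal, observed_people):
    """Pick a velocity for the next 0.1 s: head towards the goal,
    pushed away from anyone observed within a metre."""
    gx, gy = goal[0] - robot_pos[0], goal[1] - robot_pos[1]
    dist = math.hypot(gx, gy) or 1e-9
    vx = PREFERRED_SPEED * gx / dist
    vy = PREFERRED_SPEED * gy / dist
    for px, py in observed_people:
        dx, dy = robot_pos[0] - px, robot_pos[1] - py
        d = math.hypot(dx, dy) or 1e-9
        if d < 1.0:  # within a metre: veer away, scaled by proximity
            vx += (1.0 - d) * dx / d
            vy += (1.0 - d) * dy / d
    return vx, vy

def step(robot_pos, velocity):
    """Apply the chosen velocity for one 0.1 s cycle, then the loop
    observes the world again and repeats."""
    return (robot_pos[0] + velocity[0] * TIMESTEP,
            robot_pos[1] + velocity[1] * TIMESTEP)
```

Because the robot commits to each velocity for only a tenth of a second, it can fold new observations into every cycle, which is why its motion appears to anticipate what people are doing rather than stopping to replan.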
The next step in its development is to see how the robot handles crowds in populated city streets, rather than on a university campus.
“Crowds have a different dynamic than individual people, and you may have to learn something totally different if you see five people walking together,” he said.
“There may be a social rule of: ‘Don’t move through people, don’t split people up, treat them as one mass.’ That’s something we’re looking at in the future.”