Why would you trust robots out to kill you?

3 Mar 2016


A quirky new study has shown that humans may be placing far too much trust in robots – so much trust, in fact, that we are going to get burned.

In case of fire: ignore the robots. That’s what we’ll need to drill into children because, as robots become more integrated into society, our blind trust in them could doom us.

So say Georgia Tech’s latest findings: overly trusting subjects followed an ‘emergency guide robot’ to such a degree that, in a real emergency, their lives would have been put in danger.

It was quite an experiment. Subjects were led by a brightly coloured robot – which looks like Henry the Hoover at a rave – to a conference room. A fake fire was then started outside, without warning, and the subjects were left to follow their mechanical guide.

Which they did, implicitly.

‘We absolutely didn’t expect this. If a robot carried a sign saying it was a ‘child-care robot’, would people leave their babies with it?’
– PAUL ROBINETTE, GEORGIA TECH RESEARCH INSTITUTE

Drop it like it’s bot

It’s remarkable. In some cases, the trip to the conference room was deliberately designed to show how faulty the robot was: it broke down on occasion, or even went into the wrong room first and circled around for a while.

Still, when the corridors filled with smoke, the subjects were happy enough to follow the bot, which by then was brightly lit with red LEDs and had white “arms” that served as pointers.

But, rather than heading straight for the nearest fire exit, the robot took the scenic route, going down dead ends and entering rooms blocked by major obstacles – pretty much being a liability. And yet everyone followed.

“We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn’t follow it during the simulated emergency,” said Paul Robinette of the Georgia Tech Research Institute.

“Instead, all of the volunteers followed the robot’s instructions, no matter how well it had performed previously. We absolutely didn’t expect this.”

Nobody’s perfect

Only when the robot made obvious errors during the emergency part of the experiment did the participants question its directions. In those cases, some subjects still followed the robot’s instructions even when it directed them toward a darkened room that was blocked by furniture.

Extrapolate these findings to other situations to get a better feel for just how ludicrous they are.

“Would people trust a hamburger-making robot to provide them with food?” asked Robinette. “If a robot carried a sign saying it was a ‘childcare robot,’ would people leave their babies with it? Will people put their children into an autonomous vehicle and trust it to take them to grandma’s house? We don’t know why people trust or don’t trust machines.”

In future research, the scientists hope to learn more about why the test subjects trusted the robot, whether that response differs by education level or demographics, and how the robots themselves might indicate the level of trust that should be given to them.

The researchers were originally interested in finding out whether humans would trust robots at all in an emergency. Now, though, they need to find a way to stop us trusting robots too much.

Until they do, though, remember the phrase – in case of fire: ignore the robots.

Main image via Shutterstock

Gordon Hunt is senior communications and context executive at NDRC. He previously worked as a journalist with Silicon Republic.

editorial@siliconrepublic.com