In emergencies, people may trust robots too much for their own safety, a new study suggests. In a mock building fire, test subjects followed instructions from an “Emergency Guide Robot” even after the machine had proven itself unreliable — and after some participants were told that the robot had broken down.
The research was designed to determine whether building occupants would trust a robot meant to help them evacuate a high-rise in case of fire or other emergency. But the researchers were surprised to find that the test subjects followed the robot’s instructions even when the machine’s behavior should not have inspired trust.
“People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault,” said Alan Wagner, a senior research engineer at the Georgia Tech Research Institute (GTRI). “In our studies, test subjects followed the robot’s directions even to the point where it might have put them in danger had this been a real emergency.”
The research, believed to be the first to study human-robot trust in an emergency situation, was presented at the 2016 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016).
The work was sponsored by the Air Force Office of Scientific Research. The research team also included GTRI Research Engineer Paul Robinette, Professor Ayanna Howard of Georgia Tech’s School of Electrical and Computer Engineering, and Georgia Tech Fire Marshal Larry Labbe.
— John Toon