Expressive robots are more trusted by humans
Monday, 22 August 2016

When it comes to assistive robots that interact with humans, engineers must keep in mind not just factors like efficiency, safety and accuracy, but also traits that promote human trust in the robots.

Humans are more likely to trust assistive robot partners that are expressive and communicative, even when they make mistakes, according to new research from University College London and the University of Bristol. The finding, however, comes with an unexpected flip side.

These human-like traits could lead users to lie to the robot in order to avoid hurting its "feelings".

The researchers wanted to investigate how a robot might regain a user's trust after making a mistake, and how it could communicate its erroneous behaviour to somebody working with it, either at home or at work.

They experimented with a humanoid assistive robot that helped users make an omelette. The robot was tasked with passing the eggs, salt and oil, but in two of the experimental conditions it dropped one of the polystyrene eggs and then attempted to make amends.

Although the communicative, expressive robot took 50 per cent longer to complete the task, the majority of users still preferred it. Users reacted well to an apology from the communicative robot and were particularly receptive to its sad facial expression.

At the end of the interaction, the communicative robot was programmed to ask participants whether they would give it the job of kitchen assistant; they could only answer yes or no and were unable to qualify their answers. Some were reluctant to answer and most looked uncomfortable. One person was under the impression that the robot looked sad when he said ‘no’, even though it had not been programmed to appear so.

Another complained of emotional blackmail, and a third went as far as to lie to the robot. According to one of the researchers, Adriana Hamacher, the study shows that interactions between humans and robots may be complicated by humans' emotional projections onto the machines.

"Having seen it display human-like emotion when the egg dropped, many participants were now pre-conditioned to expect a similar reaction and therefore hesitated to say no; they were mindful of the possibility of a display of further human-like distress," said Hamacher.

"Human-like attributes, such as regret, can be powerful tools in negating dissatisfaction but we must identify with care which specific traits we want to focus on and replicate. If there are no ground rules then we may end up with robots with different personalities, just like the people designing them."