How the Timing and Magnitude of Robot Errors Influence Peoples’ Trust of Robots in an Emergency Scenario

  • Alessandra Rossi
  • Kerstin Dautenhahn
  • Kheng Lee Koay
  • Michael L. Walters
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10652)

Abstract

Trust is a key factor in human users’ acceptance of robots in a home or other human-oriented environment. Humans should be able to trust that they can interact safely with their robot. Robots will sometimes make errors, due to mechanical or functional failures, so it is important that a domestic robot exhibits acceptable interactive behaviours when committing and recovering from an error. In order to define these behaviours, it is first necessary to consider that errors can have consequences of different severity. We hypothesise that the severity of the consequences and the timing of a robot’s different types of erroneous behaviours during an interaction may have different impacts on users’ attitudes towards a domestic robot. In this study we used an interactive storyboard presenting ten different scenarios in which a robot performed different tasks under five different conditions. Each condition included the ten tasks performed by the robot either correctly or with small or big errors; the error conditions were complemented with four correctly performed behaviours. At the end of each experimental condition, participants were presented with an emergency scenario to evaluate their current trust in the robot. We conclude that there is a correlation between the magnitude of an error performed by the robot and the corresponding loss of the human’s trust in the robot.
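
To make the reported relationship concrete, below is a minimal analysis sketch (not the authors’ code; all data values, the 0–2 error-magnitude coding, and the 1–5 trust scale are hypothetical placeholders) showing how a monotonic association between error magnitude and post-interaction trust ratings could be tested with Spearman’s rank correlation, a statistic suited to ordinal measures of this kind.

    # Illustrative sketch only -- hypothetical data, not the study's dataset.
    # Error magnitude: 0 = no error, 1 = small error, 2 = big error.
    # Trust rating: assumed 1-5 Likert-style response after the emergency scenario.
    from scipy.stats import spearmanr

    error_magnitude = [0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2]
    trust_rating    = [5, 4, 5, 4, 3, 3, 4, 2, 1, 2, 2, 1]

    # Spearman's rho tests for a monotonic relationship between two ordinal
    # variables; a negative rho indicates that trust decreases as error
    # magnitude grows.
    rho, p_value = spearmanr(error_magnitude, trust_rating)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
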

Keywords

Human-Robot Interaction · Social robotics · Robot companion · Trust in robots · Trust recovery

Notes

Acknowledgments

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 642667 (Safety Enables Cooperation in Uncertain Robotic Environments - SECURE).


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Alessandra Rossi (1)
  • Kerstin Dautenhahn (1)
  • Kheng Lee Koay (1)
  • Michael L. Walters (1)
  1. Adaptive Systems Research Group, University of Hertfordshire, Hatfield, UK
