Human Perceptions of the Severity of Domestic Robot Errors

  • Alessandra Rossi
  • Kerstin Dautenhahn
  • Kheng Lee Koay
  • Michael L. Walters
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10652)

Abstract

As robots increasingly take part in daily living activities, humans will have to interact with them in domestic and other human-oriented environments. We can expect that domestic robots will exhibit occasional mechanical, programming or functional errors, as occur with other electrical consumer devices. For example, such errors could include software faults, dropping objects due to gripper malfunctions, picking up the wrong object, or faulty navigation due to unclear camera images or noisy laser scanner data. It is therefore important for a domestic robot to behave acceptably when exhibiting and recovering from an error situation. As a first step, the current study investigated human users’ perceptions of the severity of various categories of potential errors that are likely to be exhibited by a domestic robot. We conducted a questionnaire-based study in which participants rated the severity of 20 different scenarios where a domestic robot made an error. Our findings indicate that people’s perceptions of the magnitude of the errors presented in the questionnaire were consistent, and we found no significant differences in ratings due to age or gender. We clearly identified scenarios that participants rated as having limited consequences (“small” errors) and scenarios rated as having severe consequences (“big” errors). Future work will use these two sets of consistently rated robot error scenarios as baselines for repeated-interaction studies investigating human perceptions of robot tasks and error severity.

Keywords

Human-Robot Interaction · Social robotics · Robot companion

Acknowledgments

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 642667 (Safety Enables Cooperation in Uncertain Robotic Environments – SECURE).


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Alessandra Rossi (1)
  • Kerstin Dautenhahn (1)
  • Kheng Lee Koay (1)
  • Michael L. Walters (1)

  1. Adaptive Systems Research Group, University of Hertfordshire, Hatfield, UK
