Don’t Worry, We’ll Get There: Developing Robot Personalities to Maintain User Interaction After Robot Error

  • David Cameron
  • Emily Collins
  • Hugo Cheung
  • Adriel Chua
  • Jonathan M. Aitken
  • James Law
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9793)

Abstract

Human-robot interaction (HRI) often considers the human impact of a robot that assists a person in achieving a goal or a shared task. There are, however, many circumstances during HRI in which a robot may make errors that are inconvenient or even detrimental to its human partners. Using the ROBOtic GUidance and Interaction DEvelopment (ROBO-GUIDE) model on the Pioneer LX platform as a case study, and drawing on insights from social psychology, we examine key factors for a robot that has made such a mistake to preserve individuals' perceived competence of, and trust towards, the robot. We outline an experimental approach to test these proposals.

Keywords

Human-robot interaction · Design · Guidance · Psychology

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • David Cameron¹
  • Emily Collins¹
  • Hugo Cheung¹
  • Adriel Chua¹
  • Jonathan M. Aitken¹
  • James Law¹

  1. Sheffield Robotics, University of Sheffield, Sheffield, UK
