Is It My Looks? Or Something I Said? The Impact of Explanations, Embodiment, and Expectations on Trust and Performance in Human-Robot Teams

  • Ning Wang
  • David V. Pynadath
  • Ericka Rovira
  • Michael J. Barnes
  • Susan G. Hill
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10809)

Abstract

Trust is critical to the success of human-robot interaction. Research has shown that people trust a robot more appropriately when they have an accurate understanding of its decision-making process. The Partially Observable Markov Decision Process (POMDP) is one such decision-making model, but its quantitative reasoning is typically opaque to people. This lack of transparency is exacerbated when a robot can learn, which improves its decision making but also makes it less predictable. Recent research has shown promise in calibrating human-robot trust by automatically generating explanations of POMDP-based decisions. In this work, we explore factors that can interact with such explanations in influencing human decision-making in human-robot teams. We focus on explanations that include quantitative expressions of uncertainty, and we experiment with two common design factors of a robot: its embodiment and its communication strategy when it makes an error. The results help us identify valuable properties and dynamics of the human-robot trust relationship.
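To make the notion of "explanations with quantitative expressions of uncertainty" concrete, the Python sketch below shows a minimal Bayesian belief update for a hypothetical two-state scenario ("danger" vs. "safe") and phrases the resulting posterior as a confidence statement. The state names, sensor model, thresholds, and message wording are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Minimal sketch (assumptions, not the paper's system): a Bayesian belief
# update over a hypothetical binary state, with the posterior reported as
# a quantitative confidence inside the robot's explanation.

def update_belief(prior_danger, observation, hit_rate=0.8, false_alarm=0.1):
    """Return P(danger | observation) for a binary sensor reading."""
    if observation == "alarm":
        likelihood_danger, likelihood_safe = hit_rate, false_alarm
    else:  # "clear" reading
        likelihood_danger, likelihood_safe = 1 - hit_rate, 1 - false_alarm
    unnorm_danger = likelihood_danger * prior_danger
    unnorm_safe = likelihood_safe * (1 - prior_danger)
    return unnorm_danger / (unnorm_danger + unnorm_safe)

def explain(p_danger, threshold=0.5):
    """Pair the recommendation with the robot's stated confidence."""
    if p_danger >= threshold:
        return (f"I recommend taking precautions; "
                f"I am {p_danger:.0%} confident the area is dangerous.")
    return (f"I recommend proceeding normally; "
            f"I am {1 - p_danger:.0%} confident the area is safe.")

if __name__ == "__main__":
    belief = update_belief(prior_danger=0.3, observation="alarm")
    print(explain(belief))  # e.g. "... 77% confident the area is dangerous."
```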

Acknowledgment

This project is funded by the U.S. Army Research Laboratory. Statements and opinions expressed do not necessarily reflect the position or the policy of the United States Government, and no official endorsement should be inferred.


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Ning Wang (1)
  • David V. Pynadath (1)
  • Ericka Rovira (2)
  • Michael J. Barnes (3)
  • Susan G. Hill (3)
  1. Institute for Creative Technologies, University of Southern California, Los Angeles, USA
  2. U.S. Military Academy, West Point, USA
  3. U.S. Army Research Laboratory, Hillandale, USA