Abstract
The performance of human-robot teams depends on human-robot trust, which in turn depends on appropriate robot-to-human transparency. A key way for robots to build trust through transparency is by providing appropriate explanations for their actions. While most previous work on robot explanation generation has focused on robots' ability to provide post-hoc explanations upon request, in this paper we instead examine proactive explanations generated before actions are taken, and the effect these have on human-robot trust. Our results suggest a positive relationship between proactive explanations and human-robot trust, and raise fundamental new questions about the effects of proactive explanations on humans' mental models and the nature of human-robot trust.
Acknowledgments
This work was supported by an Early Career Faculty grant from NASA’s Space Technology Research Grants Program.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Zhu, L., Williams, T. (2020). Effects of Proactive Explanations by Robots on Human-Robot Trust. In: Wagner, A.R., et al. (eds.) Social Robotics. ICSR 2020. Lecture Notes in Computer Science, vol. 12483. Springer, Cham. https://doi.org/10.1007/978-3-030-62056-1_8
DOI: https://doi.org/10.1007/978-3-030-62056-1_8
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-62055-4
Online ISBN: 978-3-030-62056-1