Effects of Proactive Explanations by Robots on Human-Robot Trust

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12483)

Abstract

The performance of human-robot teams depends on human-robot trust, which in turn depends on appropriate robot-to-human transparency. A key way for robots to build trust through transparency is by providing appropriate explanations for their actions. While most previous work on robot explanation generation has focused on robots’ ability to provide post-hoc explanations upon request, in this paper we instead examine proactive explanations, generated before actions are taken, and their effect on human-robot trust. Our results suggest a positive relationship between proactive explanations and human-robot trust, and raise new questions about the effects of proactive explanations on humans’ mental models and the fundamental nature of human-robot trust.
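The distinction the abstract draws is one of timing: a proactive explanation precedes the action, whereas a post-hoc explanation is produced only after a human asks about it. The following is a minimal illustrative sketch of that distinction, not the authors' implementation; the `Robot` class, its methods, and the template explanations are all hypothetical.

```python
# Illustrative sketch of proactive vs. post-hoc explanation timing.
# All names here (Robot, act, explain_before, explain_on_request) are
# hypothetical, not from the paper.

class Robot:
    def __init__(self, proactive: bool):
        self.proactive = proactive
        self.log: list[str] = []  # transcript of utterances and actions

    def explain_before(self, action: str) -> str:
        # Template explanation; a real system would generate this from
        # the robot's task model.
        return f"I am going to {action} because it is the next step in my task."

    def act(self, action: str) -> list[str]:
        """Perform an action; a proactive robot explains first."""
        if self.proactive:
            self.log.append(self.explain_before(action))  # explanation precedes action
        self.log.append(f"<performs: {action}>")
        return self.log

    def explain_on_request(self, action: str) -> str:
        # Post-hoc: produced only after a human asks "why did you do that?"
        return f"I did {action} because it was the next step in my task."
```

Under this framing, the experimental manipulation reduces to whether the explanation utterance appears before the action in the interaction transcript or only in response to a later query.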



Acknowledgments

This work was supported by an Early Career Faculty grant from NASA’s Space Technology Research Grants Program.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Lixiao Zhu.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Zhu, L., Williams, T. (2020). Effects of Proactive Explanations by Robots on Human-Robot Trust. In: Wagner, A.R., et al. (eds.) Social Robotics. ICSR 2020. Lecture Notes in Computer Science, vol 12483. Springer, Cham. https://doi.org/10.1007/978-3-030-62056-1_8

  • DOI: https://doi.org/10.1007/978-3-030-62056-1_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-62055-4

  • Online ISBN: 978-3-030-62056-1

  • eBook Packages: Computer Science (R0)
