Explaining Before or After Acting? How the Timing of Self-Explanations Affects User Perception of Robot Behavior

  • Conference paper
  • Social Robotics (ICSR 2021)

Abstract

Explanations are a useful tool to improve human-robot interaction, and the question of what a good explanation should entail has received much attention. While a robot’s behavior can be justified upon request after its execution, a robot can also signal its intention to act prior to execution. In this paper, we report results from a pre-registered study on the effects of a social robot proactively giving a self-explanation before vs. after the execution of an undesirable behavior. Contrary to our expectations, we found that explaining a behavior before its execution did not yield positive effects on users’ perception of the robot or the behavior. Instead, the robot’s behavior was perceived as less desirable when explained before rather than after the execution. Exploratory analyses further revealed that even though participants felt less uncertain about what was going to happen next, they also felt less in control, reported lower trust, and had lower contact intentions toward a robot that explained before acting.

This research was supported by the German Federal Ministry of Education and Research (BMBF) in the project ‘VIVA’ (FKZ 16SV7959).


Notes

  1. https://aspredicted.org/dc6db.pdf.

  2. Please note that this enumeration is partially inconsistent with the pre-registered hypothesis.

  3. Original videos: https://dl.acm.org/doi/abs/10.1145/3319502.3374802#sec-supp.

  4. https://www.soscisurvey.de/.

  5. https://www.softbankrobotics.com/emea/en/pepper.

  6. www.mturk.com.


Author information

Correspondence to Sonja Stange.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Stange, S., Kopp, S. (2021). Explaining Before or After Acting? How the Timing of Self-Explanations Affects User Perception of Robot Behavior. In: Li, H., et al. (eds.) Social Robotics. ICSR 2021. Lecture Notes in Computer Science, vol. 13086. Springer, Cham. https://doi.org/10.1007/978-3-030-90525-5_13


  • DOI: https://doi.org/10.1007/978-3-030-90525-5_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-90524-8

  • Online ISBN: 978-3-030-90525-5

  • eBook Packages: Computer Science, Computer Science (R0)
