Receiving Robot’s Advice: Does It Matter When and for What?

  • Conference paper
  • Social Robotics (ICSR 2020)

Abstract

Two experimental online studies investigate the persuasive effect of a robot's advice on human moral decision-making. Using two decision scenarios of varying complexity, we examined the effect of the point in time at which a robot gives its advice: participants either received the advice immediately after the decision scenario, or first stated an initial opinion, then received the advice and had the chance to adjust their decision. The analysis explored whether this timing affects adaptation to the robot's advice and decision certainty, as well as the evaluation of the robot. The assumption that people rely more on the robot's advice when they receive it immediately, and that these people show reduced decision certainty, was supported only in the complex decision task condition.

Author information

Correspondence to Carolin Straßmann.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Straßmann, C., Eimler, S.C., Arntz, A., Grewe, A., Kowalczyk, C., Sommer, S. (2020). Receiving Robot's Advice: Does It Matter When and for What? In: Wagner, A.R., et al. (eds.) Social Robotics. ICSR 2020. Lecture Notes in Computer Science, vol. 12483. Springer, Cham. https://doi.org/10.1007/978-3-030-62056-1_23

  • DOI: https://doi.org/10.1007/978-3-030-62056-1_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-62055-4

  • Online ISBN: 978-3-030-62056-1

  • eBook Packages: Computer Science, Computer Science (R0)
