
On Further Reflection... Moral Reflections Enhance Robotic Moral Persuasive Capability

  • Conference paper
Persuasive Technology (PERSUASIVE 2023)

Abstract

To enable robots to exert positive moral influence, we need to understand the impacts of robots’ moral communications, the ways robots can phrase their moral language to be most clear and persuasive, and the ways that these factors interact. Previous work has suggested, for example, that for certain types of robot moral interventions to be successful (i.e., moral interventions grounded in particular ethical frameworks), those interventions may need to be followed by opportunities for moral reflection, during which humans can critically engage with not only the contents of the robot’s moral language, but also with the way that moral language connects with their social-relational ontology and broader moral ecosystem. We conceptually replicate this prior work (N = 119) using a design that more precisely manipulates moral reflection. Our results confirm that opportunities for moral reflection are indeed critical to the success of robotic moral interventions—regardless of the ethical framework in which those interventions are grounded.



Acknowledgements

This work was supported in part by NSF grant IIS-1909847 and in part by Air Force Office of Scientific Research Grant 16RT0881f.

Author information

Corresponding author

Correspondence to Ruchen Wen.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wen, R., Kim, B., Phillips, E., Zhu, Q., Williams, T. (2023). On Further Reflection... Moral Reflections Enhance Robotic Moral Persuasive Capability. In: Meschtscherjakov, A., Midden, C., Ham, J. (eds) Persuasive Technology. PERSUASIVE 2023. Lecture Notes in Computer Science, vol 13832. Springer, Cham. https://doi.org/10.1007/978-3-031-30933-5_19


  • DOI: https://doi.org/10.1007/978-3-031-30933-5_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-30932-8

  • Online ISBN: 978-3-031-30933-5

  • eBook Packages: Computer Science, Computer Science (R0)
