
Perceived Moral Patiency of Social Robots: Explication and Scale Development

International Journal of Social Robotics

Abstract

As robots are increasingly integrated into human social spheres, they will be put in situations in which they may be perceived as moral patients—the actual or possible targets of humans’ (im)moral actions, by which they may realize some benefit or suffering. However, little is understood about this potential, in part due to the lack of an operationalization for measuring humans’ perceptions of machine moral patiency. This paper explicates the notion of perceived moral patiency (PMP) of robots and reports the results of three studies that develop a scale for measuring robot PMP and explore how its measurements relate to relevant social dynamics. We ultimately present an omnibus six-factor scale, with each factor capturing the extent to which people believe a robot deserves a specific kind of moral consideration as specified by moral foundations theory (care, fairness, loyalty, authority, purity, liberty). The omnibus PMP scale’s factor structure is robust across both in-principle and in-context evaluations, and the scale measures contextualized (local) PMP as distinct from heuristic (global) PMP.
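To make the six-factor structure concrete, the sketch below shows one way per-foundation subscale scores could be computed from item-level responses. This is a minimal, hypothetical illustration: the function name, item counts, and response values are invented for exposition and do not reproduce the published instrument (the validated items are available in the project materials at https://osf.io/w8vre/).

```python
# Hypothetical scoring sketch for a six-factor PMP-style scale.
# The factor names follow moral foundations theory as described in
# the abstract; item counts and response values are placeholders.

from statistics import mean

FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "purity", "liberty"]

def score_pmp(responses: dict[str, list[float]]) -> dict[str, float]:
    """Average the item ratings within each foundation subscale.

    `responses` maps each foundation to one respondent's item
    ratings (e.g., on a 1-7 Likert-type scale); returns one
    subscale mean per factor.
    """
    missing = [f for f in FOUNDATIONS if f not in responses]
    if missing:
        raise ValueError(f"missing subscales: {missing}")
    return {f: mean(responses[f]) for f in FOUNDATIONS}

# Example respondent with three placeholder items per foundation.
respondent = {
    "care": [6, 5, 6], "fairness": [4, 5, 4], "loyalty": [3, 3, 2],
    "authority": [2, 3, 3], "purity": [1, 2, 2], "liberty": [5, 4, 5],
}
print(score_pmp(respondent))  # one mean per foundation
```

Reporting subscale means rather than a single composite preserves the scale’s multidimensional structure, consistent with the distinct kinds of moral consideration the six factors capture.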


Notes

  1. https://www.youtube.com/watch?v=M8YjvHYbZ9w, at timestamp 0:27.

  2. https://www.engineeredarts.co.uk/robot/robothespian/.

  3. https://wiki.engineeredarts.co.uk/Projected_Face.

  4. We also report chi-square goodness of fit for completeness, but note that the null hypothesis tested (that the observed data are a “perfect fit” for the specified model) is rarely supported in CFA [45]; the sketch following these notes shows why.
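For readers unfamiliar with why exact-fit chi-square tests are so rarely passed, a standard maximum-likelihood formulation (textbook material, not drawn from this paper) makes the sample-size dependence explicit:

```latex
% Standard ML chi-square for covariance structure models:
% S = observed covariance matrix of the p indicators,
% \Sigma(\hat\theta) = model-implied covariance matrix,
% q = number of free parameters, N = sample size.
\[
  \chi^2 = (N - 1)\, F_{\mathrm{ML}}, \qquad
  F_{\mathrm{ML}} = \ln\lvert \Sigma(\hat\theta) \rvert - \ln\lvert S \rvert
    + \operatorname{tr}\!\left( S\, \Sigma(\hat\theta)^{-1} \right) - p
\]
\[
  df = \frac{p(p+1)}{2} - q
\]
```

Because the test statistic grows linearly with N − 1, even trivial misspecifications produce a significant chi-square in large samples, which is why approximate fit indices are conventionally reported alongside the exact-fit test.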

References

  1. Boston Dynamics (2015) Introducing Spot Classic (previously Spot). https://youtu.be/M8YjvHYbZ9w

  2. Coeckelbergh M (2016) Is it wrong to kick a robot? Towards a relational and critical robot ethics and beyond. In: What social robots can and should do: proceedings of Robophilosophy 2016. IOS Press, Amsterdam, pp 7–8

  3. Sparrow R (2016) Kicking a robot dog. In: Proceedings of HRI’16, p 229. https://doi.org/10.1109/HRI.2016.7451756

  4. Gunkel DJ (2018) The other question: can and should robots have rights? Ethics Inf Technol 20(2):87–99. https://doi.org/10.1007/s10676-017-9442-4

  5. Foot P (1967) The problem of abortion and the doctrine of double effect. Oxf Rev 5:5–15

  6. Gray K, Wegner DM (2009) Moral typecasting: divergent perceptions of moral agents and moral patients. J Personal Soc Psychol 96(3):505–520. https://doi.org/10.1037/a0013748

  7. Banks J (2019) A perceived moral agency scale: development and validation of a metric for humans and social machines. Comput Hum Behav 90:363–371. https://doi.org/10.1016/j.chb.2018.08.028

  8. Eden A, Grizzard M, Lewis RJ (2012) Moral psychology and media theory. In: Media and the moral mind. Routledge, New York, pp 1–25

  9. Sullins JP (2006) When is a robot a moral agent? Int Rev Inform Ethics 6:23–30

  10. Gunkel DJ (2012) The machine question: critical perspectives on AI, robots, and ethics. MIT Press, Cambridge, MA

  11. Anderson DL (2013) Machine intentionality, the moral status of machines, and the composition problem. In: Philosophy and theory of artificial intelligence. Springer, pp 321–334

  12. Coeckelbergh M (2021) Should we treat Teddy Bear 2.0 as a Kantian dog? Four arguments for the indirect moral standing of personal social robots, with implications for thinking about animals and humans. Mind Mach 31:337–360. https://doi.org/10.1007/s11023-020-09554-3

  13. Friedman C (2020) Human-robot moral relations: human interactants as moral patients of their own agential moral actions toward robots. In: Artificial intelligence research. Springer, pp 3–20

  14. Banks J (2021) From warranty voids to uprising advocacy: human action and the perceived moral patiency of social robots. Front Rob AI 8:670503. https://doi.org/10.3389/frobt.2021.670503

  15. Banks J (2020) Optimus Primed: media cultivation of robot mental models and social judgments. Front Rob AI 7:62. https://doi.org/10.3389/frobt.2020.00062

  16. Mara M, Stein JP, Latoschik ME, Lugrin B, Schreiner C, Hostettler R, Appel M (2021) User responses to a humanoid robot observed in real life, virtual reality, 3D and 2D. Front Psychol 12:633178. https://doi.org/10.3389/fpsyg.2021.633178

  17. Craik K (1943) The nature of explanation. Cambridge University Press, Cambridge, UK

  18. Schneider R (2001) Toward a cognitive theory of literary character: the dynamics of mental-model construction. Style 35(4):607–640

  19. Sparrow R (2004) The Turing triage test. Ethics Inf Technol 6:203–213. https://doi.org/10.1007/s10676-004-6491-2

  20. Keijsers M, Bartneck C (2018) Mindless robots get bullied. In: Proceedings of HRI’18, pp 205–214. https://doi.org/10.1145/3171221.3171266

  21. Ward AF, Olsen AS, Wegner DM (2013) The harm-made mind: observing victimization augments attribution of minds to vegetative patients, robots, and the dead. Psychol Sci 24(8):1437–1445. https://doi.org/10.1177/0956797612472343

  22. Rouse WB, Morris NM (1986) On looking into the black box: prospects and limits in the search for mental models. Psychol Bull 100(3):349–363. https://doi.org/10.1037/0033-2909.100.3.349

  23. Nosek BA (2007) Implicit-explicit relations. Curr Dir Psychol Sci 16(2):65–69. https://doi.org/10.1111/j.1467-8721.2007.00477.x

  24. Banks J (2021) Of like mind: the (mostly) similar mentalizing of robots and humans. Technol Mind Behav 1(2). https://doi.org/10.1037/tmb0000025

  25. Gray K, Waytz A, Young L (2012) The moral dyad: a fundamental template unifying moral judgment. Psychol Inq 23(2):206–215. https://doi.org/10.1080/1047840X.2012.686247

  26. Gordon J-S, Gunkel DJ (2021) Moral status and intelligent robots. South J Philos 60(1):88–117. https://doi.org/10.1111/sjp.12450

  27. Coeckelbergh M (2010) Robot rights? Towards a social-relational justification of moral consideration. Ethics Inf Technol 12:209–221. https://doi.org/10.1007/s10676-010-9235-5

  28. Haidt J (2013) The righteous mind: why good people are divided by politics and religion. Vintage Books, New York

  29. Graham J, Haidt J, Koleva S, Motyl M, Iyer R, Wojcik SP, Ditto PH (2013) Moral foundations theory: the pragmatic validity of moral pluralism. In: Advances in experimental social psychology, vol 47. Academic Press, pp 55–130. https://doi.org/10.1016/B978-0-12-407236-7.00002-4

  30. Iyer R, Koleva S, Graham J, Ditto P, Haidt J (2012) Understanding libertarian morality: the psychological dispositions of self-identified Libertarians. PLoS ONE 7(8):e42366. https://doi.org/10.1371/journal.pone.0042366

  31. Graham J, Haidt J (2012) Sacred values and evil adversaries: a moral foundations approach. In: The social psychology of morality: exploring the causes of good and evil. APA, Washington, DC, pp 11–31

  32. Coeckelbergh M (2018) Why care about robots? Empathy, moral standing, and the language of suffering. Kairos: J Philos Sci 20(1):141–158. https://doi.org/10.2478/kjps-2018-0007

  33. Banks J (2021) Perceived moral patiency of social robots. https://osf.io/5pdnc/

  34. Bartneck C, Kulić D, Croft E, Zoghbi S (2009) Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int J Soc Robot 1(1):71–81. https://doi.org/10.1007/s12369-008-0001-3

  35. Tamul DJ, Elson M, Ivory JD, Hotter JC, Lanier MK, Wolf J, Martínez-Carrillo NI (2020) Moral foundations’ methodological foundations: a systematic analysis of reliability in research using the Moral Foundations Questionnaire [Preprint]. https://psyarxiv.com/shcgv/

  36. Bowman ND, Goodboy AK (2020) Evolving considerations and empirical approaches to construct validity in communication science. Ann Int Commun Assoc 44(3):219–234. https://doi.org/10.1080/23808985.2020.1792791

  37. Fan X (2003) Two approaches for correcting correlation attenuation caused by measurement error: implications for research practice. Educ Psychol Meas 63(6):915–930. https://doi.org/10.1177/0013164403251319

  38. Nomura T, Otsubo K, Kanda T (2018) Preliminary investigation of moral expansiveness for robots. In: Proceedings of ARSO’18, pp 91–96. https://doi.org/10.1109/ARSO.2018.8625717

  39. Schein C, Gray K (2018) The theory of dyadic morality: reinventing moral judgment by redefining harm. Pers Soc Psychol Rev 22(1):32–70. https://doi.org/10.1177/1088868317698288

  40. Schein C (2020) The importance of context in moral judgments. Perspect Psychol Sci 15(2):207–215. https://doi.org/10.1177/1745691620904083

  41. Haidt J, Graham J (2007) When morality opposes justice: conservatives have moral intuitions that liberals may not recognize. Soc Justice Res 20:98–116. https://doi.org/10.1007/s11211-007-0034-z

  42. Curry OS, Chesters MJ, Van Lissa CJ (2019) Mapping morality with a compass: testing the theory of ‘morality-as-cooperation’ with a new questionnaire. J Res Pers 78:106–124. https://doi.org/10.1016/j.jrp.2018.10.008

  43. Kugler M, Jost JT, Noorbaloochi S (2014) Another look at Moral Foundations Theory: do authoritarianism and social dominance orientation explain liberal-conservative differences in “moral” intuitions? Soc Justice Res 27:413–431. https://doi.org/10.1007/s11211-014-0223-5

  44. Curry OS, Chesters MJ, Van Lissa CJ (2019) Mapping morality with a compass: testing the theory of ‘morality-as-cooperation’ with a new questionnaire. J Res Pers 78:106–124. https://doi.org/10.1016/j.jrp.2018.10.008

  45. Goodboy AK, Kline RB (2017) Statistical and practical concerns with published communication research featuring structural equation modeling. Commun Res Rep 34(1):68–77. https://doi.org/10.1080/08824096.2016.1214121

  46. Banks J, Koban K, Haggadone B (in press) Breaking the typecast? Moral status and trust in robotic moral patients. In: Proceedings of Robophilosophy 2022. IOS Press

  47. Koban K, Banks J (in press) Dual-process theory in human-machine communication. In: Guzman AL, McEwen R, Jones S (eds) The SAGE handbook of human-machine communication. SAGE


Author information

Correspondence to Jaime Banks.

Ethics declarations

Statements and Declarations

This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-19-1-006. The authors thank Kevin Koban and Brad Haggadone, who were collaborators on the larger study from which Study 3 data were drawn, and the College of Media and Communication at Texas Tech, where a portion of this work was completed.

The authors have no relevant financial or non-financial interests to declare. All materials for this research are freely available at https://osf.io/w8vre/.

The TTU Human Research Protection Program acknowledged these procedures as exempt under protocols IRB2020-3287 and IRB2021-80; informed consent was obtained from all individual participants included in the study.

All authors contributed to the study conception and design. Material preparation, data collection, some data analysis, and most manuscript writing were performed by JB; most data analysis and manuscript editing were performed by NDB. Both authors read and approved the final manuscript.


Cite this article

Banks, J., Bowman, N. Perceived Moral Patiency of Social Robots: Explication and Scale Development. Int J of Soc Robotics 15, 101–113 (2023). https://doi.org/10.1007/s12369-022-00950-6
