Abstract
Service robots need to adhere to the ethical expectations of the people with whom they interact. Several research groups have developed methods for implementing artificial morality for robots. The hidden assumption underlying this work is that people hold artificial systems to the same ethical standards as humans. In this sense, humans and robots are considered ethically equivalent. However, this assumption remains untested. This paper presents a series of survey-based studies measuring people's opinions about acceptable and ethical behavior for robots and comparing these to opinions about proper and ethical conduct for human caregivers. As such, we assess the assumption that behavior acceptable for humans is also acceptable for robots. Across surveys with different samples and methodologies, we find little evidence for rejecting the assumption that people hold artificial systems to the same ethical standards as humans. In the absence of evidence against this widely held assumption, we conclude that ethical norms governing human-robot interaction can be modeled on existing moral norms regulating human-human interaction.
Data Availability
All materials, data, and code used in this paper can be freely downloaded from https://doi.org/10.17605/OSF.IO/GPCWQ.
Funding
No funding was received for the research covered in this paper. The authors declare that they have no conflict of interest.
Ethics declarations
Informed Consent
Studies 1, 2, 3, and 6 were conducted at the University of Cincinnati and were approved by the Institutional Review Board (IRB). Participants provided informed consent, and all data were anonymized. Studies 4 and 5 were conducted at the University of Hamburg, Germany. In accordance with local regulations, IRB approval was not required for these surveys. Participants provided informed consent, and all data were anonymized.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Vanderelst, D., Jorgenson, C., Ozkes, A.I. et al. Are Robots to be Created in Our Own Image? Testing the Ethical Equivalence of Robots and Humans. Int J of Soc Robotics 15, 85–99 (2023). https://doi.org/10.1007/s12369-022-00940-8