Are Robots to be Created in Our Own Image? Testing the Ethical Equivalence of Robots and Humans

Published in: International Journal of Social Robotics

Abstract

Service robots need to adhere to the ethical expectations of the people with whom they interact. Several research groups have developed methods for implementing artificial morality in robots. The hidden assumption underlying this work is that people hold artificial systems to the same ethical standards as humans; in this sense, humans and robots are considered ethically equivalent. However, this assumption remains untested. This paper presents a series of survey-based studies measuring people's opinions about acceptable and ethical behavior for robots and comparing them to opinions about proper and ethical conduct for human caregivers. In doing so, we assess the assumption that behavior acceptable for humans is also acceptable for robots. Across surveys with different samples and methodologies, we find little evidence for rejecting the assumption that people hold artificial systems to the same ethical standards as humans. In the absence of evidence against this widely held assumption, we conclude that ethical norms governing human-robot interaction can be modeled on existing moral norms regulating human-human interaction.


Data Availability

All materials, data, and code used in this paper can be freely downloaded from https://doi.org/10.17605/OSF.IO/GPCWQ.


Funding

No funding was received for the research covered in this paper. The authors declare that they have no conflict of interest.

Author information


Correspondence to Dieter Vanderelst or Jurgen Willems.

Ethics declarations

Informed Consent

Studies 1, 2, 3, and 6 were conducted at the University of Cincinnati and were approved by the Institutional Review Board (IRB). Participants provided informed consent, and all data were anonymized. Studies 4 and 5 were conducted at the University of Hamburg, Germany. In accordance with local regulations, IRB approval was not required for these surveys. Participants provided informed consent, and all data were anonymized.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (pdf 2257 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Vanderelst, D., Jorgenson, C., Ozkes, A.I. et al. Are Robots to be Created in Our Own Image? Testing the Ethical Equivalence of Robots and Humans. Int J of Soc Robotics 15, 85–99 (2023). https://doi.org/10.1007/s12369-022-00940-8

