This “Ethical Trap” Is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision-Making

Abstract

In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the growing body of research that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they choose between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and decision-making in general. Essentially, the qualitative difference between ethical decisions and general decisions is that ethical decisions must be part of the process of developing ethical expertise within an agent. We use this distinction to examine the publicity surrounding a particular experiment in which a simulated robot attempted to safeguard simulated humans from falling into a hole. We conclude that any suggestion that this simulated robot was making ethical decisions was misleading.



Author information

Correspondence to Keith W. Miller.


About this article


Cite this article

Miller, K.W., Wolf, M.J. & Grodzinsky, F. This “Ethical Trap” Is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision-Making. Sci Eng Ethics 23, 389–401 (2017). https://doi.org/10.1007/s11948-016-9785-y


Keywords

  • Artificial agents
  • Decision-making
  • Ethical expertise