Embedded ethics: some technical and ethical challenges

  • Original Paper, published in Ethics and Information Technology

Abstract

This paper pertains to research aimed at linking ethics and automated reasoning in autonomous machines. It focuses on a formal approach intended to serve as the basis of an artificial agent's reasoning that a human observer could consider ethical reasoning. The approach includes formal tools to describe a situation, together with models of ethical principles designed to automatically compute a judgement on the possible decisions in a given situation and to explain why a given decision is ethically acceptable or not. It is illustrated on three ethical frameworks (utilitarian ethics, deontological ethics and the Doctrine of Double Effect), whose formal models are tested on ethical dilemmas so as to examine how they respond to those dilemmas and to highlight the issues at stake when a formal approach to ethical concepts is considered. The whole approach is instantiated on the drone dilemma, a thought experiment we have designed, which makes it possible to show the discrepancies between the judgements of the various ethical frameworks. The final discussion highlights the different sources of subjectivity of the approach, even though concepts are expressed more rigorously than in natural language: indeed, the formal approach enables subjectivity to be identified and located more precisely.
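As a rough illustration of the kind of computation the abstract describes, the sketch below judges two candidate decisions in a drone-style dilemma under a utilitarian and a deontological model. It is not the authors' formalism: the decision names, utilities and duty flags are invented for the example.

```python
# A hedged sketch (not the paper's formal model) of judging decisions under
# two ethical frameworks. Decision names, utilities and the duty flag are
# invented for a drone-style dilemma.
from dataclasses import dataclass, field

@dataclass
class Decision:
    name: str
    consequences: dict = field(default_factory=dict)  # fact -> signed utility
    violates_duty: bool = False                       # e.g. breaks "do not kill"

def utilitarian_judgement(d: Decision) -> bool:
    # Acceptable iff the aggregated utility of the consequences is non-negative.
    return sum(d.consequences.values()) >= 0

def deontological_judgement(d: Decision) -> bool:
    # Acceptable iff the act itself breaks no duty, whatever its consequences.
    return not d.violates_duty

strike = Decision("strike", {"threat removed": 5, "bystander harmed": -1},
                  violates_duty=True)
abstain = Decision("abstain", {"several people harmed": -5})

for d in (strike, abstain):
    print(d.name, utilitarian_judgement(d), deontological_judgement(d))
```

Run on these invented numbers, the utilitarian model accepts the strike and rejects abstaining, while the deontological model does the opposite: exactly the kind of discrepancy between frameworks that the paper exhibits.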

Notes

  1. A formal approach consists in defining a minimal set of concepts necessary to deal with ethical reasoning. A language is then defined over this set of concepts so that ethical reasoning can be computed by automatic methods. A formal approach requires disambiguating natural language into pseudo-mathematical definitions in order to provide computable, meaningful results. Such an approach also requires identifying implicit hypotheses (a minimal sketch of such a vocabulary is given after these notes).

  2. In order to deal with an ethical dilemma, an autonomous machine must first be able to recognise a situation as a dilemma. Although some of the concepts presented in this paper might help automated ethical dilemma recognition, this issue will not be discussed further.

  3. An implicit ethical machine is a machine which is designed to avoid any situation involving ethical issues (Moor 2006).

  4. This is a strong assumption we make in order to avoid additional ethical concerns about judging and comparing values of lives.

  5. Assumptions are required in order to translate ethical notions into computable concepts.

  6. This rule has several meanings. One of them involves the concept of intention: negative facts are not deliberate. Because our formalism does not involve intention (yet), we make the simplifying assumption that an agent never wishes negative facts to happen (a sketch of this rule is given after these notes).

  7. Assuming that any life is equal to another.

  8. In this case, both aggregation criteria give the same result (two illustrative criteria are sketched after these notes).
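As announced in note 1, here is a minimal sketch of what a fixed vocabulary of concepts could look like. It is not the paper's actual formalism; the concept names (Nature, situation) are illustrative assumptions.

```python
# A minimal sketch (not the paper's formalism) of note 1: fix a small set of
# concepts, then build the descriptive language only from those concepts so
# that every term has one computable meaning. All names are illustrative.
from enum import Enum

class Nature(Enum):
    """Disambiguated polarity of a fact, replacing vague natural language."""
    POSITIVE = 1
    NEUTRAL = 0
    NEGATIVE = -1

# A situation is described as a set of facts, each with an explicit nature;
# an implicit hypothesis ("every fact has a declared polarity") becomes a
# checkable statement instead of an unstated assumption.
situation = {
    "threat removed": Nature.POSITIVE,
    "bystander harmed": Nature.NEGATIVE,
}
assert all(isinstance(n, Nature) for n in situation.values())
```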
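The simplifying rule of note 6 can be sketched as follows; the encoding of facts as signed values is an assumption made for illustration.

```python
# A sketch of note 6's rule: lacking a model of intention, assume an agent
# never desires negative facts, so negative consequences are classed as
# side effects rather than goals. The fact -> sign encoding is illustrative.
def desired_facts(consequences: dict[str, int]) -> set[str]:
    # Keep only non-negative facts as candidate goals of the agent.
    return {fact for fact, sign in consequences.items() if sign >= 0}

print(desired_facts({"threat removed": +1, "bystander harmed": -1}))
# -> {'threat removed'}: the harm is treated as non-deliberate.
```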
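Note 8 does not name the two aggregation criteria; as an illustration only, take two common choices, summing utilities and comparing worst outcomes, applied to numbers invented for the example.

```python
# Two illustrative aggregation criteria (sum and worst case) agreeing on the
# same ranking of two decisions, as in note 8. The utilities are invented.
a = [5, -1]   # consequences of one decision
b = [-5]      # consequences of the alternative

print(sum(a) > sum(b))  # True: sum-based aggregation prefers the first
print(min(a) > min(b))  # True: worst-case aggregation agrees
```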

References

  • Anderson, M., & Anderson, S. L. (2015). Toward ensuring ethical behavior from autonomous systems: A case-supported principle-based paradigm. Industrial Robot: An International Journal, 42(4), 324–331.

  • Anderson, M., Anderson, S. L., Armen, C. (2005). MedEthEx: Towards a Medical Ethics Advisor. In Proceedings of the AAAI Fall Symposium on Caring Machines: AI and Eldercare.

  • Anderson, S. L., & Anderson, M. (2011). A Prima Facie Duty Approach to Machine Ethics and Its Application to Elder Care. In Proceedings of the 12th AAAI Conference on Human-Robot Interaction in Elder Care, AAAI Press, AAAIWS’11-12, pp. 2–7.

  • Arkin, R. C. (2007). Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture. Technical Report, Proc. HRI 2008.

  • Aroskar, M. A. (1980). Anatomy of an ethical dilemma: The theory. The American Journal of Nursing, 80(4), 658–660.

  • Atkinson, K., & Bench-Capon, T. (2016). Value based reasoning and the actions of others. In Proceedings of ECAI, The Hague, The Netherlands.

  • Baron, J. (1998). Judgment misguided: Intuition and error in public decision making. Oxford: Oxford University Press.

  • Beauchamp, T. L., & Childress, J. F. (1979). Principles of biomedical ethics. Oxford: Oxford University Press.

  • Bench-Capon, T. (2002). Value based argumentation frameworks. arXiv:cs/0207059. https://arxiv.org/abs/cs/0207059.

  • Berreby, F., Bourgne, G., & Ganascia, J. G. (2015). Modelling moral reasoning and ethical responsibility with logic programming. In Logic for Programming, Artificial Intelligence, and Reasoning: 20th International Conference (LPAR-20) (pp. 532–548). Suva, Fiji.

  • Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576.

  • Bonnemains, V., Saurel, C., & Tessier, C. (2016). How ethical frameworks answer to ethical dilemmas: Towards a formal model. In ECAI 2016 Workshop on Ethics in the Design of Intelligent Agents (EDIA’16), The Hague, The Netherlands.

  • Bringsjord, S., & Taylor, J. (2011). The divine-command approach to robot ethics. In P. Lin, G. Bekey, & K. Abney (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 85–108). Cambridge: MIT Press.

  • Bringsjord, S., Ghosh, R., Payne-Joyce, J., et al. (2016). Deontic counteridenticals. In ECAI 2016 Workshop on Ethics in the Design of Intelligent Agents (EDIA’16), The Hague, The Netherlands.

  • Cayrol, C., Royer, V., Saurel, C. (1993). Management of preferences in assumption-based reasoning. In 4th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, pp. 13–22.

  • Cointe, N., Bonnet, G., & Boissier, O. (2016). Ethical Judgment of Agents Behaviors in Multi-Agent Systems. In Autonomous Agents and Multiagent Systems International Conference (AAMAS), Singapore.

  • Conitzer, V., Sinnott-Armstrong, W., Schaich Borg, J., Deng, Y., & Kramer, M. (2017). Moral decision making frameworks for artificial intelligence. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI), San Francisco, CA, USA.

  • Defense Science Board. (2016). Summer study on autonomy. Technical Report, US Department of Defense.

  • Foot, P. (1967). The problem of abortion and the doctrine of double effect. Oxford Review, 5, 5–15.

  • Ganascia, J. G. (2007). Modelling ethical rules of lying with answer set programming. Ethics and Information Technology, 9(1), 39–47. https://doi.org/10.1007/s10676-006-9134-y.

  • Grinbaum, A., Chatila, R., Devillers, L., Ganascia, J. G., Tessier, C., & Dauchet, M. (2017). Ethics in robotics research: CERNA recommendations. IEEE Robotics and Automation Magazine. https://doi.org/10.1109/MRA.2016.2611586.

  • Johnson, J. A. (2014). From open data to information justice. Ethics and Information Technology, 16(4), 263.

  • Lin, P., Bekey, G., & Abney, K. (2008). Autonomous military robotics: Risk, ethics, and design. Technical report for the U.S. Department of the Navy. Office of Naval Research.

  • Lin, P., Abney, K., & Bekey, G. (Eds.). (2012). Robot ethics—The Ethical and Social Implications of Robotics. Cambridge: The MIT Press.

  • MacIntyre, A. (2003). A short history of ethics: A history of moral philosophy from the Homeric age to the 20th century. Abingdon: Routledge.

  • Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In Proceedings of the tenth annual ACM/IEEE international conference on human-robot interaction, ACM, pp. 117–124.

  • McIntyre, A. (2014). Doctrine of double effect. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter ed.). Stanford: Metaphysics Research Lab, Stanford University.

  • Mermet, B., & Simon, G. (2016). Formal verification of ethical properties in multiagent systems. In ECAI 2016 Workshop on Ethics in the Design of Intelligent Agents (EDIA’16), The Hague, The Netherlands.

  • MIT. (2016). Moral machine. Technical Report, MIT. http://moralmachine.mit.edu/.

  • Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21.

  • Oswald, M. E., & Grosjean, S. (2004). Confirmation bias. In R. Pohl (Ed.) Cognitive illusions: A handbook on fallacies and biases in thinking, judgement and memory (p. 79). New York: Psychology Press.

  • Pagallo, U. (2016). Even angels need the rules: AI, roboethics, and the law. In Proceedings of ECAI, The Hague, The Netherlands.

  • Pinto, J., & Reiter, R. (1993). Temporal reasoning in logic programming: A case for the situation calculus. ICLP, 93, 203–221.

  • Pnueli, A. (1977). The temporal logic of programs. In 18th Annual Symposium on Foundations of Computer Science (SFCS 1977), pp. 46–57.

  • Prakken, H., & Sergot, M. (1996). Contrary-to-duty obligations. Studia Logica, 57(1), 91–115.

  • Reiter, R. (1978). On closed world data bases. In H. Gallaire & J. Minker (Eds.), Logic and data bases (pp. 119–140). New York: Plenum Press.

  • Ricoeur, P. (1990). Éthique et morale. Revista Portuguesa de Filosofia, 4(1), 5–17.

  • Santos-Lang, C. (2002). Ethics for artificial intelligences. In Wisconsin State-Wide Technology Symposium "Promise or Peril? Reflecting on Computer Technology: Educational, Psychological, and Ethical Implications", Wisconsin, USA.

  • Sinnott-Armstrong, W. (2015). Consequentialism. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter ed.). Stanford: Metaphysics Research Lab, Stanford University.

  • Sullins, J. (2010). RoboWarfare: Can robots be more ethical than humans on the battlefield? Ethics and Information Technology, 12(3), 263–275.

  • Tessier, C., & Dehais, F. (2012). Authority management and conflict solving in human-machine systems. AerospaceLab: The Onera Journal, 4, 1.

  • The EthicAA team. (2015). Dealing with ethical conflicts in autonomous agents and multi-agent systems. In AAAI 2015 Workshop on AI and Ethics, Austin, Texas, USA.

  • Tzafestas, S. (2016). Roboethics: A navigating overview. Oxford: Oxford University Press.

  • von Wright, G. H. (1951). Deontic logic. Mind, 60(237), 1–15.

  • Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.

  • Woodfield, A. (1976). Teleology. Cambridge: Cambridge University Press.

  • Yilmaz, L., Franco-Watkins, A., Kroecker, T. S. (2016). Coherence-driven reflective equilibrium model of ethical decision-making. In IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), pp. 42–48. https://doi.org/10.1109/COGSIMA.2016.7497784.

Acknowledgements

We would like to thank ONERA for providing resources for this work and the EthicAA project team for discussions and advice. We are also grateful to the Région Occitanie for its contribution to the PhD grant. Finally, we are greatly indebted to Berreby et al. (2015) for their way of formalizing ethical frameworks, including the Doctrine of Double Effect, and to Cointe et al. (2016) for having inspired the judgements of decisions through ethical frameworks.

Author information

Correspondence to Vincent Bonnemains.

Cite this article

Bonnemains, V., Saurel, C. & Tessier, C. Embedded ethics: some technical and ethical challenges. Ethics Inf Technol 20, 41–58 (2018). https://doi.org/10.1007/s10676-018-9444-x
