
Ethics and Information Technology, Volume 20, Issue 1, pp. 41–58

Embedded ethics: some technical and ethical challenges

  • Vincent Bonnemains
  • Claire Saurel
  • Catherine Tessier
Original Paper

Abstract

This paper pertains to research on linking ethics and automated reasoning in autonomous machines. It focuses on a formal approach intended to be the basis of an artificial agent's reasoning that a human observer could regard as ethical reasoning. The approach includes formal tools to describe a situation, together with models of ethical principles designed to automatically compute a judgement on the possible decisions in a given situation and to explain why a given decision is or is not ethically acceptable. It is illustrated on three ethical frameworks—utilitarian ethics, deontological ethics and the Doctrine of Double Effect—whose formal models are tested on ethical dilemmas so as to examine how they respond to those dilemmas and to highlight the issues at stake when a formal approach to ethical concepts is considered. The whole approach is instantiated on the drone dilemma, a thought experiment we have designed, which brings out the discrepancies between the judgements of the various ethical frameworks. The final discussion highlights the different sources of subjectivity in the approach, even though concepts are expressed more rigorously than in natural language: indeed, the formal approach enables subjectivity to be identified and located more precisely.
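The pipeline the abstract describes—a formal description of a situation fed to per-framework models that each return a judgement on the candidate decisions—can be caricatured in a few lines. The sketch below is ours, not the paper's formalism: the `Decision` fields, the scalar-utility assumption, and the three predicates are simplifying assumptions chosen only to show how different frameworks can disagree on the same situation.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    name: str
    utility: int            # assumed scalar net value of the consequences
    act_is_forbidden: bool  # deontological constraint on the act itself
    harm_is_means: bool     # is the harm the means to the good effect?

def utilitarian(decisions):
    """Acceptable iff the decision maximises aggregate utility."""
    best = max(d.utility for d in decisions)
    return {d.name: d.utility == best for d in decisions}

def deontological(decisions):
    """Acceptable iff the act itself is permitted, whatever its outcome."""
    return {d.name: not d.act_is_forbidden for d in decisions}

def double_effect(decisions):
    """Crude Doctrine of Double Effect: the act must be permissible,
    the harm must not be the means to the good, and good must outweigh bad."""
    return {d.name: not d.act_is_forbidden
                    and not d.harm_is_means
                    and d.utility > 0
            for d in decisions}
```

On a trolley-style dilemma where intervening saves more lives (positive utility) but involves a forbidden act, utilitarian ethics endorses intervention while deontological ethics rejects it—the kind of discrepancy the paper's drone dilemma is designed to expose.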

Keywords

Ethical dilemma · Ethical framework · Autonomous machines · Judgement · Subjectivity

Acknowledgements

We would like to thank ONERA for providing resources for this work and the EthicAA project team for discussions and advice. Furthermore, we are grateful to the "Région Occitanie" for its contribution to the PhD grant. Moreover, we are greatly indebted to Berreby et al. (2015) for the way of formalizing ethical frameworks, including the Doctrine of Double Effect, and to Cointe et al. (2016) for having inspired the judgements of decisions through ethical frameworks.

References

  1. Anderson, M., & Anderson, S. L. (2015). Toward ensuring ethical behavior from autonomous systems: A case-supported principle-based paradigm. Industrial Robot: An International Journal, 42(4), 324–331.
  2. Anderson, M., Anderson, S. L., & Armen, C. (2005). MedEthEx: Towards a medical ethics advisor. In Proceedings of the AAAI Fall Symposium on Caring Machines: AI and Eldercare.
  3. Anderson, S. L., & Anderson, M. (2011). A prima facie duty approach to machine ethics and its application to elder care. In Proceedings of the 12th AAAI Conference on Human-Robot Interaction in Elder Care, AAAI Press, AAAIWS'11-12, pp. 2–7.
  4. Arkin, R. C. (2007). Governing lethal behavior: Embedding ethics in a hybrid deliberative/reactive robot architecture. Technical report; Proc. HRI 2008.
  5. Aroskar, M. A. (1980). Anatomy of an ethical dilemma: The theory. The American Journal of Nursing, 80(4), 658–660.
  6. Atkinson, K., & Bench-Capon, T. (2016). Value based reasoning and the actions of others. In Proceedings of ECAI, The Hague, The Netherlands.
  7. Baron, J. (1998). Judgment misguided: Intuition and error in public decision making. Oxford: Oxford University Press.
  8. Beauchamp, T. L., & Childress, J. F. (1979). Principles of biomedical ethics. Oxford: Oxford University Press.
  9. Bench-Capon, T. (2002). Value based argumentation frameworks. https://arXiv.org/quant-ph/0207059.
  10. Berreby, F., Bourgne, G., & Ganascia, J. G. (2015). Modelling moral reasoning and ethical responsibility with logic programming. In Logic for Programming, Artificial Intelligence, and Reasoning: 20th International Conference (LPAR-20) (pp. 532–548). Suva, Fiji.
  11. Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576.
  12. Bonnemains, V., Saurel, C., & Tessier, C. (2016). How ethical frameworks answer to ethical dilemmas: Towards a formal model. In ECAI 2016 Workshop on Ethics in the Design of Intelligent Agents (EDIA'16), The Hague, The Netherlands.
  13. Bringsjord, S., & Taylor, J. (2011). The divine-command approach to robot ethics. In P. Lin, G. Bekey & K. Abney (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 85–108). Cambridge: MIT Press.
  14. Bringsjord, S., Ghosh, R., Payne-Joyce, J., et al. (2016). Deontic counteridenticals. In ECAI 2016 Workshop on Ethics in the Design of Intelligent Agents (EDIA'16), The Hague, The Netherlands.
  15. Cayrol, C., Royer, V., & Saurel, C. (1993). Management of preferences in assumption-based reasoning. In 4th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, pp. 13–22.
  16. Cointe, N., Bonnet, G., & Boissier, O. (2016). Ethical judgment of agents' behaviors in multi-agent systems. In Autonomous Agents and Multiagent Systems International Conference (AAMAS), Singapore.
  17. Conitzer, V., Sinnott-Armstrong, W., Schaich Borg, J., Deng, Y., & Kramer, M. (2017). Moral decision making frameworks for artificial intelligence. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI), San Francisco, CA, USA.
  18. Defense Science Board. (2016). Summer study on autonomy. Technical report, US Department of Defense.
  19. Foot, P. (1967). The problem of abortion and the doctrine of double effect. Oxford Review, 5, 5–15.
  20. Ganascia, J. G. (2007). Modelling ethical rules of lying with answer set programming. Ethics and Information Technology, 9(1), 39–47. https://doi.org/10.1007/s10676-006-9134-y.
  21. Grinbaum, A., Chatila, R., Devillers, L., Ganascia, J. G., Tessier, C., & Dauchet, M. (2017). Ethics in robotics research: CERNA recommendations. IEEE Robotics and Automation Magazine. https://doi.org/10.1109/MRA.2016.2611586.
  22. Johnson, J. A. (2014). From open data to information justice. Ethics and Information Technology, 16(4), 263.
  23. Lin, P., Bekey, G., & Abney, K. (2008). Autonomous military robotics: Risk, ethics, and design. Technical report for the U.S. Department of the Navy, Office of Naval Research.
  24. Lin, P., Abney, K., & Bekey, G. (Eds.). (2012). Robot ethics: The ethical and social implications of robotics. Cambridge: The MIT Press.
  25. MacIntyre, A. (2003). A short history of ethics: A history of moral philosophy from the Homeric age to the 20th century. Abingdon: Routledge.
  26. Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, ACM, pp. 117–124.
  27. McIntyre, A. (2014). Doctrine of double effect. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter ed.). Stanford: Metaphysics Research Lab, Stanford University.
  28. Mermet, B., & Simon, G. (2016). Formal verification of ethical properties in multiagent systems. In ECAI 2016 Workshop on Ethics in the Design of Intelligent Agents (EDIA'16), The Hague, The Netherlands.
  29. MIT (2016). Moral Machine. Technical report, MIT. http://moralmachine.mit.edu/
  30. Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21.
  31. Oswald, M. E., & Grosjean, S. (2004). Confirmation bias. In R. Pohl (Ed.), Cognitive illusions: A handbook on fallacies and biases in thinking, judgement and memory (p. 79). New York: Psychology Press.
  32. Pagallo, U. (2016). Even angels need the rules: AI, roboethics, and the law. In Proceedings of ECAI, The Hague, The Netherlands.
  33. Pinto, J., & Reiter, R. (1993). Temporal reasoning in logic programming: A case for the situation calculus. ICLP, 93, 203–221.
  34. Pnueli, A. (1977). The temporal logic of programs. In 18th Annual Symposium on Foundations of Computer Science (SFCS 1977), pp. 46–57.
  35. Prakken, H., & Sergot, M. (1996). Contrary-to-duty obligations. Studia Logica, 57(1), 91–115.
  36. Reiter, R. (1978). On closed world data bases. In H. Gallaire & J. Minker (Eds.), Logic and data bases (pp. 119–140). New York: Plenum Press.
  37. Ricoeur, P. (1990). Éthique et morale. Revista Portuguesa de Filosofia, 4(1), 5–17.
  38. Santos-Lang, C. (2002). Ethics for artificial intelligences. In Wisconsin State-Wide Technology Symposium "Promise or Peril? Reflecting on computer technology: Educational, psychological, and ethical implications", Wisconsin, USA.
  39. Sinnott-Armstrong, W. (2015). Consequentialism. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter ed.). Stanford: Metaphysics Research Lab, Stanford University.
  40. Sullins, J. (2010). RoboWarfare: Can robots be more ethical than humans on the battlefield? Ethics and Information Technology, 12(3), 263–275.
  41. Tessier, C., & Dehais, F. (2012). Authority management and conflict solving in human-machine systems. AerospaceLab: The Onera Journal, 4, 1.
  42. The EthicAA team (2015). Dealing with ethical conflicts in autonomous agents and multi-agent systems. In AAAI 2015 Workshop on AI and Ethics, Austin, Texas, USA.
  43. Tzafestas, S. (2016). Roboethics: A navigating overview. Oxford: Oxford University Press.
  44. von Wright, G. H. (1951). Deontic logic. Mind, 60, 1–15.
  45. Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.
  46. Woodfield, A. (1976). Teleology. Cambridge: Cambridge University Press.
  47. Yilmaz, L., Franco-Watkins, A., & Kroecker, T. S. (2016). Coherence-driven reflective equilibrium model of ethical decision-making. In IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), pp. 42–48. https://doi.org/10.1109/COGSIMA.2016.7497784.

Copyright information

© Springer Science+Business Media B.V., part of Springer Nature 2018

Authors and Affiliations

  1. ONERA, Toulouse, France
