Ethical Choice in Unforeseen Circumstances

  • Louise Dennis
  • Michael Fisher
  • Marija Slavkovik
  • Matt Webster
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8069)


For autonomous systems to be allowed to share environments with people, their manufacturers need to guarantee that the systems behave within acceptable legal, but also ethical, limits. Formal verification has been used to check that a system behaves within specified legal limits. This paper proposes an ethical extension to a rational agent controlling an Unmanned Aircraft (UA). The resulting agent is able to distinguish among possible plans and execute the most ethical choice available to it. We implement a prototype and verify that when the agent does behave unethically, it does so only because no more ethical option is available.
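The selection mechanism the abstract describes, choosing the most ethical of the available plans, can be sketched as an ordering over plans by the severity of the ethical rules each would violate. The sketch below is purely illustrative: the rule names, the numeric severity ranking, and the comparison by worst-violation-first are assumptions for this example, not the paper's actual formalism.

```python
# Illustrative sketch: each candidate plan is annotated with the ethical
# rules it would violate; rules carry an assumed severity rank (higher is
# worse). The agent picks the plan whose violation profile is least bad,
# comparing worst violations first; an empty profile is fully ethical.
SEVERITY = {"harm_people": 3, "damage_property": 2, "break_flight_rules": 1}

def badness(violations):
    """Rank a plan's violations, worst first (empty tuple = no violation)."""
    return tuple(sorted((SEVERITY[v] for v in violations), reverse=True))

def most_ethical(plans):
    """Return the plan whose violation profile compares least bad."""
    return min(plans, key=lambda name: badness(plans[name]))

plans = {
    "land_in_field":   ["damage_property"],
    "land_on_road":    ["break_flight_rules", "damage_property"],
    "continue_flight": ["harm_people"],
}
print(most_ethical(plans))  # → land_in_field
```

Note that even the chosen plan here violates a rule; as in the abstract, the point is that the agent can be verified to act unethically only when every alternative is worse.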



Work partially funded by EPSRC through the “Trustworthy Robotic Assistants”, “Verifying Interoperability Requirements in Pervasive Systems”, and “Reconfigurable Autonomy” projects, and by the ERDF/NWDA-funded Virtual Engineering Centre.



Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Louise Dennis (1)
  • Michael Fisher (1)
  • Marija Slavkovik (1)
  • Matt Webster (1)

  1. University of Liverpool, Liverpool, UK
