
Making Maximally Ethical Decisions via Cognitive Likelihood and Formal Planning

Chapter in Towards Trustworthy Artificial Intelligent Systems

Abstract

This chapter attempts to answer the following question: Given an obligation and a set of potentially inconsistent, ethically charged beliefs, how can an artificially intelligent agent ensure that its actions maximize the likelihood that the obligation is satisfied? Our approach to answering this question lies at the intersection of several areas of research, including automated planning, reasoning with uncertainty, and argumentation. We exemplify our reasoning framework in a case study based on the famous, heroic ditching of US Airways Flight 1549, an event colloquially known as the “Miracle on the Hudson.”
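To make the maximization described in the abstract concrete, below is a minimal Python sketch; every name in it (Belief, obligation_likelihood, most_ethical_action, and the likelihood values) is hypothetical, a stand-in for the chapter's cognitive-likelihood and argument-adjudication machinery rather than a reproduction of it:

    # Hypothetical sketch (not the authors' code): among candidate actions,
    # choose the one whose supporting beliefs give the obligation the
    # highest likelihood of being satisfied.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Belief:
        claim: str          # an ethically-charged belief, as plain text
        likelihood: float   # cognitive likelihood in [0, 1]

    def obligation_likelihood(action: str, beliefs: list[Belief]) -> float:
        """Strength of the strongest belief supporting that `action`
        satisfies the obligation (a crude stand-in for adjudicating
        potentially inconsistent arguments)."""
        return max((b.likelihood for b in beliefs if action in b.claim),
                   default=0.0)

    def most_ethical_action(actions: list[str], beliefs: list[Belief]) -> str:
        """Return the action that maximizes the likelihood that the
        obligation is satisfied."""
        return max(actions, key=lambda a: obligation_likelihood(a, beliefs))

    # Toy run, loosely echoing the Flight 1549 case study:
    beliefs = [Belief("a return to LaGuardia succeeds", 0.2),
               Belief("a ditching on the Hudson succeeds", 0.8)]
    print(most_ethical_action(["LaGuardia", "Hudson"], beliefs))  # Hudson

In the chapter itself the scoring is of course done by formal argumentation over uncertain beliefs, not by the string matching above; the sketch mirrors only the final maximization step.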


Notes

  1. This collective refers to the RAIR Lab, of which the first three authors are members.

  2. For information about such attitudes, see [4]; for a wonderful catalogue of all the major categories of human cognition, from perceiving to fearing to remembering to saying and beyond, see [5].

  3. See the box titled \({\mathcal {{DCEC}}}\) Signature within Sect. 2.1 for the rest of the modal operator descriptors.

  4. Note that several variants of \({\mathcal {{DCEC}}}\) have been formalized and deployed. For example, [2] uses \({\mathcal {{DCEC}}}^*\), a version of \({\mathcal {{DCEC}}}\) which allows for the formal modeling of self-reflective agents.

  5. That is, what they perceive externally. We allow agents to infer certain beliefs on the basis of internal perceptions, but need not delve into this further for the purposes of the present work. The \({\mathcal {{IDCEC}}}\) used herein makes no provision for these two modes of perception.

  6. In keeping with standard practice in mathematical logic, \(\zeta \) functions essentially like \(\bot \) or ‘0=1.’

  7. See [12] for an example of an ethically-charged situation in which the ascription of mental states is crucial to the success of the AI agents used.

  8. This quote, recorded by the in-flight cockpit voice recorder (CVR), was retrieved from the NTSB Accident Report [14].

  9. We note that when we refer to arguments being inconsistent, this is to say that the arguments each assert a set of formulae, and from the union of those sets a contradiction can be deduced; see the formal restatement immediately after these notes.

  10. A precise account of what sort of creativity is used by the AI in reasoning to the Hudson as a landing area is beyond the present paper. A number of increasingly impressive types of creativity in an artificial agent are laid out in [19]. Note that at least one expert on AI and creativity, Cope, would classify the AI described in the present paper as creative; see [20].

  11. We use the term ‘system’ in reference to the work of Dennis et al. as this is the term they use in their own work. However, in keeping with the terminology of standard textbooks in AI [22, 23], we use the term ‘agent’ in our work.
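
As a minimal formal restatement of Note 9 (the symbols here are illustrative, not drawn from the chapter): if arguments \(A_1, \ldots , A_n\) assert the sets of formulae \(\Gamma _1, \ldots , \Gamma _n\) respectively, then

\[ \mathsf{Inconsistent}(A_1, \ldots , A_n) \iff \Gamma _1 \cup \cdots \cup \Gamma _n \vdash \bot \]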

References

  1. Giancola M, Bringsjord S, Govindarajulu NS, Varela C (2020) Ethical reasoning for autonomous agents under uncertainty. In: Tokhi M, Ferreira M, Govindarajulu N, Silva M, Kadar E, Wang J, Kaur A (eds) Smart living and quality health with robots, proceedings of ICRES 2020, CLAWAR, London, UK, pp 26–41. http://kryten.mm.rpi.edu/MG_SB_NSG_CV_LogicizationMiracleOnHudson.pdf. The Shadow Adjudicator system can be obtained here: https://github.com/RAIRLab/ShadowAdjudicator

  2. Bringsjord S, Govindarajulu N, Thero D, Si M (2014) Akratic robots and the computational logic thereof. In: Proceedings of ETHICS 2014 (2014 IEEE symposium on ethics in engineering, science, and technology), Chicago, IL, pp 22–29. http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6883275, IEEE Catalog Number: CFP14ETI-POD. (Papers from the Proceedings can be downloaded from IEEE at URL provided here)

  3. Govindarajulu N, Bringsjord S (2017a) On automating the doctrine of double effect. In: Sierra C (ed) Proceedings of the twenty-sixth international joint conference on artificial intelligence (IJCAI-17), international joint conferences on artificial intelligence, pp 4722–4730. https://doi.org/10.24963/ijcai.2017/658

  4. Nelson M (2015) Propositional attitude reports. In: Zalta E (ed) The Stanford Encyclopedia of philosophy. https://plato.stanford.edu/entries/prop-attitude-reports

  5. Ashcraft M, Radvansky G (2013) Cognition, 6th edn. Pearson, London, UK

  6. Bringsjord S, Govindarajulu NS, Licato J, Giancola M (2020) Learning Ex Nihilo. In: Danoy G, Pang J, Sutcliffe G (eds) GCAI 2020, 6th global conference on artificial intelligence (GCAI 2020), EasyChair, EPiC series in computing, vol 72, pp 1–27. https://doi.org/10.29007/ggcf; https://easychair.org/publications/paper/NzWG

  7. Govindarajulu N, Bringsjord S, Peveler M (2019) On quantified modal theorem proving for modeling ethics. In: Suda M, Winkler S (eds) Proceedings of the second international workshop on automated reasoning: challenges, applications, directions, exemplary achievements (ARCADE 2019). Electronic proceedings in theoretical computer science, vol 311. Open Publishing Association, Waterloo, Australia, pp 43–49. http://eptcs.web.cse.unsw.edu.au/paper.cgi?ARCADE2019.7.pdf; The Shadow Prover system can be obtained here https://naveensundarg.github.io/prover/

  8. Govindarajulu NS, Bringsjord S (2017b) Strength factors: an uncertainty system for quantified modal logic. In: Belle V, Cussens J, Finger M, Godo L, Prade H, Qi G (eds) Proceedings of the IJCAI workshop on “Logical Foundations for Uncertainty and Machine Learning” (LFU-2017). Melbourne, Australia, pp 34–40. http://homepages.inf.ed.ac.uk/vbelle/workshops/lfu17/proc.pdf

  9. Ho HL (2015) The legal concept of evidence. In: Zalta EN (ed) The Stanford Encyclopedia of philosophy, winter, 2015th edn. Stanford University, Metaphysics Research Lab, Stanford

  10. Fikes RE, Nilsson NJ (1971) STRIPS: a new approach to the application of theorem proving to problem solving. Artif Intell 2(3–4):189–208

  11. McDermott D, Ghallab M, Howe A, Knoblock C, Ram A, Veloso M, Weld D, Wilkins D (1998) PDDL—the planning domain definition language. Tech. Rep. CVC TR-98-003, Yale Center for Computational Vision and Control

  12. Bringsjord S, Govindarajulu N, Giancola M (2021) Automated argument adjudication to solve ethical problems in multi-agent environments. Paladyn J Behav Robotics. 12(1):310–335. https://doi.org/10.1515/pjbr-2021-0009

  13. Govindarajulu NS (2017) Spectra. https://naveensundarg.github.io/Spectra/

  14. Hersman DA, Hart CA, Sumwalt RL (2010) Loss of thrust in both engines after encountering a flock of birds and subsequent ditching on the Hudson River. Accident Report NTSB/AAR-10/03, National Transportation Safety Board (NTSB)

  15. Paul S, Hole F, Zytek A, Varela CA (2017) Flight trajectory planning for fixed wing aircraft in loss of thrust emergencies. In: Dynamic Data-Driven Application Systems (DDDAS 2017). Cambridge, MA. http://arxiv.org/abs/1711.00716

  16. Atkins E (2010) Emergency landing automation aids: an evaluation inspired by US Airways Flight 1549. In: AIAA Infotech@Aerospace 2010. https://doi.org/10.2514/6.2010-3381

  17. National Transportation Safety Board (NTSB) (2009) Transcript—public hearing day 1. Landing of US Airways Flight 1549, Airbus A320, N106US, in the Hudson River. https://data.ntsb.gov/Docket?NTSBNumber=DCA09MA026

  18. Shen YF, Rahman ZU, Krusienski D, Li J (2013) A vision-based automatic safe landing-site detection system. IEEE Trans Aerosp Electron Syst 49(1):294–311

  19. Bringsjord S, Sen A (2016) On creative self-driving cars: hire the computational logicians, fast. Appl Artif Intell 30:758–786. http://kryten.mm.rpi.edu/SB_AS_CreativeSelf-DrivingCars_0323161130NY.pdf. (The URL here goes only to an uncorrected preprint)

  20. Cope D (2005) Computer models of musical creativity. MIT Press, Cambridge, MA

  21. Dennis L, Fisher M, Slavkovik M, Webster M (2016) Formal verification of ethical choices in autonomous systems. Robot Auton Syst 77:1–14. https://doi.org/10.1016/j.robot.2015.11.012

  22. Luger G (2008) Artificial intelligence: structures and strategies for complex problem solving, 6th edn. Pearson, London, UK

  23. Russell S, Norvig P (2020) Artificial intelligence: a modern approach, 4th edn. Pearson, New York, NY


Acknowledgements

The authors are grateful to AFOSR for its longstanding support of Bringsjord et al.’s development of cutting-edge formalisms and automated reasoning and planning systems for high levels of computational intelligence. We deeply thank ONR, both for their support of our current work on belief adjudication (as part of our development of AI that surmounts Arrow’s Impossibility Theorem and related theorems), and for their past support of our R&D in robot/machine/AI ethics (primarily under a MURI to advance the science and engineering of moral competence in robots).

Author information

Corresponding author

Correspondence to Michael Giancola.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Giancola, M., Bringsjord, S., Govindarajulu, N.S., Varela, C. (2022). Making Maximally Ethical Decisions via Cognitive Likelihood and Formal Planning. In: Ferreira, M.I.A., Tokhi, M.O. (eds) Towards Trustworthy Artificial Intelligent Systems. Intelligent Systems, Control and Automation: Science and Engineering, vol 102. Springer, Cham. https://doi.org/10.1007/978-3-031-09823-9_10
