
The Need for Moral Competency in Autonomous Agent Architectures

Part of the Synthese Library book series (SYLI, volume 376)

Abstract

Autonomous robots will, to varying degrees, have to make decisions on their own. In this chapter, I will make the plea for developing moral capabilities that are deeply integrated into the control architectures of such autonomous agents, for I shall argue that any ordinary decision-making situation from daily life can turn into a morally charged decision-making situation.

Keywords

  • Robot
  • Autonomy
  • Morality
  • Machine ethics


Notes

  1.

    We could further refine this by defining the set of impermissible actions relative to some situation S.

  2.

    Note that I am using the term “moral dilemma” in a non-technical sense, as I do not want to be side-tracked by the discussion on whether there are “genuine moral dilemmas”…

  3.

    Note that a direct comparison between a robotic and a human driver in the car scenario is not possible, because the robot does not have to take its own destruction into account, whereas in the human case part of the decision-making will include estimating the chances of minimizing harm to oneself.
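Note 1 above suggests defining the set of impermissible actions relative to a situation S rather than globally. A minimal sketch of that idea, assuming a simple rule-based representation (all situation predicates, action names, and norms below are hypothetical illustrations, not taken from the chapter):

```python
# Hypothetical norms: each situation predicate maps to the actions it forbids.
NORMS = {
    "carrying_passenger": {"swerve_off_road"},
    "near_pedestrians": {"accelerate", "swerve_onto_sidewalk"},
}


def impermissible(situation_features):
    """Return the set of actions impermissible relative to a situation S,
    given which situation predicates hold in S."""
    forbidden = set()
    for feature in situation_features:
        forbidden |= NORMS.get(feature, set())
    return forbidden


def permissible(candidate_actions, situation_features):
    """Filter a candidate action set down to those permitted in S."""
    return candidate_actions - impermissible(situation_features)
```

On this picture, the same action (e.g. `accelerate`) can be permissible in one situation and impermissible in another, which is exactly why the permissibility check has to be indexed to S.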


Author information


Correspondence to Matthias Scheutz.


Copyright information

© 2016 Springer International Publishing Switzerland

Cite this chapter

Scheutz, M. (2016). The need for moral competency in autonomous agent architectures. In V. C. Müller (Ed.), Fundamental issues of artificial intelligence (Synthese Library, Vol. 376). Springer, Cham. https://doi.org/10.1007/978-3-319-26485-1_30
