
Artificial Agents and Their Moral Nature

Ethics, Governance, and Policies in Artificial Intelligence

Part of the book series: Philosophical Studies Series (PSSP, volume 144)

Abstract

Artificial agents, particularly but not only those in the infosphere (Floridi, Information—a very short introduction. Oxford University Press, Oxford, 2010a), extend the class of entities that can be involved in moral situations, for they can be correctly interpreted as entities that can perform actions with good or evil impact (moral agents). In this chapter, I clarify the concepts of agent and of artificial agent and then distinguish between issues concerning their moral behaviour and issues concerning their responsibility. The conclusion is that there is substantial and important scope, particularly in information ethics, for the concept of moral artificial agents not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, which considers whether artificial agents may have mental states, feelings, emotions and so forth. By focussing directly on “mind-less morality”, one is able to by-pass such questions, as well as other difficulties arising in Artificial Intelligence, in order to tackle some vital issues in contexts where artificial agents are increasingly part of the everyday environment (Floridi L, Metaphilosophy 39(4/5): 651–655, 2008a).


Notes

  1. For an excellent introduction see Jamieson (2008).

  2. See for example Bedau (1996) for a discussion of alternatives to necessary-and-sufficient definitions in the case of life.

  3. It is interesting to speculate on the mechanism by which that list is maintained. Perhaps by a human agent; perhaps by an AA composed of several people (a committee); or perhaps by a software agent.

References

  • Allen, C., G. Varner, and J. Zinser. 2000. Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence 12: 251–261.

  • Alpaydin, E. 2010. Introduction to machine learning. 2nd ed. Cambridge, MA/London: MIT Press.

  • Arnold, A., and J. Plaice. 1994. Finite transition systems: Semantics of communicating systems. Paris/Hemel Hempstead: Masson/Prentice Hall.

  • Barandiaran, X.E., E.D. Paolo, and M. Rohde. 2009. Defining agency: Individuality, normativity, asymmetry, and spatio-temporality in action. Adaptive Behavior 17 (5): 367–386.

  • Bedau, M.A. 1996. The nature of life. In The philosophy of artificial life, ed. M.A. Boden, 332–357. Oxford: Oxford University Press.

  • Cassirer, E. 1910. Substanzbegriff und Funktionsbegriff. Untersuchungen über die Grundfragen der Erkenntniskritik. Berlin: Bruno Cassirer. Translated by Swabey, W.M., and M.C. Swabey. 1923. Substance and function and Einstein’s theory of relativity. Chicago: Open Court.

  • Danielson, P. 1992. Artificial morality: Virtuous robots for virtual games. London/New York: Routledge.

  • Davidsson, P., and S.J. Johansson, eds. 2005. Special issue on “The metaphysics of agents”. ACM: 1299–1300.

  • Dennett, D. 1997. When HAL kills, who’s to blame? In HAL’s legacy: 2001’s computer as dream and reality, ed. D. Stork, 351–365. Cambridge, MA: MIT Press.

  • Dixon, B.A. 1995. Response: Evil and the moral agency of animals. Between the Species 11 (1–2): 38–40.

  • Epstein, R.G. 1997. The case of the killer robot: Stories about the professional, ethical, and societal dimensions of computing. New York/Chichester: Wiley.

  • Floridi, L. 2003. On the intrinsic value of information objects and the infosphere. Ethics and Information Technology 4 (4): 287–304.

  • ———. 2006. Information technologies and the tragedy of the good will. Ethics and Information Technology 8 (4): 253–262.

  • ———. 2007. Global information ethics: The importance of being environmentally earnest. International Journal of Technology and Human Interaction 3 (3): 1–11.

  • ———. 2008a. Artificial intelligence’s new frontier: Artificial companions and the fourth revolution. Metaphilosophy 39 (4/5): 651–655.

  • ———. 2008b. The method of levels of abstraction. Minds and Machines 18 (3): 303–329.

  • ———. 2010a. Information—A very short introduction. Oxford: Oxford University Press.

  • ———. 2010b. Levels of abstraction and the Turing test. Kybernetes 39 (3): 423–440.

  • ———. 2010c. Network ethics: Information and business ethics in a networked society. Journal of Business Ethics 90 (4): 649–659.

  • Floridi, L., and J.W. Sanders. 2001. Artificial evil and the foundation of computer ethics. Ethics and Information Technology 3 (1): 55–66.

  • ———. 2004. On the morality of artificial agents. Minds and Machines 14 (3): 349–379.

  • ———. 2005. Internet ethics: The constructionist values of Homo Poieticus. In The impact of the internet on our moral lives, ed. R. Cavalier. New York: SUNY Press.

  • Franklin, S., and A. Graesser. 1997. Is it an agent, or just a program? A taxonomy for autonomous agents. In Proceedings of the workshop on intelligent agents III, agent theories, architectures, and languages, 21–35. Berlin: Springer.

  • Jamieson, D. 2008. Ethics and the environment: An introduction. Cambridge: Cambridge University Press.

  • Kerr, P. 1996. The grid. New York: Warner Books.

  • Michie, D. 1961. Trial and error. In Penguin science surveys, ed. A. Garratt, 129–145. Harmondsworth: Penguin.

  • Mitchell, M. 1998. An introduction to genetic algorithms. Cambridge, MA/London: MIT Press.

  • Moor, J.H. 2001. The status and future of the Turing test. Minds and Machines 11 (1): 77–93.

  • Motwani, R., and P. Raghavan. 1995. Randomized algorithms. Cambridge: Cambridge University Press.

  • Moya, L.J., and A. Tolk. 2007. Special issue on “Towards a taxonomy of agents and multi-agent systems”. San Diego: Society for Computer Simulation International, 11–18.

  • Rosenfeld, R. 1995a. Can animals be evil? Kekes’ character-morality, the hard reaction to evil, and animals. Between the Species 11 (1–2): 33–38.

  • ———. 1995b. Reply. Between the Species 11 (1–2): 40–41.

  • Russell, S.J., and P. Norvig. 2010. Artificial intelligence: A modern approach. 3rd ed. Boston/London: Pearson.

  • Turing, A.M. 1950. Computing machinery and intelligence. Mind 59 (236): 433–460.

  • Wallach, W., and C. Allen. 2010. Moral machines: Teaching robots right from wrong. New York/Oxford: Oxford University Press.


Acknowledgement

This contribution is based on Floridi and Sanders (2004), Floridi (2008a, 2010a). I am grateful to Jeff Sanders for his permission to use our work.

Author information


Correspondence to Luciano Floridi.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Floridi, L. (2021). Artificial Agents and Their Moral Nature. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence. Philosophical Studies Series, vol 144. Springer, Cham. https://doi.org/10.1007/978-3-030-81907-1_12
