
Artificial Agents and Their Moral Nature

  • Luciano Floridi
Chapter
Part of the Philosophy of Engineering and Technology book series (POET, volume 17)

Abstract

Artificial agents, particularly but not only those in the infosphere (Floridi 2010a), extend the class of entities that can be involved in moral situations, for they can be correctly interpreted as entities that can perform actions with good or evil impact (moral agents). In this chapter, I clarify the concepts of agent and of artificial agent and then distinguish between issues concerning their moral behaviour and issues concerning their responsibility. The conclusion is that there is substantial and important scope, particularly in information ethics, for a concept of moral artificial agents that do not necessarily exhibit free will, mental states or responsibility. This complements the more traditional approach, which considers whether artificial agents may have mental states, feelings, emotions and so forth. By focussing directly on “mind-less morality”, one can bypass such questions, as well as other difficulties arising in Artificial Intelligence, in order to tackle some vital issues in contexts where artificial agents are increasingly part of the everyday environment (Floridi 2008a).

Keywords

Moral agent · Artificial agent · Transition rule · Threshold function · Moral action

Notes

Acknowledgement

This contribution is based on Floridi and Sanders (2004), Floridi (2008a, 2010a). I am grateful to Jeff Sanders for his permission to use our work.

References

  1. Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12, 251–261.
  2. Alpaydin, E. (2010). Introduction to machine learning (2nd ed.). Cambridge, MA/London: MIT Press.
  3. Arnold, A., & Plaice, J. (1994). Finite transition systems: Semantics of communicating systems. Paris/Hemel Hempstead: Masson/Prentice Hall.
  4. Barandiaran, X. E., Di Paolo, E., & Rohde, M. (2009). Defining agency: Individuality, normativity, asymmetry, and spatio-temporality in action. Adaptive Behavior, 17(5), 367–386.
  5. Bedau, M. A. (1996). The nature of life. In M. A. Boden (Ed.), The philosophy of life (pp. 332–357). Oxford: Oxford University Press.
  6. Cassirer, E. (1910). Substanzbegriff und Funktionsbegriff. Untersuchungen über die Grundfragen der Erkenntniskritik. Berlin: Bruno Cassirer. Trans. by Swabey, W. M., & Swabey, M. C. (1923). Substance and function and Einstein’s theory of relativity. Chicago: Open Court.
  7. Danielson, P. (1992). Artificial morality: Virtuous robots for virtual games. London/New York: Routledge.
  8. Davidsson, P., & Johansson, S. J. (Eds.). (2005). Special issue on the metaphysics of agents. ACM, 1299–1300.
  9. Dennett, D. (1997). When Hal kills, who’s to blame? In D. Stork (Ed.), Hal’s legacy: 2001’s computer as dream and reality (pp. 351–365). Cambridge, MA: MIT Press.
  10. Dixon, B. A. (1995). Response: Evil and the moral agency of animals. Between the Species, 11(1–2), 38–40.
  11. Epstein, R. G. (1997). The case of the killer robot: Stories about the professional, ethical, and societal dimensions of computing. New York/Chichester: Wiley.
  12. Floridi, L. (2003). On the intrinsic value of information objects and the infosphere. Ethics and Information Technology, 4(4), 287–304.
  13. Floridi, L. (2006). Information technologies and the tragedy of the good will. Ethics and Information Technology, 8(4), 253–262.
  14. Floridi, L. (2007). Global information ethics: The importance of being environmentally earnest. International Journal of Technology and Human Interaction, 3(3), 1–11.
  15. Floridi, L. (2008a). Artificial intelligence’s new frontier: Artificial companions and the fourth revolution. Metaphilosophy, 39(4/5), 651–655.
  16. Floridi, L. (2008b). The method of levels of abstraction. Minds and Machines, 18(3), 303–329.
  17. Floridi, L. (2010a). Information – A very short introduction. Oxford: Oxford University Press.
  18. Floridi, L. (2010b). Levels of abstraction and the Turing test. Kybernetes, 39(3), 423–440.
  19. Floridi, L. (2010c). Network ethics: Information and business ethics in a networked society. Journal of Business Ethics, 90(4), 649–659.
  20. Floridi, L., & Sanders, J. W. (2001). Artificial evil and the foundation of computer ethics. Ethics and Information Technology, 3(1), 55–66.
  21. Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.
  22. Floridi, L., & Sanders, J. W. (2005). Internet ethics: The constructionist values of Homo Poieticus. In R. Cavalier (Ed.), The impact of the internet on our moral lives. New York: SUNY.
  23. Franklin, S., & Graesser, A. (1997). Is it an agent, or just a program? A taxonomy for autonomous agents. In Proceedings of the workshop on intelligent agents III, agent theories, architectures, and languages (pp. 21–35). Berlin: Springer.
  24. Jamieson, D. (2008). Ethics and the environment: An introduction. Cambridge: Cambridge University Press.
  25. Kerr, P. (1996). The grid. New York: Warner Books.
  26. Michie, D. (1961). Trial and error. In A. Garratt (Ed.), Penguin science surveys (pp. 129–145). Harmondsworth: Penguin.
  27. Mitchell, M. (1998). An introduction to genetic algorithms. Cambridge, MA/London: MIT Press.
  28. Moor, J. H. (2001). The status and future of the Turing test. Minds and Machines, 11(1), 77–93.
  29. Motwani, R., & Raghavan, P. (1995). Randomized algorithms. Cambridge: Cambridge University Press.
  30. Moya, L. J., & Tolk, A. (Eds.). (2007). Special issue on towards a taxonomy of agents and multi-agent systems. Society for Computer Simulation International, 11–18.
  31. Rosenfeld, R. (1995a). Can animals be evil? Kekes’ character-morality, the hard reaction to evil, and animals. Between the Species, 11(1–2), 33–38.
  32. Rosenfeld, R. (1995b). Reply. Between the Species, 11(1–2), 40–41.
  33. Russell, S. J., & Norvig, P. (2010). Artificial intelligence: A modern approach (3rd ed., international ed.). Boston/London: Pearson.
  34. Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
  35. Wallach, W., & Allen, C. (2010). Moral machines: Teaching robots right from wrong. New York/Oxford: Oxford University Press.

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  1. Oxford Internet Institute, University of Oxford, Oxford, UK
