Floridi’s Information Ethics as Macro-ethics and Info-computational Agent-Based Models

  • Gordana Dodig-Crnkovic
Part of the Philosophy of Engineering and Technology book series (POET, volume 8)


Luciano Floridi’s Information Ethics (IE) is a new theoretical foundation of ethics. According to Floridi, ICT, with all its informational structures and processes, generates our new informational habitat, the Infosphere. For IE, moral action is an information-processing pattern. IE addresses the fundamentally informational character of our interaction with the world, including interactions with other agents. Information Ethics is a macro-ethics, as it focuses on systems/networks of agents and their behavior. IE’s capacity to study ethical phenomena at the basic level of underlying information patterns and processes makes it unique among ethical theories in providing a conceptual framework for fundamental-level analysis of the present globalised, ICT-based world. It allows computational modeling, a powerful tool of study which increases our understanding of the informational mechanisms of ethics. Computational models help capture behaviors invisible to the unaided mind, which relies exclusively on shared intuitions. This article presents an analysis of the application of IE as interpreted within the framework of Info-Computationalism. The focus is on responsibility/accountability distribution and similar phenomena of information communication in networks of agents. Agent-based modeling enables the study of the increasing complexity of behavior in multi-agent systems whose agents (actors) range from cellular automata to softbots, robots and humans. Autonomous, learning, artificially intelligent systems are developing rapidly, resulting in a new division of tasks between humans and robots/softbots. The biggest present-day concern about autonomous intelligent systems is the fear of humans losing control and of robots acting inappropriately and causing harm. Among inappropriate kinds of behavior is the ethically unacceptable kind.
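The kind of agent-based modeling invoked above can be illustrated with a minimal sketch (illustrative only, not code from the chapter): agents in a small ring network pass a piece of information to their neighbours with some probability, and a global spreading pattern emerges from purely local rules. The function name and all parameters are assumptions made for this example.

```python
import random

def simulate_spread(n_agents=20, p_share=0.5, steps=10, seed=1):
    """Count how many agents hold a piece of information after `steps`
    rounds of local sharing on a ring of `n_agents` agents."""
    random.seed(seed)
    informed = {0}  # agent 0 starts with the information
    for _ in range(steps):
        newly = set()
        for a in informed:
            # each informed agent may pass the information to its two neighbours
            for neighbour in ((a - 1) % n_agents, (a + 1) % n_agents):
                if neighbour not in informed and random.random() < p_share:
                    newly.add(neighbour)
        informed |= newly
    return len(informed)

print(simulate_spread())
```

Even this toy model exhibits the point made in the text: the system-level pattern (how far the information spreads) is not readable off any single agent’s rule and is easiest to study by simulation.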
In order to assure ethically adequate behavior of autonomous intelligent systems, artifactual ethical responsibility/accountability should be one of the built-in features of intelligent artifacts. Adding the requirement for artifactual ethical behavior to a robot/softbot does not by any means take responsibility away from the humans designing, producing and controlling autonomous intelligent systems. On the contrary, it makes explicit the necessity for all involved with such intelligent technology to assure its ethical conduct. Today’s robots are used mainly as complex electromechanical tools and have no capability of taking moral responsibility. But technological progress is remarkable: robots are quickly improving their sensory and motor competencies, and the development of artifactual (synthetic) emotions adds new dimensions to robotics. Artifactual reasoning and other information-processing skills are advancing, all of which is driving significant progress in the field of Social Robotics. We thus have strong reasons to analyze a future technological development in which robots/softbots are so intelligent and responsive that they possess artifactual morality alongside artifactual intelligence. Technological artifacts are always part of a broader socio-technological system with distributed responsibilities. The development of autonomous, learning, morally responsible intelligent agents consequently relies on several responsibility feedback loops: the awareness of and preparedness for handling risks on the side of designers, producers, implementers, users and maintenance personnel, as well as the support of society at large, which provides feedback on the consequences of the use of the technology. This complex system of shared responsibilities should secure the safe functioning of hybrid systems of humans and intelligent machines. Information Ethics provides a conceptual framework for computational modeling of such socio-technological systems.
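The responsibility feedback loops described above admit a toy computational reading, assuming, purely for illustration, that responsibility is represented as normalised weights over the roles named in the text and that feedback after an incident shifts weight toward the implicated roles. The role names and the multiplicative update rule are assumptions of this sketch, not the chapter’s model.

```python
def redistribute(weights, feedback):
    """Increase the responsibility weight of roles implicated by feedback,
    then renormalise so the shares again sum to one."""
    updated = {role: w * (1 + feedback.get(role, 0.0))
               for role, w in weights.items()}
    total = sum(updated.values())
    return {role: w / total for role, w in updated.items()}

# Hypothetical initial distribution across the roles named in the text.
roles = {"designer": 0.3, "producer": 0.2, "implementer": 0.15,
         "user": 0.2, "maintenance": 0.15}

# Feedback from an incident traced mainly to a design flaw.
after = redistribute(roles, {"designer": 0.5})
print(after["designer"])  # designer's share grows; shares still sum to 1
```

The design choice to renormalise keeps responsibility a shared, zero-sum quantity: increasing one role’s accountability necessarily reduces the relative shares of the others, mirroring the distributed-responsibility picture in the text.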
Apart from examples of specific applications of IE, an Info-Computationalist interpretation is offered of several widely debated questions, such as the role of Levels of Abstraction, naturalism, and complexity/diversity in Information Ethics.


Keywords: Moral responsibility · Intelligent system · Intelligent agent · Artificial agent · Ethical approach



The author wants to thank Mark Coeckelbergh for insightful comments on earlier versions of this paper.


  1. Adam, Alison. 2005. Delegating and distributing morality: Can we inscribe privacy protection in a machine? Ethics and Information Technology 7: 233–242.
  2. Adam, Alison. 2008. Ethics for things. Ethics and Information Technology 10(2–3): 149–154.
  3. Arkin, Ronald C. 1998. Behavior-based robotics. Cambridge: MIT Press.
  4. Asaro, Peter M. 2007. Robots and responsibility from a legal perspective. In Proceedings of the IEEE 2007 international conference on robotics and automation, Workshop on RoboEthics, Rome.
  5. Becker, Barbara. 2006. Social robots – emotional agents: Some remarks on naturalizing man-machine interaction. International Review of Information Ethics 6: 37–45.
  6. Becker, Barbara. 2009. Social robots – emotional agents: Some remarks on naturalizing man-machine interaction. In Ethics and robotics, ed. R. Capurro and M. Nagenborg. Amsterdam: IOS Press.
  7. Brey, Philip. 2008. Do we have moral duties towards information objects? Ethics and Information Technology 10(2–3): 109–114.
  8. Capurro, Rafael. 2008. On Floridi’s metaphysical foundation of information ecology. Ethics and Information Technology 10(2–3): 167–173.
  9. Coeckelbergh, Mark. 2010. Moral appearances: Emotions, robots, and human morality. Ethics and Information Technology 12(3): 235–241.
  10. Coleman, K.G. 2005. Computing and moral responsibility. In The Stanford encyclopedia of philosophy, Spring edn, ed. Edward N. Zalta. Stanford: Stanford University.
  11. Crutzen, C.K.M. 2006. Invisibility and the meaning of ambient intelligence. International Review of Information Ethics 6: 52–62.
  12. Danielson, Peter. 1992. Artificial morality: Virtuous robots for virtual games. London: Routledge.
  13. Dennett, Daniel C. 1973. Mechanism and responsibility. In Essays on freedom of action, ed. T. Honderich. Boston: Routledge/Keegan Paul.
  14. Dennett, Daniel C. 1994. The myth of original intentionality. In Thinking computers and virtual persons: Essays on the intentionality of machines, ed. E. Dietrich, 91–107. San Diego/London: Academic.
  15. Dodig-Crnkovic, Gordana. 1999. ABB Atom’s criticality safety handbook. In ICNC’99 sixth international conference on nuclear criticality safety, Versailles, France.
  16. Dodig-Crnkovic, Gordana. 2005. On the importance of teaching professional ethics to computer science students. In Computing and philosophy, Computing and philosophy conference, E-CAP 2004, Pavia, Italy, ed. L. Magnani. Pavia: Associated International Academic Publishers.
  17. Dodig-Crnkovic, Gordana. 2006a. Investigations into information semantics and ethics of computing. Västerås: Mälardalen University Press.
  18. Dodig-Crnkovic, Gordana. 2006b. Professional ethics in computing and intelligent systems. In Proceedings of the ninth Scandinavian Conference on Artificial Intelligence (SCAI 2006), Espoo, Finland, October 25–27.
  19. Dodig-Crnkovic, Gordana. 2008. Knowledge generation as natural computation. Journal of Systemics, Cybernetics and Informatics 6: 12–16.
  20. Dodig-Crnkovic, Gordana. 2009. Information and computation nets. Saarbrücken: VDM Verlag.
  21. Dodig-Crnkovic, Gordana. 2010. The cybersemiotics and info-computationalist research programmes as platforms for knowledge production in organisms and machines. Entropy 12: 878–901.
  22. Dodig-Crnkovic, Gordana, and Margaryta Anokhina. 2008. Workplace gossip and rumor: The information ethics perspective. In ETHICOMP-2008, Mantova, Italy.
  23. Dodig-Crnkovic, Gordana, and Vincent Müller. 2010. A dialogue concerning two world systems: Info-computational vs. mechanistic. In Information and computation, ed. G. Dodig-Crnkovic and M. Burgin. Singapore: World Scientific Publishing Co.
  24. Dodig-Crnkovic, Gordana, and Daniel Persson. 2008. Sharing moral responsibility with robots: A pragmatic approach. In Tenth Scandinavian Conference on Artificial Intelligence SCAI 2008, Frontiers in artificial intelligence and applications, vol. 173, ed. A. Holst, P. Kreuger, and P. Funk. Amsterdam: IOS Press.
  25. Epstein, Joshua M. 2004. Generative social science: Studies in agent-based computational modeling. Princeton studies in complexity. Princeton/Oxford: Princeton University Press.
  26. Eshleman, Andrew. 2004. Moral responsibility. In The Stanford encyclopedia of philosophy, Fall edn, ed. Edward N. Zalta. Stanford: Stanford University.
  27. Fellous, Jean-Marc, and Michael A. Arbib (eds.). 2005. Who needs emotions?: The brain meets the robot. Oxford: Oxford University Press.
  28. Floridi, Luciano. 1999. Information ethics: On the theoretical foundations of computer ethics. Ethics and Information Technology 1(1): 37–56.
  29. Floridi, Luciano. 2002. What is the philosophy of information? Metaphilosophy 33(1/2): 123–145.
  30. Floridi, Luciano. 2008a. A defence of informational structural realism. Synthese 161(2): 219–253.
  31. Floridi, Luciano. 2008b. Information ethics: Its nature and scope. In Moral philosophy and information technology, ed. Jeroen van den Hoven and John Weckert, 40–65. Cambridge: Cambridge University Press.
  32. Floridi, Luciano. 2008c. The method of levels of abstraction. Minds and Machines 18(3): 303–329.
  33. Floridi, Luciano. 2008d. Information ethics: A reappraisal. Ethics and Information Technology 10: 189–204.
  34. Floridi, Luciano, and J.W. Sanders. 2004. On the morality of artificial agents. Minds and Machines 14(3): 349–379.
  36. Gilbert, Nigel. 2008. Agent-based models. Quantitative applications in the social sciences. Los Angeles: Sage Publications.
  37. Grodzinsky, Frances S., Keith W. Miller, and Marty J. Wolf. 2008. The ethics of designing artificial agents. Ethics and Information Technology 11(1): 115–121.
  38. Hansson, Sven Ove. 1997. The limits of precaution. Foundations of Science 2: 293–306.
  39. Hansson, Sven Ove. 1999. Adjusting scientific practices to the precautionary principle. Human and Ecological Risk Assessment 5: 909–921.
  40. Himma, Kenneth E. 2009. Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology 11(1): 19–29.
  41. Hongladarom, Soraj. 2008. Floridi and Spinoza on global information ethics. Ethics and Information Technology 10: 175–187.
  42. Huff, Chuck. 2004. Unintentional power in the design of computing systems. In Computer ethics and professional responsibility, ed. T.W. Bynum and S. Rogerson, 98–106. Kundli: Blackwell Publishing.
  43. Järvik, Marek. 2003. How to understand moral responsibility? Trames 7(3): 147–163.
  44. Johnson, Deborah G. 2006. Computer systems: Moral entities but not moral agents. Ethics and Information Technology 8: 195–204.
  45. Johnson, Deborah G., and Keith W. Miller. 2006. A dialogue on responsibility, moral agency, and IT systems. In Proceedings of the 2006 ACM symposium on applied computing, Dijon, France, 272–276.
  46. Johnson, Deborah G., and Keith W. Miller. 2008. Un-making artificial moral agents. Ethics and Information Technology 10(2–3): 123–133.
  47. Johnson, Deborah G., and T.M. Powers. 2005. Computer systems and responsibility: A normative look at technological complexity. Ethics and Information Technology 7: 99–107.
  48. Larsson, Magnus. 2004. Predicting quality attributes in component-based software systems. PhD thesis, Mälardalen University Press, Sweden.
  49. Latour, Bruno. 1992. Where are the missing masses? The sociology of a few mundane artifacts. In Shaping technology/building society: Studies in sociotechnical change, ed. Wiebe Bijker and John Law, 225–259. Cambridge, MA: MIT Press.
  50. Mui, Lik. 2002. Computational models of trust and reputation: Agents, evolutionary games, and social networks. PhD thesis, MIT.
  51. Lomi, Alessandro, and Erik Larsen (eds.). 2000. Simulating organizational societies: Theories, models and ideas. Cambridge, MA: MIT Press.
  52. Magnani, Lorenzo. 2007. Distributed morality and technological artifacts. In 4th international conference on human being in contemporary philosophy, Volgograd.
  53. Marino, Dante, and Guglielmo Tamburrini. 2006. Learning robots and human responsibility. International Review of Information Ethics 6: 46–51.
  54. Martin, Mike W., and Ronald Schinzinger. 1996. Ethics in engineering. New York: McGraw-Hill.
  55. Matthias, Andreas. 2004. The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology 6: 175–183.
  56. Minsky, Marvin. 2006. The emotion machine: Commonsense thinking, artificial intelligence, and the future of the human mind. New York: Simon and Schuster.
  57. Montague, Peter. 1998. The precautionary principle. Rachel’s Environment and Health Weekly, No. 586.
  58. Moor, James H. 2006. The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems 21(4): 18–21.
  59. Nissenbaum, Helen. 1994. Computing and accountability. Communications of the ACM 37: 73–80.
  60. Pancomputationalism. 2009.
  61. Prietula, Michael. 2000. Advice, trust, and gossip among artificial agents. In Simulating organizational societies: Theories, models and ideas, ed. A. Lomi and E. Larsen. Cambridge, MA: MIT Press.
  62. Ramchurn, Sarvapali D., Dong Huynh, and Nicholas R. Jennings. 2004. Trust in multi-agent systems. The Knowledge Engineering Review 19: 1–25.
  63. Russell, Stuart, and Peter Norvig. 2003. Artificial intelligence – a modern approach. Upper Saddle River: Pearson Education.
  64. Shrader-Frechette, Kristen. 2003. Technology and ethics. In Philosophy of technology – the technological condition, ed. R.C. Scharff and V. Dusek, 187–190. Padstow: Blackwell Publishing.
  65. Silver, David A. 2005. A Strawsonian defense of corporate moral responsibility. American Philosophical Quarterly 42: 279–295.
  66. Siponen, Mikko. 2004. A pragmatic evaluation of the theory of information ethics. Ethics and Information Technology 6(4): 279–290.
  67. Sommerville, Ian. 2007. Models for responsibility assignment. In Responsibility and dependable systems, ed. G. Dewsbury and J. Dobson. London: Springer.
  68. Søraker, Johnny H. 2007. The moral status of information and information technologies: A relational theory of moral status. In Information technology ethics: Cultural perspectives, ed. S. Hongladarom and C. Ess, 1–19. Hershey: IGI Global.
  69. Stahl, Bernd C. 2004. Information, ethics, and computers: The problem of autonomous moral agents. Minds and Machines 14: 67–83.
  70. Stahl, Bernd C. 2006. Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency. Ethics and Information Technology 8: 205–213.
  71. Stamatelatos, Michael. 2000. Probabilistic risk assessment: What is it and why is it worth performing it? NASA Office of Safety and Mission Assurance.
  72. Strawson, Peter F. 1974. Freedom and resentment. In Freedom and resentment and other essays. London: Methuen.
  73. Veruggio, Gianmarco, and Fiorella Operto. 2008. Roboethics. In Springer handbook of robotics, Ch. 64. Berlin/Heidelberg: Springer.
  74. Wallach, Wendell, and Colin Allen. 2009. Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.

Copyright information

© Springer Science+Business Media B.V. 2012

Authors and Affiliations

  1. School of Innovation, Design and Engineering, Computer Science Laboratory, Mälardalen University, Västerås, Sweden
