Floridi’s Information Ethics as Macro-ethics and Info-computational Agent-Based Models
Luciano Floridi’s Information Ethics (IE) is a new theoretical foundation of ethics. According to Floridi, ICT, with all its informational structures and processes, generates our new informational habitat, the Infosphere. For IE, moral action is an information-processing pattern. IE addresses the fundamentally informational character of our interaction with the world, including interactions with other agents. Information Ethics is a macro-ethics, as it focuses on systems/networks of agents and their behavior. IE’s capacity to study ethical phenomena at the basic level of underlying information patterns and processes makes it unique among ethical theories in providing a conceptual framework for fundamental-level analysis of the present globalised, ICT-based world. It allows computational modeling, a powerful tool of study that increases our understanding of the informational mechanisms of ethics. Computational models help capture behaviors invisible to the unaided mind, which relies exclusively on shared intuitions. The article presents an analysis of the application of IE as interpreted within the framework of Info-Computationalism. The focus is on responsibility/accountability distribution and similar phenomena of information communication in networks of agents. Agent-based modeling enables the study of the increasing complexity of behavior in multi-agent systems whose agents (actors) range from cellular automata to softbots, robots and humans. Autonomous, learning, artificially intelligent systems are developing rapidly, resulting in a new division of tasks between humans and robots/softbots. The biggest present-day concern about autonomous intelligent systems is the fear of loss of human control, with robots acting inappropriately and causing harm. Ethically unacceptable behavior is one such kind of inappropriate conduct.
In order to assure ethically adequate behavior of autonomous intelligent systems, artifactual ethical responsibility/accountability should be one of the built-in features of intelligent artifacts. Adding the requirement of artifactual ethical behavior to a robot/softbot does not by any means take responsibility away from the humans designing, producing and controlling autonomous intelligent systems. On the contrary, it makes explicit the necessity for all involved with such intelligent technology to assure its ethical conduct. Today’s robots are used mainly as complex electromechanical tools and do not have any capability of taking moral responsibility. But technological progress is remarkable: robots are quickly improving their sensory and motor competencies, and the development of artifactual (synthetic) emotions adds new dimensions to robotics. Artifactual reasoning and other information-processing skills are advancing as well, all of which drives significant progress in the field of Social Robotics. We thus have strong reasons to analyze a future technological development in which robots/softbots are so intelligent and responsive that they possess artifactual morality alongside artifactual intelligence. Technological artifacts are always part of a broader socio-technological system with distributed responsibilities. The development of autonomous, learning, morally responsible intelligent agents consequently relies on several responsibility feedback loops: the awareness of and preparedness for handling risks on the side of designers, producers, implementers, users and maintenance personnel, as well as the support of society at large, which provides feedback on the consequences of the use of the technology. This complex system of shared responsibilities should secure the safe functioning of hybrid systems of humans and intelligent machines. Information Ethics provides a conceptual framework for computational modeling of such socio-technological systems.
Apart from examples of specific applications of IE, an interpretation of several widely debated questions, such as the role of Levels of Abstraction, naturalism, and complexity/diversity in Information Ethics, is offered through Info-Computationalist analysis.
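The agent-based approach to responsibility/accountability distribution described above can be illustrated with a minimal sketch. This is not a model from the article: the agents, their reliability values, and the accountability measure (each agent’s share of the actions taken in a message-passing chain) are illustrative assumptions only, meant to show how responsibility can be traced through a network of heterogeneous agents.

```python
import random

class Agent:
    """A node in a socio-technological network; forwards messages and logs its actions."""
    def __init__(self, name, reliability):
        self.name = name
        self.reliability = reliability  # assumed probability of forwarding correctly
        self.actions = 0                # count of actions this agent has taken

    def forward(self, message, rng):
        self.actions += 1
        # An unreliable agent may corrupt the message in transit.
        if rng.random() < self.reliability:
            return message
        return message + "*"  # corruption marker

def run_chain(agents, message, seed=0):
    """Pass a message along a chain of agents; return the final message and
    each agent's share of the total actions, a crude proxy for how
    accountability is distributed across the hybrid human/machine system."""
    rng = random.Random(seed)
    for agent in agents:
        message = agent.forward(message, rng)
    total = sum(a.actions for a in agents)
    shares = {a.name: a.actions / total for a in agents}
    return message, shares

# Hypothetical hybrid chain: a human instructs a softbot, which commands a robot.
agents = [Agent("human", 0.99), Agent("softbot", 0.95), Agent("robot", 0.90)]
final, shares = run_chain(agents, "command")
```

Since every agent acts exactly once here, accountability is shared equally; richer network topologies, learning agents, or weighted action logs would shift the distribution, which is the kind of behavior agent-based models make visible.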
Keywords: Moral Responsibility, Intelligent System, Intelligent Agent, Artificial Agent, Ethical Approach
The author thanks Mark Coeckelbergh for insightful comments on earlier versions of this paper.
- Arkin, Ronald C. 1998. Behavior-based robotics. Cambridge: MIT Press.
- Asaro, Peter M. 2007. Robots and responsibility from a legal perspective. In Proceedings of the IEEE 2007 international conference on robotics and automation, Workshop on RoboEthics, Rome.
- Becker, Barbara. 2006. Social robots – emotional agents: Some remarks on naturalizing man-machine interaction. International Review of Information Ethics 6: 37–45.
- Becker, Barbara. 2009. Social robots – emotional agents: Some remarks on naturalizing man-machine interaction. In Ethics and robotics, ed. R. Capurro and M. Nagenborg. Amsterdam: IOS Press.
- Coleman, K.G. 2005. Computing and moral responsibility. In The Stanford encyclopedia of philosophy, Spring edn, ed. Edward N. Zalta. Stanford: Stanford University. http://plato.stanford.edu/archives/spr2005/entries/computing-responsibility/
- Crutzen, C.K.M. 2006. Invisibility and the meaning of ambient intelligence. International Review of Information Ethics 6: 52–62.
- Danielson, Peter. 1992. Artificial morality: Virtuous robots for virtual games. London: Routledge.
- Dennett, Daniel C. 1973. Mechanism and responsibility. In Essays on freedom of action, ed. T. Honderich. Boston: Routledge & Kegan Paul.
- Dennett, Daniel C. 1994. The myth of original intentionality. In Thinking computers and virtual persons: Essays on the intentionality of machines, ed. E. Dietrich, 91–107. San Diego/London: Academic.
- DIRC project. http://www.comp.lancs.ac.uk/computing/research/cseg/projects/dirc/projectthemes.htm (accessed October 26, 2010).
- Dodig-Crnkovic, Gordana. 1999. ABB Atom’s criticality safety handbook. ICNC’99 sixth international conference on nuclear criticality safety, Versailles, France. http://www.idt.mdh.se/personal/gdc/work/csh.pdf (accessed October 26, 2010).
- Dodig-Crnkovic, Gordana. 2005. On the importance of teaching professional ethics to computer science students. In Computing and philosophy, Computing and philosophy conference, E-CAP 2004, Pavia, Italy, ed. L. Magnani. Pavia: Associated International Academic Publishers.
- Dodig-Crnkovic, Gordana. 2006a. Investigations into information semantics and ethics of computing. Västerås: Mälardalen University Press. http://mdh.divaportal.org/smash/get/diva2:120541/FULLTEXT01 (accessed October 26, 2010).
- Dodig-Crnkovic, Gordana. 2006b. Professional ethics in computing and intelligent systems. In Proceedings of the ninth Scandinavian conference on artificial intelligence (SCAI 2006), Espoo, Finland, October 25–27.
- Dodig-Crnkovic, Gordana. 2008. Knowledge generation as natural computation. Journal of Systemics, Cybernetics and Informatics 6: 12–16.
- Dodig-Crnkovic, Gordana. 2009. Information and computation nets. Saarbrücken: VDM Verlag.
- Dodig-Crnkovic, Gordana. 2010. The cybersemiotics and info-computationalist research programmes as platforms for knowledge production in organisms and machines. Entropy 12: 878–901. http://www.mdpi.com/1099-4300/12/4/878 (accessed October 26, 2010).
- Dodig-Crnkovic, Gordana, and Margaryta Anokhina. 2008. Workplace gossip and rumor: The information ethics perspective. In ETHICOMP-2008, Mantova, Italy.
- Dodig-Crnkovic, Gordana, and Vincent Müller. 2010. A dialogue concerning two world systems: Info-computational vs. mechanistic. In Information and computation, ed. G. Dodig-Crnkovic and M. Burgin. Singapore: World Scientific Publishing Co.
- Dodig-Crnkovic, Gordana, and Daniel Persson. 2008. Sharing moral responsibility with robots: A pragmatic approach. In Tenth Scandinavian conference on artificial intelligence SCAI 2008, Frontiers in artificial intelligence and applications, vol. 173, ed. A. Holst, P. Kreuger, and P. Funk. Amsterdam: IOS Press.
- Epstein, Joshua M. 2004. Generative social science: Studies in agent-based computational modeling. Princeton studies in complexity. Princeton/Oxford: Princeton University Press.
- Eshleman, Andrew. 2004. Moral responsibility. In The Stanford encyclopedia of philosophy, Fall edn, ed. Edward N. Zalta. Stanford: Stanford University. http://plato.stanford.edu/archives/fall2004/entries/moral-responsibility (accessed October 26, 2010).
- Fellous, Jean-Marc, and Michael A. Arbib (eds.). 2005. Who needs emotions? The brain meets the robot. Oxford: Oxford University Press.
- Floridi, Luciano. 2008b. Information ethics: Its nature and scope. In Moral philosophy and information technology, ed. Jeroen van den Hoven and John Weckert, 40–65. Cambridge: Cambridge University Press.
- Floridi, Luciano, and J.W. Sanders. 2004b. On the morality of artificial agents. Minds and Machines 14: 349–379.
- Gilbert, Nigel. 2008. Agent-based models. Quantitative applications in the social sciences. Los Angeles: Sage Publications.
- Huff, Chuck. 2004. Unintentional power in the design of computing systems. In Computer ethics and professional responsibility, ed. T.W. Bynum and S. Rogerson, 98–106. Kundli: Blackwell Publishing.
- Järvik, Marek. 2003. How to understand moral responsibility? Trames 7(3): 147–163.
- Johnson, Deborah G. 2006. Computer systems: Moral entities but not moral agents. Ethics and Information Technology 8: 195–204.
- Johnson, Deborah G., and Keith W. Miller. 2006. A dialogue on responsibility, moral agency, and IT systems. In Proceedings of the 2006 ACM symposium on applied computing, Dijon, France, 272–276.
- Johnson, Deborah G., and T.M. Powers. 2005. Computer systems and responsibility: A normative look at technological complexity. Ethics and Information Technology 7: 99–107.
- Larsson, Magnus. 2004. Predicting quality attributes in component-based software systems. PhD thesis, Mälardalen University Press, Sweden. ISBN 91-88834-33-6.
- Latour, Bruno. 1992. Where are the missing masses? The sociology of a few mundane artifacts. In Shaping technology/building society: Studies in sociotechnical change, ed. Wiebe Bijker and John Law, 225–259. Cambridge, MA: MIT Press. http://www.bruno-latour.fr/articles/1992.html (accessed October 26, 2010).
- Mui, Lik. 2002. Computational models of trust and reputation: Agents, evolutionary games, and social networks. PhD thesis, MIT. http://groups.csail.mit.edu/medg/ftp/lmui/computational%20models%20of%20trust%20and%20reputation.pdf (accessed October 26, 2010).
- Lomi, Alessandro, and Erik Larsen (eds.). 2000. Simulating organizational societies: Theories, models and ideas. Cambridge, MA: MIT Press.
- Magnani, Lorenzo. 2007. Distributed morality and technological artifacts. In 4th international conference on human being in contemporary philosophy, Volgograd. http://volgograd2007.goldenideashome.com/2%20Papers/Magnani%20Lorenzo%20p.pdf (accessed October 26, 2010).
- Marino, Dante, and Guglielmo Tamburrini. 2006. Learning robots and human responsibility. International Review of Information Ethics 6: 46–51.
- Martin, Mike W., and Ronald Schinzinger. 1996. Ethics in engineering. New York: McGraw-Hill.
- Matthias, Andreas. 2004. The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology 6: 175–183.
- Minsky, Marvin. 2006. The emotion machine: Commonsense thinking, artificial intelligence, and the future of the human mind. New York: Simon and Schuster.
- Montague, Peter. 1998. The precautionary principle. Rachel’s Environment and Health Weekly, No. 586. http://www.biotech-info.net/rachels_586.html (accessed October 26, 2010).
- Nissenbaum, Helen. 1994. Computing and accountability. Communications of the ACM 37: 73–80.
- Pancomputationalism. 2009. http://www.idt.mdh.se/personal/gdc/work/Pancomputationalism.mht (accessed October 26, 2010).
- Prietula, Michael. 2000. Advice, trust, and gossip among artificial agents. In Simulating organizational societies: Theories, models and ideas, ed. A. Lomi and E. Larsen. Cambridge, MA: MIT Press.
- Ramchurn, Sarvapali D., Dong Huynh, and Nicholas R. Jennings. 2004. Trust in multi-agent systems. The Knowledge Engineering Review 19: 1–25.
- Roboethics links. http://www.roboethics.org, http://www.scuoladirobotica.it, http://roboethics.stanford.edu, http://ethicalife.dynalias.org/schedule.html, http://www-arts.sssup.it/IEEE_TC_RoboEthics, http://ethicbots.na.infn.it, http://www.capurro.de/lehre_ethicbots.htm (ETHICBOTS seminar by Rafael Capurro), http://www.roboethics.org/icra2009/index.php?cmd=program (ICRA2009 Roboethics workshop at the IEEE conference on robotics and automation) (accessed October 26, 2010).
- Russell, Stuart, and Peter Norvig. 2003. Artificial intelligence: A modern approach. Upper Saddle River: Pearson Education.
- Shrader-Frechette, Kristen. 2003. Technology and ethics. In Philosophy of technology – the technological condition, ed. R.C. Scharff and V. Dusek, 187–190. Padstow: Blackwell Publishing.
- Silver, David A. 2005. A Strawsonian defense of corporate moral responsibility. American Philosophical Quarterly 42: 279–295.
- Sommerville, Ian. 2007. Models for responsibility assignment. In Responsibility and dependable systems, ed. G. Dewsbury and J. Dobson. London: Springer. ISBN 1846286255.
- Søraker, Johnny H. 2007. The moral status of information and information technologies: A relational theory of moral status. In Information technology ethics: Cultural perspectives, ed. S. Hongladarom and C. Ess, 1–19. Hershey: IGI Global.
- Stahl, Bernd C. 2004. Information, ethics, and computers: The problem of autonomous moral agents. Minds and Machines 14: 67–83.
- Stahl, Bernd C. 2006. Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency. Ethics and Information Technology 8: 205–213.
- Stamatelatos, Michael. 2000. Probabilistic risk assessment: What is it and why is it worth performing it? NASA Office of Safety and Mission Assurance. http://www.hq.nasa.gov/office/codeq/qnews/pra.pdf (accessed October 26, 2010).
- Strawson, Peter F. 1974. Freedom and resentment. In Freedom and resentment and other essays. London: Methuen.
- Veruggio, Gianmarco, and Fiorella Operto. 2008. Roboethics. In Springer handbook of robotics, Ch. 64. Berlin/Heidelberg: Springer.
- Wallach, Wendell, and Colin Allen. 2009. Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.