Ethics and Information Technology, Volume 14, Issue 1, pp 61–71

Robots: ethical by design

Original paper

Abstract

Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both questions in the affirmative, based on an extensive study of existing arguments in the literature. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees, depending on the level of agency, autonomy and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems such as cognitive robots are under development and are expected to become part of our everyday lives in the coming decades, so it is necessary to ensure that their behaviour is adequate. By analogy with artificial intelligence, the ability of a machine to perform activities that would require intelligence in humans, artificial morality is the ability of a machine to perform activities that would require morality in humans. The capacities for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions and artificial (synthetic) emotions, come in varying degrees and depend on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. Just as the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to view artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system. This does not take away the responsibilities of the other stakeholders in the system, but facilitates the understanding and regulation of such networks. The development process must take an evolutionary form with a number of iterations, because the emergent properties of artifacts must be tested in real-world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirements Engineering through the discussion and analysis of general requirements for the design of ethical robots.
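
Since the paper itself presents no formalism, the following is a minimal, purely illustrative Python sketch of the two design ideas the abstract proposes: ethical capability graded by the artifact's level of agency, and functional responsibilities assigned within a network of stakeholders without removing any human stakeholder's responsibility. All names, levels and duty strings are our own assumptions, not taken from the paper.

```python
from dataclasses import dataclass, field
from enum import IntEnum


class AgencyLevel(IntEnum):
    # Graded agency (illustrative): higher levels demand more ethical capability.
    TOOL = 0        # no autonomy: responsibility rests wholly with humans
    REACTIVE = 1    # e.g. an intelligent search machine: simple moral concerns
    AUTONOMOUS = 2  # learning robot acting with limited supervision
    COGNITIVE = 3   # highly intelligent artifact with complex modes of agency


def required_ethical_capability(level: AgencyLevel) -> str:
    """Morality in degrees: map an artifact's level of agency to the
    ethical machinery its design must provide (hypothetical labels)."""
    return {
        AgencyLevel.TOOL: "none beyond designer/operator responsibility",
        AgencyLevel.REACTIVE: "hard-coded operational constraints",
        AgencyLevel.AUTONOMOUS: "explicit ethical evaluation of actions",
        AgencyLevel.COGNITIVE: "advanced artificial morality, iteratively tested",
    }[level]


@dataclass
class ResponsibilityNetwork:
    """Functional responsibilities distributed over a socio-technological
    system. Assigning a duty to the artifact never removes it from the
    human stakeholders who also bear it."""
    assignments: dict[str, set[str]] = field(default_factory=dict)

    def assign(self, stakeholder: str, duty: str) -> None:
        self.assignments.setdefault(stakeholder, set()).add(duty)

    def bearers_of(self, duty: str) -> list[str]:
        return [s for s, duties in self.assignments.items() if duty in duties]


if __name__ == "__main__":
    net = ResponsibilityNetwork()
    net.assign("manufacturer", "safe hardware")
    net.assign("programmer", "verified control software")
    net.assign("operator", "collision avoidance")
    net.assign("robot", "collision avoidance")  # functional, shared, not delegated away
    print(required_ethical_capability(AgencyLevel.AUTONOMOUS))
    # -> explicit ethical evaluation of actions
    print(net.bearers_of("collision avoidance"))
    # -> ['operator', 'robot']
```

The design point of `bearers_of` is that a duty may have several simultaneous bearers, so giving the robot a functional responsibility extends the network rather than shrinking anyone else's share of it.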

Keywords

Artificial morality · Machine ethics · Machine morality · Roboethics · Autonomous agents · Artifactual responsibility · Functional responsibility


Copyright information

© Springer Science+Business Media B.V. 2011

Authors and Affiliations

  1. Computer Science Laboratory, School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden
  2. Computational Perception Laboratory, School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden
