Software Agents, Anticipatory Ethics, and Accountability

  • Deborah G. Johnson
Chapter
Part of the International Library of Ethics, Law and Technology book series (ELTE, volume 7)

Abstract

This chapter takes up a case study of the accountability issues surrounding increasingly autonomous computer systems. In this early phase of their development, certain computer systems are referred to as “software agents” or “autonomous systems” because they operate in ways that are seemingly independent of human control. However, because of the responsibility and liability issues at stake, conceptualizing these systems as autonomous seems morally problematic and is likely to be legally problematic as well. Whether software agents and autonomous systems are used to make financial decisions, control transportation, or pursue military objectives, issues of accountability will inevitably arise when something goes wrong. While it would seem that the law will ultimately have to address these issues, law is currently used only minimally or indirectly to assign accountability for computer software failure. This nascent discussion of computer systems “in the making” is thus a good focal point for considering innovative approaches to making law, governance, and ethics more helpful with regard to new technologies. For a start, anticipatory reasoning about how accountability and liability issues are likely to be handled in law could influence the development of the technology (even if the anticipatory thinking ultimately proves wrong). Such thinking could, in principle at least, shape the design of computer systems.

Keywords

Software agents · Autonomous systems · Anticipatory ethics · Responsibility · Accountability · Liability

Copyright information

© Springer Science+Business Media B.V. 2011

Authors and Affiliations

  1. University of Virginia, Charlottesville, USA
