Abstract
This chapter takes up a case study of the accountability issues surrounding increasingly autonomous computer systems. In this early phase of their development, certain computer systems are referred to as “software agents” or “autonomous systems” because they operate in a variety of ways that are seemingly independent of human control. Because of the responsibility and liability issues involved, however, conceptualizing these systems as autonomous is morally problematic and likely to be legally problematic as well. Whether software agents and autonomous systems are used to make financial decisions, control transportation, or carry out military objectives, when something goes wrong, issues of accountability will inevitably arise. Although the law will ultimately have to handle these issues, it is currently being used only minimally or indirectly to address accountability for computer software failure. This nascent discussion of computer systems “in the making” is a good focal point for considering innovative approaches to making law, governance, and ethics more helpful with regard to new technologies. For a start, anticipatory reasoning about how accountability and liability issues are likely to be handled in law could influence the development of the technology (even if the anticipatory thinking ultimately proves wrong). Such thinking could, in principle at least, shape the design of computer systems.
Classification does indeed have its consequences – perceived as real, it has real effects.
– Bowker and Star (1999)
Notes
- 1.
This is the framework I presumed in most of my early work on computer and engineering ethics; see, for example, Computer Ethics 1st edition, Prentice Hall, 1985.
- 2.
I am grateful to Martin Anderson for first characterizing the argument in this way (as moral ontology) while it was still somewhat inchoate in my thinking.
- 3.
The argument is inspired by Bowker and Star (1999) and other work that points to the powerful effects of systems of classification.
- 4.
Of course, software behavior causally contributes to events with untoward consequences. See Johnson and Powers (2005). The locus of accountability is connected to but different from causality.
- 5.
Discussion of the risks from defects and failure of software can be found in The Risks Digest, a Forum On Risks To The Public In Computers And Related Systems; the Forum is an activity of the ACM Committee on Computers and Public Policy, moderated by Peter G. Neumann and found at: http://catless.ncl.ac.uk/Risks/.
References
Allen, C., I. Smit, and W. Wallach. 2005. Artificial morality: Top-down, bottom-up, and hybrid approaches. Ethics and Information Technology 7: 149–155.
Allen, C., G. Varner, and J. Zinser. 2000. Prolegomena to any future artificial moral agent. Journal of Experimental and Theoretical Artificial Intelligence 12 (3): 251–261.
Anderson, M., and S.L. Anderson. 2006. Machine ethics? IEEE Intelligent Systems 21 (4): 10–11.
Anderson, M., and S.L. Anderson. 2007. The status of machine ethics: A report from the AAAI symposium. Minds and Machines 17: 1–10.
Ballman, D.R. 1996. Commentary: Software tort: Evaluating software harm by duty of function and form. Connecticut Insurance Law Journal 3: 417.
Bowker, G.C., and S.L. Star. 1999. Sorting things out: Classification and its consequences. Cambridge, MA: The MIT Press.
Chien, S., R. Sherwood, D. Tran, B. Cichy, G. Rabideau, and R. Castano. 2005. Lessons learned from the Autonomous Sciencecraft Experiment. Paper presented at the Autonomous Agents and Multi-Agent Systems conference, Utrecht, Netherlands.
Childers, S.J. 2008. Don’t stop the music: No strict products liability for embedded software. University of Florida Journal of Law & Public Policy 19: 125, 127.
Fisher, E., C. Selin, and J.M. Wetmore. 2008. The yearbook of nanotechnology in society, vol. 1: Presenting futures. New York, NY: Springer.
Floridi, L., and J.W. Sanders. 2004. On the morality of artificial agents. Minds and Machines 14 (3): 349–379.
Grodzinsky, F.S., K.W. Miller, and M.J. Wolf. 2008. The ethics of designing artificial agents. Ethics and Information Technology 10: 115–121.
Johnson, D., and J.M. Wetmore. 2008. STS and ethics: Implications for engineering ethics. In New handbook of science and technology studies, eds. E. Hackett, O. Amsterdamska, M. Lynch, and J. Wajcman. Cambridge, MA: The MIT Press.
Johnson, D., and K. Miller. 2008. Un-making artificial moral agents. Ethics and Information Technology 10 (2–3): 123–133.
Johnson, D., and T.M. Powers. 2005. Computer systems and responsibility: A normative look at technological complexity. Ethics and Information Technology 7 (2): 99–107.
Joy, B. 2000. Why the future doesn’t need us. Wired 8 (4): 238–262.
Lee, M.H., and N.J. Lacey. 2003. The influence of epistemology on the design of artificial agents. Minds and Machines 13: 367–395.
Moor, J.H. 2006. The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems 21 (4): 18–21.
Noorman, M. 2008. Mind the gap: A critique of human/technology analogies in artificial agent discourse. Maastricht: Universitaire Pers Maastricht.
Rahwan, I., L. Sonenberg, N. Jennings, and P. McBurney. 2007. Stratum: A methodology for designing heuristic agent negotiation strategies. Applied Artificial Intelligence 21 (6): 489–527.
Terry, N.P. 2002. When the “machine that goes ‘ping’” causes harm: Default torts rules and technologically-mediated health care injuries. Saint Louis University Law Journal 46: 37–59.
The American Law Institute. 2009. Principles of the law of software contracts, proposed final draft 16 Mar.
Wallach, W., and C. Allen. 2009. Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.
Zollers, F.E., A. McMullin, S.N. Hurd, and P. Shears. 2005. No more soft landings for software: Liability for defects in an industry that has come of age. Santa Clara Computer and High Technology Law Journal 21: 745.
Copyright information
© 2011 Springer Science+Business Media B.V.
About this chapter
Cite this chapter
Johnson, D.G. (2011). Software Agents, Anticipatory Ethics, and Accountability. In: Marchant, G., Allenby, B., Herkert, J. (eds) The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight. The International Library of Ethics, Law and Technology, vol 7. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-1356-7_5
DOI: https://doi.org/10.1007/978-94-007-1356-7_5
Publisher Name: Springer, Dordrecht
Print ISBN: 978-94-007-1355-0
Online ISBN: 978-94-007-1356-7
eBook Packages: Humanities, Social Sciences and Law; Philosophy and Religion (R0)