Computing machinery and morality

Original Article · AI & SOCIETY

Abstract

Artificial intelligence (AI) is a technology widely used to support human decision-making, with current applications in financial services, engineering, and management. A number of attempts have been made to introduce AI decision-support systems into areas that more obviously involve moral judgement, including systems that advise on patient care and on social benefit entitlement, and even systems that offer ethical advice to medical professionals. These developments raise a complex set of moral questions. This paper proposes a clearer replacement question: under what circumstances, if any, would people accept a moral judgement made by some sort of machine? Since, it is argued, the answer to this replacement question is affirmative, urgent practical moral problems arise.


Notes

  1. The notion of a replacement question, the layout, and the title of this paper deliberately echo those of Alan Turing’s 1950 publication, Computing Machinery and Intelligence (Turing 1950).

  2. It is obviously the case that real humans rarely make free and rational decisions about which moral judgements to accept. They exist in networks of authority, social expectations, and religion which effectively limit their choices. However, to discuss the question in such realistic terms from the outset would serve only to obscure the argument.

  3. The fact that computers currently make these sorts of decisions should not, under any circumstances, be conflated with the claim that this is in any sense a desirable state of affairs. There are some serious problems with this sort of development which, unfortunately, lie outside the scope of this paper.

  4. Actually, the apparent lack of prejudice and bias in computers is a consequence of the ‘logical myth’ mentioned in the last section. In practice, computers embody and express the prejudices of their designers, and this can sometimes be a serious problem.

References

  • Allen C, Varner G, Zinser J (2000) Prolegomena to any future artificial moral agent. J Exp Theor Artif Intell 12:251–261


  • Bostrom N (2003) Ethical issues in advanced artificial intelligence. In: Smit I, Wallach W, Lasker G (eds) Cognitive, emotive and ethical aspects of decision making in humans and in artificial intelligence, vol 2. IIAS, Windsor, pp 12–17

  • Browne J, Taylor A (1991) Realism, responsibility, and rationality: practical, legal, and political issues concerning the introduction of a large knowledge-based system in law. In: Bennun (ed) Computers, artificial intelligence and the law. Ellis Horwood, Chichester, pp 95–123

  • Damasio AR (1996) Descartes’ error: emotion, reason, and the human brain. Macmillan, London


  • Danielson P (1992) Artificial morality: virtuous robots for virtual games. Routledge, London


  • LaChat MR (1986) Artificial intelligence and ethics: an exercise in the moral imagination. AI Mag 7(2):70–79


  • Miller PL (1984) A critiquing approach to expert computer advice: ATTENDING. Pitman, London

  • Picard R (1998) Affective computing. MIT Press, Cambridge


  • Turing AM (1950) Computing machinery and intelligence. Mind LIX(236):433–460

  • Whitby B (1996) Reflections on artificial intelligence: the legal, moral, and ethical dimensions. Intellect Books, Exeter


  • Whitby B (2003) The myth of AI failure. Cognitive science research report 568. University of Sussex, Falmer

  • Yeats WB (1919) An Irish airman foresees his death (1917). In: The wild swans at Coole, and other poems. Macmillan, New York


Author information

Corresponding author

Correspondence to Blay Whitby.

Cite this article

Whitby, B. Computing machinery and morality. AI & Soc 22, 551–563 (2008). https://doi.org/10.1007/s00146-007-0100-y
