Methods in Applied Ethics

  • Chapter in: AI Ethics

Abstract

There are significant disagreements about the methods used in applied ethics. This chapter reviews some central methodological questions and the underlying philosophical issues. A simple account of a common approach is outlined: consider one’s initial response to a case of interest, and then apply reasoning to test or correct that response. Issues arising from this simple model include the status and reliability of immediate responses and the nature of any reasoning process, including how any framework of ethical values or ethical theory is selected and justified. Certain features of AI itself pose particular challenges for methodology in applied ethics, including the ways in which technological developments can change how we understand concepts. Beliefs about the nature of ethics also shape methodology, including assumptions about consistency, completeness, and clarity in ethics, and about the very purpose of morality; we look at some common assumptions. The way in which cases are selected and described is critical, affecting, among other things, how agency and responsibility are attributed. We examine how narratives and images of AI may influence how we approach ethical issues, and how fiction, including science fiction, may be used in addressing ethical questions in AI.

    Copyright information

    © 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

    About this chapter

    Cite this chapter

    Boddington, P. (2023). Methods in Applied Ethics. In: AI Ethics. Artificial Intelligence: Foundations, Theory, and Algorithms. Springer, Singapore. https://doi.org/10.1007/978-981-19-9382-4_4

    • DOI: https://doi.org/10.1007/978-981-19-9382-4_4

    • Publisher Name: Springer, Singapore

    • Print ISBN: 978-981-19-9381-7

    • Online ISBN: 978-981-19-9382-4
