Abstract
There are significant disagreements about the methods used in applied ethics. This chapter reviews some central methodological questions and the underlying philosophical issues. A simple account of a common approach is outlined: consider one’s initial response to a case of interest, and then apply reasoning to test or correct that initial response. Issues arising from this simple model include the status and reliability of immediate responses, and the nature of any reasoning process, including the selection and justification of any framework of ethical values and ethical theory used. Certain features of AI itself pose particular challenges for methodology in applied ethics, including ways in which developments in technology can change how we understand concepts. Beliefs about the nature of ethics itself will also shape methodology, including assumptions about consistency, completeness, and clarity in ethics, and about the very purpose of morality; we consider some common assumptions. The way in which cases are selected and described is critical, affecting how agency and responsibility are attributed, among other questions. We examine how narratives and images of AI may influence how we approach ethical issues, and how fiction, including science fiction, may be used in addressing ethical questions in AI.
Further Reading
Methodology in Applied Ethics
Allen C, Smit I, Wallach W (2005) Artificial morality: top-down, bottom-up, and hybrid approaches. Ethics Inf Technol 7(3):149–155
Beauchamp TL (2005) The nature of applied ethics. In: Frey RG, Wellman CH (eds) A companion to applied ethics. Wiley-Blackwell, New York, pp 1–16
Beauchamp TL, Childress JF (2019) Principles of biomedical ethics, 8th edn. Oxford University Press, Oxford, Part I: Moral Foundations
Burton E, Goldsmith J, Mattei N (2018) How to teach computer ethics through science fiction. Commun ACM 61(8):54–64
Chambers T (2001) The fiction of bioethics: a precis. Am J Bioeth 1(1):40–43
Chambers T (2016) Eating One’s friends: fiction as argument in bioethics. Lit Med 34(1):79–105
Glover J (1990) Causing death and saving lives: the moral problems of abortion, infanticide, suicide, euthanasia, capital punishment, war and other life-or-death choices. Penguin, London
Mackie JL (1977) Ethics: inventing right and wrong. Penguin, New York
Singer P (2011) Practical ethics. Cambridge University Press, Cambridge
Artificial Intelligence, Perceptions, and Sources of Bias
Awad E, Dsouza S, Kim R, Schulz J, Henrich J, Shariff A et al (2018) The moral machine experiment. Nature 563(7729):59–64
Cave S, Dihal K (2019) Hopes and fears for intelligent machines in fiction and reality. Nat Mach Intell 1(2):74–78
Cave S, Craig C, Dihal K, Dillon S, Montgomery J, Singler B, Taylor L (2018) Portrayals and perceptions of AI and why they matter. The Royal Society, London
Coeckelbergh M (2022) The Ubuntu robot: towards a relational conceptual framework for intercultural robotics. Sci Eng Ethics 28(2):1–15
Lanier J (2010) You are not a gadget: a manifesto. Vintage, New York
Noble SU (2018) Algorithms of oppression: how search engines reinforce racism. NYU Press, New York
Singler B (2020) The AI creation meme: a case study of the new visibility of religion in artificial intelligence discourse. Religion 11(5):253
Weizenbaum J (1967) Computer power and human reason. Freeman and Company, New York
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Boddington, P. (2023). Methods in Applied Ethics. In: AI Ethics. Artificial Intelligence: Foundations, Theory, and Algorithms. Springer, Singapore. https://doi.org/10.1007/978-981-19-9382-4_4
Print ISBN: 978-981-19-9381-7
Online ISBN: 978-981-19-9382-4
eBook Packages: Computer Science (R0)