Applied Ethics: AI and Ethics

Abstract

Gain an understanding of how we can proceed to apply ethics

Appendices

Study Questions

  1. What, in your view, is a major difference between applied ethics and the other branches of ethics, such as normative ethics and metaethics?

  2. What is value pluralism? How is it violated by the basic tenets of eugenics?

  3. Applied ethics is supposed to require interdisciplinary and multidisciplinary collaboration. Use your own examples to illustrate this point.

  4. Why, according to some, is applied ethics not merely a derivation of the applicable rules from the general theories of ethics to a given practical situation?

  5. Explain Kolste’s observation that, in the 20th century, the detachment of people from religion and religious authorities and their preference for moral autonomy helped the rise of applied ethics.

  6. What is the top-down approach in applied ethics? Use Kant’s deontological ethics to illustrate the approach. Is it possible to use more than one ethical theory in the top-down approach?

  7. What is the reflective equilibrium method in applied ethics? When, according to this method, does the process of reflective equilibrium stop?

  8. What is the main contention of the anti-theorists? Are they against ethics? Justify your answer.

  9. Consider the definition of AI given by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG): Artificial intelligence (AI) refers to systems that display intelligent behavior by analyzing their environment and taking actions—with some degree of autonomy—to achieve specific goals. What does “intelligent behavior” mean in this context?

  10. What is/are the major difference(s) between narrow AI and general AI?

  11. Is the narrow AI and general AI classification the same as what Searle referred to as weak AI and strong AI? Explain your answer.

  12. Explain what the objective of Turing’s “imitation game”, or the Turing Test, is. Is it to show a level of accuracy and efficiency superior to that of humans?

  13. What is the point of Searle’s Chinese Room argument? Which claim does it try to refute?

  14. Which of the following sentences is true, and why?

      (a) Data science relies exclusively on AI-based methods for data analytics.

      (b) Data science uses AI-based tools, among many other tools, for data analytics.

  15. Discuss supervised machine learning and unsupervised machine learning, highlighting the important differences.

  16. What, in your view, are the compelling reasons for discussing the ethics of AI and data science (DS) as an example of applied ethics?

  17. Can we justifiably claim that AI systems have a moral status? Discuss the arguments for and against to come to a critical conclusion.

  18. Should AI systems be regarded as moral agents if they are endowed with advanced programming and rule sets for behaving ethically? Justify your answer.

  19. What is Actor-Network Theory (ANT), and what important changes does it bring to the discussion of agency?

  20. Consider the case: a driverless, automated car has gone over the right foot of a pedestrian who suddenly stepped in front of the car. The injury to the pedestrian’s toes is serious. To whom would you assign moral responsibility for the injury, and why?

  21. What is the problem of many hands in attributing responsibility for an action?

  22. In Box 8.19 of this chapter, the Chinese National Social Credit System is mentioned. In that system, what kinds of consequences follow from a low score? Is the score computed based only on a person’s financial behavior?

  23. Should individual privacy be an absolute right in a society? Justify your answer.

  24. Several philosophers have argued that privacy violations are not to be taken lightly, because they also affect something fundamental. Explain this line of thinking.

  25. In the seven categories of privacy, as categorized by Friedewald, Finn and Wright (2013), how is privacy of the person different from privacy of behavior and action? Explain.

  26. Explain the difference between personal and non-personal data. Which kind of data is the EU’s GDPR mostly concerned with protecting?

  27. Why is it said that “machine learning applications and models are both themselves at risk of data privacy breach”? Explain.

  28. On what grounds does the law not recognize individual data ownership rights as property rights? Explain. Why does the law speak of access instead of exclusive possession of data?

  29. What is a data commons? Would it adequately address the problem with data ownership? Would it help in the case of a data lake?

  30. What was Jeremy Bentham’s idea of the Panopticon? How is the idea connected to Orwell’s novel ‘1984’? Why did Foucault regard the Panopticon as a metaphor for social control?

  31. What is Panopticism? What are its two major aspects?

  32. Could we be under continuous surveillance inside our homes? Explain with appropriate examples.

  33. Is massive centralized surveillance ever justified? Explain your answer.

  34. When is data sharing unethical? What kind of ethically problematic issues does it give rise to?

  35. Briefly explain the Principle of Purpose Limitation. Does it imply that data collected once should not be re-used?

  36. Write brief notes on: Persuasive Technology, Filter Bubble. Explain how these techniques can influence people’s choices.

  37. What is manipulation in marketing? How is AI brought into manipulative marketing strategies?

  38. Can manipulation through AI in newspaper reporting be a matter of ethical concern? Explain briefly.

  39. What is predatory marketing? In the digital age, how does AI-enabled technology make predatory marketing even more powerful?

  40. Why are children considered a vulnerable group with regard to products and services in general? How is their vulnerability different from, for example, the vulnerability of people with low digital literacy?

  41. Explain how the digital divide can create an unfair distribution of opportunities and benefits in a digital economy. Do you agree that the distribution is even more limited and skewed in the case of women?

  42. What is fake news? In what sense can its effect on people and society in general be considered unethical?

  43. Can deepfakes be a threat to social harmony and peace? Explain.

  44. What is the epistemic threat from deepfakes?

  45. Why is the issue of trust said to be AI’s biggest problem?

  46. Write short notes on: (a) Cyberbullying, (b) Algorithmic bias, (c) Online hate speech.

Research Exercises

  1. If the law recognizes personal data privacy rights but does not consider personal data ownership a property right, what is the implication for the common person, who is the data principal?

  2. What, in your view, is the most effective way to get rid of algorithmic bias?

Case Study Discussion

Ethics Case 8.1

Netflix and $1 Million Contest

In October 2006, Netflix, the world’s largest online movie rental company, organized a $1 million contest challenging machine intelligence experts to come up with a movie recommendation algorithm better than Netflix’s own personalized recommendation system. The contest caught the attention of machine learning experts from leading academic and private research institutions, eventually drawing about 50,000 contestants. The dataset contained the movie ratings of 500,000 Netflix subscribers; the data were anonymized before being made available to the contestants.

However, Arvind Narayanan and Vitaly Shmatikov showed that, with a second set of information, such as comments on movies on the free and publicly available Internet Movie Database (IMDb), it is possible to deanonymize the Netflix users in the dataset. Using IMDb as background knowledge, they successfully identified Netflix subscribers, along with their political preferences and other personally sensitive information.

Several Netflix subscribers sued Netflix for breaching their privacy. The lawsuit charged that Netflix had indirectly exposed subscribers’ data and had failed to protect customer privacy.

Although Netflix did not admit any wrongdoing, it settled the lawsuit and, as part of the settlement, cancelled the planned sequel to the competition.
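
The re-identification at the heart of this case is, in essence, a linkage attack: quasi-identifiers in the “anonymized” dataset (here, movie ratings) are matched against an auxiliary public source. The following Python sketch illustrates the general idea on invented toy data; the names, ratings, and matching threshold are purely illustrative assumptions, and Narayanan and Shmatikov’s actual algorithm is a far more sophisticated weighted, date-aware scoring scheme over sparse rating vectors.

    # Toy linkage attack: match "anonymized" rating records against a
    # public auxiliary dataset by overlap of (movie, rating) pairs.
    # Purely illustrative; not the actual Narayanan-Shmatikov algorithm.

    # "Anonymized" release: numeric IDs instead of names.
    anon = {
        101: {"Movie A": 5, "Movie B": 1, "Movie C": 4},
        102: {"Movie A": 2, "Movie D": 5},
    }

    # Public auxiliary data (e.g., reviews posted on IMDb under real names).
    public = {
        "alice": {"Movie A": 5, "Movie B": 1, "Movie C": 4},
        "bob": {"Movie D": 5},
    }

    def overlap_score(anon_ratings, public_ratings):
        """Fraction of the public user's ratings matched exactly in the
        anonymous record; a crude stand-in for a similarity measure."""
        matches = [m for m, r in public_ratings.items()
                   if anon_ratings.get(m) == r]
        return len(matches) / len(public_ratings)

    # Link each anonymous record to its best-matching public identity.
    for anon_id, ratings in anon.items():
        name, score = max(
            ((n, overlap_score(ratings, r)) for n, r in public.items()),
            key=lambda pair: pair[1],
        )
        if score >= 0.8:  # arbitrary confidence threshold
            print(f"Anonymous user {anon_id} is likely '{name}' "
                  f"(match score {score:.2f})")

Even this crude version links both toy records to named identities, which is why removing names alone does not anonymize rich behavioral data.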

Case 8.1. Questions

  1. Using the seven types of privacy, explain which kind of privacy of the Netflix subscribers was breached in this case.

  2. Explain the main learning takeaway from the case.

  3. What other precautions should be taken to protect personal data privacy? Justify your claim using ethical principles.

Ethics Case 8.2

Portrait AI

There are AI-based apps available on the Internet which can repaint a selfie in the style of 18th-century Baroque and Renaissance portraiture. One of them, PortraitAI (https://portraitai.app/), creates impressive paintings out of a selfie, provided the selfie is of a white person. With selfies of Black, brown, or other skin tones, it fails to perform. As the app-making company now admits, the problem lies with the dataset on which the AI was trained: the portraits from that era were almost exclusively of white Europeans, so the training data consisted of white faces only. Consequently, when painting the picture of someone of another race, with a different skin tone and facial features, the app does not produce satisfactory results. Expanding the dataset to include more diverse images might have helped.
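
The failure mode described here, a model that performs well only on the group dominating its training data, is easy to reproduce. Below is a minimal, hedged sketch using scikit-learn on synthetic data; the feature distributions and the two groups are invented for illustration and merely stand in for, say, portrait images with different skin tones.

    # Representation bias in miniature: a classifier trained on data
    # from one group only degrades badly on an underrepresented group.
    # All data are synthetic and purely illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(shift, n=500):
        """Two Gaussian classes; `shift` stands in for group-specific
        differences (e.g., different skin tones in portrait data)."""
        X = np.vstack([
            rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2)),
            rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2)),
        ])
        y = np.array([0] * n + [1] * n)
        return X, y

    X_a, y_a = make_group(shift=0.0)  # the well-represented group
    X_b, y_b = make_group(shift=3.0)  # a group absent from training

    model = LogisticRegression().fit(X_a, y_a)  # trained on group A only

    print("accuracy on group A:", model.score(X_a, y_a))  # high
    print("accuracy on group B:", model.score(X_b, y_b))  # near chance

No prejudiced rule is written anywhere in the code; the skew in the training sample alone produces the disparity, which mirrors PortraitAI’s situation.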

Case 8.2. Questions

  1. Does the case establish an algorithmic bias? Justify.

  2. Is the bias due to the individual prejudice of the developer? Is it an example of a systematic social bias? Or does it demonstrate some other kind of phenomenon?

Key Terms

  • Anti-theory: A philosophical position which rejects the theories of normative ethics. Its supporters believe that the theories of normative ethics cannot do justice to the actual complexity that exists in everyday situations. In particular, their arguments are directed against the more prominent theories of normative ethics, such as utilitarianism and Kant’s deontological ethics.

  • Applied ethics: Applied ethics is a branch of ethics. With its focus on the practical problems of life, it is also known as practical ethics. It covers ethical issues in specific areas of human activity, e.g. in medicine.

  • Big data: Big data refers to massive datasets of great variety and complexity, typically characterized by huge volume, variety, and velocity.

  • Bottom-up approach: An approach which starts from a given ethically problematic situation, with knowledge and an accurate description of its details, and derives from it an ethical judgment, which may be confined to the case at hand or may be a defeasible universal generalization. It is also called the inductivist approach.

  • Casuistry: Casuistry in ethics is case-based reasoning.

  • Data commons: A data commons is an open repository of data in which data are aggregated from various sources into a unified database.

  • Data principal: The person whose data is in question.

  • Data science: Data science is a multidisciplinary field which aims to discover insights from large sets of raw or unstructured data.

  • Deductivist approach: An approach which follows the pattern of deductive logic, moving always from general principles to a specific case.

  • Digital addiction: Digital addiction is a kind of Internet addiction. It is a problematic relationship of a human with digital technology, resulting in obsessive, compulsive, impulsive, and hasty digital behavior.

  • Digital divide: A digital divide is a new form of inequality among people in a country in terms of their unequal access to, knowledge of, and use of the Internet and other related digital technologies.

  • DNA: DNA is the abbreviated form of deoxyribonucleic acid. It is the hereditary material in humans and practically all other organisms. It forms a double helix structure which carries the genetic instructions for the development, functioning, and reproduction of an organism.

  • Emotion AI: Emotion AI, or Affective Computing, refers to AI which detects and interprets human emotions. Using cameras or images together with other technology, emotion AI tries to capture human emotions from facial expressions, body language, vocal intonations, and other cues.

  • Eugenics: Eugenics refers to a belief system and practices which advocate improving the genetic quality of humans by selective breeding, either by encouraging practices that propagate inheritable ‘desirable human traits’ or by discontinuing inheritable ‘undesirable human traits’ through sterilization and other methods.

  • Fake news: Fake news is a news story, article, or video which contains fabricated, false, or misleading information but is presented as fact or as genuine news.

  • Federated Learning: Federated learning is a machine learning technique which trains an algorithm while keeping the training dataset decentralized. Instead of the usual way of putting all data on a centralized cloud server, it trains on the local data generated by the user’s history on a particular device, such as a mobile phone. (A minimal code sketch follows this list.)

  • Filter bubble: A filter bubble is a state of intellectual isolation which can occur in personalized searches when websites use algorithms to guess what the user would like to see, based on information about the user such as location and previous search history, and display only those items to the user.

  • Gaming addiction: Addiction to playing online games in a manner which disturbs and interferes with the normal daily activities of the person. It is considered a disorder.

  • Gender Digital Divide: A form of inequality among the population of a country along gender lines, in terms of access to, affordability of, knowledge of, and use of the Internet and other digital technologies.

  • Gene therapy: Gene therapy is a medical technique which modifies a person’s genes to treat or cure a disease. It can either replace a disease-causing gene with a healthy substitute gene or inactivate a malfunctioning gene.

  • Genome editing: A specialized group of engineering technologies in which DNA is replaced, added, deleted, or modified in the genome of a living organism. It is alternatively called gene editing or genome engineering.

  • Inductivist approach: It refers to the way inductive arguments work: from particular cases to an inductive generalization.

  • Internet addiction: Excessive use of and obsession with the Internet which interferes with one’s normal daily activities.

  • Machine learning: A field of study in AI in which computer algorithms are developed to enable a machine to learn as a human does.

  • Manipulation: Manipulation is covertly influencing an individual’s free decision-making process through suggestion and persuasion. It is not coercive.

  • Media literacy: Media literacy is the ability to be a critical thinker with regard to information and media, so that one does not fall easy prey to false information and undue persuasion.

  • Micro-targeting: It is a marketing strategy which uses consumer data and demographic data to identify specific individuals, or very specific groups, to influence their opinions, thoughts, and behavior.

  • Predatory advertising: Predatory advertising is a kind of advertising in which the advertiser targets vulnerable people or populations and exploits that vulnerability to sell a product.

  • Principlism: Principlism refers to an approach in biomedical ethics which uses four ethical principles as basic: non-maleficence, beneficence, justice, and respect for autonomy.

  • Purpose Limitation: A principle in data sharing and data selling which states that data collected for a specific purpose should not be used for a new, incompatible purpose.

  • Recommendation engine: A recommendation engine is AI-enabled software which analyzes and filters data to discover certain patterns in it. Based on those probabilistic patterns, it suggests to a website user options the user may be interested in. (A minimal code sketch follows this list.)

  • Reflective equilibrium: A method in applied ethics. It refers to a deliberative process in which we reflect on our beliefs and justifications about an issue and revise them, seeking an overall coherence with our other relevant beliefs and judgments about similar and related issues. The process stops when a reflective equilibrium, i.e. overall coherence as an outcome of reflection, is reached.

  • Strict liability: The Strict Liability Principle is a principle in law which holds that someone can be held liable for the consequences of an action even if he or she is not at fault, and even if there was no criminal intent or intent to do harm in performing the action.

  • Synthetic data: Synthetic data are created artificially by machine algorithms, not generated by actual events in the real world.

  • Top-down approach: The top-down approach is the deductivist approach in applied ethics. It follows the pattern of deductive logic in an argument: from the general to a particular case.

  • Value pluralism: Value pluralism is the concept that more than one set of values can exist in the same society. Each set may be equally correct and basic, even if the sets conflict with each other.

  • Vulnerable populations: Vulnerable populations are populations with an increased risk of harm.
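
As promised in the Federated Learning entry above, here is a minimal Python sketch of the federated averaging idea: each device fits the shared model on its own private data, and only model parameters, never the raw data, travel to the server, which averages them. The one-dimensional linear model and all the numbers are illustrative assumptions, not a production algorithm.

    # Federated averaging (FedAvg) in miniature: each "device" runs a
    # few local gradient steps on its private data; the server only
    # ever sees and averages model parameters. Toy 1-D linear model.
    import numpy as np

    rng = np.random.default_rng(1)

    # Private data on three devices; the true relation is y = 2x + noise.
    devices = []
    for _ in range(3):
        x = rng.normal(size=50)
        y = 2.0 * x + rng.normal(scale=0.1, size=50)
        devices.append((x, y))

    w = 0.0  # global model parameter, broadcast to every device
    for round_number in range(20):
        local_weights = []
        for x, y in devices:
            w_local = w
            for _ in range(5):  # local training: raw data never leaves
                grad = 2.0 * np.mean((w_local * x - y) * x)
                w_local -= 0.1 * grad
            local_weights.append(w_local)
        w = float(np.mean(local_weights))  # server averages parameters

    print(f"learned weight: {w:.3f} (true value 2.0)")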
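
Likewise, a minimal sketch of a recommendation engine, as referenced in its entry above: a toy user-based collaborative filter that recommends items rated highly by the most similar other user. The ratings matrix, users, and the rating threshold are invented for illustration; real engines combine many such signals probabilistically.

    # Toy user-based collaborative filtering: recommend what the most
    # similar other user rated highly. Names and ratings are invented.
    import numpy as np

    items = ["A", "B", "C", "D"]
    ratings = np.array([
        [5, 4, 0, 0],  # user 0 (0 means unrated)
        [4, 5, 5, 0],  # user 1
        [0, 1, 5, 4],  # user 2
    ])

    def cosine(u, v):
        """Cosine similarity between two rating vectors."""
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    target = 0  # recommend for user 0
    similarities = [cosine(ratings[target], ratings[u]) if u != target
                    else -1.0 for u in range(len(ratings))]
    nearest = int(np.argmax(similarities))  # most similar other user

    # Suggest items the neighbour rated highly and the target has not seen.
    suggestions = [items[i] for i in range(len(items))
                   if ratings[target, i] == 0 and ratings[nearest, i] >= 4]
    print(f"user {target}: consider {suggestions}")  # -> ['C']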

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Chakraborti, C. (2023). Applied Ethics: AI and Ethics. In: Introduction to Ethics. Springer, Singapore. https://doi.org/10.1007/978-981-99-0707-6_8
