
Normative ethics, human rights, and artificial intelligence

  • Original Research

Abstract

At some point in the future, nearly all jobs currently performed by humans may be performed by autonomous machines using artificial intelligence (AI). There is little doubt that this shift will increase precision and comfort and save time, but it also introduces many ethical, social, and legal difficulties. Because machines will be performing tasks that humans once performed, they cannot be exempted from the ethical principles that humans follow. However, because digital machines operate only on 0s and 1s, encoding complex philosophical ideas in binary would be an arduous task. These difficulties offer an opportunity to revisit some of the basic and time-tested normative moral theories advanced by modern philosophers. These moral philosophies could offer significant advantages to the main players in AI, namely producers and consumers: customers could use them to inform purchase decisions about AI machines, while manufacturers could use them to write sound ethical algorithms for those machines. To handle any ethical difficulties that may develop from the use of these machines, the manuscript summarises the important and pertinent normative theories and arrives at a set of principles for writing algorithms for the manufacture and marketing of artificially intelligent machines. These normative theories are simple to understand and apply, and they do not require a deep understanding of difficult philosophical or religious notions: they hold that right and wrong may be determined by reasoning alone, and that arriving at a logical conclusion does not necessitate a thorough grounding in philosophy or religion.
Another goal of the manuscript is to investigate whether artificial intelligence can be trusted to enforce human rights, and whether it is right to code all AI machines with one uniform moral code, particularly when they will be doing different jobs for different parties. Could the diversity of moral principles be used as a marketing strategy, and could humans be allowed to choose the moral codes for their machines?
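The idea of letting different parties select different moral codes for their machines can be illustrated with a minimal sketch. This is not the authors' method; it is a hypothetical illustration in which each normative theory is a pluggable scoring function, and the names (`Action`, `net_welfare`, `violates_duty`, the two example theories) are assumptions introduced here for clarity only:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Action:
    """A candidate action with a net-welfare estimate and a duty-compliance flag."""
    name: str
    net_welfare: float   # aggregate benefit minus harm (the utilitarian input)
    violates_duty: bool  # breaks a universalizable rule (the deontological input)

def utilitarian(a: Action) -> float:
    # An action is permissible to the degree it maximizes aggregate welfare.
    return a.net_welfare

def deontological(a: Action) -> float:
    # Duty violations are impermissible regardless of outcome.
    return float("-inf") if a.violates_duty else a.net_welfare

# The "moral code" a customer selects is just a key into this registry.
MORAL_CODES: Dict[str, Callable[[Action], float]] = {
    "utilitarian": utilitarian,
    "deontological": deontological,
}

def choose(actions: List[Action], code: str) -> Action:
    """Return the action that the selected moral code ranks highest."""
    return max(actions, key=MORAL_CODES[code])

actions = [
    Action("lie_to_protect", net_welfare=5.0, violates_duty=True),
    Action("tell_truth", net_welfare=2.0, violates_duty=False),
]
print(choose(actions, "utilitarian").name)    # lie_to_protect
print(choose(actions, "deontological").name)  # tell_truth
```

The sketch makes the paper's question concrete: the same machine facing the same two actions selects differently depending on which moral code was installed, which is precisely what would make moral diversity a marketable, and contestable, design choice.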



Funding

No funding has been received for this manuscript.

Author information


Corresponding author

Correspondence to Sanghamitra Choudhury.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Kumar, S., Choudhury, S. Normative ethics, human rights, and artificial intelligence. AI Ethics 3, 441–450 (2023). https://doi.org/10.1007/s43681-022-00170-8

