
Part of the book series: Law, Governance and Technology Series (volume 46)


Abstract

This chapter analyzes the ethical challenges that must be addressed over the course of the AI evolution, both in the present and in the future. It focuses first on the ethical challenges posed by ANI and by AGI/ASI, including the existential ones as well as the social and economic ones. The second sub-part of the chapter presents and briefly comments on some fundamental principles that emerge from the public debate about the ethical regulation of AI. This chapter concludes the first part of the book; on the basis of the impact, the ontological constituents and the ethical principles of AI, the legal analysis takes place in the second part.


Notes

  1. Tavani (2016).
  2. Baum (2018), pp. 566–568.
  3. Hayles (1999).

  4. Harari (2017).

  5. Hooker and Kim (2019), p. 63.
  6. Aghion et al. (2017).
  7. "Men can be distinguished from animals by consciousness, by religion or anything else you like. They themselves begin to distinguish themselves from animals as soon as they begin to produce their means of subsistence, a step which is conditioned by their physical organisation. By producing their means of subsistence men are indirectly producing their actual material life." Marx (1845).
  8. Regarding inequalities and the related ethical challenges, indicative reference can be made to Temkin (1993); Frankfurt (2015).
  9. Polonski (2018).
  10. Wallach and Allen (2009).
  11. Greene et al. (2016), p. 4148.
  12. Greene et al. (2016), p. 4150.
  13. Greene et al. (2016), p. 4147.
  14. Bistarelli et al. (2006), pp. 78–92.
  15. Casey (2017), p. 1362.
  16. Russell, S. (2015, October 25). Big think: moral philosophy will be big in tech. The California Report (Q. Kim, Interviewer).
  17. Anderson et al. (2005).
  18. Moor (2006), pp. 18–21.

  19. Bostrom, Superintelligence: Paths, Dangers, Strategies, pp. 166–169.
  20. Bearne, S. (2016). Plan your digital afterlife and rest in cyber peace. The Guardian. Retrieved from https://www.theguardian.com/media-network/2016/sep/14/plan-your-digital-afterlife-rest-in-cyber-peace; LifeNaut Project. (2017). Retrieved from https://www.lifenaut.com/; Eternime. (2017). Retrieved from http://eterni.me/; Eter9. (2017). Available online at https://www.eter9.com/auth/login. Access date: 10 September 2018.

  21. Imagine, for example, an emulation "going" to work every day, feeling that a long weekend has just ended and that its work is dream-work.
  22. Obviously, there are several in-between scenarios for the role of human intelligence in conjunction with AGI and ASI.

  23. S. Begley, Would You Upload Your Brain to the Cloud?, Mindful (2018, September 6) (https://www.mindful.org/upload-your-brain/).
  24. Winfield (2019), p. 46.
  25. IEEE, Ethically Aligned Design: Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, p. 4.

  26. European Parliament, Report with recommendations to the Commission on Civil Law Rules on Robotics (27 January 2017), para. 10, http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//TEXT+REPORT+A8-2017-0005+0+DOC+XML+V0//EN.

  27. An initiative of Université de Montréal: Montréal Declaration for a Responsible Development of Artificial Intelligence, 2018 Report (2018), pp. 8–17.
  28. "…we can see input data and output data for algorithm-based systems, but we do not really understand what exactly happens in between." Villani (2018), p. 15.
  29. Villani (2018), pp. 6–7.
  30. UNI Global Union, Top 10 Principles for Ethical Artificial Intelligence (http://www.thefutureworldofwork.org/media/35420/uni_ethical_ai.pdf), p. 6.

  31. Bryson and Winfield (2017), p. 118.

  32. Doshi-Velez and Kortz (2017), pp. 2–4.
  33. House of Lords, Select Committee on Artificial Intelligence, Report of Session 2017–19, AI in the UK: ready, willing and able?, para. 94.
  34. Mueller (2016).
  35. UK Government Digital Service and Office for Artificial Intelligence, Guidance: Understanding artificial intelligence ethics and safety (https://www.gov.uk/guidance/understanding-artificial-intelligence-ethics-and-safety#contents, accessed 14-12-2019).
  36. Written evidence from Professor Chris Reed, in House of Lords, Select Committee on Artificial Intelligence, Report of Session 2017–19, paras. 96–98.
  37. The Institute for Ethical AI & Machine Learning, Asilomar Principles; European Parliament, EU guidelines on ethics in artificial intelligence: context and implementation, p. 4.
  38. Indicatively see: Mnih et al. (2015), pp. 529–533.

  39. "A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want. A highly capable decision maker – especially one connected through the Internet to all the world's information and billions of screens and most of our infrastructure – can have an irreversible impact on humanity." Russell, Stuart, "Of Myths and Moonshine." Edge, 2014. https://www.edge.org/conversation/the-myth-of-ai#26015.

  40. Partnership on AI, Our Goals (https://www.partnershiponai.org/about/, accessed 09-12-2019).
  41. Alex Campolo, Madelyn Sanfilippo, Meredith Whittaker, Kate Crawford, AI Now 2017 Report, New York University, December 2017.

  42. UK Government Digital Service and Office for Artificial Intelligence, Guidance. Sustainability obviously constitutes a wider concept, very close to the principle below about the well-being of societies.

  43. T. Metzinger, Ethics washing made in Europe, Der Tagesspiegel (08.04.2019).

  44. Regarding the need for AI ethics now, it is worth revisiting two landmark publications: Vallor (2016); S. Papert, Mindstorms: Children, Computers and Powerful Ideas (1980).

  45. Habermas (1984).
  46. Asimov (1950), pp. 26, 136.
  47. Mokhtarian (2018), pp. 156–159.
  48. Mokhtarian (2018), pp. 159–160.
  49. European Parliament, Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics, pp. 4–7, 15.
  50. Ibid., pp. 7, 15, 22.
  51. Oren Etzioni, How to Regulate Artificial Intelligence (2017, September 1), New York Times (https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html, accessed 10-05-2020).

  52. Vallor (2016), p. 6.
  53. Vallor (2016), pp. 22–32.
  54. Vallor (2016), pp. 51, 64.
  55. Vallor (2016), p. 119.
  56. European Parliament, EU guidelines on ethics in artificial intelligence, Chapter on Robotics.
  57. The Toronto Declaration.
  58. IEEE, Ethically Aligned Design, p. 5.
  59. Joseph Bradley, Joel Barbier, Doug Handler, "Embracing the Internet of Everything To Capture Your Share of $14.4 Trillion", Cisco White Paper, 2013. http://www.cisco.com/c/dam/en_us/about/ac79/docs/innov/IoE_Economy.pdf.

  60. Harrison and Sayogo (2014), pp. 513–525.
  61. Rubinstein (2014), p. 863.
  62. Mayer-Schönberger and Cukier (2013), pp. 52–60.
  63. Regarding such potential risks, it is interesting to note the relevant analyses about predictive analytics and the judicial authority, as well as police enforcement, given that "In relation specifically to justice, predictive justice systems are designed for use by legal departments, insurers (both for their internal needs and for their policyholders) as well as lawyers for them to anticipate the outcome of litigation. Theoretically, they could also assist judges in their decision-making." European Commission for the Efficiency of Justice (CEPEJ) (2018), p. 30, para. 58.
  64. Mayer-Schönberger and Cukier (2013), pp. 52–60; Nguyen et al. (2013).
  65. Mayer-Schönberger and Cukier (2013), pp. 6–7.

  66. Obviously, how we define "public" in such a debate remains an open question.

  67. Greg Conti, Googling Security: How Much Does Google Know About You?, pp. 16–18 (2009).
  68. Ibid., pp. 72–76.
  69. World Economic Forum, Rethinking Personal Data: Strengthening Trust (May 2012), http://www3.weforum.org/docs/WEF_IT_RethinkingPersonalData_Report_2012.pdf, p. 18.
  70. Rubinstein (2014), p. 861.
  71. Bremmer (2017), p. 9.
  72. William A. Carter, Emma Kinnucan, and Josh Elliot, A National Machine Intelligence Strategy for the United States, CSIS and Booz Allen Hamilton, March 2018, p. 33.
  73. Villani (2018), p. 19.
  74. Lex Gill, Research Fellow at Citizen Lab, The AI Shift: Implications For Policymakers, Sarah Villeneuve and Nisa Malli; The Brookfield Institute for Innovation + Entrepreneurship, Ryerson University, May 2018.
  75. Villani (2018), p. 8.

  76. Campolo et al. (2017), p. 14.
  77. Attenberg et al. (2011), pp. 101–155; Beyer et al. (2015), p. 2.
  78. David J. Beymer, Karen W. Brannon, Ting Chen, Moritz A.W. Hardt, Ritwik K. Kumar and Tanveer F. Syeda-Mahmood, "Machine learning with incomplete data sets," U.S. Patent 9,349,105, issued May 24, 2016; Misra et al. (2016), pp. 2930–2939.
  79. Zook et al. (2017), p. e1005399; D. Sculley et al., "Machine Learning: The High Interest Credit Card of Technical Debt," SE4ML: Software Engineering for Machine Learning (NIPS 2014 Workshop), 2014, https://research.google.com/pubs/pub43146.html.
  80. House of Lords, Select Committee on Artificial Intelligence, Report of Session 2017–19, AI in the UK: ready, willing and able?, paras. 108–116.

  81. UNI Global Union, Top 10 Principles for Ethical Artificial Intelligence, p. 8; Google, Artificial Intelligence at Google: Our Principles (https://ai.google/principles/).
  82. IEEE, Ethically Aligned Design, p. 4; The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems (prepared by Amnesty International and Access Now).
  83. IEEE, Ethically Aligned Design, p. 10.
  84. European Parliament, EU guidelines on ethics in artificial intelligence, p. 3.
  85. IEEE, Ethically Aligned Design, p. 10; Google, Artificial Intelligence at Google.
  86. Google, Artificial Intelligence at Google.
  87. Beijing AI Principles.

  88. S. Pichai, AI at Google: our principles (https://blog.google/topics/ai/ai-principles/, accessed 14-12-2019).

  89. Roff (2018), pp. 19–28.
  90. UNI Global Union, Top 10 Principles for Ethical Artificial Intelligence, p. 9.
  91. Professor Richard Susskind, in House of Lords, Select Committee on Artificial Intelligence, Report of Session 2017–19, AI in the UK: ready, willing and able?, para. 64.
  92. House of Lords, Select Committee on Artificial Intelligence.
  93. Dorn et al. (2017), pp. 180–185; Autor, David, and Anna Salomons, "Is automation labor-displacing? Productivity growth, employment and the labor share." Brookings Papers on Economic Activity, 2018. https://www.brookings.edu/wp-content/uploads/2018/03/1_autorsalomons.pdf.

  94. S. D. Baum, On the Promotion of Safe and Socially Beneficial Artificial Intelligence, Global Catastrophic Risk Institute Working Paper 16-1 (29-07-2016) (http://gcrinstitute.org/papers/16-1.pdf, accessed 14-12-2019), p. 3.
  95. Dafoe (2018), pp. 7–8 (https://www.fhi.ox.ac.uk/wp-content/uploads/GovAIAgenda.pdf, accessed 14-12-2019).
  96. DeepMind, Exploring the Real World Impact of AI (https://deepmind.com/about/ethics-and-society, accessed 14-12-2019).
  97. IEEE, Ethically Aligned Design, p. 10.
  98. Bohannon (2015), p. 252.
  99. The Institute for Ethical AI & Machine Learning, Asilomar Principles.

  100. "(1) A corrigible reasoner must at least tolerate and preferably assist the programmers in their attempts to alter or turn off the system. (2) It must not attempt to manipulate or deceive its programmers, despite the fact that most possible choices of utility functions would give it incentives to do so. (3) It should have a tendency to repair safety measures (such as shutdown buttons) if they break, or at least to notify programmers that this breakage has occurred. (4) It must preserve the programmers' ability to correct or shut down the system (even as the system creates new subsystems or self-modifies)." Nate Soares, Benja Fallenstein, Eliezer Yudkowsky and Stuart Armstrong, "Corrigibility." AAAI 2015 Ethics and Artificial Intelligence Workshop, 2015, https://intelligence.org/files/Corrigibility.pdf, p. 2.

  101. Omohundro (2008), pp. 483–492.
  102. John Danaher, The Threat of Algocracy: Reality, Resistance and Accommodation, Philosophy and Technology, January 2016.
  103. Bird and Layzell (2002), pp. 1836–1841.
  104. European Parliament, Artificial Intelligence ante portas: Legal & ethical reflections (http://www.europarl.europa.eu/RegData/etudes/BRIE/2019/634427/EPRS_BRI(2019)634427_EN.pdf, accessed 10-12-2019).

  105. E. Geist & A. John, "How Might Artificial Intelligence Affect the Risk of Nuclear War?" (2018), RAND, p. 16, https://www.rand.org/pubs/perspectives/PE296.html.
  106. Future of Life Institute, Asilomar AI Principles (https://futureoflife.org/ai-principles/, accessed 06-12-2019).
  107. Future of Life Institute, AI Policy Challenges and Recommendations (https://futureoflife.org/ai-policy-challenges-and-recommendations/#Research, accessed 14-12-2019).
  108. "19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities. 20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. 21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact. 22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures. 23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization."

  109. Posner (2000), pp. 1781–1819; Wilson (2013), pp. 307–364.
  110. Zei (2013), pp. 167–204.

References

  • Aghion P, Jones BF, Jones ChI (2017) Artificial intelligence and economic growth. National Bureau of Economic Research working paper. https://www.nber.org/chapters/c14015
  • Anderson M, Anderson S, Armen C (eds) (2005) Machine ethics: papers from the AAAI fall symposium. Technical Report FS-05-06, Association for the Advancement of Artificial Intelligence, Menlo Park, CA
  • Asimov I (1950) I, Robot. Doubleday & Company, Inc, Garden City
  • Attenberg J, Melville P, Provost F, Saar-Tsechansky M (2011) Selective data acquisition for machine learning. In: Cost-sensitive machine learning. CRC Press, pp 101–155
  • Baum SD (2018) Reconciliation between factions focused on near-term and long-term artificial intelligence. AI Soc 33(4):565–572
  • Beyer C, Krempl G, Lemaire V (2015) How to select information that matters: a comparative study on active learning strategies for classification. In: Proceedings of the 15th international conference on knowledge technologies and data-driven business. ACM, p 2
  • Bird J, Layzell P (2002) The evolved radio and its implications for modelling the evolution of novel sensors. In: Proceedings of the 2002 Congress on evolutionary computation, CEC'02, vol 2. IEEE, Honolulu, HI, pp 1836–1841. https://doi.org/10.1109/CEC.2002.1004522
  • Bistarelli S, Pini MS, Rossi F, Venable KB (2006) Bipolar preference problems: framework, properties and solving techniques. In: Recent advances in constraints (CSCLP 2006), LNCS vol 4651. Springer, pp 78–92
  • Bohannon J (2015) Fears of an AI pioneer. Science 349(6245):252
  • Bremmer I (2017) China embraces AI: a close look and a long view. Eurasia Group, December
  • Bryson J, Winfield A (2017) Standardizing ethical design for artificial intelligence and autonomous systems. Computer 50(5):116–119
  • Campolo A, Sanfilippo M, Whittaker M, Crawford K (2017) AI Now 2017 Report. New York University, December

  • Casey B (2017) Amoral machines, or: how roboticists can learn to stop worrying and love the law. Northwest Univ Law Rev 111:1347
  • Dafoe A (2018) AI governance: a research agenda. Future of Humanity Institute, University of Oxford
  • Dorn D, Katz LF, Patterson C, Van Reenen J (2017) Concentrating on the fall of the labor share. Am Econ Rev 107(5):180–185
  • Doshi-Velez F, Kortz M (2017) Accountability of AI under the law: the role of explanation. Berkman Klein Center Working Group on Explanation and the Law, Berkman Klein Center for Internet & Society working paper
  • European Commission for the Efficiency of Justice (CEPEJ) (2018) European ethical charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe
  • Frankfurt H (2015) On inequality. Princeton University Press

  • Greene J et al (2016) Embedding ethical principles in collective decision support systems. In: Proceedings of the AAAI conference on artificial intelligence, vol 30, p 4147
  • Habermas J (1984) The theory of communicative action, volume 1: reason and the rationalization of society. Beacon
  • Harrison T, Sayogo D (2014) Transparency, participation and accountability practices in open government: a comparative study. Gov Inf Q 31:513–525
  • Hayles NK (1999) How we became posthuman: virtual bodies in cybernetics, literature, and informatics. University of Chicago Press
  • Hooker J, Kim T (2019) Ethical implications of the fourth industrial revolution for business and society. In: Business ethics (Business and Society 360, vol 3). Emerald Publishing Limited
  • Marx K (1845) The German Ideology, Part I: Feuerbach. Opposition of the materialist and idealist outlook. A. Idealism and materialism

  • Mayer-Schönberger V, Cukier K (2013) Big data: a revolution that will transform how we live, work and think. John Murray
  • Misra I, Zitnick CL, Mitchell M, Girshick R (2016) Seeing through the human reporting bias: visual classifiers from noisy human-centric labels. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2930–2939
  • Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare MG, Graves A et al (2015) Human-level control through deep reinforcement learning. Nature 518(7540):529–533. https://doi.org/10.1038/nature14236
  • Mokhtarian E (2018) The bot legal code: developing a legally compliant artificial intelligence. Vanderbilt J Entertain Technol Law 21:145
  • Moor JH (2006) The nature, importance, and difficulty of machine ethics. IEEE Intell Syst 21(4):18–21
  • Mueller ET (2016) Transparent computers: designing understandable intelligent systems. Erik T. Mueller, San Bernardino, CA
  • Nguyen C et al (2013) A user-centered approach to the data dilemma: context, architecture, and policy. In: Hildebrandt M et al (eds) Digital enlightenment forum yearbook 2013: the value of personal data
  • Harari YN (2017) Homo Deus: a brief history of tomorrow. Vintage Publishing
  • Omohundro SM (2008) The basic AI drives. In: Wang P, Goertzel B, Franklin S (eds) Artificial General Intelligence 2008: proceedings of the first AGI conference. Frontiers in artificial intelligence and applications 171. IOS, Amsterdam, pp 483–492
  • Polonski V (2018) Mitigating algorithmic bias in predictive justice: 4 design principles for AI fairness. Towards Data Science, November 24, 2018

  • Posner EA (2000) Law and social norms: the case of tax compliance. Va Law Rev 86:1781–1819
  • Roff HM (2018) Advancing human security through artificial intelligence. In: Cummings ML, Roff HM, Cukier K, Parakilas J, Bryce H (eds) Artificial intelligence and international affairs: disruption anticipated. Chatham House, June, pp 19–28
  • Rubinstein IS (2014) Voter privacy in the age of big data. Wisconsin Law Rev 2014:861
  • Tavani H (2016) Ethics and technology, 5th edn. Wiley
  • Temkin L (1993) Inequality. Oxford University Press
  • Vallor S (2016) Technology and the virtues: a philosophical guide to a future worth wanting. Oxford University Press
  • Villani C (2018) For a meaningful artificial intelligence: towards a French and European strategy. French Parliament, March 2018
  • Wallach W, Allen C (2009) Moral machines: teaching robots right from wrong. Oxford University Press
  • Wilson G (2013) Minimizing global catastrophic and existential risks from emerging technologies through international law. Va Environ Law J 31:307–364
  • Winfield A (2019) Ethical standards in robotics and AI. Nat Electron 2:46. https://doi.org/10.1038/s41928-019-0213-6
  • Zei A (2013) Shifting the boundaries or breaking the branches? On some problems arising with the regulation of technology. In: Law and technology. The challenge of regulating technological development. Pisa University Press, Pisa, pp 167–204
  • Zook M, Barocas S, Crawford K, Keller E, Gangadharan SP, Goodman A, Hollander R, Koenig BA, Metcalf J, Narayanan A, Nelson A, Pasquale F (2017) Ten simple rules for responsible big data research. PLoS Comput Biol 13(3):e1005399



Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Tzimas, T. (2021). The Ethics of AI. In: Legal and Ethical Challenges of Artificial Intelligence from an International Law Perspective. Law, Governance and Technology Series, vol 46. Springer, Cham. https://doi.org/10.1007/978-3-030-78585-7_4


  • DOI: https://doi.org/10.1007/978-3-030-78585-7_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-78584-0

  • Online ISBN: 978-3-030-78585-7

  • eBook Packages: Law and Criminology (R0)
