
AI and Constitutionalism: The Challenges Ahead

Chapter in: Reflections on Artificial Intelligence for Humanity

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 12600))


Abstract

The article aims to provide an overview of the principles of constitutionalism that can lead to a human-centered AI. It deals with big data, privacy and consent, profiling, democratic pluralism and equality, providing a few examples of how AI can impact them. On this basis, the article proposes a list of new ‘human’ rights, understood as the rights that humans are recognized as having, in order to promote a constitution-oriented and human-centered AI.


Notes

  1.

    This idea, previously proposed by the Human Rights Council of the United Nations, can be based on four conditions: (a) access by everyone to scientific knowledge and the benefits of science and its applications; (b) opportunities for all to contribute to the scientific enterprise and the freedom needed for scientific research; (c) participation of individuals and communities in information and in decision-making; and (d) an enabling environment fostering the conservation, development and diffusion of science and technology: F. Shaheed, The right to benefit from scientific progress and its applications, Report of the Special Rapporteur in the field of cultural rights, UN, Human Rights Council, 20th session, 14 May 2012 (A/HRC/20/26): https://www.ohchr.org/EN/Issues/CulturalRights/Pages/benefitfromscientificprogress.aspx.

  2.

    Global data are not encouraging from this point of view. In 2017, 82% of the wealth went to just 1% of the population, while the poorest 50% of the population did not benefit from any increase. Regarding 2018, “Wealth is becoming even more concentrated – in 2018 just 26 people owned the same as the 3.8 billion people who make up the poorest half of humanity, down from 43 people the year before”: Oxfam (2019), Public good or private wealth?, Oxford. In 2019, “the world’s billionaires, only 2,153 people, had more wealth than 4.6 billion people. This great divide is based on a flawed and sexist economic system”: Oxfam (2020), Time to care. Unpaid and underpaid care work and the global inequality crisis (https://www.oxfam.org/en/research/time-care).

  3.

    Loi n° 2019-222 du 23 mars 2019 de programmation 2018-2022 et de réforme pour la justice. The French version of the section states as follows: “Les données d'identité des magistrats et des membres du greffe ne peuvent faire l’objet d'une réutilisation ayant pour objet ou pour effet d’évaluer, d’analyser, de comparer ou de prédire leurs pratiques professionnelles réelles ou supposées” (roughly: the identity data of judges and registry members may not be reused with the purpose or effect of evaluating, analysing, comparing or predicting their actual or supposed professional practices).

  4.

    “Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), developed by the company Equivant in 1998, is an algorithm widely used in the United States to make predictions about a defendant’s recidivism risk. COMPAS consists of a 137-item questionnaire which takes note of the defendant’s personal information (such as sex, age, and criminal record) and uses this information to make its predictions. Race is not an item on this survey, but several other items that can be correlated with race are included in the COMPAS risk assessment.”

  5.

    Just to give an example, the journal Artificial Intelligence in Medicine has been published for more than 30 years now, since 1989.

  6.

    This care risk-prediction algorithm is used on more than 200 million people in the U.S. [63,64,65].

  7.

    It is significant that the Italian law explicitly mentions a trust-based relationship: “The relationship of care and trust between patient and physician, based on informed consent in which the patient's decision-making autonomy meets the competence, professional autonomy and responsibility of the physician, is promoted and valued” (art. 1 par. 2). Moreover, the law states that “Communication time between doctor and patient constitutes treatment time” (art. 1 par. 8) [68].

  8.

    See also Chapter 5 in this book.

  9.

    “The robot will be much better at statistical reasoning and less enamored with stories and narratives than people are. The other is that the robot would have much higher emotional intelligence. And the third is that the robot would be wiser. Wisdom is breadth. Wisdom is not having a narrow view; that’s the essence of wisdom. It’s broad framing, and a robot will be endowed with broad framing” [71].

  10.

    “It is impossible to understand how exactly AlphaGo managed to beat the human Go World champion” [81].

  11.

    See, for instance, the suspension of a test in which two chatbots began to communicate in an unintelligible language: Facebook's artificial intelligence robots shut down after they start talking to each other in their own language: “The bizarre discussions came as Facebook challenged its chatbots to try and negotiate with each other over a trade, attempting to swap hats, balls and books, each of which were given a certain value. But they quickly broke down as the robots appeared to chant at each other in a language that they each understood but which appears mostly incomprehensible to humans” [82].

  12.

    Art. 111 of the Italian Constitution, for instance, reads as follows: “All judicial decisions shall include a statement of reasons”.

  13.

    There are several theories about it. Some experts speculate that an AI endowed with such power will be Our Final Invention, either because it will allow us to solve all our problems or because it will destroy us by pursuing goals that simply transcend us [86]. Other experts, e.g. Raymond Kurzweil, Fredric Brown, Irving John Good, and Vernor Vinge, focus on the Singularity, in which genetics, nanotechnology, robotics and AI will allow us to transform ourselves into cyborg beings connected to one another and, through the cloud, with the whole universe.

  14.

    Respectively: Peter Ware Higgs, Nobel Prize in Physics in 2013; Marc Mézard, physicist, director of the École Normale Supérieure in Paris; Giulio Tononi, psychiatrist and neuroscientist, director of the Center for Sleep and Consciousness of the University of Wisconsin; Roberto Cingolani, physicist, former scientific director of the Italian Institute of Technology (IIT) in Genoa; and Daniel Dennett, philosopher [87].

  15.

    Art. 61. “Les lois, les décrets, les jugements et tous les actes publics sont intitulés: Au nom du peuple français, l'an… de la République française.” (“Laws, decrees, judgments and all public acts shall be entitled: In the name of the French people, the year… of the French Republic.”)

  16.

    Art. 25 of the 1993 Act on the Federal Constitutional Court (Bundesverfassungsgerichtsgesetz, BVerfGG), as last amended in October 2017: “The decisions of the Federal Constitutional Court shall be issued ‘in the name of the People’”.

  17.

    See the mentioned Statement on Artificial Intelligence, Robotics and ‘Autonomous Systems’, issued by EGE, 11: “we may ask whether people have a right to know whether they are dealing with a human being or with an AI artefact”.

  18.

    As is well known, this is at the center of the Turing test [99].

  19.

    On the pros and cons of this approach, see [1,2,3,4].

  20.

    In a few specific areas, this ‘distraction’ can have beneficial results [1,2,3,4].

  21.

    ‘Explicability’ could be the fifth bioethical principle, in addition to the four (beneficence, non-maleficence, autonomy, and justice) already indicated [1,2,3,4], see also [5].

  22.

    The Italian Constitution, for instance, states as follows: “Officials and employees of the State and public entities shall be directly liable, under criminal, civil and administrative law, for acts performed in violation of rights” (art. 28).

  23.

    In recital 71, the GDPR states that “The data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention. Such processing includes ‘profiling’ that consists of any form of automated processing of personal data evaluating the personal aspects relating to a natural person, in particular to analyse or predict aspects concerning the data subject's performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements, where it produces legal effects concerning him or her or similarly significantly affects him or her”.

  24.

    A commentary in L.A. Bygrave, EU data protection law falls short as desirable model for algorithmic regulation, in [6, 7]. A similar principle was already provided for by Directive 95/46/EC, art. 15, which, significantly, did not contain the ‘explicit consent’ exception.

  25.

    This risk has been reported both in medicine and in justice. In medicine: “The collective medical mind is becoming the combination of published literature and the data captured in health care systems, as opposed to individual clinical experience” [10, 11]. There is a risk that the legal discourse on damages – it has been said – would be based not “on the courts rationale for individual cases, but instead be a result of pure statistical calculation in relation to the average compensation awarded previously by other courts” [8, 9].

References

  1. McIlwain, C.H.: Constitutionalism: Ancient and Modern. Liberty Fund (2008). https://oll.libertyfund.org/titles/2145

  2. Barber, N.W.: The Principles of Constitutionalism. Oxford University Press, Oxford (2018)

  3. Grimm, D.: Constitutionalism: Past, Present, and Future. Oxford University Press, Oxford (2016)

  4. Ackerman, B.: We the People, vol. 1, Foundations. Harvard University Press (1991); We the People, vol. 2, Transformations. Harvard University Press (1998); We the People, vol. 3, The Civil Rights Revolution. Harvard University Press (2014)

  5. Bellamy, R.: Constitutionalism. Encyclopædia Britannica, 30 July 2019. https://www.britannica.com/topic/constitutionalism

  6. Russell, S., Norvig, P.: Artificial Intelligence: A Modern Approach. Prentice Hall, Upper Saddle River (2020)

  7. Bringsjord, S., Govindarajulu, N.S.: Artificial intelligence. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy, Summer 2020 edn. (2020). https://plato.stanford.edu/archives/sum2020/entries/artificial-intelligence

  8. Executive Summary ‘Data Growth, Business Opportunities, and the IT Imperatives’, The Digital Universe of Opportunities: Rich Data and the Increasing Value of the Internet of Things (2014). https://www.emc.com/leadership/digital-universe/2014iview/index.htm

  9. Dehmer, M., Emmert-Streib, F. (eds.): Frontiers in Data Science. Boca Raton (2017)

  10. Kudina, O., Bas, M.: The end of privacy as we know it: reconsidering public space in the age of Google Glass. In: Newell, B.C., Timan, T., Koops, B.J. (eds.) Surveillance, Privacy, and Public Space, ch. 7. Routledge (2018)

  11. Beatty, J.F., Samuelson, S.S., Sánchez Abril, P.: Business Law and the Legal Environment. Boston, p. 263 (2015)

  12. Gutwirth, S., De Hert, P., Leenes, R.: Data Protection on the Move. Dordrecht (2016)

  13. Plaut, V.C., Bartlett, R.P.: Blind consent? A social psychological investigation of non-readership of click-through agreements. Law Hum. Behav. 36(4), 293–311 (2012)

  14. Lambert, P.: Understanding the New European Data Protection Rules. Taylor and Francis Ltd. (2017)

  15. Breen, S., Ouazzane, K., Patel, P.: GDPR: Is your consent valid? Bus. Inf. Rev. 37(1), 19–24 (2020)

  16. Morsink, J.: The Universal Declaration of Human Rights: Origins, Drafting and Intent. University of Pennsylvania Press, Philadelphia (1999)

  17. Flamigni, C.: Sul consenso sociale informato. BioLaw J. 10(2), 201 (2017)

  18. Lee, J.E.: Artificial intelligence in the future biobanking: current issues in the biobank and future possibilities of artificial intelligence. Biomed. J. Sci. Tech. Res. 7(3), 1 (2018). Fei-Fei Li and John Etchemendy lead the Stanford Institute for Human-Centered AI (HAI)

  19. Calo, R.: Artificial intelligence policy: a primer and roadmap. UC Davis Law Review 51(2), 406 (2017), which lists Google, Facebook, IBM, Amazon, Microsoft, Apple, Baidu, and a few others

  20. Prainsack, B.: Data donation: how to resist the iLeviathan. In: Krutzinna, J., Floridi, L. (eds.) The Ethics of Medical Data Donation. PSS, vol. 137, pp. 9–22. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-04363-6_2

  21. Arato, A.: The Adventures of the Constituent Power, pp. 329–358. Cambridge University Press, Cambridge (2017)

  22. The world’s most valuable resource is no longer oil, but data. The Economist, 6 May 2017

  23. Carrozza, M.C., et al.: Automation and autonomy: from a definition to the possible applications of artificial intelligence. The Ethics and Law of AI, Fondazione Leonardo. Civiltà delle Macchine, 13 (2019). https://fondazioneleonardo-cdm.com/site/assets/files/2450/fle1_booklet_conferenza_eng_gar_311019.pdf

  24. Zuboff, S.: The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Profile Books (2019)

  25. Benkler, Y.: Don’t let industry write the rules for AI. Nature 569, 161 (2019)

  26. von der Leyen, U.: A union that strives for more. My agenda for Europe. https://ec.europa.eu/commission/sites/beta-political/files/political-guidelines-next-commission_en.pdf

  27. Shultz, D.: Could Google influence the presidential election? Science, 25 October 2016. https://www.sciencemag.org/news/2016/10/could-google-influence-presidential-election

  28. How to avoid unlawful profiling – a guide. European Union Agency for Fundamental Rights, 5 December 2018. https://fra.europa.eu/en/news/2018/how-avoid-unlawful-profiling-guide

  29. Mann, M., Matzner, T.: Challenging algorithmic profiling: the limits of data protection and anti-discrimination in responding to emergent discrimination. Big Data & Society (2019). https://doi.org/10.1177/2053951719895805

  30. O’Neil, C.: Weapons of Math Destruction. Crown Books, New York (2016)

  31. Zuiderveen Borgesius, F.J.: Strengthening legal protection against discrimination by algorithms and artificial intelligence. Int. J. Human Rights (2020). https://doi.org/10.1080/13642987.2020.1743976

  32. Quintarelli, S., et al.: Paper on ethical principles. The Ethics and Law of AI, Fondazione Leonardo. Civiltà delle Macchine, p. 34 (2019). https://fondazioneleonardo-cdm.com/site/assets/files/2450/fle1_booklet_conferenza_eng_gar_311019.pdf

  33. European Group on Ethics in Science and New Technologies (EGE): Statement on Artificial Intelligence, Robotics and ‘Autonomous Systems’, chapter on the role of ethical charters in building an international AI framework, Brussels, p. 17, 9 March 2018

  34. Notes from the frontier: modeling the impact of AI on the world economy. McKinsey Global Institute, September 2018. https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-frontier-modeling-the-impact-of-ai-on-the-world-economy

  35. The Future of Jobs report. World Economic Forum (2018). https://www3.weforum.org/docs/WEF_Future_of_Jobs_2018.pdf

  36. Ford, M.: Rise of the Robots: Technology and the Threat of a Jobless Future. New York (2015)

  37. Floridi, L., et al.: AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Mind Mach. 28(4), 691 (2018). https://doi.org/10.1007/s11023-018-9482-5

  38. López Peláez, A. (ed.): The Robotics Divide: A New Frontier in the 21st Century? Springer, Heidelberg (2014). https://doi.org/10.1007/978-1-4471-5358-0

  39. European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their environment. European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe, December 2018. https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c

  40. Ashley, K.D.: Special issue of Artificial Intelligence and Law on artificial intelligence for justice, (1) (2017)

  41. Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age. Cambridge University Press (2017)

  42. CEPEJ: Justice systems of the future. Newsletter no. 16, August 2018. https://rm.coe.int/newsletter-no-16-august-2018-en-justice-of-the-future/16808d00c8

  43. Katz, D.M., Bommarito, M.J., Blackman, J.: A general approach for predicting the behavior of the Supreme Court of the United States. PLoS ONE, 17 April 2017

  44. Angwin, J., Larson, J., et al.: Machine bias. There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica (2016). https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

  45. Hao, K.: This is how AI bias really happens—and why it’s so hard to fix. MIT Technology Review (2019). https://www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/

  46. Polonski, V.: AI is convicting criminals and determining jail time, but is it fair? Annual Meeting of the Global Future Councils of the World Economic Forum, 19 November 2018. https://www.weforum.org/agenda/2018/11/algorithms-court-criminals-jail-time-fair/

  47. Helper, P.: Is AI racist? Machine learning, the justice system, and racial bias. McGill Daily, 3 September 2018. https://www.mcgilldaily.com/2018/09/is-ai-racist/

  48. Hao, K.: AI is sending people to jail—and getting it wrong. MIT Technology Review (2019). https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/

  49. Austin, J.L.: How to Do Things with Words. Urmson, J.O., Sbisá, M. (eds.) Harvard University Press (1962)

  50. Kleinberg, J., et al.: Human decisions and machine predictions. Q. J. Econ. 133(1), 241 (2018)

  51. Zou, J., Schiebinger, L.: AI can be sexist and racist—it’s time to make it fair. Nature 559, 324 (2018)

  52. State v. Loomis, 881 N.W.2d 749, 767 (Wis. 2016)

  53. Israni, E.: Algorithmic due process: mistaken accountability and attribution in State v. Loomis. Harvard Journal of Law & Technology, 31 August 2017. https://jolt.law.harvard.edu/digest/algorithmic-due-process-mistaken-accountability-and-attribution-in-state-v-loomis-1

  54. Garapon, A., Lassègue, J.: Justice digitale. Révolution graphique et rupture anthropologique. PUF, p. 239 (2018)

  55. Donna, M.: AI technology and government decision making – recent Italian rulings. ICLG.com. https://iclg.com/ibr/articles/10731-ai-technology-and-professional-decision-making-recent-italian-rulings

  56. Tribunale Amministrativo Regionale Lazio, decision no. 10964 of 13 September 2019

  57. World Commission on the Ethics of Scientific Knowledge and Technology (COMEST), UNESCO: Report of COMEST on Robotics Ethics, Paris, p. 30, 14 September 2017

  58. Liu, N., et al.: Artificial intelligence in emergency medicine. J. Emerg. Crit. Care Med. 2, 82 (2018)

  59. Stewart, J., Sprivulis, P., Dwivedi, G.: Artificial intelligence and machine learning in emergency medicine. Emerg. Med. Aust. 30(6), 870 (2018)

  60. Council of Europe materials on AI and the control of COVID-19. https://www.coe.int/en/web/artificial-intelligence/ai-covid19

  61. Hashimoto, D., et al.: Artificial intelligence in surgery: promises and perils. Ann. Surg. 268(1), 70 (2018)

  62. Nicholson Price, W.: Big data and black-box medical algorithms. Sci. Transl. Med. (2018)

  63. Obermeyer, Z., et al.: Dissecting racial bias in an algorithm used to manage the health of populations. Science 366(6464), 447–453 (2019)

  64. Benjamin, R.: Assessing risk, automating racism. Science 366(6464), 421–422 (2019)

  65. Vartan, S.: Racial bias found in a major health care risk algorithm. Sci. Am. (2019)

  66. Topol, E.: Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books (2019)

  67. Sparrow, R., Hatherley, J.: High hopes for “deep medicine”? AI, economics, and the future of care. Hastings Center Report, pp. 14–17, January–February 2020

  68. Di Paolo, M., Gori, F., et al.: A review and analysis of new Italian law 219/2017: ‘provisions for informed consent and advance directives treatment’. BMC Med. Ethics 20, 17 (2019). https://doi.org/10.1186/s12910-019-0353-2

  69. Giubilini, A., Savulescu, J.: The artificial moral advisor. The “Ideal Observer” meets artificial intelligence. Philos. Technol. 31(2), 169 (2018)

  70. O’Connell, M.: To Be a Machine. New York (2017)

  71. Pethokoukis, J.: The American Enterprise Institute blog (2018). https://www.aei.org/economics/nobel-laureate-daniel-kahneman-on-a-i-its-very-difficult-to-imagine-that-with-sufficient-data-there-will-remain-things-that-only-humans-can-do/

  72. Mathias, J.N.: Bias and noise: Daniel Kahneman on errors in decision-making. Medium, 17 October 2017. https://natematias.medium.com/bias-and-noise-daniel-kahneman-onerrors-in-decision-making-6bc844ff5194

  73. Guthrie, C., Rachlinski, J.J., Wistrick, A.J.: Inside the judicial mind. Cornell Law Faculty Publications, Paper 814 (2001). https://scholarship.law.cornell.edu/facpub/814

  74. Claybrook, J., Kildare, S.: Autonomous vehicles: No driver…no regulation? Science 361(6397), 36 (2018)

  75. Barbaro, C., Meneceur, Y.: Issues in the use of artificial intelligence (AI) algorithms in judicial systems. In: European Commission for the Efficiency of Justice Newsletter, Council of Europe, no. 16, 3 August 2018

  76. Rosenfeld, A., Zemel, R., Tsotsos, J.K.: The elephant in the room, 9 August 2018. https://arxiv.org/abs/1808.03305

  77. Yang, G.-Z., Dario, P., Kragic, D.: Social robotics—trust, learning, and social interaction. Sci. Robot. 3(21) (2018)

  78. Reyzin, L.: Unprovability comes to machine learning. Nature, 7 January 2019. https://www-nature-com.ezp.biblio.unitn.it/articles/d41586-019-00012-4

  79. Ben-David, S.: Learnability can be undecidable. Nat. Mach. Intell. 1(1), 44 (2019). Gödel and Cohen showed, in a nutshell, that not everything is provable; the authors show that machine learning shares this fate

  80. Knight, W.: The dark secret at the heart of AI. MIT Technol. Rev. 120, 54–61 (2017)

  81. European Group on Ethics in Science and New Technologies (EGE): Statement on Artificial Intelligence, Robotics and ‘Autonomous Systems’, Brussels, p. 6, 9 March 2018

  82. The Independent, 31 July 2017

  83. Brice, J.: Algorithmic regulation on trial? Professional judgement and the authorisation of algorithmic decision making, in [111]

  84. Indurkhya, B.: Is morality the last frontier for machines? New Ideas Psychol. 54, 107–111 (2019)

  85. Brownsword, R.: Law, liberty and technology: criminal justice in the context of smart machines. Int. J. Law Context 15(2), 107–125 (2019)

  86. Barrat, J.: Our Final Invention: Artificial Intelligence and the End of the Human Era. Thomas Dunne Books, New York (2013)

  87. Dennett, D.: Consciousness Explained. Little, Brown and Co., Boston (1991)

  88. McSweeney, T.J.: Magna Carta and the right to trial by jury. Faculty Publications, p. 1722 (2014). https://scholarship.law.wm.edu/facpubs/1722

  89. Sourdin, T., Cornes, R.: Do judges need to be human? In: Sourdin, T., Zariski, A. (eds.) The Responsive Judge: International Perspectives, vol. 67, pp. 87–120. Springer, Heidelberg (2018). https://doi.org/10.1007/978-981-13-1023-2_4

  90. Floridi, L., et al.: AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations, above, 692

  91. Report on Ethics Guidelines for Trustworthy AI. European Commission High-Level Expert Group on AI, April 2019. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

  92. European Commission’s white paper On AI – A European approach to excellence and trust, Brussels, 19 February 2020

  93. Proposals for ensuring appropriate regulation of AI. Office of the Privacy Commissioner of Canada, 13 March 2020

  94. Pajno, A.: Paper on legal principles. The Ethics and Law of AI, Fondazione Leonardo. Civiltà delle Macchine

  95. Brownsword, R.: Law, technology, and society: in a state of delicate tension. Notizie di Politeia 137, 26 (2020)

  96. Santosuosso, A.: The human rights of nonhuman artificial entities: an oxymoron? Jahrbuch für Wissenschaft und Ethik 19(1), 203–238 (2015)

  97. Winfield, A., et al.: Machine ethics: the design and governance of ethical AI and autonomous systems. Proc. IEEE 107(3), 509–517 (2019)

  98. Coeckelbergh, M.: AI Ethics. MIT Press (2020). In Italian: Simoncini, A.: L’algoritmo incostituzionale: intelligenza artificiale e il futuro delle libertà. BioLaw J. 63–89 (2019); Santosuosso, A.: Intelligenza artificiale e diritto. Mondadori Università (2020)

  99. Turing, A.M.: Computing machinery and intelligence. Mind 59, 433 (1950)

  100. Mori, M.: The uncanny valley. Energy 7(4), 33 (1970)

  101. Minato, T., et al.: Evaluating the human likeness of an android by comparing gaze behaviors elicited by the android and a person. Adv. Robot. 20(10), 1147 (2006)

  102. Cheetham, M. (ed.): The Uncanny Valley: Hypothesis and Beyond, eBook (2018)

  103. O’Neill, K.: Should a bot have to tell you it’s a bot? Medium, 21 March 2018. Almost half of the respondents in the Goldsmiths and Mindshare survey said it would feel “creepy” if a bot pretended to be human. https://medium.com/s/story/should-a-bot-have-to-tell-you-its-a-bot-e9fa29f0b9d4

  104. Huijnen, C.A.G.J., Lexis, M.A.S., Jansens, R., de Witte, L.P.: Roles, strengths and challenges of using robots in interventions for children with autism spectrum disorder (ASD). J. Autism Dev. Disord. 49(1), 11–21 (2018)

  105. Beauchamp, T.L., Childress, J.F.: Principles of Biomedical Ethics (1979)

  106. Floridi, L., Cowls, J.: A unified framework of five principles for AI in society. Harvard Data Sci. Rev. 1(1) (2019)

  107. Wachter, S., Mittelstadt, B.: A right to reasonable inferences: re-thinking data protection law in the age of big data and AI. Columbia Bus. Law Rev. 494 (2019)

  108. Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1, 206 (2019)

  109. Andrews, L., et al.: Algorithmic Regulation. King’s College Discussion Paper no. 85, September 2017, London, 26 (2017)

  110. Wachter, S., Mittelstadt, B., Floridi, L.: Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. Int. Data Priv. Law 7(2), 76–99 (2017)

  111. Char, D.S., Shah, N.H., Magnus, D.: Implementing machine learning in health care – addressing ethical challenges. New Engl. J. Med. 378(11), 981 (2018)

  112. Garapon, A., Lassègue, J.: Justice digitale. Révolution graphique et rupture anthropologique, above, 239

  113. Quintarelli, S., et al.: Paper on ethical principles, above, 34

  114. Brownsword, R.: Law, Technology and Society: Re-imagining the Regulatory Environment. Routledge, Abingdon (2019)

  115. Casonato, C.: 21st century biolaw: a proposal. BioLaw J. 2017(1), 81 (2017)

  116. Scherer, M.U.: Regulating artificial intelligence systems: risks, challenges, competencies, and strategies. Harvard J. Law Technol. 29(2), 353 (2016)


Author information


Corresponding author

Correspondence to Carlo Casonato.



Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Casonato, C. (2021). AI and Constitutionalism: The Challenges Ahead. In: Braunschweig, B., Ghallab, M. (eds) Reflections on Artificial Intelligence for Humanity. Lecture Notes in Computer Science(), vol 12600. Springer, Cham. https://doi.org/10.1007/978-3-030-69128-8_9


  • DOI: https://doi.org/10.1007/978-3-030-69128-8_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-69127-1

  • Online ISBN: 978-3-030-69128-8

