
The E.U.’s artificial intelligence act: an ordoliberal assessment

  • Original Research
  • Published in: AI and Ethics

Abstract

In light of the rise of generative AI and recent debates about the socio-political implications of large language models and chatbots, this article investigates the E.U.’s artificial intelligence act (AIA), the world’s first major attempt by a government body to address and mitigate the potentially negative impacts of AI technologies. The article critically analyzes the AIA from a distinct economic ethics perspective, i.e., ‘ordoliberalism 2.0’—a perspective currently lacking in the academic literature. In particular, it evaluates the AIA’s ordoliberal strengths and weaknesses and proposes reform measures that could strengthen the act.


Data availability

Not applicable.

Notes

  1. Some critics have backed their criticism with (a call for) action: Hinton, for example, has resigned from Google and warns of the dangers of AI [9]; Musk et al. have signed an open letter demanding a pause in AI development [10] (see also Gebru et al. [6, 7]); Altman calls for AI (self-)regulation to mitigate the ‘risks of increasingly powerful AI’ [11]; the Center for AI Safety released a statement warning that AI poses a severe ‘risk of extinction’ and could be as deadly as pandemics and nuclear weapons, a statement that received widespread support from leading AI companies and scientists [5, 12]; and other researchers and politicians argue that global governance and an international AI agency (an ‘IAEA for AI’) are needed [13,14,15].

  2. Eucken’s Constituent Principles include (1) competitive market order; (2) primacy of monetary policy/price stability; (3) open markets; (4) private property rights; (5) freedom of contracts; (6) principle of liability; (7) long-term orientation of economic policy; and (8) interdependency of all Constituent Principles [45]. Eucken’s Regulating Principles include (1) correction of market powers; (2) income redistribution; (3) correction of negative external effects; and (4) correction of ‘abnormal supply reactions’ [27, 45].

  3. According to Wörsdörfer [27], ‘ordoliberalism 2.0’ rests on the following principles: competitive economy, open markets, freedom of contract/liability, correction of market power, limiting rent-seeking, regulatory and competition policy, rule of law, freedom from privileges/non-discrimination, subsidiarity, and correction of negative external effects.

  4. The AIA’s strengths include its legally binding, i.e., hard-law, character [81, 82], which marks a welcome departure from existing soft-law AI ethics initiatives [17, 39, 59, 83,84,85,86,87,88,89,90]; its extra-territoriality and possible extension of the ‘Brussels Effect’ [91,92,93]; its ability to address data quality and discrimination risks [75]; and institutional innovations such as the EAIB and publicly accessible logs/databases for AI systems (an essential step in opening up black-box algorithms) [94]. From a (revised) ordoliberal perspective, it is worth pointing out that the AIA attempts to ensure that AI technologies are ‘ethically sound, legally acceptable, socially equitable, and environmentally sustainable, with a[n] [ordoliberal] vision of AI that seeks to support [i.e., serve] the economy, society, and the environment’ [92]. Note that Röpke, Rüstow, and other ordoliberals also believed that the economy is embedded in a higher societal order—‘beyond supply and demand.’ Its primary purpose is to serve the people and society, not vice versa. It is thus seen as a means to an end, not as an end in itself (the ordoliberal end in itself is the so-called ‘vital situation’ or ‘private law society’). They also believed that the economy drains and erodes morality; ‘moral reserves’ thus need to be built outside the economy, i.e., in ‘market-free’ sectors and with the help of ‘vital policy’ [29, 34, 48, 49, 52, 78, 95,96,97,98].

  5. The AIA’s weaknesses relate to its tendency to prioritize economic, business, and innovation concerns over moral ones (i.e., a de-prioritization of human rights) [67, 68, 92, 99,100,101,102], the lack of a clear definition of AI systems (i.e., lack of scope) [80, 84, 94, 101, 103], the flawed risk-based framework (i.e., an incomplete list of prohibited AI systems and under-regulation of non-high-risk AI systems; see the first sketch at the end of these notes) [84, 94, 100, 101, 104,105,106,107,108], and the failure to adequately address the challenges posed by generative AI.

  6. Standardization bodies such as CEN and CENELEC are responsible for drafting harmonized voluntary technical standards, e.g., in the areas of electrical engineering.

  7. Notified bodies such as TÜV and other technical organizations are accredited by member states’ notifying authorities and are responsible for carrying out and verifying conformity assessment procedures.

  8. The AIA, for instance, encourages providers of non-high-risk AI systems to voluntarily apply the mandatory requirements for high-risk AI systems laid out in Title III. It also urges providers to voluntarily commit themselves—via codes of conduct—to environmental sustainability, accessibility for persons with disabilities, stakeholder participation, and team diversity.

  9. Besides input legitimacy (i.e., a lack of stakeholder consultation and participation), those processes might also lack throughput legitimacy (i.e., a lack of accountable and transparent processes) and output legitimacy (i.e., a lack of responsiveness of standards, e.g., to the interests of affected stakeholder groups) [115].

  10. Note that the Parliament has no binding veto power over harmonized standards mandated by the Commission.

  11. Note that Eucken and other ordoliberals saw unions as an essential counterweight to the power of employers [45, 46].

  12. Process policy is rejected for several reasons: Eucken and other ordoliberals consider it a form of ‘privilege-granting policy.’ It is mainly based on ad-hoc and case-by-case decisions and enables arbitrary and selective interventions in the economic ‘game of catallaxy,’ to use Hayek’s [117] term. It thus lacks two crucial features of an ordoliberal economic policy—predictability and long-term orientation. Most importantly, however, it opens the door for special interest groups to exert influence on the legislative decision-making process: process policy is more prone to the power of rent-seeking or lobbying groups, owing to a heavier regulatory load and greater discretionary leeway in decision-making. It thus goes hand in hand with a considerable lack of transparency (many debates and decisions take place behind closed doors) and a lack of accountability and democratic legitimacy, since interest groups represent only a fraction of society and are seldom directly and democratically elected (besides, process policy also tends to weaken or undermine constitutional checks and balances). In sum, this form of particularistic policy jeopardizes the nation’s wealth, by granting costly and exclusive privileges to special interest groups, and undermines personal freedom, by increasing the politico-economic power of rent-seekers.

  13. Note that Sect. 5 mirrors the previous section, i.e., the reform measures introduced in this section address the concerns raised in Sect. 4 in the exact same order (e.g., the paragraph on independent conformity assessment addresses the lack of enforcement concern in the previous section, and so on).

  14. According to AlgorithmWatch [94], it is also hardly justifiable to leave the assessment of societal risks and impacts to corporate (i.e., for-profit) actors and their self-interests (i.e., profit and shareholder value maximization).

  15. Based on the notion of ‘predetermined change,’ anticipated AI system modifications currently do not trigger a new conformity assessment [67].

  16. Stuurman and Lachaud [108] argue for introducing mandatory AI labeling schemes for secure, responsible, and ethical AI systems. They claim that the current CE marking process (alone) is insufficient due to its lack of clarity, trust, monitoring, and transparency. Such labeling schemes could be similar to information or nutrition labels; that is, they would provide information about the goal of the AI system, the data collected and processed, and contact information for sending queries and filing complaints. The AI Ethics Label initiative suggests evaluating six dimensions of AI systems (transparency, accountability, privacy, justice/non-discrimination, reliability, and sustainability) on a scale from A to G, using a graphical design similar to the European energy efficiency label (see the second sketch at the end of these notes). The AI label would utilize ex-ante audits to validate the provided information, and such labels would need to be renewed regularly, given machine-learning progress (see [120, 121] for the foundations).

  17. Some scholars request introducing mandatory algorithmic risk and human rights impact assessments for any AI-based application, not just for high-risk systems, as is currently planned. The risk level could then be decided on a case-by-case basis, with systems either banned or classified as high-risk. Such assessments would also promote the auditability and explainability of AI systems [94, 106].

  18. A related concept—unlawfulness by default [122]—requires changing the burden of proof: the default is unlawfulness or unethicality, and AI providers bear the burden of demonstrating that their AI systems do not cause any harm, such as unfair/discriminatory decisions or inaccurate results, before they are marketed (see the third sketch at the end of these notes).

  19. Critics recommend making the standardization and risk-classifying process more transparent and inclusive, i.e., better representing stakeholder interests and counterbalancing the adverse effects of private rulemaking (and the corresponding power imbalances between AI providers and other [civil society] stakeholders) [82]. This would require, among other things, substantive information rights for affected individuals, public participation rights for citizens (e.g., regarding decisions to amend the list of high-risk systems), and ensuring that not only corporate and expert groups are involved in the standardization and risk-classifying process, namely by actively involving organizations that represent public interests [101].

  20. This is especially important in the context of (real-time) workplace monitoring, the potentially negative impacts of ‘smart manufacturing’ on labor markets (i.e., possible job losses), and other areas that might affect workers’ rights (including freedom/autonomy rights).

  21. For example, it is crucial to provide some form of (minimum) harmonized implementation guidance to member states to prevent uneven or unreliable enforcement at the national level [101].

  22. Moreover, E.U. policy and lawmakers should work towards harmonizing international AI standards and guidelines. Of particular importance in this regard is transatlantic cooperation. Unified—and ideally global—standards for AI technologies would prevent regulatory gaps and ‘forum shopping’ (i.e., companies moving to countries with lower regulatory burdens and compliance costs) and would help create a level playing field with a minimum degree of legal certainty and planning security, as envisioned by ordoliberalism [45, 119].

  23. AlgorithmWatch [94] and others demand the explicit ban of all biometric mass surveillance technologies. According to the organization, the term ‘real-time remote biometric identification systems’ allows too many exceptions and raises serious ethical issues. For instance, it enables indiscriminate (i.e., arbitrarily or discriminatorily targeted) mass surveillance, which is incompatible with fundamental (human) rights and undermines key principles of rule-of-law societies. The organization also urges lawmakers to ensure that the ban applies to all public authorities, to private actors acting on behalf of public authorities, and to ex-post biometric identification systems—not only real-time ones. Lastly, the organization demands closing (some of) the AIA’s loopholes, e.g., by removing the exceptions in Art. 5 [94]. Other researchers recommend expanding the scope of the prohibition on social scoring to private actors, extending the ban on remote biometric identification systems in public spaces to non-law-enforcement public actors, prohibiting the use of remote live biometric categorization systems in public places and the use of emotion recognition systems, and adding biometric categorization systems and emotion recognition systems to the list of high-risk systems [101] [note that some of those concerns have been addressed by the Council and Parliament (see Sect. 3)]. Most importantly, the Commission should be enabled to add AI technologies to the list of prohibited practices or high-risk systems. Here, it is crucial that the process of banning or adding high-risk categories be conducted in an inclusive, transparent, and democratic manner, that is, through robust consultation and stakeholder engagement in which civil society representatives are heard. Also, all systems should be subject to prior independent conformity assessment control [101].
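
The following minimal Python sketches are referenced in notes 5, 16, and 18 above. They are purely illustrative: the example systems, label fields, and evidence categories are assumptions added for exposition, not provisions quoted from the AIA or the cited proposals.

First, a sketch of the four-tier, risk-based architecture criticized in note 5, with hypothetical tier assignments:

     from enum import Enum

     class RiskTier(Enum):
         """The AIA's risk-based architecture, radically simplified."""
         PROHIBITED = "banned outright (cf. Art. 5)"
         HIGH_RISK = "mandatory requirements plus conformity assessment (Title III)"
         LIMITED_RISK = "transparency duties only"
         MINIMAL_RISK = "no obligations; voluntary codes of conduct encouraged"

     # Hypothetical assignments; the critics' point is precisely that the
     # Act's own lists are incomplete and leave non-high-risk systems
     # under-regulated.
     EXAMPLES = {
         "public social scoring system": RiskTier.PROHIBITED,
         "CV-screening tool for hiring": RiskTier.HIGH_RISK,
         "customer-service chatbot": RiskTier.LIMITED_RISK,
         "spam filter": RiskTier.MINIMAL_RISK,
     }

     for system, tier in EXAMPLES.items():
         print(f"{system}: {tier.name} -> {tier.value}")

Second, one possible representation of the nutrition-style label discussed in note 16, grading the six named dimensions from A (best) to G (worst); the validation rule is a hypothetical simplification of the proposed ex-ante audit:

     from dataclasses import dataclass, fields

     GRADES = "ABCDEFG"  # A (best) to G (worst), as on the EU energy label

     @dataclass
     class AIEthicsLabel:
         """A label over the six dimensions named in note 16."""
         transparency: str
         accountability: str
         privacy: str
         non_discrimination: str
         reliability: str
         sustainability: str

         def validate(self) -> None:
             # Format check only; a real ex-ante audit would verify the
             # grades themselves, not just that each is a letter A..G.
             for f in fields(self):
                 grade = getattr(self, f.name)
                 if grade not in GRADES:
                     raise ValueError(f"{f.name}: invalid grade {grade!r}")

     label = AIEthicsLabel("B", "C", "A", "B", "C", "D")
     label.validate()

Third, the burden-of-proof reversal of note 18 (‘unlawfulness by default’) reduces to a simple guard: absent affirmative evidence from the provider, the default answer is ‘not marketable.’ The evidence categories are invented for illustration:

     # Hypothetical evidence a provider must supply ex ante.
     REQUIRED_EVIDENCE = {
         "non_discrimination_audit",
         "accuracy_evaluation",
         "harm_mitigation_plan",
     }

     def may_be_marketed(submitted_evidence: set[str]) -> bool:
         """Lawful only if all required showings have been demonstrated."""
         return REQUIRED_EVIDENCE.issubset(submitted_evidence)

     assert not may_be_marketed(set())          # default: unlawful
     assert may_be_marketed(REQUIRED_EVIDENCE)  # burden discharged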

References

  1. European Parliamentary Research Service (EPRS): General-purpose artificial intelligence (2023). www.europarl.europa.eu/RegData/etudes/ATAG/2023/745708/EPRS_ATA(2023)745708_EN.pdf

  2. Floridi, L.: AI as agency without intelligence: on ChatGPT, large language models, and other generative models. Philos. Technol. 36, Article 15 (2023)

  3. Ajayi, R., Al Shafei, E., Aung, H., Costanza-Chock, S., Dad, N., Hernandez, M., Gebru, T., Geybulla, A., Gonzalez, J., Kee, J., Liu, L., Noble, S., Nyabola, N., Ricaurte, P., Soundararajan, T., Varon, J.: Open letter to news media and policy makers re: tech experts from the global majority (2023). www.freepress.net/sites/default/files/2023-05/global_coalition_open_letter_to_news_media_and_policymakers.pdf

  4. Bender, E., Gebru, T., McMillan-Major, A., Shmitchell, S.: On the dangers of stochastic parrots: Can language models be too big? In: FAccT’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021, pp. 610–623 (2021)

  5. Center for AI Safety: 8 Examples of AI Risk (2023). https://www.safe.ai/ai-risk

  6. Gebru, T., Bender, E., McMillan-Major, A., Mitchell, M.: Statement from the listed authors of stochastic parrots on the ‘AI Pause’ letter (2023). www.dair-institute.org/blog/letter-statement-March2023

  7. Gebru, T., Hanna, A., Kak, A., Myers West, S., Gahntz, M., Solaiman, I., Khan, M., Talat, Z.: Five considerations to guide the regulation of ‘General Purpose AI’ in the E.U.’s AI Act (2023). www.washingtonpost.com/documents/523e5232-7996-47c6-b502-ed5e1a385ea8.pdf?itid=lk_inline_manual_7

  8. The Guardian: AI song featuring fake Drake and Weeknd vocals pulled from streaming services (2023). www.theguardian.com/music/2023/apr/18/ai-song-featuring-fake-drake-and-weeknd-vocals-pulled-from-streaming-services

  9. New York Times: ‘The Godfather of AI’ Leaves Google and Warns of Danger Ahead (2023). www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

  10. Future of Life Institute: Pause giant AI experiments: an open letter (2023). https://futureoflife.org/open-letter/pause-giant-ai-experiments/

  11. The Guardian: OpenAI CEO calls for laws to mitigate ‘risks of increasingly powerful’ AI (2023). www.theguardian.com/technology/2023/may/16/ceo-openai-chatgpt-ai-tech-regulations

  12. Center for AI Safety: Statement on AI Risk (2023). https://www.safe.ai/statement-on-ai-risk

  13. Chowdhury, R.: AI desperately needs global oversight (2023). www.wired.com/story/ai-desperately-needs-global-oversight/

  14. Marcus, G., Reuel, A.: The world needs an international agency for artificial intelligence, say two AI Experts (2023). www.economist.com/by-invitation/2023/04/18/the-world-needs-an-international-agency-for-artificial-intelligence-say-two-ai-experts

  15. United Nations: Secretary-general urges broad engagement from all stakeholders towards united nations code of conduct for information integrity on digital platforms (2023). https://press.un.org/en/2023/sgsm21832.doc.htm

  16. Competition and Markets Authority: AI foundation models: initial review (2023). https://assets.publishing.service.gov.uk/media/64528e622f62220013a6a491/AI_Foundation_Models_-_Initial_review_.pdf

  17. White House: Blueprint for an AI Bill of Rights (2022). www.whitehouse.gov/ostp/ai-bill-of-rights/

  18. White House: Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation That Protects Americans’ Rights and Safety (2023). www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/fact-sheet-biden-harris-administration-announces-new-actions-to-promote-responsible-ai-innovation-that-protects-americans-rights-and-safety/

  19. White House: Statement from Vice President Harris After Meeting with CEOs on Advancing Responsible Artificial Intelligence Innovation (2023). www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/statement-from-vice-president-harris-after-meeting-with-ceos-on-advancing-responsible-artificial-intelligence-innovation/

  20. European Commission: Proposal for a Regulation of the European Parliament and of the Council. Laying Down Harmonized Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (2021). https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF

  21. European Commission: Annexes to the Proposal (2021). https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence

  22. European Commission: White paper on artificial intelligence: a European approach to excellence and trust (2020). https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en

  23. European Commission: Europe fit for the digital age: commission proposes new rules and actions for excellence and trust in artificial intelligence (2021). https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1682

  24. European Commission: Regulatory framework proposal on artificial intelligence (2022). https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

  25. European Commission: Liability rules for artificial intelligence (2022). https://commission.europa.eu/business-economy-euro/doing-business-eu/contract-rules/digital-contracts/liability-rules-artificial-intelligence_en

  26. Wörsdörfer, M.: Ordoliberalism 2.0: towards a new regulatory policy for the digital age. Philos. Manag. 19(2), 191–215 (2020)


  27. Wörsdörfer, M.: Big tech and antitrust: an ordoliberal analysis. Philos. Technol. 35(3), Article 85 (2022)

  28. Wörsdörfer, M.: AI ethics and ordoliberalism 2.0: towards a ‘digital bill of rights.’ Working Paper, University of Maine (2023)

  29. Wörsdörfer, M.: Individual versus regulatory ethics: an economic-ethical and theoretical-historical analysis of ordoliberalism. OEconomia 3(4), 523–557 (2013)


  30. Wörsdörfer, M.: Walter Eucken: foundations of economics. In: Biebricher, T., Nedergaard, P., Bonefeld, W. (eds.) The Oxford Handbook of Ordoliberalism, pp. 91–107. Oxford University Press, Oxford (2022)


  31. Wörsdörfer, M.: The E.U.’s artificial intelligence act—hype or hope? Working Paper, University of Maine (2023)

  32. Oppenheimer, F.: Weder so—noch so. Der Dritte Weg. Protte, Potsdam (1933)

  33. Röpke, W.: Civitas Humana. Rentsch, Erlenbach (1944/1949)

  34. Rüstow, A.: Die Religion der Marktwirtschaft. LIT, Münster (2001)

  35. Feld, L., Köhler, E.: Ist die Ordnungsökonomik zukunftsfähig? zfwu 12(2), 173–195 (2011)

  36. Goldschmidt, N.: Entstehung und Vermächtnis ordoliberalen Denkens. LIT, Münster (2002)

  37. Goldschmidt, N.: Walter Eucken’s place in the history of ideas (2007). www.gmu.edu/centers/publicchoice/HES%202007/papers/6d%20goldschmidt.pdf

  38. Goldschmidt, N., Wohlgemuth, M. (eds.): Grundtexte zur Freiburger Tradition der Ordnungsökonomik. Mohr Siebeck, Tübingen (2008)


  39. Häußermann, J., Lütge, C.: Community-in-the-loop: towards pluralistic value creation in AI, or—why AI needs business ethics. AI Ethics 2, 341–362 (2022)


  40. Vanberg, V.: The Freiburg school: Walter Eucken and ordoliberalism (2004). www.eucken.de/publikationen/04_11bw.pdf

  41. Vanberg, V.: Market and state. J. Inst. Econ. 1(1), 23–49 (2005)


  42. Vanberg, V.: Wettbewerb und Regelordnung. Mohr Siebeck, Tübingen (2008b)

  43. Vanberg, V.: James M. Buchanan’s Contractarianism and Modern Liberalism (2013). www.eucken.de/fileadmin/bilder/Dokumente/DP2013/Diskussionspapier_1304.pdf

  44. Eucken, W.: Die Grundlagen der Nationalökonomie. Springer, Berlin (1950/1965)

  45. Eucken, W.: Grundsätze der Wirtschaftspolitik. Mohr Siebeck, Tübingen (1952/2004)

  46. Eucken, W.: Ordnungspolitik. LIT, Münster (1999)

  47. Eucken, W.: Wirtschaftsmacht und Wirtschaftsordnung. LIT, Münster (2001)

  48. Röpke, W.: Die Gesellschaftskrisis der Gegenwart. Rentsch, Erlenbach (1942)

  49. Röpke, W.: Mass und Mitte. Rentsch, Erlenbach (1950)

  50. Rüstow, A.: Wirtschaftsethische Probleme der sozialen Marktwirtschaft. In: Boarman, P. (ed.) Der Christ und die soziale Marktwirtschaft, pp. 53–74. Kohlhammer, Stuttgart (1955)


  51. Rüstow, A.: Ortsbestimmung der Gegenwart (Dritter Band). Rentsch, Erlenbach (1957)

  52. Böhm, F.: Privatrechtsgesellschaft und Marktwirtschaft. In: Böhm, F. (ed.) Freiheit und Ordnung in der Marktwirtschaft, pp. 105–168. Nomos, Baden-Baden (1966/1980)

  53. Vanberg, V.: Die Ethik der Wettbewerbsordnung und die Versuchungen der sozialen Marktwirtschaft (2008). www.walter-eucken-institut.de/fileadmin/bilder/Publikationen/Diskussionspapiere/08_6bw.pdf

  54. Brennan, G., Buchanan, J.M.: The Reason of Rules. Liberty Fund, Indianapolis (1985/2000)

  55. Buchanan, J.M.: The Limits of Liberty. Liberty Fund, Indianapolis (1975/2000)

  56. Buchanan, J.M., Congleton, R.D.: Politics by Principle. Cambridge University Press, Cambridge (1998)


  57. Buchanan, J.M., Tullock, G.: The Calculus of Consent. Liberty Fund, Indianapolis (1962/1999)

  58. Congleton, R.D.: The contractarian constitutional political economy (2013). http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2327665

  59. High-Level Expert Group on AI: Ethics Guidelines for Trustworthy AI (2019). https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

  60. European Commission: New legislative framework (n.d.). https://single-market-economy.ec.europa.eu/single-market/goods/new-legislative-framework_en

  61. Greenleaf, G.: The ‘Brussels Effect’ of the E.U.’s ‘AI Act’ on data privacy outside Europe. Privacy Laws & Business International Report 171, 1, 3–7 (2021)

  62. Hacker, P., Cordes, J., Rochon, J.: Regulating gatekeeper AI and data: transparency, access, and fairness under the DMA, the GDPR, and beyond (2023). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4316944

  63. Wörsdörfer, M.: The digital markets act and E.U. competition policy: a critical ordoliberal evaluation. Philos. Manag. 22(1), 149–171 (2023)

  64. Wörsdörfer, M.: Digital platforms and competition policy: a business-ethical assessment. J. Mark. Ethics 9(2), 97–119 (2021)


  65. Wörsdörfer, M.: What happened to ‘big tech’ and antitrust? And how to fix them! Philos. Manag. 21(3), 345–369 (2022)


  66. Dheu, O., De Bruyne, J., Ducuing, C.: The European Commission’s Approach To Extra-Contractual Liability And AI—a first analysis and evaluation of the two proposals (2022). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4239792

  67. Mazzini, G., Scalzo, S.: The proposal for artificial intelligence act: considerations around some key concepts (2022). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4098809

  68. Almada, M., Petit, N.: The E.U. AI Act: between product safety and fundamental rights (2023). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4308072

  69. Almeida, D., Shmarko, K., Lomas, E.: The ethics of facial recognition technologies, surveillance, and accountability in an age of artificial intelligence: a comparative analysis of USA, E.U., and U.K. regulatory frameworks. AI Ethics 2, 377–387 (2021)


  70. European Parliament: AI act: a step closer to the first rules on artificial intelligence (2023). www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence

  71. European Parliament: Draft compromise amendments on the draft report (2023). www.europarl.europa.eu/resources/library/media/20230516RES90302/20230516RES90302.pdf

  72. European Parliament: Report on the proposal for a regulation of the European parliament and of the council on laying down harmonized rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (2023). www.europarl.europa.eu/doceo/document/A-9-2023-0188_EN.html#_section2

  73. Council of the European Union: Artificial intelligence act: council calls for promoting safe AI that respects fundamental rights (2022). www.consilium.europa.eu/en/press/press-releases/2022/12/06/artificial-intelligence-act-council-calls-for-promoting-safe-ai-that-respects-fundamental-rights/

  74. Council of the European Union: Artificial intelligence act. General Approach (2022). https://data.consilium.europa.eu/doc/document/ST-14954-2022-INIT/en/pdf

  75. Hacker, P.: A legal framework for AI training data—from first principles to the artificial intelligence act. Law Innov. Technol. 13(2), 257–301 (2021)


  76. Laux, J.: Institutionalized distrust and human oversight of artificial intelligence (2023). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4377481

  77. Röpke, W.: Epochenwende? In: Röpke, W.: Fronten der Freiheit, pp. 167–178. Seewald, Stuttgart (1933/1965)

  78. Röpke, W.: Jenseits von Angebot und Nachfrage. Rentsch, Erlenbach (1958/1961)

  79. European Parliament: MEPs ready to negotiate first-ever rules for safe and transparent AI (2023). https://www.europarl.europa.eu/news/en/press-room/20230609IPR96212/meps-ready-to-negotiate-first-ever-rules-for-safe-and-transparent-ai

  80. Kazim, E., Güçlütürk, O., Almeida, D., Kerrigan, C., Lomas, E., Koshiyama, A., Hilliard, A., Trengove, M., Gilbert, A.: Proposed EU AI act—presidency compromise text. Select Overview and Comment on the Changes to the Proposed Regulation. AI and Ethics (27 June 2022), pp. 1–7 (2022)

  81. Ebers, M.: Regulating AI and robotics: ethical and legal challenges. In: Ebers, M., Navas, S. (eds.) Algorithms and Law, pp. 37–99. Cambridge University Press, Cambridge (2020)


  82. Ebers, M.: Standardizing AI—the case of the European Commission’s proposal for an artificial intelligence act. In: DiMatteo, L., Poncibo, C., Cannarsa, M. (eds.) The Cambridge Handbook of Artificial Intelligence. Global Perspectives on Law and Ethics, pp. 321–344. Cambridge University Press, Cambridge (2022)


  83. Attard-Frost, B., De Los Rios, A., Walter, D.: The ethics of AI business practices: a review of 47 AI ethics guidelines. AI Ethics (2022). https://doi.org/10.1007/s43681-022-00156-6


  84. European Parliamentary Research Service (EPRS): Artificial intelligence act (2022). www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf

  85. Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., Srikumar, M.: Principled artificial intelligence: mapping consensus in ethical and rights-based approaches to principles for AI (2020). https://dash.harvard.edu/bitstream/handle/1/42160420/HLS%20White%20Paper%20Final_v3.pdf

  86. Khanna, R.: Dignity in a Digital Age: Making Tech Work for All of Us. Simon & Schuster, New York (2022)

  87. Leslie, D.: Understanding artificial intelligence ethics and safety. A guide for the responsible design and implementation of AI systems in the public sector (2019). www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf

  88. Leslie, D., Burr, C., Aitken, M., Cowls, J., Katell, M., Briggs, M.: Artificial intelligence, human rights, democracy, and the rule of law: a primer (2021). www.turing.ac.uk/sites/default/files/2021-03/cahai_feasibility_study_primer_final.pdf

  89. Mittelstadt, B.: Principles alone cannot guarantee ethical AI. Nat. Mach. Intell. 1, 501–507 (2019)


  90. Rubenstein, D.: Acquiring ethical AI. Florida Law Rev. 73, 747–819 (2021)


  91. Bradford, A.: The Brussels Effect. Oxford University Press, Oxford (2020)


  92. Floridi, L.: The European legislation on AI: a brief analysis of its philosophical approach. Philos. Technol. 34, 215–222 (2021)


  93. Petit, N.: Big Tech and the Digital Economy. Oxford University Press, Oxford (2020)


  94. AlgorithmWatch: Draft AI Act: EU needs to live up to its own ambitions in terms of governance and enforcement (2021). https://algorithmwatch.org/en/wp-content/uploads/2021/08/EU-AI-Act-Consultation-Submission-by-AlgorithmWatch-August-2021.pdf

  95. Röpke, W.: Ist die deutsche Wirtschaftspolitik richtig? In: Ludwig Erhard Stiftung (ed.). Grundtexte zur Sozialen Marktwirtschaft. Fischer, Stuttgart, pp. 49–62 (1950/1981)

  96. Rüstow, A.: Das Versagen des Wirtschaftsliberalismus. Metropolis, Hamburg (1945/2001)

  97. Rüstow, A.: Wirtschaft als Dienerin der Menschlichkeit. In: Aktionsgemeinschaft Soziale Marktwirtschaft (ed.). Was wichtiger ist als Wirtschaft. Martin Hoch, Ludwigsburg, pp. 7–16 (1960)

  98. Rüstow, A.: Paläoliberalismus, Kommunismus und Neoliberalismus. In: Greiß, F., Meyer, F. (eds.) Wirtschaft, Gesellschaft und Kultur. Festgabe für Müller-Armack. Duncker & Humblot, Berlin, pp. 61–70 (1961)

  99. Castets-Renard, C., Besse, P.: Ex ante accountability of the AI act: between certification and standardization, in pursuit of fundamental rights in the country of compliance (2022). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4203925

  100. Gstrein, O.: European AI regulation: Brussels effect versus human dignity. Zeitschrift für Europarechtliche Studien 4, 755–772 (2022)


  101. Smuha, N., Ahmed-Rengers, E., Harkens, A., Li, W., MacLaren, J., Piselli, R., Yeung, K.: How the E.U. can achieve legally trustworthy AI: a response to the European Commission’s Proposal for an Artificial Intelligence Act (2021). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3899991

  102. Wachter, S., Mittelstadt, B., Russell, C.: Why fairness cannot be automated: bridging the gap between EU non-discrimination law and AI. Comput. Law Secur. Rev. 41, 1–72 (2021)


  103. Mökander, J., Juneja, P., Watson, D., Floridi, L.: The U.S. algorithmic accountability act of 2022 vs. the E.U. artificial intelligence act: What can they learn from each other? Mind. Mach. 32, 751–758 (2022)


  104. Biber, S.: Machines learning the rule of law (2021). https://verfassungsblog.de/ai-rol/

  105. Ebers, M., Hoch, V., Rosenkranz, F., Ruschemeier, H., Steinrötter, B.: The European Commission’s proposal for an artificial intelligence act—a critical assessment by members of the robotics and AI law society (RAILS). J Multidiscip. Sci. J. 4(4), 589–603 (2021)


  106. European Parliamentary Research Service (EPRS): AI and digital tools in workplace management and evaluation (2022b). www.europarl.europa.eu/RegData/etudes/STUD/2022/729516/EPRS_STU(2022)729516_EN.pdf

  107. Mahler, T.: Between risk management and proportionality: the risk-based approach in the E.U.’s artificial intelligence act proposal. Nordic Yearbook of Law and Informatics 2020–2021 (March 2022), pp. 247–270 (2022)

  108. Stuurman, K., Lachaud, E.: Regulating AI. A label to complete the proposed act on artificial intelligence. Comput. Law Secur. Rev. 44 (April 2022), 1–23 (2022)

  109. Cefaliello, A., Kullmann, M.: Offering false security: how the draft artificial intelligence act undermines fundamental workers rights. Eur. Labor Law J. 13(4), 542–562 (2022)


  110. Laux, J., Wachter, S., Mittelstadt, B.: Trustworthy artificial intelligence and the European Union AI act: on the conflation of trustworthiness and acceptability of risk. Regulation and Governance 6 (February 2023), pp. 1–30 (2023)

  111. Gangadharan, S.P., Niklas, J.: Decentering technology in discourse on discrimination. Inf. Commun. Soc. 22(7), 882–899 (2019)


  112. Micklitz, H.W., Gestel, R.V.: European integration through standardization: how judicial review is breaking down the club house of private standardization bodies. Common Market Law Rev. 50(1), 145–181 (2013)


  113. Schepel, H.: The Constitution of Private Governance. Product Standards in the Regulation of Integrating Markets. Hart, Oxford (2005)

  114. Veale, M., Zuiderveen Borgesius, F.: Demystifying the Draft E.U. Artificial Intelligence Act. Comput. Law Rev. Int. 4, 97–112 (2021)

  115. Laux, J., Wachter, S., Mittelstadt, B.: Three pathways for standardization and ethical disclosure by default under the European Union artificial intelligence act (2023). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4365079

  116. Klump, R., Wörsdörfer, M.: Paternalist economic policies: foundations, implications, and critical evaluations. ORDO Yearb. Econ. Soc. Order 66, 27–60 (2015)


  117. Hayek, F.A.: Law, Legislation, and Liberty. Vol. 1: Rules and Order. Routledge, London (1973)

  118. European Commission: Commission Staff Working Document. Impact Assessment. Annexes (2021). https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements_en

  119. Kop, M.: E.U. Artificial intelligence act: the European approach to AI (2021). https://futurium.ec.europa.eu/sites/default/files/2021-10/Kop_EU%20Artificial%20Intelligence%20Act%20-%20The%20European%20Approach%20to%20AI_21092021_0.pdf

  120. AI Transparency Institute: CareAI: Responsible AI Index (2023). https://aitransparencyinstitute.com/responsible-ai-index-demo/

  121. Thelisson, E., Padh, K., Celis, L.E.: Regulatory mechanisms and algorithms towards trust in AI/ML. In: Proceedings of the IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI), Melbourne, Australia (2017)

  122. Malgieri, G., Pasquale, F.: From transparency to justification: toward ex ante accountability for AI. Brussels Privacy Hub Working Paper 8(33) (2022). https://brusselsprivacyhub.com/wp-content/uploads/2022/05/BPH-Working-Paper-vol8-N33.pdf


Acknowledgements

The author would like to thank two anonymous reviewers for their invaluable constructive feedback and criticism. They helped to improve the article significantly. The usual caveats apply.

Funding

The author declares that no funds, grants, or other support were received during the preparation of this manuscript.

Author information

Authors and Affiliations

Authors

Contributions

MW is the sole author of the manuscript and, therefore, responsible for conceptualization, writing, reviewing, editing, etc.

Corresponding author

Correspondence to Manuel Wörsdörfer.

Ethics declarations

Conflict of interest

The author has declared no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Wörsdörfer, M. The E.U.’s artificial intelligence act: an ordoliberal assessment. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00337-x

