AI ethics and ordoliberalism 2.0: towards a ‘Digital Bill of Rights’

  • Original Research
  • Published in: AI and Ethics


Abstract

This article analyzes AI ethics from a distinct business ethics perspective, i.e., ‘ordoliberalism 2.0.’ It argues that the ongoing discourse on (generative) AI relies too heavily on corporate self-regulation and voluntary codes of conduct and thus lacks adequate governance mechanisms. To address these issues, the paper suggests not only introducing hard-law legislation with a more effective oversight structure but also merging existing AI guidelines with an ordoliberal-inspired regulatory and competition policy. This link between AI ethics, regulation, and antitrust has not yet been adequately discussed in the academic literature or beyond. The paper thus closes a significant gap and adds to the predominantly legal-political and philosophical discourse on AI governance. Its research questions and goals are twofold: first, it identifies ordoliberal-inspired AI ethics principles that could serve as the foundation for a ‘digital bill of rights’; second, it shows how those principles could be implemented at the macro level with the help of ordoliberal competition and regulatory policy.


Data availability

Not applicable.


Notes

  1. Other notable examples include the Future of Life Institute’s ‘Asilomar AI Principles’ (2017), UNI Global Union’s ‘Top 10 Principles for Ethical AI’ (2017), Council of Europe’s ‘European Ethical Charter on the Use of AI in Judicial Systems’ (2018), European Commission’s ‘AI for Europe’ (2018), Germany’s ‘AI Strategy’ (2018), ‘Beijing AI Principles’ (2019), G20’s ‘AI Principles’ (2019), High-Level Expert Group on AI’s ‘Ethics Guidelines for Trustworthy AI’ and ‘Policy and Investment Recommendations’ (2019), IEEE Global Initiative’s ‘Ethically Aligned Design’ (2019), OECD’s ‘Principles on AI’ (2019), Global Partnership on AI (2020), E.U.’s ‘White Paper on AI’ (2020), U.K.’s ‘National AI Strategy’ (2021), E.U.’s ‘Proposal for a Regulation on a European Approach to Artificial Intelligence’ (2021), and Khanna’s ‘Internet Bill of Rights’ (2022). Furthermore, several (inter-)national standardization efforts are underway, e.g., by standard-developing organizations such as ISO, IEC, NIST, CEN, and CENELEC [3, 73, 79, 120].

  2. I.e., respect for human rights, data protection and the right to privacy, harm prevention and beneficence, non-discrimination and freedom of privileges, fairness and justice, transparency and explainability of AI systems, accountability and responsibility, democracy and the rule of law, and environmental and social sustainability.

  3. O’Neil [102] classifies AI systems as ‘weapons of math destruction’ since they negatively impact the marginalized and vulnerable parts of society, i.e., low-income people and ethnic minorities. That is, they often lead to more discrimination, racism, and prejudices, e.g., due to biased software and data, thereby increasing inequality, deepening the social divide, and negatively impacting democracy and the rule of law. AI systems also reinforce negative feedback loops and vicious circles, e.g., in the form of poverty traps. Lastly, victims of AI discrimination and racism have (almost) no means to file a complaint and mitigate their harms and adverse impacts [27, 78].

  4. The main reasons for algorithmic biases include the poor selection of training data, especially unrepresentative or incomplete data sets (e.g., relying only on white male U.S. population data or having other cultural or ethnic biases); predictions based on too little data, and thus the impossibility of generalization; flawed correlations or disregard of the underlying causation; and a lack of diversity among AI developers and data scientists. Note that most computer science teams are dominated by white male Westerners aged 20–40 (so-called ‘male AI’); not adequately represented, however, are BIPOC, women, disabled and elderly people, and people from developing countries. Also note that AI technologies are social artifacts that embed and project AI developers’ choices, biases, and values; that is, the personal beliefs, opinions, and prejudices, as well as the stereotypes and societal biases, of computer scientists play a significant role and are reflected in algorithms (i.e., ‘bias in, bias out’) [18, 21, 24, 27, 86, 115].

  5. Net neutrality is thus also crucial to realize the ordoliberal concept of ‘justice of the starting conditions’ [141], discussed in the next section.

  6. Ordoliberals consider science and academics (i.e., ‘clercs’) a potential ordering power in society [42, 112]. In the context of AI, academic researchers play an essential role in spreading digital literacy and user awareness; moreover, they bear special responsibilities for addressing and mitigating the societal impacts of AI technologies and promoting the common good.

  7. According to Fjeld et al. [61], professional responsibility includes accuracy, responsible design, consideration of long-term effects, multi-stakeholder collaboration, and scientific integrity (i.e., following corporate or professional codes of ethics and standards, such as a Hippocratic oath for data scientists and computer professionals).

  8. I.e., private property is linked to, and has to serve, the public good [42]; on the social commitments of property owners, see [11, 43] (there, Eucken defines private property as an essential instrument in both economic and social terms and stresses the socio-economic and political responsibilities of entrepreneurs).

  9. Novelli et al. [101] distinguish between various accountability conditions (authority recognition, interrogation, and limitation of power), features (context, range, agent, forum, standard, process, and implications), and goals (compliance, report, oversight, and enforcement), as well as between proactive (‘accountability as virtue’ with the intent to prevent failures) and reactive accountability (‘accountability as a mechanism’ to redress failures).

  10. Other points of criticism include the AIA’s tendency to prioritize economic, business, and innovation over moral concerns (i.e., the de-prioritization of human rights), the lack of a clear definition of AI systems (i.e., a lack of scope), the flawed risk-based framework (i.e., an incomplete list of prohibited AI systems and under-regulation of non-high-risk AI systems), and the failure to adequately address the challenges posed by generative AI, such as chatbots and deepfakes [59, 63, 65].

  11. Note that Eucken and other ordoliberals saw unions as an essential counterweight to the power of employers [35, 36, 42].

  12. Experts recommend making the risk classification and standardization process more inclusive and transparent [31]. This would require substantive information rights for affected individuals, adding public participation rights for citizens, and ensuring that not only corporate and expert groups are involved in the classification and standardization process by actively involving organizations that represent public interests [121]. It might also be worth exploring whether the EAIB’s responsibilities could be expanded. Currently, the board serves in an advisory role, and critics claim that its tasks should be amended to include investigatory and regulatory powers. Furthermore, it could be transformed into a stakeholder forum to overcome the previously mentioned issues of lack of consultation, participation, and stakeholder dialogue [22, 23]. Besides reforming the AIA, democratic accountability and judicial oversight require an ordoliberal-inspired competition policy and a strengthening of the DMA (see below).

  13. Note that an ordoliberal-inspired social and environmental market economy would mandate the internalization of adverse external effects (e.g., via carbon pricing or emissions trading schemes) and would thus help reduce energy usage, including that of AI systems, and promote green environments.


References

  1. Ajayi, R., Al Shafei, E., Aung, H., Costanza-Chock, S., Dad, N., Hernandez, M., Gebru, T., Geybulla, A., Gonzalez, J., Kee, J., Liu, L., Noble, S., Nyabola, N., Ricaurte, P., Soundararajan, T., Varon, J.: Open Letter to News Media and Policy Makers re: Tech Experts from the Global Majority (2023).

  2. AlgorithmWatch: Draft AI Act: EU needs to live up to its own ambitions in terms of governance and enforcement (2021).

  3. AlgorithmWatch: Global inventory of AI ethics guidelines (2023).

  4. Almada, M., Petit, N.: The E.U. AI Act: Between Product Safety and Fundamental Rights (2023)

  5. Almeida, D., Shmarko, K., Lomas, E.: The ethics of facial recognition technologies, surveillance, and accountability in an age of artificial intelligence: a comparative analysis of USA, E.U., and U.K. regulatory frameworks. AI Ethics 2, 377–387 (2021)


  6. Attard-Frost, B., De Los Rios, A., Walter, D.: The Ethics of AI Business Practices: A Review of 47 AI Ethics Guidelines. AI Ethics (2022)

  7. Bender, E., Gebru, T., McMillan-Major, A., Shmitchell, S.: On the dangers of stochastic parrots: can language models be too big? FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623 (2021)

  8. Berlin, I.: Four Essays on Liberty. Oxford University Press, Oxford (1969)


  9. Biber, S.: Machines Learning the Rule of Law (2021)

  10. Böhm, F.: Privatrechtsgesellschaft und Marktwirtschaft. In: Böhm, F. Freiheit und Ordnung in der Marktwirtschaft. Baden-Baden, Nomos. 105–168 (1966/1980)

  11. Bonhoeffer Kreis: In der Stunde Null: Die Denkschrift des Freiburger “Bonhoeffer-Kreises.” Mohr, Tübingen (1979)


  12. Bradford, A.: The Brussels Effect. Oxford University Press, Oxford (2020)


  13. Brennan, G., Buchanan, J.M.: The Reason of Rules. Indianapolis, Liberty Fund (1985/2000)

  14. Brey, P., Dainow, B.: Ethics by design and ethics of use in AI and robotics (2020)

  15. Buchanan, J.M.: The Limits of Liberty. Indianapolis, Liberty Fund. (1975/2000)

  16. Buchanan, J.M., Congleton, R.D.: Politics by principle, not interest. Cambridge University Press, Cambridge (1998)


  17. Buchanan, J.M., Tullock, G.: The Calculus of Consent. Indianapolis, Liberty Fund (1962/1999)

  18. Castets-Renard, C., Besse, P.: Ex Ante Accountability of the AI Act: Between Certification and Standardization, in Pursuit of Fundamental Rights in the Country of Compliance (2022)

  19. Cefaliello, A., Kullmann, M.: Offering false security: how the draft artificial intelligence act undermines fundamental workers rights. Eur. Labor Law J. 13(4), 542–562 (2022)


  20. Congleton, R.D.: The Contractarian Constitutional Political Economy. (2013)

  21. Coeckelbergh, M.: AI Ethics. MIT Press, Cambridge (2020)


  22. Council of the European Union: Artificial Intelligence Act: Council Calls for Promoting Safe AI That Respects Fundamental Rights (2022).

  23. Council of the European Union: Artificial Intelligence Act. General Approach (2022).

  24. Cowgill, B., Dell’Acqua, F., Deng, S., Hsu, D., Verma, N., Chaintreau, A.: Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics. EC ’20: Proceedings of the 21st ACM Conference on Economics and Computation, pp. 679–681 (2020)

  25. Daly, A., Hagendorff, T., Li, H., Mann, M., Marda, V., Wagner, B., Wang, W.: AI, governance, and ethics: global perspectives. In: Micklitz, H., Pollicino, O., Reichman, A., Simoncini, A., Sartor, G., De Gregorio, G. (eds.) Constitutional challenges in the algorithmic society, pp. 182–201. Cambridge University Press, Cambridge (2021)


  26. Delecraz, S., Eltarr, L., Becuwe, M., Bouxin, H., Boutin, N., Oullier, O.: Responsible artificial intelligence in human resources technology: an innovative inclusive and fair by design matching algorithm for job recruitment purposes. J. Respons. Technol. 11, 1–8 (2022)


  27. Deutscher Ethikrat: Mensch und Maschine—Herausforderungen durch Künstliche Intelligenz (2023).

  28. Dheu, O., De Bruyne, J., Ducuing, C.: The European Commission’s Approach to Extra-Contractual Liability and AI—A First Analysis and Evaluation of the Two Proposals. (2022)

  29. Dubber, M., Pasquale, F., Das, S. (eds.): The Oxford Handbook of Ethics of AI. Oxford, Oxford University Press (2020)


  30. Ebers, M.: Regulating AI and robotics: Ethical and legal challenges. In: Ebers, M., Navas, S. (eds.) Algorithms and Law, pp. 37–99. Cambridge University Press, Cambridge (2020)


  31. Ebers, M.: Standardizing AI—The Case of the European Commission’s Proposal for an Artificial Intelligence Act. In: DiMatteo, L., Poncibo, C., Cannarsa, M. (eds.) The Cambridge Handbook of Artificial Intelligence. Global Perspectives on Law and Ethics, pp. 321–344. Cambridge University Press, Cambridge (2022)


  32. Ebers, M., Hoch, V., Rosenkranz, F., Ruschemeier, H., Steinrötter, B.: The European commission’s proposal for an artificial intelligence Act—a critical assessment by members of the robotics and AI law society (RAILS). J. Multidisciplin. Scient. J. 4(4), 589–603 (2021)


  33. Eucken, W.: Religion, Wirtschaft, Staat. Die Tatwelt VIII(2), 82–89 (1932)


  34. Eucken, W.: Staatliche Strukturwandlungen und die Krisis des Kapitalismus. Weltwirtschaftliches Archiv XXXVI. 297–321 (1932)

  35. Eucken, W.: Über die Gesamtrichtung der Wirtschaftspolitik. In: Eucken, W.: Ordnungspolitik. Münster, LIT. 1–24 (1946/1999)

  36. Eucken, W.: Über die Verstaatlichung der Privaten Banken. In: Eucken, W.: Ordnungspolitik. Münster, LIT. 38–58 (1946/1999)

  37. Eucken, W.: Das ordnungspolitische Problem. ORDO 1, 56–90 (1948)


  38. Eucken, W.: On the Theory of the Centrally Administered Economy: An Analysis of the German Experiment. Economica, 15(58). 79–100 and 173–193 (1948)

  39. Eucken, W.: Die Wettbewerbsordnung und ihre Verwirklichung. ORDO 2, 1–99 (1949)


  40. Eucken, W.: Die Grundlagen der Nationalökonomie. Berlin, Springer (1950/1965)

  41. Eucken, W.: Deutschland vor und nach der Währungsreform. In: Schneider, J., Harbrecht, W.: (eds.). Wirtschaftsordnung und Wirtschaftspolitik in Deutschland (1933–1993). Stuttgart, Franz Steiner. 327–360 (1950/1996)

  42. Eucken, W.: Grundsätze der Wirtschaftspolitik. Tübingen, Mohr Siebeck (1952/2004)

  43. Eucken, W. Wettbewerb, Monopol und Unternehmer. Bad Nauheim, Vita (1953)

  44. Eucken, W. Ordnungspolitik. Münster, LIT (1999)

  45. Eucken, W.: Wirtschaftsmacht und Wirtschaftsordnung. Münster, LIT (2001)

  46. European Commission: White Paper on Artificial Intelligence: A European Approach to Excellence and Trust (2020).

  47. European Commission: Proposal for a Regulation of the European Parliament and of the Council. Laying Down Harmonized Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (2021).

  48. European Commission: Annexes to the Proposal (2021).

  49. European Commission: Europe Fit for the Digital Age: Commission Proposes New Rules and Actions for Excellence and Trust in Artificial Intelligence (2021).

  50. European Commission: Commission Staff Working Document. Impact Assessment. Annexes (2021).

  51. European Commission: Regulatory Framework Proposal on Artificial Intelligence (2022).

  52. European Commission: Liability Rules for Artificial Intelligence (2022).

  53. European Parliament: AI Act: A Step Closer to the First Rules on Artificial Intelligence (2023).

  54. European Parliament: Draft Compromise Amendments on the Draft Report (2023).

  55. European Parliament: Report on the Proposal for a Regulation of the European Parliament and of the Council on Laying Down Harmonized Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (2023).

  56. European Parliamentary Research Service: Artificial Intelligence Act (2022).

  57. European Parliamentary Research Service: Artificial Intelligence Act and Regulatory Sandboxes (2022).

  58. European Parliamentary Research Service: AI and Digital Tools in Workplace Management and Evaluation (2022).

  59. European Parliamentary Research Service (EPRS): General-Purpose Artificial Intelligence (2023).

  60. Feld, L. Köhler, E.: Ist die Ordnungsökonomik zukunftsfähig? zfwu 12(2). 173–195 (2011)

  61. Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., Srikumar, M.: Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. (2020)

  62. Floridi, L.: The European legislation on AI: a brief analysis of its philosophical approach. Philos. Technol. 34, 215–222 (2021)


  63. Floridi, L.: AI as agency without intelligence: on ChatGPT, large language models, and other generative models. Philos. Technol. 36, Article 15 (2023)

  64. Floridi, L., Holweg, M., Taddeo, M., Silva, J., Mökander, J., Wen, Y.: capAI—A Procedure for Conducting Conformity Assessment of AI Systems in Line with the E.U. Artificial Intelligence Act (2022).

  65. Gebru, T., Hanna, A., Kak, A., Myers West, S., Gahntz, M., Solaiman, I., Khan, M., Talat, Z.: Five Considerations to Guide the Regulation of ‘General Purpose AI’ in the E.U.’s AI Act (2023).

  66. Goldschmidt, N.: Entstehung und Vermächtnis ordoliberalen Denkens. Münster, LIT (2002)

  67. Goldschmidt, N.: Walter Eucken’s Place in the History of Ideas. (2007)

  68. Goldschmidt, N., Wohlgemuth, M. (eds.): Grundtexte zur Freiburger Tradition der Ordnungsökonomik. Tübingen, Mohr Siebeck (2008)


  69. Greenleaf, G.: The ‘Brussels Effect’ of the E.U.’s ‘AI Act’ on Data Privacy Outside Europe. Privacy Laws & Business International Report 171. 1 and 3–7 (2021)

  70. Gstrein, O.: European AI regulation: Brussels effect versus human dignity. Zeitschrift für Europarechtliche Studien 04, 755–772 (2022)


  71. Hacker, P.: A legal framework for AI training data—from first principles to the artificial intelligence Act. Law Innov. Technol. 13(2), 257–301 (2021)


  72. Hacker, P., Cordes, J., Rochon, J.: Regulating Gatekeeper AI and Data: Transparency, Access, and Fairness Under the DMA, the GDPR, and Beyond. (2023).

  73. Hagendorff, T.: The ethics of AI ethics: an evaluation of guidelines. Mind. Mach. 30, 99–120 (2020)


  74. Häußermann, J., Lütge, C.: Community-in-the-loop: towards pluralistic value creation in AI, or—why AI needs business ethics. AI Ethics 2, 341–362 (2022)


  75. Hayek, F.A.: Law, Legislation, and Liberty, Vol. 1: Rules and Order. London, Routledge (1973)

  76. High-Level Expert Group on AI: Ethics Guidelines for Trustworthy AI (2019).

  77. Hine, E., Floridi, L.: The Blueprint for an AI Bill of Rights: In Search of Enaction, At Risk of Inaction. Minds and Machines (2023).

  78. Institute for Human Rights and Business: Data Brokers and Human Rights: Big Data, Big Business (2016).

  79. Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nat Mach Intellig 1, 389–399 (2019)


  80. Kant, I.: Groundwork of the Metaphysics of Morals (GMM). Cambridge, Cambridge University Press (1785/2013)

  81. Kant, I.: The Critique of Practical Reason (CPrR). Cambridge, Cambridge University Press (1788/1997)

  82. Kazim, E., Koshiyama, A.: A high-level overview of AI ethics. Patterns 2(9), 1–12 (2021)


  83. Kazim, E., Güçlütürk, O., Almeida, D., Kerrigan, C., Lomas, E., Koshiyama, A., Hilliard, A., Trengove, M., Gilbert, A.: Proposed E.U. AI Act—presidency compromise text. Select overview and comment on the changes to the proposed regulation. AI and Ethics 1–7 (2022)

  84. Klump, R., Wörsdörfer, M.: On the affiliation of phenomenology and ordoliberalism: links between Edmund Husserl, Rudolf and Walter Eucken. Eur. J. Hist. Econ. Thought 18(4), 551–578 (2011)

  85. Kop, M.: E.U. Artificial Intelligence Act: The European Approach to AI. (2021)

  86. Kuśmierczyk, M.: Algorithmic Bias in the Light of the GDPR and the Proposed AI Act (2022).

  87. Laux, J.: Institutionalized Distrust and Human Oversight of Artificial Intelligence. (2023).

  88. Laux, J., Wachter, S., Mittelstadt, B.: Trustworthy Artificial Intelligence and the European Union AI Act: On the Conflation of Trustworthiness and Acceptability of Risk. Regulation and Governance. 1–30 (2023)

  89. Laux, J., Wachter, S., Mittelstadt, B.: Three Pathways for Standardization and Ethical Disclosure by Default Under the European Union Artificial Intelligence Act. (2023).

  90. Leslie, D.: Understanding Artificial Intelligence Ethics and Safety. A Guide for the Responsible Design and Implementation of AI Systems in the Public Sector. (2019).

  91. Leslie, D., Burr, C., Aitken, M., Cowls, J., Katell, M., Briggs, M.: Artificial intelligence, Human Rights, Democracy, and the Rule of Law: A Primer. (2021).

  92. Mahler, T.: Between risk management and proportionality: the risk-based approach in the E.U.’s artificial intelligence act proposal. Nordic Yearbook of Law and Informatics 2020–2021 March 2022. 247–270 (2022)

  93. Mazzini, G., Scalzo, S.: The Proposal for Artificial Intelligence Act: Considerations Around Some Key Concepts (2022).

  94. Metzinger, T.: Ethics Washing Made in Europe (2019).

  95. Mittelstadt, B.: Principles alone cannot guarantee ethical AI. Nat. Mach. Intellig. 1, 501–507 (2019)


  96. Mökander, J., Floridi, L.: Ethics-based auditing to develop trustworthy AI. Mind. Mach. 31, 323–327 (2021)


  97. Mökander, J., Juneja, P., Watson, D., Floridi, L.: The U.S. Algorithmic Accountability Act of 2022 vs. The E.U. Artificial Intelligence Act: what can they learn from each other? Mind. Mach. 32, 751–758 (2022)


  98. Morley, J., Elhalal, A., Garcia, F., Kinsey, L., Mökander, J., Floridi, L.: Ethics as a service: a pragmatic operationalization of AI ethics. Mind. Mach. 31, 239–256 (2021)


  99. New York Times: When AI Chatbots Hallucinate (2023).

  100. New York Times: What Can You Do When AI Lies About You? (2023).

  101. Novelli, C., Taddeo, M., Floridi, L.: Accountability in Artificial Intelligence: What It Is and How It Works. AI and Society 7 February 2023. 1–12 (2023)

  102. O’Neil, C.: Weapons of Math Destruction. How Big Data Increases Inequality and Threatens Democracy. New York, Crown (2016)

  103. OpenAI: Frontier Model Forum (2023).

  104. OpenAI: GPT-4 System Card. (2023).

  105. Oppenheimer, F.: Weder so—noch so. Der Dritte Weg. Potsdam, Protte (1993)

  106. Pasquale, F.: The black box society. The secret algorithms that control money and information. Harvard University Press, Cambridge (2015)


  107. Petit, N.: Big tech and the digital economy. Oxford University Press, Oxford (2020)


  108. Prabhakaran, V., Mitchell, M., Gebru, T., Gabriel, J. A.: Human rights-based approach to responsible AI. (2022).

  109. Responsible AI Collaborative: AI Incident Database (2023)

  110. Rieder, G., Simon, J., Wong, P.-H.: Mapping the stony road toward trustworthy AI expectations, problems, conundrums. In: Pelillo, M., Scantamburlo, T. (eds.) Machines We Trust. Perspectives on Dependable AI, pp. 27–42. MIT Press, Cambridge (2021)


  111. Röpke, W.: Die Gesellschaftskrisis der Gegenwart. Erlenbach, Rentsch (1942)

  112. Röpke, W.: Civitas Humana. Erlenbach, Rentsch (1944/1949)

  113. Röpke, W.: Mass und Mitte. Erlenbach, Rentsch (1950)

  114. Röpke, W.: Jenseits von Angebot und Nachfrage. Erlenbach, Rentsch (1958/1961)

  115. Rubenstein, D.: Acquiring ethical AI. Florida Law Rev. 73, 747–819 (2021)


  116. Rüstow, A.: Das Versagen des Wirtschaftsliberalismus. Marburg, Metropolis (1945/2001)

  117. Rüstow, A.: Wirtschaftsethische Probleme der sozialen Marktwirtschaft. In: Boarman, P. (ed.) Der Christ und die soziale Marktwirtschaft, pp. 53–74. Kohlhammer, Stuttgart (1955)


  118. Rüstow, A.: Ortsbestimmung der Gegenwart (Dritter Band). Erlenbach, Rentsch

  119. Rüstow, A.: Die Religion der Marktwirtschaft. Münster, LIT (2001)

  120. Smuha, N.: The E.U. approach to ethics guidelines for trustworthy artificial intelligence. Comput Law Rev. Int. 20(4), 97–106 (2019)


  121. Smuha, N., Ahmed-Rengers, E., Harkens, A., Li, W., MacLaren, J., Piselli, R., Yeung, K.: How the E.U. Can Achieve Legally Trustworthy AI: A Response to the European Commission’s Proposal for an Artificial Intelligence Act. (2021)

  122. Spiekermann, S., Winkler, T.: Value-Based Engineering for Ethics by Design. (2020)

  123. Stuurman, K., Lachaud, E.: Regulating AI. A label to complete the proposed act on artificial intelligence. Comput. Law Secur. Rev. 44, 1–23 (2022)


  124. Vanberg, V.: The Freiburg School: Walter Eucken and Ordoliberalism. (2004)

  125. Vanberg, V.: Market and state. J. Inst. Econ. 1(1), 23–49 (2005)


  126. Vanberg, V.: Die Ethik der Wettbewerbsordnung und die Versuchungen der sozialen Marktwirtschaft. (2008)

  127. Vanberg, V.: Wettbewerb und Regelordnung. Tübingen, Mohr Siebeck (2008)

  128. Vanberg, V.: James M. Buchanan’s Contractarianism and Modern Liberalism. (2013).

  129. Veale, M., Zuiderveen Borgesius, F.: Demystifying the draft E.U. Artificial Intelligence Act. Comput. Law Rev. Int. 4, 97–112 (2021)


  130. Wachter, S., Mittelstadt, B., Russell, C.: Why fairness cannot be automated: bridging the gap between E.U. non-discrimination law and AI. Comput. Law Secur. Rev. 41, 1–72 (2021)


  131. Warren, S.D., Brandeis, L.D.: The right to privacy. Harv. Law Rev. 4(5), 193–220 (1890)


  132. Washington Post: Elon Musk’s X is Throttling Traffic to Websites He Dislikes (2023).

  133. Wettstein, F.: Multinational corporations and global justice. Human rights obligations of a quasi-governmental institution. Stanford University Press, Stanford (2009)


  134. Wettstein, F.: Beyond voluntariness, beyond CSR: making a case for human rights and justice. Bus. Soc. Rev. 114(1), 125–152 (2009)


  135. Wettstein, F.: The duty to protect: corporate complicity, political responsibility, and human rights advocacy. J. Bus. Ethics 96(1), 33–47 (2010)


  136. Wettstein, F.: For better or for worse: corporate responsibility beyond “Do No Harm.” Bus. Ethics Q. 20(2), 275–283 (2010)


  137. Wettstein, F.: Silence as complicity: elements of a corporate duty to speak out against the violation of human rights. Bus. Ethics Q. 22(1), 37–61 (2012)


  138. Wettstein, F.: CSR and the debate on business and human rights: bridging the great divide. Bus. Ethics Q. 22(4), 739–770 (2012)


  139. White House: Fact sheet: Biden-Harris administration secures voluntary commitments from leading artificial intelligence companies to manage the risks posed by AI (2023).

  140. White House: Ensuring Safe, Secure, and Trustworthy AI (2023).

  141. Wörsdörfer, M.: Von Hayek and ordoliberalism on justice. J. Hist. Econ. Thought 35(3), 291–317 (2013)

  142. Wörsdörfer, M.: Individual versus regulatory ethics: an economic-ethical and theoretical-historical analysis of ordoliberalism. OEconomia. Hist. Methodol. Philos. 3(4), 523–557 (2013)


  143. Wörsdörfer, M.: Engineering and Computer Ethics. Dubuque, Great River Learning (2018)

  144. Wörsdörfer, M.: Ordoliberalism 2.0: towards a new regulatory policy for the digital age. Philos. Manag. 19(2), 191–215 (2020)


  145. Wörsdörfer, M.: Digital platforms and competition policy: a business-ethical assessment. J. Markets Ethics 9(2), 97–119 (2021)


  146. Wörsdörfer, M.: What happened to ‘big tech’ and antitrust? And how to fix them! Philos. Manag. 21(3), 345–369 (2022)


  147. Wörsdörfer, M.: Big tech and antitrust: an ordoliberal analysis. Philos. Technol. 35(3). Article 85 (2022)

  148. Wörsdörfer, M.: Walter Eucken: Foundations of Economics. In: Biebricher, T., Nedergaard, P., Bonefeld, W. (eds.) The Oxford Handbook of Ordoliberalism, pp. 91–107. Oxford University Press, Oxford (2022)


  149. Wörsdörfer, M.: The Digital Markets Act and E.U. competition policy: a critical ordoliberal evaluation. Philos. Manag. 22(1), 149–171 (2023)


  150. Wörsdörfer, M.: The E.U.’s Artificial Intelligence Act—Hype or Hope? Working Paper, University of Maine (2023)

  151. Wörsdörfer, M.: The E.U.’s Artificial Intelligence Act—An Ordoliberal Assessment. AI and Ethics, 1–16 (Online First) (2023)

  152. Wörsdörfer, M.: Brandeis and Eucken—Two Pioneers of the Modern Big Tech and Antitrust Debate. History of Economic Ideas (forthcoming)

  153. World Economic Forum: Ethics by design: an organizational approach to responsible use of technology (2020).

  154. Wu, T.: Network neutrality, broadband discrimination. J. Telecommun. High Technol. Law 2, 141–179 (2003)


  155. Zou, A., Wang, Z., Kolter, J.Z., Fredrikson, M.: Universal and transferable attacks on aligned language models. (2023)

Funding


The author declares that no funds, grants, or other support were received during the preparation of this manuscript.

Author information

Authors and Affiliations



M.W. is the sole author of the manuscript and, therefore, responsible for conceptualization, writing, reviewing, editing, etc.

Corresponding author

Correspondence to Manuel Wörsdörfer.

Ethics declarations

Conflict of interest

The author has no conflict of interest and no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Wörsdörfer, M. AI ethics and ordoliberalism 2.0: towards a ‘Digital Bill of Rights’. AI Ethics (2023).
