1 Introduction

There is a lot at stake when it comes to the development and governance of Artificial Intelligence (AI). It can either lead our societies into an age of abundance, in which automated labor is coupled with human ingenuity and creativity, or it can lead to a future in which computers make it harder to find jobs, to distinguish truth from falsehood, and to maintain stable democracies. This also raises the question of ethics in the commercialization of AI and of how healthy it is to participate in what many commentators call the ‘AI wars’, i.e., the competitive dynamic in which companies seek to outpace their rivals and consequently do not prioritize AI safety enough [156].

The reasons for AI's growing importance in governance and policy are multifaceted: they lie in its technological pervasiveness [1], its economic impact [152], and the ethical and social considerations it raises [129]. Developing responsible guidelines for AI therefore comes with many challenges. First, there is the need to strike a delicate balance between fostering innovation and maintaining regulatory compliance; this balance is intricate, as it involves encouraging technological advancements while ensuring that these developments adhere to set regulations [69, 99]. Another significant aspect is the borderless nature of AI, which necessitates international collaboration. This collaboration is essential but proves challenging due to the diverse cultural, ethical, and legal standards across different countries; achieving a global consensus on AI guidelines is therefore a complex task, requiring a nuanced understanding of these varied perspectives [48]. Additionally, the rapid pace of technological advancement in AI presents its own set of challenges: the fast-evolving nature of AI technologies often outstrips the speed at which legislation can be developed and implemented, a discrepancy that poses a significant hurdle to creating effective and timely regulatory frameworks [156]. Lastly, the impact of AI varies greatly across sectors, necessitating sector-specific regulations; each industry may require unique regulatory approaches tailored to its particular needs and to the challenges AI poses for it [132, 161], which adds further layers to the already complex task of drafting comprehensive and effective AI guidelines [38]. Whereas some authors fear that market competition in AI could result in a “race to the bottom” [156], the endeavors to regulate AI appear as an attempt to counteract these dynamics and rather resemble a race to the moon, not least because institutions like the EU would like to be the first to establish comprehensive guidelines and thereby become a role model in the field.

Given these complexities, the present paper seeks to discuss the current state of AI regulation and the underlying socioeconomic challenges in terms of policy and governance. Despite AI's significance, there is a notable gap in the literature addressing the socioeconomic dimensions of AI governance: a comprehensive analysis of how AI regulations impact economies globally and of the societal challenges they entail is lacking, as is an analysis of how countries and companies currently deal with governance issues surrounding AI.

Several real-life examples underscore the urgency of this discussion:

  • AI-Generated Music: Platforms using AI to compose music raise questions about copyright, intellectual property rights, and the impact on the music industry's economy. At the same time, one must ask whether it is ethical to train neural networks on a musician’s works without compensation. Perhaps the more pressing questions are how this will disrupt the industry as a whole and how legal boundaries could mitigate these changes.

  • AI-Generated Images: Tools like DALL-E and DeepMind's generative models such as Imagen, which create art and visual content, challenge traditional notions of creativity and ownership in the visual arts, impacting the art market and copyright laws. The challenges here are similar to those raised by AI-generated music.

These examples highlight the immediate need for effective AI policies that consider economic, societal, and ethical dimensions. Although there has been a recent surge in papers and articles discussing the necessity of AI regulations, this is the first one to attempt a global overview and to link it to socioeconomic problems in search of potential solutions. As such, the upcoming chapter will set the stage by introducing the applied paradigm and then describe the regulatory advancements made in different regions. This is followed by a discussion of how businesses have responded to the need for regulation and of whether, by and large, it makes sense to leave the responsibility primarily in governmental or in industrial hands (pros and cons are considered). Finally, as a mere heuristic, an institutional network is suggested that could be further developed in future papers if deemed of value.

2 Conceptual paradigm for the present discussion

A previous treatment of the rapid economic dynamics of AI development provides the basis for the present discussion (for this, see [156]). It emphasizes the accelerated pace of AI development, driven by the quest for technological superiority among leading organizations. This competitive landscape, while fostering innovation, also raises significant concerns about AI alignment, safety, and ethics. It traces the historical evolution of AI, noting a significant acceleration in digital innovations post-2012. The paradigm highlights that the development of modern artificial neural networks has greatly enhanced machine learning capabilities, leading to rapid advancements in AI applications across various domains. This speed, however, brings with it a host of ethical, safety, and governance challenges [5, 13, 17, 19].

Basically, the conceptual paradigm implies that in the rapidly evolving landscape of artificial intelligence, the pace of innovation has outstripped traditional mechanisms of oversight and ethical consideration, creating a pressing need for a more structured approach to governance and regulation. As AI technologies become increasingly integral to various sectors of the economy and society, their potential for both transformative benefits and significant risks becomes more apparent. This dynamic environment calls for a proactive and nuanced approach to regulation, one that balances the promotion of innovation with the imperative to safeguard public interest and uphold ethical standards. The development and deployment of AI systems must be guided by principles that ensure transparency, accountability, and fairness, thereby addressing the dual challenges of maximizing AI's positive impact while mitigating its potential harms. Such a regulatory framework should be adaptable, allowing for the swift evolution of AI technologies, while also being robust enough to address the complexities and uncertainties that accompany AI advancements. Engaging a wide range of stakeholders in the formulation of these governance structures is crucial to ensure they are comprehensive, inclusive, and capable of fostering trust between the public, the technology sector, and regulatory bodies. This approach not only aims to protect against the unintended consequences of AI but also to harness its capabilities for societal good, ensuring that AI development is aligned with human values and contributes to the broader objectives of sustainable and equitable growth [13, 19, 116, 132].

AI governance refers to the frameworks, policies, and mechanisms established to guide the development, deployment, and operation of artificial intelligence systems in a manner that aligns with ethical principles, societal norms, and legal standards. It encompasses a broad range of activities, including setting ethical guidelines, ensuring compliance with laws and regulations, promoting transparency and accountability, and engaging stakeholders in decision-making processes. Governance aims to ensure that AI technologies are developed and used responsibly, maximizing their benefits while minimizing harms and risks to individuals and society at large. This involves a collaborative effort among policymakers, technologists, civil society, and the public to create a regulatory environment that fosters innovation and trust in AI systems. More specifically, there are two types of governance issues discussed in the following sections: (i) AI regulation by the governments of different regions, and (ii) AI governance by the businesses themselves [64, 117, 152].

AI safety engineering, a discipline dedicated to addressing these challenges, identifies three primary areas of concern:

  • AI Misalignment: The risk of AI systems developing goals misaligned with human intentions, potentially leading to harmful outcomes [49, 166].

  • Human Abuse of AI: The potential misuse of AI for harmful purposes by individuals [32, 37, 155].

  • Information Control: The challenge posed by AI's capacity to generate new information, raising issues of explainability and the blurring line between fact and fiction [32, 97, 155].

These concerns highlight the necessity for careful management of AI's rapid evolution, ensuring that safety and ethical considerations keep pace with technological advancements. The paradigm points to the imperative of managing the risks associated with such accelerated AI development, particularly the underrepresentation of AI safety concerns. The European Union's initiatives, funded by the Horizon 2020 program, aim to foster the development of socially acceptable machine learning tools, underlining the importance of trustworthy AI [82, 91, 115, 139].

As claimed in the respective paper, for an AI system to be deemed trustworthy, it must meet specific requirements, including the following (an illustrative sketch follows the list) [156]:

  • Lawfulness: Adherence to laws and regulations [64].

  • Ethicality: Respect for ethical principles and values [139].

  • Robustness: Technical reliability while considering the social environment [21, 50].

  • Privacy and data governance: Ensuring data integrity and restricted access [68].

  • Explainability: Transparency in AI decisions [4, 110].

  • Diversity and fairness: Avoidance of biases and discrimination [133].

  • Societal and environmental wellbeing: Consideration of social and environmental impacts [42].

  • Accountability: Mechanisms to guarantee responsibility and auditability of AI systems [101].
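Purely as an illustration of how such requirements might be operationalized in an internal review, the following Python sketch represents them as a simple audit checklist. The class and field names are hypothetical and do not correspond to any official or standardized assessment scheme.

```python
from dataclasses import dataclass

# Hypothetical checklist with one field per requirement listed above;
# an illustrative sketch, not an official assessment scheme.
@dataclass
class TrustworthinessChecklist:
    lawfulness: bool = False
    ethicality: bool = False
    robustness: bool = False
    privacy_and_data_governance: bool = False
    explainability: bool = False
    diversity_and_fairness: bool = False
    societal_and_environmental_wellbeing: bool = False
    accountability: bool = False

    def open_items(self) -> list[str]:
        """Return the requirements not yet assessed as satisfied."""
        return [name for name, satisfied in vars(self).items() if not satisfied]

# Example: an audit in which only two requirements have been reviewed so far.
audit = TrustworthinessChecklist(lawfulness=True, explainability=True)
print(audit.open_items())  # prints the six requirements still to be assessed
```

Such a checklist obviously cannot capture the substance of each requirement; it merely illustrates that the requirements are meant to be assessed jointly rather than traded off against one another.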

The present paper will build on these foundational concepts, exploring the current state of global developments in AI policy and governance. It aims to delve into the governance structures at both governmental and corporate levels, scrutinizing their effectiveness in addressing the rapid advancements and inherent risks in AI development, and it will eventually arrive at a new proposal for how governments could deal with the problems at hand.

3 Political and legal advancements

3.1 The unprecedented pace

In late October 2023, U.S. President Joe Biden's “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” marked a pivotal move in signaling the will to manage the risks and promises of AI [148]. It aimed to lead America in establishing standards for AI safety and security, protecting privacy, advancing equity and civil rights, and promoting innovation and competition. The order was a commitment to regulating AI's rapid development effectively.

The order directed comprehensive actions to ensure AI systems were safe, secure, and trustworthy before public release. It mandated developers of powerful AI systems to share safety test results with the U.S. government and introduced rigorous standards for AI safety. The order also emphasized the need for privacy-preserving techniques in AI, calling for federal support in accelerating their development and use. It aimed to strengthen research in privacy-preserving technologies and to develop guidelines for federal agencies to protect Americans' privacy in the age of AI. Addressing potential discrimination and bias, the order included actions to provide guidance against algorithmic discrimination, to ensure fairness in the criminal justice system, and to develop best practices for using AI in various societal sectors.

It further outlined steps to advance the responsible use of AI in healthcare and education, ensuring safety and promoting the development of life-saving drugs and educational tools. Recognizing AI's impact on jobs and workplaces, the order called for principles and practices to mitigate harms and maximize benefits for workers, addressing job displacement, labor standards, workplace equity, and data collection concerns. The order aimed to maintain America's leading position in AI innovation by catalyzing research and providing resources for small developers and entrepreneurs, and it included measures to expand the ability of highly skilled immigrants to contribute to the U.S. AI sector. Emphasizing the global nature of AI challenges and opportunities, the order directed efforts to collaborate internationally on AI standards and safe deployment, including engagements to establish international frameworks for AI and to promote AI's responsible development worldwide. Finally, the order sought to ensure responsible AI deployment in government, issuing guidance for AI use by agencies, improving AI procurement, and accelerating the hiring of AI professionals across government departments.

President Biden’s executive order represented a significant step in addressing the rapidly evolving landscape of AI and its implications for governance, policy, and society. It underscored the need for a balanced approach that fosters innovation while ensuring safety, security, and ethical considerations in AI applications. Noting this unprecedented pace of AI development and the need to find suitable governance and policy solutions for safe and trustworthy AI, the following chapters highlight how different countries have dealt with the problem so far, as well as which reactions have been formulated by the industry itself, before eventually arriving at a new proposal that might be worth considering in future discussions.

3.2 Europe

3.2.1 EU-initiatives

In the European Union, the regulatory framework for Artificial Intelligence is primarily shaped by what has been called the EU AI Act, intended to be the world's first comprehensive AI law [39]. Proposed by the European Commission in April 2021, the AI Act categorizes AI systems into various risk levels and regulates them accordingly. The Act is part of the EU's digital strategy, aiming to ensure better conditions for the development and use of AI technologies, for example in improved healthcare, safer transportation, and more efficient manufacturing [78, 107].

The European Parliament emphasizes the importance of AI systems in the EU being safe, transparent, traceable, non-discriminatory, and environmentally friendly. There is a push for a uniform, technology-neutral definition of AI to apply to future systems. This stems from the EU's priority to safeguard individuals from potential harms of AI while fostering its beneficial use [131].

There are several risk classifications in the AI Act [39] (a simplified code sketch of this tiered scheme follows the list):

  • Unacceptable risks: Systems posing a threat to individuals, like cognitive behavioral manipulation or social scoring, are banned. Exceptions may include delayed biometric identification for serious crimes with court approval.

  • High risks: Systems affecting safety or fundamental rights are subdivided into two categories: those used in EU-regulated products (e.g., medical devices) and those in specific areas (e.g., law enforcement or employment). These systems have to undergo rigorous assessment before and during their market presence.

  • Limited risks: These are systems, such as AI chatbots, that have minimal transparency requirements, ensuring users are informed when interacting with AI. Here, the dangers of AI negatively impacting society are low.

  • Minimal risks: Systems posing the least risk, like email spam filters, have no specific obligations under the AI Act.
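To make the risk-based logic more concrete, the following minimal Python sketch maps the four tiers to illustrative obligations. The tier names and example systems follow the summary above, while the class names and obligation strings are simplified, hypothetical stand-ins rather than the legal text of the Act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # e.g., social scoring systems -- prohibited
    HIGH = "high"                  # e.g., AI in medical devices or hiring
    LIMITED = "limited"            # e.g., chatbots -- transparency duties
    MINIMAL = "minimal"            # e.g., spam filters -- no specific obligations

# Hypothetical, simplified mapping of risk tiers to obligations (illustration only).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited (narrow exceptions, e.g., court-approved use)"],
    RiskTier.HIGH: [
        "rigorous assessment before market entry",
        "continued assessment while on the market",
    ],
    RiskTier.LIMITED: ["inform users that they are interacting with AI"],
    RiskTier.MINIMAL: [],
}

def obligations_for(system: str, tier: RiskTier) -> list[str]:
    """Look up the illustrative obligations attached to a system's risk tier."""
    duties = OBLIGATIONS[tier]
    print(f"{system} ({tier.value} risk): {duties or 'no specific obligations'}")
    return duties

obligations_for("email spam filter", RiskTier.MINIMAL)
obligations_for("AI component of a medical device", RiskTier.HIGH)
```

In practice, of course, classification under the Act depends on legal criteria and conformity-assessment procedures rather than on a simple lookup table.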

Beyond these risk tiers, the AI Act also distinguishes three categories of AI systems. General Purpose AI is AI that can be used for various applications. Foundation Models are a subset of General Purpose AI, trained on vast data and adaptable to many tasks; these models carry specific regulatory obligations, including risk management and data governance. Generative AI, like ChatGPT, is also recognized, requiring providers to inform users of AI interaction, prevent illegal content generation, and disclose training data summaries. These classifications reflect the EU's approach to regulating AI based on its functionality and potential impact. Since the technology is evolving, it is possible that chatbots like ChatGPT or Bard will change in their risk classification as presently conceived [58, 59].

The implementation and reception of the EU AI Act have been marked by a complex negotiation process and a range of responses from various stakeholders. The AI Act, approved in draft form by the European Parliament in May 2023, was and still is undergoing a detailed negotiation process known as the "trilogue," involving the European Parliament, the European Council, and the European Commission. This process aims to finalize a comprehensive legal framework that addresses the development and use of AI systems within the EU [56, 142]. A significant outcome of this trilogue process is the potential establishment of a certification regime for high-risk AI systems. However, the classification of AI systems as high-risk has raised concerns and debates. For example, an AI system performing "purely accessory" tasks that meet specific conditions may not be classified as high-risk. These conditions include performing narrow procedural tasks, detecting deviations from decision-making patterns, not influencing critical decisions like loans or job offers, and improving the quality of work. Nevertheless, this approach has sparked concerns among consumer and privacy activists about the autonomy companies have in determining the risk level of their AI systems. The European Commission was therefore tasked with developing a comprehensive list of high-risk and non-high-risk use cases, and it retains the authority to modify the exemptions when AI systems do not pose significant risks yet do not meet the set exemption criteria [66].

The negotiations have also delved into the use of AI in law enforcement, with proposals addressing the use of foundation models and general-purpose AI. The negotiators have not yet agreed on specific texts for these issues, highlighting the complexities involved in regulating AI in sensitive areas such as law enforcement. The definition of AI is another contentious point, with industry representatives advocating for a definition that aligns with international frameworks to ensure harmonization and market access. Despite this progress, many questions remain open, notably how to deal with foundation models or with general-purpose AI integrated into vision-language-action models [58]. Despite the open questions and criticisms, the EU AI Act represents a groundbreaking effort in regulating AI, aiming to balance the protection of fundamental rights with fostering innovation. The final form of the AI Act, which at the time of this writing has not yet been issued, will likely shape the landscape of AI regulation not only in the EU but also globally, given its pioneering nature and comprehensive approach.

3.2.2 European countries

There have been initiatives to regulate AI at the level of the European Union (see, for example, the above-mentioned EU AI Act), but also at the level of its individual member states, although the EU regulations would eventually apply to all of them. At the same time, non-EU countries in Europe have also been discussing how best to set up AI governance structures.

Switzerland, as an example of a non-EU European country, has adopted a national AI strategy that emphasizes the responsible development and use of AI. The Swiss strategy includes measures to promote research and innovation in AI, while also addressing ethical and societal concerns [145]. The United Kingdom has also taken steps to regulate AI. In 2021, the UK government published a White Paper on AI, which set out a vision for a "responsible, trustworthy, and innovative" AI ecosystem. The White Paper included proposals for a new regulatory framework for AI, including a new AI Centre of Excellence and a new AI Ethics Advisory Council [117]. The UK's approach to AI regulation has been influenced by its decision to leave the European Union (Brexit). Brexit has given the UK the freedom to develop its own regulatory framework for AI, but it has also created uncertainty about how the UK's AI regulations will interact with those of the EU [52].

Russia's war against Ukraine has raised concerns about the potential for AI to be used for unethical and dangerous purposes. Both Russia and Ukraine have reportedly used AI in the war, for purposes such as targeting enemy positions, conducting surveillance, and spreading disinformation. In the tech industry, this raises the fear that the wartime adoption of AI could spread more widely to other conflicts [10, 122]. Such discussions have also highlighted the urgency of international cooperation on AI regulation. There is currently no international treaty or agreement on AI regulation, and the lack of international consensus on AI norms and standards has created a vacuum that can be exploited by malicious actors and for military purposes [41]. In fact, both Ukraine and Russia have adopted laws and regulations related to AI. However, these laws and regulations are not comprehensive, do not adequately address the ethical and societal concerns raised and, not surprisingly, are largely opportunistic [122]. Ukraine's AI strategy, adopted in 2021, sets out a vision for the development and use of AI in the country. The strategy includes measures to promote research and innovation in AI, while also addressing ethical and societal concerns [74, 135]. Russia has also adopted a number of AI-related regulations. However, these regulations are focused primarily on promoting the development and use of AI in the country, and they do not address the ethical and societal concerns raised by AI [102]. The compliance of Ukraine and Russia with AI laws and regulations is difficult to assess: there is limited information available on how these countries are implementing their AI laws and regulations, and there is no independent oversight of their compliance efforts. Ukraine's use of AI in the war could potentially conflict with EU laws, even though Ukraine is not a member of the EU. The EU's AI White Paper and the proposed EU AI Act both set out principles for the development and use of AI, and Ukraine's use of AI in the war could violate some of these principles. This further exemplifies how difficult it is to set up universal frameworks for AI regulation that are also fair and do not put one party at a strategic disadvantage.

The AI Framework Convention of the Council of Europe, officially referred to as the "Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law," is an ambitious legal instrument designed to ensure that AI systems' development, design, use, and decommissioning adhere to the Council's standards on human rights, democracy, and the rule of law. Initiated in September 2019, this convention stands as the first binding fundamental rights instrument for AI negotiated on such a comprehensive scale. It aims to address the seamless application of human rights and the rule of law in contexts where AI systems either assist or replace human decision-making, especially pertinent as AI becomes increasingly integrated into various sectors, including healthcare [79, 153].

This initiative by the Council of Europe is set against the backdrop of the European Union's own regulatory effort, the EU AI Act. While both aim to govern the ethical use of artificial intelligence, they differ in scope, applicability, and regulatory focus [29, 60, 63, 123]:

  • Geographic scope and applicability: The AI Framework Convention encompasses the 46 member countries of the Council of Europe, extending beyond the EU to include other nations. Its objective is to establish overarching principles for AI that align with human rights and democracy. In contrast, the EU AI Act is tailored specifically to the EU's internal market, aiming to ensure AI's safety, transparency, and accountability within the Union.

  • Regulatory approach and focus: The Council of Europe's convention prioritizes the integration of AI with human rights, democracy, and the rule of law across various domains. The EU AI Act, however, introduces a risk-based framework, categorizing AI systems by their potential risk and applying corresponding regulatory standards, with a particular focus on high-risk applications.

  • Binding nature and legal impact: The AI Framework Convention, once ratified, becomes a binding international treaty that mandates aligning national laws with its principles. The EU AI Act, as a regulation, will be directly applicable across all EU member states, creating a uniform regulatory environment without the need for national transposition.

Despite these differences, both initiatives are pivotal in shaping the global discourse on AI governance. They represent significant efforts to balance the benefits of AI technology with the need to protect fundamental rights and ensure ethical governance. By addressing different aspects of AI regulation, from ethical principles to risk management, they collectively contribute to a more responsible and human-centric approach to AI development and use.

3.3 The Americas

3.3.1 The United States and Canada

The United States' approach to AI regulation follows several paths. It reflects the country's understanding of the need to balance its position as a global leader in technology and innovation with the imperative of ensuring responsible AI development. The strategy involves not only setting standards for AI safety and security but also addressing broader concerns such as privacy, equity, civil rights, consumer and worker protection, and competition. This holistic approach aims to secure the benefits of AI while mitigating potential risks [119]. The involvement of industry leaders in shaping these regulations is a critical aspect of the U.S. strategy. By inviting key players in the AI industry to participate in congressional hearings, the U.S. government is ensuring that the regulatory framework is informed by those at the forefront of AI development. This collaboration facilitates a more nuanced and effective regulatory environment, one that supports new inventions while providing necessary oversight [2]. So far, no preliminary laws comparable to the EU's AI Act are on the table, but there have been many attempts to bring in the necessary industry and legal experts to find the best balance without stifling industrial growth [88]. A strong motivation for not moving too fast on regulation is the fear that the U.S. could end up in a head-to-head competition with China, the second-leading power in the AI race [54, 127].

Canada's approach to AI regulation, while undoubtedly influenced by its proximity to the U.S., is distinct and tailored to its national context. The Canadian government is actively working to create a regulatory environment that ensures the safe, ethical, and equitable use of AI. The development of the Artificial Intelligence and Data Act (AIDA) is a cornerstone of this effort, aiming to provide a robust legal framework for AI deployment in Canada. Canada’s focus on ethical and responsible AI is further exemplified by its development of specific guidelines for the use of generative AI in federal institutions. These guidelines reflect a proactive approach to managing the risks associated with AI, ensuring that its deployment in public services is in line with national values and standards. The introduction of the voluntary AI Code of Conduct is another significant step. It underscores Canada's commitment to fostering a policy ecosystem that not only cultivates public trust in AI but also supports the success of Canadian AI companies. This initiative is indicative of Canada’s broader strategy to create a balanced and forward-looking AI regulatory environment. Furthermore, the Pan-Canadian Artificial Intelligence Strategy, supported by significant government funding, highlights Canada's dedication to developing a cohesive national AI strategy. This strategy aims to harness the potential of AI across the country, promoting collaboration and innovation within the Canadian AI ecosystem [14, 95, 126].

3.3.2 South America

In South America, the approach to AI regulation is shaped by the unique socioeconomic and technological landscapes of the region. South American countries, grappling with challenges like high unemployment and digital literacy issues, are mindful of the potential impacts of generative AI tools. To address these concerns, several Latin American governments convened in Santiago de Chile in October 2023 under the aegis of UNESCO and the Corporacion Andina de Fomento (CAF) to discuss AI ethics and regulatory frameworks for the deployment of generative AI systems. The initiative highlights the region's commitment to collaborative efforts in determining the future of AI governance, emphasizing the importance of regional cooperation and shared strategies [46, 146].

While collaborative efforts are crucial, individual countries in South America also have distinct approaches to AI regulation [46, 94, 109]:

  • Brazil: Brazil stands out with comprehensive AI-related legislation, including data protection, cybercrime, and cybersecurity laws. The country is actively engaged in regulatory experimentation projects and is processing a bill specifically for the regulation of AI. This showcases Brazil's proactive stance in creating a robust legal framework for AI.

  • Chile: Chile is also at the forefront, with a bill in the pipeline to regulate AI and a history of regulatory experimentation. Chile's governance approach includes a strong vision and institutional framework for AI, as evidenced by its high scores in governance and adoption dimensions. This indicates Chile's commitment to integrating AI into its institutional and private sectors.

  • Peru: Peru has specific regulations for AI technology and legislation on data protection, highlighting its focus on creating a safe and responsible AI ecosystem.

  • Colombia: Although Colombia does not have a specific AI law, it has a history of regulatory “trial and error” phases in the digital space and a data protection law in place. This approach suggests a more experimental and incremental path towards AI regulation.

  • Uruguay: Uruguay emphasizes innovation and AI development, indicating a strong focus on nurturing AI capabilities and technological advancements.

The AI regulatory landscape in South America presents a stark contrast to that of the U.S. and Europe. Unlike the United States, which emphasizes maintaining global leadership in AI, South American countries are more focused on addressing regional challenges, such as high unemployment rates and the risk of automation. The continent's approach is less about leading the global AI race and more about ensuring that AI development aligns with regional socio-economic needs and challenges. Europe, known for its stringent data protection and privacy regulations like the GDPR, has influenced global AI governance norms. South American countries appear to acknowledge European proposals such as the EU AI Act, but are concerned with tailoring their AI policies to fit their unique regional context [46].

South America's approach to AI regulation is therefore characterized by a mix of collaborative regional efforts and individual national strategies. These efforts reflect a keen awareness of both the opportunities and challenges posed by AI, particularly in a region marked by significant socioeconomic disparities. While influenced by global trends and standards, South American countries are charting their own course in AI governance, focusing on regional needs, ethical considerations, and responsible innovation.

3.4 Asia

3.4.1 China

China's use of AI in governance and surveillance has been a subject of considerable debate and criticism in the West. The country is often portrayed as compromising governance to enable security-focused AI applications. However, this view is an oversimplification. While stability remains a critical priority for the Chinese government, there is an evolving attitude within the country towards AI-enabled surveillance policies. The State Council of China has emphasized AI's "irreplaceable role" in maintaining stability, as evident in the AI-enabled social credit system based on exhaustive data gathering to incentivize compliance. Recent developments show that China’s regulatory bodies are actively balancing security interests with desires for reduced restraints on innovation. The country has imposed privacy-related penalties and restrictions against tech firms, such as sanctioning the ride-share firm Didi. These measures indicate a shift towards more measured regulatory phases in response to AI challenges, including privacy concerns and data breaches [23, 134, 164].

China's approach to AI regulation is characterized by a dual emphasis on promoting AI innovation while ensuring state control over the technology. This approach contrasts with the more horizontal approach of the EU AI Act, which applies flexible standards and requirements across a wide range of AI applications. China employs discrete laws to tackle singular AI issues, a more vertical regulatory approach [143]. China’s AI regulation has so far addressed challenges like AI-driven recommendation algorithms and deep synthesis tools (often used to create deepfakes). Regulations require service providers to limit discrimination, mitigate the spread of negative information, and address exploitative work conditions. Laws around deep synthesis tools mandate that such content conforms to information controls and is labeled as synthetically generated, with additional measures to prevent misuse. Despite China’s use of AI in law enforcement and surveillance, regulations have been introduced to address the use of this technology by non-governmental agencies. These regulations stipulate the specific purposes for which facial recognition tools may be used, emphasizing public safety in public places [23].

In comparison, the US has a more decentralized approach, focusing on specific applications of AI. The EU, on the other hand, has implemented a comprehensive and risk-based approach. China's blend of innovation promotion, state control, and societal influence is reflective of its political attitudes, such as communism and collectivism [9, 35, 118].

Three current real-life cases exemplify the Chinese approach [22, 73, 160]:

  • Social Credit System: A notable example of AI utilization in governance is China's social credit system. It leverages exhaustive data gathering for compliance and stability, offering benefits such as tax breaks and transport discounts to compliant citizens.

  • Facial Recognition Technology: The usage of AI-enabled facial recognition technology for public security has sparked intense public opposition in China. This led to policy updates by the Cyberspace Administration of China (CAC), which now requires companies to obtain citizen consent for using facial recognition technology and to offer alternatives where feasible. However, the Chinese government is likely exempt from these consent requirements and has probably reserved the right to use facial recognition systems according to its needs.

  • Generative AI Regulation: On August 15, 2023, China's interim measures governing generative AI services came into force, restricting how generative AI technology may be developed and offered to the public. This regulation demonstrates China's strict approach to the public use of AI, contrasting with the more laissez-faire approach of the US.

Although much may be happening out of public view, China is widely acknowledged to be the country that uses AI the most to make its citizens comply with its ideology.

3.4.2 Japan, India and Korea

Like many other countries, Japan focuses its AI regulation strategy on encouraging innovation while ensuring responsible use. The government's "Social Principles of Human-Centric AI" prioritize human dignity, diversity, inclusion, and sustainability, steering away from stringent constraints on AI use. Instead, Japan prefers agile governance, relying on sector-specific regulations and nonbinding guidelines that evolve with the technology. This approach is complemented by legal frameworks like the Act on the Protection of Personal Information and the Product Liability Act, which indirectly influence AI development and use. Japan also supports innovation through legislative reforms, such as the revised Road Traffic Act, which accommodates higher levels of automated driving [57].

Meanwhile, in India, a complex AI governance landscape is emerging, attempting to balance the need to foster businesses with addressing potential risks in the country. The government has vacillated between a non-regulatory stance and a more cautious approach focused on mitigating user harm. India's recent introduction of the Digital Personal Data Protection Act marks a significant step towards addressing data privacy in AI development. Discussions continue on whether to adopt regulatory models similar to the EU or the US, but India's unique economic and cultural context calls for more targeted regulations that address specific negative consequences of AI, especially considering that India is a large country with many rural areas and social concerns such as the caste system [80].

South Korea is trying to position itself as a leader in AI technology, with an emphasis on both industry support and user protection. The proposed Act on Promotion of AI Industry and Framework for Establishing Trustworthy AI aims for comprehensive regulation of the AI industry, categorizing high-risk AI systems and establishing ethical guidelines for AI use. This legislation reflects South Korea's commitment to fostering a technologically advanced and ethically responsible AI ecosystem [71, 108]. In stark contrast, North Korea's engagement with AI focuses on its application in cyberwarfare, utilizing AI technologies for cyberattacks. This divergent use of AI underscores the diverse implications of AI globally and highlights the necessity of international cooperation in AI governance [72].

3.4.3 Singapore

Singapore has established itself as a noteworthy leader in AI regulation on the Asian continent, striving to balance innovation with ethical considerations and governance. The city-state has developed a Model AI Governance Framework, which offers detailed guidance for the responsible deployment of AI technologies, emphasizing principles like fairness, transparency, and explainability. This framework, complemented by industry-specific guidelines, sets out Singapore's expectations for ethical AI use, highlighting the importance of consumer protection and ethical considerations [40].

In the international arena, Singapore actively engages in discussions to shape global norms and standards for AI, collaborating with organizations such as the OECD and ASEAN. This global engagement reflects its commitment to contributing to and learning from the international community on AI ethics and governance. To support innovation, the Singaporean government invests in AI research and development, fostering partnerships between the public and private sectors and academia. This includes creating environments for safe experimentation with AI technologies and funding initiatives that explore innovative AI applications. Education and training programs are also a priority, aiming to enhance AI and data literacy among the workforce and citizens. This prepares society for an AI-driven future, ensuring people have the skills to work with AI technologies and understand their implications [163, 165]. Furthermore, Singapore emphasizes the ethical use of AI, particularly in terms of privacy and data protection. The Personal Data Protection Act (PDPA) plays a crucial role in ensuring that data used in AI applications is handled securely and ethically. To facilitate innovation within a regulatory framework, Singapore has introduced regulatory sandboxes that allow businesses to test innovative AI solutions in a controlled environment, thus encouraging innovation while maintaining oversight [24, 62, 141, 158].

Through these measures, Singapore aims to foster a responsible and dynamic AI ecosystem, contributing to economic growth and societal well-being, while positioning itself as a model for Asian as well as global AI regulation.

3.4.4 Other (Eur-)Asian Countries

In the realm of AI technology developments and politics across Asia, several countries are making interesting strides. Malaysia, through its National AI Policy launched in 2019, seeks to become a regional AI leader. Its focus lies in cultivating AI talent, supporting AI research and development, enhancing data infrastructure, and advocating for ethical AI development [8, 113, 137]. Thailand, under its Thailand 4.0 Strategy, is channeling AI to transform into a high-tech economy, with particular emphasis on agriculture, healthcare, and transportation sectors [93]. Indonesia's 2020 Roadmap for AI Development and Implementation underscores its ambition to be a global AI leader. The roadmap highlights similar themes of nurturing AI talent, research and development, AI adoption, and ethical AI development [124, 125].

The Middle East, too, is witnessing rapid AI growth, with Saudi Arabia, the UAE, and Israel at the forefront. Saudi Arabia's National AI Strategy and the Saudi AI Hub are geared towards establishing it as a global AI powerhouse [103]. The UAE's AI Strategy and Dubai AI Lab reflect its ambition to be a hub for AI innovation [151]. Meanwhile, Israel stands out for its robust tech sector and renowned AI research institutions [6]. At the time of this writing, a new war has broken out between Israel and Hamas in the Gaza region. Although it is to be expected that AI might be deployed in the delicate warfare against the leaders of Hamas, at present there is no information concerning the details of technology use.

3.5 Africa

Africa's engagement with AI reflects a varied landscape of policy development, technological investment, and innovation. Countries across the continent are gradually shaping their AI strategies and policies, though the pace and approach vary significantly.

Mauritius, Egypt, and Kenya are at the forefront, having already developed specific AI policy documents. Mauritius's AI strategy, established in 2018, aims to utilize AI for reviving traditional economic sectors and fostering new development pillars. Egypt's national AI strategy, formulated in 2021, focuses on leveraging AI to achieve Sustainable Development Goals (SDGs) and establish Egypt as a key player in regional and international AI cooperation. Kenya began exploring AI potential in 2018, with its Distributed Ledgers Technology and AI Task Force developing a roadmap to harness these technologies for national competitiveness and innovation [147].

Other African nations like Ethiopia, Ghana, Morocco, Rwanda, South Africa, Tunisia, and Uganda are also defining their AI policies. Ghana and Uganda, for instance, have been part of the Ethical Policy Frameworks for AI in the Global South project, focusing on local AI policy frameworks development [7].

South Africa's AI landscape is characterized by its acknowledgment of underperformance in high-technology industries, including AI, as noted in a 2020 report by the Presidential Commission on the Fourth Industrial Revolution [98]. However, the country is recognized for its potential in human capacity and its efforts towards developing an Africa-centric strategy for AI. South Africa is one of the African countries that has taken a proactive approach to AI policy and governance. The country has a strong data protection law, the Protection of Personal Information Act (POPIA), which came into effect in 2014. POPIA places restrictions on the collection, use, and disclosure of personal information, and it also includes provisions for automated decision-making. South Africa also has a number of other AI-related policies in place, such as the National Data Strategy and the National Framework for Research and Development in Artificial Intelligence. These policies aim to promote the responsible development and use of AI in South Africa. Despite its strong regulatory framework, South Africa is still facing a number of challenges in implementing its AI policies. One challenge is the lack of expertise in AI, which can make it difficult for companies and government agencies to comply with the regulations. Additionally, there is a need for more public awareness and understanding of AI, so that people can make informed decisions about how AI is used in their lives [65, 85].

Overall, Africa is still in the early stages of developing its AI policy and governance frameworks. However, there are a growing number of initiatives underway to address the challenges and opportunities of AI on the continent. Many African countries are eager to work together to develop responsible AI policies that will promote sustainable development and benefit all Africans [55]. The 2021 Government AI Readiness Index underscores the disparities among African nations in their preparedness to use AI [47]. Countries like Mauritius, Egypt, and South Africa score higher, reflecting their more developed economies. In contrast, countries like the Democratic Republic of the Congo, Angola, and the Central African Republic score lower, influenced by challenges in infrastructure and governance.

AI discussions in Africa revolve around public sector reform, education, research, national competitiveness, and tech partnerships. Countries with adequate capacities focus on skill and capacity development. For instance, Kenya has integrated coding into its national curriculum, and South Africa hosts events like the Deep Learning Indaba conference to bolster local AI capacities. In Nigeria, the National Centre for AI and Robotics (NCAIR) promotes research and development in AI and related technologies. Egypt's AI Centre of Excellence focuses on educating AI professionals and establishing AI usage standards.

Notably, pan-African initiatives such as the African Master's in Machine Intelligence (AMMI), supported by companies like Meta and Google, and the establishment of research centers and institutes across the continent indicate a growing commitment to AI development and education. The involvement of multinational tech companies, such as IBM and Google, in supporting AI research labs and centers in African countries like Kenya, Ethiopia, Ghana, and South Africa further catalyzes the growth of the AI ecosystem on the continent [11, 70, 112, 157].

The EU and the U.S. are two of the world's leading AI regulators, and their regulations have the potential to impact Africa as well. For example, the EU's General Data Protection Regulation (GDPR) is one of the most stringent data protection laws in the world, and it has implications for African companies that collect and process data from EU citizens. Additionally, the U.S. is considering a number of AI-related regulations, such as a law to regulate the use of facial recognition technology, which might inspire African countries in their own regulatory endeavors. While some African countries are working on AI policies, others are looking to adopt or adapt existing regulations from the EU or the U.S. This might be interpreted by some as a way to harmonize AI regulation across the continent and to ensure that African companies are able to comply with international standards [16, 105, 112].

4 Corporate initiatives from the world of economy and business

In the absence of comprehensive national and international regulations governing artificial intelligence (AI), corporations are taking the initiative to develop their own guidelines and frameworks for responsible AI development and deployment. This is driven by several factors, including the potential risks associated with AI, the desire to build public trust in AI technologies, and the need to ensure that AI is used in a way that aligns with corporate values and social norms [25, 44, 159].

The lack of clear and comprehensive AI regulations poses several challenges for businesses. First, it creates uncertainty about what is considered acceptable and unacceptable behavior in the AI space. This can lead to companies being hesitant to develop and deploy AI technologies for fear of legal or reputational repercussions. Second, the lack of regulations can make it difficult for companies to compare and benchmark their AI practices against those of their peers. This can hinder the development of best practices and lead to inconsistencies in how AI is used across different industries [28]. In response to the challenges posed by the lack of regulations, a number of corporations have begun to develop their own ethical guidelines and principles for AI development and deployment. These guidelines often cover topics such as data privacy, fairness, accountability, and transparency. Some companies have also gone further and established AI ethics boards or committees to oversee their AI practices and provide guidance to employees [61].

One of the most notable examples of a corporate initiative to address the challenges of AI is OpenAI, the developer of the above-mentioned ChatGPT and a current leader in the AI developer scene. OpenAI has developed a set of guidelines for the development of safe and beneficial artificial general intelligence (AGI), sometimes discussed under the label of "superintelligence." These guidelines are based on the principles of value alignment, safety, and robustness. OpenAI is also working to develop techniques for ensuring that AGI is aligned with human values, such as safety, fairness, and beneficence [104]. Other major tech companies, such as Microsoft, IBM, and Meta, have also proposed their own AI regulations. Microsoft has called for a global AI accord that would establish a set of principles for the development and deployment of AI [92]. IBM has proposed a declaration of ethical principles in the development and use of AI that outlines 12 principles for responsible AI development [67]. Meta has also proposed a set of principles for responsible AI development, which focus on the need for fairness, accountability, and transparency [90]. In addition to individual corporate initiatives, there are also a number of networks of major business players that are working to develop AI regulations. For example, the Partnership on AI (PAI) is a multi-stakeholder organization that includes businesses, non-profits, and academic institutions. The PAI has developed a set of guidelines for the responsible development and deployment of AI, which are also based on the principles of fairness, accountability, transparency, and safety. They focus on forecasting future risks, developing best practices, improving preparedness, and creating foundations for governance [106]. Another example is the Global AI Council (GAC), a group of business leaders committed to promoting responsible AI development. The GAC has developed a set of principles for responsible AI development, which focus on the need for good ethical frameworks and governance structures for the development of AI [36].

All these initiatives attest to the fact that currently businesses mostly have to regulate themselves when it comes to ethical practices in the use of AI. The notion of “self-regulation” refers to the processes and practices that corporations implement internally to ensure that their development and deployment of AI technologies adhere to ethical standards, even in the absence of external regulatory mandates. This approach allows companies to demonstrate their commitment to responsible innovation by preemptively addressing the ethical, social, and legal challenges that AI technologies may pose. Through self-regulation, corporations can establish a framework for accountability, ethical decision-making, and public trust, setting benchmarks for privacy, security, fairness, and transparency. Such frameworks often include conducting ethical AI audits, implementing AI ethics training for employees, and engaging with stakeholders to ensure a diverse range of perspectives are considered in the development process. This proactive stance on self-regulation not only mitigates risks associated with AI but also serves as a model for potential future legislation, offering insights and practical examples for policymakers. However, it is important to recognize that while self-regulation is beneficial, it should not be seen as a substitute for comprehensive legal regulations. Instead, it should complement future laws by laying the groundwork for responsible AI development and use, ensuring that when regulations are enacted, they are informed by the practical experiences and ethical considerations of those at the forefront of AI technology [30, 45].

As can be seen from these examples, the lack of comprehensive national and international regulations governing AI has led a number of corporations to take the initiative to develop their own guidelines and frameworks for responsible AI. These corporate initiatives are helping to shape the emerging landscape of AI regulation and are likely to play an increasingly important role in ensuring that AI is developed and used in a responsible and ethical manner. It is also likely that they are used as inspiration for national and international policymaking.

5 Discussion

5.1 Political challenges with the unprecedented pace of AI developments

The rapid advancement of AI technology poses significant challenges to existing regulatory frameworks, which are often slower to evolve. This mismatch between technological progress and the capacity of regulatory systems, particularly in safeguarding democratic values and human rights, is a central issue faced by national governments and multilateral institutions worldwide [43]. As elaborated above, the EU's efforts to regulate AI are exemplified by the proposal of the AI Act. This act represents a comprehensive, horizontal legal instrument intended to regulate AI systems across multiple sectors, including the financial sector. The AI Act adopts a risk-based approach, categorizing AI systems based on the level of risk they pose; systems used to evaluate creditworthiness or establish credit scores, for example, are classified as "high-risk AI systems" [87]. This approach reflects a growing awareness of the need to balance innovation with fundamental rights and safety concerns. The AI Act's development and the broader EU strategy since 2017 aim to integrate a policy that tightens control over AI systems, ensuring consumer protection and adherence to fundamental rights. The reception of the AI Act within the sector indicates general support for its objectives, particularly its emphasis on health, safety, and fundamental rights protection, and underscores the importance of a balanced approach that considers existing legislative frameworks and clarity in defining the roles and responsibilities of supervisory authorities [154].

At the same time, the European Parliament and Council's response to the AI Act reflects a convergence on certain key points. Both institutions affirm the approach of addressing sectorial legislation relevant to AI systems in the finance sector, and they agree on extending the list of high-risk use cases to include certain AI systems. However, it has proven difficult to agree on the list of principles that ought to be pursued, and to manage the developing technology without hindering innovation in the sector. Strong regulatory measures, while necessary for protection and ethical considerations, may inadvertently slow down the pace of AI advancements or limit access to these technologies. This was particularly evident in cases like Italy's temporary block of ChatGPT due to data regulations, Microsoft’s challenges in introducing Copilot in the European market, and the restricted access to Claude-2 by Anthropic AI in European countries. These instances demonstrate the tension between the need for comprehensive regulation and the risk of stifling innovation or limiting access to cutting-edge AI technologies. On the one hand, governments have the responsibility to protect their citizens’ rights, but at the same time they have a vested interest in having access to key technology for their economies [58, 89, 140].

This means that the political challenges in regulating AI stem from the need to harmonize rapid technological advancements with a regulatory framework that ensures safety, protects fundamental rights, and supports democratic values, without unduly hampering innovation and global competitiveness. This is a delicate line to walk. The EU's AI Act, with its risk-based approach, exemplifies efforts to strike this balance, although the ultimate effectiveness of such regulations in the face of the swiftly evolving AI landscape remains to be seen.

5.2 Pros and cons of delegating to the business world

There are two major interrelated problems with democracies trying to grapple with the speed of AI innovation and setting up the necessary boundaries:

  • Democratic processes (i.e. the formulation of laws) take a lot of time and must go through various rounds of consensus finding.

  • In a digital world that unfolds so quickly, laws that might be established after a lengthy democratic process may become obsolete as soon as, or even before, they come into force.

Hence, one idea would be to grant corporations more responsibility in these developments and to act slowly in enforcing new rules. To a certain degree, one might be under the impression that many countries, including the United States, make use of this strategy. This leads to a legislative vacuum in which the major industry players have to settle upon the rules themselves. Delegating the authority of AI regulation to the business world is a radical yet intriguing proposition. The approach could leverage the agility, innovation, and deep technical expertise that businesses possess. Companies at the forefront of AI development have an intimate understanding of the technology's nuances and are well-positioned to anticipate and manage the risks associated with AI. By delegating regulatory or normative authority to these businesses, regulations could evolve in tandem with the technology, ensuring more timely and relevant oversight. This could also lead to more industry-specific regulations, tailored to the unique needs and challenges of different sectors. Moreover, businesses have a vested interest in maintaining public trust in AI. As such, self-regulation could incentivize them to adhere to ethical standards and best practices, not only to mitigate risks but also to maintain their reputation and consumer trust. This approach could foster innovation by allowing businesses more freedom to explore and develop new AI technologies without the constraints of government-imposed regulations.

Schneider [130] and Thuraisingham [149] both emphasize the need for AI governance frameworks within businesses, with Schneider focusing on the governance of data, ML models, and AI systems, and Thuraisingham discussing the roles and responsibilities of corporate officers and the board. Cihon [25] further explores the role of corporations in governing their AI activities to advance the public interest, highlighting the need for diverse actors to work together. Nieminen et al. [100] underscore the multi-level and multi-dimensional nature of AI governance, calling for a shared understanding and coordination across sectors, and a balance between soft and hard governance mechanisms. Taken together, these studies highlight the importance of multi-level governance in AI regulation, with a focus on the responsibility of businesses and corporations.

However, the downsides of this approach are significant and should not be overlooked. The primary concern is that businesses, driven by the goal of maximizing profits, may not always prioritize the broader interests of society. Their focus on commercial success could lead to the neglect of ethical considerations, data privacy, and the equitable distribution of AI benefits. This profit-driven motive could also result in a lack of transparency, as businesses might withhold information about their AI systems that could negatively impact their competitive advantage. Furthermore, without democratic legitimacy and accountability to the public, businesses regulating their own AI systems could lead to conflicts of interest. They may be more inclined to establish guidelines that favor their own technologies and business models, potentially stifling competition and innovation in the broader industry. There is also the risk of creating a regulatory environment that lacks consistency, as different companies could establish varying standards and practices. Another concern is the potential for misuse of AI. Without stringent, impartial oversight, businesses might develop or deploy AI systems in ways that could be harmful or discriminatory. This could particularly impact vulnerable populations, who might not have the means to advocate for their rights in a corporate-led regulatory environment.

Hence, when it comes to the idea of handing responsibility to the business world, there are important pros and cons at play. While delegating AI regulation to the industry offers potential benefits in terms of innovation, agility, and industry-specific expertise, the drawbacks are substantial. The primary issue lies in the fundamental difference in objectives between businesses and societal needs. Businesses aim to maximize profits, which may not always align with the goal of benefiting all of society. This misalignment could lead to ethical oversights, lack of transparency, conflicts of interest, and potential misuse of AI, raising serious concerns about the efficacy and fairness of such a regulatory approach. Therefore, while incorporating insights and expertise from the business world is vital, complete delegation of regulatory authority to businesses poses risks and challenges that appear ethically unacceptable.

6 Possible solutions

6.1 Benefits and problems with previous solutions

Schiff et al. [128] analyze how AI ethics and governance issues are treated in the public, private, and NGO sectors. Historically, however, public–private partnerships have also been pursued, for example in public health. Such cooperative arrangements hold considerable benefits, such as retaining the innovativeness of the business world while legislation remains accountable to the public. At the same time, there are difficulties, since the ideals, incentives, and structures of both types of organizations may differ [114]. Public–private partnerships (often abbreviated as PPPs) include a range of collaborations, from the contracting-out of services, franchising, business management of public utilities, and joint ventures to the design of hybrid organizations for risk sharing and co-production between government and private agents. In hybridity, there is a bidirectional impact in which business models are transported into the public sphere and public issues are transferred to business goals [138]. Although there are some ideas for formulating PPPs and making use of hybridity in robotics [121] and AI [12, 162], most ventures and theorizing have focused on placing the main responsibility in either public or private hands [3, 20, 31, 150]. Hence, previous solutions mostly take the form of proposing strong governmental control over AI regulation or delegating responsibility to the industry. Both of these ideas have benefits and problems, as Table 1 illustrates.

Table 1 Summary of the major benefits and problems of different sources of control over AI regulation

From this it seems clear that AI regulation should firmly remain in the hands of democratic processes and governmental control so that collective interests are prioritized over the short-term, particular goals of specific companies. At the same time, however, the know-how and agility of the industry should be included in the formulation of frameworks so that regulatory endeavors remain both timely and effective. This means that the best solution would likely take advantage of both sets of institutional characteristics. The following chapter provides an example of what such a solution could look like.

6.2 Proposing an institutional network

6.2.1 Dynamic laws in AI regulation

Combining what we have discussed so far, the present chapter intends to introduce a novel solution, which would be a regulatory body that is connected to international organizations and hosts hearings for corporations. The idea revolves around establishing a governmental body with the authority to create and implement transitory regulations. This body would not just serve an advisory role; it would be empowered to enact "dynamic laws" that respond swiftly to the fast-paced changes in the AI sector. These dynamic laws, while agile and responsive, would be firmly anchored in the more stable "fixed laws" established through traditional democratic processes, ensuring that they do not contradict these foundational legal principles. This body would be integrated within a larger network of international governmental agencies and have deep ties with the AI industry, including both major players and smaller entities, providing a platform for consultation and raising industry-specific issues. Figure 1 illustrates on a conceptual level how such a regulatory body would be embedded in its environment.

Fig. 1

Illustration of the dynamic regulatory body and its interdependence with its environment and the major stakeholders

As the illustration shows, the dynamic regulatory body hosts a platform for state organizations, international organizations, and key business players to bring in their ideas and concerns. This should be a platform where all constructive criticism and potential problems are welcome to be discussed, informing the regulatory body about both present issues and those on the horizon concerning AI development and the impact of the laws that are enacted. State organizations, such as departmental representatives, can contribute their views on how the laws in force affect the citizens they are meant to protect. At the same time, international organizations, such as spokespeople from the UN, can raise awareness of how the laws impact the international community, and business players from the industry should continuously feed back their views on how the proposed regulations would fare given their strategies and technical expertise. These three stakeholders do not have the power to enforce rules but only to inform the body about potential benefits and problems underway.

The dynamic regulatory body is embedded in what is here referred to as the democratic base, which consists of the traditional legal and political systems. As with any other organization, the government and the courts always retain the possibility to overrule its practices and enforce rules that are in unison with the law. Here, this is referred to as the "fixed laws", which is simply what we generally associate with our current body of law. On the one hand, legal authorities and political representatives can inform the dynamic regulatory body about what, in their opinion, the best course of action would be. On the other hand, they can also enforce practices based on what they deem to be lawfully right in case of a misalignment of values and practices with the dynamic regulatory body. Overall, it is the task of the new regulatory body to stay up to date on what is happening in the digital and AI world and where potential problems lie in terms of clashes with civil rights and ethical ideals. Since the usual democratic processes are too slow to provide adequate guidance in a rapidly evolving world, this body can issue what is here referred to as "dynamic laws" that can be enacted but also changed rather rapidly. These dynamic laws must first pass through a "filter", meaning that they are evaluated against the fixed laws to ensure that the new transitory regulations do not conflict with existing legal norms. Within the bounds of the existing legal framework, the regulatory body can then establish new regulations that industry players need to respect for the time being. In practice, regulations that prove ineffective will have to be modified, while those that prove vital for creating adequate rules of the game can be translated into the body of fixed laws via the much slower democratic processes.

The proposed idea has several potential advantages and challenges. On the upside, it offers a nimble approach to regulation, allowing for quick adaptation to new developments in AI. The inclusion of industry representatives ensures that the regulations are informed by technical expertise. However, the approach also raises concerns. The rapid implementation of dynamic laws might bypass the extensive democratic deliberation and checks typically associated with law-making, potentially affecting democratic accountability. The significant presence of industry representatives could also lead to regulations that favor corporate interests over public welfare. In a democratic setting, implementing such a framework requires careful consideration to balance the need for rapid regulatory responses with the principles of democratic governance. Regular public consultations, clear criteria for the activation of dynamic laws, and mechanisms for rapid dissemination and education about these laws could enhance the framework's effectiveness and democratic legitimacy.

6.2.2 Network governance and safety considerations

The idea of network governance itself is not new, although it has not previously been applied to AI regulation in the way proposed here. Network approaches are in part a response to models in which policy making is seen as a more or less rational and sequential process from problem definition through policy intervention to evaluation and feedback [18]. Considine [27] described network governance as a typology of institutional ensembles, offering a solution to the problem of dynamic inertia in governmental institutions by identifying network governance as a crucial pathway for change and as enabling structures for the learning, storage, and sharing of hidden alternatives to established institutional routines; administrative authorization was identified as the key to success. Network governance may generate a form of institutional domination that encompasses both citizens and civil society actors, owing to the arbitrary influence that certain network participants come to exercise over the life choices of nonparticipants [77], but most importantly it introduces a division of power via multiple boards, checks and balances, and active stakeholder engagement [111]. Krogh [76] found that the most effective network managers adapt the institutional design to local conditions and link the publicly mandated networks to self-convened stakeholder networks. Although the conceptualization of governance for interorganizational networks holds some problems [84], the dynamic regulatory body presented here may hold some merit due to its novel way of engaging with the rapidly changing AI environment.

In short, while this regulatory approach offers a promising solution to the challenges posed by the rapid advancement of AI technology, it requires a careful balancing act to maintain democratic integrity, ensure stakeholder balance, and effectively manage the dynamic nature of AI development. Its practical feasibility would have to be discussed in future treatments. However, there are some ethical and practical questions that must be addressed in any such regulatory model for AI systems. They concern the question of who bears the responsibility for placing unsafe and unfair AI models on the market: the designer, the business owner, the contractor who builds the system, the entity that tests it, or the regulators? Then, what sanctions can be imposed in a global market, and by whom? How can these sanctions be enforced? How can AI safety and legal conformity be tested? What benchmarks could be used for this? Such complex issues need to be resolved if any regulatory model is to gain public trust. Although these questions cannot be fully resolved here for the heuristic model presented, some directions shall be provided.

Maas [81] discusses the complex issue of the responsibility for placing unsafe or unfair AI models on the market, highlighting that AI deployment is prone to "normal accident"-type failures, which makes it difficult to contain or even detect these issues in time. This suggests that large-scale errors in AI systems are likely to occur, necessitating precautionary policymaking and practical recommendations for their safe deployment.

Carter [20] and Falco et al. [40] further emphasize the need for regulation and governance in AI, with the former discussing the potential threats of unregulated AI and the latter proposing independent audits as a pragmatic mechanism for an otherwise burdensome and potentially unenforceable assurance challenge. These discussions underscore the shared responsibility of designers, business owners, contractors, and testing entities in ensuring the safety and fairness of AI systems. Overall, as Dignam [33] states, the public interest should be at the heart of both technical and governance-centered AI regulation. As AI exerts strong pressure on existing regulatory frameworks [86], some authors suggest a regulatory market approach to AI safety regulation [26].

6.2.3 Responsibility concerns

This, however, does not specify who would be to blame in case of any harm or mistakes made by the AI. The responsibility for AI outputs is a complex and evolving area of law and ethics, with various parties potentially being held accountable. Developers and engineers can be responsible for design flaws and inadequate testing [144]. Manufacturers and companies may be liable under product liability laws [96]. Users or operators could be at fault for negligent use [34]. Regulators and governments may bear responsibility for inadequate safety standards [53]. Some authors even hold that the AI itself should be held legally responsible, though to what end remains rather unclear [144]. There is thus no consensus on who is to blame in the case of an unfortunate event. As for the presently proposed regulatory model, there are several stakeholders that need to be considered: the state organizations, international organizations, business players, the regulatory body itself, the democratic base (i.e. courts and politicians), potentially the auditors, and the users. All of these actors have what might be referred to as a partial responsibility, each for the things they are tasked to do. First, the state organizations are required to oversee the developments with due care and to report adequately. Second, the international organizations have to be held accountable for providing correct guidance according to the latest knowledge on what AI can do and where the risks lie. Third, the business players are responsible for deploying a product only after all safeguards are in place so that, as far as possible, the AI model cannot be used for unintended purposes. This includes measures such as (i) refusal to perform certain tasks, (ii) reporting of dangerous activities, (iii) content moderation for the users, and (iv) gradual deployment of models after testing and learning from past mistakes. It also includes (v) AI safety innovation, such as constitutional AI. As with the weapons industry, two parties carry the end responsibility: the companies and the users. The companies must do everything they can to counteract misuse and abuse of AI systems. Nevertheless, the users are to blame if they use the systems to deliberately create a harmful outcome or if they are not careful enough in their task automation. As such, there eventually appears to be a shared responsibility between the deployers and the users.

Once the responsible actors are identified, potential sanctions need to be formulated and enforced by the respective authority. Erdélyi and Goldsmith [38], as well as Clark [26], advocate for a global regulatory body, with Erdélyi specifically proposing an international AI regulatory agency. Siegmann and Anderljung [136] highlight the potential influence of the European Union's AI regulation on the global market, suggesting a "Brussels Effect" that could lead to the diffusion of these regulations. Geist [51] underscores the challenge of achieving a global consensus on AI regulation, given the diverse approaches taken by different countries. These voices suggest that while a global regulatory body and the influence of regional regulations are potential avenues for AI regulation, the challenge lies in achieving consensus and enforcement. In the present regulatory network heuristic (cf. Fig. 1), it makes sense that the dynamic regulatory body, depicted as an institutional network, is the authoritative institution empowered to enforce the dynamic rules. The specifics of the sanctions that should be applied must be worked out on a case-by-case basis and cannot be resolved in a first proposal such as the present one. However, there may be manifest parallels to the weapons industry since there, too, the products can cause considerable harm. The enforcement of the rules and the sanctions should occur on a national level by the nationally implemented dynamic regulatory body (since international bodies usually lack the necessary enforcement power and jurisdiction), whereas the regular courts would deal with the fixed and permanent laws and the dynamic laws would be overseen by this new institution.

6.2.4 AI assessments

An equally difficult question is how such AI models can be tested for their ethical and legal compliance, and to which standards they should conform. AI can be tested using a combination of technical standards, regulatory requirements, and ethical considerations. Technical standards play a key role in mitigating the risks associated with AI by defining technical requirements for the development and testing of AI systems [15]. These standards can cover aspects such as safety, non-discrimination, and reliability. Regulatory requirements, such as those outlined by the FDA, involve testing medical products using computer models, simulations, and virtual trials enhanced by artificial intelligence to ensure safety and efficacy [83]. Additionally, ethical and legal implications are considered in the testing and optimization of AI-based medical devices to comply with medical device regulations and international standards [120]. To ensure the safety and effectiveness of AI, it is essential to establish and adhere to rigorous standards, conduct thorough testing, and continuously evaluate and update regulatory frameworks to keep pace with technological advancements (e.g. [75]). As for the present model, a comprehensive list of conformity assessments can be created, which ought to be specified in future research depending on sector specificity (cf. Table 2).

Table 2 Ethical, legal, and technological conformity assessments of AI

7 Conclusions and future research

The present paper highlights the complex and evolving landscape of AI regulation. Worldwide, approaches to AI governance vary, reflecting diverse socio-economic and cultural contexts. The EU's comprehensive regulation contrasts with the sector-specific methods in the U.S. and the innovation-driven approaches in Asia and Africa. A crucial challenge is balancing rapid AI development with effective regulatory oversight, ensuring ethical standards and societal well-being.

The lag between AI's fast-paced evolution and the slower democratic process of law-making poses risks of either stifling innovation or failing to address new ethical and societal concerns. Corporate involvement in AI governance, through self-developed ethical guidelines, raises questions about effectiveness and alignment with broader societal interests. As such, a new solution is proposed in which governmental authorities work closely with international organizations and key industry players to create transient "dynamic laws" that are more flexible and can be adapted quickly.

Future research should focus on evaluating existing AI regulations, exploring international cooperation in AI governance, and assessing the impact of AI regulation on emerging economies. It is essential to understand the long-term societal impacts of AI and to develop adaptive regulatory frameworks that evolve with technological advancements. Continuous research and dialogue among stakeholders are vital for ensuring responsible AI development and deployment. Future discussions should also analyze what kinds of collaborations between industry and the legal and political sectors are warranted, and how ideas such as the proposed dynamic regulatory body acting as an institutional network fare in terms of feasibility.