1 Introduction

Recent advancements in artificial intelligence, in particular the emergence of easily accessible applications of large language models in the form of chats such as ChatGPT, have reignited debates about the potential consequences of AI. In their paper on the evolution of AI governance, Chesterman et al. noted that the Cambridge Analytica scandal and the emergence of ChatGPT were the two events that significantly impacted the dynamics of AI governance (Chesterman et al. 2023). The cover of the June 2023 edition of Time magazine displayed the heading "The End of Humanity" on a red background, with the letters "A" and "I" highlighted, referring to the potential existential risks related to the development of AI. These risks are usually associated with Artificial General Intelligence.

What is Artificial General Intelligence (AGI)? As Shevlin et al. put it, it is AI "that is capable of solving all tasks that human can solve" (Shevlin et al. 2019, 1). The word general refers to a characteristic of human intelligence that allows us to do many things thanks to our intellect. Usually, this term is contrasted with narrow artificial intelligence, which refers to AI tools applicable to specific domains and capable of performing tasks such as recognizing images or playing chess (Zerilli et al. 2021, xviii). In this division, "generative AI" like ChatGPT (Stokel-Walker and Van Noorden 2023; Ponce Del Castillo 2023) belongs to narrow AI. It might seem to do a range of things, but it certainly cannot do all the things we can do with our intelligence.

AGI refers to human-like abilities, but these abilities may exceed those of humans. In that sense, the term "superintelligence" is sometimes used (see Hoffmann 2023 on related concepts). Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" (Bostrom 2014, 26). At a general level, there are concerns that the appearance of a new kind of entity, with human-like or greater-than-human capabilities and its own aims and goals, may act against human interests (but see Szocik et al. 2020). As Müller puts it, "If high-level AI occurs, this will have a significant impact on humanity, especially on the ability of humans to control their fate on Earth" (Müller 2014, 298).

There is an ongoing discussion about the regulation of AGI (cf. Mahler 2022). Totschnig notes that the problem of a potentially harmful superintelligence is political, not technological, and that risk mitigation should therefore focus on the political decisions surrounding it (Totschnig 2019). It has been pointed out that lawyers and the law should do more to mitigate long-term risks, including those related to AI (Martínez and Winter 2023). Lawyers actively advocate mitigating existential risks through legal interventions (see Bliss 2023 on existential advocacy). In that vein, this paper examines the function of criminal law in minimizing and preventing related risks. I present a view according to which we could start with criminal law, which might provide the impulse for further changes.

This paper is structured as follows. After these introductory remarks, I briefly discuss the issue of existential threats, emphasizing those related to AGI. Then, I turn to a discussion of the governance of AGI, followed by a discussion of criminal law and AGI. The last part, before the conclusions, introduces a proposition of AGI crimes.

2 Artificial intelligence and existential risks

In this section, I focus on existential risks, emphasizing AI-related ones. Bostrom defines "existential risk" as "One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. An existential risk is one where humankind as a whole is imperiled. Existential disasters have major adverse consequences for the course of human civilization for all to come" (Bostrom 2002). There are several sources of existential risks, both natural and human-caused. Various existential threats are pointed out, such as pandemics, natural disasters, collisions with asteroids, climate crises, nuclear explosions, etc. Among the greatest existential threats is the risk related to the development of artificial intelligence (Ord 2020). Kasirzadeh notes that the discussion on AI-related existential risks focuses on a high-magnitude event in which an AI possessing problematic qualities, typically AGI, emerges (Kasirzadeh 2024). She points out that there is also another kind of risk that we should focus on, which she refers to as the "accumulative AI x-risk hypothesis." That second category of risks might be manifested through a series of smaller changes that gradually cross the threshold of a critical stage of dangerousness. To illustrate the problem, she refers to the example of the boiling frog, which might not notice a small increase in temperature. She argues that the division into two kinds of existential risks is helpful for forming the right approach to risk mitigation.

The rise of AGI poses risks of several kinds (cf. McLean et al. 2023; Salmon et al. 2023). For example, Sparrow points out the possibility of humans being enslaved by superintelligent machines (Sparrow 2023). In the review paper by McLean et al., the following categories of risks were listed: AGI removing itself from the control of human owners/managers; AGIs being given or developing unsafe goals; development of unsafe AGI; AGIs with poor ethics, morals, and values; inadequate management of AGI; and existential risks (McLean et al. 2023, 660). Vold and Harris point out that the answer to the question of how AGI can pose existential risks is not straightforward (Vold and Harris 2021). They claim that because AGI, by definition, is a general-purpose technology, there are many ways in which it could come to pose risks. In their work, they focus on three of them: the control problem (which is also the key problem in the paperclip example discussed below), the possibility of global disruption from an AI race dynamic, and the weaponization of AI (on military AI and existential risks, see also Maas et al. 2023). The existential risk related to AI does not need to be caused by an agent intentionally designed to realize such risks; it can also arise from making AI unsafe (Bliss 2023).

It seems that most people working on AI safety are concerned with threats unintended by AI's creators. The critical issue from that perspective is the so-called control problem, which refers to the (lack of) control over AGI once it is launched (cf. Torres 2019). Bostrom uses the famous hypothetical example of a "paperclip maximizer" that, while pursuing a seemingly legitimate goal, may act to use all of Earth's resources to manufacture paperclips, endangering humanity (Bostrom 2009). Russell, in the same vein, notes that "if we build machines to optimize objectives, the objectives we put into the machines have to match what we want, but we do not know how to define human objectives completely and correctly" (Russell 2019, 170). In other words, there is a problem with the alignment of AI with human values (see, e.g., Christian 2020). Alignment is not a problem only in the context of AGI; however, it is in the context of AGI that the most serious worries arise, including existential ones. Dung points out that current AI tools are misaligned, and lists the characteristics of misalignment: "misalignment can be hard to detect, predict and remedy, it does not depend on a specific architecture or training paradigm, it tends to diminish a system's usefulness and it is the default outcome of creating AI via machine learning" (Dung 2023, 1). Based on those observations, he notes that the risks of misalignment will be magnified in more capable systems and, what is more, that aligning more capable systems is more problematic than aligning less complicated ones.

In contrast to the risks outlined above, it should be added that not everyone sees the development of AI only as a source of increased existential risk. In an interesting paper, Goldstein and Kirk-Giannini consider various scenarios and, alongside negative ones, discuss the possibility that the development of large language models could reduce the probability of existential risks (Goldstein and Kirk-Giannini 2023).

It should be noted that the discussion around existential risks related to AI is heated, which is well illustrated by the title of Nature's editorial: "Stop talking about tomorrow's AI doomsday when AI poses risks today" ("Stop Talking about Tomorrow's AI Doomsday When AI Poses Risks Today" 2023). It is suggested that by focusing on the existential risks of AI, we may overlook problems such as biases, accountability, etc., which should be discussed and resolved as soon as possible. Simplifying the issue, Sætra and Danaher point out that there are two camps, "AI safety" and "AI ethics," that differ in how they evaluate risks related to AI. The AI safety group is focused on long-term risks, while the AI ethics group is more concerned with short-term risks. They argue that a more productive dialog between the groups is needed; without cooperation, we might end up in a situation in which neither short-term nor long-term risks are managed and mitigated (Sætra and Danaher 2023).

3 Governance of AGI

We are witnessing a growing trend of actions aimed at governing AI worldwide (cf. Neuwirth 2022). In particular, this trend concerns issues such as the development, use, and infrastructure of AI (Veale et al. 2023). It is especially visible within the European Union (cf. Harasimiuk and Braun 2021), whose AI Act focuses on AI risks (Novelli et al. 2023; Kaminski 2023). These examples of regulatory effort do not mean that, policy-wise, everything is already on the right track. Coeckelbergh argues for more global cooperation in reducing AI-related risks; he notes that such arguments are often made in the context of risks related to AGI. However, he believes that even if there were no risks related to AGI, there would still be arguments for global cooperation, due to the cross-border consequences of AI development (Coeckelbergh 2024).

Kolt argues that policymakers must mitigate the pressing issues related to AI that are already present, but that they should also look toward possible long-term risks, or "algorithmic black swans." He points out that the current regulatory responses to AI, particularly the EU's AI Act and regulatory responses in the US, are focused on immediate risks and overlook the long-term ones, leaving what he refers to as "governance gaps." His recommendations for filling the identified gaps include anticipating risks, using diverse regulatory strategies, and constantly evaluating the subject matter in order to react (Kolt 2023). Bostrom argues that existential risk prevention should be a global priority (Bostrom 2013). It seems that the legal solutions currently in force, or those now under legislative procedure, are not focused on governing the risks related to AGI. Dung lists three approaches to reducing catastrophic risks related to AI. The first approach is research-based and consists in intensifying research on AI alignment. The second is referred to as "AI timelines research," which is devoted to observing and predicting the important steps in the development of AI. The last approach, or group of approaches, is AI policy. Dung notes that, ultimately, catastrophic risks would require a political response (Dung 2024).

There are many ideas on how to do that. For example, Naudé and Dimitri argue that one way to reduce risks from AGI is to influence the economics of the AGI race itself. They call for coordination of work on AGI so that it is not reduced to a winner-takes-all scenario; in their opinion, such a scenario could lead to a rush and increase the chances of unwanted side effects of AGI (Naudé and Dimitri 2020). Salmi proposes controlling AGIs by including them in a democratic system in which humans and AGIs cooperate (Salmi 2023). This idea presupposes that AGI is already here, whereas there is work to be done before AGI materializes.

Scherer points out the difficulties of regulating AI-related risks ex ante (Scherer 2016). He identifies four aspects of research and development in this area: discreetness, discreteness, diffuseness, and opaqueness. Discreetness refers to the limited need for physical infrastructure, in contrast to, for example, creating nuclear bombs. However, this does not mean that building AGI could go completely unnoticed; it would require significant investments or energy consumption. Discreteness means that the various elements of AI systems can be developed without conscious coordination. Diffuseness refers to the fact that dozens of individuals from all around the globe may participate in AI projects. Opaqueness means that it is difficult for external observers to detect potentially harmful features of the developed systems.

A range of regulatory entities might be engaged in reducing risks posed by AI, including national legislatures, administrative agencies, and common law tort systems (Scherer 2016, 377). Scherer proposes creating an agency responsible for certifying AI systems; under the proposed scheme, actors deploying an uncertified system would be subject to tort liability. Ord points out the problem of cooperation between countries: "Each nation is inadequately incentivized to take actions that reduce risk and to avoid actions that produce risk, preferring instead to free-ride on others" (Ord 2020, 199). In his opinion, the response to that issue should be international cooperation.

Martínez and Winter, in their paper, examine the opinions of legal scholars on the impact of legal interventions on mitigating risks. According to their global survey, legal scholars generally believe that legal action taken today could impact future existential risks (Martínez and Winter 2021a, b). The survey covered a variety of risks (such as environmental and pandemic ones) and was not limited to AI. Scholars were asked about the branches of the law best placed to mitigate risks. Participants believed that international, constitutional, and environmental law are better equipped than criminal law for the task of preventing risks; criminal law was considered the least plausible branch of the law in terms of its ability to solve future problems.

If criminal law is mentioned, it is usually in the context of international criminal law. Ord writes: "Another promising avenue for incremental change is to explicitly prohibit and punish the deliberate or reckless imposition of unnecessary extinction risk. International law is the natural place for this, as those who impose such risk may well be national governments or heads of state, who could be effectively immune to mere national law. The idea that it may be a serious crime to impose risks on all living humans and our entire future is a natural fit with the common-sense ideas behind the law of human rights and crimes against humanity" (Ord 2020, 203). His suggestion is to work on an international treaty that would make this possible. In that context, he refers to the work of McKinnon, who argues that endangering humanity should be a transnational crime, "postericide." She points out that in the Anthropocene, humans have gained the ability to bring humanity to near extinction, and that international law is where the crime should be situated. She uses the following wording for the crime: "Intentional or reckless conduct fit to bring about the near extinction of humanity" (McKinnon 2017, 405). In her later work, McKinnon noted that international criminal law is relatively young and fragile, and predicted that promoting new, controversial crimes within its framework would meet resistance (McKinnon 2021).

Given its international character, international criminal law might be the right place for making changes related to AGI. It has been pointed out that the scope of international criminal law should be expanded to cover new types of behavior (Renzo 2012). Sloane argues that punishment in international criminal law, although it may appear to be a late intervention, may still be justified because of the expressive function of punishment, which is to condemn the perpetrator's actions. At the same time, he adds that international criminal law has limitations: it seems less effective and less legitimized than national criminal laws (Sloane 2007). Wilson sees global catastrophic risks, including risks associated with the development of AI, as a matter for international law, and points out that the lack of relevant laws constitutes a regulatory gap (Wilson 2013). He also considers the use of national legislation; however, he focuses on an international treaty as the default proposition, which, in his opinion, could influence national legislatures to take action (Wilson 2013).

In this paper, I propose using national criminal laws as one of the tools of legal response to these risks. This makes sense due to the character of the risks related to AGI. Compared to nuclear risks, for example, there might be a need to influence the behavior of individuals who could be the source of threats: in the nuclear context, only whole countries or powerful organizations could afford the infrastructure needed to build bombs, whereas with AGI there is more sense in using legal instruments addressed to individual citizens, and criminal law has precisely that character. The idea is not that this should be the only way, but that it is one of many tools that should be adopted. I propose considering national criminal law instruments in response to AI risks not as a standalone proposition, but as a supplement to the legal environment that should be created. I want to underline that I do not treat the proposition as an alternative to international criminal law, but rather as a supplement to it. There might be changes at the national level while, at the same time, negotiations over changes at the international level take place; changes at the national level might inspire or accelerate global efforts to mitigate risk at the international level.

4 AGI and national criminal law

Artificial intelligence may be relevant to criminal justice in many ways (cf. Hayward and Maas 2021; King et al. 2020; Mamak and Glanc 2022, 2023; Lagioia and Sartor 2020; Mamak 2023). One immediate association between AGI and existential risks might be as follows: if AGI refers to human likeness and causes harm, why not make AGI responsible for its actions? In his paper on AGI and criminal law, Atkinson focuses on this issue, namely crimes "committed" by AGI, and discusses the problem of responsibility for them. He discusses both human responsibility for those crimes and the possibility of making AGI responsible for its actions, analyzing the issue from the perspective of the purposes of criminal law, and he focuses on harms that would constitute crimes if committed by humans (Atkinson 2019). The problem of responsibility for harms related to AI is one of the most discussed in the literature on the ethical and legal aspects of AI (cf. Matthias 2004; Sparrow 2007; Gunkel 2020; Santoni de Sio and Mecacci 2021; Danaher 2016; Nyholm 2018; Hakli and Mäkelä 2019; Wojtczak 2022; Berber and Srećković 2023; Mamak 2022b). However, from the perspective of the problem described in this paper, deliberation about the responsibility of AGI is secondary; the focus here is on preventing or mitigating risks in the first place.

Below, I present several arguments in favor of using criminal law to mitigate the existential risks related to AGI by enacting crimes. Criminal law serves various purposes, such as preventing crimes in general, avoiding the punishment of innocents, rehabilitating offenders, etc. The role of criminal law may also be viewed as a necessary instrument for achieving and sustaining social order (cf. Hart 1958). I will focus on those aspects that, in my opinion, are crucial for discussing the role of criminal law in the context presented; there might be further arguments related to different views of the role or functions of this branch of law. The enumeration is not exhaustive, and its elements are interconnected. Under each of the topics mentioned, I focus on several aspects.

I believe that national criminal law should be considered for the following reasons: (I) national states might be obliged to criminalize the imposition of risks related to AGI; (II) national laws might have an international effect; (III) changing national law is fast and cheap; (IV) AGI crimes might have a deterrent effect; and (V) such laws might be used to punish perpetrators.

4.1 Duty to criminalize

In his paper entitled The Duty to Criminalize, Alon Harel argues that states are not only obligated to protect the fundamental rights of citizens, such as the right to life and liberty, from violations, but also to criminalize such violations (Harel 2015). According to this view, the legislature could be required to change the laws to ensure that rights are protected. Harel uses the term "under-criminalization," which could be understood as a form of the regulatory gap mentioned before. Crimes in national laws are flexible to some extent and adapt to changes related to new technologies (cf. Fairfield 2021; Mamak 2019). For example, murder is criminalized even if it is committed in a way that was impossible when the law was enacted. However, there are limits to the interpretation of crimes. One of the basic rules of criminal law in many countries is nullum crimen sine lege (cf. Mokhtar 2005). In short, this rule forbids treating behavior as criminal if it was not criminalized before it was performed. It would also be against that rule to interpret existing laws broadly to cover new threats. If there is a new way of threatening the fundamental rights of humans, and we want to use criminal law, there might be a need to change the law. As an example of such an attitude, in another paper I argued that disseminating fake medical news should be criminalized as an act that threatens public health (Mamak 2021, 2022a). So, if AGI poses an existential risk, which translates into a threat to human lives, there might be an obligation to criminalize those behaviors related to AGI that create such risks.

4.2 Domino effect

As I mentioned, one of the accepted views is that the threats from AGI are global and that, therefore, the response to them should be at the international level in the first place. National laws are treated as tools that cannot successfully mitigate risks because they are, in theory, restricted to geographically limited parts of the world. I do not claim that national laws can solve the problem globally, but I do claim that such changes can mitigate it. One of the reasons is that a change in one country might prompt change in other countries. A standard element of the legislative process is comparative analysis: how a problem is regulated in one country might be an argument for change in another. On a bigger scale, there is the so-called "Brussels Effect" (cf. Bradford 2012, 2020), the idea that regulations at the EU level impact the rest of the world. Changes in some countries might motivate other countries, or regional and international groups of countries, to change their laws. In his book about the governance of robots and AI, Chesterman discusses creating the institution of an AI ombudsman (ombudsperson) devoted to issues related to AI. As he notes, the idea of an ombudsman has Scandinavian roots, which illustrates how a legislative institution from one legal culture might inspire the adoption of similar provisions in other countries and even be adapted to issues related to the deployment of AI (Chesterman 2021, 220–22).

What is more, in a globalized world, criminalizing certain behaviors in some countries might influence the situation of citizens in other countries. It is not hard to imagine that teams working on the development of AI are multinational. It would be a rational expectation of team members that management limit their exposure to criminal charges, for example, by creating safety measures and strict procedures to prevent unwanted consequences. In that sense, one country's regulation can impact how work on AGI is done in others. Another aspect relates to traveling: criminal charges make it harder to travel and move to another country, so a person who aims to change countries needs to make sure that there are no criminal charges that could be used against them.

4.3 Enacting new crimes is fast and cheap

Changes at the international level might be necessary for an effective system of mitigating the risks related to AGI; however, enacting treaties is usually a long process. If time is a factor that needs to be considered, then changes of law at the national level have an advantage in that respect. Work on international treaties can take many years, requiring many tasks and the coordination of different collective bodies at the global and national levels. Consider the example of the Treaty on the Non-Proliferation of Nuclear Weapons (see, e.g., Bourantonis 1997; Siracusa and Warren 2018): the negotiation phase took place between 1965 and 1968, and the treaty came into force in 1970. It also needs to be considered, first, that the negotiations involved only a limited number of countries and, second, that the start of negotiations was preceded by diplomatic efforts; in that context, the official UN website dedicated to the treaty mentions historical events from the 1950s. Changes in national laws are much quicker to effect and could be counted in months rather than years. Enacting laws at the national level does not exclude further international collaboration on the same matter.

Moreover, the change is cheap from a financial point of view. Enacting crimes related to AGI does not per se require huge investments or the establishment of new bodies with headquarters and personnel. I do not claim that enacting new crimes has no cost: there are costs related to the legislative process and then costs associated with the extra work of law enforcement authorities, which would have new crimes to deal with (see, e.g., Danaher, who discusses the costs of introducing a new crime (Danaher 2017, 93–94)). Still, the legislative change is relatively low-cost and could be made to mitigate risks. Even if we somehow learned in the future that the risks related to AGI were exaggerations of the current time that were not worth the discussion, the changes in criminal law would be easy and cheap to undo.

4.4 Deterrence effect

The main aim of taking any legal measures related to the existential risks posed by AGI is to prevent or mitigate those risks. This aim is consistent with one of the primary aspects of criminal law: deterrence. In general, there are two main justifications of criminal law: backward-looking, which sees criminal law as a tool for responding to crimes that have happened, and forward-looking, which sees criminal law as a way of impacting the future and decreasing crime (see Canton 2020). In this paper, I present the idea of using national criminal law as a tool to mitigate risks related to AGI. In that sense, the change in criminal law aims to impact the behavior of individual humans so that they do not take actions that could expose them to criminal responsibility. To some extent, criminal law does deter crimes (cf. Robinson and Darley 2004). There is hope that AGI crimes would impact the behavior of those working on advanced AI. I do not claim here that such crimes would prevent the existential risks discussed, but the potential criminal responsibility might be an extra factor that must be considered while working on AGI. It needs to be mentioned that legal systems contain crimes that are intentional, but also negligent ones, where the perpetrator does not intend to cause harm. For example, causing a car accident is an unintentional crime: the driver is punished not for intending to cause the accident in which another person was harmed (if there were intent, it would not be classified as an accident), but for not respecting the safety rules they were required to observe. The main reason for enacting such crimes would be to deter people working on AGI: both those who intend to use AGI destructively and those working on AGI without ensuring safety measures.

The presented proposition contradicts the work cultures many associate with tech companies, which emphasize "moving fast and breaking things" and "asking for forgiveness, not permission." The potential criminal responsibility for creating uncontrollable AGI would force safety issues to be considered more in everyday operations. Every person working on AGI would have to ask themselves whether they have applied all of the known safety measures, read all the relevant papers, and kept up to date with safety standards and safety hazards.

4.5 Bringing justice

In the previous section, I referred to criminal law as being both backward-looking and forward-looking. In the context of AGI, it seems more sensible to think about criminal law in a forward-looking way. There might seem to be few advantages to punishing perpetrators once AGI crimes have been committed and an existential threat has materialized; in particular, if the existential threat took the form of destroying the whole of humanity, there would be no one left to punish the perpetrator. In the discussion about existential threats, it is pointed out that an existential catastrophe could also mean the survival of humanity, but in a drastically limited form. Punishing perpetrators in such circumstances would be possible, but it would be poor consolation and a weak justification for changing the law. Nevertheless, there are other ways of making sense of enacting AGI crimes in the legal system from the perspective of the backward-looking justification of criminal law. Criminal law can be used not only when something happens; it can also extend to the period that precedes the actual crime, covering attempts and preparatory behaviors (cf. Ohana 2007). A criminal attempt concerns behavior directly preceding the actual crime and is punishable in most legal systems (Duff 1997). In the case of some crimes (like terrorist attacks), preparation is punished, which could happen long before the crime being planned, and this is justified by the reduction of risks (Bock and Stark 2020). AGI crimes would provide a tool to pursue perpetrators who intend to use AGI maliciously.

5 Legislative proposition

This section will present and explain the proposed crimes. Before that, a few clarifying remarks will provide the context for understanding the proposition.

The first thing that I want to underline here is that the proposition should be treated as a starting point for further discussion, not a ready-to-implement proposal that could be enacted as it stands. With this paper, I aim to contribute to the discussion on mitigating risks related to AGI, and the proposed provisions are material for further discussion. It is easier to talk about regulations and their desired shape when we have something concrete to refer to.

The second issue that I want to mention at the outset is that the propositions cover only a fraction of the behaviors that might be considered for criminalization from the perspective of the dangers related to AGI. As mentioned earlier, this proposition is a starting point, also in terms of identifying the behaviors that might be considered inherently dangerous with respect to the emergence of AGI. There are many ways in which risks related to AGI could materialize, and also many behaviors that might lead to those risks (McLean et al. 2023). The proposal presented below focuses on the introduction of uncontrollable AGI, but there might be more problematic behaviors that could be candidates for prohibition.

The next thing I want to mention here is that I have focused on describing the behaviors that constitute risks rather than on the punishment. A crime typically consists of a description of the prohibited behavior and of the punishment that may be applied if the perpetrator breaks the law. In the version presented, the crimes look incomplete: I use the phrase "is subject to punishment" and do not specify which punishment. In a final proposition, this part should be expanded with the punishment.

One might ask why new crimes should be proposed if there is already the idea of the crime of "postericide" proposed by McKinnon (2017), mentioned earlier. Should we not implement her proposition to prevent the existential risks related to the emergence of AGI? Her proposition should be appreciated for opening the discussion about using criminal law measures against endangering humanity; however, there is a problem with it from the perspective of the general principles of criminal law. Simply put, it is too broad for the purposes of the issues discussed in this paper. One of the basic principles of criminal law, as mentioned earlier, is nullum crimen sine lege (see, e.g., Glaser 1942), which means, in general, that there is no crime without the law. On closer analysis, this principle is further specified. One of its subprinciples is nullum crimen sine lege certa, which refers to the need to define the scope of a crime clearly (see, e.g., Lahti 1995, 251; Barczak-Oplustil 2013). It has been pointed out that, to have a chance of fulfilling its functions, a crime needs to be easily understandable to those to whom the criminal law norm is addressed (Loucaides 1995). Citizens should know in advance which behaviors could lead them to criminal responsibility. This is why I believe that, in the case of various existential risks, detailed crimes specifying which acts could lead to criminal responsibility should be enacted, rather than one general crime that would cover all of them.

The last introductory comment in this part concerns the national character of criminal laws and their rules. To fully understand a crime, one needs to know not only its wording but also the broader context: the general rules of criminal law, the judicial rulings that shape the interpretation of phrases used in the criminal law system, the constitutional provisions that affect how criminal law provisions may be read, and the relevant international legal context. In short, the meaning of a provision cannot be reduced to the text of the crime. The proposition is written from the perspective of the Polish legal system, with which I am most familiar. A crime covering the same criminalized behavior could look slightly different in other legal systems. I will explain later the issues that lie "outside" the wording of the crime. Now it is time for the proposition and its explanation.

  • §1. Whoever introduces uncontrollable artificial general intelligence is subject to punishment.

  • §2. If the perpetrator of the act referred to in § 1 acts unintentionally, he or she is subject to punishment.

  • §3. Whoever makes preparations to commit the crime provided for in §1 is subject to punishment.

First, I will focus on the issues that result from the wording of the crimes, and then on the important aspects of the crimes that follow from the general rules of criminal law.

There are three paragraphs in which three crimes are captured. All the prohibited behaviors aim to prevent the "introduction" of "uncontrollable" "artificial general intelligence." All the marked terms need explanation. The least controversial, which does not mean that it is devoid of ambiguity, seems to be the word "uncontrollable." In the context of AGI, I understand it as the lack of ability to influence the direction in which the AGI operates or the lack of a possibility of turning it off. By "introduction," I mean activity that leads to uncontrollable AGI being released into the wild. This behavior is not limited to programming activities; it might also be the act of the CEO of a technology company who personally decides to launch such a tool.

The most problematic term is "artificial general intelligence." First, this term does not yet have a universally accepted definition, which should not be surprising, considering that there is no single definition of "artificial intelligence," upon which the notion of AGI is built (see, e.g., Elliott 2021). Second, and seemingly more problematic, is the epistemological problem: how would we know that the evaluated "thing" is artificial general intelligence? In his book about the moral and legal status of robots, Gunkel discusses the problem of whether machines could meet the criteria that would qualify them for the category of natural persons, and points out that "possession of the qualifying criteria for natural personhood is only (at best) 50% of what is needed" (Gunkel 2023, 85). He notes that the missing part is the epistemological problem of knowing whether the machine has met those criteria.

To be responsible for committing the crimes discussed, all those elements must be present at once: there must be a human being introducing uncontrollable artificial general intelligence. So, this crime does not ban the creation of artificial general intelligence per se, but only of its uncontrollable version. The creators need to make sure, before launching it, that it remains possible to control it.

Now I will focus on the aspects of the crimes related to the general rules of criminal law (see, e.g., Jasiński and Kremens 2019; Wróbel and Zoll 2014; Wróbel, Zontek, and Wojtaszczyk 2014). Usually, when a crime is worded, it is written in a form suggesting that punishable behavior is limited to the "successful" performance of the perpetrator. Criminal law, however, may extend to behaviors that precede the "finished" crime and punish perpetrators even if the prohibited act is not finalized. Most criminal law systems punish attempts (Duff 1997), so there is no need to state in the text of the crime that an attempt is punishable; this follows from the general rules. In the context of the crimes discussed, behavior is punishable under §1 when uncontrollable AGI is already in the wild, but also when the perpetrator(s) have reached the last stage before its introduction. To give an example: an employee of a company finds out that colleagues are working on launching AGI that has not been tested enough, and decides to inform the officials about it. If the police come and stop the as yet unfinished process, those involved could still be punished under §1.

The proposition extends punishment to an even earlier stage, called "preparation." The punishability of preparation varies across legal systems more than that of attempts; it is usually reserved for the most serious crimes, such as terrorist attacks (see, e.g., Bock and Stark 2020; Ohana 2007). Paragraph 3 refers to preparing to introduce uncontrollable artificial general intelligence.

Another important characteristic of the crimes, known from the general rules of criminal law, is that the crimes in §1 and §3 are intentional. This means, more or less, that the perpetrator intentionally makes preparations, attempts the introduction, or manages to launch uncontrollable artificial general intelligence. From that perspective, the situation is different for the crime described in §2, which concerns the unintentional introduction of uncontrollable AGI. The punishability of the crime in §2 is limited to the situation "after" the launch of uncontrollable AGI: unintentional perpetrators could be punished only if they actually introduce such AGI. It would not extend to an attempt, which must be intentional (see, e.g., Becker 1974), and there is likewise no option to punish unintentional preparations. To sum up, the proposed crimes introduce punishment for intentional preparation, attempted introduction, and introduction of uncontrollable AGI, as well as for its unintentional introduction.

There is a need to explain more about the unintentional version of the crime, which might seem worrying to people working on AGI. To be free from liability for intentional crimes, it is enough to have no intention of committing them. So, if someone is working on AGI hoping that it will be beneficial and controllable, they do not have to be afraid of responsibility for the intentional crimes. In the case of the unintentional crime, the situation is different. However, this does not mean that in all cases of the introduction of uncontrollable AGI, the people contributing to its introduction would be held responsible: to be responsible for an unintentional crime, one needs to have acted in a way that ignores safety standards. Let us look at how this is described in the Polish criminal code.

“A prohibited act is committed unintentionally if the perpetrator, without having an intent of its commission, commits it due to non-compliance with carefulness required in the given circumstances, although he has foreseen or might have foreseen the possibility of its commission.” (translation: Wróbel, Zontek, and Wojtaszczyk 2014).

Unintentional crime as defined above covers negligence and recklessness. What needs to be underlined is the phrase "commits it due to non-compliance with carefulness required in the given circumstances." In other words, to avoid responsibility for this crime, it is necessary to act carefully and adopt the known safety measures. Uncontrollable AGI might be created anyway, but if the people involved adopted all reasonable safety measures, they could not be held responsible for this crime. While the intentional crimes aim to stop malicious actors who intend to introduce uncontrollable AGI, the unintentional version of the crime seeks to prevent rush and recklessness in the AGI race.

It needs to be mentioned here that, despite the advantages discussed, criminal law is not an ideal tool for addressing AGI threats. Criminal law, as a branch of law, has issues related to penal populism, corruption, mass incarceration, and so on. There is also the fundamental issue mentioned earlier: the geographically narrow impact of domestic criminal laws. In the discussion on changing the law and choosing the right measures, all these issues need to be considered.

6 Conclusions

The discussion on how to mitigate existential risks related to AGI usually focuses on international instruments, due to the global character of the threat. In this paper, I argued that, while considering potential responses to the threat, we should also consider national criminal laws. I believe that criminal law should be considered for five reasons. First, states might be obliged to criminalize certain behaviors leading to AGI: if AGI threatens the lives of citizens, legislative bodies could have a duty to change the legal environment to mitigate the risks. Second, criminal law has a deterrent effect, and the prospect of punishment might impact the behavior of people working on issues related to AGI. Third, crimes might be instruments for punishing those who intend to work on malicious AGI and are at an early stage of their crimes. Fourth, changes in one country could prompt changes in other countries, and international action might be intensified through changes in individual countries; what is more, in a globalized world, legislative changes in one country can affect the situation of people in other countries. Fifth, changes in criminal law are relatively cheap and can be enacted fast, compared to the process of changing the international legal landscape.

This paper presented three AGI crimes focused on the launch of uncontrollable AGI. Two of them are intentional and cover the deliberate introduction of uncontrollable AGI and preparations for its introduction. The third crime is unintentional, and it aims to contribute to establishing strict safety rules around the AGI race.

We might speculate about what must happen for the discussed crimes to be incorporated into legal systems. One of the critical factors in a decision on criminalization is the emergence of the political will to change the law, which depends, at least partially, on public opinion. Recently, a considerable amount of media attention has been paid to the development of AI. We might guess that a spectacular safety breach would accelerate discussion on the legal changes necessary to mitigate the most far-reaching potential risks, which might also require changes in criminal law.