Abstract
The recent developments in applications of artificial intelligence have brought back the discussion about the risks posed by AI. Alongside the immediate risks that need to be tackled here and now, there is also the possible problem of existential threats related to Artificial General Intelligence (AGI). There is a discussion on how to mitigate those risks through appropriate regulations. One commonly accepted assumption seems to be that the problem is global and thus needs to be tackled, first of all, at the international level. In this paper, I argue that national criminal laws should also be considered one of the possible regulatory tools for mitigating the threats posed by AGI. I propose enacting AGI crimes as a complement to the variety of legal responses to existential risks, which might motivate and speed up further regulatory changes.
1 Introduction
Recent advancements in artificial intelligence, in particular the emergence of easily accessible applications of large language models in the form of chatbots, such as ChatGPT, have reignited debates about the potential consequences of AI. In their paper on the evolution of AI governance, Chesterman et al. noted that, next to the Cambridge Analytica scandal, the emergence of ChatGPT was one of the two events that significantly impacted the dynamics of AI governance (Chesterman et al. 2023). The cover of the June 2023 edition of Time magazine featured the heading "The End of Humanity" on a red background with the letters "A" and "I" highlighted, referring to the potential existential risks related to the development of AI. These risks are usually associated with Artificial General Intelligence.
What is Artificial General Intelligence (AGI)? As Shevlin et al. put it, it is AI "that is capable of solving all tasks that humans can solve" (Shevlin et al. 2019, 1). The word general refers to a characteristic of human intelligence that allows us to do many things thanks to our intellect. Usually, this term is contrasted with narrow artificial intelligence, which refers to AI tools applicable to specific domains and capable of performing tasks such as recognizing images or playing chess (Zerilli et al. 2021, xviii). In this division, "generative AI," like ChatGPT (Stokel-Walker and Van Noorden 2023; Ponce Del Castillo 2023), belongs to narrow AI: it might seem to do a range of things, but it certainly cannot do all the things we can do with our intelligence.
AGI refers to human-like abilities, but these abilities may also exceed those of humans; in that sense, the term "superintelligence" is sometimes used (see Hoffmann 2023 on related concepts). Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" (Bostrom 2014, 26). At a general level, there are concerns that the appearance of a new kind of entity with human-like or greater-than-human capabilities, pursuing its own aims and goals, may conflict with human interests (but see Szocik et al. 2020). As Müller puts it, "If high-level AI occurs, this will have a significant impact on humanity, especially on the ability of humans to control their fate on Earth" (Müller 2014, 298).
There is an ongoing discussion about the regulation of AGI (cf. Mahler 2022). Totschnig notes that the problem of a potentially wrongful superintelligence is political, not technological, and that the mitigation of risks should focus on the political decisions around it (Totschnig 2019). It has been pointed out that lawyers and the law should do more to mitigate long-term risks, including those related to AI (Martínez and Winter 2023). Lawyers actively advocate mitigating existential risks through legal interventions (see Bliss 2023 on existential advocacy). In that vein, this paper examines the function of criminal law in minimizing and preventing the related risks. I present a view according to which we could start with criminal law, which might provide the impulse for further changes.
This paper is structured as follows. After these introductory remarks, I briefly discuss the issue of existential threats, emphasizing threats related to AGI. Then, I turn to the governance of AGI. After that, I discuss criminal law and AGI. The last part, before the conclusions, introduces a proposition of AGI crimes.
2 Artificial intelligence and existential risks
In this section, I focus on existential risks, emphasizing AI-related ones. Bostrom defines "existential risk" as "one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. An existential risk is one where humankind as a whole is imperiled. Existential disasters have major adverse consequences for the course of human civilization for all time to come" (Bostrom 2002). There are several sources of existential risks, both natural and human-caused. Various existential threats are pointed out, such as pandemics, natural disasters, collisions with asteroids, climate crises, nuclear explosions, etc. Among the greatest existential threats is the risk related to the development of artificial intelligence (Ord 2020). Kasirzadeh notes that the discussion on AI-related existential risks focuses on scenarios in which a single high-magnitude event occurs involving AI, such as AGI, with problematic qualities (Kasirzadeh 2024). He points out that there is also another kind of risk that we should focus on, which he refers to as the "accumulative AI x-risk hypothesis." That second category of risks might be manifested through a series of smaller changes that gradually cross the threshold of a critical stage of dangerousness. To illustrate the problem, he refers to the example of the boiling frog, which might not notice a gradual increase in temperature. He argues that the division into these two kinds of existential risks is helpful for forming the right approach to mitigating them.
The rise of AGI poses risks of several kinds (cf. McLean et al. 2023; Salmon et al. 2023). For example, Sparrow points out the possibility of humans being enslaved by superintelligent machines (Sparrow 2023). In their review paper, McLean et al. list the following categories of risks: AGI removing itself from the control of human owners/managers; AGIs being given or developing unsafe goals; development of unsafe AGI; AGIs with poor ethics, morals, and values; inadequate management of AGI; and existential risks (McLean et al. 2023, 660). Vold and Harris point out that the answer to the question of how AGI can pose existential risks is not straightforward (Vold and Harris 2021). They claim that because AGI is, by definition, a general-purpose technology, there are many ways in which it could pose risks. In their work, they focus on three: the control problem, discussed further below, which is also the key problem in the paperclip example, and two others, namely the possibility of global disruption from an AI race dynamic and the weaponization of AI (on military AI and existential risks, see also Maas et al. 2023). The existential risk related to AI does not need to stem from an agent intentionally designed to realize such risks; it can also arise from making AI unsafe (Bliss 2023).
It seems that most people working on AI safety are concerned with threats unintended by AI's creators. The critical issue from that perspective is the so-called control problem, which refers to the (lack of) control over AGI once it is launched (cf. Torres 2019). Bostrom uses a famous hypothetical example of a "paperclip maximizer" that, while having a legitimate goal, may act to use all of Earth's resources to manufacture paperclips, endangering humanity (Bostrom 2009). Russell, in the same vein, notes that "if we build machines to optimize objectives, the objectives we put into the machines have to match what we want, but we do not know how to define human objectives completely and correctly" (Russell 2019, 170). In other words, there is a problem with the alignment of AI with human values (see, e.g., Christian 2020). Alignment is not a problem only in the context of AGI; however, it is in the context of AGI that the most serious worries arise, including existential ones. Dung points out that current AI tools are misaligned, and lists the characteristics of misalignment: "misalignment can be hard to detect, predict and remedy, it does not depend on a specific architecture or training paradigm, it tends to diminish a system's usefulness and it is the default outcome of creating AI via machine learning" (Dung 2023, 1). Based on those observations, he notes that the risks of misalignment will be magnified in more capable systems and, what is more, that aligning more capable systems is more problematic than aligning less complicated ones.
In contrast to the risks outlined above, it should be added that not everyone sees the development of AI as only increasing existential risks. In an interesting paper, Goldstein and Kirk-Giannini consider various scenarios and, next to negative ones, also discuss the possibility that the development of large language models reduces the probability of existential risks (Goldstein and Kirk-Giannini 2023).
It should be noted that the discussion around existential risks related to AI is heated, which is well illustrated by the title of Nature's editorial: "Stop talking about tomorrow's AI doomsday when AI poses risks today" ("Stop Talking about Tomorrow's AI Doomsday When AI Poses Risks Today" 2023). It is suggested that by focusing on the existential risks of AI, we may overlook problems such as biases, accountability, etc., which should be discussed and resolved as soon as possible. To simplify the issue, Sætra and Danaher point out that there are two camps, "AI safety" and "AI ethics," which differ in how they evaluate risks related to AI. The AI safety group is focused on long-term risks, while the AI ethics group is more concerned with short-term risks. They argue that a more productive dialog between the groups is needed; without cooperation, we might end up in a situation in which neither short-term nor long-term risks are managed and mitigated (Sætra and Danaher 2023).
3 Governance of AGI
We are witnessing a growing trend of actions aimed at governing AI worldwide (cf. Neuwirth 2022). In particular, this trend concerns issues such as the development, use, and infrastructure of AI (Veale et al. 2023). It is especially visible within the European Union (cf. Harasimiuk and Braun 2021), whose AI Act focuses on AI risks (Novelli et al. 2023; Kaminski 2023). These examples of regulatory efforts do not mean that, policy-wise, everything is already on the right track. Coeckelbergh argues for more global cooperation in reducing risks related to AI; he notes that this argument is often made in the context of risks related to AGI. However, he believes that even if there were no risks related to AGI, there would still be arguments for global cooperation, due to the cross-border consequences of AI development (Coeckelbergh 2024).
Kolt argues that policymakers must mitigate the pressing issues related to AI that are already present, but that they should also look toward possible long-term risks: "algorithmic black swans." He points out that the current regulatory responses to AI, particularly the EU AI Act and the regulatory responses in the US, are focused on immediate risks and overlook the long-term ones, leaving what he refers to as "governance gaps." His recommendations for filling the identified gaps include anticipating risks, using diverse regulatory strategies, and constantly re-evaluating the subject matter in order to react (Kolt 2023). Bostrom argues that existential risk prevention should be a global priority (Bostrom 2013). It seems that the legal solutions currently in force, or those now under legislative procedure, are not focused on governing the risks related to AGI. Dung lists three approaches to reducing catastrophic risks related to AI. The first approach is research-based and consists in intensifying research on AI alignment. The second is referred to as "AI timelines research," which is devoted to observing and predicting the important steps in the development of AI. The last approach, or group of approaches, is AI policy. Dung notes that ultimately the catastrophic risks would require a political response (Dung 2024).
There are many ideas on how to do that. For example, Naudé and Dimitri argue that one way to reduce risks from AGI is to influence the economics of the AGI race itself. They call for coordination of the work on AGI so that it is not reduced to a winner-takes-all scenario. In their opinion, such a scenario could lead to a rush and increase the chances of unwanted side effects of AGI (Naudé and Dimitri 2020). Salmi proposes controlling AGIs by including them in a democratic system in which humans and AGIs cooperate (Salmi 2023). This idea assumes that AGI is already here, whereas there is work to do before AGI materializes.
Scherer points out the difficulties with ex-ante regulation of risks related to AI (Scherer 2016). He identifies four features of research and development in this area: discreetness, discreteness, diffuseness, and opaqueness. Discreetness refers to the limited need for physical infrastructure, in contrast to, for example, creating nuclear bombs. However, this does not mean that building AGI could be completely unnoticeable; it would require significant investments or energy consumption. Discreteness means that the various elements of AI systems could be developed without conscious coordination. Diffuseness refers to the fact that dozens of individuals from all around the globe may participate in AI projects. Opaqueness means that it is difficult for external observers to detect potentially harmful features of the developed systems.
A range of regulatory entities might be engaged in reducing risks posed by AI, including national legislatures, administrative agencies, and common law tort systems (Scherer 2016, 377). Scherer proposes creating an agency responsible for certifying AI systems; under the proposed scheme, actors deploying an uncertified system would be liable in tort. Ord points out the problem of cooperation between countries: "Each nation is inadequately incentivized to take actions that reduce risk and to avoid actions that produce risk, preferring instead to free-ride on others" (Ord 2020, 199). In his opinion, the response to that issue should be international cooperation.
Martínez and Winter examine the opinions of legal scholars on the impact of legal interventions on mitigating risks. According to their global survey, legal scholars generally believe that legal action taken today could impact future existential risks (Martínez and Winter 2021a, b). The survey covered a variety of risks (such as environmental and pandemic ones) and was not limited to AI. Scholars were asked about the branches of law that would be best placed to mitigate risks. Participants believed that international, constitutional, and environmental law are better equipped than criminal law for the task of preventing risks; criminal law was considered the least plausible branch of law in terms of its ability to solve future problems.
If criminal law is mentioned, it is usually in the context of international criminal law. Ord writes: "Another promising avenue for incremental change is to explicitly prohibit and punish the deliberate or reckless imposition of unnecessary extinction risk. International law is the natural place for this, as those who impose such risk may well be national governments or heads of state, who could be effectively immune to mere national law. The idea that it may be a serious crime to impose risks on all living humans and our entire future is a natural fit with the common-sense ideas behind the law of human rights and crimes against humanity" (Ord 2020, 203). His suggestion is to work on an international treaty that would make this possible. In that context, he refers to the work of McKinnon, who argues that endangering humanity should be a transnational crime: "postericide." She points out that in the Anthropocene, humans have gained the ability to bring humanity to the state of extinction, and international law is where such a crime should be situated. She uses the following wording for the crime: "Intentional or reckless conduct fit to bring about the near extinction of humanity" (McKinnon 2017, 405). In her later work, McKinnon noted that international criminal law is relatively young and fragile, and she predicted that promoting controversial new crimes within its framework would be resisted (McKinnon 2021).
Given its international character, international criminal law might seem the natural place for making changes related to AGI. It has been pointed out that the scope of international criminal law should be expanded to cover new types of behavior (Renzo 2012). Sloane argues that punishment in international criminal law, although it may appear to be a late intervention, may still be justified because of the expressive function of punishment, which is to condemn the perpetrator's actions. At the same time, he adds that international criminal law has limitations: it seems less effective and less legitimized than national criminal laws (Sloane 2007). Wilson sees global catastrophic risks, including risks associated with the development of AI, as a matter for international law, and points out that the lack of relevant laws is a regulatory gap (Wilson 2013). He also proposes the use of national legislation; however, he focuses on an international treaty as the default proposition that, in his opinion, could prompt national legislatures to take action (Wilson 2013).
In this paper, I propose using national criminal laws as one of the tools of the legal response to such risks. National criminal law is well suited to the character of the risks related to AGI. In the nuclear context, only whole countries or powerful organizations could afford the infrastructure needed to create bombs; in the case of AGI, by contrast, there might be a need to influence the behavior of individuals who could be the source of threats, so legal instruments addressed to individual citizens make more sense, and criminal law has exactly that character. The idea is not that it should be the only way, but that it is one of many tools that should be adopted. I propose considering national criminal law instruments in response to AI risks not as a standalone proposition, but as a supplement to the legal environment that should be created. I want to underline that I do not treat the proposition as an alternative to international criminal law, but rather as a supplementation of it. There might be changes at the national level while, at the same time, negotiations are ongoing over changes at the international level. Changes at the national level might inspire or accelerate global efforts to mitigate risks at the international level.
4 AGI and national criminal law
Artificial intelligence may be relevant to criminal justice in many ways (cf. Hayward and Maas 2021; King et al. 2020; Mamak and Glanc 2022, 2023; Lagioia and Sartor 2020; Mamak 2023). One immediate association between AGI and existential risks might be as follows: if AGI refers to human likeness and causes harm, why not make AGI responsible for its actions? In his paper on AGI and criminal law, Atkinson focuses on this issue, that is, on crimes "committed" by AGI, and discusses the problem of responsibility for them. He considers both human responsibility for those crimes and making AGI responsible for its actions, analyzing the question from the perspective of the purposes of criminal law; he focuses on harms that would constitute crimes if committed by humans (Atkinson 2019). The problem of responsibility for harms related to AI is one of the most discussed in the literature on the ethical and legal aspects of AI (cf. Matthias 2004; Sparrow 2007; Gunkel 2020; Santoni de Sio and Mecacci 2021; Danaher 2016; Nyholm 2018; Hakli and Mäkelä 2019; Wojtczak 2022; Berber and Srećković 2023; Mamak 2022b). However, from the perspective of the problem described in this paper, deliberation about the responsibility of AGI is secondary; the focus here is on preventing or mitigating risks in the first place.
Below, I present several arguments in favor of using criminal law, through the enactment of crimes, to mitigate the existential threats related to AGI. Criminal law serves various purposes, such as preventing crimes in general, avoiding the punishment of innocents, rehabilitating offenders, etc. The role of criminal law may also be viewed as a necessary instrument for achieving and sustaining social order (cf. Hart 1958). I will focus on those aspects that, in my opinion, are crucial for discussing the role of criminal law in the presented context. There might be more arguments related to different views of the role or functions of this branch of law; the enumeration below is not exhaustive, and its elements are interconnected. Within each topic, I focus on several aspects.
I believe that national criminal law should be considered for the following reasons: (I) nation states might be obliged to criminalize the imposition of risks related to AGI; (II) national laws might have an international effect; (III) changing national law is fast and cheap; (IV) AGI crimes might have a deterrent effect; (V) such laws might be used to punish perpetrators.
4.1 Duty to criminalize
In his paper entitled The Duty to Criminalize, Alon Harel argues that states are not only obligated to protect the fundamental rights of citizens, like the right to life and liberty, from violations, but also to criminalize such violations (Harel 2015). According to this view, the legislature could be required to change the laws to ensure that rights are protected. Harel uses the term "under-criminalization," which could be understood as a form of the regulatory gap mentioned before. Crimes in national laws are flexible to some extent and adapt to changes related to new technologies (cf. Fairfield 2021; Mamak 2019). For example, murder is criminalized even if it is committed in a way that was impossible when the laws were enacted. However, there are limits to the interpretation of crimes. One of the basic rules of criminal law in many countries is nullum crimen sine lege (cf. Mokhtar 2005). In short, this rule forbids treating as criminal any behavior that was not criminalized before the action was taken. It would also be against that rule to interpret existing laws broadly to cover new threats. If there is a new way of threatening the fundamental rights of humans, and we want to use criminal law, there might be a need to change the law. As an example of such an attitude, in another paper, I argued that disseminating fake medical news should be criminalized as an act that threatens public health (Mamak 2021; 2022a). So, if AGI poses an existential risk, which translates into a threat to human lives, there might be an obligation to criminalize those behaviors related to AGI that pose such risks.
4.2 Domino effect
As I mentioned, one of the accepted views is that the threats from AGI are global, and therefore the response to them should be, first of all, at the international level. National laws are treated as tools that cannot successfully mitigate risks because they are, in theory, restricted to geographically limited parts of the world. I do not claim that national laws can solve the problem globally, but I do claim that such changes can mitigate the problem. One of the reasons is that a change in one country might prompt changes in other countries. One of the standard elements of the legislative process is comparative analysis: how a problem is regulated in one country might be an argument for a change in another. On a bigger scale, there is the so-called "Brussels Effect" (cf. Bradford 2012; 2020), the idea that regulations at the EU level impact the rest of the world. Changes in some countries might motivate other countries, or regional and international groups of countries, to change their laws. In his book on the governance of robots and AI, Chesterman discusses creating the institution of an AI ombudsman (ombudsperson) devoted to issues related to AI. As he notes, the idea of an ombudsman has Scandinavian roots, which illustrates that a legislative institution from one legal culture might inspire the adoption of similar provisions in other countries and even be adapted to issues related to the deployment of AI (Chesterman 2021, 220–22).
What is more, in a globalized world, criminalizing certain behaviors in some countries might influence the situation of citizens in other countries. It is not hard to imagine that teams working on the development of AI are multinational. Team members could rationally expect management to limit their exposure to criminal charges, for example, by creating safety measures and strict procedures to prevent unwanted consequences. In that sense, one country's regulation can impact how work on AGI is done in others. Another aspect relates to traveling: facing criminal charges makes it harder to travel and move to another country. A person who aims to change countries needs to make sure that there are no criminal charges that could be used against them.
4.3 Enacting new crimes is fast and cheap
Changes at the international level might be necessary for an effective system of mitigating the risks related to AGI; however, enacting treaties is usually a long process. If time is a factor that needs to be considered, then changes of law at the national level have an advantage in that respect. Work on international treaties can take many years, requiring many tasks and the coordination of different collective bodies at the global and national levels. Consider the example of the Treaty on the Non-Proliferation of Nuclear Weapons (see, e.g., Bourantonis 1997; Siracusa and Warren 2018): the negotiation phase took place between 1965 and 1968, and the treaty came into force in 1970. Moreover, the negotiations involved only a limited number of countries, and their start was preceded by diplomatic efforts; in its historical overview, the official UN website dedicated to the treaty mentions events from the 1950s. Changes in national laws are much quicker to effect and could be counted in months rather than years. Enacting laws at the national level does not exclude further international collaboration on the same matter.
Moreover, the change is cheap from a financial point of view. Enacting crimes related to AGI does not per se require huge investments or the establishment of new bodies with headquarters and staff. I do not claim that enacting new crimes has no cost: there are costs related to the legislative process and costs associated with the extra work of law enforcement authorities, which would have new crimes to deal with (see, e.g., Danaher, who discusses the costs of introducing a new crime (Danaher 2017, 93–94)). Still, the legislative change is relatively low-cost and could be made to mitigate risks. Even if it turned out in the future that the risks related to AGI were exaggerations of the current time, not worth the discussion, the changes in criminal law would be easy and cheap to undo.
4.4 Deterrence effect
The main aim of taking any legal measures related to existential risks posed by AGI is to prevent or mitigate those risks. This aim is consistent with one of the primary aspects of criminal law: deterrence. In general, there are two main justifications of criminal law: backward-looking, which sees criminal law as a tool for responding to a crime that has happened, and forward-looking, which sees criminal law as a way of impacting the future and decreasing crime (see Canton 2020). In this paper, I present the idea of using national criminal law as a tool to mitigate risks related to AGI. In that sense, the change in criminal law aims to impact the behavior of individual humans so that they do not take actions that could expose them to criminal responsibility. To some extent, criminal law does deter crimes (cf. Robinson and Darley 2004). There is hope that AGI crimes would impact the behavior of those working on advanced AI. I do not claim that such crimes would prevent the existential risks discussed, but the potential criminal responsibility might be an extra factor that must be considered while working on AGI. It needs to be mentioned that legal systems contain not only intentional crimes but also negligent ones, where the perpetrator does not intend to cause harm. For example, causing a car accident is an unintentional crime: the driver is punished not for intending to cause the accident in which another person was harmed (if there were intent, it would not be classified as an accident), but for failing to respect the required safety rules. The main reason for enacting AGI crimes would be to deter both groups of people working on AGI: those who intend to use AGI destructively and those working on AGI without ensuring safety measures.
The presented proposition contradicts the work cultures many associate with tech companies, which emphasize "move fast and break things" and "asking for forgiveness, not permission." Potential criminal responsibility for creating uncontrollable AGI would force safety issues to be considered more in everyday operations. Every person working on AGI would have to ask themselves whether they have applied all known safety measures, read all relevant papers, and kept up to date with safety standards and hazards.
4.5 Bringing justice
In the previous section, I referred to criminal justice as being both backward-looking and forward-looking. In the context of AGI, it would seem more sensible to think about criminal law in a forward-looking way. There might seem to be few advantages to punishing perpetrators once AGI crimes are committed and an existential threat materializes; in particular, if the existential threat took the form of destroying the whole of humanity, there would be no one to punish the perpetrator. In the discussion about existential threats, it is pointed out that an existential threat could also mean the survival of humanity, but in a limited form. Punishing perpetrators in such circumstances would be possible, but it would be poor consolation and a weak justification for changing the law. Nevertheless, there are other ways of making sense of enacting AGI crimes in the legal system from the perspective of the backward-looking justification of criminal law. Criminal law is not only used when something happens; it also extends to the periods that precede the actual crime, covering attempts and preparatory behaviors (cf. Ohana 2007). A criminal attempt concerns behavior directly preceding the actual crime and is punishable in most legal systems (Duff 1997). In the case of some crimes (like terrorist attacks), preparation is punished, which could happen long before the planned crime, and this is justified by the reduction of risks (Bock and Stark 2020). AGI crimes would provide a tool to pursue perpetrators who intend to use AGI maliciously.
5 Legislative proposition
This section will present and explain the proposed crimes. Before that, a few clarifying remarks will provide the context for understanding the proposition.
The first thing that I want to underline here is that the proposal should be treated as a starting point for further discussion, not a ready-to-implement text that can be transplanted into law. With this paper, I aim to contribute to the discussion on mitigating risks related to AGI, and the proposed provisions are material for that discussion. It is easier to talk about regulations and their desired shape when we have a concrete text to refer to.
The second issue that I want to mention at the outset is that the propositions cover only a fraction of the behaviors that might be considered for criminalization from the perspective of the dangers related to AGI. As mentioned earlier, this proposition is a starting point, also in terms of identifying the behaviors that might be considered inherently dangerous from the perspective of the emergence of AGI. There are many ways in which risks related to AGI could materialize, and many behaviors that might lead to those risks (McLean et al. 2023). The proposal presented below focuses on the creation of uncontrollable AGI, but there might be more problematic behaviors that could be candidates for prohibition.
The next thing I want to mention here is that I have focused on describing the behaviors that constitute risks rather than on the punishment. A criminal provision typically consists of a description of the prohibited behavior and of the punishment to be applied if the perpetrator breaks that law. In the version presented here, the crimes look incomplete: I use the phrase "is subject to punishment" without specifying which one. In a final proposition, this part should be supplemented with concrete penalties.
One might ask: why propose new crimes if there is already the idea of the crime of "postericide" proposed by McKinnon (2017), which was mentioned earlier? Should we not implement her proposition to prevent the existential risks related to the emergence of AGI? Her proposition should be appreciated for opening the discussion about using criminal law measures to mitigate endangering humanity; however, there is a problem with it from the perspective of the general principles of criminal law. Simply put, it is too broad for the issues discussed in this paper. One of the basic principles of criminal law, as mentioned earlier, is nullum crimen sine lege (see, e.g., Glaser 1942), which means, in general, that there is no crime without the law. In more detailed analyses, this principle is further specified. One of its subprinciples is nullum crimen sine lege certa, which refers to the need to define the scope of a crime clearly (see, e.g., Lahti 1995, 251; Barczak-Oplustil 2013). It has been pointed out that, to have a chance of fulfilling its functions, a crime needs to be easily understandable by those to whom the criminal law norm is addressed (Loucaides 1995). Citizens should know in advance which behaviors could lead them to criminal responsibility. This is why I believe that, in the case of various existential risks, detailed crimes should specify which acts could lead to criminal responsibility, rather than one general crime covering all of them.
The last introductory comment in this part concerns the national character of criminal laws and their rules. To fully understand a crime, one needs to know not only the wording of the crime itself, but also the broader context, which consists of the general rules of criminal law, the judicial rulings that shape the interpretation of phrases used in the criminal law system, the constitutional provisions that affect how criminal law provisions may be read, and the relevant international legal context. In short, the meaning of a provision cannot be reduced to the text of the crime. The proposition is written from the perspective of the Polish legal system, with which I am most familiar. A crime covering the same criminalized behavior could look slightly different in other legal systems. I will explain later the issues that lie "outside" the wording of the crime. Now it is time for the proposition and its explanation.
-
§1. Whoever introduces uncontrollable artificial general intelligence is subject to punishment.
-
§2. If the perpetrator of the act referred to in § 1 acts unintentionally, he or she is subject to punishment.
-
§3. Whoever makes preparations to commit the crime provided for in §1 is subject to punishment.
First, I will focus on the issues that result from the wording of the crimes and then on the important aspects of crimes that result from the general rules of the criminal law.
There are three paragraphs capturing three crimes. All the behaviors mentioned aim to prevent the "introduction" of "uncontrollable" "artificial general intelligence". All the marked terms need explanation. The least controversial, which does not mean that it is devoid of ambiguity, seems to be the word "uncontrollable". In the context of AGI, I understand it as the lack of a possibility to influence the direction in which the AGI operates, or the lack of a possibility to turn it off. By "introduction," I mean activity that releases uncontrollable AGI into the wild. This behavior is not limited to programming activities; it might also be, for example, the decision of the CEO of a technology company who personally chooses to launch such a tool.
The most problematic term is "artificial general intelligence." First, this term does not yet have a universally accepted definition, which should not be surprising, considering that there is no single definition of "artificial intelligence," upon which the notion of AGI is built (see, e.g., Elliott 2021). Second, what seems more problematic is the epistemological question: how would we know that the evaluated "thing" is artificial general intelligence? In his book about the moral and legal status of robots, Gunkel discusses the problem of whether machines could meet the criteria that would qualify them for the category of natural persons, and points out that "possession of the qualifying criteria for natural personhood is only (at best) 50% of what is needed" (Gunkel 2023, 85). He notes that the missing part is related to the epistemological problem of knowing whether the machine has met those criteria.
To be responsible for committing the crimes discussed, all these elements must be present at once: there needs to be a human being introducing uncontrollable artificial general intelligence. This crime does not ban the creation of artificial general intelligence per se, but only its uncontrollable version. The creators need to make sure, before launching it, that it is possible to control it.
Now I will focus on the aspects of the crimes that result from the general rules of criminal law (see, e.g., Jasiński and Kremens 2019; Wróbel and Zoll 2014; Wróbel, Zontek, and Wojtaszczyk 2014). Usually, when a crime is worded, it is written in a form suggesting that punishable behavior is limited to the "successful" performance of the perpetrator. Criminal law, however, may extend to behaviors that precede the "finished" crime and punish perpetrators even if the prohibited act is not finalized. Most criminal law systems punish attempts (Duff 1997), so there is no need to state in the text of the crime that the attempt is punishable; it follows from the general rules. In the context of the crimes discussed here, §1 applies not only when uncontrollable AGI is already in the wild, but also when the perpetrator(s) have reached the last stage before its introduction. To give an example, suppose an employee of a company finds out that colleagues are working on launching an AGI that has not been tested sufficiently, and decides to inform the authorities. If the police arrive and stop the as yet unfinished process, those involved could still be punished under §1.
The proposal extends punishability to an even earlier stage, called "preparation". The punishability of preparation varies across legal systems more than that of attempts; it is usually reserved for the most serious crimes, such as terrorist attacks (see, e.g., Bock and Stark 2020; Ohana 2007). Paragraph 3 refers to preparing to introduce uncontrollable artificial general intelligence.
The other important characteristic of the crimes that follows from the general rules of criminal law is that the crimes in §1 and §3 are intentional. It means, more or less, that the perpetrator intentionally makes preparations, attempts the introduction, or manages to launch uncontrollable artificial general intelligence. The situation is different with the crime described in §2, which concerns the unintentional introduction of uncontrollable AGI. The punishability of the crime in §2 is limited to the situation "after" the launch of uncontrollable AGI: unintentional perpetrators could be punished only if they actually introduce such AGI. It would not extend to an attempt, which must be intentional (see, e.g., Becker 1974), and there is likewise no option to punish unintentional preparations. To sum up, the proposed crimes introduce the punishment of intentional preparations, attempts at introduction, and the introduction of uncontrollable AGI, as well as the unintentional introduction of uncontrollable AGI.
More needs to be explained about the unintentional version of the crime, which might seem worrying to people working on AGI. To be free from liability for intentional crimes, it is enough to have no intention to commit them. So, if someone is working on AGI hoping that it will be beneficial and controllable, they need not fear responsibility. In the case of an unintentional crime, the situation is different. However, it does not mean that in every case of the introduction of uncontrollable AGI, the people contributing to it would be held responsible. To be responsible for an unintentional crime, one needs to have acted in a way that disregards safety standards. Let us look at how this is described in the Polish criminal code.
“A prohibited act is committed unintentionally if the perpetrator, without having an intent of its commission, commits it due to non-compliance with carefulness required in the given circumstances, although he has foreseen or might have foreseen the possibility of its commission.” (translation: Wróbel, Zontek, and Wojtaszczyk 2014).
The unintentional crime in the provision quoted above covers negligence and recklessness. What needs to be underlined is "commits it due to non-compliance with carefulness required in the given circumstances." In other words, to not be responsible for this crime, it is necessary to act carefully and adopt the known safety measures. Uncontrollable AGI could be created anyway, but if the people involved adopted all reasonable safety measures, they could not be held responsible for this crime. While the intentional crimes aim to stop malicious actors who intend to introduce uncontrollable AGI, the unintentional version seeks to prevent haste and recklessness in the AGI race.
It needs to be mentioned here that despite the advantages mentioned, criminal law is not an ideal tool for addressing AGI threats. Criminal law, as a branch of law, has issues related to penal populism, corruption, mass incarceration, and so on. There is also a fundamental issue mentioned earlier, which is the geographically narrow impact of domestic criminal laws. In the discussion on the change of the law and choosing the right measures, all the issues need to be considered.
6 Conclusions
The discussion on how to mitigate existential risks related to AGI usually focuses on international instruments, due to the global character of the threat. In this paper, I argued that while considering the potential responses to the threat, we should also consider national criminal laws. I believe that criminal law should be considered for five reasons. First, states might be obliged to criminalize certain behaviors leading to AGI. If AGI threatens the lives of citizens, legislative bodies could have a duty to change the legal environment to mitigate risks. Second, criminal law has a deterrent effect, and the prospect of punishment might influence the behavior of people working on issues related to AGI. Third, crimes might be instruments for punishing those who intend to work on malicious AGI and are at an early stage of their crimes. Fourth, changes in one country could prompt changes in other countries, and international momentum might be built up through changes in individual countries. What is more, in a globalized world, legislative changes in one country could affect the situation of people in other countries. Fifth, changes in criminal law are relatively cheap and can be enacted quickly compared with the process of changing the international legal landscape.
This paper presents three AGI crimes focused on the launch of uncontrollable AGI. Two of them are intentional and cover the deliberate introduction of uncontrollable AGI and preparations for its introduction. The third crime is unintentional, and it aims to contribute to establishing strict safety rules in the AGI race.
We might speculate about what must happen for the discussed crimes to be incorporated into legal systems. One of the critical factors in a decision on criminalization is the emergence of the political will to change the law, which depends, at least partially, on public opinion. Recently, considerable media attention has been paid to the development of AI. We might guess that a spectacular safety breach could accelerate the discussion on the legal changes necessary to mitigate the most far-reaching potential risks, which might also require changes in criminal law.
Data availability
No data are associated with this paper.
References
Atkinson D (2019) Criminal liability and artificial general intelligence. J Robotics Artif Intell Law 2(5):333–350
Barczak-Oplustil A (2013) The rule of nullum crimen sine lege. Selected issues. J Crim Law Penal Stud 17(3):5–28
Becker LC (1974) Criminal attempt and the theory of the law of crimes. Philos Public Aff 3(3):262–294
Berber A, Srećković S (2023) When something goes wrong: who is responsible for errors in ML decision-making? AI Soc. https://doi.org/10.1007/s00146-023-01640-1
Bliss J (2023) Existential advocacy. Georgetown J Legal Ethics Forthcom. https://doi.org/10.2139/ssrn.4217687
Bock S, Stark F (2020) Preparatory offences. In: Heinze A, Duff A, Roberts J, Ambos K, Weigend T (eds) In Core Concepts in Criminal Law and Criminal Justice, vol I. Cambridge University Press, pp 54–93
Bostrom N (2013) Existential risk prevention as global priority. Global Pol 4(1):15–31. https://doi.org/10.1111/1758-5899.12002
Bostrom N (2002) Existential risks: analyzing human extinction scenarios and related hazards. J Evol Technol 9. https://ora.ox.ac.uk/objects/uuid:827452c3-fcba-41b8-86b0-407293e6617c
Bostrom N (2009) Ethical Issues in Advanced Artificial Intelligence. In: Schneider S (ed) Science Fiction and Philosophy: From Time Travel to Superintelligence. Wiley, pp 69–75
Bostrom N (2014) Superintelligence: Paths, Dangers, Strategies. Oxford University Press
Bradford A (2020) The Brussels Effect: How the European Union Rules the World. Oxford University Press
Bourantonis D (1997) The negotiation of the non-proliferation treaty, 1965–1968: a note. Int Hist Rev 19(2):347–357. https://doi.org/10.1080/07075332.1997.9640788
Bradford A (2012) The brussels effect. Northwest Univ Law Rev 107(1):1–68
Canton R (2020) Theories of punishment. In: Focquaert F, Shaw E, Waller BN (eds) The Routledge Handbook of the Philosophy and Science of Punishment. Routledge, pp 89–100
Chesterman S, Gao Y, Hahn J, Valerie S (2023) The evolution of AI governance. TechRxiv. https://doi.org/10.36227/techrxiv.24681063.v1
Chesterman S (2021) We, the Robots?: Regulating Artificial Intelligence and the Limits of the Law. Cambridge University Press
Christian B (2020) The alignment problem: machine learning and human values, 1st edn. W. W. Norton & Company, New York, NY
Coeckelbergh M (2024) The case for global governance of AI: arguments, counter-arguments, and challenges ahead. AI Soc. https://doi.org/10.1007/s00146-024-01949-5
Danaher J (2016) Robots, law and the retribution gap. Ethics Inf Technol 18(4):299–309. https://doi.org/10.1007/s10676-016-9403-3
Danaher J (2017) Robotic rape and robotic child sexual abuse: should they be criminalised? Crim Law Philos 11(1):71–95. https://doi.org/10.1007/s11572-014-9362-x
Santoni de Sio F, Mecacci G (2021) Four responsibility gaps with artificial intelligence: why they matter and how to address them. Philos Technol 34(4):1057–1084. https://doi.org/10.1007/s13347-021-00450-x
Ponce Del Castillo A (2023) Generative AI, generating precariousness for workers? AI Soc. https://doi.org/10.1007/s00146-023-01719-9
Duff RA (1997) Criminal Attempts. Oxford Monographs on Criminal Law and Justice. Oxford University Press, Oxford, New York
Dung L (2023) Current cases of AI misalignment and their implications for future risks. Synthese 202(5):138. https://doi.org/10.1007/s11229-023-04367-0
Dung L (2024) Evaluating approaches for reducing catastrophic risks from AI. AI Ethics. https://doi.org/10.1007/s43681-024-00475-w
Elliott A (2021) Making Sense of AI: Our Algorithmic World. John Wiley & Sons
Fairfield JAT (2021) Runaway technology: can law keep up? Cambridge University Press, Cambridge. https://doi.org/10.1017/9781108545839
Glaser S (1942) Nullum crimen sine lege. J Comp Legislation Int Law 24:29–42
Goldstein S, Kirk-Giannini CD (2023) Language agents reduce the risk of existential catastrophe. AI Soc. https://doi.org/10.1007/s00146-023-01748-4
Gunkel DJ (2020) Mind the gap: responsible robotics and the problem of responsibility. Ethics Inf Technol 22(4):307–320. https://doi.org/10.1007/s10676-017-9428-2
Gunkel DJ (2023) Person, thing, robot: a moral and legal ontology for the 21st century and beyond. The MIT Press, Cambridge
Hakli R, Mäkelä P (2019) Moral responsibility of robots and hybrid agents. Monist 102(2):259–275. https://doi.org/10.1093/monist/onz009
Harasimiuk DE, Braun T (2021) Regulating Artificial Intelligence: Binary Ethics and the Law. London, New York
Harel A (2015) The duty to criminalize*. Law Philos 34(1):1–22. https://doi.org/10.1007/s10982-014-9209-6
Hart HM Jr (1958) The aims of the criminal law. Law Contemp Probl 23(3):401–441
Hayward KJ, Maas MM (2021) Artificial intelligence and crime: a primer for criminologists. Crime Media Cult 17(2):209–233. https://doi.org/10.1177/1741659020917434
Hoffmann CH (2023) A philosophical view on singularity and strong AI. AI Soc 38(4):1697–1714. https://doi.org/10.1007/s00146-021-01327-5
Sparrow R (2023) Friendly AI will still be our master. Or, why we should not want to be the pets of super-intelligent computers. AI Soc. https://doi.org/10.1007/s00146-023-01698-x
Jasiński W, Kremens K (2019) Criminal Law in Poland. Kluwer Law International B.V.
Kaminski ME (2023) The developing law of AI: a turn to risk regulation. SSRN Scholarly Paper. Rochester, NY. https://doi.org/10.2139/ssrn.4692562
Kasirzadeh A (2024) Two types of AI existential risk: decisive and accumulative. arXiv. http://arxiv.org/abs/2401.07836
King TC, Aggarwal N, Taddeo M, Floridi L (2020) Artificial intelligence crime: an interdisciplinary analysis of foreseeable threats and solutions. Sci Eng Ethics 26(1):89–120. https://doi.org/10.1007/s11948-018-00081-0
Kolt N (2023) Algorithmic black swans. SSRN Scholarly Paper. Rochester, NY. https://papers.ssrn.com/abstract=4370566
Lagioia F, Sartor G (2020) AI systems under criminal law: a legal analysis and a regulatory perspective. Philos Technol 33(3):433–465. https://doi.org/10.1007/s13347-019-00362-x
Lahti R (1995) The rule of law and finnish criminal law reform studies. Acta Juridica Hung 37(3–4):251–258
Loucaides LG (1995) Nullum Crimen Sine Lege Certa. In: Loucaides L (ed) Essays on the developing law of human rights. Brill Nijhoff, pp 32–54
Maas MM, Lucero-Matteucci K, Cooke Di (2023) Military artificial intelligence as a contributor to global catastrophic risk. In: Beard SJ, Rees M, Richards C, Rojas CR (eds) The era of global risk: an introduction to existential risk studies. Open Book Publishers, Cambridge
Mahler T (2022) “Regulating artificial general intelligence (AGI)” in law and artificial intelligence: regulating ai and applying ai in legal practice. In: Custers B, Fosch-Villaronga E (eds) Information Technology and Law Series. T.M.C Asser Press, The Hague, pp 521–540
Mamak K (2021) Do we need the criminalization of medical fake news? Med Health Philos. https://doi.org/10.1007/s11019-020-09996-7
Mamak K (2022a) Categories of fake news from the perspective of social harmfulness. In: Faintuch J, Faintuch S (eds) Integrity of Scientific Research: Fraud Misconduct and Fake News in the Academic Medical and Social Environment. Springer International Publishing, Cham, pp 351–357
Mamak K (2022b) Humans neanderthals robots and rights. Ethics Inform Technol 24(3):33. https://doi.org/10.1007/s10676-022-09644-z
Mamak K, Glanc J (2022) Problems with the prospective connected autonomous vehicles regulation: finding a fair balance versus the instinct for self-preservation. Technol Society. https://doi.org/10.1016/j.techsoc.2022.102127
Mamak K, Glanc J (2023) Expunged ‘by design’: on the potential of AI to be a partial enabler in the expungement process. Law Innov Technol 15(2):490–507. https://doi.org/10.1080/17579961.2023.2245682
Mamak K (2019) Rewolucja cyfrowa a prawo karne [The digital revolution and criminal law]. Krakowski Instytut Prawa Karnego Fundacja, Kraków
Mamak K (2023) Robotics, AI and Criminal Law: Crimes against Robots. Routledge
Martínez E, Winter C (2021) Protecting future generations: a global survey of legal academics. SSRN Scholarly Paper. Rochester, NY. https://doi.org/10.2139/ssrn.3931304
Martínez E, Winter C (2021) Postericide and intergenerational ethics. In: Gardiner SM (ed) The Oxford Handbook of Intergenerational Ethics. Oxford University Press
Matthias A (2004) The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics Inf Technol 6(3):175–183. https://doi.org/10.1007/s10676-004-3422-1
McKinnon C (2017) Endangering humanity: an international crime? Can J Philos 47(2–3):395–415. https://doi.org/10.1080/00455091.2017.1280381
McLean S, Read GJM, Thompson J, Baber C, Stanton NA, Salmon PM (2023) The risks associated with artificial general intelligence: a systematic review. J Exp Theor Artif Intell 35(5):649–663. https://doi.org/10.1080/0952813X.2021.1964003
Mokhtar A (2005) Nullum crimen, nulla poena sine lege: aspects and prospects. Statute Law Rev 26(1):41–55
Müller VC (2014) Risks of general artificial intelligence. J Exp Theor Artif Intell 26(3):297–301. https://doi.org/10.1080/0952813X.2014.895110
Naudé W, Dimitri N (2020) The race for an artificial general intelligence: implications for public policy. AI Soc 35(2):367–379. https://doi.org/10.1007/s00146-019-00887-x
Neuwirth RJ (2022) Law, artificial intelligence, and synaesthesia. AI Soc. https://doi.org/10.1007/s00146-022-01615-8
Novelli C, Casolari F, Rotolo A, Taddeo M, Floridi L (2023) How to evaluate the risks of artificial intelligence: a proportionality-based risk model for the AI act. SSRN Electron J. https://doi.org/10.2139/ssrn.4464783
Nyholm S (2018) Attributing agency to automated systems: reflections on human-robot collaborations and responsibility-loci. Sci Eng Ethics 24(4):1201–1219. https://doi.org/10.1007/s11948-017-9943-x
Ohana D (2007) Desert and punishment for acts preparatory to the commission of a crime. Can J Law Jurisprud 20(1):113–142. https://doi.org/10.1017/S0841820900005725
Ord T (2020) The Precipice: existential risk and the future of humanity, Illustrated. Hachette Books, New York
Renzo M (2012) Crimes against humanity and the limits of international criminal law. Law Philos 31(4):443–476. https://doi.org/10.1007/s10982-012-9127-4
Robinson PH, Darley JM (2004) Does criminal law deter? A behavioural science investigation. Oxf J Leg Stud 24(2):173–205. https://doi.org/10.1093/ojls/24.2.173
Russell S (2019) Human Compatible: AI and the Problem of Control. Allen Lane, an imprint of Penguin Books
Sætra HS, Danaher J (2023) Resolving the battle of short- vs long-term AI risks. AI Ethics. https://doi.org/10.1007/s43681-023-00336-y
Salmi J (2023) A democratic way of controlling artificial general intelligence. AI Soc 38(4):1785–1791. https://doi.org/10.1007/s00146-022-01426-x
Salmon PM, Baber C, Burns C, Carden T, Cooke N, Cummings M, Hancock P, McLean S, Read GJM, Stanton NA (2023) Managing the risks of artificial general intelligence: a human factors and ergonomics perspective. Human Factors Ergonom Manufact Serv Ind 33(5):366–378. https://doi.org/10.1002/hfm.20996
Scherer MU (2016) Regulating artificial intelligence systems: risks, challenges competencies and strategies. Harvard J Law Technol. https://doi.org/10.2139/ssrn.2609777
Shevlin H, Vold K, Crosby M, Halina M (2019) The limits of machine intelligence. EMBO Rep 20(10):e49177
Siracusa JM, Warren A (2018) The nuclear non-proliferation regime: an historical perspective. Dipl Statecraft 29(1):3–28. https://doi.org/10.1080/09592296.2017.1420495
Sloane RD (2007) The expressive capacity of international punishment: the limits of the National law analogy and the potential of International criminal law. Stanford J Int Law 43(1):39–94
Sparrow R (2007) Killer robots. J Appl Philos 24(1):62–77
Stokel-Walker C, Van Noorden R (2023) What ChatGPT and generative AI mean for science. Nature 614(7947):214–216. https://doi.org/10.1038/d41586-023-00340-6
Stop talking about tomorrow’s AI doomsday when AI poses risks today (2023) Nature 618:885–886. https://doi.org/10.1038/d41586-023-02094-7
Szocik K, Tkacz B, Gulczyński P (2020) The revelation of superintelligence. AI Soc 35(3):755–758. https://doi.org/10.1007/s00146-020-00947-7
Torres P (2019) The possibility and risks of artificial general intelligence. Bulletin Atomic Sci 75(3):105–108. https://doi.org/10.1080/00963402.2019.1604873
Totschnig W (2019) The problem of superintelligence: political, not technological. AI Soc 34(4):907–920. https://doi.org/10.1007/s00146-017-0753-0
Veale M, Matus K, Gorwa R (2023) AI and global governance: modalities, rationales tensions. Annu Rev Law Soc Sci 19(1):255. https://doi.org/10.1146/annurev-lawsocsci-020223-040749
Vold K, Harris DR (2021) How does artificial intelligence pose an existential risk? In: Véliz C (ed) The Oxford Handbook of Digital Ethics. Oxford University Press, pp 724–747
Wilson G (2013) Minimizing global catastrophic and existential risks from emerging technologies through international law note. Va Environ Law J 31(2):307–364
Wróbel W, Zontek W, Wojtaszczyk A (eds) (2014) Kodeks karny: przepisy dwujęzyczne = Criminal Code (bilingual edition; legal status as of 5 November 2014). Lex a Wolters Kluwer business, Warszawa
Wojtczak S (2022) Endowing artificial intelligence with legal subjectivity. AI Soc 37(1):205–213. https://doi.org/10.1007/s00146-021-01147-7
Wróbel W, Zoll A (2014) Polskie prawo karne: część ogólna [Polish criminal law: general part], 3rd edn. Społeczny Instytut Wydawniczy Znak, Kraków
Zerilli J, Danaher J, Maclaurin J, Gavaghan C, Knott A (2021) A citizen’s guide to artificial intelligence. The MIT Press, Cambridge
Funding
Open Access funding provided by University of Helsinki (including Helsinki University Central Hospital). Academy of Finland, grant 333873, Kamil Mamak.
Ethics declarations
Conflict of interests
The author declares no conflict of interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Mamak, K. AGI crimes? The role of criminal law in mitigating existential risks posed by artificial general intelligence. AI & Soc (2024). https://doi.org/10.1007/s00146-024-02036-5