Democratic inclusion is among the most fundamental questions for any association that aspires to be democratic. Democratic inclusion concerns the scope of the demos, understood as the group of people (or entities) that should govern or elect those who govern. Yet the principles for democratic inclusion remain unclear and widely contested. Recent democratic theory has come to a standstill between the two major alternatives: the all-affected principle (AAP) and the all-subjected principle (ASP) (see Goodin, 2007, 2016; Miller, 2009; Hultin Rosenberg, 2020; Beckman, 2009; Valentini, 2014). Whereas the AAP takes the extent to which someone is affected by a decision as a necessary and sufficient condition for inclusion in the demos, the ASP identifies the demos with the subjects of decisions. At a higher level of abstraction, the ASP and the AAP nevertheless agree that the relationship between decisions and an entity is decisive for democratic inclusion (Bauböck, 2018). The disagreement concerns the nature of this relationship.

In an attempt to advance this debate, the current paper argues that democratic inclusion cannot be determined exclusively by appeal to the nature of the relationship between decisions and entities. Additional assumptions are needed. In particular, the boundaries of the demos are premised on claims about “political patiency”—non-relational properties in virtue of which an entity has political standing—as well as on claims about “political agency”—the capacity for either intentional political action or judgment. The thesis is that both the ASP and the AAP remain indeterminate unless complemented by claims about the patients and agents to whom the relational requirements apply.

In the paper we look to recent advancements in technological innovation as an occasion for exploring the assumptions about agency and patiency that should guide judgments about democratic inclusion. Developments in artificial intelligence (AI) have produced algorithms with self-learning capacities, allowing them to adjust their performance on the basis of collected and analyzed data. These are vastly more sophisticated than regular computer scripts, which are bound by the original program and unable to adapt to and learn from the environment. Their “intelligence” consists in the capacity to emulate goal-directed behaviour. Though currently existing AI (weak or narrow AI) has few or none of the properties associated with human intelligence, the ultimate aim of investments in computer technology is to develop genuine artificial subjectivity, or what is referred to as “strong artificial intelligence” or “artificial general intelligence”: entities with the capacity to “sense, understand, reason, learn and act in the environment in ways similar to how humans can intelligently” (Wah & Leung, 2020). Artificial general intelligence (AGI) blurs the distinction between human and non-human entities in important respects. The human-like properties of AGI have rightly propelled philosophical research into the metaphysics of human consciousness and computing, but have not yet attracted much attention among political philosophers.

Pitching the problem of democratic inclusion to artificial intelligence technologies may strike readers as absurd for at least two reasons. First, democratic participation is widely assumed to be the privilege of human beings. Since AI systems are not members of the human species, they are not eligible for democratic inclusion. Second, even if criteria for democratic inclusion were to apply to AI, it is hard to imagine a situation where algorithms participate in political decision-making on equal terms with humans. The idea is just infeasible.

The claim that democratic inclusion is the privilege of human beings is not obviously true, however. This is evinced by the fact that democratic participation is applicable to entities that are artificial persons and not just to persons of human flesh and blood. A democratic association can be composed of associations (e.g. municipalities, regions or states) or corporations (Beckman, 2018; Hasnas, 2018). Associations are not members of the human species, of course. More fundamentally, the “speciesist” assumptions of democracy are increasingly under pressure as is illustrated by claims that the right to a democratic say, or political representation, should extend to non-human animals (Garner, 2017; Kymlicka & Donaldson, 2016) or even eco-systems in nature. Hence, there are reasons not to assume that democratic inclusion is necessarily an exclusive privilege of human beings.

So, what about the feasibility of extending democratic rights to AI? The objection might very well be correct, and we will not investigate it further here. But it is worth remembering that democratic principles are always idealizations. Democracy might “not be suitable to men” but be feasible only among Gods, as Rousseau (1762, Book 3:IV) famously suggested. The point is that an exploration of the commitments that follow from the democratic ideal should not be limited by what is feasible at a particular time and place. As noted by Bob Goodin (1996, 841), the fact that it is absurd for practical reasons to include some particular entity in democratic decision-making does not imply that it is absurd to believe that it ought to be included.

In any case, the motivation of this paper is not just to answer whether AI should be democratically included. We also believe that the confrontation between AI and democratic principles is instructive in a more general sense, as it helps us identify implicit and potentially controversial assumptions of well-known democratic principles. Though the answer depends on the nature and qualities of artificial intelligence, it ultimately also depends on the principles of democratic theory. On what grounds are entities entitled to rights to democratic participation, and when could we legitimately refuse them? The aim of this paper is to tease out the connections between basic democratic convictions and the properties associated with narrow AI, AGI and possible versions of AI in between, for the purpose of identifying the conditions for membership in the demos. The question of democratic inclusion of AI is thus used to specify the conditions for democratic inclusion, with implications for this question and beyond. AI is uniquely suited to this task since it could develop in different directions, which enables us to elaborate on various kinds of conditions by reference to these possible developments. In this sense, AI is better suited than other entities, such as children, animals and corporations, that have previously been addressed in the literature on democratic inclusion.

1 A non-speciesist approach to the question of AI and democratic inclusion

For the purpose of this paper, we assume that non-human entities, including artificial entities, could qualify for democratic inclusion. If currently existing narrow AI or future more sophisticated AGI do not qualify for democratic inclusion, this is because there are some other relevant differences between AI and humans that motivate this difference in status. This suggests that the question of democratic inclusion of AI could be addressed as a question of what properties an entity must possess in order to qualify for democratic inclusion. Such a “property-approach” to the moral and political status of AI (Andreotta, 2021) is suitable to the current purpose of examining principles of democratic inclusion by reference to AI. A property approach to democratic inclusion contrasts with a “species-approach” according to which those and only those who belong to a certain species qualify for democratic inclusion. Our suggestion is that the properties that future AI needs to possess in order to qualify for democratic inclusion are the properties that an entity needs to possess in order to have political agency and political patiency—i.e. agency and patiency in the sense relevant for democratic inclusion.

We approach the topic of democratic inclusion as concerned with the extension of principles that determine democratic inclusion and assume that AI entities are within the domain of application of these principles. We start out with the assumption that future AI, if similar to humans in all democratically relevant respects, should be included in the demos. To be clear, AI will never be similar to humans in all respects. The task here is to specify which similarities are democratically relevant. Regardless of how sophisticated AI becomes, it remains a fact that these entities are technological devices created and used by humans, and artefacts are typically not seen as rights holders, at least not as holders of direct rights (Andreotta, 2021). The fact that AI systems are “designed” and “used” by humans means that the actions of these systems are in this sense connected to the intentions of the human designers and users (Johnson, 2006; Johnson & Miller, 2008). If this is taken to be a democratically relevant respect in which AI differs from humans and other entities that could qualify for democratic inclusion, the issue is already settled. While it is tempting to rule out the democratic inclusion of AI by appealing to the fact that they are technological devices, not biological beings, there are good reasons not to settle for this conclusion too quickly. As shown by the discussion of the ontological, legal and moral status of AI (Basl, 2014; Gordon, 2020; Gunkel, 2012, 2014; Gunkel & Bryson, 2014), it is possible to extend concepts developed, and previously reserved, for humans to non-human AI entities. Moreover, the claim that AIs are created by humans is not necessarily true. If an AI can create other AIs, there will be AIs that are not created by humans.

In any case, it is unclear whether the origin of an entity is relevant at all. According to Christian List (2021, 1225), “no matter how AI systems have been brought into existence, systems above a certain threshold of autonomy constitute a new loci of agency, distinct from the agency of any human designers, owners, and operators”. The key issue here is whether AI systems have reached, or could ever reach, the degree of independence required to count as autonomous. Elena Popa (2021) argues that AI systems will never be sufficiently independent to count as moral agents. One important issue in this debate is whether AI systems could set independent goals and not just act upon goals set by humans. To what extent this and other aspects of the current status and future development of AI matter from the perspective of democratic inclusion is an open question, further addressed in this paper when discussing what is required for AI to count as political agents and political patients on principles of democratic inclusion.

As said above, the intelligence of currently existing AI consists in the capacity to emulate goal-directed behaviour. This intelligence is narrow in the sense that it is developed to solve specific tasks. Currently existing artificial intelligences are better than humans at solving certain tasks. The general game-playing AI program AlphaZero is far better at chess and other games than the best human players. This machine-learning AI is also better than the best programmed specialist game-playing AI programs (Silver et al., 2018). Despite this, the intelligence of AlphaZero, although more general than that of specialist game-playing AI programs, is far from the general intelligence of human beings. It is a matter of scholarly controversy whether human-like AGI will ever emerge. Some AI scholars claim that it is a matter of when rather than if (McCarthy, 2007). Other scholars are equally certain that AGI will never be realized (Fjelland, 2020). For the question of democratic inclusion of AI, whether AI will be built “with a capacity for emotions of their own, for example the ability to feel pain” (Wallach & Allen, 2009: 204) is as important as the question of what intelligence AI will develop.

Without taking a position on the speculative issue of how AI will develop in terms of autonomy, intelligence and emotional capacities, there are good reasons to ask what properties AI must develop in order to qualify for democratic inclusion on established principles of democratic inclusion. If AIs emerge that exhibit qualities of consciousness, we face pressing questions about their political and legal status. Non-human entities with such properties will be able to perform tasks in public administration; they will not just be able to improve the political decisions of human beings but may in some respects replace them. Many philosophers speculate that a time will come when legal and moral rights will extend to AI. Yet few if any have delved further and asked whether these entities would also be entitled to exercise the powers that in a democratic society are the privilege of citizens or the members of the demos—that is, the people with rights to democratic participation.

2 AI and the relational requirement

Democratic associations at all levels make distinctions between the members and non-members of the demos. In states that aspire to be democratic, citizenship is the predominant condition for rights to vote and democratic participation (Earnest, 2008). However, the democratic status of the rules determining membership in the demos cannot be taken for granted. The rules identifying the demos are democratic only if they conform to the principles for democratic inclusion. Though little agreement on the substance of these principles yet exists, it is widely agreed that citizenship status does not define the democratic status of the boundaries of the demos. The predominant view is that inclusion in the demos instead is conditioned by the existence of a particular relationship between the political unit and potential members. The principles of democratic inclusion identify a relational requirement as a necessary condition for membership in the demos.

The nature of the relational requirement is disputed, however. AAP and ASP currently represent the two main alternative conceptions of the relational requirement for democratic inclusion. According to the AAP, an entity is entitled to inclusion in the demos if and only if it is affected by the decisions of the political unit in the relevant sense. According to the ASP, an entity is entitled to inclusion in the demos if and only if it is subjected to the decisions of the political unit in the relevant sense. The normative underpinnings of these principles are a separate question that will not be considered here (Beckman & Hultin Rosenberg, 2018; Bengtson & Lippert-Rasmussen, 2021). Our interest is exclusively in the structure of these principles and in whether they potentially justify the inclusion of AI in the demos. The first step is consequently to investigate whether intelligent artificial entities satisfy the relational requirements as conceived by the AAP and the ASP, respectively.

2.1 The relational requirement of AAP

The scope of inclusion of AAP has been the subject of extensive discussion within the scholarly literature on democratic inclusion. The principle has been argued to stretch the boundaries of inclusion far beyond their current limits. In requiring the inclusion of everyone causally affected by the decisions of the political unit, the principle stretches the boundaries of inclusion spatially by requiring the inclusion of affected entities outside the territorial jurisdictions of these political units. In a seminal article, Bob Goodin (2007) suggests that AAP requires the inclusion of everyone, everywhere, in every decision. That claim has also been a main target of the critics of AAP (Miller, 2009; Song, 2012; Whelan, 1983). Others argue that AAP stretches the boundaries of inclusion temporally by requiring the inclusion of future (Cruz, 2018; Goodin, 2007; Heyward, 2008; Tännsjö, 2007) and/or past generations (Goodin, 2007; Bengtson, 2020). More important in the present context is the claim that AAP stretches the boundaries of inclusion categorically to include entities that are not usually included in the demos: children (Saunders, 2012), animals (Garner, 2017; Kymlicka & Donaldson, 2016), and non-sentient organisms and nature (Cruz, 2018). However, to the best of our knowledge, artificial intelligent entities have not yet been addressed.

The scope of inclusion of AAP depends on how the principle is formulated more precisely. The radically inclusive implications (in the spatial sense) suggested by Goodin (2007) are based on a formulation of the principle such that it requires the inclusion of everyone who is possibly affected by a possible decision. Other, arguably more plausible, versions of the principle are less inclusive in this respect (for an overview, see Hultin Rosenberg, 2020). The question of democratic inclusion of particular artificial intelligent entities depends on the spatial and temporal extension of AAP, while the more fundamental question of whether AI entities could be eligible for democratic inclusion on this principle depends on its categorical extension. Could the scope of inclusion of AAP stretch beyond the current domain of human and perhaps even non-human biological entities to include non-human artificial entities? To answer this question, the first step is to determine whether AI could satisfy the relational requirement associated with AAP.

Common to all versions of AAP is that democratic inclusion is triggered by a particular relation between the political unit and the entity—the relation of the latter having an interest that is causally affected by the policies decided by the former (Dahl, 1970; Goodin, 2007; Hultin Rosenberg, 2020; Miller, 2009; Whelan, 1983). On this understanding of AAP, the scope of inclusion, categorically understood, is determined by what types of entities (i) have an interest, (ii) have an interest of a kind that warrants democratic inclusion, and (iii) could be causally affected by political decisions taken by the political unit. In order for AI, narrow or general, to qualify for democratic inclusion on AAP, these artificial entities must have interests of this kind.

Perhaps there could be entities that have an interest of a kind that warrants democratic inclusion but that cannot be affected by political decisions. Currently existing AI cannot be excluded on the basis of this requirement. Political decisions affect existing AI by regulating its use. If AAP is not to require the inclusion of AI, it must be because these entities do not have interests or because their interests are of a kind that does not warrant democratic inclusion.

Intelligent artificial entities could be seen as bearers of interests. As argued by Basl (2014), existing AI is goal-directed and teleologically organized. In that sense, AI systems are similar to non-sentient biological organisms with what he refers to as teleo interests. Hence, on a categorically maximally inclusive understanding of the relational requirement of AAP, existing AI ought to be included in the demos. Of course, on this understanding of “interests”, viruses and other micro-organisms are also bearers of interests. The counterintuitive implications of AAP so understood are not acknowledged by either adherents or critics of AAP who, with few exceptions (Cruz, 2018; Garner, 2017; Kymlicka & Donaldson, 2016), discuss the principle as if it applies exclusively to human beings. The question addressed is typically formulated as a question of which individuals, persons or peoples to include in the demos. That AAP is highly inclusive in a territorial sense has been recognized by many. Its potential categorical inclusiveness has not been subjected to equally thorough scrutiny.

In order to save AAP from this counterintuitive implication, it must be argued that the teleo interests of currently existing AI are interests of a kind that does not warrant democratic inclusion. This could be argued by referring to the fact that AI is not human and only human interests warrant democratic inclusion. An alternative to this “species-approach” that will be explored in this paper is the “property-approach”, according to which only entities with certain properties have interests that warrant democratic inclusion.

Rainer Bauböck, to take an example, seems to assume something akin to this when discussing AAP and suggesting that “individuals must be capable of having interests, which presupposes sentience, a sense of selfhood and capacity for purposive action” (Bauböck, 2018). As indicated above, non-sentient entities could be bearers of interests. By defining the interests that are relevant from the perspective of AAP as something that requires “a sense of selfhood and capacity for purposive action”, Bauböck assumes an interpretation of AAP according to which not all interests warrant democratic inclusion. On this interpretation, the scope of inclusion of AAP might admittedly reach beyond the human domain and include at least some non-human animals, but it will not include entities with only teleo interests. To take another example, Ben Saunders (2012, 286) assumes that all sentient beings have interests that warrant democratic inclusion on AAP.

On the most categorically inclusive interpretation of AAP, the principle could be argued to require the inclusion of currently existing AI. In order to avoid this implication, adherents of AAP could make a distinction between interests that warrant democratic inclusion and interests that do not warrant democratic inclusion. With the terminology used in this paper, AAP could be reinterpreted as a principle with a patiency requirement that discriminates between interests that are worthy of political concern and interests that are not worthy of political concern. This patiency requirement of AAP will be further developed below.

2.2 The relational requirement of ASP

The claim that democratic inclusion is conditioned by subjection to public decisions is informed by the notion that public decisions are decisions for some entities and that only entities for whom decisions are made should be included in the demos. The relational requirement is thus conceived in terms of what it means for an entity to be relevantly subjected to public decisions rather than affected by them.

The relevant meaning of “subjected” is nevertheless controversial. A popular view in the literature is that subjection is to be understood in terms of coercion. The maxim that all subjected should be included is equivalent to the claim that all subject to coercion should be included. Accordingly, the members of the demos should equal the domain of persons or entities that are subject to the coercive apparatus of the state (Abizadeh, 2008; Blake, 2001). There are two problems with this definition of the ASP, however.

The first is that coercion may not be a necessary element of legal systems at all. Early positivist conceptions of law clearly emphasised the coercive nature of law, either as a necessary means for the enforcement of law or as the fundamental object of regulation (Bobbio, 1965). More recent positivists have abandoned this view and instead picture law as an institutionalized normative system. The existence of law so conceived does not necessarily depend on coercive enforcement (Raz, 2009).

The second reason against explaining subjection in terms of coercion is methodological. The coercive effects of public decisions depend on how people are affected by them. Hence, if the all subjected principle applies to the subjects of coercion the analytic distinction between this principle and the all affected principle evaporates (Goodin, 2016, 370). Advocates of the coercion reading of ASP have responded by adding that democratic inclusion only applies to the subjects of coercion that are also subject to legal requirements narrowly interpreted (Abizadeh, 2021). But this reply effectively abandons the claim that subjection to coercion is sufficient for inclusion.

In line with the claim that subjection to law—not just subjection to coercion—is essential to the ASP, we consider two different interpretations of the ASP. The first holds that an entity is relevantly subjected to public decisions if and only if the entity is legally obligated to comply with the decision. The second holds that an entity is relevantly subjected to public decisions if and only if the entity is subject to claims to legitimate authority. The extensions of subjection on these two understandings potentially diverge. It is conceivable that an entity is legally obligated by a decision, though not subject to claims of legitimate authority, and, conversely, that an entity is subject to claims of legitimate authority though not legally obligated.

Could AI satisfy the distinct readings of the relational requirement associated with the ASP? Consider the first view, according to which an entity is relevantly related to public decisions if and only if subject to legal obligations. Whether AI can be legally obligated to comply with the law clearly depends on what legal obligations are taken to imply. On one understanding, legal obligations are entailed by any legal claim to the effect that the law applies to an entity. Legal obligations are not conditioned by moral obligations and do not depend on the subject accepting the obligation to comply with the law. Legal obligations in that sense apply “automatically” whenever the law is valid (Lyons, 1993, 98). This is the reading of the ASP endorsed by Goodin (2016, 370f.), who argues that the extent to which an entity is relevantly subjected to the law is a “purely formal, juridical” matter.

It appears to follow from this reading of the relational requirement that AI can be relevantly subjected to the law if and only if it is true that AI is subject to legal regulation. Laws that regulate AI incur legal obligations for AI by the mere fact that the law applies to AI. However, the possibility of applying the law in this sense is premised on the legal recognition of the entity as a bearer of legal duties. An entity is a potential bearer of legal duties only if it is recognized as a legal entity in the legal system. In effect, this is equivalent to legal personality. A legal person is an entity recognized as a bearer of legal rights and/or duties. Hence, AI is subject to the law in the relevant sense only if it enjoys legal personality. AI must, in other words, be afforded a particular kind of legal agency in order to be subject to law in the “juridical” sense of that term.

On the other hand, an agent has an obligation only if it is possible for the agent to comply. For a rule to be complied with, the subject to which the rule applies must be able to act in accordance with the rule. Subjection to legal obligations is on this understanding premised on the additional condition that the entity has the capacity to comprehend and respond to rules. It is clear that certain agency requirements are involved in the ascription of this stronger version of subjection to legal obligations. In order to judge whether the law applies to an entity such that the entity is able to comply with the obligations of the law, something needs to be known about the capacity for action of that entity. From these preliminary observations, two agency conditions emerge as necessary preconditions for democratic inclusion. In order to be included in the demos by virtue of being subject to the law, AI must be endowed with legal personality and the capacity for action.

Now, let us consider the second view, according to which an entity is subject to the law in the relevant sense if and only if subject to claims of legitimate authority. The basis for this view is Raz’s (1986, 2009) view that every legal system claims for itself legitimate authority, i.e. that law “presents itself” as justified to entities under its purview. A distinctive mark of this position is that the subjects of law are not identified by the extent to which they are subject to “juridical norms” (Goodin, 2016) but by the extent to which they are subject to claims to legitimate legal authority. The demos—the democratic people—should include all entities that are subjected to claimed legitimate authority.

Since the claim to legitimate authority entails the right to create legal obligations for the subject, this version of the relational requirement subsumes the agency conditions already mentioned. Only legal persons with the ability to comply with the law can be subjected to claimed legitimate authority. But the additional agency requirement following this version of the ASP is the capacity to recognize the authority as legitimate. The law can be legitimate only for agents that are able to accept the law as legitimate; hence, legitimate authority can be claimed only for agents with such an ability. It is perfectly conceivable then that entities relevantly subjected to the law according to the first conception of the ASP are not relevantly subjected according to the second conception. An entity may have legal personality and the capacity for compliance but still lack the ability to recognize the law as legitimate. The point is that assumptions about the agential properties of the subjects of law are critical in deciding if the ASP applies to AI and other entities.

3 ASP and the agency requirement

According to ASP, a necessary precondition for democratic inclusion is the fact of being subjected to the law. Only entities that are legal subjects should be included in the demos. But in order for an entity to be subjected to the law, it must be an agent of some kind. As already discussed, the first reading of the ASP holds that an entity is subjected to the law in the relevant sense if and only if it is a legal person within the jurisdiction of the legal system: the law applies to legal persons only. The consequent understanding of the principle of democratic inclusion proposes a relational requirement (subjection to the law) and an agential requirement (legal personhood) that are together necessary and sufficient for democratic inclusion.

On the second reading of the ASP, an entity is subjected to the law in the relevant sense if and only if it is a legal person within the jurisdiction of the legal system that possesses the ability to comply with the law and to recognize the law’s claim to legitimate authority. The agential requirements posited by this view are more demanding. However, the structure of the conditions for democratic inclusion is similar. In order to be included in the demos, the entity must stand in a particular relationship to public decisions (subjection to the law) as well as satisfy certain agential requirements (legal personhood, capacity to comply and capacity to recognize legitimate authority). The question now is whether either version of the ASP, weak or strong, is applicable to AI.

3.1 Legal personality

One view is that legal personality is premised on the ability to initiate legal actions against others (Solum, 1992). Legal personality is conditioned by the possession of a capacity that is a natural kind. The implication is that entities that do not possess the capacity necessary for legal personality cannot be recognized as legal persons by the law. Entities that lack the natural kind that is a precondition for legal personality are consequently not subjected to the law in the sense of being the potential bearers of legal rights and duties. The scope of the ASP is thereby limited to entities with a particular agency, i.e. the capacity to form legal relationships. We might for example say that the ASP does not apply to rocks because rocks fail to meet the agency requirements that would allow them to be legal persons. And since rocks cannot be legal persons, rocks cannot be subjected to the law.

A different view is that legal personhood is a “fiction”, employed for the purpose of illustration and simplification, not for the purpose of identifying features that are intrinsic to natural objects (Kelsen, 2015). The implication is that the status of legal personality is not the privilege of a predefined set of entities. Legal personhood is a mere “artifice” (Naffine, 2011) that is attributable to anybody, or anything, whenever the law so declares (Berg, 2007; Naffine, 2003; Tur, 1986). According to this view, legal systems are empowered to ascribe legal personality as they see fit and are not constrained by the intrinsic properties of the entities they seek to regulate.

It might be objected that we should distinguish between the claim that the category of legal personality is artificial and the claim that membership in that category is artificial. Even if the category of legal personality is artificial in the sense of being stipulated by law, it does not follow that membership in that category is arbitrary. The law may invent any conditions for legal personality, but it may still be the case that certain entities would never qualify as legal persons because of their intrinsic properties (Banas, 2021; Kurki, 2019).

Yet, legal practice does not seem to corroborate the view that legal personality is constrained by the intrinsic properties of entities. Legal systems are known to extend legal rights to minors or infants, even though they lack the capacity to initiate legal actions by themselves (Tur, 1986). More radically, non-human animals—dolphins and primates—have been granted legal personality and rights in some legal systems (Shyam, 2015), and well known is the extension of legal personality and associated rights to rivers in India and New Zealand (O’Donnell & Talbot-Jones, 2018). This indicates that the status of legal personality is not limited by the natural or intrinsic properties of an entity; there are few if any legal obstacles to ascribing legal personhood to artificial intelligences and to considering them subjects of the law.

The artifice theory of legal personality is consistent with the extension of legal personality to non-human animals, ecosystems and artificial intelligences. If legal personality is a precondition for subjection to the law that in turn is a precondition for inclusion in the demos, the implication is that membership in the demos is contingent on developments in legal practice. This particular requirement for the inclusion of artificial intelligences in the demos consequently does not depend on the properties possessed by artificial intelligences but on accidental features of legal systems.

Currently, legal personality is not conferred on AI. But such a development seems a real legal possibility (Bryson et al., 2017). The robot Sophia has been granted citizenship in Saudi Arabia (Jaynes, 2020), and the European Parliament has urged the Commission to grant “electronic personality” to sophisticated AI (European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL))). In addition, some claim that AI can achieve legal personality indirectly—by assuming control of entities that are already legal persons (Lopucki, 2018). An algorithm that exclusively controls a legal entity is a de facto legal person. Following the first version of the ASP, both currently existing AI and future, more sophisticated AI would meet the requirement for inclusion.

3.2 Compliance

Recall that the second version of the ASP makes additional demands on the agential capacities of legal persons in the jurisdiction for them to be included in the demos. One such requirement is that the legal person has the capacity to comply with the law. It is an open question whether all legal persons are able to do so. Consider, for example, the tendency to recognize ecosystems as legal persons vested with legal rights. Even if rivers and mountains are granted legal standing, no one believes that rivers and mountains are agents with the capacity to comply with the law. Thus, on the second version of the ASP, rivers and mountains do not qualify as legal subjects in the sense relevant for democratic inclusion, even if they are legal persons.

The question then is whether AI can or could comply with legal norms in the domains in which it is active. At first glance, the answer may seem obviously affirmative. Legal norms are rules for behaviour, and AI is specifically designed to be goal-directed in the sense of collecting the information necessary to achieve specified ends. Note, however, that prior specification of all relevant legal norms in the program of an AI is unlikely to be feasible. The law is a moving target, constantly open to either specification or change (Malle et al., 2020). On the other hand, an AI equipped with sufficiently powerful self-learning capacities might be able to adjust to and learn about changes in the legal environment. Surely then, we should expect AI to learn to behave in accordance with legal norms.

Yet, the challenges involved in compliance with the law have been found to be more complex than expected. A study on autonomous vehicles investigated their ability to comply with traffic laws in the Netherlands. The study explored several approaches in the field of AI, including those that allowed the algorithm to “reason” in order to solve new problems. Yet, the study concluded that legal compliance is difficult to ascertain because the law is characterized by “rule conflicts, open texture and vagueness” (Prakken, 2017; also, von Ungern-Sternberg, 2018). The problem is that law is not a given set of rules but a body of norms that is not always explicit, and that remains open to interpretation even when it is. Given the difficulty of making AI comply with what appears to be a relatively simple domain of law, there is reason to be pessimistic about the ability of AI in its current form to comply with the full range of laws in the jurisdiction.

A stronger reason for scepticism about the potential for AI to comply with the law derives from the claim that AI necessarily lacks the relevant cognitive faculties. Sceptics argue that the decisions needed to comply with the law ultimately depend on human intuitions and that no technological system can ever fully replicate the workings of that faculty (Khan et al., 2019).

Yet, it is premature to exclude the possibility of future, stronger versions of AI with the capacity to identify and learn how to comply with the law. That AI could never reason intuitively or interpret textual information in sophisticated ways is far from a foregone conclusion. Intuitive judgment is arguably neither mystical nor a uniquely human capacity, but a quasi-analytical skill that can be mirrored by algorithms trained in the appropriate way (Frantz, 2003). Also, the ability to interpret vague and complex patterns is a skill in which AI is already outperforming humans in some domains (Topalovic et al., 2019). It is thus likely that strong AI, and weaker versions too, will possess the capacity to comply with the law and should in that respect be considered legal subjects.

3.3 Recognition of legitimate authority

Following the second conception of the all-subjected principle, however, legal personality and rule compliance are not sufficient for inclusion in the demos. The subjects of the law are entitled to inclusion to the extent that they are subjected to legal authority. And in order for them to be subjected to legal authority, they must be able to recognize the law as legitimate.

To see why, it is helpful to note how compliance with the law appears from an external point of view. An external observer can observe agents in a society behaving in ways that are law-like and hypothesize the existence of norms with which the subjects comply. But the external observer cannot be confident in that conclusion. From the external point of view, compliance with law appears little different from adjustment to the natural environment. The fact that individuals tend not to jump off high buildings is a law-like pattern, just as is the tendency of most individuals to comply with the law against murder. But to the extent that citizens comply with the law because it is the law, their reasons for action are very different from the practical reasons that induce them not to jump off high buildings. In the latter case, people act prudentially: they accept a particular conclusion in view of the balance of practical reasons that apply to the relevant facts. But that is not what is going on when people comply with the law because it is the law. In that case, compliance follows directly from recognition of the authority of law. Only if they believe in the authority of the law do they have content-independent reasons for compliance.

Of course, legal systems rarely if ever achieve legitimate authority. But if they are more than brute exercises of power, they are recognized “as if” legitimate by a significant number of their subjects. The distinctive mark of the law is that it claims legitimate authority while often possessing little more than de facto authority (Raz, 2009). The point, however, is that de facto authority is distinct from brute power and other circumstances to which people regularly adapt. Following the second version of the all-subjected principle, the claim to democratic inclusion among the subjects of the law derives from the fact that they are subjected to a body that claims authority over them. Hence, the ASP applies only to subjects with the ability to recognize law as authoritative. In order for this version of the ASP to apply to AI, it must be an agent with the capacity to believe that law is legitimate (in addition to being a legal person and having the ability to comply with the law). The question then is whether AI has that capacity. This is a vast and complex issue that cannot be satisfactorily discussed here. Two observations are nevertheless in order.

The first is that “norm-recognition” is an important topic in the AI literature. The issue here is how to design autonomous systems that are able to distinguish between normative systems that ought to be complied with and those that ought not to be. The capacity needed to accomplish this task is that of “autonomous norm formation”, as the AI must be able to make judgments not just on the validity of pre-existing normative systems but also on new and previously unknown systems of norms (Conte & Dignum, 2001). Yet, the ability to navigate between various normative systems does not entail the capacity to believe in their legitimacy. “Reasons” to accept a normative system are not premised on the ability to believe that the normative system is legitimate.

The second is that the capacity to recognize legitimate authority is a moral capacity. The belief that the law has authority is equivalent to the belief that the powers vested in the legal system are morally legitimate. When subjects believe that law is legitimate, they effectively believe that its directives are morally binding because they believe that the law provides reasons that apply to them independently of content (Raz, 1986).

With respect to AI, the implication is that they should be included in the demos only if they have the ability to make moral judgments about the legitimacy of legal authority. A capacity for ethical and moral reasoning is thus required for an entity to be included in the demos. Now, numerous algorithms are reportedly able to make ethical decisions in narrowly defined circumstances. More importantly, some argue that artificial agents can be “virtual” moral agents or “functionally equivalent” to moral agents (Coeckelbergh, 2009; Wallach & Allen, 2009). Indeed, Sullins (2006) proposes that robots are moral agents if they have the capacity for intentional, autonomous and responsible action. No actual version of AI reportedly enjoys a capacity for sophisticated moral reasoning in this sense (Cervantes et al., 2020). But can we exclude the possibility that future versions of AI will? Some think we can, since “functionally equivalent” moral agency is not moral agency in the relevant sense (Jebari, 2021). On the assumption that recognition of legitimate authority requires sophisticated moral reasoning, artificial agents that are “moral agents” would need the ability to distinguish between legitimate and illegitimate legal authority. That in turn depends on the capacity to identify and form moral concepts, which is a stronger requirement than intentional, autonomous and responsible action. If democratic inclusion is premised on subjection to legal authority that claims to be legitimate, and subjection to such authority is premised on the capacity to determine whether authority is legitimate, it is uncertain whether future AI will ever qualify for inclusion in the demos.

4 AAP and the patiency requirement

It was suggested earlier in this article that already existing AI meets the relational requirement of AAP in the sense that the teleo interests of these entities are causally affected by the decisions taken by democratic political units. Interpreted as a principle that requires inclusion of everyone with an affected interest, with no restriction on which interests warrant democratic inclusion, AAP could be argued to require inclusion in the demos of currently existing AI. However, this conclusion is based on an over-simplified interpretation of AAP. Many things, such as trees, viruses and rivers, could be said to be causally affected in this way, yet granting these entities democratic inclusion is not what is usually taken to follow from the principle. Something more than standing in this particular relation to political decisions seems to be required.

Unlike ASP, discussed in the previous section, AAP cannot straightforwardly be attributed an implicit agency requirement. As argued by Ben Saunders (2012), AAP requires inclusion without regard to a capacity for political agency. An entity could be causally affected in a way that warrants democratic inclusion without having the capacity for political agency. Instead, AAP seems to have an implicit patiency requirement that could possibly exclude non-sentient biological organisms and artefacts, perhaps even intelligent artefacts, from democratic inclusion. Having an interest is not necessarily the same as having an interest that warrants democratic inclusion. In order for an interest to warrant democratic inclusion on AAP, it must be an interest that is worthy of political concern. To be clear, something could be worthy of political concern without having an interest that is worthy of political concern. For example, nature and the environment could be worthy of political concern not because nature or the environment has an interest that is worthy of political concern but because other entities with political standing have an interest in a concern for nature and the environment. The same could be true of AI systems if the relevant interests at stake are the interests of the human designers and users (see e.g. Popa, 2021).

Entities with interests that are worthy of political concern will be referred to as “political patients”. This terminology is borrowed from the literature on moral standing, where entities with moral standing are referred to as “moral patients”. With this terminology, only political patients are worthy of political concern because they have an interest that is worthy of political concern. Other entities that are worthy of political concern are so because a political patient has an interest in these entities being treated in a certain way.

Reinterpreted along this line of thought, AAP does not require the inclusion of everyone with an affected interest but of everyone with an affected interest of a certain kind—namely an interest that is worthy of political concern. Put differently, the scope of inclusion of AAP is limited to political patients, since only political patients are bearers of interests that warrant democratic inclusion. With this reformulation, the categorical extension of the principle will be less extensive—assuming that not all interests are interests worthy of political concern. To determine the more precise scope of inclusion of AAP, reformulated in this way, we need to establish which interests warrant democratic inclusion and which entities could be bearers of these interests.

4.1 Psychological instead of teleo interests

The literature on AAP is not very detailed on which types of interests warrant inclusion. It has been suggested that we need some measure or index to determine what should count as being relevantly affected (Arrhenius, 2018; Goodin, 2007). But such a measure or index is seldom developed. Although political patiency, with the exception of Saunders (2012), is never explicitly discussed in the literature on AAP, it seems fair to say that both adherents and critics assume a conception of patiency (or of interests that warrant democratic inclusion) that is less inclusive than the maximally inclusive interpretation according to which all interests (including teleo interests) warrant democratic inclusion. Both adherents and critics of AAP seem instead to assume that only psychological interests warrant democratic inclusion. At the most general level, these psychological interests can be distinguished from the teleo interests discussed above. A teleo interest is, as said, an interest an entity has “in virtue of being teleological organized”, while a psychological interest is an interest an entity has “in virtue of having psychological status” (Basl, 2014).

Understood as a principle that requires inclusion of those and only those with an affected psychological interest, the scope of inclusion of AAP is limited to entities with the cognitive capacities necessary for having psychological interests. The relevant capacity may be the capacity for consciousness, the capacity to have basic emotions, the capacity for experiencing pleasure or pain, or more sophisticated cognitive capacities. Regardless of which of these capacities are required for having psychological interests that warrant democratic inclusion, requiring psychological interests limits the scope of inclusion in a way that excludes entities without rudimentary cognitive capacities, such as currently existing (narrow) AI and non-sentient biological entities.

The more precise scope of inclusion of AAP depends on which psychological interests warrant democratic inclusion and which types of entities have the cognitive capacities required for having these interests. As noted earlier in this paper, the discussion of the temporal and spatial extension of AAP has often assumed that the categorical extension of the principle is limited to humans or even to adult humans. That the discussion of the scope of inclusion of AAP has mainly focused on adults does not, however, necessarily reflect an assumed patiency requirement that excludes children. As will be discussed below, there are adherents of AAP who assume such a patiency requirement. However, it seems fair to say that most adherents of AAP probably assume that children have interests that are worthy of political concern but that children for some other reason are not eligible for democratic inclusion. That the discussion of the scope of inclusion of AAP has mainly focused on humans is more likely to reflect such an assumption. The assumed boundaries of political patiency in this literature would in that case coincide with the “common-sense view of moral standing” (Jaworska & Tannenbaum, 2019). On the common-sense view, humans (with the possible exception of foetuses and those in a persistent vegetative state) have full moral standing. Humans have a higher moral standing than animals, although animals also have some moral standing. This difference in moral standing has proven difficult to account for philosophically (Jaworska & Tannenbaum, 2019), which suggests that it would also be difficult to formulate a conception of political patiency that includes all human entities and excludes all non-human entities. If this is the case, a coherent conception of political patiency will be over-inclusive, under-inclusive or both in relation to the view of political patiency assumed in much of the literature on AAP.
Inspired by the literature on moral standing, we can distinguish two other conceptions of political patiency that limit democratic inclusion to entities with psychological interests. The first holds that political patiency requires sophisticated cognitive capacities, whereas the second holds that political patiency requires only rudimentary cognitive capacities.

4.2 Autonomy

The sophisticated cognitive capacity conception of moral standing, or moral patiency, traces back to Immanuel Kant. On his account, autonomy, or the capacity to set ends and act upon them, is a necessary requirement for having full moral standing (Jaworska & Tannenbaum, 2019). The Kantian account of moral patiency, also referred to as the “standard position” (Gunkel, 2012, 95) or the functional conception, treats moral patiency as the flipside of moral agency: those and only those with a capacity for moral agency have moral patiency. This intimate connection between agency and patiency has been challenged by scholars discussing the moral status of non-human animals (see Gunkel, 2012).

Autonomy has been put forward as an important value also in the literature on democratic inclusion. Arash Abizadeh (2008) argues for a version of ASP requiring inclusion in the demos of all those and only those coerced by democratic decisions. Here, autonomy is what grounds democratic inclusion: all those and only those whose autonomy is invaded by democratic decisions ought to be included in the demos making these decisions. It follows from this account that those who lack the cognitive capacities necessary for autonomy do not have a justified claim to democratic inclusion. The cognitive capacities required are those needed for formulating and pursuing personal projects (Abizadeh, 2008). In relation to AAP, something along the lines of the sophisticated cognitive capacity conception of patiency has been suggested by Kim Angell (2020). On his account, the domain of interests that warrant democratic inclusion is limited to “people’s interest in leading an autonomous life” (Angell, 2020). The main rationale for limiting the domain of interests in this way is that it avoids the counterintuitive implications of AAP as usually formulated with regard to the democratic inclusion of children and tourists (Angell, 2020). Children lack the cognitive capacities necessary for autonomy and ought therefore to be excluded on this account. With the domain of interests that warrant democratic inclusion limited in this respect, the scope of inclusion of this version of AAP is similar to that of ASP discussed in the previous section. Understood in this way, AAP will exclude not only children but also (most) non-human animals, people with intellectual disabilities and narrow AI. Future AI with a capacity to autonomously formulate, revise and pursue life-plans will be included on this account. Indeed, the life-plan version of AAP seems to require the inclusion of AI with these capacities even if these artificial entities do not have an interest in personal autonomy.

The scope of democratic inclusion on the categorical dimension following this life-plan version of AAP is intuitively plausible in the sense that it coincides with current democratic practices. It could nonetheless be argued to be based on an under-inclusive conception of political patiency. Although it seems plausible to include only entities capable of political agency, limiting the domain of political standing to entities capable of political agency seems implausible. Limiting the scope of inclusion in this way by limiting the domain of interests that warrant democratic inclusion seems to imply not only that children and people with intellectual disabilities could be excluded from the demos but also that the interests of children and people with intellectual disabilities do not deserve political consideration. Put differently, limiting the scope of inclusion by adding a patiency requirement according to which agency is a requirement for patiency would not only limit the scope of inclusion but also the scope of interests that are worthy of political concern.

Requiring sophisticated cognitive capacities for democratic inclusion does seem reasonable, since these capacities could be argued to be a precondition for political agency. But requiring sophisticated cognitive capacities for political patiency seems problematically under-inclusive. The domain of interests that warrant democratic inclusion should therefore not be limited to interests that require sophisticated cognitive capacities. A more plausible alternative is to limit the domain of interests that warrant democratic inclusion to interests that require only rudimentary cognitive capacities.

4.3 Consciousness

In the literature on moral patiency, the main alternative to the Kantian or functional conception is the experiential conception (Gunkel, 2012). Just like the functional conception of moral standing, the experiential conception connects patiency to certain cognitive capacities. But the cognitive capacities required for patiency are different. The experiential conception does not require the sophisticated cognitive capacities required for autonomy. What is required instead are the cognitive capacities needed for experiencing pleasure, pain, welfare, or harm. This conception of patiency, or moral standing, has been developed in the literature on animal ethics, and the decisive difference between entities that qualify as patients and entities that do not is consciousness (or self-consciousness) (Jaworska & Tannenbaum, 2019).

Understood as a conception of political patiency, this view holds that entities with a capacity for conscious experiences are political patients and thus have interests that are worthy of political concern. Hence, having a capacity for consciousness is what qualifies an entity as a political patient on this account. Reformulated along these lines, the scope of inclusion of AAP would not be limited to adult human entities. All entities with a capacity for consciousness, including children, people with severe intellectual disabilities, and animals, are within its scope of inclusion. That AAP could be interpreted as a principle that is radically categorically inclusive in this respect has been recognized by others (Campos, 2019; Garner, 2017; Saunders, 2012). However, the scope of inclusion on this interpretation of AAP is not categorically unlimited. Non-sentient biological organisms, nature, and currently existing narrow AI lack the capacities required for consciousness and could therefore be excluded.

On this interpretation of AAP, the categorical scope of inclusion is determined by which entities have a capacity for consciousness. The question of the democratic inclusion of future, more sophisticated AI therefore turns into the question of whether AI will develop something equivalent to human consciousness. Adherents of this interpretation of AAP should join those who have argued that a developed capacity for consciousness is a necessary condition for AI to deserve moral concern (Mosakas, 2021) or to be a holder of direct rights (Andreotta, 2021), and argue that AI should not be treated as an entity with an interest that warrants democratic inclusion until AI with these capacities has been developed.

It should be noted here that a conscious machine with the capacity for conscious experiences does not necessarily possess the intellectual capacities required for political agency. AI could be developed into a “mere patient” without the capacity for agency. Such machines can be harmed and are not to be treated as mere machines (cf. Bryson, 2010). An AI that develops into a “mere patient” would be similar to non-human animals in this important respect. Currently existing AI, which lacks the capacity for conscious experiences, differs from sentient biological entities (like humans and animals) in this important respect (Johnson & Verdicchio, 2018). This is a difference that should be decisive for democratic inclusion from the perspective of this version of AAP.

5 Concluding discussion

What are the non-relational properties required for inclusion in the demos? The answer determines whether intelligent artificial entities are or can be eligible for democratic inclusion, as it is less controversial that AI satisfies the relevant relational requirements: AIs can be either affected by or subjected to collective decisions. But as argued here, it is less clear that AIs do or can satisfy agency and patiency requirements. Democratic inclusion cannot be conclusively determined from the fact that an entity is either subjected to or affected by public decisions: only agents and/or patients qualify as members of the demos.

The general import of this conclusion is that the debate about democratic inclusion should move beyond an exclusive focus on the relational requirements of AAP and ASP. This becomes particularly clear when we acknowledge that the relational requirements of these principles could be taken to require the inclusion in the demos of currently existing AI systems. The relational requirements offered by these principles determine the spatial and temporal boundaries of democratic inclusion. But the categorical extension of the demos depends also on agency requirements and patiency requirements, as recognized by ASP and AAP, respectively.

The agency requirements identified by the distinct versions of ASP are legal personality, the capacity to comply with rules and the ability to recognize legitimate authority. While the first two conditions either are or could be satisfied by AI, it is presently uncertain that AI could develop the capacity for moral reasoning required to satisfy the third condition. Hence, on at least one version of ASP, there is reason to doubt that AI will ever be entitled to democratic inclusion. On the other hand, on at least one version of the ASP, there is reason to conclude that AI might or perhaps already should be included in the demos.

The patiency requirement implicit in AAP suggests that democratic inclusion is premised on a capacity for conscious experiences. Understood in this way, AAP does not require the inclusion of currently existing AI. However, this formulation of AAP might still be over-inclusive in relation to the scope of inclusion that AAP is usually taken to imply. It would stretch the boundaries of the demos to include children and, indeed, to extend beyond the human domain altogether, as some animals do possess a capacity for conscious experiences. As noted in the first section of this paper, it is far from obvious that AI will develop consciousness. But if it does, AI would satisfy the patiency conditions for democratic inclusion as specified by AAP.

In the end, both patiency and agency requirements must be incorporated into a plausible account of the conditions for democratic inclusion. The point is that the patiency requirements associated with AAP are relevant also for ASP, and the agency requirements associated with ASP are relevant also for AAP.

This is illustrated by the fact that a capacity for agency may not be sufficient for patiency. Hence, even if AI would develop a capacity for agency (moral or political), it is not necessarily the case that AI develops political (or moral) patiency of the relevant kind. The reverse is also imaginable. An entity with moral patiency does not necessarily satisfy the relevant requirements of agency (Saunders, 2012). Hence, even if AI would develop a capacity for patiency, it is not necessarily the case that AI develops agency of the relevant kind.

It has been suggested here that currently existing AI poses a challenge to AAP and ASP understood as principles that require the inclusion of all entities that satisfy the relational requirement. It could be argued that possible future AI poses a challenge also to the versions of AAP and ASP developed in this paper. The “experiential machine”, with the capacity for political patiency but without the capacity for political agency, poses a problem for AAP, while the “morally intelligent machine”, with a capacity for political agency but without a capacity for political patiency, poses a problem for ASP.

However, AAP and ASP may both harbour the resources to cope with this challenge. Arguably, the importance of agency and patiency is implicit in the normative rationales for democratic inclusion on either principle. Advocates of AAP typically believe that the affected should be included in decisions because inclusion extends to them control and influence (Goodin, 2007; Hultin Rosenberg, 2020). If control and influence can be exercised only by political agents, it follows that democratic inclusion could be limited to agents also on AAP. Similarly, advocates of ASP argue that democratic inclusion is required for the subjects of collective decisions because subjection imperils their freedom, understood either as autonomy (Abizadeh, 2008) or non-domination (Beckman & Hultin Rosenberg, 2018). Democratic inclusion should be limited to political patients on ASP because only patients possess the relevant interests in freedom.