This paper explores and ultimately defends the perhaps surprising and intriguing claim that artificial intelligence (AI) can become part of the person, in a robust sense. The worry that AI may disrupt personhood has been voiced from various directions and has captured the attention of the public (Baker 2013; Schneider and Turner 2020). But what this means more precisely often remains vague. A frequently debated question is whether AI, or rather machines such as robots running on AI algorithms, can become novel distinct persons and thereby holders of rights and bearers of duties, the personification of AI (e.g., Santosuosso 2015; Gunkel 2022; Solum 2020). This is not the topic of this paper. Rather, it examines the merging of body, mind, and machine, the question of whether AI-devices can become part of already existing natural persons, and the normative consequences this may entail. This transformation of material things and immaterial software into parts of the person shall be called, for lack of a better term, “empersonification”.Footnote 1 Although the merging of body and technology, or minds and machines, is a recurrent motif of futuristic novels and transhumanist imaginaries, the conditions and consequences of empersonification have not received much scholarly analysis (but see Baker 2013). To start, it is worth noting that AI does not exist by itself but requires a material substrate such as a computer or machine that implements its code and algorithms. The more precise question is thus whether the bundle of AI and its material substrate can become part of the person. This paper examines this question using the example of biotechnological devices functionally integrated with humans and operated by AI, such as brain-computer interfaces (BCIs). They are introduced in the first section. Assessing whether empersonification is empirically and conceptually possible requires a firmer understanding of what it means to be a person and a part thereof.
These aspects are addressed from legal and philosophical perspectives in the second section.Footnote 2 The third section analyzes three normative consequences of the empersonification of AI-devices. First, by becoming parts of the person, AI-devices cease to exist as independent legal entities and come to enjoy the special legal protection of persons. Second, because of the legal maxim that persons cannot be owned by others, third parties such as manufacturers of devices or authors of their software may lose (intellectual) property rights in the AI-device. Third, the persons of whom AI-devices have become part may be held responsible for the outputs of these devices in the same way that they are for the outputs of other parts of their person, such as their unconscious. Some implications of the analysis may appear counterintuitive. However, so it is argued, they are reasonable and justified, and they illustrate the intensity of the transformations that some AI technologies may bring about. The analysis should also prompt lawmakers and regulators to define the scope of empersonification for reasons of clarity and certainty for users, manufacturers, and developers of AI-devices.

1 Artificial intelligence in neurotechnologies

The key characteristic of the technologies featured in the following debate is that they are functionally integrated with the human person, especially with the human mind. As a vivid example, suppose a brain implant, a chip with several electrodes, can detect brain signals and stimulate brain areas. Detection and processing of brain signals are achieved by machine learning algorithms. They learn to identify and decode the relevant signals, filter out a large amount of noise, and dynamically adapt to the plastic, self-transforming brain. These tasks require feedback training and unsupervised learning in the wild. The device can also stimulate specific areas of the brain via electric currents to increase or inhibit the local activity and functioning of brain areas, which may cause various mental effects. The parameters of the stimulation—whether, when, where, rhythm, and intensity—are controlled by other machine learning algorithms that respond to the outputs of the first algorithms, which process the detected brain signals. Such a device forms a system in which several self-learning algorithms contribute to stimulating the brain to create or modify various mental effects. It is worth mentioning that the biological brain is, of course, another relevant element of the system. It might well be that the brain adapts its processing to the exposure to the AI-device in some way. Even more, the users of these systems learn, via feedback and reinforcement learning, to adapt their brain activity so that it becomes better detectable. This works largely non-consciously; although people do not explicitly know how they alter their brain signals, they succeed in doing so (see reports in Kögel et al. 2020). Thus, there is a bi-directional adaptation process between brain and device, involving several AI algorithms.
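The two-stage architecture just described, in which one set of algorithms decodes signals while another controls stimulation in response, can be illustrated with a deliberately simplified sketch. Everything below (the class names, the moving-average "decoder", the proportional "controller") is a hypothetical stand-in for the trained machine-learning components such devices actually use; the sketch only mirrors the data flow described in the text.

```python
import random

class Decoder:
    """Stand-in for the first set of algorithms: filters noise and tracks a
    slowly drifting baseline, crudely mimicking adaptation to the plastic brain."""
    def __init__(self, alpha=0.1):
        self.baseline = 0.0   # learned online, cycle by cycle
        self.alpha = alpha

    def decode(self, raw_samples):
        level = sum(raw_samples) / len(raw_samples)            # crude noise filter
        self.baseline += self.alpha * (level - self.baseline)  # adapt baseline
        return level - self.baseline  # signal relative to the learned baseline

class StimulationController:
    """Stand-in for the second set of algorithms: chooses a stimulation
    intensity in response to the decoder's output (proportional control)."""
    def __init__(self, target=0.0, gain=0.8):
        self.target = target
        self.gain = gain

    def choose(self, decoded):
        return self.gain * (self.target - decoded)

def closed_loop_step(decoder, controller, raw_samples):
    """One brain -> device -> brain cycle: decode, then pick stimulation."""
    decoded = decoder.decode(raw_samples)
    return decoded, controller.choose(decoded)

if __name__ == "__main__":
    random.seed(0)
    decoder, controller = Decoder(), StimulationController()
    for _ in range(5):  # a few cycles of simulated, noisy brain signals
        samples = [random.gauss(0.3, 0.05) for _ in range(50)]
        decoded, stim = closed_loop_step(decoder, controller, samples)
    print(f"decoded (relative to baseline): {decoded:+.3f}, stimulation: {stim:+.3f}")
```

The adapting baseline loosely mirrors the bi-directional adaptation noted above: the decoder adjusts to drifting activity, and its output in turn determines the stimulation parameters.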

Furthermore, the brain signals detected by the device may be used as inputs into other computational processes, or for controlling devices from limb prostheses to video games. Such direct connections between computers and brains at the level of the central nervous system are called Brain Computer Interfaces (BCIs; for an introduction see Wolpaw and Wolpaw 2012). At the forefront of current neurotechnological advances, prototypes with various functions, mostly the control of motor prostheses, have been in use with patients for a couple of years (Chaudhary et al. 2016). Most BCIs rely on machine learning algorithms. The prospects of future neurotechnological innovations are closely tied to AI, not least because of the massive amounts of data to be processed and filtered (Friedrich et al. 2021). In general, AI-driven neurotechnologies exhibit the familiar ethically problematic features of AI, such as the opacity of self-learning algorithms and potentially unforeseeable outputs.

Our exemplary device is bi-directional (brain → device → brain) and works without conscious input by users (“closed-loop”). It is not a mere figment of thought experiments. Such devices are currently under development for the treatment of psychiatric disorders such as depression, with prototypes studied in patients (Scangos et al. 2021). An affective closed-loop device may detect and monitor brain signals, infer the mood of the person from them, and then modulate that mood by stimulating brain activity. In this setup, the AI-device can be said to (partly) control the moods of the person. More broadly speaking, such devices may create hybrid minds: mental systems that operate on two cognitive systems, the biological human brain/mind as well as the technological AI-device (Bublitz et al. 2022). Hybrid minds are often instances of extended minds (Clark and Chalmers 1998). Irrespective of that, they are an interesting object of study with distinct features, such as the bi-directional adaptation between brain and AI, that raise distinct philosophical and legal questions, among them empersonification.

2 Empersonification: AI-devices becoming a part of the person

The transformation of things into parts of the person requires clarifying two aspects: what “person” means in this context—the question of personhood—and what a part of a person is—the question of parthood. We begin with the first. “Person” is a multivalent term; several overlapping discourses concern different aspects of being a person. One is metaphysical and asks what persons or humans essentially are: bodies, souls, Cartesian Egos, material entities? A different, widely debated question concerns diachronic personal identity, i.e., what makes persons persist over time: the continuation of parts of the body, the mind, or memories of the personal past? These are interesting questions, but they lie outside of the present inquiry. The following is instead concerned with persons, as opposed to nonpersons, in light of personhood, a normative status ascribed to entities under certain conditions.

2.1 Legal personhood

Personhood is relevant in law and other normative systems that consider persons as the basic units and reference objects of norms. Being a person in law means being a potential holder of rights and bearer of duties.Footnote 3 Article 6 of the Universal Declaration of Human Rights (UDHR) declares “everyone has the right to recognition everywhere as a person before the law”. This marks a condition of legal personhood. “Everyone” here means every human being; species membership is thus a sufficient condition of legal personhood.

However, personhood of this kind is often only a weak position; it is merely the entry level for being a rightholder. The important further condition, often contested in courts, is the legal power to exercise rights freely and independently of others. Only then is a person a free and autonomous rightholder. The power to exercise one’s rights is called legal capacity. Persons with legal capacity are entitled to make decisions for themselves and to have them respected by the law and others. As a flipside, people with capacity are held responsible for the consequences of their actions. Let us call persons who possess legal capacity “personsC”. People lacking legal capacity need guardians or substitute decision-makers who make decisions on their behalf, and they are therefore not free and independent rightholders.Footnote 4 Legal capacity has several requirements, including a set of (often necessary) factual mental properties. Although the details vary between jurisdictions, typical candidates are capacities for reasoning, appreciating moral and legal norms, and understanding the situation one faces, the options one has, and their potential consequences; a preference structure; powers of self-control and a basic level of mental stability; memory; and further capacities to form, reflect on, and express one’s will. In addition, legal capacity often negatively requires the absence of undermining factors such as severe mental disorders. In short, being a personC requires a bundle of mental properties. So much for the legal aspects of personhood.

2.2 Philosophical personhood

Philosophical conceptions of persons are manifold and speak to a variety of topics; many accounts of persons or personal identity mainly concern the question of persistence through time. But when it comes to the presently interesting question about the features that make entities into persons and confer personhood, there is some convergence. Most accounts conceive of persons as entities with a bundle of properties, in which mental capacities (or the potential to develop them) take a prime place (Olson 2022). According to the traditional view, the hallmark of persons is that they are rational entities. In contemporary debates, having consciousness or a first-person perspective has emerged as a central criterion for personhood (Baker 2000). In addition, various accounts put forward different, sometimes elaborate sets of necessary and sufficient conditions (e.g., Dennett 1976; see also Farah and Heberlein 2007). Mental capacities are crucial in many of them.

Putting aside some presently irrelevant criteria such as species membership or age, a structural similarity between philosophical conceptions of personhood and the legal conception of personsC emerges: both require that entities possess a set of mental properties (varying in kind and strength across different accounts). Let us henceforth call those properties “person-making features” and the entities that possess them simply “persons”. This conception of persons, based on the common ground between many standard accounts and abstracting away from details, is the one relevant in the following.

2.3 Parthood

The second question concerns parthood. What might being part of a person mean? In general, the relation between wholes and their parts is addressed by the field of mereology (Varzi 2019). These relations are familiar with respect to spatiotemporal entities, but mereology can also apply to abstract entities. However, it seems inapplicable to a normative status; asked in this way, the question rests on a category mistake. Personhood is something that entities may have or lack, but it is not something that has parts or can be partitioned. A more adequate question is thus: can AI-devices become part of the entity that has personhood? It might be approached from several angles: from subjective perspectives, from a commonsensical and legal understanding of persons, and from conceptual or constitutive accounts.
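For orientation, the core axioms of classical mereology, with $P(x, y)$ read as "$x$ is a part of $y$", can be stated as follows (see standard presentations such as Varzi 2019); the transitivity axiom is the one put to work later in the argument:

```latex
\begin{align*}
&\text{Reflexivity:}  && P(x, x)\\
&\text{Antisymmetry:} && \bigl(P(x, y) \land P(y, x)\bigr) \rightarrow x = y\\
&\text{Transitivity:} && \bigl(P(x, y) \land P(y, z)\bigr) \rightarrow P(x, z)
\end{align*}
```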

2.3.1 Subjective perspectives

Subjective approaches may draw on the phenomenological experience of using and interacting with AI-devices. Does our exemplary closed-loop brain stimulator feel to users like a part of them; do they experience it as part of their self; does the affectivity generated by it feel like their own? These slightly different empirical questions need to be explored by phenomenological analyses of the lived experiences of users, which will differ between persons and devices. Some users may not notice a difference in the quality or intensity of their affectivity or in the conditions under which it waxes and wanes. Others may feel alienated or estranged from their affectivity or the device. For instance, users of our exemplary device may wonder about the source of their moods. When they feel down, they may wonder whether the device stimulated their brain at all, why that might not have happened, and whether it failed to detect the relevant brain signals or assessed their mood differently than they themselves did, bringing them to question their affectivity. Conversely, users in good spirits may wonder whether their affectivity is rooted in the world and the events they experience, or whether it is merely generated by stimulation. Given the opaqueness of the workings of the AI, people may not always find clear answers to these questions, which relate to deeper aspects of their personality such as their emotional proclivities. This uncertainty may lead to self-questioning and self-monitoring, which may in turn lead to people experiencing the device and the mental states it generates as something alien (pilot studies indicate this possibility; see Gilbert 2018; Gilbert et al. 2019).

Another subjective approach to empersonification may draw on users’ considered judgments. Do they, upon reflection, consider devices to be part of their self or their person? This requires interviews and surveys of users. The weakness of this approach is that people will understand something being part of a person, or of themselves, in various and often inconsistent ways, and their answers may often rely on their varying lived experiences. Most importantly, the underlying conception of the person is regularly not the technical one relevant for ethical and legal purposes. Personhood in that latter sense is primarily an external ascription, not a self-definition. Thus, although the subjective dimensions of engaging with and integrating AI-devices promise to be fascinating and are in need of detailed studies, they may not speak to the present question and will be left aside in the following.

2.3.2 Natural persons and parthood

Turning to objective criteria, one may ask whether AI-devices can become part of natural persons, the model figure on which legal personhood is grounded. This requires an answer to the question of what it means to be human, in the sense of species membership. In the present context, however, it is beyond doubt that humans who receive a biotechnological implant remain humans. Species membership is not at issue.Footnote 5 Rather, the question is whether AI-devices can become part of humans. There are two ways to answer it. One is negative and holds that all parts of the entity need to have some human property, e.g., being organic (or something along these lines). As a consequence, the AI-device cannot be part of it. However, it is not clear that, or why, such strong conditions should be stipulated; this point will resurface in the following discussion. Suffice it here to mark this dismissive line of reasoning: if the conditions of parthood require organic materiality or a specific biological genesis (or some related aspect), AI-devices cannot be part of humans.

Alternatively, one may look for hints about the parts of persons in the law. Fortunately, some provisions protecting persons provide indications about their parts. For instance, Article 3 of the Charter of Fundamental Rights of the European Union (CFREU) protects the integrity of the person. Its first section specifies this as including the integrity of the body and the mind. This suggests, unsurprisingly, that body and mind are parts of the person. This allows reframing our question: can AI-devices become part of the body or the mind?

Part of the body

Fortunately, there is some literature on both questions to draw upon. With respect to the body, the relation between the material device (e.g., implant, chip) and the body stands in the foreground. Although it might appear evident at first glance, the question of what a body is—and what its parts might be—is controversial. Feminist writers and disability scholars have called simple biological understandings of the body into question (Haraway 2013; Fraser and Greco 2005). Phenomenologists have studied the experiential limits of bodies and how they may temporarily extend to objects, e.g., the embodiment of tools (de Vignemont 2011). The point emerging from these debates is that clear-cut boundaries of the body are hard to define (Bublitz 2022). In the eyes of many scholars, prosthetics like artificial limbs or implants like pacemakers may become part of the body under further conditions. Therefore, AI-devices which are functionally integrated with other bodily processes, mimic biological limbs, or restore ordinary functioning may often be considered parts of the body (Aas 2021). Only accounts that restrict the body to organic materiality seem to take principled exception to such incorporation of devices. But such accounts fail to capture the widespread view that it is not the material, but the functionality or other non-material properties, that matters for something being part of a body. Under this premise, some AI-devices can become part of the body.

Part of the mind

The parallel question with respect to the mind is whether AI-devices may become part of the mind; the focus here lies on the AI software.Footnote 6 Answers require definitions of what the mind is and what its parts are. It seems an impossible feat to provide such definitions without provoking objections from some position in the philosophy of mind. The question shall thus be approached from an established debate. The Extended Mind Thesis (EMT; Clark and Chalmers 1998) essentially holds that the mind can extend beyond the skull and the body into the world and into artifacts like iPads or calculators. A key idea is the parity principle: if a part of the world functions as a process which, were it done in the head, would count as part of the cognitive process, then that part of the world is part of the cognitive process (Clark and Chalmers 1998). Put differently, if a process in the AI-device—and this primarily means the operations of the AI—performs or functions in a way that would count as mental, were it done in the head, it should count as part of the mind. The EMT stipulates a few additional conditions. Roughly, the external element must interact with the person easily, reliably, and constantly, so that she can trust it (“trust and glue”). AI-devices such as our exemplary BCI may fulfill these conditions: the process is controlled by the AI software, which runs on a device that is implanted in the body, directly interfaces with the central nervous system, and reliably and constantly interacts with it. The AI-driven process may thus count as part of the mind.

The question becomes more challenging if one rejects the EMT. For instance, one may call the comparison between mind and software into question. Both might be said to compute data or process information, and the were-it-done-in-the-head clause seems to treat them alike. But software metaphors for the mind have limits and likely fall short at some point (see Brette 2022; Richards and Lillicrap 2022). Nonetheless, this implies neither that algorithms cannot be part of the mind, nor that some mental modules could not be replaced by AI-devices. In other words, even though the mind is not a computer, AI-devices might contribute to mental functioning, and the software running on them may be seen as part of the mind. This is not precluded by the limited nature of the software metaphor.

Moreover, the standard objection to the EMT seeks to uphold the distinction between mind and world, with the skull or the body as the boundary line. As a consequence, AI-devices beyond this line cannot be part of the body or the mind. However, our present interest lies in devices within the bodily envelope, in intracranial neurotechnologies, which can themselves be part of the body (supra). The standard objection against the EMT seems inapplicable to them. Other objections against the EMT draw on a metaphysics that strictly distinguishes persons and things. So does the law (Kurki 2019). However, all such accounts must define boundary criteria. Nothing seems to speak in principle for excluding AI-devices, except a condition that the mind must be fully realized in organic or biological matter. But this seems to be a stipulation, and not a necessary one. Minds are defined by their mental characteristics, not by a specific physical substrate, to which they might be tied only contingently.Footnote 7 This is why the idea of the multiple realizability of the mental is commonplace. Demanding that minds be realized fully biologically requires justification and seems even less plausible with respect to the mind than with respect to the body. Therefore, the claim that AI-devices located inside the body, integrated with the mind, and running processes that, were they done in the head, would count as mental, can become part of the mind is reasonable—even if one does not endorse the extension of the mind into the world.

The foregoing has shown that some AI-devices may become part of the body, the mind, or both, under reasonable accounts of what being part of these entities means. In a next step, the mereological rule of transitivity can be applied: a part of a part of an entity is a part of that entity. Some AI-devices are part of the body or the mind, which are in turn parts of the person. Therefore, AI-devices can be a part of the person—Q.E.D.

Objections

Consider two possible objections. A factor limiting empersonification might be the spatial boundaries of the person or the body. Suppose the chip implanted in the brain has only limited computing power. At night, the recorded brain data are transferred wirelessly from the chip to a cloud that does a large share of the computing. One may wonder whether the AI in the cloud becomes part of the mind in this setup. As a reply, I suggest that this question largely falls into the familiar tracks of the Extended Mind debate. In principle, it seems possible to consider the AI in the cloud as part of the functionally integrated mechanisms and hence as part of the mind. But there might be further boundary criteria, e.g., the spatial bounds of bodies or the cranium, which may be normatively relevant. In this example, there might be the further normative demand that every individual mechanism can only belong to one person at a given moment in time, because persons are by definition distinct individuals. This demand might be violated when the cloud computes the data of many people at night. Such background conditions of the Extended Mind debate resurface here. Nonetheless, such boundary cases do not provide a decisive counterexample to the main claim presented here, the general possibility of AI becoming part of the mind or the person. That claim is not refuted by examples suggesting that empersonification depends on details of the system architecture, such as the physical location of the substrate of the AI.

Another objection may accuse the foregoing analysis of being too broad. After all, even fingernails can become part of the person. Doing justice to the special status of persons might call for more restrictive conceptions which exclude contingently attached elements. In reply, the following section sketches a minimalist, narrower conception that is restricted to person-making features such as the mental properties necessary for legal capacity.

2.3.3 A minimalist approach

A minimalist approach may argue as follows: an entity is a person in virtue of possessing person-making features. Among these features are mental capacities or dispositions, which are evidenced in the behavior of the entity. These are the levels of definition and evidence. But there is another level: the mechanisms underlying and enabling these capacities and dispositions. The proposal is that these mechanisms are part of the entity which is a person. Otherwise, if these mechanisms were not part of that entity—but rather independent entities or artifacts—the person would possess neither them nor the capacities enabled by them. In other words, it would be contradictory to say that an entity has the relevant person-making features but that the mechanisms enabling them are not part of that entity. In which sense would it then have these capacities? The reasoning is thus: when an entity possesses mental capacities or dispositions, the mechanisms realizing these capacities or dispositions should count as parts of that entity. In humans, these realizers are usually cerebral and mental mechanisms, but they might also be implants and AI-devices, e.g., those that enable communication, mental stability, or self-control. Thus, even under a minimalist conception of the person, AI-devices may become part of it.
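The minimalist inference can be rendered schematically. The notation is introduced here purely for illustration: $e$ is an entity, $F$ a person-making capacity, and $m$ the mechanism realizing $F$ in $e$:

```latex
\begin{align*}
&\text{(1)}\quad \mathit{Has}(e, F) && \text{$e$ possesses the person-making capacity $F$}\\
&\text{(2)}\quad \mathit{Realizes}(m, F) && \text{the mechanism $m$ realizes $F$ in $e$}\\
&\text{(3)}\quad \bigl(\mathit{Has}(e, F) \land \mathit{Realizes}(m, F)\bigr) \rightarrow \mathit{Part}(m, e) && \text{the minimalist premise}\\
&\therefore\quad \mathit{Part}(m, e) && \text{$m$ is a part of $e$}
\end{align*}
```

On this rendering, whether $m$ is a brain circuit or an implanted AI-device is immaterial; premise (3) does all the work.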

2.3.4 Constitutive account

Some may find the foregoing approaches off the mark because the relation between mechanisms, person-making features, and persons is not adequately captured by a mereological framework; it is not a part–whole relationship. The alternative is constitutive relations. Rather than asking whether AI-devices can become part of persons, the question is whether AI-devices can constitute persons, or at least be constitutive elements of them. Constitution usually concerns explanations of the causal powers or dispositions of an entity (Ylikoski 2013). Explaining why a table has the causal powers it has, e.g., keeping the laptop on it from falling, involves reference to the atoms which constitute the table. Likewise, explaining the person-making features of an entity will involve reference to the mechanisms affording them. And just as the atoms constitute the table, these mechanisms—among them the AI-device—constitute the relevant mental properties.

A key contemporary philosophical account of personhood, Lynne Baker’s Constitution View, draws on this idea. To her, persons are beings with a robust first-person perspective (including, among other things, mental capacities for language and self-reflection; Baker 2019). In humans, this first-person perspective is constituted by bodies. Therefore, according to Baker, bodies constitute persons. However, Baker accepts that relevantly similar first-person perspectives might arise in non-organic or biotechnological bodies, so that Martians or robots might be persons as well; personhood is not conditioned upon a biological body, only the robust first-person perspective matters. Baker’s view is open to the possibility of AI-devices constituting persons. Thus, while rejecting the mereology and the language of parthood, the Constitution View corresponds with the foregoing findings as well. AI-devices may be constitutive elements of persons and thereby become empersonified.

2.3.5 Opposing views

The foregoing laid out several ways of thinking about parthood that roughly converge on the conclusion that AI-devices can become empersonified, either as parts of the person or as constitutive elements of it. One may thus wonder which positions would exclude the possibility of empersonified AI-devices. The only ones denying it in principle seem to be strong biological positions, which demand that all parts of the person be of organic origin or biological genesis, or meet related conditions unfulfillable by AI-devices. In their support, it might be argued that—at least for the law—being human is, or should be, a necessary condition for personhood, and that technological devices can never be part of a human. But that latter claim would need substantiation by an argument which is hard to conceive. It seems to be an ad hoc stipulation that runs counter to the widely held view which sees no principled problem in considering things such as tooth implants, pacemakers, or artificial hips as parts of the human body or the person. There is a parallel position in the notably different debate about diachronic identity: animalism. It holds that persons persist over time if (and only if) their living organisms persist. But even animalism does not necessarily hold that all components of this organism must be organic.Footnote 8 And more generally, if one is open to the possibility of non-human, non-organic persons such as Martians or robots, animalism does not gain much traction as a comprehensive theory of personhood.

In sum: according to mainstream conceptions of personhood and parthood, AI-devices may become parts of the person, or constitutive elements of it, and hence become empersonified. This is an intriguing result. Let us call devices that may become empersonified “AI-devicesP”. Which devices qualify depends on the conception of the person one favors. But even under the minimalist account, AI-devices enabling or assisting person-making features qualify, i.e., those affording reasoning, understanding of norms, communicating and expressing a will, controlling symptoms of psychiatric disorders, and other such features. The following section lays out some of the key normative implications of empersonification.

3 Normative consequences of empersonification

One may wonder whether the thesis that AI-devices become part of the person is plausible from a commonsensical perspective. Regarding ordinary bodily prostheses such as bionic limbs, the idea appears plausible. And if these can be empersonified, the suggestion that AI-devices integrated with body or mind may likewise become empersonified does not appear particularly surprising or troublesome. However, a second glance may raise worries. After all, it is a salient feature of AIs that they involve self-learning and self-adjusting mechanisms which seem to have “a life of their own”. The trajectory of the further evolution of these mechanisms primarily follows the internal dynamics of the AI, and perhaps their interplay with the environment, but not so much other aspects of the person. Also, the AI is connected to the nervous system only through an interface, so that the informational exchange with the biological organism is likely very limited. Thus, despite its integration into body, mind, and person elaborated upon above, the AI-device seems distanced from the person and her parts; it seems to form an enclave, situated within the person and functionally integrated with it, yet still independent of it. Moreover, the dynamics of self-learning algorithms may often be opaque to users and lie beyond their control. This might create a somewhat strange relation of the person to the AI-device, which has become part of her but somehow still remains independent. In addition, the phenomenological experience of having a hybrid mind is underexplored; users might experience the empersonified AI-device and the mental states it generates as alien and not belonging to them. Thus, the relationship between the empersonified device and the person of whom it has become a part may often be complex and multifaceted. With this in mind, let us turn to some normative consequences of empersonification.
The following will primarily address three legal aspects: persons enjoy special protection; they are free from the rights of others; and they are responsible for their actions unless exceptions apply. These aspects carry over to AI-devicesP, with some interesting consequences.

4 Special protection of AI-devicesP

The degree of legal protection provided to persons is higher than that accorded to physical objects (devices) or software (AI). Human rights law protects the integrity of the person through several provisions, and domestic legal systems have several norms for personal injury in tort or criminal law. By contrast, physical objects or software regularly enjoy significantly weaker protection. Through empersonification, AI-devicesP cease to be independent legal entities and become parts of the person. This may require reinterpreting provisions to encompass AI-devicesP. For instance, damaging an AI-deviceP might constitute assault of a person rather than damage to property.Footnote 9 Further norms, e.g., a duty to repair a defective AI-deviceP, might result from this as well. Such reinterpretations are sometimes challenging and may depend on the wording and details of specific provisions. But by and large, it seems possible to treat AI-devicesP as parts of the person, body, or mind. As a consequence, AI-devicesP will enjoy stronger legal protection than ordinary AI-devices. This results from the extension of the scope of the person and seems normatively justifiable because the reasons for protecting persons regularly apply to AI-devicesP as well.

5 Freedom from rights of third parties

The second implication is equally intriguing and possibly more controversial. According to a maxim of the law, persons are free in the sense that other people cannot have rights over them.Footnote 10 This is the legacy of the abolition of slavery. More precisely, people can neither be owned by others nor be the object of property-like claims of others; persons are not objects of property law at all. This freedom of the person from claims of others extends to her parts, so that an arm or leg cannot be owned by others either. Through empersonification, this freedom from rights of third parties is extended to AI-devicesP. This has several implications. The first concerns the hardware of devices. Insofar as third parties have rights in it, especially property rights, these rights lose their object because the AI-devicesP cease to exist as independent legal entities. Typical examples of affected third parties are manufacturers and sellers who retain property in the device until it is fully paid for (retention of title), or insurance companies which paid for the device used by the insured patient. Their loss of property through empersonification may impact business models, insurance schemes, or contracts about devices. Details about the transfer of property or the need for compensation will vary between jurisdictions and may have to be worked out in detail. In general, the loss of property seems normatively justifiable in light of the no-property-in-persons maxim.Footnote 11

The second implication concerns intellectual property (IP) rights in the software of the devices, including AI algorithms. It is suggested that the maxim of the freedom of the person extends to them as well. Intellectual property comprises various positions; especially relevant are copyright claims over software. Software code is internationally protected as a literary workFootnote 12; its creators possess the same set of rights over the code as authors have over their novels. The catalog of these rights differs slightly between jurisdictions, but they basically concern the use, reproduction, distribution, and alteration of software. Article 6bis(1) of the Berne Convention is one of the salient international norms, granting authors the right to object to "any distortion, mutilation, or other modification" of their work (under further conditions). Article 4(1)(b) of the EU Directive on computer programs (European Union 2009) provides a parallel rule in EU law. It stipulates that the "adaptation, arrangement, and any other alteration of a computer program" are restricted acts which require authorization by the author.Footnote 13 These norms, it is submitted, should be overridden with respect to the specific instantiations of the software code running on empersonified AI-devicesP. In particular, restrictions on altering the program code should be overridden because they amount to restrictions on persons altering parts of themselves. This violates the maxim of the freedom of the person from third parties.Footnote 14 No one should have to ask others for authorization to transform (parts of) oneself.

The same reasoning applies to IP norms that make using the software dependent on authorization, restrict access to the code, or penalize hacking of the code.Footnote 15 They should also lose their effect insofar as they infringe on the freedom of the person. More precisely, the following is suggested:

Regarding the software code of empersonified devices, everyone should have the right to a) use the code; b) access the code; c) make alterations of, and additions to, the code; d) access the data read out by the software; e) control the further processing and sharing of the data.

These rights derive from the maxim of the freedom of the person. As they pertain to the relation of a person to herself, denying them would curtail some of the degrees of freedom people have over themselves. The precise way in which these five claims can be legally guaranteed, e.g., through existing fair use exemptions or a novel "empersonification exemption", must be debated at the more granular level of domestic law. Notably, these rights cannot be restricted contractually through End-User License Agreements, because the freedom of the person is not at the parties' disposal; it is inalienable. However, these exceptions may not apply to the full set of IP claims. For instance, the unauthorized reproduction of the code may still be prohibited, because this prohibition does not interfere with the freedom-of-the-person maxim. The loss of IP rights in AI-devicesP seems to be an unparalleled consequence which may have repercussions on the innovation, development, and marketing of devices. It seems advisable that regulators and lawmakers pass regulations that ensure the freedom of the person, accommodate the interests of manufacturers and authors, and clarify the scope of empersonification.

6 Responsibility for conduct

The third legal implication of empersonification might be the most controversial. According to another maxim of law, persons (in the present sense of personsC) are by default responsible for their conduct. As Locke once remarked, "person" is a forensic concept. The person is the focal point of attribution for the positive and negative consequences of actions and omissions. This responsibility is the correlative of freedom. Roughly, persons are legally responsible for what originates within them unless exceptions apply. Against this backdrop, empersonification has the ramification that persons become responsible for the outputs of AI-devicesP, in the same way and to the same degree that they are responsible for the outputs of their minds in general. That desires, moods, or intentions were caused or significantly shaped by an AI-deviceP does not exempt them from responsibility, unless further exceptional reasons apply. An example: some patients with an implanted deep brain stimulation (DBS) device for Parkinson's disease experience hypersexuality from the stimulation and the related increase in dopamine (Bhargava and Doshi 2008). It makes them behave out of character and inappropriately. Suppose this behavior is also unlawful. As long as patients satisfy the ordinary conditions of responsibility, especially if they possess sufficient self-control over their actions, they are responsible for it. That their sexual desires originated in the DBS stimulation is irrelevant at this stage. Only if they lose control over their behavior, as in uncontrollable urges, do they cease to be responsible.Footnote 16 Some might find this result irritating because the device caused the desires which led the person to act. Even more, persons may be held responsible when devices malfunction, and even when they feel alienated from the device-generated desires or their own behavior.
However, the foregoing argument implies that the AI-deviceP counts as part of the person, so that further distinctions between person and device lose substance.Footnote 17

Although it may appear counterintuitive at first glance, this result is reasonable in a broader perspective. Third parties (victims) have a right not to be bothered by inappropriate behavior, regardless of its origins. To the extent that persons possess sufficient control over their behavior, especially the ability to refrain from acting inappropriately, they are under a duty to do so. Failing to do so incurs responsibility. It is the duty of everyone to avoid harming and bothering others, irrespective of the origins of one's desires. That such behavior is out of character or inauthentic is irrelevant.Footnote 18 Only at the level of sentencing, in choosing appropriate sanctions, may and should such considerations and the origins of desires become relevant. For instance, a different programming of the stimulation might suffice to deter recidivism. In general, persons are treated with respect to conduct arising from AI-devicesP in the same way that they are treated for actions arising from the uncharted depths of their unconscious.

However, some uneasiness about holding persons responsible for the outputs of opaque self-learning mechanisms outside their control may remain. Two points are noteworthy. For one, most desires are to some degree beyond the control of the person, but this is not what responsibility is ascribed for. People are sometimes deeply alienated from their unchosen desires, e.g., problematic sexual fantasies, especially when they motivate socially undesirable behavior. They are held responsible for acting on them nonetheless. Why should this be different with respect to AI-devicesP? Moreover, one cannot have the benefits of empersonification without carrying the burdens. It is incoherent to ascribe stronger protection to AI-devices due to empersonification while rejecting the concomitant responsibilities. This would abandon the functional concept of the person as the basic unit for ascribing rights and duties, and would indeed lead to the feared disruption of personhood, although in a very different sense, as it would erode that functional concept. The law should not embark on this path. Rather, it should seek to accommodate worries in exceptional cases in other ways. For instance, the law has some leeway in defining excusing conditions and exemptions from responsibility. It might be argued that rapid changes in desires or moods should constitute a novel exception to responsibility when they overwhelm persons without sufficient resources for adaptation. This may be the case when self-learning algorithms cause rapid, unforeseeable outputs. However, coherence demands that such an exception be applied to other situations of rapid change as well (e.g., life-changing experiences, drug consumption).

More generally, however, the unconscious might serve as a useful analogy for AI-devicesP. Both are opaque, introspectively inaccessible, and operate to some degree outside of the control of the person. Unsurprisingly, the black box has become the metaphor for both. To the same degree that people are responsible for actions arising from their unconscious, they should be responsible for those arising from AI-devicesP. This analogy might even be generalized further: legal rules and doctrines that apply to the unconscious should, prima facie and mutatis mutandis, apply to AI-devicesP as well.

7 Conclusion

Self-learning algorithms enable technological devices that can adapt to complex entities such as the human organism and the human mind. Neurotechnologies harnessing these mechanisms are currently being developed and may become functionally integrated with the body, the mind, or, more broadly, the person. The foregoing analysis has shown that some of these devices can become empersonified, in a robust sense, under standard conceptions of personhood which demand a set of mental capacities. The person is understood as the basic unit for having and freely exercising rights and bearing duties in normative systems. This is a richer conception than the basic status of legal personhood (which does not include the power to freely exercise rights). AI-devices can be considered part of the body, the mind, and hence, by mereological transitivity, of the person. Alternatively, AI-devices may be seen as constituting person-making features. Becoming part of the person, or being a constitutive element of persons, is called empersonification. For most intents and purposes, some AI-devices can and should be considered empersonified and thereby lose their independent legal existence. Objections to this view seem to turn on strong ideas of biological personhood, demanding that all parts of the body, the mind, or the person be of organic origin (or similar conditions). They do not appear plausible.

The range of empersonifiable AI-devices depends on the conception of persons one endorses. It can be broad, covering everything that is part of the body or the mind, or narrow, covering only those devices that afford or constitute necessary person-making features. Lawmakers should clarify the scope of empersonification. At the very least, it should encompass devices that enable persons to exercise their rights freely. But there might be good reasons for a more expansive approach which comprises many neurotechnological devices.

Empersonification has at least three legal implications: (i) AI-devices cease to be independent entities and come to enjoy the special protection of persons. (ii) The maxim of the freedom of the person from the rights of others demands that third parties such as manufacturers lose rights over the AI-device, both property in the hardware and some intellectual property claims in the software. Put positively, persons should have the rights to use the device; to access, hack, and alter the software code; and to enjoy full sovereignty over the related data. (iii) Persons become responsible for conduct that originated in empersonified AI-devices, unless ordinary exceptions apply, and despite seemingly counterintuitive results. These are three central implications, but there might be more. Lawmakers and regulators are called upon to define the conditions of empersonification and the kinds of affected devices more precisely.

Finally, the fact that AI-devices may become part of the person should give grounds for a moment of reflection about the trajectory of the technology and the merging of human minds, neurotechnologies, and artificial intelligence. The pace of current developments is high; start-ups and venture capital have entered the field. But the "move fast and break things" credo of many companies driving innovation seems ethically impoverished, if not inadequate, when it comes to transforming persons. A cautious, deliberate approach with democratic oversight and clearer definitions of aims worth pursuing and dangers to be avoided is called for. Among other things, it has to assess which mental properties are valuable to have, which might be replaced or attuned, and, more generally, how deeply AI should encroach upon humans and substitute bodily or mental functions. After all, the biological and mental dynamics by which humans operate, often still poorly understood, are being replaced or complemented by the operative logic of self-learning machines. Objectification of the person, alienation from oneself, and the expansion of technical modes of rationality to these domains loom large. That this development is clearly beneficial, all things considered, has yet to be shown. In the meantime, some dimensions of persons might be better left untouched. Above all, it should be ensured that further technological advances into the person observe the spirit of the recent UNESCO Recommendation on the Ethics of Artificial Intelligence (UNESCO 2021) and are deployed in the service of humanity, benefiting all.