1 Introduction

Today, data and intervening digital media provide critical lines of communication with our social and business connections. Even those we know personally typically connect with us via digital means; it is now unusual to know anyone entirely in person. While social and digital media provide essential opportunities for self-(re)presentation, data generated by, about and for us supplements these representations and shapes how we are understood and perceived by others. As a consequence, data and the digital space add a third dimension to the individual: we are now mind, body and digitality.

This essay considers how digitality affects outcomes for the individual by exploring the mechanisms of digital influence. To do so, one must first understand the process of digital translation: that is, the conversion of the real person into a digital form (the ‘digital translation’) and the creation of a context-dependent digital proxy. Digital translation makes the real person visible in digital space. Once digitally visible, the real person becomes the subject of algorithms and other decision-making processes.

This essay uses Charles Sanders Peirce’s theory of semiosis to explain digital translation: the process in which mind, body and digitality unite to inform who we are and who we become. By conceptualising the process of digital translation and the production of the digital proxy, this essay demonstrates how digitality influences the development of the individual and undermines personal autonomy. It thereby provides an essential first step in considering how the law can address the challenges posed by digitality and the digital space.

1.1 A Note About Terminology

‘Digitality’ refers to the interactions which occur through data and the digital space. As the essay demonstrates, digitality undermines the real person’s autonomy by allocating rights or opportunities according to digital proxies rather than to the real person. Rights and opportunities – typically afforded according to personal characteristics such as citizenship, professional qualifications, physical attributes and so on – are now apportioned according to the known (and digitally available) qualities or attributes of the individual. These attributes are digitally informed through a process which excludes individual involvement or oversight.

As this essay applies Peirce’s theory of semiosis, the terms ‘sign’, ‘representamen’, ‘object’, and ‘interpretant’ are employed throughout. The terms ‘object’, ‘representamen’, and ‘interpretant’ will gradually be replaced by ‘real person’, ‘digital translation’, and ‘digital proxy’ (respectively) to simplify the terminology and to provide a clearer link between the process of digital translation, the real person, and digitality.

2 Legal Persons and the Capacity for Autonomy

‘Legal personhood’ or the quality of being a ‘legal person’ is a complex and frequently analysed topic. Visa A. Kurki describes the traditional view of legal personhood as requiring the capacity to hold rights and, in exchange, bear duties to society or one’s community [1].Footnote 1 Discussions of legal personhood often focus on the distinction between legal persons and natural persons [2][3]. The division between ‘legal persons’ and ‘non-legal persons’ is highly consequential. John Dewey noted that the term legal person ‘signifies what law makes it signify’ [4]. Thus, as the legal person remains ‘the constant unit of logic in the legal system’, any change in the meaning of legal personhood will change how law applies at its most basic level [3]. This change will alter how the law works and which entities are ‘legal persons.’

According to most theories of the legal person, legal personhood requires a level of autonomy and capacity that includes the ability to bear rights or comply with obligations [5]. Our legal personality comprises the ‘sum total of the legal relations, actual or potential of a legal person’ [3]. This level of capacity and autonomy typically applies to natural persons, but other entities (such as corporations or rivers) are increasingly granted personhood [6]. For our purposes, this essay considers legal personhood in terms of the rights held by legal persons and the level of personal autonomy necessary to meaningfully exercise those rights. The following section considers ideals of independence, self-determination and liberty and proposes three essential elements necessary for the meaningful exercise of personal autonomy.

2.1 The Three Essential Elements of Autonomy

Immanuel Kant described autonomy as the foundation of human dignity [7]. However, the term ‘autonomy’ has various interpretations. Having autonomy means having the freedom to choose how to act, where to be, or what to think. Additionally, autonomy requires the ability to evaluate and reflect upon a situation [8]. Autonomy should also allow the individual to change their mind (free from coercion, domination, or indoctrination) [9], and consequently their actions [8]. Liberty is essential to autonomy as it empowers the individual to act in response to a ‘change of mind’ [9]. Liberty also demands that the individual be free to choose what they want without significant options being ‘closed off or made less eligible by the actions of other agents’ [8]. Personal autonomy is undermined when the individual is denied the liberty to act based on their decisions. An important corollary of liberty and autonomy is our capacity for self-determination or self-(re)presentation. Self-determination refers to the individual’s ability to make decisions for themselves, while self-(re)presentation relates to the individual’s capacity to define who they are and how they wish to be treated [10]. For the purposes of this essay, we can therefore distil autonomy down to three elements:

1. The ability to define who we are, how we wish to be seen, and how we would like to be treated by others (‘self-definition’) [10];

2. The ability to freely choose from available options without unwarranted limitations or coercion (‘free choice’) [8]; and,

3. The ability to change one’s mind and reflect that changed position in one’s actions (‘change of mind’) [8].

Digital technology significantly undermines personal autonomy by overriding and reducing opportunities for self-(re)presentation. For example, digitality may exclude individuals from certain opportunities [11], miscategorise them as high-risk offenders, raise debts against them in error [12], or co-opt their image for unauthorised use [13]. Personal autonomy, therefore, depends on a notion of the self which is independent and self-determined, worthy of respect and protected from manipulation [8]. To assess how digitality impacts personal autonomy, we must look beyond the obvious outputs of digitality (such as automated decision-making systems (ADMS)) and consider its less visible impact on personal autonomy and individual identity. The following section uses Peirce’s theory to break individual identity down into its constituent parts and conceptualise the different points of digital influence, thereby illustrating the process of digital translation. This depiction also demonstrates the various sites and mechanisms of digital influence and reveals how this process excludes individual involvement or intervention.

3 Identity, Digitality and Autonomy

Henri Matisse once said:

The work is an emanation … a projection of self … my drawings and canvasses are pieces of myself … their totality is Henri Matisse [14].

Here, Matisse refers to his identity or self as what makes Matisse ‘Matisse’ [10]. His statement gives form to the ethereal nature of identity. For Matisse, his work was not only a form of self-expression but also a performance of identity. Erving Goffman describes identity as a ‘performance’ before others [10]. This performance is a foundational aspect of individual identity. When considering Matisse’s quote from a semiotic perspective, we could say Matisse’s identity is the ‘sign’ as represented by him, his reputation and his body of work. The sign, therefore, represents the entirety of understanding linked to a particular object (or, in this case, person).

Peirce’s semiosis explored the creation and expression of meaning based on cognisable stimuli (typically visual or auditory) [15]. For Peirce, the representamen (R) is ‘that which represents’ the sign [15]. Representations, self-expression, and performances are all representamen. The object (O) is the thing represented by the representamen (for example, Matisse’s artistic style, the celebrity body, or one’s political beliefs). However, there is more to the process of significationFootnote 2 than the object and its representamen. Peirce included a third correlate within his triadic sign model: the interpretant (I) [15].

Fig. 1

Peirce’s triadic sign model. In this figure, the sign (S) consists of the object (O) or concept of representation, the representamen (R) as its representative form, and the interpretant (I) as the impression or outcome generated by interpretation

Ferdinand de Saussure’s dyad of ‘signifier’ and ‘signified’ is perhaps the most recognisable terminology used in semiotics. However, Peirce’s interpretant recognises the importance of interpretation and that each interpretation contributes to understandings about the sign itself [16]. Suppose we refer back to our earlier quote from Matisse and argue, as per Saussure, that Matisse’s work is a signifier [17]. We can say the work perhaps signifies an original, bold, colourful and fluid artist and his body of work.Footnote 3 If we consider the quote from a Peircean perspective, we can see that a representamen (such as a piece of work, the above quote, a photograph of the artist himself) may represent Matisse. Matisse, the artist or his artistic style, beliefs, and physical person are also represented (as object). By allowing for an interpretant (I), Peirce recognises that not all representations will generate the same impressions or outcomes in their audience. Peirce’s theory takes Saussure’s dyad of signifier and signified one step further by recognising the importance of the audience and, therefore, the interplay between the representamen, the object, the interpretant and the sign itself. As a consequence, the artist/person, the work, and the impressions generated all inform who we believe ‘Matisse’ to be. Given the collective construction of identity, the link between interpretation and identity allows a deeper appreciation of digitality and its influence on individual identity. Peirce’s theory recognises that the determinations of others create a kind of feedback that shapes how we see ourselves, who we are and what we become.

Figure 1 (above) shows the interactions between Peirce’s sign constituents and how they inform our understanding of an individual’s identity. Figure 2 (below) builds on this model by incorporating the digital space.

Fig. 2

Peirce’s triadic sign model and digital translation. This figure indicates how the sign (S) is shaped by the object (O), its representamen (R), and the interpretant (I). The box marked ‘data entity’ indicates the supplementary influence of digital data. This is represented by a black box [5] to indicate our inability to know which data is included. The dotted line indicates the lack of certainty regarding any data influence on the formation of the representamen. The line between the sign and the representamen has also been replaced by an arrow which indicates that the representamen is produced based on understandings about the sign. The figure includes two interpretants, one in digital space and one in material space, to indicate how different interpretants (and therefore, impressions or outcomes) may be produced in each space. These are linked to the representamen by a dotted black line to indicate how each interpretant is informed by the representamen

As Fig. 2 illustrates, the individual’s presence in the digital space relies on the process of ‘digital translation’ or the rendering of a material individual in digital form [12]. When expressed according to Peirce’s theory, the object or person (O) is translated into a representamen (R) that can act or be acted upon in digital space. Available digital data may also influence the translation process. However, we cannot know which data is involved or the extent of its influence.

Determinations are then made based on the representamen (or ‘digital translation’) to produce an interpretant (I). Figure 2 shows interpretants in both the digital space and the material space. While this is not always the case, both are included in the previous diagram to highlight how digital translation may produce outcomes in one or both mediums. Consequently, these outcomes depend on the representamen produced (as represented by the dotted line in Fig. 2). Nevertheless, the production of a representamen requires data about the person. Representamen may be identifiers such as the individual’s name and date of birth, Tax File Number, or Medicare number. However, they could also be identifiers produced by websites or data processors. Regardless of origin, these identifiers are likely to be informed by other data such as purchasing data, financial information, employment records or social ties. Data from vast and various sources – often of unknown quality – then coalesce to create the individual’s data entity. The data entity (represented by the box labelled ‘data entity’ in Fig. 2) acts as a repository from which algorithms can extract data to make determinations about the individual [11]. Any data captured and connected with a data subject can inform future determinations [19]. The data selected for (or excluded from) inclusion during digital translation will impact the production of the representamen. This data is also likely to affect the final interpretant or digital proxy produced by algorithmic processes and ADMS (automated decision-making systems).
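
To make the idea of the data entity concrete, consider the following minimal sketch (in Python). The sources, field names and values are entirely invented; the only point illustrated is that records of varying provenance and quality coalesce into a single repository from which later processes draw, without the data subject’s review or correction.

# A hypothetical 'data entity' assembled from sources of varying and unverified
# quality. All field names and values are invented for illustration.
identity_records = {"name": "Person X", "date_of_birth": "1990-01-01"}
purchasing_data = {"avg_monthly_spend": 2300, "buy_now_pay_later_accounts": 2}
employment_data = {"employer": "Acme Pty Ltd", "tenure_years": 1.5}
third_party_report = {"recorded_offence": True}  # may be erroneous and of unknown quality

def build_data_entity(*sources: dict) -> dict:
    """Coalesce records of varying provenance into a single repository.
    Later records silently overwrite earlier ones; the data subject has no
    opportunity to review or correct the result."""
    entity = {}
    for source in sources:
        entity.update(source)
    return entity

data_entity = build_data_entity(identity_records, purchasing_data,
                                employment_data, third_party_report)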

Nicholas Diakopoulos refers to algorithms as ‘a series of steps undertaken [to] solve a particular problem or accomplish a defined outcome’ [20]. Algorithms ‘see’ individuals through their data [21] and according to their design. The algorithm’s designer becomes the ‘choice architect’ for their given process [22] and is responsible for important decisions such as data sources, relevant considerations, and the categorisation of results. The conversion of a task or process into an algorithm may also be heavily influenced by local customs, knowledge, or the surrounding context, and these may not align with those of the data subject [23]. Once the selected data is processed, the different data subjects are categorised, with correlations drawn between the members of each category [24]. These correlations shape predictions about the data subject, such as relevant content, future behaviour, or appropriate recommendations [25].
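
The role of the choice architect can be illustrated with a deliberately trivial and hypothetical sketch. The fields, thresholds and category labels below are invented for the example and describe no real system; the point is that every branch encodes a design decision about which data matters, how subjects are categorised, and what the resulting correlations are taken to predict.

# Hypothetical sketch only: the designer as 'choice architect'. The data fields,
# thresholds and labels are design decisions, not facts about the data subject.
data_entity = {"avg_monthly_spend": 2300, "buy_now_pay_later_accounts": 2}

def categorise(entity: dict) -> str:
    """Reduce a data entity to a category using designer-chosen rules."""
    if entity.get("buy_now_pay_later_accounts", 0) >= 2 and entity.get("avg_monthly_spend", 0) > 2000:
        return "high_risk"  # the label itself is the designer's choice
    return "standard"

def predict(category: str) -> dict:
    """Correlations drawn within each category stand in for the individual."""
    correlations = {
        "high_risk": {"likely_default": True, "recommend_credit": False},
        "standard": {"likely_default": False, "recommend_credit": True},
    }
    return correlations[category]

print(predict(categorise(data_entity)))  # {'likely_default': True, 'recommend_credit': False}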

Fig. 3

Algorithms and the representamen. In this figure, Σx, Σy, and Σz represent three different algorithms that may act on a particular data entity and produce the representamens Rx, Ry, and Rz. As there may be many possible approaches to algorithmic determinations, we are unlikely to know which algorithm (and therefore which data and considerations) has informed the interpretant. Thus, the interpretant is indicated by I?

Figure 3 illustrates how algorithms dictate the representamen, thereby shaping all future decisions made within each process. Here, three different algorithms Σx, Σy, and Σz have the capacity to produce three corresponding yet different representamen Rx, Ry, and Rz. As decisions are made based on the representamen, different algorithms will produce different results for the individual or data subject. For the purposes of this essay, the first representamen produced that is specific to the process or determination in question is referred to as the digital translation. In this instance, this may be representamen x, y, or z depending on the data selected by the algorithm. Interpretants (I) are generated in response to the representamen, and algorithms will also inform the production of the interpretant. Interpretants may be ends in themselves: for example, our eligibility for a loan, our ranking within search engine results, or our positioning within a friend’s social media feed. Often the interpretant will be subject to further determinations until a final interpretant/digital proxy is produced.
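
Figure 3’s point can be restated as a small, purely hypothetical sketch: three selection functions, standing in for Σx, Σy and Σz, extract different slices of the same data entity, and the same decision rule consequently produces different interpretants for the same person. All fields, rules and labels are invented.

# Hypothetical sketch of Fig. 3: the same data entity yields different
# representamens (Rx, Ry, Rz) depending on which algorithm selects the data,
# and therefore different interpretants. All fields and rules are invented.
data_entity = {"income": 52000, "existing_debt": 8000,
               "recorded_offence": True,  # possibly erroneous third-party data
               "years_employed": 4}

def sigma_x(entity: dict) -> dict:
    return {"income": entity["income"], "existing_debt": entity["existing_debt"]}

def sigma_y(entity: dict) -> dict:
    return {"recorded_offence": entity["recorded_offence"]}

def sigma_z(entity: dict) -> dict:
    return {"years_employed": entity["years_employed"]}

def interpret(representamen: dict) -> str:
    """One decision rule, acting on whichever representamen it is given."""
    if representamen.get("recorded_offence"):
        return "reject"
    if representamen.get("existing_debt", 0) > 0.2 * representamen.get("income", 0):
        return "refer_for_review"
    return "approve"

for label, sigma in [("Rx", sigma_x), ("Ry", sigma_y), ("Rz", sigma_z)]:
    print(label, interpret(sigma(data_entity)))  # Rx approve, Ry reject, Rz approve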

Figure 4 (below) illustrates the different points of influence in digital and material space.

Fig. 4

Points of influence in digital translation. This figure builds on the preceding figure (Fig. 3) to highlight the sign constituents that are shaped by the intervention of algorithms or ADMS (surrounded by the bold black circle)

These recommendations, rankings and search outcomes structure knowledge [26]. Recommendations amplify some content over others, rankings prioritise some connections or businesses over their competitors, and search results legitimise some results at the expense of other information sources [27]. By shaping the flow of information, the messages individuals receive, or the friends they connect with, algorithms directly influence the individual in and beyond material space in many subtle ways [27].

Thus, while physical exclusion from digital space means that the individual cannot know the digital representamen produced or correct it, subsequent determinations frame their future knowledge and their relationships with others. Even where the individual carefully curates their online expression and performance, digitality still intervenes in the process of digital translation to inform the representamen or interpretant. Many outcomes will result from a series of algorithmic determinations. Peirce referred to this as the production of ‘successive interpretants’: that is, interpretants which are used as representamen and upon which further determinations occur [27]. As a consequence, the individual will have the most input into the representamen formed at the beginning of such a series, while individual input is likely to decrease with each successive determination.

Fig. 5

Runaway determinations. This figure illustrates how each interpretant (I) may become a representamen (R) for further determinations. The black border thickens around each successive interpretant to indicate the increased influence of algorithms and ADMS, and by corollary, the decreasing affordances for individual input

Figure 5 (above) illustrates how each interpretant may also be a representamen for further determinations. This creates ‘runaway determinations’ that are not based on raw data but on previous determinations of unknown origin. Runaway determinations further undermine personal autonomy by excluding individual input and by decreasing the proportion of self-presentation accommodated during the decision-making process. The individual cannot know whether an outcome is based on an early or late interpretant (and therefore cannot know how heavily mediated the outcome may be). Nonetheless, the final interpretant will act as a digital proxy, inform the final outcome and, in turn, affect individual potential.
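
The mechanics of runaway determinations can be sketched, again with wholly invented steps and values, as a chain in which each determination reads only the output of the one before it.

# Hypothetical sketch of 'runaway determinations' in a recommendation pipeline.
# Each step consumes only the previous step's output; the raw data, and any
# chance of individual correction, recedes with every iteration.
def engagement_score(raw: dict) -> dict:
    # first determination: still reads the individual's activity data directly
    return {"score": raw.get("clicks", 0) + 3 * raw.get("shares", 0)}

def audience_segment(interpretant: dict) -> dict:
    # second determination: sees only the score, not the underlying activity
    return {"segment": "high" if interpretant["score"] > 50 else "low"}

def feed_position(interpretant: dict) -> dict:
    # third determination: sees only the segment
    return {"rank": 1 if interpretant["segment"] == "high" else 200}

raw_activity = {"clicks": 12, "shares": 2}  # possibly incomplete or wrong
print(feed_position(audience_segment(engagement_score(raw_activity))))  # {'rank': 200}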

By way of example, imagine person X applies online for a bank loan of $20,000. The individual X is our sign: the combination of their physical body, their mind or spirit and their digitality. Representamen X is the digital translation of person X based on their application and other data. Once representamen X is assessed, it generates digital proxy X, our outcome. In this case, person X is successful and qualifies for a loan of up to $20,000. But which sign constituent was actually offered the loan? Eligibility was not determined based on object X (their physical body or real person). Nor was the loan offered to representamen X (their digital translation). The loan was awarded to digital proxy X: that is, the product of an algorithmic process that assesses set criteria about person X as defined by the algorithm’s designer.
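
The assessment in this example might be sketched as follows, with every field and threshold invented for illustration: the function never sees person X, only digital proxy X.

# Hypothetical sketch of the loan example. The criteria and thresholds are the
# designer's choices; person X can neither inspect nor correct the proxy.
digital_proxy_x = {"declared_income": 65000,
                   "repayment_history_ok": True,  # drawn from past records
                   "existing_debt": 3000}

def assess_loan(proxy: dict, amount: int) -> bool:
    """Decide eligibility from the proxy alone, against designer-set criteria."""
    return (proxy["repayment_history_ok"]
            and proxy["declared_income"] >= 2 * amount  # an invented affordability rule
            and proxy["existing_debt"] < 0.1 * proxy["declared_income"])

print(assess_loan(digital_proxy_x, 20000))  # True: the 'yes' attaches to the proxy, not person X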

In Pandora’s Hope, Bruno Latour refers to this process as ‘shifting out’ [28]. Shifting out occurs where ‘the reader [or in this case, the interpreter] is sent from one plane of reference to another … and results in the production of an internal referent … as if one is dealing with a differentiated world’ [28]. Here, this internal referent is substituted for the real person and is read or interpreted as though the relevant considerations can be adequately addressed based on specified and distinct criteria. This assumes that such criteria can be, and are, accurately captured by the digital proxy within the new and algorithmically informed definition. However, person X knows nothing about their digital proxy or whether it is an accurate reflection of their ability to repay a loan. A recent and salient example is that of an 18-year-old Australian girl, Alanna, who claims to owe over $8,000 to ‘buy now, pay later’ companies. While it appeared that – according to past repayment histories – Alanna would be able to service these loans, this did not accurately reflect reality.Footnote 4 As a result, Alanna now has significant outstanding debts which she cannot afford to repay.

Let us now consider the example of Person Y, who has been repeatedly overlooked for promotions by her current employer and has decided to look elsewhere. Person Y applies online through the centralised recruiting site, ‘1Job.’ 1Job narrows the pool of applicants and forwards only the remaining applications to potential employers for consideration [11]. After Person Y’s applications are ignored, she contacts several employers to ask whether there were issues with her application and forwards a copy of her resume for employer feedback. Person Y is not advised of any issues with her application and is assured that, as the job market is highly competitive in her chosen area, it may take several applications before she succeeds. Person Y reapplies several times but repeatedly fails to progress to the interview stage. As in the previous example, it is not Person Y who has been deemed ineligible, but digital proxy Y. Yet individual Y remains excluded from opportunities to progress in her career. Again, Person Y has limited knowledge of her digital proxy, which may (or may not) be an accurate reflection of her suitability and capacity to meet the requirements of her desired role. Unlike the situation with Person X, Person Y is denied or excluded from a particular opportunity. A similar issue occurred in the U.S., where job applicant Catherine Taylor was repeatedly denied employment because of an erroneously recorded criminal conviction [19]. Ms Taylor had committed no such offence, nor had she been interviewed, approached, or suspected by the police of having committed the relevant offence. However, this record informed the digital proxy on which employment decisions were made. As she had no knowledge of the process or the data which was considered in the awarding of interviews, Ms Taylor was repeatedly denied these opportunities until the matter came to light.

In both instances, the exclusion of person X and person Y (in favour of digital proxies X and Y, respectively) has undermined personal autonomy. When considering element 1 of personal autonomy, or ‘self-definition’, an opportunity to self-define might have protected Alanna from accruing such a significant debt, or enabled Ms Taylor to progress her application through to the interview stage. An additional and fundamental element of personal autonomy has been undermined for Ms Taylor: that of ‘free choice’. As per the definitions set out in Sect. 2.1, her automatic exclusion from contention for specific employment opportunities directly affected her ability to freely choose from available options without unwarranted limitations.

The above two examples demonstrate the (dis)qualification of the individual based on their digital proxy: a digitally generated entity which the individual cannot know, did not produce, and which is not ‘them’. This entity has become the subject and object of law, life, and reality. When this is considered against ideals of personal autonomy, we see that digitality reduces our ability to control or command our individual performance, yet that performance remains determinative. Digitality influences this performance and the process of digital translation. The process will qualify or disqualify real persons and undermine their personal autonomy by denying them the opportunity to self-define and, in turn, may also deprive them of the opportunity to freely choose between available options or to change their mind.

4 Rights, Personal Autonomy, and Legal Personhood

The Universal Declaration of Human Rights (‘UDHR’) creates numerous protections for individual human rights. Many of these rights are compromised by digital influences and the process of digital translation [29]. For example, the UDHR prescribes that individuals should be free from ‘arbitrary interference with [their] privacy, family, home or correspondence … [or] attacks upon [their] honour or reputation.’Footnote 5 The UDHR also mandates that each member of society should have the social and cultural rights necessary for the free development of their personality.Footnote 6 Article 29 of the UDHR further dictates that the individual is entitled to the free and full development of their personalityFootnote 7 and that this right should only be limited to the extent necessary to respect and recognise the rights and freedoms of others.Footnote 8 This is echoed in the three elements of autonomy as outlined above. The process of digital translation clearly impinges on these rights. This occurs on multiple levels.

Firstly, the rights to privacy, reputation, and the free development of personality are undermined when data is used in a way that denies the individual the opportunity to self-(re)present or self-define. These limitations cannot be justified or excused as necessary to respect or recognise the rights and freedoms of others. Secondly, many algorithmic interventions will have a potentially arbitrary application. By their very nature, algorithms must reduce a decision-making process to a series of specific steps or considerations. In reducing the decision-making process to a fixed model, important context may be ignored and relevant considerations disregarded, while extraneous details are prioritised [31]. Finally, legal personhood entitles individuals to protection against arbitrary interference that undermines personal autonomy. This essay has already established how personal autonomy is undermined. Nonetheless, the assumption that personal autonomy remains intact has potentially consequential repercussions: that is, that a legal person who acts autonomously should be legally responsible for those acts.

The question remains whether the legal person should still bear legal responsibility. In practice, the answer is ‘yes’. Ethically speaking, the matter may be less clear. Indeed, the term ‘legal person’ is itself unclear. The following section explores the differing ideals of legal personhood and their disparate requirements for personal autonomy and accountability. If the legal person is the ‘constant unit of logic’ in our legal system, a change that affects the quality of autonomy afforded to the legal person should have significant implications for how the law applies. If we consider personal autonomy as a qualifying factor in awarding legal personhood, any change to personal autonomy may be instrumental in determining who the law’s legal persons are.

4.1 The Autonomous Legal Person

Since the term ‘legal person’ is said to mean what law wants it to mean [4], many theorists have sought to clarify its meaning [5]. In her 2003 essay, Ngaire Naffine identified three categories of legal person. Each ‘person’ differed not only in their defining characteristics but also in their assumed capacity for personal autonomy [5]. Naffine’s approach also invites a consideration of what should qualify as a legal person or which definition should be preferred. The following section presents Naffine’s three categories of legal person. Each ‘person’s’ capacity for autonomy and accountability increases from person 1 to person 3. The quality of autonomy and accountability also shifts from technical or artificial forms to those that are conscious and embodied: that is, forms held by the (legal) person and exercised with their knowledge.

The digital proxy – or its design – is highly consequential to the real person. In many instances, it will have a formative impact on the person they become. In fact, if – in the apportioning of rights and opportunities – the real person is shifted out and replaced by the digital proxy, then the process of digital translation comes dangerously close to elevating the digital proxy to the same status as the individual themselves. Consequently, the digital proxy comes strikingly close to achieving legal personhood: it is endowed with rights, opportunities and obligations within a certain subset of relations (such as those discussed above for persons X and Y). However, the digital proxy fails to satisfy any of these categories.

4.1.1 Person 1 (‘P1’) – the Empty Slot

According to Naffine, Person 1 (‘P1’) requires ‘nothing more than the formal capacity to bear … legal right[s] and … participate in legal relations’ [5]. P1 may be ‘pure legal artifice’ and includes corporations and any ‘thing’ compliant with the prevailing legal definition of ‘person’ [5]. Richard Tur likens this definition to an ‘empty slot’ that ‘can be filled with anything that can have rights or duties’ [32]. At its simplest level, P1 is the smallest unit on which a legal decision can be made [5]. Far from being a closed or exclusive category, P1 appears to be the most universal in that it can accommodate the widest variety of entities, with the exception of animals [5]. Depending on the theorist, animals are denied legal personhood by design and due to their non-humanity [5], or because of their incapacity to hold social obligations and their status as ‘property’ [5]. In short, Person 1 encompasses a wide variety of entities, not limited to those in human form, and implies a capacity to hold rights and be held responsible and accountable – with or without the direct knowledge of P1.

4.1.2 Person 2 (‘P2’) – the Biological Human

Person 2 (‘P2’) applies to biological humans from birth until death [5]. According to definitions of P2, legal personhood and the rights inherent in it will apply ‘primarily [to] … beings which enjoy the ability to choose [and, therefore,] human beings’ [5][33]. The quality of being human entitles P2 to hold legal personhood which, in accordance with Article 1 of the UDHR [30], applies equally to all humans regardless of mental capacity [30]. Naffine further notes the emerging importance of ‘biological humanness’. However, case law suggests that a co-requisite of ‘biological humanness’ is the capacity to survive independently of others. P2s must, therefore, be ‘whole, integrated and individuated beings’ capable of surviving alone.Footnote 9 Those falling into category P2 are human and therefore entitled to autonomy, regardless of their capacity to act autonomously or make their own decisions. Thus, the presence of human life – regardless of mental capacity – entitles P2 to legal personhood.

4.1.3 Person 3 (‘P3’) – the Responsible Subject

Finally, Naffine describes Person 3 (‘P3’) as ‘the responsible subject’ [5]. P3s require more than ‘biological humanness’ or the capacity to hold rights. P3s must be rational and legally competent [5]. P3 is, therefore, the sentient and autonomous subject capable of making decisions and taking responsibility for their actions [5]. P3 is also an active and moral being, over and above their legal relations. Where P2 applies only to individuated persons, P3 requires individuation because of its significance for autonomy and responsibility. Complete autonomy requires the person to be individuated and fully self-contained. Without physical independence, the individual cannot have full control over, and accountability for, their actions. It is P3’s ability to fully control their body and self that raises them above the limitations of biology alone and allows for criminal responsibility and legal capacity [5].

Alain Supiot argues that the full realisation of human rights requires the state to actively enforce those rights as ‘a reward for membership [of their] community and [the] acceptance of its rules’ [34]. The enforcement of rights may require more than their mere creation at law, and may instead require the law’s active intervention to prevent infringement upon those rights. By corollary, digital processes which undermine personal autonomy and thereby hinder the free development of personality may also require intervention to prevent infringement of this right. However, this ‘need’ may depend on our understanding of the personal autonomy and accountability expected of legal persons.

Those who subscribe to Naffine’s Person 1 as the preferred qualification for legal personhood may see intervention as unnecessary. At the same time, the reasoning which deliberately excludes animals from P1 may also exclude the digital proxy by dint of its non-humanity and its inability to hold social obligations. Those subscribing to the belief that legal personhood should only apply to those who qualify as human, or to those capable of acting autonomously, are likely to be discomfited by any denial of personal autonomy. Thus, those subscribing to Person 2 or 3, and therefore requiring biological humanness (P2) or even sentience (P3), may find the interposition of the digital proxy unsettling. Regardless of approach, our understanding of legal responsibility requires some level of knowledge of the act for which we are to be held responsible. The position of P1 is readily filled by a digital proxy (if one excludes requirements for humanness – which is certainly not fatal to claims of legal personhood by corporations and other non-human entities). The physical body – as the object and location of law – will likewise meet the requirements of P2 based on physicality alone. However, it is harder to justify the shifting out of the real person and the interposition of the digital proxy as P3. If legal personhood requires more than biology, or an empty slot to which law can apply, then it also requires the capacity to act autonomously and to hold legal obligations and responsibilities. While the digital proxy may accrue legal obligations and responsibilities, it cannot act autonomously and it excludes individual input. Additionally, once the digital proxy becomes the site of determinations or decision-making, the real person also loses some of the personal autonomy and individual rights afforded under the UDHR and other state-based legislation. Our inability to refine, edit or know our digital proxy compounds this loss, changing the very nature of legal personhood.

5 Conclusions

This essay has shown how digital translation and digitality undermine personal autonomy by tracing precisely the points of influence and intervention. Legal personhood is also affected. The real person remains subject to law but is excluded from the processes through which many formative decisions are made. Although the rights and obligations of legal persons reside in the real person, digitality excludes the individual not only from the process of decision-making, but – in many instances – from the knowledge that a decision has occurred. This will be the case regardless of the outcome.

Commentators note that ‘new technologies affect … how we are understood [as well as] how we understand’ [19]. These same technologies have ‘mastered the norms of communication’ [34], determining what we know [19], ‘narrowing our field of vision’ [35], ‘limiting collective thought’ [35], creating filter bubbles [36], and structuring our knowledge through search results [26]. Supiot notes that ‘people do not act, they react [and] they react not to an action but to a reaction’ [34]. Whether we are aware of this influence or not, such influence informs who we are: what makes Matisse ‘Matisse’. However, individuals need some way to navigate and traverse the digital space. Recommendations can be, and are, beneficial; search results are helpful.

This essay does not argue that algorithms are bad, nor does it argue against the use of ADMS. Instead, it merely notes their consequences. Without the power of oversight or self-representation, individuals simply need to know ‘that the data at the heart of the decision is right’ [19]. Otherwise we know, react to, and are informed by information that an intervening party may have prescribed based on erroneous presumptions.

If individuals truly have the right to the free development of their personality [30], then self-determination requires the ability to decide. Autonomy, therefore, requires the ability to define who we are [10], to choose freely without limitations or coercion [8], and to change one’s mind and actions [8]. Autonomy without these rights is not autonomy. An identity that excludes our input is not our identity. In reality, individuals are dominated by data and digital determinations. To regulate this process, we must first understand where and how this influence occurs before we can address the challenges to individual identity and personal autonomy.

An approach that understands the process of digital translation and the role of the digital proxy will allow for a better comprehension of digital influence and can illustrate how the process excludes the individual from important decisions that shape who that individual becomes. Only then can law attempt to regulate this process. For example, the law could set minimum standards or requirements based on the quality of data sources. Likewise, the law could define different grades of ADMS that separate trivial decisions from those more consequential to the real person and prescribe levels of oversight for each. At present, our inability to legislate or act in response to digital interference is exacerbated by our lack of a language through which we can explain – let alone address – the problem. This essay hopes to provide the foundation for these discussions in a way that clarifies the processes involved in digital translation and thereby identifies the appropriate opportunities for legal intervention.