1 Introduction: the avatar society

The Japanese roboticist Hiroshi Ishiguro cuts an interesting figure. He is known for creating an ultra-realistic robot version of himself. It is an impressive creation. Not a perfect replica, by any stretch of the imagination, but uncannily like him. Indeed, Ishiguro sometimes jokes during public talks that, rather than fly across the world himself, he could send the robot to give the talk on his behalf.

For Ishiguro the robot-duplicate is not just a piece of theatre. It is part of a serious research project aimed at overcoming the limitations of biological form and bringing about a better future. In a 2022 presentation at the Robophilosophy conference, Ishiguro outlined his vision of the ‘Avatar Symbiotic Society’:

“…[I aim for] the realization of a society in which everyone can freely participate in various social activities (work, education, medical care, and daily life, among others) through the teleoperation of multiple avatars that can fully transmit the user’s actions and intentions.” ([1]: 627)

Several years ago, Ishiguro’s vision may have seemed farfetched. But in the wake of the leap in the capabilities of large language and multi-modal AI models, a trend most obviously exemplified by OpenAI’s GPT models, it is becoming a practical reality. People are now creating digital duplicates of themselves and others to engage in a variety of tasks. Some have already attempted initial ethical analyses of such digital duplicates. Most work has focused on specific use cases, such as the ethics of creating ‘deathbots’ that represent the deceased (e.g., [2, 3]), academic writing assistants that speak in the author’s own voice [4], representations of historical or living figures for research purposes (e.g. the Digital Daniel Dennett project from Schwitzgebel, Schwitzgebel and Strasser [5]), and medical decision-making proxies and digital twins to assist with medical treatment [6,7,8].

In this article, we attempt something different. Rather than focusing on specific use cases, we develop a minimally viable principle for the permissible use of digital duplicates. We start by clarifying the object of our inquiry, digital duplicates themselves: defining them, giving examples, and justifying the focus on them rather than on other kinds of artificial being. We then identify a set of generic harms and benefits associated with digital duplicates and use this as the basis for formulating a principle that stipulates the conditions that should be met in order for the creation and use of digital duplicates to be ethically permissible. We assess whether it is possible for those conditions to be met in practice and, lastly, answer some questions that could be raised about the application of this principle.

2 What are digital duplicates and why should we care?

Our focus is on ‘personalised digital duplicates’. We define these as partial, at least semi-autonomous, digital recreations of real people. This definition requires some unpacking.

The digital duplicates we are interested in are ‘partial’ in the sense that they do not purport to recreate every aspect of the original person. Every person has a set of attributes: physical appearance, cognitive ability, voice, and so on. Digital duplicates try to recreate some, but not all, of them. There is an interesting metaphysical question to be asked about whether there can ever be a complete replication of a person ([9]: 200–209). We set that question to the side. We focus on partial recreations because they are the form that seems possible given present technologies, though it is possible that little would change in the formulation of the permissibility principle if they were more complete.

At the time of writing (2024), partial digital duplicates fall into two main categories. The first category consists of ‘cognitive duplicates’: duplicates that try to recreate some cognitive function or attribute of a real person. Most AI chatbots fall into this category. Examples would include the AI versions of Michael Schumacher,Footnote 1 Dan Dennett,Footnote 2 Luciano Floridi,Footnote 3 and Jonathan Swift,Footnote 4 that people have created and interacted with. They would also include the companion chatbots created by various social media stars;Footnote 5 the digital writing agents created by academics [4]; and medical decision-making proxies [6]. The second main category consists of ‘visual/auditory’ duplicates, intended to replicate the appearance, behaviour, and voice of a real person. These are an emerging phenomenon in the entertainment industry. Examples include the AI versions of the Beatles and Abba, and the AI actors used in recent Star Wars movies and other Disney productions. Of course, one could easily imagine combined versions of these digital duplicates and, presumably, Ishiguro’s robotic version of himself is intended to fit both categories. One can also distinguish between function-specific duplicates, which can perform only one narrowly defined task or set of tasks (perform a concert, write an academic paper), and function-broad duplicates, which can do a wide range of things (arguably, general AI chatbots are of this type since they can be used for a variety of purposesFootnote 6).

The duplicates are ‘at least semi-autonomous’ in the sense that they can do things without the direct control or intervention of other humans. It is possible to create digital duplicates that are wholly controlled by the original person. Video game avatars are of this type, and there are, of course, ethical questions related to such avatars; there is a rich literature on the ethics of actions performed in video games (e.g., [11, 12]). But autonomy heightens the ethical issues and raises additional challenges.

We favour the terminology ‘personalised digital duplicates’ for two main reasons. First, it captures the phenomenon in which we are interested: efforts at copying or representing attributes of real people in digital form. Second, it has a pleasing and memorable alliteration. In presentations we have given on this topic, some people have worried that the term ‘duplicate’ is too strong, since it carries connotations of full or complete recreation in digital form. Alternative terms such as ‘personal simulation’ might be thought preferable. We see some merit in this idea but favour ‘duplicate’, as we see duplicates as being intended to function as equivalent to the original person in at least some contexts; simulation, as we see it, is a vaguer concept that could encompass much weaker forms of recreation.Footnote 7 We focus on ‘digital’ duplicates because, although we do not mean to exclude mechanical duplication from our discussion, most of the examples with which we are here concerned are wholly digital (AI chatbots, decision-making agents, holograms, and the like). Additionally, we use the term ‘digital duplicate’ in preference to other cognate terms already in use, such as ‘avatar’ and ‘digital twin’. ‘Avatar’ is the term used by Ishiguro in his proposal, but it seems over-inclusive insofar as avatars need not recreate the attributes of a real person. An avatar is any digital proxy intended to stand for (or stand in for) a real person [13]. This may involve some replication of attributes of the real person, but it also may not: many videogame avatars are nothing like the people that use them to play the game. Contrariwise, the term ‘digital twin’ seems under-inclusive because it is most commonly associated with engineering, on the one hand, and medical research and treatment, on the other [14]. For our purposes, digital twins are a sub-type of personalised digital duplicates.

Digital duplicates are to be distinguished from novel artificial beings. The latter are also semi-autonomous and often mimic attributes of real people, but they are not intended to represent or recreate any particular real person. An AI popstar is one thing; an AI version of Paul McCartney is another. Most non-specific AI chatbots and robots are novel artificial beings. They may well be trained on data from large numbers of people, or be inspired by real people, but they are not intended to recreate real people.

Why focus on digital duplicates to the exclusion of novel artificial beings? There are three main reasons. First, digital duplicates are an emerging and pressing problem. As noted in the introduction, a few years ago the idea that anyone, anywhere could create a semi-autonomous digital copy of themselves was farfetched.Footnote 8 Since late 2022, with the release of GPT-3.5, GPT-4, and similar AI models, it has become a practical possibility for many. Second, there is already a rich literature on the ethics of artificial beings. This literature covers everything from their potential moral status to the desirability of forming relationships with them and the permissibility of using them in different contexts (e.g., [16, 17]). To fold digital duplicates into that discussion would muddy the waters. Third, digital duplicates raise unique and important ethical questions that would not be adequately captured by discussions of novel artificial beings alone. For example, duplication technology raises interesting metaphysical and moral questions concerning the nature and extension of identity ([9]: 200–201). Is it possible for individuals to extend themselves through duplication? Can the actions of the duplicate be traced to the original person, or are they fully independent? Who is responsible for the errors and achievements of the duplicate? Given that many moral rules and principles are targeted at integrated individual moral agents, this new technological affordance could disrupt many of our basic moral norms [18, 19]. Furthermore, the idea that others could copy and recreate aspects of these identities creates challenges around consent, use of personal data, identity theft, and more. Additionally, there are likely to be uniquely personalised benefits and harms that arise from the use of digital duplicates but not from the creation of novel artificial beings. This, too, merits independent consideration of digital duplicates.

3 The benefits and harms of digital duplicates

Our goal in this article is to develop a principle for assessing the permissible creation and use of digital duplicates. As we shall see, two conditions within our proposed permissibility principle depend upon the existence of benefits and harms associated with the technology. If there are no such benefits or harms, the principle will make little sense. So, in this section, we review some of the potential benefits and harms associated with personalised digital duplicates. The intention here is not to develop an exhaustive list or assessment of harms and benefits but, rather, to provide enough detail to make our discussion of the permissibility principle in the next section informative.

To a large extent, the harms and benefits associated with the creation and use of digital duplicates will be highly context specific. The use of digital twins in medical diagnosis has the potential to improve personalised health assessment and treatment; it also carries risks of abuse of personalised health data [7]. Similarly, the use of a personalised AI academic writing assistant could allow for enhanced productivity, innovation, and economic growth, whilst also carrying risks of instrumentalization and alienation from work, homogenisation of writing style, increased inequality (those who know how to use the technology get richer while others are left behind), exploitation of intellectual property, and so on [4]. Lists of potential harms and benefits could multiply depending on the specific use case.

Rather than generate a set of such lists, we attempt something more abstract. We identify generic harms and benefits that apply across most potential uses. Whether the harms and benefits discussed genuinely count as harms and benefits depends, as we shall see, on certain axiological assumptions about what is or is not valuable. We do not defend these assumptions here but, rather, draw attention to them when it seems appropriate. Furthermore, the harms and benefits listed are not independent of one another. Sometimes a potential benefit could be undermined by a potential harm and vice versa. As a result, it is possible that, as you read through our discussion, you might disagree with the inclusion of a particular harm or benefit or think that more work needs to be done to establish that it is a genuine harm or benefit. Such disagreement is welcome and needed as we proceed to develop a more complete ethical understanding of this technology. But it should not distract from the focus in this article. As already noted, we are not trying to defend any particular account of the harms and benefits of the technology but, rather, provide some account of them in order to make our discussion of the permissibility principle more comprehensible.

In relation to each of the examples discussed below, the harms and benefits flow in different directions. For the most part, we focus on harms and benefits to the individual being duplicated but, as will become apparent, these harms and benefits can entail harms and benefits to others as well. Sometimes something can harm the individual being duplicated while benefitting someone else, and vice versa. The differential distribution of the benefits and harms of the technology becomes more significant later in the article when discussing the permissibility principle and its implications. So it is worth being aware of this complexity at this stage.

3.1 Potential benefits

Let’s start with potential benefits. One obvious benefit of the technology, at least in theory, is that it allows us to have more of a good thing. If something a person does is valuable, either to them or to others, then, in principle, if we can duplicate them, we can have more of it. Depending on how you want to look at it, you could view this as one big benefit of the technology. Alternatively, you could break it up into a number of distinct manifestations of this same basic benefit. We opt for the latter approach here insofar as it seems possible to have or pursue some manifestations of this benefit without others.

A first manifestation of this benefit is that the technology allows for the wider distribution of an otherwise scarce relationship/style of interaction. Our relationships with others are sources of both instrumental and non-instrumental value ([20]: chapter one). Relationships can benefit us and benefit others. They can make us and others feel good, help us develop as persons and gain knowledge/insight, relieve boredom, and provide counsel and support. But individuals are scarce. There is only one of us to go around at any one time. Duplication technology might allow us to overcome this problem of individual scarcity and engage in more beneficial relationships. Enabling wider distribution of relationships appears to be one motivation behind social media influencers such as Caryn MarjorieFootnote 9 and Kaitlyn ‘Amouranth’ SiragusaFootnote 10 creating chatbot versions of themselves to interact with their followers (increasing revenue streams is, no doubt, part and parcel of this).Footnote 11

There are two important questions to be asked about the value of these additional relationships. The first is whether the duplicated person really benefits from them. Some benefits associated with relationships are largely instrumental or indirect. They accrue to the individual as a result of having the relationship, but not in virtue of experiencing the relationship. Using relationships as a way of getting money, networking business contacts, and making other people happy would be examples of this. For these benefits, it doesn’t matter whether the duplicated person is experiencing, or present within, the relationship. It seems fair to say that present duplication technology does allow for the duplicated individual to derive at least some of these benefits. For example, the social media stars mentioned above can get extra money from the relationships enabled by their duplicates. There are, however, other kinds of benefits associated with relationships that require the direct and immediate involvement of the individual (call these intrinsic benefits). The pleasures of sexual intimacy and the joy of laughing at someone’s jokes could be examples of this. Present duplication technology would not appear to allow for these benefits to be derived from duplicates, unless we assume the duplicates are themselves capable of experiencing these benefits (which is unlikely with present technology), except perhaps in a limited form where the original person uses immersive VR to occasionally ‘tune in’ to the relationship being engaged in by the duplicate. In short, when viewed from the perspective of the duplicated individual, we must appreciate that they can attain some, but not all, of the benefits from the duplicate’s relationships.

The second question concerns the nature of the benefit to the people interacting with the duplicate. Are they really having valuable relationships or experiences with the duplicate? This is a widely debated topic in the ethics of AI and robotics, and we will discuss it again later on when describing one of the conditions within our permissibility principle (the transparency condition). Suffice to say, some would argue that these relationships are not as valuable as they would be if they were with real human beings. So, if there is benefit to be derived from the relationship, it is of a lesser or degraded type [16, 21].

It is worth noting that both of these questions can be asked, in slightly different form, of each of the benefits discussed below. Rather than belabour the point by doing so, we will simply note it here as an important complexity to be addressed in any holistic assessment of the benefits or harms of the technology in a particular context.

A second, related, benefit of the technology is that it could allow for the continuation of a personally unique service that would otherwise be unavailable due to temporal and spatial constraints on individuals. Not all services require a unique person to provide them. 24-hour services are a major feature of the contemporary economy. They can be achieved through shift rotations or reliance on novel artificial beings. But some services might be sufficiently unique to an individual that continuing their specific provision of them might be desirable. This could be true from the point of view of the service provider or the user. For example, professors cannot provide personal tutorial assistance to all their students, 24 hours a day. But if digital tutor versions of themselves were created, this could be made possible. Similarly, mental health therapists cannot be constantly available to their patients, but duplicate therapy bots could overcome this limitation (cf. [22]). Of course, whether it is a good thing for services to be demanded or available 24/7 is a contested matter. Some would argue that there is something inhumane in the 24/7 nature of the modern economy and that we should not tolerate its demands (e.g. [23]). But even if you take this negative view, there could be some benefit to the technology insofar as it relieves the duplicated individual of this burden and enables them to share it with their duplicate(s) instead. Interestingly, for this to work we have to assume some separation between the duplicate and the original. So, in this case, the fact that the original person cannot directly experience what is happening to the duplicate is a feature, not a bug, of the benefit. This is another important nuance to consider when assessing the potential benefits of the technology. Sometimes it might be better (or more beneficial) if there were a stronger link between the duplicate and the original; sometimes, in order for the benefit to be realised, we need a weaker link.

A third potential benefit of digital duplicates is that they allow for the preservation of (parts of) a unique identity after the original person dies or becomes irremediably incapacitated. Death is the ultimate limit placed on all individual life. While duplication does not allow for immortality, it does allow for the preservation of some aspects of an individual after death [24]. For example, we could preserve their conversational style or physical appearance with a chatbot. We could talk to this chatbot and experience some of what we used to experience when the person was around. We could also preserve something of their cognitive style or perspective on the world. This could benefit loved ones of the deceased person (as in ‘griefbots’ or ‘deathbots’)Footnote 12 or benefit researchers and society at large (as in the creation of digital versions of famous historical figures). Here, the benefit is largely to other people interacting with the duplicate. But it could also benefit the duplicated individual who has a project or aspiration that is tragically cut short by death. For example, if someone is writing a book, creating a piece of art, or imparting wisdom and life advice to their children, but knows that they will soon die due to a terminal illness, a duplicate could be created to enable the project to be completed after their death.

A fourth potential benefit is that the use of digital duplicates could enable enhanced individual productivity. By outsourcing some tasks to the duplicate, the original person could focus on complementary tasks that enhance overall output. This is, perhaps, the best-studied benefit of recent advances in AI. Several empirical papers purport to show that GPT and similar technologies can enhance the productivity of individual workers in a variety of settings [25,26,27]. These studies, however, typically focus on novel AI, not digital duplicates. In an academic context, Porsdam Mann, Earp and Savulescu [4] have created duplicates of themselves that can engage in academic writing in their unique voices and have proposed that this can enhance productivity.

A fifth benefit of digital duplicates is that they can enable genuine multi-tasking. Many individuals claim to be able to engage in productive multi-tasking, but research suggests that this is a myth: multi-tasking almost invariably impairs performance in some way. This is due to the bandwidth constraints of the human brain. In principle, the creation of digital duplicates would allow us to overcome those bandwidth constraints and perform multiple tasks simultaneously without impairing performance (provided, of course, that the digital duplicate has the requisite level of competence). Hiroshi Ishiguro, quoted in the introduction, appears to be most excited about this potential benefit of duplication technology.

A sixth potential benefit of digital duplicates is that they can facilitate desirable substitution. There are some things we would all rather not do or risk doing. Digital duplicates could be used by individuals to perform boring, dangerous or otherwise unpleasant tasks. This is another benefit Ishiguro highlights. This could facilitate enhanced productivity, by enabling people to focus on other tasks in which they have a comparative advantage. But it need not facilitate productivity. People could use the time gained to engage in leisure activity, spend time with friends and family and so on. Again, for this benefit to arise there must be some separation between the duplicate and the original.

Finally, a seventh potential benefit of digital duplicates is that they could enable enhanced decision-making in at least some contexts. We are all subject to emotional biases and to ups and downs in performance. If sleepy or intoxicated, we cannot make fully reflective and rational decisions. In principle, we could create digital duplicates that represent an idealised, rational version of ourselves to assist with decision-making in such situations. An example might be a digital duplicate of a doctor to assist with medical diagnoses when the doctor has just woken up or is coming to the end of a long shift [28]. This, it should be noted, is the subject matter of at least one major research project on the development of this technology.Footnote 13

3.2 Potential harms

Let’s now consider the potential harms of digital duplicates. One thing to note here is that, since duplicate technology relies on contemporary AI, many of the ethical harms associated with AI in general would also apply to the creation and use of duplicates. This would include, for instance, concerns about the environmental impact of running AI models such as GPT, their potential for bias/misinformation/hallucination, their reliance on cheap and outsourced labour during training cycles, their lack of transparency and explainability, and so on. There are many detailed discussions of these AI-related harms [29,30,31] and we take it for granted, here, that many of them ought to feature in an assessment of the harms and benefits of digital duplicates. Here, however, we try to focus on harms that are more specific to duplication itself. In other words, these are harms associated with copying and recreating aspects of real people in an AI form, not just harms associated with the use of AI in all contexts.

The most significant thing to think about in relation to harms arising from the use of duplicates is who actually creates and deploys a duplicate, and for what purpose. With sufficient available data, anyone can be duplicated by anyone.Footnote 14 This raises important ethical questions regarding consent and autonomy. Our identities are the most precious things about us. Duplication technology seems to threaten our identities in significant ways. Identity is a philosophically contentious concept, particularly when it comes to defining the nature of personal identity, how it persists over time and the ethical significance of this [9, 32, 33]. Some see identity as being inextricably bound up with our material and biological constitution, others see it as being grounded in non-material or spiritual essences, still others see it as a function of our psychological attributes (memories, desires, intentions, and beliefs over time). We will not weigh in on these philosophical disputes here. For our purposes, it is enough to say that our identities are at least partially constituted by our cognitive (memories, thinking styles, beliefs) and physical (appearance, voice, behaviour) attributes. Since duplication technology enables people to copy, to at least some degree of approximation, those attributes, and instantiate them in a semi-autonomous artificial agent, it threatens the sanctity of our identities in a particularly acute way. It is not simply that personal data or information is copied; it is that some part or aspect of one’s self is copied. If others take those aspects of our identities and use them for purposes we do not desire or to which we do not explicitly consent, this raises a number of important ethical concerns. On top of this, the creation of digital duplicates involves the collection and processing of personal data. The unauthorized or unethical use of this data poses a significant threat to privacy and data protection rights.Footnote 15

The first obvious potential harm stemming from the creation of digital duplicates is that they may allow for the exploitation and theft of individual work product for the gain of others. In many countries, employers are already automatically legally entitled to own the intellectual property of their employees. If such a rule were extended to allow employers to create partial duplicates of their employees, perhaps to continue work after the employee is fired or otherwise removed from employment, this would add to already existing harms associated with the commodification and alienation of workers from their work product. It would allow for the value of an individual’s work to be extracted without due acknowledgment or payment. These are not purely science-fictional fears either: this already happens. For example, controversies have arisen in recent years as a result of universities in Canada and the UK continuing to provide pre-recorded lectures from deceased members of staff.Footnote 16 Recordings are one thing. Duplication technology is another. It could allow for this to happen at a much larger and more invasive scale.

Against this, one could argue that employers would be more attracted to the idea of creating novel artificial beings to replace human workers, not copies of real people. After all, the novel beings could be optimised for the given work task(s), and not simply recreate the flaws or shortcomings of the individual. This is a fair observation and there are many reasons to think employers might prefer that option, at least in some cases [35]. Nevertheless, there could be contexts in which recreating a particular worker would be desirable. Copying an actual person might be a quick and easy way to optimise an artificial worker. Some industries are dominated by charismatic ‘superstar’ employees– academia may be one such industry– and recreating the specific superstar would be more economically desirable than creating a generic, optimal AI worker. It is also likely that taking the attributes of real individuals and amalgamating them is a route to creating an optimal AI worker. Indeed, this is, indirectly, a feature of how many AI models are currently trained. So exploitation and alienation from work product is an obvious harm arising from duplication technology.

Second, an associated harm is identity theft, not just by employers but by others. Identity theft arises whenever one person gains profit (or causes harm) by appropriating the image or identity of another person. This includes leveraging someone else’s identity for financial gain, status elevation, or credit acquisition. Identity theft is a major social problem, particularly now that digitised personal records allow for identity theft at scale.Footnote 17 Duplication technology adds to this pervasive problem, allowing for more subtle and creative forms of identity theft, with potentially significant economic and political consequences, e.g. pretending to be a well-known person in an online interview with the media for personal or political reasons.

Third, another potential harm arises from the creation of digital duplicates that are poor or faulty representations of real individuals.Footnote 18 This could be due to malice, or it could be due to incompetence or inherent limitations of the technology. Faulty duplicates might resemble a person but fail to replicate their attributes accurately, leading to harm by misrepresenting the individual or providing inadequate representations to others seeking genuine interactions or information. Here we can distinguish between harms to the misrepresented individuals themselves and harms to other people, who might be misled about what somebody is or was like, or about what they would be likely to say or do.

Fourth, for the person who is poorly or otherwise misleadingly represented, such faulty representations may also lead to a kind of digital defamation through the perpetuation of stereotypes or misconceptions about them. Again, this is not a purely science-fictional fear. There have already been cases in which LLM chatbots have produced false information about real people, and these cases have led to public and legal repercussions.Footnote 19 Duplication technology enables more pervasive versions of this problem.

Fifth, another harm arising from the proliferation of digital duplicates is their potential use as a means for deception and the spread of misinformation. In addition to its intrinsic harms, this could erode the trust required for cooperative human interactions. Trust is social glue. Without it, cooperative human society cannot be sustained. While digital duplicates may not eliminate all trust, they could significantly undermine it. By using these duplicates to mimic genuine human communication, malicious actors can manipulate cooperative relationships for their own gain. This manipulation exploits evolved social instincts to treat anything person-like in a largely trusting and cooperative way [36]. When this instinct to trust is exploited, it creates disharmony and suspicion. Emerging empirical evidence supports the idea that there is less trust and cooperation when machines are involved, or suspected to be involved, in social interactions [37,38,39]. Obviously, this could happen irrespective of whether an AI duplicates a particular person. This is why discussions about trustworthy AI are common in debates about AI ethics [29].Footnote 20 But using duplicates seems likely to raise heightened concerns in this regard insofar as the duplicates are, by their very nature, intended to mimic actual people with whom we are already likely to have formed cooperative and trusting relationships. It consequently hijacks the instinct to trust in a particularly invasive way.

Sixth, another harm of digital duplicates is their potential to create responsibility gaps. Responsibility gaps are widely discussed in both philosophical and legal debates about the social impact of AI (for an overview, see [41]: chapter six). The idea is that autonomous systems might do things in the world that would ordinarily attract the attention of our responsibility practices. But those autonomous systems are not themselves capable of being responsible for their actions and, given their autonomy, that responsibility cannot be traced back to their creators, owners or controllers. So a gap arises. Whether such a gap exists, whether it is a problem, and whether we ought to ‘plug’ it if it is, is subject to much debate (see, e.g., [42,43,44]). The particulars of that debate are not relevant here. What is relevant is that the use of digital duplicates also potentially creates responsibility gaps [13]. It might be possible to argue that it is easier to plug the responsibility gap in the case of duplicates, since there is a more direct link between them and the person upon whom they are based. But this assumes that the original person is themselves responsible for the creation and use of the duplicate, which may not always be the case.

Additionally, problems could arise when it comes to the credit-blame asymmetry proposed by Porsdam Mann et al. [45, 46]. Their claim is that the use of generative AI can be affected by an asymmetry in the way we ascribe blame and praise for actions. In short, people could be very easily blamed for the harmful outputs of a generative AI (because they were negligent or reckless in using them) whilst at the same time not easily attracting praise for the positive outputs (because they did not play a sufficient causal role or exert sufficient effort in producing those outputs). The existence of this credit-blame asymmetry poses a particular problem for anyone apt to use digital duplicates for personal achievement and gain. One of the reasons why people may be attracted to duplicates is that they hope to claim ownership over the positive outputs of digital duplicates (e.g. an author using a personalised writing AI wants to be able to claim authorship over the resulting work). The existence of the credit-blame asymmetry suggests that this hope may be forlorn.

Seventh, a (for now) final harm concerns the impact of digital duplicates on the value of individuality. This is more abstract and speculative. Historically, individuality has been scarce and non-replaceable. Several philosophers claim that uniqueness or non-replaceability contributes to value (see [47] for an overview). The advent of digital duplicates threatens this scarcity and non-replaceability. This capacity to overcome temporal and spatial scarcity was touted as a benefit of the technology earlier but, if non-replaceability contributes to our value, this touted benefit could turn out to be a double-edged sword. We may gain some temporary and largely instrumental benefits from being able to distribute ourselves more widely, but this will come at a cost to one of the foundational values of our civilisation. The flip-side of this is that while the value of individuality might go down in general, verified ‘real’ individuals might become more valuable in certain domains. For example, real flesh-and-blood popstars that provide unique and authentic experiences might be able to charge more for their services because people value the real person over the duplicate. This could be developed into a more general potential harm associated with digital duplicates. Recall that many of the benefits of the technology were linked to a ‘more is more’ ideology: if we can have more of a good thing, then that’s good too. This may not always be true. Sometimes there are diminishing marginal returns associated with adding more good things to the world; sometimes less is more. This creates a general problem for deployments of digital duplicates. In some contexts, creating more of them might be a good thing; in other contexts, their creation might eventually undermine the value of the good things they are supposed to duplicate. The harm to the value of individuality is one illustration of this.Footnote 21

It should be clear that not all of these benefits and harms are unique or specific to digital duplicates. Several of them arise in the case of novel artificial beings and/or the outsourcing of tasks to other humans. The assumption, however, is that where the benefits/harms are not unique to duplicates, duplicates nevertheless manifest them in different ways, in some cases by accentuating them, and in others by increasing their scope or potential impact.

| Benefits | Harms |
| --- | --- |
| Wider distribution of an otherwise scarce relationship/style of interaction | Exploitation and alienation of people from work product or output |
| Continuation of a personally unique service | Identity theft and fraud (using someone’s identity for gain) |
| Preservation of a unique identity or set of attributes after death or incapacity | Poor or faulty representation of individual (reputation loss/harm/defamation) |
| Enhanced productivity of individual | Deception and manipulation through dishonest anthropomorphism |
| Genuine multi-tasking without bandwidth limitations | Eroding and undermining social trust |
| Desirable substitution in task performance/interaction | Responsibility gaps and credit-blame asymmetry |
| Enhanced decision-making or proxy decision-making | Degradation of the value of individuality (due to non-uniqueness and replaceability) |

The above-summarised list of potential benefits and harms is meant to represent some of the most important generic benefits and harms related to digital duplicates, but it is not meant to be exhaustive. Further investigation into ethically relevant potential benefits and harms is needed. However, we will not pursue that topic further here. We will instead use this discussion of harms and benefits as a springboard for introducing a proposed general principle for when the creation and use of digital duplicates can be considered permissible.

4 A minimally viable principle for the ethical creation and use of digital duplicates

We now turn to our main task. Borrowing from the idea of ‘minimally viable products’ in product development, we look to derive a minimally viable permissibility principle (MVPP) for the creation and use of digital duplicates. By this, we mean a ‘minimally contentious’ or ‘maximally agreeable’ principle: a permissibility principle that is likely to be acceptable to most people, in most contexts, irrespective of their specific ethical commitments or beliefs.

You may wonder why we adopt this method. One reason is that the idea of digital duplicates has already proved controversial, with some people raising objections to their very existence. For instance, the philosopher Daniel Dennett has claimed that:

… counterfeit [digital] people are the most dangerous artifacts in human history, capable of destroying not just economies but human freedom itself. Before it is too late (and it may be too late already) we must outlaw both the creation of counterfeit people and the “passing along” of counterfeit people. [36].

Dennett’s term ‘counterfeit people’ encompasses more than just digital duplicates. It covers all artificial person-like beings. Nevertheless, his opposition to the technology, and his fears about its repercussions, are clear.

If the minimally viable permissibility principle is indeed minimally viable, and can be satisfied in practice, it goes some way toward discrediting this staunch opposition. It does so not by presupposing some controversial or overly permissive ethical theory; it does so by requiring that stringent conditions be met. We present this principle not as a fundamental principle of ethics, but rather as what in bioethics is sometimes called a “mid-level principle”. This is akin to Beauchamp and Childress’s four principles of bioethics [48], which are not supposed to be fundamental ethical principles, but mid-level principles that can be justified by reference to a variety of possible more fundamental principles.

In developing the MVPP, we do not claim to identify the final set of necessary and sufficient conditions for the ethical deployment of digital duplicates. We only seek to identify one set of sufficient conditions for permissibility. It could be, for all we argue, that some of the conditions in our principle could be relaxed or ignored in certain contexts, or that others could be added to reach the same conclusion. We discuss some examples of this in what follows and consider its ramifications for the utility of our principle in the final section.

Here is our suggested principle:

MVPP

In any context in which there is (1) informed consent to the creation and ongoing use of a digital duplicate, (2) at least some minimal positive value realised by its creation and use, (3) transparency in interactions between the duplicate and third parties, (4) appropriate harm/risk mitigation, and (5) no reason to think that real, authentic presence is required in that context, the creation and use of the duplicate is permissible.

We will now proceed through each condition of this principle, explaining why it is included and whether it can be met in practice.

4.1 The consent condition

The first condition stipulates that there must be informed consent to the creation and use of the duplicate. Why is this important? For one thing, it respects the autonomy of the person being copied into the digital form. If it is their form and identity being copied, they have a legitimate interest in controlling this process. The importance of autonomy and control is recognised by many moral theories, so the inclusion of an informed consent condition is not particularly controversial. Furthermore, as noted in the previous section, one of the significant harms associated with duplicates is their potential use for identity theft, misrepresentation and fraud. Insisting on consent is a way of mitigating these risks (we’ll say more about risk mitigation below).

What does it mean to say that the consent must be informed? We suggest that it means the individual being copied must know that they are being copied, by whom they are being copied (if not themselves) and for what purpose. They should also have the right to see or interact with the duplicate before it is released to third parties. This gives them the additional power to veto the use of the duplicate after its creation. This, incidentally, was the case with the DigiDan duplicate of the philosopher Daniel Dennett: Anna Strasser and colleagues created this digital duplicate with explicit consent from Dennett [5].Footnote 22 In line with many theories of consent (e.g. medical and sexual), we believe that consent to the creation and use of a duplicate should be an ongoing requirement and that the original person always retains the right to withdraw consent, if they so wish. Refreshed consent would also be required if there are planned changes to the form or use of the duplicate.
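To make the ongoing, revocable character of this requirement concrete, here is a minimal sketch of how consent to a duplicate might be modelled in software: as a revocable record tied to specific agreed purposes, rather than a one-off signature. The structure, field names, and example purposes below are our own illustrative assumptions, not an existing standard or system.

```python
# Illustrative sketch only (not an existing standard): modelling consent
# to a digital duplicate as an ongoing, revocable record tied to
# specific agreed purposes. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class ConsentRecord:
    subject: str                    # the person being duplicated
    creator: str                    # who creates/operates the duplicate
    purposes: set[str]              # uses the subject has agreed to
    approved_release: bool = False  # subject has vetted the duplicate
    withdrawn: bool = False

    def withdraw(self) -> None:
        # Consent is ongoing: the subject may revoke it at any time.
        self.withdrawn = True

    def permits(self, purpose: str) -> bool:
        # Any purpose outside the agreed set fails this check; a 'new'
        # use would require refreshed consent from the subject.
        return (not self.withdrawn
                and self.approved_release
                and purpose in self.purposes)

record = ConsentRecord(subject="original person", creator="research lab",
                       purposes={"philosophy research"},
                       approved_release=True)
assert record.permits("philosophy research")
assert not record.permits("advertising")  # would need refreshed consent
record.withdraw()
assert not record.permits("philosophy research")
```

On this way of modelling things, the veto power, the withdrawal right, and the refreshed-consent requirement all become explicit checks rather than background assumptions.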

Can this condition be satisfied in practice? Informed consent conditions are common in law and ethics and there are well-known protocols for implementing them (e.g. [49]). There might be practical difficulties to work through. Exactly how much information needs to be shared and in what form would need to be specified. Consenting to each new use creates additional work and there might be dispute over what counts as a ‘new’ use. Still, fuzzy boundaries of this sort are common in consent practices. The impediments do not seem insurmountable.

One potentially significant problem is that weak consent norms prevail in many areas of software licensing and creation. People often consent to software licensing contracts or to the collection of personal data without carefully reading the terms and conditions; without being properly informed. Regulatory attempts to address this problem have emerged in recent years. The EU’s General Data Protection Regulation is, perhaps, the most prominent example. Given the potential harms of duplicate technology, a weak consent norm should not be allowed to prevail over the creation of digital duplicates.

Some might argue that the consent condition unfairly rules out the private and playful use of duplicates. If I wish to paint a picture of my boss, pin it to the back of my bedroom door, and throw darts at it as a way of venting my frustration, there is nothing to prevent me from doing so. Why shouldn’t I be allowed to create a digital duplicate of my boss for much the same end?Footnote 23 One reason is that ensuring that the duplicate remains private and playful is going to be difficult. We discuss this in more detail, below, when analysing the mitigation condition. Also, there is something qualitatively different about throwing darts at an image and trying to functionally recreate someone’s identity in digital form. At a minimum, the latter requires the gathering of more personal data about the duplicated person, which would clearly violate more privacy and data rights.Footnote 24

Others might argue that the consent condition unfairly rules out duplicates of dead people. If someone wants to create a digital duplicate of Jonathan Swift or John Stuart Mill, why shouldn’t they be allowed to do so? Such duplicates could be created for research or entertainment purposes or to assist with grief and loss. Surely, we cannot insist on informed consent in all such cases? Three things are worth bearing in mind. First, as a rule, if you wish to create a duplicate of a deceased person, you should probably get that person’s consent prior to their death. Second, if it is not possible to get consent, then we can distinguish between cases based on how recent the death is. The idea of posthumous interests and harms is somewhat controversial (e.g. [50]), but in the spirit of identifying a minimally viable principle, it seems plausible to grant that people retain some interest in how their likeness and identities are used after their deaths. The normative power of this interest probably weakens over time. By way of analogy, in copyright law it is common to enforce copyright protections after a person has died, but not indefinitely. A period of 70 years, give or take, is the typically granted window of time. While 70 years may not be the ideal figure, accepting some window of time in which digital duplicates cannot be created would seem to be appropriate. After the period elapses, anyone is fair game for duplication.

It is worth reiterating that the MVPP is solely interested in identifying one set of sufficient conditions for permissibility. It may be possible to make a case for legitimate exceptions in the case of some deceased people. For example, there might be a public interest exception that allows you to create a duplicate of a well-known politician or public figure, for research purposes, before the relevant window of time has elapsed (and provided other conditions are met).

Finally, it is worth considering that some people might wish to create duplicates of dead people, after the acceptable window of time has elapsed, that are perversions or distortions of the historical reality of an individual. For instance, someone might create a digital duplicate of John Stuart Mill that argues for tyranny or the subjection of women, or in favour of censoring controversial opinions.Footnote 25 This is an issue that would need to be considered in more detail. But our sense is that, while doing so would display a certain lack of ethical sensitivity, it is not strongly ethically impermissible in the way that, for example, creating pornographic content of a living person without their consent would be. It is, rather, something that a virtuous/good person would prefer to avoid doing.

4.2 The positive value condition

The second condition stipulates that the creation and use of the digital duplicate must realise some minimal positive value. Here one can consider value for the person whose duplicate is in question, value for other people, or value for both. This condition is, again, consistent with many moral theories. The degree of value that must be realised can be very minimal indeed. If you wish to create a duplicate of yourself, to interact with at night when you are alone, because this amuses you, that is good enough for the purposes of satisfying this condition. The condition is only intended to rule out entirely gratuitous or negative uses of duplicates, e.g. uses of duplicates to torment or manipulate people. The potential benefits of duplicates, which were discussed in more detail in the previous section, could all be appealed to in order to satisfy this condition, subject to the complexities already mentioned.

Even though it is a minimalistic form of value, this condition should not be interpreted as endorsing a purely subjective form of value. The value must be one that can be endorsed from an objective standpoint. Purely private or radically subjective values are thereby ruled out. A fraudster who makes their living from stealing other people’s identities may argue that they realise some positive value in the creation and use of a duplicate of another person. This may well be true, from their perspective, but it would not be enough to satisfy the condition.

The positive value condition intersects with the mitigation condition, which we discuss below, to ensure that the benefits of the duplicate outweigh its potential costs.

4.3 The transparency condition

The third condition stipulates that people must be aware that they are interacting with a digital duplicate. This can be justified on several grounds. The risk of manipulation or overattachment to a duplicate is likely to be higher if people are not aware of the fact that they are interacting with a duplicate. If a duplicate of your doctor tells you that you have cancer and that you need to ‘get your affairs in order’, you are probably going to take this more seriously if you think it is really your doctor. Of course, the information provided might be true, but not making people aware of the fact that they are interacting with a duplicate opens up the risk of abusing our inclination to trust and cooperate with natural persons.Footnote 26 Contrariwise, even if you think it is right to trust (or ‘rely on’) the AI’s judgment, perhaps to a greater extent than a human’s, this would still require transparency, since you would need to know that it is an AI in order to rely on it more.

In addition to this, there is a common view that interactions with artificial beings have less value than interactions with real people. For example, in the philosophical debate about AI friends and lovers, many philosophers argue that an artificial friend or lover will lack crucial properties that have the effect of making interactions with them less valuable ([16, 20]: Ch. 5). While this can be disputed [21, 35], the fact that so many people share this belief lends support to the transparency condition. It is also worth noting that a demand for transparency is already endorsed by regulators in the area of AI. It is central to the EU’s ‘trustworthy AI’ program. For example, the text of article 52 of the EU’s AI Act states that providers of AI systems must ensure that users of those systems are aware that they are interacting with an AI and not a natural person.Footnote 27

Questions arise as to how one ensures transparency in practice. The obvious way would be to program duplicates to constantly remind people of their ontological status. If you interact regularly with generative AI chatbots, you will know that they frequently spit out formulaic text of the type “as a large language model, I am not capable of X…”. This serves as an ongoing reminder to the user that they are not interacting with a real person. While constant reminders seem like overkill, erring on the side of over-reminding people is good practice, at least until well-established norms come into place that enable people to know when and where they are likely to be interacting with a duplicate.
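By way of illustration, here is a minimal sketch of how such a reminder policy might be enforced in software, issuing the disclosure on the first turn and at intervals thereafter rather than on every message. The function names, the wording, and the interval are our own hypothetical choices, not a description of any existing system.

```python
# Illustrative sketch only: enforcing the transparency condition by
# periodically prefixing the duplicate's replies with an ontological-
# status disclosure. `generate_reply` is a hypothetical stand-in for
# whatever model actually backs the duplicate.

DISCLOSURE = ("Reminder: you are interacting with a digital duplicate, "
              "not the original person.")

def generate_reply(prompt: str) -> str:
    # Placeholder for the duplicate's underlying model.
    return f"[duplicate's reply to: {prompt}]"

def transparent_reply(prompt: str, turn: int, remind_every: int = 5) -> str:
    """Reply, re-issuing the disclosure on the first turn and every
    `remind_every` turns thereafter (constant reminders being overkill)."""
    reply = generate_reply(prompt)
    if turn == 1 or turn % remind_every == 0:
        return f"{DISCLOSURE}\n{reply}"
    return reply

# Example: the disclosure appears on turns 1, 5, 10, and so on.
for turn in range(1, 7):
    print(transparent_reply("How are you?", turn))
```

The choice of interval is exactly the kind of norm that would need to be settled over time: frequent enough to prevent users forgetting, sparse enough to avoid the overkill the text mentions.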

A more philosophical question is whether this condition only applies to interactions with third parties or whether there is also a requirement for self-transparency. If you are a college professor creating a duplicate of yourself to give tutorials to your students, it makes sense that they ought to know that they are not getting the real you. But if you are creating a duplicate of yourself to talk to at home, must you also be made aware that you are not interacting with the real you? It seems like a silly question since, presumably, you would always know. But there are some cases where the lines may become blurred. If you are suffering from some cognitive impairment (e.g. dementia), you may not be able to appreciate the distinction, which could become very confusing or disturbing. Insisting on self-transparency in such cases, and preventing use of the duplicate if this cannot be met, would be a sound and precautionary approach to take. Alternatively, some people may become so dependent on or interactive with their own duplicates that they no longer see any clear difference between themselves and the duplicate. This could be a manifestation of the extended mind thesis [14, 53]. In principle, in the extreme case in which the distinction between self and duplicate breaks down, such that they are no longer separate interacting agents but, rather, one extended system, insisting on a self-transparency condition would no longer be necessary. But until that threshold is crossed, insisting on it is good practice. Why? Because there is a risk of over-dependency or self-delusion: you become reliant on a duplicate that is not really you and not really under your control, or you become convinced that the duplicate is part of you when it is not (the software is owned and controlled by a third party).

4.4 The mitigation condition

The fourth condition stipulates that the creation and use of a duplicate must be subject to appropriate risk mitigation protocols. In other words, steps must be taken to reduce the potential harms of the technology. This condition is plausible from a range of moral perspectives. Each of the harms listed in the previous section of this article is of some genuine concern, either on simple utilitarian grounds or because they violate specific rights or deontic rules. Trying to prevent them from occurring is the morally appropriate thing to do. This may be the least controversial of the conditions included in our MVPP since harm minimisation (or non-maleficence) is a widely endorsed moral principle.

The problems with this condition are more practical in nature. The first obvious challenge is determining what counts as appropriate risk mitigation. The terminology is deliberately vague. Must all risks be mitigated? Some potential harms seem highly probable and concrete. Identity theft and exploitation fall into this category. But other harms seem more speculative and abstract. The harm to individuality is a good example of this. If digital duplicates undermine individualism, then this would be a significant outcome. But the causal link between the rise of duplicates and the collapse of individualism is a bit fuzzy. Would it really happen? Might there be countervailing trends that prevent this collapse? Would we even know how to go about mitigating against it? Demanding that this risk be mitigated seems to demand too much. Additionally, there is the question of how much risk mitigation is appropriate. This is a well-worn topic in risk management theory and, indeed, in debates about AI risk [54]. It is trite but true to say we cannot reduce all risks to zero. But then by how much should we seek to reduce them? There are frameworks in place for thinking through these matters. For example, analysts will often employ risk matrices to try to categorise risks based on the probability of their occurrence and the magnitude of the harm that would materialise if they did. Risk matrices of this sort could be valuable when assessing whether the mitigation condition is satisfied.
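To illustrate how such a matrix could be applied to the harms catalogued earlier, here is a minimal sketch. The likelihood/severity scales, the example classifications of harms, and the mitigation threshold are all our own assumptions for the purposes of illustration, not findings or recommendations.

```python
# Illustrative sketch only: a qualitative risk matrix of the kind
# mentioned above, scoring each harm by likelihood and severity.
# The scales, example entries, and threshold are assumptions.

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "moderate": 2, "severe": 3}

def risk_score(likelihood: str, severity: str) -> int:
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

# Hypothetical classifications of some harms discussed in Sect. 3.2.
harms = {
    "identity theft": ("likely", "severe"),
    "faulty representation": ("possible", "moderate"),
    "erosion of individuality": ("rare", "severe"),  # speculative harm
}

for harm, (likelihood, severity) in harms.items():
    score = risk_score(likelihood, severity)
    # Assumed policy: scores of 6+ demand mitigation before deployment.
    action = "mitigate before deployment" if score >= 6 else "monitor"
    print(f"{harm}: score {score} -> {action}")
```

Even a toy matrix like this makes the underlying difficulty visible: speculative harms such as the erosion of individuality resist confident placement on either axis.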

Another practical question concerns who bears the burden of risk mitigation. Is it the user of the duplicate? The creator? The broader society that tolerates its use? These are familiar questions, even within the AI law and ethics literature. They are, in essence, the forward-looking aspect of the responsibility gap problem discussed earlier in this article. We suggest that there is no single answer: it all depends on the nature of the risk. As a default, it makes sense to assume that the burden of risk mitigation falls on the party that has the most control over the impact of the duplicate, since they have the power to block the causal link between the duplicate and the harmful outcome. In some instances, this will be the user. In others, it will be the software company that controls the platform on which the duplicate resides. But in other cases it may be appropriate to place the burden on the party that benefits most from the technology or that can best pay for the risk mitigation strategy. That could be an employer that asks employees to use digital duplicates of themselves; it could be a government that can pay for additional cybersecurity protocols.
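
Purely for illustration, the default allocation rule just described might be sketched as follows. The parties, numerical scores, and thresholds are hypothetical, and in practice such judgments would be contested ethical assessments rather than measurable quantities.

```python
# A sketch of the default burden-allocation rule discussed above:
# the burden falls on whoever has most control over the duplicate's
# impact, falling back to the main beneficiary, then to ability to pay.
# All scores (0.0-1.0) are hypothetical analyst judgments.

def assign_burden(parties: list[dict]) -> str:
    by_control = max(parties, key=lambda p: p["control"])
    if by_control["control"] >= 0.5:  # someone clearly controls the impact
        return by_control["name"]
    by_benefit = max(parties, key=lambda p: p["benefit"])
    if by_benefit["benefit"] >= 0.5:  # else, the main beneficiary
        return by_benefit["name"]
    return max(parties, key=lambda p: p["capacity"])["name"]  # best payer

print(assign_burden([
    {"name": "user", "control": 0.3, "benefit": 0.4, "capacity": 0.2},
    {"name": "platform", "control": 0.7, "benefit": 0.6, "capacity": 0.9},
]))  # -> platform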

The most challenging practical issue is whether there is something in the nature of the technology that makes effective risk mitigation impossible, or highly unlikely. There might be. Imagine you could duplicate a real person in biological form. This biological duplicate could give rise to many of the harms and benefits discussed in this article. But it would be much easier to mitigate the risks associated with the biological duplicate. The nature of the biological form is such that it is temporally and spatially restricted. You can create it for use in one context and effectively prevent it from wreaking harm there or spreading to other contexts. You can, for example, lock the duplicate in a box (a prison cell) or, in the extreme case, incapacitate or kill it. Whether you can do the same with a digital duplicate is much more doubtful. Digital objects are hard to destroy and easy to copy. Think about what happened to music when it became digitised. The ease of copying and sharing music files meant that it became virtually impossible (and extremely costly) to prevent piracy and other unwanted uses of those files by third parties. The same problem arises with digital duplicates. Once they are created, they can easily be copied, shared and transferred to new usage contexts.

In short, once a duplicate is created, it is easy to lose control over it. This makes it much harder to mitigate the risks of harm. Whether effective cybersecurity protocols, software lockdowns, or ‘kill’ switches could reduce this risk to an acceptable level is not something we are qualified to answer. But it is something that needs to be addressed. The control problem alluded to here has already been widely discussed in the AI risk literature (for an overview, see [41]: chapter four). It is, however, typically associated with debates about superintelligent AI and existential risk [55]. What we are suggesting is that there is an analogous control problem associated with the use of digital duplicates, one which does not arise from a high degree of intelligence. Fears about this more modest, and perhaps more insidious, loss of control could be one of the things that motivates those calling for an outright ban on this technology [36].
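
To illustrate what a ‘kill’ switch might amount to at its simplest, consider the following sketch. The revocation registry, the identifiers, and the generate_reply function are all hypothetical; nothing here reflects any real platform’s API.

```python
# A minimal 'kill switch' sketch: the duplicate checks a revocation
# registry before every response. All names here are hypothetical.

REVOKED: set[str] = set()  # stand-in for a persistent revocation registry

def generate_reply(prompt: str) -> str:
    """Placeholder for the underlying model call."""
    return f"[duplicate's reply to: {prompt}]"

def revoke(duplicate_id: str) -> None:
    """Record that the original person has withdrawn this duplicate."""
    REVOKED.add(duplicate_id)

def respond(duplicate_id: str, prompt: str) -> str:
    if duplicate_id in REVOKED:
        raise PermissionError("this duplicate has been deactivated")
    return generate_reply(prompt)

print(respond("dup-001", "How was your day?"))
revoke("dup-001")
# respond("dup-001", "...") would now raise PermissionError
```

The fragility of such a switch is precisely the point made above: it only works while the duplicate runs on a single, controlled platform. Once the underlying model or data is copied elsewhere, as the analogy with digitised music suggests it easily could be, there is nothing left for the switch to turn off.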

In practice, we suspect that the mitigation condition is the hardest condition to satisfy and the biggest impediment to the permissible use of duplicates.

4.5 The non-authenticity condition

The fifth condition states that the duplicate ought to be used only in situations in which the authentic and real presence of the original person is not morally required. This is the most convoluted condition, at least in terms of how it is expressed. It is intended to capture the idea that there are some contexts in which it is right and proper to expect the real person to participate in an activity, and not a proxy acting on that person’s behalf. Obvious examples of such situations would be certain intimate or friendly relations (cf. [56, 57]). If I create a digital duplicate of myself to sit and chat with my spouse each evening after work, because I am too busy with other things, this could rightly be viewed as morally unacceptable. The value of such an interaction lies not simply in the fact that my spouse has someone to talk to about their day, but also, more significantly, in the fact that I am the one listening and paying attention to them. In this act, I show that I am really and presently caring for them. It is not the same thing if it is done through a digital proxy. If anything, sending the proxy on my behalf communicates a lack of concern and empathy.

This condition is the easiest to meet in practice, insofar as all it requires is that we refrain from using digital duplicates in some contexts. It can also be endorsed from a variety of ethical perspectives. For instance, real authentic presence could be justified by the duties that arise from interpersonal relationships, or on the grounds that it builds important moral virtues. But the actual contexts in which real authentic presence is demanded could be open to some dispute and could change over time. Cultures may vary in when they demand authentic presence; individuals may vary too. A student might want to speak to their real teacher, but the teacher might think it is okay to send their duplicate instead. Part of the problem here, in addition to competing interests and incentives, is that the demand for real presence can be motivated by the belief that using a digital proxy symbolises or expresses the wrong thing (disrespect; lack of concern). But, as Brennan and Jaworski [58] have argued, the symbolic meaning that attaches to practices is not fixed. It can vary across cultures and across times. While we might think it is disrespectful to pay someone to mourn at our parents’ funeral, paying for professional mourners communicates respect in some cultures (e.g. Ancient Rome and modern Taiwan). In a similar vein, Danaher [56, 59] has argued that the meaning that attaches to the practice of interacting with a robot or using a digital proxy could change over time. While it may be disrespectful now, it might become tolerable or even respectful later. This might be something that changes as the technology becomes normalised. At the same time, it is worth noting that the opposite could happen too: we could become less tolerant of the technology and more inclined to favour real authentic presence as we become more aware of its harms. Such intolerance may be something we are beginning to see in relation to the use of mobile phones among school-age children. Many countries have either proposed or are seriously considering a ban on the use of such technologies in classrooms, prompted by increased awareness of the harms such use might be doing to both learning and social interactions.

In short, even though it will remain controversial and subject to disagreement, we think that some demand for real authentic presence is likely to persist over time, even if the contexts in which it is demanded change. Consequently, including this condition in the MVPP seems appropriate.

5 Conclusion: understanding the proposed principle as a practical tool

In the previous section, we explained and discussed each part of our proposed principle and considered whether– and to what extent– it can be put into practice. We now conclude with some final reflections on the principle, in the form of replies to possible questions that could be raised about our discussion above.

First question: if the use and creation of a particular digital duplicate is permissible by the standards specified in the MVPP, does that automatically mean that it is also a good idea to create and use this digital duplicate? Not necessarily. What we have suggested is a principle stating conditions under which creating and using specific digital duplicates would be permissible, but not a principle stating conditions under which creating and using the duplicate is desirable or advisable.

As far as we can tell, the creation and use of the DigiDan digital duplicate of Dan Dennett fulfils the criteria of permissibility stated in our proposed principle. Dennett explicitly consented to the creation of DigiDan and to the specific uses of it that the researchers behind DigiDan had proposed. There is clearly at least minimal value here: it is an interesting research experiment, which helps to illustrate, among other things, that it is possible to create a digital duplicate of a leading academic whose text outputs can be hard to tell apart from new text written by the academic himself. Strasser and the Schwitzgebels, moreover, seem to have taken precautions to mitigate risks of various relevant forms of harm. They also seem to have been transparent with third parties about what they were doing. And in the contexts in which they have made use of the DigiDan duplicate– viz., in their research and in public presentations– there does not seem to have been any clear need to have the authentic person present. So, on the whole, the creation and use of the DigiDan digital duplicate was permissible by our suggested standards. Yet it is an open question whether it was a good idea to create and use the DigiDan duplicate. It probably was a good and advisable thing to do in this case, all things considered, but we want to emphasise that our proposed MVPP is silent on whether creating any particular digital duplicate is a good and advisable thing to do.

Second question: should one think of the principle we have suggested as treating the permissibility of creating and using digital duplicates as an either-or matter? Is it that either a digital duplicate clearly fulfils all these criteria, in which case its creation and use is permissible, or it fails to meet the criteria, in which case its creation and use is impermissible? While our proposed principle can be interpreted in such a way, it is also possible to understand the principle as allowing for assessments of the creation and use of particular digital duplicates as being more or less permissible, on the whole and/or along different dimensions.

We can think of this as a spectrum. At one extreme are cases in which it is clearly and strongly impermissible, along all dimensions, to create and use a certain digital duplicate; at the other are cases in which doing so is clearly permissible along all dimensions of assessment. The greater the extent to which the specific sub-criteria in our suggested principle are satisfied, the more clearly permissible, along these various dimensions, it is to create and use a particular digital duplicate. Conversely, the greater the extent to which the creation and use of a digital duplicate fails to satisfy these criteria, the less permissible, and the more impermissible, it is. In the middle of the spectrum there might be cases in which one cannot say whether, on the whole, the creation and use of some digital duplicate is clearly permissible or clearly impermissible, because while some of the criteria are fulfilled or partly fulfilled, others are not.

In any concrete case, it might be unclear whether we are dealing with an on-the-whole permissible or impermissible digital duplicate. But our principle can still be considered useful in such scenarios, because it offers a tool for assessing the moral status of the creation and use of the digital duplicate in question. There may be questionable aspects of the duplicate under consideration– e.g., it might be unclear whether the person has really consented, in a clear and sufficiently unambiguous way, to all the uses the digital duplicate might be put to– whereas other aspects of its creation and use might be above board: there might, for example, be some minimal value there, and there might be transparency about the fact that this is a digital duplicate. Our proposed principle offers tools for assessing the case in question, even if it is debatable whether it is clearly permissible or clearly impermissible. We see this as a strength of our proposed principle: it may sometimes not be clear-cut whether the use and creation of some particular digital duplicate is permissible or impermissible, but there is still value in having an ethical assessment tool for highlighting the aspects that are more or less problematic.
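
To show how the principle could function as a graded assessment tool rather than a binary test, here is a minimal sketch in which a human reviewer scores each condition on a 0.0 to 1.0 scale. The condition labels, thresholds, and example scores are our own illustrative assumptions.

```python
# A sketch of the MVPP used as a graded assessment tool. Condition
# labels, thresholds, and scores are illustrative assumptions.

CONDITIONS = ["consent", "minimal value", "transparency",
              "risk mitigation", "authentic presence not required"]

def assess(scores: dict[str, float]) -> str:
    missing = [c for c in CONDITIONS if c not in scores]
    if missing:
        raise ValueError(f"unscored conditions: {missing}")
    weakest = min(scores, key=scores.get)
    if all(s >= 0.8 for s in scores.values()):
        return "clearly permissible along all dimensions"
    if any(s <= 0.2 for s in scores.values()):
        return f"clearly impermissible: '{weakest}' is unmet"
    return f"borderline case; weakest dimension: '{weakest}'"

# E.g., consent is ambiguous but the other dimensions are above board:
print(assess({"consent": 0.5, "minimal value": 0.9, "transparency": 0.9,
              "risk mitigation": 0.8, "authentic presence not required": 0.9}))
# -> borderline case; weakest dimension: 'consent'
```

Even in a borderline case, the tool does what the paragraph above asks of it: rather than forcing a verdict, it highlights which aspect of the duplicate’s creation and use is most problematic.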

Third and final question: how can one weigh the different potential benefits and potential harms of digital duplicates that we have highlighted, so as to decide whether, on balance, particular digital duplicates are potentially more beneficial than harmful? This is an important and legitimate question. Here, we have not sought to assess how likely the potential benefits and harms are to be realised. Nor have we offered an assessment of how beneficial the different potential benefits are or how harmful the potential harms are. We think that, in general, whenever one considers the possible benefits and harms of emerging technologies, it is important not only to come up with lists of potential benefits and harms, but also to try to develop methods for assigning probabilities to them, and for assessing just how important the potential benefits, and just how detrimental the potential harms, would be if realised. When it comes to the ethics of digital duplicates, doing so is an important topic for further research.
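
For illustration only, a probability-and-magnitude weighing of the kind just described might look like the following sketch. Every number and item name is an invented placeholder, not an assessment we are actually offering.

```python
# An invented expected-value comparison; all numbers are placeholders.

def expected_value(items: list[tuple[str, float, float]]) -> float:
    """Sum probability * magnitude over (name, probability, magnitude)."""
    return sum(p * m for _, p, m in items)

benefits = [("wider social participation", 0.6, 5.0),
            ("time saved for the original person", 0.7, 3.0)]
harms = [("identity misuse", 0.3, 8.0),
         ("loss of authentic presence", 0.5, 3.0)]

net = expected_value(benefits) - expected_value(harms)
print(f"net expected value: {net:+.1f}")  # > 0 suggests benefits dominate
```

The obvious objection, and the reason we flag this as a topic for further research, is that such point estimates conceal deep uncertainty about both the probabilities and the magnitudes; developing defensible methods for producing them is precisely the open task.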

Clearly, this is an issue on which people will have different views. Some– such as Hiroshi Ishiguro, whom we mentioned at the beginning– are likely to take a strongly optimistic view of the promise of digital duplicates, whereas others are likely to take a more pessimistic view, highlighting instead the risks involved. This is an important topic to consider as digital duplication technology becomes available to more and more people. Our hope is that our discussion above will be useful in further research on this intriguing topic.