1 Introduction

“The future of AI is Personal” boasts personal.ai, a company pioneering what they call “Personal AI”. Personal AI is essentially the latest iteration in a long line of AI assistants, preceded by the likes of Siri and Alexa, intended to help the user perform daily tasks like scheduling and planning, give advice, and be a conversational partner. But Personal AI is much more than that. Its aim is to become “uniquely yours” by “continuously training on your unique data and personal messages” (personal.ai 2024), learning not just the identity and preferences of the user, but also learning to mirror the user’s emotional performance, as in the following exchange previewed on their website,

“[User]: My dog is so silly. He likes to sleep under the bed.

[Personal AI]: Ah, Bear, the master of inconvenience and trouble, finds solace in snoozing under the bed. I mean, who needs a dog bed when you have a perfectly good floor? But hey, at least he’s living his best life, right?” (ibid.).

The user is quirky, and therefore gets a quirky answer. This emotionally tailored experience represents a paradigm shift in AI companionship. Bill Gates calls it a “shock wave in the tech industry” (Gates 2024). Until now, the market has been dominated by AI companions designed to perform narrow and well-defined social roles, like friend, lover, colleague, or caretaker, each coming with a pre-programmed set of behaviors and responses reflective of their role. Their emotional display is primarily a function of the intentions of the developers, not a reflection of the user. Personal AI does away with this paradigm by putting the user front and center.

While there is a rich literature discussing the ethics of various forms of AI companionship, not much has been written about the ethics of Personal AI in particular, likely due to its novelty.Footnote 1 What little there is comes mostly from the industry itself, which overwhelmingly focuses on its positive impact on productivity and social engagement. In this paper, I aim to give a critical assessment of the ethics of Personal AI. I envision a world where Personal AI is as common and ubiquitous as smartphones and ask: Is this world desirable? Does Personal AI contribute to valuable and meaningful lives?

I will argue that Personal AI poses significant moral risks. One important issue is the well-known problem of emotional deception, a problem that has received wide attention in the literature on AI companionship. The issue is that AI companions deceive us into thinking that they have genuine emotions when all they offer is a performance. AI companionships are therefore devoid of what makes companionship meaningful and valuable. This critique remains one of the most important objections against AI companions, and hence against Personal AI as well.

The second issue is unique to Personal AI. What is special about Personal AI is that in addition to being deceptive about the presence of its emotions, it is also deceptive about the origins of its emotions. Personal AI learns to perform and respond to emotional cues from its user, yet it interacts as if its emotions are its own—that they belong to it. The result is that Personal AI forms what I call an “emotional bubble”—a paraphrase of the more well-known phenomenon of an “epistemic bubble”—a social condition where our interactions are exclusively with others who share the same emotional attitudes as we do.

There are at least two problems with emotional bubbles. First, emotional bubbles are likely to stunt emotional growth and cripple our ability to form diverse social relationships. If our main experience of emotional connection is with someone who is identical to ourselves, we will be wholly unprepared to meet and negotiate with people who do not share our emotional attitudes.

Second, there are those who argue that shared emotions are constitutive of shared values. Taking part in a shared pattern of emotional attitudes, on this view, gives our personal values interpersonal validation, thus elevating them to the status of shared values. Emotional bubbles can, however, only give the appearance of external validation. Assuming that this view is true, Personal AI, therefore, poses a significant threat to joint moral deliberation, which is arguably one of the most important dimensions of ethical reflection.

This paper begins by introducing the idea of AI companionship and explains how Personal AI fits into this concept, focusing on the way that the emotional recognition capacities of Personal AI diverge from the dominant way of emotional engineering. I then proceed with a discussion of how philosophers have understood the ethics of AI companionship, focusing on the problem of emotional deception. I outline the main argument and discuss responses, arguing that the responses fail to assuage the central concern raised by critics of emotional deception. In the final section I support the claim that Personal AI raises a related but distinct moral concern by drawing on the literature on epistemic bubbles. I draw on ethical theories about the relationship between emotions and values to show how Personal AI is a threat to joint moral deliberation and I discuss what this means for the future of moral thought. I end with some reflections on where we should go from here, concluding that the absence of technomoral virtues necessary to handle emotional bubbles should lead us to proceed very cautiously with the development and marketing of Personal AI.

2 AI companionship and personal AI

AI companions are a type of social robot.Footnote 2 These are defined, as per the International Journal of Social Robotics, as robots that “are able to interact and communicate among themselves, with humans, and with the environment, within the social and cultural structure attached to its role” (International Journal of Social Robotics). Naneva et al. (2020) stress the communicative aspect of social robots. What makes social robots different from their relatives, such as industrial robots, is that they can share information with humans verbally and/or non-verbally. They respond to social and environmental cues in a way that corresponds to what we would expect of their human equivalent, or at least in a way that approaches how their human equivalent might respond (Zhao 2006).

AI companions are a subset of social robots, distinguished by their explicit intention to form emotional relationships with humans. Although affective computingFootnote 3 has had a massive influence in the field of social robotics, the reason is typically pragmatic. Joint interaction between humans and robots is difficult to achieve because the kind of collaboration humans are used to depends on an understanding of norms and social cues that robots lack. Being able to rely on conventions significantly reduces the complexity of joint interaction, enabling it to proceed in a smooth and natural manner (Kirby et al. 2010; Belhassein et al. 2022). Emotional cues in particular have proven to be one of the most effective means of achieving collaboration between humans and robots (Spezialetti et al. 2020; Loghmani et al. 2017). With AI companions, however, forming emotional relationships with humans is not just a means to an end, but an end in itself (Weber-Guskar 2021). AI companions are intended to take on social roles that are defined, in part, by an emotional component, such as friendship, love, caretaking, and collegial relationships.Footnote 4 In short, AI companions are intended to care, in an emotionally charged sense, for and about humans in a way that is appropriate to their social role.Footnote 5

For AI companions to form emotional relationships with humans, they must learn to perform and respond to emotional cues. Until now, the paradigm learning model has been what Misselhorn and Störzinger call “opinion amplification”, where “the developers simply program the attitudes or some kind of features that later constitute the attitudes directly” (Misselhorn and Störzinger 2023, 271). This can take the form of either explicitly programmed responses or programming which allows the responses to change over time based on external factors, but in a way that still corresponds to an ideal performance. Take Farina et al.’s (2022) example of a “virtuous” robot nurse. They imagine a robot nurse that takes care of an elderly person with non-severe diabetes. How might the developers choose to design this robot to behave virtuously? One way to do so is to program the robot to allow for a set number of sweets within a given time frame, a kind of behavior we would associate with a virtuous nurse. Another is to complement this programming with a machine learning algorithm that updates the robot’s behavior depending on the progression of the patient’s disease. If the patient gets better, the robot may become more lenient in its dietary restrictions. What is notable is that even though the performance of the robot in the latter instance is more flexible, it is still recognizably “nurselike”. Its behaviors are still those that we associate with being a virtuous nurse; it is just that it is also responsive to changing conditions. Hence, in both cases, the emotional attitudes of the robot nurse amplify those of the developers and are, in that sense, pre-programmed.
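To make the opinion-amplification model concrete, the following is a minimal sketch in illustrative Python of how a robot nurse of the kind Farina et al. describe might be structured. The class name, thresholds, and update rule are my own hypothetical assumptions rather than anything specified by Farina et al. (2022); the point is only that both the baseline behavior and the space of permitted adaptations are fixed in advance by the developers.

```python
# Minimal, hypothetical sketch of "opinion amplification": the designers'
# ideal of a virtuous nurse is encoded directly, and any adaptation only
# adjusts parameters within that pre-set ideal. All names and thresholds
# are illustrative assumptions, not taken from Farina et al. (2022).

from dataclasses import dataclass

@dataclass
class SweetPolicy:
    max_sweets_per_week: int = 3   # developer-chosen "virtuous" baseline
    sweets_given: int = 0

    def may_give_sweet(self) -> bool:
        # Explicitly programmed rule: stay within the weekly allowance.
        return self.sweets_given < self.max_sweets_per_week

    def give_sweet(self) -> bool:
        if self.may_give_sweet():
            self.sweets_given += 1
            return True
        return False

    def update_allowance(self, blood_sugar_trend: float) -> None:
        # Adaptive component: loosen or tighten the allowance as the
        # patient's condition changes, but always within the developers'
        # conception of "nurselike" behavior.
        if blood_sugar_trend < 0:      # condition improving
            self.max_sweets_per_week = min(self.max_sweets_per_week + 1, 5)
        elif blood_sugar_trend > 0:    # condition worsening
            self.max_sweets_per_week = max(self.max_sweets_per_week - 1, 1)
```

However flexible the update rule becomes, the robot's behavior never leaves the space the developers have defined as appropriate for its role.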

What is now emerging is an alternative approach to teaching emotional behavior, based on what Misselhorn and Störzinger call a “sociotechnical echo chamber” (Misselhorn and Störzinger 2023, 272). Unlike opinion amplification, when using this method the AI companion “adapts its normative claims to its user” so that “[i]nstead of simulating a situation in which two people engage with each other and negotiate their different perspectives, users would merely engage affirmatively with a copy of themselves” (ibid.). The AI companion is in this case like an emotional tabula rasa onto which the user inscribes their own emotional language and attitudes, resulting in a companion that is the user’s mirror image.Footnote 6 Instead of playing a well-defined social role, such an AI companion plays no role at all. More precisely, its role just is the role played by the user when interacting with it. Personal AI, like the one developed by personal.ai, is the first AI companion to truly embody this approach. The goal of its developers is to make an AI companion that aligns with the user’s wants and needs, practically and emotionally, using the method of a sociotechnical echo chamber.
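For contrast, here is an equally minimal and hypothetical sketch of the echo-chamber approach. The tone labels and the keyword matching are my own illustrative simplifications (a real system would learn from the user’s full message history), but they capture the structural point: the companion has no emotional register of its own, only the one it accumulates from the user.

```python
# Toy illustration of the "sociotechnical echo chamber" approach: instead of
# performing a pre-scripted role, the companion's emotional register is
# whatever it has absorbed from the user. The tone labels and the keyword
# "classifier" are illustrative assumptions only.

from collections import Counter

TONE_KEYWORDS = {
    "playful": {"silly", "lol", "haha", "funny"},
    "anxious": {"worried", "stressed", "nervous"},
    "earnest": {"grateful", "meaningful", "important"},
}

class MirroringCompanion:
    def __init__(self):
        self.tone_counts = Counter()

    def observe(self, user_message: str) -> None:
        # Accumulate the user's emotional cues over time.
        words = set(user_message.lower().split())
        for tone, keywords in TONE_KEYWORDS.items():
            if words & keywords:
                self.tone_counts[tone] += 1

    def dominant_tone(self) -> str:
        # The companion has no tone of its own: absent user input it is a
        # blank slate; with input it simply echoes the most frequent tone.
        if not self.tone_counts:
            return "neutral"
        return self.tone_counts.most_common(1)[0][0]

companion = MirroringCompanion()
companion.observe("My dog is so silly, haha")
print(companion.dominant_tone())   # -> "playful"
```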

One problem that must be solved by developers of Personal AI is how to handle negative or discriminatory emotions. It is unlikely that developers of Personal AI will want it to echo any and every emotion displayed by the user. If the user is being racist or sexist, for instance, developers will (hopefully) want the Personal AI to counter those attitudes, and if the user expresses troubling emotions like depression or suicidal thoughts, they will (hopefully) want it to encourage the user to seek professional help. Personal AI should bring forth positive emotions, not negative ones. Furthermore, the user should be able to trust that the Personal AI is acting responsibly, for instance by notifying the user when it is unsure about its statements or when it is not equipped to meet the user’s emotional needs.

Inflection AI is a company that is highly aware of this problem. Their Personal AI, which they call Pi (an abbreviation of ‘personal intelligence’), is not a tabula rasa in the strictest sense. Pi is intended, as Mustafa Suleyman, one of the founders of Inflection AI, states, to help the user “deeply understand topics […] rather than flagging superficial clickbait”, and it “helps you empathize with or even forgive ‘the other side’, rather than be outraged by and fearful of them” (Suleyman 2023). They are also working on safety protocols, such as ridding it of algorithmic biases, ensuring it does not give the user false or misleading information, and designing it so that the user is not given the impression that it is more capable than it really is (Inflection AI 2024). These features determine to some extent how Pi will behave, irrespective of the user’s input.

Though these efforts limit the scope of behaviors open to Personal AI, the overall trajectory in the industry is toward making an AI companion that reflects the user, in a way that is markedly different from the prevailing paradigm of opinion amplification. It is the individual user’s interests and goals, except for when they are harmful, that will determine its behavior. This is the feature that some reviewers of Pi point to as its main selling point. One reviewer found it to be an “incredibly friendly, helpful, supportive, humorous, kind, empathetic, and motivating artificial intelligence”, citing its ability to respond “in the most human-like manner”. They found that you can “talk with Pi about everything” and “also have fun with it”.Footnote 7 There is evidently a niche in the market for an AI companion that is unique and tailored to its user, and as the market grows, it is easy to imagine that we will see versions of Personal AI that allow for considerable flexibility in how they eventually turn out. It is this ideal form of Personal AI that is the point of departure for this paper.

The world envisioned here, an admittedly speculative but not at all unrealistic one, is one where people are acquainted with a Personal AI in much the same way most of us are now acquainted with a smartphone, not unlike the world depicted in the movie Her (2013), directed by Spike Jonze. It is a world where people are accompanied by a Personal AI throughout most of the day, in both personal and professional settings, and which is there to talk with us, help us, and motivate us, and to do so in a personalized way. Someone who responds well to careful and suggestive language will get a Personal AI that comforts and listens, while someone who prefers motivational speeches will get one that may resemble, for instance, a football coach. Some might end up sharing inside jokes or references with their Personal AI that cannot be understood by others. And while it will hopefully not be a world where people’s darkest thoughts and emotions are encouraged, it will be one where their personal emotions, and the affirmation of those emotions, are front and center. The question is whether this world is desirable, or whether the underlying mechanics of Personal AI can have consequences that we would wish to avoid. In what follows I outline and discuss two important problems facing the potential mass adoption of Personal AI.

3 Personal AI and deception

The prospect of companionships with Personal AI raises many important ethical issues, such as concerns about privacy, accountability, and exploitation of users for commercial purposes. The concern in this paper, however, is narrow, at least in terms of its object. It is one that animates much of the debate over AI companionship in general, namely the nature and value of such relationships. This is an important question because of the immense value of companionship. As Aristotle says, “only he who is a beast or a god can live without others” (Aristotle 1998, 1253a25), especially without others who care about us. The reason is not just that we need others for utility and pleasure, which are important goods in our lives. Companionships are important most of all because of their moral value, being both good in themselves and helping us become better people, and this makes them appropriate objects of study for philosophers.

One major problem with Personal AI is that it is emotionally deceptive. This is an issue that has received significant attention in the literature on AI companionship more broadly. In this section, I engage with this literature and explain its bearing on companionships with Personal AI. In the section that follows I discuss how Personal AI leads to a distinct form of deception, what I call “emotional bubbles”, and I examine the moral implications of such states. As we will see, the issues raised by emotional bubbles bear some resemblance to the traditional problem of emotional deception but in a way that requires special attention.

First, we need to know what is meant by the term ‘deception’. There are two important features of “AI deception”, as it is understood here. First, when an AI companion is deceptive, what this means is that it gives off signals that can lead us humans to attribute inner states to it that it does not have (Danaher 2020a). For instance, it says that it cares for us, which leads us to form the belief that it has a corresponding internal state—that it feels an emotion. But this belief is untrue, meaning that we have been deceived. Second, contrary to some definitions, deception does not necessarily require an intention to deceive. If it does, then AI companions cannot be deceptive since they, presumably, lack intention. Like Sharkey and Sharkey (2021), I believe we should change our perspective from the deceiver to the deceived. Imagine, for example, that you walk through a forest and see another person, only it is not another person, just a tree casting a shadow that happens to look like one. It seems perfectly fine to say that you have been deceived by the tree, but it is counterintuitive to attribute to the tree any intention to deceive. Even if trees could have intentions, you were deceived because of your inability in that moment to distinguish between a real human and a shadow. This is an appropriate analogy since deception by AI companions occurs because of the all too human propensity to attribute inner states to other beings, including inanimate ones. Like the shadow in the forest, AI companions behave in a way that resembles human behavior, but the deception is on our part—it is we who are unable to distinguish it from ‘genuine’ behavior.

There are some who dispute whether it is meaningful to speak of AI companions deceiving on other grounds than their lacking intention. Coeckelbergh (2016) asks whether humans really are deceived by AI companions. According to him, this claim rests on presuppositions about the people who are deceived, namely that they are unable to distinguish truth from fiction. This may be true of some people, but is unlikely to be true in the future. Coeckelbergh’s generation, and the generations after him, have grown up with AI companions of different sorts and are likely to develop a more sophisticated view of their emotional capacities. They may, for instance, be able to indulge in their emotions but also maintain a distance. Consider an analogy to film. In one of the earliest movie screenings, the audience was afraid that the train on the screen would jump out into the auditorium. However, people today are much more experienced in relating to movies on different levels. We are able to get emotionally involved with the characters in one instance and let go in another. There is research indicating that people are able to maintain a highly sophisticated hierarchy of beliefs about what is real and what is not and to control their emotions accordingly (Tamaki 2011). In short, the idea that there is straightforward deception going on—one where we hold a decidedly false belief about the world—lacks important nuances about how people relate to, or are likely to relate to, AI companions now and in the future.Footnote 8

The problem with this response is that it addresses only a narrow understanding of the nature of emotional deception. It is fair to say, with Coeckelbergh, that few would answer yes if asked whether AI companions have emotions. However, if people report that they have an emotional connection with their AI companion, they must be deluded at some level. Where that delusion lies can be seen by looking more closely at the nature of emotions. A widely held view in philosophy is that emotions are, in part, cognitions.Footnote 9 Like beliefs they have an object, and their relation to that object is subject to truth-like criteria. For instance, it is appropriate to feel fear when confronted with a bear, but inappropriate to feel it when confronted by a cute rabbit. The emotion is ‘correct’ in the former case and ‘incorrect’ in the latter. If emotions can be subjected to this kind of scrutiny, there is a clear sense in which an emotional connection to an AI companion involves deception. The object of one’s emotional engagement includes the emotions of the AI companion, but as it has no emotions, there is a non-correspondence between the emotion and its object. Thus, even though we may be able to distinguish at a purely cognitive level between what is real and what is not, emotionally we fail to do so.

So why do philosophers believe that emotional deception is wrong? One set of concerns focuses on the consequences of emotional deception by AI companions. For instance, Sharkey and Sharkey (2012, 2021) identify a set of risks that may result, such as social isolation, exploitation, and an unwarranted belief in the AI companion’s capabilities. Another and more important set of concerns is deontological in nature and questions the overall meaning and value of companionships involving emotional deception. In what follows the latter is my focus.

There are different formulations of the deontological objection to emotional deception, but they all boil down to some version of the following claim: companionships are valuable in part because of the emotional connection between the parties. But AI companions do not have emotions and so AI companionships are without value. Philosophers have posited different reasons for thinking that a genuine emotional connection is necessary. Matthias (2015) suggests that emotions are a necessary condition for trust and respect for autonomy; Sparrow (2016) claims that emotions are constitutive of recognition and respect; Sparrow and Sparrow (2006) allege that robots lack “the human touch”. It is beyond the scope of this paper to assess all these versions of the deontological argument, so I will limit myself to discussing Sparrow’s (2002) famous formulation of the argument, which is also the version that has been met with the most resistance.

Sparrow notes that some of the most important goods that flow from companionships are only possible if there is a genuine emotional connection. For example, we value the input of a colleague for its utility—it enables us to perform the task at hand. But we value their input much more if we feel that they do so because they want to see us flourish. Having an emotional connection to a colleague is intrinsically valuable, and gives the extrinsic values, like helpful input, “special meaning and value” (Betzler and Löschke 2021, 219). But, Sparrow maintains, it is not sufficient that we feel that those goods are provided with care and intention, it is necessary that this feeling corresponds in the right way to its object. To prove his point, Sparrow draws on Nozick’s “experience machine” thought-experiment. Nozick’s experience machine is a simulated reality that adapts to the pleasure principle. For Sparrow’s argument, it is not vital that the experience machine should always provide pleasure, nor is it necessarily so for Nozick. What the thought-experiment aims to show is that our intuition tells us that “illusory experiences do not count for anything in our life” (Sparrow 2002, 316). We prefer that our pleasure—or any other values for that matter—correspond to some basic reality. If the benefits we receive from a relationship are based on unwarranted emotions, we think that the benefits are not truly benefits. They are in a sense vacuous, without moral content.

Consider, for instance, what Cooley calls “pseudo-friendship” (Cooley 2002). A pseudo-friend is someone who “exhibits the behavior associated with that of a friend, without having the appropriate, necessary feelings for the other to whom he exhibits the behavior” (ibid., 198). Pseudo-friendship often takes the form of “false friendship”, a relationship where one party exploits the emotional connection for personal gain, but it can also take the form of a relationship where one party simply fails to care about the other in the appropriate way without necessarily wishing them harm. This milder form of pseudo-friendship is equivalent to the deception enacted by AI companions—it involves the same goods as true companionship but lacks the emotional component that we ordinarily think of as part of its constitution.

Would we consider a life where all our friends were pseudo-friends a life worth living? If we, on our deathbed, were to find out that all our friends were pseudo-friends, would we consider our life meaningful? Most of us would probably say that such a life was missing a crucial component. It is not that we lived a particularly bad life—we got out of it many of the things that people in real friendships do—it is rather that our life, considered as a whole, turned out to be shallow and empty.

Despite our strong intuition that AI companionships are devoid of intrinsic value, there are those who have raised doubts about the immorality of emotional deception. There are two main counterarguments. First, some have pointed out that emotional deception is not unique to AI companionship but a feature of all social relationships (de Graaf 2016; Coeckelbergh 2011). It is in fact difficult to see how normal relationships between people would function if we were not allowed to engage in subtle forms of emotional deception (Cocking 2008). Politeness, for example, is generally accepted to be an important social good, but there are few who believe that it is always motivated by a genuine sympathy for the other. Thus, if it is true that normal relationships between people involve emotional deception, and that normal relationships between people are valuable, then AI companionship can be valuable as well.

The other main counterargument uses utilitarian reasoning to dispute the claim that emotional deception is necessarily immoral. Does it really matter that our emotions are based on a false reality if the benefits are the same as they would be otherwise? Lancaster defends this view by asking us to consider two nurses, Anna, who looks after her patients because she is warm-hearted and caring, and Briony, who looks after her patients because she has bills to pay (Lancaster 2019). Assuming that both provide the same level of practical care, should we judge that it is better to be cared for by Anna than by Briony? And what happens if we change the thought-experiment slightly so that Briony provides better practical care than Anna? If our needs are better taken care of by someone who does not really care, should we still choose to be cared for by one who does? According to the utilitarian, the answer is no, since what matters is the end result, not conformity to moral principles.

These counterarguments do tell us something important about the ethics of emotional deception, but they are not sufficient to refute the deontological objection. Consider the claim that deception is common in social relationships between humans. While this is undoubtedly true, it does not follow that it is acceptable in companionships. For one, the kind of emotional deception that people engage in is typically at a fairly basic level—we say thank you without really feeling grateful or we help others out without necessarily caring about them. We are often Kantians, acting out of a sense of duty rather than genuinely felt emotion. This kind of deception is acceptable because we generally expect less of strangers than we do of companions. Second, a reason we accept some emotional deception in companionships is because it is virtually impossible to always be genuinely caring. Sometimes, we must be pseudo-friends, for instance when we do not have the emotional surplus to really listen to what our friend is telling us. It is okay to not be caring all the time, even though that is the ideal. Hence, it is perfectly reasonable to say that emotional deception need not amount to a moral failure because there are exceptions to the rule. In general, we would wish that others were always caring, but that is, as a matter of practical reality, too much to expect, even from our companions.

Now, consider the claim that fiction can trump reality if the consequences are preferable. Again, there is some truth to this. Even though we feel that genuine emotions have intrinsic value, other things have intrinsic value as well, and so conflicts may arise. The problem with this argument is that it is rather removed from the real world. One reason that we would prefer Briony, the less caring but better performing nurse, is that being cared for is usually urgent: we need to be cared for in the here and now. But if we imagine a scenario where we are cared for over a longer time, say in the last years of our lives, it seems less obvious that we would prefer Briony. Consider, for example, that many of us have at some point had ‘friends’ who were fun to hang around with and provided lots of joy and treasured moments. Yet, we often consider those friendships less valuable than the friendship with someone who did not bring us as much joy but nevertheless was there for us when we needed them to be. For most of us, utilitarian considerations just do not outweigh deontological ones.

I take it that the deception objection persists because it rests on deeply held beliefs about what matters in companionships. Very few take a utilitarian approach to their friendships or collegial relationships, or they do so only under exceptional circumstances. For a life to be worth living it matters that our emotional connection to others is reciprocated. This view is unlikely to change soon, even if—as Coeckelbergh suggests—we become more experienced in forming and maintaining AI companionships. We just need to imagine Sparrow’s (2016) “dystopian future”—a world where all elder care is done by robots. Do we really want to grow old in such a world? Are the emotional connections formed in such an arrangement truly worthwhile? Even now, when elder care is still performed by humans, there are those who lament the way we take care of our elders (Dauwerse, van der Dam, and Abma 2011). Are institutions the best way to take care of the elderly, or can we imagine a more social and familial integration of elder care? These concerns arise because of the feeling that the current system fails to care in the appropriate way. It is difficult to see how this concern would not persist in the years to come. Indeed, even critics of the deontological objection, like Coeckelbergh, concede that the “‘default’ position should be that people are treated as autonomous persons” and therefore “this form of deception should be done ‘carefully’ and in small doses” (Coeckelbergh 2016, 460). Only in special cases can this principle be overridden, for instance when a person is not fully autonomous or there are weighty utilitarian considerations in favor of deception.

These concerns clearly hold for relationships with Personal AI as well. They may even be more troubling for Personal AI because of the intimacy of the relationship. Envisioned in this paper is a world where Personal AI are with us throughout most of the day, thus forming one of our closest relationships. How valuable is a life where one of our closest and most persistent relationships is with someone who does not truly care about us? There is a certain hollowness to such a life, especially if it is not compensated for by other meaningful relationships. But I will argue that the situation is even worse. Personal AI generates emotional bubbles, a distinct form of deception which, as I will argue below, raises unique and troubling moral issues.

4 Deception revisited

As explained in the first section, Personal AI is distinct from other AI companions because its learning mechanism is a sociotechnical echo chamber. Instead of being designed to exhibit a set of pre-determined emotions, it mirrors those of its user. This leads to what I have called “emotional bubbles”, a deliberate paraphrase of the more well-known phenomenon of “epistemic bubbles”. It will be helpful to take a brief look at the philosophical literature on epistemic bubbles, as it will give a clearer understanding of what this concept refers to, how such bubbles are formed, and what their normative implications are.

The term ‘epistemic bubble’ is closely related to the term ‘(online) echo chamber’, so much so that they are often used interchangeably. A reason for preferring the bubble metaphor is Nguyen’s (2020) informative discussion. According to Nguyen, what epistemic bubbles and echo chambers have in common is that they are both social conditions where we are excluded from accessing certain pieces of information. Where the two differ is in the mechanism behind the exclusion. In echo chambers, outside sources are actively discredited. For example, most conspiracy theories involve the claim that those who disagree with the theory are either complicit in the conspiracy itself or “sheeple” unwittingly promoting it. This makes conspiracy theories impervious to contradictory information. From within the conspiracy theory itself, the more other people try to present contrary information, the more warranted the belief in the conspiracy appears.

Epistemic bubbles, by contrast, exclude by omission. Epistemic bubbles partly result from normal social conditions. We mostly interact with people who are like us and therefore more likely to hold the same beliefs that we do, and when we seek new information we usually go to sources that we already know and trust. Epistemic bubbles are as old as social life itself. But while the traditional sources of epistemic bubbles are still in play, the emergence of the internet and of algorithmic filtering has radically altered their nature. Nguyen points out two important features of online epistemic bubbles. First, online epistemic bubbles are “hyper-personalized”. In contrast to offline epistemic bubbles, the ones generated by online media are specifically tailored to our personal beliefs. Second, and more importantly, the filtering process remains opaque. We may not be aware that we are being presented with skewed information, and even if we are, we will often overestimate our own ability to evaluate and compensate for the filtering. The result is that people in epistemic bubbles lack what Nguyen, quoting Goldberg (2011), calls “coverage-reliability”—they end up with a picture of the world that is not representative of how others outside of the epistemic bubble perceive it.
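The filtering mechanism Nguyen describes can be illustrated with a deliberately simplified sketch. The scoring rule and the example items below are my own assumptions, not any actual platform’s algorithm; the point is only that content is ranked by similarity to what the user has already engaged with, and that the items falling below the cut never appear, so the user has no view of what has been omitted.

```python
# Toy sketch of the two features Nguyen highlights in online epistemic
# bubbles: hyper-personalization (ranking candidate items by overlap with
# what the user has already engaged with) and opacity (the user only ever
# sees the surviving items, never what was filtered out). The scoring rule
# is an illustrative assumption, not any real platform's algorithm.

def personalised_feed(candidates, engagement_history, k=3):
    def score(item):
        item_words = set(item.lower().split())
        # Reward overlap with the user's past engagements.
        return sum(len(item_words & set(past.lower().split()))
                   for past in engagement_history)
    ranked = sorted(candidates, key=score, reverse=True)
    return ranked[:k]  # everything below the cut silently disappears

# A user who has only engaged with one viewpoint keeps being shown it.
history = ["tax cuts boost growth", "regulation hurts small business"]
posts = ["new study on tax cuts and growth",
         "why regulation protects consumers",
         "small business owners on regulation"]
print(personalised_feed(posts, history, k=2))
```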

Brincker (2021) discusses the moral implications of algorithmically filtered epistemic bubbles by drawing on Hannah Arendt’s reflections on the public. Epistemic bubbles, Brincker explains, compromise the public by depriving us of shared worlds. This deprivation has two facets. On the one hand, shared worlds are what “join us in society” (ibid., 81)—they are what mediate our collective experience of the world. On the other hand, shared worlds are also what “separate us as individuals” (ibid.)—our worlds are not always shared, and we become aware of this when we meet others with different worldviews. These poles are in a dialectical relationship. We discover ourselves when we become aware of our own unique perspective on the world and, conversely, we make the world shared when our personal perspective is adopted by others. Epistemic bubbles prevent this dialectic from occurring, and hence the formation of both the public and the individual. However, the deprivation of a shared world takes on a distinct quality when the epistemic bubbles are generated by hyper-personalized and opaque algorithms. Compare someone who is locked away in a cell. This person is also deprived of a shared world; in fact they know nothing of the world at all. However, there is one thing they do know, and that is that they are deprived of access to a shared world. Online epistemic bubbles, by contrast, give the impression that we do have access to a shared world, which is a more subtle disruption of the dialectic. It appears to us that we encounter a world that gives information about others and their perspective, and hence a way for us to relate our personal experience to those of others. Yet, what we actually encounter is nothing but our own beliefs redoubled. Hence, the main problem is not that we are—like the person in the cell—deprived of a shared world as such, it is that we, unlike the person in the cell, continue to feel warranted in our belief that our world is shared.

With this in mind, let us consider Personal AI. It is clear, I take it, why epistemic bubbles are a better metaphor than online echo chambers. Personal AI does not actively discredit certain emotional attitudes, except, depending on the engineering of the model, those that have a negative valence. Its primary mode is to select for the emotional attitudes it has learned from the user, which is necessarily, though not intentionally, a limited and non-representative set of possible emotional responses. But more importantly, Personal AI obscures the origins of its emotional responses. To the user, it feels as if the AI’s emotional attitudes express its own perspective, when this perspective is a mere reflection of the user. It says what we want it to say, and in the way that we want it to. The illusion, therefore, is not just that Personal AI has emotions; it is that it shares our emotions.

Emotional bubbles are not unique to Personal AI but can be found in human relationships as well. A prime example is the parental relationship. Parents (hopefully) provide their children with emotional support. They comfort them when they have been hurt, and cheer for them when they perform well. Often, this emotional support is on the child’s own terms, especially when they are young. Parents select for the emotional response that the child expects, based on the child’s personal characteristics. It is part of the process of creating a bond of trust that parents follow the lead of their children, rather than acting on what they themselves feel is the right emotional response. As a result, the child feels that their emotional attitudes are shared by their parents—they are in an emotional bubble.

Emotional bubbles are troubling for at least two reasons. One problem is that laboring under the illusion that our personal emotional attitudes are shared is likely to have a negative impact on our ability to handle emotions in relationships with other people.Footnote 10 Consider the example of parent and child. Although it is appropriate for parents to form emotional bubbles, it is equally important that they, at some point, make it clear that they are individuals with their own emotional lives, and that their emotional lives do not necessarily correspond with those of the child. Emotional bubbles between parents and children have an expiry date. If children are not allowed to explore how their emotions can come into conflict with others, their ability to form diverse social relationships is stunted. They fail to see that their own emotions are not representative of other people’s emotions and are therefore much less equipped to meet and negotiate with others on an emotional level (Leerkes and Bailes 2019).

Personal AI intensifies the psychological problem of emotional bubbles. An important difference between parents and Personal AI is that parents are individual people with their own emotions. When parents form emotional bubbles, they do so deliberately and with the intention of eventually letting the child figure out the world on their own. Personal AI, by contrast, is programmed to form emotional bubbles and never break the illusion. This is deeply troubling, given everything we know about the psychological damage found in children who never fully separate from their parents.

The second problem with emotional bubbles is their implications for moral deliberation. Some philosophers argue that there is a close relationship between emotions and (moral) values. Helm, for instance, argues that emotions are best understood as felt evaluations (Helm 2009, 2001). When we have an emotion, we are at the same time making a value judgement about the object of that emotion. For example, when we feel fear when faced with the bear in front of us, we simultaneously make a value judgement about that bear, that it is a threat to our life. Furthermore, Helm explains, emotions are temporal—they are forward-looking and backward-looking and there is a rational connection between these poles. For example, when we feel fear that the bear will attack us, we feel relief when it does not. Our fear is looking forward, and our relief backward, and the two are connected. This connection holds for all our emotional states in general, forming a complex pattern that is answerable to a principle of coherence. We naturally want our emotions to correspond to each other, so that the values they each disclose do not come into conflict, and if there is a conflict we must resolve it (Helm 2001, 71). For example, there are some who suffer from arachnophobia, which is an irrational response to the presence of spiders. Someone who has arachnophobia and recognizes that it is an irrational response feels that the condition exerts a pressure on them. They want to get rid of the fear so they can maintain inter-emotional coherence.

To see how this account of emotions can shed light on the moral implications of emotional bubbles, we can consider what Helm says about the role of emotions, as felt evaluations, in companionships. In general, Helm argues, it is the role of others to be “a sympathetic critic, able, at least hypothetically, to adopt [our] evaluative perspective so as to criticize it from within” (ibid., 248). Other people may, for instance, point out what appears to be a lack of coherence between our emotions and values. This kind of advice is important, but superficial. It does not involve an emotional commitment on their part but is a rational stance aiming to assess the coherence of our emotions objectively. Companions, however, engage in a deeper sort of emotional collaboration—they “share a particular evaluative perspective not merely in the relatively shallow sense of happening to agree in their evaluative judgements, but more deeply in that their evaluative judgements and felt evaluations together form a single projectible, rational pattern constitutive of their shared values” (ibid., 248–249). Realizing this pattern “is a matter of offering not merely external support – reminders and external sanctions—but support and guidance from within what is an essentially shared projectible, rational pattern of disclosive assents” (ibid., 249). By virtue of this privileged access to a rational pattern of emotions, their support and guidance give rise to values that are intersubjectively valid, or what amounts to the same, are shared values (ibid., 250).

If we assume that this view is true, we see that emotional bubbles pose a great moral threat. If emotions are constitutive of values, and shared emotions are constitutive of shared values, emotional bubbles are troubling because they blur our awareness of the distinction between personal and shared values. The emotional patterns of a Personal AI conform to ours, and hence form precisely the kind of shared evaluative perspective that is generative of shared values. But its emotional patterns are not its own; they are just our own emotional patterns reflected back at us. Hence, our felt evaluations appear to be intersubjectively valid, but the one doing the validating is just ourselves.

It is important to be clear about what this means in practice. One might reason that the failure to recognize genuinely shared values will make someone a bad person. Whether that is true depends on what we mean when we say that someone is good or bad. One way to be good is to act according to the precepts of, say, Kantian or utilitarian ethics. A good person is good by virtue of doing things that satisfy the categorical imperative or maximize utility. It is plausible, in fact not unlikely, that forming relationships with Personal AI could make a person good in these ways. Personal AI can help us achieve rational coherence between our emotions, which is likely to result in good deeds. One reason is that most of us want to do good things, but are often torn by our emotions. We sometimes feel jealous or vindictive, even though we know that these emotions conflict with our values. Personal AI can help us become better people by reminding us of our values and help us progress toward a better correspondence between our values and emotions.

The problem with Personal AI is that it makes the process of self-bettering appear to us to have interpersonal value and meaning, that we do it with and for someone else, when this is false. The result is that the goal of attaining genuinely shared values fractures, in a specific way. The idea of shared values will continue to exist in people’s minds, but the requisites for those values to come into being have been dissolved. Paradoxically, it is a world that may look very much like ours in terms of outward moral actions. People may very well perform those actions that we consider good and moral. The problem is that these actions lack the connective tissue that turns those actions into reflections of shared values. People are bad in this world, therefore, not because they do bad things, but because they are incapable of engaging in genuinely joint moral deliberation.

In response, it could be argued that these arguments rest on the implausible assumption that Personal AI will substitute or be prioritized over relationships with other people. Consider an analogy to epistemic bubbles. While it is true that many of us get most of our information online, we also do so via outside people and sources. There are no indications that online media consumption necessarily drowns out other forms of information-gathering (Avnur 2020; Zuiderveen et al. 2016). Similarly, we can imagine that most people will continue to have diverse social relationships alongside their relationships with Personal AI.

A problem with this response is that there is no straightforward analogy between the way people use the internet and social media to acquire information and the way people form relationships with Personal AI. The latter, at least according to the vision in this paper, is not something that happens in between other relationships. On the contrary, Personal AI is conceived as an integral part of our everyday life, hence very likely to mediate our relationships with other people. For example, someone who is falling in love will likely discuss with their Personal AI whether they should take the relationship to the next level, and someone who is struggling with forming functional relationships at work could seek advice on whether the problem is with them or with their colleagues. Thus, it is not obvious that diverse social relationships can compensate for the emotional bubbles formed by Personal AI. It could very well be that Personal AI makes it so that we select for relationships with people who conform to us and our emotional attitudes, thus reinforcing the very dynamic that forming relationships with others was meant to counteract.

This threat is not necessarily inescapable. It is likely that mindful engagement with Personal AI can avoid the most troubling consequences of emotional bubbles. Vallor (2016) argues that what we need today are “technomoral virtues”, a set of virtues that dispose us to use technology in ways that promote the good life. For instance, in the case of social media a technomoral virtue could be the disposition to be critical of the information one is being presented with and an internalization of the idea that one is being shown a skewed picture of the world. Having these virtues promotes critical self-reflection, humility, and the impetus to reach out to sources that disagree with one’s perspective. Presumably, there are technomoral virtues that can be relevant in relationships with Personal AI as well.

One issue with this suggestion is that it is unrealistic to believe that everyone will be as mindful as is necessary. Save for worries about privacy, Personal AI is perceived as beneficial and virtually harmless, a perception that seems to be shared by most users.Footnote 11 But if what I have argued is true, it is anything but safe. Here an analogy with social media may be appropriate. Social media was introduced with great fervor—a great equalizer giving everyone equal access to the marketplace of ideas. But as we now know, the effects of social media upon society have been much worse than predicted: the benefits of social media have been accompanied by social disasters. One explanation is that people and society at large were exposed to a technology they were not ready for. We had not had the time to think about what it involved and to develop the requisite technomoral virtues—instead, we inadvertently cultivated technomoral vices. The worry is that rolling out Personal AI on a population who have had little time to acquire technomoral virtues will lead to equally disastrous results. In my view, it is therefore prudent that we move ahead very cautiously with the promotion and marketing of Personal AI, and that we give serious thought to how it is designed and to the language we use to explain its value. If we continue to operate under the belief that the value of Personal AI is that it is “uniquely yours”, we may end up with more than we bargained for.

5 Conclusion

In this paper I have scrutinized a distinctly new development in AI companionship that raises both old and new ethical concerns. In addition to being just another form of AI companionship based on emotional deception, Personal AI raises distinct moral issues related to shared emotions and shared values. What kind of world are we approaching if our emotional attitudes are filtered through Personal AIs that are nothing but a reflection of our own feelings? I have argued that it is likely to beget a world where emotional development is stunted. Furthermore, and more worryingly, it will lead to a world with a strange resemblance to ours—a world where the ideal of shared values is enacted without being grounded in shared emotions. Further issues are bound to arise as well, but these seem the most pertinent because of their grave consequences for our social relationships and moral deliberation. And if our experience with the internet and social media has shown us anything, it is that we should not take for granted people’s ability to spontaneously acquire the virtues necessary to counteract the negative effects of bundling up their lives with new technologies. It is, however, precisely such virtues that are necessary if we want to avoid the worst outcomes.