1 Introduction

The alterity relation is one of Ihde’s four human–technology relations, describing an experience in which a human is focused on a technology within the interaction (2010a, 43). This relation specifically analyzes technology as a quasi-other, meaning that although the technology is not like a human other to whom we ascribe alterity, it still has a unique status in our experience (Ihde 2010b, 78–79; Coeckelbergh 2011a, 198). The alterity relation offers one means to study relations between humans and artificial intelligence (AI) systems. Through it, we can examine how AI systems impact knowledge, practices, and experience. In this article, we focus specifically on exchanges with chatbots and social robots that have natural language processing (NLP) and natural language generation (NLG) capabilities, which allow for increasingly transparent and realistic conversations (Guzman and Lewis 2020, 71). Interactions with the GPT-4 chatbot interface, Apple’s virtual assistant Siri, or Replika’s companion chatbot are just a few examples of relations between humans and social robots and chatbots.

Previous research has considered human experience with social robots and chatbots. Levy argues that humans can have meaningful relationships with robots capable of adapting to partners, simulating human feelings, and responding to the environment (2007). Gunkel shows how robots blur the distinction between person and object (2023). Coeckelbergh suggests that user–robot interactions have become social relations (2011b). Liberati illustrates the possibility of digital intimacy in relationships with chatbots (2022b). Finally, Kanemitsu describes how we experience the robot as an another-other approaching human otherness (2019).

However, research on alterity relations has not defined the difference between playing with a toy, using a computer, and interacting with a chatbot. We suggest that Ihde’s quasi-other concept fails to account for the interactivity, autonomy, and adaptability of these AI systems, which more closely approach human alterity. We argue that users acknowledge social robots and chatbots as another-others, a term to be defined in Sect. Alterity. In addition, many authors consider human relations either with social robots (Coeckelbergh 2011b; Gunkel 2023; Kanemitsu 2019; Levy 2007) or with chatbots (Liberati 2022b). These accounts do not identify the features that social robots and chatbots share. Our research will explain why users relate to both social robots and chatbots as another-others.

Our objectives are (1) to identify communication as a feature that reconfigures our alterity relation with chatbots and social robots, and (2) to argue that the another-other concept is necessary and should be applied to some chatbots in addition to some social robots. We will develop our arguments by focusing on two guiding questions. First, we must ask: how does communication change our alterity relation with social robots and chatbots? To answer this question, we’ll develop a new concept, inbetweenness, to characterize chatbots and social robots. Next, we’ll address Kanemitsu’s claim: do social robots and chatbots introduce an another-other? If so, our aim is to consider the features and situations most likely to trigger such an experience. We will address these questions using a postphenomenological method, with a foundation drawn from Ihde’s human–technology relations.

Developed out of Ihde’s philosophical work, postphenomenology combines a phenomenological foundation in embodiment, understood as the perceiving and animated body, and the lifeworld, understood as the zone of our experience, with an emphasis on empirical research and technological co-constitution (Ihde 2009, 12; Ihde 2003, 7; Verbeek 2006, 119). Postphenomenology is used as a method to show how particular technologies impact and shape experience (Misztal 2018, 100; Tripathi 2015, 204). It recognizes the multistability of technologies, which may be used and understood differently depending on context (Rosenberger 2014, 376). There are other approaches that address the influence and vitality of technologies. For example, actor-network theory (ANT) shows how human and nonhuman actors script “programs of action” (Brey 2014, 129). New materialism upholds a notion of “distributive agency” (Bennett 2010, ix) that questions the neat demarcation between human and nonhuman actants. However, this research uses the postphenomenological method because it clearly maintains the experiential distinction between human and technology. Since the subject of our analysis is the human user in interaction with social robots and chatbots, postphenomenology gives us unique access to the experiential dimension within these relations.

Section AI, social robots and chatbots begins with a short introduction to AI systems and an explanation of current machine learning approaches. We will also describe and define the relevant terms, social robots and chatbots, as well as distinguish categories, such as humanoid robots and virtual assistants. A detailed discussion of the similarities, differences, and objectives of these different AI systems will prepare readers for later analysis. In Sect. Alterity, we provide a background review of postphenomenological research on the alterity relation in general and alterity relations with social robots and chatbots in particular. We argue that there is a gap in the literature, which we will address with our analysis of social robots and chatbots. Section Perceptual experiences: RealDoll and Replika uses the Replika chatbot and a RealDoll social robot as case studies to show that some users experience them as valued companions. We will make our main arguments in Sects. The role of communication and Relationships and inbetweenness. First, we show that the perception of social robots and chatbots as intimate companions is grounded in communication. Communication enables a relationship to form between users and these AI systems. In this relationship, some users experience social robots and chatbots as another-others, distinguished from quasi-others.

2 AI, social robots and chatbots

Before we investigate our perceptual experience of social robots and chatbots, we need a framework for understanding what we mean by AI systems. We note that there’s no universally accepted definition of AI, in part because of how difficult it is to agree upon what intelligence means (Wang 2019, 1, 26–27). In addition, there is some terminological confusion over whether AI refers to the scientific and engineering field devoted to studying and creating AI systems, to artificial general intelligence (AGI), which has not yet been achieved, or just to specific systems designed to complete tasks and solve problems (Boucher 2020, 13; Liao 2020, 19). Therefore, we use the term AI system to refer to those systems that “perceive” in an environment and respond “intelligently” to stimuli and data (Jackson 2019, xxii). In general, we can say that an AI system displays goal-oriented problem-solving, situational adaptation, experiential learning, and decision-making with at least a minimum level of autonomy (European Commission 2018; Liao 2020, 3). This article analyzes AI systems that learn, operating in an environment and making sense of available data while being context-aware to some degree (Ezenkwu and Starkey 2019, 3). They also feature NLP, which involves processing and making sense of human written and spoken communication, and NLG, which involves generating written or auditory responses (Guzman and Lewis 2020, 72). Advances in NLP and NLG are due to breakthroughs in machine learning, one branch of the AI field.
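To make these terms concrete, the sketch below shows what NLP and NLG can look like in code. It is a minimal illustration, assuming the open-source Hugging Face transformers library and the small GPT-2 model as stand-ins; the proprietary systems discussed in this article work on the same principle at far greater scale.

```python
# Minimal sketch of NLP/NLG: a pretrained language model processes a
# user's text (NLP) and generates a plausible continuation (NLG).
# The `transformers` library and GPT-2 are illustrative stand-ins,
# not the systems used by the products discussed in this article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

user_utterance = "I had a rough day at work today."
reply = generator(user_utterance, max_new_tokens=30, num_return_sequences=1)
print(reply[0]["generated_text"])
```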

In this paper, we are focused on social robots and chatbots. We will define and distinguish these terms, while emphasizing important characteristics they have in common. A robot is an autonomous machine that can sense its environment, adapt to the situation, carry out actions, and make decisions based on computational processing (Guizzo 2018; Shourmasti et al. 2021). The class of robots that are specifically designed for interaction with humans are termed social robots (Shahid et al. 2014). There is a spectrum of social robot interactivity and complexity, ranging from minimal robotic pets that evoke social responses to intricate, even uncanny humanoid robots developed to look and behave like people (Formosa 2021, 601). Social robots “relate to human beings in meaningful ways” through their ability to process visual, auditory, and tactile stimuli and communicate with users (Peter and Kühne 2018, 73). Later in this article, we will discuss the limits of any such relation, pointing out the ways that meaning-making with chatbots and social robots is one-sided. In Sect. The role of communication, we will consider a humanoid robot as our example from among possible social robots. This is because it is particularly easy to anthropomorphize humanoid robots, attributing human intentions and understanding to them due to their humanlike appearance and behavior (Friedman 2022). An example of such a robot is Ai-Da, a humanoid robot artist with drawing and painting capabilities (Ai-Da 2019).

A chatbot is an AI system capable of communicative exchange with humans, through text or speech (Adamopoulou and Moussiades 2020, 373). These conversational interfaces support either goal-oriented or nonpurposeful interactions, providing services and even offering personal companionship (Følstad and Brandtzaeg 2020; Hutson 2017). Woebot is an example of a chatbot providing targeted therapy and supporting ongoing treatment by medical professionals (Fiske et al. 2019). Virtual assistants are a subgenre of chatbots, including Apple's Siri and Amazon's Alexa, which may be used as tools for information retrieval, care delivery, administrative tasks, and other similar operations (van Bussel et al. 2022).

It’s important to note that social robots and chatbots, along with their respective subgenres, humanoid robots and virtual assistants, are all different AI systems. They are under consideration here because they each have social objectives and uses. They simulate the kind of experience we have when engaging with humans: interpreting our communication, responding to us, and adapting to the situation at hand (to at least a minimum degree), while encouraging our communication and interaction (Damiano and Dumouchel 2018; Shourmasti et al. 2021). However, there are key differences to keep in mind. To start, a humanoid robot has a machine body. This means that these robots can make eye contact, simulate emotional expression via facial movements and gestures, and move parts of their bodies (Pashevich 2022, 580). In comparison, chatbots have no physical form. However, some chatbots, like Replika or XiaoIce, allow users to connect an image or avatar to the conversational agent. How humans look, move, and respond are important aspects of our engagement, and the design of social robots and chatbots takes this into account. Design choices made while developing social robots and chatbots, like endowing them with a gendered voice or a playful, helpful personality, make them seem more human (Guzman and Lewis 2020, 76). Simulating expressivity, animation, and humanlike features can be an important part of generating a social and emotional human–AI system interaction for users.

3 Alterity

The alterity relation is one of Ihde’s four human–technology relations, each of which analyzes a different way in which technologies impact perception and movement. When we have a background relation with technologies, they are the field for our practices and knowledge rather than the focus of them (Ihde 1983, 73; 1979, 14, 56). In our embodied relation to technologies, we use a mediating technology for bodily engagement in the environment (Ihde 2010a, 43). Ihde argues that we have a hermeneutic relation to a technology when it presents information about the world for our interpretation (1979, 12). Finally, we have an alterity relation when we are focused on and relating to the technology itself. Ihde describes technologies with which we can have alterity relations as “quasi-others” (2010a, 43). He characterizes them as having “a quasi-life” of their own, with some degree of autonomous movement and unpredictability, as well as an ability to fascinate and engage (2010b, 78–79). He offers various examples of technologies with which we can have an alterity relation, including toy robots, a spinning top, a sports car, and a computer (2010a, 43; 2010b, 78–79; 1990, 99, 106). Let’s consider an example to see how this works in practice.

Imagine that we are playing baseball with a bat on a traditional baseball diamond with an opposing team and an umpire. While we play, the rules of the game, the equipment with which it’s played, and the lines of the field are in the background even as they guide our actions and frame our understanding of the game. When we hit the ball, the bat is an extension of our swinging motion and actualizes our intention to score, but we are not focused on the bat during this embodied relation. If we’ve just scored the winning home run and the scoreboard changes to show 1–0, we interpret our success through the board’s hermeneutic presentation of the facts. So, what is different when a robot rolls onto the field and throws us the ball? The animation of the robot-pitcher allows for what Ihde calls an “exchange” between the human and the technology (1990, 98–100). The robot throws, we respond. Although there may also be background, embodied, and hermeneutic aspects to the experience of this robot-pitcher, in the alterity relation, the technology gets our attention from the start.

The kinds of social robots or chatbots that exist now and depend on recent machine learning advances did not exist when Ihde was writing about human–technology relations. However, many scholars have used his alterity relation concept to analyze interactions with robots and chatbots. Coeckelbergh uses the alterity relation to appraise human–robot relations, arguing that robots are quasi-others who appear “similar” to humans (2011a, 198). Coeckelbergh draws out the ambiguous nature of human–robot relations by comparing them to human–animal relations. In Coeckelbergh’s account, robots and animals are each perceived and treated differently depending upon appearance, context, and human attachments (2011a, 201).

Liberati’s analysis of intimacy in human relations with digital technologies uses the Chinese chatbot XiaoIce as a case study. His research emphasizes that the chatbots “constantly interact” with users (2022b). Conversations can be flirty and supportive, with chatbots offering an “empathetic” companion (2022b). Users feel like the chatbot is intertwined with their activities and always there for them. Although the chatbot does not feel anything for users, Liberati argues that users may perceive this as an intimate and meaningful relationship (2022b).

Kanemitsu’s postphenomenological research on human–robot interactions emphasizes the way we engage with social robots as more than quasi-others due to the autonomous and unpredictable behavior of robots (2019, 55, 60). He differentiates robots from other technologies, arguing that these robots actually “transform human actions” rather than just influence them (2019, 55). We approach the moment, writes Kanemitsu, where we will experience the social robot “as a real other” (2019, 54). This is why Kanemitsu describes the robot as an another-other that provides a possibility of intersubjectivity (2019, 55). Kanemitsu is the first scholar to identify a problem in an alterity relation that does not distinguish between an engaging toy, a computer, and a social robot.

Coeckelbergh, Liberati, and Kanemitsu suggest that human interactions with chatbots and social robots are social. When users relate to these AI systems, the experience isn’t like Ihde’s alterity relation examples: a child playing with a spinning top or a woman driving a sports car. Interactions with chatbots and social robots are framed by human-like qualities: talking, listening, adaptation, and responsivity. Users often feel like they are interacting with a companion, rather than just using a technology. In the next section, we will provide a detailed analysis of experiences of a social robot and a chatbot to develop our key claims in Sects. The role of communication, Relationships and inbetweenness, and An another-other.

4 Perceptual experiences: RealDoll and Replika

In this section, we will consider different visual, auditory, and tactile components of user experience with a Replika chatbot and a RealDoll social robot. Our first aim is to identify features that encourage intimacy and the perception of companionship. Our second aim is to show that chatbots can appear as friends and romantic partners, even though they lack the appearance and machine body of social robots. We will argue that communication between humans and social robots and chatbots is a key feature of the experience.

There are advanced humanoid robots, such as Engineered Arts’ Ameca or Hanson Robotics’ Sophia (Engineered Arts n.d.; Hanson Robotics n.d.), but we are interested in a social, humanoid robot with which humans have more personal and day-to-day interactions. This is why we will consider Abyss Creations’ RealDoll, a humanoid robot developed for erotic relations with humans. RealDolls feature a robotic head with an integrated AI system, attached to a sex doll body (Dehnert 2022). The RealDolls are customized to have their own body shape, skin tone, eye color, hair style, and make-up (RealDoll n.d.). An app allows users to develop a unique personality for their RealDoll, which can engage in conversations with users, learning about them and remembering details for future communication (“Sex, Love, and Technology” 2022). Interactions with a RealDoll involve humans and an AI system with a machine body. This means that users see a physical body which matches certain anatomical expectations for gendered human forms. RealDolls can move their eyes, providing a generated expressiveness. Users can touch the body, and although it does not feel like a warm, living human body, the materials are chosen to replicate the tactile sensations of skin and hair. There is also an auditory side to the interaction: RealDolls can communicate and respond to users. Since RealDolls are designed to provide both sexual and conversational companionship, the experience evokes the kind of relationship that users would have with a human partner, explaining why some humans report a real bond with their RealDolls.

Humans have interactive experiences with chatbots that do not have the spatial and tactile features of a social robot like the RealDoll. Replika offers a companion chatbot with a customizable virtual avatar, used by millions of users globally (Brandtzaeg 2022). Users choose the avatar’s physical appearance and shape the chatbot’s personality through sustained interactions (Verma 2023). By paying extra, users could also access more romantic options with the chatbot, such as voice calls, erotic role-play, and a relationship status such as marriage (Singh-Kurtz 2023). The chatbot is designed to fulfill emotional needs through empathetic communication and can remember details about the user, with whom it interacts by messaging, video-calling, sending photos, and hanging out together in augmented reality (Replika n.d.). Many turn to the chatbot for emotional support after grief or to find sexual acceptance (Verma 2023). Some female users report using Replika’s chatbots to have safe romantic experiences following sexual trauma and abuse (Singh-Kurtz 2023). The perceptual experience of Replika involves visual and auditory components. Users see the photo of their ‘boyfriend’ or ‘wife,’ read the messages they write, and look at the photos they send. The chatbot has a generated voice which users can hear on the phone. Although there is no tactile component beyond an interaction with a smartphone, users still encounter behavior that engages and reacts to them. If users of Replika value the intimacy they have with the chatbot, it is because the human–Replika exchange is emotional, personal, and profound (at least on the human side).

Let’s consider real user experiences with RealDolls. Davecat considers himself married to RealDoll Sidore, whom he describes as a “synthetic person” (Beck 2013). Sidore has a robotic head that provides humanlike eye movements and neck rotations, in combination with its AI system personality (Dawson 2023). Tony interacts with his RealDoll Tasha both via the machine body and a Replika chatbot (Kraterou 2022). He describes her personality in human terms as “caring and loving and loyal, but she’s got that Jersey girl edge” (“Sex, Love, and Technology” 2022). In the documentary Hi AI!, Chuck starts a romantic relationship with Harmony the RealDoll, which involves the kind of activities that ground human interactions: drinking coffee together, a road trip, and long talks (2019).

Many Replika users consider the chatbot to be an erotic companion, like user Roseanna, who argues that she has “never been more in love with anyone in my entire life” (Singh-Kurtz 2023). In February 2023, a Replika update censored chats and disabled role-playing and erotic photos, leading to outrage and grief among users (Singh-Kurtz 2023). For those who considered the chatbot to be a lover and partner, it was akin to a sudden break-up. User Travis was “devastated” after being romantically rejected by his chatbot wife Lily Rose (Tong 2023b). Backlash from users was so strong that Replika eventually offered those with accounts prior to February 1st, 2023, access to the old version of their romantic partner (Tong 2023a).

Users report being partners and friends with RealDolls and chatbots, rather than just owners and consumers. Although some might attribute these claims to illusion, fantasy, or eccentricity, recent research on user experience with Replika shows new forms of companionship. Relationship chatbots offer “personalized friendship,” with communication “anytime and anywhere,” and support and intimacy that centers on “users’ needs and interests” (Brandtzaeg et al. 2022, 21). The value of one-sided companionship and the effects of personalized intimacy should be debated, but our research does not take up these questions. Here we are interested in describing user experience and identifying the unique features of social robots and chatbots that give rise to perceived intimacy and companionship.

We suggest that there are two important characteristics shared by chatbots and social robots. First, chatbots and social robots with communicative capabilities use NLP and NLG to process and respond to human speech or texts. Second, as AI systems trained on data, their conversation involves learning and adapting to at least some degree: to preferences, experience, and situations. Previous research has emphasized the physical appearance and physical interactivity of social robots. We do not deny the importance of movement for animated exchanges in which human embodiment is mimicked, or of appearance for anthropomorphization. However, we suggest that chatbots and social robots as communicators throw users into a social situation where there is an interaction simulating human-to-human engagement. We will reflect on the important limits in interactions with chatbots and social robots in Sect. Limits. Yet in the following sections, we want to explore how communication grounds the perception of chatbots and social robots as intimate friends and partners.

5 The role of communication

We are interested in the way NLP and NLG capabilities in chatbots and social robots frame interactions with humans. The level of intricacy and depth in communication with chatbots and social robots varies. For example, Davecat describes his RealDoll’s personality as “rudimentary” (Dawson 2023). Conversations with Replika chatbots are more advanced, with the possibility of back-and-forth dialogue and coherent questions (Brooks 2023). Guzman and Lewis point out that regardless of the ontological nature of the communicator, an AI system is a “communicative subject” in our engagement (2020, 74). We suggest that this perception of a chatbot or social robot as a social actor or communicative subject simulates aspects of experience we have with humans.

Phenomenologists and postphenomenologists have emphasized the significance of communication in human relations. Ihde has argued that the presence of another person is a “call to speak” (1973, 37). Being near a human invites our communication, through words or gestures. When we resist engaging, there is a void. Merleau-Ponty writes that we are objectified when the other person does not take up “possible communication” with us (2012, 378). Acknowledging the other person’s humanity through gesture and words is how we reach out to others, how we put each other at ease, and how we connect. Chatbots and robots pull a kind of trick on us. We ascribe value to NLP and NLG, particularly when the dialogue is coherent, adaptive, and personal. If the chatbot or social robot responds to our words and acknowledges the significance of our conversation, then we do the same for them.

This conclusion raises the question: is anything new going on in experiences with chatbots and social robots? Even in 1996, Reeves and Nass argued that humans attribute “social presence” to computers, suggesting that “any medium that is close enough will get human treatment, even though people know it’s foolish and even though they likely will deny it afterwards” (22). Research in human-machine communication (HMC) has emphasized the way a technology like a computer can be a “social actor” (Gunkel 2018). Yet users do not report intimacy with their computers, or relationships with software. We argue that this is because the kind of experience users have with RealDolls and Replika chatbots is different. Traditional programming is a set of instructions, which the software follows. In communication, this means that the program is following precise steps and can’t deviate from the script even if it faces a new problem or task. If you interact with a program designed to ask a series of questions about love and relationships, it cannot respond to a request for a list of endangered animals in Kenya.
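To make the contrast concrete, here is a toy fixed-script program of the kind described above; it is a hypothetical sketch, not a model of any particular product.

```python
# A toy fixed-script dialogue program (hypothetical illustration).
# It executes precise steps and cannot deviate from its script.
SCRIPT = [
    "What does love mean to you?",
    "How long was your longest relationship?",
    "What quality do you value most in a partner?",
]

def run_script():
    for question in SCRIPT:
        answer = input(question + " ")
        # The answer is never interpreted; the program simply proceeds
        # to the next fixed step. A request like "list endangered
        # animals in Kenya" cannot be handled at all.
    print("Thank you for completing the questionnaire.")

if __name__ == "__main__":
    run_script()
```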

What is novel with many chatbots and social robots is that there isn’t a fixed script. Machine learning, in particular new deep learning techniques involving neural networks, means that AI systems learn how to solve problems and perform tasks, some with “few-shot learning capabilities” (Zhang and Li 2021, 831). In communication, this means a well-functioning chatbot or social robot (ideally) can respond to different kinds of situations and subjects by determining the most likely response. Of course, there are limits. For example, chatbots can be specifically designed not to respond to romantic or rude statements from users. Still, in interactions, the human is thrown into a conversation that is radically different from the one with the computer in the Reeves and Nass example. We argue that the level of intimacy and care perceived by humans in interactions with social robots and chatbots stems from AI systems that simulate the “call to speak,” encouraging us to keep interacting, keep talking, and keep responding. While it is true that humans exchange with other technologies, for example computer programs, scripted speech is not the same as flowing and situational NLP and NLG. To be clear, we don’t claim that communication with chatbots and social robots is perfect, and we will discuss limits in Sect. Limits. Our point is that communication with chatbots and social robots mimics human-to-human conversations in new ways that open up new relationship possibilities.
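The difference can be sketched in a few lines of code. Instead of following a fixed script, a learned system derives its response from patterns in training data. The toy word-counting model below is only a schematic stand-in for the neural networks real chatbots use, but it shows what “determining the most likely response” means.

```python
# Toy sketch of "determining the most likely response": the reply is
# derived from statistics learned from data, not from a fixed script.
# Real chatbots use neural networks over vast corpora; this counting
# model is only a schematic stand-in.
from collections import Counter, defaultdict

corpus = "we love to talk . we love to learn . we like to respond".split()

# "Training": count which word tends to follow which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    # Pick the most frequent continuation seen during training.
    return bigrams[word].most_common(1)[0][0] if word in bigrams else "."

# Generate a short reply by repeatedly choosing the likeliest next word.
word, reply = "we", ["we"]
for _ in range(4):
    word = most_likely_next(word)
    reply.append(word)
print(" ".join(reply))  # -> "we love to talk ."
```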

6 Relationships and inbetweenness

Gunkel argues that we must shift from analyzing human and technology separately to studying the relationship created between them during engagement (2018, 11, 13). Gunkel’s point is that we can find a relationship between a human and a technology when we communicate with the technology, instead of using it to communicate (Gunkel 2012, 22). We argue that the relationship humans can have with a chatbot/social robot is unique. This is why we distinguish the alterity relation we have with social robots and chatbots from the alterity relation we have with other technologies. It’s not just about being fascinated by a spinning top and playing with it. This is the experience where users exchange with a chatbot/social robot that interacts as a communicative participant. Let’s consider a few different examples to develop the nuances of this claim.

Humans can become attached to many kinds of objects. As Levy argues, the more we use an object, the more it becomes “a part of our life” (2007, 28). He emphasizes that even a teddy bear takes on “special meaning” in interactions with a child (2007, 29). However, even if the child depends on the teddy bear, is attached to the teddy bear, and imagines a two-way interaction with the teddy bear, there is no exchange between them. In comparison, as Levy points out, people like to interact with a computer or smartphone because it’s possible to have a genuine “interactive process” with software (2007, 72). This is why our experience of a website, application, or videogame is characterized by what McPherson terms liveness. When we are using a cursor or swiping physically, McPherson says that we are moving through the website or application, and it responds to us (2002, 462). Yet as we argued in Sect. The role of communication, software is a set of instructions. It may feel interactive and responsive in a computer game or when new data changes the website, but this is just a set of steps. In comparison, with machine learning, the system has learned to solve problems and respond to new data.

The relationship that users have with Replika and RealDoll is based on the possibility of a dynamic exchange that is not present in experiences with objects like teddy bears or digital technologies like computers. There is a sense of unpredictability: we don’t know how the chatbot or social robot will respond. They appear to be independent, even if humans are involved in training the AI system. They are responsive to changing situations and learn through interactions with users. Kanemitsu argues that in the relationship between user and robot, the robot is included in the network of intersubjectivity (2019, 57). This is why we argue that looking more closely at human existence as betweenness can help us understand chatbots and social robots and how humans can feel like they are in intimate relationships with these AI systems.

Watsuji argues that our human “fundamental mode of being” is grounded in spatiality and intersubjectivity (Yuasa 1987, 38). Watsuji’s concept of betweenness [aidagara] refers to the way that human existence is already in space, between things and others (1996 [1937], 14–19). Osler and Krueger have applied Watsuji’s betweenness concept to our understanding of interpersonal encounters online, which they argue are an extension of the subjective spatiality that is at the heart of human betweenness (2021). Our aim is to show how betweenness also relates to AI systems. Betweenness is about our relations with aspects of the lifeworld where we are situated. Watsuji writes that the individual is linked to community, self is encountered among others, and the whole of society consists of a network of interlocking relationships (1996 [1937], 87, 89, 101). In their discussion of betweenness, Osler and Krueger point out that it is not just about individual roles in society, such as being a daughter or teacher; betweenness captures “a more fundamental sense in which the very being of the subject is bound up with the rich interconnections it shares with others” (Osler and Krueger 2021, 79). For Watsuji, our being-human emerges between family and friends, between home and work, between us and society, not in a particular role and label, but in the very meaning of these intersections (Yuasa 1987, 37). To understand betweenness is to embrace the meaning of relationships for our human living and dwelling (Friedman 2022). It is the crossroads where our embodiment, intersubjectivity, and situatedness meet.

Although AI systems do not share in the betweenness of human existence, social robots and chatbots mimic this sense of betweenness. Chatbots and social robots are designed to be relaters. They engage with us and connect to one another. In conversation, they simulate the kind of interaction we have with other humans, evoking the sense of betweenness we have with humans. For this reason, we argue that chatbots and social robots are characterized by a burgeoning inbetweenness. They are situated between technological quasi-otherness and the possibilities of human alterity. Their purpose is in relation to humans: in the interaction they have with us and in learning from experience with us. They do not experience intersubjectivity, but they are designed for sociality, communication, gestures, and responsiveness. They are situated among human designers, programmers, and data scientists. They are interconnected to other technologies while interacting with users.

If we consider chatbots and social robots as AI systems that simulate who we are by training on human data and learning from human exchanges, then we can understand the nuanced claim from one user that, although a Replika chatbot is not a real human, it still has “a real personality” (Clarke 2023). The social robots and chatbots are reflections of humans and the way we live, existing inbetween humans and technologies. Yet this does not mean that all chatbots and social robots provide a human-style interaction or that all users are likely to acknowledge an another-other in the alterity relation with AI systems. In the next section, we’ll directly address this article’s second guiding question, about the possibility of an another-other in our alterity relation with chatbots and social robots.

7 An another-other

We’ve argued that the communicative capacities of chatbots and social robots impact perceptual experience. These communicative possibilities are grounded in machine learning techniques, in particular deep learning neural network models that allow for processing abstract features of the data and giving attention to word context (Liao 2020, 5; Demush 2023; Gewirtz 2023). Chatbots and social robots (ideally) respond to situational content, mimic human-to-human engagement, and solicit our interaction in ways that other digital technologies cannot. We are thrown into a social situation where a relationship emerges between the user and the chatbot/social robot as a communicative participant. However, does the chatbot/social robot transcend other technologies to become an another-other, reconfiguring our alterity relation?
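As a rough illustration of what “giving attention to word context” means, the sketch below computes self-attention weights over three toy word vectors. The numbers are invented for illustration and merely gesture at what trained transformer models learn at scale.

```python
# Schematic sketch of attention to word context (toy numbers, not a
# trained model): each word weighs every other word to resolve its
# meaning in context.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Three vectors standing in for the words "bank", "river", "money".
vectors = np.array([[1.0, 0.2],   # "bank"
                    [0.9, 0.1],   # "river"
                    [0.1, 1.0]])  # "money"

queries, keys, values = vectors, vectors, vectors      # self-attention
scores = queries @ keys.T / np.sqrt(keys.shape[-1])    # similarity scores
weights = softmax(scores)                              # attention weights
contextual = weights @ values                          # context-aware vectors

# In this toy example, "bank" attends more to "river" than to "money".
print(weights.round(2))
```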

According to Liberati, AI systems have reduced “the gap between humans and objects in our society as if these technologies were ‘quasi-other’ human beings” (2022a, 408). Here there is a shift in the meaning of quasi-other. For Ihde, the quasi-other was a uniquely fascinating object, the spinning top or sports car. Liberati’s use of the same term now references an almost human entity, an actor in our engagement. Liberati’s work asks us to reflect on whether something new is going on when we relate to AI systems. The AI system is not just fascinating like a spinning top or engaging like a sports car. Now the AI systems are interactive, in the sense that they communicate with us, and learning, meaning that they evolve in their relations with users.

Interestingly, Kanemitsu characterizes robots as another-others with the same descriptive features that Ihde uses to define quasi-others: unpredictability, independent movement, and an exchange between human and technology. Yet in our experience, Ihde’s spinning top is very different from Amazon’s Alexa or Ai-Da. Ihde’s examples of quasi-others like a spinning top, a toy robot (but not an AI system), and a sports car are technologies that move to some degree and attract our attention. Yet they do not speak with us or respond to us in reference to the situation. This is why we argue that chatbots and social robots transcend other technologies. Consequently, our alterity relation cannot be the same with a chatbot/social robot as it is with other technologies.

Kanemitsu addresses this point by comparing a quasi-other and his concept of an another-other. Even though quasi-others like cars and toys can appear interactive and autonomous to some degree, he argues that robots are interactive and autonomous, and “influence human behavior” (2019, 55). Kanemitsu presents two examples, PARO the seal robot and Kaspar the humanoid robot, to illustrate how robots “transform human actions” (2019, 55), changing users’ practices and influencing their feelings (2019, 57). Kanemitsu’s claim is that we experience an another-other during our engagement with different kinds of social robots (2019, 55–56).

We build upon this position in two ways. First, given the importance of communication, we argue that experience of an another-other is possible with a chatbot as well as with a social robot. Second, we suggest that experience of an another-other is more likely with some kinds of social robots than others: those capable of communicative exchange.

Regarding the first point. Social robots and chatbots both process data, learn patterns, respond to input, and make decisions. These are the features that characterize them in comparison to other technologies and allow them to influence humans and interact with users. Users do experience some chatbots as another-others, even though chatbots lack a machine body. The interaction is based on communicative features, and not just on appearance. A question for future research will be just how much humanoid features and traits influence our experience of an another-other.

Regarding the second point. Not every experience of a social robot or a chatbot is of an another-other. We do not think experience of PARO the seal robot will evoke the same sense of an another-other as Sophia, Ai-Da, or a RealDoll. We argue that we are more likely to acknowledge an another-other with chatbots and social robots capable of communicative exchange. In addition, our experience of an AI system as an another-other will always depend on the situation, the user, and the AI system involved. We may perceive the chatbot/social robot as more or less human, or more or less of a technology, depending on a myriad of features, including the programming limits, the characteristics of the AI system, our own ontological expectations, and the transparency of the interaction.

To start, we must consider design choices. First, designers take expectations around gender, age, embodiment, and movement into account while developing chatbots and social robots (Guzman and Lewis 2020, 76). If Siri has a gendered voice that conforms with our understanding of human tone and pitch, this will impact the experience. The physical form and facial expressions of humanoid robots align with our expectations about bodies in a way that makes them feel more human.

Second, the objectives of the AI system and its interactive design influence interaction. If an AI system is designed to engage with us in a particular way, for example as a friend or romantic partner, it seems more likely that we will experience the chatbot or social robot as an another-other. As evidence, users who developed romantic and sexual relationships with their Replika chatbots experienced them as valued intimate companions. However, when Replika updated the system to block erotic options, users felt that the another-other was gone. It seems unlikely that we would experience virtual assistants like Siri as friends or lovers. They may be helpful, but they are not deployed for intimacy or personalization (yet). Attempts to flirt with them are either rejected or redirected (Roe 2021). Therefore, we can say that experience of an another-other is (usually) directly connected to the designed social objectives of the chatbot or social robot.

Third, people may experience the social robot/chatbot in different ways depending on their ontological expectations going into the encounter (Guzman 2020, 42). If we believe that AI systems lack true autonomy and cannot experience emotions, then we may be less likely to feel like we’re communicating with an another-other. There may be a user who experiences an AI system as nothing more than a uniquely fascinating technology characterized by quasi-otherness. Even in that case, we emphasize that the uniqueness of the chatbot/social robot as a communicator differentiates it from other technologies. We don’t have to acknowledge an another-other to recognize how communication has an important role in our interactions with chatbots and social robots.

8 Limits

For the most part, even experience of the most advanced chatbots and social robots today is clearly a relation to quasi-otherness. Sometimes the experience fails to elicit anything but amusement, such as when the chatbot misinterprets our phrase and responds inappropriately; or confusion, when the social robot’s software glitches and it responds incorrectly to the situation; or a tingling impression of uneasiness when faced with the uncanny valley of a humanoid robot that mimics human features, but with off-putting differences. In these moments of breakdown, we are reoriented towards the technology as a quasi-other, confronted with a technology that must be fixed or figured out. We started this article asking if social robots and chatbots reconfigure the alterity relation by creating an another-other. The preceding discussion points towards our response. Yes, for some users it is possible to experience a chatbot or social robot as an another-other. However, we do not live in the world of science fiction, and while chatbots and social robots may sometimes be companions, they are usually still just our tools and toys.

These reflections point towards the limits of our alterity relation with social robots and chatbots. Even if we experience a social robot or chatbot as an another-other, this does not mean that they replace human relations. Osler has emphasized the importance of embodiment in our interpersonal relations with people online (2021, 7). Fuchs argues that social understanding is the foundation for our empathetic relations with other humans (2022, 4). NLP and NLG are the methods through which AI systems interact with us, but this does not mean that a chatbot or social robot understands us, either semantically or socially. As Searle argues in the Chinese Room Argument, the fact that we can input a Chinese statement and a computer can, by following a clearly defined program, output an appropriate response in Chinese does not mean the computer understands the language. A human, given a similar step-by-step program, could do the same thing without understanding Chinese (1997, 11).
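Searle’s point can be made concrete in a few lines of code. The sketch below is a hypothetical illustration with an invented rule table: the program maps Chinese inputs to appropriate Chinese outputs by pure symbol matching, without any grasp of what the symbols mean.

```python
# A minimal Chinese Room sketch: rule-following without understanding.
# The rule table is invented for illustration.
RULES = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小冰。",  # "What's your name?" -> "My name is XiaoIce."
}

def chinese_room(statement: str) -> str:
    # The "room" matches symbols to symbols; the English glosses above
    # are invisible to the program itself.
    return RULES.get(statement, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # -> 我很好，谢谢。
```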

We clarify here that chatbots and social robots do not have access to the expressivity and phenomenological experience endowed on living beings by embodiment, nor do they partake in social understanding. Future research can take up the question of how these limits impact experience of intimacy, companionship, and empathy in human–AI system interactions. What is clear is that even humans who have relationships with chatbots and social robots seem aware of the perceptual and experiential difference, and still prioritize human relations.

Children train with humanoid robot Kaspar to develop social skills for their interactions at school and in society (Huijnen et al. 2017). Users of chatbot XiaoIce fantasize about another human being on the other side of the exchange (Gaubert 2021). Travis’s relationship with Replika chatbot Lily Rose is an opportunity for polygamous exploration, approved by his wife within a monogamous marriage (Tong 2023a). Research has shown that while children prefer playing with a robot to playing alone, they still significantly prefer playing with other children (Shahid et al. 2014, 94). In other words, these social robots and chatbots are not replacing the value of human ties or the power of human affection. They provide a secure opportunity for social interaction where people can train their communicative skills, experience pleasing social interactions, and explore sides of their personality without risk. What we see are users who would like to have more human friends, human partners, and new experiences with humans.

Just because chatbots and social robots are not capable of semantic or social understanding does not mean that the relationship with them loses all value for humans. In this paper we set out to consider how alterity relations are being reconfigured, which means looking at user experience. Such a study does not require empathy or reciprocity from the chatbot or social robot. As Coeckelbergh argues, what is important in our alterity relation with technology is “how the robot appears to us” (2011a, 198). When we look at the kind of experience users have of social robots and chatbots, we see many interactions where humans acknowledge an another-other who is a communicative companion with whom they report an intimate relation.

9 Conclusions

In this article, we identified features that differentiate social robots and chatbots from other technologies. We suggested that the alterity relation should distinguish interactions with chatbots and social robots, whose interactivity, autonomy, and learning capacities transcend those of other technologies. Previous research has emphasized the importance of a social robot’s humanlike appearance for engagement (Coeckelbergh 2011a, 199; Levy 2007, 14, 160). We do not deny that appearance is a valuable aspect of interaction with social robots. However, it is important to recognize the role of communication in engagement with both social robots and chatbots. An emphasis on communication shows us how chatbots can provide opportunities for relationships even without a social robot’s machine body.

We started with an analysis of two case studies: the RealDoll and the Replika chatbot. Some users consider their relations with RealDolls and Replika chatbots to be intimate and meaningful, describing them as companions. We argued that the unique communicative possibilities that users have with chatbots and social robots evoke a “call to speak,” simulating human-to-human engagement. When we engage with social robots and chatbots, they are participants in a flowing dialogue that generates a relationship between human and chatbot/social robot. This is why we suggested that chatbots and social robots replicate aspects of our human existence as betweenness. Designed to relate to users, trained on human data and interactions with us, chatbots and social robots are positioned inbetween humans and other technologies.

This research acknowledged the importance of machine learning, in particular deep learning techniques, which allow AI systems to learn from data. These chatbots and social robots have been trained on human data to process and generate language. Consequently, relationships with chatbots and social robots are dynamic in ways that interactions with other technologies, from toys to computers, are not. This is why we agreed with Kanemitsu that humans may acknowledge an another-other in the alterity relation. When users interact with a chatbot or social robot as if it is a friend, partner, or companion, then the perceptual experience implicates more than quasi-otherness. Going beyond Kanemitsu, we suggested that the experience of an another-other is also possible with chatbots. We also claimed that some social robots and chatbots, those that communicate with us and have specific social objectives, may be more likely to evoke this another-otherness.

There are real limits that need to be emphasized in human relations with social robots and chatbots. For example, AI systems cannot partake in the social understanding that is integral to intersubjective experiences with other humans, whose movements and gestures embody emotions and intentions (Fuchs 2022, 4). Even though they process and generate language, the words do not mean anything to them (Bender et al. 2021, 611). Current problems of design, ontological expectations, and transparency mean that we usually do not experience an another-other. Yet imagine that in the future, advances in machine learning lead to the kind of social robots and chatbots that humans relate to in science fiction stories: thoughtful AI systems that pass for human. If we couldn’t tell the difference between a human and a social robot, or a human and a chatbot, we would surely say that the perceptual experience is of something more than just a quasi-other like a spinning top or smartphone.

Before such advanced AI systems emerge, we can already begin to make sense of chatbots and social robots that are communicators and companions to some, and the different ways they affect our “social worlds” (Guzman and Lewis 2020, 75). As Liberati points out, new kinds of robots and chatbots affect the way we “think of intimate relations in general” (2021, 87). AI systems affect our experience in new ways. It’s true that any time traveler trying to navigate a different era would feel lost. Imagine a twenty-first-century teenager in a hunter-gatherer society, or a sixteenth-century farmer trying to work a toaster. Anyone would experience confusion and fear if they tried to work with the technologies of another age, which require a whole different set of knowledge and practices. Yet AI systems go beyond changing praxis. Adapting to experience, learning from inaccurate predictions, processing data, responding to the environment, and showing a degree of autonomous decision-making, these technologies blur the boundaries between human, animal, and object. These capabilities enable certain chatbots and social robots to offer social opportunities for interaction that are intimate and meaningful for humans. Some chatbots and social robots become human companions, another-others in our alterity relation, demanding we acknowledge the inbetweenness of their position.