Introduction

Artificial intelligence assistant (AIA) users have surpassed 100 million in the US alone (Bohn, 2019). They are the latest artifacts of the digital age that aim to enhance comfort, and they constitute a selling platform in consumers’ own homes (e.g., Amazon’s Alexa). Critically, AIAs are endowed with cutting-edge artificial intelligence (AI) technology, giving them the capacity to naturally interact with their users (henceforth, consumers) in a humanlike way. Seeing the close personal interactions inextricably intertwined with the powerful AI technology, some consumers perceive AIAs as social actors with a mind of their own; that is, they anthropomorphize the AIAs (e.g., Foehr & Germelmann, 2020). Moreover, only recently has such a cutting-edge AI-based device been so pervasively and centrally embedded in households, leading consumers to interact with it daily and eventually to attribute to this hardware companion a mind of its own. As a result of such anthropomorphism, consumers may build personal relationships with their AIAs, with the upsides of relationships, such as elevated customer service and satisfaction, but also some potential downsides. The paper’s core goal is to examine consumers’ relationship with the AIAs that are embedded in smart speakers, and to investigate the beneficial and harmful effects of anthropomorphizing AIAs on consumers’ satisfaction and well-being.

Past research has cast anthropomorphism in AI in a favorable light, focusing mostly on consumer benefits. Anthropomorphism in AI has been established as a path to elicit trust in robots and virtual agents (e.g., Lin et al., 2021), as well as in AIAs (e.g., Foehr & Germelmann, 2020). More recently, however, seminal conceptual works on AI have proposed that relationships with AI in service contexts may actually prove harmful to consumers (Pfeuffer et al., 2019). For instance, AIAs can intrude on consumers’ privacy, storing and analyzing consumers’ most personal data, which can cause mental strain for consumers (Puntoni et al., 2021). These past works are primarily conceptual, however, and to date no research has empirically explored in detail the potential psychological costs for consumers who anthropomorphize their AIAs. It therefore seems worthwhile to examine the harmful effects of AIA anthropomorphism (i.e., perceiving a mind within an AIA) on consumers (see Table 1 in the paper for a synthesis of this literature, and Fig. I in Web Appendix C for the main research model in light of past literature).

Table 1 Prior experimental work on the effects of AIA anthropomorphism

AIAs, such as Alexa, accompany consumers across a large part of their waking and even sleeping hours, often over several years, and, as a result, may assume a key role in consumers’ daily lives. In other words, consumers are forming close relationships with AIAs that may deepen over time. As relationship marketing research unequivocally shows, relationship development builds on the benefits gained from, but also the costs incurred in, the relationship (Dwyer et al., 1987). Gaining a comprehensive understanding of those costs, in terms of psychologically harmful effects for consumers in a relationship with their AIA, is paramount from a research, practice, and societal perspective.

To examine benefits as well as costs for consumers in relationships with AIAs, we draw on a seminal relationship theory, social exchange theory (Blau, 1986; Homans, 1961), and blend it with mind perception theory (Gray et al., 2007) (see Fig. II in the Web Appendix C for a depiction of this integration). Specifically, we propose that when consumers perceive a mind in their AIA (i.e., they anthropomorphize it), they are more likely to form relationships with their AIAs and to apply a cost–benefit approach to this relationship. Perceiving a mind in the AIA thus leads to both benefits and costs for the consumer, constituting two mediating mechanisms for the effects of AIA anthropomorphism. In addition to the well-established beneficial path through trust in the AIA, we argue that AIA anthropomorphism affects key consumer outcomes by triggering greater consumer privacy concerns and diminishing consumer satisfaction and well-being through an increased feeling of threat to human identity. Moreover, we essentially argue that AIAs’ threat to human identity can lead consumers to feel disempowered, again harming their well-being. Importantly, drawing on social exchange theory and its relationship perspective, we suggest that these harmful effects are more pronounced if consumers have a closer and longer-lasting relationship with the AIA. Figure 1 depicts our conceptual framework.

Fig. 1 Overview of the conceptual model

We conducted two full studies, as well as preliminary interviews and a preliminary consumer survey. In Study 1, we surveyed 238 current users of AIAs embedded in smart speakers at home (e.g., Amazon Echo, Google Home), demonstrated a harmful path emerging from AIA anthropomorphism, and replicated the beneficial path. Our key finding in Study 1 is that AIA anthropomorphism triggers consumers’ privacy concerns through an increased feeling of threat to human identity. Subsequently, consumers’ privacy concerns decreased their satisfaction and well-being. Building on this key finding, we conceptualized and tested remedy strategies to alleviate these harmful effects in a second study, a field experiment with 601 current AIA users. We implemented one control condition and three interventions: (1) raising consumers’ awareness about AIA data practices, (2) providing consumers with knowledge on how to deal with AIA data issues and (3) requesting consumers to take action to protect the privacy of their data in their relationship with the AIA. We measured consumers’ responses across two different time points and found that these interventions successfully attenuated the harmful effect of threat to human identity on consumer AI empowerment, subsequently improving consumer well-being and reducing privacy concerns.

With these findings, we argue for two main theoretical contributions. First, these findings contribute to the literature on human–AI relationships by investigating (to our knowledge for the first time) the usage of, and interactions with, AIAs through a relationship lens. While past research has primarily used a stimulus–response approach to examine consumers’ immediate reactions to AIAs (e.g., Benlian et al., 2020), owing to the unique place of AIAs in a consumer’s home and the intensive exchange of benefits and costs, we suggest that a relationship perspective is valuable for investigating AIAs. Specifically, integrating relationship theories (e.g., Dwyer et al., 1987) into the study of AIAs and drawing parallels from relationship marketing to explore a relationship lifecycle account of human–AI interactions provide a novel lens through which AIAs can be explored. Exploring the relationships between consumers and AI-powered platforms can unveil versatile interactions that go beyond purchases and foster consumer loyalty. The relationships that consumers form with intermediary platforms such as Alexa may gradually replace traditional consumer–firm relationships. Second, we contribute to past literature (Table 1) by providing a more balanced account of the benefits and potential psychological costs of AI for consumers. While the benefits of anthropomorphism in AI technology are well documented in the literature, a closer look at this phenomenon shows that there are psychological costs experienced by consumers in the relationship with AIAs. Considering the ambivalent effects of AIAs on consumers unveiled by our study, we suggest that it might be pertinent to leverage theories related to psychological costs and consumer ambivalence (e.g., techno-stress, cognitive appraisal) when investigating anthropomorphism in AI. Established theories of anthropomorphism (e.g., Epley et al., 2007) could thus be meaningfully extended by including in their framework the psychological costs, as well as the benefits, generated in interactions of humans with anthropomorphized AI entities.

The results of our study show that if managers endow AIAs with anthropomorphic features, it entails ambivalent effects such that consumers are more satisfied with the AIA but, at the same time, experience less general well-being. Therefore, our first recommendation to managers is to endow AIAs with anthropomorphism only if their respective customers are knowledgeable and confident about protecting their privacy in the relationship with the AIA. While there could be great promise in employing anthropomorphism in AI prudently, an indiscriminate use could undermine the potential of such cutting-edge AI technologies for both consumers and firms. Second, to minimize the harmful effects of AIA anthropomorphism and maximize its benefits, managers should empower consumers regarding the protection of their personal data. We provide managers with three practical ways of empowering users to competently handle their data security, which could lead to an increase in purchases and customer relationship building through this unique channel.

Preliminary insights on consumers’ relationships with artificial intelligence assistants

Public opinion about AIAs appears mixed: they polarize consumers, with some joyfully using them while others express blatant rejection, even avoiding being around AIAs. We conducted two preliminary studies with two key goals: (1) to gain and present a systematic, up-to-date overview of users’ opinions on AIAs, and (2) to understand the prevalence and extent of consumers’ privacy concerns and relationship issues with AIAs. To this end, we measured consumers’ perceptions with a survey and conducted interviews with AIA users.

Preliminary consumer survey

We conducted a preliminary consumer survey with 300 participants (Mage = 31.6, 68% female) recruited from Prolific to assess the extent of consumers’ concerns about the privacy of their personal data collected by AIAs and their knowledge of how this data might be used. We found that, although AIA users have a high level of general concern related to personal data (M = 5.43) and think that AIAs are intrusive (M = 5.06, both measured on seven-point scales), their knowledge of, and tendency to take action to protect, their privacy are very limited. Only 20% of AIA users know where their data goes and who has access to it, 65% have never changed their privacy preferences and 85% have never reviewed or deleted their stored voice recordings. Remarkably, 49% of the AIA users in this survey tended to agree with the statement that companies are currently exploiting consumers with new AIA platforms. This preliminary survey indicates that there may be looming issues in consumers’ relationships with AIAs: they express high privacy concerns but know little about how their data is handled and typically do not take action (i.e., they seem disempowered).

Preliminary interviews

To acquire a deeper, more specific initial understanding of AIA–user relationships, we conducted 11 in-depth interviews totaling 5 h and 30 min. Results show that users perceive clear and salient benefits from AIAs, but also deliberate on costs and risks (N = 4). There is a strong awareness of an always-listening device, occasionally leading the consumer to think about whether the next thing they utter will be heard and recorded by the AIA (N = 3). As user D states in the interview, “There are certain situations where I think ‘this does not need to be on tape’[…] and turn her off.” For some users, the reason for substantial privacy concerns is their lack of knowledge (N = 3), which corroborates the results of the preliminary consumer survey. User A notes, “I do not even know what functions the device has […] I am not sure if certain functions are too unsafe.” While seeing clear benefits in the convenience that AIAs bring, consumers were also nervous about instances of the AIA behaving autonomously, seemingly acting of its own volition (N = 4). The interviews also informed us about the kinds of costs that consumers may experience in their relationship with AIAs. Many consumers expressed fears regarding the potency of AI and referred to AI as a competitor to humans, essentially seen as an undesirable replacement for humans rather than as an enhancer of human experience (N = 6). Table 2 provides a selection of relevant quotes from the interviews. Further details about the preliminary studies can be found in Web Appendices C and D.

Table 2 Insights from preliminary interviews

These two preliminary studies show that AIA users hold ambivalent opinions about their AIAs and know little about how their data is used by the AIAs. Also, privacy concerns in the human–AIA interaction seem to be common among users. To investigate this further, we next present a conceptual framework and then two full studies.

Theoretical framework

Integrating mind perception theory and social exchange theory to build a user–AI relationship perspective

In what follows, we integrate mind perception theory and social exchange theory to build a relationship perspective on user–AI interactions and exchanges (see Fig. II in Web Appendix C). In this respect, social exchange theory is the main theoretical backbone. Social exchange theory describes how individuals form relationships with others and which factors determine subsequent relationship development. A key assumption is that all human relationships form on the basis of a subjective cost–benefit analysis (Homans, 1961). In other words, the benefits and costs that individuals perceive in a relationship determine whether they initially engage in the relationship and whether they uphold it (Emerson, 1981). Conversely, if exchange partners view the benefits-to-costs ratio in a relationship as insufficient, they will likely terminate the exchange relationship.

Recent literature suggests that advanced AI might be used to engage consumers in the service journey, form relationships with consumers and maintain them across time (Huang & Rust, 2021a). However, mind perception theory is essential to understand user–AIA exchanges and interactions from a relationship perspective (Gray et al., 2007; Epley & Waytz, 2010; Waytz et al., 2010). According to mind perception theory, people infer that others have mental states that are different from their own and are capable of attributing minds to others. Importantly, people assign minds not only to other people, but also to non-human entities such as animals, gadgets or software, and respond to them using the same social rules that they use with people (Epley & Waytz, 2010; Nass & Moon, 2000). Since people do not have direct access to others’ mental states, a social exchange requires inferences about the contents of other individuals’ mental states, especially their desires, goals and intentions (Cosmides & Tooby, 1992). Therefore, based on their observations, people make inferences about the mental states of others (e.g., if it talks and looks like a human, then it must have desires and intentions like a human). In doing so, people do not actually believe that an anthropomorphized entity is human; rather, anthropomorphic features such as natural language and interactivity trigger responses that are guided by human social rules. Thus, in many ways people interact with AIAs as if they were social beings, and AI-powered anthropomorphic assistants, thanks to the constant availability of personal data, could form and maintain relationships with the consumer by constantly improving themselves to adapt to the consumer, much like sales agents who form and maintain relationships with their customers (Schmitz et al., 2020).

The motivation for this tendency to assign a mind to non-humans stems from the desire to understand and predict their behavior more easily, and to establish contact and affiliation (Epley et al., 2007). Assigning a mind to a non-human increases the perceived similarity of the human and the non-human entity. While the perceived similarity with the AIAs (i.e., the perception of an intelligent, competent mind of the AIA which is humanlike) brings about some benefits such as a greater sense of trust, closeness and enjoyment towards the entity (Li & Sung, 2021; Qiu & Benbasat, 2009), the same perception can evoke psychological costs: it triggers a feeling of threat to human distinctiveness, raises doubts about one’s place in the world and introduces a fear that humans could be replaced by non-humans with intelligent minds (Złotowski et al., 2017). That is, while a mind that can support the consumer by helping and serving might be desirable, at the same time it could be rivaling and threatening. Such ambivalence is common in human relationships. Consider a manager who holds an employee in high esteem and trusts the employee in task fulfillment, but at the same time fears that this excelling employee might one day take over his/her job. Thus, integrating mind perception and social exchange theories, we argue that when consumers perceive a mind in their AIA (i.e., they anthropomorphize it), they apply a social exchange framework, that is, they adopt a cost/benefit view of the relationship with their AIA. Those who have a greater perception of an AIA’s anthropomorphism will see greater benefits in the relationship with the AIA, but also higher costs.

Beneficial effects of relationships with artificial intelligence assistants

Previous work has shown that anthropomorphizing robots, computers and even brands (i.e., attributing a mind to them) fosters trust towards them (Golossenko et al., 2020; Hildebrand & Bergner, 2021; Nass & Moon, 2000). Attributing a mind to a non-human agent increases people’s perception of the agent’s competence (Waytz et al., 2014). A competent AIA mind that exists to help and serve, with benevolent motives and positive intentions, can create trust in a relationship (Foehr & Germelmann, 2020). Having a supportive mind that seeks to assist humans is a clear benefit that AIAs provide to consumers. Gaining such a benefit in a relationship engenders trust in the AIA. In turn, decades of marketing research have shown that customers’ trust in a firm or service employee increases customer satisfaction (Hennig-Thurau et al., 2002; Singh & Sirdeshmukh, 2000).

H1 (replication hypothesis)

Higher (vs. lower) AIA anthropomorphism increases consumer satisfaction through elevated trust in the AIA.

In our work, we seek only to briefly replicate this well-established beneficial path through trust and focus on the potentially harmful path through identity threat.

Harmful effects of relationships with artificial intelligence assistants

In what follows, we elaborate on the effect of AIA anthropomorphism on consumers’ perceived identity threat. For this purpose, we draw strongly on mind perception theory (Gray et al., 2007). Critically, we argue that consumers will experience psychological costs which arise from perceiving a mind in the AIA. A perception of increased similarity with the AIA, while providing benefits, is not compatible with the conception of human distinctiveness (Ferrari et al., 2016). Therefore, attributing a mind (a feature that is thought to be uniquely human) to a non-human entity might trigger a feeling of threat to human identity. We define this feeling of threat to human identity as a worry that AIAs might challenge human uniqueness, raising doubts about one’s place in the world and a fear that humans can be replaced by non-humans with rivaling intelligent minds (Gray & Wegner, 2012; Mori et al., 2012; Yang et al., 2020). Feeling a threat to human identity implies that consumers perceive the AIA as conflicting with established human ways and opposing core human values.

We now turn to our explanation of why the identity threat from AI should trigger privacy concerns regarding AI’s data use in the relationship. In essence, the threat to human identity arises from perceiving an artificial mind in the AIA. This artificial mind exhibits an awareness of its surroundings and can have intentions and motives of its own; importantly, it tracks its surroundings, rendering them as data, and stores and analyzes these data using its AI (Kozinets & Gretzel, 2021). The perception of this competent artificial mind creates increased uncertainty about AIAs’ behaviors, motives and intentions. We define this uncertainty in the human–AIA relationship as not knowing exactly the motives and intentions of the AIA, and therefore not knowing how to engage with and react to the AIA. Being increasingly aware of a competing artificial mind that handles our personal data and limits our autonomy to control the fate of these data interferes with maintaining a sense of privacy.

H2

Higher (vs. lower) AIA anthropomorphism increases privacy concerns by elevating the perception of artificial intelligence’s threat to human identity.

In what follows, we focus on explaining why the AIA anthropomorphism-induced threat reduces consumer satisfaction with the AIA. A threat to our human identity puts us in a state of uncertainty and might elevate our stress levels (Higgins et al., 1994; Sharma & Sharma, 2010). This increased uncertainty in the human–AIA relationship, not knowing exactly the motives and intentions of the AIA, puts a strain on the consumer that presents a significant psychological cost. In a social exchange framework where consumers weigh benefits and costs, this added psychological cost induced by perceiving a mind in the AIA deteriorates the balance of benefits and costs. Therefore, as soon as the psychological costs in the relationship increase, consumer satisfaction with the AIA should diminish.

H3

Higher (vs. lower) AIA anthropomorphism reduces consumer satisfaction by elevating the perception of artificial intelligence’s threat to human identity.

Next, we present our reasoning as to why AIA anthropomorphism reduces consumer well-being through an increased feeling of threat to human identity. Threat to human identity induced by perceiving a mind in the AIA should lead the individual to experience psychological discomfort in the relationship with the AIA (Breakwell, 1986). More specifically, the consumer is in a state of uncertainty in the relationship with the AIA, doubting and questioning his/her identity. This unfavorable mental state created by the identity threat could engender negative feelings and elevate stress in the consumer, thus reducing the consumer’s well-being (Sharma & Sharma, 2010). Therefore, we hypothesize:

H4

Higher (vs. lower) AIA anthropomorphism reduces consumer well-being by elevating the perception of artificial intelligence’s threat to human identity.

Moderating effects of relationship characteristics

Social exchange theory suggests that relationship closeness and relationship length are key determinants of relationship development (Palmatier et al., 2006). With increasing relationship length, the bond of social exchange gets stronger and the benefits-to-costs ratio becomes more salient for the consumer (Gundlach et al., 1995). However, relationship length alone might not be sufficient to establish a close relationship. Dependence of the partners on each other constitutes another key factor in growing a close relationship (Ganesan, 1994). Essentially, AI-powered entities can learn from consumer interactions over time to improve their accuracy, thereby strengthening the relationship with the consumer (Marinova et al., 2017). Applying this to AIAs, consumers might become dependent on and close to the AIA if they consistently use it for important, personally consequential tasks, such as organizing personal appointments or self-management. While consumers who have more recently adopted an AIA, or those who use their AIA largely for menial tasks (e.g., turning the lights on and off), may still perceive the AIA as a trivial gadget for convenience, extended use of the AIA for more consequential and personal tasks might alter the perception of the AIA (Plangger & Montecchi, 2020). With a longer-lasting and closer relationship, the consumer will have more significant exposure, contact and interactions with the AIA. Consequently, in the long term, consumers who have a close relationship with the AIA will be more aware of the AIA’s mind and intelligence. This enhanced awareness could then amplify the positive effect of AIA anthropomorphism (i.e., perceiving a mind in the AIA) on a consumer’s identity threat.

H5

The positive effect of AIA anthropomorphism on the perception of artificial intelligence’s threat to human identity increases if the consumer’s relationship with the AIA is closer and the relationship length is longer.

Empowering consumers in their relationships with artificial intelligence assistants

To provide a deeper exploration of AIA outcomes for consumers, we lay out our reasoning for the relationship between the feeling of threat to human identity and consumer AI empowerment. We define AI empowerment as consumers’ perception of their own ability to handle the decision-making processes related to the use of AIAs and the use of personal data collected by AIAs (Martin et al., 2017; Van Dyke et al., 2007). First, extrapolating from our previous theorizing, we suggest that the AIA’s threat to human identity might disempower consumers in their relationship with the AIA, which subsequently reduces consumer well-being. That is, we argue that identity threat creates uncertainty in the human–AIA relationship about how to engage with the AIA. Such uncertainty, not knowing exactly the motives and intentions of the AIA, reduces consumers’ ability to deal effectively with AIAs, that is, it reduces consumer AI empowerment.

H6

Higher (vs. lower) threat to human identity decreases consumer well-being through reducing consumer AI empowerment.

Our preliminary studies and previous literature all suggest that consumers’ awareness and knowledge of the data issues with their AIAs are astonishingly limited (e.g., Malkin et al., 2019). Thus, we examine three remedy strategies that aim to empower consumers in relation to the privacy of their personal data collected by AIAs. We propose that interventions that (1) raise consumers’ awareness of data issues, (2) provide consumers with solutions about how they can deal with data privacy issues and (3) encourage them to take action to protect the privacy of their data will enhance consumer AI empowerment.

In the adoption of innovations, research has shown something akin to a “honeymoon effect”, where the novelty of the experience might initially be the main driving force of the attitude towards the AIA, shrouding relationship costs and benefits (Wells et al., 2010). After extended exposure to the AIA, however, consumers start to form a comprehensive, nuanced understanding of the relationship with the AIA and begin to think in terms of costs and benefits. The real relationship, so to speak, begins after the honeymoon and develops with time, as the AIA user experiences and responds to the pros and cons of the relationship. Therefore, we suggest that all three intervention strategies that we test in Study 2 (H7, H8 and H9) will be especially impactful for AIA users with a longer relationship length.

For the first strategy, we argue that an increased awareness of data issues in the relationship with the AIA will help reduce the uncertainty about how to effectively deal with the AIA. We suggest that knowing more about the data issues in the relationship with the AIA can attenuate the harmful effect of identity threat (induced by perceiving a mind in the AIA) on consumer AI empowerment, particularly for consumers with longer relationships with their AIA.

H7

Higher (vs. lower) consumer awareness of data issues in relationships with AIAs attenuates the harmful effect of AIA’s threat to human identity on consumer AI empowerment, for consumers with longer (vs. shorter) relationship length.

On top of gaining awareness about the data issues, having knowledge of solutions on how to improve data security in relationships with AIAs can be beneficial to the consumer. Simply having an awareness of potential privacy threats might not be sufficient, and could even in some cases disempower the consumer (Olivero & Lunt, 2004). By raising awareness about possible solutions, alongside the potential threats, firms might shift the power back to the consumer. Moreover, our preliminary study shows that consumers’ knowledge of their privacy preferences and how to adjust them is surprisingly low. Raising consumers’ awareness about potential solutions for data issues will further mitigate the uncertainty in the relationship with the AIA, reduce the cognitive cost of protective actions and provide a pathway to solutions. Therefore, we suggest that knowing more about how to deal with AIAs can attenuate the harmful effect of AIA’s threat to human identity on consumer AI empowerment.

H8

Higher (vs. lower) consumer awareness about solutions on how to improve data security in relationships with AIAs attenuates the harmful effect of AIA’s threat to human identity on consumer AI empowerment, for consumers with longer (vs. shorter) relationship length.

Besides learning about data issues or solutions, consumers may need to act and implement data security measures to strengthen their empowerment vis-à-vis the AIA. Going through privacy settings and making a specific decision carries a heavy cognitive load for consumers. It might be necessary to give consumers some impetus to ensure that they do indeed take action. Explicitly requesting consumers to change their privacy preferences to a specific setting pushes them further towards taking action. With this extra push, consumers will directly experience the results of taking action. This experience involves competence, impact and self-efficacy, core elements of empowerment (Cattaneo & Chapman, 2010). Therefore, we suggest that prompting consumers to take action about their AIA privacy preferences can attenuate the harmful effect of identity threat on consumer AI empowerment.

H9

Taking action to improve data security in handling AIAs attenuates the harmful effect of AIA’s threat to human identity on AI empowerment, for consumers with longer (vs. shorter) relationship length.

Study 1: Ambivalent effects of AIA anthropomorphism on user–AIA relationships

Procedure

In Study 1, we test H1, H2, H3, H4 and H5 and thus seek to elucidate the effect of AIA anthropomorphism on relationships between consumers and the AIA. For this purpose, we conducted an online consumer survey. The participants were 238 current AIA users who own a smart speaker in their homes (67% Amazon’s Alexa, 26% Google’s Google Assistant, 7% Apple’s Siri). All participants were residents of the United Kingdom who were recruited through Prolific (Mage = 37, 40% male, 60% female). Four hundred participants were invited to the survey and 389 completed it. The same participants were invited to a second survey after six months, in which we measured additional constructs and key dependent variables. Having a time lag allowed us to reduce common method variance (CMV), that is, the bias that might arise from consumer response tendencies if variables originate from the same data source (Alavi et al., 2018). In total, 246 participants completed both surveys; 8 were excluded due to missing data, leaving a final sample of 238 participants.

Measures

Main variables

We used established scales to operationalize the concepts in our model. The complete list of measurement items of main variables, scale reliabilities and the order of presentation are provided in Table 3 of the paper. We measured relationship length, overall satisfaction with the AIA (Garbarino & Johnson, 1999), anthropomorphism of the AIA (Epley et al., 2008), trust felt towards the AIA (Leach et al., 2007), privacy concerns of the consumer related to their personal data (Smith et al., 1996), threat to human identity (Ferrari et al., 2016; Mende et al., 2019), consumer well-being (Bech et al., 2003) and relationship closeness (Castelo et al., 2019) with items from established scales adapted to our context. We measured the independent variables at time 1 and dependent variables at time 2. We introduced this time lag to reduce CMV issues and the likelihood of reversed causation (Alavi et al., 2018).

Table 3 List of measurement items and scale evaluations for main variables in the order of presentation

Control variables

We included a broad set of theoretically grounded control variables in the model estimation to reduce the likelihood of endogeneity issues due to omitted variable bias (Sande & Ghosh, 2018) and to assess the stability of the hypothesized effects. First, we included perceived usefulness of the AIA and perceived ease of AIA use, both of which have been shown to play an important role in technology acceptance, use and trust (Davis, 1989). We accounted for innovativeness of the consumer, because innovativeness is typically associated with greater intention to adopt a technology (Yi et al., 2006). We also included two demographic variables as controls, as both have been shown to play a role in the tendency to anthropomorphize and engage with technology: age (Appel et al., 2020) and gender (Venkatesh et al., 2000). Finally, we added frequency of AIA use as a control because previous research links this variable to technology use and purchase behaviors (Venkatesh & Agarwal, 2006). A correlations table is provided in Table 4. The complete list of measurement items of control variables, scale reliabilities and the order of presentation can be found in Table V in Web Appendix C. To examine the convergent and discriminant validity of all multi-item scales, we followed the standard procedure of estimating confirmatory factor analyses (Bagozzi et al., 1991). All scales conform to the prescribed values for item reliabilities, composite reliability (CR) and average variance extracted (AVE), with one exception that was slightly below the AVE threshold (AVE = .47 for Identity Threat). Moreover, the global fit statistics indicated a good fit of the measurement model (RMSEA = .049; TLI = .937; CFI = .945; SRMR = .054).
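To illustrate how these scale evaluation criteria are typically computed from standardized factor loadings, the sketch below shows the standard CR and AVE formulas in Python. The loadings shown are hypothetical placeholders; the paper's measurement models were estimated in Stata 16, so this is only an illustrative sketch, not the authors' code.

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    errors = 1 - loadings**2                       # error variance of each standardized item
    return float(loadings.sum()**2 / (loadings.sum()**2 + errors.sum()))

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return float(np.mean(loadings**2))

# Hypothetical standardized loadings for a four-item scale (illustration only).
loadings = np.array([0.72, 0.68, 0.65, 0.70])
print(f"CR  = {composite_reliability(loadings):.2f}")        # conventional threshold ~.70
print(f"AVE = {average_variance_extracted(loadings):.2f}")   # conventional threshold ~.50
```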

Table 4 Correlations table, Study 1 and Study 2

Model specification and results

To test whether anthropomorphism of the AIA increases consumer satisfaction through elevated trust in the AIA (H1), increases privacy concerns (H2), and decreases consumer satisfaction (H3) and well-being (H4) through an increased threat to human identity by artificial intelligence, we specified a structural equation model as follows: we linked AIA anthropomorphism to trust and identity threat, and then linked these two variables and all of our control variables to consumer privacy concerns, consumer satisfaction with the AIA and consumer well-being. As control paths, we also analyzed the direct links between trust and well-being, trust and privacy concerns, and threat and satisfaction, as well as the direct links from privacy concerns to consumer satisfaction and well-being. In addition, we analyzed the indirect and total effects of identity threat on privacy concerns, satisfaction and well-being of the consumer. We estimated this model using Stata 16 (StataCorp, 2019), including 238 observations (Table 5).
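The full latent-variable SEM was estimated in Stata 16; as an illustration of the path structure only, the following Python sketch approximates it as an observed-variable (scale-score) path analysis and computes the focal indirect effect of anthropomorphism on privacy concerns through identity threat with a percentile bootstrap. All column names (anthro, threat, trust, privacy, and the controls) are hypothetical placeholders, and this simplified specification is not the authors' estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical control-variable columns, mirroring the controls described above.
CONTROLS = "+ usefulness + ease + innovativeness + age + gender + frequency"

def indirect_effect(df: pd.DataFrame) -> float:
    """Product-of-coefficients indirect effect: anthro -> threat -> privacy (cf. H2)."""
    a = smf.ols("threat ~ anthro", data=df).fit().params["anthro"]
    b = smf.ols("privacy ~ threat + trust + anthro" + CONTROLS, data=df).fit().params["threat"]
    return a * b

def bootstrap_ci(df: pd.DataFrame, n_boot: int = 5000, seed: int = 1) -> np.ndarray:
    """Percentile bootstrap 95% confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(df), len(df))   # resample rows with replacement
        draws.append(indirect_effect(df.iloc[idx]))
    return np.percentile(draws, [2.5, 97.5])
```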

Table 5 Results of Study 1 and Study 2

Robustness checks

We conducted several robustness checks, for which we report detailed results in Web Appendix A. First, in our study we recruited only current AIA users. To alleviate concerns about a potential sample selection bias, we implemented a Heckman selection correction (Heckman, 1976). Our results remained stable after including the inverse Mills ratios in the model estimation. To assess our model’s accuracy and reliability, we bootstrapped the model, and the results remained stable (see Table 5). To rule out common method bias, we first implemented Harman’s single-factor test. We found that the total variance extracted by a single factor is 18%, which is well below the recommended threshold of 50% and suggests that common method variance (CMV) is not an issue in our study (Chang et al., 2010). Moreover, we also used the marker variable technique of Lindell and Whitney (2001) to address this issue further. When we ran the model using the adjusted correlation matrix, the statistical significances in our model remained stable. Therefore, we infer that common method bias is unlikely to influence Study 1. To account for potential endogeneity in our model estimation, we implemented Durbin–Wu–Hausman tests (Hausman, 1978). The results reject the hypothesis that the variables in our model are endogenous. To address this issue further, we also employed a two-step control function approach (e.g., Petrin & Train, 2010), and our results remained stable.
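For readers unfamiliar with Harman's single-factor test, it can be approximated by fitting a single unrotated factor (here, the first principal component) to all measurement items and checking the share of variance it explains. The minimal Python sketch below, assuming the items are columns of a DataFrame, is only an illustration of the logic; the paper's checks were run in Stata.

```python
import pandas as pd
from sklearn.decomposition import PCA

def harman_single_factor(items: pd.DataFrame) -> float:
    """Share of total variance captured by the first unrotated component.

    Values well below .50 suggest that a single common-method factor does not
    dominate the item covariance (Chang et al., 2010).
    """
    standardized = (items - items.mean()) / items.std(ddof=0)
    pca = PCA(n_components=1).fit(standardized.dropna())
    return float(pca.explained_variance_ratio_[0])

# Example call (item_columns is a hypothetical list of all measurement items):
# share = harman_single_factor(df[item_columns])
# The corresponding figure reported for Study 1 in the paper is 18%.
```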

Main indirect effects

For testing the formal hypotheses, we relied on the fully specified model including the full set of control variables. The results show that greater perceived anthropomorphism of the AIA increased the satisfaction of the consumer through increased trust in the AIA (b = .045, p < .01). This result supports H1 and successfully replicates the effects established by past literature. Furthermore, greater perceived anthropomorphism increased consumers’ privacy concerns related to the use of their personal data by increasing the feeling of threat to human identity by artificial intelligence (b = .097, p < .001). This result fully supports H2, denoting harmful effects for consumers in relationships with AIAs. We did not find indirect effects of AIA anthropomorphism on consumer satisfaction and well-being through identity threat; therefore, H3 and H4 were not supported. However, identity threat had significant negative indirect effects on consumer satisfaction (b = −.071, p < .01) and consumer well-being (b = −.045, p < .05) through privacy concerns. Privacy concerns did have significant negative direct effects on consumer satisfaction (b = −.206, p < .001) and consumer well-being (b = −.152, p < .05). The full results are provided in Table 5, and further details can be found in Web Appendix C (Table VI).

Moderation effects

Next, we discuss the tests of the proposed moderators. All variables used in the interaction terms were mean centered. Neither relationship length nor relationship closeness alone significantly moderated the effect of AIA anthropomorphism on AIA’s threat to human identity. However, the three-way interaction between AIA anthropomorphism, relationship length and relationship closeness had a positive moderating effect on the anthropomorphism–identity threat link (b = .130, p < .05). AIA users who had been using their AIA for a longer time and were closer to their AIA in their relationship exhibited even greater feelings of threat to their human identity by artificial intelligence (see interaction diagram in Fig. 2). This result highlights the importance of user–AIA relationship characteristics and provides support for H5.
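A minimal sketch of how such a mean-centered three-way interaction can be specified, shown here as an ordinary regression with hypothetical variable names rather than the full SEM estimated in Stata, follows.

```python
import pandas as pd
import statsmodels.formula.api as smf

def test_three_way(df: pd.DataFrame):
    """Moderation sketch: AIA anthropomorphism x relationship length x relationship closeness."""
    # Mean-center all variables entering the interaction terms.
    for col in ["anthro", "rel_length", "rel_close"]:
        df[col + "_c"] = df[col] - df[col].mean()
    # '*' expands into all main effects plus the two- and three-way interaction terms.
    model = smf.ols("threat ~ anthro_c * rel_length_c * rel_close_c", data=df).fit()
    print(model.summary())   # inspect the coefficient of the three-way interaction term
    return model
```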

Fig. 2 Study 1: Interactive effect of AIA anthropomorphism, relationship closeness and relationship length on threat to human identity (top). Study 2: Interactive effects of threat to human identity, relationship length and three field experiment interventions on consumer AI empowerment (bottom).

Discussion of Study 1

The results of Study 1 showed that perceived anthropomorphism of the AIA increased consumer satisfaction through increased trust in the AIA. This replication echoes the well-established literature showing the beneficial effects of anthropomorphism (e.g., Li & Sung, 2021). Critically, in line with our theorizing, our results likewise demonstrate that there are psychological costs for the consumer in the relationship with the AIA, beyond the benefits. A greater perception of AIA’s anthropomorphism significantly increased the privacy concerns of the consumer through increased identity threat. It is important to note that the harmful indirect effect of AIA’s anthropomorphism on privacy concerns through identity threat was stronger than the beneficial indirect effect through trust. Moreover, this harmful effect of AIA anthropomorphism was even more pronounced for consumers who had a longer and a closer relationship with the AIA. This result supports H5 and provides evidence about the potential costs and dangers of AIAs if a strong relationship forms between the user and the AIA.

Our hypotheses H3 and H4 were not supported, as we found no indirect effects of AIA anthropomorphism on consumer satisfaction and well-being through identity threat. However, the results showed significant indirect effects of identity threat on both consumer satisfaction and well-being. Identity threat by the AIA reduced satisfaction and well-being through exacerbated privacy concerns, likewise pointing to potential harmful consequences for consumers in relationships with AIAs, as well as, indirectly, for the company providing the service. That is, consumers’ privacy concerns significantly reduced both consumer satisfaction and well-being (cf. Okazaki et al., 2020). This result illustrates the important downstream effects emerging from the anthropomorphism of the AIA that are relevant for both consumers and marketing managers. Consumers in the digital age seem, at least in part, to be quite sensitive about the privacy of their personal data, even if they are not fully aware of how companies specifically deal with their data. With this series of results, we demonstrate that perceiving a mind in AIAs might exacerbate the privacy concerns of the consumer and lead to unfavorable outcomes for both consumers and companies. Our results also point to the relevance of using a relationship lens to analyze consumer interaction with AIAs (cf. Steinhoff et al., 2019); that is, Study 1 provides nascent evidence that the harmful effects of AIAs become stronger as the relationship develops.

The harmful effects for consumers uncovered in Study 1 motivated us to take a closer look at the identity threat emerging from perceiving a mind in the AIA. Building on this finding, in Study 2 we explore strategies for consumers and companies to remedy this harmful effect.

Study 2: A field experiment to test interventions that empower consumers

Procedure

In Study 2 we focus on the critical harmful path in the conceptual model and on how to remedy the harmful effects for consumers. First, we seek to investigate whether identity threat reduces consumer AI empowerment (H6). Second, we examine the three intervention strategies proposed in H7, H8 and H9 to enhance consumers’ empowerment in their relationship with AI. For this purpose, we designed a randomized field experiment with measures before and after the interventions. We invited 720 current Amazon smart speaker users to our study. All participants were residents of the United Kingdom recruited through Prolific, and none of them had participated in Study 1. Our field experimental design comprised four conditions: a control and three intervention conditions. While the control group simply completed a survey at times 1 and 2, the three intervention groups received an intervention directly after completing the first survey (see Web Appendix B for details of the interventions). In intervention group 1, we aimed to raise awareness about potentially problematic aspects of how Amazon handles consumer data. This group received information about two Alexa features related to data use and storage (“Saving Voice Recordings” and “Help Improve Alexa”), one paragraph of text each, describing the feature and its potentially problematic aspects. Intervention group 2 received the same information and, in addition, a step-by-step illustration of how they could change their preferences related to the two Alexa features presented. Intervention group 3 received the same information as group 2 and, in addition, was explicitly asked to change their settings and keep using their AIA with the new settings for a week. The first request was to change the duration of storing voice recordings from the default setting “Save recordings until I delete them” to either “Save recordings for 3 months” or “Don’t save recordings”. This choice was given to avoid a reactance response in the participants. The second request was to turn off the “Help Improve Alexa” feature, which uses voice recordings to improve Alexa and is turned on by default.

Consumers were allocated randomly to one of the four groups. After one week, the same group of 720 participants received an invitation for a second measurement. In our model estimation, we examined how the interventions at the first time point affected the measurements from the second time point to analyze the effects of the interventions. At both time points, we asked all participants to indicate their current device settings. The final analysis sample consisted of 601 participants who completed both waves successfully (Mage = 35.2, 57% female).

Measures

Main variables

As in Study 1, we used established scales to operationalize our concepts. Using the same measures as in Study 1, we asked consumers when they purchased their Alexa-enabled device, their overall satisfaction with the AIA, their perceived anthropomorphism of the AIA, the trust they feel towards the AIA, their privacy concerns related to their personal data, threat to human identity and their well-being. In Study 2, we additionally measured consumer AI empowerment with three items adapted from Spreitzer (1995). The list of measurement items and the order of presentation are provided in Table 3.

Control variables

We used the same six control variables as in Study 1. Moreover, we included two new controls in Study 2 to account for additional influences. First, we measured the AIA user’s attitude towards technology, a key variable in technology users’ adoption, use and satisfaction (Lin & Hsieh, 2007). Second, we measured trust towards Amazon, as trust towards the company has been shown to play a key role in ensuring consumer satisfaction (Morgan & Hunt, 1994). A correlation table including descriptive statistics is provided in Table 4. The complete list of measurement items of control variables, scale reliabilities and the order of presentation can be found in Table V in Web Appendix C. Measurement scales for both main and control variables conform to the prescribed values for item reliabilities, CR and AVE. Moreover, the global fit statistics indicated a good fit of the measurement model (RMSEA = .047; TLI = .942; CFI = .949; SRMR = .044).

Manipulation check

As a manipulation check, we asked participants a total of six questions at the time of the second measurement to assess whether our interventions had the intended effect. Participants who received an intervention had greater awareness of privacy issues than the control group (F(1,599) = 14.94, p < .001). Participants who received information on how to protect their privacy (groups 2 and 3) had greater perceived knowledge of how to protect their privacy than those who did not receive such information (F(1,599) = 25.96, p < .001). Finally, participants who were requested to take action (group 3) reported taking action significantly more than those who did not receive such a request (F(1,599) = 50.19, p < .001).
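Each of these manipulation checks corresponds to a one-way ANOVA comparing the relevant groups on a check item. The small Python sketch below illustrates the logic; the column and condition names are hypothetical placeholders, not the paper's actual data structure.

```python
import pandas as pd
from scipy.stats import f_oneway

def manipulation_check(df: pd.DataFrame, check_item: str, treated_conditions: set):
    """One-way ANOVA comparing treated vs. non-treated participants on a check item."""
    treated = df.loc[df["condition"].isin(treated_conditions), check_item].dropna()
    others = df.loc[~df["condition"].isin(treated_conditions), check_item].dropna()
    f_stat, p_value = f_oneway(treated, others)
    return f_stat, p_value

# e.g., awareness of privacy issues: any intervention vs. control (hypothetical labels)
# manipulation_check(df, "awareness", {"awareness", "solutions", "action"})
```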

Model specification and results

To test whether identity threat reduces consumer well-being through reducing consumer AI empowerment (H6), we specified a structural equation model (SEM), linking identity threat to consumer AI empowerment and then linking empowerment to consumer well-being. To test the effectiveness of our interventions in reducing the effect of identity threat on empowerment (H7, H8, H9), we added relationship length and each of our interventions as moderators of the link between identity threat and empowerment. We dummy coded the intervention groups to compare each intervention with the control group (Bagozzi & Yi, 1989). As control paths, we included in the SEM all the paths that were analyzed in Study 1. We estimated this model using Stata 16 (StataCorp, 2019), including 601 observations (Table 5).
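A simplified sketch of this specification, again as an observed-variable regression with dummy-coded interventions and hypothetical column and condition names rather than the full SEM estimated in Stata, might look as follows.

```python
import pandas as pd
import statsmodels.formula.api as smf

def study2_moderation(df: pd.DataFrame):
    """Sketch: dummy-coded interventions moderating the identity threat -> empowerment link."""
    # Dummy code the three interventions against the control group
    # (hypothetical condition labels: control, awareness, solutions, action).
    dummies = pd.get_dummies(df["condition"], prefix="int").astype(int)
    df = pd.concat([df, dummies.drop(columns=["int_control"])], axis=1)
    # Mean-center the non-dummy variables entering the interaction terms.
    for col in ["threat", "rel_length"]:
        df[col + "_c"] = df[col] - df[col].mean()
    # Three-way interactions: identity threat x relationship length x each intervention dummy.
    formula = "empowerment ~ threat_c * rel_length_c * (int_awareness + int_solutions + int_action)"
    moderation = smf.ols(formula, data=df).fit()
    # Downstream path (cf. H6): empowerment -> well-being, controlling for identity threat.
    wellbeing = smf.ols("wellbeing ~ empowerment + threat_c", data=df).fit()
    return moderation, wellbeing
```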

Robustness checks

To assess our model’s accuracy and reliability, we bootstrapped the model, and the results remained stable (see Table 5). To rule out common method bias, we implemented Harman’s single-factor test. We found that the total variance extracted by a single factor is 23%, which is below the recommended threshold of 50% and suggests that common method variance (CMV) is not an issue in our study (Chang et al., 2010). To account for potential endogeneity in our model estimation, we implemented Durbin–Wu–Hausman tests (Hausman, 1978). The results reject the hypothesis that the variables in our model are endogenous. Further information on all robustness checks is provided in Web Appendix A.

Main indirect effects

First, the results of Study 2 replicated H1 and H2, showing that AIA anthropomorphism increased consumer satisfaction through increased trust in the AIA (b = .036, p < .001) and increased consumers’ privacy concerns through an increased threat to human identity (b = .214, p < .001). One of our key findings in Study 2 is that identity threat significantly reduced consumer well-being through reduced consumer AI empowerment (b = −.025, p < .05), providing support for H6. An analysis of non-hypothesized downstream effects of consumer AI empowerment showed that empowerment decreased consumers’ privacy concerns (b = −.170, p < .001) but did not affect consumer satisfaction.

Moderation effects

Next, we discuss the tests of the proposed interactive effects of the intervention strategies. All non-dummy variables in the interaction terms were mean centered. As expected, the three-way interaction between identity threat, relationship length and the interventions resulted in the following significant effects. Raising consumers’ awareness about data issues significantly reduced the negative effect of identity threat on consumer AI empowerment for consumers with longer relationship length (b = .118, p < .05). Raising awareness about potential solutions to data issues also reduced the negative effect of identity threat on consumer AI empowerment for consumers with longer relationship length, though only at the 10% significance level (b = .104, p < .10). Finally, requesting consumers to take action about their privacy had the strongest moderating effect, significantly reducing the negative effect of identity threat on consumer AI empowerment for consumers with longer relationship length (b = .176, p < .01). These results provide support for our hypotheses H7, H8 and H9 with different degrees of significance. Our interventions, compared to a control condition, succeeded in attenuating the harmful effect of threat to human identity by artificial intelligence on consumer AI empowerment for consumers with longer relationship length. The interaction effects are depicted in Fig. 2, and we interpret these findings further in the discussion section.

Interestingly, the two-way interaction between identity threat and relationship length had a significant negative effect on consumer AI empowerment (b = −.234, p < .05), meaning that identity threat due to AI decreased consumer AI empowerment particularly for consumers with longer relationships. These results provide additional support to the idea that harmful effects of AIA may become stronger as the consumer–AIA relationship develops. The full results are provided in Table 5, and further details can be found in Web Appendix C (Table VII).

Discussion of Study 2

The first key finding of Study 2 is that identity threat significantly reduced consumer well-being by diminishing consumer AI empowerment. This supports H6 and confirms Study 1 in that, despite their many advantages, AI technologies may also be disempowering and harmful to consumers. Moreover, the negative effect of identity threat on consumer AI empowerment was significantly stronger for consumers with longer relationship length. This finding underlines the suitability of adopting a relationship perspective on consumer–AIA exchanges and supports the idea that extended exposure to the AIA is needed for the harmful effects to fully emerge.

We also found support for H7, H8 and H9, demonstrating the effectiveness of the intervention strategies in attenuating the harmful effect of identity threat on consumer AI empowerment for consumers with longer relationship length. In particular, prompting consumers to take action about their privacy in their relationship with the AIA had the most marked effect. When we asked for the consumers’ current device settings, we found that 55% of the consumers in the third intervention did indeed take action in response to our request and changed their preference for how long their voice recordings should be kept. This intervention not only alleviated the harmful effect of identity threat on consumer empowerment, as intended, but also resulted in real behavior change for many consumers. Moreover, 25% of the consumers in the second intervention group took action despite not being requested to make a change. This result emphasizes the importance of informed consumers, who only need to know how to deal with their situation: once they have a clear pathway to a solution, these consumers may take action without needing a further push. Finally, raising awareness about privacy issues among consumers with longer relationship length also psychologically empowers them, even if they do not take immediate action. The tangible changes in behavior and the resulting empowerment show that managers can empower consumers by adequately informing them and encouraging them to take action in relation to their personal information.

General discussion

Theoretical contributions

This research investigates potential psychological costs and dangers of AIA anthropomorphism for consumers, and strategies to alleviate the revealed harmful effects. Two empirical studies provide convergent evidence of the effects of AIA anthropomorphism-induced identity threat on consumer empowerment and, subsequently, on consumer well-being. First, our results demonstrate that the harmful downstream effects of human identity threat may particularly be induced by extended exposure to the AIA in the course of the relationship. With AI-powered technologies taking larger roles in our daily lives, our research highlights the need to account for the permanent, ongoing nature of the consumer–AI relationship (cf. Huang & Rust, 2021b; Kozinets & Gretzel, 2021; Novak & Hoffman, 2019). AIA technology is unique in the sense that the interaction is ongoing from the consumer’s waking to sleeping moments, in the consumer’s home. Adopting a relationship perspective for studying AIA–user interactions is novel, because pertinent past research has instead assumed a stimulus–response approach, in which the AIA functions as a stimulus to which consumers react immediately (e.g., Benlian et al., 2020; Li & Sung, 2021). Therefore, we suggest that taking a relationship perspective is valuable in investigating AIAs, in addition to a stimulus–response model. Our first theoretical contribution to the literature on AIAs (e.g., Benlian et al., 2020; Foehr & Germelmann, 2020; Li & Sung, 2021) is that we can conceptualize user–AIA interactions as an ongoing exchange of benefits and psychological costs in a relationship, and the characteristics of this relationship play a crucial role in shaping our experience with AI. One implication is that future research should more intensively leverage theories of relationship development in marketing to examine user–AIA interactions (e.g., Dwyer et al., 1987). For instance, analogous to relationship marketing (Jap & Ganesan, 2000), mapping out a user–AIA relationship lifecycle may be pertinent. An insightful question in this respect might be whether such relationship lifecycles are similar to conventional relationship lifecycles found in past research or exhibit AIA-specific idiosyncrasies.

Second, previous research has shown that perceiving a mind in a non-human entity engenders discrete effects, altering how the entity is perceived (e.g., in terms of its competence, agency, eeriness) (Mende et al., 2019; Waytz et al., 2014). We suggest that we can explain these scattered findings in the mind perception literature in a parsimonious way, i.e., with a costs–benefits approach. Specifically, we show that perceiving a mind in AIAs enables consumers to form a relationship with their AIA. People make inferences about the AIA’s mind and these inferences shape the perceived benefits and costs of the relationship. That is, a perception of a mind in AIAs can be considered as a part of the benefits and costs of engaging with AI, just like other features (e.g., personalization, surveillance), collectively constituting an experiential relationship with AI. Therefore, it might be useful to approach any human–AI interaction as potentially an exchange relationship whenever one perceives a mind in the AI, similar to one’s own. Previous theories that attempted to explain the effects of mind perception, such as threat to distinctiveness (e.g., Ferrari et al., 2016) or psychological distance (e.g., Li & Sung, 2021), could benefit from including a social exchange perspective in their attempts to understand mind perception in AI. Such an approach may provide a comprehensive understanding of mind perception in AI, which might have simultaneous contrasting effects on people, such as psychological closeness and threat to human identity (Ferrari et al., 2016; Li & Sung, 2021).

Third, in recent marketing research, an intense discussion has emerged on the consequences of AI for consumers and their experience (Cukier, 2021; Kozinets & Gretzel, 2021; Puntoni et al., 2021). In contrast to the extant literature showing benefits of AIA anthropomorphism (see Table 1), our research underlines that anthropomorphism in AIAs can induce psychological costs for consumers, manifested as a threat to their human identity. Anthropomorphism is increasingly built into AI-powered devices, making it necessary to enrich the current perspective in the literature (e.g., Blut et al., 2021). Our results support this suggestion by identifying a harmful dimension of AIA anthropomorphism. Thus, we contribute to past literature a more balanced account of the benefits and psychological costs of AI for consumers, as called for in recent works (Cukier, 2021; Puntoni et al., 2021). Given the ambivalent effects AIAs generate for consumers, we suggest that it is important to leverage theories related to psychological costs, such as techno-stress (Ayyagari et al., 2011) or psychological strain (Edwards, 1996), to understand this aspect of AIAs in more detail. Established theories of anthropomorphism (e.g., Epley et al., 2007) could be meaningfully extended by including in their frameworks the psychological costs, as well as the benefits, generated in interactions between humans and anthropomorphized AI entities.

Fourth, and importantly, our findings show that the fear of being replaced by AI is not just a matter of losing one’s job; indeed, most people buy AIAs precisely to be voluntarily replaced in menial tasks. Being replaced implies something more profound: it is about an AI with a rivaling mind being in a “psychological competition” with humans, which calls into question our human nature and role in a digital world. While previous research has conceptualized human identity threat as a fear of being replaced, fueled by distant dystopian tales of robot dominance over humans (e.g., Ferrari et al., 2016), we show that there is also an immediate psychological threat arising from perceiving a competing mind in the AI. To the best of our knowledge, we are the first to propose this feeling of threat to human identity as a driver of consumers’ privacy concerns. Past research has identified intrusive capabilities of AI-powered devices (e.g., data exploitation, extensive surveillance) as drivers of privacy concerns (Maedche et al., 2019; Müller, 2021), but not their nature as “intelligent machines with a rivaling mind”. This finding points to a deeper human fear in relation to privacy, beyond more proximal sources of concern such as data security or extensive surveillance. Therefore, established theories of technology acceptance, such as the technology acceptance model (Davis et al., 1989), should integrate into their frameworks the psychological costs elicited by mind perception, especially now that most technology is powered by AI.

Managerial implications

Our results show that managers face a difficult dilemma in the marketing of AIAs: endowing AIAs with anthropomorphic features entails ambivalent effects, such that consumers are more satisfied with the AIA but, at the same time, experience lower general well-being. Our findings thus provide managers with a starting point to optimize outcomes from AIAs, profiting from the beneficial effects while avoiding harmful consequences. Our first recommendation to managers is therefore to endow AIAs with anthropomorphic features only if consumers are well informed about the privacy of their data and confident in taking action to protect their privacy in the relationship with the AIA. Certainly, there are individual differences in consumers’ tendency to anthropomorphize, and some people do not anthropomorphize their AIAs at all. For those who do, however, the perception of an intelligent virtual mind may trigger a relationship, and firms might then cultivate human–AI relationships that are richer than a few archetypal relationship styles (e.g., master–servant) (Novak & Hoffman, 2019) and approach the richness of human experience (e.g., in metaverses). There could be great promise in employing anthropomorphism in AI prudently, but indiscriminate use may undermine the potential of cutting-edge AI technologies for both consumers and firms (cf. de Ruyter et al., 2018; Grewal et al., 2020).

Our second recommendation to managers is that, to minimize the harmful effects of AIA anthropomorphism, they should try to empower consumers regarding the protection of their personal data. We present three practical ways to empower users to handle their data privacy: (1) informing consumers about the firm’s data practices, (2) informing consumers about ways to protect their data privacy, and (3) encouraging consumers to take action to protect the privacy of their data. These ways of empowering users go beyond what is traditionally done online. Firms currently must ask customers for consent to use their data, and some firms also ask about privacy preferences; given customers’ limited knowledge of digital data and privacy, however, this typically remains no more than a formality. An innovative alternative could be to use AIAs’ conversational ability to convey such information to users seamlessly, by integrating privacy dialogs into user interactions. Firms might see such practices as unwanted extra costs; however, they could yield substantial benefits for the firm for two reasons.

The first reason why firms might want to empower consumers is that AIAs as a purchase channel have certainly not been exploited to their full potential. While 32% of users have searched for products and product reviews through their AIAs, only 9% have made purchases through them. Despite being aware of the convenience of making a purchase with just a voice command, many consumers avoid doing so partly because of uncertainty about the integrity of their personal data (Stucke & Ezrachi, 2018). Alleviating data privacy concerns and improving consumer well-being could strengthen consumer relationships and increase purchases through this unique channel.

Second, consumers often believe that a firm’s data practices indicate how they will be treated as customers (Cisco, 2019). We have seen disastrous examples of careless management of consumer data privacy, such as that of Facebook, whose reputation has plummeted over the past few years (Axios Harris Poll 100, 2021). This shows that firms’ current disclosure practices may be insufficient and emphasizes the need to push beyond disclosing rudimentary information to consumers (Martin & Palmatier, 2020). Based on the results of our field experiment, we recommend creating and effectively communicating transparent privacy policies, improving the salience and accessibility of privacy preferences, and encouraging consumers to take action to protect their privacy. Managing consumer data in a conscientious and transparent manner will not only shield the company from continuous government scrutiny and potential legal expenses, but also constitutes a matter of corporate digital responsibility (Lobschat et al., 2021; Martin & Murphy, 2017). Consumer-centric data relationship management (e.g., Krafft et al., 2017; McKinsey, 2021) may be both ideal and effective for building mutually beneficial and lucrative relationships with well-informed consumers (Bleier et al., 2020; Plangger & Montecchi, 2020).

Limitations

Our study is not without limitations. We included AIA users of all brands in Study 1, but targeted only Amazon Alexa users in Study 2. This restriction was necessary for our field experimental design to preserve sample homogeneity: different platforms offer different sets of security settings, presented at different levels of accessibility. We would have liked to include all platforms in Study 2, but the results across intervention groups would then not have been entirely comparable. While we added a control variable for company-specific perceptions to our model in Study 2, future research could clarify potentially differing consumer attitudes toward different brands by investigating consumers of all AIA providers.

While our research carries implications for most AI-powered devices in which consumers might perceive a mind, our empirical work focuses specifically on AIAs embedded in smart speakers used at home (e.g., an Amazon Echo device). The extent to which our results generalize to other AI-powered entities can best be revealed through further research. Also, although we observed a clear behavior change among AIA users, documented by changed privacy preferences, we do not have data to confirm a prolonged effect. Future research could investigate the extent to which such behavior changes persist over time.

Our study also has limitations in relation to the measurement of satisfaction and identity threat. Considering practical advantages such as higher response rates, and given the high number of variables in our study, we opted for an established single-item measure of satisfaction (Garbarino & Johnson, 1999). While previous research has found single- and multi-item measures to be comparable, future research could use both to enhance the robustness of the findings (cf. Gardner et al., 1998). Finally, future research could use more detailed measures of identity threat, considering potential sub-dimensions (e.g., realistic vs. symbolic threat), to explore this phenomenon with more granularity.

Directions for future research

Based on our findings, we raise potential questions for future research on relationship marketing, consumer empowerment and privacy management, as well as mind perception in AI (see Table 6). First, we recommend that future research likewise adopt a relationship perspective on exchanges between consumers and AI and explore this relational exchange with longitudinal models. Indeed, digital platforms such as AIAs might have tremendous untapped potential as a medium for establishing and maintaining customer relationships. Investigating how such a relationship develops over time and exploring relationship trajectories would provide a whole new way of looking at AIAs, as something more than convenience gadgets that collect large amounts of personal data. Exploring the relationships between consumers and AI-powered platforms may unveil versatile interactions that go beyond purchases and foster consumer loyalty (e.g., Wichmann et al., 2022). The relationships that consumers form with intermediary platforms such as Alexa may gradually replace traditional consumer–firm relationships, with the platform acting as the firm’s new access point to consumers. In particular, the user log files stored by AIAs, which longitudinally track purchases, AIA tasks, changes of data/privacy settings, and exchanges between the AIA and the consumer, could provide an intriguing opportunity for groundbreaking longitudinal marketing research in this direction.
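To make this suggestion concrete, the following minimal sketch illustrates one way such log files could be prepared for longitudinal analysis: aggregating raw event records into a per-user monthly panel that captures interaction frequency and relationship length. The field names (user_id, timestamp, event_type) and event categories are our own assumptions for illustration, not an actual AIA log schema.

```python
# Illustrative sketch only: hypothetical AIA log records aggregated into a
# per-user, per-month panel suitable for longitudinal relationship research.
import pandas as pd

# Hypothetical raw log: one row per logged AIA event (fields are assumptions)
logs = pd.DataFrame({
    "user_id":   ["u1", "u1", "u1", "u2", "u2"],
    "timestamp": pd.to_datetime([
        "2021-01-05", "2021-01-20", "2021-02-03", "2021-01-10", "2021-03-15"]),
    "event_type": ["task", "purchase", "privacy_setting_change", "task", "purchase"],
})

logs["month"] = logs["timestamp"].dt.to_period("M")

# Panel: per user and month, count each event type (tasks, purchases, privacy changes)
panel = (logs
         .groupby(["user_id", "month", "event_type"])
         .size()
         .unstack("event_type", fill_value=0)
         .reset_index())

# Relationship length in days since the user's first logged interaction
first_seen = (logs.groupby("user_id")["timestamp"].min()
              .rename("first_interaction").reset_index())
panel = panel.merge(first_seen, on="user_id")
panel["relationship_days"] = (
    panel["month"].dt.to_timestamp() - panel["first_interaction"]).dt.days.clip(lower=0)

print(panel)
```

Panels of this kind could then feed growth or duration models that trace how benefits, costs, and privacy behaviors evolve over the user–AIA relationship.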

Table 6 Future research questions

Second, to empower AIA users, our interventions aimed at informing and encouraging users regarding the protection of their personal data in relationships with AIAs. This is certainly not the only way to empower consumers, but it is arguably one of the easiest. Future research could explore additional strategies that are potentially even more effective at making consumers feel empowered and at alleviating their privacy concerns. However, managers should consider that empowerment strategies may introduce more friction into the consumer experience, because consumers become more engaged in the process. In a digital world where frictionless, seamless experiences have become a major priority, future research could investigate the trade-offs in consumer experience that empowerment strategies may require (e.g., having to read information, as in our interventions). Potentially, AIAs could use their natural interaction capabilities to mitigate this issue by integrating conversational privacy dialogs into user interactions, thereby preserving much of the seamless consumer experience. Delivering privacy notices and choices through a two-way verbal dialogue between users and AIAs could provide a more natural, seamless, and effective interface for controlling privacy. Such an intuitive way of communicating privacy preferences could reduce notice complexity and fatigue, a major problem with current privacy policies and settings. Firms can thus potentially rejuvenate their relationships with customers, especially those with a defeatist attitude toward their privacy, which may lead them to actively avoid further engagement with their AIAs.
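A minimal sketch of the kind of two-way privacy dialog envisioned above is shown below. It is not any vendor's API; the PrivacySettings structure, the dialog turns, and the recognized replies are hypothetical placeholders, intended only to show how a notice, a choice, and a protective action could be woven into one short spoken exchange.

```python
# Minimal sketch (assumption, not a vendor API): a two-way verbal privacy dialog
# an AIA could integrate into a normal interaction.
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    store_voice_recordings: bool = True   # hypothetical default data practice
    reviewed_by_user: bool = False        # has the user heard the notice yet?

def privacy_dialog_turn(user_utterance: str, settings: PrivacySettings) -> str:
    """Return the assistant's next spoken prompt, updating settings from the reply."""
    reply = user_utterance.strip().lower()
    if not settings.reviewed_by_user:
        settings.reviewed_by_user = True
        # (1) inform about the data practice, (2) offer a protective action
        return ("By the way, I currently keep your voice recordings to improve answers. "
                "Would you like me to stop storing them? Say yes or no.")
    if reply in {"yes", "yes please", "stop storing them"}:
        settings.store_voice_recordings = False   # (3) act on the user's choice
        return "Done. I will no longer store your voice recordings."
    if reply in {"no", "keep them"}:
        return "Okay, I will keep storing recordings. You can change this anytime."
    return "Sorry, I did not catch that. Should I stop storing your recordings?"

# Example exchange
settings = PrivacySettings()
print(privacy_dialog_turn("", settings))     # assistant raises the privacy notice
print(privacy_dialog_turn("yes", settings))  # user opts out; the setting is updated
```

The design choice is to deliver one notice and one choice per turn, so the dialog stays short enough not to disrupt the seamless experience discussed above.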

Third, anthropomorphic features of AIAs seem to play an intriguing, central role in this technology. A fine-grained perspective is required on which particular anthropomorphic features (e.g., interactivity, learning, personality, embodiment, speech) lead to greater mind perception and how they might differ in the effects they elicit (Rijsdijk et al., 2007). It would thus be a worthwhile endeavor for future research to develop a taxonomy of anthropomorphic features and assess their differential effects. Moreover, AIAs represent a new, unique purchasing channel that has not yet been exploited to its full potential. Future research should investigate the effects of AIA anthropomorphism on novel, untested consumer outcomes such as willingness to pay for a product, purchase likelihood, and the overall perception of the provider company.

Finally, regarding the potentially harmful effects of AIAs, the identity threat that consumers experience in relation to AI assumes a pivotal role. This is noteworthy because many people in society are concerned that AI will gradually displace humans not just in mechanical tasks, but also in tasks that require thinking and feeling (Huang & Rust, 2018). We focus on AIAs, which have so far replaced humans mostly in menial tasks. Yet all kinds of digital technologies now surround us in our daily lives, replacing us in increasingly complex tasks. A broader fear of being replaced by AI in tasks that we regard as exclusively human could drastically change public attitudes toward technology and innovation (Davenport et al., 2020). The point is not to sound alarmist, but to encourage future research to thoroughly examine our relationship with AI in a wider context. We believe that investigating the nomological network (antecedents, mechanisms, outcomes, remedy strategies) of consumers’ relationships with AIAs and their privacy concerns should assume a key role on the research agenda in the digital age. Both researchers and policymakers should consider the implications of AI not only for consumers whose human identity might be threatened, but also for a future economy that is already being transformed by AI.