Introduction

Artificial Intelligence (AI) enables smart services and digital transformation that significantly change the way organizations and electronic markets work (Gursoy et al., 2019). As digital transformation has progressed, AI-based personal virtual assistants (PVAs) have gained significance and are now a standard feature of most mobile devices, which people use on a daily basis (Maedche et al., 2019). Through these assistants, users can communicate with and centrally steer their devices via natural language (McTear, 2017), which adds convenience and makes it easier to use new applications. Additionally, PVAs have the potential to take over repetitive tasks that are easy to automate and, as technology progresses, eventually even more complex and creative tasks (Loebbecke et al., 2020). While private PVA use is increasing, organizational use in conjunction with business software such as enterprise resource planning systems is not (Meyer von Wolff et al., 2019). Database inquiries and analyses, as well as order processing and document management, are only a few of the tasks PVAs could potentially undertake. Using PVAs could give employees more time for valuable, more complex tasks, would save the organization resources and money, and could even offer a competitive advantage (Maedche et al., 2019). Research on AI readiness at the organizational level has shown that AI awareness, understanding how AI-based technologies work, and knowing where they can be employed are crucial to successfully implementing such technologies (Jöhnk et al., 2020). Consequently, implementing PVAs in an organizational setting could be the next logical step in the progress of digitalization. Although PVAs are popular in private use contexts, the organizational context is a new area of application that very few studies have specifically considered, primarily due to the lack of already implemented PVAs in organizational settings. Stieglitz et al. (2018) do not fully embrace the concept of an organizational PVA; yet, they introduce the concept of an automated user service through enterprise bots that provide interactions with complex organizational systems and processes. Meyer von Wolff et al. (2019) describe specific application scenarios for such enterprise assistants, indicating their potential in information acquisition, employee self-service, collaboration, and training. With so many advantages, what is keeping organizations from introducing AI and allowing it to take over large parts of the workplace?

PVAs act within a socio-technical system in which users work toward a particular goal and fulfil tasks using specially employed technologies (Maedche et al., 2019). In private PVA use, for example when using the smartphone PVA to execute a specific task, this socio-technical system is manageable; in organizational use, however, it becomes more complex. Users are aware that they rarely act on their own and that they are interdependent with the entire organization. PVA failure or error can even have severe financial implications. Additionally, user expectations and how organizational PVAs actually function often diverge (Luger & Sellen, 2016), so that users perceive the PVA as a nuisance rather than as facilitating their job and saving time (Maedche et al., 2019). While these factors are grounded in cognitive theories, there is reason in this context to deviate from purely logical reasoning to account for human irrationality (Loebbecke et al., 2020). Human resistance to using, working with, and ultimately also trusting AI is generally induced by emotions that can range from dissatisfaction and frustration with the PVA being a disturbance, to worry and fear that it will make serious mistakes or leak users’ data. Since emotions are important drivers of behavior, the emotions users experience early in a new application’s implementation significantly influence the use of the technology (Beaudry & Pinsonneault, 2010). If users see new technologies as a threat, they generally avoid them (Liang & Xue, 2009). Regarding PVAs, trust and privacy issues mostly lead users to reject this technology (Cho et al., 2020; Liao et al., 2019; Zierau et al., 2020). Failing to address the concerns the human workforce might have in such massive change processes would be dangerous from a strategic point of view and, even more, could lead to employees rejecting the technology completely (Laumer & Eckhardt, 2010). Thus, attention to the emotions induced by using organizational PVAs is vital, particularly before or in the early stages of implementation, especially to draw clear lines and define the right boundaries. This paper’s aim is to disclose negative emotions and concepts related to these PVAs in order to clearly define boundaries of organizational PVA use. Consequently, we draw implications of emotional responses for organizational PVA implementation and provide recommendations for action.

To achieve our goal, we conducted an in-depth interview study, collecting data first in group discussions including 45 employees across various industries and sectors, followed by individual interviews. We collected the data to identify and categorize emotions according to Beaudry and Pinsonneault’s (2010) framework for classifying emotions. Further, we related these emotions to one another and, through open coding, also found negative implications regarding emotion-laden concepts such as trust and privacy related to AI use or non-use. In doing so, we show a dark side of potential organizational PVA use, address concerns raised by the human workforce, and give insight into where boundaries should be drawn.

Overall, our paper is a first step toward systematically revealing basic and specific emotions regarding the potential use of organizational PVAs, thereby showing where organizations should draw boundaries before they implement such AI-based technologies into daily routines. We provide empirical groundwork for theorizing on the boundaries of AI use based on basic emotions and related concepts. We also derive implications for organizations regarding PVA implementation. Future research could draw on these results to validate and enhance our spectrum of relevant emotions, to incorporate it into AI-based system implementation strategies, and to inform change management endeavors.

Theoretical background

Since we bring together two distinct concepts, the PVA as an information systems (IS) technology and emotions as a psychological concept, this paper refers to several theoretical foundations. In the following sections, we give an overview of current and anticipated features of AI-based agents or assistants and outline drawbacks of using AI-based technologies. Then, we show how our work on emotions related to organizational PVAs can be embedded in a well-established emotion framework.

Personal virtual assistants

The origins of PVAs can be traced back to the formative research of Turing (1950) and Weizenbaum (1966), which shapes AI-related research to this day. Turing attempted to define AI through an experiment in which a human would unknowingly communicate with a machine that appeared to be human, the machine having to convince the human for as long as possible that they were actually communicating with another human. To date, no machine has successfully passed the Turing test. A large portion of AI literature still follows the question of how to make communication between humans and machines as natural as possible. Weizenbaum (1966) introduced the first dialog system that enabled communication with a computer through natural language processing (NLP). Since the 1980s, dialog systems have been introduced that do not focus solely on communication, but are able to fulfil tasks independent of human control (Dale, 2019). Often, these systems were not scalable and were prone to errors, which explains why they were not commercially successful (McTear, 2017). This changed after 2010, as research on AI and speech recognition advanced and the user’s context information, such as current location and user history, could be accessed (McTear, 2017; Radziwill & Benton, 2017). Further, assistants have been introduced that can be integrated into messengers and communicate directly with the user through them. This progress is reinforced by investments of large technology corporations such as Microsoft, IBM, Apple, Amazon, and Google, which have all developed PVAs for end users.

The term ‘personal virtual assistant’ (PVA) is difficult to define clearly because it is not a unique or generally known term. Previous studies have shown various conceptualizations and diverse terminology related to the anticipated or displayed features (see Table 1). The main common denominator of all the described agents or assistants is their human-like communication through natural language. Further, the words ‘smart’ and ‘intelligent’ have been widely established to imply underlying AI technology.

Table 1 PVA terms

While most of the terms given in Table 1 do not necessarily suggest an organizational context, most of the identified agents or assistants are applied at the interface with the customer, for example, supporting or delivering customer services. Others are responsible for executing (automated) tasks. The most relevant attribute we find in a PVA, as opposed to a plain chatbot or a specialized conversational agent, is its ability to act as an interface that connects the user to many different services. For our purposes, we therefore define the PVA as the focal point for numerous functions that can be accessed through natural language, without touching on the logic behind any application, as the PVA is an intermediary.
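To make this intermediary role more concrete, the following minimal sketch (in Python, with hypothetical service names; our definition does not prescribe any implementation) illustrates a PVA as a dispatcher that routes natural-language requests to registered backend services without containing their logic.

```python
# Minimal, hypothetical sketch of the PVA-as-intermediary idea: the assistant
# only matches intents and forwards requests; the application logic stays
# inside the connected backend services.

from typing import Callable, Dict

class PersonalVirtualAssistant:
    def __init__(self) -> None:
        self._services: Dict[str, Callable[[str], str]] = {}

    def register(self, intent: str, service: Callable[[str], str]) -> None:
        """Connect an intent keyword to a backend service."""
        self._services[intent] = service

    def handle(self, utterance: str) -> str:
        """Very crude keyword-based intent matching, for illustration only."""
        for intent, service in self._services.items():
            if intent in utterance.lower():
                return service(utterance)
        return "Sorry, I cannot help with that yet."

# Hypothetical backend services; their internal logic is opaque to the PVA.
pva = PersonalVirtualAssistant()
pva.register("order", lambda text: "Order request forwarded to the ERP system.")
pva.register("document", lambda text: "Document retrieved from the document management system.")

print(pva.handle("Please check the status of my order"))
```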

Drawbacks of using AI-based technologies

Although researchers and society often recognize the merit AI-based technologies such as PVAs have in potentially creating value, they tend to lose sight of the high cost and the negative emotional impact these technologies can have on the human workforce. Overall, individuals working in large organizations are sceptical of PVAs, mistrusting them for several reasons, of which privacy concerns rank highest. Privacy refers to the state of being neither in others’ company nor under others’ observation; it also implies freedom from unwanted intrusion (Merriam-Webster, 2005). Wiretapping and listening in to covertly collected recordings, exploiting security vulnerabilities, and user impersonation are several ways in which malicious actors can breach users’ privacy and security (Chung et al., 2017). Lentzsch et al. (2021) have shown that malicious actors can pressure innocent users into unintentionally revealing information after downloading seemingly harmless data onto the PVA they are using. The inability to administer and change privacy as well as content settings for AI-based technologies leads to mistrust, which emphasizes the importance of designing PVAs to be highly privacy-sensitive and trustworthy (Cheng et al., 2021; Cho et al., 2020).

The general human desire to maintain privacy and data ownership stems from the fear of losing autonomy and personal integrity; therefore, people are mostly reluctant to trade privacy and control (Ehrari et al., 2020) unless the value they gain exceeds the risk they perceive to be taking. This gap between the willingness to protect data and the willingness to share it can be mediated by trust. Trust is based on the expectation that an action important to one person will be executed by another party (in this case, either another person or the PVA), regardless of whether the other party can be controlled or monitored (Mayer et al., 1995). A trust problem arises if the user’s expectations cannot be fulfilled by AI-based technologies because they often do not work or behave as the user anticipates. This leads to a large gap in terms of known machine intelligence, system capability, and goals (Luger & Sellen, 2016), and can be attributed to the high cost of AI training, which requires reliable training datasets (Denning & Denning, 2020).

To leverage the potential AI provides for strategic decision-making in an organizational setting, managers must transfer authority and control to AI-based decision systems such as PVAs. However, humans are less likely to delegate strategic decisions to AI than to another person, since they feel more positive emotions when delegating to another person (Leyer & Schneider, 2019). Further, there is a perceived loss of competence and reputation when organizations transfer decision-making from an employee to an AI system, and there is a moral burden on employees who have to face the real-life consequences of the decisions an AI system makes (Krogh, 2018; Mayer et al., 2020). Collaboration between humans and machines does not guarantee better outcomes, and when a PVA errs or shows untoward bias, people often do not intervene sufficiently to address the problem (Vaccaro & Waldo, 2019), partly because employees’ ability to critically reflect on their work is reduced once the PVA has taken over the entire decision-making process (Mayer et al., 2020).

Generally, a PVA can improve human capabilities by enhancing intelligence and cognition and, in turn, human performance (Siddike et al., 2018). However, these enhanced capabilities and performance are highly dependent on the success of the interaction between the PVA and its user, making it vital to research the factors that influence this interaction. One such factor is human emotion.

Emotions and emotion models

Emotions can be viewed as the primary human motivational system (Leeper, 1948; Mowrer, 1960), as well as a specific, very elementary part of intelligence (Balcar, 2011). The aspect of emotions that drives human action and interaction is also reflected in the emotions humans display in communication through and about IS (Rice & Love, 1987). A great deal of research in psychology has been dedicated to emotions, highlighting different aspects of a particular emotion or of emotions in general (Kleinginna & Kleinginna, 1981). This has led to not one, but many different definitions and conceptualizations of emotions (Chaplin & Krawiec, 1979). Broadly, an emotion is a chronologically evolving sequence: after exposure to a stimulus, a human perceives a state of ‘feeling’ that results in the person displaying externally visible behavior or emotional output (Elfenbein, 2007). Since our research is not based on the physiological response or physically visible behaviors that an emotion might trigger, for our purposes we define an emotion more narrowly as “a mental state of readiness for action” (Beaudry & Pinsonneault, 2010, p. 690) that activates, prioritizes, and organizes a certain behavior in preparation of the optimal response to the demands of the environment (Bagozzi et al., 1999; Balcar, 2011; Lazarus, 1991).

Existing research on emotions regarding IS is mostly not grounded in emotion theories, but rather refers to basic or discrete emotions that IS users display (Hyvärinen & Beck, 2018). Definable and objective basic emotions are foundational to many complex emotions. Although there is no consensus on which emotions are the basic ones, Kowalska and Wróbel (2017) combined the theories presented in state-of-the-art emotion research and arrived at six basic emotions, namely happiness, sadness, anger, disgust, fear/anxiety, and surprise. Within these six basic emotions, we observe a distinction between positive and negative emotions as two independent dimensions that are universal across cultural, gender, and age groups (Bagozzi et al., 1999; Pappas et al., 2014). However, surprise cannot be directly attributed either a positive or a negative quality (Ortony & Turner, 1990); therefore, we decided to omit it from our further investigation.

Many emotion models or frameworks given in the literature focus on emotions unrelated to an organizational context (Plutchik, 1980; Russell, 1980). Other emotion models that could appropriately assess attitudes toward AI-based technologies in the workplace (Kay & Loverock, 2008; Richins, 1997) do not incorporate different stages of assessment. Setting these aside, we decided to base our investigation on Beaudry and Pinsonneault’s (2010) framework for classifying emotions, since its primary appraisal divides emotions according to whether they constitute an opportunity or pose a threat to the human. Maedche et al. (2019) affirm this appraisal in the context of PVAs. Beaudry and Pinsonneault’s (2010) secondary appraisal incorporates an element of perceived control over expected consequences, or the lack of such control. Foundational to the framework, as given in Fig. 1, is a contextual model of stress (Lazarus & Folkman, 1984), which assumes that coping mechanisms help humans meet demands that exceed their existing resources. These demands can be divided into ones that rely on either cognitive or behavioral effort. Cognitive effort, on the one hand, aims to avoid or accept a given situation. Behavioral effort, on the other hand, aims to change the situation by, for example, researching new information. A second foundation for Fig. 1 lies in appraisal theories, according to which emotions arise when humans assess events and situations (Moors et al., 2013). Therefore, emotions are taken as reactions to events or situations, which do not occur without a cause or reason. After completing the first and the second appraisal, any emotion can be classified.

Fig. 1 A framework for classifying emotions (Beaudry & Pinsonneault, 2010, p. 694)

The two appraisal dimensions yield a fourfold segmentation. The examples of emotions listed in Fig. 1 help with the classification, and most of the basic emotions are also mentioned there. The first category, achievement emotions (AE), stems from the first and second appraisals resulting, respectively, in an opportunity and a perceived lack of control over expected consequences. For our purposes, we chose to focus on happiness, satisfaction, and relief, as we interpret pleasure and enjoyment as synonyms for satisfaction (Merriam-Webster, 2005). The second category, challenge emotions (CE), combines an opportunity with perceived control over the expected consequences, revealing itself in excitement, hope, anticipation, playfulness, and flow. We interpret arousal as a synonym for enjoyment; therefore, we chose not to code it separately. The third category, loss emotions (LE), combines a threat with a perceived lack of control over expected consequences. Examples here are anger, dissatisfaction, frustration, and disgust. Since we interpret disappointment as a synonym for dissatisfaction and annoyance as directly related to frustration, we omitted these two from the further course of the investigation. Disgust was also discarded, as it is an emotion elicited in relation to physical objects (Rozin et al., 2009), which we did not expect to contribute to our findings. The fourth and last category, deterrence emotions (DE), stems from the first and second appraisals revealing, respectively, a threat and perceived control over the expected consequences. Fear, worry, and distress are illustrative of this category. We omitted anxiety, since we interpret it as a synonym for fear.
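To make the segmentation explicit, the following minimal sketch (in Python; the function name and string labels are ours, purely for illustration) maps the outcomes of the two appraisals onto the four emotion categories described above.

```python
# Minimal sketch: mapping the two appraisals of Beaudry and Pinsonneault's
# (2010) framework onto its four emotion categories. Names are illustrative.

def classify_emotion_category(primary: str, perceived_control: bool) -> str:
    """primary: 'opportunity' or 'threat' (first appraisal);
    perceived_control: control over expected consequences (second appraisal)."""
    if primary == "opportunity":
        # opportunity + control -> challenge; opportunity + no control -> achievement
        return "challenge emotions (CE)" if perceived_control else "achievement emotions (AE)"
    if primary == "threat":
        # threat + control -> deterrence; threat + no control -> loss
        return "deterrence emotions (DE)" if perceived_control else "loss emotions (LE)"
    raise ValueError("primary appraisal must be 'opportunity' or 'threat'")

# Examples: fear combines a threat with perceived control,
# relief an opportunity with a perceived lack of control.
print(classify_emotion_category("threat", True))        # deterrence emotions (DE)
print(classify_emotion_category("opportunity", False))  # achievement emotions (AE)
```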

Method

We used a qualitative research approach to understand the emotions evoked by organizational PVAs (Bhattacherjee, 2012). In doing so, we intended to account for the complexity and novelty of a potential organizational PVA implementation and to gain a thorough understanding of the emotions triggered by such an implementation. We selected the emotions that appeared relevant from Beaudry and Pinsonneault’s (2010) framework for classifying emotions. Figure 2 summarizes our research approach.

Fig. 2 Overview of our qualitative research approach

Data collection

We conducted a focus-group and interview study to better understand the emotions the potential use of an AI-based PVA would evoke in the workplace (Myers & Newman, 2007; Rabiee, 2004). Using a non-probability sampling method combining convenience and voluntary response sampling (Wolf et al., 2016), we arrived at a sample of 45 participants (P01-45) between the ages of 19 and 40. All participants were part-time students (convenience sample) or had recently completed their part-time studies (voluntary sample) at the time of the data collection. Also, all participants were employed or self-employed at the time of data collection and were asked to refer to their current or last workplace when giving statements regarding potential PVA use. Combining these sampling strategies enabled data collection from participants who had shown no previous interest in the topic, as well as from those who were interested and had thus voluntarily chosen to respond and take part in our study.

We structured the data collection into two parts. First, we conducted focus-group discussions with groups of 4–7 participants each, who came together and discussed how their workplace could potentially use a PVA. These discussions took place between June and November 2019 and lasted 90–120 min each. Second, we conducted individual one-on-one interviews via telephone or video conference to follow up on the focus-group discussions. These interviews took place between July 2019 and February 2020. All participants consented to the interviews being recorded with an audio device, and we transcribed the recordings shortly afterwards. The data collection took place in different cities across Germany; therefore, the focus-group discussions as well as the interviews were conducted in the participants’ native language, German.

In line with our paper’s in-depth approach and to generate rich data, we mainly asked open questions (Bhattacherjee, 2012; Myers & Newman, 2007). The focus-group discussions as well as the interviews were semi-structured in order to provide the same stimuli and account for equivalence in meaning (Barriball & While, 1994). In the focus-group discussions, we made sure participants stayed with the given topic of potential organizational PVA use, and we encouraged them to interact with one another (Rabiee, 2004).

Data analysis

For our analysis, we used the qualitative data analysis software Atlas.ti to analyze the full transcripts of the group discussions, as well as excerpts of the individual interviews. One author, the first coder, started to code the emotions according to Beaudry and Pinsonneault’s (2010) framework, while additionally using open coding to find any related themes or phenomena that occurred. We assigned the same level of specificity to all codes, arriving at a flat coding frame (Lewins & Silver, 2014); however, we later summarized the individual emotions into emotion categories, as in Table 2. Through the selective coding scheme, we arrived at 14 different emotion codes in four clusters, as well as four related phenomenon codes, which the open coding disclosed (trust in humans, trust in PVAs, anthropomorphism, and privacy). Additionally, we used four demographic information codes (job information, field of study, previous PVA experience, and the last technological device purchased). The second coder then used this list of 22 codes to conduct selective coding. Afterwards, the two coders discussed the cases on which they disagreed, either to reach agreement on a code or to remain in disagreement. Eventually, we arrived at an intercoder reliability of 92%, “the percentage of agreement of all coding decisions made by pairs of coders on which the coders agree” (Lombard et al., 2002, p. 590). According to most methodologists, a coefficient greater than 0.90 is always deemed acceptable (Neuendorf, 2002).
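As a simple illustration of this agreement measure (a minimal sketch with hypothetical coding decisions, not our actual data), percent agreement is the share of coding decisions on which both coders assigned the same code:

```python
# Minimal sketch: intercoder reliability as percent agreement
# (Lombard et al., 2002). The coding decisions below are hypothetical.

def percent_agreement(coder_a, coder_b):
    """Share of coding decisions on which two coders agree."""
    assert len(coder_a) == len(coder_b), "coders must rate the same units"
    agreements = sum(a == b for a, b in zip(coder_a, coder_b))
    return agreements / len(coder_a)

# Hypothetical example: two coders assign emotion codes to ten statements.
coder_a = ["relief", "fear", "hope", "worry", "anticipation",
           "dissatisfaction", "fear", "relief", "excitement", "worry"]
coder_b = ["relief", "fear", "hope", "worry", "anticipation",
           "frustration", "fear", "relief", "excitement", "worry"]

print(f"{percent_agreement(coder_a, coder_b):.0%}")  # 90%
```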

Table 2 Coding scheme

Results

We structured our results according to the emotion categories Beaudry and Pinsonneault (2010) suggested. These categories, with their corresponding emotions/codes, provided a good basis for understanding human emotions toward organizational PVAs. Then, we looked at the related concepts that emerged during the open coding process and presented them in relation to the coded emotions. A common approach to presenting qualitative research results is to present illustrative quotes from the focus-group discussions and the interviews (Eldh et al., 2020). For convenience, we translated the original German quotes as literally as possible.

Participants’ demographic background

The demographic codes confirmed that all participants had listed business administration or management information systems as their primary field of study, and all were employed or had until recently been employed (at most two months before data collection). The participants’ workplaces were in a variety of sectors; the roles indicated by their job titles (listed in Table 3) show that the automotive and public sectors as well as software engineering are the most strongly represented. According to Neuner-Jehle’s (2019) study on Germany’s job market in 2019, many of these job roles, such as software developer, IT administrator, and IT consultant, as well as human resources and project manager, are amongst the most popular and sought-after ones among applicants with university degrees. Further, Germany’s most important business sectors (Statista, 2020) – the automotive industry, mechanical engineering, and pharmaceuticals – are all adequately represented in our sample. Thus, our sample qualifies as a cross-section of office workers with university degrees in Germany. Since, to date, only very few organizational PVAs have been implemented in Germany, only two participants reported experience with a PVA in an organizational context. Yet, a majority of 60% (27 participants) had used PVAs privately and in their own homes, for example Alexa, Siri, or chatbots in online customer services. Table 3 gives this data as well.

Table 3 Participants’ demographic data, including their previous private PVA experience

Further, we asked participants about the last technical device they had purchased to find out whether any of them had invested significantly in expensive and advanced technology. This gives an indication of their general technological affinity, or of a possible technological aversion signaled by not having purchased any device in a long while. Most participants did not mention unusual purchases; the majority had recently purchased a new smartphone, headphones, speakers, smart TV, smartwatch, tablet, or computer. All participants explained why they had selected the devices they purchased, and none stated the unavailability of other options as a reason for their purchase. Thus, we found no remarkable anomalies regarding technology affinity or aversion among our participants that would have suggested excluding them from the sample.

Emotion categories

In total, we found 741 emotion codes across our sample. We found 146 AE and 259 CE, totaling 405 (55%) emotions considered to represent positive experience or anticipation, and thus an opportunity. Regarding emotions signaling a threat, we found 154 LE and 182 DE, totaling 336 (45%). In each category, at least one emotion code occurred only rarely in our data. Table 4 shows the emotion categories and their respective frequencies.

Table 4 Emotion categories and codes with their respective frequencies

Although we focus on LE and DE, since we want to investigate and mitigate negative emotions and perceptions toward organizational AI use, we also coded for AE and CE and report these results for completeness.

Achievement and challenge emotions

The 146 AE found in the full sample subsume emotions associated with an opportunity combined with a perceived lack of control over expected consequences. These make up 20% of all emotion codes and 36% of all opportunity emotions. Happiness was expressed only twice, while satisfaction was demonstrated twelve times in the sample.

The most significant emotion within the AE is relief, a positive feeling of being at ease or having a burden lifted. It occurs 132 times, making up 91% of all AE. Most participants recognized that PVAs could potentially help to ease time pressure and other resource constraints in the workplace, as “the PVA simplifies and supports” their work through “automatization,” as long as they “function well” and are not prone to making mistakes. This was especially applicable to tasks that do not require a great deal of communication, especially concerning (potential) customers:

“I can imagine it in some types of routine tasks; standardized tasks without much personal contact is where I can really see it being applied.” (P05)

There were 259 CE codes, together representing an opportunity associated with perceived control of expected consequences. These responses made up 35% of all emotion codes, i.e., 64% of all opportunity emotions. Excitement occurred 58 times (22% of the CE), and was mainly directed at an organizational PVA’s potential features and the tasks it could fulfil. Participants were “excited” in advance about this possibility, envisaging the PVAs to be “cool” and “great,” a “massive opportunity” of which they were a “fan.”

Hope occurred 76 times, i.e., 30% of all CE, and it was often articulated by indicators such as “it would be nice” or “maybe it would be possible” for the PVA to support certain processes or tasks that participants “wish for.”

“I hope that they will come into our lives soon enough for me to experience [the technology] and the advantages they will bring.” (P14)

These contributions reveal that participants hope for certain outcomes regarding organizational PVAs, but are not necessarily confident that they will materialize.

The most prominent CE is anticipation, expressing the act of looking forward to an occurrence. We found it 114 times, i.e., 44% of all CE in our sample. Participants greatly looked forward to the PVA “fulfilling many tasks,” “increasing efficiency,” and “optimizing processes” by being “very helpful” and “reliable.”

Playfulness, occurring seven times (3% of CE), and flow, occurring four times (1% of CE), were of secondary importance, and will not be further discussed.

Loss emotions

We found 154 LE in the sample, which constitute 21% of all emotion codes and 46% of the threat emotions. We regard LE as signaling threats that occur when users perceive they cannot control expected consequences. Anger, an intense emotional state of displeasure, makes up only 3% (four occurrences) of all LE, and mainly occurs when participants think about PVA failure, as in

“If you command your assistant to do something and it misunderstands or something and does it completely wrong, that is very exasperating and brings an emotional response.” (P43)

The most common emotion among LE is dissatisfaction, which occurred 89 times, i.e., 58% of LE. Users feel this when their expectations are not met. In our analysis, we largely found dissatisfaction in the context of the PVA “not functioning as desired,” “lacking features,” or being “without potential for use.” Our data exhibit many different degrees of dissatisfaction, from very specific criticism of features, as in

“the system forces the user to use specific voice commands, but when you have to talk like that it’s not natural […] if you say just anything, the system won’t understand.” (P08)

to comprehensively stating that

“after two questions, [the PVA] wasn’t helpful, no positive experience.” (P43)

Dissatisfaction is closely related to frustration, a feeling of irritation. This reference occurred 61 times in our sample, i.e., 39% of all LE. Many participants showed frustration because they assumed they would lose many desired interactions. Further, some encounters could become more difficult if they used a PVA instead of communicating directly with colleagues or customers, even to the point of losing a core competence.

“AI stubbornly follows the pre-programmed rules, but it is mostly the human element that defines a company. Customers often like calling because they like the consultants. Using AI, all companies would be the same; all the friendliness, the human element, having a bit of chitchat – that would suddenly be gone, which I see as a big problem.” (P17)

Further, the PVA’s functions and (in)accuracy could be a source of frustration for users.

“I am really annoyed at technical devices when they do not immediately function as I would like them to, because I expect them to fulfill their potential.” (P32)

Deterrence emotions

A total of 182 DE occurred in our sample, i.e., 24% of all emotion codes and 54% of all threat emotions. DE, as opposed to LE, signal a threat although users simultaneously perceive having control of expected consequences. Fear, coded 82 times, accounts for 45% of DE. It is experienced in the presence or threat of danger. Mostly, participants fear “job loss” through organizational PVAs being implemented; also, they fear “being tracked” and spied on by a PVA, and that their “privacy” and “data security” could be jeopardized.

“I get the feeling that I am completely under surveillance and being scrutinized in everything I do, with allusions that it could be done better – and actually that I can be replaced by the PVA that suggests how I should be doing things anyway.” (P02)

This articulates both fear of total surveillance by the PVA, and fear of being made redundant by it.

Fear is a very strong emotion; worry, in comparison, is a lighter kind of fear, an uneasy state of mind prompted by anticipated trouble. Worry occurred 98 times in the sample, accounting for 54% of the DE. Worry largely became evident regarding “responsibility” if PVA use were to produce “bad decisions” or “poor execution” of tasks. However, it also occurred in the context of the topics mentioned as prompting fear. Further, participants were worried that they would

“have to give up the human factor, which is hard for me and I also don’t think the company would want that.” (P07)

Additionally, discrimination and respect seemed to be of concern to participants.

“I imagine it being hard for the PVA to implement respect and integrity. I wouldn’t even know how [the PVA] can implement it. Since these are soft skills which are not measurable, this might be hard.” (P08)

Distress, coded only twice (1%), indicates a state of great suffering. Only two participants referred to this; one found it “very upsetting” that a PVA could listen to all their conversations (P36). The other participant went even further, stating that

“if there ever would be such PVAs, I do not want to be alive anymore – I find this a very creepy notion.” (P44)

Additional related concepts

During the open coding process, we found a number of recurring themes that mostly appeared in the context of coded emotions. Specifically, we found anthropomorphism, privacy, trust in humans, and trust in PVAs. Table 5 shows their occurrence in relation to the coded emotions.

Table 5 Related concepts and their frequency of occurrence in conjunction with emotion codes

Anthropomorphism is the attribution of human characteristics to non-human objects (Epley et al., 2007). In our data, participants attributed human characteristics to the PVA, which we found 36 times in the sample. Anthropomorphism is not associated with any emotion in particular; rather, it appears in conjunction with most emotions. Recurring themes here are “avatars” and the PVA recognizing and “reacting to moods.” Anthropomorphism and excitement are displayed in statements such as

“I would find it cool if the PVA were something tangible, a type of avatar so that you have a virtual figure, so you don’t have to just talk to a screen, but rather to an animal or so.” (P12)

Further, the combination of anthropomorphism and anticipation occurred in quotes like

“but this is about the PVA recognizing what mood you are in, and that’s something it should be able to do.” (P15)

Privacy references occurred 78 times, mostly in conjunction with fear (24 times), worry (ten times), or frustration (nine times). Worried participants gave statements along these lines:

“And then data security topics – what about confidential information? Especially regarding the competition, the data shouldn’t be spread outside of the company. I have a critical opinion of this.” (P10)

Stronger sentiments came to light when fear and privacy concerns occurred together, leading participants to admit that they saw

“no guarantee that criminals or intelligence or regulatory agencies would not use the devices to wiretap.” (P14)

They reflected fearfully on what would happen if they were to find out that a PVA had been operating without their knowledge.

“And then it started. In that moment I already found it very freaky. I don’t even know how to explain it. In that moment, I was thinking: What did I say in the past half hour? What could this device already have recorded? Already completely paranoid.” (P13)

Trust in humans on the one hand, and trust in PVAs on the other, were also articulated in the sample, at almost the same frequency. Trust in humans is primarily expressed in the context of dissatisfaction regarding the organizational PVA. Participants complained that the PVA was

“only able to reply to what it was programmed to do, which makes it so different from us humans and how we interact […] and this is why I don’t see it being fair, because fairness would have to be a function.” (P17)

Trust in humans was also implied when participants expressed worry that all human care and emotionality would be lost through the use of organizational PVAs, as

“the human component will be lacking, and it doesn’t know all these thousands of people” (P20)

and

“where emotions and humans are involved, things are more conflict-laden.” (P44)

Trust in the PVA, on the other hand, mostly co-occurred with the emotion hope. This shows that participants are willing to trust an organizational PVA, although they remain unsure of the outcome. While trust in machines used to work differently from trust in humans, with increasing machine learning and anthropomorphism the processes have become more similar (Zierau et al., 2020). In this context, participants showed hope for potential PVA capabilities, often regarding standardized processes which they deemed “easy” for the PVA, even if such capabilities do not currently exist but could potentially be realized in the future:

“The ideal would be for the PVA to take over everything […] I would hand over the entire process.” (P30)

Discussion

In this section, we discuss the results of our focus-group and interview study, especially regarding possible boundaries to be set for organizational PVAs based on the threat emotions participants expressed. Further, we provide recommendations for action and guidance for organizational PVA implementation, as these emotions can set valuable cornerstones in an organizational AI strategy.

First, we have to acknowledge that participants display a fair amount of opportunity-related emotions. The AE we presented in our results focused largely on anticipated features or functionalities that could make participants’ work easier and more convenient, without requiring a high-performance PVA. Such PVAs would hardly tap into the full potential AI can offer, and would not enter domains that have remained exclusive to humans (Schuetz & Venkatesh, 2020). The 27 participants who mentioned experiencing relief mostly focused on “non-complex/simple tasks” which can be “time-consuming”.

“I would expect a PVA to do exactly as expected and not to decide, analyze, and interpret something independently.” (P16)

and

“I think PVAs can be a very sensible support, especially when they relieve people from routine or very standardized tasks so that they can then focus on the real challenges at work.” (P30)

These findings show that the discussion of letting AI completely take over the workplace is premature, as the participants clearly direct us toward framing PVAs as taking over tasks, but not entire jobs (Sako, 2020). This comes with a high degree of skepticism toward PVA use for critical and non-trivial tasks and processes. Such skepticism frequently stems from LE, mainly dissatisfaction and frustration. In all, 23 participants stated concern regarding the PVA lacking desired functionalities, and 11 participants were concerned about PVAs being prone to errors. Dissatisfaction and frustration can only be avoided by ensuring that (future) organizational PVAs are seamlessly integrated and work with very few errors (Luger & Sellen, 2016). Thus, PVAs should visibly generate added value for users without blurring the clear distinction between human and AI-system capabilities (Schuetz & Venkatesh, 2020). In this way, organizations can set clear boundaries, avoiding a total loss of control and addressing the fear that future generations might become unduly dependent on the PVA (Reis et al., 2020).

LE involve a perceived inability to control expected consequences in their secondary appraisal, and they also appear connected to trust in humans due to a lack of trust in PVAs. Trust is associated with risk-taking without controlling the other party (Mayer et al., 1995), so it is plausible that LE can occur when employees lack trust in PVAs. Trust building is a dynamic process, and continued trust depends not only on the PVA’s performance, but also on its purpose (Siau & Wang, 2018). Creating choice opportunities and providing instrumental contingency (Ly et al., 2019) can increase perceived control, which would lead to fewer LE coinciding with trust in PVAs. Fostering such trust is, therefore, particularly important for any organization attempting to implement an organizational PVA, and can also reveal where AI strategies reach their boundaries. A lack of clarity over job replacement and displacement through PVAs leads to distrust and hampers continuous trust development (Siau & Wang, 2018):

“If you have many repetitive tasks at work, you might have greater fear of being replaced. And eventually, it will affect the entire company if certain employee groups feel threatened regarding their job security.” (P28)

This can only be mitigated by involving various stakeholders with distinct perspectives and expertise in developing and using AI-based technologies, even if this heterogeneity will create obstacles in communication with each other and with decision-makers (Asatiani et al., 2020).

Privacy, or the lack thereof, is strongly associated with the DE of fear and worry. This resonates with state-of-the-art research on new technologies and privacy (Fox & Royne, 2018; Uchidiuno et al., 2018; Xu & Gupta, 2009). Nonetheless, participants displaying fear or worry often referred to a perceived lack of control over expected consequences, as 23 participants mentioned. From this we conclude that emotions associated with privacy can be blurry, and the distinction between LE and DE becomes less fixed. Nonetheless, the connection between threat emotions and privacy issues remains clear in the data. To mitigate these issues and find the right boundary for PVA involvement, the core principles of applied ethics, namely respect for autonomy, beneficence, and justice, are helpful (Canca, 2020). Organizational decision-makers must balance the different ethically permissible options, especially by visibly addressing their workforce’s concern or fear regarding autonomy and privacy loss while simultaneously offering prevention or mitigation strategies. Anonymizing data, even partially, could be one such strategy until employees feel fewer threat emotions (Schomakers et al., 2020).

Current research suggests that anthropomorphism, or the anthropomorphic design of PVAs, generally triggers a positive emotional response (Adam et al., 2020; Moussawi et al., 2020). However, our results show an ambiguous response. While some participants see an opportunity in human-like PVA features, others show fear that can be attributed to the uncanny valley effect (Mori et al., 2012), which has been shown to increase DE.

“I don’t need the PVA to talk to me like an actual human being, and I am not sure I would want that. I could probably get used to it but I still find the idea strange.” (P24)

Additionally, several participants repeatedly stressed the importance of absolute transparency about whether they are communicating with, or receiving output from, the PVA or a colleague, so that they can adjust their reactions accordingly.

“I would lack clear boundaries and get an uneasy feeling if I didn’t know whether the e-mail was actually written and sent by my colleague, or whether the PVA did it without the colleague even knowing what has been sent on their behalf.” (P31)

This constitutes another boundary for PVA design and implementation. Still, it cannot be viewed in isolation from privacy and trust, considering Zarifis et al.’s (2021) finding that trust is lower and privacy concerns are higher when the user can clearly recognize AI.

To summarize, organizations should adhere to the boundary of implementing a PVA only for tasks their workforce chooses, and is relieved, to pass on to an AI-based technology. By carefully and transparently introducing PVAs, taking DE into account, they can assuage fear and foster trust regarding job security and stakeholder involvement.

We have noted theoretical implications regarding the transparency of interacting with AI, finding ambiguity and the need for deeper investigation. Further, we were able to show that Beaudry and Pinsonneault’s (2010) framework for classifying emotions not only constitutes a stable basis for investigating emotions regarding organizational AI, but also, through the underlying appraisal theories, offers a foundation for investigating and explaining the basic emotions employees reveal when confronted with having to let AI take over.

Conclusion

This paper has provided a first glance at the emotions evoked by the potential use of organizational PVAs based on AI. We combined insights gained from ten focus-group discussions and 45 individual interviews to reveal and analyze emotions and to draw implications regarding boundaries for AI-based technologies. Further, we have made recommendations for action for organizations planning to implement a PVA. Thereby, we contribute to the research stream on emotions and technologies, and we open up the discussion on human emotions toward AI.

Our results are subject to limitations, which can encourage further research endeavors in this promising research stream. We suggest expanding the sample by adding further demographic groups, since in our project all participants had completed or were working toward university degrees in business administration or management information systems. Adding participants with no academic background might bring more application scenarios, as well as disclose further causes of resistance, especially regarding job loss and being replaced by an organizational PVA. Additionally, we found some promising co-occurrences of emotions and related concepts, which could be tested empirically through quantitative research. Future qualitative research with open coding could add further related concepts, especially concepts derived from ethics. Also, future research could include different emotion theories and frameworks on which to base the analysis. We believe that applying Beaudry and Pinsonneault’s (2010) framework for classifying emotions delivered promising insights regarding organizational PVA use, but other theories and models that categorize emotions in a more fine-grained manner could refine the results. We consider that emotion research regarding the organizational use of AI-based technologies is still in its infancy, yet it offers valuable insights and vast avenues for future research.