1 Introduction

The rise of smart cities, Internet of Things (IoT), personalization and self-driving cars signals an automated ‘future now’ where ubiquitous connectivity and artificial intelligence (AI) are fundamentally reshaping how we live and work.

One of the latest beneficiaries of this trend is ‘Emotional AI’ (EAI), a new wave of computing that combines artificial intelligence, biometrics, machine learning and big data. Simply put, EAI refers to the ability of machines and devices to infer a person’s emotional state by reading their facial expressions, body language, skin conductance level, eye movement, voice tone, respiration, and heart rate variability, as well as by applying machine learning to images and words (Larradet et al. 2020; McStay 2018; Richardson 2020; Rukavina et al. 2016). Emotional AI products include wearable bio-sensors and actuators that measure respiration, heart rate, and skin conductance levels; speech processors that analyze voice tone; video recognition software that tracks facial micro-expressions; and even headsets that map brain activity. The origins of this technology stem from the pioneering work in affective computing by Picard (1995), who coined the term ‘affective computing’ to describe computational technologies that can predict and respond to a person’s psycho-physical state. Although still in its nascent stage, EAI is already a lucrative USD 20 billion business, with profits expected to double by 2024 (Telford 2019; Crawford 2021). Current applications include Spotify’s voice assistant, which suggests music playlists tailored to a user’s mood; Honda’s automobile bio-sensors, which sense whether drivers are stressed or drowsy; Grammarly’s natural language processing, which can detect an email’s tone; Amazon’s Halo bracelet, which promotes mood awareness; and smart toys such as Moxie, which foster a child’s emotional, social and cognitive development through play-based learning exercises.

Yet the fastest growing application of EAI is in the workplace. While legacy companies such as IBM, Unilever, and Softbank are using emotional analytics for recruitment purposes (Richardson 2020), affect tools are increasingly embedded in automated management systems. For example, to increase efficiency and productivity in call centers, the Japanese company Empath and the Boston start-up Cogito have developed voice recognition software. While Empath’s technology allows managers to read the moods of their employees to assess their well-being, Cogito’s tone detector is designed to gauge customers’ sentiments in order to provide better services. To de-escalate the potential for office environments to turn toxic, the US company Spot markets an AI chatbot that identifies patterns associated with workplace harassment (Fouriezos 2019). Additionally, the security company Vibraimage sells ‘suspect AI’ camera recognition systems to global sporting events that allegedly ‘predict’ criminal intent by monitoring and analyzing a person’s gait, head and eye movements, as well as facial expressions (Wright 2021). Vibraimage products have been used in Russian airports, Russian and Japanese nuclear power plants, and convenience and retail stores in Japan (Kobata, personal communication, 2021).

For businesses, besides reducing the costs and administrative burdens associated with workplace wellness programs, affect recognition tools are primarily intended to optimize efficiency, compliance, and productivity. This is accomplished through automated Human Resource (HR) systems that promise faster “measurement of individual employee performance,” allowing supervisors “to encourage goal achievement, productivity and development” so that employees can benefit from “continuous feedback and coaching” (Cornerstoneondemand.com 2021). But as Hochschild (2012) notes, because the ultimate goal of emotional surveillance is to monetize a worker’s affective state, emotions are no longer private or personal (p.7). Rather, emotions can be transformed into money and profit in excess of the costs normally associated with the labor process.

Although there exists a growing body of literature on digital surveillance in the workplace (Ball 2010; Marciano 2019; Rosenblat 2018; Manokha 2020; Moore and Woodcock 2021), the impact of EAI on workers, managers and the labor process is understudied, apart from Andrew McStay’s seminal book, Emotional AI: The Rise of Empathic Media. This article identifies two major streams of interest evolving out of affect-driven automated management systems. The first centers on the legitimacy of the ‘science’ upon which affect technologies are predicated. Kappas (2010) asks how scientists can create technology that measures human emotions when they do not yet understand what emotions are or how they are constructed. Besides highlighting the complexity of the social and cultural modulators that give rise to affective states, Kappas criticizes the determinist logic of Emotional AI developers who believe accuracy and reliability “is just something that will eventually be solved with a better algorithm” (p.7).

The second concern involves the ethical and legal implications of affect-driven automated management systems. For example, while mindful of the dangers of misuse, proponents of EAI such as McStay believe that, given proper regulatory oversight, EAI is a form of biopower that can help managers find better ways of understanding and communicating with their employees. Critical labor scholars, however, maintain a far more skeptical stance, pointing out historical links between technologies of surveillance and labor exploitation that challenge the ‘neutrality of technology’ assumptions advanced by EAI proponents such as McStay. For example, Crawford (2021) points out that many EAI vendors insist on operating with a black-box approach that hides the algorithmic bias of their technologies under a veneer of scientific objectivity. Rhue (2019) notes that this opacity can lead to discriminatory managerial practices and abusive power relations. La Torre et al. (2019) and Rosenblat (2018) both agree that automated management can foment higher degrees of anxiety and stress through target setting, time tracking, gamification, ticketing systems and performance monitoring. Finally, Manokha (2020) and Marciano (2019) maintain that automated surveillance can erode employer–employee relations, leading to lower trust levels and stalled productivity.

On the one hand, EAI vendors claim that their technologies can assist human managers in finding better ways of understanding and supervising employees, as well as lead to greater levels of workplace satisfaction (Gal et al. 2020). They also insist these tools can help managers make objective and unbiased decisions about a worker’s performance (Moore and Woodcock 2021). On the other hand, affect-driven automated management tools, whether operationalized through self-tracking devices or imposed externally through panoptic systems, can foment higher degrees of anxiety (La Torre et al. 2019), lower trust levels (Brougham and Haar 2017), and encourage discrimination (Rhue 2019).

We suggest that the rise of EAI in the workplace signals a novel and perhaps more insidious genus of neo-Taylorism seeking to optimize workplace efficiency, productivity and profit. Whereas previous generations of biometric devices targeted the exterior corporeality of labor, we argue empathic surveillance passes into the inner and most intimate recesses of the worker-self, exposing it to techniques of actuarial measurement and behavioral control. Put succinctly, EAI is the latest application of “numerous strategies and techniques to subjugate bodies and control populations” (Foucault 1978) by transforming the affective state of physical labor into an emerging form of biopower. As such, we understand EAI as the most recent development by logistical regimes to maximize productivity of populations by making bare ‘life’ (in this case, human emotion) its referent object. This paper nuances and extends nascent literature on emotion-sensing technologies through a highly original, cross-cultural study that focuses on future job seekers’ perceptions of EAI.

1.1 Research questions

Despite the issues mentioned above, empathic surveillance in the workplace is being ushered in, unequivocally and uncritically, as part of the ‘new normal’ in the golden age of big data. Similar to the influence of late nineteenth-century industrialization on HR management, the growth and unbridled acceptance of EAI in the workplace is reconfiguring age-old practices in organizational management. Thus, our study suggests the need for a systematic way of understanding how people perceive the prospect of pursuing jobs in which they will be monitored and assessed by automated management systems that have access to the most intimate regions of the self. Moreover, as affect detection tools migrate across national and cultural borders, especially in the context of transnational corporations, there is an urgent need to understand the cross-cultural factors that influence perceptions and understanding of the technology in the workplace.

Thus, we survey a large body of international students (1015 future job-seekers from 48 countries and 8 regions) and apply a combination of descriptive statistics and Bayesian multi-level analysis to answer the following research questions.

RQ1: What are the general concerns of future job-seekers regarding EAI as managers vs. AI as their replacement?

RQ2: What is the level of awareness of EAI among future job-seekers?

RQ3: How do socio-demographic and cross-cultural factors influence respondents’ perceptions of automated management systems?

RQ4: How do socio-demographic and cross-cultural factors influence self-rated knowledge regarding AI?

RQ5: How does self-rated familiarity with AI influence respondents’ attitudes toward automated management?

To answer RQ1 and RQ2, we use descriptive statistics; for the remaining RQs, we use Bayesian statistical analysis. The intention of the survey is to better understand how socio-demographic, cultural, gender and economic factors influence perceptions and attitudes toward three aspects of AI-enabled HR management: job entry gatekeeping, workplace monitoring, and the threat to a worker’s sense of agency, thus enabling a comprehensive and cross-culturally informed discussion of AI ethics and governance in the age of the quantified workplace. The following section provides an in-depth and critical review of the relevant literature on this subject.

2 Literature review

2.1 Philosophical background: from Taylorism to empathic surveillance

Critical labor scholars use the term ‘neo-Taylorism’ to describe the post-Fordist intensification and acceleration of labor management systems that prioritize standardization, routinization and specialized techniques in assigned work tasks to maximize efficiency and productivity (Vázquez and García 2011). Whereas classical Taylorism omitted the human factor from its efficiency equation, proponents of neo-Taylorism, especially in the post-WWII era, saw a correlation between productivity and a worker’s physical well-being. Yet, as Crowley et al. (2010) observe, concern for the worker came not as the result of a more compassionate or enlightened view of labor relations. Rather, it grew out of the negative consequences of an “increasingly rigorous application of the principles of scientific management” (Crowley et al. 2010, p. 421). In other words, the neo-Taylorist’s fanatical obsession with efficiency placed heightened pressures on the worker, leading to a general deterioration of conditions in both blue- and white-collar occupations. Similarly, Reardon (1998) points out that the evolution of wellness programs in the latter half of the twentieth century grew less out of concern for workers’ health than from empirical studies showing that illness-related absences diminished productivity and profit and increased the financial burden of health care costs on the employer. Critically, like classical Taylorism’s corporeal obsession, twentieth-century wellness programs emphasized the physical rather than the emotional health of the worker (Moore and Robinson 2015). For the most part, emotions in the workplace were deemed unstable and irrational and, as such, were held to have no bearing on human performance or productivity (Simon 1986). This neglect reflected a larger ontological disregard for human emotions in organizational management theory (Dean 1999). These ideas were further supported by Drucker’s (1992) writing on the rise of ‘knowledge workers’, who by definition could not be measured by the corporeal metrics and techniques associated with Taylorism. Importantly, the idea that emotions could not be quantified was largely premised on the fact that, outside of scientific laboratory settings, medical institutions, or focus groups, no technologies existed in the workplace to measure a person’s affective state (Davies 2015).

Contrary to prominent labor theorists of the late 70s and early 80s who understood the computer only in Taylorist terms as an efficiency multiplier, Cooley (1980) warned that the computational workplace was in fact a Trojan horse. Rather than increasing efficiency and liberating the worker from the dreary demands of repetitive tasks, Cooley argued that computers would lead to greater exploitation of social relations in the labor process. Echoing Marx’s (1983) prescient warnings about the dangers of technology in Grundrisse: Foundations of the Critique of Political Economy, Cooley predicted that computer management systems would usher in a more authoritarian form of Taylorism. This argument would later resurface in Deleuze and Guattari’s seminal A Thousand Plateaus (1987), which discusses how technology enchains human labor by transforming workers into biological prosthetics of the machine. Deleuze and Guattari refer to this abstraction of human labor as ‘machinic enslavement’: instead of the worker using the technology, the technology uses the worker to increase productivity and profit. In a similar vein, Adorno and Horkheimer (2002) contend that the technological workplace creates a novel form of indentured servitude in which exchangeability and precarity are normalized. Healey (2020) argues that the pervasiveness of digital monitoring devices in the neoliberal workplace, a dominant characteristic of late capitalism, has fundamentally eroded the qualitative character of the labor process.

The acceptance of biometric monitoring practices in the 80s exemplifies the neo-Taylorist logic of increasing centralized control over the physical body of the laborer. In this regard, EAI signals the emergence of an industrial emotional complex devoted to “psycho-physical informatics,” and in turn, a new genus of wealth creation that monetizes the non-conscious data of workers in order to optimize the workplace and maximize profits. Similar to the interpellative effects of panopticism, under the invisible eye of empathic surveillance, a worker’s emotions are made transparent and vulnerable to measurement, manipulation and control (Jeung et al. 2018; Gu and You 2020). Without the ability to backstage, empathic surveillance demands that a worker’s persona always be authentic and positive (Moore and Woodcock 2021). Under such conditions, the regulation of emotion becomes work itself (Woodcock 2016; Cabanas and Illouz 2019). Indregard et al. (2018) refer to this type of personal estrangement that occurs under empathic control as ‘emotional dissonance.’

2.2 The rise of empathic surveillance

The rise of EAI in the workplace puts into sharp relief the informalization and monetization of affective labor. Whereas affective labor originally referred to emotional work carried out with organizational outsiders (Hardt 1999; Lupton 2016), such as in the fields of hospitality, entertainment, office work and care work, the term has now broadened to include emotional labor amongst organizational insiders (Leighton 2012). In her seminal book, The Managed Heart, Hochschild (2012) observes that emotions are not simply integral to the service economy; they are a service in themselves. In other words, emotions, especially in knowledge work and the service industry, are now construed as having exchange value. The growth of affect recognition tools in the workplace recalibrates the horizons of capital not by expanding outward into the consumer domain (like surveillance capitalism) but by turning inward, extracting greater value from the labor process. As a new source of wealth accumulation, mood monitoring dictates that a worker’s emotional state must be surveilled, measured, and controlled. As a result, workplace performance and productivity are now intimately tied to expressions of authenticity, positivity and spontaneity (Cabanas and Illouz 2021; Davies 2015). Whereas first-generation biometric monitoring sought to optimize performance by reading the exterior body, empathic surveillance allows for control over the microsocial dynamics and inner subjective processes of more fluid and open-ended working environments (Moore and Robinson 2015). For example, affect-driven automated management vendors such as Humanyze use data analytics to optimize workplace social dynamics through wearables equipped with GPS, microphones and Bluetooth that monitor employee physical interactions and conversations. In the emotionally quantified workplace, rather than the probing eyes of a human supervisor, electronic dashboards, bio-sensors and deep learning algorithms monitor and score the performance of each and every worker, making granular second-to-second assessments that can lead to promotion, warning or termination (Mateescu and Nguyen 2019). Often these managerial decisions are simply communicated by an automatic screen prompt or email (Lecher 2019). In the data-driven workplace, employees are no longer regarded simply as physical capital but instead as conduits of actuarial and statistical intelligence gleaned from the extraction of their non-conscious body data.

Beyond the neo-Taylorist disregard for the human element lies the shaky science supporting emotion-sensing tools (Barrett 2017; Barrett et al. 2019; Crawford 2021; Heaven 2020). For decades, researchers in disciplines such as neuroscience, sociology, anthropology, biology and psychology have been unable to agree on whether emotions are hard-wired into the psycho-physical make-up of the human body or whether they are social constructions contingent on social situations and understandings (Leys 2017). Added to this dispute are claims by EAI vendors that all humans manifest a discrete number of universal emotions that are innate and identical from culture to culture (Crawford 2021). Problematically, as EAI technologies cross international borders, their data sets and algorithms are seldom tweaked for gender, ethnic, and cultural differences or, importantly, ‘attitudinal diversity’ (McStay 2021). McStay uses the term “machinic verisimilitude” (2018, p.5) to capture the sense of “good enough” that technologists and business communities are striving for without fully dealing with the social constructivist complexities of ethnocentric, context-dependent views of emotions.

Thus, the ‘science’ of emotions is further problematized by a growing body of literature questioning the validity of the so-called ‘universality thesis’ of emotion, which serves as the foundation of the empathic media industry. Prior to advances in AI and machine learning, early research on affective computing focused on the reliability of computer vision in deciphering human emotion (Picard 1997; Lisetti and Schiano 2000; Picard and Klein 2002; Russell et al. 2003). However, the efficacy of the claims made by computer scientists in these studies was mostly premised on Paul Ekman’s (1999) now disputed face-coding model (Crawford 2021). A review of more than 1000 academic articles on emotional expression has shown that the communication and inference of anger, fear, disgust, and other basic emotions carry significant cultural and contextual variations (Barrett et al. 2019; Chen et al. 2018). Moreover, modes of emoting are increasingly seen not as static but as evolving, since cultures themselves are dynamic and unbounded (Boyd et al. 2011; Vuong and Napier 2015; Vuong 2021). The fluidity of emotions in cultural context challenges the traditional/normative and static ways of structuring emotion datasets favored by tech companies (McStay 2018). The fact that many job-seekers are now aware of AI hiring and are starting to game the algorithms by presenting themselves with different words and facial expressions than they naturally would (Partner 2020) makes the concern over accuracy even graver. This is evidenced by the plethora of YouTube videos by amateur and professional consultants that teach users ‘how to beat AI recruiting’ (Partner 2020). The implications of job-seekers migrating to crowd-sourced platforms to learn how to game the already gamified AI hiring process warrant further investigation beyond the scope of this article.

2.3 Correlates of perception of AI and empathic surveillance use in the workplace

The few studies on the perception of AI in the modern workplace make clear that research methods to measure awareness of AI, especially EAI, and its effects are still at an early stage. Critically, there is a vacuum in the empirical literature devoted to the correlates of perception of EAI as a preeminent tool of HR management. Thus, our current study can be situated within two relevant bodies of literature: (i) technological adoption in the workplace; and (ii) AI-augmented management practice. This section reviews relevant studies on the various factors that influence the perception of AI in the workplace, namely socio-demographic, behavioral, and cross-cultural factors.

2.3.1 Socio-demographic and cross-cultural factors

Regarding socio-demographic factors, men are found to be more willing than women to adopt new technologies, including ICTs (Ali 2012) and self-tracking mobile apps (Urueña et al. 2018). McClure (2017) also finds that women report higher levels of fear related to technology they know little about, such as AI or smart technology. This tendency might be explained by a higher level of perceived technological self-efficacy among male respondents, i.e., the belief that one is capable of performing a task using technologies (Cai et al. 2017; Huffman et al. 2013).

Higher income is also a reliable predictor of willingness to adopt new technologies (Ali 2012; McClure 2017; Urueña et al. 2018). Batte and Arnholt (2003) argue that people from dominant social classes tend to be early adopters of technology, as they can afford the risks and are often viewed as local opinion leaders. Similarly, McClure (2017) shows that technophobes are more likely to come from lower-income and non-White groups. Higher levels of education have also been shown to correlate positively with attitudes toward automated decision-making and news recommendations by AI (Araujo et al. 2020; Thurman et al. 2019). Damerji and Salimi (2021) find that third- and fourth-year university students have higher perceived ease of use, perceived utility, and acceptance of AI. Although these socio-demographic factors are indeed useful in predicting AI perception, most of these studies are conducted from a single-country perspective (Ali 2012; McClure 2017; Batte and Arnholt 2003; Damerji and Salimi 2021; Araujo et al. 2020). Yet, as we discuss next, there is a growing body of literature that explores the cross-cultural nuances in tech-acceptance behaviors.

Curiously, existing theories on technology adoption and acceptance such as the ‘Theory of Planned Behavior’, ‘Theory of Reasoned Action’, and ‘Uses and Gratification Theory’ have struggled to account for cross-cultural differences in norms and values (Taherdoost 2018). Most of these theories model an individual’s reasoning process as a cost-benefit calculation. The ‘Technology Acceptance Model’ (TAM), despite being one of the most cited theories in the field (Davis 1989), purposefully neglects subjective norms on the grounds that they are hard to quantify (Muk and Chung 2015). Even though Venkatesh and Davis (2000) expand the original TAM to include subjective norms, their understanding of the term is based on whether most people who are close to a person think he or she should or should not adopt a technology (p.187). Such a narrow modulator of human behavior does not capture the complexity of cultural nuances in norms, social roles, notions of self or personal values. For example, decades of psychological science research have shown that people in collectivist cultures are more likely to conform to their group’s expectations than those in individualist cultures (Henrich 2020).

Indeed, a growing body of literature indicates the importance of cultural values in explaining the behavioral mechanisms of tech-adoption. Cultural values are shown to be antecedents of perceived risk, perceived self-efficacy, and subjective norms (Alsaleh et al. 2004, 2019; Muñoz-Leiva et al. 2018). In other words, cultural and socio-religious values play a decisive role in influencing users’ perception of the risks and rewards of adopting new technology. For example, a number of U.S. national surveys found that non-religious and less religious people (measured, for instance, by how often they attend religious services (Brewer et al. 2020)) held a more favorable view of AI than highly religious people (Northeastern University and Gallup 2018; West 2018). Except for a few empirical studies that focus on Muslim populations (Adnan et al. 2019), very few studies seek to quantify and compare the effects of specific religions on tech-acceptance behaviors. Thus, this study fills such a gap in the existing literature.

2.3.2 Behavioral factors: trust and self-knowledge regarding EAI

One consistent finding in the literature is that people have little concern over job loss due to AI (Brougham and Haar 2017; Pinto dos Santos et al. 2019). For example, a recent survey of 487 pathologists indicated that nearly 75% of the participants displayed excitement and interest in the prospect of AI integration in their work (Sarwar et al. 2019). Alternatively, there is also evidence suggesting greater anxiety related to the rise of AI applications in the workplace. Brougham and Haar (2017) find in a New Zealand study that the greater an employee’s awareness of these technologies, the lower their organizational commitment and career satisfaction. These findings are consistent with previous studies that have examined the relationship between biometric surveillance and employee trust in the workplace (Rosenblat 2018; Marciano 2019; Mateescu and Nguyen 2019; Manokha 2020). Similarly, a Saudi Arabian study of medical students (Bin Dahmash et al. 2020) found that anxiety toward using AI was correlated with a higher self-perceived understanding of this technology.

The few studies that examine student attitudes toward AI across university majors report mixed results. For example, a 1996 study on university students’ and faculty’s perceptions of business ethics indicates that business and humanities majors share similar value judgements (Curren and Harich 1996). However, more recent studies concerning AI ethics provide evidence to the contrary. In terms of future sustainability, Gherheș and Obrad (2018) find that Romanian students at technical universities hold more positive views of AI than their humanities counterparts. Likewise, Chen and Lee (2019) show that Taiwanese students majoring in science and engineering are more positive about AI’s social impacts than those in humanities, social science, management, education and the arts. Importantly, it appears that the curricula of business schools accredited by the Association to Advance Collegiate Schools of Business (AACSB) emphasize the importance and advantages of acquiring data analytics skills to enter the increasingly AI-enabled business world but mention very little about data ethics and algorithmic bias (Clayton and Clopton 2019). It is also common for business and marketing academic journals to emphasize the positive rather than the negative aspects of AI in optimizing various operations and processes (Prentice et al. 2020). Consequently, one would expect business students to be more familiar with AI and to hold more positive attitudes toward AI in HR management, the central concern explored in this paper.

Thus, this study addresses three major concerns in the present literature. First, the absence of studies on the impact of emotion-sensing technologies in the workplace calls for further research to fill the intellectual vacuum. Second, empirical studies on the subject indicate a shortage of consistent measuring and testing instruments for the determinants of AI perception. Finally, there is a clear lack of cross-cultural and cross-regional comparisons of perceptions of AI use in the workplace.

3 Research design

3.1 Hypotheses

Based on the literature review on socio-demographic, behavioral, and cross-cultural factors influencing technological adoption in the workplace context, this study formulates a series of hypotheses (H) to answer RQ3, 4 and 5.

In Fig. 1A, income, male gender, business major, and school year are hypothesized to correlate positively with the dependent variable, attitude toward EAI use in the workplace (H1–H4). Meanwhile, self-rated knowledge regarding EAI and religiosity are hypothesized to correlate negatively with the dependent variable (H5, H6). Finally, different regions are hypothesized to have varying effects on the dependent variable (H7). In Fig. 1B, income, male gender, business major, and school year are hypothesized to correlate positively with the dependent variable, self-rated familiarity with EAI (H8–H11), while the cross-cultural factors of region and religiosity are hypothesized to be non-significant (H12–H13).

Fig. 1 A Hypotheses on the correlates of attitude toward automated management with EAI. B Hypotheses on the correlates of self-rated knowledge regarding EAI

3.2 Study site and data collection

Regarding the study site, Ritsumeikan Asia Pacific University (APU) in Beppu, Oita, is Japan’s largest international campus, with students coming from 94 countries around the world as of the academic year 2021 (APU Website 2021). International universities such as APU play a pivotal role in the internationalization of the Japanese workforce (Ota 2018). Foreign graduates of APU are not simply adding a multicultural face to a once insular and homogeneous workforce; they are replenishing the ranks of professional labor in a nation experiencing a serious decline in its birth rate and a growing shortage of knowledge workers (Ota 2018). As a result, APU’s foreign graduates are in high demand. However, while this may seem like a win-win situation for all parties involved, foreign students and graduates tend to display higher levels of anxiety over the prospect of cultural assimilation in the Japanese workplace, a culture known for its inflexibility, compliance and paternalism (Nguyen et al. 2019).

The survey was distributed via a link in 14 online classes from July 15 to December 10, 2020. Prior to taking the survey, participants read a consent form providing background on our research project exploring the social and ethical implications of Emotional AI in cities. Importantly, to gauge participants’ pre-existing knowledge, no definition of the technology was initially provided. The respondents were also informed that the project was conducted in full compliance with research ethics norms, specifically the codes and practices established in the Code of Conduct for Scientists issued by the Science Council of Japan on January 25, 2013, and APU’s Research Code of Ethics. All responses were anonymized, participation was voluntary, and participants could leave the survey at any point. We explained that consent was given by answering the survey. The code and data of this study are open-source, following the open science framework (Vuong 2020).

3.3 Data treatment

Table 1 presents the method of data treatment for each variable. The Variables column contains the notations, i.e., how the variables appear in the model equations. The Description column contains information on the quantities the variables aim to measure. The Remarks/Survey questions column contains information on the measurement instrument, i.e., the survey questions and how they are combined to create a measurement for each variable.

Table 1 Explanation of the data treatment procedure

For the outcome variables, Cronbach’s alpha values for Attitude and Familiarity were calculated to check whether the questions measured the same construct. Both values were acceptable: Attitude’s alpha = 0.61 and Familiarity’s alpha = 0.78. In the case of Attitude, when the question on whether a person is worried that the use of AI in the workplace will threaten their autonomy is removed, the alpha value increases to 0.7. Nonetheless, as autonomy is such an important aspect of work, one in which many cultural differences can be explored, we decided to include this question in our measurement of attitude toward automated management. Cultural notions of autonomy in the workplace are particularly relevant considering the cultural disposition of Asians toward consensus and collectivity as opposed to the Western affinity for individualism (Henrich 2020).
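For readers who wish to reproduce this reliability check, it can be carried out with the psych package in R. The sketch below is illustrative only; the data frame and item names are hypothetical stand-ins for the survey items.

```r
# Minimal sketch of the reliability check, assuming the Attitude and
# Familiarity survey items are stored in data frames with one column per
# item (attitude_items and familiarity_items are hypothetical names).
library(psych)

alpha_attitude    <- psych::alpha(attitude_items)
alpha_familiarity <- psych::alpha(familiarity_items)

alpha_attitude$total$raw_alpha      # reported in the text as 0.61
alpha_familiarity$total$raw_alpha   # reported in the text as 0.78

# The "alpha if item dropped" table shows how removing the autonomy
# question would raise Attitude's alpha (to roughly 0.7 in the text)
alpha_attitude$alpha.drop
```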

3.4 Bayesian multi-level analysis

3.4.1 Model construction

Following recent guidelines on conducting Bayesian inference (Aczel et al. 2020; Vuong et al. 2018), twelve models are constructed that gradually expand the number of variables and levels. They are then fitted to the data using the Hamiltonian Monte Carlo simulation approach with the bayesvl R package (Vuong et al. 2020). All Bayesian priors are set to the default, ‘uninformative’ option (McElreath 2020). Each model is represented by an equation in Table 2, which seeks to establish a mathematical relationship among the variables. For example, Equation No.1 models the linear relationship between attitude toward the use of EAI for automated HR management, the dependent variable, and four independent (explanatory) socio-demographic variables: income level, school year, biological sex, and school major.
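In general form, Equation No.1 as described can be reconstructed as follows (a sketch based on the description above; Table 2 in the original holds the exact notation):

```latex
\begin{aligned}
\mathrm{Attitude}_i &\sim \mathrm{Normal}(\mu_i, \sigma) \\
\mu_i &= \beta_0 + \beta_{\mathrm{Income}} \cdot \mathrm{Income}_i
        + \beta_{\mathrm{SchoolYear}} \cdot \mathrm{SchoolYear}_i
        + \beta_{\mathrm{Sex}} \cdot \mathrm{Sex}_i
        + \beta_{\mathrm{Major}} \cdot \mathrm{Major}_i
\end{aligned}
```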

Table 2 Equations of the models

Model 10 is the most complex, as it is a multi-level model in which the Region variable functions as the varying intercept and all other variables are present. Multi-level modeling also helps improve estimates under sampling imbalance and explicitly studies the variation among groups. Partial pooling (or adaptive pooling) is another advantage of multi-level modeling: it produces estimates that are less underfit than complete pooling and less overfit than no pooling (McElreath 2020; Spiegelhalter 2019). It is worth noting that models with both religion and religiosity variables are nonlinear to avoid confounding effects.
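The study itself used the bayesvl package; as an accessible approximation, a varying-intercept model of the kind described for Model 10 could be specified in brms syntax as below. The data frame and variable names are hypothetical stand-ins for Table 1’s notation, and this sketch omits the nonlinear religion/religiosity structure.

```r
# Sketch of a varying-intercept ("partial pooling") model analogous to
# Model 10, written with brms rather than the bayesvl package used in
# the study; `d` and the column names are hypothetical stand-ins.
library(brms)

model10 <- brm(
  Attitude ~ Familiarity + Income + Sex + Major + SchoolYear +
             Religiosity + (1 | Region),   # Region as the varying intercept
  data   = d,
  chains = 4, iter = 5000, warmup = 2000,  # settings reported in Section 4.2.1
  seed   = 42                              # default (flat) priors, mirroring the paper
)
```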

To guard against overfitting and to select the model that best fits the data, the models are compared in detail using the Pareto smoothed importance-sampling leave-one-out cross-validation (PSIS-LOO) approach, and their weights are computed to assess the plausibility of each model (La and Vuong 2019; Vehtari and Gabry 2019).
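With fits produced as above, the PSIS-LOO comparison and the three weighting schemes reported in Section 4.2.2 can be computed with the loo package. This is a sketch, assuming brms fits such as model9 and model10 exist.

```r
# Sketch of the PSIS-LOO model comparison (model9 and model10 are
# assumed brms fits following the previous sketch).
library(loo)

loo10 <- loo(model10)
loo10$diagnostics$pareto_k   # all k < 0.5 indicates reliable estimates

loo_list <- list(loo(model9), loo10)
loo_model_weights(loo_list, method = "stacking")              # Bayesian stacking
loo_model_weights(loo_list, method = "pseudobma", BB = TRUE)  # pseudo-BMA + bootstrap
loo_model_weights(loo_list, method = "pseudobma", BB = FALSE) # pseudo-BMA
```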

4 Results

4.1 Descriptive statistics

First, answering RQ1 on the general concerns of job-seekers regarding EAI, we presented students with a list of nine ethical problems with AI proposed by the World Economic Forum (Bossman 2016) and asked them to choose their top three. Interestingly, Fig. 2 shows that the top concern for international students is essentially human–machine interaction, i.e., “Humanity. How do machines affect our behavior and interaction?”, with 561 responses (55.3%). The second greatest concern, with 488 responses (48.1%), is the security of these smart systems, i.e., “how do we keep AI safe from adversaries?”. Third is unemployment, with 467 responses (46%), and fourth is the unintended consequences of deploying AI, with 445 responses (43.8%). Although previous studies on AI integration at work have pointed out that people are not concerned about AI replacement, at least in the short term (Pinto dos Santos et al. 2019; Sarwar et al. 2019), our survey results provide a more nuanced understanding of people’s perception of the various risks of automated management systems.

Fig. 2 WEF’s nine ethical concerns regarding AI ranked by the students

Second, concerning RQ2 on the level of awareness of EAI, when asked to choose the most appropriate definition of this technology to the best of their knowledge (Fig. 3A), nearly 79% of students chose intelligent machines/algorithms that attempt to read (44.7%) or display (34%) the emotions of humans, definitions roughly consistent with EAI and affective computing (McStay 2018; Richardson 2020; Rukavina et al. 2016). Meanwhile, 21.3% of the respondents chose AI that displays human consciousness.

Fig. 3 Familiarity of the respondents with EAI. A Students choose among three definitions of EAI. B Students rate their familiarity with the topic

Table 3 shows 52% of the respondents hold a worried view toward automated management, and 51% rated themselves below average regarding AI knowledge.

Table 3 Key characteristics of the surveyed sample

4.2 Technical validation

4.2.1 Convergence diagnostics

After running the MCMC analyses for all models (4 chains, 5000 iterations, 2000 warm-ups), all Rhat values equal one (1), and all effective sample sizes (n_eff) are above 1000, suggesting a good fit with the data. The detailed results and visualizations of the diagnostic tests are in the Supplementary file. As an example, Fig. 4 presents the mixing of the Markov chains after fitting Model 10 to the data (see Table 2 for the model equations). The Markov chains mix well, and there is no divergent chain. This indicates that the coefficients reliably converge to a range of values, i.e., the posterior distribution, which we explore in the following subsections.
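As an illustration, the same diagnostics can be read off a brms fit as follows (the study used bayesvl’s own diagnostic plots; this sketch assumes the model10 object from the earlier sketch).

```r
# Sketch of the convergence checks described above.
library(brms)

summary(model10)                    # prints Rhat and effective sample sizes
max(rhat(model10))                  # should be ~1.00 for all parameters
min(neff_ratio(model10))            # n_eff relative to total draws
mcmc_plot(model10, type = "trace")  # visual check that the chains mix well
```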

Fig. 4 The mixing of the Markov chains after fitting Model 10 with the data

4.2.2 Model comparison and robustness check

We run the PSIS-LOO test and find that all Pareto k estimates are good (k < 0.5) for all models, suggesting a good fit with the data. In Bayesian statistics, the plausibilities of models with the same outcome variable given the data are represented by weights, which must add up to 1. Three types of weights are used and reported as follows: pseudo-BMA without Bayesian bootstrap; pseudo-BMA with Bayesian bootstrap; and Bayesian stacking. Model 10 starkly outperforms the other models with Attitude as the outcome variable (0.999; 0.924; 0.833). Meanwhile, among models with self-rated familiarity with EAI as the outcome variable, Model 5 fits the data best (0.821; 0.672; 0.685). We have also conducted a robustness check on the prior sensitivity of Models 10 and 5; tweaking the Bayesian priors results in no real difference in the posterior distributions, suggesting the models are robust (see Supplementary file).
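A prior-sensitivity check of this kind can be sketched by refitting with a different prior and comparing posterior summaries; the prior below is illustrative only, not the one used in the study.

```r
# Sketch of a prior-sensitivity check: refit Model 10 with a mildly
# informative prior on the regression coefficients and compare posteriors.
library(brms)

model10_alt <- update(model10, prior = set_prior("normal(0, 1)", class = "b"))

posterior_summary(model10)      # default-prior fit
posterior_summary(model10_alt)  # similar estimates would indicate robustness
```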

4.3 Major findings

4.3.1 The multi-faceted nature of attitude toward EAI as automated management

The best performances belong to Models 5 and 10. Indeed, attitude toward EAI-enabled HR management is a multi-faceted issue, as it is best predicted from a host of factors: not only socio-demographic and behavioral factors but also cultural and political factors (religion, religiosity, and region) (Model 10). Here, we show that cross-cultural factors are indeed important in predicting attitude toward automated management, thus validating hypotheses H6 and H7 (RQ3 and RQ4). This result contradicts theories such as the Technology Acceptance Model, the Theory of Planned Behavior, and the Theory of Reasoned Action, which prioritize only cost-benefit calculations in predicting human behavior (Davis 1989; Taherdoost 2018). Self-rated familiarity with EAI, however, is a less complicated issue. It is best predicted from basic factors such as sex, school year, income, and study major (Model 5), thus validating H12 and H13. Figure 5 shows that business major and sex are the most predictive of self-rated familiarity with AI (validating H9 and H10), while the effects of income and school year are ambiguous (invalidating H8 and H11).

Fig. 5 Highest density interval (HDPI) plot of the posterior distribution of income, school year, sex, and major to predict self-rated familiarity with EAI from Model 5

4.3.2 Determinants of attitude toward EAI-enabled automated management

4.3.2.1 Sex, income, school year, major, familiarity

Figure 6 shows the regression results of Model 10, which has the highest goodness-of-fit among the class of models with attitude as the outcome variable. Here, students with higher income, men, business majors, and those in higher school years are likely to have a less worried outlook toward EAI-enabled HR management, thus validating H1–H4. This is consistent with the results of Models 1, 4, 9, and 10 (see Table 2 for the model equations, and the Supplementary file for the details of each model’s goodness-of-fit and posterior distribution). Regarding income, an explanation might be that students with higher income are likely to have higher educational attainment (Aakvik et al. 2005; Blanden and Gregg 2004) and to end up in high-status occupations (Macmillan et al. 2015); thus, in all likelihood, they are more likely to become future managers who will use these AI tools to recruit and monitor their employees.

Fig. 6 Density plot from Model 10 for five variables: familiarity, income, major, school year, and sex

Regarding the sex variable, our results align with the literature showing that being male is correlated with higher perceived technological self-efficacy (Cai et al. 2017; Huffman et al. 2013), validating H2 and H9. The fact that being a business major is correlated with less anxiety about EAI-enabled HR management might be a product of the lack of emphasis on AI’s ethical and social implications in business education. Another reason may be that hoping to become a manager inclines a person to adopt the company position, thus seeing management supervision only in terms of productivity and performance results. Future studies are required to understand the underlying cause.

Model 10 shows that students who have higher self-rated familiarity with AI tend to view EAI-enabled HR management more positively (rejecting H5). This result contradicts a Saudi Arabian study of medical students (Bin Dahmash et al. 2020), which found anxiety toward using AI was correlated with a higher self-perceived understanding of this technology. This divergence from the literature can be explained by the diversity of the surveyed population, which spans 48 countries in 8 regions and many possible future professions.

4.3.3 Religions and religiosity

Our analyses show that religiosity indeed negatively correlates with attitude toward EAI-enabled HR management, supporting H6. First, Model 2b shows that atheism positively correlates with attitude (Fig. 7B), while students from a religious background are found to express more concern (Fig. 7A). Curiously, Buddhist students are the least likely to have a worried outlook toward non-human bosses, while Muslim students are the most likely to have a negative attitude: the coefficient β_Islam_Attitude (mean = −0.10, sd = 0.09) is distributed mostly on the negative side. Christian students are more ambiguous, but the majority of the β_Christianity_Attitude distribution is also on the negative side (mean = −0.10, sd = 0.09).

Fig. 7 A Density plot of the Religion variable from Model 10: religious students are likely to have a worried attitude toward EAI-enabled management. B HDPI interval plot of the Atheism variable from Model 2b: non-religious students are less likely to worry about EAI-enabled management

The higher religiosity of Muslim and Buddhist students appears to make these students more anxious about AI tools in human resource management. Our computation shows β_Islam_Attitude (mean = −0.16, sd = 0.10) and β_Islam_Religiosity_Attitude (mean = −0.24, sd = 0.18). There is a similar trend for Buddhist students (β_Buddhism_Attitude: mean = −0.05, sd = 0.07; β_Buddhism_Religiosity_Attitude: mean = −0.15, sd = 0.19). However, the Christian respondents’ high religiosity seems to generate a slight shift of the distribution toward the positive range and makes the distribution wider: the mean value of β_Christianity_Attitude is −0.10 (sd = 0.09), while the mean value of β_Christianity_Religiosity_Attitude is −0.05 (sd = 0.17).
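Statements such as “distributed mostly on the negative side” can be made precise by computing posterior tail probabilities from the draws. A sketch follows; the coefficient names depend on how the model codes the religion factor, so the names below are hypothetical.

```r
# Sketch: quantify how much of a coefficient's posterior mass is negative.
library(brms)

draws <- as_draws_df(model10)            # one row per posterior draw
mean(draws$b_Islam < 0)                  # P(beta_Islam < 0 | data)
mean(draws$b_Christianity < 0)           # P(beta_Christianity < 0 | data)
quantile(draws$b_Islam, c(0.05, 0.95))   # 90% credible interval
```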

4.3.4 Region

Figure 8 shows that the attitudes toward AI use in HR management of respondents from different geographical regions also differ (validating H7). Respondents from East Asia have the lowest anxiety (a_Region[Eastern Asia] = 1.78; sd = 0.18), while respondents from Europe are the most likely to worry about the use of EAI in the workplace (a_Region[Europe] = 1.36; sd = 0.26). Such findings might be rooted in cultural differences among the regions, as well-established results from the psychology literature show stark differences between collectivist and individualist cultures (de Oliveira and Nisbett 2017). In a collectivist culture such as those of East Asia, for example, concerns about privacy and self-autonomy are less pronounced than among Western counterparts (Whitman 1985). In addition, notably, students from less developed regions (Africa, Central Asia, Oceania) also tend to have a lower level of anxiety toward being managed by AI (Fig. 9).

Fig. 8 Interval plot of the Region variable: (1) Africa; (2) Central Asia; (3) East Asia; (4) Europe; (5) North America; (6) South-East Asia; (7) South Asia; (8) Oceania

Fig. 9 Comparing the distribution of attitudes toward EAI-enabled management in three major East Asian countries (China, Japan, Korea) and Europe/North America

5 Discussion

5.1 Implications

Besides being among the few cross-cultural empirical studies on the perception of EAI tools in HR management, this paper finds that being managed by AI is the greatest AI risk perceived by international future job-seekers, which answers RQ1 on the concerns of future job-seekers regarding AI as managers versus AI as their replacement. Moreover, the analytical insights highlight the urgent need for better education and science communication concerning the risks of AI in the workplace. As our study shows in answering RQ2 on the level of awareness of EAI among future job-seekers, although nearly 79% picked a close definition of EAI (Fig. 3A), when students were asked to rate their level of familiarity with EAI, roughly 40% rated themselves as unfamiliar or very unfamiliar and 36.7% were unsure of their level of knowledge (Fig. 3B). Finally, in exploring the effects of various factors on attitude toward automated management (RQ3, 4, 5) via the Bayesian MCMC approach, this study also highlights various cross-cultural and socio-demographic discrepancies in concern about and knowledge of EAI-enabled management of the workplace that must be bridged to bring more equality to the AI-augmented workplace. Table 4 below summarizes the decisions regarding each hypothesis and the relevant literature.

Table 4 A summary of decisions regarding the hypotheses and relevant literature examined in this study

5.1.1 Being managed by AI is the greatest cause for concern

Answering RQ1, the descriptive statistics indicate that being managed by AI and interaction with AI are major concerns for the respondents. Table 3 shows that 52% of future job-seekers express concern about EAI-enabled HR management. Figure 2 shows that human–AI interaction is the top ethical concern, with nearly 55% of total responses, while job loss to AI ranks only third, with 46%. These insights will prove crucial when communicating the risks of AI in educational settings. As the workplace moves toward a more invasive form of neo-Taylorism in which AI tools seek to go beyond the exterior of the physical body and datafy our emotional lives (Marciano 2019; Richardson 2020), our results suggest young job-seekers have begun expressing greater concern about AI supervising and making decisions about their performance and career advancement than about AI replacing their jobs.

5.1.2 Biases and privileges

RQ3 and RQ4 are inquiries into the effects of socio-demographic and cross-cultural factors on self-rated knowledge regarding AI and attitude toward automated management. Here, Models 1, 4, 9, and 10 consistently show that being male and being from a higher-income background are correlated with less anxiety toward automated management systems (see Fig. 6, H1 and H2). These factors are also correlated with higher self-rated knowledge of AI (Fig. 5, Model 5). Moreover, answering RQ5, Fig. 6 shows that self-rated familiarity with AI has a positive correlation with attitude toward AI’s use in the HR setting (β_Familiarity_Attitude: mean = 0.21, sd = 0.04). This finding implies that students who rated themselves as having more knowledge of AI might be unaware of the biases in, and inaccuracy of, these emerging technologies. Taken together, these facts indicate that many students might be ignorant of the ways in which social biases and privileges can lead to harmful uses of EAI in the workplace, as shown in various studies on algorithmic bias (Rhue 2019; Crawford 2021; Moore and Woodcock 2021; Buolamwini and Gebru 2018). Even though the problem of algorithmic bias has now moved to the center of public discourse in Western media (Singh 2020), when it comes to a multi-national sample, this study indicates a clear lack of knowledge, as 51% of the respondents rated themselves below average in AI knowledge (Table 3).

Past studies have shown that student engagement with ethics is contingent on several factors: first, the type of curriculum adopted by higher education institutions (Culver et al. 2013); and second, how the concept of bias is communicated and understood through the course literature. As such, our study indicates that university curricula would strongly benefit from the inclusion of courses on the social and ethical implications of AI in the workplace, especially in the business major, which this paper has shown to correlate with less concern about AI in HR management (see Fig. 6 and H3). This would correct students’ misconceptions and enrich their understanding of the positive and negative potential of such technologies. Given the strong emphasis on the importance and advantages of acquiring data analytics skills in the current curricula of AACSB-accredited business schools (Clayton and Clopton 2019), ethical training and critical thinking about these technologies should be integral to the higher-education curricula that prepare younger generations for the quantified workforce.

5.1.3 Bridging the cross-cultural discrepancies

Answering RQ4 on the effects of socio-demographic and cross-cultural differences, this study shows that people from different socio-cultural and economic backgrounds do tend to form different perceptions of emerging technologies (validating H1, H2, H6, H7). Here, it is worth mentioning that previous studies show an employee’s awareness of the presence of smart surveillance technologies negatively correlates with organizational commitment (Ball 2010; Brougham and Haar 2017). These tendencies, combined with the risk of AI being misunderstood (Wilkens 2020), are important obstacles to overcome before such technologies can be harnessed in ways that safeguard workers’ best interests.

Our analysis also shows that people from economically less developed regions (Africa, Oceania, Central Asia) exhibit less concern about EAI-enabled management, while people from more prosperous regions (Europe, Northern America) tend to be more cautious. Interestingly, however, the economically prosperous region of East Asia correlates with less anxiety toward EAI-enabled HR management. Our data in Fig. 9 show that 63.62% of Japanese, 56.32% of South Korean, and 41.77% of Chinese respondents express a more accepting attitude (an average score of 3 or more on the attitude scale), while an overwhelming majority of 75% of European and North American respondents hold a worried attitude toward being managed by AI. Since these East Asian countries have different political systems, the consistency of accepting attitudes toward EAI across them could be explained by a common factor: Confucianism. Specifically, there might be antipathy toward individual rights in Confucian cultures (Weatherley 2002), as well as a stronger emphasis on harmony, duty, and loyalty to the collective will (Vuong et al. 2020; Whitman 1985). Finally, in Confucian culture, there is much more acceptance of intervention by higher authority, as it is thought of as a source of moral guidance (Roberts et al. 2020).

Such cross-regional and cross-cultural differences prompt us to further investigate the differences among the top ten countries represented in our sample. Controlling for all other socio-demographic and behavioral variables, the Japanese have the strongest correlation with an accepting attitude toward EAI in HR management, followed by the Vietnamese, Chinese, and Koreans (see the Supplementary file, Model 11). Indians, on the other hand, correlate with the highest level of anxiety toward automated management, followed by their Bangladeshi and Indonesian counterparts. The Japanese participants’ lack of reservation about automated management is unsurprising given the extent to which workplace norms and conventions dictate unquestioning obedience, loyalty and mandatory volunteerism (Stukas et al. 1999), especially in relation to managerial superiors (Meek 2004; Rear 2020). For example, it is an unspoken convention in Japanese corporate culture that no one leaves the office before the kacho (office head) does. Our findings suggest that, as a more invasive form of automated management, EAI may exacerbate anxiety amongst foreign workers in Japan, opening up the possibility of conflict with Japanese managers who are culturally conditioned to value conformity and loyalty, and to punish ‘attitudinal diversity’. As the Japanese saying goes, “出る杭は打たれる” (deru kui wa utareru: the nail that sticks up gets hammered down) (Sana 1991; Luck 2019).

The empirical findings on such stark cross-cultural and cross-regional differences could help educators, businesses, and policymakers shape action programs that address stakeholders’ concerns, or lack thereof, about the future of AI-driven work.

5.1.4 Ethical and legal implications

Our analysis has highlighted two main areas of ethical and legal concern. First, algorithmically driven management systems measure performance against established benchmarks of what others have done in the past and what a company believes a worker should achieve in the present. Yet EAI can only quantify productivity statistics; it cannot take into account human particularities such as attitudinal diversity, gender differences or cultural idiosyncrasies. Automated monitoring systems are unlikely to know if a worker is ill, physically or mentally disabled, experiencing domestic problems or simply having a bad week. Automated management thus runs the risk of diminishing the need for the once valued interpersonal communication skills of an HR manager. Second, while technologically mediated workplaces can provide added perks such as flexible working hours, they also run the risk of eroding labor relations through the ethical and legal grey areas surrounding workers’ rights to access and control the personal data gathered through automated management systems. These points are particularly salient as traditionally homogeneous workplaces such as Japan’s undergo greater cultural hybridity.

Fortunately, some policy and legislative efforts are underway. For example, the Switzerland-based UNI Global Union has established a set of ten principles for ethical AI along with ten principles for protecting workers’ data rights, seeking to promote more inclusive practices in the future workplace (Colcough 2018; UNI Global Union 2021). More recently, the European Union’s (EU) draft AI regulations have identified certain uses of AI tools as ‘high-risk’ practices, including the use of AI for recruitment, promotion, performance management, task allocation and workplace monitoring (European Commission 2021). Additionally, as of April 14, 2021, a leaked EU draft proposal titled “Regulation on a European approach for artificial intelligence” seeks to regulate the collection of non-conscious data by emotion-recognition AI systems (Vincent 2021). The proposal requires that “any natural person whose personal data is being processed by an emotion-recognition system or a categorization system shall be notified that they are exposed to such a system” (European Commission 2021, p.34).

Similar to how early twentieth-century trade unions’ criticism of Taylorism led to the enactment of labor laws safeguarding the interests of factory workers, our analysis contributes to an emerging body of literature calling for greater regulatory scrutiny of algorithmic management and workforce analytics. This article opens the door for future researchers to explore strategies and practices to empower workers’ collective bargaining power to ensure transparency in how their data is collected and used by AI platforms and their employers. Given the increase in teleworking practices due to COVID-19 and the fact that many business enterprises are now creating their own platforms to monitor work engagement, concentration and performance levels at a distance (Vallas and Schor 2020), our findings are timely and poignant.

5.2 Limitations and future research directions

This study suffers from several limitations. First, it inherits the limitations of the convenience sampling method: the surveyed population is young students who study on a multicultural, bilingual campus (Nguyen et al. 2021). Although many of the respondents will find a job in Japan, the diversity in career options and locations has allowed us to discuss cultural expectations outside of Japan. According to the statistics on graduates of the academic year 2020, 56.8% (684) of 1204 graduates reported finding a job, 6.6% (80) continued to higher education, and 36.6% (440) pursued other options, including returning to their home countries. Regarding successful job-seekers, 85.6% (256/299) of international students obtained an offer, and 36% (94/256) of those found a job outside of Japan; of the Japanese graduates, 428 out of 441 job-seekers obtained an offer (not specified where, but presumably mostly in Japan) (APU 2021). Second, some regions such as East Asia and South-East Asia are over-represented in the sample, which is corrected for by the partial pooling of the Bayesian multi-level analysis. As such, the results should be interpreted in this context. Future studies can further explore the attitudes of working professionals regarding Emotional AI as well as the causal mechanisms behind the correlations established in this study. For example, in-depth interviews and controlled experiments with respondents from diverse backgrounds could explain the influences of educational background, industry, work position, entrepreneurial experience, religious background, and geographical region.

6 Conclusions

Our study suggests three fundamental concerns for future job-seekers who will be governed and assessed, in small or large ways, by non-human resource management. The first is privacy. The increased accuracy of emotion-sensing biometric technologies relies on a further blurring of the personal/employee distinction and the harvesting of real-time subjective states. The invasive disciplinary gaze of emotion-recognition technologies does not allow for backstaging. Rather, it exposes and makes vulnerable an employee’s affective inner self not only to top-down surveillance but also, in the case of workplace wellness programs, to peer-to-peer horizontal surveillance conflated with communal care initiatives. The second is a concern for explainability. As EAI and its machine learning capabilities move toward greater levels of complexity in automated thinking, many technologists believe that it will not be clear even to the creators of these systems how decisions are reached (Mitchell 2019). Finally, at a deeper biopolitical level, EAI represents an emerging era of automated governance where Foucauldian strategies and techniques of control are delegated to software systems. Instead of physically monitoring and confining individuals in brick-and-mortar enclosures or enacting forms of control based on the body’s exteriority, the ‘algorithmic governmentality’ of emotion-sensing AI ultimately targets the minds and behavioral processes of workers to encourage their productivity and compliance (Mantello 2016). Our empirical results suggest that, left unregulated, EAI will only exacerbate tensions in labor relations, especially conflicts that may arise due to culture, gender, social class, ethnicity and attitudinal disposition.

This study advances earlier biopolitical understandings of EAI as suggested by proponents such as McStay (2018). It does so by pointing out a darker discursive cloud that hangs over all forms of biopower: namely, a proprietary logic that makes life its referent object yet is willing to compromise the human element to maximize the productivity of populations. In conclusion, the empirical cross-cultural and socio-demographic discrepancies observed in this paper are presented to promote awareness and discussion, and to serve as a platform for further intercultural research on the ethical and social implications of EAI as an emerging tool in non-human resource management.