Introduction

Some optimists see Artificial Intelligence (AI) as greatly improving lives, especially for those who have been historically marginalized. Some pessimists believe AI will harm society by causing mass unemployment, increased bias, and excessive surveillance, which will particularly affect groups that have been historically marginalized (U.S. Senate, Schumer 2023). From an employment standpoint, intelligent machines are expected not only to perform a greater portion of tasks currently done by humans (Ford 2015; Frey and Osborne 2017) but also to exceed human-worker performance in nearly all jobs (Grace et al. 2018). While adoption of AI technology is most visible in administrative and routine tasks, a 2020 survey showed that 67% of large commercial organizations use AI to support their decision-making (Pegasystems Inc. 2020), and the proportion has doubtless increased since then. The private sector is not alone in employing AI for decision automation. In governmental decision-making, algorithms are being used in the United States to evaluate who qualifies for parole from prison (Dressel and Farid 2018), to inform sentencing decisions in courts (Angwin et al. 2016), to design transportation systems, and for many other tasks (Levy et al. 2021). Rapidly advancing large language models are predicted to affect around 80% of the U.S. labor force and to automate at least 10% of its work tasks (Eloundou et al. 2023). Nor will automation be limited to routine work: with AI systems such as ChatGPT for text creation and DALL-E for image generation, occupations once imagined as inherently “human,” such as journalists, writers, and artists, are already seeing such systems deployed routinely (Manjoo 2020; Roose 2022).

While several studies predict that in the near future almost half of American jobs will be replaced by automation or other forms of AI systems (Grace et al. 2018; Frey and Osborne 2017), the general public shows little excitement about this potential future. Recent public opinion research shows that 50 percent of Americans believe robot automation of many jobs will have a negative impact on society (Johnson and Tyson 2020). People are averse to AI in the majority of roles, expressing slightly more comfort with AI in the subordinate role of an assistant (Mays et al. 2021).

Studying public opinion provides important insights for the future development and governance of AI technology (Zhang and Dafoe 2019). People globally believe AI will be transformative, but they are much less certain about whether its implementation will be good or bad for society (Kelley et al. 2021). Dominant narratives portray AI in dystopian scenarios (Cave et al. 2018), which may impede acceptance of AI in more domains. Such scenarios have already affected legislation concerning facial recognition, for instance, at the local, national, and international levels (Rabinowicz 2023; Madiega and Mildebrath 2021).

The present work builds on prior AI-perception research to explore attitudes about AI in various domains. We examine whether people’s comfort with AI in different occupations aligns with experts’ predictions of their potential automation. We also explore whether individual traits, such as locus of control, perceived technological competence, and socioeconomic status, influence people’s attitudes about AI in workplaces. This research contributes to the scarce and often overlooked literature on public perspectives on AI development. As demand grows for citizen engagement in designing ethical AI (Balaram et al. 2018), our research provides necessary insights for this technology’s future development and potential in various occupations.

Literature review

AI domains

Previous research showed that some occupations are more likely to be automated than others (Frey and Osborne 2017; Grace et al. 2018; Geiger 2019). Among professions on the lower end of automation probability are jobs in the financial sector, education, healthcare, and media. While forecasts do not expect most of these jobs to be fully automated any time soon, a large share of tasks within these professions is already performed by AI.

Low automation-probability domains

The financial sector is a salient example of rapid implementation of AI systems. Smart algorithms are used for anti-money laundering screening, credit decisions, financial advising, and trading (OECD 2021). There is currently little research examining the public’s views on automation in finance. One empirical exception is Rebitschek et al. (2021), who showed that people had high expectations of algorithmic decision-making systems and were not willing to accept their errors in credit scoring.

Within the educational sector, AI systems have been widely used for student testing and for recommending personalized learning paths (Jimenez and Boser 2021). Public perception studies show that people are comfortable with AI systems performing the former but not the latter. An AI assistant for grading or class scheduling is perceived as more acceptable, while an AI teacher or advisor is viewed as less desirable (Mays et al. 2021; Park and Shin 2017).

In healthcare, AI technology is used in wearable devices, diagnostics, patient care, and even surgeries (Shah and Chircu 2018). Research has shown that medical AI decisions are perceived as more trustworthy than human judgment (Baldauf et al. 2020; Ghafur et al. 2020; Araujo et al. 2020). People also express positive sentiment about the use of AI systems, such as chatbots, in therapy (Sweeney et al. 2021; Abd-Alrazaq et al. 2021; Bendig et al. 2019). Yet despite the current rise of AI implementation in surgical procedures, people are not willing to embrace AI surgeons (Stai et al. 2020; Bristows LLP 2021).

As for the media industry, large outlets such as Forbes and The Washington Post regularly use AI-powered systems for moderating content and generating news headlines and entire articles (Schmelzer 2019). AI journalism has generally been perceived favorably by the public (Goni and Tabassum 2020; Jia 2020; Hofeditz et al. 2021). Past literature also suggests that automatically generated news content is regarded as more objective and capable of reducing hostile media bias (Clerwall 2014; Cloudy et al. 2021).

High automation-probability domains

Occupations in manufacturing, construction, the service industry, and administrative support are predicted to have a higher probability of full automation (Frey and Osborne 2017). Today, 60% of manufacturers use AI to improve product quality, inventory management, and maintenance (Dilmegani 2022). AI-powered robots and self-driving machines are used in construction (Srivastava 2022). Interestingly, acceptance of AI systems that are physically co-located with humans, such as at construction sites or in supermarkets, is highly dependent on the quality of human-machine communication (Lewandowski et al. 2020). It has also become more common to encounter AI systems in customer service (Dwivedi et al. 2021), as chatbots and recommendation systems are employed to help customers choose and purchase products. Prior research demonstrated that people have high levels of comfort with AI in assistant roles (Mays et al. 2021; Philipsen et al. 2022; Katz and Halpern 2013). Among the immediate advantages, people emphasize that AI systems help manage the volume of incoming inquiries and speed up the processing of customers’ claims (Aitken et al. 2020; van der Goot and Pilgrim 2020). However, Xu et al. (2020) showed that for high-complexity tasks, people would rather seek help from a human customer service representative than use AI. Further, when compared with human-interfacing modalities (both in-person and mediated), automated customer service interfaces were least liked and trusted (Mays et al. 2022).

Human resources

Human resources (HR) is another domain that has been strongly influenced by the introduction of AI systems. A recent study showed that approximately 45% of surveyed organizations use or plan to use AI algorithms to support their HR activities in the next five years (SHRM 2022). AI systems are used for hiring and firing processes, personalization of employees’ social benefits, and identifying talent, among many other applications (Chevalier 2022). Yet, the literature suggests that people hold less favorable attitudes towards adopting AI technologies in recruiting procedures and perceive AI recruiters as ineffective or less impartial than human interviewers (Acikgoz et al. 2020; Gonzalez et al. 2022; Zhang and Yencha 2022).

The examples above demonstrate that jobs with both high and low automation-probability scores are already heavily affected by the implementation of AI systems. Previous research has mainly focused on evaluating people’s perceptions of discrete task automation, omitting the more holistic question of how people regard AI management at work. With the further development of systems such as enterprise cognitive computing (Tarafdar et al. 2017) and robotic process automation (Valgaeren 2019), future AI systems will arguably become more autonomous, minimizing human influence over their outputs.

Notably, research examining the automation of specific tasks does not show a clear pattern of the public being less accepting of AI in domains with a low probability of automation. Machines are perceived to be better suited for tasks that require more objectivity, such as within healthcare or journalism. However, research also shows that lay people have a poor understanding of what algorithmic decision-making is and how it works (Woodruff et al. 2018). Given that common concerns about using AI in low automation-probability domains include data privacy and algorithmic bias (Latham and Goltz 2019; Ghafur et al. 2020; Bristows LLP 2021), it is not obvious that people will embrace AI in real work situations.

In this study, we explore people’s perceptions of AI as the main actor in occupational domains with high and low automation probability. Our first research question is as follows:

RQ1: Will people’s level of comfort with AI correspond to occupational domains’ high and low automation-probability scores?

Influence of individual traits: locus of control, perceived technological competence, and innovativeness

One possible explanation for why AI acceptance does not neatly correspond with low automation probability is that individual differences play a role in people’s perceptions and use of technology (Correa et al. 2010). Indeed, a number of theoretical models account for personal traits as predictors of technology acceptance, such as the Technology Acceptance Model (Venkatesh 2000) and the Theory of Planned Behavior (Hong 2022). These models focus on perceptions of usage, which may capture some functional dynamics of AI integration. However, typical notions of “use” may be insufficient for AI, as AI is not a usable tool in the same way that computers or mobile phones are. Rather, AI is a pervasive technology that may be interacted with purposefully, as in the case of AI-enabled digital assistants, and also incidentally or involuntarily, as in the cases of automated customer service systems and warehouse logistics. In these latter instances, usage perceptions matter less because people do not have a choice. Understanding antecedents to usage perceptions, such as traits like efficacy (Hsia et al. 2014; Hong 2022) and innovativeness (Rožman et al. 2023), may better inform how organizations can buttress their workers’ feelings of competence around the integration of AI technologies. Therefore, this study explores how traits related to broader notions of control, technology competence, and innovativeness relate to AI attitudes.

Locus of control

Locus of control (LoC) has been used to assess people’s intentions and behavior since as early as the mid-1950s (Rotter 1966). It refers to people’s beliefs about their ability to control outcomes and events in their lives. Rotter (1966) demonstrated that people’s behavior varied depending on whether they perceived outcomes as the result of their own behavior (internal LoC) or of outside factors (external LoC).

LoC has been shown to be a reliable predictor of technology acceptance in human-machine interactions. For example, people with higher LoC had lower acceptance of AI recommendations (Sharan and Romano 2020). In earlier studies of human-machine cooperation, machine operators with high LoC performed worse with autonomous assistance from machines (Takayama et al. 2011). High LoC operators also issued more commands to machines that tended to take charge of a task, resulting in more command conflicts and higher frustration with the machine (Acharya et al. 2018). More recent research showed that people with high LoC trust machines more when they must cooperate with them compared to scenarios when people are solely responsible for the task (Chiou et al. 2021). Chiou and colleagues (2021) suggested that people trusted the machine more in mixed initiative conditions because they perceived it more as a collaborator than a tool. Similarly, Mays et al. (2021) found that people with high LoC were more uncomfortable with AI in powerful roles than with AI as a peer.

Perceived technology competence

The extent to which someone feels competent using technology is another often-examined factor in technology acceptance research. Where LoC is a more generalized efficacy trait, perceived technology competence (PTC) relates to efficacy in a narrower, more specific domain, which may have differing effects. Indeed, Mays et al. (2021) found that higher internal LoC was negatively, while PTC was positively, related to AI perceptions. Further, prior experience with robots reduces people’s anxiety towards them (Nomura et al. 2006), which improves attitudes about robots and their perceived usefulness (Belanche et al. 2019).

However, abundant experience with technology can also trigger more negative attitudes. A cross-cultural study showed that Japanese respondents with more experience in human-machine interaction were more concerned about robots’ societal impact compared to American or Mexican respondents (Bartneck et al. 2006). Similarly, Katz and Halpern (2013) demonstrated that Americans with higher PTC were more skeptical of robots’ effects on society. Research has also shown that technological expertise is a key factor in technology adoption within organizations, as employees’ technological competence is a driver for adopting new IT systems and autonomous machines (Shamout et al. 2022; Venkatesh and Bala 2012).

Innovativeness

According to Diffusion of Innovation theory (Rogers 1995), people range from innovators and early adopters to laggards. Those with higher levels of innovativeness are more likely to adopt new technology compared to others in their social system. Prior research has demonstrated that innovativeness is a consistent predictor of new technology adoption across a range of domains: education (López-Pérez et al. 2019), tourism and hospitality (Ciftci et al. 2021), and the energy sector (Ullah et al. 2020). Additionally, innovativeness may be tied to socioeconomic factors, such as level of education and financial and social status (Shipps 2013; Hong 2022), as well as to age (Lee et al. 2017; Martínez-Miranda et al. 2018).

Based on the literature above, we propose the second research question:

RQ2: To what extent, if any, is comfort with AI influenced by (a) generalized (locus of control) and specialized (perceived technology competence) efficacy traits, (b) innovativeness, and (c) socioeconomic factors?

Method

Design and participants

We conducted an omnibus survey through an online questionnaire administered by a survey company (Qualtrics) from April to June 2021. The larger survey measured attitudes about various emerging technologies, as well as demographic and individual traits. The main variables in this analysis are drawn from a section about perceptions of AI and were determined from the outset of data collection. Qualtrics provides a survey technology platform and partners with over 20 online panel providers to supply a network of diverse, quality participants. Sample quotas on gender, age, ethnicity, education, and income were specified to match those demographic distributions in the U.S. population. Our survey reflects a sample (N = 1150) with quotas on gender (54.1% female), age (M = 50.09, SD = 18.17), race (62.2% White/Caucasian), income (64.3% made $75,000 or less), education (43.9% had some college or less), and employment status (31.6% were employed by an organization). According to the 2020 census data, the final sample’s demographics closely match those of the general U.S. population (U.S. Census Bureau 2020). Compared to the census data, our sample is slightly older (people 55+ years old constitute 30% of the census data versus 44% of our sample) and more educated (people with a 4-year degree constitute 38% of the census data versus 44% of our sample). Additionally, 60% of the general U.S. population was employed in 2020, whereas only 32% of participants in our sample were employed. Descriptive statistics of demographics and the U.S. census comparison can be found in Appendix 1.

Measurement

Comfort with AI

For the dependent variable of level of comfort with AI, we asked respondents to indicate how comfortable they would feel if an AI agent managed various domains. Based on Frey and Osborne’s (2017) automation scores, we chose occupations on the low end of automation probability (stock investments, surgical teams, air traffic control, news desks, therapist, teacher) and on the high end (supermarkets, customer service desks, sewage plant, construction site, personal assistant). We also measured people’s comfort with AI managing HR decisions on firing, hiring, salary compensation, and work scheduling. Responses were given on a five-point Likert-type scale, from “not comfortable at all” to “very comfortable.” The following definition of AI was given preceding these items: “Artificially Intelligent (AI) agents are smart computers that put into action decisions that they make by themselves.”

Individual traits

For the independent variable of locus of control (LoC), we adapted Rotter’s (1966) 13-item LoC scale and reduced it to 6 items (α = 0.76). The items were measured on a five-point, Likert-type scale (“strongly disagree” to “strongly agree”). Higher values corresponded to a higher internal LoC, with statements such as “When I make plans, I am almost certain I can make them work” and “When I try to do something, fate determines what actually happens” (reverse-coded) (M = 3.54, SD = 0.71).

For the second independent variable, perceived technology competence (PTC), we adapted Katz and Halpern’s (2013) scale. PTC was measured on a 7-item, five-point Likert-type scale (“strongly disagree” to “strongly agree”) with statements such as “Other people come to me for advice on new technologies” and “I feel technology, in general, is easy to operate” (α = 0.87, M = 3.59, SD = 0.83).

For our third independent measure, innovativeness, we adapted and shortened Hurt et al.’s (1977) scale. The four items were again measured on a five-point, Likert-type scale (“strongly disagree” to “strongly agree”), including statements such as “I seek out new ways to do things” (α = 0.81, M = 3.56, SD = 0.78).
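To illustrate the scale-reliability checks reported above, the following is a minimal sketch of the Cronbach’s alpha computation in Python. Our analyses were conducted in SPSS; the DataFrame and column names below are hypothetical placeholders, not the study’s actual data or pipeline.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents, columns = items)."""
    k = items.shape[1]                                 # number of items in the scale
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical usage: reverse-code externally worded LoC items first (1..5 scale),
# then compute alpha for the six-item locus-of-control scale.
# df["loc_fate"] = 6 - df["loc_fate"]
# alpha_loc = cronbach_alpha(df[["loc_plans", "loc_fate", "loc_item3",
#                                "loc_item4", "loc_item5", "loc_item6"]])
```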

Several demographic traits were also measured and included in the analysis. In addition to age, gender, income, education, and race/ethnicity, participants were asked about their current employment status. Categories were full-time, part-time, unemployed looking for work, and unemployed not looking for work.

Data analysis

All data analyses were conducted using IBM SPSS Statistics. Descriptive statistics were run and presented for all dependent variables (e.g., AI domains). Additional descriptive and reliability tests were run for all variables included in the analysis. Ordinary least squares regression models were then run to explore the relationships between individual traits and the extent to which they explained participants’ comfort with AI managing various domains. Variables were entered into the models hierarchically, in two blocks: (1) demographics and (2) individual traits.
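As a sketch of this two-block modeling strategy, the example below shows how a comparable hierarchical OLS analysis could be run in Python with pandas and statsmodels. The study’s models were estimated in SPSS; the file name, DataFrame, and column names here are illustrative assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical file of cleaned survey responses

# Block 1: demographic predictors only (hypothetical column names).
block1 = smf.ols(
    "comfort_low_auto ~ gender + age + income + education + race + employment",
    data=df,
).fit()

# Block 2: demographics plus individual traits (innovativeness, internal LoC, PTC).
block2 = smf.ols(
    "comfort_low_auto ~ gender + age + income + education + race + employment"
    " + innovativeness + internal_loc + ptc",
    data=df,
).fit()

# Incremental variance explained by the trait block.
print(block1.rsquared, block2.rsquared, block2.rsquared - block1.rsquared)
```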

Results

Levels of comfort with AI

Overall, participants were not very comfortable with AI’s management across domains (see Fig. 1). To some degree, levels of comfort corresponded with automation probability, though some domains diverged from this pattern. Participants were least comfortable with AI managing therapy, surgical teams, and air traffic control (all lower automation-probability domains). However, people were slightly more comfortable with AI managing news desks and stock investing (lower automation probability) than construction sites (higher automation probability). Customer service (higher automation probability) and teaching (lower automation probability) were equivalent, and participants were most comfortable with AI managing the higher automation-probability domains of supermarkets, sewage plants, and personal assistance.

Fig. 1: Comfort with AI managing various domains.

In terms of HR functions, participants were most comfortable with AI managing employees’ work schedules (see Fig. 2). However, participants’ comfort decreased when it came to AI managing salary and making hiring decisions. The least comfort was observed when considering AI managing firing decisions.

Fig. 2: Comfort with AI managing HR functions.

Predictors of AI comfort across domains

We created indices for perceived comfort with AI in HR functions and in high- and low-automation probability occupations (see Footnote 1). To validate the scales, we conducted Principal Components Analyses (PCAs) treating comfort with AI in HR functions, in high-automation probability domains, and in low-automation probability domains as separate unidimensional indices of 4, 5, and 6 items, respectively. Given that these items were novel, they were subjected to PCA with a varimax rotation. Across all three PCAs, the KMO measure of sampling adequacy was >0.82 and significant at p < 0.001. Only one component was extracted per PCA, explaining between 76 and 92 percent of the variance. Most factor loadings exceeded 0.80, and all exceeded 0.75. For the full statistics of each PCA, see Appendix 2.
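A rough analogue of this validation step is sketched below with scikit-learn, assuming the item responses live in a pandas DataFrame with hypothetical column names. The study’s PCAs (including the KMO statistic and varimax rotation) were computed in SPSS; with only one component extracted, rotation has no effect, so the sketch simply inspects explained variance and first-component loadings.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("survey.csv")  # hypothetical file of cleaned survey responses

# Hypothetical four HR-function comfort items.
hr_items = df[["comfort_hiring", "comfort_firing", "comfort_salary", "comfort_scheduling"]]

# Standardize items and extract components.
z = StandardScaler().fit_transform(hr_items)
pca = PCA().fit(z)

# Share of variance explained per component; one dominant component suggests unidimensionality.
print(pca.explained_variance_ratio_)

# Loadings of the first component (item-component correlations; sign may be flipped).
loadings = pca.components_[0] * np.sqrt(pca.explained_variance_[0])
print(dict(zip(hr_items.columns, loadings)))
```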

A series of hierarchical linear regressions was conducted to explore the individual traits that influenced participants’ levels of comfort with AI in various domains. The dependent variable scales were operationalized by averaging the item scores (HR functions: M = 2.29, SD = 1.02, α = 0.90; high automation probability: M = 2.82, SD = 0.91, α = 0.89; low automation probability: M = 2.30, SD = 0.99, α = 0.89). For per-item statistics, see Appendix 3. Independent variables were entered in two blocks. The first block contained demographic characteristics: gender, age, income, education, race/ethnicity, and employment status. Other individual traits previously found to influence attitudes about technology comprised the second block: innovativeness, internal LoC, and PTC.

Prior to data analysis, the assumptions of multiple regression were tested. For all three models, we accepted that the residuals were normally distributed based on the standardized normal probability (P-P) plot. Homogeneity of residual variance was tested and accepted by plotting the residuals against fitted values and using the Breusch-Pagan test for heteroscedasticity (p > 0.05 for all models). The models were also accepted after testing for multicollinearity using the Variance Inflation Factor (VIF < 5 for all variables across the three models). For plots and reported values for the tested assumptions, see Appendix 4. The regression models are presented in Table 1.
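These diagnostics were produced in SPSS; an equivalent check can be sketched with standard statsmodels utilities, continuing the hypothetical `block2` model from the earlier regression sketch.

```python
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor

# `block2` is the fitted statsmodels OLS from the hierarchical-regression sketch above.

# Breusch-Pagan test: p > 0.05 is consistent with homoscedastic residuals.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(block2.resid, block2.model.exog)
print("Breusch-Pagan p-value:", lm_pvalue)

# Variance Inflation Factors: values below 5 treated here as acceptable multicollinearity.
exog = block2.model.exog
for i, name in enumerate(block2.model.exog_names):
    if name != "Intercept":
        print(name, variance_inflation_factor(exog, i))
```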

Table 1: Influence of individual traits on comfort with AI managing HR functions and with AI in high- and low-automation probability domains.

Altogether, individual traits explained 11–20% of the variance in comfort with AI management across domains. In most models, demographic traits explained the most variance, ranging from 10–15%, while the innovativeness and efficacy traits explained an additional 5–7%. Several predictors were consistent regardless of domain: men, those with higher income, and those with higher perceived technology competence were more comfortable with AI management in all three domains. Those who felt more in control of their lives (i.e., higher internal locus of control) were less comfortable with AI management in every domain. Age had no influence on participants’ comfort with AI in any domain. Race and employment status showed little influence on comfort levels. Finally, education and innovativeness were not significant predictors in any of the models.

Discussion

AI domains

The present research examines public attitudes towards agentic AI in various occupational domains. Our findings suggest that comfort with AI is consistent with Frey and Osborne’s (2017) automation predictions, except in the teaching and construction domains. The higher-than-expected comfort with an AI teacher might be due to the proliferation of remote learning at the time the survey was conducted. The COVID-19 pandemic greatly accelerated technological development in online education, which might have shifted people’s attitudes towards technology in educational settings.

Construction sites received a lower level of comfort among participants than expected. One potential explanation for this finding, which also should be explored in future research, is that people might not be comfortable with AI systems in high-stakes environments for human safety. Research has shown that using robotics and automation in construction can potentially create new dangers to workers’ safety and worsen existing risks at construction sites (Okpala et al. 2022). This interpretation is bolstered by participants’ lower levels of comfort in the other high-stakes domains we asked about: surgery and air traffic control. However, in therapy, deemed a high-stakes domain, discomfort may stem from participants’ perception that an AI is incapable of effectively managing therapy, given its perceived dearth of real-life experience and emotional capacity (Mays et al. 2021).

Thus, the reasons for people’s comfort (or lack thereof) with AI across domains may be multi-faceted. The “automatability” of a role may inform attitudes in some contexts; in others, discomfort may be driven by people’s beliefs about what an AI is (or is not) capable of, and by its perceived threat to human status and thriving (Ferrari et al. 2016). Other factors could include recent technological advancements in a particular domain, as in education, or the potential for dire consequences of introducing AI systems, as in construction. Research has demonstrated that people become resistant to technology whenever it undermines beliefs about their own capabilities (Craig et al. 2019). Perceived inferiority compared to machines also triggers negative views on automation, especially when machines demonstrate autonomous capabilities (Ferrari et al. 2016; Złotowski et al. 2017). Considered together with the influence of individual traits discussed below, people may variably feel threatened by AI based on identity and individual traits, regardless of operational domain.

Individual traits

Locus of control

Our findings suggest that higher internal LoC is significantly negatively correlated with AI comfort, meaning that people who feel more capable of controlling their outcomes were less likely to accept AI in the examined occupations. Interestingly, studies of other types of ICT, such as mobile phones and computers, demonstrated that self-efficacy was a direct predictor of their use and adoption (Chen et al. 2011; Igbaria and Iivari 1995; Turkle 2005). In the case of computers, Turkle (2005) demonstrated that individuals who experienced a sense of powerlessness in their lives tended to favor computer use because it provided them with a sense of control.

Our results are consistent with the previous human-machine interaction literature that incorporated locus of control measurement (Acharya et al. 2018; Chiou et al. 2021) and suggest that AI technology is perceived fundamentally differently from other recent innovations. Individuals who already grapple with feelings of disempowerment in their lives may find AI more appealing, as they have less to relinquish (such as power or control) to AI. Conversely, individuals with a stronger LoC may perceive AI systems as more threatening, particularly when these systems are imposed on individuals in workplace settings.

Perceived technological competence

PTC showed the reverse relationship to LoC: those who perceived themselves as more technologically competent were more likely to be comfortable with AI. Our results are consistent with studies in which AI agents were framed in work situations (Turja and Oksanen 2019; Mays et al. 2021). Similarly, a study by Schoeffer et al. (2022) showed that the amount of information and AI literacy influenced perceived AI fairness and trust in technology. This might suggest that when people are more knowledgeable about how AI operates and have some familiarity with it, they can see its usefulness for work tasks. Given the strong connection between adoption of innovation and employees’ knowledge, technological competence will continue to be an important trait to consider for AI implementation in various industries (Shamout et al. 2022; Venkatesh and Bala 2012).

Demographic and socioeconomic factors

Demographic and socioeconomic factors explained more of the model variance than the self-efficacy traits. Across all domains, male participants and high-income individuals showed more comfort with AI, indicating that more vulnerable populations (women and people with lower income) are less likely to be comfortable with AI technology.

Interestingly, employment status was not a significant predictor of comfort with AI. Statistically significant relationships were found only between full- and part-time employment and comfort in low-automation probability occupations. However, these results should be interpreted with caution, as they do not remain significant after correcting for the number of predictors in our model.

Finally, innovativeness showed no significant association with AI comfort levels in any domain. As innovativeness has been shown to be a strong predictor of the adoption of various information technologies (López-Pérez et al. 2019; Ciftci et al. 2021; Ullah et al. 2020), this finding again points to the unique nature of AI as distinct from other innovations. Currently, the utilization of AI tools in the workplace lacks uniformity: some companies readily embrace these tools, while others prohibit their use (Korn 2023). Ultimately, it is the companies themselves that decide whether to adopt AI systems. Prior research highlights the importance of AI-supported leadership in the incorporation of AI systems into the workplace (Rožman et al. 2023). As a result, our findings suggest that individuals’ comfort with AI managing specific occupations is independent of their personal curiosity about emerging technologies or awareness of the latest advancements.

Study implications

This study contributes to the existing literature on people’s perceptions of AI systems at work. Given past failures of introducing AI systems in work environments, including biased hiring algorithms (Raghavan et al. 2020) and unfair grading systems (Satariano 2020), we argue that examining public opinion regarding technology that significantly impacts people’s daily lives is crucial to AI implementation. A recent case in point is the release of ChatGPT, which received substantial backlash from the writers’ rights movement (Coyle and The Associated Press 2023) and caused disruptions in educational settings (Sullivan et al. 2023). This example highlights the importance of understanding public preferences before widespread implementation.

In contrast to the abundant literature on discrete task and decision automation, the present research demonstrates a more holistic approach to examining public opinion about AI implementation in work environments. Previous studies on AI attitudes predominantly center on assessing individual users’ evaluations of AI’s technical capabilities, which remains relevant for evaluating the acceptance of specific AI-powered tools (e.g., Agarwal et al. 2023; de Haan et al. 2022). However, this approach has limitations when applied to state-of-the-art AI systems capable of entirely substituting for humans in the workplace. Given that these AI systems are introduced to workers without the ability to opt out, evaluating their acceptance and usefulness solely based on individual user assessments may not accurately reflect the broader public’s perspectives.

Our results also highlight the differences in perceptions and potential adoption of AI technology compared to previous technical innovations. While computers and mobile phones are more readily perceived as assistive tools that extend human abilities at work, AI systems are capable of autonomous operation that can lead to employees’ redundancy. Examining public perceptions is a crucial step in facilitating ethical AI implementation at workplaces and minimizing potential public backlash.

Attention to ethical AI stemmed from multiple instances of explicit racial and gender biases encoded in AI systems (Wellner 2020; Benjamin 2019). One approach to combating these and other issues arising from AI’s rapid implementation is to impose regulations. The European Union issued an AI Act that establishes regulations based on AI risk categories and bans high-risk AI systems, such as social scoring, real-time facial recognition, or dark-pattern AI (McCarthy and Propp 2021). A different approach was taken by the United States: the recent AI Bill of Rights presented by the Biden administration demonstrates a sector-based approach to regulation. However, some criticize the Bill as being uneven across sectors, with some domains receiving insufficient attention (Engler 2022).

In both instances, it is unclear whether these policies were developed in collaboration with the public. The Royal Society Report on Public Engagement in AI Ethics showed that citizens are generally excluded from shaping their country’s technological future (Cave et al. 2018). Left with no agency over technological changes in the workplace, healthcare, or education, people might reject the imposed technology altogether. Indeed, our research demonstrates that the public, and especially vulnerable populations, are not receptive to AI in almost all domains. Our results shed light on public preferences which, if acknowledged, can help regulators, politicians, and corporate leaders build AI systems for citizens’ empowerment rather than suppression. Future research may consider other aspects of the public’s attitudes about AI, such as what they perceive as most harmful and dangerous in AI’s potential impact. From a governance standpoint, these insights would be particularly important for policymakers developing guidelines and regulations around AI’s development and deployment.

Limitations and future work

The main limitations of our study stem from the survey methodology and attitude measurement. In addition, occupational domains were presented within-respondents and in a fixed sequence, which might have affected the answers. Future research should consider between-subjects designs for examining differences in people’s perceptions. While demographic quotas were introduced for national representativeness, the respondents were recruited through a professional survey company, which somewhat limits the generalizability of the findings.

Further, in order to establish a general understanding of AI among respondents, we provided a rather simplistic definition of an AI agent. Because the general public may lack an in-depth understanding of what constitutes AI, yet routinely encounters it in workplace settings, we argue that our chosen definition serves its purpose. This definition was selected to underscore the specific facets of AI that we intended to emphasize in our study: agency, intelligence, and decision-making. As research shows no consistency in AI definitions (Ng et al. 2021), designing a survey in which respondents provide their own definition might be one solution to this issue.

The survey was conducted in 2021, prior to the public release of large language models such as ChatGPT. Since then, people’s attitudes towards AI in various domains may have evolved, especially in light of the widespread media coverage surrounding the release of ChatGPT and its consequences. As a result, our findings may be most valuable as a historical reference point for understanding attitudes, particularly with regard to levels of LoC and PTC in relation to comfort with AI. With the release of AI tools that are more explicitly geared towards helping people with their everyday tasks (e.g., ChatGPT providing templates for repetitive writing tasks), future research should examine whether people begin to normalize AI as another kind of technological tool, akin to a computer or mobile phone. If so, it would be interesting to explore whether the inverse relationship between LoC and PTC identified in our study shifts to one in which high LoC and high PTC both relate to more comfort with AI. Another reason this study may be of value to future researchers is that it establishes a milestone of public attitudes at a certain time (i.e., 2021). Measuring future attitudes over several time intervals will thus yield a better understanding of the trajectory and rate of change of the public’s attitudes towards AI and perceptions of its societal consequences.

Future researchers may wish to further examine what factors contribute to people’s comfort with AI systems in the workplace. Prior research suggests that ontological distinctions (Guzman 2020), occupational status and prestige (Qi 2022), and automation anxiety (Piercy and Gist-Mackey 2021) explain differences in individuals’ acceptance of AI agents. However, no literature shows whether these factors vary across occupations on the high or low end of automation probability. A better understanding of what hinders people’s comfort with AI would benefit the further development not only of the systems themselves but also of the ethical policies regulating them.

Conclusion

The release of generative AI tools like DALL-E and ChatGPT ignited widespread public debate about the promise and perils of AI technology. Within the public, marginalized groups that lack structural power have already disproportionately experienced consequences of powerful algorithms in criminal justice, employment, healthcare, and other spaces where human bias has been long-standing (McGregor 2021). By and large AI has not fixed social bias; rather, its computational prowess has applied a veneer of objectivity to a problem that requires a much more intensive, human solution. The heightened attention on AI’s possibilities has also increased scrutiny on its historical and potential harms, prompting more regulatory action. This is a promising development that would be further bolstered with increased consideration of what the public values and prioritizes.

Artificial intelligence, in the form of large language models and related technologies, holds a unique promise to assist individuals who have not been successful in the traditional educational system due to various factors, and who are disproportionately represented among historically marginalized groups. This failure is often reflected in limited job opportunities and advancement. While the benefits of AI are anticipated to permeate society as a whole, and perhaps accrue disproportionate benefits to those who already have ample societal resources, it is argued that with careful design and implementation within targeted communities, these systems would be particularly advantageous for marginalized and low-income groups. AI could assist such people in bridging the gap in their educational opportunities to help them gain employment in targeted occupations. AI can empower individuals from these communities with tools that enable them to navigate bureaucratic and formal settings more effectively. This empowerment, in turn, can yield economic and political benefits for these communities, as well as enhance their ability to inform local officials about issues affecting their lives and neighborhoods. Hence, the research conducted in this study provides a valuable foundation for understanding the barriers to AI adoption and informing the design of technology that resonates with members of these underserved communities. The next decade and beyond will be a critical period for research institutions, industry leaders, policymakers, and the public to collaborate on establishing shared principles that ensure AI implementation, as it develops, is sustainable and aligned with societal values. Greater attention to community perspectives and concerns will undoubtedly enhance the adoption and utilization of AI-based technologies for personal and collective advancement.