Abstract
The present study explores people’s attitudes towards an assortment of occupations with high and low probability of automation. An omnibus survey (N = 1150) was conducted to measure attitudes about various emerging technologies, as well as demographic and individual traits. The results showed that respondents were not very comfortable with AI management across domains. To some degree, levels of comfort corresponded with automation probability, though some domains diverged from this pattern. Demographic traits explained the most variance in comfort with AI, revealing that men and those with higher perceived technology competence were more comfortable with AI management in every domain. Those with lower internal locus of control were more comfortable with AI management in nearly every domain, the exception being personal assistance. Age, education, and employment showed little influence on comfort levels. The present study demonstrates a more holistic approach to assessing attitudes toward AI management at work. By incorporating demographic and self-efficacy variables, our research revealed that AI systems are perceived differently compared to other recent technological innovations.
Introduction
Some optimists see Artificial Intelligence (AI) as greatly improving lives, especially for those who have been historically marginalized. Some pessimists see AI as harming society by causing mass unemployment, increased bias, and excessive surveillance, which will particularly affect historically marginalized groups (U.S. Senate, Schumer 2023). From an employment perspective, intelligent machines will not only perform a greater portion of tasks currently done by humans (Ford 2015; Frey and Osborne 2017), but also exceed human-worker performance in nearly all jobs (Grace et al. 2018). While adoption of AI technology is most visible in administrative and routine tasks, a 2020 survey showed that 67% of large commercial organizations use AI to support their decision-making (Pegasystems Inc. 2020), and the proportion has doubtless increased since then. The private sector is not alone in employing AI for decision automation. In governmental decision-making, algorithms are used in the United States for evaluating who qualifies for parole (Dressel and Farid 2018), sentencing decisions in courts (Angwin et al. 2016), designing transportation systems, and many other tasks (Levy et al. 2021). Rapidly advancing large language models are predicted to affect around 80% of the U.S. labor force and automate at least 10% of its work tasks (Eloundou et al. 2023). Those tasks will not be limited to routine work: with AI systems such as ChatGPT for text creation and DALL-E for image generation, occupations once imagined as inherently “human,” such as journalism, writing, and art, are witnessing such systems being deployed routinely (Manjoo 2020; Roose 2022).
While several studies predict that in the near future almost half of American jobs will be replaced by automation or other forms of AI systems (Grace et al. 2018; Frey and Osborne 2017), the general public shows little excitement about this potential future. Recent public opinion research shows that 50 percent of Americans believe robot automation of many jobs will have a negative impact on society (Johnson and Tyson 2020). People are averse to AI in the majority of roles, expressing slightly more comfort with AI in the subordinate role of assistant (Mays et al. 2021).
Studying public opinion provides important insights for future development and governance of AI technology (Zhang and Dafoe 2019). People globally believe AI will be transformative, but they are much less certain about whether its implementation will be good or bad for society (Kelley et al. 2021). Dominant narratives portray AI in dystopian scenarios (Cave et al. 2018), which may impede acceptance of AI in more domains. Such scenarios have already affected legislation concerning facial recognition, for instance, at the local, national, and international levels (Rabinowicz 2023; Madiega and Mildebrath 2021).
The present work builds on prior AI-perception research to explore attitudes about AI in various domains. We examine whether people’s comfort with AI in different occupations aligns with experts’ predictions of their potential automation. We also explore whether individual traits, such as locus of control, perceived technological competence, and socioeconomic status influence people’s attitudes about AI in workplaces. This research contributes to the scarce and often overlooked literature on public perspectives of AI development. As demand grows for citizen engagement in designing ethical AI (Balaram et al. 2018), our research provides necessary insights for this technology’s future development and potential in various occupations.
Literature review
AI domains
Previous research showed that some occupations are more likely to be automated than others (Frey and Osborne 2017; Grace et al. 2018; Geiger 2019). Among professions on the lower end of automation probability are jobs in the financial sector, education, healthcare, and media. While forecasts do not expect most of these jobs to be fully automated any time soon, a large share of tasks within these professions is already performed by AI.
Domains with low automation probability
The financial sector is a salient example of rapid implementation of AI systems. Smart algorithms are used for anti-money laundering screening, credit decisions, financial advising, and trading (OECD 2021). Currently, there is a lack of research that examines the public’s views on automation in finance. One empirical study, Rebitschek et al. (2021), showed that people held high expectations of algorithmic decision-making systems and were unwilling to accept their errors in credit scoring.
Within the educational sector, AI systems have been widely used for student testing and recommending personalized learning paths (Jimenez and Boser 2021). Public perception studies showed that people are comfortable with AI systems performing the former but not the latter. An AI assistant for grading or class scheduling is perceived as more acceptable, while an AI teacher or advisor is viewed as less desirable (Mays et al. 2021; Park and Shin 2017).
In healthcare, AI technology is used in wearable devices, diagnostics, patient care, and even surgeries (Shah and Chircu 2018). Research has shown that medical AI decisions are perceived as more trustworthy than human judgment (Baldauf et al. 2020; Ghafur et al. 2020; Araujo et al. 2020). People also express positive sentiment about the use of AI systems, such as chatbots, in therapy (Sweeney et al. 2021; Abd-Alrazaq et al. 2021; Bendig et al. 2019). Contrary to the current rise of AI implementation in surgical procedures, people are not willing to embrace AI surgeons (Stai et al. 2020; Bristows LLP 2021).
As for the media industry, large outlets such as Forbes and The Washington Post regularly use AI-powered systems for moderating content and generating news headlines and entire articles (Schmelzer 2019). AI journalism has been overall perceived favorably by the general public (Goni and Tabassum 2020; Jia 2020; Hofeditz et al. 2021). Past literature also suggests that automatically generated news content is regarded as more objective and capable of reducing hostile bias towards the media (Clerwall 2014; Cloudy et al. 2021).
Domains with high automation probability
Occupations in manufacturing, construction, the service industry, and administrative support are predicted to have a higher probability of full automation (Frey and Osborne 2017). Today, 60% of manufacturers use AI to improve product quality, inventory management, and maintenance (Dilmegani 2022). AI-powered robots and self-driving machines are used in construction (Srivastava 2022). Interestingly, acceptance of AI systems that are physically co-located with humans, such as at construction sites or in supermarkets, is highly dependent on the quality of human-machine communication (Lewandowski et al. 2020). It has also become more common to encounter AI systems in customer service (Dwivedi et al. 2021), as chatbots and recommendation systems are employed to help customers choose and purchase products. Prior research demonstrated that people have high levels of comfort with AI in assistant roles (Mays et al. 2021; Philipsen et al. 2022; Katz and Halpern 2013). Among the immediate advantages, people emphasize that AI systems help with the volume of incoming inquiries and speed up processing of customers’ claims (Aitken et al. 2020; van der Goot and Pilgrim 2020). However, Xu et al. (2020) showed that for high-complexity tasks, people would rather seek help from a human customer service representative than use AI. Further, when compared with human-interfacing modalities (both in-person and mediated), automated customer service interfaces were least liked and trusted (Mays et al. 2022).
Human resources
Human resources (HR) is another domain that has been highly influenced by the introduction of AI systems. A recent study showed that approximately 45% of surveyed organizations use or plan to use AI algorithms to support their HR activities in the next five years (SHRM 2022). AI systems are used for hiring and firing processes, personalization of employees’ social benefits, and identifying talents among many other instances (Chevalier 2022). Yet, literature suggests that people hold less favorable attitudes towards adopting AI technologies in the recruiting procedures and perceive AI recruiters as ineffective or less impartial than human interviewers (Acikgoz et al. 2020; Gonzalez et al. 2022; Zhang and Yencha 2022).
The examples above demonstrate that jobs with both high and low automation probability scores are currently heavily influenced by the implementation of AI systems. Previous research has mainly focused on evaluating people’s perceptions of discrete task automation, omitting a more holistic assessment of AI management at work. With the further development of systems such as enterprise cognitive computing (Tarafdar et al. 2017) and robotic process automation (Valgaeren 2019), future AI systems will arguably be more autonomous, minimizing human impact on their outputs.
Notably, research that examines the automation of particular tasks does not show a clear pattern of the public being less accepting of AI in domains with low automation probability. Machines are perceived to be better suited for tasks that require more objectivity, such as those within healthcare or journalism. However, research also shows that lay people have a poor understanding of what algorithmic decision-making is and how it works (Woodruff et al. 2018). Given that common concerns about using AI in low-automation domains include data privacy and algorithmic bias (Latham and Goltz 2019; Ghafur et al. 2020; Bristows LLP 2021), it is not obvious that people will embrace AI in real-world work situations.
In this study we explore people’s perceptions of AI as a main actor in occupational domains of high and low automation probability. Our first research question is as follows:
RQ1: Will people’s levels of comfort correspond to the high or low automation probability scores of occupational domains?
Influence of individual traits: locus of control, perceived technological competence, and innovativeness
One possible explanation for why AI acceptance does not neatly correspond with low-automation probability is that individual differences play a role in people’s perceptions and use of technology (Correa et al. 2010). Indeed, a number of theoretical models account for personal traits as predictors of technology acceptance such as the Technology Acceptance Model (Venkatesh 2000) and Theory of Planned Behavior (Hong 2022). These models focus on perceptions of usage, which may capture some functional dynamics of AI integration. However, typical notions of “use” with AI may be insufficient, as AI is not a usable tool in the same way that computers or mobile phones are. Rather, AI is a pervasive technology that may be interacted with purposefully, as in the case of AI-enabled digital assistants, and also incidentally or involuntarily, as in the cases of automated customer service systems and warehouse logistics. In these latter instances, usage perceptions matter less because people do not have a choice. Understanding antecedents to usage perceptions, such as traits like efficacy (Hsia et al. 2014; Hong 2022) and innovativeness (Rožman et al. 2023), may better inform how organizations can buttress their workers’ feelings of competence around the integration of AI technologies. Therefore, this study explores how traits related to broader notions of control, technology competence, and innovativeness relate to AI attitudes.
Locus of control
Locus of control (LoC) has been used for assessing people’s intentions and behavior since as early as the mid-1950s (Rotter 1966). It refers to people’s beliefs about their abilities to control outcomes and events in their lives. Rotter (1966) demonstrated that people’s behavior varied depending on whether they perceive outcomes as a result of their own behavior (internal LoC) or outside factors (external LoC).
LoC has been shown to be a reliable predictor of technology acceptance in human-machine interactions. For example, people with higher LoC had lower acceptance of AI recommendations (Sharan and Romano 2020). In earlier studies of human-machine cooperation, machine operators with high LoC performed worse with autonomous assistance from machines (Takayama et al. 2011). High LoC operators also issued more commands to machines that tended to take charge of a task, resulting in more command conflicts and higher frustration with the machine (Acharya et al. 2018). More recent research showed that people with high LoC trust machines more when they must cooperate with them compared to scenarios when people are solely responsible for the task (Chiou et al. 2021). Chiou and colleagues (2021) suggested that people trusted the machine more in mixed initiative conditions because they perceived it more as a collaborator than a tool. Similarly, Mays et al. (2021) found that people with high LoC were more uncomfortable with AI in powerful roles than with AI as a peer.
Perceived technology competence
The extent to which someone feels competent using technology is another often-examined factor in technology acceptance research. Where LoC is a more generalized efficacy trait, perceived technology competence (PTC) relates to efficacy in a narrower, more specific domain, which may have differing effects. Indeed, Mays et al. (2021) found that higher internal LoC was negatively, while PTC was positively, related to AI perceptions. Further, prior experience with robots reduces people’s anxiety towards them (Nomura et al. 2006), which improves attitudes about robots and their perceived usefulness (Belanche et al. 2019).
However, abundant experience with technology can also trigger more negative attitudes. A cross-cultural study showed that Japanese respondents with more experience in human-machine interaction were more concerned about robots’ societal impact compared to American or Mexican respondents (Bartneck et al. 2006). Similarly, Katz and Halpern (2013) demonstrated that Americans with higher PTC were more skeptical of robots’ effects on society. Research has also shown that technological expertise is a key factor in technology adoption within organizations, as employees’ technological competence is a driver for adopting new IT systems and autonomous machines (Shamout et al. 2022; Venkatesh and Bala 2012).
Innovativeness
According to Diffusion of Innovation theory (Rogers 1995), people range from innovators and early adopters to laggards. Those with higher levels of innovativeness are more likely to adopt new technology compared to others in their social system. Prior research has demonstrated that innovativeness is a consistent predictor of new technology adoption, across a range of domains: education (López-Pérez et al. 2019), tourism and hospitality (Ciftci et al. 2021), and the energy sector (Ullah et al. 2020). Additionally, innovativeness may be tied to socioeconomic factors, such as level of education, financial and social status (Shipps 2013; Hong 2022) and individual differences in age (Lee et al. 2017; Martínez-Miranda et al. 2018).
Based on the literature above, we propose the second research question:
RQ2: To what extent, if any, is comfort with AI influenced by (a) generalized (locus of control) and specialized (perceived technology competence) efficacy traits, (b) innovativeness, and (c) socioeconomic factors?
Method
Design and participants
We conducted an omnibus survey through an online questionnaire via a survey company (Qualtrics) from April to June 2021. The larger survey measured attitudes about various emerging technologies, as well as demographic and individual traits. The main variables in this analysis are drawn from a section about perceptions of AI and were determined from the outset of data collection. Qualtrics provides a survey technology platform and partners with over 20 online panel providers to supply a network of diverse, quality participants. Sample quotas on gender, age, ethnicity, education, and income were specified to match those demographic distributions in the U.S. population. The final sample (N = 1150) was 54.1% female, averaged 50.09 years of age (SD = 18.17), and was 62.2% White/Caucasian; 64.3% made $75,000 or less, 43.9% had some college education or less, and 31.6% were employed by an organization. According to 2020 census data, the final sample’s demographics largely match those of the general U.S. population (U.S. Census Bureau 2020). Compared to the census data, our sample is slightly older (people 55+ years old constitute 30% of census data versus 44% of our sample) and more educated (people with a 4-year degree constitute 38% of census data versus 44% of our sample). Additionally, 60% of the general U.S. population was employed in 2020, while only 32% of our participants were employed. Descriptive statistics of demographics and the U.S. census comparison can be found in Appendix 1.
Measurement
Comfort with AI
For the dependent variable of level of comfort with AI, we asked respondents to indicate how comfortable they would feel if an AI agent managed various domains. Based on Frey and Osborne’s (2017) automation scores, we chose occupations on the low end of automation probability (stock investments, surgical teams, air traffic control, news desks, therapist, teacher) and on the high end (supermarkets, customer service desks, sewage plant, construction site, personal assistant). We also measured people’s comfort with AI managing HR decisions on firing, hiring, salary compensation, and work scheduling. Responses were given on a five-point Likert-type scale, from “not comfortable at all” to “very comfortable.” The following definition of AI was given preceding these items: “Artificially Intelligent (AI) agents are smart computers that put into action decisions that they make by themselves.”
Individual traits
For measuring the independent variable of locus of control (LoC), we adapted Rotter’s (1966) 13-item LoC scale, reducing it to six items (α = 0.76). The items were measured on a five-point, Likert-type scale (“strongly disagree” to “strongly agree”). Higher values corresponded to a higher internal LoC, with statements such as “When I make plans, I am almost certain I can make them work” and “When I try to do something, fate determines what actually happens” (reverse-coded) (M = 3.54, SD = 0.71).
For the second independent variable, perceived technology competence (PTC), we adapted Katz and Halpern’s (2013) scale. PTC was measured on a 7-item, five-point Likert-type scale (“strongly disagree” to “strongly agree”) with statements such as “Other people come to me for advice on new technologies” and “I feel technology, in general, is easy to operate” (α = 0.87, M = 3.59, SD = 0.83).
For our third independent measure, innovativeness, we adapted and shortened Hurt et al.’s (1977) scale. The four items were again measured on a five-point, Likert-type scale (“strongly disagree” to “strongly agree”), including statements such as “I seek out new ways to do things” (α = 0.81, M = 3.56, SD = 0.78).
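The α values reported for these scales are Cronbach’s alpha, the standard internal-consistency coefficient. As an illustrative sketch (not part of the original analysis, which was run in SPSS), alpha can be computed from an item-response matrix as follows:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

For perfectly redundant items the formula yields 1.0; values around 0.76–0.87, as in the scales above, indicate acceptable-to-good internal consistency.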
Several demographic traits were also measured and included in the analysis. In addition to age, gender, income, education, and race/ethnicity, participants were asked about their current employment status. Categories were full-time, part-time, unemployed looking for work, and unemployed not looking for work.
Data analysis
All data analyses were conducted using IBM SPSS Statistics. Descriptive statistics were run and presented for all dependent variables (e.g., AI domains). Additional descriptive and reliability tests were run for all variables included in the analysis. Ordinary least squares regression models were then run to explore the relationships between individual traits and the extent to which they explained participants’ comfort with AI managing various domains. Independent variables were entered hierarchically into the models, in two blocks: (1) demographics and (2) individual traits.
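The logic of this block-wise entry is to compare the variance explained (R²) by demographics alone against demographics plus individual traits. A minimal NumPy sketch of that comparison (hypothetical variable names; the actual analysis was run in SPSS) could look like this:

```python
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 from an OLS fit, adding an intercept column to X."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

def hierarchical_r2(block1: np.ndarray, block2: np.ndarray, y: np.ndarray):
    """R^2 for block 1 alone, for both blocks, and the increment (delta R^2)."""
    r2_block1 = r_squared(block1, y)
    r2_both = r_squared(np.column_stack([block1, block2]), y)
    return r2_block1, r2_both, r2_both - r2_block1
```

Here `block1` would hold the demographic predictors and `block2` the trait measures; the returned increment corresponds to the additional variance the second block explains.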
Results
Levels of comfort with AI
Overall, participants were not very comfortable with AI’s management across domains (see Fig. 1). To some degree, levels of comfort corresponded with the likelihood of automation probability, though some domains diverged from this pattern. Participants were least comfortable with AI managing therapy, surgical teams, and air traffic control (all lower automation probability domains). However, people were slightly more comfortable with AI managing news desks and stock investing (lower automation probability) than construction sites (higher automation probability). Customer service (higher automation probability) and teaching (lower automation probability) were equivalent, and participants were most comfortable with AI managing the higher automation probability domains of supermarkets, sewage plants, and personal assistance.
In terms of HR functions, participants were most comfortable with AI managing employee’s work schedules (see Fig. 2). However, participants’ comfort decreased when it came to AI managing salary and making hiring decisions. The least comfort was observed when considering AI managing firing decisions.
Predictors of AI comfort across domains
We created indices for perceived comfort with AI in HR functions and in high- and low-automation-probability occupations (see Footnote 1). Because these items were novel, we validated the scales with Principal Components Analysis (PCA) using a varimax rotation, treating comfort with AI in HR functions (4 items), high-automation-probability domains (5 items), and low-automation-probability domains (6 items) as separate uni-dimensional indices. Across all three PCAs, the KMO measure of sampling adequacy was >0.82 and significant at p < 0.001. Only one component was extracted per PCA, explaining between 76 and 92 percent of the variance. Most factor loadings exceeded 0.80, and all exceeded 0.75. For the full statistics for each PCA, see Appendix 2.
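Extracting a single component and its explained variance, as in the PCAs above, amounts to an eigendecomposition of the item correlation matrix. A minimal NumPy sketch (illustrative only; the reported analysis used SPSS, and rotation is moot for a one-component solution):

```python
import numpy as np

def first_component_loadings(items: np.ndarray):
    """Loadings on, and variance explained by, the first principal
    component of the item correlation matrix."""
    R = np.corrcoef(items, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
    top = eigvals[-1]
    vec = eigvecs[:, -1]
    loadings = vec * np.sqrt(top)              # component loadings
    explained = top / eigvals.sum()            # proportion of variance explained
    # eigenvector sign is arbitrary, so report absolute loadings
    return np.abs(loadings), explained
```

A uni-dimensional scale would show one dominant eigenvalue (here, `explained` in the 0.76–0.92 range reported above) with all loadings above the conventional 0.40 cutoff.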
A series of hierarchical linear regressions was conducted to explore the individual traits that influenced participants’ levels of comfort with AI in various domains. The dependent variable scales were operationalized by averaging the scores of their items: HR functions (M = 2.29, SD = 1.02, α = 0.90); high automation probability (M = 2.82, SD = 0.91, α = 0.89); and low automation probability (M = 2.30, SD = 0.99, α = 0.89). For per-item statistics see Appendix 3. Independent variables were entered in two blocks. The first block contained demographic characteristics: gender, age, income, education, race/ethnicity, and employment status. Other individual traits previously found to influence attitudes about technology comprised the second block: innovativeness, internal LoC, and PTC.
Prior to data analysis, assumptions of multiple regression were tested. For all three models, we accepted that the residuals were normally distributed based on the standardized normal probability (P-P) plot; homogeneity of the variance of the residuals was tested and accepted by plotting the residuals against fitted values and using the Breusch-Pagan test for heteroscedasticity where p > 0.05 for all models. The models were also accepted after testing for multicollinearity using the Variance Inflation Factor (VIF < 5 for all variables across three models). For plots and reported values for the tested assumptions see Appendix 4. The regression models are presented in Table 1.
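The multicollinearity check above relies on the variance inflation factor, which for each predictor is 1/(1 − R²) from regressing that predictor on all the others. A NumPy sketch of that diagnostic (illustrative; SPSS reports these values directly):

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor per predictor column:
    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    column j on the remaining columns (with an intercept)."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1 - (resid @ resid) / (((y - y.mean()) ** 2).sum())
        out[j] = 1.0 / (1.0 - r2)
    return out
```

Uncorrelated predictors yield VIFs near 1, while near-duplicate predictors inflate the value sharply; the VIF < 5 threshold used above is a common rule of thumb.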
Altogether, individual traits explained 11–20% of the variance in comfort with AI management across domains. Across most models, demographic traits explained the most variance, ranging from 10–15%, and innovativeness and efficacy traits explained an additional 5–7%. Several predictors were consistent regardless of domain: men, those with higher income, and those with higher perceived technology competence were more comfortable with AI management in all three domains. Those who felt more in control of their lives (i.e., higher internal locus of control) were less comfortable with AI management in every domain. Age had no influence on participants’ comfort with AI in any domain. Race and employment status showed little influence on comfort levels. Finally, education and innovativeness were not significant predictors in any of the models.
Discussion
AI domains
The present research examines public attitudes towards agentic AI in various occupational domains. Our findings suggest that comfort with AI is consistent with Frey and Osborne’s (2017) automation predictions, except for the teaching and construction domains. The higher-than-expected comfort with an AI teacher might be due to the proliferation of remote learning at the time the survey was conducted. The COVID-19 pandemic greatly accelerated technological developments in online education, which might have shifted people’s attitudes towards technology in educational settings.
Construction sites received a lower level of comfort among participants than expected. One potential explanation for this finding, which also should be explored in future research, is that people might not be comfortable with AI systems in high-stakes environments for human safety. Research has shown that using robotics and automation in construction can potentially create new dangers to workers’ safety and worsen existing risks at construction sites (Okpala et al. 2022). This interpretation is bolstered by participants’ lower levels of comfort in the other high-stakes domains we asked about: surgery and air traffic control. However, in therapy, deemed a high-stakes domain, discomfort may stem from participants’ perception that an AI is incapable of effectively managing therapy, given its perceived dearth of real-life experience and emotional capacity (Mays et al. 2021).
Thus, the reasons for people’s comfort (or lack thereof) with AI across domains may be multi-faceted. The “automatability” of a role may inform attitudes in some contexts, but in other contexts discomfort may be informed by people’s beliefs about what an AI is (or is not) capable of, and by its suitability not only in terms of capabilities but also in terms of AI’s threat to human status and thriving (Ferrari et al. 2016). Other factors could be recent technological advancements in a particular domain, as in education, or the potential for dire consequences of introducing AI systems, as in construction. Research has demonstrated that people become resistant to technology whenever it undermines beliefs about their own capabilities (Craig et al. 2019). Perceived inferiority compared to machines also triggers negative views on automation, especially when machines demonstrate autonomous capabilities (Ferrari et al. 2016; Złotowski et al. 2017). Considered together with the influence of individual traits discussed below, people may variably feel threatened by AI based on identity and individual traits, regardless of operational domain.
Individual traits
Locus of control
Our findings suggest that higher internal LoC is significantly negatively correlated with AI comfort, meaning that people who feel more capable of controlling their outcomes were less likely to accept AI in the examined occupations. Interestingly, studies of other ICTs, such as mobile phones and computers, demonstrated that self-efficacy was a direct predictor of their use and adoption (Chen et al. 2011; Igbaria and Iivari 1995; Turkle 2005). In the case of computers, Turkle (2005) demonstrated that individuals who experienced a sense of powerlessness in their lives tended to favor computer use because it provided them with a sense of control.
Our results are consistent with the previous human-machine interaction literature that incorporated locus of control measurement (Acharya et al. 2018; Chiou et al. 2021) and suggest that AI technology is perceived fundamentally differently from other recent innovations. Individuals who already grapple with feelings of disempowerment in their lives may find AI more appealing, as they have less to relinquish (such as power or control) to AI. Conversely, individuals with a stronger LoC may perceive AI systems as more threatening, particularly when these systems are imposed on individuals in workplace settings.
Perceived technological competence
PTC showed a reverse relationship to LoC: those who perceived themselves as more technologically competent were more likely to be comfortable with AI. Our results are consistent with the studies where AI agents were framed in work situations (Turja and Oksanen 2019; Mays et al. 2021). Similarly, a study by Schoeffer et al. (2022) showed that the amount of information and AI literacy influenced perceived AI fairness and trust in technology. This might suggest that when people are more knowledgeable about how AI operates and have some familiarity with it, they can see its usefulness for work tasks. Given the strong connection between adoption of innovation and employees’ knowledge, technological competence will continue to be an important trait to consider for AI implementation in various industries (Shamout et al. 2022; Venkatesh and Bala 2012).
Demographic and socioeconomic factors
Demographic and socioeconomic factors explained more of the model variance than self-efficacy traits. Across all domains, male participants and high-income individuals showed more comfort with AI, indicating that more vulnerable populations (women and people with lower income) are less likely to be comfortable with AI technology.
Interestingly, employment status was not a significant predictor of comfort with AI. Statistically significant relationships were demonstrated only between full and part-time employment and low-automation probability occupations. However, these results should be interpreted with caution as they are below the threshold for significance after correcting for the number of predictors in our model.
Finally, innovativeness did not show a significant association with AI comfort levels in any domain. As innovativeness has been shown to be a strong predictor of adoption of various information technologies (López-Pérez et al. 2019; Ciftci et al. 2021; Ullah et al. 2020), this finding again points to the unique nature of AI as distinct from other innovations. Currently, the utilization of AI tools in the workplace lacks uniformity. Some companies readily embrace these tools, while others prohibit their use (Korn 2023). Ultimately, it is the companies themselves that decide on the adoption of AI systems. Prior research highlights the importance of AI-supported leadership in the incorporation of AI systems into the workplace (Rožman et al. 2023). As a result, our findings suggest that individuals’ comfort with AI managing specific occupations is independent of their personal curiosity about emerging technologies or awareness of the latest advancements.
Study implications
This study contributes to the existing literature on people’s perceptions of AI systems at work. Given past failures of introducing AI systems into work environments, including instances like biased hiring algorithms (Raghavan et al. 2020) and unfair grading systems (Satariano 2020), we argue that examining public opinion regarding technology that significantly impacts people’s daily lives is crucial to AI implementation. A recent case in point is the release of ChatGPT, which received substantial backlash from the writers’ rights movement (Coyle and The Associated Press 2023) and caused disruptions in educational settings (Sullivan et al. 2023). This example highlights the importance of understanding public preferences before widespread implementation.
With the abundance of literature on discrete tasks and decision automation, the present research demonstrates a more holistic approach to examining public opinions about AI implementation in work environments. Previous studies on AI attitudes predominantly center on assessing individual users’ evaluations of AI’s technical capabilities, which remains relevant for evaluating the acceptance of specific AI-powered tools (e.g., Agarwal et al. 2023; de Haan et al. 2022). However, this approach has limitations when applied to state-of-the-art AI systems capable of entirely substituting for humans in the workplace. Given that these AI systems are introduced to workers without the ability to opt out, evaluating their acceptance and usefulness solely on the basis of individual user assessments may not accurately reflect the broader public’s perspectives.
Our results also highlight the differences in perceptions and potential adoption of AI technology compared to previous technical innovations. While computers and mobile phones are more readily perceived as assistive tools that extend human abilities at work, AI systems are capable of autonomous operation that can lead to employees’ redundancy. Examining public perceptions is a crucial step in facilitating ethical AI implementation at workplaces and minimizing potential public backlash.
Attention to ethical AI stemmed from multiple instances of explicit racial and gender bias encoded in AI systems (Wellner 2020; Benjamin 2019). One approach to combating these and other issues arising from AI’s rapid implementation is to impose regulations. The European Union’s AI Act establishes regulations based on AI risk categories and bans high-risk AI systems, such as social scoring, real-time facial recognition, and dark-pattern AI (McCarthy and Propp 2021). The United States has taken a different approach: the recent AI Bill of Rights presented by the Biden administration demonstrates a sector-based approach to regulation. However, some criticize the Bill as uneven across sectors, with some domains receiving insufficient attention (Engler 2022).
In both instances, it is unclear whether these policies were developed in collaboration with the public. The Royal Society report on public engagement in AI ethics showed that citizens are generally excluded from shaping their country’s technological future (Cave et al. 2018). Left with no agency over technological changes in the workplace, healthcare, or education, people might reject the imposed technology altogether. Indeed, our research demonstrates that the public, and especially vulnerable populations, are not receptive to AI in almost all domains. Our results shed light on public preferences which, if acknowledged, can help regulators, politicians, and corporate leaders build AI systems for citizens’ empowerment rather than suppression. Future research may consider other aspects of the public’s attitudes about AI, such as what people perceive as most harmful and dangerous in AI’s potential impact. From a governance standpoint, these insights would be particularly important for policymakers developing guidelines and regulations around AI’s development and deployment.
Limitations and future work
The main limitations of our study stem from the survey methodology and attitude measurement. Occupational domains were presented within-subjects and in a fixed sequence, which might have affected the answers. Future research should use between-subjects designs to examine differences in people’s perceptions. While demographic quotas were introduced for national representativeness, respondents were recruited through a professional survey company, which may limit the generalizability of the findings.
Further, to establish a general understanding of AI among respondents, we provided a rather simplistic definition of an AI agent. Because the general public may lack an in-depth understanding of what constitutes AI, yet routinely encounters it in workplace settings, we argue that our chosen definition serves its purpose. The definition was selected to underscore the specific facets of AI we intended to emphasize in our study: agency, intelligence, and decision-making. As research shows no consistency in AI definitions (Ng et al. 2021), designing a survey in which respondents provide their own definitions might be one solution to this issue.
The survey was conducted in 2021, prior to the public release of large language models such as ChatGPT. Since then, people’s attitudes towards AI in various domains may have evolved, especially in light of the widespread media coverage surrounding ChatGPT’s release and its consequences. As a result, our findings may be most valuable as a historical reference point for understanding attitudes, particularly with regard to levels of LoC and PTC in relation to comfort with AI. With the release of AI tools more explicitly geared towards helping people with everyday tasks (e.g., ChatGPT providing templates for repetitive writing tasks), future research should examine whether people begin to normalize AI as another kind of technological tool, akin to a computer or mobile phone. If so, it would be interesting to explore whether the inverse relationship between LoC and PTC identified in our study shifts to one in which high LoC and high PTC both relate to more comfort with AI. Another reason this study may be of value to future researchers is that it establishes a milestone of public attitudes at a certain time (i.e., 2021). Measuring future attitudes over several time intervals will thus yield a better understanding of the trajectory and rate of change of the public’s attitudes towards AI and perceptions of its societal consequences.
Future researchers may wish to examine further what factors contribute to people’s comfort with AI systems in the workplace. Prior research suggests that ontological distinctions (Guzman 2020), occupational status and prestige (Qi 2022), and automation anxiety (Piercy and Gist-Mackey 2021) explain differences in individuals’ acceptance of AI agents. However, no literature shows whether these factors vary across occupations at the high or low end of automation probability. A better understanding of what hinders people’s comfort with AI would benefit not only the further development of the systems themselves but also the ethical policies regulating them.
Conclusion
The release of generative AI tools like DALL-E and ChatGPT ignited widespread public debate about the promise and perils of AI technology. Within the public, marginalized groups that lack structural power have already disproportionately experienced the consequences of powerful algorithms in criminal justice, employment, healthcare, and other spaces where human bias has been long-standing (McGregor 2021). By and large, AI has not fixed social bias; rather, its computational prowess has applied a veneer of objectivity to a problem that requires a much more intensive, human solution. The heightened attention on AI’s possibilities has also increased scrutiny of its historical and potential harms, prompting more regulatory action. This is a promising development that would be further bolstered by increased consideration of what the public values and prioritizes.
Artificial intelligence, in the form of large language models and related technologies, holds a unique promise to assist individuals who have not been successful in the traditional educational system due to various factors, and who are disproportionately represented among historically marginalized groups. This failure is often reflected in limited job opportunities and advancement. While the benefits of AI are anticipated to permeate society as a whole, and perhaps accrue disproportionate benefits to those who already have ample societal resources, it is argued that with careful design and implementation within targeted communities, these systems would be particularly advantageous for marginalized and low-income groups. AI could assist such people in bridging the gap in their educational opportunities to help them gain employment in targeted occupations. AI can empower individuals from these communities with tools that enable them to navigate bureaucratic and formal settings more effectively. This empowerment, in turn, can yield economic and political benefits for these communities, as well as enhance their ability to inform local officials about issues affecting their lives and neighborhoods. Hence, the research conducted in this study provides a valuable foundation for understanding the barriers to AI adoption and informing the design of technology that resonates with members of these underserved communities. The next decade and beyond will be a critical period for research institutions, industry leaders, policymakers, and the public to collaborate on establishing shared principles that ensure AI implementation, as it develops, is sustainable and aligned with societal values. Greater attention to community perspectives and concerns will undoubtedly enhance the adoption and utilization of AI-based technologies for personal and collective advancement.
Data availability
The dataset generated and analysed during the current study is available from the corresponding author upon reasonable request.
Notes
High-automation-probability occupations included supermarkets, construction sites, customer service desks, sewage plants, and personal assistance. Low-automation-probability occupations included stock investments, surgical teams, air traffic control, news desks, therapists, and teachers.
References
Abd-Alrazaq AA, Alajlani M, Ali N, Denecke K, Bewick BM, Househ M (2021) Perceptions and opinions of patients about mental health chatbots: scoping review. J Med Internet Res 23(1):e17828. https://doi.org/10.2196/17828
Acikgoz Y, Davison KH, Compagnone M, Laske M (2020) Justice perceptions of artificial intelligence in selection. Int J Sel Assess 28(4):399–416. https://doi.org/10.1111/ijsa.12306
Agarwal N, Moehring A, Rajpurkar P, Salz T (2023) Combining human expertise with artificial intelligence: experimental evidence from Radiology (No. w31422). National Bureau of Economic Research. https://doi.org/10.3386/w31422
Aitken M, Ng M, Toreini E, van Moorsel A, Coopamootoo KPL, Elliott K (2020) Keeping it human: a focus group study of public attitudes towards AI in banking (Lecture notes in computer science) pp 21–38. https://doi.org/10.1007/978-3-030-66504-3_2
Angwin J, Larson J, Mattu S, Kirchner L (2016) Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Araujo T, Helberger N, Kruikemeier S, De Vreese CH (2020) In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI Soc 35(3):611–623. https://doi.org/10.1007/s00146-019-00931-w
Acharya U, Kunde S, Hall L, Duncan BA, Bradley JM (2018) Inference of user qualities in shared control. In: 2018 IEEE international conference on robotics and automation (ICRA). https://doi.org/10.1109/icra.2018.8461193
Balaram B, Greenham T, Leonard J (2018) Artificial intelligence: real public engagement. RSA, London. https://www.thersa.org/reports/artificial-intelligence-real-public-engagement
Baldauf M, Fröehlich P, Endl R (2020) Trust me, I’m a doctor – user perceptions of AI-driven apps for mobile health diagnosis. In: 19th international conference on mobile and ubiquitous multimedia. https://doi.org/10.1145/3428361.3428362
Bartneck C, Suzuki T, Kanda T, Nomura T (2006) The influence of people’s culture and prior experiences with Aibo on their attitude towards robots. AI Soc 21(1–2):217–230. https://doi.org/10.1007/s00146-006-0052-7
Belanche D, Casaló LV, Flavián C (2019) Artificial intelligence in FinTech: understanding robo-advisors adoption among customers. Ind Manag Data Syst 119(7):1411–1430. https://doi.org/10.1108/imds-08-2018-0368
Bendig E, Erb B, Schulze-Thuesing L, Baumeister H (2019) The next generation: chatbots in clinical psychology and psychotherapy to foster mental health–a scoping review. Verhaltenstherapie 1–13. https://doi.org/10.1159/000501812
Benjamin R (2019) Race after technology: abolitionist tools for the new Jim code. Soc Forces 98(4):1–3. https://doi.org/10.1093/sf/soz162
Bristows LLP (2021) AI in the healthcare sphere: a survey of public opinion. https://www.bristows.com/viewpoint/articles/whitepaper-what-do-people-think-about-the-use-of-ai-in-the-medical-sphere/
Cave S, Craig C, Dihal K, Dillon S, Montgomery J, Singler B, Taylor L (2018) Portrayals and perceptions of AI and why they matter. The Royal Society, London
Chen K, Chen JV, Yen DC (2011) Dimensions of self-efficacy in the study of smart phone acceptance. Comput Stand Interfaces 33(4):422–431. https://doi.org/10.1016/j.csi.2011.01.003
Chevalier F (2022) AI in HR: how is it really used and what are the risks? https://www.hec.edu/en/knowledge/articles/ai-hr-how-it-really-used-and-what-are-risks
Chiou M, McCabe F, Grigoriou M, Stolkin R (2021) Trust, shared understanding and locus of control in mixed-initiative robotic systems. In: 2021 30th IEEE international conference on robot & human interactive communication (RO-MAN). https://doi.org/10.1109/ro-man50785.2021.9515476
Ciftci O, Berezina K, Kang M (2021) Effect of personal innovativeness on technology adoption in hospitality and tourism: meta-analysis. Inf Commun Technol Tour 2021:162–174. https://doi.org/10.1007/978-3-030-65785-7_14
Clerwall C (2014) Enter the robot journalist. J Pract 8(5):519–531. https://doi.org/10.1080/17512786.2014.883116
Cloudy J, Banks J, Bowman ND (2021) The Str(AI)ght scoop: artificial intelligence cues reduce perceptions of hostile media bias. Digit J 1–20. https://doi.org/10.1080/21670811.2021.1969974
Correa T, Hinsley AW, De Zuniga HG (2010) Who interacts on the web? The intersection of users’ personality and social media use. Comput Hum Behav 26(2):247–253. https://doi.org/10.1016/j.chb.2009.09.003
Coyle J, The Associated Press (2023) ChatGPT is the ‘terrifying’ subtext of the writers’ strike that is reshaping Hollywood. Fortune. https://fortune.com/2023/05/05/hollywood-writers-strike-wga-chatgpt-ai-terrifying-replace-workers/
Craig K, Thatcher JB, Grover V (2019) The IT identity threat: a conceptual definition and operational measure. J Manag Inf Syst 36(1):259–288. https://doi.org/10.1080/07421222.2018.1550561
de Haan Y, van den Berg E, Goutier N, Kruikemeier S, Lecheler S (2022) Invisible friend or foe? How journalists use and perceive algorithmic-driven tools in their research process. Digit J 10(10):1775–1793. https://doi.org/10.1080/21670811.2022.2027798
Dilmegani C (2022) Top 13 Use Cases / Applications of AI in Manufacturing in 2022. https://research.aimultiple.com/manufacturing-ai/
Dressel J, Farid H (2018) The accuracy, fairness, and limits of predicting recidivism. Sci Adv 4(1). https://doi.org/10.1126/sciadv.aao5580
Dwivedi YK, Hughes L, Ismagilova E, Aarts G, Coombs C, Crick T, Duan Y, Dwivedi R, Edwards J, Eirug A, Galanos V, Ilavarasan PV, Janssen M, Jones P, Kar AK, Kizgin H, Kronemann B, Lal B, Lucini B, Williams MD (2021) Artificial intelligence (AI): multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int J Inf Manag 57:101994. https://doi.org/10.1016/j.ijinfomgt.2019.08.002
Eloundou T, Manning S, Mishkin P, Rock D (2023) GPTs are GPTs: an early look at the labor market impact potential of large language models. Preprint at https://doi.org/10.48550/arXiv.2303.10130
Engler A (2022) The AI Bill of Rights Makes Uneven Progress on Algorithmic Protections. https://www.lawfareblog.com/ai-bill-rights-makes-uneven-progress-algorithmic-protections
Ferrari F, Paladino MP, Jetten J (2016) Blurring human–machine distinctions: anthropomorphic appearance in social robots as a threat to human distinctiveness. Int J Soc Robot 8(2):287–302. https://doi.org/10.1007/s12369-016-0338-y
Frey CB, Osborne MA (2017) The future of employment: how susceptible are jobs to computerisation? Technol Forecast Soc Change 114:254–280. https://doi.org/10.1016/j.techfore.2016.08.019
Ford M (2015) Rise of the robots: technology and the threat of a jobless future. Basic Books, New York
Geiger AW (2019) How Americans see automation and the workplace in 7 charts. https://policycommons.net/artifacts/616856/how-americans-see-automation-and-the-workplace-in-7-charts/1597575/
Ghafur S, Van Dael J, Leis M, Darzi A, Sheikh A (2020) Public perceptions on data sharing: key insights from the UK and the USA. Lancet Digit Health 2(9):e444–e446. https://doi.org/10.1016/s2589-7500(20)30161-8
Goni A, Tabassum M (2020) Artificial intelligence (AI) in journalism: is Bangladesh ready for it? A study on journalism students in Bangladesh. Athens J Mass Media Commun 6(4):209–228. https://doi.org/10.30958/ajmmc.6-4-1
Gonzalez MF, Liu W, Shirase L, Tomczak DL, Lobbe CE, Justenhoven R, Martin NR (2022) Allying with AI? Reactions toward human-based, AI/ML-based, and augmented hiring processes. Comput Hum Behav 130:107179. https://doi.org/10.1016/j.chb.2022.107179
Grace K, Salvatier J, Dafoe A, Zhang B, Evans O (2018) When will AI exceed human performance? Evidence from AI experts. J Artif Intell Res 62:729–754. https://doi.org/10.1613/jair.1.11222
Guzman A (2020) Ontological boundaries between humans and computers and the implications for human-machine communication. Hum-Mach Commun 1:37–54. https://doi.org/10.30658/hmc.1.3
Hofeditz L, Mirbabaie M, Holstein J, Stieglitz S (2021) Do you trust an AI-Journalist? A credibility analysis of news content with AI-Authorship. In: ECIS 2021 Proceedings, Marrakech, Morocco. https://www.researchgate.net/publication/351348564_Do_you_Trust_an_AI-Journalist_A_Credibility_Analysis_of_News_Content_with_AI-Authorship
Hong JW (2022) I was born to love AI: the influence of social status on AI self-efficacy and intentions to use AI. Int J Commun 16:20. https://ijoc.org/index.php/ijoc/article/view/17728
Hsia JW, Chang CC, Tseng AH (2014) Effects of individuals’ locus of control and computer self-efficacy on their e-learning acceptance in high-tech companies. Behav Inf Technol 33(1):51–64. https://doi.org/10.1080/0144929x.2012.702284
Hurt HT, Joseph K, Cook CD (1977) Scales for the measurement of innovativeness. Hum Commun Res 4(1):58–65. https://doi.org/10.1111/j.1468-2958.1977.tb00597.x
Igbaria M, Iivari J (1995) The effects of self-efficacy on computer usage. Omega 23(6):587–605. https://doi.org/10.1016/0305-0483(95)00035-6
Jia C (2020) Chinese automated journalism: a comparison between expectations and perceived quality. Int J Commun 14:22. https://ijoc.org/index.php/ijoc/article/view/13334
Jimenez L, Boser U (2021) Future of testing in education: Artificial intelligence. https://www.americanprogress.org/article/future-testing-education-artificial-intelligence
Johnson C, Tyson A (2020) People globally offer mixed views of the impact of artificial intelligence, job automation on society. Pew Research Center. https://www.pewresearch.org/fact-tank/2020/12/15/people-globally-offer-mixed-views-of-the-impact-of-artificial-intelligence-job-automation-on-society/
Katz JE, Halpern D (2013) Attitudes towards robots suitability for various jobs as affected robot appearance. Behav Inf Technol 33(9):941–953. https://doi.org/10.1080/0144929x.2013.783115
Kelley PG, Yang Y, Heldreth C, Moessner C, Sedley A, Kramm A, Newman DT, Woodruff A (2021) Exciting, useful, worrying, futuristic: public perception of artificial intelligence in 8 countries. In: Proceedings of the 2021 AAAI/ACM conference on AI, ethics, and society. https://doi.org/10.1145/3461702.3462605
Korn J (2023) How companies are embracing generative AI for employees…or not. CNN. https://www.cnn.com/2023/09/22/tech/generative-ai-corporate-policy/index.html
Latham A, Goltz S (2019) A survey of the general public’s views on the ethics of using AI in education. In: Artificial intelligence in education pp 194–206. https://doi.org/10.1007/978-3-030-23204-7_17
López-Pérez VA, Ramírez-Correa PE, Grandón EE (2019) Innovativeness and factors that affect the information technology adoption in the classroom by primary teachers in Chile. Inform Educ - Int J 18(1):165–181. https://doi.org/10.15388/infedu.2019.08
Lee C, Ward C, Raue M, D’Ambrosio L, Coughlin JF (2017) Age differences in acceptance of self-driving cars: a survey of perceptions and attitudes (Lecture notes in computer science) pp 3–13. https://doi.org/10.1007/978-3-319-58530-7_1
Levy K, Chasalow KE, Riley S (2021) Algorithms and decision-making in the public sector. Annu Rev Law Soc Sci 17(1):309–334. https://doi.org/10.1146/annurev-lawsocsci-041221-023808
Lewandowski B, Wengefeld T, Muller S, Jenny M, Glende S, Schroter C, Bley A, Gross H-M (2020) Socially compliant human-robot interaction for autonomous scanning tasks in supermarket environments. In: 2020 29th IEEE international conference on robot and human interactive communication (RO-MAN). https://doi.org/10.1109/ro-man47096.2020.9223568
Madiega T, Mildebrath H (2021) Regulating facial recognition in the EU. https://www.europarl.europa.eu/RegData/etudes/IDAN/2021/698021/EPRS_IDA(2021)698021_EN.pdf
Manjoo F (2020) How do you know a human wrote this? New York Times. https://www.nytimes.com/2020/07/29/opinion/gpt-3-ai-automation.html
Martínez-Miranda J, Pérez-Espinosa H, Espinosa-Curiel I, Avila-George H, Rodríguez-Jacobo J (2018) Age-based differences in preferences and affective reactions towards a robot’s personality during interaction. Comput Hum Behav 84:245–257. https://doi.org/10.1016/j.chb.2018.02.039
Mays KK, Lei Y, Giovanetti R, Katz JE (2021) AI as a boss? A national US survey of predispositions governing comfort with expanded AI roles in society. AI Soc. https://doi.org/10.1007/s00146-021-01253-6
Mays KK, Katz J, Groshek J (2022) Mediated communication and customer service experiences: psychological and demographic predictors of user evaluations in the United States. Period Polytech Soc Manage Sci 30(1). Available at: https://ssrn.com/abstract=4143911
McCarthy M, Propp K (2021) Machines learn that Brussels writes the rules: the EU’s new AI regulation. Brookings Institute. https://www.brookings.edu/blog/techtank/2021/05/04/machines-learn-that-brussels-writes-the-rules-the-eus-new-ai-regulation/
McGregor S (2021) Preventing repeated real world AI failures by cataloging incidents: the AI incident database. Proc AAAI Conf Artif Intell 35(17):15458–15463
Ng DTK, Leung JKL, Chu KWS, Qiao MS (2021) AI literacy: definition, teaching, evaluation and ethical issues. Proc Assoc Inf Sci Technol 58(1):504–509. https://doi.org/10.1002/pra2.487
Nomura T, Suzuki T, Kanda T, Kato K (2006) Measurement of anxiety toward robots. In: ROMAN 2006 - The 15th IEEE international symposium on robot and human interactive communication. https://doi.org/10.1109/roman.2006.314462
OECD (2021) Artificial intelligence, machine learning and big data in finance: opportunities, challenges, and implications for policy makers. https://www.oecd.org/finance/artificial-intelligence-machine-learning-big-data-in-finance.htm
Okpala I, Nnaji C, Gambatese J (2022) Assessment tool for human–robot interaction safety risks during construction operations. J Constr Eng Manage 149(1). https://doi.org/10.1061/(ASCE)CO.1943-7862.0002432
Park JH, Shin NM (2017) Students’ perceptions of artificial intelligence technology and artificial intelligence teachers. J Korean Teach Educ 34(2):169–192. https://doi.org/10.24211/TJKTE.2017.34.2.169
Pegasystems Inc. (2020) The future of work: What’s next? https://www.pega.com/future-of-work
Piercy C, Gist-Mackey A (2021) Automation anxieties: perceptions about technological automation and the future of pharmacy work. Hum-Mach Commun 2:191–208. https://doi.org/10.30658/hmc.2.10
Philipsen R, Brauner P, Biermann H, Ziefle M (2022) I am what i am–roles for artificial intelligence from the users’ perspective. In: AHFE International. https://doi.org/10.54941/ahfe1001453
Qi W (2022) The organizational correlates of automation depend on job status. https://doi.org/10.31234/osf.io/2r53z
Rabinowicz C (2023) Approaches to regulating government use of facial recognition technology. Jolt Digest. https://jolt.law.harvard.edu/digest/approaches-to-regulating-government-use-of-facial-recognition-technology
Raghavan M, Barocas S, Kleinberg J, Levy K (2020) Mitigating bias in algorithmic hiring: evaluating claims and practices. In: Proceedings of the 2020 conference on fairness, accountability, and transparency pp 469-481. https://doi.org/10.1145/3351095.3372828
Rogers EM (1995) Diffusion of innovations–5th edition. Free Press, New York
Roose K (2022) An A.I.-generated picture won an art prize. Artists aren’t happy. New York Times. https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html
Rebitschek FG, Gigerenzer G, Wagner GG (2021) People underestimate the errors made by algorithms for credit scoring and recidivism prediction but accept even fewer errors. Sci Rep 11(1):1–11. https://doi.org/10.1038/s41598-021-99802-y
Rotter JB (1966) Generalized expectancies for internal versus external control of reinforcement. Psychol Monogr Gen Appl 80(1):1–28. https://doi.org/10.1037/h0092976
Rožman M, Tominc P, Milfelner B (2023) Maximizing employee engagement through artificial intelligent organizational culture in the context of leadership and training of employees: testing linear and non-linear relationships. Cogent Bus Manag 10(2):2248732. https://doi.org/10.1080/23311975.2023.2248732
Satariano A (2020) British grading debacle shows pitfalls of automating government. New York Times. https://www.nytimes.com/2020/08/20/world/europe/uk-england-grading-algorithm.html
Schmelzer R (2019) AI making waves in news and journalism. Forbes. https://www.forbes.com/sites/cognitiveworld/2019/08/23/ai-making-waves-in-news-and-journalism/?sh=30e35fc17748
Schoeffer J, Kuehl N, Machowski Y (2022) “There is not enough information”: on the effects of explanations on perceptions of informational fairness and trustworthiness in automated decision-making. In: 2022 ACM conference on fairness, accountability, and transparency. https://doi.org/10.1145/3531146.3533218
Shah R, Chircu A (2018) IOT and AI in healthcare: a systematic literature review. Issues Inf Syst 19(3). https://doi.org/10.48009/3_iis_2018_33-41
Shamout M, Ben-Abdallah R, Alshurideh M, Alzoubi H, Kurdi BA, Hamadneh S (2022) A conceptual model for the adoption of autonomous robots in supply chain and logistics industry. Uncertain Supply Chain Manag 10(2):577–592. https://doi.org/10.5267/j.uscm.2021.11.006
Sharan NN, Romano DM (2020) The effects of personality and locus of control on trust in humans versus artificial intelligence. Heliyon 6(8):e04572. https://doi.org/10.1016/j.heliyon.2020.e04572
Shipps B (2013) Social networks, interactivity and satisfaction: assessing socio-technical behavioral factors as an extension to technology acceptance. J Theor Appl Electron Commer Res 8(1):35–52. https://doi.org/10.4067/S0718-18762013000100004
SHRM (2022) Fresh SHRM research explores use of automation and AI in HR. https://www.shrm.org/about-shrm/press-room/press-releases/pages/fresh-shrm-research-explores-use-of-automation-and-ai-in-hr.aspx
Srivastava S (2022) AI in construction – how artificial intelligence is paving the way for smart construction. https://appinventiv.com/blog/ai-in-construction/
Stai B, Heller N, McSweeney S, Rickman J, Blake P, Vasdev R, Edgerton Z, Tejpaul R, Peterson M, Rosenberg J, Kalapara A, Regmi S, Papanikolopoulos N, Weight C (2020) Public perceptions of artificial intelligence and robotics in medicine. J Endourol 34(10):1041–1048. https://doi.org/10.1089/end.2020.0137
Sullivan M, Kelly A, McLaughlan P (2023) ChatGPT in higher education: considerations for academic integrity and student learning. J Appl Learn Teach 6(1). https://doi.org/10.37074/jalt.2023.6.1.17
Sweeney C, Potts C, Ennis E, Bond R, Mulvenna MD, O’neill S, Malcolm M, Kuosmanen L, Kostenius C, Vakaloudis A, Mcconvey G, Turkington R, Hanna D, Nieminen H, Vartiainen A-K, Robertson A, Mctear MF (2021) Can chatbots help support a person’s mental health? Perceptions and views from mental healthcare professionals and experts. ACM Trans Comput Healthc 2(3):1–15. https://doi.org/10.1145/3453175
Takayama L, Marder-Eppstein E, Harris H, Beer JM (2011) Assisted driving of a mobile remote presence system: system design and controlled user evaluation. In: 2011 IEEE International Conference on Robotics and Automation. https://doi.org/10.1109/icra.2011.5979637
Tarafdar M, Beath C, Ross J (2017) Enterprise cognitive computing applications: opportunities and challenges. In: IT professional p 1. https://doi.org/10.1109/mitp.2017.265111150
Turja T, Oksanen A (2019) Robot acceptance at work: a multilevel analysis based on 27 EU countries. Int J Soc Robot 11(4):679–689. https://doi.org/10.1007/s12369-019-00526-x
Turkle S (2005) The second self, twentieth anniversary edition: computers and the human spirit. MIT Press, Cambridge, MA
Ullah N, Alnumay WS, Al-Rahmi WM, Alzahrani AI, Al-Samarraie H (2020) Modeling cost saving and innovativeness for blockchain technology adoption by energy management. Energies 13(18):4783. https://doi.org/10.3390/en13184783
U.S. Census Bureau (2020) Summary Statistics for the U.S. https://www.census.gov/quickfacts/fact/table/US/PST040222#PST040222
U.S. Senate, Schumer C (2023). Majority Leader Schumer Delivers Remarks At The Senate Rules Committee Hearing On The Impact Of AI On Elections. https://www.democrats.senate.gov/news/press-releases/majority-leader-schumer-delivers-remarks-at-the-senate-rules-committee-hearing-on-the-impact-of-ai-on-elections
Valgaeren H (2019) Robotic process automation in financial and accounting processes in the banking sector. Master’s thesis, Katholieke Universiteit Leuven. https://www.scriptiebank.be/sites/default/files/thesis/2019-09/MBA_Valgaeren_H_Final_Report1819.pdf
van der Goot MJ, Pilgrim T (2020) Exploring age differences in motivations for and acceptance of chatbot communication in a customer service context (Lecture notes in computer science) pp 173–186. https://doi.org/10.1007/978-3-030-39540-7_12
Venkatesh V (2000) Determinants of perceived ease of use: integrating control, intrinsic motivation, and emotion into the technology acceptance model. Inf Syst Res 11(4):342–365. https://doi.org/10.1287/isre.11.4.342.11872
Venkatesh V, Bala H (2012) Adoption and impacts of interorganizational business process standards: role of partnering synergy. Inf Syst Res 23(4):1131–1157. https://doi.org/10.1287/isre.1110.0404
Wellner GP (2020) When AI Is gender-biased. Humana MENTE J Philos Stud 13(37):127–150. https://www.humanamente.eu/index.php/HM/article/view/307
Woodruff A, Fox SE, Rousso-Schindler S, Warshaw J (2018) A qualitative exploration of perceptions of algorithmic fairness. In: Proceedings of the 2018 CHI conference on human factors in computing systems. https://doi.org/10.1145/3173574.3174230
Xu Y, Shieh CH, van Esch P, Ling IL (2020) AI customer service: task complexity, problem-solving ability, and usage intention. Australas Mark J 28(4):189–199. https://doi.org/10.1016/j.ausmj.2020.03.005
Zhang B, Dafoe A (2019) Artificial intelligence: American attitudes and trends. SSRN Electron J. https://doi.org/10.2139/ssrn.3312874
Zhang L, Yencha C (2022) Examining perceptions towards hiring algorithms. Technol Soc 68:101848. https://doi.org/10.1016/j.techsoc.2021.101848
Złotowski J, Yogeeswaran K, Bartneck C (2017) Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources. Int J Hum-Comput Stud 100:48–54. https://doi.org/10.1016/j.ijhcs.2016.12.008
Author information
Contributions
EN: Conceptualization, methodology, formal analysis, writing–original draft, writing–review and editing. KM: Conceptualization, methodology, formal analysis, writing–review and editing. JEK: Conceptualization, writing–review and editing, funding acquisition.
Ethics declarations
Competing interests
The authors declare no competing interests.
Ethical approval
The study was reviewed and approved as exempt by the Boston University Charles River Campus IRB.
Informed consent
Informed written consent to participate in the present study was obtained from all the participants.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Novozhilova, E., Mays, K. & Katz, J.E. Looking towards an automated future: U.S. attitudes towards future artificial intelligence instantiations and their effect. Humanit Soc Sci Commun 11, 132 (2024). https://doi.org/10.1057/s41599-024-02625-1