Abstract
Evidence on AI acceptance comes from a diverse field comprising public opinion research and largely experimental studies from various disciplines. Differing theoretical approaches in this research, however, imply heterogeneous ways of studying AI acceptance. The present paper provides a framework for systematizing these different uses. It identifies three families of theoretical perspectives informing research on AI acceptance—user acceptance, delegation acceptance, and societal adoption acceptance. These models differ in scope, each has elements specific to it, and the connotation of technology acceptance thus changes when shifting perspective. The discussion points to a need for combining the three perspectives, as all of them have become relevant for AI. A combined approach serves to systematically relate findings from different studies. And because AI systems affect people in different constellations, and no single perspective can accommodate them all, building blocks from several perspectives are needed to comprehensively study how AI is perceived in society.
1 Introduction
A burgeoning literature on attitudes toward AI has rapidly generated a substantial amount of empirical evidence regarding how people think about AI. This research has grown in the face of an increasing prevalence and public awareness of AI systems. These are systems with learned rules for processing data they receive from their environment to produce optimal outputs or actions (e.g., predictions or recommendations) given predefined objectives. AI systems can be employed in manifold ways and take many different forms. Research mainly from computer science has examined various aspects of AI systems, such as transparency and fairness, and the consequences that follow from these properties (Lepri et al. 2018; Kaur et al. 2023). Attitudinal research has quickly followed suit, complementing this work with insights on how people evaluate AI systems and their properties.
Research on AI attitudes from different disciplinary perspectives has produced a wealth of evidence, but it has also led to an increasingly fragmented field in which similar questions are addressed with different theoretical perspectives and models. At the same time, the relations between these models are usually not spelled out, which makes it harder to integrate empirical findings. This shortcoming is particularly problematic for AI, since AI comes in many different forms and entails variable relations between individuals and the technology. Not only are certain models more suitable for some cases and other models more suitable for others; certain forms of AI may also be studied from different angles at the same time, with potentially differing results regarding AI acceptance.
Against this backdrop, the present paper takes stock of and discusses existing approaches to studying attitudes toward AI. In doing so, it makes two theoretical contributions. First, it systematizes existing strands in research to provide an orienting framework for embedding studies and findings on AI attitudes. Second, it demonstrates a need for integration of existing models of technology acceptance specifically for studying AI as a technology that can affect people in different roles and with varying social scope while also differing in the extent to which AI has agent-like qualities. There are thus different possible settings and also perspectives for studying attitudes toward AI. Depending on the adopted perspective, elements specific to different models for examining AI acceptance become more relevant. Furthermore, certain forms of AI, particularly applications of generative AI, can make several theoretical perspectives simultaneously relevant as they affect people in different constellations at the same time.
Such an integrative perspective covers a diverse literature. This literature consists of studies looking at the acceptance of concrete AI systems as a function of design features (e.g., Shin 2021; Shin and Park 2019; König et al. 2022a, b; Nussberger et al. 2022) and of contextual and dispositional factors (e.g., Burton et al. 2020; Glikson and Woolley 2020). It also includes studies that adopt a broader perspective on how people generally think about AI and its potential consequences for society (Smith 2018; Grzymek and Puntschuh 2019; Zhang and Dafoe 2019; Araujo et al. 2018; European Commission 2020; Ada Lovelace Institute and The Alan Turing Institute 2023; Scantamburlo et al. 2023; Selwyn and Gallo Cordoba 2022).
The discussion below identifies three distinct families of theoretical perspectives which inform this extant literature: a traditional user-centered technology acceptance perspective with the Technology Acceptance Model (TAM) at its core, a delegation or automation acceptance perspective, and a societal adoption acceptance perspective. The first perspective, which centers on individuals as users of technology, recurrently appears in research on AI attitudes as an explicit model, albeit with various extensions and modifications. The delegation perspective is less frequent but also regularly spelled out as a theoretical framework. It expressly conceives of AI not simply as a tool to be used but more as an agent providing a service to someone with a performance that is not directly transparent and may thus have important hidden qualities. The delegation perspective thus foregrounds not only accountability challenges for the individual and the role of trust in AI but also extended delegation relations in which other actors, such as government bodies, adopt AI to perform tasks in the interest of affected individuals. The societal AI acceptance perspective, in turn, focuses on AI systems’ impact not on the individual but on society, e.g., in the form of effects on employment (see, e.g., Gallego and Kurer 2022), the working of the public sphere (Smith 2018), and the environment (König et al. 2023). The third perspective often remains implicit although an explicit template could be taken from research on risk technologies and transferred to the study of AI acceptance (e.g., Huijts et al. 2012). Including a broader and explicit model of societal AI acceptance also seems warranted in view of an increasing acknowledgment of AI’s relevance under the heading of sustainability.
The discussion below compares the three above-mentioned theoretical perspectives, describes their respective scope, and highlights which elements are specific to them. It will furthermore illustrate how a combined framework can cover central facets of AI acceptance that have been discussed in the literature. Before turning to this systematizing account, the following section will first provide an overview of extant research on attitudes toward AI.
2 The state of research on attitudes toward AI
The heterogeneity of the literature on AI attitudes has several sources. First, research comes from different disciplines. Besides research in the fields of information systems and human–computer interaction, one finds contributions from psychology, social sciences, business studies, and disciplines with an interest in specific applications of AI, such as health and mobility research. Second, the literature covers many different applications of AI, from low-risk consumer applications to high-risk systems to which individuals may even be exposed without having significant control or influence over them. Third, there is variation in the adopted theoretical frameworks, such as the technology acceptance model and related models (for an overview, see Sohn and Kwon 2020), and the chosen specific dependent variables. This heterogeneity is further compounded by the fact that the more recent work on acceptance of AI has antecedents in a literature on automation acceptance (Lee and See 2004).
The following account cannot do justice to the many facets of the quickly growing literature on AI attitudes, nor does it aim to be exhaustive. It illustrates the heterogeneity of this literature and motivates a need for systematically integrating different strands of research. Trying to integrate different strands may seem less relevant within certain disciplinary perspectives, when centering, e.g., on specific questions of product design and user acceptance. However, from a social science angle that aims at a comprehensive understanding of how AI systems are perceived and taken up in society, a broader and integrative approach is warranted. Given this wider perspective, the following account and the subsequent discussion are not only rooted in a social science perspective but also based on a broad reading of literature from different disciplines dealing with AI acceptance. Focusing largely on the last decade, the discussion also draws on various conceptual articles that have summarized existing research while also pointing to antecedents of the more recent research.
The heterogeneity of AI attitudes research already becomes palpable when looking at two recent literature reviews. The review by Glikson and Woolley (2020), which takes trust in AI as the core dependent variable, is rooted in a business research perspective interested largely in how workers rely on AI systems. The authors identify tangibility, transparency, reliability, task characteristics, and immediacy behavior as key dimensions that shape trust in AI. The literature review by Kelly et al. (2023), in turn, focuses on user acceptance of AI and is rooted more in information systems research. The review finds perceived usefulness, performance expectancy, trust, and personal dispositions to be among the key determinants of AI user acceptance. Notably, such user acceptance presumes individuals to be active users rather than passively relying on AI systems—as in the case of individuals working alongside AI. The two reviews thus cover similar or related research but assemble it under different perspectives and highlight partly different key attitudinal dimensions.
On the level of individual empirical studies, one similarly finds different approaches to studying acceptance of AI systems, as they usually focus on specific dimensions or variables. Studies dealing with algorithm aversion and appreciation (Logg et al. 2019; Dietvorst et al. 2015), for instance, have focused on the role of task context and characteristics and personal dispositions. Summing up previous research, Burton et al. (2020) identify expectations and expertise, decision autonomy, incentivization, cognitive compatibility, and divergent rationalities as five major conditioning factors of algorithm aversion. Other research, inspired by discussions of computer science about fair, transparent, and accountable (FAccT) AI, has focused on how people evaluate these FAccT features in the design and performance of AI systems (e.g., Candrian and Scherer 2022; Shin 2021; Shin and Park 2019; König et al. 2022a, b; Nussberger et al. 2022; Langer et al. 2023). A particular interest in accountability and legitimacy of AI uses is furthermore present in a growing body of political science and public administration studies on public sector AI applications (Aoki 2020; Grimmelikhuijsen 2022; Ingrams et al. 2021; Schiff et al. 2021; Starke and Lünich 2020). What makes this setting special is that citizens may often not interact with the technology itself, are possibly unable to opt out of AI uses, and are particularly vulnerable in relation to the power of the state.
Additional heterogeneity stems from social sciences and public opinion research. It often contains a mix of different constructs, partly adopted from the other research described above, and usually includes awareness, knowledge or prior experience, positive and negative evaluations, select questions about AI system design features, and attitudes toward regulation (Smith 2018; Grzymek and Puntschuh 2019; Zhang and Dafoe 2019; Araujo et al. 2018; European Commission 2020; Ada Lovelace Institute and The Alan Turing Institute 2023; Scantamburlo et al. 2023). The broad scope of this research also manifests in the fact that it commonly comprises different applications and impacts of AI—on the individual as well as on society at large—within a single study and survey. Variable AI impacts may even be covered within a single attitudinal scale. The General AI Attitudes Scale (Schepman and Rodway 2020), designed to study how people think about AI, comprises items concerning effects on the individual, such as the interest in using AI in one’s daily life, while others refer to the social impacts of AI, such as new economic opportunities for the country. While the validated scale captures a general evaluation of AI, it is notable that the included items also implicitly refer to different ways in which people relate to AI systems and entail different connotations of AI acceptance.
Overall, the emergence of a fragmented research field in recent years increases the need for integration and consolidation. At the same time, there are important distinctions in the literature that get lost when adopting a single notion of AI acceptance. As the preceding review of existing research illustrates, there are various constellations concerning the relation between the individual and the technology, and its impacts vary. A single or uniform approach to AI acceptance risks treating distinct connotations of AI acceptance as if they were the same thing. It also makes it harder to synthesize findings, as these may not be directly comparable and compatible due to the different perspectives adopted. Against this backdrop, the following sections aim at systematizing existing research to enhance mutual understanding among researchers, help to better contextualize findings, and support cumulative work.
3 Three perspectives on AI acceptance
3.1 User-centered technology acceptance
The Technology Acceptance Model (TAM) is arguably the most influential model in the study of what drives technology usage (Davis 1989; Venkatesh et al. 2003). It was initially proposed by Davis (1989) to study the intention to use a technological innovation as a function of its perceived usefulness and perceived ease of use. Since then, the model has seen various extensions. Besides adding a variable for actual use, which follows from the intention to use, these extensions have led to a third version of the TAM (Venkatesh and Bala 2008), which spells out antecedents of the original main variables, i.e., perceived usefulness and perceived ease of use. These antecedents comprise, among others, dispositional factors (subjective norms), experience and self-efficacy, and context factors (e.g., job relevance and result demonstrability). Yet other variables are included in the Value-Based Adoption Model (Kim et al. 2007), which has been derived from and is related to the TAM. It focuses on exogenous variables that capture user experience, such as enjoyment, to explain technology adoption acceptance.
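As a purely illustrative sketch, the TAM's core structure described above can be expressed as a pair of simple linear relations. The coefficients and the specific antecedent chosen here (job relevance) are hypothetical placeholders for exposition, not estimates taken from the literature:

```python
# Illustrative sketch of the TAM's core structure with hypothetical weights.
# PU = perceived usefulness, PEOU = perceived ease of use.

def tam_usefulness(peou: float, job_relevance: float,
                   b_peou: float = 0.4, b_rel: float = 0.4) -> float:
    """In extended TAM versions, PEOU and context factors such as
    job relevance are antecedents of perceived usefulness."""
    return b_peou * peou + b_rel * job_relevance

def tam_intention(pu: float, peou: float,
                  b_pu: float = 0.5, b_peou: float = 0.3) -> float:
    """Behavioral intention to use as a weighted sum of PU and PEOU;
    actual use then follows from this intention."""
    return b_pu * pu + b_peou * peou

# Example: a respondent rating PEOU = 4 and job relevance = 5
# (e.g., on a 1-7 scale) under these hypothetical weights:
pu = tam_usefulness(peou=4, job_relevance=5)   # 0.4*4 + 0.4*5 = 3.6
bi = tam_intention(pu=pu, peou=4)              # 0.5*3.6 + 0.3*4 = 3.0
```

In empirical applications these relations are typically estimated via structural equation modeling rather than fixed weights; the sketch only makes the model's causal ordering explicit.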
While the TAM can be used to study different kinds of technological innovation, it has been widely used in the study of information systems where it has proven to be a robust and reliable model (King and He 2006). Unsurprisingly, the TAM has also influenced the more recent wave of research on attitudes toward AI (for an overview, see Kelly et al. 2023). Applications of the TAM to study the acceptance of AI regularly show modifications to fit the study contexts. Yet contributions that draw on the TAM commonly stay true to its focus on the individual as the user of an AI system. This is the case, for instance, with consumers using products with AI components (Sohn and Kwon 2020) or farmers using AI for more precise use of resources (Mohr and Kühl 2021).
However, interpreting AI “use” along the same lines as the use of other technologies amounts to a significant limitation of the TAM. Since AI can have distinct agent-like qualities, unlike many other technologies that are more like tools, the notion of a mere user hardly applies (Ghazizadeh et al. 2012). As Schepman and Rodway (2020, 11) state, the TAM “reflect[s] users’ individual choices to use technology, but AI often involves decisions by others.” It is thus more suitable to describe the relation between user and technology as one of trust, involving a delegation of tasks (Ghazizadeh et al. 2012). Echoing this argument, Vorms and Combs (2022) point out that more recent intelligent systems present novel challenges that make trust in the technology a crucial construct. The agent-like character of AI thus sits uneasily with major premises of the TAM. In this sense, the particularities of AI are more directly addressed by models inherently about delegation.
3.2 Delegation and automation acceptance
The extent to which individuals are ready to delegate tasks to technological systems has been studied in the field of cognitive engineering since the 1980s (Lee and Kirlik 2013). While this field is generally concerned with the human-centered design of technologies and workplaces, it has particularly responded to advances in information technology and automation. Largely dealing with settings in which humans and automated systems work together to achieve certain goals, cognitive engineering has centered on agency and different levels of automation (Ghazizadeh et al. 2012). While at least the factor of task compatibility in automation is comparable to that of job relevance in the TAM, the focus on agency and the role of trust in the technology itself (rather than the organization providing it) clearly differ. Accordingly, models of automation acceptance, such as in the seminal contribution by Lee and See (2004), focus on trust in technology together with antecedents of this trust, such as automation performance.
The key assumptions of models for studying automation acceptance easily transfer to the acceptance of AI. Before the advent of machine learning and proliferation of AI systems, scholars pointed to the increasing adoption of computer programs that operated as agents—often not apparent to computer users—with a certain degree of autonomy and an ability to adapt their behavior (see, e.g., Dowling and Nicholson 2002). This work also highlighted the importance of evaluating the delegation of tasks to such computer programs in terms of fundamental aspects of human interaction such as trust, perceived competence, and intentions. Castelfranchi and Falcone (1998) emphasized the need to explicitly model the relationship to agent-based systems as one of delegation, with various possible conflicts arising from this relationship depending on the abilities and reliability of the agent.
This notion of delegation to an agent is also present in more recent research on acceptance of AI. Some research focuses directly on trust in AI or algorithms as the dependent variable (Glikson and Woolley 2020; Burton et al. 2020). Other contributions examine the relationship between an individual and an AI system in terms of acceptance of delegation (Bouwer 2022; Bel and Coeugnet 2023; Candrian and Scherer 2022), acceptance of technology agency (Morosan and Dursun-Cengizci 2023), or trust behavior, meaning the actual delegation of a task following from trust (Langer et al. 2023). Besides trust, the transparency and explainability of AI systems are central predictors of AI acceptance understood as the readiness to delegate tasks (e.g., Candrian and Scherer 2022; Shin 2021).
While psychological and computer science research on delegation to intelligent systems as agents has devised theoretical models to describe this relationship (e.g., Dowling and Nicholson 2002; Castelfranchi and Falcone 1998), a full-fledged model for analyzing delegation to human agents has long existed. This theoretical framework, the principal–agent model, was first developed in economics and political science (Hölmstrom 1979; Weingast and Moran 1983). It can be seen as the implicit basis of many of the delegation acceptance models used in the research cited above. Several contributions also explicitly mention this model as a suitable framework for understanding AI acceptance (De Fine Licht and De Fine Licht 2020; Krafft et al. 2020; Wieringa 2020). As described by Krafft et al. (2020), the major premises of the principal–agent model can be transferred to delegation to AI systems. In fact, the model even more extensively applies to machines than it does to humans because certain ways of achieving transparency—i.e., looking into people’s heads—are not (yet) possible with humans, but possible with certain AI systems.
The principal–agent model presumes that a principal relies on an agent to fulfill a task while operating in the principal’s interest. As the agent may pursue her own interests, though, there is a risk of agency loss for the principal, i.e., a discrepancy between her goals and the actual results achieved by the agent (Pratt and Zeckhauser 1991; Lane 2007). This challenge equally exists with AI systems. Furthermore, the problems of hidden intentions, hidden information, and hidden action that the principal faces also exist with AI systems—in the form of unknown biases, opaque data sources, and opaque operations. The core issue in this regard is that the observable performance of the agent may well be acceptable, but there may be hidden and undesirable qualities to it. The principal does not know whether she gets the best result she could get. Given these problems, the principal has an interest in scrutinizing the agent and finding ways to align her actions with the interests of the principal. This can be done through mechanisms for realizing accountability. Importantly, accountability goes beyond transparency as it also requires answerability and the ability to sanction as a way to exert actual control (Bovens 2007). Accordingly, a principal will be more likely to delegate to an agent the more she can ensure accountability.
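The core logic just described can be sketched in a few lines of code. All quantities below (outcome values, monitoring cost, outside option) are hypothetical illustrations introduced here for exposition, not elements of the cited principal–agent literature:

```python
# Illustrative sketch of the principal-agent logic: agency loss is the gap
# between the outcome the principal would ideally obtain and what the agent
# actually delivers; accountability mechanisms (monitoring, sanctioning)
# narrow that gap, but at a cost to the principal.

from dataclasses import dataclass

@dataclass
class Delegation:
    ideal_outcome: float      # value of the principal's best attainable result
    delivered_outcome: float  # value of what the agent actually produces
    monitoring_cost: float    # cost of the accountability mechanisms used

    @property
    def agency_loss(self) -> float:
        """Discrepancy between the principal's goal and the realized result."""
        return self.ideal_outcome - self.delivered_outcome

    def worth_delegating(self, outside_option: float) -> bool:
        """Delegate only if the net result beats doing without the agent."""
        return self.delivered_outcome - self.monitoring_cost > outside_option

# Example: an opaque system delivers 8 of an attainable 10 units of value;
# auditing it costs 1 unit, and the principal's fallback yields 6.
d = Delegation(ideal_outcome=10, delivered_outcome=8, monitoring_cost=1)
print(d.agency_loss)          # 2
print(d.worth_delegating(6))  # True
```

The sketch makes visible why accountability matters even when observable performance looks acceptable: the principal cannot directly observe `ideal_outcome` for an opaque agent, so the size of the agency loss is itself hidden information.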
Transposed to AI as an agent, acceptance of delegation will depend on the (a) transparency of an AI system, (b) its explainability—both with regard to how it produces outputs and to why the system has been designed and trained in a specific way—and (c) control, e.g., through a human in the loop and the ability to make corrections. Although the principal–agent model has thus far not been established as a formal theoretical basis for studying AI acceptance, those core elements of transparency, answerability/explainability, and control are present in many studies on AI acceptance involving delegation (e.g. Shin and Park 2019; Shin 2021; Grimmelikhuijsen 2022; Wenzelburger et al. 2022). In this sense, the principal–agent model can form a central analytical framework to study delegation acceptance for AI systems.
Importantly, the accountability dimension of this model becomes relevant precisely when and because there is a lack of trust in an agent. A common definition of trust is the expectation that an agent will act in one’s interest without being supervised or kept in check. Hence, if people trust an AI application, they may feel no need for transparency or for exerting control over it. Conversely, a lack of trust and a perceived risk of agency loss call for instruments that establish accountability.
3.3 Societal technology adoption acceptance
In Beck’s (1992) notion of risk societies, technological advances that are supposed to deal with or solve societal problems are themselves a major source of uncertainty and societal risk. Nuclear power and genetics are examples of technologies that entail risks not only for the individual but also for society. Observers have similarly pointed to possible broader risks that AI entails for society besides its possible benefits, e.g., through rapidly automating many jobs or spreading disinformation in the political public sphere. Attitudes toward AI have so far not been studied within an explicit model for acceptance of risk technologies or societal risk. However, several contributions on AI attitudes have acknowledged AI as a risk technology with broader social impacts (e.g., Galaz et al. 2021; König et al. 2023).
Further, there is ample research on public opinion about AI that has examined how people think about possible broader impacts of this technology, such as destroying jobs, providing economic opportunities, increasing well-being, and taking over more and more control within society and leading to a loss of freedom (e.g. Smith 2018; Zhang and Dafoe 2019; Araujo et al. 2018; Selwyn and Gallo Cordoba 2022; Schepman and Rodway 2020; Rainie et al. 2022). One can interpret these evaluations as socio-tropic in nature, as opposed to egocentric evaluations (on this distinction, see Lewis-Beck and Stegmaier 2018). Indeed, Borwein et al. (2023) have used the term socio-tropic evaluations with regard to labor market effects of AI, referring to the effects on society rather than the individual. And O’Shaughnessy et al. (2023) have explicitly distinguished self-benefit from societal benefit perceptions.
The interest in societal effects and socio-tropic evaluations in various studies about AI acceptance is specific to a societal perspective that also informs a rich literature on technologies with societal impacts. A review of the literature on acceptance of energy technology by Huijts et al. (2012) arrives at a comprehensive model of technology acceptance with many factors that are reminiscent of the TAM. It includes, for instance, subjective norms, experience, and knowledge, perceived behavioral control, and perceived benefits. However, the model adopts a societal perspective on technology acceptance, which implies that the dependent variable takes on a different meaning. It is about public acceptance of the societal use of a technology rather than about personal use as in the user-centric perspective of the TAM. Another important difference is that the costs, risks, and benefits in the societal adoption perspective cover socio-tropic evaluations. To take the example of nuclear energy, possible questions posed to respondents are about whether nuclear power degrades the environment, whether it is risky for society as a whole, or whether it has a positive impact on climate mitigation (De Groot et al. 2020).
As with other risky technologies, looking at AI as a societal risk technology entails a distinct perspective that foregrounds certain factors specific to it. Also, theoretical models of technology acceptance similar to the one by Huijts et al. (2012) employed for energy technologies can easily be transferred to AI. Indeed, the connection between attitudes toward energy technologies and attitudes toward AI becomes particularly straightforward in light of the environmental impacts of AI that have come to the fore in recent years (Dauvergne 2020). This being said, a particularity of AI is that it entails a larger range of risks for society, including distributional effects and consequences for social values such as fairness as well as for notions of desirable social development.
4 The three theoretical perspectives compared
4.1 Variation in highlighted aspects
Each of the three theoretical perspectives presented above, subsequently called the user acceptance, delegation acceptance, and societal adoption acceptance perspectives, has elements that are specific to it. Depending on which perspective one adopts, the meaning of technology acceptance—as the dependent variable—also shifts. With the user acceptance perspective, technology is regarded as a tool with which users interact directly. This means that besides the perceived usefulness of the tool, the perceived ease of use—i.e., when directly handling the technology—is a central factor. Technology acceptance in that case is about the willingness to use the technology and actual use. This perspective is suitable, for instance, for an AI-based translation tool that someone uses to speed up text production and processing, the performance of which the user can directly assess. Other applications that come close to tools include image generation or scenario modeling and forecasting, e.g., in business planning.
The delegation acceptance perspective instead sees technology as an agent rather than a tool. Individuals do not necessarily handle the technology themselves, but rather instruct it or merely (passively) rely on it. Technology acceptance therefore takes on the meaning of willingness to delegate. This perspective could be applied, e.g., to intelligent agents providing recommendations similar to what a human service provider might do. An example is an AI travel planning companion. Those receiving the service cannot directly assess the performance of the agent and ascertain what quality of service they received, as the AI system’s recommendations may have hidden biases primarily serving the interests of third parties. This challenge is not so much about ease of use, which is about getting the technology to work as intended by a competent user handling it. Rather, it is about agency loss, which depends more on hidden agent properties. Specific to this delegation perspective are the features of trust in the agent and perceived accountability features, i.e., for transparency, explainability, and control.
Finally, in the perspective of societal technology adoption acceptance, the individual is affected as a part of a larger collective, i.e., society, and technology acceptance means accepting that a technology is adopted and increasingly used in society. Specific to this model are socio-tropic evaluations of the technology, i.e., impacts on society rather than just the individual (as a technology user). Under this perspective, one might inspect, e.g., people’s views about the social and political impacts of AI-based filtering for the curation of online content on platforms. Such applications of AI could also be examined regarding their effects on the individual—and these can be studied with the user acceptance and delegation acceptance perspectives—but certain AI systems may be particularly relevant regarding their broader societal effects. Other consequences of AI that people may evaluate are inherently on the social level, such as environmental harms due to a growing energy footprint of widespread AI uses or the impact of AI uses on labor markets. Table 1 summarizes these differences between the models.
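As a schematic illustration of the comparison above, the choice of applicable perspectives can be expressed as a function of two questions about an application: whether affected individuals directly handle the technology as a tool, and whether its relevant impacts are societal. The decision rule below is a simplification introduced here for exposition, not a procedure proposed in the literature:

```python
# Illustrative sketch: mapping an AI application onto the three acceptance
# perspectives via two questions about the individual-technology relation.
# Several perspectives can apply at once.

def applicable_perspectives(direct_tool_use: bool,
                            societal_impact: bool) -> list[str]:
    """Return the acceptance perspectives most relevant to an application."""
    perspectives = []
    if direct_tool_use:
        perspectives.append("user acceptance")        # technology as a tool
    else:
        perspectives.append("delegation acceptance")  # technology as an agent
    if societal_impact:
        perspectives.append("societal adoption acceptance")
    return perspectives

# Example: AI-based content curation on a platform is not directly handled
# by those it affects and has broad social impacts:
print(applicable_perspectives(direct_tool_use=False, societal_impact=True))
# ['delegation acceptance', 'societal adoption acceptance']
```

The point of the sketch is merely that the perspectives are not mutually exclusive: the same application can trigger several of them, which motivates the combined framework discussed next.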
4.2 Variation in scope
Based on their core theoretical premises and the intentions with which they have been formulated, the three described theoretical perspectives differ in scope. They cover different parts of a space describing different ways of relating to and evaluating the possible impacts of AI, as shown in Fig. 1. The vertical dimension in the figure distinguishes the extent to which people directly interact with and have control over an AI system. It can essentially be a tool that people directly interact with and that has directly observable characteristics and performance. However, the relationship to an AI system can also be remote: the system can have agent-like qualities and unknown biases that introduce some form of possible agency loss, and/or third parties may wield the AI system, which constitutes an extended delegation relationship. The horizontal dimension in the chart describes the distinction between the focus on AI impacts on the individual versus impacts on society. In the latter case, acceptance of the technology is not evaluated in terms of how it affects an individual, but in terms of its adoption in society more broadly and how it affects individuals as members of society.
The more one moves to the top-right part of the figure, the more remote are AI systems from a person while still possibly affecting her. Individuals may even evaluate AI systems that they do not interact with at all while they are nonetheless affected by aggregate impacts of the technology on society. A passive role of the individual is, however, not a necessary condition for being able to make socio-tropic AI evaluations. A person may interact with an AI system, such as a consumer recommender system, herself but still also evaluate it in socio-tropic terms, e.g., regarding its environmental impact through its widespread usage. Both ego- and socio-tropic evaluations can be of interest.
Within the space described by Fig. 1, the three models cover different areas based on their core theoretical premises. The user acceptance perspective, with its focus on individual users handling technology as a tool rather than an agent, can be situated in the bottom-left corner of the chart. The delegation acceptance perspective covers AI uses in which affected individuals take a more passive role as they rely on some service provided by an AI system. This can, however, entail different delegation relationships. First, individuals may directly interact with an AI system to which they (partly) delegate decisions. Second, they may also be affected by an AI system through other actors employing the system—as an agent or a tool—to provide some service for the individual. This is the case, for instance, if medical practitioners use AI systems for prognosis or if the state adopts AI to provide better services for citizens, such as through faster processing of tax statements. One can understand constellations of this sort in terms of a delegation chain, i.e., subsequent steps of delegation that can compound accountability challenges (Nielson and Tierney 2003). And this constellation can be analyzed within the delegation acceptance perspective (and the principal–agent model).
Third, even where an individual is merely the object of AI processing and outputs (e.g., classification) and there is no formal act of delegation, one can still see this relationship as an implicit principal–agent relation. This is the case, e.g., with credit default risk assessments, which serve the organization using the system rather than those being assessed. As affected individuals have certain rights and legitimate interests (e.g., no unfair discrimination), they are an implicit or external principal who can hold the agent accountable for acting in ways that violate these rights and interests (Krafft et al. 2020). Clearly, this implicit delegation relationship is linked to a very passive role of the individual in relation to the AI system and its impacts.
The societal risk technology model is about the broader impacts of technology, regardless of whether it is agent-like or more like a tool. Unlike in the user acceptance and the delegation acceptance perspectives, the adoption of the technology and its acceptance are evaluated in terms of its effects on society, e.g., its consequences for the labor market, the working of democracy, or the environment. Notably, this perspective does not exclude the delegation perspective. Certain AI applications can be viewed not only in terms of delegation by an individual receiving a service as an individual but also in terms of delegation by individuals as members of society. In the latter case, it is a collective subject delegating and the impacts of the technology are not merely those on an individual. This relationship even takes on a formal character through citizens being represented by the government to shape the societal adoption of technology in line with citizens’ interests. The societal adoption acceptance of AI therefore has a latent political dimension to it. As such, it directly links up with studies that have examined citizens’ demand for regulation (e.g. Ada Lovelace Institute and The Alan Turing Institute 2023; Zhang and Dafoe 2019) and regulatory preferences (König et al. 2023). Attitudes of this sort can be understood as referring to a delegation relationship in which the state, acting in the interest of the people, is supposed to regulate AI.
One should note that demarcations between the three perspectives are not as clear-cut as suggested by Fig. 1. Indeed, the fact that there is an entire space of constellations as illustrated in the figure has created a need to adapt existing models, particularly the TAM, to specific contexts. The TAM has been extended to better fit the acceptance of automation or delegation to AI. Notably, contributions extending the TAM have pointed out its limitations for studying AI and stressed the need to integrate trust (Acharya and Mekker 2022; Vorm and Combs 2022; Bel and Coeugnet 2023; Chen et al. 2023; Choung et al. 2023)—a factor central to the delegation model. However, while taking the TAM as the reference model may seem appealing due to the model’s influence and popularity, this means remaining within a framework that does not ideally suit AI as a technology with agent-like properties.
Important limitations remain when keeping the general premises of the TAM. Since it was developed for users directly interacting with technology, framing use as a form of planned behavior, it is less suitable for settings in which individuals explicitly or implicitly delegate tasks to AI—which may involve merely the acceptance that an AI makes decisions for an individual. The outlook of the TAM does not fit well with the study of societal adoption acceptance of technology either, as the latter is even less about individual users and planned behavior and involves socio-tropic evaluations of the technology. In that sense, the TAM has more of a heuristic value when applied beyond its original scope, while other models are inherently more applicable.
The delegation perspective, in turn, can involve different degrees of passivity. In certain cases, an individual can be affected by an AI system even while it is others, e.g., government bodies, who are directly interacting with it. Under these conditions, the broader context of delegation and particularly the trust in organizations deploying AI systems may then be central to AI acceptance (Wenzelburger et al. 2022; Schiff et al. 2023). Especially under conditions of extended or implicit delegation as described above, the delegation perspective shows a stronger affinity to socio-tropic evaluations of AI under the societal adoption acceptance perspective. Due to their highly passive role and low agency, those affected may see themselves less as affected individuals and more as members of a social group or of society.
5 A combined framework
Bringing the three described perspectives together serves to contextualize studies of AI acceptance by spelling out how AI acceptance is understood and studied. This requires being sensitive to changes in the meaning of AI acceptance as the dependent variable and considering which elements are specific to and primarily relevant for the chosen perspective. It is important in this regard to consider the constellation in which those stating their AI acceptance relate to the technology as they can be affected by AI in different roles. Drawing on distinct connotations of AI acceptance in a broader framework can thus help to situate research within a diverse literature. It can also form a basis for building a compilation of survey items that considers theoretically relevant distinctions (see Appendix Table A1 for examples).
Figure 2 compiles the three perspectives and core building blocks described further above. It highlights important shifts regarding AI acceptance as a dependent variable due to differing conceptualizations of this acceptance and predictors that are specific to the theoretical perspectives. The elements marked in dark gray are from the user acceptance, those in light gray with a dashed line are from the delegation acceptance perspective, and the white elements with a dotted line are from the societal adoption acceptance perspective.
The user acceptance perspective covers direct use of a technology and includes perceived usefulness based on personal benefits as well as costs and risks and perceived ease of use as central predictors. The delegation acceptance perspective includes accountability based on transparency, answerability/explainability, and control/correctability together with trust. Trust itself can be conceived of as stemming from perceived ability, benevolence, and integrity, following the seminal framework by Mayer et al. (1995)—which has been adopted by some authors specifically for the study of AI acceptance (e.g., Vorm and Combs 2022; Langer et al. 2023). Although perceived accountability features such as transparency can themselves contribute to trust (Vorm and Combs 2022), perceived accountability can also be a separate important factor and particularly relevant where trust is lacking. Accountability measures essentially serve to mitigate agency loss and the risks linked to it. Agency loss means that the agent does not act in the way it is supposed to, thus not fully realizing the interests and goals of the delegating actor.
The societal adoption acceptance perspective includes socio-tropic evaluations of benefits, costs, and risks for society stemming from AI (including consequences for fairness or personal autonomy). These evaluations may diverge from people’s egocentric or pocketbook evaluations. For instance, a person may think that AI benefits her economically or in her own career while she thinks that this technology has an overall negative effect on the labor market. Finally, various additional factors, such as knowledge and experience and dispositional factors such as technophobia/computer anxiety or subjective norms, are not firmly tied to any of the theoretical perspectives.
Elements from the three perspectives can be jointly relevant in the study of AI acceptance. Their applicability depends on how a study and study context relate to the distinctions shown in the chart and taken from Fig. 1 further above. Whether ego-tropic or socio-tropic perceptions of (positive and negative) AI consequences are relevant depends on whether impacts on the individual or society are of interest. Socio-tropic evaluations of AI impacts are primarily relevant for societal adoption acceptance. However, ego-tropic evaluations (e.g., based on personal use) might also influence the degree to which people accept widespread societal adoption of AI systems. Ego-tropic evaluations are primarily applicable where consequences for the individual are of interest, which can be valid when studying user acceptance as much as when studying delegation acceptance.
The applicability of other elements in Fig. 2 depends on the role of the individual and its relation to the technology. Overall, it is not the technology per se that determines the suitability of these elements, but rather how AI is applied, i.e., which delegation relationship it involves and whether effects on the individual or on society are of interest. While ease of use is relevant for very direct interaction and control over AI systems, accountability aspects and trust in the AI system become relevant where AI operates like an agent. Accountability and control over an AI agent can be regarded as being related to ease of use in the sense that they are both about employing technology in the way intended to achieve certain goals. The mechanisms for exerting control are different, though. Finally, for extended delegation relationships, in which affected persons are not directly interacting with the AI system themselves, this lack of control makes trust in the organization deploying the AI system particularly relevant.
Combining the three theoretical perspectives discussed above promises to be useful in the study of acceptance of AI. A combined framework allows for systematically selecting relevant elements depending on which notion of AI acceptance is of interest. When covering different AI systems within a single study, all three perspectives may become relevant for investigating acceptance of those systems. Further, it is possible to study the same AI system from different angles, e.g., through examining both ego- and socio-tropic evaluations—which may well diverge to some extent.
With certain forms of AI, all three perspectives can even be simultaneously relevant. This is the case particularly with applications of AI that can be applied to a range of different cognitive tasks, such as generative AI. As Helberger and Diakopoulos (2023) argue, these AI systems can be seen as a general-purpose technology that does not neatly fall into established categories and/or risk classes. Looking at the way in which generative AI, such as ChatGPT, can be used, it can come close to a tool for which performance quality can be assessed directly, e.g., when it is used to draft parts of a text to save time. It can, however, also be an agent to which tasks are delegated for which performance, bias, and fairness matter and cannot easily be ascertained—for instance, when text-based generative AI is used to provide recommendations about subjects that require technical expertise. Finally, generative AI can also be regarded as a societal risk technology with broader effects. It could have, for instance, undesirable effects through replacing jobs or harming public opinion formation through disinformation. Certain applications of generative AI may thus be investigated from different angles, and people's acceptance may well differ depending on the angle taken.
6 Discussion and conclusion
To integrate the findings from the different disciplines and theoretical angles in the growing literature dealing with acceptance of AI systems, it is important to understand how these relate to each other. The challenge of integrating this work stems from the fact that different disciplinary perspectives may foreground different aspects when modeling technology acceptance. Furthermore, AI takes many forms and can be deployed in many different settings. Accordingly, existing research covers manifold constellations and variable roles in which people are affected by AI.
Individuals can be users who employ an AI system like a tool or who delegate tasks to an AI system acting like an agent that provides a service to them—with individuals being more passive rather than active users. They may indirectly receive a service through an AI system, without interacting with the system themselves. This is the case, e.g., when the state adopts AI systems to provide services to citizens. Individuals can also be the object of AI outputs, such as risk assessments, which, however, serve other actors rather than the object of data processing. And finally, citizens can be affected by AI less as individuals but through broader societal impacts that affect citizens as members of society.
Treating different constellations under a single broad notion of AI acceptance hides theoretically important distinctions. Further, no single existing theoretical perspective or model properly accommodates all the described constellations. The discussion above has therefore argued for a need to bring together three different broader perspectives, namely user acceptance, delegation acceptance, and societal adoption acceptance perspectives. Each implies a different understanding of what technology acceptance exactly refers to: (a) intention to use or actual use, (b) willingness to delegate and actual delegation, or (c) acceptance of the societal adoption of technology. The perspectives come not only with different theoretical premises but also with central factors that are specific to them. Given that AI can be studied from different angles, it is important to spell out and contextualize how it is approached: which of the various possible relations between AI and an individual is/are of interest and which theoretical perspective is foregrounded?
The most prominent of the three perspectives is the user-centered perspective, specifically in the form of the Technology Acceptance Model. Over decades, this model has proven to be versatile and robust. However, as has been illustrated above, the core theoretical premises of the Technology Acceptance Model cover only a small part of a universe of constellations that comprises possible relations between an individual and AI system. To deal with the particularities of AI, some contributions have argued for extending the TAM (e.g., Vorm and Combs 2022; Bel and Coeugnet 2023). However, while this leverages the TAM's heuristic value, it does not eliminate the model's limitations. Due to its focus on the user, the societal perspective and technology affecting society rather than merely the individual are foreign to that model. Further, the user-centered model does not sit well with pronounced agent-like qualities of AI systems, which often mean that individuals are more passive and not like users employing technology as a tool that they handle more or less competently. The delegation acceptance perspective—and the principal–agent model inherent to this perspective—naturally accommodates AI's agent-like qualities. It can also comprise more complex delegation relationships that can become relevant for AI adoption and acceptance. As it incorporates the central concepts of accountability as well as trust, this perspective also directly connects with the accountable algorithm literature.
Ultimately, building blocks from all theoretical perspectives are important in the study of AI acceptance. Combining them not only serves cumulative research, but it is also important to tackle specific research problems and can open up some new directions. First, situating research among the described perspectives can help to contextualize studies and thereby establish stronger links between related research areas. As research on AI acceptance continues to grow, organizing and integrating the evidence will only become more important. Being explicit about the angle adopted in a study also helps to understand how exactly empirical findings on AI acceptance relate to and complement each other. Otherwise, it may seem like studies are talking about the same thing, but results may not be directly comparable as they refer to different notions and aspects of AI acceptance.
Second, integrating the different perspectives in a single survey allows one to relate attitudes from these perspectives to one another (see Appendix Table A1 for examples of survey items). This means being able to uncover discrepancies between these attitudes as, for instance, ego- and socio-tropic evaluations of certain AI applications might diverge. Indeed, existing findings indicate that the perception of AI systems is multifaceted and not necessarily consistent. For instance, while certain desirable features of AI systems such as their performance may be important for their acceptance when asking citizens about their design (König et al. 2022a, b; Horvath et al. 2023), other factors such as trust in the organization deploying these systems may be more important when people evaluate their societal adoption (Wenzelburger et al. 2022; Schiff et al. 2023; Kleizen et al. 2023). Integrating the different perspectives in a single study is also a prerequisite for examining potential spillovers, i.e., evaluations under one perspective affecting those under another. Positive ego-tropic evaluations of certain AI systems or trust in the systems might influence socio-tropic evaluations of the same or other systems. Finally, with applications of generative AI, an AI system can even become relevant under several perspectives at the same time—since they can be used as tools (e.g., speed up drafting a letter), operate similar to agents to which tasks are delegated (provide recommendations), and have impacts on society at large (producing disinformation).
Overall, the combination of different perspectives is important because AI, unlike most other technologies, can affect people in different ways, roles, and constellations. This may be less relevant with a narrow research interest, for instance, when studying how people perceive a specific AI-based consumer application. The broader approach is, however, very relevant from a social science perspective that aims at a comprehensive understanding of how AI and its impacts are perceived in the larger public. For policymakers, the discussion accordingly implies that there is a risk of getting a partial view and wrong impression of AI acceptance in society. Understanding how citizens think about AI requires a differentiated perspective, as their views are likely multifaceted and there are several ways in which AI acceptance can manifest.
It should be noted that the above discussion of how AI acceptance can be studied has focused on quantitative research using standardized surveys. Yet it is clearly also possible to use qualitative interviews to get a deeper understanding of how people perceive AI systems. Research along these lines has shown that people have folk theories concerning AI (Bucher 2018). By demonstrating how intricate psychological mechanisms, e.g., of self-blame or gratitude, can shape the perception of power relations and people's own agency in AI use (e.g., Ramesh et al. 2022), qualitative evidence directly connects with critical AI and algorithm studies. It can inform and complement quantitative research with important insights. The three perspectives described above, in turn, can also provide an analytical scaffolding for qualitative studies while informing new kinds of questions that explore people's perceptions of AI under different perspectives within the same study. All in all, systematically combining distinct facets of AI acceptance promises to yield a deeper understanding of the relationship between AI and society.
References
Acharya S, Mekker M (2022) Public acceptance of connected vehicles: an extension of the technology acceptance model. Transp Res Part F Traffic Psychol Behav 88(July):54–68. https://doi.org/10.1016/j.trf.2022.05.002
Ada Lovelace Institute, The Alan Turing Institute (2023) How do people feel about AI? A nationally representative survey of public attitudes to artificial intelligence in Britain. Ada Lovelace Institute, London
Aoki N (2020) An Experimental study of public trust in AI chatbots in the public sector. Gov Inf Q 37(4):101490. https://doi.org/10.1016/j.giq.2020.101490
Araujo T, de Vreese C, Helberger N, Kruikemeier S, van Weert J, Oberski D, Pechenizkiy M, Schaap G, Taylor L (2018) Automated decision-making fairness in an AI-driven world: public perceptions, hopes and concerns. Digital Communication Methods Lab, Amsterdam http://www.digicomlab.eu/reports/2018_adm_by_ai/
Beck U (1992) Risk society: towards a new modernity. Sage Publications, London
Bel M, Coeugnet S (2023) The delegation-level choice of an automated vehicle: an analysis by structural equation modeling. Int J Hum Comput Interact. https://doi.org/10.1080/10447318.2023.2170368
Borwein S, Magistro B, Loewen PJ, Bonikowski B, Lee-Whiting B (2023) The gender gap in attitudes toward workplace technological change, pp 1–38
Bouwer A (2022) Under which conditions are humans motivated to delegate tasks to AI? A taxonomy on the human emotional state driving the motivation for AI delegation. In: Reis JL, López EP, Moutinho L, Santos JPMD (eds) Marketing and smart technologies. Smart innovation, systems and technologies, vol 279. Springer Nature Singapore, Singapore, pp 37–53. https://doi.org/10.1007/978-981-16-9268-0_4
Bovens M (2007) Analysing and assessing accountability: a conceptual framework. Eur Law J 13(4):447–468. https://doi.org/10.1111/j.1468-0386.2007.00378.x
Bucher T (2018) If then: algorithmic power and politics. Oxford University Press, New York
Burton JW, Stein M-K, Jensen TB (2020) A systematic review of algorithm aversion in augmented decision making. J Behav Decis Mak 33(2):220–239. https://doi.org/10.1002/bdm.2155
Candrian C, Scherer A (2022) Rise of the machines: delegating decisions to autonomous AI. Comput Hum Behav 134(September):107308. https://doi.org/10.1016/j.chb.2022.107308
Castelfranchi C, Falcone R (1998) Towards a theory of delegation for agent-based systems. Robot Auton Syst 24(3–4):141–157. https://doi.org/10.1016/S0921-8890(98)00028-1
Chen Y, Khan SK, Shiwakoti N, Stasinopoulos P, Aghabayk K (2023) Analysis of Australian public acceptance of fully automated vehicles by extending technology acceptance model. Case Stud Transp Policy 14(December):101072. https://doi.org/10.1016/j.cstp.2023.101072
Choung H, David P, Ross A (2023) Trust in AI and its role in the acceptance of AI technologies. Int J Hum-Comput Interact 39(9):1727–1739. https://doi.org/10.1080/10447318.2022.2050543
Dauvergne P (2020) Is artificial intelligence greening global supply chains? Exposing the political economy of environmental costs. Rev Int Polit Econ 2:1–23. https://doi.org/10.1080/09692290.2020.1814381
Davis FD (1989) Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q 13(3):319. https://doi.org/10.2307/249008
De Fine Licht K, De Fine Licht J (2020) Artificial intelligence, transparency, and public decision-making: why explanations are key when trying to produce perceived legitimacy. AI Soc 35(4):917–926. https://doi.org/10.1007/s00146-020-00960-w
De Groot JIM, Schweiger E, Schubert I (2020) Social influence, risk and benefit perceptions, and the acceptability of risky energy technologies: an explanatory model of nuclear power versus shale gas. Risk Anal 40(6):1226–1243. https://doi.org/10.1111/risa.13457
Dietvorst BJ, Simmons JP, Massey C (2015) Algorithm aversion: people erroneously avoid algorithms after seeing them err. J Exp Psychol Gen 144(1):114–126. https://doi.org/10.1037/xge0000033
Dowling C, Paul N (2002) Choice and responsibility: the delegation of decision making to intelligent software agents. In: Brunnstein K, Berleur J (eds) Human choice and computers. IFIP advances in information and communication technology, vol 98. Springer US, Boston, MA, pp 163–170. https://doi.org/10.1007/978-0-387-35609-9_13
European Commission (2020) Eurobarometer 92.3 (2019): Standard Eurobarometer 92. GESIS Data Archive. https://doi.org/10.4232/1.13564
Galaz V, Centeno MA, Callahan PW, Causevic A, Patterson T, Brass I, Baum S et al (2021) Artificial intelligence, systemic risks, and sustainability. Technol Soc 67(November):101741. https://doi.org/10.1016/j.techsoc.2021.101741
Gallego A, Thomas K (2022) Automation, digitalization, and artificial intelligence in the workplace: implications for political behavior. Ann Rev Polit Sci. https://doi.org/10.1146/annurev-polisci-051120-104535
Ghazizadeh M, Lee JD, Boyle LN (2012) Extending the technology acceptance model to assess automation. Cogn Technol Work 14(1):39–49. https://doi.org/10.1007/s10111-011-0194-3
Glikson E, Woolley AW (2020) Human trust in artificial intelligence: review of empirical research. Acad Manag Ann 14(2):627–660. https://doi.org/10.5465/annals.2018.0057
Grimmelikhuijsen S (2022) Explaining why the computer says no: algorithmic transparency affects the perceived trustworthiness of automated decision-making. Publ Admin Rev. https://doi.org/10.1111/puar.13483
Grzymek V, Michael P (2019) What Europe knows and thinks about algorithms results of a representative survey. Bertelsmann Stiftung, Gütersloh
Helberger N, Diakopoulos N (2023) ChatGPT and the AI Act. Internet Policy Rev. https://doi.org/10.14763/2023.1.1682
Holmström B (1979) Moral hazard and observability. Bell J Econ 10(1):74–91
Horvath L, James O, Banducci S, Beduschi A (2023) Citizens’ acceptance of artificial intelligence in public services: evidence from a conjoint experiment about processing permit applications. Gov Inf Q 40(4):101876. https://doi.org/10.1016/j.giq.2023.101876
Huijts NMA, Molin EJE, Steg L (2012) Psychological factors influencing sustainable energy technology acceptance: a review-based comprehensive framework. Renew Sustain Energy Rev 16(1):525–531. https://doi.org/10.1016/j.rser.2011.08.018
Ingrams A, Kaufmann W, Jacobs D (2021) In AI we trust? Citizen perceptions of AI in government decision making. Policy Internet. https://doi.org/10.1002/poi3.276
Kaur D, Uslu S, Rittichier KJ, Durresi A (2023) Trustworthy artificial intelligence: a review. ACM Comput Surv 55(2):1–38. https://doi.org/10.1145/3491209
Kelly S, Kaye S-A, Oviedo-Trespalacios O (2023) What factors contribute to the acceptance of artificial intelligence? A systematic review. Telemat Inform 77(February):101925. https://doi.org/10.1016/j.tele.2022.101925
Kim H-W, Chan HC, Gupta S (2007) Value-based adoption of mobile internet: an empirical investigation. Decis Supp Syst 43(1):111–126. https://doi.org/10.1016/j.dss.2005.05.009
King WR, He J (2006) A meta-analysis of the technology acceptance model. Inf Manag 43(6):740–755. https://doi.org/10.1016/j.im.2006.05.003
Kleizen B, Van Dooren W, Verhoest K, Tan E (2023) Do citizens trust trustworthy artificial intelligence? Experimental evidence on the limits of ethical AI measures in government. Gov Inf Q 40(4):101834. https://doi.org/10.1016/j.giq.2023.101834
König PD, Felfeli J, Achtziger A, Wenzelburger G (2022a) The importance of effectiveness versus transparency and stakeholder involvement in citizens' perception of public sector algorithms. Public Manag Rev. https://doi.org/10.1080/14719037.2022.2144938
König PD, Wurster S, Siewert MB (2022b) Consumers are willing to pay a price for explainable, but not for green AI. Evidence from a choice-based conjoint analysis. Big Data Soc 9(1):1–13. https://doi.org/10.1177/20539517211069632
König PD, Wurster S, Siewert MB (2023) Sustainability challenges of artificial intelligence and citizens' regulatory preferences. Gov Inf Q. https://doi.org/10.1016/j.giq.2023.101863
Krafft TD, Zweig KA, König PD (2020) How to regulate algorithmic decision-making: a framework of regulatory requirements for different applications. Regul Gov. https://doi.org/10.1111/rego.12369
Lane J-E (2007) Comparative politics: the principal-agent perspective. Routledge, Milton Park. https://doi.org/10.4324/9780203935545
Langer M, König CJ, Back C, Hemsing V (2023) Trust in artificial intelligence: comparing trust processes between human and automated trustees in light of unfair bias. J Bus Psychol 38(3):493–508. https://doi.org/10.1007/s10869-022-09829-9
Lee JD, Kirlik A (2013) Introduction to the handbook. In: Lee JD, Kirlik A (eds) The Oxford handbook of cognitive engineering. Oxford University Press, Oxford, pp 3–16. https://doi.org/10.1093/oxfordhb/9780199757183.013.0001
Lee JD, See KA (2004) Trust in automation: designing for appropriate reliance. Hum Fact 46(1):50–80. https://doi.org/10.1518/hfes.46.1.50_30392
Lepri B, Oliver N, Letouzé E, Pentland A, Vinck P (2018) Fair, transparent, and accountable algorithmic decision-making processes: the premise, the proposed solutions, and the open challenges. Philos Technol 31(4):611–627. https://doi.org/10.1007/s13347-017-0279-x
Lewis-Beck MS, Stegmaier M (2018) Economic voting. In: Congleton RD, Grofman B, Voigt S (eds) The Oxford handbook of public choice, vol 1. Oxford University Press, Oxford, pp 247–265
Logg JM, Minson JA, Moore DA (2019) Algorithm appreciation: people prefer algorithmic to human judgment. Organ Behav Hum Decis Process 151(March):90–103. https://doi.org/10.1016/j.obhdp.2018.12.005
Mayer RC, Davis JH, Schoorman FD (1995) An integrative model of organizational trust. Acad Manag Rev 20(3):709–734. https://doi.org/10.2307/258792
Mohr S, Kühl R (2021) Acceptance of artificial intelligence in German agriculture: an application of the technology acceptance model and the theory of planned behavior. Precis Agric 22(6):1816–1844. https://doi.org/10.1007/s11119-021-09814-x
Morosan C, Dursun-Cengizci A (2023) Letting AI make decisions for me: an empirical examination of hotel guests’ acceptance of technology agency. Int J Contemp Hosp Manag. https://doi.org/10.1108/IJCHM-08-2022-1041
Nielson DL, Tierney MJ (2003) Delegation to international organizations: agency theory and world bank environmental reform. Int Organ 57(2):241–276. https://doi.org/10.1017/S0020818303572010
Nussberger A-M, Luo L, Celis E, Crockett MJ (2022) Public attitudes value interpretability but prioritize accuracy in artificial intelligence. Nat Commun 13(1):5821. https://doi.org/10.1038/s41467-022-33417-3
O’Shaughnessy MR, Schiff DS, Varshney LR, Rozell CJ, Davenport MA (2023) What governs attitudes toward artificial intelligence adoption and governance? Sci Public Policy 50(2):161–176. https://doi.org/10.1093/scipol/scac056
Pratt JW, Zeckhauser RJ (1991) Principals and agents: an overview. In: Pratt JW, Zeckhauser RJ (eds) Principals and agents: the structure of business. Harvard Business School Press, Boston, pp 1–36
Rainie L, Funk C, Anderson M, Tyson A (2022) AI and human enhancement: Americans' openness is tempered by a range of concerns. Pew Research Center, Washington, DC. http://www.pewInternet.org/2017/02/08/code-dependent
Ramesh D, Kameswaran V, Wang D, Sambasivan N (2022) How platform-user power relations shape algorithmic accountability: a case study of instant loan platforms and financially stressed users in India. In: 2022 ACM conference on fairness, accountability, and transparency. ACM, Seoul, pp 1917–1928. https://doi.org/10.1145/3531146.3533237
Scantamburlo T, Cortés A, Foffano F, Barrué C, Distefano V, Pham L, Fabris A (2023) Artificial intelligence across Europe: a study on awareness, attitude and trust. arXiv preprint. https://doi.org/10.48550/ARXIV.2308.09979
Schepman A, Rodway P (2020) Initial validation of the general attitudes towards artificial intelligence scale. Comput Hum Behav Rep. https://doi.org/10.1016/j.chbr.2020.100014
Schiff DS, Schiff KJ, Pierce P (2021) Assessing public value failure in government adoption of artificial intelligence. Public Admin. https://doi.org/10.1111/padm.12742
Schiff KJ, Schiff DS, Adams IT, McCrain J, Mourtgos SM (2023) Institutional factors driving citizen perceptions of AI in government: evidence from a survey experiment on policing. Public Admin Rev. https://doi.org/10.1111/puar.13754
Selwyn N, Cordoba BG (2022) Australian public understandings of artificial intelligence. AI Soc 37(4):1645–1662. https://doi.org/10.1007/s00146-021-01268-z
Shin D (2021) The effects of explainability and causability on perception, trust, and acceptance: implications for explainable AI. Int J Hum-Comput Stud 146:102551. https://doi.org/10.1016/j.ijhcs.2020.102551
Shin D, Park YJ (2019) Role of fairness, accountability, and transparency in algorithmic affordance. Comput Hum Behav 98(September):277–284. https://doi.org/10.1016/j.chb.2019.04.019
Smith A (2018) Public attitudes toward computer algorithms. Pew Research Center, Washington
Sohn K, Kwon O (2020) Technology acceptance theories and factors influencing artificial intelligence-based intelligent products. Telematics Inform 47(April):101324. https://doi.org/10.1016/j.tele.2019.101324
Starke C, Lünich M (2020) Artificial intelligence for political decision-making in the European Union: effects on citizens’ perceptions of input, throughput, and output legitimacy. Data Policy 2:e16. https://doi.org/10.1017/dap.2020.19
Venkatesh V, Morris MG, Davis GB, Davis FD (2003) User acceptance of information technology: toward a unified view. MIS Quart 27(3):425–478. https://doi.org/10.2307/30036540
Venkatesh V, Bala H (2008) Technology acceptance model 3 and a research agenda on interventions. Decis Sci 39(2):273–315. https://doi.org/10.1111/j.1540-5915.2008.00192.x
Vorm ES, Combs DJY (2022) Integrating transparency, trust, and acceptance: the intelligent systems technology acceptance model (ISTAM). Int J Hum-Comput Interact 38(18–20):1828–1845. https://doi.org/10.1080/10447318.2022.2070107
Weingast BR, Moran MJ (1983) Bureaucratic discretion or congressional control? Regulatory policymaking by the federal trade commission. J Polit Econ 91(5):765–800. https://doi.org/10.1086/261181
Wenzelburger G, König PD, Felfeli J, Achtziger A (2022) Algorithms in the public sector. Why context matters. Public Admin. https://doi.org/10.1111/padm.12901
Wieringa M (2020) What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability. In: Proceedings of the 2020 conference on fairness, accountability, and transparency. ACM, Barcelona, pp 1–18. https://doi.org/10.1145/3351095.3372833
Zhang B, Dafoe A (2019) Artificial intelligence: American attitudes and trends. University of Oxford, Oxford
Acknowledgements
I thank the editors at AI & Society and the reviewers for their time and effort. The paper has gained a lot from the reviewers' thoughtful comments and suggestions.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Ethics declarations
Conflict of interest
The author declares no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Koenig, P.D. Attitudes toward artificial intelligence: combining three theoretical perspectives on technology acceptance. AI & Soc (2024). https://doi.org/10.1007/s00146-024-01987-z