1 Introduction

A burgeoning literature on attitudes toward AI has rapidly generated substantial empirical evidence on how people think about AI. This research has grown alongside the increasing prevalence and public awareness of AI systems. These are systems with learned rules for processing data they receive from their environment to produce optimal outputs or actions (e.g., predictions or recommendations) given predefined objectives. AI systems can be employed in manifold ways and take many different forms. Research mainly from computer science has examined various aspects of AI systems, such as transparency and fairness, and the consequences that follow from these properties (Lepri et al. 2018; Kaur et al. 2023). Attitudinal research has quickly followed suit, complementing this work with insights into how people evaluate AI systems and their properties.

Scholars investigating AI attitudes from different disciplinary perspectives have produced a wealth of evidence, but this work has also led to an increasingly fragmented field in which similar questions are addressed with different theoretical perspectives and models. At the same time, the relations between these models are usually not spelled out, which makes it harder to integrate empirical findings. This shortcoming is particularly problematic for AI, since AI comes in many different forms and entails variable relations between individuals and the technology. Not only are certain models more suitable for some cases and other models more suitable for others, but certain forms of AI may well be studied from different angles at the same time, with potentially differing results regarding AI acceptance.

Against this backdrop, the present paper takes stock of and discusses existing approaches to studying attitudes toward AI. In doing so, it makes two theoretical contributions. First, it systematizes existing strands of research to provide an orienting framework for embedding studies and findings on AI attitudes. Second, it demonstrates a need to integrate existing models of technology acceptance specifically for studying AI as a technology that can affect people in different roles and with varying social scope, and that differs in the extent to which it has agent-like qualities. There are thus different possible settings and perspectives for studying attitudes toward AI. Depending on the adopted perspective, elements specific to different models for examining AI acceptance become more relevant. Furthermore, certain forms of AI, particularly applications of generative AI, can make several theoretical perspectives simultaneously relevant as they affect people in different constellations at the same time.

Such an integrative perspective covers a diverse literature. This literature consists of studies looking at the acceptance of concrete AI systems as a function of design features (e.g., Shin 2021; Shin and Park 2019; König et al. 2022a, b; Nussberger et al. 2022) and of contextual and dispositional factors (e.g., Burton et al. 2020; Glikson and Woolley 2020). It also includes studies that adopt a broader perspective on how people generally think about AI and its potential consequences for society (Smith 2018; Grzymek and Puntschuh 2019; Zhang and Dafoe 2019; Araujo et al. 2018; European Commission 2020; Ada Lovelace Institute and The Alan Turing Institute 2023; Scantamburlo et al. 2023; Selwyn and Gallo Cordoba 2022).

The discussion below identifies three distinct families of theoretical perspectives which inform this extant literature: a traditional user-centered technology acceptance perspective with the Technology Acceptance Model (TAM) at its core, a delegation or automation acceptance perspective, and a societal adoption acceptance perspective. The first perspective, which centers on individuals as users of technology, recurrently appears in research on AI attitudes as an explicit model, albeit with various extensions and modifications. The delegation perspective is less frequent but also regularly spelled out as a theoretical framework. It expressly conceives of AI not simply as a tool to be used but more as an agent providing a service to someone with a performance that is not directly transparent and may thus have important hidden qualities. The delegation perspective thus foregrounds not only accountability challenges for the individual and the role of trust in AI but also extended delegation relations in which other actors, such as government bodies, adopt AI to perform tasks in the interest of affected individuals. The societal AI acceptance perspective, in turn, focuses on AI systems’ impact not on the individual but on society, e.g., in the form of effects on employment (see, e.g., Gallego and Kurer 2022), the working of the public sphere (Smith 2018), and the environment (König et al. 2023). The third perspective often remains implicit, although an explicit template could be taken from research on risk technologies and transferred to the study of AI acceptance (e.g., Huijts et al. 2012). Including a broader and explicit model of societal AI acceptance also seems warranted in view of an increasing acknowledgment of AI’s relevance under the heading of sustainability.

The discussion below compares the three above-mentioned theoretical perspectives, describes their respective scope, and highlights which elements are specific to them. It will furthermore illustrate how a combined framework can cover central facets of AI acceptance that have been discussed in the literature. Before turning to this systematizing account, the following section will first provide an overview of extant research on attitudes toward AI.

2 The state of research on attitudes toward AI

The heterogeneity of the literature on AI attitudes has several sources. First, research comes from different disciplines. Besides research in the fields of information systems and human–computer interaction, one finds contributions from psychology, social sciences, business studies, and disciplines with an interest in specific applications of AI, such as health and mobility research. Second, the literature covers many different applications of AI, from low-risk consumer applications to high-risk systems to which individuals may even be exposed without having significant control or influence over them. Third, there is variation in the adopted theoretical frameworks, such as the technology acceptance model and related models (for an overview, see Sohn and Kwon 2020), and the chosen specific dependent variables. This heterogeneity is further compounded by the fact that the more recent work on acceptance of AI has antecedents in a literature on automation acceptance (Lee and See 2004).

The following account cannot do justice to the many facets of the quickly growing literature on AI attitudes, nor does it aim to be exhaustive. It illustrates the heterogeneity of this literature and motivates a need for systematically integrating different strands of research. Integrating different strands may seem less relevant within certain disciplinary perspectives, e.g., when research centers on specific questions of product design and user acceptance. However, from a social science angle that aims at a comprehensive understanding of how AI systems are perceived and taken up in society, a broader and integrative approach is warranted. Given this wider perspective, the following account and the subsequent discussion are not only rooted in a social science perspective but also based on a broad reading of literature from different disciplines dealing with AI acceptance. Focusing largely on the last decade, the discussion also draws on various conceptual articles that have summarized existing research while also pointing to antecedents of the more recent work.

The heterogeneity of AI attitudes research already becomes palpable when looking at two recent literature reviews. The review by Glikson and Woolley (2020), which takes trust in AI as the core dependent variable, is rooted in a business research perspective interested largely in how workers rely on AI systems. The authors identify tangibility, transparency, reliability, task characteristics, and immediacy behavior as key dimensions that shape trust in AI. The literature review by Kelly et al. (2023), in turn, focuses on user acceptance of AI and is rooted more in information systems research. The review finds perceived usefulness, performance expectancy, trust, and personal dispositions to be among the key determinants of AI user acceptance. Notably, such user acceptance presumes individuals to be active users rather than passively relying on AI systems—as in the case of individuals working alongside AI. The two reviews thus cover similar or related research but assemble it under different perspectives and highlight partly different key attitudinal dimensions.

On the level of individual empirical studies, one similarly finds different approaches to studying acceptance of AI systems, as they usually focus on specific dimensions or variables. Studies dealing with algorithm aversion and appreciation (Logg et al. 2019; Dietvorst et al. 2015), for instance, have focused on the role of task context and characteristics and personal dispositions. Summing up previous research, Burton et al. (2020) identify expectations and expertise, decision autonomy, incentivization, cognitive compatibility, and divergent rationalities as five major conditioning factors of algorithm aversion. Other research, inspired by discussions in computer science about fair, accountable, and transparent (FAccT) AI, has focused on how people evaluate these FAccT features in the design and performance of AI systems (e.g., Candrian and Scherer 2022; Shin 2021; Shin and Park 2019; König et al. 2022a, b; Nussberger et al. 2022; Langer et al. 2023). A particular interest in accountability and legitimacy of AI uses is furthermore present in a growing body of political science and public administration studies on public sector AI applications (Aoki 2020; Grimmelikhuijsen 2022; Ingrams et al. 2021; Schiff et al. 2021; Starke and Lünich 2020). What makes this setting special is that citizens may often not interact with the technology itself, are possibly unable to opt out of AI uses, and are particularly vulnerable in relation to the power of the state.

Additional heterogeneity stems from social science and public opinion research. This research often combines different constructs, partly adopted from the work described above, and usually includes awareness, knowledge or prior experience, positive and negative evaluations, select questions about AI system design features, and attitudes toward regulation (Smith 2018; Grzymek and Puntschuh 2019; Zhang and Dafoe 2019; Araujo et al. 2018; European Commission 2020; Ada Lovelace Institute and The Alan Turing Institute 2023; Scantamburlo et al. 2023). The broad scope of this research also manifests in the fact that it commonly covers different applications and impacts of AI—on the individual as well as on society at large—within a single study and survey. Variable AI impacts may even be covered within a single attitudinal scale. The General AI Attitudes Scale (Schepman and Rodway 2020), designed to study how people think about AI, comprises items concerning effects on the individual, such as the interest in using AI in one’s daily life, while other items refer to the social impacts of AI, such as new economic opportunities for the country. While the validated scale captures a general evaluation of AI, it is notable that the included items also implicitly refer to different ways in which people relate to AI systems and entail different connotations of AI acceptance.

Overall, the emergence of a fragmented research field in recent years increases the need for integration and consolidation. At the same time, there are important distinctions in the literature that get lost when adopting a single notion of AI acceptance. As the preceding review of existing research illustrates, the constellations concerning the relation between the individual and the technology vary, and so do the technology’s impacts. A single or uniform approach toward AI acceptance risks treating distinct connotations of AI acceptance as if they were the same thing. It also makes it harder to synthesize findings, as these may not be directly comparable and compatible due to the different adopted perspectives. Against this backdrop, the following sections aim at systematizing existing research to enhance mutual understanding among researchers, help to better contextualize findings, and support cumulative work.

3 Three perspectives on AI acceptance

3.1 User-centered technology acceptance

The Technology Acceptance Model (TAM) is arguably the most influential model in the study of what drives technology usage (Davis 1989; Venkatesh et al. 2003). It was initially proposed by Davis (1989) to study the intention to use a technological innovation as a function of its perceived usefulness and perceived ease of use. Since then, the model has seen various extensions. Besides adding a variable for actual use, which follows from the intention to use, various extensions to the TAM have led to its third version (Venkatesh and Bala 2008), which spells out antecedents of the original main variables, i.e., perceived usefulness and perceived ease of use. These antecedents comprise, among others, dispositional factors (subjective norms), experience and self-efficacy, and context factors (e.g., job relevance and result demonstrability). Yet other variables are included in the Value-Based Adoption Model (Kim et al. 2007), which is derived from and related to the TAM. It focuses on exogenous variables that capture user experience, such as enjoyment, to explain technology adoption.
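In stylized form, the core relations of the model can be sketched as follows (a simplified illustration rather than the full TAM 3 specification):

\[
PU = \gamma_1\, PEOU + \gamma_2\, X, \qquad BI = \beta_1\, PU + \beta_2\, PEOU, \qquad U = f(BI),
\]

where \(PU\) denotes perceived usefulness, \(PEOU\) perceived ease of use, \(X\) external antecedents such as subjective norms, self-efficacy, job relevance, and result demonstrability, \(BI\) the behavioral intention to use, and \(U\) actual use following from that intention.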

While the TAM can be used to study different kinds of technological innovation, it has been widely used in the study of information systems where it has proven to be a robust and reliable model (King and He 2006). Unsurprisingly, the TAM has also influenced the more recent wave of research on attitudes toward AI (for an overview, see Kelly et al. 2023). Applications of the TAM to study the acceptance of AI regularly show modifications to fit the study contexts. Yet contributions that draw on the TAM commonly stay true to its focus on the individual as the user of an AI system. This is the case, for instance, with consumers using products with AI components (Sohn and Kwon 2020) or farmers using AI for more precise use of resources (Mohr and Kühl 2021).

However, interpreting AI “use” along the same lines as the use of other technologies amounts to a significant limitation of the TAM. Since AI can have distinct agent-like qualities, unlike many other technologies that are more like tools, the notion of a mere user hardly applies (Ghazizadeh et al. 2012). As Schepman and Rodway (2020, 11) state, the TAM “reflect[s] users’ individual choices to use technology, but AI often involves decisions by others.” It is thus more suitable to describe the relation between user and technology as one of trust that involves a delegation of tasks (Ghazizadeh et al. 2012). Echoing this argument, Vorm and Combs (2022) point out that more recent intelligent systems present novel challenges that make trust in the technology a crucial construct. The agent-like character of AI thus sits uneasily with major premises of the TAM. In this sense, the particularities of AI are more directly addressed by models inherently about delegation.

3.2 Delegation and automation acceptance

The extent to which individuals are ready to delegate tasks to technological systems has been studied in the field of cognitive engineering since the 1980s (Lee and Kirlik 2013). While this field is generally concerned with the human-centered design of technologies and workplaces, it has particularly responded to advances in information technology and automation. Largely dealing with settings in which humans and automated systems work together to achieve certain goals, cognitive engineering has centered on agency and different levels of automation (Ghazizadeh et al. 2012). While at least the factor of task compatibility in automation is comparable to that of job relevance in the TAM, the focus on agency and the role of trust in the technology itself (rather than the organization providing it) clearly differ. Accordingly, models of automation acceptance, such as in the seminal contribution by Lee and See (2004), focus on trust in technology together with antecedents of this trust, such as automation performance.

The key assumptions of models for studying automation acceptance easily transfer to the acceptance of AI. Before the advent of machine learning and proliferation of AI systems, scholars pointed to the increasing adoption of computer programs that operated as agents—often not apparent to computer users—with a certain degree of autonomy and an ability to adapt their behavior (see, e.g., Dowling and Nicholson 2002). This work also highlighted the importance of evaluating the delegation of tasks to such computer programs in terms of fundamental aspects of human interaction such as trust, perceived competence, and intentions. Castelfranchi and Falcone (1998) emphasized the need to explicitly model the relationship to agent-based systems as one of delegation, with various possible conflicts arising from this relationship depending on the abilities and reliability of the agent.

This notion of delegation to an agent is also present in more recent research on acceptance of AI. Some research focuses directly on trust in AI or algorithms as the dependent variable (Glikson and Woolley 2020; Burton et al. 2020). Other contributions examine the relationship between an individual and an AI system in terms of acceptance of delegation (Bouwer 2022; Bel and Coeugnet 2023; Candrian and Scherer 2022), acceptance of technology agency (Morosan and Dursun-Cengizci 2023), or trust behavior, meaning the actual delegation of a task following from trust (Langer et al. 2023). Besides trust, the transparency and explainability of AI systems are central predictors of AI acceptance understood as the readiness to delegate tasks (e.g., Candrian and Scherer 2022; Shin 2021).

While psychological and computer science research on delegation to intelligent systems as agents has devised theoretical models to describe this relationship (e.g., Dowling and Nicholson 2002; Castelfranchi and Falcone 1998), a full-fledged model for analyzing delegation to human agents has long existed. This theoretical framework, the principal–agent model, was first developed in economics and political science (Hölmstrom 1979; Weingast and Moran 1983). It can be seen as the implicit basis of many of the delegation acceptance models used in the research cited above. Several contributions also explicitly mention this model as a suitable framework for understanding AI acceptance (De Fine Licht and De Fine Licht 2020; Krafft et al. 2020; Wieringa 2020). As described by Krafft et al. (2020), the major premises of the principal–agent model can be transferred to delegation to AI systems. In fact, the model applies even more readily to machines than it does to humans because certain ways of achieving transparency—i.e., looking into people’s heads—are not (yet) possible with humans but are possible with certain AI systems.

The principal–agent model presumes that a principal relies on an agent to fulfill a task while operating in the principal’s interest. As the agent may pursue her own interests, though, there is a risk of agency loss for the principal, i.e., a discrepancy between the principal’s goals and the actual results achieved by the agent (Pratt and Zeckhauser 1991; Lane 2007). This challenge equally exists with AI systems. Furthermore, the problems of hidden intentions, hidden information, and hidden action that the principal faces also exist with AI systems—in the form of unknown biases, opaque data sources, and opaque operations. The core issue in this regard is that the observable performance of the agent may well be acceptable, but there may be hidden and undesirable qualities to it. The principal does not know whether she gets the best result she could get. Given these problems, the principal has an interest in scrutinizing the agent and finding ways to align the agent’s actions with her own interests. This can be done through mechanisms for realizing accountability. Importantly, accountability goes beyond transparency as it also requires answerability and the ability to sanction as a way to exert actual control (Bovens 2007). Accordingly, a principal will be more likely to delegate to an agent the more she can ensure accountability.
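As a minimal formal illustration of this idea (a stylized, textbook-style rendering rather than a specification taken from the works cited above), agency loss can be written as the gap between what the principal would obtain from a perfectly aligned agent and what the actual agent delivers:

\[
\text{agency loss} = U_P(a^{*}) - U_P(\hat{a}),
\]

where \(U_P\) is the principal’s utility, \(a^{*}\) the action a fully aligned agent would take, and \(\hat{a}\) the action the (possibly biased or self-interested) agent actually takes. Accountability mechanisms such as monitoring, answerability, and sanctioning are then means of keeping this gap small.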

Transposed to AI as an agent, acceptance of delegation will depend on the (a) transparency of an AI system, (b) its explainability—both with regard to how it produces outputs and to why the system has been designed and trained in a specific way—and (c) control, e.g., through a human in the loop and the ability to make corrections. Although the principal–agent model has thus far not been established as a formal theoretical basis for studying AI acceptance, those core elements of transparency, answerability/explainability, and control are present in many studies on AI acceptance involving delegation (e.g. Shin and Park 2019; Shin 2021; Grimmelikhuijsen 2022; Wenzelburger et al. 2022). In this sense, the principal–agent model can form a central analytical framework to study delegation acceptance for AI systems.

Importantly, the accountability dimension of this model becomes relevant precisely when and because there is a lack of trust in an agent. A common definition of trust is the expectation that an agent will act in one’s interest without being supervised or kept in check. Hence, if people trust an AI application, they may feel no need for transparency or for exerting control over it. Conversely, a lack of trust and the perceived risk of agency loss call for the use of instruments establishing accountability.

3.3 Societal technology adoption acceptance

In Beck’s (1992) notion of risk societies, technological advances that are supposed to deal with or solve societal problems are themselves a major source of uncertainty and societal risk. Nuclear power and genetics are examples of technologies that entail risks not only for the individual but also for society. Observers have similarly pointed to possible broader risks that AI entails for society besides its possible benefits, e.g., through rapidly automating many jobs or spreading disinformation in the political public sphere. Attitudes toward AI have so far not been studied within an explicit model for acceptance of risk technologies or societal risk. However, several contributions on AI attitudes have acknowledged AI as a risk technology with broader social impacts (e.g., Galaz et al. 2021; König et al. 2023).

Further, there is ample research on public opinion about AI that has examined how people think about possible broader impacts of this technology, such as destroying jobs, providing economic opportunities, increasing well-being, and taking over more and more control within society and leading to a loss of freedom (e.g. Smith 2018; Zhang and Dafoe 2019; Araujo et al. 2018; Selwyn and Gallo Cordoba 2022; Schepman and Rodway 2020; Rainie et al. 2022). One can interpret these evaluations as socio-tropic in nature, as opposed to egocentric evaluations (on this distinction, see Lewis-Beck and Stegmaier 2018). Indeed, Borwein et al. (2023) have used the term socio-tropic evaluations with regard to labor market effects of AI, referring to the effects on society rather than the individual. And O’Shaughnessy et al. (2023) have explicitly distinguished self-benefit from societal benefit perceptions.

The interest in societal effects and socio-tropic evaluations in various studies of AI acceptance is specific to a societal perspective that also informs a rich literature on technologies with societal impacts. A review of the literature on the acceptance of energy technology by Huijts et al. (2012) arrives at a comprehensive model of technology acceptance with many factors that are reminiscent of the TAM. It includes, for instance, subjective norms, experience, knowledge, perceived behavioral control, and perceived benefits. However, the model adopts a societal perspective on technology acceptance, which implies that the dependent variable takes on a different meaning. It is about public acceptance of the societal use of a technology rather than about personal use as in the user-centric perspective of the TAM. Another important difference is that the costs, risks, and benefits in the societal adoption perspective cover socio-tropic evaluations. To take the example of nuclear energy, possible questions posed to respondents are whether nuclear power degrades the environment, whether it is risky for society as a whole, or whether it has a positive impact on climate mitigation (De Groot et al. 2020).

As with other risky technologies, looking at AI as a societal risk technology entails a distinct perspective that foregrounds certain factors specific to it. Theoretical models of technology acceptance similar to the one by Huijts et al. (2012) employed for energy technologies can also easily be transferred to AI. Indeed, the connection between attitudes toward energy technologies and attitudes toward AI becomes particularly straightforward in light of the environmental impacts of AI that have come to the fore in recent years (Dauvergne 2020). This being said, a particularity of AI is that it entails a larger range of risks for society, including distributional effects and consequences for social values such as fairness as well as for notions of desirable social development.

4 The three theoretical perspectives compared

4.1 Variation in highlighted aspects

Each of the three theoretical perspectives presented above, subsequently called the user acceptance, delegation acceptance, and societal adoption acceptance perspectives, has elements that are specific to it. Depending on which perspective one adopts, the meaning of technology acceptance—as the dependent variable—also shifts. With the user acceptance perspective, technology is regarded as a tool with which users interact directly. This means that besides the perceived usefulness of the tool, the perceived ease of use—i.e., when directly handling the technology—is a central factor. Technology acceptance in that case is about the willingness to use the technology and actual use. This perspective is suitable, for instance, for an AI-based translation tool that someone uses to speed up text production and processing, the performance of which the user can directly assess. Other applications that come close to tools include image generation or scenario modeling and forecasting, e.g., in business planning.

The delegation acceptance perspective instead sees technology as an agent rather than a tool. Individuals do not necessarily use the technology themselves, but rather instruct it or merely (passively) rely on it. Technology acceptance therefore takes on the meaning of willingness to delegate. This perspective could be applied, e.g., to intelligent agents providing recommendations similar to what a human service provider might do. An example is an AI travel planning companion. Those receiving the service cannot directly assess the performance of the agent and ascertain what quality of service they received, as the AI system’s recommendations may have hidden biases primarily serving the interests of third parties. This challenge is not so much about ease of use, which is about getting the technology to work as intended by a competent user handling it. Rather, it is about agency loss, which depends more on hidden agent properties. Specific to this delegation perspective are trust in the agent and perceived accountability features, i.e., transparency, explainability, and control.

Finally, in the perspective of societal technology adoption acceptance, the individual is affected as a part of a larger collective, i.e., society, and technology acceptance means accepting that a technology is adopted and increasingly used in society. Specific to this model are socio-tropic evaluations of the technology, i.e., impacts on society rather than just the individual (as a technology user). Under this perspective, one might inspect, e.g., people’s views about the social and political impacts of AI-based filtering for the curation of online content on platforms. Such applications of AI could also be examined regarding their effects on the individual—and these can be studied with the user acceptance and delegation acceptance perspectives—but certain AI systems may be particularly relevant regarding their broader societal effects. Other consequences of AI that people may evaluate are inherently on the social level, such as environmental harms due to a growing energy footprint of widespread AI uses or the impact of AI uses on labor markets. Table 1 summarizes these differences between the models.

Table 1 Comparison of the three theoretical perspectives of technology acceptance

4.2 Variation in scope

Based on their core theoretical premises and the intentions with which they have been formulated, the three described theoretical perspectives differ in scope. They cover different parts of a space describing different ways of relating to and evaluating the possible impacts of AI, as shown in Fig. 1. The vertical dimension in the figure distinguishes the extent to which people directly interact with and have control over an AI system. The system can essentially be a tool that people directly interact with and that has directly observable characteristics and performance. However, the relationship to an AI system can also be remote: the system can have agent-like qualities and unknown biases that introduce some form of possible agency loss, and/or third parties can be the ones wielding the AI system—which constitutes an extended delegation relationship. The horizontal dimension in the chart describes the distinction between the focus on AI impacts on the individual versus impacts on society. In the latter case, acceptance of the technology is not evaluated in terms of how it affects an individual, but in terms of its adoption in society more broadly and how it affects individuals as members of society.

Fig. 1 Systematizing models for studying AI attitudes. Own depiction

The more one moves to the top-right part of the figure, the more remote are AI systems from a person while still possibly affecting her. Individuals may even evaluate AI systems that they do not interact with at all while they are nonetheless affected by aggregate impacts of the technology on society. A passive role of the individual is, however, not a necessary condition for being able to make socio-tropic AI evaluations. A person may interact with an AI system, such as a consumer recommender system, herself but still also evaluate it in socio-tropic terms, e.g., regarding its environmental impact through its widespread usage. Both ego- and socio-tropic evaluations can be of interest.

Within the space described by Fig. 1, the three models cover different areas based on their core theoretical premises. The user acceptance perspective, with its focus on individual users handling technology as a tool rather than an agent, can be situated in the bottom-left corner of the chart. The delegation acceptance perspective covers AI uses in which affected individuals take a more passive role as they rely on some service provided by an AI system. This can, however, entail different delegation relationships. First, individuals may directly interact with an AI system to which they (partly) delegate decisions. Second, they may also be affected by an AI system through other actors employing the system—as an agent or a tool—to provide some service for the individual. This is the case, for instance, if medical practitioners use AI systems for prognosis or if the state adopts AI to provide better services for citizens, such as through faster processing of tax statements. One can understand constellations of this sort in terms of a delegation chain, i.e., subsequent steps of delegation that can compound accountability challenges (Nielson and Tierney 2003). This constellation, too, can be analyzed within the delegation acceptance perspective (and the principal–agent model).

Third, even where an individual is merely the object of AI processing and outputs (e.g., classification) and there is no formal act of delegation, one can still see this relationship as an implicit principal–agent relation. This is the case, e.g., with credit default risk assessments, which serve the organization using the system rather than those being assessed. As affected individuals have certain rights and legitimate interests (e.g., no unfair discrimination), they are an implicit or external principal who can hold the agent accountable for acting in ways that violate these rights and interests (Krafft et al. 2020). Clearly, this implicit delegation relationship is linked to a very passive role of the individual in relation to the AI system and its impacts.

The societal risk technology model is about the broader impacts of technology, regardless of whether it is agent-like or more like a tool. Unlike in the user acceptance and the delegation acceptance perspectives, the adoption of the technology and its acceptance are evaluated in terms of its effects on society, e.g., its consequences for the labor market, the working of democracy, or the environment. Notably, this perspective does not exclude the delegation perspective. Certain AI applications can be viewed not only in terms of delegation by an individual receiving a service but also in terms of delegation by individuals as members of society. In the latter case, it is a collective subject that delegates, and the impacts of the technology are not merely those on an individual. This relationship even takes on a formal character through citizens being represented by the government to shape the societal adoption of technology in line with citizens’ interests. The societal adoption acceptance of AI therefore has a latent political dimension to it. As such, it directly links up with studies that have examined citizens’ demand for regulation (e.g., Ada Lovelace Institute and The Alan Turing Institute 2023; Zhang and Dafoe 2019) and regulatory preferences (König et al. 2023). Attitudes of this sort can be understood as referring to a delegation relationship in which the state, acting in the interest of the people, is supposed to regulate AI.

One should note that demarcations between the three perspectives are not as clear-cut as suggested by Fig. 1. Indeed, the fact that there is an entire space of constellations as illustrated in the figure has created a need to adapt existing models, particularly the TAM, to specific contexts. The TAM has been extended to better fit the acceptance of automation or delegation to AI. Notably, contributions extending the TAM have pointed out its limitations for studying AI and stressed the need to integrate trust (Acharya and Mekker 2022; Vorm and Combs 2022; Bel and Coeugnet 2023; Chen et al. 2023; Choung et al. 2023)—a factor central to the delegation model. However, while taking the TAM as the reference model may seem appealing due to the model’s influence and popularity, this means remaining within a framework that does not ideally suit AI as a technology with agent-like properties.

Important limitations remain when keeping the general premises of the TAM. Since it has been developed for users directly interacting with technology as a form of planned behavior, it is less suitable for settings in which individuals explicitly or implicitly delegate tasks to AI—which may involve merely the acceptance that an AI makes decisions for an individual. The outlook of the TAM does not fit well with the study of societal adoption acceptance of technology either, as the latter is even less about individual users and planned behavior and involves socio-tropic evaluations of the technology. In that sense, the TAM has more of a heuristic value when applied beyond its original scope while other models are inherently more applicable.

The delegation perspective, in turn, can involve different degrees of passivity. In certain cases, an individual can be affected by an AI system even while it is others, e.g., government bodies, who directly interact with it. Under these conditions, the broader context of delegation and particularly the trust in organizations deploying AI systems may be central to AI acceptance (Wenzelburger et al. 2022; Schiff et al. 2023). Especially under conditions of extended or implicit delegation as described above, the delegation perspective shows a stronger affinity to socio-tropic evaluations of AI under the societal adoption acceptance perspective. Due to their highly passive role and low agency, those affected may perceive themselves as affected less as individuals and more as part of a social group or as members of society.

5 A combined framework

Bringing the three described perspectives together serves to contextualize studies of AI acceptance by spelling out how AI acceptance is understood and studied. This requires being sensitive to changes in the meaning of AI acceptance as the dependent variable and considering which elements are specific to and primarily relevant for the chosen perspective. It is important in this regard to consider the constellation in which those stating their AI acceptance relate to the technology as they can be affected by AI in different roles. Drawing on distinct connotations of AI acceptance in a broader framework can thus help to situate research within a diverse literature. It can also form a basis for building a compilation of survey items that considers theoretically relevant distinctions (see Appendix Table A1 for examples).

Figure 2 compiles the three perspectives and core building blocks described further above. It highlights important shifts regarding AI acceptance as a dependent variable due to differing conceptualizations of this acceptance, as well as predictors that are specific to the theoretical perspectives. The elements marked in dark gray are from the user acceptance perspective, those in light gray with a dashed line are from the delegation acceptance perspective, and the white elements with a dotted line are from the societal adoption acceptance perspective.

Fig. 2 Combined framework with three perspectives on AI acceptance. Dark gray = user acceptance model, light gray = delegation acceptance model, white and dashed lines = societal adoption acceptance model. Own depiction, based on Venkatesh et al. (2003), Mayer et al. (1995), Huijts et al. (2012)

The user acceptance perspective covers the direct use of a technology and includes, as central predictors, perceived usefulness (based on personal benefits as well as costs and risks) and perceived ease of use. The delegation acceptance perspective includes accountability, based on transparency, answerability/explainability, and control/correctability, together with trust. Trust itself can be conceived of as stemming from perceived ability, benevolence, and integrity, following the seminal framework by Mayer et al. (1995), which has been adopted by some authors specifically for the study of AI acceptance (e.g., Vorm and Combs 2022; Langer et al. 2023). Although perceived accountability features such as transparency can themselves contribute to trust (Vorm and Combs 2022), perceived accountability can also be a separate important factor, particularly relevant where trust is lacking. Accountability measures basically amount to measures mitigating agency loss and the risks linked to it. Agency loss means that the agent does not act in the way it is supposed to, thus not fully realizing the interests and goals of the delegating actor.

The societal adoption acceptance perspective includes socio-tropic evaluations of benefits, costs, and risks for society stemming from AI (including consequences for fairness or personal autonomy). These evaluations may diverge from people’s egocentric or pocketbook evaluations. For instance, a person may think that AI benefits her economically or in her own career while she thinks that this technology has an overall negative effect on the labor market. Finally, various additional factors, such as knowledge and experience and dispositional factors such as technophobia/computer anxiety or subjective norms, are not firmly tied to any of the theoretical perspectives.

Elements from the three perspectives can be jointly relevant in the study of AI acceptance. Their applicability depends on how a study and study context relate to the distinctions shown in the chart and taken from Fig. 1 further above. Whether ego-tropic or socio-tropic perceptions of (positive and negative) AI consequences are relevant depends on whether impacts on the individual or society are of interest. Socio-tropic evaluations of AI impacts are primarily relevant for societal adoption acceptance. However, ego-tropic evaluations (e.g., based on personal use) might also influence the degree to which people accept widespread societal adoption of AI systems. Ego-tropic evaluations are primarily applicable where consequences for the individual are of interest, which can be valid when studying user acceptance as much as when studying delegation acceptance.

The applicability of the other elements in Fig. 2 depends on the role of the individual and her relation to the technology. Overall, it is not the technology per se that determines the suitability of these elements, but rather how AI is applied, i.e., which delegation relationship it involves and whether effects on the individual or on society are of interest. While ease of use is relevant for very direct interaction with and control over AI systems, accountability aspects and trust in the AI system become relevant where AI operates like an agent. Accountability and control over an AI agent can be regarded as related to ease of use in the sense that both are about employing technology in the way intended to achieve certain goals. The mechanisms for exerting control are different, though. Finally, for extended delegation relationships, in which affected persons do not directly interact with the AI system themselves, this lack of control makes trust in the organization deploying the AI system particularly relevant.
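To illustrate how the combined framework could be taken to data in a standardized survey, the following minimal sketch relates blocks of predictors from the three perspectives to an acceptance measure. It is purely hypothetical: all variable names, the simulated responses, and the chosen outcome are illustrative assumptions rather than validated measures from the literature discussed above.

```python
# Purely illustrative sketch: relating predictor blocks from the three
# perspectives to an acceptance measure with simulated survey data.
# All variable names and coefficients are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500  # hypothetical number of survey respondents

# Simulated 1-7 Likert-type scores for the three predictor blocks plus controls
df = pd.DataFrame({
    # User acceptance block (TAM-style)
    "perceived_usefulness": rng.integers(1, 8, n),
    "perceived_ease_of_use": rng.integers(1, 8, n),
    # Delegation acceptance block (trust and accountability)
    "trust_in_system": rng.integers(1, 8, n),
    "perceived_accountability": rng.integers(1, 8, n),
    "trust_in_deploying_org": rng.integers(1, 8, n),
    # Societal adoption block (socio-tropic evaluations)
    "sociotropic_benefit": rng.integers(1, 8, n),
    "sociotropic_risk": rng.integers(1, 8, n),
    # Additional factors not tied to a single perspective
    "ai_knowledge": rng.integers(1, 8, n),
    "computer_anxiety": rng.integers(1, 8, n),
})

# Hypothetical outcome: acceptance of societal adoption. For user acceptance
# or delegation acceptance, the outcome would instead be intention to use
# or willingness to delegate, with different blocks expected to dominate.
df["adoption_acceptance"] = (
    0.3 * df["sociotropic_benefit"]
    - 0.3 * df["sociotropic_risk"]
    + 0.2 * df["trust_in_deploying_org"]
    + rng.normal(0, 1, n)
)

model = smf.ols(
    "adoption_acceptance ~ perceived_usefulness + perceived_ease_of_use"
    " + trust_in_system + perceived_accountability + trust_in_deploying_org"
    " + sociotropic_benefit + sociotropic_risk + ai_knowledge + computer_anxiety",
    data=df,
).fit()
print(model.summary())
```

Depending on which notion of AI acceptance is of interest, the outcome variable would be replaced by intention to use or willingness to delegate, and a different predictor block would be expected to carry the most weight.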

Combining the three theoretical perspectives discussed above promises to be useful in the study of AI acceptance. A combined framework allows for systematically selecting relevant elements depending on which notion of AI acceptance is of interest. When covering different AI systems within a single study, all three perspectives may become relevant for investigating acceptance of those systems. Further, it is possible to study the same AI system from different angles, e.g., by examining both ego- and socio-tropic evaluations—which may well diverge to some extent.

With certain forms of AI, all three perspectives can even be simultaneously relevant. This is the case particularly with forms of AI that can be applied to a range of different cognitive tasks, such as generative AI. As Helberger and Diakopoulos (2023) argue, these AI systems can be seen as a general-purpose technology that does not neatly fall into established categories and/or risk classes. Looking at the way in which generative AI, such as ChatGPT, can be used, it can come close to a tool whose performance quality can be assessed directly, e.g., when it is used to draft parts of a text to save time. It can, however, also be an agent to which tasks are delegated and whose performance, bias, and fairness matter but cannot easily be ascertained—for instance, when text-based generative AI is used to provide recommendations about subjects that require technical expertise. Finally, generative AI can also be regarded as a societal risk technology with broader effects. It could have, for instance, undesirable effects through replacing jobs or harming public opinion formation through disinformation. Certain applications of generative AI may thus be investigated from different angles, and people’s acceptance may well differ depending on the angle adopted.

6 Discussion and conclusion

To integrate the findings from the different disciplines and theoretical angles in the growing literature dealing with acceptance of AI systems, it is important to understand how these relate to each other. The challenge of integrating this work stems from the fact that different disciplinary perspectives may foreground different aspects when modeling technology acceptance. Furthermore, AI takes many forms and can be deployed in many different settings. Accordingly, existing research covers manifold constellations and variable roles in which people are affected by AI.

Individuals can be users who employ an AI system like a tool, or they can delegate tasks to an AI system acting like an agent that provides a service to them—with individuals being more passive rather than active users. They may indirectly receive a service through an AI system, without interacting with the system themselves. This is the case, e.g., when the state adopts AI systems to provide services to citizens. Individuals can also be the object of AI outputs, such as risk assessments, which, however, serve other actors rather than the object of data processing. And finally, citizens can be affected by AI less as individuals and more through broader societal impacts that affect them as members of society.

Treating different constellations under a single broad notion of AI acceptance hides theoretically important distinctions. Further, no single existing theoretical perspective or model properly accommodates all the described constellations. The discussion above has therefore argued for a need to bring together three different broader perspectives, namely user acceptance, delegation acceptance, and societal adoption acceptance perspectives. Each implies a different understanding of what technology acceptance exactly refers to: (a) intention to use or actual use, (b) willingness to delegate and actual delegation, or (c) acceptance of the societal adoption of technology. The perspectives come not only with different theoretical premises but also with central factors that are specific to them. Given that AI can be studied from different angles, it is important to spell out and contextualize how it is approached: which of the various possible relations between AI and an individual is/are of interest and which theoretical perspective is foregrounded?

The most prominent of the three perspectives is the user-centered perspective, specifically in the form of the Technology Acceptance Model. Over decades, this model has proven to be versatile and robust. However, as has been illustrated above, the core theoretical premises of the Technology Acceptance Model cover only a small part of a universe of constellations that comprises the possible relations between an individual and an AI system. To deal with the particularities of AI, some contributions have argued for extending the TAM (e.g., Vorm and Combs 2022; Bel and Coeugnet 2023). However, while this leverages the heuristic value of the TAM, it does not eliminate the model’s limitations. Due to its focus on the user, the societal perspective and technology affecting society rather than merely the individual are foreign to that model. Further, the user-centered model does not sit well with the pronounced agent-like qualities of AI systems, which often mean that individuals are more passive and not like users employing technology as a tool that they handle more or less competently. The delegation acceptance perspective—and the principal–agent model inherent to this perspective—naturally accommodates AI’s agent-like qualities. It can also comprise more complex delegation relationships that can become relevant for AI adoption and acceptance. As it incorporates the central concepts of accountability as well as trust, this perspective also directly connects with the literature on accountable algorithms.

Ultimately, building blocks from all three theoretical perspectives are important in the study of AI acceptance. Combining them not only serves cumulative research but is also important for tackling specific research problems and can open up new directions. First, situating research among the described perspectives can help to contextualize studies and thereby establish stronger links between related research areas. As research on acceptance of AI is set to grow further, organizing and integrating the evidence will only become more important. Being explicit about the angle adopted in a study also helps to understand how exactly empirical findings on AI acceptance relate to and complement each other. Otherwise, it may seem like studies are talking about the same thing, while results may not be directly comparable as they refer to different notions and aspects of AI acceptance.

Second, integrating the different perspectives in a single survey allows one to relate attitudes from these perspectives to one another (see Appendix Table A1 for examples of survey items). This means being able to uncover discrepancies between these attitudes, as, for instance, ego- and socio-tropic evaluations of certain AI applications might diverge. Indeed, existing findings indicate that the perception of AI systems is multifaceted and not necessarily consistent. For instance, while certain desirable features of AI systems such as their performance may be important for their acceptance when asking citizens about their design (König et al. 2022a, b; Horvath et al. 2023), other factors such as trust in the organization deploying these systems may be more important when people evaluate their societal adoption (Wenzelburger et al. 2022; Schiff et al. 2023; Kleizen et al. 2023). Integrating the different perspectives in a single study is also a prerequisite for examining potential spillovers, i.e., evaluations under one perspective affecting those under another. Positive ego-tropic evaluations of certain AI systems or trust in the systems might influence socio-tropic evaluations of the same or other systems. Finally, with applications of generative AI, an AI system can even become relevant under several perspectives at the same time, since such systems can be used as tools (e.g., to speed up drafting a letter), operate similarly to agents to which tasks are delegated (providing recommendations), and have impacts on society at large (producing disinformation).

Overall, the combination of different perspectives is important because AI, unlike most other technologies, can affect people in different ways, roles, and constellations. This may be less relevant for a narrow research interest, for instance, when studying how people perceive a specific AI-based consumer application. The broader approach is, however, very relevant from a social science perspective that aims at a comprehensive understanding of how AI and its impacts are perceived by the larger public. For policymakers, the discussion accordingly implies that there is a risk of getting a partial view and a wrong impression of AI acceptance in society. Understanding how citizens think about AI requires a differentiated perspective, as their views are likely multifaceted and there are several ways in which AI acceptance can manifest.

It should be noted that the above discussion of how AI acceptance can be studied has focused on quantitative research using standardized surveys. Yet it is clearly also possible to use qualitative interviews to gain a deeper understanding of how people perceive AI systems. Research along these lines has shown that people hold folk theories concerning AI (Bucher 2018). By being better able to demonstrate how intricate psychological mechanisms, e.g., of self-blame or gratitude, can shape people’s perception of power relations and of their own agency in AI use (e.g., Ramesh et al. 2022), qualitative evidence directly connects with critical AI and algorithm studies. It can inform and complement quantitative research with important insights. The three perspectives described above, in turn, can also provide an analytical scaffolding for qualitative studies while informing new kinds of questions that explore people’s perceptions of AI under different perspectives within the same study. All in all, systematically combining distinct facets of AI acceptance promises to yield a deeper understanding of the relationship between AI and society.