1 Introduction

To benefit from the opportunities afforded by artificial intelligence (AI), the various actors involved in AI-based decision making must trust the decisions and actions taken by algorithms (European Commission, 2020; Meske et al., 2022; Thiebes et al., 2021). At an organizational level, the socially responsible use of AI requires the appropriate design, implementation, and use of AI systems (Kumar et al., 2021; Rakova et al., 2021; Trocin et al., 2021) in addition to ethical guidelines and governance approaches (Dignum, 2020; Jobin et al., 2019). Moreover, the governance of a technology is underpinned by actors’ assumptions and expectations of the technology in question (Wang et al., 2021). The concept of technological frames captures these assumptions and expectations (Orlikowski & Gash, 1994) by offering an analytical lens for studying and explaining the development, use, change, and governance surrounding a technology (Davidson, 2006; Elbanna & Linderoth, 2015; Wang et al., 2021).

The governance of AI and the promotion of its socially responsible development and use are large-scale challenges that take place among multiple actors (cf. Minkkinen et al., 2022; Mäntymäki et al., 2022; Seppälä et al., 2021; Yeung et al., 2020). Accordingly, the European Union (EU) has articulated an ecosystem approach to responsible AI (RAI) (European Commission, 2020). The number of recent high-profile strategies, events, and statements, as well as the proposed Artificial Intelligence Act (published on April 21, 2021), indicates that the EU positions itself as a key actor in this ecosystem approach (European Commission, 2021; High-Level Expert Group on Artificial Intelligence, 2019; Renda, 2020). Moreover, its nascent AI policy approach has resulted in initiatives such as networks of AI excellence centers and has sparked scholarly debate (Antonov & Kerikmäe, 2020; Renda, 2020; Smuha, 2021; Veale, 2020). Despite the EU’s efforts, a fully-fledged multi-actor RAI ecosystem remains elusive. While the ecosystem is incipient, EU strategy documents clearly articulate expectations of RAI, externalizing the EU’s technological frame of RAI. This technological frame acts as a vehicle for conveying the EU’s expectations of governing RAI and of the aspired ecosystem that is inextricably linked to such governance. Thus, the EU’s technological frame of the RAI ecosystem shapes other actors’ expectations and, accordingly, the network building for an RAI ecosystem.

The EU’s RAI ecosystem comprises actors such as AI developers, technology providers, platform companies, AI user organizations, and individual users (Stahl, 2021). Moreover, AI’s ethical and societal implications transcend organizational boundaries (Ananny & Crawford, 2018; Morley et al., 2021; Orr & Davis, 2020). This suggests that different actors within the RAI ecosystem engage in strategic framing and negotiation (e.g., Davidson, 2002; Hoppmann et al., 2020; Wang et al., 2021) over the EU’s technological frame of RAI. Although the EU is only one regional player in the global landscape of AI governance (Schmitt, 2021), it is an important case because the EU is a global frontrunner in data protection regulation (Bennett & Raab, 2020). Moreover, the advocated ecosystem approach builds on previous discussions on collaborative governance mechanisms in relation to the EU General Data Protection Regulation (GDPR) (Kaminski, 2019). Since the regulation and governance of AI are still emerging in many regions, the experiences of the EU are of broader interest. Hence, to explore the shaping of the emerging European RAI ecosystem, we study the EU’s articulated expectations underpinning its technological frame of RAI and how actors in the emerging multi-actor network asymmetrically adopt and co-shape this technological frame. In this paper, we address the following two research questions:

RQ1. What expectations constitute the EU’s technological frame of the RAI ecosystem?

RQ2. How do experts adopt and co-shape the RAI technological frame externalized by the EU institutions?

By answering these research questions, we make two contributions. First, we contribute to the literature on technological frames (Davidson, 2006; Orlikowski & Gash, 1994; Wang et al., 2021) and ecosystems (Adner, 2017; Jacobides et al., 2018; Parker et al., 2017; Tiwana, 2015), offering a framework for analyzing the future-oriented expectations underpinning the technological frame of a technology-centered ecosystem and the dynamic and reflexive co-shaping of this frame. To these literature streams, we contribute the concept of expectation work and five types of congruent or incongruent expectation work that explain how actors adopt and co-shape technological frames. Second, we contribute to research on RAI by presenting a thematic map of the EU’s expectations of the emerging ecosystem. Furthermore, we posit that ecosystems can serve as a mediating level between regulation, ethical principles, and organizational AI implementation.

The article is structured as follows. In the next section, we provide our conceptual background, which covers the concepts of ecosystems, technological frames, and expectations. Building on this background, we define the concept of expectation work. Subsequently, we outline our research approach, which comprises document analysis and semi-structured expert interviews. The findings section then presents the EU’s technological frame of RAI (RQ1) and five distinct types of expectation work identified in the interviewees’ adoption and co-shaping of the EU’s articulated expectations (RQ2). The article closes with a discussion of the theoretical and practical implications of our findings, as well as limitations and future research directions.

2 Conceptual background

2.1 Ecosystem for responsible AI

In this section, we identify conceptual components from relevant literature streams to allow for the analysis of emerging ecosystems for RAI. To begin, we need to understand the concept of AI and what responsibility means in the AI context. Kaplan and Haenlein (2019) concisely defined AI as “a system’s ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation”. Other definitions highlight similar characteristics, including perception, information processing, decision making, and achievement of goals (Samoili et al., 2020). Furthermore, learning and adaptation based on data (rather than formalized rules) is a central characteristic of AI systems. However, these abilities can have ethical implications, such as biases in data and algorithms, lack of transparency, and potentially harmful effects on individuals (Martin, 2019). Accordingly, RAI refers to the design, implementation, and use of AI technology in ways that are aligned with ethical and social norms, such as fairness and explainability (Dignum, 2020; Trocin et al., 2021). In recent years, significant work has been dedicated to outlining general sets of ethical AI principles (Jobin et al., 2019; Schiff et al., 2020). Although researchers have started to explore ways of translating these principles into organizational practices (Morley et al., 2021; Rakova et al., 2021), implementing and governing RAI in practice remains a pressing challenge.

AI governance research strives to operationalize AI ethics. Recent work within this stream highlights the joint efforts of multiple actors (de Almeida et al., 2021; Shneiderman, 2020). Researchers conceptualize AI governance as a multi-layered phenomenon, where societal and industry-level requirements exert pressure on organizations and development teams (Gasser & Almeida, 2017; Kaminski, 2019; Rakova et al., 2021; Shneiderman, 2020). In other words, implementing RAI at a large scale requires collaborative networks as a mediating layer between abstract AI ethics principles and organizational AI implementation. Thus, the EU has set out to promote an “ecosystem of trust” that addresses ethical concerns and legal uncertainty and drives RAI through a regulatory framework (European Commission, 2020). However, although the EU has articulated the goal of establishing an ecosystem for RAI, the ecosystem remains embryonic.

To analyze this emerging RAI ecosystem, we need to specify what kind of ecosystem is emerging. In general terms, ecosystems are network structures that develop organically with some degree of coordination, rather than being purely hierarchical or horizontal structures. Scholars have shown great interest in the ecosystem concept and have produced numerous literature streams and conceptual variants (Adner, 2017; cf. Hyrynsalmi & Mäntymäki, 2018; Mäntymäki & Salmela, 2017). Jacobides et al. (2018) identified a core of three ecosystem research streams: business, innovation, and platform ecosystems. Aarikka-Stenroos and Ritala (2017) added entrepreneurial/start-up and service ecosystems to this list. Subsequently, Tsujimoto et al. (2018) added the industrial ecology perspective and multi-actor networks to the streams of business ecosystems and platforms. The multi-actor network perspective emphasizes the heterogeneity of actors with different operating logics in addition to dynamic and complex interlinkages. In the information systems (IS) literature, ecosystems have been discussed extensively, particularly in terms of platform ecosystems (Parker et al., 2017; Tiwana, 2015), which are typically organized around large technology companies. Alternative structural models and architectures for ecosystems have also been researched (Ju et al., 2019; Kannisto et al., 2020). From an IS perspective, ecosystems are linked to the general issue of multi-actor value creation utilizing technology (Lempinen & Rajala, 2014).

The multi-actor network perspective (Tsujimoto et al., 2018) captures the emerging RAI ecosystem most effectively. However, multi-actor networks require a unifying element and some degree of coordination to qualify as an ecosystem (Jacobides et al., 2018). We propose that the emerging RAI ecosystem should center on a core value proposition (Adner, 2017; Jacobides et al., 2018) that derives from the focal technology (i.e., RAI). Hence, we can describe the RAI ecosystem as a technology-centered ecosystem. According to this value creation perspective, an ecosystem strives to produce something economically or socially valuable using a particular technology. While they do not refer to ecosystems explicitly, Solaimani et al. (2015) operationalize this perspective by modeling flows of value, information, and processes between networked actors that develop joint services or products. No single firm or public organization orchestrates the RAI landscape, even though large technology companies may orchestrate subsystems, and organizations (such as EU bodies) may be aspiring orchestrators. In addition, centering the ecosystem around a particular product or service (Tsujimoto et al., 2018) may be premature, as RAI activities and innovations are still emerging.

Conceptualizing the emerging networks of RAI as an ecosystem is justified in light of the calls for multi-actor AI governance, the EU’s aspirational RAI ecosystem, and the literature on ecosystems as multi-actor networks organized around a technological value proposition. Moreover, conceptualizing the networked interactions of AI actors as ecosystems is well established in the academic literature (Findlay & Seah, 2020; Stahl, 2021, p. 81) and policy and strategy papers (e.g., European Commission, 2020; OECD, 2019). The European ecosystem for RAI is at an early development stage despite increasing EU efforts in areas such as talent creation and infrastructure initiatives (Stix, 2019).

Studying the early phase of ecosystem development requires an appropriate analytical entry point. Given the early stages of development, the potential ecosystem is still being shaped through strategies and plans, which provide one such entry point. At this point, the ecosystem is unstable, similar to the “era of ferment” in technology lifecycles (S. Kaplan & Tripsas, 2008), and future directions are still being actively negotiated. Hence, plans and strategies for ecosystem development offer a rich object of study for understanding the co-shaping of an ecosystem. The question then becomes how ecosystems emerge: are they deliberately planned, or do they develop organically? According to the literature, ecosystems are partially designed intentionally (Stahl, 2021, p. 84; Tsujimoto et al., 2018), although they can also co-evolve through the interactions of actors, activities, artifacts, and institutions (Granstrand & Holgersson, 2020; Moore, 1993). For the RAI ecosystem, the focal technology of RAI unites these various actors, artifacts, and institutions. Understanding how multiple actors influence the ecosystem requires us to investigate how actors understand and interpret the focal technology and, in particular, how they co-shape this understanding. Therefore, the perspective of technological frames and their active shaping offers a useful analytical angle.

In summation, new technological capabilities create the need to ensure ethically appropriate AI use, and evolving RAI ecosystems can address this need by organizing actors into a coordinated network. The conceptual components for understanding RAI ecosystems are summarized in Table 1.

Table 1 Conceptual components for understanding ecosystems for responsible AI

2.2 Technological frames, expectation work, and network-building

In IS research, technological frames offer a theoretical perspective that can capture actors’ interpretations and sensemaking of technology, including its development, use, and governance (Orlikowski & Gash, 1994; Wang et al., 2021). Technological frames are defined as “the core set of assumptions, expectations, and knowledge of technology collectively held by a group or community” (Orlikowski & Gash, 1994, p. 199). In this paper, we understand “technology” in the context of technological frames as denoting particular technologies (such as a particular RAI system) and the surrounding social relations and networks (Orlikowski & Iacono, 2001). The concept of a technological frame derives from a socio-cognitive perspective on the development, use, and changes of organizational IT (Orlikowski & Gash, 1994).

Technological frames create and maintain shared meanings around technologies. Hence, frames enable purposeful technology design and use to support organizational objectives, and they also shape evaluation criteria for technologies (Kaplan & Tripsas, 2008). Moreover, they are symbolically expressed in language, images, metaphors, and stories (Orlikowski & Gash, 1994, p. 176). Interactions between actors (such as producers, users, and institutions) can create dynamic situations in which frames and their salience change during technological lifecycles (Davidson, 2002; Kaplan & Tripsas, 2008). The framing process is understood as both a cognitive process of information filtering and a social process of purposeful strategic persuasion; hence, framing processes are as important as the frames themselves (Hoppmann et al., 2020).

Framing occurs through congruence and incongruence between negotiated technological frames. In this context, congruence refers to the “alignment of frames on key elements or categories” (Orlikowski & Gash, 1994, p. 180); for example, similar assumptions and expectations on the role of technology in organizational processes. This does not mean identical frames; rather, frames are structurally or substantially related (Davidson, 2006; Orlikowski & Gash, 1994). Correspondingly, incongruence refers to significant differences in the expectations and assumptions underlying separate technological frames. While this can require alignment efforts, it can also drive organizational change (Davidson, 2006). Indeed, previous studies have highlighted the productive role of incongruence and ambiguity in technological frames in generating innovation and adaptive governance (Wang et al., 2021). Thus, technological frames are shaped by the congruence and incongruence between frames, regardless of whether these similarities and differences are simply perceived or deliberately evoked through negotiation.

Within innovation studies and economic sociology, the concept of expectations offers a theoretical lens for contextualizing technological frames in ecosystem-building (Beckert, 2016; Borup et al., 2006). Here, expectations are conceived as performative, meaning they influence actions and are perceived as playing a key role in agenda building and mobilizing resources in innovation networks (Beckert, 2016; Borup et al., 2006). Beckert (2016, p. 9) defines expectations as “the images actors form as they consider future states of the world, the way they visualize causal relations, and the ways they perceive their actions influencing outcomes.” Moreover, expectations provide a communicative basis for collective actions and the development of new policies, ideals, and ways of organizing institutions (Emirbayer & Mische, 1998, p. 990). In other words, expectations are about the future and can influence the future. In this study, expectations (and the encompassing technological frames) are about ecosystems and also intended to shape them. Under conditions of uncertainty, expectations include elements of invention and are sustained by storylines, enabling actors to behave as if those expectations were real (Beckert, 2016, pp. 67–68). Although expectations can be situation-specific, they can also be externalized as material representations, such as in documents and material objects (Borup et al., 2006; Mische, 2014). Hence, we can study these “embedded expectations” through techniques such as document analysis (Linders, 2008; Prior, 2008).

To study the constituent elements of RAI ecosystem technological frames in EU documents (RQ1), we synthesized two theoretical frameworks (Fig. 1). First, Orlikowski and Gash (1994) proposed three analytical dimensions for studying technological frames: the nature of technology (i.e., people’s understanding of a technology), technology strategy (i.e., understanding of the motivation and vision behind adopting a technology), and technology-in-use (i.e., understanding of day-to-day use of a technology). Second, van Merkerk and Robinson (2006) outlined three dimensions for understanding the emergence of technological fields: expectations (i.e., shared beliefs about prospective entities and positions), agendas (i.e., sets of priorities that guide actions), and networks (i.e., beliefs about current and future network dynamics). We interpret the last dimension (networks) as referring to network nodes and their connections, particularly in relation to a focal technology. The frameworks of Orlikowski and Gash (1994) and van Merkerk and Robinson (2006) both rest on beliefs, assumptions, and expectations. However, while the former highlights expectations of a technology, the latter emphasizes expectations of networks that constitute an emerging technological field. Synthesizing these frameworks (see Fig. 1) and applying the concept of a technology-centered ecosystem, we intend to study what expectations constitute the EU’s technological frame of the emerging RAI ecosystem (RQ1).

Fig. 1

Visualization of the synthesis of the frameworks of Orlikowski and Gash (1994) and van Merkerk and Robinson (2006) into a framework for understanding expectations toward a technology-centered ecosystem

The first dimension in the synthesized framework captures expectations about the technology at the core of the ecosystem (RAI in our study) and its economic and ethical implications. Understanding technology-centered ecosystems as sociotechnical systems, we refer to this dimension as the nature of the sociotechnical system. The second dimension consists of expectations of the strategy, approach, and outcome of building an ecosystem around the technology (i.e., the motivation). We refer to this dimension as ecosystem agendas. The third dimension, network building, refers to expectations of the entities, linkages, and connections that constitute a future ecosystem that develops, uses, and governs the technology.

The synthesized framework (right of Fig. 1) brings two additions to the previously mentioned frameworks. First, compared to Orlikowski and Gash (1994), we broaden the technological frame from a technological and organizational focus to encompass a technology-centered ecosystem. The focal technology is one important node in this ecosystem. Second, compared to van Merkerk and Robinson (2006), we highlight the focal technology and intrinsic beliefs about it as unifying forces in the ecosystem.

It should be noted that expectations and the technological frames they constitute are not static entities. Instead, they emerge and evolve in organizational and societal processes, where actors strategically frame technological issues to promote particular agendas (Hoppmann et al., 2020). Since expectations can influence decisions and outcomes, actors have an interest in creating expectations that favor desired decisions or outcomes (Beckert, 2016, p. 80). Moreover, expectations embedded in plans and strategies are influential only if they are acted upon (i.e., if actors adopt and co-shape the expectations). This suggests that creating and mobilizing expectations requires intentional effort.

Theorizing this intentional effort, we propose the concept of expectation work, which we define as the purposive action of actors (e.g., individuals, groups, or organizations) in creating and negotiating expectations. As expectations form one element of technological frames, we posit that actors can shape technological frames through expectation work. Hence, by studying expectation work, we can study how actors shape technological frames through purposive actions in congruence or incongruence with the expectations underlying these frames.

The concept of expectation work clarifies the dynamic and reflexive process of co-shaping technological frames. In particular, expectation work elaborates on the concept of framing (Hoppmann et al., 2020; Kaplan & Tripsas, 2008) and the emergence of technological fields (van Merkerk & Robinson, 2006). Moreover, actors’ purposive future-oriented expectation work renders technological frames dynamic and continuously shaped, which means that frames are a starting point for contention and negotiation rather than a stable outcome. In addition, expectation work is reflexive because expectations are beliefs about the ecosystem produced and co-shaped by actors within the ecosystem. In other words, expectation work can be considered as the ecosystem shaping itself. Taken together, the framework in Fig. 1 and the concept of expectation work will focus analytical attention on the constituent dimensions and co-shaping of technological frames underlying ecosystem building. However, an empirical investigation of the resulting ecosystem (e.g., social network analysis) is beyond the scope of this framework. This delimitation stems from our decision to focus on early ecosystem development and the role of expectations.

In conclusion, we are interested in the asymmetrical shaping of the technological frames of the RAI ecosystem between the EU and other actors. While the former articulates embedded expectations through documents on RAI, the latter respond to these embedded expectations in an interview setting. In these responses, the actors perform expectation work, shaping the technological frames of RAI.

3 Research approach

To answer the research questions, we collected two separate datasets. The first comprises documents that articulate the EU’s expectations of RAI and the ecosystem of RAI (RQ1), while the second consists of 15 interviews with RAI experts (RQ2). Each data set allowed us to answer one of the two research questions. Next, we describe the data collection and data analysis processes.

3.1 Data collection

We started our data collection by screening and selecting EU documents on RAI. On April 10, 2018, 25 European countries signed the Declaration of Cooperation on Artificial Intelligence. This declaration emphasizes cross-border cooperation to ensure Europe’s competitiveness in the research and deployment of AI, to profit from AI’s business opportunities, and to consider societal, ethical, and legal questions. With this declaration and the ensuing documentation, the EU aspires to be a key player in defining rules related to digitalized societies. Indeed, these documents play a crucial role in communicating the EU’s and related experts’ (the High-Level Expert Group’s) expectations on the nature of the sociotechnical system, ecosystem agendas, and network building related to RAI. In this role, the documents outline the EU’s vision of and approach to RAI, making them a suitable dataset for understanding the EU’s expectations, which constitute the technological frame underlying multi-actor networks of RAI.

To select the key documents for our analysis, we defined three criteria. (1) The documents should be official EU documents that were published specifically to reflect the RAI agenda. Since we are interested in the EU’s technological frame of RAI, we only included documents published by the EU (e.g., its Expert Group or the European Commission) that specifically addressed RAI. (2) The documents should be seminal contributions to the EU’s efforts toward an RAI ecosystem. This criterion was met if the documents stated the EU’s long-term objectives for RAI or introduced a topic for the first time (such as trustworthy AI). (3) The documents should have resonated among RAI researchers and practitioners. Since we analyze the shaping and co-shaping of the emerging ecosystem of RAI, other stakeholders should be aware of the selected EU documents. Based on these three criteria, we selected five documents that the European Commission published between 2018 and 2020, which are presented in Table 2 alongside the selection criteria that they fulfilled.

Table 2 Selected EU documents and selection criteria

The first dataset (EU documents) informed the collection of our second, interview-based dataset. After collecting these documents, we familiarized ourselves with the content and performed a document analysis to grasp the EU’s expectations of responsible AI (see “Data analysis”). Based on this analysis, we crafted questions for conducting semi-structured interviews with RAI experts. We formulated these questions to ensure that they addressed two topics on RAI: (1) benefits and roadblocks of RAI and (2) actors and activities in an emerging ecosystem for RAI. However, rather than explicitly referring to our analysis of the EU’s expectations, we explored whether and how these expectations emerged from the interviews. When they emerged, we left the interviewees’ expectations of RAI ecosystems uncorrected; that is, we did not juxtapose their expectations with our understanding of the EU’s expectations during the conversation. We tested the interview questions in a mock-up interview (which was excluded from the analysis) and subsequently revised the questions.

In total, we conducted 15 semi-structured interviews with technology developers, researchers, and consultants (Table 3). To identify potential interviewees, we screened our own networks, regional and national AI networks and communities, research institutes, authors of research articles on RAI, and social media lists of AI experts (e.g., on Twitter). In addition, we asked the interviewees to name further experts with whom we should talk. We selected the interviewees based on their expertise in RAI and approached them via email. When they agreed to be interviewed, we also asked for their informed consent. All the interviews were audio-recorded and transcribed, and we started the analysis immediately after the first interview. When analyzing the interviews, two authors coded them separately, although they regularly discussed their analyses and codes. Through these discussions, we realized that theoretical saturation had been reached as we approached the 15th semi-structured interview (Urquhart et al., 2010); that is, after 15 interviews, few new insights on the topic of interest were being revealed. The interviews lasted between 36 and 90 min, with an average length of 60 min.

Table 3 Interviewee profiles and interview lengths

3.2 Data analysis

Our data analysis process featured three steps. First, we analyzed the five selected EU documents to identify their expectations toward RAI. Second, we coded the semi-structured interviews in relation to the expectations identified in the EU documents. Third, we related our codes to the concepts of technological frames and expectation work to understand the adoption and co-shaping of the technological frame of RAI and the surrounding ecosystem. In this final step, we identified five types of expectation work.

The first step produced four key expectations embedded in the EU documents. For this, we abstracted statements from the documents into expectations. These expectations summarize condensed meaning units (Graneheim & Lundman, 2004), which are close to the original wording in the documents. For example, we coded the statement “Like the steam engine or electricity in the past, AI is transforming our world, our society and our industry” (European Commission, 2018a) as “Transformative potential of AI” and “The EU will continue to cooperate with like-minded countries, but also with global players, on AI, based on an approach based on EU rules and values” (European Commission, 2020, p. 8) as “Value-based cooperation.” Although the first author conducted the document analyses, the codes and findings were discussed with the author team. We identified four key expectations: (1) trust as the foundation of RAI; (2) ethics and competitiveness as complementary; (3) a European value-based approach; and (4) Europe as a global leader in RAI.
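For illustration, the logic of this first coding step can be sketched as a simple data structure. The following Python snippet is a purely hypothetical illustration rather than part of the actual analysis: the quoted statements and codes are the examples given above, while the data structures and the tentative assignment of “Value-based cooperation” to the “European value-based approach” expectation are our illustrative constructions.

```python
# Illustrative sketch (not the actual coding project) of how statements from the
# EU documents were condensed into codes and then related to key expectations.
from collections import defaultdict

# Condensed meaning units: (source, statement excerpt) -> code
coded_statements = {
    ("European Commission, 2018a",
     "Like the steam engine or electricity in the past, AI is transforming our world ..."):
        "Transformative potential of AI",
    ("European Commission, 2020, p. 8",
     "The EU will continue to cooperate with like-minded countries ... based on EU rules and values"):
        "Value-based cooperation",
}

# Tentative, illustrative grouping of codes under key expectations; codes without
# an entry remain open for discussion within the author team.
code_to_expectation = {
    "Value-based cooperation": "European value-based approach",
}

key_expectations = [
    "Trust as the foundation of RAI",
    "Ethics and competitiveness as complementary",
    "European value-based approach",
    "Europe as a global leader in RAI",
]

# Group coded statements by the expectation they support (or mark them as open).
statements_per_expectation = defaultdict(list)
for (source, excerpt), code in coded_statements.items():
    expectation = code_to_expectation.get(code, "Not yet assigned")
    statements_per_expectation[expectation].append((code, source))

for expectation in key_expectations + ["Not yet assigned"]:
    for code, source in statements_per_expectation.get(expectation, []):
        print(f"{expectation} <- [{code}] ({source})")
```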

To contextualize these expectations as parts of a coherent set, we introduced the synthesized framework presented in the conceptual background (Fig. 1). However, before introducing this framework, we first had to identify the underlying concepts. Neither the RAI literature (e.g., Dignum, 2020; Martin, 2019; Trocin et al., 2021) nor the sociology of expectations (Beckert, 2016; Borup et al., 2006) provided concepts for theorizing the articulation, adoption, and co-shaping of expectations toward an emerging technology and its surrounding ecosystem. By consulting the IS literature on expectations, we discovered the concept of technological frames (Orlikowski & Gash, 1994; Wang et al., 2021). Relating this concept to our data, analysis results, and the literature on future-oriented expectations (van Merkerk & Robinson, 2006), we perceived that the expectations externalized in the EU documents constituted a technological frame of an RAI ecosystem. This warranted a synthesis of the frameworks of technological frames (Orlikowski & Gash, 1994) and technological field emergence (van Merkerk & Robinson, 2006), allowing us to understand the constituent dimensions of the technological frame of an RAI ecosystem.

The second step focused on the interview transcripts, with the aim of understanding what the interviewees said in relation to the EU’s technological frame. Using NVivo software, we analyzed the interviews abductively (Tavory & Timmermans, 2014; Timmermans & Tavory, 2012). This means that we considered the interviews against the four expectations identified in the first step (document analysis), while remaining close to the empirical material to identify whether and how the interviews adopted or co-shaped these expectations. During this step, the first and second authors coded the interviews independently and kept a research journal of memos capturing thoughts, ideas, observations, and explanations from and for their coding. In regular meetings, they subsequently discussed their codes and the coded statements. Through this iterative process of independent coding and discussing the codes, we arrived at a set of codes that related interview excerpts to the four expectations identified in the EU documents.

In the third step, we aimed to systematize our analyses of stakeholder reactions to the EU’s expectations. This raised the question of how the experts adopted and co-shaped the technological frame embedded in the EU documents. To answer this question, we returned to the interview statements, in which the experts referred to the four expectations. By analyzing the interview statements against the theoretical framework, we found that these statements voiced congruent or incongruent expectations of RAI. We understand these instances as “sites of hyperprojectivity” (Mische, 2014), which render implicit expectations of the future explicit. This notion rests on our understanding that the interview situation prompted interviewees to consider RAI ecosystems in a future-oriented manner to articulate their expectations of RAI. In addition, we conceptualized interviewees’ statements concerning the technological frame as expectation work, meaning those moments in which actors adopt or co-shape a technological frame by expressing congruence or incongruence with its expectations. Through an analysis of these instances, we identified five types of expectation work: three types of congruent expectation work (i.e., reproducing, translating, and extending) and two types of incongruent expectation work (i.e., scrutinizing and rooting). A summary visualization of the described research approach is presented in Fig. 2.

Fig. 2

Visualization of the research approach

4 Findings

We present two key findings. First, we outline four expectations of RAI ecosystems that were identified in the analyzed EU documents. Second, we present five types of expectation work to conceptualize how the interviewees responded to and acted upon the expectations embedded in the EU documents. Together, these two findings provide insights into how actors articulate, adopt, and co-shape the EU’s technological frame of the RAI ecosystem.

4.1 EU technological frame: analysis of EU documents

The analysis of the selected EU documents revealed four key expectations (Fig. 3): (1) trust as the foundation of RAI, (2) ethics and competitiveness as complementary, (3) a European value-based approach, and (4) Europe as a global leader in RAI. The findings are presented through these four key expectations and in relation to their positions within our analytical framework.

Fig. 3

Map of the expectations in the analyzed EU documents; key expectations are in bold and numbered

4.1.1 Nature of the sociotechnical system

The nature of the sociotechnical system includes two key expectations in the technological frame: trust as the foundation of RAI and ethics and competitiveness as complementary. With regard to trust as the foundation of RAI, trust and trustworthiness are central themes in the documents, containing beliefs about how trust operates in complex systems. In the documents, trust is connected to many other topics. For example, trust is mentioned as a prerequisite for the uptake of digital technology (European Commission, 2020, p. 1), for the development, deployment, and use of AI systems (High-Level Expert Group on Artificial Intelligence, 2019), and for a human-centric approach to AI (European Commission, 2019). The uptake of AI is seen as particularly important, with one document arguing for “the broadest possible uptake of AI in the economy, in particular by start-ups and small and medium-sized enterprises” (European Commission, 2018b). Moreover, trust in AI is fostered by a clear regulatory framework (European Commission, 2020, p. 10), evaluation by auditors (High-Level Expert Group on Artificial Intelligence, 2019), explainability (European Commission, 2018a), responsible data management (European Commission, 2020, p. 8) and an ethical approach to AI (European Commission, 2019). Trustworthiness is perceived as requiring a holistic approach that considers the entire sociotechnical context, actors, and processes (High-Level Expert Group on Artificial Intelligence, 2019), which is also expressed in the idea of an “ecosystem of trust” (European Commission 2020, p. 3) or “environment of trust and accountability” (European Commission, 2018a).

Trust is linked to the theme of developing and leveraging ecosystems, placed under “network building” in Fig. 3. Europe’s “world-leading AI research community,” deep-tech start-ups (European Commission, 2018a), and the General Data Protection Regulation (GDPR) as an “anchor of trust” (European Commission, 2018b) provide a basis for creating synergies and networks between research centers and for developing a “lighthouse center” to coordinate efforts (European Commission, 2020). From an ecosystem perspective, trust between actors is an established theme in research (e.g., Tsujimoto et al., 2018). The expectations of trust build the basis for the transformative potential of AI to be realized in Europe and for AI to support social progress, including achieving sustainable development goals, tackling inequality, and promoting social rights. Furthermore, the documents position AI as supporting desirable outcomes if it is trustworthy and ethical. Accordingly, trust expectations underpin ecosystem agendas as well as statements on network building.

The second central expectation is the idea of ethics and competitiveness as complementary. The concept of “responsible competitiveness” summarizes this idea effectively (High-Level Expert Group on Artificial Intelligence, 2019). Moreover, the “Building trust in human-centric artificial intelligence” document states the expectations around ethical AI in particularly clear terms:

“Ethical AI is a win-win proposition. Guaranteeing the respect for fundamental values and rights is not only essential in itself, it also facilitates acceptance by the public and increases the competitive advantage of European AI companies by establishing a brand of human-centric, trustworthy AI known for ethical and secure products.” (European Commission, 2019, p. 8).

The document also states that economic competitiveness and societal trust must emanate from the same fundamental values (European Commission, 2019). Further, in the documents, it is argued that the “sustainable approach” to technologies creates a competitive edge for Europe (European Commission 2018a). The European approach aims to promote Europe’s innovation capacity while simultaneously supporting ethical and trustworthy AI (European Commission, 2020, p. 25).

The “win-win” position essentially claims that strong ethical values create an appealing brand for European businesses. As stated by Floridi (2019), “the EU wants to determine a long-term strategy in which ethics is an innovation enabler that offers a competitive advantage, and which ensures that fundamental rights and values are fostered.” This argument makes sense in the context of an initial, predominantly negative European Parliament discussion on AI regulation and the twin strategic EU objectives of protecting citizens and enabling competitiveness (Renda 2020). In the background, the documents reveal concern over increasing global competition, which in the literature is often called an “AI race” (Smuha, 2021). Moreover, the documents depict Europe as falling behind in terms of private investments in AI, and without major effort, the EU risks missing many of the opportunities offered by AI (European Commission, 2018b). However, the notion of ethics and competitiveness being complementary can be questioned, as this could obscure issues of power and conflicts (Veale, 2020). Conversely, the importance of trust is widely recognized and perceived as having economic value (Smuha, 2021). Therefore, trust can be identified as a bridge between ethical and economic concerns. On an analytical level, expectations of the nature of the sociotechnical system represent the foundations of the EU expectations.

4.1.2 Ecosystem agendas

The EU documents express a strong sense of seeking a European value-based approach, meaning a distinct European path or vision to approach AI. Although a common approach is sought to avoid fragmentation and regulatory uncertainty, the emphasis on the ethical foundations of the European approach is equally important. Since AI is understood to have major societal impacts, and building trust is considered essential, the preferred European AI approach is grounded in “European values,” fundamental rights, human dignity, and privacy protection (European Commission, 2020, p. 2). Furthermore, the European approach is framed as human-centric and inclusive. Democracy and the rule of law are considered underpinnings of AI systems and enable “responsible competitiveness” (High-Level Expert Group on Artificial Intelligence, 2019). Moreover, it is argued that societal values provide a distinctive “trademark for Europe and its industry” in the field of AI (European Commission, 2019). This quest for a European approach rooted in ethics and fundamental rights sets the normative agenda that underpins measures such as public investments and drafting regulatory frameworks. Turning to the analytical framework, the expectations of ecosystem agendas provide a desired direction of action. Hence, the ecosystem agenda connects concrete plans to a broader value-based project.

4.1.3 Network-building

The EU documents frame Europe as a global leader in RAI and state that it is “well positioned to exercise global leadership in building alliances around shared values” (European Commission, 2020, p. 8). It is further noted that the EU is “well placed to lead this debate on the global stage” (European Commission, 2018a) and can “be the champion of an approach to AI that benefits people and society as a whole” (European Commission, 2018a). Accordingly, Europe is perceived as providing a unique contribution to the global debate and a strong regulatory framework that sets the global standard (European Commission, 2019). The strong attachment to values, the rule of law, and the human-centric approach to AI are seen as core strengths that will enable Europe to promote RAI on the global stage. According to the High-Level Expert Group, placing citizens at the heart of endeavors is “written into the very DNA of the European Union through the Treaties upon which it is built,” which enables the building of leadership in innovative AI systems (High-Level Expert Group on Artificial Intelligence, 2019).

The value of cooperation is also highlighted, especially with like-minded countries and those who share the same values, although the documentation also encourages collaboration on a more general, global scale (European Commission 2018b, 2020). In effect, the documentation presents the view that only global solutions are ultimately sustainable (European Commission, 2018a). Moreover, global forums such as UNESCO, the OECD, the WTO, and the International Telecommunication Union are mentioned as key arenas (European Commission, 2020).

From the ecosystem perspective, the visions promoted by the EU institutions and the High-Level Expert Group place the EU as the leader of the RAI ecosystem. This ties to the concept of “normative power Europe,” where it is argued that the role of the EU is based on influencing ideas and norms in addition to civilian and military power (Manners, 2002). However, this raises the question of values from other regions of the world. Smuha (2021) notes that regional diversity may be needed in some aspects of regulation and that global “regulatory co-opetition” might be preferable to global convergence.

The expectations in the network-building category link the EU documents to the emergence of ecosystems for RAI. Here, it is envisaged that networks can be built based on the statements about sociotechnical systems and agendas. The ethical undertones are particularly interesting because they highlight the ecosystem around RAI rather than the broader AI ecosystem. Within the notion of an “ecosystem of trust” alongside an “ecosystem of excellence” (European Commission, 2020), the documents’ narrative connects back to expectations of the foundational role of trust in the sociotechnical system. The EU’s global leadership in RAI represents the culmination of this pathway, although it requires achieving other objectives, such as increasing AI adoption and stimulating investment.

4.2 Types of expectation work: Analysis of interviews

The interviews with 15 RAI experts revealed five distinct types of expectation work: reproducing, translating, scrutinizing, rooting, and extending. These fall under the main categories of congruent and incongruent expectation work (cf. Orlikowski & Gash, 1994). To clarify, incongruent expectation work means that there is incongruence (i.e., differing expectations) with regard to the initial technological frame, not that the expectation work itself is considered incongruent. In the following, the types of expectation work are explained, and examples are provided in Table 4.

Congruent expectation work comprises three types: reproducing, translating, and extending. Reproducing the EU’s technological frame is the clearest form of congruent expectation work in the material: actors reiterate elements of the frame from the EU documents without significantly adding to or questioning them. For example, the experts reiterated the importance of trust: “Trust can be a surprisingly important thing if you think about an individual person’s life and the more important and big things like their finances. This also applies on the company side” (#10). Overall, the reproducing type of expectation work remains within the bounds of the initial set of expectations and serves to strengthen frames through repetition.

Translating is a more substantial type of congruent expectation work, where practical implications and implementation options are derived from the general approach and set of expectations. For instance, one interviewee called for professional organizations for AI auditors: “[…] some kind of a professional association that somehow, maintains professional qualifications and maintains ethical monitoring. So, something like this would be appropriate for the auditing parties” (#8). Translating can be characterized as a solution-oriented type of expectation work, where the initial set of expectations is accepted. Moreover, in translating, the discussion moves towards the practical level, to structures and activities that help implement the approach laid out in the expectations.

Extending is a type of expectation work in which new visionary elements are added to take the set of expectations further. As an example, the distributed power of consumers is envisioned as one part of the ecosystem: “I mean that’s not one, one single leader, […] in an optimal way, people, their decisions, buying decisions and so on, […] then it will change quite fast because […] then it’s a deciding factor if you use something or not.” (#3). In contrast to translating, ideas in extending are at the same high level of abstraction as the initial set of expectations, rather than translating the set of expectations to practical implications. Extending is the most visionary among the different types and is the most similar to the original articulation work of the expectations because new expectations are produced. However, rather than being created ex nihilo, the new expectations extend an existing set of expectations (in this case, the EU’s technological frame for RAI).

Incongruent expectation work consists of two types: scrutinizing and rooting. Scrutinizing is the clearest type of incongruent expectation work in which assumptions and expectations from the EU technological frame are challenged. Hence, the scrutinizing form of expectation work tests particular elements and assumptions of expectations. For instance, the core concept of RAI is criticized as being unclear: “it’s always the discussion about ethical AI, but if you start scratching the surface you realize that even ethical software is something that, we don’t have a clear answer to that […] uncertainty or unclarity is the first thing that comes to my mind” (#4). However, the purpose of this type of expectation work is not only to deconstruct expectations; it can ultimately strengthen the set of expectations if the identified problems are appropriately addressed.

Rooting is a type of expectation work in which perceived real-world issues are included in the discussion. Hence, there is an attempt to ‘root’ expectations into contexts with various confounding factors that may challenge the straightforward success of approaches and technologies, such as RAI. For example, tensions between different interests are highlighted: “big companies which want to do something, when it comes to commercial interest, that’s often winning against, being transparent with your systems or changing something” (#3). Compared to scrutinizing, rooting is more sympathetic to the initial set of expectations, although real-world issues can present serious challenges. Moreover, although rooting is somewhat similar to the translating type of expectation work, its direction is different. In translating, the set of expectations leads to real-world implications, while real-world issues are incorporated to challenge the set of expectations in rooting. These introduced real-world problems do not necessarily invalidate the European approach to RAI ecosystems. However, they require that the approach be adjusted in relation to these issues. Similar to the “veto right” of sources in historical studies, these material factors are perceived to have a veto right regarding the narratives that are told about the future (Roßmann, 2021).
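To recapitulate the typology in compact form before turning to Table 4, the following minimal sketch is a hypothetical Python encoding of the five types and their grouping into congruent and incongruent expectation work; it is offered purely for clarity and is not an artifact of our analysis.

```python
# Illustrative encoding of the expectation-work typology described above.
from enum import Enum

class ExpectationWork(Enum):
    REPRODUCING = "reiterates frame elements without adding to or questioning them"
    TRANSLATING = "derives practical implications and implementation options from the expectations"
    EXTENDING = "adds new visionary elements that take the set of expectations further"
    SCRUTINIZING = "challenges assumptions and expectations within the technological frame"
    ROOTING = "introduces real-world issues that may challenge the expectations"

# Congruence describes the relation to the initial technological frame,
# not a property of the expectation work itself.
CONGRUENT = {ExpectationWork.REPRODUCING, ExpectationWork.TRANSLATING, ExpectationWork.EXTENDING}
INCONGRUENT = {ExpectationWork.SCRUTINIZING, ExpectationWork.ROOTING}

def category(work: ExpectationWork) -> str:
    """Return the main category (congruent/incongruent) of a type of expectation work."""
    return "congruent" if work in CONGRUENT else "incongruent"

if __name__ == "__main__":
    for work in ExpectationWork:
        print(f"{work.name.lower():<12} ({category(work)}): {work.value}")
```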

The types of expectation work, together with the interview excerpts, are summarized in Table 4. Each type of expectation work was typically applied to particular expectations within the material, as shown in the “Targeted expectation” column in the table. Although these links between expectation work and expectations are indicative rather than exhaustive, they demonstrate that different kinds of expectation work are conducted on the same expectations. For example, while “Europe as a global leader on RAI” is translated, extended, scrutinized, and rooted, it is not reproduced, indicating that this expectation fosters and requires further framing work. “Trust as the foundation of RAI” was mostly reproduced, although it was also scrutinized by one interviewee, indicating that the foundation is contested by some actors.

Table 4 Types of expectation work and excerpts from interviews

5 Discussion

Ecosystems for RAI are being configured and planned in sets of expectations. Accordingly, the aim of this study was to analyze the following: (1) what expectations constitute the EU’s technological frame of the RAI ecosystem, and (2) how experts adopt and co-shape the RAI technological frame externalized by the EU institutions. As discussed in the conceptual background, we understand the technological frame as encompassing the focal technology and any surrounding social relations and networks (Orlikowski & Iacono, 2001). The following sections outline implications for research and practice, limitations, and directions for future research.

5.1 Implications for research

Our research contributes to the concepts of technological frames and ecosystems and to the literature on RAI. We begin by highlighting our contributions to the literature on technological frames and ecosystems.

We emphasize the importance of future-oriented expectations in technological frames and offer a three-fold framework for analyzing the technological frame of an emerging technology-centered ecosystem: the nature of the sociotechnical system, ecosystem agendas, and network building. The ecosystem-oriented conceptualization of technological frames broadens the technological and organizational focus to include expectations about networks of human actors and technical artifacts. Moreover, we elaborate on strategic framing and frame congruence and incongruence (Davidson, 2002, 2006; Hoppmann et al., 2020; Wang et al., 2021) in an interorganizational setting by theorizing the reflexive co-shaping of technological frames through expectation work. In particular, we argue that how stakeholders react to technological frames is as important as the initial articulation of those frames.

We provide a typology of expectation work for co-shaping expectations. The different types of expectation work are illustrated in Fig. 4 and explained in more detail in Section 4.2. On a theoretical level, the types of congruent expectation work (reproducing, translating, and extending) start from the focal technological frame and either simply reiterate it (reproducing), translate it to practice (translating), or bring in new compatible elements that take the vision further (extending). Incongruent expectation work starts from the technological frame and questions its internal coherence (scrutinizing) or brings in real-world issues to problematize the technological frame (rooting). In all cases except reproducing, the expectations in the technological frame serve to produce new ideas and material opportunities. In the case of reproducing, the expectations can be considered more like a closed system in which received ideas are reiterated and no new elements are introduced. The expectation work concept provides vocabulary for understanding the co-shaping of technological frames in multi-actor settings.

Fig. 4

Types of expectation work

We argue that the concept of expectation work (purposive actions to create and negotiate expectations) highlights how technology-centered ecosystems are established and stabilized through future-oriented framing activities. This has implications for understanding how ecosystems form around focal technologies. In addition to asking about key activities and the structure of ecosystems (Adner, 2017), we can ask further questions: when ecosystems exist (a temporal investigation), which aspects of ecosystems are accepted or contested, and how committed different actor groups are to establishing and maintaining ecosystems. Moreover, the concept of expectation work helps to elucidate how ecosystems evolve and are designed (cf. Granstrand & Holgersson, 2020; Tsujimoto et al., 2018). In particular, it illustrates that actors can strategically use different types of expectation work to align ecosystem design efforts with their interests and objectives. Potentially, this can build a web of expectation work and can result in tensions that may undermine efforts to design an ecosystem. However, frame ambiguity can also serve as a tool for the adaptive governance of technology if it is used skillfully (Wang et al., 2021).

We illustrate the importance of technological frames and the underlying artifacts’ materiality in ecosystem design and maintenance. In the translating and rooting types of expectation work, actors mobilize real-world issues either to strengthen and enact expectations in a frame (translating) or to challenge these expectations (rooting). Similarly, previous research has suggested that artifacts such as models, simulations, and prototypes may be considered to have a veto right in relation to narratives about the future (Roßmann, 2021). This means that actors accept these artifacts as “props” that represent imagined futures (Beckert, 2016, pp. 147–148; Roßmann, 2021) and legitimize expectations that are considered congruent with the artifacts. Hence, existing frameworks, prototypes, and exemplars can influence the shape and credibility of technological frames and expectations. This underpins the importance of the artifact (RAI systems) and its actual materiality for building an ecosystem of RAI.

Next, we elaborate on our contributions to the RAI literature. We posit that technology-centered ecosystems and networked cooperation act as mediating levels between regulation, high-level AI ethics principles, and the organizational implementation of RAI. Existing research on RAI has incorporated discussions on translating AI ethics principles to practice through accountable AI technology (Morley et al., 2021; Trocin et al., 2021), the networked nature of AI ethics and accountability (Ananny & Crawford, 2018; Orr & Davis, 2020), and the value of RAI (Kumar et al., 2021). From another perspective, the EU’s approach to RAI has been scrutinized by law and policy scholars (Renda, 2020; Smuha, 2021; Veale, 2020). To understand the predominant framing of RAI ecosystems, we provide a thematic map of expectations, which suggests a layered structure for the EU’s technological frame. The analysis in this paper reveals that the EU raises key expectations of RAI ecosystems: building trust, speeding up adoption at home, and spreading the word on the global stage. According to these expectations, AI holds great transformative potential if it is broadly adopted. However, this potential requires taming to avoid risks and to support societal progress. This is where expectations of ecosystem agendas become important. According to the documents, the potential of AI can be unlocked in a responsible way if a European approach is found that is grounded in broadly accepted values, fundamental rights, and a human-centric perspective. Accordingly, Europe can export its approach globally and develop appealing AI products and services for global markets. In summation, trust and ethics provide a shared basis, a European approach lays out a normative project, and Europe (as a global leader) extends to global networks and provides a resolution to the narrative.

We also highlight the importance of technological frames and expectation work in framing RAI as the core technology for the emerging ecosystem. The broader point is that RAI is subject to ongoing redefinition among academics and practitioners, rather than being a fixed construct with specific pillars (e.g., Dignum, 2020; High-Level Expert Group on Artificial Intelligence, 2019; Jobin et al., 2019). In addition, framing influences the prospects of an RAI ecosystem in a kind of double loop. On the one hand, how RAI is framed in Europe will influence the prospects of successfully promoting RAI ecosystems. This means that the frame needs to be sufficiently rooted in the current technological and geopolitical landscape. On the other hand, the technological frame defines how success criteria are perceived (Orlikowski & Gash, 1994). As a logical consequence, there is no position outside technological frames from which the prospects of the technological frame can be evaluated. In other words, frames and expectations partly define the agenda against which their success can be evaluated.

5.2 Implications for practice

Our study also has implications for practice. Drawing on our findings regarding the expectations toward RAI and the types of expectation work, we present four implications as considerations for organizations formulating and implementing a corporate (responsible) AI strategy.

Early AI adopters can become RAI champions. Organizations that only utilize and do not develop AI systems may hesitate to be early adopters of RAI for two reasons: implementing AI systems responsibly requires resources, and regulations on RAI are still in a state of flux. However, our findings suggest that organizations should consider engaging with an RAI ecosystem early to co-shape its emergence, which could yield three benefits. (1) Reputation: they can benefit from going beyond the minimum legal requirements for AI systems. (2) Ecosystem partnerships: they would be among the first to build partnerships that can save costs and tailor AI systems to their requirements. (3) Organizational learning: they can benefit from a learning advantage, as they are among the first to experiment with and use RAI systems.

RAI ecosystem as the first port of call. The technology-centered ecosystem of RAI mediates regulation, high-level AI ethics principles, and the organizational implementation of RAI systems. This renders the RAI ecosystem the first port of call for organizations, meaning that they can turn to the ecosystem to find guidance, negotiate contracts, commission audits, or develop and implement an RAI system. Organizations should also consider this integral role of the emerging RAI ecosystem in their own RAI strategy, that is, determining which node positions and relationships they seek to occupy and how these align with their overall corporate AI strategy and corporate responsibility. This mediating role as the first port of call will become more apparent if recognizable local centers emerge as ‘faces’ for the EU RAI ecosystem (cf. Stix, 2022).

Map for navigating the “sea of expectations.” The presented framework maps EU expectations toward the emerging RAI ecosystem. Existing literature on AI regulation (Jabłonowska et al., 2018; Smuha, 2021; Veale, 2020) has identified similar themes. However, we position these expectations within a framework for understanding them as expectations toward an emerging ecosystem. Beyond our mapping, practitioners can utilize the framework as a tool for prioritizing and responding to expectations, which can subsequently inform ecosystem design (Tsujimoto et al., 2018) and enable ecosystem designers to consider their respective expectations reflexively. Practitioners can thus draw on our framework as a mapping tool and on our mapping of the EU’s expectations regarding sociotechnical systems, agendas, and network building as a map of the “sea of expectations” (van Lente, 2012). Together, the mapping tool and the map can help practitioners formulate and act upon their own expectations and their role in ecosystem building for RAI.

Types of expectation work offer strategic directives. The types of expectation work illustrate that organizations can co-shape the expectations of the emerging RAI ecosystem. Moreover, they can express congruence or incongruence with existing expectations to co-shape these as projections of potential futures. Thus, organizational actors can consider the available types of expectation work as options for strategic direction and positioning within ecosystem building. Furthermore, organizations can consider whether they wish to accept and extend a technological frame (reproducing, translating, and extending) or whether they seek to question its coherence (scrutinizing and rooting). In addition, they can reflect on which type of expectation work would help them to achieve their ecosystem agenda and network building most effectively.

5.3 Limitations

Our study is based on a qualitative analysis of five key documents and interviews with 15 experts. We acknowledge that this research approach has limitations. For example, while we can offer theoretical generalizations, statistical generalizations cannot be provided (Lee & Baskerville, 2003). Considering the qualitative nature of the collected data and the number of interviews, we also acknowledge that we provide theoretical abstractions of the EU’s technological frame of the RAI ecosystem and actors’ adoption and co-shaping of the frame through expectation work. More substantially, our research approach assumes that a coherent technological frame can be traced from the documents, and that the views of expert interviewees can be linked to this frame. While we acknowledge that the documents and interviews present different units of analysis, we took this approach in accordance with Orlikowski and Gash (1994), who argued that technological frames can be shared among groups, communities, and organizations.

Finally, this study does not present a final technological frame after analyzing the expectation work. This limits our assessment of whether and how the expectation work co-shaped the studied technological frame. We acknowledge this as a conscious limitation and a consequence of our different levels of analysis. While the EU documents present the expectations of macro-level actors, the interviews revealed the responses of micro-level actors to these expectations. Although these responses co-shaped the technological frame within the interview setting, they did not do so through interaction with the EU as the macro-level actor that externalized the technological frame. That being said, our research focus lies elsewhere: in the types of expectation work themselves and how they indicate different positions in relation to the initial technological frame.

5.4 Future research

This study opens up future research directions into ecosystems around RAI and expectation work. The ecosystem view pertaining to RAI implies that future research could study potential RAI business models within this ecosystem. Indeed, the set of expectations articulated by the documents has implications for company business models and for emerging products and services that address RAI challenges. Besides business models that rely on developing and offering RAI solutions, such models could revolve around auditing and consulting, or around challenging existing AI business models that build on ethically debatable premises.

Studies of expectation work (including its antecedents and consequences) could reveal new patterns of technology development, use, and adoption within complex multi-actor settings. Moreover, the identified types of expectation work present a starting point for future research into how individual actors (or groups) can mobilize expectation work to shape shared technological frames. Accordingly, future studies could examine the ways in which the EU technological frame itself was enacted. For example, this could involve studying groups such as the High-Level Expert Group on Artificial Intelligence and how they negotiate unstable and contradictory sets of expectations when drafting articulations of technological frames, such as the EU documents.

Overall, the framework proposed in this paper opens up new research directions on the role of technological frames and expectations in the early stages of ecosystem development. Further, the study provides an analysis of the current European discussion on RAI and the expectation work that underlies ecosystem building. Similar processes could be traced longitudinally and in different regions, enabling historical and cross-regional comparisons. Moreover, the framework could lend itself to other studies of ecosystems emerging around new technological artifacts.

6 Conclusions

This paper posed the following two research questions: (1) what expectations constitute the EU’s technological frame of the RAI ecosystem, and (2) how experts adopt and co-shape the RAI technological frame externalized by the EU institutions. To answer these research questions, we first analyzed EU documents to identify the central expectations that constitute the EU’s technological frame and underlie its strategic vision for an RAI ecosystem. Subsequently, we conducted and analyzed expert interviews, which revealed five distinct types of expectation work (reproducing, translating, extending, scrutinizing, and rooting) that actors mobilize to co-shape expectations. Importantly, the types of expectation work portray different actions for co-shaping the expectations that constitute a technological frame. Our conceptual framework and research approach highlight that technological frames are not set in stone; they can evolve over time through negotiation and reframing.

The RAI ecosystem is emerging and will continue to crystallize over the coming years. Although different domains (such as healthcare and transport) will increasingly adopt AI, the development of ethical and governance frameworks for RAI still involves many open questions. Moreover, the future of RAI ecosystems relies on today’s expectations to guide subsequent actions. Our study presents the EU’s expectations of an RAI ecosystem and provides starting points for research and practice to understand and co-shape the emergence of this ecosystem through technological frames and expectation work. We argue that ensuring a desirable direction for AI use should happen now, before path dependencies render it difficult to change an entrenched ecosystem.