1 Introduction

One of the key emerging technologies of the twenty-first century—Artificial Intelligence (AI)—has been surrounded by major policy discussions about its benefits and challenges, as evidenced by national and international strategies, reports and policy papers launched by governments, international organizations, consultancies and civil society organizations in recent years. These AI policy documents have defined priorities, outlined opportunities and risks and developed recommendations for the governance of the development and use of AI (af Malmborg & Trondal, 2021; Bareis & Katzenbach, 2022; Dexe & Franke, 2020; Djeffal et al., 2022; Filgueiras, 2022; Guenduez & Mettler, 2022; Ossewaarde & Gulenc, 2020; Paltieli, 2021; Radu, 2021; Roberts et al., 2021; Ulnicane et al., 2021a, 2021b, 2022). As many countries and organizations launched their documents around the same time, there has been considerable cross-national and cross-organizational policy learning (Dolowitz & Marsh, 2000), which has led to some convergence in key themes and principles but also to important divergence in priorities, breadth and understanding of common themes and principles, not only across countries but also across different types of organizations (Jobin et al., 2019; Schiff et al., 2021; Ulnicane et al., 2021a, 2022).

This study contributes to research on AI policy debates by looking at how they articulate the purpose of AI development and use. To do so, it draws on studies of the two major frames of technology policy, namely its contribution to economic competitiveness and to societal challenges (Diercks et al., 2019; Mazzucato, 2021; Schot & Steinmueller, 2018; Ulnicane, 2016). According to the first frame, technology is expected to contribute to economic growth and competitiveness. In contrast, the second frame highlights the potential of technology for tackling Grand societal challenges in areas such as health, environment and energy, as well as for achieving the United Nations’ Sustainable Development Goals. This research applies the two technology policy frames to analyse how the purpose of AI development and use is discussed in AI policy. It examines AI policy documents to answer the main research question: How do they frame the purpose of AI development and use? The three sub-questions are as follows: Do AI policy documents focus on a traditional technology policy frame prioritizing economic growth or on an emerging paradigm of addressing societal challenges? What is the relationship between these two frames in AI policy? What are the omissions and silences in defining the purpose in AI policy?

To examine AI policy discussions, this study uses a policy framing approach, which focusses on how problems and their potential solutions are articulated and interpreted in policy debates (Head, 2022; Rein & Schon, 1993, 1996; Schon & Rein, 1994). It explores the two policy frames empirically by analysing AI policy documents launched by national governments, international organizations, civil society organizations and consultancies.

This study aims to contribute to the topic of this special issue on the global governance of emerging technologies by deepening our understanding of the ideational dimension of public policy. While recent studies of emerging technologies such as AI have strongly focussed on ethical and regulatory issues or their economic impacts, critical analysis of policy aims and priorities has been largely missing. By undertaking an in-depth analysis of competing AI policy frames, this research sheds light on the policy discussions and political choices surrounding emerging technologies, which represent a variety of values, ideologies and interests co-shaping the development and deployment of these technologies. It draws on insights and concepts from a number of disciplines and research fields, including policy analysis and Science and Technology Studies, to highlight that emerging technologies also serve as political battlegrounds over desirable and possible futures. Thus, this research aims to make a conceptual contribution to the studies of global governance of emerging technologies (Kuhlmann et al., 2019; Taeihagh, 2021; Taeihagh et al., 2021), supported by empirical insights from recent AI policy.

This paper proceeds as follows: Sect. 2 introduces the conceptual framework, presenting AI as an emerging technology, the policy framing approach and the two technology policy frames; Sect. 3 discusses insights from examining frames in AI policy documents; and finally, the Conclusions summarize the main findings.

2 Conceptual framework: emerging technology and policy framing

To examine policy framing of the purpose of AI development and use, the conceptual framework of this paper consists of three main elements: first, the concept of AI and the approach to AI as an emerging technology; second, the policy framing approach; and third, the two main technology policy frames of economic competitiveness and societal challenges.

2.1 Artificial Intelligence as an emerging technology

Although the term ‘Artificial Intelligence’ has been widely used in recent years, experts and policy-makers highlight the difficulty of defining AI. AI policy documents emphasize the challenge of pinning down a precise definition of AI (The 2015 panel, 2016) and the continuous debate on this topic over many years (European Commission, 2017). In the AI literature and policy documents, multiple definitions of AI can be found. AI experts, who undertook a dedicated study of how to define AI, came up with the following definition:

Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions. (European Commission, 2019: 6).

It is acknowledged that AI includes ‘a broad set of approaches, with the goal of creating machines with intelligence’ (Mitchell, 2019: 8). AI includes approaches and techniques such as machine learning, machine reasoning and robotics (European Commission, 2019). Accordingly, in setting the boundaries of what counts as AI policy, this paper follows actors’ definitions of AI, considering how policy-makers and other stakeholders understand and use the term.

While the term AI has existed for over 60 years, real-world applications have only accelerated over the last decade due to advances in computing power, the availability of data and better algorithms (Campolo et al., 2017; European Commission, 2018a). Due to these recent advances, AI today exhibits the typical characteristics of emerging technologies, such as radical novelty, relatively fast growth, prominent impacts, uncertainty and ambiguity (Rotolo et al., 2015), hypes and high positive and negative expectations (Van Lente et al., 2013), and specific needs for tentative governance to address high uncertainty (Kuhlmann et al., 2019). The hypes and high positive and negative expectations associated with emerging technologies can be seen in AI policy documents, which present AI as a revolutionary, transformative and disruptive technology (Ulnicane et al., 2022) but also highlight concerns and challenges including safety, privacy and accountability (Ulnicane et al., 2021b).

Importantly for this study of framing the purpose of AI development and use, AI, like any technology, is seen as co-shaped by the society and values in which it is embedded and thus as having important political, social and cultural aspects (Jasanoff, 2016; Schatzberg, 2018; Winner, 2020). It is not just a neutral tool serving goals defined by others (Hare, 2022; Schatzberg, 2018; Stilgoe, 2020) but represents collectively designed future ways of living, power relations and value systems (Ulnicane et al., 2022).

2.2 Policy framing approach

The policy framing approach (Head, 2022; Rein & Schon, 1993, 1996; Schon & Rein, 1994) offers a productive way to analyse policy debates. It focusses on how, in policy practice, policy stories influence the shaping of laws, regulations, allocation decisions, institutional mechanisms and incentives. Policy frames help to structure and inform policy debates and practice situated in a specific political and historical context. According to Martin Rein and Donald Schon (1993),

framing is a way of selecting, organizing, interpreting, and making sense of a complex reality to provide guideposts for knowing, analysing, persuading, and acting. A frame is a perspective from which an amorphous, ill-defined, problematic situation can be made sense of and acted on (Rein & Schon, 1993: 146), and in such frames ‘facts, values, theories, and interests are integrated’ (Rein & Schon, 1993: 145).

Policy frames are ‘diagnostic/prescriptive stories that tell, within a given issue terrain, what needs fixing and how it might be fixed’ (Rein & Schon, 1996: 89). Analysis of policy framing helps to demystify political rhetoric and problematise how policy problems are defined, debated and acted upon (Head, 2022). This paper examines rhetorical frames, which ‘are constructed from the policy-relevant texts that play important roles in policy discourse, where the context is one of debate, persuasion, or justification’ (Rein & Schon, 1996: 90). However, when analysing rhetorical frames, it is important to examine not only what is said but also omissions, silences and kinds of politics hidden in the framing (Bacchi, 2000). According to Carol Bacchi (2000), it is necessary ‘to recognize the non-innocence of how ‘problems’ get framed within policy proposals, how the frames will affect what can be thought about and how this affects possibilities for action’ (Bacchi, 2000: 50).

Rein and Schon associate policy frames with public controversies and pluralism, as ‘in any given issue terrain, there are almost always a variety of frames competing for both meaning and resources’, where ‘the contest over meaning gives legitimacy to the claim for economic and social resources’ (Rein & Schon, 1996: 95). According to Schon and Rein, these situated policy controversies, with their competing frames, structure policy debates and practices and shape the design of policies (Schon & Rein, 1994). For them, the design of policy is a social and political process involving the divergent interests and powers of actors. In their approach to policy design, Schon and Rein emphasize the interaction of multiple designers, redesign in use and shifting contexts.

The concept of policy frames, as well as the related notions of policy paradigms, discourses and narratives, has been productively applied to analyse technology policy (see, e.g., Diercks et al., 2019; Mitzner, 2020; Ulnicane, 2016), governance of emerging technologies (Jasanoff, 2003), and more recently AI policy (see, e.g., Köstler & Ossewaarde, 2022; Nordström, 2021; Ulnicane et al., 2021a, 2022). While previous studies of framing AI policy have focussed on governance, uncertainty and national policy, this paper contributes by exploring policy controversies over framing the purpose of AI development and use.

2.3 Shifting frames of technology policy

Technology policy globally is undergoing major changes in framing (Diercks et al., 2019; Mazzucato, 2021; Schot & Steinmueller, 2018; Ulnicane, 2016). Traditionally, technology policy largely focussed on economic growth, productivity and competitiveness and was justified by market failures and system failures requiring government intervention when the market did not provide sufficient support, investment and networks for the development and use of new technologies. Recently, the key assumptions of this frame have increasingly been challenged by arguments that technology development should be directed towards societal objectives, known as Grand societal challenges and the United Nations Sustainable Development Goals, in areas such as climate change, health and poverty reduction (Diercks et al., 2019; Mazzucato, 2021; Schot & Steinmueller, 2018; Ulnicane, 2016). Rather than fully replacing the previous economically oriented technology policy frame, the new focus on societal challenges can be seen as a layering process in which old and new policy paradigms co-exist and in practice sometimes overlap.

Elements of both of these technology policy frames are part of ongoing discussions about AI, which cover a broad range of issues: from economic competitiveness (Justo-Hanani, 2022; Ulnicane et al., 2021b, 2022), depicting global AI development as a new space race (Ulnicane, 2022) or a new cold war (Bryson & Malikova, 2021), to AI’s potential contribution to sustainability and to environmental and social goals (Sætra, 2021; van Wynsberghe, 2021; Vinuesa et al., 2020). It is useful to take a closer look at the key elements of these two stylized technology policy frames, so that it can later be examined how they play out in AI policy debates. While technology policy frames address a range of questions, including the objectives and organization of technology development and use as well as the policy instruments to support it, to answer the research question of this study this paper highlights how the different frames articulate the purpose of technology policy.

Technology policy emerged as a separate policy field in the 1950s and 1960s (Godin, 2004; Mitzner, 2020; Schot & Steinmueller, 2018). Since then, technology policy has been closely linked to economic policy, prioritizing the contribution of technology to national economic objectives such as growth, productivity and competitiveness. While the evidence of links between technology, growth and productivity has been questioned (Godin, 2004), this frame has become very influential and has been diffused internationally by the Organization for Economic Cooperation and Development (Godin, 2004; Henriques & Larédo, 2013).

An important element of the traditional economic framing of technology policy is its focus on national competitiveness. It depicts technology development internationally as a competition in which one country wins, acquiring political, military and economic superiority, while others lose and are left behind. There are many examples of an economic competitiveness discourse claiming that other countries are more advanced in technology development. For example, during the twentieth century, the perception in Great Britain was that other countries such as Germany, the United States, the Soviet Union and Japan were technologically superior (Edgerton, 2019). Such sentiments, that other countries are better at technology development, are typically accompanied by calls for national governments to support technology development with more investment and other policy measures. Major investments in US technology followed fears about Soviet supremacy in space technology in the late 1950s and worries about Japanese technological supremacy in the 1980s (O’Mara, 2019). The gradual emergence and expansion of the European Union’s supranational technology policy since the 1960s has been largely driven by concerns about Europe’s technology gap with the US, then Japan and recently China (Mitzner, 2020). These ideas have also become popular in policy discussions surrounding AI, where it is argued that the development of AI is largely driven by the rivalry between the two major AI superpowers, the US and China (Lee, 2018). While the economic competitiveness discourse is very popular and plays a major role in technology policy, it has also been criticised. Paul Krugman (1994) has argued that it is misleading because states do not compete in the same way as corporations, and international development is not necessarily a zero-sum game in which one country wins and others lose; it can also be a positive-sum game in which many benefit from technological advances elsewhere.

In the early twenty-first century, the traditional technology policy frame, with its objective to contribute to economic growth, has been increasingly challenged. In the context of climate change and escalating societal concerns, having economic growth as a key objective has been questioned (De Saille et al., 2020). Instead, the idea that technology policy should tackle the so-called Grand societal challenges in areas such as environment, energy and health has gained increasing prominence around the world (Boon & Edler, 2018; Diercks et al., 2019; Kaldewey, 2018; Kaltenbrunner, 2020; Ludwig et al., 2022; Ulnicane, 2016; Wanzenbock et al., 2020). To address complex societal challenges, it is argued that boundary-spanning collaborations are needed that bring together heterogeneous partners from diverse disciplines and sectors, including science, business, policy-makers and civil society (Ulnicane, 2016). Despite the widely shared recognition that initiatives addressing societal challenges require the inclusion and participation of a broad range of stakeholders, concerns have been raised that, in practice, dominant actors and their perspectives might still be prioritized (Ludwig et al., 2022). Moreover, while some argue that Grand challenges span national borders and therefore require global collaborations, others emphasize their context-specificity and argue for local initiatives to address them (Wanzenbock et al., 2020).

Although the discourse of Grand societal challenges builds on earlier ideas such as the social function of science (Bernal, 1939), the past two decades have seen the launch of dedicated initiatives to tackle Grand challenges by national governments, international organizations, universities, research institutes and academic associations (Kaldewey, 2018; Ulnicane, 2016). The Grand challenges discourse is part of transformative technology policy and of initiatives to achieve the Sustainable Development Goals through mission-oriented policies (Mazzucato, 2021; Schot & Steinmueller, 2018). While the traditional technology policy frame focusses on the supply side, challenge- and mission-oriented policies prioritize the demand side (Boon & Edler, 2018; Diercks et al., 2019). The idea that technologies should be developed according to societal needs and values is at the core of the Responsible Research and Innovation concept, which since 2010 has played an important role in technology policy in Europe (De Saille, 2015; Owen et al., 2021; Stilgoe et al., 2013). While in recent technology policy Grand challenges are typically understood as societal challenges of broad social relevance, on some occasions the term has also been used to describe purely scientific and technological challenges (Ulnicane, 2016), including technological competitions such as the DARPA (Defence Advanced Research Projects Agency) Grand Challenge (Kaldewey, 2018).

Despite the inspirational discourses surrounding Grand challenge initiatives, it is recognized that tackling Grand challenges is an uncertain, open-ended and highly complex endeavour whose successful outcome cannot be guaranteed (Diercks et al., 2019; Kaldewey, 2018; Ludwig et al., 2022; Ulnicane, 2016; Wanzenbock et al., 2020). Moreover, technology does not necessarily play the main role in addressing complex challenges such as climate change, which also require economic, political, institutional, social and other changes. Grand challenges are seen as ‘wicked problems’ (see, e.g., Kaldewey, 2018; Ludwig et al., 2022; Wanzenbock et al., 2020). Horst Rittel and Melvin Webber (1973) argued that nearly all public policy issues are ill-defined ‘wicked problems’, which differ significantly from the definable and solvable problems of the natural sciences (Rittel & Webber, 1973). ‘Wicked problems’ are unruly and intractable, characterized by complexity, uncertainty and value divergence (Head, 2019, 2022; Peters, 2017). Brian Head suggests that ‘the governance of wicked problems is less about designing elegant science-based solutions and more about implementing ‘coping’ strategies, which manage uncertainties, strengthen community capabilities and build resilience across all sectors—social, economic and environmental’ (Head, 2022: 61).

Each technology policy frame is based on a different idea of technology and innovation (Diercks et al., 2019). The traditional frame, focussing on economic growth, has a strong pro-innovation bias and assumes that technology always has positive outcomes. In contrast, challenge-oriented policy recognizes that technology can have positive as well as negative outcomes for the environment, health and equality (Coad et al., 2021; Edgerton, 2019; Stilgoe, 2020). These questions have featured prominently in AI debates about the positive and negative impacts of AI, including on jobs, democracy and justice (see, e.g., Crawford, 2021; Eubanks, 2019; Pasquale, 2015; Zuboff, 2019).

The recent rise of challenge-oriented policy has been described as a ‘normative turn’, in which policy not only optimizes the innovation system to improve economic competitiveness and growth but also induces strategic directionality and guides processes of transformative change towards desired societal objectives (Diercks et al., 2019: 884). However, describing the recent emergence of challenge-oriented policies as a ‘normative turn’ is misleading, because it implies that traditional policy focussing on economic growth and competitiveness is purely technocratic, value-neutral and non-normative. It is important to recognize that both technology policy frames are normative and based on political choices about which values and norms to prioritize and support with public resources and other measures. Prioritizing and providing political support for policy that promotes economic growth, competitiveness, efficiency and productivity is also a highly normative political choice based on certain values, expectations and norms. Thus, focussing on diverse frames of technology policy highlights the political aspects of technology and its policy, drawing attention to the mutual shaping of technologies and politics in terms of values, the distribution of power and desirable futures (Jasanoff, 2016; Winner, 2020). These political aspects are also highly important for understanding the contestations and controversies that currently surround AI development.

While there is a lot of variation within each of the two main technology frames (Diercks et al., 2019) introduced here, for the purposes of this paper two stylized frames are examined: a traditional one based primarily on ideas about the centrality of economic growth and competitiveness, and another focussing on Grand challenges and the Sustainable Development Goals. Although AI policy documents cover a broad range of topics, including the impacts of AI on jobs, security and risks, this paper focusses on how these documents articulate the overarching objectives of AI development and use according to the two stylized technology policy frames outlined above.

3 Empirical insights on framing the purpose of AI development and use

To provide insights on how the purpose of AI development and use is framed, this study examines AI policy documents. Policy documents here ‘are treated as vehicles of messages, communicating or reflecting official intentions, objectives, commitments, proposals, ‘thinking’, ideology and responses to external events’ (Freeman & Maybin, 2011: 157). They are seen as policy-relevant texts that play important roles in policy discourse and debate, persuasion, or justification (see above on rhetorical frames).

3.1 Methods and data sources

This article examines a pre-existing dataset of AI policy documents (Ulnicane et al., 2021a) that includes 49 policy documents (see Annex 1) launched by national governments, international organizations, consultancies and think tanks in the European Union and the United States from 2016 to 2018, that is, during the period when the main initial AI policy documents were launched around the world. These documents were selected according to a number of criteria, such as a strong focus on overarching AI policy and being a stand-alone, self-contained document (for more on the dataset, see Ulnicane et al., 2021a). The focus here is on AI policy documents rather than ethics guidelines, which are analysed elsewhere (see, e.g., Jobin et al., 2019; Schiff et al., 2021); however, it has to be acknowledged that there is some overlap between the two, e.g., some policy documents also include ethical principles.

For the purpose of this study, these documents have been analysed in line with the research questions and conceptual framework outlined above, namely, how they frame the purpose of AI development and use according to the two stylized technology policy frames of economic growth and societal challenges. In particular, the focus here is on common features of how different policy documents frame the purpose of AI.

3.2 Economic growth and competitiveness frame

When reading AI policy documents, it is possible to find evidence for both stylized policy frames, the one prioritizing economic growth as well as the one prioritizing societal challenges. Ideas from the traditional economic frame are highly visible in AI policy. AI is presented as a driver of economic growth and a major economic opportunity, which should be fully exploited to reap the economic benefits of AI. Positive influence on economic growth is seen as one of the main benefits of AI, with the expectation that ‘AI has the potential to create a new basis for economic growth and to be a main driver for competitiveness’ (European Commission, 2017: 4). Some documents mention specific forecasts about AI’s influence on growth rates. For example, the US Executive Office of the President (2016a: 6-7) states that ‘AI has the potential to double annual economic growth in the countries analysed by 2035’, while the report from the UK All-Party Parliamentary Group on AI includes an estimate that ‘AI will boost economic growth in the UK by adding £140 billion to the UK economy by 2034, and boost labour productivity by 25% across all sectors, including in Britain’s strong pharmaceutical and aerospace industries’ (Big Innovation Centre/All-Party Parliamentary Group on Artificial Intelligence, 2017b: 23).

Similarly, in other documents, increases in economic growth due to AI are mentioned alongside boosts to productivity, efficiency and cost savings (see, e.g., European Commission, 2018a; House of Lords, 2018). The focus on economic growth also includes positive expectations about the potential contribution of AI to new ideas and innovation (European Commission, 2018a) and optimism about the promise of technological innovation (Thierer et al., 2017), thus making explicit the pro-innovation bias of the economic growth discourse.

An important part of the discourse about the economic growth potential of AI is the focus on economic competitiveness, depicting AI development as taking place ‘amid fierce global competition’ (European Commission, 2018b: 2). AI advancements are seen as boosting competitiveness around the world, from increasing and maintaining US national competitiveness (Executive Office of the President, 2016c) to improving the EU’s competitiveness (European Economic and Social Committee, 2017). To fully exploit AI’s contribution to competitiveness, policy documents make a number of policy recommendations. Greater federal investment in AI research and development is seen as essential to maintain US competitiveness (IEEE-USA, 2017), providing a qualified workforce is presented as urgent for maintaining the EU’s competitiveness (IEEE European Public Policy Initiative, 2017), and reforming tax frameworks is suggested to assure the UK’s global competitiveness (Big Innovation Centre/All-Party Parliamentary Group on Artificial Intelligence, 2017d). On the other hand, policy discussions tend to present regulation as potentially damaging for competitiveness, associating it with regulatory burden and, for example, claiming that AI regulation could reduce the innovation and competitiveness of UK industry (House of Lords, 2018). The main exception here is the documents launched by the European Commission, which present a solid European ethical and regulatory framework as a prerequisite for, and a unique feature of, the EU’s position in the global AI competition (European Commission, 2018b).

An important part of the economic competitiveness discourse is the fear of lagging behind and missing out on the opportunities offered by the AI revolution. This is the case with the European Commission (2018b), which points out that the EU is behind Asia and North America in private investment in AI. Therefore, the European Commission argues that it is crucial for the EU to create an environment that stimulates investment, to use public funding to leverage private investment and to build on its assets, such as its world-leading AI research community (European Commission, 2018b). The need to take measures to be competitive is presented as urgent and essential, as can be seen in this quote:

One of the main challenges for the EU to be competitive is to ensure the take-up of AI technology across its economy. European industry cannot miss the train. (European Commission, 2018b: 5)

Not undertaking the necessary measures is associated with missing the benefits of AI and with negative consequences, as suggested here: ‘without such efforts, the EU risks losing out on the opportunities offered by AI, facing a brain-drain and being a consumer of solutions developed elsewhere’ (European Commission, 2018b: 5). Thus, in the case of the emerging technology of AI, policy tends to be framed in the traditional discourse of economic competitiveness and fears of being left behind other countries and regions perceived as technologically superior. To sum up, the traditional technology policy frame, with its focus on the contribution of technology to economic growth, productivity and competitiveness, is strongly present in the way AI policy documents frame the purpose of AI development and use.

3.3 Societal challenges frame

In addition to the traditional economic growth and competitiveness frame, policy documents also emphasize the potential of AI to contribute to solving a range of societal problems. They highlight that AI should only be developed and used in ways that serve global social and environmental good (European Group on Ethics in Science and New Technologies, 2018) and should enable the achievement of the UN Sustainable Development Goals, which concern eradicating poverty, illiteracy and gender and ethnic inequality, and combating the impact of climate change (IEEE, 2017). AI is expected to ‘be central to the achievement of the Sustainable Development Goals (SDGs) and could help to solve humanity’s grand challenges by capitalizing on the unprecedented quantities of data now generated on sentient behaviour, human health, commerce, communication, migration and more’ (International Telecommunication Union, 2017: 6).

Policy documents include very positive statements about the role of AI in solving a range of major societal challenges: ‘AI is helping us to solve some of the world’s biggest challenges: from treating chronic diseases or reducing fatality rates in traffic accidents to fighting climate change or anticipating cybersecurity threats’ (European Commission, 2018b: 2).

The European Commission claims that there are many examples ‘of what we know AI can do across all sectors, from energy to education, from financial services to construction. Countless more examples that cannot be imagined today will emerge over the next decade’ (European Commission, 2018b: 2). These strong and highly optimistic claims about AI solving societal challenges ignore that, as explained earlier, addressing challenges such as climate change and global health is a highly complex and uncertain ‘wicked problem’, that success cannot be guaranteed and that technology is not the only or even the main ‘solution’. A somewhat more cautious tone about the potential of AI to address societal challenges can be found in several reports that recommend carrying out studies not only on the strengths but also on the weaknesses of using AI for achieving the SDGs (IEEE, 2017; Villani, 2018).

In AI policy, inclusive and participatory governance that brings together diverse stakeholders nationally and internationally is seen as a necessity for addressing societal challenges. Policy documents suggest that the use of AI to facilitate societal benefits should be based on deliberative democratic processes and on a global effort towards equal access to AI, fair distribution of benefits and equal opportunities across and within societies (European Group on Ethics in Science and New Technologies, 2018). When discussing the role of international fora such as the G7/G20, the United Nations and the Organization for Economic Cooperation and Development in AI policy, the European Commission states that the EU ‘will promote the use of AI, and technologies in general, to help solve global challenges, support the implementation of the Paris Climate agreement and achieve the United Nations Sustainable Development Goals’ (European Commission, 2018b: 19).

The AI for Good Global Summit Report of 2017 emphasizes that a diverse range of people, including the most vulnerable, should be at the centre of designing AI to tackle the SDGs, and suggests creating a repository of case studies, activities, partnerships and best practices as a resource for understanding how different stakeholders are solving Grand challenges using AI (International Telecommunication Union, 2017). While inclusive governance is seen as important for using AI to address societal issues, insights from practice suggest that deliberative forums can be captured by the vested interests of the most resourceful actors (Ulnicane, 2021a, 2021b).

The concept of Grand challenges in AI policy documents is used not only to describe issues of broad social relevance but also in a narrower sense. In the UK Industrial Strategy, AI is identified as one of four Grand challenges (the other three being the future of mobility, clean growth and the ageing society) in which the UK can lead the world in the years to come (HM Government, 2018). This approach resembles a traditional sectoral policy rather than directing AI towards actually solving specific societal challenges. Instances of understanding a Grand challenge in AI policy as a technological rather than a societal challenge include describing the creation of a computer that could win at Go as an uncompleted Grand challenge in AI (The Royal Society, 2017) or mentions of initiatives such as DARPA’s Cyber Grand Challenge, which involved AI agents autonomously analysing and countering cyberattacks, or the Camelyon Grand Challenge for metastatic cancer detection (Executive Office of the President, 2016c).

To sum up, the recent policy frame focussing on the contribution of technology to addressing Grand societal challenges and the Sustainable Development Goals can be found in the optimistic statements in AI policy documents about the potential of AI to address the most pressing social issues today. However, in these documents, AI is typically presented as a simple technological fix to social issues, largely ignoring the uncertainty and complexity of such ‘wicked’ problems.

3.4 Can economic and societal frames be combined?

In AI policy documents, the two policy frames of economic and social goals are mentioned next to each other (see, e.g., European Commission, 2018b; HM Government, 2018), suggesting that they are seen as complementary and compatible rather than as mutually exclusive competing alternatives. For example, the US National AI Research and Development Strategic Plan states that ‘AI advancements are providing many positive benefits to society and are increasing US national competitiveness’ (Executive Office of the President, 2016c), while the European Group on Ethics in Science and New Technologies highlights that ‘Artificial intelligence, robotics and ‘autonomous’ systems can bring prosperity, contribute to well-being and help to achieve European moral ideals and socio-economic goals if designed and deployed wisely’ (European Group on Ethics in Science and New Technologies, 2018: 20).

Some documents suggest paradigm shifts combining growth and energy efficiency, as this quote from a French document illustrates:

A truly ambitious vision for AI should therefore go beyond mere rhetoric concerning the efficient use of resources; it needs to incorporate a paradigm shift toward a more energy-efficient collective growth which requires an understanding of the dynamics of the ecosystems for which this will be a key tool. We should take the opportunity to think of new uses for AI in terms of sharing and collaboration that will allow us to come up with more frugal models for technology and economics. (Villani, 2018: 102)

In the quote above, the idea of a paradigm shift and new models for technology and economics is mentioned rather briefly, without much elaboration of what it would entail. It is a typical feature of policy documents that intentions and objectives are merely mentioned, without further discussion or reflection on whether and how economic growth is compatible with societal challenges, when and under what conditions the two are complementary or in tension, and what the potential conflicts between them are. The question of the compatibility of the two frames is an important omission in the AI policy documents. Thus, crucial AI policy controversies remain implicit and silent: does a focus on economic growth imply neglect of societal challenges? Is a focus on societal challenges compatible with current economic growth models? Is it possible for AI to address both economic growth and societal challenges, and what kind of measures and trade-offs would that require? AI policy documents are largely silent about the diversity of values, norms and interests behind each of these frames, thus ignoring crucial questions about their desirability and feasibility.

4 Conclusions

This study examined the articulation of the purpose of developing and using an emerging technology by looking at the policy frames surrounding AI as one of the key emerging technologies today. Using the two stylized technology policy frames—the traditional frame focussing on the contribution of technology to economic growth and competitiveness and a more recent one prioritizing the contribution of technology to addressing societal challenges and the Sustainable Development Goals—this research reveals a layering of the two frames in AI policy, where both economic growth and the tackling of societal challenges are discussed.

The insights from the policy documents demonstrate that, while AI is a novel technology, its policy includes many ideas from the traditional frame that perceives an emerging technology as a source of economic growth, productivity and competitiveness, which could be further enhanced by such well-known measures as investments in research and a skilled workforce. These measures are seen as important to avoid lagging behind other countries and missing out on the opportunities offered by emerging technologies. Thus, recent AI policy largely draws on a traditional policy frame about the need for, and measures to, reap the economic benefits of emerging technologies.

In addition to traditional economic ideas, AI policy documents also include elements from the recent technology policy frame highlighting the importance of addressing societal challenges and the Sustainable Development Goals in areas such as energy, climate change and health, and of having participatory and inclusive governance to address them. However, in policy documents, AI is depicted as a simple technological solution to complex ‘wicked problems’, ignoring the uncertainties involved and overstating the role of technology as the main or even the only solution to societal issues that require a broader range of political, economic, social and other measures.

While AI policy documents are optimistic that AI can address both economic and societal objectives, they are largely silent about the compatibility of the two. Although the initial idea for this research was to examine controversies between the two frames, the examination of AI policy documents revealed that there is no open controversy. In the documents, the two frames are mentioned rather superficially, without much reflection on the diversity of norms, values and interests they involve. Examining the conceptual and practical synergies, trade-offs, conflicts and requirements of the well-intended but complex idea of combining economic and social objectives in AI development and use remains an important question for future research.

To summarize, this paper demonstrates that there is a certain convergence in framing the purpose of AI development and use in terms of its contribution to economic growth and societal challenges in the initial AI policy documents from Europe and the US. Future studies would benefit from extending the empirical scope to AI policy documents from other regions such as Asia, Latin America, the Middle East and Africa (see, e.g., Adams, 2021; Filgueiras, 2022; Kim, 2021; Lee, 2018; Tan & Taeihagh, 2021) and from looking not only at converging features but also at divergences. Furthermore, an important avenue for future research would be to analyse how the rhetoric in AI policy documents is followed up and implemented through specific AI policy actions and instruments. Additionally, it would be interesting to compare the framings found in AI policy with the discourses about other emerging technologies such as neurotechnology, biotechnology or quantum computing.

This research on AI policy frames contributes to an emerging research agenda on AI governance (see, e.g., Köstler & Ossewaarde, 2022; Radu, 2021; Taeihagh, 2021) that takes a critical lens to interrogate and demystify popular discourses—such as governing AI for growth, efficiency and competitiveness—that present these goals as technocratic and value-neutral. Instead, this research agenda highlights the normative, social, political and power aspects of AI governance and the discourses that support it. It reinvigorates some well-known and long-standing problematic issues in technology governance, such as focussing on technological fixes and solutions while struggling to deal with complex societal (‘wicked’) problems, as highlighted by David Collingridge already in his 1980 book on the social control of technology:

Ask technologists to build gadgets which explode with enormous power or to get men to the moon, and success can be expected, given sufficient resources, enthusiasm and organization. But ask them to get food for the poor; to develop transport systems for the journeys which people want; to provide machines which will work efficiently without alienating the men who work them; to provide security from war, liberation from mental stress, or anything else where the technological hardware can fulfil its function only through interaction with people and their societies, and success is far from guaranteed (Collingridge, 1980:15).