1 Introduction

“Like the steam engine or electricity in the past, AI is transforming our world, our society and our industry. Growth in computing power, availability of data and progress in algorithms have turned AI into one of the most strategic technologies of the 21st century. The stakes could not be higher. The way we approach AI will define the world we live in” (European Commission 2018)

Artificial intelligence (AI) will transform humanity. That is the key message from the European AI Strategy quoted above. The rise of supercomputing power and big data technologies, which have fuelled the latest “AI spring”, appears to have successfully captured the imagination of Western policymakers, technologists and scientists alike (Hälterlein 2024; Hine and Floridi 2024). One policy issue increasingly drawn into the lure of AI is environmental sustainability (Creutzig et al. 2022). Asserting the interdependent nature of digitalisation and climate change, policymakers herald AI as an effective tool to propel climate change mitigation and adaptation. Clutton-Brock et al. (2021) emphasise that AI can accelerate climate action by distilling raw data into actionable information, improving predictions, optimising complex systems and accelerating scientific modelling and discovery. Key sectors in which AI is imagined to achieve climate impact include electricity systems, forestry and land use, transportation and agriculture (Kaack et al. 2022). Crucially, mobilising AI for environmental sustainability implies not simply supporting specific AI applications but also depends to a large extent on institutional capacity building. This may include implementing large-scale AI literacy and upskilling programmes for governments and civil society, funding and incentivising the creation of trusted AI for climate solution providers and auditors, as well as ensuring global access to public interest AI programmes and sharing resources across a wide variety of countries and sectors (Clutton-Brock et al. 2021; Creutzig et al. 2022).

In this paper, we explore the intersection of digitalisation and climate change by examining the investment in and deployment of AI in government-led climate action. Doing so, we adopt Suchman’s (2023) definition of AI as “a label for currently dominant computational techniques and technologies that extract statistical correlations (designated as patterns) from large datasets, based on the adjustment of relevant parameters according to either internally or externally generated feedback” (p. 2). More specifically, we zoom in on one key policy arena: public interest AI (PIAI), which denotes “AI systems that support those outcomes best serving the long-term survival and well-being of a social collective construed as a ‘public’” (Züger et al. 2022, p. 1). Building on participant observations conducted in the context of the “Civic Tech Lab for Green” (CTLG)—a German government-funded public interest AI initiative—and eight expert interviews, we investigate how the investment in AI shapes the negotiation of environmental sustainability as an issue of public interest. To this end, we engage with the following overarching research question: how does the investment in AI mediate government efforts to push civic engagement in climate change adaptation?

Challenging the prescribed means–end relationship in contemporary policy discourse between AI and environmental protection, we argue that the unquestioned investment in AI curtails political imagination and displaces discussion of climate problems and possible solutions with “technology education”. Contemporary state-led civic tech initiatives must “decentre AI” by shifting their focus away from automation to civic data as a key lever for public participation. Existing research in science and technology studies (STS) has provided valuable insights on “public participation in climate change adaptation” (Arnstein 1969; Hügel and Davies 2020; Turnhout et al. 2020; Maas et al. 2022) and “civic technology and PIAI” (Gilman 2017; Gabrys 2019a; Züger et al. 2023). In what follows, we bring these bodies of literature into conversation using PIAI as an empirical ground for studying how citizens are engaged in co-producing knowledge for climate change adaptation. Shedding light on the politics of co-production in AI-mediated climate governance, we extend existing participation research by focussing explicitly on PIAI and contribute to civic tech literature by zooming in on a government-led PIAI initiative. More generally, this article responds to calls in STS that demand stronger diversification of and reflexivity in environmental research (Turnhout and Lahsen 2022).

This paper is structured as follows: we begin by interrogating academic literature in STS that deals with the relationship between AI, climate change adaptation and public participation. After introducing the methodology undergirding this study, we present three key empirical findings obtained from the data collection process. Building on the latter, three analytical vignettes are developed to disentangle the complex interdependencies of AI, public participation and climate change adaptation. Finally, we conclude by emphasising the importance of re-exploring the innovative state through civic data.

2 AI, environmental governance and the promise of participation

2.1 Public participation in climate change adaptation

Public participation has become a major consideration in environmental governance. It forms a critical component of contemporary efforts to combat climate change and is enshrined in key governance bodies (UNDP 2004; Lee and Romero 2023). Article 6 of the United Nations Framework Convention on Climate Change (1992) already states that “all parties must promote and facilitate […] public participation in addressing climate change and its effects and developing adequate responses” (p. 10). Additionally, the latest IPCC (2023) report declares that “climate literacy and information provided through climate services and community approaches, including those that are informed by Indigenous Knowledge and local knowledge, can accelerate behavioural changes and planning” (p. 107). There is a long history of interdisciplinary research that investigates the role of public participation in climate change adaptation, ranging from Arnstein’s (1969) seminal “ladder of participation” in the late 1960s to the expansion of participatory processes in the 1990s (Davies 2002) to contemporary contributions on “climate action” (Corner et al. 2014; Cologna and Oreskes 2022).

Particularly relevant in the context of this paper is research that investigates the politics of co-production (Collins and Ison 2009; Turnhout et al. 2020). Whilst official policy documents and much of the literature on co-production herald public participation in environmental governance as inevitable and necessary, there is now growing academic attention on the role of power and politics in shaping participation efforts. Turnhout et al. (2020) identify three ways in which co-production processes can end up reproducing, rather than changing, existing power inequalities: (1) by a rationale of science-driven impacts that does not challenge the power of scientific knowledge over other forms of knowledge, (2) by the “tendency of co-production projects to strive for consensus and for solutions that are considered rational according to elite perspectives” (p. 18) but cover up differences amongst less privileged groups, and finally, (3) by a lack of engagement with the wider political context in which co-production processes occur. Thus, accounts in this body of literature move beyond aspirational debates of participation as a normative ideal by grasping it as a performative practice that is inherently shaped by the socio-political context it is embedded in (Sprain 2017).

2.2 Civic technology and public interest AI

One phenomenon that links AI and climate governance is civic technology (“civic tech”). Broadly defined as “technology that is explicitly leveraged to increase and deepen democratic participation” (Gilman 2017, p. 745), civic tech rests on three pillars: transparency and accountability to hold governments accountable, effective citizen–government interaction, and digital tools that make citizens’ everyday lives easier (Dietrich 2015). Civic tech initiatives can be initiated by public institutions top-down and/or bottom-up (Sotsky and Kartt 2017). Government-led civic tech initiatives include city governments’ attempts to resist corporate control and build digital public infrastructures. The most prominent example is the city of Barcelona’s “New Data Deal”, which aims to reclaim data as commons and introduces procurement and data sharing standards (Fernandez-Monge et al. 2023). Examples of bottom-up civic tech initiatives include open data activists seeking to democratise datafication (Baack 2015) and civic IoT projects which collect citizen-sourced sensor data on particulate matter in order to raise awareness of air pollution and influence local policy (Gabrys 2019b). One subfield of civic tech that is receiving increasing attention is PIAI. Building on Bozeman (2007) and Dewey (1927), Züger et al. (2022) define PIAI as “AI systems that support those outcomes best serving the long-term survival and well-being of a social collective construed as a ‘public’” (p. 1). Rather than individuals or corporations, the notion of public interest refers to outcomes that serve the whole community. In this context, technology is seen as a public good serving a societal benefit, for instance, “through an open democratic system of governance, with open data initiatives, open technologies, and open systems/ecosystems designed for the collective good” (Abbas et al. 2021, p. 10).

3 Methodology

Following an interpretivist approach, this study is concerned with “how the social world is interpreted, understood, experienced, produced or constituted” (Mason 2002, p. 3), and was therefore qualitative in nature. A twofold approach was applied: starting with an ethnographic investigation of the CTLG, we then conducted eight expert interviews on the basis of previously generated insights. The aim of the interviews was to triangulate previously generated ethnographic data and gain an in-depth understanding of the role of citizen participation at the intersection of AI and climate change.

3.1 Preparation and data generation of the case study “Civic Tech Lab for Green”

The CTLG—a joint initiative of the Federal Ministry for the Environment (BMUV), the Federal Ministry of Labour and Social Affairs (BMAS), and the Federal Ministry for Family Affairs, Senior Citizens, Women and Youth (BMFSFJ) in Germany—forms part of the “Civic Coding—Innovation Network AI for the Common Good”. It belongs to the five-point programme “Artificial Intelligence for Environment and Climate”, with which the BMUV promotes the sustainable design of AI and the use of its possibilities for the benefit of environmental protection, conservation and climate action.

Acting as an intermediary body between government, civil society and science, the CTLG explicitly focuses on citizen participation. It invites citizens to try out and experience AI technologies in order to engage with environmentalism, build competences and produce bottom-up knowledge. Furthermore, as part of the “Civic Coding” network, it is dedicated to exploring how AI can be used for the common good. Challenging the profit motive, the CTLG emphasises the importance of safeguarding public values and advocates for ethical principles, such as participatory design, open source and sustainability. It employs six permanent staff covering federal territory and is supposed to facilitate discussion, offer exhibition space, provide software and hardware, support and supervise pilot projects and host a variety of sensor workshops.

The epistemological position of this study suggested that knowledge or evidence of the chosen phenomenon could be generated by immersing oneself in “natural” or “real-life” settings (Mason 2002; Bogner 2012; Baack 2015). Keeping the main research interest in mind—civic engagement at the intersection of AI and climate change—it was helpful not only to rely on participants’ retrospective accounts but also to place emphasis on situationally generated knowledge. To this end, we attended three workshops focussing on how AI can be used for environmental protection (see Figs. 1, 2, 3, 4), specifically one on smart beehives, one on networked trees and one on insect protection in the city. Each workshop was attended by 5–10 people, lasted for five hours and consisted of presentations given by CTLG staff and external experts as well as group work.

Fig. 1 Workshop setup

Fig. 2 Humidity sensor

Fig. 3 Linking tech and nature

Fig. 4 Insect detection

3.2 Preparation and data generation of expert interviews

Furthermore, semi-structured interviews with key experts at the intersection of public interest AI and citizen participation were conducted. Keeping the qualitative nature and the ontological and epistemological assumptions underlying this research in mind, a stratified purposeful sampling strategy was adopted. This type of sampling allowed for the selection of varied and information-rich cases, thereby making it possible to capture commonalities and variations amongst the chosen samples (Patton 2002). In total, eight expert interviews were conducted. Experts included one high-ranking bureaucrat heading the AI lab within the Federal Environment Agency, one executive in charge of an urban innovation lab, one individual leading a research lab on public interest AI, one person running a civic tech lab as well as four senior academics working at the intersection of AI and sustainability (see Table 1). A semi-structured questionnaire, consisting of five parts, was developed. Afterwards, a pre-test was conducted with a researcher working in the field of civic tech, leading to further modification of the identified themes of the questionnaire. The interviews lasted between 45 and 60 min, were conducted in May and June 2023, and were recorded.

Table 1 Demographics of expert interviews

3.3 Data analysis

Given the qualitative and reflexive nature of this study, data generation and data analysis were intimately linked. Transcription was carried out using the AI-powered audio transcription software “Trint”. In order to facilitate the organising of the generated data and to support the identification of patterns within it, the computer-assisted qualitative data analysis software NVivo 12 was used to code interview transcripts, documentary and visual material. Central to the processing of the generated data was the idea of thematic coding or categorising (Gibbs 2007). Given the exploratory nature of this research, we adopted an open coding approach. Our coding strategy consisted of two parts: (1) fieldnotes and (2) descriptive coding. Following each interview, we took fieldnotes to record immediate impressions and thoughts. This proved highly useful since it allowed us to reflect on emerging themes, such as tensions around defining public interest, as well as the interview flow. In a second step, and following the transcription process, we conducted descriptive coding. We gathered related textual data and clustered them around “nodes” in order to identify and solidify emerging analytical themes. The latter—including conceptualisations of AI, climate knowledge-making practices and participatory design—helped challenge our initial assumptions and sharpened our analytical focus.

4 Results

4.1 Civics washing vs. civics involvement?

“Could what we are doing—similar to ethics washing and green washing—be called civics washing?” (I8, 2023)

This statement was made by an individual responsible for running the CTLG. In many ways, it captures the stark discrepancy between the promises attached to AI-enabled citizen participation on the one hand and the politics on the ground on the other. To disentangle this relationship, it makes sense to take a step back and recall the initial definition of citizen participation laid out above. According to that definition, there are two main components that shape its implementation: knowledge creation and empowerment (I3, 2023). What kinds of knowledge are transferred at urban labs? Who exactly is empowered? Describing the purpose of urban labs, interviewees kept emphasising the “learning by making” culture. One of the architects behind the CTLG stresses that “we want to enable the experts, we want to enable the people who come to us to ultimately also maintain, promote and further develop their solutions and maybe also see where there are coupling effects” (I1, 2023).

Among the key means of achieving this mission are sensor workshops. Taking place on the first Saturday of every month with varying thematic foci, they are directed at organised civil society, i.e. people with prior knowledge and experience in environmental movements. This specification is important since it hints at tensions which emerged in the very conception phase of the CTLG. Whilst the BMUV saw the CTLG’s mission as serving primarily as a platform for technological innovation—equipping organised civil society with data science skills—the staff running the CTLG pursued a different goal:

“That is, this networking, being the platform for social innovation, that is the subtitle of our mission, which we as a team of experts have taken very, very seriously. Politically, this is not wanted, because the term social innovation is associated with another ministry and with another party. […] Why should I, as an environmental activist, propose to you now a technology that may be obsolete in ten years or that has side effects in production?” (I8, 2023)

The quote above demonstrates that the very design of citizen engagement efforts is intimately linked to political agendas. In other words, different actors “own” different terminologies. The “social” in innovation is occupied by the BMAS whilst the “technological” dimension is advanced by the BMUV. The employees of the CTLG therefore find themselves in a context of conflicting interests. For a start, CTLG staff received no training with regard to running effective participation formats (I8, 2023). Put differently, those individuals supposed to foster civic engagement and empower civil society had to rely solely on existing community engagement skills. One interviewee emphasises that prior training in running effective participation formats is absolutely crucial for ensuring the sustainability of civic engagement projects (I3, 2023).

A second observation refers to the people who attended the workshops. Given the setting—first Saturday of every month, indoors, running from 10 am to 3 pm—attendance remained sparse. Mirroring insights from similar, government-led participation efforts (I3, 2023), the workshops attracted mainly white, male and tech-savvy individuals. Whilst not particularly surprising, one cannot help but think of the enormous potential of the CTLG’s site. Located in the heart of Berlin-Neukölln—a highly mixed neighbourhood marked by a range of socioeconomic inequalities—the site would allow a different, more inclusive approach to reach far more diverse audiences. Reflecting on the conflicting objectives guiding the mission of the CTLG, one staff member critically remarks:

“I also ask myself the question quite openly and quite honestly, and I would also put it on the record. So feel free to quote me on that. Are we really living up to our claim? And if civil society comes along and we say “no” every time, really every time. We haven’t even said “yes” anywhere, but “no” every time, then we lose our credibility as people who work there, who have a genuine interest, who work there for that reason” (I8, 2023)

Interestingly, those who attended the workshops did not identify themselves as “citizen scientists” but rather as “civic tech activists” curious about exploring technological solutions in the context of climate change (Pfitzner-Eden and Samoilova 2023). With backgrounds in design, public administration, data science and experiences in the “quantified self” movement, what prevailed was a sense of technological solutionism (Morozov 2013). This can be attributed to the CTLG’s ambiguous positioning at the boundary of deterministic and critical understandings of innovation. On the one hand, the design of the lab feeds into the belief that technologies such as AI can “solve” complex issues such as the climate crisis. On the other, those running the CTLG advocate a view on technology that directly challenges tech-solutionist claims.

4.2 Public invitation vs. public participation

The empirical insights presented above seem emblematic of not just the CTLG but reflect broader concerns regarding the role of citizen participation in civic tech initiatives. On the one hand, there was broad consensus amongst interviewees that public participation is valuable, both intrinsically, as a means of democratic expression and procedural justice, and instrumentally, as a means of increasing political accountability and securing trust in governance processes. This senior researcher at an urban science lab points out:

“And that’s why I consider any form of participation of the people who live and work in the city, in this multi-stakeholder approach, i.e. the involvement of different interest groups, different people in very different contexts, but above all the joint development with them, absolutely necessary when it comes to developing cities worth living in. To do this you need people, and people know that, and also the possibility that these people can create something. That’s why participation is ultimately the basis for our work” (I4, 2023)

At the same time, respondents expressed considerable doubts as to the design and execution of current participation formats. This interviewee remarks:

“So I think it’s true that I’m also sceptical about many current forms of participation. But not because I think participation is somehow stupid per se, but because I have the impression that it is not done well and that it quickly becomes a fig leaf story or token participation. In part, it’s just the typical diffusion of responsibility into a process” (I5, 2023)

This insight is important because it illuminates the difficulties involved in establishing effective citizen participation. More generally, it challenges the assumption that civic engagement inevitably advances more inclusive technology development. A further insight that was persistently highlighted refers to the ways in which citizens are approached. Reflecting on what effective civic engagement can look like, this public interest AI expert emphasises that “[…] if you want this exchange with citizens, then you usually have to go to them and actively consider it beforehand […]” (I7, 2023). In other words, it is not enough to simply “invite” citizens to participate. Instead, successful civic engagement requires community building, i.e. meeting and engaging citizens in their “natural habitats” (I3, 2023).

Finally, respondents highlighted inherent tensions in existing knowledge-making practices. Confronted with the observation that digitalisation involves not simply the introduction and proliferation of digital technologies but also changes in underlying epistemic infrastructures, one third sector expert emphasised the importance of first making existing knowledge-making processes more transparent, which will then eventually ensure more effective social innovation. Reflecting on current standards and processes in information technology infrastructure of a public administration, one interviewee urges:

“It’s a huge mess. It is highly fragmented and decentralized and there are many people in the administration with partial secret knowledge, but this is not written down anywhere. And it really takes years of painstaking work to create these networks. And that’s a really big interest of ours […] to create visibility, for example to write down which steps actually have to be followed and in what order to actually launch an innovative product in Berlin.” (I5, 2023)

4.3 AI talk vs. AI walk

Another finding refers to the role of AI in mediating the co-production of climate change knowledge. Whilst the CTLG is explicitly built around AI, one could not help but notice the strong discrepancy between “AI talk” and “AI walk”. Put differently, AI featured first and foremost verbally rather than materially: most of the group activities were concerned with identifying relevant sources of data, setting up data collection processes via sensor technologies and establishing an IoT ecosystem, rather than with deploying AI in data analysis. The absence of AI became particularly clear during the second sensor workshop when a discussion erupted around the societal impact of AI. What is the difference between AI and machine learning? How does a neural network function? What are the implications and limits of generative AI models such as ChatGPT? What does intelligence mean in the first place? (see Figs. 5 and 6) This insight is crucial since it illuminates the importance of clarifying what AI actually is before assuming that it can be deployed as a tool for environmentalism.

Fig. 5 Drawing of a neural network

Fig. 6 Reinforcement learning model

In addition to that, the notion of PIAI remains highly controversial. When we confronted our interviewees with this emergent concept, it became clear that it is extremely difficult to define (1) what public interest is, (2) what AI is and (3), most importantly, how these two elements can be brought together for the benefit of society. Two observations deserve elaboration. First, most experts intuitively defined public interest in terms of what it is not, rather than specifying relevant publics or focussing on what we call “social good”. Put differently, all of our interviewees agreed that public interest AI is ultimately not about profit orientation but had difficulties specifying the ingredients of social value generation (I4, 2023). Second, there was widespread consensus amongst interviewees that the notion of AI itself is highly problematic for thinking constructively about societal progress. This interviewee points out:

“And there I am with you, the term AI is not fitting for me, because already the term intelligence is difficult to grasp. What is artificial about it? Nothing about it is artificial. It’s a stochastic model, if you will” (I1, 2023)

Criticising the conceptual and analytical vagueness of the term, this AI ethicist remarks:

“So I actually find the term very problematic because it’s much too crude to bring us to stronger insights. I think I would even understand the term as a symptom of this hype, because it is such a good projection surface for general hopes and fears” (I7, 2023)

These two observations are crucial since they reveal the inherent complexities involved in thinking AI beyond corporate imaginaries of the social good. This last insight points to a much more fundamental finding. Rather than taking AI for granted in helping us to tackle questions of public interest—be it environmental protection, species conservation or other—policy issues such as the “climate emergency” and spaces such as the CTLG can serve as platforms for technology education. Confronted with the reversal of the means–end relationship between AI and environmental protection, this interviewee admits…

“And yes, and I think guilty as charged, if it was a charge. It’s actually that I’m more about this AI literacy, that I’m actually focussing on that and not environmental protection” (I8, 2023)

These remarks fundamentally reshuffle the relationship between AI, citizen participation and environmental protection. Instead of mobilising citizens to explore AI for engaging with environmentalism, the CTLG appears to function as a space for AI literacy and the democratisation of knowledge production practices. The current hype around AI serves as both blessing and curse. Whilst failing to reach diverse audiences (“civics washing”), the CTLG can function as a platform for social innovation since it mobilises technology to rethink fundamental questions about technology and society. The latter include asking basic questions about the nature of (human) intelligence, reflecting on the ascribed novelty of AI and interrogating the “problems” that AI-mediated citizen participation is supposed to “solve”.

5 Disentangling public participation, AI and environmental governance

There are three key findings that inform the forthcoming discussion. First, current civic engagement efforts at the CTLG remain ineffective due to a conflict of objectives and staff’s lack of training in community building. Second, the relationship between civic tech and participation is far from straightforward since citizen participation is often used to diffuse responsibility and effective climate action heavily depends on political agendas. Finally, there is a discrepancy between “talking AI” and “walking AI”: rather than running AI-informed climate action programmes, climate change serves as a platform to engage citizens in technology education. In what follows, we build on the empirical findings and generate three analytical vignettes that help disentangle the complex relationship between citizen participation, AI and environmental governance (see Table 2).

Table 2 Moving beyond AI inevitability

5.1 Navigating a policy context of AI inevitability

Züger et al. (2022) remind us that public interest denotes the “results that best serve the survival and well-being of a social collective/public over the long term” (p. 14–15). Accordingly, public interest does not represent a universal quality but requires, for every matter concerning the affected public, a deliberative process over what serves the public interest (Wikimedia Germany 2023). Taking the notion of public interest as an analytical starting point, we want to engage with the following two questions: what is at stake when defining environmental sustainability as public interest? Who gets to decide that AI should be mobilised in order to advance climate action?

The idea to frame environmental sustainability through the lens of PIAI did not emerge out of a vacuum. In 2018, the German government announced that it would provide a total of €3 billion for the implementation of the AI strategy “AI made in Germany” up to 2025. Besides ensuring innovation and economic competitiveness, the Government commits to the responsible and public interest-oriented development and use of AI. In addition to that, it aims to promote AI applications for the benefit of the environment and climate and to initiate 50 lighthouse applications in this topic area. In doing so, the German government joined a small number of other governments in dedicating significant funding to the intersection of AI and climate change.

Within the framework of the federal AI strategy, the BMUV is investing €150 million in AI in the context of environmental and climate protection. Doing so, it is focusing on five key areas. Those include (1) AI for energy transition and climate protection, (2) resource-efficient design of AI, (3) AI for increased resource efficiency in small and medium-sized companies, (4) public interest-oriented AI and (5) AI for the public understanding of the environment. The CTLG forms part of the Ministry’s efforts to push public interest AI and is supposed to foster dialogue with AI developers and equip environmental activists and other interested parties with AI know-how (BMUV 2021).

In what follows, we argue that framing environmental sustainability as an issue of public interest forms part of what can be called a “context of AI inevitability”. That is, AI—particularly in the German context—is established “as a given and massively disrupting technical development that will change society and politics fundamentally” (Bareis and Katzenbach 2022, p. 21). This insight is important since it illuminates the conditions that allow policy issues such as climate change to be framed as “issues of AI”. More specifically, it points to what Schiølin (2020) calls “future essentialism”, “an imaginary of a fixed and scripted, indeed inevitable, future, […] that can be desirable if harnessed in an appropriate and timely fashion, but is likewise dangerous if humanity fails to grasp its dynamics” (p. 545). Whilst presented as societal panacea, the empirical findings informing this paper demonstrate that the adoption of AI for climate change knowledge on the ground is far more tension-ridden than official policy documents suggest.

They point to what Bareis and Katzenbach (2022) call the “paradox of AI imaginaries”: “AI tales sound fantastic and trigger our fantasies, though simultaneously they actually undermine political imagination and political practice by raising expectations of a comforting technological fix to structural societal problems” (p. 22). Rather than pushing creative climate action, the underlying policy discourse of AI inevitability complicates the negotiation of environmental sustainability as an issue of public interest. Whilst AI attracts public attention and large sums of funding, it remains unclear whether it advances institutional capacity building or actually hampers efforts to co-produce knowledge for climate change adaptation. Drawing on Mark Fisher’s (2009) concept of “capitalist realism”, McQuillan (2022) pointedly notes that “the fact that AI solutions don’t live up to the hype is overridden by AI Realism’s sense of inevitability” (p. 45). As a result, a more fundamental question emerges: is it at all possible for government-led civic tech initiatives to resist AI inevitability or even cater for a critical discussion of the deployment of AI in environmentalism?

5.2 Re-exploring the innovative state through civic data

“And the discussion I see now is definitely much more about ‘How does the state itself become capable of innovation?’ and not so much about ‘How can civil society help the state become capable of innovation?’ Because that’s what we’ve been trying to do for a long time. And now it’s the state’s turn to move, so to speak” (I5, 2023)

Critically reflecting on community-led civic tech initiatives, particularly their limited impact on driving innovation during the COVID-19 pandemic, this third sector representative observes a shift in the debate away from bottom-up civic tech towards state-led civic tech initiatives. This observation raises a more fundamental question: who drives social innovation, and with what means? Building on the empirical findings informing this article, we contend that existing state-led civic tech initiatives must shift their focus away from AI to civic data as the key lever for public participation. Rather than feeding into the AI hype, it seems crucial to bring AI down to earth by focussing on its most important ingredient: data.

Unpacking this line of argumentation, Fourcade and Gordon’s (2020) discussion of the impact of big data and ML on statecraft in the digital era serves as a helpful epistemological starting point. Examining changing modes of knowledge production and data governance structures, the authors argue that the state must learn to “see like a citizen”. That is, adopting a “mode of statecraft that identifies social problems—including those problems stemming from the deployment of dataism itself—from the perspective of those affected” (Fourcade and Gordon 2020, p. 96). Particularly relevant for the present analysis is the concept of “dataism”, which they define as “an ideology that finds the purpose of government in what can be measured rather than in the will of the people” (p. 81). This observation echoes what we described in the previous section as AI inevitability. Rather than accepting dataism as an inevitable consequence of using data in governance, we argue that digital tools such as AI must be paired with legal and political arrangements that empower citizens to have a say in what should be measured, how and why.

What does it mean, then, to “see like a citizen”? Keeping in mind the difficulties of grasping environmental sustainability in the context of AI inevitability, it is crucial to establish inclusive digital infrastructures by treating data as commons for the public interest (Bria et al. 2023). This implies at least three policy manoeuvres. First, shifting the discourse from AI to civic data. Lamenting the conceptual and analytical vagueness of AI, all of our interviewees confirmed the importance of re-imagining AI through civic data. Questioned about the potential of deploying ML in climate protection, one of the lead authors of the fifth IPCC report points out that one “cannot separate AI from the data it’s about. For it to work well, the data base has to be very, very big” (I6, 2023). This insight is important because it shifts the discussion away from premature attempts to deploy AI in data analysis to creating inclusive data infrastructures in the first place. Second, in addition to re-framing the “vocabularies of change”, it is necessary for governments to further invest in capacity building. It is not sufficient to rent office space in innovation hubs. Instead, the state must equip staff with the resources they need to conduct effective civic engagement exercises. This means switching from “inviting” citizens to join maker spaces to purposefully approaching relevant audiences (e.g. pupils, children, pensioners) directly in their respective habitats: schools, kindergartens and retirement homes.

Finally, unleashing the innovative capacity of state-led climate action requires a reimagination of citizenship. Grand challenges such as climate change will not be tackled effectively unless citizens and governments have access to detailed information on what happens in public spaces. Rather than “stewardship, civic paternalism, and a neoliberal conception of citizenship” (Cardullo and Kitchin 2019), what is needed is a notion of active citizenship that links climate action and digital rights. Grasping environmental sustainability as an issue of public interest through civic data requires a conceptualisation of citizenship that accounts for the complex interdependencies of climate change and digitalisation. It is beyond the scope of this article to develop a notion of “environmental and/or ecological citizenship” (Dobson 2003) in times of rapid digitalisation. Following our discussion on climate action in a context of AI inevitability (Sect. 5.1), however, we can point to a promising starting point that illuminates how the shift of focus from AI to civic data can convey an alternative understanding of citizenship. In this context, current debates on (municipal) data commons offer valuable insights.

One concrete example linking climate action and digital rights is the introduction and implementation of data sharing practices and standards. Seeking to unlock the potential of urban data, Bria et al. (2023) emphasise that data sharing would benefit mainly from two solutions: data intermediaries and standardised frameworks. The former “ensure data flow from data contributors to data users, for example, by clarifying the conditions for data sharing, transforming, or even pooling data” (Bria et al. 2023, p. 19). Data intermediaries can come in different forms, ranging from a data sharing contract between city governments and other stakeholders to fully fledged organisations such as a limited liability company that handles information exchange and security. The latter—standardised frameworks—“identify common ontologies, procedures, contracts, and templates that allow for systematising different sharing setups” (ibid., p. 22). Initiatives working specifically on establishing international data sharing practices include the International Data Spaces Association (IDSA) and the Gaia-X ecosystem.

5.3 Using AI to ask better questions

“What is wrong, I think, is that we have permitted technological metaphors, what Mumford […] calls the ‘Myth of the Machine’, and technique itself to so thoroughly pervade our thought processes that we have finally abdicated to technology the very duty to formulate questions” (Weizenbaum 1972, p. 611)

Synthesising the two previous analytical vignettes is a much more fundamental point: contrary to common claims that emphasise the use of digital technology as a means to an end (e.g. helping to combat the climate crisis), we contend that policy issues such as climate change serve as platforms for technology education. This line of argumentation is rooted in an understanding of technoscientific progress that acknowledges the symmetrical, co-constitutive relationship between technology and society. Thus, in line with Joseph Weizenbaum’s (1972) critique of emerging AI debates in the early 1970s, we suggest complementing existing techno-solutionist debates with an increased focus on the inherently socio-political embeddedness of technologies such as AI. Put differently: what happens if we use digital technologies such as AI to ask better questions before formulating definitive answers to highly complex social issues?

Advancing this claim implies “treating the existence of AI as controversial” (Suchman 2023, p. 1) by challenging current discourses of AI inevitability through the critical examination of underlying power structures. Crucially, adopting this standpoint does not imply a denial of the efficacy of data-intensive technologies such as ML but rather calls for “a keener focus on their locations, politics, material-semiotic specificity and effects, including consequences of the ongoing enactment of AI as a singular and controversial object” (ibid., p. 4). Ironically, the conflicting goals between the federal government and CTLG staff regarding CTLG’s mission (see Sect. 4.1), as well as the lack of training in community building, actually led to treating AI as controversial rather than set in stone. What prevailed was a sense of scepticism rather than determinism.

Moving forward, it is important to build on this momentum of scepticism and create space for critical discussion by asking basic questions about the relationship between technology and society. Applied to the field of public interest AI, this means asking questions about AI from the standpoint of those imagined to deploy the technologies. Stilgoe (2023) goes so far as to propose the need for a “Weizenbaum test for AI”. He suggests moving from the issue of whether machines are intelligent to whether they are useful, asking questions such as “Who will benefit?”, “Is the technology reversible?” and “Who will bear the costs?”. Thus, “rather than a test of intelligence, a Weizenbaum test would assess the public value of AI technologies, evaluating them according to their real-world implications rather than their proponents’ claims” (Stilgoe 2023, p. 1).

A recent study at the intersection of ML and climate change confirms the need to move from a “discourse of intelligence” to a “discourse of usefulness” by stressing the significance of institutional capacity building (Clutton-Brock et al. 2021; Rolnick et al. 2023). Seeking to support the responsible adoption of AI for climate action, governments are urged to “rapidly implement large-scale AI literacy and ‘upskilling’ programmes [and] incorporate elements on data and on climate, including both technical and socio-technical components, into educational curricula” (Clutton-Brock et al. 2021, p. 11). It follows that if “civics washing” (Sect. 4.1) is to be avoided, governments must adopt a holistic, socio-technical view on co-producing climate change knowledge with emerging digital technologies such as ML and ensure significant investment into civic engagement. Thus, a first step towards AI literacy is shifting the epistemological focus from AI to civic data. That is, moving from seduction to empowerment, from projecting an illusion of intelligence to translating citizen rights into the digital realm: from hype to reality.

6 Conclusion—from public interest AI to civic data-enabled government?

This paper addressed the intersection of digitalisation and climate change by examining the deployment of AI in government-led climate action. Building on ethnographic research conducted in the context of the CTLG and eight expert interviews, we investigated how the investment in AI shapes the negotiation of environmental sustainability as an issue of public interest. Challenging the prescribed means-end relationship between AI and environmental protection, we argued that the unquestioned investment in AI curtails political imagination and displaces discussion of climate problems and possible solutions with “technology education”. Contrary to common claims that emphasise the use of AI as a means to an end, we contended that government-led climate action should refocus on the politics of co-production, thereby escaping the policy context of AI inevitability.

To substantiate this line of argumentation, we developed three analytical vignettes. First, current efforts by the German government to frame environmental sustainability as an issue of public interest take place in a policy environment of AI inevitability. AI is established as an inevitable and unstoppable force that fundamentally changes society and must be mobilised if prosperity is to be maintained. Crucially, AI inevitability, rather than pushing creative climate action, complicates the negotiation of environmental sustainability as public interest since it undermines political imagination. Second, existing state-led civic tech initiatives must shift their focus away from AI to civic data as a key lever for public participation. Instead of feeding into the AI hype, it seems crucial to bring AI down to earth by “seeing like a citizen” (Fourcade and Gordon 2020). Finally, synthesising the previous two analytical vignettes, we reiterated the importance of using technologies such as AI to ask better questions before formulating definitive answers to highly complex social issues. Sloane (2024) aptly notes that because AI leverages both profound fears and hopes, it also “renews questions about ethics and how we should imagine society’s future” (p. 2). Seeking to shift contemporary policy discourse around AI from questions of intelligence to questions of usefulness, this article makes a case for moving beyond AI inevitability and, instead, exploring civic data as a vehicle to unlock more inclusive, civic futures.