Introduction

Artificial Intelligence (AI) innovations increasingly shape the conception, collection, analysis, and translation of environmental data that directly influence conservation strategies and decision-making worldwide (Kwok, 2019; Wearn et al., 2019). From computer vision to smart sensors to predictive modelling, an increasing number of AI-driven techniques are attempting to automate and optimise environmental monitoring, management, and policymaking to address complex climate and biodiversity issues (Hino et al., 2018; Kaack et al., 2022; Runting et al., 2020). Yet, there is growing concern that the consequences of AI innovations are often neglected when assessing their potential risks, social harms, and ecological damage (cf. Benjamin, 2019; Jasanoff, 2016). The complexity of human and data-driven AI interactions has particularly significant implications for how conservation science and practices are conceived and implemented across varied contexts (Chapman et al., 2024; Nost and Goldstein, 2021).

In this paper, we draw on the multiple dimensions of environmental justice to critically examine how chatbot content might reproduce biases or misrepresentations in efforts to meet global conservation targets. A chatbot is a software application that conducts online conversations through text- or speech-based, user-driven questions (van Dis et al., 2023). Generative AI tools, like ChatGPT from OpenAI, are directly transforming how scientists, policymakers, and practitioners can use large language models, built on supervised and reinforcement learning techniques (Lowe, 2023), to cast their search net wider and faster in the process of research formulation, testing, and discovery (Schmidt, 2023). Rather than being objective or neutral, chatbots are power-laden tools legitimised by the Western logic of automation and efficiency (Ho, 2023; Porsdam Mann et al., 2023). While generative AI promises ways of processing and accessing more information in less time, there are concerns that chatbots are trained on datasets that amplify distortions and biases (Gent, 2023) and continue to produce bogus data (Naddaf, 2023). There are now growing calls to improve the safeguards needed to mitigate and manage the impacts AI-driven systems can have on user judgement and decision-making (Krügel et al., 2023) and to avoid the perpetuation of misleading information (Brainard, 2023), including content that perpetuates racism and inequalities (Sadasivan et al., 2023). Although responsible AI protocols are being recommended to ensure transparency and credibility (Porsdam Mann et al., 2023; Srikumar et al., 2022), big tech companies often treat technological accidents and negative impacts as unintended consequences to be addressed only after the failures have been exposed (Clarke, 2023; Prunkl et al., 2021).

Of particular interest here are the justice consequences of using chatbot responses to inform the restoration knowledge production and policymaking needed to meet the international conservation agenda. Nations across the globe have now pledged to reach a nature net-positive outcome (CBD, 2020), halt illegal deforestation, and reverse land degradation by 2030 (UNFCCC, 2021). These ambitious targets are articulated as part of the UN Decade on Ecosystem Restoration, where the goal of reversing land degradation aligns climate and biodiversity agreements with the Sustainable Development Goals (UN, 2020). Principles guiding international restoration efforts during this UN Decade include taking direct actions to integrate Indigenous, local, and scientific knowledge to inform progress towards large-scale targets (FAO et al., 2021). However, recent assessments disclose an alarming trend of restoration strategies based on top-down analytical models (Briggs et al., 2020; Schultz et al., 2022) and standardised techniques, such as large-scale tree planting (Coleman et al., 2021; Urzedo et al., 2022). The strong dependence on data-driven restoration solutions raises critical concerns, including the reinforcement and exacerbation of inequalities in decision-making processes (Briggs et al., 2020; Dinerstein et al., 2020; Wyborn and Evans, 2021).

Environmental justice offers a useful analytical approach to navigate how data-driven AI innovations can influence international conservation commitments and place-based practices associated with multiple knowledge systems (cf. Pritchard et al., 2022; Robinson et al., 2023). Environmental justice emphasises the importance of enabling equitable distributive access to ecological restoration information, the need to recognise differences in information sources, and the political agency of often dismissed and marginalised groups in environmental evidence and decisions (Kashwan, 2022; Martin et al., 2016; Schlosberg, 2004). This includes a growing call for Global South perspectives to dismantle power asymmetries in Western science formulations (Escobar, 2011; Mignolo, 2011; Quijano, 2000) and to accommodate the plurality of knowledge systems in the conservation paradigm (Álvarez and Coolsaet, 2020; Rodríguez and Inturias, 2018; Ulloa, 2017). In the context of rapidly evolving environmental justice and generative AI debates, we examine the text-based content formulated by ChatGPT, with particular attention to the sources and information that shape ecological restoration expertise, stakeholder engagements, and techniques.

Methods

In this study, we undertook a 30-question interview with ChatGPT to analyse the distributive, recognition, procedural, and epistemic justice dimensions of AI-generated information about ecological restoration. Drawing on environmental justice lenses, we focused on issues associated with organisational, gender, and geographical representation, and with diverse knowledge systems, in ChatGPT's answers when formulating ecological restoration content. We examined responses from the ChatGPT 3.5 model, which was trained on internet text until 2019; the specific details of these datasets are not disclosed by OpenAI.

Data collection

We identified key ecological restoration components based on the International Principles & Standards for the Practice of Ecological Restoration (Gann et al., 2019). This analysis supported the establishment of key thematic areas as part of the formulation of a questionnaire (Table S1). The interview covered questions regarding knowledge systems (n = 10), stakeholder engagements (n = 10), and technical approaches (n = 10). To ensure comprehensive data collection, we asked each question 1000 times, resulting in a dataset of 30,000 answers, which were produced by the chatbot from June to November 2023. The collected datasets were processed and analysed using ATLAS.ti Mac (Version 22.0.6.0), as described in the following subsections.
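As an illustrative sketch, the repeated-query step of this design could be implemented as follows. The study does not specify its collection tooling, so the `ask` callable (which in practice would wrap a chatbot API client), the question strings, and the repeat count here are assumptions for demonstration.

```python
from collections import defaultdict

def collect_answers(ask, questions, n_repeats=1000):
    """Ask each question `n_repeats` times and group the answers by question.

    `ask` is any callable mapping a question string to an answer string;
    in practice it would wrap a chatbot API client.
    """
    answers = defaultdict(list)
    for question in questions:
        for _ in range(n_repeats):
            answers[question].append(ask(question))
    return dict(answers)

# Usage with a stub in place of a live chatbot client:
stub = lambda q: f"answer to: {q}"
dataset = collect_answers(stub, ["Q1", "Q2"], n_repeats=3)
```

With 30 questions and `n_repeats=1000`, this loop yields the 30,000 answers described above.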

Knowledge systems analysis

ChatGPT’s answers to questions 1–10 were analysed to understand how diverse dimensions of restoration knowledge were considered, including experts, affiliations, academic literature, relevant experiences, and projects. First, geographical representation was examined by identifying the countries listed by the chatbot. We identified the frequencies of countries mentioned in the 10,000 ChatGPT answers to the knowledge-system theme. We then compared the countries listed by the chatbot with the list of countries that have official restoration commitments under the Bonn Challenge (2023), the African Forest Landscape Restoration Initiative (AFR100, 2023), the Paris Agreement, the UN REDD+ programme, and other national schemes. An association was established between the frequency of each country mentioned by ChatGPT and its corresponding domestic restoration pledge: a country was classified as high frequency if its rate of mentions by ChatGPT was greater than its share of the restoration target area, and as low frequency if its rate of mentions was less than that share. Lastly, the distribution of the frequencies of the mentioned countries was analysed according to their income level and region-based categories, considering the World Bank’s definitions (2022).
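This comparison can be sketched as below. The country names and figures are hypothetical placeholders for illustration only; the actual counts and pledge areas are reported in the Supplementary materials.

```python
def classify_frequency(mention_counts, pledge_areas):
    """Label each country 'high' if its share of chatbot mentions exceeds
    its share of the total pledged restoration area, else 'low'."""
    total_mentions = sum(mention_counts.values())
    total_area = sum(pledge_areas.values())
    labels = {}
    for country in mention_counts:
        mention_rate = mention_counts[country] / total_mentions
        area_rate = pledge_areas.get(country, 0) / total_area
        labels[country] = "high" if mention_rate > area_rate else "low"
    return labels

# Hypothetical figures for illustration only:
mentions = {"CountryA": 800, "CountryB": 200}       # mentions across answers
pledges = {"CountryA": 1.0, "CountryB": 9.0}        # pledged area, Mha
labels = classify_frequency(mentions, pledges)
# CountryA is over-represented ('high'); CountryB under-represented ('low')
```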

In terms of expertise, we fact-checked the expert lists provided by ChatGPT. This validation was performed manually on 150 experts selected at random from the 1000 answers to question 1. This sample of 150 experts was also analysed in terms of the representativeness of gender, country, and organisation type in ChatGPT’s answers. This information was manually verified by searching publicly available personal biographies on websites such as institutional or personal webpages, LinkedIn, ResearchGate, ORCID, and social media. After collecting information about their gender (male, female, or non-binary) and affiliations (i.e. country and organisation type), experts’ information was anonymised to protect their privacy. Experts’ affiliations were then identified automatically across the 10,000 answers using ATLAS.ti’s named-entity-recognition algorithm.

Stakeholder engagements analysis

We analysed questions 11–20 to understand the organisational engagements described by the chatbot, including the recognition of influential stakeholders. Across 10,000 answers, we identified the listed organisations in ATLAS.ti by running a named-entity-recognition algorithm over the texts. We selected and coded only the organisations listed at least 10 times, to eliminate noise and outlier data in the graph visualisation. We then performed a social network analysis, grouping and colour-coding nodes by organisation type, particularly to highlight community-led organisations. The codebook of organisation types is presented in Table S2.
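A minimal sketch of this thresholding and network-building step, using the standard library rather than ATLAS.ti and toy data (the organisation names below are illustrative, not from the study):

```python
from collections import Counter
from itertools import combinations

def build_network(answers_orgs, min_mentions=10):
    """Keep organisations mentioned in at least `min_mentions` answers and
    link pairs that co-occur within the same answer."""
    counts = Counter(org for orgs in answers_orgs for org in set(orgs))
    kept = {org for org, n in counts.items() if n >= min_mentions}
    edges = Counter()
    for orgs in answers_orgs:
        for a, b in combinations(sorted(set(orgs) & kept), 2):
            edges[(a, b)] += 1
    return kept, edges

# Toy data: each inner list holds the organisations named in one answer.
answers = [["WWF", "TNC"]] * 12 + [["WWF", "RareOrg"]] * 3
nodes, links = build_network(answers, min_mentions=10)
# "RareOrg" (3 mentions) falls below the threshold and is dropped
```

The resulting node and edge sets can then be passed to any graph-layout tool for the colour-coded visualisation described above.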

Technical approaches analysis

We examined diverse dimensions of restoration technical practices by analysing questions 21–30. This analysis considered how the chatbot approaches the diversity of ecosystems, plant life forms, ecological restoration approaches, and environmental outcomes. The codebook of these assessments is presented in Table S3. We undertook prevalence and sentiment analyses on this content as follows. We searched keywords to identify the presence of different ecosystem types and plant life forms in the 10,000 ChatGPT answers. We then calculated the minimum, first quartile, median, third quartile, and maximum of the mentions of each studied variable in the answers to develop boxplot graphs. Furthermore, we searched for different ecological restoration approaches and their associated environmental impacts in the chatbot’s answers. We then performed artificial intelligence-based sentiment analysis using ATLAS.ti across the texts for each of these ecological restoration approaches and environmental consequences. Our analysis identified ChatGPT’s sentiments (positive, negative, and neutral statements) for each technical approach and its environmental outcomes.
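The five-number summary underlying each boxplot can be computed with the standard library; a sketch with hypothetical per-answer mention counts:

```python
import statistics

def five_number_summary(values):
    """Return (min, Q1, median, Q3, max) for a list of mention counts."""
    q1, med, q3 = statistics.quantiles(values, n=4, method="inclusive")
    return min(values), q1, med, q3, max(values)

# Hypothetical counts of one variable's mentions across a handful of answers:
counts = [1, 2, 3, 4, 5]
print(five_number_summary(counts))  # (1, 2.0, 3.0, 4.0, 5)
```

The `method="inclusive"` option treats the data as the full population of answers rather than a sample, which matches summarising a fixed set of collected responses.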

The Global North expertise shapes ChatGPT’s restoration information

ChatGPT’s answers covered the restoration experiences of 145 countries across different regions of the world. However, this information reveals uneven geographical and expertise representation in how restoration knowledge is sourced and formulated. Two-thirds of the sources used by ChatGPT relied on restoration evidence from the United States, Europe, Canada, or Australia (Fig. 1a). Figure 1b shows that information associated with high-income countries was 7.8 times more frequent than that for their low- and lower-middle-income counterparts. We identified a substantial knowledge gap in covering restoration evidence from South Asia (1.8%), Sub-Saharan Africa (6.1%), and the Middle East and North Africa (1.8%). Most of the East Asian and Pacific region information was associated with Australia (46.3%), while the Polynesian (5.2%) and Melanesian (0.6%) regions were poorly described.

Fig. 1: Geographical representation of ChatGPT’s responses on ecological restoration expertise.

ChatGPT’s responses (n = 10,000) (a) relied on narrow expertise from the Global North, (b) excluded expertise from low- and lower-middle-income countries, and (c) neglected information on countries with restoration pledges. Countries with no information provided by the chatbot are shown in grey. More specific information on countries can be found in the Supplementary materials.

Figure 1c illustrates that the chatbot has limited information from locations with officially established restoration pledges contributing to global conservation strategies. Out of the 81 countries with restoration commitments, almost one-quarter were not mentioned by ChatGPT. Most of these dismissed official restoration contributions were associated with low- and lower-middle-income countries. Despite thirty-four African countries leading ambitious large-scale initiatives to restore over 100 million hectares of degraded lands (AFR100, 2023), the chatbot only vaguely mentioned the experiences of two-thirds of these nations (Table S4). For instance, ChatGPT neglected the experiences from the Democratic Republic of the Congo, Tanzania, and the Central African Republic, which collectively established a restoration goal of more than 16 million hectares by 2030. Meanwhile, 40 high-income countries without any official restoration pledges were described by the chatbot.

The chatbot relied substantially on content produced primarily by male researchers (68%) when asked to review available expertise in ecological restoration (Fig. S1). Across ChatGPT’s answers, we identified 1118 experts affiliated with 928 organisations. The top ten cited experts represented half of all the mentions made by the chatbot. Sixty-six percent of the listed experts were based in the United States, largely working at universities (Fig. S1a). Only one-quarter of researchers cited by the chatbot were based in non-high-income countries, with 3.6% of experts affiliated with organisations in low- and lower-middle-income nations. Beyond these representation issues, more than one-third of the experts listed by the chatbot were inaccurate: out of the sample of 150 experts, 57 had inaccurate names or affiliations, or no relationship with the ecological restoration field (Fig. S1).

Overlooked Indigenous and community-led restoration organisations

ChatGPT listed 265 organisations engaging with ecological restoration actions globally (Table 1). These organisations spanned a wide range of sectors, including government, non-profit organisations, companies, international bodies, and universities, as well as community and Indigenous groups. Prominent roles in shaping restoration interventions were substantially associated with influential well-established international organisations or government agencies from high-income nations. More than half of these references were related to not-for-profit entities, most of which are associated with transnational non-government organisations, such as the World Wide Fund for Nature (9.8%), The Nature Conservancy (7.9%), and the International Union for Conservation of Nature (4.5%) (Fig. S2). The chatbot also amplified the restoration work led by international bodies (13.8%) and government agencies (22.4%), particularly from the United States of America. These state-led agencies are described as authorities responsible for developing and implementing influential restoration policies and programmes that are intended to lead to large-scale restoration actions.

Table 1 Organisations engaged in ecological restoration actions according to ChatGPT.

The striking feature of this analysis was that only 2% of the mentions by ChatGPT considered Indigenous and community restoration experiences (Table 1). These Indigenous and community-led organisations exhibited few connections and little influence within the restoration social network analysis of the answers (Fig. S2). All these initiatives occupied positions in the periphery of the network, indicating isolation of their contributions. The chatbot-generated information associated with restoration grassroots actions often consisted of generic descriptions that homogenised diverse groups into a singular category. There was a lack of specific details about community, volunteering, or citizen-science organisations and the contextualisation of their place-based engagements, stories, and experiences across varied landscapes.

A focus on forests and tree planting neglects holistic restoration techniques

ChatGPT reinforced the adoption of tree planting as the main restoration intervention associated with optimistic environmental outcomes (Fig. 2). There was a significant focus in the chatbot’s answers on recovering the functionality of forests and wetlands (Fig. S3). In fact, forests and wetlands collectively represented more than two-thirds of the ecosystem types described in the answers. Among all the plant life forms mentioned throughout the answers, trees accounted for almost 92% of the mentions (Fig. S4). Trees were at least 18 times more likely to be present in ChatGPT’s responses compared to any other plant life form. Ecosystems such as grasslands (14.6%), coastal areas (5.2%), savannas (0.3%), and drylands (0.3%) were largely disregarded, without a critical understanding of their technical requirements and contexts.

Fig. 2: Sentiments expressed in ChatGPT’s answers focused on the positive environmental outcomes of ecological restoration techniques.

This Sankey diagram covers the distribution of sentiments (centre) associated with the types of restoration techniques (left, n = 10,000 answers) and their associated environmental outcomes (right, n = 10,000 answers).

Planting was the most frequent restoration technique mentioned by the chatbot (46%), while other relevant restoration techniques, such as direct seeding, agroforestry, and nucleation, were barely cited. More than half of the restoration techniques described were associated with positive statements, including soil recovery benefits (21.5%), biodiversity conservation (19.3%), and water quality and availability (18.7%). A very small percentage of the ChatGPT-generated content focused on negative restoration impacts (2.3%), which were 25 times less likely to be generated compared to positive statements (Fig. 2). The language used by the chatbot tended to focus on neutral and optimistic techniques and consequences, omitting current evidence and debates on how injustices and inequalities emerge from restoration efforts.

Environmental justice eludes AI-generated restoration information

Ecological restoration in the tropics is internationally recognised as the most promising intervention to achieve climate and biodiversity goals (e.g. Cook-Patton et al., 2020; Strassburg et al., 2020). Yet, ChatGPT reinforced the adoption of North American and European sources to inform the expertise needed to support restoration efforts worldwide. This bias reflects a growing concern about the dominant Western scientific knowledge being used to shape conservation strategies (Rodríguez and Inturias, 2018) that rely heavily on English-language information (Amano et al., 2023). While high-income nations centralise power through knowledge production and policymaking, the responsibilities to implement conservation actions are often transferred to Global South nations (Asiyanbi and Lund, 2020; Lewis et al., 2019). This highlights the distributive, procedural, and epistemic injustices in regions with a history of colonisation or limited influence in international decision-making (de Sousa Santos, 2014; Mignolo, 2021). These power asymmetries in research development reveal the colonial legacies inherent in Western science that can dismiss the experiences, histories, and perspectives of Global South nations (Maldonado-Torres, 2016).

In this study, we demonstrated geographical, expertise, and organisational biases in the ecological restoration content generated by ChatGPT. Of particular concern is the chatbot’s tendency to homogenise and overlook the contributions of Indigenous and community-led practices. In the era of digital conservation, the significance of representation issues becomes increasingly prominent, as Western science dominates the ways of framing AI-driven tools for formulating plans, strategies, and actions (Lewis et al., 2020). These injustices surrounding conservation knowledge production have been critically debated and exposed over the last decades (Álvarez and Coolsaet, 2020; Vermeylen, 2019). These critical decolonial considerations have led to growing calls for the integration of multiple knowledge systems and the elevation of political agency for diverse stakeholders in conservation decision-making processes (Guibrunet et al., 2021; Urzedo and Robinson, 2023). The Kunming–Montreal Global Biodiversity Framework, for instance, urges the need for effective contributions of women, Indigenous Peoples, local communities, and civil society organisations in conservation efforts (CBD, 2023). There is now a crucial need to translate these international decisions into tangible research and technological practices aimed at dismantling colonial legacies in knowledge production and AI developments.

Environmental injustices not only emerge from the social and geographical biases but also from the ways of representing nature (Ulloa, 2017). The chatbot reproduced a significant technical limitation in restoration efforts by neglecting the significance of non-forest ecosystems and non-tree plant species (cf. Bond, 2016; Tölgyesi et al., 2022; Veldman et al., 2019). Despite the existence of a wide range of techniques to consider diverse ecosystems (e.g. Arruda et al., 2023; Buisson et al., 2022; Overbeck et al., 2022), restoration interventions rely heavily on reforestation and tree-planting techniques as optimistic ways of reversing degraded landscapes worldwide (Coleman et al., 2021). Other digital developments, including data platforms and smart technologies, also reinforce forest-centric approaches to recover degraded ecosystems (Gabrys et al., 2022; Urzedo et al., 2022). However, these emerging tools are not accounting for the fact that large-scale tree planting and afforestation projects can lead to negative environmental impacts in non-forest ecosystems (Holl and Brancalion, 2020) and generate severe socioeconomic harm at the local level (Fleischman et al., 2022; Reyes-García et al., 2022).

Towards responsible chatbot contributions for just conservation

Urgent measures are essential to reorient AI chatbot developments to ensure these tools prioritise ethical practices when gathering, processing, and translating datasets into information. Foundational responsibility and accountability considerations include the disclosure of sources and authorship to reveal how databases are included and assembled in the generation of answers (cf. Gaggioli, 2023). The fast-paced chatbot advancements also emphasise the need for decolonial formulations to enable the coexistence of diverse histories, stories, connections, and worldviews (cf. Blaser and Cadena, 2018; Escobar, 2018). Negotiating knowledge systems within digital practices should consider the plurality of experiences and the nuances across contexts and groups (Westerlaken et al., 2022), especially in relation to gender, race, and ethnicity considerations (Noble, 2020). This effort can also draw on insights from extensive research on co-production mechanisms that facilitate inclusive and just knowledge sharing and integration practices (e.g. Lemos et al., 2018; Nel et al., 2016; Robinson et al., 2022). Without these perspectives, chatbots may reinforce or exacerbate the social harms and power asymmetries that exist in technological systems (Benjamin, 2019).

These AI-driven innovations also raise issues about data access, control, and ownership, particularly in terms of how community, collective, and Indigenous knowledge practices are considered and integrated (Robinson et al., 2023; Walter and Suina, 2019). As chatbots evolve, it is imperative for big tech companies to contemplate the perspectives of societal groups to formulate responsible approaches for reworking data sourcing and modelling based on specific contexts and demands. The incorporation of these needs into chatbot formulations requires situated negotiations, considering data sovereignty and democratic decision-making processes (Taylor and Kukutai, 2016). These complex efforts challenge the existing ethical approaches to data governance, such as the “Collective Benefit, Authority to Control, Responsibility, and Ethics” (CARE) principles (Carroll et al., 2020), requiring safeguards in the fast expansion of large language models. Recognising the increasing ubiquity of chatbots in daily life, making these AI-driven tools more broadly transparent and accountable will illuminate their contributions and limitations in embracing environmental justice perspectives.