8.1 Introduction

Artificial intelligence (AI) is one of the few emerging technologies that are very prominently linked to the UN Sustainable Development Goals (SDGs) (UN n.d.a). Through 17 goals and 169 targets, 193 nations resolved “to end poverty and hunger everywhere; to combat inequalities within and among countries; [and] to build peaceful, just and inclusive societies” by 2030 (UN 2015). One could perhaps even argue that AI has been linked directly to international justice and sustainability through the SDGs.

“AI for Good” is a UN-led digital platform that identifies AI solutions to problems relevant to the SDGs. The site mostly offers information about big data sets, provided as links to other sites. For instance, data sets on primary energy production and consumption as well as renewable energy data are provided in relation to SDG 7, “Affordable and clean energy”. More unusual links from the AI for Good platform include AI-generated photographs designed to increase empathy with distant strangers. To achieve this increase in empathy, AI calculations transformed pictures of a Boston neighbourhood into images reminiscent of a war-ravaged Syrian city. The results of this DeepEmpathy project were linked to the AI for Good site under SDG 1, “No poverty” (Scalable Cooperation n.d.).

A similar initiative, AI for SDGs, led by the Chinese Academy of Sciences, also collates projects from around the world, mapped onto individual SDGs. For instance, an Irish project using remote sensing data, Microsoft Geo AI Data Science Virtual Machines and GIS mapping “to develop machine learning models that can identify agricultural practices” contributing to the decline of bees was linked to SDG 2, “Zero hunger”, and SDG 15, “Life on land” (AI for SDGs Think Tank 2019).

Many philosophers and ethicists make a distinction between doing no harm and doing good. Most prominently, Immanuel Kant distinguished the two by referring to perfect and imperfect duties. According to Kant, certain actions such as lying, stealing and making false promises can never be justified, and it is a perfect duty not to commit them (Kant 1965: 14f [397f]). Even without understanding the complicated Kantian justification for perfect duties (categorical imperatives, ibid 42f [421f]), one can comply with this ethical requirement straightforwardly: do no intentional harm. Imperfect duties, on the other hand, are more difficult to comply with, as they are open-ended. Kant also calls them virtue-based (Kant 1990: 28f [394f]). How much help to offer the needy is a typical example. Until one has exhausted one’s own resources? Or by giving 10% of one’s wealth?

This Kantian distinction is also prominent in the law and everyday morality: “in both law and ordinary moral reasoning, the avoidance of harm has priority over the provision of benefit” (Keating 2018). AI for Good thus falls into the second category, the provision of benefits, and by implication into an area of ethics and morality that is more difficult to assess.

The two case studies presented in this chapter are examples of attempts to provide benefits. However, as they are drawn from the real world, they blur the Kantian distinction. The first case study also illustrates direct harm to vulnerable populations, and the second a high likelihood of potential harm combined with a lack of care and equity in international collaboration.

While the intentionality of harm is decisive for Kant when assessing moral actions (he famously said that nothing in the world is ethical or good per se other than a good will; Kant 1965: 10 [393]), lack of due diligence or care has long been identified as a shortcoming in ethical action (Bonnitcha and McCorquodale 2017). Similarly, the 2000-year-old maxim “Primum non nocere, secundum cavere, tertium sanare” (do no harm, then act cautiously, then heal) (Weckbecker 2018) has been employed in the twenty-first century to describe responsible leadership in business and innovation (Leisinger 2018: 120–122). Technologies can often be used for purposes that were not originally foreseen or intended, which is why responsiveness and care are required in responsible innovation today (Owen et al. 2013: 35).

8.2 Cases of AI for Good or Not?

“Farming 4.0”, “precision agriculture” and “precision farming” (Auernhammer 2001) are all terms used to describe, among other things, the employment of big data and AI in agriculture. The International Society of Precision Agriculture has defined precision agriculture as follows:

Precision agriculture is a management strategy that gathers, processes and analyzes temporal, spatial and individual data and combines it with other information to support management decisions according to estimated variability for improved resource use efficiency, productivity, quality, profitability and sustainability of agricultural production. (ISPA n.d.)

Precision agriculture even has its own academic journal, which covers topics from machine learning methods for crop yield prediction (Burdett and Wellen 2022) to neural networks for irrigation management (Jimenez et al. 2021).
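
Precision agriculture is thus, at its core, a data-driven prediction problem. As a purely illustrative sketch, the code below shows how a machine-learning crop-yield model of the general kind surveyed in that journal might be trained; the feature names, synthetic data and choice of a random forest are assumptions made for this example, not a reconstruction of any published method.

```python
# Minimal, hypothetical sketch of machine-learning crop-yield prediction.
# Features, data and model choice are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
n_fields = 500

# Synthetic per-field records: rainfall (mm), mean temperature (°C) and
# soil nitrogen (kg/ha), standing in for real agronomic measurements.
rainfall = rng.uniform(200, 900, n_fields)
temperature = rng.uniform(12, 30, n_fields)
nitrogen = rng.uniform(20, 150, n_fields)
X = np.column_stack([rainfall, temperature, nitrogen])

# Synthetic yield (t/ha) with noise, standing in for harvest records.
y = (0.004 * rainfall - 0.05 * (temperature - 21) ** 2
     + 0.01 * nitrogen + rng.normal(0, 0.3, n_fields))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("Mean absolute error (t/ha):",
      mean_absolute_error(y_test, model.predict(X_test)))
```

In real deployments the inputs would be remote sensing, soil and harvest records rather than synthetic numbers, but the workflow of fitting and evaluating a predictive model on field-level data is the same.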

In the service of precision agriculture, AI is useful, for example, in processing vast amounts of data for weather forecasting, climate monitoring and decadal predictions (climate predictions of up to a decade ahead), with the ultimate aim of increasing forecast quality (Dewitte et al. 2021). Increased forecast quality could, for instance, enable earlier evacuation ahead of severe weather events such as tornadoes, or reduced irrigation where future rainfall can be forecast with high precision.

8.2.1 Case 1: Seasonal Climate Forecasting in Resource-Limited Settings

Seasonal climate forecasting (SCF) is used to predict severe weather, such as droughts and floods, in order to provide policymakers and farmers with the means to address problems in an anticipatory rather than a reactive manner (Klemm and McPherson 2017). Lemos and Dilling (2007) have argued that the benefits of SCF mostly reach those “that are already more resilient, or more resource-rich … in terms of … ability to cope with hazards and disasters”. By contrast, those who are most at risk of being pushed below the poverty line by severe weather have been harmed in cases in Zimbabwe, Brazil and Peru. In Zimbabwe and Brazil, poor farmers were denied credit after SCF results predicted a drought (ibid). In Zimbabwe, “misinterpretation of the probabilistic nature of the forecast by the banking sector” might have played a role in decision-making about credit (Hammer et al. 2001). SCF in Peru also led to accelerated layoffs of workers in the fishing industry due to “a forecast of El Niño and the prospect of a weak season” (Lemos and Dilling 2007).
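
The “probabilistic nature of the forecast” is central here: a seasonal forecast states shifted odds, not a certainty. The toy calculation below, with all numbers invented rather than drawn from the Zimbabwean case, illustrates how reading a raised drought probability as a deterministic prediction can flip a credit decision.

```python
# Hypothetical illustration: a probabilistic seasonal forecast read as
# shifted odds versus misread as a certainty. All numbers are invented.

p_drought = 0.45     # forecast: 45% chance of a below-normal rainy season
repay_normal = 0.95  # assumed loan repayment rate in a normal season
repay_drought = 0.60 # assumed loan repayment rate in a drought season

# Read probabilistically: the expected repayment rate under the forecast.
expected = p_drought * repay_drought + (1 - p_drought) * repay_normal
print(f"Expected repayment rate: {expected:.2f}")  # 0.79

# Misread deterministically ("a drought WILL happen"), repayment looks
# like 0.60 and credit is denied outright, even though the single most
# likely outcome is still a normal season (55% versus 45%).
print(f"Deterministic misreading: {repay_drought:.2f}")
```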

Agenda 2030, the underlying framework for the SDGs, makes the following commitment: “Leave no one behind” (UNSDG n.d.). The case above shows that some of those most in need of not being left behind have suffered as a result of new seasonal climate forecasting techniques.

SDG 9 focuses on fostering innovation, and its first target includes the aim of affordable and equitable access for all (UN n.d.a). While the above cases from Zimbabwe, Brazil and Peru precede the SDGs, the potential for AI to “exacerbate inequality” has since been identified as a major concern for Agenda 2030 (Vinuesa et al. 2020). We will return to this problem after the second case.

8.2.2 Case 2: “Helicopter Research”

In 2014, a research team from higher-income countries requested access to vast amounts of mobile phone data from users in Sierra Leone, Guinea and Liberia to track population movements during the Ebola crisis. They argued that the value of such data was undeniable in the public health context of the Ebola crisis (Wesolowski 2014). Other researchers disagreed and maintained that quantified population movements would not reveal how the Ebola virus spread (Maxmen 2019). As no ethics guidelines on providing access to mobile phone data existed in Sierra Leone, Guinea and Liberia, government time was spent deliberating whether to provide such access. This time expended on debating access rights, it was argued, “could have been better spent handling the escalating crisis” (ibid). Liberia decided to deny access owing to privacy concerns (ibid), and the research was undertaken on mobile phone data from Sierra Leone. The data showed that fewer people travelled during the Ebola travel ban, but it did not assist in tracking Ebola (ibid).
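
To make the technical side of the case concrete, the schematic sketch below shows one common way of turning anonymised call-detail records into region-to-region movement counts. The data and the simplified method are invented for illustration; they are not those of the Sierra Leone study.

```python
# Schematic sketch: aggregating anonymised call-detail records (CDRs)
# into daily region-to-region movement counts. All data are invented.
from collections import defaultdict

# Each record: (hashed subscriber id, day, region of the cell tower used).
cdrs = [
    ("a1", 1, "Freetown"), ("a1", 2, "Bo"),
    ("b2", 1, "Bo"),       ("b2", 2, "Bo"),
    ("c3", 1, "Kenema"),   ("c3", 2, "Freetown"),
]

# Record the last observed region per subscriber per day.
last_seen = defaultdict(dict)
for subscriber, day, region in cdrs:
    last_seen[subscriber][day] = region

# Count region changes between consecutive days as origin-destination flows.
flows = defaultdict(int)
for days in last_seen.values():
    for day in sorted(days):
        if day + 1 in days and days[day] != days[day + 1]:
            flows[(days[day], days[day + 1])] += 1

for (origin, destination), count in sorted(flows.items()):
    print(f"{origin} -> {destination}: {count}")

# Note: even such aggregates can reveal information about small groups,
# which is one source of the privacy concerns discussed below.
```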

One could analyse this case ethically from a harm perspective too, given that valuable government time was arguably lost that could otherwise have been spent handling the Ebola crisis, as one case commentator argued. One could also analyse it in the context of potential harm from privacy breaches when researchers obtain big data sets from countries that have limited means to ensure privacy, especially during a crisis. So-called data-for-good projects have “analysed calls from tens of millions of phone owners in Pakistan, Bangladesh, Kenya and at least two dozen other low- and middle-income nations” (Maxmen 2019), and it has been argued that

concerns are rising over the lack of consent involved; the potential for breaches of privacy, even from anonymized data sets; and the possibility of misuse by commercial or government entities interested in surveillance. (ibid)

However, we will analyse the case from the perspective of “helicopter research”, defined thus:

The practice of Global North … researchers making roundtrips to the Global South … to collect materials and then process, analyze, and publish results with little to no involvement from local collaborators is referred to as “helicopter research” or “parachute research”. (Haelewaters et al. 2021)

Helicopter research thrives in crises. For instance, during the same 2014 Ebola crisis, a social scientist from the Global North collected social science data without obtaining ethics approval for his research, taking undue advantage of the fragile national regulatory framework for overseeing research (Tegli 2017). Before the publication of his results, the researcher realised that he would need research ethics approval to publish. He had already left the country and asked a research assistant to make the case for retrospective approval. The approval was denied by the relevant research ethics committee (ibid).

One of the main problems of helicopter research is the lack of involvement of local researchers, potentially leading to colonial assumptions about what will help another country best. Often benefits for researchers from the Global North are clear (e.g. access to data, publications, research grants), while benefits might not materialise at all locally, in the Global South (Schroeder et al. 2021). We will return to this in the next section, but here, in the context of obtaining large-scale phone data during a crisis, we can cite a news feature in Nature reporting that

researchers … say they have witnessed the roll-out of too many technological experiments during crises that don’t help the people who most need it. … [A] digital-governance researcher … cautions that crises can be used as an excuse to rapidly analyse call records without frameworks first being used to evaluate their worth or to assess potential harms. (Maxmen 2019)

8.3 Ethical Questions Concerning AI for Good and the SDGs

At first sight, AI for Good seems to deserve celebration, especially when linked to the SDGs. And it is likely that praise is warranted for many efforts, possibly most (Caine 2020). However, the spectre of inequities and unintended harm due to helicopter research or a lack of due diligence looms large. AI for Good may be reminiscent of other efforts where technological solutions have been given precedence over alternatives and where local collaborators have not been consulted, or have even been excluded from contributing.

Another similarly named movement is called GM for Good, and examples of helicopter research on the application of genetically modified (GM) technologies in resource-limited settings are not hard to find.

In 2014, a US university aimed to produce a transgenic banana containing beta-carotene to address vitamin A deficiency in Uganda. Later the research was abandoned for ethical reasons. During the human food trials conducted among US-based students, safety issues and undue inducement concerns materialised. However, the study also raised concerns in Uganda, in particular about the potential release of a transgenic fruit, the risks of undermining local food and cultural systems, and the risks of reducing banana agrobiodiversity. Uganda is home to non-modified banana varieties that are already higher in beta-carotene than the proposed transgenic variety. Uninvited intrusions into local food systems that were not matched to local needs were unwelcome and considered inappropriate (Van Niekerk and Wynberg 2018).

Analysing the problems of building GM solutions for populations on the poverty line, Kettenburg et al. (2018) made the following suggestion in the context of Golden Rice, another contentious example:

To transcend the reductionism of regarding rice as mere nutrient provider, neglecting its place in the eco- and cultural system … and of describing vitamin A-deficient populations as passive victims … we propose to reframe the question: from “how do we create a rice plant producing beta-carotene?” … to “how do we foster the well-being of people affected by malnutrition, both in short and long terms?”

AI for Good can also be susceptible to the weaknesses of helicopter research and reductionism for the following five reasons.

8.3.1 The Data Desert or the Uneven Distribution of Data Availability

AI relies on data: machine learning and neural networks are only possible with data as input. Data is also a highly valuable resource (see Chap. 4 on surveillance capitalism). In this context, a South African report speaks of the “data desert”, citing worrying figures such as a decrease in statistical capacity over the past 15 years in 11 out of 48 African countries (University of Pretoria 2018: 31). This is highly relevant to the use of AI in the context of the SDGs. For instance, Case 2 used the records of mobile phone calls during a crisis to track population movements. “However, vulnerable populations are less likely to have access to mobile devices” (Rosman and Carman 2021).

The data desert has at least two implications. First, if local capacity is not available to generate a sufficient amount of data for AI applications in resource-limited settings, the data might have to be generated by outsiders, for example researchers from the Global North “helicoptering” into the region. Second, such helicopter research then has the potential to increase the digital divide, as local capacities are left undeveloped. (See below for more on the digital divide.) In this context, Shamika N. Sirimanne, Director of the Division on Technology and Logistics at the UN Conference on Trade and Development, says: “As the digital economy grows, a data-related divide is compounding the digital divide” (UNCTAD 2021).

8.3.2 The Application of Double Standards

Helicopter research can in effect be research that is only carried out in lower-income settings because it would not be permitted, or would be severely restricted, in higher-income settings, for instance owing to the potential for privacy breaches from the large-scale processing of mobile phone records. For example, there is no evidence in the literature of any phone-tracking research having been used during the catastrophic 2021 German floods in the Ahr valley, even though almost 200 people died and it took weeks to track all the deceased and all the survivors (Fitzgerald et al. 2021). One could speculate that it would have been very hard to obtain consent to gather mobile phone data, even anonymised data, for research from a German population, even in a crisis setting.

8.3.3 Ignoring the Social Determinants of the Problems the SDGs Try to Solve

SDG 2, “Zero hunger”, refers to a long-standing problem that the Nobel economics laureate Amartya Sen ascribed to entitlement failure rather than a shortage of food availability (Sen 1983). He used the Bengal famine of 1943 to show that the region had more food in 1943 than in 1941, when no famine was experienced. Put simply, the first case study above can be read as a study of how the social determinants of hunger were ignored. By trying to improve the forecasting of severe weather in order to give policymakers and farmers options for action in anticipation of failed crops, SCF overlooks the fact that this information, in the hands of banks and employers, can make matters even worse for small-scale farmers and seasonal labourers, because the latter lack the resilience and resources to cope with food shortages (Lemos and Dilling 2007). These are the social determinants of hunger.

Another example: an AI application has been developed that identifies potential candidates for pre-exposure prophylaxis against HIV (Marcus et al. 2020). Pre-exposure prophylaxis refers to the intake of medication to prevent infection with HIV. However, those who might need the prophylaxis the most can experience major adherence problems related to SDG 2, “Zero hunger”, as this patient explained:

When you take these drugs, you feel so hungry. They are so powerful. If you take them on an empty stomach they just burn. I found that sometimes I would just skip the drugs, but not tell anyone. These are some of the things that make it difficult to survive. (Nagata et al. 2012)
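
A toy calculation makes the adherence point concrete. All numbers below are invented and are not drawn from Marcus et al. (2020): ranked by predicted risk alone, the food-insecure patient would be prioritised, yet the expected benefit of the medication collapses if hunger undermines adherence, which is exactly the kind of social determinant a purely algorithmic solution misses.

```python
# Toy illustration (all numbers invented): why an AI risk score alone
# can mislead when social determinants such as food security are ignored.

# (name, model's predicted HIV risk, expected adherence to the medication)
candidates = [
    ("Patient A", 0.30, 0.90),  # food-secure: high adherence plausible
    ("Patient B", 0.40, 0.35),  # food-insecure: skips doses (see quote above)
]

RISK_REDUCTION_IF_ADHERENT = 0.9  # assumed efficacy when taken as prescribed

for name, risk, adherence in candidates:
    # Expected infections averted per person if prophylaxis is prescribed.
    benefit = risk * adherence * RISK_REDUCTION_IF_ADHERENT
    print(f"{name}: predicted risk={risk:.2f}, expected benefit={benefit:.3f}")

# Ranked by predicted risk alone, Patient B comes first; ranked by expected
# benefit, Patient A does. Addressing food security would raise Patient B's
# adherence, and with it the benefit the AI application is meant to deliver.
```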

An AI solution on its own, without reference to the social determinants of health such as local food security, might therefore not succeed for the most vulnerable segments of populations in resource-limited settings. The type of reductionism attributed to Golden Rice and the Uganda banana project described above is likely to occur when AI for Good researchers tackle the SDGs without local collaborators. This leads to yet another challenge, illustrated below with Africa as an example.

8.3.4 The Elephant in the Room: The Digital Divide and the Shortage of AI Talent

AI depends on high quality broadband. This creates an obvious problem for Africa: given the continent’s many connectivity challenges, people must be brought online before they can fully leverage the benefits of AI. (University of Pretoria 2018: 27)

Only an estimated 10% of Africans in rural areas have access to the internet, a figure that goes up to just 22% for urban populations (ibid). These figures are dramatic enough, but the ability to develop AI is another matter altogether. Analysing the potential of AI to contribute to achieving the SDGs, a United Nations Development Programme (UNDP) publication notes that the “chronic shortage of talent able to improve AI capabilities, improve models, and implement solutions” is a critical bottleneck (Chui et al. 2019).

A chronic shortage of AI talent is a worldwide challenge, even for large commercial set-ups. For instance, Deloitte has commented that “companies [in the US] across all industries have been scrambling to secure top AI talent from a pool that’s not growing fast enough” (Jarvis 2020). Potential new staff with AI capabilities are even lured from universities before completing their degrees to fill the shortage (Kamil 2019).

At the same time, a partnership such as 2030Vision, whose focus is the potential for AI to contribute to achieving the SDGs, is clear about what that requires.

Training significantly more people in the development and use of AI is essential … We need to ensure we are training and supporting people to apply AI to the SDGs, as these fields are less lucrative than more commercially-oriented sectors (e.g. defense and medicine). (2030Vision 2019: 17)

Yet even universities in high-income countries are struggling to educate the next generation of AI specialists. In the context of the shortage of AI talent, one university executive speaks of “a ‘missing generation’ of academics who would normally teach students and be the creative force behind research projects”, but who are now working outside of the university sector (Kamil 2019).

To avoid the potential reductionism of helicopter research in AI for Good, local collaborators are essential, yet these need to be competent local collaborators who are trained in the technology. This is a significant challenge for AI, owing to the serious shortage of workers, never mind trainers.

8.3.5 Wider Unresolved Challenges Where AI and the SDGs Are in Conflict

Taking an even broader perspective, AI and other information and communication technologies (ICTs) might challenge rather than support the achievement of SDG 13, which focuses on climate change. Estimates of electricity needs suggest that “up to 20% of the global electricity demand by 2030” might be taken up by AI and other ICTs, a much higher figure than today’s 1% (Vinuesa et al. 2020).

An article in Nature argues that AI-powered technologies have great potential to create wealth, yet warns that this wealth “may go mainly to those already well-off and educated while … others [are left] worse off” (ibid). The five challenges facing AI for Good enumerated above must be seen in this context.

Preventing helicopter research and unintentional harm to vulnerable populations in resource-limited settings is one of the main aims of the Global Code of Conduct for Research in Resource-Poor Settings (TRUST 2018) (see also Schroeder 2019). Close collaboration with local partners and communities throughout all research phases is its key ingredient. As we went to press, the journal Nature followed major funders (e.g. the European Commission) and adopted the code in an effort “to improve inclusion and ethics in global research collaborations” and “to dismantle systemic legacies of exclusion” (Nature 2022).

8.4 Key Insights

Efforts by AI for Good are contributing to the achievement of Agenda 2030 against the background of a major digital divide and a shortage of AI talent, potentially leading to helicopter research that is not tailored to local needs. This digital divide is just one phenomenon characteristic of a world that distributes its opportunities extremely unequally. According to Jeffrey Sachs, “there is enough in the world for everyone to live free of poverty and it won’t require a big effort on the part of big countries to help poor ones” (Xinhua 2018). But to be effective, this help cannot be dispensed colonial-style; it has to be delivered through equitable collaborations with local partners and potential beneficiaries.

What all the challenges facing AI for Good described in this chapter have in common is the lack of equitable partnerships between those who are seeking solutions for the SDGs and those who are meant to benefit from the solutions. The small-scale farmers and seasonal workers whose livelihoods are endangered as a result of the application of seasonal climate forecasting, as well as the populations whose mobile phone data are used without proper privacy governance, are meant to be beneficiaries of AI for Good activities, yet they are not.

A saying usually attributed to Mahatma Gandhi expresses it this way: “Whatever you do for me but without me, you do against me.” To make AI for Good truly good for the SDGs, AI’s potential to “exacerbate inequality”, its potential for the “over-exploitation of resources” and its focus on “SDG issues that are mainly relevant in those nations where most AI researchers live and work” (Vinuesa et al. 2020) must be monitored and counteracted, ideally in close collaboration and engagement with potential beneficiaries in resource-limited settings.