1 Introduction

Artificial intelligence (AI) is enabling organizations to address a range of real-world challenges in areas as diverse as global health, education and poverty alleviation. AI is flourishing at present because of advances in computing power, the availability of large amounts of digital information (big data, open data), and enhanced theoretical understanding. John McCarthy coined the term artificial intelligence and described the field as the “science and engineering of making intelligent machines, especially intelligent computer programs” (McCarthy 1956). AI is based on the use of mathematical models to process large quantities of data and make accurate predictions. Despite these contributions, however, we must remain constantly aware of the potential risks and shortcomings of AI, and of instances where it may fail to be fit for purpose, in order to develop the best methods for teaching AI ethics.

Students of statistics learn that an influential pioneer of statistical modelling, George Box, famously stated that “all models are wrong, but some are useful” (Box 1976). Box was concerned with two separate issues. First, following the principle of Occam’s Razor, the scientist should seek the simplest description of natural phenomena that remains highly predictive. Second, Box worried that scientists were not sufficiently alert when constructing models, explaining that “since all models are wrong the scientist must be alert to what is importantly wrong”. Box realized that model shortcomings were often due to the scientist’s failure to appreciate the importance of the model’s ingredients. The potential pitfalls that await AI systems may be inferred from the comments of another famous statistician, David Cox, who explained that “the idea that complex physical, biological or sociological systems can be exactly described by a few formulae is patently absurd” (Cox 1995).

Awareness of the risks associated with AI can be improved by learning from case studies based on prior events and by considering future scenarios. Of course, new issues will always arise and AI practitioners will therefore need to be kept up to date with best practices. A greater danger may result from the potentially adverse impacts that certain AI applications could have on society at large. Without actively considering and analysing the long-term implications of AI on our everyday lives, or making a conscious decision to accept these changes, citizens may be blindly walking into the new paradigm often referred to as the age of the fourth industrial revolution.

Africa is home to over 1.3 billion people, and it is demographically the world’s youngest continent with a median age of 19.7 years (UN 2019). The continent is already harnessing the potential of digital technology to revolutionize children’s education with Ed-Tech solutions. An example of this is the $1 million XPRIZE recipient, RoboTutor, an open-source Android tablet app from Carnegie Mellon University that enables children aged seven to ten with little or no access to schools and teachers to learn basic reading, writing and arithmetic without adult assistance (XPRIZE 2019). The AI-enabled RoboTutor addresses the acute shortage of teachers in developing countries and a Swahili version is now being tested in Tanzania. Whilst RoboTutor offers incredible opportunities for children who have access to this technology, consideration also needs to be given to the constraints that may prevent some children from accessing it and the risk of exacerbating inequalities and leaving some groups behind.

AI offers the ability to improve and speed up processes and to scale applications. The many advantages of AI must be balanced against the potential for failure when implementing solutions in the real world. Imperfect datasets, inadequate models and insufficient time to trial and test AI solutions may deliver a reputational blow to the entire field. Biased grading and scoring of individuals from minority groups and the propagation of misinformation are just two of the risks already associated with AI. The time is ripe, therefore, to consider not only the important role of AI in delivering tailor-made education to create equal opportunities for all, but also the ethics of AI and how it will impact the lives of different groups within society. Politicians, business leaders and regulators face many challenging decisions as they embrace the immediate opportunities offered by AI and consider the long-term consequences for society. By priming educators and students in the field of AI with a heightened awareness of the risks, it is hoped that many adverse consequences can be mitigated and that AI can be used for the greater good.

This chapter aims to describe the advantages and disadvantages of AI using real-world examples, to establish a set of risks to consider, and finally to present a set of scenarios that can help to stimulate discussion and debate before implementing such solutions. Many of the examples of the opportunities, challenges, risks and consequences discussed here are based on experiences at CMU-Africa, located in Rwanda, and case studies from the East Africa region. Participation in the development of Rwanda’s National Strategy for AI has also been an enormous source of inspiration for this chapter, especially for devising scenarios to frame and explore the ethical risks. The following sections are structured to convey both the incredible opportunities offered by AI and the risks and ethical issues that AI presents. The first section introduces a series of examples from different African countries to demonstrate the benefits of AI and the advantages that are already being recognized. The second section considers the many facets of risk that come with the introduction of AI. While the main focus is on what we know at present, no guidance for the ethics of AI would be complete without discussing potential risks, future concerns and fears. Some guidelines are offered where possible to help identify and hopefully avoid problems with AI. These case studies serve to highlight the potential dangers that exist and how they are already shaping our world. It will be argued that the greatest risks to society are yet to come. By being aware of and prepared for these risks, however, we will be better placed to mitigate them.

2 Opportunities

Perhaps the most exciting proposition of the field of AI is its ability to facilitate innovation in so many aspects of our lives. The resulting changes are often dramatic and difficult for many to imagine without the help of science fiction novels. Virtual assistants, chatbots, digital communication and driverless cars are changing the way we interact and connect across the entire planet. This section outlines the diversity of ongoing initiatives across the continent and highlights a number of real-world use cases.

AI is at the heart of this global digital revolution, often referred to as the fourth industrial revolution. The first industrial revolution used water and steam power to mechanize production. The second used electric power to create mass production. The third used electronics and information technology to automate production. Schwab (2015) describes how a fourth industrial revolution is building on the third, the digital revolution that has been occurring since the middle of the last century. It is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres. The hallmark of the fourth industrial revolution is the automation of traditional manufacturing and industrial practices, using innovative smart technology.

The increasing availability of data from multiple sources, often referred to as big data, combined with advances in computational power and sophisticated mathematical algorithms is driving this innovation (Thomas and McSharry 2015). The internet of things (IoT) is changing the way we interact with the physical world and satellite imagery can help us to monitor the environment around us. Big data algorithms are able to harness information about our movements, online searches, financial transactions, comments and opinions in order to generate predictive analytics and improve decision-making. This treasure chest of knowledge about the demand for and supply of goods and services will lead to more efficient allocation of resources. The scalability offered by cloud computing is helping to speed up the pace of human development and will be a key component in achieving many of the sustainable development goals (SDGs).

In 2016, at the World Economic Forum (WEF) for Africa, it was acknowledged that Africa can use the fourth industrial revolution to enhance economic growth and prosperity. While technology has the potential to offer transformative power, it was also recognized that in order to maximize this opportunity, education on the continent is in need of radical reform. The Centre for the Fourth Industrial Revolution (C4IR) Rwanda, a partner of the WEF Network for Global Technology Governance, was founded with the objective of bringing together government, industry, civil society and academia to co-design, test and refine policy frameworks and governance protocols that maximize the benefits and minimize the risks of 4IR technologies. C4IR Rwanda is primarily focusing on AI and data policy and on developing multi-stakeholder partnerships to drive innovation and adoption at scale for the benefit of society.

Carnegie Mellon University Africa (CMU-Africa) launched a new Master of Science in Engineering Artificial Intelligence (MS EAI) in 2021 in recognition of the increasing demand from students who wish to integrate AI into their engineered solutions. The degree combines the fundamentals of AI and machine learning with engineering domain knowledge. The MS EAI embeds AI into engineering frameworks, including engineering representations, applications within engineered systems, and discipline-specific interpretations of system outcomes. Within these frameworks, students learn to invent, tune and specialize AI algorithms and tools for engineering systems. MS EAI graduates engineer new solutions where AI is integral to the engineered system’s design or operation.

In its national strategy for AI, Rwanda plans to increase the number of individuals with experience in machine learning, data science, data engineering and computer science. In addition to these high-tech areas, there will also be a drive to develop practical technical skills in data collection, cleansing, processing and labelling. There will be a push to develop holistic curricula for science, technology, engineering and mathematics (STEM) subjects in order to prepare youth for these jobs in AI. Finally, there needs to be a business case for AI adoption. Attention will be given to human-centred design, identifying and piloting use cases and ensuring that there will be sufficient demand and uptake.

It is difficult to know where to start when listing the many AI innovations that are already under development across the African continent. The following examples in energy, finance and healthcare serve to highlight the wide range of interventions and applications using AI that are taking place in different countries across the continent.

There are many examples of AI-driven innovative pay-as-you-go financing models that allow customers to get instant access to products or services, while building ownership over time through flexible micro-payments. This innovation utilizes the widespread penetration of mobile phones in many countries. M-KOPA, based in Kenya, is an example of a connected asset financing platform that helps underbanked customers obtain access to products and services, such as electricity, radios, televisions and fridges.

Airtime in developing countries is quickly becoming a basic commodity among the rapidly growing middle class. Running out of airtime to make calls or load data bundles is a challenge for many prepay customers, who do not always have access to a retailer or to funds. ComzAfrica, based in Rwanda, is a micro-lending company operating in 16 countries across Africa and Asia. Its Airtime Credit Service (ACS) allows users to access airtime on a credit basis and make calls or send messages. Using actual loan data from ComzAfrica, it was shown that AI techniques could provide a credit scoring system that enables the company to quadruple the tolerable level of default rate for breaking even (Dushimimana et al. 2020).
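
To make the idea concrete, the sketch below shows one plausible shape of such a credit scoring model. It is a minimal illustration in Python using synthetic data; the features, coefficients and threshold are invented and do not represent the ComzAfrica dataset or the model of Dushimimana et al. (2020).

  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split

  # Synthetic stand-in for airtime micro-loan records: each row holds
  # (top-ups per month, average top-up amount, days since last top-up).
  rng = np.random.default_rng(0)
  X = rng.normal(size=(2000, 3))
  # Invented ground truth: frequent, recent top-ups favour repayment.
  p_repay = 1 / (1 + np.exp(-(1.5 * X[:, 0] + 0.8 * X[:, 1] - 1.2 * X[:, 2])))
  y = rng.binomial(1, p_repay)  # 1 = repaid, 0 = defaulted

  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
  model = LogisticRegression().fit(X_train, y_train)

  # Lend only when the predicted probability of repayment clears a
  # threshold chosen to keep defaults below the break-even level.
  scores = model.predict_proba(X_test)[:, 1]
  approved = scores >= 0.7
  print(f"approved: {approved.mean():.0%}, "
        f"default rate among approved: {1 - y_test[approved].mean():.0%}")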

Babylon Health is revolutionizing healthcare by empowering doctors with AI, setting it apart from other providers. With operations in the US, UK and Canada, the company is known as Babyl in Rwanda. The speed provided by AI is a key differentiator, helping medical professionals work faster, see more patients and make better decisions based on users’ data. Patients benefit by being able to address symptoms, get faster information about conditions and proceed to treatment sooner. Its AI system learns from anonymized, aggregated and consented medical datasets, patient health records, and the consultation notes of clinicians. Babylon is successfully showing how the power of AI can help address some of the healthcare challenges faced in countries with limited numbers of health professionals, enabling faster and more effective decisions about triage, causes of symptoms, and future health predictions (Baker et al. 2020).

The anonymity afforded by digital technology and AI has also given rise to some unexpected innovations in healthcare. A study found that young people use Google to self-diagnose and self-treat when concerned about sexually transmitted diseases (PSI 2020). Sadly, it is fear that drives young people to turn to Google, rather than proactive measures to make healthy choices well before symptoms present. The study found that confidentiality is key and that time efficiency is highly valued. Young people want sexual and reproductive health information at their fingertips, without others knowing what they are searching for. For these reasons, a chatbot designed and deployed in Kenya was found to be much better accepted than a human adviser. Furthermore, a focus group highlighted how chatbots developed using “American” English failed to recognize slang commonly used by Kenyan youth. This frustrated users, resulting in decreased engagement. This important finding highlights the need for learning from local content and promoting home-grown AI solutions that are more appropriate to the local context and the needs of the local population.
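
The failure mode is easy to reproduce. The toy intent matcher below (a Python sketch; all phrases are invented for illustration and bear no relation to the deployed Kenyan chatbot) answers textbook English correctly but falls through to an unhelpful default when a user writes in code-mixed Swahili and English, which is exactly the behaviour the focus group reported.

  # Toy keyword-based intent matcher "trained" only on American English.
  INTENTS = {
      "sti_info": ["symptoms of an std", "sexually transmitted"],
      "clinic_locator": ["nearest clinic", "where can i get tested"],
  }

  def classify(message):
      text = message.lower()
      for intent, phrases in INTENTS.items():
          if any(phrase in text for phrase in phrases):
              return intent
      return "fallback"  # canned reply that frustrates the user

  print(classify("What are the symptoms of an STD?"))  # -> sti_info
  print(classify("Nina dalili, niko worried sana"))    # -> fallback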

3 Challenges and Risks

One of the biggest challenges for AI is how to ensure that it is inclusive, accessible and able to benefit those who are already digitally excluded. A large digital divide exists, whether due to digital illiteracy or to a lack of mobile connectivity (GSMA 2019). AI requires data to work effectively, and unfortunately there is still an insufficient amount of accurate, complete and regularly updated data in many countries in Africa. The engine of an AI system relies on algorithms, which are sets of mathematical rules that process data. Without sufficient data from under-served communities, such as digital records and voice and text in multiple languages, there is a risk that these algorithms, often trained on foreign data, will fail to be representative of African citizens and may be less accurate as a result. Furthermore, progress can only truly be made once data is shared and made available via application programming interfaces (APIs), which provide an interface for interactions between multiple software applications.
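
As a concrete illustration of the final point, the minimal Flask sketch below exposes a small dataset over HTTP. The endpoint and records are invented; the point is only that once data sits behind a documented API, any application can build on it.

  from flask import Flask, jsonify

  app = Flask(__name__)

  # Invented open dataset: district-level rainfall readings.
  RAINFALL = [
      {"district": "Gasabo", "month": "2021-03", "mm": 142},
      {"district": "Nyarugenge", "month": "2021-03", "mm": 118},
  ]

  @app.route("/v1/rainfall")
  def rainfall():
      # Any authorised application can now consume this data over HTTP.
      return jsonify(RAINFALL)

  if __name__ == "__main__":
      app.run(port=5000)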

In the following sections, a number of potential risks are discussed. These are organized in terms of their severity and impact on society and categorized into three risk levels (Fig. 1). The three levels range from the mainly unintended consequences of AI, through purposeful intent to disrupt, to extreme hazards with the potential for substantial destruction. The first risk level is already underway, with countless examples having been encountered over the last decade. Fortunately, most of these risks can be mitigated to some extent by better awareness when designing AI systems and by enhanced cybersecurity. At the second risk level, there also exist extremely concerning examples involving criminal organizations that may in some cases have state sponsorship. Unfortunately, there are still few clear answers in terms of how to address and manage these risks. The good news is that considerable awareness of these risks now exists and numerous actors are attempting to find solutions. The third level of risk is futuristic for the moment but already of sufficient concern to warrant consideration and immediate action in order to avoid potentially harmful consequences in the future.

Fig. 1 List of escalating risks associated with AI and digitization. Level 1: inequality, discrimination, exclusion, and data breaches. Level 2: disinformation, destabilization of governments, threats to democracy, and cyber-terrorism. Level 3: failure of safety-critical systems, cyber warfare, and existential risks.

In 2019, the European Commission, tasked with shaping Europe’s digital future, produced a report entitled “Ethics guidelines for trustworthy AI” (EC 2019). According to this report, Trustworthy AI should be:

  1. lawful—respecting all applicable laws and regulations;

  2. ethical—respecting ethical principles and values; and

  3. robust—both from a technical perspective and taking into account its social environment.

This summary of the report was presented to a class of students studying AI who were majoring in IT or electrical and computer engineering (ECE) at CMU-Africa and CMU-Pittsburgh. The students were asked to identify, in their opinion, the most important component of trustworthy AI based on their own experiences. The 70 responses from students in the US and Africa were as follows: robust (47%), ethical (36%) and lawful (17%). This finding echoes the views of many engineering peers that ethics is something that others, perhaps sociologists or philosophers, should be concerned with, rather than AI practitioners. Engineers are already busy innovating and trying to make sure that the latest device, whether hardware, software or a combination, actually works. Given the technical specifications for a use case with clear demand, engineers often believe they are best left to solve problems rather than worrying about ethics.

Working in a silo and leaving the ethics for someone else to worry about might have been an acceptable approach if the pace of innovation were relatively slow and the new technology not so dangerous. It now appears, however, that AI is transforming rapidly, with profound impacts and far-reaching implications for society, meaning that one cannot separate the creation and development of new interventions from the ethical discussions about their usage. For this reason, it is critical that the authorities regulating AI work closely with, and receive regular information from, engineers in order to continuously review potential new risks as they arise.

The AI community is currently learning that applying models to socio-economic systems is fraught with danger and potential risk. While much progress has been made in avoiding technical pitfalls such as overfitting, thereby ensuring parsimony and generalisability, there remain serious issues of data availability and quality that are more difficult to quantify. Many datasets are biased because of the way in which the data was collected or labelled. Concerning examples include sampling biases and crowdsourced or alternative sources of big data that may not be representative of the population being addressed. These issues are particularly relevant for applications in African countries where, due to limited research budgets, datasets are less likely to be available for building AI models. The use of some variables can prevent inclusion and discriminate against certain groups; other variables serve as proxies that propagate existing biases. Transparency and explainability are therefore more important than ever if society is to trust AI solutions.
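
The overfitting pitfall is worth seeing in miniature. In the Python sketch below (synthetic data, invented parameters), an unconstrained decision tree memorises noise and scores perfectly on its training data yet poorly on unseen data, while a shallower, more parsimonious tree generalises better.

  import numpy as np
  from sklearn.model_selection import train_test_split
  from sklearn.tree import DecisionTreeRegressor

  # Synthetic data: a smooth signal corrupted by noise.
  rng = np.random.default_rng(1)
  X = rng.uniform(0, 10, size=(300, 1))
  y = np.sin(X[:, 0]) + rng.normal(scale=0.5, size=300)

  X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

  for depth in (None, 3):  # unconstrained vs parsimonious
      tree = DecisionTreeRegressor(max_depth=depth).fit(X_tr, y_tr)
      print(f"max_depth={depth}: train R2={tree.score(X_tr, y_tr):.2f}, "
            f"test R2={tree.score(X_te, y_te):.2f}")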

The UK’s Prime Minister Boris Johnson discovered this the hard way. In August 2020, as a result of the COVID-19 pandemic and the lockdown restrictions which prevented children from attending schools and sitting exams, the UK’s examination regulator Ofqual was obliged to develop a computer algorithm to grade all A-level students without examinations (BBC 2020). Approximately 39% of predicted A-level results were downgraded by the algorithm. Most shockingly, disadvantaged students were the most adversely affected, as the algorithm replicated existing societal inequalities. Initially, Johnson claimed the grading algorithm was dependable and robust. As student protests grew, however, Johnson changed his position, blaming what he called the 'mutant algorithm' for the exams fiasco, and Ofqual eventually overrode the algorithm (Guardian 2020). Though perhaps the first large-scale national algorithmic disaster, it is certainly unlikely to be the last.
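
The mechanism behind such downgrades can be shown with a deliberately simplified sketch. This is not Ofqual's actual model; it merely illustrates how anchoring an individual's grade to their school's historical average, with an invented weighting, systematically penalises strong students at historically lower-performing schools.

  # Illustrative only: NOT Ofqual's algorithm.
  def moderated_grade(teacher_grade, school_hist_mean, weight=0.6):
      """Blend a teacher's predicted grade with the school's past average."""
      return weight * school_hist_mean + (1 - weight) * teacher_grade

  # Two equally able students, both predicted an A (A=5, B=4, C=3, ...).
  elite = moderated_grade(teacher_grade=5, school_hist_mean=4.8)
  disadvantaged = moderated_grade(teacher_grade=5, school_hist_mean=3.2)
  print(elite, disadvantaged)  # 4.88 vs 3.92: nearly a full grade lost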

The less attractive consequence of the innovation promised by the fourth industrial revolution is the potential loss of many jobs as AI and automation replace the less skilled workers within the labour force. Worse still, there is growing concern about the long-term societal impacts of AI, particularly as automation replaces many professional jobs and larger numbers of people find themselves unemployed. A seminal study estimated that about 47% of total US employment is at risk from automation (Frey and Osborne 2017). As the benefits of AI in business become more apparent and engineers enhance the applicability of AI, it is now clear that the machines are only getting warmed up. A closer look at the jobs that might be automated paints an even scarier picture: not only are the low-skilled repetitive jobs already affected by automation in danger, but so too is a whole new set of higher-skilled professional jobs, including lawyers and medical professionals (Brandes and Wattenhofer 2016). Given the low levels of human capital and the scarcity of high-skilled jobs on the African continent, the future threat of automation is even greater.

Globalization is another major factor, often with an insatiable appetite for cheap labour by any means possible. Those developing countries that currently sell the cheap labour of their unskilled workers will face competition from AI on a global scale (Harari 2018). According to the World Economic Forum, AI is expected to replace 85 million jobs worldwide by 2025 (WEF 2020). The good news is that the same report expects AI to create 97 million new jobs in that timeframe. The big question for teenagers in African countries when considering a career is whether future opportunities may be threatened by an AI-enabled algorithm or machine that can eventually automate the tasks required in their chosen sector. New technology such as AI chatbots and the proliferation of 3D printers is likely to replace many unskilled workers who currently find employment in sweatshops and call centres. The bridge from cheap labour to high-skill tech jobs requires substantial investment in human capital development, in particular in third-level education with a focus on university degrees that offer AI skills such as data science, machine learning and cybersecurity. For this reason, Rwanda’s national strategy for AI recommends a particular focus on these areas, as well as training for technical experts to collect, process and label datasets.

Road safety is one area where the automation of driving enabled by AI may both enhance safety and replace paid employment. Walking through the current issues and considering the future risks shows just how difficult it is for policymakers to safely manage the pace of AI innovation. Road injuries are now the biggest killer of children and young adults worldwide, causing 1.35 million deaths each year, more than HIV/Aids, tuberculosis or diarrhoeal diseases (WHO 2018). In addition, between 20 and 50 million people are seriously injured in road accidents each year (WB 2017). At present, 93% of the world’s road fatalities occur in low- and middle-income countries, even though these countries have only 60% of the world’s vehicles. The causes of traffic accidents can be inferred from the US Fatality Analysis Reporting System (FARS). Road traffic deaths are overwhelmingly caused by human error, with alcohol abuse (29%), speeding (26%) and distraction (21%) among the leading factors (NHTSA 2019). Analysis of on-scene post-crash data concluded that the vast majority (93%) of critical reasons leading to crashes are attributable to the driver (Singh 2018). Self-driving cars, also known as autonomous vehicles, will make drivers redundant. By meticulously following traffic rules and communicating directly with other vehicles, they can improve road safety by never succumbing to the temptation to speed, drink, fall asleep or become distracted by telephone calls. There are, of course, serious risks in having a large fleet of autonomous vehicles, where system failure or a cyberattack could lead to large-scale disaster.

While engineers are fast at innovating and creating new solutions, they are much slower to acknowledge or consider the potential misuses or nefarious implications of their inventions. The proliferation of the internet, mobile technology and countless social media platforms dominates our lives, and we have all been lured into using these platforms without actively giving our consent or giving adequate consideration to the risks. The historian Yuval Noah Harari dedicates one of his 21 Lessons for the 21st Century to the dangers of AI and warns that society is currently facing unprecedented challenges from infotech and biotech (Harari 2018). While humans were forced to revolt against exploitation or retrain to overcome the first three industrial revolutions, Harari fears that many simply do not have the skills required to make the transition to working in high-tech jobs. Automation is therefore likely to offer a worse outcome to many: irrelevance rather than exploitation.

The COVID-19 pandemic also demonstrated just how dispensable many jobs have become: many people lost their means of employment as businesses were forced to shut down, while parts of the economy that rely on AI went from strength to strength. According to the ILO, 114 million jobs were lost globally in 2020 due to the pandemic (ILO 2020). In fact, this may be an underestimate, given that 8.8% of global working hours were lost for the whole of 2020 (relative to the fourth quarter of 2019), equivalent to 255 million full-time jobs. Roughly 9.6 million US workers (ages 16–64) lost their jobs. In contrast, only about 2.6 million workers in the EU lost their jobs over this period. This is remarkable given that the EU is home to about 100 million more people than the US. The two geographical areas contribute equally to the world economy, each accounting for about 16% of global output. The reason for the difference is that countries across the EU deployed significant employment retention schemes, while the US focused on stimulus checks and unemployment compensation in lieu of job retention. These different policies may have profound implications in the future, especially when the opportunity to replace workers with AI becomes more of a reality. While the industrial revolution created the working class, AI may already be creating a “global useless class”, a term coined by Yuval Harari to emphasize the level of exclusion that could be caused by automation (Harari 2018). The response to COVID-19 has clarified how governments will likely respond to further automation.

AI and automation have played a large role in helping many big tech companies reap the rewards of scaling up their operations and services. Apple became the world’s first trillion-dollar company in August 2018. Two years later, right in the middle of the pandemic, Apple crossed the two-trillion-dollar hurdle. A handful of big tech companies, known as the FANGAM stocks (Facebook, Amazon, Netflix, Google owner Alphabet, Apple and Microsoft), are key players in AI and have all increased in value steadily over the last decade. In 2020, Apple, Microsoft, Amazon, Google and Facebook had a 21.7% share of the S&P500—an index representing 500 of the largest companies listed on stock exchanges in the US. Investors who use fundamentals to value companies claim that the stock market has become detached from economic reality. Forward-looking AI enthusiasts argue that these tech companies represent our new reality as they process most of our interactions with digital technology, while others are terrified by the huge control and power these companies now hold over many of us.

The response to an extreme event is a good way to test the resilience of any business model. The COVID-19 pandemic, which had claimed more than 3.5 million lives as of June 2021, produced such an event. The impact on different parts of the economy helps us to understand which companies are likely to survive going forward, and AI certainly features strongly. During 2020, global lockdowns caused the S&P500 index to crash by 34% in March, but it ended the year up more than 18%. Two-thirds of that gain was due to the growth of the six FANGAM stocks, which registered average growth of over 40% during the year. The resilience of big tech is apparent from the fact that these companies continued to make money despite the majority of the global population being locked down and unable to leave their homes. It is the use of information technology and AI that allows these big tech companies to scale and grow at unprecedented rates. The effect of COVID-19, with many traditional workers furloughed and paid to stay at home without working, has served to demonstrate that the concept of the global useless class may no longer be futuristic.

3.1 Risk Level 2

There are already some noticeable risks that are being facilitated by social media and exacerbated by AI. One such risk is the proliferation of fake news, described as false or misleading information presented as news. A more sinister view is that fake news is specially crafted disinformation with the sole aim of damaging the reputation of a person, company or nation. There are increasing concerns that fake news can influence political, economic and social well-being. Indeed, fake news is frequently mentioned as having had an impact on many political elections, such as the 2016 UK Brexit referendum and the 2016 US presidential election between Trump and Clinton.

Fake news spreads much more rapidly on social networks such as Twitter than real news because people are more likely to share extreme and unlikely news than the mundane (Vosoughi et al. 2018). Falsehood diffused significantly farther, faster, deeper and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends or financial information. Along with the long-term implications of large swathes of society being misinformed, some of the dire dangers of fake news are now acknowledged. The World Health Organization (WHO) coined the term “infodemic” to describe the misinformation surrounding COVID-19, which has spread as fast as the virus itself. Sadly, conspiracy theories, rumours and cultural stigma have all contributed to deaths and injuries, with a recent study estimating that about 5,800 people were admitted to hospital globally as a result of following false information received on social media (Islam et al. 2020).

There have been numerous cybersecurity incidents involving espionage, fraud and ransomware, with a worrying upward trend in the past year. A recent study found that 86% of breaches were financially motivated and 10% were motivated by espionage (Verizon 2020). Furthermore, 70% of breaches were perpetrated by external actors, and organized criminal groups were behind 55% of breaches. In May 2021, the infamous cyber-criminal entity DarkSide used a ransomware attack to take offline a major US pipeline carrying 45% of the East Coast’s fuel supply. As a result, the US Department of Justice is elevating investigations of ransomware attacks to a similar priority as terrorism. While digitization and AI offer many opportunities, they also generate systemic vulnerabilities as a result of the digital connectivity they require. Accenture, a global consulting firm, found that the number of business leaders spending more than 20% of IT budgets on advanced technology investments has doubled in the last three years, and that 69% of business leaders say staying ahead of attackers is a constant battle and the cost is unsustainable (Accenture 2020).

3.2 Risk Level 3

Science fiction movies like Terminator make for thrilling entertainment by suggesting that AI may eventually destroy the human race. Unmanned aerial vehicles (UAVs), more commonly known as drones, are now routinely used for military missions by countries such as the US, China, Russia and Israel. It is generally assumed that humans are fully in control of the movements and actions of these drones. A 2021 United Nations report, however, suggests that a drone used against militia fighters in Libya’s civil war may have selected a target autonomously (UNSC 2021). This drone, described as “a lethal autonomous weapons system”, was powered by AI and used by government-backed forces against enemy militia fighters as they ran away from rocket attacks. The fighters “were hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems”, according to the report, which did not say whether there were any casualties or injuries. The weapons systems, it said, “were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect a true ‘fire, forget and find’ capability.” Rather than being a futuristic concern, this demonstrates that the world has already embarked on a journey that will see the proliferation of AI-enabled military equipment.

Concerns about humans not being able to compete with robots or AI applications became mainstream in the 2010s. AI has already conquered chess, once viewed as the ultimate strategic game for humans, requiring superior intelligence and years of dedicated training. IBM’s Deep Blue won its first game against the reigning world champion Garry Kasparov in 1996. Two decades later, AI systems can be trained solely via “self-play” and no longer need any human interaction or training. In 2017, Google’s DeepMind team created AlphaZero, which within 24 hours of training achieved a superhuman level of play in chess, shogi and go. With AI systems now capable of superhuman performance without the need for human inputs, it is fair to ask whether a sophisticated AI system might one day decide that humans are no longer necessary.
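
The principle of self-play can be demonstrated on a toy scale. The Python sketch below trains a tabular agent for the game of Nim purely by playing against itself, with no human games involved; AlphaZero combines the same idea with deep networks and tree search, so this is a schematic illustration rather than a description of DeepMind's system.

  import random
  from collections import defaultdict

  ACTIONS = (1, 2, 3)        # stones a player may remove per turn
  Q = defaultdict(float)     # Q[(pile, action)] -> estimated value
  ALPHA, EPSILON = 0.1, 0.2  # learning rate, exploration rate

  def choose(pile):
      legal = [a for a in ACTIONS if a <= pile]
      if random.random() < EPSILON:
          return random.choice(legal)
      return max(legal, key=lambda a: Q[(pile, a)])

  def play_episode(pile=15):
      history, player = [], 0
      while pile > 0:
          action = choose(pile)
          history.append((player, pile, action))
          pile -= action
          player = 1 - player
      winner = 1 - player  # whoever takes the last stone wins
      for who, state, action in history:
          target = 1.0 if who == winner else -1.0
          Q[(state, action)] += ALPHA * (target - Q[(state, action)])

  for _ in range(50_000):  # the agent is both players in every game
      play_episode()

  # The learned policy rediscovers the known strategy: leave a multiple of 4.
  print({p: max([a for a in ACTIONS if a <= p], key=lambda a: Q[(p, a)])
         for p in range(1, 8)})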

An existential risk is an event that could lead to human extinction or permanently and drastically curtail humanity's potential. In contrast to global catastrophic risks, existential risk scenarios do not allow for meaningful recovery and are, by definition, unprecedented in human history. The likelihood of the world experiencing an existential catastrophe over the next one hundred years has been estimated to be as high as one in six (Ord 2020). A recent report identified the misuse of AI systems as a key extreme risk and calls on governments to prepare appropriately (Ord et al. 2021). The report explains that as AI becomes integrated into safety-critical systems, whether self-driving cars, air traffic control systems or military equipment, it raises the stakes of accidents, malicious use of the technology, or AI systems behaving in unexpected ways. It therefore recommends increased funding for technical AI safety research to help avoid the dangers of unsafe AI systems.

These sections have walked through a series of risks and offered a three-level classification system based on the severity of the risk. At present, society appears to be moving consistently along this risk stratification. This is not surprising, since greater adoption of AI brings with it the potential for more harmful consequences in both magnitude and spatial scale. Many of the level one risks are manageable and in isolation do not necessarily warrant the regulation of AI, but they certainly make a case for the ethics of AI. The level two risks are already being observed and some of their consequences may be difficult to reverse. Finally, level three risks are not science fiction and deserve serious consideration. Awareness of all these risks and better classification may help policymakers to manage the opportunities and threats of AI.

4 Risk Mitigation

Fortunately, there has been a dramatic awakening to the risks of AI over the last three years. Numerous organizations have attempted to introduce ethical principles and offer recommendations that will guide future practitioners and protect society. These include national governments (UK 2018), the European Union (EC 2019), intergovernmental economic organisations (OECD 2019), international consultations involving experts from 155 countries (UNESCO 2019), the world's largest technical professional organization (IEEE 2019) and one of the largest tech companies (Microsoft 2020).

The Government of Rwanda (GoR), represented by Rwanda’s Ministry of ICT and Innovation (MINICT) and the Rwanda Utilities Regulatory Authority (RURA), is collaborating with GIZ and the Future Society in a project called “FAIR Forward—Artificial Intelligence for All”. Being part of the Future Society team collaborating with GoR and GIZ has provided many insights into the process of establishing national guidelines. It has been paramount to organize workshops, solicit expert feedback, and validate the AI ethical guidelines which are now being presented in Rwanda's National Artificial Intelligence Policy. CMU-Africa, which is based in Rwanda, will provide master’s courses in IT and AI and undertake pilot studies that aim to ensure the following important AI guidelines are respected:

  1. Societal Benefit: aim to deliver strong economic and social impact and improve the well-being of citizens

  2. Inclusion & Fairness: identify underrepresentation in data and lack of access to services due to economic means, physical location or gender

  3. Privacy: anonymize personal data and follow national privacy laws (a small illustrative sketch follows this list)

  4. Safety & Security: ensure data storage and sharing mechanisms are secure and encrypted

  5. Responsibility & Accountability: consider and acknowledge the impact of AI for all participants and stakeholders

  6. Transparency and Explainability: document all steps involved in the construction and deployment of an AI system

  7. Human Autonomy and Dignity: maintain freedom from subordination to, or coercion by, AI systems.
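
As a small illustration of guideline 3, the Python sketch below replaces direct identifiers with salted one-way hashes before data leaves its source. The field names are invented; hashing alone amounts to pseudonymization rather than full anonymization, and real deployments must follow the applicable national privacy law.

  import hashlib
  import secrets

  SALT = secrets.token_hex(16)  # kept secret, never shared with the data

  def pseudonymize(record, fields=("name", "phone")):
      """Replace direct identifiers with truncated salted SHA-256 hashes."""
      out = dict(record)
      for field in fields:
          digest = hashlib.sha256((SALT + str(out[field])).encode())
          out[field] = digest.hexdigest()[:12]
      return out

  print(pseudonymize({"name": "Sandra", "phone": "+250780000000", "loan": 120}))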

With these guidelines in mind, the alertness of future AI experts to these issues is paramount. CMU-Africa promotes trustworthy AI solutions which are lawful, ethical and robust. With a new MSc in Engineering AI being offered by CMU, it is important to ensure that learners are fully aware of these principles of AI ethics. After presenting case studies in different sectors that highlight the opportunities and risks, it is useful to illustrate the consequences of insufficient awareness and to study the dangers of privacy breaches, lack of transparency and biased data.

One approach that has been tested and proven both useful and practical, in classes at CMU-Africa and during policy workshops for the national AI strategy, is to initiate a group discussion about the risks and consequences of AI for a particular solution. This can be achieved by offering situation-appropriate scenarios about how AI would be utilized in a given sector and how it might affect certain individuals. By focusing on scenarios in which individuals experience the advantages and disadvantages of AI systems, it is easier to assess what is fair and what might lead, for example, to exclusion or job losses. These scenarios need to be realistic, offering advantages sufficient to be attractive, and yet carefully highlight some of the potential pitfalls and adverse consequences. The discussions that follow in a group environment can then be focused around the seven guidelines listed above.

In order to maintain a thread running through the different scenarios and emphasize relevant issues to tackle, each scenario can be discussed in the light of some talking points. Breakout rooms offer a means of exploring multiple sectors in a small group setting and then bringing participants back to identify common themes. The following set of guiding questions can help to draw out important ethical issues for group discussion:

  • What are the benefits of this AI system?

  • Who will specifically benefit from the adoption of this AI system?

  • Might the introduction of this AI system threaten existing jobs?

  • Will certain individuals be adversely affected immediately or in the long term?

  • Could accessing data for the AI system be a breach of privacy law?

  • Might the changes introduced by this innovation lead to greater surveillance?

  • Where and how should the data required for the AI system be stored?

  • Will data be encrypted and who will have access?

  • Are regulators or policymakers providing oversight of this innovation and its impact?

  • Is it relatively easy to understand how the AI system operates?

  • Does the introduction of this AI system remove the full and effective self-determination of any individual over themselves?

In the boxes below, three scenarios are presented for banking, healthcare and education. Depending on the audience and context, these scenarios can be adapted and extended to include other sectors where AI is likely to play an important role.

Banking Scenario

Sandra is a university student studying data science with excellent grades. She works as a waiter at a restaurant at weekends. Due to the COVID-19 lockdown, her earnings have been reduced and she is concerned about running out of money.

Just as Sandra is considering seeking a loan, she receives an SMS on her mobile phone from her bank offering a loan facility. She is amazed to find this pre-approved loan is exactly what she is looking for. She clicks on the link, accepts the loan through an app and finds the money in her bank account an hour later.

Her bank has developed a credit scoring model that utilizes AI. Harnessing data about account activity, degree course and grades enables the bank to automatically select students for pre-approved loans. The speed and efficiency of processing these loans has increased the bank’s profits and improved customer satisfaction.

Sandra’s friend John has been waiting for over a month to hear back from his bank about a paper-based loan application that took hours to complete. On hearing about Sandra’s positive experience, John decides to switch banks.

Healthcare Scenario

Peter and Catherine are married with two children; they live and work on the land in a thriving rural community situated over 100 km from the capital city. One year ago, Peter subscribed to a new medical app on his phone that was offered as a reward for being a loyal customer of his mobile network operator (MNO). The app provides useful advice on nutrition and lifestyle and has been particularly useful for receiving information on COVID-19. Peter and Catherine were very fortunate in being able to attend good local schools, and their high literacy levels have allowed them to learn from the medical app and make informed decisions that have benefited their family.

Recently Peter was pleasantly surprised to be offered medical insurance for a very reasonable monthly premium. He assumes that the enrollment information and data he has provided via the app over the year make him an attractive client for the insurer that is collaborating with the MNO. Peter is delighted that he can now have peace of mind, knowing that his family are covered by medical insurance.

Peter’s neighbour, Charles, is also a subscriber to the same MNO. He too is a farmer, but due to past health problems he has been unable to spend as much time on his land or to generate a regular income. As Charles cannot afford to make as many mobile phone calls as Peter, he has not been offered the medical app or the medical insurance.

Education Scenario

Marie is at the national university in the capital city where she studies medicine. Her family live in a smaller town, approximately three hours away. During the restrictions caused by the pandemic, Marie was relieved to find that the university was able to provide continuous education and was not forced to shut down. Fortunately, the forward-thinking administration in the university had invested substantially in IT equipment and fast broadband and were already experimenting with remote teaching for adjunct faculty living in other countries. As a result, the university was able to quickly move from physical classes to remote classes using Zoom, allowing students to continue their courses even when in a different part of the country. A partnership with an Ed-Tech company helped to streamline the process and offer additional courses and online materials.

Marie enjoys the security of being at home with her parents and siblings during the lockdowns and can still pursue her dream of becoming the first doctor in her family. Lecturers at the university have welcomed the advantages of the Ed-Tech platform, which includes tailored digital content, AI-enabled chatbots and communication channels for students working on group projects.

Marie’s older brother, George, is in his final year at a different university. The board of this university decided not to invest in IT and are relatively inflexible with regard to new technology, insisting that all lectures take place physically on campus. Sadly, this university now has no choice but to postpone all activities until the following year.

5 Conclusions

Although John McCarthy coined the term AI in 1955, many of the original hopes for this new technology have only recently been realized. From beating humans at chess, shogi and go, to empowering autonomous vehicles, to promoting human development, AI will continue to enable innovations that were previously unimaginable. Alongside the considerable opportunities offered by AI, politicians and leaders need to be wary of its risks. These risks have the potential not only to adversely affect individuals but could also threaten democracy and have a profound negative impact on society. By being aware of these risks and constantly discussing the scenarios that are likely to play out over time, it may be possible to mitigate the worst of these risks and harness the opportunities that AI can offer.