1 Artificial Intelligence as a Socio-Technical System

Business leaders, policymakers and technologists regularly portray Artificial Intelligence (AI) as an easy way to make sense of an increasingly complex world. Unsurprisingly, AI plays a central role in strategy papers, TED talks and speeches about the future of mobility, revolutions in healthcare, or scientific innovation (Bhardwaj 2018; Cornet et al. 2017). In this often techno-optimistic narrative, AI appears harmless. By remaining largely abstract, such accounts keep alive the misconception that AI is merely a technical tool, albeit a powerful one, for addressing a myriad of challenges from digital transformation to global inequality to climate change.

This changes drastically when AI moves from concept to application. The development of AI applications is embedded in social structures. That means that the norms, values, knowledge and attitudes of developers influence how an AI application is designed and how it works. They become an inherent part of the application itself and can lead to undesirable consequences through biased data or algorithmic design choices. This raises serious concerns when AI is used for hiring employees, offering loans or even in criminal proceedings, and makes decisions based on biased data about gender, ethnicity or age. For example, the facial recognition software of leading US-American companies has been shown to work better for faces with white and male characteristics (Lohr 2018), arguably quite similar to the group of people who developed the respective algorithms (Guynn 2019).
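To make the underlying mechanism concrete, the following minimal sketch shows how such disparities can be surfaced by evaluating a model's accuracy separately per demographic group rather than in aggregate. The data, group labels and numbers are entirely hypothetical illustrations, not figures from the studies cited above.

```python
# Minimal sketch of a disaggregated accuracy audit (hypothetical data).
# The point: a single overall accuracy figure can hide large gaps
# between demographic groups.
from collections import defaultdict

# Illustrative records: (group label, was the model's prediction correct?)
# In a real audit these would come from a labeled evaluation set.
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("darker-skinned female", False),
    ("darker-skinned female", True), ("darker-skinned female", False),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, is_correct in results:
    total[group] += 1
    correct[group] += int(is_correct)

for group in total:
    print(f"{group}: accuracy = {correct[group] / total[group]:.2f}")
```

Per-group evaluation of this kind is the basic mechanism behind the audits of commercial facial recognition systems that Lohr (2018) reports on.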

At the same time, AI is not used in a social vacuum. Instead, applications serve a particular purpose in the real world. Staying with the same example, facial recognition used in public CCTV or to identify suspects in criminal investigations creates various problems (Chandran 2022). If the AI system actually works, it facilitates public surveillance of citizens, with implications for their rights to privacy, dissent and protest. In the more likely case that it does not work flawlessly all the time, individuals might be accused of crimes or other violations they were not involved in.

In these application settings, it becomes clear that AI is not merely a tool but a socio-technical system. One cannot cleanly separate the technology from the social setting it is developed and used in; the two are mutually dependent and influence each other (see e.g., Acuto and Curtis 2014; Latour 2005). Two important conclusions for the relevance of AI ethics follow from this. First, AI is no harmless tool that will solve problems of crime, health and climate change on its own. The application of AI is driven by its developers, users, regulators, businesses and political decision-makers, who together constitute its social context. This is where ethics come in as important guiding principles that define why, how and when an AI system such as facial recognition is used. Second, the development of AI technologies is not pre-determined but contingent on their social context. They are the result of political and financial decisions as well as of the individual developers who write the code. Consequently, it is not only the framework conditions that determine whether AI is developed responsibly but also who writes the code. Essentially, then, it also becomes an ethical question whether the diversity found in society is reflected in the teams that develop AI.

Acknowledging the socio-technical nature of AI does not mean ignoring the fact that responsible AI offers a range of opportunities for human development and can help to achieve the Sustainable Development Goals (SDGs) (Vinuesa et al. 2020). For example, AI applications are trained on large datasets to automatically recognize and translate language. Voice technologies allow people who cannot read and write very well to interact with digital technologies. In both cases, AI systems make access to information more inclusive and facilitate social, political and economic participation. In other instances, AI-powered apps can help small-holder farmers identify plant diseases and take countermeasures early on. This not only contributes to better yields but might also avoid the excessive use of herbicides. However, the responsible development and use of AI is the foundation for realizing the opportunities it has to offer.

Overall, if AI is understood as a socio-technical system, ethics are relevant both for how AI is developed and for how it is used. In turn, that means the world is neither doomed nor saved by virtue of the power of Artificial Intelligence. Rather, policy-makers, businesses, civil society and, of course, AI developers are empowered to use AI ethically, to use AI for good. As a result, they have a particular responsibility to promote the ethical development and use of AI.

2 From AI Ethics to Practice

In light of this responsibility, it is a logical next step to tackle the challenge of teaching AI ethics to upcoming AI practitioners and decision-makers in Africa and beyond. To do so, this book analyzes the present and future states of AI ethics education in local Computer Science programs. It shares relevant best practices for in-class teaching, develops answers to ongoing organizational challenges and reflects on the practical implications of different theoretical approaches to AI ethics.

AI ethics can be described as “a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies” (Leslie 2019, p. 3). In this sense, the merit of AI ethics is twofold: it encourages developers to harness the power of AI to effect positive change while also helping them navigate the risks (Chaturvedi et al. 2021).

At first, much of the global debate on AI ethics remained rather abstract and high-level. In May 2019, the member countries of the Organisation for Economic Co-operation and Development (OECD) adopted the OECD Principles on Artificial Intelligence (OECD 2019). These count among the first international agreements on the topic and commit signatories to ensure that AI serves people and the planet and respects the rule of law, human rights and democratic values. At the same time, the principles remain rather general, which leaves room for interpretation on matters such as the transparency of AI systems, accountability, security and safety. The vagueness and non-binding nature of the OECD principles have also made them quite compatible, so that various non-OECD members as well as the G20 have endorsed them (OECD 2019; G20 2019).

The United Nations Educational, Scientific and Cultural Organization (UNESCO) has pursued a global and more inclusive approach to AI ethics. In November 2021, the General Conference of UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence (UNESCO 2021). It is the first globally accepted instrument that formulates joint values and principles. On top of that, the Recommendation defines policy actions that suggest how to implement the agreed-upon values. This more action-oriented approach can also be found in the AI for Africa Blueprint, developed by the Smart Africa Alliance under the leadership of the South African Government (Smart Africa 2021; N.B. the FAIR Forward project was involved in the development of the blueprint). The blueprint is the result of a multi-stakeholder process involving governments, the private sector and civil society. Among other things, it outlines concrete recommendations on how to create policies for responsible AI development across Africa. In early 2022, the OECD followed up on its AI Principles and released a framework for classifying AI systems that is meant to enable policy-makers to assess the opportunities and risks of AI applications (OECD 2022).

The UNESCO Recommendation, the OECD framework and the Smart Africa AI Blueprint have thus already shown that AI ethics only become influential in action, i.e. when they are implemented. The question then is how to translate AI ethics into practice so that values and rights such as privacy, fairness and security are already part of the development process. In addition to recommendations aimed at policy-makers, there are efforts to bring AI ethics into practice that put developers at the centre. On a more general note, approaches such as the Principles for Digital Development outline nine overarching guidelines on how to apply digital technologies for sustainable development (Digital Principles n.d.). For instance, they require project teams to design with the user in order to develop solutions, including AI, that effectively meet user needs. Moreover, they recommend using open-source software and open data to encourage more collaboration and avoid duplication of efforts.

More specific to AI are products such as the Handbook on Data Protection and Privacy for Developers of Artificial Intelligence (AI) in India (Chaturvedi et al. 2021; N.B. the FAIR Forward project was involved in the development of the handbook). The handbook is the result of multiple discussions with AI start-ups, developers and practitioners. Following the development cycle of AI from data collection to data processing to roll-out, it offers concrete prompts that encourage developers to think through the ethical requirements of an AI application. In doing so, the handbook turns abstract principles such as transparency into concrete questions, including “[a]re you aware of the source of data used for training?” or “[i]s there a mechanism for users and beneficiaries to raise a ticket for AI decisions?” (Chaturvedi et al. 2021, p. 16). While certainly not perfect, this approach serves to reduce uncertainty about the interpretation and meaning of abstract concepts. It allows AI developers and small start-ups who are not backed by a legal team to focus more of their time and resources on technical innovation. Quite practically, they can go through a prepared checklist during the development process and preempt ethical problems.
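As a rough illustration of how such a prompt-based checklist might be operationalized in a development workflow, the sketch below encodes two of the prompts quoted from the handbook as structured data and flags the ones that still need an answer. The data structure, stage assignments and function names are hypothetical assumptions for illustration, not part of the handbook itself.

```python
# Hypothetical sketch: representing ethics-checklist prompts as data so a
# team can track them alongside the AI development cycle.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    stage: str        # development stage, e.g. "data collection", "roll-out"
    principle: str    # the abstract principle the prompt concretizes
    prompt: str       # the concrete question to answer
    answered: bool = False
    notes: str = ""

# Prompts quoted from Chaturvedi et al. (2021, p. 16); stages are assumed.
checklist = [
    ChecklistItem("data collection", "transparency",
                  "Are you aware of the source of data used for training?"),
    ChecklistItem("roll-out", "accountability",
                  "Is there a mechanism for users and beneficiaries to "
                  "raise a ticket for AI decisions?"),
]

def open_items(items):
    """Return the prompts that still need an answer before release."""
    return [i for i in items if not i.answered]

for item in open_items(checklist):
    print(f"[{item.stage}] {item.principle}: {item.prompt}")
```

Representing the prompts as data rather than prose is one way a small team without legal support could review outstanding ethical questions at each stage of development.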

Moving on, the target group of AI ethics in computer science programs at institutions of higher education changes again. It does not so much comprise policy-makers or AI start-ups but begins slightly earlier, with future AI practitioners. In many cases, university students are the AI developers of tomorrow. One fundamental way forward is therefore to equip future AI developers with know-how on AI ethics at an early stage in their education. That is why this book tackles the challenge of integrating AI ethics into higher education curricula in Africa and beyond. It analyzes the present and future states of AI ethics education in African Computer Science and Engineering programs. The authors share relevant best practices and use cases for teaching, develop answers to ongoing organizational challenges and reflect on the practical implications of different theoretical approaches to AI ethics. As such, they offer useful starting points for educators, administrators and students in the field of AI in Africa and beyond. The book thus does not only raise awareness of the risks of AI but also offers practical tools for how to address them in university contexts.

3 Diversity of Perspectives on AI Ethics in Global Higher Education

Following this introduction, the remainder of the book is divided into three parts. The subsequent section discusses the theoretical underpinnings of AI ethics in practice. In doing so, it frames the more practice-oriented contributions by outlining conceptually different approaches to how AI ethics can be understood and taught. This is followed by three chapters on best practices and current challenges in AI ethics education. Among other things, the authors offer practice-oriented research as well as anecdotal reflections on how AI ethics are and can be taught at African universities. The book then concludes with a chapter outlining what needs to happen so that Computer Science education responsibly addresses the risks of AI while seizing the opportunities it holds for economic and social development.

In the opening chapter of the theoretical section of the book, Emmanuel R. Goffi reflects on the origins of AI ethics. Given the dominance of Western thought, especially continental philosophy, he proposes a more inclusive perspective that leads to a cross-cultural approach to AI ethics. As AI can be conceived as a socio-technical system, the local context becomes relevant in both development and application. Consequently, teachers of AI ethics should embrace the variety of cultures and thought from Africa and beyond to account for the relevance of the local context. Ugochi A. Okengwu builds on this and reinforces the relevance of including African perspectives in the formulation of global AI ethics. Putting this into practice, she reviews different ethical frameworks that are applied to AI, e.g. the OECD AI Principles, to derive suggestions for practicing AI ethics in Africa.

Joyce Nakatumba-Nabende, Conrad Suuna and Engineer Bainomugisha kick off the second part of the book on present practices and challenges in AI ethics education. They empirically describe three approaches to teaching AI ethics at African universities, including full course programs, AI research labs and the project-based application of AI. Drawing on practical experience, they outline concrete recommendations for how AI ethics can best be integrated into teaching, emphasizing the relevance of including local African perspectives and use cases. Why and how the process of introducing AI ethics into Computer Science curricula can be challenging is discussed by Laeticia N. Onyejegbu. Following an analysis of the institutional setup, using Nigeria as a case study, she presents suggestions on how AI ethics can play a more relevant role in teaching, both by including it in existing benchmarking standards and by creating stand-alone courses. Patrick McSharry takes the reader from the intricacies of education policies into the classroom. Acknowledging the real-life impact and risks of AI solutions, he demonstrates the value of case studies in teaching AI ethics. He argues that case studies help illustrate the impact of insufficient risk awareness, the dangers of privacy risks, lack of transparency and biases in data. Instructively, McSharry shares several case studies and accompanying questions that fellow educators can use in teaching AI ethics.

In the final chapter, Gadosey Pius Kwao, Deborah Dormah Kanubala and Belona Sonna build on the findings of the preceding chapters, including the current state of AI ethics education at African educational institutions. They conclude with a set of priority measures that should be implemented to instill a sense of responsibility in future AI practitioners. Among other things, they suggest that general ethical principles such as those in the UNESCO AI Recommendation are a good starting point but need to be adapted to the respective contexts to promote responsible AI development.

4 Conclusion

Certainly, AI ethics is such a broad topic that no single book could cover it exhaustively. While many concepts, approaches and perspectives on AI ethics have found their way into this book, it was not possible to delve deeply into specific ethical principles such as privacy, data protection and security, nor to discuss the implications of legislative frameworks and the certification of AI products. Future research could, for example, examine the diverse understandings of individual ethical principles such as transparency from different perspectives and explore regional, social and political differences. Furthermore, there is the continuing question of what role ethical concerns play in decisions on direct investment and public funding of AI research, and how they can more effectively promote a responsible AI agenda.

Nevertheless, the different contributions in this book offer a multi-disciplinary and global perspective on AI ethics and touch upon three shared salient themes.

First, AI ethics are not static in terms of either place or time. For establishing a global set of ethical AI principles, it is therefore not only necessary but rewarding to include the diversity of perspectives from all over the world. As of now, there is an imbalance that favors the perspective of the Global North. However, AI ethics are unlikely to become commonly accepted if the policy-makers, businesses and practitioners who develop, use and procure AI solutions are not involved. Moreover, AI ethics keep changing and evolving. Inclusive approaches such as the consultation processes behind the UNESCO AI Recommendation or the Smart Africa AI Blueprint are laudable, but they will not be definitive, because technological possibilities and norms keep changing. A regular multi-stakeholder format could address this by debating and adapting AI ethics based on practical experience. The Global Partnership on AI (GPAI) is one format that has the potential to grow into such an inclusive forum, if it chooses to do so.

Second, institutions of higher education play an important role in shaping AI practitioners who are aware of the risks and ethical dimensions of AI development. This is true for directly relevant programs such as Computer Science but also for related fields, given that AI is a cross-sectional technology. Already today, AI ethics permeate all sectors and levels in that they have become a topic for international government negotiations, business organizations and individual AI developers.

Third, the teaching of AI ethics needs practical application elements. As the authors have shown, ethics are a complex field of study that is intertwined with diverse traditions of thought and local context. They offer concrete recommendations on how to make AI ethics matter. In addition to institutional changes, they propose using real-world examples and case studies in the classroom to illustrate ethical dilemmas and to discuss and discover new ways to mitigate risks. Taken together, the authors add to diversifying the global debate on AI ethics and offer valuable advice to fellow lecturers, students and policy-makers alike.