Introduction

I argue that the road to responsible AI ethics in Africa should be paved by regulations driven by epistemically just and dynamic AI ethics systems, rather than only by good intentions. Not only does Africa need dynamic and adaptive AI ethics systems as a critical enabler of progress in fast-moving AI technologies, but it also needs such systems to ensure that Africa is included—and its contribution understood—in the global conversations needed to speak to the transnational nature of Big Tech companies and their combined potential threat to all of humanity.

Some of the main motivations for this argument include the following. First, the fast-changing nature of AI technologies means that regulation always seems out of step. I thus plead for acknowledgement of the dynamic role of AI ethics in alerting humanity to possible harm from AI technologies and in flagging where legal protection is needed. AI ethics, as a system that has its ear to the real world perhaps more immediately and intimately than international law does, seems eminently suited for this role. Of course, the general debate about the priority of ethics vs. the law is an old and complex one, and I regret that I cannot do more here than place a stake for AI ethics in this debate.

Secondly, a motivation speaking to epistemic justice is that Africa has become the ethical dumping ground of the main players on the AI technology scene. Because of weak regulations and other factors, African states are vulnerable to the exploitation of members of poor communities by transnational companies in various ways, from labour exploitation to not being able to determine which data represent Africa and not owning our own data.

A third motivation, straddling both the plea to recognise the dynamic nature of AI ethics and the plea for an epistemically just AI ethics system, is that full adoption of and trust in AI technologies, on the one hand, and realisation of the benefits of AI technologies, on the other, stand in a tight reciprocal relationship, each necessary and sufficient for the other; and in Africa, the potential benefits of AI technologies can introduce real change for many of its inhabitants. Both these reasons speak to the need for AI ethics awareness, sensitivity and literacy. In order to address this need, it is necessary to link AI ethics reflection to the lived world of Africa’s inhabitants and to meet them in their own context.

A final essential motivation in terms of epistemic justice is that Africa has been more or less excluded from global AI ethics debates, and Western terminology and approaches dominate the domain. I argue here that there is a real need to be sensitive to the communal nature of African ethics, to rethink the typical Western-style formulation of AI ethics principles (which in no way implies weakening the protection they are intended to offer), to take context as well as culture into account, and to authentically and effectively integrate ethics with technological innovation in Africa.

In the good company of thinkers such as Aristotle, John Stuart Mill, Jeremy Bentham, Paul Ricoeur, Michael Sandel, Robert Nozick, Amartya Sen and Martha Nussbaum, I subscribe to the view that ethics is the medium through which decisions are made about what the right thing to do is (morality). Thus, a moral problem relates to what the right thing to do is, while an ethical problem relates to how best to consider a solution to a moral problem. This is of course a complex debate in moral philosophy, but not one I can delve into here. Even so, it should be obvious that on this kind of account of ethics there are different ethical systems to choose from in order to engage with moral dilemmas and decisions, not only in terms of the school of thought a system belongs to (deontological, consequentialist, virtue, communitarian, religious/spiritual approaches, etc.), but also in terms of the values that inform these systems. Ultimately, the choice of ethical system is heavily influenced by culture and also by context.

The particular relevance of these two factors when considering the AI ethics domain in Africa is explained and argued for in Sects. 2 and 3. Obstacles in the way of actionable AI ethics range over different dimensions, some relating to political, economic, social or educational contexts (Sect. 2), while others relate to culture (Sect. 3). Building on the arguments in previous sections, I conclude this chapter by suggesting that the fast-changing nature of AI technology and the fluidity of the AI readinessFootnote 1 of any country, together with the role that culture and context play in the adoption of and openness to AI ethics regulations, imply that actionable AI ethics in Africa has to be realised in epistemically just, dynamic, adaptive and agile systems.

The African Context and Actionable AI Ethics

When considering factors impacting adherence to AI ethics policy, both the context within which policies are formulated and the context in which they are to be put into action are relevant for very practical reasons. Factors such as Internet penetration, civil and political stability, quality education and factors as mundane as access to electricity, among many others, all impact both the implementation of AI ethics frameworks and the appetite for adoption of AI technologies and sensitivity to AI ethics concerns. It is clear that such factors range over a “continuum of scientific, technological, economic, educational, legal, regulatory, infrastructural, societal, and other dimensions” (UNESCO, 2020). These factors essentially determine the “AI readiness” of a country in UNESCO terms (ibid.). AI readiness also relates to the novel notion of “national AI capital” (NAIC), suggested by Momčilović (2021) as a country’s “capacity to apply and develop, and cope with the challenges of various artificial intelligence systems, in order to increase the country’s social and economic well-being and competitiveness” (ibid.). This leads to the definition of a third related concept, the AI ethics capital (AIEC) of a country, as “the state of the art multi-disciplinary knowledge, skills, and competencies of individual AI actors, which drive individual AI actors’ ethical habits and inform a country’s AI ethics guidelines; which as such, in their turn, facilitate the creation of personal, social and economic wellbeing as a result of the potential of harmonious and ethical co-existence of humans with technology thus created” (Ruttkamp-Bloem, 2020).

These three notions—AI readiness, National AI Capital and AI Ethics Capital—all imply that AI ethics regulation should be scalable, as all three concepts denote dynamic statuses. This points to a need to engage with the context in which AI ethics regulation is formulated and will be applied, via instruments such as AI ethics impact assessments and readiness methodologies, and above all to be sensitive to the fact that the West, the East and the Global South differ in context in many ways. This difference necessitates sensitivity to what can be expected in each context, while both acknowledging existing AI infrastructure and governance policies and assisting where necessary in culturally respectful ways.

While it is always risky to speak of the African continent as if it were a homogeneous whole, doing so allows me to make some general observations. Africa has the youngest population of all continents—a median age of 19.7 years at presentFootnote 2—and it is clear that while this status brings immense social and economic opportunities, it also brings urgent social, moral, legal and economic responsibilities. Furthermore, in terms of ethical concerns around AI technology, we should not make the mistake of thinking that this young population necessarily implies a high level of either AI ethics literacy or protection via AI ethics regulation. It must be acknowledged that while “generation Z” is a technology-savvy generation, this brings its own concerns: we do not yet know the full scope of the impact of growing up in a technologically driven world in the way this generation has (see, e.g., UNESCO, 2020), and given the socio-economic and political instability in some African states, this impact has to be closely monitored so that it does not exacerbate existing inequality or other potential forms of social harm.

In addition, the potential for good that AI technology holds for Africa is huge, but it can only be fully realised if AI technology is trusted: full adoption of technology only happens in contexts where there is trust in technology, and only full adoption guarantees economically successful and socially supportive AI ecosystems, as alluded to already. In this sense, African states seem to face a serious moral dilemma with regard to their AI and AI governance approaches: “Do they go all out and become a global role player with the eye on economic gains that AI offers? Or do they take time to stop and think about the social and ethical impact on vulnerable groups in their communities?” (Ruttkamp-Bloem, 2021). To some extent, however, I suggest this is a false dichotomy, as the moral challenge is more complex. It is necessary to have participative inclusion (UNESCO, 2020) during all phases of the AI systemFootnote 3 lifecycleFootnote 4 (ibid.) of as many different cultural, ethnic, socio-economic, age, gender and other groups as possible, such that the mechanisms, products and benefits of AI technologies belong to all. This means that states cannot in fact secure or guarantee economic gain from AI technologies without concerning themselves with all groups in their societies, ensuring that members of all groups are both protected against possible harm from AI systems (which includes addressing issues of structural harm that may be amplified by data-driven AI technologies) and enabled to actively participate in AI ecosystem building in the particular state.

Some specific important challenges in the context of AI governance include the issue of the exclusion of Africa from so-called global AI debates; the level of AI adoption and successful implementation of AI governance policies (impacted on by many factors that are not uniform across the continent, e.g., Internet penetration on the continent,Footnote 5 effectiveness and agility of legal processes, etc.); AI ethics, information and communication literacy (e.g., access to equal STEAM—STEM plus the Arts—education for all, etc.); and lastly, the collectivist/individualist clash between African and Western ethical traditions.

All of these merit further discussion, but here I will just highlight a few issues. First, when we consider the issue of the exclusion of Africa from “global” AI debates,Footnote 6 it should be clearly understood that we are dealing here with issues of epistemic injustice that cut across both hermeneutic and testimonial injustice in Miranda Fricker’s (2007) sense. What is interesting here is not just that the exclusion of African academics and AI practitioners is a form of epistemic injustice, but that it may in fact be contributing to general harm in terms of fairness and bias concerns, as this exclusion is the product of deeper structural injustice in the West and globally. In particular, given that a person’s social positioning, influenced by factors such as race, gender and class, determines what knowledge they have access to as well as how the mechanisms with which they ascribe social meaning and gain knowledge are developed,Footnote 7 I think it probable that epistemic injustice in the sense of the exclusion of Africa and Africans from global debates on AI ethics, on the one hand, contributes to what Kate Crawford (2017) calls allocation and representational harms and, on the other hand, is exacerbated as a result of such harms, in a concerning kind of feedback loop.

Testimonial injustice occurs when a hearer “awards a speaker’s claims less credibility than it deserves because of a prejudice that the hearer holds towards the speaker based on operations of power that come about as a result of given social identities” (ibid.). In its turn, hermeneutic injustice is the “withholding from a certain social group the proper tools with which to make sense of or articulate social experiences which prohibits [members of such groups] from functioning adequately as equal agents in society” (ibid.). If one considers that representational harm from AI systems is a cultural and social harm, which occurs when “systems reinforce the subordination of some groups along the lines of identity; so that's race, class, gender, etc.” (Crawford, 2017), and that the primarily economic and transactional harm of allocation harm occurs “when a system allocates or withhold[s] certain groups an opportunity or resource” (ibid.), one can perhaps get a glimpse of the scope of the harm to Africa and its people brought about by excluding them from global AI conversations.

I claim that a social systems analysis of the kind advocated by Crawford (e.g., ibid.; Crawford & Calo, 2016) and others such as Campolo et al. (2017) may very well show that there is a feedback loop specifically between testimonial injustice and representational harm and between hermeneutic injustice and allocation harm (especially in the latter case if allocation harm is also seen in terms of access to equal quality education). It is because of being excluded, of epistemic injustice on a grand scale and of resulting exclusionary practices in the tech community that identity prejudice feeds so easily into the harms Crawford (2017) identifies. The point I am making here is that excluding Africa from global discussions specifically in AI, given the potential of data-driven AI for amplifying structural bias, unfairness and exclusion, does far more harm than simply ensuring AI technology stays in the hands of the North. This must be acknowledged and actively combatted by advocating for inclusive, international and diverse tech teams, making travel between Africa and the North easier for AI practitioners, inviting speakers from Africa to global AI forums and acknowledging the existing skills and expertise in Africa, among other initiatives.

One of the biggest contributing factors relating to the above is that work done in Africa is not adequately recognised. Of course there are unique challenges in Africa, but nevertheless there is a lot to be excited about in terms of AI development on the continent. This is illustrated by many initiatives, such as the courses offered at the African Institute for Mathematical SciencesFootnote 8 and the various evening and weekend classes, AI boot camps and industry-sponsored innovation hubs at many African universities. Then there are initiatives such as, to name just a few, Data Science Africa,Footnote 9 the Deep Learning Indaba,Footnote 10 the Masakhane Natural Language Processing community,Footnote 11 the IBRO-SIMONS Computational Neuroscience group Imbizo,Footnote 12 the Sisonke-Biotek grassroots-focused research initiative at the interface of machine learning and healthcare,Footnote 13 and also government-sponsored programmes such as Rwanda’s digital ambassador programme.Footnote 14 There are many private sector technology actors in Africa too—two examples of big AI technology exhibitions are East Africa COMFootnote 15 and AI EXPO Africa.Footnote 16 Then there are academic initiatives such as the Responsible AI Network of the Technical University of Munich and the Ghanaian Kwame Nkrumah University of Science and Technology,Footnote 17 and the Department of Science and Innovation’s Centre for AI Research in South Africa,Footnote 18 among many others.

Second, the concern around full AI adoption and successful implementation of AI governance policies relates, among other factors, to the practice of “ethics dumping”, a term introduced by the European Commission (Ruttkamp-Bloem, 2021) and first applied in the AI ethics context by Floridi (2019). Basically, this refers to the practice of transnational tech companies moving core operations to countries with weak AI governance regulation. The reason this is such a strong threat is that few African states have national AI strategies. Key components of such strategies should include at least improved telecommunications infrastructure in Africa to increase Internet penetration, adequate AI regulation and, perhaps most importantly, the establishment of enabling and collaborative AI environments (Pillay, 2020). Even fewer countries have national strategies to address social and ethical concerns around AI technology. Key obstacles in formulating such strategies include a lack of sufficient research into re- and up-skilling to offset potential job losses; a lack of equal access to quality education focused on STEAM teaching, that is, STEM teaching with at least AI ethics, information and communication literacy included; reaching every member of society, across socio-economic, age and gender divides; actively ensuring inclusion in global debates; and both ensuring the quality of data and addressing the “data desert” in Africa (ibid.). With regard to the latter, the private sector is playing an ever more crucial role in many African AI ecosystems (see the UNECA Africa Data Revolution Report, 2016).

Furthermore, in its recent AI Needs Assessment Survey in Africa 2021 (Sibal et al., 2021), UNESCO states that “Member States have requested UNESCO’s support for standard setting, policy advice, capacity building, network development and for addressing gender equality-related concerns in the development and use of AI” (ibid.). In summary: 32 countries requested UNESCO’s support for building human and institutional capacities in AI-related domains in its fields of competence; 26 countries requested policy advice for the development of aspects of AI policy concerning education, sciences, culture, and communication and information; 21 countries requested support from UNESCO in terms of setting standards; 27 countries requested support in building partnerships for the development and use of AI to help them achieve their developmental priorities; and 17 countries requested support in addressing gender equality-related concerns in the development and use of AI (ibid.).

The need for enhancing capacities for AI development and for the implementation of ethical AI governance is thus widely recognised, rather than downplayed, in Africa. The need for AI literacy is also widely recognised, if not always immediately or uniformly addressed. One success story in addressing this issue is the Rwandan Digital Ambassadors Programme,Footnote 19 a government-funded and government-driven initiative that brings digital literacy to rural areas by sending graduate students and young entrepreneurs into communities to provide digital literacy training in local languages, focusing on locally relevant digital content and services (http://www.hsrc.ac.za/en/news/impact-centre/African-AI). Data Science Nigeria has developed an elementary school textbook on the nature and role of data in African lives, focused on children being taught skills in contexts they know. These examples illustrate the importance of intentional capacity building and public interest-driven AI approaches in Africa (e.g., AI for social good and AI for development initiatives focused on food security in agriculture, or on the distribution of medical suppliesFootnote 20), combined with the need to understand existing infrastructure and to be realistic about what is possible. Being focused on African problems and needs also feeds back into the discussion above on trust and adoption of AI technology being key to the success of this technology.

There are many factors in Africa that impact the possibility of establishing the trust I referred to in the Introduction as an essential ingredient for the successful realisation of the benefits of AI technologies for all. From the above, it is clear that these fall into different categories: (i) socio-political factors, ranging from epistemic injustice and being at the receiving end of structurally biased and non-transparent AI systems to religious and cultural concerns; (ii) literacy and digital inequality concerns, which may lead to feelings of helplessness or even despair, especially where there are other hardships to face, such as political instability and lack of access to clean water, electricity and education; and (iii) concerns around labour exploitation, job loss and access to up-skilling. Overall, there is also a shared concern about the implications of AI technology for cultural diversity, which brings me to the next section.

Towards an African AI Ethics

Perhaps more than in any other culture, African ethics is deeply rooted, on the one hand, in societal beliefs about what is morally right and wrong and, on the other, in the behaviour society deems appropriate to bring about social justice and harmony (see, e.g., Gyekye, 2011). In the West, ethics is not as entangled with societal thinking, as the approach is individualistic rather than collectivist. This difference in approach to the role and nature of ethical systems should be acknowledged if one wants to speak at all of any kind of global AI ethics regulation, but also, in particular, if one wants to establish an effective AI ethics paradigm or domain in Africa.

Furthermore, the typical human rights approach to AI ethics dominant in the West may not sit naturally in Africa, as much of African political thought focuses on duties (responsibilities) rather than rights, or at least recognises that the rights vs. duties debate is central to African political thinking. This debate rests on the African notion of personhood, a complex moral-political concept in African philosophy which needs to be taken into account when thinking of AI ethics regulation in Africa. Gyekye (1997, p. 2) differentiates between metaphysical and moral perspectives on the notion of personhood in relation to the social structuring of societies. The metaphysical perspective concerns questions such as whether a person is an “atomic, self-sufficient individual” (ibid.), and thus the “ontological priority” of the individual over the community vs. her communal nature (ibid.). Moral questions focus on the nature and status of individual rights, the place and role of duties, and the nature and role of a sense of “shared life or common (collective) good” (ibid.).

There is a contrast between the Western acknowledgement of values such as autonomy, freedom and dignity as belonging to individuals and the African ethical tradition, according to which individuals depend on society for their very status of personhood and their general well-being (ibid., pp. 1–2). This difference also carries over to the notion of individual rights in the different traditions (see also, e.g., Ake, 1987; Deng, 2004; Metz, 2011; Molefe, 2019). In the African tradition, the relation between an individual and society is determined by a community of people (Gyekye, 1997, p. 2). Thus, the “communal structure” (ibid., p. 3) of African societies is the core characteristic of the social structure of African cultures (see also, e.g., Masolo, 2004; Tshivhase, 2015; Matolino, 2018; Metz, 2018). In this sense, one’s status as an individual, one’s very uniqueness, is only a secondary quality, as one is “first and foremost … several people's relatives and several people's contemporary” (Gyekye, 1997, p. 2).Footnote 21

In the AI ethics context, this communal aspect of African societies should be taken seriously: there are many (e.g., Raso, 2018; Latonero, 2018) who view AI ethics regulation through the lens of human rights, and the communal structure of African societies implies that this individualistic approach may not make sense in Africa. Let me be clear: I am certainly not implying that human rights are not respected in Africa, or should not or cannot be, but I am cautioning that there needs to be sensitivity to how the concept of individual (and by implication human) rights is interpreted in Africa, and therefore to how AI ethics regulations are formulated. In the global context, the actionability of AI ethics regulation depends on a number of factors, such as, among others, the divide between the goals of members of the tech world and abstract ethical guidelines (see, e.g., Mittelstadt, 2019; Jobin et al., 2019; Hagendorff, 2020), and certainly those factors apply in the African context too. But I urge here the additional acknowledgement of the potential impact of the recognition of cultural language and traditions on AI ethics and actionability in general, and in Africa in particular.

The naturalist approach to rights depicts rights as being “held simply by virtue of being a person (human being)” (Donnelly, 1982b, p. 391) and is based on individualistic moral and political frameworks (Gyekye, 1997, p. 33). But if it is taken into account that the concept of personhood is at the heart of the debate on rights vs. duties in African communitarian theories (Molefe, 2019, p. 147), it becomes clear that more nuance is necessary (e.g., Donnelly, 1982a).Footnote 22 In other words, the “thick” view of rights, according to which rights are prior to duties (e.g., Griffin, 2009), is not necessarily naturally the African view, as on the latter account rights should not naturally trump cultural, moral and political grounds for action (Molefe, 2019, p. 147), but are rather related to human dignity in more complex causal relationships (ibid., p. 152). There is in fact a continuum of views on rights in the African literature, ranging from thinkers such as Ake (1987), who claims that there are no individual rights in African moral and political thought, only communal duties, through more moderate views such as Gyekye’s (1997), in which rights and duties are not in a one-to-one correlative relationship but are nevertheless on an equal moral footing and in a mutually dependent relationship, to African scholars who accept individual rights into African political theories to varying degrees (e.g., Wiredu, 1997; Metz, 2011; Matolino, 2018). I will here briefly consider Gyekye’s moderate communitarian approachFootnote 23 to this debate, with some reference to other views, such as Molefe’s. It is impossible to do justice here to the richness of this discussion in the African literature, and I can do no more than alert readers to its overall importance, and specifically its importance in the context of actionable AI ethics for Africa.Footnote 24

On Gyekye’s moderate communitarian view, the community and the individual are ascribed “equal moral status” (ibid., p. 9), as the one cannot exist without the other. Such a moderate view ties to a specific interpretation of the notion of the common good, which differs in important ways from the Western or individualistic interpretation of this notion. The Western understanding of the common good as “the aggregate of the particular goods of individual persons, which, like individual rights, ought to be respected” (ibid., p. 13) implies not only the prioritisation of the individual, but also that the individual’s value system does not depend on and may be totally different from that of their community. On this view, “the pursuit of a common good in an individualistic society will do violence to the autonomy and freedom of the individual and fetter her ability to choose her own good and life plans. But not only that: … the pursuit of the common good will result in intolerance of other conceptions of the good and inappropriate use of political power to realise the common good” (ibid.).

If one now considers that even Gyekye’s moderate view of communitarianism still implies that “communal life is not optional” (ibid., p. 5) for an individual, and that her personhood is “constituted by the social relationships she finds herself in” (ibid.) because of her “natural sociality” (ibid.), it becomes clear that the Western view of the common good, and ultimately of rights, cannot just be adopted as it is and should not just be “domesticated” (Molefe, 2019, p. 148) to fit the African context. Rather, in-depth reflection is needed on how these notions are interpreted and applied in African moral and political theories. On Gyekye’s account, the individual cannot be ontologically prior to society, as this would imply not only that the individual’s choice to join a community is optional, but also that the forming of communities is contingent on such decisions, which is in opposition to the notion of our natural sociality (ibid., pp. 5–6). It must furthermore be understood that cultural values are inherited from the social structure of the community and cannot be generated by the individual (ibid., p. 7), as the individual can only realise their full potential as a member of a society.

On Gyekye’s (ibid., p. 14) view then, in contrast to the Western view, the common good “literally and seriously means a good that is common to individual human beings—at least those embraced within a community … It is linked, … to the concept of our common humanity and, thus, cannot consist of, or be derived from, the goods or preferences of particular individuals; thus, the common good is not a surrogate for the sum of the different individual goods [as in the West]”, because if it were, it would suggest that the “common” can only be contingently realised (ibid.). Rather, the common good includes moral or political values that are “embracive of fundamental or essential goods” (ibid.) such as dignity, peace and respect. “The common good can, thus, be regarded as that which inspires the creation of a moral, social, or political system for enhancing the well-being of people in a community generally” (ibid., p. 15).

In this context, Gyekye (ibid., p. 34) formulates his moderate view of rights as belonging to individuals and contributing to their self-realisation, even though the exercise of these rights by individuals is still best done within a social framework in which communal values such as compassion are more important than individual rights. Gyekye (ibid., p. 36) writes: “… even though rights belong primarily to individuals, insofar as their exercise will often, directly or indirectly, be valuable to the larger society, their status and roles will nevertheless (have to) be recognized by communitarian theory”. To deny this would be “sawing off the branch” on which communitarianism sits (ibid.). Society acknowledges social values such as “peace, harmony, stability, solidarity, and mutual reciprocities and sympathies” (ibid., p. 37), and no individual may exercise their rights in a way that compromises these values (ibid.); thus the claims of “individuality and community ought to be equally morally acknowledged” (ibid., p. 38).

Thus, individual rights are only valid, or only have meaning, within human society. This means that such rights come with social “responsibility” (duties) (ibid., p. 38)—where “responsibility” refers to “a caring attitude or conduct that one feels one ought to adopt with respect to the wellbeing of another person or other persons” (ibid., p. 39)—and thus Gyekye is defending a duty-based view of rights on which rights and duties (“responsibilities”) are closely related (ibid., p. 38). This is the case because the “relational character of the individual by virtue of her natural sociality immediately makes her naturally oriented to other persons with whom she must live. Living in relation to others directly involves an individual in social and moral roles, obligations, commitments, and responsibilities, which the individual must fulfil” (ibid., p. 39), as the communitarian notion of the common good implies that the individual should always do what is best for the community. This in turn implies that moral duties/responsibilities are “elevated” to the level of rights (ibid., p. 40).

If we briefly turn to the view of Molefe, he points out that the African idea of personhood is "grounded on a different ethical sensibility than that which informs the discourse on rights" (Molefe, 2017) and that acknowledging this is core to understanding the debate on rights vs. duties in African communitarian theories (Molefe, 2019, p. 147). "The idea of personhood, … envisages an other-regarding morality of duties. It is these (other-regarding) duties [virtues], I submit, that take priority even over rights" (ibid.). This implies that the moral currency in the African context is different from that of the West: rather than making each individual member of a community a moral concern, the community itself is the moral concern in African communitarianism.

On the Western "minimalist" account of rights (e.g., Griffin, 2009), human rights "… function to protect our normative agency, which requires autonomy (the ability to choose one's own ends), liberty (freedom from coercion and manipulation) and welfare (provision of basic needs to be able to lead a human life like education)" (Molefe, 2019, p. 157). Molefe's "maximalist" duty-based view of rights stands in direct contrast to this minimalist account, as it is based at its core on the "ineliminable residue of human dependency" (Wiredu, 1998, p. 293), which rests on the recognition of our shared humanity rather than on differences between individuals (Molefe, 2019, pp. 158–160). The maximalist conception of personhood is based on a social or relational ethics (ibid., p. 160), which is "an ethics motivated by the needs and interests of others" (ibid.), and in this sense echoes Gyekye's (1997) insistence on an ethics of sensitivity or care towards others and their needs. "The basis of these other-regarding virtues is the spontaneous human capacity to recognise human needs" (Molefe, 2019, p. 160). The needs at issue here are the basic needs required to live a life of dignity. Such a life is only possible if an individual can attain personhood, which depends on living in a society in which the conditions for pursuing and attaining personhood are good (ibid., p. 163). Masolo (2004, p. 494) speaks of this duty to provide the basic goods for all in terms of the "economy of affection".

Given this brief introduction to African thinking on the rights vs. duties debate (necessarily superficial given the wider scope of this article), at least two considerations become clear for assessing the nature and actionability of AI ethics in Africa. Firstly, some of the most influential views of African ethics claim that it has a social and duty-based character distinct from the individualistic rights-based character of Western ethics. Secondly, the manner in which AI ethics regulations are phrased should take into account the cultural embeddedness of any action they require.

In terms of AI ethics, culture matters because for a global (or any other) AI ethics policy to be successful, it should belong to every role-playerFootnote 25 as an active participant (Ruttkamp-Bloem, 2020). Among other things, this means that the terms in which such policies are formulated must be familiar and acceptable to everyone at their receiving end, whether user, researcher, developer or deployer of the technology at issue. Implementation and negotiation can only happen if role-players can understand each other and if every role-player feels heard and recognised as a credible and valid participant.Footnote 26

Returning to our conversation on Africa, Hagerty and Rubinov (2019) write that "AI is likely to have markedly different social impacts depending on geographical setting. Likewise, perceptions and understandings of AI [and addressing its disruption and successfully mitigating its potential harm] are likely to be profoundly shaped by local cultural and social context". To conclude this section, I now briefly consider, in concrete terms, the difference between Western and African ethical values and principles that may credibly drive AI ethics regulation, in order to illustrate the importance of cultural nuance.

In terms of general African ethical values, Gyekye (1997, p. 40) identifies values such as peace, dignity, compassion, solidarity, reciprocity, cooperation, interdependence and social well-being as principles of communitarian morality, which impose “responsibilities [duties] on the individual with respect to the community and its members” (ibid). These values are for instance clearly taken up by the Masakhane natural language processing organisation, which is a South African grassroots organisation whose mission is “to strengthen and spur NLP research in African languages, for Africans, by Africans” (https://www.masakhane.io/). Their list of principles includes: “Umuntu Ngumuntu Ngabantu”—“loosely translated from isiZulu and meaning ‘a person is a person through another person’ or ‘I am because you are’. This principle [of Ubuntu] proposes relationality over individualism for stronger social cohesion towards sustainable communities” (ibid.). Other principles include African-centricity, ownership (of NLP research processes), openness, multidisciplinarity, kindness (towards members of the NLP community), responsibility (taking ethical impact of research seriously), data sovereignty (“Africans should be able to decide what data represents our communities globally, retain ultimate ownership of that data, and know how it is used” (ibid)), reproducibility and sustainability.

If we now compare the Western context, where individualism, autonomy and the human rights of individuals dominate as general values, we can clearly see how culture shapes the AI ethics context. In general, AI ethics values and principles in documents generated in the West include some mention of human rights and human dignity, inclusiveness, flourishing of individuals and societies, autonomy, explainability, transparency, fairness and non-discrimination, awareness and literacy, responsibility, accountability, good governance, sustainability, robustness, privacy, solidarity and trust (see, e.g., Jobin et al., 2019).

Even though the Masakhane project is focused on NLP and the values listed for the West come from broad AI ethics documents, there is clearly some overlap of values: if not directly (e.g., sustainability being mentioned from within different cultures), then by implication, in terms of at least some shared sentiment (e.g., solidarity and Ubuntu, or openness and inclusivity). There may well be values that stand alone as very context-dependent, but which are nonetheless universally comprehensible, such as African-centricity; and others that overlap in sentiment, but whose description has a strongly contextual slant, such as the interpretation of data sovereignty in the Masakhane case. In essence, however, it should be clear that taking culture seriously in the formulation of AI ethics values and principles need not make for incomprehensible differences. Rather, it makes for better understanding of, and respect for, the community that is on the receiving end of AI ethics regulation, and it also ensures better potential adherence, as regulation is formulated in familiar terms. This is why AI ethics policy should be formulated concretely in such a way that different communities can, in principle, find affinity with the manner in which its values and principles are expressed.

As a brief example, in the current version of the UNESCO Recommendation, there is a value called "Living in peaceful, just and interconnected societies" (UNESCO, 2021). In the first draft of the Recommendation, this value was referred to as "Living in harmony" (UNESCO, 2020), and the idea behind it was to incorporate principles from the African philosophy of Ubuntu and Eastern philosophies such as Buddhism and Taoism, in order to demonstrate concretely the global character of the Recommendation. This value was not simply focused on inclusion and diversity, or on respecting human rights, and even though it incorporates the essence of what is meant by solidarity, it is still different and needed to be expressed in its own terms. Looking at the current version (UNESCO, 2021), it is clear that while a solid compromise was reached after negotiation, a fair measure of cultural content has unfortunately been lost in translation.

This means we have a long way to go in learning how to practise cultural respect without imagining that expressing a value in non-Western terms would necessarily mean transgressing International Law, or that such a value would somehow not be powerful enough. Embracing culture does not mean that the meaning or interpretation of values and principles is open or relative to absurd degrees, and it certainly does not mean that International Law is not complied with. It simply means that there has to be engagement, in epistemic just circumstances, with every member of every community on the receiving end of AI ethics regulation, to ensure that the meaning ascribed to values is articulated clearly and communicated as much as possible in terms that have synergy with the culture at issue. This will obviously improve sensitivity to AI ethics and adherence to the resulting regulation. This brings us to some concluding remarks.

Conclusion

I conclude that in order to ensure the actionability of AI ethics regulation in Africa, AI ethics should be realised in epistemic just and dynamic systems driving AI policymaking. Reasons for advocating the dynamic nature of AI ethics as a system include the fast-changing nature of AI technology, the need to take inter-, trans- and multi-disciplinary research seriously, the dynamic status of, and differences between, contexts of application in terms of concepts such as AI (ethics) readiness and AI Ethics National Capital, and the fact that this system should be accessible to various cultures. In turn, reasons for defending an epistemic just AI ethics system lie in the simple fact that culture matters in conversations on ethics. Listing un-interpreted values as abstract concepts alienates the members of the tech community, but expressing such concepts from within one dominant culture in fact alienates entire countries or even continents. There should be engagement with different interpretations such that ascribed meaning can be articulated and communicated (always within the confines of International Law).

The way to meet this challenge is not to turn to hegemonic forms of discourse based on epistemic and other injustices as the easy way out, but to work to have cultural difference actualise the dynamics of ethics for a shared human goal.Footnote 27 There is thus an urgent need for a protocol for cultural engagement in AI ethics discussions. While human rights may be the lens through which many in the West consider AI ethics, culture should perhaps be the global calculus for AI ethics, in the sense of being the source for interpreting AI ethics and translating it into familiar terms for each community. This may help make trust in technology tangible because, in such a scenario, trust would be constructed from the bottom up. The point made here is that cross-cultural understanding and collaboration on the one hand, and respect for socio-economic context on the other, may ensure that the "responsibility of solidarity with the least advanced to ensure that the benefits of AI technologies are shared" (UNESCO, 2021) that lies with the most technologically advanced countries can be successfully taken up. Interestingly enough, the African philosophy of Ubuntu may be the very golden thread needed to knit the world into a more equal future and allow for a sustainable global AI ethics narrative, focusing such a narrative on living in harmony, on shared human values and on the ultimate interconnectedness of all humans.Footnote 28