Abstract
This article introduces readers to the special issue on Selected Issues in the Ethics of Artificial Intelligence. In this paper, I make a case for a wider outlook on the ethics of AI. So far, much of the engagement with the subject has come from Euro-American scholars with obvious influences from Western epistemic traditions. I demonstrate that socio-cultural features influence our conceptions of ethics, and in this case the ethics of AI. The goal of this special issue is to entertain more diverse views, particularly those from Africa; it brings together six articles addressing pertinent issues in the ethics of AI. These articles address topics around artificial moral agency, patiency, personhood, social robotics, and the principle of explicability. These works offer unique contributions for and from an African perspective. I contend that a wider engagement with the ethics of AI is worthwhile as we anticipate the global deployment of artificial intelligence systems.
Notes
With testimonial injustice, we attribute less credibility to a proposition, opinion, statement or knowledge system on grounds of prejudice about the speaker’s gender, race, ethnicity, sexuality, accent, etc. The harm caused by testimonial injustice is that it deprives the speaker or knower of a level playing field on which to express their ideas; those same ideas are often embraced when expressed by another speaker who does not share the conditions that gave rise to the prejudice in the first place. History abounds with examples of the knowledge systems of indigenous peoples being jettisoned, only to be celebrated when appropriated by European or North American anthropologists.
Metz argues that there are unique interpretations of ethics found amongst the people of sub-Saharan Africa. This does not mean that they lay exclusive claims to these types of ideas. Rather, “… it means merely that certain properties have been recurrent amongst many of those societies for a long span of time in a way they have tended not to be elsewhere around the globe” (2017, p. 62).
The moral machine experiment used trolley-problem-like scenarios to gather responses on a variety of ethical decisions. The overall results of the survey showed a few principles shared among respondents regardless of culturally influenced ethical preferences, such as choosing to save many over few; however, these preferences held to varying degrees across geographical regions. For instance, among East Asian (Japan, China) and Middle Eastern (Saudi Arabia) respondents, and unlike among Western respondents, the preference to save younger characters in the scenario was less pronounced. This correlates with the respect for elders found among people of collectivist cultures.
Dignum (2018) offers an insightful way to understand the ethics of AI across three categories. First, ethics by design, which focuses on the technical capacity and integration required to develop autonomous systems that have ethical reasoning capabilities. Second, ethics in design, which addresses the governing guidelines and engineering methodologies required to analyse the ethical implications of autonomous intelligent systems as they become more ubiquitous. And third, ethics for design, which refers to the codes of conduct, guidelines, standards and certifications that guarantee the quality and integrity of the engineers, developers and users who engage with the research and deployment of AI systems.
References
Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., et al. (2018). The moral machine experiment. Nature, 563(7729), 59–64. https://doi.org/10.1038/s41586-018-0637-6
Bell, D. A., & Metz, T. (2011). Confucianism and Ubuntu: Reflections on a dialogue between Chinese and African traditions. Journal of Chinese Philosophy, 38, 78–95.
Birhane, A. (2020). Algorithmic colonization of Africa. SCRIPTed. https://doi.org/10.2966/scrip.170220.389
Chimakonam, J. O. (2017). African philosophy and global epistemic injustice. Journal of Global Ethics, 13(2), 120–137. https://doi.org/10.1080/17449626.2017.1364660
Dignum, V. (2018). Ethics in artificial intelligence: Introduction to the special issue. Ethics and Information Technology, 20, 1–3. https://doi.org/10.1007/s10676-018-9450-z
Feuchtwang, S. (2016). Chinese religions. In L. Woodhead, H. Kawanami, & C. H. Partridge (Eds.), Religions in the modern world: Traditions and transformations (pp. 143–172). London: Routledge.
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford: Oxford University Press.
Hofstede, G. (1991). Cultures and organizations: Software of the mind. London: McGraw-Hill.
Hofstede, G. (2001). Culture’s consequences: Comparing values, behaviors, institutions and organizations across nations. Thousand Oaks: Sage Publications.
Hofstede, G. (2011). Dimensionalizing cultures: The Hofstede model in context. Online Readings in Psychology and Culture, 2(1), 2307.
Korsgaard, C. (1996). Creating the Kingdom of Ends. New York: Cambridge University Press.
McDougall, B. S., & Hansson, A. (2002). Chinese concepts of privacy (Vol. 55). Leiden: Brill Academic Pub.
Metz, T. (2007). Toward an African moral theory. The Journal of Political Philosophy, 15(3), 321–341.
Metz, T. (2013). The virtues of African ethics. In S. Van Hooft (Ed.), The handbook of virtue ethics (pp. 276–284). Durham: Acumen Publishers.
Metz, T. (2016). An African theory of social justice: Relationship as the ground of rights, resources and recognition. In C. Boisen & M. C. Murray (Eds.), Distributive justice debates in political and social thought: Perspectives on finding a fair share (pp. 171–190). New York: Routledge.
Metz, T. (2017). Toward an African moral theory. In I. E. Ukpokolo (Ed.), Themes, issues and problems in African philosophy (pp. 97–119). Cham: Palgrave Macmillan.
O’Neill, O. (1975). Acting on principle. New York: Columbia University Press.
Realo, A. (1998). Collectivism in an individualist culture: The case of Estonia. Trames, 2(52/47), 19–39.
Segun, S. T. (2020). From machine ethics to computational ethics. AI & Society. https://doi.org/10.1007/s00146-020-01010-1
Shutte, A. (2001). Ubuntu: An ethic for the new South Africa. Cape Town: Cluster Publications.
Tam, L. (2018). Why privacy is an alien concept in Chinese culture. South China Morning Post. Retrieved from https://scmp.com/news/hong-kong/article/2139946/why-privacyalien-concept-chinese-culture.
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2017). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems, version 2. IEEE. Retrieved from http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html.
UNESCO. (2019). Elaboration of a Recommendation on the ethics of artificial intelligence. Retrieved from https://en.unesco.org/artificial-intelligence/ethics.
Yao-Huai, L. (2005). Privacy and data privacy issues in contemporary China. Ethics and Information Technology, 7(1), 7–15.
Segun, S.T. Critically engaging the ethics of AI for a global audience. Ethics Inf Technol 23, 99–105 (2021). https://doi.org/10.1007/s10676-020-09570-y