Applying a principle of explicability to AI research in Africa: should we do it?

Abstract

Developing and implementing artificial intelligence (AI) systems in an ethical manner faces several challenges specific to the kind of technology at hand, including ensuring that decision-making systems making use of machine learning are just, fair, and intelligible, and are aligned with our human values. Given that values vary across cultures, an additional ethical challenge is to ensure that these AI systems are not developed according to some unquestioned but questionable assumption of universal norms, but are in fact compatible with the societies in which they operate. This is particularly pertinent for AI research and implementation across Africa, a setting where AI systems are and will be used, but also one with a history of the imposition of outside values. In this paper, we thus critically examine one proposal for ensuring that decision-making systems are just, fair, and intelligible—that we adopt a principle of explicability to generate specific recommendations—to assess whether the principle should be adopted in an African research context. We argue that a principle of explicability not only can contribute to responsible and thoughtful development of AI that is sensitive to African interests and values, but can also help tackle some of the computational challenges in machine learning research. In this way, the motivation for ensuring that a machine learning-based system is just, fair, and intelligible is not only to meet ethical requirements, but also to make effective progress in the field itself.

Notes

  1.

    Africa is, of course, a vast continent with many different cultures and peoples. While we talk of ‘Africa’ in this paper for ease of reference, we do not deny that within Africa there is great complexity.

  2.

    The guidelines and frameworks surveyed are: the Asilomar AI Principles (2017), the Montreal Declaration for Responsible AI (2017), the General Principles in the IEEE Global Initiative’s ‘Ethically Aligned Design’ (2017), the Ethical Principles in the ‘Statement on Artificial Intelligence, Robotics and “Autonomous” Systems’ of the European Group on Ethics in Science and New Technologies (2018), the principles of the ‘AI in the UK: Ready, willing and able?’ report of the UK House of Lords Artificial Intelligence Committee (2018), and the Tenets of the Partnership on AI (2018).

  3.

    This question is an ethical question. There are, of course, questions about legal accountability and responsibility but we do not attend to them in this paper.

  4.

    For a sample of seminal philosophical work highlighting community within different cultural contexts, see Mbiti (1990) (Kenya, although with a systematic review of other cultures), Gyekye (1987) (Akan, Ghana), Gbadegesin (1991) (Yoruba, Nigeria) and Ramose (2005) (South Africa).

  5.

    This is a tradition that draws on a diverse range of theorists from across the continent, such as Senghor (1988), Mbembe (2001), wa Thiong'o (1986) and Biko (2002).

  6.

    https://moralmachineresults.scalablecoop.org/.

References

  1. Abbott, J., & Martinus, L. (2019). Benchmarking neural machine translation for southern African languages. Proceedings of the 2019 Workshop on Widening NLP.

  2. Andoh, C. (2011). Bioethics and the challenges to its growth in Africa. Open Journal of Philosophy,1(2), 67–75.

  3. Asilomar AI Principles. (2017). Retrieved May 29, 2019 from https://futureoflife.org/ai-principles.

  4. Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., et al. (2018). The moral machine experiment. Nature,563(7729), 59.

  5. Barugahare, J. (2018). African bioethics: Methodological doubts and insights. BMC Medical Ethics. https://doi.org/10.1186/s12910-018-0338-6.

  6. Beauchamp, T. L., & Childress, J. F. (2012). Principles of biomedical ethics (7th ed.). Oxford: Oxford University Press.

  7. Behrens, K. (2013). Towards an indigenous African bioethics. The South African Journal of Bioethics and Law,6(1), 32–35.

  8. Biko, S. (2002). I write what I like: Selected writings. Chicago: University of Chicago Press.

  9. Bostrom, N. (2003). Ethical issues in advanced artificial intelligence. In I. Smit, et al. (Eds.), Cognitive, emotive and ethical aspects of decision making in humans and in artificial intelligence (2nd ed., pp. 12–17). Tecumseh: International Institute of Advanced Studies in Systems Research and Cybernetics.

  10. Brown, B. J., Przybylski, A. A., Manescu, P., Caccioli, F., Oyinloye, G., Elmi, M., Shaw, M. J., et al. (2019). Data-driven malaria prevalence prediction in large densely-populated urban holoendemic sub-Saharan West Africa: Harnessing machine learning approaches and 22-years of prospectively collected data. arXiv preprint arXiv:1906.07502.

  11. Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR, 81, pp. 77–91.

  12. Chukwuneke, F. N., Umeora, O. U. J., Maduabuchi, J. U., & Egbunike, N. (2014). Global bioethics and culture in a pluralistic world: How does culture influence bioethics in Africa? Annals of Medical and Health Sciences Research,4(5), 672–675.

  13. Crawford, K. (2017). The trouble with bias. NIPS 2017 Keynote Address. Retrieved August 19, 2019 from https://www.youtube.com/watch?v=fMym_BKWQzk.

  14. Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature,538, 311–313.

  15. Deep Learning Indaba. (2019). Together we build African AI: Outcomes of the 2nd annual Deep Learning Indaba. Retrieved March 1, 2020 from https://www.deeplearningindaba.com/uploads/1/0/2/6/102657286/annualindaba2018report-v1.pdf.

  16. Department of Transport. (2012). SA learner driver manual: Rules of the road. Pretoria: SA Department of Transport.

  17. Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. Retrieved May 29, 2019 from https://arxiv.org/abs/1702.08608.

  18. European Group on Ethics in Science and New Technologies. (2018). Statement on artificial intelligence, robotics and ‘autonomous systems’. European Commission. Retrieved May 29, 2019 from https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf.

  19. Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., et al. (2018). AI4People: An ethical framework for a Good AI Society: Opportunities, risks, principles, and recommendations. Minds and Machines,28, 689–707.

  20. Friedman, B., Hendry, D., & Borning, A. (2017). A survey of value sensitive design methods. Foundations and Trends in Human-Computer Interaction,11(23), 63–125.

  21. Gbadegesin, S. (1991). African philosophy: Traditional Yoruba philosophy and contemporary African realities. New York: Peter Lang.

  22. Gyekye, K. (1987). An essay on African philosophical thought: The Akan conceptual scheme. Cambridge: Cambridge University Press.

  23. Hellsten, S. (2006). Global bioethics: Utopia or reality? Developing World Bioethics,8(2), 70–81.

  24. House of Lords Artificial Intelligence Committee. (2018). AI in the UK: Ready, willing and able? UK Parliament. Retrieved May 29, 2019 from https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf.

  25. IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2019). Ethically aligned design: A vision for prioritising human well-being with autonomous and intelligent systems (first edition). Retrieved May 29, 2019 from https://ethicsinaction.ieee.org.

  26. Kroll, J. A. (2018). The fallacy of inscrutability. Philosophical Transactions A,376, 20180084.

  27. Landhuis, E. (2017). Tanzania gears up to become a nation of medical drones, npr.org. Retrieved February 28, 2020 from https://www.npr.org/sections/goatsandsoda/2017/08/24/545589328/tanzania-gears-up-to-become-a-nation-of-medical-drones.

  28. London, A. (2019). Artificial Intelligence and black-box medical decisions: Accuracy versus explainability. Hastings Center Report,49(1), 15–21.

  29. Mbembe, A. (2001). On the postcolony. Berkeley: University of California Press.

  30. Mbiti, J. (1990). African religions and philosophy (2nd ed.). Portsmouth: Heinemann.

  31. Microsoft. (2019). Artificial Intelligence for Africa: An opportunity for growth, development and democratisation. Retrieved May 29, 2019 from https://info.microsoft.com/ME-DIGTRNS-WBNR-FY19-11Nov-02-AIinAfrica-MGC0003244_01Registration-ForminBody.html.

  32. Montreal Declaration for a Responsible Development of Artificial Intelligence. (2017). Retrieved May 29, 2019 from https://www.montrealdeclaration-responsibleai.com/the-declaration.

  33. Moodley, K. (2007). Microbicide research in developing countries: Have we given the ethical concerns due consideration? BMC Medical Ethics. https://doi.org/10.1186/1472-6939-8-10.

  34. Murove, M. F. (2005). African bioethics: An explanatory discourse. Journal for the Study of Religion,18(1), 16–36.

  35. Partnership on AI. (2018). Tenets. Retrieved May 29, 2019 from https://www.partnershiponai.org/tenets.

  36. Rakotsoane, F., & Van Niekerk, A. (2017). Human life invaluableness: An emerging African bioethical principle. South African Journal of Philosophy,36(2), 252–262.

  37. Ramose, M. (2005). African philosophy through Ubuntu (Revised ed.). Harare: Mond Books Publishers.

  38. Selbst, A. D., & Powles, J. (2017). Meaningful information and the right to explanation. International Data Privacy Law,7(4), 233–242.

  39. Senghor, L. (1988). Ce Que Je Crois. Paris: Grasset.

  40. Sloane, M., & Moss, E. (2019). AI’s social sciences deficit. Nature Machine Intelligence,1, 330–331.

  41. Steyn, L. (2017). Watch your job, the bots are coming. Mail & Guardian, 8 December. Retrieved May 29, 2019 from https://mg.co.za/article/2017-12-08-00-watch-your-job-the-bots-are-coming.

  42. United Republic of Tanzania Ministry of Health and Social Welfare. (2015). Health Sector Strategic Plan July 2015-June 2020. Retrieved February 28, 2020 from https://www.moh.go.tz/en/strategic-plans.

  43. Wa’Thiongo, N. (1986). Decolonising the mind. Chowchilla, CA: James Curry.

  44. Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Transparent, explainable, and accountable AI for robotics. Science Robotics. https://doi.org/10.1126/scirobotics.aan6080.

  45. Winfield, A. F. T., & Jirotka, M. (2018). Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philosophical Transactions A,376, 20180085.

  46. World Health Organisation (WHO). (2020). Guidelines on reproductive health research and partners’ agreement. Sexual and Reproductive Health. Retrieved February 21, 2020 from https://www.who.int/reproductivehealth/topics/ethics/partners_guide_serg/en/.

Acknowledgements

We would like to thank participants at the Third CAIR Symposium on AI Research and Society, held at the University of Johannesburg in March 2019, for feedback and discussion on an earlier version of the paper. We would also like to thank the two reviewers and editors for this journal for their comments.

Author information

Corresponding author

Correspondence to Mary Carman.

Ethics declarations

Conflict of interest

Benjamin Rosman is one of the founders and organisers of the Deep Learning Indaba that we mention as an example of the growth of machine learning across Africa.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Carman, M., & Rosman, B. (2020). Applying a principle of explicability to AI research in Africa: should we do it? Ethics and Information Technology. https://doi.org/10.1007/s10676-020-09534-2

Keywords

  • Principle of explicability
  • Machine learning
  • Intelligibility
  • Accountability
  • Africa