
Decolonizing AI Ethics: Relational Autonomy as a Means to Counter AI Harms


Abstract

Many popular artificial intelligence (AI) ethics frameworks center the principle of autonomy as necessary to mitigate the harms that might result from the use of AI within society. These harms often disproportionately affect the most marginalized within society. In this paper, we argue that the principle of autonomy, as currently formalized in AI ethics, is itself flawed, as it expresses only a mainstream, mainly liberal notion of autonomy as rational self-determination, derived from Western traditional philosophy. In particular, we claim that adherence to this principle, as currently formalized, not only fails to address many ways in which people’s autonomy can be violated, but also fails to grasp a broader range of AI-empowered harms profoundly tied to the legacy of colonization, which particularly affect the already marginalized and most vulnerable on a global scale. To counter this phenomenon, we advocate for a relational turn in AI ethics, starting from a relational rethinking of the AI ethics principle of autonomy, which we propose by drawing on theories of relational autonomy developed both in moral philosophy and in Ubuntu ethics.


Data Availability

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

Notes

  1. To learn more about the ethical mission of the Decolonial AI movement, see its manifesto at https://manyfesto.ai.

  2. Consider, for example, the renowned case of Cambridge Analytica (Rosenberg 2018) or the Facebook experiment on emotion contagion (Kramer et al. 2014).

  3. These data may seem paradoxical if we consider how much more populous such non-Western countries are than Western ones, and how they are the most exploited and affected by AI. Mohamed et al. 2020 provide many examples of such exploitation, ranging from ghost workers’ labor to beta-testing techniques (p. 668).

  4. Since a detailed survey of the perspectives that will be mentioned is beyond the scope of our analysis, i.e., to shed light on the richness of Western moral philosophy on autonomy and on diverse accounts developed within Western ethics beyond those offered by the liberal tradition, we focus on representative samples of these accounts. For their in-depth analysis, see Christman and Anderson (2005).

  5. A widespread idea in AI ethics is that individuals’ identity can be expressed in informational terms (“we are our information”), and that its protection therefore requires informational privacy, i.e., safeguarding human control over personal data and rational decision-making (Floridi 2011).

  6. In addition, an exclusive focus on competencies is also at odds with recent findings in the cognitive sciences (see e.g., Thaler and Sunstein 2009; Kahneman 2011; Simon 1991), according to which individuals rarely choose in optimal conditions, and therefore as rational decision-makers, but rather under conditions of limited cognitive and time resources that very often make them boundedly rational and biased decision-makers.


Author information


Corresponding author

Correspondence to Simona Tiribelli.

Ethics declarations

Conflict of interest

The authors have no relevant financial or non-financial interests to disclose, no competing interests relevant to the content of this article, and no affiliations with or involvement in any organization or entity with any financial or non-financial interest in the subject matter or materials discussed in this manuscript.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Sábëlo Mhlambi and Simona Tiribelli share joint first authorship.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Mhlambi, S., Tiribelli, S. Decolonizing AI Ethics: Relational Autonomy as a Means to Counter AI Harms. Topoi 42, 867–880 (2023). https://doi.org/10.1007/s11245-022-09874-2
