
ACESOR: a critical engagement in systems of oppression AI assessment tool

  • Original Research · Published in AI and Ethics (2024)

Abstract

Research in AI ethics and fairness has examined the impact of AI on society broadly and extensively. Much of this work, however, has not engaged critically with systems of oppression, limiting our understanding of why AI has the impacts it does. This paper introduces the Assessment of Critical Engagement in Systems of Oppression in Research (ACESOR) rubric, an assessment tool that helps researchers bridge this gap by guiding critical engagement. Interviews were also conducted with experts who engage with systems of oppression in their work to gather feedback on the field's current state, barriers to critical engagement, and the rubric and its use. According to the experts, the field is producing strong work overall, but more must be done to increase critical engagement, and some of the necessary changes must occur at the systemic level. The rubric is a valuable tool for researchers and practitioners, but it is not a single solution. This paper introduces the ACESOR rubric, highlights the expert feedback, and provides an example of how the rubric could be used, with the goal that the rubric will push the field toward deeper critical engagement.


Data availability

Not applicable.

Notes

  1. The term "meaningful engagement" is used interchangeably with "critical engagement" in this context.


Acknowledgements

I would like to thank my advisor, Dr. Veronica Cateté, and my dissertation committee members, Dr. Tiffany Barnes, Dr. Thomas Price, and Dr. Kanton Reynolds, for their feedback in helping me revise this manuscript. I would also like to thank my colleague Lauren Alvarez for her feedback in revising this manuscript.

Funding

The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.

Author information

Corresponding author

Correspondence to Zari McFadden.

Ethics declarations

Conflict of interest

The authors have no relevant financial or non-financial interests to disclose.

Consent to participate

Informed written consent was obtained from all individual participants included in the study.

Consent for publication

Written consent was obtained from all individual participants included in the study to publish their interview feedback.

Ethics approval

The study was given Exempt status by North Carolina State University IRB under #25998.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below are the links to the electronic supplementary material.

Supplementary file1 (CSV 4 KB)

Supplementary file2 (PDF 747 KB)

Appendices

Appendix 1: Interview questions

  1. What are your personal thoughts about the state of AI ethics and fairness research? What strengths and weaknesses have you noticed in the research?

  2. Do you think AI ethics and fairness researchers should engage with systems of oppression in their work? Why or why not?

    • If so, how do you think these systems should be engaged with?

  3. The list of systems of oppression that the research focused on (capitalism/classism, racism/white supremacy, patriarchy/sexism, ableism, and colonialism/imperialism) was discussed, and participants were then asked:

    • What are your thoughts on this list?

    • Do you think there is anything I should add or remove?

  4. What do you think is most important to include in a rubric that aims to help researchers critically engage with systems of oppression?

  5. What do you think about the subsections and questions in the rubric? (For this question, the rubric was shared with participants either via screen share or a link they could open on their personal device.)

    • Is there anything you think I should add or remove?

  6. What do you think would be barriers to having researchers engage more critically with systems of oppression in their work?

  7. What do you think could be done to address these barriers on an individual or systemic level? Please specify which.

  8. What would critical engagement with systems of oppression in research look like to you?

  9. Is there anything not covered or mentioned that you would like to add?

  10. Is there anyone you would like to recommend to participate in a workshop once the rubric is complete? (individuals, mailing lists, people you know, etc.)

Appendix 2: ACESOR rubric sheets

The individual sheets of the ACESOR rubric are shown in Figs. 3 through 15 (Rubric sheets 1 through 13).

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

McFadden, Z. ACESOR: a critical engagement in systems of oppression AI assessment tool. AI Ethics (2024). https://doi.org/10.1007/s43681-024-00478-7

