Abstract
In response to calls for greater interdisciplinary involvement from the social sciences and humanities in the development, governance, and study of artificial intelligence systems, this paper presents one sociologist’s view on the problem of algorithmic bias and the reproduction of societal bias. Discussions of bias in AI cover much of the same conceptual terrain that sociologists studying inequality have long understood through more specific terms and theories. Concerns over reproducing societal bias should be informed by an understanding of the ways that inequality is continually reproduced in society: processes that AI systems are either complicit in or can be designed to disrupt and counter. The contrast presented here is between conservative and radical approaches to AI, with conservatism referring to dominant tendencies that reproduce and strengthen the status quo, and radicalism to approaches that work to disrupt systemic forms of inequality. The limitations of a conservative approach to racial bias are discussed through the specific example of biased criminal risk assessments and Indigenous overrepresentation in Canada’s criminal justice system. This example illustrates the dangers of treating racial bias as a generalizable problem and equality as a generalizable solution, emphasizing the importance of considering inequality in context. Societal issues can no longer be out of scope for AI and machine learning, given the impact of these systems on human lives. This requires engagement with a growing body of critical AI scholarship that goes beyond biased data to analyze structured ways of perpetuating inequality, opening up possibilities for interdisciplinary engagement and radical alternatives.
Acknowledgements
An earlier draft of this article is available on arXiv: 2007.08666 [cs], https://arxiv.org/abs/2007.08666.
Funding
Research for this article was supported by funding from the University of British Columbia.
Ethics declarations
Conflict of interest
The author declares no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Zajko, M. Conservative AI and social inequality: conceptualizing alternatives to bias through social theory. AI & Soc 36, 1047–1056 (2021). https://doi.org/10.1007/s00146-021-01153-9