
(Some) algorithmic bias as institutional bias

  • Original Paper
  • Published in Ethics and Information Technology

Abstract

In this paper I argue that some examples of what we label ‘algorithmic bias’ would be better understood as cases of institutional bias. Even when individual algorithms appear unobjectionable, they may produce biased outcomes given the way that they are embedded in the background structure of our social world. Therefore, the problematic outcomes associated with the use of algorithmic systems cannot be understood or accounted for without some kind of structural account. Understanding algorithmic bias as institutional bias in particular (as opposed to other structural accounts) has at least two important upshots. First, I argue that the existence of bias that is intrinsic to certain institutions (whether algorithmic or not) suggests that at least in some cases, the algorithms now substituting for pieces of institutional norms or rules are not “fixable” in the relevant sense, because the institutions they help make up are not fixable. Second, I argue that in other cases, changing the algorithms being used within our institutions (rather than getting rid of them entirely) is essential to changing the background structural conditions of our society.


Notes

  1. The fact that the data used in algorithmic systems (either as inputs or as training data) might be biased in some way has been clearly documented. For example, these data may suffer from: representation bias, which occurs because of the way an algorithm or algorithm designer might define the sample population; measurement bias, which occurs because of choices by an algorithm or algorithm designer about how to choose, utilize, and measure particular features; historical bias, which consists of “already existing bias and socio-technical issues in the world” that appear in the data “even given a perfect sampling and feature selection”; along with many others (Mehrabi et al., 2019).
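
For a concrete sense of what representation bias amounts to in practice, here is a minimal, purely illustrative sketch (the group labels, proportions, and collection probabilities are invented, not drawn from Mehrabi et al. or any real dataset): a data-collection process that reaches one group less often produces a training sample whose composition misrepresents the population the algorithm will later serve.

```python
# Toy illustration of representation bias (all numbers invented for illustration).
# The population is split evenly between two groups, but the collection process
# retains group B members far less often, so the resulting "training sample"
# underrepresents group B.
import random

random.seed(0)

population = ["A"] * 5000 + ["B"] * 5000   # true population: 50% A, 50% B
keep_prob = {"A": 0.9, "B": 0.3}           # group-dependent chance of being recorded

def biased_sample(pop, n):
    """Draw individuals at random, but keep each one only with a
    group-dependent probability (mimicking uneven data collection)."""
    sample = []
    while len(sample) < n:
        person = random.choice(pop)
        if random.random() < keep_prob[person]:
            sample.append(person)
    return sample

sample = biased_sample(population, 2000)
share_b = sample.count("B") / len(sample)
print(f"Group B: 50% of the population, {share_b:.0%} of the sample")
```

Any model trained on such a sample simply sees group B less often than it exists, before any further choices about features or objectives are made.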

  2. For example, a framework from Friedman and Nissenbaum (1996) divides bias into three categories: preexisting, technical, and emergent. Another framework from Mehrabi et al. (2019) categorizes bias in machine learning as pre-processing, in-processing, or post-processing.

  3. Importantly, my aim is not to argue that there is no such thing as algorithmic bias, understood as bias that is specific to data sets and/or their use within algorithms. Rather, the claim is that many of the cases of algorithmic bias that are particularly concerning are those where the bias becomes a feature of our social world through its uptake in our social institutions, leading to discrimination that is systemic and structural.

  4. In some cases, sufficiently complex algorithmic systems may not seem to operate as a set of rules or steps. In particular, the way that deep learning algorithms function may ultimately not be reducible to simplistic rules or patterns. Given this distinction, my arguments may not apply to certain deep learning algorithms. However, I think my argument will still apply in all of the cases I am particularly concerned with.

  5. For example, Howell (2009) shows that in addition to increased numbers of non-felony arrests overall, the demographics of people arrested after the introduction of Zero Tolerance Policies in NYC also changed to include more people with no prior criminal records, more young people without prior convictions, proportionally fewer white people, and proportionally more Hispanic people.

  6. Ruha Benjamin (2019) makes the point that it is the ongoing surveillance priorities of our institutions that produce the data which predictive policing algorithms now use. Thus, according to Benjamin, “if we consider that institutional racism in this country is an ongoing unnatural disaster, then crime prediction algorithms should more accurately be called crime production algorithms” (p. 94).

  7. According to O’Neil (2016), PredPol squares are ten times as efficient as random patrolling, and twice as efficient as analysis of “police intelligence.”

  8. Thank you to an anonymous reviewer for raising this objection.

  9. For example, the American Heart Association (AHA) has a set of guidelines that uses an algorithm to assign a risk score meant to represent the predicted risk of death in patients admitted to the hospital. The algorithm is designed to add three additional points to the score for any patient identified as “nonblack,” which in effect categorizes “all black patients as being at lower risk,” so that “following the guidelines could direct care away from black patients” (Vyas et al., 2021). No rationale is given for the adjustment (Vyas et al., 2021). The AHA, as an institution, most likely does not have an explicitly racist purpose, but the algorithm may still be intrinsically racially discriminatory in its design. And the effect that biased guidelines, whether implemented as algorithms or not, can have on the care Black patients receive is significant. Bias in healthcare institutions is not new; the use of algorithms merely changes the mechanism by which that bias is operationalized within those institutions. Whether or not a rationale is given for the adjustment, some internal decision-making led to this particular algorithm’s design, which suggests that the algorithm is meant to substitute for a policy or way of assessing risk that already existed in the institution before being written out as algorithmic instructions.
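
To make the mechanism concrete, here is a minimal sketch of a race-adjusted additive score of the kind described above; the point values, feature names, and threshold are hypothetical (they are not the AHA’s actual scoring table), and the example only shows how a single race-conditioned term systematically ranks Black patients as lower risk.

```python
# Hypothetical sketch of a race-adjusted additive risk score.
# The specific points and threshold are invented for illustration; only the
# structure (a fixed bonus for "nonblack" patients) mirrors the note above.

def risk_score(age_points: int, vitals_points: int, race: str) -> int:
    """Toy additive score with a race 'correction'."""
    score = age_points + vitals_points
    if race != "black":          # the race-based adjustment described in the note
        score += 3
    return score

def flag_for_extra_care(score: int, threshold: int = 10) -> bool:
    """Patients at or above the (hypothetical) threshold get additional attention."""
    return score >= threshold

# Two clinically identical patients diverge only on the race term:
print(flag_for_extra_care(risk_score(5, 4, race="black")))     # False (score 9)
print(flag_for_extra_care(risk_score(5, 4, race="nonblack")))  # True  (score 12)
```

The bias here is not a property of any data set; it is written directly into the scoring rule, which is why the note treats it as a piece of institutional policy rendered in algorithmic form.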

  10. For example, algorithms determine what content is suitable for (and thus allowed to remain on) social media platforms. Content moderation, guided by algorithms, may sometimes apply institutional policy unevenly depending on the race, gender, etc. of the users in question. There is some evidence, for example, that algorithms similar to the ones used by these platforms are more likely to flag tweets written by African American users as offensive or hateful than tweets written by other users (Ghaffary, 2019). Even if the purpose of these algorithms (which might include something like “to remove hate speech so as to reduce harm”) is not racist, the biased application of institutional policy through the use of algorithms may result in an institution that is intrinsically racist. Content moderation done by humans and content moderation done by algorithms are both subject to bias that might result in uneven application of institutional policies – what has changed is the method by which that uneven application occurs. The systematicity and scale of impact are still determined in large part by the power that institutions like Twitter (as a platform) have and the scope of the role they play within society.

  11. One objection that might be made here is that algorithms, and the “Big Data” that they are trained and/or operate on, actually profoundly change the scale and scope of the impact that bias in institutional policies might have. The idea is that the increase in calculative ability that the use of algorithms affords us, along with certain features of algorithms such as their opacity or propensity to develop feedback loops, is enough to fully explain all cases of what we refer to as algorithmic bias. In some ways, I think this is probably right; the ability to scale up calculations and procedures to such a degree does change the impact that institutions are able to have. But the negative features we primarily appeal to when characterizing algorithmic bias in these cases as something entirely novel, such as the opacity of algorithms or their propensity to create feedback loops, are ones that existed in institutions long before the introduction of algorithms. Institutional policies have often been opaque in ways that are problematic; and, due to their role in regulating the social environment, institutions necessarily create feedback loops that can serve to reinforce bias. Further, although I am arguing here that the scale of impact is determined at least in part by the power that institutions have in our society and in determining how well our lives go for us, this is not incompatible with the idea that the use of algorithms still results in new and important changes in the scale of impact. Perhaps the right way to think about it is something more like an algorithmic feedback loop nested within a larger institutional feedback loop. The main idea, though, is that without an appeal to the powerful role that institutions already play in collectively forming the basic structure of society, we cannot fully account for the bias (and resultant discrimination) in many of the cases that particularly concern us.
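
The nested-loop structure just described can be made concrete with a toy simulation. Everything in the sketch is invented for illustration (it is not a model of any actual predictive policing system or its data): the institution’s prior surveillance priorities set a skewed starting record, the algorithmic step allocates patrols in proportion to that record, and the institutional step records new arrests only where patrols are sent, so the initial skew is reproduced indefinitely even though the underlying rates are identical.

```python
# Toy simulation of an algorithmic feedback loop inside an institutional one
# (all numbers invented for illustration).

TRUE_CRIME_RATE = [0.10, 0.10]   # two neighborhoods with identical underlying rates
recorded = [200.0, 100.0]        # institutional starting point: skewed arrest records
PATROLS_PER_ROUND = 100

for round_num in range(1, 6):
    total = sum(recorded)
    # Algorithmic step: allocate patrols in proportion to recorded arrests.
    patrols = [PATROLS_PER_ROUND * r / total for r in recorded]
    # Institutional step: new arrests are recorded only where patrols are sent.
    new_arrests = [p * rate for p, rate in zip(patrols, TRUE_CRIME_RATE)]
    recorded = [r + a for r, a in zip(recorded, new_arrests)]
    print(f"round {round_num}: patrols {patrols[0]:.0f} vs {patrols[1]:.0f}, "
          f"recorded arrests {recorded[0]:.0f} vs {recorded[1]:.0f}")
```

Each round, the neighborhood the institution already watched more closely receives twice the patrols and therefore generates twice the new records, which is the sense in which the algorithm “produces” rather than merely predicts the pattern it reports.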

  12. Which may itself be a subset of the larger institution: ‘the criminal justice system.’

  13. See, for example, Castro (2019).

  14. Thank you to an anonymous reviewer for pointing out this objection.

  15. For recent discussions of fairness in algorithms, see Hedden (2021) and Hellman (2020).

  16. This is roughly what Hellman calls for in order to increase the accuracy of algorithms.

  17. In a similar vein, Davis, Williams, and Yang (2021) make a compelling case for what they term “algorithmic reparation.” Chander (2017) discusses a similar idea, which he refers to as “algorithmic affirmative action.” Hellman (2020) argues that the way to improve fairness in algorithms is to permit the use of protected traits within algorithms in order to enhance their accuracy, and includes a discussion of the legality of this kind of use of protected traits. For a more detailed discussion of the legality of “algorithmic affirmative action” specifically, see Bent (2020).
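
A small numerical sketch may help show why permitting the use of a protected trait can enhance accuracy. The records, thresholds, and proxy feature below are fabricated for illustration and do not represent Hellman’s actual proposal or any real data; the point is only structural: when a proxy feature (here, prior arrests) is inflated for one group by heavier policing, a rule that may condition on group membership can discount the proxy and predict more accurately than a group-blind rule.

```python
# Fabricated records: (group, prior_arrests, actually_reoffended).
# Prior arrests overstate underlying risk for group B because group B is
# policed more heavily; the outcomes below are invented to reflect that skew.
records = [
    ("A", 0, False), ("A", 1, False), ("A", 2, True), ("A", 3, True),
    ("B", 1, False), ("B", 2, False), ("B", 3, True), ("B", 4, True),
]

def accuracy(predict):
    """Fraction of records whose outcome the rule predicts correctly."""
    return sum(predict(g, a) == y for g, a, y in records) / len(records)

def blind(group, arrests):
    # Group-blind rule: read the arrest count at face value.
    return arrests >= 2

def aware(group, arrests):
    # Group-aware rule: discount arrests for group B to offset heavier policing.
    return arrests >= (2 if group == "A" else 3)

print(f"group-blind accuracy: {accuracy(blind):.2f}")   # 0.88
print(f"group-aware accuracy: {accuracy(aware):.2f}")   # 1.00
```

Whether rules that condition on protected traits in this way are legally permissible is the question taken up by Hellman (2020) and Bent (2020).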

  18. See Hellman (2008) or Shelby (2016) for clear accounts of wrongful discrimination.

References

  • Alexander, M. (2010). The new Jim Crow: Mass incarceration in the age of colorblindness. New York: New Press.

  • Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Cambridge: Polity Press.

  • Bent, J. R. (2020). Is algorithmic affirmative action legal? The Georgetown Law Journal, 108, 803–853.

  • Castro, C. (2019). What’s wrong with machine bias. Ergo, 6(15), 405–426.

  • Chander, A. (2017). The racist algorithm? Michigan Law Review, 115(6), 1023–1045.

  • Creel, K., & Hellman, D. (2021). The algorithmic leviathan: Arbitrariness, fairness, and opportunity in algorithmic decision-making systems. Virginia Public Law and Legal Theory Research Paper No. 2021–13.

  • Davis, J. L., Williams, A., & Yang, M. W. (2021). Algorithmic reparation. Big Data & Society, 8(2).

  • Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330–347.

  • Ghaffary, S. (2019). The algorithms that detect hate speech online are biased against black people. Vox. https://www.vox.com/recode/2019/8/15/20806384/social-media-hate-speech-bias-black-african-american-facebook-twitter

  • Guala, F. (2016). Understanding institutions. Princeton: Princeton University Press.

  • Hassoun, N., Conklin, S., Nekrasov, M., & West, J. (2022). The past 110 years: Historical data on the underrepresentation of women in philosophy journals. Ethics, 132(3), 680–729.

  • Hedden, B. (2021). On statistical criteria of algorithmic fairness. Philosophy and Public Affairs, 49(2), 209–231.

  • Hellman, D. (2008). When is discrimination wrong? Cambridge: Harvard University Press.

  • Hellman, D. (2020). Measuring algorithmic fairness. Virginia Law Review, 106(4), 811–866.

  • Hoffmann, A. L. (2019). Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7), 900–915.

  • Howell, K. B. (2009). Broken lives from broken windows: The hidden costs of aggressive order-maintenance policing. New York University Review of Law & Social Change, 33, 271–329.

  • Lin, T. A., & Chen, P.-H. C. (2022). Artificial intelligence in a structurally unjust society. Feminist Philosophy Quarterly, 8(3/4), Article 3.

  • Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2019). A survey on bias and fairness in machine learning. arXiv:1908.09635v2 [cs.LG].

  • North, D. (1990). Institutions, institutional change, and economic performance. Cambridge: Cambridge University Press.

  • O’Neil, C. (2016). Weapons of math destruction. New York: Crown Books.

  • Shelby, T. (2016). Dark ghettos: Injustice, dissent, and reform. Cambridge: The Belknap Press of Harvard University Press.

  • Vyas, D. A., Eisenstein, L. G., & Jones, D. S. (2021). Hidden in plain sight – Reconsidering the use of race correction in clinical algorithms. The New England Journal of Medicine, 383(9), 874–882.

  • Young, I. M. (1990). Justice and the politics of difference. Princeton: Princeton University Press.


Author information

Corresponding author

Correspondence to Camila Hernandez Flowerman.

Ethics declarations

Conflict of Interest

The author did not receive support from any organization for the submitted work. The author has no relevant financial or non-financial interests to disclose.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Flowerman, C.H. (Some) algorithmic bias as institutional bias. Ethics Inf Technol 25, 24 (2023). https://doi.org/10.1007/s10676-023-09698-7


  • DOI: https://doi.org/10.1007/s10676-023-09698-7
