Abstract
Algorithm-assisted decision procedures—including some of the most high-profile ones, such as COMPAS—have been described as unfair because they compound injustice. The complaint is that in such procedures a decision disadvantaging members of a certain group is based on information reflecting the fact that the members of the group have already been unjustly disadvantaged. I assess this reasoning. First, I distinguish the anti-compounding duty from a related but distinct duty—the proportionality duty—from which at least some of the intuitive appeal of the former illegitimately derives. Second, I distinguish between different versions of the anti-compounding duty, arguing that, on some versions, uses of algorithm-assisted decision procedures rarely clash with the anti-compounding duty. Third, drawing on examples of algorithm-assisted decision procedures, I present three objections to the idea that there is a reason not to compound injustice. The most important of these is that one can compound injustice in a non-disrespectful way, and that the wrongfulness of non-disrespectfully compounding injustice is fully explained by the proportionality duty.
Notes
By an ‘algorithm’ I mean a process or procedure that ‘extracts patterns from data’ (Lee and Floridi 2021, p. 170). I use ‘unfair’ and ‘unjust’ interchangeably throughout this article.
For an argument to the effect that error ratio parity is irrelevant to fairness, see (Long 2020).
Strictly, it is the use of the algorithm, not the algorithm itself, that compounds injustice.
See also https://civilrights.org/2014/02/27/civil-rights-principles-era-big-data/. Another algorithm-external fairness issue is whether the data used in algorithm-assisted decision procedures are biased in the way they represent reality (Chouldechova 2017, pp. 13–14; Hellman 2020a, pp. 824–825; Mayson 2019). For an overview of some algorithm-external fairness issues, see Barocas and Selbst (2016, pp. 677–693, 712–714).
In an important paper on the wrongness of racial profiling, Andreas Mogensen argues that any individual instance of racial profiling is wrong when it forms part of a larger pattern of oppression, which amounts to a collective harm. Presumably, many instances of wrongful algorithmic discrimination that compound injustice also form part of a larger pattern of oppression, which amounts to a collective harm. However, a reason not to compound injustice in the sense I have in mind here is clearly distinct from a reason not to perform actions which are part of a larger pattern of oppression (Mogensen 2019, p. 466). First, given Hellman’s account, there is reason not to compound injustices to victims whether they are oppressed or not. Second, an action that forms part of a larger pattern of oppression, which amounts to a collective harm, need not satisfy Hellman’s implication component. I suspect Mogensen’s account of the wrongness of racial profiling can be tweaked to explain the wrongness of many instances of algorithmic discrimination. However, this account is sufficiently different from one that appeals to a duty not to compound injustice for me to set this account’s applicability to the present topic aside for future work. I thank an anonymous reviewer for drawing my attention to Mogensen’s work at this point.
By and large, people have not complained that, because the base-rate of recidivism of male offenders is higher than that of female offenders, COMPAS is more likely to falsely predict that a male offender will recidivate than that a female offender will. Given the anti-compounding view, this is reasonable as long as we assume that the higher base-rate probability of men reoffending does not reflect gender-based injustices, whereas the higher base-rate probability of blacks reoffending does reflect racial injustices (Hellman 2020b, pp. 487–489; for complications, see Hellman 2020b, p. 516).
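The arithmetic behind such error ratio disparities can be made explicit. Below is a minimal numerical sketch in Python (with hypothetical base rates and calibration figures, not actual COMPAS data) of the familiar result that a risk prediction tool that is equally well calibrated across two groups must produce different false positive rates whenever the groups’ base rates differ (Kleinberg et al. 2016; Chouldechova 2017).

```python
# A toy calculation with hypothetical numbers, not COMPAS data: equal
# calibration plus unequal base rates forces unequal false positive rates.

def high_risk_share(base_rate: float, ppv: float, for_: float) -> float:
    """Share of the group labelled high risk, solving
    base_rate = share * ppv + (1 - share) * for_."""
    return (base_rate - for_) / (ppv - for_)

def false_positive_rate(base_rate: float, ppv: float, for_: float) -> float:
    """P(labelled high risk | person does not recidivate)."""
    share = high_risk_share(base_rate, ppv, for_)
    return share * (1 - ppv) / (1 - base_rate)

# Same calibration for both groups: 60% of those labelled high risk
# reoffend (PPV = 0.6), and 20% of those labelled low risk reoffend
# (false omission rate, FOR = 0.2). Only the base rates differ.
for label, base_rate in [("base rate 0.5", 0.5), ("base rate 0.3", 0.3)]:
    fpr = false_positive_rate(base_rate, ppv=0.6, for_=0.2)
    print(f"group with {label}: false positive rate = {fpr:.2f}")
# Prints 0.60 for the first group and 0.14 for the second.
```

On these made-up numbers, the higher base-rate group faces a false positive rate roughly four times that of the other group, even though the tool’s scores mean exactly the same thing for both groups; this is the kind of pattern ProPublica reported for COMPAS (Angwin et al. 2016).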
See also (Hellman 2018, pp. 114–119; Hellman forthcoming, p. 7).
A misfortune might not be an injustice. However, luck egalitarians think that the misfortune in question typically constitutes unjust, bad brute luck.
In view of what I think are problems with respect-based accounts of the wrongness of discrimination, I merely contend that a respect-based account of the wrongness of certain instances of compounding injustice fits our intuitions about compounding injustice better than an account that appeals to a duty not to compound injustice; see Lippert-Rasmussen (2006, forthcoming).
The harm in question could be the harm of not receiving a benefit one might otherwise have received. The intuition motivating the proportionality duty is this: the more unjustly disadvantaged a person is, the worse it is to impose an extra unit of disadvantage on this person.
One cannot violate the anti-compounding duty by allowing a certain original injustice to be the reason for another agent’s acting in a way that amplifies the relevant harm.
Nothing in my discussion here suggests that the two cases might not be wrong for reasons other than those captured by the anti-compounding and the proportionality duties. Nor does it signal any stance on how these two duties might take different forms depending on whether the agent is a public or a private agent.
The appeal to the proportionality duty can also explain the asymmetric assessment of error ratio disparities across black/white offenders and male/female offenders: the former but not the latter clash with that duty (see note 7).
There I argue that the latter’s being worse is best explained by an appeal to disrespect.
To support the anti-compounding duty, Hellman (2020b, pp. 511–512) appeals to Barbara Kiviat’s finding that people think that whether insurance companies’ use of credit scores to determine insurance premiums is fair depends on whether this policy disadvantages poor insurance takers. However, this study is neutral on whether we should accept this duty or the proportionality duty.
This is particularly plausible if we offer an attitudinal consistency rationale for the anti-compounding duty (see section ‘Rationale for the Anti-compounding Duty’). Suppose an agent has already fought a certain injustice. In virtue of that, she might manifest proper aversion to the injustice even when she treats facts which reflect that injustice as reasons for action.
Arguably, one can also implicate oneself in a future, expected injustice, in which case a time-neutral version of the implication component might be preferable. However, in the context of algorithmic justice this may be less relevant. I also assume that one can take an injustice, or its effects, as a reason for one’s action even if one does not regard it as such, though perhaps some will say that one’s duty not to compound what one rightly regards as an injustice is more stringent than the duty not to compound an injustice which one does not regard as such. Hellman does not address these complications.
Lippert-Rasmussen (forthcoming).
On the respect-based view I discuss in section ‘Compounding Injustice Without Disrespect’, COMPAS II and III may be morally less problematic because they do not involve disrespect in the way COMPAS I might.
Hellman’s (2018, p. 113) official definition of implication says that when an actor takes ‘the prior injustice or its effects as her reason for action’, she implicates herself in the injustice. However, occasionally her texts suggest that only taking ‘direct effects’ as reasons for action results in implication (Hellman 2018, p. 113; Hellman 2021, p. 282). On this more restrictive view of implication, the present challenge is avoided. However, it also comes with the need for, first, an account of how we distinguish between direct and indirect effects and, second, a principled argument to the effect that only direct effects matter in relation to implication. My view is that it is difficult to provide an account of the first sort, because the distinction between direct and indirect effects is determined by purpose-relative pragmatic concerns and not by the nature of the relevant causal chains themselves. However, we want to avoid saying that an act compounds injustice relative to some purposes but not relative to others. Also, it is unlikely that an intuitively appealing account of the second kind can be offered. Suppose a society compounds injustice by using low educational achievement—a direct effect of injustice in the educational system—as a reason for imposing longer prison terms, because of the accompanying high risk of recidivism. Suppose also that high risk of unemployment is a direct causal effect of low educational achievement and, thus, an indirect causal effect of injustice in the educational system. Suppose finally that, for the same reason, high risk of unemployment is also used as a reason for longer prison terms. In that case, it seems unclear why using the former, direct effect, but not the latter, indirect effect, as a reason for imposing longer prison terms involves compounding injustice. Why does the latter case not involve compounding injustice, at least to some lesser degree, if the former does?
Compounding, in my broader sense, is a different matter. However, an anti-compounding duty of this broader kind cannot be grounded on the desirability of avoiding attitudinal inconsistency, since it need not involve any form of attitudinal inconsistency.
See also Lippert-Rasmussen (2017, pp. 76–81).
Parr does not claim that it is transferable, so what follows is not a criticism of his views.
One could broaden Parr’s rationale to extend to satisfying wrongdoers’ immoral (possibly unacknowledged) preferences. However, this suggestion imports problems of its own. In any case, for reasons similar to those that explain why Parr’s own rationale fails to ground an anti-compounding duty, this broader rationale fails to do so as well, even if there are fewer instances of objectionable compounding of injustice to which it does not apply.
See Lippert-Rasmussen (forthcoming).
You could have a partial compensation programme, independently of whether you compound injustice, if the victim of the prior injustice ends up even more disadvantaged as a result of the injustice-compounding act. I include this feature in my description of the case to eliminate a particular reason why the new administration might be thought to be disrespectful. Thanks to an anonymous reviewer for pushing me at this point.
Perhaps on an account of disrespect with a strong non-attitudinal component, it could be insisted that the use of risk prediction instruments is disrespectful even under the circumstances that I have described (see Dillon 2018). Specifically, it is worth pointing out that, on Hellman’s (2008) account of the wrongness of discrimination, the objective meaning of an act of discrimination is that the discriminatee has a lower moral status, and actions that are wrong in this way are disrespectful, to some extent at least, independently of the agent’s mental states. In response, I note, first, that standard accounts of disrespect focus on the disrespectful agent’s mental states (see Eidelson 2015; Darwall 2006). Second, as indicated, Hellman’s own notion of implication is tied to the agent’s mental states and not to the objective meaning of the agent’s actions, so it is unclear that an appeal to her 2008 account is legitimate in the present context. Finally, even on an objective-meaning account of disrespect, it need not be disrespectful to take a prior injustice or its effects as a reason for action, provided the alternative to doing so is sufficiently bad when everyone’s interests etc. are considered equally, and the agent also acknowledges, and takes actions whose objective meaning is an acknowledgement of, this prior injustice. I thank an anonymous reviewer for raising the issue of behavioural components of disrespect.
Some might say that they think that the reluctant compounder does something that is pro tanto wrong—she compounds injustice—but that this wrong is not weighty enough to outweigh the benefit of significantly reduced crime. In response, I note, first, that avoiding injustice is generally seen as a weighty concern and, accordingly, that people are generally willing to, and are believed to be required to, forgo significant benefits in the interest of avoiding acting unjustly. Second, my view is that if the reluctant compounder does not compound injustice, those having to bear the cost in the form of a significantly greater risk of being victims of crime can, arguably, complain about being treated unjustly. If so, and assuming that it is always possible to act in such a way that no one is treated unjustly, this would speak against the suggestion that the reluctant compounder does something that is pro tanto wrong.
We can stipulate that they even went so far as to try, unsuccessfully, to prevent the injustice (see the two sentences closing section ‘The Agent of Injustice’) and infer that they therefore cannot be accused of ‘negligence or passivity in the face of injustice’ (Hellman 2021, p. 282)—flaws which, according to Hellman, can activate the duty not to compound injustice along with acquiescing and welcoming injustice.
There are some doubts about whether COMPAS and similar devices do that (Bollinger forthcoming, p. 16). However, let us suppose they do. The general point that using less accurate risk prediction instruments involves morally significant costs stands (for a discussion, see Loi and Christen 2021, pp. 15–20; Kleinberg et al. 2018, pp. 119 n. 12, 120, 160–163).
This weighing will have different outcomes in different cases. All I am saying here is that, for some high level of future crime avoided, and for some low level of harm deriving from different error ratios, the weighing will favour using the relevant recidivism prediction algorithms if everyone’s interests are taken equally into account. Such cases of compounding injustice will also not represent a failure of the administration to do its proportionality duty, since the disadvantages imposed on those who are already disproportionately disadvantaged are not disproportionate relative to the benefits that accrue to other, more fortunately situated people.
Here I need to extend Cohen’s view on what results in the failure to meet the interpersonal test in relation to token-policies (e.g., the unjust educational policies in the USA in the 1980s) to cover cases involving type-policies (unjust educational policies of the type that were in place in the USA in the 1980s).
If, unlike me, you think that, as I have described the reluctant compounder, she could be disrespectful of the victims of the injustice that she compounds, you might still accept my main claim, i.e., that compounding injustice is not wrong when it is not disrespectful. To test whether you disagree with me about this claim, you should tweak the case of the reluctant compounder such that the case involves no disrespect on your view, and then see if you still think it involves wronging the victims of past injustice.
I remain neutral on whether we have a duty to respect others and on what grounds such a duty if it exists. However, most agree that there is such a duty—fundamental or derivative.
Predecessors of this paper were presented at the Arctic University of Norway-UiT, 20 October 2020 (online), at the ‘Algorithmic Fairness Workshop’ (online) at University of Copenhagen, 12 November 2020, and at the ‘Bias and discrimination in algorithmic decision making’ workshop at Leibniz Universität Hannover, 11 October 2021. I would like to thank Ramón Alvarado, Didde Boisen Andersen, Reuben Binns, Ben Eidelson, Göran Duus-Otterström, Sarah Fine, Jake Fry-Lehrle, Andreas Føllesdal, Deborah Hellman, Dietmar Hübner, Nils Holtug, Annabelle Lever, Michael Morreau and Annette Pufner for helpful comments. This work was funded by the Danish National Research Foundation (DNRF144).
References
Allen, James A. 2019. The color of algorithms. Fordham Urban Law Journal 46 (2): 219–270.
Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine bias. ProPublica May 23. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
Barocas, Solon, and Andrew S. Selbst. 2016. Big data’s impact. California Law Review 104 (3): 671–732.
Birhane, Abeba. 2021. Algorithmic injustice. Patterns. https://doi.org/10.1016/j.patter.2021.100205.
Bollinger, Renée Jorgensen. Forthcoming. Algorithms and the individual in criminal law (on file with author).
Bruckner, Matthew A. 2018. The promise and perils of algorithmic lenders’ use of big data. Chicago-Kent Law Review 93 (1): 3–60.
Butt, Daniel. 2007. On benefiting from injustice. Canadian Journal of Philosophy 37: 129–152.
Chouldechova, Alexandra. 2017. Fair prediction with disparate impact. Big Data 5 (2): 153–163.
Clayton, Matthew, and David Stevens. 2004. Social choice and the burdens of justice. Theory and Research in Education 2 (2): 111–126.
Cohen, G. A. 1991. Incentives, inequality, and community. Tanner Lectures on Human Values 13: 261–329.
Darwall, Stephen. 2006. The second person standpoint. Cambridge, MA: Harvard University Press.
Dillon, Robin S. 2018. Respect. Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/respect/.
Duus-Otterström, Göran. 2017. Benefiting from injustice and the common source problem. Ethical Theory and Moral Practice 20: 1067–1081.
Eidelson, Benjamin. 2021. Patterned inequality, compounding injustice, and algorithmic prediction. American Journal of Law and Inequality 1 (1): 252–276.
Eidelson, Benjamin. 2015. Discrimination and disrespect. Oxford: Oxford University Press.
Ghent, A. C., R. Hernández-Murillo, and M. T. Owyang. 2014. Differences in subprime loan pricing across races and neighborhoods. Regional Science and Urban Economics 48: 199–215.
Harris, John. 1987. QALYfying the value of life. Journal of Medical Ethics 13: 117–123.
Hedden, Brian. 2021. On statistical criteria of algorithmic fairness. Philosophy & Public Affairs 49 (2): 209–231.
Hellman, Deborah. Forthcoming. Big data and compounding injustice. Journal of Moral Philosophy.
Hellman, Deborah. 2021. Personal responsibility in an unjust world. American Journal of Law and Equality 1 (1): 277–285.
Hellman, Deborah. 2020a. Measuring algorithmic fairness. Virginia Law Review 106 (4): 811–866.
Hellman, Deborah. 2020b. Sex, causation, and algorithms: How equal protection prohibits compounding prior injustice. Washington University Law Review 98 (2): 481–524.
Hellman, Deborah. 2018. Indirect discrimination and the duty to avoid compounding injustice. In Foundations of indirect discrimination law, ed. Hugh Collins and Tarunabh Khaitan, 105–122. Oxford: Hart Publishing.
Hellman, Deborah. 2008. When is discrimination wrong? Cambridge, MA: Harvard University Press.
Johnson, Gabrielle. 2021. Algorithmic bias. Synthese 198: 9941–9961.
Kagan, Shelly. 2012. The geometry of desert. Oxford: Oxford University Press.
Kleinberg, Jon, Jens Ludwig, Sendhil Mullainathan, and Cass R. Sunstein. 2018. Discrimination in the age of algorithms. Journal of Legal Analysis 10: 113–174.
Kleinberg, Jon, Jens Ludwig, Sendhil Mullainathan, and Cass R. Sunstein. 2019. Discrimination in the age of algorithms. 1–45. https://arxiv.org/abs/1902.03731.
Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. 2016. Inherent trade-offs in the fair determination of risk scores. arXiv:1609.05807v2.
Lee, Michelle Seng Ah, and Luciano Floridi. 2021. Algorithmic fairness in mortgage lending. Minds and Machines 31: 165–191.
Lindstad, Sigurd. 2020. What is wrong in retaining benefits from wrongdoing? Res Publica 26: 25–43.
Lippert-Rasmussen, Kasper. 2006. The badness of discrimination. Ethical Theory and Moral Practice 9 (2): 167–185.
Lippert-Rasmussen, Kasper. 2017. Affirmative action, historical injustice, and the concept of beneficiaries. Journal of Political Philosophy 25 (1): 72–90.
Lippert-Rasmussen, Kasper. 2022. The benefits of injustice and its correction. Journal of Political Philosophy 30 (2): 395–408.
Lippert-Rasmussen, Kasper. Forthcoming. Is there a duty not to compound injustice? Law and Philosophy.
Loi, Michele, and Markus Christen. 2021. Choosing how to discriminate. Philosophy and Technology. https://doi.org/10.1007/s13347-021-00444-9.
Long, Robert. 2020. Fairness in machine learning. https://arxiv.org/pdf/2007.02890.pdf.
Mayson, Sandra G. 2019. Bias in, bias out. Yale Law Journal 128 (8): 2218–2300.
McMahan, Jeff. 2010. Humanitarian intervention, consent, and proportionality. In Ethics and humanity: Themes from the philosophy of Jonathan Glover, ed. N. Ann Davis, Richard Keshen, and Jeff McMahan, 44–72. Oxford: Oxford University Press.
Miconi, Thomas. 2017. The impossibility of fairness. arXiv preprint arXiv:1707.01195.
Miller, David. 2007. National responsibility and global justice. Oxford: Oxford University Press.
Mogensen, Andreas. 2019. Racial profiling and cumulative injustice. Philosophy and Phenomenological Research 98 (2): 452–477.
Olson, Jonas, and Frans Svensson. 2008. Regimenting reasons. Theoria 71 (3): 203–214.
Page, Edward. 2012. Give it up for climate change: A defence of the beneficiary pays principle. International Theory 4: 300–330.
Parr, Tom. 2016. The moral taintedness of benefiting from injustice. Ethical Theory and Moral Practice 19: 985–997.
Scheffler, Samuel. 1982. The rejection of consequentialism. Oxford: Oxford University Press.