Algorithmic indirect discrimination, fairness and harm

Abstract

Over the past decade, scholars, institutions, and activists have voiced strong concerns about the potential of automated decision systems to indirectly discriminate against vulnerable groups. This article analyzes the ethics of algorithmic indirect discrimination, and argues that we can explain what is morally bad about such discrimination by reference to the fact that it causes harm. The article first sketches certain elements of the technical and conceptual background, including definitions of direct and indirect algorithmic discrimination. It next introduces three prominent accounts of fairness as potential explanations of the badness of algorithmic indirect discrimination, but argues that all three are vulnerable to powerful leveling-down-style objections. Instead, the article demonstrates how proper attention to the way differences in decision scenarios affect the distribution of harms can help us account for intuitions in prominent cases. Finally, the article considers a potential objection based on the fact that certain forms of algorithmic indirect discrimination appear to distribute rather than cause harm, and notes that we can explain how such distributions cause harm by attending to differences in individual and group vulnerability.

Data availability

Not applicable.

Notes

  1. Some recent contributions include [15, 19, 38, 40, 47, 49, 74, 77, 103]. Legal scholars have engaged more extensively with the issue (see e.g., [2, 9, 20, 31, 51, 55, 61, 93, 109]). By far the most concerted focus has come from data scientists and computer scientists, particularly within the so-called “fair machine learning” community; some central contributions include [10, 22, 24, 26, 33, 42, 44, 46, 59, 65]. For a good, recent overview, see [17].

  2. Over the past decade-and-a-half (roughly) there has been increased philosophical interest in discrimination, which the current analysis draws on. Prominent contributions include [23, 34, 50, 52, 57, 70, 72, 73, 83].

  3. For general discussion of the difference between equal and differential treatment, and disadvantageous and neutral treatment, see Lippert-Rasmussen, 2014, p. 40ff, and [100, 106].

  4. I have previously defended an account of indirect discrimination along these lines on general grounds, and argued that algorithmic discrimination in particular illustrates why it is a generally plausible and useful way of distinguishing between direct and indirect discrimination. See [100].

  5. There are theoretically precise conceptions of fairness in the philosophical literature, but they have not been widely employed in the debate on fair machine learning. Prominent examples include [16, 29, 84, 91]. See also [15].

  6. For simplicity, I shall focus mostly on classification problems, where ADS attempts to predict the presence of a target property (e.g., “Will this person reoffend?”), but the points transfer readily to regression problems, where ADS attempts to predict the value of the target property (“What is this person’s age?”).

  7. “Significantly” because we must allow for minor differences attributable to randomness. Also, note that negative and positive classifications are symmetrical in the sense that increasing the probability of one equally reduces the probability of the other. Thus, we need only review one of the two to measure disparity.

  8. “Roughly” because, as before, we should presumably allow space for minor differences attributable to randomness.

  9. The measure is conventionally formalized as P[Ŷ = 1 ∣ S = 0] = P[Ŷ = 1 ∣ S = 1], where Ŷ is the classification and S denotes group status, such that the condition requires that the probability of a positive classification conditional on majority group membership is equal to the probability of a positive classification conditional on minority group membership.
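
To make the condition concrete, the following is a minimal sketch in Python of how the two conditional probabilities might be estimated from a set of observed classifications. The encoding (1 = positive classification, S = 0 for the majority group, S = 1 for the minority group), the function name, and the example data are illustrative assumptions for this sketch, not part of the article's argument.

```python
import numpy as np

def statistical_parity_gap(y_hat, s):
    """Estimate P[Y_hat = 1 | S = 0] - P[Y_hat = 1 | S = 1].

    y_hat: classifications (1 = positive, 0 = negative)
    s:     group membership (0 = majority, 1 = minority)
    """
    y_hat, s = np.asarray(y_hat), np.asarray(s)
    p_majority = y_hat[s == 0].mean()  # positive-classification rate in the majority group
    p_minority = y_hat[s == 1].mean()  # positive-classification rate in the minority group
    return p_majority - p_minority

# Illustrative data: the parity condition is satisfied when the gap is (close to) zero.
y_hat = [1, 0, 1, 1, 0, 1, 0, 0]
s     = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_gap(y_hat, s))  # 0.75 - 0.25 = 0.5, so the condition is violated
```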

  10. The measure is conventionally formalized as P[Ŷ = y ∣ Y = y ∩ S = 0] = P[Ŷ = y ∣ Y = y ∩ S = 1], where Ŷ is the classification, Y is the true status, y ranges over the values that Ŷ and Y can assume ({0,1} in a binary classification problem), and S denotes group status, such that the condition requires that the probability of a classification being true conditional on majority group membership is equal to the probability of a classification being true conditional on minority group membership.
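
As with statistical parity, the condition can be checked directly against data. The sketch below, under the same illustrative encoding assumptions as the previous example, estimates P[Ŷ = y ∣ Y = y ∩ S] for each class label and group and reports the between-group gaps; the function names are mine, not the article's.

```python
import numpy as np

def classwise_accuracy(y_hat, y, s, group, label):
    """Estimate P[Y_hat = label | Y = label, S = group]."""
    y_hat, y, s = map(np.asarray, (y_hat, y, s))
    mask = (y == label) & (s == group)
    return (y_hat[mask] == label).mean()

def accuracy_parity_gaps(y_hat, y, s):
    """Between-group gap in class-wise accuracy for each label.

    The parity condition holds when both gaps are (close to) zero,
    allowing for minor differences attributable to randomness.
    """
    return {label: classwise_accuracy(y_hat, y, s, group=0, label=label)
                   - classwise_accuracy(y_hat, y, s, group=1, label=label)
            for label in (0, 1)}
```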

  11. The measure is equivalent to the combination of the requirements of equal true and false positive rates, which can be formalized as P[Ŷ = 1 ∣ Y = 1 ∩ S = 0] = P[Ŷ = 1 ∣ Y = 1 ∩ S = 1] and P[Ŷ = 1 ∣ Y = 0 ∩ S = 0] = P[Ŷ = 1 ∣ Y = 0 ∩ S = 1], where Ŷ is the classification, Y is the true status, and S denotes group status. The condition requires that the probability of a positive classification conditional on a positive true status be equal for majority and minority group members, and that the probability of a positive classification conditional on a negative true status likewise be equal for majority and minority group members.
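
The combined condition can likewise be checked by comparing true and false positive rates across the two groups. The sketch below uses the same illustrative encoding as the previous examples; the function names are mine, and the tolerance for minor random differences is left unspecified.

```python
import numpy as np

def positive_rate(y_hat, y, s, group, true_label):
    """Estimate P[Y_hat = 1 | Y = true_label, S = group].

    With true_label = 1 this is the group's true positive rate;
    with true_label = 0 it is the group's false positive rate.
    """
    y_hat, y, s = map(np.asarray, (y_hat, y, s))
    mask = (y == true_label) & (s == group)
    return (y_hat[mask] == 1).mean()

def equal_rate_gaps(y_hat, y, s):
    """Between-group gaps in true and false positive rates.

    The combined parity condition holds when both gaps are (close to) zero.
    """
    tpr_gap = (positive_rate(y_hat, y, s, group=0, true_label=1)
               - positive_rate(y_hat, y, s, group=1, true_label=1))
    fpr_gap = (positive_rate(y_hat, y, s, group=0, true_label=0)
               - positive_rate(y_hat, y, s, group=1, true_label=0))
    return tpr_gap, fpr_gap
```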

  12. As with accuracy above, we can specify more narrow conditions, such as parity of false positives (but not false negatives). However, as with the diverse accuracy measures noted above, the problems afflicting PET, discussed below, also apply to related parity conditions here.

  13. The original leveling down objection was famously raised by Derek Parfit against telic egalitarianism [88]. I say leveling-down-type objections because they are structurally similar but different in that they pertain to reasons, as opposed to values (cf. [71], chapter 5).

  14. The same point applies to benign tumors, of course, though for simplicity we can focus on only one of the two.

  15. For a related general argument, see [69].

  16. A further criticism holds that justice applies at the level of individuals, but parity conditions are concerned with the average treatment of groups. See [18, 69].

  17. The leveling down-type objections apply even if the scenarios are unlikely to occur in practice, but it is worth noting that such scenarios may in fact be common [25, 26].

  18. We set aside here the possibility that it may be all things considered worse for the person likely to reoffend to be granted parole, e.g., because this will allow them to reoffend, and reoffending is bad for the offender. Furthermore, we are still setting aside the issue of when an act, policy or practice might be all-things-considered permissible in spite of the fact that it is bad for some persons, e.g., because denying parole to persons accurately assessed as high-risk recidivists prevents harm to potential victims.

  19. Does it matter what the alternatives to ADS are in the first place, for example how a human performing the same classification task would fare? Yes, clearly: the ADS causes harm if we could do better without it (cf. [4, 103]). For the purposes of this argument, however, such alternatives (“the human ADS”) are no different from the possibility of training a different model. Hence, let us assume that alternatives to ADS are impossible or would be even worse.

  20. Overestimation is only actually good when it makes a difference to whether the person obtains an education or not. We set aside for simplicity’s sake the complex issue of what it means to have academic potential, and whether it can plausibly be ranked. That is, we assume for the purposes of the argument that we can meaningfully speak of a rank that one really merits. Furthermore, as in Criminal, we set aside here the possibility that some persons may be made worse off by being overestimated, e.g., because they are offered and accept a place in an educational program they are incapable of completing, and the resulting waste of time and experience of failure leave them worse off than they would have been had they not been offered a place at all.

  21. Note that the objection does not purport to show that harm explains the badness of no cases of indirect algorithmic discrimination. In fact, it is compatible with the objection that harm explains the badness of many cases. The objection is an argument for the more modest claim that harm cannot explain the badness of all cases, and that there must therefore be other moral factors at stake.

  22. As [81] observe, this dubious assumption is common both in the development of ADS and in academic discussions of fairness in machine learning.

  23. It is also possible, as prioritarianism claims, that the moral value of units of well-being varies with the well-being level of the recipient, or, as telic egalitarianism claims, that increasing inequality in the distribution of goods is morally disvaluable. I am not persuaded by either view, but if they were true, harming persons who are in general worse off would be even more morally bad.

  24. The most prominent alternative accounts in the literature explain the badness of discrimination with reference to disrespect or inequality. Proponents of respect-based accounts argue that discrimination involves a failure to treat persons in light of reasons grounded in their moral worth [3, 34, 43, 83, 96], or that it involves treating persons in a way that expresses a demeaning underestimation of their worth [50]. Equality-based accounts hold that discrimination involves a decrease in the well-being or life opportunities of persons who are already disadvantaged through no fault of their own [63, 95].

  25. For example, accounts that rely on the discriminator’s mental state are likely to fit poorly with ADS that does not have mental states [103]. For critical discussion of disrespect-based accounts, see [8, 11, 67, 70, 75, 102]. For critical discussion of the expressive disrespect account, see [7, pp. 91–94], [34, pp. 84–90], [70]. For critical discussion of equality-based accounts, see [70].

  26. Recent research on how to develop ADS under constraints sensitive to benefits, welfare, and harm includes [4, 25, 26, 48, 97].

References

  1. AccessNow: Human rights in the age of artificial intelligence. https://www.accessnow.org/cms/assets/uploads/2018/11/AI-and-Human-Rights.pdf (2018). Accessed 11 June 2019

  2. Adams-Prassl, J., Binns, R., Kelly-Lyth, A.: Directly discriminatory algorithms. Mod. Law Rev. 86(1), 144–175 (2023). https://doi.org/10.1111/1468-2230.12759

  3. Alexander, L.: What makes wrongful discrimination wrong? Biases, preferences, stereotypes and proxies. Univ. Pa. Law Rev. 141, 149–219 (1992)

  4. Altman, M., Wood, A., Vayena, E.: A harm-reduction framework for algorithmic fairness. IEEE Secur. Priv. 16(3), 34–45 (2018). https://doi.org/10.1109/MSP.2018.2701149

  5. Altman, A.: Discrimination. In: Zalta, E.N. (ed.) Stanford Encyclopedia of Philosophy (2020)

  6. Angwin, J., Larson, J., Mattu, S., & Kirchner, L.: Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (2016). Accessed 9 Sept 2019

  7. Arneson, R.J.: Discrimination, disparate impact, and theories of justice. In: Hellman, D., Moreau, S. (eds.) Philosophical Foundations of Discrimination Law, pp. 87–111. Oxford University Press, Oxford (2013)

  8. Arneson, R.: Discrimination and harm. In: Lippert-Rasmussen, K. (ed.) The Routledge Handbook of the Ethics of Discrimination, pp. 151–163. Routledge, London (2017)

  9. Barocas, S., Selbst, A.D.: Big Data’s disparate impact. Calif. Law Rev. 104(3), 671–732 (2016). https://doi.org/10.2139/ssrn.2477899

  10. Barocas, S., Hardt, M., Narayanan, A.: Fairness and machine learning. https://fairmlbook.org/ (2019). Accessed 3 Oct 2019

  11. Beeghly, E.: Discrimination and disrespect. In: Lippert-Rasmussen, K. (ed.) Routledge Handbook to the Ethics of Discrimination, pp. 83–96. Routledge (2017)

  12. Benner, A.D., Wang, Y., Shen, Y., Boyle, A.E., Polk, R., Cheng, Y.-P.: Racial/ethnic discrimination and well-being during adolescence: a meta-analytic review. Am. Psychol. 73(7), 855–883 (2018). https://doi.org/10.1037/amp0000204

  13. Berger, M., Sarnyai, Z.: “More than skin deep”: stress neurobiology and mental health consequences of racial discrimination. Stress 18(1), 1–10 (2015). https://doi.org/10.3109/10253890.2014.989204

  14. Berk, R., Heidari, H., Jabbari, S., Kearns, M., Roth, A.: Fairness in criminal justice risk assessments: the state of the art. Sociol. Methods Res. Online First (2018). https://doi.org/10.1177/0049124118782533

  15. Binns, R.: Fairness in machine learning: lessons from political philosophy. J. Mach. Learn. Res. 81, 1–11 (2018)

  16. Broome, J.: Fairness. Proc. Aristot. Soc. 91, 87–101 (1990)

  17. Carey, A.N., Wu, X.: The statistical fairness field guide: perspectives from social and formal sciences. AI Ethics 3(1), 1–23 (2023). https://doi.org/10.1007/s43681-022-00183-3

  18. Castro, C., Loi, M.: The fair chances in algorithmic fairness: a response to Holm. Res. Publ. 29(2), 331–337 (2023). https://doi.org/10.1007/s11158-022-09570-3

  19. Castro, C., O’Brien, D., Schwan, B.: Egalitarian machine learning. Res. Publ. 29(2), 237–264 (2023). https://doi.org/10.1007/s11158-022-09561-4

  20. Chiao, V.: Fairness, accountability and transparency: notes on algorithmic decision-making in criminal justice. Int. J. Law Context 15(2), 126–139 (2019). https://doi.org/10.1017/S1744552319000077

  21. Chouldechova, A.: Fair prediction with disparate impact: a study of bias in recidivism prediction instruments. Big Data 5(2), 153–163 (2017). https://ui.adsabs.harvard.edu/abs/2016arXiv161007524C. Accessed 5 Oct 2020

  22. Chouldechova, A., Roth, A.: The frontiers of fairness in machine learning. arXiv e-prints. https://ui.adsabs.harvard.edu/abs/2018arXiv181008810C (2018). Accessed 20 Mar 2019

  23. Collins, H., Khaitan, T. (eds.): Foundations of Indirect Discrimination Law. Hart Publishing, Oxford (2018)

  24. Corbett-Davies, S., Goel, S.: The measure and mismeasure of fairness: a critical review of fair machine learning. arXiv preprint arXiv:1808.00023 (2018)

  25. Corbett-Davies, S., Goel, S.: The measure and mismeasure of fairness: a critical review of fair machine learning. arXiv e-prints. https://arxiv.org/pdf/1808.00023.pdf (2018)

  26. Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., Huq, A.: Algorithmic decision making and the cost of fairness. Paper presented at the KDD ’17 (2017)

  27. Cosette-Lefebvre, H.: Direct and indirect discrimination. Public Aff. Q. 34(4), 340–367 (2020)

  28. Crisp, R.: In defence of the priority view: a response to Otsuka and Voorhoeve. Utilitas 23(1), 105–108 (2011). https://doi.org/10.1017/S0953820810000488

  29. Daniels, N.: Just Health: Meeting Health Needs Fairly. Cambridge University Press, Cambridge (2008)

  30. Dieterich, W., Mendoza, C., Brennan, T.: COMPAS risk scales: demonstrating accuracy equity and predictive parity. Northpointe Inc. Research Department. https://go.volarisgroup.com/rs/430-MBX-989/images/ProPublica_Commentary_Final_070616.pdf (2016). Accessed 27 Mar 2019

  31. Donohue, M.: A replacement for Justitia’s scales? Machine learning’s role in sentencing. Harvard J. Law Technol. 32(2), 657–678 (2019)

  32. Dressel, J., Farid, H.: The accuracy, fairness, and limits of predicting recidivism. Sci. Adv. (2018). https://doi.org/10.1126/sciadv.aao5580

  33. Dwork, C., Hardt, M., Pitassi, T., Reingold, O., Zemel, R.: Fairness through awareness. arXiv:1104.3913 [cs] (2011)

  34. Eidelson, B.: Discrimination and disrespect. Oxford University Press, Oxford (2015)

  35. Ensign, D., Friedler, S.A., Neville, S., Scheidegger, C., Venkatasubramanian, S.: Runaway feedback loops in predictive policing. In: Paper Presented at the 1st Conference on Fairness, Accountability and Transparency. https://arxiv.org/abs/1706.09847 (2017)

  36. Eubanks, V.: Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor. St. Martin’s Press, New York (2018)

  37. European Group on Ethics in Science and New Technologies: Artificial intelligence, robotics and ‘autonomous’ systems. https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf (2018)

  38. Eva, B.: Algorithmic fairness and base rate tracking. Philos. Public Aff. 50(2), 239–266 (2022). https://doi.org/10.1111/papa.12211

  39. FRA: #BigData: discrimination in data-supported decision making. http://fra.europa.eu/sites/default/files/fra_uploads/fra-2018-focus-big-data_en.pdf (2018). Accessed 11 June 2019

  40. Fazelpour, S., Danks, D.: Algorithmic bias: senses, sources, solutions. Philos Compass 16(8), 1–16 (2021). https://doi.org/10.1111/phc3.12760

  41. Ferguson, A.G.: The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement. NYU Press, New York (2017)

  42. Friedler, S.A., Scheidegger, C., Venkatasubramanian, S., Choudhary, S., Hamilton, E.P., Roth, D.: A comparative study of fairness-enhancing interventions in machine learning. In: Paper presented at the Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA. https://doi.org/10.1145/3287560.3287589 (2019)

  43. Glasgow, J.: Racism as disrespect. Ethics 120, 64–93 (2009)

  44. Grgic-Hlaca, N., Bilal Zafar, M., Gummadi, K.P., Weller, A.: The case for process fairness in learning: feature selection for fair decision making. In: Paper presented at the Symposium on Machine Learning and the Law at the 29th Conference on Neural Information Processing Systems (2016)

  45. Hacker, P.: Teaching fairness to artificial intelligence: existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Rev. 1143–1185. http://www.kluwerlawonline.com/document.php?id=COLA2018095 (2018). Accessed 22 Mar 2019

  46. Hardt, M., Price, E., Srebro, N.: Equality of opportunity in supervised learning. arXiv:1610.02413 [cs] (2016)

  47. Hedden, B.: On statistical criteria of algorithmic fairness. Philos. Public Aff. 49(2), 209–231 (2021). https://doi.org/10.1111/papa.12189

  48. Heidari, H., Ferrari, C., Gummadi, K.P., Krause, A.: Fairness behind a veil of ignorance: a welfare analysis for automated decision making. arXiv e-prints. https://arxiv.org/pdf/1806.04959.pdf (2019). Accessed 24 Feb 2020

  49. Heidari, H., Loi, M., Gummadi, K.P., Krause, A.: A moral framework for understanding fair ml through economic models of equality of opportunity. In: Paper Presented at the Proceedings of the Conference on Fairness, Accountability, and Transparency (2019)

  50. Hellman, D.: When is discrimination wrong? Harvard University Press, Cambridge (2008)

  51. Hellman, D.: Measuring algorithmic fairness. Va. Law Rev. 106(4), 811–866 (2020)

  52. Hellman, D., Moreau, S. (eds.): Philosophical Foundations of Discrimination Law. Oxford University Press, Oxford (2013)

  53. High-Level Expert Group on Artificial Intelligence: Ethics guidelines for trustworthy AI. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=58477 (2019). Accessed 11 June 2019

  54. Holtug, N.: Persons, Interests, and Justice. Oxford University Press, Oxford (2010)

  55. Huq, A.Z.: Racial equity in algorithmic criminal justice. Duke Law J. 68, 1043–1134 (2019)

  56. Jaume-Palasí, L., Spielkamp, M.: Ethics and algorithmic processes for decision making and decision support. https://algorithmwatch.org/wp-content/uploads/2017/06/Ethik_und_algo_EN_final.pdf (2017). Accessed 11 June 2019

  57. Khaitan, T.: A Theory of Discrimination Law. Oxford University Press, Oxford (2015)

  58. Khaitan, T.: Indirect discrimination. In: Lippert-Rasmussen, K. (ed.) Routledge Handbook of the Ethics of Discrimination, pp. 30–41. Routledge, London (2017)

  59. Kilbertus, N., Gascón, A., Kusner, M.J., Veale, M., Gummadi, K.P., Weller, A.: Blind justice: fairness with encrypted sensitive attributes. arXiv:1806.03281 (2018)

  60. Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., Mullainathan, S.: Human decisions and machine predictions. NBER working paper series. http://www.nber.org/papers/w23180 (2017)

  61. Kleinberg, J., Ludwig, J., Mullainathan, S., Sunstein, C.R.: Discrimination in the Age of Algorithms. arXiv e-prints. https://ui.adsabs.harvard.edu/abs/2019arXiv190203731K (2019). Accessed 4 Apr 2019

  62. Kleinberg, J., Mullainathan, S., Raghavan, M.: Inherent trade-offs in the fair determination of risk scores. arXiv e-prints. https://ui.adsabs.harvard.edu/abs/2016arXiv160905807K (2016). Accessed 22 Mar 2019

  63. Knight, C.: Discrimination and equality of opportunity. In: Lippert-Rasmussen, K. (ed.) Routledge Handbook of the Ethics of Discrimination, pp. 140–150. Routledge, London (2017)

  64. Krieger, N.: Discrimination and health inequities. Int. J. Health Serv. 44(4), 643–710 (2014). https://doi.org/10.2190/HS.44.4.b

  65. Kusner, M.J., Loftus, J.R., Russell, C., Silva, R.: Counterfactual fairness. arXiv e-prints. https://ui.adsabs.harvard.edu/abs/2017arXiv170306856K (2017). Accessed 20 Mar 2019

  66. Larson, J., Mattu, S., Kirchner, L., Angwin, J.: How we analyzed the COMPAS recidivism algorithm. ProPublica. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm (2016). Accessed 9 Sept 2019

  67. Lippert-Rasmussen, K.: The badness of discrimination. Ethical Theory Moral Pract. 9, 167–185 (2006)

  68. Lippert-Rasmussen, K.: Private discrimination: a prioritarian desert-accommodating account. San Diego Law Rev. 43, 817–856 (2007)

  69. Lippert-Rasmussen, K.: Discrimination and the aim of proportional representation. Polit. Philos. Econ. 7, 159–182 (2008)

  70. Lippert-Rasmussen, K.: Born Free and Equal? A Philosophical Inquiry Into the Nature of Discrimination. Oxford University Press, Oxford (2013)

  71. Lippert-Rasmussen, K.: Luck Egalitarianism. Bloomsbury Publishing, London (2015)

  72. Lippert-Rasmussen, K. (ed.): The Routledge Handbook of the Ethics of Discrimination. Routledge, Abingdon (2018)

  73. Lippert-Rasmussen, K.: Making Sense of Affirmative Action. Oxford University Press, Incorporated, Oxford (2020)

  74. Lippert-Rasmussen, K.: Using (un)fair algorithms in an unjust world. Res. Publ. 29(2), 283–302 (2023). https://doi.org/10.1007/s11158-022-09558-z

  75. Lippert-Rasmussen, K.: Respect and discrimination. In: Hurd, H.M. (ed.) Moral Puzzles and Legal Perplexities: Essays on the Influence of Larry Alexander, pp. 317–332. Cambridge University Press, Cambridge (2018)

  76. Lipton, Z.C., Chouldechova, A., McAuley, J.: Does mitigating ML’s impact disparity require treatment disparity? In: Paper Presented at the 32nd Conference on Neural Information Processing Systems (2018)

  77. Loi, M., Nappo, F., Viganò, E.: How I would have been differently treated. Discrimination through the lens of counterfactual fairness. Res. Publ. 29(2), 185–211 (2023). https://doi.org/10.1007/s11158-023-09586-3

  78. MSI-AUT: A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework. https://rm.coe.int/draft-study-of-the-implications-of-advanced-digital-technologies-inclu/16808ef255 (2018). Accessed 11 June 2019

  79. MSI-NET: Algorithms and human rights—study on the human rights dimensions of automated data processing techniques and possible regulatory implications. https://rm.coe.int/study-hr-dimension-of-automated-data-processing-incl-algorithms/168075b94a (2017). Accessed 11 June 2019

  80. Mitchell, S., Potash, E., Barocas, S., D’Amour, A., Lum, K.: Algorithmic fairness: choices, assumptions, and definitions. Annu. Rev. Stat. Appl. 8(1), 141–163 (2021). https://doi.org/10.1146/annurev-statistics-042720-125902

  81. Mitchell, S., Potash, E., Barocas, S., D'Amour, A., Lum, K.: Prediction-based decisions and fairness: a catalogue of choices, assumptions, and definitions. arXiv:1811.07867. https://ui.adsabs.harvard.edu/abs/2018arXiv181107867M (2018)

  82. Mittelstadt, B.D., Allo, P., Taddeo, M., Wachter, S., Floridi, L.: The ethics of algorithms: mapping the debate. Big Data Soc. (2016). https://doi.org/10.1177/2053951716679679

  83. Moreau, S.: Faces of Inequality: A Theory of Wrongful Discrimination. Oxford University Press, Incorporated, Oxford (2020)

  84. Otsuka, M., Voorhoeve, A.: Why it matters that some are worse off than others: an argument against the priority view. Philos. Public Aff. 37(2), 171–199 (2009). http://www.jstor.org.ep.fjernadgang.kb.dk/stable/40212842. Accessed 11 Oct 2017

  85. O’Neil, C.: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown/Archetype, New York (2016)

  86. Panel for the Future of Science and Technology: Understanding algorithmic decision-making: opportunities and challenges (2019)

  87. Parfit, D.: Another defence of the priority view. Utilitas 24(3), 399–440 (2012). https://doi.org/10.1017/S095382081200009X

  88. Parfit, D.: Equality or priority. In: Clayton, M., Williams, A. (eds.) The Ideal of Equality, pp. 81–125. Palgrave Macmillan, Basingstoke (2002)

  89. Perry, W.L.: Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. RAND Corporation, Santa Monica (2013)

  90. Rainie, L., Anderson, J.: Code-dependent: pros and cons of the algorithm age. http://www.elon.edu/docs/e-web/imagining/surveys/2016_survey/Pew%20and%20Elon%20University%20Algorithms%20Report%20Future%20of%20Internet%202.8.17.pdf (2017). Accessed 11 June 2019

  91. Rawls, J.: A Theory of Justice. Oxford University Press, Oxford (1999)

  92. Reisman, D., Schultz, J., Crawford, K., Whittaker, M.: Algorithmic impact assessments: a practical framework for public agency accountability. https://ainowinstitute.org/aiareport2018.pdf (2018). Accessed 11 June 2019

  93. Roth, A.: Trial by machine. Georgetown Law J. 104(5), 1245–1306 (2016)

  94. Schmitt, M.T., Branscombe, N.R., Postmes, T., Garcia, A.: The consequences of perceived discrimination for psychological well-being: a meta-analytic review. Psychol. Bull. 140(4), 921–948 (2014). https://doi.org/10.1037/a0035754

  95. Segall, S.: What’s so bad about discrimination? Utilitas 24(1), 82–100 (2012)

  96. Slavny, A., Parr, T.: Harmless discrimination. Leg. Theory 21(2), 100–114 (2015)

  97. Speicher, T., Heidari, H., Grgic-Hlaca, N., Gummadi, K.P., Singla, A., Weller, A., Zafar, M.B.: A unified approach to quantifying algorithmic unfairness: measuring individual and group unfairness via inequality indices. In: Paper Presented at the Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, United Kingdom. (2018). https://doi.org/10.1145/3219819.3220046

  98. Temkin, L.S.: Equality, priority, and the levelling down objection. In: Clayton, M., Williams, A. (eds.) The Ideal of Equality, pp. 126–161. Palgrave Macmillan, Basingstoke (2002)

  99. Thomsen, F.K.: But some groups are more equal than others – a critical review of the group criterion in the concept of discrimination. Soc. Theory Pract. 39(1), 120–146 (2013)

  100. Thomsen, F.K.: Stealing bread and sleeping beneath bridges – indirect discrimination as disadvantageous equal treatment. Moral Philosophy and Politics 2(2), 299–327 (2015)

  101. Thomsen, F.K.: Stealing bread and sleeping beneath bridges – indirect discrimination as disadvantageous equal treatment. Moral Philosophy and Politics 2(2), 299–327 (2015)

  102. Thomsen, F.K.: No disrespect – but that account does not explain what is morally bad about discrimination. J. Ethics Soc. Philos. 23(3), 420–447 (2022)

  103. Thomsen, F.K.: Three lessons for and from algorithmic discrimination. Res. Publ. 29(2), 213–235 (2023). https://doi.org/10.1007/s11158-023-09579-2

  104. Thomsen, F.K.: Direct discrimination. In: Lippert-Rasmussen, K. (ed.) Routledge Handbook of Discrimination (2018)

  105. Thomsen, F.K.: Discrimination. In: Thompson, W.R. (ed.) Oxford Research Encyclopedia of Politics. Oxford University Press, Oxford (2017)

  106. Thomsen, F.K.: Direct discrimination. In: Lippert-Rasmussen, K. (ed.) Routledge Handbook of Discrimination (2018)

  107. Voorhoeve, A., Fleurbaey, M.: Egalitarianism and the separateness of persons. Utilitas 24(3), 381–398 (2012). https://doi.org/10.1017/S0953820812000040

  108. Williams, D.R., Lawrence, J.A., Davis, B.A., Vu, C.: Understanding how discrimination can affect health. Health Serv. Res. 54(S2), 1374–1388 (2019). https://doi.org/10.1111/1475-6773.13222

  109. Zarsky, T.: The trouble with algorithmic decisions: an analytic road map to examine efficiency and fairness in automated and opaque decision making. Sci. Technol. Hum. Values 41(1), 118–132 (2016). https://doi.org/10.1177/0162243915605575

  110. Zerilli, J., Knott, A., Maclaurin, J., Gavaghan, C.: Transparency in algorithmic and human decision-making: is there a double standard? Philos. Technol. 32, 661–683 (2019). https://doi.org/10.1007/s13347-018-0330-6

  111. Zuiderveen Borgesius, F.: Discrimination, artificial intelligence, and algorithmic decision-making. https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73 (2018). Accessed 11 June 2019

Acknowledgements

I have presented versions of this paper at the Nordic Network for Political Theory 2019 conference at Oslo University, as well as at seminars at the Food & Health Science department of Copenhagen University, Denmark, the Philosophy department at the Nanyang Technological University, Singapore, the Philosophy & Science Studies department of Roskilde University, Denmark, and the Centre for Experimental-Philosophical Studies of Discrimination (CEPDISC) of Aarhus University, Denmark. I am grateful for helpful questions and comments on these occasions from Andreas Brøgger Albertsen, Kim Angell, Ludvig Beckman, Reuben Binns, Emil J. Busch, Christina Chuang, Jakob Elster, Marion Goodman, Rune Klingenberg Hansen, Frederik Hjorten, Sune Hannibal Holm, Robert Huseby, Ditte Marie Munch Jurisic, Sune Lægaard, Jakob Thrane Mainz, Viki Møller Lyngby Pedersen, Jesper Ryberg, Peter Sandøe, Jørn Sønderholm, Kim Mannemar Sønderskov, Jacob Livingston Slosser, Olav Benjamin Vassend, and Søren Sofus Wichmann. I owe particular thanks for very thorough written comments to Sebastian Holmen, Nils Holtug, Søren Flinch Midtgaard, and Thomas Søbirk Petersen. Furthermore, I owe an enormous debt of gratitude to my friend, Associate Professor Tommy Sonne Alstrøm, The Technical University of Denmark, who with admirable patience helped me understand the workings of machine learning and automated decision-making. Finally, I am grateful to the Research Department of the Danish Institute for Human Rights, and in particular its former Head of Research, Hans-Otto Sano, as well as CEPDISC, Aarhus University and its leader, Professor Kasper Lippert-Rasmussen. Much of the work on this article was conducted during my tenure as Senior Researcher with the former and visiting Associate Professor with the latter.

Funding

This work was supported by Danmarks Grundforskningsfond, grant DNRF114.

Author information

Corresponding author

Correspondence to Frej Klem Thomsen.

Ethics declarations

Conflict of interest

The author attests that he has no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Thomsen, F.K. Algorithmic indirect discrimination, fairness and harm. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00326-0
