Democratizing Algorithmic Fairness


Machine learning algorithms can now identify patterns and correlations in (big) datasets and predict outcomes based on the identified patterns and correlations. They can then generate decisions in accordance with the predicted outcomes, and decision-making processes can thereby be automated. However, algorithms can inherit questionable values from datasets and acquire biases in the course of (machine) learning. While researchers and developers have taken the problem of algorithmic bias seriously, the development of fair algorithms is primarily conceptualized as a technical task. In this paper, I discuss the limitations and risks of this view. Since decisions on the “fairness measure” and the related techniques for fair algorithms essentially involve choices between competing values, “fairness” in algorithmic fairness should be conceptualized first and foremost as a political question and be resolved politically. In short, this paper aims to foreground the political dimension of algorithmic fairness and to supplement the current discussion with a deliberative approach to algorithmic fairness based on the accountability for reasonableness framework (AFR).



  1.

    For an overview of major approaches to assess the values embedded in information technology, see Brey (2010).

  2.

    The media have reported many cases of (potential) harm from algorithmic decision-making, but the racial bias in the COMPAS recidivism algorithm reported by ProPublica (Angwin et al. 2016; Angwin and Larson 2016), along with Northpointe’s (now renamed “equivant”) response to ProPublica’s report (Dieterich et al. 2016), has arguably generated the most discussion. The COMPAS recidivism algorithm has since become the paradigmatic case for research on algorithmic bias, with various studies citing it as their motivation or using it as a benchmark. See also O’Neil (2016) for an accessible discussion of other cases of algorithmic bias.

  3.

    For a recent overview of the current approaches to algorithmic fairness and different techniques to achieve algorithmic fairness, see Lepri et al. (2018) and Friedler et al. (2019).

  4.

    This is not to claim that the presumed ideas of fairness are unreasonable or idiosyncratic. In fact, some researchers have explicitly referred to the social or legal understandings of fairness in constructing their fairness measures. Still, it is the researchers’ choice to rely on a specific understanding of fairness, rather than others, for their fairness measures, and their choice is rarely informed by the public. I shall return to this point in my discussion of the AFR-based framework.

  5.

    For example, Corbett-Davies et al.’s (2017) analysis of the COMPAS recidivism algorithm refers to three definitions of fairness, i.e., statistical parity, conditional statistical parity, and predictive equality. Berk et al.’s (2018) review of fairness in criminal justice risk assessments refers to six definitions of fairness, i.e., overall accuracy equality, statistical parity, conditional procedure accuracy equality, conditional use accuracy equality, treatment equality, and total fairness. Mitchell and Shadlen’s (2017) recent summary includes 19 definitions of fairness, and a recent talk by Arvind Narayanan (2018) has increased the number of definitions to 21.
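
To make the competition between definitions concrete, here is a minimal sketch (my own illustration with hypothetical data, not an example from the paper) computing two of the definitions named above, statistical parity and predictive equality, for a toy population with two groups, “A” and “B”:

```python
# Illustrative sketch (hypothetical data): statistical parity compares rates of
# positive predictions across groups; predictive equality compares
# false-positive rates. The group names, records, and helper functions below
# are my own assumptions for illustration.

def statistical_parity_gap(records):
    """Gap in the rate of positive predictions between groups A and B."""
    def rate(g):
        group = [r for r in records if r["group"] == g]
        return sum(r["pred"] for r in group) / len(group)
    return abs(rate("A") - rate("B"))

def predictive_equality_gap(records):
    """Gap in false-positive rates (positive predictions among true negatives)."""
    def fpr(g):
        negatives = [r for r in records if r["group"] == g and r["label"] == 0]
        return sum(r["pred"] for r in negatives) / len(negatives)
    return abs(fpr("A") - fpr("B"))

# Hypothetical predictions ("pred") and ground truth ("label").
data = [
    {"group": "A", "pred": 1, "label": 1},
    {"group": "A", "pred": 1, "label": 0},
    {"group": "A", "pred": 0, "label": 0},
    {"group": "B", "pred": 1, "label": 1},
    {"group": "B", "pred": 0, "label": 0},
    {"group": "B", "pred": 0, "label": 0},
]

# Group A is flagged at a rate of 2/3 and group B at 1/3, so statistical
# parity fails; the false-positive rates (1/2 vs. 0) also differ, so
# predictive equality fails as well.
print(statistical_parity_gap(data))
print(predictive_equality_gap(data))
```

The arithmetic illustrates the point of this note: each definition picks out a different quantity to equalize across groups, so choosing among them is a substantive choice, not a mere technicality.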

  6.

    National or international legislation against discrimination may supply the meaning of fairness to researchers and developers for their design and implementation of algorithms. However, there are two potential shortcomings in grounding the “fairness” in fair algorithms on national and international legislation. Firstly, the capacity of algorithms to identify patterns and correlations may engender new types of discrimination that are not based on common protected features, e.g., race and gender. Accordingly, the existing legislation is likely to be insufficient. Secondly, national and international legislation is often difficult and slow to change. Therefore, the idea of “fairness” in algorithmic fairness is likely to be conservative if it is based on the legislation. Of course, national and international legislation remains important to algorithmic fairness for identifying common types of discrimination.

  7.

    For instance, the reason to opt for a specific definition of fairness is often left unarticulated or implicit in the research, except for a few notable exceptions in which researchers and developers acknowledge or reflect on the normative ground of their choice of definition(s). See, e.g., Dwork et al. (2012) and Lipton et al. (2018).

  8.

    It is not entirely accurate to describe the incompatibility among different definitions of fairness as “the impossibility theorem.” There are indeed situations where some of the definitions of fairness in question can be satisfied simultaneously, but these situations are highly unrealistic, e.g., when we have a perfect predictor, or a trivial predictor that is either always-positive or always-negative (Miconi 2017).
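
The perfect-predictor escape route can be sketched in a few lines (my own illustration with hypothetical data, not drawn from Miconi’s proof): when predictions match labels exactly, an equalized-odds-style criterion (equal false-positive rates) and a calibration-style criterion (equal positive predictive values) hold simultaneously, even though the groups’ base rates differ.

```python
# Sketch (hypothetical data): with a perfect predictor (pred == label),
# otherwise-incompatible fairness criteria are jointly satisfied.

def fpr(records, g):
    """False-positive rate for group g (0.0 if the group has no true negatives)."""
    negatives = [r for r in records if r["group"] == g and r["label"] == 0]
    return sum(r["pred"] for r in negatives) / len(negatives) if negatives else 0.0

def ppv(records, g):
    """Positive predictive value for group g (1.0 if nothing is flagged)."""
    flagged = [r for r in records if r["group"] == g and r["pred"] == 1]
    return sum(r["label"] for r in flagged) / len(flagged) if flagged else 1.0

# Unequal base rates (2/3 vs. 1/3), yet every prediction matches its label.
perfect = ([{"group": "A", "pred": y, "label": y} for y in (1, 1, 0)] +
           [{"group": "B", "pred": y, "label": y} for y in (1, 0, 0)])

# False-positive rates are both 0 and predictive values are both 1: the
# criteria coexist here only because prediction is perfect, which is exactly
# the unrealistic condition noted above.
assert fpr(perfect, "A") == fpr(perfect, "B") == 0.0
assert ppv(perfect, "A") == ppv(perfect, "B") == 1.0
```

As soon as the predictor is imperfect and base rates differ, the equalities above can no longer all hold at once, which is the substance of the impossibility results.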

  9.

    This is not intended to be a knock-down argument against viewing algorithmic fairness primarily as a technical challenge. However, since, as I have argued, the focus on technical tasks can lead to a less critical attitude towards one’s idea of “fairness,” researchers and developers who see algorithmic fairness primarily as a technical challenge are more likely to be insensitive to the contentious nature of the definition of fairness.

  10.

    There is an important distinction to be made, in the discussion of the fair distribution of risk, between actualized harm and the risk of harm; see Hayenhjelm (2012) and Hayenhjelm and Wolff (2012). The debate on risk and distributive justice is beyond the scope of this paper; my argument only relies on the assumption that the distribution of risk and benefit is, in fact, an issue of fairness.

  11.

    Here, the claim about unfairness could at least be grounded on (i) a consequentialist perspective and (ii) a rights-based perspective. From the consequentialist perspective, the unfairness is due to a reduction of overall social good, whereas from the rights-based perspective, individuals have prima facie rights not to be exposed to a risk of harm (see Hayenhjelm and Wolff 2012).

  12.

    In this respect, the increasing number of researchers being more explicit about the values and normative grounds of various definitions of fairness is a welcome trend in the research on algorithmic fairness (see, e.g., Dwork et al. 2012; Friedler et al. 2016; Berk et al. 2018; Narayanan 2018).

  13.

    Hansson (2006) has forcefully questioned the applicability of (informed) consent in non-individualistic contexts. The discussion here is by no means an argument for the role of (informed) consent in justifying the imposition of risk by algorithms; it is merely an example of the kind of ethical issues that may arise.

  14.

    If one considers every use of algorithmic decision-making to be morally impermissible, then concerns over fairness in algorithms will cease to exist. The project of achieving fair algorithms presupposes some uses of algorithms to be morally permissible.

  15.

    However, even if there is no disagreement among different groups of stakeholders, I take it that the AFR-inspired framework I outline can enhance the “fairness” of the decision.

  16.

    My discussion only requires that there be at least some choices that are equally justifiable, thereby generating the requirement to justify one justifiable choice over another equally justifiable choice.

  17.

    For Rawls, the fact of reasonable pluralism amounts to “a pluralism of comprehensive religious, philosophical, and moral doctrines […] a pluralism of incompatible yet reasonable comprehensive doctrines” (Rawls 1993, p. xvi).

  18.

    Rawls argues that despite the differences in reasonable comprehensive doctrines, individuals in the society could still achieve mutual agreement on a political conception of justice through overlapping consensus; that is, individuals who subscribe to different comprehensive doctrines can agree on the political conception of justice for their own reasons and from their own moral points of view (cf. Rawls 1993, p. 134). Yet the agreement on the political conception of justice is necessarily thin, and thus it is insufficient to supply fine-grained normative principles to settle substantive value-related issues, e.g., prioritizing the interests of different groups of stakeholders (cf. Daniels 1993).

  19.

    Daniels and Sabin first proposed AFR in Daniels and Sabin (1997), and Daniels has since defended and applied AFR to various healthcare issues with Sabin and other colleagues. Note that this paper is not an exposition of AFR, and I shall not attempt to survey the extensive discussion on AFR. My discussion of AFR refers primarily to Daniels and Sabin (2008), which incorporates the earlier works on AFR and presents the most systematic account of it. However, I shall also refer to earlier works on AFR when I consider them more relevant to a specific point under discussion.

  20.

    The formulation of the four conditions I quoted is slightly different from the one presented in Daniels and Sabin (2008, p. 45). I refer to this formulation because it is explicitly targeted at the problem of priority-setting, and, as I point out, the choice of fairness measure and the balance between fairness and accuracy can be viewed as a priority-setting problem.

  21.

    Veale and Binns (2017) rightly point out that there are practical difficulties for private organizations in explicating the consequences of an algorithm and its distributional implications, for private organizations may not be able, or may not even be allowed, to possess and process the relevant data for such endeavors. I think, however, that the responses Veale and Binns provide in their paper can resolve these practical difficulties. I cannot discuss their responses in detail in this paper, but the proposed responses are compatible with the AFR-inspired framework I develop here.

  22.

    It is useful to caution that both Badano’s Full Acceptability condition and Daniels and Sabin’s Relevance condition risk over-intellectualizing public deliberation and thereby excluding views and voices that are not presented in a rational, argumentative form. Similarly, implicit in the Full Acceptability condition is the importance of achieving consensus, which, in turn, can lead to a suppression of differences. In response to these two concerns, it is useful to explore whether Young’s (2000) communicative democracy can broaden the inclusion of views and voices by introducing other modes of communication in public deliberation, e.g., greeting, rhetoric, and narrative, and whether Young’s ideal of differentiated solidarity, based on mutual respect and caring but not mutual identification, can avoid the suppression of differences (Young 2000, pp. 221–228).

  23.

    The more fundamental questions for the AFR-based framework, therefore, are about (i) the normative and practical viability of deliberative democracy and (ii) its proper scope. In other words, a more comprehensive account of the AFR-based framework requires one to defend deliberative democracy as a better alternative than other forms of democracy and to work out the institutional arrangements where individuals’ views and voices can be adequately communicated. It must also specify whose views and voices are to be included, e.g., citizens vs. non-citizens in the democratic society, and what questions are open for democratic deliberation, e.g., national security issues. Debates on theoretical and practical aspects of deliberative democracy have generated an enormous amount of research that I cannot summarize in this paper, but I shall acknowledge the significant role of deliberative democracy in normatively grounding my AFR-based framework. For a review of the prospect of deliberative democracy, see Curato et al. (2017).

  24.

    Binns (2018b) is an important exception to this claim, where he explores the phenomenon of algorithmic accountability in terms of the democratic ideal of public reason. While there are affinities between my discussion and Binns’ account, there are two important differences. Firstly, I attempt to demonstrate that the political dimension of the problem of algorithmic fairness is due to its internal features, particularly the impossibility theorem and the inherent trade-off between fairness and accuracy. Secondly, I attempt to offer a specific approach to grounding decision-makers’ accountability, namely Daniels and Sabin’s AFR.

  25.

    The other requirements listed in the report are related to “Accuracy, Validity, and Bias,” i.e., “Requirement 1: training datasets must measure the intended variables,” “Requirement 2: bias in statistical models must be measured and mitigated,” and “Requirement 3: tools must not conflate multiple distinct predictions” and to “Human-Computer Interface Issues,” i.e., “Requirement 4: predictions and how they are made must be easily interpretable,” “Requirement 5: tools should produce confidence estimates for their predictions,” and “Requirement 6: users of risk assessment tools must attend trainings on the nature and limitations of the tools.”

  26.

    This is not the only possible mapping of the four conditions onto the policy goals of AIAs and the requirements in the report by the Partnership on AI. The aim of this exercise is to demonstrate the affinity of the AFR-based framework with major ethical and governance principles.

  27.

    For example, see Ney and Verweij (2015) for an excellent discussion of different methods to engage the public and to accommodate the normative principles and values of different, conflicting worldviews in relation to wicked problems; but see also Hagendijk and Irwin (2006) for a discussion of the difficulties for public deliberation and deliberative democracy in science and technology policies.


  1. ACM US Public Policy Council [USACM] (2017). Statement on algorithmic transparency and accountability. Association for Computing Machinery. Accessed 23 April 2019.

  2. Angwin, J., & Larson, J. (2016). ProPublica responds to company’s critique of machine bias story. ProPublica, July 29, 2016.

  3. Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica, May 23, 2016.

  4. Arneson, R. (2018). Four conceptions of equal opportunity. The Economic Journal, 128(612), F152–F173.


  5. Badano, G. (2018). If you’re a Rawlsian, how come you’re so close to utilitarianism and intuitionism? A critique of Daniels’s accountability for reasonableness. Health Care Analysis, 26(1), 1–16.


  6. Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671–732.


  7. Berk, R., Heidari, H., Jabbari, S., Kearns, M., & Roth, A. (2018). Fairness in criminal justice risk assessments: the state of the art. Sociological Methods and Research, OnlineFirst.

  8. Binns, R. (2018a). Fairness in machine learning: lessons from political philosophy. Journal of Machine Learning Research, 81, 1–11.


  9. Binns, R. (2018b). Algorithmic accountability and public reason. Philosophy and Technology, 31(4), 543–556.


  10. Boyd, D., & Crawford, K. (2012). Critical questions for big data. Information, Communication and Society, 15(5), 662–679.


  11. Brey, P. A. E. (2010). Values in technology and disclosive computer ethics. In L. Floridi (Ed.), The Cambridge handbook of information and computer ethics (pp. 41–58). Cambridge: Cambridge University Press.


  12. Burrell, J. (2016). How the machine ‘thinks:’ understanding opacity in machine learning algorithms. Big Data and Society, 3, 1–12.


  13. Chouldechova, A. (2017). Fair prediction with disparate impact: a study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163.


  14. Chouldechova, A., & G’Sell, M. (2017). Fairer and more accurate, but for whom? Poster presented at: The 4th Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2017).

  15. Corbett-Davies, S., Pierson, E., Feller, A., & Goel, S. (2016). A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear. Washington Post, October 17, 2016.

  16. Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017). Algorithmic decision making and the cost of fairness. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ‘17), 797–806.

  17. Curato, N., Dryzek, J. S., Ercan, S. A., Hendriks, C. M., & Niemeyer, S. (2017). Twelve key findings in deliberative democracy research. Daedalus, 146(3), 28–38.


  18. Dahl, R. A. (1990). After the revolution? Authority in a good society, Revised Edition. New Haven: Yale University Press.


  19. Daniels, N. (1993). Rationing fairly: programmatic considerations. Bioethics, 7(2–3), 224–233.


  20. Daniels, N. (2010). Capabilities, opportunity, and health. In H. Brighouse & I. Robeyns (Eds.), Measuring justice: primary goods and capabilities (pp. 131–149). Cambridge: Cambridge University Press.


  21. Daniels, N., & Sabin, J. (1997). Limits to health care: fair procedures, democratic deliberation, and the legitimacy problem for insurers. Philosophy & Public Affairs, 26(4), 303–350.


  22. Daniels, N., & Sabin, J. (2000). The ethics of accountability in managed care reform. Health Affairs, 17(5), 50–64.


  23. Daniels, N., & Sabin, J. (2008). Setting limits fairly: Learning to share resources for health (2nd ed.). New York: Oxford University Press.


  24. Diakopoulos, N., Friedler, S., Arenas, M., Barocas, S., Hay, M., Howe, B., Jagadish, H. V., Unsworth, K., Sahuguet, A., Venkatasubramanian, S., Wilson, C., Yu, C., & Zevenbergen, B. (n.d.). Principles for accountable algorithms and a social impact statement for algorithms. Fairness, Accountability, and Transparency in Machine Learning. Accessed 23 April 2019.

  25. Dieterich, W., Mendoza, C., & Brennan, T. (2016). COMPAS risk scales: demonstrating accuracy equity and predictive parity. Northpointe Inc. Accessed 23 April 2019.

  26. Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2012). Fairness through awareness. Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, 214–226.

  27. Ford, A. (2015). Accountability for reasonableness: the relevance, or not, of exceptionality in resource allocation. Medicine, Health Care and Philosophy, 18(2), 217–227.


  28. Friedler, S., Scheidegger, C., & Venkatasubramanian, S. (2016). On the (im)possibility of fairness. arXiv preprint, arXiv:1609.07236.

  29. Friedler, S. A., Scheidegger, C., Venkatasubramanian, S., Choudhary, S., Hamilton, E. P., & Roth, D. (2019). A comparative study of fairness-enhancing interventions in machine learning. Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19), 329–338.

  30. Friedman, A. (2008). Beyond accountability for reasonableness. Bioethics, 22(2), 101–112.


  31. Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330–347.


  32. Grgić-Hlača, N., Zafar, M. B., Gummadi, K. P., & Weller, A. (2018). Beyond distributive fairness in algorithmic decision making: feature selection for procedurally fair learning. Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, 51–60.

  33. Gutmann, A., & Thompson, D. (1996). Democracy and disagreement. Cambridge: Harvard University Press.


  34. Gutmann, A., & Thompson, D. (2004). Why deliberative democracy. Princeton: Princeton University Press.


  35. Habermas, J. (1996). Between facts and norms: contributions to a discourse theory of law and democracy. Cambridge, MA: MIT Press.


  36. Hagendijk, R., & Irwin, A. (2006). Public deliberation and governance: engaging with science and technology in contemporary Europe. Minerva, 44(2), 167–184.


  37. Hansson, S. O. (2006). Informed consent out of context. Journal of Business Ethics, 63(2), 149–154.


  38. Hayenhjelm, M. (2012). What is a fair distribution of risk? In S. Roeser, R. Hillerbrand, P. Sandin, & M. Peterson (Eds.), Handbook of risk theory: epistemology, decision theory, ethics, and social implications of risk (pp. 910–929). Dordrecht: Springer.


  39. Hayenhjelm, M., & Wolff, J. (2012). The moral problem of risk impositions: a survey of the literature. European Journal of Philosophy, 20(S1), E26–E51.


  40. Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. Proceedings of the 8th Innovations in Theoretical Computer Science Conference (ITCS 2017), 43.

  41. Landwehr, C. (2013). Procedural justice and democratic institutional design in health-care priority-setting. Contemporary Political Theory, 12, 296–317.


  42. Lauridsen, S., & Lippert-Rasmussen, K. (2009). Legitimate allocation of public healthcare: beyond accountability for reasonableness. Public Health Ethics, 2(1), 59–69.


  43. Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy and Technology, 31(4), 611–627.


  44. Lipton, Z., Chouldechova, A., & McAuley, J. (2018). Does mitigating ML’s impact disparity require treatment disparity? In Proceedings of the Neural Information Processing Systems Conference 2018 (NIPS 2018). Accessed 23 April 2019.

  45. MacLean, D. (1982). Risk and consent: philosophical issues for centralized decisions. Risk Analysis, 2(2), 59–67.


  46. Matthias, A. (2004). The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183.


  47. McQuillan, D. (2018). People’s councils for ethical machine learning. Social Media + Society, 1–10.

  48. Miconi, T. (2017). The impossibility of "fairness": a generalized impossibility result for decisions. arXiv preprint, arXiv:1707.01195.

  49. Mitchell, S., & Shadlen, J. (2017). Fairness: notation, definitions, data, legality. Accessed 2 May 2019.

  50. Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: mapping the debate. Big Data and Society, 1–21.

  51. Nagel, T. (1979). Mortal questions. Cambridge: Cambridge University Press.


  52. Nagel, T. (1991). Equality and partiality. Oxford: Oxford University Press.


  53. Narayanan, A. (2018). 21 fairness definitions and their politics. Accessed 23 April 2019.

  54. Ney, S., & Verweij, M. (2015). Messy institutions for wicked problems: how to generate clumsy solutions? Environment and Planning C: Politics and Space, 33(6), 1679–1696.


  55. O’Neil, C. (2016). Weapons of math destruction: how big data increases inequality and threatens democracy. New York: Crown.


  56. Partnership on AI (2019). Report on algorithmic risk assessment tools in the US criminal justice system. Partnership on AI. Accessed 2 May 2019.

  57. Rawls, J. (1993). Political liberalism. New York: Columbia University Press.


  58. Reisman, D., Schultz, J., Crawford, K., & Whittaker, M. (2018). Algorithmic impact assessments: a practical framework for public agency accountability. AI Now Institute. Accessed 23 April 2019.

  59. Ryan, A. (2006). Fairness and philosophy. Social Research, 73(2), 597–606.


  60. Scanlon, T. (1982). Contractualism and utilitarianism. In A. Sen & B. Williams (Eds.), Utilitarianism and beyond (pp. 103–128). Cambridge: Cambridge University Press.


  61. Skirpan, M., & Gorelick, M. (2017). The authority of "fair" in machine learning. Paper presented at: The 4th Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2017). arXiv:1706.09976.

  62. Syrett, K. (2002). Nice work? Rationing, review and the ‘legitimacy problem’ in the new NHS. Medical Law Review, 10(1), 1–27.


  63. Temkin, L. (2017). The many faces of equal opportunity. Theory and Research in Education, 14(3), 255–276.


  64. Teuber, A. (1990). Justifying risk. Daedalus, 119(4), 235–254.


  65. Tsu, P. S.-H. (2018). Can the AFR approach stand up to the test of reasonable pluralism? The American Journal of Bioethics, 18(3), 61–62.


  66. Veale, M., & Binns, R. (2017). Fairer machine learning in the real world: mitigating discrimination without collecting sensitive data. Big Data and Society, 2017, 1–17.


  67. Whelan, F. G. (1983). Democratic theory and the boundary problem. In J. R. Pennock & J. W. Chapman (Eds.), Liberal democracy (pp. 13–47). New York: New York University Press.


  68. Woodruff, A., Fox, S. E., Rousso-Schindler, S., & Warshaw, J. (2018). A qualitative exploration of perceptions of algorithmic fairness. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Paper No. 656.

  69. Young, I. M. (2000). Inclusion and democracy. New York: Oxford University Press.


  70. Žliobaitė, I. (2017). Measuring discrimination in algorithmic decision making. Data Mining and Knowledge Discovery, 31(4), 1060–1089.



Author information



Corresponding author

Correspondence to Pak-Hang Wong.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



Cite this article

Wong, PH. Democratizing Algorithmic Fairness. Philos. Technol. 33, 225–244 (2020).



  • Algorithmic bias
  • Machine learning
  • Fairness
  • Democratization
  • Accountability for reasonableness