
Fairness as Equal Concession: Critical Remarks on Fair AI

  • Original Research/Scholarship
  • Published in: Science and Engineering Ethics

Abstract

Although existing work draws attention to a range of obstacles in realizing fair AI, the field lacks an account that emphasizes how these worries hang together in a systematic way. Furthermore, a review of the fair AI and philosophical literature demonstrates the unsuitability of ‘treat like cases alike’ and other intuitive notions as conceptions of fairness. That review then generates three desiderata for a replacement conception of fairness valuable to AI research: (1) It must provide a meta-theory for understanding tradeoffs, entailing that it must be flexible enough to capture diverse species of objection to decisions. (2) It must not appeal to an impartial perspective (neutral data, objective data, or a final arbiter). (3) It must foreground the way in which judgments of fairness are sensitive to context, i.e., to historical and institutional states of affairs. We argue that a conception of fairness as appropriate concession in the historical iteration of institutional decisions meets these three desiderata. On the basis of this definition, we organize the insights of commentators into a process-structure map of the ethical territory that we hope will bring clarity to computer scientists and ethicists analyzing fair AI while clearing some ground for further technical and philosophical work.


Availability of Data and Material

All material is publicly available.

Code Availability

Not applicable.

Notes

  1. We aim here only to systematize worries about fairness in particular. For a recent higher-altitude survey of ethical issues surrounding AI, see Tsamados et al. (2021).

  2. It is also worth considering the history of the philosophy of fairness together with the history of societal-level algorithmic practice in general. See, e.g., Ochigame (2020).

  3. OED.

  4. Such mistakes, moreover, are those which a human is unlikely to make, as when imperceptible or irrelevant changes in an image provoke an ML system to erroneously label an object (Goodfellow et al., 2014). Tsamados et al. summarize certain frontiers of progress in generating artificial adversarial examples in order to make training sets more robust (Tsamados et al., 2021).

  5. It must be noted that there are challenges to the fixation on definitions invoking legal precedents in anti-discrimination law. Although anti-discrimination law may more or less neatly map onto quantitative measures of fairness (however they are contrived), that fixation may cover over other, more robust demands for social justice, such as those that would target structural conditions (Hoffmann, 2019). Fairness approaches that reduce to risk assessments based upon historical data may fatalistically encourage the carceral state in ways that attention to welfare provision might not (Ochigame, 2020). For a general treatment of the relation between EU non-discrimination law and AI fairness, see Wachter et al. (2020).

  6. Fazelpour and Lipton (2020) invoke the ideal/non-ideal theory distinction from political philosophy to diagnose the temptation to artificially limit the actual scope of fairness. Whereas ideal theory imagines a perfect world and seeks to resolve discrepancies between it and the actual world from that ideal standard, non-ideal theory orients itself from a description of the actual world and the manifold web of causes generating a given injustice, thus situating itself in a position to ameliorate an injustice while keeping track of diffuse burdens of responsibility (on account of that attention to material conditions). By limiting fairness definitions to parity outcomes, aspiringly fair AI systems instantiate localized expressions of naïve ideal theorizing, thereby passing off degenerate definitions of fairness as the complex and internally diverse everyday notion described above, that is, as fairness in general. Fazelpour and Lipton note industry AI products hastily certifying themselves as fair on account of controlling for demographic parity, placing some blame on the fair AI literature for making this possible: “In many papers, these fairness-inspired parity metrics are described as definitions of fairness and the resulting algorithms that satisfy the parities are claimed axiomatically to be fair” (Fazelpour & Lipton, 2020, p. 9).

  7. Reuben Binns explores various conversations from the history of political philosophy to try on different lenses for capturing what would make the states of affairs upon which AI systems bear fair or unfair, such as how classifiers relate to an individual’s responsibility, culpability, or desert (Binns, 2018). This is a valuable exercise in using the history of philosophy to see more clearly. Our project complements such efforts while actually settling upon a specific theoretical tool, namely, fairness as equal concession.

  8. We adapt this point from non-ideal theorists such as Elizabeth Anderson and Chris McMahon (see Anderson, 2013; McMahon, 2016).

  9. References to the Nicomachean Ethics are to book and chapter numbers. See Aristotle (1984).

  10. McMahon (2016).

References

  • American Medical Association. (2018). AMA passes first policy recommendations on augmented intelligence. Accessed at www.ama-assn.org/ama-passes-first-policy-recommendations-augmented-intelligence

  • Anderson, E. (2013). The imperative of integration. Princeton University Press.


  • Aristotle. (1984). The complete works of Aristotle: The revised Oxford translation (J. Barnes, Ed.). Princeton University Press.


  • Barabas, C., Virza, M., Dinakar, K., Ito, J., & Zittrain, J. (2018). Interventions over predictions: Reframing the ethical debate for actuarial risk assessment. In Conference on fairness, accountability and transparency (pp. 62–76). Association for Computing Machinery.

  • Binns, R. (2018). What can political philosophy teach us about algorithmic fairness? IEEE Security & Privacy, 16(3), 73–80.


  • Corbett-Davies, S., & Goel, S. (2018). The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv preprint arXiv:1808.00023

  • Crigger, E., & Khoury, C. (2019). Making policy on augmented intelligence in health care. AMA Journal of Ethics, 21(2), 188–191.


  • Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2012). Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference (pp. 214–226). Association for Computing Machinery.

  • Fazelpour, S., & Lipton, Z. C. (2020). Algorithmic fairness from a non-ideal perspective. In Proceedings of the AAAI/ACM conference on AI, ethics, and society (pp. 57–63). Association for Computing Machinery.

  • Feldman, M., Friedler, S. A., Moeller, J., Scheidegger, C., & Venkatasubramanian, S. (2015). Certifying and removing disparate impact. In ACM SIGKDD international conference on knowledge discovery and data mining. Association for Computing Machinery.

  • Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review. https://doi.org/10.1162/99608f92.8cd550d1


  • Friedler, S. A., Scheidegger, C., & Venkatasubramanian, S. (2016). On the (im)possibility of fairness. arXiv preprint arXiv:1609.07236

  • Goodfellow, I. J., Shlens, J., & Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572

  • Green, B. (2018). ‘Fair’ risk assessments: A precarious approach for criminal justice reform. In 5th Workshop on fairness, accountability, and transparency in machine learning (FAT/ML 2018).

  • Harrison, G., Hanson, J., Jacinto, C., Ramirez, J., & Ur, B. (2020). An empirical study on the perceived fairness of realistic, imperfect machine learning models. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 392–402). Association for Computing Machinery.

  • Hoffmann, A. L. (2019). Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7), 900–915.


  • Kitchin, R. (2014). Big Data, new epistemologies and paradigm shifts. Big Data & Society, 1(1), 2053951714528481.


  • McMahon, C. (2016). Reasonableness and fairness: A historical theory. Cambridge University Press.


  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 2053951716679679.


  • Mittelstadt, B. D., & Floridi, L. (2016). The ethics of big data: Current and foreseeable issues in biomedical contexts. Science and Engineering Ethics, 22(2), 303–341.


  • Ochigame, R. (2020). The long history of algorithmic fairness. In Phenomenal World. Retrieved December 11, 2020 from https://phenomenalworld.org/analysis/long-history-algorithmic-fairness

  • Rawls, J. (1971). A theory of justice. Belknap Press of Harvard University Press.


  • Strauss, D. A. (2002). Must like cases be treated alike? University of Chicago, Public Law Research Paper No. 24. https://doi.org/10.2139/ssrn.312180

  • Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., & Floridi, L. (2021). The ethics of algorithms: Key problems and solutions. AI & Society. https://doi.org/10.1007/s00146-021-01154-8


  • Wachter, S., Mittelstadt, B., & Russell, C. (2020). Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI. arXiv preprint arXiv:2005.05906


Funding

Work on this project was supported by US National Science Foundation Grant 1939728 FAI: Identifying, Measuring, and Mitigating Fairness Issues in AI.

Author information

Authors and Affiliations

Authors

Contributions

Equal.

Corresponding author

Correspondence to Christopher Yeomans.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

van Nood, R., Yeomans, C. Fairness as Equal Concession: Critical Remarks on Fair AI. Sci Eng Ethics 27, 73 (2021). https://doi.org/10.1007/s11948-021-00348-z

