
The Myth of Complete AI-Fairness

Part of the Lecture Notes in Computer Science book series (LNAI, volume 12721)


Just recently, IBM invited me to participate in a panel titled “Will AI ever be completely fair?” My first reaction was that it would surely be a very short panel, as the only possible answer is ‘no’. In this short paper, I wish to further motivate my position in that debate: “AI will never be completely fair. Nothing ever is. The point is not complete fairness, but the need to establish metrics and thresholds for fairness that ensure trust in AI systems”.
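To make the idea of “metrics and thresholds for fairness” concrete, here is a minimal illustrative sketch (not taken from the paper): one common way to operationalise fairness is to compare positive-decision rates across groups and check the gap against an agreed tolerance. The function names and the 0.1 threshold below are hypothetical choices, not anything the paper prescribes.

```python
# Hypothetical sketch: demographic parity as a measurable quantity
# with an explicitly chosen tolerance, rather than "complete" fairness.

def selection_rate(outcomes):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

def within_threshold(group_a, group_b, threshold=0.1):
    """True if the parity gap stays inside the agreed tolerance."""
    return demographic_parity_gap(group_a, group_b) <= threshold

# Invented example data: 60% vs 50% selection rates.
a = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]   # 6 of 10 selected
b = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]   # 5 of 10 selected
print(round(demographic_parity_gap(a, b), 3))  # 0.1
print(within_threshold(a, b))                  # True
```

The point of the sketch is the design choice, not the metric itself: the threshold is a policy decision that must be set and justified, which is exactly why “completely fair” is not the operative target.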


  • DOI: 10.1007/978-3-030-77211-6_1
  • Chapter length: 6 pages

Footnotes

  1. As quoted by Kate Crawford on Twitter, 1 April 2021.

  3. This example is at the core of the well-known ProPublica investigation of the COMPAS algorithm, used by courts in the US to determine recidivism risk.
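The disparity at issue in the ProPublica/COMPAS debate can be sketched in a few lines: even when two groups are scored by the same model, their false-positive rates (non-recidivists wrongly flagged high-risk) can differ sharply. The tiny dataset below is invented purely for illustration and does not reproduce the actual COMPAS figures.

```python
# Illustrative only: per-group false-positive rate on invented 0/1 data.
# label 1 = reoffended, prediction 1 = flagged high-risk.

def false_positive_rate(predictions, labels):
    """FPR = false positives / actual negatives, for paired 0/1 lists."""
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    negatives = sum(1 for y in labels if y == 0)
    return fp / negatives

# Group A: 1 of 4 non-recidivists wrongly flagged high-risk.
preds_a  = [1, 0, 0, 0, 1, 1]
labels_a = [0, 0, 0, 0, 1, 1]
# Group B: 2 of 4 non-recidivists wrongly flagged high-risk.
preds_b  = [1, 1, 0, 0, 1, 1]
labels_b = [0, 0, 0, 0, 1, 1]

print(false_positive_rate(preds_a, labels_a))  # 0.25
print(false_positive_rate(preds_b, labels_b))  # 0.5
```

As Kleinberg et al. [6] and Pleiss et al. [7] show, equalising this error rate across groups while also keeping scores calibrated is in general impossible, which is the formal backbone of the paper's argument against “complete” fairness.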


References

  1. Buolamwini, J., Gebru, T.: Gender shades: intersectional accuracy disparities in commercial gender classification. In: Conference on Fairness, Accountability and Transparency, pp. 77–91. PMLR (2018)


  2. Crawford, K.: The Atlas of AI. Yale University Press, London (2021)


  3. Dignum, V.: Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer, Cham (2019)


  4. Flores, A.W., Bechtel, K., Lowenkamp, C.T.: False positives, false negatives, and false analyses: a rejoinder to machine bias: there’s software used across the country to predict future criminals. And it’s biased against blacks. Fed. Probation 80, 38 (2016)


  5. Gray, M.L., Suri, S.: Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Eamon Dolan Books (2019)


  6. Kleinberg, J., Mullainathan, S., Raghavan, M.: Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807 (2016)

  7. Pleiss, G., Raghavan, M., Wu, F., Kleinberg, J., Weinberger, K.Q.: On fairness and calibration. arXiv preprint arXiv:1709.02012 (2017)

  8. Schnabel, T., Swaminathan, A., Singh, A., Chandak, N., Joachims, T.: Recommendations as treatments: debiasing learning and evaluation. In: International Conference on Machine Learning, pp. 1670–1679. PMLR (2016)


  9. Sumpter, D.: Outnumbered: From Facebook and Google to Fake News and Filter Bubbles, the Algorithms that Control Our Lives. Bloomsbury Publishing, London (2018)



Author information

Corresponding author: Virginia Dignum.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Dignum, V. (2021). The Myth of Complete AI-Fairness. In: Tucker, A., Henriques Abreu, P., Cardoso, J., Pereira Rodrigues, P., Riaño, D. (eds) Artificial Intelligence in Medicine. AIME 2021. Lecture Notes in Computer Science, vol 12721. Springer, Cham.


  • DOI: 10.1007/978-3-030-77211-6_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-77210-9

  • Online ISBN: 978-3-030-77211-6

  • eBook Packages: Computer Science, Computer Science (R0)