Abstract
Recently, IBM invited me to participate in a panel titled “Will AI ever be completely fair?” My first reaction was that it would surely be a very short panel, as the only possible answer is ‘no’. In this short paper, I wish to further motivate my position in that debate: “AI will never be completely fair. Nothing ever is. The point is not complete fairness, but the need to establish metrics and thresholds for fairness that ensure trust in AI systems.”
Notes
- 1.
As quoted by Kate Crawford on Twitter https://twitter.com/katecrawford/status/1377551240146522115; 1 April 2021.
- 3.
This example is at the core of the well-known ProPublica investigation of the COMPAS algorithm used by courts in the US to determine recidivism risk: www.propublica.org/article/how-we-analyzed-the-compasrecidivism-algorithm.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Dignum, V. (2021). The Myth of Complete AI-Fairness. In: Tucker, A., Henriques Abreu, P., Cardoso, J., Pereira Rodrigues, P., Riaño, D. (eds) Artificial Intelligence in Medicine. AIME 2021. Lecture Notes in Computer Science, vol 12721. Springer, Cham. https://doi.org/10.1007/978-3-030-77211-6_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-77210-9
Online ISBN: 978-3-030-77211-6
eBook Packages: Computer Science (R0)