Bias and Discrimination in Machine Decision-Making Systems

Chapter in Ethics of Artificial Intelligence

Part of the book series: The International Library of Ethics, Law and Technology (ELTE, volume 41)

Abstract

There is a perception, sometimes mistaken, that involving machines in decision-making leads to better outcomes. The rationale is that machines are more trustworthy: they are not prone to error and have superior knowledge with which to deduce what is optimal. Machines, however, are designed by humans and learn from human-generated data. A machine can therefore be affected by the same problems that afflict humans, whether through design flaws, deliberately skewed design, or data biased by human behavior. Worse, a machine's failure is far more serious than a human's, mainly for three reasons: machines are massive, invisible, and sovereign. When machine decision-making systems are applied to highly sensitive problems such as employee hiring, credit risk assessment, the granting of subsidies, or medical diagnosis, a single failure can disadvantage thousands of people. Many of these errors result in the unfair treatment of minority groups (defined, for example, by ethnicity or gender) and thus constitute discrimination. This chapter reviews different forms and definitions of machine discrimination, identifies the causes that lead to it, and discusses solutions to avoid or at least mitigate its harmful effects.

Author information

Correspondence to Jorge Casillas.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

Cite this chapter

Casillas, J. (2023). Bias and Discrimination in Machine Decision-Making Systems. In: Lara, F., Deckers, J. (eds) Ethics of Artificial Intelligence. The International Library of Ethics, Law and Technology, vol 41. Springer, Cham. https://doi.org/10.1007/978-3-031-48135-2_2
