Machine learning’s limitations in avoiding automation of bias

Open Forum · AI & SOCIETY

Abstract

The use of predictive systems has expanded with the development of the related computational methods and the evolution of the sciences in which these methods are applied (Barocas and Selbst 2016; Pedreschi et al. 2007). These methods include machine learning techniques, face and/or voice recognition, temperature mapping, and others within the artificial intelligence domain. They are being applied to problems in socially and politically sensitive areas such as crime prevention and justice management, crowd management, and emotion analysis, to mention just a few. However, the application of these methods can also produce divergent predictions and misclassification, for example in conviction risk assessment (Office of Probation and Pretrial Services 2011) or in decision-making processes for the design of public policies (Lange 2015). The goal of this paper is to identify current gaps in achieving fairness within the context of predictive systems in artificial intelligence by analyzing the academic and scientific literature available up to 2020. To this end, we gathered materials indexed in the Web of Science and Scopus from the last 5 years and analyzed the different proposed methods and their results in relation to bias as an emergent issue in the artificial intelligence field of study. Our tentative conclusion is that machine learning has intrinsic limitations that lead to the automation of bias when designing predictive algorithms. Consequently, other methods should be explored, or we should redefine the way current machine learning approaches are used when building decision-making and decision-support systems for crucial institutions of our political systems, such as the judicial system, to name just one.
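
To make concrete the kind of fairness measure the surveyed literature builds on, the following minimal Python sketch (ours, not from the paper) computes the disparate impact ratio formalized by Feldman et al. (2015). The decisions, group labels, and the 0.8 threshold of the common "four-fifths" rule of thumb are illustrative assumptions, not data from any cited study.

    # Illustrative sketch: disparate impact ratio for a hypothetical binary
    # classifier's decisions over two demographic groups ("A" and "B").

    def disparate_impact(decisions, groups, protected="B", reference="A"):
        """Ratio of favorable-outcome rates: P(y=1 | protected) / P(y=1 | reference)."""
        def positive_rate(group):
            outcomes = [d for d, g in zip(decisions, groups) if g == group]
            return sum(outcomes) / len(outcomes)
        return positive_rate(protected) / positive_rate(reference)

    # Hypothetical decisions (1 = favorable outcome) and group membership.
    decisions = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
    groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    ratio = disparate_impact(decisions, groups)
    # The "four-fifths" rule flags ratios below 0.8 as evidence of adverse impact.
    print(f"disparate impact ratio = {ratio:.2f}",
          "-> flagged" if ratio < 0.8 else "-> ok")

A ratio well below 0.8 for the protected group is the usual signal that a predictive system is reproducing, and thereby automating, a bias present in its training data.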


Notes

  1. The analysis of the case scenarios is reported in a later study, in keeping with the organization and communication strategy of our research project.

References

  • Access Now (2018) The Toronto declaration: protecting the rights to equality and non-discrimination in machine learning systems. Retrieved from https://www.accessnow.org/the-toronto-declaration-protecting-the-rights-to-equality-and-non-discrimination-in-machine-learning-systems/

  • Ayat S, Farahani HA, Aghamohamadi M, Alian M, Aghamohamadi S, Kazemi Z (2013) A comparison of artificial neural networks learning algorithms in predicting tendency for suicide. Neural Comput Appl 23(5):1381–1386. https://doi.org/10.1007/s00521-012-1086-z

  • Geyik SC, Ambler S, Kenthapadi K (2019) Fairness-aware ranking in search and recommendation systems with application to LinkedIn talent search. ACM KDD. arXiv:1905.01989v3

  • Chouldechova A (2017) Fair prediction with disparate impact: a study of bias in recidivism prediction instruments. Big Data 5(2):153–163. https://doi.org/10.1089/big.2016.0047

  • Cofone I (2019) Antidiscriminatory privacy. SMU Law Rev 72(1):139–176. Retrieved from https://scholar.smu.edu/smulr/vol72/iss1/11

  • Dwork C, Hardt M, Pitassi T, Reingold O, Zemel R (2011) Fairness through awareness. arXiv preprint arXiv:1104.3913

  • European Group on Ethics in Science and New Technologies (2018) Statement on artificial intelligence, robotics and ‘autonomous’ systems. Publications Office of the European Union, Luxembourg. https://doi.org/10.2777/531856

  • Feldman M, Friedler SA, Moeller J, Scheidegger C, Venkatasubramanian S (2015) Certifying and removing disparate impact. In: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, pp 259–268

  • Fish B, Kun J, Lelkes ÁD (2016) A confidence-based approach for balancing fairness and accuracy. In: Proceedings of the 2016 SIAM International Conference on Data Mining. Society for Industrial and Applied Mathematics, pp 144–152

  • Fjeld J, Achten N, Hilligoss H, Nagy A, Srikumar M (2020) Principled artificial intelligence: mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center for Internet & Society. Retrieved from https://dash.harvard.edu/handle/1/42160420

  • Gao R, Shah C (2020) Toward creating a fairer ranking in search engine results. Inf Process Manage 57:1–19. https://doi.org/10.1016/j.ipm.2019.102138

  • Hardt M, Price E, Srebro N (2016) Equality of opportunity in supervised learning. arXiv:1610.02413 [cs.LG]

  • Kharal A (2014) A neutrosophic multi-criteria decision making method. New Math Nat Comput 10(2):143–162

  • Lange AR (2015) Digital decisions: policy tools in automated decision-making. Center for Democracy and Technology, Washington, DC

  • Mondal K, Pramanik S (2015) Rough neutrosophic multi-attribute decision-making based on grey relational analysis. Neutrosophic Sets Syst 7:8–17

  • Office of Probation and Pretrial Services (2011) An overview of the federal post conviction risk assessment. Administrative Office of the United States Courts.

  • Pedreschi D, Ruggieri S, Turini F (2007) Discrimination-aware data mining. Technical Report TR-07-19. Dipartimento di Informatica, Università di Pisa, Pisa

  • Pedreschi D, Ruggieri S, Turini F (2008) Discrimination-aware data mining. In: Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, pp 560–568

  • Pierson E, Corbett-Davies S, Goel S (2018) Fast threshold tests for detecting discrimination. In: Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS) 2018, Lanzarote, Spain. arXiv:1702.08536v3

  • Vural MS, Gök M (2017) Criminal prediction using Naive Bayes theory. Neural Comput Appl 28(9):2581–2592. https://doi.org/10.1007/s00521-016-2205-z

  • Smarandache F (2015) Symbolic neutrosophic theory. Infinite Study

  • Barocas S, Selbst AD (2016) Big data's disparate impact. Calif L Rev 104:671–732

  • Université de Montréal (2018) Montréal declaration for a responsible development of artificial intelligence. Université de Montréal, Montréal

  • Varona D (2018) La responsabilidad ética del diseñador de sistemas en inteligencia artificial [The ethical responsibility of the designer of artificial intelligence systems]. Revista de Occidente, July–August (446–447):104–114

  • Walker T (2017) How much …? The rise of dynamic and personalised pricing. The Guardian. Retrieved April 2018, from https://www.theguardian.com/global/2017/nov/20/dynamic-personalised-pricing

  • Zafar MB, Valera I, Gomez Rodriguez M, Gummadi KP (2015) Fairness constraints: mechanisms for fair classification. arXiv preprint arXiv:1507.05259

  • Zemel R, Wu Y, Swersky K, Pitassi T, Dwork C (2013) Learning fair representations. In: Proceedings of the 30th International Conference on Machine Learning. Atlanta, Georgia, USA: JMLR:W&CP volume 28, pp 325–333.

Acknowledgements

The authors wish to acknowledge the great contribution of the editor and reviewers to the present manuscript. The refereeing process has been very valuable and constructive.

Author information

Corresponding author

Correspondence to Daniel Varona.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Varona, D., Lizama-Mue, Y. & Suárez, J.L. Machine learning’s limitations in avoiding automation of bias. AI & Soc 36, 197–203 (2021). https://doi.org/10.1007/s00146-020-00996-y
