Fair Enough? On (Avoiding) Bias in Data, Algorithms and Decisions

  • Francien Dechesne
Chapter
Part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT, volume 576)

Abstract

This contribution explores bias in automated decision systems from a conceptual, (socio-)technical and normative perspective. In particular, it discusses the role of computational methods and mathematical models when striving for “fairness” of decisions involving such systems.
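As a purely illustrative aside (not drawn from the chapter itself), the minimal sketch below shows one of the mathematical formalizations of "fairness" the abstract alludes to: demographic parity, which asks that the rate of favourable decisions be (approximately) equal across groups. The function name and the toy data are hypothetical.

def demographic_parity_gap(decisions, groups):
    """Absolute difference in favourable-decision rates between two groups."""
    rates = {}
    for g in set(groups):
        group_decisions = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(group_decisions) / len(group_decisions)
    a, b = sorted(rates)  # assumes exactly two groups
    return abs(rates[a] - rates[b])

# Toy data: 1 = favourable decision, 0 = unfavourable.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.5: group A is favoured

Demographic parity is only one of many competing criteria, and satisfying it can conflict with other plausible fairness requirements, a tension well documented in the fairness literature.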

Keywords

Bias · Data analytics · Algorithmic decision systems · Fairness

Acknowledgment

This work is part of the SCALES project, funded by the Dutch Research Council (NWO) under the MVI programme, project number 313-99-315.

Copyright information

© IFIP International Federation for Information Processing 2020

Authors and Affiliations

  1. eLaw Center for Law and Digital Technologies, Leiden University Law School, Leiden, The Netherlands
