AI & Human Values

Inequalities, Biases, Fairness, Nudge, and Feedback Loops

Chapter in: Reflections on Artificial Intelligence for Humanity

Abstract

This chapter summarizes the contributions of Ricardo Baeza-Yates, Francesco Bonchi, Kate Crawford, Laurence Devillers and Eric Salobir to the session on AI & Human Values chaired by Françoise Fogelman-Soulié at the Global Forum on AI for Humanity. It provides an overview of key concepts and definitions relevant to the study of inequalities and Artificial Intelligence. It then presents and discusses concrete examples of inequalities produced by AI systems, highlighting their variety and potential harmfulness. Finally, it concludes by discussing how putting human values at the core of AI requires answering many questions that remain open for further research.


Notes

  1. https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/.

  2. https://www.technologyreview.com/2018/03/27/144290/microsofts-neo-nazi-sexbot-was-a-great-lesson-for-makers-of-ai-assistants/.

  3. https://www.mic.com/articles/156286/crime-prediction-tool-pred-pol-only-amplifies-racially-biased-policing-study-shows.

  4. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

  5. See also chapter 2 of this book.

  6. https://ec.europa.eu/info/aid-development-cooperation-fundamental-rights/your-rights-eu/eu-charter-fundamental-rights_en.

  7. https://www.cigionline.org/internet-survey-2019.

  8. See also chapter 3 of this book.

  9. https://www.lexico.com/en/definition/bias.

  10. https://en.wikipedia.org/wiki/Bias.

  11. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G/.

  12. https://www.lexico.com/en/definition/fairness.

  13. https://www.lexico.com/en/definition/discrimination.

  14. https://vis-www.cs.umass.edu/lfw/.

  15. https://www.carrotrewards.ca/.

  16. https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf.

  17. See also chapters 9, 10 and 11 of this book.

  18. See also chapters 1 and 15 of this book.

Author information

Correspondence to Françoise Fogelman-Soulié.

Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Devillers, L., Fogelman-Soulié, F., Baeza-Yates, R. (2021). AI & Human Values. In: Braunschweig, B., Ghallab, M. (eds) Reflections on Artificial Intelligence for Humanity. Lecture Notes in Computer Science, vol 12600. Springer, Cham. https://doi.org/10.1007/978-3-030-69128-8_6

  • DOI: https://doi.org/10.1007/978-3-030-69128-8_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-69127-1

  • Online ISBN: 978-3-030-69128-8

  • eBook Packages: Computer Science, Computer Science (R0)
