Science and Engineering Ethics, Volume 24, Issue 5, pp 1521–1536

The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity

  • Ayanna Howard
  • Jason Borenstein
Original Paper

Abstract

Recently, there has been an upsurge of attention focused on bias and its impact on specialized artificial intelligence (AI) applications. Allegations of racism and sexism have permeated the conversation as stories surface about search engines delivering job postings for well-paying technical jobs to men and not women, or providing arrest mugshots when keywords such as “black teenagers” are entered. Learning algorithms are evolving; they are often created by parsing large datasets of online information, with truth labels bestowed on them by crowd-sourced masses. These specialized AI algorithms have been liberated from the minds of researchers and startups, and released to the public. Yet intelligent though they may be, these algorithms maintain some of the same biases that permeate society. They find patterns within datasets that reflect implicit biases and, in so doing, emphasize and reinforce those biases as global truth. This paper describes specific examples of how bias has infused itself into current AI and robotic systems, and how it may affect the future design of such systems. More specifically, we draw attention to how bias may affect the functioning of (1) a robot peacekeeper, (2) a self-driving car, and (3) a medical robot. We conclude with an overview of measures that could be taken to mitigate bias or prevent it from permeating robotic technology.
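The mechanism the abstract describes, a learning algorithm latching onto patterns in historically biased training labels and reproducing them as "truth," can be illustrated with a deliberately minimal sketch. The hiring scenario, data values, and the majority-vote "classifier" below are hypothetical constructions for illustration only, not from the paper:

```python
# Toy sketch (hypothetical data): a model trained on labels that encode a
# historical bias will reproduce that bias at prediction time.
from collections import Counter

# Hypothetical training records: (qualification_score, group) -> hired?
# The historical labels favor group "A" over group "B" at equal qualification.
train = [
    ((0.9, "A"), 1), ((0.8, "A"), 1), ((0.7, "A"), 1),
    ((0.9, "B"), 1), ((0.8, "B"), 0), ((0.7, "B"), 0),
]

def predict(group):
    """Predict via the majority label within the group -- the simplest
    possible 'pattern finder', which latches onto the group attribute."""
    labels = [hired for (score, g), hired in train if g == group]
    return Counter(labels).most_common(1)[0][0]

print(predict("A"))  # 1 -- group A is predicted 'hired'
print(predict("B"))  # 0 -- group B inherits the historical disadvantage
```

Nothing in the algorithm is malicious; it simply finds the strongest regularity in the data, and the strongest regularity here is the bias itself, which the model then re-emits as if it were ground truth.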

Keywords

Artificial intelligence · Implicit bias · Design ethics · Professional ethics · Robot ethics


Copyright information

© Springer Science+Business Media B.V. 2017

Authors and Affiliations

  1. School of Public Policy, Georgia Institute of Technology, Atlanta, USA
  2. School of Electrical & Computer Engineering, Georgia Institute of Technology, Atlanta, USA
