Abstract
Recently, there has been an upsurge of attention focused on bias and its impact on specialized artificial intelligence (AI) applications. Allegations of racism and sexism have permeated the conversation as stories surface about search engines delivering job postings for well-paying technical jobs to men but not to women, or returning arrest mugshots when keywords such as “black teenagers” are entered. Learning algorithms are evolving; they are often built by parsing large datasets of online information, with ground-truth labels supplied by crowd-sourced masses. These specialized AI algorithms have been liberated from the minds of researchers and startups and released upon the public. Yet intelligent though they may be, these algorithms retain some of the same biases that permeate society. They find patterns within datasets that reflect implicit biases and, in so doing, emphasize and reinforce these biases as global truth. This paper describes specific examples of how bias has infused itself into current AI and robotic systems, and how it may affect the future design of such systems. More specifically, we draw attention to how bias may affect the functioning of (1) a robot peacekeeper, (2) a self-driving car, and (3) a medical robot. We conclude with an overview of measures that could be taken to mitigate or prevent bias from permeating robotic technology.
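To make that mechanism concrete, the sketch below shows one common way a learning algorithm can absorb and reproduce bias from its training data. The scenario, feature names (`skill`, `proxy`, `group`), and numbers are illustrative assumptions rather than an analysis from this paper: a classifier trained on historically biased hiring labels reproduces the disparity even when the protected attribute is withheld, because a correlated proxy feature lets the bias back in.

```python
# Illustrative sketch (hypothetical data): a model trained on biased labels
# reproduces the bias through a proxy feature, without ever seeing the
# protected attribute directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)            # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)              # genuinely job-relevant feature
proxy = group + rng.normal(0, 0.5, n)    # e.g. zip code, correlated with group

# Historical labels encode human bias: at equal skill, group 1 was hired less often.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, proxy])      # note: `group` itself is NOT a feature
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2f}")
# Although both groups have the same skill distribution, the predicted hire
# rates differ, because the proxy lets the model recover the historical bias.
```

Simply removing the protected attribute from the feature set is therefore not, by itself, a safeguard; mitigation also requires examining the labels and any correlated features.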
Notes
Refer to http://moralmachine.mit.edu/ (accessed July 3, 2017).
For more information refer to https://implicit.harvard.edu/implicit/ (accessed July 3, 2017).
For example, see IEEE PROJECT: P7003—Algorithmic Bias Considerations, https://standards.ieee.org/develop/project/7003.html (accessed August 24, 2017).
Cite this article
Howard, A., Borenstein, J. The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity. Sci Eng Ethics 24, 1521–1536 (2018). https://doi.org/10.1007/s11948-017-9975-2
Keywords
- Artificial intelligence
- Implicit bias
- Design ethics
- Professional ethics
- Robot ethics