Abstract
This chapter focuses on the concept of artificial intelligence (AI) and examines whether AI reinforces the social construct of racism or whether it promises to provide opportunity and justice in areas where they were previously lacking. Common examples of the everyday use of AI are discussed to show how this technology has permeated multiple sectors of life and continues to expand. The chapter also explores the notion that AI inherits the biases of its creators, raising questions about who is building this technology, how it should be used, and who is the intended beneficiary of its use. For experts and novices in the field, this chapter calls for critical consideration of what is being developed. For individuals unfamiliar with AI, it is intended to bring awareness of the technology being used around them and how it has already impacted their lives.
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Roberts, A.L., Richardson, B., Alikhademi, K., Drobina, E., Gilbert, J.E. (2021). General Perspectives Toward the Impact of AI on Race and Society. In: Pearson Jr., W., Reddy, V. (eds) Social Justice and Education in the 21st Century. Diversity and Inclusion Research. Springer, Cham. https://doi.org/10.1007/978-3-030-65417-7_18
DOI: https://doi.org/10.1007/978-3-030-65417-7_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-65416-0
Online ISBN: 978-3-030-65417-7
eBook Packages: Business and Management (R0)