Abstract
Financial institutions in the United States and Europe have a long-documented history of bias against minority groups. Such bias often shows up in who is offered or denied credit, and minorities are frequently charged higher interest rates than their counterparts (Stefan et al. 2018; Bartlett et al. 2019; Aldén and Hammarstedt 2016; Lloyd, Bo, and John 2005; Solomon, Alper, and Philip 2013; Patatouka and Fasianos 2015). At the time of this writing, most leading financial institutions are doing little to combat the problem of systemic bias in the banking sector and, more generally, in access to finance. At the same time, financial institutions increasingly rely on AI to improve operational efficiency through automation, and this reliance is likely to amplify bias against minorities. The reason is that most, if not all, AI systems built in the banking sector today are trained on historical data that inevitably reflects the current state of the industry; stated differently, the data is likely to embed the very flaws one is trying to combat. More importantly, as discussed in the “Beyond Traditional AI Performance Metrics” section of Chapter 5, bias resulting from an algorithm’s behavior operates at the institutional level and therefore has a wider reach than bias exhibited by individual humans.
Notes
1. This probably applies to other regions of the world as well; however, published studies examining the topic of financial discrimination against minorities are mostly focused on the United States and Europe.
2. The system incorrectly indicates that the applicant is a good credit risk when the applicant is in fact a bad credit risk.
3. The system incorrectly indicates that the applicant is a bad credit risk when the applicant is in fact a good credit risk.
Copyright information
© 2021 The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature
Cite this chapter
Tsafack Chetsa, G.L. (2021). SAIF in Action: A Case Study. In: Towards Sustainable Artificial Intelligence. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-7214-5_6
Print ISBN: 978-1-4842-7213-8
Online ISBN: 978-1-4842-7214-5
eBook Packages: Professional and Applied Computing, Apress Access Books