Abstract
Issues surrounding bias and discrimination in housing markets have long been acknowledged and discussed, both in the literature and in practice. In this study, we investigate this issue specifically in the context of mortgage applications, through the lens of an AI-based decision support system. Using data disclosed under the Home Mortgage Disclosure Act (HMDA), we first show that ethnicity bias does exist in historical mortgage application approvals: black applicants are more likely to be denied a mortgage than white applicants whose circumstances are otherwise similar. More interestingly, this bias is amplified when an off-the-shelf machine-learning model is used to recommend approval/denial decisions. Finally, when fair machine-learning algorithms are adopted to alleviate such biases, we find that the “fairness” actually leaves all stakeholders—black applicants, white applicants, and mortgage lenders—worse off. Our findings caution against the use of machine-learning models without human involvement when the decision has significant implications for the prediction subjects.
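The disparity the abstract describes is typically quantified as a gap in approval rates between demographic groups, before and after a model is fit. The sketch below illustrates the idea on synthetic data (not the HMDA data, and not the authors' actual pipeline): historical labels encode a group-dependent approval threshold, an off-the-shelf logistic regression is trained on those labels, and the demographic parity difference of its predictions is computed. All variable names and the data-generating process are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for HMDA-style data: one income-like feature plus a
# binary group indicator (0 = majority group, 1 = minority group).
n = 5000
group = rng.integers(0, 2, size=n)
income = rng.normal(loc=60 - 10 * group, scale=15, size=n)
X = np.column_stack([income, group])

# Historical approvals depend on income and, unfairly, on group membership
# (the minority group faces a higher effective threshold).
y = (income + rng.normal(scale=10, size=n) > 50 + 5 * group).astype(int)

# An "off-the-shelf" model trained on biased historical labels
# reproduces (and can amplify) the disparity.
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Demographic parity difference: gap in predicted approval rates.
rate_majority = pred[group == 0].mean()
rate_minority = pred[group == 1].mean()
dpd = rate_majority - rate_minority
print(f"predicted approval-rate gap (majority - minority): {dpd:.3f}")
```

A positive gap indicates the model approves the majority group at a higher rate than the minority group for this synthetic population. Fairness-constrained training (e.g., reductions-style approaches) would shrink this gap, typically at some cost to predictive accuracy, which is the trade-off the abstract's final finding concerns.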
Funding
This work is financially supported by the Social Science and Humanities Research Council of Canada (SSHRC) grant number 892-2021-1014.
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
About this article
Cite this article
Zou, L., Khern-am-nuai, W. AI and housing discrimination: the case of mortgage applications. AI Ethics 3, 1271–1281 (2023). https://doi.org/10.1007/s43681-022-00234-9