Abstract
With the widespread adoption of machine learning, the impact of discriminatory bias on model outputs has received increasing attention, and various methods for mitigating such bias have been proposed. Among these biases, intersectional bias is particularly difficult to remove: it arises when some subgroups within the same protected group are treated worse than others. Although several methods for mitigating intersectional bias have been developed, their applicable use-case scenarios are limited. To broaden the scenarios in which intersectional bias can be mitigated, we propose a method called One-vs.-One Mitigation, which applies a pairwise comparison between subgroups defined by sensitive attributes to fairness-aware machine learning for binary classification. We compare our method with conventional fairness-aware binary classification methods in comprehensive scenarios spanning three approaches (pre-, in-, and post-processing), three metrics (demographic parity, equalized odds, and equal opportunity), and a real-world dataset. Experimental results show that our method mitigates intersectional bias much better than conventional methods in all scenarios. These findings open a potential path for fairness-aware binary classification toward more realistic problems involving multiple sensitive attributes.
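Since the abstract describes the method only in words, a minimal sketch may help make the pairwise idea concrete. The Python sketch below assumes a post-processing-style setting; the names enumerate_subgroups, one_vs_one_mitigation, and mitigate_pair, along with the toy data, are illustrative assumptions and not the authors' implementation.

    # Illustrative sketch only -- NOT the authors' implementation.
    import itertools
    import numpy as np

    def enumerate_subgroups(sensitive):
        # Subgroups are the unique combinations of sensitive attribute
        # values, e.g. (gender, race) -> (F, A), (F, B), (M, A), (M, B).
        return sorted(set(map(tuple, sensitive)))

    def demographic_parity_gap(y_pred, sensitive, g1, g2):
        # |P(yhat = 1 | g1) - P(yhat = 1 | g2)| for one pair of subgroups;
        # demographic parity is one of the three metrics the abstract names.
        in_g1 = np.all(sensitive == np.array(g1), axis=1)
        in_g2 = np.all(sensitive == np.array(g2), axis=1)
        return abs(y_pred[in_g1].mean() - y_pred[in_g2].mean())

    def one_vs_one_mitigation(y_pred, sensitive, mitigate_pair):
        # Apply a conventional two-group fairness method (`mitigate_pair`,
        # hypothetical) to every pair of subgroups in turn, so that no
        # pair of subgroups is left with a large fairness gap.
        y_pred = y_pred.copy()
        for g1, g2 in itertools.combinations(enumerate_subgroups(sensitive), 2):
            mask = (np.all(sensitive == np.array(g1), axis=1)
                    | np.all(sensitive == np.array(g2), axis=1))
            y_pred[mask] = mitigate_pair(y_pred[mask], sensitive[mask], g1, g2)
        return y_pred

    # Toy usage: two sensitive attributes yield four subgroups.
    rng = np.random.default_rng(0)
    sensitive = np.array([["F", "A"], ["F", "B"], ["M", "A"], ["M", "B"]] * 25)
    y_pred = rng.integers(0, 2, size=100)
    g = enumerate_subgroups(sensitive)
    print(demographic_parity_gap(y_pred, sensitive, g[0], g[1]))
    identity = lambda y, s, g1, g2: y  # stand-in; a real method adjusts y
    mitigated = one_vs_one_mitigation(y_pred, sensitive, identity)

The point this sketch tries to capture is that mitigate_pair can stand for any conventional two-group method (pre-, in-, or post-processing), which is what broadens the applicable use-case scenarios to multiple sensitive attributes.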
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Kobayashi, K., Nakao, Y. (2022). One-vs.-One Mitigation of Intersectional Bias: A General Method for Extending Fairness-Aware Binary Classification. In: de Paz Santana, J.F., de la Iglesia, D.H., López Rivero, A.J. (eds) New Trends in Disruptive Technologies, Tech Ethics and Artificial Intelligence. DiTTEt 2021. Advances in Intelligent Systems and Computing, vol 1410. Springer, Cham. https://doi.org/10.1007/978-3-030-87687-6_5
DOI: https://doi.org/10.1007/978-3-030-87687-6_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-87686-9
Online ISBN: 978-3-030-87687-6
eBook Packages: Intelligent Technologies and Robotics (R0)