Abstract
The need to measure and mitigate bias in machine learning data sets has gained wide recognition in the field of Artificial Intelligence (AI) during the past decade. The academic and business communities call for new general-purpose measures to quantify bias. In this paper, we propose a new measure that relies on fuzzy-rough set theory. The intuition behind our measure is that protected features should not change the fuzzy-rough set boundary regions significantly. The extent to which this happens can be understood as a proxy for bias quantification. Our measure can be categorized as an individual fairness measure since the fuzzy-rough regions are computed from instance-level information. The main advantage of our measure is that it does not depend on any prediction model, only on a distance function. At the same time, it offers an intuitive rationale for the concept of bias. Proof-of-concept results show that our measure captures bias issues better than other state-of-the-art measures.
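To illustrate the intuition described in the abstract, the Python sketch below computes a simple fuzzy-rough boundary region for each instance and measures how much that region changes when the protected feature is excluded from the distance computation. This is a minimal illustration only, not the measure defined in the paper: the similarity relation (one minus the normalized Manhattan distance), the Kleene-Dienes implicator with the minimum t-norm, and the function names `similarity_matrix`, `boundary_membership`, and `bias_proxy` are all assumptions made for this example.

```python
import numpy as np

def similarity_matrix(X):
    """Fuzzy similarity R(x, y) = 1 - mean normalized absolute difference (assumed relation)."""
    ranges = X.max(axis=0) - X.min(axis=0)
    ranges[ranges == 0] = 1.0                        # guard against constant features
    diffs = np.abs(X[:, None, :] - X[None, :, :]) / ranges
    return 1.0 - diffs.mean(axis=2)

def boundary_membership(R, y):
    """Membership of every instance in the boundary region of its own class."""
    same = (y[:, None] == y[None, :]).astype(float)
    # Lower approximation with the Kleene-Dienes implicator: inf_y max(1 - R(x, y), A(y))
    lower = np.min(np.maximum(1.0 - R, same), axis=1)
    # Upper approximation with the minimum t-norm: sup_y min(R(x, y), A(y))
    upper = np.max(np.minimum(R, same), axis=1)
    return upper - lower                             # boundary = upper minus lower

def bias_proxy(X, y, protected_idx):
    """Mean absolute change in boundary membership when the protected feature is dropped."""
    b_full = boundary_membership(similarity_matrix(X), y)
    X_reduced = np.delete(X, protected_idx, axis=1)
    b_reduced = boundary_membership(similarity_matrix(X_reduced), y)
    return float(np.mean(np.abs(b_full - b_reduced)))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    X = rng.random((200, 5))
    # Synthetic labels that partly depend on the hypothetical "protected" feature (column 4).
    y = (0.7 * X[:, 0] + 0.3 * X[:, 4] > 0.5).astype(int)
    print("bias proxy for protected feature 4:", bias_proxy(X, y, protected_idx=4))
```

In this toy setup, the reported value grows as the labels depend more strongly on the protected feature, which matches the idea that larger shifts in the boundary regions indicate more bias; the paper's actual measure is defined over fuzzy-rough regions built with a heterogeneous distance function rather than this simplified relation.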
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Koutsoviti Koumeri, L., Nápoles, G. (2021). Bias Quantification for Protected Features in Pattern Classification Problems. In: Tavares, J.M.R.S., Papa, J.P., González Hidalgo, M. (eds) Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications. CIARP 2021. Lecture Notes in Computer Science, vol 12702. Springer, Cham. https://doi.org/10.1007/978-3-030-93420-0_33
DOI: https://doi.org/10.1007/978-3-030-93420-0_33
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-93419-4
Online ISBN: 978-3-030-93420-0
eBook Packages: Computer Science, Computer Science (R0)