Abstract
Learning and reasoning over graphs is increasingly done by means of probabilistic models, e.g. exponential random graph models, graph embedding models, and graph neural networks. When graphs model relations between people, however, they inevitably reflect biases, prejudices, and other forms of inequity and inequality. An important challenge is thus to design accurate graph modeling approaches while guaranteeing fairness according to the specific notion of fairness that the problem requires. Yet, past work on this topic remains scarce, is limited to debiasing specific graph modeling methods, and often aims to ensure fairness only indirectly.
We propose a generic approach applicable to most probabilistic graph modeling approaches. Specifically, we first define the class of fair graph models corresponding to a chosen set of fairness criteria. Given this class, we propose a fairness regularizer defined as the KL-divergence between the graph model and its I-projection onto the set of fair models. We demonstrate that combining this fairness regularizer with existing graph modeling approaches efficiently trades off fairness against accuracy, whereas state-of-the-art models can only make this trade-off for the fairness criterion they were specifically designed for.
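The idea in the abstract can be sketched concretely for the simplest case: an independent-Bernoulli edge model and a single demographic-parity-style constraint (equal expected edge rate across two groups). For such a linear expectation constraint, the I-projection lies in the exponential tilting family of the original model, and the KL-divergence between products of Bernoullis decomposes edge-wise. The sketch below is an illustration under these assumptions, not the authors' implementation; the function name `fair_projection_kl` and the bisection solver are hypothetical choices, and edge probabilities are assumed to lie strictly in (0, 1).

```python
import numpy as np

def fair_projection_kl(p, g, tol=1e-9):
    """KL(p || I-projection of p onto the fair set), for an
    independent-Bernoulli edge model p and a two-group parity constraint.

    p : edge probabilities under the (possibly unfair) graph model, in (0, 1)
    g : 0/1 group label for each candidate edge
    """
    p = np.asarray(p, dtype=float)
    g = np.asarray(g)
    n0, n1 = (g == 0).sum(), (g == 1).sum()
    # Linear statistic whose zero expectation encodes parity:
    # mean edge probability in group 0 equals that in group 1.
    f = np.where(g == 0, 1.0 / n0, -1.0 / n1)

    def tilt(lam):
        # Exponential tilting of each Bernoulli edge probability; for a
        # linear constraint the I-projection lies in this family.
        w = p * np.exp(lam * f)
        return w / (w + (1.0 - p))

    # E_q[f] is monotonically increasing in lam (its derivative is
    # sum(f**2 * q * (1 - q)) >= 0), so bisection finds the multiplier.
    lo, hi = -100.0, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (tilt(mid) * f).sum() < 0.0:
            lo = mid
        else:
            hi = mid
    q = tilt(0.5 * (lo + hi))

    # Edge-wise Bernoulli KL-divergence; this scalar is the regularizer.
    kl = np.sum(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q)))
    return kl, q
```

In training, the returned scalar would be added to the model's loss with a weight that sets the fairness-accuracy trade-off; a model that already satisfies the constraint has its I-projection equal to itself, so the regularizer vanishes.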
Notes
- 1.
In our proposed framework, we require these constraints to be satisfied exactly in order for p to be fair. However, prior work has also allowed for a percentage-wise deviation [34].
- 2.
The distribution that results from the reverse KL-divergence formulation is much less practical to compute and was therefore not further considered for this work.
- 3.
A table with the results in text format is provided in the Appendix.
- 4.
All experiments were conducted using half the hyperthreads on a machine equipped with a 12-core Intel(R) Xeon(R) Gold processor and 256 GB of RAM.
References
Adamic, L.A., Glance, N.: The political blogosphere and the 2004 US election: divided they blog. In: Proceedings of the 3rd International Workshop on Link Discovery, pp. 36–43 (2005)
Agarwal, A., Beygelzimer, A., Dudík, M., Langford, J., Wallach, H.: A reductions approach to fair classification. In: International Conference on Machine Learning, pp. 60–69. PMLR (2018)
Alghamdi, W., Asoodeh, S., Wang, H., Calmon, F.P., Wei, D., Ramamurthy, K.N.: Model projection: theory and applications to fair machine learning. In: 2020 IEEE International Symposium on Information Theory (ISIT), pp. 2711–2716. IEEE (2020)
Bose, A., Hamilton, W.: Compositional fairness constraints for graph embeddings. In: International Conference on Machine Learning, pp. 715–724 (2019)
Burnham, K.P., Anderson, D.R.: Practical use of the information-theoretic approach. In: Model Selection and Inference, pp. 75–117. Springer, New York (1998). https://doi.org/10.1007/978-1-4757-2917-7_3
Buyl, M., De Bie, T.: DeBayes: a Bayesian method for debiasing network embeddings. In: International Conference on Machine Learning, pp. 1220–1229. PMLR (2020)
Calmon, F.P., Wei, D., Vinzamuri, B., Ramamurthy, K.N., Varshney, K.R.: Optimized pre-processing for discrimination prevention. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 3995–4004 (2017)
Cotter, A., et al.: Optimization with non-differentiable constraints with applications to fairness, recall, churn, and other goals. J. Mach. Learn. Res. 20(172), 1–59 (2019)
Cover, T.M., Thomas, J.A.: Elements of Information Theory. Wiley, Hoboken (1999)
Csiszár, I.: I-divergence geometry of probability distributions and minimization problems. Ann. Probab. 146–158 (1975)
Csiszár, I., Matus, F.: Information projections revisited. IEEE Trans. Inf. Theory 49(6), 1474–1490 (2003)
De Bie, T.: Maximum entropy models and subjective interestingness: an application to tiles in binary databases. Data Min. Knowl. Discov. 23(3), 407–446 (2011)
Dwork, C., Hardt, M., Pitassi, T., Reingold, O., Zemel, R.: Fairness through awareness. In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pp. 214–226 (2012)
Hamilton, W.L., Ying, R., Leskovec, J.: Representation learning on graphs: methods and applications. arXiv preprint arXiv:1709.05584 (2017)
Hardt, M., Price, E., Srebro, N.: Equality of opportunity in supervised learning. In: Proceedings of the 30th International Conference on Neural Information Processing Systems, pp. 3323–3331 (2016)
Hofstra, B., Corten, R., Van Tubergen, F., Ellison, N.B.: Sources of segregation in social networks: a novel approach using Facebook. Am. Sociol. Rev. 82(3), 625–656 (2017)
Jiang, H., Nachum, O.: Identifying and correcting label bias in machine learning. In: International Conference on Artificial Intelligence and Statistics, pp. 702–712. PMLR (2020)
Kang, B., Lijffijt, J., De Bie, T.: Conditional network embeddings. In: International Conference on Learning Representations (2018)
Karimi, F., Génois, M., Wagner, C., Singer, P., Strohmaier, M.: Homophily influences ranking of minorities in social networks. Sci. Reports 8(1), 1–12 (2018)
Kipf, T.N., Welling, M.: Variational graph auto-encoders. arXiv preprint arXiv:1611.07308 (2016)
Laclau, C., Redko, I., Choudhary, M., Largeron, C.: All of the fairness for edge prediction with optimal transport. In: International Conference on Artificial Intelligence and Statistics, pp. 1774–1782. PMLR (2021)
Li, P., Wang, Y., Zhao, H., Hong, P., Liu, H.: On dyadic fairness: exploring and mitigating bias in graph connections. In: International Conference on Learning Representations (2021)
Liben-Nowell, D., Kleinberg, J.: The link-prediction problem for social networks. J. Am. Soc. Inf. Sci. Technol. 58(7), 1019–1031 (2007)
Martínez, V., Berzal, F., Cubero, J.C.: A survey of link prediction in complex networks. ACM Comput. Surv. (CSUR) 49(4), 1–33 (2016)
Masrour, F., Wilson, T., Yan, H., Tan, P.N., Esfahanian, A.: Bursting the filter bubble: fairness-aware network link prediction. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 841–848 (2020)
McAuley, J., Leskovec, J.: Learning to discover social circles in ego networks. In: Proceedings of the 25th International Conference on Neural Information Processing Systems, pp. 539–547 (2012)
McPherson, M., Smith-Lovin, L., Cook, J.M.: Birds of a feather: homophily in social networks. Annual Rev. Sociol. 27(1), 415–444 (2001)
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A.: A survey on bias and fairness in machine learning. arXiv preprint arXiv:1908.09635 (2019)
Rahman, T., Surma, B., Backes, M., Zhang, Y.: Fairwalk: towards fair graph embedding. In: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, pp. 3289–3295. International Joint Conferences on Artificial Intelligence Organization (2019)
Robins, G., Pattison, P., Kalish, Y., Lusher, D.: An introduction to exponential random graph (p*) models for social networks. Soc. Netw. 29(2), 173–191 (2007)
Wei, D., Ramamurthy, K.N., Calmon, F.: Optimized score transformation for fair classification. In: International Conference on Artificial Intelligence and Statistics, pp. 1673–1683. PMLR (2020)
Woodworth, B., Gunasekar, S., Ohannessian, M.I., Srebro, N.: Learning non-discriminatory predictors. In: Conference on Learning Theory, pp. 1920–1953. PMLR (2017)
Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Yu, P.S.: A comprehensive survey on graph neural networks. IEEE Trans. Neural Netw. Learn. Syst. 32, 4–24 (2020)
Zafar, M.B., Valera, I., Rodriguez, M.G., Gummadi, K.P.: Fairness constraints: mechanisms for fair classification. In: Artificial Intelligence and Statistics, pp. 962–970. PMLR (2017)
Zhang, M., Chen, Y.: Link prediction based on graph neural networks. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 5171–5181 (2018)
Acknowledgments
This research was funded by the ERC under the EU’s 7th Framework and H2020 Programmes (ERC Grant Agreement no. 615517 and 963924), the Flemish Government (AI Research Program), the BOF of Ghent University (PhD scholarship BOF20/DOC/144), and the FWO (project no. G091017N, G0F9816N, 3G042220).
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Buyl, M., De Bie, T. (2021). The KL-Divergence Between a Graph Model and its Fair I-Projection as a Fairness Regularizer. In: Oliver, N., Pérez-Cruz, F., Kramer, S., Read, J., Lozano, J.A. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2021. Lecture Notes in Computer Science(), vol 12976. Springer, Cham. https://doi.org/10.1007/978-3-030-86520-7_22
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-86519-1
Online ISBN: 978-3-030-86520-7