Exploring and mitigating gender bias in book recommender systems with explicit feedback

Journal of Intelligent Information Systems

Abstract

Recommender systems are indispensable: they influence our day-to-day behavior and decisions by giving us personalized suggestions, and services like Kindle, YouTube, and Netflix depend heavily on the performance of their recommender systems to ensure a good user experience and to increase revenue. Despite this popularity, recommender systems have been shown to reproduce and amplify biases present in the real world. The resulting feedback creates a self-perpetuating loop that deteriorates the user experience and homogenizes recommendations over time. Biased recommendations can also reinforce stereotypes based on gender or ethnicity, strengthening the filter bubbles we live in. In this paper, we address the problem of gender bias in recommender systems with explicit feedback. We propose a model to quantify the gender bias present in book rating datasets and in the recommendations produced by recommender systems. Our main contribution is a principled approach to mitigating the bias produced in the recommendations. We show theoretically that the proposed approach provides unbiased recommendations despite biased data. Through empirical evaluation on publicly available book rating datasets, we further show that the proposed model significantly reduces bias without a significant loss in accuracy, and outperforms an existing model in terms of bias. Our method is model-agnostic and can be applied to any recommender system. To demonstrate its generality, we present results on four recommender algorithms: two from the K-nearest-neighbors family, UserKNN and ItemKNN, and two from the matrix factorization family, Alternating Least Squares and Singular Value Decomposition.
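
The paper's exact bias metric and mitigation procedure are in the full text, which is paywalled here. As a rough, hypothetical illustration of the kind of quantification the abstract describes, the sketch below compares the share of female-authored books in users' top-N recommendations against their share in the underlying rating data. The column names, the author_gender mapping, and the disparity measure itself are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch (not the paper's metric): measure whether a
# recommender's top-N lists over- or under-represent female-authored
# books relative to the rating data it was trained on.
import pandas as pd

def female_share(book_ids: pd.Series, author_gender: dict) -> float:
    """Fraction of the given books written by female authors."""
    genders = book_ids.map(author_gender).dropna()
    return float((genders == "F").mean()) if len(genders) else 0.0

def bias_disparity(ratings: pd.DataFrame, top_n: pd.DataFrame,
                   author_gender: dict) -> float:
    """Female-author share in recommendations minus the share in the
    rating data; 0 means the recommender does not amplify the bias."""
    data_share = female_share(ratings["book_id"], author_gender)
    rec_share = female_share(top_n["book_id"], author_gender)
    return rec_share - data_share

# Toy example with hypothetical data.
ratings = pd.DataFrame({"user_id": [1, 1, 2, 2],
                        "book_id": ["a", "b", "a", "c"],
                        "rating": [8, 9, 7, 10]})
top_n = pd.DataFrame({"user_id": [1, 2], "book_id": ["a", "a"]})
author_gender = {"a": "M", "b": "F", "c": "F"}

# Negative output => female authors are under-represented in the lists.
print(bias_disparity(ratings, top_n, author_gender))
```

Such a measure is model-agnostic in the same sense the abstract claims: it only inspects the rating data and the produced top-N lists, so it can be computed for UserKNN, ItemKNN, ALS, SVD, or any other recommender.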

Availability of Supporting Data

Two book-rating datasets were used to evaluate the proposed model: the Book-Crossing dataset, originally compiled by Ziegler et al. (2005), and the Amazon Book Review dataset, compiled by Ni et al. (2019). Both datasets are publicly available.
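
For readers who want to reproduce the setup, a minimal loading sketch for the Book-Crossing explicit ratings follows, assuming the commonly distributed CSV layout (semicolon-separated, Latin-1 encoded, with columns "User-ID", "ISBN", and "Book-Rating"); the file name and the explicit/implicit split are assumptions about the public release, not details taken from the paper, so verify them against your copy.

```python
# Sketch: load Book-Crossing ratings and keep only explicit feedback,
# assuming the commonly distributed CSV layout of the public release.
import pandas as pd

ratings = pd.read_csv("BX-Book-Ratings.csv", sep=";", encoding="latin-1",
                      quotechar='"', on_bad_lines="skip")

# Book-Crossing mixes implicit (rating 0) and explicit (1-10) feedback;
# the paper targets explicit feedback, so drop the zero ratings.
explicit = ratings[ratings["Book-Rating"] > 0]
print(explicit.shape)
print(explicit["Book-Rating"].describe())
```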

Notes

  1. Code is available at https://github.com/Pyromancer11/mitigatingGenderBias

References

  • Amatriain, X., Jaimes, A., Oliver, N., & Pujol, J. (2011). Data mining methods for recommender systems. Recommender Systems Handbook, 39–71. https://doi.org/10.1007/978-0-387-85820-3_2

  • Atas, M., Felfernig, A., Polat-Erdeniz, S., Popescu, A., Tran, T. N. T., & Uta, M. (2021). Towards psychology-aware preference construction in recommender systems: overview and research issues. Journal of Intelligent Information Systems, 57(3), 467–489. https://doi.org/10.1007/s10844-021-00674-5

  • Boratto, L., Fenu, G., & Marras, M. (2019). The effect of algorithmic bias on recommender systems for massive open online courses. Advances in Information Retrieval, 457–472. https://doi.org/10.1007/978-3-030-15712-8_30

  • Boratto, L., Fenu, G., & Marras, M. (2021). Interplay between upsampling and regularization for provider fairness in recommender systems. User Modeling and User-Adapted Interaction, 31(3), 421–455. https://doi.org/10.1007/s11257-021-09294-8

  • Burke, R. (2017). Multisided fairness for recommendation. arXiv:1707.00093

  • Carraro, D., & Bridge, D. (2022). A sampling approach to debiasing the offline evaluation of recommender systems. Journal of Intelligent Information Systems, 58(2), 311–336. https://doi.org/10.1007/s10844-021-00651-y

  • Coston, A., Ramamurthy, K. N., Wei, D., Varshney, K. R., Speakman, S., Mustahsan, Z., & Chakraborty, S. (2019). Fair transfer learning with missing protected attributes. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 91–98. https://doi.org/10.1145/3306618.3314236

  • Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2011). Fairness through awareness. Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. arXiv:1104.3913, https://doi.org/10.1145/2090236.2090255

  • Ekstrand, M., Tian, M., Kazi, M., Mehrpouyan, H., & Kluver, D. (2018). Exploring author gender in book rating and recommendation. User Modeling and User-Adapted Interaction, 377–420. https://doi.org/10.1145/3240323.3240373

  • Genderize.io (2021). https://genderize.io/. Accessed 5 March 2021

  • Google Books API (2022). https://developers.google.com/books. Accessed 24 Feb 2021

  • Hajian, S., & Domingo-Ferrer, J. (2013). A methodology for direct and indirect discrimination prevention in data mining. IEEE Transactions on Knowledge and Data Engineering. https://doi.org/10.1109/TKDE.2012.72

  • Hajian, S., Domingo-Ferrer, J., & Farrás, O. (2014). Generalization-based privacy preservation and discrimination prevention in data publishing and mining. Data Mining and Knowledge Discovery. https://doi.org/10.1007/s10618-014-0346-1

  • Hajian, S., Domingo-Ferrer, J., & Farrás, O. (2014). Discrimination- and privacy-aware patterns. Data Mining and Knowledge Discovery, 29. https://doi.org/10.1007/s10618-014-0393-7

  • Hajian, S., Bonchi, F., & Castillo, C. (2016). Algorithmic bias: from discrimination discovery to fairness-aware data mining. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 2125–2126. https://doi.org/10.1145/2939672.2945386

  • Herlocker, J. L., Konstan, J. A., Terveen, L. G., & Riedl, J. T. (2004). Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems, 22(1), 5–53. https://doi.org/10.1145/963770.963772

  • Hurley, N., & Zhang, M. (2011). Novelty and diversity in top-n recommendation – analysis and evaluation. ACM Transactions on Internet Technology, 10(4). https://doi.org/10.1145/1944339.1944341

  • ISBNdb (2021). https://isbndb.com/isbn-database. Accessed 27 Feb 2021

  • Kamiran, F., Calders, T., & Pechenizkiy, M. (2010). Discrimination aware decision tree learning. IEEE International Conference on Data Mining, 869–874. https://doi.org/10.1109/ICDM.2010.50

  • Kamiran, F., Karim, A., & Zhang, X. (2012). Decision theory for discrimination-aware classification. IEEE International Conference on Data Mining (ICDM), 924–929. https://doi.org/10.1109/ICDM.2012.45

  • Knijnenburg, B., Willemsen, M., Gantner, S., et al. (2012). Explaining the user experience of recommender systems. User Modeling and User-Adapted Interaction, 22, 441–504. https://doi.org/10.1007/s11257-011-9118-4

  • Leavy, S., Meaney, G., Wade, K., & Greene, D. (2020). Mitigating gender bias in machine learning data sets. Bias and Social Aspects in Search and Recommendation, 12–26. https://doi.org/10.1007/978-3-030-52485-2_2

  • Mancuhan, K., & Clifton, C. (2014). Combating discrimination using Bayesian networks. Artificial Intelligence and Law, 22. https://doi.org/10.1007/s10506-014-9156-4

  • Mansoury, M., Abdollahpouri, H., Smith, J., et al. (2020). Investigating potential factors associated with gender discrimination in collaborative recommender systems. Proceedings of the 33rd International Florida Artificial Intelligence Research Society Conference (FLAIRS 2020), 193–196. https://aaai.org/papers/193-flairs-2020-18430/

  • Neve, J., & Palomares, I. (2019). Latent factor models and aggregation operators for collaborative filtering in reciprocal recommender systems. Proceedings of the 13th ACM Conference on Recommender Systems, pp. 219–227. https://doi.org/10.1145/3298689.3347026

  • Ni, J., Li, J., & McAuley, J. (2019). Justifying recommendations using distantly-labeled reviews and fine-grained aspects. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 188–197. https://doi.org/10.18653/v1/D19-1018

  • OpenLibrary API (2021). https://openlibrary.org/developers/api. Accessed 02 March 2021

  • Pedreschi, D., Ruggieri, S., & Turini, F. (2008). Discrimination-aware data mining. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 560–568. https://doi.org/10.1145/1401890.1401959

  • Pedreschi, D., Ruggieri, S., & Turini, F. (2009). Measuring discrimination in socially-sensitive decision records. Proceedings of the 2009 SIAM International Conference on Data Mining (SDM), pp. 581–592. https://doi.org/10.1137/1.9781611972795.50

  • Rastegarpanah, B., Gummadi, K. P., & Crovella, M. (2019). Fighting fire with fire: Using antidote data to improve polarization and fairness of recommender systems. Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pp. 231–239. https://doi.org/10.1145/3289600.3291002

  • Ruggieri, S., Pedreschi, D., & Turini, F. (2010). Data mining for discrimination discovery. ACM Transactions on Knowledge Discovery from Data, 4(2). https://doi.org/10.1145/1754428.1754432

  • Ruggieri, S., Hajian, S., Kamiran, F., & Zhang, X. (2014). Anti-discrimination analysis using privacy attack strategies. Machine Learning and Knowledge Discovery in Databases, 694–710. https://doi.org/10.1007/978-3-662-44851-9_44

  • Shakespeare, D., Porcaro, L., Gómez, E., & Castillo, C. (2020). Exploring artist gender bias in music recommendation. https://doi.org/10.48550/arXiv.2009.01715

  • Shani, G., & Gunawardana, A. (2011). Evaluating recommendation systems. Recommender Systems Handbook, 257–297. https://doi.org/10.1007/978-0-387-85820-3_8

  • Luong, B. T., Ruggieri, S., & Turini, F. (2011). k-NN as an implementation of situation testing for discrimination discovery and prevention. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 502–510. https://doi.org/10.1145/2020408.2020488

  • Tsintzou, V., Pitoura, E., & Tsaparas, P. (2018). Bias disparity in recommendation systems. https://doi.org/10.48550/arXiv.1811.01461

  • Valcarce, D., Bellogín, A., Parapar, J., & Castells, P. (2020). Assessing ranking metrics in top-N recommendation. Information Retrieval Journal, 23, 411–448. https://doi.org/10.1007/s10791-020-09377-x

  • Zehlike, M., Bonchi, F., Castillo, C., et al. (2017). FA*IR: A fair top-k ranking algorithm. Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pp. 1569–1578. https://doi.org/10.1145/3132847.3132938

  • Zemel, R., Wu, Y., Swersky, K., et al. (2013). Learning fair representations. Proceedings of the 30th International Conference on Machine Learning, 28(3), 325–333. https://proceedings.mlr.press/v28/zemel13.html

  • Ziegler, C.-N., McNee, S. M., Konstan, J. A., & Lausen, G. (2005). Improving recommendation lists through topic diversification. Proceedings of the 14th International Conference on World Wide Web, pp. 22–32. https://doi.org/10.1145/1060745.1060754

Acknowledgements

We would like to express our sincere gratitude to the Indian Institute of Technology Ropar for providing us with the opportunity to work on this project.

We are also thankful to the API services Google Books API, ISBNdb, OpenLibrary API, and Genderize.io for helping us identify the authors of the books and their genders.
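
As an illustration of how such a lookup can work, the sketch below queries Genderize.io for a first name. The endpoint and response fields follow Genderize.io's public documentation; the helper name and the simplified error handling are our own assumptions, not the authors' pipeline.

```python
# Hedged sketch: infer a likely gender for an author's first name via the
# Genderize.io API (https://genderize.io/). The free tier is rate-limited,
# so real pipelines should cache results and handle HTTP errors.
import requests

def infer_gender(first_name):
    """Return (gender, probability) for a first name, or (None, 0.0)."""
    resp = requests.get("https://api.genderize.io",
                        params={"name": first_name}, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return data.get("gender"), data.get("probability", 0.0)

print(infer_gender("Joanne"))  # e.g. ('female', 0.97)
```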

Our research work would not have been possible without the support of these organizations, and we truly appreciate their contributions.

Funding

We, the authors of the research paper entitled Exploring and Mitigating Gender Bias in Book Recommender Systems with Explicit Feedback, acknowledge the funding provided by the Department of Science & Technology, India, under grant number SRG/2020/001138 (recipient: Dr. Shweta Jain). The computational resources obtained through this funding were used to carry out the experiments for the project.

Author information

Contributions

The contributions of the authors are as follows:

  • Shrikant Saxena and Shweta Jain conceived of the presented idea.

  • Shrikant Saxena developed the theory and performed the computations.

  • Shweta Jain wrote the manuscript.

Corresponding author

Correspondence to Shweta Jain.

Ethics declarations

Competing interests

We, the authors of the research paper entitled Exploring and Mitigating Gender Bias in Book Recommender Systems with Explicit Feedback, declare that we have no competing interests that may influence the interpretation or presentation of this manuscript.

We have no financial, personal or professional relationships with other people or organizations that could be considered potential sources of bias. Furthermore, we have no financial or personal relationships with any company or organization that could benefit from the publication of this research paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Cite this article

Saxena, S., Jain, S. Exploring and mitigating gender bias in book recommender systems with explicit feedback. J Intell Inf Syst (2024). https://doi.org/10.1007/s10844-023-00827-8
