Abstract
While recommender systems are highly successful at helping users find relevant information online, they may also exhibit an undesired bias towards promoting items that are already popular. Various approaches for quantifying and mitigating such biases have been put forward in the literature. Most recently, calibration methods were proposed that aim to match the popularity of the recommended items with the popularity preferences of individual users. In this paper, we show that while such methods are effective at avoiding the recommendation of overly popular items for some users, other techniques may be more effective at reducing the popularity bias at the platform level. Overall, our work highlights that, in practice, choices regarding metrics and algorithms have to be made with caution to ensure the desired effects.
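To make the calibration idea above more concrete, the following is a minimal, hedged sketch of a popularity-calibrated greedy re-ranker; it illustrates the general technique and is not the exact procedure evaluated in this paper. It assumes items have been partitioned into head/mid/tail popularity groups; the group split, the trade-off weight `lam`, and all function names are assumptions of this sketch.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

GROUPS = ("head", "mid", "tail")  # illustrative popularity groups

def group_distribution(items, item_group):
    """Distribution of a non-empty item list over the popularity groups."""
    counts = np.array(
        [sum(item_group[i] == g for i in items) for g in GROUPS], dtype=float
    )
    return counts / counts.sum()

def calibrated_rerank(candidates, scores, profile_items, item_group, k=10, lam=0.5):
    """Greedily build a top-k list that trades off predicted relevance against
    the Jensen-Shannon distance between the popularity distribution of the
    user's profile and that of the list constructed so far (sketch only).

    candidates    : candidate item ids, e.g. the top-100 of a base recommender
    scores        : dict item -> predicted relevance score
    profile_items : items the user interacted with (defines the target distribution)
    item_group    : dict item -> "head" | "mid" | "tail"
    lam           : calibration weight; 0 keeps the accuracy ranking unchanged
    """
    target = group_distribution(profile_items, item_group)
    selected, pool = [], set(candidates)
    while pool and len(selected) < k:
        best_item, best_val = None, -np.inf
        for item in pool:
            dist = group_distribution(selected + [item], item_group)
            # higher relevance is better, lower divergence to the target is better
            val = (1.0 - lam) * scores[item] - lam * jensenshannon(target, dist)
            if val > best_val:
                best_item, best_val = item, val
        selected.append(best_item)
        pool.remove(best_item)
    return selected
```

In practice, the relevance scores would typically be normalized (e.g. min-max per user) so that they are on a scale comparable to the divergence term; otherwise the weight `lam` is hard to interpret.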
Keywords
- Recommender Systems
- Bias
- Multi-Metric Evaluation
Notes
- 1.
- 2. Differently from [2], we used the MovieLens dataset with about 100k ratings by 943 users on 1612 items in our experiments.
- 3. Interestingly, in [2], CP was favorable over XQ also in terms of the ARP measure. We could not reproduce this finding on either dataset. Unfortunately, the authors of [2] could not provide the code of the CP method. The observed discrepancy might therefore be related both to dataset characteristics and to differences in the implementation.
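For context, the ARP (Average Recommendation Popularity) measure referred to in note 3 is commonly computed as the average popularity of the items in a user's top-n list, averaged over all users, with an item's popularity taken as its number of ratings in the training data. A minimal sketch of that computation, with illustrative function and variable names:

```python
def average_recommendation_popularity(recommendations, item_counts):
    """ARP: mean, over users, of the average rating count of their recommended items.

    recommendations : dict user -> list of recommended item ids
    item_counts     : dict item -> number of ratings in the training data
    """
    per_user = [
        sum(item_counts.get(item, 0) for item in items) / len(items)
        for items in recommendations.values()
        if items  # skip users without recommendations
    ]
    return sum(per_user) / len(per_user)
```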
References
Abdollahpouri, H., Burke, R., Mobasher, B.: Managing popularity bias in recommender systems with personalized re-ranking. In: FLAIRS 2019, pp. 413–418 (2019)
Abdollahpouri, H., Mansoury, M., Burke, R., Mobasher, B., Malthouse, E.: User-centered evaluation of popularity bias in recommender systems. In: ACM UMAP 2021, pp. 119–129 (2021)
Boratto, L., Fenu, G., Marras, M.: The effect of algorithmic bias on recommender systems for massive open online courses. In: European Conference on Information Retrieval, pp. 457–472 (2019)
Boratto, L., Fenu, G., Marras, M.: Combining mitigation treatments against biases in personalized rankings: use case on item popularity. In: IIR 2021 (2021)
Boratto, L., Fenu, G., Marras, M.: Connecting user and item perspectives in popularity debiasing for collaborative recommendation. IP&M 58(1), 102387 (2021)
Borges, R., Stefanidis, K.: On mitigating popularity bias in recommendations via variational autoencoders. In: ACM/SIGAPP SAC 2021, pp. 1383–1389 (2021)
Castells, P., Hurley, N.J., Vargas, S.: Novelty and diversity in recommender systems. In: Ricci, F., Rokach, L., Shapira, B. (eds.) Recommender Systems Handbook, pp. 881–918. Springer, New York (2015)
Elahi, M., Jannach, D., Skjærven, L., et al.: Towards responsible media recommendation. AI and Ethics (2021)
Elahi, M., Kholgh, D.K., Kiarostami, M.S., Saghari, S., Rad, S.P., Tkalcic, M.: Investigating the impact of recommender systems on user-based and item-based popularity bias. Inf. Process. Manage. 58, 102655 (2021)
Fleder, D., Hosanagar, K.: Blockbuster culture’s next rise or fall: the impact of recommender systems on sales diversity. Manage. Sci. 55, 697–712 (2009)
Harper, F.M., Konstan, J.A.: The MovieLens datasets: history and context. ACM TIIS 5(4), 1–19 (2015)
Jannach, D., Jugovac, M.: Measuring the business value of recommender systems. ACM Trans. Manage. Inf. Syst. 10(4) (2019)
Jannach, D., Lerche, L., Kamehkhosh, I., Jugovac, M.: What recommenders recommend: an analysis of recommendation biases and possible countermeasures. User Model. User-Adap. Inter. 25(5), 427–491 (2015). https://doi.org/10.1007/s11257-015-9165-3
Jugovac, M., Jannach, D., Lerche, L.: Efficient optimization of multiple recommendation quality factors according to individual user tendencies. Expert Syst. Appl. 81, 321–331 (2017)
Kowald, D., Schedl, M., Lex, E.: The unfairness of popularity bias in music recommendation: a reproducibility study. In: European Conference on Information Retrieval, pp. 35–42 (2020)
Lin, J.: Divergence measures based on the Shannon entropy. IEEE Trans. Inf. Theory 37(1), 145–151 (1991)
Oh, J., Park, S., Yu, H., Song, M., Park, S.T.: Novel recommendation based on personal popularity tendency. In: ICDM 2011, pp. 507–516 (2011)
Santos, R.L., Macdonald, C., Ounis, I.: Exploiting query reformulations for web search result diversification. In: WWW 2010, pp. 881–890 (2010)
Steck, H.: Calibrated recommendations. In: ACM RecSys 2018, pp. 154–162 (2018)
Takács, G., Tikk, D.: Alternating least squares for personalized ranking. In: ACM RecSys 2012, pp. 83–90 (2012)
Trattner, C., Elsweiler, D.: Investigating the healthiness of internet-sourced recipes: implications for meal planning and recommender systems. In: WWW 2017, pp. 489–498 (2017)
Trattner, C., et al.: Responsible media technology and AI: challenges and research directions. AI and Ethics, pp. 1–10 (2021)
Yin, H., Cui, B., Li, J., Yao, J., Chen, C.: Challenging the long tail recommendation. Proc. VLDB Endow. 5(9), 896–907 (2012)
Zehlike, M., Bonchi, F., Castillo, C., Hajian, S., Megahed, M., Baeza-Yates, R.: FA*IR: a Fair Top-k ranking algorithm. In: CIKM 2017, pp. 1569–1578 (2017)
Acknowledgement
This work was supported by industry partners and the Research Council of Norway with funding to MediaFutures: Research Centre for Responsible Media Technology and Innovation, through The Centres for Research-based Innovation scheme, project number 309339.