Position Bias in Recommender Systems for Digital Libraries

  • Andrew Collins
  • Dominika Tkaczyk
  • Akiko Aizawa
  • Joeran Beel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10766)

Abstract

“Position bias” describes the tendency of users to interact with items at the top of a list with higher probability than with items lower in the list, regardless of the items’ actual relevance. In the domain of recommender systems, particularly recommender systems in digital libraries, position bias has received little attention. We conduct a study in a real-world recommender system that delivered ten million related-article recommendations to users of the digital library Sowiport and the reference manager JabRef. Recommendations were randomly chosen to be shuffled or non-shuffled, and we compared the click-through rate (CTR) at each rank of the recommendations. According to our analysis, the CTR for the highest rank in the case of Sowiport is 53% higher than expected in a hypothetical unbiased situation (0.189% vs. 0.123%). Similarly, in the case of JabRef, the highest rank received a CTR of 1.276%, which is 87% higher than expected (0.683%). A chi-squared test confirms a strong relationship between the rank at which a recommendation is shown and whether the user clicks it (p < 0.01 for both JabRef and Sowiport). Our study confirms findings from other domains: recommendations in the top positions are clicked more often, regardless of their actual relevance.
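The analysis described above can be sketched in a few lines: compute the CTR at each rank from a shuffled-recommendation log, compare it to the CTR expected if rank had no effect, and test the rank-click relationship with a chi-squared test. The following is a minimal illustration, not the authors' code; the per-rank counts are hypothetical placeholders for real delivery and click logs.

    # Minimal sketch (hypothetical data): per-rank CTR, expected unbiased CTR,
    # and a chi-squared test of independence between rank and click/no-click.
    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical counts for ranks 1..5; real values would come from the
    # recommender's delivery and click logs.
    impressions = np.array([200_000, 200_000, 200_000, 200_000, 200_000])
    clicks      = np.array([    378,     310,     265,     240,     212])

    ctr_per_rank = clicks / impressions
    expected_ctr = clicks.sum() / impressions.sum()  # CTR if rank had no effect

    # Contingency table: one row per rank, columns = (clicked, not clicked).
    table = np.column_stack([clicks, impressions - clicks])
    chi2, p, dof, _ = chi2_contingency(table)

    for rank, ctr in enumerate(ctr_per_rank, start=1):
        lift = (ctr - expected_ctr) / expected_ctr * 100
        print(f"rank {rank}: CTR {ctr:.3%} ({lift:+.0f}% vs. unbiased {expected_ctr:.3%})")
    print(f"chi-squared = {chi2:.1f}, dof = {dof}, p = {p:.2g}")

Because recommendations are shuffled, every item is equally likely to appear at every rank, so under the no-bias hypothesis each rank's CTR should equal the overall CTR; the lift at rank 1 then quantifies the position bias directly.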

Keywords

Recommender systems · Position bias · Click-through rate

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. School of Computer Science and Statistics, ADAPT Centre, Trinity College Dublin, Dublin, Ireland
  2. National Institute of Informatics (NII), Tokyo, Japan
