A Study of Smoothing Methods for Relevance-Based Language Modelling of Recommender Systems
Language Models have traditionally been used in fields such as speech recognition and document retrieval. Only recently has their use been extended to collaborative Recommender Systems. In this field, a Language Model is estimated for each user based on the probabilities of the items. A central issue in estimating such a Language Model is smoothing, i.e., how to adjust the maximum likelihood estimator to compensate for rating sparsity. This work explores how the classical smoothing approaches (Absolute Discounting, Jelinek-Mercer and Dirichlet priors) perform in the recommendation task. We tested the different methods under the recently presented Relevance-Based Language Models for collaborative filtering and compared how the smoothing techniques behave in terms of precision and stability. We found that Absolute Discounting is practically insensitive to its parameter value, making it an almost parameter-free method, while its performance is comparable to Jelinek-Mercer and Dirichlet priors.
Keywords: Recommender systems · Collaborative filtering · Smoothing · Relevance Models
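The three smoothing approaches named in the abstract can be sketched as adjustments of the user's maximum likelihood item estimate toward a background (collection) model. The following is a minimal illustration using standard formulations of Jelinek-Mercer, Dirichlet-prior and Absolute Discounting smoothing; the toy data, function names and parameter values are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

# Hypothetical toy data: item interaction counts (illustrative only).
collection = Counter({"item_a": 5, "item_b": 3, "item_c": 2})
user = Counter({"item_a": 2, "item_b": 1})

coll_total = sum(collection.values())
background = {i: c / coll_total for i, c in collection.items()}  # p(i|C)
user_total = sum(user.values())  # |u|, total count of the user's items

def jelinek_mercer(lam=0.5):
    # Linear interpolation: p(i|u) = lam * p_ml(i|u) + (1 - lam) * p(i|C)
    return {i: lam * user.get(i, 0) / user_total + (1 - lam) * p
            for i, p in background.items()}

def dirichlet(mu=100.0):
    # Bayesian smoothing with a Dirichlet prior:
    # p(i|u) = (c(i,u) + mu * p(i|C)) / (|u| + mu)
    return {i: (user.get(i, 0) + mu * p) / (user_total + mu)
            for i, p in background.items()}

def absolute_discounting(delta=0.3):
    # Subtract a constant delta from every seen count and redistribute
    # the discounted mass according to the background model:
    # p(i|u) = max(c(i,u) - delta, 0)/|u| + (delta * |u_distinct| / |u|) * p(i|C)
    coef = delta * len(user) / user_total
    return {i: max(user.get(i, 0) - delta, 0) / user_total + coef * p
            for i, p in background.items()}

# Each smoothed estimate is a proper probability distribution over items,
# and unseen items (e.g. "item_c") receive non-zero probability mass.
for dist in (jelinek_mercer(), dirichlet(), absolute_discounting()):
    assert abs(sum(dist.values()) - 1.0) < 1e-9
    assert dist["item_c"] > 0
```

Note that in Absolute Discounting the single parameter delta only shifts a fixed amount of mass per seen item, which is one intuition for why the method can be less sensitive to its parameter value than interpolation-based smoothing.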