Improving Simple Collaborative Filtering Models Using Ensemble Methods

  • Ariel Bar
  • Lior Rokach
  • Guy Shani
  • Bracha Shapira
  • Alon Schclar
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7872)

Abstract

In this paper we examine the effect of ensemble learning on the performance of collaborative filtering methods. We present several systematic approaches for generating an ensemble of collaborative filtering models from a single collaborative filtering algorithm (a single-model, or homogeneous, ensemble), adapting several popular ensemble techniques from machine learning to the collaborative filtering domain: bagging, boosting, fusion and randomness injection. We evaluate the proposed approaches on several types of collaborative filtering base models: k-NN, matrix factorization and a neighborhood matrix factorization model. Empirical evaluation shows a prediction improvement over all base CF algorithms. In particular, we show that an ensemble of simple (weak) CF models such as k-NN is competitive with a single strong CF model (such as matrix factorization), while requiring an order of magnitude less computation.
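To make the bagging variant concrete, the following is a minimal sketch, not the authors' implementation: a user-based k-NN rating predictor over a NaN-masked dense ratings matrix, bagged by bootstrap-resampling the user population and averaging the base predictions. The cosine similarity over co-rated items, the unweighted neighbor average, and all function names are illustrative assumptions; the paper's own base models and combination rules may differ.

```python
# A minimal sketch of a homogeneous bagging ensemble over a user-based
# k-NN collaborative filtering predictor. Assumption: R is a dense
# (users x items) numpy array with np.nan marking unrated entries.
import numpy as np

def knn_predict(R, user, item, k=2):
    """Predict R[user, item] from the k most similar users who rated the item."""
    rated = ~np.isnan(R[:, item])          # users who rated this item
    rated[user] = False                    # exclude the target user
    candidates = np.where(rated)[0]
    if candidates.size == 0:
        return float(np.nanmean(R))        # global-mean fallback
    sims = []
    for v in candidates:
        common = ~np.isnan(R[user]) & ~np.isnan(R[v])  # co-rated items
        if not common.any():
            sims.append(0.0)
            continue
        a, b = R[user, common], R[v, common]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        sims.append(float(a @ b / denom) if denom > 0 else 0.0)
    top = candidates[np.argsort(sims)[-k:]]  # k nearest neighbors
    return float(np.mean(R[top, item]))      # unweighted neighbor average

def bagged_knn_predict(R, user, item, n_models=10, k=2, seed=0):
    """Bagging: each base k-NN model sees a bootstrap resample of the
    other users; the ensemble prediction is the mean of base predictions."""
    rng = np.random.default_rng(seed)
    others = np.delete(np.arange(R.shape[0]), user)
    preds = []
    for _ in range(n_models):
        boot = rng.choice(others, size=others.size, replace=True)
        R_b = np.vstack([R[user:user + 1], R[boot]])  # target user is row 0
        preds.append(knn_predict(R_b, 0, item, k=k))
    return float(np.mean(preds))

# Toy usage on a 5-user, 4-item matrix (values are illustrative).
R = np.array([
    [5.0, 3.0, np.nan, 1.0],
    [4.0, np.nan, np.nan, 1.0],
    [1.0, 1.0, np.nan, 5.0],
    [1.0, np.nan, np.nan, 4.0],
    [np.nan, 1.0, 5.0, 4.0],
])
print(bagged_knn_predict(R, user=0, item=2))
```

Averaging over bootstrap resamples is the classic variance-reduction rationale for bagging unstable base predictors; the other ensemble schemes named in the abstract differ only in how the base models are built or combined, e.g. reweighting hard-to-predict ratings (boosting) or perturbing parameters such as k or the similarity measure (randomness injection).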

Keywords

Recommendation Systems · Collaborative Filtering · Ensemble Methods

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Ariel Bar (1)
  • Lior Rokach (1)
  • Guy Shani (1)
  • Bracha Shapira (1)
  • Alon Schclar (2)

  1. Department of Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
  2. School of Computer Science, Academic College of Tel Aviv-Yaffo, Tel Aviv, Israel
