Heterogeneous Dyadic Multi-task Learning with Implicit Feedback

  • Simon Moura
  • Amir Asarbaev
  • Massih-Reza Amini
  • Yury Maximov
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11303)

Abstract

In this paper we present a framework for learning models for Recommender Systems (RS) in the case where multiple types of implicit feedback are associated with items. Based on a set of features representing user-item dyads, extracted from an implicit feedback collection, we propose a stochastic gradient descent algorithm that jointly learns classification, ranking, and embeddings for users and items. Our experimental results on a subset of the collection used in the RecSys 2016 challenge for job recommendation show the effectiveness of our approach with respect to single-task approaches and pave the way for future work on jointly learning models from multiple implicit feedback for RS.
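
As a rough illustration of the joint objective the abstract describes, the sketch below runs SGD on shared user and item embeddings, mixing a logistic classification loss on observed dyads with a BPR-style pairwise ranking loss. It is a minimal sketch, not the paper's actual algorithm: the task weight alpha, the uniform negative sampling, and all sizes are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions for the example, not the paper's setting).
n_users, n_items, dim = 100, 200, 16
U = 0.1 * rng.standard_normal((n_users, dim))  # user embeddings
V = 0.1 * rng.standard_normal((n_items, dim))  # item embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgd_step(u, i_pos, i_neg, y, lr=0.05, alpha=0.5):
    """One SGD step on the dyad (u, i_pos) with binary label y.

    Combines a logistic classification loss on (u, i_pos) with a
    BPR-style ranking loss preferring i_pos over the sampled i_neg;
    alpha weights the two tasks (a free hyper-parameter here).
    """
    s_pos = U[u] @ V[i_pos]
    s_neg = U[u] @ V[i_neg]

    g_cls = sigmoid(s_pos) - y            # d(logistic loss)/d s_pos
    g_rnk = sigmoid(s_pos - s_neg) - 1.0  # d(-log sigmoid(s_pos - s_neg))/d s_pos

    grad_u = alpha * g_cls * V[i_pos] + (1 - alpha) * g_rnk * (V[i_pos] - V[i_neg])
    grad_pos = (alpha * g_cls + (1 - alpha) * g_rnk) * U[u]
    grad_neg = -(1 - alpha) * g_rnk * U[u]

    U[u] -= lr * grad_u
    V[i_pos] -= lr * grad_pos
    V[i_neg] -= lr * grad_neg

# Toy loop over synthetic implicit feedback: each observed dyad is a
# positive example, and a random item serves as the ranking negative.
for _ in range(1000):
    u = rng.integers(n_users)
    i_pos = rng.integers(n_items)
    i_neg = rng.integers(n_items)
    while i_neg == i_pos:
        i_neg = rng.integers(n_items)
    sgd_step(u, i_pos, i_neg, y=1.0)
```

Because both losses share the same embeddings, the ranking signal regularizes the classifier and vice versa, which is the kind of cross-task transfer the abstract's joint training aims at.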

Keywords

Recommendation systems · Multiple implicit feedback · Dyadic prediction · Multi-task learning

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Simon Moura (1)
  • Amir Asarbaev (1, 4)
  • Massih-Reza Amini (1)
  • Yury Maximov (2, 3)
  1. Univ. Grenoble Alpes, CNRS, Grenoble INP - LIG, Grenoble, France
  2. Skolkovo Institute of Science and Technology, Moscow, Russia
  3. Theoretical Division T-5 and CNLS, Los Alamos National Laboratory, Los Alamos, USA
  4. Moscow Institute of Physics and Technology, Dolgoprudny, Russia