Abstract
In this paper, we study how to generate top-N recommendations that are both accurate and diverse, a typical instance of the maximum coverage problem. Traditional approaches treat the construction of the recommendation list as greedy sequential item selection, which is inevitably sub-optimal. We propose a framework based on reinforcement learning and neural networks, Diversify top-N Recommendation with Fast Monte Carlo Tree Search (Div-FMCTS), which optimizes diverse top-N recommendations from a global view. The learning of Div-FMCTS consists of two stages: (1) searching for better recommendations with MCTS; (2) generalizing those plans with policy and value neural networks. Because searching over the extremely large space of item permutations is difficult, we propose two approaches to speeding up training. The first prunes branches of the search tree using structural properties of the optimal recommendations. The second searches over a randomly chosen small subset of items, so that the neural networks can quickly harvest the fruits of the search through generalization. The effectiveness of both approaches is demonstrated empirically and theoretically. Extensive experiments on four benchmark datasets show the superiority of Div-FMCTS over state-of-the-art methods.
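To make the contrast concrete, below is a minimal Python sketch of the greedy sequential baseline the abstract critiques, together with the random-subset speed-up as we read it. All identifiers here (`list_utility`, `topics`, `alpha`, `sample_search_pool`) are our own illustrative assumptions, not definitions from the paper; the actual Div-FMCTS objective and search procedure are specified in the full text.

```python
import random

# Hypothetical coverage-style utility combining relevance (sum of
# predicted scores) and diversity (number of distinct topics covered).
# The trade-off weight alpha is an assumption, not from the paper.
def list_utility(items, scores, topics, alpha=0.5):
    relevance = sum(scores[i] for i in items)
    coverage = len({t for i in items for t in topics[i]})
    return alpha * relevance + (1 - alpha) * coverage

# Greedy sequential item selection: grow the list one item at a time,
# always taking the item that most improves the current utility. Early
# picks constrain all later ones, which is why the abstract calls this
# strategy inevitably sub-optimal for a global objective.
def greedy_top_n(candidates, scores, topics, n, alpha=0.5):
    chosen, pool = [], set(candidates)
    for _ in range(n):
        best = max(pool, key=lambda i: list_utility(chosen + [i],
                                                    scores, topics, alpha))
        chosen.append(best)
        pool.remove(best)
    return chosen

# Speed-up (2) from the abstract, as we read it: each search episode
# runs over a small random subset of the catalogue so the tree search
# stays tractable, while the policy/value networks generalize across
# episodes. A sketch only; the paper's sampling scheme may differ.
def sample_search_pool(candidates, pool_size, rng=random):
    return rng.sample(list(candidates), min(pool_size, len(candidates)))

if __name__ == "__main__":
    scores = {0: 0.9, 1: 0.8, 2: 0.7, 3: 0.6}
    topics = {0: {"a"}, 1: {"a"}, 2: {"b"}, 3: {"c"}}
    # Picks item 0 first, then prefers item 2 over the higher-scoring
    # item 1 because item 2 covers a new topic.
    print(greedy_top_n(scores, scores, topics, n=2))
```

Div-FMCTS replaces the greedy loop above with an MCTS that plans over whole lists, so a locally weaker pick can be kept when it improves the final list's utility.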
Keywords
- Recommender system
- Recommendation diversity
- Monte Carlo Tree Search
L. Zou—Work performed during an internship at JD.com.
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Zou, L., Xia, L., Ding, Z., Yin, D., Song, J., Liu, W. (2019). Reinforcement Learning to Diversify Top-N Recommendation. In: Li, G., Yang, J., Gama, J., Natwichai, J., Tong, Y. (eds.) Database Systems for Advanced Applications. DASFAA 2019. Lecture Notes in Computer Science, vol. 11447. Springer, Cham. https://doi.org/10.1007/978-3-030-18579-4_7
DOI: https://doi.org/10.1007/978-3-030-18579-4_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-18578-7
Online ISBN: 978-3-030-18579-4