
Dual-regularized one-class collaborative filtering with implicit feedback

Published in: World Wide Web

Abstract

Collaborative filtering plays a central role in many recommender systems. While most of the existing collaborative filtering methods are proposed for the explicit, multi-class settings (e.g., 1-5 stars in movie recommendation), many real-world applications actually belong to the one-class setting where user feedback is implicitly expressed (e.g., views in news recommendation and video recommendation). In this article, we propose dual-regularized one-class collaborative filtering models for implicit feedback. In particular, by dividing existing methods into point-wise class and pair-wise class, we first propose a point-wise model by integrating two existing methods and further exploiting the side information from both users and items. Next, we propose to add dual regularization into an existing pair-wise method with a different treatment of the side information. We also propose efficient algorithms to solve the proposed models. Extensive experimental evaluations on three real data sets demonstrate the effectiveness and efficiency of the proposed methods.



Notes

  1. http://ir.ii.uam.es/hetrec2011

References

  1. Anand, S.S., Griffiths, N.: A market-based approach to address the new item problem. In: Proceedings of the fifth ACM conference on recommender systems, pp. 205–212 (2011)

  2. Bogdanov, P., Busch, M., Moehlis, J., Singh, A.K., Szymanski, B.K.: Modeling individual topic-specific behavior and influence backbone networks in social media. Soc. Netw. Anal. Min. 4(1), 204 (2014)


  3. Breese, J.S., Heckerman, D., Kadie, C.: Empirical analysis of predictive algorithms for collaborative filtering. In: UAI, pp. 43–52 (1998)

  4. Chapelle, O., Sindhwani, V., Keerthi, S.S.: Optimization techniques for semi-supervised support vector machines. J. Mach. Learn. Res. 9, 203–233 (2008)


  5. Chen, H.C., Chen, A.L.: A music recommendation system based on music data grouping and user interests. In: CIKM, pp. 231–238 (2001)

  6. Das, A.S., Datar, M., Garg, A., Rajaram, S.: Google news personalization: scalable online collaborative filtering. In: WWW, pp. 271–280 (2007)

  7. Ding, C., Li, T., Peng, W., Park, H.: Orthogonal nonnegative matrix t-factorizations for clustering. In: KDD, pp. 126–135 (2006)

  8. Du, L., Li, X., Shen, Y.D.: User graph regularized pairwise matrix factorization for item recommendation. In: Advanced data mining and applications, pp. 372–385 (2011)

  9. Elo, A.E.: The rating of chessplayers past and present (1978)

  10. Fang, Y., Si, L.: Matrix co-factorization for recommendation with rich side information and implicit feedback. In: HetRec, pp. 65–69 (2011)

  11. Gantner, Z., Drumond, L., Freudenthaler, C., Rendle, S., Schmidt-Thieme, L.: Learning attribute-to-feature mappings for cold-start recommendations. In: ICDM, pp. 176–185 (2010)

  12. Gu, Q., Zhou, J., Ding, C.H.: Collaborative filtering: weighted nonnegative matrix factorization incorporating user and item graphs. In: SDM, pp. 199–210. SIAM (2010)

  13. Hacker, S., Von Ahn, L.: Matchin: eliciting user preferences with an online game. In: CHI, pp. 1207–1216. ACM (2009)

  14. Harpale, A.S., Yang, Y.: Personalized active learning for collaborative filtering. In: SIGIR, pp. 91–98. ACM (2008)

  15. He, R., McAuley, J.: Ups and downs: modeling the visual evolution of fashion trends with one-class collaborative filtering. In: WWW, pp. 507–517 (2016)

  16. He, X., Zhang, H., Kan, M.Y., Chua, T.S.: Fast matrix factorization for online recommendation with implicit feedback. In: SIGIR, pp. 549–558. ACM (2016)

  17. Herschtal, A., Raskutti, B.: Optimising area under the ROC curve using gradient descent. In: ICML, p. 49 (2004)

  18. Hu, Y., Koren, Y., Volinsky, C.: Collaborative filtering for implicit feedback datasets. In: ICDM, pp. 263–272 (2008)

  19. Jamali, M., Ester, M.: A matrix factorization technique with trust propagation for recommendation in social networks. In: Recsys, pp. 135–142. ACM (2010)

  20. Jiang, M., Cui, P., Wang, F., Yang, Q., Zhu, W., Yang, S.: Social recommendation across multiple relational domains. In: CIKM, pp. 1422–1431. ACM (2012)

  21. Kabbur, S., Ning, X., Karypis, G.: FISM: factored item similarity models for top-N recommender systems. In: KDD, pp. 659–667. ACM (2013)

  22. Kanagal, B., Ahmed, A., Pandey, S., Josifovski, V., Yuan, J., Garcia-Pueyo, L.: Supercharging recommender systems using taxonomies for learning user purchase behavior. Proceedings of the VLDB Endowment 5(10), 956–967 (2012)


  23. Koren, Y.: Factorization meets the neighborhood: a multifaceted collaborative filtering model. In: KDD, pp. 426–434. ACM (2008)

  24. Koren, Y., Bell, R., Volinsky, C.: Matrix factorization techniques for recommender systems. Computer 42(8), 30–37 (2009)


  25. Lee, D.D., Seung, H.S.: Algorithms for non-negative matrix factorization. In: NIPS, pp. 556–562 (2000)

  26. Li, Y., Hu, J., Zhai, C., Chen, Y.: Improving one-class collaborative filtering by incorporating rich user information. In: CIKM, pp. 959–968 (2010)

  27. Ma, H.: An experimental study on implicit social recommendation. In: SIGIR, pp. 73–82. ACM (2013)

  28. Ma, H., Yang, H., Lyu, M.R., King, I.: SoRec: social recommendation using probabilistic matrix factorization. In: CIKM, pp. 931–940 (2008)

  29. Ma, H., Zhou, D., Liu, C., Lyu, M.R., King, I.: Recommender systems with social regularization. In: WSDM, pp. 287–296. ACM (2011)

  30. McPherson, M., Smith-Lovin, L., Cook, J.M.: Birds of a feather: Homophily in social networks. Annu. Rev. Sociol. 27(1), 415–444 (2001)


  31. Pan, R., Scholz, M.: Mind the gaps: weighting the unknown in large-scale one-class collaborative filtering. In: KDD, pp. 667–676 (2009)

  32. Pan, R., Zhou, Y., Cao, B., Liu, N.N., Lukose, R., Scholz, M., Yang, Q.: One-class collaborative filtering. In: ICDM, pp. 502–511 (2008)

  33. Pan, W., Chen, L.: GBPR: group preference based Bayesian personalized ranking for one-class collaborative filtering. In: IJCAI, pp. 2691–2697 (2013)

  34. Pan, W., Liu, M., Zhong, M.: Transfer learning for heterogeneous one-class collaborative filtering. IEEE Intell. Syst. 31(4), 43–49 (2016)


  35. Paquet, U., Koenigstein, N.: One-class collaborative filtering with random graphs. In: WWW, pp. 999–1008 (2013)

  36. Park, S.T., Chu, W.: Pairwise preference regression for cold-start recommendation. In: Proceedings of the third ACM conference on Recommender systems, pp. 21–28 (2009)

  37. Popescu-Belis, A., Pappas, N.: Sentiment analysis of user comments for one-class collaborative filtering over TED talks. In: SIGIR. ACM (2013)

  38. Rendle, S., Freudenthaler, C.: Improving pairwise learning for item recommendation from implicit feedback. In: WSDM, pp. 273–282 (2014)

  39. Rendle, S., Freudenthaler, C., Gantner, Z., Schmidt-Thieme, L.: BPR: Bayesian personalized ranking from implicit feedback. In: UAI, pp. 452–461 (2009)

  40. Schein, A.I., Popescul, A., Ungar, L.H., Pennock, D.M.: Methods and metrics for cold-start recommendations. In: SIGIR, pp. 253–260. ACM (2002)

  41. Shen, Y., Jin, R.: Learning personal + social latent factor model for social recommendation. In: KDD, pp. 1303–1311 (2012)

  42. Sindhwani, V., Bucak, S.S., Hu, J., Mojsilovic, A.: One-class matrix completion with low-density factorizations. In: ICDM, pp. 1055–1060 (2010)

  43. Sun, M., Li, F., Lee, J., Zhou, K., Lebanon, G., Zha, H.: Learning multiple-question decision trees for cold-start recommendation. In: WSDM, pp. 445–454. ACM (2013)

  44. Tang, J., Gao, H., Liu, H.: mTrust: discerning multi-faceted trust in a connected world. In: WSDM, pp. 93–102. ACM (2012)

  45. Tang, J., Hu, X., Gao, H., Liu, H.: Exploiting local and global social context for recommendation. In: IJCAI, pp. 2712–2718 (2013)

  46. Tang, J., Liu, H., Gao, H., Das Sarma, A.: eTrust: understanding trust evolution in an online world. In: KDD, pp. 253–261. ACM (2012)

  47. Wang, B., Ester, M., Bu, J., Zhu, Y., Guan, Z., Cai, D.: Which to view: personalized prioritization for broadcast emails. In: WWW, pp. 1181–1190 (2016)

  48. Wang, C., Blei, D.M.: Collaborative topic modeling for recommending scientific articles. In: KDD, pp. 448–456. ACM (2011)

  49. Xu, J., Yao, Y., Tong, H., Tao, X., Lu, J.: Ice-Breaking: mitigating cold-start recommendation problem by rating comparison. In: IJCAI, pp. 3981–3987 (2015)

  50. Xu, J., Yao, Y., Tong, H., Tao, X., Lu, J.: Hoorays: high-order optimization of rating distance for recommender systems. In: KDD, pp. 525–534. ACM (2017)

  51. Xu, J., Yao, Y., Tong, H., Tao, X., Lu, J.: Rapare: a generic strategy for cold-start rating prediction problem. IEEE Trans. Knowl. Data Eng. 29(6), 1296–1309 (2017)


  52. Yan, G., Yao, Y., Xu, F., Lu, J.: Rit: enhancing recommendation with inferred trust. In: PAKDD, pp. 756–767. Springer (2015)

  53. Yang, S.H., Long, B., Smola, A., Sadagopan, N., Zheng, Z., Zha, H.: Like like alike: joint friendship and interest propagation in social networks. In: WWW, pp. 537–546. ACM (2011)

  54. Yao, Y., Tong, H., Yan, G., Xu, F., Zhang, X., Szymanski, B.K., Lu, J.: Dual-regularized one-class collaborative filtering. In: CIKM, pp. 759–768. ACM (2014)

  55. Yao, Y., Zhao, W.X., Wang, Y., Tong, H., Xu, F., Lu, J.: Version-aware rating prediction for mobile app recommendation. ACM Trans. Inf. Syst. 35(4), 38 (2017)


  56. Yu, H.F., Huang, H.Y., Dhillon, I.S., Lin, C.J.: A unified algorithm for one-class structured matrix factorization with side information. In: AAAI, pp. 2845–2851 (2017)

  57. Zhang, M., Tang, J., Zhang, X., Xue, X.: Addressing cold start in recommender systems: a semi-supervised co-training algorithm. In: SIGIR (2014)

  58. Zhao, W.X., Li, S., He, Y., Chang, E.Y., Wen, J.R., Li, X.: Connecting social media to e-commerce: Cold-start product recommendation using microblogging information. IEEE Trans. Knowl. Data Eng. 28(5), 1147–1159 (2016)


  59. Zhao, X.W., Guo, Y., He, Y., Jiang, H., Wu, Y., Li, X.: We know what you want to buy: a demographic-based system for product recommendation on microblogs. In: KDD, pp. 1935–1944. ACM (2014)

  60. Zheng, X., Ding, H., Mamitsuka, H., Zhu, S.: Collaborative matrix factorization with multiple similarities for predicting drug-target interactions. In: KDD, pp. 1025–1033 (2013)

  61. Zhou, K., Yang, S.H., Zha, H.: Functional matrix factorizations for cold-start recommendation. In: SIGIR, pp. 315–324 (2011)


Acknowledgments

This work is supported by the National Key Research and Development Program of China (No. 2017YFB1001801), the National Natural Science Foundation of China (No. 61690204, 61672274, 61702252), and the Collaborative Innovation Center of Novel Software Technology and Industrialization. Hanghang Tong is partially supported by NSF (IIS-1651203, IIS-1715385, CNS-1629888 and IIS-1743040), DTRA (HDTRA1-16-0017), ARO (W911NF-16-1-0168), and gifts from Huawei and Baidu.

Author information


Corresponding author

Correspondence to Yuan Yao.

Additional information

This article belongs to the Topical Collection: Special Issue on Geo-Social Computing

Guest Editors: Guandong Xu, Wen-Chih Peng, Hongzhi Yin, Zi (Helen) Huang

Appendix


Proof of Theorem 2

By ignoring constant terms, we can re-write (12) as

$$\begin{array}{@{}rcl@{}} J(\textbf{F}) &=& -2 \text{tr}[(\textbf{W}\odot\textbf{W}\odot(\textbf{R}+\textbf{P}))\textbf{G}\textbf{F}^{\prime}] + \text{tr}[(\textbf{W}\odot\textbf{W}\odot(\textbf{F}\textbf{G}^{\prime}))\textbf{G}\textbf{F}^{\prime}] \\ & & + \,\lambda_{r} \text{tr}(\textbf{F}\textbf{F}^{\prime}) + \lambda_{F} \text{tr}(\textbf{F}^{\prime}\textbf{D}_{M} \textbf{F}) - \lambda_{F} \text{tr}(\textbf{F}^{\prime}\textbf{M} \textbf{F}) \end{array} $$
(24)

Following the auxiliary function approach [25], an auxiliary function \(H(\textbf {F}, \tilde {\textbf {F}})\) of \(J(\textbf {F})\) must satisfy

$$\begin{array}{@{}rcl@{}} H(\textbf{F}, \textbf{F}) = J(\textbf{F}), ~~~~~~~~H(\textbf{F}, \tilde{\textbf{F}}) \geqslant J(\textbf{F}) \end{array} $$
(25)

We define

$$\begin{array}{@{}rcl@{}} \textbf{F}^{(t + 1)} = {\arg}\min\limits_{\textbf{F}} H(\textbf{F}, \textbf{F}^{(t)}) \end{array} $$
(26)

Then, by construction, we have

$$\begin{array}{@{}rcl@{}} J(\textbf{F}^{(t)}) = H(\textbf{F}^{(t)}, \textbf{F}^{(t)}) \geqslant H(\textbf{F}^{(t + 1)}, \textbf{F}^{(t)}) \geqslant J(\textbf{F}^{(t + 1)}) \end{array} $$
(27)

This proves that \(J(\textbf {F}^{(t)})\) is monotonically non-increasing.

In the remainder of the proof, we need to find (1) an appropriate auxiliary function, and (2) the global minimum of that auxiliary function.

We start with the auxiliary function, and show that the following function is an auxiliary function for (24):

$$\begin{array}{@{}rcl@{}} H(\textbf{F}, \tilde{\textbf{F}}) &=& -2 \sum\limits_{u = 1}^{m} \sum\limits_{k = 1}^{r} [(\textbf{W}\odot\textbf{W}\odot(\textbf{R}+\textbf{P}))\textbf{G}](u,k) \tilde{\textbf{F}}(u,k) \\ & & (1+\log(\frac{\textbf{F}(u,k)}{\tilde{\textbf{F}}(u,k)})) \\ & & - \sum\limits_{u = 1}^{m} \sum\limits_{v = 1}^{m} \sum\limits_{k = 1}^{r} \lambda_{F} \textbf{M}(u,v) \tilde{\textbf{F}}(v,k) \tilde{\textbf{F}}(u,k) \\ & & (1+\log(\frac{\textbf{F}(v,k)\textbf{F}(u,k)}{\tilde{\textbf{F}}(v,k)\tilde{\textbf{F}}(u,k)})) \\ & & + \sum\limits_{u = 1}^{m} \sum\limits_{k = 1}^{r} \lambda_{r} \textbf{F}^{2}(u,k) \\ & & + \sum\limits_{u = 1}^{m} \sum\limits_{k = 1}^{r} \frac{[(\textbf{W}\odot\textbf{W}\odot(\tilde{\textbf{F}}\textbf{G}^{\prime}))\textbf{G}](u,k) \textbf{F}^{2}(u,k)}{\tilde{\textbf{F}}(u,k)} \\ & & + \sum\limits_{u = 1}^{m} \sum\limits_{k = 1}^{r} \frac{[\lambda_{F} \textbf{D}_{M} \tilde{\textbf{F}}](u,k) \textbf{F}^{2}(u,k)}{\tilde{\textbf{F}}(u,k)} \end{array} $$
(28)

For convenience, we name the five terms in (28) as \(E1\), \(E2\), \(E3\), \(E4\) and \(E5\), respectively. Then, for \(E3\) we have

$$\begin{array}{@{}rcl@{}} E3 &=& \lambda_{r} \text{tr}(\textbf{F}\textbf{F}^{\prime}) \end{array} $$
(29)

Using the inequality \(z \geqslant 1+\log z\), we have

$$\begin{array}{@{}rcl@{}} E1 &\geqslant& -2 \sum\limits_{u = 1}^{m} \sum\limits_{k = 1}^{r} [(\textbf{W}\odot\textbf{W}\odot(\textbf{R}+\textbf{P}))\textbf{G}](u,k) \textbf{F}(u,k) \\ &=& -2 \text{tr}[(\textbf{W}\odot\textbf{W}\odot(\textbf{R}+\textbf{P}))\textbf{G}\textbf{F}^{\prime}] \end{array} $$
(30)

and

$$\begin{array}{@{}rcl@{}} E2 \geqslant -\sum\limits_{u = 1}^{m} \sum\limits_{v = 1}^{m} \sum\limits_{k = 1}^{r} \lambda_{F} \textbf{M}(u,v) \textbf{F}(v,k) \textbf{F}(u,k) = - \lambda_{F} \text{tr}(\textbf{F}^{\prime}\textbf{M} \textbf{F}) \end{array} $$
(31)

For \(E5\), we use the following inequality [7]

$$\begin{array}{@{}rcl@{}} \sum\limits_{i = 1}^{n} \sum\limits_{p = 1}^{k} \frac{[\textbf{A}\textbf{S}^{*}\textbf{B}](i,p)\, \textbf{S}^{2}(i,p)}{\textbf{S}^{*}(i,p)} \geqslant \text{tr}(\textbf{S}^{\prime}\textbf{A}\textbf{S}\textbf{B}) \end{array} $$

where \(\textbf {A}_{n \times n}\), \(\textbf {B}_{k \times k}\), \(\textbf {S}_{n \times k}\), and \(\textbf {S}^{*}_{n \times k}\) are non-negative matrices, and \(\textbf {A}\) and \(\textbf {B}\) are symmetric. Therefore, we have

$$\begin{array}{@{}rcl@{}} E5 \geqslant \lambda_{F} \text{tr}(\textbf{F}^{\prime}\textbf{D}_{M} \textbf{F}) \end{array} $$
(32)

Finally, for \(E4\), letting \(\textbf {F}(u,k) = \tilde {\textbf {F}}(u,k) \textbf {Q}(u,k)\), we have

$$\begin{array}{@{}rcl@{}} E4 &=& \sum\limits_{u = 1}^{m} \sum\limits_{i = 1}^{n} \sum\limits_{k = 1}^{r} \sum\limits_{l = 1}^{r} \frac{\tilde{\textbf{F}}(u,l) \textbf{G}^{\prime}(l,i) \textbf{W}^{2}(u,i) \textbf{G}(i,k)\textbf{F}^{2}(u,k)}{\tilde{\textbf{F}}(u,k)} \\ &=& \sum\limits_{u = 1}^{m} \sum\limits_{i = 1}^{n} \sum\limits_{k = 1}^{r} \sum\limits_{l = 1}^{r} \tilde{\textbf{F}}(u,l) \textbf{G}^{\prime}(l,i) \textbf{W}^{2}(u,i) \textbf{G}(i,k) \tilde{\textbf{F}}(u,k) \textbf{Q}^{2}(u,k) \\ &=& \sum\limits_{u = 1}^{m} \sum\limits_{i = 1}^{n} \sum\limits_{k = 1}^{r} \sum\limits_{l = 1}^{r} \tilde{\textbf{F}}(u,l) \textbf{G}^{\prime}(l,i) \textbf{W}^{2}(u,i) \textbf{G}(i,k) \tilde{\textbf{F}}(u,k) \\ & & (\frac{\textbf{Q}^{2}(u,k)+\textbf{Q}^{2}(u,l)}{2}) \\ &\geqslant& \sum\limits_{u = 1}^{m} \sum\limits_{i = 1}^{n} \sum\limits_{k = 1}^{r} \sum\limits_{l = 1}^{r} \tilde{\textbf{F}}(u,l) \textbf{G}^{\prime}(l,i) \textbf{W}^{2}(u,i) \textbf{G}(i,k) \tilde{\textbf{F}}(u,k) \\ & & (\textbf{Q}(u,k)\textbf{Q}(u,l)) \\ &=& \sum\limits_{u = 1}^{m} \sum\limits_{i = 1}^{n} \sum\limits_{k = 1}^{r} \sum\limits_{l = 1}^{r} \textbf{F}(u,l) \textbf{G}^{\prime}(l,i) \textbf{W}^{2}(u,i) \textbf{G}(i,k) \textbf{F}(u,k) \\ &=& \text{tr}[(\textbf{W}\odot\textbf{W}\odot(\textbf{F}\textbf{G}^{\prime}))\textbf{G}\textbf{F}^{\prime}] \end{array} $$
(33)

By substituting (29)–(33) into (28), we have \(H(\textbf {F}, \tilde {\textbf {F}}) \geqslant J(\textbf {F})\). Moreover, each of the above inequalities becomes an equality when \(\tilde {\textbf {F}} = \textbf {F}\), so \(H(\textbf {F}, \textbf {F}) = J(\textbf {F})\) and both conditions in (25) are satisfied.

Next, we need to find the global minimum solution of \(H(\textbf {F}, \tilde {\textbf {F}})\). The gradient is

$$\begin{array}{@{}rcl@{}} \frac{1}{2} \frac{\partial{H(\textbf{F}, \tilde{\textbf{F}})}}{\partial{\textbf{F}(u,k)}} &=& - \frac{[(\textbf{W}\odot\textbf{W}\odot(\textbf{R}+ \textbf{P}))\textbf{G}](u,k) \tilde{\textbf{F}}(u,k)}{\textbf{F}(u,k)} \\ & & - \frac{[\lambda_{F} \textbf{M} \tilde{\textbf{F}}](u,k) \tilde{\textbf{F}}(u,k)}{\textbf{F}(u,k)} + \frac{[\lambda_{r} \tilde{\textbf{F}}](u,k) \textbf{F}(u,k)}{\tilde{\textbf{F}}(u,k)} \\ & & + \frac{[(\textbf{W}\odot\textbf{W}\odot(\tilde{\textbf{F}}\textbf{G}^{\prime}))\textbf{G}](u,k) \textbf{F}(u,k)}{\tilde{\textbf{F}}(u,k)} \\ & & + \frac{[\lambda_{F} \textbf{D}_{M} \tilde{\textbf{F}}](u,k) \textbf{F}(u,k)}{\tilde{\textbf{F}}(u,k)} \\ &=& - \frac{[(\textbf{W}\odot\textbf{W}\odot(\textbf{R}+ \textbf{P}))\textbf{G} + \lambda_{F} \textbf{M} \tilde{\textbf{F}}](u,k) \tilde{\textbf{F}}(u,k)}{\textbf{F}(u,k)} \\ & & + \frac{[(\textbf{W}\odot\textbf{W}\odot(\tilde{\textbf{F}}\textbf{G}^{\prime}))\textbf{G} + \lambda_{r} \tilde{\textbf{F}} + \lambda_{F} \textbf{D}_{M} \tilde{\textbf{F}}](u,k) \textbf{F}(u,k)}{\tilde{\textbf{F}}(u,k)} \\ \end{array} $$
(34)

We can further show that the Hessian matrix of \(H(\textbf {F}, \tilde {\textbf {F}})\) is a diagonal matrix with positive diagonal elements. Therefore, the global minimum can be obtained by setting (34) to zero, which results in

$$\begin{array}{@{}rcl@{}} \textbf{F}^{2}(u,k) = \tilde{\textbf{F}}^{2}(u,k) \frac{[(\textbf{W}\odot\textbf{W}\odot(\textbf{R}+ \textbf{P}))\textbf{G} + \lambda_{F} \textbf{M} \tilde{\textbf{F}}](u,k)}{[(\textbf{W}\odot\textbf{W}\odot(\tilde{\textbf{F}}\textbf{G}^{\prime}))\textbf{G} + \lambda_{r} \tilde{\textbf{F}} + \lambda_{F} \textbf{D}_{M} \tilde{\textbf{F}}](u,k)} \end{array} $$
(35)

Returning to (26), we have \(\textbf {F}^{(t + 1)} = \textbf {F}\) and \(\textbf {F}^{(t)} = \tilde {\textbf {F}}\). Therefore, the objective decreases monotonically under the updating rule in (14). Further, by the equivalence between (14) and (16) shown in the proof of Theorem 1, (12) also decreases monotonically under the updating rule of (16), which completes the proof. □
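As an aside, the monotone-decrease guarantee of Theorem 2 can be checked numerically. The NumPy sketch below uses small random stand-in matrices (not the paper's data; variable names mirror the paper's notation, and the item factor matrix G is held fixed while F is updated with the rule of (35)) and verifies that the objective (24) never increases:

```python
import numpy as np

# Numerical sanity check of Theorem 2: under the multiplicative update (35),
# the objective J(F) in (24) should be monotonically non-increasing.
rng = np.random.default_rng(0)
m, n, r = 8, 10, 3

R = (rng.random((m, n)) < 0.3).astype(float)   # implicit feedback matrix
P = 0.1 * rng.random((m, n))                   # auxiliary term added to R
W = 0.1 + R                                    # positive confidence weights
M = rng.random((m, m)); M = (M + M.T) / 2      # symmetric user-similarity matrix
D_M = np.diag(M.sum(axis=1))                   # degree matrix of M
G = 0.1 + rng.random((n, r))                   # fixed non-negative item factors
lam_r, lam_F = 0.1, 0.1
W2 = W * W                                     # W ⊙ W

def J(F):
    """Objective (24), constant terms dropped."""
    return (-2 * np.trace((W2 * (R + P)) @ G @ F.T)
            + np.trace((W2 * (F @ G.T)) @ G @ F.T)
            + lam_r * np.trace(F @ F.T)
            + lam_F * np.trace(F.T @ D_M @ F)
            - lam_F * np.trace(F.T @ M @ F))

F = 0.1 + rng.random((m, r))
objs = [J(F)]
for _ in range(50):
    num = (W2 * (R + P)) @ G + lam_F * (M @ F)                  # numerator of (35)
    den = (W2 * (F @ G.T)) @ G + lam_r * F + lam_F * (D_M @ F)  # denominator of (35)
    F *= np.sqrt(num / den)                                     # multiplicative update
    objs.append(J(F))

# Monotone decrease, as guaranteed by the theorem (up to floating-point noise).
assert all(b <= a + 1e-8 for a, b in zip(objs, objs[1:]))
```

Because (35) is the exact minimizer of the auxiliary function, the assertion should hold for any random seed as long as all inputs stay non-negative and M is symmetric.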


About this article


Cite this article

Yao, Y., Tong, H., Yan, G. et al. Dual-regularized one-class collaborative filtering with implicit feedback. World Wide Web 22, 1099–1129 (2019). https://doi.org/10.1007/s11280-018-0574-1

