
International Journal of Computer Vision, Volume 121, Issue 2, pp 183–208

Weighted Nuclear Norm Minimization and Its Applications to Low Level Vision

  • Shuhang Gu
  • Qi Xie
  • Deyu Meng
  • Wangmeng Zuo
  • Xiangchu Feng
  • Lei Zhang

Abstract

As a convex relaxation of the rank minimization model, the nuclear norm minimization (NNM) problem has attracted significant research interest in recent years. The standard NNM regularizes each singular value equally, yielding an easily computed convex norm. However, this restricts its capability and flexibility in dealing with many practical problems, where the singular values have clear physical meanings and should be treated differently. In this paper we study the weighted nuclear norm minimization (WNNM) problem, which adaptively assigns weights to different singular values. As the key step in solving general WNNM models, the theoretical properties of the weighted nuclear norm proximal (WNNP) operator are investigated. Albeit nonconvex, we prove that WNNP is equivalent to a standard quadratic programming problem with linear constraints, which facilitates solving the original problem with off-the-shelf convex optimization solvers. In particular, when the weights are sorted in non-descending order, the optimal solution can be easily obtained in closed form. With WNNP, the solving strategies for multiple extensions of WNNM, including robust PCA and matrix completion, can be readily constructed under the alternating direction method of multipliers paradigm. Furthermore, inspired by the reweighted sparse coding scheme, we present an automatic weight setting method, which greatly facilitates the practical implementation of WNNM. The proposed WNNM methods achieve state-of-the-art performance in typical low level vision tasks, including image denoising, background subtraction and image inpainting.
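To make the key step concrete, the sketch below illustrates the closed-form WNNP solution described in the abstract: when the weights are non-descending (so the largest singular values receive the smallest penalties), the proximal operator reduces to soft-thresholding each singular value by its own weight. This is a minimal NumPy sketch, not the authors' implementation; the function names (`wnnp_closed_form`, `reweight`) and the specific reweighting rule of the form c / (sigma + eps) are illustrative assumptions inspired by the reweighted sparse coding scheme the abstract mentions.

```python
import numpy as np

def wnnp_closed_form(Y, weights):
    """Weighted nuclear norm proximal operator for non-descending weights.

    Approximately solves  min_X  0.5 * ||Y - X||_F^2 + sum_i w_i * sigma_i(X)
    by soft-thresholding each singular value of Y with its own weight,
    which is the closed-form optimum when w_1 <= w_2 <= ... <= w_n
    (singular values sorted in descending order).
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)   # per-singular-value soft-thresholding
    return U @ np.diag(s_shrunk) @ Vt

def reweight(sigma, c, eps=1e-6):
    """Illustrative reweighting rule (an assumption, not the paper's exact formula):
    larger singular values get smaller weights and are thus penalized less."""
    return c / (sigma + eps)

if __name__ == "__main__":
    # Toy usage: denoise a noisy rank-5 matrix with one WNNP step.
    rng = np.random.default_rng(0)
    L = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))  # low-rank ground truth
    Y = L + 0.1 * rng.standard_normal((50, 50))                      # noisy observation
    sigma = np.linalg.svd(Y, compute_uv=False)
    X_hat = wnnp_closed_form(Y, reweight(sigma, c=1.0))
```

In the full WNNM framework this operator would appear as the inner proximal step of an ADMM loop (e.g., for robust PCA or matrix completion), with the weights updated from the current singular value estimates at each iteration.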

Keywords

Low rank analysis · Nuclear norm minimization · Low level vision

Notes

Acknowledgments

This work is supported by the Hong Kong RGC GRF grant (PolyU 5313/13E).


Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  • Shuhang Gu (1)
  • Qi Xie (2)
  • Deyu Meng (2)
  • Wangmeng Zuo (3)
  • Xiangchu Feng (4)
  • Lei Zhang (1)
  1. Department of Computing, The Hong Kong Polytechnic University, Hong Kong SAR, China
  2. School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China
  3. School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
  4. Department of Applied Mathematics, Xidian University, Xi'an, China
