Double Weighted Low-Rank Representation and Its Efficient Implementation

  • Jianwei Zheng
  • Kechen Lou
  • Ping Yang
  • Wanjun Chen
  • Wanliang Wang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11440)

Abstract

To overcome two limitations of existing low-rank representation (LRR) methods, namely that the error distribution must be known a priori and that the leading rank components may be over-penalized, this paper proposes a new LRR-based model, double weighted LRR (DWLRR), which exploits two distinct weighting schemes on the representation matrix. The first encodes the varying distributions of the residuals in an adaptively learned weighting matrix, giving more flexible noise resistance. The second employs a parameterized rational penalty together with a weighting vector to reveal the importance of different rank components, yielding a better approximation to the intrinsic subspace structure. Moreover, we derive a computationally efficient algorithm based on a parallel updating scheme and an automatic thresholding operation. Comprehensive experiments on image clustering demonstrate the robustness and efficiency of DWLRR compared with other state-of-the-art models.
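The core proximal step behind weighted low-rank models of this kind can be illustrated with a short sketch. The following is a minimal, hypothetical Python illustration, not the paper's actual algorithm: it shows weighted singular value thresholding, where each singular value receives its own threshold, and an assumed weight rule (inversely proportional to the singular values) so that leading rank components are penalized less, matching the motivation stated in the abstract. The function names and the weight formula are illustrative assumptions.

```python
import numpy as np

def adaptive_weights(s, C=1.0, eps=1e-6):
    # Illustrative weight rule (assumption, not the paper's formula):
    # larger singular values get smaller weights, so the leading rank
    # components are shrunk less than the trailing ones.
    return C / (s + eps)

def weighted_svt(X, weights):
    # Weighted singular value thresholding: soft-threshold each singular
    # value of X by its own weight, then reassemble the matrix.
    # With weights ascending (singular values are returned descending),
    # this is the proximal operator of a weighted nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thresh = np.maximum(s - weights, 0.0)
    return (U * s_thresh) @ Vt
```

In a proximal-gradient loop, a step like this would be applied once per iteration to the current representation matrix; because small singular values receive large thresholds, the operator drives them to zero and automatically truncates the rank.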

Keywords

Subspace clustering · Low-rank approximation · Nonconvex surrogate function · Proximal gradient method

Notes

Acknowledgements

This work is supported by the National Natural Science Foundation of China (61602413) and the Natural Science Foundation of Zhejiang Province (LY19F030016).

Supplementary material

Supplementary material 1: 482298_1_En_44_MOESM1_ESM.pdf (59 KB)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Zhejiang University of Technology, Hangzhou, China