Abstract
Subspace learning has many applications, such as motion segmentation and image recognition. Existing subspace learning algorithms based on the self-expressiveness of samples may suffer from an unsuitable balance between the rank and the sparsity of the representation matrix. In this paper, a new model is proposed that balances rank and sparsity well. The model adopts the log-determinant function to control the rank of the solution. Meanwhile, the diagonal entries are penalized rather than strictly constrained to zero; this strategy makes the rank–sparsity balance more tunable. We further propose a new graph construction from the low-rank and sparse solution, which absorbs the advantages of the graph constructions used in sparse subspace clustering and in low-rank representation for the subsequent clustering step. Numerical experiments show that the new method, named RSBR, significantly increases the accuracy of subspace clustering on the real-world data sets we tested.
Notes
We say \(\mathbf{C}_k\) is connected if the undirected graph whose adjacency matrix is \(|\mathbf{C}_k|+|\mathbf{C}_k^T|\) is connected.
In this case, \(\mathbf{C}\) should be a block-diagonal matrix with K connected blocks, up to a permutation.
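The connectivity condition in these notes can be sketched in code: build the symmetric affinity \(|\mathbf{C}_k|+|\mathbf{C}_k^T|\) and check that its graph has a single connected component. This is a minimal illustration; the function name and the traversal are ours, not from the paper's implementation.

```python
import numpy as np

def is_connected(C_k):
    """Return True if the undirected graph whose adjacency matrix is
    |C_k| + |C_k^T| is connected (i.e., has a single component)."""
    W = np.abs(C_k) + np.abs(C_k.T)   # symmetric affinity matrix
    n = W.shape[0]
    # Depth-first search from node 0 over nonzero affinities.
    visited = {0}
    stack = [0]
    while stack:
        i = stack.pop()
        for j in np.nonzero(W[i])[0]:
            if j not in visited:
                visited.add(j)
                stack.append(j)
    return len(visited) == n

# Example: samples 0 and 1 express each other, but sample 2 is isolated,
# so this block is not connected.
C = np.array([[0.0, 0.5, 0.0],
              [0.3, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
```

Here `is_connected(C)` returns False, since node 2 has no edge to the rest; a block-diagonal \(\mathbf{C}\) with K such connected blocks yields exactly K components in the full affinity graph.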
Acknowledgements
The work was supported in part by NSFC Projects 11571312 and 91730303, and National Basic Research Program of China (973 Program) 2015CB352503.
Cite this article
Xia, Y., Zhang, Z. Rank–sparsity balanced representation for subspace clustering. Machine Vision and Applications 29, 979–990 (2018). https://doi.org/10.1007/s00138-018-0918-y