A Sparse and Low-Rank Matrix Recovery Model for Saliency Detection

  • Chao Wang
  • Jing Li
  • KeXin Li
  • Yi Zhuang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11323)


Previous low-rank matrix recovery models for saliency detection suffer from two problems: the transform matrix learned on public datasets may not be suitable for the image being detected, and the transform matrix fails to incorporate the image's low-level features. In this paper, we propose a novel salient object detection model that combines sparse and low-rank matrix recovery (SLRR) with an adaptive background template. Our SLRR model uses a selection strategy to establish the adaptive background template: potential salient superpixels are removed from the image border regions, and the remaining superpixels form the background template. The resulting sparse and low-rank matrix recovery problem is solved by the Inexact Augmented Lagrange Multiplier (ALM) method. Both quantitative and qualitative experiments on two challenging datasets show competitive results compared with other state-of-the-art methods. In addition, a new dataset of salient objects on the edge (SOE), containing 500 images, is constructed for evaluating saliency detection.
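The sparse and low-rank decomposition referred to above can be illustrated by the standard robust-PCA formulation, min ||L||_* + λ||S||_1 s.t. D = L + S, solved with the Inexact ALM scheme the abstract names. The sketch below is a generic Inexact ALM solver in NumPy, not the authors' SLRR model (it omits the adaptive background template and low-level features); the function name, parameter defaults, and stopping rule are illustrative assumptions.

```python
import numpy as np

def svd_shrink(X, tau):
    # Singular-value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_shrink(X, tau):
    # Elementwise soft thresholding: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def inexact_alm_rpca(D, lam=None, tol=1e-7, max_iter=500):
    """Decompose D into low-rank L plus sparse S via Inexact ALM."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))   # common default weight
    spectral = np.linalg.norm(D, 2)
    Y = D / max(spectral, np.linalg.norm(D, np.inf) / lam)  # dual init
    mu = 1.25 / spectral                  # penalty parameter
    mu_bar = mu * 1e7                     # cap to avoid overflow
    rho = 1.5                             # penalty growth rate
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # Alternate one proximal update each for L and S (the "inexact" part).
        L = svd_shrink(D - S + Y / mu, 1.0 / mu)
        S = soft_shrink(D - L + Y / mu, lam / mu)
        Z = D - L - S                     # primal residual
        Y = Y + mu * Z                    # dual ascent
        mu = min(mu * rho, mu_bar)
        if np.linalg.norm(Z, "fro") <= tol * np.linalg.norm(D, "fro"):
            break
    return L, S
```

In a saliency context, D would hold per-superpixel feature vectors: the low-rank part L models the redundant background, while the sparse residual S flags salient regions.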


Keywords: Low rank · Saliency detection · Matrix recovery · Inexact augmented Lagrange multiplier



This paper is supported by the Fundamental Research Funds for the Central Universities (NS 2015092).



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
