
The Visual Computer, Volume 33, Issue 11, pp 1403–1413

Computing object-based saliency via locality-constrained linear coding and conditional random fields

  • Zhen Yang
  • Huilin Xiong
Original Article

Abstract

Predicting object location using a top-down saliency model has grown increasingly popular in recent years. In this work, we combine locality-constrained linear coding (LLC) with a conditional random field (CRF) to construct a top-down saliency model that generates an object-specific saliency map. During the training phase, we use the LLC codes as the latent variables of the CRF model, while simultaneously learning a class-specific codebook through CRF modulation. In the testing phase, we use this top-down model to distinguish specific objects from a cluttered background. Finally, we evaluate the developed object-based saliency model on the MSRA-B, Graz-02, Weizmann Horse, and Plane datasets. The results show that our approach not only improves precision but also substantially reduces computational complexity.
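To make the encoding step concrete, the sketch below illustrates the standard approximated LLC solver (k-nearest-neighbour bases, constrained least squares with a sum-to-one constraint) that coding schemes of this kind rely on. It is a minimal illustration, not the authors' implementation: the function name, the regularization constant, and the toy codebook and descriptor are assumptions made for the example.

```python
# Minimal sketch of approximated LLC coding over a learned codebook.
# llc_encode, beta, and the toy data are illustrative assumptions.
import numpy as np

def llc_encode(x, codebook, k=5, beta=1e-4):
    """Encode one descriptor x (D,) over a codebook (M, D) with k-NN LLC."""
    # 1. Select the k codebook atoms closest to the descriptor.
    dists = np.linalg.norm(codebook - x, axis=1)
    idx = np.argsort(dists)[:k]
    B_k = codebook[idx]                      # (k, D) local bases

    # 2. Solve the small constrained least-squares problem on the local bases.
    z = B_k - x                              # shift so the descriptor sits at the origin
    C = z @ z.T                              # (k, k) local covariance
    C += beta * np.trace(C) * np.eye(k)      # regularize for numerical stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                             # enforce the sum-to-one constraint

    # 3. Scatter the k weights back into a sparse M-dimensional code.
    code = np.zeros(codebook.shape[0])
    code[idx] = w
    return code

# Toy usage: encode one 128-D SIFT-like descriptor over a 256-atom codebook.
rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 128))
descriptor = rng.standard_normal(128)
code = llc_encode(descriptor, codebook)
print(code.nonzero()[0], code[code != 0].sum())  # 5 active atoms, weights summing to 1
```

In a top-down pipeline of the kind described in the abstract, such sparse codes computed over a class-specific codebook would serve as the latent variables fed to the CRF.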

Keywords

Top-down model · Locality-constrained linear coding · Conditional random field · Object-based saliency

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant no. 61375008. We thank LetPub (http://www.letpub.com) for its linguistic assistance during the preparation of this manuscript.

Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  1. Department of Automation, Shanghai Jiao Tong University, Shanghai, China
