Signal, Image and Video Processing, Volume 8, Issue 1, pp 181–190

Saliency-based superpixels

  • Linfeng Xu
  • Liaoyuan Zeng
  • Zhengning Wang
Original Paper


Abstract

Superpixels provide an over-segmentation of a natural image; however, they carry no information about entire objects. In this paper, we propose a method to obtain superpixels through a merging strategy driven by the bottom-up saliency values of the superpixels. The proposed method aims to produce meaningful superpixels, i.e., to keep objects as complete as possible. The method first creates an over-segmented representation of an image. The saliency value of each superpixel is then calculated with a biologically plausible saliency model grounded in statistical theory. Two adjacent superpixels are merged if the merged superpixel is more salient than either of the unmerged ones, and this merging process is repeated iteratively. Experimental evaluation on test images shows that the resulting saliency-based superpixels extract salient objects more effectively than existing methods.


Keywords: Visual attention · Saliency model · Superpixel · Merging strategy
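The iterative merging strategy described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the `region_saliency` measure below (region size times the contrast of the region mean against the global mean) is a hypothetical stand-in for the paper's biologically plausible statistical saliency model, and superpixels are represented simply as lists of pixel intensities with an explicit adjacency set.

```python
def region_saliency(region, image_mean):
    """Toy saliency: region size times contrast of region mean vs. global mean.
    A hypothetical stand-in for the paper's statistical saliency model."""
    mean = sum(region) / len(region)
    return len(region) * abs(mean - image_mean)


def merge_superpixels(regions, adjacency):
    """Iteratively merge adjacent superpixels whenever the merged region
    is more salient than either of the two unmerged ones.

    regions   -- dict: label -> list of pixel intensities
    adjacency -- set of frozenset({a, b}) pairs of adjacent labels
    """
    pixels = [p for r in regions.values() for p in r]
    image_mean = sum(pixels) / len(pixels)
    merged_any = True
    while merged_any:                      # repeat until no pair merges
        merged_any = False
        for pair in sorted(adjacency, key=sorted):
            a, b = sorted(pair)
            candidate = regions[a] + regions[b]
            s_merged = region_saliency(candidate, image_mean)
            s_a = region_saliency(regions[a], image_mean)
            s_b = region_saliency(regions[b], image_mean)
            if s_merged > max(s_a, s_b):   # merged region is more salient
                regions[a] = candidate     # absorb b into a
                del regions[b]
                # redirect b's adjacencies to a, drop the collapsed edge
                adjacency = {frozenset(a if v == b else v for v in e)
                             for e in adjacency}
                adjacency = {e for e in adjacency if len(e) == 2}
                merged_any = True
                break                      # restart with updated adjacency
    return regions


# Four superpixels along a row: two bright (the object), two dark (background).
regions = {1: [9, 9], 2: [8, 8], 3: [1, 1], 4: [2, 2]}
adjacency = {frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})}
result = merge_superpixels(regions, adjacency)
# The bright pair merges and the dark pair merges, but merging bright with
# dark would lower saliency, so the object/background boundary survives:
# {1: [9, 9, 8, 8], 3: [1, 1, 2, 2]}
```

With this measure, merging two regions whose means lie on the same side of the global mean adds their saliencies, so homogeneous regions coalesce, while a merge across the object boundary pulls the region mean toward the global mean and is rejected; this is the mechanism by which saliency-driven merging keeps objects whole.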



Acknowledgments

This work was partially supported by NSFC (No. 61179060), National High Technology Research and Development Program of China (863 Program, No. 2012AA011503), China Postdoctoral Science Foundation (No. Y02006023601254), and the Fundamental Research Funds for the Central Universities (No. ZYGX2012J019).



Copyright information

© Springer-Verlag London 2013

Authors and Affiliations

  1. School of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu, China