Using Label Propagation to Get Confidence Map for Segmentation

  • Haoran Li
  • Hongxun Yao
  • Xiaoshuai Sun
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8879)


We propose a novel algorithm that segments objects starting from the existing results of co-segmentation algorithms [1]. Previous co-segmentation algorithms work well when the main regions of the images contain only the target objects; however, their performance degrades significantly when objects of multiple categories appear in the images. In contrast, our method combines mask transformation across multiple images with discriminative enhancement across multiple object categories, which ensures good performance in both scenarios. We compute SIFT Flow [2] between pre-segmented source images and the target image, and warp the source images' segmentation masks onto the target image using the flow vectors. All the warped masks then vote on the target image's mask, yielding an initial segmentation. We further use the ratio between votes for the target category and votes for the other categories to suppress the side effects of other objects that may appear in the initial segmentation. We evaluate our approach on internet images collected by Rubinstein et al. [1], and conduct an additional experiment on multi-object conjunction cases. Our algorithm is computationally efficient and achieves better performance than the state-of-the-art algorithm.
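The pipeline in the abstract (warp source masks by flow vectors, vote for an initial mask, then score pixels by the ratio of target-category to other-category votes) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dense flow fields are assumed to come from an external SIFT Flow step, and the function names, the 0.5 voting threshold, and the `eps` smoothing term are our own illustrative choices.

```python
import numpy as np

def warp_mask(mask, flow):
    """Warp a source segmentation mask onto the target image grid
    using a dense flow field (e.g. produced by SIFT Flow [2]).
    mask: (H, W) binary array; flow: (H, W, 2) array of (dy, dx)
    vectors, one per target pixel."""
    H, W = mask.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # each target pixel reads the source pixel its flow vector points to
    src_y = np.clip(ys + flow[..., 0], 0, H - 1).astype(int)
    src_x = np.clip(xs + flow[..., 1], 0, W - 1).astype(int)
    return mask[src_y, src_x]

def vote_mask(warped_masks, threshold=0.5):
    """Average all warped source masks and threshold the votes to
    obtain the initial segmentation of the target image."""
    votes = np.mean(np.stack(warped_masks, axis=0), axis=0)
    return (votes >= threshold).astype(np.uint8)

def positive_ratio(votes_target, votes_others, eps=1e-6):
    """Per-pixel ratio between votes for the target category and
    votes for all other categories; a high ratio marks pixels that
    likely belong to the target object rather than a distractor."""
    return votes_target / (votes_others + eps)
```

Thresholding `positive_ratio` (rather than the raw vote count) is what lets the method discount regions supported mainly by masks of other object categories.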


Keywords: Target Object · Label Propagation · Positive Ratio · Segmentation Mask · Semantic Region
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.




References

  1. Rubinstein, M., Joulin, A., Kopf, J., Liu, C.: Unsupervised joint object discovery and segmentation in internet images. In: 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1939–1946. IEEE (2013)
  2. Liu, C., Yuen, J., Torralba, A.: SIFT Flow: Dense correspondence across scenes and its applications. IEEE Transactions on Pattern Analysis and Machine Intelligence 33(5), 978–994 (2011)
  3. Vicente, S., Rother, C., Kolmogorov, V.: Object cosegmentation. In: 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2217–2224 (2011)
  4. Cheng, M.-M., Zhang, G.-X., Mitra, N.J., Huang, X., Hu, S.-M.: Global contrast based salient region detection. In: 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 409–416. IEEE (2011)
  5. Rubinstein, M., Liu, C., Freeman, W.T.: Annotation propagation in large image databases via dense image correspondence. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) ECCV 2012, Part III. LNCS, vol. 7574, pp. 85–99. Springer, Heidelberg (2012)
  6. Yang, Y., Liang, Q., Niu, L., Zhang, Q.: Belief propagation stereo matching algorithm using ground control points. In: Fifth International Conference on Graphic and Image Processing, pp. 90690W–90690W. International Society for Optics and Photonics (2014)
  7. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60(2), 91–110 (2004)
  8. Rother, C., Kolmogorov, V., Blake, A.: GrabCut: Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics (TOG) 23, 309–314 (2004)

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Haoran Li (1)
  • Hongxun Yao (1)
  • Xiaoshuai Sun (1)

  1. School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
