
The Visual Computer, Volume 29, Issue 9, pp 861–870

SimLocator: robust locator of similar objects in images

  • Yan Kong
  • Weiming Dong
  • Xing Mei
  • Xiaopeng Zhang
  • Jean-Claude Paul
Original Article

Abstract

Similar objects commonly appear in natural images, and locating and cutting out these objects with classical interactive image segmentation methods can be tedious. In this paper, we propose SimLocator, a robust method for locating and cutting out similar objects with minimal user interaction. After an arbitrary object template is extracted from the input image, candidate locations of similar objects are roughly detected by discriminating the shape and color features of the image. A novel optimization method is then introduced to select accurate locations from the two sets of candidates. A matting-based method is further used to refine the results and to ensure that all similar objects in the image are located. Finally, a method based on alpha matting is utilized to extract precise object contours. To ensure the performance of the matting operation, we develop a new foreground extraction method. Experiments show that SimLocator is more robust and more convenient to use than existing repetition detection and interactive image segmentation methods for locating similar objects in images.
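To make the pipeline described above concrete, the following is a minimal sketch of its stages in Python with OpenCV. It is an approximation, not the SimLocator method: normalized cross-correlation template matching stands in for the paper's shape and color descriptors, greedy non-maximum suppression stands in for its location-selection optimization, and GrabCut stands in for its foreground extraction and alpha matting. The function names, thresholds, and file names are illustrative assumptions only.

```python
import cv2
import numpy as np

def rough_candidates(image, template, score_thresh=0.6):
    # Rough candidate detection. SimLocator combines shape and color
    # descriptors; normalized cross-correlation is used here as a stand-in.
    scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(scores >= score_thresh)
    return [(int(x), int(y), float(scores[y, x])) for x, y in zip(xs, ys)]

def select_locations(candidates, template_shape, overlap=0.5):
    # Greedy non-maximum suppression, standing in for the optimization
    # that selects accurate locations from the candidate sets.
    h, w = template_shape[:2]
    kept = []
    for x, y, s in sorted(candidates, key=lambda c: -c[2]):
        if all(abs(x - kx) > overlap * w or abs(y - ky) > overlap * h
               for kx, ky, _ in kept):
            kept.append((x, y, s))
    return kept

def cut_out(image, location, template_shape, iterations=5):
    # Foreground extraction for one located object; GrabCut substitutes
    # for the paper's foreground extraction and alpha matting steps.
    x, y, _ = location
    h, w = template_shape[:2]
    mask = np.zeros(image.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, (x, y, w, h), bgd, fgd,
                iterations, cv2.GC_INIT_WITH_RECT)
    return ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)

if __name__ == "__main__":
    image = cv2.imread("scene.png")        # input image (hypothetical file)
    template = cv2.imread("template.png")  # user-extracted object template
    candidates = rough_candidates(image, template)
    for loc in select_locations(candidates, template.shape):
        mask = cut_out(image, loc, template.shape)  # binary cutout mask
```

The sketch only reproduces the overall control flow (template, rough candidates, selection, cutout); the paper's contribution lies in the descriptors, the candidate-fusion optimization, and the matting-based refinement that these stand-ins replace.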

Keywords

Similar objects · Object descriptor · Object matching · Stable locations · Contour extraction

Notes

Acknowledgements

We thank the anonymous reviewers for their valuable input. We thank Fuzhang Wu for producing some of the Dual-Bound results. We thank the Flickr members who kindly share their images under Creative Commons licenses: Bestfriend (tea cups), RenateEurope (purple flowers), acaffery (balloons), Luko Gecko (fish school), Bold Huang (petunia), and Swami Stream (chocolate cake). We thank the users of 500px.com who have shared their images in the public domain: Ricky Marek (pomegranates), Nitin Prabhudesai (dandelions), and Joram Huyben (green lanterns). We also thank the users of pinterest.com who have shared the media in their collections: Gus's Mom (panda cakes), Rhonda E. Peterson (meatballs), and Cindy Loo (fried balls). The two fruit cake images are borrowed from [10]. This work is supported by the National Natural Science Foundation of China under project Nos. 61172104, 61271430, 61201402 and 61202324, by the Beijing Natural Science Foundation (Content Aware Image Synthesis and its Applications, No. 4112061), and by the SRF for ROCS, SEM.

References

  1. Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P., Susstrunk, S.: SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 34(11), 2274–2282 (2012)
  2. Ahuja, N., Todorovic, S.: Extracting texels in 2.1D natural textures. In: IEEE International Conference on Computer Vision, pp. 1–8. IEEE Computer Society, Los Alamitos (2007)
  3. Bay, H., Ess, A., Tuytelaars, T., Van Gool, L.: Speeded-up robust features (SURF). Comput. Vis. Image Underst. 110(3), 346–359 (2008)
  4. Belongie, S., Malik, J., Puzicha, J.: Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell. 24(4), 509–522 (2002)
  5. Berg, A.C., Berg, T.L., Malik, J.: Shape matching and object recognition using low distortion correspondences. In: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 1, pp. 26–33. IEEE Computer Society, Washington (2005)
  6. Chen, Q., Li, D., Tang, C.K.: KNN matting. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 869–876 (2012)
  7. Cheng, M.M., Zhang, F.L., Mitra, N.J., Huang, X., Hu, S.M.: RepFinder: finding approximately repeated scene elements for image editing. ACM Trans. Graph. 29(4), 83:1–83:8 (2010)
  8. Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), vol. 1, pp. 886–893. IEEE, New York (2005)
  9. Forssén, P.E.: Maximally stable colour regions for recognition and matching. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–8 (2007)
  10. Huang, H., Zhang, L., Zhang, H.C.: RepSnapping: efficient image cutout for repeated scene elements. Comput. Graph. Forum 30(7), 2059–2066 (2011)
  11. Joulin, A., Bach, F., Ponce, J.: Discriminative clustering for image co-segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010), pp. 1943–1950 (2010)
  12. Kim, E., Li, H., Huang, X.: A hierarchical image clustering cosegmentation framework. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 686–693 (2012)
  13. Krages, B.: Photography: The Art of Composition. Allworth Press, New York (2005)
  14. Leung, T.K., Malik, J.: Detecting, localizing and grouping repeated scene elements from an image. In: Proceedings of the 4th European Conference on Computer Vision (ECCV'96), vol. I, pp. 546–555. Springer, London (1996)
  15. Levin, A., Lischinski, D., Weiss, Y.: A closed-form solution to natural image matting. IEEE Trans. Pattern Anal. Mach. Intell. 30(2), 228–242 (2008)
  16. Li, Y., Sun, J., Tang, C.K., Shum, H.Y.: Lazy snapping. ACM Trans. Graph. 23(3), 303–308 (2004)
  17. Liu, J., Sun, J., Shum, H.Y.: Paint selection. ACM Trans. Graph. 28(3), 69:1–69:7 (2009)
  18. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60, 91–110 (2004)
  19. Meng, F., Li, H., Liu, G., Ngan, K.N.: Object co-segmentation based on shortest path algorithm and saliency model. IEEE Trans. Multimed. 14(5), 1429–1441 (2012)
  20. Pauly, M., Mitra, N.J., Wallner, J., Pottmann, H., Guibas, L.J.: Discovering structural regularity in 3D geometry. ACM Trans. Graph. 27(3), 43:1–43:11 (2008)
  21. Rother, C., Kolmogorov, V., Blake, A.: "GrabCut": interactive foreground extraction using iterated graph cuts. ACM Trans. Graph. 23, 309–314 (2004)
  22. Schweitzer, H., Deng, R., Anderson, R.F.: A dual-bound algorithm for very fast and exact template matching. IEEE Trans. Pattern Anal. Mach. Intell. 33(3), 459–470 (2011)
  23. Thompson, D.W.: On Growth and Form. Cambridge University Press, Cambridge (1992)

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Yan Kong (1)
  • Weiming Dong (1), corresponding author
  • Xing Mei (1)
  • Xiaopeng Zhang (1)
  • Jean-Claude Paul (2)
  1. LIAMA-NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China
  2. Project CAD, INRIA, Paris, France
