Abstract

Snap Composition broadens the applicability of interactive image composition. Current tools, such as Adobe’s Photomerge Group Shot, do an excellent job when the backgrounds can be aligned and object motion is limited. Snap Composition works well even when the input images contain different objects and the backgrounds cannot be aligned. The power of Snap Composition comes from the ability to assign to every output pixel a source pixel from any input image, and from any location in that image. An energy value is computed for each such assignment, representing both the user constraints and the quality of the composition; minimizing this energy yields the desired composition.

Composition is performed once a user marks objects in the different images and, optionally, drags them to new locations on the target canvas. The background around the dragged objects, as well as the final locations of the objects themselves, is computed automatically for seamless composition. If the user does not drag a selected object to a desired place, it snaps automatically into a suitable location. A video of the results is available at www.vision.huji.ac.il/shiftmap/SnapVideo.mp4 .
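As a rough illustration of the labeling energy described above (a toy sketch, not the paper's actual formulation or implementation — the function name `composition_energy` and the `data_cost` array are hypothetical), each output pixel can be assigned a source label, and the total energy can combine a per-pixel data term encoding user constraints with a smoothness term that penalizes visible seams between neighboring pixels drawn from different sources:

```python
import numpy as np

def composition_energy(labels, images, data_cost, smooth_weight=1.0):
    """Toy energy for an output labeling.

    labels:    HxW array, labels[y, x] = index of the source image for pixel (y, x).
    images:    list of HxW grayscale source images.
    data_cost: (num_sources, H, W) array of per-pixel costs (user constraints).
    """
    h, w = labels.shape
    # Data term: cost of the chosen source at each pixel.
    e = sum(float(data_cost[labels[y, x], y, x])
            for y in range(h) for x in range(w))
    # Smoothness term: color disagreement across seams where the
    # source changes between 4-connected neighbors.
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and labels[y, x] != labels[ny, nx]:
                    a = images[labels[y, x]][ny, nx]
                    b = images[labels[ny, nx]][y, x]
                    e += smooth_weight * float(abs(a - b))
    return e
```

In the actual method this energy is minimized with graph cuts over all (image, offset) labels; the sketch only evaluates a given labeling, where a uniform labeling incurs no seam cost while mixing mismatched sources does.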

Keywords

Input Image · Output Image · Graph Label · Marked Object · Image Editing
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Yael Pritch¹
  • Yair Poleg¹
  • Shmuel Peleg¹

  1. School of Computer Science, The Hebrew University, Jerusalem, Israel
