A generic tool for interactive complex image editing

Abstract

Many complex image editing techniques require a certain per-pixel property or magnitude to be known; for example, simulating depth-of-field effects requires a depth map. This work presents an efficient interaction paradigm that approximates any per-pixel magnitude from a few user strokes by propagating the sparse user input to every pixel of the image. The propagation scheme is based on a linear least-squares system of equations that encodes local and neighborhood constraints over superpixels. After each user input, the system responds immediately, propagating the values and applying the corresponding filter. Our interaction paradigm is generic: image editing applications can run at interactive rates by changing only the image processing algorithm while keeping the proposed propagation scheme. We illustrate this through three interactive applications: depth-of-field simulation, dehazing, and tone mapping.
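The propagation described in the abstract can be illustrated as a sparse least-squares problem. The following minimal Python sketch (not the authors' implementation; the function, the Gaussian color-similarity weight, and the parameters `lam` and `sigma` are illustrative assumptions) propagates stroke values over a toy superpixel adjacency graph: unary equations pin stroked superpixels to the user's value, and binary equations ask similar-looking neighbors to share a value.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def propagate(mean_colors, edges, strokes, lam=1.0, sigma=0.2):
    """Propagate sparse stroke values over a superpixel graph by
    solving a linear least-squares system (illustrative sketch).

    mean_colors : (n, 3) array, mean color per superpixel in [0, 1]
    edges       : list of (i, j) pairs of adjacent superpixels
    strokes     : dict {superpixel index: user-assigned value}
    """
    n = mean_colors.shape[0]
    rows = len(strokes) + len(edges)
    A = lil_matrix((rows, n))
    b = np.zeros(rows)
    r = 0
    # Unary equations: stroked superpixels keep the user's value.
    for i, v in strokes.items():
        A[r, i] = lam
        b[r] = lam * v
        r += 1
    # Binary equations: neighbors with similar mean colors are
    # softly constrained to take similar values.
    for i, j in edges:
        w = np.exp(-np.linalg.norm(mean_colors[i] - mean_colors[j]) ** 2
                   / (2 * sigma ** 2))
        A[r, i] = w
        A[r, j] = -w
        r += 1
    # Solve the overdetermined system in the least-squares sense.
    return lsqr(A.tocsr(), b)[0]
```

On a chain of four superpixels where the first two are dark and the last two bright, stroking only the endpoints fills in the middle: each unstroked superpixel adopts the value of its similarly colored neighbor, while the weak dark-to-bright edge lets the value change there.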


Notes

  1. https://www.youtube.com/watch?v=Fps5SasG9v4

  2. https://github.com/anacambra/app_lensblur/tree/TVCJ


Author information

Corresponding author

Correspondence to Ana B. Cambra.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 25,850 KB)


About this article

Cite this article

Cambra, A.B., Murillo, A.C. & Muñoz, A. A generic tool for interactive complex image editing. Vis Comput 34, 1493–1505 (2018). https://doi.org/10.1007/s00371-017-1422-5

Keywords

  • User interaction
  • Image processing
  • Computer vision
  • Dense label propagation