The Visual Computer, Volume 31, Issue 6–8, pp 1101–1111

Sparse pixel sampling for appearance edit propagation

Original Article

Abstract

Edit propagation is an appearance-editing method that propagates sparse edit strokes provided by users across an image. Although edit propagation has a wide variety of applications, it is computationally expensive, owing to the need to solve large linear systems. To reduce this cost, interpolation-based approaches have been studied intensively. This study is inspired by an interpolation-based edit-propagation method that uses a clustering algorithm to determine samples. That method uses an interpolant that approximates edit parameters with convex combinations of the samples. However, because the clustering algorithm generates samples that lie inside the set of pixels in a feature space, an interpolant built from convex combinations cannot exactly reconstruct pixels lying outside the samples' convex hull. To address this issue, this paper proposes a novel approximation model that interpolates image colors as well as edit parameters using affine combinations. In addition, this paper introduces sparse pixel sampling to determine the number and positions of samples and the weight coefficients of the affine combinations simultaneously. Sparse pixel sampling proceeds by updating candidate pixels: unnecessary pixels are discarded via compressive sensing, and new candidate pixels are greedily resampled according to their approximation errors. This paper demonstrates that the proposed model achieves better approximation of both image colors and edit parameters, and discusses the properties of the proposed model through various experiments.
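The core idea of the abstract — sparse affine weights over a small set of sample pixels, with negative weights permitted so that pixels outside the samples' convex hull remain reachable — can be sketched as a small L1-regularized least-squares problem. The sketch below is an illustration, not the paper's actual algorithm: it softly enforces the affine constraint (weights summing to one) by appending a scaled row of ones to the sample feature matrix, and promotes sparsity with soft-thresholding in an accelerated proximal-gradient (FISTA-style) loop. The sample features `S`, edit parameters `edits`, and all parameter values are hypothetical.

```python
import numpy as np

def sparse_affine_weights(samples, x, lam=0.01, rho=2.0, n_iter=2000):
    """Sketch: solve min_w 0.5 * ||A^T w - b||^2 + lam * ||w||_1, where a
    scaled all-ones row appended to the feature matrix softly enforces
    sum(w) = 1.  The result is an affine (not convex) combination: weights
    may be negative, so a pixel outside the samples' convex hull can still
    be reconstructed.  Sparsity comes from L1 soft-thresholding, loosely in
    the spirit of the compressive-sensing step described in the abstract."""
    k = samples.shape[0]
    A = np.hstack([samples, np.full((k, 1), rho)])   # (k, d+1): affine row added
    b = np.append(x, rho)                            # target with constraint entry
    step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1/L, L = Lipschitz constant
    w = y = np.full(k, 1.0 / k)                      # start from uniform weights
    t = 1.0
    for _ in range(n_iter):                          # accelerated proximal gradient
        z = y - step * (A @ (A.T @ y - b))           # gradient step on the quadratic
        w_new = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = w_new + ((t - 1.0) / t_new) * (w_new - w)  # momentum extrapolation
        w, t = w_new, t_new
    return w

# Toy data: four sample pixels with 2-D features and per-sample edit parameters.
S = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edits = np.array([0.0, 1.0, 0.5, 1.0])
x = np.array([0.25, 0.25])          # feature of the pixel being reconstructed
w = sparse_affine_weights(S, x)     # sparse affine weights over the samples
edit_at_x = w @ edits               # edit parameter propagated to this pixel
```

The same weights `w` interpolate both the feature vector (`S.T @ w` approximates `x`) and the edit parameter, which mirrors the abstract's claim that one approximation model serves image colors and edit parameters alike.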

Keywords

Image and video editing · Interactive editing · Edit propagation · Compressive sensing


Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  1. University of Tokyo, Tokyo, Japan
  2. University of Tokyo / JST CREST, Tokyo, Japan
