
The Visual Computer, Volume 30, Issue 6–8, pp 673–684

Parametric meta-filter modeling from a single example pair

  • Shi-Sheng Huang
  • Guo-Xin Zhang
  • Yu-Kun Lai
  • Johannes Kopf
  • Daniel Cohen-Or
  • Shi-Min Hu
Original Article

Abstract

We present a method for learning a meta-filter from an example pair comprising an original image \(A\) and its filtered version \(A^\prime \), produced by an unknown image filter. A meta-filter is a parametric model consisting of a spatially varying linear combination of simple basis filters. We introduce a technique for learning the parameters of the meta-filter \(f\) such that it approximates the effects of the unknown filter, i.e., \(f(A)\) approximates \(A^\prime \). The meta-filter can be transferred to novel input images, and its parametric representation enables intuitive tuning of its parameters to achieve controlled variations. We show that our technique successfully learns and models meta-filters that approximate a large variety of common image filters with high accuracy, both visually and quantitatively.
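The core idea of fitting a linear combination of basis filters to a single example pair can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes a hypothetical three-filter basis (identity, one box blur, two stacked box blurs) and fits a single global weight vector by least squares, whereas the paper's meta-filter uses spatially varying weights with additional priors such as sparsity.

```python
import numpy as np

def box3(img):
    """3x3 box blur with wrap-around boundaries (a stand-in basis filter)."""
    acc = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / 9.0

def basis_responses(img):
    """Responses of the hypothetical basis: identity, one blur, two blurs."""
    b1 = box3(img)
    return np.stack([img, b1, box3(b1)], axis=-1)

def learn_weights(A, A_prime):
    """Fit global weights w so that sum_k w_k * B_k(A) approximates A_prime.

    A single least-squares solve stands in for the paper's per-pixel,
    regularized estimation.
    """
    X = basis_responses(A).reshape(-1, 3)          # (pixels, num_basis)
    w, *_ = np.linalg.lstsq(X, A_prime.ravel(), rcond=None)
    return w

def transfer(B, w):
    """Apply the learned filter combination to a novel image B."""
    return (basis_responses(B).reshape(-1, len(w)) @ w).reshape(B.shape)
```

If the unknown filter happens to lie in the span of the basis (e.g., it is one of the basis filters), the fit recovers it exactly; for real filters the meta-filter is only an approximation, and the spatial variation and richer basis of the full method are what make it accurate.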

Keywords

Image filters · Filter space · Sparsity · Learning and transfer

Acknowledgments

This work was supported by the National Basic Research Project of China (Project Number 2011CB302205), the Natural Science Foundation of China (Project Numbers 61120106007 and 61133008), the National High Technology Research and Development Program of China (Project Number 2012AA011802), and the Tsinghua University Initiative Scientific Research Program.

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Shi-Sheng Huang (1)
  • Guo-Xin Zhang (1)
  • Yu-Kun Lai (2)
  • Johannes Kopf (3)
  • Daniel Cohen-Or (4)
  • Shi-Min Hu (1)

  1. Tsinghua University, Beijing, China
  2. Cardiff University, Cardiff, UK
  3. Microsoft Research, Redmond, USA
  4. Tel Aviv University, Tel Aviv, Israel
