Variational Networks: Connecting Variational Methods and Deep Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 10496)

Abstract

In this paper, we introduce variational networks (VNs) for image reconstruction. VNs are fully learned models based on the framework of incremental proximal gradient methods, and they provide a natural transition between classical variational methods and state-of-the-art residual neural networks. Due to their incremental nature, VNs are very efficient but only approximately minimize the underlying variational model. Surprisingly, our numerical experiments on image reconstruction problems show that giving up exact minimization leads to a consistent performance increase, in particular for convex models.
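The abstract's idea of an incremental (proximal) gradient stage that doubles as a residual unit can be sketched as follows. This is a conceptual illustration only, not the authors' implementation: the names (`vn_stage`, `lam`, `filters`, `activations`), the 1-D setting, and the hand-picked finite-difference filter with a `tanh` activation are all stand-ins for the learned 2-D quantities in the paper.

```python
import numpy as np

def vn_stage(x, y, lam, filters, activations):
    """One stage of a variational-network-style update (1-D sketch).

    Conceptual energy: lam/2 * ||x - y||^2 + sum_k rho_k(K_k x),
    where phi_k = rho_k' plays the role of a learned activation.
    """
    grad = lam * (x - y)  # gradient of the quadratic data term
    for k, phi in zip(filters, activations):
        kx = np.convolve(x, k, mode="same")                  # K_k x
        grad += np.convolve(phi(kx), k[::-1], mode="same")   # ~ K_k^T phi(K_k x), up to boundary effects
    return x - grad  # residual-style update: x_{t+1} = x_t - grad

# Toy usage: one stage applied to a noisy constant signal, with a
# finite-difference filter and tanh activation as stand-ins.
rng = np.random.default_rng(0)
y = np.ones(64) + 0.1 * rng.standard_normal(64)
x1 = vn_stage(y, y, 0.1, [np.array([1.0, -1.0])], [np.tanh])
```

Stacking several such stages, each with its own filters and activations, gives the incremental scheme the abstract describes: each stage takes a single approximate minimization step rather than solving the variational problem to convergence.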



Acknowledgements

We acknowledge grant support from the Austrian Science Fund (FWF) under the START project BIVISION (No. Y729) and from the European Research Council under the Horizon 2020 program, ERC starting grant HOMOVIS (No. 640156).

Author information

Correspondence to Erich Kobler.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1517 KB)


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Kobler, E., Klatzer, T., Hammernik, K., Pock, T. (2017). Variational Networks: Connecting Variational Methods and Deep Learning. In: Roth, V., Vetter, T. (eds) Pattern Recognition. GCPR 2017. Lecture Notes in Computer Science(), vol 10496. Springer, Cham. https://doi.org/10.1007/978-3-319-66709-6_23


  • DOI: https://doi.org/10.1007/978-3-319-66709-6_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-66708-9

  • Online ISBN: 978-3-319-66709-6

  • eBook Packages: Computer Science, Computer Science (R0)
