ConvNet-Based Depth Estimation, Reflection Separation and Deblurring of Plenoptic Images

  • Paramanand Chandramouli
  • Mehdi Noroozi
  • Paolo Favaro
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10113)

Abstract

In this paper, we address the problem of reflection removal and deblurring from a single image captured by a plenoptic camera. We develop a two-stage approach to recover the scene depth and high resolution textures of the reflected and transmitted layers. For depth estimation in the presence of reflections, we train a classifier through convolutional neural networks. For recovering high resolution textures, we assume that the scene is composed of planar regions and perform the reconstruction of each layer by using an explicit form of the plenoptic camera point spread function. The proposed framework also recovers the sharp scene texture with different motion blurs applied to each layer. We demonstrate our method on challenging real and synthetic images.
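The abstract describes a two-stage pipeline: a convolutional network that classifies scene depth in the presence of reflections, followed by layer-wise reconstruction of the transmitted and reflected textures using an explicit plenoptic point spread function. As a rough illustration of that structure only, the sketch below pairs a hypothetical patch-based depth classifier with a simple Wiener-filter deconvolution; the class names, patch size, number of depth labels and the Wiener step are assumptions made for this example and are not taken from the paper.

```python
# Minimal sketch of the two-stage idea (assumptions only, not the paper's
# actual architecture): a patch-based CNN depth classifier followed by a
# Wiener-filter reconstruction of one planar layer from a known PSF.
import numpy as np
import torch
import torch.nn as nn

NUM_DEPTH_CLASSES = 10   # hypothetical discretisation of scene depth
PATCH_SIZE = 32          # hypothetical plenoptic-image patch size


class DepthPatchClassifier(nn.Module):
    """Stage 1 (sketch): map a raw plenoptic image patch to a depth label."""

    def __init__(self, num_classes: int = NUM_DEPTH_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Two 2x poolings shrink the patch by a factor of 4 per side.
        self.classifier = nn.Linear(64 * (PATCH_SIZE // 4) ** 2, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


def wiener_reconstruct(blurred: np.ndarray, psf: np.ndarray,
                       snr: float = 1e-2) -> np.ndarray:
    """Stage 2 (sketch): recover a sharp planar layer from its blurred
    observation, given an explicit PSF, with a basic Wiener filter."""
    H = np.fft.fft2(psf, s=blurred.shape)
    B = np.fft.fft2(blurred)
    X = np.conj(H) * B / (np.abs(H) ** 2 + snr)
    return np.real(np.fft.ifft2(X))


if __name__ == "__main__":
    # Toy usage with random data, only to show the interfaces.
    patch = torch.randn(1, 1, PATCH_SIZE, PATCH_SIZE)
    depth_scores = DepthPatchClassifier()(patch)           # (1, NUM_DEPTH_CLASSES)
    layer = wiener_reconstruct(np.random.rand(64, 64),     # blurred layer
                               np.ones((5, 5)) / 25.0)     # hypothetical box PSF
    print(depth_scores.shape, layer.shape)
```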

Keywords

Point Spread Function · Convolutional Neural Network · Depth Estimation · Motion Blur · Microlens Array

Supplementary material

416261_1_En_9_MOESM1_ESM.pdf (2.2 MB)
Supplementary material 1 (pdf 2254 KB)


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Paramanand Chandramouli (1)
  • Mehdi Noroozi (1)
  • Paolo Favaro (1)

  1. Department of Computer Science, University of Bern, Bern, Switzerland