
Joint Blind Motion Deblurring and Depth Estimation of Light Field

  • Dongwoo Lee
  • Haesol Park
  • In Kyu Park
  • Kyoung Mu Lee
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11220)

Abstract

Removing camera motion blur from a single light field is a challenging task since it is a highly ill-posed inverse problem. The problem becomes even harder when the blur kernel varies spatially due to scene depth variation and high-order camera motion. In this paper, we propose a novel algorithm that jointly estimates all blur model variables, including the latent sub-aperture images, the camera motion, and the scene depth, from a blurred 4D light field. Exploiting the multi-view nature of a light field alleviates the ill-posedness of the optimization by providing strong depth cues and multi-view blur observations. The proposed joint estimation achieves high-quality light field deblurring and depth estimation simultaneously under arbitrary 6-DOF camera motion and unconstrained scene depth. Extensive experiments on real and synthetic blurred light fields confirm that the proposed algorithm outperforms state-of-the-art light field deblurring and depth estimation methods.
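The abstract's model can be read as a spatially varying blur formation over the exposure, coupled across sub-aperture views by the shared camera trajectory and scene depth. The LaTeX sketch below is an illustrative reading only; the symbols (B_s, L_s, P(t), d, the warp W, the forward operator A, and the weights λ) are assumed notation and are not taken from the paper.

```latex
% A minimal sketch of the spatially varying blur model implied by the
% abstract (assumed notation, not the paper's own formulation).
% B_s : observed blurry sub-aperture image at angular position s
% L_s : latent sharp sub-aperture image
% P(t): 6-DOF camera pose at time t within the exposure [0, T]
% d   : scene depth map; W warps pixel x according to pose and depth
\begin{equation}
  B_s(\mathbf{x}) \;=\; \frac{1}{T}\int_{0}^{T}
    L_s\!\bigl(\mathcal{W}(\mathbf{x};\,P(t),\,d(\mathbf{x}))\bigr)\,dt ,
  \qquad \text{for every sub-aperture view } s .
\end{equation}

% Joint blind deblurring then couples all views through the shared
% camera trajectory and depth, with regularizers on the unknowns:
\begin{equation}
  \min_{\{L_s\},\,d,\,P(\cdot)} \;
  \sum_{s} \bigl\| B_s - \mathcal{A}(L_s;\,P,\,d) \bigr\|_{1}
  \;+\; \lambda_{L} \sum_{s} \rho(L_s)
  \;+\; \lambda_{d}\, \rho(d) .
\end{equation}
```

In this reading, the multi-view coupling is what relieves the ill-posedness: every sub-aperture view constrains the same trajectory P(t) and depth d, so parallax-based depth cues and per-view blur observations reinforce one another, presumably through alternating updates of the latent images, depth, and camera motion (the paper's actual optimization scheme may differ).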

Keywords

Light field · 6-DOF camera motion · Motion blur · Blind motion deblurring · Depth estimation

Notes

Acknowledgement

This work was supported by the Visual Turing Test project (IITP-2017-0-01780) from the Ministry of Science and ICT of Korea, and the Samsung Research Funding Center of Samsung Electronics under Project Number SRFC-IT1702-06.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Dongwoo Lee (1)
  • Haesol Park (1)
  • In Kyu Park (2)
  • Kyoung Mu Lee (1)
  1. Department of ECE, ASRI, Seoul National University, Seoul, South Korea
  2. Department of Information and Communication Engineering, Inha University, Incheon, South Korea
