
Learning Depth from Light Field via Deep Convolutional Neural Network

  • Conference paper
Big Data and Security (ICBDS 2020)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1415)


Abstract

Depth estimation based on light field imaging is a relatively new approach to recovering scene depth. The abundant data captured by a light field paves the way for deep learning methods to play a role. In recent years, deep convolutional neural networks (CNNs) have shown great advantages in extracting image texture features. Inspired by this, we design a deep CNN, called EENet, that takes EPI synthetic images as input for depth estimation. Under the constraints of epipolar geometry, the pixels corresponding to a common object point lie on a straight line in the epipolar plane image (EPI), so the convolution kernels can extract features from the EPI synthetic image more easily. EENet is characterized by multi-stream inputs and skip connections. Specifically, the horizontal EPI synthetic image, the vertical EPI synthetic image, and the central view image are first generated from the light field and fed into the three streams of EENet. Next, a U-shaped neural network predicts the depth information: convolution and pooling blocks encode the features, while deconvolution and convolution layers decode the features and recover the depth. Furthermore, we employ skip connections between the encoding and decoding layers to fuse shallow location features with deep semantic features. EENet is trained and tested on the light field benchmark and achieves good experimental results.
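The three input streams described in the abstract can be sketched from a 4D light field array. This is a minimal illustration, not the authors' implementation: the array layout `L[v, u, t, s]` (angular row, angular column, spatial row, spatial column) and the function name `make_epi_inputs` are assumptions, and here each EPI stack is simply the set of epipolar plane images sliced along the central angular row or column.

```python
import numpy as np

def make_epi_inputs(lf):
    """Build three EENet-style input streams from a hypothetical 4D light
    field lf[v, u, t, s]: a horizontal EPI stack, a vertical EPI stack,
    and the central view image."""
    V, U, T, S = lf.shape
    cv, cu = V // 2, U // 2  # central angular coordinates

    # Horizontal EPIs: fix the central angular row v = cv and vary u.
    # Each spatial row t gives one (U, S) epipolar plane image, in which
    # pixels of a common object point lie on a straight line; stacking
    # over t yields a (T, U, S) volume for one network stream.
    horiz = lf[cv, :, :, :].transpose(1, 0, 2)   # (T, U, S)

    # Vertical EPIs: fix the central angular column u = cu and vary v,
    # giving one (V, T) EPI per spatial column s, stacked to (S, V, T).
    vert = lf[:, cu, :, :].transpose(2, 0, 1)    # (S, V, T)

    # Central view image for the third stream.
    center = lf[cv, cu]                          # (T, S)
    return horiz, vert, center

# Toy example: 9x9 angular views of a 32x48 scene.
lf = np.random.rand(9, 9, 32, 48)
h, v, c = make_epi_inputs(lf)
print(h.shape, v.shape, c.shape)  # (32, 9, 48) (48, 9, 32) (32, 48)
```

In practice, such stacks would be reshaped into image-like tensors and passed through the three convolutional streams before the U-shaped encoder-decoder fuses them.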



Acknowledgment

This work has been supported by the Natural Science Foundation of Nanjing Institute of Technology (Grant Nos. CKJB201804, ZKJ201906), the National Natural Science Foundation of China (Grant No. 62076122), the Jiangsu Specially-Appointed Professor Program, the Talent Startup Project of NJIT (No. YKJ201982), and the Science and Technology Innovation Project of Nanjing for Overseas Scientists.

Author information


Corresponding author

Correspondence to Lei Han.


Copyright information

© 2021 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Han, L., Huang, X., Shi, Z., Zheng, S. (2021). Learning Depth from Light Field via Deep Convolutional Neural Network. In: Tian, Y., Ma, T., Khan, M.K. (eds) Big Data and Security. ICBDS 2020. Communications in Computer and Information Science, vol 1415. Springer, Singapore. https://doi.org/10.1007/978-981-16-3150-4_40


  • DOI: https://doi.org/10.1007/978-981-16-3150-4_40


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-16-3149-8

  • Online ISBN: 978-981-16-3150-4

  • eBook Packages: Computer Science (R0)
