Defocus Deblurring Using Dual-Pixel Data

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12355)

Abstract

Defocus blur arises in images captured with a shallow depth of field due to the use of a wide aperture. Correcting defocus blur is challenging because the blur is spatially varying and difficult to estimate. We propose an effective defocus deblurring method that exploits data available on dual-pixel (DP) sensors found on most modern cameras. DP sensors are used to assist a camera’s auto-focus by capturing two sub-aperture views of the scene in a single image shot. The two sub-aperture images are used to calculate the appropriate lens position to focus on a particular scene region and are discarded afterwards. We introduce a deep neural network (DNN) architecture that uses these otherwise discarded sub-aperture images to reduce defocus blur. A key contribution of our effort is a carefully captured dataset of 500 scenes (2000 images), where each scene has: (i) an image with defocus blur captured at a large aperture; (ii) the two associated DP sub-aperture views; and (iii) the corresponding all-in-focus image captured with a small aperture. Our proposed DNN produces results that are significantly better than those of conventional single-image methods in terms of both quantitative and perceptual metrics – all from data that is already available on the camera but typically ignored.
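To make the input arrangement concrete, below is a minimal sketch in PyTorch, assuming the two DP sub-aperture views are concatenated channel-wise and mapped to the sharp image by a small encoder-decoder. The class name (TinyDPDeblur), layer widths, and depth are our own illustrative placeholders, not the paper's released architecture.

```python
# Minimal sketch: the two dual-pixel (DP) sub-aperture views are
# concatenated channel-wise (6 input channels) and fed to a small
# encoder-decoder that predicts the deblurred image. All names and
# hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn

class TinyDPDeblur(nn.Module):
    def __init__(self, width: int = 32):
        super().__init__()
        # Encoder: two RGB DP views concatenated -> 6 input channels.
        self.enc1 = nn.Sequential(
            nn.Conv2d(6, width, 3, padding=1), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(
            nn.MaxPool2d(2),
            nn.Conv2d(width, width * 2, 3, padding=1), nn.ReLU(inplace=True))
        # Decoder: upsample and fuse with a skip connection.
        self.up = nn.ConvTranspose2d(width * 2, width, 2, stride=2)
        self.dec = nn.Sequential(
            nn.Conv2d(width * 2, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, 3, padding=1))  # 3-channel sharp estimate

    def forward(self, dp_left: torch.Tensor, dp_right: torch.Tensor) -> torch.Tensor:
        x = torch.cat([dp_left, dp_right], dim=1)  # (N, 6, H, W)
        f1 = self.enc1(x)                          # full-resolution features
        f2 = self.enc2(f1)                         # half resolution
        u = self.up(f2)                            # back to full resolution
        return self.dec(torch.cat([u, f1], dim=1))

# Training would pair the two sub-aperture views of the wide-aperture
# (blurred) capture with the small-aperture all-in-focus image as the
# target, e.g. under an L1 or L2 reconstruction loss.
left, right = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
print(TinyDPDeblur()(left, right).shape)  # torch.Size([1, 3, 128, 128])
```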

Keywords

Defocus blur · Extended depth of field · Dual-pixel sensors

Notes

Acknowledgments

This study was funded in part by the Canada First Research Excellence Fund for the Vision: Science to Applications (VISTA) programme and an NSERC Discovery Grant. Dr. Brown contributed to this article in his personal capacity as a professor at York University. The views expressed are his own and do not necessarily represent the views of Samsung Research.

Supplementary material

Supplementary material 1: 504449_1_En_7_MOESM1_ESM.zip (zip, 81900 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. York University, Toronto, Canada
  2. Samsung AI Center, Toronto, Canada
