
Extending the depth of field of imaging systems using depth sensing camera

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

Due to the physical properties of imaging systems, the images they capture can suffer from a limited depth of field. In this work, an efficient approach for extending the depth of field of imaging systems, based on the depth map of the scene, is proposed. Unlike previous methods, in which the number of source images to be taken is unknown, our approach uses the depth map to capture only the required number of source images. First, the depth map of the scene is obtained and segmented according to the color camera parameters. Then, the depth map segments are used both in acquiring the source images and in extending the depth of field. The proposed method is evaluated on six image sets using four fusion quality metrics, and the results are compared with eight well-known image fusion techniques. Experimental results demonstrate the superiority of the proposed method over traditional multi-focus image fusion methods in terms of subjective and objective assessments as well as run time.
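The pipeline described in the abstract can be summarized in a short sketch. The following Python snippet is a minimal illustration under stated assumptions, not the authors' implementation: it assumes the depth map is already registered to the color image, that the set of focus distances is derived from the camera's depth-of-field limits, and that the function and variable names are hypothetical.

```python
# Minimal, hypothetical sketch of depth-guided multi-focus fusion as outlined
# in the abstract; the segmentation rule, bin choice, and capture step are
# assumptions, not the published method.
import numpy as np

def segment_depth_map(depth_map, focus_distances):
    """Assign each pixel to the nearest planned focus distance (depth segment)."""
    # depth_map: H x W array of scene depths, in the same units as focus_distances
    dists = np.abs(depth_map[..., None] - np.asarray(focus_distances))
    return np.argmin(dists, axis=-1)  # H x W array of segment indices

def fuse_by_depth(source_images, segment_map):
    """Compose an extended depth-of-field image by taking each pixel from the
    source image that was focused on that pixel's depth segment."""
    fused = np.zeros_like(source_images[0])
    for k, img in enumerate(source_images):
        mask = segment_map == k          # pixels belonging to segment k
        fused[mask] = img[mask]          # copy them from the image focused there
    return fused

# Illustrative use: one source image is captured per depth segment, so the
# number of shots is determined by the segmentation rather than chosen blindly.
# segment_map  = segment_depth_map(depth_map, focus_distances)
# all_in_focus = fuse_by_depth(source_images, segment_map)
```

Because each depth segment maps to exactly one focused capture, the number of source images is fixed in advance by the segmentation, which is the property the abstract emphasizes over earlier methods.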



Author information

Corresponding author

Correspondence to Florenc Skuka.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (pdf 5781 KB)


About this article


Cite this article

Skuka, F., Toprak, A.N. & Karaboga, D. Extending the depth of field of imaging systems using depth sensing camera. SIViP 17, 323–331 (2023). https://doi.org/10.1007/s11760-022-02235-x

