Abstract
Due to the physical properties of imaging systems, the images they capture suffer from a limited depth of field. In this work, an efficient approach for extending the depth of field of imaging systems based on the depth map of the scene is proposed. Unlike previous methods, in which the number of source images to be captured is unknown in advance, the proposed approach uses the depth map to capture only the required number of source images. First, the depth map of the scene is obtained and segmented according to the parameters of the color camera. The depth map segments are then used both to acquire the source images and to extend the depth of field. The proposed method is evaluated on six image sets with four fusion quality metrics, and the results are compared with eight well-known image fusion techniques. Experimental results demonstrate the superiority of the proposed method over traditional multi-focus image fusion methods in terms of subjective and objective assessments as well as run time.
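The core idea of determining the required number of source images from the depth map can be illustrated with a minimal sketch. The code below is not the paper's implementation: it assumes the standard approximate thin-lens depth-of-field limits (near = sH/(H+s), far = sH/(H−s), with H the hyperfocal distance) and greedily tiles the scene's depth range with focus distances so that consecutive depth-of-field intervals abut. The function names and camera parameters (`f_mm`, `N`, `coc_mm`) are hypothetical choices for illustration.

```python
def hyperfocal(f_mm, N, coc_mm):
    """Hyperfocal distance (mm) for focal length f, f-number N, circle of confusion c."""
    return f_mm * f_mm / (N * coc_mm) + f_mm

def focus_plan(z_min_mm, z_max_mm, f_mm=50.0, N=2.8, coc_mm=0.03):
    """Greedily cover the scene depth range [z_min, z_max] with the fewest
    focus distances, using the approximate thin-lens DoF limits
    near = s*H/(H+s) and far = s*H/(H-s)."""
    H = hyperfocal(f_mm, N, coc_mm)
    plan, near = [], z_min_mm
    while near < z_max_mm:
        if near >= H:
            # Focusing at the hyperfocal distance covers everything to infinity.
            plan.append(H)
            break
        # Focus distance whose near DoF limit coincides with the current `near`.
        s = near * H / (H - near)
        plan.append(s)
        if s >= H:
            break
        # The far DoF limit of this shot becomes the next shot's near limit.
        near = s * H / (H - s)
    return plan
```

Each entry of the returned plan is one focus setting, so the list length is the number of source images to capture; a deeper scene range, a longer focal length, or a wider aperture all increase it, which matches the intuition that the depth map fixes the acquisition budget in advance.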
Cite this article
Skuka, F., Toprak, A.N. & Karaboga, D. Extending the depth of field of imaging systems using depth sensing camera. SIViP 17, 323–331 (2023). https://doi.org/10.1007/s11760-022-02235-x