Abstract
According to the atmospheric scattering model (ASM), the object signal diminishes exponentially as the imaging distance increases. This imposes limitations on ASM-based methods in situations where the scattering medium one wishes to look through is inhomogeneous. Here, we extend the ASM by taking into account the spatial variation of the medium density, and propose a two-step method for imaging through inhomogeneous scattering media. In the first step, the proposed method eliminates the direct-current component of the scattered pattern by subtracting the estimated global distribution (background). In the second step, it eliminates the randomized components of the scattered light by threshold truncation, followed by histogram equalization to further enhance the contrast. Outdoor experiments were carried out to demonstrate the proposed method.
Introduction
Imaging through inhomogeneous and dense scattering media is widely encountered in various fields of science and engineering. However, the inherent scattering properties of the media along the imaging pathway prevent a clear image of the scene within or behind them from being formed1. On one hand, the scattering particles absorb or scatter light coming directly from the scene (i.e., signal light), leading to a reduction in signal intensity. On the other hand, these particles also scatter ambient light that does not carry any information of interest, resulting in strong interference noise superposed on the raw images captured by a camera.
Many methods have been proposed to address this challenging problem. In particular, recent advances in gating technologies2,3,4,5,6, point-wise scanning strategies7, wavefront shaping8, optical transmission matrix measurement9, and speckle correlations10,11,12 have received a lot of attention. But all these advanced techniques rely on coherent light sources for active illumination. One can then apply coding to the illumination to improve the efficiency of information extraction. In cases where coherent active illumination is not possible, one has to rely mainly on defogging algorithms in addition to the optimization of the imaging system13. So far, algorithms based on image depth priors14, dark channel priors15, histogram equalization16, Retinex17, wavelet transform18, and data-driven deep learning19,20,21,22 have been proposed to enhance the contrast of the captured raw images. These defogging algorithms are, more or less, developed on the basis of the atmospheric scattering model (ASM). This model assumes that the object signal diminishes exponentially as the imaging distance increases23,24. This assumption does not hold when dealing with inhomogeneous scattering media.
Here we address the challenge of imaging through inhomogeneous and dense scattering media. Our approach is to first extend the conventional ASM by introducing a spatially variant attenuation coefficient, and then develop a two-step defogging algorithm in accordance with it for image recovery.
Methods
Now let us go into the technical details. The conventional ASM that describes the formation of a hazy image under sunlight illumination can be expressed as23,24

$$I(x)=L_{\infty }\rho (x)\exp [-\alpha d(x)]+L_{\infty }\{1-\exp [-\alpha d(x)]\} \qquad (1)$$
where I(x) is the hazy image, \(L_{\infty }\) is the global atmospheric light, \(\rho (x)\) is the albedo of the target scene, \(\alpha \) is the attenuation coefficient of the atmosphere, and d(x) is the distance between the scene and the camera. The first term on the right-hand side of Eq. (1) represents the transmission attenuation of the signal (green solid line in Fig. 1), and the second one is the superposition of airlight24 (orange dashed line in Fig. 1), which is scattered by the fog and contains no target information. Equation (1) describes the phenomenon that the signal light attenuates exponentially as it propagates through the scattering medium, whereas the airlight grows correspondingly. The scene is barely visible when the imaging distance d exceeds the range of visibility.
This equation can be rewritten as24,25

$$I(x)=J(x)t(x)+A[1-t(x)] \qquad (2)$$
where \(J(x)=L_\infty \rho (x)\) denotes the clear image without disturbance by scattering. \(t(x)=\exp [-\alpha d(x)]\) is the transmission attenuation ratio, and A is the global airlight at an infinite distance, which is usually estimated from the sky area of a hazy image I(x). The purpose of defogging algorithms is to recover J(x) from I(x).
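To make Eq. (2) concrete, the following sketch applies the forward haze model and then inverts it when t(x) and A are known — the idealized situation that defogging algorithms try to approximate. The function names and synthetic values are ours, chosen purely for illustration:

```python
import numpy as np

def hazy_image(J, t, A):
    """Forward model of Eq. (2): I(x) = J(x) t(x) + A [1 - t(x)]."""
    return J * t + A * (1.0 - t)

def dehaze(I, t, A, t_min=1e-3):
    """Invert Eq. (2): J(x) = (I(x) - A [1 - t(x)]) / t(x), with t clamped
    away from zero to avoid amplifying noise at large distances."""
    t = np.maximum(t, t_min)
    return (I - A * (1.0 - t)) / t

# Synthetic check: with known t(x) and A, the inversion is exact.
rng = np.random.default_rng(0)
J = rng.uniform(0.0, 1.0, (8, 8))          # clear image
alpha, d, A = 0.5, 2.0, 0.9                # homogeneous medium, global airlight
t = np.full_like(J, np.exp(-alpha * d))    # t(x) = exp(-alpha d(x))
I = hazy_image(J, t, A)
J_rec = dehaze(I, t, A)
```

In practice, of course, neither t(x) nor A is known exactly, which is why their estimation is the central difficulty in defogging.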
It is important to note that t(x) in Eq. (2) decays exponentially as a function of d(x) with a constant attenuation coefficient \(\alpha \), which is independent of distance, indicating that the scattering medium is homogeneous. Nevertheless, this is not always the case with scattering media in real-world situations. As crudely illustrated in Fig. 1, natural scattering media such as fog are frequently spatially inhomogeneous in terms of both thickness and density. The green solid line depicts the signal light from the target scene, and the dashed portion of it indicates the attenuation caused by the fog it propagates through. The orange dashed line is the airlight scattered by the fog, which does not provide any information about the target and can be regarded as additive noise. Taking the inhomogeneity of the fog into consideration, we can phenomenologically rewrite the ASM as

$$I(x)=J(x)t'(x)+B(x)+R(x) \qquad (3)$$
where the transmission attenuation ratio is given by

$$t'(x)=\exp [-\alpha (x)d(x)] \qquad (4)$$
Clearly, the attenuation coefficient \(\alpha (x)\) now becomes a function of the spatial coordinate x. The second term in Eq. (3) is related to \(A[1-t(x)]\) in Eq. (2), i.e., \(B(x)=A[1-\bar{t}'(x)]\), where \(\bar{t}'(x)=\exp [-\bar{\alpha }d(x)]\) is the time-averaged transmission attenuation ratio over the course of exposure, and \(R(x)=A\{1-\exp [-(\alpha (x)-\bar{\alpha })d(x)]\}\) is the deviation from B(x) that accounts for the inhomogeneity of the scattering medium.
When an image is captured by a camera, signal and noise are mixed together. Therefore, instead of estimating the attenuation coefficient \(\alpha (x)\) directly, we estimate the magnitude of the additive noise (i.e., B(x) and R(x)) it causes, through the proposed two-step method. Our purpose is then to estimate and eliminate the noise terms B(x) and R(x) in Eq. (3) sequentially. In the first step, we estimate the statistical distribution of B(x) from the acquired hazy image I(x), because the term B(x) is the most significant of the three when the scattering medium is optically thick. Here we adopt a space-domain estimation inspired by Satat et al.'s time-of-flight-based time-domain estimation method26.
As schematically illustrated by Step 1 in Fig. 2, the proposed method first divides the acquired hazy image into an array of \(q\times q\) non-overlapping and connected sub-images, \(I_i(x)\), of appropriate size, within which the scattering medium can be treated as homogeneous. For patches that contain only scattered light \(B(x)+R(x)\), it is reasonable to take the mean of each sub-image, i.e., \(b_i=\overline{I_i(x)}\), as the local noise level. For patches that also contain the object signal \(J(x)t'(x)\), which could be very weak, we have demonstrated that estimating the noise level as the minimum value of the patch, i.e., \(b_i=\textrm{min}\,I_i(x)\), is helpful for subsequent contrast enhancement13. This operation can equally be applied to the patches without the object signal because, in the case of our interest, the term B(x) is the most significant as mentioned above, so the difference between \(\textrm{min}\,I_i(x)\) and \(\overline{I_i(x)}\) is insignificant. Thus we take the minimum value \(b_i=\textrm{min}\,I_i(x)\) of each sub-image as the estimate of the local averaged noise level \(B_i(x)\), resulting in a noise map of \(q\times q\) in size. Next, we erode the estimated noise map to reduce the contribution of \(J(x)t'(x)\), and fit the spatial statistical distribution of the noise map based on the assumption that the scattering medium varies smoothly on macroscopic scales27. Finally, we resize the downsampled noise map back to the original size by using, for instance, bicubic interpolation, and subtract this estimated noise \(\hat{B}(x)\) from the hazy image I(x). In this way, the averaged noise B(x) in Eq. (3) can be dramatically reduced.
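Step 1 can be sketched as follows. This is our own minimal illustration, not the authors' exact implementation: the patch count q, the erosion footprint, and the simple uniform smoothing (standing in for the statistical fit of ref. 27) are all illustrative choices:

```python
import numpy as np
from scipy.ndimage import grey_erosion, uniform_filter, zoom

def estimate_background(I, q=8):
    """Sketch of Step 1: estimate the smooth scattering background B(x)
    from a single hazy image I, using per-patch minima."""
    H, W = I.shape
    ph, pw = H // q, W // q
    # b_i = min I_i(x): the minimum of each patch estimates the local noise level.
    patches = I[:ph * q, :pw * q].reshape(q, ph, q, pw)
    noise_map = patches.min(axis=(1, 3))
    # Erode the q x q noise map to suppress residual object signal J(x)t'(x).
    noise_map = grey_erosion(noise_map, size=3)
    # Smooth it, assuming the medium varies slowly on macroscopic scales.
    noise_map = uniform_filter(noise_map, size=3)
    # Resize back to the original image size with bicubic interpolation.
    B_hat = zoom(noise_map, (H / q, W / q), order=3)
    return B_hat[:H, :W]

# Synthetic hazy image: a bright, slowly varying background with mild noise.
rng = np.random.default_rng(1)
I = 0.6 + 0.05 * rng.standard_normal((64, 64))
B_hat = estimate_background(I, q=8)
I_re = np.clip(I - B_hat, 0.0, None)  # background-subtracted image
```

Taking the per-patch minimum rather than the mean biases the estimate low, which is what makes the residual R(x) positive in the next step.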
In step 2 of the proposed algorithm, we eliminate the residual noise R(x), as shown in the orange patches in Fig. 2. Note that with the removal of the averaged noise B(x), the term \(J(x)t'(x)\) in Eq. (3) becomes significant, whereas the other term R(x) appears as a small random disturbance superposed on the signal term. Since we take the minimum value instead of the mean of each patch to estimate B in step 1, R(x) is positive. Thus this disturbance can be eliminated by pixel-wise subtraction from \(I_{\text{re}}\). To estimate R(x), we first define a threshold \(\gamma = \mu + w \sigma \), where w is an adjustable weighting factor, and \(\mu \) and \(\sigma \) are the mean and standard deviation of \(I_{\text{re}}\), respectively, and set the pixels of \(I_{\text{re}}\) whose values are greater than \(\gamma \) to 0. We then obtain an estimate, \(\hat{R}(x)\), of R(x) by adjusting the threshold truncation through w, so that \(\hat{I} = I_{\text{re}} - \hat{R}\). At this stage, the image of the target scene should be recovered. But we can further smooth it by standard median filtering, followed by histogram equalization to compensate for the contrast lost to the transmission attenuation \(t'(x)\). For more details, see Algorithm 1.
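Step 2 — threshold truncation followed by median filtering and histogram equalization — can be sketched like this (again our own minimal version; the weighting factor w, the filter size, and the bin count are illustrative):

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_residual(I_re, w=1.0):
    """Sketch of Step 2: truncate the residual noise R(x) at gamma = mu + w*sigma."""
    mu, sigma = I_re.mean(), I_re.std()
    gamma = mu + w * sigma
    # R_hat keeps only the low (noise-like) values; pixels above gamma become 0,
    # so subtracting R_hat zeroes the noise floor and keeps the bright signal.
    R_hat = np.where(I_re > gamma, 0.0, I_re)
    return np.clip(I_re - R_hat, 0.0, None)

def hist_equalize(img, bins=256):
    """Textbook histogram equalization for an image with values in [0, 1]."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / hist.sum()
    return np.interp(img, edges[:-1], cdf)

rng = np.random.default_rng(2)
I_re = rng.uniform(0.0, 1.0, (64, 64))   # stand-in for the Step-1 output
out = hist_equalize(median_filter(remove_residual(I_re, w=1.0), size=3))
```

Increasing w raises the truncation threshold and removes more of the low-valued residual, at the risk of clipping weak signal pixels.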
Experiment and results
We conducted outfield experiments to demonstrate the performance of the proposed method in a natural environment. The scattering medium in this study was fog present in the imaging pathway. Fog in nature is inhomogeneous in terms of both thickness and density, and usually changes with time.
Outfield experiment. (a) Schematic illustration of the imaging scenario, (b) the imaging system, and (c) the image of the object captured by our imaging system in good weather. The object is a homemade black cloth with two white patterns attached to it, hung on a cliff 528 m away from the imaging system.
Figure 3a schematically depicts the scenario of our outfield experiments. As highlighted in this figure, the geometric imaging distance is 528 m. The imaging system (Fig. 3b) was a reflective telescope (CPC1100HD, Celestron) equipped with an sCMOS camera (Edge 4.2, PCO). The object to be imaged was a homemade black cloth with two white patterns attached to it. Its image taken in good weather is shown in Fig. 3c, and we use it as the ground truth in our analysis.
Experimental results. (a1–c1) The captured raw hazy images at different ranges of visibility. The images restored by using the proposed technique (a2–c2), histogram equalization (a3–c3), Retinex (a4–c4), and DCP (a5–c5). (d–f) The estimated background noise of the raw hazy images (a1–c1), where the dotted maps in blue represent the raw hazy images.
We captured hazy images of the object in a cloud of fog at various ranges of visibility, where the visibility is expressed as a function of the attenuation coefficient \(\alpha \) as \(V= 2.996/\alpha \)13. Then we restored the object images from these hazy ones. The main results are shown in Fig. 4. One can clearly see that the contrast of the raw hazy images gradually decreases as the range of visibility reduces (Fig. 4a1–c1). In particular, one can see from Fig. 4b1 that the local contrast around the arrow differs from that around the star owing to the inhomogeneity of the fog. In the most serious case, the raw hazy image looks visibly uniform, as shown in Fig. 4c1, because the cloud of fog is so thick that the strongly scattered light might undergo a random walk. The images restored by using the proposed method are shown in Fig. 4a2–c2, with the corresponding estimated background scattering noise B(x) shown in Fig. 4d–f, respectively. One can see that even in the most serious case in our experiments the object image is revealed very well. This suggests that the ballistic light, although decaying exponentially with respect to \(\alpha \), can still be detected and occupies the lower bits of the camera pixel values. To the best of our knowledge, no existing algorithm can de-scatter such strong background scattering noise. This can be clearly seen from the images recovered by some benchmark algorithms, namely histogram equalization28 (Fig. 4a3–c3), Retinex17 (Fig. 4a4–c4), and the one based on the dark channel prior (DCP)15 (Fig. 4a5–c5). The experimental results suggest that the proposed method outperforms the other three, effectively balancing high contrast and the preservation of object details. In contrast, the object images revealed by the other three defogging algorithms suffer from strong artifacts or inhomogeneous scattering noise.
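The visibility relation used above is simple enough to encode directly; the helper names below are ours, for illustration:

```python
import math

def visibility(alpha):
    """Range of visibility V = 2.996 / alpha (the relation used above, ref. 13)."""
    return 2.996 / alpha

def alpha_from_visibility(V):
    """Inverse: the attenuation coefficient implied by a given visibility V."""
    return 2.996 / V

# Example: a visibility of 500 m implies alpha of roughly 6.0e-3 per metre.
alpha_500 = alpha_from_visibility(500.0)
```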
To quantitatively evaluate the restoration quality, we calculated the Pearson correlation coefficient r between the ground truth (GT) and the images restored by the four algorithms. The results are plotted in Fig. 5. They clearly demonstrate that the image restored by our method correlates best with the ground truth among the four, with the largest r value in all cases.
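The evaluation metric is straightforward to reproduce; this helper (ours, for illustration) computes r between a restored image and the ground truth:

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two images, computed on the
    mean-centred, flattened pixel values."""
    a = a.ravel().astype(float) - a.mean()
    b = b.ravel().astype(float) - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Stand-in ground truth for a quick sanity check.
gt = np.random.default_rng(3).uniform(0.0, 1.0, (32, 32))
```

A perfectly restored image gives r = 1, while a contrast-inverted one gives r = -1, so the metric rewards both structural agreement and correct polarity.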
Conclusions
In summary, we have extended the conventional ASM to accommodate passive imaging through inhomogeneous and dense scattering media. This is done by recognizing that the scattered light can be written as the summation of an averaged term and a fluctuation term. In accordance with this, we have developed a two-step method to estimate and eliminate these two terms sequentially.
Outfield experiments of passively imaging through inhomogeneous fog have been conducted to demonstrate the proposed method. The experimental results show that clear images can be recovered from the captured hazy ones, outperforming the commonly used methods such as histogram equalization, Retinex, and DCP. Although we demonstrated it with the reconstruction of a binary image, the proposed method could in principle be adapted for imaging gray-scale objects through dense inhomogeneous scattering media. This will be investigated in a future study.
Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. Please contact the authors by e-mail: ymbian@siom.ac.cn.
References
Yoon, S. et al. Deep optical imaging within complex scattering media. Nat. Rev. Phys. 2, 141–158. https://doi.org/10.1038/s42254-019-0143-2 (2020).
Wang, L., Ho, P. P., Liu, C., Zhang, G. & Alfano, R. R. Ballistic 2-D imaging through scattering walls using an ultrafast optical Kerr gate. Science 253, 769–771 (1991).
Demos, S. & Alfano, R. Optical polarization imaging. Appl. Opt. 36, 150–155 (1997).
Leith, E. N. et al. Imaging through scattering media using spatial incoherence techniques. Opt. Lett. 16, 1820–1822 (1991).
Zhang, Y. et al. Application of short-coherence lensless fourier-transform digital holography in imaging through diffusive medium. Opt. Commun. 286, 56–59 (2013).
Schechner, Y. Y., Narasimhan, S. G. & Nayar, S. K. Instant dehazing of images using polarization. in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, vol. 1 (IEEE, 2001).
Webb, R. H. Confocal optical microscopy. Rep. Progress Phys. 59, 427 (1996).
Vellekoop, I. M., Lagendijk, A. & Mosk, A. Exploiting disorder for perfect focusing. Nat. Photon. 4, 320–322 (2010).
Popoff, S. M. et al. Measuring the transmission matrix in optics: An approach to the study and control of light propagation in disordered media. Phys. Rev. Lett. 104, 100601 (2010).
Bertolotti, J. et al. Non-invasive imaging through opaque scattering layers. Nature 491, 232–234 (2012).
Katz, O., Heidmann, P., Fink, M. & Gigan, S. Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations. Nat. Photon. 8, 784–790 (2014).
Yang, W., Li, G. & Situ, G. Imaging through scattering media with the auxiliary of a known reference object. Sci. Rep. 8, 1–7 (2018).
Bian, Y. et al. Passive imaging through dense scattering media. Photon. Res. 12, 134 (2024).
Kopf, J. et al. Deep photo: Model-based photograph enhancement and viewing. ACM Trans. Graph. (TOG) 27, 1–10 (2008).
He, K., Sun, J. & Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 33, 2341–2353 (2010).
Li, Y., Zhang, Y., Geng, A., Cao, L. & Chen, J. Infrared image enhancement based on atmospheric scattering model and histogram equalization. Opt. Laser Technol. 83, 99–107. https://doi.org/10.1016/j.optlastec.2016.03.017 (2016).
Rajput, G. S. & Rahman, Z.-U. Hazard detection on runways using image processing techniques. Proc. SPIE 6957, 69570D (2008).
Wang, M. & Zhou, S.-D. The study of color image defogging based on wavelet transform and single scale retinex. Proc. SPIE 8194, 81940F (2011).
Horisaki, R., Takagi, R. & Tanida, J. Learning-based imaging through scattering media. Opt. Express 24, 13738–13743 (2016).
Lyu, M., Wang, H., Li, G., Zheng, S. & Situ, G. Learning-based lensless imaging through optically thick scattering media. Adv. Photon. 1, 036002 (2019).
Zheng, S., Wang, H., Dong, S., Wang, F. & Situ, G. Incoherent imaging through highly nonstatic and optically thick turbid media based on neural network. Photon. Res. 9, B220–B228. https://doi.org/10.1364/PRJ.416246 (2021).
Zhu, S., Guo, E., Gu, J., Bai, L. & Han, J. Imaging through unknown scattering media based on physics-informed learning. Photon. Res. 9, B210–B219 (2021).
Nayar, S. K. & Narasimhan, S. G. Vision in bad weather. in Proceedings of the Seventh IEEE International Conference on Computer Vision 2, 820–827 (1999).
Tan, R. T. Visibility in bad weather from a single image. in 2008 IEEE Conference on Computer Vision and Pattern Recognition, 1–8. https://doi.org/10.1109/CVPR.2008.4587643 (2008).
Fattal, R. Single image dehazing. ACM Trans. Graph. 27, 1–9. https://doi.org/10.1145/1360612.1360671 (2008).
Satat, G., Tancik, M. & Raskar, R. Towards photography through realistic fog. in IEEE International Conference on Computational Photography (2018).
Cleveland, W. S. Robust locally weighted regression and smoothing scatterplots. J. Am. Stat. Assoc. 74, 829–836 (1979).
Hines, G. D., Rahman, Z. U., Jobson, D. J., Woodell, G. A. & Harrah, S. D. Real-time enhanced vision system. in Conference on Enhanced and Synthetic Vision; 20050328 (2005).
Funding
This work was supported by the National Natural Science Foundation of China (62325508, 61991452, 62061136005), Program of Shanghai Academic Research Leader (22XD1403900), and Shanghai Municipal Science and Technology Major Project.
Author information
Authors and Affiliations
Contributions
Y.B. performed the data analysis, prepared the figures, and wrote the manuscript. F.W. and G.S. helped finalize the manuscript. The other authors assisted in the experiment. All authors reviewed the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Bian, Y., Wang, F., Liu, H. et al. Passive imaging through inhomogeneous scattering media. Sci Rep 14, 15857 (2024). https://doi.org/10.1038/s41598-024-66449-4