
Single Image Dehazing and Non-uniform Illumination Enhancement: A Z-Score Approach

Abstract

This paper proposes a novel Z-score-based enhancement approach for images captured under hazy and non-uniform illumination conditions in multimedia applications. For image dehazing, the proposed approach estimates the scene transmission using a Z-score-based weighting function and the global atmospheric light. For non-uniformly illuminated images, it equalizes the illumination channel using a Z-score weighting function. A comparative analysis demonstrating the effectiveness of the proposed approach is presented both quantitatively and visually. The datasets used for comparison are the REalistic Single Image DEhazing (RESIDE) dataset, a high dynamic range dataset, images captured with commercial DIgital CaMeras (DICM), the MiddleBury Stereo dataset, and natural benchmark images. The dehazed images are compared in terms of Peak Signal-to-Noise Ratio (PSNR), Structural SIMilarity (SSIM) index, Lightness Order Error (LOE), and Naturalness Image Quality Evaluator (NIQE). The enhanced versions of non-uniformly illuminated images are compared in terms of the LOE and NIQE measures. The comparison shows that the proposed approach outperforms the others.

Introduction

Single image enhancement is an arduous process of refining perceived visible information. Nowadays, it is a pre-processing task of utmost importance for real-time vision-based applications like multimedia [1, 2], remote sensing [3,4,5], multi-spectral imagery [6, 7], etc. The objective is to boost the degraded performance of vision-based applications. The visible information gets degraded due to suspended air particles under inclement weather conditions [8] and non-uniform illumination [9], resulting in hazy and non-uniformly illuminated scenes, respectively. To address these issues, the atmospheric scattering model [8] for image dehazing and the Retinex theory [9] for non-uniform illumination enhancement have established benchmarks in the literature.

Atmospheric Scattering Model According to this model, if light rays reflected from an object of true radiance \((I_{\text {T}})\) at pixel location (p, q) travel towards the observer with transmission \((T_{\text {R}})\), then haze particles hinder this transmission due to attenuation. The scene transmission with angular scattering coefficient \((\beta )\) and depth map \((D_{\text {Map}})\) is defined by \(\exp { \left( -\beta D_{\text {Map}} \right) }\). Additionally, the scattered rays add up to the true radiance so that the pixel appears whitish; this is termed airlight. Thus, the hazy image formation can be interpreted as

$$\begin{aligned} I_{\text {Hazy}} (p,q) = I_{\text {T}} (p,q) \times T_{\text {R}} (p,q) + A_{\text {G}} \left( 1 - T_{\text {R}} (p,q) \right) , \end{aligned}$$
(1)

where \(A_{\text {G}}\) is the Global Atmospheric Light.
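For illustration, the forward model in (1) can be sketched as follows. This is a minimal NumPy sketch, not the paper's implementation; the toy image, depth values, and default constants are our own assumptions.

```python
import numpy as np

def synthesize_haze(clean, depth, a_g=0.9, beta=1.0):
    """Apply the atmospheric scattering model of Eq. (1):
    I_hazy = I_true * T_R + A_G * (1 - T_R), with T_R = exp(-beta * depth)."""
    t = np.exp(-beta * depth)        # scene transmission
    if clean.ndim == 3:              # broadcast over color channels
        t = t[..., None]
    return clean * t + a_g * (1.0 - t)

# toy example: a dark gray image whose more distant pixels turn whitish
clean = np.full((2, 2), 0.2)
depth = np.array([[0.0, 1.0], [2.0, 3.0]])
hazy = synthesize_haze(clean, depth)
```

At zero depth the pixel keeps its true radiance; as depth grows, the intensity drifts monotonically towards the atmospheric light \(A_{\text{G}}\).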

Remark 1

The scene transmission refers to the reflected light rays from the object travelling towards the observer and is represented as a 2D matrix. Its bandwidth can be estimated as the smallest non-negative integer \(K_\mathrm{B}\) such that \(T_{\text {R}}(i,j) = 0\) for \(|i - j| > K_\mathrm{B}\). Since the objective of single image dehazing is to enhance the image quality under a hazy environment, an image captured under the hazy image formation phenomenon with scene transmission attenuation is the prerequisite.

Retinex Theory According to this theory, a non-uniformly illuminated image (\(I_{\text {NUI}}\)) can be decomposed into reflectance (R) and illumination (\(I_{\text {l}}\)) components at each pixel location (p, q) as

$$\begin{aligned} I_{\text {NUI}} (p,q) = R(p,q) \times I_{\text {l}} (p,q). \end{aligned}$$
(2)

Literature Review

Based on priors- and assumptions-based strategies, it has been proven that the scene depth from the observer's view can be used to estimate the scene transmission [10]. On the other hand, adjusting the illumination channel (\(I_{\text {l}}\)) is capable of enhancing non-uniformly illuminated images [11]. The literature on image enhancement under hazy and non-uniform illumination conditions has gone through various phases of development. These include polarization-based methods [12, 13], followed by dehazing using multiple images [8, 14]. Then, histogram equalization methods [15, 16] came into the picture. These were further replaced by priors- and assumptions-based enhancement approaches relying on contrast [17] and the dark channel [10]. On the other hand, Contextual and Variational Contrast (CVC) [18] enhancement and the Layered Difference Representation (LDR) of 2D histograms [19] were introduced for contrast restoration of non-uniformly illuminated images to handle the inconsistencies near edges. Thereafter, Single-Scale Retinex (SSR) [20] and Multi-Scale Retinex (MSR) [21] were introduced for the estimation of scene reflectance but, unfortunately, could not deal with color distortions. Efforts were also made to recover the reflectance and illumination components simultaneously [22]. Considering the assumption of the Retinex theory, an Effective Naturalness Restoration (ENR) method was proposed by Shin et al. [23] for equalizing the illumination channel. Though naturalness is a subjective matter and varies from human to human, Guo et al. proposed the Low-light IMage Enhancement (LIME) [24] method for restoring the illumination component to achieve visually appealing results. In this direction, Ghosh et al. proposed texture smoothing [25] and detail enhancement [26] methods. In order to deal with color and illumination distortions, Ren et al. proposed the Low-light image Enhancement CAmera Response Model (LECARM) [27].
With the success of human brain-inspired deep learning solutions for real-time applications, deep enhancement networks have been introduced in the literature. For dehazing, the Color Attenuation Prior (CAP) [28] model provides a depth map for the estimation of scene transmission, whereas DehazeNet [29] and Multi-Scale Convolutional Neural Networks (MSCNN) [30] estimate the scene transmission directly. In spite of undergoing rigorous training, these earlier models still relied on the atmospheric scattering model for dehazing. Hence, researchers came up with end-to-end mappings for obtaining enhanced images. Li et al. proposed the All-in-One Dehazing Network (AOD-Net) [31] for directly estimating the clean image with faster run-time. Ren et al. proposed gating of confidence maps derived from input features in the Gated Fusion Network (GFN) [32]. Wang et al. developed the Atmospheric Illumination Prior Network (AIPNet) [33] for enhancing the illumination channel of the degraded input image. Thereafter, the Generic Model-Agnostic Network (GMAN) [34] was proposed by Liu et al. with an encoder-decoder architecture. On the other hand, Guo et al. proposed a lightweight Zero-reference Deep Curve Estimation (Zero-DCE) [35] network trained to learn a mapping of the input image for obtaining an enhanced scene image with uniform illumination. Efforts were also made to address the problems of thick clouds and shadows by Li et al. [36] and of dense haze based on a generalized imaging model by Gao et al. [37]. The major challenge, while enhancing hazy or non-uniformly illuminated images, is to preserve the image naturalness such that the enhanced image appeals to the human visual system with minimal error. This has been taken care of by a fuzzy transform-based approach for illumination enhancement [38]. Enhancement methods always suffer from uncertainties in deciding image regions.
To deal with uncertainties near edges and in homogeneous regions, Type-2 fuzzy approach was introduced for dehazing [39].

Contributions

This paper proposes a novel single image dehazing and non-uniform illumination enhancement approach using Z-score for hazy and non-uniformly illuminated scene images, respectively. The objective is to enhance the visual appearance of images used in various multimedia applications. The proposed approach enhances the minimum channel of an input hazy image and the illumination channel of an input non-uniformly illuminated image, respectively. It estimates the scene transmission using the proposed Z-score-based weighting function map and the global atmospheric light for image dehazing. On the other hand, it equalizes the illumination channel using the proposed Z-score weighting function map of the non-uniformly illuminated scene image. The approach, proposed for two different scenarios, has been experimentally validated on several benchmark datasets and proven efficient with respect to the state-of-the-art in the literature.

Organization

In the following sections, the manuscript is organized as follows. First, the proposed approach for single image dehazing and non-uniform illumination enhancement is explained. Thereafter, the experimental results with validations and discussions are demonstrated, followed by the conclusions.

Preliminaries

For a digital image, a Z-score describes the relation of a pixel intensity to the mean of a group of pixel intensities. The Z-score is measured in terms of standard deviations from the mean [40]. Let us consider a group of pixel intensities \(p_{i,j}\) in an image with \(1 \le i \le H\) and \(1 \le j \le W\), where H and W are the height and width of the image, respectively. The Z-score for each pixel intensity is defined as

$$\begin{aligned} Z^{\text {Score}} \left( p_{i,j} \right) = \dfrac{ p_{i,j} - \mu }{ \sigma }, \end{aligned}$$
(3)

where \(\mu\) and \(\sigma\) are the mean and standard deviation of the image, respectively. The probability value for a normal distribution, using the Z-score value and the complementary error function (erfc), is defined as

$$\begin{aligned} Z^{f} = 0.5 \times \mathrm{erfc} \left( - \dfrac{ Z^{\text {Score}} }{ \sqrt{2} } \right) . \end{aligned}$$
(4)

In the literature, the Z-score has been extensively used for the analysis of various applications like medical imaging [41], image processing [5], etc. The value of a Z-score may be positive or negative. A positive value indicates that the pixel intensity is greater than the mean, while a negative value indicates that it is less than the mean.
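The two definitions above can be sketched with the standard library alone; the sample intensity values below are illustrative, not from the paper.

```python
import math

def z_score(p, mu, sigma):
    # Eq. (3): standardized deviation of a pixel intensity from the mean
    return (p - mu) / sigma

def z_weight(z):
    # Eq. (4): standard normal CDF via the complementary error function
    return 0.5 * math.erfc(-z / math.sqrt(2))

# a pixel at the mean maps to weight 0.5; brighter pixels map above 0.5
w_mean = z_weight(z_score(128, 128, 40))    # 0.5
w_bright = z_weight(z_score(168, 128, 40))  # ~0.8413 (one sigma above)
```

Because (4) is the normal cumulative distribution, the weights are bounded in (0, 1) and increase monotonically with intensity.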

Fig. 1

Hazy image: a sample images, b histogram of (a)

Fig. 2

Non-uniform illumination: a sample images, b histogram of (a)

The proposed Z-score approach for enhancement of hazy and non-uniformly illuminated images is based on the following observations:

  • For a hazy image, the pixel intensity histogram is inclined towards the brighter side, as shown in Fig. 1. Thus, the means of the local patches in a hazy image lie towards the brighter side of the histogram.

  • For a non-uniformly illuminated image, the pixel intensity histogram is inclined towards the darker side, as shown in Fig. 2. Thus, the means of the local patches in a non-uniformly illuminated image lie towards the darker side of the histogram.

Since an enhanced image under hazy or non-uniformly illuminated conditions should have an equalized histogram for clear visibility, the following statements can be made:

  • Since the minimum channel deviates from the darker side to the brighter side in a hazy image, the Z-score function adjusts the minimum channel to achieve better contrast for a clear, haze-free output image.

  • On the contrary, for non-uniform illumination, the Z-score function adjusts the pixel intensities of the illumination channel to obtain an enhanced output image with uniform illumination.

In the following sections, single image dehazing and non-uniform illumination enhancement using the Z-score approach are explained in detail.

Fig. 3

Block diagram of single image dehazing using proposed approach

Proposed Approach

This section provides a detailed explanation of single image dehazing and illumination enhancement using the proposed approach for images captured under hazy and non-uniform illumination conditions, respectively.

Single Image Dehazing

Figure 3 shows the block diagram of single image dehazing using the proposed approach. First, the Z-score weighting function map is estimated for the minimum channel to obtain the depth map and scene transmission. The minimum channel of the hazy input scene image is obtained as

$$\begin{aligned} I_{\text {Hazy}}^{\text {min}} (p,q) = \min _{ \tau _1 \in \{ \text {R,G,B} \} } I_{\text {Hazy}}^{\tau _1} (p,q) . \end{aligned}$$
(5)

Then, the global atmospheric light is calculated by Quad-tree subdivision [42] based on the Z-score approach for the Value/Illumination channel of the image in the HSV color domain. Finally, the enhanced image is obtained with the atmospheric scattering model [8] using

$$\begin{aligned} I_{\text {enh}} (p,q) = \dfrac{ I_{\text {Hazy}} (p,q) - A_{\text {G}} }{ \text {max} \left( T_{\text {R}} (p,q) , T_{\text {0}} \right) } + A_{\text {G}}, \end{aligned}$$
(6)

where \(T_{\text {0}}\) is a non-zero lower bound on the scene transmission. The value of \(T_{\text {0}}\) is set to 0.1 to avoid indefinite values at pixel locations with zero transmission.
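The recovery step in (6) can be sketched as below. This is a minimal NumPy sketch under the paper's stated default \(T_{\text{0}} = 0.1\); the function name and array handling are our own choices.

```python
import numpy as np

def recover_scene(hazy, t_r, a_g, t0=0.1):
    """Invert the scattering model per Eq. (6):
    I_enh = (I_hazy - A_G) / max(T_R, T_0) + A_G."""
    t = np.maximum(t_r, t0)          # lower-bound the transmission by T_0
    if hazy.ndim == 3:               # broadcast over color channels
        t = t[..., None]
    return (hazy - a_g) / t + a_g
```

Note that clamping \(T_{\text{R}}\) at \(T_{\text{0}}\) keeps the division well defined even where the estimated transmission vanishes.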

In the following subsections, the estimation of the scene transmission and the global atmospheric light are explained in detail to recover the haze-free clear image.

Scene Transmission

The scene transmission estimation using the proposed approach comprises the estimation of the Z-score weighting function map (ZSWFM) for the minimum channel, followed by depth map estimation. For a local patch of size \((2 F_1 + 1) \times (2 F_1 + 1)\) centered at \((i_1,j_1)\) in a hazy input image \(I_{\text {Hazy}}\) of height \(H_1\) and width \(W_1\), the neighborhood pixel set \(N'\) with \(k_1\) pixels is extracted using

$$\begin{aligned} N_{k_1}' = \left\{ I_{\text {Hazy}}^{\text {min}} ( i_1+u_1,j_1+v_1 ) \right\} ~\forall ~(u_1,v_1) \in [-F_1,F_1], \end{aligned}$$
(7)

where \(\forall ~ 1 \le i_1 \le H_1\) and \(\forall ~ 1 \le j_1 \le W_1\).

For pixel intensities of neighborhood pixel set \(N'\) with mean \(\mu _1\) and standard deviation \(\sigma _1\), the Z-score is evaluated by

$$\begin{aligned} Z^{\text {Score}}_{k_1} \left( p,q \right) = \dfrac{N_i ' (p,q) - \mu _1 (p,q)}{ \sigma _1 (p,q) + \epsilon _1 }~~ \forall ~ 1 \le i \le k_1, \end{aligned}$$
(8)

where \(\epsilon _1\) is an empirical parameter.

Thereafter, the ZSWFM is obtained by

$$\begin{aligned} Z^{f}_{k_1} \left( p,q \right) = 0.5 \times \mathrm{erfc} \left( - \dfrac{ Z^{\text {Score}}_i \left( p,q \right) }{ \sqrt{2} } \right) ~~ \forall ~ 1 \le i \le k_1 . \end{aligned}$$
(9)

Now, the depth map is estimated using weighted average of ZSWFM and neighborhood pixel set as

$$\begin{aligned} D_{\text {Map}} (p,q) = \dfrac{ \sum _{i = 1}^{k_1} Z^{f}_{i} (p,q) \times N_i ' (p,q) }{ \sum _{i = 1}^{k_1} Z^{f}_{i} (p,q) }. \end{aligned}$$
(10)

Therefore, the scene transmission is obtained by

$$\begin{aligned} T_{\text {R}} (p,q) = \exp { \left( -\beta \times D_{\text {Map}} (p,q) \right) , } \end{aligned}$$
(11)

where \(\beta\) is the angular scattering coefficient of the atmosphere; its value has been set to 1 for the experiments in this paper.
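The pipeline in (7)-(11) can be sketched as follows. This is a naive per-pixel sketch, not the paper's implementation: the edge padding at image borders, the looping strategy, and the default patch radius are our own assumptions.

```python
import numpy as np
from math import erfc, sqrt

_erfc = np.vectorize(erfc)  # elementwise erfc over a NumPy array

def zswfm_depth(min_ch, f1=3, eps1=0.01):
    """Eqs. (7)-(10): per-patch Z-scores (8), ZSWFM weights (9),
    and their weighted average (10), used as the depth map estimate."""
    h, w = min_ch.shape
    pad = np.pad(min_ch, f1, mode='edge')   # replicate borders (our choice)
    d = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * f1 + 1, j:j + 2 * f1 + 1]
            z = (patch - patch.mean()) / (patch.std() + eps1)   # Eq. (8)
            wt = 0.5 * _erfc(-z / sqrt(2))                      # Eq. (9)
            d[i, j] = (wt * patch).sum() / wt.sum()             # Eq. (10)
    return d

def transmission(min_ch, beta=1.0):
    # Eq. (11): T_R = exp(-beta * D_Map)
    return np.exp(-beta * zswfm_depth(min_ch))
```

On a constant region the Z-scores vanish, every weight collapses to 0.5, and the weighted average reduces to the patch mean, as expected from (10).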

Remark 2

If the standard deviation \(\sigma _1\) in (8) becomes 0, an infinite value of \(Z^{\text {Score}}_{k_1}\) results. Thus, a lower bound on the standard deviation, i.e., \(\epsilon _{\text {1}}\), has been set experimentally to 0.01.

Global Atmospheric Light

The global atmospheric light in the proposed approach is estimated by Quad-tree subdivision based on the Z-score. The input RGB image is converted into the HSV domain to obtain the illumination channel \(I_{\text {Hazy}}^{\text {V}}\) as

$$\begin{aligned} I_{\text {Hazy}}^{\tau _1} \xrightarrow {\text { RGB to HSV Conversion }} I_{\text {Hazy}}^{\tau _2}, \end{aligned}$$
(12)

where \(\tau _1 \in \{ \text {R,G,B} \}\) and \(\tau _2 \in \{ \text {H,S,V} \}\). Then, the illumination channel \(I_{\text {Hazy}}^{\text {V}}\) is divided into four equal regions represented by \(I_{d_{n}}^l\) for \(d_n \in \{1,2,3,4\}\) at the \(l\)th iteration. For each subdivision, the ZSWFM, as defined in (9), is obtained by

$$\begin{aligned} A^l_{d_{n}} = Z^f \left( I_{d_{n}}^l \right) . \end{aligned}$$
(13)

Then, the maximum of mean of ZSWFM for each subdivision is evaluated by

$$\begin{aligned} A^l_{\text {max}} = \max \limits _n \left( \text {mean} \left( A^l_{d_{n}} \right) \right) . \end{aligned}$$
(14)

Thereafter, the subdivision with the maximum ZSWFM is considered for further subdivision. The Quad-tree subdivision continues until the following criterion is satisfied:

$$\begin{aligned} | A^{l}_{\text {max}} - A^{l-1}_{\text {max}} | \le \epsilon _{\text {T}}, \end{aligned}$$
(15)

where \(\epsilon _{\text {T}}\) is a predefined threshold set experimentally.

Hence, the global atmospheric light \(A_{\text {G}}\) is obtained using

$$\begin{aligned} A_{\text {G}} = A^l_{\text {max}}. \end{aligned}$$
(16)

Finally, the enhanced dehazed image is obtained using (6).
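The Quad-tree procedure of (13)-(16) can be sketched as below, under the paper's default \(\epsilon_{\text{T}} = 0.1\). The `min_size` recursion guard and the reuse of \(\epsilon_1 = 0.01\) inside the per-region ZSWFM are our own safeguards, not stated in the paper.

```python
import numpy as np
from math import erfc, sqrt

_erfc = np.vectorize(erfc)

def zswfm(block, eps=0.01):
    # Eq. (13): Z-score weighting function map of one region
    z = (block - block.mean()) / (block.std() + eps)
    return 0.5 * _erfc(-z / sqrt(2))

def atmospheric_light(v_channel, eps_t=0.1, min_size=4):
    """Quad-tree subdivision per Eqs. (13)-(16): keep the quadrant with
    the largest mean ZSWFM until the change drops below eps_t."""
    region, prev = v_channel.astype(float), None
    while True:
        h, w = region.shape
        quads = [region[:h // 2, :w // 2], region[:h // 2, w // 2:],
                 region[h // 2:, :w // 2], region[h // 2:, w // 2:]]
        scores = [zswfm(q).mean() for q in quads]
        best = int(np.argmax(scores))
        a_max = scores[best]                                 # Eq. (14)
        if prev is not None and abs(a_max - prev) <= eps_t:  # Eq. (15)
            return a_max                                     # Eq. (16)
        if min(quads[best].shape) < min_size:                # size guard (ours)
            return a_max
        region, prev = quads[best], a_max
```

Since the ZSWFM values lie in (0, 1), the returned estimate of \(A_{\text{G}}\) is likewise bounded in (0, 1).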

Remark 3

For the experiments in this paper, the value of \(\epsilon _{\text {T}}\) has been set to 0.1. Decreasing \(\epsilon _{\text {T}}\) leads to a more accurate but slower estimation of the global atmospheric light, whereas increasing it leads to a less accurate estimation with faster run-time.

Fig. 4

Block diagram of illumination enhancement using proposed approach

Non-uniform Illumination Enhancement

Figure 4 shows the block diagram of the proposed approach for the enhancement of non-uniformly illuminated images. Initially, the illumination channel \(I_{\text {NUI}}^{\text {V}}\) of the non-uniformly illuminated input image \(I_{\text {NUI}}^{\tau _3}\) for \(\tau _3 \in \{ \text {R,G,B} \}\) is obtained by converting it to the HSV domain, i.e., \(I_{\text {NUI}}^{\tau _4}\) for \(\tau _4 \in \{ \text {H,S,V} \}\). Then, the illumination channel is processed with the ZSWFM to obtain the coarse illumination, followed by adaptive gamma correction. Thereafter, the scene reflectance is estimated. Finally, the enhanced image is composed using (2).

In the following subsections, the coarse illumination estimation, the scene reflectance, and the adaptive gamma correction are explained in detail to restore the enhanced output image.

Coarse Illumination

For a local patch of size \((2 F_2 + 1) \times (2 F_2 + 1)\) centered at \((i_2,j_2)\) in the illumination channel \(I_{\text {NUI}}^{\text {V}}\) of a non-uniformly illuminated image \(I_{\text {NUI}}\) of height \(H_2\) and width \(W_2\), the neighborhood pixel set \(N''\) with \(k_2\) pixels is extracted using

$$\begin{aligned} N_{k_2}'' = \{ I_{\text {NUI}}^{\text {V}} ( i_2+u_2,j_2+v_2 ) \}~\forall ~(u_2,v_2) \in [-F_2,F_2], \end{aligned}$$
(17)

where \(\forall ~ 1 \le i_2 \le H_2\) and \(\forall ~ 1 \le j_2 \le W_2\). The Z-score for neighborhood pixel set \(N''\) is evaluated by

$$\begin{aligned} Z^{\text {Score}}_{k_2} \left( p,q \right) = \dfrac{N_i '' (p,q) - \mu _2 (p,q)}{ \sigma _2 (p,q) + \epsilon _2 }~~ \forall ~ 1 \le i \le k_2, \end{aligned}$$
(18)

where \(\epsilon _2\) is an empirical parameter set to 0.01 to avoid an infinite value of \(Z^{\text {Score}}_{k_2}\), and \(\mu _2\) and \(\sigma _2\) are the mean and standard deviation of \(N''\), respectively. Then, the ZSWFM is obtained by

$$\begin{aligned} Z^{f}_{k_2} \left( p,q \right) = 0.5 \times \mathrm{erfc} \left( - \dfrac{ Z^{\text {Score}}_i \left( p,q \right) }{ \sqrt{2} } \right) ~~ \forall ~ 1 \le i \le k_2. \end{aligned}$$
(19)

Thus, the coarse illumination is estimated using weighted average of ZSWFM and neighborhood pixel set as

$$\begin{aligned} I_{\text {l}} (p,q) = \dfrac{ \sum _{i = 1}^{k_2} Z^{f}_{i} (p,q) \times N_i '' (p,q) }{ \sum _{i = 1}^{k_2} Z^{f}_{i} (p,q) } . \end{aligned}$$
(20)

Scene Reflectance

The scene reflectance for non-uniformly illuminated input image is estimated as

$$\begin{aligned} R (p,q) = \dfrac{ I_{\text {NUI}} (p,q) }{ I_{\text {l}} (p,q) } . \end{aligned}$$
(21)

Adaptive Gamma Correction

The coarse illumination obtained by (20) is processed with adaptive gamma correction [23] to obtain the refined coarse illumination as

$$\begin{aligned} I_{\text {l}}^{\text {AGC}} (p,q) = I_{\text {l}} (p,q) ^ {\gamma (p,q)}, \end{aligned}$$
(22)

where

$$\begin{aligned} \gamma (p,q) = \dfrac{ I_{\text {l}} (p,q) + \alpha }{ \left( 1 + \alpha \right) ^ {\alpha } } \end{aligned}$$
(23)

and

$$\begin{aligned} \alpha = 1 + \text {mean} \left( I_{\text {NUI}}^{\text {V}} \right) . \end{aligned}$$
(24)

Thus, the final enhanced output image is recomposed with obtained scene reflectance and refined coarse illumination as

$$\begin{aligned} I_{\text {enh}} (p,q) = R(p,q) \times I_{\text {l}}^{\text {AGC}} (p,q). \end{aligned}$$
(25)
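The reflectance, adaptive gamma, and recomposition steps of (21)-(25) can be sketched as follows. This is a minimal sketch with our own assumptions: a small divide-by-zero guard in (21), clipping of the final output, and the mean of the coarse illumination used in place of \(I_{\text{NUI}}^{\text{V}}\) in (24) when no channel is supplied.

```python
import numpy as np

def enhance(i_nui, i_l, alpha=None):
    """Eqs. (21)-(25): reflectance R = I_NUI / I_l, adaptive gamma
    on the coarse illumination, then recomposition R * I_l^AGC."""
    if alpha is None:
        alpha = 1.0 + i_l.mean()                   # Eq. (24), approximated
    r = i_nui / np.maximum(i_l, 1e-3)              # Eq. (21), guarded divide
    gamma = (i_l + alpha) / (1.0 + alpha) ** alpha # Eq. (23)
    i_agc = i_l ** gamma                           # Eq. (22)
    return np.clip(r * i_agc, 0.0, 1.0)            # Eq. (25), clipped
```

For a uniformly dark input in [0, 1], the per-pixel gamma falls below 1, so the refined illumination, and hence the recomposed image, is brightened.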
Fig. 5

Visualization of experimental results on a natural hazy image: a input hazy image, dehazed images using b CLAHE [16], c DCP [10], d CAP [28], e MSCNN [30], f DehazeNet [29], g AOD-Net [31], h GMAN [34], and i proposed approach

Fig. 6

Visualization of experimental results on a natural hazy image: a input hazy image, dehazed images using b CLAHE [16], c DCP [10], d CAP [28], e MSCNN [30], f DehazeNet [29], g AOD-Net [31], h GMAN [34], and i proposed approach. (Please also see Fig. 5 for more results)

Fig. 7

Visualization of experimental results for illumination enhancement: a non-uniformly illuminated image, Enhanced images using b CLAHE [16], c CVC [18], d LDR [19], e ENR [23], f LIME [24], g LECARM [27], h Zero-DCE [35], and i proposed approach

Fig. 8

Visualization of experimental results for illumination enhancement: a non-uniformly illuminated image, Enhanced images using b CLAHE [16], c CVC [18], d LDR [19], e ENR [23], f LIME [24], g LECARM [27], h Zero-DCE [35], and i proposed approach. (Please also see Fig. 7 for more results)

Table 1 Average performance analysis on RESIDE dataset [47]
Table 2 Average run time (in seconds) comparison on RESIDE dataset [47]
Table 3 Performance comparison for Illumination Enhancement on images from HDR dataset [48]
Table 4 Average run time (in seconds) comparison on HDR dataset [48]
Table 5 Performance comparison for Illumination Enhancement on images from DICM dataset [19]
Table 6 Average run time (in seconds) comparison on DICM dataset [19]
Table 7 Average performance analysis on MiddleBury Stereo dataset [49,50,51]

Results, Validations, and Discussions

This section presents the experimental results of the proposed approach and its quantitative and qualitative comparison with the state-of-the-art. The dehazing results are compared with the Contrast Limited Adaptive Histogram Equalization (CLAHE) [16], Dark Channel Prior (DCP) [10], CAP [28], MSCNN [30], DehazeNet [29], AOD-Net [31], and GMAN [34] methods. The performance metrics used for quantitative comparison are the Peak Signal-to-Noise Ratio (PSNR) [43], Structural SIMilarity (SSIM) index [44], Lightness Order Error (LOE) [45], and Naturalness Image Quality Evaluator (NIQE) [46] scores. On the other hand, the results for non-uniformly illuminated images are compared with the CLAHE [16], CVC [18], LDR [19], ENR [23], LIME [24], LECARM [27], and Zero-DCE [35] methods. The quantitative performance has been measured using the LOE and NIQE scores.

In the following subsections, the datasets used for the comparative analysis, the discussions on qualitative and quantitative comparisons, and the average run-time comparison of the proposed approach with other state-of-the-art methods are presented.

Datasets

The single image dehazing comparisons have been demonstrated on natural hazy images used in the literature and images from the REalistic Single Image DEhazing (RESIDE) dataset [47]. The RESIDE dataset consists of 500 indoor Synthetic Objective Testing Set (SOTS) images, 500 outdoor SOTS images, and 10 Hybrid Subjective Testing Set (HSTS) images. The non-uniform illumination enhancement experiments have been performed on natural images with non-uniform illumination from the literature, images from the High Dynamic Range (HDR) dataset [48], and 69 images captured with commercial DIgital CaMeras (DICM) [19]. The HDR dataset consists of 8 scene images. For each scene, the image reconstructed from inputs with different views and exposures acts as the ground truth. Further experiments for single image dehazing have been performed on 32 synthetic hazy images generated from the MiddleBury Stereo dataset [49,50,51] and natural hazy benchmark images used in the literature [10]. Moreover, the experiments for non-uniform illumination have been performed on benchmark images frequently used in the literature [52].

Discussions

Figures 5 and 6 show the comparative visualization of the dehazed images. Among all, CLAHE could not deal with the haze and generates enhanced results with information distortions. The results generated using DCP and CAP were able to remove the haze but seem over-saturated. However, MSCNN, DehazeNet, and GMAN leave some haze behind to retain the image naturalness instead of enhancing the contrast too much and risking over-saturation. In comparison to the others, AOD-Net is capable of maintaining both the contrast and the haze removal, but adds blur near edges, resulting in degraded textural information. The enhanced image obtained using the proposed approach manages the trade-off between contrast restoration and retention of scene texture better than the others.

Moreover, Table 1 tabulates the performance of the proposed approach against the others in terms of the PSNR, SSIM, LOE, and NIQE measures. Higher values of PSNR and SSIM indicate that the restored image is closer to the ground truth, whereas lower values of LOE and NIQE indicate that the enhanced image preserves its naturalness close to human perception. Similarly, Table 7 compares the performance of the proposed approach on the 32 synthetic hazy images generated from the MiddleBury Stereo dataset.

Figures 7 and 8 show the visual comparison of the enhanced images obtained for natural non-uniformly illuminated images used in the literature. It is quite evident from the figures that the proposed approach outperforms the others by maintaining the textural information while performing the enhancement. Comparatively, LIME performs better during the enhancement, as the image texture is retained with less noise, whereas CLAHE, CVC, LDR, ENR, and LECARM fail to enhance the visual information for human perception.

Since the naturalness of a scene image is a major concern in matching human perception, the LOE and NIQE measures have been tabulated for comparison in Tables 3 and 5. The lower the values of LOE and NIQE, the better the performance of the algorithm. Hence, Tables 3 and 5 demonstrate the superiority of the proposed approach over the others.

Remark 4

A refined scene transmission map with preserved structures plays a key role in single image dehazing methods relying on the atmospheric scattering model. Similarly, a refined illumination channel plays the key role in the enhancement of non-uniformly illuminated images using the Retinex theory. To explore this possibility, the Fast Image Dehazing algorithm using Morphological Reconstruction (FIDMR) proposed by Colores et al. [53] and the Fast Bright-Pass Bilateral Filter (FBPBF) proposed by Ghosh et al. [54] are used to compare, respectively, the dehazed images for some natural hazy images and the enhancement of non-uniformly illuminated images frequently used in the literature. Figures 9 and 10 show the visual comparisons of the proposed approach on hazy and non-uniformly illuminated images. The figures show the effectiveness of refining the scene transmission and the illumination channel using the proposed approach against FIDMR and FBPBF, respectively.

Fig. 9

Visual comparison for single image dehazing on natural hazy images [10]. Top row: Natural hazy images, Middle row: Dehazed images using FIDMR [53], and Bottom row: Dehazed images using proposed approach

Fig. 10

Visual comparison for enhancement of non-uniformly illuminated images [52]. Top row: Non-uniformly illuminated images, Middle row: Enhanced images using FBPBF [54], and Bottom row: Enhanced images using proposed approach.

Fig. 11

Visualization of dark channels: a input hazy image, b dark channel obtained using DCP [10], c dark channel obtained using proposed ZSWFM, and d Dehazed image obtained using proposed approach

Fig. 12

Visualization of illumination channels: a input non-uniformly illuminated image, b illumination channel obtained using ENR [23], c illumination channel obtained using proposed ZSWFM, and d Enhanced image obtained using proposed approach.

Remark 5

For single image dehazing, the proposed approach using the ZSWFM aims to produce the dark channel using (10). This adjusts the pixel intensities from the brighter side to the darker side using the ZSWFM. Figure 11 shows the visual comparison of the dark channels obtained using the conventional method, i.e., DCP [10], and the proposed ZSWFM. DCP takes patch-wise dark pixel intensity values, which sometimes results in darker values in low-contrast regions. Similarly, for the non-uniform illumination condition, the proposed ZSWFM using (20) adjusts the pixel intensities from the darker side to the brighter side. Figure 12 shows the visual comparison of the illumination channels obtained using the conventional method, i.e., ENR [23], and the proposed ZSWFM. The edge regions produce artifacts and inconsistencies with ENR, but these are handled quite efficiently by the proposed approach.

Average Run Time

The average run time of the proposed approach has been tabulated for single image dehazing on the RESIDE dataset in Table 2 and for non-uniform illumination on the HDR and DICM datasets in Tables 4 and 6, respectively. Though the average run time of the proposed approach does not outperform the others, it maintains the trade-off between accuracy and run time efficiently. The accuracy of the proposed approach has been quantitatively validated as the highest using several performance metrics in Tables 1, 3, and 5. For single image dehazing, the average run time of the proposed approach on the RESIDE dataset is the second lowest, after the CAP method, as shown in Table 2. For the enhancement of non-uniformly illuminated images, the average run time of the proposed approach is the third lowest on the HDR dataset, after LDR and ENR, and the second lowest on the DICM dataset, after LDR, as shown in Tables 4 and 6.

Conclusion

This paper contributes a novel Z-score-based enhancement approach for hazy and non-uniformly illuminated scene images. The proposed approach has been proven effective at performing image dehazing and non-uniform illumination enhancement without visual artifacts while maintaining the textural information.

However, potential future scope still exists regarding the noise incurred during enhancement. Though the proposed approach outperforms the others, the noise should also be dealt with as a pre-processing task for achieving an image with more textural information and better visual appearance.

References

1. Huang SC, Chen BH, Cheng YJ. An efficient visibility enhancement algorithm for road scenes captured by intelligent transportation systems. IEEE Trans Intell Transport Syst. 2014;15(5):2321–32.

2. Shih K-T, Chen H-H. Exploiting perceptual anchoring for color image enhancement. IEEE Trans Multimed. 2016;18(2):300–10.

3. Long J, Shi Z, Tang W, Zhang C. Single remote sensing image dehazing. IEEE Geosci Remote Sens Lett. 2014;11(1):59–63.

4. Chaudhry AM, Riaz MM, Ghafoor A. A framework for outdoor RGB image enhancement and dehazing. IEEE Geosci Remote Sens Lett. 2018;15(6):932–6.

5. Zhang X, Liu L, Chen X, Xie S, Lei L. A novel multitemporal cloud and cloud shadow detection method using the integrated cloud Z-scores model. IEEE J Sel Top Appl Earth Observ Remote Sens. 2019;12(1):123–34.

6. Makarau A, Richter R, Müller R, Reinartz P. Haze detection and removal in remotely sensed multispectral imagery. IEEE Trans Geosci Remote Sens. 2014;52(9):5895–905.

7. Makarau A, Richter R, Schläpfer D, Reinartz P. Combined haze and cirrus removal for multispectral imagery. IEEE Geosci Remote Sens Lett. 2016;13(3):379–83.

8. Nayar SK, Narasimhan SG. Vision in bad weather. In: Proc. Seventh IEEE Int. Conf. on Computer Vision, Sep. 1999, vol. 2, pp. 820–7.

9. Land EH, McCann JJ. Lightness and Retinex theory. J Opt Soc Am. 1971;61(1):1–11.

10. He K, Sun J, Tang X. Single image haze removal using dark channel prior. IEEE Trans Pattern Anal Mach Intell. 2011;33(12):2341–53.

11. Gao Y, Hu H-M, Li B, Guo Q. Naturalness preserved nonuniform illumination estimation for image enhancement based on Retinex. IEEE Trans Multimed. 2018;20(2):335–44.

12. Schechner YY, Narasimhan SG, Nayar SK. Instant dehazing of images using polarization. In: IEEE Conf. on Computer Vision and Pattern Recognition, Kauai, HI, USA, 2001, vol. 1, pp. I-325–I-332.

13. Schechner YY, Narasimhan SG, Nayar SK. Polarization-based vision through haze. Appl Opt. 2003;42(3):511–25.

14. Narasimhan SG, Nayar SK. Chromatic framework for vision in bad weather. In: IEEE Conf. on Computer Vision and Pattern Recognition, Hilton Head Island, SC, USA, 2000, vol. 1, pp. 598–605.

15. Pizer SM, Amburn EP, Austin JD, Cromartie R, Geselowitz A, Greer T, Romeny BTH, Zimmerman JB. Adaptive histogram equalization and its variations. Comput Vis Graph Image Process. 1987;39(3):355–68.

16. Zuiderveld K. Contrast limited adaptive histogram equalization. In: Graphics Gems IV. 1994. pp. 474–85. https://doi.org/10.5555/180895.180940.

17. Meng G, Wang Y, Duan J, Xiang S, Pan C. Efficient image dehazing with boundary constraint and contextual regularization. In: IEEE Int. Conf. on Computer Vision, Sydney, NSW, Australia, Dec. 2013, pp. 617–24.

18. Celik T, Tjahjadi T. Contextual and variational contrast enhancement. IEEE Trans Image Process. 2011;20(12):3431–41.

19. Lee C, Lee C, Kim C-S. Contrast enhancement based on layered difference representation of 2D histograms. IEEE Trans Image Process. 2013;22(12):5372–84.

20. Jobson DJ, Rahman Z, Woodell GA. Properties and performance of a center/surround Retinex. IEEE Trans Image Process. 1996;6(3):451–62.

21. Rahman Z, Jobson DJ, Woodell GA. Multi-scale Retinex for color image enhancement. In: IEEE Int. Conf. on Image Processing, Lausanne, Switzerland, Sept. 1996, vol. 3, pp. 1003–6.

22. Fu X, Zeng D, Huang Y, Zhang X, Ding X. A weighted variational model for simultaneous reflectance and illumination estimation. In: IEEE Conf. on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, June 2016, pp. 2782–90.

23. Shin Y, Jeong S, Lee S. Efficient naturalness restoration for non-uniform illumination images. IET Image Process. 2015;9(8):662–71.

24. Guo X, Li Y, Ling H. LIME: low-light image enhancement via illumination map estimation. IEEE Trans Image Process. 2017;26(2):982–93.

25. Ghosh S, Gavaskar RG, Panda D, Chaudhury KN. Fast scale-adaptive bilateral texture smoothing. IEEE Trans Circ Syst Video Technol. 2020;30(7):2015–26.

26. Ghosh S, Gavaskar RG, Chaudhury KN. Saliency guided image detail enhancement. In: 2019 National Conference on Communications (NCC), Bangalore, India, Feb. 2019.

27. Ren Y, Ying Z, Li TH, Li G. LECARM: low-light image enhancement using the camera response model. IEEE Trans Circ Syst Video Technol. 2019;29(4):968–81.

28. Zhu Q, Mai J, Shao L. A fast single image haze removal algorithm using color attenuation prior. IEEE Trans Image Process. 2015;24(11):3522–33.

29. Cai B, Xu X, Jia K, Qing C, Tao D. DehazeNet: an end-to-end system for single image haze removal. IEEE Trans Image Process. 2016;25(11):5187–98.

30. Ren W, Liu S, Zhang H, Pan J, Cao X, Yang M-H. Single image dehazing via multi-scale convolutional neural networks. In: European Conf. on Computer Vision, 2016, pp. 154–69.

31. Li B, Peng X, Wang Z, Xu J, Feng D. AOD-Net: all-in-one dehazing network. In: IEEE Int. Conf. on Computer Vision, Venice, Italy, Oct. 2017, pp. 4770–8.

32. Ren W, Ma L, Zhang J, Pan J, Cao X, Liu W, Yang M-H. Gated fusion network for single image dehazing. In: IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, June 2018.

33. Wang A, Wang W, Liu J, Gu N. AIPNet: image-to-image single image dehazing with atmospheric illumination prior. IEEE Trans Image Process. 2019;28(1):381–93.

34. Liu Z, Xiao B, Alrabeiah M, Wang K, Chen J. Single image dehazing with a generic model-agnostic convolutional neural network. IEEE Signal Process Lett. 2019;26(6):833–7.

35. Guo C, Li C, Guo J, Loy CC, Hou J, Kwong S, Cong R. Zero-reference deep curve estimation for low-light image enhancement. In: IEEE Conf. on Computer Vision and Pattern Recognition, 2020, pp. 1780–9.

36. Li X, Shen H, Zhang L, Zhang H, Yuan Q, Yang G. Recovering quantitative remote sensing products contaminated by thick clouds and shadows using multitemporal dictionary learning. IEEE Trans Geosci Remote Sens. 2014;52(11):7086–98.

37. Gao Y, Liu G, Ma C. Dense hazy image enhancement based on generalized imaging model. In: 2018 IEEE 3rd Int. Conf. on Image, Vision and Computing (ICIVC), Chongqing, China, June 2018, pp. 410–4.

38. Chandrasekharan R, Sasikumar M. Fuzzy transform for contrast enhancement of nonuniform illumination images. IEEE Signal Process Lett. 2018;25(6):813–7.

39. Sharma T, Verma NK. Estimating depth and global atmospheric light for image dehazing using Type-2 fuzzy approach. IEEE Trans Emerg Top Comput Intell. 2020. pp. 1–10 (Early Access).

40. Kreyszig E. Advanced engineering mathematics. 4th ed. Hoboken: John Wiley and Sons; 1979. ISBN 0-471-02140-7.

41. Kranzusch R, Siepen FAD, Wiesemann S, Zange L, Jeuthe S, da Silva TF, Kuehne T, Pieske B, Tillmanns C, Friedrich MG, Schulz-Menger J. Z-score mapping for standardized analysis and reporting of cardiovascular magnetic resonance modified Look-Locker inversion recovery (MOLLI) T1 data: normal behavior and validation in patients with amyloidosis. J Cardiovasc Magn Reson. 2020;22(1):1–10.

42. Kim JH, Jang WD, Sim JY, Kim CS. Optimized contrast enhancement for real-time image and video dehazing. J Vis Commun Image Represent. 2013;24(3):410–25.

43. Salomon D. Data compression: the complete reference. New York: Springer-Verlag; 2004.

44. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process. 2004;13(4):600–12.

45. Wang S, Zheng J, Hu HM, Li B. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans Image Process. 2013;22(9):3538–48.

46. Mittal A, Soundararajan R, Bovik AC. Making a completely blind image quality analyzer. IEEE Signal Process Lett. 2013;22(3):209–12.

47. Li B, Ren W, Fu D, Tao D, Feng D, Zeng W, Wang Z. Benchmarking single-image dehazing and beyond. IEEE Trans Image Process. 2019;28(1):492–505.

48. Sen P, Kalantari NK, Yaesoubi M, Darabi S, Goldman DB, Shechtman E. Robust patch-based HDR reconstruction of dynamic scenes. ACM Trans Graph. 2012;31(6):203:1–203:11.

49. Scharstein D, Szeliski R. High-accuracy stereo depth maps using structured light. In: IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Madison, WI, USA, June 2003, vol. 1, pp. 195–202.

50. Scharstein D, Pal C. Learning conditional random fields for stereo. In: IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Minneapolis, MN, USA, 2007, pp. 1–8.

51. Hirschmüller H, Scharstein D. Evaluation of cost functions for stereo matching. In: IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Minneapolis, MN, USA, 2007, pp. 1–8.

52. Sharma T, Verma NK. Adaptive interval Type-2 fuzzy filter: an AI agent for handling uncertainties to preserve image naturalness. IEEE Trans Artif Intell. 2021;2(1):83–92.

53. Colores SS, Yepez EC, Arreguin JMR, Botella G, Carrillo LML, Ledesma S. A fast image dehazing algorithm using morphological reconstruction. IEEE Trans Image Process. 2019;28(5):2357–66.

54. Ghosh S, Chaudhury KN. Fast bright-pass bilateral filtering for low-light enhancement. In: 2019 IEEE Int. Conf. on Image Processing (ICIP), Taipei, Taiwan, Sept. 2019, pp. 205–9.


Author information



Correspondence to Teena Sharma.

Ethics declarations

Conflict of Interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Sharma, T., Verma, N.K. Single Image Dehazing and Non-uniform Illumination Enhancement: A Z-Score Approach. SN COMPUT. SCI. 2, 488 (2021). https://doi.org/10.1007/s42979-021-00912-1


Keywords

  • Image enhancement
  • Image quality
  • Image dehazing
  • Scene transmission
  • Global atmospheric light
  • Non-uniform illumination
  • Illumination channel
  • Z-score