Low and non-uniform illumination color image enhancement using weighted guided image filtering

In the state of the art, grayscale image enhancement algorithms are typically adopted to enhance RGB color images captured under low or non-uniform illumination. As these methods are applied to each RGB channel independently, imbalanced inter-channel enhancement (color distortion) can often be observed in the resulting images. Moreover, images with non-uniform illumination enhanced by the retinex algorithm are prone to artifacts such as local blurring, halos, and over-enhancement. To address these problems, an improved RGB color image enhancement method based on weighted guided image filtering (WGIF) is proposed for images captured under non-uniform illumination or in poor visibility. Unlike the conventional retinex algorithm and its variants, which use a Gaussian filter as the surround function to estimate the illumination component, the proposed method uses WGIF, whose anisotropy and adaptive local regularization avoid local blurring and halo artifacts. To limit color distortion, RGB images are first converted to HSI (hue, saturation, intensity) color space, where only the intensity channel is enhanced, before being converted back to RGB space by a linear color restoration algorithm. Experimental results show that the proposed method is effective for both RGB color and grayscale images captured under low exposure and non-uniform illumination, with better visual quality and objective evaluation scores than those of comparator algorithms. It is also efficient, due to its use of a linear color restoration algorithm.


Introduction
Color images contain richer information than grayscale images, and are used in many fields. However, in practice, images are often obtained under undesirable weather and illumination conditions. Images taken under insufficient or non-uniform light show low brightness, poor contrast, blurred local details, poor color fidelity, and sudden changes in brightness, and are often accompanied by significant noise. These make it difficult for human or machine vision to extract and analyze information from such images [1][2][3]. Thus, many scholars have devoted themselves to color image enhancement [4,5].
To enhance color images taken under low illumination, it is required to maintain the color information without distortion while increasing the brightness and contrast, to highlight the image details and texture, so that the enhanced image is bright and natural. Conventional color image enhancement directly applies a grayscale image enhancement method to each channel of the RGB model. These methods include, in the spatial domain, histogram equalization and its various improvements [6][7][8][9], and in the frequency domain, wavelet transform algorithms [10][11][12], retinex [13] and its improvements [14][15][16]. However, good results cannot be achieved by applying these grayscale image enhancement algorithms directly to the color image, due to strong correlation between the RGB color channels. If each color channel is directly processed by a grayscale image enhancement algorithm, the different channels will be enhanced in an imbalanced way, leading to color distortion, saturation decrease, obvious block effects, and other issues.
To overcome these problems, some work processes images in other color spaces such as HSI, HSV, YCbCr, and YUV [17][18][19]. In these color models, the brightness and color of the image are recorded in independent channels, so processing the brightness channel does not affect the color channels, ensuring no color shift occurs. For example, Yang et al. [20] and Shin et al. [21] proposed image enhancement methods in the HSV model. First, in the value channel, the illumination component is obtained by a Gaussian filter, and the reflection component is found by retinex theory. Then the brightness of the illumination component is increased, and the processed illumination component is recombined with the reflection component to give the enhanced value channel image. Finally, the enhanced image is reconverted to RGB. This approach can overcome deficiencies such as color distortion and loss of image light details. However, due to the isotropic characteristics of Gaussian filtering (GF) when it is used to estimate the background illumination, blurring of edges of the resulting reflection component tends to occur, and the enhanced image is subject to halo artifacts and low contrast because of the non-uniform illumination of the original image. Consequently, in order to enhance the non-uniform illumination component under low illumination, some researchers have attempted to estimate the background illumination using a filter with anisotropic characteristics, such as bilateral filtering (BF) [22][23][24] or the guided image filter (GIF) [25,26]. BF may be subject to "gradient reversal" [21,27,28] when used for image enhancement, because the Gaussian weighted average is unstable if a pixel on an edge has few similar pixels around it; moreover, the efficiency of BF is poor.
However, if GIF is used to estimate the illumination component of the image, a blurred halo and pseudo-edge effects [29] can appear at the edges of windows with large texture differences, as the same regularization factor is used in all local filtering windows. Accordingly, a variant of GIF, the weighted guided image filter (WGIF) [30], was proposed to achieve good edge preservation.
In recent years, with the development of deep neural networks, many researchers have proposed deep learning-based low-light image enhancement methods [31][32][33]. Wang et al. [34] put forward a global light awareness and detail retention network (GladNet), which can effectively enhance image details, but still suffers from unsaturated color and low contrast. Wei et al. [35] put forward a deep retinex network (retinexNet). This method is data-driven and introduces multi-scale cascading technology to adjust lighting; it improves the brightness of low-illumination images well. However, as it ignores color information in the image, the enhanced image can show color distortion. Zhang et al. [36] proposed a Kindling the Darkness (KinD) network. It employs pairs of images taken under different exposure conditions to train an image decomposition network, a reflection component recovery network, and a luminance adjustment network, which can effectively remove noise, enhance brightness, and maintain color realism. Although such deep learning-based methods have good generalization performance and provide good results, they need large-scale datasets for training, are highly dependent on those datasets, and require substantial computational resources.
In this paper, we present a novel color image enhancement method for low and non-uniform illumination images. Our main contributions are as follows:
• WGIF possesses anisotropic characteristics, so gradient inversion can be effectively avoided by using it to estimate illumination. WGIF also introduces adaptive regularization parameters, which better avoid halos and pseudo-edges where the edge gray difference is large. We therefore use WGIF instead of GF to estimate the illumination component and to denoise the reflection component, which better maintains edge and detail information, and avoids local blurring, halos, and noise amplification.
• Only the intensity channel in HSI color space is processed, and a linear color restoration algorithm is used to calculate the luminance gain coefficient to accurately restore the color of each pixel. This is efficient while avoiding color distortion.
• The illumination component and the fused image are enhanced by adaptive gamma correction and an S-hyperbolic tangent function to improve image contrast and enhance image details.
The rest of this paper is organized as follows. Section 2 gives a brief review of related work. Section 3 explains the proposed method. Section 4 presents and analyzes experimental results, and conclusions are drawn in Section 5.

Related work
Our proposed method converts low and non-uniform illumination color images to the HSI model. Only dark areas of the intensity channel are enhanced to improve the brightness and clarity of the color image: enhancement of the intensity channel is the core of the proposed method. The key issue is how to maintain rich information while avoiding phenomena such as blurring, halos, over-enhancement, and other illumination artifacts as the brightness of the image is increased.
Human perception can construct a visual representation with vivid color and detail across a wide dynamic range regardless of lighting variations; this ability is called color constancy [37,38]. Some scholars have proposed various low-illumination image enhancement algorithms based on the perceptual characteristics of the human visual system, among which retinex [39,40], based on color constancy, has attracted much attention, and many improved versions have emerged [24,41,42]. According to the illumination-reflection model, the human visual perception of color depends on the reflection characteristics of the object's surface, and the image can be mathematically represented as the product of the illumination component and reflection component:

S_c(x, y) = L_c(x, y) · R_c(x, y)    (1)

where c is one of the RGB channels, c ∈ {R, G, B}, and S, L, and R are the original image, illumination component, and reflection component, respectively.
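As a minimal numerical illustration of the illumination-reflection model (toy values chosen for this sketch, not the authors' data), an observed channel is the product of illumination and reflectance, and the reflectance can be recovered by division once the illumination is estimated:

```python
import numpy as np

# Illumination-reflection model: S_c = L_c * R_c for each channel c.
# Toy 2x2 single-channel example with synthetic L and R.
L = np.array([[0.2, 0.2],
              [0.9, 0.9]])          # illumination: dark top row, bright bottom row
R = np.array([[0.5, 1.0],
              [0.5, 1.0]])          # reflectance: intrinsic surface property
S = L * R                           # observed image

# Given an estimate of L, the reflectance is recovered by division.
R_est = S / np.maximum(L, 1e-6)     # small epsilon avoids division by zero
```

The division step is exactly why an accurate, edge-preserving estimate of L matters: any error in L transfers directly into R.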
The main idea of retinex theory is to calculate and eliminate the illumination component from the original image. To find the illumination component, however, an under-determined equation system must be solved, so it can only be estimated approximately rather than calculated exactly. Jobson et al. [39] put forward the single-scale retinex (SSR) algorithm, using GF as the center-surround function to estimate the background illumination. The scale parameter of the Gaussian function is the only input parameter to SSR: for smaller values, the dynamic range of the image is compressed, while for larger values, image contrast is enhanced. Subsequently, the multi-scale retinex (MSR) algorithm [40] was proposed, which combines dynamic range compression and tonal rendition via a weighted sum of several SSR results at different scales. In practice, MSR only approaches human visual performance in dynamic range compression; it fails to process images with local or global gray-world violations effectively. In some cases the "graying out" effect is severe, and unexpected color distortion may occur. Therefore, a multi-scale retinex algorithm with color restoration (MSRCR) was proposed by Rahman et al. [41], which improves MSR with a universally applied color restoration factor to eliminate color distortion and evident gray zones.
In the classical retinex methods mentioned above, the lighting is usually considered to be uniform, so a GF is used as a center-surround function to estimate the illumination component. However, illumination changes may occur at the edges of objects in the image, so the gradient variation in the directions around such pixels differs. If an isotropic filter such as a GF is used to estimate the illumination component, inaccurate results will be produced in light-change regions, leading to halo defects [22]. In fact, we need to preserve the edges of illumination while smoothing other small fluctuations unrelated to light. Thus, many studies have focused on estimating the illumination component with low-pass filters having anisotropic properties, which preserve edges related to light jumps and smooth useless details and textures independent of light changes. If BF is used to estimate the illumination component [23,24,42], the halo effect may be overcome to some extent. However, on one hand, BF may be subject to "gradient reversal" in image enhancement: because global mapping is adopted, the Gaussian weighted average is unstable if a pixel on an edge has few similar pixels around it, and detailed information about the illumination component is lost. On the other hand, the efficiency of BF is poor. Wang et al. [43] put forward a bright-pass filter to estimate the illumination component, further optimized using a relative illumination error function, thus preserving naturalness while enhancing image details. Sun et al. [44] proposed estimating the illumination component using a guided image filter in the gradient domain (GDGIF) and correcting it using gamma and sigmoid functions, effectively avoiding edge artifacts and enhancing contrast. Nevertheless, noise often exists in low-illumination components and tends to be amplified on enhancement. To avoid noise amplification, Liu et al. [45] put forward a structure-revealing enhancement method based on a robust retinex decomposition model, using fidelity terms, region smoothing terms, and structure information terms to construct an optimization function. This method can effectively suppress noise, but results in regional blurring and loss of detailed information. Yu and Zhu [46] proposed a physical lighting model to recover low-illumination images: having iteratively adjusted the environmental light and light-scattering attenuation rate under information loss constraints, WGIF is used to refine them, which effectively removes noise interference, highlights texture details, and maintains relatively natural colors.
GIF [25] is an image filtering method with anisotropic characteristics. It uses a local linear transfer model between a guidance image and the output image, such that the gradient direction of the output image matches that of the guidance image, effectively avoiding the gradient reversal problem of BF [47,48]. The regularization hyper-parameter of the cost function is set by the user to balance the edge-preserving and smoothing effects. Generally speaking, in high-variance regions, lower regularization hyper-parameter values are chosen so as to penalize the amplitudes of the linear coefficients less and preserve edges, whereas in flat regions, a higher value is preferred to ensure lower approximation error. However, this hyper-parameter is usually fixed for all local windows, so when GIF is used to estimate the illumination component of an image, blurred edges can occur in windows with large texture differences. WGIF [30] combines the advantages of global and local filtering by adaptively adjusting this regularization hyper-parameter based on the variance of the current window. Accordingly, in this paper WGIF is used both to estimate the illumination component and to remove noise from the reflection component.

Proposed method
In this paper, a novel low and non-uniform illumination color image enhancement method based on WGIF is proposed. First, the original color image is transferred from the RGB color model to the HSI color model, with hue, saturation, and intensity channels. WGIF is used only on the intensity image, to estimate its illumination component and to remove the noise contained in the reflection component. Finally, a linear color restoration method transforms the enhanced intensity image back into the original RGB color model to give the final enhanced color image. An outline of the proposed method is shown in Fig. 1. The area surrounded by the dashed box is the core of the proposed method: the process of enhancing the original intensity image based on WGIF. Firstly, WGIF is used to obtain the illumination component of the intensity image, and adaptive brightness equalization (gamma correction and linear stretching) is applied to the obtained illumination component. Then, the reflection component is calculated from the illumination component, and its noise is removed by WGIF. Finally, the new illumination component and reflection component are fused, and the result is globally enhanced. Pseudocode for the proposed method is shown in Algorithm 1.
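Since the method operates on the HSI intensity channel, an RGB-to-HSI conversion is needed as a first step. A sketch using the standard geometric HSI formulas is given below (the function name and epsilon guards are illustrative; the paper does not specify its exact conversion code):

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (floats in [0, 1], shape HxWx3) to H, S, I channels
    using the standard geometric HSI definitions."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0                                  # intensity (the channel enhanced)
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / np.maximum(r + g + b, 1e-6)
    # Hue from the angle between the pixel's chromatic vector and the red axis.
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-6  # guard against 0/0 on gray pixels
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2.0 * np.pi - theta)        # hue in radians, [0, 2*pi)
    return h, s, i

rgb = np.array([[[0.3, 0.6, 0.9]]])                         # one bluish test pixel
h, s, i = rgb_to_hsi(rgb)
```

Because hue and saturation are left untouched, only `i` is passed through the enhancement pipeline; the color restoration step (Section "Color restoration") then avoids converting back through these formulas explicitly.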
Algorithm 1 WGIF-based color image enhancement method
Begin
1) Load the original RGB color image S(x, y), convert it to the HSI color model, and select the intensity image S_I(x, y).
2) Enhance the intensity image:
   a) Use WGIF to estimate the illumination component S_IL of the intensity image.
   b) Adaptive brightness equalization: correct the illumination component using the adaptive gamma function and linear stretching, giving S_ILGf.
   c) Compute the reflection component S_IR = S_I / S_IL and denoise it with WGIF, giving S_IRH.
3) Fuse the processed illumination and reflection components:
   a) Compute the enhanced intensity image S_IE = S_ILGf × S_IRH.
   b) Improve the brightness of the fused image using the S-hyperbolic tangent function, giving S_IEf.
4) Color restoration:
   a) Calculate the brightness gain coefficient α(x, y) = S_IEf(x, y)/S_I(x, y).
   b) Apply α to the RGB channels of the original image to obtain the final enhanced color image.
End

From the above, it can be seen that enhancement of the intensity channel and linear color restoration are the key points of our method. Intensity enhancement mainly consists of illumination estimation, local brightness enhancement, and image fusion; details are described below.

Illumination estimation
In the proposed method, WGIF is applied to estimate the illumination component. Both the guide and input images are the intensity image S_I, and the output q_i is the estimated illumination component, denoted S_IL:

q_i = a_k S_I(i) + b_k, ∀ i ∈ w_k    (2)

where a_k and b_k are linear coefficients assumed constant in the local window w_k. The loss cost function is

E(a_k, b_k) = Σ_{i∈w_k} [ (a_k S_I(i) + b_k − S_I(i))² + (ε / Γ_SI(k)) a_k² ]    (3)

where ε is a regularization factor penalizing large a_k, and Γ_SI(k) is an edge-aware weighting that acts as the criterion for a flat patch versus a high-variance area. It is defined over the local window w_k centered on pixel k as

Γ_SI(k) = (1/N) Σ_{i=1}^{N} (σ²_SI(k) + ζ) / (σ²_SI(i) + ζ)    (4)

where σ²_SI(·) is the variance of S_I in a 3 × 3 local window, N is the number of pixels, ζ is a constant selected to be (0.001L)², and L is the dynamic range of the input image; i ranges over all pixels of the image.
For a pixel i in a high variance region with large texture variation and rich information, the corresponding edge weight Γ S I (i) is large, giving a smaller regularization term ε Γ S I (i) in Eq. (3), so image edges are well preserved. If pixel i is in a flat area, the edge weight Γ S I (i) is small, and ε Γ S I (i) is large, giving more smoothing.
Thanks to the adaptive adjustment of the regularization term, results of WGIF are more stable than for GIF. Estimating the illumination component of the image by WGIF not only better preserves edge details in the image, but also avoids artifact edges in the image, and the reflection component can also be calculated more accurately. Nevertheless, the computational complexity of WGIF is the same as that of GIF: O(N).
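A rough self-guided WGIF sketch in Python/NumPy follows (an illustrative simplification of the WGIF formulation above, not the authors' MATLAB implementation; images are assumed normalized to [0, 1], so ζ = (0.001·L)² with L = 1, and the function name and box-filter averaging of coefficients are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wgif_illumination(S_I, r=5, eps=0.1 ** 2, zeta=(0.001 * 1.0) ** 2):
    """Self-guided WGIF sketch (guide = input = intensity image S_I in [0, 1]).
    The per-window regularizer eps / Gamma adapts to local variance."""
    # Edge-aware weight Gamma from 3x3 local variance: large on edges, small on flat areas.
    mu3 = uniform_filter(S_I, size=3)
    var3 = uniform_filter(S_I ** 2, size=3) - mu3 ** 2
    gamma = (var3 + zeta) * np.mean(1.0 / (var3 + zeta))

    # Guided-filter linear coefficients in (2r+1) x (2r+1) windows, with the
    # regularizer eps / Gamma: weaker smoothing on edges, stronger on flat areas.
    size = 2 * r + 1
    mu = uniform_filter(S_I, size=size)
    var = uniform_filter(S_I ** 2, size=size) - mu ** 2
    a = var / (var + eps / gamma)
    b = mu - a * mu

    # Average the coefficients over all windows covering each pixel, then apply.
    a_bar = uniform_filter(a, size=size)
    b_bar = uniform_filter(b, size=size)
    return a_bar * S_I + b_bar
```

On a flat image the estimate reduces to the image itself (a ≈ 0, b ≈ local mean), while near strong edges the large Γ keeps a close to 1, so the step survives into the illumination estimate rather than smearing into a halo.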
In the classic retinex algorithms, Eq. (1) is transformed into the logarithmic domain, and multiplication is replaced by addition. This simplifies the calculation, but may cause loss of gray information in the image. Therefore, the reflection component S_IR is obtained directly from the estimated illumination component S_IL according to Eq. (1):

S_IR(x, y) = S_I(x, y) / S_IL(x, y)    (5)

Adaptive brightness equalization
To avoid loss of gray information, the proposed method does not remove the illumination component in the logarithmic domain, but instead corrects it. The brightness of the estimated illumination component S_IL is often very low, so the proposed method uses adaptive gamma correction [49], which maps dark input values in a narrow range to a wider range, to give the corrected illumination component S_ILG:

S_ILG(x, y) = φ(S_IL(x, y))

where φ(x, y) is the gamma correction function, whose parameter a is adaptively derived from the mean gray value of S_IL over the m × n pixels of the original image (m and n being its height and width). Darker illumination values are thereby mapped to a wider output range, while bright areas change little, avoiding over-enhancement.
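The behavior can be sketched as follows. Note that the exact correction function φ of ref. [49] is not reproduced here; the exponent rule below (gamma taken from the mean gray value, floored by a hypothetical `gamma_min`) is an illustrative stand-in with the same qualitative effect:

```python
import numpy as np

def adaptive_gamma(S_IL, gamma_min=0.2):
    """Illustrative adaptive gamma correction: the exponent is derived from the
    mean gray value of the illumination component, so a darker image gets a
    smaller exponent and hence stronger brightening, while a bright image is
    changed little. (Stand-in for the exact function of ref. [49].)"""
    mean_gray = float(np.mean(S_IL))          # mean gray value in [0, 1]
    gamma = max(mean_gray, gamma_min)         # darker image -> smaller gamma
    return np.power(np.clip(S_IL, 0.0, 1.0), gamma)

dark = np.full((4, 4), 0.1)                   # very dark illumination map
corrected = adaptive_gamma(dark)
```

With a power-law exponent below 1, values near 0 are stretched far more than values near 1, which is exactly the "narrow dark range to wider range" mapping described above.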

Image fusion
Noise must be removed before image fusion to avoid its amplification. Since noise mainly exists in the reflection component, the reflection component S_IR obtained by Eq. (5) is processed by WGIF, giving the denoised reflection component S_IRH.
Then, the processed illumination component S ILGf is multiplied by the denoised reflection component S IRH to give the fused intensity image S IE .
Finally, the S-hyperbolic tangent function [50] is used to improve the brightness of the fused image S_IE, giving the enhanced intensity image S_IEf. The function is parameterized by b, the mean intensity of S_IE, computed over its m × n pixels (m and n being the height and width of S_IE).
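The fusion and brightening steps can be sketched together. The multiplicative fusion follows Eq. (1); the particular tanh expression below (midpoint at the mean intensity b, slope 4.0) is an assumed stand-in for the exact S-hyperbolic tangent function of ref. [50]:

```python
import numpy as np

def fuse_and_brighten(S_ILGf, S_IRH):
    """Fuse the processed illumination and denoised reflection components,
    then apply an S-shaped tanh brightening centered on the mean intensity b
    of the fused image (tanh form is an illustrative assumption)."""
    S_IE = S_ILGf * S_IRH                             # multiplicative fusion, as in Eq. (1)
    b = float(np.mean(S_IE))                          # mean intensity over the m x n pixels
    # S-curve: intensities above the mean are pushed up, those below are
    # compressed, increasing global contrast around the image's mean level.
    S_IEf = 0.5 * (np.tanh(4.0 * (S_IE - b)) + 1.0)
    return np.clip(S_IEf, 0.0, 1.0)

L_part = np.full((4, 4), 0.6)                         # toy processed illumination
R_part = np.linspace(0.2, 1.0, 16).reshape(4, 4)      # toy denoised reflection
out = fuse_and_brighten(L_part, R_part)
```

Because tanh is monotonic, the ordering of intensities is preserved: the S-curve redistributes contrast without inverting any gray levels.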

Color restoration
After the above steps have given the enhanced intensity image, it must be converted back to RGB to provide the final enhancement output. To avoid the color distortion caused by inconsistent scaling of the RGB channels [51], the brightness gain coefficient α [51][52][53] is calculated from the original and enhanced intensity images using a linear method:

α(x, y) = S_IEf(x, y) / S_I(x, y)

Then, α is used to convert the enhanced image back to RGB, preserving the linear proportions between the RGB channels of the original and enhanced color images. The linear color restoration process is performed using:

[R_1(x, y), G_1(x, y), B_1(x, y)] = α(x, y) [R_0(x, y), G_0(x, y), B_0(x, y)]

where [R_0, G_0, B_0] and [R_1, G_1, B_1] denote the RGB channels of the original and enhanced color images, respectively.
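A sketch of this linear restoration step (function name and epsilon guard are illustrative):

```python
import numpy as np

def restore_color(rgb0, S_I, S_IEf):
    """Linear color restoration: scale every RGB channel of the original image
    by the per-pixel brightness gain alpha = S_IEf / S_I. Because all three
    channels share the same gain, their ratios (and hence the hue) are
    preserved."""
    alpha = S_IEf / np.maximum(S_I, 1e-6)             # brightness gain coefficient
    rgb1 = rgb0 * alpha[..., np.newaxis]              # same gain applied to R, G, B
    return np.clip(rgb1, 0.0, 1.0)

rgb0 = np.array([[[0.1, 0.2, 0.3]]])                  # one dark pixel
S_I = rgb0.mean(axis=-1)                              # HSI intensity = mean of RGB
S_IEf = 2.0 * S_I                                     # suppose enhancement doubled it
rgb1 = restore_color(rgb0, S_I, S_IEf)
```

This is cheaper than a full HSI-to-RGB back-conversion, which is the efficiency advantage claimed for the linear restoration.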

Experiments and discussion
In order to verify the effectiveness of the proposed method, experiments on illumination estimation, reflection denoising, and image enhancement were carried out. MATLAB 2019b was adopted for programming, and a computer with an eight-core Intel 3.6 GHz CPU and 8 GB RAM, running Windows 10, was used.

Illumination estimation
In order to verify the effectiveness of WGIF in estimating the illumination component, we used GF, BF, GIF, and WGIF to estimate the illumination and reflection components of an image. The results are shown in Figs. 3 and 4. Figure 3 illustrates some examples of the illumination estimated by different image filters. GF results in blurring at the step edge, which would lead to halos. The edge preserving effects of BF and GIF are better than for GF, but many details unrelated to light are also preserved. In the WGIF result, the strong edges are well preserved while weaker textures are smoothed. Therefore, a more accurate illumination component can be obtained using WGIF than for the other filters. Figure 4 shows resulting illumination and reflection components obtained using GIF (Fig. 4(b)) and WGIF (Fig. 4(c)) on the intensity image S I in Fig. 4(a). The illumination component estimated by GIF preserves more texture detail, so there are few details in the reflection component. In comparison, the illumination component estimated by WGIF is clear at the step edge, while more details are preserved in the reflection component.

Reflection denoising
To verify whether reflection component denoising affects the enhancement results, the final images obtained with and without denoising were compared; experimental results are shown in Fig. 5, which gives the reflection component and the result image, without and with denoising in turn. It can be seen that there is significant noise in the reflection component, because the original image was taken under low illumination. After denoising the reflection component using WGIF, most noise in the ground and the entrance to the right-hand building is removed; the final result with denoising is obviously better than that without.

Data and methods
In this paper, we use low-light color images and grayscale images with uniform and non-uniform illumination showing different scenes as test data. To ensure the diversity of data sources, we randomly selected various public datasets of color images from NASA Research Center, LIME-Data (low-light image enhancement data) [54], DICM (digital camera data) [55], and MEF (multi-exposure image fusion data) [56]. For enhancement of grayscale images, Face1 and Face3 were used from the CMU-PIE dataset, and Face2 and Face4 from the YaleB dataset. In our work we are only interested in illumination, so we select Face2 and Face4 with the same pose (P00) and different lighting directions (0°-77°) from the YaleB face dataset.
We compare our method to various traditional and deep learning-based methods, including MSR [40], MSRCR [41], CLAHE [9], NPE [43], Liu [45], retinexNet [35], GladNet [34], and KinD [36] for color enhancement; for grayscale image enhancement, we use SSR [39], MSR [40], and CLAHE [9] as comparators. To ensure the fairness of the experiments, the relevant methods were downloaded from the authors' websites, and the default parameters in the authors' articles were used. The parameters for our method were set as follows: window radius r = 5, regularization factor ε = 0.1², and ζ = 0.065536 in WGIF. Figures 6 and 8 respectively show the results of the proposed method and the traditional methods for uniform and non-uniform low-illumination color images; Fig. 7 shows close-ups of the images in Fig. 6. Figure 9 compares results of the proposed method and deep learning-based methods. Due to space limitations, only representative results are shown.

Subjective evaluation
It can be seen that the brightness enhancement effect of the MSR algorithm is distinctive: MSR over-enhances the image, resulting in loss of detail in brighter areas, strong noise, and obvious haloing at step edges (the sky in the Tower image of Fig. 6, the clouds in the Girl image of Fig. 8). Compared to MSR, the MSRCR algorithm improves brightness and preserves color. However, the contrast of the enhanced image is low and some details are lost; moreover, MSRCR is subject to the most severe haloing (the chandelier in the Coffee House image of Fig. 6, the candle wick in the Candle image of Fig. 8). Although the CLAHE algorithm provides appropriate brightness enhancement and is close to the true color, it appears blurred with unnatural color in dark and detailed areas (the wooden floor in the Factory image of Fig. 6, the idol in the Madison image of Fig. 8). The images enhanced by NPE, Liu's method [45], and our proposed method are obviously better than the others in terms of visual effect, being more natural and realistic. However, some of the images enhanced by NPE are too vivid in color and the noise is amplified, resulting in loss of detailed information and severe haloing (the robot arm in the Robot image of Fig. 6, the wall and light in the Factory image of Fig. 6). The images enhanced by Liu's method have natural colors and effectively remove noise, but have some blurred details (the distant tall buildings in the Apartment image of Fig. 6, the stone gate in the Eiffel Tower image of Fig. 8).

Fig. 6 Comparison of the proposed method to traditional methods on low-illumination color images with uniform light. Leftmost column: original image, named at bottom left, with the close-up area (see Fig. 7) indicated by a red box.
Figure 9 shows that the retinexNet algorithm effectively improves the overall brightness of the image, but noise is obviously amplified after enhancement, resulting in serious color distortion and a poor visual effect. Compared to retinexNet, the brightness of GladNet and KinD algorithms is better, and the details of dark areas are clear, but the color of the enhanced image is unsaturated and the overall contrast is low.
Further considering the detail in Fig. 7, it can be seen that our proposed method adaptively improves local brightness and contrast, effectively enhancing dark areas while preserving detail in bright areas. In addition, the halo problem is avoided, noise is not significantly amplified, and the color of the enhanced image is vivid and natural. However, detail enhancement is poor when the light is extremely non-uniform and the dark area is too dark (the bookshelf in the Cadik image of Fig. 8).

Figures 10 and 11 show results of low-illumination grayscale image enhancement with uniform and non-uniform lighting, respectively; Fig. 12 is a close-up of the Face1 image in Fig. 10. It can be seen that the SSR algorithm improves the brightness of images under uniform illumination, but performs poorly, with amplified noise, under non-uniform illumination. The MSR algorithm over-enhances the image, decreasing contrast and losing some details; a noticeable halo appears at sharp edges (see Fig. 12), and noise is very obvious. The image enhanced by CLAHE has clearer facial details, but its tones are uneven, with halos at the edges. In comparison, the proposed method improves both brightness and contrast, resulting in a bright and clear image. For images under very non-uniform lighting, however, the enhancement effect in dark areas is still limited.

Figure 13 shows the 30th scanline of pixels of image Face1 processed by the aforementioned algorithms. It can be seen that the pixel intensity values in the original image are the lowest; they are improved significantly after processing by SSR, MSR, CLAHE, and the proposed method. Among them, the MSR algorithm improves the brightness values most, but its scanline is flattest, meaning its image contrast is the lowest. The SSR and CLAHE algorithms smooth the low-frequency region poorly: their denoising effect is weak.
The proposed method improves the intensity values significantly in the high frequency region, while in the low frequency region they are well smoothed. Thus the details in the dark areas are presented and noise removed. In addition, brightness is moderately improved by the proposed method.

Objective evaluation
In this section, objective criteria including information entropy, brightness, contrast, mean gradient, edge intensity, and Std×Gray [53] are used to objectively evaluate the different enhancement algorithms. To calculate Std×Gray, each image is divided into several non-overlapping blocks of the same size, the average standard deviation and average gray level of all blocks are calculated, and Std×Gray is the product of these two averages. The greater its value, the higher the quality of the image.
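The Std×Gray computation described above can be sketched directly (the function name and block size default are illustrative; the paper does not state its block size):

```python
import numpy as np

def std_x_gray(img, block=8):
    """Std x Gray quality score: split the image into non-overlapping blocks,
    average the per-block standard deviation and the per-block mean gray
    level, and return the product of the two averages (higher is better)."""
    h, w = img.shape
    stds, grays = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = img[y:y + block, x:x + block]
            stds.append(patch.std())       # local contrast term
            grays.append(patch.mean())     # local brightness term
    return float(np.mean(stds) * np.mean(grays))

flat = np.full((16, 16), 0.5)                          # no contrast: std term is zero
textured = np.tile([[0.2, 0.8], [0.8, 0.2]], (8, 8))   # same mean gray, high contrast
```

The product form rewards images that are simultaneously bright (high mean gray) and contrasty (high block standard deviation), which is why a flat gray image scores zero regardless of its brightness.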
In order to evaluate color images, the indicators ΔB [53], ΔC [53], and ΔH [57] are used to measure the rates of change of brightness, contrast, and hue, respectively. The objective evaluation results for color and grayscale image enhancement in a low-illumination environment are shown in Tables 1-3; bold fonts represent the best result in each group of experiments. ΔH for CLAHE is indicated by "\", as CLAHE processes only the intensity channel in HSI and keeps the hue change ratio close to 0. It can be seen that, compared to the other algorithms, the information entropy, ΔC, and Std×Gray of the proposed method are obviously superior, while its ΔB, ΔH, average gradient, and edge intensity for color images are optimal or close to optimal on most images. The proposed method not only retains rich detail but also has good color fidelity, while significantly improving brightness and contrast. The color images processed by MSR, MSRCR, and retinexNet have larger ΔB, average gradient, and edge intensity, and their brightness is significantly improved to show more details in dark areas, but their ΔH is generally too large and color bias is more serious. The color fidelity of the NPE and Liu [45] algorithms is relatively good, but their detail rendition and brightness and contrast improvement are slightly inferior to those of the proposed method and MSRCR. The brightness of the grayscale images processed by the proposed method is lower than that of MSR, but the visual results show that simply increasing brightness can lead to over-enhancement rather than better results. Table 4 shows the average time for processing low-illumination images in the LTSM dataset using the above algorithms. The average time per image for the proposed method is longer than for MSR and CLAHE, but significantly shorter than for MSRCR, NPE, Liu's method, and the deep learning-based methods, although running the latter on a GPU can significantly reduce their processing time.
Combining the subjective results with the objective analysis, the overall enhancement effects of MSR and CLAHE are acceptable, and NPE and Liu's method give good results, but the proposed method not only gives better results but also takes less time to process each image.
The efficiency of the color model conversion algorithm is compared on the LTSM dataset in Fig. 14. Obviously, the efficiency of the linear color restoration algorithm is higher than that of the non-linear color restoration algorithm. On the whole, the proposed method in this paper improves the brightness and contrast of images appropriately, and the enhanced image has excellent hue retention, natural and vivid color, and can show more details, while being highly efficient.

Conclusions
A novel low and non-uniform illumination image enhancement method based on WGIF is presented in this paper.
WGIF is adopted to estimate illumination and remove noise, effectively overcoming problems such as halo defects, detail loss, and noise amplification.
To avoid color distortion, images are processed in the intensity channel of the HSI color model, and a linear color restoration algorithm is adopted, which not only ensures the color is undistorted but also helps to achieve higher efficiency. To prevent the loss of gray information, the proposed method does not eliminate the illumination component directly in the logarithmic domain. Instead, it adaptively improves the brightness according to the illumination component, which effectively avoids over-enhancement of bright areas. Our experimental results show, via subjective and objective evaluation, that the proposed method can efficiently and effectively enhance both color and grayscale images with low illumination. Nevertheless, if the illumination is very uneven, the enhancement effect in local dark regions is limited; this remains a topic for future research.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.