
1 Introduction

Scene appearance atmospheres, such as the lighting condition, dominant scene color, and haze, are important factors in determining the impression of a scene image. By editing them, users can obtain a more realistic, more dramatic, or otherwise more desirable appearance for target images. Image and video editing methods that address such scene appearance atmospheres constitute an interesting topic in image processing, computer vision, and computer graphics [1, 2]. For example, image editing methods for lighting conditions have been studied actively. Zhang et al. decomposed the illumination effects of multiple light sources from a single image [3]; this technique can be applied to editing lighting conditions. Shu et al. transferred the lighting appearance of human faces between source and target images [4].

In particular, editing and transferring colors in images is effective for changing scene appearance atmospheres [5]. Reinhard et al. proposed a color transfer method between source and target images [6]; they used the averages and standard deviations of the input images in the lαβ color space to transfer colors. Their approach can be regarded as a statistics-based method that converts a potentially complex three-dimensional (3D) color space problem into three separate one-dimensional problems. Pitie et al. rotated the 3D color distributions of the source and target images using random 3D rotation matrices and projected them onto the axes of the new coordinate system [7]. Pouli et al. reshaped and matched the two histograms, applying scale-space approaches for partial histogram matching [8]. Further, several works investigated the performance of different color spaces for color transfer [5, 9].

Similar to color information, haze information is also significant in conveying scene appearance atmospheres. For hazy images, haze removal (dehazing) methods have been proposed [10]; in particular, single-image dehazing methods are easier to apply in practice. Tan proposed a single-image dehazing method that enhances contrast [11]. Fattal presented an image-dehazing method based on a haze-imaging model [12]. He et al. restored the visibility of haze-affected images based on the same haze-imaging model using a dark channel prior [13]. However, no haze transfer methods are available for editing and changing scene appearance atmospheres. By editing the haze in an image, it is possible to change the overall impression evoked by a scene, simulate different lighting environments, or achieve different visual effects.

We herein propose a method for transferring haze information between source and target images. To this end, we apply the dark channel prior [13] and Reinhard's color transfer [6]. The dark channel prior is used to decompose an input image into a scene radiance image, a global atmospheric color (haze color) vector, and a transmission map. Subsequently, the color transfer method is applied to the decomposed images for our haze transfer. Finally, the haze-transferred result is reconstructed from the decomposed images.

This paper is organized as follows: Sect. 2 describes the proposed haze transfer method based on the dark channel prior and color transfer; Sect. 3 presents the experimental results and discussions; and Sect. 4 concludes the paper and discusses future research.

2 Proposed Method

Figure 1 shows the flow chart of our proposed method. First, based on the dark channel prior, the haze components are separated from the input source and target images independently. Next, the haze information is transferred between the images using color transfer. Finally, a haze-transferred image is reconstructed by combining the decomposed images.

Fig. 1. Flow chart of the proposed method.

2.1 Separation of Haze Components

In general, the haze imaging model is given by the following equation:

$$ \mathbf{I}(\mathbf{x}) = \mathbf{J}(\mathbf{x})\, t(\mathbf{x}) + \mathbf{A}\left(1 - t(\mathbf{x})\right), $$
(1)

where \( \mathbf{x} \) denotes the pixel coordinates in the camera image \( \mathbf{I} \), \( \mathbf{J} \) is the scene radiance, \( \mathbf{A} \) is the global atmospheric color, and \( t \) is the transmission of the scene radiance. When the atmosphere is homogeneous, the transmission \( t \) is defined by

$$ t(\mathbf{x}) = \exp\left( -\beta\, d(\mathbf{x}) \right), $$
(2)

where \( \beta \) is the scattering coefficient of the atmosphere, and \( d \) is the distance between the objects and the camera. As Eq. (2) shows, the haze is assumed to be distributed uniformly in the scene and to depend only on the distance.

He et al. observed that, in patches of haze-free images captured under clear daylight, at least one of the RGB values is extremely low (almost zero) [13]. This statistical observation, called the dark channel prior, is expressed as follows:

$$ J^{dark}(\mathbf{x}) = \min_{c \in \{r,g,b\}} \left( \min_{\mathbf{y} \in \Omega(\mathbf{x})} J^{c}(\mathbf{y}) \right) \approx 0, $$
(3)

where \( \Omega(\mathbf{x}) \) is a patch region centered at pixel \( \mathbf{x} \), and \( c \) indexes the RGB channels. Subsequently, based on Eqs. (1) and (3), the transmission is estimated as follows:

$$ \tilde{t}(\mathbf{x}) = 1 - \omega \min_{c \in \{r,g,b\}} \left( \min_{\mathbf{y} \in \Omega(\mathbf{x})} \frac{I^{c}(\mathbf{y})}{A^{c}} \right), $$
(4)

where \( \omega \) is a parameter that retains a small amount of haze for distant objects.
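As an illustration, the dark channel of Eq. (3) and the coarse transmission of Eq. (4) can both be computed with a minimum filter. The following is a minimal NumPy/SciPy sketch, not the authors' implementation; the patch size of 15 and \( \omega = 0.95 \) follow He et al. [13], and the atmospheric color A is assumed to be estimated as described next.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Eq. (3): per-pixel minimum over RGB, then a minimum filter over a patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_transmission(img, A, patch=15, omega=0.95):
    """Eq. (4): coarse transmission from the dark channel of I / A."""
    normalized = img / A  # broadcast the 3-vector A over the H x W x 3 image
    return 1.0 - omega * dark_channel(normalized, patch)
```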

To estimate the atmospheric color \( \mathbf{A} \), a pixel where \( t(\mathbf{x}) = 0 \) is required. According to Eq. (2), \( t(\mathbf{x}) \) approaches 0 as the distance \( d(\mathbf{x}) \) approaches infinity. Assuming the distance to the sky region to be infinite, He et al. employed the brightest pixel in the input image as the sky region.
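A minimal sketch of this rule follows; it simply takes the brightest pixel, per the description above (He et al. [13] refine this by searching among the brightest dark-channel pixels).

```python
def estimate_atmospheric_light(img):
    """Pick the brightest pixel as the sky region, where t(x) is assumed to be 0."""
    flat = img.reshape(-1, 3)
    return flat[flat.sum(axis=1).argmax()]  # 3-vector A
```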

The estimated transmission map \( \tilde{t} \) generally contains block artifacts owing to the patch-based processing. After refining the noisy transmission map \( \tilde{t} \) by soft matting, the scene radiance \( \mathbf{J} \) is estimated by

$$ \mathbf{J}(\mathbf{x}) = \frac{\mathbf{I}(\mathbf{x}) - \mathbf{A}}{\max\left( t(\mathbf{x}),\, t_{0} \right)} + \mathbf{A}, $$
(5)

where \( t_{0} \) is a lower bound on the transmission that suppresses noise.
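A sketch of Eq. (5); the lower bound \( t_{0} = 0.1 \) is the value used by He et al. [13] and is an assumption here.

```python
def recover_radiance(img, A, t, t0=0.1):
    """Eq. (5): invert the haze imaging model with a lower bound on t."""
    t_clamped = np.maximum(t, t0)[..., None]  # add a channel axis for broadcasting
    return (img - A) / t_clamped + A
```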

The components of the haze image are thus separated based on the dark channel prior. Further, we rewrite Eq. (1) as Eq. (6):

$$ \mathbf{I}(\mathbf{x}) = \mathbf{J}(\mathbf{x})\, t(\mathbf{x}) + \mathbf{A} - \mathbf{H}(\mathbf{x}), $$
(6)

where \( \mathbf{H}(\mathbf{x}) = \mathbf{A}\, t(\mathbf{x}) \) represents the haze information.
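Given the estimates above, the haze component of Eq. (6) is a per-pixel scaling of the atmospheric color, for example:

```python
def haze_component(A, t):
    """Eq. (6): H(x) = A * t(x), a 3-channel haze image."""
    return A[None, None, :] * t[..., None]
```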

2.2 Transferring Haze Components

Next, we transfer the estimated atmospheric color \( \mathbf{A} \) and transmission \( t \) of the source image to the target image using the color transfer method [6]. Because the atmospheric color \( \mathbf{A} \) is a single three-channel vector [13] and the transmission \( t \) is a monochrome image, neither can be fed into the color transfer method directly. Therefore, in this study, the color transfer is applied to \( \mathbf{H}(\mathbf{x}) \).

In the color transfer method of Reinhard et al. [6], the pixel values of the input images are converted to a decorrelated color space, the \( l\alpha\beta \) color space. We obtain the standard deviations and averages of each of the \( l \), \( \alpha \), and \( \beta \) channels and then calculate the color-transferred values \( l' \), \( \alpha' \), and \( \beta' \). Finally, an image in which the color of the source image is transferred to the target image can be generated.

Here, we denote the RGB values of a decomposed image \( \mathbf{H}(\mathbf{x}) \) as \( H_{R} H_{G} H_{B} \). In this study, we chose the conversion matrix of Soo et al. for the color transfer; it converts the input RGB values to the LMS color space [14]. First, the \( H_{R} H_{G} H_{B} \) values are converted to the XYZ space, and the XYZ space is then converted to the LMS space. These two conversion matrices can be combined into a single matrix between the \( H_{R} H_{G} H_{B} \) haze space and the LMS cone space as follows:

$$ \begin{bmatrix} L \\ M \\ S \end{bmatrix} = \begin{bmatrix} 0.3811 & 0.5783 & 0.0402 \\ 0.1967 & 0.7244 & 0.0782 \\ 0.0241 & 0.1228 & 0.8444 \end{bmatrix} \begin{bmatrix} H_{R} \\ H_{G} \\ H_{B} \end{bmatrix}. $$
(7)

Converting the data to logarithmic space eliminates much of the skew present in this \( H_{R} H_{G} H_{B} \) space [15]:

$$ \mathbf{L} = \log L, \qquad \mathbf{M} = \log M, \qquad \mathbf{S} = \log S. $$
(8)

Reinhard et al. adopted the \( l\alpha\beta \) color space because its channels are maximally decorrelated, which avoids unwanted cross-channel effects and allows the three channels to be treated separately. Ruderman et al. suggested the following transform [15]:

$$ \begin{bmatrix} l \\ \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{3}} & 0 & 0 \\ 0 & \frac{1}{\sqrt{6}} & 0 \\ 0 & 0 & \frac{1}{\sqrt{2}} \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & -2 \\ 1 & -1 & 0 \end{bmatrix} \begin{bmatrix} \mathbf{L} \\ \mathbf{M} \\ \mathbf{S} \end{bmatrix}. $$
(9)
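Equations (7)-(9) amount to two matrix multiplications with a logarithm in between. A minimal sketch follows; the small epsilon guarding log(0) for haze-free pixels is an assumption, not part of the original formulation.

```python
RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1228, 0.8444]])  # Eq. (7)

LMS2LAB = np.array([[1.0,  1.0,  1.0],
                    [1.0,  1.0, -2.0],
                    [1.0, -1.0,  0.0]]) / np.sqrt([3.0, 6.0, 2.0])[:, None]  # Eq. (9)

def rgb_to_lab(H, eps=1e-6):
    """Convert a haze component H (H x W x 3, RGB) to l-alpha-beta."""
    lms = H.reshape(-1, 3) @ RGB2LMS.T
    log_lms = np.log(np.maximum(lms, eps))  # Eq. (8)
    return (log_lms @ LMS2LAB.T).reshape(H.shape)
```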

After converting from the RGB color space to this decorrelated color space, the haze components are transferred using Eqs. (10) and (11):

$$ l^{*} = l - \overline{l}, \qquad \alpha^{*} = \alpha - \overline{\alpha}, \qquad \beta^{*} = \beta - \overline{\beta}, $$
(10)

and

$$ l' = \frac{\sigma^{l}_{\text{Source}}}{\sigma^{l}_{\text{Target}}}\, l^{*} + \overline{l}, \qquad \alpha' = \frac{\sigma^{\alpha}_{\text{Source}}}{\sigma^{\alpha}_{\text{Target}}}\, \alpha^{*} + \overline{\alpha}, \qquad \beta' = \frac{\sigma^{\beta}_{\text{Source}}}{\sigma^{\beta}_{\text{Target}}}\, \beta^{*} + \overline{\beta}, $$
(11)

where \( \overline{l} \), \( \overline{\alpha} \), and \( \overline{\beta} \) are the channel averages, and \( \sigma \) represents the standard deviation of the indicated channel for each image. Following Reinhard et al. [6], the means subtracted in Eq. (10) are those of the image being modified (the target), whereas the means added back in Eq. (11) are those of the source. Subsequently, the obtained \( l' \), \( \alpha' \), and \( \beta' \) are converted back to the original color space as follows:

$$ \begin{bmatrix} H'_{R} \\ H'_{G} \\ H'_{B} \end{bmatrix} = \begin{bmatrix} 0.5773 & 0.2621 & 11.3918 \\ 0.5773 & 0.6071 & -5.0905 \\ 0.5833 & -1.0628 & 0.4152 \end{bmatrix} \begin{bmatrix} l' \\ \alpha' \\ \beta' \end{bmatrix}. $$
(12)

Through this conversion, \( \mathbf{H}' \) is obtained, i.e., the haze information of the source image transferred onto the target image.
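The transfer of Eqs. (10)-(12) reduces to per-channel statistics matching followed by a matrix multiplication. A minimal sketch under two assumptions: the means added in Eq. (11) are taken from the source image, as in Reinhard et al. [6], and the combined matrix of Eq. (12) is applied as printed (Reinhard's original inverse additionally exponentiates the log-LMS values before returning to RGB).

```python
LAB2RGB = np.array([[0.5773,  0.2621, 11.3918],
                    [0.5773,  0.6071, -5.0905],
                    [0.5833, -1.0628,  0.4152]])  # Eq. (12), as given in the text

def transfer_statistics(lab_target, lab_source):
    """Eqs. (10)-(11): per-channel mean/std matching in l-alpha-beta space."""
    out = np.empty_like(lab_target)
    for c in range(3):
        t, s = lab_target[..., c], lab_source[..., c]
        out[..., c] = (t - t.mean()) * (s.std() / t.std()) + s.mean()
    return out

def lab_to_rgb(lab):
    """Eq. (12): map the transferred l'-alpha'-beta' back to the haze space."""
    return (lab.reshape(-1, 3) @ LAB2RGB.T).reshape(lab.shape)
```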

2.3 Reconstructing Haze-Transferred Image

Finally, the haze-transferred image of the proposed method is computed by Eqs. (13)-(16), where the function \( T \) in Eq. (14) denotes the haze component transfer described in Sect. 2.2:

$$ \mathbf{I}_{\text{haze\_trans}}(\mathbf{x}) = \mathbf{J}_{\text{Target}}(\mathbf{x})\, t'(\mathbf{x}) + \mathbf{A}' - \mathbf{H}'(\mathbf{x}), $$
(13)
$$ \mathbf{H}'(\mathbf{x}) = T\left( \mathbf{H}_{\text{Target}},\, \mathbf{H}_{\text{Source}} \right), $$
(14)
$$ \mathbf{A}' = \mathbf{A}_{\text{Source}}, $$
(15)
$$ t'(\mathbf{x}) = \frac{\sum_{c=1}^{3} H'^{\,c}(\mathbf{x})}{\sum_{c=1}^{3} A'^{\,c}}, $$
(16)

where \( \mathbf{H}' \) is the haze-transferred component containing the color information of the three channels. For the atmospheric light \( \mathbf{A}' \) of the output image, we used the atmospheric light of the source image, as shown in Eq. (15). The transmission \( t' \) for the reconstruction was obtained by collapsing \( \mathbf{H}' \) to a single channel using Eq. (16). Finally, the haze-transferred image was reconstructed using Eq. (13).
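For completeness, a minimal sketch of Eqs. (13)-(16), composed from the helper functions sketched in Sects. 2.1 and 2.2 (function and variable names are illustrative, not the authors' implementation):

```python
def transfer_haze(H_target, H_source):
    """Function T in Eq. (14): the Sect. 2.2 pipeline applied to haze components."""
    return lab_to_rgb(transfer_statistics(rgb_to_lab(H_target),
                                          rgb_to_lab(H_source)))

def reconstruct(J_target, H_target, H_source, A_source):
    """Eqs. (13)-(16): rebuild the haze-transferred output image."""
    H_prime = transfer_haze(H_target, H_source)    # Eq. (14)
    A_prime = A_source                             # Eq. (15)
    t_prime = H_prime.sum(axis=2) / A_prime.sum()  # Eq. (16)
    return J_target * t_prime[..., None] + A_prime - H_prime  # Eq. (13)
```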

3 Results and Discussions

3.1 Results of the Proposed Method

In this study, we proposed a method of transferring haze information between images based on the dark channel prior. Figure 2 shows the input source and target images, the estimated scene radiance, and the transmission maps of the inputs. These results indicate that the transmission can be estimated and the haze removed by the dark channel prior. Figure 2(g) shows the result of the proposed haze transfer, and Fig. 2(h) shows the transferred transmission map. We confirmed that the haze concentration of the source image is transferred to the target image. Further, the reconstructed results show that the brightness and appearance of the haze in the source image are transferred properly to the target image, and that the appearance atmosphere of the haze is reproduced closely.

Fig. 2. Our haze-transferred result. Input images: (a) target image, (b) source image; estimated scene radiance: (c) target image, (d) source image; transmission maps of the inputs: (e) target image, (f) source image; output images: (g) proposed haze-transferred image, (h) transmission map.

Next, we adopted the CHIC (Color Hazy Image for Comparison) image database [16] for evaluating our haze transfer method. This database contains two scenes captured under controlled environments and nine haze images with different haze density levels. Figure 3 shows the original image; in this database, the higher the level value, the thinner the haze. Here, we adopted the original image as the target image and the haze images as source images, and we compared the haze appearance between the source haze images and the haze-transferred images. Figure 4 shows the results of our proposed method. The haze of the source images is properly transferred to the target image according to the haze level.

Fig. 3. Original image (target image).

Fig. 4. Haze-transferred results using source images with different haze levels. Level 3: (a) source image, (b) haze-transferred image from Fig. 3; Level 5: (c) source image, (d) haze-transferred image from Fig. 3; Level 7: (e) source image, (f) haze-transferred image from Fig. 3; Level 9: (g) source image, (h) haze-transferred image from Fig. 3.

Our proposed method can also transfer depth information. As shown in Eq. (2), the estimated transmission encodes depth information. The haze in the target image is distributed evenly across the scene, whereas the haze distribution in the source image differs between the far and near sides of the image, as shown in Fig. 2. Owing to this depth information in the transmission map, the haze distribution of the output image resembles that of the source image.

3.2 Differences from Color Transfer

We compared the proposed haze transfer method with the color transfer method. The source and target images are shown in Fig. 2, and the result of the color transfer method is shown in Fig. 5. Because the color transfer method considers only the color information of the source and target images, the redness of the ground in the source image is transferred to the entire output image. The color transfer method therefore cannot transfer the appearance atmosphere of haze suitably.

Fig. 5. Color-transferred image.

3.3 Differences in Image Scenes

We changed the source image for comparison. Figure 6 shows the input images, transmission maps, and results. The scenes of the two input images are completely different: the target image is a hazy forest scene, whereas the source image is a hazy building scene. The resulting images show that the appearance atmosphere of the haze is transferred properly. Because our proposed method uses only the standard deviations and averages of the haze information, it does not depend on the specific content of the target and source scenes.

Fig. 6. Results using a forest scene and a distant view of a city. Input images: (a) target image, (b) source image; transmission maps: (c) target image, (d) source image; output images: (e) proposed haze-transferred image, (f) transmission map of (e).

3.4 Non-haze Scene

We also changed the source image to a scene without haze. Figure 7 shows the input images, transmission maps, and results. When our proposed method was applied with a haze-free source image, the results were similar to those of the haze-removal method based on the dark channel prior. Therefore, our proposed method can be applied not only to haze transfer but also to haze reduction.

Fig. 7. Results using a hazeless scene as the source image. Input images: (a) target image, (b) source image (no haze); transmission maps: (c) target image, (d) source image; result images: (e) haze-transferred image, (f) scene radiance, (g) transmission map.

4 Conclusions

In this study, we proposed a method to transfer haze information between two images by decomposing the input images into the scene radiance, atmospheric light, and transmission based on the dark channel prior and by adapting the color transfer method. The experimental results confirmed that the haze information of the source image can be transferred to the target image and that the transmission maps are generated successfully.

As future work, we would like to refine the haze transfer model in Eq. (13). As shown in Eqs. (15) and (16), the atmospheric light and the transmission map of the output image are computed in a simple manner, and the calculation of these components can be improved. In addition, we will test other color transfer methods: we used Reinhard's method here, but numerous alternatives are available that could further improve our haze transfer method.