1 Introduction

Adverse weather conditions create visibility problems for both people and the sensors that power automated systems [25, 37, 48]. While sensors and the downstream vision algorithms are constantly improving, their performance is mainly benchmarked on clear-weather images. Many outdoor applications, however, can hardly escape bad weather. One typical example of adverse weather is fog, which degrades the visibility of a scene significantly [36, 52]. The denser the fog, the more severe this problem becomes.

During the past years, the community has made tremendous progress on image dehazing (defogging) to increase the visibility of foggy images [24, 40, 56]. The last few years have also witnessed a leap in object recognition. The semantic understanding of foggy scenes, however, has received little attention, despite its importance in outdoor applications. For example, an automated car still needs to detect other traffic agents and traffic control devices in the presence of fog. This work investigates the problem of semantic foggy scene understanding (SFSU).

The current “standard” policy for addressing semantic scene understanding is to train a neural network with many annotations of real images [11, 47]. Applying the same protocol to diverse weather conditions seems to be problematic, as the manual annotation part is hard to scale. The difficulty of data collection and annotation increases even more for adverse weather conditions. To overcome this problem, two streams of research have gained extensive attention: (1) transfer learning [9] and (2) learning with synthetic data [46, 48].

Our method occupies the middle ground between these two streams and aims to combine their strengths. In particular, it is developed to learn from (1) a dataset with high-quality synthetic fog and corresponding human annotations, and (2) a large dataset of images with real fog. The goal of our method is to improve the performance of SFSU without requiring extra human annotations.

Fig. 1. The illustrative pipeline of our approach for semantic scene understanding under dense fog

To this aim, this work proposes a novel fog simulator to add high-quality synthetic fog to real images that contain clear-weather outdoor scenes, and then leverages these partially synthetic foggy images for SFSU. The new fog simulator builds on the recent work of [48] by introducing a semantic-aware filter to exploit the structures of object instances. We show that learning with our synthetic data improves the performance for SFSU. Furthermore, we present a novel method, dubbed Curriculum Model Adaptation (CMAda), which gradually adapts a segmentation model from light synthetic fog to dense real fog in multiple steps, using both synthetic and real foggy data. CMAda improves upon direct adaptation significantly on two datasets with dense real fog.

The main contributions of the paper are: (1) a new automatic and scalable pipeline to generate high-quality synthetic fog, with which new datasets are generated; (2) a novel curriculum model adaptation method to learn from both synthetic and (unlabeled) real foggy images; (3) a new real foggy dataset with 3808 images, including 16 finely annotated images with dense fog. A visual overview of our approach is presented in Fig. 1.

2 Related Work

Our work is relevant to image defogging (dehazing), foggy scene understanding, and domain adaptation.

2.1 Image Defogging/Dehazing

Fog fades the color of observed objects and reduces their contrast. Extensive research has been conducted on image defogging (dehazing) to increase the visibility of foggy scenes [5, 15, 16, 24, 36, 40, 52]. Certain works focus particularly on enhancing foggy road scenes [38, 54]. Recent approaches also rely on trainable architectures [53], which have evolved to end-to-end models [34, 59]. For a comprehensive overview of dehazing algorithms, we point the reader to [32, 57]. Our work is complementary and focuses on semantic foggy scene understanding.

2.2 Foggy Scene Understanding

Typical examples of scene understanding tasks in this line include road and lane detection [3], traffic light detection [28], car and pedestrian detection [19], and dense, pixel-level segmentation of road scenes into most of the relevant semantic classes [7, 11]. While deep recognition networks have been developed [20, 33, 45, 58, 60] and large-scale datasets have been presented [11, 19], that research has mainly focused on clear weather. There is also a large body of work on fog detection [6, 17, 42, 51]. Classification of scenes into foggy and fog-free has been tackled as well [43]. In addition, visibility estimation has been extensively studied for both daytime [22, 35, 55] and nighttime [18], in the context of assisted and autonomous driving. The closest of these works to ours is [55], in which synthetic fog is generated and foggy images are segmented into free-space area and vertical objects. Our work differs in that our semantic understanding task is more complex and we take a different route, learning jointly from synthetic fog and real fog.

2.3 Domain Adaptation

Our work bears resemblance to transfer learning and model adaptation. Model adaptation across weather conditions to semantically segment simple road scenes is studied in [31]. More recently, domain-adversarial approaches were proposed to adapt semantic segmentation models both at pixel level and feature level from simulated to real environments [27, 49]. Our work closes the domain gap by generating synthetic fog and by adopting a policy of gradual adaptation. Combining our method with the aforementioned transfer learning methods is a promising direction. The concurrent work in [13] on adaptation of semantic models from daytime to nighttime solely with real data is closely related to ours.

Fig. 2. The pipeline of our fog simulation using semantics

3 Fog Simulation on Real Scenes Using Semantics

3.1 Motivation

Our motivation for fog simulation on real scenes using semantic input stems from the pipeline that was used in [48] to generate the Foggy Cityscapes dataset, a pipeline that primarily focuses on depth denoising and completion. This pipeline is denoted in Fig. 2 with thin gray arrows and consists of three main steps: depth outlier detection, robust depth plane fitting at the level of SLIC superpixels [2] using RANSAC, and postprocessing of the completed depth map with guided image filtering [23]. Our approach adopts the general configuration of this pipeline, but aims to improve its postprocessing step by leveraging the semantic annotation of the scene as an additional reference for filtering, which is indicated in Fig. 2 with the thick blue arrow.

The guided filtering step in [48] uses the clear-weather color image as guidance to filter depth. However, as previous works on image filtering [50] have shown, guided filtering and similar joint filtering methods such as cross-bilateral filtering [14, 44] transfer the structure that is present in the guidance/reference image to the output target image. Thus, any structure that is specific to the reference image but irrelevant for the target image is also transferred to the latter erroneously.

Whereas previous approaches such as mutual-structure filtering [50] attempt to estimate the common structure between reference and target images, we identify this common structure with the structure that is present in the ground-truth semantic labeling of the image. In other words, we assume that edges which are shared by the color image and the depth map generally coincide with semantic edges, i.e. locations in the image where the semantic classes of adjacent pixels are different. Under this assumption, the semantic labeling can be used directly as the reference image in a classical cross-bilateral filtering setting, since it contains exactly the mutual structure between the color image and the depth map. In practice, however, the boundaries drawn by humans in the semantic annotation are not pixel-accurate, and using the color image as additional reference helps to capture the precise shape of edges better. As a result, we formulate the postprocessing step of the completed depth map in our fog simulation as a dual-reference cross-bilateral filter, with color and semantic reference.

3.2 Dual-reference Cross-Bilateral Filter Using Color and Semantics

Let us denote the RGB image of the clear-weather scene by \(\mathbf {R}\) and its CIELAB counterpart by \(\mathbf {J}\). We consider CIELAB, as it has been designed to increase perceptual uniformity and gives better results for bilateral filtering of color images [41]. The input image to be filtered in the postprocessing step of our pipeline constitutes a scalar-valued transmittance map \(\hat{t}\). We provide more details on this transmittance map in Sect. 3.3. Last, we are given a labeling function

$$\begin{aligned} h: \mathcal {P} \rightarrow \{1,\,\dots ,\,C\} \end{aligned}$$
(1)

which maps pixels to semantic labels, where \(\mathcal {P}\) is the discrete domain of pixel positions and C is the total number of semantic classes in the scene. We define our dual-reference cross-bilateral filter with color and semantic reference as

$$\begin{aligned} t(\mathbf {p}) = \frac{\displaystyle \sum _{\mathbf {q} \in \mathcal {N}(\mathbf {p})} G_{\sigma _s}(\left\| \mathbf {q}-\mathbf {p}\right\| ) \left[ \delta (h(\mathbf {q})-h(\mathbf {p})) + \mu G_{\sigma _c}(\left\| \mathbf {J}(\mathbf {q})-\mathbf {J}(\mathbf {p})\right\| )\right] \hat{t}(\mathbf {q})}{\displaystyle \sum _{\mathbf {q} \in \mathcal {N}(\mathbf {p})} G_{\sigma _s}(\left\| \mathbf {q}-\mathbf {p}\right\| ) \left[ \delta (h(\mathbf {q})-h(\mathbf {p})) + \mu G_{\sigma _c}(\left\| \mathbf {J}(\mathbf {q})-\mathbf {J}(\mathbf {p})\right\| )\right] }, \end{aligned}$$
(2)

where \(\mathbf {p}\) and \(\mathbf {q}\) denote pixel positions, \(\mathcal {N}(\mathbf {p})\) is the neighborhood of \(\mathbf {p}\), \(\delta \) denotes the Kronecker delta, \(G_{\sigma _s}\) is the spatial Gaussian kernel, \(G_{\sigma _c}\) is the color-domain Gaussian kernel and \(\mu \) is a positive constant. The novel dual reference is demonstrated in the second factor of the filter weights, which constitutes a sum of the terms \(\delta (h(\mathbf {q})-h(\mathbf {p}))\) for semantic reference and \(G_{\sigma _c}(\left\| \mathbf {J}(\mathbf {q})-\mathbf {J}(\mathbf {p})\right\| )\) for color reference, weighted by \(\mu \). The formulation of the semantic term implies that only pixels \(\mathbf {q}\) with the same semantic label as the examined pixel \(\mathbf {p}\) contribute to the output at \(\mathbf {p}\) through this term, which prevents blurring of semantic edges. At the same time, the color term helps to better preserve true depth edges that do not coincide with any semantic boundary but are present in \(\mathbf {J}\).

The formulation of (2) enables an efficient implementation of our filter based on the bilateral grid [41]. More specifically, we construct two separate bilateral grids that correspond to the semantic and color domains and operate separately on each grid to perform filtering, combining the results in the end. In this way, we handle a 3D bilateral grid for the semantic domain and a 5D grid for the color domain instead of a single joint 6D grid that would dramatically increase computation time [41].

In our experiments, we set \(\mu = 5\), \(\sigma _s = 20\), and \(\sigma _c = 10\).
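
To make (2) concrete, below is a minimal brute-force NumPy sketch of the dual-reference cross-bilateral filter; array names and the window truncation are illustrative assumptions, and it omits the bilateral-grid acceleration described above.

```python
import numpy as np

def dual_reference_cbf(t_hat, J_lab, labels, sigma_s=20.0, sigma_c=10.0, mu=5.0, radius=None):
    """Brute-force evaluation of the dual-reference cross-bilateral filter of Eq. (2).

    t_hat  : (H, W) initial transmittance map to be filtered
    J_lab  : (H, W, 3) clear-weather image in CIELAB (reference J)
    labels : (H, W) integer semantic/instance labeling h
    """
    if radius is None:
        radius = int(2 * sigma_s)  # truncate the spatial Gaussian support
    H, W = t_hat.shape
    t_out = np.zeros_like(t_hat, dtype=np.float64)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # spatial kernel G_{sigma_s}(||q - p||)
            spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            # semantic reference: Kronecker delta on equal labels
            semantic = (labels[y0:y1, x0:x1] == labels[y, x]).astype(np.float64)
            # color reference: G_{sigma_c}(||J(q) - J(p)||)
            diff = J_lab[y0:y1, x0:x1] - J_lab[y, x]
            color = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma_c ** 2))
            weights = spatial * (semantic + mu * color)
            t_out[y, x] = np.sum(weights * t_hat[y0:y1, x0:x1]) / np.sum(weights)
    return t_out
```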

3.3 Remaining Steps

Here we outline the remaining parts of our fog simulation pipeline of Fig. 2. For more details, we refer the reader to [48], with which most parts of the pipeline are shared. The standard optical model for fog that forms the basis of our fog simulation was introduced in [29] and is expressed as

$$\begin{aligned} \mathbf {I}(\mathbf {x}) = \mathbf {R}(\mathbf {x})t(\mathbf {x}) + \mathbf {L}(1 - t(\mathbf {x})), \end{aligned}$$
(3)

where \(\mathbf {I}(\mathbf {x})\) is the observed foggy image at pixel \(\mathbf {x}\), \(\mathbf {R}(\mathbf {x})\) is the clear scene radiance and \(\mathbf {L}\) is the atmospheric light, which is assumed to be globally constant. The transmittance \(t(\mathbf {x})\) determines the amount of scene radiance that reaches the camera. For homogeneous fog, transmittance depends on the distance \(\ell (\mathbf {x})\) of the scene from the camera through

$$\begin{aligned} t(\mathbf {x}) = \exp \left( -\beta \ell (\mathbf {x})\right) . \end{aligned}$$
(4)

The attenuation coefficient \(\beta \) controls the density of the fog: larger values of \(\beta \) mean denser fog. By definition, fog decreases the meteorological optical range (MOR), also known as visibility, to less than 1 km [1]. For homogeneous fog \(\text {MOR}=2.996/\beta \), which implies

$$\begin{aligned} \beta \ge 2.996\times {}10^{-3}{\text { m}}^{-1}, \end{aligned}$$
(5)

where the lower bound corresponds to the lightest fog configuration. In our fog simulation, the value that is used for \(\beta \) always obeys (5).

The required inputs for fog simulation with (3) are the image \(\mathbf {R}\) of the original clear scene, atmospheric light \(\mathbf {L}\) and a complete transmittance map t. We use the same approach for atmospheric light estimation as that in [48]. Moreover, we adopt the stereoscopic inpainting method of [48] for depth denoising and completion to obtain an initial complete transmittance map \(\hat{t}\) from a noisy and incomplete input disparity map D, using the recommended parameters. We filter \(\hat{t}\) with our dual-reference cross-bilateral filter (2) to compute the final transmittance map t, which is used in (3) to synthesize the foggy image \(\mathbf {I}\).
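
The synthesis itself is compact once t is available. The following sketch implements (3)–(5) under the stated assumptions (globally constant atmospheric light, homogeneous fog); variable names are illustrative.

```python
import numpy as np

BETA_MIN = 2.996e-3  # lightest fog: MOR <= 1 km implies beta >= 2.996e-3 m^-1, Eq. (5)

def transmittance(distance_m, beta):
    """Homogeneous-fog transmittance t(x) = exp(-beta * l(x)), Eq. (4)."""
    assert beta >= BETA_MIN, "beta must satisfy the MOR <= 1 km bound"
    return np.exp(-beta * distance_m)

def synthesize_fog(R, t, L):
    """Optical model I(x) = R(x) t(x) + L (1 - t(x)), Eq. (3).

    R : (H, W, 3) clear-weather image, values in [0, 1]
    t : (H, W)    final transmittance map (after dual-reference filtering)
    L : (3,)      estimated atmospheric light
    """
    t = t[..., None]  # broadcast the scalar transmittance over color channels
    return R * t + L * (1.0 - t)

# e.g. beta = 0.02 m^-1 corresponds to MOR = 2.996 / 0.02, i.e. roughly 150 m visibility
```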

Results of the presented pipeline for fog simulation on example images from Cityscapes [11] are provided in Fig. 3 for \(\beta = 0.02\), which corresponds to visibility of ca. 150 m. We specifically leverage the instance-level semantic annotations that are provided in Cityscapes and set the labeling h of (1) to a different value for each distinct instance of the same semantic class in order to distinguish adjacent instances. We compare our synthetic foggy images against the respective images of Foggy Cityscapes that were generated with the approach of [48]. Our synthetic foggy images generally preserve the edges between adjacent objects with large discrepancy in depth better than the images in Foggy Cityscapes, because our approach utilizes semantic boundaries, which usually encompass these edges. The incorrect structure transfer of color textures to the transmittance map, which deteriorates the quality of Foggy Cityscapes, is also reduced with our method.

Fig. 3. Comparison of our synthetic foggy images against Foggy Cityscapes [48]. This figure is best viewed on a screen and zoomed in

4 Semantic Segmentation of Scenes with Dense Fog

In this section, we first present a standard supervised learning approach for semantic segmentation under dense fog using our synthetic foggy data with the novel fog simulation of Sect. 3, and then elaborate on our novel curriculum model adaptation approach using both synthetic and real foggy data.

4.1 Learning with Synthetic Fog

Generating synthetic fog from real clear-weather scenes offers the potential to inherit the existing human annotations of these scenes, such as those from the Cityscapes dataset [11]. This is a significant asset that enables training of standard segmentation models. Therefore, an effective way of evaluating the merit of a fog simulator is to adapt a segmentation model originally trained on clear weather to the synthesized foggy images and then evaluate the adapted model against the original one on real foggy images. The goal is to verify that standard learning methods for semantic segmentation can benefit from our simulated fog in the challenging scenario of real fog. This evaluation policy was proposed in [48]. We adopt this policy and fine-tune the RefineNet model [33] on synthetic foggy images generated with our simulation. The performance of our adapted models on dense real fog is compared to that of the original clear-weather model as well as models adapted on Foggy Cityscapes [48], providing an objective comparison of our simulation method against [48].

4.2 Curriculum Model Adaptation with Synthetic and Real Fog

While adapting a standard segmentation model to our synthetic fog improves its performance as shown in Sect. 6.2, the paradigm still suffers from the domain discrepancy between synthetic and real foggy images. This discrepancy becomes more accentuated for denser fog. We present a method which can learn from our synthetic fog plus unlabeled real foggy data.

The method, which we term Curriculum Model Adaptation (CMAda), uses two versions of synthetic fog—one with light fog and another with dense fog—and a large dataset of unlabeled real foggy scenes with variable, unknown fog density, and works as follows:

1. generate a synthetic foggy dataset with multiple versions of varying fog density;
2. train a model for fog density estimation on the dataset of step 1;
3. rank the images in the real foggy dataset with the model of step 2 according to fog density;
4. generate a dataset with light synthetic fog, and train a segmentation model on it;
5. apply the segmentation model from step 4 to the light-fog images of the real dataset (ranked lower in step 3) to obtain “noisy” semantic labels;
6. generate a dataset with dense synthetic fog;
7. adapt the segmentation model from step 4 to the union of the dense synthetic foggy dataset from step 6 and the light real foggy one from step 5.

CMAda adapts segmentation models from light synthetic fog to dense real fog and is inspired by curriculum learning [4], in the sense that we first solve easier tasks with our synthetic data, i.e. fog density estimation and semantic scene understanding under light fog, and then acquire new knowledge from the already “solved” tasks in order to better tackle the harder task, i.e. scene understanding under dense real fog. CMAda also exploits the direct control of fog density for synthetic foggy images. Figure 1 provides an overview of our method. Below we present details on our fog density estimation, i.e. step 2, and the training of the model, i.e. step 7.
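
Before detailing these two steps, the sketch below strings the seven steps of CMAda together. It is a high-level outline only: simulate_fog, train_density_regressor, and finetune_segmentation are hypothetical callables standing in for the components described in this paper, and the cutoff used to pick the lighter real images is illustrative.

```python
def cmada(clear_images, labels, real_foggy_images,
          simulate_fog, train_density_regressor, finetune_segmentation,
          light_beta=0.005, dense_beta=0.01, real_weight=1 / 3.0):
    """High-level outline of CMAda (steps 1-7); the helpers are injected callables."""
    # 1. synthetic fog at several densities (fog simulation of Sect. 3)
    betas = (0.0, 0.005, 0.01, 0.02)
    synthetic = {b: [simulate_fog(x, b) for x in clear_images] for b in betas}

    # 2.-3. fog density estimator, used to rank the real foggy images
    density_of = train_density_regressor(synthetic)
    ranked = sorted(real_foggy_images, key=density_of)
    real_light = ranked[:len(ranked) // 2]  # lighter-fog portion (cutoff is illustrative)

    # 4. segmentation model adapted to light synthetic fog
    seg_light = finetune_segmentation(synthetic[light_beta], labels)

    # 5. "noisy" pseudo-labels for the light real foggy images
    pseudo_labels = [seg_light(x) for x in real_light]

    # 6.-7. adapt further on dense synthetic fog plus pseudo-labeled light real fog
    return finetune_segmentation(synthetic[dense_beta] + real_light,
                                 labels + pseudo_labels,
                                 init=seg_light,
                                 real_weight=real_weight)  # w in (6) below
```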

Fog Density Estimation. Fog density is usually determined by the visibility of the foggy scene. An accurate estimate of fog density can benefit many applications, such as image defogging [10]. Since annotating images in a fine-grained manner regarding fog density is very challenging, previous methods are trained on a few hundred images divided into only two classes: foggy and fog-free [10]. The performance of such systems, however, is limited by the small amount of training data and the coarse class granularity.

In this paper, we leverage our fog simulation applied to Cityscapes [11] for fog density estimation. Since simulated fog density is directly controlled through \(\beta \), we generate several versions of Foggy Cityscapes with varying \(\beta \in \{0,\,0.005,\,0.01,\,0.02\}\) and train AlexNet [30] to regress the value of \(\beta \) for each image, removing the need to handcraft fog-relevant features as in [10]. The fog density predicted by our method correlates well with human judgments of fog density obtained in a subjective study on a large foggy image database on Amazon Mechanical Turk (cf. Sect. 6.1 for results). The fog density estimator is used to rank our new Foggy Zurich dataset, to select light foggy images for use in CMAda, and to select dense foggy images for manual annotation.
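
As a concrete illustration, a minimal PyTorch sketch of such a regressor is shown below: AlexNet with its last layer replaced by a single output for \(\beta \). The dummy batch, optimizer settings, and input resolution are assumptions for the sake of a self-contained example, not our exact training configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

net = models.alexnet()                    # optionally initialized with ImageNet weights
net.classifier[6] = nn.Linear(4096, 1)    # replace the 1000-way classifier by a beta regressor

optimizer = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.MSELoss()

# one illustrative training step on a dummy batch; in practice the batch comes from
# the synthetic versions of Foggy Cityscapes with beta in {0, 0.005, 0.01, 0.02}
images = torch.rand(8, 3, 224, 224)
betas = torch.tensor([[0.0], [0.005], [0.01], [0.02], [0.0], [0.005], [0.01], [0.02]])

optimizer.zero_grad()
loss = criterion(net(images), betas)
loss.backward()
optimizer.step()

# at test time, net(image) predicts beta, which is used to rank Foggy Zurich by fog density
```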

Curriculum Model Adaptation. We formulate CMAda for semantic segmentation as follows. Let us denote a clear-weather image by \(\mathbf {x}\), the corresponding image under light synthetic fog by \(\mathbf {x}^\prime \), the corresponding image under dense synthetic fog by \(\mathbf {x}^{\prime \prime }\), and the corresponding human annotation by \(\mathbf {y}\). Then, the training data consist of labeled data with light synthetic fog \(\mathcal {D}^\prime _l =\{(\mathbf {x}^\prime _i, \mathbf {y}_i)\}_{i=1}^{l}\), labeled data with dense synthetic fog \(\mathcal {D}^{\prime \prime }_l =\{(\mathbf {x}^{\prime \prime }_i, \mathbf {y}_i)\}_{i=1}^{l}\) and unlabeled images with light real fog \(\mathcal {\bar{D}}^\prime _u =\{\mathbf {\bar{x}}^\prime _j\}_{j=l+1}^{l+u}\), where \(\mathbf {y}_i^{m,n} \in \{1, \dots , C\}\) is the label of pixel \((m, n)\), and C is the total number of classes. l is the number of labeled training images with synthetic fog, and u is the number of unlabeled images with light real fog. The aim is to learn a mapping function \(\phi ^{\prime \prime }: \mathcal {X}^{\prime \prime } \mapsto \mathcal {Y}\) from \(\mathcal {D}^\prime _l\), \(\mathcal {D}^{\prime \prime }_l\) and \(\mathcal {\bar{D}}^\prime _u\), and evaluate it on images with dense real fog \(\mathcal {\bar{D}}^{\prime \prime } = \{\mathbf {\bar{x}}^{\prime \prime }_1, \dots , \mathbf {\bar{x}}^{\prime \prime }_k\}\), where k is the number of images with dense real fog.

Since \(\mathcal {\bar{D}}^\prime _u\) does not have human annotations, we generate the supervisory labels as previously described in step 5. In particular, we first learn a mapping function \(\phi ^\prime : \mathcal {X^\prime } \mapsto \mathcal {Y}\) with \(\mathcal {D}^{\prime }_l\) and then obtain the labels \(\bar{\mathbf {y}}^\prime _j=\phi ^\prime (\mathbf {\bar{x}}^\prime _j)\) for \(\mathbf {\bar{x}}^\prime _j\), \(\forall j \in \{l+1, \dots , l+u\}\). \(\mathcal {\bar{D}}^\prime _u\) is then upgraded to \(\mathcal {\bar{D}}^\prime _u=\{(\mathbf {\bar{x}}^\prime _j, \bar{\mathbf {y}}^\prime _j)\}_{j=l+1}^{l+u}\). The proposed scheme for training semantic segmentation models for dense foggy image \(\bar{\mathbf {x}}^{\prime \prime }\) is to learn a mapping function \(\phi ^{\prime \prime }\) so that human annotations for dense synthetic fog and the generated labels for light real fog are both taken into account:

$$\begin{aligned} \min _{\phi ^{\prime \prime }} \frac{1}{l}\sum _{i=1}^l L(\phi ^{\prime \prime }(\mathbf {x}^{\prime \prime }_i), \mathbf {y}_i) + \lambda \frac{1}{u}\sum _{j=l+1}^{l+u} L(\phi ^{\prime \prime }(\mathbf {\bar{x}}^\prime _j), \bar{\mathbf {y}}^\prime _j), \end{aligned}$$
(6)

where \(L(\cdot , \cdot )\) is the cross entropy loss function and \(\lambda =\frac{u}{l}\times w\) is a hyper-parameter balancing the weights of the two data sources, with w serving as the relative weight of each real weakly labeled image compared to each synthetic labeled one. We empirically set \(w=1/3\) in our experiments, but an optimal value can be obtained via cross-validation if needed. The optimization of (6) is implemented by mixing images from \(\mathcal {D}^{\prime \prime }_l\) and \(\bar{\mathcal {D}}^\prime _u\) in a proportion of 1 : w and feeding the stream of hybrid data to a CNN for standard supervised training.
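
In code, this mixing amounts to sampling each training image from the real pseudo-labeled pool with probability w/(1+w) and from the dense synthetic pool otherwise, then applying standard cross-entropy. The sketch below illustrates one such step; the batch format and the void ignore index are assumptions.

```python
import random
import torch.nn.functional as F

def cmada_step(model, optimizer, synth_batch, real_batch, w=1 / 3.0):
    """One optimization step approximating Eq. (6) by 1:w source mixing.

    synth_batch: (images, labels) with dense synthetic fog and human annotations
    real_batch : (images, labels) with light real fog and pseudo-labels from step 5
    """
    # picking the real source with probability w / (1 + w) yields a 1:w mixing ratio
    use_real = random.random() < w / (1.0 + w)
    images, labels = real_batch if use_real else synth_batch

    optimizer.zero_grad()
    logits = model(images)                                    # (N, C, H, W)
    loss = F.cross_entropy(logits, labels, ignore_index=255)  # 255 assumed for void pixels
    loss.backward()
    optimizer.step()
    return loss.item()
```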

This learning approach bears resemblance to model distillation [21, 26] or imitation [8, 12]. The underpinnings of our proposed approach are the following: (1) in light fog, objects are easier to recognize than in dense fog, hence models trained on synthetic data generalize better to real data when both data sources contain light rather than dense fog; (2) dense synthetic fog and light real fog reflect different and complementary characteristics of the target domain of dense real fog. On the one hand, dense synthetic fog features a similar overall visibility obstruction to dense real fog, but includes simulation artifacts. On the other hand, light real fog captures the true nonuniform and spatially varying structure of fog, but at a lower density than the target dense fog.

5 The Foggy Zurich Dataset

5.1 Data Collection

Foggy Zurich was collected during multiple drives with a car in the city of Zurich and its suburbs using a GoPro Hero 5 camera. We recorded four large video sequences and extracted, at a rate of one frame per second, the frames corresponding to those parts of the sequences where fog is (almost) ubiquitous in the scene. The extracted images were manually cleaned by removing duplicates, resulting in 3808 foggy images in total. The resolution of the frames is \(1920 \times 1080\) pixels. We mounted the camera inside the front windshield, since we found that mounting it outside the vehicle resulted in significant deterioration in image quality due to blurring artifacts caused by dew.

5.2 Annotation of Images with Dense Fog

We use our fog density estimator presented in Sect. 4.2 to rank all images in Foggy Zurich according to fog density. Based on this ordering, we manually select 16 images with dense fog and diverse visual scenes, and construct the test set of Foggy Zurich therefrom, which we term Foggy Zurich-test. We annotate these images with fine pixel-level semantic annotations using the 19 evaluation classes of the Cityscapes dataset [11]. In addition, we assign the void label to pixels which do not belong to any of these 19 classes, or whose class is uncertain due to the presence of fog. Every such pixel is ignored in the semantic segmentation evaluation. Comprehensive statistics for the semantic annotations of Foggy Zurich-test are presented in Fig. 4. We also distinguish the semantic classes that occur frequently in Foggy Zurich-test. These “frequent” classes are: road, sidewalk, building, wall, fence, pole, traffic light, traffic sign, vegetation, sky, and car. When performing evaluation on Foggy Zurich-test, we occasionally report the average score over this set of frequent classes, which feature plenty of examples, as a second metric to support the corresponding results.

Fig. 4. Number of annotated pixels per class for Foggy Zurich-test

Despite the fact that there exists a number of prominent large-scale datasets for semantic road scene understanding, such as KITTI [19], Cityscapes [11] and Mapillary Vistas [39], most of these datasets contain few or even no foggy scenes, which can be attributed partly to the rarity of the condition of fog and the difficulty of annotating foggy images. To the best of our knowledge, the only previous dataset for semantic foggy scene understanding whose scale exceeds that of Foggy Zurich-test is Foggy Driving [48], with 101 annotated images. However, we found that most images in Foggy Driving contain relatively light fog and most images with dense fog are annotated coarsely. Compared to Foggy Driving, Foggy Zurich comprises a much greater number of high-resolution foggy images. Its larger, unlabeled part is highly relevant for unsupervised or semi-supervised approaches such as the one we have presented in Sect. 4.2, while the smaller, labeled Foggy Zurich-test set features fine semantic annotations for the particularly challenging setting of dense fog, making a significant step towards evaluation of semantic segmentation models in this setting.

In order to ensure sound training and evaluation, we manually filter the unlabeled part of Foggy Zurich and exclude from the resulting training sets those images which resemble any image in Foggy Zurich-test with respect to the depicted scene.

6 Experiments

6.1 Fog Density Estimation with Synthetic Data

We conduct a user study on Amazon Mechanical Turk (AMT) to evaluate the ranking results of our fog density estimator. In order to guarantee high quality, we only employ AMT Masters in our study and verify the answers via a Known Answer Review Policy. Each human intelligence task (HIT) comprises five image pairs to be compared: three pairs are the true query pairs, while the remaining two pairs contain synthetic fog of different densities and are used for validation. The participants are shown two images at a time, side by side, and are simply asked to choose the one which is foggier. The query pairs are sampled based on the ranking results of our method. In order to avoid confusing cases, i.e. two images of similar fog densities, the two images of each pair need to be at least 20 percentiles apart based on the ranking results.
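
For concreteness, sampling query pairs under this percentile constraint can be sketched as follows; the function is an illustrative reconstruction rather than our exact sampling code.

```python
import random

def sample_query_pairs(ranked_images, n_pairs, min_percentile_gap=20):
    """Sample pairs that are at least `min_percentile_gap` percentiles apart
    in the fog-density ranking (least to most foggy)."""
    n = len(ranked_images)
    min_rank_gap = int(n * min_percentile_gap / 100)
    assert n > min_rank_gap, "ranking too short for the requested gap"
    pairs = []
    while len(pairs) < n_pairs:
        i, j = sorted(random.sample(range(n), 2))
        if j - i >= min_rank_gap:
            pairs.append((ranked_images[i], ranked_images[j]))
    return pairs
```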

We have collected answers for 12000 pairs in 4000 HITs. A HIT is considered for evaluation only when both of its validation questions are answered correctly; \(87\%\) of all HITs are valid for evaluation. For these 10400 annotations, we find that the agreement between our ranking method and human judgment is \(89.3\%\). This high accuracy confirms that fog density estimation is a relatively easy task, whose solution can be exploited for solving high-level tasks on foggy scenes.

6.2 Benefit of Adaptation with Our Synthetic Fog

Our model of choice for experiments on semantic segmentation is the state-of-the-art RefineNet [33]. We use the publicly available RefineNet-res101-Cityscapes model, which has been trained on the clear-weather training set of Cityscapes. In all experiments of this section, we use a constant learning rate of \(5\times {}10^{-5}\) and mini-batches of size 1. Moreover, we compile all versions of our synthetic foggy dataset by applying our fog simulation (denoted by “Stereo-DBF” for short in the following) to the same refined set of Cityscapes images that was used in [48] to compile Foggy Cityscapes-refined. This set comprises 498 training and 52 validation images; we use the former for training. We considered dehazing as a preprocessing step as in [48], but did not observe a gain over no dehazing and thus omit such comparisons from the following presentation.

Our first segmentation experiment shows that our semantic-aware fog simulation performs competitively against the fog simulation of [48] (denoted by “Stereo-GF”) for generating synthetic data to adapt RefineNet to dense real fog. RefineNet-res101-Cityscapes is fine-tuned on the version of Foggy Cityscapes-refined that corresponds to each simulation method for 8 epochs. We experiment with two synthetic fog densities. For evaluation, we use Foggy Zurich-test as well as a subset of Foggy Driving [48] containing 21 images with dense fog, which we term Foggy Driving-dense, and report results in Tables 1 and 2 respectively. Training on lighter synthetic fog outperforms the baseline clear-weather model in all cases and yields consistently better results than denser synthetic fog, which verifies the first motivating assumption of CMAda at the end of Sect. 4.2. In addition, Stereo-DBF beats Stereo-GF in most cases by a small margin and is consistently better at generating denser synthetic foggy data. On the other hand, Stereo-GF with light fog is slightly better for Foggy Zurich-test. This motivates us to consistently use the model that has been trained with Stereo-GF in steps 4 and 5 of CMAda for the experiments of Sect. 6.3, assuming that its merit for dense real fog extends to lighter fog. However, Stereo-DBF is still fully relevant for step 6 of CMAda based on its favorable comparison for denser synthetic fog.

Table 1. Performance comparison on Foggy Zurich-test of RefineNet and fine-tuned versions of it using Foggy Cityscapes-refined, rendered with different fog simulations and attenuation coefficients \(\beta \)
Table 2. Performance comparison on Foggy Driving-dense of RefineNet and fine-tuned versions of it using Foggy Cityscapes-refined, rendered with different fog simulations and attenuation coefficients \(\beta \)
Table 3. Performance comparison on Foggy Zurich-test of the two adaptation steps of CMAda using Foggy Cityscapes-refined and Foggy Zurich-light for training

6.3 Benefit of Curriculum Adaptation with Synthetic and Real Fog

Our second segmentation experiment showcases the effectiveness of our CMAda pipeline, using Stereo-DBF and Stereo-GF as alternatives for generating synthetic Foggy Cityscapes-refined in steps 4 and 6 of the pipeline. Foggy Zurich serves as the real foggy dataset in the pipeline. We use the results of our fog density estimation to select 1556 images with light fog and name this set Foggy Zurich-light. The models obtained after the initial adaptation step, which uses Foggy Cityscapes-refined with \(\beta =0.005\), are further fine-tuned for 6k iterations on the union of Foggy Cityscapes-refined with \(\beta =0.01\) and Foggy Zurich-light, setting \(w=1/3\), where the latter set is noisily labeled by the aforementioned initially adapted models. Results for the two adaptation steps (denoted by “CMAda-4” and “CMAda-7”) on Foggy Zurich-test and Foggy Driving-dense are reported in Tables 3 and 4 respectively. The second adaptation step CMAda-7, which involves dense synthetic fog and light real fog, consistently improves upon the first step CMAda-4. Moreover, using our fog simulation to simulate dense synthetic fog for CMAda-7 gives the best result on Foggy Zurich-test, improving the clear-weather baseline by \(5.9\%\) and \(7.9\%\) in terms of mean IoU over all classes and frequent classes respectively. Figure 5 supports this result with visual comparisons. The real foggy images of Foggy Zurich-light used in CMAda-7 additionally provide a clear generalization benefit on Foggy Driving-dense, which involves different camera sensors than Foggy Zurich.

Table 4. Performance comparison on Foggy Driving-dense of the two adaptation steps of CMAda using Foggy Cityscapes-refined and Foggy Zurich-light for training
Fig. 5. Qualitative results for semantic segmentation on Foggy Zurich-test. “CMAda” stands for RefineNet [33] fine-tuned with our full CMAda pipeline on the union of Foggy Cityscapes-refined using our simulation and Foggy Zurich-light

7 Conclusion

In this paper, we have demonstrated the benefit of using partially synthetic as well as unlabeled real foggy data in a curriculum adaptation framework to progressively improve performance of state-of-the-art semantic segmentation models in dense real fog. To this end, we have proposed a novel fog simulation approach on real scenes, which leverages the semantic annotation of the scene as input to a novel dual-reference cross-bilateral filter, and applied it to the Cityscapes dataset. We have presented Foggy Zurich, a large-scale dataset of real foggy scenes, including pixel-level semantic annotations for 16 scenes with dense fog. Through detailed evaluation, we have shown that our curriculum adaptation method exploits both our synthetic and real data and significantly boosts performance on dense real fog without using any labeled real foggy image, and that our fog simulation performs competitively with state-of-the-art counterparts.