Intra- & Extra-Source Exemplar-Based Style Synthesis for Improved Domain Generalization

Generalization with respect to domain shifts, as they frequently appear in applications such as autonomous driving, is one of the remaining big challenges for deep learning models. Therefore, we propose an exemplar-based style synthesis pipeline to improve domain generalization in semantic segmentation. Our method is based on a novel masked noise encoder for StyleGAN2 inversion. The model learns to faithfully reconstruct the image, preserving its semantic layout through noise prediction. Random masking of the estimated noise enables the style mixing capability of our model, i.e., it allows altering the global appearance without affecting the semantic layout of an image. Using the proposed masked noise encoder to randomize style and content combinations in the training set, i.e., intra-source style augmentation (ISSA), effectively increases the diversity of training data and reduces spurious correlation. As a result, we achieve up to 12.4% mIoU improvement on driving-scene semantic segmentation under different types of data shifts, i.e., changing geographic locations, adverse weather conditions, and day to night. ISSA is model-agnostic and straightforwardly applicable with CNNs and Transformers. It is also complementary to other domain generalization techniques, e.g., it improves the recent state-of-the-art solution RobustNet by 3% mIoU in Cityscapes to Dark Zürich. In addition, we demonstrate the strong plug-n-play ability of the proposed style synthesis pipeline, which is readily usable for extra-source exemplars, e.g., web-crawled images, without any retraining or fine-tuning. Moreover, we study a new use case to indicate a neural network's generalization capability by building a stylized proxy validation set. This application has significant practical relevance for selecting models to be deployed in an open-world environment. Our code is available at https://github.com/boschresearch/ISSA.

Fig. 1: Semantic segmentation results of HRNet (Wang et al., 2021b) on an unseen domain (snow), trained on Cityscapes (Cordts et al., 2016) and tested on ACDC (Sakaridis et al., 2021). The model trained with our ISSA can successfully segment the truck, while the baseline model fails completely.

Introduction
The varying environment with potentially diverse illumination and adverse weather conditions makes the deployment of deep learning models in an open world challenging (Sakaridis et al., 2021; Zhang et al., 2021a). Therefore, improving the generalization capability of neural networks is crucial for safety-critical applications such as autonomous driving (see for example Fig. 1). Since the target domains can generally be inaccessible or unpredictable at training time, it is important to train a generalizable model based on the known (source) domain, which may offer only a limited or biased view of the real world (Burton et al., 2017; Shafaei et al., 2018).
Diversity of the training data is considered to play an important role for domain generalization, including under natural distribution shifts (Taori et al., 2020). Many existing works assume that multiple source domains are accessible during training (Balaji et al., 2018; Hu et al., 2020; Jin et al., 2020; Li et al., 2018a,b, 2020; Zhou et al., 2020). For instance, Li et al. (2018a) applied meta-learning to better generalize to unseen domains, where source domains are divided into meta-source and meta-target domains to simulate domain shift; Hu et al. (2020) propose multi-domain discriminant analysis to learn a domain-invariant feature transformation. However, for pixel-level prediction tasks such as semantic segmentation, collecting diverse training data involves a tedious and costly annotation process (Caesar et al., 2018). Therefore, improving and predicting generalization from a single source domain is exceptionally compelling, particularly for semantic segmentation.
One pragmatic way to improve data diversity is to apply data augmentation. It has been widely adopted for different tasks, such as image classification (Hendrycks et al., 2019; Hong et al., 2021; Verma et al., 2019; Zhang et al., 2018a; Zhou et al., 2021), GAN training with limited data (Jiang et al., 2021; Karras et al., 2020a), or pose estimation (Bin et al., 2020; Peng et al., 2018; Wang et al., 2021a). One line of data augmentation techniques focuses on increasing the content diversity in the training set, such as geometric transformation (e.g., cropping or flipping), CutOut (DeVries and Taylor, 2017), and CutMix (Yun et al., 2019). However, CutOut and CutMix are ineffective against natural domain shifts, as reported in (Taori et al., 2020). Style augmentation, on the other hand, only modifies the style, i.e., the non-semantic appearance such as texture and color of the image (Gatys et al., 2016), while preserving the semantic content. By diversifying the style and content combinations, style augmentation can reduce overfitting to the style-content correlation in the training set, improving robustness against domain shifts. Hendrycks corruptions (Hendrycks and Dietterich, 2018) provide a wide range of synthetic styles, including weather conditions. However, they do not always look realistic and are thus still far from resembling natural data shifts. In this work, we propose an exemplar-based style synthesis pipeline for semantic segmentation, aiming to improve the style diversity of the training and validation sets without extra labeling effort.
Our exemplar-based style synthesis technique is based on the inversion of StyleGAN2 (Karras et al., 2020b), a state-of-the-art unconditional Generative Adversarial Network (GAN) that ensures high quality and realism of synthetic samples. GAN inversion encodes a given image into latent variables, and thus facilitates faithful reconstruction with style mixing capability. To realize the synthesis pipeline, we learn to separate semantic content from style information based on a single source domain. This makes it possible to alter the style of an image while leaving the content unchanged.
In particular, we focus on intra-source style augmentation (ISSA). Namely, our exemplar-based style synthesis makes use of training samples from the source domain, extracting their styles and contents and then randomly mixing them up. In doing so, we can increase the data diversity and alleviate spurious correlations in the given training data.
The faithful reconstruction of images with complex structures such as driving scenes is non-trivial. Prior methods (Alaluf et al., 2022; Dinh et al., 2022; Richardson et al., 2021; Roich et al., 2021; Šubrtová et al., 2022; Yao et al., 2022) are mainly tested on simple single-object-centric datasets, e.g., FFHQ (Karras et al., 2019), CelebA-HQ (Karras et al., 2018), or LSUN (Yu et al., 2015). As shown in (Abdal et al., 2020), extending the native latent space of StyleGAN2 with a stochastic noise space can lead to improved inversion quality. However, in this setting all style and content information is embedded in the noise map, leaving the latent codes inactive. Therefore, to enable the precise reconstruction of complex driving scenes as well as style mixing, we propose a masked noise encoder for StyleGAN2. The proposed random masking regularization on the noise map encourages the generator to rely on the latent prediction for reconstruction. Thus, it effectively separates content and style information and facilitates realistic style mixing, as shown in Fig. 2.
We further discover an excellent plug-n-play ability of the proposed style synthesis pipeline, i.e., it can be directly applied to unseen domains without requiring re-training of the encoder or generator. For instance, in Fig. 11, we employ our pipeline directly on web-crawled images, although the model is only trained on Cityscapes. This appealing property opens up the opportunity to go beyond intra-source exemplar-based style mixing, and grants us more flexibility to harness extra-source data for style synthesis. Thus, we also experiment with extra-source style augmentation (ESSA) to further improve the generalization performance.
Besides data augmentation, we explore the usage of the proposed pipeline for assessing neural networks' generalization capability in Sec. 6. By transferring styles from unannotated data samples of the target domain to existing labelled data, we can build a style-augmented proxy set for validation without introducing extra labelling effort. We observe that performance on this proxy set has a strong correlation with the real test performance on unseen target data, which can be used in practice to select more suitable models for deployment.
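This model-selection use case can be sketched as follows. All scores below are invented for illustration: one would compare candidate checkpoints by their mIoU on the stylized proxy set and check how well it correlates with the true performance on the unseen target set.

```python
import numpy as np

# Hypothetical mIoU scores of five candidate checkpoints: on the stylized
# proxy validation set (labelled source images re-styled with unannotated
# target exemplars) and on the real, unseen target test set.
proxy_miou = np.array([62.1, 58.4, 65.0, 60.2, 63.7])
test_miou = np.array([48.3, 44.0, 51.2, 46.1, 49.9])

# A strong correlation between the two justifies using the proxy set
# for model selection without any extra annotation.
r = np.corrcoef(proxy_miou, test_miou)[0, 1]
best = int(np.argmax(proxy_miou))   # checkpoint selected via the proxy set
```

In this toy example the checkpoint ranked best on the proxy set is also the best on the real target set, which is the behaviour the strong observed correlation suggests in practice.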
In summary, we make the following contributions:
- We propose a masked noise encoder for GAN inversion, which enables high quality reconstruction and style mixing on complex scene-centric datasets.
- We exploit GAN inversion for intra-source data augmentation, which can improve generalization of semantic segmentation under natural distribution shifts.
- Extensive experiments demonstrate that our proposed augmentation method ISSA consistently promotes domain generalization performance on driving-scene semantic segmentation across different network architectures, achieving up to 12.4% mIoU improvement, even with limited diversity in the source data and without access to the target domain.
- We discover the plug-n-play ability of our masked noise encoder, and showcase its potential for direct application on extra-source data such as web-crawled images.
- We further explore the usage of the proposed pipeline for assessing models' generalization performance on unseen data. By building a style-augmented proxy validation set on known labelled data, we observe a strong correlation between the performance on the proxy validation set and on the real test set, which offers useful insights for model selection without introducing any extra annotation effort.
This paper is an extended version of our previous work (Li et al., 2023) with more experimental evaluation and discussion of the potential of the proposed method, as well as two new applications. In particular, we provide a more detailed ablation study on the design of the proposed masked noise encoder (see Tabs. 3 and 4, Fig. 8). Furthermore, we add a discussion of the plug-n-play ability of the pipeline and go beyond intra-source to extra-source style mixing. We also conducted new experiments, reported in Tabs. 11 and 12. Finally, the new application as a model generalization performance indicator is introduced in Sec. 6.
Related Work

Domain Generalization. Domain generalization concerns the generalization ability of neural networks to a target domain that follows a different distribution than the source domain, where prior knowledge of the target domain is inaccessible at training time. Various methods have been proposed to approach this problem from different angles, employing data augmentation (Huang et al., 2021; Khirodkar et al., 2019; Li et al., 2022; Somavarapu et al., 2020; Zhou et al., 2021), domain alignment (Hu et al., 2020; Jin et al., 2020; Li et al., 2020; Zhou et al., 2020), adversarial training (Deng et al., 2020; Li et al., 2018b; Rahman et al., 2020; Shao et al., 2019), meta-learning (Balaji et al., 2018; Li et al., 2018a, 2019a; Zhao et al., 2021), ensemble learning (D'Innocente and Caputo, 2018; Lee et al., 2022a; Mancini et al., 2018; Wu and Gong, 2021), or feature decomposition (Chen et al., 2022; Wan et al., 2022). In particular, (Jia et al., 2020; Ouyang et al., 2022; Qiao et al., 2020; Wang et al., 2021c) consider the single-source setting, as we do in this work. Another line of work explores feature-level augmentation (Li et al., 2022; Zhou et al., 2021). MixStyle (Zhou et al., 2021) and DSU (Li et al., 2022) add perturbations at the normalization layers to simulate domain shifts at test time. However, this perturbation can potentially distort the image content, which can be harmful for semantic segmentation (see Sec. 4.3). Moreover, these methods require careful adaptation to the specific network architecture. In contrast, ISSA performs style mixing on the image level, thus being model-agnostic, and can be applied as a complement to other methods to further increase the generalization performance.
Beyond data augmentation for improving domain generalization, we further explore the usage of our exemplar-based style synthesis pipeline for assessing generalization performance. Recently, (Zhang et al., 2021b) proposed to predict the generalization of image classifiers using performance on synthetic data produced by a conditional GAN. However, this is limited to generalization within the source domain, and it is not straightforward to apply it to the semantic segmentation task. In contrast to generating images from scratch, we employ the proposed exemplar-based style synthesis pipeline to augment labelled source data and build stylized proxy validation sets. We empirically show that such proxy validation sets can indicate generalization performance, without extra annotation required.

Fig. 2: Qualitative comparison of inversion methods. pSp† is a variant of pSp (Richardson et al., 2021) introduced by us, training pSp with an additional discriminator and incorporating synthesized images for better initialization. pSp† can reconstruct the rough layout of the scene but still struggles to preserve details. The Feature-Style encoder shows a better reconstruction quality, yet it cannot faithfully reconstruct small objects (e.g., pedestrians), and some objects (e.g., the vehicle or bicycle) are rather blurry. Our masked noise encoder has the highest image fidelity, preserving finer details in the inverted image.
Data Augmentation. Data augmentation techniques can diversify training samples by altering their style, content, or both, thus preventing overfitting and improving generalization. Mixup augmentations (Dabouei et al., 2021; Verma et al., 2019; Zhang et al., 2018a) linearly interpolate between two training samples and their labels, regularizing both style and content. Despite their effectiveness on image-level classification tasks, they are not well suited for dense pixel-level prediction tasks. CutMix (Yun et al., 2019) cuts and pastes a random rectangular region of the input image into another image, thus increasing the content diversity. Geometric transformations, e.g., random scaling and horizontal flipping, can also serve this purpose. In contrast, Hendrycks corruptions (Hendrycks and Dietterich, 2018) only affect the image appearance without modifying the content. Their generated images look artificial, being far from resembling natural data, and thus offer limited help against natural distribution shifts (Taori et al., 2020).
StyleMix (Hong et al., 2021) is conceptually closer to our method; it aims to decompose training images into content and style representations and then mix them up to generate more samples. Nonetheless, its AdaIN-based (Huang and Belongie, 2017) style mixing cannot fulfill the pixel-wise label-preserving requirement (see Fig. 10). Another line of CycleGAN-based style transfer methods (Hoffman et al., 2018; Voreiter et al., 2020) requires access to both the source and target domain during training, and thus cannot be employed for the domain generalization problem. Our ISSA is also a style-based data augmentation technique that leverages the capabilities of a state-of-the-art GAN to produce natural looking samples. By modifying solely the style of the input images while keeping their content intact, the original ground truth label maps can be reused. Furthermore, the model can be effectively trained on a single domain without necessitating target data.
GAN Inversion. Showing good results, GAN inversion has been explored for many applications such as face editing (Abdal et al., 2019, 2020; Zhu et al., 2020), image restoration (Pan et al., 2022), and data augmentation (Golhar et al., 2022; Nguyen et al., 2021). StyleGANs (Karras et al., 2019, 2020a,b) are commonly used for inversion, as they demonstrate high synthesis quality and appealing editing capabilities. Nevertheless, there is a known distortion-editability trade-off (Tov et al., 2021). Thus, it is crucial to tailor the performance to the specific use case.
GAN inversion approaches can be classified into three groups: optimization-based methods (Abdal et al., 2019, 2020; Collins et al., 2020; Creswell and Bharath, 2019; Gu et al., 2020; Kang et al., 2021), encoder-based methods (Bartz et al., 2021; Richardson et al., 2021; Tov et al., 2021; Wei et al., 2022; Yao et al., 2022), and hybrid approaches (Alaluf et al., 2022; Chai et al., 2021; Dinh et al., 2022; Roich et al., 2021; Song et al., 2022). Optimization-based methods generally have worse editability and need exhaustive optimization for each input. Thus, in this paper, we use an encoder-based method for our style mixing purpose. The representative encoder-based pSp encoder (Richardson et al., 2021) embeds the input image in the extended latent space W+ of StyleGAN. The e4e encoder (Tov et al., 2021) improves the editability of pSp while trading off detail preservation. Yet, for the semantic segmentation augmentation task, it is crucial to ensure pixel-wise alignment with the ground-truth label maps. To improve the reconstruction quality, the Feature-Style encoder (Yao et al., 2022) further replaces the lower latent code prediction with a feature map prediction. Recent works explored the usage of additional information such as labelled regions of interest (Moon and Park, 2022) and segmentation masks (Šubrtová et al., 2022), or involved the joint optimization of the generator (Hu, 2022; Roich et al., 2021). Our method only requires RGB images and a frozen generator, while offering plug-n-play ability on web-crawled images (see Sec. 5).
Despite much progress, most prior work only showcases applications on single-object-centric datasets, such as CelebA-HQ (Karras et al., 2018), FFHQ (Karras et al., 2019), and LSUN (Yu et al., 2015). These methods still fail on more complex scenes, which restricts their application in practice. Our masked noise encoder can fulfil both the fidelity and the style mixing capability requirements, rendering itself well-suited for data augmentation for semantic segmentation. To the best of our knowledge, our approach is the first GAN inversion method that can be effectively applied as data augmentation for the semantic segmentation of complex scenes.

Method
We introduce our exemplar-based style synthesis pipeline in Sec. 3.1, which relies on GAN inversion that offers faithful reconstruction and style mixing of images. To enable better style-content disentanglement, we propose a masked noise encoder for GAN inversion in Sec. 3.2. Its training loss is described in detail in Sec. 3.3.

Exemplar-Based Style Synthesis Pipeline
The lack of data diversity and the existence of spurious correlations in the training set often lead to poor domain generalization. To mitigate them, the proposed style synthesis pipeline aims at 1) extracting styles from given exemplars, and 2) augmenting the training samples in the source domain with the new styles, while preserving their semantic content. For data augmentation, it employs GAN inversion to randomize the style-content combinations. In doing so, it diversifies the source dataset and reduces spurious style-content correlations. Because the content of the images is preserved and only the style is changed, the ground truth label maps can be re-used for training and validation without requiring any further annotation effort.
Our style synthesis pipeline is built on top of encoder-based GAN inversion, given its fast inference. GANs, such as StyleGANs (Karras et al., 2019, 2020a,b), have shown the capability of encoding rich semantic and style information in intermediate features and latent spaces. For encoder-based GAN inversion, an encoder is trained to invert an input image back into the latent space of a pre-trained GAN generator. The encoder should separately encode the style and content information of the input image. With such an encoder, we can synthesize new training samples with new style-content combinations. In particular, we are interested in intra-source style augmentation (ISSA), where the encoder takes the content and style codes from different training samples within the source domain and feeds them to the pre-trained generator. If this encoder-based GAN inversion can also handle unseen data, we can further make use of the styles of exemplars outside the source domain, such as web-crawled images, enabling extra-source style augmentation (ESSA). In both cases, since only the styles of the training samples in the source domain are modified, the newly synthesized training samples already have their ground truth label maps in place.
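The mixing step can be sketched as follows. Here `encode` and `generate` are toy numpy stand-ins for the masked noise encoder and the frozen StyleGAN2 generator (all names and operations are hypothetical simplifications); the sketch only illustrates the swap of style codes and content noise, and the reuse of the label map.

```python
import numpy as np

def encode(img):
    """Toy stand-in for the masked noise encoder: returns per-scale style
    codes {w_k} (global vectors) and a spatial noise map (the content)."""
    style = [img.mean(axis=(0, 1)) for _ in range(3)]  # K=3 toy latent codes
    noise = img[::4, ::4].mean(axis=-1)                # quarter-res content map
    return style, noise

def generate(style, noise, out_shape):
    """Toy stand-in for the frozen generator: renders the content map
    re-colored with the style statistics."""
    up = np.kron(noise, np.ones((4, 4)))[:out_shape[0], :out_shape[1]]
    return up[..., None] * np.stack(style).mean(axis=0)

def issa_mix(content_img, style_img):
    # take {w_k} from the style exemplar, the noise map from the content image
    style_codes, _ = encode(style_img)
    _, content_noise = encode(content_img)
    return generate(style_codes, content_noise, content_img.shape[:2])

rng = np.random.default_rng(0)
content = rng.random((128, 256, 3))   # training image from the source domain
style = rng.random((128, 256, 3))     # style exemplar (intra- or extra-source)
label = rng.integers(0, 19, size=(128, 256))  # its label map is reused as-is
mixed = issa_mix(content, style)      # new style, same semantic layout
```

Because only the style is swapped, `label` remains a valid annotation for `mixed`; this is what makes the pipeline usable for segmentation training without extra labelling.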
StyleGAN2 can synthesize natural looking images resembling scene-centric datasets such as Cityscapes (Cordts et al., 2016) and BDD100K (Yu et al., 2020). However, existing GAN inversion encoders cannot provide the fidelity and style mixing capability needed to enable ISSA and ESSA for improved domain generalization of semantic segmentation. Loss of fine details or inauthentic reconstruction of small-scale objects would even harm the model's generalization ability. Therefore, we propose a novel encoder design to invert StyleGAN2, termed masked noise encoder (see Fig. 3).

Fig. 3: Method overview. Our encoder is built on top of the pSp encoder (Richardson et al., 2021), shown in the blue area (A). It maps the input image to the extended latent space W+ of the pre-trained StyleGAN2 generator. To promote the reconstruction quality on complex scene-centric datasets, e.g., Cityscapes, our encoder additionally predicts the noise map at an intermediate scale, illustrated in the orange area (B). M stands for random noise masking, a regularization for the encoder training. Without it, the noise map overtakes the latent codes in encoding the image style, so that the latter cannot make any perceivable changes on the reconstructed image, making style mixing impossible.

Masked Noise Encoder
We build our encoder upon the pSp encoder (Richardson et al., 2021). It employs a feature pyramid (Lin et al., 2017) to extract multi-scale features from a given image, see Fig. 3-(A). We improve over pSp by identifying in which latent space to embed the input image for a high-quality reconstruction of images with complex street scenes. Further, we propose a novel training scheme to enable the style-content disentanglement of the encoder, thus improving its style mixing capability.
Extended Latent Space. The StyleGAN2 generator takes the latent code w ∈ W generated by an MLP network and randomly sampled additive Gaussian noise maps {ϵ} as inputs for image synthesis. As pointed out in (Abdal et al., 2019), it is suboptimal to embed a real image into the original latent space W of StyleGAN2, due to the gap between the real and synthetic data distributions. A common practice is to map the input image into the extended latent space W+. The multi-scale features of the pSp feature pyramid are respectively mapped to the latent codes {w_k} at the corresponding scales of the StyleGAN2 generator, i.e., map2latent in Fig. 3-(A).
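A minimal numpy sketch of the map2latent idea follows, with a toy three-level feature pyramid and a hypothetical split of the K = 14 latent codes (the count for 128 × 256 inputs, see Sec. 4.1) across pyramid levels. The real encoder uses learned map2latent networks; random projections stand in for them here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy three-level feature pyramid for a 128 x 256 input:
# (channels, height, width) at strides 4, 8, and 16.
pyramid = [rng.random((c, 128 // s, 256 // s))
           for c, s in [(64, 4), (128, 8), (256, 16)]]

K, D = 14, 512          # 14 latent codes in W+ for 128 x 256 inputs, dim 512
per_level = [5, 5, 4]   # hypothetical split of the K codes across levels

w_plus = []
for feat, n in zip(pyramid, per_level):
    pooled = feat.mean(axis=(1, 2))                  # global average pooling
    head = rng.standard_normal((n, D, pooled.size))  # stand-in for map2latent
    w_plus.extend(head @ pooled)                     # n style vectors

w_plus = np.stack(w_plus)   # one style vector per generator scale: (K, D)
```

Each of the K vectors modulates one scale of the generator, which is why coarser pyramid levels naturally feed the coarser (more global) style codes.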
Additive Noise Map. The latent codes {w_k} from the extended latent space W+ alone are not expressive enough to reconstruct images with diverse semantic layouts such as Cityscapes (Cordts et al., 2016), as shown in Fig. 2-(pSp†). The latent codes of StyleGAN2 are one-dimensional vectors that modulate the feature vectors at different spatial positions identically. Therefore, they cannot precisely encode the semantic layout information, which is spatially varying. To address this issue, our encoder additionally predicts the additive noise map ε of StyleGAN2 at an intermediate scale, i.e., map2noise in Fig. 3-(B). The noise map ε has spatial dimensions, making it inherently capable of encoding more information. It is particularly advantageous for content information that varies spatially, as the noise map can readily accommodate such information. As evidenced by the visualization in Fig. 5, the noise map is adept at capturing the semantic content of the scene.
Random Noise Masking. While offering high-quality reconstruction, the additive noise map can be too expressive, so that it encodes nearly all perceivable details of the input image. This results in a poor style-content disentanglement and can damage the style mixing capability of the encoder (see Fig. 4). To avoid this undesired effect, we propose to regularize the noise prediction of the encoder by random masking of the noise map. Note that random masking as a regularization technique has also been successfully used in reconstruction-based self-supervised learning (He et al., 2022). For style mixing, we combine the latent codes {w_k^s} of a style image with the noise map ε^c of a content image to synthesize a new image, i.e., G({w_k^s}, ε^c), as illustrated in Fig. 6. If the encoder is not trained with random masking, the new image does not show any perceptible difference to the content image. This means the latent codes {w_k} encode negligible information about the image. In contrast, when trained with masking, the encoder creates a novel image that takes the content and style from two different images. This observation confirms the enabling role of masking for content and style disentanglement, and thus the improved style mixing capability. The noise map no longer encodes all perceptible information of the image, including style and content. In effect, the latent codes {w_k} play a more active role in controlling the style. In Fig. 5, we further visualize the noise map of the masked noise encoder and observe that it captures the semantic content of the scene well.
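A minimal sketch of the patch-wise random masking, using the best setting reported in Sec. 4.1 (patch size P = 4, masking ratio ρ = 25%) on a quarter-resolution noise map:

```python
import numpy as np

def random_noise_mask(h, w, patch=4, ratio=0.25, rng=None):
    """Binary mask M_noise over an h x w noise map: zero out a `ratio`
    fraction of non-overlapping patch x patch blocks."""
    rng = rng or np.random.default_rng()
    gh, gw = h // patch, w // patch
    n_masked = int(round(gh * gw * ratio))
    grid = np.ones(gh * gw)
    grid[rng.choice(gh * gw, size=n_masked, replace=False)] = 0.0
    # upsample the patch grid to per-position resolution
    return np.kron(grid.reshape(gh, gw), np.ones((patch, patch)))

# quarter-resolution noise map for a 128 x 256 input
noise = np.random.default_rng(1).standard_normal((32, 64))
mask = random_noise_mask(32, 64, patch=4, ratio=0.25,
                         rng=np.random.default_rng(2))
masked_noise = mask * noise   # epsilon_M = M_noise ⊙ epsilon
```

Zeroing whole patches (rather than independent positions) removes local content coherently, which forces the latent codes {w_k} to carry the information needed to fill in plausible appearance.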
Additionally, we discover that our masked noise encoder is equipped with a strong plug-n-play ability, i.e., it is readily usable on novel domains without retraining or fine-tuning. As shown in Fig. 11, the masked noise encoder together with the generator trained on Cityscapes not only reconstructs unseen domain data (e.g., a polar bear), but also retains the style mixing capability (e.g., turning a bright day into a sunset scene). This generalization capability allows us to further exploit extra-source data for style synthesis, i.e., ESSA. Except that the styles are extracted from external exemplars, the style synthesis process of ESSA is identical to ISSA.

Encoder Training Loss
Mathematically, the proposed StyleGAN2 inversion with the masked noise encoder E_M can be formulated as

{w_1, ..., w_K, ε} = E_M(x).  (3.1)

The masked noise encoder E_M maps the given image x onto the latent codes {w_k} and the noise map ε. The StyleGAN2 generator G takes both {w_k} and ε as input and generates x*. Ideally, x* should be identical to x, i.e., a perfect reconstruction. When training the masked noise encoder E_M to reconstruct x, the original noise map ε is masked before being fed into the pre-trained generator:

ε_M = M_noise ⊙ ε,  (3.2)
x̃ = G({w_1, ..., w_K}, ε_M),  (3.3)

where M_noise is the random binary mask, ⊙ indicates the Hadamard product, and x̃ denotes the reconstructed image with the masked noise ε_M. The training loss for the encoder is given as

L = L_mse + λ_1 L_lpips + λ_2 L_adv + λ_3 L_reg,  (3.4)

where {λ_i} are weighting factors. The first three terms are the pixel-wise MSE loss, the learned perceptual image patch similarity (LPIPS) (Zhang et al., 2018b) loss, and the adversarial loss (Goodfellow et al., 2014), which are common reconstruction losses for encoder training (Richardson et al., 2021; Zhu et al., 2020). Since masking removes the information of the given image x at certain spatial positions, the reconstruction requirement at these positions is relaxed: the MSE and LPIPS losses are masked with M_img and M_feat, which are obtained by up- and down-sampling the noise mask M_noise to the image size and to the feature size of the VGG-based feature extractor, respectively. The adversarial loss is obtained by formulating the encoder training as an adversarial game with a discriminator D that is trained to distinguish between reconstructed and real images.
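The masked pixel-wise MSE term can be sketched as follows, assuming nearest-neighbour upsampling of M_noise to the image size and a quarter-resolution noise map; the LPIPS and adversarial terms are omitted, so this is an illustration of the masking logic, not the exact implementation.

```python
import numpy as np

def upsample_mask(m_noise, factor):
    """Nearest-neighbour upsampling of M_noise to image resolution (M_img)."""
    return np.kron(m_noise, np.ones((factor, factor)))

def masked_mse(x, x_rec, m_noise, factor=4):
    """Pixel-wise MSE between x and its reconstruction, relaxed (zeroed)
    at the spatial positions removed by the noise mask."""
    m_img = upsample_mask(m_noise, factor)[..., None]   # broadcast over RGB
    return np.sum(m_img * (x_rec - x) ** 2) / (3 * m_img.sum())

rng = np.random.default_rng(0)
x = rng.random((128, 256, 3))                          # input image
m_noise = (rng.random((32, 64)) > 0.25).astype(float)  # kept positions = 1
```

Relaxing the loss under the mask is what prevents the encoder from being penalized for detail it was explicitly denied, pushing that burden onto the latent codes.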
The last regularization term is defined as

L_reg = ||ε||_1 + ||E_M^w(G(w_gt, ϵ)) − w_gt||²_2,  (3.5)

where E_M^w(·) denotes the latent code prediction of the encoder. The L1 norm helps to induce a sparse noise prediction. It is complementary to random masking, reducing the capacity of the noise map. The second term uses the ground truth latent codes w_gt of synthesized images G(w_gt, ϵ) to train the latent code prediction E_M^w(·) (Yao et al., 2022). It guides the encoder to stay close to the original latent space of the generator, speeding up convergence.

Experiments
We describe the experiment setup in Sec. 4.1. Then, Sec. 4.2 and Sec. 4.3 report our experiments on the masked noise encoder for StyleGAN2 inversion and on ISSA for improved domain generalization of semantic segmentation, respectively.

Experiment Setup
Datasets. We conduct extensive experiments on four driving-scene datasets: Cityscapes (CS) (Cordts et al., 2016), BDD100K (BDD) (Yu et al., 2020), ACDC (Sakaridis et al., 2021), and Dark Zürich (DarkZ) (Sakaridis et al., 2019). Cityscapes is collected from different cities, primarily in Germany, under good or medium weather conditions during daytime. BDD100K is a driving-scene dataset collected in the US, representing a geographic location shift from Cityscapes. Besides, it also includes more diverse scenes (e.g., city streets, residential areas, and highways) and different weather conditions captured at different times of the day. Both ACDC and Dark Zürich are collected in Switzerland. ACDC contains four adverse conditions (rain, fog, snow, night), and Dark Zürich contains night scenes. The default setting is to use Cityscapes as the source training data, whereas the validation sets of the other datasets represent unseen target domains with different types of natural shifts, i.e., they are used only for testing. Additionally, we also study the challenging day-to-night generalization scenario, where BDD100K-Daytime is used as the source set, and ACDC-Night and Dark Zürich are treated as unseen domains. In both cases, we consider a single source domain for training.
Training details. We experiment with two image resolutions: 128 × 256 and 256 × 512. The StyleGAN2 (Karras et al., 2020b) model is first trained to unconditionally synthesize images and is then fixed during the encoder training. To invert the pre-trained StyleGAN2 generator, the masked noise encoder predicts both the latent codes in the extended W+ space and the additive noise map. In accordance with the StyleGAN2 generator, the W+ space consists of 14 and 16 latent code vectors for the input resolutions 128 × 256 and 256 × 512, respectively. The additive noise map is always at the intermediate feature space with one fourth of the input resolution. We use the same encoder architecture, optimizer, and learning rate scheduling as pSp (Richardson et al., 2021). Our encoder is trained with the loss function defined in Eq. (3.4) with λ_1 = 10 and λ_2 = λ_3 = 0.1. For random noise masking, we use a patch size P of 4 with a masking ratio ρ = 25%. A detailed ablation study on the masking and the noise map of the encoder can be found in Sec. 4.2.
We use the trained masked noise encoder to perform ISSA as described in Sec. 3.1. We experiment with several architectures for semantic segmentation, i.e., HRNet (Wang et al., 2021b), SegFormer (Xie et al., 2021), and DeepLab v2/v3+ (Chen et al., 2018a,b). The baseline segmentation models are trained with their default configurations and with the standard augmentation, i.e., random scaling and horizontal flipping.

Masked Noise Encoder
Reconstruction quality. Table 1 shows that our masked noise encoder considerably outperforms two strong StyleGAN2 inversion baselines, pSp (Richardson et al., 2021) and the Feature-Style encoder (Yao et al., 2022), in all three evaluation metrics. The achieved low values of MSE, LPIPS (Zhang et al., 2018b), and FID (Heusel et al., 2017) indicate its high-quality reconstruction. Both the masked noise encoder and the Feature-Style encoder adopt the adversarial loss L_adv and the regularization using synthesized images with ground truth latent codes w_gt. Therefore, we also add them to the pSp training and denote this version pSp†. While pSp† improves over pSp in MSE and FID, it still underperforms compared to the others. This confirms that inverting into the extended latent space W+ alone allows only limited reconstruction quality on Cityscapes. The Feature-Style encoder (Yao et al., 2022) replaces the prediction of the low-level latent codes with a feature prediction, which results in better reconstruction without severely harming style editability. However, its reconstruction on Cityscapes is still not satisfying and underperforms compared to our masked noise encoder. As noted in (Yao et al., 2022), the feature size of the Feature-Style encoder is restricted. Using a larger feature map to improve the reconstruction quality is only possible by replacing further latent code predictions. Consequently, this largely reduces the expressiveness of the latent embedding and leads to extremely poor editability, making the encoder no longer suitable for downstream applications, e.g., style mixing data augmentation.
The visual comparison across pSp†, the Feature-Style encoder and our masked noise encoder is shown in Fig. 2 and is aligned with the quantitative results in Table 1. pSp† shows overall poor reconstruction quality. The Feature-Style encoder cannot faithfully reconstruct small objects and restore fine details. In comparison, our masked noise encoder offers high-quality reconstruction, preserving the semantic layout and fine details of each class. Having a high-quality reconstruction is an important requirement for using the encoder for data augmentation. Unfortunately, neither pSp† nor the Feature-Style encoder achieves satisfactory reconstruction quality. For instance, they both fail at capturing the red traffic light in Fig. 2. Using such images for data augmentation can confuse the semantic segmentation model, leading to performance degradation.
Ablation on the masking effect. In Fig. 4 and Fig. 7, we visually observe that random masking is essential for the style mixing capability: despite good reconstruction quality, the encoder trained without masking can hardly alter the style of a given content image.

Ablation on masking hyperparameters. We conduct an ablation study on the mask patch size P and masking ratio ρ, shown in Table 3. We observe that the mask patch size is a relatively insensitive hyperparameter, while a higher masking ratio results in a noticeable degradation of the reconstruction quality. Empirically, the patch size P = 4 with a masking ratio ρ = 25% achieves the best reconstruction performance. Therefore, we use the encoder trained with this parameter combination for our data augmentation ISSA.
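For illustration, the random patch masking applied to the estimated noise map can be sketched as follows. This is a minimal NumPy sketch of the idea with a patch size P and masking ratio ρ; the function name, arguments and zero-filling are our own hypothetical stand-ins, not the actual training code.

```python
import numpy as np

def random_patch_mask(noise, patch_size=4, ratio=0.25, rng=None):
    """Zero out a random subset of non-overlapping square patches of a noise map.

    noise: (H, W) array; patch_size: side length P; ratio: fraction of patches masked.
    Hypothetical helper illustrating the masking ablation, not the authors' code.
    """
    rng = np.random.default_rng(rng)
    h, w = noise.shape
    gh, gw = h // patch_size, w // patch_size
    n_patches = gh * gw
    n_masked = int(round(ratio * n_patches))
    # Choose which patches to drop, then upsample the patch grid to pixel resolution.
    flat = np.zeros(n_patches, dtype=bool)
    flat[rng.choice(n_patches, size=n_masked, replace=False)] = True
    mask = flat.reshape(gh, gw).repeat(patch_size, 0).repeat(patch_size, 1)
    return np.where(mask, 0.0, noise), mask

# The best-performing setting from Table 3: P = 4, ρ = 25%, on a 32 × 64 noise map.
noise = np.random.default_rng(0).standard_normal((32, 64))
masked, mask = random_patch_mask(noise, patch_size=4, ratio=0.25, rng=0)
```

During training, the encoder must then inpaint the masked regions from context, which is what prevents the noise map from trivially memorizing the full image.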
Ablation on the noise map resolution. We investigate the effect of the noise map size and experimentally observe that the reconstruction quality benefits the most from using the noise map at the intermediate feature space with one fourth of the input resolution. As shown in Table 4, using a 32 × 64 noise map, i.e., one fourth of the image resolution, achieves better reconstruction quality than using lower-resolution noise maps. A higher-resolution noise map, e.g., at the full image resolution, in contrast, can be too expressive and encode nearly all perceivable details. This results in worse style mixing capability, as shown in Fig. 8. Therefore, we employ the intermediate noise map at one fourth of the input resolution in all of our experiments.
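The style mixing operation underlying ISSA combines the style latent codes of one image with the content noise map of another, i.e., G(w_s, ε_c). The interface can be sketched with trivial toy stand-ins in place of the trained masked noise encoder and StyleGAN2 generator; all names and the toy "style = global mean" decomposition are hypothetical, chosen only so the sketch is runnable.

```python
import numpy as np

def style_mix(encoder, generator, content_img, style_img):
    """G(w_s, eps_c): latent codes w carry style, the noise map eps carries content."""
    _, eps_c = encoder(content_img)   # keep the content noise map of I_c
    w_s, _ = encoder(style_img)       # take the style latent codes of I_s
    return generator(w_s, eps_c)

# Toy stand-ins: "style" is the global mean, "content" the zero-mean residual.
toy_encoder = lambda img: (img.mean(), img - img.mean())
toy_generator = lambda w, eps: w + eps

content = np.arange(12, dtype=float).reshape(3, 4)
style = np.full((3, 4), 10.0)
mixed = style_mix(toy_encoder, toy_generator, content, style)
```

In the toy setting, the result keeps the spatial layout of `content` while adopting the global statistics of `style`, mirroring how the real pipeline preserves the semantic layout while swapping appearance.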

ISSA for Domain Generalization
Comparison with data augmentation methods. In contrast to the others, CutMix mixes up the content rather than the style. It improves the in-distribution performance on Cityscapes, but this gain does not extend to domain generalization. Hendrycks' weather corruptions can be seen as a synthetic version of Cityscapes under rain, fog, and snow conditions. Despite already mimicking ACDC at training time, it can still degrade performance on ACDC-Snow by more than 5.8% mIoU using HRNet. Among the four Hendrycks corruption types (i.e., noise, blur, digital and weather), Hendrycks-Digital, consisting of contrast, elastic transformation, pixelation and JPEG, is the best-performing one, but it still underperforms ISSA. StyleMix (Hong et al., 2021) also seeks to mix up styles. However, it does not work well for scene-centric datasets such as Cityscapes. Its poor synthetic image quality (see Fig. 10) leads to a performance drop below the HRNet baseline in many cases, e.g., on Cityscapes to ACDC-Fog from 58.68% to 49.11% mIoU.
More evaluation of the generalization performance from Cityscapes to BDD100K and Dark Zürich is provided in Table 6, where the observations are consistent with Table 5 explained above. In addition to weather changes, we further compare different data augmentation methods under the more challenging day-to-night setting in Table 7. ISSA presents consistent advantages over competing methods, which again confirms its effectiveness in improving generalization performance.
Comparison with domain generalization techniques. We further compare ISSA with two advanced feature-space style mixing methods designed to improve domain generalization performance: MixStyle (Zhou et al., 2021) and DSU (Li et al., 2022). Both extract the style information at certain normalization layers of CNNs. MixStyle (Zhou et al., 2021) mixes up styles by linearly interpolating the feature statistics, i.e., mean and variance, of different images, while DSU (Li et al., 2022) models the feature statistics as a distribution and randomly draws samples from it.
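To make the contrast with image-space ISSA concrete, the core of MixStyle-like feature-statistics mixing can be sketched as follows. This is a simplified NumPy version operating on per-channel mean and standard deviation, not the reference implementation; DSU would instead sample the statistics from an estimated distribution rather than interpolating them.

```python
import numpy as np

def mixstyle(feat_a, feat_b, lam=0.5, eps=1e-6):
    """Mix channel-wise feature statistics of two feature maps (MixStyle-like).

    feat_*: (C, H, W) arrays. Normalizes feat_a by its own statistics, then
    re-scales with statistics linearly interpolated between feat_a and feat_b.
    """
    mu_a = feat_a.mean(axis=(1, 2), keepdims=True)
    sig_a = feat_a.std(axis=(1, 2), keepdims=True) + eps
    mu_b = feat_b.mean(axis=(1, 2), keepdims=True)
    sig_b = feat_b.std(axis=(1, 2), keepdims=True) + eps
    mu_mix = lam * mu_a + (1 - lam) * mu_b
    sig_mix = lam * sig_a + (1 - lam) * sig_b
    return (feat_a - mu_a) / sig_a * sig_mix + mu_mix

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 16, 16))
b = 3.0 + 2.0 * rng.standard_normal((8, 16, 16))
mixed = mixstyle(a, b, lam=0.0)  # lam=0 adopts b's statistics entirely
```

Note that the perturbation acts purely on feature statistics; nothing constrains the semantic content to stay intact, which is the source of the source-domain performance drop discussed below.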
We adopt the experimental setting of DSU with default hyperparameters, using a DeepLab v2 (Chen et al., 2018a) segmentation network with a ResNet101 backbone. Table 8 shows that ISSA outperforms both MixStyle and DSU by a large margin. We also observe a slight performance drop on the source domain (i.e., CS) when applying DSU and MixStyle. As they operate at the feature level, there is no guarantee that the semantic content stays unchanged after the random perturbation of the feature statistics. Thus, the changes in feature statistics might negatively affect the performance, as also indicated in (Li et al., 2022). Note that, in contrast, ISSA operates on the image space. Combining ISSA with MixStyle and DSU leads to a strong boost in the performance of these methods.
Being model-agnostic, ISSA can be combined with other networks designed specifically for the domain generalization of semantic segmentation. To showcase its complementary nature, we add ISSA on top of two state-of-the-art domain generalization methods for semantic segmentation, RobustNet (Choi et al., 2021) and SHADE (Zhao et al., 2022). RobustNet proposes a novel instance whitening loss to selectively remove domain-specific style information. SHADE, on the other hand, aims to learn a style-invariant representation and preserve knowledge from the pretrained backbone. Although color transformation has already been used for augmentation in both methods, and SHADE additionally employs feature-level style augmentation, ISSA introduces more natural style shifts and is thus able to bring further improvements.

Applying ISSA to a new dataset requires training a GAN and an encoder, which could take considerable computational resources. Therefore, it is of practical interest whether the trained models can be readily usable for novel domains.
ISSA using arbitrary encoders. Thanks to the plug-n-play ability of the synthesis pipeline, we observe that ISSA can still be effective even when the encoder and generator are trained on a different dataset of a similar task, so that re-training is not required. Note that here the source is defined with respect to the segmenter training for domain generalization, not the encoder training. As shown in Table 11, when training the segmenter on Cityscapes using ISSA, we can directly use the generator and encoder trained on BDD100K without fine-tuning. Even though these models have not seen any samples of Cityscapes, they can still reconstruct and augment styles within Cityscapes, and the effectiveness of ISSA is not compromised. This implies that, once the generator and encoder are trained on one dataset, they are straightforwardly applicable for augmenting novel datasets.
Extra-source exemplar-based style synthesis. Furthermore, we explore the use of extra-source data as style exemplars. Visual examples in Fig. 11 showcase the plug-n-play style mixing ability of our encoder on web-crawled images, where the model is only trained on Cityscapes. It can be observed that the styles of unseen images can still be successfully transferred to the content images, which grants us the opportunity to further utilize images from the web to enhance the effectiveness of style augmentation beyond intra-source styles. We also illustrate the interpolation capability in the style latent space on both Cityscapes and unseen web-crawled images. This property enables more control over the style mixing strength.
To further explore the usage of images from the web, we take the Landscape Pictures dataset as the source of extra-source exemplars for style augmentation. Table 12 shows that by exploiting additional image styles, ESSA can further improve the generalization performance of ISSA on unseen target domains.

Stylized Proxy Validation Set Synthesis
Beyond the usage of data augmentation for network training, we further explore whether our exemplar-based style synthesis pipeline can be used to assess the generalization capability of semantic segmentation models on both source and target domains without extra data annotation effort. Prior work (Zhang et al., 2021b) has used conditional GAN synthesized samples to predict the generalization performance of image classifiers in the source domain. However, it remains unclear how to evaluate the generalization performance on unseen domains and how to apply it to dense prediction tasks. Given that our masked noise encoder can transfer styles even from novel domains, we utilize this attractive property to generate a stylized proxy validation set, i.e., combining styles from the target domain with the contents of the source domain training samples. To obtain their styles, exemplars from the target domain do not need to be labelled, and the existing ground-truth label maps of the training samples in the source domain can be reused for evaluation.

Experimental Setup. We investigate the generalization performance of 95 semantic segmentation models trained on Cityscapes, where 54 models are obtained from the MMSegmentation (Contributors, 2020) model zoo and the others are trained by ourselves. The models cover both CNN-based architectures, e.g., HRNet (Wang et al., 2021b), DeepLab (Chen et al., 2017), DANet (Fu et al., 2019), and transformer-based models, e.g., SegFormer (Xie et al., 2021), SETR (Zheng et al., 2021). Besides, the models are trained using different strategies, e.g., various learning rate schedules, cropping sizes and data augmentations. We consider generalization performance on both the source and target domain for the correlation study. Specifically, we use the Cityscapes validation set as the source test set, and the ACDC and BDD100K validation sets as the target test data. To verify the generalization performance on the source domain, we apply intra-source style augmentation to the Cityscapes training set and use it
as the proxy validation set. To verify the target domain generalization performance, we build a proxy set by transferring styles from the corresponding target test dataset. Further, we study the correlation between the real test performance and the performance on the proxy data.
Correlation Metrics. We compute Spearman's Rank Correlation coefficient (ρ) and Kendall Rank Correlation coefficient (τ) to quantitatively measure the correlation strength. The value of a correlation coefficient lies in [−1, 1]. A value closer to ±1 indicates a strong positive/negative association between the two variables; as the coefficient approaches 0, the association becomes looser. Both correlation coefficients are non-parametric, i.e., they make no strict assumptions on the data distribution, and the assessment is based on the ranking of the data.
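For reference, both rank correlation coefficients can be computed as follows. The mIoU values are made-up illustrative numbers, and we use the simplified tie-free Spearman formula; library implementations (e.g., SciPy) additionally handle ties and p-values.

```python
def rank(xs):
    """Ranks starting at 1; assumes no ties for this sketch."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank_i, i in enumerate(order, start=1):
        r[i] = rank_i
    return r

def spearman(x, y):
    """Spearman's rho via the tie-free formula 1 - 6*sum(d^2)/(n(n^2-1))."""
    n = len(x)
    rx, ry = rank(x), rank(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

def kendall(x, y):
    """Kendall's tau: (concordant - discordant) pairs over all pairs."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

proxy_miou = [62.1, 64.8, 59.3, 70.2, 66.5]  # hypothetical proxy-set mIoU of 5 models
test_miou = [48.0, 50.1, 45.2, 55.7, 51.9]   # hypothetical real test mIoU
rho = spearman(proxy_miou, test_miou)  # identical rankings give rho = 1.0
tau = kendall(proxy_miou, test_miou)   # identical rankings give tau = 1.0
```

Since both coefficients depend only on rankings, they are well suited to model selection, where the ordering of models matters more than absolute mIoU values.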
Observations. In Fig. 14, we show the correlation between performance on the intra-source style augmented proxy set and on the real Cityscapes test set across different network architectures. We clearly observe a strong correlation (ρ > 0.95), indicating that the ISSA proxy set can serve as a good indicator for generalization in the source domain.
Furthermore, we report the correlation results of target domain generalization on two datasets, i.e., ACDC and BDD100K, in each row of Fig. 15. We compare three different choices of the proxy set in each column, namely the original Cityscapes validation set, the intra-source style augmented Cityscapes validation set and the target-style augmented validation set. Blue and orange dots represent CNN- and transformer-based backbones, respectively. Quantitatively, the correlation coefficients of Figs. 15a and 15d are rather low. Also, in Fig. 15a, some blue points in the upper right corner have stronger performance on the Cityscapes validation set than the orange points, but are worse on the ACDC test data. This suggests that evaluation on the original Cityscapes (source) validation set cannot properly reflect the generalization performance on the target domain. It thus raises the concern that the traditional way of selecting the best model based on the source validation performance could be problematic when the deployment environment involves data of unknown target domains. By applying intra-source style augmentation to the Cityscapes validation set, the correlation coefficient is improved (see Figs. 15b and 15e). We hypothesize that style mixing results in better data coverage and thus better represents a model's generalization ability under style shifts. Furthermore, whenever it is possible to access images of the target domain, even without annotation, we can utilize the styles of the unlabeled target data and achieve the strongest correlation, as in Figs. 15c and 15f. In addition to the correlation metrics, models generally have higher mIoU on the Cityscapes validation set than on the intra-source style and target domain style augmented proxy sets, and the mIoU range on the intra-source proxy set is closer to the one obtained using target styles, which also supports our hypothesis above.
Besides, we observe an interesting phenomenon in Fig. 15: all transformer-based models (orange dots) lie above the linear fit. This suggests that transformer-based models present a better generalization ability under natural shifts than CNN-based models (blue dots), which is consistent with prior findings on the robustness of transformers.
To sum up, we present a new use case of the proposed exemplar-based style synthesis pipeline and demonstrate that stylized samples can serve as a proxy validation set and a strong indicator of a model's generalization capability without introducing additional annotation effort. Based on this observation, we can better utilize existing annotated data, together with our exemplar-based style synthesis pipeline, to select models in practice, especially for deployment in an open-world environment where unknown target data commonly exists.

Conclusion and Discussions
In this paper, we propose a GAN inversion based style synthesis pipeline for domain generalization in semantic segmentation. The key enabler for our pipeline is the masked noise encoder, which is capable of preserving fine-grained content details and allows style mixing between images without affecting the semantic content. In particular, we employ intra-source style augmentation (ISSA) for learning domain generalized semantic segmentation using restricted training data from a single source domain. Extensive experimental results verify the effectiveness of ISSA for domain generalization across different datasets and network architectures. We further demonstrate the plug-n-play ability of the proposed pipeline. Without retraining the encoder and generator, our model can be used directly on extra-source exemplars such as web-crawled images, enabling extra-source style augmentation (ESSA). It also opens up applications beyond data augmentation for improved domain generalization. Specifically, we show that the intra- & extra-source exemplar-based style synthesis pipeline can be used to create proxy validation sets for comparing the generalization capability of diverse models on both the source and target domain without extra data annotation effort.
Limitation and future work. One limitation of ISSA is that our style mixing is a global transformation, which cannot specifically alter the style of local objects, e.g., adjusting a vehicle's color from red to black, although local areas are inevitably modified when the image is changed globally. Also, compared to simple data augmentation such as color transformation, our pipeline requires higher computational effort for training. It takes around 7 days to train the masked noise encoder at 256 × 512 resolution on 2 GPUs, and a similar amount of time is required for the StyleGAN2 training. Nonetheless, data augmentation only involves the inference time of our encoder, which is much faster, i.e., around 0.1 seconds per image, compared to optimization-based methods such as PTI (Roich et al., 2021), which takes 55.7 seconds per image.
In the future, it is challenging yet interesting to extend our work with more flexible local editing. One potential direction is to exploit a pre-trained language-vision model, such as CLIP (Radford et al., 2021), to synthesize styles conditioned on text rather than an image. For instance, by providing the text condition "snowy road", ideally we would obtain an image where there is snow on the road while the other semantic classes remain unchanged. Recent works (Bar-Tal et al., 2022; Hertz et al., 2022; Kawar et al., 2022) have studied local editing conditioned on text. However, CLIP exhibits a strong bias (Bar-Tal et al., 2022) and may generate undesirable results, and the edited region may be insufficiently aligned with the other parts of the image. Overall, there is still large room for improvement in synthesizing images with more control over both style and content.
Data Availability.The datasets analysed during the current study are available at Cityscapes, ACDC, BDD100K, Dark Zürich, Landscape Pictures repository, respectively.
Fig. 2: Qualitative results (best viewed in color and zoomed in) of StyleGAN2 inversion methods on Cityscapes, i.e., pSp (Richardson et al., 2021), pSp†, the Feature-Style encoder (Yao et al., 2022) and our masked noise encoder. Note, pSp† is an improved version of pSp (Richardson et al., 2021) introduced by us, training pSp with an additional discriminator and incorporating synthesized images for better initialization. pSp† can reconstruct the rough layout of the scene but still struggles to preserve details. The Feature-Style encoder shows a better reconstruction quality, yet it cannot faithfully reconstruct small objects (e.g., pedestrians), and some objects (e.g., the vehicle, bicycle) are rather blurry. Our masked noise encoder has the highest image fidelity, preserving finer details in the inverted image.

Fig. 4 :
Fig. 4: Style mixing effect enabled by random noise masking (best viewed in color). Despite the good reconstruction quality, the encoder trained without masking cannot change the style of the given Content image. In contrast, the encoder trained with masking can modify it using the style from the given Style image.

Fig. 6 :
Fig. 6: Style mixing process. The generator G takes the latent codes {w_s^k} of I_s and the noise map ε_c of I_c, and produces the stylized image, i.e., G(w_s^k, ε_c).

Fig. 7 :
Fig. 7: Visual examples of style mixing on BDD100K (best viewed in color) enabled by our masked noise encoder. By combining the latent codes {w_s^k} of I_s and the noise map ε_c of I_c, the synthesized images G(w_s^k, ε_c) preserve the content of I_c with a new style resembling I_s.

Fig. 9 :
Fig. 9: Semantic segmentation results of Cityscapes to ACDC generalization using HRNet. The HRNet is trained on Cityscapes only. The segmenter trained with ISSA provides more reasonable predictions under adverse weather conditions.
Fig. 11: Extra-source exemplar based style synthesis using web-crawled images, where the generator and encoder are only trained on Cityscapes. Except for the Content 1 image of the first 2 rows, all the others are web-crawled images.

Fig. 14 :
Fig. 14: Correlation between real Cityscapes test performance and intra-source style augmented proxy performance for 95 models. Spearman's Rank Correlation coefficient (ρ) and Kendall Rank Correlation coefficient (τ) are computed to quantitatively measure the correlation strength. Blue and orange dots represent CNN- and transformer-based backbones, respectively. We observe a strong correlation between the real test mIoU and the proxy mIoU.

Fig. 15 :
Fig. 15: Correlation between test performance and proxy performance for 95 models. We compute Spearman's Rank Correlation coefficient (ρ) and Kendall Rank Correlation coefficient (τ) to quantitatively measure the correlation strength. Blue and orange dots represent CNN- and transformer-based backbones, respectively. In each row, we investigate the correlation between the real test performance, i.e., mIoU on ACDC and BDD100K, and the mIoU on different proxy sets. We observe that Figs. 15c and 15f achieve the strongest correlation for each scenario, which indicates that it is beneficial to build a proper proxy set using the styles of the corresponding test dataset.

Table 1 :
Reconstruction quality on Cityscapes at the resolution 128 × 256. MSE, LPIPS (Zhang et al., 2018b) and FID (Heusel et al., 2017) respectively measure the pixel-wise reconstruction difference, perceptual difference, and distribution difference between the real and reconstructed images. The proposed masked noise encoder (Ours) consistently outperforms pSp, pSp† and the Feature-Style encoder. Note, pSp† is introduced by us, by training pSp with an additional discriminator and incorporating synthesized images for better initialization.
Fig. 8: Influence of the noise map resolution on style-mixing ability (panels: H/16 × W/16, H/4 × W/4 (Ours), H × W). Using a higher-resolution noise map, e.g., H × W, leads to poor style-mixing ability, while a too low resolution, e.g., H/16 × W/16, cannot reconstruct the scene faithfully.

Table 2 :
The effect of random noise masking on improving domain generalization via ISSA. We report the mean Intersection over Union (mIoU) of HRNet (Wang et al., 2021b) trained on Cityscapes at the resolution 256 × 512. BDD100K (BDD), ACDC, and Dark Zürich (DarkZ) represent different domain shifts from Cityscapes.

Table 3 :
Ablation on the mask patch size and masking ratio. The influence of the patch size on the reconstruction is minor, while the masking ratio is more important, i.e., a higher masking ratio has a negative impact.

Table 5 :
Comparison of data augmentation for improving domain generalization, i.e., from Cityscapes (train) to ACDC (unseen). The mean Intersection over Union (mIoU) is reported on Cityscapes (CS), four individual scenarios of ACDC (Rain, Fog, Snow and Night) and the whole ACDC (Avg.). ColorTransform consists of various color transformations such as altering the contrast, brightness and saturation, luma flip and hue rotation. Hendrycks-Weather (Hendrycks and Dietterich, 2018) simulates weather conditions in a synthetic manner for data augmentation, and Hendrycks-Digital is composed of contrast, elastic transformation, pixelation and JPEG corruption. Oracle indicates supervised training on both Cityscapes and ACDC, serving as an upper bound on ACDC for the other methods. Note, it is not supposed to be an upper bound on Cityscapes. Underline denotes worse results than the baseline on ACDC. ISSA performs the best and consistently improves the mIoU in all four scenarios of ACDC using both HRNet and SegFormer.

Table 6 :
Comparison of data augmentation for improving domain generalization, i.e., from Cityscapes (train) to ACDC, BDD100K and Dark Zürich (unseen). ISSA consistently outperforms the other data augmentation techniques across different datasets and network architectures, consistent with Table 5.

Table 7:
Comparison of data augmentation methods under the more challenging day-to-night setting, i.e., from Cityscapes (train) to Dark Zürich (unseen).

Table 9 :
Combining ISSA with DeepLab v3+ (Chen et al., 2018b) and RobustNet (Choi et al., 2021). We adopt the experimental setting of RobustNet and use DeepLab v3+ (Chen et al., 2018b) as the baseline. Our ISSA is complementary to RobustNet and further improves its generalization performance.
Table 9 verifies the effectiveness of ISSA, which brings extra gains for RobustNet and SHADE. For RobustNet, the performance in the challenging day-to-night scenario, i.e., Cityscapes to Dark Zürich, is boosted from 20.11% to 23.09% mIoU.

Table 12 :
Utilizing Landscape Pictures as extra-source exemplars for style augmentation, where the generator and encoder are only trained on Cityscapes (CS-G-E). ESSA can further improve the generalization performance from Cityscapes to other unseen datasets.

Visual examples of stylized data obtained by transferring the style of one unannotated ACDC sample (target domain) to Cityscapes (source domain). Best viewed in color.