Learning to Optimize Domain Specific Normalization for Domain Generalization

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12367)

Abstract

We propose a simple but effective multi-source domain generalization technique for deep neural networks that incorporates optimized normalization layers specific to individual domains. Our approach employs multiple normalization methods while learning separate affine parameters per domain. For each domain, activations are normalized by a weighted average of multiple normalization statistics, and the statistics are tracked separately for each normalization type when necessary. Specifically, our implementation combines batch and instance normalization to identify the best mixture of the two methods for each domain. The optimized normalization layers effectively enhance the generalizability of the learned model. We demonstrate state-of-the-art accuracy on standard domain generalization benchmarks, as well as applicability to related tasks such as multi-source domain adaptation and domain generalization in the presence of label noise.
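The core operation described in the abstract, normalizing activations by a weighted average of batch-normalization and instance-normalization statistics, with separate affine parameters per domain, can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation; the function name `domain_specific_norm` and the two-logit softmax parameterization of the mixture weights are assumptions made here for clarity.

```python
import numpy as np

def domain_specific_norm(x, gamma, beta, w_logits, eps=1e-5):
    """Normalize one domain's activations with a learned BN/IN mixture.

    x        : activations of shape (N, C, H, W) from a single source domain
    gamma,
    beta     : per-domain affine parameters, each of shape (C,)
    w_logits : two learnable logits; their softmax gives the mixture
               weights for (batch-norm, instance-norm) statistics
    """
    # Batch-norm statistics: averaged over N, H, W per channel.
    mu_bn = x.mean(axis=(0, 2, 3), keepdims=True)   # shape (1, C, 1, 1)
    var_bn = x.var(axis=(0, 2, 3), keepdims=True)
    # Instance-norm statistics: averaged over H, W per sample and channel.
    mu_in = x.mean(axis=(2, 3), keepdims=True)      # shape (N, C, 1, 1)
    var_in = x.var(axis=(2, 3), keepdims=True)
    # Softmax turns the logits into non-negative weights summing to one.
    w = np.exp(w_logits - w_logits.max())
    w = w / w.sum()
    # Mix the statistics, then normalize and apply the domain's affine transform.
    mu = w[0] * mu_bn + w[1] * mu_in
    var = w[0] * var_bn + w[1] * var_in
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma[None, :, None, None] * x_hat + beta[None, :, None, None]
```

At one extreme of the logits the layer behaves like plain batch normalization; at the other, like instance normalization. During training, the paper's idea is that each domain learns where on this spectrum its activations are best normalized.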

Keyword

Domain generalization 

Notes

Acknowledgement

This work was supported by Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) [2016-0-00563, 2017-0-01779].

Supplementary material

Supplementary material 1: 504482_1_En_5_MOESM1_ESM.pdf (PDF, 143 KB)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Seoul National University, Seoul, South Korea
  2. NEC Laboratories America, Princeton, USA
  3. LG Electronics, Seoul, South Korea
