
Online Meta-learning for Multi-source and Semi-supervised Domain Adaptation

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12361)

Abstract

Domain adaptation (DA) is the topical problem of adapting models from labelled source datasets so that they perform well on target datasets where only unlabelled or partially labelled data is available. Many methods have been proposed to address this problem through different ways of minimising the domain shift between source and target datasets. In this paper, we take an orthogonal perspective and propose a framework to further enhance performance by meta-learning the initial conditions of existing DA algorithms. This is challenging compared to the more widely considered setting of few-shot meta-learning, due to the length of the computation graph involved. We therefore propose an online shortest-path meta-learning framework that is both computationally tractable and practically effective for improving DA performance. We present variants for both multi-source unsupervised domain adaptation (MSDA) and semi-supervised domain adaptation (SSDA). Importantly, our approach is agnostic to the base adaptation algorithm and can be applied to improve many techniques. Experimentally, we demonstrate improvements on classic (DANN) and recent (MCD and MME) techniques for MSDA and SSDA, ultimately achieving state-of-the-art results on several DA benchmarks, including the largest-scale benchmark, DomainNet.
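
To make the shortest-path idea concrete, the following is a minimal sketch of how the initial condition of a base DA learner could be meta-learned online with a first-order (Reptile-style) update; it is an illustrative reading of the abstract under stated assumptions, not the authors' exact algorithm. Here `model`, `da_step` and `loader` are hypothetical placeholders, with `da_step(model, batch)` standing in for one optimisation step of whichever base adaptation method is being improved (e.g. DANN, MCD or MME).

```python
import copy
import torch

def meta_da_train(model, da_step, loader, inner_steps=5, meta_lr=0.1, rounds=100):
    """Online shortest-path meta-learning of a DA model's initial condition.

    A first-order sketch: rather than back-propagating through the long inner
    computation graph, the initialisation is stepped along the straight line
    towards the weights reached by the base DA algorithm.
    """
    for _ in range(rounds):
        init = copy.deepcopy(model.state_dict())   # snapshot the current initial condition
        batches = iter(loader)
        for _ in range(inner_steps):
            da_step(model, next(batches))          # inner loop: base DA updates (DANN/MCD/MME/...)
        adapted = model.state_dict()               # endpoint of the inner trajectory
        with torch.no_grad():
            # Shortest-path meta-update: init <- init + meta_lr * (adapted - init).
            # Integer buffers (e.g. BatchNorm step counters) are copied as-is.
            new_init = {
                k: (init[k] + meta_lr * (adapted[k] - init[k])
                    if init[k].is_floating_point() else adapted[k])
                for k in init
            }
        model.load_state_dict(new_init)            # online: next round starts from the new init
    return model
```

Because only the start and end points of the inner trajectory are needed, such a meta-update avoids storing or differentiating through the full inner computation graph, which is what would make an online scheme tractable over long DA training runs.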

Keywords

Meta-learning · Domain adaptation

References

  1. Andrychowicz, M., et al.: Learning to learn by gradient descent by gradient descent. In: NeurIPS (2016)
  2. Balaji, Y., Chellappa, R., Feizi, S.: Normalized Wasserstein distance for mixture distributions with applications in adversarial learning and domain adaptation. In: ICCV (2019)
  3. Balaji, Y., Sankaranarayanan, S., Chellappa, R.: MetaReg: towards domain generalization using meta-regularization. In: NeurIPS (2018)
  4. Ben-David, S., Blitzer, J., Crammer, K., Kulesza, A., Pereira, F., Vaughan, J.W.: A theory of learning from different domains. Mach. Learn. 79, 151–175 (2010)
  5. Ben-David, S., Blitzer, J., Crammer, K., Pereira, F.: Analysis of representations for domain adaptation. In: NeurIPS (2006)
  6. Bousmalis, K., Trigeorgis, G., Silberman, N., Krishnan, D., Erhan, D.: Domain separation networks. In: NeurIPS (2016)
  7. Carlucci, F.M., D'Innocente, A., Bucci, S., Caputo, B., Tommasi, T.: Domain generalization by solving jigsaw puzzles. In: CVPR (2019)
  8. Chang, W.G., You, T., Seo, S., Kwak, S., Han, B.: Domain-specific batch normalization for unsupervised domain adaptation. In: CVPR (2019)
  9. Daumé, H.: Frustratingly easy domain adaptation. In: ACL (2007)
  10. Donahue, J., Hoffman, J., Rodner, E., Saenko, K., Darrell, T.: Semi-supervised domain adaptation with instance constraints. In: CVPR (2013)
  11. Dou, Q., Castro, D.C., Kamnitsas, K., Glocker, B.: Domain generalization via model-agnostic learning of semantic features. In: NeurIPS (2019)
  12. Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: ICML (2017)
  13. Finn, C., Rajeswaran, A., Kakade, S.M., Levine, S.: Online meta-learning. In: ICML (2019)
  14. Franceschi, L., Frasconi, P., Salzo, S., Grazzi, R., Pontil, M.: Bilevel programming for hyperparameter optimization and meta-learning. In: ICML (2018)
  15. French, G., Mackiewicz, M., Fisher, M.: Self-ensembling for visual domain adaptation. In: ICLR (2018)
  16. Ganin, Y., et al.: Domain-adversarial training of neural networks. J. Mach. Learn. Res. 17(59), 1–35 (2016)
  17. Grandvalet, Y., Bengio, Y.: Semi-supervised learning by entropy minimization. In: NeurIPS (2005)
  18. Hoffman, J., et al.: CyCADA: cycle-consistent adversarial domain adaptation. In: ICML (2018)
  19. Kim, M., Sahu, P., Gholami, B., Pavlovic, V.: Unsupervised visual domain adaptation: a deep max-margin Gaussian process approach. In: CVPR (2019)
  20. Lee, C.Y., Batra, T., Baig, M.H., Ulbricht, D.: Sliced Wasserstein discrepancy for unsupervised domain adaptation. In: CVPR (2019)
  21. Li, D., Yang, Y., Song, Y.Z., Hospedales, T.: Learning to generalize: meta-learning for domain generalization. In: AAAI (2018)
  22. Li, D., Yang, Y., Song, Y.Z., Hospedales, T.M.: Deeper, broader and artier domain generalization. In: ICCV (2017)
  23. Li, D., Zhang, J., Yang, Y., Liu, C., Song, Y.Z., Hospedales, T.M.: Episodic training for domain generalization. In: ICCV (2019)
  24. Li, Z., Zhou, F., Chen, F., Li, H.: Meta-SGD: learning to learn quickly for few-shot learning. arXiv:1707.09835 (2017)
  25. Liu, H., Simonyan, K., Yang, Y.: DARTS: differentiable architecture search. In: ICLR (2019)
  26. Liu, M.Y., Tuzel, O.: Coupled generative adversarial networks. In: NeurIPS (2016)
  27. Long, M., Cao, Y., Wang, J., Jordan, M.I.: Learning transferable features with deep adaptation networks. In: ICML (2015)
  28. Long, M., Cao, Z., Wang, J., Jordan, M.I.: Conditional adversarial domain adaptation. In: NeurIPS (2018)
  29. Long, M., Zhu, H., Wang, J., Jordan, M.I.: Unsupervised domain adaptation with residual transfer networks. In: NeurIPS (2016)
  30. Long, M., Zhu, H., Wang, J., Jordan, M.I.: Deep transfer learning with joint adaptation networks. In: ICML (2017)
  31. Luo, Y., Zheng, L., Guan, T., Yu, J., Yang, Y.: Taking a closer look at domain shift: category-level adversaries for semantics consistent domain adaptation. In: CVPR (2019)
  32. van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008)
  33. Maclaurin, D., Duvenaud, D., Adams, R.: Gradient-based hyperparameter optimization through reversible learning. In: ICML (2015)
  34. Mancini, M., Porzi, L., Rota Bulò, S., Caputo, B., Ricci, E.: Boosting domain adaptation by discovering latent domains. In: CVPR (2018)
  35. Mansour, Y., Mohri, M., Rostamizadeh, A.: Domain adaptation with multiple sources. In: NeurIPS (2009)
  36. Nichol, A., Achiam, J., Schulman, J.: On first-order meta-learning algorithms. arXiv:1803.02999 (2018)
  37. Parisotto, E., Ghosh, S., Yalamanchi, S.B., Chinnaobireddy, V., Wu, Y., Salakhutdinov, R.: Concurrent meta reinforcement learning. arXiv:1903.02710 (2019)
  38. Peng, X., Bai, Q., Xia, X., Huang, Z., Saenko, K., Wang, B.: Moment matching for multi-source domain adaptation. In: ICCV (2019)
  39. Rajeswaran, A., Finn, C., Kakade, S., Levine, S.: Meta-learning with implicit gradients. In: NeurIPS (2019)
  40. Ravi, S., Larochelle, H.: Optimization as a model for few-shot learning. In: ICLR (2017)
  41. Saito, K., Kim, D., Sclaroff, S., Darrell, T., Saenko, K.: Semi-supervised domain adaptation via minimax entropy. In: ICCV (2019)
  42. Saito, K., Ushiku, Y., Harada, T., Saenko, K.: Adversarial dropout regularization. In: ICLR (2018)
  43. Saito, K., Watanabe, K., Ushiku, Y., Harada, T.: Maximum classifier discrepancy for unsupervised domain adaptation. In: CVPR (2018)
  44. Schmidhuber, J.: Learning to control fast-weight memories: an alternative to dynamic recurrent networks. Neural Comput. 4, 131–139 (1992)
  45. Schmidhuber, J., Zhao, J., Wiering, M.: Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement. Mach. Learn. 28, 105–130 (1997)
  46. Thrun, S., Pratt, L. (eds.): Learning to Learn. Kluwer Academic Publishers, Boston (1998)
  47. Tzeng, E., Hoffman, J., Saenko, K., Darrell, T.: Adversarial discriminative domain adaptation. In: CVPR (2017)
  48. Tzeng, E., Hoffman, J., Zhang, N., Saenko, K., Darrell, T.: Deep domain confusion: maximizing for domain invariance. arXiv:1412.3474 (2014)
  49. Venkateswara, H., Eusebio, J., Chakraborty, S., Panchanathan, S.: Deep hashing network for unsupervised domain adaptation. In: CVPR (2017)
  50. Vinyals, O., Blundell, C., Lillicrap, T., Wierstra, D., et al.: Matching networks for one shot learning. In: NeurIPS (2016)
  51. Xu, P., Gurram, P., Whipps, G., Chellappa, R.: Wasserstein distance based domain adaptation for object detection. arXiv:1909.08675 (2019)
  52. Xu, R., Chen, Z., Zuo, W., Yan, J., Lin, L.: Deep cocktail network: multi-source unsupervised domain adaptation with category shift. In: CVPR (2018)
  53. Xu, Z., van Hasselt, H.P., Silver, D.: Meta-gradient reinforcement learning. In: NeurIPS (2018)
  54. Yao, T., Pan, Y., Ngo, C.W., Li, H., Mei, T.: Semi-supervised domain adaptation with subspace learning for visual recognition. In: CVPR (2015)
  55. Zhao, H., Zhang, S., Wu, G., Moura, J.M., Costeira, J.P., Gordon, G.J.: Adversarial multiple source domain adaptation. In: NeurIPS (2018)
  56. Zheng, Z., Oh, J., Singh, S.: On learning intrinsic rewards for policy gradient methods. In: NeurIPS (2018)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Samsung AI Center Cambridge, Cambridge, UK
  2. The University of Edinburgh, Edinburgh, UK
