Multi-Domain Transfer Component Analysis for Domain Generalization

  • Thomas Grubinger
  • Adriana Birlutiu
  • Holger Schöner
  • Thomas Natschläger
  • Tom Heskes

Abstract

This paper presents the domain generalization methods Multi-Domain Transfer Component Analysis (Multi-TCA) and Multi-Domain Semi-Supervised Transfer Component Analysis (Multi-SSTCA), which extend the domain adaptation method Transfer Component Analysis (TCA) to multiple domains. Multi-TCA learns a shared subspace by minimizing the dissimilarities across domains while maximally preserving the data variance. The proposed methods are compared to other state-of-the-art methods on three public datasets and on a real-world case study on climate control in residential buildings. Experimental results demonstrate that Multi-TCA and Multi-SSTCA can improve predictive performance on previously unseen domains. We perform a sensitivity analysis of the model parameters and evaluate different kernel distances, both of which facilitate further improvements in predictive performance.
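
As a rough illustration of the idea (a sketch, not the authors' implementation), the snippet below extends the standard TCA eigenproblem to several domains by summing pairwise maximum mean discrepancy (MMD) terms into a single coefficient matrix. The function name multi_tca, the RBF kernel choice, and the parameters mu and sigma are assumptions made for this example; the paper's exact objective may differ.

```python
# Minimal Multi-TCA sketch: pool all domains, penalize pairwise MMD,
# preserve variance via the centered kernel. Assumed formulation.
import numpy as np
from scipy.linalg import eigh

def multi_tca(domains, n_components=2, mu=1.0, sigma=1.0):
    """domains: list of (n_i, d) arrays. Returns pooled data embedded
    into a shared (n, n_components) subspace."""
    X = np.vstack(domains)                       # pooled samples
    n = X.shape[0]
    sizes = [len(D) for D in domains]
    starts = np.cumsum([0] + sizes[:-1])

    # RBF kernel over the pooled samples (O(n^2) memory; fine for small n)
    sq = np.sum((X[:, None] - X[None]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * sigma ** 2))

    # L accumulates the MMD coefficient matrix of every domain pair
    L = np.zeros((n, n))
    for i in range(len(domains)):
        for j in range(i + 1, len(domains)):
            e = np.zeros((n, 1))
            e[starts[i]:starts[i] + sizes[i]] = 1.0 / sizes[i]
            e[starts[j]:starts[j] + sizes[j]] = -1.0 / sizes[j]
            L += e @ e.T

    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix

    # Leading generalized eigenvectors of KHK w = lam (I + mu*KLK) w
    # trade off variance preservation against cross-domain dissimilarity.
    A = np.eye(n) + mu * K @ L @ K               # positive definite
    B = K @ H @ K
    vals, vecs = eigh(B, A)
    W = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return K @ W
```

Because L is a sum of outer products, I + mu*K@L@K stays positive definite, so the generalized eigenproblem is well posed; with exactly two domains the loop reduces to the original TCA construction.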

Keywords

Domain generalization · Domain adaptation · Transfer learning · Transfer component analysis

Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  1. Data Analysis Systems, Software Competence Center Hagenberg, Hagenberg im Mühlkreis, Austria
  2. Faculty of Science, “1 Decembrie 1918” University of Alba-Iulia, Alba Iulia, Romania
  3. Institute for Computing and Information Sciences, Radboud University Nijmegen, Nijmegen, The Netherlands