Domain Adaptation Using the Grassmann Manifold

Chapter in: Riemannian Computing in Computer Vision

Abstract

Modern data analysis is flush with large databases and high-dimensional inference problems. The scale of these problems often stems from variation in the data that is difficult to incorporate into well-known classical methods. One important source of such variation is a difference in data sources across observations; the problem of accounting for it is known as domain adaptation. Many domain adaptation techniques use a shared representation to remove domain-specific variation from the observations. One way to obtain such a representation is through dimension reduction, using a linear projection onto a lower-dimensional subspace of the predictors. Estimating linear projections is intrinsically linked to parameters lying on the Grassmann manifold. We present some historical approaches and their relationship to recent advances in domain adaptation that exploit the Grassmannian through subspace estimation.
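To make the subspace view concrete, the sketch below illustrates one simple strategy in this family, often called subspace alignment; it is an assumption-laden illustration, not the specific procedure developed in this chapter. Each domain's top principal subspace is treated as a point on the Grassmann manifold, and a single linear alignment step maps source data toward the target subspace. The function names, the NumPy-based implementation, and the choice of subspace dimension d = 5 are all hypothetical.

```python
import numpy as np

def principal_subspace(X, d):
    """Orthonormal basis (p x d) of the top-d principal subspace of X (n x p)."""
    Xc = X - X.mean(axis=0)                      # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T                              # a point on Grassmann(d, p)

def align_and_project(Xs, Xt, d):
    """Project source/target data into a shared d-dimensional representation."""
    Bs = principal_subspace(Xs, d)               # source subspace basis
    Bt = principal_subspace(Xt, d)               # target subspace basis
    M = Bs.T @ Bt                                # least-squares map between bases
    Zs = (Xs - Xs.mean(axis=0)) @ (Bs @ M)       # source, aligned to target
    Zt = (Xt - Xt.mean(axis=0)) @ Bt             # target, in its own coordinates
    return Zs, Zt

# Toy example: a "target" domain shifted away from the "source" domain.
rng = np.random.default_rng(0)
Xs = rng.normal(size=(200, 20))
Xt = rng.normal(size=(150, 20)) + 0.5
Zs, Zt = align_and_project(Xs, Xt, d=5)
print(Zs.shape, Zt.shape)                        # (200, 5) (150, 5)
```

Richer methods in this vein replace the single alignment map with subspaces sampled along the Grassmannian geodesic between the source and target bases, giving a continuum of intermediate representations between the two domains.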



Author information

Corresponding author

Correspondence to David A. Shaw.

Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Shaw, D.A., Chellappa, R. (2016). Domain Adaptation Using the Grassmann Manifold. In: Turaga, P., Srivastava, A. (eds) Riemannian Computing in Computer Vision. Springer, Cham. https://doi.org/10.1007/978-3-319-22957-7_15

  • DOI: https://doi.org/10.1007/978-3-319-22957-7_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-22956-0

  • Online ISBN: 978-3-319-22957-7

  • eBook Packages: Engineering (R0)
