
Geometrical Feature Transformation Methods

  • Chapter in Introduction to Transfer Learning

Abstract

In this chapter, we introduce geometrical feature transformation methods for transfer learning, which differ from the statistical feature transformation methods introduced earlier. Geometrical features exploit the underlying geometric structure of the data to obtain clean and effective representations with strong performance. As with statistical features, there are many kinds of geometrical features. We mainly introduce three types of geometrical feature transformation methods: subspace learning, manifold learning, and optimal transport. These methods differ in methodology, and all of them are important in transfer learning.
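To make the subspace-learning idea concrete, the following is a minimal sketch of a basic subspace-alignment step, assuming NumPy and scikit-learn are available; the function name, the use of PCA, and the subspace dimensionality are illustrative assumptions rather than the chapter's exact formulation.

```python
# Illustrative sketch of subspace alignment for unsupervised domain adaptation.
# Assumptions: NumPy + scikit-learn; d is a tunable subspace dimensionality.
import numpy as np
from sklearn.decomposition import PCA

def subspace_alignment(Xs, Xt, d=20):
    """Align the source PCA subspace to the target PCA subspace.

    Xs: (n_s, p) source features, Xt: (n_t, p) target features.
    Returns source and target data projected into comparable subspaces.
    """
    Ps = PCA(n_components=d).fit(Xs).components_.T  # (p, d) source basis
    Pt = PCA(n_components=d).fit(Xt).components_.T  # (p, d) target basis
    M = Ps.T @ Pt                                   # (d, d) alignment matrix
    Xs_aligned = Xs @ Ps @ M                        # source mapped toward the target subspace
    Xt_proj = Xt @ Pt                               # target in its own subspace
    return Xs_aligned, Xt_proj

# Usage: train any classifier on (Xs_aligned, ys) and evaluate it on Xt_proj.
```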


Notes

  1. https://pythonot.github.io/.
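The footnote above points to POT, the Python Optimal Transport library. Below is a minimal sketch, assuming toy data and a default regularization strength, of how its domain-adaptation interface can transport source samples toward a target domain; the data shapes and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of optimal-transport-based domain adaptation with the POT
# library (https://pythonot.github.io/). Data and hyperparameters are toy
# values for illustration only.
import numpy as np
import ot

rng = np.random.RandomState(0)
Xs = rng.randn(100, 16)        # source features (toy data)
Xt = rng.randn(120, 16) + 1.0  # shifted target features (toy data)

# Entropy-regularized optimal transport (Sinkhorn) from source to target.
mapper = ot.da.SinkhornTransport(reg_e=1.0)
mapper.fit(Xs=Xs, Xt=Xt)
Xs_transported = mapper.transform(Xs=Xs)  # source samples expressed in the target domain
```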



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Wang, J., Chen, Y. (2023). Geometrical Feature Transformation Methods. In: Introduction to Transfer Learning. Machine Learning: Foundations, Methodologies, and Applications. Springer, Singapore. https://doi.org/10.1007/978-981-19-7584-4_6


  • DOI: https://doi.org/10.1007/978-981-19-7584-4_6

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-7583-7

  • Online ISBN: 978-981-19-7584-4

  • eBook Packages: Computer Science (R0)
