Abstract
In this chapter, we introduce geometrical feature transformation methods for transfer learning, which differ from the statistical feature transformation methods of the previous chapter. Geometrical features exploit the underlying geometric structure of the data to obtain clean and effective representations with remarkable performance. As with statistical features, there are many kinds of geometrical features. We focus on three types of geometrical feature transformation methods: subspace learning, manifold learning, and optimal transport. Although these approaches differ in methodology, all of them are important in transfer learning.
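As a minimal illustration of the subspace-learning family, the sketch below implements the core idea of subspace alignment (Fernando et al., 2013) in NumPy: compute a PCA basis for each domain and learn the linear map that aligns the source subspace to the target subspace. The function name and the choice of subspace dimensionality `k` are illustrative, not from the chapter.

```python
import numpy as np

def subspace_alignment(Xs, Xt, k):
    """Align the source PCA subspace to the target PCA subspace
    (the basic idea of Fernando et al., 2013).

    Xs: source samples of shape (ns, d)
    Xt: target samples of shape (nt, d)
    k:  subspace dimensionality (k <= d)
    Returns k-dimensional source and target representations."""
    # Top-k principal axes of each domain via SVD of the centered data;
    # the rows of Vt are the principal directions.
    _, _, Vs = np.linalg.svd(Xs - Xs.mean(0), full_matrices=False)
    _, _, Vt = np.linalg.svd(Xt - Xt.mean(0), full_matrices=False)
    Ps, Pt = Vs[:k].T, Vt[:k].T           # (d, k) orthonormal bases
    M = Ps.T @ Pt                         # closed-form alignment matrix
    Zs = (Xs - Xs.mean(0)) @ Ps @ M       # source projected, then aligned
    Zt = (Xt - Xt.mean(0)) @ Pt           # target projected as-is
    return Zs, Zt
```

After this alignment, a classifier trained on `Zs` can be applied directly to `Zt`, since both now live in the target's k-dimensional subspace.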
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this chapter
Wang, J., Chen, Y. (2023). Geometrical Feature Transformation Methods. In: Introduction to Transfer Learning. Machine Learning: Foundations, Methodologies, and Applications. Springer, Singapore. https://doi.org/10.1007/978-981-19-7584-4_6
DOI: https://doi.org/10.1007/978-981-19-7584-4_6
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-7583-7
Online ISBN: 978-981-19-7584-4