Abstract
In this chapter, we discuss the safety and robustness of transfer learning. By safety, we refer to defenses against attacks and against the misuse of private data. By robustness, we mean transfer mechanisms that prevent the model from learning spurious correlations. We therefore introduce the related topics at three levels: (1) the framework level, i.e., safe fine-tuning that resists defect inheritance from pre-trained models; (2) the data level, i.e., transfer learning systems that guard against data privacy leakage; and (3) the mechanism level, i.e., causal learning.
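To make the data-level idea concrete, a common privacy-preserving transfer setting is federated learning, where clients train locally and only model parameters, never raw data, are shared with a server. Below is a minimal sketch of the weighted parameter averaging at the heart of FedAvg (McMahan et al., 2017); the function name and the plain-list parameter representation are illustrative simplifications, not the chapter's actual implementation.

```python
def fed_avg(client_params, client_sizes):
    """Weighted average of client parameter vectors (FedAvg-style).

    client_params: list of parameter vectors, one per client
    client_sizes:  number of local training samples per client
    """
    total = sum(client_sizes)
    dim = len(client_params[0])
    averaged = [0.0] * dim
    for params, n in zip(client_params, client_sizes):
        weight = n / total  # clients with more data contribute more
        for i, p in enumerate(params):
            averaged[i] += weight * p
    return averaged

# Two clients with unequal data: the larger client dominates the average,
# and neither client's training data ever leaves its device.
global_params = fed_avg([[1.0, 2.0], [3.0, 4.0]], [10, 30])
print(global_params)  # [2.5, 3.5]
```

In a full system this aggregation step repeats over many communication rounds, with each client fine-tuning the averaged global model on its local data in between.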
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this chapter
Wang, J., Chen, Y. (2023). Safe and Robust Transfer Learning. In: Introduction to Transfer Learning. Machine Learning: Foundations, Methodologies, and Applications. Springer, Singapore. https://doi.org/10.1007/978-981-19-7584-4_12
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-7583-7
Online ISBN: 978-981-19-7584-4