
Safe and Robust Transfer Learning

A chapter in Introduction to Transfer Learning

Abstract

In this chapter, we discuss the safety and robustness of transfer learning. By safety, we refer to defenses against attacks and against the misuse of private data; by robustness, we mean transfer mechanisms that prevent the model from learning spurious correlations. We therefore introduce the related topics at three levels: (1) the framework level, i.e., a safe fine-tuning process that guards against defect inheritance; (2) the data level, i.e., transfer learning systems that guard against data privacy leakage; and (3) the mechanism level, i.e., causal learning.
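To make the data-level idea concrete, here is a minimal sketch of the server-side aggregation step in federated learning, in the style of FedAvg (McMahan et al., 2017): clients fine-tune locally and upload only model parameters, which the server averages weighted by local sample counts, so raw data never leaves the devices. The function name `fedavg`, the plain-NumPy parameter lists, and the toy numbers are illustrative assumptions, not code from the chapter:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: average each parameter tensor across
    clients, weighted by the number of local training samples.

    client_weights: one list of np.ndarray parameter tensors per client.
    client_sizes: number of local training samples per client.
    """
    total = float(sum(client_sizes))
    aggregated = []
    for layer_idx in range(len(client_weights[0])):
        # Weighted sum of this layer's parameters across all clients.
        layer = sum((n / total) * w[layer_idx]
                    for w, n in zip(client_weights, client_sizes))
        aggregated.append(layer)
    return aggregated

# Two clients holding 1 and 3 samples, each with a single scalar weight:
# (1 * 2.0 + 3 * 4.0) / 4 = 3.5
global_model = fedavg([[np.array([2.0])], [np.array([4.0])]], [1, 3])
```

Personalized variants (e.g., Arivazhagan et al., 2019; Chen et al., 2021) keep some layers local instead of aggregating everything, which is one way to handle non-i.i.d. client data.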


Notes

  1. https://pytorch.org/hub/.

  2. https://www.tensorflow.org/hub.

  3. https://huggingface.co/models.

  4. Other works (Johansson et al., 2019; Zhao et al., 2019; Chuang et al., 2020) have also identified problems with domain-invariant learning.

References

  • Agarwal, P., Paudel, D. P., Zaech, J.-N., and Van Gool, L. (2022). Unsupervised robust domain adaptation without source data. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2009–2018.

  • Ahmed, S. M., Raychaudhuri, D. S., Paul, S., Oymak, S., and Roy-Chowdhury, A. K. (2021). Unsupervised multi-source domain adaptation without access to source data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10103–10112.

  • Arivazhagan, M. G., Aggarwal, V., Singh, A. K., and Choudhary, S. (2019). Federated learning with personalization layers. arXiv preprint arXiv:1912.00818.

  • Arjovsky, M., Bottou, L., Gulrajani, I., and Lopez-Paz, D. (2019). Invariant risk minimization. arXiv preprint arXiv:1907.02893.

  • Atzmon, Y., Kreuk, F., Shalit, U., and Chechik, G. (2020). A causal view of compositional zero-shot recognition. Advances in Neural Information Processing Systems, 33.

  • Bahadori, M. T., Chalupka, K., Choi, E., Chen, R., Stewart, W. F., and Sun, J. (2017). Causal regularization. arXiv preprint arXiv:1702.02604.

  • Berthelot, D., Carlini, N., Goodfellow, I., Papernot, N., Oliver, A., and Raffel, C. (2019). MixMatch: A holistic approach to semi-supervised learning. arXiv preprint arXiv:1905.02249.

  • Besserve, M., Shajarisales, N., Schölkopf, B., and Janzing, D. (2018). Group invariance principles for causal generative models. In International Conference on Artificial Intelligence and Statistics, pages 557–565. PMLR.

  • Borgwardt, K. M., Gretton, A., Rasch, M. J., Kriegel, H.-P., Schölkopf, B., and Smola, A. J. (2006). Integrating structured biological data by kernel maximum mean discrepancy. Bioinformatics, 22(14):e49–e57.

  • Bridle, J. S., Heading, A. J., and MacKay, D. J. (1991). Unsupervised classifiers, mutual information and 'phantom targets'. In Advances in Neural Information Processing Systems (NIPS).

  • Cai, R., Li, Z., Wei, P., Qiao, J., Zhang, K., and Hao, Z. (2019). Learning disentangled semantic representation for domain adaptation. In Proceedings of the Conference of IJCAI, volume 2019, page 2060. NIH Public Access.

  • Chen, Y., Lu, W., Wang, J., Qin, X., and Qin, T. (2021). Federated learning with adaptive batchnorm for personalized healthcare. arXiv preprint arXiv:2112.00734.

  • Chen, Y., Qin, X., Wang, J., Yu, C., and Gao, W. (2020). FedHealth: A federated transfer learning framework for wearable healthcare. IEEE Intelligent Systems, 35(4):83–93.

  • Chidlovskii, B., Clinchant, S., and Csurka, G. (2016). Domain adaptation in the absence of source domain data. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 451–460.

  • Chin, T.-W., Zhang, C., and Marculescu, D. (2021). Renofeation: A simple transfer learning method for improved adversarial robustness. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 3243–3252.

  • Chuang, C.-Y., Torralba, A., and Jegelka, S. (2020). Estimating generalization under distribution shifts via domain-invariant representations. In International Conference on Machine Learning, pages 1984–1994. PMLR.

  • Feng, H.-Z., You, Z., Chen, M., Zhang, T., Zhu, M., Wu, F., Wu, C., and Chen, W. (2021). KD3A: Unsupervised multi-source decentralized domain adaptation via knowledge distillation. In International Conference on Machine Learning (ICML).

  • Gong, M., Zhang, K., Huang, B., Glymour, C., Tao, D., and Batmanghelich, K. (2018). Causal generative domain adaptation networks. arXiv preprint arXiv:1804.04333.

  • Gong, M., Zhang, K., Liu, T., Tao, D., Glymour, C., and Schölkopf, B. (2016). Domain adaptation with conditional transferable components. In International Conference on Machine Learning, pages 2839–2848.

  • He, Y., Shen, Z., and Cui, P. (2019). Towards non-i.i.d. image classification: A dataset and baselines. arXiv preprint arXiv:1906.02899.

  • Hou, Y. and Zheng, L. (2020). Source free domain adaptation with image translation. arXiv preprint arXiv:2008.07514.

  • Ilse, M., Tomczak, J. M., Louizos, C., and Welling, M. (2020). DIVA: Domain invariant variational autoencoders. In Medical Imaging with Deep Learning, pages 322–348. PMLR.

  • Ji, Y., Zhang, X., Ji, S., Luo, X., and Wang, T. (2018). Model-reuse attacks on deep learning systems. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pages 349–363.

  • Johansson, F. D., Sontag, D., and Ranganath, R. (2019). Support and invertibility in domain-invariant representations. In AISTATS, pages 527–536.

  • Khodak, M., Balcan, M.-F. F., and Talwalkar, A. S. (2019). Adaptive gradient-based meta-learning methods. In Advances in Neural Information Processing Systems, volume 32, pages 5917–5928.

  • Kilbertus, N., Parascandolo, G., and Schölkopf, B. (2018). Generalization in anti-causal learning. arXiv preprint arXiv:1812.00524.

  • Kundu, J. N., Venkat, N., Babu, R. V., et al. (2020). Universal source-free domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4544–4553.

  • Kurmi, V. K., Subramanian, V. K., and Namboodiri, V. P. (2021). Domain impression: A source data free domain adaptation method. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 615–625.

  • Li, T., Sahu, A. K., Zaheer, M., Sanjabi, M., Talwalkar, A., and Smith, V. (2020). Federated optimization in heterogeneous networks. In Proceedings of Machine Learning and Systems 2020, MLSys 2020, Austin, TX, USA, March 2–4, 2020. mlsys.org.

  • Liang, J., Hu, D., and Feng, J. (2020). Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation. In International Conference on Machine Learning, pages 6028–6039. PMLR.

  • Liang, J., Hu, D., He, R., and Feng, J. (2021a). Distill and fine-tune: Effective adaptation from a black-box source model. arXiv preprint arXiv:2104.01539.

  • Liang, J., Hu, D., Wang, Y., He, R., and Feng, J. (2021b). Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence.

  • Liu, C., Sun, X., Wang, J., Li, T., Qin, T., Chen, W., and Liu, T.-Y. (2020). Learning causal semantic representation for out-of-distribution prediction. arXiv preprint arXiv:2011.01681.

  • Lopez-Paz, D., Nishihara, R., Chintala, S., Schölkopf, B., and Bottou, L. (2017). Discovering causal signals in images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6979–6987.

  • McMahan, B., Moore, E., Ramage, D., Hampson, S., and y Arcas, B. A. (2017). Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273–1282. PMLR.

  • Pearl, J. (2009). Causality. Cambridge University Press.

  • Pearl, J. et al. (2009). Causal inference in statistics: An overview. Statistics Surveys, 3:96–146.

  • Peng, X., Usman, B., Kaushik, N., Hoffman, J., Wang, D., and Saenko, K. (2017). VisDA: The visual domain adaptation challenge. arXiv preprint arXiv:1710.06924.

  • Peters, J., Janzing, D., and Schölkopf, B. (2017). Elements of Causal Inference: Foundations and Learning Algorithms. MIT Press.

  • Rezaei, S. and Liu, X. (2020). A target-agnostic attack on deep models: Exploiting security vulnerabilities of transfer learning. In International Conference on Learning Representations (ICLR).

  • Rojas-Carulla, M., Schölkopf, B., Turner, R., and Peters, J. (2018). Invariant models for causal transfer learning. The Journal of Machine Learning Research, 19(1):1309–1342.

  • Schölkopf, B. (2019). Causality for machine learning. arXiv preprint arXiv:1911.10500.

  • Schölkopf, B., Janzing, D., Peters, J., Sgouritsa, E., Zhang, K., and Mooij, J. M. (2012). On causal and anticausal learning. In International Conference on Machine Learning (ICML 2012), pages 1255–1262. International Machine Learning Society.

  • Schölkopf, B., Janzing, D., Peters, J., and Zhang, K. (2011). Robust learning via cause-effect models. arXiv preprint arXiv:1112.2738.

  • Shen, Z., Cui, P., Kuang, K., Li, B., and Chen, P. (2018). Causally regularized learning with agnostic data selection bias. In Proceedings of the 2018 ACM Multimedia Conference, pages 411–419. ACM.

  • Smith, V., Chiang, C.-K., Sanjabi, M., and Talwalkar, A. (2017). Federated multi-task learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 4427–4437.

  • Sun, B. and Saenko, K. (2016). Deep CORAL: Correlation alignment for deep domain adaptation. In ECCV, pages 443–450.

  • Sun, X., Wu, B., Liu, C., Zheng, X., Chen, W., Qin, T., and Liu, T.-Y. (2020). Latent causal invariant model. arXiv preprint arXiv:2011.02203.

  • Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9.

  • T Dinh, C., Tran, N., and Nguyen, T. D. (2020). Personalized federated learning with Moreau envelopes. In Advances in Neural Information Processing Systems, volume 33.

  • Taufique, A. M. N., Jahan, C. S., and Savakis, A. (2021). ConDA: Continual unsupervised domain adaptation. arXiv preprint arXiv:2103.11056.

  • Teshima, T., Sato, I., and Sugiyama, M. (2020). Few-shot domain adaptation by causal mechanism transfer. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13–18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 9458–9469.

  • Tian, J., Zhang, J., Li, W., and Xu, D. (2021). VDM-DA: Virtual domain modeling for source data-free domain adaptation. arXiv preprint arXiv:2103.14357.

  • Wang, B., Yao, Y., Viswanath, B., Zheng, H., and Zhao, B. Y. (2018). With great training comes great vulnerability: Practical attacks against transfer learning. In 27th USENIX Security Symposium (USENIX Security 18), pages 1281–1297.

  • Xu, J., Glicksberg, B. S., Su, C., Walker, P., Bian, J., and Wang, F. (2021). Federated learning for healthcare informatics. Journal of Healthcare Informatics Research, 5(1):1–19.

  • Yang, Q., Liu, Y., Cheng, Y., Kang, Y., Chen, T., and Yu, H. (2019). Federated learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 13(3):1–207.

  • Yeganeh, Y., Farshad, A., Navab, N., and Albarqouni, S. (2020). Inverse distance aggregation for federated learning with non-iid data. In Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning, pages 150–159. Springer.

  • Yeh, H.-W., Yang, B., Yuen, P. C., and Harada, T. (2021). SoFA: Source-data-free feature alignment for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 474–483.

  • Yu, T., Bagdasaryan, E., and Shmatikov, V. (2020). Salvaging federated learning by local adaptation. arXiv preprint arXiv:2002.04758.

  • Zhang, K., Schölkopf, B., Muandet, K., and Wang, Z. (2013). Domain adaptation under target and conditional shift. In International Conference on Machine Learning, pages 819–827.

  • Zhang, Z., Li, Y., Wang, J., Liu, B., Li, D., Chen, X., Guo, Y., and Liu, Y. (2022). ReMoS: Reducing defect inheritance in transfer learning via relevant model slicing. In The 44th International Conference on Software Engineering.

  • Zhao, H., Des Combes, R. T., Zhang, K., and Gordon, G. (2019). On learning invariant representations for domain adaptation. In International Conference on Machine Learning, pages 7523–7532.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

Cite this chapter

Wang, J., Chen, Y. (2023). Safe and Robust Transfer Learning. In: Introduction to Transfer Learning. Machine Learning: Foundations, Methodologies, and Applications. Springer, Singapore. https://doi.org/10.1007/978-981-19-7584-4_12


  • DOI: https://doi.org/10.1007/978-981-19-7584-4_12

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-7583-7

  • Online ISBN: 978-981-19-7584-4

  • eBook Packages: Computer Science (R0)
