  • Review
  • Open Access
  • Published: 10 January 2023

Federated Learning with Privacy-preserving and Model IP-right-protection

  • Qiang Yang  ORCID: orcid.org/0000-0001-5059-8360 (1,2),
  • Anbu Huang  ORCID: orcid.org/0000-0003-3444-7348 (1),
  • Lixin Fan (1),
  • Chee Seng Chan (3),
  • Jian Han Lim (3),
  • Kam Woh Ng  ORCID: orcid.org/0000-0002-9309-563X (4),
  • Ding Sheng Ong (5) &
  • Bowen Li  ORCID: orcid.org/0000-0003-1602-3541 (6)

Machine Intelligence Research volume 20, pages 19–37 (2023)


Abstract

In the past decades, artificial intelligence (AI) has achieved unprecedented success, and statistical models have become the central entity in AI. However, the centralized paradigm for training and using these models faces growing privacy and legal challenges. To bridge the gap between data privacy and the need for data fusion, federated learning (FL) has emerged as an AI paradigm for solving the data-silo and data-privacy problems. Built on secure distributed AI, federated learning emphasizes data security throughout the lifecycle, which includes data preprocessing, training, evaluation, and deployment. FL preserves data security by using methods such as secure multi-party computation (MPC), differential privacy, and hardware solutions to build and use distributed multi-party machine-learning systems and statistical models over different data sources. Beyond data privacy, we argue that the concept of "model" also matters: once federated models are developed and deployed, they are exposed to various risks, including plagiarism, illegal copying, and misuse. To address these issues, we introduce FedIPR, a novel ownership verification scheme that embeds watermarks into FL models to verify their ownership and protect model intellectual property rights (IPR, or IP-right for short). While security is at the core of FL, many articles still refer to distributed machine learning with no security guarantees as "federated learning", which does not satisfy the intended definition of FL. To this end, in this paper we reiterate the concept of federated learning and propose secure federated learning (SFL), whose ultimate goal is to build trustworthy and safe AI with strong privacy preservation and IP-right protection. We provide a comprehensive overview of existing works, including threats, attacks, and defenses in each phase of SFL from the lifecycle perspective.
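The privacy mechanisms summarized in the abstract can be illustrated with a minimal sketch of federated averaging in which each client clips its model update and perturbs it with Gaussian noise before sharing, in the spirit of differential privacy. All function names and parameters below are illustrative assumptions, not the paper's actual protocol:

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip an update to a bounded L2 norm (limits per-client sensitivity)."""
    norm = np.linalg.norm(update)
    if norm == 0.0:
        return update
    return update * min(1.0, clip_norm / norm)

def dp_federated_average(client_updates, clip_norm=1.0, noise_std=0.1, seed=0):
    """Average clipped, Gaussian-noised client updates (DP-style sketch)."""
    rng = np.random.default_rng(seed)
    noisy = [clip_update(u, clip_norm) + rng.normal(0.0, noise_std, size=u.shape)
             for u in client_updates]
    # The server only ever sees the noisy updates, never raw data or gradients.
    return np.mean(noisy, axis=0)

updates = [np.array([0.5, -1.2, 3.0]), np.array([0.4, -1.0, 2.8])]
avg = dp_federated_average(updates)
assert avg.shape == (3,)
```

The clipping bound and noise scale jointly determine the privacy guarantee; in a real deployment they would be calibrated to a target privacy budget rather than fixed constants.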

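The ownership-verification idea behind FedIPR can likewise be sketched as a sign-based weight watermark: the owner nudges secretly chosen weights so that their signs encode a binary watermark, and later verifies ownership by extracting those signs. This is a toy illustration under assumed names and parameters, not the paper's actual embedding scheme:

```python
import numpy as np

def embed_watermark(weights, watermark_bits, strength=0.05, seed=0):
    """Nudge secretly chosen weights so their signs encode the watermark bits."""
    rng = np.random.default_rng(seed)
    # Secret positions act as the owner's verification key (illustrative).
    positions = rng.choice(weights.size, size=len(watermark_bits), replace=False)
    marked = weights.copy()
    for pos, bit in zip(positions, watermark_bits):
        target = 1.0 if bit == 1 else -1.0
        # Force the sign that encodes this bit, with a minimum magnitude.
        marked.flat[pos] = target * max(abs(marked.flat[pos]), strength)
    return marked, positions

def verify_watermark(weights, positions, watermark_bits):
    """Return the fraction of embedded bits recovered from the weight signs."""
    extracted = [1 if weights.flat[p] > 0 else 0 for p in positions]
    matches = sum(e == b for e, b in zip(extracted, watermark_bits))
    return matches / len(watermark_bits)

w = np.random.default_rng(1).normal(size=(8, 8))
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked, key = embed_watermark(w, bits)
assert verify_watermark(marked, key, bits) == 1.0
```

A watermark of this kind survives only if the chosen weights keep their signs through fine-tuning or pruning, which is why practical schemes embed redundant bits and regularize the embedding during training rather than patching weights after the fact.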

References

  1. A. Krizhevsky, I. Sutskever, G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems, ACM, Lake Tahoe, USA. pp. 1097–1105, 2012. DOI: https://doi.org/10.5555/2999134.2999257.

    Google Scholar 

  2. K. M. He, X. Y. Zhang, S. Q. Ren, J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Las Vegas, USA, pp. 770–778, 2016. DOI: https://doi.org/10.1109/CVPR.2016.90.

    Google Scholar 

  3. J. Devlin, M. W. Chang, K. Lee, K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, ACL, Minneapolis, USA, pp. 4171–4186, 2019. DOI: https://doi.org/10.18653/v1/N19-1423.

    Google Scholar 

  4. T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, D. Amodei. Language models are few-shot learners. In Proceedings of the 34th International Conference on Neural Information Processing Systems, ACM, Vancouver, Canada, Article number 159, 2020. DOI: https://doi.org/10.5555/3495724.3495883.

    Google Scholar 

  5. H. T. Cheng, L. Koc, J. Harmsen, T. Shaked, T. Chandra, H. Aradhye, G. Anderson, G. Corrado, W. Chai, M. Ispir, R. Anil, Z. Haque, L. C. Hong, V. Jain, X. B. Liu, H. Shah. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, ACM, Boston, USA, pp. 7–10, 2016. DOI: https://doi.org/10.1145/2988450.2988454.

    Chapter  Google Scholar 

  6. H. F. Guo, R. M. Tang, Y. M. Ye, Z. G. Li, X. Q. He. DeepFM: A factorization-machine based neural network for CTR prediction. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, ACM, Melbourne, Australia, pp. 1725–1731, 2017. DOI: https://doi.org/10.5555/3172077.3172127.

    Google Scholar 

  7. J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, F. F. Li. ImageNet: A large-scale hierarchical image database. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Miami, USA, pp. 248–255, 2009. DOI: https://doi.org/10.1109/CVPR.2009.5206848.

    Google Scholar 

  8. Protein Data Bank. A structural view of biology, [Online], Available: https://www.rcsb.org/.

  9. J. Jumper, R. Evans, A. Pritzel, T. Green, M. Figurnov, O. Ronneberger, K. Tunyasuvunakool, R. Bates, A. Žídek, A. Potapenko, A. Bridgland, C. Meyer, S. A. A. Kohl, A. J. Ballard, A. Cowie, B. Romera-Paredes, S. Nikolov, R. Jain, J. Adler, T. Back, S. Petersen, D. Reiman, E. Clancy, M. Zielinski, M. Steinegger, M. Pacholska, T. Berghammer, S. Bodenstein, D. Silver, O. Vinyals, A. W. Senior, K. Kavukcuoglu, P. Kohli, D. Hassabis. Highly accurate protein structure prediction with AlphaFold. Nature, vol. 596, no. 7873, pp. 583–589, 2021. DOI: https://doi.org/10.1038/s41586-021-03819-2.

    Article  Google Scholar 

  10. A. W. Senior, R. Evans, J. Jumper, J. Kirkpatrick, L. Sifre, T. Green, C. L. Qin, A. Žídek, A. W. R. Nelson, A. Bridgland, H. Penedones, S. Petersen, K. Simonyan, S. Crossan, P. Kohli, D. T. Jones, D. Silver, K. Kavukcuoglu, D. Hassabis. Improved protein structure prediction using potentials from deep learning. Nature, vol. 577, no. 7792, pp. 706–710, 2020. DOI: https://doi.org/10.1038/s41586-019-1923-7.

    Article  Google Scholar 

  11. EU. General data protection regulation, [Online], Available: https://gdpr-info.eu/.

  12. DLA Piper. Data protection laws of the world: Full handbook, [Online], Available: https://www.dlapiperdataprotection.com/.

  13. The National People’s Congress. China data security law, [Online], Available: http://www.npc.gov.cn/npc/c30834/202106/7c9afl2f51334a73b56d7938f99a788a.shtml. (in Chinese)

  14. B. McMahan, E. Moore, D. Ramage, S. Hampson, B. A. Arcas. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, USA, pp. 1273–1282, 2017.

  15. L. G. Zhu, Z. J. Liu, S. Han. Deep leakage from gradients. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, Canada, pp. 14774–14784, 2019.

  16. L. T. Phong, Y. Aono, T. Hayashi, L. H. Wang, S. Moriai. Privacy-preserving deep learning via additively homomorphic encryption. IEEE Transactions on Information Forensics and Security, vol. 13, no. 5, pp. 1333–1345, 2018. DOI: https://doi.org/10.1109/TIFS.2017.2787987.

    Article  Google Scholar 

  17. P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummings, R. G. L. D’Ohveira, H. Eichner, S. El Rouayheb, D. Evans, J. Gardner, Z. Garrett, A. Gascón, B. Ghazi, P. B. Gibbons, M. Gruteser, Z. Harchaoui, C. Y. He, L. He, Z. Y. Huo, B. Hutchinson, J. Hsu, M. Jaggi, T. Javidi, G. Joshi, M. Khodak, J. Konecný, A. Korolova, F. Koushanfar, S. Koyejo, T. Lepoint, Y. Liu, P. Mittal, M. Mohri, R. Nock, A. Özgür, R. Pagh, H. Qi, D. Ramage, R. Raskar, M. Raykova, D. Song, W. K. Song, S. U. Stich, Z. T. Sun, A. T. Suresh, F. Tramèr, P. Vepakomma, J. Y. Wang, L. Xiong, Z. Xu, Q. Yang, F. X. Yu, H. Yu, S. Zhao. Advances and open problems in federated learning. Foundations and Trends® in Machine Learning, vol. 14, no. 1–2, pp. 1–210, 2021. DOI: https://doi.org/10.1561/2200000083.

    Article  Google Scholar 

  18. Y. Z. Ma, X. J. Zhu, J. Hsu. Data poisoning against differentially-private learners: Attacks and defenses. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, ACM, Macao, China, pp. 4732–4738, 2019. DOI: https://doi.org/10.5555/3367471.3367701.

    Google Scholar 

  19. Z. B. Ying, Y. Zhang, X. M. Liu. Privacy-preserving in defending against membership inference attacks. In Proceedings of the Workshop on Privacy-preserving Machine Learning in Practice, ACM, pp. 61–63, 2020. DOI: https://doi.org/10.1145/3411501.3419428.

  20. Q. Yang, Y. Liu, Y. Cheng, Y. Kang, T. J. Chen, H. Yu. Federated Learning, San Francisco Bay Area, USA: Morgan & Claypool Publishers, pp. 207, 2019.

    Google Scholar 

  21. Q. Yang, Y. Liu, T. J. Chen, Y. X. Tong. Federated machine learning: Concept and applications. ACM Transactions on Intelligent Systems and Technology, vol. 10, no. 2, Article number 12, 2019. DOI: https://doi.org/10.1145/3298981.

  22. T. Li, A. K. Sahu, A. Talwalkar, V. Smith. Federated learning: Challenges, methods, and future directions. IEEE Signal Processing Magazine, vol. 37, no. 3, pp. 50–60, 2020. DOI: https://doi.org/10.1109/MSP.2020.2975749.

    Article  Google Scholar 

  23. L. J. Lyu, H. Yu, Q. Yang. Threats to federated learning: A survey. [Online], Available: https://arxiv.org/abs/2003.02133, 2020.

  24. N. Bouacida, P. Mohapatra. Vulnerabilities in federated learning. IEEE Access, vol. 9, pp. 63229–63249, 2021. DOI: https://doi.org/10.1109/ACCESS.2021.3075203.

    Article  Google Scholar 

  25. V. Mothukuri, R. M. Parizi, S. Pouriyeh, Y. Huang, A. Dehghantanha, G. Srivastava. A survey on security and privacy of federated learning. Future Generation Computer Systems, vol. 115, pp. 619–640, 2021. DOI: https://doi.org/10.1016/j.future.2020.10.007.

    Article  Google Scholar 

  26. P. R. Liu, X. R. Xu, W. Wang. Threats, attacks and defenses to federated learning: Issues, taxonomy and perspectives. Cybersecurity, vol. 5, no. 1, Article number 4, 2022. DOI: https://doi.org/10.1186/s42400-021-00105-6.

  27. X. J. Zhang, H. L. Gu, L. X. Fan, K. Chen, Q. Yang. No free lunch theorem for security and utility in federated learning. [Online], Available: https://arxiv.org/abs/2203.05816, 2022.

  28. O. Goldreich, S. Micali, A. Wigderson. How to play ANY mental game. In Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing, ACM, New York, USA, pp. 218–229, 1987. DOI: https://doi.org/10.1145/28395.28420.

    Google Scholar 

  29. T. Rabin, M. Ben-Or. Verifiable secret sharing and multiparty protocols with honest majority. In Proceedings of the 21st Annual ACM Symposium on Theory of Computing, ACM, Seattle, USA, pp. 73–85, 1989. DOI: https://doi.org/10.1145/73007.73014.

    Google Scholar 

  30. C. Dwork. Differential privacy: A survey of results. In Proceedings of the 5th International Conference on Theory and Applications of Models of Computation, Springer, Xi’an, China, pp. 1–19, 2008. DOI: https://doi.org/10.1007/978-3-540-79228-4_1.

    Google Scholar 

  31. C. Dwork, A. Roth. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, vol. 9, no. 3–4, pp. 211–407, 2014. DOI: https://doi.org/10.1561/0400000042.

    MathSciNet  MATH  Google Scholar 

  32. P. Paillier. Public-key cryptosystems based on composite degree residuosity classes. In Proceedings of the International Conference on Advances in Cryptology, Springer, Prague, Czech Republic, pp. 223–238, 1999. DOI: https://doi.org/10.1007/3-540-48910-X_16.

    MATH  Google Scholar 

  33. OMTP. 2009. Advanced trusted environment: OMTP TR1. http://www.omtp.org/OMTP_Advanced_Trusted_Environment_OMTP_TR1_v1_1.pdf

  34. ARM. ARM TrustZone Technology, [Online], Available: https://developer.arm.com/documentation/100690/0200/ARM-TrustZone-technology?lang=en.

  35. M. Sabt, M. Achemlal, A. Bouabdallah. Trusted execution environment: What it is, and what it is not. In Proceedings of IEEE Trustcom/BigDataSE/ISPA, IEEE, Helsinki, Finland, pp. 57–64, 2015. DOI: https://doi.org/10.1109/Trustcom.2015.357.

    Google Scholar 

  36. B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Šrndić, P. Laskov, G. Giacinto, F. Roli. Evasion attacks against machine learning at test time. In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases, Springer, Prague, Czech Republic, pp. 387–402, 2013. DOI: https://doi.org/10.1007/978-3-642-40994-325.

    Google Scholar 

  37. C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, R. Fergus. Intriguing properties of neural networks. In Proceedings of the 2nd International Conference on Learning Representations, Banff, Canada, 2014.

  38. A. Nguyen, J. Yosinski, J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Boston, USA, pp. 427–436, 2015. DOI: https://doi.org/10.1109/CVPR.2015.7298640.

    Google Scholar 

  39. I. J. Goodfellow, J. Shlens, C. Szegedy. Explaining and harnessing adversarial examples. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, USA, 2015.

  40. E. Bagdasaryan, A. Veit, Y. Q. Hua, D. Estrin, V. Shmatikov. How to backdoor federated learning. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, Palermo, Italy, pp. 2938–2948, 2020.

  41. H. J. Zhang, Z. J. Xie, R. Zarei, T. Wu, K. W. Chen. Adaptive client selection in resource constrained federated learning systems: A deep reinforcement learning approach. IEEE Access, vol. 9, pp. 98423–98432, 2021. DOI: https://doi.org/10.1109/ACCESS.2021.3095915.

    Article  Google Scholar 

  42. R. Albelaihi, X. Sun, W. D. Craft, L. K. Yu, C. G. Wang. Adaptive participant selection in heterogeneous federated learning. In Proceedings of IEEE Global Communications Conference, IEEE, Madrid, Spain, 2021. DOI: https://doi.org/10.1109/GLOBECOM46510.2021.9685077.

    Google Scholar 

  43. F. Mo, A. S. Shamsabadi, K. Katevas, S. Demetriou, I. Leontiadis, A. Cavallaro, H. Haddadi. DarkneTZ: Towards model privacy at the edge using trusted execution environments. In Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services, ACM, Toronto, Canada, pp. 161–174, 2020. DOI: https://doi.org/10.1145/3386901.3388946.

    Chapter  Google Scholar 

  44. A. B. Huang, Y. Liu, T. J. Chen, Y. K. Zhou, Q. Sun, H. F. Chai, Q. Yang. StarFL: Hybrid federated learning architecture for smart urban computing. ACM Transactions on Intelligent Systems and Technology, vol. 12, no. 4, Article number 43, 2021. DOI: https://doi.org/10.1145/3467956.

  45. B. Hitaj, G. Ateniese, F. Perez-Cruz. Deep models under the GAN: Information leakage from collaborative deep learning. In Proceedings of ACM SIGSAC Conference on Computer and Communications Security, ACM, Dallas, USA, pp. 603–618, 2017. DOI: https://doi.org/10.1145/3133956.3134012.

    Google Scholar 

  46. B. Zhao, K. R. Mopuri, H. Bilen. iDLG: Improved deep leakage from gradients. [Online], Available: https://arxiv.org/abs/2001.02610, 2020.

  47. J. Geiping, H. Bauermeister, H. Dröge, M. Moeller. Inverting gradients-how easy is it to break privacy in federated learning? In Proceedings of the 34th International Conference on Neural Information Processing Systems, ACM, Vancouver, Canada, Article number 33, 2020. DOI: https://doi.org/10.5555/3495724.3497145.

    Google Scholar 

  48. Y. J. Wang, J. R. Deng, D. Guo, C. H. Wang, X. R. Meng, H. Liu, C. W. Ding, S. Rajasekaran. SAPAG: A self-adaptive privacy attack from gradients. [Online], Available: https://arxiv.org/abs/2009.06228, 2020.

  49. J. Y. Zhu, M. B. Blaschko. R-GAP: Recursive gradient attack on privacy. In Proceedings of the 9th International Conference on Learning Representations, 2021.

  50. X. Jin, P. Y. Chen, C. Y. Hsu, C. M. Yu, T. Y. Chen. Catastrophic data leakage in vertical federated learning. In Proceedings of the 34th Conference on Neural Information Processing Systems, pp. 994–1006, 2021.

  51. Z. H. Li, J. X. Zhang, L. Y. Liu, J. Liu. Auditing privacy defenses in federated learning via generative gradient leakage. [Online], Available: https://arxiv.org/abs/2203.15696, 2022.

  52. S. Hardy, W. Henecka, H. Ivey-Law, R. Nock, G. Patrini, G. Smith, B. Thorne. Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption. [Online], Available: https://arxiv.org/abs/1711.10677, 2017.

  53. C. L. Zhang, S. Y. Li, J. Z. Xia, W. Wang, F. Yan, Y. Liu. BatchCrypt: Efficient homomorphic encryption for cross-silo federated learning. In Proceedings of USENIX Conference on USENIX Annual Technical Conference, Berkeley, USA, Article number. 33, 2020. DOI: https://doi.org/10.5555/3489146.3489179.

  54. A. Huang, Y. Y. Chen, Y. Liu, T. J. Chen, Q. Yang. RPN: A residual pooling network for efficient federated learning. In Proceedings of the 24th European Conference on Artificial Intelligence, Santiago de Compostela, Spain, pp. 1223–1229, 2020.

  55. H. B. McMahan, D. Ramage, K. Talwar, L. Zhang. Learning differentially private recurrent language models. In Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada, 2018.

  56. K. Wei, J. Li, M. Ding, C. Ma, H. H. Yang, F. Farokhi, S. Jin, T. Q. S. Quek, H. V. Poor. Federated learning with differential privacy: Algorithms and performance analysis. IEEE Transactions on Information Forensics and Security, vol. 15, pp. 3454–3469, 2020. DOI: https://doi.org/10.1109/TIFS.2020.2988575.

    Article  Google Scholar 

  57. C. L. Xie, K. L. Huang, P. Y. Chen, B. Li. DBA: Distributed backdoor attacks against federated learning. In Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Ethiopia, 2020.

  58. A. B. Huang. Dynamic backdoor attacks against federated learning. [Online], Available: https://arxiv.org/abs/2011.07429, 2020.

  59. J. Feng, Q. Z. Cai, Z. H. Zhou. Learning to confuse: Generating training time adversarial data with auto-encoder. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, ACM, Vancouver, Canada, Article number 32, 2019. DOI: https://doi.org/10.5555/3454287.3455361.

    Google Scholar 

  60. S. S. Hu, J. R. Lu, W. Wan, L. Y. Zhang. Challenges and approaches for mitigating byzantine attacks in federated learning. [Online], Available: https://arxiv.org/abs/2112.14468, 2021.

  61. M. H. Fang, X. Y. Cao, J. Y. Jia, N. Z. Gong. Local model poisoning attacks to byzantine-robust federated learning. In Proceedings of the 29th USENIX Conference on Security Symposium, ACM, Berkeley, USA, Article number 92, 2020. DOI: https://doi.org/10.5555/3489212.3489304.

    Google Scholar 

  62. D. Yin, Y. D. Chen, R. Kannan, P. Bartlett. Byzantine-robust distributed learning: Towards optimal statistical rates. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, pp. 5650–5659, 2018.

  63. P. Blanchard, E. M. El Mhamdi, R. Guerraoui, J. Stainer. Machine learning with adversaries: Byzantine tolerant gradient descent. In Proceedings of the 31st International Conference on Neural Information Processing Systems, ACM, Long Beach, USA, pp. 118–128, 2017. DOI: https://doi.org/10.5555/3294771.3294783.

    Google Scholar 

  64. C. Xie, S. Koyejo, I. Gupta. Zeno: Distributed stochastic gradient descent with suspicion-based fault-tolerance. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, USA, pp. 6893–6901, 2019.

  65. Y. Dong, X. J. Chen, L. Y. Shen, D. K. Wang. Privacy-preserving distributed machine learning based on secret sharing. In Proceedings of the 21st International Conference on Information and Communications Security, Springer, Beijing, China, pp. 684–702, 2019. DOI: https://doi.org/10.1007/978-3-030-41579-2_40.

    Google Scholar 

  66. R. Kanagavelu, Z. X. Li, J. Samsudin, Y. C. Yang, F. Yang, R. S. M. Goh, M. Cheah, P. Wiwatphonthana, K. Akkarajitsakul, S. G. Wang. Two-phase multi-party computation enabled privacy-preserving federated learning. In Proceedings of the 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing, IEEE, Melbourne, Australia, pp. 410–419, 2020. DOI: https://doi.org/10.1109/CCGrid49817.2020.00-52.

    Google Scholar 

  67. M. O. Rabin. How to exchange secrets with oblivious transfer, Technical Report Paper 2005/187, 2005.

  68. A. C. C. Yao. How to generate and exchange secrets. In Proceedings of the 27th Annual Symposium on Foundations of Computer Science, IEEE, Toronto, Canada, pp. 162–167, 1986. DOI: https://doi.org/10.1109/SFCS.1986.25.

    Google Scholar 

  69. Intel®. Architecture instruction set extensions programming reference, Technical Report 319433-012, Intel Corporation, USA, 2012.

  70. V. Costan, S. Devadas. Intel SGX explained, Technical Report Paper 2016/086, 2016.

  71. ArmDeveloper. Arm TrustZone Technology, [Online], Available: https://developer.arm.com/documentation/100690/0200/ARM-TrustZone-technology?lang=en, December 05, 2019.

  72. Androidtrusty. Android Trusty TEE, [Online], Available: https://source.android.com/security/trusty, 2019.

  73. AMD. AMD Secure Encrypted Virtualization, [Online], Available: https://developer.amd.com/sev/.

  74. F. Mo, H. Haddadi, K. Katevas, E. Marin, D. Perino, N. Kourtellis. PPFL: Privacy-preserving federated learning with trusted execution environments. In Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services, ACM, pp. 94–108, 2021. DOI: https://doi.org/10.1145/3458864.3466628.

  75. A. Kurakin, I. J. Goodfellow, S. Bengio. Adversarial examples in the physical world. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France, 2017.

  76. N. Carlini, D. Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, ACM, Dallas, USA, pp. 3–14, 2017. DOI: https://doi.org/10.1145/3128572.3140444.

    Chapter  Google Scholar 

  77. P. Y. Chen, H. Zhang, Y. Sharma, J. F. Yi, C. J. Hsieh. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, ACM, Dallas, USA, pp. 15–26, 2017. DOI: https://doi.org/10.1145/3128572.3140448.

    Chapter  Google Scholar 

  78. A. Ilyas, L. Engstrom, A. Athalye, J. Lin. Black-box adversarial attacks with limited queries and information. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, pp. 2137–2146, 2018.

  79. D. Y. Meng, H. Chen. MagNet: A two-pronged defense against adversarial examples. In Proceedings of ACM SIGSAC Conference on Computer and Communications Security, ACM, Dallas, USA, pp. 135–147, 2017. DOI: https://doi.org/10.1145/3133956.3134057.

    Google Scholar 

  80. S. M. Moosavi-Dezfooli, A. Fawzi, P. Frossard. Deep Fool: A simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Las Vegas, USA, pp. 2574–2582, 2016. DOI: https://doi.org/10.1109/CVPR.2016.282.

    Google Scholar 

  81. N. Papernot, P. McDaniel, X. Wu, S. Jha, A. Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In Proceedings of IEEE Symposium on Security and Privacy, IEEE, San Jose, USA, pp. 582–597, 2016. DOI: https://doi.org/10.1109/SP.2016.41.

    Google Scholar 

  82. J. H. Metzen, T. Genewein, V. Fischer, B. Bischoff. On detecting adversarial perturbations. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France, 2017.

  83. K. Grosse, P. Manoharan, N. Papernot, M. Backes, P. McDaniel. On the (statistical) detection of adversarial examples. [Online], Available: https://arxiv.org/abs/1702.06280, 2017.

  84. C. Fu, X. H. Zhang, S. L. Ji, J. Y. Chen, J. Z. Wu, S. Q. Guo, J. Zhou, A. X. Liu, T. Wang. Label inference attacks against vertical federated learning. In Proceedings of the 31st USENIX Security Symposium, USENIX Association, Boston, USA, 2022.

    Google Scholar 

  85. Y. Liu, Z. H. Yi, T. J. Chen. Backdoor attacks and defenses in feature-partitioned collaborative learning. [Online], Available: https://arxiv.org/abs/2007.03608, 2020.

  86. X. J. Luo, Y. C. Wu, X. K. Xiao, B. C. Ooi. Feature inference attack on model predictions in vertical federated learning. In Proceedings of the 37th IEEE International Conference on Data Engineering, IEEE, Chania, Greece, pp. 181–192, 2021. DOI: https://doi.org/10.1109/ICDE51399.2021.00023.

    Google Scholar 

  87. A. Pustozerova, R. Mayer. Information leaks in federated learning. In Proceedings of the Workshop on Decentralized IoT Systems and Security, DISS, San Diego, USA, 2020. DOI: https://doi.org/10.14722/diss.2020.23004.

    Google Scholar 

  88. Y. Uchida, Y. Nagai, S. Sakazawa, S. Satoh. Embedding watermarks into deep neural networks. In Proceedings of ACM International Conference on Multimedia Retrieval, ACM, Bucharest, Romania, pp. 269–277, 2017. DOI: https://doi.org/10.1145/3078971.3078974.

    Google Scholar 

  89. L. X. Fan, K. W. Ng, C. S. Chan, Q. Yang, DeepIP: Deep neural network intellectual property protection with passports. IEEE Transactions on Pattern Analysis and Machine Intelligence, to be published. DOI: https://doi.org/10.1109/TPAMI.2021.3088846.

  90. Y. Adi, C. Baum, M. Cisse, B. Pinkas, J. Keshet. Turning your weakness into a strength: Watermarking deep neural networks by backdooring. In Proceedings of the 27th USENIX Conference on Security Symposium, ACM, Baltimore, USA, pp. 1615–1631, 2018. DOI: https://doi.org/10.5555/3277203.3277324.

    Google Scholar 

  91. B. G. A. Tekgul, Y. X. Xia, S. Marchal, N. Asokan. WAFFLE: Watermarking in federated learning. In Proceedings of the 40th International Symposium on Reliable Distributed Systems, IEEE, Chicago, USA, pp. 310–320, 2021. DOI: https://doi.org/10.1109/SRDS53918.2021.00038.

    Google Scholar 

  92. B. W. Li, L. X. Fan, H. L. Gu, J. Li, Q. Yang. FedIPR: Ownership verification for federated deep neural network models. [Online], Available: https://arxiv.org/abs/2109.13236, 2022.

  93. E. M. El Mhamdi, R. Guerraoui, S. Rouault. The hidden vulnerability of distributed learning in Byzantium. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, pp. 3521–3530, 2018.

  94. Y. He, N. Yu, M. Keuper, M. Fritz. Beyond the spectrum: Detecting Deepfakes via re-synthesis. In Proceedings of the 30th International Joint Conference on Artificial Intelligence, Beijing, China, pp. 2534–2541, 2021. DOI: https://doi.org/10.24963/ijcai.2021/349.

  95. L. Chai, D. Bau, S. N. Lim, P. Isola. What makes fake images detectable? Understanding properties that generalize. In Proceedings of the 16th European Conference on Computer Vision, Springer, Glasgow, UK, pp. 103–120, 2020. DOI: https://doi.org/10.1007/978-3-030-58574-7_7.

    Google Scholar 

  96. Z. Z. Liu, X. J. Qi, P. H. S. Torr. Global texture enhancement for fake face detection in the wild. In Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Seattle, USA, pp. 8057–8066, 2020. DOI: https://doi.org/10.1109/CVPR42600.2020.00808.

    Google Scholar 

  97. E. Nezhadarya, Z. J. Wang, R. K. Ward. Robust image watermarking based on multiscale gradient direction quantization. IEEE Transactions on Information Forensics and Security, vol. 6, no. 4, pp. 1200–1213, 2011. DOI: https://doi.org/10.1109/TIFS.2011.2163627.

    Article  Google Scholar 

  98. H. Fang, W. M. Zhang, H. Zhou, H. Cui, N. H. Yu. Screen-shooting resilient watermarking. IEEE Transactions on Information Forensics and Security, vol. 14, no. 6, pp. 1403–1418, 2019. DOI: https://doi.org/10.1109/TIFS.2018.2878541.

    Article  Google Scholar 

  99. H. Mareen, J. De Praeter, G. Van Wallendael, P. Lambert. A scalable architecture for uncompressed-domain watermarked videos. IEEE Transactions on Information Forensics and Security, vol. 14, no. 6, pp. 1432–1444, 2019. DOI: https://doi.org/10.1109/TIFS.2018.2879301.

    Article  Google Scholar 

  100. M. Asikuzzaman, M. R. Pickering. An overview of digital video watermarking. IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 9, pp. 2131–2153, 2018. DOI: https://doi.org/10.1109/TCSVT.2017.2712162.

    Article  Google Scholar 

  101. M. J. Hwang, J. Lee, M. Lee, H. G. Kang. SVD-based adaptive QIM watermarking on stereo audio signals. IEEE Transactions on Multimedia, vol. 20, no. 1, pp. 45–54, 2018. DOI: https://doi.org/10.1109/TMM.2017.2721642.

    Article  Google Scholar 

  102. Y. Erfani, R. Pichevar, J. Rouat. Audio watermarking using spikegram and a two-dictionary approach. IEEE Transactions on Information Forensics and Security, vol. 12, no. 4, pp. 840–852, 2017. DOI: https://doi.org/10.1109/TIFS.2016.2636094.

    Article  Google Scholar 

  103. A. Nadeau, G. Sharma. An audio watermark designed for efficient and robust resynchronization after Analog playback. IEEE Transactions on Information Forensics and Security, vol. 12, no. 6, pp. 1393–1405, 2017. DOI: https://doi.org/10.1109/TIFS.2017.2661724.

  104. Z. X. Lin, F. Peng, M. Long. A low-distortion reversible watermarking for 2D engineering graphics based on region nesting. IEEE Transactions on Information Forensics and Security, vol. 13, no. 9, pp. 2372–2382, 2018. DOI: https://doi.org/10.1109/TIFS.2018.2819122.

  105. J. Zhang, D. D. Chen, J. Liao, W. M. Zhang, G. Hua, N. H. Yu. Passport-aware normalization for deep model protection. In Proceedings of the 34th International Conference on Neural Information Processing Systems, ACM, Vancouver, Canada, Article number 1896, 2020. DOI: https://doi.org/10.5555/3495724.3497620.

  106. H. Chen, B. D. Rohani, F. Koushanfar. DeepMarks: A digital fingerprinting framework for deep neural networks. [Online], Available: https://arxiv.org/abs/1804.03648, 2018.

  107. B. D. Rohani, H. L. Chen, F. Koushanfar. DeepSigns: A generic watermarking framework for IP protection of deep learning models. [Online], Available: https://arxiv.org/abs/1804.00750, 2018.

  108. E. Le Merrer, P. Pérez, G. Trédan. Adversarial frontier stitching for remote neural network watermarking. Neural Computing and Applications, vol. 32, no. 13, pp. 9233–9244, 2020. DOI: https://doi.org/10.1007/s00521-019-04434-z.

  109. D. S. Ong, C. S. Chan, K. W. Ng, L. X. Fan, Q. Yang. Protecting intellectual property of generative adversarial networks from ambiguity attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Nashville, USA, pp. 3629–3638, 2021. DOI: https://doi.org/10.1109/CVPR46437.2021.00363.

  110. J. H. Lim, C. S. Chan, K. W. Ng, L. X. Fan, Q. Yang. Protect, show, attend and tell: Empowering image captioning models with ownership protection. Pattern Recognition, vol. 122, Article number 108285, 2022. DOI: https://doi.org/10.1016/j.patcog.2021.108285.

  111. A. Radford, L. Metz, S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In Proceedings of the 4th International Conference on Learning Representations, San Juan, Puerto Rico, 2016.

  112. C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. H. Wang, W. Z. Shi. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Honolulu, USA, pp. 105–114, 2017. DOI: https://doi.org/10.1109/CVPR.2017.19.

  113. J. Y. Zhu, T. Park, P. Isola, A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of IEEE International Conference on Computer Vision, IEEE, Venice, Italy, pp. 2242–2251, 2017. DOI: https://doi.org/10.1109/ICCV.2017.244.

  114. F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, T. Ristenpart. Stealing machine learning models via prediction APIs. In Proceedings of the 25th USENIX Conference on Security Symposium, ACM, Austin, USA, pp. 601–618, 2016. DOI: https://doi.org/10.5555/3241094.3241142.

  115. T. Orekondy, B. Schiele, M. Fritz. Knockoff nets: Stealing functionality of black-box models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Long Beach, USA, pp. 4949–4958, 2019. DOI: https://doi.org/10.1109/CVPR.2019.00509.

  116. N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, A. Swami. Practical black-box attacks against machine learning. In Proceedings of ACM on Asia Conference on Computer and Communications Security, ACM, Abu Dhabi, UAE, pp. 506–519, 2017. DOI: https://doi.org/10.1145/3052973.3053009.

  117. WeBank AI Department. Federated AI Technology Enabler (FATE), [Online], Available: https://github.com/FederatedAI/FATE, 2020.

  118. K. Bonawitz, H. Eichner, W. Grieskamp, D. Huba, A. Ingerman, V. Ivanov, C. Kiddon, J. Konečný, S. Mazzocchi, B. McMahan, T. Van Overveldt, D. Petrou, D. Ramage, J. Roselander. Towards federated learning at scale: System design. In Proceedings of the 2nd SysML Conference, Stanford, USA, 2019.

  119. Google. Tensorflow Federated (TFF), [Online], Available: https://tensorflow.google.cn/federated.

  120. OpenMined. PySyft, [Online], Available: https://github.com/OpenMined.

  121. T. Ryffel, A. Trask, M. Dahl, B. Wagner, J. Mancuso, D. Rueckert, J. Passerat-Palmbach. A generic framework for privacy preserving deep learning. [Online], Available: https://arxiv.org/abs/1811.04017, 2018.

  122. G. A. Reina, A. Gruzdev, P. Foley, O. Perepelkina, M. Sharma, I. Davidyuk, I. Trushkin, M. Radionov, A. Mokrov, D. Agapov, J. Martin, B. Edwards, M. J. Sheller, S. Pati, P. N. Moorthy, S. H. Wang, P. Shah, S. Bakas. OpenFL: An open-source framework for federated learning. [Online], Available: https://arxiv.org/abs/2105.06413, 2021.

  123. Intel. OpenFL — An open-source framework for federated learning, [Online], Available: https://github.com/intel/openfl.

  124. H. Ludwig, N. Baracaldo, G. Thomas, Y. Zhou, A. Anwar, S. Rajamoni, Y. Ong, J. Radhakrishnan, A. Verma, M. Sinn, M. Purcell, A. Rawat, T. Minh, N. Holohan, S. Chakraborty, S. Witherspoon, D. Steuer, L. Wynter, H. Hassan, S. Laguna, M. Yurochkin, M. Agarwal, E. Chuba, A. Abay. IBM federated learning: An enterprise framework white paper V0.1. [Online], Available: https://arxiv.org/abs/2007.10987, 2020.

  125. Nvidia. Nvidia Clara, [Online], Available: https://developer.nvidia.com/clara.

  126. C. Y. He, S. Z. Li, J. So, X. Zeng, M. Zhang, H. Y. Wang, X. Y. Wang, P. Vepakomma, A. Singh, H. Qiu, X. H. Zhu, J. Z. Wang, L. Shen, P. L. Zhao, Y. Kang, Y. Liu, R. Raskar, Q. Yang, M. Annavaram, S. Avestimehr. FedML: A research library and benchmark for federated machine learning. [Online], Available: https://arxiv.org/abs/2007.13518, 2020.

  127. FedML-AI. FedML, [Online], Available: https://github.com/FedML-AI/FedML.

  128. Bytedance. Fedlearner, [Online], Available: https://github.com/bytedance/fedlearner.

  129. D. J. Beutel, T. Topal, A. Mathur, X. C. Qiu, J. Fernandez-Marques, Y. Gao, L. Sani, K. H. Li, T. Parcollet, P. P. B. de Gusmão, N. D. Lane. Flower: A friendly federated learning research framework. [Online], Available: https://arxiv.org/abs/2007.14390, 2020.

  130. PaddlePaddle. PaddleFL, [Online], Available: https://github.com/PaddlePaddle/PaddleFL.

  131. Tencent. Angel PowerFL, [Online], Available: https://cloud.tencent.com/solution/powerfl.

  132. S. Caldas, S. M. K. Duddu, P. Wu, T. Li, J. Konečný, H. B. McMahan, V. Smith, A. Talwalkar. LEAF: A benchmark for federated settings. [Online], Available: https://arxiv.org/abs/1812.01097, 2018.

  133. Sherpa.ai. Sherpa.ai, [Online], Available: https://sherpa.ai/.

  134. D. Romanini, A. J. Hall, P. Papadopoulos, T. Titcombe, A. Ismail, T. Cebere, R. Sandmann, R. Roehm, M. A. Hoeh. PyVertical: A vertical federated learning framework for multi-headed splitNN. [Online], Available: https://arxiv.org/abs/2104.00489, 2021.

Acknowledgements

This work was supported by the National Key Research and Development Program of China (No. 2018AAA0101100).

Author information

Authors and Affiliations

  1. WeBank, Shenzhen, 518057, China

    Qiang Yang, Anbu Huang & Lixin Fan

  2. Hong Kong University of Science and Technology, Hong Kong, 999077, China

    Qiang Yang

  3. University of Malaya, Kuala Lumpur, 50603, Malaysia

    Chee Seng Chan & Jian Han Lim

  4. University of Surrey, Guildford, GU2 7XH, UK

    Kam Woh Ng

  5. University of Aberystwyth, Wales, SY23 3DD, UK

    Ding Sheng Ong

  6. Shanghai Jiao Tong University, Shanghai, 200240, China

    Bowen Li

Authors
  1. Qiang Yang
  2. Anbu Huang
  3. Lixin Fan
  4. Chee Seng Chan
  5. Jian Han Lim
  6. Kam Woh Ng
  7. Ding Sheng Ong
  8. Bowen Li
Corresponding author

Correspondence to Qiang Yang.

Additional information

Qiang Yang is a Fellow of the Canadian Academy of Engineering (CAE) and the Royal Society of Canada (RSC), Chief Artificial Intelligence Officer of WeBank, and Chair Professor of the CSE Department, Hong Kong University of Science and Technology, China. He is the Conference Chair of AAAI-21, President of the Hong Kong Society of Artificial Intelligence and Robotics (HKSAIR), President of the Investment Technology League (ITL) and the Open Islands Privacy-Computing Open-source Community, and former President of IJCAI (2017–2019). He is a fellow of AAAI, ACM, IEEE and AAAS. He is the founding EiC of two journals: IEEE Transactions on Big Data and ACM Transactions on Intelligent Systems and Technology. His latest books are Transfer Learning, Federated Learning, Privacy-preserving Computing and Practicing Federated Learning.

His research interests include transfer learning and federated learning.

Anbu Huang is currently a senior research scientist at WeBank. His research papers have been published in leading journals and conferences, such as AAAI and ACM TIST, and he has served as a peer reviewer for ACM TIST, IEEE TMI, IJCAI, and other top artificial intelligence journals and conferences. Previously, he was a technical team leader at Tencent (2014–2018) and a senior engineer at MicroStrategy (2012–2014). His latest books are Practicing Federated Learning (2021) and Dive into Deep Learning (2017).

His research interests include deep learning, machine learning, and federated learning.

Lixin Fan is the Chief Scientist of Artificial Intelligence at WeBank, China. He is the author of more than 70 international journal and conference articles. He has worked at Nokia Research Center and Xerox Research Center Europe. He has long participated in NIPS/NeurIPS, ICML, CVPR, ICCV, ECCV, IJCAI and other top artificial intelligence conferences, served as an area chair of ICPR, and organized workshops in various technical fields. He is also the inventor of more than one hundred patents filed in the USA, Europe and China, and the chairman of the IEEE P2894 Explainable Artificial Intelligence (XAI) Standard Working Group.

His research interests include machine learning and deep learning, computer vision and pattern recognition, image and video processing, 3D big data processing, data visualization and rendering, augmented and virtual reality, mobile computing and ubiquitous computing, and intelligent man-machine interface.

Chee Seng Chan is currently a full Professor with Faculty of Computer Science and Information Technology, University of Malaya, Malaysia. Dr. Chan was the Founding Chair for the IEEE Computational Intelligence Society Malaysia Chapter, the Organising Chair for the Asian Conference on Pattern Recognition (2015), the General Chair for the IEEE Workshop on Multimedia Signal Processing (2019), and IEEE Visual Communications and Image Processing (2013). He is a Chartered Engineer registered under the Engineering Council, UK. He was the recipient of several notable awards, such as Young Scientist Network-Academy of Sciences Malaysia in 2015 and the Hitachi Research Fellowship in 2013.

His research interests include computer vision and machine learning with focus on scene understanding. He is also interested in the interplay between language and vision: generating sentential descriptions about complex scenes.

Jian Han Lim is currently a Ph.D. degree candidate in artificial intelligence with Universiti Malaya, Malaysia.

His research interests include computer vision and deep learning with a focus on image captioning.

Kam Woh Ng received the B.Sc. degree from the Faculty of Computer Science and Information Technology, University of Malaya, Malaysia, in 2019. He is currently a Ph.D. degree candidate at University of Surrey, UK under the supervision of Prof. Tao Xiang and Prof. Yi-Zhe Song. Prior to joining the University of Surrey, he was an AI researcher at WeBank, China and a lab member of the Center of Image and Signal Processing (CISIP) in University of Malaya, Malaysia.

His research interests include deep learning, computer vision, representation learning and their applications.

Ding Sheng Ong received the B.Sc. degree from the Faculty of Computer Science and Information Technology, University of Malaya, Malaysia, in 2020. He is currently a Ph.D. degree candidate at Aberystwyth University, UK. Prior to joining Aberystwyth University, he was a lab member of the Center of Image and Signal Processing (CISiP) in Universiti Malaya, Malaysia.

His research interests include deep learning and computer vision.

Bowen Li received the B.Sc. degree in automation from Xi’an Jiaotong University, China, in 2019. He is currently a Ph.D. degree candidate at the Department of Computer Science and Engineering, Shanghai Jiao Tong University, China. He worked as a research intern at WeBank AI Group, China in 2021.

His research interests include federated learning, data privacy, and machine learning security.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/

About this article

Cite this article

Yang, Q., Huang, A., Fan, L. et al. Federated Learning with Privacy-preserving and Model IP-right-protection. Mach. Intell. Res. 20, 19–37 (2023). https://doi.org/10.1007/s11633-022-1343-2

  • Received: 02 March 2020

  • Accepted: 02 June 2020

  • Published: 10 January 2023

  • Issue Date: February 2023

  • DOI: https://doi.org/10.1007/s11633-022-1343-2

Keywords

  • Federated learning
  • privacy-preserving machine learning
  • security
  • decentralized learning
  • intellectual property protection