
Active intellectual property protection for deep neural networks through stealthy backdoor and users’ identities authentication


Recently, the intellectual property (IP) protection of deep neural networks (DNNs) has attracted serious concern, and a number of DNN copyright protection methods have been proposed. However, most existing DNN watermarking methods can only verify ownership of a model after piracy has occurred; they cannot actively prevent piracy and do not support user identity management, so they cannot meet the requirements of commercial DNN copyright management. In addition, the recently proposed query modification attack can invalidate most existing backdoor-based DNN watermarking methods. In this paper, we propose an active IP protection technique for DNN models via a stealthy backdoor and user identity authentication. For the first time, we use a set of clean images (the watermark key samples) to embed an additional class into the DNN for ownership verification, and use image steganography to embed users' identity information into these watermark key images. Each user is assigned a unique identity image for identity authentication and authorization control. Since the backdoor instances are clean images from outside the dataset, the backdoor trigger is visually imperceptible and concealed. Moreover, the watermark is embedded through an additional class outside the main task, which establishes a strong connection between the watermark key samples and the corresponding label. As a result, the proposed method is concealed, robust, and resistant both to common attacks and to the query modification attack. Experimental results demonstrate that the proposed method achieves 100% watermark accuracy and a 100% fingerprint authentication success rate on the Fashion-MNIST and CIFAR-10 datasets, and that it is robust against model fine-tuning, model pruning, and query modification attacks. Compared with three existing DNN watermarking methods, the proposed method achieves better watermark accuracy and stronger robustness against the query modification attack.
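The abstract states that users' identity information is embedded into the watermark key images via image steganography. A minimal sketch of least-significant-bit (LSB) embedding, the simplest such scheme, is shown below; the function names, image sizes, and fingerprint length are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def lsb_embed(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit string in the least-significant bits of a uint8 image."""
    flat = cover.flatten().copy()
    assert bits.size <= flat.size, "identity message too long for cover image"
    # clear each pixel's LSB, then write one message bit into it
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the first n_bits message bits from a stego image."""
    return stego.flatten()[:n_bits] & 1

# toy example: hide a 32-bit user fingerprint in an 8x8 grayscale key image
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
fingerprint = rng.integers(0, 2, size=32, dtype=np.uint8)

stego = lsb_embed(cover, fingerprint)
recovered = lsb_extract(stego, fingerprint.size)

assert np.array_equal(recovered, fingerprint)
# each pixel changes by at most 1, so the embedding is visually imperceptible
assert np.max(np.abs(stego.astype(int) - cover.astype(int))) <= 1
```

Because only the lowest bit of each pixel is touched, the stego key image remains visually indistinguishable from the clean cover image, which matches the concealment requirement described in the abstract.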
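Ownership verification with an additional class can be sketched as querying the suspected model with the watermark key samples and measuring the fraction assigned to the extra label (the watermark accuracy reported in the abstract). The model, label index, and threshold below are toy stand-ins, not the paper's actual setup:

```python
import numpy as np

def watermark_accuracy(predict, key_images: np.ndarray, wm_label: int) -> float:
    """Fraction of watermark key samples classified into the additional class."""
    preds = predict(key_images)  # model returns predicted class indices
    return float(np.mean(preds == wm_label))

# toy stand-in: an 11-class model (10 task classes + 1 watermark class)
WM_LABEL = 10

def toy_predict(x):
    # pretend the watermarked model memorized all key samples
    return np.full(len(x), WM_LABEL)

keys = np.zeros((20, 28, 28), dtype=np.uint8)  # placeholder key images
acc = watermark_accuracy(toy_predict, keys, WM_LABEL)

# ownership is claimed only if accuracy clears a preset threshold
assert acc >= 0.95
```

A clean model that never saw the key samples would almost never map them to the extra class, so a high watermark accuracy on the key set provides the ownership evidence.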





This work is supported by the National Natural Science Foundation of China (No. 61602241), and CCF-NSFOCUS Kun-Peng Scientific Research Fund (No. CCF-NSFOCUS 2021012).

Author information

Corresponding author

Correspondence to Mingfu Xue.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Xue, M., Sun, S., Zhang, Y. et al. Active intellectual property protection for deep neural networks through stealthy backdoor and users’ identities authentication. Appl Intell (2022).



  • Deep neural networks
  • Intellectual property protection
  • Backdoor
  • Users’ fingerprints authentication
  • Ownership verification