
Dataset authorization control: protect the intellectual property of dataset via reversible feature space adversarial examples


Abstract

Collecting and annotating a large-scale dataset is expensive, so a valuable dataset can be regarded as the intellectual property (IP) of its creator. To date, copyright protection methods for deep learning have focused exclusively on protecting models; there has been no research on copyright protection for datasets. Protecting the IP of a dataset is a brand-new and challenging topic. In this paper, we propose an authorization control method that actively prevents a dataset from being used to train Deep Neural Network (DNN) models without authorization. To the best of our knowledge, this is the first work on IP protection for datasets. We first generate a feature space adversarial example for each clean image. We then use a modified Reversible Image Transformation to hide each clean image inside its corresponding feature space adversarial example, producing the protected images. For unauthorized users, a model trained directly on the protected dataset has poor inference accuracy. For authorized users, the clean dataset can be recovered, and a model trained on it achieves normal inference accuracy. Experimental results on the CIFAR-10 and TinyImageNet datasets demonstrate the effectiveness of the proposed method, which also exhibits excellent transferability across different models. Moreover, the proposed method is robust to the adaptive attack.
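To make the abstract's pipeline concrete, the sketch below illustrates its first step: generating a feature space adversarial example, i.e., an image whose internal representation in a DNN is pushed toward that of another class. This is a minimal, assumption-laden sketch rather than the authors' implementation: the ResNet-18 model, the hooked layer (layer3), the L-infinity budget eps, and the optimization settings are all hypothetical choices.

```python
# Minimal sketch (PyTorch) of generating a feature space adversarial example.
# All concrete choices (ResNet-18, hooked layer, eps, steps) are illustrative
# assumptions, not the paper's exact configuration.
import torch
import torch.nn.functional as F
import torchvision.models as models

def feature_space_attack(model, layer, x_clean, x_target,
                         steps=100, lr=0.01, eps=8 / 255):
    """Perturb x_clean so its features at `layer` approach those of
    x_target (an image from another class), within an L-inf ball."""
    feats = {}
    handle = layer.register_forward_hook(
        lambda _mod, _inp, out: feats.update(value=out))

    with torch.no_grad():                  # record the target's features once
        model(x_target)
        target_feat = feats["value"].detach()

    delta = torch.zeros_like(x_clean, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        model(torch.clamp(x_clean + delta, 0.0, 1.0))
        loss = F.mse_loss(feats["value"], target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():              # keep the perturbation imperceptible
            delta.clamp_(-eps, eps)

    handle.remove()
    return torch.clamp(x_clean + delta.detach(), 0.0, 1.0)

# Hypothetical usage on CIFAR-10-sized tensors: a model trained on such
# images learns misleading feature-label associations, degrading its
# inference accuracy on clean test data.
model = models.resnet18(num_classes=10).eval()
x_clean = torch.rand(1, 3, 32, 32)   # stand-in for a clean training image
x_target = torch.rand(1, 3, 32, 32)  # stand-in for a target-class image
x_protected = feature_space_attack(model, model.layer3, x_clean, x_target)
```

In the full method, the clean image would additionally be hidden inside the resulting adversarial example via the modified Reversible Image Transformation, so that authorized users can invert the embedding and recover the clean training set, while unauthorized users only ever train on the misleading protected images.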


Acknowledgements

This work is supported by the National Natural Science Foundation of China (No. 61602241), and CCF-NSFOCUS Kun-Peng Scientific Research Fund (No. CCF-NSFOCUS 2021012).

Author information

Corresponding author

Correspondence to Mingfu Xue.

Ethics declarations

Conflicts of interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Additional information

Availability of data and materials

The data used to support the findings of this study are available from the corresponding author upon reasonable request.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Xue, M., Wu, Y., Zhang, Y. et al. Dataset authorization control: protect the intellectual property of dataset via reversible feature space adversarial examples. Appl Intell 53, 7298–7309 (2023). https://doi.org/10.1007/s10489-022-03926-1
