Unlearnable Examples for Time Series

  • Conference paper
Advances in Knowledge Discovery and Data Mining (PAKDD 2024)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14650)


Abstract

Unlearnable examples (UEs) are training samples modified to be unlearnable to Deep Neural Networks (DNNs). These examples are usually generated by adding error-minimizing noise that fools a DNN model into believing there is nothing (no error) left to learn from the data. The concept of UE has been proposed as a countermeasure against unauthorized exploitation of personal data. While UEs have been extensively studied for images, it is unclear how to craft effective UEs for time series data. In this work, we introduce the first UE generation method to protect time series data from unauthorized training by deep learning models. To this end, we propose a new form of error-minimizing noise that can be selectively applied to specific segments of a time series, rendering them unlearnable to DNN models while remaining imperceptible to human observers. Through extensive experiments on a wide range of time series datasets, we demonstrate that the proposed UE generation method is effective in both classification and generation tasks. It can protect time series data against unauthorized exploitation while preserving its utility for legitimate use, thereby contributing to the development of secure and trustworthy machine learning systems.
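The full text sits behind the access wall below, so the following is only a minimal sketch of what segment-wise error-minimizing noise of the kind described in the abstract could look like, assuming the standard unlearnable-examples formulation for images (Huang et al., ICLR 2021) transplanted to a time series classifier. Every name and hyperparameter here (error_minimizing_noise, SEG_START, SEG_END, EPS, alpha) is illustrative and not taken from the authors' implementation.

```python
import torch
import torch.nn.functional as F

EPS = 0.05                      # hypothetical L_inf budget for the noise
SEG_START, SEG_END = 20, 60     # hypothetical protected segment (time indices)

def error_minimizing_noise(model, x, y, steps=10, alpha=0.01):
    """Craft additive noise that MINIMIZES the training loss, so a model
    trained on x + noise 'sees nothing left to learn'.
    x: (batch, channels, length) time series, y: class labels."""
    delta = torch.zeros_like(x, requires_grad=True)
    mask = torch.zeros_like(x)
    mask[..., SEG_START:SEG_END] = 1.0       # confine the noise to one segment
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta * mask), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= alpha * grad.sign()     # gradient DESCENT on the loss
            delta.clamp_(-EPS, EPS)          # keep the perturbation imperceptible
    return (delta * mask).detach()           # unlearnable sample = x + returned noise
```

In the usual bi-level recipe this inner noise step alternates with ordinary training steps of a surrogate model, so the noise remains error-minimizing for models later trained on the protected data; the segment mask is the part specific to time series.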



Author information


Corresponding author

Correspondence to Yujing Jiang.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Jiang, Y., Ma, X., Erfani, S.M., Bailey, J. (2024). Unlearnable Examples for Time Series. In: Yang, DN., Xie, X., Tseng, V.S., Pei, J., Huang, JW., Lin, J.CW. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2024. Lecture Notes in Computer Science (LNAI), vol 14650. Springer, Singapore. https://doi.org/10.1007/978-981-97-2266-2_17

  • DOI: https://doi.org/10.1007/978-981-97-2266-2_17

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-2265-5

  • Online ISBN: 978-981-97-2266-2

  • eBook Packages: Computer Science, Computer Science (R0)
