Abstract
Unlearnable examples (UEs) are training samples modified so that Deep Neural Networks (DNNs) cannot learn from them. Such examples are typically generated by adding error-minimizing noise that fools a DNN model into believing there is nothing (no error) left to learn from the data. The concept of UEs has been proposed as a countermeasure against unauthorized exploitation of personal data. While UEs have been extensively studied for images, it remains unclear how to craft effective UEs for time series data. In this work, we introduce the first UE generation method that protects time series data from unauthorized training by deep learning models. To this end, we propose a new form of error-minimizing noise that can be selectively applied to specific segments of a time series, rendering them unlearnable to DNN models while remaining imperceptible to human observers. Through extensive experiments on a wide range of time series datasets, we demonstrate that the proposed method is effective in both classification and generation tasks. It protects time series data against unauthorized exploitation while preserving their utility for legitimate use, thereby contributing to the development of secure and trustworthy machine learning systems.
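To make the error-minimizing idea concrete, the sketch below shows how such noise could be generated for a time-series classifier by descending (rather than ascending) the training loss, with a binary mask restricting the perturbation to selected segments of the series. This is a minimal illustration assuming a PyTorch surrogate model; the function error_minimizing_noise, the mask segment_mask, and all hyperparameters are hypothetical and are not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def error_minimizing_noise(model, x, y, segment_mask, eps=0.05, steps=20, lr=0.01):
        # Find a bounded perturbation delta, applied only where segment_mask == 1,
        # that MINIMIZES the training loss (the opposite of an adversarial attack),
        # so the perturbed sample looks "already learned" and carries little training signal.
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta * segment_mask), y)
            grad, = torch.autograd.grad(loss, delta)
            with torch.no_grad():
                delta -= lr * grad.sign()   # gradient DESCENT on the loss
                delta.clamp_(-eps, eps)     # keep the noise small and imperceptible
        return (delta * segment_mask).detach()

In the image-domain formulation of UEs, this noise-generation step is alternated with ordinary training of the surrogate model in a min-min bi-level optimization; restricting the noise to selected segments via the mask reflects the time-series-specific design described in the abstract.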
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Jiang, Y., Ma, X., Erfani, S.M., Bailey, J. (2024). Unlearnable Examples for Time Series. In: Yang, D.N., Xie, X., Tseng, V.S., Pei, J., Huang, J.W., Lin, J.C.W. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2024. Lecture Notes in Computer Science, vol. 14650. Springer, Singapore. https://doi.org/10.1007/978-981-97-2266-2_17
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-2265-5
Online ISBN: 978-981-97-2266-2
eBook Packages: Computer Science, Computer Science (R0)