Adversarial Sample Crafting for Time Series Classification with Elastic Similarity Measures

  • Izaskun Oregi
  • Javier Del Ser
  • Aritz Perez
  • Jose A. Lozano
Conference paper
Part of the Studies in Computational Intelligence book series (SCI, volume 798)

Abstract

Adversarial Machine Learning (AML) studies the robustness of classification models when processing data samples that have been intelligently manipulated to confuse them. Procedures aimed at furnishing such confusing samples exploit concrete vulnerabilities of the learning algorithm of the model at hand, so that small perturbations can cause a given data instance to be misclassified. In this context, the literature has so far focused on AML strategies that modify data instances for diverse learning algorithms, in most cases for image classification. This work builds upon that background to address AML for distance-based time series classifiers (e.g., nearest neighbors), in which attacks (i.e., modifications of the samples to be classified by the model) must be devised by taking into account the measure of similarity used to compare time series. In particular, we propose attack strategies relying on guided perturbations of the input time series, based on gradient information provided by a smoothed version of the distance-based model under attack. Furthermore, we formulate the AML sample crafting process as an optimization problem driven by the Pareto trade-off between (1) a measure of distortion of the crafted sample with respect to its original version and (2) the probability that the crafted sample confuses the model. This problem is efficiently tackled by multi-objective heuristic solvers. Several experiments are discussed to assess whether the crafted adversarial time series succeed in confusing the targeted distance-based model.
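To illustrate the gradient-guided flavour of attack described in the abstract, the following minimal sketch (not the authors' implementation) targets a 1-NN classifier built on dynamic time warping. The softmin smoothing of per-class distances, the gamma parameter, the finite-difference gradient and the FGSM-style signed update are all illustrative assumptions, as are the function names.

```python
# Illustrative sketch, not the paper's code: gradient-guided perturbation of a
# time series against a softmin-smoothed 1-NN DTW classifier.
import numpy as np

def dtw(a, b):
    """Classic dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

def smoothed_scores(x, train_X, train_y, gamma=1.0):
    """Softmin over per-class 1-NN distances -> normalized class scores."""
    classes = np.unique(train_y)
    d = np.array([min(dtw(x, s) for s in train_X[train_y == c]) for c in classes])
    p = np.exp(-gamma * d)
    return classes, p / p.sum()

def craft_adversarial(x, target, train_X, train_y, eps=0.05, steps=50, h=1e-3):
    """Perturb x until the smoothed classifier assigns it to the target class."""
    x_adv = np.asarray(x, dtype=float).copy()
    for _ in range(steps):
        classes, p = smoothed_scores(x_adv, train_X, train_y)
        if classes[np.argmax(p)] == target:      # model already confused
            break
        t = int(np.where(classes == target)[0][0])
        grad = np.zeros_like(x_adv)
        for i in range(len(x_adv)):              # finite-difference gradient of p[t]
            x_h = x_adv.copy(); x_h[i] += h
            _, p_h = smoothed_scores(x_h, train_X, train_y)
            grad[i] = (p_h[t] - p[t]) / h
        x_adv += eps * np.sign(grad)             # FGSM-style signed step
    return x_adv

# Example usage on toy data:
# rng = np.random.default_rng(0)
# train_X = rng.normal(size=(20, 50)); train_y = np.repeat([0, 1], 10)
# x_adv = craft_adversarial(train_X[0], target=1, train_X=train_X, train_y=train_y)
```

In the paper's second formulation, the distortion of the crafted series with respect to the original and the probability of confusing the model would instead be traded off jointly by a multi-objective heuristic solver such as NSGA-II, rather than by a single greedy gradient loop as above.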

Keywords

Adversarial Machine Learning · Time series classification · Elastic similarity measures

Notes

Acknowledgments

This work has been supported by the Basque Government through the EMAITEK, BERC 2014–2017 and ELKARTEK programs, and by the Spanish Ministry of Economy and Competitiveness (MINECO) through the BCAM Severo Ochoa excellence accreditations SVP-2014-068574 and SEV-2013-0323, and through the project TIN2017-82626-R funded by AEI/FEDER, UE.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Izaskun Oregi (1)
  • Javier Del Ser (2, 3)
  • Aritz Perez (3)
  • Jose A. Lozano (3, 4)

  1. TECNALIA, Derio, Spain
  2. TECNALIA, University of the Basque Country (UPV/EHU), Leioa, Spain
  3. Basque Center for Applied Mathematics (BCAM), Bilbao, Spain
  4. University of the Basque Country (UPV/EHU), Leioa, Spain