Localized Random Shapelets

  • Mael Guillemé
  • Simon Malinowski
  • Romain Tavenard
  • Xavier Renard
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11986)


Shapelet models have attracted a lot of attention from the time series community, in particular because of their good classification performance. However, such models only indicate the presence or absence of local temporal patterns: structural information about where these patterns occur is ignored. In addition, end-to-end shapelet learning tends to generate meaningless shapelets, leading to poorly interpretable models. In this paper, we aim at designing an interpretable shapelet model that takes the localization of the shapelets in the time series into account. Time series are transformed into feature vectors composed of both distance and localization information. We then design a hierarchical feature selection process using regularization. This process can be tuned to select, for each shapelet, either its distance information alone or both its distance and localization information. For every selected shapelet, it is hence possible to analyze whether the mere presence of a pattern, or its presence together with its localization, contributed to the decision, improving the interpretability of the model. Experiments show that this feature selection process is competitive with state-of-the-art shapelet-based classifiers, while providing better interpretability.
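The transformation described above gives each shapelet two features per time series: the minimum distance over all sliding positions, and the position at which that minimum is reached. The following is an illustrative sketch only, not the authors' implementation; the `shapelet_features` helper and the normalization of the position to [0, 1] are assumptions made for the example.

```python
import numpy as np

def shapelet_features(series, shapelet):
    """Slide a shapelet along a series; return the minimum Euclidean
    distance and the normalized position where it occurs."""
    L = len(shapelet)
    # distance between the shapelet and every length-L subsequence
    dists = np.array([
        np.linalg.norm(series[t:t + L] - shapelet)
        for t in range(len(series) - L + 1)
    ])
    best = int(np.argmin(dists))
    # position normalized to [0, 1] so that series of different
    # lengths remain comparable (an illustrative choice)
    return dists[best], best / (len(series) - L)

# toy example: one series, one shapelet extracted from it
rng = np.random.default_rng(0)
series = rng.normal(size=100)
shapelet = series[40:50].copy()  # exact match at t = 40
d, loc = shapelet_features(series, shapelet)
# d is 0.0 (perfect match); loc is 40 / 90
```

In the paper's setting, each series would be mapped to a vector stacking these (distance, localization) pairs over many random shapelets; the hierarchical selection then uses a group regularizer (in the spirit of the sparse-group lasso) so that, per shapelet, either both features, only the distance, or neither is retained.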


Time series · Machine learning · Shapelets



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Mael Guillemé (1, 2), corresponding author
  • Simon Malinowski (2)
  • Romain Tavenard (3)
  • Xavier Renard (4)
  1. Energiency, Rennes, France
  2. Univ Rennes, Inria, CNRS, IRISA, Rennes, France
  3. Univ Rennes, CNRS, LETG, IRISA, Rennes, France
  4. AXA, Paris, France
