
XEM: An explainable-by-design ensemble method for multivariate time series classification

Published in: Data Mining and Knowledge Discovery

Abstract

We present XEM, an eXplainable-by-design Ensemble method for Multivariate time series (MTS) classification. XEM relies on a new hybrid ensemble method that combines an explicit boosting-bagging approach, to handle the bias-variance trade-off faced by machine learning models, with an implicit divide-and-conquer approach, to individualize classifier errors on different parts of the training data. Our evaluation shows that XEM outperforms the state-of-the-art MTS classifiers on the public UEA datasets. Furthermore, XEM provides faithful explainability by design and remains robust to the challenges arising from continuous data collection (varying MTS lengths, missing data and noise).
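The abstract's central idea, combining bagging (which reduces variance) with boosting (which reduces bias), can be illustrated generically in scikit-learn by wrapping a boosting learner inside a bagging ensemble. This is a hypothetical sketch of that generic combination on synthetic tabular data, not XEM's actual algorithm, which operates on multivariate time series and is available in the authors' repository.

```python
# Hypothetical sketch (not XEM itself): each bagged member is an AdaBoost
# ensemble trained on a bootstrap sample, so boosting lowers bias while
# bagging lowers variance.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; XEM itself classifies multivariate time series.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = BaggingClassifier(AdaBoostClassifier(n_estimators=10),
                          n_estimators=10, random_state=0)
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```

XEM additionally individualizes classifier errors on different regions of the training data (a divide-and-conquer behaviour this sketch does not capture) and derives its explanations directly from the model's structure.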


(Figures 1–19 appear in the full article.)




Acknowledgements

This work was supported by the French National Research Agency under the Investments for the Future Program (ANR-16-CONV-0004) and the Inria Project Lab Hybrid Approaches for Interpretable AI (HyAIAI).

Author information

Correspondence to Kevin Fauvel.

Additional information

Responsible editor: Panagiotis Papapetrou.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Fauvel, K., Fromont, É., Masson, V. et al. XEM: An explainable-by-design ensemble method for multivariate time series classification. Data Min Knowl Disc 36, 917–957 (2022). https://doi.org/10.1007/s10618-022-00823-6

