Instance-Based Counterfactual Explanations for Time Series Classification

Conference paper in: Case-Based Reasoning Research and Development (ICCBR 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12877)

Abstract

In recent years, there has been a rapidly expanding focus on explaining the predictions made by black-box AI systems that handle image and tabular data. However, considerably less attention has been paid to explaining the predictions of opaque AI systems handling time series data. In this paper, we advance a novel model-agnostic, case-based technique – Native Guide – that generates counterfactual explanations for time series classifiers. Given a query time series, \(T_{q}\), for which a black-box classification system predicts class, \(c\), a counterfactual time series explanation shows how \(T_{q}\) could change, such that the system predicts an alternative class, \(c'\). The proposed instance-based technique adapts existing counterfactual instances in the case-base by highlighting and modifying discriminative areas of the time series that underlie the classification. Quantitative and qualitative results from two comparative experiments indicate that Native Guide generates plausible, proximal, sparse and diverse explanations that are better than those produced by key benchmark counterfactual methods.
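
To make the abstract's retrieve-and-adapt idea concrete, here is a minimal, hypothetical sketch (the function names and the windowing heuristic are ours for illustration; the authors' actual implementation is linked in Note 2 below). It retrieves the query's nearest unlike neighbour (NUN) from the case-base, then copies a growing window of the NUN, centred on the most discriminative time step, into the query until the black-box prediction flips:

```python
import numpy as np

def native_guide_cf(query, X_case_base, y_case_base, predict, importance):
    """Sketch of a NUN retrieve-and-adapt counterfactual for a 1-D series.

    predict: black-box classifier, series -> class label.
    importance: per-time-step weight vector for the query (e.g. CAM or SHAP).
    """
    c = predict(query)
    # Retrieve the nearest unlike neighbour: the closest case-base instance
    # with a different class (Euclidean retrieval; DTW is another option).
    unlike = X_case_base[y_case_base != c]
    nun = unlike[np.argmin(np.linalg.norm(unlike - query, axis=1))]
    # Centre the adaptation on the most discriminative time step.
    centre, n = int(np.argmax(importance(query))), len(query)
    # Grow a substitution window, copying NUN values into the query,
    # until the predicted class flips.
    for radius in range(1, n + 1):
        lo, hi = max(0, centre - radius), min(n, centre + radius)
        cf = query.copy()
        cf[lo:hi] = nun[lo:hi]
        if predict(cf) != c:
            return cf  # a sparse, proximal counterfactual candidate
    return nun  # fall back to the retrieved case itself
```

Because the counterfactual is assembled from values of a real case in the case-base, it tends to stay in-distribution (plausible) while changing only a contiguous region of the query (sparse and proximal).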

Notes

  1. Note that SHAP can also be used to generate such vectors if we are directly explaining a given model, rather than twinning (a minimal sketch follows these notes).

  2. https://github.com/e-delaney/Instance-Based_CFE_TSC.

  3. In these tests we also tried, and failed, to use DiCE [38], a variant of w-CF with added constraints for diversity. We found that DiCE did not generate diverse counterfactuals within reasonable time limits, suggesting that it is not well suited to high-dimensional time series data (even for shallower ANNs).

  4. Counterfactuals for other classifiers, such as MR-SEQL, were found but not reported.
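
To illustrate Note 1, here is a minimal, self-contained sketch using SHAP's model-agnostic KernelExplainer on a toy classifier (the toy data and model are our assumptions, not the paper's setup); the result weights every time step of a query series:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in: 100 "series" of 24 time steps; the class depends on step 10.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 24))
y = (X[:, 10] > 0).astype(int)
clf = RandomForestClassifier(random_state=0).fit(X, y)

# Model-agnostic SHAP: attributes the prediction for one query across its
# time steps, yielding the kind of feature-weight vector the note describes.
explainer = shap.KernelExplainer(clf.predict_proba, shap.sample(X, 20))
weights = explainer.shap_values(X[:1])
```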

References

  1. Adadi, A., Berrada, M.: Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6, 52138–52160 (2018)

  2. Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., Kim, B.: Sanity checks for saliency maps. In: NeurIPS, pp. 9505–9515 (2018)

  3. Ates, E., Aksar, B., Leung, V.J., Coskun, A.K.: Counterfactual explanations for machine learning on multivariate time series data. arXiv preprint arXiv:2008.10781 (2020)

  4. Breunig, M.M., Kriegel, H.P., Ng, R.T., Sander, J.: LOF: identifying density-based local outliers. In: ACM SIGMOD, pp. 93–104 (2000)

  5. Briandet, R., Kemsley, E.K., Wilson, R.H.: Discrimination of Arabica and Robusta in instant coffee by Fourier transform infrared spectroscopy and chemometrics. J. Agric. Food Chem. 44(1), 170–174 (1996)

  6. Byrne, R.M.: Counterfactuals in explainable artificial intelligence (XAI): evidence from human reasoning. In: IJCAI-19, pp. 6276–6282 (2019)

  7. Dau, H.A., et al.: The UCR time series archive. IEEE/CAA J. Automatica Sinica 6(6), 1293–1305 (2019)

  8. Delaney, E., Greene, D., Keane, M.T.: Instance-based counterfactual explanations for time series classification. arXiv preprint arXiv:2009.13211 (2020)

  9. Dodge, J., Liao, Q.V., Zhang, Y., Bellamy, R.K., Dugan, C.: Explaining models: an empirical study of how explanations impact fairness judgment. In: International Conference on Intelligent User Interfaces, pp. 275–285 (2019)

  10. Downs, M., Chu, J.L., Yacoby, Y., Doshi-Velez, F., Pan, W.: CRUDS: counterfactual recourse using disentangled subspaces. In: ICML Workshop Proceedings (2020)

  11. Fawaz, H.I., Forestier, G., Weber, J., Idoumghar, L., Muller, P.A.: Adversarial attacks on deep neural networks for time series classification. In: 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE (2019)

  12. Ismail Fawaz, H., Forestier, G., Weber, J., Idoumghar, L., Muller, P.-A.: Deep learning for time series classification: a review. Data Min. Knowl. Disc. 33(4), 917–963 (2019). https://doi.org/10.1007/s10618-019-00619-1

  13. Forestier, G., Petitjean, F., Dau, H.A., Webb, G.I., Keogh, E.: Generating synthetic time series to augment sparse datasets. In: ICDM, pp. 865–870. IEEE (2017)

  14. Gee, A.H., Garcia-Olano, D., Ghosh, J., Paydarfar, D.: Explaining deep classification of time-series data with learned prototypes. In: CEUR Workshop Proceedings, vol. 2429, pp. 15–22 (2019)

  15. Goyal, Y., Wu, Z., Ernst, J., Batra, D., Parikh, D., Lee, S.: Counterfactual visual explanations. In: ICML, pp. 2376–2384. PMLR (2019)

  16. Grabocka, J., Schilling, N., Wistuba, M., Schmidt-Thieme, L.: Learning time-series shapelets. In: ACM SIGKDD, pp. 392–401 (2014)

  17. Guidotti, R., Monreale, A., Giannotti, F., Pedreschi, D., Ruggieri, S., Turini, F.: Factual and counterfactual explanations for black box decision making. IEEE Intell. Syst. 34(6), 14–23 (2019)

  18. Guidotti, R., Monreale, A., Spinnato, F., Pedreschi, D., Giannotti, F.: Explaining any time series classifier. In: CogMI 2020, pp. 167–176. IEEE (2020)

  19. Gunning, D., Aha, D.: DARPA’s explainable artificial intelligence (XAI) program. AI Mag. 40(2), 44–58 (2019)

  20. Kanamori, K., Takagi, T., Kobayashi, K., Arimura, H.: DACE: distribution-aware counterfactual explanation by mixed-integer linear optimization. In: IJCAI-20, pp. 2855–2862 (2020)

  21. Karimi, A.H., Barthe, G., Balle, B., Valera, I.: Model-agnostic counterfactual explanations for consequential decisions. In: AISTATS, pp. 895–905 (2020)

  22. Karlsson, I., Rebane, J., Papapetrou, P., Gionis, A.: Explainable time series tweaking via irreversible and reversible temporal transformations. In: ICDM (2018)

  23. Keane, M.T., Kenny, E.M.: How case-based reasoning explains neural networks: a theoretical analysis of XAI using Post-Hoc explanation-by-example from a survey of ANN-CBR twin-systems. In: Bach, K., Marling, C. (eds.) ICCBR 2019. LNCS (LNAI), vol. 11680, pp. 155–171. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29249-2_11

  24. Keane, M.T., Kenny, E.M., Delaney, E., Smyth, B.: If only we had better counterfactual explanations: five key deficits to rectify in the evaluation of counterfactual XAI techniques. In: IJCAI-21 (2021)

  25. Keane, M.T., Smyth, B.: Good counterfactuals and where to find them: a case-based technique for generating counterfactuals for explainable AI (XAI). In: Watson, I., Weber, R. (eds.) ICCBR 2020. LNCS (LNAI), vol. 12311, pp. 163–178. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58342-2_11

  26. Kenny, E.M., Delaney, E.D., Greene, D., Keane, M.T.: Post-hoc explanation options for XAI in deep learning: the Insight centre for data analytics perspective. In: Del Bimbo, A., et al. (eds.) ICPR 2021. LNCS, vol. 12663, pp. 20–34. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-68796-0_2

  27. Kenny, E.M., Keane, M.T.: Twin-systems to explain artificial neural networks using case-based reasoning: comparative tests of feature-weighting methods in ANN-CBR twins for XAI. In: IJCAI-19, pp. 2708–2715 (2019)

  28. Kenny, E.M., Keane, M.T.: On generating plausible counterfactual and semi-factual explanations for deep learning. In: AAAI-21, pp. 11575–11585 (2021)

  29. Laugel, T., Lesot, M.J., Marsala, C., Renard, X., Detyniecki, M.: The dangers of post-hoc interpretability: unjustified counterfactual explanations. In: Proceedings of IJCAI-19, pp. 2801–2807 (2019)

  30. Le Nguyen, T., Gsponer, S., Ilie, I., O’Reilly, M., Ifrim, G.: Interpretable time series classification using linear models and multi-resolution multi-domain symbolic representations. Data Min. Knowl. Disc. 33(4), 1183–1222 (2019). https://doi.org/10.1007/s10618-019-00633-3

  31. Leake, D., Mcsherry, D.: Introduction to the special issue on explanation in case-based reasoning. Artif. Intell. Rev. 24(2), 103 (2005)

  32. Leonardi, G., Montani, S., Striani, M.: Deep feature extraction for representing and classifying time series cases: towards an interpretable approach in haemodialysis. In: Flairs-2020. AAAI Press (2020)

  33. Lipton, Z.C.: The mythos of model interpretability. Queue 16(3), 30 (2018)

  34. Liu, F.T., Ting, K.M., Zhou, Z.H.: Isolation forest. In: ICDM, pp. 413–422 (2008)

  35. Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. In: Advances in Neural Information Processing Systems, pp. 4765–4774 (2017)

  36. Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019)

  37. Molnar, C.: Interpretable machine learning. Lulu.com (2020)

  38. Mothilal, R.K., Sharma, A., Tan, C.: Explaining machine learning classifiers through diverse counterfactual explanations. In: ACM FAccT, pp. 607–617 (2020)

  39. Nguyen, T.T., Le Nguyen, T., Ifrim, G.: A model-agnostic approach to quantifying the informativeness of explanation methods for time series classification. In: Lemaire, V., Malinowski, S., Bagnall, A., Guyet, T., Tavenard, R., Ifrim, G. (eds.) AALTD 2020. LNCS (LNAI), vol. 12588, pp. 77–94. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-65742-0_6

  40. Nugent, C., Cunningham, P.: A case-based explanation system for black-box systems. Artif. Intell. Rev. 24(2), 163–178 (2005)

  41. Nugent, C., Doyle, D., Cunningham, P.: Gaining insight through case-based explanation. J. Intell. Inf. Syst. 32(3), 267–295 (2009). https://doi.org/10.1007/s10844-008-0069-0

  42. Olszewski, R.T.: Generalized feature extraction for structural pattern recognition in time-series data. Technical report, Carnegie Mellon University, Pittsburgh (2001)

  43. Pearl, J., Mackenzie, D.: The Book of Why. Basic Books, New York (2018)

  44. Poyiadzi, R., Sokol, K., Santos-Rodriguez, R., De Bie, T., Flach, P.: FACE: feasible and actionable counterfactual explanations. In: AIES, pp. 344–350 (2020)

  45. Recio-Garcia, J.A., Diaz-Agudo, B., Pino-Castilla, V.: CBR-LIME: a case-based reasoning approach to provide specific local interpretable model-agnostic explanations. In: Watson, I., Weber, R. (eds.) ICCBR 2020. LNCS (LNAI), vol. 12311, pp. 179–194. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58342-2_12

  46. Ribeiro, M.T., Singh, S., Guestrin, C.: Why should I trust you?: Explaining the predictions of any classifier. In: Proceedings of SIGKDD’16, pp. 1135–1144. ACM (2016)

  47. Russell, C.: Efficient search for diverse coherent explanations. In: Conference on Fairness, Accountability, and Transparency, pp. 20–28 (2019)

  48. Samangouei, P., Saeedi, A., Nakagawa, L., Silberman, N.: ExplainGAN: model explanation via decision boundary crossing transformations. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11214, pp. 681–696. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01249-6_41

  49. Sani, S., Wiratunga, N., Massie, S.: Learning deep features for kNN-based Human Activity Recognition. In: Proceedings of the International Conference on Case-Based Reasoning Workshops, pp. 95–103. CEUR Workshop Proceedings, Trondheim (2017). https://rgu-repository.worktribe.com/output/246837/learning-deep-features-for-knn-based-human-activity-recognition

  50. Schlegel, U., Arnout, H., El-Assady, M., Oelke, D., Keim, D.A.: Towards a rigorous evaluation of XAI methods on time series. arXiv preprint arXiv:1909.07082 (2019)

  51. Schoenborn, J.M., Weber, R.O., Aha, D.W., Cassens, J., Althoff, K.D.: Explainable case-based reasoning: a survey. In: AAAI-21 Workshop Proceedings (2021)

  52. Schölkopf, B., Platt, J.C., Shawe-Taylor, J., Smola, A.J., Williamson, R.C.: Estimating the support of a high-dimensional distribution. Neural Comput. 13(7), 1443–1471 (2001)

  53. Sørmo, F., Cassens, J., Aamodt, A.: Explanation in case-based reasoning-perspectives and goals. Artif. Intell. Rev. 24(2), 109–143 (2005). https://doi.org/10.1007/s10462-005-4607-7

  54. Van Looveren, A., Klaise, J.: Interpretable counterfactual explanations guided by prototypes. arXiv preprint arXiv:1907.02584 (2019)

  55. Wachter, S., Mittelstadt, B., Russell, C.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv. J. Law Tech. 31, 841 (2017)

  56. Wang, Y., et al.: Learning interpretable shapelets for time series classification through adversarial regularization. arXiv preprint arXiv:1906.00917 (2019)

  57. Wang, Z., Yan, W., Oates, T.: Time series classification from scratch with deep neural networks: a strong baseline. In: IJCNN, pp. 1578–1585. IEEE (2017)

  58. Ye, L., Keogh, E.: Time series shapelets: a novel technique that allows accurate, interpretable and fast classification. Data Min. Knowl. Disc. 22(1–2), 149–182 (2011). https://doi.org/10.1007/s10618-010-0179-5

  59. Yeh, C.C.M., et al.: Matrix profile I: all pairs similarity joins for time series: a unifying view that includes motifs, discords and shapelets. In: ICDM (2016)

  60. Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Learning deep features for discriminative localization. In: IEEE CVPR, pp. 2921–2929 (2016)

Acknowledgements

This publication has emanated from research conducted with the financial support of (i) Science Foundation Ireland (SFI) to the Insight Centre for Data Analytics under Grant Number 12/RC/2289_P2 and (ii) SFI and the Department of Agriculture, Food and Marine on behalf of the Government of Ireland under Grant Number 16/RC/3835 (VistaMilk).

Author information

Correspondence to Eoin Delaney.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Delaney, E., Greene, D., Keane, M.T. (2021). Instance-Based Counterfactual Explanations for Time Series Classification. In: Sánchez-Ruiz, A.A., Floyd, M.W. (eds) Case-Based Reasoning Research and Development. ICCBR 2021. Lecture Notes in Computer Science (LNAI), vol 12877. Springer, Cham. https://doi.org/10.1007/978-3-030-86957-1_3

  • DOI: https://doi.org/10.1007/978-3-030-86957-1_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86956-4

  • Online ISBN: 978-3-030-86957-1
