Knowledge and Information Systems, Volume 47, Issue 1, pp 1–26

Faster and more accurate classification of time series by exploiting a novel dynamic time warping averaging algorithm

  • François Petitjean (corresponding author)
  • Germain Forestier
  • Geoffrey I. Webb
  • Ann E. Nicholson
  • Yanping Chen
  • Eamonn Keogh
Regular Paper


Abstract

A concerted research effort over the past two decades has heralded significant improvements in both the efficiency and effectiveness of time series classification. The consensus that has emerged in the community is that the best solution is a surprisingly simple one. In virtually all domains, the most accurate classifier is the nearest neighbor algorithm with dynamic time warping as the distance measure. The time complexity of dynamic time warping means that successful deployments on resource-constrained devices remain elusive. Moreover, the recent explosion of interest in wearable computing devices, which typically have limited computational resources, has greatly increased the need for very efficient classification algorithms. A classic technique to obtain the benefits of the nearest neighbor algorithm, without inheriting its undesirable time and space complexity, is to use the nearest centroid algorithm. Unfortunately, the unique properties of (most) time series data mean that the centroid typically does not resemble any of the instances, an unintuitive and underappreciated fact. In this paper we demonstrate that we can exploit a recent result by Petitjean et al. to allow meaningful averaging of “warped” time series, which then allows us to create super-efficient nearest “centroid” classifiers that are at least as accurate as their more computationally challenged nearest neighbor relatives. We demonstrate empirically the utility of our approach by comparing it to all the appropriate strawmen algorithms on the ubiquitous UCR Benchmarks and with a case study in supporting insect classification on resource-constrained sensors.
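The two building blocks the abstract refers to can be sketched in a few lines. The following is an illustrative sketch only, not the authors' implementation: `dtw` is the classic dynamic-programming DTW distance, and `nearest_centroid_classify` issues one DTW computation per class (against a precomputed average series) instead of one per training instance, which is the source of the speedup. Note that a plain arithmetic mean is used here purely as a placeholder centroid; the paper's point is that such a mean is typically meaningless for warped series, and that it should be replaced by a DTW-based average (the DBA method of Petitjean et al.).

```python
def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic-programming DTW distance."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i][j] = d + min(cost[i - 1][j],      # a[i-1] repeats
                                 cost[i][j - 1],      # b[j-1] repeats
                                 cost[i - 1][j - 1])  # one-to-one match
    return cost[n][m] ** 0.5


def mean_series(series_list):
    """Placeholder centroid: pointwise arithmetic mean of equal-length
    series. The paper replaces this with a DTW-based average (DBA)."""
    k = len(series_list)
    return [sum(s[t] for s in series_list) / k
            for t in range(len(series_list[0]))]


def nearest_centroid_classify(query, centroids):
    """centroids: dict mapping class label -> averaged series.
    One DTW computation per class, not per training instance."""
    return min(centroids, key=lambda label: dtw(query, centroids[label]))
```

Because DTW can warp one series onto another, two series that differ only by local time shifts have distance zero, e.g. `dtw([0, 0, 1, 1, 0], [0, 1, 1, 0])` is `0.0`; this is exactly the property that makes the nearest-centroid centroid hard to define, since the pointwise mean of such shifted series smears the shared shape.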


Keywords: Time series, Averaging, Dynamic time warping, Classification, Data mining



This research was supported by ARC Grants DP120100553 and DP140100087, NSF Grant IIS-1161997, the Bill and Melinda Gates Foundation, Vodafone’s Wireless Innovation Project, the French-Australia Science Innovation Collaboration (PHC Grant No. 32571NA), and the Air Force Office of Scientific Research, Asian Office of Aerospace Research, under contracts FA2386-15-1-4017 and FA2386-15-1-4007.


References

  1. Wang X, Mueen A, Ding H, Trajcevski G, Scheuermann P, Keogh E (2013) Experimental comparison of representation methods and distance measures for time series data. Data Min Knowl Discov 26(2):275–309
  2. Bagnall A, Lines J (2014) An experimental evaluation of nearest neighbour time series classification. Technical report #CMP-C14-01, Department of Computing Sciences, University of East Anglia
  3. Xi X, Keogh E, Shelton C, Wei L, Ratanamahatana CA (2006) Fast time series classification using numerosity reduction. In: International conference on machine learning, pp 1033–1040
  4. Rakthanmanon T, Campana B, Mueen A, Batista G, Westover B, Zhu Q, Zakaria J, Keogh E (2012) Searching and mining trillions of time series subsequences under dynamic time warping. In: International conference on knowledge discovery and data mining, pp 262–270
  5. Assent I, Wichterich M, Krieger R, Kremer H, Seidl T (2009) Anticipatory DTW for efficient similarity search in time series databases. Proc VLDB Endow 2(1):826–837
  6. Kremer H, Günnemann S, Ivanescu A-M, Assent I, Seidl T (2011) Efficient processing of multiple DTW queries in time series databases. In: Scientific and statistical database management. Springer, Berlin, pp 150–167
  7. Zhuang DE, Li GC, Wong AK (2014) Discovery of temporal associations in multivariate time series. IEEE Trans Knowl Data Eng 26(12):2969–2982
  8. Petitjean F, Ketterlin A, Gançarski P (2011) A global averaging method for dynamic time warping, with applications to clustering. Pattern Recognit 44(3):678–693
  9. Petitjean F, Forestier G, Webb GI, Nicholson AE, Chen Y, Keogh E (2014) Dynamic time warping averaging of time series allows faster and more accurate classification. In: IEEE international conference on data mining, pp 470–479
  10. Galton F (1907) Vox populi. Nature 75(1949):450–451
  11. Tibshirani R, Hastie T, Narasimhan B, Chu G (2002) Diagnosis of multiple cancer types by shrunken centroids of gene expression. Proc Natl Acad Sci 99(10):6567–6572
  12. Gou J, Yi Z, Du L, Xiong T (2012) A local mean-based k-nearest centroid neighbor classifier. Comput J 55(9):1058–1071
  13. Hart PE (1968) The condensed nearest neighbor rule. IEEE Trans Inf Theory 14(3):515–516
  14. Xi X, Ueno K, Keogh E, Lee D-J (2008) Converting non-parametric distance-based classification to anytime algorithms. Pattern Anal Appl 11(3–4):321–336
  15.
  16. Demšar J (2006) Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res 7:1–30
  17. Ariely D (2001) Seeing sets: representation by statistical properties. Psychol Sci 12(2):157–162
  18. Alvarez GA (2011) Representing multiple objects as an ensemble enhances visual cognition. Trends Cogn Sci 15(3):122–131
  19. Jenkins R, Burton A (2008) 100% accuracy in automatic face recognition. Science 319(5862):435
  20. Gusfield D (1997) Algorithms on strings, trees, and sequences: computer science and computational biology. Cambridge University Press, ch 14: Multiple string comparison—the Holy Grail, pp 332–367
  21. Wang L, Jiang T (1994) On the complexity of multiple sequence alignment. J Comput Biol 1(4):337–348
  22. Gupta L, Molfese DL, Tammana R, Simos PG (1996) Nonlinear alignment and averaging for estimating the evoked potential. IEEE Trans Biomed Eng 43(4):348–356
  23. Wang K, Gasser T et al (1997) Alignment of curves by dynamic time warping. Ann Stat 25(3):1251–1276
  24. Niennattrakul V, Ratanamahatana CA (2009) Shape averaging under time warping. In: IEEE international conference on electrical engineering/electronics, computer, telecommunications and information technology, vol 2, pp 626–629
  25. Ongwattanakul S, Srisai D (2009) Contrast enhanced dynamic time warping distance for time series shape averaging classification. In: International conference on interaction sciences: information technology, culture and human. ACM, pp 976–981
  26. Feng D-F, Doolittle RF (1987) Progressive sequence alignment as a prerequisite to correct phylogenetic trees. J Mol Evol 25(4):351–360
  27. Keogh E, Xi X, Wei L, Ratanamahatana CA (2011) The UCR time series classification/clustering homepage.
  28. Petitjean F, Gançarski P (2012) Summarizing a set of time series by averaging: from Steiner sequence to compact multiple alignment. Theor Comput Sci 414(1):76–91
  29. Petitjean F, Inglada J, Gançarski P (2012) Satellite image time series analysis under time warping. IEEE Trans Geosci Remote Sens 50(8):3081–3095
  30. Petitjean F (2014) Matlab and Java source code for DBA. doi:10.5281/zenodo.10432
  31. Kranen P, Seidl T (2009) Harnessing the strengths of anytime algorithms for constant data streams. Data Min Knowl Discov 19(2):245–260
  32. Hu B, Rakthanmanon T, Hao Y, Evans S, Lonardi S, Keogh E (2011) Discovering the intrinsic cardinality and dimensionality of time series using MDL. In: IEEE international conference on data mining, pp 1086–1091
  33. Ratanamahatana CA, Keogh E (2005) Three myths about dynamic time warping data mining. In: SIAM international conference on data mining, pp 506–510
  34. Niennattrakul V, Ratanamahatana CA (2007) Inaccuracies of shape averaging method using dynamic time warping for time series data. In: International conference on computational science. Springer, pp 513–520
  35. Pekalska E, Duin RP, Paclík P (2006) Prototype selection for dissimilarity-based classifiers. Pattern Recognit 39(2):189–208
  36. Ueno K, Xi X, Keogh E, Lee D-J (2006) Anytime classification using the nearest neighbor algorithm with applications to stream mining. In: IEEE international conference on data mining, pp 623–632
  37. Chen Y, Why A, Batista G, Mafra-Neto A, Keogh E (2014) Flying insect classification with inexpensive sensors. J Insect Behav 27(5):657–677
  38. Goddard LB, Roth AE, Reisen WK, Scott TW et al (2002) Vector competence of California mosquitoes for West Nile virus. Emerg Infect Dis 8(12):1385–1391
  39. Yang Y, Webb GI, Korb K, Ting K-M (2007) Classifying under computational resource constraints: anytime classification using probabilistic estimators. Mach Learn 69(1):35–53

Copyright information

© Springer-Verlag London (outside the USA) 2015

Authors and Affiliations

  • François Petitjean (1, corresponding author)
  • Germain Forestier (2)
  • Geoffrey I. Webb (1)
  • Ann E. Nicholson (1)
  • Yanping Chen (3)
  • Eamonn Keogh (3)

  1. Faculty of IT, Monash University, Melbourne, VIC, Australia
  2. MIPS (EA 2332), Université de Haute Alsace, Mulhouse, France
  3. Computer Science and Engineering Department, University of California, Riverside, USA
