Data Mining and Knowledge Discovery, Volume 32, Issue 4, pp 1074–1120

Optimizing dynamic time warping’s window width for time series data mining applications

  • Hoang Anh Dau
  • Diego Furtado Silva
  • François Petitjean
  • Germain Forestier
  • Anthony Bagnall
  • Abdullah Mueen
  • Eamonn Keogh


Dynamic Time Warping (DTW) is a highly competitive distance measure for most time series data mining problems. Obtaining the best performance from DTW requires setting its only parameter, the maximum amount of warping (w). In the supervised case with ample data, w is typically set by cross-validation in the training stage. However, this method is likely to yield suboptimal results for small training sets. In the unsupervised case, learning via cross-validation is not possible because we do not have access to labeled data. Many practitioners have thus resorted to assuming that “the larger the better”, and use the largest value of w permitted by the computational resources. However, as we will show, in most circumstances this is a naïve approach that produces inferior clusterings. Moreover, the best warping window width is generally non-transferable between the two tasks, i.e., for a single dataset, practitioners cannot simply apply the best w learned for classification to clustering, or vice versa. In addition, we will demonstrate that the appropriate amount of warping depends not only on the structure of the data, but also on the dataset size. Thus, even if a practitioner knows the best setting for a given dataset, they will likely be at a loss if they apply that setting to a larger version of that data. All these issues seem largely unknown, or at least unappreciated, in the community. In this work, we demonstrate the importance of setting DTW’s warping window width correctly, and we also propose novel methods to learn this parameter in both supervised and unsupervised settings. The algorithms we propose to learn w can produce significant improvements in classification accuracy and clustering quality. We demonstrate the correctness of our novel observations and the utility of our ideas by testing them on more than one hundred publicly available datasets.
Our compelling results allow us to make a perhaps unexpected claim: an underappreciated “low-hanging fruit” in optimizing DTW’s performance can produce improvements that make it an even stronger baseline, closing most or all of the improvement gap of the more sophisticated methods proposed in recent years.
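The warping window constraint the abstract refers to is usually realized as a Sakoe–Chiba band: when comparing points i and j of two series, the alignment may only deviate from the diagonal by a window of width w. A minimal Python sketch of band-constrained DTW follows; this is our own illustration, not the authors' code, and the convention of expressing w as a fraction of the series length is our assumption.

```python
# Illustrative sketch (ours, not the authors' code) of DTW with a
# Sakoe-Chiba warping window, the parameter w discussed in the abstract.
import math

def dtw(a, b, w):
    """DTW distance between two equal-length series a and b.

    w in [0, 1] is the maximum warping as a fraction of the series
    length: w = 0 reduces to Euclidean distance, w = 1 allows
    unconstrained warping.
    """
    n = len(a)
    r = int(round(w * n))  # window radius in samples
    INF = float("inf")
    # cost[i][j] = cheapest alignment of the prefixes a[:i] and b[:j]
    cost = [[INF] * (n + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        # only cells within the band |i - j| <= r are reachable
        for j in range(max(1, i - r), min(n, i + r) + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return math.sqrt(cost[n][n])
```

For example, `dtw([0, 0, 1, 2], [0, 1, 2, 2], 0.25)` is 0, because a one-sample shift lies within the band, whereas with `w = 0` (Euclidean) the same pair has a nonzero distance; this sensitivity of the measure to w is exactly what the paper studies.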


Keywords: Time series · Clustering · Classification · Dynamic time warping · Semi-supervised learning



This material is based upon work supported by the Air Force Office of Scientific Research, Asian Office of Aerospace Research and Development (AOARD) under award number FA2386-16-1-4023. The Australian Research Council under grant DE170100037 and the UK Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/M015807/1 have also supported this work. Finally, we acknowledge the funding from NSF IIS-1161997 II and NSF IIS-1510741. We also wish to take this opportunity to thank the donors of the data to the UCR Time Series Archive.



Copyright information

© The Author(s) 2018

Authors and Affiliations

  1. University of California, Riverside, Riverside, USA
  2. Universidade Federal de São Carlos, São Carlos, Brazil
  3. Monash University, Melbourne, Australia
  4. University of Haute-Alsace, Mulhouse, France
  5. University of East Anglia, Norwich, UK
  6. University of New Mexico, Albuquerque, USA
