
Journal of Classification, Volume 31, Issue 2, pp. 154–178

A Run Length Transformation for Discriminating Between Auto Regressive Time Series


Abstract

We describe a simple time series transformation to detect differences in series that can be accurately modelled as stationary autoregressive (AR) processes. The transformation involves forming the histogram of above- and below-the-mean run lengths. The run length (RL) transformation has the benefits of being very fast, compact and updatable for new data in constant time. Furthermore, it can be generated directly from data that has already been highly compressed. We first establish the theoretical asymptotic relationship between run length distributions and AR models through consideration of the zero crossing probability and the distribution of runs. We benchmark our transformation against two alternatives: the truncated autocorrelation function (ACF) transform and the AR transformation, which involves the standard method of fitting the partial autocorrelation coefficients with the Durbin-Levinson recursions and using the Akaike Information Criterion stopping procedure. Whilst optimal in the idealised scenario, these representations are time-consuming to compute and cannot be updated online for new data. We show that for classification problems the accuracy obtained using the run length distribution tends towards that obtained using the full fitted models. We then propose three alternative distance measures for run length distributions based on Gower's general similarity coefficient, the likelihood ratio and dynamic time warping (DTW). Through simulated classification experiments we show that a nearest neighbour classifier based on DTW converges to the optimum faster than classifiers based on Euclidean distance, Gower's coefficient and the likelihood ratio. We experiment with a variety of classifiers and demonstrate that although the RL transform requires more data than the best-performing classifier to achieve the same accuracy as AR or ACF, this factor is at worst non-increasing with the series length, m, whereas the relative time taken to fit AR and ACF increases with m. We conclude that if the data is stationary and can be suitably modelled by an AR series, and if time is an important factor in reaching a discriminatory decision, then the run length distribution transform is a simple and effective transformation to use.
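To make the transformation concrete, the following Python sketch computes the run length histograms for a single series. It is a minimal illustration of the idea described in the abstract, not the authors' implementation: the function name, the histogram cap max_run (runs longer than the cap are pooled into the final bin) and the choice to keep separate above- and below-mean histograms are our own assumptions.

```python
import numpy as np

def run_length_transform(series, max_run=20):
    """Histograms of above- and below-the-mean run lengths (a sketch).

    max_run caps the histogram length; runs longer than max_run are
    pooled into the final bin. Both choices are illustrative only.
    """
    x = np.asarray(series, dtype=float)
    above = x > x.mean()                               # clipped (binary) series
    # Positions where the clipped series changes value mark run boundaries.
    change_points = np.flatnonzero(np.diff(above.astype(np.int8))) + 1
    run_starts = np.concatenate(([0], change_points))
    run_ends = np.concatenate((change_points, [len(x)]))
    run_lengths = run_ends - run_starts

    hist_above = np.zeros(max_run, dtype=int)
    hist_below = np.zeros(max_run, dtype=int)
    for start, length in zip(run_starts, run_lengths):
        b = min(length, max_run) - 1                   # pool long runs into last bin
        if above[start]:
            hist_above[b] += 1
        else:
            hist_below[b] += 1
    return hist_above, hist_below

if __name__ == "__main__":
    # Example: RL histograms of a simulated AR(1) series with coefficient 0.7.
    rng = np.random.default_rng(0)
    y = np.zeros(1000)
    for t in range(1, len(y)):
        y[t] = 0.7 * y[t - 1] + rng.standard_normal()
    print(run_length_transform(y))
```

The constant-time update claimed in the abstract is easy to see from this formulation: with the mean held fixed, a new observation either extends the current run or terminates it and increments a single histogram bin. The resulting histograms can then be compared with Euclidean distance, Gower's coefficient, the likelihood ratio or DTW inside a nearest neighbour classifier, as in the experiments the abstract describes.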

Keywords

Time series classification, Run length distribution, Auto regressive model approximation


References

  1. ABRAHAM, Z., and TAN, P. (2010), "An Integrated Framework for Simultaneous Classification and Regression of Time-Series Data", in Proceedings of the SIAM International Conference on Data Mining, SDM 2010, pp. 653–664.
  2. AGRAWAL, R., FALOUTSOS, C., and SWAMI, A. (1993), "Efficient Similarity Search in Sequence Databases", in Proceedings of the 4th International Conference on Foundations of Data Organization and Algorithms, Chicago, also in Lecture Notes in Computer Science 730, Springer Verlag, pp. 169–184.
  3. BAGNALL, A., and JANACEK, G. (2004), "Clustering Time Series from ARMA Models with Clipped Data", Machine Learning, 58(2), 151–178.
  4. BAGNALL, A., RATANAMAHATANA, C., KEOGH, E., LONARDI, S., and JANACEK, G. (2006), "A Bit Level Representation for Time Series Data Mining with Shape Based Similarity", Data Mining and Knowledge Discovery Journal, 13(1), 11–40.
  5. BAGNALL, A., DAVIS, L., HILLS, J., and LINES, J. (2012), "Transformation Based Ensembles for Time Series Classification", in Proceedings of the SIAM International Conference on Data Mining, SDM 2012, pp. 307–318.
  6. BOJANCZYK, A.W., BRENT, R.P., and DE HOOG, F.R. (1995), "A Weakly Stable Algorithm for General Toeplitz Systems", Numerical Algorithms, 10, 225–244.
  7. BOX, G., JENKINS, G., and REINSEL, G. (2008), Time Series Analysis, Forecasting and Control (4th ed.), John Wiley and Sons.
  8. BUZA, K., NANOPOULOS, A., and SCHMIDT-THIEME, L. (2011), "Fusion of Similarity Measures for Time Series Classification", in Proceedings of the 6th International Conference on Hybrid Artificial Intelligent Systems, also in Lecture Notes in Computer Science 6679, Springer Verlag, pp. 253–261.
  9. CORDUAS, M., and PICCOLO, D. (2008), "Time Series Clustering and Classification by the Autoregressive Metric", Computational Statistics and Data Analysis, 52(4), 1860–1872.
  10. COX, T., and COX, M. (2000), "A General Weighted Two-Way Dissimilarity Coefficient", Journal of Classification, 17(1), 101–121.
  11. DENG, H., RUNGER, G., TUV, E., and VLADIMIR, M. (2013), "A Time Series Forest for Classification and Feature Extraction", Information Sciences, 239, 142–153.
  12. DING, H., TRAJCEVSKI, G., SCHEUERMANN, P., WANG, X., and KEOGH, E. (2008), "Querying and Mining of Time Series Data: Experimental Comparison of Representations and Distance Measures", in Proceedings of the 34th International Conference on Very Large Data Bases (VLDB), pp. 1542–1552.
  13. DOUZAL-CHOUAKRIA, A., and AMBLARD, C. (2012), "Classification Trees for Time Series", Pattern Recognition, 45(3), 1076–1091.
  14. DOUZAL-CHOUAKRIA, A., DIALLO, A., and GIROUD, F. (2010), "A Random-Periods Model for the Comparison of a Metrics Efficiency to Classify Cell-Cycle Expressed Genes", Pattern Recognition Letters, 31(12), 1608–1617.
  15. DURBIN, J. (1960), "The Fitting of Time Series Models", Review of the International Statistical Institute, 28(4), 233–243.
  16. GOWER, J.C. (1971), "A General Coefficient of Similarity and Some of Its Properties", Biometrics, 27(4), 857–871.
  17. HE, S., and KEDEM, B. (1990), "The Zero-Crossing Rate of Autoregressive Processes and Its Link to Unit Roots", Journal of Time Series Analysis, 11(3), 201–213.
  18. JEONG, Y., JEONG, M., and OMITAOMU, O. (2011), "Weighted Dynamic Time Warping for Time Series Classification", Pattern Recognition, 44(9), 2231–2240.
  19. KALPAKIS, K., GADA, D., and PUTTANGUNTA, V. (2001), "Distance Measures for Effective Clustering of ARIMA Time-Series", in Proceedings of the IEEE International Conference on Data Mining (ICDM), San Jose, CA, pp. 273–280.
  20. KEOGH, E., and FOLIAS, T., "The UCR Time Series Data Mining Archive", http://www.cs.ucr.edu/~eamonn/time_series_data/.
  21. LIAO, T.W. (2005), "Clustering of Time Series Data: A Survey", Pattern Recognition, 38(11), 1857–1874.
  22. LINES, J., DAVIS, L., HILLS, J., and BAGNALL, A. (2012), "A Shapelet Transform for Time Series Classification", in Proceedings of the 18th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Beijing, China, pp. 289–297.
  23. MAHARAJ, E.A. (2000), "Clusters of Time Series", Journal of Classification, 17, 297–314.
  24. MAHARAJ, E.A. (1999), "Comparison and Classification of Stationary Multivariate Time Series", Pattern Recognition, 32(7), 1129–1138.
  25. MAHARAJ, E.A. (1996), "A Significance Test for Classifying ARMA Models", Journal of Statistical Computation and Simulation, 54(4), 305–331.
  26. PENG, C.K., MIETUS, J.E., LIU, Y., KHALSA, G., DOUGLAS, P.S., BENSON, H., and GOLDBERGER, A.L. (1999), "Exaggerated Heart Rate Oscillations During Two Meditation Techniques", International Journal of Cardiology, 70(2), 101–107.
  27. PHYSIONET, "PhysioBank's Data Archive", http://www.physionet.org/.
  28. PICCOLO, D. (1990), "A Distance Measure for Classifying ARIMA Models", Journal of Time Series Analysis, 11(2), 153–164.
  29. QAIRUNNISA, S., CHANDRASHEKAR, M., REVATHI, M., KONDAM, A., MADHURI, B.A., and SURESH, M. (2012), "A Study on Modulation of Cardiovascular Response to Yoga Training", International Journal of Biological and Medical Research, 3(2), 1662–1666.
  30. RATANAMAHATANA, C., and KEOGH, E. (2005), "Three Myths About Dynamic Time Warping Data Mining", in Proceedings of the SIAM International Conference on Data Mining, SDM 2005, pp. 506–510.
  31. RODRIGUEZ, J., ALONSO, C., and MAESTRO, J. (2005), "Support Vector Machines of Interval-Based Features for Time Series Classification", Knowledge-Based Systems, 18, 171–178.
  32. VAN WYK, B., VAN WYK, M., and QI, G. (2009), "Difference Histograms: A New Tool for Time Series Analysis Applied to Bearing Fault Diagnosis", Pattern Recognition Letters, 30(6), 595–599.
  33. WITTEN, I., FRANK, E., and HALL, M. (2011), Data Mining: Practical Machine Learning Tools and Techniques (3rd ed.), San Francisco: Morgan Kaufmann.
  34. WU, Y., AGRAWAL, D., and EL ABBADI, A. (2000), "A Comparison of DFT and DWT Based Similarity Search in Time-Series Databases", in Proceedings of the 9th International Conference on Information and Knowledge Management, McLean, VA, New York: ACM Press, pp. 488–495.
  35. YE, L., and KEOGH, E. (2009), "Time Series Shapelets: A New Primitive for Data Mining", in Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Paris, France, New York: ACM Press, pp. 947–956.

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  1. University of East Anglia, Norwich, UK
