A Run Length Transformation for Discriminating Between Auto Regressive Time Series
We describe a simple time series transformation to detect differences between series that can be accurately modelled as stationary autoregressive (AR) processes. The transformation involves forming the histogram of the lengths of runs above and below the series mean. The run length (RL) transformation has the benefits of being very fast, compact, and updatable for new data in constant time. Furthermore, it can be generated directly from data that has already been highly compressed. We first establish the theoretical asymptotic relationship between run length distributions and AR models through consideration of the zero-crossing probability and the distribution of runs. We benchmark our transformation against two alternatives: the truncated autocorrelation function (ACF) transform and the AR transformation, which uses the standard method of fitting the partial autocorrelation coefficients with the Durbin-Levinson recursions and selecting the model order with the Akaike Information Criterion. Whilst optimal in the idealized scenario, representing the data in these ways is time consuming, and the representation cannot be updated online as new data arrive. We show that for classification problems the accuracy obtained using the run length distribution tends towards that obtained from the full fitted models. We then propose three alternative distance measures for run length distributions, based on Gower's general similarity coefficient, the likelihood ratio, and dynamic time warping (DTW). Through simulated classification experiments we show that a nearest neighbour classifier using a DTW distance converges to the optimum faster than classifiers based on Euclidean distance, Gower's coefficient, and the likelihood ratio.
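The run length transformation described above can be sketched as follows. This is an illustrative reading of the abstract, not the authors' code: the function name, the truncation of long runs into a final bin, and the use of `>=` at the mean are all assumptions.

```python
import numpy as np

def run_length_histogram(series, max_run=None):
    """Histograms of the lengths of runs above and below the series mean.

    Returns a pair of count vectors (above, below), where index k counts
    runs of length k+1; runs longer than max_run fall into the last bin.
    Illustrative sketch only -- details are assumptions, not the paper's code.
    """
    x = np.asarray(series, dtype=float)
    signs = x >= x.mean()            # True above the mean, False below
    if max_run is None:
        max_run = len(x)
    above = np.zeros(max_run, dtype=int)
    below = np.zeros(max_run, dtype=int)
    run = 1
    for i in range(1, len(signs)):
        if signs[i] == signs[i - 1]:
            run += 1                 # run continues
        else:
            bucket = above if signs[i - 1] else below
            bucket[min(run, max_run) - 1] += 1
            run = 1                  # run ends at a mean crossing
    bucket = above if signs[-1] else below
    bucket[min(run, max_run) - 1] += 1
    return above, below
```

Because only the current run length and the histogram counts need to be stored, a histogram built this way can be updated for each new observation in constant time, consistent with the updatability claim above.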
We experiment with a variety of classifiers and demonstrate that although the RL transform requires more data than the AR or ACF transforms to achieve the same accuracy with the best performing classifier, this data factor is at worst non-increasing with the series length, m, whereas the relative time taken to fit the AR and ACF representations increases with m. We conclude that if the data are stationary and can be suitably modelled as an AR series, and if time is an important factor in reaching a discriminatory decision, then the run length distribution is a simple and effective transformation to use.
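The DTW-based nearest-neighbour distance over run length distributions mentioned above can be sketched with a standard full dynamic-time-warping recursion. This is a generic textbook DTW with squared-difference cost, assumed here for illustration; the paper's exact cost function and any window constraints are not specified in the abstract.

```python
import numpy as np

def dtw_distance(a, b):
    """Full DTW distance between two sequences, e.g. two run-length
    count vectors, using squared-difference local cost.

    Illustrative sketch: no warping window or normalisation is applied.
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # cumulative cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # best of insertion, deletion, and match moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A 1-nearest-neighbour classifier would then assign a query series the class label of the training series whose run-length distribution minimises this distance.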
Keywords: Time series classification, Run length distribution, Autoregressive model approximation