Learning States and Rules for Detecting Anomalies in Time Series
The normal operation of a device can be characterized in different temporal states. To identify these states, we introduce a segmentation algorithm called Gecko that can determine a reasonable number of segments using our proposed L method. We then use the RIPPER classification algorithm to describe these states in logical rules. Finally, transitional logic between the states is added to create a finite state automaton. Our empirical results, on data obtained from the NASA shuttle program, indicate that the Gecko segmentation algorithm is comparable to a human expert in identifying states, and our L method performs better than the existing permutation tests method when determining the number of segments to return in segmentation algorithms. Empirical results have also shown that our overall system can track normal behavior and detect anomalies.
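The L method referred to above chooses the number of segments by locating the "knee" of an evaluation-metric-versus-segment-count curve: each candidate split of the curve is fit with two straight lines, and the split with the lowest size-weighted total RMSE is taken as the knee. The following is a minimal illustrative sketch of that idea, not the authors' exact formulation; the function name and the specific weighting are assumptions.

```python
import numpy as np

def l_method_knee(x, y):
    """Illustrative sketch of the L method: find the 'knee' of an
    evaluation-metric curve by fitting two straight lines (left and
    right of each candidate split) and picking the split whose
    size-weighted total RMSE is smallest."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    best_c, best_err = None, np.inf
    # Each side of the split needs at least 2 points to fit a line.
    for c in range(2, n - 1):
        err = 0.0
        for lo, hi in ((0, c), (c, n)):
            coeffs = np.polyfit(x[lo:hi], y[lo:hi], 1)   # least-squares line
            resid = y[lo:hi] - np.polyval(coeffs, x[lo:hi])
            err += (hi - lo) / n * np.sqrt(np.mean(resid ** 2))
        if err < best_err:
            best_c, best_err = c, err
    # Last x-value on the left line = suggested number of segments.
    return x[best_c - 1]
```

On a curve that drops steeply up to 5 segments and flattens afterward, the two-line fit is near-perfect only when the split is placed at the knee, so the function returns 5.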
Keywords: anomaly detection, time series, segmentation, cluster validation, clustering
- W. Cohen, “Fast effective rule induction,” in Proc. of the 12th Intl. Conference on Machine Learning, Tahoe City, CA, 1995, pp. 115–123.
- S. Salvador and P. Chan, “Determining the number of clusters/segments in hierarchical clustering/segmentation algorithms,” Laboratory for Learning Research, Florida Institute of Technology, Melbourne, FL, Technical Report TR-2003-18, 2003.
- R. Ng and J. Han, “Efficient and effective clustering methods for spatial data mining,” in The 20th Intl. Conf. on Very Large Data Bases, Santiago, Chile, 1994, pp. 144–155.
- M. Ester, H. Kriegel, J. Sander, and X. Xu, “A density-based algorithm for discovering clusters in large spatial databases with noise,” in Proc. 3rd Intl. Conf. on Knowledge Discovery and Data Mining, Portland, OR, 1996, pp. 226–231.
- A. Hinneburg and D. Keim, “An efficient approach to clustering in large multimedia databases with noise,” in Proc. 4th Intl. Conf. on Knowledge Discovery and Data Mining, New York City, NY, 1998, pp. 58–65.
- S. Guha, R. Rastogi, and K. Shim, “ROCK: A robust clustering algorithm for categorical attributes,” in The 15th Intl. Conf. on Data Engineering, Sydney, Australia, 1999, pp. 512–523.
- G. Karypis, E. Han, and V. Kumar, “Chameleon: A hierarchical clustering algorithm using dynamic modeling,” IEEE Computer, vol. 32, no. 8, pp. 68–75, 1999.
- G. Sheikholeslami, S. Chatterjee, and A. Zhang, “WaveCluster: A multi-resolution clustering approach for very large spatial databases,” in Proc. of the 24th VLDB, New York City, NY, 1998, pp. 428–439.
- E. Keogh, S. Chu, D. Hart, and M. Pazzani, “An online algorithm for segmenting time series,” in Proc. IEEE Intl. Conf. on Data Mining, San Jose, CA, 2001, pp. 289–296.
- P. Smyth, “Clustering using Monte-Carlo cross-validation,” in Proc. 2nd KDD, Portland, OR, 1996, pp. 126–133.
- R. Baxter and J. Oliver, “The kindest cut: minimum message length segmentation,” in Algorithmic Learning Theory, 7th Intl. Workshop, Sydney, Australia, 1996, pp. 83–90.
- K. Vasko and T. Toivonen, “Estimating the number of segments in time series data using permutation tests,” in Proc. IEEE Intl. Conf. on Data Mining, Maebashi City, Japan, 2002, pp. 466–473.
- V. Roth, T. Lange, M. Braun, and J. Buhmann, “A resampling approach to cluster validation,” in Proc. in Computational Statistics: 15th Symposium (COMPSTAT2002), Berlin, Germany, 2002, pp. 123–128.
- S. Monti, P. Tamayo, J. Mesirov, and T. Golub, “Consensus clustering: A resampling-based method for class discovery and visualization of gene expression microarray data,” Machine Learning, vol. 52, nos. 1–2, pp. 91–118, 2003.
- R. Tibshirani, G. Walther, and T. Hastie, “Estimating the number of clusters in a dataset via the Gap statistic,” Dept. of Biostatistics, Stanford Univ., Stanford, CA, Technical Report 208, 2001.
- R. Tibshirani, G. Walther, D. Botstein, and P. Brown, “Cluster validation by prediction strength,” Dept. of Biostatistics, Stanford Univ., Stanford, CA, Technical Report 2001-21, 2001.
- T. Chiu, D. Fang, J. Chen, Y. Wang, and C. Jeris, “A robust and scalable clustering algorithm for mixed type attributes in large database environment,” in Proc. of the 7th ACM SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining, San Francisco, CA, 2001, pp. 263–268.
- A. Foss and O. Zaïane, “A parameterless method for efficiently discovering clusters of arbitrary shape in large datasets,” in Proc. of the 2002 IEEE Intl. Conf. on Data Mining (ICDM'02), Maebashi City, Japan, 2002, pp. 179–186.
- S. Harris, D. Hess, and J. Venegas, “An objective analysis of the pressure-volume curve in the acute respiratory distress syndrome,” American Journal of Respiratory and Critical Care Medicine, vol. 161, no. 2, pp. 432–439, 2000.
- D. Dasgupta and S. Forrest, “Novelty detection in time series data using ideas from immunology,” in Proc. Fifth Intl. Conf. on Intelligent Systems, Reno, NV, 1996, pp. 82–87.
- T. Caudell and D. Newman, “An adaptive resonance architecture to define normality and detect novelties in time series and databases,” in Proc. IEEE World Congress on Neural Networks, Portland, OR, 1993, pp. IV166–176.
- P. Langley, D. George, S. Bay, and K. Saito, “Robust induction of process models from time-series data,” in Proc. of the 20th Intl. Conf. on Machine Learning, Washington, DC, 2003, pp. 432–439.
- E. Weisstein, “Least squares fitting,” From MathWorld-A Wolfram Web Resource. [http://mathworld.wolfram.com/LeastSquaresFitting.html].
- J. Fürnkranz and G. Widmer, “Incremental reduced error pruning,” in Proc. Intl. Conf. on Machine Learning, New Brunswick, NJ, 1994, pp. 70–77.
- E. Keogh and T. Folias, The UCR Time Series Data Mining Archive [http://www.cs.ucr.edu/~eamonn/TSDMA/index.html]. Riverside, CA. University of California—Computer Science and Engineering Department, 2004.