
Applied Intelligence, Volume 23, Issue 3, pp. 241–255

Learning States and Rules for Detecting Anomalies in Time Series

  • Stan Salvador
  • Philip Chan

Abstract

The normal operation of a device can be characterized by a set of temporal states. To identify these states, we introduce a segmentation algorithm called Gecko that can determine a reasonable number of segments using our proposed L method. We then use the RIPPER classification algorithm to describe each state with logical rules. Finally, transitional logic between the states is added to create a finite state automaton. Our empirical results, on data obtained from the NASA shuttle program, indicate that the Gecko segmentation algorithm is comparable to a human expert in identifying states, and that our L method determines the number of segments a segmentation algorithm should return more accurately than the existing permutation-tests method. The results also show that our overall system can track normal behavior and detect anomalies.
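
As a concrete illustration of the L method referred to above: it locates the "knee" of an evaluation graph (an evaluation metric plotted against the number of segments) by fitting a straight line to the points on each side of every candidate split and keeping the split whose pair of lines gives the smallest weighted fit error; the number of segments at that knee is returned. The Python/NumPy sketch below is a minimal version of that idea only; the function names, the exact error weighting, and the axis convention are assumptions made for this example, not the authors' implementation.

    import numpy as np

    def line_rmse(x, y):
        # RMSE of the least-squares line fitted through the points (x, y).
        slope, intercept = np.polyfit(x, y, 1)
        residuals = y - (slope * x + intercept)
        return float(np.sqrt(np.mean(residuals ** 2)))

    def l_method_knee(metric):
        # metric[i] holds the evaluation value for i + 2 segments, so the
        # x axis runs over 2, 3, ..., len(metric) + 1 segments.  Assumes
        # the graph has at least four points.
        y = np.asarray(metric, dtype=float)
        x = np.arange(2, len(y) + 2, dtype=float)
        b = len(y)
        best_split, best_err = None, float("inf")
        # Every candidate split must leave at least two points on each
        # side so that both straight lines are well defined.
        for c in range(2, b - 1):
            err_left = line_rmse(x[:c], y[:c])
            err_right = line_rmse(x[c:], y[c:])
            # Weight each side's error by the fraction of points it covers.
            total = (c * err_left + (b - c) * err_right) / b
            if total < best_err:
                best_err, best_split = total, c
        # The last point of the left-hand line is taken as the knee.
        return int(x[best_split - 1])

Feeding this function an evaluation graph such as segmentation error versus number of segments (one value per candidate segment count) yields the segment count at the knee, which can then serve as the number of segments the segmentation algorithm returns.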

Keywords

anomaly detection, time series, segmentation, cluster validation, clustering



Copyright information

© Springer Science + Business Media, Inc. 2005

Authors and Affiliations

  1. Department of Computer Sciences, Florida Institute of Technology, Melbourne, FL
