Sequential Data Classification in the Space of Liquid State Machines

  • Yang Li
  • Junyuan Hong
  • Huanhuan Chen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9851)

Abstract

This paper proposes a novel approach to sequential data classification. Each sequence in a data stream is approximated and represented by a state-space model, the liquid state machine, and is thereby mapped into the state space of the approximating model. Instead of classifying the sequences directly, we measure the dissimilarity between the fitted models under different hypotheses. A classification experiment on binary synthetic data demonstrates the robustness of the approach given an appropriate dissimilarity measure, and experiments on benchmark univariate and multivariate data confirm its advantages over several common algorithms. The software related to this paper is available at https://github.com/jyhong836/LSMModelSpace.
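To make the "learning in the model space" idea concrete, the following is a minimal sketch, not the paper's implementation: it stands in a simple echo-state-style reservoir for the liquid state machine, represents each sequence by the ridge-regression readout weights fitted to one-step prediction, and classifies in model space with a 1-nearest-neighbour rule under Euclidean distance. All names, sizes, and the choice of distance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir shared by all sequences (an echo-state-style
# stand-in for a liquid state machine; sizes are illustrative).
N_RES = 30
W_in = rng.uniform(-0.5, 0.5, size=N_RES)
W_res = rng.uniform(-0.5, 0.5, size=(N_RES, N_RES))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

def readout_weights(seq, ridge=1e-6):
    """Map a univariate sequence into model space: drive the reservoir
    with the sequence and fit a linear readout that predicts the next
    value. The readout weight vector represents the sequence."""
    states = np.zeros((len(seq) - 1, N_RES))
    x = np.zeros(N_RES)
    for t, u in enumerate(seq[:-1]):
        x = np.tanh(W_in * u + W_res @ x)
        states[t] = x
    y = seq[1:]
    # Ridge regression: w = (S^T S + lambda*I)^{-1} S^T y
    A = states.T @ states + ridge * np.eye(N_RES)
    return np.linalg.solve(A, states.T @ y)

def classify_1nn(train_models, train_labels, test_model):
    """Classify in model space by nearest readout-weight vector."""
    d = [np.linalg.norm(test_model - m) for m in train_models]
    return train_labels[int(np.argmin(d))]
```

In this sketch the Euclidean distance between weight vectors plays the role of the dissimilarity measure; the paper's point is that this measure can be chosen per hypothesis, and other choices (e.g. kernel-induced distances between models) slot into `classify_1nn` unchanged.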

Keywords

Sequential learning · Classification · Learning in the model space

Notes

Acknowledgements

This work is supported by the National Key Research and Development Plan under Grant 2016YFB1000905, and the National Natural Science Foundation of China under Grants 91546116, 61511130083, 61673363. The authors would like to thank Dr. Hongfei Xing for her valuable comments.

References

  1. Berndt, D.J., Clifford, J.: Using dynamic time warping to find patterns in time series. In: KDD Workshop, vol. 10, pp. 359–370 (1994)
  2. Chang, C.C., Lin, C.J.: LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2(3), 27 (2011)
  3. Chen, H., Tang, F., Tino, P., Cohn, A.G., Yao, X.: Model metric co-learning for time series classification. In: Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, pp. 3387–3394. AAAI Press (2015)
  4. Chen, H., Tang, F., Tino, P., Yao, X.: Model-based kernel for efficient time series analysis. In: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 392–400. ACM (2013)
  5. Chen, H., Tino, P., Rodan, A., Yao, X.: Learning in the model space for cognitive fault diagnosis. IEEE Trans. Neural Netw. Learn. Syst. 25(1), 124–136 (2014)
  6. Chen, H., Tiňo, P., Yao, X.: Cognitive fault diagnosis in Tennessee Eastman process using learning in the model space. Comput. Chem. Eng. 67, 33–42 (2014)
  7. Chen, Y., Keogh, E., Hu, B., Begum, N., Bagnall, A., Mueen, A., Batista, G.: The UCR time series classification archive, July 2015. www.cs.ucr.edu/~eamonn/time_series_data/
  8. Cuturi, M., Doucet, A.: Autoregressive kernels for time series. arXiv preprint arXiv:1101.0673 (2011)
  9. Cuturi, M., Vert, J.P., Birkenes, O., Matsui, T.: A kernel for time series based on global alignments. In: IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 2, pp. 413–416 (2007)
  10. Figueiredo, M.A.T., Jain, A.K.: Unsupervised learning of finite mixture models. IEEE Trans. Pattern Anal. Mach. Intell. 24(3), 381–396 (2002)
  11. Granger, C.W.J., Hatanaka, M., et al.: Spectral Analysis of Economic Time Series. Princeton University Press, Princeton (1964)
  12. Jebara, T., Kondor, R., Howard, A.: Probability product kernels. J. Mach. Learn. Res. 5, 819–844 (2004)
  13. Keogh, E.J., Pazzani, M.J.: Scaling up dynamic time warping for datamining applications. In: Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 285–289. ACM (2000)
  14. Kitagawa, G.: A self-organizing state-space model. J. Am. Stat. Assoc. 93, 1203–1215 (1998)
  15. Lahiri, S.N.: Theoretical comparisons of block bootstrap methods. Ann. Stat. 27, 386–404 (1999)
  16. Maass, W., Natschläger, T., Markram, H.: Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput. 14(11), 2531–2560 (2002)
  17. Maaten, L.: Learning discriminative Fisher kernels. In: Proceedings of the 28th International Conference on Machine Learning, pp. 217–224 (2011)
  18. Müller, K.-R., Smola, A.J., Rätsch, G., Schölkopf, B., Kohlmorgen, J., Vapnik, V.: Predicting time series with support vector machines. In: Gerstner, W., Germond, A., Hasler, M., Nicoud, J.-D. (eds.) ICANN 1997. LNCS, vol. 1327, pp. 999–1004. Springer, Heidelberg (1997). doi:10.1007/BFb0020283
  19. Natschläger, T., Markram, H., Maass, W.: Computer models and analysis tools for neural microcircuits. In: Kötter, R. (ed.) Neuroscience Databases, pp. 123–138. Springer, New York (2003)
  20. Sahoo, D., Sharma, A., Hoi, S.C., Zhao, P.: Temporal kernel descriptors for learning with time-sensitive patterns. In: Proceedings of the First SIAM Conference on Data Mining (2016)
  21. Sakoe, H., Chiba, S.: Dynamic programming algorithm optimization for spoken word recognition. IEEE Trans. Acoust. Speech Sig. Process. 26(1), 43–49 (1978)
  22. Shawe-Taylor, J., Cristianini, N.: Kernel Methods for Pattern Analysis. Cambridge University Press, New York (2004)
  23. Van Rossum, M.C.: A novel spike distance. Neural Comput. 13(4), 751–763 (2001)

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. UBRI, School of Computer Science and Technology, University of Science and Technology of China, Hefei, China