
Learning a Kernel Matrix for Time Series Data from DTW Distances

  • Hiroyuki Narita
  • Yasumasa Sawamura
  • Akira Hayashi
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4985)

Abstract

One of the advantages of kernel methods is that they can handle various kinds of objects, not only vectorial data with a fixed number of attributes. In this paper, we develop kernels for time series data using dynamic time warping (DTW) distances. Since DTW distances are pseudo-distances that do not satisfy the triangle inequality, a kernel matrix based on them is, in general, not positive semidefinite. We use semidefinite programming (SDP) to guarantee the positive semidefiniteness of a kernel matrix. We present neighborhood preserving embedding (NPE), an SDP formulation that yields a kernel matrix which best preserves the local geometry of the time series data. We also present an out-of-sample extension (OSE) for NPE. We validate our approach with two applications: time series classification and time series embedding for similarity search.
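To make the underlying issue concrete, below is a minimal sketch (not the authors' NPE/SDP formulation) in Python/NumPy: it computes pairwise DTW distances for a few toy sequences, builds a Gaussian-style kernel from those distances, and checks the smallest eigenvalue, which may be negative because DTW violates the triangle inequality. A simple eigenvalue-clipping repair is included purely for illustration; the toy series, the bandwidth choice, and the clipping step are all assumptions of this sketch.

    import numpy as np

    def dtw_distance(x, y):
        """Classic DTW distance between two 1-D sequences (no window constraint)."""
        n, m = len(x), len(y)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(x[i - 1] - y[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m]

    # Toy time series of unequal length (hypothetical data).
    series = [
        np.array([0.0, 1.0, 2.0, 1.0, 0.0]),
        np.array([0.0, 2.0, 1.0, 0.0]),
        np.array([1.0, 1.0, 0.0, -1.0, -1.0, 0.0]),
        np.array([0.0, -1.0, -2.0, -1.0]),
    ]

    # Pairwise DTW distance matrix: symmetric with zero diagonal, but not a metric.
    n = len(series)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = dtw_distance(series[i], series[j])

    # Gaussian-style kernel on DTW distances; because DTW is only a pseudo-distance,
    # the resulting matrix K need not be positive semidefinite.
    sigma = D[D > 0].mean()          # assumed bandwidth heuristic
    K = np.exp(-(D ** 2) / (2.0 * sigma ** 2))

    print("smallest eigenvalue of K:", np.linalg.eigvalsh(K).min())  # can be negative

    # Crude repair: clip negative eigenvalues to zero. This is NOT the paper's
    # NPE/SDP approach, just the simplest way to restore positive semidefiniteness.
    w, V = np.linalg.eigh(K)
    K_psd = (V * np.clip(w, 0.0, None)) @ V.T
    print("smallest eigenvalue after clipping:", np.linalg.eigvalsh(K_psd).min())

The paper's SDP formulation instead searches over all positive semidefinite kernel matrices for one that best preserves the local geometry implied by the DTW distances, rather than simply projecting a fixed kernel onto the PSD cone as above.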

Keywords

Time Series Data · Kernel Method · Dynamic Time Warping · Kernel Matrix · Kernel Principal Component Analysis

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Hiroyuki Narita (1)
  • Yasumasa Sawamura (1)
  • Akira Hayashi (1)

  1. Graduate School of Information Sciences, Hiroshima City University, Hiroshima, Japan
