Nonlinear Dynamics, Volume 90, Issue 3, pp 1785–1806

High-dimensional time series prediction using kernel-based Koopman mode regression

  • Jia-Chen Hua
  • Farzad Noorian
  • Duncan Moss
  • Philip H. W. Leong
  • Gemunu H. Gunaratne
Original Paper


Abstract

We propose a novel methodology for high-dimensional time series prediction based on the kernel method extension of data-driven Koopman spectral analysis, via the following methodological advances: (a) a new numerical regularization method, (b) a natural ordering of Koopman modes that provides a fast alternative to the sparsity-promoting procedure, (c) a technique for selecting predictable Koopman modes, which is equivalent to cross-validation in machine learning, (d) an optimization method over the selected Koopman modes to improve prediction accuracy, and (e) prediction model generation and selection based on historical error measures. The prediction accuracy of this methodology is excellent: for example, when used to predict clients' order flow time series in foreign exchange, which are nearly random, it achieves more than a 10% improvement in root-mean-square error over autoregressive moving average (ARMA) models. This methodology also opens up new possibilities for data-driven modeling and forecasting of the complex systems that generate high-dimensional time series. We believe it will be of interest to scientists and engineers working in quantitative finance, econometrics, systems biology, neuroscience, meteorology, oceanography, system identification and control, data mining, machine learning, and many other fields involving high-dimensional time series and spatio-temporal data.
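The core machinery the abstract builds on is kernel-based Koopman spectral analysis in the style of dynamic mode decomposition: form Gram matrices from snapshot pairs, compress the Koopman operator through a truncated eigendecomposition (which also acts as a numerical regularizer), and forecast by evolving Koopman eigenfunctions and recombining them with Koopman modes. The following is a minimal illustrative sketch of that idea only; the function name and parameters are hypothetical, and it omits the paper's mode ordering, predictable-mode selection, mode optimization, and model-selection steps.

```python
import numpy as np

def kernel_koopman_forecast(X, steps=1, kernel=None, tol=1e-10):
    """Hypothetical sketch of kernel-based Koopman mode forecasting.

    X      : (n_features, m+1) array whose columns are successive snapshots.
    kernel : callable k(A, B) returning the Gram matrix with entries
             k(a_i, b_j); defaults to a Gaussian RBF kernel.
    Returns an (n_features, steps) array of forecasts beyond the last column.
    """
    if kernel is None:
        def kernel(A, B, sigma=1.0):  # Gaussian RBF Gram matrix
            d2 = (np.sum(A**2, 0)[:, None] + np.sum(B**2, 0)[None, :]
                  - 2.0 * A.T @ B)
            return np.exp(-d2 / (2.0 * sigma**2))

    Xm, Xp = X[:, :-1], X[:, 1:]        # snapshot pairs (x_t, x_{t+1})
    G = kernel(Xm, Xm)                  # G_ij = k(x_i, x_j)
    A = kernel(Xp, Xm)                  # A_ij = k(y_i, x_j)

    # Truncated eigendecomposition of G doubles as numerical regularization.
    w, Q = np.linalg.eigh(G)
    keep = w > tol * w.max()
    w, Q = w[keep], Q[:, keep]
    S, Sinv = np.diag(np.sqrt(w)), np.diag(1.0 / np.sqrt(w))

    Khat = Sinv @ Q.T @ A @ Q @ Sinv    # compressed Koopman matrix
    mu, V = np.linalg.eig(Khat)         # Koopman eigenvalues / eigenvectors

    Phi = Q @ S @ V                     # eigenfunction values at the snapshots
    modes, *_ = np.linalg.lstsq(Phi, Xm.T, rcond=None)  # Koopman modes

    # Evaluate the eigenfunctions at the last observed snapshot via the kernel,
    # then evolve them with powers of the Koopman eigenvalues.
    phi_last = (kernel(X[:, -1:], Xm) @ Q @ Sinv @ V)[0]
    preds = [np.real((phi_last * mu**s) @ modes) for s in range(1, steps + 1)]
    return np.array(preds).T
```

With a linear kernel on data from a linear system, this sketch reduces to standard DMD and reproduces the dynamics almost exactly; on nonlinear data, a nonlinear kernel implicitly lifts the state into a richer observable space, which is the point of the kernel extension.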


Keywords: High-dimensional time series · Spatio-temporal dynamics · Complex systems · Data-driven · Koopman operator · Dynamic mode decomposition · Kernel methods



Copyright information

© Springer Science+Business Media B.V. 2017

Authors and Affiliations

  1. School of Electrical and Information Engineering, University of Sydney, Sydney, Australia
  2. Luxembourg Centre for Systems Biomedicine, University of Luxembourg, Belvaux, Luxembourg
  3. Department of Physics, University of Houston, Houston, USA