Extracting parametric dynamics from time-series data

Abstract

In this paper, we present a data-driven regression approach for identifying parametric governing equations from time-series data. Iterative computations are performed at each time stamp to first determine whether the governing equations to be recovered are time dependent. The results are then used as input data to extract the parametric equations. A combination of the constrained \(\ell ^1\) and \(\ell ^0+\ell ^2\) optimization problems is used to ensure a parsimonious representation of the learned dynamics in the form of parametric differential equations. The method is demonstrated on three canonical dynamical systems. We show that the proposed method outperforms other sparsity-promoting algorithms in identifying parametric differential equations in the low-noise regime, in terms of both accuracy and computation time.

Data availability

The data used in this paper can be generated using the method described in Sect. 4. Sample code is available at https://github.com/HuimeiMa/ParametricDynamicModelSelection.

References

  1. Chou, I.-C., Voit, E.O.: Recent developments in parameter estimation and structure identification of biochemical and genomic systems. Math. Biosci. 219(2), 57–83 (2009)

  2. Engl, H.W., Flamm, C., Kügler, P., Lu, J., Müller, S., Schuster, P.: Inverse problems in systems biology. Inverse Prob. 25(12), 123014 (2009)

  3. Wang, W.-X., Lai, Y.-C., Grebogi, C.: Data based identification and prediction of nonlinear and complex dynamical systems. Phys. Rep. 644, 1–76 (2016)

  4. Bongard, J., Lipson, H.: Automated reverse engineering of nonlinear dynamical systems. Proc. Natl. Acad. Sci. 104(24), 9943–9948 (2007)

  5. Schmidt, M., Lipson, H.: Distilling free-form natural laws from experimental data. Science 324(5923), 81–85 (2009)

  6. Brunton, S.L., Proctor, J.L., Kutz, J.N.: Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl. Acad. Sci. 113(15), 3932–3937 (2016)

  7. Rudy, S.H., Brunton, S.L., Proctor, J.L., Kutz, J.N.: Data-driven discovery of partial differential equations. Sci. Adv. 3(4), 1602614 (2017)

  8. Mangan, N.M., Brunton, S.L., Proctor, J.L., Kutz, J.N.: Inferring biological networks by sparse identification of nonlinear dynamics. IEEE Trans. Mol. Biol. Multi-Scale Commun. 2(1), 52–63 (2016)

  9. Kaheman, K., Kutz, J.N., Brunton, S.L.: SINDy-PI: a robust algorithm for parallel implicit sparse identification of nonlinear dynamics. Proc. R. Soc. A 476(2242), 20200279 (2020)

  10. Brunton, S.L., Proctor, J.L., Kutz, J.N.: Sparse identification of nonlinear dynamics with control (SINDYc). IFAC-PapersOnLine 49(18), 710–715 (2016)

  11. Kaiser, E., Kutz, J.N., Brunton, S.L.: Sparse identification of nonlinear dynamics for model predictive control in the low-data limit. Proc. R. Soc. A 474(2219), 20180335 (2018)

  12. Fasel, U., Kaiser, E., Kutz, J.N., Brunton, B.W., Brunton, S.L.: SINDy with control: a tutorial. In: 2021 60th IEEE Conference on Decision and Control (CDC), pp. 16–21. IEEE (2021)

  13. Shea, D.E., Brunton, S.L., Kutz, J.N.: SINDy-BVP: sparse identification of nonlinear dynamics for boundary value problems. Phys. Rev. Res. 3(2), 023255 (2021)

  14. Schaeffer, H., McCalla, S.G.: Sparse model selection via integral terms. Phys. Rev. E 96(2), 023302 (2017)

  15. Messenger, D.A., Bortz, D.M.: Weak SINDy for partial differential equations. J. Comput. Phys. 443, 110525 (2021)

  16. Messenger, D.A., Bortz, D.M.: Weak SINDy: Galerkin-based data-driven model selection. Multiscale Model. Simul. 19(3), 1474–1497 (2021)

  17. Bortz, D.M., Messenger, D.A., Dukic, V.: Direct estimation of parameters in ODE models using WENDy: weak-form estimation of nonlinear dynamics. arXiv preprint arXiv:2302.13271 (2023)

  18. Schaeffer, H.: Learning partial differential equations via data discovery and sparse optimization. Proc. R. Soc. A Math. Phys. Eng. Sci. 473(2197), 20160446 (2017)

  19. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.) 58(1), 267–288 (1996)

  20. Lions, P.-L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16(6), 964–979 (1979)

  21. Combettes, P.L., Pesquet, J.-C.: Proximal splitting methods in signal processing. In: Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp. 185–212. Springer, New York (2011)

  22. Combettes, P.L., Pesquet, J.-C.: A Douglas-Rachford splitting approach to nonsmooth convex variational signal recovery. IEEE J. Sel. Top. Signal Process. 1(4), 564–574 (2007)

  23. He, B., Yuan, X.: On the \(\mathcal{O}(1/n)\) convergence rate of the Douglas-Rachford alternating direction method. SIAM J. Numer. Anal. 50(2), 700–709 (2012)

  24. Schaeffer, H., Tran, G., Ward, R.: Extracting sparse high-dimensional dynamics from limited data. SIAM J. Appl. Math. 78(6), 3279–3295 (2018)

  25. Schaeffer, H., Tran, G., Ward, R., Zhang, L.: Extracting structured dynamical systems using sparse optimization with very few samples. Multiscale Model. Simul. 18(4), 1435–1461 (2020)

  26. Eckstein, J.: Splitting methods for monotone operators with applications to parallel optimization. PhD thesis, Massachusetts Institute of Technology (1989)

  27. Foucart, S., Rauhut, H.: A Mathematical Introduction to Compressive Sensing. Birkhäuser, New York (2013)

  28. Gabay, D., Mercier, B.: A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl. 2(1), 17–40 (1976)

  29. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)

  30. Daubechies, I., Defrise, M., De Mol, C.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57(11), 1413–1457 (2004)

  31. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imag. Sci. 2(1), 183–202 (2009)

  32. Van den Berg, E., Friedlander, M.P.: Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 31(2), 890–912 (2009)

  33. Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40, 120–145 (2011)

  34. Rudy, S., Alla, A., Brunton, S.L., Kutz, J.N.: Data-driven identification of parametric partial differential equations. SIAM J. Appl. Dyn. Syst. 18(2), 643–660 (2019)

  35. Li, X., Li, L., Yue, Z., Tang, X., Voss, H., Kurths, J., Yuan, Y.: Sparse learning of partial differential equations with structured dictionary matrix. Chaos Interdiscip. J. Nonlinear Sci. 29, 043130 (2019). https://doi.org/10.1063/1.5054708

  36. Grant, M., Boyd, S.: CVX: Matlab software for disciplined convex programming, version 2.1. http://cvxr.com/cvx (2014)

  37. Grant, M.C., Boyd, S.P.: Graph implementations for nonsmooth convex programs. In: Recent Advances in Learning and Control, pp. 95–110. Springer (2008)

  38. Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Bhattacharya, K., Stuart, A., Anandkumar, A.: Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895 (2020)

  39. Xu, H., Zhang, D., Zeng, J.: Deep-learning of parametric partial differential equations from sparse and noisy data. Phys. Fluids 33(3), 037132 (2021)

  40. Wang, S., Wang, H., Perdikaris, P.: Learning the solution operator of parametric partial differential equations with physics-informed DeepONets. Sci. Adv. 7(40), 8605 (2021)

  41. Im, J., Rizzo, C.B., Barros, F.P., Masri, S.F.: Application of genetic programming for model-free identification of nonlinear multi-physics systems. Nonlinear Dyn. 104, 1781–1800 (2021)

  42. Chen, Y., Luo, Y., Liu, Q., Xu, H., Zhang, D.: Symbolic genetic algorithm for discovering open-form partial differential equations (SGA-PDE). Phys. Rev. Res. 4(2), 023174 (2022)

  43. Davis, D., Yin, W.: Convergence rate analysis of several splitting schemes. In: Splitting Methods in Communication, Imaging, Science, and Engineering, pp. 115–163. Springer (2016)

  44. Giselsson, P., Boyd, S.: Linear convergence and metric selection for Douglas-Rachford splitting and ADMM. IEEE Trans. Autom. Control 62(2), 532–544 (2016)

  45. Fisher, R.A.: The wave of advance of advantageous genes. Ann. Eugen. 7(4), 355–369 (1937)

  46. Tikhomirov, V.M.: A study of the diffusion equation with increase in the amount of substance, and its application to a biological problem, pp. 242–270. Springer, New York (1991)

  47. Reinbold, P.A.K., Gurevich, D.R., Grigoriev, R.O.: Using noisy or incomplete data to discover models of spatiotemporal dynamics. Phys. Rev. E 101(1), 010203 (2020)

  48. Kaptanoglu, A.A., Silva, B.M., Fasel, U., Kaheman, K., Goldschmidt, A.J., Callaham, J.L., Delahunt, C.B., Nicolaou, Z.G., Champion, K., Loiseau, J.-C., et al.: PySINDy: a comprehensive Python package for robust sparse system identification. arXiv preprint arXiv:2111.08481 (2021)

  49. Antonelli, G., Chiaverini, S., Di Lillo, P.: On data-driven identification: is automatically discovering equations of motion from data a chimera? Nonlinear Dyn. 111(7), 6487–6498 (2023)

  50. Strebel, O.: Preprocessing algorithms for the estimation of ordinary differential equation models with polynomial nonlinearities. Nonlinear Dyn. 1–16 (2023)

  51. Mangan, N.M., Kutz, J.N., Brunton, S.L., Proctor, J.L.: Model selection for dynamical systems via sparse regression and information criteria. Proc. R. Soc. A Math. Phys. Eng. Sci. 473(2204), 20170009 (2017)

  52. Baake, E., Baake, M., Bock, H.G., Briggs, K.M.: Fitting ordinary differential equations to chaotic data. Phys. Rev. A 45(8), 5524 (1992)

  53. Douglas, J., Rachford, H.H.: On the numerical solution of heat conduction problems in two and three space variables. Trans. Am. Math. Soc. 82(2), 421–439 (1956)

  54. Patrinos, P., Stella, L., Bemporad, A.: Douglas-Rachford splitting: complexity estimates and accelerated variants. In: 53rd IEEE Conference on Decision and Control, pp. 4234–4239. IEEE (2014)

  55. Pham, M., Rana, A., Miao, J., Osher, S.: Semi-implicit relaxed Douglas-Rachford algorithm (sDR) for ptychography. Opt. Express 27(22), 31246–31260 (2019)

  56. Fu, A., Zhang, J., Boyd, S.: Anderson accelerated Douglas-Rachford splitting. SIAM J. Sci. Comput. 42(6), 3560–3583 (2020)

  57. Goldstein, T., Osher, S.: The split Bregman method for \(\ell^1\)-regularized problems. SIAM J. Imag. Sci. 2(2), 323–343 (2009)

Acknowledgements

L. Zhang was supported by NSFC Grant #12101342.

Funding

The authors have not disclosed any funding.

Author information

Corresponding author

Correspondence to Linan Zhang.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: The Douglas-Rachford algorithm

Consider the minimization problem (1.2), where \(G_1\) and \(G_2\) are functions for which one can compute the proximal mappings \(\text {prox}_{\gamma G_i}\), \(i=1,2\), via Eq. (1.3). The DR algorithm was introduced in [20] as a generalization of an algorithm proposed by Douglas and Rachford for quadratic minimization [53]. Under certain conditions, the DR algorithm has the following convergence property [22, 23, 43, 44].

Theorem A.1

Let \(G_1\) and \(G_2\) be proper, closed, and convex functions. For any \(\gamma >0\), any \(\mu \in (0,2)\), and any initial point \(\tilde{x}^0\), the iterates \(x^k\) generated by Eq. (1.4) converge linearly to a minimizer of problem (1.2).

For the constrained \(\ell ^1\) minimization problem (P\(_{1,\epsilon }\)) of main interest in this paper, we define

$$\begin{aligned}&G_{1}(\omega , x) := \Vert x \Vert _1 + \text {Ind}_{\mathcal {B}} (\omega ),\\&G_{2}(\omega , x) := \text {Ind}_{\mathcal {K}} (\omega , x), \end{aligned}$$

where

$$\begin{aligned}&\mathcal {K} := \{(\omega , x): \ \omega = A x \}, \\&\mathcal {B} :=B_{\epsilon }(b) = \{ \omega : \ \Vert \omega -b \Vert _2 \le \epsilon \}, \end{aligned}$$

and \(\text {Ind}\) denotes the indicator function. The proximal operators \(\text {prox}_{\gamma G_i}\), \(i=1,2\), are then given by:

$$\begin{aligned} \text{prox}_{\gamma G_1}(\omega, x)&:= \big( S_{\gamma}(x),\ \text{Proj}_{\mathcal{B}}(\omega) \big),\\ \text{prox}_{\gamma G_2}(\omega, x)&:= \big( y,\ Ay \big), \end{aligned}$$

where

$$\begin{aligned} y = (I + A^T A)^{-1} (x + A^T \omega ). \end{aligned}$$

The function \(S_{\gamma}\) is the soft-thresholding operator, defined component-wise by

$$[S_{\gamma}(x)]_j = \begin{cases} x_j - \gamma \, \frac{x_j}{|x_j|}, & |x_j| \ge \gamma, \\ 0, & \text{otherwise}, \end{cases}$$

and \(\text{Proj}_{\mathcal{B}}\) is the projection onto the ball \(\mathcal{B}\):

$$\text{Proj}_{\mathcal{B}}(\omega) = \begin{cases} b + \epsilon \, \frac{\omega - b}{\Vert \omega - b \Vert_2}, & \omega \notin \mathcal{B}, \\ \omega, & \omega \in \mathcal{B}. \end{cases}$$
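
For concreteness, these operators admit a direct implementation. The following is a minimal NumPy sketch, not the authors' released code; the names `soft_threshold`, `project_ball`, and `make_prox_G2` are illustrative. Since \(A\) is fixed, \(I + A^T A\) is symmetric positive definite and can be Cholesky-factored once, so each evaluation of \(\text{prox}_{\gamma G_2}\) reduces to two triangular solves.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def soft_threshold(x, gamma):
    # Component-wise soft-thresholding S_gamma: shrink each magnitude by gamma.
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def project_ball(omega, b, eps):
    # Projection of omega onto the ball B_eps(b) = {w : ||w - b||_2 <= eps}.
    r = omega - b
    nrm = np.linalg.norm(r)
    return omega if nrm <= eps else b + eps * r / nrm

def make_prox_G2(A):
    # prox of G2 = Ind_K with K = {(omega, x) : omega = A x}, i.e. the
    # projection onto K. Factor I + A^T A once and reuse it every iteration.
    n = A.shape[1]
    chol = cho_factor(np.eye(n) + A.T @ A)
    def prox_G2(omega, x):
        y = cho_solve(chol, x + A.T @ omega)
        return A @ y, y  # returned in (omega-part, x-part) order
    return prox_G2
```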

For the constrained \(\ell ^0\) minimization problem (P\(_{0,\epsilon }\)) considered for comparison in Sect. 5, redefine \(G_1\) as follows:

$$\begin{aligned} {\tilde{G}}_{1}(\omega , x)&:= \Vert x \Vert _0 + \text {Ind}_{\mathcal {B}} (\omega ). \end{aligned}$$

The proximal operator \(\text {prox}_{\gamma {\tilde{G}}_1}\) is given by:

$$\text{prox}_{\gamma \tilde{G}_1}(\omega, x) := \big( H_{\gamma}(x),\ \text{Proj}_{\mathcal{B}}(\omega) \big),$$

where \(H_{\gamma}\) is the hard-thresholding operator, defined component-wise by

$$[H_{\gamma}(x)]_j = \begin{cases} x_j, & |x_j| \ge \sqrt{\gamma}, \\ 0, & \text{otherwise}. \end{cases}$$
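
In code, only the thresholding rule changes relative to the soft-thresholding sketch above: the threshold becomes \(\sqrt{\gamma}\), and surviving entries are kept without shrinkage. A minimal version:

```python
def hard_threshold(x, gamma):
    # Component-wise hard-thresholding H_gamma: keep entries with
    # |x_j| >= sqrt(gamma) unchanged; zero out the rest.
    return np.where(np.abs(x) >= np.sqrt(gamma), x, 0.0)
```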

In Sect. 3, we use the basic form of the DR algorithm, i.e., Eq. (1.4); accelerated variants are presented in [54,55,56]. To solve (P\(_{1,\epsilon}\)), one can also use other algorithms for \(\ell^1\) minimization [27], for example ADMM [29], SPGL1 [32], or the split Bregman algorithm [57].
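
Since Eq. (1.4) is not reproduced in this appendix, the sketch below assumes the standard relaxed DR iteration \(\tilde{z}^{k+1} = \tilde{z}^k + \mu \big( \text{prox}_{\gamma G_2}(2\, \text{prox}_{\gamma G_1}(\tilde{z}^k) - \tilde{z}^k) - \text{prox}_{\gamma G_1}(\tilde{z}^k) \big)\) on the stacked variable \(z = (\omega, x)\), continuing the NumPy sketch above; `gamma`, `mu`, and `iters` are illustrative hyperparameters, not values used in the paper.

```python
def dr_solve(A, b, eps, gamma=1.0, mu=1.0, iters=500):
    # Relaxed Douglas-Rachford iteration for (P_{1,eps}) on z = (omega, x).
    n = A.shape[1]
    prox_G2 = make_prox_G2(A)

    def prox_G1(omega, x):
        # G1 = ||x||_1 + Ind_B(omega) is separable across the two blocks.
        return project_ball(omega, b, eps), soft_threshold(x, gamma)

    omega_t, x_t = b.copy(), np.zeros(n)  # initial point z~^0
    for _ in range(iters):
        p_om, p_x = prox_G1(omega_t, x_t)                        # p = prox_G1(z~)
        q_om, q_x = prox_G2(2 * p_om - omega_t, 2 * p_x - x_t)   # q = prox_G2(2p - z~)
        omega_t += mu * (q_om - p_om)                            # z~ <- z~ + mu (q - p)
        x_t += mu * (q_x - p_x)
    return prox_G1(omega_t, x_t)[1]  # x^k is the x-part of prox_G1(z~^k)
```

Swapping `soft_threshold` for `hard_threshold` in `prox_G1` yields the nonconvex \(\ell^0\) variant (P\(_{0,\epsilon}\)); the convergence guarantee of Theorem A.1 does not apply in that case.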

Appendix B: Hyperparameters and supplementary figures

In this section, we provide the hyperparameters and some supplementary figures related to the computational experiments in Sect. 4.

Table 12 The dictionary matrices in Sect. 4
Table 13 The Lorenz 96 system: Hyperparameters used in the STRidge algorithm
Table 14 The Fisher-KPP equation: Hyperparameters used in the STRidge algorithm
Table 15 The Burgers’ equation: Hyperparameters used in the STRidge algorithm
Table 16 Dictionary sizes for one-step methods
Fig. 12: The parametric Burgers’ equation, Example 4.8. Coefficients of the nonzero terms in Eq. (4.20). The subvector \(\textbf{t}\) used in the second learning is indicated by the dashed lines

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Ma, H., Lu, X. & Zhang, L. Extracting parametric dynamics from time-series data. Nonlinear Dyn 111, 15177–15199 (2023). https://doi.org/10.1007/s11071-023-08643-z
