Introduction

Modern computational architectures have enabled the detailed numerical simulation of incredibly complex physical and engineering systems at a vast range of scales in both space and time [31]. Even with improved high-performance computing, the iterative numerical solutions required for design and optimization may quickly become intractable; the computational demands for real-time feedback control are even more challenging [11]. Fortunately, many high-dimensional dynamical systems, such as a discretized simulation of a fluid flow, are characterized by energetic coherent structures that evolve on an attractor or manifold of lower intrinsic dimension [3, 6, 23]. This observed low-dimensionality has formed the basis of reduced-order modeling, which now plays a central role in modern design, optimization, and control of complex systems. Despite the growing importance of model reduction, the challenges of nonlinearity, identifying an effective low-dimensional basis, and multiscale dynamics have limited its widespread use across the engineering and natural sciences [10]. A leading method for reduced-order modeling involves the Galerkin projection of known governing equations onto a mathematical or empirical basis, although this approach is intrusive and challenging to implement. The leading alternative involves black-box system identification, which is purely data-driven, but generally yields uninterpretable models. In this work, we investigate and compare several emerging techniques from machine learning, i.e. applied data-driven optimization, for non-intrusive reduced-order modeling.

Intrusive model reduction methods, based on a working and decomposable numerical simulation of the governing equations, provide the most general and widely used set of techniques. Foremost in this arsenal is the Galerkin projection of the governing equations onto a low-dimensional linear subspace, usually spanned by orthogonal modes, such as Fourier, or data-driven modes from proper orthogonal decomposition (POD) [3, 6, 23, 39]. These approaches benefit from a strong connection to the underlying physics, the ability to include constraints to enforce preservation of the underlying dynamic structure, and adaptivity [2, 15,16,17]. In addition, there are several extensions around hyper-reduction to improve efficiency for systems with parametric dependence or higher order nonlinearities, based on the empirical interpolation method (EIM) [5] and the discrete empirical interpolation method (DEIM) [19]. However, these models are limited to the dynamic terms of the governing equations and the linear projection basis may not be optimal for capturing the dynamics. The main drawback of intrusive methods is that they are challenging to implement, requiring considerable human effort and knowledge to manipulate the governing equations and simulation code.

In contrast to intrusive model reduction, data-driven techniques are becoming increasingly prevalent, catalyzed by the increasing availability of measurement data, the lack of governing equations for many modern systems, and emerging methods in machine learning and data-driven optimization. Collectively, these data-driven techniques form the field of system identification [24, 33, 38]. Many techniques in system identification use regression to identify linear models, such as the eigensystem realization algorithm (ERA) [25] and dynamic mode decomposition (DMD) [4, 30, 49, 56]; recently, both techniques have been connected to nonlinear systems via the Koopman operator [13, 36, 46]. Another classic technique in nonlinear system identification is the NARMAX algorithm [7, 28, 51]. Recently, powerful techniques in machine learning are redefining what is possible in system identification, leveraging increasingly large datasets and more powerful optimization and regression techniques. Deep learning, based on multi-layer neural networks, is increasingly used to model fluid dynamics and obtain closure models [29, 32, 37, 40, 42, 52, 61, 69]. More generally, machine learning is having a considerable impact in the modeling of dynamical systems and physics [8, 12, 44, 50], for example relevant work in cluster-based reduced-order models [26], long short-term memory (LSTM) networks [58, 60], and Gaussian process regression [43, 59]. Of particular interest are techniques based on the principle of parsimony that seek the simplest models, with the fewest terms necessary to describe the observed dynamics [8, 12, 34, 35, 47, 48, 50]. For example, the sparse identification of nonlinear dynamics (SINDy) algorithm [12] uses sparse regression to identify the few active terms in a differential equation. The resulting models balance model complexity with descriptive power, avoiding overfitting and remaining both interpretable and generalizable.

Data-driven techniques in system identification and machine learning are currently being employed for advanced, non-intrusive reduced-order modeling, removing the need for full-order model equations or a modifiable numerical code. Loiseau and Brunton [34] extended the SINDy modeling framework to enforce known constraints, such as energy-conserving quadratic nonlinearities in incompressible flows. This so-called Galerkin regression also enables higher order nonlinearities than are possible with standard Galerkin projection, providing effective closure for the dynamics of truncated modes. Swischuk et al. [54] develop parametric models between input parameters and POD coefficients for physics and engineering problems, comparing many leading methods in machine learning. Peherstorfer and Willcox [41] develop time-dependent reduced order models for partial differential equations, by non-intrusively inferring the reduced-order model operators from initial conditions, inputs, trajectories of states, and full-order model outputs, without requiring access to the full-order model. They prove that for dynamics that are linear in the state or are characterized by low-order polynomial nonlinearities, the ROM converges to the projection of the FOM onto a low-order subspace. Carlberg et al. [18] propose dimensionality reduction via autoencoders to learn dynamics and recover missing CFD data. Other non-intrusive ROMs have been developed based on neural networks [22, 27, 62, 63, 68], sparse autoencoders [21], radial basis functions [66], and kernel methods [57, 65]. Hybrid methods are also promising, for example by modeling the error of a Galerkin model to produce closures [67].

Despite the considerable recent progress in leveraging machine learning for non-intrusive reduced-order modeling, current approaches still have a number of shortcomings. First, many methods are limited to low-order polynomial nonlinearities. Second, all of the above approaches learn the low-dimensional operators from full-system trajectories, which is expensive and limits the predictive accuracy, as the ROM trajectory is likely to quickly deviate from the full-system trajectory. Third, current methods give limited consideration to stability, and the interplay between regression methods and time integration has not been explored. Finally, current approaches do not provide a framework for model selection, as each method will likely be best suited to a different problem class.

In this work, we continue to develop and explore machine learning techniques for non-intrusive model reduction, addressing many of the shortcomings described above. In particular, we develop and compare numerous leading techniques in machine learning to produce accurate and efficient reduced-order models. We focus primarily on the class of Galerkin projection models, although these approaches may be extended to more general models. To study the dynamical system in the latent space, we map the low-dimensional state at the current time instance to the next, in explicit or implicit time integration schemes. We investigate a variety of regression models for the mapping, including k-nearest neighbors [1], support vector regression [53], random forest [9], boosted trees [20], the vectorial kernel orthogonal greedy algorithm (VKOGA) [64, 65], and SINDy [12]. In the following, we explicitly consider stability and explore the connection between various regression methods and their suitability for different time integration schemes; for example, non-differentiable machine learning models are not amenable to Newton-based implicit time integration. We also explore modeling procedures of varying complexity on two examples, the 1D Burgers’ equation and the 2D convection–diffusion equation, using both explicit and implicit time integrators.

Problem formulation

General nonlinear dynamical systems

This work considers parameterized nonlinear dynamical systems characterized by the system of nonlinear ODEs

$$\begin{aligned} {\dot{{{x}}}}&= {{f}}({{x}};t,\varvec{\mu }) \end{aligned}$$
(1)
$$\begin{aligned} {{x}}(0;\varvec{\mu })&={{x}}^0(\varvec{\mu }), \end{aligned}$$
(2)

where \(t\in [0,T]\) denotes time with \(T\in {\mathbb {R}}_+{}\) representing the final time, \({{x}}\in {\mathbb {R}}^{ N }\) denotes the state, \(\varvec{\mu }\in {\mathscr {D}}\subseteq {\mathbb {R}}^{ p }\) denotes the system parameters, \({{x}}^0:{\mathscr {D}}\rightarrow {\mathbb {R}}^{ N }\) is the parameterized initial condition, and \({{f}}:{\mathbb {R}}^{ N }\times {\mathbb {R}}_+{}\times {\mathscr {D}}\rightarrow {\mathbb {R}}^{ N }\) denotes the velocity.

Galerkin projection

We assume that we have a low-dimensional trial-basis matrix \({\varvec{V}}\in {\mathbb {R}}_\star ^{ N\times n }\) (with \(n\ll N\)) computed, e.g., via proper orthogonal decomposition (POD), such that the solution can be approximated as \({{x}}(t,\varvec{\mu })\approx {\tilde{{{x}}}}(t,\varvec{\mu }) = {\bar{{{x}}}}(\varvec{\mu }) + {\varvec{V}}\hat{{{x}}}(t,\varvec{\mu })\), with \(\hat{{{x}}}\in {\mathbb {R}}^{ n }\) denoting the reduced state. Then, the Galerkin ROM can be expressed as

$$\begin{aligned} {\dot{\hat{{{x}}}}}&= {\varvec{V}}^T{{f}}({\varvec{V}}\hat{{{x}}},t;\varvec{\mu }), \end{aligned}$$
(3)
$$\begin{aligned} \hat{{{x}}}(0)&={\varvec{V}}^T {{x}}^0\left( \varvec{\mu }\right) . \end{aligned}$$
(4)

Critically, note that the ROM is defined by the low-dimensional mapping

$$\begin{aligned}&{{f}}_r:({\hat{{{y}}}},\tau ;\varvec{\nu })\mapsto {\varvec{V}}^T{{f}}({\varvec{V}}{\hat{{{y}}}},\tau ;\varvec{\nu }) \end{aligned}$$
(5)
$$\begin{aligned}&:{\mathbb {R}}^{ n }\times {\mathbb {R}}_+{}\times {\mathbb {R}}^{ p }\rightarrow {\mathbb {R}}^{ n }. \end{aligned}$$
(6)

The reduced velocity \({{f}}_r\) is thus simply a function that maps the reduced state and inputs to a low-dimensional vector, so we can rewrite Eq. (3) as

$$\begin{aligned} {\dot{\hat{{{x}}}}} = {{f}}_r(\hat{{{x}}},t;\varvec{\mu }). \end{aligned}$$
(7)

However, this approach is intrusive to implement in computational mechanics codes, as it requires querying the full-order model code to compute \({{f}}({\varvec{V}}\hat{{{x}}},t;\varvec{\mu })\) for every instance of \(\hat{{{x}}}\) and \(\varvec{\mu }\) encountered during the ROM simulation; without hyper-reduction (e.g., DEIM or gappy POD), it is also computationally expensive, as computing \({\varvec{V}}^T{{f}}\) given \({{f}}\) incurs \({\mathscr {O}}(Nn)\) flops. Even if hyper-reduction is present, however, the approach remains intrusive, as sampled entries of the velocity must be computed for every instance of \(\hat{{{x}}}\) and \(\varvec{\mu }\) encountered during the ROM simulation. Depending on the complexity of the system trajectories, one can choose n to balance the accuracy and computational speedup.
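For concreteness, the following is a minimal sketch of constructing a POD trial basis from snapshots and evaluating the intrusive reduced velocity of Eq. (5); the function names and snapshot layout are illustrative assumptions, not the code used in this work.

```python
import numpy as np

def pod_basis(snapshots, n):
    """POD trial basis V in R^{N x n}: leading left singular vectors of the
    snapshot matrix (each column is a full-order state collected offline)."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n]

def galerkin_velocity(f_full, V, x_hat, t, mu):
    """Reduced velocity f_r(x_hat, t; mu) = V^T f(V x_hat, t; mu) of Eq. (5).
    Intrusive and O(N n) per call: each evaluation queries the full-order
    velocity f_full."""
    return V.T @ f_full(V @ x_hat, t, mu)
```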

Regression-based reduced operator approximation

This section outlines learning the low-dimensional operator through regression methods. We begin by describing the general formulation of the framework in “Mathematical formulation” section; “Surrogate ROM” section then discusses the proposed regression models and their computational complexity. In “Error analysis” section, we derive the boundedness and stability of the approximated discrete-time dynamical system.

Mathematical formulation

The objective is to develop a data-driven, non-intrusive approximation \({\hat{{{f}}_r}}\) to the reduced velocity \({{f}}_r\) for reduced-order models of different dynamical systems. In particular, we aim to devise an approximate mapping

$$\begin{aligned} {\hat{{{f}}_r}}:{\mathbb {R}}^{ n }\times {\mathbb {R}}_+{}\times {\mathbb {R}}^{ p }\rightarrow {\mathbb {R}}^{ n }. \end{aligned}$$
(8)

such that

$$\begin{aligned} {\hat{{{f}}_r}}(\hat{{{x}}},t;\varvec{\mu })\approx {{f}}_r(\hat{{{x}}},t;\varvec{\mu }),\quad \forall (\hat{{{x}}},t;\varvec{\mu })\in {\mathbb {R}}^{ n }\times {\mathbb {R}}_+{}\times {\mathbb {R}}^{ p }. \end{aligned}$$
(9)

As the domain and codomain of \({\hat{{{f}}_r}}\) exhibit no specialized structure, we can consider general high-dimensional regression techniques (e.g., kernel regression, random forest regression).

Similar to Eq. (7), the reduced order model is given by

$$\begin{aligned} {\dot{\hat{{{x}}}}} = {\hat{{{f}}_r}}(\hat{{{x}}},t;\varvec{\mu }), \end{aligned}$$
(10)

with the initial condition

$$\begin{aligned} \hat{{{x}}}(0) = {\varvec{V}}^T {{x}}^0\left( \varvec{\mu }\right) . \end{aligned}$$
(11)

We assume that the sequence of reduced states generated by the reduced velocity \({{f}}_r\) can be described by the Markovian discrete-time dynamical system

$$\begin{aligned} \hat{{{x}}}^{j+1} = {{g}}(\hat{{{x}}}^{j}, \varvec{\mu }), \quad j = 0,\ldots , N_t, \end{aligned}$$
(12)

for discrete-time velocity \({{g}}: {\mathbb {R}}^{ n }\times {\mathbb {R}}^{ p } \rightarrow {\mathbb {R}}^{ n }\). If such a reduced velocity exists and we can compute the operator, the whole time sequence of the reduced states can then be estimated from Eq. (12) by specifying the initial (reduced) state. By considering the mapping between the current reduced state and the next, we cast the problem into a set of regression models.
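As a minimal illustration, assuming a fitted surrogate map `g_hat` with the signature below, the learned discrete-time model of Eq. (12) is advanced by repeatedly applying the map to the reduced initial condition:

```python
import numpy as np

def rollout(g_hat, x_hat0, mu, n_steps):
    """Advance the learned discrete-time map of Eq. (12),
    x_hat^{j+1} = g_hat(x_hat^j, mu), from x_hat^0 = V^T x^0(mu)."""
    traj = [np.asarray(x_hat0, dtype=float)]
    for _ in range(n_steps):
        traj.append(np.asarray(g_hat(traj[-1], mu), dtype=float))
    return np.stack(traj)  # shape (n_steps + 1, n)
```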

Surrogate ROM

In this work, we collect the reduced states from the Galerkin projection as in “Galerkin projection” section and approximate the operator \({{g}}\) by regressing each component of the reduced velocity. To be specific, we construct an individual regression \(\hat{g}_i \approx g_i\) for each \(i = 1, \ldots , n\); the ith equation of \({{g}}\) is:

$$\begin{aligned} \hat{x}_i^{j+1} = \hat{g}_i(\hat{{{x}}}^{j}, \varvec{\mu }), \quad j = 0,\ldots , N_t, \quad i = 1,\ldots ,n, \end{aligned}$$
(13)

where \(\hat{{{x}}}= [\hat{x}_1 \ldots \hat{x}_n]\) and \({{g}}= [g_1 \ldots g_n]\). Equation (13) shows that we can map the reduced state at time step \(j\), \(\hat{{{x}}}^{j}\), to each component of the state at the next time step, \(\hat{x}_i^{j+1}\), for \(j = 0,\ldots , N_t- 1\).

In the offline stage, we obtain training data from the existing Galerkin projection model. By querying both features (\(\hat{{{x}}}^{j}\)) and responses (\(\hat{x}_i^{j+1}\)) at every time instance, we generate a data set \({\mathscr {T}}_{train,i} = \{(\hat{{{x}}}^{j}, \hat{x}_i^{j+1})\}_j\) for model training and cross validation.
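A minimal sketch of assembling these feature–response pairs from reduced-state trajectories collected offline, following Eq. (13); the array shapes and names are illustrative:

```python
import numpy as np

def build_training_pairs(reduced_trajs, mus):
    """Assemble the data set of Eq. (13): features are (x_hat^j, mu) and
    responses are x_hat^{j+1}; column i of the responses trains g_i.
    reduced_trajs: list of arrays of shape (N_t + 1, n) from the Galerkin ROM."""
    features, responses = [], []
    for traj, mu in zip(reduced_trajs, mus):
        for j in range(traj.shape[0] - 1):
            features.append(np.concatenate([traj[j], np.atleast_1d(mu)]))
            responses.append(traj[j + 1])
    return np.asarray(features), np.asarray(responses)
```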

We consider a variety of conventional statistical models, including support vector regression (SVR) with kernel functions (Gaussian, polynomials) [53], tree-based methods (boosted decision trees [20], random forest [9]) and k-nearest neighbors [1], for regressing the feature-response pairs. Three types of kernel functions are explored in SVR. Specifically, we refer to the SVR models using the 2nd- and 3rd-order polynomial kernels as SVR2 and SVR3, respectively. When the Gaussian radial basis function (RBF) kernel is used, we refer to the model as SVRrbf. In addition, we investigate the vectorial kernel orthogonal greedy algorithm (VKOGA) and the sparse identification of nonlinear dynamics (SINDy) as advanced regression models. More details on VKOGA and SINDy can be found in “Special regression models” section, where we also compare the computational complexity of all the proposed regression models. For all the candidate models, we employ cross validation to optimize the hyperparameters for model selection, aiming for a balanced bias and variance. We also explore the training and validation error as the number of samples varies in “Cross-validation and hyperparameter tuning” section, and we select a fixed sample size for the performance comparison between models.
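The conventional models above are available, for example, in scikit-learn; a sketch of the cross-validated hyperparameter search for a single reduced coordinate is shown below, where the hyperparameter grids are illustrative rather than the values used in this study.

```python
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV

# Candidate regressors with illustrative hyperparameter grids.
candidates = {
    "SVR2":   (SVR(kernel="poly", degree=2), {"C": [1, 10, 100], "epsilon": [1e-3, 1e-2]}),
    "SVR3":   (SVR(kernel="poly", degree=3), {"C": [1, 10, 100], "epsilon": [1e-3, 1e-2]}),
    "SVRrbf": (SVR(kernel="rbf"),            {"C": [1, 10, 100], "gamma": ["scale", 0.1]}),
    "kNN":    (KNeighborsRegressor(),        {"n_neighbors": [3, 5, 10]}),
    "RF":     (RandomForestRegressor(),      {"n_estimators": [100, 300], "max_depth": [None, 10]}),
    "Boost":  (GradientBoostingRegressor(),  {"n_estimators": [100, 300], "learning_rate": [0.05, 0.1]}),
}

def fit_component(features, response_i, cv=5):
    """Cross-validated fit of the scalar regressors for one reduced coordinate."""
    fitted = {}
    for name, (model, grid) in candidates.items():
        search = GridSearchCV(model, grid, cv=cv, scoring="neg_mean_squared_error")
        search.fit(features, response_i)
        fitted[name] = search.best_estimator_
    return fitted
```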

For time integration along the trajectory of the dynamical system, we investigate the appropriate time step for each regression model. We implement the 4th-order Runge–Kutta scheme as the explicit method, and backward Euler, solved with either Newton–Raphson or fixed-point iteration, as the implicit method. We report the numerical results with respect to the Galerkin reduced-order model (ROM) and the full-order model (FOM) as a function of time in “Numerical experiments” section.
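A minimal sketch of the two time-stepping strategies applied to a learned continuous-time surrogate \({\hat{{{f}}_r}}\) is given below (the fixed-point variant is shown for backward Euler; Newton’s method would additionally require the Jacobian of the regression model). The function names and tolerances are illustrative.

```python
import numpy as np

def rk4_step(f_hat, x, t, dt, mu):
    """One explicit 4th-order Runge-Kutta step for dx/dt = f_hat(x, t; mu)."""
    k1 = f_hat(x, t, mu)
    k2 = f_hat(x + 0.5 * dt * k1, t + 0.5 * dt, mu)
    k3 = f_hat(x + 0.5 * dt * k2, t + 0.5 * dt, mu)
    k4 = f_hat(x + dt * k3, t + dt, mu)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def backward_euler_fp_step(f_hat, x, t, dt, mu, tol=1e-10, max_iter=50):
    """One backward Euler step, x_new = x + dt * f_hat(x_new, t + dt; mu),
    solved by fixed-point iteration (no derivatives of f_hat required)."""
    x_new = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        x_next = x + dt * f_hat(x_new, t + dt, mu)
        if np.linalg.norm(x_next - x_new) < tol * (1.0 + np.linalg.norm(x_new)):
            return x_next
        x_new = x_next
    return x_new
```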

Error analysis

Assuming the considered regression model yields a bounded approximation error in the reduced space, we examine the boundedness of the error of the surrogate ROM with respect to the FOM in the time evolution of the states along the trajectory. Let \(J_t=[0, T]\) denote the time domain, \(J_\mu \subset {\mathbb {R}}^{ p }\) be a compact space of \(\varvec{\mu }\), \({{x}}: J_t \rightarrow {\mathbb {R}}^N\) denote the state variable, and \({{f}}({{x}},t;\varvec{\mu })\) be the vector field. Let \( {{f}}_r({\hat{{{y}}}},t;\varvec{\mu }) = {\varvec{V}}^T{{f}}({\varvec{V}}{\hat{{{y}}}},t;\varvec{\mu })\) represent the Galerkin reduced vector field. Let \({\hat{{{f}}_r}}(\hat{{{x}}},t;\varvec{\mu })\) denote the approximate reduced vector field constructed by a machine learning method and \(\hat{{{x}}}(t)\) the corresponding reduced state.

The error of the reduced model can be defined as \({{e}}(t) :={\varvec{V}}\hat{{{x}}}(t) -{{x}}(t)\). Let \(P = {\varvec{V}}{\varvec{V}}^*\). Let \({{e}}_o(t):=(I_N -P) {{e}}(t)\), which denotes the error component orthogonal to range(\({\varvec{V}}\)), and \({{e}}_i(t):=P {{e}}(t)\), which denotes the component of the error parallel to range(\({\varvec{V}}\)). Let \({{\tilde{x}}}:= P {{x}}\) denote the orthogonal projection of \({{x}}\) onto range(\({\varvec{V}}\)). Thus, we have

$$\begin{aligned} {{e}}_o(t)={{\tilde{x}}}(t)- {{x}}(t). \end{aligned}$$
(14)

However, since the system evolves in time, the further approximation of the projection-based reduced model results in an additional error \({{e}}_i(t)\), and we have

$$\begin{aligned} {{e}}_i(t)={\varvec{V}}\hat{{{x}}}(t)-{{\tilde{x}}}(t). \end{aligned}$$
(15)

Although \({{e}}_i(t)\) and \({{e}}_o(t)\) are orthogonal to each other, they are not independent.
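To make this decomposition concrete, a short verification using \({{e}}(t)={\varvec{V}}\hat{{{x}}}(t)-{{x}}(t)\) and \(P{\varvec{V}}={\varvec{V}}\) gives

$$\begin{aligned} {{e}}_o(t)&= (I_N-P)\left( {\varvec{V}}\hat{{{x}}}(t)-{{x}}(t)\right) = -(I_N-P){{x}}(t) = P{{x}}(t)-{{x}}(t) = {{\tilde{x}}}(t)-{{x}}(t),\\ {{e}}_i(t)&= P\left( {\varvec{V}}\hat{{{x}}}(t)-{{x}}(t)\right) = {\varvec{V}}\hat{{{x}}}(t)-P{{x}}(t) = {\varvec{V}}\hat{{{x}}}(t)-{{\tilde{x}}}(t), \end{aligned}$$

which recovers Eqs. (14) and (15).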

Lemma 1

Consider the dynamical system of Eq. (1) over the interval \(t\in J_t\) and \(\varvec{\mu }\in J_\mu \). Suppose \({{f}}({{x}},t;\varvec{\mu })\) is a uniformly Lipschitz function of \({{x}}\) with constant K and a continuous function of \(t\) for all \(({{x}},t, \varvec{\mu }) \in B_b({{x}}_0) \times J_t \times J_\mu \). Suppose \({\hat{{{f}}_r}}(\hat{{{x}}},t;\varvec{\mu })\) is a uniformly Lipschitz function of \(\hat{{{x}}}\) and a continuous function of \(t\) for all \((\hat{{{x}}},t, \varvec{\mu }) \in B_b(\hat{{{x}}}_0) \times J_t \times J_\mu \). Suppose the data-driven approximation satisfies \(\Vert {\hat{{{f}}_r}}(\hat{{{x}}},t;\varvec{\mu }) - {\varvec{V}}^* {{f}}({\varvec{V}}\hat{{{x}}},t;\varvec{\mu }) \Vert \le C\) for all \((\hat{{{x}}}, t, \varvec{\mu }) \in B_b(\hat{{{x}}}_0) \times J_t \times J_\mu \). Then the error \({{e}}(t)={\varvec{V}}\hat{{{x}}}(t)-{{x}}(t)\) in the infinity norm over the interval \(J_t\) is bounded by

$$\begin{aligned} \left\| {{e}}\right\| _\infty \le e^{KT} \left\| {{e}}_o\right\| _\infty +e^{KT} \left\| {{{e}}_i(0)} \right\| + \frac{C}{K}(e^{KT}-1) \end{aligned}$$
(16)

Proof

For notational simplicity, we fix \(\varvec{\mu }\) and do not denote it explicitly in the vector fields. Substituting Eqs. (1) and (10) into the differentiation of \({{e}}_o(t)+ {{e}}_i(t) = {\varvec{V}}\hat{{{x}}}(t) - {{x}}(t)\) yields

$$\begin{aligned} {\dot{{{e}}_o}} + {\dot{{{e}}_i}} = {\varvec{V}}{\hat{{{f}}_r}}(\hat{{{x}}},t) - {{f}}({{x}}, t) . \end{aligned}$$
(17)

Left multiplying Eq. (17) by P and recognizing that \(P {\varvec{V}}= {\varvec{V}}\) gives

$$\begin{aligned} \begin{aligned} {\dot{{{e}}_i}}(t)&= {\varvec{V}}{\hat{{{f}}_r}}(\hat{{{x}}},t) - P {{f}}({{x}}, t) \\&= {\varvec{V}}{\hat{{{f}}_r}}(\hat{{{x}}},t) -P{{f}}({\varvec{V}}\hat{{{x}}}, t) + P{{f}}({\varvec{V}}\hat{{{x}}}, t) - P{{f}}( {{x}}+{{e}}_o, t)+P{{f}}({{x}}+{{e}}_o, t)-P {{f}}({{x}}, t) \\&={\varvec{V}}({\hat{{{f}}_r}}(\hat{{{x}}}, t) - {\varvec{V}}^* {{f}}({\varvec{V}}\hat{{{x}}}, t)) +P({{f}}({{x}}+{{e}}_o+{{e}}_i, t)-{{f}}({{x}}+{{e}}_o, t))\\&\quad +P({{f}}({{x}}+{{e}}_o, t)- {{f}}({{x}}, t) ) \end{aligned} \end{aligned}$$

Expanding \( \Vert {{e}}_i(t+ h) \Vert \) using this equation and applying the triangle inequality yields

$$\begin{aligned} \begin{aligned} \Vert {{e}}_i(t+ h) \Vert&= \Vert {{e}}_i(t) + h {\dot{{{e}}_i}}(t) \Vert + {\mathscr {O}}({h^2}) \\&\le \Vert {{e}}_i(t) \Vert + \Vert h {\varvec{V}}({\hat{{{f}}_r}}(\hat{{{x}}}, t) - {\varvec{V}}^* {{f}}({\varvec{V}}\hat{{{x}}}, t)) \Vert + \Vert h P ({{f}}({{x}}+ {{e}}_o+ {{e}}_i, t) \\&\quad - {{f}}({{x}}+ {{e}}_o, t)) \Vert + \Vert h P ({{f}}({{x}}+ {{e}}_o, t) - {{f}}({{x}}, t) ) \Vert + {\mathscr {O}}(h^2)\\&\le \Vert {{e}}_i(t) \Vert + h \Vert {\varvec{V}}\Vert \cdot \Vert {\hat{{{f}}_r}}(t,\hat{{{x}}}) - {\varvec{V}}^* {{f}}({\varvec{V}}\hat{{{x}}}, t) \Vert + h\Vert P \Vert \cdot \Vert {{f}}({{x}}+ {{e}}_o+ {{e}}_i, t) \\&\quad - {{f}}({{x}}+ {{e}}_o, t) \Vert + h\Vert P \Vert \cdot \Vert {{f}}({{x}}+{{e}}_o, t) - {{f}}({{x}}, t) \Vert + {\mathscr {O}}(h^2). \end{aligned} \end{aligned}$$

Using \(\Vert {\varvec{V}}\Vert =\Vert P\Vert =1\) and \(\Vert {\hat{{{f}}_r}}(t, \hat{{{x}}}) - {\varvec{V}}^* {{f}}(t, {\varvec{V}}\hat{{{x}}}) \Vert \le C\), the last inequality gives

$$\begin{aligned} \begin{aligned} \Vert {{e}}_i(t+ h) \Vert&\le \Vert {{e}}_i(t) \Vert + h C + h \Vert {{f}}({{x}}+ {{e}}_o+ {{e}}_i, t) - {{f}}({{x}}+ {{e}}_o, t) \Vert \\&\quad + h \Vert {{f}}({{x}}+{{e}}_o, t) - {{f}}({{x}}, t) \Vert + {\mathscr {O}}(h^2). \end{aligned} \end{aligned}$$

Rearranging this inequality and applying the Lipschitz conditions gives

$$\begin{aligned} \frac{\Vert {{e}}_i(t+ h) \Vert - \Vert {{e}}_i(t) \Vert }{h} \le K \Vert {{e}}_i(t) \Vert + K \Vert {{e}}_o(t) \Vert + C+ {\mathscr {O}}(h). \end{aligned}$$

Since \({\mathscr {O}}(h)\) can be uniformly bounded independently of \({{e}}_i(t)\), using the mean value theorem and letting \(h \rightarrow 0\) gives

$$\begin{aligned} \frac{d}{dt} \Vert {{e}}_i(t)\Vert \le K\Vert {{e}}_i(t) \Vert + K\Vert {{e}}_o(t) \Vert +C. \end{aligned}$$

Rewriting the above inequality into integral form, \(\left\| {{e}}_i(t) \right\| \le \alpha (t) + K \int _0^ t\Vert {{e}}_i(\tau ) \Vert d\tau \), where \(\alpha (t): = \left\| {{e}}_i(0) \right\| + K\int _0^ t{\Vert {{e}}_o(\tau ) \Vert d \tau }+C t\), and using Gronwall’s lemma, we obtain

$$\begin{aligned} \left\| {{{{e}}_i}(t)} \right\| \le \alpha (t) + \int _0^ t{\alpha (s)K\exp \left( {\int _s^ t{Kd\tau } } \right) ds}. \end{aligned}$$

By definition, \({\left\| {{e}}_o\right\| _ \infty } \ge \left\| {{e}}_o(t) \right\| \) for any \(t\in J_t\). It follows that \(\alpha (t) \le \left\| {{{{e}}_i}(0)} \right\| + K t\left\| {{e}}_o\right\| _\infty +C t\). Simplifying the integral of the right-hand side of the above inequality gives

$$\begin{aligned} \Vert {{e}}_i(t) \Vert \le (e^{K t}-1) \left( \Vert {{e}}_o\Vert _\infty + \frac{C}{K} \right) + e^{K t} \Vert {{e}}_i(0) \Vert , \end{aligned}$$

for any \(t\in J_t\). It follows that

$$\begin{aligned} \Vert {{e}}_i\Vert _\infty \le (e^{KT}-1) \left( \Vert {{e}}_o\Vert _\infty + \frac{C}{K} \right) + e^{KT} \Vert {{e}}_i(0) \Vert , \end{aligned}$$

Combining the above inequality with \(\left\| {{e}}\right\| _\infty \le \left\| {{{e}}_i} \right\| _\infty + \left\| {{{e}}_o} \right\| _\infty \), one can obtain Eq. (16). \(\square \)

Remark

The above lemma provides a bound for \(\Vert {{e}}_i(t)\Vert \) in terms of \(\Vert {{e}}_o\Vert _\infty \) and \(\Vert {{e}}_i(0)\Vert \). We have \(\left\| {{e}}_i(0)\right\| =0\) when the initial condition of the reduced model is given by \(\hat{{{x}}}(0)={\varvec{V}}^* {{x}}_0\) for Eq. (10). In this situation, Eq. (16) becomes \({\left\| {{e}}\right\| _\infty } \le e^{KT} {\left\| {{{{e}}_o}} \right\| _\infty }+ \frac{C}{K}(e^{KT}-1) \).

Numerical experiments

To assess the proposed non-intrusive ROM, we consider two parameterized PDEs: (i) the 1D inviscid Burgers’ equation, and (ii) a 2D convection–diffusion equation. We implement the explicit 4th-order Runge–Kutta integrator, and the implicit backward Euler method solved with either Newton–Raphson or fixed-point iteration.

1D Burgers’ equation

The experiment first employs a 1D parameterized inviscid Burgers’ equation. The input parameters \(\varvec{\mu }=(a,b)\) lie in the space \([1.5, 2]\times [0.02, 0.025]\); in the current setup, the parameters for the online test are fixed to \(\varvec{\mu }=(1.8, 0.0232)\). In the FOM, the problem is typically solved using a conservative finite-volume formulation and backward Euler in time. The 1D domain is discretized using a grid with 501 nodes, corresponding to \(x=i\times (100/500), i=0,\ldots ,500\). The governing equation, boundary condition, and initial condition are

$$\begin{aligned} \frac{\partial u(x,t)}{\partial t} + \frac{1}{2} \frac{\partial (u^2(x,t))}{\partial x}&= 0.02e^{bx}, \end{aligned}$$
(18a)
$$\begin{aligned} u(0,t)&= a, \forall t>0, \end{aligned}$$
(18b)
$$\begin{aligned} u(x,0)&= 1, \forall x \in [0,100]. \end{aligned}$$
(18c)

The solution \(u(x, t)\) is computed in the time interval \(t\in [0,25]\) using a uniform computational time-step size \(\varDelta t\), chosen for each time integrator from the convergence study in the next section.
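For reference, a minimal sketch of the full-order velocity of Eq. (18) under a conservative first-order upwind (finite-volume) discretization is shown below; it is an illustrative implementation under stated assumptions (\(u > 0\), inflow node held at the Dirichlet value \(a\)), not necessarily the exact scheme used here.

```python
import numpy as np

def burgers_rhs(u, b, x):
    """Semi-discrete velocity for Eq. (18a): conservative first-order upwind
    fluxes F_i = u_i^2 / 2 (valid for u > 0) plus the source 0.02 exp(b x).
    The inflow node u[0] is held at the Dirichlet value a, so du/dt = 0 there."""
    dx = x[1] - x[0]
    flux = 0.5 * u**2
    dudt = np.zeros_like(u)
    dudt[1:] = -(flux[1:] - flux[:-1]) / dx + 0.02 * np.exp(b * x[1:])
    return dudt
```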

Data collection

Fig. 1

Timestep verification for 1D inviscid Burgers’ equation: a backward-Euler integrator: we select a time step of 3.12e-2, as it leads to an approximate convergence rate of 1 and an approximate error of 4e-4 for the selected state; b Runge–Kutta: we select a time step of 1.25e-1, as it leads to an approximate convergence rate of 4 (within \(1\%\) error) and an average approximate error of 6e-5 for the selected state

We perform a time-step verification study to choose an appropriate \(\varDelta t\) for each time integrator. We collect the solutions for an increasing number of time steps \(N_t= [25, 50, 100, 200, 400, 800, 1600, 3200, 6400]\) using both the Runge–Kutta and backward Euler integrators. Throughout the paper, we select the time step at \(99\%\) of the asymptotic rate of convergence. The verification results in Fig. 1 show that \(N_t= 200\) is a reasonable number of time steps for the 4th-order Runge–Kutta method and \(N_t=800\) for the backward Euler method. During the offline stage, we run four full simulations corresponding to the corner parameters of the space \([1.5, 2]\times [0.02, 0.025]\). Then, we sample the training data using Latin hypercube sampling: \(N_\text {training}\) and \(N_\text {validation}\) instances of the state, time, and parameters are generated following the criterion that the minimum distances between the data points are maximized. For this study, the default size of the training set is \(N_\text {training}= 1000\) and the default size of the validation set is \(N_\text {validation}= 500\). The reduced vector field \({{f}}_r\) is computed for each input pair \((\hat{{{x}}},t;\varvec{\mu })\). Note that, once the data are collected, the training and validation stages involve only machine learning. In the test stage, we evaluate the proposed ROM with the parameters fixed to \(\varvec{\mu }=(1.8,0.0232)\).
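As an illustration of the maximin sampling criterion above, the following sketch draws a Latin hypercube design over a box by keeping the best of several random candidates; the function name and candidate count are illustrative, and in the study the sampled quantities include the state and time in addition to the parameters.

```python
import numpy as np

def maximin_lhs(n_samples, bounds, n_candidates=50, seed=None):
    """Latin hypercube sampling over a box; among n_candidates random designs,
    keep the one that maximizes the minimum pairwise distance (maximin)."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    d = len(lo)
    best, best_score = None, -np.inf
    for _ in range(n_candidates):
        # each column is a random permutation of the n_samples strata
        strata = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
        u = (strata + rng.random((n_samples, d))) / n_samples
        dists = np.linalg.norm(u[:, None, :] - u[None, :, :], axis=-1)
        np.fill_diagonal(dists, np.inf)
        if dists.min() > best_score:
            best, best_score = u, dists.min()
    return lo + best * (hi - lo)

# e.g. samples over the Burgers' parameter box [1.5, 2] x [0.02, 0.025]
# mu_train = maximin_lhs(1000, [[1.5, 2.0], [0.02, 0.025]])
```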

Model validation

We use SVR with kernel functions (2nd- and 3rd-order polynomials and the radial basis function), kNN, random forest, boosting, VKOGA, and SINDy as regression models to approximate the reduced velocity. In particular, for each regression method, we vary the model hyperparameters and plot the relative training error and validation error. The relative error (continuous-time model) is defined by

$$\begin{aligned} err = \frac{\Vert {\hat{{{f}}_r}}(\hat{{{x}}},t;\varvec{\mu }) - {{f}}_r(\hat{{{x}}},t;\varvec{\mu })\Vert }{\Vert {{f}}_r(\hat{{{x}}},t;\varvec{\mu })\Vert }. \end{aligned}$$
(19)

We then plot the learning curve of each regression method and compare the performance of each model on training and validation data over a varying number of training instances in “Cross-validation and hyperparameter tuning” section. By properly choosing the hyperparameters and the number of training instances, our regression models can effectively balance bias and variance.

Simulation of the surrogate ROM

Fig. 2

Backward Euler for 1D inviscid Burgers’ equation: time evolution of relative error: a \({{e}}_\textit{FOM}(t)\) in FOM; and b \({{e}}_\textit{ROM}(t)\) in ROM

Fig. 3

Pareto frontier of relative error with respect to the relative running time using backward Euler for 1D inviscid Burgers’ equation: a \({{e}}_\textit{FOM}\) vs. \({\varvec{\tau }}_\textit{FOM}\) in FOM; b \({{e}}_\textit{ROM}\) vs. \({\varvec{\tau }}_\textit{ROM}\) in ROM

We can now solve the problem using the surrogate model along the trajectory of the dynamical system. After applying time integration to the regression-based ROM, we compute the relative error of the proposed models as a function of time. We investigate backward Euler, solved with either Newton–Raphson or fixed-point iteration, as well as the explicit 4th-order Runge–Kutta method. Let \({{x}}(t)\), \(\hat{{{x}}_{b}}(t)\) and \(\hat{{{x}}}(t)\) denote the solution of the FOM, the Galerkin ROM, and the non-intrusive ROMs respectively. We define the relative errors with respect to the FOM, \({{e}}_\textit{FOM}(t)\), and the Galerkin ROM, \({{e}}_\textit{ROM}(t)\), as

$$\begin{aligned} {{e}}_\textit{FOM}(t)= & {} \frac{\Vert ({\varvec{V}}\hat{{{x}}}(t) - {{x}}(t))\Vert }{\Vert {{x}}(t)\Vert },\end{aligned}$$
(20)
$$\begin{aligned} {{e}}_\textit{ROM}(t)= & {} \frac{\Vert ( \hat{{{x}}}(t) - \hat{{{x}}_{b}}(t))\Vert }{\Vert \hat{{{x}}_{b}}(t)\Vert }. \end{aligned}$$
(21)

The corresponding averaged relative error over the entire time domain \({{e}}_\textit{FOM}\) and \({{e}}_\textit{ROM}\) can be computed as

$$\begin{aligned} {{e}}_\textit{FOM}= & {} \frac{1}{T}\int _{t=0}^T {{e}}_\textit{FOM}(t)dt,\end{aligned}$$
(22)
$$\begin{aligned} {{e}}_\textit{ROM}= & {} \frac{1}{T} \int _{t=0}^T {{e}}_\textit{ROM}(t)dt. \end{aligned}$$
(23)
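A short sketch of evaluating these time-averaged errors from stored trajectories, assuming states are stored column-wise at the times in t and using trapezoidal quadrature for the integrals:

```python
import numpy as np

def time_averaged_relative_error(X_ref, X_approx, t):
    """Eqs. (20)-(23): pointwise relative error in the 2-norm, averaged over
    [0, T] with trapezoidal quadrature. Columns of X_ref / X_approx are states
    at the times in t."""
    err_t = (np.linalg.norm(X_approx - X_ref, axis=0)
             / np.linalg.norm(X_ref, axis=0))
    return np.trapz(err_t, t) / (t[-1] - t[0])
```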

Let \(t_{FOM}\), \(t_{ROM}\), and \(\tau \) denote the running time of FOM, Galerkin ROM, and non-intrusive ROM respectively. Define the relative running time with respect to the FOM and the Galerkin ROM:

$$\begin{aligned} {\varvec{\tau }}_\textit{FOM}= \frac{{\varvec{\tau }}}{t_\textit{FOM}}, \end{aligned}$$
(24)

and

$$\begin{aligned} {\varvec{\tau }}_\textit{ROM}= \frac{{\varvec{\tau }}}{t_\textit{ROM}}. \end{aligned}$$
(25)

The following are the simulation results from the backward Euler method with \(N_t=800\). Figure 2 plots the state-space error with respect to the FOM and the Galerkin ROM using the backward Euler integrator. As the validation results predict, SVR3 and SINDy perform better than the other models, achieving a relative ROM error below 1e-4 over the entire time domain, and the relative error with respect to the FOM is well bounded. Figure 3 plots the Pareto frontier of error as a function of the relative running time using the backward Euler integrator. For differentiable models, the relative time is calculated using the less expensive approach, i.e. Newton’s method. By comparison, SINDy requires much less relative time than SVR3 at a comparable level of relative error in both the FOM and ROM comparisons. Table 1 summarizes (i) the online running time of all methods, (ii) the mean time-integrated error with respect to the FOM, and (iii) the mean time-integrated error with respect to the Galerkin ROM using the backward Euler integrator. For differentiable models, we compare the computational time of both Newton’s method and fixed-point iteration in backward Euler. For models that are non-differentiable, i.e. random forest, boosting and kNN, only the running time of the fixed-point iteration method is reported. All the ROM and FOM solutions are computed at the verified backward Euler time step \(\varDelta {t} = 3.12e\)-2. Note that the non-intrusive ROM, e.g. SINDy with Newton’s method, can accelerate the solver by \(10.4{\times }\) relative to the FOM and \(2.5{\times }\) relative to the Galerkin ROM, at a relative error of 0.0346 and 4.36e-5 respectively.

Table 1 Comparison of different machine learning methods using the backward Euler method for 1D inviscid Burgers’ equation: Newton’s method (N); fixed-point iteration (FP)
Fig. 4

Runge–Kutta for 1D inviscid Burgers’ equation: time evolution of relative error: a \({{e}}_\textit{FOM}(t)\) in FOM; and b \({{e}}_\textit{ROM}(t)\) in ROM

Fig. 5

Pareto frontier of relative error with respect to the relative running time using 4th-order Runge–Kutta for 1D inviscid Burgers’ equation: a \({{e}}_\textit{FOM}\) vs. \({\varvec{\tau }}_\textit{FOM}\) in FOM; b \({{e}}_\textit{ROM}\) vs. \({\varvec{\tau }}_\textit{ROM}\) in ROM

Table 2 Comparison of different machine learning methods using 4th-order Runge–Kutta for 1D inviscid Burgers’ equation

We examine the simulation results from the 4th-order Runge–Kutta method with \(N_t=200\). Figure 4 shows the state-space error with respect to the FOM and the Galerkin ROM using the Runge–Kutta integrator. SVR2, SVR3 and SINDy have comparable performance, and result in a larger ROM error than with the backward Euler solver. We notice that the random forest model begins to diverge after \(t>10\) in the explicit scheme. This can be explained by the corresponding performance in model evaluation in “Special regression models” section. Figure 5 plots the Pareto frontier of error with respect to the relative running time using the Runge–Kutta integrator. VKOGA has the smallest relative time in both the FOM and ROM comparisons. SINDy requires slightly more running time, while its accuracy outperforms VKOGA. Table 2 shows the online running time of all methods, the mean time-integrated error with respect to the FOM, and the mean time-integrated error with respect to the Galerkin ROM using the Runge–Kutta solver. For a fair comparison, all the ROM and FOM solutions are computed at the verified Runge–Kutta time step as selected in Fig. 1b. The results show that the SVR-based models, e.g. SVR2 and SVR3, yield the smallest relative errors; however, their computational cost is higher than that of the FOM. Note that the non-intrusive ROM (VKOGA) can speed up the solver by \(6.9{\times }\) relative to the FOM and \(3.3{\times }\) relative to the Galerkin ROM, at a relative error of 0.0353 and 0.0164 respectively.

2D convection–diffusion

Fig. 6

Solution profile of 2D convection–diffusion equation, \(u(x, y, t= 2)\) with input parameter \(\mu _1 = \mu _2 =9.5\)

We consider a 2D parametric nonlinear heat equation. Given a state variable \(u = u(x, y, t)\), the governing equation is described as

$$\begin{aligned} \frac{\partial u(x, y, t)}{\partial t}&= -\mu _0 \nabla ^2 u - \frac{\mu _0 \mu _1}{\mu _2} \left( e^{\mu _2 u} - 1\right) + \cos (2\pi x) \cos (2\pi y), \end{aligned}$$
(26a)
$$\begin{aligned} u(x, y, 0)&= 0. \end{aligned}$$
(26b)

The parameters are given by \(\mu _0 = 0.01\) and \((\mu _1, \mu _2) \in [9, 10]^2\). The spatial domain is \([0,1]^2\) and Dirichlet boundary conditions are applied. The FOM uses a finite difference discretization with \(51 \times 51\) grid points. The full time domain is [0, 2] and we evaluate both the backward Euler and Runge–Kutta methods for time integration with uniform time steps. Figure 6 shows the solution profile at \(t = 2\) with input parameters \((\mu _1, \mu _2) = (9.5, 9.5)\).
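For reference, a minimal finite-difference sketch of the full-order velocity in Eq. (26a) on the uniform \(51\times 51\) grid is given below, assuming homogeneous Dirichlet boundary values held fixed; this is an illustrative discretization, not necessarily the one used in the study.

```python
import numpy as np

def heat2d_rhs(u, mu0, mu1, mu2):
    """Full-order velocity for Eq. (26a) on a uniform grid over [0, 1]^2.
    Boundary nodes are held at their (assumed homogeneous) Dirichlet values,
    so their time derivative is set to zero."""
    n = u.shape[0]
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
                       - 4.0 * u[1:-1, 1:-1]) / h**2
    dudt = (-mu0 * lap
            - (mu0 * mu1 / mu2) * (np.exp(mu2 * u) - 1.0)
            + np.cos(2 * np.pi * X) * np.cos(2 * np.pi * Y))
    dudt[0, :] = dudt[-1, :] = dudt[:, 0] = dudt[:, -1] = 0.0  # Dirichlet boundary
    return dudt
```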

Fig. 7

Timestep verification for 2D convection–diffusion equation: a backward Euler integrator: we select a time step of 3.12e-2, as it leads to an approximate convergence rate of 1 and an approximate error of 5e-4 for the selected state; b Runge–Kutta: we select a time step of 1.25e-1, as it leads to an approximate convergence rate of 4 and an average approximate error of 2e-5 for the selected state

Data collection

Similar to the 1D case, we first investigate the appropriate time step \(\varDelta t\) for solving the ODE. We collect the solutions for an increasing number of time steps \(N_t= [25, 50, 100, 200, 400, 800, 1600, 3200, 6400]\) for (i) the explicit Runge–Kutta and (ii) the implicit backward Euler integrator. The verification results in Fig. 7 show that \(N_t= 200\) is a reasonable number of time steps for Runge–Kutta and \(N_t=800\) for the backward Euler method. During the offline stage, we run four full simulations corresponding to the corner parameters of the space \([9, 10]^2\). Then, the training data are sampled using Latin hypercube sampling to better cover the parameter space. In the sampling, \(N_\text {training}\) and \(N_\text {validation}\) instances of the state, time and parameters are generated following the criterion that the minimum distances between the data points are maximized. We use the default size of the training set \(N_\text {training}= 1000\) and of the validation set \(N_\text {validation}= 500\). Then the reduced vector field \({{f}}_r\) is computed for each input pair \((\hat{{{x}}},t;\varvec{\mu })\). In the training and validation stage, we regress the reduced vector field \({{f}}_r\) on the input \((\hat{{{x}}},t;\varvec{\mu })\); in the test stage, we evaluate the ROM. The parameters are fixed to \((\mu _1, \mu _2) = (9.5, 9.5)\).

Model validation

We report the performance of SVR (2nd- and 3rd-order polynomial and RBF kernels), kNN, random forest, boosting, VKOGA, and SINDy as regression models to approximate the reduced velocity. In particular, as in “Model validation” section, for each regression method, we vary the model hyperparameters and plot the relative training and validation errors. The relative error is defined by Eq. (19). Similarly, we plot the learning curve of each regression method and compare the performance of each model on training and validation data over a varying number of training instances in “Cross-validation and hyperparameter tuning” section. We aim to balance bias and variance in each regression model by properly choosing the hyperparameters and the number of training instances.

Simulation of the surrogate ROM

We can now solve this 2D problem using the surrogate model along the trajectory of the dynamical system. After applying time integration to the regression-based ROM, we compute the relative error of the proposed models as a function of time. As in “Simulation of the surrogate ROM” section, we investigate both the backward Euler and Runge–Kutta integrators. The following are the simulation results from the backward Euler method with \(N_t=800\). Figure 8 plots the state-space error with respect to the FOM and the Galerkin ROM using the backward Euler integrator. VKOGA outperforms the other models, achieving a relative ROM error below 6e-2 over the entire time domain, and its accuracy is closest to that of the Galerkin ROM.

Fig. 8

Backward Euler for 2D convection–diffusion equation: time evolution of relative error: a \({{e}}_\textit{FOM}(t)\) in FOM; and b \({{e}}_\textit{ROM}(t)\) in ROM

Figure 9 plots the Pareto frontier of error as a function of the relative running time using the backward Euler integrator. VKOGA performs best in terms of both accuracy and time efficiency when compared with the Galerkin ROM.

Fig. 9

Pareto frontier of relative error with respect to the relative running time using backward Euler for 2D convection–diffusion equation: a \({{e}}_\textit{FOM}\) vs. \({\varvec{\tau }}_\textit{FOM}\) in FOM; b \({{e}}_\textit{ROM}\) vs. \({\varvec{\tau }}_\textit{ROM}\) in ROM

Table 3 presents the online running time of all methods, the mean time-integrated error with respect to the FOM, and the mean time-integrated error with respect to the Galerkin ROM using the backward Euler integrator for the 2D convection–diffusion equation. For comparison, all the ROM and FOM solutions are computed at the verified backward Euler time step. Note that the non-intrusive ROM, e.g. VKOGA with Newton’s method, can improve the solution time by three orders of magnitude over the FOM and by \(111.2{\times }\) compared to the Galerkin ROM, at a relative error of 0.0059 and 0.0041 respectively.

Table 3 Comparison of different machine learning methods using the backward Euler integrator for 2D convection–diffusion equation: Newton’s method (N); fixed-point iteration (FP)
Fig. 10

Runge–Kutta for 2D convection–diffusion equation: time evolution of relative error: a \({{e}}_\textit{FOM}(t)\) in FOM; and b \({{e}}_\textit{ROM}(t)\) in ROM

Fig. 11

Pareto frontier of relative error with respect to the relative running time using 4th-order Runge–Kutta for 2D convection–diffusion equation: a \({{e}}_\textit{FOM}\) vs. \({\varvec{\tau }}_\textit{FOM}\) in FOM; b \({{e}}_\textit{ROM}\) vs. \({\varvec{\tau }}_\textit{ROM}\) in ROM

We examine the simulation results from the 4th-order Runge–Kutta method with \(N_t=200\). Figure 10 shows the state-space error with respect to the FOM and the Galerkin ROM using the Runge–Kutta integrator. SVR2, SVR3, VKOGA, and SINDy have comparable performance, and result in a smaller ROM error than with the backward Euler solver. We notice that the random forest, boosting, and kNN models begin to diverge quickly in the second half of the time domain. Figure 11 plots the Pareto frontier of error as a function of the relative running time using the Runge–Kutta integrator. VKOGA and SINDy outperform the other models in terms of both accuracy and time cost; their relative errors compared to the Galerkin ROM and the FOM are below 1e-2. Table 4 lists the online running time of all methods, the mean time-integrated error with respect to the FOM, and the mean time-integrated error with respect to the Galerkin ROM using the Runge–Kutta integrator for the 2D convection–diffusion equation. In the comparison, all the ROM and FOM solutions are computed at the verified Runge–Kutta time step. We observe that the computational efficiency of the non-intrusive ROMs is significantly better than that of the Galerkin ROM. VKOGA using Runge–Kutta can speed up the solver by \(3406.9{\times }\) relative to the FOM and \(468.3{\times }\) relative to the Galerkin ROM, at a relative error of 0.0083 and 0.0069; SINDy using Runge–Kutta can accelerate the solver by \(2524.1{\times }\) over the FOM and \(347.0{\times }\) compared to the Galerkin ROM, at a relative error of 0.0077 and 0.0066 respectively.

Table 4 Comparison of different machine learning methods using 4th-order Runge–Kutta for 2D convection–diffusion equation

Conclusions

In this study, we demonstrate the effectiveness of a data-driven model reduction method for solving two types of parametric PDEs non-intrusively, in both explicit and implicit time integration schemes. The approach avoids the cost and intrusiveness of querying the full-order model and the Galerkin reduced-order model during the simulation, by approximating the low-dimensional operators using regression methods. In the offline stage, we train the regression models using state-of-the-art techniques from machine learning and dynamics-specific learning methods in a reduced-order architecture; in the online stage, the surrogate ROM then solves problems more efficiently (orders of magnitude in speedup) using the learned reduced operators. The proposed approach speeds up the full-order model simulation by one order of magnitude for the 1D Burgers’ equation and three orders of magnitude for the 2D convection–diffusion equation. Among the machine learning models we evaluate, VKOGA and SINDy distinguish themselves from the other models, delivering a superior cost vs. error trade-off.

A number of important extensions and directions arise from this work. First, it will be interesting to investigate the effectiveness of the approach on different types of PDEs or more complex nonlinear dynamical systems, e.g. the Euler equations. Moreover, for nonlinear Lagrangian dynamical systems, we need to develop structure-preserving approximations for all reduced Lagrangian ingredients in the model reduction. Rather than applying a Galerkin projection to obtain the ROM, one can alternatively employ a least-squares Petrov–Galerkin (LSPG) projection [14, 15, 17], which requires a regression method predicting non-negative values. Analyzing models on a nonlinear manifold is another important area of future work, going beyond the linear subspaces and linear mappings between the original full-order space \({\mathbb {R}}^N\) and the reduced-order subspace \({\mathbb {R}}^n\) used in Galerkin projection. The growing intersection of dynamical systems and data science is driving innovations in the prediction of complex systems [10]. With a deeper understanding of neural networks, it may be possible to take the non-intrusive approach to another level of accuracy. Furthermore, physics-informed models [45] may accelerate learning from data and add interpretability to existing approaches. This framework can also be combined with distributed systems to enable parallelism for extreme-scale high-performance computing.