1 Introduction

There has been much recent interest in developing machine-learning-based methods for learning the underlying physics-based equations of motion from data. In this paper, we are interested in learning the general dynamics F[u; x, t] of spatiotemporal partial differential equations (PDEs) or spatiotemporal partial integro-differential equations (PIDEs) such as

$$\begin{aligned} \partial _{t}u = F[u; x, t], \quad x\in \Omega ,\, t\in [0, T]. \end{aligned}$$
(1)

Here, \(\Omega \) is the spatial domain of interest and F[u; x, t] represents a general spatiotemporal operator acting on the function u(x, t), including linear combinations of all differential operators acting on u, such as \(u_x, u_{xx}, u_{xxx},...\), and spatial convolutional operators.

Although machine learning approaches have been proposed for many types of inverse problems that reconstruct partial differential equations (PDEs) from data [1, 2], most of them make prior assumptions about the specific form of the PDE and use spatial discretization, i.e., grids or meshes, of a bounded spatial variable x to approximate the solutions of the PDE. There are three main types of machine-learning-based methods for learning PDEs: (i) methods that use neural networks to reconstruct the RHS of Eq. (1), F[u; x, t], by assuming that it can be well approximated by a (non)linear combination of a class of differential operators, (ii) methods that try to find an explicit mathematical expression for F[u; x, t] by imposing specific forms on F, and (iii) methods that circumvent learning F[u; x, t] by reconstructing a map from the initial condition to the solution at a later time. Long et al. [3] and Churchill et al. [4] used convolutional layers to construct the spatial derivatives of u, then applied a neural network [3, 4] or a symbolic network [5] to approximate F[u; x, t] by \(F(x, u_x, u_{xx},\dots )\). Sparse identification of nonlinear dynamics (SINDy) [6] and its variants [7, 8] have been developed to learn the dynamics of PDEs by using a sparse regression method to infer the coefficients \(\textbf{a}\) in \(\partial _{t}u(x,t) = \textbf{a}\cdot (1, u, u^2, u_x, u_{xx}, \ldots )\), where \(\textbf{a}\) is the to-be-learned row vector of coefficients associated with each term in the PDE. These methods impose an additive form for F[u; x, t]. Additionally, the Fourier neural operator (FNO) [9] and other recently developed approaches [10, 11] learn the mapping between the function space of the initial condition \(u_0(\cdot , 0)\) and the function space of the solution \(u(\cdot , t)\) within a time range \(t\in [t_1, t_2]\).

In summary, previous methods either assume F[u; x, t] can be approximated by some (non)linear combination of differential operators, impose a specific form on F[u; x, t], or circumvent learning F[u; x, t] by reconstructing a map from the initial condition to the solution at a later time. To our knowledge, there has been no method that can extract the dynamics F[u; x, t] from data without making prior assumptions on its form. Moreover, since most prevailing numerical methods for time-dependent DEs rely on spatial discretization via meshes or grids, they cannot be directly applied to problems defined on an unbounded domain [12]. Nonetheless, many physical systems involve the evolution of quantities that experience long-ranged spatial interactions, requiring the solution of spatiotemporal integro-differential equations defined on unbounded domains; i.e., the dynamics F on the RHS of Eq. (1) might contain a spatial convolution operator. Examples of such spatiotemporal integro-differential equations include the fractional Laplacian equations for anomalous diffusion [13] and the Keller–Segel equation [14] for describing swarming behavior. Reconstructing F[u; x, t] on the RHS of Eq. (1) given some observations of the physical quantity u(x, t) can help uncover the physical laws that govern its time evolution.

One major difficulty that prevents the direct application of previous methods to unbounded-domain problems is that one needs to truncate the unbounded domain and impose appropriate boundary conditions [15, 16] on the new artificial boundaries. Although generalizations of the FNO method that include basis functions other than the Fourier series [17] can potentially be applied to unbounded domains, they do not reconstruct the dynamics F[u; x, t]. Moreover, they treat x and t in the same way using nonadaptive basis functions, which can be inefficient for unbounded-domain spatiotemporal problems where basis functions often need to be dynamically adjusted over time. Recently, an adaptive spectral PINN (s-PINN) method was proposed to solve specified unbounded-domain PDEs [18]. The method expresses the underlying unknown function in terms of adaptive spectral expansions in space with time-dependent coefficients of the basis functions, does not rely on explicit spatial discretization, and can be applied to unbounded domains. However, like many other approaches, the s-PINN approach assumes that the PDE takes the specific form \(u_t = F(u, u_x, u_{xx},...) + f(x, t)\) in which \(F(u,u_x,u_{xx},...)\) is known and only the u-independent source term f(x, t) is unknown and to be learned. Therefore, the s-PINN method is limited to parameter inference and source reconstruction.

Fig. 1 a A 1D example of the spectral expansion in an unbounded domain with scaling factor \(\beta \) and displacement \(x_0\) (Eq. (7)). b The evolution of the coefficient \(c_0(t)\) and the two tuning parameters \(\beta (t)\) and \(x_0(t)\). c A schematic of how to reconstruct Eq. (1) satisfied by the spectral expansion approximation \(u_{N, x_0}^{\beta }\). The time t, expansion coefficients \(c_i\), and tuning variables \(\beta (t)\) and \(x_0(t)\) are inputs of the neural network, which then outputs \(\varvec{F}(\tilde{\varvec{c}}_N;t,\Theta )=(F_0,...,F_N, F_{\beta }, F_{x_0})\). The basis functions \(\phi _i\big (\beta (t)(x-x_0(t))\big )\) are shaped by the time-dependent scaling factor \(\beta (t)\) and shift parameter \(x_0(t)\), which are determined by \(\frac{\text {d} \beta }{\text {d}t} \approx F_{\beta }\) and \(\frac{\text {d}x_0}{\text {d} t}\approx F_{x_0}\), respectively

In this paper, we propose a spectral-based DE learning method that extracts the unknown dynamics in the spatiotemporal DE Eq. (1) by using a parameterized neural network to express \(F[u; x, t]\approx F[u;x, t,\Theta ]\). Specifically, our spectral-based DE learning approach aims to reconstruct both spatiotemporal PDEs and spatiotemporal PIDEs where the spatial variable x is defined on an unbounded domain. Moreover, our approach does not require prior assumptions on the form of F[u; x, t]. Throughout this paper, the term “spatiotemporal DE” will refer to both PDEs and PIDEs in the form of Eq. (1). The formal solution u is then represented by a spectral expansion in space,

$$\begin{aligned} u(x, t)\approx u_{N}(x, t) = \sum _{i=0}^{N} c_{i}(t) \phi _{i}(x), \end{aligned}$$
(2)

where \(\{\phi _i\}_{i=0}^N\) is a set of appropriate basis functions that can be defined on bounded or unbounded domains and \(\{c_i\}_{i=0}^N\) are the associated coefficients. We assume that the spectral expansion coefficients \(c_i(t_{j}), i=0,...,N\) in Eq. (2) are given as inputs at various time points \(\{t_{j}\}=t_0,...,t_M\).

By using the spectral expansion in Eq. (2) to approximate u, we do not need an explicit spatial discretization like spatial grids or meshes, and the spatial variable x can be defined in either bounded or unbounded spatial domains. The best choice of basis functions will depend on the spatial domain. In bounded domains, any set of basis functions in the Jacobi polynomial family, including Chebyshev and Legendre polynomials, provides similar performance and convergence rates; for semibounded domains \(\mathbb {R}^+\), generalized Laguerre functions are often used; for unbounded domains \(\mathbb {R}\), generalized Hermite functions are used if the solution is exponentially decaying at infinity, while mapped Jacobi functions are used if the solution is algebraically decaying [19, 20].
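For concreteness, the short sketch below (with our own helper names and a simple quadrature choice, purely for illustration) evaluates orthonormal Hermite functions on \(\mathbb {R}\) by their three-term recurrence and projects a sample profile onto them to obtain the coefficients \(c_i\) in Eq. (2).

```python
import numpy as np

def hermite_functions(x, N):
    """Orthonormal Hermite functions hat{H}_0, ..., hat{H}_N evaluated at x.

    They satisfy int hat{H}_m hat{H}_n dx = delta_{mn} and the recurrence
    hat{H}_{n+1}(x) = sqrt(2/(n+1)) x hat{H}_n(x) - sqrt(n/(n+1)) hat{H}_{n-1}(x).
    """
    H = np.zeros((N + 1, len(x)))
    H[0] = np.pi ** (-0.25) * np.exp(-x ** 2 / 2)
    if N >= 1:
        H[1] = np.sqrt(2.0) * x * H[0]
    for n in range(1, N):
        H[n + 1] = np.sqrt(2.0 / (n + 1)) * x * H[n] - np.sqrt(n / (n + 1)) * H[n - 1]
    return H

def project(u_vals, x, N):
    """Coefficients c_i = int u(x) hat{H}_i(x) dx via simple trapezoidal quadrature."""
    return np.trapz(u_vals * hermite_functions(x, N), x, axis=-1)

# Example: expand a Gaussian bump in N + 1 = 10 Hermite functions.
x = np.linspace(-20.0, 20.0, 4001)     # wide grid; the profile decays rapidly
u = np.exp(-(x - 0.3) ** 2)
c = project(u, x, N=9)                 # coefficients c_0, ..., c_9 in Eq. (2)
u_N = c @ hermite_functions(x, 9)      # spectral reconstruction u_N(x)
print("relative L2 error:", np.linalg.norm(u_N - u) / np.linalg.norm(u))
```

For a bounded domain, the same pattern applies with, e.g., Chebyshev polynomials and their associated quadrature in place of the Hermite functions.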

Additionally, after using the spectral expansion Eq. (2), a numerical scheme for Eq. (1), regardless of whether it is a PDE or a PIDE involving convolution terms in the spatial variable x, can be expressed as ordinary differential equations (ODEs) in the expansion coefficients \(\varvec{c}_N(t):=(c_0(t),\ldots , c_N(t))\)

$$\begin{aligned} \frac{\text{ d }\varvec{c}_N(t)}{\text{ d }t} = \varvec{F}(\varvec{c}_N;t). \end{aligned}$$
(3)

The spatiotemporal DE learning method proposed here differs substantially from the s-PINN framework because it does not make any assumptions on the form of the spatiotemporal DE in Eq. (1) other than that the RHS F does not contain time-derivatives or time-integrals of u(x, t). Instead, the spectral neural DE method models F directly by a neural network and employs the neural ODE method [21] to fit the trajectories of the ground truth spectral expansion coefficients \(\varvec{c}_N(t)\) (see Fig. 1c). The method inputs both the solution u(x, t) (in terms of a spectral expansion) and t into the neural network and applies a neural ODE model [21]. Thus, general DEs such as Eq. (1) can be learned with little knowledge of the RHS. To summarize, the proposed method presented in this paper has the advantage that it

  (i) does not require assumptions on the explicit form of F other than that it should not contain any time-derivatives or time-integrals of u; both spatiotemporal PDEs and PIDEs can be learned in a unified way;

  (ii) directly learns the dynamics of a spatiotemporal DE (the RHS of Eq. (1)) by using a parameterized neural network that can time-extrapolate the solutions (a minimal implementation sketch is given after this list); and

  (iii) does not rely on explicit spatial discretization and can thus learn unbounded-domain DEs. By further using adaptive spectral techniques, our neural DE learning method also learns the dynamics of the shaping parameters that adjust the basis functions. Additionally, our neural DE learning method can take advantage of sparse spectral methods [22] for effectively reconstructing multidimensional spatiotemporal DEs using a reduced number of inputs.
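As a minimal sketch of how the coefficient dynamics in Eq. (3) can be parameterized by a neural network and integrated in time (all layer sizes and names are illustrative; the torchdiffeq package is the one we use in Sect. 3), one may write:

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint


class CoeffDynamics(nn.Module):
    """Neural-network parameterization F(c_N; t, Theta) of the RHS of Eq. (3)."""

    def __init__(self, n_coeff, width=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_coeff + 1, width), nn.ELU(),
            nn.Linear(width, n_coeff),
        )

    def forward(self, t, c):
        # torchdiffeq passes (t, state); append t so that F may depend on time.
        t_col = t.expand(c.shape[:-1] + (1,))
        return self.net(torch.cat([c, t_col], dim=-1))


# Propagate a batch of initial coefficient vectors c_N(t_0) forward in time.
func = CoeffDynamics(n_coeff=10)
c0 = torch.randn(16, 10)                            # 16 trajectories, N + 1 = 10
t_grid = torch.linspace(0.0, 1.0, 5)                # output times t_j
c_traj = odeint(func, c0, t_grid, method="dopri5")  # shape (5, 16, 10)
```

The batch dimension allows all observed trajectories in the training set to be propagated simultaneously.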

In the next section, we formulate our spatiotemporal DE learning method. In Sect. 3, we use our spatiotemporal DE method to learn the underlying dynamics of DEs. Although our main focus is to address learning unbounded-domain spatiotemporal DEs, we perform benchmarking comparisons on bounded-domain problems against other recently developed machine-learning-based PDE learning methods that apply only to bounded domains. Concluding remarks are given in Sect. 4. Additional numerical experiments and results are given in the Appendix.

2 Spectral spatiotemporal DE learning method

We now formalize our spectral spatiotemporal DE learning method for spatiotemporal DEs of the general structure of Eq. (1), assuming F[u; x, t] does not include time-differentiation or time-integration of u(x, t). However, unlike in [10], the “dynamics” F[u; x, t] on the RHS of Eq. (1) can take any other form, including differentiation in space, spatial convolution, and nonlinear terms. Below is a table of the notation used throughout the development of our method and in the test examples in the rest of the paper.

Table 1 Overview of variables and parameters

First, consider a bounded spatial domain \(\Omega \). Suppose we have observational data \(u_m(x, t),\, m=1,...,M\) for all x at given time points \(t_j, j=1,...,T\) associated with different initial conditions \(u_m(x, t_0),\, m=1,...,M\). Furthermore, we assume that the \(u_m(x, t)\) all obey the same underlying, well-posed spatiotemporal DE Eq. (1). Upon choosing proper orthogonal basis functions \(\{\phi _i(x)\}_{i=0}^N\), we can approximate u(x, t) by the spectral expansion in Eq. (2) and obtain the spectral expansion coefficients \(\varvec{c}_N(t):=(c_0(t),...,c_N(t))\), which evolve according to Eq. (3). We aim to reconstruct the dynamics \(\varvec{F}(\varvec{c}_N; t)\) in Eq. (3) by using a neural network

$$\begin{aligned} \varvec{F}(\varvec{c}_N; t)\approx \varvec{F}(\varvec{c}_N; t, \Theta ), \end{aligned}$$
(4)

where \(\Theta \) is the set of parameters in the neural network. We can then construct the RHS of Eq. (1) using

$$\begin{aligned} F[u;x,t,\Theta ]\approx \sum _{i=0}^N F_i(\varvec{c}_N; t,\Theta )\phi _i(x) \end{aligned}$$
(5)

where \(F_i\) is the \(i^{\text {th}}\) component of the vector \(\varvec{F}(\varvec{c}_N; t,\Theta )\). We shall use the neural ODE to learn the dynamics \(\varvec{F}(\varvec{c}_N;t, \Theta )\) by minimizing the mean loss function \(L(u_N(x, t;\Theta ), u(x, t))\) between the numerical solution \(u_N(x, t;\Theta )\) and the observations u(x, t). When data are provided at discrete time points \(t_j\), we need to minimize

$$\begin{aligned} \sum _{m=1}^M\sum _{j=1}^{T} L\big (u_{N, m}(x, t_j; \Theta ), u_m(x, t_j)\big ), \end{aligned}$$
(6)

with respect to \(\Theta \). Here, \(u_m(x, t_j)\) is the solution at \(t_j\) of the \(m^{\text {th}}\) trajectory in the dataset and \(u_{N, m}(x, t_j;{\Theta })\) denotes the spectral expansion solution reconstructed from the coefficients \(\varvec{c}_{N, m}\) obtained by the neural ODE for the \(m^{\text {th}}\) solution at \(t_j\).

To solve unbounded-domain DEs (in any dimension \(\Omega \subseteq \mathbb {R}^D\)), two additional sets of parameters are needed to scale and translate the spatial argument \(\varvec{x}\): a scaling factor \(\varvec{\beta }:=(\beta ^1,\ldots ,\beta ^D)\in \mathbb {R}^D\) and a shift factor \(\varvec{x}_0:=(x_0^1,\ldots ,x_0^D)\in \mathbb {R}^D\). These factors need to be dynamically adjusted to obtain accurate spectral approximations of the original function [12, 23, 24]. When generalizing the spectral approximation \(u_{N, x_0}^{\beta }(x, t)\) in Table 1 to higher spatial dimensions, we can write

$$\begin{aligned} u(\varvec{x}, t) \approx u_{N, \varvec{x}_0}^{\varvec{\beta }}(\varvec{x}, t) =\sum _{i=0}^N c_i(t)\phi _i\big (\varvec{\beta }*(\varvec{x} - \varvec{x}_0)\big ), \end{aligned}$$
(7)

where \(\varvec{\beta }*(\varvec{x} - \varvec{x}_0) :=(\beta ^1(x^1-x_0^1),\ldots , \beta ^D(x^D-x_0^D))\) denotes the elementwise (Hadamard) product and \(\phi _{i}(\cdot )\) are D-dimensional basis functions.

Given observed \(u(\varvec{x}, t)\), the ground truth coefficients \(c_i(t)\) as well as the spectral adjustment parameters \(\varvec{\beta }(t)\) and \(\varvec{x}_0(t)\) at discrete time points can be obtained by minimizing the frequency indicator (introduced in [12])

$$\begin{aligned} \mathcal {F}(u; \varvec{\beta }, \varvec{x}_0) = \sqrt{\frac{\sum _{i=N-[\frac{N}{3}]+1}^N c_i^2}{\sum _{i=0}^N c_i^2}} \end{aligned}$$
(8)

that measures the error of the numerical representation of the solution u [25]. \(\mathcal {F}(u; \varvec{\beta }, \varvec{x}_0)\) depends on \(\varvec{\beta }, \varvec{x}_0\), and the expansion order N through the arguments of the basis functions and thus implicitly through their expansion coefficients \(c_{i}\). Thus, minimizing \(\mathcal {F}(u; \varvec{\beta }, \varvec{x}_0)\) will also minimize the approximation error \(\Vert u - \sum _{i=0}^N c_i\phi _{i}(\varvec{\beta }(t)*(\varvec{x}-\varvec{x}_0(t)))\Vert ^2_2\). Numerically evaluating \(c_i(t_{j})\) usually requires setting up appropriate collocation points determined by the basis functions and adaptive parameters \(\varvec{\beta }\) and \(\varvec{x}_0\). In such unbounded domain problems, the ground truth coefficients and adaptive parameters \(\tilde{\varvec{c}}_N :=\big (c_0(t),...,c_N(t),\varvec{\beta }(t), \varvec{x}_0(t)\big )\) at times \(t_{j}\) are given as inputs to the neural network.
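As a concrete illustration (the helper name is ours), the frequency indicator of Eq. (8) can be evaluated directly from a coefficient vector; \(\varvec{\beta }\) and \(\varvec{x}_0\) can then be adjusted, e.g., by a simple search, so that the indicator of the recomputed coefficients is minimized.

```python
import numpy as np

def frequency_indicator(c):
    """Frequency indicator of Eq. (8) for a coefficient vector c = (c_0, ..., c_N).

    It compares the energy in the last floor(N/3) modes with the total energy;
    a small value indicates that the expansion resolves u well.
    """
    c = np.asarray(c, dtype=float)
    N = len(c) - 1
    tail = c[N - N // 3 + 1:]
    return np.sqrt(np.sum(tail ** 2) / np.sum(c ** 2))

# A rapidly decaying spectrum gives a small indicator; a slowly decaying
# (under-resolved) spectrum gives a large one.
print(frequency_indicator(2.0 ** -np.arange(10)))        # well resolved
print(frequency_indicator(1.0 / (1.0 + np.arange(10))))  # poorly resolved
```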

In addition to \(\varvec{c}_{N}(t)\), evolution of the adaptive parameters \(\varvec{\beta }(t), \varvec{x}_0(t)\) over time can also be learned by the neural ODE. More specifically,

$$\begin{aligned} \frac{\text{ d }\tilde{\varvec{c}}_N}{\text{ d }t} = \varvec{F}(\tilde{\varvec{c}}_N;t) \end{aligned}$$
(9)

for the ODEs satisfied by \(\tilde{\varvec{c}}_N :=\big (\varvec{c}_{N}(t),\varvec{\beta }(t), \varvec{x}_0(t)\big )\). The underlying dynamics \(\varvec{F}(\tilde{\varvec{c}}_N;t)\) is approximated as

$$\begin{aligned} \varvec{F}(\tilde{\varvec{c}}_N; t)\approx \varvec{F}(\tilde{\varvec{c}}_N; t, \Theta ) \end{aligned}$$
(10)

by minimizing with respect to \(\Theta \) a loss function that also penalizes the error in \(\varvec{\beta }\) and \(\varvec{x}_0\)

$$\begin{aligned} \begin{aligned} \sum _{m=1}^M\sum _{j=1}^{T} \bigg [&L\big ( u_m(\varvec{x}, t_j), u_{N, \varvec{x}_{0, m}, m}^{\varvec{\beta }_m}(\varvec{x}, t_j;{\Theta })\big ) + \lambda \, \big \Vert \varvec{\beta }_m(t_j) -\varvec{\beta }_m(t_j;\Theta )\big \Vert _2^2 \\ &+ \lambda \,\big \Vert \varvec{x}_{0, m}(t_j)-\varvec{x}_{0, m}(t_j;\Theta )\big \Vert _2^2\bigg ]. \end{aligned} \end{aligned}$$
(11)

Similarly, the DE satisfied by \(u_{N, \varvec{x}_0}^{\varvec{\beta }}(\varvec{x}, t)\) is

$$\begin{aligned} \partial _t u_{N, \varvec{x}_0}^{\varvec{\beta }}(\varvec{x}, t)=F[u_{N, \varvec{x}_0}^{\varvec{\beta }};\varvec{x},t, \Theta ], \end{aligned}$$
(12)

where

$$\begin{aligned} \begin{aligned} F[u_{N,\varvec{x}_0}^{\varvec{\beta }};\varvec{x},t, \Theta ]&= \sum _{i=0}^N F_i(\tilde{\varvec{c}}_N;t,\Theta ) \phi _i\big (\varvec{\beta }(t)*(\varvec{x}-\varvec{x}_0(t))\big ) \\ \,&\,\,\, + \sum _{i=0}^{N} c_{i}(t) \sum _{j=1}^D \Big (\partial _t \Big (\beta ^j(t)\big (x^j-x_0^j(t)\big )\Big )\, \partial _{x^j} \phi _i\big (\varvec{\beta }(t)*(\varvec{x} - \varvec{x}_0(t))\big )\Big ) \end{aligned} \end{aligned}$$
(13)

and \(F_i\) is the \(i^{\text {th}}\) component of \(\varvec{F}(\tilde{\varvec{c}}_N; t,\Theta )\). In Eq. (11), \(\varvec{\beta }_{m}(t_j)\) and \(\varvec{x}_{0, m}(t_j)\) are the scaling factor and the displacement of the \(m^{\text {th}}\) sample at time \(t_j\), respectively, and \(\lambda \) is the penalty coefficient for squared mismatches in the scaling and shift parameters \(\varvec{\beta }\) and \(\varvec{x}_0\). In this way, the dynamics of the variables \(\varvec{x}_0,\varvec{\beta }\) are also learned by the neural ODE, so they do not need to be manually adjusted as they were in [12, 18, 24, 25].

If the space \(\Omega \) is high-dimensional, sufficiently smooth and well-behaved solutions can be approximated by restricting the basis functions \(\{\phi _{i, x_0}^{\beta }\}\) to those in the hyperbolic cross space. If this projection is performed optimally, such sparse spectral methods with spectral expansions defined in the hyperbolic cross space can reduce the effective dimensionality of the problem by leaving out redundant basis functions [22, 26] without significant loss of accuracy. We will show that our method can also easily incorporate the hyperbolic cross spaces to enhance training efficiency in modestly higher-dimensional problems.

3 Numerical experiments

In this work, we set \(L(\cdot , \cdot )\) to be the relative squared \(L^2\) error

$$\begin{aligned} L(u(x, t_i),u_N(x, t_i;\Theta )):=\frac{\big \Vert u(x, t_i) - u_N(x, t_i;\Theta )\big \Vert _2^2}{\Vert u\Vert _2^2} \end{aligned}$$
(14)

in the loss function Eq. (6) used for training. We carry out numerical experiments to test our spectral spatiotemporal DE reconstruction method by learning the underlying DE given data in both bounded and unbounded domains. In this section, we use the odeint_adjoint function along with the dopri5 numerical integration method developed in the torchdiffeq package [21] to numerically integrate Eqs. (3) and  (9). Stochastic gradient descent (SGD) and the Adam optimizer are used separately to optimize parameters of the neural network. Computations for all numerical experiments were implemented on a 4-core Intel® i7-8550U CPU, 1.80 GHz laptop using Python 3.8.10, Torch 1.12.1, and Torchdiffeq 0.2.3.
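For concreteness, a stripped-down version of the training procedure is sketched below (all sizes and names are illustrative; the actual experiments use the architectures described in each example). It minimizes the relative squared \(L^2\) loss of Eq. (14), which for orthonormal basis functions can be evaluated directly on the coefficient vectors, using odeint_adjoint and the Adam optimizer.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint as odeint


class CoeffDynamics(nn.Module):
    """F(c_N; t, Theta): one hidden layer with ELU activation (cf. Example 1)."""

    def __init__(self, n_coeff, width=300):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_coeff + 1, width), nn.ELU(),
                                 nn.Linear(width, n_coeff))

    def forward(self, t, c):
        t_col = t.expand(c.shape[:-1] + (1,))
        return self.net(torch.cat([c, t_col], dim=-1))


def relative_sq_loss(c_pred, c_true, eps=1e-12):
    """Relative squared L2 error, Eq. (14); for orthonormal bases the
    function-space norms reduce to norms of the coefficient vectors."""
    num = ((c_pred - c_true) ** 2).sum(dim=-1)
    den = (c_true ** 2).sum(dim=-1) + eps
    return (num / den).mean()


# c_data: ground-truth coefficients with shape (n_times, n_trajectories, N + 1);
# random placeholders stand in here for the observed trajectories.
t_grid = torch.linspace(0.0, 1.0, 5)
c_data = torch.randn(5, 200, 10)

func = CoeffDynamics(n_coeff=10)
opt = torch.optim.Adam(func.parameters(), lr=1e-3)
for epoch in range(100):                                  # 10000 epochs in Example 1
    opt.zero_grad()
    c_pred = odeint(func, c_data[0], t_grid, method="dopri5")
    loss = relative_sq_loss(c_pred[1:], c_data[1:])       # Eq. (6) summed over m, t_j
    loss.backward()
    opt.step()
```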

Since algorithms already exist for learning bounded-domain PDEs, we first examine a bounded-domain problem in order to benchmark our spatiotemporal DE method against two other recent representative methods, a convolutional neural PDE learner [10] and a Fourier neural operator PDE learning method [11].

Example 1

For our first example, we consider learning a bounded-domain Burgers’ equation that describes the behavior of viscous fluid flow [27]. This example illustrates the performance of our spatiotemporal DE method in learning bounded-domain PDEs and benchmarks our approach against some recently developed methods.

$$\begin{aligned} \begin{aligned} \displaystyle \partial _t u + \tfrac{1}{2}\partial _x (u^2)&= \displaystyle \tfrac{1}{10}\partial _{xx} u, \quad x\in (-1, 1),\,\, t\ge 0,\\ u(-1, t) = u(1, t), \,\,\, \partial _x u(-1, t)&\displaystyle = \partial _x u(1, t), \,\, \, u(x, 0) = -\tfrac{1}{5}\frac{\psi _x(x, 0)}{\psi (x, 0)}, \end{aligned} \end{aligned}$$
(15)

where

$$\begin{aligned} \psi (x, t) \equiv 5+ \big (\tfrac{2+\xi _1}{2}\big )e^{-\pi ^{2}t/10} \sin \pi x + \tfrac{\xi _2}{2} e^{-2\pi ^{2}t/5} \cos 2\pi x. \end{aligned}$$
(16)

This model admits the analytic solution \(u(x, t) = -\frac{\psi _x(x, t)}{5\psi (x, t)}\). We sample two independent random variables \(\xi _1, \xi _2 \sim \mathcal {U}(0, 1)\) to generate a class of solutions \(\{u\}_{\xi _1, \xi _2}\) to Eq. (15) for both training and testing. To approximate F in Eq. (4), we use a neural network that has one intermediate layer with 300 neurons and the ELU activation function. The basis functions in Eq. (2) are taken to be Chebyshev polynomials. For training, we use 200 solutions of Eq. (15) (each corresponding to an independently sampled pair \((\xi _{1}, \xi _{2})\)) and record the expansion coefficients \(\{c_i\}_{i=0}^9\) at times \(t_j=j\Delta {t}, \Delta {t}=\frac{1}{4}, j=0,\ldots ,4\). The test set consists of 100 additional solutions, also evaluated at times \(t_j=j\Delta {t}, \Delta {t}=\frac{1}{4}, j=0,\ldots ,4\).
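One way to generate the training coefficients for this example (the choice of fitting points below is ours; the exact quadrature used in the experiments may differ) is to sample the analytic solution at Chebyshev points and fit a degree-9 Chebyshev expansion at each \(t_j\):

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

def burgers_solution(x, t, xi1, xi2):
    """Analytic solution u = -psi_x / (5 psi) of Eq. (15), with psi given by Eq. (16)."""
    psi = (5.0 + 0.5 * (2.0 + xi1) * np.exp(-np.pi ** 2 * t / 10) * np.sin(np.pi * x)
           + 0.5 * xi2 * np.exp(-2 * np.pi ** 2 * t / 5) * np.cos(2 * np.pi * x))
    psi_x = (0.5 * (2.0 + xi1) * np.exp(-np.pi ** 2 * t / 10) * np.pi * np.cos(np.pi * x)
             - 0.5 * xi2 * np.exp(-2 * np.pi ** 2 * t / 5) * 2 * np.pi * np.sin(2 * np.pi * x))
    return -psi_x / (5.0 * psi)

rng = np.random.default_rng(0)
x_nodes = np.cos(np.pi * np.arange(33) / 32)   # Chebyshev-Lobatto points on [-1, 1]
t_grid = 0.25 * np.arange(5)                   # t_j = j / 4, j = 0, ..., 4

dataset = []                                   # coefficients {c_i}_{i=0}^{9} per (m, t_j)
for m in range(200):                           # 200 training trajectories
    xi1, xi2 = rng.uniform(0.0, 1.0, size=2)
    coeffs = [cheb.chebfit(x_nodes, burgers_solution(x_nodes, t, xi1, xi2), deg=9)
              for t in t_grid]
    dataset.append(np.stack(coeffs))           # shape (5, 10)
```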

In this bounded-domain problem, we can compare our results (the generated solutions u(x, t)) with those generated from the Fourier neural operator (FNO) and the convolutional neural PDE learner methods. In the FNO method, the initial condition \(u(i\Delta x, 0)\) is fed into four intermediate Fourier convolution layers with 128 neurons in each layer. The FNO method then outputs the function values \(u(i\Delta x, t=j\Delta t)\) (with \(\Delta x = \frac{1}{128}, \Delta t = \frac{1}{4}\)) for \(j>0\) [11].

When implementing the convolutional neural PDE solver [10], we input \(u(i\Delta x,(j-1)\Delta t)\) and \(u(i\Delta x,j\Delta t)\) (with \(\Delta {x}=\frac{1}{100}, \Delta {t}=\frac{1}{250}\)) [10] into seven convolutional layers with 40 neurons in each layer, which output \(u(i\Delta x,(j+1)\Delta t)\) as the numerical solution at the next time step. Small \(\Delta {x}\) and \(\Delta {t}\) are used in the convolutional neural PDE solver method because this method depends on both spatial and temporal discretization, requiring fine discretization meshes in both dimensions. For all three methods, we used the Adam optimizer with a learning rate \(\eta =0.001\) to run 10000 epochs, which was sufficient for the errors in all three methods to converge. We list in Table 2 the mean relative \(L^2\) error

$$\begin{aligned} \frac{1}{MT}\sum _{m=1}^M\sum _{j=1}^T \frac{\big \Vert u_{N, m}(x, t_j; \Theta ) - u_m(x, t_j)\big \Vert _2}{\Vert u_m(x, t_j)\Vert _2}. \end{aligned}$$
(17)

For the FNO and spectral PDE learning methods, we minimize the relative squared \(L^{2}\) loss (Eq. (14)). For the convolutional neural PDE solver method, we must instead minimize the MSE loss: only partial, local spatial information on the solution is input during each training epoch, so the relative squared loss Eq. (14), which requires global spatial information to compute \(\Vert u\Vert _2\), cannot be properly defined. As shown in Table 2, the FNO method achieves the smallest error on the training set, while the convolutional neural PDE solver method performs the worst. Nonetheless, our proposed neural spectral DE learning approach performs comparably to the FNO method, giving comparable mean relative \(L^2\) errors for learning the dynamics associated with the bounded-domain Burgers’ equation, but it can also generate new solutions given different initial conditions.

Additionally, the run times (using a 4-core i7-8550U laptop) in this example were \(\sim 2\) hours for the convolutional PDE solver method, \(\sim 6\) hours for the FNO method, and \(\sim 5\) hours for our proposed spatiotemporal DE learning approach. Overall, even in bounded domains, our proposed neural DE learning approach compares well with the recently developed convolutional neural PDE solver and FNO methods, providing comparable errors and efficiency in generating solutions to Eq. (15) given different initial conditions.

Table 2 The convolutional PDE solver, the Fourier neural operator method, and our proposed spatiotemporal DE learner are used to learn or solve the dynamics of Burgers’ equation Eq. (15) in a bounded domain

The Fourier neural operator method works well for solving Burgers’ equation in Example 1, and there could be other, even more efficient methods for reconstructing bounded-domain spatiotemporal DEs. However, reconstructing unbounded-domain spatiotemporal DEs is substantially different from reconstructing their bounded-domain counterparts. First, discretization of space cannot be directly applied to unbounded domains; second, if we truncate an unbounded domain into a bounded domain, appropriate artificial boundary conditions need to be imposed [15]. Constructing such boundary conditions is usually complex, and improper boundary conditions can lead to large errors. A simple example in which the FNO method fails when the unbounded domain is truncated to a bounded one is provided in Appendix A.

Since our spectral method uses basis functions, it obviates the need for explicit spatial discretization and can be used to reconstruct unbounded-domain DEs. Dynamics in unbounded domains are intrinsically different from their bounded-domain counterparts because functions can display diffusive and convective behavior leading to, e.g., time-dependent growth at large x. This growth poses intrinsic numerical challenges when using prevailing finite element/finite difference methods that truncate the domain.

Although it is difficult for most existing methods to learn the dynamics in unbounded spatial domains, our spectral approach can reconstruct unbounded-domain DEs by simultaneously learning the expansion coefficients and the evolution of the basis functions. To illustrate this, we next consider a one-dimensional unbounded domain inverse problem.

Example 2

Here, we examine a parabolic PDE in an unbounded domain with initial conditions that depend on parameters \(\xi _1, \xi _2, \xi _3\). The PDE and its initial condition are given by

$$\begin{aligned} \partial _t u = -\partial _x u + \tfrac{1}{4} \partial _{xx} u,\quad u(x, 0) = \frac{\xi _1}{\sqrt{1+\xi _2}}\exp \bigg (-\frac{(x-\xi _3)^2}{1+\xi _2}\bigg ). \end{aligned}$$
(18)

This example illustrates the application of our method to learning the dynamics of a parabolic PDE given different ground truth solutions in an unbounded domain corresponding to different initial conditions. The solution of the PDE, within the domain \(x\in \mathbb {R}\) and time interval \(t\in [0, 1]\), is expressed as

$$\begin{aligned} u(x, t;\xi _1, \xi _2, \xi _3)= \frac{\xi _1}{\sqrt{t+1+\xi _2}} \exp \bigg (-\frac{(x-t - \xi _3)^2}{t+1+\xi _2}\bigg ). \end{aligned}$$
(19)

Since this problem is defined on an unbounded domain, neither the FNO nor the convolutional neural PDE methods can be used as they rely explicitly on spatial meshes or grids and apply only on bounded domains. However, given observational data \(u(\cdot , t)\) for different t, we can calculate the spectral expansion of u via the generalized Hermite functions [20]

$$\begin{aligned} u(x, t)\approx u_{N, x_0}^{\beta }=\sum _{i=0}^N c_i(t) \hat{\mathcal {H}}_i\big (\beta (t)(x-x_0(t))\big ) \end{aligned}$$
(20)

and then use the spatiotemporal DE learning approach to reconstruct the dynamics F in Eq. (1) satisfied by u. Recall that the scaling factor \(\beta (t)\) and the displacement of the basis functions \(x_0(t)\) are also to be learned. To penalize misalignment of the spectral expansion coefficients and the scaling and displacement factors \(\beta \) and \(x_0\), we use the loss function Eq. (11). Note that taking the derivative of the first term in Eq. (11) would involve evaluating the derivative of Eq. (14) which would require evaluation of integrals such as \(\int \tfrac{u_{N,x_0}^{\beta }(x,t_j; \Theta ) - u(x, t_j)}{\Vert u\Vert _2^2} \partial _x u_{N, x_0}^{\beta }(x, t_j;\Theta ) \partial _{\Theta }\big [\beta (t_j;\Theta )(x-x_0(t_j;\Theta ))\big ] \text {d}{x}\). Expressing \(\partial _x u_{N, x_0}^{\beta }(x, t_j; \Theta ) \) in terms of the basis functions \(\hat{\mathcal {H}}_{i}\big (\beta (t)(x-x_{0}(t))\big )\) would involve a dense matrix–vector multiplication of the coefficients of the expansion \(\partial _x u_{N, x_0}^{\beta }(x, t_j; \Theta ) \partial _{\Theta }\big [\beta (t_j;\Theta )(x-x_0(t_j;\Theta ))\big ]\), which might be computationally expensive during backward propagation in the stochastic gradient descent (SGD) procedure.

Alternatively, let the neural network parameters after the \((j-1)^\mathrm{{th}}\) training epoch be \(\Theta _{j-1}\). During the calculation of the gradient of the loss function Eq. (11) w.r.t. \(\Theta \) at the \(j^\mathrm{{th}}\) epoch, we define \(\tilde{\beta }(t_j):=\beta (t_j;\Theta _{j-1}), \tilde{x}_0(t_j) :=x_0(t_j;\Theta _{j-1})\) to be constants independent of \(\Theta \) and then modify Eq. (11) to

$$\begin{aligned} \begin{aligned} \sum _{m=1}^M \sum _{j=1}^{T} \Bigg [&\frac{\big \Vert u_{N, \tilde{x}_{0, m}(t_j)}^{\tilde{\beta }_m(t_j)}(x, t_j;\Theta ) - u_m(x, t_j)\big \Vert _2^2}{\Vert u_m(x, t_j)\Vert _2^2} \\ \,&\,\quad +\lambda \big (\beta _m(t_j; \Theta ) - \beta _m(t_j)\big )^2 + \lambda \big (x_{0, m}(t_j; \Theta ) - x_{0, m}(t_j)\big )^2\Bigg ], \end{aligned} \end{aligned}$$
(21)

so that backpropagation within each epoch will not involve calculating gradients of \(\tilde{\beta }_m(t_j), \tilde{x}_{0, m}(t_j)\) in the first term of Eq. (21). This simplified calculation reduces the computational cost of the training process while still providing gradients close to the true gradients when the reconstructed \(\tilde{\beta }(t_j), \tilde{x}_0(t_j)\) are close to the ground truth values \(\beta _m(t), x_{0, m}(t)\). For example, when \(\beta _m(t;\Theta _{j-1})=\beta _m(t), x_{0, m}(t;\Theta _{j-1})=x_{0, m}(t)\), i.e., the reconstructed \(\beta _m(t;\Theta _{j-1}), x_{0, m}(t;\Theta _{j-1})\) agree exactly with the ground truth, Eq. (11) and Eq. (21) both become

$$\begin{aligned} \sum _{m=1}^M \sum _{j=1}^{T} \frac{\big \Vert u_{N, x_{0, m}(t_j)}^{\beta _m(t_j)}(x, t_j;\Theta ) - u_m(x, t_j)\big \Vert _2^2}{\big \Vert u_m(x, t_j)\big \Vert _2^2} =\sum _{m=1}^M \sum _{j=1}^{T} \frac{\sum _{i=0}^N \big (c_{m, i}(t_j;\Theta ) -c_{m, i}(t_j)\big )^2}{\sum _{i=0}^N c_{m, i}(t_j)^2}. \end{aligned}$$
(22)

No derivative of \(\beta , x_0\) w.r.t. \(\Theta \) will be used and only the gradient of F in Eq. (4) w.r.t. \(\Theta \) appears. In this case, the simplified gradient exactly reflects the true gradient. Therefore, we can fit the coefficients \(c_i(t)\) and \(\beta (t), x_0(t)\) separately, and then use the simplified loss gradient to update the neural network parameters.
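The key implementation detail is sketched below for the one-dimensional case (helper names are ours): inside the reconstruction term of Eq. (21), the predicted \(\beta \) and \(x_0\) enter only through detached copies, while the penalty terms retain their dependence on \(\Theta \).

```python
import math
import torch

def hermite_functions(y, N):
    """Orthonormal Hermite functions hat{H}_0, ..., hat{H}_N evaluated at the tensor y."""
    H = [math.pi ** (-0.25) * torch.exp(-y ** 2 / 2)]
    if N >= 1:
        H.append(math.sqrt(2.0) * y * H[0])
    for n in range(1, N):
        H.append(math.sqrt(2.0 / (n + 1)) * y * H[n] - math.sqrt(n / (n + 1)) * H[n - 1])
    return torch.stack(H)                           # shape (N + 1, len(y))

def simplified_loss(c_pred, beta_pred, x0_pred, u_true, beta_true, x0_true, x, lam=0.1):
    """One (m, t_j) term of Eq. (21); x is a spatial grid and u_true = u_m(x, t_j)."""
    # Reconstruction term: beta and x0 enter only through detached copies, so
    # backpropagation does not differentiate through the basis functions.
    y = beta_pred.detach() * (x - x0_pred.detach())
    u_pred = c_pred @ hermite_functions(y, c_pred.numel() - 1)
    rec = ((u_pred - u_true) ** 2).sum() / (u_true ** 2).sum()   # discrete L2 norms
    # Penalty terms keep the dependence on Theta through beta_pred and x0_pred.
    pen = lam * (beta_pred - beta_true) ** 2 + lam * (x0_pred - x0_true) ** 2
    return rec + pen
```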

We use 100 solutions for training and another 50 solutions for testing with \(N=9, \Delta {t}=0.1, T=9, \lambda =0.1\). Each solution is generated from Eq. (19) with independently sampled parameters \(\xi _1, \xi _2, \xi _3\). A neural network with two hidden layers, 200 neurons in each layer, and the ELU activation function is used for training. Both training and testing data are taken from Eq. (19) with sampled parameters \(\xi _1\sim \mathcal {N}(3,\frac{1}{4}),\,\, \xi _2\sim \mathcal {U}(0, \frac{1}{2}),\,\, \xi _3\sim \mathcal {N}(0, \frac{1}{2})\).

Setting \(\lambda =0.1\), we first compare the two different loss functions Eqs. (11) and (21). After running 10 independent training processes using SGD, each containing 2000 epochs and using a learning rate \(\eta =0.0002\), the average relative \(L^2\) errors obtained when using the loss function Eq. (11) are larger than those obtained when using the loss function Eq. (21). This difference arises on both the training and testing sets, as shown in Fig. 2a.

Fig. 2 a Errors using the different loss functions Eqs. (11) and (21). b Average dynamics found from using Eqs. (11) and (21). c Errors with \(\lambda \) and \(\sigma \). d Errors on the testing set with random times \(t_i\sim \mathcal {U}(0, 1.5)\)

In Fig. 2b, we plot the average learned F (RHS in Eq. (1)) for a randomly selected sample at \(t=0\) in the testing set. The dynamics learned by using Eq. (21) are a little more accurate than those learned by using Eq. (11). Also, using the loss function Eq. (21) required only \(\sim 1\) hour of computational time, compared to 5 days when learning with Eq. (11) (on the 4-core i7-8550U laptop). Therefore, for efficiency and accuracy, we adopt the revised loss function Eq. (21) and separately fit the dynamics of the adaptive spectral parameters (\(\beta , x_0\)) and the dynamics of the spectral coefficients \(c_{i}\).

We also explore how network architecture and regularization affect the reconstructed dynamics. The results are shown in Appendix B, from which we observe that a wider and shallower neural network with 2 or 4 intermediate layers and 200 neurons in each layer yields the smallest errors on both the training and testing sets, as well as short run times. We also apply a ResNet [28] and the dropout technique [29, 30] to regularize the neural network structure. Dropout regularization does not reduce either the training error or the testing error, probably because, even with a feedforward neural network, the errors from our spatiotemporal DE learner on the training set are close to those on the testing set, so there is no overfitting issue. On the other hand, applying the ResNet technique leads to about a 20% decrease in errors. Results from using ResNets and dropout are shown in Appendix B.

Next, we investigate how noise in the observed data and changes in the adaptive parameter penalty coefficient \(\lambda \) in Eq. (21) impact the results. Noise is incorporated into simulated observational data as

$$\begin{aligned} u_{\xi }(x, t) = u(x, t) \big [1+ \xi (x, t)\big ], \end{aligned}$$
(23)

where u(x, t) is the solution to the parabolic equation Eq. (18) given by Eq. (19) and \(\xi (x, t)\sim \mathcal {N}(0, \sigma ^2)\) is Gaussian-distributed noise that is both spatially and temporally uncorrelated (i.e., \(\langle \xi (x, t)\xi (y, s)\rangle =\sigma ^2\delta _{x, y}\delta _{s, t}\)). The noise term is assumed to be independent for different samples. We use a neural network with 2 hidden layers, 200 neurons in each layer, to implement 10 independent training processes using SGD and a learning rate \(\eta =0.0002\), each containing 5000 epochs. Results are shown in Fig. 2c and further tabulated in Appendix C. For \(\sigma =0\), choosing an intermediate \(\lambda \in (10^{-1.5}, 10^{-1}]\) leads to the smallest errors and an optimal balance between learning the coefficients \(c_{i}\) and learning the dynamics of \(\beta , x_0\). When \(\sigma \) is increased to nonzero values (\(\sim 10^{-4} - 10^{-3}\)), a larger \(\lambda \sim 10^{-0.75}-10^{-0.5}\) is needed to keep errors small (see Fig. 2c and Appendix C). If the noise is further increased to, say, \(\sigma =10^{-2}\) (not shown in Fig. 2), an even larger \(\lambda \sim 10^{-0.5}\) is needed for training to converge. This behavior arises because the independent noise \(\xi (x, t)\sim \mathcal {N}(0, \sigma ^2)\) contributes more to high-frequency components in the spectral expansion. For training to converge, fitting the shape of the basis functions by learning \(\beta , x_0\) is more important than fitting noisy high-frequency components via learning \(c_{i}\). A larger \(\lambda \) puts more weight on learning the dynamics of \(\beta , x_0\) and the basis function shapes.
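For completeness, the multiplicative noise of Eq. (23) can be generated as in the short sketch below (illustrative only).

```python
import numpy as np

rng = np.random.default_rng(1)

def add_multiplicative_noise(u, sigma):
    """Eq. (23): u_xi = u (1 + xi), with xi ~ N(0, sigma^2) i.i.d. in x, t, and sample."""
    return u * (1.0 + sigma * rng.standard_normal(u.shape))

u_clean = np.ones((10, 100))        # placeholder solution sampled at (t_j, x_k)
u_noisy = add_multiplicative_noise(u_clean, sigma=1e-3)
```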

We also investigate how intrinsic noise in the parameters \(\xi _1, \xi _2, \xi _3\) affects the solution (Eq. (19)) and the accuracy of the learned DE. As shown in Appendix D, we find that if the intrinsic noise in \(\xi _1, \xi _2, \xi _3\) is increased, the training errors of the learned DE models also increase. However, compared to models trained on data with less intrinsic noise, training using noisier data leads to lower errors when the testing data are also noisy. Additionally, we explore how the number of solutions in the training set affects the predictions that the learned DE model makes given the initial conditions of solutions in the testing set. The results are also listed in Appendix D and show that larger numbers of training samples (solutions associated with different \((\xi _1, \xi _2, \xi _3)\) in Eq. (19)) lead to smaller relative \(L^2\) errors of the predicted solutions in both the training and testing sets.

Finally, we test whether the parameterized F (Eq. (10)) learned from the training set can extrapolate well beyond the training set sampling interval \(t\in [0, 0.9]\). To do this, we generate another 50 trajectories and sample each of them at random times \(t_i\sim \mathcal {U}(0, 1.5), i=1,...,9\). We then use models trained with \(\sigma =0\) and different \(\lambda \) for testing. As shown in Fig. 2d, our spatiotemporal DE learner can accurately extrapolate the solution to times beyond the training set sampling time intervals. We also observe that a stronger penalty on \(\beta \) and \(x_0\) (\(\lambda =10^{-0.5}\)) leads to better extrapolation results.

In the last example, we carry out a numerical experiment on learning the evolution of a Gaussian wave packet (which may depend on nonlocal interactions) across a two-dimensional unbounded domain \((x, k)\in \mathbb {R}^2\). We use this case to explore improving training efficiency by using a hyperbolic cross space to reduce the number of coefficients in multidimensional problems.

Example 3

We solve a 2D unbounded-domain problem of fitting a Gaussian wave packet’s evolution

$$\begin{aligned} f(x, k, t; \xi _1, \xi _2) = 2e^{-\frac{(x-\xi _1)^2}{2a^2}}\, e^{2b t (x-\xi _{1})(k-\xi _2)}\, e^{-2a^2(1+b^2t^2)(k-\xi _2)^2}, \end{aligned}$$
(24)

where \(\xi _1\) is the center of the wave packet and a is the minimum positional spread. If \(\xi _2=0\), the Gaussian wave packet defined in Eq. (24) solves the stationary zero-potential Wigner equation, an equation often used in quantum mechanics to describe the evolution of the Wigner quasi-distribution function [31, 32]. We set \(a=1\) and \(b=\frac{1}{2}\) in Eq. (24) and independently sample \(\xi _1, \xi _2\sim \mathcal {U}(-\frac{1}{2}, \frac{1}{2})\) to generate data. Here, the DE satisfied by the highly nonlinear Eq. (24) is unknown and potentially involves nonlocal convolution terms. In fact, there could be infinitely many DEs, including complicated nonlocal DEs, that describe the dynamics of Eq. (24). An example of such a nonlocal DE is

$$\begin{aligned} \begin{aligned} \partial _t f +&\, 2a^2b A[f; x,k, t]\,\partial _{x} f(x, k, t) = 0, \\ A[f; x,k, t] =&\, \frac{b t (x - B[f;x,k,t])}{2a^2(1+b^2 t^2)} + \sqrt{\frac{\log D[f; x, k, t] - C(t) - \log (f/2)}{a^2(1+b^2t^2)}}, \\ B[f; x, k, t] =&\, x - \sqrt{2a^2(1+b^2t^2)}\sqrt{2C(t) - 2\log D[f; x, k, t] +\log (f/2)}, \\ C(t) =&\, \frac{1}{2}\log \Big [\frac{\pi }{a^{2}(1+b^{2}t^{2})}\Big ], \\ D[f;x, k, t] =&\int f(x, y, t)e^{-2a^2(1+b^2t^2)(y-k)^2} \text {d}y. \end{aligned} \end{aligned}$$
(25)

We wish to learn the underlying dynamics using a parameterized F in Eq. (10). Since the Gaussian wave packet Eq. (24) is defined in the unbounded domain \(\mathbb {R}^2\), learning its evolution requires information over the entire domain. Thus, methods that depend on discretization of space are not applicable.

Our numerical experiment uses Eq. (24) as both training and testing data. We take \(\Delta {t}=0.1, t_j=j\Delta {t}, j=0,...,10\) and generate 100 solutions for training. For testing, we generate another 50 solutions, each with starting time \(t_0=0\) but with \(t_j\) taken from \(\mathcal {U}(0, 1), j=1,\dots , 10\). The parameters \(\xi _1, \xi _2\) in the solutions Eq. (24) are independently sampled for both the training set and the testing set. For this example, training with a ResNet results in diverging gradients, whereas a feedforward neural network yields convergent results, so we use a feedforward neural network with two hidden layers, 200 neurons in each hidden layer, and the ELU activation function. We train for 1000 epochs using SGD with momentum (SGDM), a learning rate \(\eta =0.001\), \(\text {momentum}=0.9\), and \(\text {weight decay}=0.005\). We use a spectral expansion in the form of a two-dimensional tensorial product of Hermite basis functions \(\hat{\mathcal {H}}_i\hat{\mathcal {H}}_{\ell }\)

$$\begin{aligned} f_N(x, k, t_j; \xi _1, \xi _2)= \sum _{i=0}^{14}\sum _{\ell =0}^{14} c_{i, \ell }(t_j)\hat{\mathcal {H}}_i(\beta ^1(x-x_0^1)) \hat{\mathcal {H}}_{\ell }(\beta ^2(k-k_0^2)) \end{aligned}$$
(26)

to approximate Eq. (24). We record the coefficients \(c_{i, \ell }\) as well as the scaling factors and displacements \(\beta ^{1}, \beta ^{2}, x_0^1, k_0^2\) at different \(t_j\) as the training data.

Because \((x, k)\in \mathbb {R}^2\) is two-dimensional, instead of a full tensor product, we can use a hyperbolic cross space for the spectral expansion to effectively reduce the total number of basis functions while preserving accuracy [22]. Similar to the use of sparse grids in the finite element method [33, 34], choosing basis functions in the space

$$\begin{aligned} \begin{aligned} V_{N, \gamma }^{\varvec{\beta }, \varvec{x}_0} :=&\text {span} \Big \{\hat{\mathcal {H}}_{n_1}(\beta ^1(x-x_0^1)) \hat{\mathcal {H}}_{n_2}(\beta ^2(k-k_0^2)): |\varvec{n}|_{\text {mix}}\Vert \varvec{n}\Vert _{\infty }^{-\gamma }\le N^{1-\gamma } \Big \},\\ \varvec{n} :=&(n_1, n_2),\, ~|\varvec{n}|_{\text {mix}} :=\max \{n_1, 1\}\max \{n_2, 1\} \end{aligned} \end{aligned}$$
(27)

can reduce the effective dimensionality of the problem. We explored hyperbolic cross spaces \(V_{N, \gamma }^{\varvec{\beta }, \varvec{x}_0}\) with different N and \(\gamma \). We use the loss function Eq. (21) with \(\lambda =\frac{1}{50}\) for training. The results are listed in Appendix E. To show how the loss function Eq. (21) depends on the coefficients \(c_{i, \ell }\) in Eq. (26), we plot saliency maps [35] for the quantity \(\frac{1}{10}\sum _{j=1}^{10} \Big |\frac{\partial \textrm{Loss}_j}{\partial c_{i, \ell }(0)}\Big |\), the absolute value of the partial derivative of the loss function Eq. (21) w.r.t. \(c_{i, \ell }(0)\) averaged over 10 training processes.
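The index set of Eq. (27) can be enumerated as in the sketch below (we use the convention \(\max (\Vert \varvec{n}\Vert _\infty , 1)\) to handle \(\varvec{n}=\varvec{0}\), which is our own choice); \(\gamma =-\infty \) recovers the full tensor product.

```python
import numpy as np

def hyperbolic_cross_indices(N, gamma):
    """Indices (n1, n2) of the basis functions retained in V_{N, gamma}, Eq. (27)."""
    kept = []
    for n1 in range(N + 1):
        for n2 in range(N + 1):
            if gamma == -np.inf:                 # full tensor product
                kept.append((n1, n2))
                continue
            mix = max(n1, 1) * max(n2, 1)        # |n|_mix
            sup = max(n1, n2, 1)                 # ||n||_inf, with n = 0 mapped to 1
            if mix * sup ** (-gamma) <= N ** (1 - gamma):
                kept.append((n1, n2))
    return kept

for gamma in (-np.inf, -1.0, 0.0, 0.5):
    print(gamma, len(hyperbolic_cross_indices(14, gamma)))
```

With this convention, for example, \(\gamma =0\) retains 70 of the \(15\times 15=225\) full tensor-product indices when \(N=14\).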

Fig. 3 a, b Mean relative \(L^2\) errors for \(N=14\) and \(\gamma =-\infty , -1, 0, 1/2\). c–f Saliency maps showing the mean absolute values of the partial derivative of the loss function w.r.t. \(\{c_{i, \ell }(0)\}\) for \(\gamma =-\infty , -1, 0, 1/2\)

As shown in Fig. 3a, b, using \(\gamma =-1, 0\) leads to errors similar to those of the full tensor product (\(\gamma =-\infty \)), but can greatly reduce the number of coefficients and improve training efficiency. Taking too large a value, \(\gamma =1/2\), leads to larger errors because useful coefficients are left out. From Fig. 3c–f, the dependence of the loss function on the coefficients \(c_{i, \ell }\) is largely invariant across hyperbolic cross spaces with different \(\gamma \). We find that an intermediate \(\gamma \in (-\infty , 1)\) (e.g., \(\gamma =-1, 0\)) can be used to maintain accuracy and reduce the number of inputs/outputs when reconstructing the dynamics of Eq. (24). Overall, the “curse of dimensionality” can be mitigated by adopting a hyperbolic cross space for the spectral representation.

Finally, in Appendix F, we consider source reconstruction in a heat equation. Our proposed spatiotemporal DE learning method achieves an average relative \(L^2\) error of \(\approx 0.1\) in the reconstructed source term. On the other hand, if all terms on the RHS of Eq. (1) except an unknown source (which does not depend on the solution) are known, the recently developed s-PINN method [18] achieves higher accuracy. However, if, in addition to the source term, other terms on the RHS of Eq. (1) are unknown, s-PINNs cannot be used, but our proposed spatiotemporal DE learning method remains applicable.

4 Conclusions

In this paper, we propose a spatiotemporal DE learning method that is well suited to learning spatiotemporal DEs from spectral expansion data of the underlying solution. Its main advantage is its applicability to learning both spatiotemporal PDEs and integro-differential equations in unbounded domains, while matching the performance of the most recent high-accuracy PDE learning methods that are applicable only to bounded domains. Moreover, our proposed method has the potential to deal with higher-dimensional problems if a proper hyperbolic cross space can be justified to effectively reduce the dimensionality.

In future investigations, we plan to apply our spatiotemporal DE learning method to many other inverse-type problems in physics with other appropriate basis functions in unbounded domains, such as mapped Jacobi functions that characterize algebraic decay at infinity [36,37,38], radial basis functions [39,40,41], or Laguerre functions on the semi-unbounded half-line \(\mathbb {R}^+\) [42,43,44]. A potentially interesting application is to learn the evolution of probability densities associated with anomalous diffusion [45] in an unbounded domain, which is often described by fractional derivatives or convolution terms in the corresponding F[u; x, t]. Finally, higher-dimensional problems remain challenging since the number of inputs (expansion coefficients) grows exponentially with spatial dimension, and the computational cost may not be sufficiently mitigated even by optimal hyperbolic cross space indices \(N, \gamma \) (see Eq. (27)). Two possible ways of addressing this issue appear promising. First, prior knowledge of the observed data can be used to reduce the dimension of the unknown dynamics to be learned, e.g., if we can determine an optimal hyperbolic cross space for the spectral expansion from data, we can effectively reduce the number of basis functions needed. Second, deep neural networks [46, 47], which can effectively handle a large number of inputs, could be adopted when the number of spectral expansion coefficients becomes large. Exploring these directions can further extend the applicability of our proposed spatiotemporal DE learning method to higher-dimensional problems.