Direct Estimation of Parameters in ODE Models Using WENDy: Weak-Form Estimation of Nonlinear Dynamics

We introduce the Weak-form Estimation of Nonlinear Dynamics (WENDy) method for estimating model parameters for nonlinear systems of ODEs. Without relying on any numerical differential equation solvers, WENDy computes accurate estimates and is robust to large (biologically relevant) levels of measurement noise. For low-dimensional systems with modest amounts of data, WENDy is competitive with conventional forward solver-based nonlinear least squares methods in terms of speed and accuracy. For both higher-dimensional systems and stiff systems, WENDy is typically both faster (often by orders of magnitude) and more accurate than forward solver-based approaches. The core mathematical idea involves an efficient conversion of the strong form representation of a model to its weak form, followed by solving a regression problem to perform parameter inference. The core statistical idea rests on the Errors-In-Variables framework, which necessitates the use of the iteratively reweighted least squares algorithm. Further improvements are obtained by using orthonormal test functions, created from a set of $C^{\infty}$ bump functions of varying support sizes. We demonstrate the high robustness and computational efficiency by applying WENDy to estimate parameters in some common models from population biology, neuroscience, and biochemistry, including logistic growth, Lotka-Volterra, FitzHugh-Nagumo, Hindmarsh-Rose, and a Protein Transduction Benchmark model. Software and code for reproducing the examples is available at https://github.com/MathBioCU/WENDy.


Introduction
Accurate estimation of parameters for a given model is central to modern scientific discovery. It is particularly important in the modeling of biological systems, which can involve both first principles-based and phenomenological models, and for which measurement errors can be substantial, often in excess of 20%. The dominant methodologies for parameter inference are either not capable of handling realistic errors, or are computationally costly, relying on forward solvers or Markov chain Monte Carlo methods. In this work, we propose an accurate, robust, and efficient weak form-based approach to parameter estimation. We demonstrate that our "Weak-form Estimation of Nonlinear Dynamics" (WENDy) method offers many advantages, including high accuracy, robustness to substantial noise, and computational efficiency gains of up to several orders of magnitude over existing methods.
In the remainder of this section, we provide an overview of modern parameter estimation methods for ODE systems, as well as a discussion of the literature that led to the WENDy idea. Section 2 contains the core weak-form estimation ideas as well as the WENDy algorithm itself. In Section 2.1, we introduce the idea of weak-form parameter estimation, including a simple algorithm to illustrate the idea. In Section 2.2, we describe the WENDy method in detail: we describe the Errors-In-Variables (EiV) framework and derive a Taylor expansion of the residual which allows us to formulate an Iteratively Reweighted Least Squares (IRLS) approach to inference. The EiV and IRLS modifications are important, as they offer significant improvements over the Ordinary Least Squares approach. In Section 2.3, we present a strategy for computing an orthogonal set of test functions that facilitate a successful weak-form implementation. In Section 3, we illustrate the performance of WENDy using five common mathematical models from the biological sciences, and in Section 4 we offer some concluding remarks.

Background
A ubiquitous version of the parameter estimation problem in the biological sciences is
$$\widehat{\mathbf{w}} := \arg\min_{\mathbf{w}} \left\| U - u(\mathbf{t};\mathbf{w}) \right\|_2^2, \tag{1}$$
where the function $u : \mathbb{R} \to \mathbb{R}^d$ is a solution to a differential equation model
$$\dot{u} = \sum_{j=1}^{J} w_j f_j(u). \tag{2}$$
The ODE system in (2) is parameterized by $\mathbf{w}^\star \in \mathbb{R}^J$, the vector of $J$ true parameters which are to be estimated by $\widehat{\mathbf{w}}$. The solution to the equation is then compared (in a least squares sense) with data $U \in \mathbb{R}^{(M+1)\times d}$ that is sampled at $M+1$ timepoints $\mathbf{t} := \{t_i\}_{i=0}^{M}$. We note that in this work, we restrict the differential equations to those with right sides that are linear combinations of the $f_j$ functions with coefficients $w_j$, as in equation (2).
Conventionally, the standard approach for parameter estimation has been forward solver-based nonlinear least squares (FSNLS). In that framework, 1) a candidate parameter vector is proposed, 2) the resulting equation is numerically solved on a computer, 3) the output is compared (via least squares) to data, and 4) this process is repeated until a convergence criterion is met. This is a mature field, and we direct the interested reader to references by Ljung [27,28] and, for those interested in a more theoretical perspective, to the monograph by Banks and Kunisch [1].
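The four-step FSNLS loop above can be sketched in a few lines. The logistic-growth model, the parameter values, the bounds, and the names `simulate` and `w_hat` are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def simulate(w, t, u0):
    """Step 2: numerically solve the candidate ODE du/dt = w1*u + w2*u^2."""
    sol = solve_ivp(lambda _, u: w[0]*u + w[1]*u**2, (t[0], t[-1]),
                    [u0], t_eval=t, rtol=1e-8, atol=1e-10)
    return sol.y[0]

# Synthetic noisy data from true parameters w* = (1, -1), u(0) = 0.1.
t = np.linspace(0.0, 10.0, 101)
u_true = simulate([1.0, -1.0], t, 0.1)
U = u_true + 0.02*np.random.default_rng(0).standard_normal(t.size)

# Steps 1, 3, 4: propose w, solve, compare to data, repeat to convergence.
fit = least_squares(lambda w: simulate(w, t, 0.1) - U,
                    x0=[0.5, -0.5], bounds=([0.1, -10.0], [10.0, -0.1]))
w_hat = fit.x
```

Note that every iterate of the optimizer triggers at least one full forward solve, which is the computational cost that WENDy avoids.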
The FSNLS methodology is very well understood, and its use is ubiquitous in the biological, medical, and bioengineering sciences. However, as models get larger and more realism is demanded of them, several important challenges remain without fully satisfying answers. For example, the accuracy of the solver can have a huge impact on parameter estimates; see [40] for an illustration with PDE models and [6] for an example with ODE and DDE models. There is no widespread convention for detecting this type of error, and the conventional strategy is simply to increase the solution accuracy (usually at significant computational cost) until the estimate stabilizes.
Perhaps more importantly, the choice of the initial candidate parameter vector can have a huge impact upon the final estimate, given that nonlinear least squares cost functions frequently have multiple local minima in differential equations applications.There are several algorithms designed to deal with the multi-modality, such as particle swarm optimization [5] and simulated annealing [57]; however, all come at the cost of additional forward solves and unclear dependence on the hyperparameters used in the solver and optimization algorithms.
Given the above, it is reasonable to consider alternatives to fitting via comparison of an approximate model solution with the measured data. A natural idea is to avoid forward solves altogether by substituting the data directly into the model equation (2). The derivative can be approximated by differentiating a projection of the data onto, e.g., orthogonal polynomials, and the parameters can then be estimated by minimizing the norm of the residual of equation (2), i.e., via a gradient-matching criterion. Indeed, Richard Bellman proposed exactly this strategy in 1969 [2]. There have been similar ideas in the literature of chemical and aerospace engineering, which can be traced back even further [18,44]. However, these methods are known to perform poorly in the presence of even modest noise.
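Bellman-style gradient matching can be sketched as follows; the logistic model, the Chebyshev degree, and the variable names are assumptions for illustration. On noise-free data the approach works well, which is precisely why its breakdown under noise (noted above) motivated the later methods:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 101)
u = 0.1*np.exp(t)/(1.0 + 0.1*(np.exp(t) - 1.0))   # logistic data, w* = (1, -1)

# Differentiate a polynomial projection of the (here noise-free) data.
p = np.polynomial.chebyshev.Chebyshev.fit(t, u, deg=24)
du = p.deriv()(t)

# Gradient matching: minimize || du - (w1*u + w2*u^2) ||_2 over (w1, w2).
A = np.column_stack([u, u**2])
w_hat = np.linalg.lstsq(A, du, rcond=None)[0]
```

With noisy data the differentiation step amplifies the noise, and the estimate degrades rapidly.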
To account for the noise in the measurements while estimating the parameters (and in some cases the state trajectories), researchers have proposed a variety of different non-solver-based methods. The most popular modern approaches involve denoising the measured state via Gaussian processes [10,33,60,62,64] or collocation methods projecting onto a polynomial or spline basis [7,26,45,47,58,65]. For example, Yang et al. [64] restricted a Gaussian process to the manifold of solutions to an ODE to infer both the parameters and the state using a Hamiltonian Markov chain Monte Carlo method. Ramsay et al. [47] proposed a collocation-type method in which the solution is projected onto a spline basis. In a two-step procedure, both the basis weights and the unknown parameters are iteratively estimated.
The minimization identifies the states and the parameters by penalizing poor faithfulness to the model equation (i.e., gradient matching) and deviations too far from the measured data. Liang and Wu [26] proposed a similar strategy based on local polynomial smoothing: first estimate the state and its derivative from the smoothed data, and then estimate the parameters. Ding and Wu later improved upon this work in [13] by using local polynomial regression instead of the pseudo-least squares estimator used in [26].
There are also a few approaches that transform the equations with operators that allow efficiently solving for the parameters. In particular, Xu and Khanmohamadi created smoothing and derivative operators based on Fourier theory [63] and Chebyshev operators [23]. However, these approaches have not proven to be as influential as the integral and weak-form methods described in the next subsection.

Integral and Weak Form Methods
Recent efforts by our group and others suggest that there is a considerable advantage in parameter estimation performance to be gained from using an integral-based transform of the model equations. The two main approaches are to 1) use integral forms of the model equation or 2) convolve the equation with a compactly supported test function to obtain the so-called "weak form" of the equation. The weak form idea can be traced back to Laurent Schwartz's Theory of Distributions [53], which recasts the classical notion of a function acting on a point to one acting on a measurement structure or "test function". In the context of differential equation models, Lax and Milgram pioneered the use of the weak form for relaxing smoothness requirements on unique solutions to parabolic PDE systems in Hilbert spaces [25]. Since then, the weak form has been heavily used in studying solutions to PDEs, as well as in numerically computing those solutions (e.g., via the Finite Element Method), but not with the goal of directly estimating parameters.
The idea of weak-form based estimation has been repeatedly discovered over the years (see [46] for a good historical overview). Briefly, in 1954, Shinbrot created a proto-weak-form parameter inference method called the Equations Of Motion (EOM) method [55]. In it, he proposes to multiply the model equations by so-called method functions, i.e., what we would now call test functions. These test functions were based on $\sin^n(\nu t)$ for different values of $\nu$ and $n$. In 1965, Loeb and Cahen [29,30] independently discovered the same method, calling it the Modulating Function (MF) method. They proposed and advocated for the use of polynomial test functions. The issue with these approaches (and indeed all subsequent developments based on them) is that the maximum power $n$ is chosen to exactly match the number of derivatives needed to perform integration by parts (IBP). As we have shown, this choice means that these methods are not nearly as effective as they could be. As we initially reported in [35], a critical step in obtaining robust and accurate parameter estimation is to use highly smooth test functions, e.g., to have $n$ be substantially higher than the minimum needed for the IBP. This insight led to our use of the $C^\infty$ bump functions in WENDy (see Section 2.3).
In the statistics literature, there are several examples of using integral or weak-form equations. Dattner et al. [12] illustrate an integral-based approach, and Dattner's 2021 review [11] provides a good overview of other efforts to use the integral form for parameter estimation. Concerning the weak form, several researchers have used it as a core part of their estimation methods [8,50]. Unlike WENDy, however, these approaches either smooth the data before substitution into the model equation (which can lead to poor performance) or still require forward solves. As with the EOM and MF methods above, the test functions in these methods were also chosen with insufficient smoothness to yield the highly robust parameter estimates we obtain with WENDy.
As the field of SINDy-based equation learning [9] is built upon direct parameter estimation methods, there are also several relevant contributions from this literature. Schaeffer and McCalla [51] showed that parameter estimation and learning an integral form of equations can be done in the presence of significant noise. Broadly speaking, however, the consensus has emerged that the weak form is more effective than a straightforward integral representation. In particular, several groups (including ours) independently proposed weak form-based approaches [19,34,35,43,61].
The weak form is now even implemented in the PySINDy package [22], which is actively developed by the authors of the original SINDy papers [9,49]. However, we note that the weak-form SINDy in PySINDy is based on an early weak form implementation (proposed in [19,48]). A more recent implementation with autotuned hyperparameters can be found at https://github.com/MathBioCU/WSINDy_ODE for ODEs [35] and https://github.com/MathBioCU/WSINDy_PDE for PDEs [34].
While our group was not the first to propose a weak form methodology, we have pioneered its use for equation learning in a wide range of model structures and applications, including ODEs [35], PDEs [34], interacting particle systems of the first [37] and second [39] order, and online streaming [38]. We have also studied and advanced the computational method itself. Among other contributions, we were the first to automate (with mathematical justification) test function hyperparameter specification and feature matrix rescaling (to ensure stable computations), and to filter high frequency noise [34]. Lastly, we have also studied the theoretical convergence properties of WSINDy in the continuum data limit [36]. Among the results is a description of a broad class of models for which the asymptotic limit of continuum data can overcome any noise level to yield both an accurately learned equation and a correct parameter estimate (see [36] for more information).

Weak form Estimation of Nonlinear Dynamics (WENDy)
In this work, we assume that the exact form of a differential equation-based mathematical model is known, but that the precise values of the constituent parameters are to be estimated using existing data. As the model equation is not being learned, this differs from the WSINDy methodology and, importantly, does not use sparse regression. We thus denote the method presented in this paper as the Weak-form Estimation of Nonlinear Dynamics (WENDy) method.
In Section 2.1, we start with an introduction to the idea of weak-form parameter estimation in a simple OLS setting. In Section 2.2, we describe the WENDy algorithm in detail, along with several strategies for improving the accuracy: in Section 2.3, we describe a strategy for optimal test function selection, and in Section 2.4, a strategy for improved iteration termination criteria.

Weak-form Estimation with Ordinary Least Squares
We begin by considering a $d$-dimensional matrix form of (2), i.e., an ordinary differential equation system model
$$\dot{u} = \Theta(u)\,W, \qquad \Theta(u) := [f_1(u), \ldots, f_J(u)], \tag{3}$$
with row vector of the $d$ solution states $u(t) \in \mathbb{R}^{1\times d}$, where $f_j : \mathbb{R}^d \to \mathbb{R}$, and the matrix of unknown parameters $W \in \mathbb{R}^{J\times d}$. We consider a $C^\infty$ test function $\phi$ compactly supported in the time interval $[0,T]$ (i.e., $\phi \in C_c^\infty([0,T])$), multiply both sides of (3) by $\phi$, and integrate over $0$ to $T$. Via integration by parts we obtain
$$\phi(T)u(T) - \phi(0)u(0) - \int_0^T \dot{\phi}(t)\,u(t)\,dt = \int_0^T \phi(t)\,\Theta(u(t))\,W\,dt.$$
As the compact support of $\phi$ implies that $\phi(0) = \phi(T) = 0$, this yields a transform of (3) into
$$-\int_0^T \dot{\phi}(t)\,u(t)\,dt = \int_0^T \phi(t)\,\Theta(u(t))\,W\,dt. \tag{4}$$
This weak form of the equation allows us to define a novel methodology for estimating the entries in $W$.
Observations of the states of this system are (in this paper) assumed to occur at a discrete set of $M+1$ timepoints $\{t_m\}_{m=0}^{M}$ with uniform stepsize $\Delta t$. The test functions are thus centered at a subsequence of $K$ timepoints $\{t_{m_k}\}_{k=1}^{K}$.
We choose the test function support to be centered at a timepoint $t_{m_k}$ with radius $m_t\Delta t$, where $m_t$ is an integer (to be chosen later). Bold variables denote evaluation at or dependence on the chosen timepoints, e.g., $\mathbf{u} := [u(t_0)^T, \ldots, u(t_M)^T]^T$. Approximating the integrals in (4) using a Newton-Cotes quadrature yields
$$-\dot{\Phi}\,\mathbf{u} \approx \Phi\,\Theta(\mathbf{u})\,W, \tag{5}$$
where the rows of $\Phi$ and $\dot{\Phi}$ hold the quadrature-weighted values of $\phi_k$ and $\dot{\phi}_k$ on the timegrid, and $\phi_k$ is a test function centered at timepoint $t_{m_k}$. To account for proper scaling, in computations we normalize each test function so that $\|\phi_k\|_2 = 1$. The $Q$ matrix contains the quadrature weights on the diagonal. In this work we use the composite Trapezoidal rule, for which the matrix is
$$Q = \Delta t\,\mathrm{diag}\left(\tfrac{1}{2}, 1, \ldots, 1, \tfrac{1}{2}\right).$$
We defer full consideration of the integration error until Section 2.3, but note that in the case of a non-uniform timegrid, $Q$ would simply be adapted with the correct stepsizes and quadrature weights.
The core idea of weak-form-based direct parameter estimation is to identify $W$ as a least squares solution to
$$\widehat{W} := \arg\min_{W} \left\| \mathrm{vec}\!\left(\dot{\Phi}\,U + \Phi\,\Theta(U)\,W\right) \right\|_2^2, \tag{6}$$
where "vec" vectorizes a matrix, $U$ represents the data, and the integration matrices are
$$\Phi := [\phi_k(t_m)]_{k,m}\,Q, \qquad \dot{\Phi} := [\dot{\phi}_k(t_m)]_{k,m}\,Q.$$
The ordinary least squares (OLS) solution to (6) is presented in Algorithm 1. We note that we have written the algorithm this way to promote clarity concerning the weak-form estimation idea. For actual implementation, we create a different $\Theta_i$ for each variable $i = 1, \ldots, d$ and use regression for state $i$ to solve for a vector $\mathbf{w}_i$ of parameters (instead of a matrix of parameters $W$, which can contain values known to be zero). To increase computational efficiency, we remove any redundancies and use sparse computations whenever possible.
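A minimal sketch of the weak-form OLS idea (in the spirit of Algorithm 1, not a reproduction of it) for the scalar logistic model follows. The radius, the spacing of the test-function centers, and the grid size are arbitrary choices for illustration:

```python
import numpy as np

M, T = 200, 10.0
t = np.linspace(0.0, T, M + 1)
dt = T/M
u = 0.1*np.exp(t)/(1.0 + 0.1*(np.exp(t) - 1.0))    # logistic, w* = (1, -1)

def bump(x, a, eta=9.0):
    """Unnormalized C-infinity bump supported on (-a, a)."""
    out = np.zeros_like(x)
    m = np.abs(x) < a
    out[m] = np.exp(-eta/(1.0 - (x[m]/a)**2))
    return out

def dbump(x, a, eta=9.0):
    """Analytic derivative of bump (avoids numerical differentiation)."""
    out = np.zeros_like(x)
    m = np.abs(x) < a
    s = (x[m]/a)**2
    out[m] = np.exp(-eta/(1.0 - s))*(-eta)*(2.0*x[m]/a**2)/(1.0 - s)**2
    return out

mt = 30                                            # radius in gridpoints
centers = np.arange(mt, M - mt + 1, 10)
Phi = np.stack([bump(t - t[c], mt*dt) for c in centers])
Phip = np.stack([dbump(t - t[c], mt*dt) for c in centers])
nrm = np.linalg.norm(Phi, axis=1, keepdims=True)   # enforce ||phi_k|| = 1
Phi, Phip = Phi/nrm, Phip/nrm

Q = dt*np.ones(M + 1); Q[0] = Q[-1] = dt/2         # trapezoidal weights
Theta = np.column_stack([u, u**2])                 # features f_j(u)

G = (Phi*Q) @ Theta                                # weak-form design matrix
b = -(Phip*Q) @ u                                  # weak-form response
w_hat = np.linalg.lstsq(G, b, rcond=None)[0]
```

Note that no derivative of the data is ever taken; only the test functions are differentiated.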
The OLS solution has respectable performance in some cases, but in general there is a clear need for improvement. In particular, we note that (6) is not a standard least squares problem: the (likely noisy) observations of the state $u$ appear on both sides of (5). In statistics, this is known as an Errors-In-Variables (EiV) problem. While a full and rigorous analysis of the statistical properties of weak-form estimation is beyond the scope of this article, here we present several formal derivations aimed at improving the accuracy of parameter estimation.
These improvements are critical, as the OLS approach is not reliably accurate. Accordingly, we define WENDy (in the next section) as a weak-form parameter estimation method which uses techniques that address the EiV challenges.

WENDy: Weak-form estimation using Iterative Reweighting
In this subsection, we acknowledge that the regression problem does not fit within the framework of ordinary least squares (see Figure 4) and is actually an Errors-In-Variables problem. We now derive a linearization that yields insight into the covariance structure of the problem. First, we denote the vector of true (but unknown) parameter values used in all state variable equations as $\mathbf{w}^\star$, and let $u^\star := u(t;\mathbf{w}^\star)$ and $\Theta^\star := \Theta(\mathbf{u}^\star)$. We also assume that measurements of the system are noisy, so that at each timepoint $t$ all states are observed with additive noise
$$U(t) = u^\star(t) + \varepsilon(t), \tag{7}$$
where each element of $\varepsilon(t)$ is i.i.d. $\mathcal{N}(0,\sigma^2)$. Lastly, we note that there are $d$ variables, $J$ feature terms, and $M+1$ timepoints. In what follows, we present the expansion using Kronecker products (denoted by $\otimes$).
We begin by considering the sampled data $U := \mathbf{u}^\star + \boldsymbol{\varepsilon} \in \mathbb{R}^{(M+1)\times d}$ and the vector of parameters to be identified $\mathbf{w} \in \mathbb{R}^{Jd}$. We use bolded variables to represent evaluation on the timegrid $\mathbf{t}$, and superscript $\star$ to denote quantities based on the true (noise-free) parameters or states. We now consider the residual
$$r(U,\mathbf{w}) := G\mathbf{w} - b, \tag{8}$$
where we redefine $G := I_d \otimes \Phi\,\Theta(U)$ and $b := -\mathrm{vec}(\dot{\Phi}U)$ to accommodate the vectorized multi-state system. We then note that we can decompose the residual into several components,
$$r(U,\mathbf{w}) = r_0 + e_{\mathrm{int}} + e_{\Theta} - b^{\boldsymbol{\varepsilon}}.$$
Here, $r_0$ is the residual without measurement noise or integration errors, and $e_{\mathrm{int}}$ is the numerical integration error induced by the quadrature (analyzed in Section 2.3).
Let us further consider the leftover terms $e_{\Theta} - b^{\boldsymbol{\varepsilon}}$ and take a Taylor expansion around the data $U$,
$$e_{\Theta} - b^{\boldsymbol{\varepsilon}} = L^{\mathbf{w}}\,\mathrm{vec}(\boldsymbol{\varepsilon}) + h(U,\mathbf{w},\boldsymbol{\varepsilon}), \tag{9}$$
where $h(U,\mathbf{w},\boldsymbol{\varepsilon})$ is a vector-valued function of higher order terms in the measurement errors $\boldsymbol{\varepsilon}$ (including the Hessian as well as higher order derivatives). Note that the $h$ function will generally produce a bias and higher-order dependencies for all systems where $\nabla^2\Theta \neq 0$, but vanishes when $\boldsymbol{\varepsilon} = 0$.
The first order matrix $L^{\mathbf{w}}$ in the expansion (9) can be assembled blockwise: writing $W = \mathrm{mat}(\mathbf{w})$, its $(i,l)$ block (for state indices $1 \le i,l \le d$) is
$$L^{\mathbf{w}}_{il} = \Phi\,\mathrm{diag}\Big(\sum_{j=1}^{J} W_{ji}\,\partial_{u_l} f_j(U_m)\Big)_{m=0}^{M} + \delta_{il}\,\dot{\Phi},$$
where "mat" is the matricization operation, the matrix $\nabla\Theta$ contains the derivatives of the features $\partial_{u_l} f_j$, and $U_m$ is the row vector of solution states at $t_m$. When working with the transposed ordering of the noise, a permutation matrix $P$ satisfying $P\,\mathrm{vec}(\boldsymbol{\varepsilon}) = \mathrm{vec}(\boldsymbol{\varepsilon}^T)$ reorders the noise vector accordingly.
As mentioned above, we assume that all elements of $\boldsymbol{\varepsilon}$ are i.i.d. Gaussian, i.e., $\mathcal{N}(0,\sigma^2)$, and thus to first order the residual is characterized by
$$r(U,\mathbf{w}) \sim \mathcal{N}\big(r_0 + e_{\mathrm{int}},\ \sigma^2 L^{\mathbf{w}}(L^{\mathbf{w}})^T\big). \tag{10}$$
In the case where $\mathbf{w} = \mathbf{w}^\star$ and the integration error is negligible, (10) simplifies to
$$r(U,\mathbf{w}^\star) \sim \mathcal{N}\big(0,\ \sigma^2 L^{\mathbf{w}^\star}(L^{\mathbf{w}^\star})^T\big). \tag{11}$$
We note that in (11) (and in (10)) the covariance is dependent upon the parameter vector $\mathbf{w}$. In the statistical inference literature, the Iteratively Reweighted Least Squares (IRLS) [21] method offers a strategy to account for a parameter-dependent covariance by iterating between solving for $\mathbf{w}$ and updating the covariance matrix $C$. Furthermore, while the normality in (11) is approximate, the weighted least squares estimator has been shown to be consistent under fairly general conditions even without normality [4]. In Algorithm 2 we present the WENDy method: the covariance $C^{(n)}$ (at the $n$-th iteration step) is updated in lines 7-8, and the new parameters $\mathbf{w}^{(n+1)}$ are computed in line 9 by weighted least squares.
The IRLS step in line 9 requires inverting $C^{(n)}$, which is done by computing its Cholesky factorization and then applying the inverse to $G$ and $b$. Since this inversion may be unstable, we allow for possible regularization of $C^{(n)}$ in line 8 via a convex combination between the analytical first-order covariance $L^{(n)}(L^{(n)})^T$ and the identity, controlled by the covariance relaxation parameter $\alpha$. This regularization allows the user to interpolate between the OLS solution ($\alpha = 1$) and the unregularized IRLS solution ($\alpha = 0$). In this way, WENDy extends and encapsulates Algorithm 1.
However, in the numerical examples below, we simply set $\alpha = 10^{-10}$ throughout, as the aforementioned instability was not an issue. Lastly, any iterative scheme needs a stopping criterion, and we defer discussion of ours until Section 2.4.
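The EiV linearization and the IRLS update can be sketched for the scalar ($d = 1$) logistic model as follows. Here `L` implements the first-order noise sensitivity specialized to one state (the single block of $L^{\mathbf{w}}$), and the small ridge term plays the role of the covariance relaxation $\alpha$; all sizes, tolerances, and names are illustrative:

```python
import numpy as np

M, T, mt = 200, 10.0, 30
t = np.linspace(0.0, T, M + 1); dt = T/M
u = 0.1*np.exp(t)/(1.0 + 0.1*(np.exp(t) - 1.0))    # logistic, w* = (1, -1)
U = u + 0.02*np.random.default_rng(1).standard_normal(u.size)  # noisy data

def bump(x, a, eta=9.0):
    out = np.zeros_like(x); m = np.abs(x) < a
    out[m] = np.exp(-eta/(1.0 - (x[m]/a)**2))
    return out

def dbump(x, a, eta=9.0):
    out = np.zeros_like(x); m = np.abs(x) < a
    s = (x[m]/a)**2
    out[m] = np.exp(-eta/(1.0 - s))*(-eta)*(2.0*x[m]/a**2)/(1.0 - s)**2
    return out

centers = np.arange(mt, M - mt + 1, 10)
Phi = np.stack([bump(t - t[c], mt*dt) for c in centers])
Phip = np.stack([dbump(t - t[c], mt*dt) for c in centers])
nrm = np.linalg.norm(Phi, axis=1, keepdims=True)
Q = dt*np.ones(M + 1); Q[0] = Q[-1] = dt/2
PhiQ, PhipQ = (Phi/nrm)*Q, (Phip/nrm)*Q

G = PhiQ @ np.column_stack([U, U**2])              # built from noisy data
b = -(PhipQ @ U)
w = np.linalg.lstsq(G, b, rcond=None)[0]           # OLS initialization

for _ in range(10):
    # First-order sensitivity of the residual to the noise (d = 1):
    # d(theta)/du = w1 + 2*w2*u for theta = w1*u + w2*u^2.
    L = PhiQ*(w[0] + 2.0*w[1]*U) + PhipQ
    C = L @ L.T + 1e-10*np.eye(L.shape[0])         # relaxed covariance
    F = np.linalg.cholesky(C)
    Gt, bt = np.linalg.solve(F, G), np.linalg.solve(F, b)
    w = np.linalg.lstsq(Gt, bt, rcond=None)[0]     # weighted least squares
```

The whitening step uses the Cholesky factor of $C$ so that the normal equations of the weighted problem are $G^T C^{-1} G\,\mathbf{w} = G^T C^{-1} b$, exactly as in line 9 of Algorithm 2.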
The outputs of Algorithm 2 include the estimated parameters $\widehat{\mathbf{w}}$ as well as the covariance $C$ of the response vector $b$, such that approximately $b \sim \mathcal{N}(G\widehat{\mathbf{w}}, \sigma^2 C)$.
A primary benefit of the WENDy methodology is that the parameter covariance matrix $S$ can be estimated from $C$ using
$$S := \widehat{\sigma}^2\,\big(G^T C^{-1} G\big)^{-1}. \tag{12}$$
This yields the variances of the individual components of $\widehat{\mathbf{w}}$ along $\mathrm{diag}(S)$, as well as the correlations between elements of $\widehat{\mathbf{w}}$ in the off-diagonals of $S$. Here $\widehat{\sigma}^2$ is an estimate of the measurement variance $\sigma^2$, which we compute by convolving each compartment of the data $U$ with a high-order filter $f$ and taking the Frobenius norm of the resulting convolved data matrix $f * U$. Throughout, we set $f$ to be the centered finite difference weights of order 6 over 15 equally-spaced points (computed using [17]), so that $f$ has order 5. The filter $f$ is then normalized to have unit 2-norm. This yields a high-accuracy approximation of $\sigma^2$ for underlying data $\mathbf{u}$ that is locally well-approximated by polynomials up to degree 5.
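The variance estimate can be sketched with a shorter filter than the paper's 15-point stencil: any unit-norm filter that annihilates low-degree polynomials leaves approximately pure noise, whose root mean square estimates $\sigma$. The 7-point sixth-difference filter below is our substitution for illustration, not the paper's exact choice:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 501)
u = 0.1*np.exp(t)/(1.0 + 0.1*(np.exp(t) - 1.0))    # smooth "true" signal
sigma = 0.05
U = u + sigma*np.random.default_rng(2).standard_normal(u.size)

# Sixth-difference coefficients annihilate polynomials up to degree 5,
# so f * U is (to high accuracy) filtered noise with variance
# sigma^2 * ||f||^2 = sigma^2 after unit 2-norm normalization.
f = np.array([1.0, -6.0, 15.0, -20.0, 15.0, -6.0, 1.0])
f /= np.linalg.norm(f)
resid = np.convolve(U, f, mode='valid')
sigma_hat = np.sqrt(np.mean(resid**2))
```

The smooth signal contributes only at the scale of its sixth derivative times $\Delta t^6$, which is negligible here.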

Choice of Test Functions
When using WENDy for parameter estimation, a valid question concerns the choice of test functions. This is particularly challenging in the sparse data regime, where integration errors can easily affect parameter estimates. In [35], we reported that using higher order polynomials as test functions yielded more accuracy (up to machine precision).
Inspired by this result and to render moot the question of what order polynomial is needed, we have developed a 2-step process for offline computation of highly efficient test functions, given a timegrid t.
We first derive an estimator of the integration error that can be computed using the noisy data $U$ and used to detect a minimal radius $\underline{m}_t$ such that $m_t > \underline{m}_t$ leads to negligible integration error compared to the errors introduced by random noise. Inspired by wavelet decompositions, we next row-concatenate convolution matrices of test functions at the dyadic radii $2^{\ell}\,\underline{m}_t$ for $\ell = 0, \ldots, \bar{\ell}$. An SVD of this tall matrix yields an orthonormal test function matrix $\Phi$, which maximally extracts information across the different scales. We note that in the later examples we have $\bar{\ell} = 3$, which in many cases leads to a largest test function support covering half of the time domain.
To begin, we consider a $C^\infty$ bump function
$$\psi(t; a) := C \exp\left(-\frac{\eta}{\left[1 - (t/a)^2\right]_+}\right), \tag{13}$$
where the constant $C$ enforces that $\|\psi\|_2 = 1$, $\eta$ is a shape parameter, and $[\bullet]_+ := \max(\bullet, 0)$, so that $\psi(t; a)$ is supported only on $[-a, a]$, where $a = m_t\Delta t$.
With the $\psi$ in (13), we have discovered that the accuracy of the parameter estimates is relatively insensitive to a wide range of $\eta$ values. Therefore, based on empirical investigation, we fix $\eta = 9$ in all examples and defer more extensive analysis to future work. In the rest of this section, we describe the computation of $\underline{m}_t$ and how to use $\psi$ to construct $\Phi$ and $\dot{\Phi}$.
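The bump function (13) in code, with the normalization constant $C$ imposed discretely; the grid and radius are arbitrary:

```python
import numpy as np

def psi(x, a, eta=9.0):
    """Bump of (13); C is imposed by discrete 2-norm normalization."""
    s = np.zeros_like(x)
    inside = np.abs(x) < a
    s[inside] = np.exp(-eta/(1.0 - (x[inside]/a)**2))
    return s/np.linalg.norm(s)

x = np.linspace(-2.0, 2.0, 401)
p = psi(x, 1.0)          # unit-norm bump supported on (-1, 1)
```

All derivatives of $\psi$ vanish at $\pm a$, which is what makes the integration by parts in (4) exact with no boundary terms.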

Minimum radius selection
In (9), it is clear that reducing the numerical integration error $e_{\mathrm{int}}$ will improve the estimation accuracy. Figure 1 illustrates, for the Logistic Growth model, how the relative error changes as a function of the test function radius $m_t$ (for different noise levels). As the radius increases, the error becomes dominated by the measurement noise. To establish a lower bound $\underline{m}_t$ on the test function radius $m_t$, we create an estimate of the integration error which works for any of the $d$ variables in a model. To promote clarity, we let $u$ be any of the $d$ variables for the remainder of this section. However, it is important to note that the final $\widehat{e}_{\mathrm{rms}}$ sums over all $d$ variables.
We now consider the $k$-th element of $e_{\mathrm{int}}$,
$$e_{\mathrm{int}}^{k} := \sum_{m=0}^{M} q_m \frac{d}{dt}\big(\phi_k u^\star\big)(t_m) - \int_0^T \frac{d}{dt}\big(\phi_k(t)\,u^\star(t)\big)\,dt, \tag{14}$$
where $q_m$ are the quadrature weights and $\Delta t = T/M$ for a uniform timegrid $\mathbf{t} = (0, \Delta t, 2\Delta t, \ldots, M\Delta t)$ with overall length $T$. We also note that the biggest benefit of this approach is that $e_{\mathrm{int}}$ does not explicitly depend upon $\mathbf{w}^\star$.
By expanding $\frac{d}{dt}(\phi_k(t)u^\star(t))$ into its Fourier series, we then have
$$e_{\mathrm{int}}^{k} = \sum_{n \neq 0} \mathcal{F}_{nM}\!\left[\frac{d}{dt}\big(\phi_k u^\star\big)\right], \tag{15}$$
i.e., only the aliased Fourier modes (integer multiples of $M$) survive the quadrature sum. Since $\phi_k$ is a dilation of the unit-radius bump, increasing $a$ corresponds to higher-order Fourier coefficients of $\phi_k(\bullet\,; 1)$ entering the error formula (15), which shows that increasing $a$ (eventually) lowers the integration error. For small $m_t$, this leads to the integration error $e_{\mathrm{int}}$ dominating the noise-related errors, while for large $m_t$, the noise-related effects are dominant.
We now derive a surrogate approximation of $e_{\mathrm{int}}$ using the noisy data $U$ to estimate this transition from integration error-dominated to noise error-dominated residuals. From the noisy data $U$ on the timegrid $\mathbf{t}$, we wish to compute $e_{\mathrm{int}}(u^\star, \phi_k, M)$ by substituting $U$ for $u^\star$ and using the discrete Fourier transform (DFT); however, the highest mode we have access to is $\mathcal{F}_{\pm M/2}[\phi_k U]$ (here $\mathcal{F}_n[\cdot]$ denotes the $n$th discrete Fourier mode over the periodic grid $(m\Delta t)_{m=0}^{M}$). On the other hand, we are able to approximate $e_{\mathrm{int}}(u^\star, \phi_k, \lceil M/s\rceil)$ from $U$, that is, the integration error over a coarsened timegrid $(0, \widetilde{\Delta t}, 2\widetilde{\Delta t}, \ldots, \lceil M/s\rceil\,\widetilde{\Delta t})$, where $\widetilde{\Delta t} = T/\lceil M/s\rceil$ and $s > 2$ is a chosen coarsening factor. Introducing the truncated error formula $\overline{e}_{\mathrm{int}}$, which retains only the lowest surviving aliased modes of (15), we have that $\overline{e}_{\mathrm{int}}(u^\star, \phi_k, \lceil M/s\rceil, s) \approx e_{\mathrm{int}}(u^\star, \phi_k, \lceil M/s\rceil)$, and $\overline{e}_{\mathrm{int}}$ can be directly evaluated at $U$ using the DFT. In particular, with $2 < s < 4$, the truncated formula reduces (up to a real prefactor depending on the mode number and DFT convention) to $\mathrm{Im}\big\{\mathcal{F}_{\lceil M/s\rceil}[\phi_k U]\big\}$, where $\mathrm{Im}\{z\}$ denotes the imaginary part of $z \in \mathbb{C}$, so that only a single Fourier mode needs computation. In most practical cases of interest, this leads to (see Figure 2)
$$\overline{e}_{\mathrm{int}}(U, \phi_k, \lceil M/s\rceil, s) \gtrsim e_{\mathrm{int}}(\mathbf{u}, \phi_k, M), \tag{16}$$
so that ensuring $\overline{e}_{\mathrm{int}}(U, \phi_k, \lceil M/s\rceil, s)$ is below some tolerance $\tau$ also ensures $e_{\mathrm{int}}(\mathbf{u}, \phi_k, M) < \tau$.
Statistically, under our additive noise model, $\overline{e}_{\mathrm{int}}(U, \phi_k, \lceil M/s\rceil, s)$ is an unbiased estimator of $\overline{e}_{\mathrm{int}}(u^\star, \phi_k, \lceil M/s\rceil, s)$, i.e.,
$$\mathbb{E}\big[\overline{e}_{\mathrm{int}}(U, \phi_k, \lceil M/s\rceil, s)\big] = \overline{e}_{\mathrm{int}}(u^\star, \phi_k, \lceil M/s\rceil, s),$$
where $\mathbb{E}$ denotes expectation. Since $\overline{e}_{\mathrm{int}}$ is linear in the data, its variance (for $2 < s < 4$) admits an upper bound proportional to $\sigma^2$ that follows from $\|\phi_k\|_2 = 1$ and, importantly, is not sensitive to the radius of the test function $\phi_k$.
We pick the radius $\underline{m}_t$ as a changepoint of $\log(\widehat{e}_{\mathrm{rms}})$, where $\widehat{e}_{\mathrm{rms}}$ is the root-mean-square of $\overline{e}_{\mathrm{int}}(U^{(i)}, \phi_k, \lceil M/s\rceil, s)$ over all test functions placed along the timeseries and over all variables, with $U^{(i)}$ denoting the $i$th variable in the system. Figure 2 depicts $\widehat{e}_{\mathrm{rms}}$ as a function of the support radius $m_t$. As can be seen, since the variance of $\overline{e}_{\mathrm{int}}$ is insensitive to the radius $m_t$, the estimator is approximately flat over the region with negligible integration error, a perfect setting for changepoint detection. Crucially, Figure 2 demonstrates that, in practice, the minimum radius $\underline{m}_t$ lies to the right of the changepoint of the coefficient errors as a function of $m_t$. Lastly, note that the red × in Figure 1 depicts the identified $\underline{m}_t$ for the Logistic Growth model.
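The qualitative claim that integration error decays with the test-function radius can be checked numerically. For compactly supported $\phi$, $\int_0^T (\phi u)'\,dt = 0$ exactly, so the trapezoidal value of that integral is pure quadrature error; the logistic model (where $u' = u - u^2$ exactly) keeps the integrand analytic. This is a demonstration under those assumptions, not the paper's $\widehat{e}_{\mathrm{rms}}$ estimator:

```python
import numpy as np

M, T = 256, 10.0
t = np.linspace(0.0, T, M + 1); dt = T/M
u = 0.1*np.exp(t)/(1.0 + 0.1*(np.exp(t) - 1.0))    # logistic: u' = u - u^2

def bump(x, a, eta=9.0):
    out = np.zeros_like(x); m = np.abs(x) < a
    out[m] = np.exp(-eta/(1.0 - (x[m]/a)**2))
    return out

def dbump(x, a, eta=9.0):
    out = np.zeros_like(x); m = np.abs(x) < a
    s = (x[m]/a)**2
    out[m] = np.exp(-eta/(1.0 - s))*(-eta)*(2.0*x[m]/a**2)/(1.0 - s)**2
    return out

def quad_err(mt):
    """Trapezoidal value of int (phi*u)' dt (exact value: 0)."""
    a = mt*dt
    phi, dphi = bump(t - T/2, a), dbump(t - T/2, a)
    nrm = np.linalg.norm(phi)
    g = (dphi*u + phi*(u - u**2))/nrm              # (phi*u)' analytically
    q = dt*np.ones(M + 1); q[0] = q[-1] = dt/2
    return abs(q @ g)
```

A radius of a few gridpoints leaves a visible quadrature error, while a few dozen gridpoints drives it far below typical noise levels, matching the message of Figures 1 and 2.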

Orthonormal test functions
Having computed the minimal radius $\underline{m}_t$, we then construct the test function matrices $(\Phi, \dot{\Phi})$ by orthonormalizing and truncating a concatenation of test function matrices with radii $\mathbf{m}_t := \underline{m}_t \times (1, 2, 4, 8)$. Letting $\Psi_\ell$ be the convolution matrix for $\psi(\bullet\,; 2^{\ell}\underline{m}_t\Delta t)$, we compute the SVD of the row-concatenated matrix
$$\Psi := \begin{bmatrix} \Psi_0 \\ \Psi_1 \\ \Psi_2 \\ \Psi_3 \end{bmatrix} = \mathrm{U}\,\Sigma\,\mathrm{V}^T.$$

[Figure 2. Left: the bound (16) holds empirically for small radii $m_t$. Right: the coefficient error $E_2$ as a function of $m_t$, showing that for each noise level the radius $\underline{m}_t$ identified using $\widehat{e}_{\mathrm{rms}}$ lies to the right of the dip in $E_2$, as random errors begin to dominate integration errors. In particular, for low levels of noise, $\underline{m}_t$ increases to ensure high accuracy integration.]
The right singular vectors $\mathrm{V}$ then form an orthonormal basis for the set of test functions forming the rows of $\Psi$.
Letting $r$ be the rank of $\Psi$, we then truncate the SVD to rank $K$, where $K$ is selected as the changepoint in the cumulative sum of the singular values $(\Sigma_{ii})_{i=1}^{r}$. We then let $\Phi = (\mathrm{V}^{(K)})^T$ be the test function basis, where $\mathrm{V}^{(K)}$ denotes the first $K$ modes of $\mathrm{V}$. Unlike our previous implementations, the derivative matrix $\dot{\Phi}$ must now be computed numerically; however, given the compact support and smoothness of the reference test functions $\psi(\bullet\,; 2^{\ell}\underline{m}_t\Delta t)$, this can be done very accurately with Fourier differentiation. Hence, we let
$$\dot{\Phi} := \mathcal{F}^{-1}\big[\,i\mathbf{k}\,\mathcal{F}[\Phi]\,\big],$$
where $\mathcal{F}$ is the discrete Fourier transform (applied along rows) and $\mathbf{k}$ are the requisite wavenumbers. Figure 3 displays the first six orthonormal test functions, along with their derivatives, obtained from this process applied to Hindmarsh-Rose data.
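The orthonormalization step can be sketched as follows; the dyadic radii, the translate spacing, and the fixed truncation $K = 12$ (used in place of the changepoint rule) are simplifying assumptions:

```python
import numpy as np

M, T = 256, 10.0
t = np.linspace(0.0, T, M + 1); dt = T/M

def psi(x, a, eta=9.0):
    s = np.zeros_like(x); m = np.abs(x) < a
    s[m] = np.exp(-eta/(1.0 - (x[m]/a)**2))
    return s/np.linalg.norm(s)

rows = []
for mt in (8, 16, 32, 64):                  # dyadic radii mt_min*(1,2,4,8)
    for c in range(mt, M - mt + 1, mt//2):  # translates along the grid
        rows.append(psi(t - t[c], mt*dt))
Psi = np.stack(rows)

_, S, Vt = np.linalg.svd(Psi, full_matrices=False)
K = 12                                      # fixed truncation (assumed)
Phi = Vt[:K]                                # orthonormal test functions

# Spectral differentiation of the rows (they vanish at the boundary).
k = np.fft.fftfreq(M + 1, d=dt)
Phip = np.real(np.fft.ifft(2j*np.pi*k*np.fft.fft(Phi, axis=1), axis=1))
```

The rows of `Phi` are orthonormal by construction of the SVD, and the spectral derivative is accurate because each row is smooth and compactly supported away from the boundary.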

Stopping criteria
Having formed the test function matrices $\{\Phi, \dot{\Phi}\}$, the remaining unspecified process in Algorithm 2 is the stopping criterion SC. The iteration can stop in one of three ways: (1) the iterates reach a fixed point, (2) the number of iterates exceeds a specified limit, or (3) the residuals are no longer approximately normally distributed. Conditions (1) and (2) are straightforward limitations of any iterative algorithm, while (3) results from the fact that our weighted least-squares framework is only approximate. In ideal scenarios where the discrepancy terms $e_{\mathrm{int}}$ and $h(\mathbf{u}^\star, \mathbf{w}^\star; \boldsymbol{\varepsilon})$ are negligible, equation (10) implies that
$$(L^\star)^{-1}\,r(U, \mathbf{w}^\star) \sim \mathcal{N}(0, \sigma^2 I),$$
where $C^\star = L^\star(L^\star)^T$ is the covariance computed from $\mathbf{w}^\star$. Hence we expect the whitened residual $r^{(n)}$ to agree with a normal distribution more strongly as $n$ increases. If the discrepancy terms are non-negligible, it is possible that the reweighting procedure will not result in an increasingly normal $r^{(n)}$, and the iterates $\mathbf{w}^{(n)}$ may become worse approximations of $\mathbf{w}^\star$. A simple way to detect this is with the Shapiro-Wilk (S-W) test for normality [54], which produces an approximate $p$-value under the null hypothesis that the given sample is i.i.d. normally distributed. However, the first few iterations are also not expected to yield i.i.d. normal residuals (see Figure 4), so we only check the S-W test after a fixed number of iterations $n_0$. Letting $SW^{(n)} := SW(r^{(n)})$ denote the $p$-value of the S-W test at iteration $n > n_0$, and setting $SW^{(n_0)} = 1$, we stop the iteration when
$$\frac{\big\|\mathbf{w}^{(n+1)} - \mathbf{w}^{(n)}\big\|_2}{\big\|\mathbf{w}^{(n)}\big\|_2} < \tau_{\mathrm{FP}}, \qquad\text{or}\qquad n > \texttt{max\_its}, \qquad\text{or}\qquad SW^{(n)} < \tau_{\mathrm{SW}}.$$
We set the fixed-point tolerance to $\tau_{\mathrm{FP}} = 10^{-6}$, the S-W tolerance and starting point to $\tau_{\mathrm{SW}} = 10^{-4}$ and $n_0 = 10$, and $\texttt{max\_its} = 100$.
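The three-way stopping rule might be wrapped as below; `check_stop` and its return labels are illustrative names, with the thresholds taken from the text:

```python
import numpy as np
from scipy.stats import shapiro

TAU_FP, TAU_SW, N0, MAX_ITS = 1e-6, 1e-4, 10, 100

def check_stop(w_new, w_old, resid, n):
    """Return a stop reason, or None to continue iterating."""
    # (1) Relative fixed-point criterion on the parameter iterates.
    if np.linalg.norm(w_new - w_old) < TAU_FP*np.linalg.norm(w_old):
        return "fixed_point"
    # (2) Iteration cap.
    if n >= MAX_ITS:
        return "max_iterations"
    # (3) Shapiro-Wilk normality check, only after n0 iterations.
    if n > N0 and shapiro(resid).pvalue < TAU_SW:
        return "nonnormal_residuals"
    return None
```

In a full implementation, `resid` would be the whitened residual $r^{(n)}$ from the weighted least squares step.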

Illustrating Examples
Here we demonstrate the effectiveness of WENDy applied to five ordinary differential equations canonical to biological and biochemical modeling. As demonstrated in the works mentioned in Section 1, it is known that weak or integral formulations are advantageous, with previous works mostly advocating for a two-step process involving (1) pre-smoothing the data before (2) solving for parameters using ordinary least squares. The WENDy approach does not involve smoothing the data, and instead leverages the covariance structure introduced by the weak form to iteratively reduce errors in the ordinary least squares (OLS) weak-form estimation. Utilizing the covariance structure in this way not only reduces error, but also reveals parameter uncertainties, as demonstrated in Section 3.3.
We compare the WENDy solution to the weak-form ordinary least squares solution (described in Section 2 and denoted simply by OLS in this section) and to forward solver-based nonlinear least squares (FSNLS). Comparison to OLS is important due to the growing use of weak formulations in joint equation learning / parameter estimation tasks, often without smoothing or further variance-reduction steps [3,15,34,41]. In most cases WENDy reduces the OLS error by 60%-90% (see the bar plots in Figures 5-9). When compared to FSNLS, WENDy provides a more efficient and accurate solution in typical use cases; however, in the regime of highly sparse data and large noise, FSNLS provides an improvement in accuracy at a higher computational cost. Furthermore, we demonstrate that FSNLS may be improved by using the WENDy output as an initial guess. We aim to explore further benefits of combining forward solver-based approaches with solver-free weak-form approaches in future work. Code to generate all examples is available at https://github.com/MathBioCU/WENDy.

Numerical methods and performance metrics
In all cases below, we solve for approximate weights ŵ using Algorithm 2 over 100 independent trials of additive Gaussian noise with standard deviation σ = σ_NR ||vec(U*)||_rms for a range of noise ratios σ_NR. This specification of the variance implies that σ_NR can be interpreted as the relative error between the true and noisy data. Results from all trials are aggregated by computing the mean and median. Computations of Algorithm 2 are performed in MATLAB on a laptop with 40 GB of RAM and an 8-core AMD Ryzen 7 Pro 4750U processor. Computations of FSNLS are also performed in MATLAB but were run on the University of Colorado Boulder's Blanca Condo Cluster in a trivially parallel manner over a homogeneous CPU set, each with Intel Xeon Gold 6130 processors and 24 GB of RAM. Due to the comparable speed of the two processors (1.7 GHz for the AMD Ryzen 7, 2.1 GHz for the Intel Xeon Gold) and the fact that each task required less than 5 GB of working memory (well below the maximum allowable), we believe the walltime comparisons between WENDy and FSNLS below are fair.
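The noise specification above can be sketched as follows. This is an illustrative Python sketch under the stated noise model (variable names are ours, not the paper's); scaling the noise standard deviation by the root-mean-square of the vectorized clean data makes σ_NR the approximate relative error:

```python
import numpy as np

def add_noise(U, sigma_nr, rng):
    """Add i.i.d. Gaussian noise with std sigma_NR * rms(vec(U)).

    U is the clean data array (timepoints x components); the resulting
    relative error ||noise|| / ||U|| is approximately sigma_nr.
    """
    sigma = sigma_nr * np.sqrt(np.mean(U**2))  # rms of the vectorized data
    return U + sigma * rng.standard_normal(U.shape)
```

Coarser datasets, as described above, are then obtained by subsampling the fine-grid solution by factors of 2, e.g. `U_coarse = U_fine[::2]`.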
In addition to σ_NR, we vary the stepsize ∆t (keeping the final time T fixed for each example) to demonstrate large- and small-sample behavior. For each example, a high-fidelity solution is obtained on a fine grid (512 timepoints for Logistic Growth, 1024 for all other examples), which is then subsampled by factors of 2 to obtain coarser datasets.
To evaluate the performance of WENDy, we record the relative coefficient error E_2 = ||ŵ − w*||_2 / ||w*||_2. Even with only 64 timepoints, the coefficient error is still less than 10%, and WENDy reduces the error by 40%-70% on average from the OLS solution (top right panel).

FitzHugh-Nagumo
The FitzHugh-Nagumo equations are a simplified model for an excitable neuron [16]. The equations contain six fundamental terms with coefficients to be identified. The cubic nonlinearity implies that the first-order covariance expansion in WENDy becomes inaccurate at high levels of noise. Nevertheless, Figure 7 (lower plots) shows that WENDy produces on average 6% coefficient errors at 10% noise with only 128 timepoints, and only 7% forward simulation errors (see upper left plot for an example dataset at this resolution). In many cases WENDy reduces the error by over 50% from the OLS solution, with 80% reductions for high noise and M = 1024 timepoints (top right panel). For sparse data (e.g. 64 timepoints), numerical integration errors prevent estimation of parameters with lower than 3% error, as the solution is nearly discontinuous in this case (jumps between datapoints are O(1)).

Hindmarsh-Rose
The Hindmarsh-Rose model is used to emulate neuronal bursting and features 10 fundamental parameters which span 4 orders of magnitude [20]. Bursting behavior is observed in the first two solution components, while the third component represents slow neuronal adaptation with dynamics that are two orders of magnitude smaller in amplitude.
Bursting produces steep gradients which render the dynamics numerically discontinuous at M = 128 timepoints, while at M = 256 there is at most one data point between peaks and troughs of bursts (see Figure 8, upper left).
Furthermore, cubic and quadratic nonlinearities lead to inaccuracies at high levels of noise. Thus, in a multitude of ways (multiple coefficient scales, multiple solution scales, steep gradients, higher-order nonlinearities, etc.) this is a challenging problem, yet an important one, as it exhibits a canonical biological phenomenon. Figure 8 (lower left) shows that WENDy is robust to 2% noise when M ≥ 256, robust to 5% noise when M ≥ 512, and robust to 10% noise when M ≥ 1024. It should be noted that since our noise model applies additive noise of equal variance to each component, relatively small noise renders the slowly-varying third component u_3 unidentifiable (in fact, the noise ratio of U^(3) alone exceeds 100% when the total noise ratio is 10%). In the operable range of 1%-2% noise and M ≥ 256, WENDy results in 70%-90% reductions in errors from the naive OLS solution, indicating that inclusion of the approximate covariance is highly beneficial under conditions which can be assumed to be experimentally relevant.
We note that the forward simulation error here is not indicative of performance, as it will inevitably be large in all cases due to slight misalignment with bursts in the true data.

Protein Transduction Benchmark (PTB)
The PTB model is a five-compartment protein transduction model identified in [52] as a mechanism in the signaling cascade of epidermal growth factor (EGF). It was used in [59] in a comparison against four other models, and has since served as a benchmark for parameter estimation studies in biochemistry [24,32,42]. The nonlinearities are quadratic and sigmoidal, the latter category producing nontrivial transformations of the additive noise. WENDy estimates the 11 parameters with reasonable accuracy when 256 or more timepoints are available (Figure 9), which is sufficient to result in forward simulation errors often much less than 10%. The benefit of using WENDy over the OLS solution is most apparent for M ≥ 512, where the coefficient errors are reduced by at least 70%, leading to forward simulation errors less than 10%, even at 20% noise.

Parameter uncertainties using learned covariance
We now demonstrate how the WENDy methodology may be used to inform the user about uncertainties in the parameter estimates. Figures 10 and 11 contain visualizations of confidence intervals around each parameter in the FitzHugh-Nagumo and Hindmarsh-Rose models, computed from the diagonal elements of the learned parameter covariance matrix S. Each combination of noise level and number of timepoints yields a 95% confidence interval around the learned parameter. As expected, increasing the number of timepoints and decreasing the noise level leads to more certainty in the learned parameters, while lower quality data leads to higher uncertainty. Uncertainty levels can be used to inform experimental protocols and even be propagated into predictions made from learned models. One could also examine the off-diagonal correlations in S, which indicate how information flows between parameters. We aim to explore these directions in future work.
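Intervals of this form follow from the usual normal approximation, ŵ_i ± z √(S_ii) with z ≈ 1.96 for 95% coverage. The helper below is an illustrative sketch, not the paper's code:

```python
import numpy as np

def param_confidence_intervals(w_hat, S, z=1.96):
    """95% confidence intervals from the diagonal of the learned
    parameter covariance matrix S (normal approximation)."""
    half_width = z * np.sqrt(np.diag(S))
    return w_hat - half_width, w_hat + half_width
```

The off-diagonal entries of S carry the parameter correlations mentioned above and are simply ignored by this per-parameter interval construction.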

Comparison to nonlinear least squares
We now briefly compare WENDy and forward solver-based nonlinear least squares (FSNLS) using walltime and relative coefficient error E_2 as criteria. For nonlinear least squares one must specify the initial conditions for the ODE solve (IC), a simulation method (SM), and an initial guess for the parameters (w^(0)). Additionally, stopping tolerances for the optimization method must be specified (Levenberg-Marquardt is used throughout). Optimal choices for each of these hyperparameters are an ongoing area of research. We have optimized FSNLS in ways that are unrealistic in practice in order to demonstrate the advantages of WENDy even when FSNLS is performing somewhat optimally in both walltime and accuracy. Our hyperparameter selections are collected in Table 2 and discussed below.
To remove some sources of error from FSNLS, we use the true initial conditions u(0) throughout, noting that these would not be available in practice. For the simulation method, we use state-of-the-art ODE solvers for each problem: for the stiff differential equations (FitzHugh-Nagumo and Hindmarsh-Rose) we use MATLAB's ode15s, while for Lotka-Volterra and PTB we use ode45. In this way FSNLS is optimized for speed in each problem. We fix the relative and absolute tolerances of the solvers at 10^−6 in order to prevent numerical errors from affecting results without requiring excessive computation. In practice, the ODE tolerance, as well as the solver, must be tuned to the noise in the data, and the relation between simulation errors and parameter errors in FSNLS is an ongoing area of research [40].
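A minimal FSNLS loop of this kind can be sketched in Python. The paper uses MATLAB's ode15s/ode45 with Levenberg-Marquardt; here scipy's `solve_ivp` and `least_squares(method='lm')` play those roles, and logistic growth u' = w_1 u + w_2 u^2 serves as a toy model (the function name `fsnls` is ours):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def fsnls(t, u_data, w0, u0):
    """Forward solver-based nonlinear least squares for the toy model
    u' = w[0]*u + w[1]*u**2, assuming the true initial condition u0
    is known (as in the comparison above)."""
    def residual(w):
        # forward solve at the current parameter guess, with fixed
        # solver tolerances (10^-6, as in the paper's setup)
        sol = solve_ivp(lambda s, u: w[0]*u + w[1]*u**2, (t[0], t[-1]),
                        [u0], t_eval=t, rtol=1e-6, atol=1e-6)
        return sol.y[0] - u_data
    # Levenberg-Marquardt on the simulation-vs-data residual
    return least_squares(residual, w0, method='lm').x
```

Each optimizer step triggers at least one full forward solve, which is the computational cost that WENDy avoids entirely.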
Due to the non-convexity of the loss function in FSNLS, choosing a good initial guess w^(0) for the parameters w is crucial. For comparison, we use two strategies. The first strategy (simply labeled FSNLS in Figures 12-15) consists of running FSNLS on five initial guesses, with each parameter sampled i.i.d. from a uniform distribution, and keeping only the best-performing result. Since the sign of coefficients greatly impacts the stability of the ODE, we choose the sampling distributions so that initial guesses always have the correct sign but approximately 25% error from the true coefficients. (For cases like Hindmarsh-Rose, this implies that the small coefficients in w are measured to high accuracy relative to the large coefficients.) In practice, one would not have the luxury of selecting the lowest-error result of five independent trials of FSNLS; however, it may be possible to combine several results to boost performance.
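The best-of-five sampling strategy might be sketched as follows. The exact distribution parameters used in the paper are not reproduced here, so the 25% relative spread below is an assumption consistent with the description above (correct sign, roughly 25% error from the true coefficients):

```python
import numpy as np

def sample_initial_guesses(w_true, rng, n_guesses=5, rel_spread=0.25):
    """Draw n_guesses initial parameter vectors, each component sampled
    i.i.d. uniformly around the true value with relative half-width
    rel_spread, so every guess keeps the correct sign.

    Illustrative sketch; the paper's exact sampling parameters differ.
    """
    half = rel_spread * np.abs(w_true)
    return rng.uniform(w_true - half, w_true + half,
                       size=(n_guesses, len(w_true)))
```

Note that sampling with a spread proportional to |w_i| is what makes small coefficients relatively accurate, as remarked above for Hindmarsh-Rose.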
For the second initial guess strategy we set w^(0) = ŵ, the output from WENDy (labeled WENDy-FSNLS in Figures 12-15). In almost all cases, this results in an increase in accuracy, and in many cases also a decrease in walltime. Figures 12-15 display comparisons between FSNLS, WENDy-FSNLS, and WENDy for the Lotka-Volterra, FitzHugh-Nagumo, Hindmarsh-Rose, and PTB models. In general, we observe that WENDy provides significant decreases in walltime and modest to considerable increases in accuracy compared to the FSNLS solution. This is surprising because, given the additive noise structure of the data, FSNLS corresponds (for normally distributed measurement errors) to maximum likelihood estimation, while WENDy only provides a first-order approximation to the statistical model.
At lower resolution and higher noise (top right plot in Figures 12-15), all three methods are comparable in accuracy, and WENDy decreases the walltime by two orders of magnitude. In several cases, such as Lotka-Volterra (Figure 12), the WENDy-FSNLS solution achieves a lower error than both WENDy and FSNLS, and improves on the speed of FSNLS. For Hindmarsh-Rose, even with high-resolution data and low noise (bottom left plot of Figure 14), FSNLS is unable to provide an accurate solution (E_2 ≈ 0.2), while WENDy and WENDy-FSNLS result in E_2 ≈ 0.005. The clusters of FSNLS runs in Figure 14 with walltimes ≈ 10 seconds correspond to local minima, a particular weakness of FSNLS, while the remaining runs have walltimes on the order of 20 minutes, compared to 10-30 seconds for WENDy.
We see a similar trend in E_2 for the PTB model (Figure 15), with E_2 rarely dropping below 10%; however, in this case FSNLS runs in a more reasonable amount of time, taking only ≈ 100 seconds. The WENDy solution still offers speed and error reductions. For high-resolution data (M = 1024), WENDy runs in 40-50 seconds on PTB data due to the impact of M and d, the number of ODE compartments (here d = 5), on the computational complexity. It is possible to reduce this using a more sophisticated implementation (in particular, symbolic computations are used to take gradients of generic functions, which could be precomputed).
Finally, the aggregate performance of WENDy, WENDy-FSNLS, and FSNLS is reported in Figure 16, which reiterates the trends identified in the previous figures. Firstly, WENDy provides significant accuracy and walltime improvements over FSNLS. It is possible that FSNLS results in lower error for very small sample sizes (see the M = 128 results in the left plot), although this comes at a much higher computational cost. Secondly, WENDy-FSNLS provides similar accuracy improvements over FSNLS and improves the walltime-per-datapoint score, suggesting that using WENDy as an initial guess may alleviate the computational burden in cases where FSNLS is competitive.

Concluding Remarks
In this work, we have proposed the Weak-form Estimation of Nonlinear Dynamics (WENDy) method for directly estimating model parameters, without relying on forward solvers. The essential feature of the method involves converting the strong form representation of a model to its weak form and then substituting in the data and solving a regression problem for the parameters. The method is robust to substantial amounts of noise, and in particular to levels frequently seen in biological experiments.
As mentioned above, the idea of substituting data into the weak form of an equation followed by a least squares solve for the parameters has existed since at least the mid-1950s [55]. However, FSNLS-based methods have proven highly successful and are ubiquitous in the parameter estimation literature and software. The disadvantage of FSNLS is that fitting using repeated forward solves comes at a substantial computational cost and with unclear dependence on the initial guess and hyperparameters (in both the solver and the optimizer). Several researchers over the years have created direct parameter estimation methods (that do not rely on forward solves), but they have historically included some sort of data-smoothing step. The primary issue with this is that projecting the data onto a spline basis (for example) represents the data using a basis which does not solve the original equation. Importantly, that error propagates to the error in the parameter estimates. However, we note that the WENDy framework introduced here is able to encapsulate previous works that incorporate smoothing, namely by including the smoothing operator in the covariance matrix C.
The conversion to the weak form is essentially a weighted integral transform of the equation. As there is no projection onto a non-solution-based function basis, the weak-form approach bypasses the need to estimate the true solution and directly estimates the parameters.
The main message of this work is that weak-form-based direct parameter estimation offers intriguing advantages over FSNLS-based methods.In almost all the examples shown in this work and in particular for larger dimensional systems with high noise, the WENDy method is faster and more accurate by orders of magnitude.In rare cases where an FSNLS-based approach yields higher accuracy, WENDy can be used as an efficient method to identify a good initial guess for parameters.

Fig. 1
Fig. 1 Coefficient error E_2 = ||ŵ − w*||_2 / ||w*||_2 of WENDy applied to the Logistic Growth model vs. test function radius m_t for noise levels σ_NR ∈ {10^−6, . . ., 10^−1}. For large enough radius, errors are dominated by noise and integration error is negligible. The minimum radius m_t computed as in Section 2.3 finds this noise-dominated region, which varies depending on σ_NR.

Fig. 2
Fig. 2 Visualization of the minimum radius selection using single realizations of FitzHugh-Nagumo data with 512 timepoints at three different noise levels. Dashed lines indicate the minimum radius m_t. Left: we see that inequality (16) holds empirically for small radii m_t. Right: coefficient error E_2 as a function of m_t is plotted, showing that for each noise level the identified radius m_t using ê_rms lies to the right of the dip in E_2, as random errors begin to dominate integration errors. In particular, for low levels of noise, m_t increases to ensure high-accuracy integration.

Fig. 3
Fig. 3 First six orthonormal test functions obtained from Hindmarsh-Rose data with 2% noise and 256 timepoints using the process outlined in Section 2.3.

Fig. 4
Fig. 4 Histograms of the WENDy (red) and OLS (blue) residuals evaluated at the WENDy output ŵ applied to the (left to right) Logistic Growth, Lotka-Volterra, and FitzHugh-Nagumo data, each with 256 timepoints and 20% noise. Curves are averaged over 100 independent trials, with each histogram scaled by its empirical standard deviation. In each case, the WENDy residual agrees well with a standard normal, while the OLS residual exhibits distinctly non-Gaussian features, indicating that OLS is the wrong statistical regression model.

Fig. 5
Fig. 5 Logistic Growth: Estimation of parameters in the Logistic Growth model. Left and middle panels display parameter errors E_2 and forward simulation errors E_FS, with solid lines showing mean error and dashed lines showing median error. Right: median percentage drop in E_2 from the OLS solution to the WENDy output (e.g. at 30% noise and 512 timepoints WENDy results in an 85% reduction in error).

Fig. 8
Fig. 8 Hindmarsh-Rose: Estimation of parameters in the Hindmarsh-Rose model (for plot details see Figure 5 caption).

Fig. 9
Fig. 9 Protein Transduction Benchmark (PTB): Estimation of parameters in the PTB model (for plot details see Figure 5 caption).

Fig. 10 FitzHugh
Fig. 10 FitzHugh-Nagumo: Performance of WENDy for all estimated parameters. The true parameters are plotted in green, the purple lines indicate the average learned parameters over all experiments, and the black lines represent the 95% confidence intervals obtained from averaging the learned parameter covariance matrices S. The x-axis indicates the noise level and number of timepoints for each interval.

Fig. 11
Fig. 11 Hindmarsh-Rose: Performance of WENDy for all estimated parameters.See Figure 10 for a description.

Fig. 12
Fig. 12 Comparison between FSNLS, WENDy-FSNLS, and WENDy for the Lotka-Volterra model. Left to right: noise levels {5%, 10%, 20%}. Top: 256 timepoints; bottom: 1024 timepoints. We note that the M = 1024 with 20% noise figure on the lower right suggests that WENDy results in slightly higher errors than FSNLS. This is inconsistent with all other results in this work and appears to be an outlier. Understanding the source of this discrepancy is a topic of future work.

Fig. 16
Fig. 16 Average performance of FSNLS, WENDy-FSNLS, and WENDy over the Lotka-Volterra, FitzHugh-Nagumo, Hindmarsh-Rose, and PTB models for noise ratios σ_NR ∈ {0.01, 0.02, 0.05, 0.1}. To account for scaling between examples, the geometric mean across the four examples is reported in each plot. Left: average relative coefficient error E_2 vs. number of timepoints M; right: relative coefficient error E_2 multiplied by walltime per datapoint vs. M. In each case, increasing noise levels σ_NR correspond to increasing values along the y-axis. Both plots suggest that WENDy and WENDy-FSNLS each provide accuracy and walltime improvements over FSNLS with best-of-five random initial parameter guesses.

Table 2
Hyperparameters for the FSNLS algorithm.
The first initial guess strategy consists of running FSNLS on five initial guesses, where each parameter is sampled i.i.d. from a uniform distribution.