Emulation-accelerated Hamiltonian Monte Carlo algorithms for parameter estimation and uncertainty quantification in differential equation models

L. Mihaela Paun (l.paun.1@research.gla.ac.uk) and Dirk Husmeier (dirk.husmeier@glasgow.ac.uk), School of Mathematics and Statistics, University of Glasgow, Glasgow G12 8QQ, UK

We propose to accelerate Hamiltonian and Lagrangian Monte Carlo algorithms by coupling them with Gaussian processes for emulation of the log unnormalised posterior distribution. We provide proofs of detailed balance with respect to the exact posterior distribution for these algorithms, and validate the correctness of the samplers' implementation by Geweke consistency tests. We implement these algorithms in a delayed acceptance (DA) framework, and investigate whether the DA scheme can offer computational gains over the standard algorithms. A comparative evaluation study is carried out to assess the performance of the methods on a series of models described by differential equations, including a real-world application of a 1D fluid-dynamics model of the pulmonary blood circulation. The aim is to identify the algorithm which gives the best trade-off between accuracy and computational efficiency, to be used in nonlinear DE models, which are computationally onerous due to the repeated numerical integrations required in a Bayesian analysis. For most methods and DE models considered, the results showed no advantage of the DA scheme over the standard algorithms with respect to several efficiency measures based on the effective sample size. These gradient-driven algorithms register a high acceptance rate, so the number of expensive forward model evaluations is not significantly reduced by the first, emulator-based stage of DA. Additionally, Lagrangian Dynamical Monte Carlo and Riemann Manifold Hamiltonian Monte Carlo tended to register the highest efficiency (in terms of effective sample size normalised by the number of forward model evaluations), followed by Hamiltonian Monte Carlo, while the No U-turn sampler tended to be the least efficient.


Introduction
Parameter estimation and uncertainty quantification (UQ) in systems of nonlinear ordinary and partial differential equations (ODEs/PDEs) is a topical research area, driven by the emergence of complex mathematical models expressed via ODEs or PDEs. Such models are heavily used throughout the science and engineering fields to understand the underlying mechanisms behind a process, e.g. in biological systems (Wilkinson 2007) or physiology (Mirams et al. 2016). Statistical inference allows estimation of the unknown model parameters from the data in a robust and coherent manner within a Bayesian or frequentist framework. This is, however, a challenging task, since nonlinear ODE/PDE models that faithfully capture real-world processes of interest are analytically intractable and can only be solved using numerical integration. While this may not be a problem if the numerical integration is performed only a few times, it quickly becomes a major hindrance when incorporated within an adaptive parameter estimation procedure requiring thousands of ODE/PDE evaluations, incurring high computational costs. In addition, parameters that are non-identifiable due to the model formulation or an insufficient amount of data, as well as strong parameter correlations, further complicate the statistical analysis. Bayesian methods can be employed to provide posterior probability distributions over the parameters; Markov chain Monte Carlo (MCMC) algorithms are generally used to sample from the posterior distribution.
Finding efficient algorithms that return high effective sample sizes (ESS) in a reasonable time frame is challenging, especially if there are strong correlations between the parameters, which retard the convergence of standard MCMC algorithms, such as Metropolis-Hastings (M-H) (Turner et al. 2013). The implication of using standard MCMC algorithms on such problems is that a small step size is needed to obtain a reasonable acceptance rate, which in turn means that a low ESS (or high auto-correlation) is obtained. This problem can be alleviated by using more advanced MCMC algorithms, such as the Hamiltonian Monte Carlo (HMC) algorithm (Neal 2011). HMC introduces an auxiliary variable and makes use of the gradient of the posterior distribution for more informed moves in parameter space. However, while it has been shown on numerous occasions that the HMC algorithm outperforms random-walk algorithms in terms of efficiency (e.g. in Ch. 5 of Brooks et al. 2011 or in Sengupta et al. 2016), it has rarely been applied to nonlinear ODE or PDE models, with four notable exceptions: Kramer et al. (2014), Lê et al. (2016), Bui-Thanh and Girolami (2014) and Sengupta et al. (2016).
The major drawback of applying HMC, rather than the M-H algorithm, to ODE/PDE models is the large number of model evaluations before a proposal is made. In HMC, trajectories are simulated, and throughout each trajectory the ODEs/PDEs are evaluated multiple times (for calculating the likelihood and its gradient) until a proposal is made, unlike the M-H algorithm, which requires a single ODE/PDE evaluation per proposal. To reduce the computational burden, several approaches have been proposed in the literature. In a special class of ODE models (steady state data models), Kramer et al. (2014) make use of a special property of steady state data to obtain the output sensitivities (i.e. derivatives of the model output with respect to the unknown parameters) required in the Hamiltonian equations through analytical calculations. However, the approach in Kramer et al. (2014) cannot be applied to dynamic time data ODE/PDE models, for which the output sensitivities can only be obtained via numerical integration. Sengupta et al. (2016) compare the performance, in terms of computational speed and accuracy, of three methods for calculating the likelihood gradients: finite differences (perturbing each parameter by a small constant h > 0), forward sensitivities (augmenting the ODE system to include the gradient of the states with respect to the parameters, called the sensitivity derivative equation) and the adjoint method (which avoids solving the sensitivity derivative equation by using a linear adjoint equation that is easier to integrate numerically); the latter was shown to be superior (specific details of these methods can also be found in Sengupta et al. (2014)). Similarly, Bui-Thanh and Girolami (2014) apply the adjoint method in a PDE system to compute the first-, second- and third-order derivatives of the likelihood as part of the Riemann Manifold HMC (RMHMC) algorithm. The approaches taken in Sengupta et al. (2016) and Bui-Thanh and Girolami (2014) can nevertheless still be too computationally expensive for large classes of problems. The study in Chen et al. (2014) introduces stochastic-gradient HMC, which sub-samples the data and uses stochastic gradients as a replacement for full-data gradients in the Hamiltonian equations. Data sub-sampling introduces noise, which gets propagated to the Hamiltonian differential equations; the convergence to the posterior distribution thus depends on stochastic differential equations, which may reduce exploration efficiency and accuracy (Betancourt 2015).
Another approach proposed to speed up HMC involves the replacement of the expensive likelihood (or posterior distribution) with computationally cheaper surrogate models (Rasmussen 2003;Zhang et al. 2017), i.e. HMC is coupled with statistical emulation of the unnormalised posterior distribution. For this method, two routes can be taken: drawing samples from the approximate posterior distribution defined by the surrogate model ('emulator'), or drawing samples from a distribution that asymptotically converges to the true posterior distribution ('simulator'). The first route, taken in multiple studies (Peirlinck et al. 2019;Schiavazzi et al. 2016;Paun et al. 2019;Costabal et al. 2019;Bliznyuk et al. 2008;Dietzel and Reichert 2014;Fielding et al. 2011;Wilkinson 2014), samples from the surrogate posterior distribution, resulting in substantial gains in computational efficiency (since the expensive model is no longer used), however accuracy is sacrificed.
Methods that correct for the bias and ensure asymptotic exactness use the surrogate for the proposal only (see studies in Rasmussen 2003; Zhang et al. 2017; Paun and Husmeier 2020), or incrementally refine the surrogate model as MCMC proceeds (Zhang et al. 2017). Other studies employ similar approaches guaranteeing asymptotic convergence to the posterior distribution (Wu and Li 2016; Conrad et al. 2016, 2018; Gong and Duan 2017); e.g. the study in Conrad et al. (2016) uses forward simulations from the expensive model to continually refine a local approximation of the log unnormalised posterior, however the algorithm depends on various heuristic parameters, which critically affect the computational efficiency and may be difficult to tune in practice. The delayed acceptance (DA) scheme (Christen and Fox 2005; Golightly et al. 2015) is another exact method, and it does not depend on any heuristically set terms. This method, employed in Christen and Fox (2005), Golightly et al. (2015), Sherlock et al. (2017), Higdon et al. (2011), Cui et al. (2011), Quiroz et al. (2018), Banterle et al. (2019) and Paun and Husmeier (2020), is a two-stage acceptance procedure, with two separate acceptance/rejection decisions. The first decision is a computationally fast pre-filter step based on the surrogate model, which upon rejection of a proposed new parameter avoids carrying out the computationally expensive second step based on the original model.
In the current study, we employ HMC in combination with statistical emulation, using Gaussian processes (GPs), of the log unnormalised posterior distribution within the GP-HMC algorithm, following ideas in Rasmussen (2003) and Lan et al. (2016). Throughout the trajectory, HMC runs at low computational cost in the emulated space, and the Metropolis-Hastings acceptance/rejection step at the end of the trajectory is based on the ratio of the true posterior distributions. This requires one single numerical integration of the ODEs/PDEs per HMC trajectory segment to obtain one parameter proposal, which substantially reduces the computational complexity. The algorithm is exact in the sense of converging to the true posterior distribution asymptotically, assuming no discretisation errors are introduced from the numerical integration of the ODEs/PDEs (investigation of the discretisation errors is beyond the scope of this paper). Our methodological contributions are as follows. Firstly, we generalise the GP-HMC algorithm (Rasmussen 2003) to algorithms which advance HMC: the No U-turn sampler (NUTS) (Hoffman and Gelman 2014), RMHMC (Girolami and Calderhead 2011) and Lagrangian Dynamical Monte Carlo (LDMC) (Lan et al. 2015). This includes novel detailed balance proofs with respect to the correct posterior distribution, and we show how these subsume the proof of the GP-HMC algorithm as a limiting case. Secondly, we extend these algorithms by including DA, with formal detailed balance proofs, to investigate if the DA scheme brings any computational gains over the standard algorithms. Thirdly, we assess the computational efficiency of these emulation algorithms on a series of complex nonlinear ODE/PDE models.
The present study extends our previous work (Paun and Husmeier 2020) by providing a theoretical framework for the emulation algorithms and by investigating the efficiency of the DA scheme in the context of the emulation HMC algorithms. Moreover, we extend the simulation study to provide a broad benchmark set of ODE/PDE models that are representative of the complexity of typical real-world applications, allowing us to carry out a sound method evaluation.

Bayesian inference in ODEs/PDEs
By denoting the solution (output) of the ODEs/PDEs as m(θ), which is a function of the unknown parameters θ, we assume a Normal data likelihood:

p(y|θ, σ²) = (2πσ²)^(−n/2) exp(−S(θ)/(2σ²)),  S(θ) = Σ_{i=1}^{n} (y_i − m_i(θ))²,

where m(θ) = (m_1(θ), ..., m_n(θ)) is the vector of predictions from the ODEs/PDEs, y = (y_1, ..., y_n) is the vector of measurements, n is the number of data points, S(θ) is the residual sum-of-squares (RSS) value corresponding to the parameter vector θ, and σ² is the noise variance. We employ Bayesian methods to infer θ, which is done via exploration of the posterior distribution p(θ|y) = p(y|θ)p(θ) / ∫ p(y|θ)p(θ) dθ, where p(θ) is the prior distribution. The priors for the ODE/PDE model parameters are given in Sect. 4.
The integral in the denominator is intractable for the complex DE models considered, thus the posterior distribution is approximated using numerical schemes based on MCMC sampling: p(θ |y) ∝ p(y|θ ) p(θ).
To accelerate the sampling procedure, we couple MCMC with emulation using GPs. We emulate the RSS, and the emulated likelihood is defined as

p̃(y|θ, σ²) = (2πσ²)^(−n/2) exp(−S̃(θ)/(2σ²)),

where S̃(θ) is the RSS value predicted by the emulator for the particular θ. The emulated posterior distribution becomes p̃(θ|y) ∝ p̃(y|θ)p(θ). In Sect. 2.2 we briefly review the HMC-type algorithms utilised, and in Sect. 2.3 we show how HMC can be coupled with GPs as part of the GP-HMC algorithm (Rasmussen 2003). The methodological contribution of our paper is described in Sect. 3.

HMC
HMC (Neal 2011) introduces an auxiliary variable, the 'momentum', with a mass matrix M (the identity matrix), and simulates Hamiltonian dynamics by using gradient information from the log target density. An HMC trajectory is simulated by numerically integrating the Hamiltonian dynamics with the leapfrog integrator (Neal 2011), using a small step size ε > 0 and a number of leapfrog steps L. However, the numerical integration induces an error, and the bias is corrected by a M-H accept/reject step. More details can be found in the Supplement, Sect. 2.
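The leapfrog update and the M-H correction described above can be sketched as follows. This is a minimal illustration with an identity mass matrix, not the paper's implementation; `U` denotes the potential energy (negative log unnormalised posterior) and `grad_U` its gradient.

```python
import numpy as np

def leapfrog(theta, p, grad_U, eps, L):
    """Leapfrog integration of Hamiltonian dynamics (identity mass matrix)."""
    theta, p = theta.copy(), p.copy()
    p -= 0.5 * eps * grad_U(theta)           # initial half step for momentum
    for _ in range(L - 1):
        theta += eps * p                     # full step for position
        p -= eps * grad_U(theta)             # full step for momentum
    theta += eps * p
    p -= 0.5 * eps * grad_U(theta)           # final half step for momentum
    return theta, p

def hmc_sample(theta, U, grad_U, eps, L, rng):
    """One HMC transition: simulate a trajectory, then M-H accept/reject."""
    p0 = rng.standard_normal(theta.shape)
    theta_new, p_new = leapfrog(theta, p0, grad_U, eps, L)
    # Hamiltonian H = U(theta) + 0.5 p'p; accept with probability min(1, exp(-dH))
    dH = (U(theta_new) + 0.5 * p_new @ p_new) - (U(theta) + 0.5 * p0 @ p0)
    if np.log(rng.uniform()) < -dH:
        return theta_new, True
    return theta, False
```

For a well-tuned (ε, L) the discretisation error is small, so the M-H correction rejects only rarely; this is the high acceptance rate referred to in the abstract.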

RMHMC
Riemann Manifold HMC (RMHMC) (Girolami and Calderhead 2011) exploits the Riemannian geometry of the parameter space, and sets M based on the curvature of the (approximate) target distribution at every step throughout the trajectory. M is the metric tensor of the Riemann space and is calculated as the expected Fisher information matrix plus the negative Hessian of the log prior:

M(θ) = E_{y|θ}[∇_θ log p(y|θ) ∇_θ log p(y|θ)^T] − ∇_θ∇_θ log p(θ).

An implicit integrator, the generalised leapfrog scheme, is used for the numerical integration of the Hamiltonian dynamics, since reversibility with the leapfrog integrator is no longer satisfied.

LDMC
LDMC (Lan et al. 2015) simulates Lagrangian dynamics (instead of Hamiltonian dynamics), thus the 'velocity' variables replace the 'momentum' variables. This enables the use of an explicit geometric integrator, which substantially improves the computational efficiency of the costly RMHMC implicit integrator. The consequence is that the volume in phase space is no longer preserved, and to ensure detailed balance (Lan et al. 2015), the Jacobian transformation is needed to adjust the M-H acceptance probability. In LDMC, similarly to RMHMC, M is adjusted to the curvature of the (approximate) posterior distribution at every step throughout the trajectory.

NUTS
The No U-turn sampler (NUTS) (Hoffman and Gelman 2014) chooses L recursively by moving in Euclidean parameter space (i.e. M equals the identity matrix) until the leapfrog trajectory starts to double back and retrace its steps. This is achieved via a tree doubling process building a binary tree, whose leaf nodes correspond to the position-momentum states. The points collected along the trajectory are sampled in a way that preserves detailed balance (Hoffman and Gelman 2014). NUTS uses stochastic optimisation (the primal-dual averaging algorithm) to tune the step size ε in the burn-in phase.

Bayesian optimisation for parameter tuning
In our work we employ Bayesian optimisation (Wang et al. 2013; Mockus et al. 1978) to automatically tune the step size ε and the number of leapfrog steps L in HMC, RMHMC and LDMC. ε and L are optimised by maximising an objective function, which, following Wang et al. (2013), we take to be the expected squared jumping distance (ESJD) normalised by the number of leapfrog steps. The idea of emulation and Bayesian optimisation (Shahriari et al. 2016) is adopted, with optimisation of the Upper Confidence Bound acquisition function (Wang et al. 2013).
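The objective can be illustrated with a short pilot run. Note the simplifications: the sketch below replaces the paper's Bayesian optimisation with a plain grid search over (ε, L), and uses a standard-normal toy target; the grid values and pilot length are illustrative assumptions, only the ESJD-per-step objective matches the text.

```python
import numpy as np

def esjd_per_step(chain, L):
    """Expected squared jumping distance, normalised by the number of leapfrog steps."""
    jumps = np.diff(chain, axis=0)
    return float(np.mean(np.sum(jumps**2, axis=1))) / L

def pilot_chain(eps, L, n_iter, rng, d=2):
    """Short HMC pilot run on a standard-normal toy target, U(theta) = 0.5 theta'theta."""
    U = lambda th: 0.5 * th @ th
    grad_U = lambda th: th
    theta = np.zeros(d)
    chain = [theta.copy()]
    for _ in range(n_iter):
        p = rng.standard_normal(d)
        th = theta.copy()
        pm = p - 0.5 * eps * grad_U(th)      # leapfrog: initial half step
        for _ in range(L):
            th = th + eps * pm
            pm = pm - eps * grad_U(th)
        pm = pm + 0.5 * eps * grad_U(th)     # undo the surplus half step
        dH = (U(th) + 0.5 * pm @ pm) - (U(theta) + 0.5 * p @ p)
        if np.log(rng.uniform()) < -dH:
            theta = th
        chain.append(theta.copy())
    return np.array(chain)

# grid search over (eps, L); the paper instead maximises this objective with
# Bayesian optimisation and a UCB acquisition function
rng = np.random.default_rng(0)
grid = [(eps, L) for eps in (0.05, 0.2, 0.8) for L in (5, 20)]
scores = {(eps, L): esjd_per_step(pilot_chain(eps, L, 300, rng), L) for eps, L in grid}
best_eps, best_L = max(scores, key=scores.get)
```

Normalising the ESJD by L penalises long trajectories that make large jumps only at a proportionally large computational cost.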

HMC coupled with emulation using GPs
GP-HMC (Rasmussen 2003) combines the HMC algorithm with GPs (briefly reviewed in the Supplement, Sect. 1), used for statistical emulation of the log unnormalised posterior distribution (Kennedy and O'Hagan 2001; Conti et al. 2009; Conti and O'Hagan 2010a). The need for using the GP-HMC algorithm over the plain HMC algorithm is illustrated in Algorithm 1, which compares the total number of model (ODE/PDE) evaluations incurred by the GP-HMC and the plain HMC algorithms and shows the differences between the two algorithms. Algorithm 1 is just a conceptual outline; the reader is referred to Algorithms S.1a, S.1b, S.1c and S.1d in the Supplement for a detailed pseudocode.
Algorithm 1 Conceptual outline for Hamiltonian Monte Carlo (HMC), left-hand column, vs HMC coupled with emulation using Gaussian processes (GP-HMC), right-hand column. The total number of model (ODE/PDE) evaluations required for running each algorithm is SL(d + 1) for HMC vs S for GP-HMC, where S is the number of MCMC samples. Algorithm 1 illustrates that HMC running in the original log likelihood space requires many more expensive model evaluations than HMC running in the emulated log likelihood space. This is because the former requires the numerical integration of the differential equations at every leapfrog step throughout the Hamiltonian dynamics, while the latter only requires one single numerical integration at the end of the leapfrog trajectory per HMC sample. An HMC trajectory typically has on the order of L = 100-1000 steps, which, if carried out in the original space, would require on the order of 100 × (d + 1) to 1000 × (d + 1) model evaluations per HMC sample (d: number of parameters); thus a reduction in the computational complexity by about two to three orders of magnitude is obtained if the emulator provides a faithful representation of the log likelihood. The GP-HMC algorithm is guaranteed to satisfy detailed balance with respect to the correct posterior distribution, and in Sect. 4 of the Supplement we give the proof.
The GP-HMC algorithm proceeds as follows:

- Initial design stage. Choose parameter values from a space-filling design in parameter space (e.g. a Latin hypercube, McKay et al. 1979, or a Sobol sequence, Bratley and Fox 1988), and perform an expensive model evaluation for each parameter vector to obtain the true log likelihood value. These training points are used to create a GP emulator.

- Exploratory phase. 'Learn' the form of the posterior distribution by running HMC in the space of the emulated posterior distribution, with the proposed point at the end of the HMC trajectory being subject to an M-H step based on the true posterior distribution (see Sect. 4 of the Supplement for more details on this). If the proposed parameter vector is accepted, add it, together with the corresponding log likelihood value, to the list of existing training points for the emulator, which is subsequently refined by maximisation of the log marginal likelihood with respect to the covariance hyperparameters (see Eq. 2 or 20 in the Supplement). Training points collected in the initial design stage are gradually removed, as they tend to come from low posterior density regions and can bias the inference results. Points in low probability density regions could encourage overfitting by driving the GP lengthscale to small values, due to large changes in output space happening over small distances in input space. Following Rasmussen (2003), we set the emulated 'potential energy' of the HMC algorithm (see Sect. 2.2) to

Ẽ(θ*) = [E(f(θ*)|D) − √var(f(θ*)|D)] / (2σ²) − log p(θ*).   (4)

Here f(·) is the emulated RSS function, E(f(θ*)|D) is the GP posterior predictive mean given the training points D (see Eq. 4 or 12 in the Supplement) and var(f(θ*)|D) is the GP posterior predictive variance (see Eq. 5 or 13 in the Supplement) for the RSS at unseen parameter configurations θ*, conditional on the training points D. This drives the exploration into regions of high posterior probability (low value of E(·)) or high uncertainty (large value of √var(·)).
If √var(·) > 3 along the HMC trajectory, the algorithm has stepped into a region of high uncertainty, where the GP needs to be further trained; the simulation is therefore stopped prematurely, before reaching the end of the trajectory, and an expensive model evaluation is performed at this point.
- Sampling phase. Draw parameter samples by running HMC in the space of the emulated posterior distribution defined by the emulator constructed in the exploratory phase (the emulator is no longer refined at this stage). Proposed points are accepted/rejected in an M-H step according to the simulator (see Sect. 4 of the Supplement for more details). If the rejection rate is too high (e.g. above 35%), this indicates that the emulated posterior distribution is not an accurate enough representation of the original posterior distribution, and an extension of the exploratory phase is needed. Following Rasmussen (2003), we set the emulated 'potential energy' in the HMC algorithm to

Ẽ(θ*) = E(f(θ*)|D) / (2σ²) − log p(θ*),   (5)

where Eq. (5) gives the negative of the log unnormalised posterior probability.
We note that Eq. (4) is a modified version of Eq. (5), used in the exploratory phase only to aid exploration, by subtracting a term dependent on the posterior predictive variance. The effect of this term is to shift the trade-off between exploration and exploitation towards exploration. A wider exploration during the exploratory phase is advisable to facilitate a better global coverage of the configuration space. This is a standard trick widely used in Bayesian optimisation; see e.g. Shahriari et al. (2016).
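The exploration bonus can be sketched with a toy GP emulator of the RSS. This is a minimal illustration, not the paper's implementation: it assumes an RBF kernel with fixed hyperparameters, and assumes the uncertainty bonus enters as the predictive standard deviation subtracted from the predictive mean before dividing by 2σ².

```python
import numpy as np

def gp_fit_predict(X, y, Xs, ell=1.0, sf2=1.0, noise=1e-6):
    """GP posterior predictive mean/variance with an RBF kernel (zero prior mean)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return sf2 * np.exp(-0.5 * d2 / ell**2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks, Kss = k(Xs, X), k(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))
    return mean, np.maximum(var, 0.0)

def emulated_potential(theta, X, y, sigma2, log_prior, explore=True):
    """Emulated potential energy: GP predictive mean of the RSS, with an
    uncertainty bonus subtracted in the exploratory phase (assumed form)."""
    mean, var = gp_fit_predict(X, y, theta[None, :])
    bonus = np.sqrt(var[0]) if explore else 0.0
    return (mean[0] - bonus) / (2.0 * sigma2) - log_prior(theta)
```

Because the bonus is non-negative, the exploratory potential never exceeds the sampling-phase potential, so trajectories are additionally attracted to poorly trained regions of the emulator.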

Delayed acceptance
In several studies (Golightly et al. 2015; Sherlock et al. 2017) it has been hypothesized that the delayed acceptance scheme could potentially speed up simulations. This scheme is a two-stage acceptance procedure, with two M-H acceptance/rejection steps. The first step is a computationally fast pre-filter step, in which proposed values are accepted/rejected based on the emulator (first Eq. in (6)). Detailed balance with respect to the true posterior distribution is ensured through the second M-H step, which invokes the original posterior distribution for the acceptance/rejection of those samples accepted in the first step (second Eq. in (6)).

Table 1 Algorithm abbreviations (x, y stand for any of HMC, NUTS, RMHMC, LDMC):
- GP-x: combination of algorithm x (listed above) and Gaussian processes
- DA-GP-x: GP-x algorithm (x listed above) coupled with delayed acceptance
- noDA-GP-x: standard (without delayed acceptance) GP-x algorithm (x listed above)
- DA-GP-hybrid-x-y: hybrid algorithm between DA-GP-x and DA-GP-y (x, y listed above)
The two acceptance probabilities are

α₁(θ*|θ) = 1 ∧ [p̃(y|θ*) p(θ*) q(θ|θ*)] / [p̃(y|θ) p(θ) q(θ*|θ)],
α₂(θ*|θ) = 1 ∧ [p(y|θ*) p̃(y|θ)] / [p(y|θ) p̃(y|θ*)],   (6)

where q(·) is the proposal density and a ∧ b = min{a, b}. If a proposal move is rejected based on the emulator in the first step, the computationally expensive second step never has to be carried out.
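The two-stage logic can be sketched for a generic proposal. The paper applies delayed acceptance to HMC-type proposals; the sketch below uses a plain random-walk proposal for clarity, but the structure of the two stages (cheap pre-filter, then exact correction) is the same.

```python
import numpy as np

def da_mh_step(theta, log_post_cheap, log_post_exact, propose, log_q, rng):
    """One delayed-acceptance M-H step.
    Stage 1 screens the proposal with the cheap (emulated) posterior;
    stage 2 corrects with the exact posterior, so detailed balance with
    respect to the exact posterior holds. Returns (state, n_exact_evals)."""
    theta_star = propose(theta, rng)
    # stage 1: cheap pre-filter based on the emulated posterior
    log_a1 = (log_post_cheap(theta_star) + log_q(theta, theta_star)
              - log_post_cheap(theta) - log_q(theta_star, theta))
    if np.log(rng.uniform()) >= min(0.0, log_a1):
        return theta, 0            # rejected cheaply: no exact evaluation needed
    # stage 2: exact correction (ratio of exact and cheap posterior ratios)
    log_a2 = (log_post_exact(theta_star) - log_post_exact(theta)
              - log_post_cheap(theta_star) + log_post_cheap(theta))
    if np.log(rng.uniform()) < min(0.0, log_a2):
        return theta_star, 1
    return theta, 1                # one exact evaluation used either way
```

The counter of exact evaluations makes the paper's point quantifiable: savings arise only from stage-1 rejections, so if the first stage rarely rejects (as with well-tuned gradient-driven proposals), DA brings little benefit.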

Methodological contribution
Our methodological contributions are as follows. Firstly, we generalise Rasmussen's GP-HMC algorithm (Rasmussen 2003) to more modern HMC variants, namely GP-NUTS, GP-RMHMC and GP-LDMC (see abbreviations in Table 1), include formal detailed balance proofs, and show how these subsume the proof of the GP-HMC algorithm as a limiting case. Secondly, we extend GP-HMC and its variations by including DA, with formal detailed balance proofs for the following algorithms: DA-GP-HMC, DA-GP-NUTS, DA-GP-RMHMC and DA-GP-LDMC (see abbreviations in Table 1). Thirdly, we assess the computational efficiency and accuracy of all these emulation methods in simulation studies. While HMC coupled with emulation has been proposed in the literature before (Rasmussen 2003; Lan et al. 2016), our study is the first to provide, collate and unify detailed balance proofs in a theoretical framework for all algorithms listed in Table 1. Section 4 in the Supplement offers detailed balance proofs for HMC, RMHMC and LDMC, and we show how the former two are limiting cases of the latter. The RMHMC proofs (Sects. 4.3 and 4.4 in the Supplement) are a special case of the LDMC proofs (Sects. 4.5 and 4.6 in the Supplement), with the proposal probability ratio set to 1 (rather than the Jacobian of the variable transformation). The HMC proofs (Sects. 4.1 and 4.2 in the Supplement) are an even more specialised case, with a constant mass matrix. We mention that Lan et al. (2015) prove detailed balance for the vanilla LDMC algorithm, and our contribution is to extend this proof to LDMC with emulation (Sects. 4.5 and 4.6 in the Supplement). Moreover, we extend the DA proof in Sherlock et al. (2017) from random-walk M-H to HMC. These proofs set the basis for the more complex proof of NUTS coupled with emulation (with and without DA, in Sects. 4.7 and 4.8 of the Supplement), which we regard as our main theoretical contribution.
Throughout this work, ergodicity of the system defined by the various MCMC samplers is assumed to hold. Ergodicity could be proven by adapting the proofs in Livingstone et al. (2019), Banterle et al. (2019), Sherlock et al. (2017) and Conrad et al. (2016) to the generalised scenario where proposal moves are based on an emulated log likelihood, but this is beyond the scope of the present paper.

Brief discussion on efficiency for all algorithms proposed
The noDA-GP-NUTS algorithm (see abbreviations in Table 1) requires evaluation of the differential equations several times (equal to the number of tree doublings, i.e. tree height) along the trajectory before making a proposal; see the proof in Sect. 4.7 in the Supplement, in particular Eq. (37), which shows that the simulator potential function E(θ ) needs evaluating for every tree doubling. In contrast, DA-GP-NUTS only evaluates the ODEs/PDEs once at the end of the trajectory, for the final proposed point (as do all the other algorithms investigated: noDA-GP-HMC, DA-GP-HMC, noDA-GP-RMHMC, DA-GP-RMHMC, noDA-GP-LDMC, DA-GP-LDMC-see abbreviations in Table 1). This implies that the noDA-GP-NUTS requires a much larger number of forward evaluations (roughly one order of magnitude larger) than the DA-GP-NUTS. For this reason, we did not implement this algorithm. We employ all the other algorithms: noDA-GP-HMC (which is the standard GP-HMC), DA-GP-HMC, DA-GP-NUTS, noDA-GP-RMHMC, DA-GP-RMHMC, noDA-GP-LDMC, DA-GP-LDMC (Table 1) on a number of ODE/PDE problems, presented in Sect. 4.

ODE/PDE test examples
This section describes the test examples on which we applied the emulation methods under consideration.

Sinusoidal example
We introduce a linear ODE model, a sinusoidal toy problem, defined via

dx/dt = AB cos(Bt + C).

This ODE model has the analytical solution

x(t) = A sin(Bt + C),   (7)

where A is the amplitude, 2π/B the period and C the phase shift; these are the unknown parameters, which are estimated from the data. We only considered one period, t ∈ [0, 2π], to ensure unimodality of the likelihood. We simulated data at 50 equally spaced time points in the range [0, 2π]. We set the true parameter values as A = 3, B = 1 and C = 0.05. We added iid additive Gaussian noise to the clean data generated using Eq. (7), and we set the variance of the noise σ² to 0.12 (signal-to-noise ratio, SNR, of 80). An illustration of the data is given in panel (a) of Fig. 1.
The task in the assessment of the proposed sampling schemes was to estimate the parameters A, B, C. The noise variance was kept fixed at its true value; this simplification was relaxed for the subsequent examples. We placed a Gaussian prior distribution on the log of the parameters to ensure positivity: log A ∼ N(log 4, 0.02), log B ∼ N(log 1, 0.01), log C ∼ N(log 0.05, 0.05). These priors were chosen to encourage unimodality of the posterior distribution.
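Data generation and the RSS-based Gaussian log likelihood for this toy model might look as follows; the analytical solution x(t) = A sin(Bt + C) is assumed from the parameter descriptions in the text, and the random seed is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = 3.0, 1.0, 0.05          # true parameter values from the text
sigma2 = 0.12                      # noise variance as stated in the text
t = np.linspace(0.0, 2.0 * np.pi, 50)
clean = A * np.sin(B * t + C)      # assumed analytical solution x(t) = A sin(Bt + C)
y = clean + rng.normal(0.0, np.sqrt(sigma2), t.shape)

def log_likelihood(theta, t, y, sigma2):
    """Gaussian log likelihood based on the residual sum-of-squares S(theta)."""
    A_, B_, C_ = theta
    S = np.sum((y - A_ * np.sin(B_ * t + C_))**2)
    return -0.5 * len(y) * np.log(2.0 * np.pi * sigma2) - S / (2.0 * sigma2)
```

Because the model solution is available in closed form here, each likelihood evaluation is cheap; the emulation machinery only pays off for the numerically integrated models in the later examples.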

FitzHugh-Nagumo
The FitzHugh-Nagumo model, developed by FitzHugh (1961) and Nagumo et al. (1962) to model the behaviour of spike potentials in the giant axon of neurons, is defined as a nonlinear ODE model:

dV/dt = γ(V − V³/3 + R),  dR/dt = −(V − α + βR)/γ.   (8)

The system describes the reciprocal dependency between the voltage V across an axon membrane (characterising the self-excitation of the axon membrane) and the recovery variable R, acting as outward currents (providing a feedback response). This example is a widely used benchmark problem, the model having been used as a mathematical representation for cardiac dynamics (Göktepe and Kuhl 2009) and neuro-degenerative diseases (Brüggemeier et al. 2014). This example deviates from the regular sinusoidal oscillations of the first toy example, with Eq. (8) defining a highly nonlinear likelihood surface (Ramsay et al. 2007) as the three parameters α, β, γ are varied. The FitzHugh-Nagumo model has proven to be a very challenging problem for alternative inference methods, e.g. based on gradient matching (Campbell and Steele 2012), thus making it an ideal test bed for our inference procedure. We followed Campbell and Steele (2012) and generated data from the model with the parameter values used in that study. We used 100 time points equally spaced in the interval [0, 20] ms. As in Campbell and Steele (2012), we added iid additive Gaussian noise to the data. In our estimation procedure we learnt five parameters, α, β, γ, σ²_V, σ²_R, from data on the two 'species' V and R. Following the study in Chapter 8 of Lawrence et al. (2010), we set a Gamma(2,1) prior for the parameters α, β, γ, while for the noise variances σ²_V, σ²_R we chose a weakly informative Inverse-Gamma(0.001, 0.001) prior.
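Simulating the model is straightforward with a fixed-step integrator. The sketch below uses the standard FitzHugh-Nagumo parameterisation with commonly used benchmark values α = β = 0.2, γ = 3 and initial state (V, R) = (−1, 1); these values, the sign conventions, and the RK4 scheme are assumptions for illustration, not necessarily the exact setup of the paper.

```python
import numpy as np

def fhn_rhs(state, alpha, beta, gamma):
    """FitzHugh-Nagumo right-hand side (standard parameterisation; assumed form)."""
    V, R = state
    dV = gamma * (V - V**3 / 3.0 + R)
    dR = -(V - alpha + beta * R) / gamma
    return np.array([dV, dR])

def rk4_solve(rhs, y0, t, args=()):
    """Classic fixed-step 4th-order Runge-Kutta integrator."""
    y = np.empty((len(t), len(y0)))
    y[0] = y0
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        k1 = rhs(y[i], *args)
        k2 = rhs(y[i] + 0.5 * h * k1, *args)
        k3 = rhs(y[i] + 0.5 * h * k2, *args)
        k4 = rhs(y[i] + h * k3, *args)
        y[i + 1] = y[i] + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

t = np.linspace(0.0, 20.0, 2000)   # fine integration grid; observe on 100 points
traj = rk4_solve(fhn_rhs, np.array([-1.0, 1.0]), t, args=(0.2, 0.2, 3.0))
```

Every likelihood evaluation during MCMC requires such a numerical solve (plus sensitivities for the gradient), which is exactly the cost the emulator is designed to avoid along the trajectory.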

Biochemical signalling pathway
The biochemical signalling pathway model, characterised by the nonlinear ODE system in Eq. (9), uses the Michaelis-Menten kinetic law to describe the activation of a protein R into its active form R_pp in the presence of an enzyme S, followed by the degradation of the enzyme into its inactive form D (see panel (a) in Fig. 2 for a graphical representation):

dS/dt = −k₁S,  dD/dt = k₁S,
dR/dt = −V₁SR/(K_m₁ + R) + V₂R_pp/(K_m₂ + R_pp),
dR_pp/dt = V₁SR/(K_m₁ + R) − V₂R_pp/(K_m₂ + R_pp).   (9)
Cell signalling has been used in cancer modelling (Martin 2004) and the modelling of neuro-degenerative diseases (Kim and Choi 2010). The system of equations in (9) depends on five kinetic parameters, k₁, V₁, K_m₁, V₂, K_m₂, which control how fast the biochemical processes involving the 'species' S, D, R and R_pp take place. We followed the study in Chapter 8 of Lawrence et al. (2010) and generated data from the model with the parameter values and initial conditions used therein. We used 20 data points within the interval [0, 100] s, measured at time points {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 14, 18, 20, 25, 30, 40, 60, 80, 100}. We added iid additive Gaussian noise to the data, with a variance of 0.0004 for all 'species' (SNR_S = SNR_D = 270, SNR_R = SNR_Rpp = 130). A depiction of the data can be seen in Fig. 3.

Real-world application: fluid-dynamics model of the pulmonary blood circulation
One-dimensional (1D) fluid-dynamics models are used to model the pulmonary blood circulation (Qureshi et al. 2017, 2018), and enable the non-invasive diagnosis of pulmonary hypertension (high blood pressure in the pulmonary system, i.e. the blood vessel network connected to the right ventricle of the heart). If left untreated, pulmonary hypertension may lead to coronary artery disease, stroke and heart failure. Currently, the diagnosis process requires knowledge of the pulmonary blood pressure, which can only be obtained invasively using right-heart catheterisation (unlike for the systemic circulation, i.e. the blood vessel network connected to the left ventricle of the heart, for which a sphygmomanometer may be used non-invasively). The ultimate goal is to combine magnetic resonance imaging (MRI) with mathematical modelling and statistical inference to develop a non-invasive alternative. The 1D fluid-dynamics model (Qureshi et al. 2017, 2018) is described by a set of nonlinear PDEs expressing conservation of mass and momentum in each vessel, together with a constitutive pressure-area relation:

∂A/∂t + ∂q/∂x = 0,
∂q/∂t + ∂/∂x (q²/A) + (A/ρ) ∂p/∂x = −(2πμr)/(ρδ) (q/A),   (10)
p(x, t) = p₀ + s (1 − √(A₀/A)),

where x (cm) and t (s) are the longitudinal spatial and temporal coordinates, p (mmHg) is the blood pressure, q (ml/s) is the blood flow rate and A (cm²) is the cross-sectional area.
A(x, t) = πr(x, t)^2, where r(x, t) (cm) is the vessel radius. Also, A_0 is the unstressed vessel cross-sectional area, p_0 (mmHg) is the transmural pressure, s (mmHg) is the arterial network stiffness (kept constant across the network of vessels), ρ = 1.055 g/ml is the blood density, μ = 0.049 g/(cm s) is the viscosity, and δ = √(μT/(2πρ)) cm is the boundary-layer thickness of the velocity profile. The boundary conditions are specified as follows. As inlet boundary condition, an inlet flow at the main pulmonary artery (MPA) was prescribed, measured with ultrasound (see the left panel in Fig. 2b). At the vessel junctions, conservation of flow (q_p = q_d1 + q_d2) and continuity of pressure (p_p = p_d1 = p_d2) were ensured; here p denotes the parent vessel, and d1 and d2 denote the daughter vessels.
As outflow boundary conditions, three-element Windkessel models (two resistors, R_1 and R_2, and a capacitor, C) were attached at every terminal artery. In the Windkessel model, Z(ω) is the impedance, ω is the angular frequency, T is the length of the cardiac cycle, R_1, R_2 (mmHg s/ml) are the two resistances, and C (ml/mmHg) is the capacitance.
The three Windkessel elements (R_1, R_2, C) vary across the different terminal arteries. The nominal values for every terminal vessel j were calculated first (R_01^j, R_02^j, C_0^j in Eq. (11)), see Qureshi et al. (2018) for details; then global scaling factors r_1, r_2, c for these estimates were introduced, and they were kept constant across all terminal arteries. We used measured blood pressure (a time series consisting of 512 temporal pressure points) in the MPA to infer the various biophysical parameters: s, r_1, r_2, c. An example of the pulmonary blood pressure time series in a healthy mouse is given in the right panel of Fig. 2b.
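As an illustration of the outflow boundary condition, the input impedance of a standard three-element Windkessel is commonly written Z(ω) = R_1 + R_2/(1 + iωR_2C). The following Python sketch is our illustration only (the paper's implementation is in Matlab/C++, and its exact formulation in Eq. (11) may be expressed differently); the parameter values are arbitrary:

```python
import numpy as np

def windkessel_impedance(omega, R1, R2, C):
    """Input impedance of a three-element Windkessel (proximal resistance R1,
    distal resistance R2, compliance C) at angular frequency omega.
    Standard textbook form; assumed, not taken from the paper."""
    return R1 + R2 / (1.0 + 1j * omega * R2 * C)

# At omega = 0 (steady flow) the impedance reduces to the total resistance R1 + R2;
# at very high frequency it tends to R1, as the capacitor shorts out R2.
Z0 = windkessel_impedance(0.0, R1=0.5, R2=2.0, C=0.1)
Zinf = windkessel_impedance(1e9, R1=0.5, R2=2.0, C=0.1)
```

The two limits give a quick sanity check on any implementation of the terminal boundary condition.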

Software
Our statistical methods were implemented in Matlab (Mathworks, Natick, MA) and simulations were run on a RedHat Enterprise Linux 6 machine with Intel(R) Xeon(R) CPU E5-2680 v2 2.80 GHz and 32 GB RAM.
We constructed the GP models using the GPstuff toolbox (Vanhatalo et al. 2013), and the MCMC convergence diagnostics (multivariate potential scale reduction factor, MPSRF; Brooks and Gelman 1998) and ESS (Kass et al. 1998) made use of functions from the MCMC toolbox (Laine 2007). To run the No U-turn sampler, Riemann Manifold HMC and Lagrangian Dynamical MC, we used the Matlab implementations developed by the authors of the papers where these algorithms were proposed (Hoffman and Gelman 2014; Girolami and Calderhead 2011; Lan et al. 2015).
For the numerical integration of the ODEs, we used the ode15s Matlab solver for the FitzHugh-Nagumo model, and the ode23 Matlab solver for the biochemical signalling pathway model. The PDEs of the 1D fluid-dynamics model were numerically integrated using a two-step Lax-Wendroff scheme (Lax and Wendroff 1960) implemented in C++ by Qureshi et al. (2018) and Olufsen et al. (2000). We note that for the two synthetic examples (FitzHugh-Nagumo and biochemical pathway), we used the same numerical integrator for data generation and for inference. Given that we do not allow for potential signal distortion caused by the numerical integration, our inference results are in principle over-optimistic. However, since our dynamical systems have well-behaved (as opposed to chaotic) attractors (a periodic limit cycle for the FitzHugh-Nagumo example and a stable fixed point for the biochemical pathway example), the effect of the numerical integration is practically negligible.
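As an illustration of this integration step, the Python sketch below solves the standard FitzHugh-Nagumo equations with SciPy's stiff-capable BDF method (analogous in spirit to Matlab's ode15s). The parameterisation (a, b, c), initial conditions and tolerances are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np
from scipy.integrate import solve_ivp

def fitzhugh_nagumo(t, y, a, b, c):
    # Standard FitzHugh-Nagumo form (assumed; the paper's parameterisation
    # may differ): voltage variable v and recovery variable r.
    v, r = y
    dv = c * (v - v**3 / 3.0 + r)
    dr = -(v - a + b * r) / c
    return [dv, dr]

# BDF is an implicit multi-step method suited to stiff ODEs, much like ode15s.
sol = solve_ivp(fitzhugh_nagumo, t_span=(0.0, 10.0), y0=[-1.0, 1.0],
                args=(0.2, 0.2, 3.0), method='BDF',
                t_eval=np.linspace(0.0, 10.0, 50), rtol=1e-8, atol=1e-8)
```

`sol.y` then holds the two 'species' trajectories at the 50 requested time points, which is the forward-model output the likelihood (and hence every MCMC iteration) depends on.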

GP kernel
For the GP emulator of the RSS, we used a squared exponential kernel (see Sect. 4.2 in Rasmussen and Williams 2005), which is infinitely differentiable, allowing us to analytically compute the first-order derivatives of the GP posterior predictive mean and variance (Eqs. 4, 5 or 12, 13 in the Supplement) needed in HMC, and the second- and third-order derivatives of the GP posterior predictive mean, needed in RMHMC and LDMC. We note that differentiation is a linear operator, so the derivative of a GP is again a GP for differentiable kernels; see Sect. 9.4 in Rasmussen and Williams (2005).
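To illustrate why a differentiable kernel matters here, the sketch below (Python, our illustration; the paper uses GPstuff in Matlab) computes the analytic first derivative of a GP posterior predictive mean under a squared exponential kernel and checks it against finite differences. The training data, length-scale, signal variance and jitter are all illustrative assumptions:

```python
import numpy as np

def se_kernel(x1, x2, sigma2=1.0, ell=1.0):
    # Squared exponential kernel k(x, x') = sigma2 * exp(-(x - x')^2 / (2 ell^2)).
    d = x1[:, None] - x2[None, :]
    return sigma2 * np.exp(-0.5 * d**2 / ell**2)

rng = np.random.default_rng(0)
X = np.linspace(-3.0, 3.0, 20)
y = np.sin(X) + 0.01 * rng.standard_normal(20)

K = se_kernel(X, X) + 1e-6 * np.eye(20)     # small jitter for numerical stability
alpha = np.linalg.solve(K, y)               # K^{-1} y

xs = np.array([0.5])
k_star = se_kernel(xs, X)                   # 1 x 20 cross-covariance
# Analytic derivative of the posterior mean m(x*) = k(x*, X) K^{-1} y:
# d/dx* k(x*, x_i) = -(x* - x_i)/ell^2 * k(x*, x_i)   (here ell = 1)
dk = -(xs[:, None] - X[None, :]) * k_star
dmean = dk @ alpha

# Finite-difference check of the analytic derivative
eps = 1e-6
m = lambda xv: se_kernel(np.array([xv]), X) @ alpha
fd = (m(xs[0] + eps) - m(xs[0] - eps)) / (2 * eps)
```

The same pattern extends to the second- and third-order derivatives needed by RMHMC/LDMC, since each derivative of the SE kernel is available in closed form.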

GP mean
For the sinusoidal, FitzHugh-Nagumo and pulmonary models we used a zero mean GP prior, while for the biochemical pathway model we used a second-order polynomial for the mean basis functions in the 5D parameter space (for reasons discussed in Sect. 6.3). Further relevant implementation details are offered in the Supplement, Sect. 5.1.

Multiple GPs
For problems with multiple 'species' (biochemical pathway and FitzHugh-Nagumo examples), we emulated the RSS for every 'species' independently; Eq. (2) then contains one emulator term per 'species', where J is the total number of 'species'.

Data sets
For all three synthetic examples (sinusoidal, FitzHugh-Nagumo and biochemical pathway) we generated 10 synthetic data sets with different noise instantiations. For the pulmonary example we had one single measured data set.

GP-HMC phases
Initial design stage At the initial stage of the GP-HMC algorithm (Rasmussen 2003), the number of training points can be determined by quantifying the efficiency of the MCMC sampler at the beginning of the exploratory phase (the acceptance rate of the sampler run using the initial emulator should not be too low, e.g. below 20%). The Supplement, Sect. 5.3, offers more implementation details.

Exploratory phase In the exploratory phase of GP-HMC (Rasmussen 2003) we tried to ensure that a minimum number of training points is stored (to boost computational efficiency), while preserving a high enough emulator accuracy (as quantified by GP diagnostics (Bastos and O'Hagan 2009)) and a high acceptance rate in the sampling phase (>65%; Neal 2011). Initially 100 × d (d: parameter dimensionality) training points can be used, and if the acceptance rate in the sampling phase is sub-optimal (<65%), the exploratory phase can be extended. Generally this number depends on the parameter dimensionality and the complexity of the posterior distribution. For example, we found the rule of thumb of 10 × d presented in a Bayesian optimisation context (Jones et al. 1998) inadequate, as it provided a suboptimal emulator in an MCMC context.
The noise variance was kept fixed in the first part of the exploratory phase to enable learning the parameters while avoiding changes in the curvature of the log likelihood induced by varying the noise variance (last term of Eq. (1)). An informed initial guess can be calculated based on the RSS value obtained from fitting a nonlinear regression model to the data. The acceptance rate in the exploratory phase of the algorithm can then act as an objective metric to assess the appropriateness of the σ^2 value. If the acceptance rate is too low, e.g. <10%, σ^2 is too small, making the likelihood in Eqs. (1) and (2) peaked and causing a large discrepancy between the emulated and original likelihood. If the acceptance rate is too high, e.g. >90%, σ^2 should be reduced to allow the sampler to focus on the area of interest and avoid flattening out the likelihood landscape. The σ^2 value can be sequentially altered by, e.g., 10%, and the exploratory phase re-run while monitoring the acceptance rate. We note here that we allow σ^2 to depend on the algorithmic implementation in the exploratory phase only, when the emulator is not a faithful representation of the simulator. In the sampling phase of the algorithm, σ^2 is drawn from the conditional posterior distribution, conditional on the data and dependent on the model. Further implementation details for the exploratory phase of the algorithm are given in the Supplement, Sect. 5.3.

Sampling phase In the sampling phase of GP-HMC (Rasmussen 2003) we allowed for 100 samples as burn-in (chosen to ensure that MPSRF ≤ 1.1 (Brooks and Gelman 1998)), and 2000 samples were subsequently drawn and used for inference. Besides sampling the model parameters, we also sampled the noise variance in a Gibbs step.
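The Gibbs step for the noise variance can be sketched as follows, assuming an iid Gaussian likelihood with a conjugate inverse-gamma prior on σ^2 (the hyperparameters a0, b0 below are illustrative; the paper does not state its prior here). The conditional posterior is then inverse-gamma(a0 + n/2, b0 + RSS/2):

```python
import numpy as np

def gibbs_noise_variance(rss, n, a0=1e-3, b0=1e-3, rng=None):
    """One Gibbs draw of the noise variance sigma^2 under an iid Gaussian
    likelihood and a conjugate inverse-gamma(a0, b0) prior (hyperparameters
    assumed, not taken from the paper). Samples from
    inverse-gamma(a0 + n/2, b0 + RSS/2) by inverting a gamma draw."""
    rng = rng or np.random.default_rng()
    shape = a0 + 0.5 * n
    scale = b0 + 0.5 * rss
    # If X ~ Gamma(shape, rate=scale) then 1/X ~ InvGamma(shape, scale)
    return 1.0 / rng.gamma(shape, 1.0 / scale)

rng = np.random.default_rng(0)
draws = np.array([gibbs_noise_variance(rss=4.0, n=100, rng=rng)
                  for _ in range(20000)])
```

The empirical mean of such draws should approach the inverse-gamma mean, scale/(shape − 1), which gives a cheap correctness check for the step.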
Every algorithm (DA-GP-HMC, noDA-GP-HMC, DA-GP-NUTS, DA-GP-RMHMC, noDA-GP-RMHMC, DA-GP-LDMC, noDA-GP-LDMC) was run 10 times from different random number generator seeds and different starting values for the parameters, selected from the points collected in the exploratory phase, to make it less likely that we start in a low probability region. For each of the 10 simulations, we recorded the ESS for each of the d parameters (Eq. (12), with d the parameter dimensionality), and, using Eq. (12), we calculated the minimum, median and maximum ESS across parameters (Eq. (13)). Details about the implementation of Bayesian optimisation for parameter tuning of the various algorithms can be found in Sect. 5.4 of the Supplement, and details about the required parameter transformations are in Sect. 5.5 of the Supplement.
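These per-parameter ESS summaries can be sketched in Python as follows, using a simple truncated-autocorrelation ESS estimator, ESS = N / (1 + 2 Σ ρ_t); the paper uses the MCMC toolbox of Laine (2007), whose estimator may differ in detail:

```python
import numpy as np

def ess(chain):
    """Effective sample size of a 1D chain, ESS = N / (1 + 2 * sum(rho_t)),
    truncating the autocorrelation sum at the first non-positive lag
    (a common simplification of Kass et al. 1998-style estimators)."""
    x = np.asarray(chain, float) - np.mean(chain)
    n = len(x)
    # Sample autocorrelation at lags 0..n-1, each lag normalised by its count.
    acf = np.correlate(x, x, mode='full')[n - 1:] / (np.arange(n, 0, -1) * np.var(chain))
    tau = 1.0
    for t in range(1, n):
        if acf[t] <= 0:
            break
        tau += 2.0 * acf[t]
    return n / tau

def ess_summary(samples):
    """Min, median and max ESS across the d columns of an (N x d) sample array,
    mirroring the Eq. (13) summaries."""
    vals = np.array([ess(samples[:, j]) for j in range(samples.shape[1])])
    return vals.min(), np.median(vals), vals.max()

rng = np.random.default_rng(0)
iid = rng.standard_normal(5000)            # nearly uncorrelated chain
ar = np.empty(5000)                        # strongly autocorrelated AR(1) chain
ar[0] = 0.0
for t in range(1, 5000):
    ar[t] = 0.95 * ar[t - 1] + rng.standard_normal()
```

For the iid chain the ESS is close to N, while the AR(1) chain's ESS is far smaller, which is exactly the gap the ESS/N efficiency measures in the comparisons below are designed to expose.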

Setting the mass matrix for RMHMC/LDMC
For the second-order algorithms (RMHMC, LDMC), obtaining the metric tensor is necessary. This is not trivial when emulating the objective function. Starting from the likelihood in Eq. (1), we may take the log and obtain the corresponding first- and second-order derivatives. By noting that E(y|θ) = m(θ) and taking the expectation with respect to y|θ of the negative term in Eq. (16), one would recover the expected Fisher information matrix. However, given that we emulate the expression ∑_{i=1}^n (y_i − m_i(θ))^2, we do not know the individual terms inside the sum, hence taking the expectation of the expression in Eq. (16) is impossible. Instead, we can set M to be the observed Fisher information matrix (the matrix of negative second-order derivatives of the log likelihood), plus the negative Hessian matrix of the log prior.
However, this matrix is not guaranteed to be positive definite. In that case, we set M as in Eq. (19), where λ = 0 leads to the empirical Fisher information matrix, which is positive semi-definite by construction. Thus the form in Eq. (19) ensures that the mass matrix is at least positive semi-definite for a small enough λ ∈ [0, 1]. This form is motivated by the fact that the two terms are identical in expectation over y|θ for a constant prior, and may lead to a lower condition number of the mass matrix (hence numerical stability) than setting λ exactly equal to 0. We found that setting the metric tensor as per Eq. (19) does not work for all parameter values tried by the sampler. In some cases, parameter values along the trajectory take extreme, unrealistic values, subsequently leading to numerical instabilities in the ODE solver, which is called at the end-of-trajectory value. In that case, the approach taken is akin to a quasi-Newton method in optimisation (Broyden 1972): if at any point along the trajectory the mass matrix is numerically unstable, the simulation within the trajectory is stopped prematurely before reaching the end. A new simulation is started from the beginning of the trajectory and the HMC algorithm is used instead of RMHMC or LDMC for that particular iteration. The resulting posterior samples will have been drawn using a hybrid version of HMC and RMHMC or LDMC, and the proposed algorithm is called hybrid-HMC-RMHMC or hybrid-HMC-LDMC. This algorithm naturally satisfies detailed balance since each sample is drawn using a valid sampler (either HMC or RMHMC/LDMC).
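The fallback logic can be sketched as follows (Python, our illustration). The exact form of Eq. (19) is not reproduced here, so the convex combination M(λ) = λ·H_obs + (1−λ)·H_emp of the observed and empirical Fisher information matrices is an assumption; positive definiteness is tested via a Cholesky factorisation, and `None` signals a fall back to plain HMC for that iteration:

```python
import numpy as np

def stable_mass_matrix(H_obs, H_emp, lambdas=(1.0, 0.5, 0.1, 0.0), cond_max=1e15):
    """Return the first candidate M(lam) = lam*H_obs + (1-lam)*H_emp that is
    positive definite (Cholesky succeeds) and well conditioned; return None to
    signal that this iteration should fall back to plain HMC.
    The convex-combination form and the lambda schedule are assumptions."""
    for lam in lambdas:
        M = lam * H_obs + (1.0 - lam) * H_emp
        try:
            np.linalg.cholesky(M)          # fails if M is not positive definite
        except np.linalg.LinAlgError:
            continue
        if np.linalg.cond(M) < cond_max:   # reject numerically unstable metrics
            return M
    return None

# H_obs is indefinite here, so lam = 1 fails and the combination rescues it.
H_obs = np.array([[-1.0, 0.0], [0.0, 2.0]])
H_emp = 2.0 * np.eye(2)                    # empirical Fisher info, PSD by construction
M = stable_mass_matrix(H_obs, H_emp)
```

The `None` branch corresponds to the premature trajectory stop described above, after which the iteration is re-run with HMC.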

Sampler consistency checks
To check the mathematical and coding correctness of the samplers derived, we implemented the Geweke consistency test (Geweke 2004), reviewed in the Supplement, Sect. 3.

Numerical results
For all test examples and emulation methods implemented in our study we checked for MCMC convergence using MPSRF, and an MPSRF ≤ 1.1 was recorded for all simulations, thus there was no evidence of lack of convergence.

Geweke consistency test: mathematical and coding correctness of the sampler
We checked the mathematical and coding correctness of the samplers' implementation using the Geweke consistency test (Geweke 2004). This was done for the linear ODE sinusoidal example, to ensure that no high computational costs attributed to a large number of ODE numerical integrations were incurred. The results for three samplers are shown in Fig. 1 (see Table 1 for abbreviation explanations). The points lie on the equality line, and we take this as evidence that the implementation of the three samplers is correct. We extrapolate this conclusion to the other samplers, (no)DA-GP-RMHMC and (no)DA-GP-LDMC, since the proofs in Sect. 4 of the Supplement show that they are a straightforward extension of (no)DA-GP-HMC.

DA versus noDA
Next, we investigated whether the DA scheme offers any gains in computational efficiency. To quantify efficiency, for each of the 10 data sets, we calculated the median ESS across all parameters, as defined in Eq. (13), for all DA Hamiltonian/Lagrangian algorithms and their noDA alternatives (with the exception of NUTS, as justified in Sect. 3.1). Using hypothesis testing (Eq. (20)), for the DA and noDA schemes we tested for equality of the means of the median ESS values normalised by the total number of MCMC samples N, and by the number of forward model evaluations. In Eq. (20), μ^noDA_ESS is the mean ESS for the noDA method in the sample of 10 medians from the 10 data sets, i.e.
where [ESS]_k^noDA is the median ESS across all parameters (Eq. (13)) for the k-th data set under the noDA method. In contrast, μ^DA_ESS is the mean ESS for the DA method in the sample of 10 medians from the 10 data sets. The distribution of the 10 ESS values for the DA and noDA methods is shown in Fig. 2(a) of the Supplement. The test revealed a p-value > 0.05, hence we cannot reject the null hypothesis. Given that our next goal was to compare the different algorithms, we made the arbitrary choice of going ahead with the DA algorithms.
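This hypothesis test can be sketched as follows (Python; the ESS values below are synthetic illustrative numbers, not the paper's results), using Welch's two-sample t-test, which does not assume equal variances (the paper does not state which t-test variant was used):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Illustrative median-ESS/N values over 10 data sets for each scheme
# (synthetic numbers, not the paper's actual results).
ess_noDA = rng.normal(0.30, 0.05, size=10)
ess_DA = rng.normal(0.31, 0.05, size=10)

# Two-sample test of H0: mu_noDA_ESS = mu_DA_ESS (Welch's variant).
t_stat, p_value = stats.ttest_ind(ess_noDA, ess_DA, equal_var=False)
```

A p-value above the chosen significance level (0.05 here) means the null hypothesis of equal mean efficiency cannot be rejected, which is the conclusion reported above.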

Accuracy
We first compare the performance of the DA-GP-HMC, DA-GP-NUTS, DA-GP-RMHMC and DA-GP-LDMC algorithms (see Table 1 for abbreviation explanations) in terms of accuracy in both parameter and functional space.

Parameter space Table 2 illustrates the mean of the ODE parameter posterior medians (i.e. the mean of the K posterior medians from the K data sets), and the corresponding standard deviation (Eq. (22)), where [θ]_k is the parameter posterior median vector for the k-th data set. More specifically, for each of the K data sets, we find the parameter posterior median. We then take the mean over all K posterior medians for every parameter, which is compared to the true parameter value, as a measure of accuracy and a test for unbiasedness. For a large enough number of data sets K, this mean should be close to the true parameter value.
Setting K = 10, we observe that all four algorithms register a very similar performance in terms of accuracy, and the true value lies within the interval given by the mean ± 2 std.

Functional space Next we examined the performance of the algorithms in functional space, quantified using R^2, which indicates how good the fit is between the data and the signal generated with the posterior median. R^2 is defined in terms of the RSS and the total sum-of-squares SS_total.
We can define R^2, RSS and SS_total for each of the 10 data sets, i.e. for the k-th data set, k = 1, ..., K, K = 10 (Eq. (23)), where n is the number of data points in a data set. R^2 is at most 1 (and lies within [0, 1] whenever the fit is no worse than the mean of the data), and the higher R^2, the better the fit. Table 2 shows the mean and standard deviation of R^2 over the 10 data sets. We notice that R^2 is very high, with a mean of ∼0.97 for all methods, suggesting that all algorithms perform equally well in terms of predicting a signal which is very similar to the data.

Parameter UQ We also quantified the uncertainty in the parameter estimates for all methods. In Fig. 3 of the Supplement we illustrate the marginal posterior densities for the parameters A, B, C, obtained with 1D kernel density estimation from the posterior samples drawn with the four emulation methods (for one randomly selected data set out of the 10 data sets). We also superimpose results obtained using a long-run MCMC algorithm (direct MCMC; the Adaptive Metropolis (AM) algorithm, Haario et al. 2001), which draws samples directly from the asymptotically exact posterior distribution. HMC running directly on the original posterior distribution would incur too many forward evaluations (see Sect. 2.3 for a discussion of this), hence in our work we opt for a random-walk algorithm (AM). To obtain the marginal posterior densities, we used the kernel smoothing function estimate for univariate data with the optimal bandwidth for normal densities (Bowman and Azzalini 1997). We note that the marginal posterior densities are comparable, backing up the previous findings that all four emulation methods perform very similarly, and their performance is close to that of the long-run MCMC sampler.
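A minimal sketch of the R^2 computation, R^2 = 1 − RSS/SS_total, with RSS = Σ(y_i − ŷ_i)^2 and SS_total = Σ(y_i − ȳ)^2 (illustrative data only):

```python
import numpy as np

def r_squared(y, y_pred):
    """R^2 = 1 - RSS / SS_total, with RSS = sum (y_i - yhat_i)^2 and
    SS_total = sum (y_i - ybar)^2."""
    rss = np.sum((y - y_pred) ** 2)
    ss_total = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - rss / ss_total

y = np.array([1.0, 2.0, 3.0, 4.0])
assert r_squared(y, y) == 1.0                       # perfect fit
assert r_squared(y, np.full(4, np.mean(y))) == 0.0  # mean-only "fit"
```

Here `y_pred` plays the role of the signal generated with the posterior median parameters.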

Efficiency
The next step in the analysis was to compare the different methods in terms of efficiency, which we quantified using the min, median and max ESS (see Eq. (13)) normalised by the total number of MCMC samples N and by the number of forward (model) evaluations. Given that the number of parameters in the sinusoidal example is three, the min, median and max ESS correspond to the ESS of each of the three parameters; for comparability with the other mathematical models, we report the results in terms of these three measures. The number of model evaluations is representative of the sampling phase, and excludes the samples used for the initial and exploratory phases; the latter are the same across methods, rendering their inclusion unnecessary for method comparison. Results based on the two efficiency measures are presented in Fig. 4, showing the distribution of these quantities over the 10 data sets. When inspecting ESS/N (left panel), we observe great variability between the distributions of minESS/N, medianESS/N and maxESS/N for the first-order methods (HMC and NUTS), in contrast with the higher-order methods (RMHMC and LDMC). In terms of minESS/N, RMHMC and LDMC are clearly superior to HMC and NUTS, while in terms of medianESS/N all methods are more comparable, and similarly in terms of maxESS/N, with the HMC algorithm having a moderate advantage. The same pattern is observed when inspecting ESS normalised by the number of model evaluations (right panel), which is expected, considering the high acceptance rate (>95% across all methods), i.e. the number of model evaluations is very close to N.

FitzHugh-Nagumo

Hybrid algorithms for non-positive definite negative Hessian matrix
For the FitzHugh-Nagumo model we encountered difficulties with running the higher-order methods (RMHMC and LDMC) due to the negative Hessian matrix of the log unnormalised posterior not being positive definite, as illustrated in Fig. 5, where we display the regions in the 2D parameter space for which the negative Hessian matrix is not positive definite. As outlined in Sect. 5.3, for this example we replaced the RMHMC or LDMC methods by the emulation hybrid-HMC-RMHMC or hybrid-HMC-LDMC, and for the RMHMC/LDMC component of the method we used Eq. (19) to set the metric tensor. The emulation hybrid-HMC-LDMC algorithm registered 25% (on average over the 10 data sets) HMC-drawn samples and 75% LDMC-drawn samples out of the total number of MCMC samples. Similarly, the emulation hybrid-HMC-RMHMC algorithm registered 34% (on average over the 10 data sets) HMC-drawn samples and 66% RMHMC-drawn samples. For the FitzHugh-Nagumo model, results in subsequent sections are shown for the hybrid algorithms.

DA versus noDA
The distribution of the ESS values for the DA and noDA methods is shown in Fig. 2(b) of the Supplement. The hypothesis test investigating the effect of using the DA scheme revealed a p-value > 0.05 (see Eq. (20)) for all efficiency measures, thus, there is no difference in efficiency between the DA and noDA algorithms. For consistency with the sinusoidal example, we took forward the DA algorithms.

Accuracy
Parameter space Table 3 illustrates the mean and standard deviation of the posterior medians over the 10 data sets (see Eq. (22)) for the ODE parameters drawn with all emulation methods, and of the noise variances, sampled with Gibbs sampling. All four emulation algorithms perform very similarly in terms of accuracy, with the true values of the ODE parameters and the two noise variances being contained within the interval given by the mean ± 2 std.

Functional space Table 3 gives the mean and standard deviation of R^2 (Eq. (23)) over the 10 data sets for both 'species' obtained with every emulation algorithm. The methods perform very similarly, and one of the signals appears to be better learnt (mean R^2 ∼ 0.9) than the other (mean R^2 ∼ 0.7).

Parameter UQ Regarding UQ, all the methods are on a par, as evident in Fig. 4 in the Supplement, which shows the marginal posterior densities of the ODE parameters and noise variances for one randomly selected data set out of the 10 data sets; their performance is close to that of a long-run MCMC sampler (direct MCMC) drawing samples from the asymptotically exact posterior distribution.

Efficiency
Results based on the min, median and max ESS (see Eq. (13)) normalised by the total number of MCMC samples N and by the number of forward (model) evaluations are presented in Fig. 6, which shows the distribution of these quantities over the 10 data sets. When inspecting ESS/N (left panel), we again note a much larger variability between the distributions of minESS/N, medianESS/N and maxESS/N for the first-order methods (HMC and NUTS) than for the higher-order methods (hybrid-HMC-RMHMC and hybrid-HMC-LDMC). Another observation is that NUTS tends to perform systematically worst. In terms of minESS/N, HMC, hybrid-HMC-RMHMC and hybrid-HMC-LDMC are comparable, while in terms of medianESS/N and maxESS/N, HMC seems to have an advantage. The same observations can be made when inspecting ESS normalised by the number of model evaluations (right panel), which is expected, considering the high acceptance rate (>80% across all methods).

Table 3
Accuracy in parameter and functional space for the FitzHugh-Nagumo example. For each of the emulation methods compared we show the mean and standard deviation of the parameter posterior medians over the 10 data sets, calculated using Eq. (22) (note that the noise variances were sampled using Gibbs sampling). The true parameter values are also shown. R^2 (Eq. (23)), as a measure of fit to the data, is also displayed for each of the two 'species'. Legend: HMC - Hamiltonian Monte Carlo, NUTS - No U-turn sampler, RMHMC - Riemann Manifold Hamiltonian Monte Carlo, LDMC - Lagrangian Dynamical Monte Carlo, DA - delayed acceptance, GP - Gaussian process. See Table 1 for further explanations of the abbreviations.

Biochemical pathway

A zero mean GP versus a quadratic mean GP
For the biochemical pathway example, we implemented both a zero mean GP model and a quadratic mean function GP model for the RSS. The latter potentially places high prior density values near the mode, where a parabola-shaped form of the log unnormalised posterior distribution is expected. The quadratic mean function fits this parabola, thus regions far away from the mode are suppressed. Our finding is that the zero mean model encouraged adding 'extreme' RSS values (high relative to the low-RSS region) to the list of training points as a consequence of stepping into regions of high emulator uncertainty. Therefore, the zero mean GP emulator is not a faithful representation of the simulator, which leads to poor performance in the sampling phase, as shown in Table 4. Table 4 displays quantitative metrics, showing that the quadratic mean GP leads to better performance (in terms of acceptance rate and ESS) compared to the zero mean GP model. Additionally, in Fig. 8 of the Supplement we show parameter posterior samples collected in the sampling phase for one randomly selected data set out of the 10 data sets. The top two figures show posterior samples generated from the zero mean GP log unnormalised posterior with two different chain initialisations, while the bottom two figures show samples drawn from the quadratic mean GP log unnormalised posterior. The chain mixing based on the latter GP model is better than for the former, although for both models periods of chain stagnation are registered, an issue which we discuss in Sect. 7.6. Consequently, in the next sections, results produced with a quadratic mean function GP prior model are presented.
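A standard second-order polynomial mean basis in d dimensions is [1, θ_i, θ_i θ_j (i ≤ j)]; for d = 5 this gives 1 + 5 + 15 = 21 basis functions. The sketch below is our illustration of this generic construction (the paper's exact choice of basis may differ):

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_basis(theta):
    """Second-order polynomial mean basis in d dimensions:
    [1, theta_i, theta_i * theta_j for i <= j].
    For d = 5 this yields 1 + 5 + 15 = 21 basis functions.
    Generic construction; assumed, not the paper's exact basis."""
    theta = np.asarray(theta, float)
    feats = [1.0]
    feats.extend(theta)
    feats.extend(theta[i] * theta[j]
                 for i, j in combinations_with_replacement(range(len(theta)), 2))
    return np.array(feats)

h = quadratic_basis(np.zeros(5))   # 21 basis functions in 5D
```

The GP mean is then a linear combination of these basis functions, whose coefficients can be fitted alongside the kernel hyperparameters.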

Hybrid algorithms for non-positive definite negative Hessian matrix
Applying the higher-order methods (RMHMC and LDMC) posed difficulties due to the negative Hessian matrix of the log unnormalised posterior not always being positive definite, as illustrated for a 2D parameter space in Fig. 7 of the Supplement. Similarly to the FitzHugh-Nagumo model, we implemented the hybrid algorithms as described in Sect. 5.3. Our findings revealed the following: for some data sets, the hybrid algorithms returned samples drawn mostly with the HMC algorithm, i.e. the percentage of HMC-drawn samples was 85-90%, while 10-15% of the samples were drawn with RMHMC or LDMC, the reason being a high condition number (> 10^15) of the mass matrix set using Eq. (19). For other data sets, the Bayesian optimisation employed for RMHMC or LDMC set the tuning parameters to very low values (higher values would encourage the sampler to step into regions where the matrix has a high condition number), leading to the sampler making tiny, ineffective moves and resulting in low ESS. These issues illustrate that the second-order methods (RMHMC, LDMC) combined with emulation of the log unnormalised posterior distribution effectively reduce to the standard first-order HMC-with-emulation method due to the sub-optimality of the mass matrix. However, it is important to emphasise that our proposed hybrid algorithm provides a safety net that enables the overall algorithm to finish successfully by dynamically switching between first- and second-order methods.
In light of these findings, the second-order methods were excluded from the comparison, and subsequent results for the biochemical pathway example are presented for HMC and NUTS only.

DA versus noDA
The hypothesis testing comparing the mean of the distribution over 10 data sets of the normalised minimum, median and maximum ESS (Eq. (13)) between the DA and noDA scheme revealed a p-value > 0.05 for all measures, except MaxESS/number of forward evaluations, for which DA seems to have a slight advantage. The distribution of these ESS values for the DA and noDA methods is shown in Fig. 2(c) of the Supplement. For the purpose of comparing the Hamiltonian Monte Carlo algorithms and to ensure consistency with previous examples, we proceeded with the DA algorithms.

Accuracy
Parameter space Table 5 illustrates the mean and standard deviation of the posterior medians over 10 data sets (see Eq. (22)) for the ODE parameters drawn with all emulation methods, and of the noise variances, sampled with Gibbs sampling.
The methods perform very similarly in terms of accuracy, with the inferred values of the ODE parameters and the 'species' noise variances being contained within the interval given by the mean ± 2 std.

Functional space Table 5 shows the mean and standard deviation of R^2 (Eq. (23)) over the 10 data sets for every 'species' obtained with every emulation method. A similar performance of the methods is registered, with all methods giving a very large R^2 (mean of ∼0.99).

Table 5
Accuracy in parameter and functional space for the biochemical pathway example. See Table 1 for further explanations of the abbreviations.

Parameter UQ The two methods give overlapping marginal posterior densities for the parameters, see Fig. 5 in the Supplement.

Efficiency
Method comparison Figure 7 shows the distribution of min, median and max ESS (see Eq. (13)) normalised by the total number of MCMC samples N and by the number of forward (model) evaluations over 10 data sets for HMC and NUTS. In terms of normalised MinESS and MedianESS, the HMC algorithm has a superior performance to NUTS, while a fair degree of similarity in the performance with respect to normalised MaxESS between the two algorithms is recorded. Chain stagnation The simulation results in Figure 8 of the Supplement show fairly long periods of rejections, after which the sampler recovers, with good mixing exhibited. This issue points to a mismatch between the emulator and the simulator; a more thorough discussion on this is given in Sect. 7.6.

Fluid-dynamics pulmonary model

DA versus noDA
For the particular data set available, we test for the equality of the mean normalised ESS of the distribution over all four parameters between the DA and noDA schemes for all four emulation algorithms. The hypothesis test reveals a p-value > 0.05 for all algorithms. These findings suggest no difference in efficiency between the DA and noDA schemes. Similarly to the previous examples, we took forward the DA algorithms.

Accuracy

Table 6 shows the posterior median and 95% credible interval of the PDE parameters obtained from the posterior samples generated with all the emulation methods (DA-GP-HMC, DA-GP-NUTS, DA-GP-RMHMC, DA-GP-LDMC, see Table 1 for abbreviation explanations) and with a long-run MCMC sampler, and of the noise variance, sampled with Gibbs sampling. These results are for one single measured (real) data set, for which the ground truth parameter values are unknown. In addition, in Fig. 6 of the Supplement we plot the marginal posterior densities of the parameters. In the absence of a gold standard, to test whether the emulation approach introduces any bias in the results, we also present results obtained with a long-run MCMC sampler. Table 6 and Fig. 6 of the Supplement suggest that all methods provide very similar results. The overlapping densities for the different algorithms indicate that the methods provide samples from approximately the same distribution. In the absence of a proper gold standard, the agreement between the predicted posterior probability distributions across all emulation algorithms and the long-run (direct) MCMC can be taken as a proxy for accuracy. This statement is backed up by a very high R^2 value of 0.99 (see Table 6) registered by all methods, indicating a very good fit to the measured data.

Efficiency

Figure 8 displays the distribution of ESS (Eq. (12)) normalised by the total number of MCMC samples N and by the number of forward (model) evaluations over all four parameters for the single data set analysed.
RMHMC appears superior to all other algorithms when analysing ESS/N (left panel) or ESS/#forwardEval (right panel). Additionally, NUTS systematically performs more poorly than the other algorithms. LDMC is clearly superior to HMC when looking at the minimum or median ESS/N or ESS/#forwardEval, and HMC is better when looking at the maximum ESS/N or ESS/#forwardEval. The distribution of these quantities is much more variable for HMC and NUTS than for RMHMC and LDMC.

Discussion
We have contributed to the research field of accelerating Hamiltonian/Lagrangian Monte Carlo algorithms by coupling them with Gaussian processes for emulation of the log unnormalised posterior distribution. We have provided proofs of detailed balance with respect to the asymptotically exact posterior distribution for these algorithms, and validated the mathematical and coding correctness of the samplers' implementation by Geweke consistency tests (Fig. 1 in the Supplement). Moreover, we have carried out a comparative evaluation study to assess the performance of the methods on a series of ODE/PDE models (sinusoidal, FitzHugh-Nagumo, biochemical pathway and the fluid-dynamics pulmonary model). We have aimed to identify the most computationally efficient and accurate parameter inference and UQ tools to be applied to nonlinear ODE or PDE models. Typically, these models incur onerous computational costs caused by repeated numerical integrations as part of an iterative sampling scheme. In addition, we have investigated whether the delayed acceptance scheme used in conjunction with these emulation algorithms can offer further computational gains over the standard algorithms.

Table 6
Accuracy in parameter and functional space for the statistical inference performed on the fluid-dynamics pulmonary application. We show the parameter posterior medians and 95% credible intervals obtained with all emulation methods (the noise variance was sampled using Gibbs sampling), as well as R^2, calculated using Eq. (23). Results obtained with the Adaptive Metropolis (Haario et al. 2001) are also shown. See Table 1 for further explanations of the abbreviations.

A discussion on the algorithms compared
We have compared the following algorithms: noDA-GP-HMC (i.e. standard GP-HMC), DA-GP-HMC, DA-GP-NUTS, noDA-GP-RMHMC, DA-GP-RMHMC, noDA-GP-LDMC and DA-GP-LDMC (see Table 1 for explanations of the abbreviations). While the standard GP-HMC was originally proposed in Rasmussen (2003), all the other algorithms are our contribution. The noDA-GP-NUTS algorithm was not implemented in practice because it would require a number of expensive model evaluations roughly one order of magnitude higher than DA-GP-NUTS; see the proof in Sect. 4.7 of the Supplement.

Hybrid algorithms for non-positive definite negative Hessian matrix
As discussed in Sect. 5.3, because we emulate the log unnormalised posterior rather than the signals (i.e. the solutions of the ODEs/PDEs), we could not use the expected Fisher information matrix when setting the mass matrix in RMHMC or LDMC. Instead we used the observed Fisher information matrix. The resulting negative Hessian matrix of the log unnormalised posterior distribution (the sum of the observed Fisher information matrix and the matrix of negative second-order derivatives of the log prior) is not guaranteed to be positive definite. This was an issue for the FitzHugh-Nagumo and biochemical pathway models (see Fig. 5 in the main paper and Fig. 7 in the Supplement), but not for the sinusoidal and pulmonary models. For the former models, we adopted a form for the mass matrix based on a combination of the observed and empirical Fisher information matrix (Eq. (19)), ensuring at least positive semi-definiteness. The downside was that this matrix can have a high condition number (>10^15). To overcome this, we took a hybrid approach: if at any point along the trajectory the matrix was numerically unstable, the simulation within the trajectory was stopped prematurely, and HMC was run instead of RMHMC/LDMC for that particular iteration. The FitzHugh-Nagumo model in particular benefited from this hybrid approach, achieving an efficiency gain over the HMC algorithm, with roughly two thirds of samples being RMHMC-drawn and three quarters being LDMC-drawn (the remaining one third and one quarter, respectively, were HMC-drawn). In contrast, the biochemical pathway model registered no efficiency gain over HMC, as most samples were drawn with the latter.
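The hybrid fallback can be sketched as follows. This is a minimal illustration under our own naming assumptions (`choose_mass_matrix`, the Cholesky-based positive-definiteness check), not the paper's actual implementation; the 10^15 condition-number threshold is the instability bound mentioned above.

```python
import numpy as np

COND_THRESHOLD = 1e15  # above this, the metric is deemed numerically unstable

def choose_mass_matrix(neg_hessian):
    """Return (mass_matrix, use_riemannian): the position-dependent metric if it
    is numerically usable, otherwise the identity (falling back to plain HMC)."""
    try:
        np.linalg.cholesky(neg_hessian)  # raises LinAlgError if not positive definite
    except np.linalg.LinAlgError:
        return np.eye(neg_hessian.shape[0]), False
    if np.linalg.cond(neg_hessian) > COND_THRESHOLD:  # PD but ill-conditioned
        return np.eye(neg_hessian.shape[0]), False
    return neg_hessian, True
```

In the hybrid scheme, a check of this kind would be run at every point along the proposed trajectory; on failure, the RMHMC/LDMC trajectory is abandoned and plain HMC (identity mass matrix) completes that iteration.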

Emulation of the model output
To avoid the loss of information inherent in emulating the log unnormalised posterior distribution (Davies et al. 2019) (see Sect. 5.3 for specific equations and details), the multivariate signal could be emulated instead, using e.g. ensembles of single-output emulators (MS) (Conti and O'Hagan 2010b), or multivariate-output Gaussian processes (MO) (Álvarez et al. 2010; Moreno-Muñoz et al. 2018). However, emulating a high-dimensional output brings computational challenges. The computational costs for emulating a high-dimensional complex output increase significantly compared to emulating a one-dimensional function (Wilkinson 2014; Davies et al. 2019). General methods based on marginalisation over the covariance between outputs with an uninformative prior, as done in Conti et al. (2009) and Conti and O'Hagan (2010a), do not take into account relevant information about the data (e.g. periodicity of the time series). Additionally, allowing for the particular dependence structure of a time series (e.g. correlations, periodicity) requires a thorough exploration to identify an appropriate emulation strategy, e.g. using ensembles of independent GPs or multivariate-output GPs, sums or products of different kernel functions, or different forms for the mean function.
In light of the findings of this study, a new research direction could be that of finding a trade-off between high computational complexity (due to emulating a multivariate output) and potential efficiency gains of a second-order Hamiltonian/Lagrangian scheme guaranteeing a positive definite Fisher information matrix.

Advantage of delayed acceptance
There is no evidence that the DA scheme brings any computational gains when coupled with Hamiltonian/Lagrangian Monte Carlo algorithms. The efficiency of DA-GP-HMC, noDA-GP-HMC, DA-GP-RMHMC, noDA-GP-RMHMC, DA-GP-LDMC and noDA-GP-LDMC (see Table 1 for abbreviation explanations), measured in terms of ESS normalised by the total number of MCMC samples and by the number of model (forward) evaluations, is comparable between the DA and noDA algorithms, a conclusion based on a formal hypothesis test for equality of the sample means of the normalised ESS.
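For concreteness, the two-stage accept/reject structure underlying the DA variants (Christen and Fox 2005) can be sketched as below. This is a generic Metropolis-within-DA sketch, not the Hamiltonian version used in the paper: for the HMC/RMHMC/LDMC variants the ratios also involve kinetic-energy terms, omitted here. `log_emulator` stands for the GP predictive mean of the log unnormalised posterior and `log_simulator` for the expensive exact evaluation; both names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def delayed_acceptance_step(theta, theta_prop, log_emulator, log_simulator):
    """One delayed-acceptance decision. Stage 1 screens the proposal with the
    cheap emulator; only survivors reach the expensive simulator in stage 2,
    whose ratio corrects for emulator error so the exact posterior is targeted.
    Returns (next_state, number_of_new_simulator_evaluations)."""
    # Stage 1: Metropolis ratio under the emulated log unnormalised posterior.
    log_alpha1 = min(0.0, log_emulator(theta_prop) - log_emulator(theta))
    if np.log(rng.uniform()) >= log_alpha1:
        return theta, 0  # early rejection: no forward model run needed
    # Stage 2: simulator ratio divided by the emulator ratio (on the log scale).
    # log_simulator(theta) is assumed cached from the previous iteration, so
    # only the proposal costs a new forward evaluation.
    log_alpha2 = min(0.0,
                     (log_simulator(theta_prop) - log_simulator(theta))
                     - (log_emulator(theta_prop) - log_emulator(theta)))
    if np.log(rng.uniform()) < log_alpha2:
        return theta_prop, 1
    return theta, 1
```

The second-stage ratio divides out the emulator ratio, so the exact posterior is targeted regardless of emulator error. The sketch also makes the finding above plain: with acceptance rates above 80%, stage 1 rarely rejects, so nearly every proposal still incurs a simulator run.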
While an MCMC with DA approach has been taken in previous studies in the literature (Christen and Fox 2005; Golightly et al. 2015; Sherlock et al. 2017; Higdon et al. 2011; Cui et al. 2011; Quiroz et al. 2018; Banterle et al. 2019), to the best of our knowledge, our current study, complementing our previous study (Paun and Husmeier 2020), is the only one to use DA in conjunction with Hamiltonian/Lagrangian Monte Carlo algorithms. Other studies have compared standard random-walk MCMC algorithms to their DA versions. For example, Golightly et al. (2015) showed that the DA scheme can lead to improvements in computational efficiency in a particle MCMC algorithm applied to stochastic kinetic models. Additionally, Banterle et al. (2019) and Quiroz et al. (2018) showed that DA brings computational advantages when applied to M-H algorithms on large data sets, for which data sub-sampling is employed. However, these MCMC algorithms are based on a random walk, which is known to have a lower acceptance rate (and efficiency) than the gradient-driven Hamiltonian Monte Carlo algorithms (see Ch. 5 in Brooks et al. 2011 or Sengupta et al. 2016). Therefore, the former algorithms offer more potential for improvement with the DA scheme than the latter: if a rejection is more likely, a higher number of computationally expensive model evaluations is avoided by the first, emulator-based stage.

Accuracy
The accuracy in parameter and functional space proved to be very similar between the different methods for all ODE/PDE models considered, see Tables 2, 3, 5 and 6. In addition, for the toy examples, we showed that the algorithms were able to learn the true parameter values that generated the data (Tables 2, 3 and 5). The marginal posterior densities constructed from the MCMC posterior samples overlapped, indicating that the uncertainty quantification was on a par for all methods (Figs. 3, 4, 5 and 6 in the Supplement).

Efficiency
ESS normalised by the total number of MCMC samples
In terms of ESS/N (left panel in Figs. 4, 6, 7 and 8), the performance of DA-GP-NUTS was generally inferior to that of the other algorithms (DA-GP-HMC, DA-GP-RMHMC, DA-GP-LDMC; see Table 1 for abbreviation explanations). A possible explanation is that for DA-GP-NUTS the tuning of the step size and number of steps is performed entirely in the emulated log unnormalised posterior, based on samples accepted at the emulator stage, due to the construction of the algorithm (see the proof in Sect. 4.7 of the Supplement). In contrast, for the other three algorithms, the tuning is performed based on samples accepted at the simulator stage, so the simulator plays a role in the choice of optimal tuning parameters, positively impacting efficiency.
In terms of minESS/N, the RMHMC and LDMC algorithms generally perform better than HMC, while in terms of medianESS/N or maxESS/N no clear pattern is observed (sometimes RMHMC and LDMC are better, other times HMC is preferred). We also note a much larger discrepancy between minESS/N and maxESS/N for HMC and NUTS than for RMHMC and LDMC, for which ESS/N varies much less across parameters. This is a consequence of the latter two algorithms using a mass matrix set via the curvature of the log unnormalised posterior, whereas HMC and NUTS use the identity as the mass matrix, so the optimal step size is restricted by the lowest marginal variance. Thus, a first-order algorithm like HMC or NUTS can be more inefficient (e.g. in terms of minESS/N) for problems with large discrepancies between the lowest and largest marginal variance. Generally, RMHMC and LDMC perform similarly, the exception being the pulmonary model, for which RMHMC outperforms LDMC.
ESS normalised by the number of forward evaluations
In terms of ESS/#forwardEval (right panel in Figs. 4, 6, 7 and 8), a similar pattern as for ESS/N is observed, which is expected given the high acceptance rate (>80%), meaning that the number of model evaluations is close to the total number of MCMC samples. This finding also helps explain why we found no advantage of the DA scheme: generally, most proposals are accepted at the emulator (first) stage, and are subsequently subject to the accept/reject decision at the simulator (second) stage.
For the FitzHugh-Nagumo model, the above interpretations apply to the emulation hybrid-RMHMC-HMC/hybrid-LDMC-HMC algorithms rather than the standard emulation RMHMC/LDMC.
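For reference, the ESS underlying these efficiency measures can be computed as below. This is a generic sketch using an initial-positive-sequence estimate of the integrated autocorrelation time (Geyer 1992), not necessarily the exact estimator used in the paper.

```python
import numpy as np

def effective_sample_size(chain):
    """ESS = N / tau, with the integrated autocorrelation time tau estimated by
    summing sample autocorrelations in consecutive pairs until a pair sum turns
    negative (the 'initial positive sequence' truncation rule)."""
    x = np.asarray(chain, dtype=float)
    n = len(x)
    x = x - x.mean()
    # Autocorrelation at lags 0..n-1, each lag normalised by its number of terms.
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
    tau = 1.0
    for k in range(1, n // 2):
        pair = acf[2 * k - 1] + acf[2 * k]
        if pair < 0.0:  # truncate once the pair sum goes negative
            break
        tau += 2.0 * pair
    return n / tau
```

minESS/N, medianESS/N and maxESS/N then follow by applying this per parameter and normalising by the chain length N; ESS/#forwardEval normalises by the number of simulator runs instead.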

Limitations and future improvements for the biochemical pathway example
The biochemical pathway example was a hard problem to emulate due to the high correlations manifested through long ridges in the log unnormalised posterior landscape (see Fig. 7 in the Supplement). While a quadratic mean GP prior improved the acceptance rate, mixing and efficiency (see Table 4 of the main paper and Fig. 8 of the Supplement), a larger number of training points would have been needed for an optimal coverage of the parameter space. However, this would have resulted in high CPU times due to the operations performed repeatedly (to compute the GP predictive mean and up to its third-order derivatives) on the high-dimensional covariance matrix. The consequence was a mismatch between the emulator and the simulator in the tails of the target distribution: proposals were accepted at the emulator stage, but rejected at the simulator stage. The result was occasional chain stagnation, i.e. 'stickiness', see Fig. 8 of the Supplement. The use of a larger number of training points in conjunction with sparse GPs (Titsias 2009; Hensman et al. 2015), which optimally select a smaller number of points that retain the maximum information at reduced cost, could overcome these issues and constitutes future work. This strategy could potentially be coupled with continuous refinement of the emulator whenever the sampler steps into a region of high uncertainty, similar to the study in Conrad et al. (2016), to avoid having to decide when to stop the exploratory phase during which the emulator is refined. It is worth mentioning that the 'stickiness' problem is a notorious issue in pseudo-marginal MCMC (Drovandi et al. 2018; Murray and Graham 2016), in which the estimator of the target distribution is inconsistent with the true target distribution in the tails. This issue is similar in nature to that encountered with emulation MCMC; thus, an interesting future direction would be an interdisciplinary cross-breeding between emulation MCMC and pseudo-marginal MCMC.
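The quadratic mean GP prior mentioned above can be sketched as follows; the squared-exponential kernel, the fixed hyperparameters and the pure-quadratic basis (no cross terms) are illustrative assumptions rather than the paper's exact specification.

```python
import numpy as np

def quad_basis(X):
    """Quadratic basis [1, x_i, x_i^2] per input dimension (no cross terms)."""
    return np.hstack([np.ones((len(X), 1)), X, X ** 2])

def sq_exp_kernel(A, B, ell=1.0, sf2=1.0):
    """Squared-exponential kernel between row-wise point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) / ell) ** 2
    return sf2 * np.exp(-0.5 * d2.sum(-1))

def fit_gp_quadratic_mean(X, y, ell=1.0, sf2=1.0, noise=1e-4):
    """Fit the quadratic mean by least squares, then a zero-mean GP to the
    residuals; return the predictive-mean function."""
    H = quad_basis(X)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    K = sq_exp_kernel(X, X, ell, sf2) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y - H @ beta)

    def predict_mean(Xs):
        return quad_basis(Xs) @ beta + sq_exp_kernel(Xs, X, ell, sf2) @ alpha

    return predict_mean
```

Away from the training points the kernel term decays to zero and the prediction falls back on the fitted quadratic trend, which follows the ridged log-posterior surface better than the constant (zero) mean it replaces.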

Conclusions
We have provided theoretical and empirical investigations into Hamiltonian/Lagrangian Monte Carlo algorithms coupled with Gaussian processes for emulation of the log unnormalised posterior distribution. We have proved that these emulation algorithms satisfy detailed balance with respect to the exact posterior distribution. Additionally, we have investigated whether the delayed acceptance scheme is computationally advantageous over the standard algorithms. We have carried out an empirical efficiency assessment of the emulation methods on a series of ODE/PDE models, including toy problems and a real-world computational fluid-dynamics model of the pulmonary blood circulation. We have aimed to identify the most computationally efficient and accurate parameter inference and UQ tools to be applied to nonlinear ODE or PDE models, which typically incur onerous computational costs due to the need for repeated numerical integrations. Results showed no advantage of the delayed acceptance scheme over the standard algorithms with respect to efficiency measures based on the effective sample size. Additionally, our methods estimated the true parameter values well, with all methods performing similarly across the ODE/PDE models considered. The Lagrangian Dynamical Monte Carlo and Riemann Manifold Hamiltonian Monte Carlo tended to register the highest efficiency (in terms of effective sample size normalised by the number of forward model evaluations), followed by the Hamiltonian Monte Carlo, with the No U-turn sampler tending to be the least efficient.

Conflict of interest
The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.