Efficient sampling of conditioned Markov jump processes

We consider the task of generating draws from a Markov jump process (MJP) between two time-points at which the process is known. Resulting draws are typically termed bridges and the generation of such bridges plays a key role in simulation-based inference algorithms for MJPs. The problem is challenging due to the intractability of the conditioned process, necessitating the use of computationally intensive methods such as weighted resampling or Markov chain Monte Carlo. An efficient implementation of such schemes requires an approximation of the intractable conditioned hazard/propensity function that is both cheap and accurate. In this paper, we review some existing approaches to this problem before outlining our novel contribution. Essentially, we leverage the tractability of a Gaussian approximation of the MJP and suggest a computationally efficient implementation of the resulting conditioned hazard approximation. We compare and contrast our approach with existing methods using three examples.


Introduction
Markov jump processes (MJPs) can be used to model a wide range of discrete-valued, continuous-time processes. Consequently, they are ubiquitous in areas such as epidemiology (Fuchs, 2013; Lin and Ludkovski, 2013; McKinley et al., 2014), population ecology (Matis et al., 2007; Boys et al., 2008) and systems biology (Wilkinson, 2009, 2012; Sherlock et al., 2014). Whilst exact, forward simulation of MJPs is straightforward (Gillespie, 1977), the reverse problem of performing fully Bayesian inference for the parameters governing the MJP given partial and/or noisy observations is made challenging by the intractability of the observed data likelihood. Simulation-based approaches to inference typically involve "filling in" event times and types between the observation times. A key repeated step in many inference mechanisms starts with a sample of possible states at one observation time and, for each element of the sample, creates a trajectory starting at the sampled value and ending at the time of the next observation with a value that is consistent with that observation. The resulting conditioned samples are typically referred to as bridges, and ideally each bridge should be a draw from the exact distribution of the path given the initial condition and the observation. However, except for a few simple cases, exact simulation of MJP bridges is infeasible, necessitating approximate bridge constructs that can be used as a proposal mechanism inside a weighted resampling and/or Markov chain Monte Carlo (MCMC) scheme.
The focus of this paper is the development of an approximate bridge construct that is both accurate and computationally efficient. Our contribution can be applied in a generic observation regime that allows for discrete, partial and noisy measurements of the MJP, and is particularly effective compared to competitors in the most difficult regime where the observations are sparse in time and the observation variance is small. To our knowledge, relatively little work to date has addressed this important problem. Recent progress involves an approximation of the instantaneous rate or hazard function governing the conditioned process. For example, Boys et al. (2008) linearly interpolate the hazard between observation times but require full and error-free observation of the system of interest. Fearnhead (2008) recognises that the conditioned hazard requires the intractable transition probability mass function of the MJP. This is then directly approximated by substituting the transition density associated with a continuous approximation of the MJP, that is, the chemical Langevin equation (Gillespie, 2000). Golightly and Wilkinson (2015) derive a conditioned hazard by approximating the expected number of events between observations, given the observations themselves. Unfortunately, the latter two approaches typically perform poorly when the behaviour of the conditioned process is nonlinear.
We take the approach of Fearnhead (2008) as a starting point and replace the intractable MJP transition probability with the transition density governing the linear noise approximation (LNA) (Kurtz, 1970; Elf and Ehrenberg, 2003; Komorowski et al., 2009). We find that the LNA offers superior accuracy over a single step of the CLE (which must be discretised in practice), at the expense of computational efficiency: naively, the LNA requires, at each event time in each trajectory, integrating forwards to the next observation time a system of ordinary differential equations (ODEs) whose dimension is quadratic in the number of MJP components. We therefore leverage the linear Gaussian structure of the LNA to derive a bridge construct that requires only a single full integration of the LNA ODEs, irrespective of the number of events on each bridge or the number of bridges required. We compare the resulting novel construct to several existing approaches using three examples of increasing complexity. In the final, real-data application, we demonstrate use of the construct within a pseudo-marginal Metropolis-Hastings scheme, for performing fully Bayesian inference for the parameters governing an epidemic model.
The remainder of this paper is organised as follows. In Section 2 we define a Markov jump process as a probabilistic description of a reaction network. We consider the task of sampling conditioned jump processes in Section 3, and review two existing approaches. Our novel contribution is presented in Section 4 and illustrated in Section 5. Conclusions are drawn in Section 6.

Markov jump processes
Consider a reaction network involving u species X_1, X_2, . . . , X_u and v reactions R_1, R_2, . . . , R_v such that

R_i : p_{i1}X_1 + p_{i2}X_2 + · · · + p_{iu}X_u −→ q_{i1}X_1 + q_{i2}X_2 + · · · + q_{iu}X_u, i = 1, . . . , v,

where p_{ij} denotes the number of molecules of X_j consumed by reaction R_i and q_{ij} denotes the number of molecules of X_j produced by reaction R_i. This information can be encoded succinctly in the u × v stoichiometry matrix S whose (i, j)th element is given by q_{ji} − p_{ji}. Let X_{j,t} denote the (discrete) number of species X_j at time t, and let X_t be the u-vector X_t = (X_{1,t}, X_{2,t}, . . . , X_{u,t})′. We model the time evolution of X_t via a Markov jump process (MJP), so that for an infinitesimal time increment dt and an instantaneous hazard h_i(X_t, c_i), the probability of a type i reaction occurring in the time interval (t, t + dt] is h_i(X_t, c_i)dt. The hazard function is given by h(X_t, c) = (h_1(X_t, c_1), . . . , h_v(X_t, c_v))′ where c = (c_1, c_2, . . . , c_v)′ is a vector of rate constants. Under the standard assumption of mass action kinetics, h_i is proportional to a product of binomial coefficients. That is,

h_i(X_t, c_i) = c_i \prod_{j=1}^{u} \binom{X_{j,t}}{p_{ij}}, i = 1, . . . , v.

Given values of the rate constants c and the initial system state X_0 = x_0, exact realisations of the MJP can be generated via Gillespie's direct method (Gillespie, 1977), given by Algorithm 1.
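As an illustration, the mass-action hazard can be computed directly from the reactant stoichiometries. The short Python sketch below does this for an arbitrary network; the function and argument names are ours, not the paper's.

```python
from math import comb, prod

def hazards(x, c, reactants):
    """Mass-action hazards h_i(x, c_i) = c_i * prod_j C(x_j, p_ij).

    x         : molecule counts (length u)
    c         : rate constants (length v)
    reactants : v x u matrix of consumed stoichiometries p_ij
    (argument names are illustrative, not from the paper)
    """
    return [c_i * prod(comb(x_j, p_ij) for x_j, p_ij in zip(x, p_row))
            for c_i, p_row in zip(c, reactants)]
```

For instance, for the Lotka-Volterra network used later in the paper, `hazards([50, 50], [0.5, 0.0025, 0.3], [[1, 0], [1, 1], [0, 1]])` evaluates (c_1 x_1, c_2 x_1 x_2, c_3 x_2) at x = (50, 50).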

Algorithm 1 Gillespie's direct method
1. Set t := 0. Initialise the rate constants c and the initial state x_0.
2. Calculate h_0(x_t, c) := \sum_{i=1}^{v} h_i(x_t, c_i). If h_0(x_t, c) = 0, output x_t and stop.
3. Simulate the time to the next event, t′ ∼ Exp(h_0(x_t, c)).
4. Simulate the reaction index, j, as a discrete random quantity with probabilities h_j(x_t, c_j)/h_0(x_t, c), j = 1, . . . , v.
5. Put x_{t+t′} := x_t + S^j, where S^j denotes the jth column of S.
6. Put t := t + t′. Output x_t and t. If t < T, return to step 2.
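Algorithm 1 translates directly into code. The sketch below (illustrative names; a fixed seed for reproducibility) simulates a single MJP path on (0, T]:

```python
import random

def gillespie(x0, c, S, hazards, T, seed=1):
    """Algorithm 1 (Gillespie's direct method): simulate an MJP path on (0, T].

    S is the u x v stoichiometry matrix (list of rows); hazards(x, c) returns
    the length-v vector (h_1, ..., h_v). Returns a list of (event time, state)
    pairs. Names are illustrative, not from the paper.
    """
    rng = random.Random(seed)
    t, x, path = 0.0, list(x0), []
    while True:
        h = hazards(x, c)
        h0 = sum(h)
        if h0 == 0.0:                          # step 2: no reaction can fire
            return path
        t += rng.expovariate(h0)               # step 3: time to next event
        if t >= T:
            return path
        j = rng.choices(range(len(h)), weights=h)[0]   # step 4: reaction index
        x = [x[i] + S[i][j] for i in range(len(x))]    # step 5: add column j of S
        path.append((t, list(x)))
```

For the pure-death model (one species, one reaction with hazard c x), every event decreases the state by one, which gives a simple sanity check on the output.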

Sampling conditioned MJPs
Denote by X = {X_s | 0 < s ≤ T} the collection of reaction times and types over the interval (0, T], which can be used to obtain the sample path of each species over this interval. Suppose further that the initial state x_0 is a known fixed value and that (a subset of components of) the process is observed at time T subject to Gaussian error, giving a single observation y_T on the random variable

Y_T = P′X_T + ε_T, ε_T ∼ N(0, Σ). (1)

Here, Y_T is a length-p vector, P is a constant matrix of dimension u × p and ε_T is a length-p Gaussian random vector. The role of the matrix P is to provide a flexible setup allowing for various observation scenarios. For example, taking P to be the u × u identity matrix corresponds to the case of observing all components of X_t (subject to error). We denote the density linking Y_T and X_T as p(y_T|x_T). Since throughout this paper we assume that the rate constants c and the observation variance Σ are known fixed quantities, we drop them from the notation where possible. We consider the task of generating trajectories from X|x_0, y_T with associated probability function

p(x|x_0, y_T) = \frac{p(x|x_0) p(y_T|x_T)}{p(y_T|x_0)}, (2)

where p(x|x_0) is the probability function associated with x. Although p(x|x_0, y_T) will typically be intractable, forward simulation is straightforward. This immediately suggests drawing samples from (2) using a numerical scheme such as weighted resampling. However, as discussed in Golightly and Wilkinson (2015), drawing unconditioned trajectories from p(x|x_0) and weighting by p(y_T|x_T) is likely to lead to highly variable weights, unless the level of intrinsic stochasticity of X_t is outweighed by the variance of the observation process. We therefore consider a flexible strategy that allows for generating trajectories conditional on the observation y_T. We will show that it is possible to derive an approximate conditioned hazard function, h̃(x_t, c|y_T), t ∈ (0, T], that can be used in place of h(x_t, c) in the direct method (Algorithm 1).
The form for h̃(x_t, c|y_T) that we initially derive depends explicitly on t, so that sampling events might not be straightforward; however, the time-dependence is sufficiently weak that it can be ignored, and the resulting bridge mechanism, which has a constant rate between events, still leads to efficient proposals. Let q(x|x_0, y_T) denote the path probability function under the approximate jump process with hazard function h̃(x_t, c|y_T). The importance weight associated with a sample x drawn from q(·|x_0, y_T) is given by

w(x) = p(y_T|x_T) \prod_{i=1}^{n_r} \frac{h_{ν_i}(x_{t_{i-1}}, c_{ν_i})}{h̃_{ν_i}(x_{t_{i-1}}, c_{ν_i}|y_T)} \exp\left[ -\sum_{i=0}^{n_r} \{h_0(x_{t_i}, c) - h̃_0(x_{t_i}, c|y_T)\}(t_{i+1} - t_i) \right], (3)

where h_0 is as defined in line 2 of Algorithm 1 and, over the interval (0, T], n_r is the total number of reaction events; reaction times (assumed to be in increasing order) and types are denoted by (t_i, ν_i), i = 1, . . . , n_r, ν_i ∈ {1, . . . , v}, and we take t_0 = 0 and t_{n_r+1} = T. A derivation of (3) is given by Golightly and Wilkinson (2015).

Algorithm 2 Weighted resampling for MJPs
1. For i = 1, 2, . . . , N:
(a) Draw x^i ∼ q(x|x_0, y_T) using Algorithm 1 with h(x_t, c) replaced by h̃(x_t, c|y_T).
(b) Construct the unnormalised weight w̃^i = w(x^i), whose form is given by (3).
(c) Normalise the weights: w^i = w̃^i / \sum_{j=1}^{N} w̃^j.
2. Resample (with replacement) from the discrete distribution on x^1, . . . , x^N using the normalised weights as probabilities.

The general weighted resampling algorithm is given by Algorithm 2. Note that the average unnormalised weight N^{-1}\sum_{i=1}^{N} w̃^i gives an unbiased estimator of the transition density p(y_T|x_0). In the case of an unknown initial value X_0 with density p(x_0), the algorithm can be initialised with a sample of size N from p(x_0). It remains for us to find a suitable form of h̃(x_t, c|y_T). In what follows, we review two existing methods before presenting a novel, alternative approach. Comparisons are made in Section 5.
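Algorithm 2 can be sketched generically, taking the bridge simulator and the log of the unnormalised weight as user-supplied functions; the average unnormalised weight is returned as the estimate of p(y_T|x_0). Names are illustrative, and the max-subtraction is a standard numerical safeguard rather than part of the algorithm.

```python
import math, random

def weighted_resample(draw_bridge, log_weight, N, seed=1):
    """Algorithm 2 sketch. draw_bridge() simulates a path under the approximate
    hazard (step 1(a)); log_weight(path) evaluates the log of the unnormalised
    weight (step 1(b)). Returns the resampled paths and the average
    unnormalised weight, an unbiased estimate of p(y_T | x_0)."""
    rng = random.Random(seed)
    paths = [draw_bridge() for _ in range(N)]
    logw = [log_weight(p) for p in paths]
    m = max(logw)
    w = [math.exp(l - m) for l in logw]          # subtract max for stability
    total = sum(w)
    resampled = rng.choices(paths, weights=[wi / total for wi in w], k=N)
    return resampled, math.exp(m) * total / N
```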

Golightly and Wilkinson approach
The approach of Golightly and Wilkinson (2015) is based on a (linear) Gaussian approximation of the number of reaction events between the current event time and the next observation time. Suppose we have simulated as far as time t, and let ∆R_t denote the number of reaction events of each type over the remaining time ∆t = T − t. Golightly and Wilkinson (2015) approximate ∆R_t by assuming a constant reaction hazard over the whole non-infinitesimal time interval ∆t. A normal approximation to the corresponding Poisson distribution then gives

∆R_t ∼ N(h(x_t, c)∆t, diag{h(x_t, c)∆t}).

Under the Gaussian observation regime given by (1), and noting that X_T = x_t + S∆R_t, the joint distribution of ∆R_t and Y_T can then be approximated by

\begin{pmatrix} ∆R_t \\ Y_T \end{pmatrix} ∼ N\left( \begin{pmatrix} h∆t \\ P′(x_t + Sh∆t) \end{pmatrix}, \begin{pmatrix} diag\{h∆t\} & diag\{h∆t\}S′P \\ P′S diag\{h∆t\} & P′S diag\{h∆t\}S′P + Σ \end{pmatrix} \right),

where h = h(x_t, c). Taking the expectation of ∆R_t|Y_T = y_T and dividing by ∆t gives the approximate conditioned hazard as

h̃(x_t, c|y_T) = h(x_t, c) + \frac{1}{∆t} diag\{h∆t\}S′P \left[ P′S diag\{h∆t\}S′P + Σ \right]^{-1} \left[ y_T − P′(x_t + Sh∆t) \right]. (4)

By ignoring the explicit time dependence of h̃(x_t, c|y_T) (i.e., after each event, fixing ∆t at its value at the most recent event until the next event occurs), we can use (4) in Algorithm 1 to give trajectories x^i, i = 1, . . . , N, to be used in Algorithm 2. Whilst use of (4) has been shown to work well in several applications, the assumptions of normality of ∆R_t and of a constant hazard over a time interval of length ∆t are often unreasonable, as we will show.
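A direct numpy implementation of the conditioned hazard (4) might look as follows. This is a sketch with illustrative names; the truncation at a small positive value is our own safeguard, since the Gaussian conditioning can return negative rates.

```python
import numpy as np

def gw_hazard(x, c, t, T, y_T, S, P, Sigma, hazards):
    """Conditioned hazard (4) of Golightly and Wilkinson (2015); a sketch.

    x: state at time t; S: u x v stoichiometry; P: u x p observation matrix;
    Sigma: p x p observation variance; hazards(x, c): length-v hazard vector."""
    dt = T - t
    h = np.asarray(hazards(x, c))
    D = np.diag(h * dt)                        # approx. Var(Delta R_t)
    mu_y = P.T @ (x + S @ h * dt)              # approx. E(Y_T | x_t)
    V_y = P.T @ S @ D @ S.T @ P + Sigma        # approx. Var(Y_T | x_t)
    C = D @ S.T @ P                            # approx. Cov(Delta R_t, Y_T)
    cond = h * dt + C @ np.linalg.solve(V_y, y_T - mu_y)
    return np.maximum(cond / dt, 1e-10)        # our safeguard against negative rates
```

For the pure-death model with a (near) error-free observation, this reduces to (x_t − y_T)/∆t, which is a convenient analytic check.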

Fearnhead approach
As noted by Fearnhead (2008) (see also Golightly and Kypraios (2017)), an expression for the intractable conditioned hazard can be derived exactly. Consider again an interval [0, T] and suppose that we have simulated as far as time t. The hazard of reaction i under the conditioned process is

h̃_i(x_t, c|y_T) = h_i(x_t, c_i) \frac{p(y_T|X_t = x_t + S^i, c)}{p(y_T|X_t = x_t, c)}, i = 1, . . . , v, (5)

where S^i denotes the ith column of S. In practice, the intractable transition density p(y_T|x_t, c) must be replaced by a suitable approximation. Golightly and Kypraios (2017) used the transition density governing the (discretised) chemical Langevin equation (CLE). The CLE (Gillespie, 1992, 2000) is an Itô stochastic differential equation (SDE) that has the same infinitesimal mean and variance as the MJP. It is written as

dX_t = Sh(X_t, c)dt + \sqrt{S diag\{h(X_t, c)\}S′} dW_t, (6)

where W_t is a u-vector of standard Brownian motion and \sqrt{S diag\{h(X_t, c)\}S′} is any u × u matrix B such that BB′ = S diag\{h(X_t, c)\}S′. Since the CLE can rarely be solved analytically, it is common to work with a discretisation such as the Euler-Maruyama discretisation:

X_{t+∆t} ≈ x_t + Sh(x_t, c)∆t + \sqrt{S diag\{h(x_t, c)\}S′ ∆t}\, Z, (7)

where Z is a standard multivariate Gaussian random variable. Combining (7) with the observation model (1) gives an approximate conditioned hazard as

h̃_i(x_t, c|y_T) = h_i(x_t, c_i) \frac{p_{cle}(y_T|X_t = x_t + S^i, c)}{p_{cle}(y_T|X_t = x_t, c)}, i = 1, . . . , v, (8)

where

p_{cle}(y_T|X_t = x_t, c) = N(y_T; P′\{x_t + Sh(x_t, c)∆t\}, P′S diag\{h(x_t, c)∆t\}S′P + Σ).

As with the approach of Golightly and Wilkinson (2015), the remaining time ∆t = T − t until the observation is treated as a single discretisation step. However, unless ∆t is very small, p_{cle} is unlikely to achieve a reasonable approximation of the transition probability under the jump process. In what follows, therefore, we seek an approximation that is both accurate and computationally inexpensive.
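The hazard (8) can be sketched as follows, with the one-step Euler-Maruyama Gaussian density written out by hand to keep the sketch dependency-free. Names are illustrative, not the authors' code.

```python
import numpy as np

def mvn_pdf(y, m, V):
    """Multivariate normal density N(y; m, V)."""
    d = np.atleast_1d(np.asarray(y, dtype=float) - m)
    V = np.atleast_2d(V)
    return float(np.exp(-0.5 * d @ np.linalg.solve(V, d))
                 / np.sqrt((2.0 * np.pi) ** d.size * np.linalg.det(V)))

def p_cle(y_T, x, c, dt, S, P, Sigma, hazards):
    """One-step Euler-Maruyama (CLE) Gaussian approximation of p(y_T | X_t = x)."""
    h = np.asarray(hazards(x, c))
    mean = P.T @ (x + S @ h * dt)
    var = P.T @ S @ np.diag(h * dt) @ S.T @ P + Sigma
    return mvn_pdf(y_T, mean, var)

def fearnhead_cle_hazard(x, c, t, T, y_T, S, P, Sigma, hazards):
    """Conditioned hazard (8): each h_i is reweighted by the ratio of one-step
    CLE densities with and without reaction i applied to the current state."""
    dt = T - t
    h = np.asarray(hazards(x, c))
    denom = p_cle(y_T, x, c, dt, S, P, Sigma, hazards)
    num = np.array([p_cle(y_T, x + S[:, i], c, dt, S, P, Sigma, hazards)
                    for i in range(S.shape[1])])
    return h * num / denom
```

For the death model with x_t = 50, y_T = 30 and T − t = 1, the observation lies above the one-step predictive mean, so the conditioned death hazard is pulled below the unconditioned value, as expected.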

Improved constructs
We take (5) as a starting point and replace p(y_T|X_t = x′, c) and p(y_T|X_t = x_t, c) using the linear noise approximation (LNA) (Kurtz, 1970; Elf and Ehrenberg, 2003; Komorowski et al., 2009). We first describe the LNA, and then consider two constructions for bridges from a known initial condition, x_0, to a potentially noisy observation Y_T, based on different implementations of the LNA. The first is expected to be more accurate, as the approximate hazard is recalculated after every event by re-integrating a set of ODEs from the event time to the observation time, once from the current value and once for each possible next reaction; the second is more computationally efficient, as the recalculation is based on a single, initial integration of a set of ODEs from time 0 to time T.

Linear noise approximation
For notational simplicity we rewrite the CLE in (6) as

dX_t = α(X_t, c)dt + \sqrt{β(X_t, c)} dW_t, (9)

where α(X_t, c) = Sh(X_t, c) and β(X_t, c) = S diag\{h(X_t, c)\}S′, and derive the LNA by directly approximating (9). The basic idea behind the construction of the LNA is to adopt the partition X_t = z_t + M_t, where the deterministic process z_t satisfies the ordinary differential equation

dz_t/dt = α(z_t, c), (10)

and the residual stochastic process M_t can be well approximated under the assumption that residual stochastic fluctuations are "small" relative to the deterministic process. Taking the first two terms in the Taylor expansion of α(X_t, c) about z_t, and the first term in the Taylor expansion of β(X_t, c), gives an SDE satisfied by an approximate residual process M̂_t of the form

dM̂_t = F_t M̂_t dt + \sqrt{β(z_t, c)} dW_t, (11)

where F_t is the Jacobian matrix with (i, j)th element (F_t)_{i,j} = ∂α_i(z_t, c)/∂z_{j,t}. The SDE in (11) can be solved by first defining the u × u fundamental matrix G_t as the solution of

dG_t/dt = F_t G_t, G_0 = I_u, (12)

where I_u is the u × u identity matrix, and additionally defining ψ_t as the solution of

dψ_t/dt = G_t^{-1} β(z_t, c) (G_t^{-1})′, ψ_0 = 0_{u×u}. (13)

Under the assumption of a fixed or Gaussian initial condition, M̂_0 ∼ N(m_0, V_0), it can be shown that (see e.g. Fearnhead et al., 2014)

M̂_t ∼ N(G_t m_0, G_t\{V_0 + ψ_t\}G_t′).

Note that it is convenient here to write V_t = G_t\{V_0 + ψ_t\}G_t′, which satisfies the ODE

dV_t/dt = F_t V_t + V_t F_t′ + β(z_t, c). (14)

In practice, if x_0 is a known fixed value then we may take z_0 = x_0, m_0 = 0_u (the u-vector of zeros) and V_0 = 0_{u×u} (the u × u zero matrix). Solving (10) and (14) gives the approximating distribution of X_t as

X_t ∼ N(z_t, V_t).

In this case, the ODE system governing the fundamental matrix G_t need not be solved.
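For the pure-death model used in Section 5 (hazard c x, so that α(z) = −cz, F_t = −c and β(z) = cz), the ODEs (10) and (14) can be integrated with a hand-rolled RK4 scheme and checked against the analytic solution. A sketch, with illustrative names:

```python
import math

def lna_death(x0, c, T, n_steps=1000):
    """Integrate the LNA ODEs (10) and (14) for the death model X -> 0,
    where alpha(z) = -c z, F_t = -c and beta(z) = c z, using classical RK4.
    Returns (z_T, V_T); analytically, z_T = x0 e^{-cT} and
    V_T = x0 e^{-cT}(1 - e^{-cT})."""
    def f(z, V):
        # dz/dt = alpha(z); dV/dt = F V + V F' + beta(z) = -2cV + cz
        return -c * z, -2.0 * c * V + c * z
    z, V = float(x0), 0.0
    dt = T / n_steps
    for _ in range(n_steps):
        k1z, k1v = f(z, V)
        k2z, k2v = f(z + 0.5 * dt * k1z, V + 0.5 * dt * k1v)
        k3z, k3v = f(z + 0.5 * dt * k2z, V + 0.5 * dt * k2v)
        k4z, k4v = f(z + dt * k3z, V + dt * k3v)
        z += dt * (k1z + 2 * k2z + 2 * k3z + k4z) / 6.0
        V += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return z, V
```

In practice a library solver would be used (the paper uses deSolve in R); the hand-rolled integrator simply keeps the sketch self-contained.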

LNA bridge with restart
Now, consider again the problem of approximating the MJP transition probability p(y_T|X_t = x_t, c). Given a value x_t at time t ∈ [0, T), the ODE system given by (10) and (14) can be re-integrated over the time interval (t, T], with initial conditions z_{t|t} = x_t and V_{t|t} = 0_{u×u}, to give output denoted by z_{T|t} and V_{T|t}. We refer to use of the LNA in this way as the LNA with restart (LNAR). The approximation to p(y_T|X_t = x_t, c) is given by

p_{lnar}(y_T|X_t = x_t, c) = N(y_T; P′z_{T|t}, P′V_{T|t}P + Σ).

Likewise, p_{lnar}(y_T|X_t = x′, c) can be obtained by initialising (10) with z_{t|t} = x′ and integrating again. Hence, the approximate conditioned hazard is given by

h̃_i(x_t, c|y_T) = h_i(x_t, c_i) \frac{p_{lnar}(y_T|X_t = x_t + S^i, c)}{p_{lnar}(y_T|X_t = x_t, c)}, i = 1, . . . , v. (15)

Whilst use of the LNA in this way is likely to give an accurate approximation to the intractable transition probability (especially as t approaches T), the conditioned hazard in (5) must be calculated for x_t and for each of the states x′ = x_t + S^i obtained after the v possible transitions of the process. Consequently, the ODE system given by (10) and (14) must be solved for each of the v + 1 possible states at each event time. Since the LNA ODEs are rarely tractable (necessitating the use of a numerical solver), this approach is likely to be prohibitively expensive computationally. In the next section, we outline a novel strategy that reduces the cost of integrating the LNA ODE system to a single full integration.

LNA bridge without restart
Consider the solution of the ODE system given by (10), (12) and (13) over the interval (0, T] with respective initial conditions z_0 = x_0, G_0 = I_u and ψ_0 = 0_{u×u}. Although in practice a numerical solver must be used, we assume that the solution can be obtained over a sufficiently fine time grid to allow a reasonable approximation of the ODE solution at an arbitrary time t ∈ (0, T], denoted by z_t, G_t and ψ_t. Given a value x_t at time t ∈ [0, T), the LNA (without restart) approximates the intractable transition probability under the MJP by

p_{lna}(y_T|X_t = x_t, c) = N(y_T; P′\{z_T + G_{T|t}(x_t − z_t)\}, P′\{G_{T|t} ψ_{T|t} G′_{T|t}\}P + Σ),

where G_{T|t} and ψ_{T|t} are the solutions of (12) and (13) integrated over (t, T] with initial conditions G_{t|t} = I_u and ψ_{t|t} = 0_{u×u}. Crucially, the ODE system satisfied by z_t is not re-integrated (and hence the residual term at time t is M̂_t = x_t − z_t). Moreover, G_{T|t} and ψ_{T|t} can be obtained without further integration. We have that G_s = G_{s|t}G_t for s ≥ t, since both sides satisfy (12) in s and agree at s = t, and therefore the first identity we require is

G_{T|t} = G_T G_t^{-1}. (16)

Similarly,

ψ_{T|t} = \int_t^T G_{s|t}^{-1} β(z_s, c) (G_{s|t}^{-1})′ ds = \int_t^T G_t G_s^{-1} β(z_s, c) (G_s^{-1})′ G_t′ ds,

where we have used (16) to obtain the last expression. The second identity we require is therefore

ψ_{T|t} = G_t (ψ_T − ψ_t) G_t′. (17)

Hence, given z_t, G_t and ψ_t for t ∈ (0, T], p_{lna}(y_T|X_t = x_t, c) is easily evaluated via repeated application of (16) and (17). Additionally obtaining p_{lna}(y_T|X_t = x′, c) is straightforward by replacing the residual x_t − z_t with x′ − z_t. Hence, only one full integration of (10), (12) and (13) over (0, T] is required, giving a computationally efficient construct. The conditioned hazard takes the form

h̃_i(x_t, c|y_T) = h_i(x_t, c_i) \frac{p_{lna}(y_T|X_t = x_t + S^i, c)}{p_{lna}(y_T|X_t = x_t, c)}, i = 1, . . . , v. (18)

In Section 6 we describe how, in the case of unknown X_0, it is possible to make further computational savings using this technique. The accuracy of p_{lna} (and therefore the accuracy of the resulting conditioned hazard) is likely to depend on T, the length of the inter-observation period over which a realisation of the conditioned process is required.
For example, the residual process M̂_t will approximate the true (intractable) residual process increasingly poorly if z_t and X_t diverge significantly as t increases. We investigate the effect of inter-observation time in the next section.
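The identities (16) and (17) are easy to check numerically in the scalar case. For the death model with hazard c x, (12) and (13) are solved analytically by G_t = e^{−ct} and ψ_t = x_0(e^{ct} − 1), and the sketch below evaluates G_{T|t} and ψ_{T|t} without any re-integration (names are illustrative):

```python
import math

# Scalar death-model check of identities (16) and (17): here G_t = e^{-ct}
# and psi_t = x0 (e^{ct} - 1) solve (12) and (13) analytically.
def G(t, c):
    return math.exp(-c * t)

def psi(t, c, x0):
    return x0 * (math.exp(c * t) - 1.0)

def G_cond(t, T, c):
    # identity (16): G_{T|t} = G_T G_t^{-1}
    return G(T, c) / G(t, c)

def psi_cond(t, T, c, x0):
    # identity (17): psi_{T|t} = G_t (psi_T - psi_t) G_t'
    return G(t, c) * (psi(T, c, x0) - psi(t, c, x0)) * G(t, c)
```

The accompanying check integrates the defining integral for ψ_{T|t} by quadrature and confirms that it agrees with the identity.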

Applications
In order to examine the empirical performance of the methods proposed in Section 4, we consider three examples of increasing complexity: a simple (and tractable) death model, the stochastic Lotka-Volterra model examined by Boys et al. (2008), among others, and an epidemic susceptible-infected-removed (SIR) model. For the latter, we use the best-performing LNA-based construct to drive a pseudo-marginal Metropolis-Hastings (PMMH) scheme. Using real data consisting of susceptibles and infectives during the well-studied Eyam plague (Raggett, 1982), we compare bridge-based PMMH with a standard implementation (using blind forward simulation) and a recently proposed scheme based on the alive particle filter (Del Moral et al., 2015).

Death model
We consider a single reaction, governing a single species X, of the form

X −→ ∅,

with associated hazard function h(x_t, c) = cx_t, where x_t denotes the state of the system at time t. Under the assumption of an error-free observation scenario, the conditioned hazard of Golightly and Wilkinson (2015), given by (4), takes the form

h̃(x_t, c|y_T) = \frac{x_t − x_T}{∆t},

where x_T = y_T and we recall that ∆t = T − t. The CLE is given by

dX_t = −cX_t dt + \sqrt{cX_t} dW_t.

Although the CLE is tractable in this special case (Cox et al., 1985), for reaction networks of reasonable size and complexity the CLE will be intractable. We therefore implement the approach of Fearnhead (2008) by taking the conditioned hazard as in (8), where p_{cle} is based on a single-time-step numerical approximation of the CLE. The Euler-Maruyama approximation gives

p_{cle}(x_T|X_t = x_t, c) = N(x_T; x_t − cx_t∆t, cx_t∆t).

The ODE system characterising the LNA (equations (10), (12) and (13)) with respective initial conditions z_0 = x_0, G_0 = 1 and ψ_0 = 0 can be solved analytically to give

z_t = x_0 e^{−ct}, G_t = e^{−ct}, ψ_t = x_0(e^{ct} − 1).

Hence, for the LNA with restart, we have that

p_{lnar}(x_T|X_t = x_t, c) = N(x_T; x_t e^{−c∆t}, x_t e^{−c∆t}(1 − e^{−c∆t})).

For the LNA without restart, we obtain

p_{lna}(x_T|X_t = x_t, c) = N(x_T; x_t e^{−c∆t}, x_0 e^{−cT}(1 − e^{−c∆t})).

In what follows, we took c = 0.5 and x_0 = 50 to be fixed. The end-point x_T was chosen as either the median (denoted by X_{T,(50)}) or the lower 1% or upper 99% quantile (denoted by X_{T,(1)} and X_{T,(99)}) of X_T|X_0 = 50. To assess the performance of the proposed approach as observations are made with increasing time sparsity, we took T ∈ {0.5, 1, 2}. We applied weighted resampling (Algorithm 2) with five different hazard functions: the unconditioned 'blind' hazard function, the conditioned hazard of Golightly and Wilkinson given by (4), and the Fearnhead approach based on the CLE (8), the LNA with restart (15) and the LNA without restart (18). The resulting algorithms are designated Blind, GW, F-CLE, F-LNAR and F-LNA. Each was run m = 5000 times with N = 10 samples to give a set of 5000 estimates of the transition probability π(x_T|x_0), which we denote by π̂_{1:m}(x_T|x_0).
To compare the algorithms, we report the effective sample size

ESS(π̂_{1:m}) = \frac{\left(\sum_{i=1}^{m} π̂_i\right)^2}{\sum_{i=1}^{m} π̂_i^2}

and the relative mean-squared error

ReMSE(π̂_{1:m}) = \frac{1}{m} \sum_{i=1}^{m} \frac{(π̂_i − π)^2}{π^2},

where π = π(x_T|x_0) can be obtained analytically (e.g. Bailey, 1964) as

π(x_T|x_0) = \binom{x_0}{x_T} e^{−cTx_T}(1 − e^{−cT})^{x_0 − x_T}.

The results are summarised in Table 1. Whilst the Blind approach gives broadly comparable performance to the conditioned approaches when X_T = x_{T,(50)}, its performance deteriorates significantly when the end-point is taken to be a value in the tails of X_T|X_0 = 50. This is because the Blind approach struggles to generate trajectories that hit an unlikely end-point. For the GW approach we see a decrease in ESS and an increase in ReMSE as T increases, due to the linear form being unable to adequately describe the exponential-like decay exhibited by the true conditioned process. Whilst the F-CLE approach performs well when X_T = x_{T,(50)} and X_T = x_{T,(99)}, it is unable to match the performance of the LNA-based methods across all scenarios. The effect of not restarting the LNA (i.e., of not re-integrating the LNA ODEs after each value of the jump process is generated) appears to be minimal here, with both F-LNAR and F-LNA giving comparable ESS and ReMSE values.
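Part of the explanation for the strong performance of the LNA-based bridges on this example is that, for the death model, the LNA initialised at x_0 reproduces the mean and variance of the exact binomial transition distribution X_T | x_0 ∼ Bin(x_0, e^{−cT}) exactly (both are x_0 p and x_0 p(1 − p) with p = e^{−cT}). A quick check:

```python
import math

def exact_moments(x0, c, T):
    """Death model: X_T | x_0 ~ Bin(x0, e^{-cT}), so mean x0 p, variance x0 p(1-p)."""
    p = math.exp(-c * T)
    return x0 * p, x0 * p * (1.0 - p)

def lnar_moments(x0, c, T):
    """LNA moments started from x0 at time 0 (analytic solution of (10) and (14))."""
    e = math.exp(-c * T)
    return x0 * e, x0 * e * (1.0 - e)
```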

Lotka-Volterra
We consider here a Lotka-Volterra model of prey (X_1) and predator (X_2) interaction comprising three reactions of the form

R_1 : X_1 −→ 2X_1, R_2 : X_1 + X_2 −→ 2X_2, R_3 : X_2 −→ ∅,

with associated hazard function h(x_t, c) = (c_1 x_{1,t}, c_2 x_{1,t} x_{2,t}, c_3 x_{2,t})′. The stoichiometry matrix is given by

S = \begin{pmatrix} 1 & −1 & 0 \\ 0 & 1 & −1 \end{pmatrix}.
The conditioned hazard described in Section 3.1 and given by (4) can then be obtained. The CLE for the Lotka-Volterra model is given by

dX_t = \begin{pmatrix} c_1 X_{1,t} − c_2 X_{1,t}X_{2,t} \\ c_2 X_{1,t}X_{2,t} − c_3 X_{2,t} \end{pmatrix} dt + \sqrt{β(X_t, c)}\, dW_t, β(X_t, c) = \begin{pmatrix} c_1 X_{1,t} + c_2 X_{1,t}X_{2,t} & −c_2 X_{1,t}X_{2,t} \\ −c_2 X_{1,t}X_{2,t} & c_2 X_{1,t}X_{2,t} + c_3 X_{2,t} \end{pmatrix}.

It is then straightforward to obtain the Euler-Maruyama approximation of the CLE, for use in the conditioned hazard described in Section 3.2 and given by (8).
Following Boys et al. (2008), among others, we impose the parameter values c = (c_1, c_2, c_3)′ = (0.5, 0.0025, 0.3)′ and let x_0 = (50, 50)′. We assume an observation model of the form (1) and take Σ = σ²I_2 with σ = 5, representing low measurement error (since typical simulations of X_{1,t} and X_{2,t} are around two orders of magnitude larger than σ). We generated a number of challenging scenarios by taking y_T as the pair of 1%, 50% or 99% marginal quantiles of Y_T|X_0 = (50, 50)′ for T ∈ {1, 2, 3, 4}. These quantiles are shown in Table 2. Figure 1 compares summaries (mean plus and minus two standard deviations) of each competing bridge process with the same summaries of the true conditioned process (obtained via simulation), for the extreme case of T = 4 and y_T = y_{T,(99)}. Plainly, the blind forward simulation approach and the CLE-based Fearnhead approach (F-CLE) are unable to match the dynamics of the true conditioned process. Moreover, we found that these bridges gave very small effective sample sizes for T ≥ 2, and we therefore omit these results from the following analysis.
We report results based on weighted resampling using N = 5000 with three different hazard functions: the Golightly and Wilkinson approach (GW) and the Fearnhead approach based on the LNA with and without restart (F-LNAR and F-LNA respectively). For the latter (F-LNA), we integrated the LNA once in total. Figure 2 shows, for each value of y_T in Table 2, the effective sample size (ESS), log (base 10) CPU time and log (base 10) ESS per second. Note that for this example, ESS is calculated as

ESS(w̃_{1:N}) = \frac{\left(\sum_{i=1}^{N} w̃_i\right)^2}{\sum_{i=1}^{N} w̃_i^2},

where w̃_{1:N} denotes the unnormalised weights generated by the weighted resampling algorithm. We see that although GW is computationally inexpensive, its ESS decreases as T increases, as it is unable to match the nonlinear dynamics of the true conditioned process. In contrast, although more computationally expensive, F-LNAR and F-LNA maintain high ESS values as T is increased. Consequently, in terms of ESS per second, GW is outperformed by F-LNAR for T ≥ 3 and F-LNA for T ≥ 2. Due to not having to restart the LNA ODEs after each simulated value of the jump process, F-LNA is around an order of magnitude faster than F-LNAR in terms of CPU time, with the difference increasing as T is increased. Given the comparable ESS values obtained for F-LNAR and F-LNA, we see that in terms of ESS/s, F-LNA outperforms F-LNAR by at least an order of magnitude in all cases, and outperforms GW by 1-2 orders of magnitude when T = 4.
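The ESS statistic used here is a one-liner on the unnormalised weights:

```python
def ess(weights):
    """Effective sample size from unnormalised weights:
    ESS = (sum_i w_i)^2 / sum_i w_i^2."""
    s = float(sum(weights))
    return s * s / sum(w * w for w in weights)
```

Equal weights give ESS = N, while a single dominant weight drives ESS towards 1.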

Model and data
The susceptible-infected-removed (SIR) epidemic model has two species (susceptibles X_1 and infectives X_2) and two reaction channels (infection of a susceptible and removal of an infective):

R_1 : X_1 + X_2 −→ 2X_2, R_2 : X_2 −→ ∅.

The vector of rate constants is c = (c_1, c_2)′ and the stoichiometry matrix is given by

S = \begin{pmatrix} −1 & 0 \\ 1 & −1 \end{pmatrix}.

The hazard function is given by h(x_t, c) = (c_1 x_{1,t} x_{2,t}, c_2 x_{2,t})′. For the linear noise approximation, the Jacobian matrix F_t is given by

F_t = \begin{pmatrix} −c_1 z_{2,t} & −c_1 z_{1,t} \\ c_1 z_{2,t} & c_1 z_{1,t} − c_2 \end{pmatrix}.

The ODEs characterising the LNA solution, given by (10), (12) and (13), are intractable. As in Section 5.2, we use the deSolve package in R whenever a numerical solution is required. We consider data consisting of 8 observations on susceptibles and infectives during the outbreak of plague in the village of Eyam, England. The data are taken over a four-month period from June 18th 1666 and are presented here in Table 3. Note that the infective population is estimated from a list of deaths, and by assuming a fixed illness length (Raggett, 1982).

Pseudo-marginal Metropolis-Hastings
Let y = {y_{t_i}}, i = 1, . . . , 8, denote the observations at times 0 = t_1 < · · · < t_8 = 4. The latent Markov jump process over the time interval (t_i, t_{i+1}] is denoted by x_{(t_i, t_{i+1}]}. Note that under the assumption of no measurement error, we have X_{t_i} = y_{t_i}, i = 1, . . . , 8. Upon ascribing a prior density p(c) to the rate constants c, Bayesian inference may proceed via the marginal parameter posterior p(c|y) ∝ p(c)p(y|c), where

p(y|c) = \prod_{i=1}^{7} p(y_{t_{i+1}}|y_{t_i}, c) (21)

is the observed data likelihood. Although p(y|c) is intractable, we note that each term in (21) can be seen as the normalising constant of

p(x_{(t_i,t_{i+1}]}|y_{t_i}, y_{t_{i+1}}, c) ∝ p(x_{(t_i,t_{i+1}]}|x_{t_i}, c) p(y_{t_{i+1}}|x_{t_{i+1}}),

where p(y_{t_{i+1}}|x_{t_{i+1}}) takes the value 1 if x_{t_{i+1}} = y_{t_{i+1}} and 0 otherwise. Hence, running steps 1(a) and (b) of Algorithm 2 with x_0 and y_T replaced by x_{t_i} and y_{t_{i+1}} respectively can be used to unbiasedly estimate p(y_{t_{i+1}}|y_{t_i}, c). No resampling is required, since only those trajectories that coincide with the observation y_{t_{i+1}} will have non-zero weight. Explicitly, we have the unbiased estimator

p̂(y_{t_{i+1}}|y_{t_i}, c) = \frac{1}{N} \sum_{j=1}^{N} \frac{p(X^j_{(t_i,t_{i+1}]}|x_{t_i}, c)\, p(y_{t_{i+1}}|X^j_{t_{i+1}})}{q(X^j_{(t_i,t_{i+1}]}|x_{t_i}, y_{t_{i+1}}, c)}, (22)

where X^j_{(t_i,t_{i+1}]} is an independent draw from the path probability function q(·|x_{t_i}, y_{t_{i+1}}, c) associated with the bridge construct. Multiplying the p̂(y_{t_{i+1}}|y_{t_i}, c), i = 1, . . . , 7, then gives an unbiased estimator of the observed data likelihood p(y|c).
An alternative unbiased estimator of the observed data likelihood can be found by using (a special case of) the alive particle filter (Del Moral et al., 2015). Essentially, forward draws are repeatedly generated from p(·|x_{t_i}, c) (via Gillespie's direct method) until N + 1 trajectories that match the observation are obtained. Let n_i denote the number of simulations required to generate N + 1 matches with y_{t_{i+1}}. The estimator is then given by

p̂(y_{t_{i+1}}|y_{t_i}, c) = \frac{N}{n_i − 1}. (23)

Let U ∼ p(·|c) denote the flattened vector of all random variables required to generate the estimator of the observed data likelihood, which we denote by p̂_U(y|c). The pseudo-marginal Metropolis-Hastings (PMMH) scheme is an MH scheme that targets the joint density

p(c, u) ∝ p(c) p̂_u(y|c) p(u|c),

for which it is easily checked that the target posterior p(c|y) is a marginal. Using a proposal density q(c*|c)p(u*|c*) gives the acceptance probability

\min\left\{ 1, \frac{p(c^*)\, p̂_{u^*}(y|c^*)}{p(c)\, p̂_u(y|c)} × \frac{q(c|c^*)}{q(c^*|c)} \right\}.

Practical advice for choosing N to balance mixing performance and computational cost can be found in Doucet et al. (2015) and Sherlock et al. (2015): the variance of the log-posterior (denoted σ²_N, computed with N samples) at a central value of c (e.g. the estimated posterior median) should be around 2. In what follows, we use a random walk on log c as the parameter proposal. The innovation variance is taken to be the marginal posterior variance of log c estimated from a pilot run, further scaled to give an acceptance rate of around 0.2-0.3. We followed Ho et al. (2018) by adopting independent N(0, 100²) priors for log c_i, i = 1, 2.
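A minimal pseudo-marginal random-walk scheme can be sketched as follows (illustrative names; the likelihood estimator would be supplied by Algorithm 2 or the alive particle filter). Storing and reusing the estimate at the current value of the chain is what keeps the invariant distribution exact despite the estimator's noise.

```python
import math, random

def pmmh(logprior, loglik_hat, theta0, n_iters, step, seed=0):
    """Pseudo-marginal MH sketch: Gaussian random walk on theta (here, log c),
    with loglik_hat(theta) the log of a noisy but unbiased likelihood estimate
    (e.g. from (22) or (23)). The estimate at the current theta is stored and
    reused across iterations. Names are illustrative, not from the paper."""
    rng = random.Random(seed)
    theta = list(theta0)
    ll = loglik_hat(theta)
    chain = []
    for _ in range(n_iters):
        prop = [v + rng.gauss(0.0, step) for v in theta]    # RW proposal
        ll_prop = loglik_hat(prop)
        log_a = logprior(prop) + ll_prop - logprior(theta) - ll
        if math.log(rng.random()) < log_a:                   # accept/reject
            theta, ll = prop, ll_prop
        chain.append(list(theta))
    return chain
```

With a noise-free log-likelihood the scheme reduces to plain random-walk Metropolis, which provides a simple correctness check against a known target.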

Results
We ran PMMH using the observed data likelihood estimator based on (22), with trajectories drawn either using forward simulation or the Fearnhead approach based on the LNA (without restart). We designate the former as "Blind" and the latter as "F-LNA". Additionally, we ran PMMH using the observed data likelihood estimator based on (23). We designate this scheme as "Alive".
We ran each scheme for 10^4 iterations. For Alive, we followed Drovandi and McCutchan (2016) by terminating any likelihood calculation that exceeded 100,000 forward simulations, and rejecting the corresponding move. Marginal posterior densities can be found in Figure 3 and are consistent with the posterior summaries reported by Ho et al. (2018). Figure 4 summarises the posterior distribution of X_t|x_0, y_{0.5}, c, where c is fixed at the estimated posterior mean. We note the nonlinear behaviour of the conditioned process over this time interval, with similar nonlinear dynamics observed for other intervals (not reported). Table 4 summarises the computational and statistical performance of the competing inference schemes. We measure statistical efficiency by calculating the minimum (over each parameter chain) effective sample size per second (mESS/s). As is appropriate for MCMC output, we use

ESS = \frac{n_{iters}}{1 + 2\sum_{k=1}^{\infty} α_k},

where α_k is the autocorrelation of the series at lag k and n_{iters} is the number of iterations in the main monitoring run. Inspection of Table 4 reveals that although use of the alive particle filter only requires N = 8 (compared to N = 5000 and N = 100 for Blind and F-LNA respectively), it exhibits the largest CPU time. We found that, for parameter values in the tails of the posterior, Alive would often require many thousands of forward simulations to obtain N = 8 matches. Consequently, Alive is outperformed by Blind by a factor of 2 in terms of overall efficiency. Use of the LNA-driven bridge (without restart) gives a further improvement over Blind by a factor of 2.

Discussion
Performing efficient sampling of a Markov jump process (MJP) between a known value and a potentially partial or noisy observation is a key requirement of simulation-based approaches to parameter inference. Generating such end-point conditioned trajectories, known as bridges, is challenging due to the intractability of the probability function governing the conditioned process. Approximating the hazard function associated with the conditioned process (that is, the conditioned hazard), and correcting draws obtained via this hazard function using weighted resampling or Markov chain Monte Carlo, offers a viable solution to the problem. Recent work in this direction (Fearnhead, 2008; Golightly and Wilkinson, 2015) gives approximate hazard functions that utilise a Gaussian approximation of the MJP. For example, Golightly and Wilkinson (2015) approximate the number of reactions between observation times as Gaussian. Fearnhead (2008) recognises that the conditioned hazard can be written in terms of the intractable transition probability associated with the MJP, and replaces this transition probability with a Gaussian transition density obtained from the Euler-Maruyama approximation of the chemical Langevin equation. In both approaches, the remaining time until the next observation is treated as a single discretisation. Consequently, the accuracy of the resulting bridges deteriorates as the inter-observation time increases.
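Generically, the correction step works as follows: draw bridges under an approximate conditioned hazard, attach an importance weight accounting for the discrepancy between the proposal and the true dynamics, and resample. A minimal sketch, in which `propose_path` is a hypothetical bridge sampler returning a path together with its log importance weight (it is not the construct of either cited paper):

```python
import numpy as np

def bridge_weighted_resampling(propose_path, n_particles, rng=None):
    """Weighted resampling of end-point conditioned draws.  `propose_path`
    returns (path, log_weight), where log_weight corrects for the mismatch
    between the approximate conditioned hazard used to generate the path
    and the true (unconditioned) MJP dynamics."""
    rng = np.random.default_rng() if rng is None else rng
    paths, logw = zip(*(propose_path(rng) for _ in range(n_particles)))
    w = np.exp(np.array(logw) - max(logw))   # stabilise before normalising
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)
    return [paths[i] for i in idx], w
```

The closer the approximate conditioned hazard is to the true one, the more uniform the weights, and the less degeneracy is introduced at the resampling step.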
Starting with the form of the conditioned hazard function, we have proposed a novel bridge construct by replacing the intractable MJP transition probability with the transition density governing the linear noise approximation (LNA). Whilst our approach also involves a Gaussian approximation, we find that the tractability of the LNA can be exploited to give an accurate bridge construct. Essentially, the LNA solution can be re-integrated over each observation window to maintain accuracy. However, the cost of 'restarting' the LNA in this way is likely to preclude its practical use. We have therefore further proposed an implementation that requires only a single full integration of the ordinary differential equation system governing the LNA. Our experiments demonstrated superior performance of the LNA-based bridge over existing constructs, especially in data-sparse scenarios. Using a real data application, we further demonstrated the potential of the proposed methodology in allowing efficient parameter inference.
This article has focused on bridges from a known initial condition. When the initial condition is unknown, as typically arises in a particle filter-based analysis, a sample {x_0^1, . . . , x_0^N} from the distribution of the initial state is available, and a separate bridge to the observation is required from each element of the sample. In this case, two different implementations of the LNA bridge without restarting are possible. In the first implementation, trajectories X^i | x_0^i, y_T are generated using one full integration of (10), (12) and (13) over (0, T] for each x_0^i. That is, each trajectory has (10) initialised at x_0^i. In the second implementation, (10), (12) and (13) are integrated just once, irrespective of the number of required trajectories. This can be achieved by initialising (10) at some plausible value, e.g. E(X_0). Although the second implementation will be more computationally efficient than the first, some loss of accuracy is expected, especially when the uncertainty in X_0 is large. A single integration, however, may well be adequate in the cases which are the focus of this article: where the observation noise is small.
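The two implementations can be contrasted with a toy sketch. Here `euler_ode` is a crude stand-in for the integration of (10), (12) and (13), and `draw_bridge` is a hypothetical function mapping the ODE solution to a bridge draw; both are illustrative assumptions rather than the implementation used in the paper.

```python
import numpy as np

def euler_ode(f, z0, T, dt):
    """Simple Euler solve of an ODE system dz/dt = f(z) over (0, T]."""
    z, t, path = np.asarray(z0, float), 0.0, [np.asarray(z0, float)]
    while t < T - 1e-12:
        z = z + dt * f(z)
        t += dt
        path.append(z)
    return np.array(path)

def per_particle_bridges(f, x0_sample, T, dt, draw_bridge):
    """Implementation 1: one full ODE integration per initial state x_0^i."""
    return [draw_bridge(euler_ode(f, x0, T, dt)) for x0 in x0_sample]

def shared_bridge(f, x0_sample, T, dt, draw_bridge):
    """Implementation 2: integrate once from a plausible value (here the
    sample mean) and reuse the solution for every particle -- cheaper, but
    less accurate when the uncertainty in X_0 is large."""
    sol = euler_ode(f, np.mean(x0_sample, axis=0), T, dt)
    return [draw_bridge(sol) for _ in x0_sample]
```

The second scheme trades accuracy for a roughly N-fold saving in ODE solves, which is the setting in which we expect it to be adequate when observation noise is small.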
Finally, we note that the proposed bridge is likely to perform poorly in scenarios where species dynamics are inherently discrete, since the LNA is then likely to provide a poor approximation of the intractable MJP transition probability. Of course, in such scenarios, when relatively few reactions occur over the time-course of interest, simple forward simulation is likely to offer a computationally efficient alternative. Use of the proposed bridge in multi-scale scenarios, where some reactions occur much more frequently than others, remains the subject of ongoing research.

Figure 4: SIR model. Mean and two standard deviation intervals for the true conditioned process X_t | x_0, y_0.5, c over the first observation interval, with c = (0.02, 3.2)′.