Efficient sampling of conditioned Markov jump processes

We consider the task of generating draws from a Markov jump process (MJP) between two time points at which the process is known. Resulting draws are typically termed bridges, and the generation of such bridges plays a key role in simulation-based inference algorithms for MJPs. The problem is challenging due to the intractability of the conditioned process, necessitating the use of computationally intensive methods such as weighted resampling or Markov chain Monte Carlo. An efficient implementation of such schemes requires an approximation of the intractable conditioned hazard/propensity function that is both cheap and accurate. In this paper, we review some existing approaches to this problem before outlining our novel contribution. Essentially, we leverage the tractability of a Gaussian approximation of the MJP and suggest a computationally efficient implementation of the resulting conditioned hazard approximation. We compare and contrast our approach with existing methods using three examples.


Introduction
Markov jump processes (MJPs) can be used to model a wide range of discrete-valued, continuous-time processes. Our focus here is on the MJP representation of a reaction network, which has been ubiquitously applied in areas such as epidemiology (Fuchs 2013; Lin and Ludkovski 2013; McKinley et al. 2014), population ecology (Matis et al. 2007; Boys et al. 2008) and systems biology (Wilkinson 2009, 2018; Sherlock et al. 2014). Whilst exact, forward simulation of this class of MJP is straightforward (Gillespie 1977), the reverse problem of performing fully Bayesian inference for the parameters governing the MJP given partial and/or noisy observations is made challenging by the intractability of the observed data likelihood. Simulation-based approaches to inference typically involve "filling in" event times and types between the observation times. A key repeated step in many inference mechanisms starts with a sample of possible states at one observation time and, for each element of the sample, creates a trajectory starting with the sample value and ending at the time of the next observation with a value that is consistent with the next observation. The resulting conditioned samples are typically referred to as bridges, and ideally, the bridge should be a draw from the exact distribution of the path given the initial condition and the observation. However, except for a few simple cases, exact simulation of MJP bridges is infeasible, necessitating approximate bridge constructs that can be used as a proposal mechanism inside a weighted resampling and/or Markov chain Monte Carlo (MCMC) scheme.

Correspondence: Andrew Golightly, andrew.golightly@ncl.ac.uk. 1 School of Mathematics and Statistics, Newcastle University, Newcastle Upon Tyne, UK. 2 Department of Mathematics and Statistics, Lancaster University, Lancaster, UK.
The focus of this paper is the development of an approximate bridge construct that is both accurate and computationally efficient. Our contribution can be applied in a generic observation regime that allows for discrete, partial and noisy measurements of the MJP, and is particularly effective compared to competitors in the most difficult regime, where the observations are sparse in time and the observation variance is small. Many bridge constructs have been proposed for partially observed stochastic differential equations [SDEs; e.g. Delyon and Hu (2006), Bladt and Sørensen (2014), Bladt et al. (2016), Schauer et al. (2017) and Whitaker et al. (2017)], but the literature on bridges for MJPs is relatively sparse. Recent progress involves an approximation of the instantaneous rate or hazard function governing the conditioned process. For example, Boys et al. (2008) linearly interpolate the hazard between observation times but require full and error-free observation of the system of interest. Fearnhead (2008) recognises that the conditioned hazard requires the intractable transition probability mass function of the MJP. This is then directly approximated by substituting the transition density associated with the coarsest possible discretisation of a spatially continuous approximation of the MJP, the chemical Langevin equation (Gillespie 2000). Golightly and Wilkinson (2015) derive a conditioned hazard by approximating the expected number of events between observations, given the observations themselves. Unfortunately, the latter two approaches typically perform poorly when the behaviour of the conditioned process is nonlinear.
We take the approach of Fearnhead (2008) as a starting point and replace the intractable MJP transition probability with the transition density governing the linear noise approximation (LNA) (Kurtz 1970; Elf and Ehrenberg 2003; Komorowski et al. 2009; Schnoerr et al. 2017). Whilst the LNA has been used as an inferential model [see e.g. Ruttor and Opper (2009) and Ruttor et al. (2010) for a maximum likelihood approach and Stathopoulos and Girolami (2013) and Fearnhead et al. (2014) for an MCMC approach], we believe that this is the first attempt to use the LNA to develop a bridge construct for simulation of conditioned MJPs. We find that the LNA offers superior accuracy over a single step of the CLE (which must be discretised in practice), at the expense of computational efficiency. Notably, the LNA solution requires, for each event time in each trajectory, integrating forwards from the event time to the observation time a system of ordinary differential equations (ODEs) whose dimension is quadratic in the number of MJP components. We therefore leverage the linear Gaussian structure of the LNA to derive a bridge construct that only requires a single full integration of the LNA ODEs, irrespective of the number of transition events on each bridge or the number of bridges required. We compare the resulting novel construct to several existing approaches using three examples of increasing complexity. In the final, real-data application, we demonstrate use of the construct within a pseudo-marginal Metropolis-Hastings scheme, for performing fully Bayesian inference for the parameters governing an epidemic model. The remainder of this paper is organised as follows. In Sect. 2, we define a Markov jump process as a probabilistic description of a reaction network. We consider the task of sampling conditioned jump processes in Sect. 3 and review two existing approaches. Our novel contribution is presented in Sect. 4 and illustrated in Sect. 5. Conclusions are drawn in Sect. 6.

Reaction networks
Consider a reaction network involving $u$ species $\mathcal{X}_1, \mathcal{X}_2, \ldots, \mathcal{X}_u$ and $v$ reactions $R_1, R_2, \ldots, R_v$, such that reaction $R_i$ is written as

$R_i: \quad a_{i1}\mathcal{X}_1 + \cdots + a_{iu}\mathcal{X}_u \longrightarrow b_{i1}\mathcal{X}_1 + \cdots + b_{iu}\mathcal{X}_u,$

where $a_{ij}$ denotes the number of molecules of $\mathcal{X}_j$ consumed by reaction $R_i$ and $b_{ij}$ denotes the number of molecules of $\mathcal{X}_j$ produced by reaction $R_i$. Let $X_{j,t}$ denote the (discrete) number of species $\mathcal{X}_j$ at time $t$, and let $X_t$ be the $u$-vector $X_t = (X_{1,t}, X_{2,t}, \ldots, X_{u,t})'$. The effect of a particular reaction is to change the system state $X_t$ abruptly and discretely. Hence, if the $i$th reaction occurs at time $t$, the new state becomes

$X_t = x_{t^-} + S^{(i)},$

where $S^{(i)}$ denotes the $i$th column of the $u \times v$ stoichiometry matrix $S$, whose $(j, i)$th element is $b_{ij} - a_{ij}$. The time evolution of $X_t$ is therefore most naturally described by a continuous-time, discrete-valued Markov process defined in the following section.

Markov jump processes
We model the time evolution of $X_t$ via a Markov jump process (MJP), so that the state of the system at time $t$ is

$X_t = x_0 + \sum_{i=1}^{v} S^{(i)} R_{i,t},$

where $x_0$ is the initial system state and $R_{i,t}$ denotes the number of times that the $i$th reaction occurs by time $t$. The process $R_{i,t}$ is a counting process with intensity $h_i(x_t)$, known in this setting as the reaction hazard, which depends on the current state of the system $x_t$. Explicitly, we have that

$R_{i,t} = Y_i\left(\int_0^t h_i(X_s)\, ds\right),$

where the $Y_i$, $i = 1, \ldots, v$, are independent, unit-rate Poisson processes (see e.g. Kurtz (1972) or Wilkinson (2018) for further details of this representation). The hazard function is given by $h(x_t) = (h_1(x_t), \ldots, h_v(x_t))'$. Under the standard assumption of mass-action kinetics, $h_i$ is proportional to a product of binomial coefficients. That is,

$h_i(x_t, c_i) = c_i \prod_{j=1}^{u} \binom{x_{j,t}}{a_{ij}},$

where $c_i$ is the rate constant associated with reaction $R_i$ and $c = (c_1, c_2, \ldots, c_v)'$ is the vector of rate constants. Since in this article, except in Sect. 5.3, the rate constants are assumed to be known, fixed quantities, we drop them from the notation where possible.
Given a value of the initial system state $x_0$, exact realisations of the MJP can be generated via Gillespie's direct method (Gillespie 1977), given by Algorithm 1.

Algorithm 1 Gillespie's direct method
1. Set $t := 0$ and $x_t := x_0$.
2. Compute the combined hazard $h_0(x_t) = \sum_{i=1}^{v} h_i(x_t)$. If $h_0(x_t) = 0$, output $x_t$ and stop.
3. Simulate the time to the next event, $t' \sim Exp(h_0(x_t))$.
4. Simulate the reaction index, $i$, as a discrete random quantity with probabilities $h_i(x_t)/h_0(x_t)$, $i = 1, \ldots, v$.
5. Put $x_{t+t'} := x_t + S^{(i)}$, where $S^{(i)}$ denotes the $i$th column of $S$.
6. Put $t := t + t'$. Output $x_t$ and $t$. If $t < T$, return to step 2.
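The paper's experiments are coded in R; as an illustration only, a minimal Python sketch of Gillespie's direct method (function and variable names are our own) might look as follows:

```python
import numpy as np

def gillespie(x0, S, hazard, T, rng):
    """Gillespie's direct method (Algorithm 1).

    x0     : initial state (length-u array)
    S      : u x v stoichiometry matrix
    hazard : function mapping a state to the length-v vector of hazards
    T      : end time
    Returns event times and states of one exact MJP path on [0, T].
    """
    t, x = 0.0, np.asarray(x0, dtype=float)
    times, states = [t], [x.copy()]
    while True:
        h = hazard(x)
        h0 = h.sum()
        if h0 <= 0.0:                      # no further events possible
            break
        dt = rng.exponential(1.0 / h0)     # time to next event
        if t + dt > T:
            break
        i = rng.choice(len(h), p=h / h0)   # reaction index
        x = x + S[:, i]
        t += dt
        times.append(t)
        states.append(x.copy())
    return times, states

# Example usage: a pure-death model, single reaction X -> 0 with hazard c*x
S = np.array([[-1.0]])
haz = lambda x: np.array([0.5 * x[0]])
rng = np.random.default_rng(1)
times, states = gillespie([50], S, haz, T=2.0, rng=rng)
```

Each event here decrements the single species by one, so the simulated path is non-increasing.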

Sampling conditioned MJPs
Denote by $X = \{X_s \mid 0 < s \leq T\}$ the MJP sample path over the interval $(0, T]$. Complete information on an observed sample path $x$ corresponds to all reaction times and types. To this end, let $n_r$ denote the total number of reaction events; reaction times (assumed to be in increasing order) and types are denoted by $(t_i, \nu_i)$, $i = 1, \ldots, n_r$, $\nu_i \in \{1, \ldots, v\}$, and we take $t_0 = 0$ and $t_{n_r+1} = T$.
Suppose that the initial state $x_0$ is a known fixed value and that (a subset of components of) the process is observed at time $T$ subject to Gaussian error, giving a single observation $y_T$ on the random variable

$Y_T = P' X_T + \varepsilon_T, \qquad \varepsilon_T \sim N(0, \Sigma). \qquad (1)$

Here, $Y_T$ is a length-$d$ vector, $P$ is a constant matrix of dimension $u \times d$, and $\varepsilon_T$ is a length-$d$ Gaussian random vector. The role of the matrix $P$ is to provide a flexible set-up allowing for various observation scenarios. For example, taking $P$ to be the $u \times u$ identity matrix corresponds to the case of observing all components of $X_t$ (subject to error). We denote the density linking $Y_T$ and $X_T$ by $p(y_T|x_T)$.
We consider the task of generating trajectories from

$p(x|x_0, y_T) = \frac{p(x|x_0)\, p(y_T|x_T)}{p(y_T|x_0)} \propto p(x|x_0)\, p(y_T|x_T). \qquad (2)$

Here, $p(x|x_0)$ is the complete data likelihood (Wilkinson 2018), which takes the form

$p(x|x_0) = \left\{\prod_{i=1}^{n_r} h_{\nu_i}\left(x_{t_{i-1}}\right)\right\} \exp\left\{-\sum_{i=1}^{n_r+1} h_0\left(x_{t_{i-1}}\right)\left(t_i - t_{i-1}\right)\right\}, \qquad (3)$

where $h_0$ is as defined in line 2 of Algorithm 1. Although $p(x|x_0, y_T)$ will typically be intractable, generating draws from $p(x|x_0)$ is straightforward via Gillespie's direct method (Algorithm 1). This immediately suggests drawing samples from (2) using a numerical scheme such as weighted resampling. However, as discussed in Golightly and Wilkinson (2015), drawing unconditioned trajectories from $p(x|x_0)$ and weighting by $p(y_T|x_T)$ is likely to lead to highly variable weights, unless the level of intrinsic stochasticity of $X_t$ is outweighed by the variance of the observation process. Our overarching aim, therefore, is to find an approximating MJP whose dynamics remain tractable under conditioning on $y_T$. The resulting construct can then be used to generate proposed trajectories within the weighted resampling scheme. We will show that this is possible via the derivation of an approximate conditioned hazard function, $\tilde{h}(x_t|y_T)$, $t \in (0, T]$, that can be used in place of $h(x_t)$ in Algorithm 1. The form of $\tilde{h}(x_t|y_T)$ that we initially derive depends explicitly on $t$, so that sampling events might not be straightforward; however, the time dependence is sufficiently small that it can be ignored, and the resulting bridge mechanism, which has a constant rate between events, still leads to efficient proposals.

Weighted resampling
Let $q(x|x_0, y_T)$ denote the complete data likelihood for a sample path $x$ drawn from an approximate jump process with hazard function $\tilde{h}(x_t|y_T)$. The importance weight associated with $x$ is given by

$w(x) = \frac{d\mathbb{P}}{d\mathbb{Q}}(x)\, p(y_T|x_T),$

where $d\mathbb{P}/d\mathbb{Q}$ is the Radon-Nikodym derivative of the true Markov jump process ($\mathbb{P}$) with respect to the approximating process ($\mathbb{Q}$) and can be derived in an entirely rigorous way (Brémaud 1981). An informal approach is provided by Wilkinson (2018), giving the Radon-Nikodym derivative as

$\frac{d\mathbb{P}}{d\mathbb{Q}}(x) = \frac{p(x|x_0)}{q(x|x_0, y_T)},$

where $p(x|x_0)$ is the complete data likelihood and $q(x|x_0, y_T)$ is defined analogously, with $h$ replaced by $\tilde{h}$. As noted above, the explicit dependence of $\tilde{h}$ on $t$ is ignored so that both $h_0$ and $\tilde{h}_0$ are piecewise constant (between reaction events). Hence, in practice, we evaluate the weight as

$\tilde{w}(x) = \frac{\prod_{i=1}^{n_r} h_{\nu_i}(x_{t_{i-1}}) \exp\left\{-\sum_{i=1}^{n_r+1} h_0(x_{t_{i-1}})(t_i - t_{i-1})\right\}}{\prod_{i=1}^{n_r} \tilde{h}_{\nu_i}(x_{t_{i-1}}|y_T) \exp\left\{-\sum_{i=1}^{n_r+1} \tilde{h}_0(x_{t_{i-1}}|y_T)(t_i - t_{i-1})\right\}}\; p(y_T|x_T).$

The general weighted resampling algorithm is given by Algorithm 2.

Algorithm 2 Weighted resampling for MJPs
1. For $j = 1, 2, \ldots, N$:
 (a) Draw $x^j \sim q(\cdot|x_0, y_T)$ by running Algorithm 1 with hazard function $\tilde{h}(\cdot|y_T)$.
 (b) Construct the unnormalised weight $\tilde{w}^j = \tilde{w}(x^j)$ and normalised weight $w^j = \tilde{w}^j / \sum_{k=1}^{N} \tilde{w}^k$.
2. Resample (with replacement) from the discrete distribution on $x^1, \ldots, x^N$ using the normalised weights as probabilities.

It is straightforward to show that the average unnormalised weight gives an unbiased estimator of the transition density $p(y_T|x_0)$. This estimator is given by

$\hat{p}(y_T|x_0) = \frac{1}{N} \sum_{j=1}^{N} \tilde{w}(x^j), \qquad (4)$

where $x^j$ is an independent draw from $q(\cdot|x_0, y_T)$. In the case of an unknown initial value $X_0$ with density $p(x_0)$, Algorithm 2 can be initialised with a sample of size $N$ from $p(x_0)$, in which case (4) can be used to estimate $p(y_T)$.
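To illustrate the estimator (4) concretely, the following Python sketch (names are our own) uses the simplest possible proposal, the unconditioned blind hazard $\tilde{h} = h$, for a pure-death model with a single Gaussian-error observation; in that special case the Radon-Nikodym term cancels and each weight reduces to $p(y_T|x_T)$:

```python
import numpy as np
from math import comb, exp, sqrt, pi

def norm_pdf(y, m, s2):
    """Univariate Gaussian density N(y; m, s2)."""
    return exp(-0.5 * (y - m) ** 2 / s2) / sqrt(2.0 * pi * s2)

def simulate_death(x0, c, T, rng):
    """Exact draw of X_T for the pure-death model (X -> 0, hazard c*x)."""
    t, x = 0.0, x0
    while x > 0:
        t += rng.exponential(1.0 / (c * x))
        if t > T:
            break
        x -= 1
    return x

def wr_estimate(x0, c, T, y_T, sigma2, N, rng):
    """Step 1 of Algorithm 2 with the blind proposal: each unnormalised
    weight is p(y_T | x_T); the average is unbiased for p(y_T | x_0)."""
    w = [norm_pdf(y_T, simulate_death(x0, c, T, rng), sigma2) for _ in range(N)]
    return float(np.mean(w))

def exact_pyT(x0, c, T, y_T, sigma2):
    """Exact p(y_T | x_0): X_T | X_0 = x0 ~ Bin(x0, exp(-cT)), convolved
    with the Gaussian observation density."""
    p = exp(-c * T)
    return sum(comb(x0, x) * p**x * (1 - p)**(x0 - x) * norm_pdf(y_T, x, sigma2)
               for x in range(x0 + 1))
```

With a large $N$, `wr_estimate` should agree closely with `exact_pyT`; with a conditioned hazard in place of the blind one, far fewer samples would be needed for the same accuracy.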
It remains for us to find a suitable form of $\tilde{h}(x_t|y_T)$. In what follows, we review two existing methods before presenting a novel, alternative approach. Comparisons are made in Sect. 5.

Golightly and Wilkinson approach
The approach of Golightly and Wilkinson (2015) is based on a (linear) Gaussian approximation of the number of reaction events between the current event time and the next observation time. Suppose we have simulated as far as time $t$ and let $\Delta R_t$ denote the number of reaction events over the time $\Delta t = T - t$. Golightly and Wilkinson (2015) approximate $\Delta R_t$ by assuming a constant reaction hazard over the whole non-infinitesimal time interval, $\Delta t$. A Gaussian approximation to the corresponding Poisson distribution then gives

$\Delta R_t \sim N\left(h(x_t)\Delta t,\; H(x_t)\Delta t\right),$

where $H(x_t) = \operatorname{diag}\{h(x_t)\}$. Under the Gaussian observation regime given by (1), it should be clear that the joint distribution of $\Delta R_t$ and $Y_T$ can then be approximated by

$\begin{pmatrix} \Delta R_t \\ Y_T \end{pmatrix} \sim N\left\{\begin{pmatrix} h(x_t)\Delta t \\ P'\left(x_t + S\, h(x_t)\Delta t\right) \end{pmatrix},\; \begin{pmatrix} H(x_t)\Delta t & H(x_t) S' P \Delta t \\ P' S H(x_t)\Delta t & P' S H(x_t) S' P \Delta t + \Sigma \end{pmatrix}\right\}.$

Taking the expectation of $\Delta R_t \mid Y_T = y_T$ and dividing by $\Delta t$ gives an approximate conditioned hazard as

$\tilde{h}(x_t|y_T) = h(x_t) + H(x_t) S' P \left(P' S H(x_t) S' P \Delta t + \Sigma\right)^{-1} \left\{y_T - P'\left(x_t + S\, h(x_t)\Delta t\right)\right\}. \qquad (5)$

By ignoring the explicit time dependence of $\tilde{h}(x_t|y_T)$ (i.e. after each event, until the next event, fixing $\Delta t$ to its value at the most recent event), we can use (5), suitably truncated to ensure positivity, in Algorithm 1 to give trajectories $x^j$, $j = 1, \ldots, N$, to be used in Algorithm 2. Whilst use of (5) has been shown to work well in several applications, the assumptions that $\Delta R_t$ is Gaussian and that the hazard is constant over a time interval of length $\Delta t$ are often unreasonable, as we will show.
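The conditioned hazard (5) translates directly into code; here is a Python sketch (the truncation constant and all names are our own):

```python
import numpy as np

def gw_hazard(x, h, S, P, Sigma, y_T, dt, eps=1e-10):
    """Conditioned hazard (5) of Golightly and Wilkinson (2015):
    Gaussian conditioning of DeltaR_t ~ N(h dt, H dt) on Y_T = y_T.

    x : current state (u,), h : hazard vector (v,), S : u x v stoichiometry,
    P : u x d observation matrix, Sigma : d x d noise variance, dt = T - t.
    """
    H = np.diag(h)
    mu_y = P.T @ (x + S @ h * dt)               # approximate mean of Y_T
    C = P.T @ S @ H @ S.T @ P * dt + Sigma      # approximate variance of Y_T
    cond = h + (H @ S.T @ P) @ np.linalg.solve(C, y_T - mu_y)
    return np.maximum(cond, eps)                # truncate to keep rates positive
```

As a sanity check, for the pure-death model with error-free observation ($\Sigma \to 0$, $P = 1$) the formula collapses to $(x_t - x_T)/\Delta t$, the expected number of remaining deaths per unit time.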

Fearnhead approach
As noted by Fearnhead (2008) [see also Ruttor and Opper (2009)], an expression for the intractable conditioned hazard can be derived exactly. Consider again an interval $[0, T]$ and suppose that we have simulated as far as time $t \in [0, T]$. For reaction $R_i$, let $x' = x_t + S^{(i)}$. Recall that $S^{(i)}$ denotes the $i$th column of the stoichiometry matrix, so that $x'$ is the state of the MJP after a single occurrence of $R_i$. The conditioned hazard of $R_i$ satisfies

$h_i(x_t|y_T) = h_i(x_t)\, \frac{p(y_T|X_t = x')}{p(y_T|X_t = x_t)}. \qquad (6)$

In practice, the intractable transition density $p(y_T|x_t)$ must be replaced by a suitable approximation. Golightly and Kypraios (2017) [see also Fearnhead (2008) for the case of no measurement error] used the transition density governing the (discretised) chemical Langevin equation (CLE). The CLE (Gillespie 1992, 2000) is an Itô stochastic differential equation (SDE) that has the same infinitesimal mean and variance as the MJP. It is written as

$dX_t = S\, h(X_t)\, dt + \sqrt{S\, H(X_t)\, S'}\; dW_t, \qquad (7)$

where $W_t$ is a $u$-vector of standard Brownian motion and $H(X_t) = \operatorname{diag}\{h(X_t)\}$. Since the CLE can rarely be solved analytically, it is common to work with a discretisation such as the Euler-Maruyama discretisation

$X_T = x_t + S\, h(x_t)\Delta t + \sqrt{S\, H(x_t)\, S' \Delta t}\; Z, \qquad (8)$

where $\Delta t = T - t$ and $Z$ is a standard multivariate Gaussian random variable. Combining (8) with the observation model (1) gives an approximate conditioned hazard as

$\tilde{h}_i(x_t|y_T) = h_i(x_t)\, \frac{p_{cle}(y_T|X_t = x')}{p_{cle}(y_T|X_t = x_t)}, \qquad (9)$

where

$p_{cle}(y_T|X_t = x_t) = N\left(y_T;\; P'\left\{x_t + S\, h(x_t)\Delta t\right\},\; P' S\, H(x_t)\, S' P \Delta t + \Sigma\right).$

As with the approach of Golightly and Wilkinson (2015), the remaining time $\Delta t$ until the observation is treated as a single discretisation step. However, unless $\Delta t = T - t$ is very small, $p_{cle}$ is unlikely to achieve a reasonable approximation of the transition probability under the jump process. In what follows, therefore, we seek an approximation that is both accurate and computationally inexpensive.
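A Python sketch of the one-step CLE density and the resulting hazard (9) (names are our own) may clarify the construction:

```python
import numpy as np

def p_cle(y_T, x, h_fn, S, P, Sigma, dt):
    """One-step Euler-Maruyama CLE approximation of p(y_T | X_t = x),
    combining (8) with the Gaussian observation model (1)."""
    h = h_fn(x)
    H = np.diag(h)
    mean = P.T @ (x + S @ h * dt)
    var = P.T @ S @ H @ S.T @ P * dt + Sigma
    r = y_T - mean
    k = len(y_T)
    return float(np.exp(-0.5 * r @ np.linalg.solve(var, r))
                 / np.sqrt((2 * np.pi) ** k * np.linalg.det(var)))

def fearnhead_cle_hazard(x, h_fn, S, P, Sigma, y_T, dt):
    """Approximate conditioned hazard (9):
    h_i(x) * p_cle(y_T | x + S^(i)) / p_cle(y_T | x)."""
    h = h_fn(x)
    denom = p_cle(y_T, x, h_fn, S, P, Sigma, dt)
    return np.array([h[i] * p_cle(y_T, x + S[:, i], h_fn, S, P, Sigma, dt) / denom
                     for i in range(len(h))])
```

For a death model observed well below its expected end point, the conditioned rate exceeds the blind rate, pushing trajectories towards the observation.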

Improved constructs
We take (6) as a starting point and replace p(y T |X t = x ) and p(y T |X t = x t ) using the linear noise approximation (LNA) (Kurtz 1970;Elf and Ehrenberg 2003;Komorowski et al. 2009;Schnoerr et al. 2017). We first describe the LNA and then consider two constructions for bridges from a known initial condition, x 0 , to a potentially noisy observation Y T , based on different implementations of the LNA. The first is expected to be more accurate as the approximate hazard is recalculated after every event by re-integrating a set of ODEs from the event time to the observation time both from the current value and once for each possible next reaction. The second is more computationally efficient as the recalculation is based on a single, initial integration of a set of ODEs from time 0 to time T .

Linear noise approximation
For notational simplicity, we rewrite the CLE in (7) as

$dX_t = \alpha(X_t)\, dt + \sqrt{\beta(X_t)}\; dW_t, \qquad (10)$

where $\alpha(X_t) = S\, h(X_t)$ and $\beta(X_t) = S\, H(X_t)\, S'$, and derive the LNA by directly approximating (10). The basic idea behind the construction of the LNA is to adopt the partition $X_t = z_t + M_t$, where the deterministic process $z_t$ satisfies an ordinary differential equation and the residual stochastic process $M_t$ can be well approximated under the assumption that residual stochastic fluctuations are "small" relative to the deterministic process. Taking the first two terms in the Taylor expansion of $\alpha(X_t)$ and the first term in the Taylor expansion of $\beta(X_t)$ gives

$\frac{dz_t}{dt} = \alpha(z_t) \qquad (11)$

and an SDE satisfied by an approximate residual process $\tilde{M}_t$ of the form

$d\tilde{M}_t = F_t \tilde{M}_t\, dt + \sqrt{\beta(z_t)}\; dW_t, \qquad (12)$

where $F_t$ is the Jacobian matrix with $(i, j)$th element $\partial \alpha_i(z_t)/\partial z_{j,t}$. Equation (12) can be solved by first defining the $u \times u$ fundamental matrix $G_t$ as the solution of

$\frac{dG_t}{dt} = F_t G_t, \qquad G_0 = I_u, \qquad (13)$

where $I_u$ is the $u \times u$ identity matrix. Under the assumption of a fixed or Gaussian initial condition, $\tilde{M}_0 \sim N(m_0, V_0)$, it can be shown that (see e.g. Fearnhead et al. 2014)

$\tilde{M}_t \sim N\left(G_t m_0,\; G_t (V_0 + \psi_t) G_t'\right), \qquad \frac{d\psi_t}{dt} = G_t^{-1} \beta(z_t) \left(G_t'\right)^{-1}, \qquad \psi_0 = 0_{u \times u}. \qquad (14)$

It is convenient here to write $V_t = G_t (V_0 + \psi_t) G_t'$, and it is straightforward to show that $V_t$ satisfies

$\frac{dV_t}{dt} = V_t F_t' + \beta(z_t) + F_t V_t. \qquad (15)$

In practice, if $x_0$ is a known fixed value, then we may take $z_0 = x_0$, $m_0 = 0_u$ (the $u$-vector of zeros) and $V_0 = 0_{u \times u}$ (the $u \times u$ zero matrix). Solving (11) and (15) gives the approximating distribution of $X_t$ as

$X_t \sim N(z_t, V_t).$

In this case, the ODE system governing the fundamental matrix $G_t$ need not be solved.
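The paper integrates these ODEs with deSolve/LSODA in R; as a sketch, the coupled system (11) and (15) can also be integrated with a hand-rolled fixed-step RK4 solver (names and step count are our own), checked here on the pure-death model, for which $z_T = x_0 e^{-cT}$ and $V_T = x_0 e^{-cT}(1 - e^{-cT})$ in closed form:

```python
import numpy as np

def lna_moments(alpha, F, beta, z0, T, n_steps=2000):
    """Integrate the LNA ODEs (11) and (15) with fixed-step RK4:
    dz/dt = alpha(z), dV/dt = V F(z)' + beta(z) + F(z) V,
    with z_0 = x_0 and V_0 = 0. Returns (z_T, V_T)."""
    u = len(z0)

    def rhs(s):
        z, V = s[:u], s[u:].reshape(u, u)
        Ft = F(z)
        return np.concatenate([alpha(z), (V @ Ft.T + beta(z) + Ft @ V).ravel()])

    s = np.concatenate([np.asarray(z0, dtype=float), np.zeros(u * u)])
    dt = T / n_steps
    for _ in range(n_steps):
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return s[:u], s[u:].reshape(u, u)

# Death model: alpha(z) = -c z, F(z) = [[-c]], beta(z) = [[c z]]
c, x0, T = 0.5, 50.0, 2.0
zT, VT = lna_moments(lambda z: -c * z,
                     lambda z: np.array([[-c]]),
                     lambda z: np.array([[c * z[0]]]),
                     [x0], T)
```

The numerical output should match the closed-form moments to solver accuracy.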

LNA bridge with restart
Now, consider again the problem of approximating the MJP transition probability $p(y_T|X_t = x_t)$. Given a value $x_t$ at time $t \in [0, T)$, the ODE system given by (11) and (15) can be re-integrated over the time interval $(t, T]$ to give output denoted by $z_{T|t}$ and $V_{T|t}$; the initial conditions are $z_{t|t} = x_t$ and $V_{t|t} = 0_{u \times u}$. We refer to use of the LNA in this way as the LNA with restart (LNAR). The approximation to $p(y_T|X_t = x_t)$ is given by

$p_{lnar}(y_T|X_t = x_t) = N\left(y_T;\; P' z_{T|t},\; P' V_{T|t} P + \Sigma\right).$

Likewise, $p_{lnar}(y_T|X_t = x')$ can be obtained by initialising (11) with $z_{t|t} = x'$ and integrating again. Hence, the approximate conditioned hazard is given by

$\tilde{h}_i(x_t|y_T) = h_i(x_t)\, \frac{p_{lnar}(y_T|X_t = x')}{p_{lnar}(y_T|X_t = x_t)}. \qquad (16)$

Whilst use of the LNA in this way is likely to give an accurate approximation to the intractable transition probability (especially as $t$ approaches $T$), the conditioned hazard in (6) must be calculated for $x_t$ and for each $x'$ obtained after the $v$ possible transitions of the process. Consequently, the ODE system given by (11) and (15) must be solved at each event time for each of the $v + 1$ possible states. Since the LNA ODEs are rarely tractable (necessitating the use of a numerical solver), this approach is likely to be prohibitively expensive, computationally. In the next section, we outline a novel strategy for reducing the cost associated with integrating the LNA ODE system, that only requires one full integration.

LNA bridge without restart
Consider the solution of the ODE system given by (11), (13) and (14) over the interval $(0, T]$ with respective initial conditions $z_0 = x_0$, $G_0 = I_u$ and $\psi_0 = 0_{u \times u}$. Although in practice a numerical solver must be used, we assume that the solution can be obtained over a sufficiently fine time grid to allow reasonable approximation of the ODE solution at an arbitrary time $t \in (0, T]$, denoted by $z_t$, $G_t$ and $\psi_t$. Given a value $x_t$ at time $t \in [0, T)$, the LNA (without restart) approximates the intractable transition probability under the MJP by

$p_{lna}(y_T|X_t = x_t) = N\left(y_T;\; P'\left\{z_T + G_{T|t}(x_t - z_t)\right\},\; P' G_{T|t}\, \psi_{T|t}\, G_{T|t}' P + \Sigma\right),$

where $G_{T|t}$ and $\psi_{T|t}$ are the solutions of (13) and (14) integrated over $(t, T]$ with initial conditions $G_{t|t} = I_u$ and $\psi_{t|t} = 0_{u \times u}$. Crucially, the ODE system satisfied by $z_t$ is not re-integrated (and hence the residual term at time $t$ is $\tilde{M}_t = x_t - z_t$). Moreover, $G_{T|t}$ and $\psi_{T|t}$ can be obtained without further integration. We have that

$G_T = G_{T|t}\, G_t,$

and therefore, the first identity we require is

$G_{T|t} = G_T\, G_t^{-1}. \qquad (17)$

Similarly,

$\psi_{T|t} = \int_t^T G_{s|t}^{-1}\, \beta(z_s)\, \left(G_{s|t}'\right)^{-1} ds = G_t \left\{\int_t^T G_s^{-1}\, \beta(z_s)\, \left(G_s'\right)^{-1} ds\right\} G_t',$

where we have used (17) to obtain the last expression. The second identity we require is therefore

$\psi_{T|t} = G_t\, (\psi_T - \psi_t)\, G_t'. \qquad (18)$

Hence, given $z_t$, $G_t$ and $\psi_t$ for $t \in (0, T]$, $p_{lna}(y_T|X_t = x_t)$ is easily evaluated via repeated application of (17) and (18). Additionally, obtaining $p_{lna}(y_T|X_t = x')$ is straightforward by replacing the residual $x_t - z_t$ with $x' - z_t$. Hence, only one full integration of (11), (13) and (14) over $(0, T]$ is required, giving a computationally efficient construct. The conditioned hazard takes the form

$\tilde{h}_i(x_t|y_T) = h_i(x_t)\, \frac{p_{lna}(y_T|X_t = x')}{p_{lna}(y_T|X_t = x_t)}. \qquad (19)$

In Sect. 6, we describe how, in the case of unknown $X_0$, it is possible to make further computational savings using this technique. The accuracy of $p_{lna}$ (and therefore the accuracy of the resulting conditioned hazard) is likely to depend on $T$, the length of the inter-observation period over which a realisation of the conditioned process is required. For example, the residual process $\tilde{M}_t$ will approximate the true (intractable) residual process increasingly poorly if $z_t$ and $X_t$ diverge significantly as $t$ increases.
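The identities (17) and (18) translate directly into code. A Python sketch (names are our own) returning the no-restart mean and variance of $X_T \mid X_t = x_t$:

```python
import numpy as np

def no_restart_moments(z_t, z_T, G_t, G_T, psi_t, psi_T, x_t):
    """Mean and variance of X_T | X_t = x_t under the LNA without restart,
    using G_{T|t} = G_T G_t^{-1}  (17) and
    psi_{T|t} = G_t (psi_T - psi_t) G_t'  (18)."""
    G_Tt = G_T @ np.linalg.inv(G_t)
    psi_Tt = G_t @ (psi_T - psi_t) @ G_t.T
    mean = z_T + G_Tt @ (x_t - z_t)      # residual M_t = x_t - z_t propagated
    var = G_Tt @ psi_Tt @ G_Tt.T         # V_{T|t} = G_{T|t} psi_{T|t} G_{T|t}'
    return mean, var
```

For the pure-death model, where $z_t = x_0 e^{-ct}$, $G_t = e^{-ct}$ and $\psi_t = x_0(e^{ct} - 1)$, this reduces to mean $x_t e^{-c(T-t)}$ and variance $x_0 e^{-cT}(1 - e^{-c(T-t)})$, which can be verified numerically.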
We investigate the effect of inter-observation time in the next section.

Applications
In order to examine the empirical performance of the methods proposed in Sect. 4, we consider three examples of increasing complexity. These are a simple (and tractable) death model, the stochastic Lotka-Volterra model examined by Boys et al. (2008) among others, and a susceptible-infected-removed (SIR) epidemic model. For the last of these, we use the best-performing LNA-based construct to drive a pseudo-marginal Metropolis-Hastings (PMMH) scheme to perform fully Bayesian inference for the rate constants $c$. Using real data consisting of susceptibles and infectives during the well-studied Eyam plague (Raggett 1982), we compare bridge-based PMMH with a standard implementation (using blind forward simulation) and a recently proposed scheme based on the alive particle filter. All algorithms are coded in R and were run on a desktop computer with an Intel Core i7-4770 processor at 3.40 GHz.

Death model
We consider a single reaction, governing a single species $\mathcal{X}$, of the form

$R_1: \quad \mathcal{X} \longrightarrow \emptyset,$

with associated hazard function $h(x_t) = c\, x_t$, where $x_t$ denotes the state of the system at time $t$.
Under the assumption of an error-free observation scenario, the conditioned hazard of Golightly and Wilkinson (2015), given by (5), takes the form

$\tilde{h}(x_t|x_T) = \frac{x_t - x_T}{\Delta t}, \qquad \Delta t = T - t.$

The CLE is given by

$dX_t = -c X_t\, dt + \sqrt{c X_t}\; dW_t.$

Although the CLE is tractable in this special case (Cox et al. 1985), for reaction networks of reasonable size and complexity the CLE will be intractable. We therefore implement the approach of Fearnhead (2008) by taking the conditioned hazard as in (9), where $p_{cle}$ is based on a single time step numerical approximation of the CLE. The Euler-Maruyama approximation gives

$X_T = x_t - c x_t \Delta t + \sqrt{c x_t \Delta t}\; Z.$

The ODE system characterising the LNA (Eqs. (11), (13) and (14)) with respective initial conditions $z_0 = x_0$, $G_0 = 1$ and $\psi_0 = 0$ can be solved analytically to give

$z_t = x_0 e^{-ct}, \qquad G_t = e^{-ct}, \qquad \psi_t = x_0\left(e^{ct} - 1\right).$

Hence, for the LNA with restart, we have that

$p_{lnar}(x_T|X_t = x_t) = N\left(x_T;\; x_t e^{-c\Delta t},\; x_t e^{-c\Delta t}\left(1 - e^{-c\Delta t}\right)\right).$

For the LNA without restart, we obtain

$p_{lna}(x_T|X_t = x_t) = N\left(x_T;\; x_t e^{-c\Delta t},\; x_0 e^{-cT}\left(1 - e^{-c\Delta t}\right)\right).$

In what follows, we took $c = 0.5$ and $x_0 = 50$ to be fixed. The end-point $x_T$ was chosen as either the median, lower 1% or upper 99% quantile of the forward process $X_T|X_0 = 50$. We adopt the notation that $x_{T,(\alpha)}$ is the $\alpha$% quantile of $X_T|X_0 = 50$; hence, we took the end-point $x_T \in \{x_{T,(1)}, x_{T,(50)}, x_{T,(99)}\}$. To assess the performance of the proposed approach as observations are made with increasing time sparsity, we took $T \in \{0.5, 1, 2\}$. We applied weighted resampling (Algorithm 2) with five different hazard functions: the unconditioned 'blind' hazard function, the conditioned hazard of Golightly and Wilkinson given by (5), and the Fearnhead approach based on the CLE (9), the LNA with restart (16) and the LNA without restart (19). The resulting algorithms are designated blind, CH (conditioned hazard), F-CLE, F-LNAR and F-LNA, respectively. Each was run $m = 5000$ times with $N = 10$ samples to give a set of 5000 estimates of the transition probability $\pi(x_T|x_0)$, which we denote by $\hat{\pi}_{1:m}(x_T|x_0)$. To compare the algorithms, we report the effective sample size ESS($\hat{\pi}_{1:m}$) and the relative mean-squared error

$\text{ReMSE}(\hat{\pi}_{1:m}) = \frac{1}{m} \sum_{j=1}^{m} \left\{\frac{\hat{\pi}_j(x_T|x_0) - \pi(x_T|x_0)}{\pi(x_T|x_0)}\right\}^2,$

where $\pi(x_T|x_0)$ can be obtained analytically [e.g.
Bailey (1964)] as

$\pi(x_T|x_0) = \binom{x_0}{x_T} e^{-cT x_T} \left(1 - e^{-cT}\right)^{x_0 - x_T}.$

The results are summarised in Table 1. Whilst the blind approach gives broadly comparable performance to the conditioned approaches when $x_T = x_{T,(50)}$, its performance deteriorates significantly when the end point is taken to be a value in the tails of $X_T|X_0 = 50$. This is due to the blind approach struggling to generate trajectories that hit the neighbourhood of an unlikely end point. For the CH approach, we see a decrease in ESS and an increase in ReMSE as $T$ increases, due to the linear form being unable to adequately describe the exponential-like decay exhibited by the true conditioned process. Whilst the F-CLE approach performs well when $x_T = x_{T,(50)}$ and $x_T = x_{T,(99)}$, it is unable to match the performance of the LNA-based methods across all scenarios. The effect of not restarting the LNA (i.e. of not re-integrating the LNA ODEs after each value of the jump process is generated) appears to be minimal here, with both F-LNAR and F-LNA giving comparable ESS and ReMSE values.
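The closed-form quantities for the death model are easy to check numerically; a short Python sketch (names are our own) confirming that, for this model, the LNA moments coincide with the exact binomial moments:

```python
import numpy as np
from math import comb, exp

def death_exact_pmf(x0, c, T):
    """Exact transition law of the pure-death process:
    X_T | X_0 = x0 ~ Bin(x0, exp(-cT)), cf. Bailey (1964)."""
    p = exp(-c * T)
    return np.array([comb(x0, x) * p**x * (1 - p)**(x0 - x)
                     for x in range(x0 + 1)])

# For this model the LNA mean and variance coincide with the exact
# binomial moments: z_T = x0 exp(-cT), V_T = x0 exp(-cT)(1 - exp(-cT)).
pmf = death_exact_pmf(50, 0.5, 2.0)
xs = np.arange(51)
mean_exact = float(xs @ pmf)
var_exact = float(((xs - mean_exact) ** 2) @ pmf)
z_T = 50 * exp(-1.0)
V_T = z_T * (1 - exp(-1.0))
```

This agreement of moments explains the strong performance of the LNA-based constructs on this example; the Gaussian shape, rather than the moments, is the only approximation.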

Lotka-Volterra
We consider here a Lotka-Volterra model of prey ($\mathcal{X}_1$) and predator ($\mathcal{X}_2$) interaction comprising three reactions of the form

$R_1: \mathcal{X}_1 \longrightarrow 2\mathcal{X}_1, \qquad R_2: \mathcal{X}_1 + \mathcal{X}_2 \longrightarrow 2\mathcal{X}_2, \qquad R_3: \mathcal{X}_2 \longrightarrow \emptyset.$

The stoichiometry matrix is given by

$S = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \end{pmatrix}$

and the associated hazard function is
$h(x_t) = \left(c_1 x_{1,t},\; c_2 x_{1,t} x_{2,t},\; c_3 x_{2,t}\right)'.$
The conditioned hazard described in Sect. 3.2 and given by (5) can then be obtained.
The CLE for the Lotka-Volterra model is given by

$d\begin{pmatrix} X_1 \\ X_2 \end{pmatrix} = \begin{pmatrix} c_1 X_1 - c_2 X_1 X_2 \\ c_2 X_1 X_2 - c_3 X_2 \end{pmatrix} dt + \begin{pmatrix} c_1 X_1 + c_2 X_1 X_2 & -c_2 X_1 X_2 \\ -c_2 X_1 X_2 & c_2 X_1 X_2 + c_3 X_2 \end{pmatrix}^{1/2} d\begin{pmatrix} W_1 \\ W_2 \end{pmatrix}, \qquad (20)$

after suppressing dependence on $t$. It is then straightforward to obtain the Euler-Maruyama approximation of the CLE, for use in the conditioned hazard described in Sect. 3.3 and given by (9). For the linear noise approximation, the Jacobian matrix $F_t$ is given by

$F_t = \begin{pmatrix} c_1 - c_2 z_{2,t} & -c_2 z_{1,t} \\ c_2 z_{2,t} & c_2 z_{1,t} - c_3 \end{pmatrix}.$

Unfortunately, the ODEs characterising the LNA solution, given by (11), (13) and (14), are intractable, necessitating the use of a numerical solver. In what follows, we use the deSolve package in R, with the default LSODA integrator (Petzold 1983).
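A quick way to guard against sign errors in $F_t$ is to check the analytic Jacobian against finite differences of the drift; a Python sketch (names are our own):

```python
import numpy as np

def lv_alpha(z, c):
    """Drift alpha(z) = S h(z) for the Lotka-Volterra CLE/LNA."""
    return np.array([c[0] * z[0] - c[1] * z[0] * z[1],
                     c[1] * z[0] * z[1] - c[2] * z[1]])

def lv_jacobian(z, c):
    """Jacobian F_t of the drift, with (i, j)th element d alpha_i / d z_j."""
    return np.array([[c[0] - c[1] * z[1], -c[1] * z[0]],
                     [c[1] * z[1], c[1] * z[0] - c[2]]])
```

Because the drift is bilinear in the state, central differences recover the analytic Jacobian essentially exactly.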
Our initial experiments used the following settings. Following Boys et al. (2008) among others, we imposed the parameter values $c = (c_1, c_2, c_3)' = (0.5, 0.0025, 0.3)'$ and let $x_0 = (50, 50)'$. We assumed an observation model of the form (1) and took $\Sigma = \sigma^2 I_2$ with $\sigma = 5$, representing low measurement error (since typical simulations of $X_{1,t}$ and $X_{2,t}$ are around two orders of magnitude larger than $\sigma$). We generated a number of challenging scenarios by taking $y_T$ as the pair of 1%, 50% or 99% marginal quantiles of $Y_T|X_0 = (50, 50)'$ for $T \in \{1, 2, 3, 4\}$. These quantiles are denoted by $y_{T,(1)}$, $y_{T,(50)}$ and $y_{T,(99)}$, respectively, and are shown in Table 2 (found by repeatedly simulating from the Euler-Maruyama approximation of (20) with $c = (0.5, 0.0025, 0.3)'$ and corrupting $X_{1,T}$ and $X_{2,T}$ with additive $N(0, 5^2)$ noise). Figure 1 compares summaries (mean plus and minus two standard deviations) of each competing bridge process with the same summaries of the true conditioned process, for the case $y_T = y_{T,(99)}$. Plainly, the blind forward simulation approach and the CLE-based Fearnhead approach (F-CLE) are unable to match the dynamics of the true conditioned process. Moreover, we found that these bridges gave very small effective sample sizes for $T \geq 2$, and we therefore omit these results from the following analysis.
We report results based on weighted resampling using $N = 5000$ with three different hazard functions: the Golightly and Wilkinson approach (CH) and the Fearnhead approach based on the LNA with and without restart (F-LNAR and F-LNA, respectively). For the latter (F-LNA), we integrated the LNA once in total. Figure 2 shows, for each value of $y_T$ in Table 2, effective sample size (ESS), log (base 10) CPU time and log (base 10) ESS per second. Note that for this example, ESS is calculated as

$\text{ESS}\left(\tilde{w}^{1:N}\right) = \frac{\left(\sum_{j=1}^{N} \tilde{w}^j\right)^2}{\sum_{j=1}^{N} \left(\tilde{w}^j\right)^2},$

where $\tilde{w}^{1:N}$ denotes the unnormalised weights generated by the weighted resampling algorithm. We see that although CH is computationally inexpensive, ESS decreases as $T$ increases, as it is unable to match the nonlinear dynamics of the true conditioned process. In contrast, although more computationally expensive, F-LNAR and F-LNA maintain high ESS values as $T$ is increased. Consequently, in terms of ESS per second, CH is outperformed by F-LNAR for $T \geq 3$ and by F-LNA for $T \geq 2$. Due to not having to restart the LNA ODEs after each simulated value of the jump process, F-LNA is around an order of magnitude faster than F-LNAR in terms of CPU time, with the difference increasing as $T$ is increased. Given the comparable ESS values obtained for F-LNAR and F-LNA, we see that in terms of ESS/s, F-LNA outperforms F-LNAR by at least an order of magnitude in all cases and outperforms CH by 1-2 orders of magnitude when $T = 4$. The LNA is known to break down as an inferential model in situations involving low counts of the MJP components (Schnoerr et al. 2017).
Therefore, to investigate the performance of the LNA-based conditioned hazard in low-count scenarios, we additionally considered initial conditions with $x_{1,0} = x_{2,0} \in \{10, 25, 50\}$ and took $y_T$ as the median of $Y_T|X_0 = x_0$ for $T \in \{1, 2, 3, 4\}$. To fix the relative effect of the measurement error, we took $\sigma = 1$ for the case $x_0 = (10, 10)'$ and scaled $\sigma$ in proportion to the components of $x_0$ for the remaining scenarios. The resulting values of $y_T$ can be found in Table 3. We report results based on weighted resampling using $N = 5000$ and F-LNA in Fig. 3. We see that when the initial condition is decreased from $x_0 = (50, 50)'$ to $x_0 = (10, 10)'$, ESS decreases by a factor of around 1.6 (4906 vs 2998) when $T = 1$ and 2.5 (4562 vs 1853) when $T = 4$. Nevertheless, computational cost also decreases as $x_0$ decreases (and in turn, the expected number of reaction events in the observation window decreases). Hence, there is little difference in overall efficiency (ESS/s) across the three scenarios.
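For completeness, the ESS measure used in these comparisons is a one-liner; a Python sketch (names are our own):

```python
import numpy as np

def ess(w_tilde):
    """Effective sample size from unnormalised weights:
    ESS = (sum w)^2 / sum(w^2). Equals N for equal weights and
    approaches 1 when a single weight dominates."""
    w = np.asarray(w_tilde, dtype=float)
    return float(w.sum() ** 2 / (w ** 2).sum())
```

ESS per second is then just this quantity divided by the wall-clock time of the weighted resampling run.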

Model and data
The susceptible-infected-removed (SIR) epidemic model has two species (susceptibles $\mathcal{X}_1$ and infectives $\mathcal{X}_2$) and two reaction channels (infection of a susceptible and removal of an infective):

$R_1: \mathcal{X}_1 + \mathcal{X}_2 \longrightarrow 2\mathcal{X}_2, \qquad R_2: \mathcal{X}_2 \longrightarrow \emptyset.$

The vector of rate constants is $c = (c_1, c_2)'$, and the stoichiometry matrix is given by

$S = \begin{pmatrix} -1 & 0 \\ 1 & -1 \end{pmatrix}.$

The hazard function is given by $h(x_t) = \left(c_1 x_{1,t} x_{2,t},\; c_2 x_{2,t}\right)'$. For the linear noise approximation, the Jacobian matrix $F_t$ is given by

$F_t = \begin{pmatrix} -c_1 z_{2,t} & -c_1 z_{1,t} \\ c_1 z_{2,t} & c_1 z_{1,t} - c_2 \end{pmatrix}.$

The ODEs characterising the LNA solution, given by (11), (13) and (14), are intractable. As in Sect. 5.2, we use the deSolve package in R whenever a numerical solution is required.
We consider data consisting of eight observations on susceptibles and infectives during the outbreak of plague in the village of Eyam, England. The data are taken over a four-month period from June 18th 1666 and are presented here in Table 4. Note that the infective population is estimated from a list of deaths, and by assuming a fixed illness length (Raggett 1982).

Pseudo-marginal Metropolis-Hastings
Let $y = \{y_{t_i}\}$, $i = 1, \ldots, 8$, denote the observations at times $0 = t_1 < \cdots < t_8 = 4$. The latent Markov jump process over each inter-observation interval $(t_i, t_{i+1}]$ is unobserved and must be integrated out. Under the assumption of no measurement error, we have that $X_{t_i} = y_{t_i}$, $i = 1, \ldots, 8$. Upon ascribing a prior density $p(c)$ to the rate constants $c$, Bayesian inference may proceed via the marginal parameter posterior

$p(c|y) \propto p(c)\, p(y|c),$

where

$p(y|c) = \prod_{i=1}^{7} p(y_{t_{i+1}}|y_{t_i}, c) \qquad (22)$

is the observed data likelihood. Although $p(y|c)$ is intractable, we note that each term in (22) can be seen as the normalising constant of

$p(x|y_{t_i}, y_{t_{i+1}}, c) \propto p(x|y_{t_i}, c)\, p(y_{t_{i+1}}|x_{t_{i+1}}),$

where $p(y_{t_{i+1}}|x_{t_{i+1}})$ takes the value 1 if $x_{t_{i+1}} = y_{t_{i+1}}$ and 0 otherwise. Hence, running steps 1(a) and (b) of Algorithm 2 with $x_0$ and $y_T$ replaced by $x_{t_i}$ and $y_{t_{i+1}}$, respectively, can be used to unbiasedly estimate $p(y_{t_{i+1}}|y_{t_i}, c)$. No resampling is required, since only those trajectories that coincide with the observation $y_{t_{i+1}}$ will have nonzero weight. By analogy with Eq. (4), and allowing explicit dependence on $c$, we have the unbiased estimator

$\hat{p}(y_{t_{i+1}}|y_{t_i}, c) = \frac{1}{N} \sum_{j=1}^{N} \tilde{w}(x^j), \qquad (23)$

where $x^j$ is an independent draw from $q(\cdot|x_{t_i}, y_{t_{i+1}}, c)$. Then, multiplying the $\hat{p}(y_{t_{i+1}}|y_{t_i}, c)$, $i = 1, \ldots, 7$, gives an unbiased estimator of the observed data likelihood $p(y|c)$.
An alternative unbiased estimator of the observed data likelihood can be found by using (a special case of) the alive particle filter (Del Moral et al. 2015). Essentially, forward draws are repeatedly generated from $p(\cdot \mid x_{t_i}, c)$ (via Gillespie's direct method) until $N + 1$ trajectories that match the observation are obtained. Let $n_i$ denote the number of simulations required to generate $N + 1$ matches with $y_{t_{i+1}}$. The estimator is then given by
$$\hat{p}(y_{t_{i+1}} \mid y_{t_i}, c) = \frac{N}{n_i - 1}. \qquad (24)$$

Let $U \sim p(\cdot \mid c)$ denote the flattened vector of all random variables required to generate the estimator of the observed data likelihood, which we denote by $\hat{p}_U(y \mid c)$. The pseudo-marginal Metropolis-Hastings (PMMH) scheme is an MH scheme that targets the joint density
$$p(c, u \mid y) \propto p(c)\, p(u \mid c)\, \hat{p}_u(y \mid c),$$
for which it is easily checked that
$$\int p(c)\, p(u \mid c)\, \hat{p}_u(y \mid c)\, du = p(c)\, \mathrm{E}_U\{\hat{p}_U(y \mid c)\} = p(c)\, p(y \mid c),$$
where the last equality follows from the unbiasedness property of $\hat{p}_U(y \mid c)$. Hence, we see that the target posterior $p(c \mid y)$ is a marginal of the joint density $p(c, u \mid y)$. Now, running an MH scheme with a proposal density of the form $q(c^* \mid c)\, p(u^* \mid c^*)$ gives the acceptance probability
$$\min\left\{1,\ \frac{p(c^*)\, \hat{p}_{u^*}(y \mid c^*)}{p(c)\, \hat{p}_u(y \mid c)} \times \frac{q(c \mid c^*)}{q(c^* \mid c)}\right\}.$$
Practical advice for choosing $N$ to balance mixing performance and computational cost can be found in Doucet et al. (2015) and Sherlock et al. (2015): the variance of the log of the estimated posterior (denoted $\sigma^2_N$, computed with $N$ samples) at a central value of $c$ (e.g. the estimated posterior median) should be around 2. In what follows, we use a random walk on $\log c$ as the parameter proposal. The innovation variance is taken to be the marginal posterior variance of $\log c$ estimated from a pilot run, further scaled to give an acceptance rate of around 0.2-0.3. We followed Ho et al. (2018) by adopting independent $N(0, 100^2)$ priors for $\log c_i$, $i = 1, 2$.
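The alive estimator (24) can be sketched as follows (Python; `simulate` is again an assumed user-supplied forward simulator, and the simulation budget after which the estimate is abandoned is an illustrative device):

```python
import numpy as np

def alive_estimate(x_start, y_next, dt, c, N, rng, simulate, max_sims=100_000):
    """Alive-particle-filter estimate of p(y_{t+dt} | x_t, c): simulate
    forwards until N + 1 exact matches with y_next are obtained; with n
    the total number of simulations used, N / (n - 1) is unbiased.
    Returns None if the simulation budget is exhausted."""
    matches, n = 0, 0
    while matches < N + 1:
        if n >= max_sims:
            return None                     # abandon; reject the MH move
        n += 1
        if np.array_equal(simulate(x_start, dt, c, rng), y_next):
            matches += 1
    return N / (n - 1)
```

Note that the number of simulations, and hence the cost per likelihood evaluation, is random and can be very large when a match is improbable.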
Although we do not pursue it here, the case of nonzero measurement error is easily accommodated by iteratively running Algorithm 2 in full for each observation time $t_i$, $i = 1, \ldots, 7$. At time $t_i$, $y_T$ is replaced by $y_{t_{i+1}}$ and $x_0$ is replaced by $x^j_{t_i}$. At time $t_1$, $x_0$ can be replaced by a draw from a prior density $p(x_{t_1})$ placed on the unobserved initial value. The product (across time) of the average unnormalised weight can be shown to give an unbiased estimator of the observed data likelihood (Del Moral 2004; Pitt et al. 2012). We refer the reader to Golightly and Wilkinson (2015) and the references therein for further details of the resulting Metropolis-Hastings scheme.

Results
We ran PMMH using the observed data likelihood estimator based on (23), with trajectories drawn using either forward simulation or the Fearnhead approach based on the LNA (without restart). We designate the former as "Blind" and the latter as "F-LNA". Additionally, we ran PMMH using the observed data likelihood estimator based on (24). We designate this scheme as "Alive".
We ran each scheme for $10^4$ iterations. For Alive, we terminated any likelihood calculation that exceeded 100,000 forward simulations, rejecting the corresponding move. Marginal posterior densities can be found in Fig. 4 and are consistent with the posterior summaries reported by Ho et al. (2018). Figure 5 summarises the posterior distribution of $X_t \mid x_0, y_{0.5}, c$, where $c$ is fixed at the estimated posterior mean. We note the nonlinear behaviour of the conditioned process over this time interval, with similar nonlinear dynamics observed for other intervals (not reported).

Table 5 summarises the computational and statistical performance of the competing inference schemes. We measure statistical efficiency by calculating the minimum (over the parameter chains) effective sample size per second (mESS/s). As is appropriate for MCMC output, we use
$$\mathrm{ESS} = \frac{n_{\text{iters}}}{1 + 2 \sum_{k=1}^{\infty} \alpha_k},$$
where $\alpha_k$ is the autocorrelation function for the series at lag $k$ and $n_{\text{iters}}$ is the number of iterations in the main monitoring run. Inspection of Table 5 reveals that although use of the alive particle filter requires only $N = 8$ (compared to $N = 5000$ and $N = 100$ for Blind and F-LNA, respectively), it exhibits the largest CPU time. We found that, for parameter values in the tails of the posterior, Alive would often require many thousands of forward simulations to obtain $N = 8$ matches. Consequently, Alive is outperformed by Blind by a factor of 2 in terms of overall efficiency. Use of the LNA-driven bridge (without restart) gives a further improvement over Blind by a factor of 2.
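The efficiency measure can be computed from MCMC output as in the following sketch (Python; truncating the autocorrelation sum at the first non-positive term is a common convention, not necessarily the choice used to produce Table 5):

```python
import numpy as np

def ess(chain, max_lag=None):
    """Effective sample size: n / (1 + 2 * sum of autocorrelations),
    truncating the sum at the first non-positive autocorrelation."""
    x = np.asarray(chain, dtype=float) - np.mean(chain)
    n = len(x)
    max_lag = max_lag or n // 2
    var = np.dot(x, x) / n                   # lag-0 autocovariance
    acf_sum = 0.0
    for k in range(1, max_lag):
        rho = np.dot(x[:-k], x[k:]) / (n * var)
        if rho <= 0.0:
            break                            # stop at first non-positive lag
        acf_sum += rho
    return n / (1.0 + 2.0 * acf_sum)

def min_ess_per_second(chains, cpu_seconds):
    """mESS/s: the smallest per-parameter ESS divided by CPU time."""
    return min(ess(ch) for ch in chains) / cpu_seconds
```

For an approximately independent chain the ESS is close to the number of iterations, while strong positive autocorrelation drives it towards zero.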

Discussion
Performing efficient sampling of a Markov jump process (MJP) between a known value and a potentially partial or noisy observation is a key requirement of simulation-based approaches to parameter inference. Generating end-point conditioned trajectories, known as bridges, is challenging due to the intractability of the probability function governing the conditioned process. Approximating the hazard function associated with the conditioned process (that is, the conditioned hazard), and correcting draws obtained via this hazard function using weighted resampling or Markov chain Monte Carlo, offers a viable solution to the problem. Recent approaches in this direction (Fearnhead 2008; Golightly and Wilkinson 2015) give approximate hazard functions that utilise a Gaussian approximation of the MJP. For example, Golightly and Wilkinson (2015) approximate the number of reactions between observation times as Gaussian. Fearnhead (2008) recognises that the conditioned hazard can be written in terms of the intractable transition probability associated with the MJP; the transition probability is then replaced with a Gaussian transition density obtained from the Euler-Maruyama approximation of the chemical Langevin equation. In both approaches, the remaining time until the next observation is treated as a single discretisation. Consequently, the accuracy of the resulting bridges deteriorates as the inter-observation time increases. Starting with the form of the conditioned hazard function, we have proposed a novel bridge construct by replacing the intractable MJP transition probability with the transition density governing the linear noise approximation (LNA). Whilst our approach also involves a Gaussian approximation, we find that the tractability of the LNA can be exploited to give an accurate bridge construct: essentially, the LNA solution can be re-integrated over each observation window to maintain accuracy.
The cost of 'restarting' the LNA in this way is, however, likely to preclude its practical use. We have therefore further proposed an implementation that requires only a single full integration of the ordinary differential equation system governing the LNA. Our experiments demonstrated superior performance of the LNA-based bridge over existing constructs, especially in data-sparse scenarios. Whilst the LNA is known to give a poor approximation of the MJP in low-count scenarios (Schnoerr et al. 2017), we note that its role here is the approximation of transition densities over ever-diminishing time intervals. Moreover, the resulting approximate conditioned hazard function is corrected via a weighted resampling scheme. Consequently, we find that use of the LNA in this way is relatively robust to situations involving low counts. Using a real data application, we further demonstrated the potential of the proposed methodology in allowing efficient parameter inference.
When the dimension of the state space is finite, the transition probability from a known state at time 0 to a known state at time T can be calculated exactly and efficiently via the action of a matrix exponential on a vector (e.g. Sidje and Stewart 1999), giving the likelihood directly; alternatively, the uniformisation method of Rao and Teh (2013) may be used for Bayesian inference. The recent article by Georgoulas et al. (2017) extends the standard finite-state-space matrix-exponential method to an infinite-state-space pseudo-marginal MCMC algorithm which uses random truncation (e.g. Glynn and Rhee 2014) to produce a realisation from an unbiased estimator of the likelihood when the observations are exact. In contrast to the algorithms which we have investigated, which simulate paths for the process and whose performance improves as the observation noise increases, any extension to the algorithm of Georgoulas et al. (2017) that allows for observation error would reduce its efficiency. This suggests that for small enough observation noise an extension of the algorithm in Georgoulas et al. (2017) might be more efficient than our non-restarting bridge. Investigations into the relative efficiencies of such algorithms are ongoing.
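For intuition, the finite-state-space calculation can be sketched as follows (Python; the truncated immigration-death chain, its rates and the crude `mat_exp` routine are illustrative assumptions only — in practice one would use a dedicated routine such as scipy.linalg.expm, or better, an expm-times-vector method):

```python
import numpy as np

def mat_exp(A, terms=25, squarings=12):
    """Matrix exponential by scaling and squaring with a truncated
    Taylor series (crude, for illustration only)."""
    B = A / (2.0 ** squarings)
    E, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ B / k
        E = E + term
    for _ in range(squarings):
        E = E @ E
    return E

# Generator Q of an immigration-death chain truncated to states {0,...,5}
# (lam, mu and the truncation level are arbitrary illustrative choices).
lam, mu, m = 1.0, 0.5, 6
Q = np.zeros((m, m))
for i in range(m):
    if i < m - 1:
        Q[i, i + 1] = lam       # immigration
    if i > 0:
        Q[i, i - 1] = mu * i    # death
    Q[i, i] = -Q[i].sum()

# Transition probability from state 2 at time 0 to state 4 at time T
# is the (2, 4) entry of exp(QT), i.e. the action of the matrix
# exponential on the indicator vector of the initial state.
T = 1.5
P = mat_exp(Q * T)
```

Each row of exp(QT) is a probability distribution over states at time T, so row sums are 1 and the likelihood of an exactly observed transition can be read off directly.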
This article has focused on bridges from a known initial condition. When the initial condition is unknown, as typically arises in a particle filter-based analysis, a sample from the distribution of the initial state, $\{x_0^1, \ldots, x_0^N\}$, is available and a separate bridge to the observation is required from each element of the sample. In this case, two different implementations of the LNA bridge without restarting are possible. In the first implementation, trajectories $X^i \mid x_0^i, y_T$ are generated using one full integration of (11), (13) and (14) over $(0, T]$ for each $x_0^i$; that is, each trajectory has (11) initialised at $x_0^i$. In the second implementation, (11), (13) and (14) are integrated just once, irrespective of the number of required trajectories. This can be achieved by initialising (11) at some plausible value, e.g. $\mathrm{E}(X_0)$. Although the second implementation will be more computationally efficient than the first, some loss of accuracy is expected, especially when the uncertainty in $X_0$ is large. A single integration, however, may well be adequate in the cases which are the focus of this article: those where the observation noise is small. Investigating the efficiency of the bridge construct in this scenario, as well as in multi-scale settings (see e.g. Thomas et al. 2014) where some reactions regularly occur more frequently than others, remains the subject of ongoing research.