Time-dependent propagators for stochastic models of gene expression: an analytical method

The inherent stochasticity of gene expression in the context of regulatory networks profoundly influences the dynamics of the involved species. Mathematically speaking, the propagators which describe the evolution of such networks in time are typically defined as solutions of the corresponding chemical master equation (CME). However, it is not possible in general to obtain exact solutions to the CME in closed form, which is due largely to its high dimensionality. In the present article, we propose an analytical method for the efficient approximation of these propagators. We illustrate our method on the basis of two categories of stochastic models for gene expression that have been discussed in the literature. The requisite procedure consists of three steps: a probability-generating function is introduced which transforms the CME into (a system of) partial differential equations (PDEs); application of the method of characteristics then yields (a system of) ordinary differential equations (ODEs) which can be solved using dynamical systems techniques, giving closed-form expressions for the generating function; finally, propagator probabilities can be reconstructed numerically from these expressions via the Cauchy integral formula. The resulting ‘library’ of propagators lends itself naturally to implementation in a Bayesian parameter inference scheme, and can be generalised systematically to related categories of stochastic models beyond the ones considered here.


Motivation
Understanding the process of gene expression in the context of gene regulatory networks is indispensable for gaining insight into the fundamentals of numerous biological processes. However, gene expression can be highly stochastic in nature, both in prokaryotic and in eukaryotic organisms; see e.g. the work by Elowitz et al. (2002), Raj and Oudenaarden (2008), Shahrezaei and Swain (2008b), and the references therein. This inherent stochasticity has a profound influence on the dynamics of the involved species, in particular when their abundance is low. Therefore, gene expression is often appropriately described by stochastic models (Bressloff 2014; Karlebach and Shamir 2008; Thattai and Oudenaarden 2001; Wilkinson 2009). A schematic of the canonical model for gene expression is depicted in Fig. 1. Here, the processes of transcription, translation, and degradation are approximated by single rates.
To test the validity of such stochastic models, a comparison with experimental data needs to be performed. The development of experimental techniques, such as time-lapse fluorescence microscopy (Coutu and Schroeder 2013; Elowitz et al. 2002; Larson et al. 2011; Muzzey and Oudenaarden 2009; Raj et al. 2006; Young et al. 2011), allows for real-time tracking of gene expression dynamics in single cells, providing mRNA or protein abundance time series, such as those depicted in Fig. 2. To select between competing hypotheses on the underlying regulatory networks given measurement data, as well as to infer the values of the corresponding model parameters, we can apply Bayesian inference theory to calculate the likelihood of a given model, which is constructed as follows.
The abundance of a protein, denoted by n, is sampled at times t_i, see Fig. 2, yielding a list of measurement pairs (t_i, n_i) from which transitions (Δt, n_i → n_{i+1}) between states can be extracted; here, Δt = t_{i+1} − t_i is the regular, fixed sampling interval. Next, the underlying stochastic model with parameter set Θ is used to calculate the probabilities of these transitions, which are denoted by P_{n_{i+1}|n_i}(Δt). These so-called propagators give the probability of n_{i+1} proteins being present after time Δt, given an initial protein abundance of n_i. The log-likelihood L(Θ|D) of the parameter set Θ, given the observed data D, is now defined in terms of the propagators as

L(Θ|D) = Σ_i log P_{n_{i+1}|n_i}(Δt). (1.1)

To infer the values of parameters in the model, propagators are calculated for a wide range of parameter combinations, resulting in a 'log-likelihood landscape'; the maximal value of the log-likelihood as a function of the model parameters yields the most likely parameter values, given the experimental data. An example realisation of the above procedure can e.g. be found in the work by Feigelman et al. (2015).
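The log-likelihood construction in (1.1) can be sketched in a few lines; here, the `propagator` callable is a placeholder for the model-specific propagators derived in this article, and the toy propagator used at the end is purely illustrative:

```python
import math

def log_likelihood(propagator, times, counts):
    """Evaluate (1.1): sum log P_{n_{i+1}|n_i}(dt) over consecutive
    measurement pairs (t_i, n_i). `propagator(dt, n_from, n_to)` stands in
    for a model-specific transition probability."""
    ll = 0.0
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        ll += math.log(propagator(dt, counts[i], counts[i + 1]))
    return ll

# Toy example: a propagator that assigns probability 0.5 to every
# observed transition, so two transitions give 2*log(0.5).
toy = lambda dt, n0, n1: 0.5
ll = log_likelihood(toy, [0.0, 5.0, 10.0], [3, 4, 4])
```

In a parameter inference scheme, this sum would be re-evaluated over a grid (or a sampler's trajectory) of parameter sets to build the log-likelihood landscape.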
To calculate the log-likelihood in (1.1) accurately, it is imperative that the values of the propagators can be extracted from the underlying stochastic model for any desired combination of model parameters. In particular, we need to be able to calculate the propagator P_{n|n_0}(t) as a function of time t for any initial protein number n_0. Unfortunately, the highly complex nature of the stochastic models involved makes it very difficult to obtain explicit expressions for these probabilities. Some analytical progress can be made when a steady-state approximation is performed, i.e. when it is assumed that the system is allowed to evolve for a sufficiently long time, such that it converges to a time-independent state. However, the sampling interval Δt used for obtaining experimental data, as seen in Fig. 2, is often short with respect to the protein lifetime. As that lifetime represents a natural time scale for the system dynamics, it follows that the evolution of the probabilities P_{n|n_0}(t) should be studied over short times, in contradiction with the steady-state or long-evolution-time approximations which have previously been employed to derive analytical results (Bokes et al. 2012a; Hornos et al. 2005; Iyer-Biswas and Jayaprakash 2014; Shahrezaei and Swain 2008a). The complex nature of stochastic models for gene expression has led to the widespread use of stochastic simulation techniques, such as Gillespie's algorithm (Gillespie 1977), with the aim of predicting values for the associated propagators from these models; see Feigelman et al. (2016) for recent work combining stochastic simulation with a particle filtering approach.
However, these approaches can still be very time-consuming, due to the (relatively) high dimensionality of the model parameter space, combined with the fact that, for each combination of parameter values, the stochastic model has to be simulated sufficiently many times to yield a probability distribution that can be used to infer the corresponding propagator. For that reason, it is desirable to be able to obtain explicit expressions for the propagator P n|n 0 (t) directly in terms of the model parameters, if necessary in an appropriate approximation.
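To make that computational cost concrete, the following sketch estimates the mean copy number of a plain birth-death process by repeated Gillespie runs; the process is a minimal stand-in for the canonical model of Fig. 1, and the rates and sample sizes are illustrative only:

```python
import random

def gillespie_final_count(k_prod, k_deg, n0, t_end, rng):
    """One realisation of a birth-death process (production at constant
    rate k_prod, per-molecule degradation at rate k_deg); returns the
    copy number at time t_end."""
    t, n = 0.0, n0
    while True:
        a_prod, a_deg = k_prod, k_deg * n
        a_total = a_prod + a_deg
        t += rng.expovariate(a_total)       # waiting time to next event
        if t > t_end:
            return n
        n += 1 if rng.random() * a_total < a_prod else -1

# Estimating a single propagator P_{n|n0}(t) requires many such runs per
# parameter combination, which is what makes simulation-based inference slow.
rng = random.Random(1)
samples = [gillespie_final_count(10.0, 1.0, 0, 8.0, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)          # near the stationary mean 10
```

A full propagator estimate would bin the samples into a histogram over n, and the whole procedure would then be repeated for every parameter combination of interest.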

Analytical method
In the present article, we develop an analytical method for the efficient evaluation of time-dependent propagators in stochastic gene expression models, for arbitrary values of the model parameters. The results of our analysis can be implemented in a straightforward fashion in a Bayesian parameter inference framework, as outlined above.
To demonstrate our approach, we analyse two different stochastic models for gene expression. The first model, henceforth referred to as 'model A', is a model that incorporates autoregulation, where transcription and translation are approximated by single rates and protein can either stimulate or inhibit its own production by influencing the activity of DNA; see Fig. 3. That model was first studied by Iyer-Biswas and Jayaprakash (2014) via a steady-state approximation. The second model, henceforth referred to as 'model B', models both mRNA and protein explicitly and again incorporates DNA switching between an active and an inactive state; see Fig. 4. That model was first studied by Shahrezaei and Swain (2008a) in a long-evolution-time approximation.
Both model A and model B are formulated in terms of the chemical master equation (CME), which is the accepted mathematical representation of stochastic gene expression in the context of the model categories considered here; cf. Iyer-Biswas and Jayaprakash (2014) and Shahrezaei and Swain (2008a), respectively. (Fig. 4 is courtesy of Shahrezaei and Swain (2008a); Copyright (2008) National Academy of Sciences, U.S.A.) Mathematically speaking, the CME is an infinite-dimensional system of linear ordinary differential equations (ODEs) that describes the evolution in time of the probabilities of observing a specific state in the system, given some initial state. Numerous approaches have been suggested for the (approximate) solution of the CME; see e.g. Popović et al. (2016) and the references therein for details. Our method relies on a combination of various techniques from the theory of differential equations and dynamical systems; specifically, we perform three consecutive steps, as follows.
1. CME system → PDE system: we introduce a probability-generating function to convert the CME into a (system of) partial differential equations (PDEs).
2. PDE system → ODE system: applying the method of characteristics, combined, if necessary, with perturbation techniques, we transform the system of PDEs obtained in step 1 into a dynamical system, that is, a system of ODEs.
3. ODE system → explicit solution: making use of either special functions (model A) or multiple-time-scale analysis (model B), we obtain explicit solutions to the dynamical system found in step 2.
We emphasise that the 'characteristic system' of ODEs which is obtained in step 2 is low-dimensional, in contrast to the underlying CME system, as well as that it exhibits additional structure, allowing for the derivation of a closed-form analytical approximation for the associated generating function.
To convert the results of the above procedure into solutions to the original stochastic model, the three steps involved in our analysis have to be reverted. To that end, we require the following three ingredients:
1. Initial conditions are originally stated in terms of the CME, and first have to be reformulated in terms of the corresponding system of PDEs to ensure well-posedness; then, initial conditions can be extracted for the dynamical system that was obtained via the method of characteristics, reverting step 3.
2. To transform solutions to the characteristic system into solutions of the underlying PDE system, the associated 'characteristic transformation' has to be inverted, reverting step 2.
3. Lastly, solutions of the CME have to be extracted from solutions to the resulting PDE system, reverting step 1.
Although the correspondence between the two sets of solutions is exact, theoretically speaking, the complexity of the expressions involved precludes the efficient analytical reconstruction of propagators from their generating functions. Therefore, we propose a novel hybrid analytical-numerical approach which relies on the Cauchy integral formula.
The various steps in our analytical method, as indicated above, are represented in Fig. 5. It is important to mention that the implementation of Bayesian parameter inference, as outlined in Sect. 1.1, is not a topic for the present article; rather, the aim here is to describe our method, and to present analytical results which can readily be implemented in the context of parameter inference. The article hence realises the first stage of our research programme; the natural next stage, which is precisely that implementation, will be the subject of a follow-up article by the same authors.

Outline
The present article is organised as follows. In Sect. 2, we apply the analytical method outlined in Sect. 1.2 to model A, the gene expression model with autoregulation. Here, we use a perturbative approach to incorporate the autoregulatory aspects of the model; the resulting dynamical system can be solved in terms of confluent hypergeometric functions; see §13 of the NIST Digital Library of Mathematical Functions. In Sect. 3, the same method is applied to model B, the model that explicitly incorporates transcription. We also indicate how autoregulation can be added to that model, and how the resulting extended model can be analysed on the basis of our treatment of model A. The analysis carried out in Sects. 2 and 3 yields a 'library' of explicit asymptotic expressions for the probability-generating functions associated to the underlying stochastic models. To obtain quantifiable expressions for their propagators, we introduce a novel hybrid analytical-numerical approach in Sect. 4, which can be readily implemented in the Bayesian parameter inference framework that provided the motivation for our analysis; see Sect. 1.1. We conclude with a discussion of our results, and an outlook to future work, in Sect. 5.

Model A: gene expression with autoregulation
We first demonstrate our analytical method in the context of an autoregulatory stochastic gene expression model, as presented by Iyer-Biswas and Jayaprakash (2014); see also Fig. 3. In the original article (Iyer-Biswas and Jayaprakash 2014), a Poisson representation was used to obtain analytical descriptions for time-independent solutions to the model. For a visual guide to the upcoming analysis, the reader is referred to Fig. 5.

Stochastic model and CME
The basic stochastic model for gene expression is represented by the reaction scheme

D ⇌ D* (forward rate c_f, backward rate c_b), D* → D* + P (rate p_b), P → ∅ (rate p_d). (2.1)

The gene can hence switch between the inactive state D and the active state D*, with switching rates c_f and c_b, respectively. The active gene produces protein (P) with rate p_b, while protein decays with rate p_d. The autoregulatory part of the model is implemented as either positive or negative feedback,

D + P → D* + P (rate a), (2.2a)
D* + P → D + P (rate r): (2.2b)

in the case of autoactivation, viz. (2.2a), protein induces activation of the gene with activation rate a, thereby accelerating its own production; in the case of autorepression, viz. (2.2b), protein deactivates the active gene with repression rate r, impeding its own production.
The CME system that is associated to the reaction scheme in (2.1), with autoactivation as in (2.2a), is given by (2.3). Here, P^{(j)}_n(t) (j = 0, 1) represents the probability of n proteins being present at time t while the gene is either inactive (j = 0) or active (j = 1). The time variable is nondimensionalised by the protein decay rate p_d; the other model parameters are scaled accordingly. Analogously, the CME system for the case of autorepression, as defined in (2.2b), is given by (2.5).
Remark 2.1 A priori, it is possible to incorporate both autoactivation and autorepression in a single model, by merging systems (2.3) and (2.5). However, since autoactivation and autorepression precisely counteract each other, a partial cancellation would ensue, resulting in effective activation or repression. It can hence be argued that the simultaneous inclusion of both effects would introduce superfluous terms and parameters, which could be considered as poor modelling practice. Therefore, we choose to model the two autoregulation mechanisms separately.
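The full CME displays are not reproduced in this copy; as an illustration of the structure of such systems, a truncated version of the core scheme (2.1) without feedback can be integrated numerically. A sketch, where the names kf, kb and beta are stand-ins for the nondimensionalised switching and production rates:

```python
import numpy as np
from scipy.integrate import solve_ivp

def cme_rhs(t, P, kf, kb, beta, N):
    """RHS of the truncated, nondimensionalised CME for scheme (2.1)
    without feedback. State: P = [P0_0..P0_N, P1_0..P1_N], the
    probabilities of n proteins with the gene inactive (0) or active (1)."""
    P0, P1 = P[:N + 1], P[N + 1:]
    n = np.arange(N + 1)
    # gene switching between inactive and active states
    dP0 = -kf * P0 + kb * P1
    dP1 = kf * P0 - kb * P1
    # protein degradation at (nondimensional) rate n
    dP0[:-1] += n[1:] * P0[1:]; dP0 -= n * P0
    dP1[:-1] += n[1:] * P1[1:]; dP1 -= n * P1
    # production from the active state at rate beta (reflecting truncation at N)
    dP1[1:] += beta * P1[:-1]; dP1[:-1] -= beta * P1[:-1]
    return np.concatenate([dP0, dP1])

# One propagator column P_{n|n0}(t): start with n0 = 5 proteins, gene active.
N, n0 = 60, 5
P_init = np.zeros(2 * (N + 1)); P_init[(N + 1) + n0] = 1.0
sol = solve_ivp(cme_rhs, (0.0, 1.0), P_init, args=(0.5, 0.5, 10.0, N),
                rtol=1e-8, atol=1e-10)
P_final = sol.y[:, -1]        # probability mass is conserved by construction
```

Direct integration of this kind scales poorly once feedback and further species enter, which is precisely what motivates the generating-function route taken below.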

Generating function PDE
Rather than investigating the dynamics of (2.3) and (2.5) numerically, using stochastic simulation, we aim to employ an analytical approach. To that end, we define the probability-generating functions F^{(j)}(z, t) (j = 0, 1) as follows; see e.g. Gardiner (2009):

F^{(j)}(z, t) = Σ_{n=0}^∞ P^{(j)}_n(t) z^n. (2.6)

In the case of autoactivation, the generating functions F^{(j)}(z, t) can be seen to satisfy (2.7) if the coefficients P^{(j)}_n(t) in (2.6) obey the CME system (2.3); likewise, in the case of autorepression, (2.5) gives rise to (2.8). Both (2.7) and (2.8) are systems of coupled, linear, first-order, hyperbolic partial differential equations (PDEs). Systems of this type are typically difficult to analyse; existing techniques only provide general results (Courant and Hilbert 1962; Taylor 2011).
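Although the reconstruction of propagators is deferred to Sect. 4, the definition in (2.6) already indicates how probabilities can be recovered from a generating function via the Cauchy integral formula, discretised over the unit circle. A sketch, using the Poisson distribution as a stand-in test case (not one of the model generating functions):

```python
import math
import numpy as np

def probabilities_from_gf(F, N):
    """Recover P_0, ..., P_{N-1} from a probability-generating function
    F(z) = sum_n P_n z^n by discretising the Cauchy integral formula
    P_n = (1/(2*pi*i)) * (contour integral of F(z)/z^{n+1} dz)
    over the unit circle, which reduces to a discrete Fourier transform."""
    z = np.exp(2j * np.pi * np.arange(N) / N)   # N-th roots of unity
    return np.real(np.fft.fft(F(z))) / N

# Test case with a known closed form: for a Poisson(lam) distribution,
# F(z) = exp(lam * (z - 1)) and P_n = exp(-lam) * lam**n / n!.
lam = 2.0
P = probabilities_from_gf(lambda z: np.exp(lam * (z - 1.0)), 64)
```

The only error incurred is aliasing of the probability tail beyond N, which is negligible here; the same discretisation underlies the hybrid analytical-numerical step proposed later in the article.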
To allow for an explicit analysis of systems (2.7) and (2.8), we make the following assumption: Assumption 2.2 We assume that the autoactivation rate a in (2.2) is small in comparison with the other model parameters; specifically, we write a = α p_d δ (2.9), where 0 < δ < 1 is sufficiently small. Likewise, we assume that the autorepression rate r is small in comparison with the other model parameters, writing the analogous scaling in (2.10). Previous work on the inclusion of autoregulatory effects in model selection by Feigelman et al. (2016) suggests that, in the context of Nanog expression in mouse embryonic stem cells, autoregulation rates are indeed small compared to other model parameters.
Based on Assumption 2.2, we can expand the generating functions F^{(j)} (j = 0, 1) as power series in δ:

F^{(j)} = Σ_{m=0}^∞ δ^m F^{(j)}_m. (2.11)

Substitution of (2.11) into (2.7) yields (2.12); analogously, we substitute (2.11) into (2.8) to find (2.13). We observe that, in (2.12) and (2.13), the same leading-order differential operator acts on both F^{(0)}_m and F^{(1)}_m, which allows us to apply the method of characteristics to solve the equations for F^{(j)}_m (j = 0, 1) simultaneously. In particular, we emphasise that, mathematically speaking, the resulting perturbation is regular in the perturbation parameter δ.

Dynamical systems analysis
In this section, we apply the method of characteristics to derive the 'characteristic equations' that are associated to the PDE systems (2.12) and (2.13), respectively; the former are systems of ODEs, which are naturally analysed in the language of dynamical systems.

Autoactivation
We first consider the case of autoactivation; to that end, we introduce the shifted variable v defined in (2.15) and rewrite system (2.12) as (2.14). The differential operator ∂_t + v∂_v in Eq. (2.14) gives rise to characteristics ξ(s; v_0) that obey the characteristic equation (2.16), which is solved by ξ(s; v_0) = v_0 e^s; see (2.17). Transforming the partial differential operators in (2.14) accordingly, we arrive at the system in (2.19). Note that Eq. (2.19) is a recursive (nonhomogeneous) system of ordinary differential equations for F^{(j)}_m (j = 0, 1). Henceforth, we will therefore refer to (2.19) as such, while retaining the use of partial derivatives ∂_s due to the presence of ∂_{v_0} in the corresponding right-hand sides.
To solve system (2.19), we rewrite it as a second-order ODE for F^{(0)}_m; see (2.20). Applying the variable transformation in (2.21), we hence obtain (2.23); using (2.19a), we can express the second component F^{(1)}_m in terms of F^{(0)}_m via the identity in (2.24). At leading order, i.e. for m = 0, (2.23) reduces to an equation of confluent hypergeometric type, the solutions of which can be expressed in terms of the confluent hypergeometric function 1F1, see §13 of the NIST Digital Library of Mathematical Functions, to yield (2.26). Using (2.24), we can then determine the corresponding second component F^{(1)}_0. The first-order correction F^{(0)}_1 satisfies the nonhomogeneous equation (2.28); next, we apply the method of variation of constants to express the solution to (2.28) as (2.29), with F^{(1)}_1 again given by (2.24). At this point in the analysis, the constants c_1 and c_2 in (2.26) and the integration limits c_3 and c_4 in (2.29) remain undetermined. To fix these constants, and thereby determine a unique solution to (2.19), we have to prescribe appropriate initial conditions.
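In a numerical implementation, the confluent hypergeometric solutions can be evaluated with `scipy.special.hyp1f1`. The sketch below checks the standard differentiation identity for Kummer's function, which is useful when manipulating expressions such as (2.26); the parameter values are arbitrary:

```python
from scipy.special import hyp1f1

# Kummer's function M(a, b, z) = 1F1(a; b; z) satisfies
#     dM/dz = (a/b) * M(a+1, b+1, z)
# (NIST DLMF §13.3); we verify this numerically by central differences.
a, b, z, h = 0.7, 1.9, 0.5, 1e-6
numeric = (hyp1f1(a, b, z + h) - hyp1f1(a, b, z - h)) / (2 * h)
exact = (a / b) * hyp1f1(a + 1, b + 1, z)
```

Identities of this kind allow derivatives of the leading-order solution to be expressed in closed form rather than differentiated numerically.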

Autorepression
Given the analysis of autoactivation in the previous subsection, the case of autorepression can be analysed in an analogous manner. Employing the same characteristics as before, recall (2.17), we obtain the system (2.32) from system (2.13); cf. Eq. (2.19). Next, we rewrite (2.32) as a second-order ODE for F^{(1)}_m, see (2.33), using again the variable transformation in (2.21); the identity expressing the first component F^{(0)}_m in terms of F^{(1)}_m is stated in (2.34). To leading order, we thus obtain the solution F^{(1)}_0 in (2.35), with the corresponding expression for F^{(0)}_0 given in (2.36). The equation for the first-order correction F^{(1)}_1 reads (2.37), which can be solved via the method of variation of constants to give (2.38), with F^{(1)}_0 as in (2.35). The first-order correction to the first component, F^{(0)}_1, then follows from (2.34), with F^{(1)}_0 and F^{(1)}_1 as in (2.35) and (2.38), respectively. As in the case of autoactivation, the constants ĉ_1 and ĉ_2 in (2.35) and the integration limits ĉ_3 and ĉ_4 in (2.38) remain undetermined, and have to be fixed through suitable initial conditions.

Initial conditions
To determine appropriate initial conditions for the dynamical systems (2.19) and (2.32), we consider the original CME systems (2.3) and (2.5), respectively.
At time t = 0, we impose an initial protein number n = n_0, which implies the initial conditions in (2.41) for the probabilities P^{(j)}_n(t) (j = 0, 1); here, δ_{n,n_0} denotes the standard Kronecker symbol, with δ_{n,n_0} = 1 for n = n_0 and δ_{n,n_0} = 0 otherwise. Using the definition of the generating functions F^{(j)}(z, t) in (2.6), we find the condition in (2.42), taking into account the change of variables in (2.15). Thus, (2.42) provides an initial condition for the PDE systems (2.7) and (2.8). Given the power series expansion in (2.11), we infer that the coefficients F^{(j)}_m satisfy the initial conditions in (2.43), which hold for both (2.12) and (2.13).
To be able to interpret the initial conditions in (2.43) in the context of the dynamical systems (2.19) and (2.32), we revisit the method of characteristics, which was used to map the PDE systems (2.12) and (2.13) to the former, respectively.
The characteristics of the differential operator ∂_t + v∂_v (= ∂_s) in (2.19) and (2.32) are the integral curves of the vector field (1, v). Geometrically speaking, these characteristic curves foliate the (t, v)-plane over the v-axis. Therefore, each characteristic can be identified by its base point, which is the point where the characteristic curve intersects the v-axis, at v = v_0; see Eq. (2.17) and Fig. 6. Equivalently, each characteristic can be represented as a graph over the t-axis. Indeed, the differential equation for the v-component of a characteristic curve, (2.16), can be solved to obtain the natural parametric description of a characteristic 'fibre' with base point v = v_0, which is given by (s, v_0 e^s). Here, the parameter s along the characteristic is chosen such that its point of intersection with the v-axis lies at s = 0. Given that choice, it is natural to identify the parameter along the characteristic (s) with the time variable (t). Hence, the initial conditions in (2.43), which determine a relation between F^{(0)}_m and F^{(1)}_m on the v-axis, can be interpreted on every characteristic as in (2.44). Recast in the framework of generating functions, recall (2.6), the above normalisation condition yields the boundary condition in (2.46). It is worthwhile to note that (2.46) is automatically satisfied whenever (2.43) is imposed: by adding the two equations in system (2.12), or, equivalently, in (2.13), one sees that the sum F^{(0)}_m + F^{(1)}_m satisfies (2.47). The line {v = 0} is represented by the 'trivial' characteristic, which is identified by v_0 = 0; on that characteristic, we impose (2.43) to find, cf. (2.48), that F^{(0)}_0 + F^{(1)}_0 = 1 for all s, as well as that F^{(0)}_m + F^{(1)}_m vanishes identically for all m ≥ 1. Substituting these results into the power series representation for F^{(j)}(z, t) (j = 0, 1) in (2.11), we obtain (2.46). At this point, it is important to observe that Eq. (2.43) determines a line of initial conditions in the phase spaces of the dynamical systems (2.19) and (2.32).
Therefore, at every order in δ, we can only fix one of the two free parameters that arise in the solution of the corresponding differential equations. In particular, (2.43) only fixes either c_1 or c_2 in (2.26), and either c_3 or c_4 in (2.29), and so forth. That indeterminacy motivates us to introduce a new parameter χ, which is defined as follows: Definition 2.4 Consider the CME systems (2.3) and (2.5). We define χ(n_0) to be the probability that the gene modelled by the reaction scheme in (2.1) is switched off at time t = 0, given an initial protein number n_0.
Definition 2.4 immediately specifies initial conditions for systems (2.3) and (2.5) via (2.49); the above expression, in turn, provides us with a complete set of initial conditions for the PDE systems (2.7) and (2.8), to wit (2.50). Here, we allow for the fact that χ(n_0) may depend on other model parameters, and in particular on the autoregulation rates a and r; for a discussion of alternative options, see Sect. 4.1. We therefore expand χ(n_0) as a power series in δ; see (2.51). The above expansion can be used to infer a complete set of initial conditions for the PDE systems (2.12) and (2.13), yielding (2.52). By the same reasoning that inferred (2.44) from (2.43), we can conclude that the complete set of initial conditions for the dynamical systems (2.19) and (2.32) is given by (2.53). We can now use the conditions in (2.53) to determine the free constants c_1 and c_2 in (2.26), which yields (2.54). Analogously, for ĉ_1 and ĉ_2 in (2.35), we obtain (2.55).

Inverse transformation
The final step towards providing explicit solutions to the PDE systems (2.7) and (2.8) consists in interpreting the solutions to the dynamical systems (2.19) and (2.32), with initial conditions as in (2.53), as solutions to the PDE systems (2.12) and (2.13), respectively. To that end, we again consider the corresponding characteristics from a geometric viewpoint. As mentioned in Sect. 2.4, the (t, v)-coordinate plane is foliated by characteristics, which are the integral curves of the vector field (1, v); recall Fig. 6. Hence, any point (t, v) lies on a unique characteristic; flowing backward along that characteristic to its intersection with the v-axis, we can determine the corresponding base point via the inverse transformation v_0 = v e^{−t}, see (2.56), since we have identified the parameter along the characteristic (s) with the time variable (t).
To determine the value of F^{(j)}_m, interpreted as a solution to the PDE system (2.12) or (2.13), we proceed as follows. We first apply the inverse transformation in (2.56) to establish on which characteristic the coordinate pair (t, v) lies. For that characteristic, identified by its base point v_0, we then find the solution to the dynamical system (2.19) or (2.32), which is a function of s and v_0. Next, we substitute s = t and v_0 = ve^{−t} into that solution to obtain an explicit expression for the solution to the PDE system (2.12) or (2.13); see (2.57), where F^{(j)}_m (j = 0, 1) on the right-hand side denotes the solution to (2.19) or (2.32), with initial conditions as in (2.53). Lastly, we substitute F^{(j)}_m(v, t) into the power series in (2.11) to obtain an explicit solution to (2.7) or (2.8), to the desired order in δ.
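The reconstruction procedure can be illustrated on a scalar toy problem: for the hypothetical equation F_t + vF_v = F (a stand-in with the same characteristic operator; the actual systems (2.12) and (2.13) carry further coupling terms), solving dF/ds = F along the characteristic v(s) = v_0 e^s and then substituting s = t, v_0 = v e^{−t} yields the PDE solution, which we verify by finite differences:

```python
import math

# Characteristic ODE dF/ds = F with data F(v, 0) = F0(v) gives
# F = F0(v0) * e^s on the characteristic through base point v0;
# substituting s = t and v0 = v * e^{-t} reconstructs the PDE solution.
F0 = lambda v: 1.0 / (1.0 + v ** 2)           # arbitrary initial profile
F = lambda v, t: F0(v * math.exp(-t)) * math.exp(t)

# Check that F satisfies F_t + v * F_v - F = 0 at a sample point.
v, t, h = 0.4, 0.6, 1e-6
F_t = (F(v, t + h) - F(v, t - h)) / (2 * h)
F_v = (F(v + h, t) - F(v - h, t)) / (2 * h)
residual = F_t + v * F_v - F(v, t)
```

The same two substitutions, applied componentwise and order by order in δ, are exactly what (2.57) prescribes for the model systems.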
Remark 2.5 The geometric interpretation of characteristics that was used to motivate the inverse transformation in (2.56) also shows that the introduction of the new system parameter χ in Definition 2.4 is necessary for the generating functions F^{(j)} to be determined uniquely as solutions to (2.7) or (2.8), even if we are only interested in their sum F = F^{(0)} + F^{(1)}. The crucial point is that any free constants obtained in the process of solving the dynamical systems (2.19) and (2.32), or, equivalently, their second-order one-dimensional counterparts (2.20) and (2.33), respectively, are constant in s. In other words, they are constant along the particular characteristic on which the dynamical system is solved. These constants, such as e.g. c_1 and c_2 in (2.26), can, and generally will, depend on the base point v_0 of the characteristic; see for example (2.54). The inverse transformation in (2.56) that is used to reconstruct the solution to the original PDE from that of the corresponding dynamical system would then yield undetermined functions c(v e^{−t}) in the resulting solutions to (2.7) and (2.8), respectively.

Summary of main result
To summarise Sect. 2, we combine the analysis of the previous subsections to state our main result.
Main result: The PDE system (2.7) can be solved for sufficiently small autoactivation rates a; see Assumption 2.2. Its solutions F^{(j)}(z, t) (j = 0, 1) are expressed as power series in the small parameter δ; recall (2.11). The coefficients F^{(j)}_m(z, t) in these series, written in terms of the shifted variable v defined in (2.15), can be found by (1) recursively solving the second-order ODE (2.20) and using the identity in (2.24), incorporating the initial conditions in (2.53); and (2) subsequently applying the inverse transformation in (2.56) to the resulting solutions.
Likewise, we can solve the PDE system (2.8) for sufficiently small autorepression rates r; see Assumption 2.2. Its solutions F^{(j)}(z, t) are again expressed as power series in the small parameter δ; cf. (2.11). The coefficients F^{(j)}_m(z, t) in these series, written in terms of the shifted variable v defined in (2.15), can be found by (1) recursively solving the second-order ODE (2.33) and using the identity in (2.34), incorporating the initial conditions in (2.53); and (2) subsequently applying the inverse transformation in (2.56) to the resulting solutions.
To illustrate the procedure described above, we state the resulting explicit expressions for the leading-order solution to (2.7), or, equivalently, to (2.8), in terms of the original variables z and t. Note that similar expressions were derived by Iyer-Biswas et al. (2009), where an analogous generating-function approach was applied under the assumption that the gene is initially inactive, i.e. that χ(n_0) = 1, see Definition 2.4, as well as that the initial protein number n_0 is zero. With these choices, the expression for the leading-order generating function simplifies accordingly.

Model B: gene expression with explicit transcription
In this section, we apply our analytical method to model B, a stochastic gene expression model presented by Shahrezaei and Swain (2008a) which explicitly incorporates the transcription stage in the expression process, as well as DNA switching; see also Fig. 4. In the original article by Shahrezaei and Swain (2008a), a generating function approach was used to obtain analytical expressions for the time-independent ('stationary') solution to the model. For a visual guide to the upcoming analysis, the reader is again referred to Fig. 5.

Stochastic model and CME
The model for stochastic gene expression considered here is given by the reaction scheme in (3.1). The modelled gene can hence switch between inactive and active states, which are denoted by D and D*, respectively, with corresponding switching rates k_0 and k_1. The active gene is transcribed to mRNA (M) with rate ν_0; mRNA is translated to protein (P) with rate ν_1. Finally, mRNA decays with rate d_0, while protein decays with rate d_1.
As in model A, autoregulatory terms can be added to the core reaction scheme in (3.1); see (3.2). Since both mRNA and protein are modelled explicitly, we can identify four distinct autoregulatory mechanisms, in analogy to those in (2.2a) and (2.2b): autoactivation can be achieved either by mRNA or by protein, with rates a_M or a_P, respectively; similarly, autorepression can occur either through mRNA or through protein, with respective rates r_M or r_P.
The CME system associated to the reaction scheme in (3.1) is given by (3.3). Here, P^{(j)}_{m,n}(t) (j = 0, 1) represents the probability of m mRNA and n proteins being present at time t while the gene is either inactive (j = 0) or active (j = 1). As in (2.3) and (2.5), the time variable is nondimensionalised by the protein decay rate d_1; the other model parameters are scaled accordingly. We note that the above scaling was also used by Shahrezaei and Swain (2008a). The effects of incorporating the autoregulatory mechanisms in (3.2) into the CME system (3.3) are specified in Table 1.

Generating function PDE
Since the probabilities P^{(j)}_{m,n}(t) (j = 0, 1) in (3.3) depend on both the mRNA number m and the protein number n, we introduce probability-generating functions that are defined by double power series:

F^{(j)}(w, z, t) = Σ_{m=0}^∞ Σ_{n=0}^∞ P^{(j)}_{m,n}(t) w^m z^n. (3.5)

For coefficients P^{(j)}_{m,n}(t) that obey the CME system (3.3), the associated generating functions satisfy the PDE system (3.6). The effects of incorporating the autoregulatory mechanisms in (3.2) into system (3.6) are specified in Table 2.

Dynamical systems analysis
As before, the PDE system (3.6) for the generating function can be reformulated as a system of ODEs via the method of characteristics. The differential operator ∂_t + (z − 1)∂_z + γ(w − 1)∂_w − γμ(z − 1)w∂_w now gives rise to the characteristic system (3.7), where u̇ = du/ds denotes the derivative along the characteristic, which is parametrised by s. For simplicity, we have introduced the new variables u and v, which are defined as in (3.8). On the resulting characteristics, Eq. (3.6) yields the system of ODEs (3.9). First, we note that the characteristic system (3.7) can be solved explicitly in terms of the incomplete Gamma function Γ(a, z), see §8.2(i) of the NIST Digital Library of Mathematical Functions, yielding (3.10). However, due to its complex nature, the expression for u(s) given in (3.10a) cannot be used to obtain explicit expressions for the generating functions F^{(j)} that solve system (3.9). Therefore, inspired by the analysis by Shahrezaei and Swain (2008a) and Popović et al. (2016), we make the following assumption: Assumption 3.1 We assume that the decay rate of protein (d_1) is smaller than the decay rate of mRNA (d_0); specifically, we write ε = d_1/d_0 in (3.11), where 0 < ε < 1 is sufficiently small.
The resulting scale separation between mRNA and protein decay rates is well documented in many microbial organisms, including bacteria and yeast (Shahrezaei and Swain 2008a; Yu et al. 2006). For clarity of presentation, we make an additional assumption here.
Remark 3.3 Although Assumption 3.2 is not strictly necessary for the upcoming analysis, it is beneficial. It is worthwhile to note that the analytical scheme presented in this section can be applied in a straightforward fashion in cases where Assumption 3.2 fails, which is particularly relevant in relation to previous work (Shahrezaei and Swain 2008a; Feigelman et al. 2015), where the CME system (3.3) is studied for parameter values far beyond the range implied by Assumption 3.2.
Using Assumption 3.1, we can write the characteristic system (3.7) in the slow-fast form (3.12). Since ε is assumed to be small, we can classify (3.12) as a singularly perturbed slow-fast system in standard form; see Kuehn (2015). A comprehensive slow-fast analysis of Eq. (3.12) was carried out by Popović et al. (2016); we highlight some relevant aspects of that analysis here. System (3.12) gives rise to the critical manifold C_0 = {(u, v) | u = μv/(1 − μv)}. By Fenichel's geometric singular perturbation theory (Kuehn 2015), if ε is asymptotically small, orbits of (3.12) can be separated into slow and fast segments. The critical manifold C_0 is normally repelling; in other words, orbits converge to C_0 in backward time at an exponential rate. For initial conditions asymptotically close to C_0, orbits initially follow C_0 closely for some time, after which they move away from C_0 under the fast dynamics; see also Fig. 7.

Fig. 7 Phase space dynamics of systems (3.12) and (3.13). The slow flow along C_0 is indicated by single arrows; the fast dynamics transverse to C_0 are denoted by double arrows.
To analyse the fast dynamics of system (3.12), we introduce the fast variable σ = s/ε; in terms of σ, (3.12) is expressed as the fast system (3.13), where u' = du/dσ. We can solve (3.13b) explicitly and write the result as a power series in ε, which yields

v(σ) = v_0 e^(εσ) = v_0 Σ_{n=0}^∞ ε^n σ^n/n!.   (3.14)

Expressing the solution to (3.13a) as a power series in ε as well, i.e. writing the expansion in (3.15), substituting (3.15) into (3.13a), and making use of (3.14), we find the recursive system (3.16), with initial conditions

û_0(0) = u_0 and û_n(0) = 0 for n ≥ 1.   (3.17)

The first two terms in (3.15) are thus given by (3.18). We now employ the expansion in (3.15) to obtain explicit expressions for the generating functions F^(j) (j = 0, 1). In the fast variable σ = s/ε, system (3.9) takes the form (3.19). As in the analysis of model A (recall Sect. 2.3), we rewrite system (3.19) as a second-order ODE for F^(0), given in (3.20). Next, we use (3.19a) to express F^(1) in terms of F^(0), as in (3.21). To incorporate the expansion for u in (3.15), we also expand F^(j) (j = 0, 1) in powers of ε, writing the series in (3.22); substitution then yields Eqs. (3.23) and (3.24) for n ≥ 0. We can solve Eqs. (3.23) and (3.24) iteratively for the first three terms, taking into account the additional condition on F^(1) which, upon substitution into (3.26), yields the expressions in (3.27) and (3.28); here, f_i and g_i are free constants, to be determined by initial conditions; see Sect. 3.4. Contrary to common practice in the study of slow-fast systems such as (3.19), we forego a detailed analysis of the slow system (3.9), continuing our discussion with the determination of appropriate initial conditions; cf. Sect. 2.4. For details on why the slow dynamics is disregarded, the reader is referred to Remark 3.5.
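The truncation behaviour of the expansion in (3.14) can be checked numerically: for an order-n truncation, halving ε should shrink the remainder by roughly a factor of 2^(n+1). A small sketch, with arbitrarily chosen parameter values:

```python
import math

v0, sigma = 0.2, 3.0   # hypothetical values of v_0 and the fast time

def v_exact(eps):
    return v0 * math.exp(eps * sigma)

def v_series(eps, order):
    # Truncation of v(sigma) = v0 * sum_n eps^n sigma^n / n!, cf. (3.14).
    return v0 * sum((eps * sigma)**k / math.factorial(k) for k in range(order + 1))

# Halving eps should reduce the order-2 truncation error by roughly 2^3 = 8,
# confirming that the remainder is O(eps^3).
order = 2
e1 = abs(v_exact(0.1) - v_series(0.1, order))
e2 = abs(v_exact(0.05) - v_series(0.05, order))
ratio = e1 / e2
assert 6.0 < ratio < 10.0
```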

Initial conditions and inverse transformation
To complete our analytical method, we discuss the determination of initial conditions and the reconstruction of the solution to the original PDE system (3.6), as was done for model A in Sects. 2.4 and 2.5, respectively.

Initial conditions
We follow the reasoning of Sect. 2.4, and determine appropriate initial conditions by considering the original CME system (3.3).
At time t = 0, we prescribe initial mRNA and protein numbers m_0 and n_0, respectively. As in Sect. 2.4, we again introduce the parameter χ; compare Definition 2.4:

Definition 3.4 Consider the CME system (3.3). We define χ(m_0, n_0) to be the probability that the gene modelled by the reaction scheme in (3.1) is switched off at time t = 0, given initial mRNA and protein numbers m_0 and n_0, respectively.
Next, we can infer, e.g. from (3.10) and well-known properties of the incomplete Gamma function (see §8.2 of the NIST Digital Library of Mathematical Functions), that solutions to (3.7) exist globally, i.e. for all s. In particular, given an arbitrary triple (s*, u*, v*), we can apply the inverse flow of (3.7) to flow (u*, v*) backward for time s*. We conclude that the characteristics of the operator ∂_t + (z − 1)∂_z + γ(w − 1)∂_w − γμ(z − 1)w∂_w = ∂_t + v∂_v + γu∂_u − γμ(u + 1)v∂_u, which are given by the orbits of system (3.7), foliate the (t, u, v)-coordinate space over the {t = 0}-plane when the parameter along the characteristic (s) is identified with the time variable (t). We can therefore uniquely identify such a characteristic, interpreted as a fibre over the {t = 0}-plane, by its base point. Because orbits of (3.7) provide a parametrisation of the underlying characteristics, the coordinates of that base point are given by (0, u_0, v_0), where (u_0, v_0) are the initial values of the corresponding orbit of (3.7). Hence, the conditions in (3.30) yield, on each characteristic, the initial conditions in (3.31). As s = 0 implies σ = 0, we can apply the initial conditions in (3.31) to the solutions of the ODE system (3.19) in order to determine the free constants f_i and g_i in (3.27) and (3.28). Combining the power series expansion in (3.22) with (3.31), we obtain the conditions in (3.32); in particular, by (3.32c), the higher-order coefficients vanish at σ = 0 for all n ≥ 1, with j = 0, 1, which implies the values of f_i and g_i in (3.27) and (3.28) that are stated in (3.33).

Inverse transformation
Since the (t, u, v)-coordinate space is foliated by the characteristics of the above operator, any point (t, u, v) lies on a unique characteristic. Flowing backward along that characteristic to its intersection with the {t = 0}-plane, we can determine the corresponding base point (0, u_0, v_0) by inverting the relations in (3.10). Since the dynamics of the v-coordinate do not depend on u, we may use (2.56) to express v_0 in terms of t and v only. Taking the resulting expression as input for inverting (3.10a), we obtain the inverse characteristic transformation in (3.34). Under Assumption 3.1, we can employ the power series expansion in (3.15), in combination with the recursive set of ODEs in (3.16) and the initial conditions in (3.17), as an alternative to (3.35a) for obtaining u_0 as a function of u, v (or v_0), and σ. Rewriting the result as a power series in ε, we find the expansion in (3.36); naturally, (3.14) gives rise to the corresponding expression for v_0 in (3.37). Since the independent variable in (3.16) is σ, and since s is naturally identified with t, we replace σ with t/ε in (3.36) to obtain a perturbative expansion for u_0(t, u, v). The solution to the PDE system (3.6) can now be found to satisfactory order in ε by applying the inverse transformation in (3.34) to the solutions given in (3.27) and (3.28), taking into account the values of f_i and g_i in (3.33). In other words, the solution is as stated in (3.38), where F^(j)_n (j = 0, 1) on the right-hand side denotes the solution to Eqs. (3.23a) through (3.26), with initial conditions as in (3.32).
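The backward-flow construction can be illustrated numerically. The sketch below again uses characteristic right-hand sides reconstructed from the stated operator (dv/ds = v, du/ds = γ(u − μ(u + 1)v)), with arbitrary parameter values: a point is flowed forward for time t and then backward to recover its base point (0, u_0, v_0), and the v-component inverts in closed form as v_0 = v e^(−t). Note that, per Remark 3.5, this inversion becomes increasingly ill-conditioned as γ grows.

```python
import math

gamma, mu = 8.0, 0.25   # hypothetical parameter values

def rhs(u, v, sign=1.0):
    # Reconstructed characteristic equations; sign=-1 flows backward in s.
    return sign * gamma * (u - mu*(u + 1.0)*v), sign * v

def flow(u, v, t, sign=1.0, steps=4000):
    # RK4 integration of the characteristic flow over a time interval t.
    h = t / steps
    for _ in range(steps):
        k1 = rhs(u, v, sign)
        k2 = rhs(u + 0.5*h*k1[0], v + 0.5*h*k1[1], sign)
        k3 = rhs(u + 0.5*h*k2[0], v + 0.5*h*k2[1], sign)
        k4 = rhs(u + h*k3[0], v + h*k3[1], sign)
        u += h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
        v += h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
    return u, v

u0, v0, t = 0.02, 0.05, 0.4
u_t, v_t = flow(u0, v0, t, sign=+1.0)     # point (t, u, v) on the characteristic
u_b, v_b = flow(u_t, v_t, t, sign=-1.0)   # flow backward to the base point
assert abs(u_b - u0) < 1e-6 and abs(v_b - v0) < 1e-6
# The v-coordinate inverts in closed form: v0 = v * e^{-t}.
assert abs(v_t * math.exp(-t) - v0) < 1e-8
```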
Remark 3.5 The absence of any detailed analysis of the characteristic system in its slow formulation, Eq. (3.12), can be justified as follows, by considering the corresponding phase space, as depicted in Fig. 7. (a) For arbitrary initial conditions (u_0, v_0), the dominant dynamics are fast, since the critical manifold C_0 is normally repelling; in other words, solutions are generally repelled away from C_0 under the fast dynamics. (b) All orbits that have their initial conditions on the same Fenichel fibre are exponentially close (in ε) to each other near the slow manifold C_ε that is associated to the critical manifold C_0. Therefore, flowing backward from (u, v) to (u_0, v_0), as expressed through the inverse transformation in (3.34) that yields the corresponding PDE solution, may introduce exponentially large terms in the transformation, precluding any sensible series expansion.
Thus, although the construction of a composite '(initially) slow-(ultimately) fast' expression for F^(j) as a solution to systems (3.12) and (3.13) certainly makes sense from a dynamical systems perspective, the extreme lack of sensitivity of orbits to their initial conditions (u_0, v_0) near the slow manifold may prevent such a composite expansion from being useful for obtaining solutions to the original PDE system (3.6).
Remark 3.6 In Sect. 3.3, explicit expressions are given for the expansion of F^(0) up to O(ε^2) only, cf. (3.27); similarly, F^(1) is approximated to O(ε) in (3.28), for the sake of brevity. It is worthwhile to note that a lower bound on the order of the expansion is stipulated by the application; recall Sect. 1.1: the sampling time t can be considered as a minimum time interval over which the results of our analysis should be (reasonably) accurate. To that end, we have to compare t with ε = 1/γ, the parameter defining the fast time scale on which the above analytical results have been derived. We can then apply the classical theory of Poincaré expansions (Verhulst 2000) to infer that if, for example, t = O(1), which corresponds to σ = O(ε^(-1)) in the fast time variable, the generating functions F^(j) should be expanded at least up to O(ε^2) for the resulting approximation to be accurate to O(ε).

Autoregulation
The inclusion of any type of autoregulation into system (3.6), which is equivalent to the addition of model terms from Table 2 to the right-hand sides of the corresponding equations, precludes the direct application of the method of characteristics, as the resulting partial differential operators in Eqs. (3.6a) and (3.6b) no longer coincide. To resolve that complication, we follow the approach of Sect. 2, making the following assumption:

Assumption 3.7 We assume that the autoregulation rates a_M, r_M, a_P, and r_P, as defined in (3.2), are small in comparison with the protein decay rate d_1; specifically, we write the scaled rates as in (3.39), where 0 < δ < 1 is sufficiently small.
Next, we expand the generating functions F^(j) (j = 0, 1) as power series in δ, as in (3.40); recall (2.11). To demonstrate the procedure, we include mRNA autoactivation in (3.6), see again Table 2; the analysis of the remaining autoregulatory mechanisms can be performed in a similar fashion. Substitution of (3.40) now yields the system (3.41); cf. (2.12). System (3.41) is then amenable to the method of characteristics. In fact, employing the same characteristics as in the unperturbed setting, recall (3.7), we find (3.42), while the partial differential operators in Table 2 transform as in (3.43); here, u(s; u_0, v_0) and v(s; v_0) are as given in (3.10). Thus, the mRNA autoactivation system (3.41) transforms to (3.44). To obtain explicit solutions to system (3.44), we adopt Assumption 3.1 and revert to the fast time scale σ = s/ε to write the dynamical system (3.44) as a second-order ODE (3.45) for F^(0)_m. To solve (3.45) recursively, we expand F^(j)_m in powers of ε, which yields Eqs. (3.48) through (3.53) for n ≥ 0; recall (3.25) and (3.26). To solve Eqs. (3.48) through (3.53) iteratively, we fix m, the order of the expansion in δ, and determine the solution to satisfactory order in n, the order of the expansion in ε. Then, we increase m to m + 1 and take the result as input for the dynamics at order m + 1. The resulting repeated iteration procedure yields explicit expressions for the generating functions F^(j) (j = 0, 1) as double power series in both δ and ε.
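The iteration described above, fix the δ-order m, solve to the desired ε-order, then feed the result into order m + 1, can be sketched on a toy single-parameter analogue. Below we expand the scalar ODE F' = −F + δF in powers of δ, so that F_0' = −F_0 and F_m' = −F_m + F_{m−1} with F_0(0) = 1 and F_m(0) = 0; the coefficients F_m(t) = e^(−t) t^m/m! solve this hierarchy exactly and let us check convergence. This is a schematic stand-in, not the actual system (3.48) through (3.53).

```python
import math

# Coefficients of the delta-expansion for the toy hierarchy
# F_0' = -F_0, F_m' = -F_m + F_{m-1}: F_m(t) = e^{-t} t^m / m!.
def F_m(t, m):
    return math.exp(-t) * t**m / math.factorial(m)

def F_series(t, delta, M):
    # Partial sum of the delta-power series, truncated at order M.
    return sum(delta**m * F_m(t, m) for m in range(M + 1))

t, delta = 1.5, 0.1
exact = math.exp((delta - 1.0) * t)   # exact solution of F' = (delta - 1) F
assert abs(F_series(t, delta, 5) - exact) < 1e-8
```

The design point mirrors the text: each order m requires only the previously computed order m − 1 as a source term, so accuracy in δ is raised by repeating the same solve.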
The determination of appropriate initial conditions is largely analogous to the non-autoregulated case; see Sect. 3.4.1. However, with the inclusion of autoregulation into model B, we need to incorporate the possibility that χ(m_0, n_0) depends on the corresponding autoregulation rates. As in the case of model A, we expand χ(m_0, n_0) as a power series in δ, as in (3.54); recall (2.51). In that case, the initial conditions for F^(j)_{m,n} can be inferred from (3.32) to give the conditions in (3.55); in particular, by (3.55d), the coefficients vanish at σ = 0 for all m ≥ 0 and n ≥ 1, with j = 0, 1. Solutions to Eqs. (3.48) through (3.53) that incorporate the conditions in (3.55) for all types of autoregulation introduced in (3.2) can be found in Appendix B. Finally, our previous results on the inverse characteristic transformation in the non-autoregulated case from Sect. 3.4.2 can be applied in a straightforward fashion to give solutions to the PDE system (3.6) with added autoregulation.

Summary of main result
To summarise Sect. 3, we combine the analysis of the previous subsections to state our main result.
Main result: The PDE system (3.6) can be solved for sufficiently large values of γ; see Assumption 3.1. Its solutions F^(j)(w, z, t) (j = 0, 1) are expressed as power series in the small parameter ε = 1/γ; cf. (3.22). The coefficients F^(j)_n(w, z, t) in these series, written in terms of the shifted variables u and v defined in (3.8), can be found by (1) solving recursively the second-order ODEs (3.23a) through (3.24) and using the identities in (3.25) and (3.26), incorporating the initial conditions in (3.32); (2) subsequently applying the inverse transformations in (3.36) and (3.37) to the resulting solutions; and (3) finally, substituting σ = t/ε. To illustrate the procedure described above, we state the resulting explicit expressions for the leading-order solution to (3.6) in terms of the original variables w, z, and t in (3.56). Note that the sum F_0 of the leading-order terms corresponds precisely to the leading-order fast expansion found in Equation (21) of Bokes et al. (2012b).
If autoregulation as in (3.2) is incorporated into model B, the main result can be formulated as follows.
Main result (autoregulatory extension): The PDE system (3.6) incorporating any one type of autoregulation from Table 2 can be solved as long as γ is sufficiently large and δ is sufficiently small; see Assumptions 3.1 and 3.7, respectively. Its solutions F^(j)(w, z, t) (j = 0, 1) are expressed as double power series in the small parameters δ and ε, as in (3.57). The coefficients F^(j)_{m,n} in these series, written in terms of the shifted variables u and v defined in (3.8), can be found by (1a) solving recursively the second-order ODEs (3.48) through (3.50) for fixed m and using the identities in (3.52) and (3.53), incorporating the initial conditions in (3.55); (1b) increasing m to m + 1, and repeating step (1a) until sufficient accuracy in δ (and ε) is attained; (2) subsequently applying the inverse transformations in (3.36) and (3.37) to the resulting solutions; and (3) finally, substituting σ = t/ε.

From generating function to propagator
The final step in our analytical method consists in reconstructing the probabilities P^(j)_n (model A) and P^(j)_{m,n} (model B), respectively, from the explicit expressions for the associated generating functions F^(j) (j = 0, 1), which were the main analytical outcome of Sects. 2 and 3.
In principle, the relation between probabilities and probability-generating functions is clear from the definition of the latter, as given in (2.6) and (3.5), respectively. Specifically, probabilities can be expressed in terms of derivatives of their generating functions, as in (4.1). However, explicit expressions for the nth and (m + n)th order derivatives, respectively, of these generating functions become progressively unwieldy with increasing m and n. Indeed, from the expressions for the generating functions F^(j) obtained previously, which combine (2.26) for specific initial conditions as in (2.54) with the inverse characteristic transformation in (2.56), it is clear that finding explicit expressions for derivatives of arbitrary order is very difficult indeed, if it is possible at all. To complete successfully the final step towards approximating propagators for parameter inference in the present setting, we abandon the requirement of deriving explicit expressions for the probabilities P^(j)_n and P^(j)_{m,n}. Instead, we use the standard Cauchy integral formula for derivatives of holomorphic functions to write

P^(j)_n(t) = (1/(2πi)) ∮_{γ_A} F^(j)(z, t)/z^(n+1) dz   and   P^(j)_{m,n}(t) = (1/(2πi))^2 ∮∮_{γ_B} F^(j)(w, z, t)/(w^(m+1) z^(n+1)) dw dz;   (4.2)

here, γ_A is a suitably chosen contour around z = 0, while γ_B is a (double) contour around (w, z) = (0, 0).
The above expression of probabilities as integrals is well suited to an efficient numerical implementation, which is naturally incorporated into the realisation of the parameter inference scheme discussed in Sect. 1. From a numerical perspective, the integral formula in (4.2) has the additional advantage that the values of F^(j) on (a discretisation of) the integration contours γ_A and γ_B, respectively, only have to be determined once to yield propagators for any values of m and n, given fixed initial states m_0 and n_0. Moreover, we are free to choose the integration contours γ_A and γ_B, which allows us to accelerate the calculation of these integrals; see Bornemann (2011). Here, we note that the choice of circular integration contours with unit radius, and the subsequent discretisation of those contours as M-sided and N-sided polygons, respectively, coincides with the 'Fourier mode' approach presented by Bokes et al. (2012a).
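As a concrete stand-in, the sketch below recovers probabilities by discretising the single-contour Cauchy integral on the unit circle with N equispaced nodes, which is precisely the 'Fourier mode' discretisation described above. For testability we use a Poisson generating function F(z) = exp(λ(z − 1)) in place of the model's F^(j); the contour machinery is identical, and the unit-radius circle also respects the pole constraint discussed in Remark 4.1.

```python
import cmath
import math

# Illustrative stand-in generating function: Poisson with F(z) = exp(lam*(z-1)).
lam = 2.5

def F(z):
    return cmath.exp(lam * (z - 1.0))

def propagator(n, N=64):
    # P_n = (1/(2*pi*i)) * contour integral of F(z)/z^{n+1} dz around z = 0.
    # On the unit circle z = e^{i*theta}, this reduces to the discrete mean
    # of F(z_k) * z_k^{-n} over N equispaced nodes ('Fourier mode' approach).
    total = 0.0 + 0.0j
    for k in range(N):
        z = cmath.exp(2j * cmath.pi * k / N)
        total += F(z) * z**(-n)
    return (total / N).real

# The recovered values match the exact Poisson probabilities to rounding error;
# the only discretisation error is aliasing of coefficients n + N, n + 2N, ...
for n in range(6):
    exact = math.exp(-lam) * lam**n / math.factorial(n)
    assert abs(propagator(n) - exact) < 1e-12
```

As noted in the text, the N evaluations of F on the contour are reused for every n, which is what makes the scheme attractive inside a likelihood computation.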
Remark 4.1 By introducing the Cauchy integral formula for derivatives of holomorphic functions in (4.2), we implicitly assume that the integration contours γ_A and γ_B are chosen such that they lie completely within the open neighbourhoods of the origin in C and C^2, respectively, where the canonical complex extensions of the generating functions F^(j), which exist by the Cauchy-Kowalevski theorem, are holomorphic. In other words, γ_A and γ_B must be chosen such that any poles of F^(j) lie outside of these integration contours. The expansion for u_0 in (3.36) shows that this is not a moot point: the generating functions F^(j) resulting for model B, as established in Sect. 3, will generically have a pole at v = 1/μ, i.e. at z = 1 + 1/μ. As μ is positive, by (3.4), choosing the z-contour of γ_B to be a circle with at most unit radius allows us to avoid that pole.

Incorporation of χ
In the course of the analysis presented in Sects. 2 and 3, the introduction of the parameter χ was necessary to obtain definite, explicit expressions for the generating functions as solutions to the PDE systems (2.7), (2.8), and (3.6); see Definitions 2.4 and 3.4. The successful implementation of these expressions in a parameter inference scheme requires us to decide how to incorporate that new parameter. We identify three options here.
1. Before implementing parameter inference, we can marginalise over the new parameter χ to eliminate it altogether, using a predetermined measure dμ(χ); this adds an additional integration step to the requisite numerical scheme.
2. We can make a choice for χ that is based on the specifics of the model under consideration. Thus, exploiting the Markov property of the stochastic models underlying (2.3), (2.5), and (3.3), we may use the switching rates and any autoregulation rates to express χ in model A as in (4.3); the corresponding expressions for model B are given in (4.6), of which (4.6d) covers the case of protein autorepression.
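Option 1 can be sketched numerically. Since the underlying CME systems are linear, the resulting propagators depend linearly on χ, as a χ-weighted mixture of the gene-off and gene-on contributions; marginalising over, say, the uniform measure on [0, 1] then reduces to evaluating at the mean of χ. The propagators P_off and P_on below are hypothetical Poisson-shaped placeholders, not model output.

```python
import math

# Hypothetical placeholder propagators for the gene-off and gene-on branches.
def poisson(n, lam):
    return math.exp(-lam) * lam**n / math.factorial(n)

def propagator(n, chi):
    # Linear dependence on chi: a chi-weighted mixture of the two branches.
    return chi * poisson(n, 1.0) + (1.0 - chi) * poisson(n, 4.0)

def marginalised(n, nodes=200):
    # Midpoint-rule quadrature over the uniform measure d_mu(chi) on [0, 1].
    return sum(propagator(n, (k + 0.5)/nodes) for k in range(nodes)) / nodes

# For a propagator linear in chi, marginalising over the uniform measure is
# equivalent to evaluating at the mean, chi = 1/2.
for n in range(5):
    assert abs(marginalised(n) - propagator(n, 0.5)) < 1e-12
```

For a non-uniform measure dμ(χ), the same quadrature loop applies with the nodes weighted accordingly; the extra cost is one additional one-dimensional integral per propagator evaluation, as noted in option 1.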

Discussion and outlook
In the present article, we have developed an analytical method for obtaining explicit, fully time-dependent expressions for the probability-generating functions that are associated to models for stochastic gene expression. Moreover, we have presented a computationally efficient approach which allows us to derive model predictions (in the form of propagators) from these generating functions, using the Cauchy integral formula. It is important to note that our method does not make any steady-state or long-evolution-time approximations. On the contrary, the perturbative nature of our approach naturally optimises its applicability over relatively short (or intermediate) time scales; see also Remark 3.6. As is argued in Sect. 1.1, such relatively short evolution times naturally occur in the calculation of quantities such as the log-likelihood, as defined in Eq. (1.1). Therefore, our analytical approach is naturally suited to implementation in a Bayesian parameter inference scheme, as outlined in Sect. 1.1. As mentioned in Sects. 2.2 and 3.3, the introduction of Assumptions 2.2 and 3.7 in our analysis of the systems of PDEs and ODEs that are obtained via the generating function approach is necessary for determining explicit expressions for the generating functions themselves. Therefore, we can only be certain of the validity of our approach if we assume that the autoregulation rates are small in comparison with the other model parameters, as is done there. Moreover, in the analysis of model B, we have to assume that the protein decay rate is smaller than the decay rate of mRNA; recall Assumption 3.1. That assumption is valid for a large class of (microbial) organisms (Shahrezaei and Swain 2008a; Yu et al. 2006); however, it is by no means generic, as the two decay rates are often comparable in mammalian cells (Schwanhäusser et al. 2011; Vogel and Marcotte 2012).
Since the accuracy of approximation of the explicit expressions for the generating functions derived here is quantified in terms of orders of the perturbation parameter(s), see e.g. Remark 3.6, violation of Assumption 2.2, 3.1, or 3.7 will decrease the predictive power of the results obtained by the application of the analytical method developed in the article.
The method which is described in Sect. 1.2, and outlined visually in Fig. 5, hence provides a generic framework for the analysis of stochastic gene expression models such as model A (Fig. 3 and Sect. 2.1) and model B (Fig. 4 and Sect. 3.1). Note that, for example, the steady-state and long-evolution-time approximations derived by Shahrezaei and Swain (2008a) could be extended to autoregulatory systems via the same approach. However, as is apparent from the (differences between the) analysis presented in Sects. 2 and 3, the 'path to an explicit solution' is highly model-dependent. The decision on which analytical techniques to apply, such as the perturbative expansion postulated in (3.22), has to be made on a case-by-case basis. The success of the method presented in the article fully depends on whether the resulting dynamical systems can be solved explicitly. To that end, it is highly beneficial that the systems (2.19) and (3.19) obtained here are linear, which is a direct consequence of the fact that all reactions described in the reaction schemes in (2.1) and (2.2), as well as in (3.1) and (3.2), are of first order. Inclusion of second-order reactions would introduce both nonlinear terms and second-order differential operators in the PDE systems for the corresponding generating functions, which would severely increase the complexity of these systems, thus preventing us from obtaining explicit solutions.
The method presented in this article, and the results thus obtained, can be seen, first and foremost, as the natural extension of previous work by Popović et al. (2016). Analytical results for the classes of models studied here can be found in several earlier articles. We mention the article by Shahrezaei and Swain (2008a), where a leading-order approximation was obtained in a long-evolution-time and steady-state limit. Bokes et al. (2012a) derived analytical expressions for stationary distributions in a model that is equivalent to that considered by Popović et al. (2016). Also for that model, a time-scale separation was exploited by Bokes et al. (2012b), in a manner that is similar to the present article, to obtain leading-order analytical expressions on both time scales. The model that is referred to as Model A in Sect. 2 was analysed in a steady-state setting by Iyer-Biswas and Jayaprakash (2014) via the Poisson integral transform. A similar model was studied by Hornos et al. (2005), where a generating function approach was used; making a steady-state Ansatz, the authors were able to obtain an explicit solution for the generating function in terms of special (Kummer) functions; see also the NIST Digital Library of Mathematical Functions. The same model was later solved in a fully time-dependent context by Ramos et al. (2011), after a cleverly chosen variable substitution, in terms of another class of special (Heun) functions; cf. again the NIST Digital Library of Mathematical Functions.
Other authors have attempted to solve several classes of CMEs directly, i.e. without resorting to generating function techniques or integral transforms. A noteworthy example is the work of Jahnke and Huisinga (2007) on monomolecular systems. Another, more recent example can be found in the work by Iserles and MacNamara (2017), where exact solutions are determined for explicitly time-dependent isomerisation models.
It is important to emphasise that the 'time dependence' referred to in the title of the present article is solely due to the dynamic nature of the underlying stochastic process, and that it hence manifests exclusively through time derivatives in the associated CMEs, such as e.g. in (2.3). In particular, none of the model parameters are timedependent, as opposed to, for example, the system studied by Iserles and MacNamara (2017). The inclusion of such explicitly time-dependent parameters would be a starting point for incorporating the influence of (extrinsic) noise in the context of the model categories considered in the article.
The availability of analytical expressions for generating functions does, in principle, allow one to try to obtain insight into the underlying processes by studying the explicit form of said expressions, as has been done e.g. by Bokes et al. (2012a, b). However, the complex nature of the processes we analyse here seems to preclude such insights. For example, the integrals over confluent hypergeometric functions which appear in (2.29) cannot themselves be efficiently expressed in terms of (other) special functions. Still, that complication does not necessarily pose an obstacle to the application we ultimately have in mind, i.e. Bayesian parameter inference. As the last step in our method, the extraction of propagators from generating functions (see Sect. 4), is numerical, the precise functional form of the generating function is not of importance. The mere fact that an explicit expression can be obtained is sufficient for the application of the Cauchy integral formula, where these generating functions enter into the calculation of the appropriate integrals; see again Sect. 4.
The analytical approach explored in the article does not, of course, represent the only feasible way of obtaining numerical values for propagator probabilities, which can, in turn, serve as input for a Bayesian parameter inference scheme. For an example of a direct numerical method in which the Cauchy integral plays a central role, the reader is referred to the work by MacNamara (2015). Our main motivation for pursuing an analytical alternative is reducing the need for potentially lengthy numerical simulations. An efficient implementation of the resulting expressions can result in (significantly) reduced computation times; see, for example, the work by Bornemann (2011). The optimisation of the underlying numerical procedures is, however, beyond the scope of the present article in particular, and of our research programme in general.
The analytical results obtained thus far, as presented in the article, are ready for implementation in a Bayesian parameter inference framework. An analysis of the performance of the resulting approximations to the associated generating functions, in the spirit of the article by Feigelman et al. (2015), where parameter inference is tested on simulated data based on specific stochastic models, is ongoing work. Moreover, the successful application of our analytical method to specific model categories, such as those represented by model A and model B, suggests several feasible extensions of the 'model library' for which explicit expressions for the corresponding generating functions can be constructed. Thus, stochastic models comprising multiple proteins represent a natural next stage, bringing the analysis of toggle switch-type models within reach. In addition, one could begin exploring the vast field of gene regulatory networks by considering a simple two-protein system with, for example, activator-inhibitor interaction. Under the assumption of small interaction rates, the resulting PDE system for the associated generating function would be directly amenable to the analytical method described in the article. The analysis of these and similar systems could be a topic for future research.

A Model A: explicit expressions
An explicit expression for (2.27) can be stated in closed form; it can be shown to be equal to (2.35), using §13.3(i) of the NIST Digital Library of Mathematical Functions, when ĉ_1 = c_1 κ_f κ_b and ĉ_2 = c_2. Explicit expressions involving the integration limits c_3 and c_4 in (2.29) are obtained accordingly, with F^(0)_0 as in (2.26). Similarly, explicit expressions involving the integration limits ĉ_3 and ĉ_4 in (2.38) follow, with F^(1)_0 as in (2.35).

B Model B with autoregulation: explicit expressions
To leading order in δ, i.e. for m = 0, the right-hand side of Eq. (3.45) vanishes; hence, the expressions in Eqs. (3.20) and (3.21) apply. In other words, we can identify F^(j)_{0,n} with F^(j)_n as given in (3.27) and (3.28), for j = 0, 1 and n = 0, 1, 2. The free constants f_i and g_i are as stated in (3.33), with χ replaced with χ_0. In the following subsections, we present explicit expressions for the first-order correction in δ, i.e. for m = 1, for all autoregulatory scenarios listed in (3.2).

B.1 mRNA autoactivation
We can solve (3.45) to second order in ε, i.e. for n = 2, applying the initial conditions in (3.55). Hence, it follows that the expression in (B.22) holds.

B.4 Protein autorepression
For the case of protein autorepression, we replace α M (u + 1) ∂u