On Observability and Reconstruction of Promoter Activity Statistics from Reporter Protein Mean and Variance Profiles

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9957)

Abstract

Reporter protein systems are widely used in biology for the indirect quantitative monitoring of gene expression activity over time. At the level of population averages, the relationship between the observed reporter concentration profile and gene promoter activity is well established, and effective methods have been introduced to reconstruct this information from the data. At the single-cell level, the relationship between population distribution time profiles and the statistics of promoter activation has still not been fully investigated, and adequate reconstruction methods are lacking.

This paper develops new results for the reconstruction of promoter activity statistics from mean and variance profiles of a reporter protein. Based on stochastic modelling of gene expression dynamics, it discusses the observability of the mean and autocovariance function of an arbitrary random binary promoter activity process. The mathematical relationships developed are explicit and nonparametric, i.e. free of a priori assumptions on the laws governing the promoter process, thus allowing for the decoupled analysis of the switching dynamics in a subsequent step. The results of this work constitute the essential tools for the development of promoter statistics and regulatory mechanism inference algorithms.

Keywords

Gene regulation · Doubly stochastic process · Spectral analysis

1 Introduction

A common experimental technique to monitor gene expression is the use of reporter proteins [9], i.e. fluorescent or luminescent proteins that are synthesized upon expression of the gene of interest. Light intensity measurements collected at different points in time are proportional to the amount of reporter molecules. This provides a quantitative, albeit indirect, readout of the activity of the gene, since reporter abundance depends on gene activation via its own transcription and translation dynamics.

When cellular populations are observed as a whole, such as in automated microplate readers, an average reporter profile is obtained. An estimate of the average gene activation over the population of cells may thus be obtained by regularized inversion of the reporter synthesis dynamics [21]. Provided accurate knowledge of the latter, reconstruction of the promoter activity allows one to investigate gene expression regulatory mechanisms, a crucial step toward inference of gene regulatory networks [18].

When individual cells are observed, for instance via flow-cytometry or fluorescence videomicroscopy, a statistical distribution of gene expression levels over a sample of the population (often called a population snapshot [6]) is obtained at several points in time (reporter traces for individual cells can also be obtained by suitable experimental setups and image processing techniques [20], but we will not analyze this case here). In many cases of interest, this crucially reveals variability of gene expression levels across cells that can be explained in terms of the stochasticity of the gene regulation and expression process [13, 16, 19]. Reporter statistics thus contain information about the stochastic laws governing gene activation. However, recovering the relevant information from the data is less trivial than in the population-average case, and no satisfactory methods exist to date.

With reference to population snapshot data, in [2, 3], we have started addressing the problem of estimating promoter activity statistics (the biological information of interest in gene expression reporting) from reporter mean and variance profiles. In [2], parametric models of stochastic gene activation have been considered, and the identifiability of promoter switching rates that are fixed over time and across cells has been analyzed. However, due to a priori unknown regulatory mechanisms, switching rates may fluctuate over time and/or across cells (extrinsic noise). To cope with this, in [3], a nonparametric method, i.e. avoiding assumptions on the regulatory mechanisms behind the expression of the gene of interest, has been proposed for the special case of irreversible activation. A rather extensive account of relevant research literature is also contained in these works.

Following up from the developments in [2, 3], for the general case of unmodelled stochastic (possibly time varying) gene expression regulation, we address here the problem of reconstructing second-order statistics of the promoter activity process from reporter mean and variance profiles. The importance of this problem lies in the fact that, in analogy with linear stochastic processes [12], cross-correlation of promoter activity at different points in time (i.e. the autocorrelation function) contains information about the time dynamics of activation and deactivation. Reconstruction of these statistics from data is thus the crucial step for the understanding of the gene regulatory mechanisms at the level of single cells, where stochastic variability offers more to discover than traditional population analysis [13, 14].

The contribution of this paper is the development of explicit relationships between the unknown (first- and) second-order promoter activity statistics and the experimentally measurable reporter mean and variance profiles. Crucially, these relationships rely on nonparametric models of gene activation, i.e. no a priori assumption is made except the absence of stochastic feedback from reporter abundance to the regulation of the gene itself, a hypothesis that agrees well with the biochemistry of reporter systems. Based on analytic investigation and examples, we show that these relationships are essentially linear, and hence invertible in a tractable manner, and allow for the discrimination among different promoter activity regulatory laws. On this basis, the implementation of algorithms for the actual estimation of the statistics of interest is left for future work. For ease of reading, all mathematical proofs are deferred to Appendix A. Appendix B, instead, summarizes results from [2] that are used in this work.

2 Background Material

Gene expression monitoring over time is commonly performed using fluorescent or luminescent reporter proteins (see [9] and references therein). In essence, synthesis of a reporter protein is placed under the control of the promoter of the gene of interest by engineering its coding sequence onto the DNA at an appropriate place. When the gene is expressed, transcription and subsequent translation lead to the formation of new reporter protein molecules. Whether luminescent or fluorescent, reporter protein molecules can be quantified at any time by measuring light intensity at the relevant wavelength, thus providing a dynamical readout of the activity of the gene. To do so, time-lapse microscopy, flow cytometry, microplate reading, or other experimental techniques are used, depending on whether single-cell measurements, population histograms, or population-average profiles are sought. Synthesis of reporter proteins is often complemented by a maturation step that takes immature proteins into their mature, visible form.

2.1 Stochastic Gene Expression Modelling

Gene expression is commonly described in terms of the synthesis and degradation reactions for mRNA and protein molecules
$$\begin{aligned} \mathscr {R}_1:~&F\xrightarrow {k_M} F+M&\mathscr {R}_2:~&M\xrightarrow {d_M}\emptyset \end{aligned}$$
(1)
$$\begin{aligned} \mathscr {R}_3:~&M\xrightarrow {k_P} M+P&\mathscr {R}_4:~&P\xrightarrow {d_P}\emptyset \end{aligned}$$
(2)
[5, 10] where M and P denote mRNA and protein species, respectively, and F represents the active promoter species. In the context of this paper, P is the fluorescent or luminescent reporter protein. We will not distinguish between immature (invisible) and mature (visible) protein molecules. If necessary (e.g. for slow, stochastic maturation), an additional first-order reaction \(P\rightarrow P_{mature}\) can be included in the model (along with \(P_{mature}\rightarrow \emptyset \)) to account for protein maturation (and mature protein degradation).
Denote with \(X_1\in \mathbb {N}\) and \(X_2\in \mathbb {N}\) the number of copies of M and P, respectively, and with \(X_3\in \{0,1\}\) the state of the promoter, i.e. \(X_3=0\) when the promoter is inactive (absence of F) and \(X_3=1\) when it is active (presence of F). Switching promoter dynamics (responsible for mRNA synthesis bursts in single cells) are formally captured by two additional reactions,
$$\begin{aligned} \mathscr {R}_5:~&\emptyset \xrightarrow {\lambda _+\cdot (1-X_3)} F,&\mathscr {R}_6:~&F\xrightarrow {\lambda _-\cdot X_3}\emptyset , \end{aligned}$$
(3)
representing, in order, activation with propensity \(\lambda _+\cdot (1-X_3)\) (only enabled if \(X_3=0\)), and deactivation with propensity \(\lambda _-\cdot X_3\) (only enabled if \(X_3=1\)). Overall, this is a system of \(m=6\) chemical reactions over \(n=3\) different species.
The kinetics of this biochemical reaction system can be expressed in terms of stoichiometry matrix S and reaction rate vector a(x) given by
$$\begin{aligned} S=\begin{bmatrix} 1&-1&0&0&0&0 \\ 0&0&1&-1&0&0 \\ 0&0&0&0&1&-1 \end{bmatrix},\quad a(x)= \begin{bmatrix} k_M x_3 \\ d_Mx_1 \\ k_Px_1 \\ d_P x_2 \\ \lambda _+(1-x_3) \\ \lambda _-x_3\end{bmatrix}, \end{aligned}$$
where, for \(i=1,\ldots , n\) and \(j=1,\ldots , m\), \(S_{i,j}\) denotes the net change in molecule number of species i when reaction \(\mathscr {R}_j\) occurs. At the level of a single cell, \(X=[X_1~X_2~X_3]^T\) is a stochastic process and, for every j, \(a_j\big (x\big )\) is interpreted as the infinitesimal probability that reaction \(\mathscr {R}_j\) occurs in an infinitesimal time period when \(X=x\) molecules of the different species are present in the reaction volume [16]. For constant rates \(\lambda _+\) and \(\lambda _-\), Eqs. (1)–(3) together constitute the so-called random telegraph model [16]. In general, however, these rates might themselves depend upon the amount of transcription factors regulating the expression of the gene, which one may write as \(\lambda _+(X_?)\) and \(\lambda _-(X_?)\), with \(X_?\) denoting the amount of some unspecified species.
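To make the model concrete, the reaction system above can be simulated directly with Gillespie's stochastic simulation algorithm, using the stoichiometry matrix S and propensity vector a(x) just defined. The Python sketch below is purely illustrative: all rate values and the final time are arbitrary choices, not taken from this paper.

```python
import numpy as np

# Gillespie (SSA) sketch of the random telegraph model (1)-(3).
# All rate values and the final time are illustrative, not from the paper.
rng = np.random.default_rng(0)

def telegraph_ssa(k_M=1.0, d_M=0.1, k_P=1.0, d_P=0.05,
                  lam_plus=0.05, lam_minus=0.05, t_end=50.0):
    # State x = [M, P, F]; columns of S match reactions R1..R6 in the text.
    S = np.array([[1, -1, 0,  0, 0,  0],
                  [0,  0, 1, -1, 0,  0],
                  [0,  0, 0,  0, 1, -1]])
    x = np.array([0, 0, 0])
    t = 0.0
    while True:
        # Propensities a(x) as in the text (R6 has propensity lam_minus * F).
        a = np.array([k_M * x[2], d_M * x[0], k_P * x[0], d_P * x[1],
                      lam_plus * (1 - x[2]), lam_minus * x[2]], dtype=float)
        a0 = a.sum()
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)   # time to next reaction
        if t > t_end:
            break
        j = rng.choice(6, p=a / a0)      # which reaction fires
        x = x + S[:, j]
    return x

print(telegraph_ssa())  # final copy numbers [M, P, F] of one simulated cell
```

Averaging many such sample paths is what the Stochkit-based computations in the figures below do at a much larger scale.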

The question we are going to investigate is what can be said about the statistics of F, given mean and variance profiles of the amounts of protein P across a population of cells. In practice, fluorescence or luminescence measurements proportional to the actual amount of protein are measured and are possibly affected by error. In this paper, however, we are not concerned with the details of the measurement model, and assume that mean and variance of \(X_2\) are observed directly.

2.2 Propagation of Moments

Consider an arbitrary biochemical reaction system with n reactants, m reactions, stoichiometry matrix S and reaction rates a(xu) possibly depending on a deterministic input u. Let X(t) be the corresponding random state vector at time t, and define \(\mu (t)=\mathbb {E}[X(t)]\) and \({\varSigma }(t)=\text {Cov}\big (X(t)\big )= \mathbb {E}\big [\big (X(t)-\mu (t)\big )\big (X(t)-\mu (t)\big )^T\big ]\). It can be shown (see e.g. [8]) that \(\mu \) and \({\varSigma }\) obey the so-called moment equations
$$\begin{aligned} \dot{\mu }&= S\mathbb {E}[a(X,u)], \end{aligned}$$
(4)
$$\begin{aligned} \dot{{\varSigma }}&=S\mathbb {E}\left[ a(X,u)(X-\mu )^T\right] +\mathbb {E}\left[ (X-\mu )a^T(X,u)\right] S^T \nonumber \\&\quad + S\text {diag}(\mathbb {E}[a(X,u)])S^T. \end{aligned}$$
(5)
Above and in the sequel, time t is omitted from notation where no confusion may arise. If rates are affine in the state, i.e. \(a(x,u)=W(u)x+w_0(u)\) for some W(u) and \(w_0(u)\), these equations simplify to
$$\begin{aligned} \dot{\mu }&= S W(u)\mu +Sw_0(u), \end{aligned}$$
(6)
$$\begin{aligned} \dot{{\varSigma }}&=S W(u){\varSigma }+{\varSigma }W^T(u)S^T+S\text {diag}\big (W(u)\mu +w_0(u)\big )S^T. \end{aligned}$$
(7)
This system of differential equations is closed in the sense that it does not depend on unmodelled moments. If in addition W does not depend on u, then the system is linear in the input (and the initial conditions).
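For the affine-rate case, the closed system (6)–(7) can be integrated directly. The sketch below (Python; all parameter values are illustrative) writes out W and \(w_0\) for the fixed-rate system (1)–(3) and integrates the moment equations by forward Euler; since F is binary, the exact moment flow preserves the identity \(\sigma _{FF}=\mu _F(1-\mu _F)\), which the numerical solution reproduces up to discretization error.

```python
import numpy as np

# Forward-Euler integration of the closed moment equations (6)-(7) for the
# fixed-rate system (1)-(3), whose rates are affine: a(x) = W x + w0.
# All parameter values are illustrative.
k_M, d_M, k_P, d_P = 1.0, 0.1, 1.0, 0.05
lam_p, lam_m = 0.05, 0.05            # alpha = 0.1

S = np.array([[1, -1, 0,  0, 0,  0],
              [0,  0, 1, -1, 0,  0],
              [0,  0, 0,  0, 1, -1]], dtype=float)
W = np.array([[0, 0, k_M],           # rows match the rate vector a(x)
              [d_M, 0, 0],
              [k_P, 0, 0],
              [0, d_P, 0],
              [0, 0, -lam_p],
              [0, 0, lam_m]])
w0 = np.array([0, 0, 0, 0, lam_p, 0], dtype=float)

mu, Sig = np.zeros(3), np.zeros((3, 3))   # M = P = F = 0 at time 0
dt, T = 1e-3, 50.0
for _ in range(int(T / dt)):
    rate = W @ mu + w0
    mu_dot = S @ rate
    Sig_dot = S @ W @ Sig + Sig @ W.T @ S.T + S @ np.diag(rate) @ S.T
    mu, Sig = mu + dt * mu_dot, Sig + dt * Sig_dot

# F is binary, so sigma_FF = mu_F (1 - mu_F) along the whole trajectory.
print(mu[2], Sig[2, 2])
```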
For the system (1)–(3), Eqs. (6)–(7) apply in the case of constant rates \(\lambda _+\) and \(\lambda _-\). In the general case of regulated switching rates \(\lambda _+(X_?)\) and \(\lambda _-(X_?)\), one may instead interpret (4)–(5) as the moment equations for the augmented state composed of X and \(X_?\). Since the laws regulating \(X_?\) are unspecified, the full system cannot be spelled out, but one may still work out the equations for the evolution of the moments of \(X_1\), \(X_2\) and \(X_3\). Define
$$\begin{aligned} \begin{bmatrix} z_{MP}^T ~|~ z^T_{\times } ~|~ z_{F}^T \end{bmatrix}= \begin{bmatrix} \mu _M&\mu _P&\sigma _{MM}&\sigma _{PP}&\sigma _{MP} ~|~ \sigma _{MF}&\sigma _{PF} ~|~ \mu _F&\sigma _{FF} \end{bmatrix}, \end{aligned}$$
(vertical bars denoting vector blocks) where of course \(\mu _{\bullet }\) and \(\sigma _{\bullet \bullet }\) are the mean and covariance of the states corresponding to the species in subscript (identical subscripts denoting variance). From an engineering viewpoint, \(z_{MP}\) is the state of the dynamical sensor for the statistics of F, with sensor output given by the elements \([\mu _P~\sigma _{PP}]^T\) of \(z_{MP}\). Then one gets
$$\begin{aligned} \dot{z}_{MP}&=A_{MP}\cdot z_{MP}+A_{MP,\times }\cdot z_\times +A_{MP,F}\cdot z_F, \end{aligned}$$
(8)
$$\begin{aligned} \dot{z}_\times&=A_{\otimes }\cdot z_\times +z_\otimes +A_{\times ,F}\cdot z_F, \end{aligned}$$
(9)
for matrices \(A_{MP}\), \(A_{MP,\times }\), \(A_{MP,F}\), \(A_{\otimes }\) and \(A_{\times ,F}\) depending solely on \(\theta _{MP}=(k_M,d_M,k_P,d_P)\) (see Appendix A), i.e. the parameters of the sensing system. Note that, for every fixed t, F(t) is a Bernoulli random variable. Then \(\sigma _{FF}(t)=\mu _F(t)\big (1-\mu _F(t)\big )\) for all t (as a consequence, (8)–(9) are somewhat redundant).
From (8)–(9) one observes that mean and variance of \(X_2\), the observed elements of \(z_{MP}\), are thus a dynamical transformation of those of F, i.e. \(z_F\), plus a contribution from
$$\begin{aligned} z_\otimes =\text {Cov}\left( \begin{bmatrix} X_1\\ X_2\end{bmatrix}, \begin{bmatrix} \lambda _+(X_?)(1-X_3) \\ \lambda _-(X_?)X_3\end{bmatrix}\right) \cdot \begin{bmatrix} 1 \\ -1\end{bmatrix}. \end{aligned}$$
As it will become clear, \(z_\otimes \) implicitly brings about a contribution from the correlation structure of F (see later Remark 1).

2.3 Marginalization of Moments

From now on, abusing notation in favor of simplicity, we will refer to \(X_1\), \(X_2\) and \(X_3\) by the symbols for the corresponding species, i.e. M, P and F, respectively. Let f be any possible outcome of F, and let
$$\begin{aligned} \mu _P(t)&=\mathbb {E}[P(t)],&\mu _P^f(t)&=\mathbb {E}[P(t)|F=f], \\ \mathscr {M}_P(t)&=\mathbb {E}[P(t)^2],&\mathscr {M}_P^f(t)&=\mathbb {E}[P(t)^2|F=f], \end{aligned}$$
where, unlike the approach in [7], conditioning is intended over the whole history of F. By marginalization,
$$\begin{aligned} \mu _P&=\mathbb {E}\big [\mathbb {E}[P|F]\big ]=\int \mu _P^fd\mathscr {P}_F(f),&\mathscr {M}_P&=\mathbb {E}\big [\mathbb {E}[P^2|F]\big ]=\int \mathscr {M}_P^f d\mathscr {P}_F(f), \end{aligned}$$
(10)
with \(\mathscr {P}_F\) the probability distribution of F over all possible binary switching sequences. Let us now state the following assumption.

Assumption 1

(Granger causality [12]). There is no feedback from M and P to F, i.e., at any time t, the future of F is conditionally independent of the past of M and P given the past of F.

This captures the idea that species M and P do not participate in the regulation of the promoter [1, 3], and corresponds well with all reporter systems in which reporter and regulatory proteins are physically distinct molecules. In the light of Assumption 1, the conditional moments \(\mu _P^f\) and \(\mathscr {M}_P^f\) are those of the reduced system (1)–(2) with f defining the state of species F at all times. Let
$$\begin{aligned} z_{MP}^f= \begin{bmatrix} \mu _M^f&\mu _P^f&\sigma _{MM}^f&\sigma _{PP}^f&\sigma _{MP}^f \end{bmatrix} \end{aligned}$$
be the vector of conditional moments of M and P. Working out the moment equations (6)–(7) for \(X=\begin{bmatrix}M&P\end{bmatrix}^T\) and input \(u=f\), one gets that
$$\begin{aligned} \dot{z}_{MP}^f=A_{MP}\cdot z_{MP}^f+(A_{MP,F})_1\cdot f, \end{aligned}$$
(11)
where \((A_{MP,F})_1\) denotes the first column of \(A_{MP,F}\). Then \(\mu _P^f\) and \(\sigma _{PP}^f\) follow from the solution of this system and \(\mathscr {M}_P^f=\sigma _{PP}^f+(\mu _P^f)^2\), while marginalization (10) completes the computation of \(\mu _P\) and \(\mathscr {M}_P\). Note that, because of the relationship \(\mathscr {M}_P=\sigma _{PP}+(\mu _P)^2\), we can equivalently consider \((\mu _P, \sigma _{PP})\) or \((\mu _P, \mathscr {M}_P)\) to be the observed output quantities. We will often exploit this equivalence in the sequel without further notice.

Incidentally, notice that (11) represents a linear switching system with two alternating operational modes, \(f=0\) and \(f=1\).
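As an illustration of (11), the sketch below (Python, with illustrative parameter values) writes out \(A_{MP}\) and \((A_{MP,F})_1\) from (6)–(7) for the reduced system (1)–(2) and integrates the conditional moments for the constant path \(f\equiv 1\); a square-wave f would simply alternate the two modes. As a standard consistency check, the stationary conditional mRNA statistics are Poissonian and the stationary protein Fano factor equals \(1+k_P/(d_M+d_P)\).

```python
import numpy as np

# Conditional moments of (11) for one fixed promoter path f. A_MP and
# (A_MP,F)_1 are written out from (6)-(7) for the reduced system (1)-(2),
# with z_MP^f = [mu_M, mu_P, s_MM, s_PP, s_MP]. Parameters are illustrative.
k_M, d_M, k_P, d_P = 1.0, 0.1, 1.0, 0.05

A_MP = np.array([
    [-d_M,  0.0,  0.0,        0.0,       0.0],
    [ k_P, -d_P,  0.0,        0.0,       0.0],
    [ d_M,  0.0, -2.0 * d_M,  0.0,       0.0],
    [ k_P,  d_P,  0.0,       -2.0 * d_P, 2.0 * k_P],
    [ 0.0,  0.0,  k_P,        0.0,      -(d_M + d_P)],
])
B = np.array([k_M, 0.0, k_M, 0.0, 0.0])   # (A_MP,F)_1

f = 1.0            # constant "promoter on" path; a square wave would
z = np.zeros(5)    # alternate the two operational modes of (11)
dt, T = 0.01, 300.0
for _ in range(int(T / dt)):               # forward Euler on (11)
    z = z + dt * (A_MP @ z + B * f)

mu_M, mu_P, s_MM, s_PP, s_MP = z
print(mu_M, s_MM / mu_M, s_PP / mu_P)      # mRNA mean/Fano, protein Fano
```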

3 The Fixed Rate Promoter Process

In order to investigate how statistics of F reflect into the observed profiles \(\mu _P\) and \(\mathscr {M}_P\), and how they may possibly be reconstructed from the output, we first focus on the fundamental case where switching rates \(\lambda _+\) and \(\lambda _-\) are constant. Define \(\alpha =\lambda _++\lambda _-\).

Proposition 1

Mean \(\mu _F(t)=\mathbb {E}[F(t)]\) and autocovariance function \(\rho _F(t,s)=\text {cov}\big (F(t),F(s)\big )\) obey the equations
$$\begin{aligned} \mu _F(t)&=\mu _F(0)e^{-\alpha t}+\frac{\lambda _+}{\alpha }\left( 1-e^{-\alpha t}\right) ,&t&\ge 0, \end{aligned}$$
(12)
$$\begin{aligned} \rho _F(t,\tau )&=\left( \frac{\lambda _+}{\alpha }+\frac{(\alpha -\lambda _+)}{\alpha }e^{-\alpha (t-\tau )}\right) \cdot \mu _F(\tau )-\mu _F(t)\cdot \mu _F(\tau ),&t&\ge \tau . \end{aligned}$$
(13)
In stationary conditions, with an abuse of notation for the arguments of \(\rho _F\),
$$\begin{aligned} \mu _F&=\frac{\lambda _+}{\alpha },&\rho _F(t-\tau )&=\frac{\lambda _+(\alpha -\lambda _+)}{\alpha ^2}\cdot e^{-\alpha (t-\tau )}. \end{aligned}$$
(14)

Incidentally, the autocovariance in (14) is the same as that of an Ornstein-Uhlenbeck process [15].
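The relations of Proposition 1 are easy to check numerically. The snippet below (Python, arbitrary illustrative rate values) evaluates (12)–(13) with the stationary initial condition \(\mu _F(0)=\lambda _+/\alpha \) and confirms that (13) then collapses to the stationary expression (14).

```python
import numpy as np

# Check of Proposition 1: with the stationary initial condition
# mu_F(0) = lambda_+ / alpha, the transient autocovariance (13) reduces
# to the stationary expression (14). Rate values are illustrative.
lam_p, lam_m = 0.05, 0.15
alpha = lam_p + lam_m

def mu_F(t, mu0):                          # Eq. (12)
    return mu0 * np.exp(-alpha * t) + (lam_p / alpha) * (1 - np.exp(-alpha * t))

def rho_F(t, tau, mu0):                    # Eq. (13), t >= tau
    m_t, m_tau = mu_F(t, mu0), mu_F(tau, mu0)
    gain = lam_p / alpha + (alpha - lam_p) / alpha * np.exp(-alpha * (t - tau))
    return gain * m_tau - m_t * m_tau

def rho_F_stat(dt):                        # Eq. (14)
    return lam_p * (alpha - lam_p) / alpha**2 * np.exp(-alpha * dt)

t, tau = 12.0, 7.0
print(rho_F(t, tau, lam_p / alpha), rho_F_stat(t - tau))
```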

It can be appreciated that, in transient conditions, the mean profile \(\mu _F\) contains all the information about the statistics of F. Indeed, in this simple case, rates \(\lambda _+\) and \(\lambda _-\) (or equivalently \(\alpha \)), together with the initial condition \(\mu _F(0)\), fully determine the laws of F. In turn, these three quantities have distinct effects on \(\mu _F\), i.e. they are distinguishable from a transient mean profile. In [2], it was shown that these and other model parameters, notably \(\theta _{MP}\), are also jointly distinguishable from the measured profiles \(\mu _P\) and \(\mathscr {M}_P\). The result is based on the specialization of (8)–(9) for the case of the fixed rate process, given by [2]
$$\begin{aligned} \begin{bmatrix} \dot{z}_{MP} \\ \dot{z}_\times \\ \dot{z}_F \end{bmatrix}= \begin{bmatrix}A_{MP}&A_{MP,\times }&A_{MP,F} \\ 0&A_\times&A_{\times ,F} \\ 0&0&A_F \end{bmatrix}\cdot \begin{bmatrix} z_{MP} \\ z_\times \\ z_F \end{bmatrix}+\begin{bmatrix} 0 \\ 0 \\ u_F \end{bmatrix}, \end{aligned}$$
(15)
where \(A_\times =-\alpha I+A_\otimes \), \(u_F=\begin{bmatrix}\lambda _+&\lambda _+\end{bmatrix}^T\) and \(A_F\) depends only on \(\lambda _+\) and \(\alpha \) as detailed in Appendix A. For known parameters \(\theta _{MP}\), we may easily show that \(\lambda _-\), \(\lambda _+\) and \(\mu _F(0)\) are also distinguishable from the sole mean \(\mu _P\). For simplicity, we consider the case where M and P are identically 0 at time 0. By inspection of (15),
$$\begin{aligned} \begin{aligned} \dot{\mu }_F&=-\alpha \mu _F+\lambda _+, \\ \dot{\mu }_M&=-d_M \mu _M+k_M \mu _F, \\ \dot{\mu }_P&=-d_P \mu _P+k_P \mu _M \end{aligned} \end{aligned}$$
(16)
(the expression of \(\dot{\mu }_F\) above coincides with the differential form of (12)). Thus, in terms of Laplace transform,
$$\begin{aligned} \mu _P(s)=\frac{\lambda _+k_Mk_P}{s(\alpha +s)(d_M+s)(d_P+s)}+\frac{\mu _F(0)k_Mk_P}{(\alpha +s)(d_M+s)(d_P+s)}, \end{aligned}$$
and one may apply the method of [2] (also reported in Appendix B) to prove sensitivity of this solution (equivalently, the solution over time) to any change in the three unknown parameters, almost everywhere in the space of the remaining parameters. In practical terms, parameter values can be reconstructed either from \(\mu _F\) as obtained by deconvolution from \(\mu _P\), or by direct fit of (16) to an observed \(\mu _P\) profile.
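The transient distinguishability argument can also be illustrated numerically: integrating the cascade (16) for two fixed-rate processes with the same stationary mean \(\lambda _+/\alpha =0.5\) but different switching speeds yields clearly different \(\mu _P\) transients. The sketch below uses illustrative kinetic parameters and a simple Euler scheme.

```python
import numpy as np

# Transient distinguishability: two fixed-rate processes with the same
# stationary mean lambda_+/alpha = 0.5 but different switching speeds give
# different mu_P transients under (16). Kinetic parameters are illustrative.
k_M, d_M, k_P, d_P = 1.0, 0.1, 1.0, 0.05

def mu_P_profile(lam_p, lam_m, t_end=40.0, dt=0.01):
    alpha = lam_p + lam_m
    mF = mM = mP = 0.0                     # M = P = 0, F inactive at time 0
    out = []
    for _ in range(int(t_end / dt)):       # forward Euler on (16)
        mF, mM, mP = (mF + dt * (-alpha * mF + lam_p),
                      mM + dt * (-d_M * mM + k_M * mF),
                      mP + dt * (-d_P * mP + k_P * mM))
        out.append(mP)
    return np.array(out)

slow = mu_P_profile(0.05, 0.05)            # alpha = 0.1
fast = mu_P_profile(0.5, 0.5)              # alpha = 1.0
print(np.max(np.abs(fast - slow)))         # transients clearly separated
```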

Now assume that F has reached stationarity. In this case, all relevant statistics of F are determined by \(\lambda _+\) and \(\lambda _-\). However, from Proposition 1, mean \(\mu _F\) only conveys information about the ratio \(\lambda _+/\alpha \), and, because \(\sigma ^2_F=\mu _F(1-\mu _F)\) at any point in time, no more information is contained in the variance. Specific contributions of the two parameters can instead be traced in the autocovariance function \(\rho _F\). Indeed, multiplicative factor \(\lambda _+(\alpha -\lambda _+)/\alpha ^2\) and decay rate \(\alpha \) have distinguishable effects on \(\rho _F\) (different choices of the two lead to different profiles \(\rho _F(\cdot )\)) and uniquely determine \(\lambda _+\) and \(\lambda _-\). The question arises whether \(\rho _F\) is observable from the measured profiles \(\mu _P\) and \(\mathscr {M}_P\) (i.e. whether \(\lambda _+\) and \(\lambda _-\) are also distinguishable from the experimental output). In this section we provide a positive answer in terms of identifiability of \(\lambda _+\) and \(\lambda _-\), i.e. for processes F with fixed rates. A more general answer will be provided in the next section.

From Proposition 1, stationary conditions are achieved when \(\mu _F\) is in steady state (i.e. when the factors of \(\rho _F(t,\tau )\) involving \(\mu _F\) no longer depend on \(\tau \)). It then suffices to check identifiability of \(\lambda _+\) and \(\lambda _-\) from the solution of (15) with stationary initial conditions \(\mu _F(0)=\lambda _+/\alpha \) and \(\sigma ^2_F(0)=(\lambda _+/\alpha )(1-\lambda _+/\alpha )\). Using again the method of [2], one computes the Laplace response function of this system. The resulting equations are lengthy and not reported here. Then, it can be checked that the Laplace sensitivity condition also reported in Appendix B is verified, i.e. the time profiles of \(\mu _P\) and \(\mathscr {M}_P\) are sensitive to all possible changes of \(\lambda _+\) and \(\alpha \), almost everywhere in the space of the parameters \(\theta _{MP}\).
Fig. 1.

Statistics for F (dashed lines and circles) and \(F'\) (solid lines and dots). Lines visualize analytic solutions, markers are for empirical statistics from Gillespie simulations. Gillespie simulations are performed using Stochkit [17] for the generation of \(10^4\) sample paths (i.e. simulated cells). Numerical calculations are performed in Matlab.

Example 1

Refer to Fig. 1. Statistics for two fixed-rate promoter activity processes, F and \(F'\), are considered. F has \(\lambda _+=\lambda _-=0.05\), while \(F'\) has the faster switching dynamics \(\lambda _+=\lambda _-=0.5\). Starting from the non-stationary conditions \(F=F'=0\) at time 0, the means \(\mu _F\) and \(\mu _{F'}\) both converge to 0.5, at different rates (Fig. 1, left), thus resulting in different output profiles \(\mu _P\) (not shown). In other words, the two processes are distinguishable from the mean. In stationary conditions, instead, the means for F and \(F'\) are the same. Yet the stationary autocovariance functions \(\rho _F\) and \(\rho _{F'}\) differ in the two cases (Fig. 1, center). This results in different observed profiles of \(\sigma _{PP}\) (Fig. 1, right). In other words, in stationary conditions, F and \(F'\) are distinguishable from the output variance.

Remark 1

Equations (15) are obtained from (8)–(9) by developing the expression of \(z_\otimes \). This results in expressions depending on the matrices \(A_\times \) and \(A_F\), which bring the role of \(\alpha \), the decay rate of \(\rho _F\), into the propagation of second-order moments from \(z_F\) to \(z_{MP}\). This fact is indeed in agreement with the discussion of \(z_\otimes \) at the end of Sect. 2.2.

To summarize, we have shown that the constant switching rates of a promoter activity process F, and hence all of its statistics, can be reconstructed from the output mean if F is not in stationary conditions. In stationary conditions, the promoter statistics cannot be determined from the output mean, but they can be determined from the output variance, since the latter reflects differences in the autocovariance function of F. Analytic expressions and a case study have been developed to support our arguments.

4 General Promoter Switching Processes

We now wish to study how first- and second-order moments of switching process F reflect into outputs \(\mu _P\) and \(\mathscr {M}_P\), and how to possibly reconstruct the former from the latter, without a priori knowledge on F. In particular, we do not assume that switching rates \(\lambda _+\) and \(\lambda _-\) are fixed. We only assume that F has continuous (mean and) autocovariance \(\rho _F(t,s)\), and that Assumption 1 holds. For simplicity, we focus on the case where \(z_{MP}(0)\) is null (M and P equal to zero at time zero).

From Eq. (11), for some final time \(T>0\), the conditional moments \(\mu _P^f(t)\) and \(\sigma _{PP}^f(t)\) over [0, T) are the output of a linear dynamical system with (zero initial conditions and) input f. We may then introduce linear operators, \(L_1\) and \(L_2\), and abstract the transformation from function f to \(\mu _P^f\) and \(\sigma _{PP}^f\) as \(\mu _P^f=L_1f\) and \(\sigma _{PP}^f=L_2f\). When necessary, for any \(t\in [0,T)\), we will write \(\mu _P^f(t)=(L_1f)(t)\) as \(L_1^tf\) and \(\sigma _{PP}^f(t)=(L_2f)(t)\) as \(L_2^tf\). Of course, for \(k=1\) and \(k=2\),
$$\begin{aligned} L_k^tf&=\int _0^t d\tau \,\ell _k(t,\tau )f(\tau ),&\ell _k(t,\tau )&=C_ke^{A_{MP}(t-\tau )}(A_{MP,F})_1, \end{aligned}$$
with \(C_1=\begin{bmatrix}0&1&0&0&0\end{bmatrix}\) (mean readout) and \(C_2=\begin{bmatrix}0&0&0&1&0\end{bmatrix}\) (variance readout).

4.1 Observability and Reconstruction of the Process Mean

From the first equality in (10), one has that
$$\begin{aligned} \mu _P=\int (L_1f) d\mathscr {P}_F(f)=L_1 \left( \int f d\mathscr {P}_F(f)\right) =L_1 \mu _F. \end{aligned}$$
Not surprisingly at this point, \(\mu _P\) thus follows from the linear dynamical transformation of \(\mu _F\) already found in (8). Observability of \(\mu _F\) from \(\mu _P\) essentially depends on the spectrum of \(L_1\). Since
$$\begin{aligned} \mu _P(s)=\frac{k_Mk_P}{(d_M+s)(d_P+s)}\mu _F(s), \end{aligned}$$
for strictly positive parameters \(\theta _{MP}\), the transformation is invertible over the whole spectrum, i.e. \(\mu _F\) can be perfectly reconstructed from \(\mu _P\). In practice, this amounts to a deconvolution problem that is rather easy to solve [2].
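A minimal sketch of this deconvolution, under illustrative parameter values and an assumed noise level: discretize the convolution whose kernel is the impulse response of the \(M\rightarrow P\) cascade (the inverse transform of \(k_Mk_P/((d_M+s)(d_P+s))\)) and apply Tikhonov-regularized inversion. Grid, noise and regularization weight are arbitrary choices for illustration.

```python
import numpy as np

# Tikhonov-regularized deconvolution of mu_F from mu_P. The convolution
# kernel is the impulse response of the M -> P cascade. Parameters, grid,
# noise level and regularization weight are all illustrative.
k_M, d_M, k_P, d_P = 1.0, 0.1, 1.0, 0.05
dt, n = 0.1, 400
t = dt * np.arange(n)

# impulse response (distinct rates d_M != d_P)
ell = k_M * k_P * (np.exp(-d_M * t) - np.exp(-d_P * t)) / (d_P - d_M)
idx = np.arange(n)
# lower-triangular convolution matrix: L[i, j] = dt * ell(t_i - t_j), j <= i
L = np.tril(ell[np.clip(idx[:, None] - idx[None, :], 0, None)]) * dt

mu_F_true = 0.5 * (1 - np.exp(-0.1 * t))         # e.g. a telegraph mean (12)
rng = np.random.default_rng(1)
mu_P = L @ mu_F_true + rng.normal(0.0, 1e-3, n)  # noisy "measured" mean

lam = 1e-3                                       # regularization weight
mu_F_hat = np.linalg.solve(L.T @ L + lam * np.eye(n), L.T @ mu_P)
print(np.mean(np.abs(mu_F_hat[:300] - mu_F_true[:300])))
```

The reconstruction degrades near the end of the observation window, where \(\mu _F\) is only weakly observable because the cascade response starts from \(\ell (0)=0\); for this reason the error above is reported on the interior of the grid.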

4.2 Observability and Reconstruction of the Process Covariance

We begin with the following result.

Proposition 2

For any time \(t\in [0,T)\), it holds that
$$\begin{aligned} \mathscr {M}_P(t)=L_2^t\mu _F+\mathbb {E}[(L_1^tF)^2]. \end{aligned}$$
(17)
Clearly the autocovariance function of F plays a role in the term \(\mathbb {E}[(L_1^tF)^2]\). To study this term further, consider the Karhunen-Loève decomposition [15] of process F, given by
$$\begin{aligned} F-\mu _F=\sum _{i=1}^\infty a_i\phi _i, \end{aligned}$$
where the \(\phi _i\) are the mutually orthogonal, unit norm eigenfunctions of the operator \(K:\phi \mapsto \int d\tau \rho _F(\cdot ,\tau )\phi (\tau )\), i.e. \(K\phi _i=\sigma ^2_i\phi _i\), and the \(a_i\) are mutually uncorrelated, zero-mean random variables with variance equal to the eigenvalues \(\sigma ^2_i\) (function norm is in \(L^2\) and the decomposition holds in the mean-square sense). Then
$$\begin{aligned} \rho _F(t,\tau )&=\sum _{i=1}^\infty \sigma _i^2 \phi _i(t) \phi _i(\tau ),&\sigma _{FF}(t)&=\sum _{i=1}^\infty \sigma _i^2 \phi _i^2(t). \end{aligned}$$
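The decomposition can be explored numerically by discretizing the operator K on a time grid. The sketch below (Python, illustrative rates and grid) does this for the stationary telegraph autocovariance (14) and verifies the Mercer reconstruction of \(\rho _F\) and \(\sigma _{FF}\).

```python
import numpy as np

# Discretized Karhunen-Loeve decomposition of the stationary telegraph
# autocovariance (14), with a check of the Mercer reconstruction.
# Rates and the grid are illustrative.
lam_p, lam_m = 0.05, 0.05
alpha = lam_p + lam_m
dt, n = 0.1, 300
t = dt * np.arange(n)

rho = (lam_p * (alpha - lam_p) / alpha**2
       * np.exp(-alpha * np.abs(t[:, None] - t[None, :])))

# K phi = int rho(., tau) phi(tau) dtau  ->  matrix rho * dt on the grid
evals, evecs = np.linalg.eigh(rho * dt)
phi = evecs / np.sqrt(dt)      # eigenfunctions with unit L2 norm on the grid
sigma2 = evals                 # variances of the KL coefficients a_i

# Mercer: rho(t, tau) = sum_i sigma2_i phi_i(t) phi_i(tau)
recon = (phi * sigma2) @ phi.T
print(np.abs(recon - rho).max())
```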

Proposition 3

It holds that \(\mathbb {E}[(L_1^tF)^2]=(L_1^t\mu _F)^2+\sum _{i=1}^\infty \sigma _i^2 (L_1^t\phi _i)^2\).

In sum, from Propositions 2 and 3, and using the fact that \((\mu _P)^2=(L_1\mu _F)^2\),
$$\begin{aligned} \sigma _{PP}(t)=\mathscr {M}_P(t)-\mu _P^2(t)=L_2^t\mu _F+\sum _{i=1}^\infty \sigma _i^2 (L_1^t\phi _i)^2. \end{aligned}$$
(18)
Comparing the expressions of \(\sigma _{PP}\) and \(\sigma _{FF}\) one notices that, besides the term \(L_2^t\mu _F\), the functions composing F and characterizing its autocovariance structure are transformed by \(L_1^t\) into contributions that make up the variance of P at time t. Were \(L_1^t\) an evaluation operator, i.e. \(L_1^t\phi _i=\phi _i(t)\), then \(\sigma _{PP}(t)\) would degenerate to \(L_2^t\mu _F+\sigma _{FF}(t)\), i.e. information about the autocovariance structure of F would be lost. For every t, it is the integral nature of \(L_1^t\) that channels information about the whole \(\rho _F(\cdot ,\cdot )\) into \(\sigma _{PP}(t)\). Another viewpoint on this is given in what follows.

Equation (18) explains the nature of the information transfer from \(\rho _F\) to \(\sigma _{PP}\). For reconstruction purposes, however, we seek a more explicit relationship between \(\sigma _{PP}\) and \(\rho _F\). The following result relies on the convolutional form of \(L_1\).

Proposition 4

It holds that
$$\begin{aligned} \sigma _{PP}(t)=L_2^t\mu _F+\iint d\tau \,dv\,\ell _1(t,\tau )\ell _1(t,v)\rho _F(\tau ,v). \end{aligned}$$
(19)
Hence \(\rho _F\) undergoes itself a linear transformation H defined by
$$\begin{aligned} H^t\rho =(H\rho )(t)=\int _0^t d\tau \int _0^t dv\,\ell _1(t,\tau )\ell _1(t,v)\rho (\tau ,v). \end{aligned}$$
In particular, suppose that F is stationary. Then, by a change of variables,
$$\begin{aligned} H^t\rho _F=\int _0^t d\tau \int _0^t dv\,\ell _1(t,\tau )\ell _1(t,v)\rho _F(\tau -v)=\int _{-t}^t d\delta \, h(t,\delta )\rho _F(\delta ) \end{aligned}$$
with
$$\begin{aligned} h(t,\delta )=\int _{\max \{-\delta ,0\}}^{\min \{t,t-\delta \}}dv\,\ell _1(t,v+\delta )\ell _1(t,v). \end{aligned}$$
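The change of variables can be validated numerically: the sketch below (Python, with \(\ell _1\) taken as the impulse response of the \(M\rightarrow P\) cascade and an illustrative stationary \(\rho _F\) of the telegraph form (14)) evaluates both the double integral in (19) and the single integral against \(h(t,\delta )\) by trapezoidal quadrature and checks that they agree.

```python
import numpy as np

# Check of the change of variables: for stationary rho_F, the double
# integral in (19) equals the single integral of h(t, delta) rho_F(delta).
# ell_1 is the impulse response from F to the mean of P; rho_F is taken of
# the telegraph form (14); all numerical values are illustrative.
k_M, d_M, k_P, d_P = 1.0, 0.1, 1.0, 0.05
alpha, c = 0.1, 0.25

def ell1(u):                   # ell_1(t, tau) = ell1(t - tau), u >= 0
    return k_M * k_P * (np.exp(-d_M * u) - np.exp(-d_P * u)) / (d_P - d_M)

def rho(d):                    # stationary autocovariance, even in delta
    return c * np.exp(-alpha * np.abs(d))

def trap_w(x):                 # trapezoidal quadrature weights
    w = np.full(x.size, x[1] - x[0])
    w[0] = w[-1] = (x[1] - x[0]) / 2
    return w

t, m = 20.0, 801
tau = np.linspace(0.0, t, m)
g = trap_w(tau) * ell1(t - tau)
double = g @ rho(tau[:, None] - tau[None, :]) @ g   # double integral in (19)

def h(t, d, nq=401):           # kernel h(t, delta) as defined above
    lo, hi = max(-d, 0.0), min(t, t - d)
    v = np.linspace(lo, hi, nq)
    return (trap_w(v) * ell1(t - (v + d)) * ell1(t - v)).sum()

delta = np.linspace(-t, t, 2 * m - 1)
wd = trap_w(delta)
single = sum(wd[i] * h(t, delta[i]) * rho(delta[i]) for i in range(delta.size))
print(double, single)
```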
In the light of these results, the problem of the observability of \(\rho _F\), or rather the joint observability of \(\rho _F\) and \(\mu _F\) from \(\mu _P\) and \(\sigma _{PP}\), is thus equivalent to that of the invertibility of the linear operator
$$\begin{aligned} (\mu _F,\rho _F)\mapsto (L_1\mu _F,L_2\mu _F+H\rho _F) \end{aligned}$$
(20)
(with relevant simplifications if stationarity of F is hypothesized). We note that, besides the term \(L_2\mu _F\), the relationship between \(\rho _F\) and \(\sigma _{PP}\) is analogous to that pertaining to linear transformations of second-order processes. In particular, using the fact that \(\ell _1(t,\cdot )\) is the impulse response of a time-invariant dynamical system, the second term of (19) can be seen as the autocovariance of the output of a linear filter with response \(\ell _1\) fed with an input process with autocovariance \(\rho _F\). It is then natural to frame the observability analysis of \(\rho _F\) in the context of spectral analysis [11, 12]. This analysis is left for future work. Here we limit ourselves to the discussion of an illustrative example.
Fig. 2.

Statistics for a random-rate promoter process F (dash-dotted lines) and relevant fixed-rate promoter processes \(F'\) (same as in Fig. 1, dashed lines) and \(F''\) (dotted lines). Left: autocovariance functions \(\rho _F\) (dots: estimates from Gillespie simulations; line: interpolation), \(\rho _{F'}\) (from (14)) and \(\rho _{F''}\) (from (14)); Center: Observed output mean \(\mu _P\) for F (Gillespie simulation), \(F'\) (solution of (15)) and \(F''\) (solution of (15)) – curves for F and \(F'\) are superimposed; Right: The observed output variance of P for F (diamonds: numerical computation of (19), based on the profile of \(\rho _F\) from Gillespie simulations; line: estimate from Gillespie simulation – diamonds and line are superimposed), \(F'\) (solution of (15)) and \(F''\) (solution of (15)) – curves for F and \(F''\) are superimposed. Gillespie simulations are performed using Stochkit [17] for the generation of \(10^5\) sample paths (i.e. simulated cells). Numerical calculations are performed in Matlab.

Example 2

Refer to Fig. 2. We consider a promoter activity process F with randomly regulated rates and compare its statistics with those of relevant fixed-rate processes \(F'\) and \(F''\). All processes are analyzed in stationary regime and have deactivation rate \(\lambda _-\) identically equal to 0.5; their definitions differ only in the activation rate. The activation rate of F is \(\lambda _+(R)=1\cdot R\), where the regulator R is another random binary process with switch-off rate 0.1 and switch-on rate 0.2217, the latter chosen so as to guarantee that the stationary mean of F is \(\mu _F=0.5\). Process \(F'\) is defined as in Example 1, i.e. it has \(\lambda _+=0.5\), again resulting in \(\mu _{F'}=0.5\). Finally, process \(F''\) has activation rate \(\lambda _+=\mathbb {E}[\lambda _+(R)]=0.6892\), i.e. a switch-on rate equal on average to that of F. This results in a different mean, \(\mu _{F''}=0.5795\).

The autocovariance function of F (Fig. 2, left) is markedly different from those of \(F'\) and \(F''\), which are similar to each other. Because \(\mu _F=\mu _{F'}\ne \mu _{F''}\), F can be distinguished from \(F''\), but not from \(F'\), on the basis of the output mean \(\mu _P\) (Fig. 2, center). However, owing to the different autocovariance functions, F can be distinguished from \(F'\) on the basis of the output variance \(\sigma _{PP}\). Interestingly, the output variance profiles of F and \(F''\) are quite similar, a sign that the differences between F and \(F''\) in mean and autocovariance compensate each other in this case. This is possible because the output variance depends not only on the autocovariance but also on the mean of the promoter activity process.
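The setup of Example 2 is small enough to check by direct stochastic simulation. The following is a minimal pure-Python Gillespie sketch of the coupled binary chain (R, F) with the rates quoted above; it is only an illustration, since the paper's simulations used StochKit with \(10^5\) sample paths, whereas here a single long stationary path is used to estimate \(\mu _F\approx 0.5\) by time averaging.

```python
import random

def simulate(T=20000.0, seed=1):
    """Gillespie simulation of the coupled chain (R, F) of Example 2.

    R switches on at rate 0.2217 and off at rate 0.1; F activates at rate
    lambda_+(R) = 1 * R and deactivates at rate lambda_- = 0.5. Returns the
    time average of F over [0, T] as an estimate of the stationary mean mu_F.
    """
    rng = random.Random(seed)
    k_on_R, k_off_R = 0.2217, 0.1
    lam_minus = 0.5
    R, F = 0, 0
    t, time_on = 0.0, 0.0
    while t < T:
        rate_R = k_on_R if R == 0 else k_off_R       # propensity of next R switch
        rate_F = (1.0 * R) if F == 0 else lam_minus  # propensity of next F switch
        total = rate_R + rate_F                      # always > 0 since rate_R > 0
        tau = min(rng.expovariate(total), T - t)     # waiting time, truncated at T
        time_on += tau * F                           # accumulate time spent with F = 1
        t += tau
        if t >= T:
            break
        if rng.random() < rate_R / total:            # choose which component switches
            R = 1 - R
        else:
            F = 1 - F
    return time_on / T
```

Note also that the average activation rate quoted in the text is recovered analytically from the stationary marginal of R: \(\mathbb {E}[\lambda _+(R)] = 1\cdot 0.2217/(0.2217+0.1)\approx 0.6892\).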

Finally, in the light of the linearity of (20), joint estimation of \(\mu _F\) and \(\rho _F\) from possibly noisy and sampled measurements of \(\mu _P\) and \(\sigma _{PP}\) can be cast as a linear inversion problem. Regularized solutions for both the stationary and the nonstationary case may be developed in accordance with the vast literature on the subject (see e.g. [4] and references therein). Note that, because \(\mu _F\) can be reconstructed from the mean \(\mu _P\) alone, the problem may also be reduced to that of estimating \(\rho _F\) from \(\sigma _{PP}-L_2\mu _F\) via (regularized) inversion of H.
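As a sketch of such a regularized inversion, the snippet below builds a discretized convolution operator (an assumed exponential kernel standing in for the integral operator H), generates noisy synthetic observations of a smooth input profile, and recovers the input by Tikhonov regularization. Grid, kernel, noise level, and regularization weight are all illustrative assumptions, not values from the paper.

```python
import numpy as np

# Tikhonov-regularized linear inversion: given noisy y = H x + noise,
# estimate x_hat = argmin ||H x - y||^2 + gamma ||x||^2.
rng = np.random.default_rng(0)
dt = 0.1
t = np.arange(0.0, 10.0, dt)
n = t.size

kernel = np.exp(-t)  # assumed impulse response of the forward operator
idx = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
H = np.tril(kernel[idx]) * dt                     # discretized operator H

x_true = 0.5 + 0.4 * np.sin(0.8 * t)              # smooth "true" input profile
y = H @ x_true + 0.001 * rng.standard_normal(n)   # noisy, sampled observation

gamma = 1e-4                                      # regularization weight
x_hat = np.linalg.solve(H.T @ H + gamma * np.eye(n), H.T @ y)

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The closed-form normal-equation solve is adequate at this scale; for larger grids or nonstationary operators, the regularization literature cited in [4] offers numerically better-behaved alternatives (e.g. SVD-based filtering).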

In summary, we have analyzed the relationship between second-order promoter activity statistics and the mean and variance profiles of the reporter protein P in the case of promoter processes with randomly regulated rates. In particular, we have developed explicit relationships between the autocovariance function of F and the readout variance profile of P, showing that this integral relationship is essentially linear. This provides the basis for a full spectral analysis of the observability of \(\rho _F\) and its linear reconstruction from reporter protein mean and variance statistics. We also illustrated the relevance of our results by investigating the distinguishability of a random-rate process and relevant fixed-rate processes in an example.

5 Conclusions

We have studied the relationships between second-order statistics of random promoter activity and the mean and variance profiles of gene expression reporter proteins typically observed in biological experiments. For both fixed and randomly regulated (thus also possibly time-varying) switching rates, we developed explicit mathematical formulas showing that these relationships are linear, and provided first results about the observability of the promoter process statistics from gene reporter data. Based on analytic considerations as well as on example case studies, we showed when and how analysis of second-order moments allows for discrimination of different promoter activation statistics.

This work provides the basis for an extensive observability analysis of promoter processes from gene reporter data at a single-cell level, and for the development of promoter statistics reconstruction algorithms that are fully nonparametric, i.e. independent of a priori knowledge about the promoter activity laws. Our results show that both observability and estimation can be framed in the well-studied context of linear operators. Subsequent work will therefore focus on the application of the relevant spectral analysis and regularized linear inversion techniques. On this basis, we will then address a key challenge of this research effort, namely the identification of, and discrimination among, alternative promoter regulatory mechanisms on the basis of the reconstructed promoter activation statistics and data from candidate regulators.

References

  1. Bowsher, C.G., Voliotis, M., Swain, P.S.: The fidelity of dynamic signaling by noisy biomolecular networks. PLoS Comput. Biol. 9(3), e1002965 (2013)
  2. Cinquemani, E.: Reconstruction of promoter activity statistics from reporter protein population snapshot data. In: 54th IEEE Conference on Decision and Control (CDC), pp. 1471–1476, December 2015
  3. Cinquemani, E.: Reconstructing statistics of promoter switching from reporter protein population snapshot data. In: Abate, A., et al. (eds.) HSB 2015. LNCS, vol. 9271, pp. 3–19. Springer, Heidelberg (2015). doi:10.1007/978-3-319-26916-0_1
  4. De Nicolao, G., Sparacino, G., Cobelli, C.: Nonparametric input estimation in physiological systems: problems, methods, and case studies. Automatica 33(5), 851–870 (1997)
  5. Friedman, N., Cai, L., Xie, X.S.: Linking stochastic dynamics to population distribution: an analytical framework of gene expression. Phys. Rev. Lett. 97, 168302 (2006)
  6. Hasenauer, J., Waldherr, S., Doszczak, M., Radde, N., Scheurich, P., Allgöwer, F.: Identification of models of heterogeneous cell populations from population snapshot data. BMC Bioinform. 12(1), 125 (2011)
  7. Hasenauer, J., Wolf, V., Kazeroonian, A., Theis, F.J.: Method of conditional moments (MCM) for the chemical master equation. J. Math. Biol. 69(3), 687–735 (2014)
  8. Hespanha, J.: Modelling and analysis of stochastic hybrid systems. IEE Proc. Control Theor. Appl. 153(5), 520–535 (2006)
  9. de Jong, H., Ranquet, C., Ropers, D., Pinel, C., Geiselmann, J.: Experimental and computational validation of models of fluorescent and luminescent reporter genes in bacteria. BMC Syst. Biol. 4(1), 55 (2010)
  10. Kaern, M., Elston, T.C., Blake, W.J., Collins, J.J.: Stochasticity in gene expression: from theories to phenotypes. Nat. Rev. Genet. 6, 451–464 (2005)
  11. Koopmans, L.H.: The Spectral Analysis of Time Series. Probability and Mathematical Statistics. Academic Press, San Diego (1995)
  12. Lindquist, A., Picci, G.: Linear Stochastic Systems: A Geometric Approach to Modeling, Estimation and Identification. Springer, Heidelberg (2015)
  13. Munsky, B., Trinh, B., Khammash, M.: Listening to the noise: random fluctuations reveal gene network parameters. Mol. Syst. Biol. 5, 318 (2009)
  14. Neuert, G., Munsky, B., Tan, R., Teytelman, L., Khammash, M., van Oudenaarden, A.: Systematic identification of signal-activated stochastic gene regulation. Science 339(6119), 584–587 (2013)
  15. Papoulis, A.: Probability, Random Variables, and Stochastic Processes. McGraw-Hill Series in Electrical Engineering. McGraw-Hill, New York (1991)
  16. Paulsson, J.: Models of stochastic gene expression. Phys. Life Rev. 2(2), 157–175 (2005)
  17. Sanft, K.R., Wu, S., Roh, M., Fu, J., Lim, R.K., Petzold, L.R.: StochKit2: software for discrete stochastic simulation of biochemical systems with events. Bioinformatics 27(17), 2457–2458 (2011)
  18. Stefan, D., Pinel, C., Pinhal, S., Cinquemani, E., Geiselmann, J., de Jong, H.: Inference of quantitative models of bacterial promoters from time-series reporter gene data. PLoS Comput. Biol. 11(1), e1004028 (2015)
  19. Zechner, C., Ruess, J., Krenn, P., Pelet, S., Peter, M., Lygeros, J., Koeppl, H.: Moment-based inference predicts bimodality in transient gene expression. PNAS 109(21), 8340–8345 (2012)
  20. Zechner, C., Unger, M., Pelet, S., Peter, M., Koeppl, H.: Scalable inference of heterogeneous reaction kinetics from pooled single-cell recordings. Nat. Methods 11, 197–202 (2014)
  21. Zulkower, V., Page, M., Ropers, D., Geiselmann, J., de Jong, H.: Robust reconstruction of gene expression profiles from reporter gene data using linear inversion. Bioinformatics 31(12), i71–i79 (2015)

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. Inria Grenoble – Rhône-Alpes, St. Ismier Cedex, France
