1 Introduction

A major long-term goal in biology is to understand how biological function is encoded within the sequences of DNA, RNA, and protein. The canonical success story in this effort is the genetic code: given an arbitrary sequence of messenger RNA, the genetic code allows us to predict with near certainty what peptide sequence will result. There are many other biological codes we would like to learn as well. How does the DNA sequence of a promoter or enhancer encode transcriptional regulatory programs? How does the sequence of pre-mRNA govern which exons are kept and which are removed from the final spliced mRNA? How does the peptide sequence of an antibody govern how strongly it binds to target antigens?

A major difference between the genetic code and these other codes is that while the former is qualitative in nature, the latter are governed by sequence–function relationships that are inherently quantitative. Quantitative sequence–function relationshipsFootnote 1 describe any function that maps the sequence of a biological heteropolymer to a biologically relevant activity (Fig. 1a). Perhaps the simplest example of such a relationship is how the affinity of a transcription factor protein for its DNA binding site depends on the DNA sequence of that site (Fig. 1b). Such relationships are a key component of the more complicated relationship between the DNA sequence of a promoter or enhancer (which typically binds multiple proteins) and the resulting rate of mRNA transcription (Fig. 1c). In both of these cases, the activities of interest (affinity or transcription rate) can vary over orders of magnitude and yet still be finely tuned by adjusting the corresponding sequence (binding site or promoter/enhancer). Similarly, other sequence–function relationships, like the inclusion of exons during mRNA splicing or the affinity of a protein for its ligand, are fundamentally quantitative.

The study of quantitative sequence–function relationships presents an exciting opportunity for the concepts and methods of statistical physics to shed light on biological systems. There is a natural analogy between biological sequences and the microstates of physical systems, as well as between biological activities and physical Hamiltonians. Yet we currently lack answers to basic questions a statistical physicist might ask, such as “what is the density of states?” or “is a relationship convex or glassy?” The answers to such questions may well have important consequences for diverse fields including biochemistry, systems biology, immunology, and evolution.

Fig. 1

Sequence–function relationships in biology. a A sequence–function relationship maps a biological sequence (blue bar) to a biologically relevant activity (yellow star). b One of the simplest sequence–function relationships is how the affinity (star) of a transcription factor protein (magenta) for its DNA binding site depends on the sequence of that site (blue). c A more complicated sequence–function relationship describes how the rate of mRNA transcription depends on the DNA sequence of a gene’s promoter region. At the lac promoter of E. coli, this transcription rate (star) depends on how strongly both the transcription factor CRP (purple) and the RNA polymerase holoenzyme (RNAP; orange) bind their respective sites within the promoter region (blue)

Experimental methods for measuring sequence–function relationships have improved dramatically in recent years. In the mid 2000s, multiple “high-throughput” methods for measuring the DNA sequence specificity of transcription factors were developed; these methods include protein binding microarrays (PBMs) [2, 3], Escherichia coli one-hybrid technology (E1H) [4], and microfluidic platforms [5]. The subsequent development and dissemination of ultra-high-throughput DNA sequencing technologies then led, starting in 2009, to the creation of a number of “massively parallel” experimental techniques for probing a wide range of sequence–function relationships (Table 1). These massively parallel assays can readily measure the functional activity of \(10^3\) to \(10^8\) sequences in a single experiment by coupling standard bench-top techniques to ultra-high-throughput DNA sequencing.

Massively parallel experiments are very unlike conventional experiments in physics: they are typically very noisy and rarely provide direct readouts of the quantities that one cares about. Moreover, the noise characteristics of these measurements are difficult to accurately model. Indeed, such noise generally exhibits substantial day-to-day variability. Although standard inference methods require an explicit model of experimental noise, it is still possible to precisely learn quantitative sequence–function relationships from massively parallel data even when noise characteristics are unknown [27, 28].

Table 1 Massively parallel experiments used for studying various sequence–function relationships

The ability to fit parametric models to these data reflects subtle but important distinctions between two objective functions used for statistical inference: (i) likelihood, which requires a priori knowledge of the experimental noise function and (ii) mutual information [29], a quantity based on the concept of entropy, which does not require a noise function. In contrast to the conventional wisdom that more experimental measurements will improve the model inference task, the standard maximum likelihood approach will typically never learn the right model, even in the infinite data limit, if one uses an imperfect model of experimental noise. Model inference based on mutual information does not suffer from this ailment.

Mutual-information-based inference is unable to pin down the values of model parameters along certain directions in parameter space known as “diffeomorphic modes” [28]. This inability is not a shortcoming of mutual information, but rather reflects a fundamental distinction between how diffeomorphic and nondiffeomorphic directions in parameter space are constrained by data. Analogous to the emergence of Goldstone modes in particle physics due to a specific yet arbitrary choice of phase, diffeomorphic modes arise from a somewhat arbitrary choice of the sequence-dependent activity that one wishes to model. Likelihood, in contrast to mutual information, is oblivious to the distinction between diffeomorphic and nondiffeomorphic modes.

We begin this paper by briefly reviewing a variety of massively parallel assays for probing quantitative sequence–function relationships. We then turn to the problem of learning parametric models of these relationships from the data that these experiments generate. After reviewing recent work on this problem [28], we extend this work in three ways. First, we show that “diffeomorphic modes” of the parametric activity model that one wishes to learn are “dual” to certain transformations of the corresponding model of experimental noise (the “noise function”). This duality reveals a symmetry of the inference problem, thereby establishing a close analogy with Goldstone modes. Next we compute and compare the Hessians of likelihood and mutual information. This comparison suggests an additional analogy between this inference problem and concepts in fluid mechanics. Finally, we work through an analytically tractable model of a massively parallel experiment of protein–DNA binding. This example explicitly illustrates the differences between likelihood- and mutual-information-based approaches to inference, as well as the emergence of diffeomorphic modes.

It should be noted that the inference of receptive fields in sensory neuroscience is another area of biology in which mutual information has proved useful as an objective function, and that work in this area has also provided important insights into basic aspects of machine learning [30–34]. Indeed, the problem of learning quantitative sequence–function relationships in molecular biology is very similar to the problem of learning receptive fields in neuroscience [28]. The discussion of this problem in the neuroscience context, however, has largely avoided in-depth analyses of how mutual information relates to likelihood, as well as of how diffeomorphic modes emerge.

Fig. 2

Overview of massively parallel experiments for studying quantitative sequence–function relationships. a The input to each experiment is a library of different sequences that one wishes to test. The output is one or more bins of sequences; each sequence in each bin is randomly selected from the library with a weight that depends on a measurement of that sequence’s activity (star). b The resulting data set consists of a list of (non-unique) sequences, each sequence assigned to either the input library or one of the output bins. c Illustration of experimental methods for measuring the sequence-dependent binding energy of purified transcription factor proteins. The input library typically consists of random DNA flanked by constant sequence. This library DNA is mixed with the protein of interest and binding is allowed to come to equilibrium. DNA bound by protein is then separated from unbound DNA, e.g. by running complexes on a gel (shown), then sequenced along with a sample from the input library. d Sort-Seq [12] is a massively parallel experiment that uses a library of mutagenized sequences to probe the mechanisms of transcriptional regulation employed by a specific wild type promoter of interest. Mutant promoters are cloned upstream of the GFP gene, and E. coli cells harboring these expression constructs are sorted into bins using FACS. The mutant promoters in each bin, as well as promoters from the input library, are then sequenced

2 Massively Parallel Experiments Probing Sequence–Function Relationships

All of the massively parallel experiments in Table 1 share a common structure (Fig. 2a). The first step in each experiment is to generate a large set of (roughly \(10^3\) to \(10^8\)) different sequences to measure. This set of sequences is called the “library.” Multiple different types of libraries can be used depending on the application. One then performs an experiment that takes this library as input, and as output provides a set of one or more “bins” of sequences. Each output bin contains sequences selected from the library with a weight that depends on the measured activity of that sequence. Finally, a sample of sequences from each of the output bins, as well as from the input library, is read out using ultra-high-throughput DNA sequencing. The resulting data thus consists of a long list of (typically non-unique) DNA sequences, each assigned to a corresponding bin (Fig. 2b). It is from these data that we wish to learn quantitative models of sequence–function relationships.

Fig. 3

The lac promoter region studied in [12]. a Sort-Seq was used to dissect a 75 bp region of the E. coli lac promoter using a library consisting of wild type sequences mutagenized at 12 % per nucleotide, i.e., each library sequence had nine mutations on average. b The resulting data were used to learn a quantitative sequence–function relationship, the mathematical form of which reflected an explicit biophysical model of transcriptional regulation. This model included two “energy matrices” describing the sequence-dependent binding energy of CRP (Q) and RNAP (P) to their respective sites. It also included a value for the interaction energy \(\gamma \) between these two proteins

Some of the earliest massively parallel experiments were designed to measure the specificity of purified transcription factors for their DNA binding sites [6–10] (Fig. 2c). The library used in such studies consists of a fixed-length region of random DNA flanked by constant sequences used for PCR amplification. This library is mixed with the transcription factor of interest, after which protein-bound DNA is separated from unbound DNA, e.g., by running the protein–DNA mixture on a gel. Protein-bound DNA is then sequenced, along with the input library.

Using a library of random DNA to assay protein–DNA binding has the advantage that the same library can be used to study each protein. This is particularly useful when performing assays on many different proteins at once (e.g., [8, 35]). On the other hand, only a very small fraction of library sequences will be specifically bound by the protein of interest. Moreover, because proteins typically bind DNA in a non-specific manner, such experiments are often performed serially in order to achieve substantial enrichment.Footnote 2

The first massively parallel experiment to probe how multi-protein–DNA complexes regulate transcription in living cells was Sort-Seq [12] (Fig. 2d). The sequence library used in this experiment was generated by introducing randomly scattered mutations into a “wild type” sequence of interest, specifically, the 75 bp region of the promoter of the lac gene in E. coli depicted in Fig. 3a. A few million of these mutant promoters were cloned upstream of the green fluorescent protein (GFP) gene. Cells carrying these expression constructs were grown under conditions favorable to promoter activity and were then sorted into a small number of bins according to each cell’s measured fluorescence. This partitioning of cells was accomplished using fluorescence-activated cell sorting (FACS) [41], a method that can readily sort \({\sim }10^4\) cells per second. The mutant promoters within each sorted bin as well as within the input library were then sequenced, yielding measurements for \({\sim }2 \times 10^5\) variant promoter sequences. We note that advances in DNA sequencing have since made it possible to accumulate much more data, and it is no longer difficult to measure the activities of \({\sim }10^7\) different sequences in this manner.

Massively parallel experiments using mutagenized sequences provide data about sequence–function relationships within a localized region of sequence space centered on the wild type sequence of interest. Measuring these local relationships can provide a wealth of information about the functional mechanisms of the wild type sequence. For instance, the Sort-Seq data of [12] allowed the inference of an explicit biophysical model for how CRP and RNAP work together to regulate transcription at the lac promoter (Fig. 3b). In particular, the authors used their data to learn quantitative models for the in vivo sequence specificity of both CRP and RNAP. Model fitting also enabled measurement of the protein–protein interaction by which CRP is able to recruit RNAP and up-regulate transcription.

Mutagenized sequences have also been used extensively for “deep mutational scanning” experiments on proteins. In this context, selection experiments on mutagenized proteins allow one to identify protein domains critical for folding and function. A variety of deep mutational scanning experiments are described in [42].

3 Inference Using Likelihood

The inference of quantitative sequence–function relationships from massively parallel experiments can be phrased as follows. Data consists of a large number of sequences \(\left\{ S_n \right\} _{n=1}^N\), each sequence S having a corresponding measurement M. Due to experimental noise, repeated measurements of the same sequence S can yield different values for M. Our experiment therefore has the following probabilistic form:

$$\begin{aligned} S\ \xrightarrow {\ p(M|S)\ }\ M. \end{aligned}$$
(1)

If we assume that the measurements for each sequence are independent, and if we have an explicit parametric form for p(M|S), then we can learn the values of the parameters by maximizing the per-datum log likelihood,

$$\begin{aligned} L = \frac{1}{N} \sum _{n=1}^N \log p(M_n | S_n). \end{aligned}$$
(2)

In what follows we will refer to the quantity L simply as the “likelihood.”

Fig. 4

Schematic illustration of how likelihood \(L(\theta ,\pi )\) depends on the model \(\theta \) and the noise function \(\pi \) in the \(N \rightarrow \infty \) limit. a, b L will typically have a correlated dependence on \(\theta \) and \(\pi \). If \(\pi \) is set equal to the correct noise function \(\pi ^{*}\), then L will be maximized by the correct model \(\theta ^{*}\). However, if \(\pi \) is set to an incorrect noise function \(\pi '\), L will typically attain a maximum at an incorrect \(\theta '\)

In regression problems such as this, one introduces an additional layer of structure. Specifically, we assume the measurement M of each sequence S is a noisy readout of some underlying activity R that is a deterministic function of that sequence. We call the function relating R to S the “activity model” and denote it using \(\theta (S)\). This activity model is ultimately what we want to understand. The specific way the activity R is read out by measurements M is then specified by a conditional probability distribution, \(\pi (M|R)\), which we call the “noise function.”Footnote 3 Our experiment is thus represented by the Markov chain

$$\begin{aligned} S\ \xrightarrow {\ \theta \ }\ R\ \xrightarrow {\ \pi \ }\ M. \end{aligned}$$
(3)

The corresponding likelihood is

$$\begin{aligned} L(\theta ,\pi ) = \frac{1}{N} \sum _{n=1}^N \log \pi (M_n | \theta (S_n)). \end{aligned}$$
(4)

The model we adopt for our experiment therefore has two components: \(\theta \), which describes the sequence–function relationship of interest, and \(\pi \), which we do not really care about.
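To make Eq. (4) concrete, the following sketch evaluates the per-datum log likelihood on a toy data set. The linear activity model and the two-bin logistic noise function used here are illustrative assumptions of this sketch (not choices made in the text), although a noise function of the same logistic form appears later in Eq. (63).

```python
import numpy as np

def activity_model(S, theta):
    """Linear activity model R = S . theta (an illustrative choice for theta(S))."""
    return S @ theta

def log_noise_function(M, R, a, b):
    """log pi(M|R) for a two-bin logistic noise function with
    pi(M=1|R) = exp(a + b R) / (1 + exp(a + b R)) and M in {0, 1}."""
    z = a + b * R
    return M * z - np.logaddexp(0.0, z)

def per_datum_log_likelihood(S, M, theta, a, b):
    """Eq. (4): L(theta, pi) = (1/N) sum_n log pi(M_n | theta(S_n))."""
    R = activity_model(S, theta)
    return np.mean(log_noise_function(M, R, a, b))

# toy usage on simulated data
rng = np.random.default_rng(0)
N, D = 1000, 5
S = rng.normal(size=(N, D))
theta_true = rng.normal(size=D)
M = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + S @ theta_true))))
print(per_datum_log_likelihood(S, M, theta_true, a=0.5, b=1.0))
```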

Standard statistical regression requires that the noise function \(\pi \) be specified up-front. \(\pi \) can be specified either by performing separate calibration experiments or by assuming a functional form based on an educated guess. This can be problematic, however. Consider inference in the large data limit, \(N \rightarrow \infty \), which is illustrated in Fig. 4. Likelihood is determined by both the model \(\theta \) and the noise function \(\pi \) (Fig. 4a). If we know the correct noise function \(\pi ^{*}\) exactly, then maximizing \(L(\theta ,\pi ^{*})\) over \(\theta \) is guaranteed to recover the correct model \(\theta ^{*}\). However, if we assume an incorrect noise function \(\pi '\), maximizing likelihood will typically recover an incorrect model \(\theta '\) (Fig. 4b).

4 Inference Using Mutual Information

Information theory provides an alternative inference approach. Suppose we hypothesize a specific model \(\theta \), which gives predictions R. Denote the true model \(\theta ^{*}\) and the corresponding true activity \(R^{*}\). The dependence between S, M, \(R^{*}\), and R will then form a Markov chain,

$$\begin{aligned} R\ \xleftarrow {\ \theta \ }\ S\ \xrightarrow {\ \theta ^{*}\ }\ R^{*}\ \xrightarrow {\ \pi ^{*}\ }\ M. \end{aligned}$$
(5)

Because M depends on S, and therefore on R, only through the value of \(R^{*}\), any dependence measure \(\mathcal {D}\) that satisfies the data processing inequality (DPI) [29] must satisfy

$$\begin{aligned} \mathcal {D}[R;M] \le \mathcal {D}[R^{*};M]. \end{aligned}$$
(6)

Therefore, in the set of possible models \(\theta \), the true model is guaranteed to globally maximize the objective function \(\mathcal {D}(\theta ) \equiv \mathcal {D}[R;M]\).

One particularly relevant dependence measure that satisfies DPI is mutual information, a quantity that plays a fundamental role in information theory [29].Footnote 4 For massively parallel experiments such as those in Fig. 2, R is continuous and M is discrete. In these cases, mutual information is given by

$$\begin{aligned} I(\theta ) = I[R;M] = \sum _M \int dR\, p(R,M) \log \frac{p(R,M)}{p(R)p(M)}, \end{aligned}$$
(7)

where p(R,M) is the joint distribution of activity predictions and measurements resulting from the model \(\theta \). If one is able to estimate p(R,M) from a finite sample of data, mutual information can be used as an objective function for determining \(\theta \) without assuming any noise function \(\pi \).
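In practice \(I(\theta )\) must be estimated from a finite sample \(\left\{ R_n, M_n \right\} \). A minimal plug-in approach, sketched below, is to discretize R and apply the discrete analog of Eq. (7) to the resulting joint histogram; this naive estimator is an assumption of this sketch and is biased at small sample sizes, so more careful estimators are typically preferable in practice.

```python
import numpy as np

def mutual_information_histogram(R, M, n_bins=30):
    """Naive plug-in estimate of I[R; M] (Eq. 7, in nats) for continuous
    predictions R and discrete measurements M.

    R is discretized into n_bins equal-count bins, and the discrete analog of
    Eq. (7) is applied to the joint histogram over (binned R, M)."""
    edges = np.quantile(R, np.linspace(0.0, 1.0, n_bins + 1))
    r_idx = np.clip(np.searchsorted(edges, R, side="right") - 1, 0, n_bins - 1)
    m_vals, m_idx = np.unique(M, return_inverse=True)

    joint = np.zeros((n_bins, len(m_vals)))
    np.add.at(joint, (r_idx, m_idx), 1.0)
    joint /= joint.sum()

    p_r = joint.sum(axis=1, keepdims=True)   # marginal over R bins
    p_m = joint.sum(axis=0, keepdims=True)   # marginal over M
    nz = joint > 0
    return np.sum(joint[nz] * np.log(joint[nz] / (p_r @ p_m)[nz]))
```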

It should be noted that there are multiple dependence measures \(\mathcal {D}\) that satisfy DPI. One might wonder whether maximizing multiple different dependence measures would improve on the optimization of mutual information alone. The answer is not so simple. In [28] it was shown that if the correct model \(\theta ^{*}\) is within the space of models under consideration, then, in the large data limit, maximizing mutual information is equivalent to simultaneously maximizing every dependence measure that satisfies DPI. On the other hand, one rarely has any assurance that the correct model \(\theta ^{*}\) is within the space of parameterized models one is considering. In this case, considering different DPI-satisfying measures might provide a test for whether \(\theta ^{*}\) is noticeably outside the space of parameterized models. To our knowledge, this potential approach to the model selection problem has yet to be demonstrated.

5 Relationship Between Likelihood and Mutual Information

A third inference approach is to admit that we do not know the noise function \(\pi \) a priori, and to fit both \(\theta \) and \(\pi \) simultaneously by maximizing \(L(\theta , \pi )\) over this pair. It is easy to see why this makes sense: the division of the inference problem into first measuring \(\pi \), then learning \(\theta \) using that inferred \(\pi \), is somewhat artificial. The process that maps S to M is determined by both \(\theta \) and \(\pi \) and thus, from a probabilistic point of view, it makes sense to maximize likelihood over both of these quantities simultaneously.

We now show that, in the large N limit, maximizing likelihood over both \(\theta \) and \(\pi \) is equivalent to maximizing the mutual information between model predictions and measurements. Here we follow the argument given in [28]. In the large N limit, likelihood can be written

$$\begin{aligned} L(\theta ,\pi )= & {} \sum _M \int dR\, p(R,M) \log \pi (M|R) \end{aligned}$$
(8)
$$\begin{aligned}= & {} I(\theta ) - D(\theta ,\pi ) - H[M], \end{aligned}$$
(9)

where

$$\begin{aligned} D(\theta , \pi ) = \sum _M \int dR\, p(R,M) \log \frac{p(M|R)}{\pi (M|R)}, \end{aligned}$$
(10)

is the Kullback–Leibler divergence between the assumed noise function \(\pi \) and the observed noise function p(M | R), and \(H[M] = - \sum _M p(M) \log p(M)\) is the entropy of the measurements, which does not depend on \(\theta \). To maximize \(L(\theta , \pi )\) it therefore suffices to maximize \(I(\theta )\) over \(\theta \) alone, then to set the noise function \(\pi (M|R)\) equal to the empirical noise function p(M|R), which causes \(D(\theta ,\pi )\) to vanish.
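The decomposition in Eq. (9) follows from splitting the logarithm in Eq. (8):

$$\begin{aligned} \left\langle \log \pi (M|R) \right\rangle = \left\langle \log \frac{p(M|R)}{p(M)} \right\rangle - \left\langle \log \frac{p(M|R)}{\pi (M|R)} \right\rangle + \left\langle \log p(M) \right\rangle = I(\theta ) - D(\theta ,\pi ) - H[M], \end{aligned}$$

where each average is taken over p(R, M) and we have used \(p(M|R)/p(M) = p(R,M)/[p(R) p(M)]\).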

Thus, when we are uncertain about the noise function \(\pi \), we need not despair. We can, if we like, simply learn \(\pi \) at the same time that we learn \(\theta \). We need not explicitly model \(\pi \) in order to do this; it suffices instead to maximize the mutual information \(I(\theta )\) over \(\theta \) alone.

The connection between mutual information and likelihood can further be seen in a quantity called the “noise-averaged” likelihood. This quantity was first described for the analysis of microarray data [27]; see also [28]. The central idea is to put an explicit prior on the space of possible noise functions, then compute likelihood after marginalizing over these noise functions. Explicitly, the per-datum log noise-averaged likelihood \(L_\mathrm{na}(\theta )\) is related to \(L(\theta ,\pi )\) via

$$\begin{aligned} e^{N L_\mathrm{na}(\theta )}= & {} \int d\pi \, p(\pi )\, e^{N L(\theta , \pi )}. \end{aligned}$$
(11)

We will refer to \(L_\mathrm{na}\) simply as “noise-averaged likelihood” in what follows.

Under fairly general conditions, one finds that noise-averaged likelihood is related to mutual information via

$$\begin{aligned} L_\mathrm{na}(\theta )= & {} I(\theta ) - \Delta (\theta ) - H[M]. \end{aligned}$$
(12)

Here, the effect of the noise function prior \(p(\pi )\) is absorbed entirely by the term \(\Delta (\theta )\). Under weak assumptions, \(\Delta (\theta )\) vanishes in the \(N \rightarrow \infty \) limit and thus \(p(\pi )\) becomes irrelevant for the inference problem [27, 28].
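As a numerical illustration of Eq. (11), the following sketch approximates \(L_\mathrm{na}(\theta )\) by summing over a grid of noise-function parameters. The two-parameter logistic noise-function family (the same form as Eq. (63)) and the discrete grid prior are assumptions made for this sketch only.

```python
import numpy as np
from scipy.special import logsumexp

def log_noise(M, R, a, b):
    """log pi(M|R) for the two-parameter logistic noise function of Eq. (63)."""
    z = a + b * R
    return M * z - np.logaddexp(0.0, z)

def noise_averaged_likelihood(R, M, a_grid, b_grid, log_prior):
    """Eq. (11), exp(N L_na) = integral dpi p(pi) exp(N L(theta, pi)),
    approximated by a sum over a grid of noise-function parameters (a, b).

    log_prior[i, j] is the log prior probability assigned to (a_grid[i],
    b_grid[j]); theta enters only through the model predictions R. Returns
    the per-datum log noise-averaged likelihood L_na(theta)."""
    N = len(M)
    terms = np.empty((len(a_grid), len(b_grid)))
    for i, a in enumerate(a_grid):
        for j, b in enumerate(b_grid):
            terms[i, j] = log_prior[i, j] + N * np.mean(log_noise(M, R, a, b))
    return logsumexp(terms) / N

# usage (flat prior over a 21 x 21 grid of noise-function parameters):
#   a_grid = np.linspace(-8.0, -4.0, 21); b_grid = np.linspace(0.5, 1.5, 21)
#   log_prior = np.full((21, 21), -np.log(21 * 21))
#   L_na = noise_averaged_likelihood(R, M, a_grid, b_grid, log_prior)
```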

6 Diffeomorphic Modes

Mutual information has a mathematical property that is important to account for when using it as an objective function: the mutual information between any two variables is unchanged by an invertible transformation of either variable. So if a change in model parameters, \(\theta \rightarrow \theta '\), produces a change in model predictions, \(R \rightarrow R'\), that preserves the rank order of these predictions (and is therefore invertible on the observed values), then

$$\begin{aligned} I(\theta ) = I[M;R] = I[M;R'] = I(\theta '), \end{aligned}$$
(13)

and \(\theta \) and \(\theta '\) are judged to be equally valid.

By using mutual information as an objective function, we are therefore unable to constrain any parameters of \(\theta \) that, if changed, produce invertible transformations of model predictions. Such parameters are called “diffeomorphic parameters” or “diffeomorphic modes” [28]. The distinction between diffeomorphic modes and nondiffeomorphic modes is illustrated in Fig. 5.
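This invariance is easy to check numerically. The sketch below estimates I[R;M] with the same quantile-binned plug-in estimator sketched in Sect. 4 (repeated here so that the snippet is self-contained), before and after a strictly increasing, hence rank-preserving, transformation of the predictions; because the quantile-based bin assignments are unchanged by such a map, the two estimates agree.

```python
import numpy as np

def mi_hist(R, M, n_bins=30):
    """Quantile-binned plug-in estimate of I[R; M] in nats (as in Sect. 4)."""
    edges = np.quantile(R, np.linspace(0.0, 1.0, n_bins + 1))
    r = np.clip(np.searchsorted(edges, R, side="right") - 1, 0, n_bins - 1)
    _, m = np.unique(M, return_inverse=True)
    joint = np.zeros((n_bins, m.max() + 1))
    np.add.at(joint, (r, m), 1.0)
    joint /= joint.sum()
    p_r, p_m = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log(joint[nz] / (p_r @ p_m)[nz]))

rng = np.random.default_rng(1)
R = rng.normal(size=5000)                              # model predictions
M = rng.binomial(1, 1.0 / (1.0 + np.exp(-2.0 * R)))    # noisy binary readout
R_prime = np.exp(3.0 * R + 1.0)                        # rank-preserving map of R
print(mi_hist(R, M), mi_hist(R_prime, M))              # estimates agree (Eq. 13)
```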

Fig. 5

Illustration of diffeomorphic and nondiffeomorphic modes. a A diffeomorphic mode \(v^\mathrm{dif}\) at a point \(\theta \) in parameter space is a vector that will (regardless of the underlying data) be tangent to a level curve of \(I(\theta )\). All other vectors (e.g., \(v^\mathrm{non}\)) correspond to nondiffeomorphic modes. b Moving \(\theta \) along a nondiffeomorphic mode results in a sort of “diffusion” in which the R values assigned to different sequences change rank order. Here, the probability distribution p(R|M) is illustrated (for fixed M) in gray. The motion of individual R values upon such a change in \(\theta \) is indicated by arrows. c Changing \(\theta \) along a diffeomorphic mode, however, results in a “flow” of R values that maintains their rank order

6.1 Criterion for Diffeomorphic Modes

Following [28], we now derive a criterion that can be used to identify all of the diffeomorphic modes of a model \(\theta \).Footnote 5 Consider an infinitesimal change in model parameters \(\theta \rightarrow \theta + d \theta \), where the components of \(d\theta \) are specified by

$$\begin{aligned} d \theta _i = \epsilon v_i \end{aligned}$$
(14)

for small \(\epsilon \) and for some vector \(v_i\) in \(\theta \)-space. This change in \(\theta \) will produce a corresponding change in model predictions \(R \rightarrow R + dR\), where

$$\begin{aligned} dR = \epsilon \sum _i v_i \frac{\partial R}{\partial \theta _i}. \end{aligned}$$
(15)

In general, the derivative \(\partial R / \partial \theta _i\) can have arbitrary dependence on the underlying sequence S. This transformation will preserve the rank order of R-values only if dR is the same for all sequences having the same value of R. The change dR must therefore be a function of R and have no other dependence on S. A diffeomorphic mode is a vector field \(v^\mathrm{dif}(\theta )\) that has this property at all points in parameter space. Specifically, a vector field \(v^\mathrm{dif}(\theta )\) is a diffeomorphic mode if and only if there is a function \(h(R,\theta )\) such that

$$\begin{aligned} \sum _i v^\mathrm{dif}_i(\theta ) \frac{\partial R}{\partial \theta _i} = h(R,\theta ). \end{aligned}$$
(16)

6.2 Diffeomorphic Modes of Linear Models

As a simple example, consider a situation in which each sequence S is a D-dimensional vector and R is an affine function of S, i.e.

$$\begin{aligned} R = \theta _0 + \sum _{i=1}^D \theta _i S_i, \end{aligned}$$
(17)

for model parameters \(\theta = \left\{ \theta _0, \theta _1, \ldots , \theta _D \right\} \). The criterion in Eq. (16) then gives

$$\begin{aligned} v^\mathrm{dif}_0(\theta ) + \sum _{i=1}^D v^\mathrm{dif}_i(\theta ) S_i = h(R,\theta ). \end{aligned}$$
(18)

Because the left hand side is an affine function of S, and R is itself an affine function of S, the function \(h(R,\theta )\) must be an affine function of R. Thus, h must have the form

$$\begin{aligned} h(R,\theta ) = a(\theta ) + b(\theta ) R \end{aligned}$$
(19)

for some functions \(a(\theta )\) and \(b(\theta )\). The corresponding diffeomorphic mode is

$$\begin{aligned} v^\mathrm{dif}_i(\theta ) = \left\{ \begin{array}{ll} a(\theta ) &{}\quad i = 0 \\ b(\theta ) \theta _i &{}\quad i = 1, 2, \ldots , D \end{array} \right. , \end{aligned}$$
(20)

which has two degrees of freedom. Specifically, the a component of \(v^\mathrm{dif}\) corresponds to adding a constant to R while the b component corresponds to multiplying R by a constant.

Note that if we had instead chosen \(R = \sum _{i=1}^D \theta _i S_i\), i.e. left out the constant component \(\theta _0\), then there would be only one diffeomorphic mode, corresponding to multiplication of R by a constant. This fact will be used when we analyze the Gaussian selection model in Sect. 8.

6.3 Diffeomorphic Modes of a Biophysical Model of Transcriptional Regulation

Diffeomorphic modes can become less trivial in more complicated situations. Consider the biophysical model of transcriptional regulation by the E. coli lac promoter (Fig. 3). This model was fit to Sort-Seq data in [12]. The form of this model is as follows. Let S denote a \(4 \times D\) matrix representing a DNA sequence of length D and having elements

$$\begin{aligned} S_{bl} = \left\{ \begin{array}{ll} 1 &{} \quad \text {if base }b\text { occurs at position }l \\ 0 &{}\quad \text {otherwise} \end{array} \right. \end{aligned}$$
(21)

where \(b \in \left\{ A,C,G,T \right\} \) and \(l = 1, 2, \ldots , D\). The binding energy Q of CRP to DNA was modeled in [12] as an “energy matrix”: each position in the DNA sequence was assumed to contribute additively to the overall energy. Specifically,

$$\begin{aligned} Q = \sum _{b,l} \theta _{Q}^{bl} S_{bl} + \theta ^0_Q, \end{aligned}$$
(22)

where \(\theta _Q = \left\{ \theta ^0_Q, \theta ^{bl}_Q \right\} \) are the parameters of this energy matrix. Similarly, the binding energy P of RNAP to DNA was modeled as

$$\begin{aligned} P = \sum _{b,l} \theta _{P}^{bl} S_{bl} + \theta ^0_P. \end{aligned}$$
(23)

Both energies were taken to be in thermal units (\(k_B T\)). The rate of transcription R resulting from these binding energies was assumed to be proportional to the occupancy of RNAP at its binding site. This transcription rate is given by

$$\begin{aligned} R = R_{\max }\, \frac{e^{-P} + e^{-P - Q - \gamma }}{1 + e^{-Q} + e^{-P} + e^{-P - Q - \gamma }}, \end{aligned}$$
(24)

where \(\gamma \) is the interaction energy between CRP and RNAP (again in units of \(k_B T\)) and \(R_{\max }\) is the maximal transcription rate.
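As a concrete rendering of Eqs. (21)–(24), the sketch below computes the predicted transcription rate for a given DNA sequence. This is a minimal sketch with placeholder parameter values; in the actual model each energy matrix covers only its protein's binding site within the 75 bp region, which can be emulated here by setting matrix entries outside that site to zero.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """Eq. (21): 4 x D matrix S with S[b, l] = 1 if base b occurs at position l."""
    S = np.zeros((4, len(seq)))
    for l, base in enumerate(seq):
        S[BASES.index(base), l] = 1.0
    return S

def transcription_rate(seq, theta_Q, theta_Q0, theta_P, theta_P0, gamma, R_max=1.0):
    """Eqs. (22)-(24): thermodynamic model of transcription at the lac promoter.

    theta_Q and theta_P are 4 x D energy matrices (in k_B T) for CRP and RNAP;
    theta_Q0 and theta_P0 are the corresponding constant offsets; gamma is the
    CRP-RNAP interaction energy."""
    S = one_hot(seq)
    Q = np.sum(theta_Q * S) + theta_Q0           # CRP binding energy, Eq. (22)
    P = np.sum(theta_P * S) + theta_P0           # RNAP binding energy, Eq. (23)
    boltz = np.exp(-P) + np.exp(-P - Q - gamma)  # RNAP-occupied states
    Z = 1.0 + np.exp(-Q) + np.exp(-P) + np.exp(-P - Q - gamma)
    return R_max * boltz / Z                     # Eq. (24)

# toy usage with placeholder parameters on a short sequence
rng = np.random.default_rng(0)
seq = "ACGTACGTAC"
print(transcription_rate(seq, rng.normal(size=(4, 10)), 0.0,
                         rng.normal(size=(4, 10)), 0.0, gamma=-3.0))
```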

Because the binding sites for CRP and RNAP do not overlap, one can learn the parameters \(\theta _Q\) and \(\theta _P\) from data separately by independently maximizing I[QM] and I[PM]. Doing this, however, leaves undetermined the overall scale of each energy matrix as well as the chemical potentials \(\theta _P^0\) and \(\theta _Q^0\). The reason is that the energy scale and chemical potential are diffeomorphic modes of energy matrix models and therefore cannot be inferred by maximizing mutual information.

However, if Q and P are inferred together by maximizing I[RM] instead, one is now able to learn both energy matrices with a physically meaningful energy scale. The chemical potential of CRP, \(\theta _Q^0\), is also determined. The only parameters left unspecified are the chemical potential of RNA polymerase, \(\theta _P^0\), and the maximal transcription rate \(R_{\max }\). The reason for this is that in the formula for R in Eq. (24) the energies P and Q combine in a nonlinear way. This nonlinearity eliminates three of the four diffeomorphic modes of P and Q.Footnote 6 See [28] for the derivation of this result.

6.4 Dual Modes of the Noise Function

Diffeomorphic transformations of model parameters can be thought of as being equivalent to certain transformations of the noise function. Consider the transformation of model parameters

$$\begin{aligned} \theta _i \rightarrow \theta _i' = \theta _i + \epsilon v_i, \end{aligned}$$
(25)

where \(\epsilon \) is an infinitesimal number and \(v_i\) is a vector in \(\theta \)-space.Footnote 7 For any sequence S, this transformation induces a transformation of the model prediction

$$\begin{aligned} R \rightarrow R'= & {} R + \epsilon \sum _i v_i \frac{\partial R}{\partial \theta _i }. \end{aligned}$$
(26)

To see the effect this transformation has on likelihood, we rewrite Eq. (4) as,

$$\begin{aligned} L(\theta , \pi ) = \left\langle \log \pi (M|R) \right\rangle _\mathrm{data}, \end{aligned}$$
(27)

where \(\left\langle \cdot \right\rangle _\mathrm{data}\) indicates an average taken over the measurements \(M_n\) and predictions \(R_n\) for all of the sequences \(S_n\) in the data set. The change in likelihood resulting from Eq. (26) is therefore given by

$$\begin{aligned} L(\theta ',\pi ) = L(\theta , \pi ) + \epsilon \left\langle \frac{\partial \log \pi (M|R)}{\partial R}\sum _i \frac{\partial R}{\partial \theta _i } v_i \right\rangle _\mathrm{data}. \end{aligned}$$
(28)

Now suppose that there is a noise function \(\pi '\) that has an equivalent effect on likelihood, i.e.,

$$\begin{aligned} L(\theta ',\pi ) = L(\theta ,\pi ') + O(\epsilon ^2), \end{aligned}$$
(29)

for all possible data sets \(\left\{ S_n, M_n \right\} \). We say that this transformation of the noise function \(\pi \rightarrow \pi '\) is “dual” to the transformation \(\theta \rightarrow \theta '\) of model parameters. The transformed noise function will necessarily have the form

$$\begin{aligned} \log \pi '(M|R) = \log \pi (M|R) + \epsilon \tilde{v}(M,R) \end{aligned}$$
(30)

for some function \(\tilde{v}(M,R)\). To determine \(\tilde{v}\) we consider the transformation of likelihood induced by \(\pi \rightarrow \pi '\):

$$\begin{aligned} L(\theta ,\pi ')= & {} L(\theta ,\pi ) + \epsilon \left\langle \tilde{v}(M,R) \right\rangle _\mathrm{data}. \end{aligned}$$
(31)

Comparing Eqs. (28) and (31), we see that \(\pi \rightarrow \pi '\) will be dual to \(\theta \rightarrow \theta '\) for all possible data sets if and only if

$$\begin{aligned} \frac{\partial \log \pi (M|R)}{\partial R} \sum _i \frac{\partial R}{\partial \theta _i } v_i = \tilde{v}(M,R) \end{aligned}$$
(32)

for all sequences S.

For a general choice of the vector v, no function \(\tilde{v}\) will exist that satisfies Eq. (32). The reason is that \(\partial R/\partial \theta _i\) will typically depend on the sequence S independently of the value of R. In other words, for a fixed value of M and R, the left hand side of Eq. (32) will retain a dependence on S. The right hand side, however, cannot have such a dependence. The converse is also true: for a general choice of the function \(\tilde{v}\), no vector v will exist such that Eq. (32) is satisfied for all sequences. This is evident from the simple fact that v is a finite dimensional vector while \(\tilde{v}\) is a function of the continuous quantity R and therefore has an infinite number of degrees of freedom.

In fact, Eq. (32) will have a solution if and only if

$$\begin{aligned} \sum _i \frac{\partial R}{\partial \theta _i } v^\mathrm{dif}_i = h(R) \end{aligned}$$
(33)

for some function h. Here we have added the superscript “dif” because this is precisely the definition of a diffeomorphic mode given in Eq. (16). In this case, the function \(\tilde{v}^\mathrm{dif}\) dual to this diffeomorphic mode \(v^\mathrm{dif}\) is seen to be

$$\begin{aligned} \tilde{v}^\mathrm{dif}(M,R) = \frac{\partial \log \pi (M|R)}{\partial R} h(R). \end{aligned}$$
(34)

These findings are summarized by the Venn diagram in Fig. 6. A generic transformation of the model parameters \(\theta \) will alter likelihood in a way that cannot be imitated by any change to the noise function \(\pi \). The reverse is also true: most changes to \(\pi \) cannot be imitated by a corresponding change in \(\theta \). However, a subset of transformations of \(\theta \) are equivalent to corresponding dual transformations of \(\pi \). These transformations are precisely the diffeomorphic transformations of \(\theta \). This partial duality between \(\theta \) and \(\pi \) has a simple interpretation: the choice of how we parse an experiment into an activity model \(\theta \) and a noise function \(\pi \) is not unique. The ambiguity in this choice is parameterized by the diffeomorphic modes of \(\theta \) and the dual modes of \(\pi \).
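This duality can be seen very concretely for the two-bin logistic noise function used in the worked example of Sect. 8 (Eq. (63)). Because that noise function depends on R only through the combination bR, rescaling \(\theta \) (its diffeomorphic mode) is compensated exactly, not merely to first order, by rescaling b. The sketch below checks this numerically; the simulated data and parameter values are arbitrary.

```python
import numpy as np

def log_likelihood(S, M, theta, a, b):
    """Eq. (4) with linear activity R = S . theta and the logistic noise
    function pi(M=1|R) = exp(a + b R)/(1 + exp(a + b R)) of Eq. (63)."""
    z = a + b * (S @ theta)
    return np.mean(M * z - np.logaddexp(0.0, z))

rng = np.random.default_rng(2)
S = rng.normal(size=(2000, 4))
theta = rng.normal(size=4)
M = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.3 + 1.2 * (S @ theta)))))

lam = 1.7
# Moving theta along its diffeomorphic mode (the overall scale |theta|) ...
L_model_moved = log_likelihood(S, M, lam * theta, a=0.3, b=1.2)
# ... changes likelihood exactly as much as the dual move of the noise
# function, b -> lam * b, does (for this family the duality is exact).
L_noise_moved = log_likelihood(S, M, theta, a=0.3, b=lam * 1.2)
print(np.isclose(L_model_moved, L_noise_moved))   # True
```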

Fig. 6

Venn diagram illustrating the degrees of freedom of the likelihood \(L(\theta ,\pi )\) considered over all possible data sets \(\left\{ S_n, M_n \right\} \). Altering the model parameters \(\theta \) will typically change \(L(\theta ,\pi )\) in a way that cannot be recapitulated by changes in the noise function \(\pi \). Similarly, changes in \(\pi \) cannot typically be imitated by changes in \(\theta \). However, diffeomorphic transformations of \(\theta \) will affect \(L(\theta ,\pi )\) in the exact same way that dual transformation of \(\pi \) will. The diffeomorphic modes of \(\theta \) and the dual modes of \(\pi \) can therefore be thought of as lying within the intersection of \(\theta \) and \(\pi \)

7 Error Bars from Likelihood, Mutual Information, and Noise-Averaged Likelihood

We now consider the consequences of performing inference using various objective functions at large but finite N. Specifically, we discuss the optimal parameters and corresponding error bars that are found by sampling \(\theta \) from posterior distributions of the form

$$\begin{aligned} p(\theta | \mathrm{data}) \sim e^{N F(\theta )} \end{aligned}$$
(35)

for the following choices of the objective function \(F(\theta )\):

(a) \(F(\theta ) = L(\theta ,\pi ^{*})\) is likelihood computed using the correct noise function \(\pi ^{*}\).

(b) \(F(\theta ) = L(\theta , \pi ')\), where \(\pi '\) differs from \(\pi ^{*}\) by a small but arbitrary error.

(c) \(F(\theta ) = L(\theta , \pi '')\), where \(\pi ''\) differs from \(\pi ^{*}\) by a small amount along a dual mode.

(d) \(F(\theta ) = I(\theta )\) is the mutual information between measurements and model predictions.

(e) \(F(\theta ) = L_\mathrm{na}(\theta )\) is the noise-averaged likelihood.

To streamline notation, we will use \(\left\langle \cdot \right\rangle \) to denote averages computed in multiple different contexts. In each case, the appropriate context will be specified by a subscript. As above \(\left\langle \cdot \right\rangle _\mathrm{data}\) will denote averaging over a specific data set \(\left\{ S_n,M_n \right\} _{n=1}^N\). \(\left\langle \cdot \right\rangle _\mathrm{real}\) will indicate averaging over an infinite number of data set realizations. \(\left\langle \cdot \right\rangle _S\), \(\left\langle \cdot \right\rangle _{S,M}\), \(\left\langle \cdot \right\rangle _{S|R}\), and \(\left\langle \cdot \right\rangle _{S|R,M}\) will respectively denote averages over the distributions p(S), p(SM), p(S|R), and p(S|RM), the empirical distributions obtained in the infinite data limit. \(\left\langle \cdot \right\rangle _\theta \) will indicate an average computed over parameter values \(\theta \) sampled from the posterior distribution \(p(\theta |\mathrm{data})\). Subscripts on \(\mathrm{cov}(\cdot )\) or \(\mathrm{var}(\cdot )\) should be interpreted analogously.

7.1 Likelihood

Consider Eq. (35) with \(F(\theta ) = L(\theta ,\pi ^{*})\) at large but finite N. The posterior distribution \(p(\theta | \mathrm{data})\) will, in general, be maximized at some choice of parameters \(\theta ^o\) that deviates randomly from the correct parameters \(\theta ^{*}\). At large N, \(p(\theta |\mathrm{data})\) will become sharply peaked about \(\theta ^o\) with a peak width governed by the Hessian of likelihood; specifically

$$\begin{aligned} \mathrm{cov}_\theta (\theta _i - \theta ^o_i, \theta _j - \theta ^o_j) = - \frac{H_{ij}^{-1}}{N}, \end{aligned}$$
(36)

where

$$\begin{aligned} H_{ij} = \left. \frac{\partial ^2 L(\theta ,\pi ^{*})}{\partial \theta _i \partial \theta _j} \right| _{\theta ^{*}}, \end{aligned}$$
(37)

is the Hessian of the likelihood. It is also readily shown (see Appendix 1) that this peak width is consistent with the correct parameters \(\theta ^{*}\), in the sense that

$$\begin{aligned} \mathrm{cov}_\mathrm{real}(\theta _i^{*} - \theta _i^o,\theta _j^{*} - \theta _j^o) = \mathrm{cov}_\theta (\theta _i - \theta ^o_i, \theta _j - \theta ^o_j). \end{aligned}$$
(38)
Fig. 7

Posterior distributions on model parameters resulting from various objective functions. Each panel schematically illustrates the posterior distribution \(p(\theta | \mathrm{data})\) (gray shaded area) as it relates to the correct model \(\theta ^{*}\) (dot) along both diffeomorphic (abscissa) and nondiffeomorphic (ordinate) directions in parameter space. a Likelihood with the correct noise function \(\pi ^{*}\) leads to a posterior distribution consistent with \(\theta ^{*}\) in all parameters. b Likelihood with a noise function \(\pi '\) that differs arbitrarily from \(\pi ^{*}\) will, in general, lead to a posterior distribution that is inconsistent with \(\theta ^{*}\) along both diffeomorphic and nondiffeomorphic modes. c Likelihood with a noise function \(\pi ''\) that differs from \(\pi ^{*}\) only along a dual mode \(\tilde{v}^\mathrm{dif}\) leads to a posterior that is inconsistent with \(\theta ^{*}\) only along the diffeomorphic mode \(v^\mathrm{dif}\) (parallel to dashed line), but consistent with \(\theta ^{*}\) in all other directions (perpendicular to dashed line). d Using mutual information gives a posterior that is consistent with \(\theta ^{*}\); this posterior places constraints similar to likelihood along non-diffeomorphic modes but places no constraints whatsoever along diffeomorphic modes. e Using noise-averaged likelihood results in a posterior distribution similar to mutual information but with weak constraints on diffeomorphic modes resulting from the noise function prior \(p(\pi )\)

In Appendix 1 we show that the Hessian of likelihood, Eq. (37), is given by

$$\begin{aligned} H_{ij} = - \int dR\, p(R) J(R) \left. \left\langle \frac{\partial R}{\partial \theta _i} \frac{\partial R}{\partial \theta _j} \right\rangle _{S|R} \right| _{\theta ^{*}}, \end{aligned}$$
(39)

where

$$\begin{aligned} J(R) = \sum _M \pi ^{*}(M|R) \left[ \frac{\partial \log \pi ^{*}(M|R)}{\partial R} \right] ^2 = - \sum _M \pi ^{*}(M|R) \frac{\partial ^2 \log \pi ^{*}(M|R)}{\partial R^2} \end{aligned}$$
(40)

is the Fisher information of the noise function \(\pi ^{*}\). This Fisher information is a nonnegative measure of how sensitive our experiment is in the vicinity of R.Footnote 8 We thus see that, as long as the set of vectors \(\partial R/\partial \theta _i\) spans all directions in parameter space, the Hessian matrix \(H_{ij}\) will be nonsingular. Using \(F(\theta ) = L(\theta ,\pi ^{*})\) will therefore put constraints on all directions in parameter space, and these constraints will shrink with increasing data as \(N^{-1/2}\). This situation is illustrated in Fig. 7a.
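To make Eq. (40) concrete, the sketch below evaluates J(R) by finite differences for the logistic noise function of Eq. (63), and compares the result with the closed form \(b^2 \pi (1|R) \pi (0|R)\) that this particular family happens to have (a standard property of Bernoulli likelihoods, stated here as part of the example rather than derived in the text).

```python
import numpy as np

def fisher_information(R, a, b, dR=1e-4):
    """Eq. (40): J(R) = sum_M pi(M|R) [d log pi(M|R) / dR]^2, evaluated with a
    finite-difference derivative for the logistic noise function of Eq. (63)."""
    def log_pi(M, r):
        z = a + b * r
        return M * z - np.logaddexp(0.0, z)
    J = 0.0
    for M in (0, 1):
        score = (log_pi(M, R + dR) - log_pi(M, R - dR)) / (2.0 * dR)
        J += np.exp(log_pi(M, R)) * score**2
    return J

R, a, b = 0.4, -1.0, 2.5
p1 = 1.0 / (1.0 + np.exp(-(a + b * R)))
# For this family J(R) reduces to b^2 pi(1|R) pi(0|R); the two values agree.
print(fisher_information(R, a, b), b**2 * p1 * (1.0 - p1))
```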

Now consider what happens if instead we use a noise function \(\pi '\) that deviates from \(\pi ^{*}\) in a small but arbitrary way. Specifically, let

$$\begin{aligned} \log \pi '(M|R) = \log \pi ^{*}(M|R) + \epsilon f(M,R) \end{aligned}$$
(41)

for some function f(MR) and small parameter \(\epsilon \). It is readily shown (see Appendix 1) that the maximum likelihood parameters \(\theta '\) will deviate from \(\theta ^{*}\) by an amount

$$\begin{aligned} \left\langle \theta '_i - \theta _i^{*} \right\rangle _\mathrm{real} = - \epsilon \sum _j H^{-1}_{ij} w_j,\quad \mathrm{where}\quad w_j = \left. \left\langle \frac{\partial f}{\partial R} \frac{\partial R}{\partial \theta _j} \right\rangle _S \right| _{\theta ^{*}}. \end{aligned}$$
(42)

This expected deviation does not depend on N and will therefore not shrink to zero in the large N limit. Indeed, for any choice of \(\epsilon > 0\), there will always be an N large enough such that this bias in \(\theta '\) dominates over the uncertainty due to finite sampling.

Is there any restriction on the types of biases in \(\theta '\) that can be produced by the choice of an incorrect noise function \(\pi '\)? In general, no. Because the Hessian matrix H is nonsingular, one can always find a vector w such that the deviation of \(\theta '\) from \(\theta ^{*}\) in Eq. (42) points in any chosen direction of \(\theta \)-space. As long as the functions

$$\begin{aligned} g_i(R) = \left. \left\langle \frac{\partial R}{\partial \theta _i} \right\rangle _{S|R} \right| _{\theta ^{*}} \end{aligned}$$
(43)

are linearly independent for different indices i, a function f can always be found that generates the vector w in Eq. (42).

We therefore see that arbitrary errors in the noise function will bias the inference of model parameters in arbitrary directions. This fact presents a major concern for standard likelihood-based inference: if you assume an incorrect noise function \(\pi \), the parameters \(\theta \) that you then infer will, in general, be biased in an unpredictable way. Moreover, the magnitude of this bias will be directly proportional to the magnitude of the error in the log of your assumed noise function. This problem is illustrated in Fig. 7b.

There is a case that deserves some additional consideration. Suppose we use a noise function \(\pi ''\) that differs from \(\pi ^{*}\) only along a dual mode \(\tilde{v}^\mathrm{dif}\), i.e.,

$$\begin{aligned} \log \pi ''(M|R) = \log \pi ^{*}(M|R) + \epsilon \tilde{v}^\mathrm{dif}(M,R). \end{aligned}$$
(44)

The maximum likelihood parameters \(\theta ''\) of \(L(\theta ,\pi '')\) will still deviate from \(\theta ^{*}\) by an amount that does not shrink to zero in the \(N \rightarrow \infty \) limit. However, this bias in parameter values will be restricted to the diffeomorphic mode \(v^\mathrm{dif}\) to which \(\tilde{v}^\mathrm{dif}\) is dual, i.e.,

$$\begin{aligned} \left\langle \theta ''_i - \theta ^{*}_i \right\rangle _\mathrm{real} = - \epsilon v_i^\mathrm{dif}. \end{aligned}$$
(45)

This state of affairs is not so bad, since the incorrect noise function will lead to model parameters that are inaccurate only along modes that we already know we cannot learn from the data. This situation is illustrated in Fig. 7c; see Appendix 1 for the derivation of Eq. (45).

7.2 Mutual Information

The constraints on parameters imposed by using mutual information \(I(\theta )\) as the objective function \(F(\theta )\) in Eq. (35) are determined by the Hessian

$$\begin{aligned} K_{ij} = \left. \frac{\partial ^2 I(\theta )}{\partial \theta _i \partial \theta _j} \right| _{\theta ^{*}}. \end{aligned}$$
(46)

Appendix 2 provides a detailed derivation of this Hessian, which after some computation is found to be given by

$$\begin{aligned} K_{ij} = - \int dR\, p(R) J(R) \left. \left[ \left\langle \frac{\partial R}{\partial \theta _i} \frac{\partial R}{\partial \theta _j} \right\rangle _{S|R} - \left\langle \frac{\partial R}{\partial \theta _i} \right\rangle _{S|R} \left\langle \frac{\partial R}{\partial \theta _j} \right\rangle _{S|R} \right] \right| _{\theta ^{*}}. \end{aligned}$$
(47)

Comparing Eqs. (47) and (39), we see that for any vector v in parameter space,

$$\begin{aligned} - \sum _{i,j} H_{ij} v_i v_j \ge - \sum _{i,j} K_{ij} v_i v_j \ge 0. \end{aligned}$$
(48)

Likelihood is thus seen to constrain parameters in all directions at least as much as mutual information does. As expected, mutual information provides no constraint whatsoever in the direction of any diffeomorphic mode \(v^\mathrm{dif}\) of the model, since

$$\begin{aligned} - \sum _{i,j} K_{ij} v_i^\mathrm{dif} v_j^\mathrm{dif} = \int dR\, p(R) J(R) \left. \left[ \left\langle h^2(R) \right\rangle _{S|R} - \left\langle h(R) \right\rangle _{S|R}^2 \right] \right| _{\theta ^{*}}= 0. \end{aligned}$$
(49)

The converse is also true: if there is no constraint on parameters along v, then v must be a diffeomorphic mode. This is because

$$\begin{aligned} - \sum _{i,j} K_{ij} v_i v_j = \int dR\, p(R)\, J(R)\, \left. \mathrm{var}\left( \sum _i v_i \frac{\partial R}{\partial \theta _i} \right) _{S|R} \right| _{\theta ^{*}} . \end{aligned}$$
(50)

Because J(R) is positive almost everywhere, the right hand side of Eq. (50) can vanish only if \(\sum _i v_i \frac{\partial R}{\partial \theta _i}\) does not differ between any two sequences that have the same R value. There must therefore exist a function h(R) such that \(h(R) = \sum _i v_i \frac{\partial R}{\partial \theta _i}\) for all sequences S. This is precisely the requirement in Eq. (16) that v be a diffeomorphic mode.

However, except along diffeomorphic modes, we can generally expect that the constraints provided by likelihood and by mutual information will be of the same magnitude. This situation is illustrated in Fig. 7d. Indeed, in the next section we will see an explicit example where all nondiffeomorphic constraints imposed by mutual information are comparable to those imposed by likelihood.

Before proceeding, we note that the relationship between the Hessians of likelihood and mutual information suggests an analogy to fluid mechanics. Consider a trajectory in parameter space given by \(\theta _i(t) = t v_i\), where t is time and v is a velocity vector pointing in the direction of motion. This motion in parameter space will induce a motion in the prediction R(t) that the model provides for every sequence S. The set of sequences \(\left\{ S_n \right\} \) thus presents us with a dynamic cloud of “particles” moving about in R-space. At \(t = 0\), the quantity \({\langle {\dot{R}}^2 \rangle }_{S|R}\) will be proportional to the average kinetic energy of particles at location R. The quantity \({\langle {\dot{R}}\rangle }^2 _{S|R}\) will be proportional to the (per particle) kinetic energy of the bulk fluid element at R, a quantity that does not count energy due to thermal motion. In this way we see that \(-\sum _{i,j} H_{ij} v_i v_j\) is a weighted tally of total kinetic energy, whereas \(-\sum _{i,j} K_{ij} v_i v_j\) corresponds to a tally of internal thermal energy only, the kinetic energy of bulk motion having been subtracted out.

7.3 Noise-Averaged Likelihood

Noise-averaged likelihood provides constraints in between those of likelihood, computed using the correct noise function, and those of mutual information. This is illustrated in Fig. 7e. Whereas mutual information provides no constraints whatsoever on the diffeomorphic modes of \(\theta \), noise-averaged likelihood provides weak constraints in these directions. These soft constraints reflect the Hessian of \(\Delta (\theta )\) in Eq. (12). The constraints along diffeomorphic modes, however, have an upper bound on how tight they can become in the \(N \rightarrow \infty \) limit. This is because such constraints only reflect our prior \(p(\pi )\) on the noise function, not the information we glean from data.

8 Worked Example: Gaussian Selection

The above principles can be illustrated in the following analytically tractable model of a massively parallel experiment, which we call the “Gaussian selection model.” In this model, our experiment starts with a large library of “DNA” sequences S, each of which is actually a D-dimensional vector drawn from a Gaussian probability distributionFootnote 9

$$\begin{aligned} p_\mathrm{lib}(S) = (2 \pi )^{-D/2} \exp \left( - \frac{|S - \mu |^2}{2} \right) . \end{aligned}$$
(51)

Here, \(\mu \) is a D-dimensional vector defining the average sequence in the library. From this library we extract sequences into two bins, labeled \(M=0\) and \(M=1\). We fill the \(M=0\) bin with sequences sampled indiscriminately from the library. The \(M=1\) bin is filled with sequences sampled from this library with relative probability

$$\begin{aligned} \frac{p(M=1 | S)}{p(M=0|S)} = \exp ( a^{*} + b^{*} R^{*}) \end{aligned}$$
(52)

where the activity \(R^{*}\) is defined as the dot product of S with a D-dimensional vector \(\theta ^{*}\), i.e.,

$$\begin{aligned} R^{*} = S^T \theta ^{*}. \end{aligned}$$
(53)

We use \(N_M\) to denote the number of sequences in bin M, and write \(N = N_0 + N_1\) for the total number of sequences.

All of our calculations are performed in the limit where \(N_1\) is large but \(N_0\) is far larger. More specifically, we assume that \(\exp ( a^{*} + b^{*} R^{*}) \ll 1\) everywhere that both \(p(S|M=0)\) and \(p(S|M=1)\) are significant. We use \(\epsilon \) to denote the ratio

$$\begin{aligned} \epsilon \equiv \frac{p(M=1)}{p(M=0)} = \frac{N_1}{N_0}, \end{aligned}$$
(54)

and all of our calculations are carried out only to first order in \(\epsilon \). This model experiment is illustrated in Fig. 8.
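The Gaussian selection model is straightforward to simulate. The sketch below draws bin \(M=0\) directly from the library, Eq. (51), and bin \(M=1\) from the shifted Gaussian of Eq. (57); the value of \(a^{*}\) implied by the chosen \(\epsilon = N_1/N_0\) then follows from Eq. (58). All parameter values are arbitrary choices made for illustration.

```python
import numpy as np

def simulate_gaussian_selection(theta_star, mu, b_star, N0, N1, rng):
    """Simulate the Gaussian selection model.

    Bin M=0: N0 sequences drawn directly from the library, Eq. (51).
    Bin M=1: N1 sequences drawn from the shifted Gaussian of Eq. (57), i.e. the
    distribution of library sequences selected with weight exp(a* + b* R*).
    The value of a* implied by eps = N1/N0 follows from Eq. (58)."""
    D = len(theta_star)
    S0 = rng.normal(size=(N0, D)) + mu                        # p(S|M=0), Eq. (56)
    S1 = rng.normal(size=(N1, D)) + mu + b_star * theta_star  # p(S|M=1), Eq. (57)
    S = np.vstack([S0, S1])
    M = np.concatenate([np.zeros(N0, dtype=int), np.ones(N1, dtype=int)])
    a_star = (np.log(N1 / N0) - b_star * (mu @ theta_star)
              - 0.5 * b_star**2 * (theta_star @ theta_star))
    return S, M, a_star

rng = np.random.default_rng(3)
D = 5
theta_star, mu = rng.normal(size=D), rng.normal(size=D)
S, M, a_star = simulate_gaussian_selection(theta_star, mu, b_star=1.0,
                                           N0=10**5, N1=10**3, rng=rng)
```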

Fig. 8

Illustration of the Gaussian selection model of a massively parallel experiment. Each assayed sequence in this model is a D-dimensional vector. The library (corresponding to bin \(M=0\)) consists of \(N_0\) sequences S drawn from a Gaussian distribution \(p_\mathrm{lib}(S)\) that is centered on a specific sequence \(\mu \). Bin \(M=1\) consists of \(N_1\) sequences drawn from the distribution \(p_\mathrm{lib}(S)\) and then enriched by a factor of \(\exp (b^{*} R^{*})\), where \(R^{*} = S^T \theta ^{*}\). This enrichment procedure is analogous to selecting protein-bound DNA sequences, where \(b^{*} R^{*}\) is the negative of the binding energy. Calculations in the text are performed in the \(N_0 \gg N_1\) limit

Our goal is this: given the sampled sequences in the two bins, recover the parameters \(\theta ^{*}\) defining the sequence–function relationship for \(R^{*}\). To do this, we adopt the following model for the sequence-dependent activity R:

$$\begin{aligned} R = S^T \theta , \end{aligned}$$
(55)

where \(\theta \) is the D-dimensional vector we wish to infer. From the arguments above and in [28], it is readily seen that the magnitude of \(\theta \), i.e. \(|\theta |\), is the only diffeomorphic mode of the model: changing this parameter rescales R, preserving rank order.

8.1 Bin-Specific Distributions

We can readily calculate the conditional sequence distribution p(S|M) for each bin M, as well as the conditional distribution p(R|M) of model predictions. Because the sequences sampled for bin 0 are indiscriminately drawn from \(p_{\mathrm{lib}}\), we have

$$\begin{aligned} p(S | M=0) = p_\mathrm{lib}(S) = (2 \pi )^{-D/2} \exp \left( - \frac{|S - \mu |^2}{2} \right) . \end{aligned}$$
(56)

The distribution of selected sequences is found to be

$$\begin{aligned} p(S | M=1) = (2 \pi )^{-D/2} \exp \left( - \frac{|S - \mu - b^{*} \theta ^{*}|^2}{2} \right) . \end{aligned}$$
(57)

The value of \(\epsilon \) is found to be related to \(a^{*}\), \(b^{*}\), and \(\theta ^{*}\) via

$$\begin{aligned} \epsilon = \exp \left( a^{*} + b^{*} \mu ^T \theta ^{*} + \frac{b^{*2} |\theta ^{*}|^2}{2} \right) . \end{aligned}$$
(58)

Appendix 3 provides an explicit derivation of Eqs. (57) and (58).

We compute the distribution of model predictions for each bin as follows. For each bin M, this distribution is defined as

$$\begin{aligned} p(R|M) = \int dS\, \delta (R - \theta ^T S) p(S|M). \end{aligned}$$
(59)

This can be analytically calculated for both of the bins owing to the Gaussian form of each sequence distribution. We find that

$$\begin{aligned} p(R|M=0)= & {} \frac{1}{\sqrt{2 \pi } |\theta |} \exp \left( - \frac{(R - \mu ^T \theta )^2}{2 |\theta |^2} \right) , \end{aligned}$$
(60)
$$\begin{aligned} p(R|M=1)= & {} \frac{1}{\sqrt{2 \pi } |\theta |} \exp \left( - \frac{(R - [\mu + b^{*} \theta ^{*}]^T \theta )^2}{2 |\theta |^2} \right) . \end{aligned}$$
(61)

See Appendix 3 for details.

8.2 Noise Function

To compute likelihood, we must posit a noise function \(\pi (M|R)\). Based on our prior knowledge of the selection procedure, we choose \(\pi (M|R)\) so that

$$\begin{aligned} \frac{\pi (M=1|R)}{\pi (M=0|R)} = \exp ( a + bR), \end{aligned}$$
(62)

where a and b are scalar parameters that we might or might not know a priori. This, combined with the normalization requirement, \(\sum _M \pi (M|R) = 1\), gives

$$\begin{aligned} \pi (M=1|R) = \frac{e^{a + bR}}{1 + e^{a + bR}}, \quad \pi (M=0|R) = \frac{1}{1 + e^{a + bR}}. \end{aligned}$$
(63)

This noise function \(\pi \) is correct when \(a = a^{*}\) and \(b = b^{*}\). The parameter b is dual to the diffeomorphic mode \(|\theta |\), whereas the parameter a is not dual to any diffeomorphic mode.

In the experimental setup used to motivate the Gaussian selection model, the parameter a is affected by many aspects of the experiment, including the concentration of the protein used in the binding assay, the efficiency of DNA extraction from the gel, and the relative amount of PCR amplification used for the bin 0 and bin 1 sequences. In practice, these aspects of the experiment are very hard to control, much less predict. From the results in the previous section, we can expect that if we assume a specific value for a and perform likelihood-based inference, inaccuracies in this value for a will distort our inferred model \(\theta \) in an unpredictable (i.e., nondiffeomorphic) way. We will, in fact, see that this is the case. The solution to this problem, of course, is to infer \(\theta \) alone by maximizing the mutual information \(I(\theta )\); in this case the values for a and b become irrelevant. Alternatively, one can place a prior on a and b, then maximize noise-averaged likelihood \(L_\mathrm{na}(\theta )\). We now analytically explore the consequences of these three approaches.

8.3 Likelihood

Using the noise function in Eq. (63), the likelihood L becomes a function of \(\theta \), a, and b. Computing L in the \(N \rightarrow \infty \) and \(\epsilon \rightarrow 0\) limits, we find that

$$\begin{aligned} L(\theta ,a,b) = \epsilon [ a + b \theta ^T \mu + b b^{*} \theta ^T \theta ^{*} ] - \exp \left( a + b \theta ^T \mu + \frac{b^2 |\theta |^2}{2} \right) . \end{aligned}$$
(64)

We now consider the consequences of various approaches for using \(L(\theta ,a,b)\) to estimate \(\theta ^{*}\). In each case, the inferred optimum will be denoted by a superscript ‘o.’ Standard likelihood-based inference requires that we assume a specific value for a and for b, then optimize \(L(\theta , a, b)\) over \(\theta \) alone by setting

$$\begin{aligned} 0 = \left. \frac{\partial L}{\partial \theta _i} \right| _{\theta ^o,a,b} \end{aligned}$$
(65)

for each component i. By this criterion we find that the optimal model \(\theta ^o\) is given by a linear combination of \(\theta ^{*}\) and \(\mu \):

$$\begin{aligned} \theta ^o = \frac{c b^{*}}{b} \theta ^{*} + \frac{c-1}{b} \mu , \end{aligned}$$
(66)

where c is a scalar that solves the transcendental equation

$$\begin{aligned} c = \exp \left( [a^{*} - a] + \frac{1-c^2}{2}|b^{*} \theta ^{*} + \mu |^2 \right) . \end{aligned}$$
(67)

See Appendix 2 for the derivation of this result. Note that c is determined only by the value of a and not by the value of b. Moreover, \(c = 1\) if and only if \(a = a^{*}\).

If our assumed noise function is correct, i.e., \(a = a^{*}\) and \(b = b^{*}\), then

$$\begin{aligned} \theta ^o = \theta ^{*}. \end{aligned}$$
(68)

Thus, maximizing likelihood will identify the correct model parameters. This exemplifies the general behavior illustrated in Fig. 7a.

If \(a = a^{*}\) but \(b \ne b^{*}\), our assumed noise function will differ from the correct noise function only in a manner dual to the diffeomorphic mode \(|\theta |\). In this case we find that \(c = 1\) and

$$\begin{aligned} \theta ^o = \frac{b^{*}}{b} \theta ^{*}. \end{aligned}$$
(69)

The inferred model \(\theta ^o\) is thus proportional to, but not equal to, \(\theta ^{*}\). This comports with our claim above that the diffeomorphic mode of the inferred model, i.e. \(|\theta ^o|\), will be biased so as to compensate for the error in the dual parameter b. This finding matches the behavior illustrated in Fig. 7c.

If \(a \ne a^{*}\), however, \(c \ne 1\). As a result, \(\theta ^o\) is a nontrivial linear combination of \(\theta ^{*}\) and \(\mu \), and (provided \(\mu \) is not parallel to \(\theta ^{*}\)) will point in a different direction than \(\theta ^{*}\). This is true regardless of the value of b. This behavior is illustrated in Fig. 7b: errors in non-dual parameters of the noise function will typically lead to errors in nondiffeomorphic parameters of the activity model.
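
These three cases are easy to reproduce numerically from Eqs. (66) and (67). The sketch below is illustrative only (the parameter values are arbitrary); it solves the transcendental equation for c with a standard root finder, using the fact that \(f(c) = c - \exp ([a^{*}-a] + \frac{1-c^2}{2}|b^{*} \theta ^{*} + \mu |^2)\) is monotonically increasing, so its root is unique.

```python
import numpy as np
from scipy.optimize import brentq

def theta_opt(a, b, a_star, b_star, theta_star, mu):
    """Likelihood-optimal model from Eqs. (66)-(67): solve for c, then assemble theta_o."""
    v2 = float(np.sum((b_star * theta_star + mu) ** 2))
    f = lambda c: c - np.exp((a_star - a) + 0.5 * (1.0 - c**2) * v2)
    c = brentq(f, 1e-9, 1e3)        # f is monotone increasing, so this bracket holds the unique root
    theta_o = (c * b_star / b) * theta_star + ((c - 1.0) / b) * mu
    return theta_o, c

mu = np.array([0.5, 0.4, -0.3, 0.2, 0.3])
theta_star = np.array([1.0, -0.5, 0.8, 0.3, -0.7])
a_star, b_star = -6.0, 0.6

for a, b in [(a_star, b_star),          # correct noise function
             (a_star, 2 * b_star),      # wrong b only (dual to the diffeomorphic mode |theta|)
             (a_star - 1.0, b_star)]:   # wrong a (not dual to any diffeomorphic mode)
    th, c = theta_opt(a, b, a_star, b_star, theta_star, mu)
    cos = th @ theta_star / (np.linalg.norm(th) * np.linalg.norm(theta_star))
    print(f"a = {a:5.1f}, b = {b:4.2f}:  c = {c:5.3f},  "
          f"|theta_o|/|theta*| = {np.linalg.norm(th) / np.linalg.norm(theta_star):5.3f},  "
          f"cos(theta_o, theta*) = {cos:6.4f}")
```

With the correct noise function the output reproduces \(\theta ^o = \theta ^{*}\); a wrong b only rescales \(\theta ^o\) (the cosine remains 1); a wrong a gives \(c \ne 1\) and tilts \(\theta ^o\) away from \(\theta ^{*}\).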

We now consider the error bars that likelihood places on model parameters. Writing \(\theta = \theta ^o + \delta \theta \) and expanding \(L(\theta ,a^{*},b^{*})\) about \(\theta ^o\), we find that

$$\begin{aligned} N L(\theta , a^{*}, b^{*}) \approx N L(\theta ^o, a^{*}, b^{*}) - \frac{N_1 b^{*2}}{2} \sum _{i,j} \varLambda _{ij} \delta \theta _i \delta \theta _j, \end{aligned}$$
(70)

where \(\varLambda _{ij} = \delta _{ij} + (\mu _i + b^{*} \theta ^{*}_i)(\mu _j + b^{*} \theta ^{*}_j)\). Note that all eigenvalues of \(\varLambda \) are greater than or equal to 1: writing \(\varLambda = I + vv^T\) with \(v = \mu + b^{*} \theta ^{*}\), we see that \(D-1\) of its eigenvalues equal 1 and the remaining eigenvalue equals \(1 + |v|^2\). Adopting the posterior distribution

$$\begin{aligned} p(\theta | \mathrm{data}) \sim e^{N L(\theta , a, b)} \end{aligned}$$
(71)

therefore gives a covariance matrix on \(\theta \) of

$$\begin{aligned} \left\langle \delta \theta _i \delta \theta _j \right\rangle = \frac{\varLambda ^{-1}_{ij}}{N_1 b^{*2}}. \end{aligned}$$
(72)

Thus, \(\delta \theta \sim N_1^{-1/2}\) in all directions of \(\theta \)-space. Consequently, when the noise function is incorrect and N is sufficiently large, the finite bias introduced into \(\theta ^o\) will cause \(\theta ^{*}\) to fall outside the inferred error bars.
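
Because \(\varLambda = I + vv^T\) is a rank-one update of the identity, the covariance in Eq. (72) has a simple closed form via the Sherman–Morrison identity, \(\varLambda ^{-1} = I - vv^T/(1+|v|^2)\). The short sketch below (illustrative parameter values only) checks this identity and shows that every posterior standard deviation is of order \(N_1^{-1/2}\).

```python
import numpy as np

mu = np.array([0.5, 0.4, -0.3, 0.2, 0.3])
theta_star = np.array([1.0, -0.5, 0.8, 0.3, -0.7])
b_star, N1 = 0.6, 5000                     # illustrative values

v = mu + b_star * theta_star
Lam = np.eye(len(v)) + np.outer(v, v)      # Lambda_ij from Eq. (70)

# Sherman-Morrison inverse, and the posterior covariance of Eq. (72)
Lam_inv = np.eye(len(v)) - np.outer(v, v) / (1.0 + v @ v)
assert np.allclose(Lam_inv @ Lam, np.eye(len(v)))
cov = Lam_inv / (N1 * b_star**2)

# All posterior standard deviations shrink as N1^{-1/2}
print("posterior std devs:", np.sqrt(np.linalg.eigvalsh(cov)).round(4))
print("1/sqrt(N1 b*^2)   :", round(1.0 / np.sqrt(N1 * b_star**2), 4))
```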

8.4 Mutual Information

In the \(\epsilon \rightarrow 0\) limit, Eq. (7) simplifies to

$$\begin{aligned} I(\theta ) = \epsilon \int dR\, p(R | M=1) \log \frac{p(R|M=1)}{p(R|M=0)} + O(\epsilon ^2). \end{aligned}$$
(73)

The lowest-order term on the right-hand side can be evaluated exactly using Eqs. (60) and (61):

$$\begin{aligned} I(\theta ) = \frac{\epsilon b^{*2}}{2} \frac{(\theta ^T \theta ^{*})^2}{|\theta |^2}. \end{aligned}$$
(74)

See Appendix 3 for details. Note that the expression on the right is invariant under a rescaling of \(\theta \). This reflects the fact that \(|\theta |\) is a diffeomorphic mode of the model defined in Eq. (55).
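
The right-hand side of Eq. (73) is \(\epsilon \) times the Kullback–Leibler divergence between the two equal-variance Gaussians in Eqs. (60) and (61), which is where the closed form in Eq. (74) comes from. This is easy to verify numerically; the sketch below is illustrative only, with an arbitrary trial model \(\theta \) and arbitrary parameter values.

```python
import numpy as np

def I_theta(theta, theta_star, mu, b_star, eps, n_grid=20001):
    """Eq. (73): eps * KL( p(R|M=1) || p(R|M=0) ), using the Gaussians of Eqs. (60)-(61)."""
    s = np.linalg.norm(theta)                    # both Gaussians have standard deviation |theta|
    m0 = mu @ theta                              # mean of p(R|M=0)
    m1 = (mu + b_star * theta_star) @ theta      # mean of p(R|M=1)
    R = np.linspace(m1 - 10 * s, m1 + 10 * s, n_grid)
    p1 = np.exp(-(R - m1) ** 2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)
    log_ratio = ((R - m0) ** 2 - (R - m1) ** 2) / (2 * s**2)
    return eps * np.sum(p1 * log_ratio) * (R[1] - R[0])   # grid approximation of the integral

mu = np.array([0.5, 0.4, -0.3, 0.2, 0.3])
theta_star = np.array([1.0, -0.5, 0.8, 0.3, -0.7])
b_star, eps = 0.6, 5e-3

theta = np.array([0.4, 0.1, 1.0, -0.3, -0.2])    # an arbitrary trial model
closed_form = eps * b_star**2 * (theta @ theta_star) ** 2 / (2 * theta @ theta)
print("numerical Eq. (73)   :", I_theta(theta, theta_star, mu, b_star, eps))
print("closed form, Eq. (74):", closed_form)
print("theta -> 3.7 * theta :", I_theta(3.7 * theta, theta_star, mu, b_star, eps))
```

The first two numbers should agree, and the third should equal the first, reflecting the invariance of \(I(\theta )\) under rescaling of \(\theta \).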

To find the model \(\theta ^o\) that maximizes mutual information, we set

$$\begin{aligned} 0 = \left. \frac{\partial I}{\partial \theta _i} \right| _{\theta ^o} = \frac{\epsilon b^{*2} \theta ^{oT} \theta ^{*}}{|\theta ^o|^2} \left[ \theta ^{*}_i - \theta _i^o \frac{\theta ^{oT} \theta ^{*}}{|\theta ^o|^2} \right] . \end{aligned}$$
(75)

The optimal model \(\theta ^o\) must therefore be parallel to \(\theta ^{*}\), i.e.

$$\begin{aligned} \theta ^o \propto \theta ^{*}. \end{aligned}$$
(76)

Writing \(\theta = \theta ^o + \delta \theta \) and expanding as above, we find that

$$\begin{aligned} N I(\theta ) = N I(\theta ^o) - \frac{N_1 b^{*2}}{2} (\delta \theta _\perp )^2, \end{aligned}$$
(77)

where \(\delta \theta _\perp \) is the component of \(\delta \theta \) perpendicular to \(\theta ^{*}\); see Appendix 3. Therefore, if we use the posterior distribution \(p(\theta | \mathrm{data}) \sim e^{N I(\theta )}\) to infer \(\theta \), we find uncertainties in directions perpendicular to \(\theta ^{*}\) of magnitude \(N_1^{-1/2}\). These error bars are only slightly larger than those obtained using likelihood, and have the same dependence on N. However, we find no constraint whatsoever on the component of \(\delta \theta \) parallel to \(\theta ^{*}\). These results are illustrated by Fig. 7d.
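
The structure of these error bars can also be seen by numerically computing the Hessian of \(N I(\theta )\) from Eq. (74) at \(\theta ^o = \theta ^{*}\): the eigenvector parallel to \(\theta ^{*}\) has eigenvalue zero (the unconstrained diffeomorphic direction), while the perpendicular eigenvalues equal \(-N_1 b^{*2}\), in agreement with Eq. (77). A finite-difference sketch, again with arbitrary illustrative values:

```python
import numpy as np

def NI(theta, theta_star, b_star, N1):
    """N * I(theta) from Eq. (74), writing N1 = eps * N."""
    return 0.5 * N1 * b_star**2 * (theta @ theta_star) ** 2 / (theta @ theta)

theta_star = np.array([1.0, -0.5, 0.8, 0.3, -0.7])
b_star, N1 = 0.6, 5000
D, h = len(theta_star), 1e-4
E = np.eye(D)

# Central-difference Hessian of N*I(theta) at theta_o = theta_star
H = np.zeros((D, D))
for i in range(D):
    for j in range(D):
        H[i, j] = (NI(theta_star + h*E[i] + h*E[j], theta_star, b_star, N1)
                   - NI(theta_star + h*E[i] - h*E[j], theta_star, b_star, N1)
                   - NI(theta_star - h*E[i] + h*E[j], theta_star, b_star, N1)
                   + NI(theta_star - h*E[i] - h*E[j], theta_star, b_star, N1)) / (4 * h * h)

vals, vecs = np.linalg.eigh(H)
print("Hessian eigenvalues:", vals.round(1))     # D-1 values near -N1 b*^2, one near 0
print("-N1 * b*^2         :", -N1 * b_star**2)
flat = vecs[:, np.argmax(vals)]                  # the unconstrained (diffeomorphic) direction
print("|cos(flat, theta*)|:", round(abs(flat @ theta_star) / np.linalg.norm(theta_star), 4))
```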

8.5 Noise-Averaged Likelihood

We can also compute the noise-averaged likelihood, \(L_\mathrm{na}(\theta )\), in the case of a uniform prior on a and b, i.e. \(p(\pi ) = p(a,b) = \mathcal {C}\) where \(\mathcal {C}\) is an infinitesimal constant. We find that

$$\begin{aligned} \exp [ N L_\mathrm{na}(\theta ) ]&= \int d\pi \, p(\pi ) \exp [N L(\theta ,\pi )] \end{aligned}$$
(78)
$$\begin{aligned}&= \mathcal {C} \int _{-\infty }^\infty da\, \int _{-\infty }^\infty db\, \exp \left( N_1 [ a + b \theta ^T \mu + b b^{*} \theta ^T \theta ^{*} ] - N \exp \left[ a + b \theta ^T \mu + \frac{b^2 |\theta |^2}{2} \right] \right) \end{aligned}$$
(79)
$$\begin{aligned}&= \mathcal {C}\, \varGamma (N_1) \sqrt{\frac{2 \pi }{N |\theta |^2} } \exp \left( \frac{N_1 b^{*2}}{2} \frac{(\theta ^T \theta ^{*})^2}{|\theta |^2} \right) . \end{aligned}$$
(80)

See Appendix 3 for details. Thus,

$$\begin{aligned} L_\mathrm{na}(\theta ) = I(\theta ) - \frac{1}{N} \log |\theta | + \mathrm{const}, \end{aligned}$$
(81)

where the constant (which absorbs \(\mathcal {C}\) entirely) does not depend on \(\theta \). If we perform Bayesian inference using noise-averaged likelihood, i.e., using \(p(\theta | \mathrm{data}) \sim e^{N L_\mathrm{na}(\theta )}\), we will therefore find in the large N limit that \(\delta \theta _\perp \) is constrained in the same way as if we had used mutual information. The noise function prior we have assumed further results in weak constraints on \(|\theta |\) that do not tighten as N increases.Footnote 10 This is represented in Fig. 7e.

9 Discussion

The systematic study of quantitative sequence–function relationships in biology is just now becoming possible thanks to the development of a variety of massively parallel experiments. Concepts and methods from statistical physics are likely to prove valuable for understanding this basic class of biological phenomena as well as for learning sequence–function relationships from data.

In this paper we have discussed the problem of learning parametric models of sequence–function relationships from experiments having poorly characterized experimental noise. We have seen that standard likelihood-based inference, which requires an explicit model of experimental noise, will generally lead to incorrect model parameters due to errors in the assumed noise function. By contrast, mutual-information-based inference allows one to learn parametric models without having to assume any noise function at all. However, mutual-information-based inference is unable to pin down the values of model parameters along diffeomorphic modes. This behavior reflects a fundamental difference between how diffeomorphic and nondiffeomorphic modes are constrained by data: diffeomorphic modes arise from arbitrariness in the distinction between the activity model and the noise function. These findings were illustrated using an analytically tractable model for a massively parallel experiment.

The study of quantitative sequence–function relationships still presents many challenges, both theoretical and computational. One major practical difficulty with the mutual-information-based approach described here is accurately estimating mutual information from data. Although methods are available for doing this [44], it remains unclear whether any are accurate enough to enable computational sampling of the posterior distribution \(p(\theta |\mathrm{data}) \sim e^{NI(\theta )}\), as suggested here. Moreover, none of these estimation methods is regarded as definitive. We believe this lack of clarity regarding how to estimate mutual information reflects the fact that the density estimation problem itself has never been fully solved, even in one or two dimensions. We are hopeful, however, that field-theoretic methods for estimating probability densities [45–47] might help resolve the problem of mutual information estimation.

The problem of model selection poses a major theoretical challenge. Ideally, one would like to explore a hierarchy of possible model classes when fitting parametric models to data. However, when considering effective models it is unclear how to move far beyond independent-site models (e.g., energy matrices), because the number of parameters grows exponentially with the length of the sequence. Moreover, when learning mechanistic models, such as the model of the lac promoter featured in Fig. 3, it is unclear how to systematically test different arrangements of binding sites, different protein–protein interactions, and so on. We emphasize that this model prioritization problem is fundamentally theoretical, not computational, and at present there is little clarity about how to address it.

Finally, the geometric structure of sequence–function relationships presents an array of intriguing questions. For instance, very little is known (in any system) about how convex or glassy such landscapes in sequence space are, what their density of states looks like, and so on. Most of the biological and evolutionary implications of these aspects of sequence–function relationships also have yet to be worked out. We believe that the methods and ideas of statistical physics may lead to important insights into these questions in the near future.