Learning Quantitative Sequence–Function Relationships from Massively Parallel Experiments

A fundamental aspect of biological information processing is the ubiquity of sequence–function relationships—functions that map the sequence of DNA, RNA, or protein to a biochemically relevant activity. Most sequence–function relationships in biology are quantitative, but only recently have experimental techniques for effectively measuring these relationships been developed. The advent of such “massively parallel” experiments presents an exciting opportunity for the concepts and methods of statistical physics to inform the study of biological systems. After reviewing these recent experimental advances, we focus on the problem of how to infer parametric models of sequence–function relationships from the data produced by these experiments. Specifically, we retrace and extend recent theoretical work showing that inference based on mutual information, not the standard likelihood-based approach, is often necessary for accurately learning the parameters of these models. Closely connected with this result is the emergence of “diffeomorphic modes”—directions in parameter space that are far less constrained by data than likelihood-based inference would suggest. Analogous to Goldstone modes in physics, diffeomorphic modes arise from an arbitrarily broken symmetry of the inference problem. An analytically tractable model of a massively parallel experiment is then described, providing an explicit demonstration of these fundamental aspects of statistical inference. This paper concludes with an outlook on the theoretical and computational challenges currently facing studies of quantitative sequence–function relationships.


Introduction
A major long-term goal in biology is to understand how biological function is encoded within the sequences of DNA, RNA, and protein. The canonical success story in this effort is the genetic code: given an arbitrary sequence of messenger RNA, the genetic code allows us to predict with near certainty what peptide sequence will result. There are many other biological codes we would like to learn as well. How does the DNA sequence of a promoter or enhancer encode transcriptional regulatory programs? How does the sequence of pre-mRNA govern which exons are kept and which are removed from the final spliced mRNA? How does the peptide sequence of an antibody govern how strongly it binds to target antigens?
A major difference between the genetic code and these other codes is that while the former is qualitative in nature, the latter are governed by sequence-function relationships that are inherently quantitative. Quantitative sequence-function relationships describe any function that maps the sequence of a biological heteropolymer to a biologically relevant activity (Fig. 1a). Perhaps the simplest example of such a relationship is how the affinity of a transcription factor protein for its DNA binding site depends on the DNA sequence of that site (Fig. 1b). Such relationships are a key component of the more complicated relationship between the DNA sequence of a promoter or enhancer (which typically binds multiple proteins) and the resulting rate of mRNA transcription (Fig. 1c). In both of these cases, the activities of interest (affinity or transcription rate) can vary over orders of magnitude and yet still be finely tuned by adjusting the corresponding sequence (binding site or promoter/enhancer). Similarly, other sequence-function relationships, like the inclusion of exons during mRNA splicing or the affinity of a protein for its ligand, are fundamentally quantitative.
The study of quantitative sequence-function relationships presents an exciting opportunity for the concepts and methods of statistical physics to shed light on biological systems. There is a natural analogy between biological sequences and the microstates of physical systems, as well as between biological activities and physical Hamiltonians. Yet we currently lack answers to basic questions a statistical physicist might ask, such as "what is the density of states?" or "is a relationship convex or glassy?" The answers to such questions may well have important consequences for diverse fields including biochemistry, systems biology, immunology, and evolution.
Experimental methods for measuring sequence-function relationships have improved dramatically in recent years. In the mid 2000s, multiple "high-throughput" methods for measuring the DNA sequence specificity of transcription factors were developed; these methods include protein binding microarrays (PBMs) [2,3], E. coli one-hybrid technology (E1H) [4], and microfluidic platforms [5]. The subsequent development and dissemination of ultra-high-throughput DNA sequencing technologies then led, starting in 2009, to the creation of a number of "massively parallel" experimental techniques for probing a wide range of sequence-function relationships (Table 1). These massively parallel assays can readily measure the functional activity of 10^3 to 10^8 sequences in a single experiment by coupling standard bench-top techniques to ultra-high-throughput DNA sequencing.
Massively parallel experiments are very unlike conventional experiments in physics: they are typically very noisy and rarely provide direct readouts of the quantities that one cares about. Moreover, the noise characteristics of these measurements are difficult to accurately model. Indeed, such noise generally exhibits substantial day-to-day variability. Although standard inference methods require an explicit model of experimental noise, it is still possible to precisely learn quantitative sequence-function relationships from massively parallel data even when noise characteristics are unknown [27,28].

Table 1: Massively parallel experiments used for studying various sequence-function relationships. Columns show the type of sequences interrogated, the sequence activity assayed, the biological system on which the experiments were performed, the name (if any) of the experimental technique, and the publication first describing the method. This table is not exhaustive; it only describes some of the earliest experiments in each type of system.
The ability to fit parametric models to these data reflects subtle but important distinctions between two objective functions used for statistical inference: (i) likelihood, which requires a priori knowledge of the experimental noise function, and (ii) mutual information [29], a quantity based on the concept of entropy, which does not require a noise function. In contrast to the conventional wisdom that more experimental measurements will improve the model inference task, the standard maximum likelihood approach will typically never learn the right model, even in the infinite data limit, if one uses an imperfect model of experimental noise. Model inference based on mutual information does not suffer from this ailment.
Mutual-information-based inference is unable to pin down the values of model parameters along certain directions in parameter space known as "diffeomorphic modes" [28]. This inability is not a shortcoming of mutual information, but rather reflects a fundamental distinction between how diffeomorphic and nondiffeomorphic directions in parameter space are constrained by data. Analogous to the emergence of Goldstone modes in particle physics due to a specific yet arbitrary choice of phase, diffeomorphic modes arise from a somewhat arbitrary choice one must make when defining the sequence-dependent activity that one wishes to model. Likelihood, in contrast to mutual information, is oblivious to the distinction between diffeomorphic and nondiffeomorphic modes.
We begin this paper by briefly reviewing a variety of massively parallel assays for probing quantitative sequence-function relationships. We then turn to the problem of learning parametric models of these relationships from the data that these experiments generate. After reviewing recent work on this problem [28], we extend this work in three ways. First, we show that "diffeomorphic modes" of the parametric activity model that one wishes to learn are "dual" to certain transformations of the corresponding model of experimental noise (the "noise function"). This duality reveals a symmetry of the inference problem, thereby establishing a close analogy with Goldstone modes. Next, we compute and compare the Hessians of likelihood and mutual information. This comparison suggests an additional analogy between this inference problem and concepts in fluid mechanics. Finally, we work through an analytically tractable model of a massively parallel experiment on protein-DNA binding. This example explicitly illustrates the differences between likelihood- and mutual-information-based approaches to inference, as well as the emergence of diffeomorphic modes.
It should be noted that the inference of receptive fields in sensory neuroscience is another area of biology in which mutual information has proved useful as an objective function, and that work in this area has also provided important insights into basic aspects of machine learning [30,31,32,33,34]. Indeed, the problem of learning quantitative sequence-function relationships in molecular biology is very similar to the problem of learning receptive fields in neuroscience [28]. The discussion of this problem in the neuroscience context, however, has largely avoided in-depth analyses of how mutual information relates to likelihood, as well as of how diffeomorphic modes emerge.

Massively parallel experiments probing sequence-function relationships
All of the massively parallel experiments in Table 1 share a common structure (Fig. 2a). The first step in each experiment is to generate a large set of (roughly 10^3 to 10^8) different sequences to measure. This set of sequences is called the "library." Multiple different types of libraries can be used depending on the application. One then performs an experiment that takes this library as input, and as output provides a set of one or more "bins" of sequences. Each output bin contains sequences selected from the library with a weight that depends on the measured activity of that sequence. Finally, a sample of sequences from each of the output bins, as well as from the input library, is identified using ultra-high-throughput DNA sequencing. The resulting data thus consist of a long list of (typically non-unique) DNA sequences, each assigned to a corresponding bin (Fig. 2b). It is from these data that we wish to learn quantitative models of sequence-function relationships.

Some of the earliest massively parallel experiments were designed to measure the specificity of purified transcription factors for their DNA binding sites [6,7,8,9,10] (Fig. 2c). The library used in such studies consists of a fixed-length region of random DNA flanked by constant sequences used for PCR amplification. This library is mixed with the transcription factor of interest, after which protein-bound DNA is separated from unbound DNA, e.g., by running the protein-DNA mixture on a gel. Protein-bound DNA is then sequenced, along with the input library.
Using a library of random DNA to assay protein-DNA binding has the advantage that the same library can be used to study each protein. This is particularly useful when performing assays on many different proteins at once (e.g., [8,35]). On the other hand, only a very small fraction of library sequences will be specifically bound by the protein of interest. Moreover, because proteins typically bind DNA in a non-specific manner, such experiments are often performed serially in order to achieve substantial enrichment.

The first massively parallel experiment to probe how multi-protein-DNA complexes regulate transcription in living cells was Sort-Seq [12] (Fig. 2d). The sequence library used in this experiment was generated by introducing randomly scattered mutations into a "wild-type" sequence of interest, specifically, the 75 bp region of the promoter of the lac gene in E. coli depicted in Fig. 3a. A few million of these mutant promoters were cloned upstream of the green fluorescent protein (GFP) gene. Cells carrying these expression constructs were grown under conditions favorable to promoter activity and were then sorted into a small number of bins according to each cell's measured fluorescence. This partitioning of cells was accomplished using fluorescence-activated cell sorting (FACS) [41], a method that can readily sort ~10^4 cells per second. The mutant promoters within each sorted bin, as well as within the input library, were then sequenced, yielding measurements for ~2 x 10^5 variant promoter sequences. We note that advances in DNA sequencing have since made it possible to accumulate much more data, and it is no longer difficult to measure the activities of ~10^7 different sequences in this manner.

Fig. 3: The lac promoter region studied in [12]. (a) Sort-Seq was used to dissect a 75 bp region of the E. coli lac promoter using a library consisting of wild-type sequences mutagenized at 12% per nucleotide, i.e., each library sequence had 9 mutations on average. (b) The resulting data were used to learn a quantitative sequence-function relationship, the mathematical form of which reflected an explicit biophysical model of transcriptional regulation. This model included two "energy matrices" describing the sequence-dependent binding energy of CRP (Q) and RNAP (P) to their respective sites. It also included a value for the interaction energy γ between these two proteins.
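To make the structure of these data concrete, the following is a minimal sketch (in Python) of a Sort-Seq-like simulation. The wild-type sequence, energy-matrix parameters, noise level, and bin edges are all hypothetical stand-ins; real experiments differ in many details.

```python
import numpy as np

rng = np.random.default_rng(0)
BASES = "ACGT"

def mutagenize(wt, mu=0.12):
    """Introduce random point mutations at rate mu per nucleotide."""
    seq = list(wt)
    for l in range(len(seq)):
        if rng.random() < mu:
            seq[l] = rng.choice([b for b in BASES if b != seq[l]])
    return "".join(seq)

# Hypothetical wild-type sequence and additive (energy-matrix-style) activity.
wt = "".join(rng.choice(list(BASES), size=75))
theta = rng.normal(size=(4, 75))
def activity(seq):
    return sum(theta[BASES.index(b), l] for l, b in enumerate(seq))

# Library -> noisy fluorescence readout -> sorted bins.
library = [mutagenize(wt) for _ in range(10000)]
R = np.array([activity(s) for s in library])
fluor = R + rng.normal(scale=1.0, size=R.size)        # measurement noise
bin_edges = np.quantile(fluor, [0.25, 0.5, 0.75])     # four equal-count bins
data = [(s, int(np.searchsorted(bin_edges, f))) for s, f in zip(library, fluor)]
# `data` is the list of (sequence, bin) pairs from which models are inferred.
```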
Massively parallel experiments using partially mutagenized sequences provide data about sequence-function relationships within a localized region of sequence space centered on the wild-type sequence of interest. Measuring these local relationships can provide a wealth of information about the functional mechanisms of the wild-type sequence. For instance, the Sort-Seq data of [12] allowed the inference of an explicit biophysical model for how CRP and RNAP work together to regulate transcription at the lac promoter (Fig. 3b). In particular, the authors used their data to learn quantitative models for the in vivo sequence specificity of both CRP and RNAP. Model fitting also enabled measurement of the protein-protein interaction by which CRP is able to recruit RNAP and up-regulate transcription.
Partially mutagenized sequences have also been used extensively for "deep mutational scanning" experiments on proteins, starting with [16,17]. In this context, selection experiments on partially mutagenized proteins allow one to identify protein domains critical for folding and function. A variety of deep mutational scanning experiments are described in [42].

Inference using likelihood
The inference of quantitative sequence-function relationships from massively parallel experiments can be phrased as follows. The data consist of a large number of sequences {S_n}_{n=1}^N, each sequence S_n having a corresponding measurement M_n. Due to experimental noise, repeated measurements of the same sequence S can yield different values for M. Our experiment therefore has the probabilistic form of a conditional distribution p(M|S).

If we assume that the measurements for each sequence are independent, and if we have an explicit parametric form for p(M|S), then we can learn the values of the parameters by maximizing the per-datum log likelihood,

    L = (1/N) Σ_{n=1}^N log p(M_n|S_n).

In what follows we will refer to the quantity L simply as the "likelihood."

In regression problems such as this, one introduces an additional layer of structure. Specifically, we assume the measurement M of each sequence S is a noisy readout of some underlying activity R that is a deterministic function of that sequence. We call the function relating R to S the "activity model" and denote it using θ(S). This activity model is ultimately what we want to understand. The specific way the activity R is read out by measurements M is then specified by a conditional probability distribution, π(M|R), which we call the "noise function." Our experiment is thus represented by the Markov chain

    S --θ--> R --π--> M.

The corresponding likelihood is

    L(θ, π) = (1/N) Σ_{n=1}^N log π(M_n|R_n),   where R_n = θ(S_n).    (4)

The model we adopt for our experiment therefore has two components: θ, which describes the sequence-function relationship of interest, and π, which we do not really care about.

Standard statistical regression requires that the noise function π be specified up-front. π can be learned either by performing separate calibration experiments, or by assuming a functional form based on an educated guess. This can be problematic, however. Consider inference in the large data limit, N → ∞, which is illustrated in Fig. 4. Likelihood is determined by both the model θ and the noise function π (Fig. 4a). If we know the correct noise function π* exactly, then maximizing L(θ, π*) over θ is guaranteed to recover the correct model θ*. However, if we assume an incorrect noise function π′, maximizing likelihood will typically recover an incorrect model θ′ (Fig. 4b).

Fig. 4: Likelihood-based inference in the large data limit. If π is set equal to the correct noise function π*, then L will be maximized by the correct model θ*. However, if π is set to an incorrect noise function π′, L will typically attain a maximum at an incorrect model θ′.
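As a concrete illustration, here is a minimal sketch of how this likelihood might be evaluated, assuming a linear activity model and a Gaussian noise function; both choices are hypothetical stand-ins for whatever θ(S) and π(M|R) describe a real experiment.

```python
import numpy as np

def log_likelihood(theta, S, M, log_pi):
    """Per-datum log likelihood L = (1/N) sum_n log pi(M_n | R_n).

    theta  : (D,) parameter vector of a linear activity model
    S      : (N, D) array of sequence features
    M      : (N,) array of measurements
    log_pi : callable returning log pi(M | R) elementwise
    """
    R = S @ theta                 # model predictions R_n = theta(S_n)
    return np.mean(log_pi(M, R))

# Example: an assumed Gaussian noise function with unit variance.
gauss_log_pi = lambda M, R: -0.5 * (M - R) ** 2 - 0.5 * np.log(2 * np.pi)
```

If the assumed log_pi differs from the true noise function, the maximizer of this quantity will in general be biased, as discussed below.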

Inference using mutual information
Information theory provides an alternative inference approach. Suppose we hypothesize a specific model θ, which gives predictions R. Denote the true model θ* and the corresponding true activity R*. The dependence between S, M, R*, and R will then form a Markov chain,

    R <-- S --> R* --> M.

From the simple fact that M depends on S only through the value of R*, any dependence measure D that satisfies the Data Processing Inequality (DPI) [29] must satisfy

    D[R*; M] ≥ D[R; M].

Therefore, within the set of possible models θ, the true model is guaranteed to globally maximize the objective function D[R; M]. One particularly relevant dependence measure that satisfies DPI is mutual information, a quantity that plays a fundamental role in information theory [29]. For massively parallel experiments such as those in Fig. 2, R is continuous and M is discrete. In these cases, mutual information is given by

    I(θ) = I[R; M] = Σ_M ∫ dR p(M, R) log [ p(M, R) / (p(M) p(R)) ],    (7)

where p(M, R) is the joint distribution of activity predictions and measurements resulting from the model θ. If one is able to estimate p(M, R) from a finite sample of data, mutual information can be used as an objective function for determining θ without assuming any noise function π.
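As an illustration, the following sketch estimates I[R; M] with a naive histogram approximation using equal-mass bins for R. This is not the estimator used in [12] or [28], and, as discussed later, naive estimators of mutual information carry biases of their own.

```python
import numpy as np

def mi_hist(R, M, bins=40):
    """Naive estimate of I[R; M] (in nats) for continuous R and discrete M,
    using equal-mass bins for R."""
    edges = np.quantile(R, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, R, side="right") - 1, 0, bins - 1)
    pR = np.bincount(idx, minlength=bins) / len(R)
    I = 0.0
    for m in np.unique(M):
        pm = np.mean(M == m)
        pR_m = np.bincount(idx[M == m], minlength=bins) / np.sum(M == m)
        ok = pR_m > 0
        I += pm * np.sum(pR_m[ok] * np.log(pR_m[ok] / pR[ok]))
    return I
```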
It should be noted that there are multiple dependence measures D that satisfy DPI. One might wonder whether maximizing multiple different dependence measures would improve on the optimization of mutual information alone. The answer is not so simple. In [28] it was shown that if the correct model θ* is within the space of models under consideration, then, in the large data limit, maximizing mutual information is equivalent to simultaneously maximizing every dependence measure that satisfies DPI. On the other hand, one rarely has any assurance that the correct model θ* is within the space of parameterized models one is considering. In this case, considering different DPI-satisfying measures might provide a test for whether θ* is noticeably outside the space of parameterized models. To our knowledge, this potential approach to the model selection problem has yet to be demonstrated.

Relationship between likelihood and mutual information
A third inference approach is to admit that we do not know the noise function π a priori, and to fit both θ and π simultaneously by maximizing L(θ, π) over this pair. It is easy to see why this makes sense: the division of the inference problem into first measuring π, then learning θ using that inferred π, is somewhat artificial. The process that maps S to M is determined by both θ and π and thus, from a probabilistic point of view, it makes sense to maximize likelihood over both of these quantities simultaneously.
We now show that, in the large N limit, maximizing likelihood over both θ and π is equivalent to maximizing the mutual information between model predictions and measurements. Here we follow the argument given in [28]. In the large N limit, likelihood can be written

    L(θ, π) = I(θ) − D(θ, π) − H[M],

where

    D(θ, π) = ∫ dR p(R) Σ_M p(M|R) log [ p(M|R) / π(M|R) ]

is the Kullback-Leibler divergence between the assumed noise function π and the observed noise function p(M|R), averaged over p(R), and H[M] = −Σ_M p(M) log p(M) is the entropy of the measurements, which does not depend on θ. To maximize L(θ, π) it therefore suffices to maximize I(θ) over θ alone, then to set the noise function π(M|R) equal to the empirical noise function p(M|R), which causes D(θ, π) to vanish. Thus, when we are uncertain about the noise function π, we need not despair. We can, if we like, simply learn π at the same time that we learn θ. We need not explicitly model π in order to do this; it suffices instead to maximize the mutual information I(θ) over θ alone.
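This decomposition can be checked numerically. The following sketch, on hypothetical toy data with binary M and logistic noise, estimates the empirical noise function p(M|R) by binning R and verifies that L(θ, π) + H[M] equals I[R; M] exactly when π is set to p(M|R) (so that D vanishes); all quantities are computed with the same binned counts.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: continuous prediction R, binary measurement M with logistic noise.
R = rng.normal(size=50000)
M = (rng.random(R.size) < 1 / (1 + np.exp(-R))).astype(int)

# Discretize R and estimate the empirical noise function p(M=1|R) per bin.
nbins = 40
edges = np.quantile(R, np.linspace(0, 1, nbins + 1))
idx = np.clip(np.searchsorted(edges, R, side="right") - 1, 0, nbins - 1)
p1 = np.array([M[idx == k].mean() for k in range(nbins)])

# L(theta, pi) with pi set to the empirical noise function p(M|R) ...
L_emp = np.mean(np.log(np.where(M == 1, p1[idx], 1 - p1[idx])))
# ... plus H[M] ...
pM1 = M.mean()
H_M = -(pM1 * np.log(pM1) + (1 - pM1) * np.log(1 - pM1))
# ... equals the (binned) mutual information I[R; M], since D(theta, pi) = 0.
I = 0.0
for m in (0, 1):
    pm = np.mean(M == m)
    pR_m = np.bincount(idx[M == m], minlength=nbins) / np.sum(M == m)
    pR = np.bincount(idx, minlength=nbins) / len(R)
    ok = pR_m > 0
    I += pm * np.sum(pR_m[ok] * np.log(pR_m[ok] / pR[ok]))
print(L_emp + H_M, I)   # identical up to floating point
```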
The connection between mutual information and likelihood can further be seen in a quantity called the "noise-averaged" likelihood. This quantity was first described for the analysis of microarray data [27]; see also [28]. The central idea is to put an explicit prior on the space of possible noise functions, then compute likelihood after marginalizing over these noise functions. Explicitly, the per-datum log noise-averaged likelihood L_na(θ) is related to L(θ, π) via

    L_na(θ) = (1/N) log ∫ dπ p(π) e^{N L(θ, π)}.

We will refer to L_na simply as "noise-averaged likelihood" in what follows. Under fairly general conditions, one finds that noise-averaged likelihood is related to mutual information via

    L_na(θ) = I(θ) − H[M] + ∆(θ).    (12)

Here, the effect of the noise function prior p(π) is absorbed entirely by the term ∆(θ). Under very weak assumptions, ∆(θ) vanishes in the N → ∞ limit, and thus p(π) becomes irrelevant for the inference problem [27,28].

Diffeomorphic modes
Mutual information has a mathematical property that is important to account for when using it as an objective function: the mutual information between any two variables is unchanged by an invertible transformation of either variable. So if a change in model parameters, θ → θ′, results in a change in model predictions R → R′ that preserves the rank order of these predictions, then

    I[R′; M] = I[R; M],

and θ and θ′ are judged to be equally valid. By using mutual information as an objective function, we are therefore unable to constrain any parameters of θ that, when changed, produce invertible transformations of model predictions. Such parameters are called "diffeomorphic parameters" or "diffeomorphic modes" [28]. The distinction between diffeomorphic and nondiffeomorphic modes is illustrated in Fig. 5.
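This invariance is easy to verify numerically with the mi_hist estimator sketched above: because it uses equal-mass (quantile) bins, any rank-preserving transformation of R leaves the bin assignments, and hence the estimate, exactly unchanged. The data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
R = rng.normal(size=20000)
M = (R + rng.normal(size=R.size) > 0).astype(int)

print(mi_hist(R, M))                  # I[R; M]
print(mi_hist(np.exp(3 * R + 1), M))  # identical: rank order is preserved
```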

Criterion for diffeomorphic modes
Following [28], we now derive a criterion that can be used to identify all of the diffeomorphic modes of a model θ. Consider an infinitesimal change in model parameters θ → θ + dθ, where the components of dθ are specified by

    dθ_i = ε v_i(θ)

for an infinitesimal number ε and some vector field v in θ-space. This change in θ will produce a corresponding change in model predictions R → R + dR, where

    dR = ε Σ_i v_i (∂R/∂θ_i).

In general, the derivative ∂R/∂θ_i can have arbitrary dependence on the underlying sequence S. This transformation will preserve the rank order of R-values only if dR is the same for all sequences having the same value of R. The change dR must therefore be a function of R and have no other dependence on S. A diffeomorphic mode is a vector field v^dif(θ) that has this property at all points in parameter space. Specifically, a vector field v^dif(θ) is a diffeomorphic mode if and only if there is a function h(R, θ) such that

    Σ_i v_i^dif(θ) (∂R/∂θ_i) = h(R, θ)    (16)

for all sequences S.

Diffeomorphic modes of linear models
As a simple example, consider a situation in which each sequence S is a D-dimensional vector and R is an affine function of S, i.e.,

    R = θ_0 + Σ_{i=1}^D θ_i S_i

for model parameters θ = {θ_0, θ_1, ..., θ_D}. The criterion in Eq. (16) then gives

    v_0 + Σ_{i=1}^D v_i S_i = h(R, θ).

Because the left-hand side is linear in S and R is linear in S, the function h(R, θ) must be linear in R. Thus, h must have the form

    h(R, θ) = a(θ) + b(θ) R

for some functions a(θ) and b(θ). The corresponding diffeomorphic mode is

    v_0^dif = a(θ) + b(θ) θ_0,    v_i^dif = b(θ) θ_i  (i = 1, ..., D),

which has two degrees of freedom. Specifically, the a component of v^dif corresponds to adding a constant to R, while the b component corresponds to multiplying R by a constant. Note that if we had instead chosen R = Σ_{i=1}^D θ_i S_i, i.e., left out the constant component θ_0, then there would be only one diffeomorphic mode, corresponding to multiplication of R by a constant. This fact will be used when we analyze the Gaussian selection model in Section 8.
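The following sketch verifies these statements numerically for a hypothetical affine model: transforming θ along the two diffeomorphic modes changes predictions only through R → a + bR, while a generic perturbation scrambles the rank order of predictions.

```python
import numpy as np

rng = np.random.default_rng(3)
D = 5
theta = rng.normal(size=D + 1)               # (theta_0, theta_1, ..., theta_D)
S = rng.normal(size=(1000, D))
R = theta[0] + S @ theta[1:]

a, b = 0.7, 2.3                              # arbitrary mode amplitudes
theta2 = np.concatenate([[a + b * theta[0]], b * theta[1:]])
R2 = theta2[0] + S @ theta2[1:]
assert np.allclose(R2, a + b * R)            # dR depends on S only through R

theta3 = theta.copy()
theta3[1] += 0.5                             # a generic, nondiffeomorphic change
R3 = theta3[0] + S @ theta3[1:]
print(np.array_equal(np.argsort(R3), np.argsort(R)))   # False: rank order broken
```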

Diffeomorphic modes of a biophysical model of transcriptional regulation
Diffeomorphic modes can become less trivial in more complicated situations. Consider the biophysical model of transcriptional regulation by the E. coli lac promoter (Fig. 3). This model was fit to Sort-Seq data in [12]. The form of this model is as follows. Let S denote a 4 x D matrix representing a DNA sequence of length D and having elements

    S_bl = 1 if base b occurs at position l, and 0 otherwise,

where b ∈ {A, C, G, T} and l = 1, 2, ..., D. The binding energy Q of CRP to DNA was modeled in [12] as an "energy matrix": each position in the DNA sequence was assumed to contribute additively to the overall energy. Specifically,

    Q(S) = θ_Q^0 + Σ_{b,l} θ_Q^{bl} S_bl,

where {θ_Q^0, θ_Q^{bl}} are the parameters of this energy matrix. Similarly, the binding energy P of RNAP to DNA was modeled as

    P(S) = θ_P^0 + Σ_{b,l} θ_P^{bl} S_bl.

Both energies were taken to be in thermal units (k_B T). The rate of transcription R resulting from these binding energies was assumed to be proportional to the occupancy of RNAP at its binding site. This is given by

    R = R_max (e^{-P} + e^{-P-Q-γ}) / (1 + e^{-Q} + e^{-P} + e^{-P-Q-γ}),    (24)

where γ is the interaction energy between CRP and RNAP (again in units of k_B T).
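The following is a minimal sketch of Eq. (24) in code, with hypothetical parameter values; the actual parameters in [12] were of course inferred from data.

```python
import numpy as np

rng = np.random.default_rng(4)
D = 75
# Hypothetical energy matrices (4 x D) and offsets, in units of k_B T.
theta_Q = rng.normal(size=(4, D)); theta_Q0 = -2.0
theta_P = rng.normal(size=(4, D)); theta_P0 = 5.0
gamma = -3.0        # CRP-RNAP interaction energy (negative = attractive)
R_max = 1.0

def one_hot(seq):
    """4 x D matrix S with S[b, l] = 1 if base b occurs at position l."""
    S = np.zeros((4, len(seq)))
    for l, base in enumerate(seq):
        S["ACGT".index(base), l] = 1.0
    return S

def transcription_rate(seq):
    S = one_hot(seq)
    Q = theta_Q0 + np.sum(theta_Q * S)       # CRP binding energy
    P = theta_P0 + np.sum(theta_P * S)       # RNAP binding energy
    w = np.exp(-P) + np.exp(-P - Q - gamma)  # RNAP-occupied Boltzmann weights
    Z = 1 + np.exp(-Q) + w                   # all four promoter states
    return R_max * w / Z                     # occupancy of RNAP
```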
Because the binding sites for CRP and RNAP do not overlap, one can learn the parameters θ_Q and θ_P from data separately by independently maximizing I[Q; M] and I[P; M]. Doing this, however, leaves undetermined the overall scale of each energy matrix as well as the chemical potentials θ_P^0 and θ_Q^0. The reason is that the energy scale and chemical potential are diffeomorphic modes of energy matrix models and therefore cannot be inferred by maximizing mutual information.
However, if Q and P are inferred together by maximizing I[R; M] instead, one is now able to learn both energy matrices with a physically meaningful energy scale. The chemical potential of CRP, θ_Q^0, is also determined. The only parameters left unspecified are the chemical potential of RNA polymerase, θ_P^0, and the maximal transcription rate R_max. The reason for this is that in the formula for R in Eq. (24) the energies P and Q combine in a nonlinear way. This nonlinearity eliminates three of the four diffeomorphic modes of P and Q. See [28] for the derivation of this result.

Dual modes of the noise function
Diffeomorphic transformations of model parameters can be thought of as being equivalent to certain transformations of the noise function π. Consider the transformation of model parameters

    θ_i → θ′_i = θ_i + ε v_i,    (26)

where ε is an infinitesimal number and v is a vector in θ-space. For any sequence S, this transformation induces a transformation of the model prediction

    R → R′ = R + ε Σ_i v_i (∂R/∂θ_i).

To see the effect this transformation has on likelihood, we rewrite Eq. (4) as

    L(θ, π) = ⟨ log π(M|R) ⟩_data,

where ⟨·⟩_data indicates an average taken over the measurements M_n and predictions R_n for all of the sequences S_n in the data set. The change in likelihood resulting from Eq. (26) is therefore given by

    δL = ε ⟨ (∂ log π(M|R)/∂R) Σ_i v_i (∂R/∂θ_i) ⟩_data.    (28)

Now suppose that there is a noise function π′ that has an equivalent effect on likelihood, i.e., L(θ, π′) = L(θ′, π) for all possible data sets {S_n, M_n}. We say that this transformation of the noise function π → π′ is "dual" to the transformation θ → θ′ of model parameters. The transformed noise function will necessarily have the form

    log π′(M|R) = log π(M|R) + ε ṽ(M, R)

for some function ṽ(M, R). To determine ṽ we consider the transformation of likelihood induced by π → π′:

    δL = ε ⟨ ṽ(M, R) ⟩_data.    (31)

Comparing Eq. (28) and Eq. (31), we see that π → π′ will be dual to θ → θ′ for all possible data sets if and only if

    (∂ log π(M|R)/∂R) Σ_i v_i (∂R/∂θ_i) = ṽ(M, R)    (32)

for all sequences S. For a general choice of vector v, no function ṽ will exist that satisfies Eq. (32). The reason is that ∂R/∂θ_i will typically depend on the sequence S independently of the value of R. In other words, for a fixed value of M and R, the left-hand side of Eq. (32) will retain a dependence on S. The right-hand side, however, cannot have such a dependence. The converse is also true: for a general choice of the function ṽ, no vector v will exist such that Eq. (32) is satisfied for all sequences. This is evident from the simple fact that v is a finite-dimensional vector while ṽ is a function of the continuous quantity R and therefore has an infinite number of degrees of freedom.

In fact, Eq. (32) will have a solution if and only if

    Σ_i v_i^dif (∂R/∂θ_i) = h(R, θ)

for some function h. Here we have added the superscript "dif" because this is precisely the definition of a diffeomorphic mode given in Eq. (16). In this case, the function ṽ^dif dual to this diffeomorphic mode v^dif is seen to be

    ṽ^dif(M, R) = (∂ log π(M|R)/∂R) h(R, θ).

These findings are summarized by the Venn diagram in Fig. 6. Arbitrary transformations of the model parameters θ will alter likelihood in a way that cannot be imitated by any change to the noise function π. The reverse is also true: most changes to π cannot be imitated by a corresponding change in θ. However, a subset of transformations of θ are equivalent to corresponding dual transformations of π. These transformations are precisely the diffeomorphic transformations of θ. This partial duality between θ and π has a simple interpretation: the choice of how we parse an experiment into an activity model θ and a noise function π is not unique. The ambiguity in this choice is parameterized by the diffeomorphic modes of θ and the dual modes of π.

Fig. 6: Duality between diffeomorphic modes of θ and dual modes of π. Altering the model parameters θ will typically change L(θ, π) in a way that cannot be recapitulated by changes in the noise function π. Similarly, changes in π cannot typically be imitated by changes in θ. However, diffeomorphic transformations of θ affect L(θ, π) in the exact same way that dual transformations of π do. The diffeomorphic modes of θ and the dual modes of π can therefore be thought of as lying within the intersection of θ and π.
Error bars from likelihood, mutual information, and noise-averaged likelihood

We now consider the consequences of performing inference using various objective functions at large but finite N. Specifically, we discuss the optimal parameters and corresponding error bars that are found by sampling θ from posterior distributions of the form

    p(θ|data) ∝ e^{N F(θ)}    (35)

for the following choices of the objective function F(θ): likelihood computed with the correct noise function, L(θ, π*); likelihood computed with an incorrect noise function, L(θ, π′); mutual information, I(θ); and noise-averaged likelihood, L_na(θ).

To streamline notation, we will use ⟨·⟩ to denote averages computed in multiple different contexts. In each case, the appropriate context will be specified by a subscript. As above, ⟨·⟩_data will denote averaging over a specific data set {S_n, M_n}_{n=1}^N. ⟨·⟩_real will indicate averaging over an infinite number of data set realizations. ⟨·⟩_S, ⟨·⟩_{S,M}, ⟨·⟩_{S|R}, and ⟨·⟩_{S|R,M} will respectively denote averages over the distributions p(S), p(S, M), p(S|R), and p(S|R, M), the empirical distributions obtained in the infinite data limit. ⟨·⟩_θ will indicate an average computed over parameter values θ sampled from the posterior distribution p(θ|data). Subscripts on cov(·) or var(·) should be interpreted analogously.

Fig. 7: Parameter constraints provided by different objective functions. (a) Likelihood computed with the correct noise function π* yields a posterior that is consistent with θ* and constrains all directions in parameter space. (b) Likelihood with a noise function π′ that differs arbitrarily from π* will, in general, lead to a posterior distribution that is inconsistent with θ* along both diffeomorphic and nondiffeomorphic modes. (c) Likelihood with a noise function π′ that differs from π* only along a dual mode ṽ^dif leads to a posterior that is inconsistent with θ* along the diffeomorphic mode v^dif (parallel to dashed line), but consistent with θ* in all other directions (perpendicular to dashed line). (d) Using mutual information gives a posterior that is consistent with θ*; this posterior places constraints similar to likelihood along nondiffeomorphic modes but places no constraints whatsoever along diffeomorphic modes. (e) Using noise-averaged likelihood results in a posterior distribution similar to mutual information but with weak constraints on diffeomorphic modes resulting from the noise function prior p(π).

Likelihood
Consider Eq. (35) with F(θ) = L(θ, π*) at large but finite N. The posterior distribution p(θ|data) will, in general, be maximized at some choice of parameters θ^o that deviates randomly from the correct parameters θ*. At large N, p(θ|data) will become sharply peaked about θ^o with a peak width governed by the Hessian of likelihood; specifically,

    cov_θ(θ_i, θ_j) = −(1/N) (H^{-1})_{ij},

where

    H_ij = ∂²L(θ, π*)/∂θ_i ∂θ_j |_{θ*}

is the Hessian of the likelihood. It is also readily shown (see Appendix A) that this peak width is consistent with the correct parameters θ*, in the sense that

    ⟨ (θ^o_i − θ*_i)(θ^o_j − θ*_j) ⟩_real = −(1/N) (H^{-1})_{ij}.    (38)

In Appendix A we show that the Hessian of likelihood is given by

    H_ij = −⟨ J(R) (∂R/∂θ_i)(∂R/∂θ_j) ⟩_S,    (39)

where

    J(R) = Σ_M π*(M|R) [ ∂ log π*(M|R)/∂R ]²    (40)

is the Fisher information of the noise function π*. This Fisher information is a nonnegative measure of how sensitive our experiment is in the vicinity of R. We thus see that, as long as the set of vectors ∂R/∂θ_i spans all directions in parameter space, the Hessian matrix H_ij will be nonsingular. Using F(θ) = L(θ, π*) will therefore put constraints on all directions in parameter space, and these constraints will shrink with increasing data as N^{-1/2}. This situation is illustrated in Fig. 7a.

Now consider what happens if instead we use a noise function π′ that deviates from π* in a small but arbitrary way. Specifically, let

    log π′(M|R) = log π*(M|R) + ε f(M, R)

for some function f(M, R) and small parameter ε. It is readily shown (see Appendix A) that the maximum likelihood parameters θ′ will deviate from θ* by an amount

    ⟨θ′_i − θ*_i⟩_real = −ε Σ_j (H^{-1})_{ij} w_j,  where  w_j = ⟨ (∂f(M, R)/∂R)(∂R/∂θ_j) ⟩_{S,M}.    (42)

This expected deviation does not depend on N and will therefore not shrink to zero in the large N limit. Indeed, for any choice of ε > 0, there will always be an N large enough such that this bias in θ′ dominates over the uncertainty due to finite sampling. Is there any restriction on the types of biases in θ′ that can be produced by the choice of incorrect noise function π′? In general, no. Because the Hessian matrix H is nonsingular, one can always find a vector w such that the deviation of θ′ from θ* in Eq. (42) points in any chosen direction of θ-space. And as long as the functions

    ⟨ ∂R/∂θ_i ⟩_{S|R,M}    (43)

are linearly independent for different indices i, a function f can always be found that generates the vector w.
We therefore see that arbitrary errors in the noise function will bias the inference of model parameters in arbitrary directions. This fact presents a major concern for standard likelihood-based inference: if you assume an incorrect noise function π, the parameters θ that you then infer will, in general, be biased in an unpredictable way. Moreover, the magnitude of this bias will be directly proportional to the magnitude of the error in the log of your assumed noise function. This problem is illustrated in Fig. 7b.
There is a case that deserves some additional consideration. Suppose we use a noise function π′ that differs from π* only along a dual mode ṽ^dif, i.e.,

    log π′(M|R) = log π*(M|R) + ε ṽ^dif(M, R).    (44)

The maximum likelihood parameters θ′ of L(θ, π′) will still deviate from θ* by an amount that does not shrink to zero in the N → ∞ limit. However, this bias in parameter values will be restricted to the diffeomorphic mode v^dif to which ṽ^dif is dual, i.e.,

    ⟨θ′_i − θ*_i⟩_real = −ε v_i^dif.    (45)

This state of affairs is not so bad, since the incorrect noise function will lead to model parameters that are inaccurate only along modes that we already know we cannot learn from the data. This situation is illustrated in Fig. 7c; see Appendix A for the derivation of Eq. (45).

Mutual information
The constraints on parameters imposed by using mutual information I(θ) as the objective function F(θ) in Eq. (35) are determined by the Hessian

    K_ij = ∂²I(θ)/∂θ_i ∂θ_j |_{θ*}.

Appendix B provides a detailed derivation of this Hessian, which after some computation is found to be given by

    K_ij = −⟨ J(R) [ ⟨(∂R/∂θ_i)(∂R/∂θ_j)⟩_{S|R} − ⟨∂R/∂θ_i⟩_{S|R} ⟨∂R/∂θ_j⟩_{S|R} ] ⟩_S.    (47)

Comparing Eq. (47) and Eq. (39), we see that for any vector v in parameter space,

    0 ≥ Σ_{ij} K_ij v_i v_j ≥ Σ_{ij} H_ij v_i v_j.

Likelihood is thus seen to constrain parameters in all directions at least as much as mutual information does. As expected, mutual information provides no constraint whatsoever in the direction of any diffeomorphic mode v^dif of the model, since

    Σ_{ij} K_ij v_i^dif v_j^dif = 0.

The converse is also true: if there is no constraint on parameters along v, then v must be a diffeomorphic mode. This is because

    Σ_{ij} K_ij v_i v_j = −⟨ J(R) var_{S|R}( Σ_i v_i ∂R/∂θ_i ) ⟩_S.    (50)

Because J(R) is positive almost everywhere, the right-hand side of Eq. (50) can vanish only if Σ_i v_i ∂R/∂θ_i does not differ between any two sequences that have the same R value. There must therefore exist a function h(R) such that h(R) = Σ_i v_i ∂R/∂θ_i for all sequences S. This is precisely the requirement in Eq. (16) that v be a diffeomorphic mode.
However, except along diffeomorphic modes, we can generally expect that the constraints provided by likelihood and by mutual information will be of the same magnitude. This situation is illustrated in Fig. 7d. Indeed, in the next section we will see an explicit example where all nondiffeomorphic constraints imposed by mutual information are the same as those imposed by likelihood.
Before proceeding, we note that the relationship between the Hessians of likelihood and mutual information suggests an analogy to fluid mechanics. Consider a trajectory in parameter space given by θ_i(t) = t v_i, where t is time and v is a velocity vector pointing in the direction of motion. This motion in parameter space will induce a motion in the prediction R(t) that the model provides for every sequence S. The set of sequences {S_n} thus presents us with a dynamic cloud of "particles" moving about in R-space. At t = 0, the quantity ⟨Ṙ²⟩_{S|R} will be proportional to the average kinetic energy of particles at location R. The quantity ⟨Ṙ⟩²_{S|R} will be proportional to the (per particle) kinetic energy of the bulk fluid element at R, a quantity that does not count energy due to thermal motion. In this way we see that −Σ_{ij} H_ij v_i v_j is a weighted tally of total kinetic energy, whereas −Σ_{ij} K_ij v_i v_j corresponds to a tally of internal thermal energy only, the kinetic energy of bulk motion having been subtracted out.

Noise-averaged likelihood
Noise-averaged likelihood provides constraints in between those of likelihood, computed using the correct noise function, and those of mutual information. This is illustrated in Fig. 7e. Whereas mutual information provides no constraints whatsoever on the diffeomorphic modes of θ, noise-averaged likelihood provides weak constraints in these directions. These soft constraints reflect the Hessian of ∆(θ) in Eq. (12). The constraints along diffeomorphic modes, however, have an upper bound on how tight they can become in the N → ∞ limit. This is because such constraints only reflect our prior p(π) on the noise function, not the information we glean from data.

Worked example: Gaussian selection
The above principles can be illustrated in the following analytically tractable model of a massively parallel experiment, which we call the "Gaussian selection model." In this model, our experiment starts with a large library of "DNA" sequences S, each of which is actually a D-dimensional vector drawn from a Gaussian probability distribution

    p_lib(S) = (2π)^{-D/2} exp( −|S − µ|²/2 ).

Here, µ is a D-dimensional vector defining the average sequence in the library. (For the sake of simplicity we set the covariance matrix of this distribution equal to the identity matrix; the more general case of a non-identity covariance matrix yields the same basic results. Also, we note that, while approximating discrete DNA sequences by continuous vectors might seem crude, it is only the marginal distributions p(R|M) that matter for the inference problem. Most of the quantities R that one encounters in practice are computed by summing up contributions from a large number of different nucleotide positions. In such cases, the marginal distributions p(R|M) will often be nearly continuous and virtually indistinguishable from the marginal distributions one might obtain from a Gaussian sequence library.)

From this library we extract sequences into two bins, labeled M = 0 and M = 1. We fill the M = 0 bin with sequences sampled indiscriminately from the library. The M = 1 bin is filled with sequences sampled from this library with relative probability

    p_select(S) ∝ exp( a* + b* R* ),

where the activity R* is defined as the dot product of S with a D-dimensional vector θ*, i.e.,

    R* = S · θ*.

We use N_M to denote the number of sequences in each bin M, along with N = N_0 + N_1. All of our calculations are performed in the limit where N_1 is large but N_0 is far larger. More specifically, we assume that exp(a* + b* R*) << 1 everywhere that both p(S|M = 0) and p(S|M = 1) are significant. We use ε to denote the ratio

    ε = N_1 / N_0,

and all of our calculations are carried out only to first order in ε. This model experiment is illustrated in Fig. 8.

Fig. 8: Illustration of the Gaussian selection model of a massively parallel experiment. Each assayed sequence in this model is a D-dimensional vector. The library (corresponding to bin M = 0) consists of N_0 sequences S drawn from a Gaussian distribution p_lib(S) that is centered on a specific sequence µ. Bin M = 1 consists of N_1 sequences drawn from the distribution p_lib(S) and then enriched by a factor of exp(b* R*), where R* = S · θ*. This enrichment procedure is analogous to selecting protein-bound DNA sequences, where b* R* is negative the binding energy. Calculations in the text are performed in the N_0 >> N_1 limit.

Our goal is this: given the sampled sequences in the two bins, recover the parameters θ* defining the sequence-function relationship for R*. To do this, we adopt the following model for the sequence-dependent activity R:

    R = S · θ,    (55)

where θ is the D-dimensional vector we wish to infer. From the arguments above and in [28], it is readily seen that the magnitude of θ, i.e., |θ|, is the only diffeomorphic mode of the model: changing this parameter rescales R, which preserves rank order.
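This model is simple to simulate. In the sketch below, bin 1 is sampled directly from the tilted Gaussian of Eq. (57) below (exponential enrichment of a Gaussian merely shifts its mean by b*θ*); all numerical values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
D, N0, N1 = 10, 200000, 2000
mu = rng.normal(size=D)
theta_star = rng.normal(size=D)
b_star = 0.5                      # selection strength; a* only sets the yield N1/N0

# Bin 0: indiscriminate sample from the library, p(S|M=0) = N(mu, I).
S0 = mu + rng.normal(size=(N0, D))

# Bin 1: enriching N(mu, I) by exp(b* R*), with R* = S.theta*, tilts the
# Gaussian, shifting its mean: p(S|M=1) = N(mu + b* theta*, I)  [Eq. (57)].
S1 = (mu + b_star * theta_star) + rng.normal(size=(N1, D))

# Resulting data set: (sequence, bin) pairs as in Fig. 2b.
S = np.vstack([S0, S1])
M = np.concatenate([np.zeros(N0, int), np.ones(N1, int)])
```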

Bin-specific distributions
We can readily calculate the conditional sequence distribution p(S|M) for each bin M, as well as the conditional distribution p(R|M) of model predictions. Because the sequences sampled for bin 0 are indiscriminately drawn from p_lib, we have

    p(S|M = 0) = p_lib(S).

The selected distribution of sequences is found to be

    p(S|M = 1) = (2π)^{-D/2} exp( −|S − µ − b*θ*|²/2 ),    (57)

i.e., a Gaussian of the same width whose mean is shifted from µ to µ + b*θ*. The value of ε is found to be related to a*, b*, and θ* via

    ε = exp( a* + b* µ·θ* + (b*)²|θ*|²/2 ).    (58)

Appendix C provides an explicit derivation of Eq. (57) and Eq. (58).
We compute the distribution of model predictions for each bin as follows. For each bin M, this distribution is defined as

    p(R|M) = ∫ dS p(S|M) δ(R − S·θ).

This can be analytically calculated for both of the bins owing to the Gaussian form of each sequence distribution. We find that

    p(R|M = 0) = Normal( µ·θ, |θ|² ),    (60)
    p(R|M = 1) = Normal( µ·θ + b*(θ*·θ), |θ|² ),    (61)

where Normal(mean, variance) denotes a Gaussian distribution in R. See Appendix C for details.

Noise function
To compute likelihood, we must posit a noise function π(M|R). Based on our prior knowledge of the selection procedure, we choose π(M|R) so that

    π(M = 1|R) / π(M = 0|R) = exp( a + bR ),

where a and b are scalar parameters that we might or might not know a priori. This, combined with the normalization requirement Σ_M π(M|R) = 1, gives

    π(M = 1|R) = e^{a+bR} / (1 + e^{a+bR}),    π(M = 0|R) = 1 / (1 + e^{a+bR}).    (63)

This noise function π is correct when a = a* and b = b*. The parameter b is dual to the diffeomorphic mode |θ|, whereas the parameter a is not dual to any diffeomorphic mode.
In the experimental setup used to motivate the Gaussian selection model, the parameter a is affected by many aspects of the experiment, including the concentration of the protein used in the binding assay, the efficiency of DNA extraction from the gel, and the relative amount of PCR amplification used for the bin 0 and bin 1 sequences. In practice, these aspects of the experiment are very hard to control, much less predict. From the results in the previous section, we can expect that if we assume a specific value for a and perform likelihood-based inference, inaccuracies in this value for a will distort our inferred model θ in an unpredictable (i.e., nondiffeomorphic) way. We will, in fact, see that this is the case. The solution to this problem, of course, is to infer θ alone by maximizing the mutual information I(θ); in this case the values for a and b become irrelevant. Alternatively, one can place a prior on a and b, then maximize noise-averaged likelihood L_na(θ). We now analytically explore the consequences of these three approaches.
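Note that, with the noise function of Eq. (63), maximizing L(θ, a, b) jointly over θ, a, and b is precisely logistic regression of M on S. The following sketch (using the simulated S and M from the sketch above, and scikit-learn's off-the-shelf implementation) illustrates this, along with the fact that only a and the product bθ are identifiable:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# log [pi(1|R) / pi(0|R)] = a + b (S . theta), so a logistic fit of M on S
# maximizes L(theta, a, b) jointly; only a and the product b*theta are
# identifiable, the split between b and |theta| being the diffeomorphic mode.
clf = LogisticRegression(C=1e6).fit(S, M)   # C large ~ negligible regularization
bt = clf.coef_.ravel()                      # estimate of b* x theta*
cos = bt @ theta_star / (np.linalg.norm(bt) * np.linalg.norm(theta_star))
print(cos)   # ≈ 1: the direction of theta* is recovered
```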

Likelihood
Using the noise function in Eq. (63), the likelihood L becomes a function of θ, a, and b. Computing L in the N → ∞ and ε → 0 limits, we find that

    L(θ, a, b) = −exp( a + b µ·θ + b²|θ|²/2 ) + ε [ a + b µ·θ + b b* (θ*·θ) ] + O(ε²).    (64)

We now consider the consequences of various approaches for using L(θ, a, b) to estimate θ*. In each case, the inferred optimum will be denoted by a superscript 'o'. Standard likelihood-based inference requires that we assume a specific value for a and for b, then optimize L(θ, a, b) over θ alone by setting

    ∂L/∂θ_i = 0

for each component i. By this criterion we find that the optimal model θ^o is given by a linear combination of θ* and µ:

    θ^o = (1/b) [ (c − 1) µ + c b* θ* ],    (66)

where c is a scalar that solves the transcendental equation

    c = ε exp( −a − b µ·θ^o − b²|θ^o|²/2 ),    (67)

with θ^o given by Eq. (66). See Appendix C for the derivation of this result. Note that c is determined only by the value of a and not by the value of b. Moreover, c = 1 if and only if a = a*.

If our assumed noise function is correct, i.e., a = a* and b = b*, then θ^o = θ*. Thus, maximizing likelihood will identify the correct model parameters. This exemplifies the general behavior illustrated in Fig. 7a. If a = a* but b ≠ b*, our assumed noise function will differ from the correct noise function only in a manner dual to the diffeomorphic mode |θ|. In this case we find that c = 1, and θ^o = (b*/b) θ* is thus proportional but not equal to θ*. This comports with our claim above that the diffeomorphic mode of the inferred model, i.e., |θ^o|, will be biased so as to compensate for the error in the dual parameter b. This finding follows the behavior described in Fig. 7c. If a ≠ a*, however, c ≠ 1. As a result, θ^o is a nontrivial linear combination of θ* and µ, and will thus point in a different direction than θ*. This is true regardless of the value of b. This behavior is illustrated in Fig. 7b: errors in non-dual parameters of the noise function will typically lead to errors in nondiffeomorphic parameters of the activity model.
We now consider the error bars that likelihood places on model parameters. Setting θ = θ^o + δθ and expanding L(θ, a, b) about θ^o, we find that

    L(θ, a, b) = L(θ^o, a, b) − (ε b²/2) Σ_{ij} Λ_ij δθ_i δθ_j,

where, for a = a* and b = b*,

    Λ_ij = δ_ij + (µ + b*θ*)_i (µ + b*θ*)_j.    (70)

Note that all eigenvalues of Λ are greater than or equal to 1. Adopting the posterior distribution p(θ|data) ∝ e^{N L(θ, a, b)} therefore gives a covariance matrix on θ of

    cov_θ(θ_i, θ_j) = (1/(N ε b²)) (Λ^{-1})_{ij},

yielding error bars that scale as (N ε)^{-1/2} in all directions of θ-space. Although Λ_ij will change somewhat if a and b deviate from a* and b*, this same scaling behavior will hold. Therefore, when the noise function is incorrect and N is sufficiently large, the finite bias introduced into θ^o will cause θ* to fall outside the inferred error bars.

Mutual information
In the ε → 0 limit, Eq. (7) simplifies to

    I(θ) = ε ∫ dR p(R|M = 1) log [ p(R|M = 1) / p(R|M = 0) ] + O(ε²).    (73)

The lowest order term on the right-hand side can be evaluated exactly using Eq. (60) and Eq. (61):

    I(θ) = ε (b*)² (θ*·θ)² / (2 |θ|²) + O(ε²).    (74)

See Appendix C for details. Note that the expression on the right is invariant under a rescaling of θ. This reflects the fact that |θ| is a diffeomorphic mode of the model defined in Eq. (55).
To find the model θ^o that maximizes mutual information, we set

    ∂I/∂θ_i = 0

for each component i. The optimal model θ^o must therefore be parallel to θ*, i.e.,

    θ^o ∝ θ*.
Expanding about θ = θ^o + δθ as above (and adopting the normalization |θ^o| = |θ*|, which is permissible because |θ| is diffeomorphic), we find that

    I(θ) = I(θ^o) − (ε (b*)²/2) |δθ_⊥|² + O(ε²),    (77)

where δθ_⊥ is the component of δθ perpendicular to θ*; see Appendix C. Therefore, if we use the posterior distribution p(θ|data) ∝ e^{N I(θ)} to infer θ, we find uncertainties in directions perpendicular to θ* that scale as (N ε)^{-1/2}. These error bars are only slightly larger than those obtained using likelihood, and have the same dependence on N. However, we find no constraint whatsoever on the component of δθ parallel to θ*. These results are illustrated in Fig. 7d.
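These properties can be seen numerically using the simulated data from the sketch in Section 8 together with the mi_hist estimator sketched earlier:

```python
# Model predictions under three candidate parameter vectors
# (S, M, D, theta_star, rng are from the simulation sketch above).
R_true = S @ theta_star
R_scaled = S @ (3.0 * theta_star)          # move along the diffeomorphic mode |theta|
perp = rng.normal(size=D)
perp -= (perp @ theta_star) * theta_star / (theta_star @ theta_star)
R_tilted = S @ (theta_star + 0.5 * perp)   # move along a nondiffeomorphic direction

print(mi_hist(R_true, M))     # maximal
print(mi_hist(R_scaled, M))   # exactly the same: |theta| is unconstrained
print(mi_hist(R_tilted, M))   # generally smaller: this direction is constrained
```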

Noise-averaged likelihood
We can also compute the noise-averaged likelihood, L_na(θ), in the case of a uniform prior on a and b, i.e., p(π) = p(a, b) = C, where C is an infinitesimal constant. We find that

    e^{N L_na(θ)} = C ∫ da db e^{N L(θ, a, b)}.

See Appendix C for details. Thus,

    L_na(θ) = I(θ) − H[M] + ∆(θ) + const,

where the constant (which absorbs C entirely) does not depend on θ, and ∆(θ) is of order 1/N. If we perform Bayesian inference using noise-averaged likelihood, i.e., using p(θ|data) ∝ e^{N L_na(θ)}, we will therefore find in the large N limit that δθ_⊥ is constrained in the same way as if we had used mutual information. The noise function prior we have assumed further results in weak constraints on |θ| that do not tighten as N increases (these reflect N∆(θ), which remains finite as N → ∞). This is represented in Fig. 7e.

Discussion
The systematic study of quantitative sequence-function relationships in biology is just now becoming possible, thanks to the development of a variety of massively parallel experiments. Concepts and methods from statistical physics are likely to prove valuable for understanding this basic class of biological phenomena as well as for learning sequence-function relationships from data.
In this paper we have discussed the problem of learning parametric models of sequence-function relationships from experiments having poorly characterized experimental noise. We have seen that standard likelihood-based inference, which requires an explicit model of experimental noise, will generally lead to incorrect model parameters due to errors in the assumed noise function. By contrast, mutual-information-based inference allows one to learn parametric models without having to assume any noise function at all. Mutual-information-based inference is, however, unable to pin down the values of model parameters along diffeomorphic modes. This behavior reflects a fundamental difference between how diffeomorphic and nondiffeomorphic modes are constrained by data. Diffeomorphic modes arise from arbitrariness in the distinction between the activity model and the noise function. These findings were illustrated using an analytically tractable model for a massively parallel experiment.
The study of quantitative sequence-function relationships still presents many challenges, both theoretical and computational. One major practical issue with the mutual-information-based approach described here is the difficulty of accurately estimating mutual information from data. Although methods are available for doing this [44], it remains unclear whether any are accurate enough to enable computational sampling of the posterior distribution p(θ|data) ∝ e^{N I(θ)}, as suggested here. Moreover, none of these estimation methods is regarded as definitive. We believe this lack of clarity regarding how to estimate mutual information reflects the fact that the density estimation problem itself has never been fully solved, even in one or two dimensions. We are hopeful, however, that field-theoretic methods for estimating probability densities [45,46,47] might help resolve the problem of estimating low-dimensional probability densities as well as estimating mutual information.
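For instance, scikit-learn provides a k-nearest-neighbor estimator applicable to the continuous-R, discrete-M setting considered here; a sketch of its use (on the R_true and M arrays from the earlier simulation) follows, though we make no claim that it is accurate enough for the posterior sampling just described.

```python
from sklearn.feature_selection import mutual_info_classif

# KNN-based estimate of I[R; M] as implemented in scikit-learn. Different
# estimators (histogram, KNN, field-theoretic) can give noticeably
# different values on the same data.
I_knn = mutual_info_classif(R_true.reshape(-1, 1), M, n_neighbors=5)[0]
print(I_knn)
```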
The problem of model selection poses a major theoretical challenge. Ideally, one would like to explore a hierarchy of possible model classes when fitting parametric models to data. However, when considering effective models it is unclear how to move far beyond independent-site models (e.g., energy matrices), because the number of parameters grows exponentially with the length of the sequence. Alternatively, when learning mechanistic models such as the model of the lac promoter featured in Fig. 3, it is unclear how to go about systematically testing different arrangements of binding sites, different protein-protein interactions, and so on. We emphasize that this model prioritization problem is fundamentally theoretical, not computational, and as of now there is little clarity on how to address this matter.
Finally, the geometric structure of sequence-function relationships presents an array of intriguing questions. For instance, very little is known (in any system) about how convex or glassy such landscapes in sequence space are, what their density of states looks like, etc. Most of the biological and evolutionary implications of these aspects of sequence-function relationships also have yet to be worked out. We believe that the methods and ideas of statistical physics may lead to important insights into these questions in the near future.
Appendix A: Maximum likelihood under various noise functions

At the correct noise function π*, likelihood is given by

    L(θ, π*) = ⟨ log π*(M|R) ⟩_data.

Taylor expanding this quantity about θ* gives

    L(θ, π*) = L(θ*, π*) + Σ_i (θ_i − θ*_i) u_i/√N + (1/2) Σ_{ij} (θ_i − θ*_i)(θ_j − θ*_j) H_ij + ... .

We define the random vector u in terms of the coefficient of the linear term of this expansion:

    u_i/√N = ⟨ (∂ log π*(M|R)/∂R)(∂R/∂θ_i) ⟩_data.

Because u_i/√N is defined as a sum of N random terms, and because the mean of these terms vanishes, the covariance ⟨u_i u_j⟩_real will, by the central limit theorem, be given by

    ⟨u_i u_j⟩_real = ⟨ (∂ log π*(M|R)/∂R)² (∂R/∂θ_i)(∂R/∂θ_j) ⟩_{S,M}.    (86)

At θ = θ*, each measurement M will provide no additional information about S beyond that provided by the model prediction R = θ(S). Mathematically this means that

    p(S|R, M) = p(S|R)

for all S, R, and M. Equivalently, the conditional expectation value of any sequence-dependent function f(S) will obey

    ⟨ f(S) ⟩_{S|R,M} = ⟨ f(S) ⟩_{S|R}    (88)

for all M. We use this fact to simplify Eq. (86):

    ⟨u_i u_j⟩_real = ⟨ J(R) (∂R/∂θ_i)(∂R/∂θ_j) ⟩_S,    (89)

where J(R) is the Fisher information from Eq. (40). We compute the Hessian of likelihood as follows:

    H_ij = ⟨ (∂² log π*(M|R)/∂R²)(∂R/∂θ_i)(∂R/∂θ_j) ⟩_{S,M} + ⟨ (∂ log π*(M|R)/∂R)(∂²R/∂θ_i ∂θ_j) ⟩_{S,M}.

The second term on the right-hand side vanishes because of Eq. (88) together with the identity Σ_M π*(M|R) ∂ log π*(M|R)/∂R = 0, while the first term can be simplified using ⟨ ∂² log π*(M|R)/∂R² ⟩_{M|R} = −J(R). We therefore find that

    H_ij = −⟨ J(R)(∂R/∂θ_i)(∂R/∂θ_j) ⟩_S,

which is Eq. (39). Note that, from Eq. (89), ⟨u_i u_j⟩_real = −H_ij.

The optimum θ^o of L(θ, π*) will occur when

    0 = ∂L(θ, π*)/∂θ_i = u_i/√N + Σ_j H_ij (θ^o_j − θ*_j) + ... .

We therefore find that, to lowest order in N^{-1/2},

    θ^o_i − θ*_i = −(1/√N) Σ_j (H^{-1})_{ij} u_j.

The covariance of θ^o is thus given by

    ⟨ (θ^o_i − θ*_i)(θ^o_j − θ*_j) ⟩_real = (1/N) Σ_{kl} (H^{-1})_{ik} ⟨u_k u_l⟩_real (H^{-1})_{lj} = −(1/N)(H^{-1})_{ij},

which is Eq. (38).
In the case of a noise function π′ that differs from π* only along a dual mode, as in Eq. (44), the vector w is given by

    w_i = ⟨ (∂ṽ^dif(M, R)/∂R)(∂R/∂θ_i) ⟩_{S,M} = Σ_j H_ij v_j^dif.

The maximum likelihood parameters θ′ will therefore satisfy

    ⟨θ′_i − θ*_i⟩_real = −ε Σ_j (H^{-1})_{ij} w_j = −ε v_i^dif,

which is Eq. (45). The fact that this uniquely specifies ⟨θ′_i − θ*_i⟩_real follows from the Hessian H being nonsingular.
Appendix B: Gradient and Hessian of mutual information

Here we calculate the gradient and Hessian of mutual information evaluated at θ = θ*. We do this by first computing derivatives of the empirical probability distributions p(R) and p(R, M) with respect to model parameters. The mathematical trick used to do this is adapted from [31]. These results are first applied to likelihood in order to demonstrate their use and correctness. We then use this approach to compute the gradient and Hessian of mutual information. To clarify these derivations, we use r(θ, S), instead of θ(S), to explicitly denote the model prediction R as a function of sequence S and model parameters θ. We also define ∂_i ≡ ∂/∂θ_i and use ∫dS to represent sums over sequences.

How the distribution of model predictions changes with model parameters
The empirical probability distribution of model predictions R is given by

    p(R) = ∫ dS p(S) δ(R − r(θ, S)).

The gradient of this probability distribution with respect to model parameters is computed as follows:

    ∂_i p(R) = −∫ dS p(S) δ′(R − r(θ, S)) ∂_i r = −∂_R [ p(R) ⟨∂_i r⟩_{S|R} ].

Similarly, the Hessian of p(R) is given by

    ∂_i ∂_j p(R) = ∂_R² [ p(R) ⟨∂_i r ∂_j r⟩_{S|R} ] − ∂_R [ p(R) ⟨∂_i ∂_j r⟩_{S|R} ].

Analogous results follow for the gradient and Hessian of the joint distribution p(R, M):

    ∂_i p(R, M) = −∂_R [ p(R, M) ⟨∂_i r⟩_{S|R,M} ],
    ∂_i ∂_j p(R, M) = ∂_R² [ p(R, M) ⟨∂_i r ∂_j r⟩_{S|R,M} ] − ∂_R [ p(R, M) ⟨∂_i ∂_j r⟩_{S|R,M} ].

Gradient and Hessian of likelihood
Likelihood can be expressed in terms of the empirical distribution p(R, M) as

    L(θ, π) = Σ_M ∫ dR p(R, M) log π(M|R).    (121)

Keep in mind that R is just a dummy variable in this integral; the empirical distribution p is the only quantity that depends on θ. The gradient of likelihood is therefore computed as

    ∂_i L = Σ_M ∫ dR [∂_i p(R, M)] log π(M|R)
          = −Σ_M ∫ dR ∂_R [ p(R, M) ⟨∂_i r⟩_{S|R,M} ] log π(M|R)    (122)
          = Σ_M ∫ dR p(R, M) ⟨∂_i r⟩_{S|R,M} ∂_R log π(M|R).    (123)

Note that in going from Eq. (122) to Eq. (123) we used integration by parts. The Hessian of likelihood is computed similarly:

    ∂_i ∂_j L = Σ_M ∫ dR p(R, M) [ ⟨∂_i r ∂_j r⟩_{S|R,M} ∂_R² log π(M|R) + ⟨∂_i ∂_j r⟩_{S|R,M} ∂_R log π(M|R) ].    (125)

This expression is valid for all choices of θ and π.
Restricting our attention now to θ = θ* and π = π*, we see that the second term in Eq. (125) vanishes as it did in Appendix A. Moreover, using ⟨ ∂_R² log π*(M|R) ⟩_{M|R} = −J(R), the first term gives

    ∂_i ∂_j L = −⟨ J(R) ∂_i r ∂_j r ⟩_S,

which is the formula obtained for H_ij in Eq. (39).

Gradient and Hessian of mutual information
The gradient and Hessian computations for mutual information are simplified by expressing mutual information in terms of its component entropies. We write

    I(θ) = H_R + H_M − H_{RM},

where

    H_R = −∫ dR p(R) log p(R),   H_M = −Σ_M p(M) log p(M),   H_{RM} = −Σ_M ∫ dR p(R, M) log p(R, M).

The gradient of H_R is given by

    ∂_i H_R = −∫ dR [∂_i p(R)] log p(R) = −∫ dR p(R) ⟨∂_i r⟩_{S|R} ∂_R log p(R),

and the analogous formula holds for ∂_i H_{RM}. H_M does not depend on θ, so ∂_i H_M = 0. The resulting gradient of mutual information is

    ∂_i I = Σ_M ∫ dR p(R, M) ⟨∂_i r⟩_{S|R,M} ∂_R log p(M|R),

which has the same form as the gradient of likelihood in Eq. (123), with log π(M|R) replaced by the empirical quantity log p(M|R). Differentiating once more, the Hessian of mutual information takes the form

    ∂_i ∂_j I = [ ∂_i ∂_j L ]_{π = p(M|R)} + Λ^R_ij − Λ^{RM}_ij,

where Λ^R_ij and Λ^{RM}_ij collect the additional terms generated by the θ-dependence of p(R) and p(R, M). It is unclear how to simplify this expression at general choices of θ. At θ = θ*, however, the expectation value ⟨∂_i r⟩_{S|R,M} loses all M-dependence (Eq. (88)), and a large number of cancellations occur. Carrying these out, one finds that the Hessian of mutual information at θ = θ* has a rather simple form:

    K_ij = −⟨ J(R) [ ⟨∂_i r ∂_j r⟩_{S|R} − ⟨∂_i r⟩_{S|R} ⟨∂_j r⟩_{S|R} ] ⟩_S,

which is Eq. (47).
12 Appendix C: Gaussian selection model

In deriving Eq. (193) we assumed that e^{a+bR} ≪ 1 for all values of R over which both p(R|M = 0) and p(R|M = 1) have significant support. This assumption necessarily holds in the ε → 0 limit. We have also kept only the lowest-order terms in ε. Note in particular that ⟨e^{a+bR}⟩_{S|M=0} will be of order ε. The second term in Eq. (193) can be read off directly from Eq. (61). From Eq. (60), we see that the first term in Eq. (193) can be computed by completing the square, which yields Eq. (67).
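The completing-the-square step rests on a standard Gaussian identity (shown here for a generic normal distribution with mean μ and variance σ²; the particular parameters of the model in Eq. (60) may differ):
\[
\int \frac{dR}{\sqrt{2\pi\sigma^2}}\; e^{-(R-\mu)^2 / 2\sigma^2}\; e^{\,a + bR} \;=\; e^{\,a + b\mu + b^2 \sigma^2 / 2},
\]
which follows from rewriting the exponent as −(R − μ − bσ²)²/2σ² + bμ + b²σ²/2 and integrating the resulting normalized Gaussian.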
12.5 Derivation of Eq. (70)

From the expression for likelihood in Eq. (64), we compute the Hessian of likelihood. Note that in deriving Eq. (213) we used the expression given in Eq. (58). The expression in Eq. (70) further makes use of the approximation N_1 ≈ εN, which will hold in the ε → 0 limit, together with an additional approximation that holds in the large N limit.
12.6 Derivation of Eqs. (73) and (74)

We derive Eq. (73) as follows. To ease notation a bit, we define p_M(R) = p(R|M).
Because p(M = 1) = ε + O(ε²), the first term in Eq. (219) is the right-hand side of Eq. (73) to lowest order in ε. We now show that the second term is of order ε² and can therefore be ignored. Up to terms of order ε²,
\[
p(R) \;=\; (1 - \epsilon)\, p_0(R) \;+\; \epsilon\, p_1(R).
\]
Eq. (74) is derived along the same lines. The result in Eq. (77) then follows readily by substituting it into the formula for mutual information in Eq. (74) and approximating the Hessian of mutual information at θ^o by the Hessian at θ*.
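The final approximation is the usual quadratic expansion about the information maximum (a sketch; the linear term vanishes because θ* maximizes I in the large N limit):
\[
I(\theta^o) \;\approx\; I(\theta^*) \;+\; \frac{1}{2}\, \big( \theta^o_i - \theta^*_i \big)\, \partial_i \partial_j I \,\Big|_{\theta^*}\, \big( \theta^o_j - \theta^*_j \big).
\]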

Fig. 1: Sequence-function relationships in biology. (a) A sequence-function relationship maps a biological sequence (blue bar) to a biologically relevant activity (yellow star). (b) One of the simplest sequence-function relationships is how the affinity (star) of a transcription factor protein (magenta) for its DNA binding site depends on the sequence of that site (blue). (c) A more complicated sequence-function relationship describes how the rate of mRNA transcription depends on the DNA sequence of a gene's promoter region. At the lac promoter of E. coli (illustrated), this transcription rate (star) depends on how strongly both the transcription factor CRP (purple) and the RNA polymerase holoenzyme (RNAP; orange) bind their respective sites within the promoter region (blue).

Fig. 2: Overview of massively parallel experiments for studying quantitative sequence-function relationships. (a) The input to each experiment is a library of different sequences that one wishes to test. The output is one or more bins of sequences; each sequence in each bin is randomly selected from the library with a weight that depends on a measurement of that sequence's activity (star). (b) The resulting data set consists of a list of (non-unique) sequences, each assigned to either the input library or one of the output bins. (c) Illustration of experimental methods for measuring the sequence-dependent binding energy of purified transcription factor proteins. The input library typically consists of random DNA flanked by constant sequence. This library DNA is mixed with the protein of interest, and binding is allowed to come to equilibrium. DNA bound by protein is then separated from unbound DNA, e.g., by running complexes on a gel (shown), and sequenced along with a sample from the input library. (d) Sort-Seq [12] is a massively parallel experiment that uses a library of partially mutagenized sequences to probe the mechanisms of transcriptional regulation employed by a specific wild type promoter of interest. Mutant promoters are cloned upstream of the GFP gene, and E. coli cells harboring these expression constructs are sorted into bins using FACS. The mutant promoters in each bin, as well as promoters from the input library, are then sequenced.

Fig. 4: Schematic illustration of how likelihood L(θ, π) depends on the model θ and the noise function π in the N → ∞ limit. (a,b) L will typically have a correlated dependence on θ and π. If π is set equal to the correct noise function π*, then L will be maximized by the correct model θ*. However, if π is set to an incorrect noise function π′, L will typically attain a maximum at an incorrect θ′.

Fig. 5: Illustration of diffeomorphic and nondiffeomorphic modes. (a) A diffeomorphic mode v_dif at a point θ in parameter space is a vector that will (regardless of the underlying data) be tangent to a level curve of I(θ). All other vectors (e.g., v_non) correspond to nondiffeomorphic modes. (b) Moving θ along a nondiffeomorphic mode results in a sort of "diffusion" in which the R values assigned to different sequences change rank order. Here, the probability distribution p(R|M) is illustrated (for fixed M) in gray. The motion of individual R values upon such a change in θ is indicated by arrows. (c) Changing θ along a diffeomorphic mode, however, results in a "flow" of R values that maintains their rank order.

Fig. 6: Venn diagram illustrating the degrees of freedom of the likelihood L(θ, π) considered over all possible data sets {S_n, M_n}. Altering the model parameters θ will typically change L(θ, π) in a way that cannot be recapitulated by changes in the noise function π. Similarly, changes in π cannot typically be imitated by changes in θ. However, diffeomorphic transformations of θ will affect L(θ, π) in exactly the same way that dual transformations of π will. The diffeomorphic modes of θ and the dual modes of π can therefore be thought of as lying within the intersection of the two sets of degrees of freedom.

Fig. 7: Posterior distributions on model parameters resulting from various objective functions. Each panel schematically illustrates the posterior distribution p(θ|data) (gray shaded area) as it relates to the correct model θ* (dot) along both diffeomorphic (abscissa) and nondiffeomorphic (ordinate) directions in parameter space. (a) Likelihood with the correct noise function π* leads to a posterior distribution consistent with θ* in all parameters. (b) Likelihood with a noise function π that differs arbitrarily from π* will, in general, lead to a posterior distribution that is inconsistent with θ* along both diffeomorphic and nondiffeomorphic modes. (c) Likelihood with a noise function π that differs from π* only along a dual mode ṽ_dif leads to a posterior that is inconsistent with θ* along the diffeomorphic mode v_dif (parallel to dashed line), but consistent with θ* in all other directions (perpendicular to dashed line). (d) Using mutual information gives a posterior that is consistent with θ*; this posterior places constraints similar to likelihood along nondiffeomorphic modes but places no constraints whatsoever along diffeomorphic modes. (e) Using noise-averaged likelihood results in a posterior distribution similar to mutual information but with weak constraints on diffeomorphic modes resulting from the noise function prior p(π).