Prediction of Structures and Interactions from Genome Information

Chapter in: Integrative Structural Biology with Hybrid Methods

Part of the book series: Advances in Experimental Medicine and Biology (AEMB, volume 1105)

Abstract

Predicting three-dimensional residue-residue contacts from evolutionary information in protein sequences was already attempted in the early 1990s. However, contact prediction accuracies of methods evaluated in CASP experiments before CASP11 remained quite low, typically with <20% true positives. Recently, contact prediction has been significantly improved, to the level that an accurate three-dimensional model of a large protein can be generated on the basis of predicted contacts. This improvement was attained by disentangling direct from indirect correlations in amino acid covariations or cosubstitutions between sites in protein evolution. Here, we review statistical methods for extracting causative correlations and various approaches for describing protein structure, complexes, and flexibility on the basis of predicted contacts.




Appendix

A full description of this appendix can be found in the article (Miyazawa 2017a) submitted to arXiv.

1.1 Inverse Potts Model

1.1.1 A Gauge Employed for \(h_i(a_k)\) and \(J_{ij}(a_k, a_l)\)

Unless otherwise specified, the following gauge is employed; here we call it the q-gauge.

$$\displaystyle \begin{aligned} h_{i}(a_{q})=J_{ij}(a_{k},a_{q})=J_{ij}(a_{q},a_{l})=0 {} \end{aligned} $$
(9.16)

In this gauge, the amino acid \(a_q\) is the reference state for fields and couplings, and \(P_i(a_q)\), \(P_{ij}(a_k,a_q)=P_{ji}(a_q,a_k)\), and \(P_{ij}(a_q,a_q)\) are regarded as dependent variables. A common choice for the reference state \(a_q\) is the most common (consensus) state at each site. Any gauge can be transformed into another by the following transformation.

$$\displaystyle \begin{aligned} J_{ij}^{\mathrm{I}}(a_{k},a_{l})&\equiv J_{ij}(a_{k},a_{l})-J_{ij}(\cdot,a_{l})-J_{ij}(a_{k}, \cdot)+J_{ij}(\cdot, \cdot) {} \end{aligned} $$
(9.17)
$$\displaystyle \begin{aligned} h_{i}^{\mathrm{I}}(a_{k})&\equiv h_{i}(a_{k})-h_{i}(\cdot)+\sum_{j\neq i}(J_{ij}(a_{k}, \cdot)-J_{ij}(\cdot, \cdot)) {} \end{aligned} $$
(9.18)

where “⋅” denotes the reference state, which may be \(a_q\) for each site (q-gauge) or the average over all states (Ising gauge).
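As a minimal sketch of the transformation of Eq. 9.17, one coupling block can be moved into the Ising gauge as follows; the function name and the array layout (one q × q matrix per site pair) are illustrative assumptions, not part of any published implementation.

```python
import numpy as np

def to_ising_gauge(J_pair):
    """Eq. 9.17: transform one q x q coupling block J_ij(a_k, a_l) into the
    Ising gauge, where "." denotes the average over the q states."""
    row_mean = J_pair.mean(axis=1, keepdims=True)   # J_ij(a_k, .)
    col_mean = J_pair.mean(axis=0, keepdims=True)   # J_ij(., a_l)
    return J_pair - row_mean - col_mean + J_pair.mean()
```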

1.1.2 Boltzmann Machine

Fields \(h_i(a_k)\) and couplings \(J_{ij}(a_k,a_l)\) are estimated by iterating the following two-step procedure.

  1. For a given set of \(h_i\) and \(J_{ij}(a_k,a_l)\), marginal probabilities \(P^{\mathrm{MC}}(\sigma_i=a_k)\) and \(P^{\mathrm{MC}}(\sigma_i=a_k,\sigma_j=a_l)\) are estimated by a Markov chain Monte Carlo method (the Metropolis-Hastings algorithm (Metropolis et al. 1953)) or by any other method (for example, the message passing algorithm (Weigt et al. 2009)).

  2. Then, \(h_i\) and \(J_{ij}(a_k,a_l)\) are updated according to the gradient of the negative log-posterior-probability per instance, \(\partial S_0/\partial h_i(a_k)\) or \(\partial S_0/\partial J_{ij}(a_k,a_l)\), multiplied by a parameter-specific weight factor (Barton et al. 2016), \(w_i(a_k)\) or \(w_{ij}(a_k,a_l)\); see Eqs. 9.8 and 9.12.

    $$\displaystyle \begin{aligned} \varDelta h_{i}(a_{k})&=-(P^{\mathrm{MC}}(\sigma_{i}=a_{k})+\frac{\partial R}{\partial h_{i}(a_{k})}-P_{i}(a_{k}))\cdot w_{i}(a_{k}) {} \end{aligned} $$
    (9.19)
    $$\displaystyle \begin{aligned} \varDelta J_{ij}(a_{k},a_{l})&=-(P^{\mathrm{MC}}(\sigma_{i}=a_{k},\sigma_{j}=a_{l})+\frac{\partial R}{\partial J_{ij}(a_{k},a_{l})} \\ &\quad -P_{ij}(a_{k},a_{l}))\cdot w_{ij}(a_{k},a_{l}) {} \end{aligned} $$
    (9.20)

    where the weights are also updated as \(w_i(a_k)\leftarrow f(w_i(a_k))\) and \(w_{ij}(a_k,a_l)\leftarrow f(w_{ij}(a_k,a_l))\) according to the RPROP (Riedmiller and Braun 1993) algorithm; the function f(w) is defined as

    $$\displaystyle \begin{aligned} f(w)\equiv\left\{\begin{array}{ll} \max(w\cdot s_{-},w_{\min}) & \mathrm{if}\ \mathrm{the}\ \mathrm{gradient}\ \mathrm{changes}\ \mathrm{its}\ \mathrm{sign},\\ \min(w\cdot s_{+},w_{\max}) & \mathrm{otherwise} \end{array}\right. {} \end{aligned} $$
    (9.21)

    \(w_{\min }=10^{-3}\), \(w_{\max }=10\), \(s_{-}=0.5\), and \(s_{+}=1.9<1/s_{-}\) were employed (Barton et al. 2016). After being updated, \(h_i(a_k)\) and \(J_{ij}(a_k,a_l)\) may be modified to satisfy a given gauge.

The Boltzmann machine has the merit that model correlations are calculated.
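The following is a minimal sketch of one such update step (Eqs. 9.19, 9.20, and 9.21), assuming the model marginals (from MCMC) and the observed marginals are given as flat NumPy arrays of equal shape; the function name and the flattened parameterization are illustrative assumptions, not a published implementation.

```python
import numpy as np

S_MINUS, S_PLUS = 0.5, 1.9   # s_- and s_+, with s_+ = 1.9 < 1/s_-
W_MIN, W_MAX = 1e-3, 10.0    # bounds w_min and w_max on the step widths

def bm_update(theta, p_mc, p_obs, grad_R, grad_prev, w):
    """One update of fields/couplings theta (Eqs. 9.19 and 9.20), with the
    per-parameter weights w adapted by the RPROP rule of Eq. 9.21.
    Returns new parameters, weights, and the gradient, which should be
    passed back as grad_prev on the next call."""
    grad = p_mc + grad_R - p_obs                       # gradient of S_0
    sign_changed = grad * grad_prev < 0
    w = np.where(sign_changed,
                 np.maximum(w * S_MINUS, W_MIN),       # shrink on a sign change
                 np.minimum(w * S_PLUS, W_MAX))        # grow otherwise (Eq. 9.21)
    return theta - grad * w, w, grad                   # Delta theta = -(grad) * w
```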

1.1.3 Gaussian Approximation for \(P(\sigma)\) with a Normal-Inverse-Wishart Prior

The normal-inverse-Wishart distribution (NIW) is the product of the multivariate normal distribution \((\mathcal {N})\) and the inverse-Wishart distribution \((\mathcal {W}^{-1})\), which are the conjugate priors for the mean vector and for the covariance matrix of a multivariate Gaussian distribution, respectively. The NIW is employed as a prior in GaussDCA (Baldassi et al. 2014), in which the sequence distribution \(P(\sigma)\) is approximated as a Gaussian distribution. In this approximation, the q-gauge is used, and \(P_i(a_q)\), \(P_{ij}(a_k,a_q)=P_{ji}(a_q,a_k)\), and \(P_{ij}(a_q,a_q)\) are regarded as dependent variables; see section “A Gauge Employed for \(h_i(a_k)\) and \(J_{ij}(a_k,a_l)\)”; in GaussDCA, deletion is excluded from the independent variables.

The posterior distribution for the NIW prior is also an NIW. Thus, the cross entropy \(S_0\) can be represented as

$$\displaystyle \begin{aligned} S_{0}(\mu,\varSigma|\{P_{i}\},\{P_{ij}\})&=\frac{-1}{B}\log[\mathcal{N}(\mu|\mu^{B},\varSigma/\kappa^{B})\mathcal{W}^{-1}(\varSigma|\varLambda^{B},\nu^{B}) \\ &\quad (\det(2\pi\varSigma))^{-B/2}(\frac{\kappa}{\kappa^{B}})^{\dim\varSigma/2}\frac{(\det(\varLambda/2))^{\nu/2}}{(\det(\varLambda^{B}/2))^{\nu^{B}/2}}\frac{\varGamma_{\dim\varSigma}(\nu^{B}/2)}{\varGamma_{\dim\varSigma}(\nu/2)}(\det\varSigma)^{-(\nu-\nu^{B})/2}] {} \end{aligned} $$
(9.22, 9.23, 9.24)

where \(\varGamma _{\dim \varSigma }(\nu /2)\) is the multivariate Γ function, μ is the mean vector, and \(\dim \varSigma \) is the dimension of the covariance matrix Σ; \(\dim \varSigma =(q-1)L\) in GaussDCA, where deletion is excluded. The normal and NIW distributions are defined as follows.

$$\displaystyle \begin{aligned} \mathcal{N}(\mu|\mu^{0},\varSigma)&\equiv(\det(2\pi\varSigma))^{-1/2}\exp(-\frac{(\mu-\mu^{0})^{T}{{\varSigma^{-1}}}(\mu-\mu^{0})}{2}) {} \end{aligned} $$
(9.25)
$$\displaystyle \begin{aligned} \mathcal{W}^{-1}(\varSigma|\varLambda,\nu)&\equiv\frac{(\det(\varLambda/2))^{\nu/2}}{\varGamma_{\dim\varSigma}(\nu/2)}(\det\varSigma)^{-(\nu+\dim\varSigma+1)/2}\exp(-\frac{1}{2}\mathrm{Tr}\varLambda\varSigma^{-1}) {} \end{aligned} $$
(9.26)

Parameters \(\mu^B\), \(\kappa^B\), \(\nu^B\), and \(\varLambda^B\) satisfy

$$\displaystyle \begin{aligned} \mu_{i}^{B}(a_{k})&=(\kappa\mu_{i}^{0}(a_{k})+BP_{i}(a_{k}))/(\kappa+B)\ ,\ \kappa^{B}=\kappa+B\ ,\ \nu^{B}=\nu+B {} \end{aligned} $$
(9.27)
$$\displaystyle \begin{aligned} \varLambda_{ij}^{B}(a_{k},a_{l})&=\varLambda_{ij}(a_{k},a_{l})+BC_{ij}(a_{k},a_{l}) \\ &\quad +\frac{\kappa B}{\kappa+B}[(P_{i}(a_{k})-\mu_{i}^{0}(a_{k}))(P_{j}(a_{l})-\mu_{j}^{0}(a_{l}))] {} \end{aligned} $$
(9.28)

where Λ and ν are the scale matrix and the degrees of freedom, respectively, shaping the inverse-Wishart distribution, and C is the given covariance matrix, \(C_{ij}(a_k,a_l)\equiv P_{ij}(a_k,a_l)-P_i(a_k)P_j(a_l)\). The mean values of μ and Σ under the NIW posterior are \(\mu^B\) and \(\varLambda ^{B}/(\nu ^{B}-\dim \varSigma -1)\), and their mode values are \(\mu^B\) and \(\varLambda ^{B}/(\nu ^{B}+\dim \varSigma +1)\), which minimize the cross entropy, that is, maximize the posterior probability. The covariance matrix Σ can be estimated to be exactly the same value by adjusting the value of ν, whichever of the posterior mean or the posterior mode is employed for the estimation of Σ. In GaussDCA the posterior mean estimate was employed, but here the maximum posterior estimate is employed, according to the present formalism.

$$\displaystyle \begin{aligned} (\mu,\varSigma)=\arg\min\limits_{(\mu,\varSigma)}S_{0}(\mu,\varSigma|\{P_{i}\},\{P_{ij}\})=(\mu^{B},\varLambda^{B}/(\nu^{B}+\dim\varSigma+1)) {} \end{aligned} $$
(9.29)

According to GaussDCA, ν is chosen in such a way that \(\varSigma_{ij}(a_k,a_l)\) is nearly equal to the covariance matrix corrected by a pseudocount; \(\nu =\kappa +\dim \varSigma +1\) for the posterior mean estimate in GaussDCA, but \(\nu =\kappa -\dim \varSigma -1\) for the maximum posterior estimate here.

From Eq. 9.15, the estimates of couplings and fields are calculated.

$$\displaystyle \begin{aligned} J_{ij}^{\mathrm{NIW}}(a_{k},a_{l})=-\frac{\partial S_{0}(\{P_{i}\},\{P_{ij}\})}{\partial P_{ij}(a_{k},a_{l})}=-\frac{(\kappa+B+1)}{\kappa+B}(\varSigma^{-1})_{ij}(a_{k},a_{l}) {} \end{aligned} $$
(9.30)

Because the number of instances is far greater than 1 (B ≫ 1), these estimates of couplings are practically equal to the estimates (\(J^{\mathrm{MF}}=-\varSigma^{-1}\)) in the mean field approximation, which was employed in GaussDCA (Baldassi et al. 2014).

$$\displaystyle \begin{aligned} h_{i}^{\mathrm{NIW}}(a_{k})&=-\sum_{j\neq i}\sum_{l}J_{ij}^{\mathrm{NIW}}(a_{k},a_{l})P_{j}(a_{l})-\frac{(\kappa+B+1)}{\kappa+B}\sum_{j}\sum_{l\neq q}(\varSigma^{-1})_{ij}(a_{k},a_{l}) \\ &\quad [\delta_{ij}\frac{\delta_{kl}-2P_{i}(a_{l})}{2}+\frac{\kappa B}{\kappa+B}(P_{j}(a_{l})-\mu_{j}^{0}(a_{l}))] {} \end{aligned} $$
(9.31)

The difference \((h_{i}^{\mathrm {NIW}}(a_{k})-h_{i}^{\mathrm {NIW}}(a_{q}))\) does not converge to \(\log P_{i}(a_{k})/P_{i}(a_{q})\) as \(J^{\mathrm{NIW}}\to 0\), but \(h_{i}^{\mathrm {MF}}(a_{k})- h_{i}^{\mathrm {MF}}(a_{q})\) does; in other words, the mean field approximation gives a better h in the limiting case of no couplings than the present approximation. Barton et al. (2016) reported that the Gaussian approximation generally gave a better generative model than the mean field approximation.

In GaussDCA (Baldassi et al. 2014), \(\mu^0\) and \(\varLambda/\kappa\) were chosen to be as uninformative as possible, i.e., the mean and covariance of a uniform distribution.

$$\displaystyle \begin{aligned} \mu_{i}^{0}(a_{k})=1/q,\quad \frac{\varLambda_{ij}(a_{k},a_{l})}{\kappa}=\frac{\delta_{ij}}{q}(\delta_{kl}-\frac{1}{q}) {} \end{aligned} $$
(9.32)
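As an illustration of Eqs. 9.27, 9.28, 9.29, and 9.30, the maximum posterior coupling estimate can be sketched as follows; the array shapes, the function name, and the default κ are assumptions made for this example, and GaussDCA itself uses the posterior mean with a mean-field inversion rather than this exact code.

```python
import numpy as np

def niw_couplings(P1, C, B, kappa=1.0):
    """Maximum posterior estimate of Sigma and the couplings (Eqs. 9.27-9.30).

    P1    : single-site frequencies, shape (L, q-1) (deletion excluded)
    C     : empirical covariance matrix, shape (L*(q-1), L*(q-1))
    B     : number of (effective) sequences
    kappa : prior strength (an assumed illustrative value)
    """
    L, qm1 = P1.shape
    q = qm1 + 1
    dim = L * qm1                                  # dim Sigma = (q - 1) L
    mu0 = np.full(dim, 1.0 / q)                    # uninformative mean (Eq. 9.32)
    block = np.eye(qm1) / q - np.ones((qm1, qm1)) / q**2
    Lam = kappa * np.kron(np.eye(L), block)        # prior scale matrix (Eq. 9.32)
    dev = P1.ravel() - mu0
    LamB = Lam + B * C + (kappa * B / (kappa + B)) * np.outer(dev, dev)  # Eq. 9.28
    nuB = (kappa - dim - 1) + B                    # nu = kappa - dim - 1; Eq. 9.27
    Sigma = LamB / (nuB + dim + 1)                 # posterior mode (Eq. 9.29)
    return -((kappa + B + 1) / (kappa + B)) * np.linalg.inv(Sigma)  # Eq. 9.30
```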

1.1.4 Pseudo-likelihood Approximation

Symmetric Pseudo-likelihood Maximization

The probability of an instance \(\sigma^\tau\) is approximated as follows by the product of conditional probabilities of observing \(\sigma _{i}^{\tau }\) under the given observations \(\sigma _{j\neq i}^{\tau }\) at all other sites.

$$\displaystyle \begin{aligned} P(\sigma^{\tau})\approx\prod_{i}P(\sigma_{i}=\sigma_{i}^{\tau}|\{\sigma_{j\neq i}=\sigma_{j}^{\tau}\}) {} \end{aligned} $$
(9.33)

Then, the cross entropy is approximated as

$$\displaystyle \begin{aligned} S_{0}(h,J|\{P_{i}\}, \{P_{ij}\})&\approx S_{0}^{\mathrm{PLM}}(h,J|\{P_{i}\},\{P_{ij}\})\equiv\sum_{i}S_{0,i}(h, J|\{P_{i}\}, \{P_{ij}\}) {} \end{aligned} $$
(9.34)
$$\displaystyle \begin{aligned} S_{0,i}(h, J|\{P_{i}\}, \{P_{ij}\})&\equiv\frac{-1}{B}\sum_{\tau}\ell_{i}(\sigma_{i}=\sigma_{i}^{\tau}|\{\sigma_{j\neq i}=\sigma_{j}^{\tau}\},h, J)+R_{i}(h,J) {} \end{aligned} $$
(9.35)

where the conditional log-likelihoods and \(\ell_2\)-norm regularization terms employed in Ekeberg et al. (2013) are

$$\displaystyle \begin{aligned} \ell_{i}(\sigma_{i}=\sigma_{i}^{\tau}|\{\sigma_{j\neq i}=\sigma_{j}^{\tau}\},h, J)&=\log[\frac{\exp(h_{i}(\sigma_{i}^{\tau})+\sum_{j\neq i}J_{ij}(\sigma_{i}^{\tau},\sigma_{j}^{\tau}))}{\sum_{k}\exp(h_{i}(a_{k})+\sum_{j\neq i}J_{ij}(a_{k},\sigma_{j}^{\tau}))}]{}\end{aligned} $$
(9.36)
$$\displaystyle \begin{aligned} R_{i}(h, J)&\equiv\gamma_{h}\sum_{k}h_{i}(a_{k})^{2}+\frac{\gamma_{J}}{2}\sum_{k}\sum_{j\neq i}\sum_{l}J_{ij}(a_{k},a_{l})^{2} {} \end{aligned} $$
(9.37)

The optimum fields and couplings in this approximation are estimated by minimizing the pseudo-cross-entropy, \(S_{0}^{\mathrm {PLM}}\).

$$\displaystyle \begin{aligned} (h^{\mathrm{PLM}},J^{\mathrm{PLM}})=\arg\min\limits_{h,J}S_{0}^{\mathrm{PLM}}(h,J|\{P_{i}\}, \{P_{ij}\}) {} \end{aligned} $$
(9.38)

Equation 9.38 is not invariant under gauge transformation; the \(\ell_2\)-norm regularization terms in Eq. 9.38 favor only a specific gauge, which corresponds to \(\gamma_J\sum_l J_{ij}(a_k,a_l)=\gamma_h h_i(a_k)\), \(\gamma_J\sum_k J_{ij}(a_k,a_l)=\gamma_h h_j(a_l)\), and \(\sum_k h_i(a_k)=0\) for all i, j(> i), k, and l (Ekeberg et al. 2013). \(\gamma_J=\gamma_h=0.01\), a relatively large value independent of B, was employed in Ekeberg et al. (2013). \(\gamma_h=0.01\) but \(\gamma_J=q(L-1)\gamma_h\) were employed in Hopf et al. (2017), in which gapped sites in each sequence were excluded in the calculation of the Hamiltonian \(H(\sigma)\), and therefore q = 20.
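A minimal sketch of the per-site term \(S_{0,i}\) of Eqs. 9.35, 9.36, and 9.37 follows; the alignment encoding, array shapes, and function name are illustrative assumptions, and the default γ values follow Ekeberg et al. (2013). Minimizing the sum of these terms over i with a gradient-based optimizer would give Eq. 9.38.

```python
import numpy as np
from scipy.special import logsumexp

def site_cross_entropy(i, msa, h, J, gamma_h=0.01, gamma_J=0.01):
    """S_0,i of Eq. 9.35: the negative conditional log-likelihood of site i
    (Eq. 9.36) averaged over the B sequences, plus the l2 penalty R_i (Eq. 9.37).

    msa : integer alignment, shape (B, L), states 0..q-1
    h   : fields, shape (L, q);  J : couplings, shape (L, L, q, q)
    """
    B, L = msa.shape
    others = [j for j in range(L) if j != i]
    # Energy of each candidate state a_k at site i, given the rest of the sequence.
    logits = h[i][None, :] + sum(J[i, j][:, msa[:, j]].T for j in others)  # (B, q)
    ll = logits[np.arange(B), msa[:, i]] - logsumexp(logits, axis=1)       # Eq. 9.36
    reg = gamma_h * np.sum(h[i] ** 2) + 0.5 * gamma_J * np.sum(J[i, others] ** 2)
    return -ll.sum() / B + reg
```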

GREMLIN (Kamisetty et al. 2013) employs Gaussian prior probabilities that depend on site pairs.

$$\displaystyle \begin{aligned} R_{i}(h, J)&\equiv\gamma_{h}\sum_{k}h_{i}(a_{k})^{2}+\sum_{k}\sum_{j\neq i}\frac{\gamma_{ij}}{2}\sum_{l}J_{ij}(a_{k},a_{l})^{2} {} \end{aligned} $$
(9.39)
$$\displaystyle \begin{aligned} \gamma_{ij}&\equiv\gamma_{c}(1-\gamma_{p}\log(P_{ij}^{0})) {} \end{aligned} $$
(9.40)

where \(P_{ij}^{0}\) is the prior probability of site pair (i, j) being in contact.

Asymmetric Pseudo-likelihood Maximization

To speed up the minimization of \(S_0\), a further approximation, in which each \(S_{0,i}\) is separately minimized, is employed (Ekeberg et al. 2014), and fields and couplings are estimated as follows.

$$\displaystyle \begin{aligned} J_{ij}^{\mathrm{PLM}}(a_{k},a_{l})&\simeq\frac{1}{2}(J_{ij}^{*}(a_{k},a_{l})+J_{ji}^{*}(a_{l},a_{k})) {} \end{aligned} $$
(9.41)
$$\displaystyle \begin{aligned} (h_{i}^{\mathrm{PLM}},J_{i}^{*})&=\arg\min\limits_{h_{i},J_{i}}S_{0,i}(h,J|\{P_{i}\}, \{P_{ij}\}) {} \end{aligned} $$
(9.42)

It is appropriate to transform the h and J estimated above into some specific gauge, such as the Ising gauge.
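The symmetrization of Eq. 9.41 is a one-liner under the assumption that the independently fitted couplings are stored in an (L, L, q, q) array; the transpose swaps both the site and the state indices.

```python
import numpy as np

def symmetrize(J_star):
    """Eq. 9.41: J_ij(a_k, a_l) = (J*_ij(a_k, a_l) + J*_ji(a_l, a_k)) / 2,
    for couplings stored as an array of shape (L, L, q, q)."""
    return 0.5 * (J_star + J_star.transpose(1, 0, 3, 2))
```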

1.1.5 ACE (Adaptive Cluster Expansion) of Cross-Entropy for Sparse Markov Random Field

The cross entropy \(S_0(\{h_i,J_{ij}\}|\{P_i\},\{P_{ij}\},i,j\in\varGamma)\) of a cluster of sites Γ, which is defined as the negative log-likelihood per instance in Eq. 9.14, is approximately minimized by taking account of sets \(L_k(t)\) of only significant clusters consisting of k sites, whose incremental entropies (cluster cross entropies) \(\varDelta S_{\varGamma}\) are significant (\(|\varDelta S_{\varGamma}|>t\)) (Cocco and Monasson 2011, 2012; Barton et al. 2016).

$$\displaystyle \begin{aligned} S_{0}(\{P_{i},P_{ij}|i,j\in\varGamma\})&\simeq\sum_{l=1}^{|\varGamma|}\sum_{\varGamma'\in L_l(t),\varGamma'\subset\varGamma} \varDelta S_{0}(\{P_{i},P_{ij}|i,j\in\varGamma'\}) {} \end{aligned} $$
(9.43)
$$\displaystyle \begin{aligned} \varDelta S_{0}(\{P_{i},P_{ij}|i,j\in\varGamma\})&\equiv S_{0}(\{P_{i},P_{ij}|i,j\in\varGamma\})-\sum_{\varGamma'\subset\varGamma}\varDelta S_{0}(\{P_{i},P_{ij}|i,j\in\varGamma'\}) {} \end{aligned} $$
(9.44)
$$\displaystyle \begin{aligned} &=\sum_{\varGamma'\subseteq\varGamma}(-1)^{|\varGamma|-|\varGamma'|} {\ S_{0}(\{P_{i},P_{ij}|i,j\in\varGamma'\})} {} \end{aligned} $$
(9.45)

\(L_{k+1}(t)\) is constructed from \(L_k(t)\) by adding a cluster Γ consisting of (k + 1) sites: in the lax case, provided that some pair of size-k clusters \(\varGamma_1,\varGamma_2\in L_k(t)\) satisfies \(\varGamma_1\cup\varGamma_2=\varGamma\); in the strict case, only if \(\varGamma'\in L_k(t)\) for every Γ′ such that Γ′⊂ Γ and |Γ′| = k. Thus, Eq. 9.43 yields sparse solutions. The cross entropies \(S_0(\{P_i,P_{ij}|i,j\in\varGamma'\})\) for clusters of small size are estimated by minimizing \(S_0(\{h_i,J_{ij}\}|\{P_i,P_{ij}\},i,j\in\varGamma')\) with respect to fields and couplings. Starting from a large value of the threshold t (typically t = 1), the cross entropy \(S_0(\{P_i,P_{ij}\}|i,j\in\{1,\ldots,N\})\) is calculated by gradually decreasing t until its value converges. Convergence of the algorithm may be more difficult for alignments of long proteins or those with very strong interactions; in such cases, strong regularization may be employed. A sketch of the lax construction rule is given below.
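This sketch assumes clusters are represented as frozensets of site indices; the function name is illustrative.

```python
from itertools import combinations

def next_clusters_lax(clusters_k, k):
    """Lax rule for constructing L_{k+1}(t): keep a (k+1)-site cluster if it
    is the union of two k-site clusters already in L_k(t). (The strict rule
    would instead require every k-site subset to be in L_k(t).)"""
    new = set()
    for g1, g2 in combinations(clusters_k, 2):
        g = g1 | g2
        if len(g) == k + 1:    # g1 and g2 overlap in exactly k - 1 sites
            new.add(g)
    return new
```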

The following \(\ell_2\)-norm regularization terms are employed in ACE (Barton et al. 2016), and so Eq. 9.43 is not invariant under gauge transformation.

$$\displaystyle \begin{aligned} -\frac{1}{B}\log P_{0}(h,J|i,j\in\varGamma)=\gamma_{h}\sum_{i\in\varGamma}\sum_{k}h_{i}(a_{k})^{2}+\gamma_{J}\sum_{i\in\varGamma}\sum_{k}\sum_{j>i,j\in\varGamma}\sum_{l}J_{ij}(a_{k},a_{l})^{2} {} \end{aligned} $$
(9.46)

\(\gamma_h=\gamma_J\propto 1/B\) was employed (Barton et al. 2016).

The compression of the number of Potts states, \(q_i\leq q\), at each site can be taken into account. All infrequently observed states, or states that contribute insignificantly to the site entropy, can be treated as the same state, and a complete model can be recovered (Barton et al. 2016) by setting \(h_{i}(a_{k})= h_{i}^{\prime}(a_{k^{\prime }})+\log (P_{i}(a_{k})/P_{i}^{\prime }(a_{k^{\prime }}))\) and \(J_{ij}(a_{k},a_{l})=J_{ij}^{\prime }(a_{k^{\prime }},a_{l^{\prime }})\), where the prime (′) denotes a corresponding aggregated state and its potential; see the sketch below.
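A one-line sketch of this recovery of the full fields, assuming the aggregated model's field and the original and aggregate frequencies are given; the names are illustrative.

```python
import numpy as np

def expand_field(h_agg, p_full, p_agg):
    """Recover h_i(a_k) for a state merged into an aggregate state a_k':
    h_i(a_k) = h'_i(a_k') + log(P_i(a_k) / P'_i(a_k'))."""
    return h_agg + np.log(p_full / p_agg)
```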

Starting from the output set of the fields \(h_i(a_k)\) and couplings \(J_{ij}(a_k,a_l)\) obtained from the cluster expansion of the cross entropy, a Boltzmann machine is trained with \(P_i(a_k)\) and \(P_{ij}(a_k,a_l)\) by the RPROP algorithm (Riedmiller and Braun 1993) to refine the parameter values of \(h_i\) and \(J_{ij}(a_k,a_l)\) (Barton et al. 2016); see section “Boltzmann Machine”. This post-processing is also useful because model correlations are calculated.

An appropriate value of the regularization parameter for the trypsin inhibitor was much larger for contact prediction (γ = 1) than for recovering true fields and couplings (\(\gamma=2/B=10^{-3}\)) (Barton et al. 2016), probably because the task of contact prediction requires the relative ranking of interactions rather than their actual values.

1.1.6 Scoring Methods for Contact Prediction

Corrected Frobenius Norm (\(L_{2,2}\) Matrix Norm), \(\mathcal {S}_{ij}^{\mathrm {CFN}}\)

For scoring, plmDCA (Ekeberg et al. 2013, 2014) employs the corrected Frobenius norm of \(J_{ij}^{\mathrm {I}}\) transformed into the Ising gauge, in which \(J_{ij}^{\mathrm {I}}\) does not contain anything that could have been explained by the fields \(h_i\) and \(h_j\); \(J_{ij}^{\mathrm {I}}(a_{k},a_{l})\equiv J_{ij}(a_{k},a_{l})-J_{ij}(\cdot ,a_{l})-J_{ij}(a_{k}, \cdot )+J_{ij}(\cdot , \cdot )\), where \(J_{ij}( \cdot ,a_{l})=J_{ji}(a_{l}, \cdot )\equiv \sum _{k=1}^{q}J_{ij}(a_{k},a_{l})/q\).

$$\displaystyle \begin{aligned} \mathcal{S}_{ij}^{\mathrm{CFN}}\equiv \mathcal{S}_{ij}^{\mathrm{FN}}-\mathcal{S}_{\cdot j}^{\mathrm{FN}}\mathcal{S}_{i\cdot}^{\mathrm{FN}}/\mathcal{S}_{\cdot\cdot}^{\mathrm{FN}},\quad \mathcal{S}_{ij}^{\mathrm{FN}}\equiv\sqrt{\sum_{k\neq \mathrm{gap}}\sum_{l\neq \mathrm{gap}}J_{ij}^{\mathrm{I}}(a_{k},a_{l})^{2}} {} \end{aligned} $$
(9.47)

where “⋅” denotes the average over the indicated variable. This CFN score, with the gap state excluded in Eq. 9.47, performs better (Ekeberg et al. 2014; Baldassi et al. 2014) than both the FN and DI/EC scores (Weigt et al. 2009; Morcos et al. 2011; Marks et al. 2011; Hopf et al. 2012).
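A sketch of this score, assuming Ising-gauge couplings stored in an (L, L, q, q) array with the gap as the last state; the averages here include the zeroed diagonal, which is a simplification relative to typical implementations.

```python
import numpy as np

def cfn_scores(J_I, gap_state=-1):
    """Eq. 9.47: Frobenius norm per site pair with the gap state excluded,
    followed by the average-product correction S_ij - S_i. * S_.j / S_.. .
    J_I : Ising-gauge couplings, shape (L, L, q, q)."""
    q = J_I.shape[-1]
    keep = [k for k in range(q) if k != gap_state % q]
    fn = np.sqrt(np.sum(J_I[:, :, keep][:, :, :, keep] ** 2, axis=(2, 3)))
    np.fill_diagonal(fn, 0.0)
    row, col, tot = fn.mean(axis=1), fn.mean(axis=0), fn.mean()  # S_i., S_.j, S_..
    return fn - np.outer(row, col) / tot
```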


Copyright information

© 2018 Springer Nature Singapore Pte Ltd.


Cite this chapter

Miyazawa, S. (2018). Prediction of Structures and Interactions from Genome Information. In: Nakamura, H., Kleywegt, G., Burley, S., Markley, J. (eds) Integrative Structural Biology with Hybrid Methods. Advances in Experimental Medicine and Biology, vol 1105. Springer, Singapore. https://doi.org/10.1007/978-981-13-2200-6_9
