Journal of Mathematical Chemistry, Volume 52, Issue 2, pp 665–674

A derivation of the Grand Canonical Partition Function for systems with a finite number of binding sites using a Markov chain model for the dynamics of single molecules

  • Johannes W. R. Martini
  • Michael Habeck
  • Martin Schlather
Open Access
Original Paper


Abstract

We use a Markov chain to model the ligand binding dynamics of a single molecule and show that its stationary distribution coincides with the laws of the Grand Canonical Ensemble. This way of deriving the equilibrium laws has the following advantages: Firstly, the derivation is short and does not require knowledge of the Microcanonical, Canonical or Grand Canonical Ensemble. Secondly, it provides a descriptive interpretation of the factors that contribute to the probability of a microstate. In this regard, it also shows that the chemical activity, which cannot be regarded as a probability (since it is not necessarily bounded by one), can be interpreted as the ratio of two probabilities. Thirdly, our approach allows modeling how the system reaches equilibrium. This can be a useful tool for the study of non-equilibrium states.


Keywords: Decoupled sites representation · Ligand binding · Binding polynomial · Grand Canonical Partition Function · Binding energy · Markov chain · Binding dynamics

1 Introduction

We consider the following situation: a target molecule M has n binding sites for a substance L. A certain amount of both substances is solvated in a liquid, at a much higher concentration of ligand L than of M, and the number of free ligand molecules can be measured. The difference between the total and the free number of ligand molecules thus allows us to determine the average number of ligand molecules L bound to a single target molecule M as a function of the concentration (or chemical activity) of L. Experiments of this kind are a classical procedure in chemistry and produce titration curves that characterize the overall binding of L to M. Titration curves for protons binding to amino acids can be found in nearly every biochemistry textbook and have been studied for a hundred years [2, 4, 7, 8, 13]. The mathematical model for titration curves is based on the binding polynomial (bp). It is a function of the chemical activity of the ligand and is derived as a special case of the Grand Canonical Partition Function (GCPF) if molecule M is regarded as a system that can take up a finite number n of particles [1, 12, 14]. Its origin in statistical mechanics reemphasizes that it characterizes stochastic properties of a system: it defines a family of distributions over the number of bound ligands, parameterized by the chemical activity of the ligand (the temperature being fixed). The titration curve, which is the result of the previously described experiments, is obtained by applying the expectation operator to this parameterized family of distributions. However, the GCPF describes only the thermodynamic equilibrium, a steady state of a system consisting of a large number of molecules, in which every single molecule follows its own dynamics of releasing and binding ligands. Thus, it seems natural that the well known equilibrium laws might also be derived by modeling the ligand binding dynamics of a single molecule.
In this work, we derive the GCPF for a system with a finite number of binding sites, starting from a model of the binding dynamics. We use a Markov chain in discrete time and deduce the transition probabilities from some reasonable assumptions about the binding dynamics of the molecule. This approach facilitates the understanding of the equilibrium distribution, in particular the composition of the probabilities of the microstates, and suggests how the chemical activity (which is not necessarily bounded by 1) can be interpreted from a stochastic point of view. Moreover, it allows us to model the system’s way into equilibrium.

2 Binding dynamics of a single molecule as a Markov chain

The binding state of the individual molecule \(M_1\) can be described by a Markov chain on the set of tuples \(K:=\{0,1\}^n\), with \(M_{1,m}\) denoting the state of molecule \(M_1\) after \(m\) “time steps”. The component \(k_i\) of \(M_{1,m}=k=(k_1,\ldots ,k_n)\in K\) indicates whether a ligand molecule occupies site \(i\) (\(k_i=1\)) or not (\(k_i=0\)). We make the following assumptions concerning the ligand binding dynamics to deduce transition probabilities and equilibrium laws:
  • [A1] The time between step \(m\) and \(m+1\) is so short that the binding state of only one site can change. Using the \(\ell _1\)-norm
    $$\begin{aligned} |k|:=\sum \limits _{i=1}^n |k_i| \end{aligned}$$
    this means \(|M_{1,m}-M_{1,m+1}| \le 1\), where, as usual, the difference of the tuples is understood componentwise.
  • [A2] For \(k,l \in K\) with \(|k-l| = 1\), the probability of a transition \(k \mapsto l\) is composed of three factors:

  • [A2-1] the random choice of a binding site that may change its binding state,

  • [A2-2] the probability that the environment provides a ligand molecule or takes it up (depending on the state of the chosen site) and

  • [A2-3] the probability barrier given by the difference of the energies of microstates \(k,l\) of the target molecule.

  • [A3] Since the concentration of L is much higher than that of M, we assume that the binding of the ligand to the individual target molecules occurs stochastically independently. This means that the molecules of type M do not interact, and a small reduction of the number of free ligand molecules, due to an uptake by molecules M, does not affect the probability of [A2-2].

Assumption [A3] guarantees that we can describe the whole system of all target molecules by modeling only one target molecule. In the following, we will specify [A2], which allows us to deduce the matrix of transition probabilities and subsequently its stationary distribution.
  • [A2-1] Since this probability factor describes the choice of a site, there is no need to discriminate between the sites at this point. Consequently, we assume a uniform distribution, which means the first factor equals \(\frac{1}{n}\).

  • [A2-2] If the chosen site is not occupied, the second factor is given by a probability \(\theta _1 \ne 0\), which can be interpreted as the “availability” of the ligand. It incorporates the spatial availability, the geometric orientation of the ligand relative to the binding site, and how “costly” it is to decouple the ligand from its environment (e.g. the energy required to break hydrogen bonds between the ligand and the solvent molecules). If the chosen site is occupied, a probability \(\theta _2 \ne 0\) characterizes the barrier to releasing the ligand molecule. In “most” cases \(\theta _2\) can be considered equal to \(1\); however, e.g. in supersaturated solutions or for ligands of weak solubility, the release of a ligand molecule might be energetically disadvantageous for the environment. Both factors \(\theta _1\) and \(\theta _2\) depend on the ligand concentration and describe the energetic state of the environment.

  • [A2-3] The third and last component \(p_{k,l}\) models the probability barrier given by the energy difference of the target molecule when a ligand is released or taken up. In contrast to [A2-2], this factor is not assumed to depend on the environment, i.e. on the energy state of the solution. We will derive a suitable function that depends on the energy levels of the states \(k\) and \(l\): let \(G(k), G(l)\) denote the energy levels of the states. We are looking for a function \(p:{\mathbb {R}}^2 \longrightarrow [0,1]\), \(p_{k,l}:=p(G(k),G(l))\), with \(p_{k,l}=1\) if \(G(l) \le G(k)\). This means that if the energy level stays the same or is reduced by the transition, there is no energy barrier that impedes the transition (expressed as a probability). However, if energy is required, i.e. \(G(l) > G(k)\), then \(p_{k,l} < 1\). Since \(p_{k,l}\) is a probability, it can be represented by
    $$\begin{aligned} p_{k,l}= \min \left( 1,f(G(l) - G(k))\right) \qquad (1) \end{aligned}$$
    for an appropriate nonnegative function \(f\), different from the zero function, which depends only on the energy difference. Some properties of \(f\) are reasonable to assume:
    $$\begin{aligned}&f(x+y)=f(x) f(y), \qquad (2) \end{aligned}$$
    $$\begin{aligned}&f(x) \in (0,1) \text{ if and only if } x \in (0,\infty ), \qquad (3) \end{aligned}$$
    $$\begin{aligned}&f \text{ is monotone. } \qquad (4) \end{aligned}$$
    The first property models that an additional energy barrier acts as a second factor: the probability of overcoming a barrier \(x+y\) shall be equal to the probability of overcoming \(x\) and subsequently \(y\). This characteristic of \(f\) is also required for consistency with possible extensions of the model that incorporate intermediate states, since intermediate states split the energy barriers. The second property expresses that only a transition that requires energy poses a probability barrier. Monotonicity is reasonable as well: a higher barrier should never be easier to overcome.
It is well known that these assumptions determine \(f\) to be the exponential function.

Lemma 1

(a) There exists \(\beta \in {\mathbb {R}}^{+}\) such that \(f(x)=\exp (-\beta x)\).

(b) \(p_{k,l} < 1 \Longrightarrow p_{l,k}=1\).



Proof (a) By Eq. (3), \(f\) is strictly positive, so we can apply the natural logarithm; Eq. (2) then becomes Cauchy’s functional equation, which together with monotonicity implies that \(\ln (f)\) is linear, i.e. \(f(x)=\exp (-\beta x)\), with \(-\beta <0\) according to Eq. (3). Part (b) is a direct consequence of (a). \(\square \)
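Lemma 1 is easy to check numerically. The following sketch (the value of \(\beta\) is an illustrative assumption, not taken from the paper) verifies the multiplicative property, the sign condition, and part (b):

```python
import math

beta = 1.7  # illustrative inverse "temperature" parameter (assumption)

def f(x):
    # the exponential form of Lemma 1(a)
    return math.exp(-beta * x)

def p(dG):
    # probability barrier of Eq. (1): min(1, f(G(l) - G(k)))
    return min(1.0, f(dG))

# Eq. (2): f(x + y) = f(x) f(y)
assert abs(f(0.3 + 1.1) - f(0.3) * f(1.1)) < 1e-12
# Eq. (3): f(x) lies in (0, 1) exactly for x > 0
assert 0.0 < f(0.5) < 1.0 and f(-0.5) > 1.0 and f(0.0) == 1.0
# Lemma 1(b): p_{k,l} < 1 implies p_{l,k} = 1
assert p(0.8) < 1.0 and p(-0.8) == 1.0
```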

3 The transition probabilities

With assumptions [A1]–[A3] we obtain, for a certain molecule \(M_1\), a Markov chain \(M_{1,m}\) on the set of states \(\{0,1\}^n\), with \(n\) denoting the number of binding sites. Thus, for \(|k|<|l|\) and \(|l - k|=1\), the transition probabilities are given by
$$\begin{aligned} q_{k,l}:=P(k \mapsto l)= \frac{1}{n} \theta _1 p_{k,l}, \qquad (5) \end{aligned}$$
where \(\theta _1\) denotes the “availability” of the ligand. If \(|k|>|l|,\, |l - k|=1\):
$$\begin{aligned} q_{k,l}= \frac{1}{n} \theta _2 p_{k,l} \qquad (6) \end{aligned}$$
with \(\theta _2\) denoting the “resistance”. The probability of staying in the present state \(l\) is:
$$\begin{aligned} q_{l,l}&= 1 - \sum \limits _{k \ne l}q_{l,k} \nonumber \\&= 1 - \frac{1}{n}\left( \theta _1 \sum \limits _{ \{k \, | \, |k|>|l|, |k-l|=1 \}} p_{l,k} \; + \; \theta _2 \sum \limits _{ \{k \, | \, |k|<|l|, |k-l|=1 \}} p_{l,k}\right) \qquad (7) \end{aligned}$$

Example 1

For a molecule with two binding sites for ligand L, we use the notation \(0:=(0,0),1:=(0,1),2:=(1,0),3:=(1,1)\) as a new composite index. The matrix of transition probabilities is
$$\begin{aligned} \begin{pmatrix} 1- \frac{1}{2}\theta _1 \left( p_{0,1} + p_{0,2}\right) & \frac{1}{2}\theta _1 p_{0,1} & \frac{1}{2} \theta _1 p_{0,2} & 0\\ \frac{1}{2} \theta _2 p_{1,0} & 1- \frac{1}{2} \left( \theta _1 p_{1,3} + \theta _2 p_{1,0}\right) & 0 & \frac{1}{2} \theta _1 p_{1,3} \\ \frac{1}{2} \theta _2 p_{2,0} & 0 & 1- \frac{1}{2}\left( \theta _1 p_{2,3} + \theta _2 p_{2,0}\right) & \frac{1}{2} \theta _1 p_{2,3}\\ 0 & \frac{1}{2} \theta _2 p_{3,1} & \frac{1}{2} \theta _2 p_{3,2} & 1- \frac{1}{2} \theta _2 \left( p_{3,1} + p_{3,2}\right) \end{pmatrix} \end{aligned}$$
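The transition rules generalize directly to any \(n\). A minimal Python sketch (the energy function and the values of \(\theta_1,\theta_2,\beta\) are illustrative assumptions) builds the matrix of Eqs. (5)–(7) and checks the structure of Example 1:

```python
import itertools
import math

def build_Q(G, theta1, theta2, beta=1.0):
    """Transition matrix of Eqs. (5)-(7) on the state space {0,1}^n.
    G maps each state tuple to its energy level."""
    n = len(next(iter(G)))
    states = list(itertools.product((0, 1), repeat=n))
    idx = {s: i for i, s in enumerate(states)}
    Q = [[0.0] * len(states) for _ in states]
    for k in states:
        for site in range(n):
            l = list(k)
            l[site] ^= 1            # flip exactly one site ([A1])
            l = tuple(l)
            p_kl = min(1.0, math.exp(-beta * (G[l] - G[k])))  # [A2-3]
            theta = theta1 if k[site] == 0 else theta2        # uptake vs. release
            Q[idx[k]][idx[l]] = theta * p_kl / n              # [A2-1] gives 1/n
        Q[idx[k]][idx[k]] = 1.0 - sum(Q[idx[k]])              # staying probability
    return states, Q

# two sites, as in Example 1; the composite index order is 0, 1, 2, 3 as above
G = {s: 0.4 * sum(s) for s in itertools.product((0, 1), repeat=2)}
states, Q = build_Q(G, theta1=0.6, theta2=0.9)
assert all(abs(sum(row) - 1.0) < 1e-12 for row in Q)  # rows sum to one
assert Q[0][3] == 0.0 and Q[3][0] == 0.0  # both sites cannot change at once
```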

4 Aperiodicity, connectivity and detailed balance

We know that the Markov chain with these transition probabilities is aperiodic and connected. Aperiodicity holds because the system can return to its initial state within one time step (by remaining in the state) or within two time steps (by leaving and coming back). Connectivity is also obvious, since every state can be reached from every other state by changing one site at a time. Consequently, the Markov chain has a unique stationary distribution \(\pi \), to which the system’s distribution converges and which we will characterize. If the matrix fulfills the detailed balance condition, we can calculate the stationary distribution quickly, using the procedure described in the following lemma.

Lemma 2

Let \(Q=(q_{i,j})_{i,j \in \{1,\ldots ,n\}}\) be a transition matrix on a connected space. Moreover, let \(\pi =(\pi _1,\ldots ,\pi _n)\) denote its unique stationary distribution fulfilling the detailed balance condition. Then the stationary distribution can be calculated in the following way:
  • Choose a reference state \(k\), and define \( \pi _k = 1\).

  • Calculate the ratios \(\frac{\pi _i}{\pi _k}\) of all pairs \(\{i,k\}\) with \(q_{i,k}\ne 0\) by \(\frac{\pi _i}{\pi _k}= \frac{q_{k,i}}{q_{i,k}}\).

  • If \(q_{i,k} = 0\), choose any path \((i,\ldots ,k)\) with positive probability and multiply the pairwise ratios along the path.

  • Normalize the distribution.


Proof First note that \(\pi _i \ne 0 \; \forall i\in \{1,\ldots ,n\}\), since the space is connected. The described procedure gives the stationary distribution because the detailed balance condition means
$$\begin{aligned} \pi _{i} q_{i,k}=\pi _{k}q_{k,i}, \qquad (8) \end{aligned}$$
which gives the ratio \(\frac{\pi _i}{\pi _k}\) if \(q_{i,k}\ne 0\). As the space is connected, a path from \(i\) to \(k\) with positive probability exists. Thus, if \(q_{i,k}=0\), we can multiply the pairwise ratios “along the path” to obtain \(\frac{\pi _i}{\pi _k}\). \(\square \)
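The procedure of Lemma 2 can be sketched in a few lines of Python. The three-state chain below is a hypothetical birth–death example (such chains always satisfy detailed balance), used only to illustrate the ratio propagation:

```python
from collections import deque

def stationary_by_ratios(Q, ref=0):
    """Lemma 2: set pi_ref = 1, propagate pi_i / pi_k = q_{k,i} / q_{i,k}
    along positive-probability paths (breadth-first), then normalize."""
    w = [None] * len(Q)
    w[ref] = 1.0
    todo = deque([ref])
    while todo:
        k = todo.popleft()
        for i in range(len(Q)):
            if w[i] is None and Q[i][k] > 0:
                w[i] = w[k] * Q[k][i] / Q[i][k]
                todo.append(i)
    Z = sum(w)
    return [x / Z for x in w]

# hypothetical birth-death chain on three states
Q = [[0.50, 0.50, 0.00],
     [0.25, 0.25, 0.50],
     [0.00, 0.50, 0.50]]
pi = stationary_by_ratios(Q)
# pi is indeed stationary: pi Q = pi
for j in range(3):
    assert abs(sum(pi[i] * Q[i][j] for i in range(3)) - pi[j]) < 1e-12
```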

In other words, Lemma 2 states that, for a given reference state, the ratios of the probabilities of the stationary distribution are identical to the ratios of the expected fluxes between two states (pairwise) along any path. This statement is in fact one direction of Kolmogorov’s criterion [5]. Even though it is not obvious that the matrix of Example 1 satisfies the detailed balance condition, we will use the procedure of Lemma 2 and show that the obtained distribution is stationary, for the special case of two binding sites.

Example 2

For the case of two binding sites, we use the same abbreviations for the different states as in Example 1. We calculate the probabilities of the stationary distribution in the following way:
$$\begin{aligned}&\pi (0) \propto 1 \\&\pi (1)\propto \frac{\frac{1}{2} \theta _1 p_{0,1}}{\frac{1}{2} \theta _2 p_{1,0}} = \frac{\theta _1 p_{0,1}}{\theta _2 p_{1,0}}\\&\pi (2)\propto \frac{\frac{1}{2} \theta _1 p_{0,2}}{\frac{1}{2} \theta _2 p_{2,0}}= \frac{\theta _1 p_{0,2}}{\theta _2 p_{2,0}}\\&\pi (3)\propto \frac{\frac{1}{2} \theta _1 p_{2,3} \cdot \pi (2)}{\frac{1}{2} \theta _2 p_{3,2}}= \frac{\theta _1^2 p_{0,2} p_{2,3}}{ \theta _2^2 p_{3,2}p_{2,0}}, \end{aligned}$$
where \(\propto \) means “proportional to” (with the same proportionality factor in all equations).
For the weights of Example 2 it is not obvious that we would obtain the same probability distribution if we computed \(\pi (3)\) via \(\pi (1)\) instead:
$$\begin{aligned} \pi (3) \propto \frac{\theta _1^2 p_{0,1} p_{1,3}}{ \theta _2^2 p_{3,1}p_{1,0}}. \end{aligned}$$
To see that the weights do not depend on the choice of the path, Lemma 3 is helpful; it will also be used subsequently to show that our model satisfies the detailed balance condition for any number of binding sites.

Lemma 3

$$\begin{aligned} \frac{p_{i,j}}{p_{j,i}}=f\left( G(j)-G(i)\right) \end{aligned}$$


Proof If \(G(j)-G(i) \ge 0\), then \(p_{i,j}=f(G(j)-G(i))\le 1\) and \(p_{j,i}=1\), according to Lemma 1. Otherwise \(p_{i,j}=1\) and \(p_{j,i}=f(G(i)-G(j)) <1\), which gives the statement since
$$\begin{aligned} f(G(i)-G(j) ) \cdot f(G(j)-G(i))=1. \end{aligned}$$
\(\square \)
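With Lemma 3, the path independence questioned after Example 2 can be checked numerically. In the sketch below the four energy levels are drawn at random (illustrative values); the ratio products along both paths from state \(0\) to state \(3\) agree and equal \(f(G(3)-G(0))\):

```python
import math
import random

random.seed(2)
beta = 1.0
G = {s: random.uniform(-1, 1) for s in [(0, 0), (0, 1), (1, 0), (1, 1)]}

def p(a, b):
    # probability barrier of Eq. (1)
    return min(1.0, math.exp(-beta * (G[b] - G[a])))

# path over state 1 = (0,1) versus path over state 2 = (1,0)
via1 = p((0, 0), (0, 1)) * p((0, 1), (1, 1)) / (p((1, 1), (0, 1)) * p((0, 1), (0, 0)))
via2 = p((0, 0), (1, 0)) * p((1, 0), (1, 1)) / (p((1, 1), (1, 0)) * p((1, 0), (0, 0)))
assert abs(via1 - via2) < 1e-12
# both equal f(G(3) - G(0)), exactly as Lemma 3 predicts
assert abs(via1 - math.exp(-beta * (G[(1, 1)] - G[(0, 0)]))) < 1e-12
```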

Proposition 1

For every number of binding sites \(n\), the matrix of transition probabilities defined by Eqs. (5)–(7) satisfies the detailed balance condition with respect to its stationary distribution.


Proof We use Kolmogorov’s criterion [5], which (in simple words) states that a stochastic matrix and its stationary distribution fulfill the detailed balance condition if and only if the probability of “walking on a closed path” is independent of the direction. More precisely, the matrix \((q_{i,j})\) fulfills the detailed balance condition if and only if
$$\begin{aligned} q_{k,i_1} q_{i_1,i_2} \ldots q_{i_{r-1},i_{r}} q_{i_r,k}= q_{k,i_r} q_{i_{r},i_{r-1}} \ldots q_{i_2,i_1} q_{i_1,k} \qquad (9) \end{aligned}$$
for any path \((k,i_1,i_2,\ldots ,i_r,k)\) and any \(r \in {\mathbb {N}}\). We show that the matrix of transition probabilities defined by Eqs. (5)–(7) satisfies Eq. (9); recall that \(q_{k,l}=P(k \mapsto l)\). Let a path \((k=:i_0,i_1,i_2,\ldots ,i_r,k=:i_{r+1})\) be given. First note that if the path includes a step which changes the state of more than one binding site, both directions have probability zero, since \(q_{j,l}=0=q_{l,j}\) if \(|j-l|>1\). The probabilities of all other transitions from \(j\) to \(l\) with \(|j-l|\le 1\) are nonzero, since all factors of which \(q_{j,l}\) is composed are nonzero. Moreover, if at a certain step the state is not changed, the factor \(q_{j,j}\) appears on both sides and cancels. Thus, without loss of generality, every step of the path changes the state, i.e. \(i_j \ne i_{j+1}\) for all \(j \in \{0,\ldots ,r\}\). Since every probability \(q_{i,j}\) includes the factor \(\frac{1}{n}\), this factor cancels on both sides. Moreover, since the path is closed, every ligand which is taken up has to be released again, so on each side of the equation \(\theta _1\) and \(\theta _2\) appear with the same power. Reversing the direction of the path substitutes every factor \(\theta _1\) by a factor \(\theta _2\) and vice versa; since both appear with the same power, they all cancel. The remaining factors are given by the \(p_{i,j}\), and we see that the matrix \((q_{i,j})\) satisfies Eq. (9) if and only if
$$\begin{aligned} \frac{p_{k,i_1} p_{i_1,i_2} \ldots p_{i_{r-1},i_{r}} p_{i_r,k}}{p_{k,i_r} p_{i_{r},i_{r-1}} \ldots p_{i_2,i_1} p_{i_1,k}}=1, \end{aligned}$$
which is true since, by Lemma 3, the left-hand side is equal to \(f(0)=1\). \(\square \)
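The cancellation argument can be illustrated numerically: for a closed path on \(\{0,1\}^3\) with randomly drawn energy levels (all parameter values are illustrative assumptions), the products of the transition probabilities in the two directions agree:

```python
import math
import random

random.seed(0)
n, beta, th1, th2 = 3, 1.0, 0.7, 0.9
G = {}
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            G[(a, b, c)] = random.uniform(-1, 1)  # illustrative energy levels

def q(k, l):
    # off-diagonal transition probabilities of Eqs. (5)-(7)
    if sum(abs(x - y) for x, y in zip(k, l)) != 1:
        return 0.0
    theta = th1 if sum(l) > sum(k) else th2  # uptake vs. release
    return theta * min(1.0, math.exp(-beta * (G[l] - G[k]))) / n

# a closed path that takes up and releases the three sites in different orders
path = [(0,0,0), (1,0,0), (1,1,0), (1,1,1), (0,1,1), (0,1,0), (0,0,0)]
fwd = math.prod(q(path[i], path[i + 1]) for i in range(len(path) - 1))
bwd = math.prod(q(path[i + 1], path[i]) for i in range(len(path) - 1))
assert fwd > 0 and abs(fwd - bwd) < 1e-15  # Eq. (9), Kolmogorov's criterion
```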

Remark 1

In our model, the probability of a transition from \(k\) to \(l\) with \(|k-l|=1\) is composed of a uniform proposal distribution on the states of the “neighborhood” and an acceptance rate given by \(\theta _1 p_{k,l}\) or \(\theta _2 p_{k,l}\), depending on the state of the chosen site. Even though this structure resembles the Metropolis–Hastings algorithm [3, 10], our model does not coincide with it: the factor \(\theta _i\) is not part of the proposal distribution, since otherwise the proposal probabilities would not sum to one. Consequently, the acceptance probability differs from the one commonly used, since it is bounded by \(\theta _i\) rather than by one.

5 The stationary distribution

Proposition 2

The stationary distribution on the set of states is given by a normalized version of
$$\begin{aligned} P(l)= \left( \frac{\theta _1}{\theta _2} \right) ^{|l|} f\left( G(l)-G(\{0\}^n)\right) . \qquad (10) \end{aligned}$$


Proof We know that the Markov chain fulfills the detailed balance condition. Using Lemma 2 with the reference state \(\{0\}^n\), we obtain Eq. (10). \(\square \)
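Proposition 2 can be cross-checked by brute force: iterate \(\mu \mapsto \mu Q\) until convergence and compare with the normalized weights of Eq. (10). The sketch below does this for two sites with randomly drawn energies (all parameter values are illustrative assumptions):

```python
import itertools
import math
import random

random.seed(1)
n, beta, th1, th2 = 2, 1.0, 0.5, 1.0
states = list(itertools.product((0, 1), repeat=n))
G = {s: random.uniform(-1, 1) for s in states}  # illustrative energies

def q(k, l):
    # transition probabilities of Eqs. (5)-(7), including the diagonal
    if k == l:
        return 1.0 - sum(q(k, m) for m in states if m != k)
    if sum(abs(x - y) for x, y in zip(k, l)) != 1:
        return 0.0
    theta = th1 if sum(l) > sum(k) else th2
    return theta * min(1.0, math.exp(-beta * (G[l] - G[k]))) / n

# power iteration: mu Q -> mu converges to the stationary distribution
mu = [1.0 / len(states)] * len(states)
for _ in range(5000):
    mu = [sum(mu[i] * q(states[i], s) for i in range(len(states))) for s in states]

# Eq. (10): unnormalized weights (theta1/theta2)^{|l|} f(G(l) - G(0...0))
w = [(th1 / th2) ** sum(s) * math.exp(-beta * (G[s] - G[(0,) * n])) for s in states]
Z = sum(w)
assert all(abs(m - x / Z) < 1e-8 for m, x in zip(mu, w))
```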

Since we assumed the molecules to bind ligands independently [A3], the distribution of the states within the solution in equilibrium will be close to the stationary distribution of a single molecule, by the Law of Large Numbers, provided the number of molecules is sufficiently large.
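This convergence can be illustrated by simulation. The sketch below runs many independent single-site molecules (all parameter values illustrative) and compares the empirical occupation frequency after a fixed number of steps with the stationary value:

```python
import math
import random

random.seed(42)
beta, th1, th2, dG = 1.0, 0.4, 0.9, 0.5      # one binding site, G(1) - G(0) = dG > 0
p_up = th1 * min(1.0, math.exp(-beta * dG))  # 0 -> 1: uptake against the energy gap
p_down = th2 * 1.0                           # 1 -> 0: release lowers the energy, p = 1

def final_state(steps=100):
    s = 0
    for _ in range(steps):
        r = random.random()
        if s == 0 and r < p_up:
            s = 1
        elif s == 1 and r < p_down:
            s = 0
    return s

molecules = 10000
occupied = sum(final_state() for _ in range(molecules)) / molecules
pi1 = p_up / (p_up + p_down)  # stationary probability of the occupied state
assert abs(occupied - pi1) < 0.03  # empirical frequency close to stationary value
```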

6 Activation energies

In the model presented in Sect. 3 we did not incorporate activation energy barriers. However, an extension of our model is straightforward: assuming that an activation energy barrier between two states \(i,j\) is a “symmetric” barrier, given by an unstable intermediate transition state \(e_{i,j}\), we can rewrite Eq. (1):
$$\begin{aligned} p_{i,j}= \min \left( 1, f(G(e_{i,j})-G(i))\right) \cdot \min \left( 1, f(G(j)-G(e_{i,j}))\right) . \qquad (11) \end{aligned}$$
Assuming that \(e_{i,j}\) has an energy level higher than those of the states \(i,j\) (activation energy, unstable state), the second factor equals \(1\). This gives for the ratios
$$\begin{aligned} \frac{p_{i,j}}{p_{j,i}}=\frac{\min \left( 1, f(G(e_{i,j})-G(i))\right) }{\min \left( 1, f(G(e_{i,j})-G(j))\right) }=f\left( G(j)-G(i)\right) . \qquad (12) \end{aligned}$$
This result shows that we can add any additional “symmetric” probability barrier and the stationary distribution remains unchanged.
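A quick numerical check of this invariance (the energy levels are illustrative; the transition state \(e_{i,j}\) lies above both end states):

```python
import math

beta = 1.0
f = lambda x: math.exp(-beta * x)
G_i, G_j, G_e = 0.2, 0.9, 2.5  # e_{i,j} above both states (activation barrier)

def p_with_barrier(a, b):
    # Eq. (11): climb to the transition state, then descend
    return min(1.0, f(G_e - a)) * min(1.0, f(b - G_e))

# the ratio, and hence the stationary distribution, is unchanged: Eq. (12)
assert abs(p_with_barrier(G_i, G_j) / p_with_barrier(G_j, G_i) - f(G_j - G_i)) < 1e-12
```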

7 Comparison to the Grand Canonical Partition Function

The Grand Canonical Partition Function (or “binding polynomial” in the case of a finite number of binding sites) is usually formulated as a function of the chemical activity, denoted by \(\varLambda \):
$$\begin{aligned} \sum \limits _{\left\{ k \in K\right\} } f\left( G(k)-G(\{0\}^n)\right) \varLambda ^{|k|}, \qquad (13) \end{aligned}$$
where
$$\begin{aligned} \varLambda = \exp \left( \frac{\mu -\mu ^{0}}{R T}\right) , \qquad (14) \end{aligned}$$
with \(\mu \) the chemical potential, \(\mu ^{0}\) the chemical potential of a reference state, \(R\) the universal gas constant and \(T\) the absolute temperature in kelvin. The summands coincide with the unnormalized stationary weights of our model if we identify \(\frac{\theta _1}{\theta _2}=:\varLambda \). Thus, the chemical activity might be interpreted as the ratio of “availability” and “resistance” in our model.
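Under this identification, the titration curve is the expected number of bound ligands under the stationary distribution. A sketch with illustrative energies for two sites evaluates the binding polynomial and this expectation:

```python
import itertools
import math

n, beta = 2, 1.0
G = {s: 0.3 * sum(s) for s in itertools.product((0, 1), repeat=n)}  # illustrative
g = lambda s: math.exp(-beta * (G[s] - G[(0,) * n]))  # f(G(k) - G(0...0))

def bp(lam):
    # binding polynomial / GCPF: sum_k f(G(k) - G(0...0)) * Lambda^{|k|}
    return sum(g(s) * lam ** sum(s) for s in G)

def mean_bound(lam):
    # titration curve: expected number of bound ligands at activity Lambda
    return sum(sum(s) * g(s) * lam ** sum(s) for s in G) / bp(lam)

assert mean_bound(0.0) == 0.0           # no ligand available
assert abs(mean_bound(1e9) - n) < 1e-6  # saturation at high activity
```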

8 Decoupled sites

We briefly highlight what decoupled (stochastically independent) sites, for every fixed chemical activity, mean for the presented model of the molecule’s ligand binding dynamics. Following [9, 11], we know that a molecule has decoupled sites if and only if the energy \(G(k)\) satisfies
$$\begin{aligned} G(k)= k_1 \cdot G(1_1) + k_2 \cdot G(1_2) + \cdots + k_n \cdot G(1_n) \qquad (15) \end{aligned}$$
for any microstate \(k=(k_1,\ldots ,k_n)\). Here, \(1_i\) denotes the state in which only site \(i\) is occupied and all other sites are unoccupied. Equation (15) directly implies that the energy difference between two neighboring states, e.g. \((k_1,\ldots ,k_{m-1}, 0, k_{m+1},\ldots ,k_n)\) and \((k_1,\ldots ,k_{m-1}, 1, k_{m+1},\ldots ,k_n)\), depends only on the site \(m\) whose binding state differs. Due to the structure of the transition probabilities [Eqs. (5)–(7)], this means that the probability of changing the occupation state of a certain site does not depend on the state of the other binding sites. Thus, in the presented model, stochastic independence of the sites in the stationary distribution for every fixed chemical activity translates into a transition matrix \(q_{i,l}\) that satisfies a certain kind of stationarity: \(q_{i,l}=\tilde{q}_{i-l}\) for a function \(\tilde{q}\) and any pair \(i,l\) of neighboring states. Note that, even in a decoupled molecule, the rows of the transition matrix cannot be identical, i.e. the transition distribution depends on the current state. For more information on the interpretation of decoupled sites from a probabilistic point of view see also [6].
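For decoupled sites the stationary distribution factorizes over the sites, which is the probabilistic meaning of independence. A sketch with illustrative per-site energies, where \(\lambda\) plays the role of \(\theta_1/\theta_2\):

```python
import itertools
import math

beta, lam = 1.0, 0.8        # lam plays the role of theta1 / theta2
site_G = [0.2, -0.5, 1.0]   # G(1_i) for each site, illustrative values
n = len(site_G)

# decoupled molecule: the energy is additive over occupied sites, Eq. (15)
G = {s: sum(k_i * g_i for k_i, g_i in zip(s, site_G))
     for s in itertools.product((0, 1), repeat=n)}

# unnormalized stationary weights of Eq. (10)
w = {s: lam ** sum(s) * math.exp(-beta * G[s]) for s in G}
Z = sum(w.values())

# per-site occupation probabilities of independent Bernoulli sites
occ = [lam * math.exp(-beta * g_i) / (1.0 + lam * math.exp(-beta * g_i))
       for g_i in site_G]

# the joint stationary distribution is the product of the site marginals
for s in G:
    prod = math.prod(occ[i] if s[i] else 1.0 - occ[i] for i in range(n))
    assert abs(w[s] / Z - prod) < 1e-12
```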

9 Summary and outlook

We presented a derivation of the Grand Canonical Partition Function for a system with a finite number of binding sites, starting from a model of the stochastic ligand binding dynamics of a single molecule. Some reasonable assumptions about the process of ligand binding led to a Markov chain model whose matrix of transition probabilities satisfies the detailed balance condition. The corresponding stationary distribution coincides with the Grand Canonical Partition Function if we identify the chemical activity \(\varLambda \) with a ratio of two probabilities. The model also offers the possibility to investigate the relaxation dynamics towards equilibrium.



Acknowledgments

We would like to thank Alexander Malinowski for helpful discussions.


References

  1. C.R. Cantor, P.R. Schimmel, Biophysical Chemistry. Part III: The Behavior of Biological Macromolecules, 1st edn. (W. H. Freeman, San Francisco, CA, 1980)
  2. K. Hasselbalch, Die Berechnung der Wasserstoffzahl des Blutes aus der freien und gebundenen Kohlensäure desselben, und die Sauerstoffbindung des Blutes als Funktion der Wasserstoffzahl (Julius Springer, 1916)
  3. W.K. Hastings, Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57(1), 97–109 (1970)
  4. L.J. Henderson, The Fitness of the Environment (Macmillan Company, New York, 1913)
  5. F.P. Kelly, Reversibility and Stochastic Networks (Cambridge University Press, Cambridge, 2011)
  6. J.W.R. Martini, M. Schlather, G.M. Ullmann, The meaning of the decoupled sites representation in terms of statistical mechanics and stochastics. MATCH Commun. Math. Comput. Chem. 70(3), 829–850 (2013)
  7. J.W.R. Martini, M. Schlather, G.M. Ullmann, On the interaction of different types of ligands binding to the same molecule. Part II: systems with n to 2 and n to 3 binding sites. J. Math. Chem. 51(2), 696–714 (2013)
  8. J.W.R. Martini, M. Schlather, G.M. Ullmann, On the interaction of two different types of ligands binding to the same molecule. Part I: basics and the transfer of the decoupled sites representation to systems with n and one binding sites. J. Math. Chem. 51(2), 672–695 (2013)
  9. J.W.R. Martini, G.M. Ullmann, A mathematical view on the decoupled sites representation. J. Math. Biol. 66(3), 477–503 (2013)
  10. N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, E. Teller, Equation of state calculations by fast computing machines. J. Chem. Phys. 21(6), 1087–1092 (1953)
  11. A. Onufriev, D.A. Case, G.M. Ullmann, A novel view of pH titration in biomolecules. Biochemistry 40(12), 3413–3419 (2001)
  12. J.A. Schellman, Macromolecular binding. Biopolymers 14, 999–1018 (1975)
  13. C. Tanford, J.G. Kirkwood, Theory of protein titration curves. I. General equations for impenetrable spheres. J. Am. Chem. Soc. 79(20), 5333–5339 (1957)
  14. J. Wyman, S.J. Gill, Binding and Linkage: Functional Chemistry of Biological Macromolecules (University Science Books, Mill Valley, 1990)

Copyright information

© The Author(s) 2013

Open Access. This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

Authors and Affiliations

  • Johannes W. R. Martini (1)
  • Michael Habeck (1)
  • Martin Schlather (2)

  1. Institut für Mathematische Stochastik, Georg-August-Universität Göttingen, Göttingen, Germany
  2. Institut für Mathematik, Universität Mannheim, Mannheim, Germany
