Introduction to Stochastic Kinetic Models for Molecular Motors

Chapter in Physics of Molecular and Cellular Processes

Part of the book series: Graduate Texts in Physics (GTP)

Abstract

Molecular motors such as those of kinesin, dynein, and myosin superfamilies play a critical role in various aspects of cell physiology. These motors move along linear tracks formed by cytoskeletal filaments and can transport materials, apply forces, or both. One class of motors is referred to as processive, in that the motor typically remains bound to the filament even as it steps from one site to the next. To avoid the obvious equilibrium finding that one binding site is as good as the next and hence there should be no direct preference for the motor to move in a particular direction, motors couple their stepping motion to hydrolysis of energy-rich compounds with out-of-equilibrium concentrations (such as ATP) to tailor an appropriate free energy landscape driving them in a preferred direction. This chapter introduces stochastic models of these motors that can be used to calculate their statistical properties and provide an understanding of detailed single-molecule experimental data.

Notes

  1.
    Recall that \(\sum _{n=0}^\infty \frac{a^n}{n!} = e^a\).

  2.
    The reader should not confuse the Laplace variable with the step size.

  3.
    A zero eigenvalue corresponds to the stationary state. However, by construction, the stationary state is the absorbing state, which is not included among the N states. Therefore, the N eigenvalues of the matrix \(\hat{\textrm{K}}\) must all be negative.

  4.
    In other words, the motor takes discrete steps of size s, as assumed in the previous models.

  5.
    The bimodality also occurs at \(F=0\), but the probability of recording a negative velocity is extremely low.

  6.
    Hint: use the duplication formula for the Gamma function: \(\Gamma (z)\Gamma (z+\frac{1}{2}) = 2^{1-2z} \sqrt{\pi } \Gamma (2z)\) [23].

  7.
    The case for \(\xi _1 = \xi _2\) is left as an exercise, and the solution may be found in [15].

References

  1. B. Alberts, Molecular Biology of the Cell (2008)

  2. T. Soldati, M. Schliwa, Nat. Rev. Mol. Cell Biol. 7(12), 897 (2006)

  3. D.D. Hackney, Annu. Rev. Physiol. 58, 731 (1996)

  4. M.L. Mugnai, C. Hyeon, M. Hinczewski, D. Thirumalai, Rev. Mod. Phys. 92(2), 025001 (2020)

  5. J. Howard et al., Mechanics of Motor Proteins and the Cytoskeleton, vol. 743 (Sinauer Associates, Sunderland, MA, 2001)

  6. E.L. Holzbaur, Y.E. Goldman, Curr. Opin. Cell Biol. 22(1), 4 (2010)

  7. G. Bhabha, G.T. Johnson, C.M. Schroeder, R.D. Vale, Trends Biochem. Sci. 41(1), 94 (2016)

  8. N. Kodera, D. Yamamoto, R. Ishikawa, T. Ando, Nature 468(7320), 72 (2010)

  9. A.D. Mehta, M. Rief, J.A. Spudich, D.A. Smith, R.M. Simmons, Science 283(5408), 1689 (1999)

  10. A. Yildiz, P.R. Selvin, Acc. Chem. Res. 38(7), 574 (2005)

  11. J. Andrecka, Y. Takagi, K. Mickolajczyk, L. Lippert, J. Sellers, W. Hancock, Y. Goldman, P. Kukura, Methods Enzymol. 581, 517 (2016)

  12. M.J. Schnitzer, K. Visscher, S.M. Block, Nat. Cell Biol. 2(10), 718 (2000)

  13. W.J. Walter, V. Beránek, E. Fischermeier, S. Diez, PLoS One 7(8), e42218 (2012)

  14. H.T. Vu, S. Chakrabarti, M. Hinczewski, D. Thirumalai, Phys. Rev. Lett. 117(7), 078101 (2016)

  15. R. Takaki, M.L. Mugnai, Y. Goldtzvik, D. Thirumalai, Proc. Natl. Acad. Sci. 116(46), 23091 (2019)

  16. M.L. Mugnai, M.A. Caporizzo, Y.E. Goldman, D. Thirumalai, Biophys. J. 118(7), 1537 (2020)

  17. K.J. Mickolajczyk, N.C. Deffenbaugh, J.O. Arroyo, J. Andrecka, P. Kukura, W.O. Hancock, Proc. Natl. Acad. Sci. 112(52), E7186 (2015)

  18. H. Isojima, R. Iino, Y. Niitani, H. Noji, M. Tomishige, Nat. Chem. Biol. 12(4), 290 (2016)

  19. D.S. Bernstein, Matrix Mathematics: Theory, Facts, and Formulas (Princeton University Press, 2009)

  20. I. Oppenheim, K.E. Shuler, G.H. Weiss, Stochastic Processes in Chemical Physics: The Master Equation (1977)

  21. T.L. Hill, Free Energy Transduction and Biochemical Cycle Kinetics (Dover Publications, Inc., 2005)

  22. M. Hinczewski, R. Tehver, D. Thirumalai, Proc. Natl. Acad. Sci. 110(43), E4059 (2013)

  23. F.W. Olver, A. Olde Daalhuis, D. Lozier, B. Schneider, R. Boisvert, C. Clark, B. Miller, B. Saunders, H. Cohl, M. McClain, NIST Digital Library of Mathematical Functions. http://dlmf.nist.gov/, Release 1.1.1 of 2021-03-15

  24. M. Abramowitz, I.A. Stegun, R.H. Romer, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables (Dover, 1964)

  25. J.W. Brown, R.V. Churchill, Complex Variables and Applications, 8th edn. (McGraw-Hill, 2009)

  26. F.W. Olver, D.W. Lozier, R.F. Boisvert, C.W. Clark, NIST Handbook of Mathematical Functions (Cambridge University Press, 2010)

Author information

Correspondence to D. Thirumalai.

Appendices

5.8 Appendix A: Mathematical Functions

In this section, we list the special functions used to calculate P(n) and P(v).

Hypergeometric function:

$$\begin{aligned} \begin{aligned} \ _0\text {F}_1 (;a;z) = \sum _{k=0}^{\infty }\frac{\Gamma (a)}{\Gamma (a+k)}\frac{z^k}{k!}, \end{aligned} \end{aligned}$$
(5.57)

where \(\Gamma \) is the Gamma function.

Gauss hypergeometric function:

$$\begin{aligned} \begin{aligned} \ _2\text {F}_1 (a,b;c;z) = \frac{\Gamma (c)}{\Gamma (a)\Gamma (b)}\sum _{k=0}^{\infty }\frac{\Gamma (a+k)\Gamma (b+k)}{\Gamma (c+k)}\frac{z^k}{k!}. \end{aligned} \end{aligned}$$
(5.58)

Modified Bessel function of the first kind:

$$\begin{aligned} \begin{aligned} I_\sigma (z)=\Big (\frac{z}{2}\Big )^\sigma \sum _{k=0}^{\infty }\frac{(\frac{z^2}{4})^k}{k!\Gamma (\sigma + k + 1)}, \end{aligned} \end{aligned}$$
(5.59)

where \(\sigma \) is a real number.

Modified Bessel function of the second kind:

$$\begin{aligned} \begin{aligned} K_{n+\frac{1}{2}}(z)=\sqrt{\frac{\pi }{2z}}\,e^{-z} \sum _{k=0}^{n} \frac{(n+k)!}{k!(n-k)!}(2z)^{-k}, \end{aligned} \end{aligned}$$
(5.60)

where n is an integer.
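
The series (5.57)–(5.60) can be cross-checked numerically. Below is a minimal Python sketch (an illustration, assuming NumPy and SciPy are available; the evaluation points are arbitrary) comparing truncated versions of these series with the corresponding SciPy special functions.

```python
# Numerical cross-check of the series definitions (5.57)-(5.60).
# Assumes NumPy/SciPy; the test arguments below are arbitrary.
import numpy as np
from scipy.special import gamma, hyp0f1, hyp2f1, iv, kv, factorial

def series_0F1(a, z, terms=60):
    k = np.arange(terms)
    return np.sum(gamma(a) / gamma(a + k) * z**k / factorial(k))

def series_2F1(a, b, c, z, terms=80):
    k = np.arange(terms)
    return (gamma(c) / (gamma(a) * gamma(b))
            * np.sum(gamma(a + k) * gamma(b + k) / gamma(c + k) * z**k / factorial(k)))

def series_I(sigma, z, terms=60):
    k = np.arange(terms)
    return (z / 2)**sigma * np.sum((z**2 / 4)**k / (factorial(k) * gamma(sigma + k + 1)))

def series_K_half(n, z):
    # half-integer order K, Eq. (5.60): a finite sum over k = 0..n
    k = np.arange(n + 1)
    return (np.sqrt(np.pi / (2 * z)) * np.exp(-z)
            * np.sum(factorial(n + k) / (factorial(k) * factorial(n - k)) * (2.0 * z)**(-k)))

print(series_0F1(1.3, 0.7), hyp0f1(1.3, 0.7))
print(series_2F1(0.5, 1.0, 1.5, 0.3), hyp2f1(0.5, 1.0, 1.5, 0.3))
print(series_I(0.5, 2.0), iv(0.5, 2.0))
print(series_K_half(2, 1.5), kv(2.5, 1.5))
```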

5.9 Appendix B: The Distribution of Run Length

The summation for (5.5),

$$\begin{aligned} \begin{aligned} P(n)=\sum _{m,l=0}^{\infty }\frac{(m+l)!}{m!l!}\Big (\frac{k^+}{k_T}\Big )^m\Big (\frac{k^-}{k_T}\Big )^l\Big (\frac{\gamma }{k_T}\Big )\delta _{m-l,n}, \end{aligned} \end{aligned}$$
(5.61)

can be carried out for \(n>0\) and \(n<0\), leading to (see Footnote 6)

$$\begin{aligned} \begin{aligned} P(n>0)&=\Big (\frac{k^+}{k_T}\Big )^n\Big (\frac{\gamma }{k_T}\Big )\sum _{l=0}^{\infty }\Big (\frac{k^+k^-}{k_T^2}\Big )^l\frac{(2l+n)!}{(n+l)!l!}\\&=\Big (\frac{k^+}{k_T}\Big )^n\Big (\frac{\gamma }{k_T}\Big )\ _2\text {F}_1\Big (\frac{1+n}{2},\frac{2+n}{2};1+n;4\frac{k^+k^-}{k_T^2}\Big ), \\ P(n<0)&=\Big (\frac{k^-}{k_T}\Big )^{-n}\Big (\frac{\gamma }{k_T}\Big )\sum _{m=0}^{\infty }\Big (\frac{k^+k^-}{k_T^2}\Big )^m\frac{(2m-n)!}{(m-n)!m!}\\&=\Big (\frac{k^-}{k_T}\Big )^{-n}\Big (\frac{\gamma }{k_T}\Big )\ _2\text {F}_1\Big (\frac{1-n}{2},\frac{2-n}{2};1-n;4\frac{k^+k^-}{k_T^2}\Big ), \end{aligned} \end{aligned}$$
(5.62)

where \(\ _2\text {F}_1\) is the Gauss hypergeometric function (5.58). Using the following special case of \(\ _2\text {F}_1\) [24]:

$$\begin{aligned} \begin{aligned} \ _2\text {F}_1\left( a,\frac{1}{2}+a;2a;z\right) = 2^{2a-1}(1-z)^{-\frac{1}{2}}[1+(1-z)^{\frac{1}{2}}]^{1-2a}, \end{aligned} \end{aligned}$$
(5.63)

we obtain the following expression for the run length distribution:

$$\begin{aligned} \begin{aligned} P(n\gtrless 0)=\Big (\frac{2k^\pm }{k_T+\sqrt{k_T^2-4k^+k^-}}\Big )^{|n|}\frac{\gamma }{\sqrt{k_T^2-4k^+k^-}}. \end{aligned} \end{aligned}$$
(5.64)
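
As a check, (5.64) can be compared with a direct numerical evaluation of the double sum (5.61). The minimal Python sketch below (assuming NumPy/SciPy; the rates are purely illustrative) also verifies that the distribution is normalized.

```python
# Check Eq. (5.64) against the direct double sum (5.61), plus normalization.
# Assumes NumPy/SciPy; the rates k^+, k^-, gamma below are illustrative.
import numpy as np
from scipy.special import binom

kp, km, gam = 2.0, 0.7, 0.4
kT = kp + km + gam
Delta = np.sqrt(kT**2 - 4 * kp * km)

def P_direct(n, terms=200):
    l = np.arange(terms)
    m = l + n                      # enforce the constraint m - l = n
    l, m = l[m >= 0], m[m >= 0]
    return gam / kT * np.sum(binom(m + l, m) * (kp / kT)**m * (km / kT)**l)

def P_closed(n):                   # Eq. (5.64)
    k_side = kp if n >= 0 else km
    return (2 * k_side / (kT + Delta))**abs(n) * gam / Delta

print(max(abs(P_direct(n) - P_closed(n)) for n in range(-30, 31)))   # ~0
print(sum(P_closed(n) for n in range(-200, 201)))                    # ~1
```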

We now want to compute the average of this distribution, that is

$$\begin{aligned} \begin{aligned} \overline{n} = g\Big [\sum _{n=0}^{\infty } n (a_+)^{n} + \sum _{n=-\infty }^0 n (a_-)^{-n}\Big ] = g\Big [\sum _{n=0}^{\infty } n (a_+)^{n} - \sum _{n=0}^\infty n (a_-)^{n}\Big ] \end{aligned}, \end{aligned}$$
(5.65)

where we defined

$$\begin{aligned} \begin{aligned} a_{\pm }&= \frac{2k^\pm }{k_T+\Delta }\\ g&= \frac{\gamma }{\Delta }\\ \Delta&= \sqrt{k_T^2-4k^+k^-}. \end{aligned} \end{aligned}$$
(5.66)

With a little bit of algebra, we can prove the following relationships:

$$\begin{aligned} \begin{aligned} \frac{1-a_+a_-}{1+a_+a_-}&= \frac{\Delta }{k_T},\\ \frac{a_+ \pm a_-}{1+a_+a_-}&= \frac{k^{+}\pm k^{-}}{k_T}. \end{aligned} \end{aligned}$$
(5.67)

Now, it is easy to show that

$$\begin{aligned} \sum _{n=0}^\infty n a^n = \frac{a}{(1-a)^2} \end{aligned}$$
(5.68)

for \(a = a_{\pm }\), both of which are non-negative and smaller than 1. From this, we obtain

$$ \overline{n} = g\Big [\frac{a_+}{(1-a_+)^2} - \frac{a_-}{(1-a_-)^2}\Big ], $$

and after some simple algebra, we can rewrite this expression as

$$ \overline{n} = g\Big [\frac{a_+-a_-}{1+a_+a_-}\frac{1-a_+a_-}{1+a_+a_-}\frac{1}{\Big (1-\frac{a_++a_-}{1+a_+a_-}\Big )^2}\Big ]. $$

After plugging in (5.66)–(5.67), we finally get

$$\begin{aligned} \overline{n} = \frac{k^{+}- k^{-}}{\gamma }. \end{aligned}$$
(5.69)
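
The final result can also be verified numerically. Here is a short Python sketch (assuming NumPy, with the same illustrative rates as above) that checks the geometric-series identity (5.68) and the mean run length (5.69).

```python
# Numerical check of Eqs. (5.68) and (5.69). Assumes NumPy; illustrative rates.
import numpy as np

kp, km, gam = 2.0, 0.7, 0.4
kT = kp + km + gam
Delta = np.sqrt(kT**2 - 4 * kp * km)
a_p, a_m, g = 2 * kp / (kT + Delta), 2 * km / (kT + Delta), gam / Delta

n = np.arange(2000)
print(np.sum(n * a_p**n), a_p / (1 - a_p)**2)               # Eq. (5.68)
n_bar = g * (a_p / (1 - a_p)**2 - a_m / (1 - a_m)**2)
print(n_bar, (kp - km) / gam)                                # Eq. (5.69)
```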

5.10 Appendix C: Derivation of the Run Time Distribution

We start from (5.21) and we wish to compute

$$ \sum _{m=0}^\infty \sum _{l=0}^\infty \tilde{f}(m,l,s) = \sum _{m=0}^\infty \sum _{l=0}^\infty \frac{(m+l)!}{m!l!} [\tilde{f}_+(s)]^m[\tilde{f}_-(s)]^l [\tilde{f}_\gamma (s)]. $$

The summations over m and l may be reorganized as \(\sum _{\sigma =0}^\infty \sum _{m=0}^\sigma \), with \(\sigma = m+l\), from which we obtain

$$ \sum _{m=0}^\infty \sum _{l=0}^\infty \tilde{f}(m,l,s) = \sum _{\sigma =0}^\infty \sum _{m=0}^\sigma \frac{\sigma !}{m!(\sigma -m)!} [\tilde{f}_+(s)]^m[\tilde{f}_-(s)]^{\sigma -m} [\tilde{f}_\gamma (s)]. $$

Now, the \(\sigma \)-th power of the sum of two numbers a and b is \((a+b)^\sigma = \sum _{m=0}^\sigma \frac{\sigma !}{m!(\sigma -m)!} a^m b^{\sigma -m}\); therefore,

$$ \sum _{m=0}^\infty \sum _{l=0}^\infty \tilde{f}(m,l,s) = [\tilde{f}_\gamma (s)] \sum _{\sigma =0}^\infty [\tilde{f}_+(s) + \tilde{f}_-(s)]^\sigma , $$

and because \(\tilde{f}_+(s) + \tilde{f}_-(s)<1\), we use the geometric series and obtain

$$ \sum _{m=0}^\infty \sum _{l=0}^\infty \tilde{f}(m,l,s) = \frac{\tilde{f}_\gamma (s)}{1-\tilde{f}_+(s) - \tilde{f}_-(s)}. $$

By plugging in the definitions of the Laplace transforms of \(f_+(t)\), \(f_-(t)\), and \(f_\gamma (t)\), we obtain

$$ \sum _{m=0}^\infty \sum _{l=0}^\infty \tilde{f}(m,l,s) = \frac{k\gamma }{ (s+k)(s+k^{+}+k^{-}+\gamma ) - k(k^{+}+k^{-})}. $$

The two roots of the second-order polynomial in the denominator are

$$\begin{aligned} \lambda _{\pm } = \frac{-(k_T+k) \pm \sqrt{(k_T+k)^2 - 4k\gamma }}{2}, \end{aligned}$$
(5.70)

so that we obtain

$$ \sum _{m=0}^\infty \sum _{l=0}^\infty \tilde{f}(m,l,s) = \frac{k\gamma }{(s-\lambda _+)(s-\lambda _-)}. $$

With a little algebraic manipulation, we can write

$$ \tilde{P}(s) = \frac{k\gamma }{\lambda _+-\lambda _-}\Big (\frac{1}{s-\lambda _+} - \frac{1}{s-\lambda _-}\Big ). $$

Computing the inverse Laplace transform is now easy, as it is nothing more than the sum of two exponentials, which leads to our desired result,

$$ P(t) = \frac{k\gamma }{\lambda _+-\lambda _-}\Big (e^{\lambda _+ t}-e^{\lambda _- t}\Big ). $$

Note that \(\lambda _{\pm }<0\).
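
The result can be tested against a direct simulation of the underlying kinetics. The Python sketch below (assuming NumPy, with illustrative rates) reads the cycle structure off the Laplace transforms in (5.74): each cycle contributes an Exp(k) wait followed by an Exp(\(k_T\)) wait, and the run terminates when the second branch selects detachment, with probability \(\gamma /k_T\).

```python
# Monte Carlo check of the run-time distribution P(t). Assumes NumPy;
# the rates k, k^+, k^-, gamma are illustrative.
import numpy as np

rng = np.random.default_rng(0)
k, kp, km, gam = 1.5, 2.0, 0.7, 0.4
kT = kp + km + gam

def run_time():
    t = 0.0
    while True:
        # one cycle: Exp(k) wait, then Exp(k_T) wait, then branch
        t += rng.exponential(1 / k) + rng.exponential(1 / kT)
        if rng.random() < gam / kT:          # detachment ends the run
            return t

samples = np.array([run_time() for _ in range(100_000)])

disc = np.sqrt((kT + k)**2 - 4 * k * gam)
lam_p, lam_m = (-(kT + k) + disc) / 2, (-(kT + k) - disc) / 2      # Eq. (5.70)
ts = np.linspace(0, 60, 300)
P = k * gam / (lam_p - lam_m) * (np.exp(lam_p * ts) - np.exp(lam_m * ts))

# the mean of P(t) works out to (k_T + k)/(k*gamma); compare with the sample mean
print(samples.mean(), (kT + k) / (k * gam))
hist, edges = np.histogram(samples, bins=120, range=(0, 60), density=True)
print(np.max(np.abs(hist - np.interp((edges[:-1] + edges[1:]) / 2, ts, P))))   # small
```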

5.11 Appendix D: Velocity Distribution

5.11.1 One-State Model

Equation 5.3 reduces to

$$\begin{aligned} \begin{aligned} f(m,l,t)=&\frac{(m+l)!}{m!l!}(k^+)^m(k^-)^l \gamma e^{-k_Tt} \\&\int _{0}^{t}dt_{m+l}\int _{0}^{t_{m+l}}dt_{m+l-1}\cdots \int _{0}^{t_3}dt_2\int _{0}^{t_2}dt_1. \end{aligned} \end{aligned}$$
(5.71)

The time-ordered integrals can be evaluated recursively, yielding \(\frac{t^{m+l}}{(m+l)!}\); thus,

$$\begin{aligned} \begin{aligned} f(m,l,t)&=\Big (\frac{t^{m+l}}{m!l!}\Big )(k^+)^m (k^-)^l \gamma e^{-k_T t}. \end{aligned} \end{aligned}$$
(5.72)

We obtain \(f(n,t)\) by imposing the condition \(m-l=n\),

$$\begin{aligned} \begin{aligned} f(n,t)&=\sum _{m,l=0}^{\infty }\Big (\frac{t^{m+l}}{m!l!}\Big )(k^+)^m (k^-)^l \gamma e^{-k_T t}\delta _{m-l,n}\\&=\Big (\frac{k^+}{k^-}\Big )^\frac{n}{2}\gamma e^{-k_Tt}(t\sqrt{k^+k^-})^n\sum _{l=0}^{\infty }\frac{(t^2k^+k^-)^l}{l!(n+l)!}\\&=\Big (\frac{k^+}{k^-}\Big )^{\frac{n}{2}} \gamma e^{-k_T t} I_n(2t \sqrt{k^+ k^-}). \end{aligned} \end{aligned}$$
(5.73)
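
A quick numerical check of (5.73): the Python sketch below (assuming NumPy/SciPy, with illustrative rates) compares the Bessel-function form against a direct truncation of the constrained sum in (5.72).

```python
# Compare the direct sum (5.72)-(5.73) with the Bessel-function form.
# Assumes NumPy/SciPy; rates are illustrative.
import numpy as np
from scipy.special import iv, factorial

kp, km, gam = 2.0, 0.7, 0.4
kT = kp + km + gam

def f_direct(n, t, terms=120):
    l = np.arange(terms)
    m = l + n                      # impose m - l = n
    l, m = l[m >= 0], m[m >= 0]
    return gam * np.exp(-kT * t) * np.sum((kp * t)**m / factorial(m)
                                          * (km * t)**l / factorial(l))

def f_bessel(n, t):                # Eq. (5.73)
    return (kp / km)**(n / 2) * gam * np.exp(-kT * t) * iv(n, 2 * t * np.sqrt(kp * km))

for n in (-3, 0, 2, 5):
    for t in (0.5, 2.0, 5.0):
        print(n, t, f_direct(n, t), f_bessel(n, t))
```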

5.11.2 Two-State Model

The inverse Laplace transform \(f(m,l,t) = \mathcal {L}^{-1}[\tilde{f}(m,l,s)]\) is defined as

$$\begin{aligned} \begin{aligned} \mathcal {L}^{-1}[\tilde{f}(m,l,s)] = \frac{(m+l)!}{m!l!}(k)^{\sigma +1}(k^{+})^m(k^{-})^l\gamma \frac{1}{2\pi i}\int _{c-i\infty }^{c+i\infty }\frac{\text {e}^{st}}{(s+\xi _1)^{\sigma +1}(s+\xi _2)^{\sigma +1}}ds, \end{aligned} \end{aligned}$$
(5.74)

where for the sake of conciseness we defined \(\sigma =m+l\), \(\xi _1=k\), and \(\xi _2=k^++k^-+\gamma \). We evaluate the inverse Laplace transform using the residue theorem (see [25] for details),

$$\begin{aligned} \int _\mathcal {C} f(z) dz = 2\pi i \sum _{i=1}^n \text {Res}_{z=-\xi _i}f(z). \end{aligned}$$
(5.75)

A function with a pole of order \(\sigma +1\) at \(-\xi _1\) (with \(\sigma \ge 0\)) can be written as \(f(z) = \chi (z)/(z+\xi _1)^{\sigma +1}\), where \(\chi \) is analytic and nonzero at the singular point \(-\xi _1\). In this case, the residue of the function at \(-\xi _1\) is given by

$$\begin{aligned} \text {Res}_{z=-\xi _1} f(z) = \frac{\frac{d^\sigma }{dz^\sigma }\chi (z)|_{z=-\xi _1}}{\sigma !}. \end{aligned}$$
(5.76)

In the case of (5.74), under the assumption that \(\xi _1 \ne \xi _2\) (see Footnote 7), there are two poles of order \(\sigma +1\), one at \(-\xi _1\) and the other at \(-\xi _2\). Therefore,

$$\begin{aligned} \begin{aligned} \text {Res}\Big [\frac{1}{(s+\xi _1)^{\sigma +1}(s+\xi _2)^{\sigma +1}}\text {e}^{st}\Big ]\Big |_{s=-\xi _1}&=\frac{1}{\sigma !}\frac{d^\sigma }{ds^\sigma }\Big [\frac{1}{(s+\xi _2)^{\sigma +1}}\text {e}^{st}\Big ]\Big |_{s=-\xi _1} \\&=\frac{1}{\sigma !}\sum _{p=0}^{\sigma }(-1)^p\left( {\begin{array}{c}\sigma \\ p\end{array}}\right) \frac{t^{\sigma -p}\text {e}^{-\xi _1t}}{(-\xi _1+\xi _2)^{\sigma +p+1}}\frac{(\sigma +p)!}{\sigma !}, \end{aligned} \end{aligned}$$
(5.77)
$$\begin{aligned} \begin{aligned} \text {Res}\Big [\frac{1}{(s+\xi _1)^{\sigma +1}(s+\xi _2)^{\sigma +1}}\text {e}^{st}\Big ]\Big |_{s=-\xi _2}&=\frac{1}{\sigma !}\frac{d^\sigma }{ds^\sigma }\Big [\frac{1}{(s+\xi _1)^{\sigma +1}}\text {e}^{st}\Big ]\Big |_{s=-\xi _2} \\&=\frac{1}{\sigma !}\sum _{p=0}^{\sigma }(-1)^p\left( {\begin{array}{c}\sigma \\ p\end{array}}\right) \frac{t^{\sigma -p}\text {e}^{-\xi _2t}}{(-\xi _2+\xi _1)^{\sigma +p+1}}\frac{(\sigma +p)!}{\sigma !}. \end{aligned} \end{aligned}$$
(5.78)

Thus, (5.74) (\(f(m,l,t) = \mathcal {L}^{-1}[\tilde{f}(m,l,s)]\)) can be written as

$$\begin{aligned} \begin{aligned} f(m,l,t)=&\frac{\gamma (k)^{\sigma +1} (k^{+})^m(k^{-})^l}{m!l!}\Big [\sum _{p=0}^{\sigma }(-1)^p\left( {\begin{array}{c}\sigma \\ p\end{array}}\right) \frac{t^{\sigma -p}\text {e}^{-\xi _1t}}{(-\xi _1+\xi _2)^{\sigma +p+1}}\frac{(\sigma +p)!}{\sigma !} \\&+\sum _{p=0}^{\sigma }(-1)^p\left( {\begin{array}{c}\sigma \\ p\end{array}}\right) \frac{t^{\sigma -p}\text {e}^{-\xi _2t}}{(-\xi _2+\xi _1)^{\sigma +p+1}}\frac{(\sigma +p)!}{\sigma !}\Big ] \\ =&\frac{\gamma (k)^{\sigma +1} (k^{+})^m(k^{-})^l}{m!l!}\frac{t^\sigma }{\sqrt{\pi }}\text {e}^{-\frac{\xi _1+\xi _2}{2}t}\Big [\frac{\sqrt{(\xi _1-\xi _2)t}}{(\xi _2-\xi _1)^{\sigma +1}}K_{\sigma +\frac{1}{2}}(\frac{\xi _1-\xi _2}{2}t) \\&+ \frac{\sqrt{(\xi _2-\xi _1)t}}{(\xi _1-\xi _2)^{\sigma +1}}K_{\sigma +\frac{1}{2}}(\frac{\xi _2-\xi _1}{2}t)\Big ], \end{aligned} \end{aligned}$$
(5.79)

where K is the modified Bessel function of the second kind (5.60). We then take advantage of an identity that relates K with the modified Bessel function of the first kind, I (see 5.59), which can be found in Eq. 10.34.2 of [26],

$$\begin{aligned} \begin{aligned} K_{n+\frac{1}{2}}(-x)=-i\big ( \pi I_{n+\frac{1}{2}}(x)+(-1)^nK_{n+\frac{1}{2}}(x) \big ) \ \ \ \ \ (x>0). \end{aligned} \end{aligned}$$
(5.80)

Using this identity, we rewrite (5.79) as follows: For \(\xi _1-\xi _2>0\),

$$\begin{aligned} \begin{aligned} f(m,l,t)&= \frac{\gamma }{m!l!}\sqrt{\pi }\text {e}^{-\frac{\xi _1+\xi _2}{2}t}t^{m+l} \frac{k^{m+l+1}(k^+)^m(k^-)^l}{(\xi _1-\xi _2)^{m+l+1}} \sqrt{(\xi _1-\xi _2)t}\ I_{m+l+\frac{1}{2}}\big (\frac{\xi _1-\xi _2}{2}t\big ). \end{aligned} \end{aligned}$$
(5.81)

If \(\xi _2-\xi _1>0\), then

$$\begin{aligned} \begin{aligned} f(m,l,t)&= \frac{\gamma }{m!l!}\sqrt{\pi }\text {e}^{-\frac{\xi _1+\xi _2}{2}t}t^{m+l} \frac{k^{m+l+1}(k^+)^m(k^-)^l}{(\xi _2-\xi _1)^{m+l+1}} \sqrt{(\xi _2-\xi _1)t}\ I_{m+l+\frac{1}{2}}\big (\frac{\xi _2-\xi _1}{2}t\big ). \end{aligned} \end{aligned}$$
(5.82)

Both cases can be written compactly as

$$\begin{aligned} \begin{aligned} f(m,l,t)&= \frac{\gamma \sqrt{\pi }}{m!l!}\text {e}^{-\frac{\xi _1+\xi _2}{2}t}t^{m+l} \frac{k^{m+l+1}(k^+)^m(k^-)^l}{|\xi _2-\xi _1|^{m+l+1}} \sqrt{|\xi _2-\xi _1|t}\ I_{m+l+\frac{1}{2}}\big (\frac{|\xi _2-\xi _1|}{2}t\big ). \end{aligned} \end{aligned}$$
(5.83)

Finally, as explained in the main text, we obtain the velocity distribution by changing variables to \(v = (m-l)/t\) with the help of Dirac’s delta function. For \(m-l>0\), we obtain

$$\begin{aligned} \begin{aligned} P(v>0)=&\sum _{\begin{array}{c} m,l\\ m>l \end{array}}^{\infty }\int _{0}^{\infty }f(m,l,t)\delta (v-\frac{m-l}{t})dt\\ =&\sum _{\begin{array}{c} m,l\\ m>l \end{array}}^{\infty }\frac{m-l}{v^2} \frac{\gamma \sqrt{\pi }}{m!l!}\text {e}^{-\frac{\xi _1+\xi _2}{2}\frac{m-l}{v}}\big (\frac{m-l}{v}\big )^{m+l+\frac{1}{2}} \\&\frac{k^{m+l+1}(k^+)^m(k^-)^l}{|\xi _2-\xi _1|^{m+l+\frac{1}{2}}} \ I_{m+l+\frac{1}{2}}\big (\frac{|\xi _2-\xi _1|}{2}\frac{m-l}{v}\big ). \end{aligned} \end{aligned}$$
(5.84)

Analogously, when \(m-l<0\), we have

$$\begin{aligned} \begin{aligned} P(v<0)=&\sum _{\begin{array}{c} m,l\\ l>m \end{array}}^{\infty }\int _{0}^{\infty }f(m,l,t)\delta (v-\frac{m-l}{t})dt\\ =&\sum _{\begin{array}{c} m,l\\ l>m \end{array}}^{\infty }\frac{l-m}{v^2} \frac{\gamma \sqrt{\pi }}{m!l!}\text {e}^{-\frac{\xi _1+\xi _2}{2}\frac{m-l}{v}}\big (\frac{m-l}{v}\big )^{m+l+\frac{1}{2}}\\ {}&\frac{k^{m+l+1}(k^+)^m(k^-)^l}{|\xi _2-\xi _1|^{m+l+\frac{1}{2}}} \ I_{m+l+\frac{1}{2}}\big (\frac{|\xi _2-\xi _1|}{2}\frac{m-l}{v}\big ). \end{aligned} \end{aligned}$$
(5.85)
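
As a sanity check on (5.84)–(5.85), the Python sketch below (assuming NumPy/SciPy, with illustrative rates) simulates runs of the two-state model, records \(v=(m-l)/t\), and compares the empirical density for \(v>0\) with a truncated evaluation of (5.84). The exponentially scaled Bessel function scipy.special.ive is used so that large-argument terms do not overflow.

```python
# Monte Carlo vs truncated analytic evaluation of the velocity distribution.
# Assumes NumPy/SciPy; the rates are illustrative.
import numpy as np
from scipy.special import ive, factorial

rng = np.random.default_rng(1)
k, kp, km, gam = 1.5, 2.0, 0.7, 0.4
kT = kp + km + gam
xi1, xi2 = k, kT
d = abs(xi2 - xi1)

def simulate_run():
    m = l = 0
    t = 0.0
    while True:
        t += rng.exponential(1 / k) + rng.exponential(1 / kT)
        u = rng.random() * kT
        if u < gam:                      # detachment: run ends
            return m, l, t
        elif u < gam + kp:               # forward step
            m += 1
        else:                            # backward step
            l += 1

runs = [simulate_run() for _ in range(100_000)]
v_pos = np.array([(m - l) / t for m, l, t in runs if m > l])
frac_pos = len(v_pos) / len(runs)

def P_analytic(v, max_steps=30):         # truncated Eq. (5.84), valid for v > 0
    total = 0.0
    for m in range(max_steps):
        for l in range(m):                # only m > l contributes for v > 0
            tau = (m - l) / v             # the time selected by the delta function
            order = m + l + 0.5
            total += ((m - l) / v**2 * gam * np.sqrt(np.pi)
                      / (factorial(m) * factorial(l))
                      * np.exp(-(xi1 + xi2) / 2 * tau + d / 2 * tau)   # scaled exponent
                      * tau**order * k**(m + l + 1) * kp**m * km**l / d**order
                      * ive(order, d / 2 * tau))                        # = I * exp(-d*tau/2)
    return total

vs = np.linspace(0.2, 2.5, 12)
hist, edges = np.histogram(v_pos, bins=60, range=(0, 3), density=True)
centers = (edges[:-1] + edges[1:]) / 2
for v, a in zip(vs, [P_analytic(v) for v in vs]):
    print(v, a, frac_pos * np.interp(v, centers, hist))
```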

5.12 Appendix E: Averages in the N-state Model

We want to compute

$$\begin{aligned} \overline{\sigma } = \sum _{\sigma =0}^\infty \sigma P(\sigma ) = \textbf{1}\cdot (\hat{\textrm{I}}-\hat{\textrm{K}}_\textrm{Step})\cdot \Big [\sum _{\sigma =0}^\infty \sigma (\hat{\textrm{K}}_\textrm{Step})^\sigma \Big ]\cdot \textbf{p}(0). \end{aligned}$$
(5.86)

In order to do so, we use the following two results. Given a matrix \(\hat{\textrm{M}}\) whose eigenvalues \(\mu _i\) are such that \(|\mu _i| < 1\), we have [19]

$$\begin{aligned} \sum _{n=0}^\infty \hat{\textrm{M}}^n = (\hat{\textrm{I}}- \hat{\textrm{M}})^{-1}. \end{aligned}$$
(5.87)

Furthermore, if \(\hat{\textrm{I}}-\alpha \hat{\textrm{M}}\) is invertible (where \(\alpha \) is a scalar), then we have [19]

$$\begin{aligned} \frac{d}{d\alpha }(\hat{\textrm{I}}-\alpha \hat{\textrm{M}})^{-1}|_{\alpha =1} = (\hat{\textrm{I}}-\hat{\textrm{M}})^{-1}\cdot \hat{\textrm{M}}\cdot (\hat{\textrm{I}}-\hat{\textrm{M}})^{-1}. \end{aligned}$$
(5.88)

We can write

$$ \sum _{\sigma =0}^\infty \sigma (\hat{\textrm{K}}_\textrm{Step})^\sigma = \lim _{\alpha \rightarrow 1}\alpha \frac{d}{d\alpha }\sum _{\sigma =0}^\infty (\alpha \hat{\textrm{K}}_\textrm{Step})^{\sigma }, $$

and provided the eigenvalues of \(\hat{\textrm{K}}_\textrm{Step}\) are all smaller than 1 in magnitude (which is necessary for convergence), the summation converges when the limit \(\alpha \rightarrow 1\) is taken from below. Using (5.87), we get

$$ \sum _{\sigma =0}^\infty \sigma (\hat{\textrm{K}}_\textrm{Step})^\sigma = \lim _{\alpha \rightarrow 1}\alpha \frac{d}{d\alpha }(\hat{\textrm{I}}-\alpha \hat{\textrm{K}}_\textrm{Step})^{-1}, $$

and using (5.88), the expression becomes

$$ \sum _{\sigma =0}^\infty \sigma (\hat{\textrm{K}}_\textrm{Step})^\sigma = (\hat{\textrm{I}}-\hat{\textrm{K}}_\textrm{Step})^{-1}\cdot \hat{\textrm{K}}_\textrm{Step}\cdot (\hat{\textrm{I}}-\hat{\textrm{K}}_\textrm{Step})^{-1}. $$

Plugging this expression into (5.86), we derive

$$\begin{aligned} \overline{\sigma } = \textbf{1}\cdot \hat{\textrm{K}}_\textrm{Step}\cdot (\hat{\textrm{I}}-\hat{\textrm{K}}_\textrm{Step})^{-1}\cdot \textbf{p}(0). \end{aligned}$$
(5.89)

The other averages in (5.49) are obtained in a similar way.
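
These matrix identities are easy to verify numerically. The Python sketch below (assuming NumPy) uses a random substochastic matrix as a stand-in for \(\hat{\textrm{K}}_\textrm{Step}\) (in the model this matrix is built from the transition rates) and compares the truncated series with the closed forms in (5.87)–(5.89).

```python
# Numerical check of the matrix identities behind Eq. (5.89). Assumes NumPy;
# a random substochastic matrix stands in for K_Step, and p0 for p(0).
import numpy as np

rng = np.random.default_rng(2)
N = 4
M = rng.random((N, N))
M /= 1.2 * M.sum(axis=0)            # column sums < 1, so all |eigenvalues| < 1

I = np.eye(N)
series = sum(s * np.linalg.matrix_power(M, s) for s in range(2000))
closed = np.linalg.inv(I - M) @ M @ np.linalg.inv(I - M)
print(np.max(np.abs(series - closed)))                     # ~0, Eqs. (5.87)-(5.88)

p0 = rng.random(N); p0 /= p0.sum()                         # arbitrary initial distribution
ones = np.ones(N)
print(ones @ (I - M) @ series @ p0,                        # Eq. (5.86)
      ones @ M @ np.linalg.inv(I - M) @ p0)                # Eq. (5.89)
```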

Finally, we show that \(\overline{m}+\overline{l} = \overline{\sigma }\). In order to prove this, consider the following:

$$ \begin{aligned}&\hat{\textrm{K}}_\textrm{Fwd}\cdot (\hat{\textrm{I}}-\hat{\textrm{K}}_\textrm{Fwd})^{-1} + \hat{\textrm{K}}_\textrm{Bwd}\cdot (\hat{\textrm{I}}-\hat{\textrm{K}}_\textrm{Bwd})^{-1} =\\&-\hat{\textrm{K}}^+\cdot (\hat{\textrm{K}}^\textrm{S}+\hat{\textrm{K}}^-)^{-1}\cdot [\hat{\textrm{I}}+\hat{\textrm{K}}^+\cdot (\hat{\textrm{K}}^\textrm{S}+\hat{\textrm{K}}^-)^{-1}]^{-1} \\&-\hat{\textrm{K}}^-\cdot (\hat{\textrm{K}}^\textrm{S}+\hat{\textrm{K}}^+)^{-1}\cdot [\hat{\textrm{I}}+\hat{\textrm{K}}^-\cdot (\hat{\textrm{K}}^\textrm{S}+\hat{\textrm{K}}^+)^{-1}]^{-1} . \end{aligned} $$

Replacing the first identity matrix with \((\hat{\textrm{K}}^\textrm{S}+\hat{\textrm{K}}^-)\cdot (\hat{\textrm{K}}^\textrm{S}+\hat{\textrm{K}}^-)^{-1}\), the second with \((\hat{\textrm{K}}^\textrm{S}+\hat{\textrm{K}}^+)\cdot (\hat{\textrm{K}}^\textrm{S}+\hat{\textrm{K}}^+)^{-1}\), and remembering that \((\hat{\textrm{M}}\cdot \hat{\textrm{X}})^{-1} = \hat{\textrm{X}}^{-1}\cdot \hat{\textrm{M}}^{-1}\), we obtain

$$\begin{aligned} \hat{\textrm{K}}_\textrm{Fwd}\cdot (\hat{\textrm{I}}-\hat{\textrm{K}}_\textrm{Fwd})^{-1} + \hat{\textrm{K}}_\textrm{Bwd}\cdot (\hat{\textrm{I}}-\hat{\textrm{K}}_\textrm{Bwd})^{-1} = \hat{\textrm{K}}_\textrm{Step}\cdot (\hat{\textrm{I}}-\hat{\textrm{K}}_\textrm{Step})^{-1}. \end{aligned}$$
(5.90)

With this relationship, it is trivial to show that \(\overline{m}+\overline{l} = \overline{\sigma }\) (just multiply by \(\textbf{1}\) on the left and \(\textbf{p}(0)\) on the right).

Copyright information

© 2022 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Mugnai, M.L., Takaki, R., Thirumalai, D. (2022). Introduction to Stochastic Kinetic Models for Molecular Motors. In: Blagoev, K.B., Levine, H. (eds) Physics of Molecular and Cellular Processes. Graduate Texts in Physics. Springer, Cham. https://doi.org/10.1007/978-3-030-98606-3_5
