
Turing Instability in a Model with Two Interacting Ising Lines: Non-equilibrium Fluctuations


Abstract

This is the second of two articles devoted to a particle system that exhibits a Turing-instability-type effect. For the hydrodynamic equations obtained in Capanna and Soprano-Loto (Markov Proc Relat Fields 23(3):401–420, 2017), we find conditions under which Turing instability occurs around the zero equilibrium solution. In this instability regime, we prove two results: for long times at which the process is of infinitesimal order, the non-equilibrium fluctuations around the hydrodynamic limit are Gaussian; for times converging to the critical time, defined as the one at which the process starts to be of finite order, the \(\pm \,1\)-Fourier modes stay uniformly bounded away from zero.


References

  1. Asslani, M., Di Patti, F., Fanelli, D.: Stochastic Turing patterns on a network. Phys. Rev. E 86, 046105 (2012)

  2. Andreu-Vaillo, F., Mazón, J.M., Rossi, J.D., Toledo-Melero, J.J.: Nonlocal Diffusion Problems. Mathematical Surveys and Monographs, vol. 165. American Mathematical Society, Providence (2010)

  3. Biancalani, T., Fanelli, D., Di Patti, F.: Stochastic Turing patterns in the Brusselator model. Phys. Rev. E 81, 046215 (2010)

  4. Brémaud, P.: Markov Chains: Gibbs Fields, Monte Carlo Simulation and Queues. Texts in Applied Mathematics. Springer, New York (1999)

  5. Cao, Y., Erban, R.: Stochastic Turing patterns: analysis of compartment-based approaches (2013). arXiv:1310.7634

  6. Capanna, M., Soprano-Loto, N.: Turing instability in a model with two interacting Ising lines: hydrodynamic limit. Markov Proc. Relat. Fields 23(3), 401–420 (2017)

  7. De Masi, A., Ferrari, P.A., Lebowitz, J.L.: Reaction–diffusion equations for interacting particle systems. J. Stat. Phys. 44(3), 589–644 (1986)

  8. Han, Q.: A Basic Course in Partial Differential Equations. Graduate Studies in Mathematics, vol. 120. American Mathematical Society, Providence (2011)

  9. Kipnis, C., Landim, C.: Scaling Limits of Interacting Particle Systems. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 320. Springer, Berlin (1999)

  10. Meester, R., Roy, R.: Continuum Percolation. Cambridge Tracts in Mathematics, vol. 119. Cambridge University Press, Cambridge (1996)

  11. Murray, J.D.: Mathematical Biology II: Spatial Models and Biomedical Applications. Interdisciplinary Applied Mathematics, vol. 18, 3rd edn. Springer, New York (2003)

  12. Perthame, B.: Parabolic Equations in Biology: Growth, Reaction, Movement and Diffusion. Lecture Notes on Mathematical Modelling in the Life Sciences. Springer, Cham (2015)

  13. Pollard, D.: Convergence of Stochastic Processes. Springer Series in Statistics. Springer, New York (1984)

  14. Scarsoglio, S., Laio, F., D’Odorico, P., Ridolfi, L.: Spatial pattern formation induced by Gaussian white noise. Math. Biosci. 229(2), 174–184 (2011)

  15. Terras, A.: Fourier Analysis on Finite Groups and Applications. London Mathematical Society Student Texts, vol. 43. Cambridge University Press, Cambridge (1999)

  16. Turing, A.M.: The chemical basis of morphogenesis. Philos. Trans. R. Soc. Lond. Ser. B 237(641), 37–72 (1952)

  17. Zheng, Q., Wang, Z., Shen, J., Iqbal, A., Muhammad, H.: Turing bifurcation and pattern formation of stochastic reaction–diffusion system. Adv. Math. Phys. (2017). https://doi.org/10.1155/2017/9648538


Acknowledgements

It is a great pleasure to thank Errico Presutti for suggesting the problem to us and for his continuous advising. We also acknowledge (in alphabetical order) fruitful discussions with Inés Armendáriz, Anna De Masi, Pablo Ferrari, Ellen Saada, Livio Triolo, and Maria Eulália Vares. The authors also acknowledge the hospitality of the Laboratoire MAP5 at Université Paris Descartes. We finally thank the anonymous referees for several comments that helped us improve the presentation of the article.

Author information


Corresponding author

Correspondence to Nahuel Soprano-Loto.

Ethics declarations

Conflict of interest

We declare that our research involves no potential conflicts of interest and no studies with human participants and/or animals.

Appendix

The matrix norm \( \left\| \cdot \right\| \) is defined as \( \left\| A\right\| \mathrel {\mathop :}=\displaystyle {\sup _{\underline{v}:\Vert \underline{v}\Vert =1} \left\| A\underline{v}\right\| }\).

Lemma 4.1

There exists C (independent of k) such that

$$\begin{aligned} \big \Vert e^{tA^{\left( k\right) }}\big \Vert \le C e^{t\mu } \end{aligned}$$
(4.1)

for \(k=\pm \, 1\), and

$$\begin{aligned} \big \Vert e^{tA^{\left( k\right) }}\big \Vert \le C e^{\frac{1}{2}t{\mathfrak {R}} (\mu _1^{\left( k\right) })} \end{aligned}$$
(4.2)

for \(k\ne \pm \, 1\).

Proof

We first analyze the case \(k=1\) (the case \(k=-1\) is the same since \(A^{\left( 1\right) }=A^{\left( -1\right) }\)). Let \(S^{\left( 1\right) }\) be the matrix with columns \(\underline{v}_1^{\left( 1\right) }\) and \(\underline{v}_2^{\left( 1\right) }\). We have

$$\begin{aligned} \big \Vert e^{tA^{\left( 1\right) }}\big \Vert \le \big \Vert S^{\left( 1\right) }\big \Vert \, \Bigg \Vert \left( \begin{array}{cc} e^{t\mu _1^{\left( 1\right) }} &{}0 \\ 0 &{} e^{t\mu _2^{\left( 1\right) }} \end{array} \right) \Bigg \Vert \, \big \Vert (S^{\left( 1\right) })^{-1}\big \Vert = \big \Vert S^{\left( 1\right) }\big \Vert \, \big \Vert (S^{\left( 1\right) })^{-1}\big \Vert \, e^{t\mu }. \end{aligned}$$
(4.3)

We proceed with the case \(k\ne \pm \, 1\). Recall the definition of \({{\,\mathrm{dis}\,}}^{\left( k\right) }\mathrel {\mathop :}=[{{\,\mathrm{tr}\,}}A^{\left( k\right) }]^2-4\det A^{\left( k\right) }\) given in Sect. 2.3. Since \(\lim _{|k|\rightarrow \infty } {{\,\mathrm{dis}\,}}^{\left( k\right) } =4\tanh (\lambda \beta _1) \tanh (\lambda \beta _2)\), there exists \( C _1\) such that \(|{{\,\mathrm{dis}\,}}^{\left( k\right) }|\ge 2\tanh \left( \lambda \beta _1\right) \tanh \left( \lambda \beta _2\right) \) for every k such that \( \left| k\right| \ge C _1\).

We first analyze the sub-case \( \left| k\right| \ge C_1\). In this case the matrix is also diagonalizable; the difference is that here we need to control the coefficients uniformly in the variable \(k\). Let \(S^{\left( k\right) }\) be the matrix with columns \(\underline{v}_1^{\left( k\right) }\) and \(\underline{v}_2^{\left( k\right) }\). We have

$$\begin{aligned} \big \Vert e^{tA^{\left( k\right) }}\big \Vert \le \big \Vert S^{\left( k\right) }\big \Vert \, \Bigg \Vert \left( \begin{array}{cc} e^{t\mu _1^{\left( k\right) }} &{} 0 \\ 0 &{} e^{t\mu _2^{\left( k\right) }} \end{array} \right) \Bigg \Vert \, \big \Vert (S^{\left( k\right) })^{-1}\big \Vert = \big \Vert S^{\left( k\right) }\big \Vert \, e^{t{\mathfrak {R}}(\mu _1^{\left( k\right) })} \, \big \Vert (S^{\left( k\right) })^{-1}\big \Vert . \end{aligned}$$
(4.4)

Here \( \left\| \cdot \right\| _1\) is defined as \( \bigg \Vert \left( \begin{array}{c} z_1 \\ z_2 \end{array}\right) \bigg \Vert _1=|z_1|+|z_2|\). There exists \(C_2\) (independent of \(k\)) such that \(\max \Big \{\big \Vert \underline{v}^{\left( k\right) }_1\big \Vert _1,\big \Vert \underline{v}^{\left( k\right) }_2\big \Vert _1\Big \}\le C_2\) (see the explicit expressions (2.7) and (2.8)), so \(\big \Vert S^{\left( k\right) }\big \Vert \le C_2\). Since \((S^{\left( k\right) })^{-1}=\frac{1}{\det S^{\left( k\right) }}{{\tilde{S}}}^{\left( k\right) }\) with \({{\tilde{S}}}^{\left( k\right) }\) obtained from \(S^{\left( k\right) }\) by rearranging the coefficients and changing the signs of some of them, there exists \( C _3\) such that

$$\begin{aligned} \big \Vert (S^{\left( k\right) })^{-1}\big \Vert \le C _3\big |\det S^{\left( k\right) }\big |^{-1}&= C _3\,\frac{1}{\tanh \left( \lambda \beta _2\right) \big | \sqrt{{{\,\mathrm{dis}\,}}^{\left( k\right) }} \big | }\nonumber \\&\le C _3\,\frac{1}{\tanh \left( \lambda \beta _2\right) \sqrt{2\tanh \left( \lambda \beta _1\right) \tanh \left( \lambda \beta _2\right) } }. \end{aligned}$$
(4.5)

We finally analyze the sub-case \(k\ne \pm \, 1\) and \( \left| k\right| <C_1\). If \({{\,\mathrm{dis}\,}}^{\left( k\right) }\ne 0\), we can proceed as in the case \(k=1\), since only a finite number of \(k\)'s has to be controlled. If \({{\,\mathrm{dis}\,}}^{\left( k\right) }=0\), we have \(\mu _1^{\left( k\right) }=\mu _2^{\left( k\right) }=\mu ^{\left( k\right) }\). The matrix \(A^{\left( k\right) }\) is not diagonalizable; nevertheless, it is unitarily equivalent to the triangular matrix

$$\begin{aligned} T^{\left( k\right) }=\left( \begin{array}{cc} \mu ^{\left( k\right) } &{} a^{\left( k\right) }\\ 0 &{} \mu ^{\left( k\right) } \end{array}\right) \end{aligned}$$
(4.6)

via conjugation by a unitary matrix (this is Schur's unitary triangularization theorem). Then there exists \( C _4\) such that

$$\begin{aligned} \big \Vert e^{tA^{\left( k\right) }}\big \Vert \le C _4\big \Vert e^{tT^{\left( k\right) }}\big \Vert = C _4\,\Bigg \Vert \left( \begin{array}{cc} e^{t\mu ^{\left( k\right) }} &{} a^{\left( k\right) }te^{t\mu ^{\left( k\right) }}\\ 0 &{} e^{t\mu ^{\left( k\right) }} \end{array}\right) \Bigg \Vert =C _4\Big ( e^{t\mu ^{\left( k\right) }}+|a^{\left( k\right) }|\, te^{t\mu ^{\left( k\right) }}\Big ). \end{aligned}$$
(4.7)

We conclude by observing that there exists \( C _5\) such that \(e^{t\mu ^{\left( k\right) }}+|a^{\left( k\right) }|te^{t\mu ^{\left( k\right) }}\le C _5 e^{\frac{1}{2}t\mu ^{\left( k\right) }}\): since only finitely many cases have to be considered, \(|a^{\left( k\right) }|\) is bounded by a constant, and, as \(\mu ^{\left( k\right) }<0\), the factor \(t\) is absorbed at the price of halving the exponent. \(\square \)
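Though not part of the proof, the two bounds of Lemma 4.1 are easy to probe numerically. The following Python sketch (assuming NumPy and SciPy are available; the matrices and the constants they produce are illustrative stand-ins for \(A^{\left( k\right) }\), not those of the model) checks that a diagonalizable matrix realizes (4.1) with \(\mu \) the largest real part of the eigenvalues, and that for a Jordan block as in (4.6) the factor \(t\) from (4.7) is absorbed after halving the negative exponent, as in the last step above.

```python
import numpy as np
from scipy.linalg import expm

ts = np.linspace(0.0, 20.0, 200)

# Diagonalizable case: mu is the largest real part of the eigenvalues, as in (4.1).
A = np.array([[-1.0, 2.0], [0.5, -2.0]])
mu = max(np.linalg.eigvals(A).real)
C_diag = max(np.linalg.norm(expm(t * A), 2) / np.exp(t * mu) for t in ts)
print("diagonalizable: sup_t ||exp(tA)|| / exp(t*mu)    =", C_diag)

# Defective case: a Jordan block as in (4.6); the factor t from (4.7) is
# absorbed by halving the (negative) exponent mu0.
mu0, a = -1.0, 3.0
T = np.array([[mu0, a], [0.0, mu0]])
C_def = max(np.linalg.norm(expm(t * T), 2) / np.exp(0.5 * t * mu0) for t in ts)
print("defective:      sup_t ||exp(tT)|| / exp(t*mu0/2) =", C_def)
```

Both printed suprema are finite, which is exactly the existence of the constants \(C\) and \(C_5\).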

Proposition 4.2

For \(M\in \mathbb {N}\) and \(j\in \left\{ 1,\ldots ,M\right\} \), let \(I_j\mathrel {\mathop :}=\left[ \frac{j-1}{M},\frac{j}{M}\right) \). For integrable \(f:\mathbb {T}\rightarrow \mathbb {R}\) and \(M\in \mathbb {N}\), let \(f_M\) be the function that, for every \(j\in \left\{ 1,\ldots ,M\right\} \), takes the value \(M\int _{I_j}f\) on the interval \(I_j\) (\(f_M\) is a piecewise-constant approximation of \(f\)). Let \(f\in C\left( \mathbb {T},\mathbb {R}\right) \) be such that \(\int _0^1f_M^2\xrightarrow [ M\rightarrow \infty ]{}\int _0^1f^2\). Let \(\left( \sigma _i\right) _{i=0}^{N-1}\) be an independent family with distribution \(P \left[ \sigma _i=1\right] =P \left[ \sigma _i=-1\right] =\frac{1}{2}\). Then

$$\begin{aligned} Y^{\left( N\right) }\mathrel {\mathop :}=\frac{1}{\sqrt{N}}\sum _{i=0}^{N-1}f\left( \frac{i}{N}\right) \sigma _i \end{aligned}$$
(4.8)

converges in distribution to \(N\left( 0,\int _0^1 f^2\right) \).
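Before turning to the proof, here is a quick Monte Carlo illustration of the statement (a sketch only; the choice \(f(x)=\cos (2\pi x)\), for which \(\int _0^1 f^2=\frac{1}{2}\), is ours and not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.cos(2.0 * np.pi * x)    # continuous on the torus; int_0^1 f^2 = 1/2

N, samples = 2_000, 5_000
x = np.arange(N) / N
sigma = rng.choice([-1.0, 1.0], size=(samples, N))   # i.i.d. symmetric +-1 spins
Y = (sigma * f(x)).sum(axis=1) / np.sqrt(N)          # samples of Y^{(N)} from (4.8)

print("empirical mean     :", Y.mean())   # ~ 0
print("empirical variance :", Y.var())    # ~ 0.5 = int_0^1 f^2
```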

To prove Proposition 4.2, we need the following lemma.

Lemma 4.3

If the integrable function \(f: \mathbb {T} \rightarrow \mathbb {R}\) is such that \(f=f_M\) for some \(M\in \mathbb {N}\), the assertion of Proposition 4.2 holds.

Proof

Let \(\Lambda _j^{\left( N\right) }\mathrel {\mathop :}=\left( NI_j\right) \cap \mathbb {Z}\). We first see that

$$\begin{aligned} {{\tilde{Y}}}^{\left( N\right) }\mathrel {\mathop :}=\frac{1}{\sqrt{M}}\sum _{j=1}^M f\left( \frac{j-1}{M}\right) \frac{1}{\sqrt{ \left| \Lambda _j^{\left( N\right) }\right| }} \sum _{i\in \Lambda _j^{\left( N\right) }}\sigma _i \end{aligned}$$
(4.9)

converges weakly to \(N\left( 0,\int _0^1f^2\right) \). Call \(X^{\left( N\right) }_j\mathrel {\mathop :}=\frac{1}{\sqrt{ \left| \Lambda _j^{\left( N\right) }\right| }} \sum _{i\in \Lambda _j^{\left( N\right) }}\sigma _i\). For every \(N\), the family \( \left\{ X_j^{\left( N\right) }\right\} _{j}\) is independent, and, by the central limit theorem, \(X_j^{\left( N\right) }\) converges weakly to \(N\left( 0,1\right) \) for every \(j\). Hence the random vector \(\left( X_j^{\left( N\right) }\right) _{j}\) converges weakly to the random vector \(\left( X_j\right) _j\sim N\left( {\underline{0}},\text{ Id }_M\right) \), and therefore the random variable (4.9) converges weakly to \(\frac{1}{\sqrt{M}}\sum _{j=1}^M f\left( \frac{j-1}{M}\right) X_j\sim N\left( 0,\int _0^1f^2\right) \).

Using that f is bounded, that \({\mathrm{Var}}\left( X_j^{\left( N\right) }\right) =1\), and that \(\sqrt{\frac{ \left| \Lambda _j^{\left( N\right) }\right| }{N}}\xrightarrow [ N\rightarrow \infty ]{}\frac{1}{\sqrt{M}}\), one can see that \({\mathrm{Var}}\left( {{\tilde{Y}}}^{\left( N\right) }-Y^{\left( N\right) }\right) \xrightarrow [ N\rightarrow \infty ]{}0\); then, for every \(\tilde{\delta }>0\),

$$\begin{aligned} P\left( \left| {{\tilde{Y}}}^{\left( N\right) }-Y^{\left( N\right) }\right| >\tilde{\delta }\right) \xrightarrow [ N\rightarrow \infty ]{}0. \end{aligned}$$
(4.10)

Let \(G_{\int _0^1f^2}\) be the Gaussian probability distribution with zero mean and variance \(\int _0^1f^2\). For \(h:\mathbb {R}\rightarrow \mathbb {R}\) bounded and uniformly continuous, we have to prove that

$$\begin{aligned} \left| E\left( h\left( Y^{\left( N\right) }\right) \right) -G_{\int _0^1f^2}\left( h\right) \right| \xrightarrow [ N\rightarrow \infty ]{}0. \end{aligned}$$
(4.11)

(Recall that weak convergence of probabilities is equivalent to convergence of the expectations against bounded uniformly continuous functions.) We have already proved that

$$\begin{aligned} \left| E\left( h\left( {{\tilde{Y}}}^{\left( N\right) }\right) \right) -G_{\int _0^1f^2}\left( h\right) \right| \xrightarrow [ N\rightarrow \infty ]{}0, \end{aligned}$$
(4.12)

so we only need to prove

$$\begin{aligned} \left| E\left( h\left( Y^{\left( N\right) }\right) \right) -E\left( h\left( {{\tilde{Y}}}^{\left( N\right) }\right) \right) \right| \xrightarrow [ N\rightarrow \infty ]{}0. \end{aligned}$$
(4.13)

Fix \(\varepsilon >0\) and take \(\delta >0\) such that \( \left| h\left( y\right) -h\left( x\right) \right| <\varepsilon \) whenever \( \left| y-x\right| \le \delta \). The quantity to control in (4.13) is bounded by

$$\begin{aligned} \begin{aligned}&E\left( \left| h\left( Y^{\left( N\right) }\right) -h\left( {{\tilde{Y}}}^{\left( N\right) }\right) \right| \mathbf{1 } \left\{ \left| Y^{\left( N\right) }-{{\tilde{Y}}}^{\left( N\right) }\right| >\delta \right\} \right) \\&\quad +E\left( \left| h\left( Y^{\left( N\right) }\right) -h\left( {{\tilde{Y}}}^{\left( N\right) }\right) \right| \mathbf{1 } \left\{ \left| Y^{\left( N\right) }-{{\tilde{Y}}}^{\left( N\right) }\right| \le \delta \right\} \right) . \end{aligned} \end{aligned}$$
(4.14)

The first addend goes to zero because of (4.10) and the boundedness of \(h\); the second one is bounded by \(\varepsilon \) by the choice of \(\delta \). Since \(\varepsilon \) is arbitrary, we can conclude. \(\square \)

Proof of Proposition 4.2

Let \(f_M\) be the discretized version of f and

$$\begin{aligned} Y^{\left( N\right) }_M\mathrel {\mathop :}=\frac{1}{\sqrt{N}}\sum _{i=0}^{N-1}f_M\left( \frac{i}{N}\right) \sigma _i. \end{aligned}$$
(4.15)

From Chebyshev's inequality, there is a constant \(C\) depending only on \(f\) such that, for every \(\tilde{\delta }>0\),

$$\begin{aligned} P\left( \left| Y^{\left( N\right) }-Y_M^{\left( N\right) }\right| >\tilde{\delta }\right) \le \frac{C}{\tilde{\delta }^2M^2} \end{aligned}$$
(4.16)

(observe that this bound is uniform in \(N\)). Let \(h:\mathbb {R}\rightarrow \mathbb {R}\) be bounded and uniformly continuous. We have to prove that

$$\begin{aligned} \left| E\left( h\left( Y^{\left( N\right) }\right) \right) -G_{\int _0^1 f^2}h\right| \xrightarrow [ N\rightarrow \infty ]{}0. \end{aligned}$$
(4.17)

Fix \(\varepsilon >0\) and let \(\delta >0\) be such that \( \left| h\left( y\right) -h\left( x\right) \right| <\varepsilon \) whenever \( \left| y-x\right| \le \delta \). Take M such that \( \left| G_{\int _0^1 f_M^2}h-G_{\int _0^1 f^2}h\right| <\varepsilon \) and \(\frac{C}{\delta ^2 M^2}< \frac{\varepsilon }{ \left\| h\right\| _\infty }\). The quantity to control in (4.17) is bounded by

$$\begin{aligned} E\left( \left| h\left( Y^{\left( N\right) }\right) -h\left( Y_M^{\left( N\right) }\right) \right| \right) + \left| E\left( h\left( Y_M^{\left( N\right) }\right) \right) - G_{\int _0^1f_M^2}h \right| +\varepsilon . \end{aligned}$$
(4.18)

Multiply by \(1=\mathbf{1 } \left\{ \left| Y^{\left( N\right) }-Y_M^{\left( N\right) }\right| >\delta \right\} +\mathbf{1 } \left\{ \left| Y^{\left( N\right) }-Y_M^{\left( N\right) }\right| \le \delta \right\} \) inside the first expectation: by (4.16) and the choice of \(M\), this bounds it by \(2 \left\| h\right\| _\infty \frac{C}{\delta ^2M^2}+\varepsilon \le 3\varepsilon \). The second addend goes to zero as \(N\) goes to infinity because of Lemma 4.3. We can conclude because \(\varepsilon \) is arbitrary. \(\square \)
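The role of the discretization in the proof above can also be seen numerically. The sketch below (with an illustrative smooth \(f\); the constants are not those of the article) builds \(f_M\) from the block averages \(M\int _{I_j}f\) and checks that \(\mathrm{Var}\left( Y^{\left( N\right) }-Y_M^{\left( N\right) }\right) =\frac{1}{N}\sum _i\big (f(\tfrac{i}{N})-f_M(\tfrac{i}{N})\big )^2\) is of order \(M^{-2}\) uniformly in \(N\), which is the content of (4.16).

```python
import numpy as np

f = lambda x: np.sin(2.0 * np.pi * x)    # illustrative smooth f on the torus

def f_M(x, M):
    """Piecewise-constant approximation: value M * int_{I_j} f on I_j = [(j-1)/M, j/M)."""
    grid = np.linspace(0.0, 1.0, 100 * M, endpoint=False)
    block_avg = f(grid).reshape(M, -1).mean(axis=1)   # quadrature for M * int_{I_j} f
    j = np.minimum((x * M).astype(int), M - 1)        # index of the interval containing x
    return block_avg[j]

N = 50_000
x = np.arange(N) / N
for M in (4, 8, 16, 32):
    var = np.mean((f(x) - f_M(x, M)) ** 2)   # = Var(Y^{(N)} - Y_M^{(N)}), unit-variance spins
    print(f"M={M:3d}  Var ~ {var:.2e}  M^2 * Var ~ {M * M * var:.3f}")   # M^2 * Var stabilizes
```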

Lemma 4.4

There exists a constant C such that, for every \(\delta >0\),

$$\begin{aligned}&\mathbb {P}\left( \sup _{t\in [0,\infty )} \left\| \int _0^t e^{\left( t-s\right) A^{\left( k\right) }}\underline{M}_\gamma ^{\left( k\right) }\left( {\mathrm{d}}s\right) \right\| \le k^2 \gamma ^{\frac{1}{2}-\delta }\quad \forall k\ne \pm \, 1\right) \ge 1-C\gamma ^{2\delta } \end{aligned}$$
(4.19)
$$\begin{aligned}&\mathbb {P}\left( \sup _{t\in [0,\infty )} \left\| \int _0^t e^{-sA^{\left( k\right) }}\underline{M}_\gamma ^{\left( k\right) }\left( {\mathrm{d}}s\right) \right\| \le \gamma ^{\frac{1}{2}-\delta }, \;k=\pm \, 1\right) \ge 1- C\gamma ^{2\delta } \end{aligned}$$
(4.20)
$$\begin{aligned}&\mathbb {P}\left( \left\| \underline{X}_\gamma ^{\left( k\right) }\left( 0\right) \right\| \le k^2 \gamma ^{\frac{1}{2}-\delta }\quad \forall k\in \mathbb {Z}\right) \ge 1-C\gamma ^{2\delta }. \end{aligned}$$
(4.21)

Proof

We start by proving (4.19). Observe that the proof follows once we show that there exists a constant C such that, for all \(T>0\),

$$\begin{aligned} \mathbb {P}\left( \exists k\ne \pm \, 1:\sup _{t\in [0,T]} \left\| \int _0^t e^{\left( t-s\right) A^{\left( k\right) }}\underline{M}_\gamma ^{\left( k\right) }\left( {\mathrm{d}}s\right) \right\| > k^2 \gamma ^{\frac{1}{2}-\delta }\right) \le C\gamma ^{2\delta }, \end{aligned}$$
(4.22)

which follows from

$$\begin{aligned} \begin{aligned} \sum _{k\ne \pm 1}\mathbb {P}\left( \sup _{t\in [0,T]} \left\| \int _0^t e^{\left( t-s\right) A^{\left( k\right) }}\underline{M}_\gamma ^{\left( k\right) }\left( {\mathrm{d}}s\right) \right\| > k^2 \gamma ^{\frac{1}{2}-\delta }\right) \le C\gamma ^{2\delta }. \end{aligned} \end{aligned}$$
(4.23)

Call \(B^{\left( k\right) }_{i,j}\left( t-s\right) \) the entry in position \(\left( i,j\right) \) of the matrix \(e^{\left( t-s\right) A^{\left( k\right) }}\), and observe that, by Doob's inequality and Itô's isometry, we get

$$\begin{aligned} \begin{aligned} \mathbb {P}\left( \sup _{t\in [0,T]} \left| \int _0^tB^{\left( k\right) }_{i,j} \left( t-s\right) \# M_{\gamma ,l}^{\left( k\right) }\left( {\mathrm{d}}s\right) \right| >\varepsilon \right)&\le \varepsilon ^{-2}\mathbb {E} \left[ \left( \int _0^TB^{\left( k\right) }_{i,j}\left( T-s\right) \# M_{\gamma ,l}^{\left( k\right) }\left( {\mathrm{d}}s\right) \right) ^2\right] \\&\le \varepsilon ^{-2}\mathbb {E}\left( \int _0^TB^{\left( k\right) }_{i,j}\left( T-s\right) ^2\langle \# M_{\gamma ,l}^{\left( k\right) }\rangle \left( {\mathrm{d}}s\right) \right) \end{aligned} \end{aligned}$$
(4.24)

for every \(\varepsilon >0\), \(i,j,l\in \left\{ 1,2\right\} \), and \(\#\in \left\{ {\mathfrak {R}},{\mathfrak {I}}\right\} \). Here \( \langle \# M_{\gamma ,l}^{\left( k\right) } \rangle (t) \) denotes the quadratic variation of the martingale \( \# M_{\gamma ,l}^{\left( k\right) }(t) \). From Lemma 5.1 in Appendix A of [9], we can express it as follows:

$$\begin{aligned} \langle \# M_{\gamma ,l}^{\left( k\right) }\rangle \left( t\right) =\int _0^t \left( L_\gamma \langle \sigma _{\gamma , l}\left( s\right) , \# F^{\left( k\right) }\rangle ^2-2\langle \sigma _{\gamma , l}\left( s\right) , \# F^{\left( k\right) }\rangle L_\gamma \langle \sigma _{\gamma , l}\left( s\right) , \# F^{\left( k\right) }\rangle \right) {\mathrm{d}}s. \end{aligned}$$
(4.25)

Expanding this expression, we get

$$\begin{aligned}&\int _0^tL_\gamma \left\langle \sigma _{\gamma , l}\left( s\right) , \# F^{\left( k\right) }\right\rangle ^2-2 \left\langle \sigma _{\gamma , l}\left( s\right) , \# F^{\left( k\right) }\right\rangle L_\gamma \left\langle \sigma _{\gamma , l}\left( s\right) , \# F^{\left( k\right) }\right\rangle {\mathrm{d}}s \end{aligned}$$
(4.26)
$$\begin{aligned}&\qquad =\int _0^t\sum _{x\in \Lambda _\gamma }R_l\left( x, \underline{\sigma }_\gamma \right) \left( \left\langle \sigma _{\gamma , l}^x, \#F^{(k)}\right\rangle - \left\langle \sigma _{\gamma , l}, \#F^{(k)}\right\rangle \right) ^2 {\mathrm{d}}s\end{aligned}$$
(4.27)
$$\begin{aligned}&\qquad \le \int _0^t \sum _{x\in \Lambda _\gamma }R_l\left( x, \underline{\sigma }_\gamma \right) 4\gamma ^2 \left\| \#F^{(k)}\right\| _\infty ^2\,{\mathrm{d}}s \le C_1\gamma t \end{aligned}$$
(4.28)

for a constant \(C_1\) that does not depend on \(k\) (the dependence disappears because \(\Vert \#F^{(k)}\Vert _\infty \le 1\) for all \(k\)). Since the maximum of the moduli of the entries of a matrix defines a norm, and since all norms on a finite-dimensional space are equivalent, Lemma 4.1 guarantees the existence of a constant \(C_2\) such that

$$\begin{aligned} |B^{\left( k\right) }_{i,j}\left( t-s\right) |\le C_2e^{\frac{1}{2}{\mathfrak {R}} (\mu _1^{\left( k\right) })\left( t-s\right) }. \end{aligned}$$
(4.29)

Plugging the estimates (4.28) and (4.29) into (4.24), we get

$$\begin{aligned}&\mathbb {P}\left( \sup _{t\in [0,T]} \left| \int _0^tB^{\left( k\right) }_{i,j}\left( t-s\right) \# M_{\gamma ,l}^{\left( k\right) }\left( {\mathrm{d}}s\right) \right| >\varepsilon \right) \nonumber \\&\quad \le \varepsilon ^{-2}\gamma C_1C_2^2\int _0^T e^{{\mathfrak {R}}(\mu _1^{\left( k\right) })\left( T-s\right) }{\mathrm{d}}s\le \frac{C_1C_2^2}{|{\mathfrak {R}}(\mu _1^{\left( k\right) })|}\varepsilon ^{-2}\gamma \end{aligned}$$
(4.30)

for every \(\varepsilon >0\), \(i,j,l\in \left\{ 1,2\right\} \), and \(\#\in \left\{ {\mathfrak {R}},{\mathfrak {I}}\right\} \). From the explicit expression of the eigenvalue \( \mu _1^{\left( k\right) } \) given in (2.5), we get \(\lim _{ \left| k\right| \rightarrow \infty }{\mathfrak {R}}(\mu _1^{\left( k\right) })=-1\); since \({\mathfrak {R}}(\mu _1^{\left( k\right) })<0\) for every \(k\ne \pm \,1\), the family \(\{{\mathfrak {R}}(\mu _1^{\left( k\right) })\}_{k\ne \pm 1}\) is uniformly bounded away from zero, and we can find a common constant \(C_3\) that replaces \(\frac{C_1C_2^2}{|{\mathfrak {R}}(\mu _1^{\left( k\right) })|}\) on the right-hand side of (4.30). Then, by decomposing \(\int _0^t e^{\left( t-s\right) A^{\left( k\right) }}\underline{M}_\gamma ^{\left( k\right) }\left( {\mathrm{d}}s\right) \) first into its two coordinates and then into their real and imaginary parts, we conclude that, for all \(\zeta >0\) and all \(k\ne \pm \, 1\),

$$\begin{aligned} \begin{aligned} \mathbb {P}\left( \sup _{t\in [0,T]} \left\| \int _0^t e^{\left( t-s\right) A^{\left( k\right) }}\underline{M}_\gamma ^{\left( k\right) }\left( {\mathrm{d}}s\right) \right\| >\zeta \right)&\le C_4\zeta ^{-2}\gamma . \end{aligned} \end{aligned}$$
(4.31)

Estimate (4.23) follows after taking \(\zeta =k^2\gamma ^{\frac{1}{2}-\delta }\) in (4.31) and summing the resulting bounds \(C_4k^{-4}\gamma ^{2\delta }\) over \(k\) (the sum \(\sum _{|k|\ge 2}k^{-4}\) is finite). The proof of (4.20) is analogous, so we omit it. Finally, to prove (4.21), we can proceed as we just did to reduce the problem to proving that

$$\begin{aligned} \mathbb {P}\left( |\#{X}_{\gamma , i}^{\left( k\right) }(0)|\ge \tilde{\varepsilon }\right) \le \tilde{\varepsilon }^{-2}\gamma \end{aligned}$$
(4.32)

for every \(\tilde{\varepsilon }>0\), \(k\in \mathbb {Z}\), \(i\in \left\{ 1,2\right\} \), and \(\#\in \left\{ {\mathfrak {R}},{\mathfrak {I}}\right\} \); the last claim is a consequence of Chebyshev's inequality. \(\square \)
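As a closing remark, the union bound behind (4.19) rests on the summability of the per-mode bounds \(C_4k^{-4}\gamma ^{2\delta }\) obtained by taking \(\zeta =k^2\gamma ^{\frac{1}{2}-\delta }\) in (4.31). The following check (a sketch with illustrative values of \(\gamma \), \(\delta \), and \(C_4\), restricted to the modes \(|k|\ge 2\)) makes the resulting constant explicit:

```python
import numpy as np

gamma, delta, C4 = 1e-4, 0.1, 1.0                  # illustrative values only
ks = np.arange(2, 100_000).astype(float)           # positive modes k >= 2
tail_sum = 2.0 * np.sum(ks ** -4)                  # accounts for both k and -k
print("sum_{|k|>=2} |k|^{-4} =", tail_sum)         # = 2*(pi^4/90 - 1) ~ 0.165
print("union bound           =", C4 * tail_sum * gamma ** (2 * delta))
```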


About this article


Cite this article

Capanna, M., Soprano-Loto, N. Turing Instability in a Model with Two Interacting Ising Lines: Non-equilibrium Fluctuations. J Stat Phys 174, 365–403 (2019). https://doi.org/10.1007/s10955-018-2206-7
