Abstract
This is the second of two articles on a particle system model that exhibits a Turing-instability-type effect. For the hydrodynamic equations obtained in Capanna and Soprano-Loto (Markov Proc. Relat. Fields 23(3):401–420, 2017), we find conditions under which Turing instability occurs around the zero equilibrium solution. In this instability regime, we prove two results: for long times at which the process is of infinitesimal order, the non-equilibrium fluctuations around the hydrodynamic limit are Gaussian; for times converging to the critical time, defined as the time at which the process starts to be of finite order, the \(\pm \,1\)-Fourier modes stay uniformly away from zero.
References
Asllani, M., Di Patti, F., Fanelli, D.: Stochastic Turing patterns on a network. Phys. Rev. E 86, 046105 (2012)
Andreu-Vaillo, F., Mazón, J.M., Rossi, J.D., Toledo-Melero, J.J.: Nonlocal Diffusion Problems. Mathematical Surveys and Monographs, vol. 165. American Mathematical Society, Providence (2010)
Biancalani, T., Fanelli, D., Di Patti, F.: Stochastic Turing patterns in the Brusselator model. Phys. Rev. E 81, 046215 (2010)
Brémaud, P.: Markov Chains: Gibbs Fields, Monte Carlo Simulation and Queues, Texts in Applied Mathematics. Springer, New York (1999)
Cao, Y., Erban, R.: Stochastic Turing patterns: analysis of compartment-based approaches (2013). arXiv:1310.7634
Capanna, M., Soprano-Loto, N.: Turing instability in a model with two interacting Ising lines: hydrodynamic limit. Markov Proc. Relat. Fields 23(3), 401–420 (2017)
De Masi, A., Ferrari, P.A., Lebowitz, J.L.: Reaction–diffusion equations for interacting particle systems. J. Stat. Phys. 44(3), 589–644 (1986)
Han, Q.: A Basic Course in Partial Differential Equations, Graduate Studies in Mathematics, vol. 120. American Mathematical Society, Providence (2011)
Kipnis, C., Landim, C.: Scaling Limits of Interacting Particle Systems, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 320. Springer, Berlin (1999)
Meester, R., Roy, R.: Continuum Percolation, Cambridge Tracts in Mathematics, vol. 119. Cambridge University Press, Cambridge (1996)
Murray, J.D.: Mathematical Biology II: Spatial Models and Biomedical Applications. Interdisciplinary Applied Mathematics, vol. 18, 3rd edn. Springer, New York (2003)
Perthame, B.: Parabolic Equations in Biology: Growth, Reaction, Movement and Diffusion. Lecture Notes on Mathematical Modelling in the Life Sciences. Springer, Cham (2015)
Pollard, D.: Convergence of Stochastic Processes. Springer Series in Statistics. Springer, New York (1984)
Scarsoglio, S., Laio, F., D’Odorico, P., Ridolfi, L.: Spatial pattern formation induced by Gaussian white noise. Math. Biosci. 229(2), 174–184 (2011)
Terras, A.: Fourier Analysis on Finite Groups and Applications, London Mathematical Society Student Texts, vol. 43. Cambridge University Press, Cambridge (1999)
Turing, A.M.: The chemical basis of morphogenesis. Philos. Trans. R. Soc. Lond. Ser. B 237(641), 37–72 (1952)
Zheng, Q., Wang, Z., Shen, J., Iqbal, A., Muhammad, H.: Turing bifurcation and pattern formation of stochastic reaction–diffusion system. Adv. Math. Phys. (2017). https://doi.org/10.1155/2017/9648538
Acknowledgements
It is a great pleasure to thank Errico Presutti for suggesting us the problem and for his continuous advising. We also acknowledge (in alphabetical order) fruitful discussions with Inés Armendáriz, Anna De Masi, Pablo Ferrari, Ellen Saada, Livio Triolo, and Maria Eulália Vares. The authors also acknowledge the hospitality of Laboratoire MAP5 at Université Paris Descartes. We finally thank the anonymous referees for several comments that helped us improve the presentation of the article.
Ethics declarations
Conflict of interest
We declare that our research involves no potential conflicts of interest and did not involve human participants or animals.
Appendix
The matrix norm \( \left\| \cdot \right\| \) is defined as \( \left\| A\right\| \mathrel {\mathop :}=\sup _{\underline{v}:\Vert \underline{v}\Vert =1} \left\| A\underline{v}\right\| \).
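For the Euclidean vector norm, this induced matrix norm equals the largest singular value; a minimal numerical sketch (the matrix below is an arbitrary illustration, not one from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0], [0.0, -3.0]])  # arbitrary illustrative matrix

# Approximate sup over unit vectors v of ||A v|| by sampling many directions.
v = rng.standard_normal((2, 100_000))
v /= np.linalg.norm(v, axis=0)
sampled = np.linalg.norm(A @ v, axis=0).max()

# For the Euclidean norm, the supremum is attained at the top singular vector,
# so the exact value is the largest singular value of A.
exact = np.linalg.norm(A, 2)
print(sampled <= exact)  # expect: True (sampling can only undershoot)
```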
Lemma 4.1
There exists C (independent of k) such that
for \(k=\pm \, 1\), and
for \(k\ne \pm \, 1\).
Proof
We first analyze the case \(k=1\) (the case \(k=-1\) is the same since \(A^{\left( 1\right) }=A^{\left( -1\right) }\)). Let \(S^{\left( 1\right) }\) be the matrix with columns \(\underline{v}_1^{\left( 1\right) }\) and \(\underline{v}_2^{\left( 1\right) }\). We have
We proceed with the case \(k\ne \pm \, 1\). Recall the definition of \({{\,\mathrm{dis}\,}}^{\left( k\right) }\mathrel {\mathop :}=[{{\,\mathrm{tr}\,}}A^{\left( k\right) }]^2-4\det A^{\left( k\right) }\) given in Sect. 2.3. Since \(\lim _{|k|\rightarrow \infty } {{\,\mathrm{dis}\,}}^{\left( k\right) } =4\tanh (\lambda \beta _1) \tanh (\lambda \beta _2)\), there exists \( C _1\) such that \(|{{\,\mathrm{dis}\,}}^{\left( k\right) }|\ge 2\tanh \left( \lambda \beta _1\right) \tanh \left( \lambda \beta _2\right) \) for every k such that \( \left| k\right| \ge C _1\).
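As a concrete reminder of the role of the discriminant: for any \(2\times 2\) matrix, the eigenvalues are \(\frac{1}{2}\big ({{\,\mathrm{tr}\,}}A\pm \sqrt{{{\,\mathrm{dis}\,}}}\big )\). A minimal numerical check with an arbitrary matrix (not the article's \(A^{\left( k\right) }\)):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 0.0]])  # arbitrary illustrative 2x2 matrix

tr, det = np.trace(A), np.linalg.det(A)
dis = tr ** 2 - 4 * det                 # discriminant: [tr A]^2 - 4 det A

# With dis > 0, the two eigenvalues are real and equal (tr +/- sqrt(dis)) / 2.
mu1 = (tr + np.sqrt(dis)) / 2
mu2 = (tr - np.sqrt(dis)) / 2
print(np.allclose(sorted([mu1, mu2]), sorted(np.linalg.eigvals(A))))  # expect: True
```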
We first analyze the sub-case \( \left| k\right| \ge C_1\). In this case, the matrix is again diagonalizable; the difference is that here we need to control the dependence of the coefficients on k. Let \(S^{\left( k\right) }\) be the matrix with columns \(\underline{v}_1^{\left( k\right) }\) and \(\underline{v}_2^{\left( k\right) }\). We have
There exists \(C_2\) (independent of k) such that \(\max \Big \{\big \Vert \underline{v}^{\left( k\right) }_1\big \Vert _1,\big \Vert \underline{v}^{\left( k\right) }_2\big \Vert _1\Big \}\le C_2\) (see the explicit expressions (2.7) and (2.8)), so \(\big \Vert S^{\left( k\right) }\big \Vert \le C_2\). Here \( \left\| \cdot \right\| _1\) is defined as \( \bigg \Vert \left( \begin{array}{c} z_1 \\ z_2 \end{array}\right) \bigg \Vert _1=|z_1|+|z_2|\). Since \((S^{\left( k\right) })^{-1}=\frac{1}{\det S^{\left( k\right) }}{{\tilde{S}}}^{\left( k\right) }\) with \({{\tilde{S}}}^{\left( k\right) }\) obtained from \(S^{\left( k\right) }\) after rearranging the coefficients and changing the signs of some of them, there exists \( C _3\) such that
We finally analyze the sub-case \(k\ne \pm \, 1\) and \( \left| k\right| <C_1\). If \({{\,\mathrm{dis}\,}}^{\left( k\right) }\ne 0\), we can proceed as in the case \(k=1\) since we have to control only a finite number of k’s. If \({{\,\mathrm{dis}\,}}^{\left( k\right) }=0\), we have \(\mu _1^{\left( k\right) }=\mu _2^{\left( k\right) }=\mu ^{\left( k\right) }\). The matrix \(A^{\left( k\right) }\) is not diagonalizable; nevertheless it is equivalent to a triangular matrix
via conjugation by unitary matrices (this is Schur's unitary triangularization theorem). Then there exists \( C _4\) such that
We conclude by observing that there exists \( C _5\) such that \(e^{t\mu ^{\left( k\right) }}+|a^{\left( k\right) }|te^{t\mu ^{\left( k\right) }}\le C _5 e^{\frac{1}{2}t\mu ^{\left( k\right) }}\) (since there are only finitely many cases to consider, \(|a^{\left( k\right) }|\) can be bounded by a constant). \(\square \)
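The last bound can be sanity-checked numerically: for \(\mu <0\), the function \(t\mapsto (1+|a|t)e^{\frac{1}{2}t\mu }\) is bounded, and its supremum gives an admissible constant \(C_5\). A sketch with illustrative values of \(\mu \) and a (not the article's eigenvalues):

```python
import numpy as np

mu, a = -0.8, 2.5                        # illustrative values with mu < 0
t = np.linspace(0.0, 200.0, 400_001)

# (1 + |a| t) e^{t mu / 2} is bounded on [0, inf); its sup is an admissible C_5.
C5 = ((1.0 + abs(a) * t) * np.exp(0.5 * mu * t)).max()

# Then (1 + |a| t) e^{t mu} <= C_5 e^{t mu / 2} holds for every t >= 0.
lhs = (1.0 + abs(a) * t) * np.exp(mu * t)
rhs = C5 * np.exp(0.5 * mu * t)
print(bool(np.all(lhs <= rhs * (1 + 1e-12))))  # expect: True
```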
Proposition 4.2
For \(M\in \mathbb {N}\) and \(j\in \left\{ 1,\ldots ,M\right\} \), let \(I_j\mathrel {\mathop :}=\left[ \frac{j-1}{M},\frac{j}{M}\right) \). For integrable \(f:\mathbb {T}\rightarrow \mathbb {R}\) and \(M\in \mathbb {N}\), let \(f_M\) be the function that takes the value \(M\int _{I_j}f\) on the interval \(I_j\) for every \(j\in \left\{ 1,\ldots ,M\right\} \) (\(f_M\) is a piecewise-constant approximation of f). Let \(f\in C\left( \mathbb {T},\mathbb {R}\right) \) be such that \(\int _0^1f_M^2\xrightarrow [ M\rightarrow \infty ]{}\int _0^1f^2\). Let \(\left( \sigma _i\right) _{i=0}^{N-1}\) be an independent family with distribution \(P \left[ \sigma _i=1\right] =P \left[ \sigma _i=-1\right] =\frac{1}{2}\). Then
converges in distribution to \(N\left( 0,\int _0^1 f^2\right) \).
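The statement can be checked by simulation. The sketch below assumes the omitted display defines \(Y^{\left( N\right) }=\frac{1}{\sqrt{N}}\sum _{i=0}^{N-1}f\left( \frac{i}{N}\right) \sigma _i\) (our reading of the statistic used in the proof), with an illustrative choice of f for which \(\int _0^1f^2=\frac{1}{2}\):

```python
import numpy as np

rng = np.random.default_rng(1)
N, samples = 1_000, 5_000
f = lambda x: np.sin(2 * np.pi * x)      # illustrative f with int_0^1 f^2 = 1/2

x = np.arange(N) / N
sigma = rng.choice([-1.0, 1.0], size=(samples, N))  # Rademacher signs
Y = (sigma * f(x)).sum(axis=1) / np.sqrt(N)

# Empirical mean and variance should approach 0 and int_0^1 f^2 = 0.5.
print(float(Y.mean()), float(Y.var()))
```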
To prove Proposition 4.2, we need the following lemma.
Lemma 4.3
If the integrable function \(f: \mathbb {T} \rightarrow \mathbb {R}\) is such that \(f=f_M\) for some \(M\in \mathbb {N}\), the assertion of Proposition 4.2 holds.
Proof
Let \(\Lambda _j^{\left( N\right) }\mathrel {\mathop :}=\left( NI_j\right) \cap \mathbb {Z}\). We first see that
converges weakly to \(N\left( 0,\int _0^1f^2\right) \). Call \(X^{\left( N\right) }_j\mathrel {\mathop :}=\frac{1}{\sqrt{ \left| \Lambda _j^{\left( N\right) }\right| }} \sum _{i\in \Lambda _j^{\left( N\right) }}\sigma _i\). For every N, the family \( \left\{ X_j^{\left( N\right) }\right\} _{j}\) is independent. Also \(X_j^{\left( N\right) }\) converges weakly to \(N\left( 0,1\right) \) for every j. Then the random vector \(\left( X_j^{\left( N\right) }\right) _{j}\) converges weakly to the random vector \(\left( X_j\right) _j\sim N\left( {\underline{0}},\text{ Id }_M\right) \). Then the random variable (4.9) converges weakly to \(\frac{1}{\sqrt{M}}\sum _{j=1}^M f\left( \frac{j-1}{M}\right) X_j\sim N\left( 0,\int _0^1f^2\right) \).
Using that f is bounded, that \({\mathrm{Var}}\left( X_j^{\left( N\right) }\right) =1\), and that \(\sqrt{\frac{ \left| \Lambda _j^{\left( N\right) }\right| }{N}}\xrightarrow [ N\rightarrow \infty ]{}\frac{1}{\sqrt{M}}\), one can see that \({\mathrm{Var}}\left( {{\tilde{Y}}}^{\left( N\right) }-Y^{\left( N\right) }\right) \xrightarrow [ N\rightarrow \infty ]{}0\); then, for every \(\tilde{\delta }>0\),
Let \(G_{\int _0^1f^2}\) be the Gaussian probability distribution with zero mean and variance \(\int _0^1f^2\). For \(h:\mathbb {R}\rightarrow \mathbb {R}\) bounded and uniformly continuous, we have to prove that
(Recall that weak convergence of probabilities is equivalent to convergence of the expectations against bounded uniformly continuous functions.) We have already proved that
so we only need to prove
Fix \(\varepsilon >0\) and take \(\delta >0\) such that \( \left| h\left( y\right) -h\left( x\right) \right| <\varepsilon \) whenever \( \left| y-x\right| \le \delta \). The quantity to control in (4.13) is bounded by
The first addend goes to zero because of (4.10); the second one is bounded by \(\varepsilon \). Since \(\varepsilon \) is arbitrary, we can conclude. \(\square \)
Proof of Proposition 4.2
Let \(f_M\) be the discretized version of f and
By Chebyshev's inequality, there is a constant depending only on f such that, for every \(\tilde{\delta }>0\),
(observe that this bound is uniform in N). Let \(h:\mathbb {R}\rightarrow \mathbb {R}\) be bounded and uniformly continuous. We have to prove that
Fix \(\varepsilon >0\) and let \(\delta >0\) be such that \( \left| h\left( y\right) -h\left( x\right) \right| <\varepsilon \) whenever \( \left| y-x\right| \le \delta \). Take M such that \( \left| G_{\int _0^1 f_M^2}h-G_{\int _0^1 f^2}h\right| <\varepsilon \) and \(\frac{C}{\delta ^2 M^2}< \frac{\varepsilon }{ \left\| h\right\| _\infty }\). The quantity to control in (4.17) is bounded by
Multiply by \(1=\mathbf{1 } \left\{ \left| Y^{\left( N\right) }-Y_M^{\left( N\right) }\right| >\delta \right\} +\mathbf{1 } \left\{ \left| Y^{\left( N\right) }-Y_M^{\left( N\right) }\right| \le \delta \right\} \) inside the first expectation to get the upper bound \(2\varepsilon \) for it. The second addend goes to zero as N goes to infinity because of Lemma 4.3. We can conclude because \(\varepsilon \) is arbitrary. \(\square \)
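The role of the discretization can be illustrated numerically: for continuous f, the averages \(f_M\) satisfy \(\int _0^1 f_M^2\rightarrow \int _0^1 f^2\), monotonically along nested partitions since \(f_M\) is an \(L^2\) projection. A sketch with an illustrative f:

```python
import numpy as np

f = lambda x: np.cos(2 * np.pi * x) + x  # illustrative continuous f on [0, 1)
GRID = 102_400                           # divisible by each M used below

def int_fM_squared(M):
    """Riemann-sum approximation of int_0^1 f_M^2, where f_M averages f on each I_j."""
    x = (np.arange(GRID) + 0.5) / GRID
    fM = f(x).reshape(M, -1).mean(axis=1)  # cell averages: one value per interval I_j
    return (fM ** 2).mean()

x = (np.arange(GRID) + 0.5) / GRID
int_f_squared = (f(x) ** 2).mean()

# By Jensen's inequality each error is nonnegative, and it shrinks as M grows.
errors = [int_f_squared - int_fM_squared(M) for M in (4, 16, 64)]
print([round(e, 4) for e in errors])
```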
Lemma 4.4
There exists a constant C such that, for every \(\delta >0\),
Proof
We start by proving (4.19). Observe that the proof follows once we show that there exists a constant C such that, for all \(T>0\),
which follows from
Call \(B^{\left( k\right) }_{i,j}\left( t-s\right) \) the entry in position \(\left( i,j\right) \) of the matrix \(e^{\left( t-s\right) A^{\left( k\right) }}\), and observe that, by Doob's inequality and Itô's isometry, we get
for every \(\varepsilon >0\), \(i,j,l\in \left\{ 1,2\right\} \), and \(\#\in \left\{ {\mathfrak {R}},{\mathfrak {I}}\right\} \). Here \( \langle \# M_{\gamma ,l}^{\left( k\right) } \rangle (t) \) denotes the quadratic variation of the martingale \( \# M_{\gamma ,l}^{\left( k\right) }(t) \). From Lemma 5.1 in Appendix A of [9], we can express it as follows:
Expanding this expression, we get
for a constant \(C_1\) that does not depend on k (the dependence on k disappears because \(\Vert \#F^{(k)}\Vert _\infty \le 1\) for all k). Since the maximum of the modulus of the entries of a matrix defines a norm, and since all norms on a finite-dimensional space are equivalent, Lemma 4.1 guarantees the existence of a constant \(C_2\) such that
Plugging the estimations (4.26) and (4.29) into (4.24), we get
for every \(\varepsilon >0\), \(i,j,l\in \left\{ 1,2\right\} \), and \(\#\in \left\{ {\mathfrak {R}},{\mathfrak {I}}\right\} \). From the explicit expression of the eigenvalue \( \mu _1^{\left( k\right) } \) given in (2.5), we get \(\lim _{ \left| k\right| \rightarrow \infty }{\mathfrak {R}}(\mu _1^{\left( k\right) })=-1\), so the family \(\{{\mathfrak {R}}(\mu _1^{\left( k\right) })\}_k\) is uniformly lower bounded in k and we can find a common constant \(C_3\) which can replace \(\frac{C_2^2}{{\mathfrak {R}}(\mu _1^{\left( k\right) })}\) in the right hand side of (4.30). Then, by decomposing \(\int _0^t e^{\left( t-s\right) A^{\left( k\right) }}\underline{M}_\gamma ^{\left( k\right) }\left( {\mathrm{d}}s\right) \) first into the two coordinates and after into their real and imaginary parts, we can conclude that, for all \(\zeta >0\) and for all \(k\ne \pm \, 1\)
Estimate (4.23) follows by taking \(\zeta =k^2\gamma ^{\frac{1}{2}-\delta }\) in (4.31). The proof of (4.20) is analogous, so we omit it. Finally, to prove (4.21), we can proceed as we just did to reduce the problem to proving that
for every \(\tilde{\varepsilon }>0\), \(i\in \left\{ 1,2\right\} \), and \(\#\in \left\{ {\mathfrak {R}},{\mathfrak {I}}\right\} \); the latter is a consequence of Chebyshev's inequality. \(\square \)
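The martingale estimate above rests on Doob's \(L^2\) maximal inequality, \(E\big [\sup _{s\le T}|M_s|^2\big ]\le 4\,E\big [|M_T|^2\big ]\). A simulation sketch for a simple symmetric random walk (an illustrative martingale, not the article's \(\underline{M}_\gamma ^{\left( k\right) }\)):

```python
import numpy as np

rng = np.random.default_rng(2)
paths, T = 5_000, 400

# Simple symmetric random walk: a discrete-time martingale started at 0.
steps = rng.choice([-1.0, 1.0], size=(paths, T))
M = np.cumsum(steps, axis=1)

lhs = (np.abs(M).max(axis=1) ** 2).mean()  # E[ sup_{s<=T} |M_s|^2 ]
rhs = 4.0 * (M[:, -1] ** 2).mean()         # 4 E[ |M_T|^2 ], approximately 4T here
print(lhs <= rhs)  # expect: True
```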
Cite this article
Capanna, M., Soprano-Loto, N. Turing Instability in a Model with Two Interacting Ising Lines: Non-equilibrium Fluctuations. J Stat Phys 174, 365–403 (2019). https://doi.org/10.1007/s10955-018-2206-7