
Scaling Limit of Semiflexible Polymers: A Phase Transition

Published in Communications in Mathematical Physics.


We consider a semiflexible polymer in \({{\,\mathrm{{\mathbb {Z}}}\,}}^d\) which is a random interface model with a mixed gradient and Laplacian interaction. The strength of the two operators is governed by two parameters called lateral tension and bending rigidity, which might depend on the size of the graph. In this article we show a phase transition in the scaling limit according to the strength of these parameters: we prove that the scaling limit is, respectively, the Gaussian free field, a “mixed” random distribution and the continuum membrane model in three different regimes.



  1. We shall use \(\Delta \) in the subscript of the spaces and the norms instead of \(\Delta _c\) to ease notation.


  1. Beals, R.: Classes of compact operators and eigenvalue distributions for elliptic operators. Am. J. Math. 89(4), 1056–1072 (1967)


  2. Berestycki, N.: Introduction to the Gaussian free field and Liouville quantum gravity (2015). Accessed 3 May 2018

  3. Biskup, M.: Extrema of the two-dimensional discrete Gaussian free field. In: Barlow, M.T., Slade, G. (eds.) Random Graphs, Phase Transitions, and the Gaussian Free Field, pp. 163–407. Springer International Publishing, Cham (2020)


  4. Borecki, M.: Pinning and wetting models for polymers with \((\nabla +\Delta )\)-interaction. Thesis (2010). Accessed 29 July 2018

  5. Borecki, M., Caravenna, F.: Localization for \((1+1)\)-dimensional pinning models with \((\nabla +\Delta )\)-interaction. Electron. Commun. Probab. 15, 534–548 (2010).


  6. Brascamp, H.J., Lieb, E.H.: On extensions of the Brunn–Minkowski and Prékopa–Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation. J. Funct. Anal. 22(4), 366–389 (1976)


  7. Caravenna, F., Deuschel, J.-D.: Pinning and wetting transition for \((1+1)\)-dimensional fields with Laplacian interaction. Ann. Probab. 36(6), 2388–2433 (2008)


  8. Caravenna, F., Deuschel, J.-D.: Scaling limits of \((1+1)\)-dimensional pinning models with Laplacian interaction. Ann. Probab. 37(3), 903–945 (2009).


  9. Cipriani, A., Dan, B., Hazra, R.S.: The scaling limit of the \((\nabla +\Delta ) \)-model. arXiv preprint arXiv:1808.02676 (2018)

  10. Cipriani, A., Dan, B., Hazra, R.S.: The scaling limit of the membrane model. Ann. Probab. 47(6), 3963–4001 (2019).


  11. Ding, J., Roy, R., Zeitouni, O.: Convergence of the centered maximum of log-correlated Gaussian fields. Ann. Probab. 45(6A), 3886–3928 (2017).


  12. Evans, L.C.: Partial Differential Equations. Graduate Studies in Mathematics Edition, vol. 19. AMS, Providence (2002)


  13. Folland, G.: Real Analysis: Modern Techniques and Their Applications. Pure and Applied Mathematics. Wiley, Hoboken (1999)


  14. Gazzola, F., Grunau, H., Sweers, G.: Polyharmonic Boundary Value Problems: Positivity Preserving and Nonlinear Higher Order Elliptic Equations in Bounded Domains. Lecture Notes in Mathematics, vol. 1991. Springer, Berlin (2010). ISBN 9783642122446

  15. Hörmander, L.: The Analysis of Linear Partial Differential Operators I: Distribution Theory and Fourier Analysis. Springer, Berlin (2015)


  16. Hryniv, O., Velenik, Y.: Some rigorous results on semiflexible polymers, i: free and confined polymers. Stoch. Process. Appl. 119(10), 3081–3100 (2009).


  17. Kallenberg, O.: Foundations of Modern Probability. Springer, Berlin (2006)


  18. Kurt, N.: Entropic repulsion for a Gaussian membrane model in the critical and supercritical dimension. PhD thesis, University of Zurich (2008). Accessed 2 Aug 2018

  19. Leibler, S.: Equilibrium statistical mechanics of fluctuating films and membranes. In: Statistical Mechanics of Membranes and Surfaces. World Scientific, pp. 49–101 (2004)

  20. Lipowsky, R.: Generic interactions of flexible membranes. Handb. Biol. Phys. 1, 521–602 (1995)


  21. Müller, S., Schweiger, F.: Estimates for the Green’s function of the discrete bilaplacian in dimensions 2 and 3. Vietnam J. Math. 47(1), 133–181 (2019)


  22. Pleijel, Å.: On the eigenvalues and eigenfunctions of elastic plates. Commun. Pure Appl. Math. 3(1), 1–10 (1950).


  23. Rudin, W.: Functional Analysis. McGraw-Hill, New York (1991)


  24. Ruiz-Lorenzo, J.J., Cuerno, R., Moro, E., Sánchez, A.: Phase transition in tensionless surfaces. Biophys. Chem. 115(2–3), 187–193 (2005)


  25. Sakagawa, H.: Localization of a Gaussian membrane model with weak pinning potentials. ALEA 15, 1123–1140 (2018)


  26. Schweiger, F.: The maximum of the four-dimensional membrane model. arXiv preprint arXiv:1903.02522 (2019)

  27. Sheffield, S.: Gaussian free fields for mathematicians. Probab. Theory Relat. Fields 139(3–4), 521–541 (2007)


  28. Thomée, V.: Elliptic difference operators and Dirichlet’s problem. Contrib. Differ. Equ. 3(3), 301–324 (1964)



Author information



Corresponding author

Correspondence to Rajat Subhra Hazra.

Additional information

Communicated by H. Spohn.


The first author is supported by the Grant 613.009.102 of the Netherlands Organisation for Scientific Research (NWO). The third author acknowledges the MATRICS Grant from SERB and the Dutch stochastics cluster STAR (Stochastics—Theoretical and Applied Research) for an invitation to TU Delft where part of this work was carried out. The authors would like to thank Luca Avena and Alberto Chiarini for their remarks that led to the draft of the present article. We are very grateful to an anonymous referee for her/his insightful comments and for outlining a new proof of Theorem 2.10 (3) improving on a previous version.


Appendix A. Covariance Bound for the Membrane Model in \(d=1\)

In this section we consider \(d=1\) and the membrane model \((\varphi _x)_{x\in V_N}\) on \(V_N = \{1,\ldots , N-1\}\) with zero boundary conditions outside \(V_N\). We want to show the following bound:

Lemma A.1

There exists a constant \(C>0\) such that

$$\begin{aligned} {\mathbf {E}}_{V_N} [(\varphi _x-\varphi _{x+1})^2] \le CN, \quad x\in {{\,\mathrm{{\mathbb {Z}}}\,}}. \end{aligned}$$


Proof. Let \(\{X_i\}_{i\in {{\,\mathrm{{\mathbb {N}}}\,}}}\) be a sequence of i.i.d. standard Gaussian random variables. We define \(\{Y_i\}_{i\in {{\,\mathrm{{\mathbb {Z}}}\,}}^+}\) to be the associated random walk starting at 0, that is,

$$\begin{aligned} Y_0=0,\quad Y_n=\sum _{i=1}^n X_i,\,\,n\in {{\,\mathrm{{\mathbb {N}}}\,}}, \end{aligned}$$

and \(\{Z_i\}_{i\in {{\,\mathrm{{\mathbb {Z}}}\,}}^+}\) to be the integrated random walk starting at 0, that is, \(Z_0=0\) and for \(n\in {{\,\mathrm{{\mathbb {N}}}\,}}\)

$$\begin{aligned} Z_n=\sum _{i=1}^n Y_i. \end{aligned}$$

Then one can show that \({\mathbf {P}}_{V_N}\) is the law of the vector \((Z_1,\ldots ,Z_{N-1})\) conditionally on \(Z_N=Z_{N+1}=0\) [7, Proposition 2.2]. So we have that

$$\begin{aligned} {\mathbf {E}}_{V_N}\left[ (\varphi _{i+1}-\varphi _i)^2\right] = {\mathbf {E}}\left[ (Z_{i+1}-Z_i)^2| Z_N=Z_{N+1}=0\right] ={\mathbf {E}}\left[ Y_{i+1}^2| Z_N=Z_{N+1}=0\right] . \end{aligned}$$

Hence it is enough to find a bound for \({\mathbf {E}}[Y_i^2|Z_N=Z_{N+1}=0]\) for \(i=1,\ldots ,N-1\). The covariance matrix \(\Sigma \) for \((Y_1,\ldots , Y_{N-1}, Z_{N}, Z_{N+1})\) can be partitioned as

$$\begin{aligned} \Sigma = \begin{bmatrix} A &{} B \\ C &{} D \end{bmatrix} \end{aligned}$$

where A is a \((N-1) \times (N-1)\) matrix with entries

$$\begin{aligned} A(i,j)= \mathbf {Cov}( Y_i, Y_j)= \min \{i,\,j\}. \end{aligned}$$

B and C are \((N-1)\times 2\) and \(2\times (N-1)\) matrices respectively, with \(C=B^T\) and

$$\begin{aligned} B(i,j)= \mathbf {Cov}(Y_i,Z_{j+N-1})= \sum _{l=1}^{j+N-1} \min \{i,\,l\}. \end{aligned}$$

Finally, D is a \(2\times 2\) matrix with

$$\begin{aligned} D(i,j)= \mathbf {Cov}(Z_{i+N-1},Z_{j+N-1}). \end{aligned}$$

It easily follows that

$$\begin{aligned} D=\frac{1}{6}\begin{bmatrix} N(N+1) (2N+1) &{} N(N+1)(2N+4)\\ N(N+1)(2N+4) &{} (N+1)(N+2)(2N+3) \end{bmatrix}. \end{aligned}$$
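The entries of \(A\), \(B\) and \(D\) are covariances of integer linear combinations of the \(X_k\), so they can be checked by brute force for small \(N\); a throwaway sketch (function names are ours, for illustration only):

```python
from fractions import Fraction

def y_coeffs(i, dim):
    # Y_i = X_1 + ... + X_i as a coefficient vector in (X_1, ..., X_dim)
    return [1 if k <= i else 0 for k in range(1, dim + 1)]

def z_coeffs(n, dim):
    # Z_n = sum_{i=1}^n Y_i, so X_k enters with multiplicity n - k + 1
    return [n - k + 1 if k <= n else 0 for k in range(1, dim + 1)]

def cov(a, b):
    # covariance of two linear combinations of i.i.d. standard Gaussians
    return sum(x * y for x, y in zip(a, b))

N = 7
dim = N + 1
# A(i,j) = Cov(Y_i, Y_j) = min(i,j)
for i in range(1, N):
    for j in range(1, N):
        assert cov(y_coeffs(i, dim), y_coeffs(j, dim)) == min(i, j)
# B(i,j) = Cov(Y_i, Z_{j+N-1}) = sum_{l=1}^{j+N-1} min(i,l)
for i in range(1, N):
    for j in (1, 2):
        assert cov(y_coeffs(i, dim), z_coeffs(j + N - 1, dim)) \
            == sum(min(i, l) for l in range(1, j + N))
# D = (1/6)[[N(N+1)(2N+1), N(N+1)(2N+4)], [N(N+1)(2N+4), (N+1)(N+2)(2N+3)]]
assert cov(z_coeffs(N, dim), z_coeffs(N, dim)) == Fraction(N * (N + 1) * (2 * N + 1), 6)
assert cov(z_coeffs(N, dim), z_coeffs(N + 1, dim)) == Fraction(N * (N + 1) * (2 * N + 4), 6)
assert cov(z_coeffs(N + 1, dim), z_coeffs(N + 1, dim)) == Fraction((N + 1) * (N + 2) * (2 * N + 3), 6)
print("A, B, D entries verified for N =", N)
```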

It is well known that \((Y_1,\ldots ,Y_{N-1}|Z_N=Z_{N+1}=0)\) is a Gaussian vector with mean zero and covariance matrix given by \(A-BD^{-1}C\). The inverse of D is as follows. Observe

$$\begin{aligned} \gamma _N:= \det (D)= \frac{1}{12} N(N+1)^2(N+2) \end{aligned}$$

and

$$\begin{aligned} D^{-1}= \frac{1}{\gamma _N}\begin{bmatrix} D(2,2) &{} -D(1,2)\\ -D(2,1) &{} D(1,1) \end{bmatrix}. \end{aligned}$$

Now the diagonal element of \(BD^{-1}C\) can be determined:

$$\begin{aligned} (BD^{-1}C)(i,i)=\frac{1}{\gamma _N}\left[ \left( \sum _{l=1}^{N} \min \{i,\,l\}\right) ^2D(2,2)-2\left( \sum _{l=1}^{N} \min \{i,\,l\}\right) \left( \sum _{l=1}^{N+1} \min \{i,\,l\}\right) D(1,2) + \left( \sum _{l=1}^{N+1} \min \{i,\,l\}\right) ^2D(1,1)\right] . \end{aligned}$$

Plugging in the entries \(D(i,j)\) from (A.1) and simplifying we get

$$\begin{aligned} (BD^{-1}C)(i,i)=\frac{i^2(N+1)}{24\gamma _N}\left[ 6i^2-12(N+1)i+8N^2+16N+6\right] >0, \end{aligned}$$

the positivity holding because the quadratic in \(i\) inside the brackets has negative discriminant \(-48N(N+2)\).

This shows that for \(i=1,2,\ldots , N-1\),

$$\begin{aligned} {\mathbf {E}}[Y_i^2|Z_N=Z_{N+1}=0]=A(i,i)-(BD^{-1}C)(i,i)<i. \end{aligned}$$

A similar bound can be obtained for \({\mathbf {E}}[Y_N^2|Z_N=Z_{N+1}=0]\), and this completes the proof.

\(\square \)
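Since all the covariances above are rational, the Schur-complement computation in the proof can be reproduced in exact arithmetic; a short sketch (helper names are ours) confirming \(0<{\mathbf {E}}[Y_i^2\,|\,Z_N=Z_{N+1}=0]<i\) for small \(N\):

```python
from fractions import Fraction

def S(i, n):
    # Cov(Y_i, Z_n) = sum_{l=1}^{n} min(i, l)
    return sum(min(i, l) for l in range(1, n + 1))

def cond_var(i, N):
    # E[Y_i^2 | Z_N = Z_{N+1} = 0] = A(i,i) - (B D^{-1} C)(i,i), exact arithmetic
    D11 = Fraction(N * (N + 1) * (2 * N + 1), 6)
    D12 = Fraction(N * (N + 1) * (2 * N + 4), 6)
    D22 = Fraction((N + 1) * (N + 2) * (2 * N + 3), 6)
    det = D11 * D22 - D12 * D12          # = gamma_N
    b1, b2 = S(i, N), S(i, N + 1)        # i-th row of B
    quad = (b1 * b1 * D22 - 2 * b1 * b2 * D12 + b2 * b2 * D11) / det
    return i - quad                      # A(i,i) = min(i,i) = i

for N in (2, 5, 10, 50):
    for i in range(1, N):
        v = cond_var(i, N)
        assert 0 < v < i                 # positive variance, strictly below A(i,i)
print("conditional variances bounded by i for all tested N")
```

For \(N=2\), \(i=1\) this returns \(1/6\), matching the direct computation \(Y_1=X_1\) conditioned on \(2X_1+X_2=3X_1+2X_2+X_3=0\).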

Appendix B. Details on the Space \({\mathcal {H}}^{-s}_{-\Delta +\Delta ^2}(D)\)

In this section we briefly describe some details of the space \({\mathcal {H}}^{-s}_{-\Delta +\Delta ^2}(D)\) and of the spectral theory of \(-\Delta _c+\Delta ^2_c\). This is an elliptic operator, and its spectral theory is similar to that of either \(-\Delta _c\) or \(\Delta _c^2\). First recall the standard Sobolev inner products on \(H^1_0(D)\) and \(H^2_0(D)\). They are

$$\begin{aligned} \left\langle u, v\right\rangle _{1}= \int _D \nabla u\cdot \nabla v {{\,\mathrm{d}\,}}x,\quad u,\,v\in H^1_0(D) \end{aligned}$$
and


$$\begin{aligned} \left\langle u, v\right\rangle _{2}= \int _D \Delta u \Delta v {{\,\mathrm{d}\,}}x,\quad u,\,v\in H^2_0(D) \end{aligned}$$

and they induce norms on \(H^1_0(D)\) and \(H^2_0(D)\) respectively which are equivalent to the standard Sobolev norms [14, Corollary 2.29]. We now consider the following inner product on \(H^2_0(D)\):

$$\begin{aligned} \left\langle u, v\right\rangle _{mixed}:= \int _D \nabla u\cdot \nabla v {{\,\mathrm{d}\,}}x + \int _D \Delta u \Delta v {{\,\mathrm{d}\,}}x,\,\,u,v\in H^2_0(D). \end{aligned}$$

Clearly the norm induced by this inner product is equivalent to the norm \(\Vert \cdot \Vert _{H^2_0}\) (by integration by parts). We consider \(H^{-2}(D)\) to be the dual of \((H^2_0(D),\,\Vert \cdot \Vert _{mixed})\).

We now give some results whose proofs are similar to Theorems 3.2 and 3.3 of [10].

  (1)

    There exists a bounded linear isometry

    $$\begin{aligned} T_0:H^{-2}(D)\rightarrow (H^2_0(D),\,\Vert \cdot \Vert _{mixed}) \end{aligned}$$

    such that, for all \(f\in H^{-2}(D)\) and for all \(v\in H^2_0(D)\),

    $$\begin{aligned} (f\,,\,v)= \left\langle v,\,T_0 f\right\rangle _{mixed}. \end{aligned}$$

    Moreover, the restriction T on \(L^2(D)\) of the operator \(i\circ T_0 :H^{-2}(D)\rightarrow L^2(D) \) is a compact and self-adjoint operator, where \(i: (H^2_0(D),\,\Vert \cdot \Vert _{mixed}) \hookrightarrow L^2(D)\) is the inclusion map.

  (2)

    There exist \(v_1,\, v_2,\, \ldots \) in \((H_0^2(D),\Vert \cdot \Vert _{mixed})\) and numbers \(0<\mu _1\le \mu _2\le \cdots \rightarrow \infty \) such that

    • \(\{v_j\}_{j\in {{\,\mathrm{{\mathbb {N}}}\,}}}\) is an orthonormal basis for \(L^2(D)\),

    • \(Tv_j=\mu ^{-1}_jv_j\),

    • \(\left\langle v_j,v \right\rangle _{mixed}=\mu _j\left\langle v_j, v \right\rangle _{L^2}\) for all \(v\in H^2_0(D)\),

    • \(\{\mu ^{-1/2}_jv_j\}\) is an orthonormal basis for \((H_0^2(D),\Vert \cdot \Vert _{mixed})\).

For each \(j\in {{\,\mathrm{{\mathbb {N}}}\,}}\) one has \(v_j\in C^\infty (D).\) Moreover \(v_j\) is an eigenfunction of \(-\Delta _c + \Delta _c^2\) with eigenvalue \(\mu _j\). Indeed, we have for all \(v\in H_0^2(D)\)

$$\begin{aligned} \left\langle (-\Delta _c + \Delta _c^2) v_j,\,v \right\rangle _{L^2} = \left\langle -\Delta _c v_j,\,v \right\rangle _{L^2} + \left\langle \Delta _c^2 v_j,\,v \right\rangle _{L^2} {\mathop {=}\limits ^{\mathrm {GI}}}\left\langle v_j,\,v \right\rangle _{mixed} =\mu _j\left\langle v_j,\,v \right\rangle _{L^2} \end{aligned}$$

where “GI” stands for Green’s first identity

$$\begin{aligned} \int _D u\Delta v {{\,\mathrm{d}\,}}V=-\int _D\nabla u\cdot \nabla v {{\,\mathrm{d}\,}}V+\int _{\partial D}u\nabla v\cdot {\mathbf {n}} {{\,\mathrm{d}\,}}S. \end{aligned}$$

Thus \(v_j\) is an eigenfunction of \(-\Delta _c + \Delta _c^2\) with eigenvalue \(\mu _j\) in the weak sense. The smoothness of \(v_j\) follows from the fact that \(-\Delta _c + \Delta _c^2\) is an elliptic operator with smooth coefficients, together with the elliptic regularity theorem [13, Theorem 9.26]; hence \(v_j\) is an eigenfunction in the classical sense as well. As a consequence of the above, one easily has that

$$\begin{aligned} \Vert f\Vert _{mixed}^2 = \sum _{j\ge 1} \mu _j \left\langle f, v_j\right\rangle ^2_{L^2} \end{aligned}$$

for any \(f\in H^2_0(D)\).

For any \(v\in C_c^\infty (D)\) and for any \(s>0\) we define

$$\begin{aligned} \Vert v\Vert _{s, \,-\Delta + \Delta ^2}^2:=\sum _{j\in {{\,\mathrm{{\mathbb {N}}}\,}}}\mu _j^{s/2}\left\langle v,v_j\right\rangle _{L^2}^2. \end{aligned}$$

We define \({\mathcal {H}}_{-\Delta +\Delta ^2, 0}^{s}(D)\) to be the Hilbert space completion of \(C_c^\infty (D)\) with respect to the norm \(\Vert \cdot \Vert _{s, \,-\Delta + \Delta ^2}\). Then \(\left( {\mathcal {H}}_{-\Delta +\Delta ^2, 0}^{s}(D) \,,\,\Vert \cdot \Vert _{s, \,-\Delta + \Delta ^2}\right) \) is a Hilbert space for all \(s>0\). Moreover, we also notice the following.

  • Note that for \(s=2\) we have \({\mathcal {H}}_{-\Delta +\Delta ^2, 0}^{2}(D)= (H_0^2(D),\,\Vert \cdot \Vert _{mixed})\) by (B.1).

  • \(i:{\mathcal {H}}_{-\Delta +\Delta ^2, 0}^{s}(D)\hookrightarrow L^2(D)\) is a continuous embedding.

For \(s>0\) we define \({\mathcal {H}}^{-s}_{-\Delta +\Delta ^2}(D)= ({\mathcal {H}}_{-\Delta +\Delta ^2, 0}^{s}(D))^*\), the dual space of \({\mathcal {H}}_{-\Delta +\Delta ^2, 0}^{s}(D)\). Then we have

$$\begin{aligned} {\mathcal {H}}_{-\Delta +\Delta ^2, 0}^s(D) \subseteq L^2(D) \subseteq {\mathcal {H}}^{-s}_{-\Delta +\Delta ^2}(D). \end{aligned}$$

One can show using the Riesz representation theorem that for \(s>0\), and \(v\in L^2(D)\) the norm of \({\mathcal {H}}^{-s}_{-\Delta +\Delta ^2}(D)\) is given by

$$\begin{aligned} \Vert v\Vert _{-s, \,-\Delta + \Delta ^2}^2:=\sum _{j\in {{\,\mathrm{{\mathbb {N}}}\,}}}\mu _j^{-s/2}\left\langle v,\,v_j\right\rangle _{L^2}^2. \end{aligned}$$
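In a finite-dimensional truncation this dual-norm identity is just the Riesz representation/Cauchy–Schwarz computation in weighted coordinates; a toy numerical illustration (the eigenvalues are replaced by a Weyl-type stand-in, an assumption for illustration only):

```python
import math
import random

random.seed(0)
J, d, s = 200, 3, 1.0
mu = [(j + 1) ** (4 / d) for j in range(J)]      # stand-in eigenvalues, Weyl-type growth
v = [random.gauss(0, 1) for _ in range(J)]       # coefficients <v, v_j>_{L^2}

# ||v||_{-s}^2 = sum_j mu_j^{-s/2} <v, v_j>^2
neg_sq = sum(m ** (-s / 2) * c * c for m, c in zip(mu, v))

# duality: sup_{||u||_s <= 1} <u, v>_{L^2} is attained at
# u_j = mu_j^{-s/2} v_j / ||v||_{-s} (Cauchy-Schwarz in weighted coordinates)
u = [m ** (-s / 2) * c / math.sqrt(neg_sq) for m, c in zip(mu, v)]
assert abs(sum(m ** (s / 2) * c * c for m, c in zip(mu, u)) - 1) < 1e-9   # ||u||_s = 1
pairing = sum(c1 * c2 for c1, c2 in zip(u, v))
assert abs(pairing - math.sqrt(neg_sq)) < 1e-9   # pairing attains ||v||_{-s}
```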

Before we show the definition of the continuum mixed model, we need an analog of Weyl’s law for the eigenvalues of the operator \(-\Delta _c + \Delta _c^2\).

Proposition B.1

(Beals [1, Theorem 5.1], Pleijel [22]). There exists an explicit constant c such that, as \(j\uparrow +\infty \),

$$\begin{aligned} \mu _j\sim c^{-d/4}j^{4/d}. \end{aligned}$$


Proof. We want to apply Theorem 5.1 of [1] to \(A:= -\Delta _c + \Delta _c^2\). First note that A is an elliptic operator of order \(m=4\) defined on D with smooth coefficients. Consider \(A_1:= (-\Delta _c+\Delta _c^2)|_{H^4(D)\cap H^2_0(D)}\). Clearly, \(A_1: H^4(D)\cap H^2_0(D) \rightarrow L^2(D)\) and \(C_c^\infty (D) \subset D(A_1) \subset H^4(D)\), where \(D(A_1)\) is the domain of \(A_1\). By elliptic regularity we have \(D(A_1^p) \subset H^{4p}(D)\) for \(p = 1,\,2,\,\ldots \) We first show that \(A_1\) is self-adjoint. Since \(C_c^\infty (D) \subset D(A_1)\) and \(C_c^\infty (D)\) is dense in \(L^2(D)\), \(A_1\) is densely defined. Moreover, by Green's identity we have for all \(u,v \in H^4(D)\cap H^2_0(D)\)

$$\begin{aligned} \left\langle (-\Delta _c + \Delta _c^2)u,\, v\right\rangle _{L^2} = \left\langle \nabla u,\, \nabla v\right\rangle _{L^2} + \left\langle \Delta _c u,\, \Delta _c v\right\rangle _{L^2} = \left\langle u,\, (-\Delta _c + \Delta _c^2)v\right\rangle _{L^2}. \end{aligned}$$

Thus \(A_1\) is symmetric. Also, by Corollary 2.21 of Gazzola et al. [14] the image of \(A_1\) is \(L^2(D)\). The self-adjointness of \(A_1\) now follows from Theorem 13.11 of [23], and we conclude from Theorem 13.9 of [23] that \(A_1\) is closed. Applying Theorem 5.1 of [1] we obtain the asymptotics. \(\quad \square \)

The next result shows the well-posedness of the series expansion for \(\Psi ^{-\Delta +\Delta ^2}\).

Proposition B.2

Let \((\xi _j)_{j\in {{\,\mathrm{{\mathbb {N}}}\,}}}\) be a collection of i.i.d. standard Gaussian random variables. Set

$$\begin{aligned} \Psi ^{-\Delta +\Delta ^2}:=\sum _{j\in {{\,\mathrm{{\mathbb {N}}}\,}}}\mu _j^{-1/2}\xi _j v_j. \end{aligned}$$

Then \(\Psi ^{-\Delta +\Delta ^2}\in {\mathcal {H}}^{-s}_{-\Delta +\Delta ^2}(D)\) a.s. for all \(s>({d-4})/2\).


Proof. Fix \(s>({d-4})/2\). Clearly \(v_j\in L^2(D)\subseteq {\mathcal {H}}^{-s}_{-\Delta +\Delta ^2}(D)\). We need to show that \(\Vert \Psi ^{-\Delta +\Delta ^2}\Vert _{-s, \,-\Delta + \Delta ^2}<+\infty \) almost surely. This boils down to showing the finiteness of the random series

$$\begin{aligned} \Vert \Psi ^{-\Delta +\Delta ^2}\Vert _{-s, \,-\Delta + \Delta ^2}^2=\sum _{j\ge 1} \mu _j^{-s/2} \left\langle \sum _{k\ge 1} \mu _k^{-1/2} \xi _k v_k\, , v_j\right\rangle _{L^2}^2=\sum _{j\ge 1} \mu _j^{-\frac{s}{2}-1}\xi _j^2 \end{aligned}$$

where the last equality holds since \((v_j)_{j\ge 1}\) forms an orthonormal basis of \(L^2(D)\). Observe that the assumptions of Kolmogorov's two-series theorem are satisfied: indeed, using Proposition B.1 one has

$$\begin{aligned} \sum _{j\ge 1}{\mathbf {E}}\left( \mu _j^{-\frac{s}{2}-1}\xi _j^2\right) \asymp \sum _{j\ge 1} j^{-\frac{4}{d}\left( \frac{s}{2}+1\right) }<+\infty \end{aligned}$$

for \(s>(d-4)/2\) and

$$\begin{aligned} \sum _{j\ge 1}\mathbf {Var}\left( \mu _j^{-\frac{s}{2}-1}\xi _j^2\right) \asymp \sum _{j\ge 1} j^{-\frac{4}{d}(s+2)}<+\infty \end{aligned}$$

for \(s>(d-8)/4\). The result then follows. \(\quad \square \)
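The convergence can also be seen numerically by truncating the series; the sketch below replaces \(\mu _j\) by the Weyl-type stand-in \(j^{4/d}\) (an assumption, for illustration only):

```python
import random

d, s = 3, 1.0                 # s > (d - 4)/2, so the series converges a.s.

def partial_norm(J):
    # partial sums of ||Psi||_{-s}^2 = sum_j mu_j^{-s/2 - 1} xi_j^2,
    # with mu_j replaced by the stand-in j^(4/d)
    random.seed(1)            # same realization of the xi_j for every truncation level
    return sum((j ** (4 / d)) ** (-s / 2 - 1) * random.gauss(0, 1) ** 2
               for j in range(1, J + 1))

tail = partial_norm(20000) - partial_norm(10000)
assert 0 <= tail < 1e-3       # exponent (4/d)(s/2 + 1) = 2 here: summable tail
print("tail of the series beyond j = 10000:", tail)
```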

Appendix C. Random Walk Representation of the \((\nabla +\Delta )\)-model in \(d=1\) and Estimates

In this appendix we recall some of the notation for the \(d=1\) case which was used in the heuristic explanations of the Introduction. We take advantage of the representation of the mixed model given in Borecki [4, Subsection 3.3.1] in our setting. To do so we set \(\beta _N:= 16 \kappa _N\) and define


$$\begin{aligned} \gamma = \left( \frac{1+\beta _N-\sqrt{1+2\beta _N}}{1+\beta _N+\sqrt{1+2\beta _N}}\right) ^{1/2} \end{aligned}$$

and let \((\varepsilon _i)_{i\in {{\,\mathrm{{\mathbb {Z}}}\,}}^{+}}\) be i.i.d. \({\mathcal {N}}(0,\sigma ^2)\) with

$$\begin{aligned} \sigma ^2= 4/(1+\beta _N+\sqrt{1+2\beta _N}). \end{aligned}$$
Define the process \((Y_n)_{n\in {{\,\mathrm{{\mathbb {N}}}\,}}}\) by


$$\begin{aligned} Y_n= \gamma ^{n-1}\varepsilon _1+\cdots + \gamma ^0 \varepsilon _n= \sum _{i=1}^{n}\gamma ^{n-i}\varepsilon _i. \end{aligned}$$

Let the integrated walk be denoted by

$$\begin{aligned} W_n=\sum _{i=1}^n Y_i= r_{n-1}\varepsilon _1+\cdots +r_0 \varepsilon _n= \sum _{i=1}^n r_{n-i}\varepsilon _i \end{aligned}$$

where \(r_{k}:= \sum _{l=0}^{k} \gamma ^l\) for \(k\ge 0\).

We consider the case when \(\kappa _N\rightarrow \infty \) and note that then \(\gamma =\gamma _N\rightarrow 1\) and \(\sigma _N^2=\sigma ^2\rightarrow 0\). The following representation will give an idea on how the phase transition occurs in the mixed model:

$$\begin{aligned} W_n= \frac{1}{1-\gamma }(\varepsilon _1+\cdots +\varepsilon _n)-\frac{1}{1-\gamma }(\gamma ^n\varepsilon _1+\gamma ^{n-1}\varepsilon _2+\cdots +\gamma \varepsilon _n). \end{aligned}$$
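The splitting can be checked directly: the coefficient of \(\varepsilon _i\) on both sides is \((1-\gamma ^{n+1-i})/(1-\gamma )\). A throwaway numerical check with arbitrary values (an assumption, for illustration):

```python
import random

random.seed(2)
gamma, n = 0.95, 40
eps = [random.gauss(0, 1) for _ in range(n)]     # eps[i-1] stands for eps_i

# W_n = sum_{i=1}^n r_{n-i} eps_i with r_k = sum_{l=0}^k gamma^l
r = lambda k: sum(gamma ** l for l in range(k + 1))
W = sum(r(n - i) * eps[i - 1] for i in range(1, n + 1))

# displayed splitting: (1/(1-gamma)) * (sum eps_i) - (1/(1-gamma)) * (sum gamma^{n+1-i} eps_i)
S = sum(eps) / (1 - gamma)
U = sum(gamma ** (n + 1 - i) * eps[i - 1] for i in range(1, n + 1)) / (1 - gamma)
assert abs(W - (S - U)) < 1e-9
print("splitting verified, W_n =", W)
```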

We recall the following proposition from Borecki [4, Proposition 1.10].

Proposition C.1

Let \({\mathbf {P}}_{N}(\cdot )\) be the mixed model with zero boundary conditions. Then

$$\begin{aligned} {\mathbf {P}}_{N}(\cdot )= {\mathbf {P}}\left( (W_1,\ldots , W_{N-1})\in \cdot | W_N=W_{N+1}=0\right) \end{aligned}$$

Let \(({\widetilde{\varepsilon }}_i)_{i\in {{\,\mathrm{{\mathbb {Z}}}\,}}^{+}}\) be i.i.d. \({\mathcal {N}}\left( 0, \frac{\sigma ^2}{(1-\gamma )^2}\right) \). Then \(W_n\) can be written as

$$\begin{aligned} W_n= S_n-U_n \end{aligned}$$

where \(S_n=\sum _{k=1}^n {\widetilde{\varepsilon }}_k\) and \(U_n= \gamma ^n {\widetilde{\varepsilon }}_1+\gamma ^{n-1} {\widetilde{\varepsilon }}_2+\cdots + \gamma {\widetilde{\varepsilon }}_n \). The conditional integrated random walk process has a representation, stated in Proposition 3.7 of [4]. Let

$$\begin{aligned} {\mathbf {P}}\left( ({{\widehat{W}}}_1, \ldots , {{\widehat{W}}}_{N-1})\in \cdot \right) ={\mathbf {P}}\left( (W_1,\ldots , W_{N-1})\in \cdot | W_N=W_{N+1}=0\right) . \end{aligned}$$
Then \(({{\widehat{W}}}_k)_{k=1}^{N-1}\) can be realized as


$$\begin{aligned} {{\widehat{W}}}_k= W_k- W_Nr_1(k)-W_{N+1}r_2(k) \end{aligned}$$

where \(r_1(k)= s_1(k)/r(k)\) and \(r_2(k)=s_2(k)/r(k)\). The definitions of r(k) and \(s_i(k)\) for \(i=1,2\) are as follows:

$$\begin{aligned} r(k)&=(-1+\gamma )(-1+\gamma ^{N+1})\left( -N+\gamma (2+N+\gamma ^N(-2+(-1+\gamma )N))\right) ,\\ s_1(k)&= (-k+\gamma (1-\gamma ^k+k))+\gamma ^{3+2N+k}(1+\gamma ^k(-1+(-1+\gamma )k))\\&\quad +\gamma ^{N-k}(\gamma ^k(-\gamma +\gamma ^3)(1-k+N)+\gamma ^{2+2k}(2+N-\gamma (1+N))\\&\quad +\gamma (1+N-\gamma (2+N))), \end{aligned}$$
and


$$\begin{aligned} s_2(k)&=\gamma (\gamma ^{1+k}+k-\gamma (1+k))+\gamma ^{2+2N-k}(-1+\gamma ^k(1+k-\gamma k))\\&\quad +\gamma ^{1+N-k}(\gamma +\gamma ^k(-1+\gamma ^2)(k-N)-N+\gamma N+\gamma ^{1+2k}(-1+(-1+\gamma )N)). \end{aligned}$$

Let us consider the unconditional process \(W_n\). Note that

$$\begin{aligned} \mathbf {Var}( S_n)= \frac{n\sigma ^2}{(1-\gamma )^2} ,\quad \quad \mathbf {Var}(U_n)= \frac{\sigma ^2\gamma ^2 (1-\gamma ^{2n})}{(1-\gamma )^2(1-\gamma ^2)} \end{aligned}$$
and


$$\begin{aligned} \mathbf {Cov}(S_n, U_n)= \frac{\gamma \sigma ^2(1-\gamma ^n)}{(1-\gamma )^2(1-\gamma )}. \end{aligned}$$

So from here we have

$$\begin{aligned} \mathbf {Var}(W_n)= \frac{n\sigma ^2}{(1-\gamma )^2} -\frac{\sigma ^2\gamma ^2(1-\gamma ^n)^2}{(1-\gamma )^3(1+\gamma )}-\frac{2\sigma ^2\gamma (1-\gamma ^n)}{(1-\gamma )^3(1+\gamma )}. \end{aligned}$$
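This closed form agrees with the direct computation \(\mathbf {Var}(W_n)=\sigma ^2\sum _{i=1}^n r_{n-i}^2\); a quick numerical sanity check with arbitrary parameter values (an assumption, for illustration only):

```python
gamma, sigma2, n = 0.9, 0.37, 25   # arbitrary test values

# direct variance: W_n = sum_i r_{n-i} eps_i with r_k = (1 - gamma^{k+1})/(1 - gamma)
r = lambda k: (1 - gamma ** (k + 1)) / (1 - gamma)
direct = sigma2 * sum(r(n - i) ** 2 for i in range(1, n + 1))

# closed form; note the last factor contains gamma**n
closed = (n * sigma2 / (1 - gamma) ** 2
          - sigma2 * gamma ** 2 * (1 - gamma ** n) ** 2
            / ((1 - gamma) ** 3 * (1 + gamma))
          - 2 * sigma2 * gamma * (1 - gamma ** n)
            / ((1 - gamma) ** 3 * (1 + gamma)))
assert abs(direct - closed) < 1e-8
print("Var(W_n):", closed)
```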

From the above expressions one can show that \(\mathbf {Var}(W_{N-1})\sim N\) when \(\kappa =\kappa _N\ll N^2\). We now derive the variance estimate when \(\kappa \gg N^2\). For ease of writing, denote

$$\begin{aligned} \zeta =\frac{1}{\beta _N}+\sqrt{ \frac{1}{\beta _N}}\sqrt{\frac{1}{\beta _N}+2}\rightarrow 0. \end{aligned}$$
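With this notation one can check that \(\gamma =1/(1+\zeta )\), which is used below; a throwaway numerical spot check:

```python
import math

for beta in (0.5, 3.0, 100.0, 1e6):
    root = math.sqrt(1 + 2 * beta)
    # gamma as defined in (C.1)
    gamma = math.sqrt((1 + beta - root) / (1 + beta + root))
    # zeta as displayed above
    zeta = 1 / beta + math.sqrt(1 / beta) * math.sqrt(1 / beta + 2)
    assert zeta > 0
    assert abs(gamma - 1 / (1 + zeta)) < 1e-9
print("gamma = 1/(1 + zeta) verified")
```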

Furthermore \(\gamma =1/(1+\zeta )\) and \(\sigma ^2= 2/(\beta _N(1+\zeta ))\). Rewriting (C.3) in terms of \(\zeta \) we have

$$\begin{aligned} \mathbf {Var}(W_{N-1})&= \frac{2(N-1)(1+\zeta )^2}{\zeta ^2\beta _N(1+\zeta )}-\frac{2(1+\zeta )(1-(1+\zeta )^{-(N-1)})^2}{\beta _N\zeta ^3(2+\zeta )}\nonumber \\&\quad - 4\frac{(1+\zeta )^2(1-(1+\zeta )^{-(N-1)})}{\beta _N\zeta ^3(2+\zeta )}\nonumber \\&= \frac{2(1+\zeta )}{\beta _N(2+\zeta )\zeta ^3}\left[ (N-1)(2+\zeta )\zeta -(1-(1+\zeta )^{-(N-1)})^2\right. \nonumber \\&\quad \left. -2(1+\zeta )(1-(1+\zeta )^{-(N-1)})\right] . \end{aligned}$$

Using a fourth-order Taylor expansion for the second and third summands in (C.4) (the coefficients up to \(\zeta ^2\) cancel) we obtain

$$\begin{aligned} \mathbf {Var}(W_{N-1})\approx \frac{(1+\zeta ) N(N-1)^2}{\beta _N(2+\zeta )}\approx \frac{N^3}{\beta _N} \approx \frac{N^3}{\kappa _N}. \end{aligned}$$



Cite this article

Cipriani, A., Dan, B. & Hazra, R.S. Scaling Limit of Semiflexible Polymers: A Phase Transition. Commun. Math. Phys. 377, 1505–1544 (2020).
