
Random band matrices in the delocalized phase, III: averaging fluctuations

Abstract

We consider a general class of symmetric or Hermitian random band matrices \(H=(h_{xy})_{x,y \in \llbracket 1,N\rrbracket ^d}\) in any dimension \(d\ge 1\), where the entries are independent, centered random variables with variances \(s_{xy}=\mathbb {E}|h_{xy}|^2\). We assume that \(s_{xy}\) vanishes if \(|x-y|\) exceeds the band width W, and we are interested in the mesoscopic scale with \(1\ll W\ll N\). Define the generalized resolvent of H as \(G(H,Z):=(H - Z)^{-1}\), where Z is a deterministic diagonal matrix with entries \(Z_{xx}\in \mathbb {C}_+\) for all x. We establish a precise high-probability bound on certain averages of polynomials of the resolvent entries. As an application of this fluctuation averaging result, we give a self-contained proof of the delocalization of random band matrices in dimensions \(d\ge 2\). More precisely, for any fixed \(d\ge 2\), we prove that the bulk eigenvectors of H are delocalized in a certain averaged sense if \(N\le W^{1+\frac{d}{2}}\). This improves the corresponding results of He and Marcozzi (Diffusion profile for random band matrices: a short proof, 2018. arXiv:1804.09446), which imposed the assumption \(N\ll W^{1+\frac{d}{d+1}}\), and of Erdős and Knowles (Ann. Henri Poincaré 12(7):1227–1319, 2011; Commun. Math. Phys. 303(2):509–554, 2011), which imposed the assumption \(N\ll W^{1+\frac{d}{6}}\). For 1D random band matrices, our fluctuation averaging result was used in Bourgade et al. (J. Stat. Phys. 174:1189–1221, 2019; Random band matrices in the delocalized phase, I: quantum unique ergodicity and universality, 2018. arXiv:1807.01559) to prove the delocalization conjecture and bulk universality for random band matrices with \(N\ll W^{4/3}\).



Availability of data and material

Not applicable.

References

  1. Anderson, P.W.: Absence of diffusion in certain random lattices. Phys. Rev. 109, 1492–1505 (1958)


  2. Bao, Z., Erdős, L.: Delocalization for a class of random block band matrices. Probab. Theory Relat. Fields 167(3), 673–776 (2017)


  3. Bloemendal, A., Erdős, L., Knowles, A., Yau, H.-T., Yin, J.: Isotropic local laws for sample covariance and generalized Wigner matrices. Electron. J. Probab. 19(33), 1–53 (2014)


  4. Bourgade, P., Erdős, L., Yau, H.-T., Yin, J.: Universality for a class of random band matrices. Adv. Theor. Math. Phys. 21(3), 739–800 (2017)


  5. Bourgade, P., Yang, F., Yau, H.-T., Yin, J.: Random band matrices in the delocalized phase, II: generalized resolvent estimates. J. Stat. Phys. 174, 1189–1221 (2019)


  6. Bourgade, P., Yau, H.-T., Yin, J.: Random band matrices in the delocalized phase, I: quantum unique ergodicity and universality (2018). arXiv:1807.01559

  7. Casati, G., Guarneri, I., Izrailev, F., Scharf, R.: Scaling behavior of localization in quantum chaos. Phys. Rev. Lett. 64, 5–8 (1990)


  8. Casati, G., Molinari, L., Izrailev, F.: Scaling properties of band random matrices. Phys. Rev. Lett. 64, 1851–1854 (1990)


  9. Disertori, M., Pinson, L., Spencer, T.: Density of states for random band matrices. Commun. Math. Phys. 232, 83–124 (2002)


  10. Efetov, K.: Supersymmetry in Disorder and Chaos. Cambridge University Press, Cambridge (1997)


  11. Erdős, L., Knowles, A.: Quantum diffusion and delocalization for band matrices with general distribution. Ann. Henri Poincaré 12(7), 1227–1319 (2011)


  12. Erdős, L., Knowles, A.: Quantum diffusion and eigenfunction delocalization in a random band matrix model. Commun. Math. Phys. 303(2), 509–554 (2011)


  13. Erdős, L., Knowles, A., Yau, H.-T.: Averaging fluctuations in resolvents of random band matrices. Ann. Henri Poincaré 14, 1837–1926 (2013)


  14. Erdős, L., Knowles, A., Yau, H.-T., Yin, J.: Delocalization and diffusion profile for random band matrices. Commun. Math. Phys. 323(1), 367–416 (2013)


  15. Erdős, L., Knowles, A., Yau, H.-T., Yin, J.: The local semicircle law for a general class of random matrices. Electron. J. Prob. 18(59), 1–58 (2013)


  16. Erdős, L., Knowles, A., Yau, H.-T., Yin, J.: Spectral statistics of Erdős–Rényi graphs II: eigenvalue spacing and the extreme eigenvalues. Commun. Math. Phys. 314, 587–640 (2012)


  17. Erdős, L., Knowles, A., Yau, H.-T., Yin, J.: Spectral statistics of Erdős–Rényi graphs I: local semicircle law. Ann. Probab. 41(3B), 2279–2375 (2013)


  18. Erdős, L., Yau, H.-T., Yin, J.: Bulk universality for generalized Wigner matrices. Probab. Theory Relat. Fields 154(1–2), 341–407 (2012)


  19. Erdős, L., Yau, H.-T., Yin, J.: Rigidity of eigenvalues of generalized Wigner matrices. Adv. Math. 229(3), 1435–1515 (2012)


  20. Erdős, L., Yau, H.-T., Yin, J.: Universality for generalized Wigner matrices with Bernoulli distribution. J. Comb. 1(2), 15–85 (2011)


  21. Feingold, M., Leitner, D.M., Wilkinson, M.: Spectral statistics in semiclassical random-matrix ensembles. Phys. Rev. Lett. 66, 986–989 (1991)


  22. Fyodorov, Y.V., Mirlin, A.D.: Scaling properties of localization in random band matrices: a σ-model approach. Phys. Rev. Lett. 67, 2405–2409 (1991)


  23. He, Y., Marcozzi, M.: Diffusion profile for random band matrices: a short proof (2018). arXiv:1804.09446

  24. Knowles, A., Yin, J.: Anisotropic local laws for random matrices. Probab. Theory Relat. Fields 169(1), 257–352 (2017)


  25. Peled, R., Sodin, S., Schenker, J., Shamis, M.: On the Wegner orbital model. Int. Math. Res. Not. 2019, 1030–1058 (2017)


  26. Pillai, N.S., Yin, J.: Universality of covariance matrices. Ann. Appl. Probab. 24(3), 935–1001 (2014)


  27. Schenker, J.: Eigenvector localization for random band matrices with power law band width. Commun. Math. Phys. 290, 1065–1097 (2009)


  28. Shcherbina, M., Shcherbina, T.: Characteristic polynomials for 1D random band matrices from the localization side. Commun. Math. Phys. 351(3), 1009–1044 (2017)


  29. Shcherbina, M., Shcherbina, T.: Universality for 1D random band matrices (2019). arXiv:1910.02999

  30. Shcherbina, T.: On the second mixed moment of the characteristic polynomials of 1D band matrices. Commun. Math. Phys. 328, 45–82 (2014)


  31. Shcherbina, T.: Universality of the local regime for the block band matrices with a finite number of blocks. J. Stat. Phys. 155, 466–499 (2014)


  32. Shcherbina, T.: Universality of the second mixed moment of the characteristic polynomials of the 1D band matrices: real symmetric case. J. Math. Phys. 56, 063303 (2015)


  33. Sodin, S.: The spectral edge of some random band matrices. Ann. Math. 173(3), 2223–2251 (2010)


  34. Spencer, T.: Random Banded and Sparse Matrices. Oxford Handbook of Random Matrix Theory. Oxford University Press, Oxford (2011)


  35. Spencer, T.: SUSY Statistical Mechanics and Random Band Matrices. Lecture Notes. Springer, New York (2012)


  36. Wigner, E.P.: Characteristic vectors of bordered matrices with infinite dimensions. Ann. Math. 62(3), 548–564 (1955)


  37. Wilkinson, M., Feingold, M., Leitner, D.M.: Localization and spectral statistics in a banded random matrix ensemble. J. Phys. A Math. Gen. 24(1), 175 (1991)



Acknowledgements

The second author would like to thank Paul Bourgade, L. Fu, Benedek Valkó and Horng-Tzer Yau for fruitful discussions and valuable suggestions. We also want to thank the editors and three anonymous referees for their helpful comments, which have improved the paper significantly.

Funding

The work of Jun Yin is partially supported by the NSF grant DMS-1552192.

Author information

Corresponding author

Correspondence to Fan Yang.

Ethics declarations

Code availability

Not applicable.


Appendices

A Proof of (2.35)

Using a Taylor expansion, we can write

$$\begin{aligned} (1-|m|^2 S)^{-1}S = \left( 1-|m|^{2K}S^{K}\right) ^{-1}\sum _{k=0}^{K-1}|m|^{2k}S^{k+1}. \end{aligned}$$
(A.1)
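As an illustration (not part of the original argument), the identity (A.1) is purely algebraic and can be checked numerically for any matrix S for which the relevant inverses exist. In the sketch below, the matrix size, the truncation K, and the value a standing in for \(|m|^2\) are arbitrary choices, and S is made (approximately) doubly stochastic to mirror the setting of the paper:

```python
import numpy as np

# Sanity check of the algebraic identity behind (A.1):
# (1 - a S)^{-1} S = (1 - a^K S^K)^{-1} * sum_{k=0}^{K-1} a^k S^{k+1}.
rng = np.random.default_rng(0)
n, K, a = 6, 7, 0.9  # a plays the role of |m|^2 < 1

# A random, approximately doubly stochastic S via Sinkhorn iteration.
S = rng.random((n, n))
for _ in range(500):
    S /= S.sum(axis=1, keepdims=True)   # normalize rows
    S /= S.sum(axis=0, keepdims=True)   # normalize columns

I = np.eye(n)
lhs = np.linalg.solve(I - a * S, S)
partial = sum(a**k * np.linalg.matrix_power(S, k + 1) for k in range(K))
rhs = np.linalg.solve(I - a**K * np.linalg.matrix_power(S, K), partial)
print(np.allclose(lhs, rhs))  # True
```

Since \(\Vert aS\Vert <1\) here, both inverses exist and the two sides agree to machine precision.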

Since \(\Vert S\Vert _{l^\infty \rightarrow l^\infty } = 1\) and \(|m|\le 1 - c\eta \) for some constant \(c>0\) by (1.4), it is easy to see that by taking \(K=\eta ^{-1} \) in (A.1), we have

$$\begin{aligned} 0\le \left[ (1-|m|^2 S)^{-1}S\right] _{xy} \le C \max _{x, y}\sum _{k=1}^{\eta ^{-1}} (S^{k})_{xy}. \end{aligned}$$
(A.2)

Since S is a doubly stochastic matrix, \((S^{k})_{xy}\) can be understood through a k-step random walk on the torus \(\mathbb {Z}_N^d\). We first prove the following lemma. Here, unlike in the previous proofs, for any vector \(v\in \mathbb {R}^d\) we denote \(| v|\equiv \Vert v\Vert _2\).

Lemma 30

Let \(B_n = \sum _{i=1}^n X_i\) be a random walk on \(\mathbb {Z}^d\) with i.i.d. steps \(X_i\) that satisfy the following conditions: (i) \(X_1\) is symmetric; (ii) \(|X_1| \le L\) almost surely; (iii) there exist constants \(C^*,c_*>0\) such that

$$\begin{aligned} c_* L^{-d} {\mathbf {1}} _{|x| \le c_* L}\le \mathbb {P}(X_1=x) \le C^* L^{-d} {\mathbf {1}} _{|x| \le L}, \quad x\in \mathbb {Z}^d. \end{aligned}$$
(A.3)

Let \(\varSigma \) be the covariance matrix of \(X_1\) with \(\varSigma _{ij}=\mathbb {E} [ (X_1)_i (X_1)_j ]\). Assume that \(n\in \mathbb {N}\) satisfies

$$\begin{aligned} \log n \ge c_0\log L \end{aligned}$$
(A.4)

for some constant \(c_0>0\). Then for any fixed (large) \(D>0\), we have

$$\begin{aligned} \mathbb {P}\left( B_n = x\right) = \frac{1+{{\,\mathrm{o}\,}}_n(1)}{(2\pi n)^{d/2} \sqrt{\det (\varSigma )}}e^{-\frac{1}{2} x^T (n\varSigma )^{-1} x}+{{\,\mathrm{O}\,}}(L^{-D}), \end{aligned}$$
(A.5)

for large enough L (and n).
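Before turning to the proof, the following numerical experiment (a sketch, not from the paper) illustrates (A.5) in dimension \(d=1\): the exact law of \(B_n\), computed by repeated convolution of the step distribution, is compared with the Gaussian density \((2\pi n\sigma ^2)^{-1/2}e^{-x^2/(2n\sigma ^2)}\). The step size L and the number of steps n are arbitrary:

```python
import numpy as np

# Local CLT illustration in d = 1: uniform symmetric steps on {-L, ..., L},
# which satisfy conditions (i)-(iii) of Lemma 30.
L, n = 5, 400
support = np.arange(-L, L + 1)
p = np.ones(len(support)) / len(support)
sigma2 = np.sum(support.astype(float) ** 2 * p)   # variance of X_1

law = np.array([1.0])                  # law of B_0 (point mass at 0)
for _ in range(n):
    law = np.convolve(law, p)          # law of B_n, supported on {-nL, ..., nL}

xs = np.arange(-n * L, n * L + 1)
gauss = np.exp(-xs**2 / (2 * n * sigma2)) / np.sqrt(2 * np.pi * n * sigma2)

# Maximal relative error in the bulk |x| <= sqrt(n * sigma2).
bulk = np.abs(xs) <= np.sqrt(n * sigma2)
err = np.max(np.abs(law[bulk] - gauss[bulk]) / gauss[bulk])
print(err < 0.01)
```

Since the steps are symmetric (vanishing third cumulant), the relative error in the bulk is already of order \(n^{-1}\), consistent with the \(1+{{\,\mathrm{o}\,}}_n(1)\) factor in (A.5).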

Proof of Lemma 30

Note that (A.5) is in accordance with the central limit theorem. Our proof below is in fact a variant of the proof of CLT with characteristic functions.

Combining condition (ii), i.e., \(|X_i|\le L\), with a standard large deviation estimate and with (A.4), we get that

$$\begin{aligned} \mathbb {P}\left( |B_n| \ge L n^{1/2+\tau }\right) = {{\,\mathrm{O}\,}}(L^{-D}), \end{aligned}$$

for any fixed (small) \(\tau >0\) and (large) \(D>0\). Thus to prove (A.5), we only need to focus on the case

$$\begin{aligned} |x|={{\,\mathrm{O}\,}}(L n^{1/2+\tau _0}) \end{aligned}$$

for some small enough constant \(\tau _0>0 \). In the following proof, we always make this assumption.

For \(p\in \mathbb {R}^d \) with \(0<|p|\le L^{-1}n^{-1/2+\tau _0}\), we have

$$\begin{aligned} \log \mathbb {E}e^{ \mathrm {i}p \cdot X_1}= - \frac{1}{2} p^T \varSigma p + \sum _{k\ge 3} \frac{\kappa _k(\hat{p})}{k!} (\mathrm {i}|p|)^k, \end{aligned}$$

where \(\hat{p}= p/|p|\) and \(\kappa _k(\hat{p})\) is the kth cumulant of \(\hat{p}\cdot X_1\). It gives that

$$\begin{aligned} \frac{1}{n}\log \mathbb {E}e^{ \mathrm {i}p \cdot B_n}= - \frac{1}{2} p^T \varSigma p + \sum _{k\ge 3} \frac{\kappa _k(\hat{p})}{k!} (\mathrm {i}|p|)^k. \end{aligned}$$

By the condition (A.3), it is easy to verify that

$$\begin{aligned} C^{-1}L^{2}\le \varSigma \le CL^2 \end{aligned}$$
(A.6)

in the sense of operators, and

$$\begin{aligned} |\kappa _k(\hat{p})|\le C^k k! L^k, \quad k\in \mathbb {N}, \quad \hat{p} \in \mathbb {S}^{d-1}, \end{aligned}$$

for some constant \(C>0\). Then for \(|p|\le L^{-1}n^{-1/2+\tau _0}\), we have

$$\begin{aligned} \mathbb {E}e^{\mathrm {i}p \cdot B_n}=e^{-\frac{1}{2} n p^T\varSigma p}\left( 1+\sum _{3 \le k \le K_D}\alpha _k(\hat{p})( Ln^{1/2} |p|)^k\right) + {{\,\mathrm{O}\,}}(L^{-D}), \end{aligned}$$
(A.7)

where \(\alpha _k(\hat{p})\in \mathbb {C}\) are coefficients (depending on p only through its direction \(\hat{p}\)) satisfying

$$\begin{aligned} \alpha _k(\hat{p})={{\,\mathrm{O}\,}}(n^{1-k/2})={{\,\mathrm{O}\,}}(n^{-k/6}), \quad k\ge 3, \end{aligned}$$
(A.8)

and \(K_D={{\,\mathrm{O}\,}}(1)\) is a fixed integer depending only on D and the constant \(c_0\) in (A.4).

Now we estimate \(\mathbb {E}e^{\mathrm {i}p\cdot B_n}\) for large p. Because of the lower bound (the “core”) in (A.3), it is easy to see that for some constant \(c>0\),

$$\begin{aligned} | \mathbb {E}e^{ \mathrm {i}p\cdot X_1}|\le 1-c\min \{1,(L|p|)^2\},\quad L^{-1}n^{-1/2+\tau _0} \le |p|\le \pi , \end{aligned}$$

which implies that for some \(c>0\),

$$\begin{aligned} \left| \mathbb {E}e^{\mathrm {i}p \cdot B_n}\right| \le e^{-cn^{\tau _0}},\quad |p|\ge L^{-1}n^{-1/2+\tau _0}. \end{aligned}$$

Together with (A.7), with

$$\begin{aligned} y:=(n\varSigma )^{-1/2}x, \quad |y|={{\,\mathrm{O}\,}}(n^{\tau _0}), \quad q:=(n\varSigma )^{1/2}p , \end{aligned}$$

and \(H_n\) being the Hermite polynomials, we have

$$\begin{aligned}&\mathbb {P}\left( B_n =x \right) \\&\quad = \displaystyle \frac{1}{(2\pi )^d}\int _{|p| \le L^{-1}n^{-1/2+\tau _0}} \mathrm{d}p \, e^{-\mathrm {i}p\cdot x}e^{-\frac{1}{2} n p^T\varSigma p}\left( 1+\sum _{3 \le k \le K_D}\alpha _k(\hat{p})( Ln^{1/2} |p|)^k\right) +{{\,\mathrm{O}\,}}(L^{-D})\\&\quad =\displaystyle \frac{1}{(2\pi )^d\sqrt{ n^d \det (\varSigma )}} \int _{|L\varSigma ^{-1/2}q|\le n^{\tau _0}} \mathrm{d}q \, e^{ -\mathrm {i}q \cdot y}e^{-\frac{q^2}{2}}\left( 1+\sum _{3\le k\le K_D}\alpha _k(\hat{p}) |L\varSigma ^{-1/2}q|^k\right) +{{\,\mathrm{O}\,}}(L^{-D})\\&\quad =\displaystyle \frac{1}{(2\pi )^d\sqrt{ n^d \det (\varSigma )}} \int _{q\in \mathbb {R}^d} \mathrm{d}q \, e^{ -\mathrm {i}q \cdot y}e^{-\frac{q^2}{2}}\left( 1+\sum _{3\le k\le K_D}\alpha _k(\hat{p}) |L\varSigma ^{-1/2}q|^k\right) +{{\,\mathrm{O}\,}}(L^{-D})\\&\quad =\displaystyle \frac{1}{(2\pi n)^{d/2} \sqrt{\det (\varSigma )} }\left( 1+\sum _{3\le k \le K_D}{{\,\mathrm{O}\,}}(n^{-({1}/{6} - \tau _0) k} ) \right) e^{-\frac{y^2}{2}}+ {{\,\mathrm{O}\,}}(L^{-D}) \end{aligned}$$

where in the third step we used \(C^{-1/2}|q|\le |L\varSigma ^{-1/2}q|\le C^{1/2}|q|\) by (A.6) and approximated \(\int _{|L\varSigma ^{-1/2}q|\le n^{\tau _0}}\) by \(\int _{q\in \mathbb {R}^d}\) up to an error \({{\,\mathrm{O}\,}}(L^{-D})\) due to the factor \(e^{-q^2/2}\), and in the last step we used (A.8), \(|y|={{\,\mathrm{O}\,}}(n^{\tau _0})\) and the stationary phase approximation to bound the integrals. This proves (A.5). \(\square \)

Now we can give a proof of (2.35).

Proof of (2.35)

Fix any small constant \(\tau >0\). We now bound the sum in (A.2). Let \(B_n = \sum _{i=1}^n X_i\) be a random walk on \(\mathbb {Z}_N^d\) with i.i.d. steps \(X_i\), with distribution \(\mathbb {P}(X_1 = y-x)=s_{xy}\). Then it is easy to see that

$$\begin{aligned} (S^k)_{xy}=\mathbb {P}(B_k = y - x). \end{aligned}$$
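As a quick numerical check of this identity (not part of the original proof), one can compare a matrix power of a band circulant S on the 1D torus with the k-fold circular convolution of the step distribution; the torus size N, band width W, and uniform step law below are illustrative choices:

```python
import numpy as np

# (S^k)_{xy} = P(B_k = y - x) for a random walk on the torus Z_N (d = 1),
# with s_{xy} = p((y - x) mod N) and p a uniform step law on a band of width W.
N, W, k = 32, 3, 5
p = np.zeros(N)
steps = np.arange(-W, W + 1)
p[steps % N] = 1.0 / len(steps)

# Doubly stochastic band (circulant) matrix S.
S = np.array([[p[(y - x) % N] for y in range(N)] for x in range(N)])
Sk = np.linalg.matrix_power(S, k)

# Law of B_k on the torus: k-fold circular convolution of p, via FFT.
law = np.real(np.fft.ifft(np.fft.fft(p) ** k))

x, y = 4, 9
print(np.isclose(Sk[x, y], law[(y - x) % N]))  # True
```

The same comparison holds entrywise for all x, y, since both computations encode k-fold convolution of the step law.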

For \(1\le k \le N^{\tau }\), with (2.4) we can bound

$$\begin{aligned} (S^k)_{xy} \le \mathbf {1}_{|x-y| \le C_s k W} W^{-d} \lesssim \frac{N^{(d-2)\tau }}{W^2\langle x-y\rangle ^{d-2}}. \end{aligned}$$
(A.9)

For \(N^{\tau } \le k \le N^{2-\tau }/W^2\), we have a large deviation estimate

$$\begin{aligned} \mathbb {P}\left( |B_k| \ge |x-y| \right) \le \exp \left( - \frac{c|x-y|^2}{k W^2}\right) \end{aligned}$$

for some constant \(c>0\). In particular, with high probability, \(B_k\) can be regarded as a random walk on the full lattice \(\mathbb {Z}^d\) if \(k \le N^{2-\tau }/W^2\), and we can apply (A.5) to get that

$$\begin{aligned} (S^k)_{xy}=\mathbb {P}( B_k= y-x) \lesssim \frac{1}{k^{d/2} W^d}e^{-\frac{c}{2kW^2} |y-x|^2 }+{{\,\mathrm{O}\,}}(N^{-D}), \end{aligned}$$
(A.10)

for some constant \(c>0\) and for any large constant \(D>0\). Finally, for \(N^{2-\tau }/W^2 \le k \le \eta ^{-1}\), using \(\Vert S\Vert _{l^\infty \rightarrow l^\infty } \le 1\) we get that

$$\begin{aligned} (S^k)_{xy} \le \max _{x,y}(S^{N^{2-\tau }/W^2})_{xy} \le \frac{1}{N^{d-d\tau /2}} \end{aligned}$$
(A.11)

where we used (A.10) in the last step. Applying (A.9)–(A.11) to (A.2), we obtain that

$$\begin{aligned} \sum _{k=1}^{\eta ^{-1}} (S^{k})_{xy}&\lesssim \frac{N^{(d-2)\tau }}{W^2\langle x-y\rangle ^{d-2}} +\frac{N^{d\tau /2}}{N^{d}\eta }\\&\quad + \sum _{N^\tau \le k\le N^{2-\tau }/W^2} \frac{1}{k^{d/2} W^d}\mathbf {1}_{k\ge N^{-\tau }|x-y|^2/W^2} + {{\,\mathrm{O}\,}}(N^{-D}), \end{aligned}$$

where it is easy to verify that

$$\begin{aligned} \sum _{N^\tau \le k\le N^{2-\tau }/W^2} \frac{1}{k^{d/2} W^d}\mathbf {1}_{k\ge N^{-\tau }|x-y|^2/W^2}&\lesssim \frac{1}{W^d}\mathbf {1}_{|x-y|\le N^ \tau W} \\&\quad + \frac{1}{W^d (N^{-\tau }|x-y|^2/W^2)^{d/2-1}}\mathbf {1}_{|x-y|\ge N^ \tau W} \\&\lesssim \frac{N^{(d-2)\tau }}{W^2 \langle x-y\rangle ^{d-2}}. \end{aligned}$$

This finishes the proof of (2.35) since \(\tau \) can be arbitrarily small and D can be arbitrarily large. \(\square \)

B Proof of Lemma 29

We fix \(x,y, y'\) in the proof. For simplicity, we omit the index “x” in the coefficients \(c_{xw}\) and \(\widetilde{c}_{xw}\). With (6.3), we can write

$$\begin{aligned} G_{xy}G_{xy'}=M_x^2\sum ^{(x)}_{s,s'} H_{xs} H_{xs'}G^{(x)}_{sy}G^{(x)}_{s'y'}+\left( 2\varLambda _x M_x+\varLambda _x ^2\right) \frac{G_{xy}}{G_{xx}}\frac{G_{xy'}}{G_{xx}}, \quad \varLambda _x:= G_{xx}-M_x. \end{aligned}$$

Similarly for \(x\notin \{y,y'\}\cup \dot{I}\cup \ddot{I}\), together with (5.5), we get that

$$\begin{aligned}&\mathbb {E}_x \dot{G}_{xy}\ddot{G}_{xy'} =M_x^2\sum _{w} s_{xw}\dot{G}^{(x)}_{wy}\ddot{G}^{(x)}_{w y'} + \mathbb {E}_x \left( \left( \dot{\varLambda }_x M_x+\ddot{\varLambda }_x M_x+\ddot{\varLambda }_x \dot{\varLambda }_x\right) \frac{\dot{G}_{xy}}{\dot{G}_{xx}}\frac{\ddot{G}_{xy'}}{\ddot{G}_{xx}}\right) \nonumber \\&\quad =M_x^2\sum _{w} s_{xw}\dot{G} _{wy}\ddot{G} _{wy'} \nonumber \\&\qquad -M_x^2\sum _{w} s_{xw}\left( \dot{G} _{wy}\frac{\ddot{G}_{wx}\ddot{G}_{xy'}}{\ddot{G}_{xx}} + \frac{\dot{G}_{wx}\dot{G}_{xy}}{\dot{G}_{xx}}\ddot{G}_{wy'} - \frac{\dot{G}_{wx}\dot{G}_{xy}}{\dot{G}_{xx}} \frac{\ddot{G}_{wx}\ddot{G}_{xy'}}{\ddot{G}_{xx}}\right) \nonumber \\&\qquad + \mathbb {E}_x \left( \left( \dot{\varLambda }_x M_x+\ddot{\varLambda }_x M_x+\ddot{\varLambda }_x \dot{\varLambda }_x\right) \frac{\dot{G}_{xy}}{\dot{G}_{xx}}\frac{\ddot{G}_{xy'}}{\ddot{G}_{xx}}\right) . \end{aligned}$$
(B.1)

Now we define the “error” part as

$$\begin{aligned} {\mathcal B}_x :&= -M_x^2\sum _{w} s_{xw}\left( \dot{G} _{wy}\frac{\ddot{G}_{wx}\ddot{G}_{xy'}}{\ddot{G}_{xx}} + \frac{\dot{G}_{wx}\dot{G}_{xy}}{\dot{G}_{xx}}\ddot{G}_{wy'} - \frac{\dot{G}_{wx}\dot{G}_{xy}}{\dot{G}_{xx}} \frac{\ddot{G}_{wx}\ddot{G}_{xy'}}{\ddot{G}_{xx}}\right) \nonumber \\&\quad + \mathbb {E}_x \left( \left( \dot{\varLambda }_x M_x+\ddot{\varLambda }_x M_x+\ddot{\varLambda }_x \dot{\varLambda }_x\right) \frac{\dot{G}_{xy}}{\dot{G}_{xx}}\frac{\ddot{G}_{xy'}}{\ddot{G}_{xx}}\right) , \quad \text { if } x\notin \{y,y'\}\cup {\dot{I}\cup \ddot{I}}, \end{aligned}$$
(B.2)

and

$$\begin{aligned} {\mathcal B}_x:=\mathbb {E}_x\dot{G}_{xy}\ddot{G}_{xy'}-M_x^2\sum _{w} s_{xw}\dot{G} _{wy}\ddot{G} _{wy'}, \quad \text { if } x\in \{y,y'\}\cup {\dot{I}\cup \ddot{I}}. \end{aligned}$$

With the above definition and (B.1), we have for any \(x\in \mathbb {Z}_N^d\),

$$\begin{aligned} \dot{G}_{xy}\ddot{G}_{xy'}= M_x^2\sum _{w} s_{xw}\dot{G} _{wy}\ddot{G} _{wy'}+Q_x\left( \dot{G}_{xy}\ddot{G}_{xy'}\right) +{\mathcal B}_x. \end{aligned}$$
(B.3)

It implies (with \(\{y,y'\}\cup \dot{I}\cup \ddot{I}\subset J\))

$$\begin{aligned}&\dot{G}_{xy}\ddot{G}_{xy'} =\sum _{w}\left[ (1-M^2S)^{-1}\right] _{xw} \left( Q_w\left( \dot{G}_{wy}\ddot{G}_{wy'} \right) + {\mathcal B}_w\right) \\&=\sum _{w\notin J} \left[ (1-M^2S)^{-1}\right] _{xw} \left( Q_w\left( \dot{G}_{wy}\ddot{G}_{wy'} \right) + {\mathcal B}_w\right) \\&\quad + \sum _{w\in J} \left[ (1-M^2S)^{-1}\right] _{xw} \sum _{w'} \left( 1-M^2S \right) _{ww'} \left( \dot{G}_{w' y}\ddot{G}_{w' y'}\right) . \end{aligned}$$

Then using (2.15), we obtain that for any fixed \(D>0\),

$$\begin{aligned}&\dot{G}_{xy}\ddot{G}_{xy'} -Q_x \left( \dot{G}_{xy}\ddot{G}_{xy'} \right) = \sum _{w\notin J} c_{w} Q_w\left( \dot{G}_{wy}\ddot{G}_{wy'} \right) + { {\mathcal B}_x}+ \sum _{w\notin J} c_{w} {\mathcal B}_w \nonumber \\&\quad + \sum _{w\in J } c_{w} \dot{G}_{wy}\ddot{G}_{wy'} + \sum _{w\in J}\sum _{w'} d_{ww'} \dot{G}_{w' y}\ddot{G}_{w' y'} +{{\,\mathrm{O}\,}}_\prec (N^{ -D}), \end{aligned}$$
(B.4)

for some coefficients satisfying

$$\begin{aligned} c_{w}={{\,\mathrm{O}\,}}(W^{-d})\mathbf{1}_{|x-w|\le (\log N)^2W} ,\quad d_{ww'}= {{\,\mathrm{O}\,}}(W^{-2d})\mathbf{1}_{|x-w|+|x-w'|\le (\log N)^2W}. \end{aligned}$$

Furthermore, by the definition of \({\mathcal B}_w\), we have

$$\begin{aligned} \sum _{w\notin J} c_w {\mathcal B}_w&=\sum _{w\notin J} \sum _{v} c'_{w} s_{wv} \left( \dot{G} _{vy}\frac{\ddot{G}_{vw}\ddot{G}_{wy'}}{\ddot{G}_{ww}} + \frac{\dot{G}_{vw}\dot{G}_{wy}}{\dot{G}_{ww}}\ddot{G}_{vy'} - \frac{\dot{G}_{vw}\dot{G}_{wy}}{\dot{G}_{ww}} \frac{\ddot{G}_{vw}\ddot{G}_{wy'}}{\ddot{G}_{ww}}\right) \\&\quad +\sum _{w\notin J}c_w \mathbb {E}_w \left( \left( \dot{\varLambda }_w M_{w}+\ddot{\varLambda }_w M_{w}+\ddot{\varLambda }_w \dot{\varLambda }_w\right) \frac{\dot{G}_{wy}}{\dot{G}_{ww}}\frac{\ddot{G}_{wy'}}{\ddot{G}_{ww}}\right) , \end{aligned}$$

for some coefficients

$$\begin{aligned} c'_w ={{\,\mathrm{O}\,}}(W^{-d})\mathbf{1}_{|x-w|\le (\log N)^2W}. \end{aligned}$$
(B.5)

Therefore, up to the error term \({{\,\mathrm{O}\,}}_\prec (N^{-D})\), \(\dot{G}_{xy}\ddot{G}_{xy'}-Q_x\left( \dot{G}_{xy}\ddot{G}_{xy'}\right) \) is equal to (see the explanation below)

(B.6) [this display consists of graphs, which are not reproduced here]

where \(c_{1,w}\) and \(c_{2,w}\) are coefficients that also satisfy (B.5). Here we have only drawn the dashed lines and omitted the \(\times \)-dashed lines. Moreover, y and \(y'\) can be the same atom, but we did not draw this case. The first graph in the first row represents the first term on the right-hand side of (B.4). The second and third graphs in the first row represent the two terms in the second line of (B.4). The second row of (B.6) represents the first row of (B.2), and the third row of (B.6) represents the second row of (B.2). The graphs of \(\sum _{w\notin J}c_w {\mathcal B}_w \) have the same structure as the graphs in the second and third rows of (B.6), and we use “\(\ldots \)” to represent them in the fourth row.

Now the first graph in (B.6) gives the second term on the right-hand side of (6.54). All the other graphs in the first and second rows of (B.6) can be included into the third term on the right-hand side of (6.54) by relabelling w, \(w'\) as \(\alpha \) atoms. It is easy to check that these graphs satisfy the conditions for \({\mathcal G}_\kappa \) below (6.56). Therefore to finish the proof of (6.54), it remains to write the graphs in the third line of (B.6) into the form of the third term on the right-hand side of (6.54).

Following the idea in the proof of Lemma 21, we can write the graphs with the \(P_x\) color as a sum of colorless graphs. More precisely, using

$$\begin{aligned} \frac{\dot{G}_{xy}}{\dot{G}_{xx}}\frac{\ddot{G}_{xy'}}{\ddot{G}_{xx}}= & {} \sum _{\alpha _1,\alpha _2}H_{x\alpha _1}H_{x\alpha _2}\dot{G}^{(x)}_{\alpha _1 y}\ddot{G}^{(x)}_{\alpha _2 y'},\quad \dot{G} _{xx}-M_x = \left( \dot{{\mathcal Y}}_{x}\right) ^{-1}-M_x \\&+ \sum _{m=1}^\infty (\dot{{\mathcal Y}}_{x})^{-m-1} (\dot{{\mathcal Z}}_x)^m , \end{aligned}$$

and taking partial expectation \(\mathbb {E}_x\), we can write the graphs in the third row of (B.6) as

$$\begin{aligned} \sum _\kappa \sum _{\mathbf {\alpha }}C^{\kappa }_{\mathbf {\alpha }}\cdot {\mathcal G}_\kappa (\mathbf {\alpha }, x, y,y')+{{\,\mathrm{O}\,}}_\prec (N^{-D}), \end{aligned}$$
(B.7)

where

$$\begin{aligned} C^{\kappa }_{\mathbf {\alpha }}=O\left( \left( W^{-d}\right) ^{\#\text { of } \alpha \text { atoms}}\right) \mathbf{1}\left( \max _l{|x-\alpha _l |\le (\log N)^2W}\right) , \end{aligned}$$

and \({\mathcal G}_\kappa (\mathbf {\alpha }, x, y,y')\) are colorless graphs which look like the graphs in (6.56). In fact, it is easy to check that \({\mathcal G}_\kappa \) either has a light weight (i.e. \(f_4\) light weight on the atom x or \(f_6\) weight on some \(\alpha \) atom) or there exists a solid line between \(\alpha \) atoms. Hence (B.7) can be written into the form of the third term on the right-hand side of (6.54) and satisfies the conditions below (6.56). This completes the proof of (6.54) in Lemma 29.

For (6.55), we use (B.3) and see that it suffices to write \({\mathcal B}_x\) into the form of the third term on the right-hand side of (6.55), which has been done above. This finishes the proof of (6.55). \(\square \)


About this article


Cite this article

Yang, F., Yin, J. Random band matrices in the delocalized phase, III: averaging fluctuations. Probab. Theory Relat. Fields 179, 451–540 (2021). https://doi.org/10.1007/s00440-020-01013-5


Keywords

  • Random band matrices
  • Delocalization
  • Averaging fluctuations
  • Generalized resolvent

Mathematics Subject Classification

  • 60B20
  • 15B52
  • 82B44