Abstract
We consider Hermitian random band matrices H in \(d \geqslant 1\) dimensions. The matrix elements \(H_{xy},\) indexed by \(x, y \in \varLambda \subset \mathbb {Z}^d,\) are independent, uniformly distributed random variables if \(|x-y|\) is less than the band width W, and zero otherwise. We upgrade the previous result on the convergence of quantum diffusion in a random band matrix model from convergence of the expectation to convergence in high probability. The result is uniform in the size \(|\varLambda |\) of the matrix.
1 Introduction
Random band matrices \(H=\left( H_{xy}\right) _{x,y \in \varGamma }\) represent systems on a large finite graph with a metric. They are natural intermediate models for studying quantum propagation in disordered systems, as they interpolate between Wigner matrices and random Schrödinger operators. The elements \(H_{xy}\) are independent random variables with variance \(\sigma _{xy}^2=\mathbb {E}|H_{xy}|^2\) depending on the distance between the two sites. The variance decays with the distance on the scale W, called the band width of the matrix H. This terminology comes from the simplest model, in which the graph is a path on N vertices labelled by \(\varGamma =\{1,2,\ldots , N\},\) and the matrix elements \(H_{xy}\) are zero if \(|x-y| \geqslant W.\) If \(W=O(1)\) we obtain a one-dimensional Anderson-type model (see [1]), and if \(W=N\) we recover the Wigner matrix. In the general Anderson model, introduced in [1], a random on-site potential V is added to a deterministic Laplacian on a graph that is typically a regular box in \(\mathbb {Z}^d\,.\) For higher-dimensional models in which the graph \(\varGamma \) is a box in \(\mathbb {Z}^d\), see [2].
In [3] it was proved that the quantum dynamics of a d-dimensional band matrix is given by a superposition of heat kernels up to time scales \(t \ll W^{d/3}\,.\) Note that diffusion is expected to hold for \(t \sim W^{2}\) for \(d=1\) and up to any time scale for \(d \geqslant 3\) when the thermodynamic limit is taken. The threshold d/3 on the exponent is due to technical estimates on Feynman graphs.
The approach of this paper is similar to the one in [3]. We normalize the entries of the matrix so that the rate of quantum jumps is of order one. In contrast to [3], in this paper double-rooted Feynman graphs are used to estimate the variance of the quantum diffusion. The main result of this paper upgrades the previous result on the convergence of the expectation of the quantum diffusion from [3] to convergence in high probability.
2 Model and Main Result
Let \(\mathbb {Z}^d\) be the infinite lattice with the Euclidean norm \(|\cdot |_{\mathbb {Z}^d}\) and let M be the number of points situated at distance at most \(W\) (\(W \geqslant 2\)) from the origin, excluding the origin itself, i.e.

\(M \;:=\; \bigl|\bigl\{ x \in \mathbb {Z}^d \,:\, 1 \leqslant |x|_{\mathbb {Z}^d} \leqslant W \bigr\}\bigr|\,.\)
For simplicity, we avoid working directly on an infinite lattice.
Throughout the proof, we consider a d-dimensional finite periodic lattice \(\varLambda _N \subset \mathbb {Z}^d \) (\(d\geqslant 2\)) of linear size N, equipped with the Euclidean norm \(|\cdot |_{\mathbb {Z}^d}\). Specifically, we take \(\varLambda _N\) to be a cube centered at the origin with side length N, i.e.
We regard \(\varLambda _N\) as periodic, i.e. we equip it with the periodic addition and periodic distance
We analyze random matrices H with band width W, whose elements \(H_{xy}\) are indexed by points \(x, y \in \varLambda _N\). To introduce H, we first define the matrix
Let \(A=A^*=(A_{xy})\) be a Hermitian random matrix whose upper-triangular entries (\(A_{xy}:x \leqslant y\)) are independent random variables uniformly distributed on the unit circle \(\mathbb {S}^1 \subset \mathbb {C}\). We define the random band matrix \((H_{xy})\) through
Note that H is Hermitian and \(|H_{xy}|^{2}=S_{xy}\) .
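To make the construction concrete, here is a minimal numerical sketch in \(d=1\); the variance profile \(S_{xy}=\varvec{\mathrm {1}}(1\leqslant |x-y|_N\leqslant W)/(M-1)\) used below is an assumption of this sketch (it matches the normalization \(\sum _y S_{xy}=M/(M-1)\) used later), not a reproduction of the displayed formula:

```python
import numpy as np

# Minimal sketch of the band matrix construction in d = 1 on the periodic
# lattice Lambda_N. Assumed variance profile (matching Lemma 6's normalization):
#   S_xy = 1(1 <= |x - y|_N <= W) / (M - 1),   H_xy = sqrt(S_xy) * A_xy.
rng = np.random.default_rng(0)
N, W = 64, 4
idx = np.arange(N)
# periodic distance |x - y|_N on Z / NZ
dist = np.abs((idx[:, None] - idx[None, :] + N // 2) % N - N // 2)
M = 2 * W                     # points at distance 1..W from the origin (d = 1)
S = ((dist >= 1) & (dist <= W)) / (M - 1)

# A = A*: off-diagonal entries uniform on the unit circle S^1
# (the diagonal never enters, since S_xx = 0).
phases = np.exp(2j * np.pi * rng.random((N, N)))
A = np.triu(phases, 1)
A = A + A.conj().T
H = np.sqrt(S) * A
```

One checks directly that H is Hermitian and that \(|H_{xy}|^{2}=S_{xy}\) entrywise.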
Throughout our investigation we will use the simplified notation \(\sum \limits _{y_1}\) for \(\sum \limits _{y_1 \in \varLambda _N}\).
Our main quantity is
The function P(t, x) describes the quantum transition probability of a particle starting at \(x_0\) and ending up at position x after time t.
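As a sanity check, the following sketch computes \(P(t,\cdot )\) numerically for a small \(d=1\) band matrix, taking the propagator to be \(e^{-\mathrm {i}tH}\) (the paper's exact time normalization may differ); unitarity guarantees that \(P(t,\cdot )\) is a probability distribution on \(\varLambda _N\):

```python
import numpy as np

# Quantum transition probability P(t, x) = |(exp(-itH))_{x, x0}|^2 for a
# small d = 1 band matrix (the time normalization here is an assumption).
rng = np.random.default_rng(1)
N, W = 32, 3
idx = np.arange(N)
dist = np.abs((idx[:, None] - idx[None, :] + N // 2) % N - N // 2)
M = 2 * W
S = ((dist >= 1) & (dist <= W)) / (M - 1)
A = np.triu(np.exp(2j * np.pi * rng.random((N, N))), 1)
H = np.sqrt(S) * (A + A.conj().T)

# exp(-itH) via the spectral decomposition of the Hermitian matrix H
evals, evecs = np.linalg.eigh(H)
t, x0 = 5.0, 0
U = (evecs * np.exp(-1j * t * evals)) @ evecs.conj().T
P = np.abs(U[:, x0]) ** 2     # nonnegative, sums to 1 by unitarity
```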
Let \(\kappa >0\) . We introduce the macroscopic time and space coordinates T and X, which are independent of W, and consider the microscopic time and space coordinates
Using the definition of the quantum probability and the scaling that we have introduced before, we define the random variable that we are going to investigate by
where \(\phi \in C_b(\mathbb {R}^d)\) is a bounded continuous test function on \(\mathbb {R}^d\).
Our main result gives an estimate for the variance of the random variable \(Y_T(\phi )\) up to time scales \(t=O(W^{d\kappa })\) if \(\kappa <1/3\) .
Theorem 1
Fix \(T_0>0\) and \(\kappa \) such that \(0< \kappa <1/3\,.\) Choose a real number \(\beta \) satisfying \(0<\beta <2/3-2\kappa \,.\) Then there exist constants \(C \geqslant 0\) and \(W_0 \geqslant 0\), depending only on \(T_0\), \(\kappa \) and \(\beta \), such that for all \(T \in [0,T_0]\,,\) \(W \geqslant W_0\) and \(N \geqslant W^{1+\frac{d}{6}}\) we have
Remark 1
Using the estimate obtained in Theorem 1 and Chebyshev's inequality for the second moment, we obtain the convergence in high probability of the random variable \(Y_{T}(\phi )\,.\) We expect that the same technique can be implemented for a graphical representation with 2p directed chains, \(p \in \mathbb {N}\,.\) This approach should give similar estimates on the 2p-th moment of our random variable, which can then be used in Chebyshev's inequality to obtain the analogous conclusion.
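The Chebyshev step can be spelled out as follows; here we write the variance bound of Theorem 1 schematically as \(\operatorname{Var} Y_T(\phi ) \leqslant C W^{-d\beta }\) (a shorthand for the displayed estimate, assumed here for illustration). For any fixed \(\varepsilon >0\),

```latex
\mathbb{P}\bigl( |Y_T(\phi) - \mathbb{E}\,Y_T(\phi)| \geqslant \varepsilon \bigr)
\;\leqslant\; \frac{\operatorname{Var} Y_T(\phi)}{\varepsilon^2}
\;\leqslant\; \frac{C\,W^{-d\beta}}{\varepsilon^2}
\;\longrightarrow\; 0 \qquad (W \to \infty),
```

so \(Y_T(\phi )\) concentrates around its expectation with high probability.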
3 Graphical Representation
In this section we give the exact formula for the quantity of interest and we motivate the graphical representation that we will use to compute the upper bound.
3.1 Expansion in Non-backtracking Powers
First, as in [3] we define \(H^{(n)}_{x_0x_n}\) by
The following result is proved in [3] .
Lemma 1
Let \(U_k\) be the kth Chebyshev polynomial of the second kind and let
We define the quantity \(a_m(t)\;:=\;\sum \limits _{k \geqslant 0}\frac{\alpha _{m+2k}(t)}{(M-1)^{k}}\,.\) We have that
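For readers who want to experiment, the polynomials \(U_k\) satisfy the three-term recurrence \(U_0(x)=1\), \(U_1(x)=2x\), \(U_{k+1}(x)=2xU_k(x)-U_{k-1}(x)\), a standard fact (not specific to this paper) implemented in the following sketch:

```python
import numpy as np

# Chebyshev polynomials of the second kind via the three-term recurrence
# U_0(x) = 1, U_1(x) = 2x, U_{k+1}(x) = 2x U_k(x) - U_{k-1}(x).
def chebyshev_U(k, x):
    x = np.asarray(x, dtype=float)
    u_prev, u = np.ones_like(x), 2 * x
    if k == 0:
        return u_prev
    for _ in range(k - 1):
        u_prev, u = u, 2 * x * u - u_prev
    return u
```

On \([-1,1]\) one has the identity \(U_k(\cos \theta )=\sin ((k+1)\theta )/\sin \theta \), which is convenient for testing.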
We will also use the abbreviation
Plugging in the definition of \(Y_T(\phi )\) we have
Moreover,
We summarize the graphical representation of \(\langle H_{0y_1}^{(n_{11})}H_{y_10}^{(n_{12})} ; H_{0y_2}^{(n_{21})}H_{y_20}^{(n_{22})} \rangle \,.\)
3.2 Graphical Representation
We define a graph \(\mathcal {L}\) which consists of two rooted directed chains \(\mathcal {L}_1\) and \(\mathcal {L}_2\) by
where \(\mathcal {L}_k(n_{k1},n_{k2})\) is a rooted directed chain of length \(n_{k1}+n_{k2} \geqslant 1\) for \(k \in \{1,2 \}.\) We denote the set of vertices of the graph \(\mathcal {L}\) by \(V(\mathcal {L})\) and the set of edges by \(E(\mathcal {L})\). Each rooted directed chain contains two distinguished vertices, the \(\textit{root}\) \(r(\mathcal {L}_k)\) and the \(\textit{summit}\) \(s(\mathcal {L}_k)\), the latter defined as the unique vertex such that the path \(r(\mathcal {L}_k)\rightarrow s(\mathcal {L}_k)\) has length \(n_{k1}\). Note that if \(n_{k1}=0\) or \(n_{k2}=0\) then \(r(\mathcal {L}_k)=s(\mathcal {L}_k)\). Using the orientation of the edges, for each \(e \in E(\mathcal {L})\) we call the vertex \(a(e) \in V(\mathcal {L})\) the predecessor and the vertex \(b(e) \in V(\mathcal {L})\) the successor (see Fig. 1). Similarly, for each vertex \(i \in V(\mathcal {L})\), we call the adjacent vertices a(i) and b(i) the predecessor and the successor of i (see Fig. 2). The root and the summit are drawn using white dots and all other vertices using black dots. Hence, the set of vertices splits as \(V(\mathcal {L})=V_{w}(\mathcal {L})\sqcup V_{b}(\mathcal {L}),\) where the subscript w stands for the white vertices and b for the black vertices.
Each vertex \(i \in V(\mathcal {L})\) carries a \(\textit{label}\) \(x_i \in \varLambda _N\). The labels \(\varvec{\mathrm {x}}=(x_i)_{i \in V(\mathcal {L})}\) can be split according to our needs, e.g. \(\varvec{\mathrm {x}}=(\varvec{\mathrm {x}}_1, \varvec{\mathrm {x}}_2),\) where \(\varvec{\mathrm {x}}_k:=(x_i)_{i \in V(\mathcal {L}_k)},\; k \in \{1,2 \},\) or \(\varvec{\mathrm {x}}=(\varvec{\mathrm {x}}_b,\varvec{\mathrm {x}}_w),\) where \(\varvec{\mathrm {x}}_b:=(x_i)_{i \in V_b(\mathcal {L})} \) and \(\varvec{\mathrm {x}}_w:=(x_i)_{i \in V_w(\mathcal {L})}\,.\)
For each configuration of labels \(\varvec{\mathrm {x}}\) we assign a lumping \(\varGamma =\varGamma (\varvec{\mathrm {x}})\) of the set of edges \(E(\mathcal {L})\) as in [3] . A lumping is an equivalence relation on \(E(\mathcal {L})\) . We use the notation \(\varGamma =\{\gamma \}_{\gamma \in \varGamma }\) where \(\gamma \in \varGamma \) is a lump, i.e. an equivalence class of \(\varGamma \) . The lumping \(\varGamma = \varGamma (\varvec{\mathrm {x}})\) associated with the labels \(\varvec{\mathrm {x}}\) is given by the equivalence relation
The summation over \(\varvec{\mathrm {x}}\) is performed with respect to the indicator function
Throughout the proof we will use the notation
Using the graph \(\mathcal {L}\) we may now write the covariance as
where
We further define the \(\textit{value}\) of the lumping \(\varGamma \) by
Let \(\mathfrak {P}_{c}(E(\mathcal {L}))\) be the set of connected even lumpings, i.e. the set of all lumpings \(\varGamma \) for which each lump \(\gamma \in \varGamma \) has even size and there exists \(\gamma \in \varGamma \) such that \(\gamma \cap E(\mathcal {L}_k) \ne \emptyset \,,\) for \(k \in \{1,2\}\,.\)
Using that \(\mathbb {E}H_{xy}=0\), it is not hard to see that the graphical representation of the variance yields the following result (for further details, see [4]).
Lemma 2
We have that
We define the set of all connected pairings
We call the lumps \(\pi \in \varPi \) of a pairing \(\varPi \) \(\textit{bridges}\). Moreover, with each pairing \(\varPi \in \mathfrak {M}_c\) we associate its underlying graph \(\mathcal {L}(\varPi )\), and regard \(n_{11}(\varPi )\) and \(n_{12}(\varPi )\), \(n_{21}(\varPi )\) and \(n_{22}(\varPi )\) as functions on \(\mathfrak {M}_c \) in self-explanatory notation. We abbreviate \(V(\varPi )=V(\mathcal {L}(\varPi ))\) and \(E(\varPi )=E(\mathcal {L}(\varPi ))\). We refer to \(V(\varPi )\) as the set of vertices of \(\varPi \) and to \(E(\varPi )\) as the set of edges of \(\varPi \) .
Let us define the indicator function
Using the same reasoning as in Section 4 of [4] and Equation 4.14 of [4], we obtain the following bound.
Lemma 3
We have
3.3 Collapsing of Parallel Bridges
We further construct as in [4] the skeleton \(\varSigma =S(\varPi )\) of a pairing \(\varPi \in \mathfrak {M}_c\) by collapsing all parallel bridges of \(\varPi \) . By definition the bridges \(\{e_1, e'_1\}\) and \(\{e_2, e'_2 \}\) are \(\textit{parallel}\) if \(b(e_1)=a(e_2)\in V_b(\varPi )\) and \(b(e'_2)=a(e'_1)\in V_b(\varPi )\) . To each \(\varPi \in \mathfrak {M}_c\) we associate a couple \((\varSigma , l_{\varSigma })\), where \(\varSigma \in \mathfrak {M}_c\) has no parallel bridges and \(l_{\varSigma }:=(l_\sigma )_{\sigma \in \varSigma } \in \mathbb {N}^{\varSigma }\) . The integer \(l_{\sigma }\) denotes the number of parallel bridges of \(\varPi \) that were collapsed into the bridge \(\sigma \) of \(\varSigma \,.\) Conversely, for any given couple \((\varSigma , l_{\varSigma }), \) where \(\varSigma \in \mathfrak {M}_c\) has no parallel bridges and \(l_\varSigma \in \mathbb {N}^{\varSigma }\), we define \(\varPi =G_{l_{\varSigma }}(\varSigma )\) as the pairing obtained from \(\varSigma \) by replacing for each bridge \(\sigma \in \varSigma \) , the bridge \(\sigma \) with \(l_{\sigma }\) parallel bridges (Fig. 3). This construction gives a bijective mapping \(\varPi \longleftrightarrow (\varSigma , l_{\varSigma })\,.\) We further define the set of admissible skeletons as
Note that all \(\varSigma \in \mathfrak {G}\) are connected.
The following result is straightforward to check from the definition of \(\mathfrak {G}\); see Lemma 7.4 (ii) in [3].
Lemma 4
Let \(\{e, e'\} \in \varSigma \). Then e and \(e'\) are adjacent only if \(e\cap e' \in V_w(\varSigma )\,.\)
In the following we rewrite the right hand side of (2.7) using the summation over skeleton pairings \(\varSigma =S(\varPi )\), followed by different ways of expanding the bridges of \(\varSigma \) . For this, let \(\varPi =G_{l_\varSigma }(\varSigma )\) . We further define \(|l_{\varSigma }|:=\sum _{\sigma \in \varSigma }l_{\sigma }\) for \(\varSigma \in \mathfrak {G}\) and \(l_{\varSigma } \in \mathbb {N}^{\varSigma }\). For the skeleton \(\varSigma \in \mathfrak {G}\) of the pairing \(\varPi =G_{l_{\varSigma }}(\varSigma )\) we use the notation \(n_{ij}(\varSigma , l_{\varSigma })\) for \(n_{ij}(\varPi )\), for all \(i, j \in \{1,2\}\) .
Parametrising \(\varPi \) using \(\varSigma \) and \(l_{\varSigma }\) and neglecting the non-backtracking condition in the definition of \(Q_{y_1,y_2}(\varvec{\mathrm {x}})\) we obtain the following upper bound (for full details see Lemma 7.6 in [3]) .
Let us define
The following result is obtained using (2.9) .
Lemma 5
We have that
The following result follows easily from the definition of \(S_{xy}\).
Lemma 6
Let \(l \in \mathbb {N}\) . For each \(x,y \in \varLambda _N\) we have
(i) \(\sum \limits _{y}(S^l)_{xy}=(\frac{M}{M-1})^{l}\,.\)
(ii) \((S^l)_{xy}\;\leqslant \;(\frac{M}{M-1})^{l-1}\frac{1}{M-1}\,.\)
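Both identities are easy to verify numerically in \(d=1\) for the profile \(S_{xy}=\varvec{\mathrm {1}}(1\leqslant |x-y|_N\leqslant W)/(M-1)\) (assumed here, matching the row normalization \(\sum _y S_{xy}=M/(M-1)\)):

```python
import numpy as np

# Numerical check of Lemma 6 in d = 1 (assumed profile S, see lead-in).
N, W, l = 40, 3, 4
idx = np.arange(N)
dist = np.abs((idx[:, None] - idx[None, :] + N // 2) % N - N // 2)
M = 2 * W
S = ((dist >= 1) & (dist <= W)) / (M - 1)
Sl = np.linalg.matrix_power(S, l)

row_sums = Sl.sum(axis=1)   # (i): every row sums to (M/(M-1))^l
max_entry = Sl.max()        # (ii): bounded by (M/(M-1))^(l-1) / (M-1)
```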
3.4 Orbits of Vertices
Let us fix \(\varSigma \in \mathfrak {G}\). On the set of vertices \(V(\varSigma )\) we construct the \(\textit{orbits of vertices}\) as in [3]. We define a map \(\tau : V(\varSigma ) \rightarrow V(\varSigma )\) as follows. Let \(i \in V(\varSigma )\) and let e be the unique edge such that \(\{\{i, b(i)\}, e\} \in \varSigma \); we set \(\tau i :=b(e)\). We denote the orbit of the vertex \(i \in V(\varSigma )\) by \([i]\;:=\; \{ \tau ^n i : n \in \mathbb {N}\}\).
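The orbit \([i]=\{\tau ^n i : n\in \mathbb {N}\}\) can be computed generically by iterating \(\tau \) until a repetition occurs; the sketch below uses a made-up toy map tau on a small vertex set, not a skeleton from the paper:

```python
# Forward orbit [i] = {tau^n(i) : n in N} of a vertex under a self-map tau,
# computed by iterating until a previously seen vertex reappears.
def orbit(i, tau):
    seen = []
    j = i
    while j not in seen:
        seen.append(j)
        j = tau[j]
    return set(seen)

# Toy example: tau acts on vertices {0, 1, 2, 3} with a 3-cycle and a fixed point.
tau = {0: 1, 1: 2, 2: 0, 3: 3}
```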
We order the edges of \(\varSigma \) in some arbitrary fashion and denote this order by <. Each bridge \(\sigma \in \varSigma \) “sits between” the orbits \(\zeta _1(\sigma )\) and \(\zeta _2(\sigma )\). More precisely, let \(\sigma =\{e , e'\}\) with \(e <e'\) and \(e=\{ i, b(i)\}\). Then \(\zeta _1(\sigma ):=[i]\) and \(\zeta _2(\sigma ):=[b(i)]\).
Let \(Z(\varSigma ):=\{[i] : i \in V(\varSigma )\}\) be the set of orbits of \(\varSigma \) . This set contains four distinguished orbits \(\{ [r(\mathcal {L}_1)],[r(\mathcal {L}_2)],[s(\mathcal {L}_1)],[s(\mathcal {L}_2)] \}\) which need not be distinct (Fig. 4). Let \(|\varSigma |\) be the number of bridges of the skeleton \(\varSigma \in \mathfrak {G}\) and let \(L(\varSigma )=|Z^*(\varSigma )|\) with \(Z^*(\varSigma ):=Z(\varSigma )\setminus \{[r(\mathcal {L}_1)],[r(\mathcal {L}_2)]\}\) .
The following result is an adaptation of the \(\textit{2/3-rule}\) introduced in Lemma 7.7 of [3] .
Lemma 7
We have the inequality
Proof
Let \(Z'(\varSigma ):=Z(\varSigma )\setminus \{ [r(\mathcal {L}_1)], [r(\mathcal {L}_2)], [s(\mathcal {L}_1)], [s(\mathcal {L}_2)]\}\). Using the same reasoning as in the proof of the \(\textit{2/3 rule}\) in [3], we obtain that each orbit in \(Z'(\varSigma )\) contains at least 3 vertices.
The total number of vertices of \(\varSigma \) not including \(\{r(\mathcal {L}_1),r(\mathcal {L}_2),s(\mathcal {L}_1),s(\mathcal {L}_2) \} \) is \(2|\varSigma |-4\). It follows that \(3|Z'(\varSigma )|\;\leqslant \; 2|\varSigma |-4\), i.e. \(|Z'(\varSigma )|\;\leqslant \; 2|\varSigma |/3-4/3\).
Using that \(|Z^*(\varSigma )|\; \leqslant \; |Z'(\varSigma )|+2\), we obtain \(|Z^*(\varSigma )|\;\leqslant \; 2|\varSigma |/3+2/3\) . \(\square \)
We remark that Lemma 7 is sharp in the sense that there exists \(\varSigma \in \mathfrak {G}\) for which the estimate of Lemma 7 is saturated.
4 The Case \(|\varSigma |\;\geqslant \; 3\)
Using Lemma 7 and the same argument as in Section 7.5 of [3] we obtain the following result.
Lemma 8
Let \(\varSigma \in \mathfrak {G}\) and \(l_{\varSigma } \in \mathbb {N}^{\varSigma }\). We have that
4.1 Estimation of the Variance for \(|l_{\varSigma }| \ll M^{1/3}\)
Let \(\mu < \frac{1}{3}\) . In the summation (2.12) we introduce a cut-off at \(|l_{\varSigma }|< M^{\mu }\) . We define
The following result is proved in [3] , Lemma 7.9 .
Lemma 9
(i) For any time t and for any \(n \in \mathbb {N}\) we have \(|a_n(t)|\leqslant \frac{Ct^n}{n!}\,,\) for some constant \(C\,.\)
(ii) We have \(\sum \limits _{n\geqslant 0}|a_n(t)|^2=1+O(M^{-1})\,,\) uniformly in \(t \in \mathbb {R}\).
A new estimate on \( \sum _{l_{\varSigma }}\varvec{\mathrm {1}}(|l_{\varSigma }|\;\leqslant \; M^{\mu })|a_{n_{11}(\varSigma , l_{\varSigma })}(t)\overline{a_{n_{12}(\varSigma , l_{\varSigma })}(t)}a_{n_{21}(\varSigma , l_{\varSigma })}(t)\overline{a_{n_{22}(\varSigma , l_{\varSigma })}(t)}|\) is established in the following lemma. The new technique is based on splitting the summation according to the bridges that touch the rooted directed chains (Fig. 5).
Lemma 10
For any \(\varSigma \in \mathfrak {G}\) with \(|\varSigma |\geqslant 3\) we have
Proof
Let \(\varSigma \in \mathfrak {G}\). We denote the four directed paths by \(\mathcal {S}_1 \equiv r(\mathcal {L}_1(\varSigma ))\rightarrow s(\mathcal {L}_1(\varSigma ))\), \(\mathcal {S}_2 \equiv s(\mathcal {L}_1(\varSigma ))\rightarrow r(\mathcal {L}_1(\varSigma ))\), \(\mathcal {S}_3 \equiv r(\mathcal {L}_2(\varSigma ))\rightarrow s(\mathcal {L}_2(\varSigma ))\) and \(\mathcal {S}_4 \equiv s(\mathcal {L}_2(\varSigma ))\rightarrow r(\mathcal {L}_2(\varSigma ))\). There always exists a bridge connecting \(\mathcal {S}_{i}\) and \(\mathcal {S}_{j}\) for some \(i \ne j\,.\) Without loss of generality we choose \(\sigma _1\) connecting \(\mathcal {S}_1\) and \(\mathcal {S}_2\).
We have the following cases :
(i) There is a bridge \(\sigma _2 \in \varSigma \) between \(\mathcal {S}_3\) and \(\mathcal {S}_4\) .
Let \(\bar{\varSigma }\; :=\; \varSigma \setminus \{\sigma _1, \sigma _2 \}\). There exist functions \(f_1(l_{\bar{\varSigma }})\), \(f_2(l_{\bar{\varSigma }})\), \(f_3(l_{\bar{\varSigma }})\) and \(f_4(l_{\bar{\varSigma }})\) such that \(n_{11}(\varSigma , l_{\varSigma })=f_1(l_{\bar{\varSigma }})+l_{\sigma _1}\), \( n_{12}(\varSigma , l_{\varSigma })=f_2(l_{\bar{\varSigma }})+l_{\sigma _1}\), \(n_{21}(\varSigma , l_{\varSigma })=f_3(l_{\bar{\varSigma }})+l_{\sigma _2}\) and \( n_{22}(\varSigma , l_{\varSigma })=f_4(l_{\bar{\varSigma }})+l_{\sigma _2}\). Note that \(n_{11}(\varSigma , l_{\varSigma })\) and \(n_{21}(\varSigma , l_{\varSigma })\) do not represent the same linear combination of the entries of \(l_\varSigma \).
We get
Using the elementary inequality \(|abcd| \leqslant |a|^{2}|c|^{2}+|b|^{2}|d|^{2}\) we obtain
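The elementary inequality invoked here follows from the arithmetic–geometric mean inequality applied to the pair \(|ac|\), \(|bd|\):

```latex
|abcd| \;=\; |ac|\,|bd|
\;\leqslant\; \tfrac{1}{2}\bigl(|ac|^{2} + |bd|^{2}\bigr)
\;\leqslant\; |a|^{2}|c|^{2} + |b|^{2}|d|^{2}.
```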
Using the inequality between the indicator functions \(\sum \limits _{l_{\bar{\varSigma }}}\varvec{\mathrm {1}}(|l_{\varSigma }|\leqslant M^{\mu })\leqslant \sum \limits _{l_{\bar{\varSigma }}}\varvec{\mathrm {1}}(|l_{\bar{\varSigma }}|\leqslant M^{\mu })\) and Lemma 9 (ii) we obtain that
Note that the same argument holds for \(|a_{f_2(l_{\bar{\varSigma }})+l_{\sigma _1}}(t)|^2|a_{f_4(l_{\bar{\varSigma }})+l_{\sigma _2}}(t)|^2\) .
Using that \(|\bar{\varSigma }|=|\varSigma |-2\) we obtain that
(ii) There is no bridge connecting \(\mathcal {S}_3\) and \(\mathcal {S}_4\) . In this case we consider two bridges \(\sigma _3\) and \(\sigma _4\) that are touching \(\mathcal {S}_3\) and \(\mathcal {S}_4\) respectively. We further define \(\bar{\varSigma }\;:=\;\varSigma \setminus \{ \sigma _1, \sigma _3, \sigma _4 \}\) .
We have that
where \(\eta _3\,, \eta _4 \in \{1, 2\}\,.\) Using the same inequality as in (3.1) and that \(l_{\bar{\sigma }}\) and \(l_{\sigma _1}\) are distinct we obtain that
The same holds for \(\overline{a_{f_2(\bar{l},l_{\sigma _3},l_{\sigma _4})+l_{\sigma _1}}(t)}\) and \(\overline{a_{f_4(\bar{l})+\eta _4 l_{\sigma _4}}(t)}\) .
Now the claim follows like in (i). \(\square \)
Let \(m=|\varSigma |\). Using Lemma 10 and the same reasoning as in Section 7.6 of [3] we obtain
It is easy to see, as in Section 7.6 of [3] , that \(|\{\varSigma : |\varSigma |=m\}|\;\leqslant \; 2^{m}m!\) .
Finally, we obtain that
4.2 Estimation of the Variance for \(|l_{\varSigma }| \geqslant M^{1/3}\)
Let us define
We also define the new set of variables
Note that \(p_1+p_2=|l_{\varSigma }|\) .
As in [3], using Lemma 9 (i) and the inequality \( \frac{p!}{(p-q)!} \leqslant \frac{(p+q)!}{p!}\) we obtain
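The factorial inequality used here holds since each factor in the product on the left is dominated by the corresponding factor on the right:

```latex
\frac{p!}{(p-q)!} \;=\; \prod_{j=0}^{q-1}(p-j)
\;\leqslant\; \prod_{j=1}^{q}(p+j) \;=\; \frac{(p+q)!}{p!}.
```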
Using the time scale \(t \sim CM^{\kappa }\) we obtain that
As in Section 7.7 of [3], using that \(C p_1!p_1!p_2!p_2!\geqslant p_1^{2p_1}p_2^{2p_2}\) , for some constant C, we obtain that
Choosing \(\mu \;=\;1/3-\beta \), with \(\beta \) satisfying \(0 \;<\;\beta \;<\; 2/3-2\kappa \), completes the proof of Theorem 1 in the case \(|\varSigma |\;\geqslant \; 3\,.\)
5 Estimation for the Variance in the Case \(|\varSigma |\;\leqslant \;2\)
5.1 Estimation for the Variance in the Case \(|\varSigma |\;=\;0\) and \(|\varSigma |\;=\;1\)
For \(|\varSigma |=0\) the covariance \(\langle H_{00} ; H_{00}\rangle \) vanishes. For \(|\varSigma |=1\) we estimate terms of the form
Given that \(\langle H_{00}; H_{00} \rangle =0\) it follows that in the cases \(|\varSigma |=0\) and \(|\varSigma |=1\) the quantity of interest is deterministic.
5.2 Estimation of the Variance in the Case \(|\varSigma |\;=\;2\)
Given that the two rooted directed chains are connected, the graph with \(l_{\sigma _1}\) bridges touching \(\mathcal {S}_1\) and \(\mathcal {S}_2\) and \(l_{\sigma _2}\) bridges touching \(\mathcal {S}_3\) and \(\mathcal {S}_4\) gives no contribution to the value of the variance. Moreover, we obtain, up to permutations, four different possible configurations. In all four cases we have \(y_1=y_2\).
We have that
Using Lemma 9 (ii) twice together with Lemma 6 (ii) and (i), we obtain that
Using again the time scale \(t \sim CM^{\kappa }\) we obtain that
As in (3.11) we obtain that
As before, we choose \(\mu \;=\;1/3-\beta \) . This completes the proof of Theorem 1.
References
Anderson, P.W.: Absence of diffusion in certain random lattices. Phys. Rev. 109, 1492–1505 (1958)
Spencer, T.: Random banded and sparse matrices (Chapter 23). In: Akemann, G., Baik, J., Di Francesco, P. (eds.) The Oxford Handbook of Random Matrix Theory. Oxford University Press, Oxford (2011)
Erdös, L., Knowles, A.: Quantum diffusion and eigenfunction delocalization in a random band matrix model. Commun. Math. Phys. 303, 509–554 (2011)
Erdös, L., Knowles, A.: The Altshuler-Shklovskii formulas for random band matrices II: the general case. Preprint arXiv:1309.5107, to appear in Ann. H. Poincaré
Acknowledgements
This work is based on a Semester Project at ETH Zürich under the supervision of Prof. Dr. Antti Knowles. The author is grateful to Prof. Antti Knowles for his careful guidance in understanding the problem.
Margarint, V. Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model. J Stat Phys 172, 781–794 (2018). https://doi.org/10.1007/s10955-018-2065-2