Central Limit Theorem for Linear Eigenvalue Statistics for a Tensor Product Version of Sample Covariance Matrices
Abstract
Keywords
Keywords: Random matrices · Sample covariance matrices · Central Limit Theorem · Linear eigenvalue statistics

Mathematics Subject Classification (2010): 15B52 · 60F05

1 Introduction: Problem and Main Result
The model constructed in (1.3) appeared in quantum information theory and was introduced to random matrix theory by Hastings (see [3, 14, 15]). In [3], it was studied as a quantum analog of the classical probability problem on the allocation of p balls among q boxes (a quantum model of a data-hiding and correlation-locking scheme). In particular, by a combinatorial analysis of the moments \(n^{-k}{{\mathrm{Tr}}}{\mathcal {M}}_n^p\), \(p\in {\mathbb {N}}\), it was proved that for the special cases of random vectors \({\mathbf {y}}\) uniformly distributed on the unit sphere in \({\mathbb {C}}^n\) or having Gaussian components, the expectations of the Normalized Counting Measures of eigenvalues of the corresponding matrices converge to the Marchenko–Pastur law [17]. The main goal of the present paper is to extend this result of [3] to a wider class of matrices \(M_{n,m,k}({\mathbf {y}})\) and to prove the Central Limit Theorem for linear eigenvalue statistics in the case \(k=2\).
Definition 1.1
Following the scheme of the proof proposed in [20], we show that despite the fact that the number of independent parameters, \({ kmn}=O(n^{k+1})\) for \(k\ge 2\), is much less than the number of matrix entries, \(n^{2k}\), the limiting distribution of eigenvalues still obeys the Marchenko–Pastur law. We have:
Theorem 1.2
We use the notation \(\int \) for integrals over \({\mathbb {R}}\). Note that an analog of this statement for a deformed version of \(M_{n,m,2}\) was proved in [26].
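Theorem 1.2 lends itself to a quick numerical sanity check. The sketch below is an illustration only, not part of the proofs: it assumes the special case of \({\mathbf {y}}\) with i.i.d. Gaussian components (already covered in [3]), takes all \(\tau _\alpha =1\), and uses our own hypothetical helper name `tensor_sample_covariance` and parameter choices. It builds the \(n^k\times n^k\) matrix as a sum of m rank-one terms \(Y_\alpha Y_\alpha ^T\), where each \(Y_\alpha \) is a k-fold tensor (Kronecker) product of independent isotropic vectors, and checks that the empirical spectrum matches the Marchenko–Pastur law of ratio \(c=m/n^k\).

```python
import numpy as np

def tensor_sample_covariance(n, m, k, rng):
    """Sum of m rank-one terms Y Y^T, Y a k-fold Kronecker product of
    independent vectors y with E{y_i y_j} = delta_ij / n (Gaussian case)."""
    dim = n ** k
    M = np.zeros((dim, dim))
    for _ in range(m):
        Y = np.ones(1)
        for _ in range(k):
            y = rng.standard_normal(n) / np.sqrt(n)
            Y = np.kron(Y, y)
        M += np.outer(Y, Y)
    return M

rng = np.random.default_rng(0)
n, k, m = 30, 2, 450
c = m / n ** k  # = 0.5
eigs = np.linalg.eigvalsh(tensor_sample_covariance(n, m, k, rng))

# Marchenko-Pastur support edges a_pm = (1 +/- sqrt(c))^2; since c < 1,
# a (1-c)-fraction of the eigenvalues vanishes (the matrix has rank m),
# and the rest spread over [a_minus, a_plus] up to finite-size effects.
a_minus, a_plus = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
```

Since \({{\mathrm{Tr}}}{\mathcal {M}}_n=\sum _\alpha \Vert Y_\alpha \Vert ^2\approx m\), the mean eigenvalue concentrates near c, which gives a second quick consistency check.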
There is a considerable number of papers on the CLT for linear eigenvalue statistics of sample covariance matrices \({\mathcal {M}}_{n,m,1}\) (1.5) in which all entries of the matrix \({B}_{n,m,1}\) are independent (see [4, 7, 8, 9, 11, 16, 18, 19, 21, 25] and references therein). Less is known in the case where the components of the vector \({\mathbf {y}}\) are dependent. In [13], the CLT was proved for linear statistics of eigenvalues of \({\mathcal {M}}_{n,m,1}\) corresponding to a special class of isotropic vectors defined below.
Definition 1.3
The distribution of a random vector \({\mathbf {y}}\in {\mathbb {R}}^n\) is called unconditional if its components \(\{y_j\}_{j=1}^n\) have the same joint distribution as \(\{\pm y_j\}_{j=1}^n\) for any choice of signs.
Definition 1.4
Here and in what follows we use the notation \(\xi ^\circ =\xi -{\mathbf {E}}\{\xi \}\).
Theorem 1.5
Here we prove an analog of Theorem 1.5 in the case \(k=2\). We start by establishing a version of (1.16) in the general case \(k\ge 1\):
Lemma 1.6
It follows from Lemma 1.6 that in order to prove the CLT (if any) for linear eigenvalue statistics of \({\mathcal {M}}_n\), one needs to normalize them by \(n^{-(k-1)/2}\). To formulate our main result, we need a few more definitions.
Definition 1.7
We say that the distribution of a random vector \({\mathbf {y}}\in {\mathbb {R}}^n\) is permutationally invariant (or exchangeable) if it is invariant with respect to the permutations of entries of \({\mathbf {y}}\).
Definition 1.8
- (i)
- (ii) their sixth moments satisfy the conditions
$$\begin{aligned} a_{2,2,2}:=&\,{\mathbf {E}}\{y_{ i}^2y_{ j}^2y_{ k}^2\}=n^{-3}+O(n^{-4}),\nonumber \\ a_{2,4}:=&\,{\mathbf {E}}\{y_{ i}^2y_{ j}^4\}=O(n^{-3}),\quad a_{6}:={\mathbf {E}}\{y_{ i}^6\}=O(n^{-3}), \end{aligned}$$ (1.21)
- (iii) for every \(n\times n\) matrix \(H_n\) that does not depend on \({\mathbf {y}}\),
$$\begin{aligned} {\mathbf {E}}\{|(H_n{\mathbf {y}},{\mathbf {y}})^{\circ }|^6\}\le C||H_{n}||^{6}n^{-3}. \end{aligned}$$ (1.22)
It can be shown that a vector of the form \({\mathbf {y}}={\mathbf {x}}/n^{1/2}\), where \({\mathbf {x}}\) has i.i.d. components with an even distribution and a bounded twelfth moment, is a CLT-vector, as is a vector uniformly distributed on the unit ball in \({\mathbb {R}}^n\), or a properly normalized vector uniformly distributed on the ball \(B_p^n=\big \{{\mathbf {x}}\in {\mathbb {R}}^n:\; \sum _{j=1}^n|x_j|^p\le 1\big \}\) of \(l_p^n\) (see [13], Section 2, for the case \(k=1\)).
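For the unit-ball example above, the proper normalization can be checked numerically. The sketch below is illustrative only (the helper name `ball_clt_vector` and the parameters are ours): for \({\mathbf {x}}\) uniform on the unit ball in \({\mathbb {R}}^n\) one has \({\mathbf {E}}\{x_i^2\}=1/(n+2)\), so \({\mathbf {y}}=\sqrt{(n+2)/n}\,{\mathbf {x}}\) satisfies the isotropy condition \({\mathbf {E}}\{y_iy_j\}=n^{-1}\delta _{ij}\).

```python
import numpy as np

def ball_clt_vector(n, rng):
    """A point uniform on the unit ball in R^n (Gaussian direction times
    radius U^{1/n}), rescaled by sqrt((n+2)/n) so that E{y_i^2} = 1/n."""
    g = rng.standard_normal(n)
    x = (rng.uniform() ** (1.0 / n)) * g / np.linalg.norm(g)
    return np.sqrt((n + 2) / n) * x

rng = np.random.default_rng(1)
n, trials = 20, 20000
samples = np.array([ball_clt_vector(n, rng) for _ in range(trials)])

# Empirical second moments: n * E{y_i^2} should be close to 1 in each
# coordinate, and the off-diagonal correlations should vanish.
second_moment = (samples ** 2).mean(axis=0)
cross_moment = (samples[:, 0] * samples[:, 1]).mean()
```

The same Monte Carlo check applies, with a different normalization constant, to the \(l_p^n\) balls \(B_p^n\).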
The main result of the present paper is:
Theorem 1.9
Remark 1.10
- (i) In particular, if \(\tau _1=\cdots = \tau _m= 1\), then
$$\begin{aligned} V[\varphi ]=\frac{(a+b+2)}{2c\pi ^{2}}\left( \int _{a_{-}}^{a_{+}}\varphi (\mu )\frac{\mu -a_{m}}{\sqrt{(a_{+}-\mu )(\mu -a_{-})}}\hbox {d}\mu \right) ^{2}, \end{aligned}$$
where \(a_{\pm }=(1\pm \sqrt{c})^{2}\) and \(a_{m}=1+c\).
- (ii)
We can replace the condition of uniform boundedness of the \(\tau _\alpha \) with uniform boundedness of the eighth moments of the Normalized Counting Measures \(\sigma _n\), or take \(\{\tau _\alpha \}_\alpha \) to be real random variables independent of \({\mathbf {y}}\) with a common probability law \(\sigma \) having a finite eighth moment. In general, it is clear from (1.23) that it should suffice for the second moments of \(\sigma _n\) to be uniformly bounded in n.
- (iii)
If \(a+b+2=0\) in (1.23), then to prove the CLT one needs to renormalize the linear eigenvalue statistics. In particular, it can be shown that if \({\mathbf {y}}\) in the definition of \({\mathcal {M}}_{n,m,k}({\mathbf {y}})\) is uniformly distributed on the unit sphere in \({\mathbb {R}}^n\), then \(a+b+2=0\), and under the additional assumption \(m/n=c+O(n^{-1})\) the variance of the linear eigenvalue statistic corresponding to a sufficiently smooth test function is of order \(O(n^{k-2})\) (cf. (1.20)).
The paper is organized as follows. Section 2 fixes notation, and Sect. 3 contains some known facts and auxiliary results. In Sect. 4, we prove Theorem 1.2 on the convergence of the NCMs of eigenvalues of \({\mathcal {M}}_{n,m,k}\). Sections 5 and 7 present some asymptotic properties of the bilinear forms (HY, Y), where Y is given by (1.1) and H does not depend on Y. In Sect. 6, we prove Lemma 1.6. In Sect. 8, we find the limiting expression for the covariance of the resolvent traces. Section 9 contains the proof of the main result, Theorem 1.9.
2 Notations
Given a matrix H, \(||H||\) and \(||H||_{HS}\) denote the Euclidean operator norm and the Hilbert–Schmidt norm, respectively. We use C for an absolute constant which can vary from place to place.
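The two norms fixed here are related by the classical comparison \(||H||\le ||H||_{HS}\le \sqrt{{{\mathrm{rank}}}\,H}\,||H||\), which is used implicitly throughout the estimates below. A minimal numerical sketch (our own illustration, not from the paper):

```python
import numpy as np

# The Euclidean operator norm ||H|| is the largest singular value of H;
# the Hilbert-Schmidt norm ||H||_HS = (Tr H H^*)^{1/2} is the Frobenius
# norm, i.e. the l2 norm of all singular values taken together.
rng = np.random.default_rng(2)
H = rng.standard_normal((5, 5))

op_norm = np.linalg.norm(H, 2)       # largest singular value
hs_norm = np.linalg.norm(H, 'fro')   # Hilbert-Schmidt (Frobenius) norm
```

Since \(||H||_{HS}^2\) sums the squared singular values while \(||H||^2\) keeps only the largest, the comparison above follows immediately.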
3 Some Facts and Auxiliary Results
We need the following bound for the moments of martingales, obtained in [10]:
Proposition 3.1
Lemma 3.2
Proof
Lemma 3.3
Proof
The following statement was proved in [20].
Proposition 3.4
We will also need the following simple claim:
Claim 3.5
4 Proof of Theorem 1.2
Remark 4.1
5 Variance of Bilinear Forms
Lemma 5.1
Proof
6 Proof of Lemma 1.6
Lemma 6.1
Proof
Proof of Lemma 1.6
7 Case \(k=2\): Some Preliminary Results
Lemma 7.1
Proof
Lemma 7.2
Proof
Lemma 7.3
Proof
8 Covariance of the Resolvent Traces
Lemma 8.1
Proof
9 Proof of Theorem 1.9
It remains to prove the following lemma.
Lemma 9.1
Proof
Acknowledgements
The author would like to thank Leonid Pastur for an introduction to the problem and for fruitful discussions.
References
- 1. Adamczak, R.: On the Marchenko–Pastur and circular laws for some classes of random matrices with dependent entries. Electron. J. Prob. 16, 1065–1095 (2011)
- 2. Akhiezer, N.I., Glazman, I.M.: Theory of Linear Operators in Hilbert Space. Dover, New York (1993)
- 3. Ambainis, A., Harrow, A.W., Hastings, M.B.: Random tensor theory: extending random matrix theory to random product states. Commun. Math. Phys. 310(1), 25–74 (2012)
- 4. Bai, Z.D., Silverstein, J.W.: CLT for linear spectral statistics of large dimensional sample covariance matrices. Ann. Prob. 32, 553–605 (2004)
- 5. Bai, Z.D., Silverstein, J.W.: Spectral Analysis of Large Dimensional Random Matrices. Springer, New York (2010)
- 6. Bai, Z.D., Zhou, W.: Large sample covariance matrices without independence structures in columns. Stat. Sin. 18(2), 425 (2008)
- 7. Bai, Z.D., Wang, X., Zhou, W.: Functional CLT for sample covariance matrices. Bernoulli 16(4), 1086–1113 (2010)
- 8. Banna, M., Merlevède, F.: Limiting spectral distribution of large sample covariance matrices associated with a class of stationary processes. J. Theor. Prob. 28(2), 745–783 (2015)
- 9. Cabanal-Duvillard, T.: Fluctuations de la loi empirique de grandes matrices aléatoires. Ann. Inst. H. Poincaré Probab. Statist. 37(3), 373–402 (2001)
- 10. Dharmadhikari, S.W., Fabian, V., Jogdeo, K.: Bounds on the moments of martingales. Ann. Math. Statist. 39, 1719–1723 (1968)
- 11. Girko, V.: Theory of Stochastic Canonical Equations. Kluwer, Dordrecht (2001)
- 12. Götze, F., Naumov, A.A., Tikhomirov, A.N.: Limit theorems for two classes of random matrices with dependent entries. Teor. Veroyatnost. i Primenen. 59(1), 61–80 (2014)
- 13. Guédon, O., Lytova, A., Pajor, A., Pastur, L.: The central limit theorem for linear eigenvalue statistics of independent random matrices of rank one. Spectral Theory and Differential Equations. AMS Transl. Ser. 2, vol. 233, 145–164 (2014). arXiv:1310.2506
- 14. Hastings, M.B.: A counterexample to additivity of minimum output entropy. Nat. Phys. 5 (2009). arXiv:0809.3972
- 15. Hastings, M.B.: Entropy and entanglement in quantum ground states. Phys. Rev. B 76, 035114 (2007). arXiv:cond-mat/0701055
- 16. Lytova, A., Pastur, L.: Central limit theorem for linear eigenvalue statistics of random matrices with independent entries. Ann. Prob. 37(5), 1778–1840 (2009)
- 17. Marchenko, V., Pastur, L.: The eigenvalue distribution in some ensembles of random matrices. Math. USSR Sb. 1, 457–483 (1967)
- 18. Merlevède, F., Peligrad, M.: On the empirical spectral distribution for matrices with long memory and independent rows. Stoch. Process. Appl. (2016). arXiv:1406.1216
- 19. Najim, J., Yao, J.: Gaussian fluctuations for linear spectral statistics of large random covariance matrices. Ann. Appl. Prob. 26(3), 1837–1887 (2016). arXiv:1309.3728
- 20. Pajor, A., Pastur, L.: On the limiting empirical measure of eigenvalues of the sum of rank one matrices with log-concave distribution. Stud. Math. 195(1), 11–29 (2009)
- 21. Pan, G.M., Zhou, W.: Central limit theorem for signal-to-interference ratio of reduced rank linear receiver. Ann. Appl. Probab. 18, 1232–1270 (2008)
- 22. Pastur, L.: Limiting laws of linear eigenvalue statistics for unitary invariant matrix models. J. Math. Phys. 47, 103303 (2006)
- 23. Pastur, L., Shcherbina, M.: Eigenvalue Distribution of Large Random Matrices. Mathematical Surveys and Monographs, vol. 171. Amer. Math. Soc. (2011)
- 24. Rudin, W.: Real and Complex Analysis, 3rd edn. McGraw-Hill, New York (1986)
- 25. Shcherbina, M.: Central limit theorem for linear eigenvalue statistics of Wigner and sample covariance random matrices. J. Math. Phys., Anal., Geom. 7(2), 176–192 (2011)
- 26. Tieplova, D.: Distribution of eigenvalues of sample covariance matrices with tensor product samples. J. Math. Phys., Anal., Geom. 13(1), 1–17 (2017)
- 27. Yaskov, P.: The universality principle for spectral distributions of sample covariance matrices (2014). arXiv:1410.5190
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.