Abstract
Consider the product \(X = X_{1}\cdots X_{m}\) of \(m\) independent \(n\times n\) iid random matrices. When \(m\) is fixed and the dimension \(n\) tends to infinity, we prove Gaussian limits for the centered linear spectral statistics of \(X\) for analytic test functions. We show that the limiting variance is universal in the sense that it depends neither on \(m\) (the number of factor matrices) nor on the distribution of the entries of the matrices. The main result generalizes and improves upon previous limit statements for the linear spectral statistics of a single iid matrix by Rider and Silverstein as well as by Renfrew and the second author.
References
Adhikari, K., Kishore Reddy, N., Ram Reddy, T., Saha, K.: Determinantal point processes in the plane from products of random matrices. Ann. Inst. Henri Poincaré Probab. Stat. 52(1), 16–46 (2016)
Akemann, G., Burda, Z.: Universal microscopic correlation functions for products of independent Ginibre matrices. J. Phys. A Math. Theor. 45, 465201 (2012)
Akemann, G., Burda, Z., Kieburg, M.: Universal distribution of Lyapunov exponents for products of Ginibre matrices. J. Phys. A Math. Theor. 47, 395202 (2014)
Akemann, G., Ipsen, J.R., Kieburg, M.: Products of rectangular random matrices: singular values and progressive scattering. Phys. Rev. E 88, 052118 (2013)
Akemann, G., Ipsen, J.R., Strahov, E.: Permanental processes from products of complex and quaternionic induced Ginibre ensembles. Random Matrices Theory Appl. 3(4), 1450014 (2014)
Akemann, G., Kieburg, M., Wei, L.: Singular value correlation functions for products of Wishart random matrices. J. Phys. A Math. Theor. 46, 275205 (2013)
Akemann, G., Strahov, E.: Hole probabilities and overcrowding estimates for products of complex Gaussian matrices. J. Stat. Phys. 151(6), 987–1003 (2013)
Anderson, G.: Convergence of the largest singular value of a polynomial in independent Wigner matrices. Ann. Probab. 41(3B), 2103–2181 (2013)
Anderson, G., Zeitouni, O.: CLT for a band matrix model. Probab. Theory Relat. Fields 134, 283–338 (2006)
Bai, Z.D.: Circular law. Ann. Probab. 25, 494–529 (1997)
Bai, Z.D., Silverstein, J.W.: CLT for linear spectral statistics of large-dimensional sample covariance matrices. Ann. Probab. 32, 553–605 (2004)
Bai, Z.D., Silverstein, J.: No eigenvalues outside the support of the limiting spectral distribution of large-dimensional sample covariance matrices. Ann. Probab. 26(1), 316–345 (1998)
Bai, Z.D., Silverstein, J.: Spectral Analysis of Large Dimensional Random Matrices. Mathematics Monograph Series, vol. 2. Science Press, Beijing (2006)
Bhatia, R.: Matrix Analysis. Graduate Texts in Mathematics. Springer, New York (1997)
Billingsley, P.: Probability and Measure. Wiley Series in Probability and Mathematical Statistics, 3rd edn. Wiley, New York (1995)
Billingsley, P.: Convergence of Probability Measures, 1st edn. Wiley, New York (1968)
Bordenave, C.: On the spectrum of sum and product of non-Hermitian random matrices. Electronic Commun. Probab. 16, 104–113 (2011)
Bordenave, C., Chafaï, D.: Around the circular law. Probab. Surv. 9, 1–89 (2012)
Burda, Z., Janik, R.A., Waclaw, B.: Spectrum of the product of independent random Gaussian matrices. Phys. Rev. E 81, 041132 (2010)
Burda, Z., Jarosz, A., Livan, G., Nowak, M.A., Swiech, A.: Eigenvalues and singular values of products of rectangular Gaussian random matrices. Phys. Rev. E 82, 061114 (2010)
Burda, Z., Nowak, M.A., Swiech, A.: Spectral relations between products and powers of isotropic random matrices. Phys. Rev. E 86, 061137 (2012)
Burda, Z.: Free products of large random matrices—a short review of recent developments. J. Phys. Conf. Ser. 473, 012002 (2013)
Coston, N., O’Rourke, S., Wood, P.: Outliers in the spectrum for products of independent random matrices. arXiv:1711.07420
Deng, C.Y.: A generalization of the Sherman–Morrison–Woodbury formula. Appl. Math. Lett. 24(9), 1561–1564 (2011)
Diaconis, P., Shahshahani, M.: On the eigenvalues of random matrices. J. Appl. Probab. 31A, 49–62 (1994)
Diaconis, P., Evans, S.N.: Linear functionals of eigenvalues of random matrices. Trans. Am. Math. Soc. 353(7), 2615–2633 (2001)
Edelman, A.: The probability that a random real Gaussian matrix has \(k\) real eigenvalues, related distributions, and the circular law. J. Multivar. Anal. 60, 203–232 (1997)
Forrester, P.J.: Lyapunov exponents for products of complex Gaussian random matrices. J. Stat. Phys. 151, 796–808 (2013)
Forrester, P.J.: Probability of all eigenvalues real for products of standard Gaussian matrices. J. Phys. A 47, 065202 (2014)
Ginibre, J.: Statistical ensembles of complex, quaternion, and real matrices. J. Math. Phys. 6, 440–449 (1965)
Girko, V.L.: Circular law. Theory Probab. Appl. 29, 694–706 (1984)
Girko, V.L.: The circular law. Teor. Veroyatnost. i Primenen. 29(4), 669–679 (1984)
Girko, V.L., Vladimirova, A.: L.I.F.E.: and Halloween Law. Random Operat. Stoch. Equ. 18(4), 327–353 (2010)
Götze, F., Naumov, A., Tikhomirov, A.: Local laws for non-Hermitian random matrices. Doklady Math. 96, 558–560 (2017). https://doi.org/10.1134/S1064562417060072
Götze, F., Tikhomirov, A.: The circular law for random matrices. Ann. Probab. 38(4), 1444–1491 (2010)
Götze, F., Tikhomirov, A.: On the asymptotic spectrum of products of independent random matrices. arXiv:1012.2710
Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, Cambridge (1991)
Hwang, S.: Cauchy’s interlace theorem for eigenvalues of Hermitian matrices. Am. Math. Mon. 111(2), 157–159 (2004)
Ipsen, J.R.: Products of Independent Gaussian Random Matrices. Bielefeld University, Bielefeld (2015)
Ipsen, J.R., Kieburg, M.: Weak commutation relations and eigenvalue statistics for products of rectangular random matrices. Phys. Rev. E 89, 032106 (2014)
Johansson, K.: On fluctuations of eigenvalues of random Hermitian matrices. Duke Math. J. 91, 151–204 (1998)
Kopel, P.: Linear statistics of non-Hermitian matrices matching the real or complex Ginibre ensemble to four moments. arXiv:1510.02987
Kopel, P., O’Rourke, S., Vu, V.: Random matrix products: Universality and least singular values. arXiv:1802.03004
Kuijlaars, A.B.J., Zhang, L.: Singular values of products of Ginibre random matrices, multiple orthogonal polynomials and hard edge scaling limits. Commun. Math. Phys. 332(2), 759–781 (2014)
Lytova, A., Pastur, L.: Central limit theorem for linear eigenvalue statistics of random matrices with independent entries. Ann. Probab. 37, 1778–1840 (2009)
Mehta, M.L.: Random Matrices and the Statistical Theory of Energy Levels. Academic Press, New York (1967)
Mehta, M.L.: Random Matrices, 3rd edn. Elsevier/Academic Press, Amsterdam (2004)
Nemish, Y.: No outliers in the spectrum of the product of independent non-Hermitian random matrices with independent entries. J. Theor. Probab. 31, 402 (2018)
Nemish, Y.: Local law for the product of independent non-Hermitian random matrices with independent entries. Electron. J. Probab. 22(22), 1–35 (2017)
Nourdin, I., Peccati, G.: Universal Gaussian fluctuations of non-Hermitian matrix ensembles: from weak convergence to almost sure CLTs. Lat. Am. J. Probab. Math. Stat. 7, 341–375 (2010)
O’Rourke, S., Renfrew, D.: Central limit theorem for linear eigenvalue statistics of elliptic random matrices. J. Theor. Probab. 29(3), 1121–1191 (2016)
O’Rourke, S., Renfrew, D.: Low rank perturbations of large elliptic random matrices. Electron. J. Probab. 19(43), 1–65 (2014)
O’Rourke, S., Renfrew, D., Soshnikov, A., Vu, V.: Products of independent elliptic random matrices. J. Stat. Phys. 160(1), 89–119 (2015)
O’Rourke, S., Soshnikov, A.: Products of independent non-Hermitian random matrices. Electron. J. Probab. 16(81), 2219–2245 (2011)
Pan, G., Zhou, W.: Circular law, extreme singular values and potential theory. J. Multivar. Anal. 101, 645–656 (2010)
Rider, B., Silverstein, J.W.: Gaussian fluctuations for non-Hermitian random matrix ensembles. Ann. Probab. 34, 2118–2143 (2006)
Shcherbina, M.: Central limit theorem for linear eigenvalue statistics of the Wigner and sample covariance random matrices. Zh. Mat. Fiz. Anal. Geom. 7(2), 176–192 (2011)
Sinai, Y., Soshnikov, A.: Central limit theorem for traces of large random symmetric matrices with independent matrix elements. Bol. Soc. Brasil. Mat. (N.S.) 29, 1–24 (1998)
Soshnikov, A.: The central limit theorem for local linear statistics in classical compact groups and related combinatorial identities. Ann. Probab. 28, 1353–1370 (2000)
Sosoe, P., Wong, P.: Regularity conditions in the CLT for linear eigenvalue statistics of Wigner matrices. Adv. Math. 249(20), 37–87 (2013)
Strahov, E.: Differential equations for singular values of products of Ginibre random matrices. J. Phys. A Math. Theor. 47, 325203 (2014)
Tao, T.: Outliers in the spectrum of iid matrices with bounded rank perturbations. Probab. Theory Relat. Fields 155, 231–263 (2013)
Tao, T., Vu, V.: Random matrices: the circular law. Commun. Contemp. Math. 10, 261–307 (2008)
Tao, T., Vu, V.: From the Littlewood–Offord problem to the circular law: universality of the spectral distribution of random matrices. Bull. Am. Math. Soc. (N.S.) 46(3), 377–396 (2009)
Tao, T., Vu, V.: Random matrices: universality of ESDs and the circular law. Ann. Probab. 38(5), 2023–2065 (2010)
Acknowledgements
The paper is based on a chapter from N. Coston’s doctoral thesis, and she would like to thank her thesis committee for their feedback and support. The authors would also like to thank Philip Wood for providing useful feedback on an earlier draft of the manuscript. S. O’Rourke has been supported in part by NSF grants ECCS-1610003 and DMS-1810500.
Appendices
Appendix A. Truncation Arguments
This section is devoted to the proof of Lemma 4.3.
Proof of Lemma 4.3
First, we prove property (i). Observe that
Also observe that
which implies \(\left| \mathbb {E}[\xi {\mathbf {1}_{\{|\xi |\le n^{1/2-\varepsilon }\}}}]\right| =\left| \mathbb {E}[\xi {\mathbf {1}_{\{|\xi |> n^{1/2-\varepsilon }\}}}]\right| .\) Hence
Next we move on to (ii). By construction, \(\mathbb {E}[\hat{\xi }]=0\) and \(\text {Var}(\hat{\xi })=1\) provided n is sufficiently large. By part (i),
for some constant \(C>0\) so choosing \(N_{0}>\left( \frac{4C}{3}\right) ^{1/(1+2\varepsilon )}\) ensures that \(\frac{1}{4}\le \text {Var}(\tilde{\xi })\), which gives \(2\ge \left( \text {Var}(\tilde{\xi })\right) ^{-1/2}\) for \(n>N_{0}\). With such an \(n>N_{0}\),
almost surely. For the final part of Lemma 4.3, we have
completing the proof of the claim. \(\square \)
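The cancellation used in part (i) above — for a mean-zero \(\xi \), the truncated mean \(\mathbb {E}[\xi \mathbf {1}_{\{|\xi |\le T\}}]\) and the tail mean \(\mathbb {E}[\xi \mathbf {1}_{\{|\xi |>T\}}]\) have equal magnitude — can be verified exactly for a discrete distribution. The following check is an illustration only and is not part of the proof:

```python
import numpy as np

# A discrete mean-zero random variable xi: support points and probabilities.
vals = np.array([-3.0, 0.5, 1.0])
probs = np.array([0.2, 0.4, 0.4])   # E[xi] = -0.6 + 0.2 + 0.4 = 0
T = 2.0                             # truncation level, standing in for n^(1/2 - eps)

# E[xi 1_{|xi| <= T}] and E[xi 1_{|xi| > T}], computed exactly.
below = np.sum(vals * probs * (np.abs(vals) <= T))
above = np.sum(vals * probs * (np.abs(vals) > T))

# Since E[xi] = below + above = 0, the two pieces cancel exactly,
# so |below| = |above|.
assert abs(below + above) < 1e-12
```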
Appendix B. Largest and Smallest Singular Values
In this section, we consider events concerning the largest and smallest singular values for the random matrices appearing in this paper. These results are included as an appendix because the methods used to prove them are slight modifications of those in [23, 48, 52]. In order to prove these results, we need to introduce an intermediate truncation of the matrices. Specifically, let \(\xi _{1},\xi _{2},\dots \xi _{m}\) be real-valued random variables each having mean zero, variance one, and finite \(4+\tau \) moment for some \(\tau >0\). Let \(X_{n,1},X_{n,2},\dots X_{n,m}\) be independent iid \(n\times n\) random matrices with atom random variables \(\xi _{1},\xi _{2},\dots \xi _{m}\), respectively. For a fixed \(\varepsilon >0\), and for each \(1\le k\le m\), define truncated random variables (at \(n^{1/2-\varepsilon }\)) \(\tilde{\xi }_{k}\) and \(\hat{\xi }_{k}\) as in (19). Also define truncated matrices \(\tilde{X}_{n,k}\) and \(\hat{X}_{n,k}\) as in (21) and (22), respectively. Define the linearized truncated matrix \(\mathcal {Y}_{n}\) as in (31). Also recall that \(P_{n}=n^{-m/2}X_{n,1}X_{n,2}\cdots X_{n,m}\) and \(\hat{P}_{n}=n^{-m/2}\hat{X}_{n,1}\hat{X}_{n,2}\cdots \hat{X}_{n,m}.\)
Let X be an \(n\times n\) random matrix filled with iid copies of a random variable \(\xi \) which has mean zero, unit variance, and finite \(4+\tau \) moment. For a fixed constant \(L>0\), define matrices \(\mathring{X}\) and \(\check{X}\) to be the \(n\times n\) matrices with entries defined by
and
for \(1 \le i,j \le n\). Define \(\mathring{X}_{n,1},\mathring{X}_{n,2},\dots \mathring{X}_{n,m}\) and \(\check{X}_{n,1},\check{X}_{n,2},\dots \check{X}_{n,m}\) as in (109) and (110), respectively. Finally, define the linearized truncated matrix
Lemma B.1
Fix \(\varepsilon >0\). For a fixed integer \(m>0\), let \(\xi _{1},\xi _{2},\dots \xi _{m}\) be real-valued random variables, each with mean zero, variance one, and finite \(4+\tau \) moment for some \(\tau >0\). Let \(\hat{X}_{n,1},\hat{X}_{n,2},\dots ,\hat{X}_{n,m}\) be independent iid random matrices with atom variables as defined in (22), and define \(\mathcal {Y}_{n}\) as in (31). For every \(\delta >0\), there exists a constant \(c>0\) depending only on \(\delta \) such that
with overwhelming probability.
Proof
Fix \(\delta >0\) and define \(\check{\mathcal {Y}}_{n}\) as in (111). By [23, Lemma 8.1], which is based on techniques in [48, 49], we know that there exists a constant \(c'>0\) which depends only on \(\delta \) such that \(\inf _{|z|> 1+\delta /2}s_{mn}\left( \check{\mathcal {Y}}_{n}-zI\right) \ge c'\) with overwhelming probability. Note that by Weyl’s inequality (13),
Focusing on an arbitrary value of k, we have
for any \(1\le i,j\le n\). Observe that
By [23, Lemma 7.1], \(\left( \text {Var}((\mathring{X}_{n,k})_{(i,j)})\right) ^{-1/2}\le 2\) for L sufficiently large. Additionally, an argument similar to that of [23, Lemma 7.1] shows that \(\left| 1-\sqrt{\text {Var}((\mathring{X}_{n,k})_{(i,j)})}\right| \le \frac{C}{L^{2}}\) for any \(1\le i,j\le n\) and some constant \(C>0\). Therefore, by [62, Theorem 1.4], for L sufficiently large,
with overwhelming probability. Similarly,
By the arguments to prove part (ii) of Lemma 4.3, \(\left( \text {Var}((\tilde{X}_{n,k})_{(i,j)})\right) ^{-1/2}\le 2\) for n sufficiently large. Also, by part (i) of Lemma 4.3, we can show that \(\left| 1-\sqrt{\text {Var}((\tilde{X}_{n,k})_{(i,j)})}\right| =o(n^{-1+2\varepsilon })\). Therefore, by [13, Theorem 5.9],
with overwhelming probability. Ergo, by the triangle inequality, for L sufficiently large,
with overwhelming probability.
Now, recall that the entries of \(\mathring{X}_{n,k}\) are truncated at level \(L\) for a fixed \(L>0\), so for sufficiently large n, \(L\le n^{1/2-\varepsilon }\). Note that if all entries are less than \(L\) in absolute value, then the entries in \(\mathring{X}_{n,k}\) and \(\tilde{X}_{n,k}\) agree. Similarly, if all entries are greater than \(n^{1/2-\varepsilon }\) in absolute value, then the entries in \(\mathring{X}_{n,k}\) and \(\tilde{X}_{n,k}\) agree. Ergo, we need only consider the case when there exist entries \(1\le i,j\le n\) such that \(L\le |(\tilde{X}_{n,k})_{i,j}|\le n^{1/2-\varepsilon }\). For each \(1\le k\le m\), define the random variables
and define \(\dot{X}_{n,k}\) to be the matrix with entries
for \(1\le i,j \le n\). Note that the definitions of \(\dot{\xi }\) and \(\dot{X}_{n,k}\) differ from the definitions in Sect. 4. We will use the definition given in this appendix for the remainder of this proof. We can write
By [13, Lemma 5.9], for L sufficiently large
with overwhelming probability. Thus, by choosing L large enough to satisfy both conditions, by (113) and (114),
with overwhelming probability. By recalling (112), this implies that, for L sufficiently large,
with overwhelming probability where \(c=\frac{c'}{2}\). \(\square \)
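Weyl's inequality (13), invoked repeatedly above to transfer perturbation bounds to singular values, states that \(\max _{i}|s_{i}(A+E)-s_{i}(A)|\le \Vert E\Vert \). A quick numerical illustration on arbitrary matrices (not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
E = 0.1 * rng.standard_normal((n, n))   # a perturbation

s_A = np.linalg.svd(A, compute_uv=False)        # singular values, descending
s_AE = np.linalg.svd(A + E, compute_uv=False)
op_norm = np.linalg.norm(E, ord=2)              # spectral norm ||E||

# Weyl: every singular value moves by at most the operator norm of E.
assert np.max(np.abs(s_A - s_AE)) <= op_norm + 1e-12
```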
Lemma B.2
Fix \(\varepsilon >0\). For a fixed integer \(m>0\), let \(\xi _{1},\xi _{2},\dots \xi _{m}\) be real-valued random variables, each with mean zero, variance one, and finite \(4+\tau \) moment for some \(\tau >0\). Let \({X}_{n,1},{X}_{n,2},\dots ,{X}_{n,m}\) be independent iid random matrices with atom variables \(\xi _{1},\xi _{2},\dots ,\xi _{m}\), respectively. Define \(\hat{X}_{n,1},\hat{X}_{n,2},\dots \hat{X}_{n,m}\) as in (22), and define \(\hat{P}_{n}\) as in (24). For any \(\delta >0\), there exists a constant \(c>0\) depending only on \(\delta \) such that
with overwhelming probability.
Proof
Fix \(\delta >0\). By Lemma B.1, we know that there exists some \(c'>0\) such that \(\inf _{|z|>1+\delta /2}s_{mn}\left( \mathcal {Y}_{n}-zI\right) \ge c'\) with overwhelming probability. Recall that \(s_{mn}\left( \mathcal {Y}_{n}-zI\right) =\left( s_{1}\left( \left( \mathcal {Y}_{n}-zI\right) ^{-1}\right) \right) ^{-1}\) provided z is not an eigenvalue of \(\mathcal {Y}_{n}\). A block inverse matrix calculation reveals that
where the notation \(A^{[1,1]}\) denotes the upper left \(n\times n\) block of A. Therefore,
This implies that there exists a constant \(c>0\) such that
with overwhelming probability. This gives \(\inf _{|z| > 1+\delta /2}s_{n}\left( \hat{P}_{n}-zI\right) \ge c\) with overwhelming probability. \(\square \)
Lemma B.3
For a fixed integer \(m>0\), let \(\xi _{1},\xi _{2},\dots \xi _{m}\) be real-valued random variables each satisfying Assumption 2.1. Fix \(\delta >0\) and let \(X_{n,1},X_{n,2},\dots X_{n,m}\) be independent iid random matrices with atom variables \(\xi _{1},\xi _{2},\dots \xi _{m}\), respectively. Then there exists a constant \(c>0\) depending only on \(\delta \) such that
with probability \(1-o(1)\) where \(\sigma = \sigma _{1}\cdots \sigma _{m}\).
Proof
By a simple rescaling, it is sufficient to assume that the variance of each random variable is 1 so that \(\sigma =1\). Let \(\delta >0\) and recall by Lemma B.2 there exists a \(c'>0\) depending only on \(\delta \) such that \(\inf _{|z|> 1+\delta /2}s_{n}\left( \hat{P}_{n}-zI\right) \ge c'\) with overwhelming probability. Then by Lemma 4.10,
Suppose that there exists a \(z_{0}\in \mathbb {C}\) with \(|z_{0}|\ge 1+\delta /2\) such that \(s_{n}\left( P_{n}-z_{0}I\right) <\frac{c'}{2}\) and \(\left\| P_{n}-\hat{P}_{n}\right\|<n^{-\varepsilon }<\frac{c'}{2}\). Then, by Weyl’s inequality (13), \(\Big |s_{n}(P_{n}-z_{0}I)-s_{n}(\hat{P}_{n}-z_{0}I)\Big |<\frac{c'}{2}\) which implies \(s_{n}(\hat{P}_{n}-z_{0}I)<c'\). Thus, for n sufficiently large to ensure that \(n^{-\varepsilon }<\frac{c'}{2}\), by Lemma 4.10
Thus, selecting \(c=\frac{c'}{2}\), we have \(\inf _{|z|> 1+\delta /2}s_{n}\left( P_{n}-zI\right) \ge c\) with probability \(1-o(1)\). \(\square \)
Lemma B.4
Let A be an \(n\times n\) matrix. Let R be a subset of the integer set \(\{1,2,\dots ,n\}\). Let \(A^{(R)}\) denote the matrix A, but with the \(r\)th column replaced by zeros for each \(r\in R\). Then
Proof
Let \(A^{((R))}\) denote the matrix A with column r removed for all \(r\in R\). Note that \(A^{((R))}\) is an \(n\times (n-|R|)\) matrix, which is distinct from the \(n\times n\) matrix \(A^{(R)}\). Also, let \(I^{((R))}\) denote the \(n\times n\) identity matrix with column r removed for all \(r\in R\). In order to bound the least singular value of \((A^{(R)}-zI)\), we will consider the eigenvalues of \(\left( A-zI\right) ^{*}\left( A-zI\right) \), \(\left( A^{(R)}-zI\right) ^{*}\left( A^{(R)}-zI\right) \), and \(\left( A^{((R))}-zI^{((R))}\right) ^{*}\left( A^{((R))}-zI^{((R))}\right) \).
Now, observe that \(\left( A^{((R))}-zI^{((R))}\right) ^{*}\left( A^{((R))}-zI^{((R))}\right) \) is an \((n-|R|)\times (n-|R|)\) matrix, and is a principal submatrix of the Hermitian matrix \((A-zI)^{*}(A-zI)\). Therefore, the eigenvalues of \(\left( A^{((R))}-zI^{((R))}\right) ^{*}\left( A^{((R))}-zI^{((R))}\right) \) must interlace with the eigenvalues of \(\left( A-zI\right) ^{*}\left( A-zI\right) \) by Cauchy’s interlacing theorem [38, Theorem 1]. This implies
Next, we compare the eigenvalues of \(\left( A^{(R)}-zI\right) ^{*}\left( A^{(R)}-zI\right) \) to the eigenvalues of \(\left( A^{((R))}-zI^{((R))}\right) ^{*}\left( A^{((R))}-zI^{((R))}\right) \). Note that, after a possible permutation of columns to move all zero columns of \(A^{(R)}\) to be in the last |R| columns, the product \(\left( A^{(R)}-zI\right) ^{*}\left( A^{(R)}-zI\right) \) becomes
Due to the block structure of the matrix above, if w is an eigenvalue of \(\left( A^{(R)}-zI\right) ^{*}\left( A^{(R)}-zI\right) \), then either w is an eigenvalue of \(\left( A^{((R))}-zI^{((R))}\right) ^{*}\left( A^{((R))}-zI^{((R))}\right) \) or \(w=|z|^{2}\). Ergo,
which implies \(s_{n}\left( A^{(R)}-zI\right) \ge \min \left\{ s_{n}\left( A-zI\right) ,\;|z|\right\} \) concluding the proof. \(\square \)
This lemma yields the following two corollaries.
Corollary B.5
Fix \(\varepsilon >0\). For a fixed integer \(m>0\), let \(\xi _{1},\xi _{2},\dots \xi _{m}\) be real-valued random variables, each with mean zero, variance one, and finite \(4+\tau \) moment for some \(\tau >0\). Let \({X}_{n,1},{X}_{n,2},\dots ,{X}_{n,m}\) be independent iid random matrices with atom variables \(\xi _{1},\xi _{2},\dots ,\xi _{m}\), respectively, and define \(\hat{X}_{n,1},\hat{X}_{n,2},\dots \hat{X}_{n,m}\) as in (22). Define \(\mathcal {Y}_{n}\) as in (31) and \(\mathcal {Y}_{n}^{(k)}\) as \(\mathcal {Y}_{n}\) with the columns \(c_{k},c_{n+k},c_{2n+k},\dots ,c_{(m-1)n+k}\) replaced with zeros. For any \(\delta >0\), there exists a constant \(c>0\) depending only on \(\delta \) such that
with overwhelming probability.
Proof
Note that by Lemmas B.1 and B.4,
with overwhelming probability for some constant \(c'>0\) depending only on \(\delta \). The result follows by setting \(c=\min \left\{ c',\;1\right\} \). \(\square \)
Corollary B.6
Fix \(\varepsilon >0\). For a fixed integer \(m>0\), let \(\xi _{1},\xi _{2},\dots \xi _{m}\) be real-valued random variables, each with mean zero, variance one, and finite \(4+\tau \) moment for some \(\tau >0\). Let \(\hat{X}_{n,1},\hat{X}_{n,2},\dots ,\hat{X}_{n,m}\) be independent iid random matrices with atom variables as defined in (22). Define \(\mathcal {Y}_{n}\) as in (31) and \(\mathcal {Y}_{n}^{(k,s)}\) as \(\mathcal {Y}_{n}\) with the columns \(c_{k},c_{n+k},c_{2n+k},\dots ,c_{(m-1)n+k}\) and \(c_{s}\) replaced with zeros. For any \(\delta >0\), there exists a constant \(c>0\) depending only on \(\delta \) such that
with overwhelming probability.
The proof of Corollary B.6 follows in exactly the same way as the proof of Corollary B.5.
Appendix C. Useful Lemmas
Lemma C.1
(Lemma 2.7 from [12]). For \(X = (x_{1},x_{2},\ldots ,x_{N})^{T}\) with iid standardized complex entries and B an \(N\times N\) complex matrix, we have, for any \(p\ge 2\),
where the constant \(K_{p}>0\) depends only on p.
Lemma C.2
Let A be an \(N\times N\) complex-valued matrix. Suppose that \(\xi \) is a complex-valued random variable with mean zero and unit variance. Let \(S\subseteq [N]\), and let \(w=(w_{i})_{i=1}^{N}\) be a vector with the following properties:
-
(i)
\(\{w_i : i \in S \}\) is a collection of iid copies of \(\xi \),
-
(ii)
\(w_{i}=0\) for \(i\not \in S\).
Additionally, let \(A_{S\times S}\) denote the \(|S|\times |S|\) matrix which has entries \(A_{(i,j)}\) for \(i,j\in S\). Then for any even \(p\ge 2\),
Proof
Let \(w_{S}\) denote the |S|-vector which contains entries \(w_{i}\) for \(i\in S\) and observe
Therefore, by Lemma C.1, for any even \(p\ge 2\),
Now observe that
Therefore,
\(\square \)
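The identity underlying the centering in Lemma C.2, \(\mathbb {E}[w^{*}Aw]={{\,\mathrm{tr}\,}}(A_{S\times S})\), can be verified exactly by averaging over all Rademacher sign vectors supported on S. This check (with a real-valued \(\xi \)) is an illustration only and is not part of the proof:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N = 4
A = rng.standard_normal((N, N))
S = [0, 2]                           # support of the vector w

# Average w^T A w over all Rademacher (+/-1) sign vectors supported on S;
# this computes E[w^T A w] exactly when xi is Rademacher.
total = 0.0
for signs in product([-1.0, 1.0], repeat=len(S)):
    w = np.zeros(N)
    w[S] = signs
    total += w @ A @ w
avg = total / 2 ** len(S)

# The expectation equals the trace of the S x S submatrix of A.
assert np.isclose(avg, A[np.ix_(S, S)].trace())
```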
Lemma C.3
(Lemma A.1 from [12]). For \(X = (x_{1},x_{2},\ldots ,x_{N})^{T}\) with iid standardized complex entries and B an \(N\times N\) complex-valued Hermitian nonnegative definite matrix, we have, for any \(p\ge 1\),
where \(K_{p}>0\) depends only on p.
Lemma C.4
Let A be an \(N\times N\) Hermitian positive semidefinite matrix. Suppose that \(\xi \) is a complex-valued random variable with mean zero and unit variance. Let \(S\subseteq [N]\), and let \(w = (w_i)_{i=1}^N\) be a vector with the following properties:
-
(i)
\(\{w_i : i \in S \}\) is a collection of iid copies of \(\xi \),
-
(ii)
\(w_{i}=0\) for \(i\not \in S\).
Then for any \(p\ge 2\),
Proof
Let \(w_{S}\) denote the |S|-vector which contains entries \(w_{i}\) for \(i\in S\), and let \(A_{S\times S}\) denote the \(|S|\times |S|\) matrix which has entries \(A_{(i,j)}\) for \(i,j\in S\). Then we have
By Lemma C.3, we get
Since A is nonnegative definite, so is its principal submatrix \(A_{S\times S}\), whence \({{\,\mathrm{tr}\,}}(A_{S\times S}^{p})\le ({{\,\mathrm{tr}\,}}(A_{S\times S}))^{p}\). By this and the fact that, for a Hermitian positive semidefinite matrix, the partial trace is at most the full trace, we observe that
\(\square \)
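The trace step used above — for a positive semidefinite matrix A and integer \(p\ge 1\), \({{\,\mathrm{tr}\,}}(A^{p})=\sum _{i}\lambda _{i}^{p}\le \left( \sum _{i}\lambda _{i}\right) ^{p}=({{\,\mathrm{tr}\,}}A)^{p}\) since all \(\lambda _{i}\ge 0\) — checked on a random instance (illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 5, 4
G = rng.standard_normal((N, N))
A = G @ G.T                          # symmetric positive semidefinite

lhs = np.trace(np.linalg.matrix_power(A, p))
rhs = np.trace(A) ** p
assert lhs <= rhs + 1e-8             # tr(A^p) <= (tr A)^p for PSD A
```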
Lemma C.5
Let A and B be \(n\times n\) matrices. Then
Proof
This follows by an application of the Cauchy–Schwarz inequality and an application of [13, Theorem A.10]. \(\square \)
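The displayed bound of Lemma C.5 is not reproduced in this excerpt. Consistent with the proof sketch (Cauchy–Schwarz together with a trace inequality such as [13, Theorem A.10]), a natural bound of this type is \(|{{\,\mathrm{tr}\,}}(AB)|\le \sqrt{{{\,\mathrm{tr}\,}}(AA^{*})\,{{\,\mathrm{tr}\,}}(BB^{*})}\); the exact statement is an assumption here, and the check below is an illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Cauchy-Schwarz for the trace pairing: |tr(AB)| <= sqrt(tr(AA*) tr(BB*)).
lhs = abs(np.trace(A @ B))
rhs = np.sqrt(np.trace(A @ A.conj().T).real * np.trace(B @ B.conj().T).real)
assert lhs <= rhs + 1e-10
```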
Cite this article
Coston, N., O’Rourke, S. Gaussian Fluctuations for Linear Eigenvalue Statistics of Products of Independent iid Random Matrices. J Theor Probab 33, 1541–1612 (2020). https://doi.org/10.1007/s10959-019-00905-0
Keywords
- Random matrices
- Linear eigenvalue statistics
- Non-Hermitian random matrices
- iid random matrices
- Product matrices