
Gaussian Fluctuations for Linear Eigenvalue Statistics of Products of Independent iid Random Matrices

Published in: Journal of Theoretical Probability

Abstract

Consider the product \(X = X_{1}\cdots X_{m}\) of m independent \(n\times n\) iid random matrices. When m is fixed and the dimension n tends to infinity, we prove Gaussian limits for the centered linear spectral statistics of X for analytic test functions. We show that the limiting variance is universal in the sense that it does not depend on m (the number of factor matrices) or on the distribution of the entries of the matrices. The main result generalizes and improves upon previous limit statements for the linear spectral statistics of a single iid matrix by Rider and Silverstein as well as Renfrew and the second author.
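As an illustration of the main statement, the following Monte Carlo sketch (not taken from the paper; the Gaussian atom distribution, the test function \(f(z)=z^{2}\), and all sizes are arbitrary illustrative choices) samples the centered linear spectral statistic of the scaled product \(n^{-m/2}X_{1}\cdots X_{m}\) for \(m=1\) and \(m=2\) and compares the empirical fluctuations:

```python
import numpy as np

rng = np.random.default_rng(0)

def centered_linear_statistic(n, m, f, reps):
    """Sample sum_i f(lambda_i(P)) for P = n^{-m/2} X_1 ... X_m with iid
    standard Gaussian entries, then subtract the empirical mean."""
    stats = np.empty(reps)
    for r in range(reps):
        P = np.eye(n)
        for _ in range(m):
            P = P @ (rng.standard_normal((n, n)) / np.sqrt(n))
        stats[r] = np.sum(f(np.linalg.eigvals(P))).real
    return stats - stats.mean()

# f(z) = z^2 is analytic; by the universality statement the limiting
# variance should not depend on the number of factor matrices m.
s1 = centered_linear_statistic(120, 1, lambda z: z**2, 120)
s2 = centered_linear_statistic(120, 2, lambda z: z**2, 120)
print(np.var(s1), np.var(s2))
```

With the \(n^{-m/2}\) scaling, both empirical variances should stabilize near the same m-independent limit as n grows, in line with the universality claim.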


References

  1. Adhikari, K., Kishore Reddy, N., Ram Reddy, T., Saha, K.: Determinantal point processes in the plane from products of random matrices. Ann. Inst. Henri Poincaré Probab. Stat. 52(1), 16–46 (2016)

  2. Akemann, G., Burda, Z.: Universal microscopic correlation functions for products of independent Ginibre matrices. J. Phys. A Math. Theor. 45, 465201 (2012)

  3. Akemann, G., Burda, Z., Kieburg, M.: Universal distribution of Lyapunov exponents for products of Ginibre matrices. J. Phys. A Math. Theor. 47, 395202 (2014)

  4. Akemann, G., Ipsen, J.R., Kieburg, M.: Products of rectangular random matrices: singular values and progressive scattering. Phys. Rev. E 88, 052118 (2013)

  5. Akemann, G., Ipsen, J.R., Strahov, E.: Permanental processes from products of complex and quaternionic induced Ginibre ensembles. Random Matrices Theory Appl. 3(4), 1450014 (2014)

  6. Akemann, G., Kieburg, M., Wei, L.: Singular value correlation functions for products of Wishart random matrices. J. Phys. A Math. Theor. 46, 275205 (2013)

  7. Akemann, G., Strahov, E.: Hole probabilities and overcrowding estimates for products of complex Gaussian matrices. J. Stat. Phys. 151(6), 987–1003 (2013)

  8. Anderson, G.: Convergence of the largest singular value of a polynomial in independent Wigner matrices. Ann. Probab. 41(3B), 2103–2181 (2013)

  9. Anderson, G., Zeitouni, O.: CLT for a band matrix model. Probab. Theory Relat. Fields 134, 283–338 (2006)

  10. Bai, Z.D.: Circular law. Ann. Probab. 25, 494–529 (1997)

  11. Bai, Z.D., Silverstein, J.W.: CLT for linear spectral statistics of large-dimensional sample covariance matrices. Ann. Probab. 32, 553–605 (2004)

  12. Bai, Z.D., Silverstein, J.: No eigenvalues outside the support of the limiting spectral distribution of large-dimensional sample covariance matrices. Ann. Probab. 26(1), 316–345 (1998)

  13. Bai, Z.D., Silverstein, J.: Spectral Analysis of Large Dimensional Random Matrices. Mathematics Monograph Series, vol. 2. Science Press, Beijing (2006)

  14. Bhatia, R.: Matrix Analysis. Graduate Texts in Mathematics. Springer, New York (1997)

  15. Billingsley, P.: Probability and Measure. Wiley Series in Probability and Mathematical Statistics, 3rd edn. Wiley, New York (1995)

  16. Billingsley, P.: Convergence of Probability Measures, 1st edn. Wiley, New York (1968)

  17. Bordenave, C.: On the spectrum of sum and product of non-Hermitian random matrices. Electron. Commun. Probab. 16, 104–113 (2011)

  18. Bordenave, C., Chafaï, D.: Around the circular law. Probab. Surv. 9, 1–89 (2012)

  19. Burda, Z., Janik, R.A., Waclaw, B.: Spectrum of the product of independent random Gaussian matrices. Phys. Rev. E 81, 041132 (2010)

  20. Burda, Z., Jarosz, A., Livan, G., Nowak, M.A., Swiech, A.: Eigenvalues and singular values of products of rectangular Gaussian random matrices. Phys. Rev. E 82, 061114 (2010)

  21. Burda, Z., Nowak, M.A., Swiech, A.: Spectral relations between products and powers of isotropic random matrices. Phys. Rev. E 86, 061137 (2012)

  22. Burda, Z.: Free products of large random matrices—a short review of recent developments. J. Phys. Conf. Ser. 473, 012002 (2013)

  23. Coston, N., O’Rourke, S., Wood, P.: Outliers in the spectrum for products of independent random matrices. arXiv:1711.07420

  24. Deng, C.Y.: A generalization of the Sherman–Morrison–Woodbury formula. Appl. Math. Lett. 24(9), 1561–1564 (2011)

  25. Diaconis, P., Shahshahani, M.: On the eigenvalues of random matrices. J. Appl. Probab. 31A, 49–62 (1994)

  26. Diaconis, P., Evans, S.N.: Linear functionals of eigenvalues of random matrices. Trans. Am. Math. Soc. 353(7), 2615–2633 (2001)

  27. Edelman, A.: The probability that a random real Gaussian matrix has \(k\) real eigenvalues, related distributions, and the circular law. J. Multivar. Anal. 60, 203–232 (1997)

  28. Forrester, P.J.: Lyapunov exponents for products of complex Gaussian random matrices. J. Stat. Phys. 151, 796–808 (2013)

  29. Forrester, P.J.: Probability of all eigenvalues real for products of standard Gaussian matrices. J. Phys. A 47, 065202 (2014)

  30. Ginibre, J.: Statistical ensembles of complex, quaternion, and real matrices. J. Math. Phys. 6, 440–449 (1965)

  31. Girko, V.L.: Circular law. Theory Probab. Appl. 29, 694–706 (1984)

  32. Girko, V.L.: The circular law. Teor. Veroyatnost. i Primenen. 29(4), 669–679 (1984)

  33. Girko, V.L., Vladimirova, A.: L.I.F.E. and Halloween Law. Random Oper. Stoch. Equ. 18(4), 327–353 (2010)

  34. Götze, F., Naumov, A., Tikhomirov, A.: Local laws for non-Hermitian random matrices. Doklady Math. 96, 558–560 (2017). https://doi.org/10.1134/S1064562417060072

  35. Götze, F., Tikhomirov, A.: The circular law for random matrices. Ann. Probab. 38(4), 1444–1491 (2010)

  36. Götze, F., Tikhomirov, A.: On the asymptotic spectrum of products of independent random matrices. arXiv:1012.2710

  37. Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, Cambridge (1991)

  38. Hwang, S.: Cauchy’s interlace theorem for eigenvalues of Hermitian matrices. Am. Math. Mon. 111(2), 157–159 (2004)

  39. Ipsen, J.R.: Products of Independent Gaussian Random Matrices. Bielefeld University, Bielefeld (2015)

  40. Ipsen, J.R., Kieburg, M.: Weak commutation relations and eigenvalue statistics for products of rectangular random matrices. Phys. Rev. E 89, 032106 (2014)

  41. Johansson, K.: On fluctuations of eigenvalues of random Hermitian matrices. Duke Math. J. 91, 151–204 (1998)

  42. Kopel, P.: Linear statistics of non-Hermitian matrices matching the real or complex Ginibre ensemble to four moments. arXiv:1510.02987

  43. Kopel, P., O’Rourke, S., Vu, V.: Random matrix products: universality and least singular values. arXiv:1802.03004

  44. Kuijlaars, A.B.J., Zhang, L.: Singular values of products of Ginibre random matrices, multiple orthogonal polynomials and hard edge scaling limits. Commun. Math. Phys. 332(2), 759–781 (2014)

  45. Lytova, A., Pastur, L.: Central limit theorem for linear eigenvalue statistics of random matrices with independent entries. Ann. Probab. 37, 1778–1840 (2009)

  46. Mehta, M.L.: Random Matrices and the Statistical Theory of Energy Levels. Academic Press, New York (1967)

  47. Mehta, M.L.: Random Matrices, 3rd edn. Elsevier/Academic Press, Amsterdam (2004)

  48. Nemish, Y.: No outliers in the spectrum of the product of independent non-Hermitian random matrices with independent entries. J. Theor. Probab. 31, 402 (2018)

  49. Nemish, Y.: Local law for the product of independent non-Hermitian random matrices with independent entries. Electron. J. Probab. 22(22), 1–35 (2017)

  50. Nourdin, I., Peccati, G.: Universal Gaussian fluctuations of non-Hermitian matrix ensembles: from weak convergence to almost sure CLTs. Lat. Am. J. Probab. Math. Stat. 7, 341–375 (2010)

  51. O’Rourke, S., Renfrew, D.: Central limit theorem for linear eigenvalue statistics of elliptic random matrices. J. Theor. Probab. 29(3), 1121–1191 (2016)

  52. O’Rourke, S., Renfrew, D.: Low rank perturbations of large elliptic random matrices. Electron. J. Probab. 19(43), 1–65 (2014)

  53. O’Rourke, S., Renfrew, D., Soshnikov, A., Vu, V.: Products of independent elliptic random matrices. J. Stat. Phys. 160(1), 89–119 (2015)

  54. O’Rourke, S., Soshnikov, A.: Products of independent non-Hermitian random matrices. Electron. J. Probab. 16(81), 2219–2245 (2011)

  55. Pan, G., Zhou, W.: Circular law, extreme singular values and potential theory. J. Multivar. Anal. 101, 645–656 (2010)

  56. Rider, B., Silverstein, J.W.: Gaussian fluctuations for non-Hermitian random matrix ensembles. Ann. Probab. 34, 2118–2143 (2006)

  57. Shcherbina, M.: Central limit theorem for linear eigenvalue statistics of the Wigner and sample covariance random matrices. Zh. Mat. Fiz. Anal. Geom. 7(2), 176–192 (2011)

  58. Sinai, Y., Soshnikov, A.: Central limit theorem for traces of large random symmetric matrices with independent matrix elements. Bol. Soc. Brasil. Mat. (N.S.) 29, 1–24 (1998)

  59. Soshnikov, A.: The central limit theorem for local linear statistics in classical compact groups and related combinatorial identities. Ann. Probab. 28, 1353–1370 (2000)

  60. Sosoe, P., Wong, P.: Regularity conditions in the CLT for linear eigenvalue statistics of Wigner matrices. Adv. Math. 249(20), 37–87 (2013)

  61. Strahov, E.: Differential equations for singular values of products of Ginibre random matrices. J. Phys. A Math. Theor. 47, 325203 (2014)

  62. Tao, T.: Outliers in the spectrum of iid matrices with bounded rank perturbations. Probab. Theory Relat. Fields 155, 231–263 (2013)

  63. Tao, T., Vu, V.: Random matrices: the circular law. Commun. Contemp. Math. 10, 261–307 (2008)

  64. Tao, T., Vu, V.: From the Littlewood–Offord problem to the circular law: universality of the spectral distribution of random matrices. Bull. Am. Math. Soc. (N.S.) 46(3), 377–396 (2009)

  65. Tao, T., Vu, V.: Random matrices: universality of ESDs and the circular law. Ann. Probab. 38(5), 2023–2065 (2010)


Acknowledgements

The paper is based on a chapter from N. Coston’s doctoral thesis, and she would like to thank her thesis committee for their feedback and support. The authors would also like to thank Philip Wood for providing useful feedback on an earlier draft of the manuscript. S. O’Rourke has been supported in part by NSF grants ECCS-1610003 and DMS-1810500.

Author information

Correspondence to Sean O’Rourke.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A. Truncation Arguments

This section is devoted to the proof of Lemma 4.3.

Proof of Lemma 4.3

First, we prove property (i). Observe that

$$\begin{aligned} 1&=\text {Var}(\xi )\\&=\mathbb {E}[\xi ^{2}{\mathbf {1}_{\{|\xi |\le n^{1/2-\varepsilon }\}}}]+\mathbb {E}[\xi ^{2}{\mathbf {1}_{\{|\xi |> n^{1/2-\varepsilon }\}}}]\\&=\text {Var}(\tilde{\xi })+\left( \mathbb {E}[\xi {\mathbf {1}_{\{|\xi |\le n^{1/2-\varepsilon }\}}}]\right) ^{2}+\mathbb {E}[\xi ^{2}{\mathbf {1}_{\{|\xi |> n^{1/2-\varepsilon }\}}}]. \end{aligned}$$

Also observe that

$$\begin{aligned} 0=\mathbb {E}[\xi ]=\mathbb {E}[\xi {\mathbf {1}_{\{|\xi |\le n^{1/2-\varepsilon }\}}}]+\mathbb {E}[\xi {\mathbf {1}_{\{|\xi |> n^{1/2-\varepsilon }\}}}] \end{aligned}$$

which implies \(\left| \mathbb {E}[\xi {\mathbf {1}_{\{|\xi |\le n^{1/2-\varepsilon }\}}}]\right| =\left| \mathbb {E}[\xi {\mathbf {1}_{\{|\xi |> n^{1/2-\varepsilon }\}}}]\right| .\) Hence

$$\begin{aligned} |1-\text {Var}(\tilde{\xi })|&=\left( \mathbb {E}[\xi {\mathbf {1}_{\{|\xi |\le n^{1/2-\varepsilon }\}}}]\right) ^{2}+\mathbb {E}[\xi ^{2}{\mathbf {1}_{\{|\xi |> n^{1/2-\varepsilon }\}}}]\\&=\left| \mathbb {E}[\xi {\mathbf {1}_{\{|\xi |> n^{1/2-\varepsilon }\}}}]\right| ^{2}+\mathbb {E}[\xi ^{2}{\mathbf {1}_{\{|\xi |> n^{1/2-\varepsilon }\}}}]\\&\le 2\mathbb {E}\left[ \frac{|\xi |^{4}}{n^{1-2\varepsilon }}{\mathbf {1}_{\{|\xi |> n^{1/2-\varepsilon }\}}}\right] \\&=o(n^{-1+2\varepsilon }). \end{aligned}$$

Next, we move on to (ii). By construction, \(\mathbb {E}[\hat{\xi }]=0\) and \(\text {Var}(\hat{\xi })=1\) provided n is sufficiently large. By part (i),

$$\begin{aligned} 1-\frac{C}{n^{1-2\varepsilon }}\le {{\,\mathrm{Var}\,}}(\tilde{\xi }) \end{aligned}$$

for some constant \(C>0\), so choosing \(N_{0}>\left( \frac{4C}{3}\right) ^{1/(1-2\varepsilon )}\) ensures that \(\frac{1}{4}\le \text {Var}(\tilde{\xi })\), which gives \(\left( \text {Var}(\tilde{\xi })\right) ^{-1/2}\le 2\) for \(n>N_{0}\). With such an \(n>N_{0}\),

$$\begin{aligned} \left| \hat{\xi }\right|&=\left| \frac{\xi {\mathbf {1}_{\{|\xi |\le n^{1/2-\varepsilon }\}}}-\mathbb {E}\left[ \xi {\mathbf {1}_{\{|\xi |\le n^{1/2-\varepsilon }\}}}\right] }{\sqrt{\text {Var}(\tilde{\xi })}}\right| \\&\le 2\left| \xi {\mathbf {1}_{\{|\xi |\le n^{1/2-\varepsilon }\}}}\right| +2\left| \mathbb {E}\left[ \xi {\mathbf {1}_{\{|\xi |\le n^{1/2-\varepsilon }\}}}\right] \right| \\&\le 4n^{1/2-\varepsilon } \end{aligned}$$

almost surely. For the final part of Lemma 4.3, we have

$$\begin{aligned} \mathbb {E}|\hat{\xi }|^{4}&=\mathbb {E}\left| \frac{\xi {\mathbf {1}_{\{|\xi |\le n^{1/2-\varepsilon }\}}}-\mathbb {E}\left[ \xi {\mathbf {1}_{\{|\xi |\le n^{1/2-\varepsilon }\}}}\right] }{\sqrt{\text {Var}(\tilde{\xi })}}\right| ^{4}\\&\le 2^{4}\mathbb {E}\left| \xi {\mathbf {1}_{\{|\xi |\le n^{1/2-\varepsilon }\}}}-\mathbb {E}\left[ \xi {\mathbf {1}_{\{|\xi |\le n^{1/2-\varepsilon }\}}}\right] \right| ^{4}\\&\le 2^{8}\mathbb {E}\left| \xi \right| ^{4}, \end{aligned}$$

completing the proof of the claim. \(\square \)
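The identity at the start of the proof of part (i), namely \(1-\text {Var}(\tilde{\xi })=\left( \mathbb {E}[\xi {\mathbf {1}_{\{|\xi |\le T\}}}]\right) ^{2}+\mathbb {E}[\xi ^{2}{\mathbf {1}_{\{|\xi |> T\}}}]\) for any truncation level T and any \(\xi \) with unit variance, can be checked numerically. A minimal sketch (the Student-t sample and the level T are arbitrary stand-ins for \(\xi \) and \(n^{1/2-\varepsilon }\)):

```python
import numpy as np

rng = np.random.default_rng(1)

# Empirical distribution: a finite sample standardized to have
# mean 0 and variance 1 exactly, playing the role of xi.
x = rng.standard_t(df=9, size=10_000)
x = (x - x.mean()) / x.std()

T = 2.5                      # truncation level, stand-in for n^{1/2-eps}
low = np.abs(x) <= T
var_trunc = np.var(x * low)  # Var(tilde xi) under the empirical measure

# Identity from the proof of Lemma 4.3(i):
# 1 - Var(tilde xi) = (E[xi 1_{|xi|<=T}])^2 + E[xi^2 1_{|xi|>T}]
rhs = np.mean(x * low) ** 2 + np.mean((x ** 2) * (~low))
print(1 - var_trunc, rhs)
```

Both printed quantities agree up to floating-point error, since the identity only uses \(\mathbb {E}[\xi ^{2}]=1\).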

Appendix B. Largest and Smallest Singular Values

In this section, we consider events concerning the largest and smallest singular values of the random matrices appearing in this paper. These results are included as an appendix because the methods used to prove them are slight modifications of those in [23, 48, 52]. In order to prove these results, we need to introduce an intermediate truncation of the matrices. Specifically, let \(\xi _{1},\xi _{2},\dots ,\xi _{m}\) be real-valued random variables, each having mean zero, variance one, and a finite \(4+\tau \) moment for some \(\tau >0\). Let \(X_{n,1},X_{n,2},\dots ,X_{n,m}\) be independent iid \(n\times n\) random matrices with atom random variables \(\xi _{1},\xi _{2},\dots ,\xi _{m}\), respectively. For a fixed \(\varepsilon >0\) and for each \(1\le k\le m\), define truncated random variables (at \(n^{1/2-\varepsilon }\)) \(\tilde{\xi }_{k}\) and \(\hat{\xi }_{k}\) as in (19). Also define truncated matrices \(\tilde{X}_{n,k}\) and \(\hat{X}_{n,k}\) as in (21) and (22), respectively. Define the linearized truncated matrix \(\mathcal {Y}_{n}\) as in (31). Also recall that \(P_{n}=n^{-m/2}X_{n,1}X_{n,2}\cdots X_{n,m}\) and \(\hat{P}_{n}=n^{-m/2}\hat{X}_{n,1}\hat{X}_{n,2}\cdots \hat{X}_{n,m}\).

Let X be an \(n\times n\) random matrix filled with iid copies of a random variable \(\xi \) which has mean zero, unit variance, and finite \(4+\tau \) moment. For a fixed constant \(L>0\), define matrices \(\mathring{X}\) and \(\check{X}\) to be the \(n\times n\) matrices with entries defined by

$$\begin{aligned} \mathring{X}_{(i,j)}:=X_{(i,j)}{\mathbf {1}_{\{|X_{(i,j)}|\le L/\sqrt{2}\}}}-\mathbb {E}\left[ X_{(i,j)}{\mathbf {1}_{\{|X_{(i,j)}|\le L/\sqrt{2}\}}}\right] \end{aligned}$$
(109)

and

$$\begin{aligned} \check{X}_{(i,j)}:= \frac{\mathring{X}_{(i,j)}}{\sqrt{\text {Var}(\mathring{X}_{(i,j)})}} \end{aligned}$$
(110)

for \(1 \le i,j \le n\). Define \(\mathring{X}_{n,1},\mathring{X}_{n,2},\dots \mathring{X}_{n,m}\) and \(\check{X}_{n,1},\check{X}_{n,2},\dots \check{X}_{n,m}\) as in (109) and (110), respectively. Finally, define the linearized truncated matrix

$$\begin{aligned} \check{\mathcal {Y}}_{n} :=n^{-1/2}\left[ \begin{array}{ccccc} 0 &{}\quad \check{X}_{n,1} &{}\quad 0 &{}\quad \cdots &{}\quad 0\\ 0 &{}\quad 0 &{}\quad \check{X}_{n,2} &{}\quad \dots &{}\quad 0\\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad \cdots &{}\quad \check{X}_{n,m-1}\\ \check{X}_{n,m} &{}\quad 0 &{}\quad 0 &{}\quad \dots &{}\quad 0 \end{array}\right] . \end{aligned}$$
(111)
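The point of the cyclic block linearization is that its m-th power is block diagonal with blocks similar to the product matrix, so every eigenvalue \(\mu \) of the linearized matrix satisfies \(\mu ^{m}=\lambda \) for some eigenvalue \(\lambda \) of the product. A small numerical sketch of this relation (with plain Gaussian blocks standing in for the truncated matrices, and arbitrary sizes):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 30, 3

Xs = [rng.standard_normal((n, n)) for _ in range(m)]

# Cyclic block linearization in the shape of (31)/(111), with plain
# Gaussian blocks standing in for the truncated matrices.
Y = np.zeros((m * n, m * n))
for k in range(m):
    Y[k*n:(k+1)*n, ((k+1) % m)*n:((k+1) % m + 1)*n] = Xs[k] / np.sqrt(n)

P = np.eye(n)
for X in Xs:
    P = P @ (X / np.sqrt(n))          # P = n^{-m/2} X_1 ... X_m

# Every eigenvalue mu of Y satisfies mu^m = lambda for some eigenvalue
# lambda of P, since Y^m is block diagonal with blocks similar to P.
lam = np.linalg.eigvals(P)
err = max(np.min(np.abs(lam - mu**m)) for mu in np.linalg.eigvals(Y))
print(err)
```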

Lemma B.1

Fix \(\varepsilon >0\). For a fixed integer \(m>0\), let \(\xi _{1},\xi _{2},\dots ,\xi _{m}\) be real-valued random variables, each having mean zero, variance one, and a finite \(4+\tau \) moment for some \(\tau >0\). Let \(\hat{X}_{n,1},\hat{X}_{n,2},\dots ,\hat{X}_{n,m}\) be independent iid random matrices with atom variables as defined in (22), and define \(\mathcal {Y}_{n}\) as in (31). For every \(\delta >0\), there exists a constant \(c>0\) depending only on \(\delta \) such that

$$\begin{aligned} \inf _{|z|> 1+\delta /2}s_{mn}\left( \mathcal {Y}_{n}-zI\right) \ge c \end{aligned}$$

with overwhelming probability.

Proof

Fix \(\delta >0\) and define \(\check{\mathcal {Y}}_{n}\) as in (111). By [23, Lemma 8.1], which is based on techniques in [48, 49], we know that there exists a constant \(c'>0\) which depends only on \(\delta \) such that \(\inf _{|z|> 1+\delta /2}s_{mn}\left( \check{\mathcal {Y}}_{n}-zI\right) \ge c'\) with overwhelming probability. Note that by Weyl’s inequality (13),

$$\begin{aligned} \sup _{z\in \mathcal {C}}\left| s_{mn}\left( \check{\mathcal {Y}}_{n}-zI\right) -s_{mn}\left( \mathcal {Y}_{n}-zI\right) \right| \le \left\| \check{\mathcal {Y}}_{n}-\mathcal {Y}_{n}\right\| \le \max _{1 \le k \le m}\frac{1}{\sqrt{n}}\left\| \check{X}_{n,k}-\hat{X}_{n,k}\right\| . \end{aligned}$$
(112)
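The Weyl-type bound used here, \(|s_{k}(A)-s_{k}(B)|\le \Vert A-B\Vert \) for each singular value, admits a quick numerical sanity check (the sizes and the perturbation scale below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
A = rng.standard_normal((n, n))
E = 0.01 * rng.standard_normal((n, n))

sA = np.linalg.svd(A, compute_uv=False)       # singular values, descending
sB = np.linalg.svd(A + E, compute_uv=False)
gap = np.max(np.abs(sA - sB))                 # worst deviation over all k
opnorm = np.linalg.norm(E, 2)                 # spectral norm of the perturbation
print(gap, opnorm)
```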

Focusing on an arbitrary value of k, we have

$$\begin{aligned} \frac{1}{\sqrt{n}}\left\| \check{X}_{n,k}-\hat{X}_{n,k}\right\| \le \frac{1}{\sqrt{n}}\left\| \frac{\mathring{X}_{n,k}}{\sqrt{\text {Var}((\mathring{X}_{n,k})_{(i,j)})}}-\frac{\tilde{X}_{n,k}}{\sqrt{\text {Var}((\tilde{X}_{n,k})_{(i,j)})}}\right\| \end{aligned}$$

for any \(1\le i,j\le n\). Observe that

$$\begin{aligned} \frac{1}{\sqrt{n}}\left\| \frac{\mathring{X}_{n,k}}{\sqrt{\text {Var}((\mathring{X}_{n,k})_{(i,j)})}}- \mathring{X}_{n,k}\right\| =\frac{1}{\sqrt{n}}\left\| \frac{\mathring{X}_{n,k}\left( 1-\sqrt{\text {Var}((\mathring{X}_{n,k})_{(i,j)})}\right) }{\sqrt{\text {Var}((\mathring{X}_{n,k})_{(i,j)})}}\right\| . \end{aligned}$$

By [23, Lemma 7.1], \(\left( \text {Var}((\mathring{X}_{n,k})_{(i,j)})\right) ^{-1/2}\le 2\) for L sufficiently large. Additionally, an argument similar to that of [23, Lemma 7.1] shows that \(\left| 1-\sqrt{\text {Var}((\mathring{X}_{n,k})_{(i,j)})}\right| \le \frac{C}{L^{2}}\) for any \(1\le i,j\le n\) and some constant \(C>0\). Therefore, by [62, Theorem 1.4], for L sufficiently large,

$$\begin{aligned} \frac{1}{\sqrt{n}}\left\| \frac{\mathring{X}_{n,k}}{\sqrt{\text {Var}((\mathring{X}_{n,k})_{(i,j)})}}- \mathring{X}_{n,k}\right\| \le \frac{C}{L^{2}\sqrt{n}}\left\| \frac{\mathring{X}_{n,k}}{\sqrt{\text {Var}((\mathring{X}_{n,k})_{(i,j)})}}\right\| \le \frac{c'}{16} \end{aligned}$$

with overwhelming probability. Similarly,

$$\begin{aligned} \frac{1}{\sqrt{n}}\left\| \frac{\tilde{X}_{n,k}}{\sqrt{\text {Var}((\tilde{X}_{n,k})_{(i,j)})}}- \tilde{X}_{n,k}\right\| =\frac{1}{\sqrt{n}}\left\| \frac{\tilde{X}_{n,k}\left( 1-\sqrt{\text {Var}((\tilde{X}_{n,k})_{(i,j)})}\right) }{\sqrt{\text {Var}((\tilde{X}_{n,k})_{(i,j)})}}\right\| . \end{aligned}$$

By the arguments to prove part (ii) of Lemma 4.3, \(\left( \text {Var}((\tilde{X}_{n,k})_{(i,j)})\right) ^{-1/2}\le 2\) for n sufficiently large. Also, by part (i) of Lemma 4.3, we can show that \(\left| 1-\sqrt{\text {Var}((\tilde{X}_{n,k})_{(i,j)})}\right| =o(n^{-1+2\varepsilon })\). Therefore, by [13, Theorem 5.9],

$$\begin{aligned} \frac{1}{\sqrt{n}}\left\| \frac{\tilde{X}_{n,k}}{\sqrt{\text {Var}((\tilde{X}_{n,k})_{(i,j)})}}- \tilde{X}_{n,k}\right\| =o(n^{-1+2\varepsilon }) \frac{1}{\sqrt{n}}\left\| \tilde{X}_{n,k}\right\| \le \frac{c'}{16} \end{aligned}$$

with overwhelming probability. Ergo, by the triangle inequality, for L sufficiently large,

$$\begin{aligned} \frac{1}{\sqrt{n}}\left\| \check{X}_{n,k}-\hat{X}_{n,k}\right\|&\le \frac{1}{\sqrt{n}}\left\| \frac{\mathring{X}_{n,k}}{\sqrt{\text {Var}((\mathring{X}_{n,k})_{(i,j)})}}-\frac{\tilde{X}_{n,k}}{\sqrt{\text {Var}((\tilde{X}_{n,k})_{(i,j)})}}\right\| \nonumber \\&\le \frac{c'}{8}+ \frac{1}{\sqrt{n}}\left\| \mathring{X}_{n,k}-\tilde{X}_{n,k}\right\| \end{aligned}$$
(113)

with overwhelming probability.

Now, recall that the entries of \(\mathring{X}_{n,k}\) are truncated at level L for a fixed \(L>0\), so for sufficiently large n, \(L\le n^{1/2-\varepsilon }\). Note that if an entry is at most L in absolute value, then the corresponding entries of \(\mathring{X}_{n,k}\) and \(\tilde{X}_{n,k}\) agree. Similarly, if an entry is greater than \(n^{1/2-\varepsilon }\) in absolute value, then the corresponding entries of \(\mathring{X}_{n,k}\) and \(\tilde{X}_{n,k}\) agree. Ergo, we need only consider the case when there exist indices \(1\le i,j\le n\) such that \(L\le |(\tilde{X}_{n,k})_{i,j}|\le n^{1/2-\varepsilon }\). For each \(1\le k\le m\), define the random variables

$$\begin{aligned} \dot{\xi }_{k} := \xi _{k}{\mathbf {1}_{\{L\le |\xi _{k}|\le n^{1/2-\varepsilon }\}}}-\mathbb {E}\left[ \xi _{k}{\mathbf {1}_{\{L\le |\xi _{k}|\le n^{1/2-\varepsilon }\}}}\right] \end{aligned}$$

and define \(\dot{X}_{n,k}\) to be the matrix with entries

$$\begin{aligned} (\dot{X}_{n,k})_{(i,j)} := (X_{n,k})_{(i,j)}{\mathbf {1}_{\{L\le |(X_{n,k})_{(i,j)}|\le n^{1/2-\varepsilon }\}}}-\mathbb {E}\left[ (X_{n,k})_{(i,j)}{\mathbf {1}_{\{L\le |(X_{n,k})_{(i,j)}|\le n^{1/2-\varepsilon }\}}}\right] , \end{aligned}$$

for \(1\le i,j \le n\). Note that the definitions of \(\dot{\xi }\) and \(\dot{X}_{n,k}\) differ from the definitions in Sect. 4. We will use the definition given in this appendix for the remainder of this proof. We can write

$$\begin{aligned} \frac{1}{\sqrt{n}}\left\| \mathring{X}_{n,k}-\tilde{X}_{n,k}\right\| = \frac{1}{\sqrt{n}}\left\| \dot{X}_{n,k}\right\| . \end{aligned}$$

By [13, Theorem 5.9], for L sufficiently large,

$$\begin{aligned} \frac{1}{\sqrt{n}}\left\| \dot{X}_{n,k}\right\| \le \frac{c'}{8} \end{aligned}$$
(114)

with overwhelming probability. Thus, by choosing L large enough to satisfy both conditions, by (113) and (114),

$$\begin{aligned} \max _{1 \le k \le m}\frac{1}{\sqrt{n}}\left\| \check{X}_{n,k}-\hat{X}_{n,k}\right\| <\frac{c'}{4} \end{aligned}$$

with overwhelming probability. By recalling (112), this implies that, for L sufficiently large,

$$\begin{aligned} \inf _{|z|>1+\delta /2}s_{mn}\left( \mathcal {Y}_{n}-zI\right) \ge c \end{aligned}$$

with overwhelming probability where \(c=\frac{c'}{2}\). \(\square \)
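A numerical sketch of the conclusion of Lemma B.1 (Gaussian entries stand in for the truncated matrices \(\hat{X}_{n,k}\), and the values of \(\delta \), n, and the number of sample points on the circle are arbitrary choices, so this only illustrates the statement, not the truncation argument):

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 50, 2

# Gaussian stand-ins for the truncated matrices hat X_{n,k}.
Xs = [rng.standard_normal((n, n)) for _ in range(m)]
Y = np.zeros((m * n, m * n))
for k in range(m):
    Y[k*n:(k+1)*n, ((k+1) % m)*n:((k+1) % m + 1)*n] = Xs[k] / np.sqrt(n)

# Sample z on the circle |z| = 1 + delta/2 outside the unit disk and
# record the smallest singular value of Y - zI at each point.
delta = 1.0
zs = (1 + delta / 2) * np.exp(2j * np.pi * np.arange(8) / 8)
smin = min(np.linalg.svd(Y - z * np.eye(m * n), compute_uv=False)[-1] for z in zs)
print(smin)
```

The smallest singular value stays bounded away from zero on the sampled circle, matching the lower bound asserted by the lemma.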

Lemma B.2

Fix \(\varepsilon >0\). For a fixed integer \(m>0\), let \(\xi _{1},\xi _{2},\dots ,\xi _{m}\) be real-valued random variables, each having mean zero, variance one, and a finite \(4+\tau \) moment for some \(\tau >0\). Let \({X}_{n,1},{X}_{n,2},\dots ,{X}_{n,m}\) be independent iid random matrices with atom variables \(\xi _{1},\xi _{2},\dots ,\xi _{m}\), respectively. Define \(\hat{X}_{n,1},\hat{X}_{n,2},\dots ,\hat{X}_{n,m}\) as in (22), and define \(\hat{P}_{n}\) as in (24). For any \(\delta >0\), there exists a constant \(c>0\) depending only on \(\delta \) such that

$$\begin{aligned} \inf _{|z|> 1+\delta /2}s_{n}\left( \hat{P}_{n}-zI\right) \ge c \end{aligned}$$

with overwhelming probability.

Proof

Fix \(\delta >0\). By Lemma B.1, we know that there exists some \(c'>0\) such that \(\inf _{|z|>1+\delta /2}s_{mn}\left( \mathcal {Y}_{n}-zI\right) \ge c'\) with overwhelming probability. Recall that \(s_{mn}\left( \mathcal {Y}_{n}-zI\right) =\left( s_{1}\left( \left( \mathcal {Y}_{n}-zI\right) ^{-1}\right) \right) ^{-1}\) provided z is not an eigenvalue of \(\mathcal {Y}_{n}\). A block inverse matrix calculation reveals that

$$\begin{aligned} \left( \left( \mathcal {Y}_{n}-zI\right) ^{-1}\right) ^{[1,1]}=z^{m-1}\left( \hat{P}_{n}-z^{m}I\right) ^{-1}, \end{aligned}$$

where the notation \(A^{[1,1]}\) denotes the upper left \(n\times n\) block of A. Therefore,

$$\begin{aligned} \frac{1}{c'} \ge \sup _{|z|> 1+\delta /2}s_{1}\left( \left( \mathcal {Y}_{n}-zI\right) ^{-1}\right) \ge \sup _{|z|> 1+\delta /2}|z|^{m-1}\left\| \left( \hat{P}_{n}-z^{m}I\right) ^{-1}\right\| . \end{aligned}$$

This implies that there exists a constant \(c>0\) such that

$$\begin{aligned} \frac{1}{c}\ge \sup _{|z|> 1+\delta /2}s_{1}\left( \left( \hat{P}_{n}-zI\right) ^{-1}\right) \end{aligned}$$

with overwhelming probability. This gives \(\inf _{|z| > 1+\delta /2}s_{n}\left( \hat{P}_{n}-zI\right) \ge c\) with overwhelming probability. \(\square \)
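The block inverse identity \(\left( \left( \mathcal {Y}_{n}-zI\right) ^{-1}\right) ^{[1,1]}=z^{m-1}\left( \hat{P}_{n}-z^{m}I\right) ^{-1}\) driving the proof can be verified numerically (Gaussian blocks stand in for the truncated matrices; n, m, and z below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 20, 3

Xs = [rng.standard_normal((n, n)) for _ in range(m)]
Y = np.zeros((m * n, m * n))
for k in range(m):
    Y[k*n:(k+1)*n, ((k+1) % m)*n:((k+1) % m + 1)*n] = Xs[k] / np.sqrt(n)

P = np.eye(n)
for X in Xs:
    P = P @ (X / np.sqrt(n))                    # P = n^{-m/2} X_1 ... X_m

z = 1.3 + 0.2j                                  # |z^m| lies outside the spectrum of P
R = np.linalg.inv(Y - z * np.eye(m * n))
block = R[:n, :n]                               # upper-left n x n block of the resolvent
target = z**(m - 1) * np.linalg.inv(P - z**m * np.eye(n))
print(np.max(np.abs(block - target)))
```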

Lemma B.3

For a fixed integer \(m>0\), let \(\xi _{1},\xi _{2},\dots ,\xi _{m}\) be real-valued random variables, each satisfying Assumption 2.1. Fix \(\delta >0\) and let \(X_{n,1},X_{n,2},\dots ,X_{n,m}\) be independent iid random matrices with atom variables \(\xi _{1},\xi _{2},\dots ,\xi _{m}\), respectively. Then there exists a constant \(c>0\) depending only on \(\delta \) such that

$$\begin{aligned} \inf _{|z|> 1+\delta /2}s_{n}\left( P_{n}/\sigma -zI\right) \ge c \end{aligned}$$

with probability \(1-o(1)\) where \(\sigma = \sigma _{1}\cdots \sigma _{m}\).

Proof

By a simple rescaling, it is sufficient to assume that the variance of each random variable is 1 so that \(\sigma =1\). Let \(\delta >0\) and recall by Lemma B.2 there exists a \(c'>0\) depending only on \(\delta \) such that \(\inf _{|z|> 1+\delta /2}s_{n}\left( \hat{P}_{n}-zI\right) \ge c'\) with overwhelming probability. Then by Lemma 4.10,

$$\begin{aligned}&\mathbb {P}\left( \inf _{|z|> 1+\delta /2}s_{n}\left( P_{n}-zI\right)<\frac{c'}{2}\right) \\&\quad = \mathbb {P}\left( \inf _{|z|> 1+\delta /2}s_{n}\left( P_{n}-zI\right)<\frac{c'}{2}\;\;\text { and }\;\;\left\| P_{n}-\hat{P}_{n}\right\| \le n^{-\varepsilon }\right) \\&\qquad + \mathbb {P}\left( \inf _{|z|> 1+\delta /2}s_{n}\left( P_{n}-zI\right)<\frac{c'}{2}\;\;\text { and }\;\;\left\| P_{n}-\hat{P}_{n}\right\|> n^{-\varepsilon }\right) \\&\quad \le \mathbb {P}\left( \inf _{|z|> 1+\delta /2}s_{n}\left( P_{n}-zI\right)<\frac{c'}{2}\;\;\text { and }\;\;\left\| P_{n}-\hat{P}_{n}\right\| \le n^{-\varepsilon }\right) \\&\qquad + \mathbb {P}\left( \left\| P_{n}-\hat{P}_{n}\right\|> n^{-\varepsilon }\right) \\&\quad \le \mathbb {P}\left( \inf _{|z|> 1+\delta /2}s_{n}\left( P_{n}-zI\right) <\frac{c'}{2}\;\;\text { and }\;\;\left\| P_{n}-\hat{P}_{n}\right\| \le n^{-\varepsilon }\right) +o(1). \end{aligned}$$

Suppose that there exists a \(z_{0}\in \mathbb {C}\) with \(|z_{0}|\ge 1+\delta /2\) such that \(s_{n}\left( P_{n}-z_{0}I\right) <\frac{c'}{2}\) and \(\left\| P_{n}-\hat{P}_{n}\right\|<n^{-\varepsilon }<\frac{c'}{2}\). Then, by Weyl’s inequality (13), \(\Big |s_{n}(P_{n}-z_{0}I)-s_{n}(\hat{P}_{n}-z_{0}I)\Big |<\frac{c'}{2}\) which implies \(s_{n}(\hat{P}_{n}-z_{0}I)<c'\). Thus, for n sufficiently large to ensure that \(n^{-\varepsilon }<\frac{c'}{2}\), by Lemma 4.10

$$\begin{aligned} \mathbb {P}\left( \inf _{|z|> 1+\delta /2}s_{n}\left( P_{n}-zI\right)<\frac{c'}{2}\right) \le \mathbb {P}\left( \inf _{|z|> 1+\delta /2}s_{n}\left( \hat{P}_{n}-zI\right) <c'\right) +o(1). \end{aligned}$$

Thus, selecting \(c=\frac{c'}{2}\), we have \(\inf _{|z|> 1+\delta /2}s_{n}\left( P_{n}-zI\right) \ge c\) with probability \(1-o(1)\). \(\square \)

Lemma B.4

Let A be an \(n\times n\) matrix. Let R be a subset of the integer set \(\{1,2,\dots n\}\). Let \(A^{(R)}\) denote the matrix A, but with the rth column replaced with zero for each \(r\in R\). Then

$$\begin{aligned} s_{n}\left( A^{(R)}-zI\right) \ge \min \{s_{n}(A-zI),|z|\}. \end{aligned}$$

Proof

Let \(A^{((R))}\) denote the matrix A with column r removed for all \(r\in R\). Note that \(A^{((R))}\) is an \(n\times (n-|R|)\) matrix, which is distinct from the \(n\times n\) matrix \(A^{(R)}\). Also, let \(I^{((R))}\) denote the \(n\times n\) identity matrix with column r removed for all \(r\in R\). In order to bound the least singular value of \((A^{(R)}-zI)\), we will consider the eigenvalues of \(\left( A-zI\right) ^{*}\left( A-zI\right) ,\)\(\left( A^{(R)}-zI\right) ^{*}\left( A^{(R)}-zI\right) ,\) and \(\left( A^{((R))}-zI^{((R))}\right) ^{*}\left( A^{((R))}-zI^{((R))}\right) .\)

Now, observe that \(\left( A^{((R))}-zI^{((R))}\right) ^{*}\left( A^{((R))}-zI^{((R))}\right) \) is an \((n-|R|)\times (n-|R|)\) matrix, and is a principal submatrix of the Hermitian matrix \((A-zI)^{*}(A-zI)\). Therefore, the eigenvalues of \(\left( A^{((R))}-zI^{((R))}\right) ^{*}\left( A^{((R))}-zI^{((R))}\right) \) must interlace with the eigenvalues of \(\left( A-zI\right) ^{*}\left( A-zI\right) \) by Cauchy’s interlacing theorem [38, Theorem 1]. This implies

$$\begin{aligned} s_{n}\left( A^{((R))}-zI^{((R))}\right) ^{2}\ge s_{n}\left( A-zI\right) ^{2}. \end{aligned}$$

Next, we compare the eigenvalues of \(\left( A^{(R)}-zI\right) ^{*}\left( A^{(R)}-zI\right) \) to the eigenvalues of \(\left( A^{((R))}-zI^{((R))}\right) ^{*}\left( A^{((R))}-zI^{((R))}\right) \). Note that, after a possible permutation of columns to move all zero columns of \(A^{(R)}\) to be in the last |R| columns, the product \(\left( A^{(R)}-zI\right) ^{*}\left( A^{(R)}-zI\right) \) becomes

$$\begin{aligned} \left[ \begin{array}{cc} \left( A^{((R))}-zI^{((R))}\right) ^{*}\left( A^{((R))}-zI^{((R))}\right) &{} 0_{(n-|R|)\times |R|}\\ &{} \\ 0_{|R|\times (n-|R|)} &{} |z|^{2}\cdot I_{|R|\times |R|} \end{array}\right] . \end{aligned}$$

Due to the block structure of the matrix above, if w is an eigenvalue of \(\left( A^{(R)}-zI\right) ^{*}\left( A^{(R)}-zI\right) \), then either w is an eigenvalue of \(\left( A^{((R))}-zI^{((R))}\right) ^{*}\left( A^{((R))}-zI^{((R))}\right) \) or w is \(|z|^{2}\). Ergo,

$$\begin{aligned} s_{n}\left( A^{(R)}-zI\right) ^{2}&= \min \left\{ s_{n}\left( A^{((R))}-zI^{((R))}\right) ^{2},\;|z|^{2}\right\} \\&\ge \min \left\{ s_{n}\left( A-zI\right) ^{2},\;|z|^{2}\right\} \end{aligned}$$

which implies \(s_{n}\left( A^{(R)}-zI\right) \ge \min \left\{ s_{n}\left( A-zI\right) ,\;|z|\right\} \) concluding the proof. \(\square \)

This lemma yields the following two corollaries.

Corollary B.5

Fix \(\varepsilon >0\). For a fixed integer \(m>0\), let \(\xi _{1},\xi _{2},\dots ,\xi _{m}\) be real-valued random variables, each with mean zero, unit variance, and finite \(4+\tau \) moment for some \(\tau >0\). Let \({X}_{n,1},{X}_{n,2},\dots ,{X}_{n,m}\) be independent iid random matrices with atom variables \(\xi _{1},\xi _{2},\dots ,\xi _{m}\), respectively, and define \(\hat{X}_{n,1},\hat{X}_{n,2},\dots ,\hat{X}_{n,m}\) as in (22). Define \(\mathcal {Y}_{n}\) as in (31) and \(\mathcal {Y}_{n}^{(k)}\) as \(\mathcal {Y}_{n}\) with the columns \(c_{k},c_{n+k},c_{2n+k},\dots ,c_{(m-1)n+k}\) replaced with zeros. For any \(\delta >0\), there exists a constant \(c>0\) depending only on \(\delta \) such that

$$\begin{aligned} \inf _{|z|> 1+\delta /2}s_{mn}\left( \mathcal {Y}_{n}^{(k)}-zI\right) \ge c \end{aligned}$$

with overwhelming probability.

Proof

Note that by Lemmas B.1 and B.4,

$$\begin{aligned} \inf _{|z|> 1+\delta /2}s_{mn}\left( \mathcal {Y}_{n}^{(k)}-zI\right)&\ge \inf _{|z|> 1+\delta /2} \min \left\{ s_{mn}\left( \mathcal {Y}_{n}-zI\right) ,\;|z|\right\} \\&\ge \inf _{|z|> 1+\delta /2} \min \left\{ s_{mn}\left( \mathcal {Y}_{n}-zI\right) ,\;1\right\} \\&\ge \min \left\{ c',\;1\right\} \end{aligned}$$

with overwhelming probability for some constant \(c'>0\) depending only on \(\delta \). The result follows by setting \(c=\min \left\{ c',\;1\right\} \). \(\square \)

Corollary B.6

Fix \(\varepsilon >0\). For a fixed integer \(m>0\), let \(\xi _{1},\xi _{2},\dots ,\xi _{m}\) be real-valued random variables, each with mean zero, unit variance, and finite \(4+\tau \) moment for some \(\tau >0\). Let \(\hat{X}_{n,1},\hat{X}_{n,2},\dots ,\hat{X}_{n,m}\) be independent iid random matrices with atom variables as defined in (22). Define \(\mathcal {Y}_{n}\) as in (31) and \(\mathcal {Y}_{n}^{(k,s)}\) as \(\mathcal {Y}_{n}\) with the columns \(c_{k},c_{n+k},c_{2n+k},\dots ,c_{(m-1)n+k}\) and \(c_{s}\) replaced with zeros. For any \(\delta >0\), there exists a constant \(c>0\) depending only on \(\delta \) such that

$$\begin{aligned} \inf _{|z|> 1+\delta /2}s_{mn}\left( \mathcal {Y}_{n}^{(k,s)}-zI\right) \ge c \end{aligned}$$

with overwhelming probability.

The proof of Corollary B.6 follows in exactly the same way as the proof of Corollary B.5.

Appendix C. Useful Lemmas

Lemma C.1

(Lemma 2.7 from [12]). Let \(X = (x_{1},x_{2},\ldots ,x_{N})^{T}\) have iid standardized complex entries, and let B be an \(N\times N\) complex matrix. Then, for any \(p\ge 2\),

$$\begin{aligned} \mathbb {E}\left| X^{*}BX-{{\,\mathrm{tr}\,}}(B)\right| ^{p}\le K_{p}\left( \left( \mathbb {E}\left| x_{1}\right| ^{4}{{\,\mathrm{tr}\,}}(B^{*}B)\right) ^{p/2}+\mathbb {E}|x_{1}|^{2p}{{\,\mathrm{tr}\,}}\left( (B^{*}B)^{p/2}\right) \right) , \end{aligned}$$

where the constant \(K_{p}>0\) depends only on p.

Lemma C.2

Let A be an \(N\times N\) complex-valued matrix. Suppose that \(\xi \) is a complex-valued random variable with mean zero and unit variance. Let \(S\subseteq [N]\), and let \(w=(w_{i})_{i=1}^{N}\) be a vector with the following properties:

  1. (i) \(\{w_i : i \in S \}\) is a collection of iid copies of \(\xi \),

  2. (ii) \(w_{i}=0\) for \(i\not \in S\).

Additionally, let \(A_{S\times S}\) denote the \(|S|\times |S|\) matrix which has entries \(A_{(i,j)}\) for \(i,j\in S\). Then for any even \(p\ge 2\),

$$\begin{aligned} \mathbb {E}\left| w^{*}Aw-{{\,\mathrm{tr}\,}}(A_{S\times S})\right| ^{p}\ll _{p}\mathbb {E}\left| \xi \right| ^{2p}\left( {{\,\mathrm{tr}\,}}(A^{*}A)\right) ^{p/2}. \end{aligned}$$

Proof

Let \(w_{S}\) denote the |S|-vector which contains entries \(w_{i}\) for \(i\in S\) and observe

$$\begin{aligned} w^{*}Aw = \sum _{i,j}\bar{w}_{i}A_{(i,j)}w_{j}= w_{S}^{*}A_{S\times S}w_{S}. \end{aligned}$$

Therefore, by Lemma C.1, for any even \(p\ge 2\),

$$\begin{aligned} \mathbb {E}\left| w^{*}Aw-{{\,\mathrm{tr}\,}}(A_{S\times S})\right| ^{p}&= \mathbb {E}\left| w_{S}^{*}A_{S\times S}w_{S}-{{\,\mathrm{tr}\,}}(A_{S\times S})\right| ^{p}\\&\ll _{p}\left( \mathbb {E}\left| \xi \right| ^{4}{{\,\mathrm{tr}\,}}(A_{S\times S}^{*}A_{S\times S})\right) ^{p/2}+\mathbb {E}\left| \xi \right| ^{2p}{{\,\mathrm{tr}\,}}(A_{S\times S}^{*}A_{S\times S})^{p/2}\\&\ll _{p}\mathbb {E}\left| \xi \right| ^{2p}\left( {{\,\mathrm{tr}\,}}(A_{S\times S}^{*}A_{S\times S})\right) ^{p/2}, \end{aligned}$$

where the last step uses \((\mathbb {E}|\xi |^{4})^{p/2}\le \mathbb {E}|\xi |^{2p}\) (Lyapunov's inequality) together with \({{\,\mathrm{tr}\,}}\big ((A_{S\times S}^{*}A_{S\times S})^{p/2}\big )\le \big ({{\,\mathrm{tr}\,}}(A_{S\times S}^{*}A_{S\times S})\big )^{p/2}\), valid since \(A_{S\times S}^{*}A_{S\times S}\) is positive semidefinite.

Now observe that

$$\begin{aligned} {{\,\mathrm{tr}\,}}(A_{S\times S}^{*}A_{S\times S})= \sum _{i,j\in S} \left| A_{i,j}\right| ^{2} \le \sum _{i,j=1}^{N} \left| A_{i,j}\right| ^{2}= {{\,\mathrm{tr}\,}}(A^{*}A). \end{aligned}$$

Therefore,

$$\begin{aligned} \mathbb {E}\left| w^{*}Aw-{{\,\mathrm{tr}\,}}(A_{S\times S})\right| ^{p}\ll _{p}\mathbb {E}\left| \xi \right| ^{2p}\left( {{\,\mathrm{tr}\,}}(A_{S\times S}^{*}A_{S\times S})\right) ^{p/2}\le \mathbb {E}\left| \xi \right| ^{2p}\left( {{\,\mathrm{tr}\,}}(A^{*}A)\right) ^{p/2}. \end{aligned}$$

\(\square \)
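The submatrix trace bound used above is simply monotonicity of the squared Frobenius norm under passing to a principal submatrix; a quick numerical check (the matrix and index set below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
N, S = 7, [0, 2, 5]                     # arbitrary dimension and index set
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

A_SS = A[np.ix_(S, S)]                  # principal submatrix A_{S x S}

# tr(A_SS^* A_SS) sums |A_ij|^2 over i,j in S, so it is at most tr(A^* A).
lhs = np.trace(A_SS.conj().T @ A_SS).real
rhs = np.trace(A.conj().T @ A).real
assert lhs <= rhs + 1e-12
```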

Lemma C.3

(Lemma A.1 from [12]). Let \(X = (x_{1},x_{2},\ldots ,x_{N})^{T}\) have iid standardized complex entries, and let B be an \(N\times N\) complex-valued Hermitian nonnegative definite matrix. Then, for any \(p\ge 1\),

$$\begin{aligned} \mathbb {E}\left| X^{*}BX\right| ^{p}\le K_{p}\left( \left( {{\,\mathrm{tr}\,}}B\right) ^{p}+\mathbb {E}|x_{1}|^{2p}\,{{\,\mathrm{tr}\,}}\left( B^{p}\right) \right) , \end{aligned}$$

where \(K_{p}>0\) depends only on p.

Lemma C.4

Let A be an \(N\times N\) Hermitian positive semidefinite matrix. Suppose that \(\xi \) is a complex-valued random variable with mean zero and unit variance. Let \(S\subseteq [N]\), and let \(w = (w_i)_{i=1}^N\) be a vector with the following properties:

  1. (i) \(\{w_i : i \in S \}\) is a collection of iid copies of \(\xi \),

  2. (ii) \(w_{i}=0\) for \(i\not \in S\).

Then for any \(p\ge 2\),

$$\begin{aligned} \mathbb {E}\left| w^{*}Aw\right| ^{p} \ll _{p}\mathbb {E}|\xi |^{2p}\left( {{\,\mathrm{tr}\,}}A\right) ^{p}. \end{aligned}$$
(115)

Proof

Let \(w_{S}\) denote the |S|-vector which contains entries \(w_{i}\) for \(i\in S\), and let \(A_{S\times S}\) denote the \(|S|\times |S|\) matrix which has entries \(A_{(i,j)}\) for \(i,j\in S\). Then we have

$$\begin{aligned} w^{*}Aw = \sum _{i,j}\bar{w}_{i}A_{(i,j)}w_{j}= w_{S}^{*}A_{S\times S}w_{S}. \end{aligned}$$

By Lemma C.3, we get

$$\begin{aligned} \mathbb {E}\left| w^{*}Aw\right| ^{p} \ll _{p}\left( {{\,\mathrm{tr}\,}}A_{S\times S}\right) ^{p}+\mathbb {E}|\xi |^{2p}{{\,\mathrm{tr}\,}}A_{S\times S}^{p}. \end{aligned}$$

Since A is positive semidefinite, so is the principal submatrix \(A_{S\times S}\); its eigenvalues are therefore nonnegative, and hence \({{\,\mathrm{tr}\,}}(A_{S\times S}^{p})\le ({{\,\mathrm{tr}\,}}(A_{S\times S}))^{p}\). Combining this with the fact that, for a Hermitian positive semidefinite matrix, the partial trace is bounded above by the full trace, and noting that \(\mathbb {E}|\xi |^{2p}\ge (\mathbb {E}|\xi |^{2})^{p}=1\) by Jensen's inequality (which absorbs the first term), we observe that

$$\begin{aligned} \left( {{\,\mathrm{tr}\,}}A_{S\times S}\right) ^{p}+\mathbb {E}|\xi |^{2p}{{\,\mathrm{tr}\,}}A_{S\times S}^{p} \ll _{p} \mathbb {E}|\xi |^{2p}\left( {{\,\mathrm{tr}\,}}A_{S\times S}\right) ^{p} \ll _{p} \mathbb {E}|\xi |^{2p}\left( {{\,\mathrm{tr}\,}}A\right) ^{p}. \end{aligned}$$

\(\square \)
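The two positive-semidefinite trace facts invoked in this proof can be verified numerically (a sketch; the dimension, index set, and even power p below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
N, S, p = 6, [1, 3, 4], 4               # arbitrary dimension, index set, even p
G = rng.standard_normal((N, N))
A = G.T @ G                             # Hermitian positive semidefinite

A_SS = A[np.ix_(S, S)]                  # principal submatrix, also PSD

# Nonnegative eigenvalues give tr(A_SS^p) <= (tr A_SS)^p, and the
# partial trace of a PSD matrix is bounded by the full trace.
assert np.trace(np.linalg.matrix_power(A_SS, p)) <= np.trace(A_SS) ** p + 1e-9
assert np.trace(A_SS) <= np.trace(A) + 1e-12
```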

Lemma C.5

Let A and B be \(n\times n\) matrices. Then

$$\begin{aligned} \left| {{\,\mathrm{tr}\,}}(AB)\right| \le \sqrt{n}\left\| AB\right\| _{2}\le \sqrt{n}\left\| A\right\| \cdot \left\| B\right\| _{2}. \end{aligned}$$

Proof

This follows by an application of the Cauchy–Schwarz inequality and an application of [13, Theorem A.10]. \(\square \)
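Both inequalities in Lemma C.5 (Cauchy–Schwarz applied to the diagonal of AB, then submultiplicativity of the Frobenius norm against the operator norm) admit a direct numerical check (arbitrary dimension and matrices):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5                                    # arbitrary dimension
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

tr_AB = abs(np.trace(A @ B))
frob_AB = np.linalg.norm(A @ B, 'fro')   # ||AB||_2, the Frobenius norm
op_A = np.linalg.norm(A, 2)              # ||A||, the operator norm
frob_B = np.linalg.norm(B, 'fro')

# |tr(AB)| <= sqrt(n) ||AB||_2 <= sqrt(n) ||A|| ||B||_2.
assert tr_AB <= np.sqrt(n) * frob_AB + 1e-12
assert frob_AB <= op_A * frob_B + 1e-12
```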


Cite this article

Coston, N., O’Rourke, S. Gaussian Fluctuations for Linear Eigenvalue Statistics of Products of Independent iid Random Matrices. J Theor Probab 33, 1541–1612 (2020). https://doi.org/10.1007/s10959-019-00905-0
