Abstract
A classical result in random matrix theory states that the empirical spectral distribution of a Wigner matrix whose entries have a common variance and satisfy other regularity assumptions converges almost surely to the semicircular law. In this paper, we relax the assumption of a uniform variance: when the average of the normalized sums of the variances in each row of the data matrix converges to a constant, we prove that the same limiting spectral distribution holds. A similar result for the sample covariance matrix is also established. The proofs mainly depend on the Stein equation and the generalized Stein equation for independent random variables.
1 Introduction
Suppose \(A_{n}\) is an \(n\times n\) Hermitian matrix and \(\lambda _{1},\lambda _{2},\ldots,\lambda _{n}\) denote the real eigenvalues of \(A_{n}\). The empirical spectral distribution function (ESD) of \(A_{n}\) is defined as
\[ F^{A_{n}}(x)=\frac{1}{n}\sum_{i=1}^{n}I_{(-\infty ,x]}(\lambda _{i}), \]
where \(I_{A}\) represents the indicator function of the set A. The limit distribution of \(F^{A_{n}}(x)\) as \(n\rightarrow \infty \), if it exists, is called the limiting spectral distribution (LSD) of \(A_{n}\). Since most global spectral limiting properties of \(A_{n}\) are determined by its LSD, the LSD of large dimensional random matrices has attracted considerable interest among mathematicians, probabilists, and statisticians; see, e.g., Wigner [15, 16], Grenander and Silverstein [7], Jonsson [8], Yin and Krishnaiah [18], and Bai and Yin [4].
The Wigner matrix is one of the most basic and popular objects in random matrix theory. A Wigner matrix is a symmetric (or, in the complex case, Hermitian) random matrix whose entries on or above the diagonal are independent random variables. For a Wigner matrix \(X_{n}\) whose entries are i.i.d. real (or complex) random variables with mean zero and variance 1, Wigner [16] proved that the expected ESD of \(W_{n}=\frac{1}{\sqrt{n}}X_{n}\) tends to the limiting distribution \(F_{sc}\), whose density is given by
\[ f_{sc}(x)=\frac{1}{2\pi }\sqrt{4-x^{2}}, \quad |x|\leq 2. \]
The LSD \(F_{sc}\) is usually called the semicircular law in the literature. Grenander [6] proved that \(\|F^{W_{n}}-F_{sc}\|\rightarrow {0}\) in probability. Arnold [1, 2] showed that \(F^{W_{n}}\) converges to \(F_{sc}\) almost surely. Pastur [12] removed the identical-distribution assumption and considered the case where the entries on or above the diagonal of \(X_{n}\) are independent real or complex random variables with mean zero and variance 1, not necessarily identically distributed, but satisfying the following Lindeberg-type assumption: for any constant \(\eta >0\),
\[ \frac{1}{n^{2}}\sum_{i,j}E \bigl[ |X_{ij}|^{2}I \bigl( |X_{ij}|\geq \eta \sqrt{n} \bigr) \bigr] \rightarrow {0}. \quad (1.1) \]
Then the ESD of \(W_{n}\) converges almost surely to the semicircular law.
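As a quick numerical illustration of the semicircular law (an illustrative sketch, not part of the original argument; the matrix size and grid are arbitrary choices), the following compares the ESD of a Gaussian Wigner matrix with the semicircular distribution function:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Real symmetric Wigner matrix: i.i.d. N(0,1) entries on and above the diagonal.
A = rng.standard_normal((n, n))
X = np.triu(A) + np.triu(A, 1).T
eigs = np.linalg.eigvalsh(X / np.sqrt(n))

def semicircle_cdf(x):
    """Distribution function of the semicircular law, supported on [-2, 2]."""
    x = np.clip(x, -2.0, 2.0)
    return 0.5 + (x * np.sqrt(4.0 - x**2) + 4.0 * np.arcsin(x / 2.0)) / (4.0 * np.pi)

# Kolmogorov-type discrepancy between the ESD and F_sc on a grid.
grid = np.linspace(-2.5, 2.5, 101)
ecdf = np.array([np.mean(eigs <= t) for t in grid])
discrepancy = np.max(np.abs(ecdf - semicircle_cdf(grid)))
print(f"max |F^Wn - F_sc| on grid: {discrepancy:.4f}")
```

For n of this size the discrepancy is already small, consistent with the almost sure convergence above.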
Among the results above, the assumption that the entries of the Wigner matrix have a common variance is essential. In practical applications, however, the uniform variance assumption is a strong condition. In this paper, we remove the uniform variance assumption and establish the same semicircular law under a milder assumption on the variances of the entries: the variances need not all equal a constant; we only require that the average of the normalized sums of the variances in each row of the data matrix converges to a positive constant. The result reads as follows.
Theorem 1.1
Let \(W_{n}=\frac{1}{\sqrt{n}}X_{n}\) be a Wigner matrix whose entries on or above the diagonal of \(X_{n}\) are independent real or complex random variables, not necessarily identically distributed. Assume that all the entries of \(X_{n}\) have mean zero, that the variances \(E|X_{ij}|^{2}=\sigma ^{2}_{ij}\) satisfy \(\frac{1}{n}\sum_{i=1}^{n}|\frac{1}{n}\sum_{j=1}^{n}\sigma _{ij}^{2}-1| \rightarrow {0}\) as \(n\rightarrow \infty \), and that the assumption (1.1) holds. Then, almost surely, the ESD of \(W_{n}\) converges weakly to the semicircular law.
Remark 1.1
The result of Theorem 1.1 can be extended: when the average of the normalized sums in each row converges to a positive constant \(\sigma ^{2}\), i.e. \(\frac{1}{n}\sum_{i=1}^{n}|\frac{1}{n}\sum_{j=1}^{n}\sigma _{ij}^{2}-\sigma ^{2}|\rightarrow {0}\), then almost surely the LSD of \(W_{n}\) is the general semicircular law with density
\[ f(x)=\frac{1}{2\pi \sigma ^{2}}\sqrt{4\sigma ^{2}-x^{2}}, \quad |x|\leq 2\sigma . \]
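To illustrate Theorem 1.1 numerically (an illustrative sketch; the variance profile below is a hypothetical choice, not from the paper), take \(\sigma _{ij}^{2}=1+\frac{1}{2}\cos (2\pi (i+j)/n)\): it is symmetric in \((i,j)\), non-constant, and each row average of \(\sigma _{ij}^{2}\) equals 1, so the ESD should still approach the semicircular law, whose second and fourth moments are 1 and 2:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
i, j = np.indices((n, n))

# Hypothetical non-constant variance profile; each row average of sigma^2 is 1.
sigma2 = 1.0 + 0.5 * np.cos(2.0 * np.pi * (i + j) / n)
row_dev = np.mean(np.abs(sigma2.mean(axis=1) - 1.0))

A = np.sqrt(sigma2) * rng.standard_normal((n, n))
X = np.triu(A) + np.triu(A, 1).T          # sigma2 is symmetric, so sigma_{ij} = sigma_{ji}
eigs = np.linalg.eigvalsh(X / np.sqrt(n))

# The semicircular law has second moment 1 and fourth moment 2.
m2, m4 = np.mean(eigs**2), np.mean(eigs**4)
print(round(row_dev, 6), round(m2, 2), round(m4, 2))
```

Despite the strongly non-uniform entry variances, the empirical moments match the semicircular moments.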
Now we consider the LSD of a sample covariance matrix, which is also an important object in random matrix theory and multivariate statistics. Suppose \(Y_{n}=(Y_{ij})_{n\times N}\) is a real or complex random matrix whose entries \(Y_{ij}\ (i=1,\ldots,n, j=1,\ldots,N)\) are i.i.d. real or complex random variables with mean zero and variance 1. Write \(Y_{j}=(Y_{1j},\ldots,Y_{nj})'\) and \(\mathbf{Y}_{n}=(Y_{1},\ldots,Y_{N})\), and define \(\bar{Y}=\frac{1}{N}\sum_{k=1}^{N}Y_{k}\). Since \(\tilde{S}_{n}=\frac{1}{N-1}\sum_{k=1}^{N}(Y_{k}-\bar{Y})(Y_{k}- \bar{Y})^{*}\) shares the same LSD with \(S_{n}=\frac{1}{N}\sum_{k=1}^{N}Y_{k}Y_{k}^{*}=\frac{1}{N}\mathbf{Y}_{n} \mathbf{Y}_{n}^{*}\), where ∗ denotes the conjugate transpose, we usually consider the sample covariance matrix defined by \(S_{n}=\frac{1}{N}\mathbf{Y}_{n}\mathbf{Y}_{n}^{*}\). The limiting spectral properties of large sample covariance matrices have generated considerable interest in statistics, signal processing, and other disciplines. The first result on the LSD of \(S_{n}\) is due to Marčenko and Pastur [10], who proved that when \(\lim_{n\rightarrow \infty }\frac{n}{N}=y\in (0,\infty )\), the LSD of \(S_{n}\) is the M-P law \(F_{y}^{MP}(x)\) with density
\[ f_{y}^{MP}(x)=\frac{1}{2\pi xy}\sqrt{(b-x)(x-a)}, \quad a\leq x\leq b, \]
and has a point mass \(1-1/y\) at the origin if \(y>1\), where \(a=(1-\sqrt{y})^{2}\) and \(b=(1+\sqrt{y})^{2}\). There is further work on the M-P law for sample covariance matrices, such as Bai and Yin [4], Grenander and Silverstein [7], Jonsson [8], Yin [17], Silverstein [13], and Silverstein and Bai [14]. A typical result (see Theorem 3.9 of Bai and Silverstein [3]) states that when the entries of \(Y_{n}\) are independent random variables with mean zero and variance 1, \(n/N\rightarrow y\in (0,\infty )\), and, for any \(\eta >0\),
\[ \frac{1}{nN}\sum_{i,j}E \bigl[ |Y_{ij}|^{2}I \bigl( |Y_{ij}|\geq \eta \sqrt{N} \bigr) \bigr] \rightarrow {0}, \quad (1.2) \]
then the ESD of \(S_{n}\) tends to the M-P law \(F_{y}^{MP}\) almost surely. Note that the assumption that the entries of \(Y_{n}\) have the common variance 1 is again essential in the proof. With the same motivation as in Theorem 1.1, we consider removing the equal variance condition and obtain the following result.
Theorem 1.2
Assume that the entries of the random matrix \(Y_{n}\) defined above are independent random variables with mean zero and variances \(E|Y_{ij}|^{2}=\sigma _{ij}^{2}\), where \(\sigma _{ij}\) satisfies \(\frac{1}{n}\sum_{i=1}^{n}|\frac{1}{N}\sum_{j=1}^{N}\sigma _{ij}^{2}-1| \rightarrow {0}\). Assume that \(n/N\rightarrow y\in (0,\infty )\) as \(n\rightarrow \infty \) and the assumption (1.2) holds. Then, almost surely, the ESD of the sample covariance matrix \(S_{n}=\frac{1}{N}Y_{n}Y_{n}^{*}\) converges weakly to the M-P law.
Remark 1.2
Likewise, if there exists a positive constant \(\sigma ^{2}>0\) satisfying \(\frac{1}{n}\sum_{i=1}^{n}|\frac{1}{N}\sum_{j=1}^{N}\sigma _{ij}^{2}- \sigma ^{2}|\rightarrow {0}\), with the other assumptions unchanged, then almost surely the LSD of \(S_{n}=\frac{1}{N}Y_{n}Y_{n}^{*}\) is the general M-P law with density
\[ f(x)=\frac{1}{2\pi xy\sigma ^{2}}\sqrt{(\tilde{b}-x)(x-\tilde{a})}, \quad \tilde{a}\leq x\leq \tilde{b}, \]
and it has a point mass \(1-1/y\) at the origin if \(y>1\), where \(\tilde{a}=\sigma ^{2}(1-\sqrt{y})^{2}\) and \(\tilde{b}=\sigma ^{2}(1+\sqrt{y})^{2}\).
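A small simulation consistent with Theorem 1.2 (illustrative; the variance profile is again a hypothetical choice): the entries of \(Y_{n}\) get non-constant variances whose row averages equal 1, and the first two spectral moments of \(S_{n}\) are compared with the M-P moments \(\int x\,dF_{y}^{MP}=1\) and \(\int x^{2}\,dF_{y}^{MP}=1+y\):

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 1000, 2000                      # y = n/N = 0.5

# Hypothetical column-dependent variance profile; each row average of sigma^2 is 1.
sigma2 = 1.0 + 0.5 * np.cos(2.0 * np.pi * np.arange(N) / N)
row_dev = abs(sigma2.mean() - 1.0)

Y = np.sqrt(sigma2) * rng.standard_normal((n, N))
S = Y @ Y.T / N
eigs = np.linalg.eigvalsh(S)

y = n / N
m1, m2 = np.mean(eigs), np.mean(eigs**2)
print(round(m1, 3), round(m2, 3))      # M-P moments: 1 and 1 + y
```

The empirical moments match the M-P moments even though the entry variances oscillate between 1/2 and 3/2.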
The rest of the paper is organized as follows. The proofs of the main results are presented in Sect. 2. In the Appendix, some useful lemmas are listed. In the sequel, when there is no confusion, we may drop the subscript n in the notation of matrices for brevity. \(A^{\ast }\) denotes the conjugate transpose of a matrix A, \(\operatorname{tr}(A)\) denotes the trace of A, and C denotes a positive constant, which may differ from line to line.
2 Proofs
The Stieltjes transform method is mainly adopted to complete the proofs. For a distribution function \(F(x)\), its Stieltjes transform is defined as
\[ s_{F}(z)=\int \frac{1}{x-z}\,dF(x), \quad z\in \mathbb{C}^{+}\equiv \{z=u+iv: u\in \mathbb{R}, v>0\}. \]
Obviously, the Stieltjes transform of the ESD \(F^{A_{n}}(x)\) can be written as
\[ s_{F^{A_{n}}}(z)=\frac{1}{n}\operatorname{tr}(A_{n}-zI_{n})^{-1}, \]
where \(I_{n}\) is the identity matrix of order n. The continuity theorem of the Stieltjes transform states that, for a sequence of functions of bounded variation \(\{G_{n}\}\) with Stieltjes transforms \(s_{G_{n}}(z)\) and \(G_{n}(-\infty )=0\) for all n, and a function of bounded variation G with \(G(-\infty )=0\) and Stieltjes transform \(s_{G}(z)\), \(G_{n}\) converges vaguely to G if and only if \(s_{G_{n}}(z)\) converges to \(s_{G}(z)\) for every \(z\in \mathbb{C}^{+}\). In view of the fact that the sequence of the ESDs of Wigner matrices is tight (see Lytova and Pastur [9]), the weak convergence of the ESDs can be obtained from the convergence of their corresponding Stieltjes transforms. Furthermore, if the LSD is a deterministic probability distribution function, then the almost sure convergence of the ESD can be deduced from the almost sure convergence of the Stieltjes transform, which is the basic idea of the following proofs.
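The role of the Stieltjes transform can be illustrated numerically (a sketch, not part of the proof): for the semicircular law, \(s(z)=\frac{-z+\sqrt{z^{2}-4}}{2}\) with the branch satisfying \(\operatorname{Im}s(z)>0\) on \(\mathbb{C}^{+}\), and the empirical transform \(\frac{1}{n}\operatorname{tr}(W_{n}-zI_{n})^{-1}\) is already close to it for moderate n:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
A = rng.standard_normal((n, n))
W = (np.triu(A) + np.triu(A, 1).T) / np.sqrt(n)
eigs = np.linalg.eigvalsh(W)

def s_semicircle(z):
    """Stieltjes transform of F_sc: the root of s^2 + z s + 1 = 0 with Im s > 0."""
    r = (-z + np.sqrt(z**2 - 4)) / 2
    return r if r.imag > 0 else (-z - np.sqrt(z**2 - 4)) / 2

z = 1.0 + 1.0j
s_emp = np.mean(1.0 / (eigs - z))     # = (1/n) tr (W_n - z I_n)^{-1}
err = abs(s_emp - s_semicircle(z))
print(f"|s_n(z) - s(z)| = {err:.4f}")
```
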
2.1 Proof of Theorem 1.1
Define \(F^{W_{n}}\) to be the ESD of \(W_{n}\) and \({s}_{n}(z)\) the Stieltjes transform of \(F^{W_{n}}\). Then, by the continuity theorem of the Stieltjes transform, we complete the proof of Theorem 1.1 by showing that, for every fixed \(z\in \mathbb{C}^{+}\),
\[ s_{n}(z)\rightarrow s(z) \quad \text{a.s.}, \quad (2.1) \]
where \(s(z)\) is the Stieltjes transform of the semicircular law \(F_{sc}\).
The proofs for the real-valued Wigner matrix are almost the same as those for the complex-valued Wigner matrix; that is, all the results as well as the main ingredients of the proofs in the real symmetric case remain valid in the Hermitian case with natural modifications. For the sake of simplicity, we confine ourselves to a real symmetric Wigner matrix. To this end, we write \(\hat{W}_{n}=\frac{1}{\sqrt{n}}\hat{X}_{n}\) for a Wigner matrix independent of \(W_{n}\), where the entries of \(\hat{X}_{n}=(\hat{X}_{ij})_{n\times n}\) are independent \(N(0,1)\) random variables. Define \(F^{\hat{W_{n}}}\) to be the ESD of \(\hat{W}_{n}\), and \(\hat{s}_{n}(z)\) the Stieltjes transform of \(F^{\hat{W}_{n}}\). By Theorem 2.9 of Bai and Silverstein [3], we know that, almost surely, the LSD of \(\hat{W}_{n}\) is the semicircular law \(F_{sc}(x)\), which means
\[ \hat{s}_{n}(z)\rightarrow s(z) \quad \text{a.s.} \]
Thus, (2.1) can be achieved by showing
\[ s_{n}(z)-\hat{s}_{n}(z)\rightarrow {0} \quad \text{a.s.} \quad (2.2) \]
In the sequel, we will complete the proof of (2.2) by the following two steps.
- (i)
For any fixed \(z\in \mathbb{C}^{+}\), \(\hat{s}_{n}(z)-E\hat{s}_{n}(z)\rightarrow {0}\) a.s. and \(s_{n}(z)-Es_{n}(z)\rightarrow {0}\) a.s.
- (ii)
For any fixed \(z\in \mathbb{C}^{+}\), \(Es_{n}(z)-E\hat{s}_{n}(z)\rightarrow {0}\).
We begin with step (i). Define \(W_{k}\) to be the major submatrix of order \((n-1)\) obtained from \(W_{n}\) with the kth row and column removed, and \(\alpha _{k}\) to be the vector from the kth column of \(W_{n}\) with the kth entry deleted. Denote by \(E_{k}(\cdot )\) conditional expectation with respect to the σ-field generated by the random variables {\(X_{i,j}\), \(i,j>k\)}, with the convention that \(E_{n}s_{n}(z)=Es_{n}(z)\) and \(E_{0}s_{n}(z)=s_{n}(z)\). Then we have the martingale difference decomposition
\[ s_{n}(z)-Es_{n}(z)=\sum_{k=1}^{n}(E_{k-1}-E_{k})s_{n}(z)=:\sum_{k=1}^{n}\gamma _{k}. \]
By Theorem A.5 of Bai and Silverstein [3], we know
\[ \bigl\vert \operatorname{tr}(W_{n}-zI_{n})^{-1}-\operatorname{tr}(W_{k}-zI_{n-1})^{-1} \bigr\vert \leq \frac{1}{v}, \quad v=\operatorname{Im}(z). \]
Note that
\[ \gamma _{k}=(E_{k-1}-E_{k})\frac{1}{n} \bigl[ \operatorname{tr}(W_{n}-zI_{n})^{-1}-\operatorname{tr}(W_{k}-zI_{n-1})^{-1} \bigr] , \]
which implies \(|\gamma _{k}|\leq 2/nv\). Since \(\{\gamma _{k}, k\geq 1\}\) forms a martingale difference sequence, it follows by Lemma A.1 with \(p=4\) that
\[ E\bigl\vert s_{n}(z)-Es_{n}(z)\bigr\vert ^{4}\leq K_{4}E \Biggl( \sum_{k=1}^{n}|\gamma _{k}|^{2} \Biggr) ^{2}\leq \frac{16K_{4}}{n^{2}v^{4}}, \]
which, together with the Borel–Cantelli lemma, yields
\[ s_{n}(z)-Es_{n}(z)\rightarrow {0} \quad \text{a.s.} \]
for every fixed \(z\in \mathbb{C}^{+}\).
Similarly, we also get
\[ \hat{s}_{n}(z)-E\hat{s}_{n}(z)\rightarrow {0} \quad \text{a.s.} \]
for every fixed \(z\in \mathbb{C}^{+}\). Therefore, step (i) is completed.
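Step (i) is a concentration statement: the martingale bound \(|\gamma _{k}|\leq 2/nv\) forces the fluctuations of \(s_{n}(z)\) to shrink as n grows. A quick numerical sketch (sample sizes and repetition count are arbitrary choices) checks that the sample standard deviation of \(s_{n}(z)\) over independent Gaussian Wigner draws indeed decreases with n:

```python
import numpy as np

rng = np.random.default_rng(5)

def sn_std(n, reps=40, z=1j):
    """Sample std of s_n(z) over independent Gaussian Wigner draws."""
    vals = []
    for _ in range(reps):
        A = rng.standard_normal((n, n))
        W = (np.triu(A) + np.triu(A, 1).T) / np.sqrt(n)
        vals.append(np.mean(1.0 / (np.linalg.eigvalsh(W) - z)))
    return np.std(vals)

std_small, std_large = sn_std(100), sn_std(400)
print(f"std at n=100: {std_small:.2e}, at n=400: {std_large:.2e}")
```
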
Now we come to step (ii). We first introduce some notation: for \(0\leq s\leq 1\), set
\[ X(s)=\sqrt{s}\,X+\sqrt{1-s}\,\hat{X}, \qquad W(s)=\frac{1}{\sqrt{n}}X(s), \qquad G(s,z)=\bigl(W(s)-zI_{n}\bigr)^{-1}. \]
By the facts that \(X(1)=X\), \(X(0)=\hat{X}\), we can write
Denote \(G'=\frac{\partial }{\partial {z}}G(s,z)\). Write the \((i,j)\)-entry of \(G'\) by \(G'_{ij}\) and the \((i,j)\)-entry of \(X(s)\) by \(X_{ij}(s)\). Since the random variables \(\hat{X}_{ij}\) are independent \(N(0,1)\) random variables, applying the Stein equation in Lemma A.2 with \(\varPhi =G_{ij}'\), we have
where \(D_{ij}{(s)}={\partial }/{\partial X_{ij}(s)}\).
On the other hand, as the random variables \(X_{ij}\) are independent, we will adopt the generalized Stein equation in Lemma A.3 to rewrite the second term in the parentheses of the r.h.s. of (2.3). To this end, we will take \(p=1\) and \(\varPhi =G'_{ij}\) in Lemma A.3. Note that \(\kappa _{1}=EX_{ij}=0\) and \(\kappa _{2}=E|X_{ij}|^{2}\). Then we have
and
where \(\wp _{n}\) is the set of \(n\times n\) real symmetric matrices.
Thus
By (3.25) in Lytova and Pastur [9],
\[ \bigl\vert D_{ij}^{l}G'_{ij} \bigr\vert \leq \frac{c_{l}}{v^{l+2}}, \quad l\geq 1, \]
where \(c_{l}\) is an absolute constant for every l. Let \(l=1\), then \(|D_{ij}G'_{ij}|\leq c_{1}/{v^{3}} \), and
Since \(E|X_{ij}|^{2}=\sigma _{ij}^{2}\), and \(\sigma _{ij}\) satisfies \(\frac{1}{n}\sum_{i=1}^{n}|\frac{1}{n}\sum_{j=1}^{n}\sigma _{ij}^{2}-1| \rightarrow {0} \) based on the condition in Theorem 1.1, we easily get
and then
We have
By the assumption (1.1), we select a sequence \(\eta _{n}\downarrow {0}\) as \(n\rightarrow {\infty }\), such that
The convergence rate of \(\eta _{n}\) can be as slow as desired; for definiteness, we may assume that \(\eta _{n} > 1/\log n\). Then we have
Since
\[ \biggl\vert \frac{1}{n^{2}}\sum_{i,j}\sigma _{ij}^{2}-1 \biggr\vert \leq \frac{1}{n}\sum_{i=1}^{n} \biggl\vert \frac{1}{n}\sum_{j=1}^{n}\sigma _{ij}^{2}-1 \biggr\vert \rightarrow {0}, \]
we obtain \(\frac{1}{n^{2}}\sum_{i,j}E|X_{ij}|^{2}=1+o(1)\). By (2.5) with \(l=2\),
So we have
By \(\eta _{n}\rightarrow 0\) as \(n\rightarrow \infty \), we have \(\mathit{II}=o(1)\). This, together with (2.4) and (2.6), means that
for any fixed \(z\in \mathbb{C}^{+}\). Step (ii) is completed.
Combining steps (i) and (ii), we see that (2.2) is proved. Therefore, we have
\[ s_{n}(z)\rightarrow s(z) \quad \text{a.s. for every fixed } z\in \mathbb{C}^{+}. \]
This establishes (2.1), and the proof of Theorem 1.1 is complete.
2.2 Proof of Theorem 1.2
We will also confine ourselves to the real-valued sample covariance matrix case and follow a procedure similar to that of Theorem 1.1. To this end, we first define \(\hat{Y}_{n}=(\hat{Y}_{ij})_{n\times N}\) to be an \(n\times N\) random matrix independent of \(Y_{n}\), whose entries \(\hat{Y}_{ij}\) are i.i.d. \(N(0,1)\) random variables. Write \(\hat{S}_{n}=\frac{1}{N}\hat{Y}_{n}\hat{Y}_{n}^{*}\). We will use \(F^{S_{n}}\) and \(F^{\hat{S}_{n}}\) to denote the ESDs of \(S_{n}\) and \(\hat{S}_{n}\), respectively, and let \(m_{n}(z)\) and \(\hat{m}_{n}(z)\) be the Stieltjes transforms of \(F^{S_{n}}\) and \(F^{\hat{S}_{n}}\), respectively.
By Theorem 3.10 of Bai and Silverstein [3], we have
\[ \hat{m}_{n}(z)\rightarrow m(z) \quad \text{a.s.}, \]
where \(m(z)\) is the Stieltjes transform of the standard M-P law \(F_{y}^{MP}\). Thus, by the continuity theorem of the Stieltjes transform again, we complete the proof by showing that, for any fixed \(z\in \mathbb{C}^{+}\),
- (i)
\(m_{n}(z)-Em_{n}(z)\rightarrow 0\), a.s. and \(\hat{m}_{n}(z)-E\hat{m}_{n}(z)\rightarrow 0\), a.s.
- (ii)
\(Em_{n}(z)-E\hat{m}_{n}(z)\rightarrow {0}\).
For (i), we prove it by an argument similar to that of Bai and Silverstein [3]. For the sake of completeness, we give the proof. Let \(\widetilde{E}_{k}(\cdot )\) denote the conditional expectation given \(\{Y_{k+1},\ldots ,Y_{N}\}\), with the convention that \(\widetilde{E}_{N}m_{n}(z)=Em_{n}(z)\) and \(\widetilde{E}_{0}m_{n}(z)=m_{n}(z)\). Then
where
Here \(S_{nk}=S_{n}-Y_{k}Y_{k}^{*}\), where \(Y_{k}\) denotes the kth column of \(\frac{1}{\sqrt{N}}\mathbf{Y}_{n}\), and
Note that \(\{\widetilde{\gamma }_{k},k\geq 1\}\) forms a sequence of bounded martingale differences.
By Lemma A.1 with \(p=4\), we have
By the Borel–Cantelli lemma again, we see that almost surely \(m_{n}(z)-Em_{n}(z)\rightarrow 0\). By the same argument, we get \(\hat{m}_{n}(z)-E\hat{m}_{n}(z)\rightarrow 0\), a.s., which means (i) is completed.
Then we come to the proof of (ii). We first introduce some notation: for \(0\leq {s}\leq {1}\), set
\[ V(s)=\frac{1}{\sqrt{N}} \bigl( \sqrt{s}\,Y_{n}+\sqrt{1-s}\,\hat{Y}_{n} \bigr) , \qquad U=U(s,z)=\bigl(V(s)V^{*}(s)-zI_{n}\bigr)^{-1}, \qquad U'=\frac{\partial }{\partial z}U(s,z). \]
By the same procedure in (2.3), we have
It follows by Lemma A.2 with \(\varPhi =(V^{*}(s)U')_{ij}\) that
where \(D_{ij}{(s)}={\partial }/{\partial V_{ij}(s)}\).
By Lemma A.3 with \(p=1\) and \(\varPhi =(V^{*}(s)U')_{ij}\) again, we can see by \(\kappa _{1}=EY_{ij}=0\) and \(\kappa _{2}=E|Y_{ij}|^{2}\) that
and
where \(\mathcal{M}_{n,N}\) is the set of \(n\times N\) real matrices.
By (2.7), we have
The bound of \(|D_{ij}^{r}(V^{*}U')_{ij}|, r=1,2\), is critical for the proof. Since \((V^{*}U')_{ij}\) is analytic in \(z\in \mathbb{C}^{+}\), by the Cauchy inequality for derivatives of analytic functions in Lemma A.4, to bound \(D_{ij}^{r}(V^{*}U')_{ij}, r=1,2\), on any compact set of \(\mathbb{C}^{+}\), it suffices to bound \(D_{ij}^{r}(V^{*}U)_{ij}\) on the compact set. By elementary calculations, we can get the derivatives of \(V^{*}U\) with respect to the entries \(V_{ij}\), \(i = 1, 2,\ldots, n\), \(j = 1, 2,\ldots, N\),
As \(U=(VV^{*}-zI_{n\times n})^{-1}\), it follows that \(\|U\|\leq \frac{1}{v}\) and \(\vert U_{ii} \vert \leq \frac{1}{v} \). Define \(\widetilde{U}=(V^{*}V-zI_{N\times N})^{-1}\). We also have \(\|\widetilde{U}\|\leq \frac{1}{v} \). By the facts that \(V\widetilde{U}=UV\) and \(V^{*}V\widetilde{U}=V^{*}UV\), we get
and
By the Cauchy–Schwarz inequality, we have
It follows by the Cauchy inequality in Lemma A.4 that
and
hold uniformly on any compact set of \(\mathbb{C}^{+}\).
Since \(\frac{1}{n}\sum_{i=1}^{n}|\frac{1}{N}\sum_{j=1}^{N}\sigma _{ij}^{2}-1| \rightarrow {0} \) by the assumption in Theorem 1.2,
which, together with (2.8) and (2.9), yields \(\widehat{I}=o(1)\).
By the assumption (1.2), without loss of generality, we select a sequence \(\eta _{n}\downarrow {0}\) with \(\eta _{n} > 1/\log N\) as \(n\rightarrow {\infty }\). Then
We also easily get \(\frac{1}{nN}\sum_{i,j}E|Y_{ij}|^{2}=1+o(1)\). Using (2.9) again, we can see
As \(\eta _{n}\rightarrow 0\), we have
for any \(z\in \mathbb{C}^{+}\), which completes the proof of (ii).
Based on steps (i) and (ii), we conclude that
\[ m_{n}(z)\rightarrow m(z) \quad \text{a.s. for every fixed } z\in \mathbb{C}^{+}, \]
and hence, almost surely, \(F^{S_{n}}\) converges weakly to the M-P law. The proof of Theorem 1.2 is complete.
References
Arnold, L.: On the asymptotic distribution of the eigenvalues of random matrices. J. Math. Anal. Appl. 20, 262–268 (1967)
Arnold, L.: On Wigner’s semicircle law for the eigenvalues of random matrices. Probab. Theory Relat. Fields 19, 191–198 (1971)
Bai, Z., Silverstein, J.: Spectral Analysis of Large Dimensional Random Matrices. Science Press, Beijing (2010)
Bai, Z., Yin, Y.: Convergence to the semicircle law. Ann. Probab. 16, 863–875 (1988)
Burkholder, D.: Distribution function inequalities for martingales. Ann. Probab. 1, 19–42 (1973)
Grenander, U.: Probabilities on Algebraic Structures. Wiley, New York (1963)
Grenander, U., Silverstein, J.: Spectral analysis of networks with random topologies. SIAM J. Appl. Math. 32, 499–519 (1977)
Jonsson, D.: Some limit theorems for the eigenvalues of a sample covariance matrix. J. Multivar. Anal. 12, 1–38 (1982)
Lytova, A., Pastur, L.: Central limit theorem for linear eigenvalue statistics of random matrices with independent entries. Ann. Probab. 37, 1778–1840 (2009)
Marčenko, V., Pastur, L.: Distribution of eigenvalues for some sets of random matrices. Math. USSR Sb. 1, 457–483 (1967)
Markushevich, A., Silverstein, R.: Theory of Functions of a Complex Variable. AMS Chelsea Series. AMS, Providence (2005)
Pastur, L.: On the spectrum of random matrices. Theor. Math. Phys. 10, 67–74 (1972)
Silverstein, J.: Strong convergence of the empirical distribution of eigenvalues of large dimensional random matrices. J. Multivar. Anal. 55, 331–339 (1995)
Silverstein, J., Bai, Z.: On the empirical distribution of eigenvalues of a class of large dimensional random matrices. J. Multivar. Anal. 54, 175–192 (1995)
Wigner, E.: Characteristic vectors of bordered matrices with infinite dimensions. Ann. Math. 62, 548–564 (1955)
Wigner, E.: On the distribution of the roots of certain symmetric matrices. Ann. Math. 67, 325–327 (1958)
Yin, Y.: Limiting spectral distribution for a class of random matrices. J. Multivar. Anal. 20, 50–68 (1986)
Yin, Y., Krishnaiah, P.: A limit theorem for the eigenvalues of product of two random matrices. J. Multivar. Anal. 13, 489–507 (1983)
Availability of data and materials
Data sharing not applicable to this article as no datasets were generated during the current study.
Funding
This research is supported by NNSF Grant No. 11401169 and KRPH Grant No. 20A110001.
Author information
Contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Ethics declarations
Conflicts of interest
The authors declare that there is no conflict of interests regarding the publication of this article.
Appendix: Some lemmas
We will list several important lemmas in our proofs. The first one is the Burkholder inequality for a complex martingale difference sequence, which can be found in Burkholder [5].
Lemma A.1
Let \(\{X_{k}\}\) be a complex martingale difference sequence with respect to the increasing σ-fields \(\{\mathcal{F}_{k}\}\). Then, for \(p>1\),
\[ E \biggl\vert \sum_{k}X_{k} \biggr\vert ^{p}\leq K_{p}E \biggl( \sum_{k}|X_{k}|^{2} \biggr) ^{p/2}, \]
where \(K_{p}\) is a constant depending only on p.
The second one is the well-known Stein equation for independent Gaussian random variables. It should be noted that a similar result holds for independent complex-valued Gaussian random variables.
Lemma A.2
Let ξ be a Gaussian random variable with mean zero, and let \(\varPhi: \mathbb{R}\rightarrow \mathbb{C}\) be a differentiable function with bounded derivative \(\varPhi '\). Then we have
\[ E \bigl[ \xi \varPhi (\xi ) \bigr] =E \bigl[ \xi ^{2} \bigr] E \bigl[ \varPhi '(\xi ) \bigr] . \]
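A numerical sanity check of the Stein equation (illustrative; the test function Φ(x) = sin x, which has bounded derivative as the lemma requires, is our own choice):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma = 2.0
xi = sigma * rng.standard_normal(1_000_000)   # Gaussian, mean zero, variance sigma^2

# Phi(x) = sin(x), Phi'(x) = cos(x).
lhs = np.mean(xi * np.sin(xi))                # E[xi Phi(xi)]
rhs = np.mean(xi**2) * np.mean(np.cos(xi))    # E[xi^2] E[Phi'(xi)]
print(f"lhs = {lhs:.4f}, rhs = {rhs:.4f}")    # both ~ sigma^2 * exp(-sigma^2 / 2)
```

Both Monte Carlo estimates agree with the closed-form value \(\sigma ^{2}e^{-\sigma ^{2}/2}\) up to sampling error.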
The third one is the generalized Stein equation for independent random variables, which can be found in Lytova and Pastur [9, Proposition 3.1].
Lemma A.3
Let ξ be a random variable such that \(E(|\xi |^{p+2})<\infty \) for a certain nonnegative integer p. Then, for any function \(\varPhi: \mathbb{R}\rightarrow \mathbb{C}\) of the class \(C^{p+1}\) with bounded derivatives \(\varPhi ^{(l)}\), \(l=1,2,\ldots,p+1\), we have
\[ E \bigl[ \xi \varPhi (\xi ) \bigr] =\sum_{l=0}^{p}\frac{\kappa _{l+1}}{l!}E \bigl[ \varPhi ^{(l)}(\xi ) \bigr] +\varepsilon _{p}, \]
where \(\kappa _{l}\) is the lth cumulant of ξ, and the remainder term \(\varepsilon _{p}\) admits the bound
\[ |\varepsilon _{p}|\leq C_{p}E \bigl( |\xi |^{p+2} \bigr) \sup_{t\in \mathbb{R}} \bigl\vert \varPhi ^{(p+1)}(t) \bigr\vert , \]
where \(C_{p}\) is a constant depending only on p.
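The generalized Stein equation can be checked exactly for a Rademacher variable ξ (values ±1 with probability 1/2, so κ₂ = 1, κ₄ = −2, and the odd cumulants vanish) with the polynomial Φ(x) = x³, for which Φ⁽⁴⁾ ≡ 0 and the remainder ε₃ vanishes. This is an illustrative check, not from the paper:

```python
import numpy as np

support = np.array([-1.0, 1.0])               # Rademacher: P(xi = ±1) = 1/2

phi   = lambda x: x**3
dphi1 = lambda x: 3.0 * x**2                  # Phi'
dphi3 = lambda x: 6.0 + 0.0 * x               # Phi'''

lhs = np.mean(support * phi(support))         # E[xi Phi(xi)] = E[xi^4] = 1

# Cumulants of Rademacher: kappa_2 = 1, kappa_4 = E[xi^4] - 3 (E[xi^2])^2 = -2;
# kappa_1 = kappa_3 = 0, so only l = 1 and l = 3 contribute in the p = 3 expansion.
kappa2, kappa4 = 1.0, -2.0
rhs = kappa2 * np.mean(dphi1(support)) + (kappa4 / 6.0) * np.mean(dphi3(support))
print(lhs, rhs)                               # -> 1.0 1.0
```
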
At last, a useful result on the bound of analytic function will be introduced, which is quoted from Markushevich and Silverstein [11, Theorem 14.7].
Lemma A.4
(Cauchy inequality)
Let \(f(z)\) be analytic in a simply connected domain D that contains the circle \(C_{R}(z_{0})=\{z:|z-z_{0}|=R\}\). If \(|f(z)|\leq M\) holds for all points \(z\in C_{R}(z_{0})\), then, for every integer \(n\geq 0\),
\[ \bigl\vert f^{(n)}(z_{0}) \bigr\vert \leq \frac{n!M}{R^{n}}. \]
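Lemma A.4 can also be verified numerically (illustrative; the choices \(f(z)=e^{z}\), \(z_{0}=0\), \(R=1\), \(k=3\) are ours): approximate \(f^{(k)}(z_{0})\) by discretizing the Cauchy integral formula on \(C_{R}(z_{0})\) and compare with the bound \(k!M/R^{k}\):

```python
import math
import numpy as np

R, k, m = 1.0, 3, 4096
theta = 2.0 * np.pi * np.arange(m) / m
z = R * np.exp(1j * theta)                    # the circle C_R(0)

f = np.exp(z)                                 # f(z) = e^z, entire
# Cauchy integral formula: f^{(k)}(0) = k!/(2 pi i) \oint f(z) / z^{k+1} dz,
# which on the circle reduces to k! * mean_theta( f(z) / z^k ).
deriv = math.factorial(k) * np.mean(f / z**k)

M = np.exp(R)                                 # max of |f| on the circle
bound = math.factorial(k) * M / R**k
print(abs(deriv), bound)                      # |f'''(0)| = 1, bound = 6e ~ 16.31
```

The trapezoidal rule on a circle is spectrally accurate for analytic integrands, so the recovered derivative is essentially exact and comfortably below the Cauchy bound.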
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Jin, S., Xie, J. A result on the limiting spectral distribution of random matrices with unequal variance entries. J Inequal Appl 2020, 174 (2020). https://doi.org/10.1186/s13660-020-02440-7