1 Introduction

In the study of spectral properties of random operators, generically denoted below by \(H_\omega \), one is led to consider random elements of a class of functions of a complex variable \(z\), which is variously named after Pick ([13]) or Herglotz ([12]). Included in this class are functions of the form:

$$\begin{aligned}&R_{\omega ,n}(z) = \frac{1}{n} {{\mathrm{tr}}}\frac{1}{H_{\omega ,n} -z} \, = \, \frac{1}{n} \sum _{j=1}^n \frac{1}{E_{\omega ,j}^{(n)}-z} \nonumber \\&R_{\omega ,n}^{\phi }(z) = \left\langle \phi , \frac{1}{H_{\omega ,n}-z} \, \phi \right\rangle \, = \, \sum _{j=1}^n \frac{|\langle \phi | \psi _{\omega ,j}^{(n)}\rangle |^2}{E_{\omega ,j}^{(n)}-z}. \end{aligned}$$
(1.1)

where \(H_{\omega ,n}\) are operators acting in spaces of finite dimension, or alternatively \(n\times n\) matrices. In the second example \(\phi \) is a vector in the space on which \(H_{\omega ,n}\) acts and the expressions on the right correspond to the operator’s spectral representation.
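For orientation, the two expressions in (1.1) can be checked against one another numerically. The following minimal sketch (in Python, with numpy; the GUE-type normalization of the matrix and the choice of the vector \(\phi \) are arbitrary choices made only for illustration) computes \(R_{\omega ,n}(z)\) and \(R^{\phi }_{\omega ,n}(z)\) both directly and through the spectral sums on the right of (1.1).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# An arbitrary Hermitian random matrix (GUE-type normalization, for illustration only).
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / (2 * np.sqrt(n))

z = 0.3 + 0.05j                         # a point in the upper half plane C^+
eigvals, eigvecs = np.linalg.eigh(H)

# R_n(z) = (1/n) tr (H - z)^{-1}  versus  (1/n) sum_j 1/(E_j - z)
R_trace = np.trace(np.linalg.inv(H - z * np.eye(n))) / n
R_poles = np.mean(1.0 / (eigvals - z))

# R_n^phi(z) = <phi, (H - z)^{-1} phi>  versus  sum_j |<phi, psi_j>|^2 / (E_j - z)
phi = np.zeros(n); phi[0] = 1.0
R_phi = phi @ np.linalg.solve(H - z * np.eye(n), phi)
R_phi_poles = np.sum(np.abs(eigvecs.conj().T @ phi) ** 2 / (eigvals - z))

print(abs(R_trace - R_poles))           # agreement up to rounding errors
print(abs(R_phi - R_phi_poles))
```

Both values have positive imaginary part, as they must for functions of the Herglotz–Pick class introduced next.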

More generally, the Herglotz–Pick (HP) class, as defined here,Footnote 1 consists of analytic functions from the upper half plane \(\mathbb {C}^+ := \{ z \in \mathbb {C}\, | \, {{\mathrm{Im \,}}}z > 0 \} \) into its closure \(\overline{\mathbb {C}^+} = \mathbb {C}^+ \cup {\mathbb {R}}\). By the Herglotz representation theorem (cf. [12]) each such function admits a unique spectral representation as

$$\begin{aligned} F(z) \, = \, b+ a z + \int _\mathbb {R}\left( \frac{1}{u-z} - \frac{u}{u^2+1}\right) \mu (du). \end{aligned}$$
(1.2)

with \(a \ge 0\), \(b \in \mathbb {R}\), and \(\mu \) a non-negative Borel measure on \( \mathbb {R}\), referred to as the spectral measure of \( F \), satisfying:

$$\begin{aligned} \int (u^2+1)^{-1} \mu (du) < \infty . \end{aligned}$$
(1.3)

Of particular interest here are the scaling limits in which spectra of finite dimensional operators of increasing dimension are studied on the scale of the eigenvalue spacing [2, 4, 6, 7, 18, 23, 27]. The functions of interest may be found to converge in an appropriate distributional sense to random HP functions of singular spectrum, which in the simplest case consists of simple poles located along \(\mathbb {R}\). In the latter case, (1.2) extends to a random meromorphic function, whose spectrum forms a random point process on \(\mathbb {R}\).

Our main purpose here is twofold. One is to clarify some of the relations between shift invariant scaling limits of point processes and the limits of the corresponding random HP functions. The other is to present the general observation that translation invariance, and more specifically ‘shift amenability’, of an HP function with singular spectral measure implies that the probability distribution of the boundary values \(F(x) := F(x+i0)\) is a Cauchy distribution. The examples to which this principle applies include scaling limits of eigenvalue point processes of a number of random matrix models whose spectral statistics are otherwise of quite different nature. This includes both limits of random diagonal matrices without level repulsion, and those of random matrix ensembles within the GXE domains of attraction. The latter case includes a class of random Wigner matrices for which the result is established by combining the general criteria derived here with previous analytical results [16, 17, 29] on the convergence of the local law to the scaling limit of the GUE ensemble.

The topics discussed here are of relevance for quantum transport in mesoscopic quantum systems. In that context, an argument for the general appearance of the Cauchy distribution was first presented by Mello [24],Footnote 2 who also proposed an extension of this principle to a somewhat less universal law concerning the limiting (joint) distribution of arbitrary size (\(k\times k\)) resolvent subblocks of random matrices of much larger size (\(n \times n\), with \(n \gg k\)). Support for some of Mello’s reasoning was presented by Brouwer [9], who pointed out that the statement is in fact exactly true within a Lorentzian matrix ensemble, where it holds for any \(k\le n\), and in [19], [20, Ch.IV] and [21, App. A], using supersymmetric calculations on other GXE ensembles in the large \(n\) limit. Other, more recent results are mentioned in Sect. 6.3.

2 Cauchy distribution in shift amenable HP functions

2.1 Definition and main result

It is of relevance to recall here the following general result.

Proposition 2.1

(de la Vallée Poussin, see e.g. [12, 15]) For any function \( F(z)\) in the HP class the limit

$$\begin{aligned} F(x+i 0)\, := \, \lim _{\eta \downarrow 0} F(x+i\eta ) \end{aligned}$$
(2.1)

exists for Lebesgue-almost every \(x\in \mathbb {R}\).

Definition 2.2

A measurable function \(K: \mathbb {R}\mapsto \mathbb {C}\) will be said here to be shift amenable if there is a probability measure \(\nu \) on \(\mathbb {C}\) (supported necessarily on its range’s closure \(\overline{\text {Ran } K} \)) such that for any continuous bounded function \(\Psi : \overline{\text {Ran } K} \mapsto \mathbb {C}\) the following limit exists and satisfies

$$\begin{aligned} \lim _{L\rightarrow \infty } \frac{1}{L} \int _{-L/2}^{L/2} \Psi (K(x) ) \, dx \,= \, \int _\mathbb {C}\Psi ( w ) \, \nu (dw) \, =: \, \nu (\Psi ). \end{aligned}$$
(2.2)

We refer to \(\nu \equiv \nu _K\) as \(K\)’s distribution under shifts.

In other words, a function is shift amenable if, when \(x\) is sampled uniformly over the interval \([-L/2,L/2]\) with \(L\rightarrow \infty \), the distribution of the values \(K(x)\) is asymptotically described by a probability measure \(\nu \) on \(\overline{\text {Ran } K}\).

As noted in Sect. 5.1, shift amenable functions appear naturally among the typical realizations of random functions with shift invariant law. The following statement is however deterministic, in the sense that it applies to every shift amenable function.

Theorem 2.3

Let \(F(z)\) be a HP function whose boundary values satisfy:

  1. 1.

    \({{\mathrm{Im \,}}}F(x+i0) = 0\) for Lebesgue-almost every \(x\in \mathbb {R}\).

  2. 2.

    \(F_0(x):=F(x+i0)\) is shift amenable.

Then under shifts \(F_0(x)\) has a Cauchy distribution.

By a Cauchy distribution we refer here to a probability law, parameterized by \(\Gamma \in \overline{\mathbb {C}^+}\), of the form

$$\begin{aligned} {\mathbb P}\left( dF \right) \, = \, \pi ^{-1} \frac{{{\mathrm{Im \,}}}\Gamma \, dF}{(F-{{\mathrm{Re \,}}}\Gamma )^2+ ({{\mathrm{Im \,}}}\Gamma )^2}, \end{aligned}$$
(2.3)

which for \(\Gamma \in \mathbb {R}\) is to be interpreted as a \(\delta \)-measure located at \({{\mathrm{Re \,}}}\Gamma \). We refer to \(\Gamma \in \mathbb {C}^+\) as the Cauchy distribution’s ‘analytic baricenter’. More is said on its value in the present context in Theorem 5.5 below.

Condition (1) in Theorem 2.3 is equivalent to the statement that the spectral measure \(\mu \) of \( F \) has no absolutely continuous component, as the latter is in general given by \(\pi ^{-1} {{\mathrm{Im \,}}}F(x+i0) \, dx\). In the theorem’s proof use is made of the following auxiliary statements. In the first one, the range of \(K\) is limited to \(\overline{\mathbb {C}}^+ \) in order to make the statement applicable to functions such as \(\Psi (z) = 1/(z+i)\).

Lemma 2.4

If \(K\) is a shift amenable function with range \(\text {Ran } K \subset \overline{\mathbb {C}}^+ \), then for any bounded continuous function \(\Psi : \overline{\mathbb {C}}^+ \mapsto \mathbb {C}\), and any monotone decreasing \(g:\mathbb {R}_+ \mapsto \mathbb {R}_+\) satisfying the normalization condition \(\int _\mathbb {R}g(|u|) \, du =1\):

$$\begin{aligned} \nu _{K}(\Psi ) = \lim _{\eta \rightarrow \infty } \int _\mathbb {R}\Psi (K(u)) \, g(|u-x|/\eta ) \, \frac{du}{ \eta }, \end{aligned}$$
(2.4)

where the limit does not depend on \(x\in \mathbb {R}\).

Proof

For \(g(x) = 2^{-1} \mathbb {1}[|x| <1]\) the statement holds by the definition of shift amenability. The extension to more general \(g\) is by a standard application of Abel’s lemma, which can be deduced through the ‘layer-cake’ representation: \(g(t) = \int _0^\infty \mathbb {1}[ g(t) \ge \tau ]\, d \tau \). \(\square \)
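For completeness, here is a sketch of that reduction, under the stated monotonicity and normalization assumptions on \(g\). By the layer-cake representation and Fubini’s theorem,

$$\begin{aligned} \int _\mathbb {R}\Psi (K(u)) \, g(|u-x|/\eta ) \, \frac{du}{ \eta } \, = \, \int _0^\infty \left( \int _{|u-x| \le \eta \, g^{-1}(\tau )} \Psi (K(u)) \, \frac{du}{\eta } \right) d\tau , \end{aligned}$$

where \(g^{-1}(\tau ) := \sup \{ t \ge 0 : g(t) \ge \tau \}\). For each fixed \(\tau \) the inner integral equals \(2 g^{-1}(\tau )\) times an average of \(\Psi (K)\) over an interval of length \(2\eta \, g^{-1}(\tau )\); by shift amenability, and since for bounded \(\Psi \) the off-centering by \(x\) is asymptotically negligible, this converges to \(2 g^{-1}(\tau )\, \nu _K(\Psi )\) as \(\eta \rightarrow \infty \). Since \(\int _0^\infty 2 g^{-1}(\tau ) \, d\tau = \int _\mathbb {R}g(|u|)\, du = 1\), dominated convergence (with dominating function \(2 g^{-1}(\tau )\, \Vert \Psi \Vert _\infty \)) yields (2.4), with a limit which is independent of \(x\).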

Lemma 2.5

Let \( F(z): \mathbb {C}^+ \mapsto \overline{\mathbb {C}^+} \) be a Herglotz–Pick function whose boundary value function \(F_0(x):= F(x+i0)\) is shift amenable. Then the following limits exist and satisfy:

  1. 1.

    for any bounded continuous \(\Psi : \overline{\mathbb {C}^+} \rightarrow \mathbb {C}\) which is analytic on \( \mathbb {C}^+\), and any \(x\in \mathbb {R}\):

    $$\begin{aligned} \nu _{F_0}(\Psi ) \, = \, \lim _{\eta \rightarrow \infty } \Psi (F(x+i\eta )), \end{aligned}$$
    (2.5)
  2. 2.

    for every \(x\in \mathbb {R}\) (which however does not affect the limit):

    $$\begin{aligned} \lim _{\eta \rightarrow \infty } F(x+i\eta ) \, = \, \left\{ \int \frac{\nu _{F_0}(dw)}{w +i} \right\} ^{-1} -i \, =: \, \Gamma . \end{aligned}$$
    (2.6)

Proof

  1. 1.

    Since \( z \mapsto \Psi (F(z)) \) is bounded and analytic over \(\mathbb {C}^+\), its values where \({{\mathrm{Im \,}}}z>0\) admit the Poisson integral representation (cf. [15, Thm. 11.2]):

    $$\begin{aligned} \Psi (F(x+i\eta )) \, = \, \int \Psi (F(u+i 0)) \, \frac{\pi ^{-1} \, \eta \, \, du}{(u-x)^2+\eta ^2}, \end{aligned}$$
    (2.7)

    By Lemma 2.4, with \(g(u) = \pi ^{-1}/(u^2+1)\), in the limit \(\eta \rightarrow \infty \) the expression on the right converges to \(\nu _{F_0}(\Psi )\).

  2. 2.

    The second statement, (2.6), follows by applying (2.5) to the function \(\Psi (w) := -[w+i]^{-1}\).

\(\square \)

Proof of Theorem 2.3

By (2.5), applied to the function \(\Psi (w) = e^{it w }\) with \(t \in (0,\infty )\), we learn that:

$$\begin{aligned} \int e^{i t w} \nu _{F_0}(dw) \, = \lim _{\eta \rightarrow \infty } e^{i t F(x+i\eta )} \, = \, e^{it\Gamma } \end{aligned}$$
(2.8)

where the limit is evaluated using (2.6).

The above argument yields the characteristic function of the probability measure \(\nu _{F_0}\) for \(t>0\) (that part being applicable regardless of the first assumption of the theorem). However, under the assumption that \(F_0(x)\) is real for Lebesgue-almost every \(x\in \mathbb {R}\), the characteristic function at \(t<0\) can also be obtained from (2.8) through complex conjugation. Thus, under this assumption, for any \(t\in \mathbb {R}\):

$$\begin{aligned} \int e^{i t z} \nu _{F_0}(dz) \, = e^{it {{\mathrm{Re \,}}}\Gamma - |t| {{\mathrm{Im \,}}}\Gamma }. \end{aligned}$$
(2.9)
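In more detail: by condition (1), \(\nu _{F_0}\) is supported on \(\mathbb {R}\), so for \(t<0\)

$$\begin{aligned} \int e^{i t w} \nu _{F_0}(dw) \, = \, \overline{ \int e^{i |t| w} \nu _{F_0}(dw) } \, = \, \overline{ e^{i|t| \Gamma }} \, = \, e^{-i |t| \overline{\Gamma }} \, = \, e^{it {{\mathrm{Re \,}}}\Gamma - |t| {{\mathrm{Im \,}}}\Gamma }, \end{aligned}$$

which, combined with (2.8) for \(t>0\) (and trivially at \(t=0\)), gives (2.9).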

Since probability measures on \(\mathbb {R}\) are uniquely determined by their characteristic functions, (2.9) implies that the probability distribution \(\nu _{F_0}\) coincides with that of \( {{\mathrm{Re \,}}}\Gamma + \xi \, {{\mathrm{Im \,}}}\Gamma \) where \(\xi \) is the standard Cauchy random variable of the probability distribution \(\pi ^{-1} d \xi /[\xi ^2+1]\). \(\square \)

2.2 Examples and the relation of the Cauchy law with Boole’s identity

Following are some examples of functions to which Theorem 2.3 applies. One may note that these functions differ quite significantly in the structure of the higher correlations, which however do not affect the common Cauchy law.

  1. 1.

    The periodic function (cf. [1, Ch. 19])

    $$\begin{aligned} F^{Per} (z) \, = \, -\pi \cot (\pi z) \, = \, \frac{-1}{z} \, - \, \sum _{n=1}^\infty \left( \frac{1}{z-n} + \frac{1}{z+n} \right) \end{aligned}$$
    (2.10)
  2. 2.

    Quasi-periodic functions of the form

    $$\begin{aligned} F^{QP}(z) \, = \, - \sum _{j=1}^M \alpha _j \cot (\beta _j z+ \theta _j) \end{aligned}$$
    (2.11)

    with \(\alpha _j \ge 0\) and \(\beta _j, \theta _j \in \mathbb {R}\).

  3. 3.

    The random function with Poisson distributed poles:

    $$\begin{aligned} F_\omega ^{Poi}(z) \, = \, \lim _{N\rightarrow \infty } \sum _{u\in \omega \cap [-N,N]} \frac{1}{u-z} \end{aligned}$$
    (2.12)

    where \(\omega \subset \mathbb {R}\) is a random configuration of the Poisson point process on \(\mathbb {R}\) with intensity \(dx\). In this case, the assumptions of Theorem 2.3 hold for almost every \(\omega \) (cf. Sect. 4.2).

  4. 4.

    A function whose singularities have the \(\beta \)-ensemble statistics, e.g.

    $$\begin{aligned} F_\omega ^{GUE}(z) \, = \, \lim _{N\rightarrow \infty } \sum _{u\in \omega \cap [-N,N]} \frac{1}{u-z} \end{aligned}$$
    (2.13)

    where \(\omega \) is a configuration of the shift invariant determinantal point process associated with the kernel \(K(x,y) = \frac{\sin \pi (x-y)}{\pi (x-y)}\) (cf. Sect. 4.2).

  5. 5.

    More generally than the previous two examples, \(F(z)\) could be a random HP function of shift invariant distribution, as defined in Sect. 5.1 below. The almost-sure shift amenability of such functions is a consequence of Birkhoff’s ergodic theorem.
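The Cauchy law is easy to probe numerically in the first of these examples: by (2.6) the analytic baricenter of \(F^{Per}\) is \(\Gamma = \lim _{\eta \rightarrow \infty } \big (-\pi \cot (\pi (x+i\eta ))\big ) = i\pi \), so sampling \(x\) uniformly over a long interval should produce boundary values distributed as a centered Cauchy variable of width \(\pi \). A minimal check (in Python; scipy is assumed to be available for the reference law):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
L = 10_000.0
x = rng.uniform(-L / 2, L / 2, size=200_000)

# Boundary values of the periodic HP function F^Per(x) = -pi * cot(pi x) of (2.10).
values = -np.pi / np.tan(np.pi * x)

# Compare with the Cauchy law of analytic baricenter Gamma = i*pi
# (location 0, scale pi) via the Kolmogorov-Smirnov statistic.
ks = stats.kstest(values, stats.cauchy(loc=0.0, scale=np.pi).cdf)
print(ks.statistic)   # small, of the order expected for this sample size
```

Analogous experiments on the other examples yield the same Cauchy form of the one-point law, with the width determined by the analytic baricenter as in (2.6).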

The universality of the first order statistics, which holds regardless of the differences in the second order statistics, expresses the fact that the fraction of Lebesgue measure (\(\mathcal L\)):

$$\begin{aligned} \frac{\mathcal L (\{ x\in [-L/2,L/2] \, : \, F(x) >t \} )}{L} \end{aligned}$$
(2.14)

is not affected by a wide range of rearrangements of the singularities. These may include both shifts of the singularities’ locations and splits of their mass. A similar “integrability” condition can be spotted to lie behind an identity which Boole presented to the Royal Society in 1857. In a slightly generalized form, the Boole identity may be stated as follows.

Proposition 2.6

(Extension of Boole [8]) For any finite singular measure \(\mu (dx)\) which has no absolutely continuous component the function

$$\begin{aligned} F(z) \, = \, \int _\mathbb {R}\frac{\mu (du) }{u-z} \end{aligned}$$
(2.15)

satisfies, for any \(t>0\):

$$\begin{aligned} \mathcal L (\{ x\in \mathbb {R}\, : \, F(x+i0) >t \} ) \, = \, \frac{\mu (\mathbb {R})}{t} \end{aligned}$$
(2.16)

Boole’s original Theorem was stated and proven for point measures of finite support. For convenience, a proof of this generalization is enclosed in Appendix 7.
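For a purely atomic measure \(\mu = \sum _j m_j \delta _{u_j}\), the content of (2.16) can also be seen numerically by measuring the level sets of \(F(x) = \sum _j m_j/(u_j-x)\) on a fine grid; the following sketch (grid-based, hence only approximate) does so for a randomly chosen finite point measure.

```python
import numpy as np

rng = np.random.default_rng(2)

# A finite point measure: masses m_j at locations u_j.
u = rng.uniform(-5.0, 5.0, size=20)
m = rng.uniform(0.1, 1.0, size=20)

def F(x):
    # Stieltjes transform of the point measure along the real line,
    # F(x) = sum_j m_j / (u_j - x), evaluated away from the poles u_j.
    return np.sum(m[None, :] / (u[None, :] - x[:, None]), axis=1)

t = 2.0
x = np.linspace(-20.0, 20.0, 400_001)      # fine grid covering the relevant level set
dx = x[1] - x[0]
level_set_measure = np.sum(F(x) > t) * dx  # approximates the Lebesgue measure of {x : F(x) > t}

print(level_set_measure, m.sum() / t)      # the two agree up to the grid resolution
```

Note that only the total mass \(\mu (\mathbb {R}) = \sum _j m_j\) enters on the right of (2.16), irrespective of how that mass is split among the atoms.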

3 The spectral representation and related topology

3.1 An alternative spectral representation

As an alternative to (1.2), each HP function can also be written as

$$\begin{aligned} F(z) = G(w(z)) \qquad \text{ with } \quad z \, =\, i \frac{1+w}{1-w} ,\qquad w\, = \, \, \frac{z-i}{z+i}, \end{aligned}$$
(3.1)

with

$$\begin{aligned} G(w) \, = \, b\, + \, \int _{S} \sigma ( d \theta ) \, \, i\, \, \frac{e^{i\theta } +w}{e^{i\theta } -w} \end{aligned}$$
(3.2)

where \( \sigma \) is a uniquely defined finite measure on the unit circle \(S\) and \( w \) is a point in the unit disk \( \mathbb {D} \). The correspondence between the two representations is:

$$\begin{aligned} \frac{\mu (dx)}{x^2+1} \, = \, \sigma ( d\theta ) \, \mathbb {1}[\theta \ne 0], \qquad a \, = \, \sigma (\{0 \}). \end{aligned}$$
(3.3)

with \(x= - \cot (\theta /2)\), where the coefficient \(a\) of (1.2) is incorporated as a \(\delta \)-point mass of \(\sigma \).
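The correspondence (3.3) is a consequence of the following elementary identity: with \(u = -\cot (\theta /2)\), equivalently \(e^{i\theta } = (u-i)/(u+i)\), and \(w = (z-i)/(z+i)\) as in (3.1), a direct computation gives

$$\begin{aligned} i\, \frac{e^{i\theta } +w}{e^{i\theta } -w} \, = \, \frac{1+u z}{u-z} \, = \, (u^2+1) \left( \frac{1}{u-z} - \frac{u}{u^2+1}\right) . \end{aligned}$$

Thus for \(\theta \ne 0\) the integrand of (3.2) matches that of (1.2) provided \(\mu (du) = (u^2+1)\, \sigma (d\theta )\), while the point \(\theta = 0\) (i.e. \(e^{i\theta }=1\)) contributes \(i\, \sigma (\{0\})\, (1+w)/(1-w) = \sigma (\{0\})\, z\), which is the linear term \(a z\) of (1.2).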

Thus, any HP function is uniquely associated with the pair \( (\sigma , G(0) )\) in the space

$$\begin{aligned} \Omega \, =\, \left\{ \sigma \in \mathcal {M}(S) \, : \int \sigma (d\theta ) < \infty \, \right\} \, \times \, \overline{\mathbb {C}^+ } \end{aligned}$$
(3.4)

or equivalently with the pair \((\mu , F(i) ) \) in the space

$$\begin{aligned} \widetilde{\Omega }\, =\, \left\{ \mu \in \mathcal {M}(\mathbb {R}) \, \, : \, \int \frac{\mu (du) }{u^2+1} < \infty \, \right\} \, \times \, \overline{\mathbb {C}^+}, \end{aligned}$$
(3.5)

and correspondingly the space of HP functions can be identified with either \( \Omega \) or \(\widetilde{\Omega }\), and we shall be frequently switching between the two.

3.2 The topology of pointwise convergence

A natural topology on the collection of HP functions is that of uniform convergence on compact subsets of \( \mathbb {C}^+ \) (uniform convergence preserves analyticity as well as the restriction \({{\mathrm{Im \,}}}F(z) \ge 0\)). However, it is a known consequence of the Montel theorem that under an added restriction on the range of the functions the conditions can be simplified. In particular, for HP functions uniformity on compacta follows from just pointwise convergence over \(\mathbb {C}^+\) (as can also be seen from the bounds presented below). In this section our goal is to clarify the expression of this topology in terms of the spectral representation. Particularly convenient for this purpose is the representation of HP functions in the space \(\Omega \), in terms of (3.2).

The parameter \(b\), as well as the total mass \(\sigma (S)\) of the measure, are continuous in the topology of pointwise convergence, since

$$\begin{aligned} b \, = \, {{\mathrm{Re \,}}}G(0), \qquad \sigma (S) = {{\mathrm{Im \,}}}G(0), \end{aligned}$$
(3.6)

where \(G(0) \equiv F(i) \).

For a pair of measures on \(S\) the variational distance is

$$\begin{aligned} |m_1 -m_2| = \sup \left\{ \int _{S} \, f(e^{i\theta })\, [ m_1(d\theta )-m_2(d\theta ) ] \, \big | \, f \in C(S) ; \, \Vert f\Vert _{ \infty } \le 1 \right\} ,\qquad \end{aligned}$$
(3.7)

and in case of measures of equal mass the Wasserstein distance is

$$\begin{aligned} W(m_1,m_2) \, = \, \sup \left\{ \int _{S} \, f(e^{i\theta })\, [ m_1(d\theta )-m_2(d\theta ) ] \, \big | \, \mathrm {Lip}(f) \le 1 \right\} . \end{aligned}$$
(3.8)

To make this applicable to pairs of HP functions \(G_j\) with spectral measures \(\sigma _j(d\theta )\) of different masses, let us first consider the case \( \sigma _j(S) \ne 0 \) and denote the corresponding probability measures by

$$\begin{aligned} \widetilde{\sigma }_j (d \theta ) \, = \, \frac{\sigma _j(d\theta )}{\sigma _j(S)}. \end{aligned}$$
(3.9)

By direct estimates, for all \(w\in \mathbb D\) and all \( \theta \in S\),

$$\begin{aligned} \left| \left( \frac{e^{i\theta }+w}{e^{i\theta }-w} \right) \right| \, \le \, \frac{2}{ 1-|w|}, \qquad \left| \frac{d}{d\theta } \left( \frac{e^{i\theta }+w}{e^{i\theta }-w} \right) \right| \, \le \, \frac{2\, |w|}{ (1-|w|)^2}, \end{aligned}$$
(3.10)

one may hence conclude:

$$\begin{aligned} |G_1(w) \, - \, G_2(w)| \,&\le \, |{{\mathrm{Re \,}}}G_1(0) - {{\mathrm{Re \,}}}G_2(0)|\nonumber \\&+ | {{\mathrm{Im \,}}}( G_1(0) - G_2(0) ) | \, \frac{2 }{ 1-|w|} \nonumber \\&+ \, \frac{\sigma _1(S) + \sigma _2(S)}{2} \, W(\widetilde{\sigma }_1, \widetilde{\sigma }_2) \, \frac{2\, |w|}{ (1-|w|)^2}. \end{aligned}$$
(3.11)

If one (or both) of the measures is of zero mass the last term can be dropped, since then its “normalized” measure can be selected arbitrarily, and “by fiat” it can be arranged so that \( \widetilde{\sigma }_j \) are equal and thus \(W(\widetilde{\sigma }_1, \widetilde{\sigma }_2) =0\).
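For the reader’s convenience, here is the short computation behind (3.11). Writing the difference of the spectral measures symmetrically as

$$\begin{aligned} \sigma _1 - \sigma _2 \, = \, \big ( \sigma _1(S) - \sigma _2(S) \big ) \, \frac{\widetilde{\sigma }_1 + \widetilde{\sigma }_2}{2} \, + \, \frac{\sigma _1(S) + \sigma _2(S)}{2} \, \big ( \widetilde{\sigma }_1 - \widetilde{\sigma }_2 \big ), \end{aligned}$$

and inserting this into (3.2), the first term is controlled by the uniform bound in (3.10) together with \(\sigma _j(S) = {{\mathrm{Im \,}}}G_j(0)\), and the second by the Lipschitz bound in (3.10) combined with the definition (3.8) of the Wasserstein distance; the remaining difference \(|b_1 - b_2| = |{{\mathrm{Re \,}}}G_1(0) - {{\mathrm{Re \,}}}G_2(0)|\) accounts for the first term of (3.11).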

These bounds are of help in establishing the following equivalence.

Theorem 3.1

For a sequence of HP functions \(G_n: \mathbb D \mapsto \overline{\mathbb {C}^+} \) the following are equivalent:

  A.

    The pair of conditions:

    1. 1.

      the single-site limit exists: \( \lim _{n\rightarrow \infty } G_n(0) =: G(0) \).

    2. 2.

      the spectral measures \(\sigma _n\) on \(S\) (defined by 3.2) converge weakly to a measure \(\sigma \in \mathcal M(S)\), in the sense that \( \sigma _{n}(g) \rightarrow \sigma (g) \) for every continuous \(g\in C(S)\).

  B.

    There exists a HP function \(G\) such that for all \(w \in \mathbb D\): \(G_n(w) \rightarrow G(w) \), uniformly on compact subsets of \( \mathbb D \).

  C.

    The functions \(G_n\) converge pointwise over \(\mathbb D\).

Proof

“\(A \Rightarrow \, B\)”: Set \(b={{\mathrm{Re \,}}}G(0)\), and let \(G: \mathbb D \mapsto \overline{\mathbb {C}^+}\) be the function which corresponds to \(({{\mathrm{Re \,}}}G(0), \sigma ) \) under (3.2). (The definition is consistent with the previously determined \(G(0)\), since under the assumption [A1–A2], the condition \( {{\mathrm{Im \,}}}G_n(0) \, = \, \sigma _n(S)\) persists also in the limit \(n\rightarrow \infty \).)

The claim that \([G_n(w) - G(w) ]\rightarrow 0\) uniformly on compact subsets of \(\mathbb D\) will be verified separately for two cases:

  i.

    \(\sigma (S) \, = \,0\): The claim follows from (3.11) (without the last term) and \({{\mathrm{Im \,}}}G_n(0) = \sigma _n(S)\rightarrow \sigma (S) = 0\).

  ii.

    \(\sigma (S) \ne 0\): In this case the weak convergence of the measures implies that also the normalized measures converge weakly, and by implication also in the Wasserstein distance. Thus

    $$\begin{aligned} \lim _{n\rightarrow \infty } W(\widetilde{\sigma }_n, \widetilde{\sigma }) \, = \, 0. \end{aligned}$$
    (3.12)

The claim then follows from (3.11).

“\(B \Rightarrow \, C \)”: is evident.

“\(C\Rightarrow \, A \)”: The convergence of \(G_n(0) \) directly implies [A1]. By (3.6) this implies convergence of \(b_n\) as well as that of the total mass \(\sigma _n(S)\).

By the compactness of the set of measures on \(S\) with \(\sigma (S) \le {{\mathrm{Im \,}}}G(0)\), the sequence \(\sigma _n( d \theta )\) has accumulation points, to which it converges over suitable subsequences \((n_k)\). All these measures share the values of the following integrals:

$$\begin{aligned} \int _{S} \frac{\sigma ( d \theta )}{e^{i\theta } -w} \, = \, \lim _{k\rightarrow \infty } \int _{S} \frac{\sigma _{n_k}( d \theta )}{e^{i\theta } -w} \, = \, \frac{1}{ 2i w} \, \lim _{k\rightarrow \infty } ( G_{n_k}(w) - G_{n_k}(0) ), \end{aligned}$$
(3.13)

for all \(w \in \mathbb {D}\backslash \{0\} \) (in addition to \(w=0\) which was established already).

A standard argument [11], based on the Stone-Weierstrass theorem (and resolvent identities), allows one to conclude that: i. along each such subsequence the measures converge weakly, ii. the limit is uniquely characterized by (3.13), and thus \(\sigma _n\) is a convergent sequence. \(\square \)

Remarks on Theorem 3.1

  1. 1.

    The equivalence \( B \Leftrightarrow C\) is a known consequence of the more general Montel theorem.

  2. 2.

    For [A1] the point \(0\) is convenient, but with a minor adjustment in the argument it can be replaced by any other (preselected) \(w_0 \in \mathbb D\).

  3. 3.

    The statement can be alternatively expressed in terms of the functions \(F_n\equiv G_n \circ w\) which are defined over \(\mathbb {C}^+\), and the corresponding spectral measures \(\mu _{n}\) on \(\mathbb {R}\). The main difference is that [A2] is to be replaced by the condition: [A2’] The measures \(\mu _{n}\) converge vaguely, in the sense that \( \mu _{n}(f) \rightarrow \mu (f)\) for all continuous, compactly supported functions \( f \in C_c(\mathbb {R})\). In terms of \(\sigma (d\theta )\), the condition [A2’] corresponds to vague convergence on \(S\backslash \{0\}\), which is a weaker statement than [A2] (since the latter also guarantees the preservation of the total mass \(\sigma (S)\)). The two are however equivalent under the assumption [A1], since [A2] may be concluded from [A2’] plus the convergence \( \sigma _n(g) \rightarrow \sigma (g) \) of a single function \(g\in C(S)\) with \(g(1) \ne 0\).

In view of the rather direct correspondence between the representations of HP functions as \( G_{\omega ,n} : \mathbb D \mapsto \overline{\mathbb {C}^+}\) versus \( F_{\omega ,n} = G_{\omega ,n} \circ w : \mathbb {C}^+ \mapsto \overline{\mathbb {C}^+}\), from here on we shall not be duplicating the various statements of interest and instead use the language which locally appears to be convenient.

It may be worth noting that in contrast to \(b\), the parameter

$$\begin{aligned} a \, = {{\mathrm{Im \,}}}F(i) - \int _\mathbb {R}\frac{\mu _F(dx)}{x^2+1} \, = \, \sigma (\{0\}) \end{aligned}$$
(3.14)

is not a continuous function on \(\Omega \). That is clearly seen in the circle representation, where it corresponds to the fact that weak convergence of measures on \(S\) allows for the build-up of a \(\delta \)-function at \(\{1\}\) (which corresponds to \( \theta = 0 \)).

4 Stieltjes transforms of random measures

4.1 A constructive criterion

Spectral measures of interest often come in the form of random Borel measures, \( \mu _\omega \) on \(\mathbb {R}\) (a concept discussed e.g. in  [22]), with the indexing parameter \(\omega \) ranging over a probability space \( (\Omega , \mathcal {A}, \mathbb {P} ) \) over which we have the action of the group of shifts of \(\mathbb {R}\), represented by measurable transformations \(\{ \mathcal {T}_a \}_{a\in \mathbb {R}}\) for which \( \mu _{\mathcal {T}_a\omega }\) coincides with the shifted measure \({\mathcal {T}}_a\mu _{\omega }\), the action of shifts on measures being defined by:

$$\begin{aligned} {\mathcal T}_a \mu (I) = \mu (I+a). \end{aligned}$$
(4.1)

The following deterministic result presents conditions under which the Stieltjes transform may be constructed for such measures as the pointwise limit, \(F_{\mu }(z) := \lim _{n\rightarrow \infty } F_{\mu }^{(n)}(z)\), of

$$\begin{aligned} F_{\mu }^{(n)}(z) := \int _{-n}^n \frac{\mu (dx)}{x-z}. \end{aligned}$$
(4.2)

It is worth noting that under the conditions listed there the functional \(\mu \mapsto F_\mu \) is shift covariant, even though this property may at first be questioned since the “principal value”-like integral seen in (4.2) is centered at \(x=0\).

In the statement we compare the Stieltjes transform of \(\mu \) with that of a reference measure \(\overline{\mu }\), which in applications to random measures may be taken as the average of \(\mu \) over the randomness. Let \(N_\mu (x)\) be the counting function, and \(\delta {N}(x)\) the difference (which in the above example corresponds to the fluctuating part), defined by:

$$\begin{aligned} N_\mu (x) := \int _0^x \mu (dy), \quad N_{\overline{\mu }}(x) := \int _0^x \overline{\mu }(dy), \quad \delta {N}(x) \, :=\, N_\mu (x) - N_{\overline{\mu }}(x).\qquad \end{aligned}$$
(4.3)

Through integration by parts:

$$\begin{aligned} F_{\mu }^{(n)}(z) = F_{\overline{\mu }}^{(n)}(z) \, +\, \left[ \frac{\delta N(n)}{n-z} \, - \, \frac{\delta N(-n)}{-n-z} \right] \, +\, \int _{-n}^n \frac{\delta N(x)}{(x-z)^2} \, dx. \end{aligned}$$
(4.4)
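Indeed, writing \(\mu (dx) = \overline{\mu }(dx) + d(\delta N)(x)\) on \([-n,n]\) and integrating the fluctuating part by parts,

$$\begin{aligned} \int _{-n}^n \frac{d(\delta N)(x)}{x-z} \, = \, \left[ \frac{\delta N(x)}{x-z} \right] _{x=-n}^{x=n} \, + \, \int _{-n}^n \frac{\delta N(x)}{(x-z)^2} \, dx , \end{aligned}$$

which is the content of (4.4).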

Using this representation one has the following criterion for the existence of the Stieltjes transform.

Theorem 4.1

Let \( \mu \) and \(\overline{\mu }\) be a pair of Borel measures on \(\mathbb {R}\) with the properties:

  i.

    for the reference measure the following limit exists for all (or equivalently, by Theorem 3.1, for some) \(z\in \mathbb {C}^+\):

    $$\begin{aligned} \lim _{n\rightarrow \infty } \int _{-n}^n \frac{\overline{\mu }(dx)}{x-z} \, =: \, F_{\overline{\mu }} (z), \end{aligned}$$
    (4.5)
  ii.

    the difference in the pair’s counting functions, defined by (4.3), satisfies:

    $$\begin{aligned} \lim _{n\rightarrow \pm \infty } \frac{\delta N(n)}{n} = 0 \, \qquad \text{ and } \qquad \int \frac{|\delta N(x)|}{x^2+1} \, dx < \infty . \end{aligned}$$
    (4.6)

Then:

  1. 1.

    the limit (to which we refer as the Stieltjes transform)

    $$\begin{aligned} F_{\mu }(z) := \lim _{n\rightarrow \infty } F_{\mu }^{(n)}(z) \end{aligned}$$
    (4.7)

    exists for all \( z \in \mathbb {C}^+ \).

  2. 2.

    for each \(t \in \mathbb {R}\):

    $$\begin{aligned} \lim _{\eta \rightarrow \infty } [ F_\mu (t+i\eta ) \, - \, F_{\overline{\mu }}(t+i\eta ) ] \, = \, 0. \end{aligned}$$
    (4.8)

If in addition

$$\begin{aligned} \lim _{|n|\rightarrow \infty } \frac{\overline{\mu }([n,n+1])}{n} \, = \, 0 \end{aligned}$$
(4.9)

then

  3.

    the resulting Stieltjes transform is a shift-covariant functional of \(\mu \), in the sense that the limit in (4.7) exists also for \(\mu \) replaced by any of the shifted measures defined by (4.1) and for all \( a \in \mathbb {R}\) and \( z \in \mathbb {C}^+ \)

    $$\begin{aligned} F_{\mathcal {T}_a \mu }( z ) = F_{\mu }(z+a). \end{aligned}$$
    (4.10)

Proof

  1. 1.

    Since the truncated measures \(\mathbb {1}[-n,n]\, \mu (dx) \) converge to \(\mu \) in the vague topology, by Theorem 3.1 the limit (4.7) exists or not simultaneously for all \( z \in \mathbb {C}^+ \), and hence it suffices to test the convergence at \( z = i \). Applying the representation (4.4) at \( z = i \), the first term on the right converges by (4.5). The second and third terms converge to zero by the first assumption in (4.6). The integral in the fourth term is absolutely convergent by the second assumption in (4.6), which ensures the convergence of this term.

  2. 2.

    From the first part of this proof and (4.4) we learn that for any \( z \in \mathbb {C}^+ \):

    $$\begin{aligned} F_\mu (z) \, - \, F_{\overline{\mu }}(z) \, = \, \int _{\mathbb {R}} \frac{\delta N(x) }{(x-z)^2} dx. \end{aligned}$$
    (4.11)

    Monotone convergence implies \( \lim _{\eta \rightarrow \infty } \int | \delta N(x) | /[(x-t)^2+\eta ^2] dx = 0 \) for any \( t \in \mathbb {R} \) and hence the claim (4.8).

  3. 3.

    In order to establish the shift-covariance we note that it is straightforward to show that for all \( n \in {\mathbb N}\), \( a \in \mathbb {R}\) and \( z \in \mathbb {C}^+ \):

    $$\begin{aligned} F_{\mathcal {T}_a \mu }^{(n)}(z) = F_{\mu }^{(n)}(z+a) + \int _n^{n+a} \frac{\mu (dx)}{x-a-z} - \int _{-n}^{-n+a} \frac{\mu (dx)}{x-a-z}. \end{aligned}$$
    (4.12)

    Each of the two integral terms on the right side converges to zero as \( n \rightarrow \infty \). This is seen through the representation

    $$\begin{aligned} \int _n^{n+a} \frac{\mu (dx)}{x-z} = \int _n^{n+a}\frac{\overline{\mu }(dx)}{x-z} + \frac{\delta N(n+a)}{n+a-z} - \frac{\delta N(n)}{n-z} + \int _n^{n+a} \frac{\delta N(x)}{(x-z)^2} dx\nonumber \\ \end{aligned}$$
    (4.13)

    (and analogously for the second term). The first term goes to zero as \( n \rightarrow \infty \) by (4.9). The remaining three terms converge to zero using (4.6). \(\square \)

4.2 A pair of examples

The criterion of Theorem 4.1 applies in particular to the following examples of random spectral measures on \(\mathbb {R}\), which are of rather different nature.

  1. 1.

    Poisson–Stieltjes function: In this example \( \mu _\omega \) is a Poisson process with constant intensity \(1 \). Picking for the reference measure \(\overline{\mu }(dx) = dx \) (the Lebesgue measure), one finds that for every \( \varepsilon > 0 \):

    $$\begin{aligned} | \delta N_\omega (x) | \le C_\omega (\varepsilon ) \, ( |x|^{\frac{1}{2} + \varepsilon } + 1 ). \end{aligned}$$
    (4.14)

    for all \(x\in {\mathbb {R}}\), with \(C_\omega (\varepsilon )\) almost surely finite. Consequently, the assumptions (4.6) are almost surely met in this case. We refer to the function defined by the corresponding limit (4.2) as the Poisson–Stieltjes function (a numerical sketch of this construction is given at the end of this subsection).

  2. 2.

    The sine-kernel Stieltjes function: The determinantal point process with the kernel \( K(x,y) = \sin (\pi (x-y))/ [\pi (x-y)]\) defines an ergodic random measure \( \mu _\omega \) whose intensity is \(1 \) (cf. [5, 28]). To verify (4.6) for this case, with \(\overline{\mu }(dx) = dx\), one may use the observation that by an explicit computation (cf. [5, Ex. 4.2.40]), for \( |x| \rightarrow \infty \):

    $$\begin{aligned} \mathbb {E}[\delta N (x)^2 ] = \int _0^x K(s,s) ds - \int _0^x \int _0^x K(s,t)^2 \, d s \, dt = \frac{\log ( |x|)}{\pi ^2} + \mathcal {O}(1). \end{aligned}$$
    (4.15)

    Consequently, also in this case the integral in (4.6) has finite expectation and is thus almost surely convergent. Moreover, a Chebyshev estimate shows that \( \sum _n \mathbb {P}( |\delta N(n)|/|n| > \varepsilon ) < \infty \) for any \( \varepsilon > 0 \), and hence, by the Borel–Cantelli lemma, also the first condition in (4.6) is met. We refer to the function defined through the limit (4.2) with \(\mu \) corresponding to this process as the sine-kernel Stieltjes function \( F^{GUE}_\omega \).

In both cases the measures are stationary and even ergodic. The functions which are defined through the almost-sure limit (2.12) provide examples of random stationary HP functions, a notion whose further exploration we turn to next. It should be added that a construction related to (4.6) was studied (for the Poisson process) in [3]. However, the approach presented there breaks the shift covariance.
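A minimal numerical sketch of the Poisson–Stieltjes construction (in Python; the unit intensity and the reference measure \(\overline{\mu }(dx)=dx\) are as in the first example above): it evaluates the truncated transforms \(F^{(n)}_\mu (i)\) of (4.2) for a single Poisson configuration, and also checks the prediction \(\lim _{\eta \rightarrow \infty } {{\mathrm{Im \,}}}F(i\eta ) = \pi \rho = \pi \) of Theorem 5.5 (here \(\rho = 1\)).

```python
import numpy as np

rng = np.random.default_rng(3)

# One realization of a Poisson point process of intensity 1 on [-N, N].
N = 100_000
poles = rng.uniform(-N, N, size=rng.poisson(2 * N))

def F_truncated(z, n):
    # F_mu^{(n)}(z) of (4.2) for the point measure mu = sum_k delta_{poles[k]}.
    u = poles[np.abs(poles) <= n]
    return np.sum(1.0 / (u - z))

# The truncated transforms settle down as n grows (Theorem 4.1, part 1).
for n in (1_000, 10_000, 100_000):
    print(n, F_truncated(1j, n))

# For 1 << eta << N the imaginary part at height eta is close to pi * rho = pi.
eta = 100.0
print(F_truncated(1j * eta, N).imag, np.pi)
```

The convergence in \(n\) is slow (the boundary terms in (4.4) decay only like the Poisson counting fluctuations), but it is visible already at this size.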

5 Random HP functions

Standard considerations imply that the function space \(\Omega \) (and equivalently \(\widetilde{\Omega }\)) whose topology is discussed in Sect. 3.2 is metrizable and can be presented as a complete separable metric space (cf. [22]). Estimates which are somewhat similar to (3.11) (though less explicit) are facilitated by the “flat metric” (cf. [14, 25]):

$$\begin{aligned} d(\sigma _1,\sigma _2) \, = \, \inf _{\widehat{\sigma }_1, \widehat{\sigma }_2 \in \mathcal M(S); |\widehat{\sigma }_1|=| \widehat{\sigma }_2|} [ |\sigma _1 - \widehat{\sigma }_1| \, + \, |\sigma _2 - \widehat{\sigma }_2| \, + \, W( \widehat{\sigma }_1, \widehat{\sigma }_2)]. \end{aligned}$$
(5.1)

Definition 5.1

(Random HP functions)

  1. 1.

    Denoting by \(\mathcal B\) the Borel \(\sigma \)-algebra on \(\Omega \) which corresponds to the topology discussed above, a random Herglotz–Pick function is given by a probability measure on \((\Omega , \mathcal B)\).

  2. 2.

    A sequence of random HP functions \( F_{\omega ,n} : \mathbb {C}^+ \mapsto \overline{\mathbb {C}^+}\) is said to converge in distribution to \( F_{\omega } \) iff the probability measure on \(\widetilde{\Omega }= \mathcal M(\mathbb {R}) \times \overline{\mathbb {C}^+}\) which forms the distribution of \( ( \mu _{\omega ,n} , F_{\omega ,n}(i) ) \) converges (weakly) to that of \( ( \mu _{\omega } , F_{\omega }(i) ) \). Such convergence will be denoted \( ( \mu _{\omega ,n} , F_{\omega ,n}(i) ) \mathop {\rightarrow }\limits ^{\mathcal {D}} ( \mu _{\omega } , F_{\omega }(i)) \).

By the general theory of probability measures on complete separable metric spaces of finite diameter, the convergence of measures on \(\Omega \) is equivalent to the condition that for any \(\varepsilon >0\), there is \(N(\varepsilon )<\infty \) such that for all \(n \ge N(\varepsilon )\) the measures can be coupled so that:

$$\begin{aligned} \int d \mu _n( \omega , \omega ') \left[ |G_\omega (0)-G_{\omega '}(0)| \, + \, d({\sigma _\omega },{\sigma _{\omega '}}) \right] \, \le \, \varepsilon \end{aligned}$$
(5.2)

with the marginals of \(\mu _n( \omega , \omega ') \) yielding the distributions of \(F_{\omega } \) and \( F_{\omega '} \), correspondingly. (In case the distance function is unbounded, (5.2) is to be replaced by the statement that \([...]\) is small in probability, though possibly not in the mean.)

For future purpose let us also add

Lemma 5.2

Let \( F_{\omega ,n} : \mathbb {C}^+ \mapsto \overline{\mathbb {C}^+}\) be a sequence of random HP functions which converges in distribution to a random HP function \( F_{\omega } \), and for which the support of the spectral measures stays away from an interval \([a, b] \subset \mathbb {R}\), in the sense that for some \(\varepsilon >0\), and almost all \( \omega \) and all \( n\):

$$\begin{aligned} \mu _{\omega ,n}([a-\varepsilon ,b+\varepsilon ]) \, = \, 0. \end{aligned}$$
(5.3)

Then the functions \(F_{\omega ,n}\) and \(F_{\omega }\) are (almost surely) analytic and real along \([a,b]\), and the convergence in distribution extends to: \( ( \mu _{\omega ,n}, F_{\omega ,n}(x) ) \mathop {\rightarrow }\limits ^{\mathcal {D}} ( \mu _{\omega } , F_{\omega }(x)) \) for any \(x\in [a,b]\).

Proof

The analyticity of the functions \( F_{\omega ,n}\) within spectral gaps is a simple consequence of the spectral representation. Analyticity at \(x\in [a,b]\) allows one to apply the harmonic average principle to the analytic continuation of \(F_{\omega ,n}\) through the spectral gap which includes \([a,b]\), by which:

$$\begin{aligned} F_{\omega ,n}(x) \, = \, \int _{[0,2\pi ]} F_{\omega ,n}(x+ e^{i\theta } \varepsilon / 2)\; \frac{d \theta }{2\pi } \end{aligned}$$
(5.4)

The convergence in distribution then readily follows from the coupling estimate (5.2) and the uniform pointwise bound (3.11) (and the observation that the analytic continuation of such a HP function into \(\mathbb {C}^-\) is given by the natural extension of the spectral representation to that regime). \(\square \)

5.1 Shift invariance and shift amenability

In the language of probabilistic ergodic theory, the subject may be presented in the following terms.

Random functions are parameterized by a variable \(\omega \) taking values in a probability space \( (\Omega _0, \mathcal {A}, \mathbb {P} ) \) (a possible choice for which is a suitable space of functions, such as \(\Omega \) discussed above). The random functions are given by a \(\mathbb {C}\)-valued kernel \(K_\omega (x)\), defined and jointly measurable over \(\Omega _0 \times \mathbb {R}\) (to which may optionally be added topological properties, such as discussed above). Translation invariance, or the more limited invariance under discrete shifts, is expressed in the two additional properties:

  1. 1.

    acting on \(\Omega _0\) is a group of measurable mappings \(\{\mathcal {T}_u\}_{u\in \mathbb {R}}\) which provides a representation of the group of translations of \(\mathbb {R}\), with

    $$\begin{aligned} K_{T_u \omega } (x) = K_{\omega } (x+u) \qquad \text{(for } \text{ almost } \text{ every } (\omega ,x)\text{) }, \end{aligned}$$
    (5.5)
  2. 2.

    the probability measure \(\mathbb {P} \) is invariant under the action of the shifts \(\mathcal {T}_u\), or at least under the action of a discrete subgroup \(\{\mathcal {T}_{n\tau } \}_{n\in {\mathbb Z}}\) of period \(\tau \).

In the above setup, let \(\ell _\omega (dw)\) be, at given \(\omega \), the distribution of the values of \(K_\omega (x)\) when \(x\) is averaged with the Lebesgue measure over \( [0,\tau ]\), i.e. the image of that measure under the map \(x \mapsto K_\omega (x)\). In other words, \(\ell _\omega (dw)\) is defined so that for each continuous bounded function \(\Psi : \mathbb {C}\mapsto \mathbb {C}\)

$$\begin{aligned} \int _\mathbb {C}\Psi (w) \ell _\omega (dw) \, = \, \int _{0}^{ \tau } \Psi (K_\omega (x)) \, dx \, =: \, A_{\Psi ,K}(\omega ). \end{aligned}$$
(5.6)

The average seen in (2.2) can be presented through the relation:

$$\begin{aligned} \frac{1}{L} \int _{0}^{L \tau } \Psi (K(x) ) \, dx \, = \, \frac{1}{L} \sum _{n=0}^{L-1} A_{\Psi ,K}(\mathcal {T}_{n\tau } \omega ), \end{aligned}$$
(5.7)

Birkhoff’s ergodic theorem then allows one to conclude that for \(\mathbb {P}\)-almost every \(\omega \) the function \(K_\omega \) is shift-amenable (over \(x\)). Furthermore, \( \nu _{K_\omega } \, = \, \tau ^{-1} \lim _{L \rightarrow \infty } \frac{1}{L} \sum _{n=0}^{L-1} \ell _{\mathcal {T}_{n\tau } \omega }, \) and as is easily seen:

$$\begin{aligned} \nu _{K_\omega } = \nu _{K_{T_\tau \omega }} \end{aligned}$$
(5.8)

This implies also that in the ergodic case \(\nu _{K_\omega } \) is almost surely given by a common measure (on \(\overline{\text{ Ran }\,\,K}\)).

In a slight abuse of notation we shall generically use the symbol \(\mathcal {T}_u\) for translations corresponding to shifts of \(\mathbb {R}\), i.e. for both the transformations on \(\Omega _0\) and for their induced actions on functions and measures on \(\mathbb {R}\). The explicit form of this mapping in the representation of random HP functions which was introduced in Sect. 5, for which \(\Omega _0 = \widetilde{\Omega }\) and \(\omega = (\mu ,F(i)) \), is easily seen to take the form of the cocycle evolution:

$$\begin{aligned} \mathcal {T}_u (\mu ,\beta ) \, = \, (\mathcal {T}_u\mu , \, \beta + Q(u,\mu ) \, + \, u\, a ) \end{aligned}$$
(5.9)

with

$$\begin{aligned} a := {{\mathrm{Im \,}}}\beta - \int \frac{\mu (dx)}{x^2+1}, \qquad Q(u,\mu ) \, = \, \int \left[ \frac{1}{x-u - i} - \frac{1}{x-i} \right] \mu (dx),\qquad \end{aligned}$$
(5.10)

and \(\mathcal {T}_u \mu \) the usual shift of measure, i.e. \(( \mathcal {T}_u \mu )(I) =\mu (I+u) \) for every bounded Borel set \( I \subset \mathbb {R}\).
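That (5.9)–(5.10) indeed implement the shift, i.e. that \(\mathcal {T}_u(\mu ,\beta )\) represents the function \(z \mapsto F_\omega (z+u)\), can be read off from (1.2): since \(1/(x-i) - x/(x^2+1) = i/(x^2+1)\), one has \(b = {{\mathrm{Re \,}}}F(i)\) and \({{\mathrm{Im \,}}}F(i) = a + \int \mu (dx)/(x^2+1)\), in agreement with the definition of \(a\) in (5.10), and furthermore

$$\begin{aligned} F_\omega (i+u) \, = \, F_\omega (i) \, + \, u\, a \, + \, \int \left[ \frac{1}{x-u-i} - \frac{1}{x-i} \right] \mu _\omega (dx) \, = \, \beta \, + \, Q(u,\mu _\omega ) \, + \, u\, a , \end{aligned}$$

which is the second entry of (5.9), while the spectral measure of the shifted function is \(\mathcal {T}_u \mu _\omega \).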

Focusing on this case we take as definition:

Definition 5.3

A probability measure on \(\widetilde{\Omega }\) is stationary (or translation invariant) if and only if it is invariant under the mapping induced on it by the above defined mappings \(\{ \mathcal {T}_u \}_{u \in \mathbb {R}}\).

Equivalently, we will refer to the corresponding random HP function as stationary.

The following observation makes the results of Sect. 2 applicable to stationary HP functions.

Lemma 5.4

Let \( F_\omega \) be a stationary random HP function, and \(F_{0,\omega }(x) := F_\omega (x+i0)\). Then:

  1. 1.

    With probability one the function \(F_{0,\omega }(x)\) is shift amenable and the corresponding measures \(\nu _{F_{0,\omega }}\) are constant on ergodic components of the probability measure.

  2. 2.

    For each bounded continuous \(\Psi : \overline{\mathbb {C}^+} \rightarrow \mathbb {C}\) which is analytic on \( \mathbb {C}^+\):

    $$\begin{aligned} \mathbb {E}[ \Psi (F(z))] \, = \, \mathbb {E}[ \Psi (F(x+i0))] \end{aligned}$$
    (5.11)

    for all \(z\in \mathbb {C}^+\) and \(x\in \mathbb {R}\).

Proof

The first assertion is readily implied by Birkhoff’s ergodic theorem, as is explained above (5.7).

For the second, we note that by translation invariance \( \mathbb {E}[ \Psi (F(z)) ] \) does not depend on \(x := {{\mathrm{Re \,}}}z\). Since under the assumptions it forms an analytic function of \(z\in {\mathbb {C}}^+\), it follows that it also does not depend on \(y= {{\mathrm{Im \,}}}z\). One may then deduce (5.11) using Proposition 2.1 and applying the dominated convergence theorem to the limit \(y\downarrow 0\). \(\square \)

Thus Theorem 2.3 is applicable to such functions. Let us note also the following implications of stationarity.

Theorem 5.5

Let \(F_\omega \) be a stationary random HP function for which \({{\mathrm{Im \,}}}F(x+i0) = 0\) almost surely (separately at each \(x\in \mathbb {R}\)). Then:

  1. 1.

    for almost all \( \omega \): \( a_\omega =0. \)

  2. 2.

    If the process is also ergodic then for each \(x\in \mathbb {R}\) the random variable \(F_\omega (x+i0)\) has the Cauchy distribution of width:

    $$\begin{aligned} {{\mathrm{Im \,}}}\Gamma \, \equiv \, \lim _{\eta \rightarrow \infty } {{\mathrm{Im \,}}}F_\omega (x+i\eta ) \, = \, \pi \rho , \end{aligned}$$
    (5.12)

    where \(\Gamma \) is the distribution’s analytic baricenter, as defined in (2.3), and \(\rho = {\mathbb E}\left( \mu _F([0,1)) \right) \).

Proof

  1. 1.

    For any \(x\in \mathbb {R}\) and \(t>0\):

    $$\begin{aligned} \mathbb {P}(a =0 ) \, = \, \mathbb {E}( \mathbb {1}[ a=0 ]) \, \ge \, \lim _{y\rightarrow \infty } \mathbb {E}[ e^{itF(x+i y)}] \, = \, \mathbb {E}[ e^{itF(i )}], \end{aligned}$$
    (5.13)

    where the inequality is by the observation that \({{\mathrm{Im \,}}}F(x+iy) \ge a y\), and the last equality by Lemma 5.4. Taking now \(t\rightarrow 0\) we conclude (applying again the dominated convergence theorem):

    $$\begin{aligned} {\mathbb P}\left( a =0 \right) \, \ge \, \lim _{t \rightarrow 0} \mathbb {E}[ e^{itF( i )}] \, = \, 1. \end{aligned}$$
    (5.14)
  2. 2.

    By Birkhoff’s theorem, for ergodic processes averages over \(\omega \) yield (almost surely) the same result as averages over shifts, and thus the Cauchy nature of the distribution follows from Theorem 2.3. The value of the distribution’s analytic baricenter is determined from the spectral representation (1.2) applying Lemma 2.4 with \(g(u) = \pi ^{-1}/(u^2+1)\) as in (2.7):

    $$\begin{aligned} {{\mathrm{Im \,}}}\Gamma \, = \, \lim _{\eta \rightarrow \infty } {{\mathrm{Im \,}}}F_\omega (x+i\eta ) = \lim _{\eta \rightarrow \infty } \frac{\pi }{\eta } \int g(|u-x|/\eta ) \mu _\omega (du) \, \mathop {=}\limits ^{a.s.} \, \pi \rho .\nonumber \\ \end{aligned}$$
    (5.15)

\(\square \)

Remarks

  1. 1.

    The center \( {{\mathrm{Re \,}}}\Gamma \in \mathbb {R}\) of the Cauchy distribution of \(F_\omega (x+i0)\) is not determined from the spectral measure alone, since adding a real constant to a random, ergodic HP function produces another such function with a different value of this parameter.

  2. 2.

    Inspecting the above proof shows that one may exchange the assumption of ergodicity in the above theorem by requiring (i) stationarity of the HP function together with (ii) the distributional convergence \( F(i\eta ) \rightarrow \Gamma \) with some \( \Gamma \in \mathbb {C}^+ \).

5.2 A cocycle criterion

Clearly, for any shift invariant random HP function \(F_\omega \) the spectral measure \(\mu _\omega \) forms a stationary random measure on \(\mathbb {R}\), which in the discrete case corresponds to a point process. One may ask about the converse direction: under what conditions would a random measure on \(\mathbb {R}\) with a translation invariant distribution (and a.s. satisfying (1.3)) be the spectral measure of a stationary random HP function?

It is easy to see that (1.3) suffices for the association with \(\mu _\omega \) of the function

$$\begin{aligned} K_\omega (z) \, = \, \int \frac{\mu _\omega (dx) }{(x-z)^2}, \end{aligned}$$
(5.16)

which is holomorphic over \( \mathbb {C}^+\) and which inherits the stationarity of \(\mu _\omega \). The above question can therefore be rephrased as asking under what conditions would \(K_\omega (z)\) be the derivative of a stationary random HP function. For that a standard ergodic theory argument is of relevance.

Theorem 5.6

Let \(\mu _\omega \) be a stationary random measure on \(\mathbb {R}\) (given by a measurable function from a probability space to \(\mathcal M (\mathbb {R})\)), satisfying (almost surely) (1.3). Then \(\mu _\omega \) may be extended to the spectral measure of a random stationary HP function if and only if the cocycle

$$\begin{aligned} {{\mathrm{Re \,}}}Q(u,\mu _\omega ) = {{\mathrm{Re \,}}}\int \left[ \frac{1}{x-u - i} - \frac{1}{x-i} \right] \mu _\omega (dx) \end{aligned}$$
(5.17)

is tight. That is, if and only if:

$$\begin{aligned} \lim _{t\rightarrow \infty } \, \sup _{u\in \mathbb {R}} \, {\mathbb P}\left( \left| {{\mathrm{Re \,}}}\int \left[ \frac{1}{x-u - i} - \frac{1}{x-i} \right] \mu (dx) \right| \, > \, t \right) \, = \, 0 \end{aligned}$$
(5.18)

Proof

By a general result in ergodic theory [26] the tightness condition (5.18) allows one to conclude that the cocycle is a coboundary, i.e. there exists a measurable map \( b: \widetilde{\Omega }\rightarrow \mathbb {R}\) such that for all \( u \in \mathbb {R}\):

$$\begin{aligned} {{\mathrm{Re \,}}}Q(u,\mu _\omega ) = b_{\mathcal {T}_u\omega } - b_{\omega }. \end{aligned}$$
(5.19)

The HP function given by

$$\begin{aligned} F_\omega (z) = b_{\omega } + \int \left[ \frac{1}{x-z} - \frac{x}{x^2+1} \right] \mu _\omega (dx) \end{aligned}$$
(5.20)

is then (i) almost surely well defined by (1.3) and (ii) easily seen to be stationary.

Conversely, if the random HP function \( F_\omega \) is stationary, it is of the form (5.20) with \( b_\omega = {{\mathrm{Re \,}}}F_\omega (i) \) and \( {{\mathrm{Im \,}}}F_\omega (i) = \int \frac{\mu _\omega (dx) }{x^2+1} \) (by Theorem 5.5). Therefore \( Q(u,\mu _\omega ) = F_\omega (i+u) - F_\omega (i) \) forms a tight collection of random variables indexed by \( u \in \mathbb {R}\).

\(\square \)

6 Convergence criteria for the scaling limit of random HP functions

HP functions often appear as the resolvent functions of random Hermitian \(n\times n\) matrices \(H_{\omega ,n}\), for which it is of interest to gain understanding of the behavior of the spectra in the limit \(n\rightarrow \infty \). Examples were given in (1.1). If the norm \(\Vert H_{\omega ,n}\Vert \) remains uniformly bounded, the relevant spectra consist of \(n\) points whose gaps may typically be of order \(O(n^{-1})\). To study these functions at that level of resolution in the vicinity of an energy \(E_0\) (which in principle could also depend on \(n\), or be randomized in the vicinity of a target value), it is natural to enquire about the possible convergence in distribution of the random HP functions

$$\begin{aligned} F_{\omega ,n}( z) \, := \, R_{\omega ,n}(E_0+z/n). \end{aligned}$$
(6.1)
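A minimal sketch of the rescaling (6.1) (in Python; the GUE-type normalization, with spectrum essentially in \([-2,2]\), and the choice \(E_0 = 0\) are assumptions made only for this illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2_000

# GUE-type Wigner matrix, normalized so that the spectrum concentrates on [-2, 2].
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / (2 * np.sqrt(n))
eigvals = np.linalg.eigvalsh(H)

E0 = 0.0

def F_n(z):
    # F_{omega,n}(z) = R_{omega,n}(E0 + z/n), with R_n the normalized resolvent trace of (1.1).
    return np.mean(1.0 / (eigvals - (E0 + z / n)))

# On this microscopic scale the function resolves individual eigenvalues:
# the values at height Im z = 1 are O(1) and fluctuate from one x to the next.
for x in np.linspace(-5.0, 5.0, 11):
    print(x, F_n(x + 1j))
```

Studying the distribution of such rescaled resolvent values as \(n\rightarrow \infty \) is precisely the purpose of the criteria developed below.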

6.1 Convergence off the real axis

For the examples (1.1), the convergence of the distribution of the spectral measure \( \mu _{\omega ,n} = \sum _j \delta _{ x_{ \omega ,n}(j)} \) in essence is a local statement about the behavior of the function’s singularities. The information which is added to it through \(F_{\omega ,n}(i)\) reflects the local effect of the tails of the spectral measure, which affect its Stieltjes transform in the vicinity of \(E_0\). In many cases of interest one may expect the contribution from the “distant” values of the spectrum to have only asymptotically vanishing fluctuations. In such situations, the following theorem provides a handy criterion for the convergence in distribution of a sequence of HP functions.

Theorem 6.1

A sufficient condition for a sequence \( F_{\omega ,n} \) of random HP functions to converge in distribution to a random HP function \(F_\omega \) (i.e. for \(( \mu _{\omega ,n}, F_{\omega ,n}(i)) \mathop {\rightarrow }\limits ^{\mathcal {D}} ( \mu _{\omega } , F_{\omega }(i)) \)) is that:

  1. 1.

    the corresponding random spectral measures \(\mu _{\omega ,n} \) converge in distribution to the random spectral measure \(\mu _\omega \) in the sense that for all \( f \in C_c(\mathbb {R}) \):

    $$\begin{aligned} \mu _{n}(f) := \int f(x) \mu _{n}(dx) \mathop {\rightarrow }\limits ^{\mathcal {D}} \mu (f) \end{aligned}$$
    (6.2)
  2. 2.

    there exists \( \Gamma \in \mathbb {C}^+ \) such that for all \( \varepsilon > 0 \):

    $$\begin{aligned} \lim _{\eta \rightarrow \infty } \mathbb {P}( | F(i\eta ) - \Gamma | \ge \varepsilon )&= 0. \end{aligned}$$
    (6.3)
    $$\begin{aligned} \lim _{\eta \rightarrow \infty } \limsup _{n \rightarrow \infty } \mathbb {P}( | F_n(i\eta ) - \Gamma | \ge \varepsilon ) \,&= 0, \end{aligned}$$
    (6.4)

Proof

We will write

$$\begin{aligned} F_{\omega ,n}(i)&= F_{\omega ,n}(i\eta ) + \int \left( \frac{1}{x-i} - \frac{1}{x-i\eta } \right) \mu _{\omega ,n}(dx)\nonumber \\&=: F_{\omega ,n}(i\eta ) + \int g_\eta (x) \mu _{\omega ,n}(dx). \end{aligned}$$
(6.5)

In a first step we establish that the distributional convergence of the pair

$$\begin{aligned} \left( \mu _{\omega ,n}, \int g_\eta (x) \mu _{\omega ,n}(dx) \right) \mathop {\rightarrow }\limits ^{\mathcal {D}} \left( \mu _{\omega }, \int g_\eta (x) \mu _{\omega }(dx)\right) \end{aligned}$$
(6.6)

holds for all \( \eta \in [1,\infty ) \). To do so, we split the integral into two parts by inserting a smooth indicator function \( \chi _W \in C_c(\mathbb {R}) \) of the interval \( |x | \le W \), with the property that \(\chi _W^c(x) := 1 - \chi _W (x) = 0 \) for all \( |x | \le W \). The pair \( ( \mu _{\omega ,n} \, , \int g_\eta (x) \chi _W(x) \mu _{\omega ,n}(dx)) \) converges in distribution by assumption. The contribution to the integral from \( |x| \ge W \) is bounded:

$$\begin{aligned} \left| \int \chi _W^c(x) g_\eta (x) \mu _{\omega ,n}(dx)\right| \le \eta \int _{|x|\ge W} \frac{ \mu _{\omega ,n}(dx)}{\sqrt{x^2+1}\sqrt{x^2+\eta ^2}} \le \frac{2\eta }{W} {{\mathrm{Im \,}}}F_{\omega ,n}(iW).\nonumber \\ \end{aligned}$$
(6.7)

Choosing \( W = \eta ^{1+\alpha } \) with some \( \alpha > 0 \), assumption (6.4) ensures that for any \( \varepsilon > 0 \):

$$\begin{aligned} \lim _{\eta \rightarrow \infty } \limsup _{n\rightarrow \infty } \mathbb {P}\left( \left| \int \chi _{W_\eta }^c(x) g_\eta (x) \mu _{\omega ,n}(dx)\right| \ge \varepsilon \right) = 0. \end{aligned}$$
(6.8)

This establishes (6.6).

The second assumption allows one to convert (6.6) to the statement that the pair \(( \mu _{\omega ,n} , F_{\omega ,n}(i) ) \) is asymptotically close (in distribution) to \(( \mu _{\omega }, F_{\omega }(i) ) \), since the extra terms \(F_{\omega ,n}(i\eta ) \) and \(F_{\omega }(i\eta ) \) are asymptotic (in probability) to the same constant \(\Gamma \).

This finishes the proof of the distributional convergence in the sense discussed in Sect. 3.2. \(\square \)

6.2 Convergence of the distribution of the boundary values

To follow up on the convergence criterion of Theorem 6.1, more needs to be said to address the convergence of the distribution of the random function along the boundary \(\mathbb {R}\). Following are some useful criteria, which will allow the analysis of this paper to be applied to a number of cases of interest.

Theorem 6.2

Let \( F_{\omega ,n} \) be a sequence of random HP functions which converges in distribution to a random HP function \( F_{\omega }\) (in the sense discussed in Sect. 3.2), and suppose that in an interval \([a,b]\subset \mathbb {R}\) the spectral measures of both \( F_{\omega ,n}\) and \( F_{\omega }\) are given by simple point processes. Then also

$$\begin{aligned} F_{\omega ,n}(x+i0) \mathop {\rightarrow }\limits ^{\mathcal D} F_\omega (x+i0) \end{aligned}$$
(6.9)

for any \(x\in (a,b)\) for which

$$\begin{aligned} {\mathbb E}\left( \mu (\{x\}) \right) =0. \end{aligned}$$
(6.10)

Remarks

There is a reason here for the restriction on the nature of the spectra: (6.9) fails when the spectral measures of \(F_{\omega ,n}\) are discrete but converge to an absolutely continuous measure. In such a case there will be a positive measure set of \(x\in \mathbb {R}\) at which \({{\mathrm{Im \,}}}F_\omega (x+i0) >0 \), while \({{\mathrm{Im \,}}}F_{\omega ,n}(x+i0) =0 \) for all \(n<\infty \).

Proof of Theorem 6.2

In the proof we decompose each HP function \(F_n(z)\) into a sum of two components, one due to the near part of the spectral measure \(\mu _F(du)\) and the other due to its far part. The distributional continuity (6.9) of the first component is where the limitation to discrete spectra is being used. This condition however places the statement within the reach of standard continuity arguments. The second contribution is continuous by Lemma 5.2. In addition to the separate continuity statements one needs to notice that we have here a joint distributional convergence of the two components.

Due to the freedom to shift and scale the result, it suffices to prove the assertions for the case \([a,b] = [-1,1]\), and sites \(x\in [-1/2,1/2]\). Focusing on that case, let \(\chi : \mathbb {R}\mapsto [0,1]\) be the interpolated projection onto \([-1,1]\):

$$\begin{aligned} \chi (x) \, = \, {\left\{ \begin{array}{ll} 1 \quad &{} \hbox {for}\,\, |x|<1 \\ 1-2(|x|-1) &{} \hbox {if}\,\, 1<|x|<1.5\\ 0 &{} \text{ for }\,\, |x|> 1.5 \end{array}\right. } \end{aligned}$$
(6.11)

Using it, for each measure \(\mu \in \mathcal M(\mathbb {R})\) we denote its “near” and “far” parts as:

$$\begin{aligned} \mu ^{(1)} (dx) \, := \, \chi (x) \, \mu (dx), \quad \text{ and } \quad \mu ^{(2)} (dx) \, := \, [1-\chi (x) ]\, \mu (dx). \end{aligned}$$
(6.12)

Correspondingly, we decompose any HP function \(F(z)\) into:

$$\begin{aligned} F(z) \, = \, F^{(1)}(z) \, + \, F^{(2)}(z) \end{aligned}$$
(6.13)

breaking the spectral representation (1.2) into:

$$\begin{aligned} F^{(1)}(z)&= \int _{[-1.5,1.5]} \left[ \frac{1}{u-z} - \frac{u}{u^2+1} \right] \, \mu ^{(1)}_F(du)\end{aligned}$$
(6.14)
$$\begin{aligned} F^{(2)}(z)&= b + a z + \int _{\mathbb {R}\backslash [-1,1]} \left[ \frac{1}{u-z} - \frac{u}{u^2+1} \right] \, \mu ^{(2)}_F(du). \end{aligned}$$
(6.15)

It is easy to see that the assumed convergence in distribution of \(F_{\omega ,n}\) implies the joint convergence of their two components as a pair of HP functions, in the natural extension of this notion to pairs of functions:

$$\begin{aligned} (F^{(1)} _{\omega ,n}, F^{(2)} _{\omega , n}) \mathop {\longrightarrow }\limits ^{\mathcal D} ( F^{(1)} _{\omega }, F^{(2)} _{\omega }). \end{aligned}$$
(6.16)

(In essence: the corresponding spectral measures converge for each value of \(j\), and since \(F^{(1)}\) falls off at infinity the second requirement for convergence is of relevance only for \(j=2\).)

The assumed structure of the spectral measure of \(F^{(1)}_{\omega ,n}(z) \) within \([-1,1]\) means that for each \(n\) these random measures correspond to a random probability distribution on the disjoint union of compact sets \(Y \, := \, \cup _{k=0,1,2,\ldots } [-1,1]^k\), the points of which we shall denote by \(y_k=(y_{k,j})_{j=1}^k\), with measures \(\nu _{\omega ,n}(d y_k)\) (on \(k\) labeled particles) which are symmetric under permutations. In particular, the probability that there are \(k\) particles in \([-1,1]\) is

$$\begin{aligned} p_n(k) \, := \, \int _{[-1,1]^k} \nu _{\omega ,n} (d\, y_k)/k! \end{aligned}$$
(6.17)

In this notation:

$$\begin{aligned} F^{(1)}_{\omega ,n}(z) \, = \, \sum _{k=1}^\infty \int _{[-1,1]^k} \sum _{j=1}^k \chi (y_{k,j}) \left[ \frac{1}{y_{k,j}-z} - \frac{y_{k,j}}{y_{k,j}^2+1} \right] \, \nu _{\omega ,n} (d\, y_k)/k!. \end{aligned}$$
(6.18)

In the natural topology on \(Y\), the number of particles in \([-1,1]\) may change discontinuously due to the appearance or disappearance of a particle at the boundary of the set. Otherwise, the configuration depends continuously on the position of the particles in \([-1,1]\). Thus functions of the form

$$\begin{aligned} \sum _{k=1}^\infty \int _{[-1,1]^k} \sum _{j=1}^{k} \phi (y_{k,j}) \, \chi (y_{k,j}) \, \nu _{\omega ,n} (d\, y_k)/k! \end{aligned}$$
(6.19)

with \(\phi \in C([-1,1]) \) whose support lies in \((-1,1)\) are continuous.

Under the assumption of convergence in distribution of the random spectral measures, the sequence of probability measures \(p_n\) on \({\mathbb N}\) is tight, and the integrals of functions which are continuous on \([-1,1]^k\) and vanish at the boundary have distributions which converge to those under the limiting measure. By the continuous mapping theorem, this extends to functions which are continuous on the complement of a set which is not charged by the limiting measure. In the representation (6.18) of \(F^{(1)}_{\omega ,n}(x+i0) \) for a given \(x\) the integrand is a continuous function of the configuration except at configurations with a particle at \(x\). Thus, the assumed convergence of the spectral measures allows one to deduce the continuity in the limit of the probability distribution of \(F^{(1)}_{\omega ,n}(x+i0)\) for sites \(x\in [-1/2,1/2]\) at which (6.10) holds.

The probability distribution of \(F^{(2)}_{\omega ,n}(x+i0)\) is continuous in the limit \(n \rightarrow \infty \) by an application of Lemma 5.2. Furthermore, combined with (6.16), the arguments imply that the joint distribution of the pair of random variables \(( F^{(1)}_{\omega ,n}(x+i0), F^{(2)}_{\omega ,n}(x+i0) )\) is continuous in the limit, and hence (6.9) holds.\(\square \)

Theorem 6.2 has implications for the random matrix models which are discussed next, and for the Šeba process [3, 6, 23, 27] on which more is said in [2]. The following is another continuity criterion, which may be of interest beyond the cases covered so far, in particular when the spectral measures are singular but with dense support and with masses which are not uniform. Examples to keep in mind are the possible scaling limits of the Green functions of random operators in the regime of Anderson localization.

In discussing continuity of HP functions along the line it is natural to regard the range of \(F\) as the Riemann sphere \( \overline{\mathbb {C}} \), i.e. the one point compactification of \(\mathbb {C}\). This suggests the following terminology.

Definition 6.3

(\(*\)-continuity) 1. An HP function \(F\), with its range regarded as a subset of \( \overline{\mathbb {C}} \), is said (here) to be \(*\)-continuous at \(x \in \mathbb {R}\) iff the mapping \(x\mapsto \frac{-1}{F(x+i0) + i} \) is continuous at that point.

2. For a random HP function, we define as its (mean) modulus of \(*\)-continuity, at \(x\in \mathbb {R}\), the function

$$\begin{aligned} \kappa (x,\delta ) \, := \, {\mathbb E}\left( \left| \frac{1}{F(x+\delta +i0)+i} - \frac{1}{F(x+i0)+i} \right| \right) . \end{aligned}$$
(6.20)

(which for almost all \(x \in \mathbb {R}\) is defined for almost all \(\delta \in \mathbb {R}\)).
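
As a purely numerical aside, the modulus of \(*\)-continuity (6.20) can be estimated by Monte Carlo sampling whenever the boundary values \(F_\omega (x+i0)\) can be simulated. The toy random HP function in the sketch below (a finite sum of unit-mass poles at uniformly placed random locations) is an arbitrary illustrative choice, not one of the ensembles studied in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def kappa_mc(x, delta, n_poles=50, n_samples=20000):
    """Monte Carlo estimate of the mean modulus of *-continuity (6.20) for the
    toy random HP function F_omega(z) = sum_j 1/(p_j - z), with n_poles
    unit-mass poles p_j drawn uniformly in [-n_poles/2, n_poles/2]."""
    acc = 0.0
    half = n_poles / 2.0
    for _ in range(n_samples):
        # the same realization omega enters both terms of (6.20)
        p = rng.uniform(-half, half, size=n_poles)
        Fx = np.sum(1.0 / (p - x))              # boundary value F(x+i0), real a.s.
        Fxd = np.sum(1.0 / (p - (x + delta)))   # boundary value F(x+delta+i0)
        acc += abs(1.0 / (Fxd + 1j) - 1.0 / (Fx + 1j))
    return acc / n_samples

# the estimates should decrease as delta -> 0, in line with (6.21)
print(kappa_mc(0.0, 0.5), kappa_mc(0.0, 0.05), kappa_mc(0.0, 0.005))
```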

Theorem 6.4

If a sequence of random HP functions converges in distribution, \( F_{\omega ,n} \mathop {\rightarrow }\limits ^{\mathcal D} F_\omega \), and the moduli of \(*\)-continuity of \(F_{\omega ,n} \) and \(F_\omega \) are bounded uniformly in \( n \) by \(\kappa (x,\delta )\), then for any \(x\in \mathbb {R}\) for which:

$$\begin{aligned} \lim _{\delta \rightarrow 0} \kappa (x,\delta ) \, = \, 0 \end{aligned}$$
(6.21)

the distributions of the random variables \(F_{\omega ,n}(x+i0)\) converge:

$$\begin{aligned} F_{\omega ,n}(x+i0) \mathop {\rightarrow }\limits ^{\mathcal D} F_\omega (x+i0). \end{aligned}$$
(6.22)

Proof

We already know, under the theorem’s first assumption, that for each \(x\in \mathbb {R}\) and \(\delta >0\):

$$\begin{aligned} F_{\omega ,n}(x+i\delta ) \mathop {\rightarrow }\limits ^{\mathcal D} F_\omega (x+i\delta ). \end{aligned}$$
(6.23)

To relate this to the values at \(\delta =0\), we note that, since \((F_{\omega ,n}+i)^{-1}\) is a bounded analytic function on \(\mathbb {C}^+\), the Cauchy integral formula (in its Poisson kernel form) yields, for each \(\delta >0\):

$$\begin{aligned} \frac{1}{\pi } \int \frac{1}{F_{\omega ,n}(u+i0) +i} \; \frac{\delta \, du}{(x-u)^2+\delta ^2} \, = \, \frac{1}{F_{\omega ,n}(x+i\delta )+i}, \end{aligned}$$
(6.24)

with a similar relation holding for the limiting function \(F_\omega \). The difference can be estimated in the \(L^1\)-sense by:

$$\begin{aligned}&{\mathbb E}\left( \left| \frac{1}{F(x+i0) +i} \, - \, \frac{1}{\pi } \int \frac{1}{F(u+i0) +i} \; \frac{\delta \, du}{(x-u)^2+\delta ^2} \right| \right) \nonumber \\&\quad \le \, \frac{1}{\pi } \int \kappa (x, u-x) \, \frac{\delta \, du}{(x-u)^2+\delta ^2} \, =: \, \hat{\kappa }(x,\delta ) \end{aligned}$$
(6.25)

Under the assumption (6.21), also \(\hat{\kappa }(x,\delta ) \rightarrow 0\) as \( \delta \rightarrow 0 \). Thus, a standard three-step comparison allows one to conclude the distributional convergence \( (F_{\omega ,n}(x+i0) + i)^{-1} \mathop {\rightarrow }\limits ^{\mathcal D} (F_{\omega }(x+i0) + i)^{-1} \) and hence the claim (6.22). \(\square \)
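
As a sanity check of the representation (6.24), which is available since \((F+i)^{-1}\) is a bounded analytic function on \(\mathbb {C}^+\), the identity can be verified numerically for any explicit HP function. The two-pole example and the discretization parameters below are arbitrary choices made only for this illustration.

```python
import numpy as np

def F(z):
    """Toy HP function: spectral measure with unit point masses at u = +1 and u = -1."""
    return 1.0 / (1.0 - z) + 1.0 / (-1.0 - z)

x, delta = 0.3, 0.2

# Left side of (6.24): Poisson integral of the boundary values of 1/(F + i),
# truncated to [-U, U]; since |1/(F + i)| <= 1, the neglected tails are O(delta/U).
U, m = 500.0, 2_000_001
u = np.linspace(-U, U, m)
du = u[1] - u[0]
with np.errstate(divide="ignore", invalid="ignore"):
    G_boundary = 1.0 / (F(u) + 1j)                   # boundary values 1/(F(u+i0) + i)
G_boundary = np.where(np.isfinite(G_boundary), G_boundary, 0.0)  # F blows up at u = +-1, where 1/(F+i) -> 0
kernel = delta / ((x - u)**2 + delta**2) / np.pi     # Poisson kernel at x + i*delta
lhs = np.sum(G_boundary * kernel) * du

# Right side of (6.24): direct evaluation inside the half plane.
rhs = 1.0 / (F(x + 1j * delta) + 1j)
print(lhs, rhs)   # the two values agree up to the discretization and truncation errors
```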

6.3 Examples from RMT and random operators

The above criterion can be verified for the rescaled trace functions defined in (6.1) for random matrices corresponding to the two examples which were discussed in Sect. 4.2, whose spectra are rather different.

GUE and Wigner ensembles  The spectra of \(n \times n\) hermitian matrices with complex Gaussian entries, which form the Gaussian Unitary Ensemble (GUE), are well known to have, for \(n\rightarrow \infty \), the asymptotic density

$$\begin{aligned} \varrho _{sc}(E_0) := \pi ^{-1} \sqrt{ 1 - (E_0/2)^2}. \end{aligned}$$
(6.26)

It is also known that the rescaled eigenvalue point process, magnified in the vicinity of an energy \(E_0\) with \( |E_0| < 2 \),

$$\begin{aligned} \mu _{\omega ,n} = \sum _j \delta _{n \, \varrho _{sc}(E_0) \, (E_{j,n}(\omega ) -E_0 ) }, \end{aligned}$$
(6.27)

converges in distribution to the “sine-kernel process”, which is a shift invariant determinantal point process \( \mu _\omega \) of kernel \( K(x,y) = \sin ( \pi (x-y)) / (\pi (x-y)) \) (cf. [5]).
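
(For orientation only, the rescaling (6.27) may be carried out numerically. The sketch below samples one Hermitian matrix with Gaussian entries, under the normalization convention, assumed here for the illustration, for which the spectrum asymptotically fills \([-2,2]\), and lists the locally rescaled eigenvalues near \(E_0\).)

```python
import numpy as np

rng = np.random.default_rng(1)

def gue_eigenvalues(n):
    """Eigenvalues of a Hermitian Gaussian matrix, normalized (by the convention
    assumed in this sketch) so that the empirical density approaches (6.26) on [-2, 2]."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    H = (A + A.conj().T) / (2.0 * np.sqrt(n))
    return np.linalg.eigvalsh(H)

def rho_sc(E):
    return np.sqrt(1.0 - (E / 2.0)**2) / np.pi

n, E0 = 1000, 0.5
ev = gue_eigenvalues(n)
rescaled = n * rho_sc(E0) * (ev - E0)              # locally rescaled configuration, cf. (6.27)
window = np.sort(rescaled[np.abs(rescaled) < 10.0])
print(len(window))     # roughly 20 points: unit mean spacing near E0 after the rescaling
print(np.diff(window)) # spacings; sine-kernel statistics emerge in the n -> infinity limit
```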

In celebrated works [17, Thm. 1.3], [29, Thm. 5] the above statement was recently generalized to the broader class of Wigner matrices, which are random hermitian \( n \times n \) matrices whose entries \( \{h_{jj}\} \), \( \{ {{\mathrm{Re \,}}}h_{jk} \}_{j<k}\), and \(\{ {{\mathrm{Im \,}}}h_{jk} \}_{j<k}\) are independent, centered and of variance \( 1 /2 \). The quoted results imply that in the above case the rescaled trace function (cf. (1.1))

$$\begin{aligned} F_{\omega ,n}(z) := \int \frac{ \mu _{\omega ,n}(dx)}{x-z} = \frac{1}{\varrho _{sc}(E_0) } \; R_{\omega ,n}\left( {E_0 +} \frac{z}{n \, \varrho _{sc}(E_0) }\right) , \end{aligned}$$
(6.28)

satisfies the first condition of Theorem 6.1, i.e. (6.2) holds.

The criterion’s second condition, (6.3), holds for the shifted random sine-kernel Stieltjes function \( F^{GUE}_\omega (z) + {{\mathrm{Re \,}}}\Gamma \) (cf. (2.13) and Sect. 4) with:

$$\begin{aligned} \Gamma := \frac{1}{\varrho _{sc}(E_0) } \int \frac{\varrho _{sc}(v)\, dv}{ v- E_0 - i0} = -\frac{E_0}{2\varrho _{sc}(E_0)} + i \pi . \end{aligned}$$
(6.29)
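
(As a consistency check, not needed for the argument, the principal-value integral in (6.29) can be evaluated numerically and compared with the stated closed form; the singularity subtraction used below is merely one convenient quadrature choice.)

```python
import numpy as np

def rho_sc(E):
    return np.sqrt(np.maximum(1.0 - (E / 2.0)**2, 0.0)) / np.pi

def gamma_numeric(E0, m=400001):
    """Evaluates Gamma of (6.29) numerically.  The principal value is computed by
    subtracting the singularity:
      P.V. int rho(v)/(v-E0) dv = int [rho(v)-rho(E0)]/(v-E0) dv
                                   + rho(E0) * log((2-E0)/(2+E0))."""
    v = np.linspace(-2.0, 2.0, m)
    h = v[1] - v[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        g = (rho_sc(v) - rho_sc(E0)) / (v - E0)
    g = np.where(np.isfinite(g), g, 0.0)    # removable point at v = E0
    pv = np.sum(g) * h + rho_sc(E0) * np.log((2.0 - E0) / (2.0 + E0))
    return (pv + 1j * np.pi * rho_sc(E0)) / rho_sc(E0)

E0 = 0.7
print(gamma_numeric(E0))
print(-E0 / (2.0 * rho_sc(E0)) + 1j * np.pi)   # closed form on the right side of (6.29)
```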

The assertion that (6.4) holds also in the generality of Wigner matrices whose entry distributions have subgaussian tails is implied by the statement, derived in [16, Theorem 3.1], that in this generality, for all small enough \(\varepsilon > 0 \):

$$\begin{aligned} \lim _{\eta \rightarrow \infty } \limsup _{n\rightarrow \infty }\mathbb {P}( | F_{\omega ,n}(i\eta ) -\Gamma | \ge \varepsilon ) \le \lim _{\eta \rightarrow \infty } C e^{- c\varepsilon \sqrt{\eta }} = 0 \end{aligned}$$
(6.30)

for some constants \( c > 0 \) and \( C < \infty \) (while this suffices for our purpose, an improved bound was recently presented in [10]).

Combining these statements with the general criterion provided by Theorem 6.2, one gets:Footnote 3

Corollary 6.5

For Wigner matrices \(H_{\omega ,n} \) whose entries have a common subgaussian distribution \( \nu \), i.e., \( \int e^{\delta x^2} \nu (dx) < \infty \) for some \( \delta > 0 \), the rescaled trace \( F_{\omega ,n}(x) \), defined by (6.28), converges in distribution, for \( n \rightarrow \infty \) and any fixed \(x\), to a Cauchy random variable whose analytic baricenter \( \Gamma \) is given by (6.29).

Random diagonal matrices  A similar statement is valid also for the much simpler ensemble of \( n\times n \) random diagonal matrices, whose diagonal entries \((V_j)\) are independent, with a common probability distribution of smooth density \(\rho \in C^1(\mathbb {R}) \). In this case the rescaled trace function

$$\begin{aligned} F_{\omega ,n}( z) : = \sum _{j=1}^n \frac{1}{ n \, \rho (E_0) [V_j - E_0] - z} = \frac{1}{\rho (E_0) } \; R_{\omega ,n}\left( {E_0 +} \frac{z}{n \, \rho (E_0) }\right) ,\qquad \quad \end{aligned}$$
(6.31)

with \( E_0\) such that \( \rho (E_0) > 0 \), converges in distribution for any \( z \in \overline{\mathbb {C}^+} \) to the shifted Poisson–Stieltjes function \( F^{Poi}_\omega (z) + {{\mathrm{Re \,}}}\Gamma \) with

$$\begin{aligned} \Gamma := \frac{ 1}{\rho (E_0)} \int \frac{\rho (v)}{v-E_0- i 0} dv = \frac{1}{\rho (E_0)} \, P.V. \int \frac{\rho (v)\, dv}{v-E_0} + i \pi . \end{aligned}$$
(6.32)

In particular, for any \( x \in \mathbb {R}\) the random variables \( F_{\omega ,n}( x) \) converge in distribution as \( n \rightarrow \infty \) to a Cauchy random variable with baricenter \( \Gamma \) given by (6.32).

Here, the assertion can easily be proven by a direct computation of the characteristic functional \( \mathbb {E}[ e^{it F_{\omega ,n}( x) } ] \). Alternatively, it also follows from Theorems 6.1 and 6.2 and the fact that \( F^{Poi}_\omega (x) \) has a Cauchy distribution with baricenter \( (i {{\mathrm{Im \,}}}\Gamma )\), cf. Theorems 4.1 and 2.3.
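
As a numerical complement to the direct computation mentioned above, the following sketch takes the standard Gaussian density as an arbitrary choice for \(\rho \) and sets \(E_0 = 0\), so that by (6.32) \(\Gamma = i\pi \) and the predicted limit is a Cauchy law of location \(0\) and half-width \(\pi \); the empirical quartiles of \(F_{\omega ,n}(x)\) are then compared with this prediction.

```python
import numpy as np

rng = np.random.default_rng(2)

def F_n(x, n, E0, rho_E0):
    """One realization of the rescaled trace (6.31) at z = x, for i.i.d.
    standard Gaussian diagonal entries (an arbitrary choice of rho)."""
    V = rng.standard_normal(n)
    return np.sum(1.0 / (n * rho_E0 * (V - E0) - x))

E0 = 0.0
rho_E0 = 1.0 / np.sqrt(2.0 * np.pi)     # standard Gaussian density at E0 = 0
n, samples = 2000, 4000
vals = np.array([F_n(0.3, n, E0, rho_E0) for _ in range(samples)])

# For the symmetric density at E0 = 0 the principal-value term in (6.32) vanishes,
# so Gamma = i*pi; the quartiles of a Cauchy law sit at (location +/- half-width).
q25, q50, q75 = np.quantile(vals, [0.25, 0.50, 0.75])
print("median:", q50, "   (prediction: 0)")
print("half interquartile range:", (q75 - q25) / 2.0, "   (prediction:", np.pi, ")")
```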