Abstract
Correlation functions involving products and ratios of half-integer powers of characteristic polynomials of random matrices from the Gaussian orthogonal ensemble (GOE) frequently arise in applications of random matrix theory (RMT) to the physics of quantum chaotic systems, and beyond. We provide an explicit evaluation of the large-\(N\) limits of a few nontrivial objects of that sort within a variant of the supersymmetry formalism, and via a related but different method. As one of the applications we derive the distribution of an off-diagonal entry \(K_{ab}\) of the resolvent (or Wigner \(K\)-matrix) of GOE matrices which, among other things, is of relevance for experiments on chaotic wave scattering in electromagnetic resonators.
1 Motivations, Background and Results
1.1 Introduction
The goal of the present article is to attract attention to the problem of systematic evaluation of the large-\(N\) asymptotics of random matrix averages of the form
$$\begin{aligned} \mathcal {C}_{K,L}(\mu _{F1},\ldots ,\mu _{FK};\mu _{B1},\ldots ,\mu _{BL})=\left\langle \frac{\prod _{i=1}^{K}\det \left( \mu _{Fi}-H\right) }{\prod _{j=1}^{L}\det ^{1/2}\left( \mu _{Bj}-H\right) }\right\rangle \end{aligned}$$(1)
where \(\mu _{Fi}, \, i=1,\ldots ,K\) and \(\mu _{Bj}, \, j=1,\ldots ,L\) are sets of complex parameters. The angular brackets here and henceforth denote the average over the ensemble of real symmetric \(N\times N\) matrices \(H\) with Gaussian entries characterised by the probability density \(\mathcal{P}(H)\propto \exp \left( -\frac{N}{4J^2}{\mathrm{Tr}}\, H^2\right) \), known as the Gaussian orthogonal ensemble (GOE). Note that correlation functions involving products of square roots of characteristic polynomials in the numerator can always be reduced to the above form by multiplying and dividing both the numerator and the denominator by the same corresponding factors.
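With these conventions \(\mathrm{var}(H_{ii})=2J^2/N\) on the diagonal and \(\mathrm{var}(H_{ij})=J^2/N\) off it. As an illustrative numerical sanity check (a sketch added here for orientation, not part of the original derivations), one can sample small GOE matrices and test a simple exact finite-\(N\) average, e.g. \(\langle \det (\mu -H)\rangle =\mu ^2-J^2/2\) for \(N=2\):

```python
import math
import random

def sample_goe(n, J=1.0, rng=random):
    """Sample an n x n GOE matrix with P(H) ~ exp(-N/(4 J^2) Tr H^2):
    var(H_ii) = 2 J^2 / n on the diagonal, var(H_ij) = J^2 / n off it."""
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        H[i][i] = rng.gauss(0.0, math.sqrt(2.0 * J * J / n))
        for j in range(i + 1, n):
            H[i][j] = H[j][i] = rng.gauss(0.0, math.sqrt(J * J / n))
    return H

def char_poly_2x2(mu, H):
    """det(mu*I - H) for a 2 x 2 matrix H."""
    return (mu - H[0][0]) * (mu - H[1][1]) - H[0][1] * H[1][0]

rng = random.Random(1)
mu, J, samples = 1.3, 1.0, 200_000
est = sum(char_poly_2x2(mu, sample_goe(2, J, rng)) for _ in range(samples)) / samples
exact = mu**2 - J**2 / 2.0   # <det(mu - H)> = mu^2 - J^2/2 at N = 2
print(est, exact)
```

The Monte Carlo estimate agrees with the exact value to within statistical error, confirming the variance conventions stated above.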
Although there are reasons to suspect that the correlation functions (1) may have a nice mathematical structure even for finite \(N\), perhaps not unlike the determinantal or Pfaffian structures discovered in [1–5] for similar objects involving only integer powers (see also [6, 7] for an alternative derivation), we have not yet been able to reveal such structures beyond the simplest case \(K=1,L=1\), see (12) below. Instead we mainly concentrate on the large-\(N\) limit of a few of the simplest, yet nontrivial, examples of correlation functions of the type (1). We start by considering correlation functions with two square roots in the denominator and with one or two characteristic polynomials in the numerator, that is \(\mathcal {C}_{1,2}(\mu _{F1};\mu _{B1},\mu _{B2})\) and \(\mathcal {C}_{2,2}(\mu _{F1},\mu _{F2};\mu _{B1},\mu _{B2})\), and then treat a special case of the correlation function involving four square roots in the denominator and two determinants in the numerator, that is \(\mathcal {C}_{2,4}\) in our notation. As should be clear from the examples given below, the most physically interesting (bulk) scaling regime in the large-\(N\) limit arises when all spectral parameters are close to some value \(E\in (-2J,2J)\) within a distance of the order of the mean spacing between neighbouring eigenvalues in the bulk, i.e. \(\mathcal {O}(J/N)\). Correspondingly we define the scaled versions of the correlation functions as
$$\begin{aligned} \mathcal {C}^{(\mathrm{bulk})}_{1,2}(\omega _{F1};\omega _{B1},\omega _{B2})\approx \mathcal {C}_{1,2}\left( E+\tfrac{i\omega _{F1}}{N};\,E+\tfrac{i\omega _{B1}}{N},E+\tfrac{i\omega _{B2}}{N}\right) \end{aligned}$$(2)
and
$$\begin{aligned} \mathcal {C}^{(\mathrm{bulk})}_{2,2}(\omega _{F1},\omega _{F2};\omega _{B1},\omega _{B2})\approx \mathcal {C}_{2,2}\left( E+\tfrac{i\omega _{F1}}{N},E+\tfrac{i\omega _{F2}}{N};\,E+\tfrac{i\omega _{B1}}{N},E+\tfrac{i\omega _{B2}}{N}\right) \end{aligned}$$(3)
where the approximate equality sign above should be understood in the sense of extracting the leading asymptotic dependence on the parameters \(\omega _{B}\) and \(\omega _F\) when \(N\rightarrow \infty \). Our results for the above correlation functions are given in Eqs. (13) and (14) for \(\mathcal {C}^{(\mathrm{bulk})}_{1,2}\) and in Eqs. (16) and (17) for \(\mathcal {C}^{(\mathrm{bulk})}_{2,2}\). In Eq. (22) we provide the result for a special limit (see Eq. (8)) of \(\mathcal {C}^{\mathrm{(bulk)}}_{2,4}\). These objects are already rich enough to provide answers for quantities arising in applications of random matrices in the field of Quantum Chaos in closed and open (scattering) systems. We discuss such relations in much detail below.
Although our methods are specifically tailored for dealing with the GOE, we expect our results in the bulk scaling limit to be universal and shared by a broad class of invariant measures on real symmetric matrices \(H\) [8] and by so-called Wigner ensembles of random real symmetric matrices with independent, identically distributed entries satisfying relevant moment conditions [9, 10].
1.2 Motivations and Background
To explain the origin of interest in the correlation functions (1) we start by recalling that the phenomenon of Quantum Chaos has attracted considerable theoretical and experimental interest for more than three decades and remains one of the areas where applications of Random Matrix Theory are most fruitful and successful [11]. The applications are based on the famous Bohigas–Giannoni–Schmit (BGS) [12] conjecture claiming that, in an appropriately chosen energy window, sequences of highly excited discrete energy levels of generic quantum systems whose classical counterparts are chaotic are statistically indistinguishable from sequences of real eigenvalues of large random matrices of appropriate symmetry. Although not yet fully rigorously proven, this conjecture has overwhelming support in experimental, numerical and analytical work of the last decades [13]. Inspired by this analogy, as well as by the universality of many random matrix properties (i.e. insensitivity to the particular choice of the probability measure on the matrix space), see [9, 10] and references therein, one of the common strategies for predicting universal observables of quantum chaotic systems has been to express them in terms of resolvents of the underlying Hamiltonians, and then to replace the actual Hamiltonians by random matrices taken from analytically tractable (usually, Gaussian) ensembles of \(N\times N\) random matrices. The characteristic functions of the probability densities of the observables under consideration can frequently be computed explicitly by appropriate ensemble averages. Note that the eigenvalues of the standard Gaussian Ensembles, Unitary (GUE, \(\beta =2\)), Orthogonal (GOE, \(\beta =1\)) or Symplectic (GSE, \(\beta =4\)), are independent of the eigenvectors, with the matrix of \(N\) orthonormal eigenvectors being distributed according to the Haar measure on the unitary group \(U(N)\), orthogonal group \(O(N)\) or symplectic group \(Sp(2N)\), respectively.
To that end it is natural to evaluate the corresponding characteristic functions by performing first the ensemble average over the eigenvectors. In the \(\beta =2\) case the average can frequently be done exactly for any \(N\) by employing the so-called Itzykson–Zuber–Harish–Chandra [14, 15] formula, which is not yet available for \(\beta =1,4\) group averages. Nevertheless, one is able to perform the eigenvector averages in the limit \(N\gg 1\) by using a heuristic idea (going back to [16]) that the set of eigenvectors essentially behaves for \(N\gg 1\) as if their components were independent, identically distributed Gaussian variables with mean zero and variance \(1/N\). One can rigorously justify this procedure provided only a number \(n\ll N^{1/2}\) of eigenvectors is involved, see e.g. [17], but in general a rigorous justification of such a step requires some nontrivial estimates on the resolvents. The heuristic procedure is widely employed in Theoretical Physics for RMT applications to Quantum Chaos using the properties of the standard Gaussian integrals over complex or real variables. In this way the analysis of many distributions of practical interest is reduced to correlation functions of products and ratios involving integer (for \(\beta =2,4\)) or half-integer (for \(\beta =1\)) powers of characteristic polynomials of random matrices. Similar averages arise if one is interested in the statistics of the matrix elements of the resolvents computed in the basis of random Gaussian vectors, as is frequently done in applications to scattering systems with Quantum Chaos, see e.g. the recent paper [18] for an example and further references. For those and other reasons averages of products and ratios of powers of characteristic polynomials of random matrices have attracted much interest over the years. When only integer powers are involved in the average the corresponding theory was developed for \(\beta =2\) in [2–4] and extended to \(\beta =1,4\) in [5].
The case of half-integer powers for \(\beta =1\) remains however outstanding, despite the fact that it is most relevant for an overwhelming majority of experiments in Quantum Chaos due to the preserved time-reversal invariance of the underlying Hamiltonians. Additional interest in this type of averages stems from the fact that they are closely related to the problem of evaluating averages of quantities involving absolute values of characteristic polynomials, due to the relation \(|\det (E-H)| = \lim _{\epsilon \rightarrow 0} \det ^{1/2}(E-H+\tfrac{i\epsilon }{N})\, \det ^{1/2}(E-H-\tfrac{i\epsilon }{N})\) valid for matrices \(H\) with real eigenvalues. Such averages emerge, for example, when studying the statistics of the so-called “level curvatures” in quantum chaotic systems [19, 20], see Eq. (5) below, as well as in the problem of counting the number of stationary points of random Gaussian surfaces, see [21, 22].
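In terms of the eigenvalues \(\lambda _i\) of \(H\) the regularised factors read \(\det ^{1/2}(E-H\pm \tfrac{i\epsilon }{N})=\prod _i(E-\lambda _i\pm \tfrac{i\epsilon }{N})^{1/2}\) with principal branches, and the product of the two mutually conjugate factors converges to \(\prod _i|E-\lambda _i|=|\det (E-H)|\). A minimal numerical illustration of this limit (our sketch, with an arbitrarily chosen stand-in spectrum):

```python
import cmath
import math

lam = [-1.7, -0.3, 0.45, 1.9]   # stand-in spectrum of H (arbitrary)
E, N = 0.6, len(lam)

def half_power_product(eps):
    """det^{1/2}(E - H + i*eps/N) * det^{1/2}(E - H - i*eps/N),
    evaluated over the eigenvalues with principal-branch square roots."""
    p = complex(1.0)
    for l in lam:
        p *= cmath.sqrt(E - l + 1j * eps / N) * cmath.sqrt(E - l - 1j * eps / N)
    return p

abs_det = math.prod(abs(E - l) for l in lam)   # |det(E - H)|
approx = half_power_product(1e-9)
print(abs_det, approx)
```

The product of conjugate principal square roots is real and positive for each factor, so the limit reproduces the absolute value of the characteristic polynomial.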
To support the above picture we describe below explicitly a few examples of relations between characteristic functions of physical observables of interest in quantum chaotic systems and particular instances of the correlation function (1). The list is almost certainly not exhaustive (for example, when writing this article we learned that square roots of characteristic polynomials emerged very recently in [23]), but hopefully representative.

LDoS distribution. One of the first examples of that sort worth mentioning is related to the statistics of the local density of states (LDoS) \(\rho (x;E,\eta )\) at a point \(x\) of a quantum system with energy-level broadening \(\eta \) due to a uniform absorption in the sample. Mathematically the LDoS is defined in terms of the diagonal matrix element of the resolvent as \(\rho (x;E,\eta ) = \tfrac{1}{\pi }\, \text {Im}\, \langle x | (E-\tfrac{i\eta }{N}-H)^{-1} | x \rangle \), and one is interested in understanding the statistics of the LDoS assuming a random matrix GOE Hamiltonian \(H\) of size \(N\times N\), with the parameter \(\eta \) kept fixed as \(N\rightarrow \infty \). The Laplace transform of the probability density \(\mathcal{P}(\rho )\) of the LDoS can be expressed in the large-\(N\) limit as [24]
$$\begin{aligned} \int _0^{\infty } e^{-s\rho } \mathcal{P}(\rho )\,d\rho =\left\langle \frac{\det ^{1/2}\left[ (E-H)^2+\frac{\eta ^2}{N^2}\right] }{\det ^{1/2}\left[ (E-H)^2+\frac{\eta ^2}{N^2}+\frac{\eta s}{N}\right] } \right\rangle _{GOE,N \rightarrow \infty }. \end{aligned}$$(4)Evaluation of the above random matrix average (which in our notation is a particular case of \(\mathcal {C}^{\mathrm{(bulk)}}_{2,4}\)) attempted in [24] resulted in a rather impractical five-fold integral, so that this evaluation remains to date an outstanding RMT problem. Note however that the density \(\mathcal{P}(\rho )\) has been found via a different route, avoiding (4), as a sum of two-fold integrals in [25, 26].

Probability distribution of “level curvatures”. Consider a perturbation \(\mathcal {H}=H+\alpha V\) of the Hamiltonian \(H\), where \(\alpha \) is a control parameter and \(V\) is a real symmetric matrix. “Level curvatures” are defined as second derivatives of the eigenvalues \(\lambda _n(\alpha )\) (interpreted as energy levels of a quantum-chaotic system) with respect to the external parameter \(\alpha \): \(C_n = \frac{\partial ^2 \lambda _n(\alpha )}{\partial \alpha ^2} = 2\sum _{m \ne n} \frac{|\langle n|V|m \rangle |^2}{\lambda _n-\lambda _m}\). Assuming the perturbation \(V\) to be taken from the GOE as well, one can show that the probability density \(P_E(c) =\frac{1}{\bar{\rho }(E)} \left\langle \sum _{n=1}^N \delta (c-C_n)\delta (E-\lambda _n)\right\rangle \) of the level curvatures for GOE matrices \(H\) with eigenvalues \(\lambda _n\) and mean density of eigenvalues \(\bar{\rho }(E)\) can be represented as [19, 20]
$$\begin{aligned} P_E(c) \propto \int _{-\infty }^{+\infty }\! d\omega \, e^{-i\omega c} \left\langle \frac{\det (E-H)\det {}^{1/2}(E-H)}{\det ^{1/2}(E+\frac{i\omega }{N}-H)} \right\rangle _{GOE,N \rightarrow \infty } \end{aligned}$$(5)where the required random matrix average on the right-hand side was independently evaluated by several alternative methods in [19, 20]. Note that heuristic arguments appealing to Gaussianity of GOE eigenvectors in the large-\(N\) limit suggest universality of the level curvature distribution for a “generic” choice of \(V\); a rigorous proof of this fact is under consideration [27].
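The second-order perturbation-theory expression for \(C_n\) quoted above can be checked directly against a finite-difference computation of the eigenvalue curvature. The sketch below (an illustration added here; the \(2\times 2\) matrices \(H\) and \(V\) are arbitrary) compares \(\partial ^2\lambda _+/\partial \alpha ^2\) at \(\alpha =0\) with \(2|\langle +|V|-\rangle |^2/(\lambda _+-\lambda _-)\):

```python
import math

H = [[0.3, 0.7], [0.7, -0.2]]   # arbitrary real symmetric "Hamiltonian"
V = [[0.1, -0.4], [-0.4, 0.5]]  # arbitrary real symmetric perturbation

def eig_plus(M):
    """Larger eigenvalue of a 2 x 2 real symmetric matrix."""
    m = 0.5 * (M[0][0] + M[1][1])
    r = math.hypot(0.5 * (M[0][0] - M[1][1]), M[0][1])
    return m + r

def eigvec(M, lam):
    """Normalised eigenvector of a 2 x 2 symmetric M for eigenvalue lam."""
    vx, vy = M[0][1], lam - M[0][0]
    n = math.hypot(vx, vy)
    return (vx / n, vy / n)

lp = eig_plus(H)
lm = H[0][0] + H[1][1] - lp                  # the trace gives the other eigenvalue
vp, vm = eigvec(H, lp), eigvec(H, lm)
Vpm = sum(vp[i] * V[i][j] * vm[j] for i in range(2) for j in range(2))
curv_pt = 2.0 * Vpm**2 / (lp - lm)           # 2 |<+|V|->|^2 / (l+ - l-)

def shifted(a):
    """lambda_+(H + a V)."""
    return eig_plus([[H[i][j] + a * V[i][j] for j in range(2)] for i in range(2)])

eps = 1e-4
curv_fd = (shifted(eps) - 2.0 * shifted(0.0) + shifted(-eps)) / eps**2
print(curv_pt, curv_fd)
```

Both numbers agree to finite-difference accuracy, confirming the factor of 2 in the perturbative formula.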

Statistics of \(S\)-matrix poles. Various questions related to the statistics of quantum chaotic resonances (poles of the scattering matrix in the complex energy plane [28]) in the regime of a weakly open scattering system can be related to the evaluation of the averages
$$\begin{aligned} \left\langle \frac{\det H^2}{\det ^{1/2}(H^2+\frac{\omega ^2}{N^2})} \right\rangle _{GOE, N\rightarrow \infty }\quad \text{ and } \quad \left\langle \det {}^{1/2}{\left( H^2+\frac{\omega ^2}{N^2}\right) } \right\rangle _{GOE,N \rightarrow \infty } \end{aligned}$$(6)where \(\omega \) is considered as an \(N\)-independent parameter. The first of these averages features in the statistics of the change of resonance widths under the influence of a small perturbation of the Hamiltonian \(H\rightarrow H+\alpha V\), akin to that considered above for the level curvature case. Such a change reflects the intrinsic non-orthogonality of the associated resonance eigenfunctions [29]. Another manifestation of the same non-orthogonality is the statistics of the so-called Petermann factor, which again can be related to random matrix averages involving half-integer powers of characteristic polynomials, see [30]. The second average in (6) arose in a recent attempt at clarifying the statistics of resonance widths beyond the standard first-order perturbation theory, see [31]. Evaluating both averages featuring in (6) in a uniform way by a systematic procedure was one of our motivations for writing the present paper.

Statistics of the Wigner \(K\)-matrix. In the theory of quantum chaotic scattering the Wigner \(K\)-matrix is essentially defined as a certain projection of the resolvent of \(H\). More precisely, this is an \(M\times M\) matrix with entries \(K_{ab}=W_a^T (E-H)^{-1} W_b\), with \(W_a\) being an \(N\)-component vector of coupling amplitudes \(W_{ia}\) between \(N\) energy levels of the closed system (modelled for a chaotic system by an \(N\times N\) random matrix Hamiltonian \(H\)) and \(M\) scattering channels open at a given energy \(E\) of incoming waves. Note that the more standard \(M\times M\) unitary \(S\)-matrix is related to \(K\) via a simple Cayley transform, \(S=\frac{I-iK}{I+iK}\). In the random matrix approach one usually assumes for the amplitudes \(W_{ia}\) either the model of fixed orthogonal channels with \(W_a^T W_b=\gamma _a \delta _{ab}\) [32] or independent Gaussian channels where the amplitudes are taken to be i.i.d. Gaussian variables with \(\langle W_a^T W_b \rangle = \gamma _a \delta _{ab}\) [33].
The quantities \(K_{ab}\) are of direct experimental relevance and can be measured in microwave experiments, as they are related to the real part of the electromagnetic impedance [34, 35]. For real \(E\) in the bulk of the spectrum the statistics of the diagonal entries \(K_{aa}\) has long been known to be given by the same Cauchy distribution for all \(\beta =1,2,4\), see e.g. [36, 37], and very recently was shown to be largely insensitive to the spectral properties of \(H\) under rather general conditions [38]. Similarly, one can consider the probability density \(\mathcal{P}(K_{ab})\) of the individual off-diagonal entries \(K_{a\ne b}\) for \(\beta =1\). For the model of Gaussian channels one arrives at the Fourier transform of \(\mathcal{P}(K_{ab})\) in the form:
$$\begin{aligned} \int _{-\infty }^\infty e^{ix K_{ab}}\mathcal{P}(K_{ab})\,dK_{ab} = \lim _{N\rightarrow \infty }\left\langle \frac{|\det (E-H)|}{\det ^{1/2}[(E-H)^2+\frac{\gamma _a \gamma _b x^2}{N^2}]} \right\rangle _{GOE}=R_E(x). \end{aligned}$$(7)Note that the average featuring on the right-hand side does not follow from either \(\mathcal {C}_{1,2}^{\mathrm{(bulk)}}\) or \(\mathcal {C}_{2,2}^{\mathrm{(bulk)}}\) as a special case, but is rather a limiting case of the more general correlation function \(\mathcal {C}_{2,4}^{\mathrm{(bulk)}}\), as can be seen from the following representation:
$$\begin{aligned} R_E(x)=\lim _{\epsilon \rightarrow 0}\lim _{N\rightarrow \infty }\left\langle \frac{\det ^2(E-H)}{\det ^{1/2}\left( (E-H)^2+\frac{\gamma _a \gamma _b x^2}{N^2}\right) \det ^{1/2}\left( (E-H)^2+\frac{\epsilon ^2}{N^2}\right) } \right\rangle _{GOE}. \end{aligned}$$(8)To the best of our knowledge the probability density \(\mathcal{P}(K_{ab})\) for \(a\ne b\) (or its Fourier transform) has not yet been given explicitly in the literature, and we will find it below for the centre of the GOE spectrum, see Eq. (22). Note that the statistics of the \(K\)-matrix entries for a GOE Hamiltonian \(H\) is expected to be the same for the two choices of the coupling \(W\) as long as \(M\) stays finite as \(N\rightarrow \infty \).
As to the \(M\times M\) matrix \(K\) as a whole, the probability density \(\mathcal {P}(K)\) for \(\beta =1\) and \(E\) in the bulk of the spectrum is expected to be given by a Cauchy-like expression:
$$\begin{aligned} \mathcal {P}(K) \propto \det [\lambda ^2+(K-\langle K \rangle )^2]^{-\frac{M+1}{2}} \end{aligned}$$(9)with \(E\)-dependent mean \(\langle K \rangle \) and width parameter \(\lambda \). This distribution was conjectured in 1995 by P. Brouwer based on experience with \(H\) drawn from the so-called Lorentzian ensemble, see [42]. A similar formula for invariant ensembles of complex Hermitian random matrices \(H\) (i.e. \(\beta =2\)) was proved rigorously very recently in [18], and in the same paper it was mentioned that for \(\beta =1\) and the case of random Gaussian coupling the following relation holds:
$$\begin{aligned} \int \! e^{i \mathrm{Tr}(KX)} \mathcal {P}(K)\, dK = \lim _{N \rightarrow \infty }\left\langle \prod _{c=1}^M \frac{\det ^{1/2}(E-H)\left[ {{\mathrm{sgn}}}\det (E-H)\right] ^{\Theta (x_c)}}{\det ^{1/2}(E+\frac{i\gamma _c x_c}{N}-H)} \right\rangle _{GOE} \end{aligned}$$(10)where \(\Theta (x_c)=1\) for negative \(x_c\) and zero otherwise. Although our attempts to verify Brouwer’s conjecture for \(\beta =1, M=2\) along these lines have not been fully successful yet, we discuss partial results, see (24)–(26) below.

A particular type of the correlation functions (1) was investigated in [43], where it was shown that for any integer \(k>0\) and fixed real \(\delta \) the following holds:
$$\begin{aligned}&\left\langle \frac{1}{\det ^{k/2}(i\delta /N-H)\det ^{k/2}(-i\delta /N-H)} \right\rangle _{GOE,N \rightarrow \infty } \nonumber \\&\propto e^{k\delta }\int _1^{\infty }\frac{d\lambda _1\,e^{-\delta \lambda _1}}{\sqrt{\lambda _1^2-1}}\ldots \int _1^{\infty }\frac{d\lambda _k\,e^{-\delta \lambda _k}}{\sqrt{\lambda _k^2-1}}\,\prod _{i<j}^k|\lambda _i-\lambda _j|. \end{aligned}$$(11)
1.3 The Results

As mentioned above, we have not yet been able to reveal nice mathematical structures for (1) at finite values of the matrix size \(N\) beyond the simplest case \(K=1, L=1\), where the methods outlined below yield a determinantal structure which we give here for completeness:
$$\begin{aligned} \mathcal {C}_{1,1}(\mu _F;\mu _B)=&\left( \frac{J^2}{2N}\right) ^{N/4} \frac{[i{{\mathrm{sgn}}}({{\mathrm{Im}}}(\mu _B))]^{N+1}}{ \Gamma (N/2)} \nonumber \\&\times \det {\left( \begin{array}{cc} H_{N-1}\left( \frac{\sqrt{N}}{J}\mu _F\right) &{} F_{N/2-1}\left( \frac{\sqrt{N}}{\sqrt{2}J}\mu _B\right) \\ H_{N}\left( \frac{\sqrt{N}}{J}\mu _F\right) &{} F_{N/2}\left( \frac{\sqrt{N}}{\sqrt{2}J}\mu _B\right) \end{array}\right) } \end{aligned}$$(12)where \(\Gamma (x)\) is the Euler Gamma-function, \(H_{N}(z)=\frac{i^N}{\sqrt{2\pi }}\int _{-\infty }^{\infty }dt\, t^N \exp [-\tfrac{1}{2}(t+iz)^2]\) is a Hermite polynomial and the function
$$\begin{aligned} F_{N}(z)=[i {{\mathrm{sgn}}}({{\mathrm{Im}}}(z))]^N\int _0^{\infty }dt\, t^N\exp [-\tfrac{1}{2}(t^2-2i {{\mathrm{sgn}}}({{\mathrm{Im}}}(z))\,z\, t) ] \end{aligned}$$may be associated with the Cauchy transforms of Hermite polynomials [2].

The explicit forms of the “bulk” correlation functions \(\mathcal {C}^{(\mathrm{bulk})}_{1,2}(\omega _{F1};\omega _{B1},\omega _{B2})\) (see Eq. (2)) and \(\mathcal {C}^{(\mathrm{bulk})}_{2,2}(\omega _{F1},\omega _{F2};\omega _{B1},\omega _{B2})\) (see Eq. (3)) depend essentially on the signs of \(\omega _{B1}\) and \(\omega _{B2}\). In particular, if \({{\mathrm{sgn}}}\omega _{B1}={{\mathrm{sgn}}}\omega _{B2}\) the first correlation function is given by
$$\begin{aligned} \mathcal {C}^{(\mathrm{bulk, } {{\mathrm{sgn}}}\omega _{B1}={{\mathrm{sgn}}}\omega _{B2})}_{1,2}(\omega _{F1};\omega _{B1},\omega _{B2})\approx e^{\frac{2\omega _{F1}-\omega _{B1}-\omega _{B2}}{4J^2}\left( iE+{{\mathrm{sgn}}}(\omega _{B}) \sqrt{4J^2-E^2}\right) }, \end{aligned}$$(13)where \({{\mathrm{sgn}}}(\omega _{B})\) denotes the common sign of \(\omega _{B1},\omega _{B2}\), whereas for \({{\mathrm{sgn}}}\omega _{B1}=-{{\mathrm{sgn}}}\omega _{B2}\) the same object takes instead the form
$$\begin{aligned}&\mathcal {C}^{(\mathrm{bulk, }{{\mathrm{sgn}}}\omega _{B1}=-{{\mathrm{sgn}}}\omega _{B2})}_{1,2}(\omega _{F1};\omega _{B1},\omega _{B2}) \approx \frac{(-i)^N}{\pi \sqrt{2 N \rho }\,(2J)^{N+1}}\,e^{-\frac{iE}{4J^2}(\omega _{B1}+\omega _{B2}-2\omega _{F1})} \nonumber \\&\times \bigg \{\left[ Ae^{-\pi \rho \omega _{F1}} -(-1)^N A^*e^{+\pi \rho \omega _{F1}} \right] (\omega _{B1}+\omega _{B2}-2\omega _{F1})\,K_0\left( \tfrac{\pi \rho }{2}|\omega _{B1}-\omega _{B2}|\right) \nonumber \\&+\left[ Ae^{-\pi \rho \omega _{F1}}+(-1)^N A^*e^{+\pi \rho \omega _{F1}}\right] |\omega _{B1}-\omega _{B2}|\,K_1\left( \tfrac{\pi \rho }{2}|\omega _{B1}-\omega _{B2}|\right) \bigg \} \end{aligned}$$(14)with
$$\begin{aligned} A(E,N)=(2\pi J^2 \rho +iE)^{N-1/2}\ e^{\frac{i\pi N}{2}\rho E}, \end{aligned}$$(15)where we introduced \(\rho =\frac{1}{2\pi J^2}\sqrt{4J^2-E^2}\) for the mean eigenvalue density of large GOE matrices in the bulk of the spectrum and used the standard notation \(K_m(z)\) for the modified Bessel (Macdonald) functions of the second kind and index \(m\). Note that the asymptotic expression (14) shows an interesting “parity effect”: it behaves differently depending on whether \(N\) is even or odd, for arbitrarily large values of \(N\).
Similarly the second correlation function for \({{\mathrm{sgn}}}\omega _{B1}={{\mathrm{sgn}}}\omega _{B2}\) is given by
$$\begin{aligned}&\mathcal {C}^{(\mathrm{bulk, }{{\mathrm{sgn}}}\omega _{B1}={{\mathrm{sgn}}}\omega _{B2})}_{2,2}(\omega _{F1},\omega _{F2};\omega _{B1},\omega _{B2})\approx \nonumber \\&\left( \frac{J}{\sqrt{N}}\right) ^N \frac{3\tilde{H}_N\left( \frac{\sqrt{N}E}{J}\right) }{[\pi \rho (\omega _{F1}-\omega _{F2})]^3}\, e^{\frac{iE(\omega _{F1}+\omega _{F2})}{2J^2}}\,e^{-\frac{iE(\omega _{B1}+\omega _{B2})}{4J^2}}\, e^{-\frac{\pi \rho |\omega _{B1}+\omega _{B2}|}{2}} \nonumber \\&\times \left[ \pi \rho (\omega _{F1}-\omega _{F2})\cosh \left( \pi \rho (\omega _{F1}-\omega _{F2})\right) -\sinh \left( \pi \rho (\omega _{F1}-\omega _{F2})\right) \right] \!, \end{aligned}$$(16)where \(\tilde{H}_N\left( \frac{\sqrt{N}E}{J}\right) =\sqrt{2}\left( \frac{i\sqrt{N}}{2J}\right) ^N e^{-N/2}\, e^{\frac{N}{4J^2}E^2} [(-1)^N A(E,N)+ A^*(E,N)]\) is the appropriate large-\(N\) asymptotic of the \(N\)th Hermite polynomial, with \(A(E,N)\) defined in Eq. (15). In the case \({{\mathrm{sgn}}}\omega _{B1}=-{{\mathrm{sgn}}}\omega _{B2}\) we get instead
$$\begin{aligned}&\mathcal {C}^{(\mathrm{bulk, }{{\mathrm{sgn}}}\omega _{B1}=-{{\mathrm{sgn}}}\omega _{B2} )}_{2,2}(\omega _{F1},\omega _{F2};\omega _{B1},\omega _{B2}) \approx \nonumber \\&\sqrt{\frac{2N}{\pi }} \frac{J^{N+1}\, e^{-N/2}}{(\omega _{F1}-\omega _{F2})^3}\, e^{\frac{N}{4J^2}E^2}\, e^{\frac{iE(\omega _{F1}+\omega _{F2})}{2J^2}}\,e^{-\frac{iE(\omega _{B1}+\omega _{B2})}{4J^2}} \nonumber \\&\bigg \{ \left[ (\omega _{F1}+\omega _{F2})(\omega _{B1}+\omega _{B2})-2\omega _{F1}\omega _{F2}-2\omega _{B1}\omega _{B2}\right] K_0 \left( \tfrac{\pi \rho }{2}|\omega _{B1}-\omega _{B2}| \right) \nonumber \\&\qquad \times \left[ \pi \rho (\omega _{F1}-\omega _{F2})\cosh \left( \pi \rho (\omega _{F1}-\omega _{F2})\right) -\sinh \left( \pi \rho (\omega _{F1}-\omega _{F2})\right) \right] \nonumber \\&\qquad + \pi \rho (\omega _{F1}-\omega _{F2})^2 |\omega _{B1}-\omega _{B2}| \sinh \left( \pi \rho (\omega _{F1}-\omega _{F2})\right) K_1 \left( \tfrac{\pi \rho }{2}|\omega _{B1}-\omega _{B2}| \right) \bigg \}. \end{aligned}$$(17)Note that the parity of \(N\) plays no role for the large-\(N\) behaviour of this correlation function.
Let us now discuss a few special cases motivated by applications mentioned above.

The characteristic function of the “level curvatures”, Eq. (5) can be represented as a special limit of \(\mathcal {C}^{(\mathrm{bulk})}_{2,2}\),
$$\begin{aligned} \left\langle \frac{\det (E-H)\det (E-H)^{1/2}}{\det (E+i\omega /N-H)^{1/2}} \right\rangle _{GOE,N\rightarrow \infty }&=\lim _{\epsilon \rightarrow 0}\mathcal {C}^{(\mathrm{bulk})}_{2,2}(\epsilon ,\epsilon ;\epsilon ,\omega ) \nonumber \\&\propto e^{-\frac{i E}{4J^2}\omega }\,|\omega | K_1\left( \tfrac{\sqrt{4J^2-E^2}}{4J^2}|\omega | \right) \!. \end{aligned}$$(18)The Fourier transform of this result (for brevity we choose \(E=0\), \(J=1\)) yields the curvature distribution,
$$\begin{aligned} P(c) = \frac{1}{4\pi } \int _{-\infty }^\infty d\omega \,|\omega | K_1\left( \tfrac{1}{2}|\omega |\right) \exp (-i\omega c) = (1+4c^2)^{-3/2}, \end{aligned}$$(19)which coincides with the expression found in earlier works by alternative methods [19, 20].
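The Fourier transform in Eq. (19) can be checked numerically. Inserting the integral representation \(K_1(z)=\int _0^{\infty }e^{-z\cosh t}\cosh t\,dt\) and carrying out the \(\omega \)-integral (an elementary Laplace transform) reduces \(P(c)\) to the single convergent integral \(P(c)=\frac{1}{2\pi }\int _0^{\infty }\cosh t\,\frac{a^2-c^2}{(a^2+c^2)^2}\,dt\) with \(a=\tfrac{1}{2}\cosh t\). The sketch below (our check) compares this with \((1+4c^2)^{-3/2}\):

```python
import math

def p_of_c(c, t_max=40.0, n=200_000):
    """P(c) = (1/(2 pi)) Int_0^inf cosh(t) (a^2 - c^2)/(a^2 + c^2)^2 dt, a = cosh(t)/2,
    obtained from Eq. (19) by inserting K_1(z) = Int_0^inf exp(-z cosh t) cosh t dt
    and doing the omega-integral in closed form."""
    h = t_max / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h                 # midpoint rule
        a = 0.5 * math.cosh(t)
        total += math.cosh(t) * (a * a - c * c) / (a * a + c * c) ** 2
    return total * h / (2.0 * math.pi)

for c in (0.0, 0.5, 1.0):
    print(c, p_of_c(c), (1.0 + 4.0 * c * c) ** -1.5)
```

The numerical values reproduce \((1+4c^2)^{-3/2}\) to quadrature accuracy, including the correct normalisation \(P(0)=1\).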

The two averages featuring in Eq. (6) can be recovered as special cases from \(\mathcal {C}^{(\mathrm{bulk})}_{2,2}\) and are for the choice \(J=1\) given by
$$\begin{aligned}&\left\langle \frac{\det ^2 H}{\det ^{1/2}(H^2+\tfrac{\omega ^2}{N^2})} \right\rangle _{GOE, N \rightarrow \infty }= \ \mathcal {C}^{(\mathrm{bulk})}_{2,2}(0,0;\omega ,-\omega ) \nonumber \\&\approx 2\sqrt{\frac{2N}{\pi }}e^{-N/2} \left[ \frac{\omega ^2}{3} K_0 \left( |\omega | \right) + |\omega | K_1 \left( |\omega | \right) \right] \!, \end{aligned}$$(20)$$\begin{aligned}&\left\langle \det (H^2+\tfrac{\omega ^2}{N^2})^{1/2} \right\rangle _{GOE,N\rightarrow \infty }=\mathcal {C}^{(\mathrm{bulk})}_{2,2}(\omega ,-\omega ;\omega ,-\omega ) \nonumber \\&\qquad \approx \sqrt{\frac{2N}{\pi }}e^{-N/2} \bigg [ \left( \cosh (2|\omega |) -\frac{\sinh (2|\omega |)}{2|\omega |} \right) K_0(|\omega |)+ \sinh (2|\omega |) K_1 (|\omega |) \bigg ]. \end{aligned}$$(21)The above formulas have already been presented in [29, 31], with the derivation relegated to the present paper. We tested the validity of (21) by direct numerical simulations of GOE matrices of a moderate size, see Fig. 1.

For the characteristic function of an offdiagonal element \(K_{ab}\) of the \(K\)matrix, see Eq. (7), we choose to present the corresponding result only for the socalled “perfect coupling” case, i.e. \(E=0\) and \(\gamma _a = \gamma _b =1\), the case of general \(\gamma _a\ne \gamma _b\) following by a trivial rescaling. It is given by
$$\begin{aligned} \lim _{N \rightarrow \infty }\left\langle \frac{|\det H|}{\det ^{1/2}(H^2+\frac{x^2}{N^2})} \right\rangle _{GOE}=\frac{2}{\pi }\left( \frac{|x|}{J} K_0(|x|/J)+\int _{|x|/J}^\infty \!dy\, K_0(y) \right) \!. \end{aligned}$$(22)The ensuing distribution \(\mathcal {P}(K_{ab})\) is then given by its Fourier transform,
$$\begin{aligned} \mathcal {P}(K_{ab}) = \frac{2}{\pi ^2(1+K_{ab}^2)} \left( 1+\frac{\text {arsinh}(K_{ab})}{K_{ab}\sqrt{1+K_{ab}^2}} \right) \!. \end{aligned}$$(23)In Appendix A we verify that this result is in complete agreement with Brouwer’s conjecture claiming that \(K\) for the “perfect coupling” case is distributed as \(\mathcal {P}(K) \propto \det [1+K^2]^{-(M+1)/2}\). We also check these expressions against direct numerical simulations, see Fig. 2.
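As a consistency check, the density (23) is exactly normalised: substituting \(K_{ab}=\sinh u\) turns the normalisation integral into \(\frac{2}{\pi ^2}\int _{-\infty }^{\infty }\left( \frac{1}{\cosh u}+\frac{u}{\sinh u\,\cosh ^2u}\right) du\), and since \(\int _0^{\infty }\frac{du}{\cosh u}=\frac{\pi }{2}\), \(\int _0^{\infty }\frac{u\,du}{\sinh u}=\frac{\pi ^2}{4}\) and \(\int _0^{\infty }\frac{u\sinh u}{\cosh ^2u}\,du=\frac{\pi }{2}\), the integral equals \(\frac{2}{\pi ^2}\left( \pi +\frac{\pi ^2}{2}-\pi \right) =1\). A short numerical confirmation (our sketch):

```python
import math

def integrand(u):
    # after K_ab = sinh(u): (2/pi^2) * (1/cosh(u) + u/(sinh(u)*cosh(u)^2))
    second = 1.0 if u == 0.0 else u / (math.sinh(u) * math.cosh(u) ** 2)
    return (2.0 / math.pi ** 2) * (1.0 / math.cosh(u) + second)

h, n = 1e-3, 40_000     # midpoint rule on [0, 40]; the integrand is even in u
total = 2.0 * sum(integrand((k + 0.5) * h) for k in range(n)) * h
print(total)            # should be close to 1
```
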

The \(M=2\) case of Eq. (10) features the correlation function
$$\begin{aligned} \left\langle \frac{\det (E-H)\, \left[ {{\mathrm{sgn}}}\det (E-H)\right] ^{\Theta (x_1 x_2)}}{\det ^{1/2}(E+\frac{i\gamma _1 x_1}{N}-H) \det ^{1/2}(E+\frac{i\gamma _2 x_2}{N}-H)} \right\rangle _{GOE}. \end{aligned}$$(24)Assume that \(x_1 x_2>0\), so that \(\Theta (x_1 x_2)=0\) and the sign factor is immaterial. The correlation function then takes the form of
$$\begin{aligned} \mathcal {C}^{(\mathrm{bulk})}_{1,2}(0;\gamma _1 x_1,\gamma _2 x_2)\approx e^{-\frac{\gamma _1 x_1+\gamma _2 x_2}{4J^2}(iE+{{\mathrm{sgn}}}(x_1) \sqrt{4J^2-E^2})}, \end{aligned}$$(25)which simplifies even further to \(e^{-\frac{ |x_1+x_2|}{2J}}\) for the “perfect coupling” case \(E=0\), \(\gamma _1=\gamma _2=1\). In the opposite case \(x_1 x_2 <0\) on the other hand the correlation function takes the form
$$\begin{aligned} \left\langle \frac{|\det (E-H)|}{\det ^{1/2}(E+\frac{i\gamma _1 x_1}{N}-H) \det ^{1/2}(E+\frac{i\gamma _2 x_2}{N}-H)} \right\rangle _{GOE}, \end{aligned}$$(26)which is again a special case of \(\mathcal {C}^{(\mathrm{bulk})}_{2,4}\). In the particular case \(\gamma _1 x_1=-\gamma _2 x_2 \equiv \gamma x\), the above expression assumes the same form as the one needed for extracting the distribution of a single off-diagonal element \(K_{ab}\), see Eqs. (7) and (22). While a full proof that \(K\) is distributed according to the Cauchy distribution, Eq. (9), requires the knowledge of the above expression for arbitrary values of \(x_1\) and \(x_2\), one can show that our partial results for \(\gamma _1 x_1=-\gamma _2 x_2 \equiv \gamma x\) are indeed consistent with Eq. (9), see Appendix B.

Finally we notice that an interesting special case of \(\mathcal {C}^{(\mathrm{bulk})}_{1,2}\) is the average of the sign of the GOE characteristic polynomial, given asymptotically by
$$\begin{aligned} \langle {{\mathrm{sgn}}}\det (E-H) \rangle _{GOE, N\rightarrow \infty }&=\lim _{\epsilon \rightarrow 0}\mathcal {C}^{(\mathrm{bulk})}_{1,2}(0;\epsilon ,-\epsilon ) \nonumber \\&\approx \frac{2J^2(-i/(2J))^N}{\sqrt{\pi N}(4J^2-E^2)^{3/4}} [A(E,N)+(-1)^N A^*(E,N)], \end{aligned}$$(27)where \(A(E,N)\) is defined in Eq. (15).
2 Derivation of the Main Results
2.1 Evaluation of the Correlation Functions Eqs. (2) and (3)
At present the only systematic method for evaluating the ensemble averages \( \mathcal {C}^{(\mathrm{bulk})}_{1,2}\) and \(\mathcal {C}^{(\mathrm{bulk})}_{2,2}\) seems to be the so-called supersymmetric formalism, see [45] and references therein. Within the RMT framework several variants of that method are by now well-developed, and we will follow one of them, proposed in [46]. We only outline the major steps of the procedure below, referring the interested reader to the cited literature and leaving technical details for [47]. To that end we start by replacing the square roots of determinants in the denominator by Gaussian integrals over \(N\)-component real vectors \(\mathbf{x}_i\), and the determinants in the numerator by integrals over vectors \(\zeta _i\) whose \(N\) components are complex anticommuting (Grassmann) variables. In that way the correlation function \(\mathcal {C}^{(\mathrm{bulk})}_{1,2}\) can be represented by
and similarly for \(\mathcal {C}^{(\mathrm{bulk})}_{2,2}\) where we have to introduce one more integration over a vector of \(N\) anticommuting components. Note that we have to introduce \(s_i \equiv {{\mathrm{sgn}}}\omega _{Bi}\) in order to render the integrals over the commuting variables convergent.
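For orientation, the two Gaussian integral identities underlying this representation read schematically as follows (a sketch: overall constants and the precise \(i\epsilon \) conventions are omitted, and \(s_i={{\mathrm{sgn}}}\,\omega _{Bi}\) ensures convergence of the bosonic integrals):

$$\begin{aligned} \frac{1}{\det ^{1/2}\left[ -is\,(\mu _B-H)\right] }\propto \int _{\mathbb {R}^N} d\mathbf {x}\; e^{\frac{is}{2}\,\mathbf {x}^T(\mu _B-H)\mathbf {x}},\qquad \det (\mu _F-H)\propto \int d\zeta ^{*}d\zeta \; e^{-i\,\zeta ^{\dagger }(\mu _F-H)\zeta }, \end{aligned}$$

with the first identity holding for commuting (bosonic) real vectors \(\mathbf {x}\) and the second for anticommuting (fermionic) vectors \(\zeta \).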
The ensemble average can now easily be performed and yields for \(\mathcal {C}^{(\mathrm{bulk})}_{2,2}\) the result,
where we introduced the \(N\times N\) matrix \(B=s_1 \mathbf{x}_1 \otimes \mathbf{x}_1^T + s_2 \mathbf{x}_2 \otimes \mathbf{x}_2^T\) as well as the \(2\times 2\) matrices
A similar expression for \(\mathcal {C}^{(\mathrm{bulk})}_{1,2}\) can be obtained from the above by replacing all terms containing \(\zeta _2\) with 0, so that \(Q_F\) becomes a scalar in this case. At the next step we employ a Hubbard–Stratonovich transformation for the anticommuting variables only, by exploiting the identity
where \(\widehat{Q}_F=\begin{bmatrix} q_{11}&q_{12} \\ q_{12}^*&q_{22} \end{bmatrix}\) is a Hermitian \(2\times 2\) matrix of commuting variables for \(\mathcal {C}^{(\mathrm{bulk})}_{2,2}\) and a single scalar variable \(\widehat{Q}_F \equiv q\) for \(\mathcal {C}^{(\mathrm{bulk})}_{1,2}\). For \(\mathcal {C}^{(\mathrm{bulk})}_{2,2}\) we also need to bilinearise the term \(\zeta _1^T\zeta _2\zeta _2^\dag \zeta _1^*\) which can be achieved by introducing an auxiliary Gaussian integral over a complex variable \(u\), with \(u^*\) standing for its conjugate:
With the integrand being bilinear in the Grassmann vectors it is easy to perform the integration over the anticommuting variables explicitly. The resulting expression in both cases depends on the \(\mathbf{x}\)-vectors only via the eigenvalues of the matrix \(Q_B L\). This allows us to follow the route explained in detail in [43, 46] and to employ the identity from Appendix D of [48]:
which helps one to replace the integration over \(n\) real vectors of dimension \(N\) by an integral over a positive definite real symmetric matrix \(\widehat{Q}_B\) of dimension \(n \times n\), where in both our cases actually \(n=2\). In the first case this procedure leads us after a trivial rescaling of the integration variables to
where for notational convenience we omitted the hats here and henceforth. Similarly, in the second case we arrive at
Here we introduced the \(2\times 2\) matrices \(M_{B(F)}=E 1_2+\frac{i}{N} {{\mathrm{diag}}}(\omega _{B1(F1)},\omega _{B2(F2)})\) and used \(\lambda _{B1}\) and \(\lambda _{B2}\) for the real eigenvalues of the \(2\times 2\) non-self-adjoint matrix \(Q_B L\), see [43, 46] for technical details.
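The vector-to-matrix reduction invoked above can be illustrated in its simplest \(n=1\) instance, where for an O(\(N\))-invariant integrand it reduces to \(\int _{\mathbb {R}^N} f(\mathbf{x}^T\mathbf{x})\,d\mathbf{x}=\frac{\pi ^{N/2}}{\Gamma (N/2)}\int _0^\infty f(q)\,q^{N/2-1}\,dq\). A numerical sanity check for \(N=2\) with an arbitrarily chosen test function \(f\):

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

# O(N)-invariant integrand f(x^T x); n = 1 instance of the vector -> matrix
# ("Q_B") reduction:
#   int_{R^N} f(x.x) dx = pi^(N/2)/Gamma(N/2) * int_0^inf f(q) q^(N/2-1) dq
f = lambda q: np.exp(-q) * np.cos(q)   # arbitrary decaying test function

N = 2
t = np.linspace(-6, 6, 1201)
dx = t[1] - t[0]
X, Y = np.meshgrid(t, t)
lhs = f(X**2 + Y**2).sum() * dx * dx   # direct 2D grid quadrature

rhs = np.pi**(N/2) / gamma(N/2) * quad(lambda q: f(q) * q**(N/2 - 1), 0, np.inf)[0]
print(lhs, rhs)  # both ~ pi/2 for this f
```

For general \(n\) the radial variable \(q\) is replaced by the positive definite matrix of scalar products, exactly as in the identity of [48] quoted above.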
Setting aside the issue of performing the integration over the matrix \(Q_B\) for the time being, in the first case the procedure leaves us with a single \(q\)-integration, whereas in the second case we have to deal with an integral over the \(2\times 2\) Hermitian matrix \(Q_F\), which contains four independent variables, and in addition with integrals over the complex variable \(u\). To simplify the integrand we then use that \(Q_F\) can be diagonalised by a unitary transformation \(Q_F = U {{\mathrm{diag}}}(q_{F1},q_{F2}) U^\dag \). The integration over the unitary group can then be performed using the Itzykson–Zuber–Harish–Chandra (IZHC) formula [14, 15], which reduces the integration variables to the set \(q_{F1}\), \(q_{F2}\), \(u\) and \(u^*\). Next we note that by introducing a matrix \(R=\begin{bmatrix} q_{F1}&u \\ u^*&q_{F2} \end{bmatrix}\) one can express the integrand in terms of \(R\) (note e.g. that \(\det Q_F - u^* u = \det R\), \({{\mathrm{tr}}}Q_F^2+2u^*u = {{\mathrm{tr}}}R^2\), etc.). This latter matrix is Hermitian as well, so it can also be diagonalized by a unitary transformation \(R=U_2 {{\mathrm{diag}}}(r_1,r_2)U_2^\dag \). Although the group integral is not of the IZHC type in this case, it still can be performed explicitly. Following this procedure the correlation function simplifies to
At the final step we aim at simplifying the integral over \(Q_B\), which in both cases is a \(2 \times 2\) real symmetric positive definite matrix. As the integrands in (34) and (36) actually depend on the combination \(Q_B L\) we change the integration from \(Q_B\) to \(Q_B L\). Recall that the matrix \(L={{\mathrm{diag}}}({{\mathrm{sgn}}}\omega _{B1},{{\mathrm{sgn}}}\omega _{B2})\) reflects the signs of \(\omega _{B1}\) and \(\omega _{B2}\) and this fact will play now a crucial role. If \(\omega _{B1}\) and \(\omega _{B2}\) are of the same sign, \(L\) is proportional to the identity and hence \(Q_B L\) is still positive definite real symmetric and can be diagonalized by an orthogonal transformation \(Q_B L = \pm O {{\mathrm{diag}}}(p_1,p_2) O^T\). If, however, the signs are different (we may assume for definiteness \(\omega _{B1}>0\) and \(\omega _{B2}<0\)), then the matrix \(Q_B L\) will have an underlying hyperbolic symmetry and can be parametrised as [43, 46]
where \(p_1,p_2>0\) and \(\theta \in (-\infty ,\infty )\). The only term in the integrands (34) and (36) which actually depends on \(\theta \) is \({{\mathrm{tr}}}Q_B L M_B = E(p_1-p_2)+\frac{i}{2N}[(p_1-p_2)(\omega _{B1}+\omega _{B2})+(p_1+p_2)(\omega _{B1}-\omega _{B2})\cosh (2\theta )]\). For the \({{\mathrm{sgn}}}\omega _{B1}={{\mathrm{sgn}}}\omega _{B2}\) case one obtains the same type of expression with \(p_2 \rightarrow -p_2\) and \(\cosh (2\theta ) \rightarrow \cos (2\theta )\). The \(\theta \)-integration can be performed explicitly using
where \(I_0(x)\) and \(K_0(x)\) stand for the modified Bessel functions of the first and second kind, respectively. In this way we arrive at the final expression, which is exact for arbitrary values of \(N\),
and
The superscript \(+\) is to remind us that the expression corresponds to the choice \(\omega _{B1}>0\) and \(\omega _{B2}<0\). The expression for equal signs can be obtained from the above by replacing \(p_1 \rightarrow +{{\mathrm{sgn}}}(\omega _{B1}) p_1\), \(p_2 \rightarrow -{{\mathrm{sgn}}}(\omega _{B1}) p_2\), \(p_1+p_2 \rightarrow p_1-p_2\) and \(K_0 \rightarrow I_0\).
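The \(\theta \)-integrations above rest on the standard Bessel representations \(\int _{-\infty }^{\infty } e^{-z\cosh (2\theta )}\,d\theta =K_0(z)\) and \(\frac{1}{2\pi }\int _0^{2\pi } e^{z\cos (2\theta )}\,d\theta =I_0(z)\). A short numerical check (a sketch, with an arbitrary test value of \(z\)):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0, i0

z = 1.3  # arbitrary positive test value

# hyperbolic angle: int_{-inf}^{inf} exp(-z*cosh(2*theta)) dtheta = K_0(z);
# the tails beyond |theta| = 15 are double-exponentially small
val_K = quad(lambda th: np.exp(-z * np.cosh(2 * th)), -15, 15)[0]

# compact angle: (1/(2*pi)) int_0^{2*pi} exp(z*cos(2*theta)) dtheta = I_0(z)
val_I = quad(lambda th: np.exp(z * np.cos(2 * th)), 0, 2 * np.pi)[0] / (2 * np.pi)

print(val_K, k0(z))
print(val_I, i0(z))
```

This is exactly the mechanism by which \(K_0\) appears for opposite signs of \(\omega _{B1},\omega _{B2}\) (hyperbolic symmetry) and \(I_0\) for equal signs (compact symmetry).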
So far our manipulations were exact and did not involve any approximation. As explained in the introduction, we are mainly interested in extracting the “bulk” large-\(N\) asymptotics of these correlation functions. The most natural way to proceed from here is by performing a saddle-point analysis. We believe that with due effort such an analysis can be carried out with full mathematical rigor, see e.g. the recent paper [49], but we do not attempt this here, concentrating instead on explaining the gross structure of the saddle-point analysis which yields the correct results.
For the case of different signs the saddle points of the integrand are given by
For \(p_1\) and \(p_2\) only solutions with positive real parts contribute to the asymptotics. There is no such restriction for \(q\) or \(r_1\) and \(r_2\), respectively, and we have two saddle points contributing in each of these variables. Hence for \(\mathcal {C}^{(\mathrm{bulk},+)}_{1,2}\) the final expression is given by the sum of two different saddle-point contributions. For \(\mathcal {C}^{(\mathrm{bulk},+)}_{2,2}\) there are in principle four different contributions. However, the contributions from the saddle points satisfying \(r_1^{SP}=r_2^{SP}\) are actually negligible due to the factor \(r_1-r_2\) in the integrand. Moreover, the integrand is invariant under exchanging \(r_1\) and \(r_2\), and hence the two remaining contributions are identical. It therefore suffices to choose for \(r_1^{SP}\) the solution with positive real part and for \(r_2^{SP}\) the one with negative real part. One may further notice that the integrand itself vanishes when evaluated at the saddle points due to the factors \((q-p_1)(q+p_2)\) and \((r_1+p_1)(r_2+p_1)(r_1-p_2)(r_2-p_2)\). This fact makes it necessary to expand the integrand to a higher order around the saddle points. The corresponding calculation is rather tedious, but manageable. We refrain from presenting it here and refer the interested reader to [47] for technical details. The outcome of the analysis is precisely the set of formulae given in Eqs. (14) and (17).
The case of same signs looks quite different. Here the saddle points are given by
where \(s \equiv {{\mathrm{sgn}}}\omega _{B1}={{\mathrm{sgn}}}\omega _{B2}\). Again we must choose \(p_1^{SP}\) and \(p_2^{SP}\) to have positive real parts, so that two contributions arise for \(\mathcal {C}^{(\mathrm{bulk})}_{1,2}\) and four for \(\mathcal {C}^{(\mathrm{bulk})}_{2,2}\). However, the term \((q-s p_1)(q-s p_2)\) is only nonvanishing if we choose \(q^{SP}=-sp_1^{SP}\), the contributions of all other choices becoming subdominant. For \(\mathcal {C}^{(\mathrm{bulk})}_{2,2}\) the same arguments as before suggest choosing for \(r_1^{SP}\) the solution with positive real part and for \(r_2^{SP}\) the one with negative real part, neglecting the other three contributions. While the integrand still vanishes at the saddle points due to the factor \(p_1-p_2\), and for \(\mathcal {C}^{(\mathrm{bulk})}_{2,2}\) due to the factors \((r_1+sp_1)(r_2+sp_1)(r_1+sp_2)(r_2+sp_2)\), the saddle-point analysis is now much simpler than in the previous case. Indeed, when extracting the leading-order contribution one has to replace \(p_1=p_1^{SP}+\xi _1\) (with \(\xi _1\) parametrizing the integration around the relevant saddle point), and similarly for the other variables, and then expand the \(N\)-independent part of the integrand to zeroth order in \(\xi _1\) etc. (apart from the factors which come naturally in first order, like \(p_1-p_2=\xi _1-\xi _2\)). It is then readily seen that the corresponding integrals yield a nonvanishing contribution rather straightforwardly, without the need to expand the integrand to higher orders as was necessary in the previous case of opposite signs. The result of such a saddle-point analysis is then much simpler and is given in Eqs. (13) and (16).
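Two elementary ingredients of the reduction in this subsection can be verified numerically: the Gaussian linearisation over the auxiliary complex variable \(u\), whose commuting-variable analogue reads \(\frac{1}{\pi }\int e^{-|u|^2+\bar{u} z+u\bar{z}}\,d^2u=e^{|z|^2}\), and the algebraic identities \(\det Q_F-u^*u=\det R\), \({{\mathrm{tr}}}Q_F^2+2u^*u={{\mathrm{tr}}}R^2\) behind the matrix \(R\). A sketch (test values chosen arbitrarily):

```python
import numpy as np

# (i) complex Hubbard-Stratonovich-type linearisation: completing the square
# in u shows (1/pi) * int d^2u exp(-|u|^2 + conj(u) z + u conj(z)) = exp(|z|^2)
z = 0.7 + 0.3j
t = np.linspace(-7, 7, 701)
du = t[1] - t[0]
U = t[:, None] + 1j * t[None, :]          # grid over the complex u-plane
gauss = np.exp(-np.abs(U)**2 + np.conj(U)*z + U*np.conj(z)).sum().real * du**2 / np.pi
print(gauss, np.exp(abs(z)**2))

# (ii) packaging (q_F1, q_F2, u) into the Hermitian matrix R:
#      det Q_F - u* u = det R  and  tr Q_F^2 + 2 u* u = tr R^2
rng = np.random.default_rng(0)
qF1, qF2 = rng.normal(size=2)             # eigenvalues of Q_F after the IZHC step
u = rng.normal() + 1j * rng.normal()
QF = np.diag([qF1, qF2])
R = np.array([[qF1, u], [np.conj(u), qF2]])
assert abs(np.linalg.det(QF) - np.conj(u) * u - np.linalg.det(R)) < 1e-12
assert abs(np.trace(QF @ QF) + 2 * np.conj(u) * u - np.trace(R @ R)) < 1e-12
```

In the text the role of \(z\) is played by the nilpotent Grassmann bilinear \(\zeta _2^\dag \zeta _1^*\), for which the same completion of the square applies formally.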
2.2 Distribution of \(K_{ab}\) via Eq. (8)
For the correlation function (8) associated with the distribution of an individual off-diagonal \(K\)-matrix element we consider for simplicity only the perfect coupling case \(E=0\) and \(\gamma _a=\gamma _b=1\), see Eq. (22). For evaluating the ensemble average we first tried to follow the same method as described in the previous section. In this way we started with writing \(\det (H^2+\frac{x^2}{N^2})^{1/2}=\det (H+\frac{ix}{N})^{1/2}\det (H-\frac{ix}{N})^{1/2}\) and \(|\det H|=(\det H)^2 /|\det H| = \lim _{\epsilon \rightarrow 0} (\det H)^2 \det (H+\frac{i\epsilon }{N})^{-1/2}\det (H-\frac{i\epsilon }{N})^{-1/2}\), and then replaced the square roots of characteristic polynomials in the denominator by four Gaussian integrals over real commuting vectors and those in the numerator by Gaussian integrals over two vectors with anticommuting components. The ensemble averaging then yields a \(4 \times 4\) \(Q_B\)-matrix, but we found no efficient way of evaluating the ensuing group integral over the diagonalizing matrices. We also attempted a direct saddle-point analysis for large \(N\) along the same lines as before, and found it to become very tedious, as not only the zeroth and first, but also the second order of the integrand expansion in fluctuations around the relevant saddle points turned out to vanish at the saddle points. Expanding to an even higher order with the group integrals still present did not seem to us a viable option.
Confronted with those difficulties we followed a different method (inspired by the insights from [30]) which avoids introducing anticommuting variables altogether. We demonstrate it first for the correlation function \(\mathcal {C}^{(\mathrm{bulk})}_{1,2}\). For brevity we will consider only the simplest case \(E=0\), where this object can be written as
We start with representing only the denominator by a Gaussian integral over a real \(N\)component vector \(\mathbf{S}\) and hence get
where
Note that the above integral is welldefined only for \(\omega _{B1}\) and \(\omega _{B2}\) having different signs, otherwise the term \(\omega _{B1} \omega _{B2}/N^2>0\) would render the integral divergent.
Let us further assume that \(\omega _{B1}=-\omega _{B2} \equiv \omega _B\), such that the linear term \(iH\frac{\omega _{B1}+\omega _{B2}}{N}\) vanishes. Such an assumption is not necessary for the method to work but helps to simplify the presentation considerably. Next we parametrize the vector \(\mathbf{S}\) of integration variables as \(\mathbf{S}=|\mathbf{S}|\, O e_1\), where \(e_1=[1,0,\dots ,0]\) is an \(N\)-dimensional unit vector and \(O\) is an orthogonal matrix: \(O^{-1}=O^{T}\). Since both the determinant factor and the GOE probability density \(\mathcal {P}(H)\) in (46) are invariant under orthogonal transformations \(H\rightarrow O^{-1}HO\), the matrices \(O, O^T\) can be omitted. The term \(e_1^T H^2 e_1\) then suggests that it is advantageous to decompose \(H\) as
where \(h\) is a real \((N-1)\)-component vector, \(H_{N-1}\) is the \((N-1) \times (N-1)\) subblock of \(H\) and \(H_{11}\) is the first diagonal element of \(H\). With such a decomposition one is able to integrate out the variable \(H_{11}\) as well as the vector \(h\), which leads to
where we have introduced the shorthand notations \(I_1=\langle \det (\tfrac{i\omega _F}{N}-H_{N-1}) \rangle _{N-1}\) and \(I_2=\left\langle \det (\tfrac{i\omega _F}{N}-H_{N-1})\ {{\mathrm{tr}}}(\tfrac{i\omega _F}{N}-H_{N-1})^{-1} \right\rangle _{N-1}\), where the ensemble average is performed over the \((N-1)\times (N-1)\) GOE matrix \(H_{N-1}\). Moreover, it actually suffices to know only \(I_1\) since \(I_2=-iN \frac{dI_1}{d\omega _F}\). As is well-known, \(I_1\) is proportional to a Hermite polynomial: \(I_1 \propto H_{N-1}(i\omega _F/(\sqrt{N}J))\), so that asymptotically we have \(I_1 \propto e^{\omega _F/J}+(-1)^N e^{-\omega _F/J}\). It remains to perform the \(\mathbf{S}\)-integration, for which it is advantageous to introduce rescaled polar coordinates, such that \(\mathbf{S}^2=N^2 R\). The problem then reduces to performing the single integral
For large \(N\gg 1\) it is easy to verify that the leading contribution to the integral can be written as
which indeed coincides with the earlier derived expression for \(\mathcal {C}^{(\mathrm{bulk})}_{1,2}(\omega _F; \omega _B,-\omega _B)\) from Eq. (14).
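The proportionality of GOE averages of characteristic polynomials to Hermite polynomials, used above for \(I_1\), can be illustrated by Monte Carlo at small matrix size: for the \(2\times 2\) GOE with density \(\propto \exp (-\frac{N}{4J^2}{{\mathrm{Tr}}} H^2)\) one finds \(\langle \det (x-H)\rangle =x^2-J^2/2=\frac{J^2}{2}\,{\mathrm{He}}_2(\sqrt{2}\,x/J)\). A sketch (sample size and test point chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(2)
J, n_samp = 1.0, 400_000
x = 1.7   # arbitrary test point

# 2x2 GOE with density ~ exp(-N/(4J^2) Tr H^2), N = 2:
# diagonal entries have variance J^2, the off-diagonal entry J^2/2
H11 = rng.normal(0, J, n_samp)
H22 = rng.normal(0, J, n_samp)
H12 = rng.normal(0, J / np.sqrt(2), n_samp)

dets = (x - H11) * (x - H22) - H12**2
mc = dets.mean()

# exact N = 2 average: <det(x - H)> = x^2 - J^2/2 = (J^2/2) He_2(sqrt(2) x / J)
exact = x**2 - J**2 / 2
print(mc, exact)
```

The Monte Carlo mean reproduces the Hermite value within the statistical error of order \(n_{\mathrm{samp}}^{-1/2}\).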
Now we follow the same route for evaluation of the correlation function (8). We will only outline the key steps and differences from the previous case, but refrain from presenting intermediate results relegating them to [47]. One starts with replacing only the square roots of the characteristic polynomials in the denominator by Gaussian integrals, which leads us to
where \(\mathbf{S}_1\) and \(\mathbf{S}_2\) are two real \(N\)component vectors, and
In contrast to a single vector \(\mathbf{S}\) in the previous case we now have to deal with two real vectors \(\mathbf{S}_1\) and \(\mathbf{S}_2\), which we can conveniently combine into the matrix \(Q\). Such a rank-two \(N\times N\) matrix has two nonzero eigenvalues, which we call \(q_1\) and \(q_2\), all other \(N-2\) eigenvalues being identically zero. Being real symmetric, \(Q\) can be diagonalised by an orthogonal transformation: \(Q=O {{\mathrm{diag}}}(q_1,q_2,0,\dots ,0) O^T\), and the orthogonal matrices can be omitted from the integrand for the same invariance reasons as before. Owing to this structure we can conveniently decompose \(H\) into its upper left \(2 \times 2\) block, its lower right \((N-2) \times (N-2)\) block \(H_{N-2}\) and the two ensuing off-diagonal blocks. It is easy to integrate out all variables apart from those entering \(H_{N-2}\) and get, with a slight abuse of notation:
where we used the notations
The problem then reduces to performing ensemble averages of expressions \(\det H_{N-2}^2\) multiplied with various powers of traces of the inverse matrices \(H_{N-2}^{-k}\) for a few instances of positive integers \(k\). One may notice that all the required averages can be represented as derivatives of the correlation function of two GOE characteristic polynomials, using e.g. the identities
and similarly for the higher powers. As a result for the object featuring in (53) we have:
where the differential operator \(\mathcal {D}_{\xi _1,\xi _2}(q_1,q_2)\) is explicitly given by
The ensemble average of the product of two GOE characteristic polynomials is known and for large \(N\) is given asymptotically by (see e.g. [50])
Using this result, and taking the necessary derivatives and the limits \(\xi _1, \xi _2 \rightarrow 0\), we finally get an explicit expression for \(\Psi (q_1,q_2)\).
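The representation of the trace insertions through derivatives relies, at the level of a single matrix, on Jacobi's formula \(\det (A)\,{{\mathrm{tr}}}A^{-1}=\frac{\partial }{\partial \xi }\det (A+\xi \,\mathbf{1})\big |_{\xi =0}\); averaging such derivatives over the ensemble then produces the required objects. A quick finite-difference check on a random symmetric test matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(5, 5))
A = A + A.T                      # symmetric, generically invertible test matrix

# Jacobi's formula: det(A) * tr(A^{-1}) = d/dxi det(A + xi*I) at xi = 0
lhs = np.linalg.det(A) * np.trace(np.linalg.inv(A))

eps = 1e-5
I5 = np.eye(5)
rhs = (np.linalg.det(A + eps * I5) - np.linalg.det(A - eps * I5)) / (2 * eps)
print(lhs, rhs)  # agree up to finite-difference error
```

Higher inverse powers \({{\mathrm{tr}}}A^{-k}\) arise analogously from higher derivatives, which is what the operator \(\mathcal {D}_{\xi _1,\xi _2}(q_1,q_2)\) encodes.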
The last step is to perform the integrals over \(\mathbf{S}_1\) and \(\mathbf{S}_2\), see Eq. (51). In the previous case we could reduce the integration over \(\mathbf{S}\) to a single integration in polar coordinates. Similarly we can now use the invariance of the integrand and exploit the identity (33). In this way we can restrict the integration to the manifold of positive definite real symmetric \(2 \times 2\) matrices with eigenvalues \(q_1\) and \(q_2\). Extracting the leading large-\(N\) asymptotics is then a straightforward exercise and we finally end up with the integral representation
Note that here the limit \(\epsilon \rightarrow 0\) is implied, which can now trivially be performed. It turns out that this rather complicated-looking integral is actually proportional to
A way to verify this claim is to differentiate both equations (assuming for definiteness \(x>0\), \(J=1\)) with respect to \(x\). The derivative of Eq. (58) is \(-x K_1(x)\), and the derivative of (57) is \(x\) times a certain twofold integral which with some effort can be shown to be proportional to \(K_1(x)\). The details of this calculation are relegated to [47].
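The differentiation step can itself be checked numerically: with \(F(x)=xK_0(x)+\int _x^\infty K_0(y)\,dy\) (the combination appearing in Appendix A and B) one has \(F'(x)=-xK_1(x)\), since \(K_0'=-K_1\). A sketch using a central finite difference (test point chosen arbitrarily):

```python
import numpy as np
from scipy.special import k0, k1
from scipy.integrate import quad

# F(x) = x*K_0(x) + int_x^inf K_0(y) dy;
# F'(x) = K_0(x) + x*K_0'(x) - K_0(x) = -x*K_1(x), using K_0' = -K_1
def F(x):
    return x * k0(x) + quad(k0, x, np.inf)[0]

x = 0.9
h = 1e-3
num_deriv = (F(x + h) - F(x - h)) / (2 * h)
print(num_deriv, -x * k1(x))  # agree up to O(h^2)
```

This confirms the consistency of the two representations at the level of their \(x\)-derivatives.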
3 Conclusions and Open Problems
In this paper we have started the program of systematic evaluation of correlation functions (1) involving half-integer powers of the characteristic polynomials of \(N\times N\) GOE matrices. Motivated by diverse applications outlined in the introductory section we mainly concentrated on extracting the asymptotic behaviour of several objects of that type as \(N\rightarrow \infty \). Our calculations were based on variants of the supersymmetry method or related techniques. The method in a nutshell amounts to replacing the initial average, involving the product of \(K\) characteristic polynomials divided by \(L\) square roots of characteristic polynomials of \(N\times N\) GOE matrices \(H\), with an average over the sets of \(K\times K\) matrices \(Q_F\) and \(L\times L\) matrices \(Q_B>0\) with Gaussian weights, augmented essentially with the factors \(\det {Q_B}\) and \(\det {Q_F}\) raised to powers of order \(N\), see e.g. (35). As we are eventually mostly interested in \(K,L\) fixed but \(N\rightarrow \infty \), this replacement is very helpful as it allows one to employ saddle-point approximations. In this paper we managed to perform all steps of such a procedure successfully only for relatively small values of \(K\) and \(L\), but we hope that the general case can eventually be treated along similar lines. One reason and guiding principle for moderate optimism is as follows. An inspection of the somewhat simpler case \(\beta =2\) shows, see in particular [2], that the success of our method is deeply connected to the existence of the so-called duality relations for Gaussian ensembles, see [51] for a better understanding of such dualities. In particular, Proposition 7 of the latter paper shows that one such duality relation exists for general Gaussian \(\beta \)-ensembles with \(\beta >0\) for an object involving the ensemble average of the product of the corresponding characteristic polynomials raised to the power \(\beta /2\).
For the GOE with \(\beta =1\) that object (see Proposition 2 in [51]) is exactly the particular case of (1) with \(K=0\) and arbitrary integer \(L\), which makes contact with the present context; e.g. one can employ such a duality to reproduce the relation (11) in an alternative way. A deeper understanding of the connections between the supersymmetric approach and the duality relations for Gaussian ensembles will certainly be helpful in dealing efficiently with the asymptotics of (1) for arbitrary integer values of \(K\) and \(L\). The problem of revealing possible Pfaffian-determinant structures behind (1) for finite matrix size \(N\) remains at the moment completely open. It may well be that the methods of [6, 7], or the relations to generalized hypergeometric functions noticed for some particular instances in [44], could be useful for clarifying that issue.
Notes
The corresponding formula in [18] was not written accurately enough and did not show the dependence on the \({{\mathrm{sgn}}}\det \) factors.
Note also that an ensemble average closely related to the left-hand side of (11) was evaluated explicitly in [44], with the general circular \(\beta \)-ensemble replacing the GOE. The result was expressed for all \(\beta >0\) and all integer \(N\ge 1\) in terms of a certain generalised hypergeometric function. The \(\delta \rightarrow 0\) asymptotics for large \(N\gg 1\) of the latter function does agree with the one following from the right-hand side of (11).
References
Brezin, E., Hikami, S.: Characteristic polynomials of random matrices. Commun. Math. Phys. 214, 111–135 (2000)
Fyodorov, Y.V., Strahov, E.: An exact formula for general spectral correlation function of random Hermitian matrices. J. Phys. A: Math. Gen. 36(12), 3203–3213 (2003)
Strahov, E., Fyodorov, Y.V.: Universal results for correlations of characteristic polynomials: RiemannHilbert approach. Commun. Math. Phys. 241(2–3), 343–382 (2003)
Baik, J., Deift, P., Strahov, E.: Products and ratios of characteristic polynomials of random Hermitian matrices. J. Math. Phys. 44(8), 3657–3670 (2003)
Borodin, A., Strahov, E.: Averages of characteristic polynomials in random matrix theory. Commun. Pure Appl. Math. 59(2), 161–253 (2006)
Kieburg, M., Guhr, T.: Derivation of determinantal structures for random matrix ensembles in a new way. J. Phys. A: Math. Theor. 43(7), 075201 (2010)
Kieburg, M., Guhr, T.: A new approach to derive Pfaffian structures for random matrix ensembles. J. Phys. A: Math. Theor. 43(13), 135204 (2010)
Shcherbina, M.: On universality for orthogonal ensembles of random matrices. Commun. Math. Phys. 285, 957–974 (2009)
Erdős, L., Schlein, B., Yau, H.T., Yin, J.: The local relaxation flow approach to universality of the local statistics for random matrices. Ann. Inst. H. Poincare Probab. Stat. 48(1), 1–46 (2012)
Tao, T., Vu, V.: Random matrices: the universality phenomenon for Wigner ensembles. arXiv:1202.0068
Guhr, T., MüllerGroeling, A., Weidenmüller, H.A.: Randommatrix theories in quantum physics: common concepts. Phys. Rep. 299(4–6), 189–425 (1998)
Bohigas, O., Giannoni, M.J., Schmit, C.: Characterization of chaotic quantum spectra and universality of level fluctuation laws. Phys. Rev. Lett. 52, 1–4 (1984)
Müller, S., Heusler, S., Altland, A., Braun, P., Haake, F.: Periodic-orbit theory of universal level correlations in quantum chaos. New J. Phys. 11, 103025 (2009)
Itzykson, C., Zuber, J.B.: The planar approximation. II. J. Math. Phys. 21(3), 411–421 (1980)
Harish-Chandra: Differential operators on a semisimple Lie algebra. Am. J. Math. 79, 87–120 (1957)
Beenakker, C.W.J.: Randommatrix theory of quantum size effects on nuclear magnetic resonance in metal particles. Phys. Rev. B 50, 15170–15173 (1994)
Jiang, T.: How many entries of a typical orthogonal matrix can be approximated by independent normals? Ann. Probab. 34, 1497–1529 (2006)
Fyodorov, Y.V., Khoruzhenko, B.A., Nock, A.: Universal K-matrix distribution in \(\beta =2\) ensembles of random matrices. J. Phys. A: Math. Theor. 46, 262001 (2013)
Fyodorov, Y.V., Sommers, H.J.: Universality of “Level Curvature” distributions for large random matrices: systematic analytical approaches. Z. Phys. B 99, 123–135 (1995)
von Oppen, F.: Exact distributions of eigenvalue curvatures for timereversalinvariant chaotic systems. Phys. Rev. E 51, 2647–2650 (1995)
Fyodorov, Y.V.: Complexity of random energy landscapes, glass transition, and absolute value of the spectral determinant of random matrices. Phys. Rev. Lett. 92(24), 240601 (2004); Erratum: ibid. 93(14), 149901 (2004)
Fyodorov, Y.V.: Counting stationary points of a random landscape as a random matrix problem. Acta Phys. Pol. B 36(9), 2699–2707 (2005)
Akemann, G., Guhr, T., Kieburg, M., Wegner, R., Wirtz, T.: Completing the picture for the smallest eigenvalue of real Wishart matrices. Phys. Rev. Lett. 113, 250201 (2014)
Taniguchi, N., Prigodin, V.N.: Distribution of the absorption by chaotic states in quantum dots. Phys. Rev. B 54, R14305(R) (1996)
Savin, D.V., Sommers, H.J., Fyodorov, Y.V.: Universal statistics of the local Green’s function in wave chaotic systems with absorption. JETP Lett. 82, 544–548 (2005)
Fyodorov, Y.V., Savin, D.V., Sommers, H.J.: Scattering, reflection and impedance of waves in chaotic and disordered systems with absorption. J. Phys. A: Math. Gen. 38(49), 10731–10760 (2005)
Guionnet, A.: private communication
Fyodorov, Y.V., Savin, D.V.: Resonance scattering of waves in chaotic systems. In: Akemann, G., et al. (eds.) The Oxford Handbook of Random Matrix Theory, pp. 703–722. Oxford University Press (2011) [arXiv:1003.0702]
Fyodorov, Y.V., Savin, D.V.: Statistics of resonance width shifts as a signature of eigenfunction nonorthogonality. Phys. Rev. Lett. 108(18), 184101 (2012)
Schomerus, H., Frahm, K.M., Patra, M., Beenakker, C.W.J.: Quantum limit of the laser line width in chaotic cavities and statistics of residues of scattering matrix poles. Physica A 278(3–4), 469–496 (2000)
Fyodorov, Y.V., Savin, D.V.: Resonance widths distribution in RMT: systematic approximation for the weak coupling regime beyond Porter–Thomas (under preparation)
Verbaarschot, J.J.M., Weidenmüller, H.A., Zirnbauer, M.R.: Grassmann integration in stochastic quantum physics: the case of compoundnucleus scattering. Phys. Rep. 129(6), 367–438 (1985)
Sokolov, V.V., Zelevinsky, V.G.: Dynamics and statistics of unstable quantum states. Nucl. Phys. A 504(3), 562–588 (1989)
Hemmady, S., Zheng, X., Ott, E., Antonsen, T.M., Anlage, S.M.: Universal impedance fluctuations in wave chaotic systems. Phys. Rev. Lett. 94(1), 014102 (2005)
Hemmady, S., Zheng, X., Hart, J., Antonsen Jr, T.M., Ott, E., Anlage, S.M.: Universal properties of twoport scattering, impedance, and admittance matrices of wavechaotic systems. Phys. Rev. E 74, 036213 (2006)
Fyodorov, Y.V., Sommers, H.J.: Statistics of resonance poles, phase shifts and time delays in quantum chaotic scattering: random matrix approach for systems with broken timereversal invariance. J. Math. Phys. 38(4), 1918–1981 (1997)
Fyodorov, Y.V., Williams, I.: Replica symmetry breaking condition exposed by random matrix calculation of landscape complexity. J. Stat. Phys. 129(5–6), 1081–1116 (2007)
Aizenman, M., Warzel, S.: On the ubiquity of the Cauchy distribution in spectral problems. Probab. Theory Relat. Fields (2014). doi:10.1007/s00440-014-0587-3
Dietz, B., Friedrich, T., Harney, H.L., MiskiOglu, M., Richter, A., Schäfer, F., Weidenmüller, H.A.: Quantum chaotic scattering in microwave resonators. Phys. Rev. E 81(3), 036205 (2010)
Kumar, S., Nock, A., Sommers, H.J., Guhr, T., Dietz, B., MiskiOglu, M., Richter, A., Schäfer, F.: Distribution of scattering matrix elements in quantum chaotic scattering. Phys. Rev. Lett. 111(3), 030403 (2013)
Nock, A., Kumar, S., Sommers, H.J., Guhr, T.: Distributions of offdiagonal scattering matrix elements: exact results. Ann. Phys. 342, 103–132 (2014)
Brouwer, P.W.: Generalized circular ensemble of scattering matrices for a chaotic cavity with nonideal leads. Phys. Rev. B 51(23), 16878–16884 (1995)
Fyodorov, Y.V., Keating, J.P.: Negative moments of characteristic polynomials of random GOE matrices and singularitydominated strong fluctuations. J. Phys. A: Math. Gen. 36, 4035–4046 (2003)
Forrester, P.J., Keating, J.P.: Singularity dominated strong fluctuations for some random matrix averages. Commun. Math. Phys. 250, 119–131 (2004)
Guhr, T.: Supersymmetry. In: Akemann, G., et al. (eds.) The Oxford Handbook of Random Matrix Theory, pp. 135–154. Oxford University Press (2011) [arXiv:1005.0979]
Fyodorov, Y.V.: Negative moments of characteristic polynomials of random matrices: Ingham–Siegel integral as an alternative to Hubbard–Stratonovich transformation. Nucl. Phys. B 621, 643–674 (2002)
Nock, A.: PhD thesis. Queen Mary University of London (under preparation)
Fyodorov, Y.V., Strahov, E.: Characteristic polynomials of random Hermitian matrices and Duistermaat–Heckman localisation on noncompact Kähler manifolds. Nucl. Phys. B 630, 453–491 (2002)
Shcherbina, T.: Universality of the second mixed moment of the characteristic polynomials of the 1D band matrices: real symmetric case. e-print arXiv:1410.3084
Kösters, H.: On the secondorder correlation function of the characteristic polynomial of a realsymmetric Wigner matrix. Electron. Commun. Probab. 13, 435–447 (2008)
Desrosiers, P.: Duality in random matrix ensembles for all \(\beta \). Nucl. Phys. B 817, 224–251 (2009)
Acknowledgments
Y. V. F. and A. N. were supported by EPSRC Grant EP/J002763/1 “Insights into Disordered Landscapes via Random Matrix Theory and Statistical Mechanics”.
Appendices
Appendix A: Evaluation of the Distribution for \(K_{ab}\) Using Brouwer’s Conjecture
We show that the matrix Cauchy-type probability density \(\mathcal {P}(K) \propto \det [1+K^2]^{-(M+1)/2}\) leads to the same answer for the distribution of an off-diagonal matrix element as the Hamiltonian approach, given in Eq. (23). Without loss of generality we may choose \(M=2\), for which we have explicitly
In order to obtain the probability density of \(K_{12}\) we need to integrate out the other two variables. We start with integrating out the variable \(K_{22}\). The integrand is of the form \((a K_{22}^2+b K_{22}+c)^{-3/2}=\left[ a(K_{22}+\frac{b}{2a})^2-\frac{b^2}{4a}+c\right] ^{-3/2}\) with \(a=1+K_{11}^2,\ b=-2K_{11}K_{12}^2,\ c=1+K_{11}^2+2K_{12}^2+K_{12}^4\). Now we change variables \(\sqrt{\frac{a}{D}}(K_{22}+\frac{b}{2a}) \rightarrow K_{22}\), where we denoted \(D=c-\frac{b^2}{4a}=\frac{(1+K_{11}^2+K_{12}^2)^2}{1+K_{11}^2}>0\). The joint probability density of \(K_{11}\) and \(K_{12}\) is then given by
To integrate out \(K_{11}\) we change variables \(K_{11}=\frac{y}{a}\sqrt{\frac{1}{1-y^2/a^2}}\), with \(a=\frac{K_{12}}{\sqrt{1+K_{12}^2}}\). As the integrand is even, the integral transforms to
The integration on the righthand side can be easily performed as
with the last integral on the right yielding \(\text {artanh}\,a\). In this way we arrive at the probability density for \(K_{12}\) in the form
It can be finally brought to the form of Eq. (23) by reinserting \(a(K_{12})=\frac{K_{12}}{\sqrt{1+K_{12}^2}}\) and employing the identity \(\text {artanh} \left( \frac{x}{\sqrt{1+x^2}}\right) =\text {arsinh}\,x\).
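The algebra of this appendix (the coefficients \(a,b,c\) of the quadratic in \(K_{22}\), the completed square \(D\), and the final \({\mathrm{artanh}}/{\mathrm{arsinh}}\) identity) can be verified symbolically; a sketch in Python using sympy:

```python
import sympy as sp

K11, K12, K22 = sp.symbols('K11 K12 K22', real=True)
K = sp.Matrix([[K11, K12], [K12, K22]])

# det(1 + K^2) as a quadratic in K22, with the coefficients quoted in the text
a = 1 + K11**2
b = -2*K11*K12**2
c = 1 + K11**2 + 2*K12**2 + K12**4
assert sp.simplify((sp.eye(2) + K*K).det() - (a*K22**2 + b*K22 + c)) == 0

# completing the square: D = c - b^2/(4a) = (1 + K11^2 + K12^2)^2 / (1 + K11^2)
D = (1 + K11**2 + K12**2)**2 / (1 + K11**2)
assert sp.simplify(c - b**2/(4*a) - D) == 0

# final identity artanh(x/sqrt(1+x^2)) = arsinh(x), spot-checked numerically
for v in (0.3, 1.7, -2.5):
    assert abs(sp.N(sp.atanh(v/sp.sqrt(1 + v**2)) - sp.asinh(v))) < 1e-12

print("Appendix A algebra verified")
```

All three checks pass, confirming in particular the signs of \(b\) and of the \(-b^2/(4a)\) term.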
Appendix B: Consistency Between Eq. (26) and Brouwer’s Conjecture
We show that the characteristic function of the probability density \(\mathcal {P}(K)\) in the case \(M=2\), given in Eq. (26), is fully consistent with the claim that \(\mathcal {P}(K) \propto \det [1+K^2]^{-3/2}\). For the particular choice \(\gamma _1 x_1 = \gamma _2 x_2 \equiv \gamma x\) the expression Eq. (26) is equivalent to Eq. (22) (for brevity we choose \(\gamma =1\)). Our task then amounts to demonstrating that
where \(X\) can be chosen diagonal, \(X={{\mathrm{diag}}}(x,x)\). Since \(K\) is symmetric we can diagonalise it by an orthogonal transformation, \(K=O{{\mathrm{diag}}}(k_1,k_2)O^T\). Choosing for \(O\) the standard parametrization of a \(2 \times 2\) orthogonal matrix, the lefthand side of Eq. (64) then simplifies to
The integral over the angle yields a Bessel function, and can also be rewritten in the form \( \int _0^{2\pi } d\phi \, e^{\frac{i}{2}x(k_1-k_2)\sin (2\phi )}\). Now note that \(\frac{1}{2}(k_1-k_2)\sin (2\phi ) \equiv K_{12}\), which allows us to present Eq. (65) in the form
This is precisely the Fourier transform of \(\mathcal {P}(K_{12})\), which by the result of Appendix A is proportional to \( x K_0(x)+\int _x^\infty dy\, K_0(y)\). This shows the validity of the claim (64).
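Both the parametrization \(K_{12}=\frac{1}{2}(k_1-k_2)\sin (2\phi )\) and the angular integral \(\int _0^{2\pi }e^{\frac{i}{2}x(k_1-k_2)\sin (2\phi )}\,d\phi =2\pi J_0\big (\tfrac{x(k_1-k_2)}{2}\big )\) admit a quick numerical check (test values chosen arbitrarily):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

x, k1_, k2_, phi = 1.2, 0.8, -0.5, 0.37   # arbitrary test values

# K = O diag(k1, k2) O^T with the standard 2x2 rotation O(phi):
# the off-diagonal entry is K_12 = (k1 - k2) sin(2 phi)/2
O = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
K = O @ np.diag([k1_, k2_]) @ O.T
assert abs(K[0, 1] - 0.5 * (k1_ - k2_) * np.sin(2 * phi)) < 1e-12

# angular integral: int_0^{2pi} exp(i x (k1-k2) sin(2 phi)/2) dphi
#                   = 2 pi J_0(x (k1 - k2)/2); the imaginary part vanishes
re = quad(lambda p: np.cos(0.5 * x * (k1_ - k2_) * np.sin(2 * p)), 0, 2 * np.pi)[0]
im = quad(lambda p: np.sin(0.5 * x * (k1_ - k2_) * np.sin(2 * p)), 0, 2 * np.pi)[0]
print(re, 2 * np.pi * j0(0.5 * x * (k1_ - k2_)), im)
```

The vanishing of the imaginary part reflects the symmetry \(\phi \rightarrow \phi +\pi /2\) of the integrand.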
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Fyodorov, Y.V., Nock, A.: On Random Matrix Averages Involving Half-Integer Powers of GOE Characteristic Polynomials. J. Stat. Phys. 159, 731–751 (2015). https://doi.org/10.1007/s10955-015-1209-x