1 Introduction

The energy Hamiltonian of a closed quantum system is usually modelled by a Hermitian random matrix H. The Hamiltonian of the same system after it is coupled to the outside world via s open channels is modelled by the so-called effective Hamiltonian

$$\begin{aligned} H_{eff}=H + i \Gamma , \end{aligned}$$
(1.1)

where \(\Gamma \ge \mathbf {0}\) is a rank s positive semi-definite Hermitian matrix that is independent of H. The eigenvalues of \(H_{eff}\) are the mathematical model for the resonances, which are the long-lived decaying states of our open quantum system.
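To make this concrete, here is a minimal numerical sketch (ours, not part of the formal development; it assumes the numpy library and anticipates the \(\beta =1\) conventions of Sect. 2): it builds \(H_{eff}\) for a \(GOE_n\) matrix H and a random rank one \(\Gamma \), and checks that all eigenvalues lie in the closed upper half-plane.

import numpy as np

rng = np.random.default_rng(0)
n = 6
Y = rng.standard_normal((n, n))
H = (Y + Y.T) / 2                       # a GOE_n matrix (see Definition 2 below)
v = rng.standard_normal((n, 1))
Gamma = v @ v.T                         # rank one, positive semi-definite coupling
z = np.linalg.eigvals(H + 1j * Gamma)   # the resonances
assert np.all(z.imag >= -1e-10)         # all eigenvalues in the closed upper half-plane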

In this paper we are concerned with the exact joint distribution of these eigenvalues when there is one open channel \((s=1)\), and H is a Gaussian or Laguerre (Wishart) orthogonal/unitary/symplectic random matrix. \(\Gamma \) may be deterministic or random with a given distribution function. We obtain tridiagonal models (in the spirit of Dumitriu–Edelman [2]) and compute the joint eigenvalue distribution for any \(\beta >0\), not merely \(\beta =1,2,4\) (Theorems 3 and 4).

The joint eigenvalue law for non-Hermitian perturbations of Laguerre ensembles has not been addressed in the literature before (however, see [11] for a related topic), while the joint eigenvalue law for non-Hermitian perturbations of Gaussian ensembles has been studied in the physics literature by numerous authors: Ullah [19] (for the case \(\beta =1\)), Sokolov–Zelevinsky [15] (\(\beta =1\)), Stöckmann–Šeba [17] (\(\beta =1,2\)), Fyodorov–Khoruzhenko [5] (\(\beta =2\)). The present paper provides a rigorous derivation of this law which works for any \(\beta >0\) and for any choice of \(\Gamma \)—deterministic or random. More importantly, our approach can be applied to other models, e.g., perturbations of Laguerre \(\beta \)-ensembles (done in this paper); of chiral Gaussian \(\beta \)-ensembles; and multiplicative perturbations of Gaussian and Laguerre \(\beta \)-ensembles (to be explored in a forthcoming paper). We also expect that the tridiagonal matrix models proposed here will be useful for establishing asymptotic properties of these “weakly non-Hermitian” ensembles. Finally, we note that our methods can provide matrix models (namely, block Jacobi matrices with independent (matrix-valued) Jacobi coefficients) for higher rank perturbations \(s\ge 2\) as well, which could prove useful for computing their eigenvalue density (for the case \(\beta =2\), \(s\ge 2\), Fyodorov–Khoruzhenko [5] provide another approach). The solution to this matrix-valued eigenvalue problem is currently beyond our reach, and we leave it as a challenging open problem.

The asymptotic analysis of weakly non-Hermitian ensembles is of high interest in the mathematics and physics literature and has been carried out in [3, 4, 6, 14]; see also [11, 12]. Numerous physical applications of such random matrices can be found in the review papers [6, 7, 10].

The important cornerstones of our proofs are the Dumitriu–Edelman Hermitian matrix models [2], and the Arlinskiĭ–Tsekanovskiĭ result [1] on the spectral analysis of (deterministic) Jacobi matrices.

2 Preliminaries

2.1 Gaussian and Laguerre Ensembles

Definition 1

Denote by \(N (0,\sigma )\), \(N (0,\sigma \mathbf {I}_2)\), and \(N(0,\sigma \mathbf {I}_4)\) the real, complex, and quaternionic normal random variables (r.v.) with variance \(\beta \sigma ^2\) \((\beta =1,2,4\), respectively).

Denote by \(\chi ^2_k\) \((k>0)\) a real r.v. with p.d.f. \(\tfrac{1}{2^{k/2}\Gamma (k/2)} x^{k/2-1}e^{-x/2}\). Denote by \(\chi _k\) \((k>0)\) the square root of a \(\chi ^2_k\) r.v., and define \(\tilde{\chi }_k := \tfrac{1}{\sqrt{2}} \chi _k\) \((k>0)\).
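For simulations these variables are straightforward to sample; the following sketch is an assumption of ours (numpy's chisquare accepts any real \(k>0\)), not part of the paper.

import numpy as np

rng = np.random.default_rng(1)

def chi(k, size=None):
    # chi_k: the square root of a chi^2_k random variable (any real k > 0)
    return np.sqrt(rng.chisquare(k, size))

def chi_tilde(k, size=None):
    # tilde chi_k = chi_k / sqrt(2)
    return chi(k, size) / np.sqrt(2)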

Definition 2

Let Y be an \(n\times n\) matrix with independent identically distributed (i.i.d.) entries chosen from N(0, 1), \(N(0, \mathbf {I}_2)\), or \(N(0, \mathbf {I}_4)\). Then we say that \(X=\tfrac{1}{2} (Y + Y^*)\) belongs to the Gaussian orthogonal/unitary/symplectic ensemble, respectively. We denote it by \(GOE_n\), \(GUE_n\), \(GSE_n\), respectively.

Definition 3

Let Y be an \(m\times n\) matrix with i.i.d. entries chosen from N(0, 1), \(N(0, \mathbf {I}_2)\), or \(N(0, \mathbf {I}_4)\). Then we say that the \(n\times n\) matrix \(X=Y^* Y\) belongs to the Laguerre (Wishart) orthogonal/unitary/symplectic ensemble, respectively. We denote it by \(LOE_{(m,n)}\), \(LUE_{(m,n)}\), \(LSE_{(m,n)}\), respectively.
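A minimal sampler for Definitions 2 and 3 might look as follows (our sketch for \(\beta =1,2\); the quaternionic \(\beta =4\) case is omitted).

import numpy as np

rng = np.random.default_rng(2)

def gaussian_ensemble(n, beta=1):
    # GOE_n (beta=1) or GUE_n (beta=2): X = (Y + Y*)/2
    Y = rng.standard_normal((n, n))
    if beta == 2:
        Y = Y + 1j * rng.standard_normal((n, n))
    return (Y + Y.conj().T) / 2

def laguerre_ensemble(m, n, beta=1):
    # LOE_(m,n) (beta=1) or LUE_(m,n) (beta=2): X = Y* Y with Y of size m x n
    Y = rng.standard_normal((m, n))
    if beta == 2:
        Y = Y + 1j * rng.standard_normal((m, n))
    return Y.conj().T @ Y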

2.2 Tridiagonalization of Hermitian Matrices

Let H be an \(n \times n\) Hermitian matrix. Let \(\mathbf {e}_j\) denote the j-th standard basis vector in \({\mathbb {C}}^n\), that is, the vector with 1 in its j-th entry and 0 everywhere else. Let \(\langle \mathbf {x}, \mathbf {y} \rangle := \mathbf {x}^* \mathbf {y}\) be the usual inner product in \({\mathbb {C}}^n\). Let us apply the Gram–Schmidt orthogonalization procedure in \({\mathbb {C}}^n\) to the sequence of vectors \(\mathbf {e}_1, H \mathbf {e}_1, H^2 \mathbf {e}_1,\ldots , H^{k-1} \mathbf {e}_1\), where \(k= \dim \mathrm{span}\{ H^j \mathbf {e}_1: j\ge 0 \}\). Note that \(1\le k\le n\). After normalization we obtain an orthonormal sequence of vectors \(\mathbf {v}_1,\ldots ,\mathbf {v}_k\) in \({\mathbb {C}}^n\). If \(k< n\), then we choose an arbitrary unit vector \(\mathbf {v}_{k+1}\) in \({\mathbb {C}}^n \ominus \mathrm{span}\{ \mathbf {v}_1,\ldots ,\mathbf {v}_k \}\) and repeat the procedure with \(\mathbf {v}_{k+1}\) in place of \(\mathbf {e}_1\). By repeating this procedure finitely many more times if necessary and combining all the resulting vectors, we obtain an orthonormal basis \(\{\mathbf {v}_j\}_{j=1}^n\) of \({\mathbb {C}}^n\).

Standard arguments (see, e.g., [16, Sect. 1.3]) show that the matrix of H in the basis \(\{\mathbf {v}_j\}_{j=1}^n\) is tridiagonal. In other words, if we form the unitary matrix S with \(\{\mathbf {v}_j\}_{j=1}^n\) as its columns, then \(S^* H S = {\mathcal J}\), where

$$\begin{aligned} {\mathcal J}= S^* H S = \left( \begin{array}{ccccc} b_1 & a_1 & 0 & & \\ a_1 & b_2 & a_2 & \ddots & \\ 0 & a_2 & b_3 & \ddots & 0 \\ & \ddots & \ddots & \ddots & a_{n-1} \\ & & 0 & a_{n-1} & b_n \end{array}\right) , \quad a_j \ge 0, \ b_j\in {\mathbb {R}}. \end{aligned}$$
(2.1)

We call matrices of the form (2.1) Jacobi matrices, and the coefficients \(\{a_j,b_j\}\) their Jacobi coefficients. For future reference, observe that

$$\begin{aligned} S \mathbf {e}_1 = S^* \mathbf {e}_1 = \mathbf {e}_1 \end{aligned}$$
(2.2)

since \(\mathbf {v}_1 = \mathbf {e}_1\) in the Gram–Schmidt procedure. Note that in the tridiagonalization procedure above, if \(\dim \mathrm{span}\{ H^j \mathbf {e}_1: j\ge 0 \} = k < n\), then \(a_j > 0\) for \(1\le j\le k-1\), and \(a_k = 0\), i.e., \({\mathcal J}\) becomes a direct sum of Jacobi matrices of smaller sizes.
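The procedure just described is the Lanczos recursion, and it is easy to implement directly; the sketch below (ours) assumes that \(\mathbf {e}_1\) is cyclic, which holds almost surely for the ensembles considered in the next section, and returns the Jacobi coefficients together with the unitary S.

import numpy as np

def tridiagonalize(H):
    # Lanczos recursion: returns Jacobi coefficients a, b and a unitary S with
    # S e_1 = e_1 (property (2.2)) such that S* H S is the Jacobi matrix (2.1).
    n = H.shape[0]
    S = np.zeros((n, n), dtype=complex)
    S[0, 0] = 1.0                             # v_1 = e_1
    a, b = np.zeros(n - 1), np.zeros(n)
    for j in range(n):
        w = H @ S[:, j]
        b[j] = np.real(np.vdot(S[:, j], w))   # b_{j+1} = <v_j, H v_j>
        w = w - b[j] * S[:, j]
        if j > 0:
            w = w - a[j - 1] * S[:, j - 1]
        if j < n - 1:
            a[j] = np.linalg.norm(w)          # a_j = 0 would signal that e_1 is not cyclic
            S[:, j + 1] = w / a[j]
    return a, b, S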

2.3 Matrix Models for Gaussian and Laguerre Ensembles

Now let us apply the tridiagonalization procedure from the previous section to a random matrix from a Gaussian or a Laguerre ensemble. This is the idea of Dumitriu–Edelman [2] (see also Trotter [18]).

If H is from \(GOE_n\), \(GUE_n\), or \(GSE_n\), then \(\mathbf {e}_1\) is a cyclic vector for H with probability 1. Therefore we obtain (2.1) with \(a_j>0\) for all \(1\le j\le n-1\).

The same is true for a random matrix H from \(LOE_{(m,n)}\), \(LUE_{(m,n)}\), or \(LSE_{(m,n)}\), but only if \(m \ge n\). If \(m<n\), then with probability 1, \(\dim \mathrm{span}\{ H^j \mathbf {e}_1: j\ge 0 \} = m +1 \le n\) and \({\mathbb {C}}^n \ominus \mathrm{span}\{ H^j \mathbf {e}_1: j\ge 0 \} \subseteq \ker H\), so the resulting Jacobi matrix (2.1) has \(a_{m+1} = \cdots = a_{n-1} = 0\), \(b_{m+2} = \cdots = b_n=0\). In other words, \({\mathcal J}\) is the direct sum of an \((m+1)\times (m+1)\) Jacobi matrix and the \((n-m-1)\times (n-m-1)\) zero matrix. The proof in this case follows the arguments of Dumitriu–Edelman [2].

Lemma 1

(Dumitriu–Edelman [2]) Let H be a random matrix taken from the \(GOE_n\), \(GUE_n\), or \(GSE_n\) ensemble. There exists a (random) unitary matrix S satisfying (2.2) such that \(S H S^* = {\mathcal J}\) is tridiagonal (2.1), where

$$\begin{aligned} a_j&\sim \tilde{\chi }_{\beta (n-j)},&1\le j\le n-1, \\ b_j&\sim N(0,1),&1\le j\le n, \end{aligned}$$

where \(\beta =1,2,4\) for \(GOE_n\), \(GUE_n\), \(GSE_n\), respectively.
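In code, Lemma 1 amounts to a few lines valid for arbitrary \(\beta >0\) (our sketch, using the chi variables of Definition 1; this anticipates the \(\beta \)-ensembles of Sect. 2.4 below).

import numpy as np

def gbe(n, beta, rng=None):
    # Tridiagonal G(beta)E_n model of Lemma 1, for any beta > 0
    rng = np.random.default_rng() if rng is None else rng
    b = rng.standard_normal(n)                  # b_j ~ N(0,1)
    dfs = beta * np.arange(n - 1, 0, -1)        # degrees beta*(n-j), j = 1,...,n-1
    a = np.sqrt(rng.chisquare(dfs) / 2)         # a_j ~ tilde chi_{beta(n-j)}
    return np.diag(b) + np.diag(a, 1) + np.diag(a, -1)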

Lemma 2

(Dumitriu–Edelman [2]) Let H be a random matrix taken from the \(LOE_{(m,n)}\), \(LUE_{(m,n)}\), or \(LSE_{(m,n)}\) ensemble. There exists a (random) unitary matrix S satisfying (2.2) such that \(S H S^* = {\mathcal J}= B^* B\) is tridiagonal (2.1), where

$$\begin{aligned} B = \left( \begin{array}{ccccc} x_1 & y_1 & 0 & & \\ 0 & x_2 & y_2 & \ddots & \\ 0 & 0 & x_3 & \ddots & 0 \\ & \ddots & \ddots & \ddots & y_{n-1} \\ & & 0 & 0 & x_n \end{array}\right) , \quad \text{ with } \end{aligned}$$
(2.3)

(i) If \(m \ge n\):

$$\begin{aligned} x_j&\sim \chi _{\beta (m-j+1)},&1\le j\le n, \\ y_j&\sim \chi _{\beta (n-j)},&1\le j\le n-1; \end{aligned}$$

(ii) If \(m \le n-1\):

$$\begin{aligned} x_j&\sim {\left\{ \begin{array}{ll} \chi _{\beta (m-j+1)}, & \text{ if } 1\le j\le m, \\ 0, & \text{ if } m+1\le j\le n, \end{array}\right. } \\ y_j&\sim {\left\{ \begin{array}{ll} \chi _{\beta (n-j)}, & \text{ if } 1\le j\le m, \\ 0, & \text{ if } m+1\le j\le n-1; \end{array}\right. } \end{aligned}$$

where \(\beta =1,2,4\) for \(LOE_{(m,n)}\), \(LUE_{(m,n)}\), \(LSE_{(m,n)}\), respectively.

Remarks

  1.

    For \(GSE_n\) and \(LSE_{(m,n)}\) every entry is quaternionic, so all the instances of \({\mathbb {C}}\) in the arguments above should be replaced with the algebra of quaternions. The resulting coefficients \(a_j\), \(b_j\), \(x_j\), \(y_j\) in Lemmas 1 and 2 are quaternionic too, but with the \(\mathsf {i}\), \(\mathsf {j}\), and \(\mathsf {k}\) parts equal to zero.

  2.

    It is worth reminding the reader that the random matrix S in Lemmas 1 and 2 is statistically independent of \({\mathcal J}\).
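Analogously, Lemma 2 yields a bidiagonal sampler for arbitrary \(\beta >0\); the sketch below (ours) covers both cases (i) and (ii) of Lemma 2 by zero-padding.

import numpy as np

def lbe(m, n, beta, rng=None):
    # Bidiagonal model (2.3) of Lemma 2; returns the n x n matrix J = B^T B
    rng = np.random.default_rng() if rng is None else rng
    x, y = np.zeros(n), np.zeros(n - 1)
    rx, ry = min(m, n), min(m, n - 1)
    x[:rx] = np.sqrt(rng.chisquare(beta * (m - np.arange(rx))))      # x_j ~ chi_{beta(m-j+1)}
    y[:ry] = np.sqrt(rng.chisquare(beta * (n - 1 - np.arange(ry))))  # y_j ~ chi_{beta(n-j)}
    B = np.diag(x) + np.diag(y, 1)
    return B.T @ B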

2.4 \(\beta \)-Ensembles

The tridiagonal matrix ensembles from Lemmas 1 and 2 make sense for any \(\beta >0\), not merely for \(\beta =1,2,4\). They are called the Gaussian \(\beta \)-ensemble \(G\beta E_n\) and the Laguerre \(\beta \)-ensemble \(L\beta E_{(m,n)}\), respectively.

2.5 Spectral Measures of Gaussian and Laguerre \(\beta \)-Ensembles

By the Riesz representation theorem, for any Hermitian matrix H there exists a probability measure \(\mu \) (called the spectral measure) satisfying

$$\begin{aligned} \langle \mathbf {e}_1, H^k \mathbf {e}_1 \rangle = \int _{\mathbb {R}}x^k d\mu (x), \quad \text{ for } \text{ all } \quad k\ge 0. \end{aligned}$$
(2.4)

In fact, any Hermitian matrix can be unitarily diagonalized, so we can write \(H = U D U^*\), where D is the diagonal matrix with the eigenvalues \(\lambda _1,\ldots ,\lambda _n\) of H on the diagonal, and the columns \(\mathbf {u}_1,\ldots ,\mathbf {u}_n\) of U are the corresponding orthonormal eigenvectors of H. This easily implies (2.4) with

$$\begin{aligned} \mu (x) = \sum _{j=1}^n w_j \delta _{\lambda _j}, \quad \text{ where } \quad w_j = |\langle \mathbf {e}_1, \mathbf {u}_j \rangle |^2. \end{aligned}$$
(2.5)

Here \(\delta _\lambda \) is the Dirac measure at \(\lambda \). The support of \(\mu \) consists of at most n points.
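Numerically, the spectral measure is obtained from a single eigendecomposition, e.g. (our sketch):

import numpy as np

def spectral_measure(H):
    # Eigenvalues lambda_j and eigenweights w_j = |<e_1, u_j>|^2 as in (2.5)
    lam, U = np.linalg.eigh(H)
    w = np.abs(U[0, :]) ** 2     # first components of the eigenvectors; sums to 1
    return lam, w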

As our matrix H is random, its spectral measure is random too. The joint law of \(w_j\)’s and \(\lambda _j\)’s in (2.5) will be referred to as the law of the spectral measure of H.

Because of (2.2), the laws of the spectral measures of H and of its Jacobi form \({\mathcal J}\) coincide, that is, H and \({\mathcal J}\) have identically distributed eigenvalues \(\lambda _j\)’s and eigenweights \(w_j\)’s. In particular, laws of the spectral measures of \(GOE_n\) and \(G \beta E_n\) with \(\beta =1\) coincide; laws of the spectral measures of \(GUE_n\) and \(G \beta E_n\) with \(\beta =2\) coincide; laws of the quaternion-valued spectral measures of \(GSE_n\) and \(G \beta E_n\) with \(\beta =4\) (viewed as a matrix with purely-real quaternion entries) coincide. The analogous statements hold true for the Laguerre case.

Laws of the spectral measures for \(G\beta E_n\) and \(L\beta E_{(m,n)}\) with \(m \ge n\) have been computed in [2], see Lemmas 3 and 4 below. We also need the spectral measure of \(L\beta E_{(m,n)}\) when \(m < n\), which we compute in Proposition 1 below.

Lemma 3

(Dumitriu–Edelman [2]) For any \(\beta >0\), the spectral measure of a random matrix from the \(G \beta E_n\)-ensemble is (2.5) with the joint distribution

$$\begin{aligned} \tfrac{1}{g_{\beta ,n}} \prod _{j=1}^n e^{-\lambda _j^2/2} \prod _{1\le j<k \le n} |\lambda _j - \lambda _k|^\beta d\lambda _1 \ldots d\lambda _n \times \tfrac{1}{c_{\beta ,n}} \prod _{j=1}^n w_j^{\beta /2 -1 }dw_1 \ldots dw_{n-1}, \end{aligned}$$
(2.6)

where

$$\begin{aligned}&\sum _{j=1}^n w_j =1; \quad w_j > 0, \quad 1 \le j \le n; \quad \lambda _j \in {\mathbb {R}}, \end{aligned}$$
(2.7)
$$\begin{aligned}&g_{\beta ,n} = (2\pi )^{n/2} \prod _{j=1}^{n} \frac{\Gamma (1+\beta j/2)}{\Gamma (1+\beta /2)}, \quad c_{\beta ,n} = \frac{\Gamma (\beta /2)^n}{\Gamma (\beta n/2)}. \end{aligned}$$
(2.8)

Lemma 4

(Dumitriu–Edelman [2]) For any \(m \ge n\) and \(\beta >0\), the spectral measure of a random matrix from the \(L \beta E_{(m,n)}\)-ensemble is (2.5) with the joint distribution

$$\begin{aligned}&\tfrac{1}{h_{\beta ,n,a}} \prod _{j=1}^n \lambda _j^{\beta a/2} e^{-\lambda _j/2} \prod _{1\le j<k \le n} |\lambda _j - \lambda _k|^\beta d\lambda _1 \ldots d\lambda _n \nonumber \\&\quad \times \, \Gamma (\beta n/2) \prod _{j=1}^n \frac{w_j^{\beta /2 -1 }}{\Gamma (\beta /2)} dw_1 \ldots dw_{n-1}, \end{aligned}$$
(2.9)

where \(a=m-n+1-2/ \beta \) and

$$\begin{aligned}&\sum _{j=1}^n w_j =1; \quad w_j> 0, \quad 1 \le j \le n; \quad \lambda _j >0, \end{aligned}$$
(2.10)
$$\begin{aligned}&h_{\beta ,n,a} = 2^{n(a\beta /2 +1+(n-1)\beta /2)} \prod _{j=1}^{n} \frac{\Gamma (1+\beta j /2) \Gamma (1+\beta a/2+\beta (j-1)/2)}{\Gamma (1+\beta /2)}. \end{aligned}$$
(2.11)

Proposition 1

If \(m \le n-1\) and \(\beta >0\), the spectral measure of a random matrix from the \(L \beta E_{(m,n)}\) ensemble is

$$\begin{aligned} \mu (x) = w_0 \delta _0 + \sum _{j=1}^m w_j \delta _{\lambda _j}, \end{aligned}$$
(2.12)

with the joint distribution

$$\begin{aligned}&\tfrac{1}{h_{\beta ,m,a}} \prod _{j=1}^m \lambda _j^{\beta a/2} e^{-\lambda _j/2} \prod _{1\le j<k \le m} |\lambda _j - \lambda _k|^\beta d\lambda _1 \ldots d\lambda _m \nonumber \\&\quad \times \, \frac{w_0^{\beta (n-m) /2 -1}}{\Gamma (\beta (n-m)/2)}\times \Gamma (\beta n/2) \prod _{j=1}^m \frac{w_j^{\beta /2 -1 }}{\Gamma (\beta /2)} dw_1 \ldots dw_{m}, \end{aligned}$$
(2.13)

where \(a=n-m+1-2/ \beta \); \(h_{\beta ,m,a}\) is as in (2.11); and

$$\begin{aligned} \sum _{j=0}^m w_j =1; \quad w_j> 0, \quad 0 \le j \le m; \quad \lambda _j >0. \end{aligned}$$
(2.14)

Let us denote the normalization constant for \(w_j\)’s as

$$\begin{aligned} d_{\beta ,m,n} = \frac{\Gamma (\beta (n-m)/2) \Gamma (\beta /2)^m}{\Gamma (\beta n/2)}. \end{aligned}$$
(2.15)

Proof

Let us first deal with the \(\beta =1\) case. The distribution of the eigenvalues of a matrix H from \(LOE_{(m,n)}\) is well known. Let its eigenvalues be \(\lambda _1>\cdots>\lambda _m>0\), together with the eigenvalue 0 of multiplicity \(n-m\). Now choose an orthonormal system of (real) eigenvectors \(\mathbf {u}_1,\ldots ,\mathbf {u}_n\) of H corresponding to these eigenvalues, respectively. We pick each \(\mathbf {u}_j\) uniformly at random from the set of all possible choices. Since for any \(n\times n\) orthogonal matrix O, the matrix \(O^T H O\) also belongs to \(LOE_{(m,n)}\), we can see that \(\mathbf {u}_1\) is uniformly distributed on the unit sphere \(\{\mathbf {u}\in {\mathbb {R}}^n: ||\mathbf {u}|| = 1 \}\), and for any \(1\le j \le n\), the vector \(\mathbf {u}_j\) conditioned on \(\mathbf {u}_1, \ldots , \mathbf {u}_{j-1}\) is uniformly distributed on the subset of this unit sphere orthogonal to \(\mathbf {u}_1, \ldots , \mathbf {u}_{j-1}\). So the matrix with \(\mathbf {u}_1,\ldots ,\mathbf {u}_n\) as its columns is a Haar distributed orthogonal matrix (see, e.g., [9, Prop. 2.2(a)]). Then its first row \((v_1,\ldots ,v_n)\) is distributed uniformly on the unit sphere \(\{\mathbf {u}\in {\mathbb {R}}^n: ||\mathbf {u}|| = 1 \}\). Now recalling (2.5), we obtain that \(w_j = v_j^2\), \(1\le j \le m\), and \(w_0 = v_{m+1}^2+\cdots + v_n^2\). One can now apply the arguments from the proof of [8, Cor. A.2] (note that \(dw_j = 2w_j^{1/2} dv_j\)) to see that the joint distribution of \(w_1,\ldots ,w_m\) is proportional to \(w_0^{(n-m-2)/2} \prod _{j=1}^m w_j^{-1/2} dw_1 \ldots dw_m\).

This allows us to compute the Jacobian for the change of variables from \(\{x_j,y_j\}_{j=1}^m\) in (2.3) to \(\{\lambda _j,w_j\}_{j=1}^m\). Why is this change of variables bijective? By Favard’s theorem (see, e.g., [16, Thms. 1.3.2–1.3.3]), there is a one-to-one correspondence between all \((m+1) \times (m+1)\) Jacobi matrices (2.1) with \(a_j>0\) (\(1\le j\le m\)) and all probability measures supported on \(m+1\) distinct points. This means there is a one-to-one correspondence between all positive semi-definite \((m+1) \times (m+1)\) Jacobi matrices \({\mathcal J}\) with \(a_j>0\) (\(1\le j\le m\)), \(\det {\mathcal J}=0\), and all probability measures supported on \(m+1\) points of the form (2.12), (2.14). By positive semi-definiteness, any such \({\mathcal J}\) admits a Cholesky factorization \({\mathcal J}= B^* B\) with B upper-triangular with non-negative entries on the diagonal. Since \({\mathcal J}\) is tridiagonal, this \((m+1)\times (m+1)\) matrix B must be bidiagonal as in (2.3) with \(x_j\ge 0\), \(1\le j \le m+1\). Since \(\det {\mathcal J}=0\), we must have \(x_j=0\) for at least one \(1\le j\le m+1\). But since all \(a_j>0\), we obtain that \(x_{m+1}=0\), \(x_j> 0\) for \(1\le j\le m\), and \(y_j>0\), \(1\le j\le m\). Conversely, any \((m+1)\times (m+1)\) matrix B of the form (2.3) with \(x_j>0, y_j>0\) for \(1\le j\le m\) and \(x_{m+1}=0\) leads to a positive semi-definite \((m+1) \times (m+1)\) Jacobi matrix \({\mathcal J}\) with \(\det {\mathcal J}=0\) and \(a_j>0\) (\(1\le j\le m\)).

Using the matrix model in Lemma 2 (case \(m<n\)) and the distribution (2.13) that we proved for \(\beta =1\), we obtain that the Jacobian is proportional (let us ignore the normalizing constants for now) to

$$\begin{aligned}&\det \frac{\partial (x_1,\ldots ,x_m,y_1,\ldots ,y_m)}{\partial (\lambda _1,\ldots ,\lambda _m,w_1,\ldots ,w_m)} \propto \prod _{j=1}^m x_j^{-m+j} e^{x_j^2/2} \prod _{j=1}^m y_j^{-n+j+1} e^{y_j^2/2} \\&\quad \times \, w_0^{\tfrac{n-m}{2} -1} \prod _{j=1}^m w_j^{-\tfrac{1}{2} } \prod _{j=1}^m \lambda _j^{\tfrac{n-m-1}{2}} e^{-\tfrac{\lambda _j}{2}} \prod _{1\le j<k\le m} |\lambda _j - \lambda _k|. \end{aligned}$$

Now taking the joint distribution of \(\{x_1,\ldots ,x_m,y_1,\ldots ,y_m\}\) for \(L \beta E_{(m,n)}\), \(m<n\), specified in Lemma 2(ii), applying the above Jacobian, and using the identities from Lemma 5 below, one obtains (2.13), up to a normalization. Finally, note that \(h_{\beta ,m,a}\) is the right normalization constant for the eigenvalues in (2.13) by Lemma 4. The normalization constant \(d_{\beta ,m,n}\) can be computed by evaluating the Dirichlet integral, see, e.g., [8, Cor. A.4]. \(\square \)

Lemma 5

The following identities hold:

$$\begin{aligned} \prod _{j=1}^m x_j^{m-j+1} y_j^{m-j+1}&= \prod _{j=0}^m w^{1/2}_j \prod _{1\le j<k\le m} |\lambda _j-\lambda _k| \prod _{j=1}^m \lambda _j , \end{aligned}$$
(2.16)
$$\begin{aligned} \prod _{j=1}^m y_j^2&= w_0 \prod _{j=1}^m \lambda _j. \end{aligned}$$
(2.17)

Proof

(2.16) follows immediately by noting that \(x_j y_j = a_j\), \(1\le j \le m\), and then applying [2, Lemma 2.7]. Note the clash of notations: their n is our \(m+1\); their \(\{b_1,\ldots ,b_{n-1}\}\), \(\{\lambda _1,\ldots ,\lambda _n\}\), and \(\{q_1^2,\ldots ,q_n^2\}\) are our \(\{a_m,\ldots ,a_1\}\), \(\{\lambda _1,\ldots ,\lambda _m,0\}\), and \(\{w_1,\ldots ,w_m,w_0\}\), respectively. To prove (2.17), we use the theory of orthogonal polynomials, see, e.g., [16]. By combining [16, Prop. 3.2.8] and [16, Prop. 2.3.12] we get

$$\begin{aligned} w_0 = -\lim _{z\rightarrow 0} \langle \mathbf {e}_1, z ({\mathcal J}-z)^{-1} \mathbf {e}_1 \rangle = \lim _{z\rightarrow 0} \tfrac{z q_{m+1}(z)}{p_{m+1}(z)} = \tfrac{q_{m+1}(0)}{p'_{m+1}(0)}, \end{aligned}$$

where \(p_j\)’s and \(q_j\)’s are the orthonormal polynomials associated to \({\mathcal J}\) of the first and second kind, respectively (in order to define \(p_{m+1}\) and \(q_{m+1}\) we need \(a_{m+1}\) which we take to be an arbitrary positive number). By [16, Thm. 1.2.4], \(p_{m+1}(z) = \left( \prod _{j=1}^{m+1} a_j^{-1} \right) \det (z-{\mathcal J})\), so \(p'_{m+1}(0) = (-1)^m \prod _{j=1}^{m+1} a_j^{-1} \prod _{j=1}^m \lambda _j\). Using the Wronskian relation [16, Prop. 3.2.3] and \(p_{m+1}(0) = 0\) (since 0 is an eigenvalue of \({\mathcal J}\)), we obtain \(q_{m+1}(0)= 1/(a_{m+1} p_{m}(0))\). Finally, \(p_m(z) = \left( \prod _{j=1}^{m} a_j^{-1} \right) \det (z-{\mathcal J}_{m\times m})\), where \({\mathcal J}_{m\times m}\) is the \(m\times m\) top left corner of \({\mathcal J}\). Recall that \({\mathcal J}= B^* B\). It is easy to see that \({\mathcal J}_{m\times m} = B_{m\times m} ^* B_{m\times m}\), where \(B_{m\times m}\) is the \(m\times m\) top left corner of B. Therefore \(p_m(0)=(\prod _{j=1}^{m} a_j^{-1}) \det (-B_{m\times m} ^* B_{m\times m}) = (-1)^m (\prod _{j=1}^{m} a_j^{-1}) \prod _{j=1}^m x_j^2\). Combining this all together with \(a_j = x_j y_j\), \(1\le j \le m\), we obtain (2.17). \(\square \)
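Both identities are deterministic and easy to double-check numerically. For instance, the following sketch (ours) builds an arbitrary \((m+1)\times (m+1)\) matrix B of the form (2.3) with \(x_{m+1}=0\) and verifies (2.17).

import numpy as np

rng = np.random.default_rng(5)
m = 4
x = np.append(rng.uniform(0.5, 2.0, m), 0.0)    # x_1,...,x_m > 0, x_{m+1} = 0
y = rng.uniform(0.5, 2.0, m)                    # y_1,...,y_m > 0
B = np.diag(x) + np.diag(y, 1)
J = B.T @ B                                     # singular Jacobi matrix, a_j = x_j y_j
lam, U = np.linalg.eigh(J)
w = U[0, :] ** 2
w0 = w[np.argmin(np.abs(lam))]                  # weight of the zero eigenvalue
nonzero = np.sort(lam)[1:]                      # the m positive eigenvalues
print(np.prod(y ** 2), w0 * np.prod(nonzero))   # both sides of (2.17) agree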

3 Rank One Perturbations: Location of the Eigenvalues

Let us discuss all attainable configurations of eigenvalues of rank one perturbations of (deterministic) Hermitian matrices. Part (i) of the following statement is certainly well-known (see, e.g., [1, 11]), but (ii) and (iii) seem to be new.

For the rest of the paper let \({\mathbb {C}}_+ := \{z\in {\mathbb {C}}: {{\mathrm{Im}}}z>0\}\).

Theorem 1

Let \(H_{eff}\) be as in (1.1), where \(H=H^*\), \(\Gamma \ge \mathbf {0}\), \(\mathrm{rank}\,\Gamma = 1\). Choose any \(\mathbf {w} \in \mathrm{Ran}\, \Gamma \), \(\mathbf {w} \ne 0\), and let \(k = \dim {\text {span}}\{H^j \mathbf {w}: j\ge 0\}\). Then:

  (i)

    \(H_{eff}\) has k complex eigenvalues in \({\mathbb {C}}_+\) and \(n-k\) real eigenvalues (counted with their algebraic multiplicities).

  (ii)

    If \(H > \mathbf {0}\), then the k complex eigenvalues \(\{z_j\}_{j=1}^k\) of \(H_{eff}\) belong to the set \( \{ (z_j)_{j=1}^k \in ({\mathbb {C}}_+)^k : \sum _{j=1}^k \mathrm{Arg}\,z_j < \tfrac{\pi }{2} \}, \) and every such configuration may occur.

  (iii)

    If \(H \ge \mathbf {0}\) and \(\det H = 0\), then the k complex eigenvalues \(\{z_j\}_{j=1}^k\) of \(H_{eff}\) belong to the set \( \{ (z_j)_{j=1}^k \in ({\mathbb {C}}_+)^k : \sum _{j=1}^k \mathrm{Arg}\,z_j \le \tfrac{\pi }{2} \}, \) and every such configuration may occur.

Remark

Using similar ideas one can prove the analogue for the case when H is not positive semi-definite but has s negative eigenvalues. The k complex eigenvalues (the other \(n-k\) being real) of \(H_{eff}\) then belong to \(\big \{ (z_j)_{j=1}^k \in ({\mathbb {C}}_+)^k : \tfrac{\pi }{2}+\pi (s-1) < \sum _{j=1}^k \mathrm{Arg}\,z_j \le \tfrac{\pi }{2}+\pi s \big \}\), and every such configuration may occur.
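Theorem 1(ii) can be tested numerically; the following Monte Carlo sketch (ours, for \(\beta =1\)) checks the angular constraint for positive definite H.

import numpy as np

rng = np.random.default_rng(3)
n = 5
for _ in range(100):
    Y = rng.standard_normal((n, n + 2))
    H = Y @ Y.T                              # positive definite almost surely
    v = rng.standard_normal((n, 1))
    z = np.linalg.eigvals(H + 1j * (v @ v.T))
    assert np.angle(z).sum() < np.pi / 2     # Theorem 1(ii): sum of Arg z_j < pi/2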

The proof relies on the following uniqueness and existence result for Jacobi matrices. We use n in (i) and \(m+1\) in (ii) as the sizes of our matrices in order to be consistent with what follows later.

Proposition 2

For \(l>0\), let

$$\begin{aligned} {\mathcal J}_l = {\mathcal J}+ il I_{1\times 1}, \end{aligned}$$
(3.1)

where \(I_{1\times 1}\) is the matrix with (1, 1)-entry equal to 1 and 0 everywhere else.

  (i)

    Let \({\mathcal J}\) be an \(n\times n\) positive definite (real) Jacobi matrix (2.1) with \(a_j>0\), \(j=1,\ldots ,n-1\). The eigenvalues of \({\mathcal J}_l\), counted with their algebraic multiplicities, belong to

    $$\begin{aligned} \left\{ (z_j)_{j=1}^n \in ({\mathbb {C}}_+)^n : \sum _{j=1}^n \mathrm{Arg}\,z_j < \tfrac{\pi }{2} \right\} . \end{aligned}$$
    (3.2)

    Moreover, for every configuration of n points from (3.2) there exists a unique matrix \({\mathcal J}_l\) of the form above with such a system of eigenvalues.

  (ii)

    Let \({\mathcal J}\) be an \((m+1)\times (m+1)\) positive semi-definite (real) Jacobi matrix (2.1) with \(a_j>0\), \(j=1,\ldots ,m\), satisfying \(\det {\mathcal J}= 0\). The eigenvalues of \({\mathcal J}_l\), counted with their algebraic multiplicities, belong to

    $$\begin{aligned} \left\{ (z_j)_{j=1}^{m+1} \in ({\mathbb {C}}_+)^{m+1} : \sum _{j=1}^{m+1} \mathrm{Arg}\,z_j = \tfrac{\pi }{2} \right\} . \end{aligned}$$
    (3.3)

    Moreover, for every configuration of \(m+1\) points from (3.3) there exists a unique matrix \({\mathcal J}_l\) of the form above with such a system of eigenvalues.

We will prove Proposition 2 in Sect. 5.2.

Proof of Theorem 1

Since \(\Gamma \ge \mathbf {0}\), we can diagonalize \(\Gamma = U (l I_{1\times 1}) U^*\), where \(l>0\) and U is unitary. We may assume \(\mathbf {w} = U \mathbf {e}_1\). Then \( H_{eff}=U( U^*HU + i l I_{1\times 1} ) U^*. \) Applying the tridiagonalization procedure from Sect. 2.2, we can reduce \(U^*HU\) to the Jacobi form (2.1): \(U^*HU = S {\mathcal J}S^*\) with S unitary. Note that \(k = \dim {\text {span}}\{H^j \mathbf {w}: j\ge 0\} = \dim {\text {span}}\{ (U^* H U)^j \mathbf {e}_1: j\ge 0\}\), so \(a_j > 0\) for \(1\le j\le k-1\) and \(a_k =0\) (see Sect. 2.2). Therefore \({\mathcal J}\) is a direct sum of a \(k\times k\) Jacobi matrix with positive \(a_j\)’s and some other \((n-k)\times (n-k)\) Jacobi matrix. Because of (2.2), \(S^* I_{1\times 1} S=I_{1\times 1} \) and therefore

$$\begin{aligned} H_{eff}=U S ( {\mathcal J}+ i l I_{1\times 1} ) S^* U^*. \end{aligned}$$
(3.4)

Part (i) now follows from [1]. Part (ii) follows from Proposition 2(i). For the case (iii), \(\det {\mathcal J}=0\), but it might happen that the zero eigenvalue of \({\mathcal J}\) is an eigenvalue either of the \(k\times k\) or \((n-k)\times (n-k)\) submatrix of \({\mathcal J}\). Thus either Proposition 2(i) or (ii) applies and finishes the proof. \(\square \)

4 Rank One Perturbations: Tridiagonal Matrix Models

Let H be an \(n \times n\) matrix from one of the six ensembles \(GOE_n\), \(LOE_{(m,n)}\) (\(\beta =1\)); \(GUE_n\), \(LUE_{(m,n)}\) (\(\beta =2\)); \(GSE_n\), \(LSE_{(m,n)}\) (\(\beta =4\)). Let \(H_{eff}\) be as in (1.1), where \(\Gamma = (\Gamma _{jk})_{j,k=1}^n\) is an \(n\times n\) positive semi-definite (deterministic or random) matrix with real (if \(\beta =1\)), complex (if \(\beta =2\)), or quaternionic (if \(\beta =4\)) entries. We assume that \(\Gamma \) is independent of H and has rank 1 (for the case \(\beta =4\), the (right) rank is viewed over the quaternions, see, e.g., [13]).

Since \(\Gamma \ge \mathbf {0}\), we can write \(\Gamma = U (l I_{1\times 1}) U^*\), where U is orthogonal, unitary, or unitary symplectic for \(\beta =1,2,4\), respectively (for quaternion diagonalization, see, e.g., [13, Thm. 5.3.6]). Since the Hilbert–Schmidt norm is preserved under unitary conjugation, we see that \(l=||\Gamma ||_{HS} = \left( \sum _{j,k=1}^n |\Gamma _{jk}|^2 \right) ^{1/2}\).

Then \( H_{eff}=U( U^*HU + i l I_{1\times 1} ) U^*, \) where U is independent of H. From Definitions 2 and 3, it is clear that \(U^*HU\) belongs to the same ensemble as H. Therefore we can apply the tridiagonalization procedure from Sect. 2.2 to reduce \(U^*HU\) to the Dumitriu–Edelman form: \(U^*HU = S {\mathcal J}S^*\) with \({\mathcal J}\) as in Lemma 1 or 2, and S unitary satisfying \(S^* I_{1\times 1} S=I_{1\times 1}\) (by (2.2)), so (3.4) holds. We have proved

Theorem 2

(Matrix model for rank one non-Hermitian perturbations of Gaussian and Laguerre ensembles) Let H be taken from one of the six ensembles \(GOE_n\), \(GUE_n\), \(GSE_n\), \(LOE_{(m,n)}\), \(LUE_{(m,n)}\), \(LSE_{(m,n)}\). Suppose the (deterministic or random) matrix \(\Gamma \) is independent of H and \(\Gamma \ge \mathbf {0}\), \(\mathrm{rank}\,\Gamma = 1\). Then \(H_{eff} = H+ i \Gamma \) is unitarily equivalent to

$$\begin{aligned} {\mathcal J}+ i l I_{1\times 1} \end{aligned}$$
(4.1)

where \({\mathcal J}\) is as in Lemma 1 or 2, respectively, and \(l=||\Gamma ||_{HS} = (\sum _{j,k=1}^n |\Gamma _{jk}|^2)^{1/2}\) is independent of \({\mathcal J}\).

Remark

This tridiagonal matrix ensemble (4.1) makes sense for any \(\beta >0\).

5 Rank One Perturbations: Joint Eigenvalue Distribution

5.1 Perturbations of Gaussian \(\beta \)-Ensembles

Theorem 3

Fix a deterministic \(l>0\), and for any \(\beta >0\) let \({\mathcal J}\) be from the \(G\beta E_n\) ensemble. Then the eigenvalues of \({\mathcal J}_l\) in (4.1) are distributed on \(\{(z_j)\in ({\mathbb {C}}_+)^n: \sum _{j=1}^n {{\mathrm{Im}}}z_j =l \}\) according to

$$\begin{aligned}&\tfrac{1}{h_{\beta ,n}} \, e^{-\frac{1}{2} \sum _{j=1}^n {{\mathrm{Re}}}(z_j^2) } \times \prod _{j,k=1}^n |z_j-\bar{z}_k |^{\tfrac{\beta }{2} -1} \prod _{j<k} |z_j-z_k|^2 \nonumber \\&\quad \times \, l^{-\tfrac{\beta n}{2}+1} e^{-\frac{l^2}{2} } d^2 z_1\ldots d^2 z_{n-1} d({{\mathrm{Re}}}z_n), \end{aligned}$$
(5.1)

where \(d^2 z\) stands for the 2-dimensional Lebesgue measure on \({\mathbb {C}}\); and

$$\begin{aligned} h_{\beta ,n} = 2^{n(\beta /2-1)} g_{\beta ,n} c_{\beta ,n}, \end{aligned}$$
(5.2)

where \(g_{\beta ,n}\) and \(c_{\beta ,n}\) are as in (2.8).

Remarks

  1.

    In view of Theorem 2, distribution (5.1) with \(\beta =1,2,4\) is the eigenvalue distribution of rank one perturbations of \(GOE_n\), \(GUE_n\), \(GSE_n\), respectively.

  2.

    If we suppose that \(l>0\) is random (independent of \({\mathcal J}\)) with a distribution \(\gamma \), then the expression in (5.1) should be viewed as the conditional distribution of the \(z_j\)’s given l. The joint distribution of the \(z_j\)’s and l is therefore equal to the product of (5.1) and \(d\gamma (l)\). In the special case when \(\gamma \) is absolutely continuous, \(d\gamma (l) = F(l) dl\), we get that the eigenvalues of \({\mathcal J}_l\) are distributed on \(\{(z_j)\in ({\mathbb {C}}_+)^n: \sum _{j=1}^n {{\mathrm{Im}}}z_j \in \mathrm{supp}(F) \}\) according to

    $$\begin{aligned}&\tfrac{1}{h_{\beta ,n}} \, e^{-\frac{1}{2} \sum _{j=1}^n {{\mathrm{Re}}}(z_j^2) } \times \prod _{j,k=1}^n |z_j-\bar{z}_k |^{\tfrac{\beta }{2} -1} \prod _{j<k} |z_j-z_k|^2 \nonumber \\&\quad \times \, l^{-\tfrac{\beta n}{2}+1} e^{-\frac{l^2}{2} } F(l)\, d^2 z_1\ldots d^2 z_n, \end{aligned}$$
    (5.3)

    where \(l=\sum _{j=1}^n {{\mathrm{Im}}}z_j\).
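The support constraint \(\sum _{j=1}^n {{\mathrm{Im}}}z_j =l\) simply reflects \(\mathrm{tr}\,{\mathcal J}_l = \mathrm{tr}\,{\mathcal J}+ il\), and can be observed directly (our sketch, reusing the hypothetical gbe sampler sketched after Lemma 1).

import numpy as np

rng = np.random.default_rng(4)
n, beta, l = 8, 2.5, 0.7
J = gbe(n, beta, rng).astype(complex)   # gbe: the sketch after Lemma 1
J[0, 0] += 1j * l                       # J_l = J + i*l*I_{1x1}
z = np.linalg.eigvals(J)
assert abs(z.imag.sum() - l) < 1e-10    # sum of Im z_j equals l, up to roundoff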

Proof

By Theorem 1(i), each of the eigenvalues \(z_1,\ldots ,z_n\) lies in \({\mathbb {C}}_+\). Moreover, by the result of Arlinskiĭ–Tsekanovskiĭ [1, Thm. 5.1], the mapping

$$\begin{aligned}&\{a_j\}_{j=1}^{n-1}, \{b_j\}_{j=1}^n \mapsto z_1,\ldots , z_n \nonumber \\&(0,\infty )^{n-1}\times {\mathbb {R}}^n \rightarrow ({\mathbb {C}}_+)^n \end{aligned}$$
(5.4)

is one-to-one and onto the set \(\{(z_j)\in ({\mathbb {C}}_+)^n: \sum _{j=1}^n {{\mathrm{Im}}}z_j =l \}\) (see (5.17) below). Then so is the mapping \(\{\lambda _j\}_{j=1}^{n}, \{w_j\}_{j=1}^{n-1}\mapsto z_1,\ldots , z_n\), where \(\lambda _j\)’s and \(w_j\)’s are the eigenvalues and eigenweights of the spectral measure \(\mu \) (2.5) of \({\mathcal J}\). Let us compute the Jacobian of this transformation.

Lemma 6

$$\begin{aligned} \left| \det \frac{\partial \left( {{\mathrm{Re}}}z_1,\ldots , {{\mathrm{Re}}}z_n,{{\mathrm{Im}}}z_1,\ldots , {{\mathrm{Im}}}z_{n-1} \right) }{\partial \left( \lambda _1,\ldots ,\lambda _n,w_1,\ldots ,w_{n-1} \right) } \right| = l^{n-1} \prod _{j<k} \frac{|\lambda _j-\lambda _k|^2}{|z_j-z_k|^2}. \end{aligned}$$
(5.5)

Proof

Let \(m(z)=\langle e_1, ({\mathcal J}-z)^{-1} e_1\rangle = \sum _{j=1}^n \frac{w_j}{\lambda _j-z}\). Denote the characteristic polynomial as \(\sum _{j=0}^n \kappa _j z^j = \det (z-{\mathcal J}_{l})=\prod _{j=1}^n (z-z_j) \), where \(\kappa _n=1\). Let us first compute the Jacobian of the transformation of \({{\mathrm{Re}}}\kappa _0,\ldots ,{{\mathrm{Re}}}\kappa _{n-1},{{\mathrm{Im}}}\kappa _0,\ldots ,{{\mathrm{Im}}}\kappa _{n-2}\) with respect to \(\lambda _1,\ldots ,\lambda _n,w_1,\ldots ,w_{n-1}\). Note that \({{\mathrm{Im}}}\kappa _{n-1} = -\sum _{j=1}^n {{\mathrm{Im}}}z_j = -l\) is fixed.

Observe that

$$\begin{aligned} \sum _{j=0}^n \kappa _j z^j = \det (z-{\mathcal J}) \det ( I -(z-{\mathcal J})^{-1} il I_{1\times 1}) = \left( 1+il m(z)\right) \prod _{j=1}^n (z-\lambda _j). \end{aligned}$$
(5.6)

By taking the real parts for \(z\in {\mathbb {R}}\), and then using analytic continuation, we obtain

$$\begin{aligned} \tfrac{1}{2} \prod _{j=1}^n (z-z_j)+\tfrac{1}{2} \prod _{j=1}^n (z-\bar{z}_j) = \sum _{j=0}^n ({{\mathrm{Re}}}\kappa _j) z^j = \prod _{j=1}^n (z-\lambda _j). \end{aligned}$$
(5.7)

This implies that the Jacobian submatrix \( \frac{\partial \left( {{\mathrm{Re}}}\kappa _0,\ldots , {{\mathrm{Re}}}\kappa _{n-1} \right) }{\partial \left( w_1,\ldots ,w_{n-1} \right) }\) is equal to the \(n\times (n-1)\) zero matrix, while

$$\begin{aligned} \left| \det \frac{\partial \left( {{\mathrm{Re}}}\kappa _0,\ldots , {{\mathrm{Re}}}\kappa _{n-1} \right) }{\partial \left( \lambda _1,\ldots ,\lambda _n \right) } \right| = \prod _{j<k} |\lambda _j-\lambda _k|. \end{aligned}$$
(5.8)

Thus we just need to evaluate \(|\det \frac{\partial \left( {{\mathrm{Im}}}\kappa _0,\ldots , {{\mathrm{Im}}}\kappa _{n-2} \right) }{\partial \left( w_1,\ldots ,w_{n-1} \right) }|\), regarding \(\lambda _j\)’s as constants.

The imaginary parts of (5.6) for \(z\in {\mathbb {R}}\) give

$$\begin{aligned} \sum _{j=0}^{n-1} ({{\mathrm{Im}}}\kappa _j) z^j&= l m(z) \prod _{j=1}^n (z-\lambda _j) = -l \sum _{j=1}^n w_j \prod _{\begin{array}{c} 1 \le k \le n \\ k\ne j \end{array}} (z-\lambda _k) \nonumber \\&= -l \left[ \sum _{j=1}^{n-1} w_j (\lambda _j-\lambda _n)\prod _{\begin{array}{c} 1 \le k \le n-1 \\ k\ne j \end{array}} (z-\lambda _k) \right] - l \prod _{k=1}^{n-1} (z-\lambda _k). \end{aligned}$$
(5.9)

Denote the polynomial in the square brackets as \(s(z)=\sum _{j=0}^{n-2} s_j z^j\). Then by (5.9),

$$\begin{aligned} \det \frac{\partial \left( {{\mathrm{Im}}}\kappa _0,\ldots , {{\mathrm{Im}}}\kappa _{n-2} \right) }{\partial \left( s_0,\ldots ,s_{n-2}\right) } = (-l)^{n-1}. \end{aligned}$$
(5.10)

Now note that s(z) can be rewritten as

$$\begin{aligned} s(z)=\sum _{j=1}^{n-1} \widetilde{w}_j \prod _{\begin{array}{c} 1 \le k \le n-1 \\ k\ne j \end{array}} \frac{z-\lambda _k}{\lambda _j-\lambda _k}, \end{aligned}$$

where

$$\begin{aligned} \widetilde{w}_j = w_j (\lambda _j-\lambda _n) \prod _{\begin{array}{c} 1 \le k \le n-1 \\ k\ne j \end{array}} (\lambda _j-\lambda _k). \end{aligned}$$
(5.11)

One can now recognize that s(z) is the interpolating polynomial \(s(\lambda _k) = \widetilde{w}_k\) for \(k=1,\ldots ,n-1\). This implies

$$\begin{aligned} \left| \det \frac{\partial \left( \widetilde{w}_1,\ldots , \widetilde{w}_{n-1} \right) }{\partial \left( s_0,\ldots ,s_{n-2} \right) } \right| = \prod _{1 \le j<k \le n-1} |\lambda _j - \lambda _k|. \end{aligned}$$
(5.12)

Finally, from (5.11),

$$\begin{aligned} \det \frac{\partial \left( \widetilde{w}_1,\ldots , \widetilde{w}_{n-1} \right) }{\partial \left( w_1,\ldots ,w_{n-1} \right) } = \prod _{j=1}^{n-1} (\lambda _j - \lambda _n) \prod _{1 \le j<k \le n-1} |\lambda _j - \lambda _k|^2. \end{aligned}$$
(5.13)

Combining (5.10), (5.12), (5.13), we get

$$\begin{aligned} \left| \det \frac{\partial \left( {{\mathrm{Im}}}\kappa _0,\ldots , {{\mathrm{Im}}}\kappa _{n-2} \right) }{\partial \left( w_1,\ldots ,w_{n-1} \right) } \right| = l^{n-1} \prod _{1\le j<k \le n} |\lambda _j-\lambda _k|. \end{aligned}$$
(5.14)

Using (5.8), we get

$$\begin{aligned} \left| \det \frac{\partial \left( {{\mathrm{Re}}}\kappa _0,\ldots ,{{\mathrm{Re}}}\kappa _{n-1},{{\mathrm{Im}}}\kappa _0,\ldots , {{\mathrm{Im}}}\kappa _{n-2} \right) }{\partial \left( \lambda _1,\ldots ,\lambda _n,w_1,\ldots ,w_{n-1} \right) } \right| = l^{n-1} \prod _{1\le j<k \le n} |\lambda _j-\lambda _k|^2. \end{aligned}$$
(5.15)

Finally, observe that if we impose no restrictions on the \(\kappa _j\)’s and \(z_j\)’s, then

$$\begin{aligned} \prod _{j<k} |z_j-z_k|^2&= \left| \det \frac{\partial \left( {{\mathrm{Re}}}\kappa _0,\ldots , {{\mathrm{Re}}}\kappa _{n-1},{{\mathrm{Im}}}\kappa _0,\ldots , {{\mathrm{Im}}}\kappa _{n-1} \right) }{\partial \left( {{\mathrm{Re}}}z_1,\ldots ,{{\mathrm{Re}}}z_n,{{\mathrm{Im}}}z_1,\ldots ,{{\mathrm{Im}}}z_{n} \right) }\right| \\&= \left| \det \frac{\partial \left( {{\mathrm{Re}}}\kappa _0,\ldots , {{\mathrm{Re}}}\kappa _{n-1},{{\mathrm{Im}}}\kappa _0,\ldots , {{\mathrm{Im}}}\kappa _{n-1} \right) }{\partial \left( {{\mathrm{Re}}}z_1,\ldots ,{{\mathrm{Re}}}z_n,{{\mathrm{Im}}}z_1,\ldots ,{{\mathrm{Im}}}z_{n-1},{{\mathrm{Im}}}\kappa _{n-1} \right) }\right| \\&= \left| \det \frac{\partial \left( {{\mathrm{Re}}}\kappa _0,\ldots , {{\mathrm{Re}}}\kappa _{n-1},{{\mathrm{Im}}}\kappa _0,\ldots , {{\mathrm{Im}}}\kappa _{n-2} \right) }{\partial \left( {{\mathrm{Re}}}z_1,\ldots ,{{\mathrm{Re}}}z_n,{{\mathrm{Im}}}z_1,\ldots ,{{\mathrm{Im}}}z_{n-1} \right) }\right| . \end{aligned}$$

The first equality is a standard fact; the second equality comes from the change of variables \({{\mathrm{Im}}}\kappa _{n-1} = -\sum _{j=1}^n {{\mathrm{Im}}}z_j\); the last equality comes from the Laplace expansion of the determinant (under the condition \({{\mathrm{Im}}}\kappa _{n-1} = {\text {const}}\)).

Combining the last Jacobian with (5.15), we obtain the statement of the lemma. \(\square \)

The joint distribution of \(\{\lambda _j\}_{j=1}^{n}, \{w_j\}_{j=1}^{n-1}\) is

$$\begin{aligned} \tfrac{1}{g_{\beta ,n} c_{\beta ,n}} \prod _{j<k} |\lambda _j-\lambda _k|^\beta \prod _{j=1}^n e^{- \lambda _j^2/2} \prod _{j=1}^n w_j^{\beta /2-1} d\lambda _1\ldots d\lambda _n dw_1\ldots dw_{n-1}. \end{aligned}$$

Using this and Lemma 6, we obtain that the distribution of \(z_j\)’s is

$$\begin{aligned} \tfrac{1}{g_{\beta ,n} c_{\beta ,n}} l^{-(n-1)} \prod _{j<k} |\lambda _j-\lambda _k|^{\beta -2} \prod _{j=1}^n e^{- \lambda _j^2/2} \prod _{j=1}^n w_j^{\beta /2-1} \prod _{j<k} |z_j-z_k|^2 d^2 z_1\ldots d^2 z_{n-1}\, d({{\mathrm{Re}}}z_n). \end{aligned}$$
(5.16)

Note that

$$\begin{aligned} l=-{{\mathrm{Im}}}\kappa _{n-1}&= \sum _{j=1}^n {{\mathrm{Im}}}z_j, \end{aligned}$$
(5.17)
$$\begin{aligned} \sum _{j=1}^n \lambda _j&= \sum _{j=1}^n {{\mathrm{Re}}}z_j, \end{aligned}$$
(5.18)
$$\begin{aligned} \sum _{j \ne k} \lambda _j \lambda _k&= \sum _{j\ne k} {{\mathrm{Re}}}(z_j z_k). \end{aligned}$$
(5.19)

The first equation comes from (5.9), while the latter two follow from (5.7). Then

$$\begin{aligned} \sum _{j=1}^n \lambda _j^2&= \left( \sum _{j=1}^n {{\mathrm{Re}}}z_j\right) ^2 - \sum _{j\ne k} {{\mathrm{Re}}}(z_j z_k) = \sum _{j=1}^n ({{\mathrm{Re}}}z_j)^2 + \sum _{j\ne k} ({{\mathrm{Im}}}z_j)({{\mathrm{Im}}}z_k) \nonumber \\&= \sum _{j=1}^n {{\mathrm{Re}}}(z_j^2) + l^2. \end{aligned}$$
(5.20)

Finally, from (5.6),

$$\begin{aligned} -i lw_j = il \mathop {{{\mathrm{Res}}}}\limits _{z=\lambda _j} m(z) = \mathop {{{\mathrm{Res}}}}\limits _{z=\lambda _j} \prod _{k=1}^n \frac{z-z_k}{z-\lambda _k} = \frac{\prod _{k=1}^n (\lambda _j-z_k)}{\prod _{k\ne j} (\lambda _j-\lambda _k)}, \end{aligned}$$
(5.21)

so

$$\begin{aligned} \prod _{j=1}^n w_j = (\tfrac{i}{l})^n \frac{\prod _{j,k} (\lambda _j-z_k)}{\prod _{j<k} |\lambda _j-\lambda _k|^2} = (\tfrac{i}{l})^n \tfrac{1}{2^n} \frac{\prod _{j,k} (\bar{z}_j-z_k)}{\prod _{j<k} |\lambda _j-\lambda _k|^2} = \tfrac{1}{(2l)^n} \frac{\prod _{j,k} |\bar{z}_j-z_k|}{\prod _{j<k} |\lambda _j-\lambda _k|^2}, \end{aligned}$$
(5.22)

where we used (5.7) with \(z=z_k\), \(k=1,\ldots ,n\). Combining (5.17), (5.20), (5.22) with (5.16), we obtain (5.1).\(\square \)

5.1.1 Example

Since \(\Gamma \) in Theorem 2 has rank 1, we can decompose it as \(\Gamma = L^* L\), where \(L = (l_{1j})_{j=1}^n\) is a \(1\times n\) matrix. If the entries \(l_{1j}\) of L are independent and normal \(N(0,\sigma \mathbf {I}_\beta )\), then \(l=\sum _{j=1}^n |l_{1j}|^2 \sim \sigma ^2 \chi ^2_{\beta n}\), that is, l is distributed on \((0,\infty )\) according to F(l)dl with \(F(l) = \tfrac{1}{(\sqrt{2} \sigma )^{\beta n} \Gamma (\beta n/2)} l^{\beta n/2-1}e^{-l/(2\sigma ^2)}\). In this special case, the eigenvalues \(\{z_1,\ldots ,z_n\}\) are distributed on \(({\mathbb {C}}_+)^n\) according to

$$\begin{aligned}&\tfrac{1}{(\sqrt{2} \sigma )^{\beta n} \Gamma (\beta n/2) c_{\beta ,n}g_{\beta ,n}} \, e^{-\frac{1}{2} \sum _{j=1}^n {{\mathrm{Re}}}(z_j^2) } \prod _{j,k=1}^n |z_j-\bar{z}_k |^{\tfrac{\beta }{2} -1} \prod _{j<k} |z_j-z_k|^2 \nonumber \\&\quad \times \, e^{-\frac{l^2}{2} -\frac{l}{2\sigma ^2}} d^2 z_1\ldots d^2 z_n. \end{aligned}$$
(5.23)

5.2 Perturbations of Laguerre \(\beta \)-Ensembles

Proof of Proposition 2

We use the same notation as in the previous section: let \(z_j\)’s be the eigenvalues of \({\mathcal J}_l\); let \(\lambda _j\)’s and \(w_j\)’s be the eigenvalues and eigenweights of the spectral measure of \({\mathcal J}\) (which is of the form (2.5) with (2.10) for the case (i) and (2.12) with (2.14) for the case (ii)). By [1], \(z_j\in {\mathbb {C}}_+\) for every j.

Consider now case (i). Equations (5.7) and (5.9) imply

$$\begin{aligned} {{\mathrm{Re}}}s_k(z_1,\ldots ,z_n)&= s_k(\lambda _1,\ldots ,\lambda _n), \quad k=1,2,\ldots ,n; \end{aligned}$$
(5.24)
$$\begin{aligned} {{\mathrm{Im}}}s_k(z_1,\ldots ,z_n)&= l \sum _{j=1}^n w_j s_{k-1}(\{\lambda _t\}_{t\ne j}), \quad k=1,2,\ldots ,n, \end{aligned}$$
(5.25)

where \(s_0:=1\), and \(s_k\) (\(k\ge 1\)) is the k-th elementary symmetric polynomial

$$\begin{aligned} s_k(z_1,\ldots ,z_n) := \sum _{1\le j_1<j_2<\ldots <j_k\le n} z_{j_1}\ldots z_{j_k}. \end{aligned}$$
(5.26)

Since for each j, \(\lambda _j> 0, w_j>0, l>0\), we obtain that \(z_1,\ldots ,z_n\) must belong to

$$\begin{aligned} \left\{ (z_j)_{j=1}^n \in ({\mathbb {C}}_+)^n : s_k(z_1,\ldots ,z_n) \in Q_1, \quad k=1,2,\ldots ,n \right\} , \end{aligned}$$
(5.27)

where \(Q_1:=\{z: 0<\mathrm{Arg}\,z<\pi /2 \}\). Conversely, take a collection of points from (5.27). Since it belongs to \(({\mathbb {C}}_+)^n\), we know from [1, Thm. 5.1] that there exists a unique matrix of the form \({\mathcal J}+ il I_{1\times 1}\) with \(l>0\) and \(a_j>0\), \(j=1,\ldots ,n-1\), having these points as its eigenvalues. Equation (5.7), together with the positivity in (5.24), implies that \(\lambda _1,\ldots ,\lambda _n\) are the real roots of a polynomial whose coefficients alternate in sign. By Descartes’ rule of signs, such a polynomial cannot have negative zeros, and since its constant term is nonzero, it cannot have the zero root either. This means that all \(\lambda _j\)’s are positive. Therefore (5.27) is precisely the space of all possible eigenvalue configurations of \(H_{eff}\). Let us now show that it coincides with (3.2).

It is elementary that (3.2) is a subset of (5.27). To see the converse, take any sequence from (5.27). Since \( s_n(z_1,\ldots ,z_n)= z_1 z_2 \ldots z_n \in Q_1\), we must have that

$$\begin{aligned} 0+2k\pi< \mathrm{Arg}z_1 + \mathrm{Arg}z_2 +\cdots + \mathrm{Arg}z_n < \pi /2 + 2k\pi \end{aligned}$$
(5.28)

for some integer \(k\ge 0\). We already know that these \(z_1,\ldots ,z_n\) are the eigenvalues of \({\mathcal J}+ il I_{1\times 1}\), where \({\mathcal J}\) is positive definite. Let us now fix \({\mathcal J}\) and view \(z_1,\ldots ,z_n\) as functions of \(l\ge 0\) only. Each of these functions is continuous and never passes through 0. For any \(0< l<\infty \), we have (5.28) for some k. But when \(l=0\) the sum of the arguments is zero. By continuity, \(k=0\) for any l, i.e., (5.27) = (3.2).

To deal with the case (ii), we use similar arguments with \(m+1\) instead of n and \(\lambda _1,\ldots ,\lambda _m,0\) as the eigenvalues (with \(\lambda _j>0, j=1,\ldots , m\)). Then Eqs. (5.24) and (5.25) imply that the eigenvalues \(z_1,\ldots ,z_{m+1}\) of \({\mathcal J}+ilI_{1\times 1}\) belong to

$$\begin{aligned} \big \{ (z_j)_{j=1}^{m+1} \in ({\mathbb {C}}_+)^{m+1} : s_{m+1}(z_1,\ldots ,z_{m+1}) \in i{\mathbb {R}}_+; \big . \nonumber \\ \big . s_k(z_1,\ldots ,z_{m+1}) \in Q_1, \quad k=1,2,\ldots ,m \big \}, \end{aligned}$$
(5.29)

where \({\mathbb {R}}_+ = \{z\in {\mathbb {R}}: z>0\}\). Conversely, by [1, Thm. 5.1], any configuration of points from (5.29) coincides with the eigenvalues of some \({\mathcal J}+ilI_{1\times 1}\), \(l>0\). The eigenvalues \(\lambda _1,\ldots ,\lambda _{m+1}\) of \({\mathcal J}\) satisfy \(s_{k}(\lambda _1,\ldots ,\lambda _{m+1})>0\) for \(k=1,\ldots ,m\) and \(s_{m+1}(\lambda _1,\ldots ,\lambda _{m+1})=0\). This implies that \(\lambda _j>0\) for all j except for one zero eigenvalue.

Finally, let us show that (5.29) coincides with (3.3). The inclusion (3.3)\(\subseteq \)(5.29) is easy. Conversely, take any configuration \(\{z_j\}_{j=1}^{m+1}\) from (5.29). By the above, these points are the eigenvalues of some \({\mathcal J}+ilI_{1\times 1}\) with \(l>0\), where \({\mathcal J}\) has eigenvalues \(\{0,\lambda _1,\ldots ,\lambda _m\}\) with \(\lambda _j>0\) for \(1\le j\le m\). Since \(s_{m+1} \in i{\mathbb {R}}_+\) in (5.29), we have

$$\begin{aligned} \mathrm{Arg}\, z_1 + \mathrm{Arg}\, z_2 +\ldots + \mathrm{Arg}\, z_{m+1} = \pi /2 + 2k\pi \end{aligned}$$
(5.30)

for some integer \(k\ge 0\). After reordering, we can assume that \(z_j \rightarrow \lambda _j\), \(1\le j\le m\), and \(z_{m+1}\rightarrow 0\) when \(l\rightarrow 0\) (while \({\mathcal J}\) is fixed). Therefore \(\mathrm{Arg}\, z_j \rightarrow 0\) as \(l\rightarrow 0\) for \(1\le j\le m\), while \(0\le \mathrm{Arg}\, z_{m+1}\le \pi /2\) for any l. This proves that \(k=0\), and so (5.29) \(\subseteq \) (3.3), finishing the proof. \(\square \)

In the next theorem we compute the joint distribution of eigenvalues of rank one perturbations of the Laguerre \(\beta \)-ensembles.

Theorem 4

Fix a deterministic \(l>0\), and for any \(\beta >0\) and any integer \(m,n>0\), let \({\mathcal J}\) be the \(n\times n\) matrix from the \(L\beta E_{(m,n)}\) ensemble.

  (i)

    If \(m\ge n\), then the eigenvalues \(\{z_1,\ldots ,z_n\}\) of \({\mathcal J}_l = {\mathcal J}+ ilI_{1\times 1}\) are distributed on

    $$\begin{aligned} \Big \{ (z_j)_{j=1}^n \in ({\mathbb {C}}_+)^n : \sum _{j=1}^n \mathrm{Arg}\,z_j < \tfrac{\pi }{2}, \, \sum _{j=1}^n {{\mathrm{Im}}}z_j = l \Big \} \end{aligned}$$
    (5.31)

    according to

    $$\begin{aligned}&\tfrac{1}{q_{\beta ,n,a,l}} \prod _{j,k=1}^n |z_j-\bar{z}_k |^{\tfrac{\beta }{2} -1} \prod _{j<k} |z_j-z_k|^2 \nonumber \\&\quad \times \, e^{-\frac{1}{2} \sum _{j=1}^n {{\mathrm{Re}}}z_j } \left( {{\mathrm{Re}}}\prod _{j=1}^n z_j \right) ^{\tfrac{\beta a}{2}} d^2 z_1\ldots d^2 z_{n-1} d({{\mathrm{Re}}}z_n), \end{aligned}$$
    (5.32)

    where \(a=m-n+1-2/\beta \) and

    $$\begin{aligned} q_{\beta ,n,a,l} = 2^{n(\beta /2-1)} h_{\beta ,n,a} c_{\beta ,n} l^{\tfrac{\beta n}{2}-1}, \end{aligned}$$

    where \(h_{\beta ,n,a}\) and \(c_{\beta ,n}\) are as in (2.11) and (2.8).

  (ii)

    If \(m\le n-1\), then the \(m+1\) nonzero eigenvalues of \({\mathcal J}_l = {\mathcal J}+ ilI_{1\times 1}\) are distributed on

    $$\begin{aligned} \left\{ (z_j)_{j=1}^{m+1} \in ({\mathbb {C}}_+)^{m+1} : \sum _{j=1}^{m+1} \mathrm{Arg}\,z_j = \tfrac{\pi }{2}, \, \sum _{j=1}^{m+1} {{\mathrm{Im}}}z_j= l \right\} \end{aligned}$$
    (5.33)

    according to

    $$\begin{aligned}&\tfrac{1}{t_{\beta ,m,n,l}} \prod _{j,k=1}^{m+1} |z_j-\bar{z}_k |^{\tfrac{\beta }{2} -1} \prod _{1\le j<k\le m+1} |z_j-z_k|^2 \nonumber \\&\quad \times \, e^{-\frac{1}{2} \sum _{j=1}^{m+1} {{\mathrm{Re}}}z_j } \prod _{j=1}^{m+1} |z_j|^{\frac{\beta (n-m-1)}{2}} \left( {{\mathrm{Re}}}\prod _{j=1}^m z_j \right) ^{-1} d^2 z_1\ldots d^2 z_m, \end{aligned}$$
    (5.34)

    where

    $$\begin{aligned} t_{\beta ,m,n,l} = (m+1) 2^{(m+1)(\beta /2-1)} h_{\beta ,m,a} d_{\beta ,m,n} l^{\tfrac{\beta n}{2}-1}, \end{aligned}$$
    (5.35)

    where \(a=n-m+1-2/\beta \), and \(h_{\beta ,m,a}\) and \(d_{\beta ,m,n}\) are as in (2.11) and (2.15).

Remarks

  1.

    Distributions (5.32) and (5.34) with \(\beta =1,2,4\) are the eigenvalue distributions of rank one perturbations of \(LOE_{(m,n)}\), \(LUE_{(m,n)}\), \(LSE_{(m,n)}\), respectively.

  2.

    In (ii), \(z_{m+1}\) is determined from \(z_1,\ldots ,z_m\) because of (5.33).

  3.

    Similarly to Remark 2 after Theorem 3, we can also assume that \(l>0\) is random (independent of \({\mathcal J}\)) with a distribution \(\gamma \). Then (5.32) and (5.34) are the conditional distributions of the \(z_j\)’s given l. The joint distribution of the \(z_j\)’s and l is then equal to the product with \(d\gamma (l)\) and can be calculated as in the case of the Gaussian ensembles above.
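The two constraints in (5.33) can likewise be observed numerically (our sketch, reusing the hypothetical lbe sampler sketched after Lemma 2).

import numpy as np

rng = np.random.default_rng(6)
m, n, beta, l = 3, 6, 1.0, 0.9
J = lbe(m, n, beta, rng).astype(complex)   # lbe: the sketch after Lemma 2
J[0, 0] += 1j * l                          # J_l = J + i*l*I_{1x1}
z = np.linalg.eigvals(J)
z = z[np.abs(z) > 1e-8]                    # keep the m+1 nonzero eigenvalues
print(np.angle(z).sum(), np.pi / 2)        # these agree: sum of Arg z_j = pi/2
print(z.imag.sum(), l)                     # and sum of Im z_j = l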

Proof

  (i)

    We can take the known joint distribution of the eigenvalues \(\lambda _j\)’s and the eigenweights \(w_j\)’s (see Lemma 4) and change the variables to \(z_j\)’s (by Proposition 2(i) it is one-to-one and onto (5.31), so the Jacobian (5.5) applies). Using (5.22), (5.17), (5.18), (5.24) (with \(k=n\)), we obtain the resulting distribution (5.32).

  (ii)

    By Proposition 2(ii), the map from the spectral measures of the form (2.12), (2.14) to the eigenvalues of \({\mathcal J}+il I_{1\times 1}\), namely \(\lambda _1,\ldots ,\lambda _m,w_1,\ldots ,w_m\mapsto z_1,\ldots ,z_{m+1}\), is one-to-one and onto (5.33) (if we impose some natural ordering on the \(\lambda _j\)’s and \(z_j\)’s; we will remove it at the end of the proof). Its Jacobian is different from (5.5) computed earlier. Following the notation of the proof of Lemma 6, let \(m(z)=\langle e_1, ({\mathcal J}-z)^{-1} e_1\rangle = -\frac{w_0}{z} + \sum _{j=1}^m \frac{w_j}{\lambda _j-z}\) and \(\sum _{j=0}^{m+1} \kappa _j z^j = \det (z-{\mathcal J}_{l})=\prod _{j=1}^{m+1} (z-z_j) \), where \(\kappa _{m+1}=1\). Because \(\det {\mathcal J}= 0\), we obtain \({{\mathrm{Re}}}\kappa _0 = 0\). Reasoning as in the proof of Lemma 6, we first obtain the value of the Jacobian

    $$\begin{aligned} \left| \det \frac{\partial \left( {{\mathrm{Re}}}\kappa _1,\ldots , {{\mathrm{Re}}}\kappa _{m},{{\mathrm{Im}}}\kappa _0,\ldots ,{{\mathrm{Im}}}\kappa _{m-1} \right) }{\partial \left( \lambda _1,\ldots ,\lambda _m,w_1,\ldots ,w_{m} \right) } \right| = l^m\prod _{j=1}^m \lambda _j \prod _{1\le j<k\le m} |\lambda _j-\lambda _k|^2. \end{aligned}$$
    (5.36)

    Since \({{\mathrm{Re}}}(z_1\ldots z_{m+1}) = (-1)^{m+1} {{\mathrm{Re}}}\kappa _0 = 0\) and \({{\mathrm{Im}}}\kappa _m = -\sum _{j=1}^{m+1} {{\mathrm{Im}}}z_j = -l\), we have that \(z_{m+1}\) is determined by \(z_1,\ldots ,z_m\). Therefore we have a one-to-one map \({\mathbb {R}}^{2m}\rightarrow {\mathbb {R}}^{2m}\) taking \(z_1,\ldots ,z_{m}\) to \({{\mathrm{Re}}}\kappa _1,\ldots , {{\mathrm{Re}}}\kappa _{m},{{\mathrm{Im}}}\kappa _0,\ldots ,{{\mathrm{Im}}}\kappa _{m-1}\). We need its Jacobian on the manifold \({{\mathrm{Re}}}(z_1\ldots z_{m+1}) =0, \sum _{j=1}^{m+1} {{\mathrm{Im}}}z_j = l\). If we have no restrictions on \(z_j\)’s or \(\kappa _j\)’s, then

    $$\begin{aligned} \prod _{1\le j<k\le m+1} |z_j-z_k|^2&= \left| \det \frac{\partial \left( {{\mathrm{Re}}}\kappa _0,{{\mathrm{Im}}}\kappa _0, \ldots ,{{\mathrm{Re}}}\kappa _{m},{{\mathrm{Im}}}\kappa _{m}\right) }{\partial \left( {{\mathrm{Re}}}z_1,{{\mathrm{Im}}}z_1, \ldots ,{{\mathrm{Re}}}z_{m+1},{{\mathrm{Im}}}z_{m+1} \right) } \right| \\&= \left| \det \frac{\partial \left( {{\mathrm{Re}}}\kappa _0,{{\mathrm{Im}}}\kappa _0, \ldots ,{{\mathrm{Re}}}\kappa _{m},{{\mathrm{Im}}}\kappa _{m}\right) }{\partial \left( {{\mathrm{Re}}}z_1,{{\mathrm{Im}}}z_1, \ldots ,{{\mathrm{Re}}}z_{m},{{\mathrm{Im}}}z_{m},{{\mathrm{Re}}}\kappa _0,{{\mathrm{Im}}}\kappa _m \right) } \right| \\&\quad \times \left| \det \frac{\partial \left( {{\mathrm{Re}}}z_1,{{\mathrm{Im}}}z_1, \ldots ,{{\mathrm{Re}}}z_{m},{{\mathrm{Im}}}z_{m},{{\mathrm{Re}}}\kappa _0,{{\mathrm{Im}}}\kappa _m \right) }{\partial \left( {{\mathrm{Re}}}z_1,{{\mathrm{Im}}}z_1, \ldots ,{{\mathrm{Re}}}z_{m+1},{{\mathrm{Im}}}z_{m+1} \right) } \right| \end{aligned}$$

    The last determinant is equal to \( |{{\mathrm{Re}}}(z_1\ldots z_m)|\), so

    $$\begin{aligned} \frac{ \prod _{1\le j<k\le m+1} |z_j-z_k|^2 }{ |{{\mathrm{Re}}}(z_1\ldots z_m)|}&= \left| \det \frac{\partial \left( {{\mathrm{Re}}}\kappa _0,{{\mathrm{Im}}}\kappa _0, \ldots ,{{\mathrm{Re}}}\kappa _{m},{{\mathrm{Im}}}\kappa _{m}\right) }{\partial \left( {{\mathrm{Re}}}z_1,{{\mathrm{Im}}}z_1, \ldots ,{{\mathrm{Re}}}z_{m},{{\mathrm{Im}}}z_{m},{{\mathrm{Re}}}\kappa _0,{{\mathrm{Im}}}\kappa _m \right) } \right| \\&= \left| \det \frac{\partial \left( {{\mathrm{Re}}}\kappa _1,\ldots , {{\mathrm{Re}}}\kappa _{m},{{\mathrm{Im}}}\kappa _0,\ldots ,{{\mathrm{Im}}}\kappa _{m-1}\right) }{\partial \left( {{\mathrm{Re}}}z_1,{{\mathrm{Im}}}z_1, \ldots ,{{\mathrm{Re}}}z_{m},{{\mathrm{Im}}}z_{m} \right) } \right| , \end{aligned}$$

    where in the last determinant we are assuming that \({{\mathrm{Re}}}\kappa _0 ={\text {const}}\) and \({{\mathrm{Im}}}\kappa _m = {\text {const}}\). Combining this with (5.36), we get that on \({{\mathrm{Re}}}\kappa _0 = 0,{{\mathrm{Im}}}\kappa _m=-l\),

    $$\begin{aligned} \left| \det \frac{\partial \left( {{\mathrm{Re}}}z_1,{{\mathrm{Im}}}z_1, \ldots ,{{\mathrm{Re}}}z_{m},{{\mathrm{Im}}}z_{m} \right) }{\partial \left( \lambda _1,\ldots ,\lambda _m,w_1,\ldots ,w_{m} \right) } \right| = l^{m} \big |{{\mathrm{Re}}}\prod _{j=1}^m z_j\big | \prod _{j=1}^m \lambda _j \frac{ \prod _{1\le j<k\le m} |\lambda _j-\lambda _k|^2}{\prod _{1\le j<k\le m+1} |z_j-z_k|^2}. \end{aligned}$$
    (5.37)

    Repeating the arguments from (5.21) and (5.22), we obtain

    $$\begin{aligned} w_0 = \frac{\prod _{j=1}^{m+1}|z_j|}{l \prod _{j=1}^m|\lambda _j|} ,\quad \text{ and } \prod _{j=1}^m w_j = \frac{1}{l^{m} 2^{m+1}} \frac{\prod _{j,k=1}^{m+1}|z_j-\bar{z}_k|}{\prod _{j=1}^{m+1} |z_j| \prod _{j=1}^m |\lambda _j| \prod _{j<k} |\lambda _j-\lambda _k|^2}. \end{aligned}$$

    Finally, just as in (i), we still have \(\sum _{j=1}^m \lambda _j = \sum _{j=1}^{m+1} {{\mathrm{Re}}}z_j\). Now, starting from the joint distribution of \(\lambda _1,\ldots ,\lambda _m,w_1,\ldots ,w_m\) (see Proposition 1), applying the Jacobian (5.37), and using these substitutions (note that the terms with \(\prod |\lambda _j|\) cancel out in the process), we arrive at the distribution (5.34). Note that the factor \((m+1)\) in (5.35) comes from removing the ordering of the \(z_j\)’s and \(\lambda _j\)’s (there are \((m+1)!\) permutations of \(\{z_j\}_{j=1}^{m+1}\), but only m! of \(\{\lambda _j\}_{j=1}^m\)).

\(\square \)