Edge distribution of thinned real eigenvalues in the real Ginibre ensemble

This paper is concerned with the explicit computation of the limiting distribution function of the largest real eigenvalue in the real Ginibre ensemble when each real eigenvalue has been removed independently with the same fixed probability. We show that the integrable structures recently discovered in \cite{BB} generalize from the real Ginibre ensemble to its thinned counterpart. Concretely, we express the aforementioned limiting distribution function as a convex combination of two simple Fredholm determinants and connect the same function to the inverse scattering theory of the Zakharov-Shabat system. As corollaries, we provide a Zakharov-Shabat evaluation of the ensemble's real eigenvalue generating function and obtain precise control over the tails of the limiting distribution function. The latter part includes the explicit computation of the usually difficult constant factors.


Introduction and statement of results
Let X ∈ R^{n×n}, n ∈ Z_{≥2}, be a matrix whose entries are independent standard normal random variables (mean 0, variance 1). In other words, X is a matrix drawn from the real Ginibre ensemble (GinOE) [31]. It is known, cf. [10,30,44], that the eigenvalues {z_j(X)}_{j=1}^n of X form a Pfaffian point process, a fact which allows one to compute gap probabilities in the GinOE as Fredholm determinants. Of particular interest for us is the following result about the absence of real eigenvalues in (t, ∞) ⊂ R, stated in (1.1), where χ_t is the operator of multiplication by the characteristic function χ_{[t,∞)} of the interval [t, ∞) ⊂ R and K_n the Hilbert-Schmidt integral operator (1.2) on L^2(R) ⊕ L^2(R).

Here, ρ multiplies by any differentiable, square-integrable weight function ρ(x) > 0 on R such that ρ^{-1}(x) ≡ 1/ρ(x) is polynomially bounded. Moreover, S_n and the remaining entries of (1.2) are integral operators on L^2(R) whose kernels are built from the exponential partial sum e_m(z) = Σ_{k=0}^m z^k/k!; S_n^* is the real adjoint of S_n, D acts by differentiation on the independent variable, and IS_n has kernel IS_n(x, y) := S_n(x, y) for even n ∈ Z_{≥2}, and IS_n(x, y) := S_n(x, y) + (2^{n/2} Γ(n/2))^{-1} ∫_0^y u^{n-1} e^{-u²/2} du for odd n ∈ Z_{≥3}.
Remark 1.2. The ordinary Fredholm determinant of K_n is ill-defined since not all of its entries vanish at ±∞ and since the operator is not trace class on L^2(R). This is a standard issue in random matrix theory, compare [47, Section VIII], [48, page 2199] or [20, pages 79-84], and it is commonly bypassed either through the use of regularized determinants or weighted Hilbert spaces. In the regularized 2-determinant (1.3), det is the ordinary Fredholm determinant, the block operators act on L^2(R) ⊕ L^2(R) and the trace in the exponent is taken in L^2(R). Note that (1.3) is slightly different from the Hilbert-Carleman determinant [43, Chapter 9] in that for trace class L we have det_2(1 + L) = det(1 + L), and for any two of the above block operators det_2 of their product factorizes as in (1.4). The finite n GinOE result (1.1) can be used to derive a limit theorem for the largest real eigenvalue of a real Ginibre matrix, which in turn quantifies the well-known Saturn effect. Indeed, in order to state the corresponding limit theorem for the largest real eigenvalue we first consider the following Riemann-Hilbert problem (RHP).
(3) As z → ∞, Y(z) admits the normalized expansion (1.6). This problem is uniquely solvable for all (x, γ) ∈ R × [0, 1], cf. [2, Theorem 3.9], and its solution enables us to state the limit theorem for the largest real eigenvalue as follows. Eigenvalues off the real axis are much simpler to deal with, see [42, Theorem 1.2].
Identities similar to (1.10) have been derived in [13,Proposition 1.1] for the limiting GOE and the limiting Gaussian symplectic ensemble (GSE) based on Painlevé representations for the underlying eigenvalue generating functions, cf. [18,Theorem 2.1]. Our proof of Lemma 1.7 will rely on the observation that thinned Pfaffian point processes are Pfaffian with an appropriately γ-modified kernel, see Section 2 below, which is similar to the proof for determinantal point processes given in [38,Appendix A]. The fact that a thinned process built from a determinantal point process is also determinantal was first observed in [8].
Once (1.10) is established we will then use this finite n result to derive the following limit theorem for the thinned real GinOE process, our second result. Set

γ̄ := γ(2 − γ) (1.11)

and note that γ̄ ∈ [0, 1] for γ ∈ [0, 1]. The limit is a convex combination of two simple Fredholm determinants. The special value γ = 1 reduces (1.13) to P(t; 1) = det(1 − χ_t S χ_t)_{L^2(R)}, which was first proven by the authors in [2, Theorem 1.11]. Note that the formula for P(t; 1) is the analogue of the Ferrari-Spohn formula [24] in the GOE, generalized to the thinned GOE by Forrester in [28, Corollary 1]. Comparing (1.13) to the last reference (modulo the typo correction ξ → ξ̄ in the determinants in the first line of [28, (1.22)] and after completing squares), we spot a striking resemblance between the thinned GOE and the thinned GinOE: up to the kernel replacement with the Airy function w = Ai(z), see [40, 9.2.2], the formulae are exactly the same.
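That the thinning parameter γ̄ = γ(2 − γ) stays in the unit interval is a one-line computation by completing the square, which we record for convenience:

```latex
\[
  \bar{\gamma} \;=\; \gamma(2-\gamma) \;=\; 1-(1-\gamma)^2 \;\in\; [0,1]
  \qquad \text{for all } \gamma \in [0,1],
\]
% with \bar{\gamma}=0 if and only if \gamma=0,
% and \bar{\gamma}=1 if and only if \gamma=1.
```

In particular γ ↦ γ̄ is an increasing bijection of [0, 1] onto itself.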

1.2.
Integrability of the thinned real GinOE process. In our third result we express the limiting distribution function P(t; γ) in (1.12) in terms of the solution of RHP 1.3, and thus in terms of the solution to an inverse scattering problem for the Zakharov-Shabat system.
Here are the details: the function y = y(x; γ) : R × [0, 1] → iR in (1.14) is given by y(x; γ) := 2iY_1^{12}(x, γ) in terms of (1.6) and (1.16). We emphasize that the structure in the right hand side of (1.14), (1.16) is completely analogous to the one in the limiting distribution function for the largest eigenvalue in the thinned GOE, cf. [13, (1.6)]. It is only the appearance of the solution to the Zakharov-Shabat inverse scattering problem which sets the thinned GinOE apart from the thinned GOE - at least as far as the largest real eigenvalue is concerned, compare Remark 1.5 for the special case γ = 1. We further emphasize this point with our fourth result, a simple corollary to Theorem 1.9: let E(m, (t, ∞)) denote the limiting (as n → ∞) probability that there are m ∈ Z_{≥0} edge scaled real eigenvalues µ_j(X) := z_j(X) − √n ∈ R of a matrix X ∈ GinOE in the interval (t, ∞) ⊂ R. Now define the associated generating function which, as a consequence of Theorem 1.9, can also be evaluated in terms of the solution of RHP 1.3 in (1.18), with λ̄ := 2λ − λ², the above function y(x; λ) = 2iY_1^{12}(x, λ) and the antiderivative (1.15).
Formula (1.18) is a simple consequence of the inclusion-exclusion principle, see Section 6 below. The generating function is of interest from the random matrix theory viewpoint as it allows one to compute the limiting distribution function F m (t) of the mth largest edge scaled real eigenvalue (m = 1 is the largest) in the GinOE in recursive form, see [5,Section 6.3.2] for the standard probabilistic argument used in the derivation of such recursions in random matrix theory.
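The recursive extraction of the probabilities E(m, (t, ∞)) from a generating function of the form Σ_m E(m)(1 − λ)^m can be sketched symbolically. The snippet below is a minimal illustration on a toy Poisson process; the function G and the mean mu are assumptions made only for the demonstration, not the GinOE generating function, which requires the Zakharov-Shabat data of (1.18).

```python
# Illustrative recovery of gap probabilities E(m) from a generating
# function G(lambda) = sum_m E(m) (1 - lambda)^m, tested on a toy
# Poisson process with mean mu (a stand-in for the demo only).
import sympy as sp

lam, mu = sp.symbols("lam mu", positive=True)

# For a Poisson process with mean mu points in the gap, independent
# thinning gives G(lambda) = exp(-lambda * mu).
G = sp.exp(-lam * mu)

def E(m):
    # E(m) = ((-1)^m / m!) * d^m G / d lam^m evaluated at lam = 1
    return sp.simplify((-1) ** m / sp.factorial(m)
                       * sp.diff(G, lam, m).subs(lam, 1))

# Sanity check: E(m) recovers the Poisson weights e^{-mu} mu^m / m!
for m in range(4):
    assert sp.simplify(E(m) - sp.exp(-mu) * mu ** m / sp.factorial(m)) == 0

# Distribution of the m-th largest point, recursively:
# F_m = P(at most m - 1 points in the gap) = sum_{k < m} E(k)
F2 = sp.simplify(sum(E(k) for k in range(2)))  # = e^{-mu} (1 + mu)
```

The same differentiation-at-λ = 1 step, applied to (1.18), produces the recursion for F_m(t) mentioned above.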
Remark 1.12. The analogue of (1.18) for the GOE was first derived in [18, Theorem 2.1] and then used for the computation of the limiting distribution function of the largest eigenvalue in the thinned GOE, see for example [13,Proposition 1.1]. For the GinOE, we will proceed in the reverse direction and first prove (1.14).
1.3. Tail expansions. One major advantage of the explicit formula (1.14) -besides the fact that it places the thinned GinOE on firm integrable systems ground -originates from its usefulness in the derivation of tail expansions. Indeed, once the Riemann-Hilbert problem connection is in place, it is somewhat straightforward to obtain asymptotic information for the distribution function P (t; γ) in (1.12) as t → ±∞. We summarize the relevant estimates in our fifth result below.
Expansion (1.19) was first derived in [30] for γ = 1. The leading order exponential decay of the left tail (1.20) appeared in [41, (1.11)] for γ = 1 and for γ ∈ [0, 1] in [27, (2.30)], albeit in somewhat implicit form. The notoriously difficult constant factor c_0(γ) in (1.20) was recently computed in [25, (3)] for γ = 1 using probabilistic arguments. In this paper we derive (1.20) for all γ ∈ [0, 1) by nonlinear steepest descent techniques. The evaluation of c_0(1) would require further analysis and we choose not to rederive c_0(1) in this paper. Nonetheless, we note that our result (1.20), (1.21) matches formally onto [41, (1.11)], [25, (3)], i.e. onto the t → −∞ expansion whose exponential rate involves ζ(3/2) and √(2π), since c_0(γ) in (1.21) satisfies the following property.

Lemma 1.14. The function c_0(γ) is continuous in γ ∈ [0, 1] and equals, at γ = 1, the constant computed in [25, (3)].

As is standard (for instance in invariant random matrix theory ensembles), the right tail (1.19) of the extreme value distribution P(t; γ) follows from elementary considerations and does not need RHP 1.3. The left tail, however, is much more subtle since the underlying operator becomes unbounded in trace norm as t → −∞, yet the distribution function P(t; γ) converges to zero. It is this well-known issue which requires the full use of RHP 1.3 and associated nonlinear steepest descent techniques for its asymptotic analysis, see Section 7 below.
Remark 1.15. The explicit computation of constant factors such as c_0(γ) in (1.21) is a well-known challenge in the asymptotic analysis of correlation and distribution functions in nonlinear mathematical physics. Without aiming for completeness, we mention the following contributions to the field: in the theory of exactly solvable lattice models, the works [6,7,11,12,46]; in classical invariant random matrix theory, the works [3,16,17,22,23,37]; and, most recently, on τ-function connection problems for Painlevé transcendents, the works [35,36].
1.4. Numerics. The Fredholm determinant formula (1.13) provides us with an efficient way to evaluate P(t; γ) numerically, cf. [9]. Indeed, in order to showcase the applicability of (1.13) we now provide the following numerical evaluations for the limiting distribution of the largest thinned real eigenvalue: First, Table 1 shows a few centralized moments for varying γ. Second, probability density and distribution function plots for varying γ ∈ [0, 1] are shown in Figure 1. Third, we compare our asymptotic expansions (1.19) and (1.20) to the numerical results obtained from (1.13) in Figures 2 and 3 below.

1.5. Methodology and outline of paper. The remainder of the paper is organized as follows. We prove Lemma 1.7 in Section 2 using a simple probabilistic argument. Afterwards we use (1.10) and carefully simplify the regularized Fredholm determinant in order to arrive at a finite n formula which is amenable to asymptotics. Our approach is somewhat similar to the ones carried out in [18,42]; however, two issues arise along the way. First, the absence of Christoffel-Darboux structures forces us to rely on the Fourier techniques used in [2, Sections 2 and 3] in the derivation of (1.14). Second, unlike in the invariant ensembles, our computations depend heavily on the parity of n. We first work out the necessary details for even n in Section 3 and afterwards develop a comparison argument to treat all odd n, see Subsection 3.3. The content of Subsection 3.3 seemingly marks the first time that the extreme value statistics in the GinOE for odd n have been computed rigorously. Even for γ = 1, typos in [42, Section 4.2] had been pointed out in [41, Appendix B], but these were not fixed until now. After several initial steps in Section 3 we complete the proof of Theorem 1.8 in Section 4. Once Theorem 1.8 has been derived, our proof of Theorem 1.9 in Section 5 is rather short, making essential use of the inverse scattering theory connection worked out in our previous paper [2].
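The quadrature approach of [9] referenced in Section 1.4 can be sketched in a few lines: discretize the kernel at Gauss-Legendre nodes, symmetrize with the square roots of the weights, and take an ordinary matrix determinant. The kernel below is an illustrative Gaussian convolution stand-in, not the paper's limiting kernel T(x, y), and the truncation length L is an arbitrary choice.

```python
# Minimal sketch of Bornemann's quadrature method for Fredholm
# determinants det(1 - gamma * K) restricted to a truncated
# interval (t, t + L). Illustrative kernel only.
import numpy as np

def fredholm_det(kernel, t, gamma, L=10.0, m=60):
    # Gauss-Legendre nodes/weights mapped from [-1, 1] to [t, t + L]
    x, w = np.polynomial.legendre.leggauss(m)
    x = t + 0.5 * L * (x + 1.0)
    w = 0.5 * L * w
    sw = np.sqrt(w)
    # Symmetrized Nystrom matrix sqrt(w_i) K(x_i, x_j) sqrt(w_j)
    A = sw[:, None] * kernel(x[:, None], x[None, :]) * sw[None, :]
    return np.linalg.det(np.eye(m) - gamma * A)

# Stand-in kernel: positive definite, operator norm at most 1
K = lambda x, y: np.exp(-0.5 * (x - y) ** 2) / np.sqrt(2.0 * np.pi)

d = fredholm_det(K, t=2.0, gamma=0.5)
assert 0.0 < d < 1.0                                   # probability-like
assert abs(fredholm_det(K, 2.0, 0.0) - 1.0) < 1e-12    # gamma = 0 gives 1
assert fredholm_det(K, 2.0, 0.2) > fredholm_det(K, 2.0, 0.8)  # decreasing in gamma
```

For smooth kernels this Nystrom discretization converges spectrally fast in m, which is what makes tables and plots like those in Section 1.4 cheap to produce.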
This is followed by our short proof of Corollary 1.11 for the eigenvalue generating function in Section 6. Afterwards we prove Theorem 1.13 in Section 7. In fact, the asymptotic analysis is split into two parts: one part deals with a total integral of y = y(x; γ) and a second part computes the constant factor in the asymptotic expansion of the determinant det(1 − γ̄χ_t T χ_t)_{L^2(R)}.
Unlike for invariant matrix ensembles, compare the discussion in [14, pages 492-493], we are here able to efficiently employ the γ-derivative method in the computation of the constant factor without having a differential equation in the spectral variable. Indeed, since our nonlinear steepest descent analysis in Appendix B does not use any local model functions, the cumbersome double integration in the γ-derivative method becomes manageable. This feature is comparable with Deift's proof of the strong Szegő limit theorem in [19, Example 3] and the details of our analysis can be found in Section 7. The final two sections of the paper, Appendices A and B, prove two curious integral identities used in the proof of Theorem 1.8 and present a streamlined version of the nonlinear steepest descent analysis of [2, Section 5] which is crucial in our proof of Theorem 1.13.

Proof of Lemma 1.7
It is known from [39] that the eigenvalues {z_j(X)}_{j=1}^n ⊂ C of X ∈ R^{n×n} drawn from the GinOE are distributed according to a random point process whose correlation functions are computable as Pfaffians, cf. [10,30,44]. In particular, the real eigenvalues form a Pfaffian process whose correlation functions are given by Pfaffians of the skew-symmetric 2×2 matrix kernel K_n^{R,R}(x, y).
Thus, if ρ_γ^ℓ denotes the ℓ-th correlation function in the thinned real GinOE process, we find, with 1 ≤ ℓ ≤ m_{γ,n},

ρ_γ^ℓ(w_1, ..., w_ℓ) = lim_{∆w_i→0} P(one thinned real GinOE eigenvalue in each (w_i, w_i + ∆w_i)) / (∆w_1 · ... · ∆w_ℓ)
= lim_{∆w_i→0} P(one real GinOE eigenvalue in each (w_i, w_i + ∆w_i) and none of them is discarded) / (∆w_1 · ... · ∆w_ℓ),

since each eigenvalue is removed independently with probability 1 − γ. In short, ρ_γ^ℓ = γ^ℓ ρ^ℓ, which shows that the thinned Pfaffian point process is also a Pfaffian process and its kernel is simply given by γK_n^{R,R}(x, y). Equipped with this insight one now repeats the computations in [42, page 1630] and arrives at (1.10).
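The thinning operation itself is elementary and can be simulated directly. The following sketch (matrix size, tolerance and sample counts are arbitrary choices) illustrates that the mean number of surviving real eigenvalues scales by the factor γ, in line with ρ_γ^ℓ = γ^ℓ ρ^ℓ for ℓ = 1.

```python
# Monte Carlo illustration of thinning a GinOE spectrum: each real
# eigenvalue is kept independently with probability gamma.
import numpy as np

rng = np.random.default_rng(0)

def thinned_real_eigs(n, gamma):
    X = rng.standard_normal((n, n))                 # GinOE matrix
    z = np.linalg.eigvals(X)
    real = np.sort(z[np.abs(z.imag) < 1e-10].real)  # real eigenvalues
    keep = rng.random(real.size) < gamma            # independent thinning
    return real[keep]

# The mean number of real eigenvalues grows like sqrt(2n/pi); thinning
# multiplies the one-point density, hence the mean count, by gamma.
thinned = [thinned_real_eigs(100, 0.5).size for _ in range(200)]
full = [thinned_real_eigs(100, 1.0).size for _ in range(200)]
ratio = np.mean(thinned) / np.mean(full)
assert 0.35 < ratio < 0.65
```

The maximum of the surviving sample, shifted by √n, is exactly the statistic whose limit law (1.12) describes.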
We will first simplify F_n for n even and afterwards take the limit as n → ∞ through even n. Once done, we then compare the odd n case with the even n case and prove existence of the limit (1.12) altogether.
3.1. Finite even n calculations. We consider F_{2n}. Our overall approach follows [42, page 1640] closely, keeping track throughout of the γ-modifications due to (1.10). First, the kernel χ_t K_{2n} χ_t can be factorized as in (3.1), and by using (1.5) we can move the factor on the left in (3.1) to the right, so F_{2n}(t, γ) equals the regularized 2-determinant of the operator with kernel (3.2). Next we observe that the traces of the powers 2, 3, ... of the last operator match the corresponding traces of a second operator. Hence, by the Plemelj-Smithies formula for det_2, see [43, Theorem 9.3], the two regularized 2-determinants agree. Factorizing the underlying kernel we then obtain a triangular factorization, and since both triangular factors are of the form identity plus block operator as in Remark 1.2, we are allowed to use (1.4). In fact the regularized 2-determinant of those triangular factors equals one, so we have just shown that the original determinant in (1.1) for even n simplifies to (3.3), and as our upcoming computations will show (see in particular (3.8) below) the operator S_{2n}χ_t D − S_{2n}^*χ_t − γS_{2n}^*χ_t χ_t D is of finite rank, i.e. the regularized 2-determinant in (3.3) is an ordinary Fredholm determinant by Remark 1.2 and the conjugation with ρ is now redundant. We have thus arrived at the following replacement of the equation right above [42, (4.6)], stated in (3.4). In order to simplify (3.4) further we now record the following identity.

Proof. The stated identity follows easily by induction on n ∈ Z_{≥1}, using only the elementary relation between consecutive exponential partial sums.

Inserting (3.5) into (3.4) we find the reduced formula below. We write α ⊗ β for a general rank one integral operator on L^2(R) with kernel (α ⊗ β)(x, y) = α(x)β(y).
Noting Dχ_t = −χ_t and applying the commutator identity, cf. [47, (16)], we simplify further. Next, from the definition of S_n in Proposition 1.1, we may write, see [42, (4.7)], S_n^* = T_n + φ_n ⊗ ψ_n, where T_n(x, y) is a symmetric kernel.

Lemma 3.2. Given t ≥ 0 and n ∈ Z_{≥2}, the trace class operator T_n on L^2(t, ∞) satisfies 0 ≤ T_n ≤ 1, and 1 − γT_n is invertible on L^2(t, ∞) for every γ ∈ [0, 1].

Proof. For every f ∈ L^2(t, ∞) the quadratic form ⟨f, T_n f⟩ is non-negative, which implies non-negativity of T_n. For the upper bound we apply Schur's test and conclude by self-adjointness of T_n that ‖T_n‖ ≤ 1 for any n ∈ Z_{≥2}. Next, using that ‖T_n‖ ≤ 1, the invertibility of 1 − γT_n on L^2(t, ∞) follows readily from the underlying Neumann series provided γ ∈ [0, 1). The case γ = 1 has been addressed in [42, Lemma 4.2]. This concludes our proof.
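The Neumann-series step in the proof of Lemma 3.2 can be mimicked in finite dimensions. The matrix below is an arbitrary positive semi-definite stand-in for T_n, normalized so that its operator norm does not exceed 1.

```python
# Finite-dimensional illustration of the Neumann series argument:
# if 0 <= T <= 1 and gamma in [0, 1), then (1 - gamma*T)^{-1}
# equals the convergent series sum_k (gamma*T)^k.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
T = M @ M.T                         # positive semi-definite stand-in
T /= np.linalg.norm(T, 2)           # spectral norm now equal to 1

gamma = 0.7
inv = np.linalg.inv(np.eye(8) - gamma * T)
series = sum(np.linalg.matrix_power(gamma * T, k) for k in range(200))
assert np.allclose(inv, series, atol=1e-8)
```

Since ‖γT‖ = γ < 1, the series tail is bounded by γ^N/(1 − γ), which is why the truncation at 200 terms already matches the direct inverse to machine-level accuracy.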
In the following we will use the result of Lemma 3.2 for the operator γ̄χ_t T_{2n} χ_t, which acts on L^2(R). Inserting the operator decomposition S_n^* = T_n + φ_n ⊗ ψ_n into (3.8) and using general rank one manipulations, we arrive at (3.10). Here, ⟨·,·⟩ is the standard L^2(R) inner product. We can then rewrite (3.10) by Sylvester's identity [32, Chapter IV, (5.9)]. Next, using that 1 − γ̄χ_t T_n χ_t is invertible on L^2(R) for t ≥ 0 by Lemma 3.2, we factorize F_{2n}(t, γ) as in (3.12). Here, α_j, β_k denote six explicit functions. By general theory, cf. [32, Chapter I], the remaining determinant is finite-dimensional and built from the L^2(R) inner product ⟨·,·⟩. We conclude our finite n calculation for even n with the following further algebraic simplifications.
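Sylvester's identity det(1 + AB) = det(1 + BA) and the rank one formula det(1 + α ⊗ β) = 1 + ⟨β, α⟩, which drive the reduction to a finite-dimensional determinant here, are easily checked numerically; the matrices below are random stand-ins.

```python
# Numerical check of Sylvester's determinant identity and its
# rank-one special case (the matrix determinant lemma).
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))
B = rng.standard_normal((3, 6))

# det(1 + AB) on a 6-dim space equals det(1 + BA) on a 3-dim space
lhs = np.linalg.det(np.eye(6) + A @ B)
rhs = np.linalg.det(np.eye(3) + B @ A)
assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))

# Rank-one case: the operator alpha (x) beta has matrix alpha beta^T
alpha = rng.standard_normal(6)
beta = rng.standard_normal(6)
d = np.linalg.det(np.eye(6) + np.outer(alpha, beta))
assert abs(d - (1.0 + beta @ alpha)) < 1e-9
```

This is precisely why a Fredholm determinant of "identity plus finite rank" collapses to a determinant of fixed size, here 3 × 3.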
Next, with R_n := γ̄T_n χ_t (1 − γ̄χ_t T_n χ_t)^{-1}, n ∈ Z_{≥1}, which is well-defined as an operator on L^2(R) by Lemma 3.2 for any t ≥ 0, we record Lemma 3.3 below.

Proof. We use self-adjointness of the operator T_n and write R_n(x, t) for (cf. [47, page 732]) lim_{y→t, y>t} R_n(x, y).
With Lemma 3.3 in place we finally evaluate the Fredholm determinant in (3.13). Noting that the terms c_n, d_n cancel out due to multilinearity of the finite-dimensional determinant, we obtain (3.14) in terms of the three inner products ⟨α_1, β_k⟩, k = 1, 2, 3, and the four integrals I_j = I_j(t, γ, 2n) in (3.15). Identities (3.12) and (3.14) conclude our calculations for finite n, provided n is even.
3.2. The limit n → ∞, n even. In order to pass to the large n limit we first shift the independent variable t according to t → t + √(2n), compare the left hand side of (1.7). Under this scaling the leading determinant in (3.12) transforms accordingly. Moreover, the entries in the 3×3 determinant (3.14) transform in a similar fashion; the rescaled quantities involve φ̂_n(x) := φ_n(x + √n) and ψ̂_n(x) := ψ_n(x + √n). The remaining four integrals I_k, see (3.15), are treated the same way and every occurrence of R_n in them gets replaced by R̂_n as in (3.16). At this point we collect a sequence of technical limits.
Lemma 3.5. Uniformly in x chosen from compact subsets of R, and for any fixed s ∈ R with p ∈ {1, 2}, the limits (3.17) and (3.18) hold.

Proof. On compact subsets of R ∋ x the stated expansion for ψ̂_n holds, which yields the second limit in (3.17). Since the corresponding bound also holds for any x > 0 and n ∈ Z_{≥2}, the dominated convergence theorem yields the first L^p(s, ∞) convergence in (3.18). For the limits involving φ̂_n, we note that as n → ∞, uniformly in x ∈ R, the kernel is approximated in terms of the normalized incomplete gamma function w = P(a, z), cf. [40, 8.2.4]. But on compact subsets of R ∋ x, see [40, 8.11.10], the corresponding expansion holds, which yields the first limit in (3.17). For the outstanding limit in (3.18) we use that as n → ∞, uniformly in x ∈ R, an analogous approximation holds with w = Γ(a, z) the incomplete gamma function, see [40, 8.2.2]. Combining the resulting bounds for x > 0 and a ≥ 1, we obtain a uniform estimate for any x > 0 and n ∈ Z_{≥2}. Using also the corresponding expansion on compact subsets of R ∋ x, the second limit in (3.18) follows from (3.19), (3.20), (3.21) and the dominated convergence theorem. This completes our proof.
The next limits concern the large n-behavior of the kernel function T n (x, y) and its total integrals. Recall the kernel T (x, y) defined in (1.8).
Finally we state the central convergence result for the operator χ_t T̂_n χ_t on L^2(R).

Lemma 3.7. Given t ∈ R, the operator χ_t T̂_n χ_t converges in trace norm on L^2(R), and in L^p(R) operator norm with p ∈ {1, 2, ∞}, to the operator χ_t T χ_t. Additionally, for any γ ∈ [0, 1], the resolvent convergence (3.24) holds.

Proof. The convergences have been proven for γ = 1 in [42, Lemma 4.2]. The extension to γ ∈ [0, 1) follows from the Neumann series expansion of the resolvents in (3.24), compare Lemma 3.2 and [2, Lemma 2.1].
We now apply Lemmas 3.5, 3.6 and 3.7 in the large n analysis of the Fredholm determinants back in (3.12), after the rescaling t → t + √(2n). First the leading factor:

Lemma 3.8. For any γ ∈ [0, 1] and t ∈ R, the leading determinant in (3.12) converges to det(1 − γ̄χ_t T χ_t)_{L^2(R)} as n → ∞.

Proof. We know from Lemma 3.2 that T_n is trace class on L^2(t, ∞) and the same applies to T (since it is a product of Hilbert-Schmidt operators). Thus, with [32, Chapter IV, (5.14)], the difference of the two determinants is controlled by the trace norm of the operator difference. But that difference converges to zero in trace norm by Lemma 3.7, and ‖χ_t T̂_{2n} χ_t‖_1 remains bounded by the same result. This completes our proof.
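The determinant bound from [32, Chapter IV, (5.14)] invoked here, namely |det(1 + A) − det(1 + B)| ≤ ‖A − B‖_1 exp(‖A‖_1 + ‖B‖_1 + 1), admits an immediate finite-dimensional check; matrix size and perturbation scale below are arbitrary.

```python
# Numerical check of the trace-norm Lipschitz bound for determinants:
# |det(1+A) - det(1+B)| <= ||A - B||_1 * exp(||A||_1 + ||B||_1 + 1).
import numpy as np

def trace_norm(A):
    # trace (nuclear) norm = sum of singular values
    return np.linalg.svd(A, compute_uv=False).sum()

rng = np.random.default_rng(3)
A = 0.3 * rng.standard_normal((5, 5))
B = A + 0.01 * rng.standard_normal((5, 5))

lhs = abs(np.linalg.det(np.eye(5) + A) - np.linalg.det(np.eye(5) + B))
rhs = trace_norm(A - B) * np.exp(trace_norm(A) + trace_norm(B) + 1.0)
assert lhs <= rhs
```

The bound is what converts the trace-norm convergence of Lemma 3.7 directly into convergence of the determinants, with no extra compactness argument needed.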
Next we move on to the L 2 (R) inner products which appear in (3.14).
Lemma 3.9. For any γ ∈ [0, 1] and t ∈ R, the three inner products in (3.14) converge to the stated limits.

Proof. In the first inner product we decompose φ̂_{2n} and use that, uniformly in x ∈ R, the stated bound holds. But from Lemma 3.5 and Lemma 3.7 we also know that the corresponding differences converge to zero for p ∈ {1, 2}, so that, with (3.18) and Hölder's inequality, we obtain the claim back in (3.25). For the second inner product we write instead (using χ_{[t,∞)}(t) = 1) the analogous decomposition and recall the previous decomposition of φ̂_{2n} used in (3.25). Hence, with Lemma 3.7 and (3.17), (3.23), the claim follows from Hölder's inequality. The derivation of the third inner product is completely analogous.
At this point we are left with the computation of the large n limits of the rescaled integrals I_k. Let R(x, y) denote the kernel of the resolvent R = γ̄Tχ_t(1 − γ̄χ_t T χ_t)^{-1} on L^2(R).
Proof. We begin with the kernel function identity (cf. [47, page 748]) which, upon insertion into the integrand of I_1(t + √(2n), γ, 2n), leads to four integrals, see (3.26). Applying Lemma 3.6 we conclude the limits of the first two integrals. For the third integral in (3.26) we write it as an L^2(R) inner product and note that each entry of that inner product converges to its formal limit in L^2(R) sense, cf. Lemma 3.7, equation (3.23) and the workings in [42, page 1643]. The outstanding fourth integral is treated similarly, the difference being the first entry in the corresponding L^2(R) inner product. Since both terms converge to their formal limits in L^2(R) sense, compare our reasoning above and (3.23), we find altogether the desired formula for I_1, given that R = γ̄Tχ_t(1 − γ̄χ_t T χ_t)^{-1}. The derivation of the limit for I_2 is completely analogous and in fact simpler since no integrals over (−∞, t) occur. Moving ahead, the limit evaluation of I_3(t + √(2n), γ, 2n) also involves four integrals. The third of these converges to its formal limit as n → ∞, compare our reasoning for I_1 and (3.18). The same is true for the remaining fourth integral and we obtain altogether, as n → ∞, the claimed identity. The derivation for I_4 is again similar and does not use any integrals along (−∞, t). This completes our proof.
With Lemmas 3.8, 3.9 and 3.10 in place, we now obtain the following result.
Proposition 3.11. As n → ∞, uniformly for t ∈ R chosen from compact subsets and any γ ∈ [0, 1], the limit (3.28) holds, where u, v, p, q, r, w are six explicit functions of (t, γ) ∈ R × [0, 1], see (3.29).

3.3. The limit (1.12) for odd n. In this subsection we will compute the limit of F_{2n+1}(t + √(2n+1), γ) using a comparison argument. Precisely, we show how the computations in Subsection 3.1 have to be modified in order to account for odd n ∈ Z_{≥3}. These additional manipulations are necessary given the different structure of the operator IS_n in Proposition 1.1 for odd n. The details are as follows. We first relate K_{2n+1} to K_{2n}:

Proposition 3.12. For any n ∈ Z_{≥3}, identity (3.30) holds,

and thus in turn (3.31).

Proof. Identity (3.32), relating S_n(x, y) to S_{n−1}(x, y), appears in [42, page 1628] and can be proven by induction on n ∈ Z_{≥3} using the original definition of S_n(x, y) given in Proposition 1.1. Once (3.32) is known we find (3.30) immediately by comparison with (3.9). On the other hand, a direct computation gives (3.31) after comparison with the kernel of IS_{2n+1} written in Proposition 1.1.
Inserting (3.30) and (3.31) into formula (1.2) for K_{2n+1}, we find that K_{2n+1} = K_{2n} + E_{2n}, where the operator E_n has an explicit finite rank kernel. Note that χ_t E_n χ_t is finite rank on L^2(R) ⊕ L^2(R), so in particular trace class. Also, by the bound valid for any x ∈ R, Lemma 3.5 and the triangle inequality yield convergence to zero in trace norm, see (3.34). Moreover, the relevant operator is invertible for sufficiently large n and any (t, γ) ∈ R × [0, 1] by the workings of Section 3 and Remark 1.2. Hence we use (1.4) and obtain for n ≥ n_0 a factorization in which the second (finite rank) determinant converges to one as n → ∞ because of (3.34). This shows that the even and odd index determinants share the same limit for any (t, γ) ∈ R × [0, 1]. In fact, the above convergence is uniform in (t, γ) ∈ R × [0, 1] chosen from compact subsets, and since F_{2n+1}(t + √(2n), γ) is at least differentiable in t ∈ R (this can be seen directly from (1.1) by scaling t into the kernel and then using the logic behind [1, Lemma 2.20]), we find (3.36) on compact subsets of (t, γ) ∈ R × [0, 1]. Hence, combining (3.35) with (3.36) we arrive at the analogue of (3.28) for odd n.
Finally, merging Propositions 3.11 and 3.13 we have now established the existence of the limit (1.12). This completes the current section.

Proof of Theorem 1.8 -final steps
In order to prove the outstanding representation (1.13) we now derive a new representation for the 3×3 determinant in (3.28). To begin with, we list four algebraic relations between the functions u, v, p, q, r and w in Corollary 4.3 below. These follow from the next lemma. Recall R = (1 − γ̄Tχ_t)^{-1} − 1 on L^2(R) and the definitions of g and G in (1.9).
Proof. The first equality follows from [2, (4.9)] with the formal replacements γ → γ̄, G_γ → G and g_γ → g, see [2, (4.3)]. The second and third are a consequence of (A.3) below: a direct computation yields the second integral identity, and for the third we simply choose I = (−∞, 0) in (A.3). For the fourth we use (A.5), self-adjointness of T and ∫_{−∞}^∞ g(x) dx = 1. This completes our proof.
Proof. These follow from inserting the integral identities of Lemma 4.1 into the definitions of u, v, p, q, r and w.
Once we substitute (4.1) into the 3×3 determinant (3.28) we are left with two unknowns, p and q say, and the determinant simplifies accordingly. Next, we define the two functions τ_k = τ_k(t, γ) for (t, γ) ∈ R × [0, 1], k = 1, 2, note that (4.4) holds by (3.29), and now set out to simplify τ_k. First, we use the second and fourth identity in Lemma 4.1. Second, making essential use of the regularization scheme for Fredholm determinants and inner product manipulations in [47, Section VIII], we have the following two analogues of [26, (4.18), (4.21)], which we will use with a = ±√γ̄.
Proof. As outlined in [2, (6.9)], identity (4.6) is equivalent to a determinant identity, and thus to the derivative identity (4.7), where we do not indicate the underlying Hilbert spaces for compact notation. In proving (4.7) we use the following straightforward a-generalization of [2, (6.11)] for the t-derivative, where ∆_0 denotes multiplication by δ_0(x) and D (= d/dx) differentiation. We have thus derived (4.7). This completes our proof.

Proof of Theorem 1.9
Our proof begins with the following analogue of [26, (4.12)].
1 [2, RHP 3.8] is a rescaled version of our RHP 1.3 , hence the independent variable x 2 occurs in the integrand of µ.

Total integral computation.
We first address the computation of D_2(γ). Since this requires the evaluation of a total integral, we follow the approach developed in [4], our net result being an analogue of [4, (28)]. Recall that Y(z) = Y(z; x, γ) solves RHP 1.3.
In order to apply (7.16) we use the explicit formula (B.1) for the kernel R(λ, µ) (see [34] for its regularity properties) and combine it with the asymptotic results of Appendix B; afterwards we integrate in (7.16). In more detail, once the t → −∞ asymptotic expansion of the kernel R(λ, λ) is known, uniformly for any λ ∈ Ω and fixed γ ∈ [0, 1), we simply integrate it, arriving at (7.18) for (λ, µ) ∈ Ω × Ω with the unimodular factors indicated. Inserting (7.18) into the right hand side of (7.16) we obtain after a short computation the expansion (7.19). Given the particular shape of f(λ) and g(λ) in (7.13), the first integral in (7.19) evaluates to zero. For the fourth integral we record the following estimate.
The remaining two integrals in (7.19) yield non-trivial contributions. We first state a lemma which is used in their evaluation.
Proof. Integration by parts in the variable s, as well as in the variable λ, reduces the left hand side to standard Gaussian integrals, and evaluating these proves (7.21).
We now compute the two outstanding integrals in (7.19).

Lemma 7.7. For every γ ∈ [0, 1), the second integral in (7.19) admits the evaluation stated below.

Proof. Inserting the formulae for f(λ), g(λ) and A_2(λ) we find a sum of integrals. Integrating by parts, collapsing R + iω to R and using the oddness of a part of the integrand, we see that the first remaining integral yields (i/(4π)) ∫_{R+iω} e^{−λ²/4 + itλ} (d/dλ)(·) dλ. In the second (double) integral we use the geometric series, the power series expansion ln(1 − z) = −Σ_{n=1}^∞ z^n/n, |z| < 1, for h(s; 1, γ), and Li_{−1/2}(0) = Li_{1/2}(0) = 0. This completes our proof.

Lemma 7.8. For every γ ∈ [0, 1), the third integral in (7.19) admits the evaluation stated below.

Proof. Using the above formula for A_1(λ) and (7.13) we find at once a sum of two integrals. Here, the first remaining integral was already computed in the proof of Lemma 7.7. For the second one, we use the Plemelj-Sokhotski formula (7.24) and note that part of the integrand integrates to zero by oddness. Thus, integrating by parts and adding (7.25), we arrive at an intermediate expression. Now change the contour R ∋ λ to R + iω by Cauchy's theorem while using the analytic continuation (7.24) for the second round bracket. The result equals, after another integration by parts in the last step, the second (double) integral in the proof of Lemma 7.7, and we therefore find (7.23) altogether.
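The series-plus-Gaussian-integration mechanism in these two proofs is the classical route by which polylogarithms enter. As a sanity check, the identity ∫_R ln(1 − z e^{−x²}) dx = −√π Li_{3/2}(z) for |z| < 1 (whose z → 1 limit produces a ζ(3/2) value) can be verified numerically; the grid size, the truncation order and the value of z below are arbitrary choices.

```python
# Numerical check: termwise Gaussian integration of
#   ln(1 - z e^{-x^2}) = -sum_{n>=1} z^n e^{-n x^2} / n
# gives -sqrt(pi) * Li_{3/2}(z), since int_R e^{-n x^2} dx = sqrt(pi/n).
import numpy as np

z = 0.4
x = np.linspace(-8.0, 8.0, 400001)
dx = x[1] - x[0]
lhs = np.sum(np.log(1.0 - z * np.exp(-x ** 2))) * dx   # Riemann sum

n = np.arange(1, 400)
rhs = -np.sqrt(np.pi) * np.sum(z ** n / n ** 1.5)      # truncated Li_{3/2}(z)
assert abs(lhs - rhs) < 1e-6
```

Interchanging the s- and λ-integrations in (7.19) with such series expansions is exactly what produces the Li_{±1/2} values appearing above.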
Proposition 7.9. There exists c > 0 such that for every fixed γ ∈ [0, 1) we can find t_0 = t_0(γ) > 0 so that (7.26) holds for (−t) ≥ t_0, where the error term r(t, γ) is differentiable with respect to γ and satisfies the stated bound.

Proof. We have, as t → −∞, the asserted asymptotic expansion uniformly in γ ∈ [0, 1). Integrating this expansion in (7.17) from 0 to γ < 1 yields immediately the two leading terms in (7.26), and the error term is controlled by estimating the remaining integral from 0 to γ. This completes our proof.
Combining our results, we finally arrive at (1.20).

Appendix A. Integral identities
Given two continuous functions φ, ψ : R → R which decay exponentially fast at +∞, we define the kernel K(x, y) in (A.1) and the associated integral operator K on L^2(R). We denote by f_y(x) := f(x + y) the horizontal shift of a function f by −y.
Lemma A.1. Let I ⊂ R be an interval and Φ the associated function introduced in the proof below. Then for any y, t ∈ R and k ∈ Z_{≥1}, identity (A.2) holds, where K* is the real adjoint of K.
Proof. We proceed by induction on k ∈ Z_{≥1}. For k = 1, the left hand side in (A.2) equals, by Fubini's theorem and the definition of Φ(x), the right hand side in (A.2). Now assume (A.2) holds true for general k. Using Fubini's theorem and the induction hypothesis, followed by another application of Fubini's theorem and the fact that K*(x, y) = K(y, x), we arrive at the right hand side of (A.2) with k replaced by k + 1, as desired. This concludes our proof.
Lemma A.1 implies the following integral identity.
Corollary A.2. For any t ∈ R and k ∈ Z_{≥1}, identity (A.4) holds.

Proof. By (A.2) (with y = t) and χ_{[t,∞)}(t) = 1, the left hand side can be rewritten. However, from [2, Proposition B.1] we have the corresponding identity for K*, since in the kernel of K* the functions φ and ψ are simply interchanged, compare (A.1). But ψ_0 ≡ ψ and for any function f we have f_{u−t}(t) = f(u) by definition of the shift. Thus, altogether, the claim follows.
The special case y = 0 in (A.4) will be useful for us; we summarize it below.
(3) As z → ∞, we enforce normalization to the identity. This leads us to the problem summarized below.