1 Introduction

Random matrix theory has proven over time to be a powerful modern tool in mathematics and physics. With widespread applications in areas such as engineering, statistical mechanics, probability and number theory, to mention only a few, its theory is rich and has been under intense development over the past thirty or so years. In a sense, much of the success of random matrix theory is due to its exact solvability, or integrability, which has turned random matrix models into touchstones for predicting and confirming complex phenomena in nature.

One of the most celebrated results in random matrix theory is the convergence of the fluctuations of the largest eigenvalue towards the Tracy–Widom law \(F_{\textrm{GUE}}\). This result was first obtained for matrices from the Gaussian Unitary Ensemble (GUE) by Tracy and Widom [71], who also showed that \(F_{\textrm{GUE}}\) is expressible in terms of a particular solution to the Painlevé II equation (PII for short). Their findings sparked numerous advances in mathematics and physics, which began with extensions to several other matrix models but soon spread well beyond the realm of random matrices.

Starting with the celebrated Baik–Deift–Johansson Theorem [5], the distribution \(F_{\textrm{GUE}}\) has been identified as the limiting one-point distribution for the fluctuations of a wide range of different probabilistic models. One of the most ubiquitous of such models is the KPZ equation, introduced in the 1980s by Kardar, Parisi and Zhang. Despite numerous developments surrounding it, solving it exactly remained an outstanding open problem until the early 2010s, when four different groups of researchers [2, 29, 49, 70] independently found exact formulas for its so-called narrow wedge solution. Amongst these works, Amir et al. [2] found the one-point distribution for the height function of the KPZ solution, showing that it relates to a distribution found a little earlier by Johansson in a grand canonical Gaussian-type matrix model [57], and further characterizing it in terms of the integro-differential Painlevé II equation. The latter is an extension of the PII differential equation, and as an almost immediate consequence the authors of [2] also obtained that this one-point distribution converges, in the large time limit, to \(F_{\textrm{GUE}}\) itself.

Much inspired by [5, 56] and later by [2, 29, 49, 70], it has been realized that several stochastic growth models share an inherent connection with statistics of integrable point processes, formally established as an identity between a transformation of the growth model and statistics of the point process. To our knowledge, the very first instance of such a relation appears in the work of Borodin [16], which connects the higher spin vertex model with Macdonald measures. By taking appropriate limits of this connection, it was later found that the KPZ equation is connected to the Airy\(_2\) point process [20], that the ASEP is related to the discrete Laguerre ensemble, and that the stochastic six vertex model is connected in the same way to the Meixner ensemble, or yet to the Krawtchouk ensemble [21].

As a common feature of these connections, the underlying correspondences establish that the so-called q-Laplace transform of the associated height function coincides with a certain multiplicative statistics of the point process. The latter, in turn, is exactly solvable, and it is widely believed that a long list of new insights into the growth models can be obtained by studying the corresponding multiplicative statistics of the point processes.

This program has already gotten off to a great start for the KPZ equation: exploring its connection with the Airy\(_2\) point process, Corwin and Ghosal [37] were able to obtain bounds for the lower tail of the KPZ equation. Shortly afterwards, these bounds were improved by Cafasso and Claeys [27] with Riemann–Hilbert methods common in random matrix theory.

Our major goal is to take on the program of understanding multiplicative statistics for random particle systems, and to carry out a detailed asymptotic analysis of such statistics for one of the most inspiring models, namely eigenvalues of random matrices.

Statistics of eigenvalues of random matrices have been extensively studied in the past, notably in the context of so-called linear statistics. More recently, statistics associated to Fisher-Hartwig singularities came into the spotlight, in particular due to their implications in connection with Gaussian Multiplicative Chaos, and they are by now understood to a quite high level of generality and precision; we refer the reader to [1, 6, 9, 30, 35, 44, 48, 72,73,74] for a non-exhaustive list of accomplishments in different directions. We consider a different type of statistics, inspired by the works on stochastic growth models, which motivates us to refer to them as multiplicative. Among other distinct features, our family of multiplicative statistics has the key property that infinitely many singularities of its symbol approach the edge of the eigenvalue spectrum in a critical way.

In more concrete terms, we consider the Hermitian matrix model with an arbitrary one-cut regular polynomial potential V, and associate to it a general family of multiplicative statistics on its eigenvalues, indexed by a function Q satisfying certain natural regularity conditions. Our findings show that when the number of eigenvalues is large such multiplicative statistics become universal: in the large matrix limit they converge to a multiplicative statistics of the Airy\(_2\) point process which is independent of V and Q. This limiting statistics admits a characterization in terms of a particular solution to the integro-differential Painlevé II equation, and it is the same quantity that connects the KPZ equation and the Airy\(_2\) point process. So, in turn, we find that random matrix theory recasts the narrow wedge solution to the KPZ equation at finite time in a universal way.

The random matrix statistics that we study are associated to a deformed orthogonal polynomial ensemble, also indexed by Q, which we analyze. As we learn from earlier work of Borodin and Rains [22] which was recently rediscovered and greatly extended by Claeys and Glesner [36] (and which we also briefly explain later on), this deformed ensemble is a conditional ensemble of a marked process associated to the original random matrix model. We show that the correlation kernel for this point process converges to a kernel constructed out of the same solution to the integro-differential PII equation that appeared in [2]. This kernel is again universal in both V and Q, and turns out to be the kernel of the induced conditional process on the marked Airy\(_2\) point process. Naturally, there are orthogonal polynomials and their norming constants and recurrence coefficients associated to this deformed ensemble. With our approach we also obtain similar universality results for such quantities, showing that they are indeed universal in V and Q and also connect to the integro-differential PII in a neat way.

Beyond the concrete results, with this work we also hope to shed light on the rich structure underlying multiplicative statistics for eigenvalues of random matrices with singular symbols beyond the Fisher-Hartwig type. Much of the recent relevance of Painlevé equations is due to their appearance in random matrix theory, see [50] for an overview of several of these connections. There has been a growing recent interest in integro-differential Painlevé-type equations [24, 26, 28, 32, 58, 64], and our results place the integro-differential PII as a central universal object in random matrix theory as well.

We scale the multiplicative statistics to produce a critical behavior at a soft edge of the matrix model, and consequently the core of our asymptotic analysis lies within the construction of a local approximation to all the quantities near this critical point. Our main technical tool is the application of the Deift-Zhou nonlinear steepest descent method to the associated Riemann–Hilbert problem (RHP for short), and the mentioned local approximation is the so-called construction of a local parametrix. In our case, a novel feature is that this local parametrix construction is performed in two steps: first the construction of a model problem varying with a large parameter, and second the asymptotic analysis of this model problem. In the latter, a RHP recently studied by Cafasso and Claeys [27] (see also the subsequent works [28, 32]), related to the lower tail of the KPZ equation, shows up, and it is this RHP that ultimately connects all of the quantities we consider to the integro-differential PII.

The choice of scaling of our multiplicative statistics is natural and illustrative, but not exhaustive. As we point out later, with our approach it becomes clear that other scalings could also be analyzed, say for instance scalings around a bulk point, or soft/hard edge points with critical potentials, and these indicate that other integrable systems extending the integro-differential PII may emerge.

2 Statement of Main Results

Let \(\Lambda ^{(n)} :=(\lambda _1<\ldots <\lambda _n)\) be an n-particle system with distribution

$$\begin{aligned} \frac{1}{Z_n}\prod _{1\le j<k\le n}(\lambda _k-\lambda _j)^2\prod _{j=1}^n {{\,\mathrm{\mathrm e}\,}}^{-nV(\lambda _j)}\textrm{d}\lambda _1\ldots \textrm{d}\lambda _n, \end{aligned}$$
(2.1)

where \(Z_n\) is the partition function

$$\begin{aligned} Z_n:=\int _{\mathbb {R}^n} \prod _{1\le j<k\le n}(\lambda _k-\lambda _j)^2\prod _{j=1}^n {{\,\mathrm{\mathrm e}\,}}^{-n V(\lambda _j)}\textrm{d}\lambda _1\ldots \textrm{d}\lambda _n. \end{aligned}$$
(2.2)

The distribution (2.1) is the eigenvalue distribution of the unitarily-invariant random matrix model with potential function V [42, 66].

We associate to \(\Lambda ^{(n)}\) the multiplicative statistics

$$\begin{aligned} \mathsf L_n^Q(\mathsf s) :=&\; \mathbb {E}\left( \prod _{j=1}^n \frac{1}{1+ {{\,\mathrm{\mathrm e}\,}}^{-\mathsf s-n^{2/3} Q(\lambda _j)}} \right) \nonumber \\ =&\; \frac{1}{Z_n}\int _{\mathbb {R}^n} \prod _{1\le j<k\le n}(\lambda _k-\lambda _j)^2\prod _{j=1}^n \sigma _{n}(\lambda _j){{\,\mathrm{\mathrm e}\,}}^{-n V(\lambda _j)}\textrm{d}\lambda _1\ldots \textrm{d}\lambda _n = \frac{\mathsf Z^Q_n(\mathsf s)}{Z_n},\quad \mathsf s>0, \end{aligned}$$
(2.3)

where \(\mathsf Z_n^Q(\mathsf s)\) is the partition function for the deformed model

$$\begin{aligned} \mathsf Z_n^Q(\mathsf s):=\int _{\mathbb {R}^n} \prod _{1\le j<k\le n}(\lambda _k-\lambda _j)^2\prod _{j=1}^n \sigma _{n}(\lambda _j){{\,\mathrm{\mathrm e}\,}}^{-n V(\lambda _j)}\textrm{d}\lambda _1\ldots \textrm{d}\lambda _n \end{aligned}$$
(2.4)

and we denoted

$$\begin{aligned} \sigma _{n}(z)=\sigma _{n}(z\mid \mathsf s):=\left( 1+ {{\,\mathrm{\mathrm e}\,}}^{-\mathsf s-n^{2/3} Q(z) } \right) ^{-1}. \end{aligned}$$
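Although not needed for any of our arguments, the reader may find it helpful to see the quantity (2.3) numerically. The following is a minimal Monte Carlo sketch, written here for illustration only, assuming the quadratic potential \(V(x)=2x^2\) (whose equilibrium measure is the semicircle law on \([-1,1]\), shifted so that its right edge sits at the origin) and the illustrative linear choice \(Q(x)=-x\), so that \(\mathsf t=1\); all function names and parameter values are ours and hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def shifted_gue_eigenvalues(n):
    """Eigenvalues with density prop. to prod (l_k - l_j)^2 prod exp(-2 n l_j^2),
    i.e. V(x) = 2 x^2, shifted so that the right edge of supp(mu_V) is at 0."""
    v = 1.0 / (4.0 * n)                      # variance matching the weight exp(-2 n x^2)
    A = rng.normal(scale=np.sqrt(v), size=(n, n)) \
        + 1j * rng.normal(scale=np.sqrt(v), size=(n, n))
    H = (A + A.conj().T) / 2
    return np.linalg.eigvalsh(H) - 1.0       # shift the soft edge from 1 to the origin

def L_nQ_monte_carlo(s, n, Q=lambda x: -x, samples=500):
    """Monte Carlo estimate of L_n^Q(s) = E prod_j 1/(1 + exp(-s - n^{2/3} Q(lambda_j)))."""
    vals = np.empty(samples)
    for k in range(samples):
        lam = shifted_gue_eigenvalues(n)
        x = -s - n ** (2.0 / 3.0) * Q(lam)
        # prod_j 1/(1 + e^{x_j}) = exp(-sum_j log(1 + e^{x_j})), computed stably
        vals[k] = np.exp(-np.sum(np.logaddexp(0.0, x)))
    return vals.mean()

print(L_nQ_monte_carlo(s=1.0, n=80))
```

In such a simulation only the eigenvalues within an \(\mathcal {O}(n^{-2/3})\) window of the origin contribute appreciably to the product, in line with the local nature of the statistics discussed below.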

When Q is linear, with a straightforward change of parameters \(\mathsf L^Q_n\) reduces to

$$\begin{aligned} \mathbb {E}\left( \prod _{x\in \mathcal X}\frac{1}{1+\zeta q^{x}}\right) , \end{aligned}$$
(2.5)

where the expectation is over the set \(\mathcal X\) of configurations of points (that is, for us \(\mathcal X=\Lambda ^{(n)}\)), \(q\in (0,1)\) should be viewed as a parameter of the model and, in general, \(\zeta \in \mathbb {C}\) is a free parameter. The expression (2.5) may be viewed as a transformation of the point process, where \(\zeta \in \mathbb {C}\) becomes the spectral variable of this transformation, and the matching \(\zeta ={{\,\mathrm{\mathrm e}\,}}^{-\mathsf s}\) motivates the distinguished role of \(\mathsf s\) in (2.3). In the context of random particle systems, this particular multiplicative statistics is associated to the notion of a q-Laplace transform [17, 19,20,21] that we already mentioned in the Introduction, and it has been one of the key quantities in several outstanding recent advances in asymptotics for random particle systems [27, 37, 55].
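To make the change of parameters explicit (an elementary computation; the linear choice below is only illustrative and need not satisfy the assumptions stated next): if \(Q(\lambda )=c\lambda \) for some \(c>0\), then for each particle

$$\begin{aligned} \frac{1}{1+ {{\,\mathrm{\mathrm e}\,}}^{-\mathsf s-n^{2/3} Q(\lambda _j)}} = \frac{1}{1+ {{\,\mathrm{\mathrm e}\,}}^{-\mathsf s}\left( {{\,\mathrm{\mathrm e}\,}}^{-cn^{2/3}}\right) ^{\lambda _j}} = \frac{1}{1+ \zeta q^{\lambda _j}}, \qquad \zeta ={{\,\mathrm{\mathrm e}\,}}^{-\mathsf s},\quad q={{\,\mathrm{\mathrm e}\,}}^{-cn^{2/3}}\in (0,1), \end{aligned}$$

so that \(\mathsf L_n^Q(\mathsf s)\) in (2.3) indeed takes the form (2.5) with \(\mathcal X=\Lambda ^{(n)}\).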

We work under the following assumptions.

Assumption 2.1

  1. (i)

The potential V is a nonconstant real polynomial of even degree and positive leading coefficient, and its equilibrium measure \(\mu _V\) is one-cut regular; we refer to Sect. 8.1 below for the precise definitions. Performing a shift of the variable, we assume without loss of generality that the right-most endpoint of \({{\,\textrm{supp}\,}}\mu _V\) is at the origin, so that

    $$\begin{aligned} {{\,\textrm{supp}\,}}\mu _V=[-a,0], \end{aligned}$$

    for some \(a>0\).

  2. (ii)

    The function Q is real-valued over the real line, and analytic on a neighborhood of the real axis. We also assume that it changes sign at the right-most endpoint of \({{\,\textrm{supp}\,}}\mu _V\), with

    $$\begin{aligned} Q(x)>0 \text { on }(-\infty ,0), \quad Q(x)<0 \text { on } (0,\infty ), \end{aligned}$$
    (2.6)

    with \(x=0\) being a simple zero of Q. A particular role is played by the negative value \(Q'(0)\), so we set

    $$\begin{aligned} \mathsf t:=-Q'(0)>0. \end{aligned}$$
    (2.7)

Although \(\mathsf t\) in Assumption 2.1-(ii) will have the interpretation of time, we stress that in this paper it will be kept fixed within a compact subset of \((0,+\infty )\) rather than being made large or small.

For our results and throughout the whole work, we also talk about uniformity of several error terms with respect to \(\mathsf t\) in the sense that we now explain. Because Q is analytic on a neighborhood of the real axis, analytic continuation shows that it is completely determined by its derivatives \(Q^{(k)}(0)\), \(k\ge 0\). When we say that some error is uniform in \(\mathsf t\) within a certain range, we mean uniform when we vary Q as a function of \(-Q'(0)=\mathsf t\) while keeping all other derivatives \(Q^{(k)}(0)\), \(k\ge 2\), fixed.

The conditions in Assumption 2.1-(i) are standard in random matrix theory and they are known to hold when, say, V is a convex function [69]. The one-cut assumption is made just for ease of presentation, as it simplifies the Riemann–Hilbert analysis at the technical level considerably. On the other hand, the regularity condition is used substantially in our arguments, but it is standard in the random matrix theory literature and holds true generically [62]. Most of our results are of local nature near the right-most endpoint of \({{\,\textrm{supp}\,}}\mu _V\) and could be shown to hold true for multi-cut potentials near regular endpoints as well, with appropriate but non-essential modifications.

Assumption 2.1-(ii) should be seen as specifying enough regularity on the multiplicative statistics, here indexed by this factor Q. Because of condition (ii), we have the pointwise convergence

$$\begin{aligned} \sigma _n(x){\mathop {\rightarrow }\limits ^{n\rightarrow \infty }} {\left\{ \begin{array}{ll} 0, &{} x>0, \\ 1, &{} x<0, \end{array}\right. } \end{aligned}$$
(2.8)

which means that the introduction of the factor \(\sigma _n\) in the original weight \({{\,\mathrm{\mathrm e}\,}}^{-nV}\) has the effect of producing an interpolation between this original weight and its cut-off version \(\chi _{(-\infty ,0)}{{\,\mathrm{\mathrm e}\,}}^{-nV}\), where from here onward \(\chi _J\) denotes the characteristic function of a set J. Comparing the Euler-Lagrange conditions for the equilibrium problems induced by the weights \({{\,\mathrm{\mathrm e}\,}}^{-nV}\) and \(\chi _{(-\infty ,0)}{{\,\mathrm{\mathrm e}\,}}^{-nV{}}\), the observation we just made heuristically indicates that the factor \(\sigma _n\) does not change the global behavior of the eigenvalues. This may also be rigorously confirmed as an immediate consequence of our analysis, but we do not elaborate further on this point.

On the other hand, introducing a local coordinate u near the origin via the relation \(z=-u/n^{2/3}\), the approximation

$$\begin{aligned} {{\,\mathrm{\mathrm e}\,}}^{-\mathsf s-n^{2/3}Q(z)}\approx {{\,\mathrm{\mathrm e}\,}}^{-( \mathsf tu+\mathsf s)} \end{aligned}$$

holds, and we see that there is a competition between the term \(\mathsf s\) and \(Q(z)\) that affects the local behavior of the weight at the scale \(\mathcal {O}(n^{-2/3})\) near the origin, which is the same scale as that of the nontrivial fluctuations of the eigenvalues around this point. The main results that we are about to state concern the asymptotic behavior as \(n\rightarrow \infty \) of several quantities of the model, and in particular they showcase how this term Q affects the local scaling regime of the eigenvalues near the origin and leads to connections with the integro-differential Painlevé II equation, as already mentioned.
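In more detail, the approximation displayed above is a one-term Taylor expansion: since \(Q(0)=0\) and \(Q'(0)=-\mathsf t\) by (2.7), for u in a compact set we have

$$\begin{aligned} n^{2/3}Q\left( -\frac{u}{n^{2/3}}\right) = n^{2/3}\left( Q'(0)\left( -\frac{u}{n^{2/3}}\right) +\mathcal {O}(n^{-4/3})\right) = \mathsf tu+\mathcal {O}(n^{-2/3}), \end{aligned}$$

so that \({{\,\mathrm{\mathrm e}\,}}^{-\mathsf s-n^{2/3}Q(z)}={{\,\mathrm{\mathrm e}\,}}^{-(\mathsf tu+\mathsf s)}\left( 1+\mathcal {O}(n^{-2/3})\right) \) with \(z=-u/n^{2/3}\).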

A central object in this paper is the multiplicative statistics

$$\begin{aligned} \mathsf L^{{{\,\textrm{Ai}\,}}}(s,T):=\mathbb {E}\left( \prod _{j=1}^\infty \frac{1}{1+{{\,\mathrm{\mathrm e}\,}}^{T^{1/3}(s+\mathfrak {a}_j)}}\right) , \end{aligned}$$
(2.9)

where the expectation is over the Airy\(_2\) point process with random configuration of points \(\{\mathfrak {a}_j\}\) [68]. Expectations of the Airy\(_2\) point process with respect to other meaningful symbols have been considered in the recent past, see for instance [25, 31, 34]. The quantity \(\mathsf L^{{{\,\textrm{Ai}\,}}}\) admits two remarkable characterizations, which are also of particular interest to us. The first is the formulation via a Fredholm determinant, namely

$$\begin{aligned} \mathsf L^{{{\,\textrm{Ai}\,}}}(s,T)=\det \left( \mathbb {I}-\mathbb {K}^{\textrm{Ai}}_T \right) , \end{aligned}$$
where \(\mathbb {K}^{\textrm{Ai}}_T\) is the integral operator on \(L^2(-s,\infty )\) acting with the finite temperature (or fermi-type) deformation of the Airy kernel \(\mathsf K_T^{\textrm{Ai}}\), defined by

$$\begin{aligned} \mathsf K^\mathrm{{Ai}}_T(u,v):=\int _{-\infty }^\infty \frac{{{\,\mathrm{\mathrm e}\,}}^{T^{1/3}\zeta }}{1+{{\,\mathrm{\mathrm e}\,}}^{T^{1/3}\zeta }}{{\,\textrm{Ai}\,}}(u+\zeta ){{\,\textrm{Ai}\,}}(\zeta +v)\textrm{d}\zeta , \quad u,v\in \mathbb {R}. \end{aligned}$$

The term ‘temperature’ stems from the connection between the KPZ equation and random polymer models. Despite the name finite temperature, the parameter T here corresponds to the time in the KPZ equation, see (2.12) below. The Fredholm determinant \(\det \left( \mathbb {I}-\mathbb {K}^\mathrm{{Ai}}_T \right) \) appeared for the first time in the work of Johansson [57] as the limiting distribution of a grand canonical (that is, when the number of particles/size of the matrix is also random) version of a Gaussian random matrix model, and it interpolates between the classical Airy-kernel (Tracy–Widom) distribution when \(T\rightarrow +\infty \) and the Gumbel distribution when \(T\rightarrow 0^+\) with s scaled appropriately. In [57, Remark 1.13] Johansson already raised the question of whether a related classical (that is, not grand canonical) matrix model has limiting local statistics that interpolate between Gumbel and Tracy–Widom, as a feature similar to \(\det \left( \mathbb {I}-\mathbb {K}^\mathrm{{Ai}}_T \right) \). Since then, other works have found \(\det \left( \mathbb {I}-\mathbb {K}^\mathrm{{Ai}}_T \right) \) to be the limiting distribution for the fluctuations of the largest particle of a point process [11, 39, 41, 64]. What these works have in common is that they consider specific models, rather than obtaining \(\det \left( \mathbb {I}-\mathbb {K}^\mathrm{{Ai}}_T \right) \) as the universal limit for a whole family of particle systems. Finite-temperature type distributions extending \(\det \left( \mathbb {I}-\mathbb {K}^\mathrm{{Ai}}_T \right) \) have also appeared in the past, see for instance [18, 38, 54].
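The Fredholm determinant above is also convenient numerically. The following sketch, which is ours and not part of the paper, approximates \(\mathsf L^{{{\,\textrm{Ai}\,}}}(s,T)=\det \left( \mathbb {I}-\mathbb {K}^\mathrm{{Ai}}_T \right) \) by the classical quadrature discretization of Fredholm determinants (in the spirit of Bornemann's method): the \(\zeta \)-integral defining \(\mathsf K^\mathrm{{Ai}}_T\) and the interval \((-s,\infty )\) are truncated, and the operator is replaced by a finite matrix on Gauss–Legendre nodes. The truncation values L, M and the node counts are ad hoc choices for illustration.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import airy, expit

def fermi_airy_kernel(x, T, L=40.0, m=600):
    """Finite-temperature Airy kernel K^Ai_T(x_i, x_j) on the grid x, obtained by
    discretizing the zeta-integral over the truncated interval [-L, L]."""
    t, w = leggauss(m)
    z, w = L * t, L * w                       # Gauss-Legendre nodes/weights on [-L, L]
    fermi = expit(T ** (1.0 / 3.0) * z)       # e^{T^{1/3} z} / (1 + e^{T^{1/3} z})
    A = airy(np.add.outer(x, z))[0]           # A[i, k] = Ai(x_i + z_k)
    return (A * (w * fermi)) @ A.T

def L_airy(s, T, M=12.0, n=200):
    """det(I - K^Ai_T) on L^2(-s, M), with M a truncation of +infinity."""
    t, w = leggauss(n)
    x = 0.5 * (M + s) * t + 0.5 * (M - s)     # nodes on (-s, M)
    w = 0.5 * (M + s) * w
    K = fermi_airy_kernel(x, T)
    d = np.sqrt(w)
    return np.linalg.det(np.eye(n) - d[:, None] * K * d[None, :])

# As T grows, the smeared Fermi factor approaches a sharp cut-off and L_airy(s, T)
# approaches the GUE Tracy-Widom distribution evaluated at -s.
print(L_airy(s=0.0, T=50.0))
```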

Another characterization of \(\mathsf L^{{{\,\textrm{Ai}\,}}}\) is via a Tracy–Widom type formula that relates it to the integro-differential PII. It reads

$$\begin{aligned} \log \mathsf L^{{{\,\textrm{Ai}\,}}}(-\mathsf S\mathsf T^{1/3},\mathsf T^{-2})=-\frac{1}{\mathsf T}\int _{\mathsf S}^{\infty }(v-\mathsf S) \left( \int _{-\infty }^\infty \Phi (r\mid v,\mathsf T)^2 \frac{{{\,\mathrm{\mathrm e}\,}}^{-r}}{(1+{{\,\mathrm{\mathrm e}\,}}^{-r})^2}\textrm{d}r - \frac{v}{2}\right) \textrm{d}v, \end{aligned}$$
(2.10)

where \(\Phi \) solves the integro-differential Painlevé II equation

$$\begin{aligned} \partial ^2_\mathsf S\Phi (\xi \mid \mathsf S,\mathsf T)=\left( \xi +\frac{\mathsf S}{\mathsf T}+\frac{2}{\mathsf T}\int _{-\infty }^\infty \Phi (r\mid \mathsf S,\mathsf T)^2\frac{{{\,\mathrm{\mathrm e}\,}}^{-r}}{(1+{{\,\mathrm{\mathrm e}\,}}^{-r})^2}\textrm{d}r \right) \Phi (\xi \mid \mathsf S,\mathsf T) \end{aligned}$$
(2.11)

with boundary value

$$\begin{aligned} \Phi (\xi \mid \mathsf S,\mathsf T)\sim \mathsf T^{1/6}{{\,\textrm{Ai}\,}}(\mathsf T^{2/3}\xi +\mathsf S\mathsf T^{-1/3}),\quad \text {as } \xi \rightarrow \infty \text { with } |\arg \xi |<\pi -\delta , \end{aligned}$$

for any \(\delta >0\). This characterization was obtained in the already mentioned work by Amir et al. [2], in connection with the narrow wedge solution to the KPZ equation, and, following the work [20] by Borodin and Gorin, it has the interpretation that we now describe. For \(\mathcal H(X,T)\) being the Hopf-Cole solution to the KPZ equation with narrow wedge initial data at the space-time point (X, T), introduce the rescaled random variable

$$\begin{aligned} \Upsilon _T=\frac{\mathcal H(0,2T)+\frac{T}{12}}{T^{1/3}}. \end{aligned}$$

Based on the previous works [2, 29, 49, 70], in [20] the identity

$$\begin{aligned} \mathbb {E}_{\textrm{KPZ}}\left( {{\,\mathrm{\mathrm e}\,}}^{-{{\,\mathrm{\mathrm e}\,}}^{T^{1/3}(\Upsilon _T+s)}}\right) =\mathsf L^{{{\,\textrm{Ai}\,}}}(s,T) \end{aligned}$$
(2.12)

between the height function of the KPZ equation and the multiplicative statistics \(\mathsf L^{{{\,\textrm{Ai}\,}}}(s,T)\) is identified. This is an instance of the matching formulas relating growth processes with determinantal point processes that we already mentioned in the Introduction. One of the key aspects of this representation is that the Airy\(_2\) point process is determinantal, and consequently its statistics can be studied using techniques from exactly solvable/integrable models. Indeed, Eq. (2.12) is the starting point taken by Cafasso and Claeys [27], who then connected \(\mathsf L^{{{\,\textrm{Ai}\,}}}(s,T)\) to a RHP that will also play a major role for us. Recently, Cafasso et al. [28] also obtained an independent proof of the representation (2.10), extending it to more general multiplicative statistics of the Airy\(_2\) point process. Other proofs and extensions of this integro-differential equation have also been recently found in related contexts [24, 26, 58]. Also, by exploiting (2.12), the tail behavior of the KPZ equation has become rigorously accessible in various asymptotic regimes [27, 28, 32, 37].

As a first result, we prove that the multiplicative statistics \(\mathsf L^{{{\,\textrm{Ai}\,}}}\) is the universal limit of \(\mathsf L_n^Q(\mathsf s)\).

Theorem 2.2

Suppose that V and Q satisfy Assumptions 2.1 and fix \(\mathsf s_0>0\) and \(\mathsf t_0\in (0,1)\). For a constant \(\mathsf c_V>0\) that depends solely on V, and any \(\nu \in (0,2/3)\), the asymptotic estimate

$$\begin{aligned} \log \mathsf L^Q_n\left( \mathsf s\right) =\log \mathsf L^{{{\,\textrm{Ai}\,}}}\left( -\frac{\mathsf c_V\mathsf s}{\mathsf t},\frac{\mathsf t^3}{\mathsf c_V^3}\right) +\mathcal {O}(n^{-\nu }),\quad n\rightarrow \infty \end{aligned}$$
(2.13)

holds true uniformly for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\).

The constant \(\mathsf c_V\) can be determined from (8.2) and (8.5) below, albeit in implicit form, as it depends on the associated equilibrium measure for V. It is the first derivative, near the origin, of a conformal map which is constructed out of the equilibrium measure for V. Ultimately, we make a conformal change of variables of the form \(\zeta \approx \mathsf c_V z n^{2/3}\), which in turn identifies

$$\begin{aligned} \frac{1}{1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s-n^{2/3}Q(z)}}\approx \frac{1}{1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s+\mathsf t\zeta /\mathsf c_V}}. \end{aligned}$$

In light of (2.9), this explains the evaluation \(s=-\mathsf s\mathsf c_V/\mathsf t\) and \(T=\mathsf t^3/\mathsf c_V^3\) on the right-hand side of (2.13).
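Spelling this identification out at the heuristic level: substituting \(z=\zeta /(\mathsf c_V n^{2/3})\) and \(Q(z)=-\mathsf tz+\mathcal {O}(z^2)\) into the exponent gives

$$\begin{aligned} -\mathsf s-n^{2/3}Q(z)\approx -\mathsf s+\frac{\mathsf t}{\mathsf c_V}\zeta =\frac{\mathsf t}{\mathsf c_V}\left( \zeta -\frac{\mathsf c_V\mathsf s}{\mathsf t}\right) , \end{aligned}$$

and comparing with the exponent \(T^{1/3}(s+\mathfrak {a}_j)\) in (2.9), with \(\zeta \) playing the role of \(\mathfrak {a}_j\), forces \(T^{1/3}=\mathsf t/\mathsf c_V\) and \(s=-\mathsf c_V\mathsf s/\mathsf t\), that is, precisely the arguments appearing in (2.13). The rigorous statement is, of course, Theorem 2.2 itself.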

Findings in random matrix theory surrounding the Tracy–Widom distribution have inspired an enormous development in the KPZ universality theory. One of the major developments surrounding the KPZ equation can be phrased by saying that the fluctuations of the height function for the narrow wedge solution coincide, in the large time limit, with the \(\beta =2\) Tracy–Widom law from random matrix theory. Theorem 2.2 says that the connection between random matrix theory and the KPZ equation can be recast already at any finite time, and not only for Gaussian models but also universally in V and Q. A similar connection exists [8] between the solution of the KPZ equation in half-space with Robin boundary condition and the Airy\(_1\) point process, which, in the large time limit, relates this KPZ solution to GOE matrices.

We emphasize that the error term in (2.13) is not in sharp form. In Sect. 3.1 we explain how this term arises from our techniques. We do not have indications regarding whether the true optimal error would be \(\mathcal {O}(n^{-2/3})\) (or of any polynomial order) or if it should involve, say, logarithmic corrections.

Our next results concern limiting asymptotic formulas for the matrix model underlying \(\mathsf L_n^Q\), starting with the partition function \(\mathsf Z_n^Q\) from (2.4). The quantities \(Z_n\) and \(\mathsf Z_n^{Q}\) are Hankel determinants associated to the symbols \({{\,\mathrm{\mathrm e}\,}}^{-nV}\) and \(\sigma _{n}{{\,\mathrm{\mathrm e}\,}}^{-nV}\). As mentioned earlier, a great deal of fine asymptotic information is known for a large class of symbols with so-called Fisher-Hartwig singularities. This class includes symbols with singularities of root type and jump discontinuities, and greatly extends the symbol \({{\,\mathrm{\mathrm e}\,}}^{-nV}\). We briefly review such results in a moment. However, the perturbation \(\sigma _n\) produces a wildly varying term in the symbol \(\sigma _n {{\,\mathrm{\mathrm e}\,}}^{-nV}\); in fact, as \(n\rightarrow \infty \) there are infinitely many simple poles of \(\sigma _n\) accumulating on the real axis, and to our knowledge not much is known about Hankel determinants associated to such symbols.

Under certain technical conditions, Bleher and Its [15] obtained a full asymptotic expansion for \(\log Z_n\) in inverse powers of \(n^2\), computing the first few higher order terms explicitly, see also [10, 13, 14, 23, 51] for important early work obtaining similar results under different technical conditions. Thanks to several recent contributions valid in various degrees of generality [9, 30, 52, 72], much more detailed information than (2.14) is known. In particular, to our knowledge a detailed asymptotic analysis of \(Z_n\) for general regular one-cut potentials without any further technical assumptions has been completed by Berestycki, Webb and Wong [9], including lower order terms up to the constant. This asymptotic formula can also be read off from a more general result by Charlier [30, Theorem 1.1 and Remark 1.4], which under our conditions coincides with the result of [9] and reads as

$$\begin{aligned} Z_n=\exp \left( \textbf{e}_1^V n^2+\textbf{e}_2^V n-\frac{1}{12}\log n+\textbf{e}_4^V\right) \left( 1+\mathcal {O}\left( \frac{\log n}{n}\right) \right) , \end{aligned}$$
(2.14)

where the constants \(\textbf{e}_1^V,\textbf{e}_2^V\) and \(\textbf{e}_4^V\) depend on V in an explicit manner.

As an immediate corollary to Theorem 2.2 we obtain some terms in the asymptotic expansion of the deformed partition function (2.4).

Corollary 2.3

Suppose V and Q satisfy Assumptions 2.1 and fix \(\mathsf s_0>0\) and \(\mathsf t_0>0\). For any \(\nu \in (0,2/3)\), the deformed partition function \(\mathsf Z_n^Q\) admits an expansion of the form

$$\begin{aligned} \mathsf Z_n^Q(\mathsf s)=\exp \left( \textbf{e}_1^V n^2+\textbf{e}_2^V n-\frac{1}{12}\log n+\textbf{e}_4^V\right) \mathsf L^{{{\,\textrm{Ai}\,}}}\left( -\frac{\mathsf s\mathsf c_V}{\mathsf t},\frac{\mathsf t^3}{\mathsf c_V^3}\right) \left( 1+\mathcal {O}(n^{-\nu })\right) ,\quad n\rightarrow \infty \nonumber \\ \end{aligned}$$
(2.15)

which is valid uniformly for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\), and the coefficients \(\textbf{e}_1^V\), \(\textbf{e}_2^V\) and \(\textbf{e}_4^V\) are as in (2.14).

Proof

Follows from the expansion (2.14), Theorem 2.2 and (2.3). \(\square \)

The order of the error in (2.15) is not \(\mathcal {O}(\log n/n)\) as in (2.14) but weaker, and it is not sharp. This phenomenon can be traced back to the fact that \(\sigma _n\) has infinitely many poles accumulating on the real axis as \(n\rightarrow \infty \), see the discussion in Sect. 3.1 below. A similar error order was obtained in [15, Theorem 9.1 and Equation (9.68)], in a transitional regime from a one-cut to a two-cut potential, where the role played here by \(\mathsf L^{{{\,\textrm{Ai}\,}}}\) is replaced by the GUE Tracy–Widom distribution itself.

From the general theory of unitarily invariant random matrix models, it is known that the density appearing in (2.3) admits a determinantal form. Setting

$$\begin{aligned} \omega _{n}(z)=\omega ^Q_n(z\mid \mathsf s):=\sigma _{n}(z){{\,\mathrm{\mathrm e}\,}}^{-n V(z)},\quad \text {where we recall}\quad \sigma _{n}(z)=\left( 1+ {{\,\mathrm{\mathrm e}\,}}^{-\mathsf s- n^{2/3}Q(z) } \right) ^{-1}, \end{aligned}$$
(2.16)

this means that the identity

$$\begin{aligned} \frac{1}{\mathsf Z_n^Q(\mathsf s)} \prod _{1\le j<k\le n}(\lambda _k-\lambda _j)^2\prod _{j=1}^n \sigma _{n}(\lambda _j){{\,\mathrm{\mathrm e}\,}}^{-n V(\lambda _j)} = \frac{1}{n!}\det \left( \omega _n(\lambda _j)^{1/2}\mathsf K^Q_n(\lambda _j,\lambda _k)\omega _n(\lambda _k)^{1/2}\right) _{j,k=1}^n \end{aligned}$$

holds true for a function of two variables \(\mathsf K^Q_n(x,y)\) satisfying certain properties, known as the correlation kernel of the eigenvalue density on the left-hand side. The correlation kernel is not unique, but in the present setup it may be taken to be the Christoffel-Darboux kernel for the orthogonal polynomials for the weight \(\omega _n\), as we introduce in detail in (9.3), and whenever we talk about \(\mathsf K_n^Q\) we mean this Christoffel-Darboux kernel. In particular, \(\mathsf K^Q_n=\mathsf K^Q_n(\cdot \mid \mathsf s)\) does depend on both Q and \(\mathsf s\).

Our second result proves universality of the kernel \(\mathsf K_n^Q\), showing that its limit depends solely on \(\mathsf s\) and \(\mathsf t=-Q'(0)\), but not on other aspects of Q, and relates to the integro-differential PII. For its statement, it is convenient to introduce the new set of variables

$$\begin{aligned} \mathsf T=\mathsf t^{-3/2}\quad \text {and}\quad \mathsf S=\mathsf s\mathsf t^{-3/2}. \end{aligned}$$
(2.17)

With \(\Phi (\xi )=\Phi (\xi \mid \mathsf S,\mathsf T)\) being the solution to the integro-differential Painlevé II equation in (2.11) and the variables \(\mathsf s,\mathsf t\) and \(\mathsf S,\mathsf T\) related by (2.17), we set

$$\begin{aligned} \upphi _1(\zeta \mid \mathsf s,\mathsf t)=\Phi (\xi (\zeta )\mid \mathsf S,\mathsf T),\quad \upphi _2(\zeta \mid \mathsf s,\mathsf t)=(\partial _\mathsf S\Phi )(\xi (\zeta )\mid \mathsf S,\mathsf T),\quad \xi (\zeta ):=-\mathsf s+\mathsf t\zeta , \end{aligned}$$

and introduce the kernel

$$\begin{aligned} \mathsf K_\infty (u,v\mid \mathsf s,\mathsf t):=\frac{\upphi _1(v\mid \mathsf s,\mathsf t)\upphi _2(u\mid \mathsf s,\mathsf t)-\upphi _1(u\mid \mathsf s,\mathsf t)\upphi _2(v\mid \mathsf s,\mathsf t)}{u-v},\quad u,v\in \mathbb {R}. \end{aligned}$$

Theorem 2.4

Assume that V and Q satisfy Assumptions 2.1 and fix \(\mathsf s_0>0\) and \(\mathsf t_0\in (0,1)\). With

$$\begin{aligned} u_n:=\frac{u}{\mathsf c_V n^{2/3}},\quad v_n:=\frac{v}{\mathsf c_V n^{2/3}}, \end{aligned}$$
(2.18)

the estimate

$$\begin{aligned} \frac{{{\,\mathrm{\mathrm e}\,}}^{-\frac{n}{2}(V(u_n)+V(v_n))}}{\mathsf c_V n^{2/3}} \mathsf K^Q_n\left( u_n,v_n\mid \mathsf s\right) = \mathsf K_\infty (u,v\mid \mathsf s,\mathsf t/\mathsf c_V)+ \mathcal {O}(n^{-1/3}),\quad n\rightarrow \infty , \end{aligned}$$
(2.19)

holds true uniformly for uv in compacts of \(\mathbb {R}\), and uniformly for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\).

In the recent work [36], Claeys and Glesner developed a general framework for certain conditional point processes, which in particular yields a probabilistic interpretation of the kernel \(\mathsf K_n^Q\) as we now explain. For a point process \(\Lambda \), we add a mark 0 to a point \(\lambda \in \Lambda \) with probability \(\sigma _n(\lambda )\) and a mark 1 with complementary probability \(1-\sigma _n(\lambda )\). This induces a decomposition of the point process \(\Lambda =\Lambda _0\cup \Lambda _1\), where \(\Lambda _j\) is the set of eigenvalues with mark j. We then consider the induced point process \(\widehat{\Lambda }\) obtained from \(\Lambda \) upon conditioning that \(\Lambda _1=\emptyset \), that is, that all points have mark 0.

When applied to the eigenvalue point process \(\Lambda =\Lambda ^{(n)}\) induced by the distribution (2.1), the theory developed in [36] shows that \(\widehat{\Lambda }^{(n)}\) is a determinantal point process with correlation kernel proportional to \(\omega _n(x)^{1/2}\omega _n(y)^{1/2}\mathsf K_n^Q(x,y)\) which, in turn, generates the same point process as the left-hand side of (2.19), see [36, Sections 4 and 5]. A comparison of the RHP that characterizes the kernel \(\mathsf K_\infty \) (see Sect. 5.1 below, in particular (5.16)) with the discussion in [36, Section 5.2] shows that \(\mathsf K_\infty \) is a (renormalized) correlation kernel for the marked point process \(\widehat{\{\mathfrak a_k\}}\) of the Airy\(_2\) point process \(\{\mathfrak a_k\}\) with the marking function \((1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s+\mathsf t\lambda })^{-1}\). So Theorem 2.4 assures that the conditional process on the marked eigenvalues converges, at the level of rescaled correlation kernels, to the conditional process on the marked Airy\(_2\) point process.
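The marking-and-conditioning procedure just described is simple enough to simulate directly. The toy sketch below, ours and purely illustrative, uses again a quadratic potential and the choice \(Q(x)=-\mathsf tx\): each sampled configuration is marked independently and kept only if no point receives mark 1, so accepted samples follow the \(\sigma _n\)-deformed (conditional) ensemble, and the acceptance probability of the scheme is precisely \(\mathsf L_n^Q(\mathsf s)\).

```python
import numpy as np

rng = np.random.default_rng(1)

def shifted_gue_eigenvalues(n):
    # Toy realization of Lambda^(n): eigenvalues for V(x) = 2 x^2, right edge shifted to 0.
    v = 1.0 / (4.0 * n)
    A = rng.normal(scale=np.sqrt(v), size=(n, n)) \
        + 1j * rng.normal(scale=np.sqrt(v), size=(n, n))
    return np.linalg.eigvalsh((A + A.conj().T) / 2) - 1.0

def conditional_sample(n, s, t=1.0, max_tries=10000):
    """Rejection sampling of the conditional ensemble: mark lambda with 0 with
    probability sigma_n(lambda) and with 1 otherwise, and keep the configuration
    only if no point received mark 1."""
    for _ in range(max_tries):
        lam = shifted_gue_eigenvalues(n)
        sigma = 1.0 / (1.0 + np.exp(-s + t * n ** (2.0 / 3.0) * lam))  # sigma_n for Q(x) = -t x
        if (rng.random(n) < sigma).all():      # every point received mark 0
            return lam
    raise RuntimeError("no configuration accepted; increase s or max_tries")

sample = conditional_sample(n=60, s=2.0)
print(sample.max())   # conditioning suppresses points lying to the right of the origin
```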

We also obtain asymptotics for the norming constant \(\upgamma ^{(n,Q)}_{n-1}(\mathsf s)\) for the \({(n-1)}\)-th monic orthogonal polynomial for the weight \(\omega _n(x\mid \mathsf s)\) (see (9.2) for the definition), showing that its first correction term depends again solely on \(\mathsf s,\mathsf t\), and also relates to the integro-differential Painlevé II equation.

Theorem 2.5

Suppose that V and Q satisfy Assumptions 2.1 and fix \(\mathsf s_0>0\) and \(\mathsf t_0\in (0,1)\). The norming constant has asymptotic behavior

$$\begin{aligned} \upgamma ^{(n,Q)}_{n-1}(\mathsf s)^2=\frac{a}{4\pi }{{\,\mathrm{\mathrm e}\,}}^{-2n\ell _V}\left( \frac{1}{2}-\frac{1}{n^{1/3}}\frac{\mathsf c_V^{1/2}}{\mathsf t^{1/2}} \left( \mathsf p(\mathsf s,\mathsf t/\mathsf c_V)-\frac{\mathsf s^2\mathsf c_V^{3/2}}{4\mathsf t^{3/2}}\right) +\mathcal {O}(n^{-2/3}) \right) , \quad n\rightarrow \infty , \end{aligned}$$
(2.20)

uniformly for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\), where \(\mathsf p(\mathsf s,\mathsf t)=\mathsf P(\mathsf S,\mathsf T)\), and the function \(\mathsf P\) relates to the solution \(\Phi \) from (2.11) via

$$\begin{aligned} \partial _\mathsf S\mathsf P(\mathsf S,\mathsf T)=\frac{\mathsf S}{2\mathsf T}+\frac{1}{\mathsf T}\int _{-\infty }^\infty \Phi (r\mid \mathsf S,\mathsf T)^2\frac{{{\,\mathrm{\mathrm e}\,}}^{-r}}{(1+{{\,\mathrm{\mathrm e}\,}}^{-r})^2}\textrm{d}r. \end{aligned}$$
(2.21)

Our approach also yields asymptotic formulas for the orthogonal polynomials and their recurrence coefficients, and relates them to the integro-differential Painlevé II equation as well, but for the sake of brevity we do not state them.

3 About our Approach: Issues and Extensions

3.1 Issues to be overcome

Our main tool for obtaining all of our results is the Fokas–Its–Kitaev [53] Riemann–Hilbert Problem (RHP) for orthogonal polynomials (OPs for short) that encodes the correlation kernel \(\mathsf K_n^Q\), the norming constants \(\upgamma ^{(n,Q)}_{n-1}(\mathsf s)^2\) and ultimately also the multiplicative statistics \(\mathsf L_n^Q\), together with its asymptotic analysis via the Deift-Zhou nonlinear steepest descent method [45, 47]. The overall arc of this asymptotic analysis is the usual one, summarized in the diagram in Fig. 1, and we now comment on its major steps.

Fig. 1

Schematic diagram for the steps in the asymptotic analysis of the RHP for orthogonal polynomials. There is also an Airy local parametrix used near \(z=-a\) which we omit in this diagram

Starting with the RHP for OPs that we name \(\textbf{Y}\), in the first step we transform \(\textbf{Y}\mapsto \textbf{T}\) with the introduction of the g-function (or, equivalently, the \(\phi \)-function), and this is done with the help of the equilibrium measure for V, which accounts only for the part \({{\,\mathrm{\mathrm e}\,}}^{-nV}\) of the weight \(\omega _n\). In the second step, we open lenses with a transformation \(\textbf{T}\mapsto \textbf{S}\) as usual.

The third step is the construction of the global parametrix \(\textbf{G}\). In our case, in this construction we also have to account for the perturbation \(\sigma _n\) of the weight \({{\,\mathrm{\mathrm e}\,}}^{-nV}\), so a Szegö function-type construction is used.

The fourth step is the construction of local parametrices at the endpoints \(z=-a,0\) of \({{\,\textrm{supp}\,}}\mu _V\), with the goal of approximating all the quantities locally near these endpoints. This is accomplished by, first, considering a change of variables \(z\mapsto \zeta =n^{2/3}\psi (z)\), with the conformal map \(\psi \) chosen appropriately for each endpoint, and, then, constructing the solution to a model RHP \({\varvec{\Phi }}(\zeta )\) in the \(\zeta \)-plane. Following these steps, the local parametrix at the left edge \(z=-a\) of \({{\,\textrm{supp}\,}}\mu _V\) is standard and utilizes Airy functions.

The construction of the local parametrix at the right edge \(z=0\) is, however, a lot more involved. As we mentioned earlier, the factor \(\sigma _n\) affects asymptotics of local statistics near the origin. In fact, the weight \(\sigma _n\) has singularities at the points of the form

$$\begin{aligned} \frac{\mathsf s}{\mathsf tn^{2/3}}+\frac{\pi \textrm{i}(2k+1)}{\mathsf tn^{2/3}}+\mathcal {O}(n^{-4/3}),\quad k\in \mathbb {Z}. \end{aligned}$$

This means that for \(|\mathsf s|\ll n^{2/3}\) there are infinitely many poles of \(\sigma _n\) accumulating near the real axis. As such, in this case for large n the perturbed weight \(\omega _n\) fails to be analytic in any fixed neighborhood of the origin. If we were to consider only \(\mathsf s\rightarrow +\infty \) fast enough, one could still push the standard RHP analysis further with the aid of Airy local parametrices, at the cost of a worse error estimate. However, when \(\mathsf s=\mathcal {O}(1)\) we have poles accumulating too fast towards the real axis, and a different asymptotic analysis has to be carried out; in particular, a new local parametrix is needed.
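For completeness, the pole locations displayed above follow from a direct computation: the denominator of \(\sigma _n\) vanishes exactly when

$$\begin{aligned} -\mathsf s-n^{2/3}Q(z)=(2k+1)\pi \textrm{i},\quad k\in \mathbb {Z}, \qquad \text {that is,}\qquad Q(z)=-\frac{\mathsf s+(2k+1)\pi \textrm{i}}{n^{2/3}}, \end{aligned}$$

and inverting the expansion \(Q(z)=-\mathsf tz+\mathcal {O}(z^2)\) near the origin yields the claimed locations for the poles lying in a fixed neighborhood of the origin.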

When changing coordinates \(z\mapsto \zeta \) near \(z=0\), the model problem \({\varvec{\Phi }}={\varvec{\Phi }}_n\) obtained is then n-dependent. This is so because the jump of the local parametrix involves \(\sigma _n\), and consequently in the process of changing variables the resulting model problem has a jump that involves a transformation of \(\sigma _n\) itself. This is in contrast with the usual constructions with, say, Airy, Bessel or Painlevé-type parametrices, where the jumps can be made piecewise constant, or homogeneous, in the z-plane and, hence, also remain piecewise constant in the \(\zeta \)-plane. As we learned after finishing the first version of this paper, somewhat similar issues have occurred for instance in [40, 60], although we handle this issue here in a different and independent manner. Another feature of the RHP for the model problem \({\varvec{\Phi }}_n\) is that its jump is not analytic on the whole plane; instead, it is analytic only in a disk growing with n, and for fixed n we can only ensure that its jump matrix is \(C^\infty \) in a neighborhood of the jump contour.

All in all, this means that carrying out the asymptotic analysis of \({\varvec{\Phi }}_n(\zeta )\) as \(n\rightarrow \infty \) is also needed. As we said, the jump for \({\varvec{\Phi }}_n\) involves a transformation of \(\sigma _n\), so ultimately also depends on the function Q from (2.16). But it turns out that as \(n\rightarrow \infty \), we have the convergence \({\varvec{\Phi }}_n\rightarrow {\varvec{\Phi }}_0\) in an appropriate sense, where \({\varvec{\Phi }}_0\) is independent of Q. This limiting \({\varvec{\Phi }}_0\) is the solution to a RHP that appeared recently in connection with the KPZ equation [27] and which was later shown to connect with the integro-differential PII in the recent work of Claeys et al. [28, 32]. For this reason we term it the id-PII RHP.

With the construction of the global and local parametrices, the asymptotic analysis is concluded in the usual way, by patching them together and obtaining a new RHP for a matrix function \(\textbf{R}\). This matrix \(\textbf{R}\), in turn, solves a RHP whose jump is asymptotically close to the identity, and consequently \(\textbf{R}\) can be found perturbatively.

After concluding this asymptotic analysis, we undress the transformations \(\textbf{R}\mapsto \cdots \mapsto \textbf{Y}\) and obtain asymptotic expressions for the wanted quantities. For the kernel \(\mathsf K_n^Q\) and the norming constant \(\upgamma ^{(n,Q)}_{n-1}(\mathsf s)^2\), after this undressing Theorems 2.4 and 2.5 follow in a standard manner.

However, quite some extra work is needed to obtain (2.13). When dealing with statistics of matrix models via OPs, one of the usual approaches is to extract the needed information via the partition function and its relation with the norming constants through a product formula, see for instance (9.6) below. Usually this is accomplished via some differential identity or with a careful estimate of each term in the product formula, see for instance the works [4, 13, 15, 59] and their references for explorations along these lines. By virtue of the relation (2.3) this was in fact our original attempt, but several technical issues arise. Instead, in the end we express \(\mathsf L_n^Q\) directly as a weighted double integral of \(\mathsf K_n^Q(x,x\mid \mathsf s)\) in the variables x and \(\mathsf s\); this is done in Proposition 9.1 below. The x-integral takes place over the whole real line, which means that when we undress \(\textbf{R}\mapsto \textbf{Y}\) we obtain a formula for \(\mathsf L_n^Q\) involving the global and all local parametrices. The integral in \(\mathsf s\) extends to \(+\infty \), which is one of the main reasons why in our main statements we also keep track of uniformity of errors when \(\mathsf s\rightarrow +\infty \). We then have to estimate the double integral, accounting for the exponential decay of most of the terms but also for exact cancellations of some other terms. Ultimately, the whole analysis leads to a leading contribution coming solely from a portion of the integral that arises from the model problem \({\varvec{\Phi }}_n\). With a further asymptotic analysis of the latter integral we obtain an integral solely of \({{\varvec{\Phi }}}_0\), which then yields Theorem 2.2.

The convergence \({\varvec{\Phi }}_n\rightarrow {{\varvec{\Phi }}}_0\) is treated as a separate issue, and to achieve it we need several pieces of information about this id-PII parametrix \({{\varvec{\Phi }}}_0\). As a final outcome, we obtain that \({\varvec{\Phi }}_n\) is close to \({{\varvec{\Phi }}}_0\) with an error term of the form \(\mathcal {O}(n^{-\nu })\), for any \(\nu \in (0,2/3)\). But, largely due to the non-analyticity of the jump matrix for \({\varvec{\Phi }}_n\), we are not able to achieve a sharp order \(\mathcal {O}(n^{-2/3})\) unless further conditions were placed on Q. This non-optimal error explains the appearance of the same error order in (2.13). In the course of this asymptotic analysis we rely substantially on [28]. Among other needed information, we also borrow from the same work the connection of \({{\varvec{\Phi }}}_0\) with the integro-differential PII. In the same work, the authors actually show that \({{\varvec{\Phi }}}_0\) relates to particular solutions to the KdV equation that reduce to the integro-differential PII. As such, Theorems 2.4 and 2.5 could be phrased in terms of a solution to the KdV equation rather than to the integro-differential PII. We opt to phrase them with the latter because this formulation encodes that all self-similarities have already been accounted for.

If the jump matrix for \({\varvec{\Phi }}_n\) were piecewise analytic on the whole plane and not merely \(C^\infty \), we could deform \({\varvec{\Phi }}_n\) to a family of RHPs considered in [28]. With this in mind, the analysis of the convergence \({\varvec{\Phi }}_n\rightarrow {{\varvec{\Phi }}}_0\) is inspired by several aspects of this just mentioned work but, as we already said, here we are forced to work under different conditions on the jump matrix. In particular, one could adapt the methods in [28] to actually prove that \({\varvec{\Phi }}_n\) also relates to an n-dependent solution to the integro-differential PII. Consequently, with a careful inspection of our work one could show that Theorems 2.2, 2.4 and 2.5 admit versions with n-dependent leading terms. For instance, relating the norming constant \(\upgamma ^{(n,Q)}_{n-1}(\mathsf s)^2\) with the model problem \({\varvec{\Phi }}_n\) one could obtain an asymptotic formula of the form

$$\begin{aligned} \upgamma ^{(n,Q)}_{n-1}(\mathsf s)^2=\frac{a}{4\pi }{{\,\mathrm{\mathrm e}\,}}^{-2n\ell _V}\left( \frac{1}{2}-\frac{1}{n^{1/3}}\frac{\mathsf c_V^{1/2}}{\mathsf t^{1/2}} \left( \mathsf p_n(\mathsf s,\mathsf t/\mathsf c_V)-\frac{\mathsf s^2\mathsf c_V^{3/2}}{4\mathsf t^{3/2}}\right) +\mathcal {O}(n^{-2/3}) \right) , \quad n\rightarrow \infty , \end{aligned}$$

where the n-dependent function \(\mathsf p_n\) is obtained from \({\varvec{\Phi }}_n\) and relates to an n-dependent solution \(\Phi _n\) to the integro-differential PII. In fact, with standard arguments one could improve the formula above to a full asymptotic expansion in powers of \(n^{-1/3}\), with bounded but n-dependent coefficients. Underlying our arguments there is the statement that \(\mathsf p_n=\mathsf p+\mathcal {O}(n^{-\nu })\) for any \(\nu \in (0,2/3)\), which then yields Theorem 2.5. But as a drawback, although one could potentially improve (2.20) and also obtain the term of order \(n^{-2/3}\) explicitly, it is not possible to obtain the \(\mathcal {O}(n^{-1})\) term in (2.20) unless one improves the error \(\mathcal {O}(n^{-\nu })\) in the convergence \({\varvec{\Phi }}_n\rightarrow {{\varvec{\Phi }}}_0\) to a sharp error \(\mathcal {O}(n^{-2/3})\).

3.2 Possible extensions

Most of our approach may be extended to potentials V for which the equilibrium measure \(\mu _V\) is critical, and also under different conditions on Q as we now explain.

Apart from technical adaptations in several steps of the RHP for OPs which are nowadays well understood, our analysis carries over to potentials V for which the equilibrium measure \(\mu _V\) is regular but multicut, with the same conditions on Q when \(\mu _V\) has the origin as its right-most endpoint.

When, say, the density of \(\mu _V\) vanishes to a higher power at a soft edge and/or Q changes sign with an arbitrary odd vanishing order at the same soft edge, we need to replace the power \(n^{2/3}\) in \(\sigma _n\) by another appropriate power to modify the local statistics near this point in a non-trivial critical manner. Once this is done, the asymptotic analysis of the RHP for OPs that we perform carries over mostly with minor modifications, and the only major issue to overcome is the construction of a new local parametrix \(\widetilde{{\varvec{\Phi }}}_n\) near this soft edge point and its corresponding asymptotic analysis. In this case, we expect that \(\widetilde{{\varvec{\Phi }}}_n\rightarrow \widetilde{{\varvec{\Phi }}}_0\) for a new function \(\widetilde{{\varvec{\Phi }}}_0\). It is relatively simple to write a RHP that should be satisfied by this \(\widetilde{{\varvec{\Phi }}}_0\), and we expect it to be related to the KdV hierarchy [33] but with nonstandard initial data. It would be interesting to see if the particular solutions obtained this way reduce to integro-differential hierarchies of Painlevé equations, in the same spirit of the recent works [26, 58].

One could also consider similar statistics to (2.3) with a Q that vanishes at a bulk point of \({{\,\textrm{supp}\,}}\mu _V\). We do expect that most of our work carries through to this situation, at least when we impose V to be again one-cut regular and Q to vanish quadratically at a point inside \({{\,\textrm{supp}\,}}\mu _V\). The main issue that should arise is again the construction of the local parametrix near this point, and its corresponding asymptotic analysis. This model should lead to multiplicative statistics of the Sine kernel (and higher order generalizations of it). Similar considerations carry over to hard-edge models, leading to multiplicative statistics of the Bessel process. To our knowledge, such multiplicative statistics of Bessel and Sine have not been considered in the literature so far. However, finite temperature versions of the Sine and Bessel kernels have appeared, see for instance [11, 12, 39, 57].

3.3 Organization of the paper

The paper is organized in two parts. In the first part, we deal with a family of RHPs \({\varvec{\Phi }}_\tau \) that contains the model RHP \({\varvec{\Phi }}_n\) needed in the asymptotic analysis of OPs. In Sect. 4 we introduce \({\varvec{\Phi }}_\tau \) formally. In Sects. 5 and 6 we discuss the RHP \({{\varvec{\Phi }}}_0\), which is a particular case of \({\varvec{\Phi }}_\tau \), and review several of its properties, translating results from [27, 28] to our notation and needs. In Sect. 7 we prove the convergence \({\varvec{\Phi }}_\tau \rightarrow {{\varvec{\Phi }}}_0\) and of related quantities in the appropriate sense. The latter section contains all the needed results for the asymptotic analysis of the RHP for OPs, and concludes the first part of this paper.

The second part of the paper is focused on the asymptotic analysis of the RHP for OPs. In Sect. 8 we discuss several aspects related to the equilibrium measure. In Sect. 9 we introduce the Christoffel-Darboux kernel \(\mathsf K_n^Q\) and related quantities, and display how they relate to the RHP for OPs. In particular, in Proposition 9.1 we write \(\mathsf L_n^Q\) directly as an integral of the kernel \(\mathsf K_n^Q\), a result which may be of independent interest. In Sect. 10 we perform the asymptotic analysis of the RHP for the OPs. In Sect. 11 we use the conclusions from Sects. 7 and 10 and prove Theorems 2.4 and 2.5. Also using the results from Sects. 7 and 10 and assuming additional technical estimates, the proof of Theorem 2.2 is given in Sect. 11. Such remaining technical estimates are also ultimately a consequence of the RHP analysis, but their proofs are rather cumbersome and postponed to Sect. 12.

For the remainder of the paper it is convenient to denote

$$\begin{aligned} \textbf{e}_1:=\begin{pmatrix} 1 \\ 0 \end{pmatrix},\quad \textbf{e}_2:=\begin{pmatrix} 0 \\ 1 \end{pmatrix},\quad \textbf{E}_{jk}:=\textbf{e}_j\textbf{e}_k^\mathrm T, \end{aligned}$$
(3.1)

so \(\textbf{E}_{jk}\) is the \(2\times 2\) matrix with the (j, k) entry equal to 1 and all other entries zero. With this notation, the identity and Pauli matrices, for instance, take the form

$$\begin{aligned} \textbf{I}:=\textbf{E}_{11}+\textbf{E}_{22},\quad \varvec{\sigma }_1:=\textbf{E}_{12}+\textbf{E}_{21}, \quad \varvec{\sigma }_2:=-\textrm{i}\textbf{E}_{12}+\textrm{i}\textbf{E}_{21},\quad \varvec{\sigma }_3:=\textbf{E}_{11}-\textbf{E}_{22}. \end{aligned}$$
(3.2)

In particular, for any reasonably regular scalar function f, the spectral calculus yields

$$\begin{aligned} f(z)^{\varvec{\sigma }_3}= \begin{pmatrix} f(z) &{} 0 \\ 0 &{} 1/f(z) \end{pmatrix}. \end{aligned}$$
(3.3)

These notations will be used extensively in the coming sections.

4 A Model Riemann–Hilbert Problem

In this section we discuss a model Riemann–Hilbert Problem that will be used in the construction of a local parametrix in the asymptotic analysis for the orthogonal polynomials. As such, this model problem plays a central role in obtaining all our major results.

4.1 The model problem

Set

$$\begin{aligned} \mathsf \Sigma _0:=[0,+\infty ), \quad \mathsf \Sigma _1:=[0,{{\,\mathrm{\mathrm e}\,}}^{2\pi \textrm{i}/3}\infty ), \quad \mathsf \Sigma _2:=(-\infty ,0], \quad \mathsf \Sigma _3:=[0,{{\,\mathrm{\mathrm e}\,}}^{-2\pi \textrm{i}/3}\infty ), \quad \mathsf \Sigma :=\bigcup _{j=0}^3 \mathsf \Sigma _j, \end{aligned}$$
(4.1)

orienting \(\mathsf \Sigma _0\) from the origin to \(\infty \), and the remaining arcs from \(\infty \) to the origin, see Fig. 2.

Fig. 2

The contours \(\mathsf \Sigma _0,\mathsf \Sigma _1,\mathsf \Sigma _2\) and \(\mathsf \Sigma _3\) in (4.1) that constitute \(\mathsf \Sigma \)

The model RHP we are about to introduce depends on a function \(\mathsf h:\mathsf \Sigma \rightarrow \mathbb {C}\) used to describe its jump. For the moment we assume

$$\begin{aligned}\mathsf h\in C^\infty (\mathsf \Sigma ), \quad \mathsf h(z)\in \mathbb {R}\text { for }z\in \mathbb {R},\quad \text {and}\quad \liminf _{\begin{array}{c} z\rightarrow \infty \\ z\in \mathsf \Sigma \end{array}} \frac{{{\,\textrm{Re}\,}}\mathsf h(z)}{|z|}>0. \end{aligned}$$

These conditions are present only to ensure that the RHP below is well posed; they are far from optimal, but enough for our purposes. Later on we will impose more conditions on this function \(\mathsf h\); these conditions will be tailored to our later needs regarding the asymptotic analysis of OPs.

The associated RHP asks for finding a \(2\times 2\) matrix-valued function \({\varvec{\Phi }}\) with the following properties.

\({\varvec{\Phi }}\)-1.:

The matrix \({\varvec{\Phi }}={\varvec{\Phi }}(\cdot \mid \mathsf h):\mathbb {C}{\setminus } \mathsf \Sigma \rightarrow \mathbb {C}^{2\times 2}\) is analytic.

\({\varvec{\Phi }}\)-2.:

Along the interior of the arcs of \(\mathsf \Sigma \) the function \({\varvec{\Phi }}\) admits continuous boundary values \({\varvec{\Phi }}_\pm \) related by \({\varvec{\Phi }}_+(\zeta )={\varvec{\Phi }}_-(\zeta )\textbf{J}_{{\varvec{\Phi }}}(\zeta )\), \(\zeta \in \mathsf \Sigma \), with

(4.2)
\({\varvec{\Phi }}\)-3.:

As \(\zeta \rightarrow \infty \),

$$\begin{aligned} {\varvec{\Phi }}(\zeta )=\left( \textbf{I}+\frac{1}{\zeta }{\varvec{\Phi }}^{(1)}+\mathcal {O}(1/\zeta ^{2})\right) \zeta ^{\varvec{\sigma }_3/4}\textbf{U}_0^{-1} {{\,\mathrm{\mathrm e}\,}}^{-\frac{2}{3}\zeta ^{3/2}\varvec{\sigma }_3}, \end{aligned}$$
(4.3)

where

$$\begin{aligned} \textbf{U}_0:=\frac{1}{\sqrt{2}} \begin{pmatrix} 1 &{} \textrm{i}\\ \textrm{i}&{} 1 \end{pmatrix} \end{aligned}$$
(4.4)

and \({\varvec{\Phi }}^{(1)}={\varvec{\Phi }}^{(1)}(\mathsf h)\) is a matrix that depends on the choice of function \(\mathsf h\) but it is independent of \(\zeta \).

\({\varvec{\Phi }}\)-4.:

The matrix \({\varvec{\Phi }}\) remains bounded as \(\zeta \rightarrow 0\).

Given \(\mathsf h\), it is not at all obvious that the RHP above has a solution, nor how to describe it. We study this model problem when \(\mathsf h=\mathsf h_\tau \) depends on an additional large parameter \(\tau \), in a way that appears naturally in the asymptotic analysis of the orthogonal polynomials mentioned earlier. For large values of \(\tau \), we then prove that the solution \({\varvec{\Phi }}\) exists and is asymptotically close to the solution of a model RHP that appeared recently in [27] and that we discuss in a moment.

4.2 The model RHP with admissible data

For us, we need to consider the model problem \({\varvec{\Phi }}={\varvec{\Phi }}(\cdot \mid \mathsf h)\) with functions \(\mathsf h=\mathsf h_\tau \) satisfying certain properties which are formally introduced in the next definition.

Definition 4.1

We call a function \(\mathsf h_\tau :\Sigma \rightarrow \mathbb {C}\) admissible if it is of the form

$$\begin{aligned} \mathsf h_\tau (\zeta )=\mathsf h_\tau (\zeta \mid \mathsf s)=\mathsf s+\tau \mathsf H\left( \frac{\mathsf \zeta }{\tau }\right) ,\quad \zeta \in \Sigma ,\quad \tau >0,\; \mathsf s\in \mathbb {R}, \end{aligned}$$

where \(\mathsf H\) is defined on a neighborhood \(\mathcal S\) of \(\Sigma \) and satisfies the following properties.

  1. (i)

    The function \(\mathsf H\) is independent of \(\tau \) and \(\mathsf s\), of class \(C^\infty \) on \(\mathcal S\) and real-valued along \(\mathbb {R}\).

  2. (ii)

    \(\mathsf H\) is analytic on a disk \(D_\delta (0)\subset \mathcal S\) centered at the origin, and its unique zero on \(D_\delta (0)\) is at \(\zeta =0\), with

    $$\begin{aligned} \mathsf t:=-\mathsf H'(0)>0. \end{aligned}$$
  3. (iii)

    There exist constants \(\eta ,\widehat{\eta }>0\) for which

    $$\begin{aligned} {{\,\textrm{Re}\,}}\mathsf H(w)>\eta |w| \quad \text {for } w\in \mathsf \Sigma _1\cup \mathsf \Sigma _2\cup \mathsf \Sigma _3, \end{aligned}$$

    and

    $$\begin{aligned} -\widehat{\eta }w^{3/2-\epsilon }<\mathsf H(w)<-\eta w \quad \text {for } w\in \mathsf \Sigma _0, \end{aligned}$$

    for some \(\epsilon \in (0,1/2]\).

Conditions (i)–(ii), and also the bounds in (iii) involving \(\eta \), are natural in our setup. The bound \(\mathsf H(w)>-\widehat{\eta }w^{3/2-\epsilon }\) is present for technical reasons and plays a role only in the proof of Lemma 7.4, allowing us to write certain estimates in a cleaner manner. It could be removed, at the cost of slightly more complicated error terms in that lemma. For our purposes, namely to use \({\varvec{\Phi }}={\varvec{\Phi }}(\cdot \mid \mathsf h_\tau )\) as a local parametrix with an appropriate \(\mathsf h_{\tau }\), this condition is satisfied anyway (this will be accomplished in Proposition 8.3), so we include it in our definition here as well, as it simplifies our analysis.

In the course of the analysis for the RHP for the orthogonal polynomials discussed in Sect. 9, the function \(\mathsf H\) will be a transformation of the function Q appearing in the deformed weight (2.16), and the parameter \(\mathsf t\) that we defined here will play the same role as the one in the definition (2.7).

Given an admissible \(\mathsf h_\tau \), we denote

$$\begin{aligned} {\varvec{\Phi }}_\tau (\zeta ):={\varvec{\Phi }}(\zeta \mid \mathsf h=\mathsf h_\tau (\cdot \mid \mathsf s)). \end{aligned}$$
(4.5)

We are interested in the asymptotic analysis of \({\varvec{\Phi }}_\tau \) as \(\tau \rightarrow +\infty \), with \(\mathsf s\ge -\mathsf s_0\) for any fixed \(\mathsf s_0>0\), and with \(\mathsf t>0\) kept within a compact subset of the positive axis.

We now explain, in an ad hoc manner, the appearance of an RHP for the integro-differential Painlevé II equation, which also relates to the KPZ equation. Definition 4.1-(ii) gives that \(\mathsf H\) has an expansion of the form

$$\begin{aligned} \mathsf H(\zeta )=-\mathsf t\zeta (1+\mathcal {O}(\zeta )),\quad |\zeta |\le \delta . \end{aligned}$$

This means that any admissible function \(\mathsf h_\tau \) satisfies

$$\begin{aligned} \mathsf h_\tau (\zeta )=\mathsf s-\mathsf t\zeta \left( 1+\mathcal {O}(\zeta \tau ^{-1})\right) ,\quad |\zeta |\le \delta \tau . \end{aligned}$$
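Indeed, this is nothing more than the substitution \(w=\zeta /\tau \) in the expansion of \(\mathsf H\):

$$\begin{aligned} \mathsf h_\tau (\zeta )=\mathsf s+\tau \mathsf H\left( \frac{\zeta }{\tau }\right) =\mathsf s-\tau \,\mathsf t\,\frac{\zeta }{\tau }\left( 1+\mathcal {O}(\zeta \tau ^{-1})\right) =\mathsf s-\mathsf t\zeta \left( 1+\mathcal {O}(\zeta \tau ^{-1})\right) ,\quad |\zeta |\le \delta \tau . \end{aligned}$$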

In particular, the convergence

$$\begin{aligned} \mathsf h_\tau (\zeta )\rightarrow \mathsf h_0(\zeta )=\mathsf h_0(\zeta \mid \mathsf s,\mathsf t):=\mathsf s-\mathsf t\zeta , \end{aligned}$$
(4.6)

holds true uniformly in compacts as \(\tau \rightarrow \infty \). This indicates that the solution \({\varvec{\Phi }}_\tau \) should converge to the solution

$$\begin{aligned} {{\varvec{\Phi }}}_0:={\varvec{\Phi }}(\cdot \mid \mathsf h=\mathsf h_0) \end{aligned}$$
(4.7)

of the model problem obtained from \(\mathsf h_0\). The RHP-\({{\varvec{\Phi }}}_0\) relates to the integro-differential PII and is a rescaled version of an RHP that appears in the description of the narrow wedge solution to the KPZ equation, as we discuss in the next section in detail.

5 The RHP for the Integro-Differential Painlevé II Equation

For the choice

$$\begin{aligned} \mathsf h^{\mathrm {\scriptscriptstyle (KPZ)}}(\zeta )=\mathsf h^{\mathrm {\scriptscriptstyle (KPZ)}}(\zeta \mid s,T):=-T^{1/3}(s+\zeta ) \end{aligned}$$
(5.1)

the corresponding solution of the RHP–\({\varvec{\Phi }}\)

$$\begin{aligned} {{\varvec{\Phi }}}^{\mathrm {\scriptscriptstyle (KPZ)}}(\zeta )={{\varvec{\Phi }}}^{\mathrm {\scriptscriptstyle (KPZ)}}(\zeta \mid s,T):={\varvec{\Phi }}(\zeta \mid \mathsf h=\mathsf h^{\mathrm {\scriptscriptstyle (KPZ)}}(\cdot \mid s,T)) \end{aligned}$$

appeared for the first time in the work of Cafasso and Claeys [27] (this is the RHP-\(\Psi \) in Section 2 therein) in connection with the narrow wedge solution to the KPZ equation, as we explain in a moment in Sect. 5.1. To avoid confusion with the related quantities that we are about to introduce, we term it the KPZ RHP. In virtue of the identity

$$\begin{aligned} \mathsf h_0(\zeta \mid \mathsf s,\mathsf t)=\mathsf h^{\mathrm {\scriptscriptstyle (KPZ)}}(\zeta \mid s=-\mathsf s/\mathsf t, T=\mathsf t^3) \end{aligned}$$

which follows from (4.6) and (5.1), we also have the correspondence

$$\begin{aligned} {{\varvec{\Phi }}}_0(\zeta \mid \mathsf s,\mathsf t)={{\varvec{\Phi }}}^{\mathrm {\scriptscriptstyle (KPZ)}}(\zeta \mid s=-\mathsf s/\mathsf t,T=\mathsf t^3), \end{aligned}$$
(5.2)

and we refer to \({{\varvec{\Phi }}}_0\) as the id-PII RHP. For the record, we state the existence of \({{\varvec{\Phi }}}_0\) formally as a result.

Proposition 5.1

For any \(\mathsf s\in \mathbb {R}\) and any \(\mathsf t>0\), the solution \({{\varvec{\Phi }}}_0\) exists and is unique. Furthermore, for any fixed \(\mathsf s_0>0\) and \(\mathsf t_0\in (0,1)\), the solution \({{\varvec{\Phi }}}_{0,+}(\zeta )\) remains bounded for \(\zeta \) in compacts of \(\mathbb {R}\) and \(\mathsf s\ge -\mathsf s_0\), \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\).

Proof

It is a consequence of [27, Section 2] that the solution \({{\varvec{\Phi }}}^{\mathrm {\scriptscriptstyle (KPZ)}}(\cdot \mid s,T)\) exists and is unique for any \(s\in \mathbb {R}\) and \(T>0\), and from the correspondence (5.2) the existence and uniqueness of \({{\varvec{\Phi }}}_0\) thus follows.

For the boundedness, we start from the representation

$$\begin{aligned} {{\varvec{\Phi }}}_0(\zeta )=\textbf{I}+\frac{1}{2\pi \textrm{i}}\int _{\mathsf \Sigma }\frac{{{\varvec{\Phi }}}_{0,-}(w)(\textbf{J}_{{{\varvec{\Phi }}}_0}(w)-\textbf{I})}{w-\zeta }\textrm{d}w,\quad \zeta \in \mathbb {C}{\setminus } \mathsf \Sigma , \end{aligned}$$

which follows from the \(L^p\) theory of RHPs (see [43]). The jump matrix admits an analytic continuation to any neighborhood of the real axis, and this analytic continuation remains bounded in compacts, also uniformly for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\) (see for instance (5.9) for the exact expression). With these observations in mind, the claimed boundedness follows from standard arguments. We skip additional details, but refer to the proof of Theorem 7.1, in particular (7.16) et seq., for similar arguments in a more involved context. \(\square \)

In this section we collect several results on \({{\varvec{\Phi }}}_0\) that were obtained in [27, 28] and which will be needed later.

But before proceeding, a word of caution. As we said, the RHP–\({{\varvec{\Phi }}}^{\mathrm {\scriptscriptstyle (KPZ)}}\) appeared first in [27], but was also studied in the subsequent work [28]. The meanings of the variables \(s\), \(x\) and \(t\) in these two works are not consistent, but we need results from both of them. Comparing to the work [27] by Cafasso and Claeys, the correspondence is

$$\begin{aligned} s_\textrm{CC}=-\frac{\mathsf s}{\mathsf t}\quad \text {and}\quad T_\textrm{CC}=\mathsf t^3. \end{aligned}$$
(5.3)

This correspondence is consistent with (5.2). On the other hand, when comparing to the subsequent work [28] by Cafasso, Claeys and Ruzza, the correspondence between notations is

$$\begin{aligned} t_\textrm{CCR}=\frac{1}{\mathsf t^{3/2}}, \quad x_\textrm{CCR}=-\frac{\mathsf s}{\mathsf t^{3/2}},\qquad \text {that is}\qquad x_\textrm{CCR}=-\mathsf S,\quad t_\textrm{CCR}=\mathsf T, \end{aligned}$$
(5.4)

where \(\mathsf T,\mathsf S\) are as in (2.17).

In our asymptotic analysis, the most convenient variables to work with are \((\mathsf s,\mathsf t)\) and the correspondence \((\mathsf S,\mathsf T)\) from (2.17) that we have already been using, and which leads to the RHP \({{\varvec{\Phi }}}_0\) as we introduced it. Nevertheless, we will need to collect results from both mentioned works, and when the need arises we refer to the correspondences of variables (5.3)–(5.4).

On the other hand, when making correspondence with integrable systems, in particular the integro-differential Painlevé II equation, it is more convenient to work with the variables \(\mathsf S\) and \(\mathsf T\) as in (2.17).

5.1 Properties of the id-PII parametrix

In this section we describe many of the findings from [27, 28], suitably adapted to our notation and needs. In particular, we describe the connection of \({{\varvec{\Phi }}}_0\), introduced in (4.7), with the integro-differential Painlevé II equation.

For

(5.5)

the identity

$$\begin{aligned} \partial _s\log \mathsf L^{{{\,\textrm{Ai}\,}}}(s,T)=\frac{T^{1/3}}{2\pi \textrm{i}}\int _{-\infty }^\infty \frac{{{\,\mathrm{\mathrm e}\,}}^{T^{1/3}(x+s)}}{(1+{{\,\mathrm{\mathrm e}\,}}^{T^{1/3}(x+s)})^2}\left[ {{\varvec{\Delta }}}^{\mathrm {\scriptscriptstyle (KPZ)}}(x)^{-1}{{\varvec{\Phi }}}^{\mathrm {\scriptscriptstyle (KPZ)}}_+(x)^{-1}({{\varvec{\Phi }}}^{\mathrm {\scriptscriptstyle (KPZ)}}_+ {{\varvec{\Delta }}}^{\mathrm {\scriptscriptstyle (KPZ)}})'(x)\right] _{21} \textrm{d}x, \end{aligned}$$
(5.6)

was shown in [27, Theorem 2.1] and will also be useful for us. With (5.2) we now rewrite this identity in terms of \({{\varvec{\Phi }}}_0\). With the principal branch of the argument, set

$$\begin{aligned} {{\varvec{\Delta }}}_0(\zeta )={{\varvec{\Delta }}}_0(\zeta \mid \mathsf s, \mathsf t):={\left\{ \begin{array}{ll} \textbf{I}, &{} |\arg \zeta |<\frac{2\pi }{3}, \\ \textbf{I}+(1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s+\mathsf t\zeta })\textbf{E}_{21}, &{} |\arg \zeta |>\frac{2\pi }{3}. \end{array}\right. } \end{aligned}$$
(5.7)

This function relates to \({{\varvec{\Delta }}}^{\mathrm {\scriptscriptstyle (KPZ)}}\) in (5.5) via

$$\begin{aligned} {{\varvec{\Delta }}}_{0,+}(\zeta )={{\varvec{\Delta }}}^{\mathrm {\scriptscriptstyle (KPZ)}}(\zeta \mid s=-\mathsf s/\mathsf t, T=\mathsf t^3),\quad \zeta \in \mathbb {R}, \end{aligned}$$

and (5.6) rewrites as

$$\begin{aligned} \partial _s\log \mathsf L^{{{\,\textrm{Ai}\,}}}(s=-\mathsf s/\mathsf t,T=\mathsf t^3)=\frac{\mathsf t}{2\pi \textrm{i}}\int _{-\infty }^\infty \frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf tx-\mathsf s}}{(1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf tx-\mathsf s})^2}\left[ {{\varvec{\Delta }}}_0(x)^{-1}{{\varvec{\Phi }}}_{0,+}(x)^{-1}({{\varvec{\Phi }}}_{0,+}{{\varvec{\Delta }}}_0)'(x)\right] _{21} \textrm{d}x. \end{aligned}$$
(5.8)
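For the record, the weight in (5.8) arises from the elementary substitution \(T^{1/3}(x+s)\big |_{s=-\mathsf s/\mathsf t,\,T=\mathsf t^3}=\mathsf t\left( x-\frac{\mathsf s}{\mathsf t}\right) =\mathsf t x-\mathsf s\), which also converts the prefactor \(T^{1/3}\) into \(\mathsf t\).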

For further reference, it is now convenient to state the RHP for \({{\varvec{\Phi }}}_0\) explicitly.

\({{\varvec{\Phi }}}_0\)-1.:

The matrix \({{\varvec{\Phi }}}_0:\mathbb {C}{\setminus } \mathsf \Sigma \rightarrow \mathbb {C}^{2\times 2}\) is analytic.

\({{\varvec{\Phi }}}_0\)-2.:

Along the interior of the arcs of \(\mathsf \Sigma \) the function \({{\varvec{\Phi }}}_0\) admits continuous boundary values \({{\varvec{\Phi }}}_{0,\pm }\) related by \({{\varvec{\Phi }}}_{0,+}(\zeta )={{\varvec{\Phi }}}_{0,-}(\zeta )\textbf{J}_{{{\varvec{\Phi }}}_0}(\zeta )\), \(\zeta \in \mathsf \Sigma \), with

(5.9)
\({{\varvec{\Phi }}}_0\)-3.:

As \(\zeta \rightarrow \infty \),

$$\begin{aligned} {{\varvec{\Phi }}}_0(\zeta )=\left( \textbf{I}+\mathcal {O}(1/\zeta )\right) \zeta ^{\varvec{\sigma }_3/4}\textbf{U}_0^{-1} {{\,\mathrm{\mathrm e}\,}}^{-\frac{2}{3}\zeta ^{3/2}\sigma _3}, \end{aligned}$$
(5.10)

where we recall that \(\textbf{U}_0\) is given in (4.4) and the principal branches of the roots are used.

\({{\varvec{\Phi }}}_0\)-4.:

The matrix \({{\varvec{\Phi }}}_0\) remains bounded as \(\zeta \rightarrow 0\).

To compare with [28] we perform a transformation of this RHP. All the calculations that follow already take into account the correspondence (5.4) between the notation in the mentioned work and our notation.

With \({{\varvec{\Delta }}}_0\) as in (5.7) and introducing

$$\begin{aligned} \xi =\xi (\zeta )=-\mathsf s+\mathsf t\zeta ,\quad \text {with inverse}\quad \zeta =\zeta (\xi )=\frac{\xi +\mathsf s}{\mathsf t}, \end{aligned}$$

we transform

$$\begin{aligned} {\varvec{\Psi }}_0(\xi )=\left( \textbf{I}+\frac{\textrm{i}\mathsf s^2}{4\mathsf t^{3/2}}\textbf{E}_{12}\right) \mathsf t^{\varvec{\sigma }_3/4}{{\varvec{\Phi }}}_0(\zeta (\xi ))\times {\left\{ \begin{array}{ll} {{\varvec{\Delta }}}_0(\zeta (\xi )), &{} {{\,\textrm{Im}\,}}\xi >0, \; \arg (\zeta (\xi ))\ne 2\pi /3,\\ {{\varvec{\Delta }}}_0(\zeta (\xi ))^{-1}, &{} {{\,\textrm{Im}\,}}\xi <0, \; \arg (\zeta (\xi ))\ne -2\pi /3. \end{array}\right. } \end{aligned}$$
(5.11)

Then \({\varvec{\Psi }}_0\) satisfies the following RHP.

\({\varvec{\Psi }}_0\)-1.:

The matrix \({\varvec{\Psi }}_0:\mathbb {C}{\setminus } \mathbb {R}\rightarrow \mathbb {C}^{2\times 2}\) is analytic.

\({\varvec{\Psi }}_0\)-2.:

Along \(\mathbb {R}\) the function \({\varvec{\Psi }}_0\) admits continuous boundary values \({\varvec{\Psi }}_{0,\pm }\) related by

$$\begin{aligned} {\varvec{\Psi }}_{0,+}(\xi )={\varvec{\Psi }}_{0,-}(\xi ) \left( \textbf{I}+\frac{1}{1+{{\,\mathrm{\mathrm e}\,}}^{\xi }} \textbf{E}_{12}\right) , \quad \xi \in \mathbb {R}. \end{aligned}$$
\({\varvec{\Psi }}_0\)-3.:

For any \(\delta \in (0,2\pi /3)\), as \(\xi \rightarrow \infty \) the matrix \({\varvec{\Psi }}_0\) has the following asymptotic behavior,

$$\begin{aligned} {\varvec{\Psi }}_0(\xi )= & {} \left( \textbf{I}+\mathcal {O}(1/\xi )\right) \xi ^{\varvec{\sigma }_3/4}\textbf{U}_0^{-1} {{\,\mathrm{\mathrm e}\,}}^{-\mathsf t^{-3/2}\left( \frac{2}{3}\xi ^{3/2}+\mathsf s\xi ^{1/2}\right) \sigma _3}\nonumber \\{} & {} \times {\left\{ \begin{array}{ll} \textbf{I}, &{} |\arg \xi |\le \pi -\delta , \\ \textbf{I}\pm \textbf{E}_{21}, &{} \pi -\delta<\pm \arg \xi <\pi . \end{array}\right. } \end{aligned}$$
(5.12)
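The exponential factor in (5.12), as well as the constant prefactor \(\textbf{I}+\frac{\textrm{i}\mathsf s^2}{4\mathsf t^{3/2}}\textbf{E}_{12}\) in (5.11), can be traced back to the change of variables: with \(\zeta =\zeta (\xi )=(\xi +\mathsf s)/\mathsf t\) and principal branches, a direct expansion gives

$$\begin{aligned} \frac{2}{3}\zeta ^{3/2}=\frac{2}{3\mathsf t^{3/2}}(\xi +\mathsf s)^{3/2}=\frac{1}{\mathsf t^{3/2}}\left( \frac{2}{3}\xi ^{3/2}+\mathsf s\xi ^{1/2}+\frac{\mathsf s^2}{4}\xi ^{-1/2}-\frac{\mathsf s^3}{24}\xi ^{-3/2}+\mathcal {O}(\xi ^{-5/2})\right) ,\quad \xi \rightarrow \infty . \end{aligned}$$

The first two terms account for the exponent in (5.12), whereas the \(\xi ^{-1/2}\) term, once commuted through \(\zeta ^{\varvec{\sigma }_3/4}\textbf{U}_0^{-1}\), produces a constant contribution in the \((1,2)\) entry; this is, in essence, what the prefactor in (5.11) compensates for.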

This is the same RHP considered in [28, page 1120] (in fact, the keen reader will notice a sign difference between the last term in the right-hand side of (4.3) and the corresponding term in [28, page 1120], but the latter is a typo), with the choice \(\sigma (r)=(1+{{\,\mathrm{\mathrm e}\,}}^{-r})^{-1}\) therein and the correspondence of variables (5.4).

As a consequence, and with the change of variables \((\mathsf s,\mathsf t)\mapsto (\mathsf S,\mathsf T)\) from (2.17), we obtain that there exist functions \(\mathsf Q=\mathsf Q(\mathsf S,\mathsf T)\), \(\mathsf R=\mathsf R(\mathsf S,\mathsf T)\) and \(\mathsf P=\mathsf P(\mathsf S,\mathsf T)\) such that, with

$$\begin{aligned} \mathsf q=\mathsf q(\mathsf s,\mathsf t)=\mathsf Q(\mathsf S,\mathsf T),\quad \mathsf r=\mathsf r(\mathsf s,\mathsf t)=\mathsf R(\mathsf S,\mathsf T),\quad \mathsf p=\mathsf p(\mathsf s,\mathsf t)=\mathsf P(\mathsf S,\mathsf T), \end{aligned}$$
(5.13)

the asymptotic behavior (5.12) improves to

$$\begin{aligned} {\varvec{\Psi }}_0(\xi )= & {} \left( \textbf{I}+ \frac{1}{\xi } \begin{pmatrix} \mathsf q &{} -\textrm{i}\mathsf r \\ \textrm{i}\mathsf p &{} -\mathsf q \end{pmatrix}+ \mathcal {O}(\xi ^{-2})\right) \xi ^{\varvec{\sigma }_3/4}\textbf{U}_0^{-1}\nonumber \\{} & {} \times {{\,\mathrm{\mathrm e}\,}}^{-\mathsf t^{-3/2}\left( \frac{2}{3}\xi ^{3/2}+\mathsf s\xi ^{1/2}\right) \sigma _3} \times {\left\{ \begin{array}{ll} \textbf{I}, &{} |\arg \xi |\le \pi -\delta , \\ \textbf{I}\pm \textbf{E}_{21}, &{} \pi -\delta<\pm \arg \xi <\pi , \end{array}\right. } \quad \xi \rightarrow \infty .\nonumber \\ \end{aligned}$$
(5.14)

Stressing that the correspondence (5.4) is in place, the functions \(\mathsf P\) and \(\mathsf Q\) satisfy the relation [28, Equation (3.14)]

$$\begin{aligned} \partial _\mathsf S\mathsf P(\mathsf S,\mathsf T)=2\mathsf Q(\mathsf S,\mathsf T)+\mathsf P(\mathsf S,\mathsf T)^2. \end{aligned}$$

Furthermore, from [28, Equations (3.12),(3.16), Theorem 1.3 and Corollary 1.4] we see that \({\varvec{\Psi }}_0\) takes the form

$$\begin{aligned} {\varvec{\Psi }}_0(\xi \mid \mathsf s,\mathsf t)= \sqrt{2\pi }{{\,\mathrm{\mathrm e}\,}}^{-\frac{\pi \textrm{i}}{4}\varvec{\sigma }_3}\left( \textbf{I}- \mathsf p(\mathsf s,\mathsf t)\textbf{E}_{12}\right) \begin{pmatrix} -\partial _\mathsf S\Phi (\xi \mid \mathsf S,\mathsf T) &{} *\\ -\Phi (\xi \mid \mathsf S,\mathsf T) &{} *\end{pmatrix} {{\,\mathrm{\mathrm e}\,}}^{\frac{\pi \textrm{i}}{4}\varvec{\sigma }_3}, \end{aligned}$$
(5.15)

where \(\Phi =\Phi (\xi \mid \mathsf S,\mathsf T)\) solves the NLS equation with potential \(2\partial _\mathsf S\mathsf P\),

$$\begin{aligned} \partial _\mathsf S^2\Phi (\xi \mid \mathsf S,\mathsf T)=(\xi +2\partial _\mathsf S\mathsf P(\mathsf S,\mathsf T))\Phi (\xi \mid \mathsf S,\mathsf T). \end{aligned}$$

In addition, \(\mathsf P\) and \(\Phi \) are related through the identity (2.21) which, in turn, implies that \(\Phi \) is the solution to the integro-differential Painlevé II equation in (2.11).

It is convenient to write some quantities of \({{\varvec{\Phi }}}_0\) directly in terms of the just introduced functions. The first identity we need for later is

$$\begin{aligned}{} & {} \left[ \left( {{\varvec{\Phi }}}_0(\zeta (v)\mid \mathsf s,\mathsf t){{\varvec{\Delta }}}_0(\zeta (v)\mid \mathsf s,\mathsf t)\right) ^{-1}{{\varvec{\Phi }}}_0(\zeta (u)\mid \mathsf s,\mathsf t){{\varvec{\Delta }}}_0(\zeta (u)\mid \mathsf s,\mathsf t)\right] _{21,+}\nonumber \\{} & {} \quad =-2\pi \textrm{i}\left( \Phi (u\mid \mathsf S,\mathsf T)(\partial _{\mathsf S}\Phi )(v\mid \mathsf S,\mathsf T)-\Phi (v\mid \mathsf S,\mathsf T)(\partial _{\mathsf S}\Phi )(u\mid \mathsf S,\mathsf T)\right) \end{aligned}$$
(5.16)

which follows from (5.11) and (5.15) after a straightforward calculation, accounting also that \(\det {{\varvec{\Phi }}}_0=\det {\varvec{\Psi }}_0\equiv 1\).

The second relation we need is an improvement of the asymptotics of \({{\varvec{\Phi }}}_0\) in (5.10). With the coefficients

$$\begin{aligned} \mathsf c_1=\mathsf c_1(\mathsf s,\mathsf t):=-\frac{\mathsf s^2}{4\mathsf t^{3/2}},\quad \mathsf c_2=\mathsf c_2(\mathsf s,\mathsf t):=\frac{\mathsf s^4}{32\mathsf t^3},\quad \mathsf c_3=\mathsf c_3(\mathsf s,\mathsf t):=-\frac{\mathsf s^3(\mathsf s^3-16\mathsf t^3)}{384\mathsf t^{9/2}}, \end{aligned}$$

and the functions \(\mathsf q,\mathsf r,\mathsf p\) in (5.13), introduce

$$\begin{aligned} {{\varvec{\Phi }}}_0^{(1)}:=\frac{1}{\mathsf t} \begin{pmatrix} -\dfrac{\mathsf s}{4}+\mathsf q+\mathsf c_2-\mathsf c_1\mathsf p-\mathsf c_1^2 &{} \textrm{i}\mathsf t^{-1/2}\left( -\mathsf r-2\mathsf q\mathsf c_1+\dfrac{\mathsf s}{2}\mathsf c_1+\mathsf p\mathsf c_1^2+\mathsf c_1\mathsf c_2-\mathsf c_3\right) \\ \textrm{i}\mathsf t^{1/2}(\mathsf p+\mathsf c_1) &{} \dfrac{\mathsf s}{4}-\mathsf q+\mathsf p\mathsf c_1+\mathsf c_2 \end{pmatrix}. \end{aligned}$$
(5.17)

After some cumbersome but straightforward calculations, the asymptotics (5.14) improves (5.10) to

$$\begin{aligned} {{\varvec{\Phi }}}_0(\zeta )=\left( \textbf{I}+\frac{{{\varvec{\Phi }}}_0^{(1)}}{\zeta }+\mathcal {O}(\zeta ^{-2})\right) \zeta ^{\varvec{\sigma }_3/4}\textbf{U}_0^{-1}{{\,\mathrm{\mathrm e}\,}}^{-\frac{2}{3}\zeta ^{3/2}\varvec{\sigma }_3},\quad \zeta \rightarrow \infty . \end{aligned}$$
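For orientation, we also record where the coefficients \(\mathsf c_j\) come from: exponentiating the subleading terms \(\frac{\mathsf s^2}{4}\xi ^{-1/2}-\frac{\mathsf s^3}{24}\xi ^{-3/2}\) in the expansion of \(\frac{2}{3}\zeta ^{3/2}\) in the variable \(\xi =-\mathsf s+\mathsf t\zeta \) yields

$$\begin{aligned} {{\,\mathrm{\mathrm e}\,}}^{-\frac{2}{3}\zeta ^{3/2}}={{\,\mathrm{\mathrm e}\,}}^{-\mathsf t^{-3/2}\left( \frac{2}{3}\xi ^{3/2}+\mathsf s\xi ^{1/2}\right) }\left( 1+\mathsf c_1\xi ^{-1/2}+\mathsf c_2\xi ^{-1}+\mathsf c_3\xi ^{-3/2}+\mathcal {O}(\xi ^{-2})\right) ,\quad \xi \rightarrow \infty , \end{aligned}$$

and it is in the matching of the \(\xi \)-normalization (5.14) with the \(\zeta \)-normalization above that these coefficients, together with \(\mathsf q,\mathsf r,\mathsf p\), combine into the entries of \({{\varvec{\Phi }}}_0^{(1)}\).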

6 Bounds on the id-PII RHP

We need to obtain certain asymptotic bounds on \({{\varvec{\Phi }}}_0\) in different regimes. These bounds will be used later to show that the model problem \({\varvec{\Phi }}_\tau \) converges, as \(\tau \rightarrow +\infty \), to \({{\varvec{\Phi }}}_0\), as already indicated in (4.5) et seq. We split these estimates across the next subsections, according to the regime under consideration.

In what follows, for a matrix-valued function \(\textbf{M}=(\textbf{M}_{jk})\) and a contour \(\Sigma \subset \mathbb {C}\), we also use the pointwise matrix norm

$$\begin{aligned} |\textbf{M}(\zeta )|:=\max _{j,k} |\textbf{M}_{j,k}(\zeta )|, \end{aligned}$$
(6.1)

and the matrix \(L^p\) norm (possibly also with \(p=\infty \))

$$\begin{aligned} \Vert \textbf{M}\Vert _{L^p(\Sigma )}:=\max _{j,k} \Vert \textbf{M}_{j,k}\Vert _{L^p(\Sigma )}, \end{aligned}$$
(6.2)

where the measure is always understood to be the arc-length measure. In particular, for any two given matrices \(\textbf{M}_1\) and \(\textbf{M}_2\) the inequality

$$\begin{aligned} \Vert \textbf{M}_1\textbf{M}_2\Vert _{L^\infty (\Sigma )}\le 2 \Vert \textbf{M}_1\Vert _{L^\infty (\Sigma )}\Vert \textbf{M}_2\Vert _{L^\infty (\Sigma )} \end{aligned}$$

is satisfied. Similar straightforward inequalities involving \(L^1,L^2\) and \(L^\infty \) and the pointwise norm (6.1) also hold, and will be used without further mention. Sometimes we also write

$$\begin{aligned} \Vert \textbf{M}\Vert _{L^p\cap L^q(\Sigma )}:=\max \left\{ \Vert \textbf{M}\Vert _{L^p(\Sigma )}, \Vert \textbf{M}\Vert _{L^q(\Sigma )} \right\} , \end{aligned}$$
(6.3)

to indicate that the corresponding estimates and convergences take place in both norms simultaneously.
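For instance, the factor 2 in the \(L^\infty \) product inequality above merely accounts for the two-term sum in a \(2\times 2\) matrix product: for each \(\zeta \in \Sigma \),

$$\begin{aligned} |(\textbf{M}_1\textbf{M}_2)_{jk}(\zeta )|\le \sum _{l=1}^{2}|(\textbf{M}_1)_{jl}(\zeta )|\,|(\textbf{M}_2)_{lk}(\zeta )|\le 2\,|\textbf{M}_1(\zeta )|\,|\textbf{M}_2(\zeta )|, \end{aligned}$$

and taking the supremum over \(\zeta \) and the maximum over j, k gives the stated bound.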

6.1 The singular regime

The first asymptotic regime we consider is

$$\begin{aligned} \mathsf s\ge \mathsf s_0 \quad \text {and}\quad \mathsf t_0\le \mathsf t\le \frac{1}{\mathsf t_0}, \end{aligned}$$

where \(\mathsf t_0\in (0,1)\) is any given value, and \(\mathsf s_0=\mathsf s_0(\mathsf t_0)>0\) will be made sufficiently large depending on \(\mathsf t_0>0\), but independent of \(\mathsf t\) within the range above. With (5.4) in mind, this is a particular case of the singular regime in [28].

For this asymptotic regime, we need the following result.

Proposition 6.1

For any \(\mathsf t_0\in (0,1)\) there exists \(\mathsf s_0=\mathsf s_0(\mathsf t_0)>0\), \(M=M(\mathsf t_0)>0\) and \(\eta =\eta (\mathsf t_0)>0\) such that the inequalities

$$\begin{aligned}&\left| {{\varvec{\Phi }}}_{0,+}(\zeta )\textbf{E}_{12}{{\varvec{\Phi }}}_{0,+}(\zeta )^{-1}\right| \le M{{\,\mathrm{\mathrm e}\,}}^{-\eta {{\,\textrm{Re}\,}}(\zeta ^{3/2})},{} & {} \zeta \in \mathsf \Sigma _0, \\&\left| {{\varvec{\Phi }}}_{0,+}(\zeta )\textbf{E}_{21}{{\varvec{\Phi }}}_{0,+}(\zeta )^{-1}\right| \le M{{\,\mathrm{\mathrm e}\,}}^{-\eta {{\,\textrm{Re}\,}}(\zeta ^{3/2})},{} & {} \zeta \in \mathsf \Sigma _1\cup \mathsf \Sigma _3, \quad \text {and}\\&\left| {{\varvec{\Phi }}}_{0,+}(\zeta )\textbf{E}_{22}{{\varvec{\Phi }}}_{0,+}(\zeta )^{-1}\right| \le M|\zeta |^{1/2},{} & {} \zeta \in \mathsf \Sigma _2, \end{aligned}$$

hold true for any \(\mathsf s\ge \mathsf s_0\) and any \(\mathsf t\in [\mathsf t_0,1/\mathsf t_0]\).

The proof of Proposition 6.1 amounts to collecting several pieces of the analysis in [28], so before going into the details we review some further notions from that work.

Introduce

$$\begin{aligned} {{\varvec{\Phi }}}_\textrm{Ai}(\zeta ):=-\sqrt{2\pi } \times {\left\{ \begin{array}{ll} \begin{pmatrix} {{\,\textrm{Ai}\,}}'(\zeta ) &{} -{{\,\mathrm{\mathrm e}\,}}^{2\pi \textrm{i}/3}{{\,\textrm{Ai}\,}}'({{\,\mathrm{\mathrm e}\,}}^{-2\pi \textrm{i}/3}\zeta ) \\ \textrm{i}{{\,\textrm{Ai}\,}}(\zeta ) &{} -\textrm{i}{{\,\mathrm{\mathrm e}\,}}^{-2\pi \textrm{i}/3}{{\,\textrm{Ai}\,}}({{\,\mathrm{\mathrm e}\,}}^{-2\pi \textrm{i}/3}\zeta ) \end{pmatrix}, &{} {{\,\textrm{Im}\,}}\zeta >0, \\ \begin{pmatrix} {{\,\textrm{Ai}\,}}'(\zeta ) &{} {{\,\mathrm{\mathrm e}\,}}^{-2\pi \textrm{i}/3}{{\,\textrm{Ai}\,}}'({{\,\mathrm{\mathrm e}\,}}^{2\pi \textrm{i}/3}\zeta ) \\ \textrm{i}{{\,\textrm{Ai}\,}}(\zeta ) &{} \textrm{i}{{\,\mathrm{\mathrm e}\,}}^{2\pi \textrm{i}/3}{{\,\textrm{Ai}\,}}({{\,\mathrm{\mathrm e}\,}}^{2\pi \textrm{i}/3}\zeta ) \end{pmatrix},&{{\,\textrm{Im}\,}}\zeta <0. \end{array}\right. } \end{aligned}$$
(6.4)

This is the matrix appearing in [28, Equation (2.5)]. With the correspondence of variables (5.4) in mind, when we combine our identity (5.11) with [28, Equation (2.8)], we obtain the equality

$$\begin{aligned} {{\varvec{\Phi }}}_0(\zeta )=\textbf{Y}(\zeta ){{\varvec{\Phi }}}_\textrm{Ai}(\zeta ) \times {\left\{ \begin{array}{ll} \textbf{I}, &{} -2\pi /3<\arg \zeta<2\pi /3,\\ \textbf{I}\mp (1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s+\mathsf t\zeta })\textbf{E}_{21}, &{} 2\pi /3<\pm \arg \zeta <\pi . \end{array}\right. } \end{aligned}$$
(6.5)

The exact form of the matrix \(\textbf{Y}\) is not important for us, but we can interpret this last equality as a defining identity for \(\textbf{Y}\). What is important for us is that \(\textbf{Y}\) is analytic off the real axis, with a jump matrix \(\textbf{J}_{\textbf{Y}}\) on \(\mathbb {R}\) which admits an analytic continuation to a neighborhood of the axis.

The small norm theory for \(\textbf{Y}\) in our regime of interest was carried out in [28, Lemma 5.1 and Section 5.2]. As a consequence, we obtain that for any \(\mathsf t_0>0\) there exist \(M=M(\mathsf t_0)>0,\mathsf s_0=\mathsf s_0(\mathsf t_0)>0,\eta =\eta (\mathsf t_0)>0\) such that the inequalities

$$\begin{aligned} \Vert \textbf{J}_{\textbf{Y}}-\textbf{I}\Vert _{L^\infty \cap L^1(\mathbb {R})}\le M{{\,\mathrm{\mathrm e}\,}}^{-\eta \mathsf s},\quad \text {and}\quad \Vert \textbf{Y}_\pm -\textbf{I}\Vert _{L^\infty \cap L^1(\mathbb {R})}\le M{{\,\mathrm{\mathrm e}\,}}^{-\eta \mathsf s} \end{aligned}$$

hold for any \(\mathsf s\ge \mathsf s_0\) and any \(\mathsf t\in [\mathsf t_0,1/\mathsf t_0]\). Also as a consequence of the small norm theory, we obtain the expression

$$\begin{aligned} \textbf{Y}(\zeta )=\textbf{I}+\frac{1}{2\pi \textrm{i}}\int _\mathbb {R}\frac{\textbf{Y}_-(x)(\textbf{J}_{\textbf{Y}}(x)-\textbf{I})}{x-\zeta } \textrm{d}x,\quad \zeta \in \mathbb {C}{\setminus } \mathbb {R}. \end{aligned}$$

We combine this last identity with the fact that \(\textbf{J}_{\textbf{Y}}\) admits an analytic continuation in a neighborhood of \(\mathbb {R}\), and learn that there exists \(M=M(\mathsf t_0)>0\) for which

$$\begin{aligned} |\textbf{Y}(\zeta )^{\pm 1}|\le M, \end{aligned}$$
(6.6)

for every \(\zeta \in \mathbb {C}\), \(\mathsf s\ge \mathsf s_0\) and \(\mathsf t\in [\mathsf t_0,1/\mathsf t_0]\).

Proof of Proposition 6.1

For \(\zeta \in \mathsf \Sigma _0=(0,\infty )\), we use (6.5) and the definition (6.4) of \({{\varvec{\Phi }}}_\textrm{Ai}\) to write

$$\begin{aligned} {{\varvec{\Phi }}}_{0,+}(\zeta )\textbf{E}_{12}{{\varvec{\Phi }}}_{0,+}(\zeta )^{-1}= 2\pi \textbf{Y}_+(\zeta ) \begin{pmatrix} -\textrm{i}{{\,\textrm{Ai}\,}}(\zeta ){{\,\textrm{Ai}\,}}'(\zeta ) &{} {{\,\textrm{Ai}\,}}'(\zeta )^2 \\ {{\,\textrm{Ai}\,}}(\zeta )^2 &{} \textrm{i}{{\,\textrm{Ai}\,}}(\zeta ){{\,\textrm{Ai}\,}}'(\zeta ) \end{pmatrix} \textbf{Y}_+(\zeta )^{-1}. \end{aligned}$$

Using the bound (6.6), the continuity and the known asymptotics as \(\zeta \rightarrow \infty \) of the Airy function and its derivative, the claim along \(\mathsf \Sigma _0\) follows.
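Quantitatively, the classical asymptotics of the Airy function and its derivative on the positive axis,

$$\begin{aligned} {{\,\textrm{Ai}\,}}(\zeta )\sim \frac{{{\,\mathrm{\mathrm e}\,}}^{-\frac{2}{3}\zeta ^{3/2}}}{2\sqrt{\pi }\,\zeta ^{1/4}},\qquad {{\,\textrm{Ai}\,}}'(\zeta )\sim -\frac{\zeta ^{1/4}{{\,\mathrm{\mathrm e}\,}}^{-\frac{2}{3}\zeta ^{3/2}}}{2\sqrt{\pi }},\qquad \zeta \rightarrow +\infty , \end{aligned}$$

show that every entry of the middle matrix above is \(\mathcal {O}\big ((1+\zeta ^{1/2}){{\,\mathrm{\mathrm e}\,}}^{-\frac{4}{3}\zeta ^{3/2}}\big )\), so that together with (6.6) the bound along \(\mathsf \Sigma _0\) holds with any \(\eta <4/3\).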

The claim for \(\zeta \in \mathsf \Sigma _j\) with \(j=1,2,3\) follows in exactly the same explicit manner; we skip the details. \(\square \)

6.2 The non-asymptotic regime

In the non-asymptotic regime, we fix any \(\mathsf t_0\in (0,1)\) and \(\mathsf s_0>0\) and seek bounds on certain entries of \({{\varvec{\Phi }}}_0\) which are valid uniformly within the range

$$\begin{aligned} |\mathsf s|\le \mathsf s_0\quad \text {and}\quad \mathsf t_0\le \mathsf t\le \frac{1}{\mathsf t_0}. \end{aligned}$$

For the next result, we recall the matrix norm introduced in (6.1).

Proposition 6.2

Fix any values \(\mathsf t_0\in (0,1)\) and \(\mathsf s_0>0\). There exist \(M=M(\mathsf s_0,\mathsf t_0)>0\) and \(\eta =\eta (\mathsf s_0,\mathsf t_0)>0\) for which the estimates

$$\begin{aligned}&\left| {{\varvec{\Phi }}}_{0,+}(\zeta )\textbf{E}_{12}{{\varvec{\Phi }}}_{0,+}(\zeta )^{-1}\right| \le M{{\,\mathrm{\mathrm e}\,}}^{-\eta {{\,\textrm{Re}\,}}(\zeta ^{3/2})},{} & {} \zeta \in \mathsf \Sigma _0, \\&\left| {{\varvec{\Phi }}}_{0,+}(\zeta )\textbf{E}_{21}{{\varvec{\Phi }}}_{0,+}(\zeta )^{-1}\right| \le M{{\,\mathrm{\mathrm e}\,}}^{-\eta {{\,\textrm{Re}\,}}(\zeta ^{3/2})},{} & {} \zeta \in \mathsf \Sigma _1\cup \mathsf \Sigma _3, \quad \text {and}\\&\left| {{\varvec{\Phi }}}_{0,+}(\zeta )\textbf{E}_{22}{{\varvec{\Phi }}}_{0,+}(\zeta )^{-1}\right| \le M|\zeta |^{1/2},{} & {} \zeta \in \mathsf \Sigma _2. \end{aligned}$$

hold true uniformly for \(|\mathsf s|\le \mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le \mathsf t_0^{-1}\).

Proof

The asymptotic behavior as \(\zeta \rightarrow \infty \) in the RHP–\({{\varvec{\Phi }}}_0\) is valid uniformly up to the boundary values \({{\varvec{\Phi }}}_{0,\pm }\) as well, and also uniformly when the parameters \(\mathsf s\) and \(\mathsf t\) vary within compact sets, implying that

$$\begin{aligned} |{{\varvec{\Phi }}}_{0,+}(\zeta )\textbf{E}_{12}{{\varvec{\Phi }}}_{0,+}(\zeta )^{-1}|\le {{\,\mathrm{\mathrm e}\,}}^{-\frac{4}{3}{{\,\textrm{Re}\,}}(\zeta ^{3/2})} | \zeta ^{\varvec{\sigma }_3/4}\textbf{U}_0^{-1}\textbf{E}_{12}\textbf{U}_0\zeta ^{-\varvec{\sigma }_3/4} |(1+\mathcal {O}(\zeta ^{-1})),\quad \zeta \rightarrow \infty . \end{aligned}$$

Combined with the continuity of the boundary value \({{\varvec{\Phi }}}_{0,+}\) with respect to both \(\zeta \) and also \(\mathsf s,\mathsf t\), the first estimate follows. The remaining estimates are completely analogous. \(\square \)

7 Asymptotic Analysis for the Model Problem with Admissible Data

We now carry out the asymptotic analysis as \(\tau \rightarrow +\infty \) of \({\varvec{\Phi }}_\tau \) introduced in (4.5). For that, we fix \(\mathsf s_0>0\) and \(\mathsf t_0\in (0,1)\) and work under the assumption that

$$\begin{aligned} \tau \rightarrow +\infty \quad \text {with}\quad \mathsf s\ge -\mathsf s_0 \quad \text {and}\quad \mathsf t_0\le \mathsf t\le \frac{1}{\mathsf t_0}. \end{aligned}$$
(7.1)

During this section, \(\mathsf h_\tau \) always denotes an admissible function in the sense of Definition 4.1, and \({\varvec{\Phi }}_\tau \) is the solution to the associated RHP.

We also talk about uniformity of error terms in the parameter \(\mathsf t\) ranging over a compact interval \(K\subset (0,\infty )\), and by this we mean the following. The solution \({\varvec{\Phi }}_\tau \) depends on the parameter \(\mathsf t\) via the derivative \(\mathsf H'(0)=-\mathsf t\), see Definition 4.1. We view \(\mathsf H=\mathsf H_\mathsf t\) as varying with \(\mathsf t\) while keeping all the remaining derivatives \(\mathsf H^{(k)}(0)\), \(k\ne 1\), fixed. By analyticity this determines \(\mathsf H\) uniquely on \(D_\delta (0)\), but not outside this disk. We then take \(\mathsf H_\mathsf t\) outside \(D_\delta (0)\) to be any extension from \(D_\delta (0)\) that satisfies Definition 4.1, with the additional requirement that the constants \(\eta , \widehat{\eta }\) and \(\epsilon \) in (iii) may depend on K but are independent of \(\mathsf t\in K\). Of course, to each \(\mathsf H_\mathsf t\) extended this way there corresponds a solution \({\varvec{\Phi }}_\tau \) of the associated RHP. By uniformity in \(\mathsf t\in K\) we mean that the error may depend on K and on the corresponding values \(\eta , \widehat{\eta }\) and \(\epsilon \), but is valid for any \({\varvec{\Phi }}_\tau \) obtained from an extension \(\mathsf H_\mathsf t\) constructed under the requirement just explained.

The asymptotic analysis itself makes use of somewhat standard arguments and objects in the RHP literature. Some consequences of this asymptotic analysis will be needed later, and we now state them.

The first such consequence is the existence of a solution with asymptotic formulas relating quantities of interest with the corresponding quantities in the id-PII RHP.

Theorem 7.1

Fix an admissible function \(\mathsf h_\tau \) in the sense of Definition 4.1. There exists \(\tau _0=\tau _0(\mathsf s_0,\mathsf t_0)>0\) such that for any \(\tau \ge \tau _0\) and any \(\mathsf s,\mathsf t\) as in (7.1), the RHP for \({\varvec{\Phi }}(\cdot \mid \mathsf h_\tau )\) admits a unique solution \({\varvec{\Phi }}={\varvec{\Phi }}_\tau \) as in (4.5).

Furthermore, for any \(\kappa \in (0,1)\), the coefficient \({\varvec{\Phi }}^{(1)}={\varvec{\Phi }}^{(1)}_\tau \) in the asymptotic condition (4.3) satisfies

$$\begin{aligned} {\varvec{\Phi }}^{(1)}_\tau ={{\varvec{\Phi }}}_0^{(1)}+\mathcal {O}(\tau ^{-\kappa }),\quad \tau \rightarrow +\infty , \end{aligned}$$
(7.2)

where \({\varvec{\Phi }}^{(1)}_0\) is as in (5.17) and the error term is uniform for \(\mathsf s,\mathsf t\) as in (7.1). Also, still for \(\kappa \in (0,1)\) the asymptotic formula

$$\begin{aligned} {\varvec{\Phi }}_{\tau ,+}(x)=\left( \textbf{I}+\mathcal {O}\left( \frac{1}{\tau ^{\kappa }(1+|x|)}\right) \right) {{\varvec{\Phi }}}_{0,+}(x),\quad \tau \rightarrow +\infty \end{aligned}$$
(7.3)

holds true uniformly for \(x\in \mathsf \Sigma \) with \(|x|\le \tau ^{(1-\kappa )/2}\), and uniformly for \(\mathsf s,\mathsf t\) as in (7.1).

The second consequence connects the solution \({\varvec{\Phi }}_\tau \) directly with the statistics \(\mathsf Q\). For its statement, set

$$\begin{aligned} {\varvec{\Delta }}_\tau (x):=\textbf{I}+(1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_\tau (x)})\chi _{(-\infty ,0)}(x)\textbf{E}_{21},\quad x\in \mathbb {R}. \end{aligned}$$

Theorem 7.2

Fix \(a,b>0\), \(\mathsf s_0>0\) and \(\mathsf t_0\in (0,1)\). For any \(\kappa \in (0,1)\), the estimate

$$\begin{aligned}{} & {} \frac{1}{2\pi \textrm{i}} \int _{\mathsf s}^{\infty }\int _{-\tau a}^{\tau b}\frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)}}{\left( 1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)}\right) ^2}\left[ {\varvec{\Delta }}_\tau (x\mid u)^{-1}{\varvec{\Phi }}_{\tau ,+}(x\mid u)^{-1}\left( {\varvec{\Phi }}_{\tau ,+} {\varvec{\Delta }}_\tau \right) '(x\mid u)\right] _{21}\textrm{d}x \textrm{d}u\\{} & {} \quad = -\log \mathsf L^{{{\,\textrm{Ai}\,}}}(-\mathsf s/\mathsf t,\mathsf t^3) +\mathcal {O}(\tau ^{-\kappa }) \end{aligned}$$

holds as \(\tau \rightarrow +\infty \), uniformly for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\).

For the proof of these results, we compare \({\varvec{\Phi }}_\tau \) with the solution \({{\varvec{\Phi }}}_0\) of the id-PII RHP via the Deift-Zhou nonlinear steepest descent method. The required asymptotic analysis is carried out in Sect. 7.1, and the proofs of Theorems 7.1 and 7.2 are completed in Sect. 7.2.

7.1 Asymptotic analysis

For \({{\varvec{\Phi }}}_0\) as introduced in (5.2) and whose properties were discussed in Sect. 5.1, we perform the transformation

$$\begin{aligned} \varvec{\Psi }_\tau (\zeta )={\varvec{\Phi }}_\tau (\zeta ){{\varvec{\Phi }}}_0(\zeta )^{-1},\quad \zeta \in \mathbb {C}{\setminus }\mathsf \Sigma . \end{aligned}$$
(7.4)

Then \(\varvec{\Psi }_\tau \) satisfies the following RHP.

\(\varvec{\Psi }_\tau \)-1.:

The matrix \(\varvec{\Psi }_\tau :\mathbb {C}{\setminus } \mathsf \Sigma \rightarrow \mathbb {C}^{2\times 2}\) is analytic.

\(\varvec{\Psi }_\tau \)-2.:

Along the interior of the arcs of \(\mathsf \Sigma \) the function \(\varvec{\Psi }_\tau \) admits continuous boundary values \(\varvec{\Psi }_{\tau ,\pm }\) related by \(\varvec{\Psi }_{\tau ,+}(\zeta )=\varvec{\Psi }_{\tau ,-}(\zeta )\textbf{J}_{\varvec{\Psi }_\tau }(\zeta )\), \(\zeta \in \mathsf \Sigma \). With

$$\begin{aligned} \lambda _0(\zeta ):=\frac{1}{1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_0(\zeta )}}, \quad \lambda _\tau (\zeta ):=\frac{1}{1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_\tau (\zeta )}}, \end{aligned}$$

where \(\mathsf h_0\) is as in (4.6), the jump matrix \(\textbf{J}_{\varvec{\Psi }_\tau }\) is

(7.5)
\(\varvec{\Psi }_\tau \)-3.:

For \({\varvec{\Phi }}_\tau ^{(1)}\) and \({{\varvec{\Phi }}}_0^{(1)}\) the residues at \(\infty \) of \({\varvec{\Phi }}_\tau \) and \({{\varvec{\Phi }}}_0\), respectively, the matrix \(\varvec{\Psi }_\tau \) has the asymptotic behavior

$$\begin{aligned} \varvec{\Psi }_\tau (\zeta )=\textbf{I}+\frac{1}{\zeta }({\varvec{\Phi }}_\tau ^{(1)}-{{\varvec{\Phi }}}_0^{(1)})+\mathcal {O}(1/\zeta ^2)\qquad \text {as}\quad \zeta \rightarrow \infty . \end{aligned}$$
(7.6)
\(\varvec{\Psi }_\tau \)-4.:

The matrix \(\varvec{\Psi }_\tau \) remains bounded as \(\zeta \rightarrow 0\).

The next step is to verify that the jump matrix decays to the identity in the appropriate norms. The terms in the jump that come from \({{\varvec{\Phi }}}_0\) are precisely the ones we already estimated in Sects. 6.1 and 6.2, so it remains to estimate the terms involving the \(\lambda \)-functions. The basic needed estimate is the following lemma.

Lemma 7.3

Fix \(\nu \in (0,1/2)\) and \(\mathsf t_0\in (0,1)\). The estimate

$$\begin{aligned} {{\,\mathrm{\mathrm e}\,}}^{\mathsf h_0(\zeta )-\mathsf h_\tau (\zeta )}=1+\mathcal {O}(\zeta ^2/\tau ), \quad \tau \rightarrow \infty , \end{aligned}$$

holds true uniformly for \(|\zeta |\le \tau ^\nu \) and uniformly for \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\), where the error term is independent of \(\mathsf s\in \mathbb {R}\).

Proof

Definition 4.1 of admissibility of \(\mathsf h_\tau \) ensures that, for \(\tau \) sufficiently large, we can expand \(\mathsf H(\zeta /\tau )\) in a power series near the origin and obtain the expansion

$$\begin{aligned} \mathsf h_\tau (\zeta )=\mathsf s-\mathsf t\zeta +\mathcal {O}(\zeta ^2/\tau ),\quad \tau \rightarrow +\infty , \end{aligned}$$

valid uniformly for \(|\zeta |\le \tau ^\nu \), \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\), and with error independent of \(\mathsf s\in \mathbb {R}\). Recalling that \(\mathsf h_0(\zeta )=\mathsf s-\mathsf t\zeta \), this gives \(\mathsf h_0(\zeta )-\mathsf h_\tau (\zeta )=\mathcal {O}(\zeta ^2/\tau )\); since \(|\zeta |^2/\tau \le \tau ^{2\nu -1}\rightarrow 0\), exponentiating this relation completes the proof. \(\square \)

We are now able to prove the appropriate convergence of \(\textbf{J}_{\varvec{\Psi }_\tau }\) to the identity matrix. We split the analysis into three lemmas, corresponding to different pieces of the contour \(\mathsf \Sigma \). In the results that follow we use the matrix norm notations introduced in (6.1)–(6.3).

Lemma 7.4

Fix  \(\mathsf t_0\in (0,1)\), \(\mathsf s_0>0\) and \(\nu \in (0,1/2)\). There exist \(\tau _0=\tau _0(\mathsf t_0,\mathsf s_0,\nu )>0\), \(M=M(\mathsf t_0,\mathsf s_0,\nu )>0\) and \(\eta =\eta (\mathsf t_0,\mathsf s_0,\nu )>0\) for which the inequality

$$\begin{aligned} \Vert \textbf{J}_{\varvec{\Psi }_\tau }-\textbf{I}\Vert _{L^1\cap L^\infty (\mathsf \Sigma _0)}\le M{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s} \max \left\{ \tau ^{-1+2\nu },{{\,\mathrm{\mathrm e}\,}}^{-\eta \tau ^{3\nu /2}} \right\} \end{aligned}$$

holds true for any \(\tau \ge \tau _0\), \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t\in [\mathsf t_0,1/\mathsf t_0]\).

Proof

Because both \(\mathsf h_\tau \) and \(\mathsf h_0\) are real-valued along the real line, the inequality

$$\begin{aligned} \left| \lambda _\tau (\zeta )-\lambda _0(\zeta )\right| =\frac{|{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_\tau (\zeta )}-{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_0(\zeta )}|}{(1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_\tau (\zeta )})(1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_0(\zeta )})}\le |{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_\tau (\zeta )}-{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_0(\zeta )}| \end{aligned}$$

is immediate. For \(0\le \zeta \le \tau ^\nu \), we then use Lemma 7.3 and the explicit expression for \(\mathsf h_0\) in (4.6) and obtain

$$\begin{aligned} \left| \lambda _\tau (\zeta )-\lambda _0(\zeta )\right| =\mathcal {O}\left( \frac{{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s+\mathsf t\zeta }}{\tau ^{1-2\nu }} \right) . \end{aligned}$$
(7.7)

For \(\zeta \ge \tau ^\nu \), we instead use that both \(\mathsf h_\tau \) and \(\mathsf h_0\) are real-valued along the positive axis and write

$$\begin{aligned}{} & {} |\lambda _\tau (\zeta )-\lambda _0(\zeta )| \le \left| \lambda _0(\zeta )-1 \right| +\left| \lambda _\tau (\zeta )-1 \right| \\{} & {} \quad =\frac{{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s+\mathsf t\zeta }}{1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s+\mathsf t\zeta }}+\frac{{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s-\tau \mathsf H(\zeta /\tau )}}{1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s-\tau \mathsf H(\zeta /\tau )}}\le {{\,\mathrm{\mathrm e}\,}}^{-\mathsf s}\left( {{\,\mathrm{\mathrm e}\,}}^{\mathsf t\zeta }+{{\,\mathrm{\mathrm e}\,}}^{-\tau \mathsf H(\zeta /\tau )}\right) . \end{aligned}$$

From Definition 4.1-(iii) we bound \({{\,\mathrm{\mathrm e}\,}}^{-\tau \mathsf H(\zeta /\tau )}\le {{\,\mathrm{\mathrm e}\,}}^{\widehat{\eta }\zeta ^{3/2-\epsilon }}\) and simplify the last inequality to

$$\begin{aligned} |\lambda _\tau (\zeta )-\lambda _0(\zeta )| \le {{\,\mathrm{\mathrm e}\,}}^{-\mathsf s}{{\,\mathrm{\mathrm e}\,}}^{\tilde{\eta }\zeta ^{\alpha }},\quad \zeta \ge \tau ^\nu , \; \alpha :=\max \{1,3/2-\epsilon \}<\frac{3}{2} , \end{aligned}$$
(7.8)

for a new value \(\tilde{\eta }>0\).

Recall that \(\textbf{J}_{\varvec{\Psi }_\tau }\) was given in (7.5). We use (7.7) and (7.8) in combination with Propositions 6.1 and 6.2 to get the existence of a value \(\tau _0>0\) for which

$$\begin{aligned} |\textbf{J}_{{\varvec{\Psi }}_\tau }(\zeta )-\textbf{I}| \le M {{\,\mathrm{\mathrm e}\,}}^{-\mathsf s} {{\,\mathrm{\mathrm e}\,}}^{-\eta \zeta ^{3/2}} \left( \chi _{(0,\tau ^{\nu })}(\zeta ) \frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf t\zeta }}{\tau ^{1-2\nu }}+{{\,\mathrm{\mathrm e}\,}}^{\tilde{\eta }\zeta ^\alpha }\chi _{(\tau ^{\nu },+\infty )}(\zeta )\right) ,\quad \tau \ge \tau _0, \end{aligned}$$
(7.9)

where \(\eta>0,M>0\) may depend on \(\mathsf s_0,\mathsf t_0\) and \(\nu \in (0,1/2)\), but are independent of \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\). After appropriately changing the values of \(\tilde{\eta },\eta ,M\), and having in mind that \(\alpha <3/2\), the result follows from this inequality. \(\square \)
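In the last step of the proof above we used that, since \(\alpha <3/2\), for \(\tau \) sufficiently large there exists \(\eta '=\eta '(\eta ,\tilde{\eta },\alpha )>0\) with

$$\begin{aligned} \sup _{\zeta \ge \tau ^{\nu }}{{\,\mathrm{\mathrm e}\,}}^{-\eta \zeta ^{3/2}+\tilde{\eta }\zeta ^{\alpha }}\le {{\,\mathrm{\mathrm e}\,}}^{-\eta '\tau ^{3\nu /2}} \quad \text {and}\quad \int _{\tau ^{\nu }}^{\infty }{{\,\mathrm{\mathrm e}\,}}^{-\eta \zeta ^{3/2}+\tilde{\eta }\zeta ^{\alpha }}\,\textrm{d}\zeta \le {{\,\mathrm{\mathrm e}\,}}^{-\eta '\tau ^{3\nu /2}}, \end{aligned}$$

whereas on \((0,\tau ^{\nu })\) the factor \({{\,\mathrm{\mathrm e}\,}}^{-\eta \zeta ^{3/2}+\mathsf t\zeta }\) is bounded, both in \(L^\infty \) and in \(L^1\), by a constant depending only on \(\eta \) and \(\mathsf t_0\).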

Next, we prove the equivalent result along the pieces of \(\mathsf \Sigma \) which are not on the real line.

Lemma 7.5

Fix \(\mathsf t_0\in (0,1)\), \(\mathsf s_0>0\) and \(\nu \in (0,1/2)\). There exist \(\tau _0=\tau _0(\mathsf t_0,\mathsf s_0,\nu )>0\), \(M=M(\mathsf t_0,\mathsf s_0,\nu )>0\) and \(\eta =\eta (\mathsf t_0,\mathsf s_0,\nu )>0\), for which the inequality

$$\begin{aligned} \Vert \textbf{J}_{\varvec{\Psi }_\tau }-\textbf{I}\Vert _{L^1\cap L^\infty (\mathsf \Sigma _1\cup \mathsf \Sigma _3)}\le M{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s} \max \left\{ \tau ^{-1+2\nu },{{\,\mathrm{\mathrm e}\,}}^{-\eta \tau ^{3\nu /2}} \right\} \end{aligned}$$

holds true for any \(\tau \ge \tau _0\), \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t\in [\mathsf t_0,1/\mathsf t_0]\).

Proof

Write

$$\begin{aligned} \frac{1}{\lambda _\tau (\zeta )}-\frac{1}{\lambda _0(\zeta )}={{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_\tau (\zeta )}-{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_0(\zeta )}=-{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s+\mathsf t\zeta }(1-{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_0(\zeta )-\mathsf h_\tau (\zeta )}). \end{aligned}$$

From Lemma 7.3, we estimate for \(0\le |\zeta |\le \tau ^\nu \),

$$\begin{aligned} \frac{1}{\lambda _\tau (\zeta )}-\frac{1}{\lambda _0(\zeta )}=\mathcal {O}\left( \frac{{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s+\mathsf t{{\,\textrm{Re}\,}}\zeta }}{\tau ^{1-2\nu }}\right) ,\quad \tau \rightarrow \infty , \end{aligned}$$

where the implicit error term is independent of \(\mathsf s\) and uniform for \(\mathsf t\in [\mathsf t_0,1/\mathsf t_0]\). On the other hand, from the explicit form of \(\mathsf h_0\) and Definition 4.1-(iii),

$$\begin{aligned} \left| \frac{1}{\lambda _\tau (\zeta )}-\frac{1}{\lambda _0(\zeta )}\right| \le {{\,\mathrm{\mathrm e}\,}}^{-\mathsf s}\left( {{\,\mathrm{\mathrm e}\,}}^{\mathsf t{{\,\textrm{Re}\,}}\zeta }+{{\,\mathrm{\mathrm e}\,}}^{-\eta |\zeta |}\right) . \end{aligned}$$

We combine this inequality with Propositions 6.1 and 6.2 and apply them to (7.5). The conclusion is that there exist \(M>0\), \(\eta _1,\eta _2>0\) and \(\tau _0>0\), depending on \(\nu ,\mathsf t_0,\mathsf s_0\), for which the inequality

$$\begin{aligned} \left| \textbf{J}_{\varvec{\Psi }_\tau }(\zeta )-\textbf{I}\right| \le M {{\,\mathrm{\mathrm e}\,}}^{-\mathsf s} {{\,\mathrm{\mathrm e}\,}}^{-\eta _1{{\,\textrm{Re}\,}}\zeta ^{3/2}+\eta _2{{\,\textrm{Re}\,}}\zeta } \left( \frac{1}{\tau ^{1-2\nu }}\chi _{\{|\zeta |\le \tau ^\nu \}}(\zeta )+\chi _{\{|\zeta |\ge \tau ^\nu \}}(\zeta )\right) ,\quad \zeta \in \mathsf \Sigma _1\cup \mathsf \Sigma _3, \end{aligned}$$
(7.10)

is valid for every \(\tau \ge \tau _0,\) \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t\in [\mathsf t_0,1/\mathsf t_0]\). The definition (4.1) of the contours \(\mathsf \Sigma _1\) and \(\mathsf \Sigma _3\) assures us that \({{\,\textrm{Re}\,}}\zeta ^{3/2}>0\) and \({{\,\textrm{Re}\,}}\zeta <0\) on these contours. After possibly changing the values of the constants \(\eta _1,\eta _2\) and M, the result follows. \(\square \)

Finally, we now handle the jump on the negative axis.

Lemma 7.6

Fix \(\mathsf t_0\in (0,1)\), \(\mathsf s_0>0\) and \(\nu \in (0,1/2)\). There exist \(\tau _0=\tau _0(\mathsf t_0,\mathsf s_0,\nu )>0, M=M(\mathsf t_0,\mathsf s_0,\nu )>0\) and \(\eta =\eta (\mathsf t_0,\mathsf s_0,\nu )>0\) for which the inequality

$$\begin{aligned} \Vert \textbf{J}_{\varvec{\Psi }_\tau }-\textbf{I}\Vert _{L^1\cap L^\infty (\mathsf \Sigma _2)}\le M {{\,\mathrm{\mathrm e}\,}}^{-\mathsf s} \max \left\{ \tau ^{-1+2\nu },{{\,\mathrm{\mathrm e}\,}}^{-\eta \tau ^{\nu }} \right\} \end{aligned}$$

holds true for any \(\tau \ge \tau _0\), \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t\in [\mathsf t_0,1/\mathsf t_0]\).

Proof

The initial step is to rewrite the last line of (7.5) as

$$\begin{aligned} \textbf{J}_{\varvec{\Psi }_{\tau }}(\zeta )-\textbf{I}=\left( \frac{\lambda _\tau (\zeta )}{\lambda _0(\zeta )}-1\right) \textbf{I}+\left( \frac{\lambda _0(\zeta )}{\lambda _\tau (\zeta )}-\frac{\lambda _\tau (\zeta )}{\lambda _0(\zeta )}\right) {{\varvec{\Phi }}}_{0,+}(\zeta )\textbf{E}_{22}{{\varvec{\Phi }}}_{0,+}(\zeta )^{-1},\quad \zeta \in \mathsf \Sigma _2. \end{aligned}$$
(7.11)

The identities

$$\begin{aligned} \frac{\lambda _\tau (\zeta )}{\lambda _0(\zeta )}-1=\frac{{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_0(\zeta )}-{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_\tau (\zeta )}}{1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_\tau (\zeta )}},\qquad \frac{\lambda _0(\zeta )}{\lambda _\tau (\zeta )}-1=-\frac{{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_0(\zeta )}-{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_\tau (\zeta )}}{1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_0(\zeta )}}, \end{aligned}$$

are trivial, and because \(\mathsf h_0\) and \(\mathsf h_\tau \) are real-valued along \(\mathsf \Sigma _2=(-\infty ,0)\), these equalities give

$$\begin{aligned} \left| \frac{\lambda _\tau (\zeta )}{\lambda _0(\zeta )}-1\right| +\left| \frac{\lambda _0(\zeta )}{\lambda _\tau (\zeta )}-1\right| \le 2\left| {{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_0(\zeta )}-{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_\tau (\zeta )}\right| ,\quad \zeta <0. \end{aligned}$$

For \(|\zeta |\le \tau ^\nu \) we use Lemma 7.3 and estimate

$$\begin{aligned} \left| \frac{\lambda _\tau (\zeta )}{\lambda _0(\zeta )}-1\right| +\left| \frac{\lambda _0(\zeta )}{\lambda _\tau (\zeta )}-1\right| =\mathcal {O}\left( \frac{{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s-\mathsf t|\zeta |}}{\tau ^{1-2\nu }}\right) ,\quad \tau \rightarrow +\infty ,\quad -\tau ^{\nu }\le \zeta \le 0, \end{aligned}$$

whereas for \(\zeta \le -\tau ^\nu \) we use instead the definition of \(\mathsf h_0\) in (4.6) and Definition 4.1-(iii) and write

$$\begin{aligned} \left| \frac{\lambda _\tau (\zeta )}{\lambda _0(\zeta )}-1\right| +\left| \frac{\lambda _0(\zeta )}{\lambda _\tau (\zeta )}-1\right| \le 2{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s}\left( {{\,\mathrm{\mathrm e}\,}}^{-\mathsf t|\zeta |}+{{\,\mathrm{\mathrm e}\,}}^{-\eta |\zeta |}\right) \le 4{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s-\min \{\mathsf t,\eta \}|\zeta |}. \end{aligned}$$

We combine these two inequalities with Propositions 6.1 and 6.2, and apply them to (7.11). As a result, we learn that there exist \(M>0,\eta>0,\tau _0>0\) for which the estimate

$$\begin{aligned} \left| \textbf{J}_{\varvec{\Psi }_\tau }(\zeta )-\textbf{I}\right| \le M|\zeta |^{1/2}{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s-\eta |\zeta |} \left( \frac{1}{\tau ^{1-2\nu }}\chi _{(-\tau ^\nu ,0]}(\zeta )+\chi _{(-\infty ,-\tau ^\nu )}(\zeta )\right) ,\quad \zeta \le 0, \end{aligned}$$
(7.12)

is valid for any \(\mathsf s\ge -\mathsf s_0,\) \(\mathsf t\in [\mathsf t_0,1/\mathsf t_0]\), \(\tau \ge \tau _0\). After possibly changing the values of \(\eta ,M\), the result follows from standard arguments. \(\square \)

Now that we have controlled the asymptotic behavior of the jump matrix \(\textbf{J}_{\varvec{\Psi }_\tau }\), we are ready to obtain small norm estimates for \(\varvec{\Psi }_{\tau }\) itself. We summarize these estimates in the next result. For that, we recall the matrix norm notations introduced in (6.1), (6.2), (6.3).

Theorem 7.7

Fix \(\mathsf t_0\in (0,1)\) and \(\mathsf s_0>0\). There exists \(\tau _0=\tau _0(\mathsf t_0,\mathsf s_0)>0\) for which the solution \(\varvec{\Psi }_\tau \) uniquely exists for any \(\tau \ge \tau _0\) and any \(\mathsf s\ge -\mathsf s_0, \mathsf t\in [\mathsf t_0,1/\mathsf t_0]\). Furthermore, it satisfies the following asymptotic properties.

Its boundary value \(\varvec{\Psi }_{\tau ,-}\) exists along \(\mathsf \Sigma \), and satisfies the estimate

$$\begin{aligned} \Vert \varvec{\Psi }_{\tau ,-}-\textbf{I}\Vert _{L^2(\mathsf \Sigma )}=\mathcal {O}\left( \tau ^{-\kappa }\right) ,\quad \tau \rightarrow +\infty , \end{aligned}$$

for any \(\kappa \in (0,1)\), where the error term, for a given \(\kappa \), is uniform for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t\in [\mathsf t_0,1/\mathsf t_0]\).

For \(\tau \) sufficiently large, the solution \(\varvec{\Psi }_\tau \) admits the representation

$$\begin{aligned} \varvec{\Psi }_\tau (\zeta )=\textbf{I}+\frac{1}{2\pi \textrm{i}}\int _{\mathsf \Sigma }\frac{\varvec{\Psi }_{\tau ,-}(w)(\textbf{J}_{\varvec{\Psi }_\tau }(w)-\textbf{I})}{w-\zeta }\textrm{d}w,\quad \zeta \in \mathbb {C}{\setminus } \mathsf \Sigma . \end{aligned}$$
(7.13)

Still for \(\tau \) sufficiently large, \(\varvec{\Psi }_\tau \) satisfies

$$\begin{aligned} \varvec{\Psi }_\tau (\zeta )=\textbf{I}+\varvec{\Psi }_{\tau }^{(1)}\frac{1}{\zeta }+\mathcal {O}(\zeta ^{-2}),\quad \zeta \rightarrow \infty ,\quad \text {with} \quad \varvec{\Psi }_{\tau }^{(1)}:=-\frac{1}{2\pi \textrm{i}}\int _{\mathsf \Sigma } \varvec{\Psi }_{\tau ,-}(\xi )(\textbf{J}_{\varvec{\Psi }_{\tau }}(\xi )-\textbf{I})\textrm{d}\xi . \end{aligned}$$
(7.14)

Proof

The small norm estimates provided by Lemmas 7.4, 7.5 and 7.6 allow us to apply the small norm theory for Riemann–Hilbert problems (see for instance [43, 47]), and the claims follow with standard methods. We stress that for this statement we only need the \(L^2\) and \(L^\infty \) estimates from the aforementioned lemmas, but the \(L^1\) estimates provided by them will be useful later. \(\square \)

7.2 Proof of main results of the section

We are ready to prove the main results of this section.

Proof of Theorem 7.1

During the whole proof we identify \(\nu =(1-\kappa )/2\).

The matrix \({{\varvec{\Phi }}}_0\) always exists, whereas Theorem 7.7 provides the existence of \(\varvec{\Psi }_{\tau }\) for \(\tau \) sufficiently large. From the relation (7.4) we obtain the claimed existence of \({\varvec{\Phi }}_\tau \).

Comparing (7.6) with (7.14) we obtain the identity

$$\begin{aligned} {\varvec{\Phi }}_\tau ^{(1)}={{\varvec{\Phi }}}_0^{(1)}+\varvec{\Psi }_{\tau }^{(1)}. \end{aligned}$$

Writing

$$\begin{aligned} \varvec{\Psi }_{\tau }^{(1)}=-\frac{1}{2\pi \textrm{i}}\int _{\mathsf \Sigma } (\varvec{\Psi }_{\tau ,-}(\xi )-\textbf{I})(\textbf{J}_{\varvec{\Psi }_{\tau }}(\xi )-\textbf{I})\textrm{d}\xi -\frac{1}{2\pi \textrm{i}}\int _{\mathsf \Sigma } (\textbf{J}_{\varvec{\Psi }_{\tau }}(\xi )-\textbf{I})\textrm{d}\xi , \end{aligned}$$
(7.15)

and using Cauchy-Schwarz,

$$\begin{aligned} |\varvec{\Psi }_{\tau }^{(1)}|\le \Vert \varvec{\Psi }_{\tau ,-}-\textbf{I}\Vert _{L^2(\mathsf \Sigma )} \Vert \textbf{J}_{\varvec{\Psi }_{\tau }}-\textbf{I}\Vert _{L^2(\mathsf \Sigma )}+\Vert \textbf{J}_{\varvec{\Psi }_{\tau }}-\textbf{I}\Vert _{L^1(\mathsf \Sigma )}, \end{aligned}$$

and from Lemmas 7.4, 7.5, 7.6 and Theorem 7.7, the right-hand side above is \(\mathcal {O}(\tau ^{-\kappa })\), for any \(\kappa \in (0,1)\) and uniformly for \(\mathsf s,\mathsf t\) as in (7.1), proving (7.2).
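Here, the \(L^2\) norm of \(\textbf{J}_{\varvec{\Psi }_{\tau }}-\textbf{I}\) is controlled by interpolating the \(L^1\) and \(L^\infty \) bounds from the same lemmas, namely

$$\begin{aligned} \Vert \textbf{J}_{\varvec{\Psi }_{\tau }}-\textbf{I}\Vert _{L^2(\mathsf \Sigma )}\le \Vert \textbf{J}_{\varvec{\Psi }_{\tau }}-\textbf{I}\Vert _{L^1(\mathsf \Sigma )}^{1/2}\,\Vert \textbf{J}_{\varvec{\Psi }_{\tau }}-\textbf{I}\Vert _{L^\infty (\mathsf \Sigma )}^{1/2}=\mathcal {O}(\tau ^{-\kappa }),\quad \tau \rightarrow +\infty . \end{aligned}$$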

To prove the asymptotic formula (7.3) we follow arguments presented in [61, Theorem 3.1] and [3, Lemma 2], with minor modifications to handle the uniformity on the unbounded set \(|x|\le \tau ^{\nu }\) as claimed.

First off, the jump matrix \(\textbf{J}_{\varvec{\Psi }_\tau }\) is \(C^\infty \) on \(\mathsf \Sigma \), in particular Hölder continuous, implying that \(\varvec{\Psi }_\tau \) extends continuously to its boundary values \(\varvec{\Psi }_{\tau ,\pm }\). Accounting also for the behavior of \(\varvec{\Psi }_{\tau }\) at \(\infty \) and combining with the maximum principle,

$$\begin{aligned} \Vert \varvec{\Psi }_{\tau }\Vert _{L^\infty (\mathbb {C}{\setminus } \mathsf \Sigma )}\le M_\tau :=\max \left\{ \Vert \varvec{\Psi }_{\tau ,+}\Vert _{L^\infty (\mathsf \Sigma )},\Vert \varvec{\Psi }_{\tau ,-}\Vert _{L^\infty (\mathsf \Sigma )} \right\} , \end{aligned}$$
(7.16)

where the constant \(M_\tau \) is finite. For a point \(x\in \mathsf \Sigma {\setminus } \{0\}\) and \(\varepsilon >0\), we consider the arcs \(C^\pm _\varepsilon (x)\) of the disk centered at x and radius \(\varepsilon \) which are on the ± side of \(\mathsf \Sigma \). We then set

$$\begin{aligned} \mathsf \Sigma ^\pm :=\left( \mathsf \Sigma {\setminus } D_\varepsilon (x)\right) \cup C^\pm _\varepsilon (x), \end{aligned}$$

with the orientation induced from \(\mathsf \Sigma \). We deform the contour in the integral representation (7.13) and then send \(\zeta \rightarrow x\), obtaining that

$$\begin{aligned} \varvec{\Psi }_{\tau ,\pm }(x)=\textbf{I}+\frac{1}{2\pi \textrm{i}}\int _{\mathsf \Sigma ^\mp }\frac{\varvec{\Psi }_{\tau ,-}(s)(\textbf{J}_{\varvec{\Psi }_{\tau }}(s)-\textbf{I})}{s-x}\textrm{d}s. \end{aligned}$$
(7.17)

From standard estimates and using (7.16), the representation just obtained yields

$$\begin{aligned} |\varvec{\Psi }_{\tau ,\pm }(x)|\le 1+\frac{1}{\pi \varepsilon }\Vert \varvec{\Psi }_{\tau ,-}\Vert _{L^\infty (\mathsf \Sigma ^\mp )} \Vert \textbf{J}_{\varvec{\Psi }_\tau }-\textbf{I}\Vert _{L^1(\mathsf \Sigma ^\mp )}\le 1+\frac{1}{\pi \varepsilon }M_\tau \Vert \textbf{J}_{\varvec{\Psi }_\tau }-\textbf{I}\Vert _{L^1(\mathsf \Sigma ^\mp )}, \end{aligned}$$

and therefore, provided \(\tau \) is large enough so that \(\frac{1}{\pi \varepsilon }\Vert \textbf{J}_{\varvec{\Psi }_\tau }-\textbf{I}\Vert _{L^1(\mathsf \Sigma ^\mp )}<1\),

$$\begin{aligned} M_\tau \le \left( 1-\frac{1}{\pi \varepsilon }\Vert \textbf{J}_{\varvec{\Psi }_\tau }-\textbf{I}\Vert _{L^1(\mathsf \Sigma ^\mp )}\right) ^{-1}. \end{aligned}$$
(7.18)

Lemmas 7.4, 7.5 and 7.6 provide \(L^p\) estimates for \(\textbf{J}_{\varvec{\Psi }_\tau }-\textbf{I}\) along \(\mathsf \Sigma \). Exploiting the fact that \(\mathsf \Sigma ^\pm \) is obtained from \(\mathsf \Sigma \) after a small deformation around the point \(x=\mathcal {O}(\tau ^{\nu })\), it is straightforward to see that the same estimates hold on \(\mathsf \Sigma ^\pm \), which can be summarized as

$$\begin{aligned} |\textbf{J}_{\varvec{\Psi }_\tau }(\zeta )-\textbf{I}| \le M{{\,\mathrm{\mathrm e}\,}}^{-\eta \min \{|\zeta |,|\zeta |^{3/2}\}}\left( \frac{1}{\tau ^{\kappa }}\chi _{\{|\zeta |\le \tau ^\nu \}}(\zeta )+\chi _{\{|\zeta |> \tau ^\nu \}}(\zeta )\right) ,\quad \zeta \in \mathsf \Sigma ^\pm , \end{aligned}$$
(7.19)

for some constants \(\eta ,M>0\) which may depend on \(\mathsf s_0,\mathsf t_0,\tau _0\) but are independent of \(\mathsf s\ge -\mathsf s_0,\mathsf t\in [\mathsf t_0,1/\mathsf t_0],\tau \ge \tau _0\), see (7.9), (7.10) and (7.12). Combining with (7.18), we conclude in particular that \(M_\tau \le 2\), for \(\mathsf s,\mathsf t,\tau \) in the same range of values. Having in mind (7.4), this bound on \(M_\tau \) applied to (7.17) is already enough to ensure that \(\varvec{\Psi }_{\tau ,\pm }-\textbf{I}=\mathcal {O}(\tau ^{-\kappa })\), but to obtain the decay in x for the error claimed in (7.3) a little more care is needed, as follows.

First off, we split the integral in (7.17) into two, namely along

$$\begin{aligned} J_x:=\{\zeta \in \mathsf \Sigma ^-\mid |\zeta -x|\ge |x|/2\}\quad \text {and}\quad \mathsf \Sigma ^-{\setminus } J_x. \end{aligned}$$

For the integral over \(J_x\), we estimate as

$$\begin{aligned} \left| \int _{J_x}\frac{\varvec{\Psi }_{\tau ,-}(s)(\textbf{J}_{\varvec{\Psi }_{\tau }}(s)-\textbf{I})}{s-x}\textrm{d}s \right| \le 2M_\tau \sup _{\zeta \in J_x}\frac{1}{|\zeta -x|} \Vert \textbf{J}_{\varvec{\Psi }_{\tau }}-\textbf{I}\Vert _{L^1(J_x)}=\mathcal {O}(\tau ^{-\kappa }|x|^{-1}), \end{aligned}$$

where we used (7.19), the fact that \(|x|\le \tau ^{\nu }\) and again the bound \(M_\tau \le 2\). Observing that \(|\zeta -x|\ge \varepsilon \) along the remaining piece, a similar argument yields

$$\begin{aligned} \left| \frac{\varvec{\Psi }_{\tau ,-}(s)(\textbf{J}_{\varvec{\Psi }_{\tau }}(s)-\textbf{I})}{s-x} \right| \le \frac{4}{\varepsilon } \left| \textbf{J}_{\varvec{\Psi }_{\tau }}(s)-\textbf{I}\right| ,\quad s\in \mathsf \Sigma ^-{\setminus } J_x, \end{aligned}$$

and again from (7.19) we see that the right-hand side above decays exponentially in x when \(x\rightarrow \infty \) and is \(\mathcal {O}(\tau ^{-\kappa })\) when \(\tau \rightarrow \infty \). From (7.17) we thus obtain

$$\begin{aligned} \left| \varvec{\Psi }_{\tau ,+}(x)-\textbf{I}\right| =\mathcal {O}\left( \frac{1}{(1+|x|)\tau ^{\kappa }}\right) ,\quad \tau \rightarrow \infty , \end{aligned}$$

uniformly for \(x\in \mathsf \Sigma , |x|\le \tau ^{-\nu }\), and uniformly for \(\mathsf s\ge -\mathsf s_0, \mathsf t_0\le \mathsf t\le 1/\mathsf t_0\). By virtue of (7.4), this proves (7.3). \(\square \)

Remark 7.8

For admissible functions \(\mathsf h_{\tau }\), the asymptotics (4.3) as \(\zeta \rightarrow \infty \) of \({\varvec{\Phi }}_\tau \) is valid uniformly in \(\tau , \mathsf t\) and \(\mathsf s\), in the sense that for any \(\mathsf t_0\in (0,1)\) and any \(\mathsf s_0>0\), there exist \(K>0\) and \(R>0\) such that

$$\begin{aligned} \left| {\varvec{\Phi }}_\tau (\zeta ){{\,\mathrm{\mathrm e}\,}}^{\frac{2}{3}\zeta ^{3/2}\varvec{\sigma }_3}\textbf{U}_0\zeta ^{-\varvec{\sigma }_3/4}-\textbf{I}\right| \le \frac{K}{|\zeta |},\quad \text {whenever } |\zeta |\ge R, \; \mathsf s\ge -\mathsf s_0, \; \mathsf t_0\le \mathsf t\le 1/\mathsf t_0, \end{aligned}$$

and we emphasize that K and R are independent of \(\mathsf s,\mathsf t\). To see that this is true, by virtue of (7.4) it is enough to show that the asymptotics (5.10) and (7.6) are uniform in the same sense; we indicate the proof for the latter, the former being analogous.

Using the trivial identity \(1/(w-\zeta )=w/(\zeta (w-\zeta ))-1/\zeta \), we express (7.13) as

$$\begin{aligned} \varvec{\Psi }_\tau (\zeta )=\textbf{I}-\frac{1}{2\pi \textrm{i}\zeta }\int _{\mathsf \Sigma }\varvec{\Psi }_{\tau ,-}(w)(\textbf{J}_{\varvec{\Psi }_\tau }(w)-\textbf{I})\textrm{d}w+\frac{1}{2\pi \textrm{i}\zeta }\int _{\mathsf \Sigma }\frac{ w\varvec{\Psi }_{\tau ,-}(w)(\textbf{J}_{\varvec{\Psi }_\tau }(w)-\textbf{I})}{w-\zeta }\textrm{d}w. \end{aligned}$$

Because \(\varvec{\Psi }_{\tau ,-}\in \textbf{I}+ L^1(\mathsf \Sigma )\) and \(\textbf{J}_{\varvec{\Psi }_{\tau }}-\textbf{I}\) decays pointwise exponentially fast, uniformly in \(\mathsf s,\mathsf t\), the two integrals are bounded uniformly in \(\zeta ,\tau ,\mathsf s,\mathsf t\), and the claimed uniform decay for \(\varvec{\Psi }_{\tau }\) follows.

To finish this section, it remains to prove Theorem 7.2, and to that end we first establish a lemma.

Lemma 7.9

Fix \(\mathsf s\in \mathbb {R}\) and \(\mathsf t_0\in (0,1)\). For any \(\nu \in (0,1/2)\), there exists \(\eta =\eta (\nu )>0\) independent of \(\mathsf s,\mathsf t,\tau \) for which the estimates

$$\begin{aligned} \int _{|x|\ge \tau ^{\nu }} \frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf tx-u}}{(1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf tx-u})^2}\left| \left[ ({{\varvec{\Delta }}}_0(x\mid u,\mathsf t)^{-1}{{\varvec{\Phi }}}_{0,+}(x\mid u,\mathsf t)^{-1}({{\varvec{\Phi }}}_{0,+}{{\varvec{\Delta }}}_0)'(x\mid u,\mathsf t)\right] _{21} \right| \textrm{d}x=\mathcal {O}({{\,\mathrm{\mathrm e}\,}}^{-u-\eta \tau ^{\nu }}) \end{aligned}$$
(7.20)

and

$$\begin{aligned} \int _{|x|\ge \tau ^{\nu }} \frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)}}{\left( 1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)}\right) ^2}\left| \left[ {\varvec{\Delta }}_\tau (x\mid u)^{-1}{\varvec{\Phi }}_{\tau ,+}(x\mid u)^{-1}\left( {\varvec{\Phi }}_{\tau ,+} {\varvec{\Delta }}_\tau \right) '(x\mid u)\right] _{21}\right| \textrm{d}x =\mathcal {O}({{\,\mathrm{\mathrm e}\,}}^{-u-\eta \tau ^{\nu }}) \end{aligned}$$
(7.21)

are valid as \(\tau \rightarrow \infty \), uniformly for \(u\ge \mathsf s\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\). In particular, both integrands are integrable over \(\mathbb {R}\).

Proof

We prove (7.21), which is slightly more technical because the integrand depends on \(\tau \); the estimate (7.20) follows in a similar manner.

Having in mind Remark 7.8, we use the expansion (4.3) to estimate

$$\begin{aligned}{} & {} {\varvec{\Phi }}_{\tau ,+}(x){\varvec{\Delta }}_{\tau }(x)=\left( 1+\mathcal {O}(x^{-1})\right) x^{\varvec{\sigma }_3/4}\textbf{U}_0^{-1}\left( \textbf{I}+\chi _{(-\infty ,0)}(x){{\,\mathrm{\mathrm e}\,}}^{\frac{4}{3}x^{3/2}_+}(1-{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_{\tau }(x)})\textbf{E}_{21}\right) {{\,\mathrm{\mathrm e}\,}}^{-\frac{2}{3}x_+^{3/2}\varvec{\sigma }_3},\quad x\in E_\tau . \end{aligned}$$

Observe that the factor \(\chi _{(-\infty ,0)}(x){{\,\mathrm{\mathrm e}\,}}^{\frac{4}{3}x^{3/2}_+}(1-{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_{\tau }(x)})\) is bounded, thanks to the facts that \(x_+^{3/2}\in \textrm{i}\mathbb {R}\) and \({{\,\textrm{Re}\,}}H>0\) for \(x<0\) (see Definition 4.1-(iii)). The identity above can be differentiated, and using it we obtain the crude bound

$$\begin{aligned} \left[ {\varvec{\Delta }}_\tau (x\mid u)^{-1}{\varvec{\Phi }}_{\tau ,+}(x\mid u)^{-1}\left( {\varvec{\Phi }}_{\tau ,+} {\varvec{\Delta }}_\tau \right) '(x\mid u)\right] _{21}={{\,\mathrm{\mathrm e}\,}}^{-\frac{4}{3}x_+^{3/2}}\mathcal {O}(|x|^{3}),\quad x\in E_\tau , \end{aligned}$$

which is non-optimal in x, but will be enough for the coming estimates, and which is valid uniformly for \(u\ge -\mathsf s_0,\mathsf t\in [\mathsf t_0,1/\mathsf t_0]\) as \(\tau \rightarrow \infty \). Thus, to conclude the result it is enough to estimate each of the integrals

$$\begin{aligned} I_-:=\int _{-\infty }^{-\tau ^\nu }|x|^3\frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)}}{(1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)})^2}\textrm{d}x\quad \text {and}\quad I_+:=\int _{\tau ^{\nu }}^{+\infty }|x|^3\frac{{{\,\mathrm{\mathrm e}\,}}^{-\frac{4}{3}x^{3/2}}{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)}}{(1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)})^2}\textrm{d}x. \end{aligned}$$

To estimate \(I_-\), we use the inequalities \(v/(1+v)\le 1\) and \(1/(1+v)\le 1/v\), valid for \(v>0\), to obtain

$$\begin{aligned} \frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)}}{(1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)})^2}\le {{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_{\tau }(x\mid u)}, \end{aligned}$$

and using now the inequality from Definition 4.1-(iii) along \(\mathsf \Sigma _2=(-\infty ,0)\), we obtain

$$\begin{aligned} I_-\le {{\,\mathrm{\mathrm e}\,}}^{-u}\int _{-\infty }^{-\tau ^\nu }|x|^3{{\,\mathrm{\mathrm e}\,}}^{-\eta |x|}\textrm{d}x=\mathcal {O}({{\,\mathrm{\mathrm e}\,}}^{-u-\widetilde{\eta }\tau ^{\nu }/2}) \end{aligned}$$

for some \(\widetilde{\eta }>0\). In a similar manner, we also obtain that

$$\begin{aligned} I_+\le {{\,\mathrm{\mathrm e}\,}}^{-u} \int _{\tau ^\nu }^{+\infty } |x|^3 {{\,\mathrm{\mathrm e}\,}}^{-\frac{4}{3}x^{3/2}+\widehat{\eta }x^{2/3-\varepsilon } }\textrm{d}x=\mathcal {O}({{\,\mathrm{\mathrm e}\,}}^{-u-\tau ^{3\nu /2}}), \end{aligned}$$

where now for the last equality we used Definition 4.1-(iii) along \(\mathsf \Sigma _0=(0,\infty )\). \(\square \)

Proof of Theorem 7.2

As in the previous proof, we associate to the exponent \(\kappa \in (0,1)\) from Theorem 7.1 the value \(\nu =(1-\kappa )/2\in (0,1/2)\). Thanks to (7.21),

$$\begin{aligned}{} & {} \int _{-a\tau }^{b\tau } \frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)}}{\left( 1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)}\right) ^2}\left[ {\varvec{\Delta }}_\tau (x\mid u)^{-1}{\varvec{\Phi }}_{\tau ,+}(x\mid u)^{-1}\left( {\varvec{\Phi }}_{\tau ,+} {\varvec{\Delta }}_\tau \right) '(x\mid u)\right] _{21}\textrm{d}x\\{} & {} \quad = \int _{-\tau ^\nu }^{\tau ^\nu } \frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)}}{\left( 1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)}\right) ^2}\left[ {\varvec{\Delta }}_\tau (x\mid u)^{-1}{\varvec{\Phi }}_{\tau ,+}(x\mid u)^{-1}\left( {\varvec{\Phi }}_{\tau ,+} {\varvec{\Delta }}_\tau \right) '(x\mid u)\right] _{21}\textrm{d}x + \mathcal {O}({{\,\mathrm{\mathrm e}\,}}^{-u-\eta \tau ^{\nu }}), \end{aligned}$$

valid as \(\tau \rightarrow \infty \) and uniformly for \(u\ge \mathsf s\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\). Next, we use Lemma 7.3 and (7.3) to ensure that

$$\begin{aligned}{} & {} \int _{-\tau ^\nu }^{\tau ^\nu } \frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)}}{\left( 1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)}\right) ^2}\left[ {\varvec{\Delta }}_\tau (x\mid u)^{-1}{\varvec{\Phi }}_{\tau ,+}(x\mid u)^{-1}\left( {\varvec{\Phi }}_{\tau ,+} {\varvec{\Delta }}_\tau \right) '(x\mid u)\right] _{21}\textrm{d}x\\{} & {} \quad =(1+\mathcal {O}(\tau ^{-\kappa })) \int _{-\tau ^\nu }^{\tau ^\nu } \frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_0(x)}}{(1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_0(x)})^2} \left[ ({{\varvec{\Delta }}}_0(x\mid u,\mathsf t)^{-1}{{\varvec{\Phi }}}_{0,+}(x\mid u,\mathsf t)^{-1}({{\varvec{\Phi }}}_{0,+}{{\varvec{\Delta }}}_0)'(x\mid u,\mathsf t)\right] _{21} \textrm{d}x. \end{aligned}$$

With the help of the calculation

$$\begin{aligned} \frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_0(x)}}{(1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_0(x)})^2}=\frac{1}{(1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_0(x)})(1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_0(x)})}=\frac{{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_0(x)}}{(1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_0(x)})^2}=\frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf tx-u}}{(1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf tx-u})^2} \end{aligned}$$

we recognize the integrand from (7.20), and then conclude that

$$\begin{aligned}{} & {} \int _{-a\tau }^{b\tau } \frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)}}{\left( 1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)}\right) ^2}\left[ {\varvec{\Delta }}_\tau (x\mid u)^{-1}{\varvec{\Phi }}_{\tau ,+}(x\mid u)^{-1}\left( {\varvec{\Phi }}_{\tau ,+} {\varvec{\Delta }}_\tau \right) '(x\mid u)\right] _{21}\textrm{d}x \\{} & {} \quad =(1+\mathcal {O}(\tau ^{-\kappa })) \int _{-\infty }^{\infty } \frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf tx-u}}{(1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf tx-u})^2}\left[ ({{\varvec{\Delta }}}_0(x\mid u,\mathsf t)^{-1}{{\varvec{\Phi }}}_{0,+}(x\mid u,\mathsf t)^{-1}({{\varvec{\Phi }}}_{0,+}{{\varvec{\Delta }}}_0)'(x\mid u,\mathsf t)\right] _{21} \textrm{d}x\\{} & {} \qquad + \mathcal {O}({{\,\mathrm{\mathrm e}\,}}^{-u-\eta \tau ^{\nu }}). \end{aligned}$$

Finally, with arguments very similar to the ones used in the proof of Lemma 7.9, we see that the remaining integral on the right-hand side is \(\mathcal {O}({{\,\mathrm{\mathrm e}\,}}^{-u})\). Combining everything, we have proved that

$$\begin{aligned}{} & {} \int _{-\tau a}^{\tau b}\frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)}}{\left( 1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_\tau (x\mid u)}\right) ^2}\left[ {\varvec{\Delta }}_\tau (x\mid u)^{-1}{\varvec{\Phi }}_{\tau ,+}(x\mid u)^{-1}\left( {\varvec{\Phi }}_{\tau ,+} {\varvec{\Delta }}_\tau \right) '(x\mid u)\right] _{21}\textrm{d}x \\{} & {} \quad =\int _{-\infty }^\infty \frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf tx-u}}{(1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf tx-u})^2}\left[ ({{\varvec{\Phi }}}_{0,+}(x\mid u,\mathsf t){{\varvec{\Delta }}}_0(x\mid u,\mathsf t))^{-1}({{\varvec{\Phi }}}_{0,+}{{\varvec{\Delta }}}_0)'(x\mid u,\mathsf t)\right] _{21} \textrm{d}x\\{} & {} \qquad +\mathcal {O}(\tau ^{-\kappa }{{\,\mathrm{\mathrm e}\,}}^{-u}),\quad \tau \rightarrow +\infty . \end{aligned}$$

We now integrate in \(u\in [-\mathsf s,+\infty )\) and use Eq. (5.8) and the limit

$$\begin{aligned} \lim _{s\rightarrow -\infty }\mathsf L^{{{\,\textrm{Ai}\,}}}(s,T)=1, \end{aligned}$$

which is valid by dominated convergence since \(|1+{{\,\mathrm{\mathrm e}\,}}^\alpha |^{-1}\le 1\) for every \(\alpha \in \mathbb {R}\), to conclude the proof. \(\square \)

8 The Underlying Equilibrium Measure and Related Quantities

As we mentioned earlier, one of the main objects we will analyze is the RHP for the orthogonal polynomials associated to (2.3)–(2.4). In this analysis, the model problem discussed in the last few sections plays a central role; other important quantities are the associated equilibrium measure and related objects, which we discuss in this section.

8.1 The equilibrium measure

A major role in our calculations is played by the equilibrium measure for the polynomial potential V, which is the unique probability measure \(\mu _V\) on \(\mathbb {R}\) for which the quantity

$$\begin{aligned} \iint \log \frac{1}{|x-y|}\textrm{d}\mu (x) \textrm{d}\mu (y)+\int V(x)\textrm{d}\mu (x) \end{aligned}$$

attains its minimum over all Borel probability measures \(\mu \) supported on \(\mathbb {R}\). Its existence and uniqueness are ensured by standard results, see for instance [69], and the regularity properties that we now discuss are of particular relevance.

The measure \(\mu _V\) is supported on a finite union of bounded intervals and is absolutely continuous with respect to the Lebesgue measure [46]. Following Assumption 2.1-(i), we assume that \(\mu _V\) is one-cut, that is, it is supported on a single interval that we take to be of the form

$$\begin{aligned} {{\,\textrm{supp}\,}}\mu _V=[-a,0],\quad a>0, \end{aligned}$$

and regular, meaning that the density of \(\mu _V\) vanishes like a square root at the endpoints of the support, does not vanish on \((-a,0)\), and the Euler-Lagrange equations are valid with strict inequality outside the support,

$$\begin{aligned} \int \log \frac{1}{|x-y|}\textrm{d}\mu _V(y)+\frac{1}{2} V(x)+\ell _V \; {\left\{ \begin{array}{ll} > 0, &{} x\in \mathbb {R}{\setminus } {{\,\textrm{supp}\,}}\mu _V, \\ =0, &{} x\in {{\,\textrm{supp}\,}}\mu _V, \end{array}\right. } \end{aligned}$$
(8.1)

for some constant \(\ell _V\in \mathbb {R}\). These notions are consistent with the notation that we already introduced and used in Sect. 2.
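
For concreteness, the short Python sketch below, which is not part of the analysis of this paper, illustrates the variational characterization and (8.1) for the quadratic potential \(V(x)=2(x+1)^2\) (our choice, purely illustrative), whose equilibrium measure is the shifted semicircle law with density \(\tfrac{2}{\pi }\sqrt{|x|(x+2)}\) on \([-2,0]\); in particular \(a=2\) and, in the notation of (8.2) below, \(h_V\equiv 4\). The sketch determines \(\ell _V\) from the equality in (8.1) at one interior point and then checks the equality/inequality dichotomy at other points.

```python
# A minimal numerical sketch (not part of the analysis) of (8.1), for the
# illustrative quadratic potential V(x) = 2*(x+1)**2, whose equilibrium measure
# is the shifted semicircle with density (2/pi)*sqrt(|x|(x+2)) on [-2, 0].
import numpy as np
from scipy.integrate import quad

a = 2.0
V = lambda x: 2.0 * (x + 1.0) ** 2
rho = lambda y: (2.0 / np.pi) * np.sqrt(max(-y * (y + a), 0.0))   # density of mu_V

def log_potential(x):
    # int log(1/|x-y|) dmu_V(y); quad is told about the integrable log singularity
    integrand = lambda y: np.log(1.0 / abs(x - y)) * rho(y)
    pts = [x] if -a < x < 0.0 else None
    val, _ = quad(integrand, -a, 0.0, points=pts, limit=200)
    return val

# determine ell_V from the equality in (8.1) at one interior point ...
ell_V = -(log_potential(-1.0) + 0.5 * V(-1.0))

# ... then (8.1) predicts: approximately 0 inside (-2, 0), strictly positive outside
for x in [-1.7, -1.2, -0.4, 0.5, 1.0, -3.0]:
    EL = log_potential(x) + 0.5 * V(x) + ell_V
    print(f"x = {x:+.2f}   EL(x) = {EL:+.8f}")
```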

A transformation of the equilibrium measure of particular interest is its Cauchy transform,

$$\begin{aligned} C^{\mu _V}(z):=\int \frac{\textrm{d}\mu _V(x)}{x-z},\quad z\in \mathbb {C}{\setminus } {{\,\textrm{supp}\,}}\mu _V. \end{aligned}$$

Using the Euler-Lagrange identity, it can be shown that \(C^{\mu _V}\) satisfies an algebraic equation of the form

$$\begin{aligned} \left( C^{\mu _V}(z)+\frac{V'(z)}{2}\right) ^2=\frac{1}{4}z(z+a)h_V(z)^2,\quad z\in \mathbb {C}{\setminus } {{\,\textrm{supp}\,}}\mu _V, \end{aligned}$$
(8.2)

for some polynomial \(h_V\) which does not vanish on \([-a,0]\) [63, 65].

We also associate to the equilibrium measure its \(\phi \) function

$$\begin{aligned} \phi (z):=\int _{0}^z\left( C^{\mu _V}(s)+\frac{1}{2} V'(s)\right) \textrm{d}s,\quad z\in \mathbb {C}{\setminus } (-\infty ,0]. \end{aligned}$$
(8.3)

The next result summarizes some properties of \(\phi \) that will be needed later.

Proposition 8.1

The function \(\phi \) has the following properties.

  1. (i)

    The function \(\phi \) is analytic on \(\mathbb {C}{\setminus } (-\infty ,0]\).

  2. (ii)

    For \(x\in (-a,0)\),

    $$\begin{aligned} \phi _{+}(x)+\phi _{-}(x)=0, \quad \text {and}\quad \phi _{+}(x)-\phi _{-}(x)=-2\pi \textrm{i}\mu _V((x,0)). \end{aligned}$$
  3. (iii)

    For \(x\in (-\infty ,-a)\),

    $$\begin{aligned}\phi _{+}(x)-\phi _{-}(x)=-2\pi \textrm{i}.\end{aligned}$$
  4. (iv)

    For \(x\in \mathbb {R}{\setminus } [-a,0]\),

    $$\begin{aligned}{{\,\textrm{Re}\,}}\phi _{+}(x)={{\,\textrm{Re}\,}}\phi _{-}(x) >0.\end{aligned}$$
  5. (v)

    As \(z\rightarrow \infty \), for some constant \(\phi _\infty \),

    $$\begin{aligned} \phi (z)=\frac{V(z)}{2}+\ell _V-\log z+\frac{\phi _\infty }{z}+\mathcal {O}(z^{-2}), \end{aligned}$$

    where \(\ell _V\) is as in (8.1).

  6. (vi)

    The function \(\phi \) satisfies the estimate

    $$\begin{aligned} \phi (z)= \frac{1}{3} h_V(0)a^{1/2} z^{3/2}(1+\mathcal {O}(z)), \quad z\rightarrow 0. \end{aligned}$$
    (8.4)

Proof

The proof is standard using the properties of the equilibrium measure, see for instance [42]. \(\square \)

8.2 The conformal map

Finally, using \(\phi \) we construct a conformal map \(\psi \), introduced formally with the next result.

Proposition 8.2

The function

$$\begin{aligned} \psi (z):=\left( \frac{3}{2}\phi (z)\right) ^{2/3}, \end{aligned}$$

is a conformal map from a neighborhood of the origin to a disk \(D_{2\delta }(0)\), with \(U_0:=\psi ^{-1}(D_\delta (0))\), and admits an expansion of the form

$$\begin{aligned} \psi (z)=\mathsf c_V z(1+\mathcal {O}(z)),\quad z\rightarrow 0,\qquad \mathsf c_V:=2^{-2/3}h_V(0)^{2/3}a^{1/3}>0. \end{aligned}$$
(8.5)

Proof

The proof is also standard, and follows essentially from Proposition 8.1-(vi). We omit the details. \(\square \)

In the previous proposition, the radius \(2\delta \) instead of \(\delta \) is chosen merely for later convenience, namely for the statement of Proposition 8.3. Later on, we use \(\psi \) only over the smaller neighborhood \(D_\delta (0)\).
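
Continuing with the illustrative quadratic example used after (8.1) (so \(a=2\), \(h_V\equiv 4\), hence \(\mathsf c_V=2\)), the following sketch, included only as a numerical sanity check of ours, evaluates \(\phi \) on the positive axis by quadrature, compares it with the local behavior (8.4), and compares \(\psi (z)/z\) with \(\mathsf c_V\).

```python
# A quick numerical look (ours, for illustration only) at (8.3)-(8.5) in the same
# quadratic example as above (a = 2, h_V == 4).  On the positive real axis,
# phi(z) = 2*int_0^z sqrt(s(s+2)) ds; we compare it with (8.4) and compare
# psi(z) = (3*phi(z)/2)**(2/3) with c_V*z, where c_V = 2 by (8.5).
import numpy as np
from scipy.integrate import quad

a, hV0 = 2.0, 4.0
cV = 2.0 ** (-2.0 / 3.0) * hV0 ** (2.0 / 3.0) * a ** (1.0 / 3.0)   # equals 2.0

def phi(z):
    # (8.3), using phi'(s) = (1/2)*sqrt(s(s+a))*h_V(s) on (0, infinity)
    val, _ = quad(lambda s: 0.5 * np.sqrt(s * (s + a)) * hV0, 0.0, z)
    return val

for z in [0.2, 0.05, 0.01]:
    leading = hV0 * np.sqrt(a) * z ** 1.5 / 3.0        # leading term in (8.4)
    psi = (1.5 * phi(z)) ** (2.0 / 3.0)
    print(f"z = {z:5.2f}   phi(z) = {phi(z):.8f}   (8.4) leading = {leading:.8f}   psi(z)/z = {psi / z:.5f}")
```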

As is customary in RHP analysis, at a later stage we will need to glue the model problem as a local parametrix for the original RHP for orthogonal polynomials. This gluing procedure is done, in our case, using the conformal map \(\psi \). In usual situations, the jump matrices of the model local problem are piecewise constant or otherwise homogeneous, and as such this procedure of using the conformal map does not significantly alter them. However, in our situation the jump involves the function Q, and consequently it will be altered by the conformal map in a nontrivial way.

With the next result we introduce the necessary quantities needed to keep track of this transformation. Recall the half rays \(\mathsf \Sigma _j\), \(j=0,1,2,3,\) which were introduced in (4.1). For the next statement, we talk about neighborhoods of \(\mathsf \Sigma _j\), by which we mean open connected sets that contain \(\mathsf \Sigma _j{\setminus } \{0\}\) in their interior.

Proposition 8.3

There exist neighborhoods \(\mathcal S_j\) of \(\mathsf \Sigma _j\) and a function

$$\begin{aligned} \mathsf H_Q:\mathcal S\rightarrow \mathbb {C},\quad \mathcal S:=\bigcup _{j=0}^3 \mathcal S_j \end{aligned}$$

with the following properties.

  1. (i)

    For the value \(\delta >0\) in Proposition 8.2, the inclusions

    $$\begin{aligned} D_\delta (0)\subset \mathcal S\quad \text {and}\quad \mathcal S_j\cap \mathcal S_k\subset D_\delta (0), \end{aligned}$$

    hold true for any \(j\ne k\).

  2. (ii)

    The function \(\mathsf H_Q\) is \(C^\infty \) on \(\mathcal S\), and is an extension of \(Q\circ \psi ^{-1}\) from \(D_\delta (0)\), that is,

    $$\begin{aligned} \mathsf H_Q(w)=Q(\psi ^{-1}(w)),\quad |w|<\delta . \end{aligned}$$
    (8.6)
  3. (iii)

    The function \(\mathsf H_Q\) is analytic on \(D_\delta (0)\), extends continuously up to the boundary of \(D_\delta (0)\) and satisfies

    $$\begin{aligned} \mathsf H_Q(w)=-\mathsf c_{\mathsf H}w+\mathcal {O}(w^2),\quad \mathsf c_{\mathsf H}:=\frac{\mathsf t}{\mathsf c_V}, \end{aligned}$$
    (8.7)

    uniformly for \(|w|\le \delta \), where we recall that \(\mathsf t\) and \(\mathsf c_V\) are as in (2.7) and (8.5), respectively.

  4. (iv)

    For some constants \(\widehat{\eta }>\eta >0\), the function \(\mathsf H_Q\) satisfies the estimates

    $$\begin{aligned} -\widehat{\eta }|w|\le {{\,\textrm{Re}\,}}\mathsf H_Q(w)\le -\eta |w| \qquad \text {for every }w\in \mathcal S_0{\setminus } D_\delta (0), \end{aligned}$$

    and

    $$\begin{aligned} {{\,\textrm{Re}\,}}\mathsf H_Q(w)\ge \eta |w| \qquad \text {for every } w\in \left( \mathcal S_1\cup \mathcal S_2\cup \mathcal S_3\right) {\setminus } D_\delta (0). \end{aligned}$$

Proof

We construct the set \(\mathcal S_j\) as tubular neighborhoods of \(\mathsf \Sigma _j\) away from the origin, and as disks near the origin, namely

$$\begin{aligned} \mathcal S_j=\mathcal S_j(\delta ')=\left\{ w\in \mathbb {C}\mid \inf _{w'\in \mathsf \Sigma _j}|w-w'|< \delta ' \right\} \cup D_\delta (0). \end{aligned}$$

By choosing \(\delta '>0\) sufficiently small, in particular smaller than \(\delta >0\), property (i) is immediate.

The function

$$\begin{aligned} \mathsf H_Q(w):=Q(\psi ^{-1}(w)),\quad |w|<\delta , \end{aligned}$$

is obviously analytic on \(D_{\delta }(0)\), satisfies (iii) and admits an extension to the larger open set \(D_{2\delta }(0)\). A standard argument using partitions of unity allows us to extend it to the sets \(\mathcal S_j\)’s as claimed by (ii), also making sure that it satisfies (iv). \(\square \)

9 Associated Orthogonal Polynomials

The first, and arguably the major, step towards understanding \(\mathsf L^Q_n(\mathsf s)\) is to study several quantities related to the orthogonal polynomials for the varying weight (2.16), which we introduce next.

9.1 Orthogonal polynomials and related quantities

Denote by \(\mathsf P_k=\mathsf P_k^{(n,\mathsf s)}\) the monic orthogonal polynomial of degree k for the weight \(\omega _n\) in (2.16),

$$\begin{aligned} \mathsf P_k^{(n,\mathsf s)}(x)=x^k+\text {(lower degree terms)},\quad \int _{\mathbb {R}}\mathsf P_k^{(n,\mathsf s)}(x)x^j\omega _n(x)\textrm{d}x=0, \quad j=0,\ldots , k-1. \end{aligned}$$
(9.1)

These polynomials depend on \(\omega _n\), so ultimately also on Q, but we refrain from stressing this dependence in the notation. We also denote by \(\upgamma _k^{(n,Q)}=\upgamma _k^{(n,Q)}(\mathsf s)>0\) the corresponding norming constant, determined by

$$\begin{aligned} \frac{1}{\upgamma _k^{(n)}(\mathsf s)^2}=\int _\mathbb {R}\mathsf P_k^{(n,\mathsf s)}(x)^2\omega _n(x)\textrm{d}x. \end{aligned}$$
(9.2)

We associate to the orthogonal polynomials their Christoffel-Darboux kernel,

$$\begin{aligned} \mathsf K^Q_n(x,y)=\mathsf K^Q_n(x,y\mid \mathsf s):=\sum _{k=0}^{n-1} \upgamma _k^{(n)}(\mathsf s)^2\mathsf P_k^{(n,\mathsf s)}(x)\mathsf P_k^{(n,\mathsf s)}(y), \end{aligned}$$
(9.3)

stressing that we are not including the weight \(\omega _n\) in this definition. In particular, the identity

$$\begin{aligned} \int _{-\infty }^\infty \mathsf K_n(x,x\mid \mathsf s)\omega _n(x\mid \mathsf s)\textrm{d}x=n \end{aligned}$$
(9.4)

holds true for any \(\mathsf s\in \mathbb {R}\) and follows immediately from (9.2).
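
The identity (9.4) is easy to probe numerically. In the sketch below (ours) we use a toy weight, not the weight \(\omega _n\) from (2.16), chosen only so that all moments can be computed by quadrature; in the monomial basis the kernel (9.3) is the reproducing kernel \(v(x)^{\mathrm T}\textbf{G}^{-1}v(y)\), where \(\textbf{G}\) is the Gram matrix of moments, and (9.4) then becomes the trace identity \({{\,\textrm{tr}\,}}(\textbf{G}^{-1}\textbf{G})=n\).

```python
# A toy numerical illustration (ours) of (9.3)-(9.4).  The weight below is NOT the
# weight omega_n from (2.16); it is an arbitrary positive weight with a similar
# multiplicative structure, chosen only so everything is computable by quadrature.
import numpy as np
from scipy.integrate import quad

n, s = 4, 0.5
w = lambda x: np.exp(-n * x ** 2) / (1.0 + np.exp(-(s + x)))

# Gram (moment) matrix of the monomials 1, x, ..., x^{n-1}
G = np.array([[quad(lambda x: x ** (j + k) * w(x), -np.inf, np.inf)[0]
               for k in range(n)] for j in range(n)])
Ginv = np.linalg.inv(G)

def K(x, y):
    # Christoffel-Darboux kernel (9.3) written as the reproducing kernel
    vx = np.array([x ** k for k in range(n)])
    vy = np.array([y ** k for k in range(n)])
    return vx @ Ginv @ vy

total, _ = quad(lambda x: K(x, x) * w(x), -np.inf, np.inf)
print(f"int K_n(x,x) w(x) dx = {total:.10f}   (should equal n = {n})")
```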

In a similar manner, we introduce the related quantities for the undeformed weight \({{\,\mathrm{\mathrm e}\,}}^{-nV}\). The partition function \(Z_n\) already appeared in (2.2), the orthogonal polynomials \(P_k=P_k^{(n)}\) are determined by

$$\begin{aligned} P_k^{(n)}(x)=x^k+\text {(lower degree terms)},\quad \int _{\mathbb {R}}P_k^{(n)}(x)x^j{{\,\mathrm{\mathrm e}\,}}^{-nV(x)}\textrm{d}x=0, \quad j=0,\ldots , k-1, \end{aligned}$$

and the norming constants and Christoffel-Darboux kernel are determined from

$$\begin{aligned} \frac{1}{(\gamma _k^{(n)})^2}=\int _\mathbb {R}P_k^{(n)}(x)^2{{\,\mathrm{\mathrm e}\,}}^{-nV(x)}\textrm{d}x,\quad K_n(x,y):=\sum _{k=0}^{n-1}(\gamma _k^{(n)})^2P_k^{(n)}(x)P_k^{(n)}(y). \end{aligned}$$

The orthogonal polynomials \(\mathsf P_k^{(n,\mathsf s)}\) vary continuously with \(\mathsf s\), which is a consequence of Heine’s formula [42, Equation (3.10)]. In particular, when taking the limit \(\mathsf s\rightarrow +\infty \) we have that \(x^k\omega _n(x)\rightarrow x^k{{\,\mathrm{\mathrm e}\,}}^{-nV(x)}\) both uniformly on compacts and in \(L^1\), and \(|x|^k\omega _n(x)\le |x|^k{{\,\mathrm{\mathrm e}\,}}^{-nV(x)}\). Dominated convergence then gives that all the undeformed quantities just introduced are recovered from their deformed versions in the limit \(\mathsf s\rightarrow +\infty \). This means, for instance, that the Christoffel-Darboux kernel \(K_n\) and the partition function \(Z_n\) are recovered via

$$\begin{aligned} K_n(x,y)=\mathsf K_n(x,y\mid \mathsf s=+\infty )\quad \text {and}\quad Z_n=\mathsf Z_n(\mathsf s=+\infty ). \end{aligned}$$
(9.5)

The next result will be key in transforming asymptotics for the orthogonal polynomials into asymptotics for \(\mathsf L^Q_n(\mathsf s)\) itself.

Proposition 9.1

The identity

$$\begin{aligned} \log \mathsf L_n^Q(\mathsf s) = -\int _\mathsf s^{\infty }\int _{-\infty }^{\infty } \mathsf K^Q_n(x,x\mid u)\frac{\omega _n(x\mid u)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}x\; \textrm{d}u \end{aligned}$$
(9.6)

holds true for every \(\mathsf s\in \mathbb {R}\).

Remark 9.2

While we were finishing this manuscript, the work [36] was posted on the arXiv. Its authors also derive the formula (9.6), in more general terms, using the underlying RHP for IIKS-type integrable operators; see the first displayed formula on page 28 therein. Similar formulas play a fundamental role in the recent works [27, 28], see for instance (5.6) above. Our proof of (9.6) relies solely on orthogonality properties, so we decided to present it nevertheless.

Proof

The equality

$$\begin{aligned} \mathsf Z^Q_n(\mathsf s)=n!\prod _{k=0}^{n-1}\upgamma _k^{(n)}(\mathsf s)^{-2} \end{aligned}$$

is standard in random matrix theory. From this identity, (9.2) and the orthogonality relations we derive the deformation formula

$$\begin{aligned} \partial _\mathsf s\log \mathsf Z^Q_n(\mathsf s) =-\int _\mathbb {R}\partial _\mathsf s\mathsf K^Q_n(x,x\mid \mathsf s)\omega _n(x\mid \mathsf s)\textrm{d}x, \end{aligned}$$

which in fact is valid for general weights depending on an additional parameter \(\mathsf s\). To our knowledge, this last identity was first observed by Krasovsky [59, Equation (14)]. We fix constants \(L>\mathsf s>0\) and integrate the identity above,

$$\begin{aligned} \log \mathsf Z^Q_n(\mathsf s) =\log \mathsf Z^Q_n(L)+\int _{\mathsf s}^L \int _{-\infty }^\infty (\partial _\mathsf s\mathsf K^Q_n)(x,x\mid u)\omega _n(x\mid u)\textrm{d}x\textrm{d}u. \end{aligned}$$
(9.7)

We want to interchange the order of integration in the above. The derivative \(\partial _\mathsf s\mathsf K^Q_n\) is a polynomial of degree at most \(2n-2\) in x, and from Heine’s formula for orthogonal polynomials we see that the polynomial coefficients of \(\mathsf K_n\) are continuous functions of \(\mathsf s\). Therefore, for given \(\mathsf s,L\) there exists a constant \(M=M(\mathsf s,L)>0\) for which the pointwise bound

$$\begin{aligned} \left| (\partial _\mathsf s\mathsf K^Q_n)(x,x\mid u)\right| \le M \sup _{0\le k\le 2n-2}|x|^k,\quad x\in \mathbb {R}, \end{aligned}$$

is valid for every \(u\in [\mathsf s,L]\). Together with the inequalities \(0\le \omega _n(x)\le {{\,\mathrm{\mathrm e}\,}}^{-nV(x)}\), this bound ensures that we can interchange order of integration in (9.7). After integration by parts, we then obtain

$$\begin{aligned} \log \mathsf Z^Q_n(\mathsf s)= & {} \log \mathsf Z^Q_n(L)+\int _{-\infty }^\infty \mathsf K^Q_n(x,x\mid L)\omega _n(x\mid L)\textrm{d}x-\int _{-\infty }^\infty \mathsf K^Q_n(x,x\mid \mathsf s)\omega _n(x\mid \mathsf s)\textrm{d}x \\{} & {} -\int _{-\infty }^{\infty } \int _\mathsf s^L \mathsf K^Q_n(x,x\mid u)\frac{\omega _n(x\mid u)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}u\textrm{d}x \end{aligned}$$

From the identity (9.4) the two single integrals cancel one another. The integrand of the double integral is positive, so by Tonelli’s Theorem we can interchange order of integration. After this interchange, we take the limit \(L\rightarrow +\infty \) and use (9.5) and (2.3) to conclude the proof. \(\square \)
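
The deformation formula at the heart of this proof can also be tested numerically. The sketch below (ours, in the same toy setting as before) compares a finite-difference approximation of \(\partial _\mathsf s\log \mathsf Z^Q_n(\mathsf s)\), computed through the classical fact that \(\prod _k\upgamma _k^{-2}\) equals the Hankel determinant of the moments, with the integral on the right-hand side of the deformation formula.

```python
# A finite-difference check (ours, in the same toy setting as above) of the
# deformation formula: d/ds log Z_n(s) = -int (d/ds K_n)(x,x|s) w(x|s) dx,
# using d/ds log Z_n(s) = d/ds log det G(s), with G(s) the moment (Hankel) matrix.
import numpy as np
from scipy.integrate import quad

n = 4
w = lambda x, s: np.exp(-n * x ** 2) / (1.0 + np.exp(-(s + x)))

def gram(s):
    return np.array([[quad(lambda x: x ** (j + k) * w(x, s), -np.inf, np.inf)[0]
                      for k in range(n)] for j in range(n)])

def K(x, Ginv):
    v = np.array([x ** k for k in range(n)])
    return v @ Ginv @ v

s0, eps = 0.5, 1e-4
Gm, Gp = gram(s0 - eps), gram(s0 + eps)
lhs = (np.log(np.linalg.det(Gp)) - np.log(np.linalg.det(Gm))) / (2.0 * eps)

Ginv_m, Ginv_p = np.linalg.inv(Gm), np.linalg.inv(Gp)
dK = lambda x: (K(x, Ginv_p) - K(x, Ginv_m)) / (2.0 * eps)
rhs = -quad(lambda x: dK(x) * w(x, s0), -np.inf, np.inf)[0]
print(f"d/ds log det G(s) = {lhs:.8f}    -int (dK/ds) w dx = {rhs:.8f}")
```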

9.2 The Riemann–Hilbert Problem for orthogonal polynomials

We are ready to introduce the RHP for orthogonal polynomials for the weight \(\omega _n\) in (2.16). Throughout this section, we keep using the matrix notation already employed in previous sections, recall for instance (3.1), (3.2) and (3.3).

The RHP for orthogonal polynomials for the weight (2.16) asks for finding a \(2\times 2\) matrix-valued function \(\textbf{Y}\) with the following properties.

Y-1.:

The matrix \(\textbf{Y}:\mathbb {C}{\setminus } \mathbb {R}\rightarrow \mathbb {C}^{2\times 2}\) is analytic.

Y-2.:

The function \(\textbf{Y}\) has continuous boundary values

$$\begin{aligned} \textbf{Y}_\pm (x):=\lim _{\varepsilon \searrow 0}\textbf{Y}(x\pm \textrm{i}\varepsilon ),\quad x\in \mathbb {R}, \end{aligned}$$

which are related by the jump condition \(\textbf{Y}_+(x)=\textbf{Y}_-(x)\textbf{J}_{\textbf{Y}}(x)\), \(x\in \mathbb {R}\), with

$$\begin{aligned} \textbf{J}_{\textbf{Y}}(x):=\textbf{I}+\omega _n(x)\textbf{E}_{12}. \end{aligned}$$
Y-3.:

As \(z\rightarrow \infty \),

$$\begin{aligned} \textbf{Y}(z)=\left( \textbf{I}+\mathcal {O}(z^{-1})\right) z^{n\varvec{\sigma }_3}. \end{aligned}$$

Observe that \(\textbf{Y}=\textbf{Y}^{(n)}(\cdot \mid \mathsf s,Q)\) depends on the index n and also on \(\mathsf s\) and Q, although we do not make this dependence explicit in our notation. As shown by Fokas, Its and Kitaev [53], for each n the RHP above has a unique solution, which is explicitly given by

$$\begin{aligned} \textbf{Y}(z)= \begin{pmatrix} \mathsf P^{(n,\mathsf s)}_{n}(z) &{}\displaystyle { \frac{1}{2\pi \textrm{i}}\int _{\mathbb {R}} \frac{P_n^{(n,\mathsf s)}(x)}{x-z}\omega _n(x)\textrm{d}x }\\[10pt] -2\pi \textrm{i}\upgamma ^{(n,Q)}_{n-1}(\mathsf s)^2 \mathsf P^{(n,\mathsf s)}_{n-1}(z) &{} \displaystyle {-\upgamma ^{(n,Q)}_{n-1}(\mathsf s)^2\dfrac{1}{2\pi \textrm{i}}\int _{\mathbb {R}} \frac{\mathsf P^{(n,\mathsf s)}_{n-1}(x)}{x-z}\omega _n(x)\textrm{d}x} \end{pmatrix}, \end{aligned}$$

where \(\mathsf P_k^{(n,\mathsf s)}\) and \(\upgamma _k^{(n,Q)}(\mathsf s)\) are as in (9.1) and (9.2), respectively.

In particular, from this identity we obtain the relation

$$\begin{aligned} \upgamma _{n-1}^{(n,Q)}(\mathsf s)^2=-\frac{1}{2\pi \textrm{i}}\left( \textbf{Y}^{(n,1)}\right) _{21}, \end{aligned}$$
(9.8)

where \(\textbf{Y}^{(n,1)}=\textbf{Y}^{(n,1)}(\mathsf s,Q)\) is the matrix determined from the more detailed expansion

$$\begin{aligned} \textbf{Y}(z)=\textbf{Y}^{(n)}(z)=\left( \textbf{I}+\frac{1}{z}\textbf{Y}^{(n,1)}+\frac{1}{z^2}\textbf{Y}^{(n,2)}+\mathcal {O}(z^{-3})\right) z^{n\varvec{\sigma }_3},\quad z\rightarrow \infty . \end{aligned}$$
(9.9)

Also, the Christoffel-Darboux kernel (9.3) can be recovered directly from \(\textbf{Y}\) via the identity

$$\begin{aligned} \mathsf K^Q_n(x,y\mid \mathsf s)=\frac{1}{2\pi \textrm{i}}\frac{1}{x-y}\textbf{e}_2^\mathrm T\textbf{Y}_+(y)^{-1}\textbf{Y}_+(x) \textbf{e}_1,\quad x,y\in \mathbb {R}, \; x\ne y. \end{aligned}$$
(9.10)

In the confluent limit \(x=y\), this formula yields

$$\begin{aligned} \mathsf K^Q_n(x,x\mid \mathsf s)=\frac{1}{2\pi \textrm{i}}\textbf{e}_2^\mathrm T\textbf{Y}_+(x)^{-1}\textbf{Y}'_+(x) \textbf{e}_1,\quad x\in \mathbb {R}. \end{aligned}$$
(9.11)
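
Unravelling the entries of \(\textbf{Y}\), the confluent formula (9.11) reduces to the classical identity \(\mathsf K^Q_n(x,x)=\upgamma _{n-1}^2\left( \mathsf P_n'(x)\mathsf P_{n-1}(x)-\mathsf P_n(x)\mathsf P_{n-1}'(x)\right) \). The following sketch (ours, again in the toy setting used earlier) cross-checks this identity against the reproducing-kernel evaluation of (9.3).

```python
# Numerical cross-check (ours, toy weight as before) of the confluent
# Christoffel-Darboux identity encoded by (9.11):
#   K_n(x,x) = gamma_{n-1}^2 * (P_n'(x) P_{n-1}(x) - P_n(x) P_{n-1}'(x)).
import numpy as np
from scipy.integrate import quad

n, s = 4, 0.5
w = lambda x: np.exp(-n * x ** 2) / (1.0 + np.exp(-(s + x)))
mom = [quad(lambda x: x ** k * w(x), -np.inf, np.inf)[0] for k in range(2 * n)]

def monic(k):
    # monic orthogonal polynomial of degree k from the moment equations (9.1)
    A = np.array([[mom[i + j] for j in range(k)] for i in range(k)])
    b = np.array([mom[k + i] for i in range(k)])
    c = -np.linalg.solve(A, b)
    return np.poly1d(np.concatenate(([1.0], c[::-1])))   # x^k + c_{k-1} x^{k-1} + ...

Pn, Pm = monic(n), monic(n - 1)
gamma2 = 1.0 / quad(lambda x: Pm(x) ** 2 * w(x), -np.inf, np.inf)[0]        # (9.2)

G = np.array([[mom[i + j] for j in range(n)] for i in range(n)])
x0 = 0.3
v = np.array([x0 ** k for k in range(n)])
repro = v @ np.linalg.inv(G) @ v                                            # (9.3)
cd = gamma2 * (Pn.deriv()(x0) * Pm(x0) - Pn(x0) * Pm.deriv()(x0))
print(f"K_n(x0,x0):  reproducing kernel = {repro:.8f}   confluent CD form = {cd:.8f}")
```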

The remainder of this paper is dedicated to applying the Deift-Zhou method to this RHP and to collecting its consequences, an analysis which will ultimately lead to the proofs of our main results.

10 The RH Analysis for the Orthogonal Polynomials

With all the preliminary work completed, we are finally at the stage of performing the asymptotic analysis for the RHP-\(\textbf{Y}\) for orthogonal polynomials that was introduced in Sect. 9. Most of the transformations are standard, so we go over them quickly and without much detail. Care will be taken in the construction of the parametrices, which are the steps where the introduction of the factor \(\sigma _n\) plays a major role. We also remind the reader that the functions V and Q are always assumed to satisfy Assumption 2.1.

The function \(\sigma _n=\sigma _n(z\mid \mathsf s,Q)\) depends on \(\mathsf s\in \mathbb {R}\) and Q, and as such in all the steps below several quantities will also depend on these parameters. Nevertheless, in most of the work that follows the parameter \(\mathsf s\) and the function Q do not play a major role, so we omit them from our notation unless needed to avoid confusion.

10.1 First transformation: normalization at infinity

Recall the function \(\phi \) introduced in (8.3). The first transformation, which has the effect of normalizing the RHP as \(z\rightarrow \infty \), takes the form

$$\begin{aligned} \textbf{T}(z):={{\,\mathrm{\mathrm e}\,}}^{-n\ell _V \varvec{\sigma }_3} \textbf{Y}(z){{\,\mathrm{\mathrm e}\,}}^{n\left( \phi (z)-\frac{1}{2}V(z)\right) \varvec{\sigma }_3},\quad z\in \mathbb {C}{\setminus } \mathbb {R}. \end{aligned}$$
(10.1)

From the RHP for \(\textbf{Y}\) and the properties from Proposition 8.1, we obtain that \(\textbf{T}\) satisfies the following RHP.

T-1.:

The matrix \(\textbf{T}:\mathbb {C}{\setminus } \mathbb {R}\rightarrow \mathbb {C}^{2\times 2}\) is analytic.

T-2.:

For \(z\in \mathbb {R}\), it satisfies the jump \(\textbf{T}_+(z)=\textbf{T}_-(z)\textbf{J}_{\textbf{T}}(z)\), with

$$\begin{aligned} \textbf{J}_{\textbf{T}}(z) :=\begin{pmatrix} {{\,\mathrm{\mathrm e}\,}}^{n(\phi _{+}(z)-\phi _{-}(z))} &{} \sigma _{n}(z){{\,\mathrm{\mathrm e}\,}}^{-n(\phi _{+}(z)+\phi _{-}(z))} \\ 0 &{} {{\,\mathrm{\mathrm e}\,}}^{-n(\phi _{+}(z)-\phi _{-}(z))} \end{pmatrix},\quad z\in \mathbb {R}. \end{aligned}$$
T-3.:

As \(z\rightarrow \infty \),

$$\begin{aligned} \textbf{T}(z)=\textbf{I}+\frac{1}{z}\textbf{T}_1+\mathcal {O}(z^{-2}), \end{aligned}$$

where the coefficient \(\textbf{T}_1\) is

$$\begin{aligned} \textbf{T}_1:={{\,\mathrm{\mathrm e}\,}}^{-n\ell _V \varvec{\sigma }_3}\textbf{Y}^{(n,1)}{{\,\mathrm{\mathrm e}\,}}^{n\ell _V \varvec{\sigma }_3}+\phi _\infty \varvec{\sigma }_3, \end{aligned}$$

and we recall that \(\textbf{Y}^{(n,1)}\) and \(\phi _\infty \) were introduced in (9.9) and in Proposition 8.1–(v), respectively.

From property (ii) of Proposition 8.1, the jump matrix for \(\textbf{T}\) simplifies in convenient ways. For \(-a<z<0\),

$$\begin{aligned} \textbf{J}_\textbf{T}(z)&= \begin{pmatrix} {{\,\mathrm{\mathrm e}\,}}^{2n\phi _{+}(z)} &{} \sigma _n(z) \\ 0 &{} {{\,\mathrm{\mathrm e}\,}}^{-2n\phi _{+}(z)} \end{pmatrix} \\&=\left( \textbf{I}+\frac{1}{\sigma _n(z)} {{\,\mathrm{\mathrm e}\,}}^{-2n\phi _{+}(z)}\textbf{E}_{21}\right) \left( \sigma _n(z)\textbf{E}_{12}-\frac{1}{\sigma _n(z)}\textbf{E}_{21}\right) \left( \textbf{I}+\frac{1}{\sigma _n(z)} {{\,\mathrm{\mathrm e}\,}}^{2n\phi _{+}(z)}\textbf{E}_{21}\right) \\&=\left( \textbf{I}+\frac{1}{\sigma _n(z)}{{\,\mathrm{\mathrm e}\,}}^{2n\phi _{-}(z)}\textbf{E}_{21}\right) \left( \sigma _n(z)\textbf{E}_{12}-\frac{1}{\sigma _n(z)}\textbf{E}_{21}\right) \left( \textbf{I}+\frac{1}{\sigma _n(z)} {{\,\mathrm{\mathrm e}\,}}^{2n\phi _{+}(z)}\textbf{E}_{21}\right) , \end{aligned}$$

and for \(z\in \mathbb {R}{\setminus } [-a,0]\),

$$\begin{aligned} \textbf{J}_\textbf{T}(z)=\textbf{I}+\sigma _n(z) {{\,\mathrm{\mathrm e}\,}}^{-2n\phi _{+}(z)}\textbf{E}_{12}. \end{aligned}$$

10.2 Second transformation: opening of lenses

From the identities just written for \(\textbf{J}_{\textbf{T}}\) and Proposition 8.1-(ii), it follows that the diagonal entries of \(\textbf{J}_{\textbf{T}}\) are highly oscillatory on \((-a,0)\) as \(n\rightarrow \infty \). In the second transformation of the RHP we perform the so-called opening of lenses, which has the effect of moving this oscillatory behavior to a region where it becomes exponentially decaying.

Define regions \(\mathcal G^\pm \) on the ±-side of \((-a,0)\) (the lenses, see Fig. 3), assuming in addition that for \(U_0\) as in Proposition 8.2 these regions satisfy

$$\begin{aligned} \psi (\partial \mathcal G^\pm \cap U_0 )\subset (0,{{\,\mathrm{\mathrm e}\,}}^{\pm 2\pi \textrm{i}/3}\infty )\cup (0,\infty ), \end{aligned}$$
(10.2)

which can always be achieved because \(\psi \) is conformal from a neighborhood of \(U_0\) to a neighborhood of \(D_\delta (0)\).

The function \(\sigma _n\) has no zeros and may have singularities, but these are all poles due to the analyticity of Q in a neighborhood of the real axis. Therefore, the fraction \(1/\sigma _n\) is analytic on a neighborhood of the real axis. We use this fraction to define the transformation \(\textbf{T}\mapsto \textbf{S}\),

$$\begin{aligned} \textbf{S}(z):=\textbf{T}(z)\times {\left\{ \begin{array}{ll} \textbf{I}\mp \dfrac{1}{\sigma _n(z)}{{\,\mathrm{\mathrm e}\,}}^{2n\phi (z)}\textbf{E}_{21}, &{} z\in \mathcal G^\pm , \\ \textbf{I}, &{} \text {elsewhere}. \end{array}\right. } \end{aligned}$$

Fig. 3. The regions used for the opening of lenses in the transformation \(\textbf{T}\mapsto \textbf{S}\)

With

$$\begin{aligned} \Gamma _{\textbf{S}}:=\mathbb {R}\cup \partial \mathcal G^+\cup \partial \mathcal G^-, \end{aligned}$$

and using the jump properties of \(\phi \) listed in Proposition 8.1, the matrix \(\textbf{S}\) satisfies the following RHP.

S-1.:

The matrix \(\textbf{S}:\mathbb {C}{\setminus } \Gamma _{\textbf{S}}\rightarrow \mathbb {C}^{2\times 2}\) is analytic.

S-2.:

For \(z\in \Gamma _{\textbf{S}}\), it satisfies the jump \(\textbf{S}_+(z)=\textbf{S}_-(z)\textbf{J}_{\textbf{S}}(z)\), with

$$\begin{aligned} \textbf{J}_{\textbf{S}}(z):={\left\{ \begin{array}{ll} \sigma _n(z)\textbf{E}_{12}-\dfrac{1}{\sigma _n(z)}\textbf{E}_{21}, &{} z\in (-a,0), \\ \textbf{I}+\dfrac{1}{\sigma _n(z)}{{\,\mathrm{\mathrm e}\,}}^{2n\phi (z)}\textbf{E}_{21}, &{} z\in \partial \mathcal G^+\cup \partial \mathcal G^-, \\ \textbf{I}+\sigma _n(z){{\,\mathrm{\mathrm e}\,}}^{-2n\phi _{+}(z)}\textbf{E}_{12}, &{} z\in \mathbb {R}{\setminus } [-a,0]. \end{array}\right. } \end{aligned}$$
S-3.:

As \(z\rightarrow \infty \),

$$\begin{aligned} \textbf{S}(z)=\textbf{I}+\frac{\textbf{S}_1}{z}+\mathcal {O}(z^{-2}),\qquad \text {with}\quad \textbf{S}_1:=\textbf{T}_1. \end{aligned}$$
S-4.:

The matrix \(\textbf{S}\) remains bounded near the points \(z=-a,0\).

Before moving to the construction of the mentioned parametrices, we conclude this section with the needed estimate for the jump matrix \(\textbf{J}_{\textbf{S}}\) away from \([-a,0]\). For that, recall the matrix norm notation introduced in (6.1), (6.2), (6.3).

Proposition 10.1

For \(U_0\) as in Proposition 8.2, introduce the set

$$\begin{aligned} \Gamma _\varepsilon :=\Gamma _{\textbf{S}}{\setminus } \left( [-a,0]\cup U_0\cup D_\varepsilon (-a)\right) \end{aligned}$$

for some \(\varepsilon >0\). Possibly reducing \(U_0\) if necessary, there is an \(\eta >0\) such that

$$\begin{aligned} \Vert \textbf{J}_{\textbf{S}}-\textbf{I}\Vert _{L^1\cap L^2\cap L^\infty (\Gamma _\varepsilon )}=\mathcal {O}({{\,\mathrm{\mathrm e}\,}}^{-\eta n}), \end{aligned}$$

as \(n\rightarrow \infty \).

Proof

From Proposition 8.1-(iv) we obtain that for any \(\varepsilon >0\) there is a constant \(\eta '>0\) for which

$$\begin{aligned} {{\,\textrm{Re}\,}}\phi _{+}(x)\ge \eta ',\quad \text {for every } x\in \mathbb {R}{\setminus } (-a-\varepsilon ,\varepsilon ). \end{aligned}$$

On the other hand, the jump conditions in Proposition 8.1-(ii) combined with the Cauchy-Riemann equations imply in a standard way that \({{\,\textrm{Re}\,}}\phi \le - \eta \) along the lips of the lenses and away from the endpoints \(-a\) and 0, as long as the lenses stay within a positive but small distance from the interval \([-a,0]\). From these pointwise estimates, the growth of \({{\,\textrm{Re}\,}}\phi \) as \(z\rightarrow \pm \infty \), and the facts that \(\sigma _n\) remains bounded on \(\mathbb {R}\) and that \(1/\sigma _n\) grows at most like \(\mathcal {O}({{\,\mathrm{\mathrm e}\,}}^{c n^{2/3}})\) on compacts of \(\mathbb {C}\), the claimed \(L^p\) estimates follow in a standard manner. \(\square \)

10.3 Global parametrix

The global parametrix problem, obtained after neglecting the jumps of \(\textbf{S}\) that are exponentially close to the identity, is the following RHP.

G-1.:

\(\textbf{G}:\mathbb {C}{\setminus } [-a,0] \rightarrow \mathbb {C}^{2\times 2}\) is analytic.

G-2.:

For \(z\in (-a,0)\), it satisfies the jump

$$\begin{aligned} \textbf{G}_+(z)=\textbf{G}_-(z)\left( \sigma _n(z)\textbf{E}_{12}-\frac{1}{\sigma _n(z)}\textbf{E}_{21}\right) . \end{aligned}$$
G-3.:

As \(z\rightarrow \infty \),

$$\begin{aligned} \textbf{G}(z)=\textbf{I}+\mathcal {O}(z^{-1}). \end{aligned}$$
G-4.:

\(\textbf{G}\) has square-integrable singularities at \(z=-a,0\).

The construction of the global parametrix follows standard techniques. First, one introduces a function that we denote \(\mathsf q(z)\), with the aim of transforming the RHP for \(\textbf{G}\) into a RHP with constant jumps. Then, by diagonalizing the resulting jump matrix, we further reduce the problem to two scalar-valued RHPs. With the help of Plemelj’s formula, we then solve these scalar RHPs, and by tracing back all the transformations we recover the matrix \(\textbf{G}\) itself.

The procedure just described is standard in RHP literature, see for instance [7, Appendix A.1], so we refrain from completing it in detail and instead only describe the final form of the solution.

The function \(\sigma _n\) does not vanish and is positive on the real axis, so its (real) logarithm there is well defined. With this in mind, introduce

$$\begin{aligned} \mathsf q(z):=\frac{((z+a)z)^{1/2}}{2\pi }\int _{-a}^{0} \frac{\log \sigma _n(x)}{\sqrt{|x|(x+a)}} \frac{\textrm{d}x}{x-z},\quad z\in \mathbb {C}{\setminus } [-a,0], \end{aligned}$$

where \((\cdot )^{1/2}\) stands for the principal branch of the square root and \(\sqrt{\cdot }\) is reserved for the standard positive real root of positive real numbers. This function \(\mathsf q\) depends on n but we do not make this dependence explicit for ease of notation. It is analytic on \(\mathbb {C}{\setminus } [-a,0]\), and it is chosen to satisfy the jump condition

$$\begin{aligned} \mathsf q_+(x)+\mathsf q_-(x)=-\log \sigma _n(x),\quad -a<x<0. \end{aligned}$$

Furthermore, standard calculations show that

$$\begin{aligned} \mathsf q(z)=\mathcal {O}(1),\quad z\rightarrow -a,0,\qquad \text {and} \qquad \mathsf q(z)=\mathsf q_0+\frac{\mathsf q_1}{z} + \mathcal {O}(z^{-2}), \quad z\rightarrow \infty , \end{aligned}$$

with coefficients given by

$$\begin{aligned}{} & {} \mathsf q_0=\mathsf q_0(n):=-\frac{1}{2\pi }\int _{-a}^{0} \frac{\log \sigma _n(x)}{\sqrt{|x|(x+a)}}\textrm{d}x \quad \text {and}\nonumber \\{} & {} \mathsf q_1=\mathsf q_1(n):=-\frac{1}{2\pi }\int _{-a}^{0} \frac{x\log \sigma _n(x)}{\sqrt{|x|(x+a)}}\textrm{d}x +\frac{a\mathsf q_0}{2}. \end{aligned}$$
(10.3)
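
The coefficients in (10.3) can be verified numerically for any given input in place of \(\log \sigma _n\). The sketch below (ours; since \(\sigma _n\) is not available in closed form, an arbitrary smooth test function is used as a stand-in, and the value \(a=2\) is illustrative) compares \(\mathsf q(z)\) with \(\mathsf q_0+\mathsf q_1/z\) for large z.

```python
# A sanity check (ours) of the expansion coefficients (10.3).  We replace
# log(sigma_n) by an arbitrary smooth test function f (f(x) = cos(x), purely
# illustrative) and take a = 2.
import numpy as np
from scipy.integrate import quad

a = 2.0
f = lambda x: np.cos(x)                               # stand-in for log(sigma_n)
weight = lambda x: f(x) / np.sqrt(abs(x) * (x + a))

def q(z):                                             # the Cauchy-type integral above
    val, _ = quad(lambda x: weight(x) / (x - z), -a, 0.0, limit=200)
    return np.sqrt((z + a) * z) / (2.0 * np.pi) * val

q0 = -quad(weight, -a, 0.0, limit=200)[0] / (2.0 * np.pi)
q1 = -quad(lambda x: x * weight(x), -a, 0.0, limit=200)[0] / (2.0 * np.pi) + a * q0 / 2.0
for z in [20.0, 100.0, 500.0]:
    print(f"z = {z:6.1f}   q(z) = {q(z):+.8f}   q0 + q1/z = {q0 + q1 / z:+.8f}")
```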

Next, set

$$\begin{aligned} \textbf{U}_0:=\frac{1}{\sqrt{2}} \begin{pmatrix} 1 &{} \textrm{i}\\ \textrm{i}&{} 1 \end{pmatrix}, \quad \mathsf m(z):=\frac{z}{z+a}, \end{aligned}$$
(10.4)

which is consistent with (4.4), and introduce

$$\begin{aligned} \textbf{M}(z):=\textbf{U}_0 \mathsf m(z)^{\varvec{\sigma }_3/4} \textbf{U}_0^{-1} = \frac{1}{2} \begin{pmatrix} \mathsf m(z)^{1/4}+\dfrac{1}{\mathsf m(z)^{1/4}} &{} -\textrm{i}\left( \mathsf m(z)^{1/4}-\dfrac{1}{\mathsf m(z)^{1/4}}\right) \\ \textrm{i}\left( \mathsf m(z)^{1/4}-\dfrac{1}{\mathsf m(z)^{1/4}}\right) &{} \mathsf m(z)^{1/4}+\dfrac{1}{\mathsf m(z)^{1/4}} \end{pmatrix} \end{aligned}$$
(10.5)

This matrix \(\textbf{M}\) satisfies

$$\begin{aligned} \textbf{M}_+(z)=\textbf{M}_-(z)\left( \textbf{E}_{12}-\textbf{E}_{21}\right) , \; -a<z<0, \quad \text {and}\quad \textbf{M}(z)=\textbf{I}-\frac{a}{4z}\varvec{\sigma }_2+\mathcal {O}(z^{-2}), \end{aligned}$$

where \(\varvec{\sigma }_2\) is the second Pauli matrix (recall (3.2)).
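
Both displayed properties of \(\textbf{M}\) are easy to confirm numerically; the following sketch (ours, with the illustrative value \(a=2\) and arbitrary evaluation points) checks the jump relation across \((-a,0)\) and the coefficient of 1/z at infinity.

```python
# A direct numerical check (ours, with the illustrative value a = 2) of the jump
# M_+(x) = M_-(x)(E_12 - E_21) across (-a, 0) and of the expansion
# M(z) = I - (a/(4z)) sigma_2 + O(z^{-2}) as z -> infinity.
import numpy as np

a = 2.0
U0 = np.array([[1.0, 1.0j], [1.0j, 1.0]]) / np.sqrt(2.0)
sigma2 = np.array([[0.0, -1.0j], [1.0j, 0.0]])
E12mE21 = np.array([[0.0, 1.0], [-1.0, 0.0]])

def M(z):
    m4 = (z / (z + a)) ** 0.25            # principal branch of m(z)^{1/4}
    return U0 @ np.diag([m4, 1.0 / m4]) @ np.linalg.inv(U0)

x, eps = -1.3, 1e-9                       # jump relation across (-a, 0)
print("jump check:", np.allclose(M(x + 1j * eps), M(x - 1j * eps) @ E12mE21, atol=1e-6))

z = 1.0e5                                 # coefficient of 1/z at infinity
print("z*(M(z) - I) =\n", z * (M(z) - np.eye(2)))
print("-a/4 * sigma_2 =\n", -a / 4.0 * sigma2)
```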

Then the solution to the global parametrix RHP- \(\textbf{G}\) is

$$\begin{aligned} \textbf{G}(z) = {{\,\mathrm{\mathrm e}\,}}^{-\mathsf q_0\varvec{\sigma }_3}\textbf{M}(z) {{\,\mathrm{\mathrm e}\,}}^{\mathsf q(z)\varvec{\sigma }_3},\quad z\in \mathbb {C}{\setminus } [-a,0]. \end{aligned}$$
(10.6)

This solution \(\textbf{G}\) satisfies

$$\begin{aligned} \textbf{G}(z)=\textbf{I}+\frac{\textbf{G}_1}{z}+\mathcal {O}(z^{-2}),\; z\rightarrow \infty ,\quad \text {with}\quad \textbf{G}_1:=\begin{pmatrix} \mathsf q_1 &{} \dfrac{\textrm{i}a}{4}{{\,\mathrm{\mathrm e}\,}}^{-2\mathsf q_0} \\ -\dfrac{\textrm{i}a }{4}{{\,\mathrm{\mathrm e}\,}}^{2\mathsf q_0} &{} -\mathsf q_1 \end{pmatrix}. \end{aligned}$$
(10.7)

Recall that \(U_0\) denotes the neighborhood of the origin given in Proposition 8.2. We will also need some control on \(\mathsf q\) inside \(U_0\).

For the next result, set

$$\begin{aligned} \begin{aligned}&F_{\beta }(\mathsf s):=\int _0^\infty v^{\beta }\log (1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s-v})\textrm{d}v, \\&\mathsf q_0^{(1)}=\mathsf q_0^{(1)}(\mathsf s,\mathsf t):=\frac{\mathsf t^{1/2}}{2\pi a^{1/2}}F_{-1/2}(\mathsf s), \\&\mathsf q^{(1)}(z)=\mathsf q^{(1)}(z\mid \mathsf s,\mathsf t):=\frac{\mathsf t^{1/2}}{2\pi a^{1/2}\mathsf m(z)^{1/2}}F_{-1/2}(\mathsf s), \end{aligned} \end{aligned}$$
(10.8)

which are n-independent quantities. The index \(\beta \) does not have any specific meaning for what comes later, but it arises naturally in the asymptotic analysis leading to the next result.
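
The function \(F_{\beta }\) is a simple one-dimensional integral. The sketch below (ours, included only as a sanity check and playing no role in what follows) evaluates \(F_{-1/2}(\mathsf s)\) and compares it with the elementary approximation \(\sqrt{\pi }{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s}\) obtained from \(\log (1+u)\approx u\), valid for large \(\mathsf s\).

```python
# Numerical evaluation (ours) of F_{-1/2}(s) from (10.8), compared with the
# elementary large-s approximation sqrt(pi)*exp(-s).
import numpy as np
from scipy.integrate import quad

def F(beta, s):
    g = lambda v: v ** beta * np.log1p(np.exp(-s - v))
    # split at 1 to handle the integrable v^beta singularity at the origin
    return quad(g, 0.0, 1.0)[0] + quad(g, 1.0, np.inf)[0]

for s in [0.0, 2.0, 5.0, 10.0]:
    print(f"s = {s:5.1f}   F_(-1/2)(s) = {F(-0.5, s):.8e}   sqrt(pi)*exp(-s) = {np.sqrt(np.pi) * np.exp(-s):.8e}")
```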

Lemma 10.2

For any fixed \(\mathsf s_0>0\), the estimate

$$\begin{aligned} \mathsf q_0=\frac{1}{n^{1/3}}\mathsf q_0^{(1)}+\mathcal {O}(n^{-2/3}),\quad n\rightarrow \infty , \end{aligned}$$

is valid uniformly for \(\mathsf s\ge -\mathsf s_0\). In addition, the estimates

$$\begin{aligned} \mathsf q(z)=\frac{1}{n^{1/3}}\mathsf q^{(1)}(z)+\mathcal {O}(n^{-2/3}),\quad \text {and}\quad \mathsf q'(z)=\mathcal {O}(n^{-1/3}), \end{aligned}$$

are valid uniformly for z on compacts of \(\mathbb {C}{\setminus } [-a,0]\) (in particular on \(\partial U_0\)) and uniformly for \(\mathsf s\ge -\mathsf s_0\), and carry through to boundary values \(\mathsf q_\pm (x)\) for x along \(\mathbb {R}{\setminus } \{-a,0\}\).

Finally,

$$\begin{aligned}\textbf{M}(z)=\left( \textbf{I}+\frac{1}{n^{1/3}}\left( \mathsf q^{(1)}_0\varvec{\sigma }_3 -\mathsf q^{(1)}(z) \textbf{M}(z)\varvec{\sigma }_3\textbf{M}(z)^{-1}\right) + \mathcal {O}(n^{-2/3}) \right) \textbf{G}(z),\quad n\rightarrow \infty ,\end{aligned}$$

uniformly for \(z\in \partial U_0\) and \(s\ge -\mathsf s_0\).

Proof

The estimate for \(\mathsf q_0(n)\) follows immediately from an application of Proposition A.2. The estimates for \(\mathsf q(z)\) and \(\mathsf q'(z)\) also follow from Proposition A.2, once we observe that the integrals defining them can be slightly deformed to the upper/lower half plane in a neighborhood of the unique point in the intersection \(\partial U_0\cap (-a,0)\).

Finally, using the first part of the statement and the fact that \(\textbf{M}\) is bounded for \(z\in \partial U_0\) and independent of n, we expand the exponentials in series and write

$$\begin{aligned} \textbf{M}(z)\textbf{G}(z)^{-1}&=\textbf{M}(z){{\,\mathrm{\mathrm e}\,}}^{-\mathsf q(z)\varvec{\sigma }_3}\textbf{M}(z)^{-1}{{\,\mathrm{\mathrm e}\,}}^{\mathsf q_0\varvec{\sigma }_3}\\&=\left( \textbf{I}-\mathsf q(z)\textbf{M}(z)\varvec{\sigma }_3\textbf{M}(z)^{-1}+\mathcal {O}(n^{-2/3})\right) \left( \textbf{I}+\mathsf q_0\varvec{\sigma }_3+\mathcal {O}(n^{-2/3})\right) , \end{aligned}$$

and the last claim follows after rearranging the terms in this expansion. \(\square \)

10.4 Local parametrix near \(-a\)

The local parametrix \(\textbf{P}=\textbf{P}^{(a)}\) near \(z=-a\) is constructed in a neighborhood of this point, which without loss of generality can be taken to be the disk \(D_\delta (-a)\) of radius \(\delta \) around \(-a\), and it is the solution to the following RHP.

\(\textbf{P}^{({a})}\)-1.:

The matrix \(\textbf{P}^{(a)}:D_\delta (-a){\setminus } \Gamma _{\textbf{S}}\rightarrow \mathbb {C}^{2\times 2}\) is analytic.

\(\textbf{P}^{({a})}\)-2.:

For \(z\in \Gamma _{\textbf{S}}\cap D_\delta (-a)\), it satisfies the jump \(\textbf{P}^{(a)}_+(z)=\textbf{P}^{(a)}_-(z)\textbf{J}_{\textbf{S}}(z)\).

\(\textbf{P}^{({a})}\)-3.:

Uniformly for \(z\in \partial D_\delta (-a)\),

$$\begin{aligned} \textbf{P}^{(a)}(z)=\left( \textbf{I}+ o (1)\right) \textbf{G}(z),\quad n\rightarrow \infty . \end{aligned}$$
\(\textbf{P}^{({a})}\)-4.:

The matrix \(\textbf{P}^{(a)}\) remains bounded as \(z\rightarrow -a\).

The asymptotic condition \(\textbf{P}^{({a})}\)-3. above will be improved to (10.13) below.

From the conditions on Q, we know that there exists a value \(\eta >0\) for which

$$\begin{aligned} {{\,\textrm{Re}\,}}Q(z)\ge 2\eta ,\quad |z+a|<\delta . \end{aligned}$$

This value is uniform for \(\mathsf t\in [\mathsf t_0,1/\mathsf t_0],\) for any \(\mathsf t_0\in (0,1)\) fixed, and it is independent of \(\mathsf s\in \mathbb {R}\). In particular, once we fix \(\mathsf s_0>0\) and assume that \(\mathsf s\ge -\mathsf s_0\), from this inequality we obtain

$$\begin{aligned} |{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s-n^{2/3}Q(z)}|\le {{\,\mathrm{\mathrm e}\,}}^{-n^{2/3}\eta },\quad |z+a|<\delta ,\quad \text {for large enough } n. \end{aligned}$$

This way, for \(n>0\) sufficiently large the function \(\sigma _n\) admits an analytic continuation to the whole disk \(D_\delta (-a)\), and this continuation does not have zeros on the same disk. Thus, a branch of \(\log \sigma _n\) is well defined in a neighborhood of \(z=-a\), and the just mentioned estimate also shows that

$$\begin{aligned} \log \sigma _n(z)=\mathcal {O}({{\,\mathrm{\mathrm e}\,}}^{-\eta n^{2/3}}),\quad n\rightarrow \infty , \end{aligned}$$
(10.9)

uniformly for z in a neighborhood of \(z=-a\) and \(\mathsf s\ge -\mathsf s_0\).

With this in mind, the parametrix \(\textbf{P}^{(a)}\) can be constructed explicitly out of Airy functions in a standard way, see for instance [42, Section 7.6]. Since it involves a somewhat nonstandard analytic prefactor in the matching, accounting for \(\sigma _n\), we briefly go over this construction.

Recall the contour \(\mathsf \Sigma \) introduced in (4.1). With appropriate Airy functions, we construct a \(2\times 2\) matrix \(\varvec{\Psi }_{{{\,\textrm{Ai}\,}}}\), which is analytic on \(\mathbb {C}{\setminus } \mathsf \Sigma \) and satisfies

$$\begin{aligned} \varvec{\Psi }_{{{\,\textrm{Ai}\,}},+}(\zeta )=\varvec{\Psi }_{{{\,\textrm{Ai}\,}},-}(\zeta )\times {\left\{ \begin{array}{ll} \textbf{I}-\textbf{E}_{12}, &{} \zeta \in \mathsf \Sigma _0, \\ \textbf{I}-\textbf{E}_{21}, &{} \zeta \in \mathsf \Sigma _1\cup \mathsf \Sigma _3, \\ -\textbf{E}_{12}+\textbf{E}_{21}, &{} \zeta \in \mathsf \Sigma _2, \\ \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} \varvec{\Psi }_{{{\,\textrm{Ai}\,}}}(\zeta )=\zeta ^{\varvec{\sigma }_3/4}\textbf{U}_0\left( \textbf{I}+\mathcal {O}(\zeta ^{-3/2})\right) {{\,\mathrm{\mathrm e}\,}}^{-\frac{2}{3}\zeta ^{3/2}\varvec{\sigma }_3},\quad \zeta \rightarrow \infty . \end{aligned}$$
(10.10)

In fact, \(\varvec{\Psi }_{{{\,\textrm{Ai}\,}}}\) can be obtained from a modification of the matrix \({{\varvec{\Phi }}}_\textrm{Ai}\) which we previously used in (6.4). We will not need its explicit form, so we refrain from writing it down.
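We do record, however, that the matching condition (10.10) ultimately rests on the classical asymptotics \({{\,\textrm{Ai}\,}}(\zeta )\sim {{\,\mathrm{\mathrm e}\,}}^{-\frac{2}{3}\zeta ^{3/2}}/(2\sqrt{\pi }\,\zeta ^{1/4})\) as \(\zeta \rightarrow +\infty \); the short check below (ours, purely a sanity check) illustrates this scalar asymptotics with scipy.

```python
# Sanity check (ours) of the classical Airy asymptotics behind (10.10):
# Ai(z) ~ exp(-(2/3) z^{3/2}) / (2 sqrt(pi) z^{1/4}) as z -> +infinity.
import numpy as np
from scipy.special import airy

for z in [5.0, 10.0, 20.0]:
    ratio = airy(z)[0] * 2.0 * np.sqrt(np.pi) * z ** 0.25 * np.exp(2.0 / 3.0 * z ** 1.5)
    print(f"z = {z:5.1f}   2*sqrt(pi)*z^(1/4)*exp((2/3)z^(3/2))*Ai(z) = {ratio:.8f}")
```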

Using the properties of \(\phi \) we construct a conformal map \(\varphi \) from a neighborhood of \(-a\) to a neighborhood of the origin, with

$$\begin{aligned} \varphi (-a)=0, \quad \varphi '(-a)<0\quad \text {and}\quad \frac{2}{3}\varphi (z)^{3/2}=\phi (z)+2\pi \textrm{i}\mathbb {Z}, \; z\in D_\delta (-a){\setminus } \Gamma _{\textbf{S}}. \end{aligned}$$

With standard arguments (see for instance the proof of Proposition 10.5 below for similar arguments), one shows that the matrix

$$\begin{aligned} \textbf{F}^{(a)}(z):=\textbf{G}(z){{\,\mathrm{\mathrm e}\,}}^{\frac{1}{2}\log \sigma _n(z)\varvec{\sigma }_3}\textbf{U}_0^{-1}(n^{2/3}\varphi (z))^{-\varvec{\sigma }_3/4} \end{aligned}$$
(10.11)

is analytic on a neighborhood of \(z=-a\). The local parametrix then takes the form

$$\begin{aligned} \textbf{P}^{(a)}(z)=\textbf{F}^{(a)}(z)\varvec{\Psi }_{{{\,\textrm{Ai}\,}}}(n^{2/3}\varphi (z)){{\,\mathrm{\mathrm e}\,}}^{-\frac{1}{2}\log \sigma _n(z)\varvec{\sigma }_3}{{\,\mathrm{\mathrm e}\,}}^{n\phi (z)\varvec{\sigma }_3},\quad z\in D_\delta (-a){\setminus } \Gamma _{\textbf{S}}. \end{aligned}$$
(10.12)

As a result, the matching condition \(\textbf{P}^{({a})}\)-3. in fact holds in the stronger form

$$\begin{aligned} \textbf{P}^{(a)}(z)=\textbf{G}(z)\left( \textbf{I}+\mathcal {O}(n^{-1})\right) ,\quad n\rightarrow \infty , \end{aligned}$$
(10.13)

which is valid uniformly for \(z\in \partial D_\delta (-a)\) and uniformly for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t\in [\mathsf t_0,1/\mathsf t_0]\), for any \(\mathsf s_0>0\) and \(\mathsf t_0\in (0,1)\) fixed.

10.5 Local parametrix near the origin

The local parametrix near the origin requires the model problem from Sect. 4.

Recall the neighborhood \(U_0\) of the origin introduced in Proposition 8.2. The initial local parametrix we seek should be the solution to the following RHP.

\(\textbf{P}^{(0)}\)-1.:

The matrix \(\textbf{P}^{(0)}:U_0{\setminus } \Gamma _{\textbf{S}}\rightarrow \mathbb {C}^{2\times 2}\) is analytic.

\(\textbf{P}^{(0)}\)-2.:

For \(z\in \Gamma _{\textbf{S}}\cap U_0\), it satisfies the jump \(\textbf{P}^{(0)}_+(z)=\textbf{P}^{(0)}_-(z)\textbf{J}_{\textbf{S}}(z)\).

\(\textbf{P}^{(0)}\)-3.:

Uniformly for \(z\in \partial U_0\),

$$\begin{aligned} \textbf{P}^{(0)}(z)=\left( \textbf{I}+ o (1)\right) \textbf{G}(z),\quad n\rightarrow \infty . \end{aligned}$$
\(\textbf{P}^{(0)}\)-4.:

\(\textbf{P}^{(0)}\) remains bounded as \(z\rightarrow 0\).

To construct the solution \(\textbf{P}^{(0)}\) required above, some work is needed. Aiming at removing \(\phi \) from the jump of \(\textbf{P}^{(0)}\), we modify this RHP via the transformation

$$\begin{aligned} \textbf{L}(z)=\textbf{P}^{(0)}(z){{\,\mathrm{\mathrm e}\,}}^{-n\phi (z)\varvec{\sigma }_3 },\quad z\in U_0{\setminus } \Gamma _{\textbf{S}}. \end{aligned}$$
(10.14)

Then the matrix \(\textbf{L}\), should it exist, must satisfy the following RHP.

\(\textbf{L}\)-1.:

The matrix \(\textbf{L}:U_0{\setminus } \Gamma _{\textbf{S}}\rightarrow \mathbb {C}^{2\times 2}\) is analytic.

\(\textbf{L}\)-2.:

For \(z\in \Gamma _{\textbf{S}}\cap U_0\), it satisfies the jump \(\textbf{L}_+(z)=\textbf{L}_-(z)\textbf{J}_{\textbf{L}}(z)\), with

$$\begin{aligned} \textbf{J}_{\textbf{L}}(z):={\left\{ \begin{array}{ll} \textbf{I}+\sigma _n(z)\textbf{E}_{12}, &{} z\in (0,\infty )\cap U_0, \\ \sigma _n(z)\textbf{E}_{12}-\dfrac{1}{\sigma _n(z)}\textbf{E}_{21}, &{} z\in (-a,0)\cap U_0, \\ \textbf{I}+\dfrac{1}{\sigma _n(z)}\textbf{E}_{21}, &{} z\in \left( \partial \mathcal G^+\cup \partial \mathcal G^-\right) \cap U_0. \end{array}\right. } \end{aligned}$$
(10.15)
\(\textbf{L}\)-3.:

Uniformly for \(z\in \partial U_0\),

$$\begin{aligned} \textbf{L}(z)=\left( \textbf{I}+ o (1)\right) \textbf{G}(z){{\,\mathrm{\mathrm e}\,}}^{-n\phi (z)\varvec{\sigma }_3 },\quad n\rightarrow \infty . \end{aligned}$$
\(\textbf{L}\)-4.:

The matrix \(\textbf{L}\) remains bounded as \(z\rightarrow 0\).

Based on the usual way of matching the local parametrix with a model problem, one is tempted to move the non-constant part of the jump, namely \(\sigma _n\), to the behavior at \(\partial U_0\) as well. This would be done by including a term of the form \(\sigma _n^{\varvec{\sigma }_3/2}={{\,\mathrm{\mathrm e}\,}}^{\varvec{\sigma }_3\log \sigma _n/2}\) in the transformation \(\textbf{P}\mapsto \textbf{L}\), in much the same way we did in (10.12). However, as we discussed in Sect. 3.1, for any \(\mathsf s\in \mathbb {R}\) fixed there are poles of \(\sigma _n\) accumulating too fast near the origin, so \(\sigma _n\) fails to be analytic in any small neighborhood of the origin and we have to stick to the non-constant jumps as above.

The RHP-\(\textbf{L}\) has a solution if, and only if, RHP-\(\textbf{P}^{(0)}\) has a solution. Such solutions need not be unique, as one could possibly improve on the asymptotic matching conditions on \(\partial U_0\). The goal of the rest of this section is to describe a solution \(\textbf{L}\), and consequently a solution \(\textbf{P}^{(0)}\) related by (10.14), with a more explicit control of the error term in RHP-\(\textbf{L}\)-3. For that, we use the model problem thoroughly studied in Sects. 4 and 7.

The construction that follows needs several quantities that appeared before. These are the conformal map \(\psi \) appearing in Proposition 8.2, the function \(\mathsf H_Q\) introduced with the help of Proposition 8.3, the model RHP solution \({\varvec{\Phi }}={\varvec{\Phi }}(\cdot \mid \mathsf h)\) introduced in Sect. 4 and further discussed in Sect. 7, and the constant \(\mathsf q_0\) and matrices \(\textbf{U}_0\) and \(\textbf{M}(z)\) from (10.3)–(10.5). With all these quantities at hand, we set

$$\begin{aligned} \begin{aligned}&\textbf{L}(z):=\textbf{F}_n(z){\varvec{\Phi }}_n(z),\quad z\in U_0{\setminus } \Gamma _{\textbf{S}},\qquad \text {with the choices} \\&\widehat{{\varvec{\Phi }}}_n(\zeta )={\varvec{\Phi }}\left( \zeta \mid \mathsf h=\mathsf h_n\right) , \quad \mathsf h_n(\zeta ):=\mathsf s+n^{2/3}\mathsf H_Q(\zeta /n^{2/3}), \\&{{\varvec{\Phi }}}_n(z):=\widehat{{\varvec{\Phi }}}_n\left( \zeta =n^{2/3}\psi (z)\right) ,\\&{\textbf{F}}_n(z):=\textbf{M}(z)\textbf{U}_0(n^{2/3}\psi (z))^{-\varvec{\sigma }_3/4}=\textbf{U}_0\mathsf m(z)^{\varvec{\sigma }_3/4}(n^{2/3}\psi (z))^{-\varvec{\sigma }_3/4}. \end{aligned} \end{aligned}$$
(10.16)

With the identification \(\tau =n^{2/3}\) and thanks to Proposition 8.3, the function \(\mathsf h_n\) becomes admissible in the sense of Definition 4.1, so the notation for the corresponding solution \({\varvec{\Phi }}_n={\varvec{\Phi }}(\cdot \mid \mathsf h_n)\) chosen above is consistent with the solution \({\varvec{\Phi }}_\tau (\zeta )\big |_{\tau =n^{2/3}}\) in (4.5). For later reference, we keep track of the expansion

$$\begin{aligned} \mathsf h_n(\zeta )=\mathsf s-\mathsf c_{\mathsf H}\zeta +\mathcal {O}(\zeta ^{2}),\quad \zeta \rightarrow 0,\quad \text {where we recall that}\quad \mathsf c_{\mathsf H}=\frac{\mathsf t}{\mathsf c_V}, \end{aligned}$$
(10.17)

compare (8.7) with Definition 4.1-(ii).

Proposition 10.3

Fix \(\mathsf s_0>0\) and \(\mathsf t_0\in (0,1)\). There exists \(n_0=n_0(\mathsf s_0,\mathsf t_0)\) such that, for any \(\mathsf s\ge -\mathsf s_0\) and any \(\mathsf t\in [\mathsf t_0,1/\mathsf t_0]\), the matrix \({{\varvec{\Phi }}}_n\) exists for every \(n\ge n_0\). This matrix satisfies the jump

$$\begin{aligned} {{\varvec{\Phi }}}_{n,+}(z)={{\varvec{\Phi }}}_{n,-}(z)\textbf{J}_{\textbf{L}}(z),\quad z\in U_0\cap \Gamma _{\textbf{S}}. \end{aligned}$$

Furthermore, for the matrix

$$\begin{aligned} {{\varvec{\Phi }}}^{(1)}_n:={\varvec{\Phi }}^{(1)}(\mathsf h=\mathsf h_n),\quad \text {with }{\varvec{\Phi }}^{(1)}(\mathsf h) \text { as in (4.3)}, \end{aligned}$$

the asymptotic expansion

$$\begin{aligned} {{\varvec{\Phi }}}_n(z)=\left( \textbf{I}+\frac{1}{n^{2/3}}\frac{1}{\psi (z)}{{\varvec{\Phi }}}_n^{(1)}+\mathcal {O}(n^{-4/3})\right) (n^{2/3}\psi (z))^{\varvec{\sigma }_3/4} \textbf{U}_0^{-1} {{\,\mathrm{\mathrm e}\,}}^{-n\phi (z)\varvec{\sigma }_3} ,\quad n\rightarrow \infty , \end{aligned}$$

holds true uniformly for \(z\in \partial U_0\) and uniformly for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t\in [\mathsf t_0,1/\mathsf t_0]\).

Proof

The existence of \({{\varvec{\Phi }}}_n\) is granted by the first claim of Theorem 7.1. The jumps for \({{\varvec{\Phi }}}_{n,+}\) follow from the jumps in (4.2), the definition of \(\mathsf h_n\) taken in (10.16), the correspondence (8.6) and the conformality of the change of variables \(\zeta =n^{2/3}\psi (z)\). The asymptotic expansion for \({{\varvec{\Phi }}}_n\) is immediate from (4.3). \(\square \)

The introduction of the additional notation \(\widehat{{\varvec{\Phi }}}_n\), which plays the role of the local parametrix in the variable \(\zeta \), is convenient for later calculations. At that stage, some of its properties will be needed, and we record them in the next result. For the formal statement, we recall that \(\Phi (\xi \mid \mathsf S,\mathsf T)\) is the solution to the integro-differential PII that already appeared in (2.11) and (5.15), and \({\varvec{\Phi }}_0^{(1)}\) is the residue matrix from (5.17) that collects the functions \(\mathsf P(\mathsf S,\mathsf T)\) and \(\mathsf Q(\mathsf S,\mathsf T)\) which, in turn, are related to \(\Phi \) as explained in (5.13) et seq.

Proposition 10.4

Fix \(\mathsf s_0>0\) and \(\mathsf t_0\in (0,1)\) and \(\nu \in (0,2/3)\), and let \(\mathsf c_{\mathsf H}\) be as in (8.7). The following asymptotic formulas hold true uniformly for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t\in [\mathsf t_0,1/\mathsf t_0]\).

The matrix \({\varvec{\Phi }}_n^{(1)}\) from Proposition 10.3 satisfies

$$\begin{aligned} \left( {{\varvec{\Phi }}}_n^{(1)}\right) _{21}=\frac{\textrm{i}}{\mathsf c_{\mathsf H}^{1/2}}\left( \mathsf p(\mathsf s,\mathsf c_{\mathsf H})-\frac{\mathsf s^2}{4\mathsf c_{\mathsf H}^{3/2}}\right) +\mathcal {O}(n^{-\nu }),\quad n\rightarrow \infty . \end{aligned}$$
(10.18)

Furthermore, for \({{\varvec{\Phi }}}_0\) being the solution to the RHP in (5.9)–(5.10), the estimate

$$\begin{aligned} \widehat{{\varvec{\Phi }}}_{n,+}(\zeta )=(\textbf{I}+\mathcal {O}(n^{-\nu })){{\varvec{\Phi }}}_{0,+}(\zeta \mid \mathsf s,\mathsf c_{\mathsf H}),\quad n\rightarrow \infty , \end{aligned}$$
(10.19)

holds true uniformly for \(\zeta \in \mathsf \Sigma \) with \(|\zeta |\le n^{\frac{1}{3}-\frac{1}{2}\nu }\).

Proof

It is immediate from the identification \(\tau =n^{2/3}\), Eq. (10.16) and Theorem 7.1. \(\square \)

We now verify that \(\textbf{L}\) given as in (10.16) indeed solves the RHP-\(\textbf{L}\).

Proposition 10.5

The matrix \(\textbf{L}\) solves the RHP-\(\textbf{L}\).

Furthermore, setting

$$\begin{aligned} \textbf{L}_1(z):=\frac{\left( {{\varvec{\Phi }}}^{(1)}_n\right) _{21}}{\psi (z)^{1/2}}\textbf{M}(z)\textbf{U}_0 \textbf{E}_{21} \textbf{U}_0^{-1}\textbf{M}(z)^{-1}, \quad z\in \partial U_0, \end{aligned}$$
(10.20)

the condition RHP-\(\textbf{L}\)-3 is improved to

$$\begin{aligned} \textbf{L}(z)=\left( \textbf{I}+\frac{1}{n^{1/3}}\textbf{L}_1(z) + \mathcal {O}(n^{-2/3})\right) \textbf{M}(z){{\,\mathrm{\mathrm e}\,}}^{-n\phi (z)\varvec{\sigma }_3},\quad n\rightarrow \infty , \end{aligned}$$
(10.21)

uniformly for \(z\in \partial U_0\) and \(\mathsf s\ge -\mathsf s_0\), \(\mathsf t\in [\mathsf t_0,1/\mathsf t_0]\), for any \(\mathsf s_0>0,\mathsf t_0\in (0,1)\).

Proof

First we prove that \({\textbf{F}}\) is analytic. For that, notice that a jump for it may come only from the factors \(\textbf{M}\) and \(\psi ^{1/4}\), and therefore only possibly in the interval \((-a,0)\cap U_0\). However, along this interval it is simple to compute that \((\psi ^{1/4})_\pm = {{\,\mathrm{\mathrm e}\,}}^{\pm \pi \textrm{i}/4}|\psi |^{1/4}\), and also that

$$\begin{aligned} \textbf{U}_0^{-1}\textbf{J}_{\textbf{M}}(x)\textbf{U}_0=\textrm{i}\varvec{\sigma }_3, \end{aligned}$$

from which we obtain

$$\begin{aligned} {\textbf{F}}_-(x)^{-1}{\textbf{F}}_+(x)=n^{\varvec{\sigma }_3/6}|\psi |^{\varvec{\sigma }_3/4}{{\,\mathrm{\mathrm e}\,}}^{-\pi \textrm{i}\varvec{\sigma }_3/4}\textrm{i}\varvec{\sigma }_3 {{\,\mathrm{\mathrm e}\,}}^{-\pi \textrm{i}\varvec{\sigma }_3/4}|\psi |^{-\varvec{\sigma }_3/4}n^{-\varvec{\sigma }_3/6}=\textbf{I}, \end{aligned}$$
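For the reader's convenience, the middle cancellation in the last display can be spelled out entrywise:

$$\begin{aligned} {{\,\mathrm{\mathrm e}\,}}^{-\pi \textrm{i}\varvec{\sigma }_3/4}\,\textrm{i}\varvec{\sigma }_3\, {{\,\mathrm{\mathrm e}\,}}^{-\pi \textrm{i}\varvec{\sigma }_3/4}=\textrm{i}\varvec{\sigma }_3\,{{\,\mathrm{\mathrm e}\,}}^{-\pi \textrm{i}\varvec{\sigma }_3/2}=\textrm{i}\begin{pmatrix} 1 &{} 0 \\ 0 &{} -1 \end{pmatrix}\begin{pmatrix} -\textrm{i} &{} 0 \\ 0 &{} \textrm{i} \end{pmatrix}=\textbf{I}, \end{aligned}$$

while the diagonal factors \(n^{\pm \varvec{\sigma }_3/6}\) and \(|\psi |^{\pm \varvec{\sigma }_3/4}\) commute with everything in between and cancel in pairs.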

so \({\textbf{F}}\) is indeed analytic across \((-a,0)\cap U_0\). In principle, \({\textbf{F}}\) may have an isolated singularity at 0, but because

$$\begin{aligned} \textbf{M}(z),\;(n^{2/3}\psi (z))^{-\varvec{\sigma }_3/4}=\mathcal {O}(z^{-1/4})\quad \text {as }z\rightarrow 0, \end{aligned}$$

we see that \(\textbf{F}(z)=\mathcal {O}(z^{-1/2})\) as \(z\rightarrow 0\), so \(z=0\) is a removable singularity. In view of the definition of \(\textbf{L}\) in (10.16) and the jump for \({\varvec{\Phi }}_n\) from Proposition 10.3, the analyticity of \(\textbf{F}\) is enough to conclude that \(\textbf{L}\) satisfies RHP-\(\textbf{L}\)-1.

Knowing that \({\textbf{F}}\) is analytic, the jump for \(\textbf{L}\) is precisely the same as the jump for \({{\varvec{\Phi }}}_n\), and by Proposition 10.3 we thus have that \(\textbf{L}\) satisfies RHP-\(\textbf{L}\)-2.

Finally, we turn to the asymptotic condition (10.21). For that, we use the asymptotic condition for \({{\varvec{\Phi }}}_n\) given by Proposition 10.3 and the definition of \({\textbf{F}}\) and write

$$\begin{aligned} \textbf{L}(z)= & {} \textbf{M}(z)\textbf{U}_0n^{-\varvec{\sigma }_3/6}\psi (z)^{-\varvec{\sigma }_3/4}\left( \textbf{I}+ \frac{1}{n^{2/3}\psi (z)}{{\varvec{\Phi }}}^{(1)}_n+\mathcal {O}(n^{-4/3}) \right) \nonumber \\{} & {} \times n^{\varvec{\sigma }_3/6}\psi (z)^{\varvec{\sigma }_3/4}\textbf{U}_0^{-1} {{\,\mathrm{\mathrm e}\,}}^{-n\phi (z)\varvec{\sigma }_3},\quad n\rightarrow \infty , \end{aligned}$$

and where the error is uniform for \(z\in \partial U_0\) and \(\mathsf s,\mathsf t\) as claimed. Since \(\partial U_0\) remains within a positive distance from the unique zero \(z=0\) of \(\psi \), the function \(|\psi ^{1/4}|\) remains bounded away from zero there, so the corresponding conjugation of the error by the term \(\psi ^{\varvec{\sigma }_3/4}\) does not change the order of the error. Next, the conjugation by \(n^{\varvec{\sigma }_3/6}\) multiplies the error by at most a factor of order \(n^{1/3}\), and only in the (2, 1)-entry. The remaining term \(\textbf{M}\) is bounded along \(\partial U_0\), so it can be commuted with the error term above without changing its order, leading to (10.21). And (10.21) is indeed an improvement of the asymptotic condition RHP-\(\textbf{L}\)-3, because from Lemma 10.2 we know that \(\textbf{M}=(\textbf{I}+\mathcal {O}(n^{-1/3}))\textbf{G}\) uniformly as claimed. \(\square \)

We now trace back the transformation \(\textbf{L}\mapsto \textbf{P}^{(0)}\) and construct the latter as

$$\begin{aligned} \textbf{P}^{(0)}(z):={\textbf{F}}_n(z){{\varvec{\Phi }}}_n(z){{\,\mathrm{\mathrm e}\,}}^{n\phi (z)\varvec{\sigma }_3},\quad z\in U_0{\setminus } \mathsf \Sigma _{\textbf{S}}, \end{aligned}$$
(10.22)

recalling that \({{\varvec{\Phi }}}_n\) and \(\textbf{F}_n\) are introduced in (10.16). With this construction and thanks to Lemma 10.2 and (10.21), the matching condition RHP-\(\textbf{P}^{(0)}\)-3 becomes

$$\begin{aligned} \textbf{P}^{(0)}(z) =\left( \textbf{I}+\frac{1}{n^{1/3}}\left( \textbf{L}_1(z)+\mathsf q^{(1)}_0\varvec{\sigma }_3 -\mathsf q^{(1)}(z) \textbf{M}(z)\varvec{\sigma }_3\textbf{M}(z)^{-1}\right) +\mathcal {O}(n^{-2/3})\right) \textbf{G}(z),\quad n\rightarrow \infty , \end{aligned}$$
(10.23)

and the errors are uniform for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\).

10.6 Final transformation

The final transformation combines the local and global parametrices to remove the non-decaying jumps from \(\textbf{S}\).

Set

$$\begin{aligned} U:=U_0\cup D_\delta (-a), \end{aligned}$$

orienting \(\partial U_0\) and \(\partial D_\delta (-a)\) in the clockwise direction. With \(\textbf{P}^{(a)}\) being the local parametrix near \(-a\), \(\textbf{P}^{(0)}\) the local parametrix near the origin and \(\textbf{G}\) the global parametrix, we introduce a last parametrix \(\textbf{P}\) with unified notation as

$$\begin{aligned} \textbf{P}(z)= {\left\{ \begin{array}{ll} \textbf{P}^{(a)}(z),&{} z\in D_\delta (-a){\setminus } \Gamma _{\textbf{S}}, \\ \textbf{P}^{(0)}(z), &{} z\in U_0{\setminus } \Gamma _{\textbf{S}}, \\ \textbf{G}(z), &{} \text {elsewhere on } \mathbb {C}{\setminus } (\Gamma _{\textbf{S}}\cup \partial U). \end{array}\right. } \end{aligned}$$

The final transformation \(\textbf{S}\mapsto \textbf{R}\) is then

$$\begin{aligned} \textbf{R}(z):=\textbf{S}(z)\textbf{P}(z)^{-1},\quad z\in \mathbb {C}{\setminus } (\Gamma _{\textbf{S}}\cup \partial U). \end{aligned}$$

With this transformation, the jumps that \(\textbf{S}\) has in common with the parametrices \(\textbf{G}\) and \(\textbf{P}\) get canceled, and we remain with jumps only away from \([-a,0]\) and U. With

$$\begin{aligned} \Gamma _{\textbf{R}}:=\partial U\cup \left( \Gamma _{\textbf{S}}{\setminus } \left( [-a,0]\cup U\right) \right) , \end{aligned}$$

the matrix \(\textbf{R}\) satisfies the following RHP.

R-1.:

The matrix \(\textbf{R}:\mathbb {C}{\setminus } \Gamma _{\textbf{R}}\rightarrow \mathbb {C}^{2\times 2}\) is analytic.

R-2.:

For \(z\in \Gamma _{\textbf{R}}\), it satisfies the jump \(\textbf{R}_+(z)=\textbf{R}_-(z)\textbf{J}_{\textbf{R}}(z)\), with

$$\begin{aligned} \textbf{J}_{\textbf{R}}(z)= {\left\{ \begin{array}{ll} \textbf{P}(z)\textbf{G}(z)^{-1},&{} z\in \partial U, \\ \textbf{G}(z)\textbf{J}_{\textbf{S}}(z)\textbf{G}(z)^{-1}, &{} z\in \Gamma _{\textbf{R}}{\setminus } \partial U. \end{array}\right. } \end{aligned}$$
(10.24)
R-3.:

With \(\textbf{R}_1:=\textbf{S}_1-\textbf{G}_1\),

$$\begin{aligned} \textbf{R}(z)=\textbf{I}+\frac{1}{z}\textbf{R}_1+\mathcal {O}(1/z^2),\quad z\rightarrow \infty . \end{aligned}$$
(10.25)
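This expression for \(\textbf{R}_1\) comes from the fact that \(\textbf{R}=\textbf{S}\textbf{G}^{-1}\) near infinity: writing, as usual, \(\textbf{S}(z)=\textbf{I}+\textbf{S}_1/z+\mathcal {O}(z^{-2})\) and \(\textbf{G}(z)=\textbf{I}+\textbf{G}_1/z+\mathcal {O}(z^{-2})\) as \(z\rightarrow \infty \), one has

$$\begin{aligned} \textbf{R}(z)=\textbf{S}(z)\textbf{G}(z)^{-1}=\left( \textbf{I}+\frac{\textbf{S}_1}{z}+\mathcal {O}(z^{-2})\right) \left( \textbf{I}-\frac{\textbf{G}_1}{z}+\mathcal {O}(z^{-2})\right) =\textbf{I}+\frac{\textbf{S}_1-\textbf{G}_1}{z}+\mathcal {O}(z^{-2}),\quad z\rightarrow \infty . \end{aligned}$$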

To conclude that \(\textbf{R}\) is asymptotically close to the identity, we control its jumps. We use the matrix norm notation introduced in (6.1)–(6.3).

Proposition 10.6

Fix \(\mathsf s_0>0\) and \(\mathsf t_0\in (0,1)\). There is \(\eta >0\) for which the jump matrix for \(\textbf{R}\) satisfies the estimates

$$\begin{aligned} \Vert \textbf{J}_{\textbf{R}}-\textbf{I}\Vert _{L^1\cap L^2\cap L^\infty (\Gamma _{\textbf{R}}{\setminus } \partial U)}=\mathcal {O}({{\,\mathrm{\mathrm e}\,}}^{-\eta n}) \end{aligned}$$

and

$$\begin{aligned} \Vert \textbf{J}_{\textbf{R}}-\textbf{I}\Vert _{L^1\cap L^2\cap L^\infty (\partial U_0)}=\mathcal {O}(n^{-1/3}),\quad \Vert \textbf{J}_{\textbf{R}}-\textbf{I}\Vert _{L^1\cap L^2\cap L^\infty (\partial D_\delta (-a))}=\mathcal {O}(n^{-1}) \end{aligned}$$

as \(n\rightarrow \infty \), uniformly for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\).

Proof

The function \(\textbf{G}\) remains uniformly bounded away from the interval \([-a,0]\), and the first claimed decay then follows from (10.24) and Proposition 10.1.

Using (10.13) and (10.24), we obtain the uniform estimate of the jump along \(\partial D_\delta (-a)\) directly. Since \(\partial D_\delta (-a)\) is bounded, the \(L^1\) and \(L^2\) estimates also follow. Finally, the estimate for the jump along \(\partial U_0\) follows similarly, once we recall (10.23). \(\square \)

With Proposition 10.6 at hand, the small norm theory of Riemann–Hilbert problems yields

Theorem 10.7

Fix \(\mathsf s_0>0\) and \(\mathsf t_0>0\). The matrix \(\textbf{R}\) satisfies

$$\begin{aligned} \textbf{R}(z)= \textbf{I}+\mathcal {O}(n^{-1/3}),\quad \textbf{R}'(z)=\mathcal {O}(n^{-1/3}) \quad \text {and}\quad \textbf{R}(w)^{-1}\textbf{R}(z)=\textbf{I}+\mathcal {O}\left( \frac{z-w}{n^{1/3}}\right) ,\quad n\rightarrow \infty , \end{aligned}$$

and the residue matrix \(\textbf{R}_1\) satisfies

$$\begin{aligned} \textbf{R}_1=-\frac{1}{2\pi \textrm{i}}\int _{\partial U_0}\textbf{R}_-(s)(\textbf{J}_{\textbf{R}}(s)-\textbf{I})\textrm{d}s+\mathcal {O}(n^{-1}), \end{aligned}$$
(10.26)

where the error terms are all uniform for \(z,w\) on the same connected component of \(\mathbb {C}{\setminus } \Gamma _{\textbf{R}}\), and they are also uniform for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\).

Proof

The arguments are standard, so we only sketch them. The small norm theory for RHPs ensures the representation

$$\begin{aligned} \textbf{R}(z)=\textbf{I}+\frac{1}{2\pi \textrm{i}}\int _{\Gamma _{\textbf{R}}}\frac{\textbf{R}_-(s)(\textbf{J}_{\textbf{R}}(s)-\textbf{I})}{s-z}\textrm{d}s,\quad z\in \mathbb {C}{\setminus } \Gamma _{\textbf{R}}. \end{aligned}$$
(10.27)

This ensures the estimates for \(\textbf{R}\) and \(\textbf{R}'\) away from \(\Gamma _{\textbf{R}}\) and, because the jump matrix is analytic on a neighborhood of \(\Gamma _{\textbf{R}}\), we are able to extend these estimates also to \(\Gamma _{\textbf{R}}\). To obtain the estimate involving z and w, we then write, with the help of Cauchy’s formula,

$$\begin{aligned} \textbf{R}(w)^{-1}\textbf{R}(z)=\textbf{I}+\textbf{R}(w)^{-1}\left( \textbf{R}(w)-\textbf{R}(z)\right) =\textbf{I}+\textbf{R}(w)^{-1}\frac{w-z}{2\pi \textrm{i}}\oint \frac{\textbf{R}(s)}{(s-w)(s-z)}\textrm{d}s, \end{aligned}$$
(10.28)

where the integral is over a contour in \(\mathbb {C}{\setminus } \Gamma _{\textbf{R}}\) encircling both w and z. The right-hand side is now \(\textbf{I}+\mathcal {O}((w-z)n^{-1/3})\) by the estimate on \(\textbf{R}\) already proven.
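The second equality in (10.28) is simply Cauchy's formula combined with a partial fraction decomposition:

$$\begin{aligned} \textbf{R}(w)-\textbf{R}(z)=\frac{1}{2\pi \textrm{i}}\oint \textbf{R}(s)\left( \frac{1}{s-w}-\frac{1}{s-z}\right) \textrm{d}s=\frac{w-z}{2\pi \textrm{i}}\oint \frac{\textbf{R}(s)}{(s-w)(s-z)}\textrm{d}s. \end{aligned}$$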

Finally, the estimate for \(\textbf{R}_1\) follows by expanding (10.27) as \(z\rightarrow \infty \), and then using that the jump matrix decays to the identity at least as \(\mathcal {O}(n^{-1})\) away from \(\partial U_0\). \(\square \)
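For concreteness, the expansion of (10.27) as \(z\rightarrow \infty \) gives the exact formula

$$\begin{aligned} \textbf{R}_1=-\frac{1}{2\pi \textrm{i}}\int _{\Gamma _{\textbf{R}}}\textbf{R}_-(s)(\textbf{J}_{\textbf{R}}(s)-\textbf{I})\textrm{d}s, \end{aligned}$$

and, by Proposition 10.6 together with the boundedness of \(\textbf{R}_-\), the contribution to this integral away from \(\partial U_0\) is of order \(\mathcal {O}(n^{-1})\), which is precisely (10.26).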

Later on, we also need the first term in the asymptotic expansion for \(\textbf{R}\), which we state as a separate result.

Proposition 10.8

Fix \(\mathsf s_0>0\) and \(\mathsf t_0>0\). Setting

$$\begin{aligned} \widehat{\textbf{R}}_1(z):=\frac{1}{2\pi \textrm{i}} \int _{\partial U_0}\left( \frac{(\widehat{{\varvec{\Phi }}}_n^{(1)})_{21}}{\psi (s)^{1/2}\mathsf m(s)^{1/2}}\textbf{E}_{21}-\mathsf q^{(1)}(s)\varvec{\sigma }_3\right) \frac{\textrm{d}s}{s-z}+\mathsf q_0^{(1)}\varvec{\sigma }_2,\quad z\in U_0, \end{aligned}$$
(10.29)

the matrix \(\textbf{R}\) has the expansion

$$\begin{aligned} \textbf{R}(z)= \textbf{I}+\frac{1}{n^{1/3}}\textbf{U}_0 \widehat{\textbf{R}}_1(z)\textbf{U}_0^{-1} + \mathcal {O}(n^{-2/3}),\quad n\rightarrow \infty \end{aligned}$$
(10.30)

where the error term is uniform for \(z\in U_0\), and it is also uniform for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\).

Proof

Starting from the representation (10.27), we write

$$\begin{aligned} \textbf{R}(z)-\textbf{I}=\frac{1}{2\pi \textrm{i}}\int _{\partial U_0}\frac{\textbf{R}_-(s)(\textbf{J}_{\textbf{R}}(s)-\textbf{I})}{s-z}\textrm{d}s+\mathcal {O}(n^{-1})=\frac{1}{2\pi \textrm{i}}\int _{\partial U_0}\frac{\textbf{J}_{\textbf{R}}(s)-\textbf{I}}{s-z}\textrm{d}s+\mathcal {O}(n^{-2/3}), \end{aligned}$$

and the result follows by combining (10.24) and (10.23) and performing straightforward calculations. \(\square \)

All the transformations \(\textbf{Y}\mapsto \textbf{T}\mapsto \textbf{S}\mapsto \textbf{R}\) involve only analytic factors in their construction. As such, we can actually slightly deform the contour \(\Gamma _{\textbf{R}}\) in all these steps, so that in fact the estimates in the previous result are valid everywhere on \(\mathbb {C}\), interpreted in the sense of boundary values when \(z,w\in \Gamma _{\textbf{R}}\). We will use this fact without further warning.

With this theorem at hand, in the next sections we recover the needed asymptotic formulas for the proof of our main results.

11 Proof of Main Results

The next step in the direction of concluding our asymptotic analysis is to unravel the transformations \(\textbf{R}\mapsto \textbf{S}\mapsto \textbf{T}\mapsto \textbf{Y}\). Introduce

$$\begin{aligned} \varvec{\Lambda }_n(x):=\textbf{I}+\frac{{{\,\mathrm{\mathrm e}\,}}^{2n\phi _+(x)}}{\sigma _n(x)}\chi _{(-a,0)}(x)\textbf{E}_{21}\quad \text {and}\quad {\varvec{\Delta }}_n(x):=\left( \textbf{I}+\frac{\chi _{(-a,0)}(x)}{\sigma _n(x)}\textbf{E}_{21}\right) ,\quad x\in \mathbb {R}, \end{aligned}$$

which are related by

$$\begin{aligned} \varvec{\Lambda }_n(x){{\,\mathrm{\mathrm e}\,}}^{-n\phi _+(x)\varvec{\sigma }_3}={{\,\mathrm{\mathrm e}\,}}^{-n\phi _+(x)\varvec{\sigma }_3}{\varvec{\Delta }}_n(x). \end{aligned}$$
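This relation follows from the elementary conjugation identity

$$\begin{aligned} {{\,\mathrm{\mathrm e}\,}}^{-n\phi _+\varvec{\sigma }_3}\textbf{E}_{21}\,{{\,\mathrm{\mathrm e}\,}}^{n\phi _+\varvec{\sigma }_3}={{\,\mathrm{\mathrm e}\,}}^{2n\phi _+}\textbf{E}_{21}, \end{aligned}$$

that is, conjugation by \({{\,\mathrm{\mathrm e}\,}}^{-n\phi _+\varvec{\sigma }_3}\) multiplies the (2, 1)-entry of a matrix by \({{\,\mathrm{\mathrm e}\,}}^{2n\phi _+}\) and the (1, 2)-entry by \({{\,\mathrm{\mathrm e}\,}}^{-2n\phi _+}\), leaving the diagonal entries unchanged.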

Then the result of the unfolding of the transformations is

$$\begin{aligned} \textbf{Y}_+(x)={{\,\mathrm{\mathrm e}\,}}^{n\ell _V\varvec{\sigma }_3}\textbf{R}_+(x)\textbf{P}_+(x)\varvec{\Lambda }_n(x){{\,\mathrm{\mathrm e}\,}}^{-n(\phi _+(x)-V(x)/2)\varvec{\sigma }_3},\quad x\in \mathbb {R}. \end{aligned}$$
(11.1)

We split the proofs of our results in the next few sections.

11.1 Proof of Theorem 2.4

Thanks to (11.1) and Theorem 10.7, the expression (9.10) reduces to the asymptotic formula

$$\begin{aligned} {{\,\mathrm{\mathrm e}\,}}^{-n(V(x)+V(y))/2}\mathsf K_n^Q(x,y)= & {} \frac{{{\,\mathrm{\mathrm e}\,}}^{-n(\phi _+(x)+\phi _+(y))}}{2\pi \textrm{i}(x-y)}\textbf{e}^\mathrm T_2\varvec{\Lambda }_{n}(y)^{-1}\textbf{P}_+(y)^{-1}\nonumber \\{} & {} \left( \textbf{I}+\mathcal {O}\left( \frac{x-y}{n^{1/3}}\right) \right) \textbf{P}_+(x)\varvec{\Lambda }_{n}(x)\textbf{e}_1, \end{aligned}$$
(11.2)

valid as \(n\rightarrow \infty \), \(x,y\in \mathbb {R}\) and uniformly for \(\mathsf s,\mathsf t\) as in Theorem 10.7.

For the value \(\mathsf c_V>0\) introduced in (8.5), we scale \(x=u_n\) and \(y=v_n\) as in (2.18), where \(u,v\) are on any given compact of the real axis. For such values of u, for n large enough the points \(u_n\) are always in the neighborhood \(U_0\) of the origin where \(\textbf{P}=\textbf{P}^{(0)}\), and from (10.22) we simplify

$$\begin{aligned} \textbf{e}_2^\mathrm T\varvec{\Lambda }_{n}^{-1}\textbf{P}_+^{-1}={{\,\mathrm{\mathrm e}\,}}^{n\phi _+}\textbf{e}_2^\mathrm T{\varvec{\Delta }}_n^{-1}{\varvec{\Phi }}_{n,+}^{-1}\textbf{F}_n^{-1}\quad \text {and}\quad \textbf{P}_+\varvec{\Lambda }_{n}\textbf{e}_1=\textbf{F}_n{\varvec{\Phi }}_{n,+}{\varvec{\Delta }}_n\textbf{e}_1{{\,\mathrm{\mathrm e}\,}}^{n\phi _+}. \end{aligned}$$

Here, all the quantities above are evaluated at \(u_n\) or \(v_n\), accordingly. The next step is to plug these identities into (11.2); when doing so, we also use the estimates

$$\begin{aligned} \textbf{F}_n(u_n)=\textbf{U}_0\left( \textbf{I}+\mathcal {O}(n^{-2/3})\right) (\mathsf c_V a)^{-\varvec{\sigma }_3/4}n^{-\varvec{\sigma }_3/6},\quad n\rightarrow \infty , \end{aligned}$$

which follows from (10.16), (10.4) and (8.5), and its immediate consequence

$$\begin{aligned} \textbf{F}_n(v_n)^{-1}\textbf{F}_n(u_n)=n^{\varvec{\sigma }_3/6}\left( \textbf{I}+\mathcal {O}\left( \frac{u-v}{n^{2/3}}\right) \right) n^{-\varvec{\sigma }_3/6}=\textbf{I}+\mathcal {O}\left( \frac{u-v}{n^{1/3}}\right) ,\quad n\rightarrow \infty , \end{aligned}$$

which is obtained with arguments similar to (10.28). These estimates are uniform for \(u,v\) in compacts of \(\mathbb {R}\), and are also uniform for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t\in [\mathsf t_0,1/\mathsf t_0]\). Equation (11.2) then simplifies to

$$\begin{aligned}{} & {} {{\,\mathrm{\mathrm e}\,}}^{-n(V(u_n)+V(v_n))/2}\mathsf K_n^Q(u_n,v_n) \\{} & {} \quad =\frac{\mathsf c_V n^{2/3}}{2\pi \textrm{i}(u-v)} \textbf{e}_2^\mathrm T{\varvec{\Delta }}_n(v_n)^{-1} {\varvec{\Phi }}_{n,+}(v_n)^{-1} \left( \textbf{I}+\mathcal {O}\left( \frac{u-v}{n^{1/3}}\right) \right) {\varvec{\Phi }}_{n,+}(u_n){\varvec{\Delta }}_n(u_n)\textbf{e}_1. \end{aligned}$$

Now, with the definition of \({\varvec{\Phi }}_n\) in (10.16), the aid of (10.19) and the constant \(\mathsf c_{\mathsf H}\) appearing in (10.17) at hand, we get

$$\begin{aligned} {\varvec{\Phi }}_{n,+}(u_n)=\widehat{{\varvec{\Phi }}}_{n,+}(n^{2/3}\psi (u_n))=\left( \textbf{I}+\mathcal {O}(n^{-\nu })\right) {{\varvec{\Phi }}}_{0,+}(u\mid \mathsf s,\mathsf c_{\mathsf H}),\quad n\rightarrow \infty , \end{aligned}$$

for any \(\nu \in (0,2/3)\), with the error being valid uniformly for u in compacts of \(\mathbb {R}\), and also uniformly for \(\mathsf s\ge -\mathsf s_0, \mathsf t_0\le \mathsf t\le 1/\mathsf t_0\). Also, thanks to Theorem 5.1 we know that \({{\varvec{\Phi }}}_{0,+}(u\mid \mathsf s,\mathsf c_{\mathsf H})^{\pm 1}\) remain bounded for u in compacts and \(\mathsf s\ge -\mathsf s_0, \mathsf t_0\le \mathsf t\le 1/\mathsf t_0\). Therefore,

$$\begin{aligned}{} & {} {\varvec{\Phi }}_{n,+}(v_n)^{-1}\left( \!\textbf{I}+\mathcal {O}\left( \frac{u-v}{n^{1/3}}\right) \right) {\varvec{\Phi }}_{n,+}(u_n) \! =\! \left( \textbf{I}+\mathcal {O}(n^{-\nu })\right) {{\varvec{\Phi }}}_{0,+}(v\mid \mathsf s,\mathsf c_{\mathsf H})^{-1}{{\varvec{\Phi }}}_{0,+}(u\mid \mathsf s,\mathsf c_{\mathsf H})\!+\!\mathcal {O}\left( \frac{u-v}{n^{1/3}}\!\right) \end{aligned}$$

as \(n\rightarrow \infty \). In addition, the estimate

$$\begin{aligned} \frac{1}{\sigma _n(u_n)}=1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s+\mathsf tu/\mathsf c_V}(1+\mathcal {O}(n^{-2/3}))=1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s+\mathsf c_{\mathsf H} u}(1+\mathcal {O}(n^{-2/3})),\quad n\rightarrow \infty , \end{aligned}$$
(11.3)

is valid uniformly for u in compacts, uniformly for \(\mathsf s\in \mathbb {R}\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\), and is immediate from (2.16). Combining everything, and denoting \(\chi _+=\chi _{[0,\infty )}\), we obtain the asymptotic formula

$$\begin{aligned}{} & {} \frac{{{\,\mathrm{\mathrm e}\,}}^{-n(V(u_n)+V(v_n))/2}}{\mathsf c_V n^{2/3}}\mathsf K_n^Q(u_n,v_n)= \frac{1+\mathcal {O}(n^{-\nu })}{2\pi \textrm{i}(u-v)} \textbf{e}_2^\mathrm T\left( \textbf{I}-(1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s+\mathsf c_{\mathsf H} v})\chi _{+}(v)\textbf{E}_{21}\right) \\{} & {} \quad \times {{\varvec{\Phi }}}_{0,+}(v\mid \mathsf s,\mathsf c_{\mathsf H})^{-1}{{\varvec{\Phi }}}_{0,+}(u\mid \mathsf s,\mathsf c_{\mathsf H})\left( \textbf{I}+(1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf s+\mathsf c_{\mathsf H} u})\chi _{+}(u)\textbf{E}_{21}\right) \textbf{e}_1+\mathcal {O}(n^{-1/3}),\quad n\rightarrow \infty , \end{aligned}$$

which is valid uniformly for \(u,v\) in compacts of \(\mathbb {R}\), and also uniformly for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\). With \({{\varvec{\Delta }}}_0\) as in (5.7), this identity can be rewritten as

$$\begin{aligned}{} & {} \frac{{{\,\mathrm{\mathrm e}\,}}^{-n(V(u_n)+V(v_n))/2}}{\mathsf c_V n^{2/3}}\mathsf K_n^Q(u_n,v_n)= \frac{1+\mathcal {O}(n^{-\nu })}{2\pi \textrm{i}(u-v)}\\{} & {} \quad \times \left[ \left( {{\varvec{\Phi }}}_0(v\mid \mathsf s,\mathsf c_{\mathsf H}){{\varvec{\Delta }}}_0(v\mid \mathsf s,\mathsf c_{\mathsf H})\right) ^{-1}{{\varvec{\Phi }}}_0(u\mid \mathsf s,\mathsf c_{\mathsf H}){{\varvec{\Delta }}}_0(u\mid \mathsf s,\mathsf c_{\mathsf H})\right] _{21,+} +\mathcal {O}(n^{-1/3}),\quad n\rightarrow \infty , \end{aligned}$$

and the proof of Theorem 2.4 is now completed using Eq. (5.16).

11.2 Proof of Theorem 2.5

Now that the RHP asymptotic analysis is completed, the proof of Theorem 2.5 follows standard steps. In our case, there is an additional cancellation that has to be accounted for at a later step, by virtue of the presence of the factor \(\mathsf q\) in both the global and local parametrices, see (10.6) and (10.23). We therefore opt to present the detailed calculation.

Starting from the representation (9.8), we unravel the transformations and obtain

$$\begin{aligned}{} & {} -2\pi \textrm{i}\left( \upgamma _{n-1}^{(n,Q)}\right) ^2={{\,\mathrm{\mathrm e}\,}}^{-2n\ell _V}\left( \textbf{R}_1+\textbf{G}_1\right) _{21}\nonumber \\{} & {} \quad ={{\,\mathrm{\mathrm e}\,}}^{-2n\ell _V}\left( (\textbf{R}_1)_{21}- \frac{\textrm{i}a}{4} -\frac{1}{n^{1/3}}\frac{\mathsf t^{1/2}a^{1/2}}{4\pi \textrm{i}}F_{-1/2}(\mathsf s) +\mathcal {O}(n^{-2/3}) \right) , \end{aligned}$$
(11.4)

with \(\textbf{R}_1\) as in (10.25), and where for the second identity we used the definition of \(\textbf{G}_1\) in (10.7) and the estimate for \(\mathsf q_0\) from Lemma 10.2.

It remains to estimate \(\textbf{R}_1\), which we do starting from (10.26). Using the Cauchy–Schwarz inequality, Proposition 10.6 and Theorem 10.7, we write

$$\begin{aligned} \textbf{R}_1=-\frac{1}{2\pi \textrm{i}}\int _{\partial U_0}(\textbf{J}_{\textbf{R}}(s)-\textbf{I})\textrm{d}s+\mathcal {O}(n^{-2/3}),\quad n\rightarrow \infty . \end{aligned}$$

The matrix \(\textbf{J}_\textbf{R}\) is in (10.24), and combining its explicit expression along \(\partial U_0\) with (10.23) and (10.5), after a cumbersome but straightforward calculation we arrive at

$$\begin{aligned} \left( \textbf{R}_1\right) _{21}=-\frac{1}{2\pi \textrm{i}n^{1/3}}\int _{\partial U_0}\left( \frac{\textrm{i}\mathsf q^{(1)}(s)(\mathsf m(s)-1)}{2\mathsf m(s)^{1/2}}+(\textbf{L}_1(s))_{21}\right) \textrm{d}s+\mathcal {O}(n^{-2/3}),\quad n\rightarrow \infty , \end{aligned}$$

where we recall that \(\mathsf m\), \(\mathsf q^{(1)}\) and \(\textbf{L}_1\) are given in (10.4), (10.8) and (10.20), respectively. Using this explicit expression for \(\textbf{L}_1\), we see that

$$\begin{aligned} (\textbf{L}_1(s))_{21}=\frac{\left( {{\varvec{\Phi }}}^{(1)}_n\right) _{21}}{2\mathsf m(s)^{1/2}\psi (s)^{1/2}}. \end{aligned}$$

Both functions \(\mathsf m\) and \(\psi \) are analytic in a neighborhood of the origin, and vanish linearly therein. Combining in addition with (8.5), we see that the product \(\mathsf m(z)^{1/2}\psi (z)^{1/2}\) admits an analytic continuation near the origin, with the expansion

$$\begin{aligned} \mathsf m(s)^{1/2}\psi (s)^{1/2}=\frac{\mathsf c_V}{a}s(1+\mathcal {O}(s)),\quad s\rightarrow 0. \end{aligned}$$

Thus, computing residues

$$\begin{aligned} \int _{\partial U_0}(\textbf{L}_1(s))_{21}\textrm{d}s=-\pi \textrm{i}\frac{a }{\mathsf c_V}\left( {{\varvec{\Phi }}}^{(1)}_n\right) _{21}. \end{aligned}$$
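Here we used that \(\partial U_0\) is oriented clockwise: since \(\mathsf m(s)^{1/2}\psi (s)^{1/2}=(\mathsf c_V/a)s(1+\mathcal {O}(s))\) by the expansion above, the function \(1/(\mathsf m^{1/2}\psi ^{1/2})\) has a simple pole at \(s=0\) with residue \(a/\mathsf c_V\), and the residue theorem gives

$$\begin{aligned} \int _{\partial U_0}\frac{\textrm{d}s}{\mathsf m(s)^{1/2}\psi (s)^{1/2}}=-2\pi \textrm{i}\,\frac{a}{\mathsf c_V}, \end{aligned}$$

which, multiplied by \(\left( {{\varvec{\Phi }}}^{(1)}_n\right) _{21}/2\), yields the value just displayed.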

Similarly, using now (10.8) we obtain

$$\begin{aligned} \int _{\partial U_0}\frac{\textrm{i}\mathsf q^{(1)}(s)(\mathsf m(s)-1)}{2\mathsf m(s)^{1/2}}\textrm{d}s=-\mathsf t^{1/2}a^{1/2} F_{-1/2}(\mathsf s). \end{aligned}$$

Hence,

$$\begin{aligned} \left( \textbf{R}_1\right) _{21}= \frac{1}{n^{1/3}}\left( \frac{a}{2\mathsf c_V}\left( {\varvec{\Phi }}_n^{(1)}\right) _{21}+\frac{\mathsf t^{1/2}a^{1/2}}{4\pi \textrm{i}} F_{-1/2}(\mathsf s) \right) +\mathcal {O}(n^{-2/3}), \end{aligned}$$

and (11.4) updates to

$$\begin{aligned} \left( \upgamma _{n-1}^{(n,Q)}(\mathsf s)\right) ^2={{\,\mathrm{\mathrm e}\,}}^{-2n\ell _V}\left( \frac{a}{8\pi }-\frac{1}{n^{1/3}}\frac{a}{4\pi \textrm{i}}\left( {\varvec{\Phi }}_n^{(1)}\right) _{21}+\mathcal {O}(n^{-2/3})\right) ,\quad n\rightarrow \infty , \end{aligned}$$

uniformly for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\). We now need only to apply (10.18) with any \(\nu \ge 1/3\) to complete the proof.

11.3 Proof of Theorem 2.2

Unlike the already proven major results, the proof of Theorem 2.2 is not a straightforward consequence of the steepest descent analysis concluded with Theorem 10.7. It does rely substantially on Theorem 10.7, but several other inputs are also needed along the way. Equipped with (9.10), the idea is to account for the different approximations for \(\mathsf K_n^Q\) on the different components

$$\begin{aligned} \mathbb {R}{\setminus } (U_0\cup D_\delta (-a)\cup (-a,0)),\quad \mathbb {R}\cap U_0, \quad (-a-\delta ,-a+\delta ) \quad \text {and}\quad (-a+\delta ,0){\setminus } U_0 \end{aligned}$$

that arise from the RHP, and integrate each such approximation. With Proposition 9.1 in mind, we are thus able to recover asymptotics for \(\mathsf L_n^Q\) itself. As one would expect, it turns out that in this process the terms that arise away from \(U_0\) are all exponentially negligible, and only the contribution from \(U_0\) survives in the leading contribution. The contribution that arises this way involves \({\varvec{\Phi }}_n\), and we further need to split it into different parts and still account for some exact cancellations to arrive at the leading asymptotic contribution. We postpone this analysis to the next section, where it is split into several different lemmas, and summarize the outcome with the next result. For its statement, we recall that \(\mathsf h_n\) and \(\widehat{{\varvec{\Phi }}}_n\) were introduced in (10.16), and we also set

$$\begin{aligned} \widehat{{\varvec{\Delta }}}_n(x):=\textbf{I}+ (1+{{\,\mathrm{\mathrm e}\,}}^{-\mathsf h_n(x)})\chi _{(0,+\infty )}(x)\textbf{E}_{21},\quad x\in \mathbb {R}. \end{aligned}$$
(11.5)

Proposition 11.1

Fix \(\mathsf s_0>0\) and \(\mathsf t_0\in (0,1)\). There exists a function \(\mathsf R(u)=\mathsf R_1(u\mid \mathsf t)\) satisfying

$$\begin{aligned} \int _{\mathsf s}^\infty \mathsf R(u)\textrm{d}u=\mathcal {O}(n^{-1/3}),\quad n\rightarrow \infty , \end{aligned}$$

uniformly for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\), and for which the identity

$$\begin{aligned}{} & {} \int _{-\infty }^{\infty }\mathsf K_n^Q(x,x\mid u)\frac{\omega _n(x\mid u)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}x\\{} & {} \quad =\frac{1}{2\pi \textrm{i}} \int _{n^{2/3}\zeta _0}^{n^{2/3}\zeta _1}\frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_n(u)}}{\left( 1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_n(u)}\right) ^2}\left[ \widehat{{\varvec{\Delta }}}_n(x)^{-1}\widehat{{\varvec{\Phi }}}_{n,+}(x)^{-1}\left( \widehat{{\varvec{\Phi }}}_{n,+} \widehat{{\varvec{\Delta }}}_n\right) '(x)\right] _{21}\textrm{d}x+\mathsf R(u), \end{aligned}$$

holds true for every \(u\ge -\mathsf s_0\) and every \(\mathsf t\in [\mathsf t_0,1/\mathsf t_0]\).

In words, Proposition 11.1 is saying that the major contribution to \(\mathsf L_n^Q\) comes from the neighborhood \(U_0\), that is, from the local parametrix \(\widehat{{\varvec{\Phi }}}_n\). But according to the developments from the previous sections, this local parametrix is close to the id-PII parametrix \({{\varvec{\Phi }}}_0\), and we are ready to conclude our last major result.

Proof of Theorem 2.2

We combine Propositions 9.1 and 11.1 to obtain

$$\begin{aligned}{} & {} \log \mathsf L_n^Q(\mathsf s)\\{} & {} \quad =-\frac{1}{2\pi \textrm{i}}\int _{\mathsf s}^\infty \int _{n^{2/3}\zeta _0}^{n^{2/3}\zeta _1}\frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_n(u)}}{\left( 1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_n(u)}\right) ^2}\left[ \widehat{{\varvec{\Delta }}}_n(x)^{-1}\widehat{{\varvec{\Phi }}}_{n,+}(x)^{-1}\left( \widehat{{\varvec{\Phi }}}_{n,+} \widehat{{\varvec{\Delta }}}_n\right) '(x)\right] _{21}\textrm{d}x\textrm{d}u+\mathcal {O}(n^{-1/3}), \end{aligned}$$

valid as \(n\rightarrow \infty \) and uniformly for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\). Next, with the identifications (10.16)–(10.17) in mind, we estimate the integral on the right-hand side with the help of Theorem 7.2 and conclude the proof. \(\square \)

12 Technical Lemmas

It remains to prove Proposition 11.1, an analysis which we split into several technical lemmas in this section.

Our starting point is the integral representation for \(\mathsf L_n^Q\) from Proposition 9.1, combined with the asymptotic information for \(\mathsf K_n^Q\) provided by the RHP analysis. For \(x\in \mathbb {R}\), set

$$\begin{aligned} \mathsf A(x):=\left[ \varvec{\Lambda }_{n}(x)^{-1}\textbf{P}_+(x)^{-1}\textbf{R}_+(x)^{-1}\textbf{R}_+'(x)\textbf{P}_+(x)\varvec{\Lambda }_n(x)+\varvec{\Lambda }_{n}(x)^{-1}\textbf{P}_+(x)^{-1}\textbf{P}_+'(x)\varvec{\Lambda }_{n}(x) \right] _{21}.\qquad \end{aligned}$$
(12.1)

Recalling (9.11), unwrapping the transformations of the RHP yields the identity

$$\begin{aligned} \mathsf K_n^Q(x,x\mid u)\frac{\omega _n(x\mid u)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}=\frac{\sigma _n(x\mid u){{\,\mathrm{\mathrm e}\,}}^{-2n\phi _+(x)}}{2\pi \textrm{i}(1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)})}\left( \mathsf A(x)+\left( \frac{{{\,\mathrm{\mathrm e}\,}}^{2n\phi _+(x)}}{\sigma _n(x)}\right) '\chi _{\mathcal G^+}(x)\right) \end{aligned}$$

We now need to integrate each of the terms on the right-hand side above, first in \(x\in \mathbb {R}\) and then in \(u\in (\mathsf s,+\infty )\). Each term will be analyzed individually, depending on whether the x-integration takes place in the bulk, near one of the edges, or away from the support \([-a,0]\) of the equilibrium measure. To simplify the presentation, it is convenient to introduce a notation for each relevant integral: for an arbitrary set \(J\subset \mathbb {R}\) denote

$$\begin{aligned} \mathsf I_1(J):=\int _J \frac{\sigma _n(x\mid u){{\,\mathrm{\mathrm e}\,}}^{-2n\phi _+(x)}}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\mathsf A(x) \textrm{d}x \end{aligned}$$
(12.2)

and

$$\begin{aligned} \begin{aligned} \mathsf I_2(J) :=&\int _J \frac{\sigma _n(x\mid u){{\,\mathrm{\mathrm e}\,}}^{-2n\phi _+(x)}}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\left( \frac{{{\,\mathrm{\mathrm e}\,}}^{2n\phi _+(x)}}{\sigma _n(x)}\right) '\textrm{d}x \\ =&\int _J \left( 2n\phi _{+}'(x)-\frac{n^{2/3}Q'(x)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\right) \frac{1}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}x, \end{aligned} \end{aligned}$$
(12.3)
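The second equality in (12.3) is a direct computation: using the explicit form of the weight, \(\sigma _n(x\mid u)^{-1}=1+{{\,\mathrm{\mathrm e}\,}}^{-u-n^{2/3}Q(x)}\) (compare (2.16) and (11.3)), we have

$$\begin{aligned} \sigma _n{{\,\mathrm{\mathrm e}\,}}^{-2n\phi _+}\left( \frac{{{\,\mathrm{\mathrm e}\,}}^{2n\phi _+}}{\sigma _n}\right) '=2n\phi _+'+\sigma _n\left( \frac{1}{\sigma _n}\right) '=2n\phi _+'-\frac{n^{2/3}Q'\,{{\,\mathrm{\mathrm e}\,}}^{-u-n^{2/3}Q}}{1+{{\,\mathrm{\mathrm e}\,}}^{-u-n^{2/3}Q}}=2n\phi _+'-\frac{n^{2/3}Q'}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q}}. \end{aligned}$$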

which are functions of \(u\in \mathbb {R}\) as well. With \(\varepsilon _0,\varepsilon _1>0\) being determined by \((-\varepsilon _0,\varepsilon _1)=U_0\cap \mathbb {R}\), the split

$$\begin{aligned}{} & {} 2\pi \textrm{i}\int _{-\infty }^{\infty }\mathsf K_n^Q(x,x\mid u)\frac{\omega _n(x\mid u)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}x= \mathsf I_1(-\infty ,-a-\delta )+\mathsf I_1(-a-\delta ,-a+\delta ) \nonumber \\{} & {} \quad +\mathsf I_1(-a+\delta ,-\varepsilon _0)+\mathsf I_1(-\varepsilon _0,\varepsilon _1)+\mathsf I_1(\varepsilon _1,+\infty ) +\mathsf I_2(-a,-\varepsilon _0)+\mathsf I_2(-\varepsilon _0,0) \end{aligned}$$
(12.4)

is immediate, and with the next series of lemmas we estimate each of the terms on the right-hand side.

Lemma 12.1

Fix \(\mathsf s\in \mathbb {R}\) and \(\mathsf t_0\in (0,1)\). There exists \(\eta >0\) for which the estimate

$$\begin{aligned} \mathsf I_1(-\infty ,-a-\delta )+\mathsf I_1(-a+\delta ,-\varepsilon _0)+\mathsf I_2(-a,-\varepsilon _0)=\mathcal {O}\left( {{\,\mathrm{\mathrm e}\,}}^{-u}{{\,\mathrm{\mathrm e}\,}}^{-n^{2/3}\eta }\right) \end{aligned}$$

holds true uniformly for \(u\ge \mathsf s\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\).

Proof

On the intervals \((-\infty ,-a-\delta )\) and \((\varepsilon _1,\infty )\) the function \(\varvec{\Lambda }_{n}\) is identically the identity matrix. On the interval \((-a+\delta ,-\varepsilon _0)\) the nontrivial entry of \(\varvec{\Lambda }_{n}\) is \({{\,\mathrm{\mathrm e}\,}}^{2n\phi _+}/\sigma _n\) and, because \(Q>0\) and \(\phi _{+}\) is purely imaginary in this interval, this quotient is bounded. Also, away from the endpoints \(-a\) and 0 we have \(\textbf{P}\equiv \textbf{G}\). Both \(\textbf{P}\) and \(\textbf{G}\), and their x-derivatives, decay as \(x\rightarrow \pm \infty \) and remain bounded as \(n\rightarrow \infty \), all uniformly in u and \(\mathsf t\) as claimed. Combining all of these facts, we obtain that for some constants \(K>0\) and \(n_0\ge 1\)

$$\begin{aligned} |\mathsf A(x)|\le K,\quad \text {for all}\; x\in \mathbb {R}{\setminus } \left( (-a-\delta ,-a+\delta )\cup (-\varepsilon _0,\varepsilon _1)\right) ,\; n\ge n_0,\; \mathsf t_0\le \mathsf t\le 1/\mathsf t_0, \; u\ge \mathsf s. \end{aligned}$$
(12.5)

Next, we use that \(Q\ge 0\) on the interval \((-\infty ,0)\) to bound

$$\begin{aligned} 0\le \frac{\sigma _n(x\mid u)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\le \frac{1}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\le {{\,\mathrm{\mathrm e}\,}}^{-u},\quad x\le 0, \end{aligned}$$
(12.6)

which is valid for any \(n\ge 1, u\in \mathbb {R},\mathsf t>0\). Thus, combining everything we obtain

$$\begin{aligned} |\mathsf I_1(-\infty ,-a-\delta )|\le K{{\,\mathrm{\mathrm e}\,}}^{-u}\int _{-\infty }^{-a-\delta }{{\,\mathrm{\mathrm e}\,}}^{-2n\phi _+(x)}\textrm{d}x, \end{aligned}$$

and using Proposition 8.1-(iv), (v), this proves the bound for \(\mathsf I_1(-\infty ,-a-\delta )\).

For the second integral, the exponent \(\phi _+\) is purely imaginary, so the factor \({{\,\mathrm{\mathrm e}\,}}^{-2n\phi _+}\) is oscillatory and to obtain the exponential decay in the x-integral we have to argue differently, as follows. The function

$$\begin{aligned} v\mapsto \frac{1}{1+{{\,\mathrm{\mathrm e}\,}}^{-v}}\frac{1}{1+{{\,\mathrm{\mathrm e}\,}}^v}=\frac{{{\,\mathrm{\mathrm e}\,}}^{-v}}{(1+{{\,\mathrm{\mathrm e}\,}}^{-v})^2} \end{aligned}$$

is strictly increasing on \((-\infty ,0)\) and strictly decreasing on \((0,+\infty )\). Because \(Q>0\) on \((-\infty ,0)\) and \(Q(0)=0\), by reducing \(U_0\) if necessary we can assume without loss of generality that \(Q(x)\ge Q(-\varepsilon _0)\) for every \(x\in [-a,-\varepsilon _0]\). This way, \(v=u+n^{2/3}Q(x)\ge v_0:=u+n^{2/3}Q(-\varepsilon _0)\) and, because u is assumed to be bounded from below, we can make sure that \(v_0>0\) for every u. Therefore

$$\begin{aligned} 0\le \frac{\sigma _n(x\mid u)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\le \frac{{{\,\mathrm{\mathrm e}\,}}^{-v_0}}{(1+{{\,\mathrm{\mathrm e}\,}}^{-v_0})^2}\le {{\,\mathrm{\mathrm e}\,}}^{-u-n^{2/3}Q(-\varepsilon _0)}, \end{aligned}$$

and upon integration and using Proposition 8.1-(ii), we obtain

$$\begin{aligned} |\mathsf I_1(-a+\delta ,-\varepsilon _0)|\le (a-\delta +\varepsilon _0)K{{\,\mathrm{\mathrm e}\,}}^{-u-n^{2/3}Q(-\varepsilon _0)} \end{aligned}$$

for every \(n\ge n_0\) and \(\mathsf t,u\) as claimed.

For the estimate for \(\mathsf I_2(-a,-\varepsilon _0)\) we use again the last inequality in (12.6) and also that both \(\phi '_{+}\) and \(Q'\) are continuous, and hence bounded, on \((-a,-\varepsilon _0)\). This concludes the proof. \(\square \)

For the integral over \((\varepsilon _1,+\infty )\), it is easier to actually estimate its u-integral directly.

Lemma 12.2

Fix \(\mathsf s_0>0\) and \(\mathsf t_0\in (0,1)\). There exists \(\eta >0\) for which the estimate

$$\begin{aligned} \int _{\mathsf s}^\infty \mathsf I_1(\varepsilon _1,+\infty )\textrm{d}u=\mathcal {O}\left( {{\,\mathrm{\mathrm e}\,}}^{-n\eta }\right) \end{aligned}$$

holds true uniformly for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\).

Proof

With the bound (12.5) we see that it is enough to estimate the integral

$$\begin{aligned} \int _{\mathsf s}^\infty \int _{\varepsilon _1}^\infty \frac{\sigma _n(x\mid u){{\,\mathrm{\mathrm e}\,}}^{-2n\phi (x)}}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}x\textrm{d}u=\int _{\varepsilon _1}^\infty \frac{{{\,\mathrm{\mathrm e}\,}}^{-2n\phi _+(x)}}{1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf s+n^{2/3}Q(x)}}\textrm{d}x, \end{aligned}$$

where for the equality we used Tonelli’s Theorem to interchange the order of integration, and then integrated exactly. The term \((1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf s+n^{2/3}Q(x)})^{-1}\) is bounded by 1, and using Proposition 8.1-(iv), (v) we see that the integral of \({{\,\mathrm{\mathrm e}\,}}^{-2n\phi }\) is \(\mathcal {O}({{\,\mathrm{\mathrm e}\,}}^{-\eta n})\) for some \(\eta >0\) independent of \(\mathsf s\) and \(\mathsf t\), which concludes the proof. \(\square \)
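For the reader's convenience: with \(v=u+n^{2/3}Q(x)\), one has \(\sigma _n(x\mid u)/(1+{{\,\mathrm{\mathrm e}\,}}^{v})={{\,\mathrm{\mathrm e}\,}}^{-v}/(1+{{\,\mathrm{\mathrm e}\,}}^{-v})^2\) (as in the proof of Lemma 12.1), so the exact u-integration used above, and again in the proof of Lemma 12.4 below, reads

$$\begin{aligned} \int _{\mathsf s}^\infty \frac{\sigma _n(x\mid u)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}u=\int _{\mathsf s}^\infty \frac{{{\,\mathrm{\mathrm e}\,}}^{-v}}{(1+{{\,\mathrm{\mathrm e}\,}}^{-v})^2}\textrm{d}u=\left[ \frac{1}{1+{{\,\mathrm{\mathrm e}\,}}^{-v}}\right] _{u=\mathsf s}^{u=+\infty }=\frac{1}{1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf s+n^{2/3}Q(x)}}, \end{aligned}$$

for each fixed x.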

Next, we analyze the contribution coming from a neighborhood of the endpoint \(z=-a\).

Lemma 12.3

Fix \(\mathsf s\in \mathbb {R}\), \(\mathsf t_0\in (0,1)\). There exists \(\eta >0\) such that the estimate

$$\begin{aligned} \mathsf I_1(-a-\delta ,-a+\delta ) =\mathcal {O}({{\,\mathrm{\mathrm e}\,}}^{-u-\eta n^{2/3}}),\quad n\rightarrow \infty , \end{aligned}$$

is valid uniformly for \(u\ge \mathsf s\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\).

Proof

On the neighborhood \(D_\delta (-a)\) the function \(\textbf{P}=\textbf{P}^{(a)}\) is the Airy local parametrix (10.12), which involves the function \({\varvec{\Phi }}_{{{\,\textrm{Ai}\,}}}\) evaluated at the argument \(\zeta =n^{2/3}\varphi (z)\). In the \(\zeta \)-plane, we fix \(R>0\) for which the asymptotic expansion (10.10) is valid for \(|\zeta |>R\), and split the analysis into two cases, namely for \(|n^{2/3}\varphi (z)|\le R\) and \(|n^{2/3}\varphi (z)|\ge R\), and for

$$\begin{aligned} A_n:=\{x\in (-a-\delta ,-a+\delta )\mid |n^{2/3}\varphi (x)|\le R \},\quad B_n:=(-a-\delta ,-a+\delta ){\setminus } A_n \end{aligned}$$

we write

$$\begin{aligned} \mathsf I_1(-a-\delta ,-a+\delta )=\mathsf I_1(A_n)+\mathsf I_1(B_n). \end{aligned}$$

We now analyze each integral on the right-hand side separately.

For \(|n^{2/3}\varphi (z)|\le R\), the terms \({\varvec{\Phi }}_{{{\,\textrm{Ai}\,}},+}(\zeta =n^{2/3}\varphi (z))\) and \({\varvec{\Phi }}'_{{{\,\textrm{Ai}\,}},+}(\zeta =n^{2/3}\varphi (z))\) consist of continuous functions evaluated inside the compact interval \([-R,R]\), and therefore they are bounded. For the same reason, the expression (10.11) shows that \(\textbf{F}\) is bounded for \(|n^{2/3}\varphi (z)|\le R\). On the other hand, without further analysis we obtain that \(\textbf{F}'\) may grow at most as \(\mathcal {O}(n^{1/6})\). Finally, combining with (10.9) and the fact that the determinant of \(\textbf{P}\) is identically 1, we conclude that for \(|n^{2/3}\varphi (x)|\le R\),

$$\begin{aligned} \textbf{P}_+(x)=\mathcal {O}(1){{\,\mathrm{\mathrm e}\,}}^{n\phi _+(x)\varvec{\sigma }_3},\quad \textbf{P}_+(x)^{-1}={{\,\mathrm{\mathrm e}\,}}^{-n\phi _+(x)\varvec{\sigma }_3}\mathcal {O}(1)\quad \text {and}\quad \textbf{P}_+'(x)=\mathcal {O}(n){{\,\mathrm{\mathrm e}\,}}^{n\phi _+(x)\varvec{\sigma }_3} \end{aligned}$$
(12.7)

which is valid as \(n\rightarrow \infty \) and uniformly for \(x\in (-a-\delta ,-a+\delta )\), also uniformly for \(u\ge \mathsf s\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\).

We plug (12.7) back into (12.1) and, combined with the fact that \(|\sigma _n^{-1}|\le 2\) for \(x\in (-\infty ,0)\), the result is that

$$\begin{aligned} \mathsf A(x)=\left[ \varvec{\Lambda }_{n}(x)^{-1}{{\,\mathrm{\mathrm e}\,}}^{-n\phi _+(x)\varvec{\sigma }_3}\mathcal {O}(n){{\,\mathrm{\mathrm e}\,}}^{n\phi _+(x)\varvec{\sigma }_3} \varvec{\Lambda }_{n}(x)\right] _{21}={{\,\mathrm{\mathrm e}\,}}^{2n\phi _+(x)}\mathcal {O}(n),\quad n\rightarrow \infty , \end{aligned}$$

where the last error term is a scalar error, valid uniformly in \(x,u,\mathsf t\) as before. Integrating, we obtain that for some absolute constant \(K>0\),

$$\begin{aligned} |\mathsf I_1(A_n)|\le Kn \int _{A_n} \frac{\sigma _n(x\mid u)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}x\le Kn \int _{A_n}{{\,\mathrm{\mathrm e}\,}}^{-u-n^{2/3}Q(x)}\textrm{d}x=\mathcal {O}({{\,\mathrm{\mathrm e}\,}}^{-u-\eta n^{2/3}}),\quad n\rightarrow \infty , \end{aligned}$$
(12.8)

where for the second inequality we used that \(0<\sigma _n\le 1\) and for the last estimate we used that Q is strictly positive on \((-a-\delta ,-a+\delta )\supset A_n\).

Next, we consider the case \(|n^{2/3}\varphi (z)|\ge R\). In this situation, the asymptotics (10.10) take place, and when combined with (10.11) they yield

$$\begin{aligned} \textbf{P}_+(x)=\textbf{G}_+(x){{\,\mathrm{\mathrm e}\,}}^{-\varvec{\sigma }_3\log \sigma _n(x)/2}\left( \textbf{I}+\mathcal {O}(n^{-1})\right) {{\,\mathrm{\mathrm e}\,}}^{\varvec{\sigma }_3\log \sigma _n(x)/2}=\textbf{G}_+(x)\left( \textbf{I}+\mathcal {O}(n^{-1})\right) . \end{aligned}$$
(12.9)

where for the last equality we used (10.9).

The matrix \(\textbf{P}_+\) is bounded as \(x\rightarrow -a\), whereas \(\textbf{G}\) is not, but the cancellation that leads to this boundedness of \(\textbf{P}\) is not captured by these asymptotics. Nevertheless, as we need some uniform control over x, we now account for the behavior as \(x\rightarrow -a\) in a rough manner as follows. We are assuming that \(|n^{2/3}\varphi (x)|\ge R\), and because \(\varphi \) is conformal with \(\varphi (-a)=0\) this means that \(|x+a|\ge c/n^{2/3}\), for some fixed \(c>0\). The function \(\textbf{G}\) has a fourth-root singularity at \(x=-a\), and therefore we arrive at the rough estimate

$$\begin{aligned} \textbf{P}_+(x)=\mathcal {O}(n^{1/6}),\quad n\rightarrow \infty . \end{aligned}$$
(12.10)

The estimate (12.9) can be differentiated term by term. With arguments similar to the ones we just applied, we arrive at the rough estimate

$$\begin{aligned} \textbf{P}'_+(x)=\mathcal {O}(n^{7/6}),\quad n\rightarrow \infty . \end{aligned}$$
(12.11)

These latter two estimates are valid uniformly for \(u\ge \mathsf s\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\) as \(n\rightarrow \infty \).

Finally, on the interval \((-a,0)\) the factor \(\phi _+\) is purely imaginary, implying that \(\varvec{\Lambda }_{n}^{\pm 1}\) remains bounded therein. All combined, we obtain that

$$\begin{aligned} \mathsf A(x)=\mathcal {O}(n),\quad n\rightarrow \infty , \end{aligned}$$

uniformly for \(x\in B_n\) and \(u,\mathsf t\) as claimed. Proceeding as in (12.8) we obtain a bound for \(\mathsf I_1(B_n)\) and complete the proof. \(\square \)

It remains to analyze the two integrals \(\mathsf I_1(-\varepsilon _0,\varepsilon _1)\) and \(\mathsf I_2(-\varepsilon _0,0)\). The hardest analysis is that of the integral \(\mathsf I_1\) which, as it turns out, contains the leading contribution, a term that cancels \(\mathsf I_2\), and asymptotically negligible terms. For ease of presentation, we exploit the expression for \(\mathsf A\) in (12.1) as a sum and split

$$\begin{aligned} \mathsf I_1(-\varepsilon _0,\varepsilon _1)=\mathsf J_1(-\varepsilon _0,\varepsilon _1)+\mathsf J_2(-\varepsilon _0,\varepsilon _1), \end{aligned}$$
(12.12)

where, for any measurable set \(J\subset \mathbb {R}\),

$$\begin{aligned} \begin{aligned} \mathsf J_1(J)&:=\int _J \left[ \varvec{\Lambda }_{n}(x)^{-1}\textbf{P}_+(x)^{-1}\textbf{R}(x)^{-1}\textbf{R}'(x)\textbf{P}_+(x)\varvec{\Lambda }_{n}(x)\right] _{21}\frac{\sigma _n(x\mid u){{\,\mathrm{\mathrm e}\,}}^{-2n\phi _+(x)}}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}} \textrm{d}x,\\ \mathsf J_2(J)&:=\int _J \left[ \varvec{\Lambda }_{n}(x)^{-1}\textbf{P}_+(x)^{-1}\textbf{P}_+'(x)\varvec{\Lambda }_{n}(x) \right] _{21}\frac{\sigma _n(x\mid u){{\,\mathrm{\mathrm e}\,}}^{-2n\phi _+(x)}}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}x, \end{aligned} \end{aligned}$$
(12.13)

and analyze each of these terms separately. For the estimation of \(\mathsf J_1\), it is also easier to perform the u-integral, just as we did in Lemma 12.2.

Lemma 12.4

Fix \(\mathsf s_0>0\) and \(\mathsf t_0\in (0,1)\). The estimate

$$\begin{aligned} \int _{\mathsf s}^\infty \mathsf J_1(-\varepsilon _0,\varepsilon _1)\textrm{d}u=\mathcal {O}(n^{-1/3}) \end{aligned}$$
(12.14)

holds true uniformly for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\).

Proof

The idea is similar to the proof of Lemma 12.3. We fix a number \(R>0\) for which the asymptotic expansion (4.3) for \({\varvec{\Phi }}=\widehat{{\varvec{\Phi }}}_n\) is valid for \(|\zeta |\ge R\), uniformly in \(\mathsf t,\mathsf s\) as claimed (see Remark 7.8). Then, introduce

$$\begin{aligned} C_n:=\{ x\in (-\varepsilon _0,\varepsilon _1)\mid |n^{2/3}\psi (x)|\le R \},\quad D_n:=(-\varepsilon _0,\varepsilon _1){\setminus } C_n. \end{aligned}$$
(12.15)

We now find bounds for the integrands, with separate arguments for each of \(C_n\) and \(D_n\).

Recalling (10.22), on the interval \((-\varepsilon _0,\varepsilon _1)\) the parametrix is \(\textbf{P}=\textbf{P}^{(0)}=\textbf{F}_n{\varvec{\Phi }}_n{{\,\mathrm{\mathrm e}\,}}^{n\phi \varvec{\sigma }_3}\), with \(\textbf{F}_n\) and \({\varvec{\Phi }}_n\) as in (10.16). Using the definition of \(\textbf{F}_n\) in (10.16) and Theorem 10.7, we express

$$\begin{aligned} \textbf{F}_n^{-1}\textbf{R}^{-1}\textbf{R}'\textbf{F}_n=\textbf{F}_n^{-1}(\textbf{R}^{-1}-\textbf{I})\textbf{R}'\textbf{F}_n+\textbf{F}_n^{-1}\textbf{R}'\textbf{F}_n=\textbf{F}_n^{-1}\textbf{R}'\textbf{F}_n+\mathcal {O}(n^{-1/3}),\quad n\rightarrow \infty , \end{aligned}$$
(12.16)

valid uniformly when evaluated at \(x\in C_n\) and also uniformly for \(\mathsf s,\mathsf t\) as claimed. Using again the definition of \(\textbf{F}_n\), the fact that \(\psi /\mathsf m\) remains bounded near \(z=0\) and (10.30),

$$\begin{aligned} \textbf{F}_n^{-1}\textbf{R}'\textbf{F}_n=\frac{1}{n^{1/3}}\psi ^{\varvec{\sigma }_3/4}\mathsf m^{-\varvec{\sigma }_3/4}n^{\varvec{\sigma }_3/6}\widehat{\textbf{R}}_1' n^{-\varvec{\sigma }_3/6}\mathsf m^{\varvec{\sigma }_3/4}\psi ^{-\varvec{\sigma }_3/4}+\mathcal {O}(n^{-1/3}). \end{aligned}$$

A careful inspection of (10.29) shows that \((\widehat{\textbf{R}}_1(z))_{12}\) is independent of z, so \(\widehat{\textbf{R}}_1'\) has zero (1, 2)-entry. Therefore we conclude that

$$\begin{aligned} \textbf{F}_n^{-1}\textbf{R}^{-1}\textbf{R}'\textbf{F}_n=\mathcal {O}(n^{-1/3}), \end{aligned}$$

stressing that this is valid for \(x\in C_n\). Also, from (10.19) we learn that \({\varvec{\Phi }}_n(x)=\mathcal {O}(1)\) uniformly on \(C_n\). Combining everything obtained so far yields that

$$\begin{aligned} \left[ \varvec{\Lambda }_{n}^{-1}\textbf{P}_+^{-1}\textbf{R}^{-1}\textbf{R}'\textbf{P}_+\varvec{\Lambda }_{n}\right] _{21}=\left[ \varvec{\Lambda }_{n}^{-1} {{\,\mathrm{\mathrm e}\,}}^{-n\phi \varvec{\sigma }_3}\mathcal {O}(n^{-1/3}){{\,\mathrm{\mathrm e}\,}}^{n\phi \varvec{\sigma }_3} \varvec{\Lambda }_{n}\right] _{21}={{\,\mathrm{\mathrm e}\,}}^{2n\phi }\left[ {\varvec{\Delta }}_{n}^{-1}\mathcal {O}(n^{-1/3}){\varvec{\Delta }}_{n}\right] _{21}. \end{aligned}$$

Along \((-a,0)\) we have \(Q>0\) so that \(1/\sigma _n\) remains bounded uniformly, and consequently \({\varvec{\Delta }}_{n}=\mathcal {O}(1)\) in the same interval. Using this information in the last displayed equation, we thus obtain that

$$\begin{aligned} \mathsf J_1(C_n)=\int _{C_n}\mathcal {O}(n^{-1/3}) \frac{\sigma _n(x\mid u)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}x,\quad n\rightarrow \infty , \end{aligned}$$
(12.17)

uniformly for \(u\ge \mathsf s\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\), and where the error term is uniform for \(x\in C_n\).

Next, along \(D_n\) we use the asymptotic expansion (4.3) for \({\varvec{\Phi }}=\widehat{{\varvec{\Phi }}}_n\) and the definition of \(\textbf{F}_n\) and obtain that

$$\begin{aligned} \textbf{P}(x)=\textbf{U}_0\mathsf m_+(x)^{\varvec{\sigma }_3/4}\textbf{U}_0^{-1}\left( 1+\mathcal {O}(n^{-1/3})\right) =\textbf{U}_0 \begin{pmatrix} \mathcal {O}(1) &{} 0 \\ 0 &{} \mathcal {O}(n^{1/6}) \end{pmatrix} \textbf{U}_0^{-1} \left( 1+\mathcal {O}(n^{-1/3})\right) ,\quad x\in D_n, \end{aligned}$$

and similarly

$$\begin{aligned} \textbf{P}(x)= \left( 1+\mathcal {O}(n^{-1/3})\right) \textbf{U}_0 \begin{pmatrix} \mathcal {O}(n^{1/6}) &{} 0 \\ 0 &{} \mathcal {O}(1) \end{pmatrix} \textbf{U}_0^{-1} ,\quad x\in D_n, \end{aligned}$$

all valid as \(n\rightarrow \infty \), uniformly for \(x\in D_n\) and uniformly in \(u,\mathsf t\) as claimed.

With the same arguments that we applied in (12.16), we obtain along \(D_n\) as well

$$\begin{aligned} \textbf{P}^{-1}\textbf{R}^{-1}\textbf{R}'\textbf{P}=\mathcal {O}(n^{-1/3}),\quad \text {so that}\quad \mathsf J_1(D_n)=\int _{D_n}\mathcal {O}(n^{-1/3}) \frac{\sigma _n(x\mid u){{\,\mathrm{\mathrm e}\,}}^{-2n\phi _+(x)}}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}x,\quad n\rightarrow \infty . \end{aligned}$$

The factor \(\phi _+\) is purely imaginary on \((-a,0)\) and positive on \([0,+\infty )\), so the term \({{\,\mathrm{\mathrm e}\,}}^{-2n\phi _+}\) in the integrand above is bounded by a uniform constant. Combining with (12.17), we conclude

$$\begin{aligned} \mathsf J_1(-\varepsilon _0,\varepsilon _1)=\mathcal {O}(n^{-1/3})\int _{-\varepsilon _0}^{\varepsilon _1}\frac{\sigma _n(x\mid u)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}x \end{aligned}$$

Finally, we now integrate in u and use Tonelli's Theorem to interchange the order of integration, obtaining, just as in the proof of Lemma 12.2, that

$$\begin{aligned} \int _{\mathsf s}^\infty \int _{-\varepsilon _0}^{\varepsilon _1}\frac{\sigma _n(x\mid u)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}x \textrm{d}u= \int _{-\varepsilon _0}^{\varepsilon _1}\frac{1}{1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf s+n^{2/3}Q(x)}}\textrm{d}x\le \varepsilon _1+\varepsilon _0, \end{aligned}$$

which concludes the proof. \(\square \)

Next, in our pursuit of analyzing (12.4), the final missing piece of the puzzle is the term \(\mathsf J_2(-\varepsilon _0,\varepsilon _1)\) that arises from \(\mathsf I_1(-\varepsilon _0,\varepsilon _1)\). To state the rigorous results, we recall once again that the conformal map \(\psi \) was introduced in Proposition 8.2, the function \(\mathsf h_n\) is given in (10.16), the function \(\widehat{{\varvec{\Delta }}}_n\) is in (11.5), and in addition we set

$$\begin{aligned} \zeta _0:=\psi (-\varepsilon _0)<0,\quad \zeta _1:=\psi (\varepsilon _1)>0. \end{aligned}$$

Lemma 12.5

Fix \(\mathsf s_0>0\) and \(\mathsf t_0\in (0,1)\). There exists a function \(\mathsf R_1(u)=\mathsf R_n(u\mid \mathsf t)\) satisfying the estimate

$$\begin{aligned} \int _{\mathsf s}^\infty \mathsf R_1(u)\textrm{d}u=\mathcal {O}(n^{-2/3}),\quad n\rightarrow \infty , \end{aligned}$$

uniformly for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\), and for which the identity

$$\begin{aligned}{} & {} \mathsf J_2(-\varepsilon _0,\varepsilon _1)=-\mathsf I_2(-\varepsilon _0,0)\nonumber \\{} & {} \quad +\int _{n^{2/3}\zeta _0}^{n^{2/3}\zeta _1}\frac{{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_n(u)}}{\left( 1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf h_n(u)}\right) ^2}\left[ \widehat{{\varvec{\Delta }}}_n(x)^{-1}\widehat{{\varvec{\Phi }}}_{n,+}(x)^{-1}\left( \widehat{{\varvec{\Phi }}}_{n,+} \widehat{{\varvec{\Delta }}}_n\right) '(x)\right] _{21}\textrm{d}x+\mathsf R_1(u),\qquad \qquad \quad \end{aligned}$$
(12.18)

holds true for every \(u\in \mathbb {R}\) and every \(\mathsf t>0\).

Proof

Recall that \(\mathsf J_2\) was introduced in (12.13). In the interval \((-\varepsilon _0,\varepsilon _1)\) the local parametrix \(\textbf{P}=\textbf{P}^{(0)}\) coincides with (10.22). A direct calculation from (10.16) shows that the matrix \(\textbf{F}_n\) therein satisfies the identity

$$\begin{aligned} \textbf{F}_n'(z)=\frac{1}{4}\left( \frac{\mathsf m'(z)}{\mathsf m(z)}-\frac{\psi '(z)}{\psi (z)}\right) \textbf{F}_n(z)\varvec{\sigma }_3. \end{aligned}$$
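Indeed, by (10.16) we may write \(\textbf{F}_n=\textbf{U}_0\left( \mathsf m/(n^{2/3}\psi )\right) ^{\varvec{\sigma }_3/4}\), and differentiating this expression gives

$$\begin{aligned} \textbf{F}_n'(z)=\textbf{U}_0\,\frac{1}{4}\left( \frac{\mathsf m'(z)}{\mathsf m(z)}-\frac{\psi '(z)}{\psi (z)}\right) \left( \frac{\mathsf m(z)}{n^{2/3}\psi (z)}\right) ^{\varvec{\sigma }_3/4}\varvec{\sigma }_3 =\frac{1}{4}\left( \frac{\mathsf m'(z)}{\mathsf m(z)}-\frac{\psi '(z)}{\psi (z)}\right) \textbf{F}_n(z)\varvec{\sigma }_3. \end{aligned}$$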

With the help of this identity, we express

$$\begin{aligned} \textbf{P}_+^{-1}\textbf{P}_+'= {{\,\mathrm{\mathrm e}\,}}^{-n\phi _+\varvec{\sigma }_3}\left[ \frac{1}{4}\left( \frac{\mathsf m'}{\mathsf m}-\frac{\psi '}{\psi }\right) {\varvec{\Phi }}_{n,+}^{-1}\varvec{\sigma }_3 {\varvec{\Phi }}_{n,+}+{\varvec{\Phi }}_{n,+}^{-1}{\varvec{\Phi }}_{n,+}'+n\phi '_+\varvec{\sigma }_3 \right] {{\,\mathrm{\mathrm e}\,}}^{n\phi _+\varvec{\sigma }_3}, \end{aligned}$$

where both sides are evaluated at \(x\in (-\varepsilon _0,\varepsilon _1)\). Thus,

$$\begin{aligned}{} & {} \Bigg [\varvec{\Lambda }_{n}^{-1} \textbf{P}_+^{-1}\textbf{P}_+' \varvec{\Lambda }_{n}\Bigg ]_{21}\nonumber \\{} & {} \quad = -2n\frac{\phi _+' \chi _0{{\,\mathrm{\mathrm e}\,}}^{2n\phi _+}}{\sigma _n}+{{\,\mathrm{\mathrm e}\,}}^{2n\phi _+}\left[ {\varvec{\Delta }}_{n}^{-1} {\varvec{\Phi }}_{n,+}^{-1}{\varvec{\Phi }}_{n,+}' {\varvec{\Delta }}_n\right] _{21}\nonumber \\{} & {} \qquad +\frac{{{\,\mathrm{\mathrm e}\,}}^{2n\phi _+}}{4}\left( \frac{\mathsf m'}{\mathsf m}-\frac{\psi '}{\psi }\right) \left[ {\varvec{\Delta }}_n^{-1}{\varvec{\Phi }}_{n,+}^{-1}\varvec{\sigma }_3 {\varvec{\Phi }}_{n,+}{\varvec{\Delta }}_n\right] _{21} \nonumber \\{} & {} \quad = -2n\frac{\phi _+' \chi _0{{\,\mathrm{\mathrm e}\,}}^{2n\phi _+}}{\sigma _n} -{{\,\mathrm{\mathrm e}\,}}^{2n\phi _+}\left[ {\varvec{\Delta }}_n^{-1} {\varvec{\Delta }}_n'\right] _{21} +{{\,\mathrm{\mathrm e}\,}}^{2n\phi _+}\left[ {\varvec{\Delta }}_{n}^{-1} {\varvec{\Phi }}_{n,+}^{-1}\left( {\varvec{\Phi }}_{n,+} {\varvec{\Delta }}_n\right) '\right] _{21}\nonumber \\{} & {} \qquad +\frac{{{\,\mathrm{\mathrm e}\,}}^{2n\phi _+}}{4}\left( \frac{\mathsf m'}{\mathsf m}-\frac{\psi '}{\psi }\right) \left[ {\varvec{\Delta }}_n^{-1}{\varvec{\Phi }}_{n,+}^{-1}\varvec{\sigma }_3 {\varvec{\Phi }}_{n,+}{\varvec{\Delta }}_n\right] _{21} \end{aligned}$$
(12.19)

We now multiply this last expression by \({{\,\mathrm{\mathrm e}\,}}^{-2n\phi _+}\sigma _n/(1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q})\) and integrate. A direct calculation gives

$$\begin{aligned}{} & {} \frac{\sigma _n}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q}}\left( 2n\frac{\phi _+' \chi _0}{\sigma _n}+\left[ {\varvec{\Delta }}_n^{-1} {\varvec{\Delta }}_n'\right] _{21}\right) \\{} & {} \quad = \frac{\sigma _n\chi _0}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q}}\left( 2n\frac{\phi _+' }{\sigma _n}+\left( \frac{1}{\sigma _n}\right) '\right) =\frac{\chi _0\sigma _n{{\,\mathrm{\mathrm e}\,}}^{-2n\phi _+}}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q}}\left( \frac{{{\,\mathrm{\mathrm e}\,}}^{2n\phi _+}}{\sigma _n}\right) ', \end{aligned}$$

which is the integrand of \(\mathsf I_2(-\varepsilon _0,\varepsilon _1)=\mathsf I_2(-\varepsilon _0,0)\), see (12.3).
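
The last equality above is simply the product rule applied to \({{\,\mathrm{\mathrm e}\,}}^{2n\phi _+}/\sigma _n\):

$$\begin{aligned} \left( \frac{{{\,\mathrm{\mathrm e}\,}}^{2n\phi _+}}{\sigma _n}\right) '=2n\phi _+'\frac{{{\,\mathrm{\mathrm e}\,}}^{2n\phi _+}}{\sigma _n}+{{\,\mathrm{\mathrm e}\,}}^{2n\phi _+}\left( \frac{1}{\sigma _n}\right) '={{\,\mathrm{\mathrm e}\,}}^{2n\phi _+}\left( 2n\frac{\phi _+'}{\sigma _n}+\left( \frac{1}{\sigma _n}\right) '\right) . \end{aligned}$$

Thus, from (12.19) we obtain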

$$\begin{aligned}{} & {} \mathsf J_2(-\varepsilon _0,\varepsilon _1)=-\mathsf I_2(-\varepsilon _0,0)+\int _{-\varepsilon _0}^{\varepsilon _1} \frac{\sigma _n(x)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\left[ {\varvec{\Delta }}_n(x)^{-1}{\varvec{\Phi }}_{n,+}(x)^{-1}\left( {\varvec{\Phi }}_{n,+}(x) {\varvec{\Delta }}_n(x)\right) '\right] _{21}\textrm{d}x+\mathsf R_1(u),\nonumber \\ \end{aligned}$$
(12.20)

where we have set

$$\begin{aligned}{} & {} \mathsf R_1(u):=\frac{1}{4}\int _{-\varepsilon _0}^{\varepsilon _1} \left( \frac{\mathsf m'(x)}{\mathsf m(x)}-\frac{\psi '(x)}{\psi (x)}\right) \frac{\sigma _n(x)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\left[ {\varvec{\Delta }}_n(x)^{-1}{\varvec{\Phi }}_{n,+}(x)^{-1} \varvec{\sigma }_3{\varvec{\Phi }}_{n,+}(x) {\varvec{\Delta }}_n(x)\right] _{21}\textrm{d}x. \end{aligned}$$

It is worth mentioning that both \(\mathsf m\) and \(\psi \) have a simple zero at \(x=0\), so the first factor in the integrand of \(\mathsf R_1\) remains bounded near \(x=0\).
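
Indeed, writing \(\mathsf m(x)=c_{\mathsf m}x(1+\mathcal {O}(x))\) and \(\psi (x)=c_\psi x(1+\mathcal {O}(x))\) as \(x\rightarrow 0\), for some nonzero constants \(c_{\mathsf m}\) and \(c_\psi \), we have

$$\begin{aligned} \frac{\mathsf m'(x)}{\mathsf m(x)}-\frac{\psi '(x)}{\psi (x)}=\left( \frac{1}{x}+\mathcal {O}(1)\right) -\left( \frac{1}{x}+\mathcal {O}(1)\right) =\mathcal {O}(1),\quad x\rightarrow 0, \end{aligned}$$

so that the two simple poles cancel each other.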

Recalling (10.16), the integrand written explicitly in (12.20) is of the form

$$\begin{aligned}{} & {} f_1(n^{2/3}\psi (x))\left[ \textbf{f}_2(n^{2/3}\psi (x))\left( \textbf{f}_3(n^{2/3}\psi (x))\right) '\right] _{21}\\{} & {} \quad =n^{2/3}\psi '(x)f_1(n^{2/3}\psi (x))\left[ \textbf{f}_2(n^{2/3}\psi (x))\textbf{f}_3'(n^{2/3}\psi (x))\right] _{21} \end{aligned}$$

with obvious choices of the functions \(f_1,\textbf{f}_2,\textbf{f}_3\); performing the change of variables \(v=n^{2/3}\psi (x)\), spelled out below, we obtain the integral on the right-hand side of (12.18).
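
Spelled out, setting \(v:=n^{2/3}\psi (x)\), so that \(\textrm{d}v=n^{2/3}\psi '(x)\textrm{d}x\), the substitution reads

$$\begin{aligned} \int _{-\varepsilon _0}^{\varepsilon _1}n^{2/3}\psi '(x)f_1(n^{2/3}\psi (x))\left[ \textbf{f}_2(n^{2/3}\psi (x))\textbf{f}_3'(n^{2/3}\psi (x))\right] _{21}\textrm{d}x=\int _{n^{2/3}\psi (-\varepsilon _0)}^{n^{2/3}\psi (\varepsilon _1)}f_1(v)\left[ \textbf{f}_2(v)\textbf{f}_3'(v)\right] _{21}\textrm{d}v, \end{aligned}$$

and we identify the limits \(n^{2/3}\psi (-\varepsilon _0)\) and \(n^{2/3}\psi (\varepsilon _1)\) with the limits \(n^{2/3}\zeta _0\) and \(n^{2/3}\zeta _1\) appearing in (12.18); up to the naming of the integration variable, this is the integral there.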

To conclude, it remains to show the bound for \(\mathsf R_1\). For that, we again use the sets \(C_n\) and \(D_n\) from (12.15) and analyze the integral over each of these sets separately.

Along \(C_n\), the convergence (10.19) ensures that \({\varvec{\Phi }}_{n,+}\) remains uniformly bounded; combined with the boundedness of all the other factors on the whole interval \((-\varepsilon _0,\varepsilon _1)\), this yields

$$\begin{aligned}{} & {} \int _{C_n}\left( \frac{\mathsf m'(x)}{\mathsf m(x)}-\frac{\psi '(x)}{\psi (x)}\right) \frac{\sigma _n(x)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\left[ {\varvec{\Delta }}_n(x)^{-1}{\varvec{\Phi }}_{n,+}(x)^{-1} \varvec{\sigma }_3{\varvec{\Phi }}_{n,+}(x) {\varvec{\Delta }}_n(x)\right] _{21}\textrm{d}x\\{} & {} \quad =\mathcal {O}(1)\int _{C_n} \frac{\sigma _n(x)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}x. \end{aligned}$$

The function \(\psi \) is conformal, and consequently \(|z|\le cn^{-2/3}\) for \(z\in C_n\) and some constant \(c\) independent of \(\mathsf t,u,n\). In particular, this ensures that \(n^{2/3}Q(x)\ge -\tilde{c}\) for every \(x\in C_n\) and some constant \(\tilde{c}>0\), and therefore

$$\begin{aligned} \frac{\sigma _n(x)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\le \frac{1}{1+{{\,\mathrm{\mathrm e}\,}}^{u-\tilde{c}}}\le {{\,\mathrm{\mathrm e}\,}}^{-u+\tilde{c}}. \end{aligned}$$

Again because \(\psi \) is conformal, the Lebesgue measure of \(C_n\) is \(\mathcal {O}(n^{-2/3})\). Combining these facts, we conclude the estimate

$$\begin{aligned}{} & {} \int _{C_n}\left( \frac{\mathsf m'(x)}{\mathsf m(x)}-\frac{\psi '(x)}{\psi (x)}\right) \frac{\sigma _n(x)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\left[ {\varvec{\Delta }}_n(x)^{-1}{\varvec{\Phi }}_{n,+}(x)^{-1} \varvec{\sigma }_3{\varvec{\Phi }}_{n,+}(x) {\varvec{\Delta }}_n(x)\right] _{21}\textrm{d}x\\{} & {} \quad =\mathcal {O}({{\,\mathrm{\mathrm e}\,}}^{-u}n^{-2/3}), \end{aligned}$$

as \(n\rightarrow \infty \), which is valid uniformly for \(u,\mathsf t\) as claimed by the Lemma.
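
For the record, the estimate just stated arises from combining the \(\mathcal {O}(1)\) bound on the bracketed factor, the pointwise bound on the weight, and the bound on the measure of \(C_n\):

$$\begin{aligned} \int _{C_n}\frac{\sigma _n(x)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}x\le {{\,\mathrm{\mathrm e}\,}}^{-u+\tilde{c}}\,|C_n|=\mathcal {O}({{\,\mathrm{\mathrm e}\,}}^{-u}n^{-2/3}), \end{aligned}$$

where \(|C_n|\) denotes the Lebesgue measure of \(C_n\).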

Finally, on \(D_n\) we use the expansion (5.10) for \({\varvec{\Phi }}={\varvec{\Phi }}_n\), which provides

$$\begin{aligned} {\varvec{\Phi }}_{n,+}(x)^{-1}\varvec{\sigma }_3{\varvec{\Phi }}_{n,+}(x)={{\,\mathrm{\mathrm e}\,}}^{n\phi _+(x)\varvec{\sigma }_3}\left( \varvec{\sigma }_2 +\mathcal {O}(n^{-1/3}) \right) {{\,\mathrm{\mathrm e}\,}}^{-n\phi _+(x)\varvec{\sigma }_3},\quad n\rightarrow \infty , \end{aligned}$$

which is valid uniformly for \(x\in D_n\) and uniformly in the parameters \(u,\mathsf t\) as required. After some straightforward calculations (note that the \((2,1)\) entry of \({{\,\mathrm{\mathrm e}\,}}^{n\phi _+\varvec{\sigma }_3}\varvec{\sigma }_2{{\,\mathrm{\mathrm e}\,}}^{-n\phi _+\varvec{\sigma }_3}\) equals \(\textrm{i}{{\,\mathrm{\mathrm e}\,}}^{-2n\phi _+}\)), we thus arrive at

$$\begin{aligned}{} & {} \int _{D_n}\left( \frac{\mathsf m'(x)}{\mathsf m(x)}-\frac{\psi '(x)}{\psi (x)}\right) \frac{\sigma _n(x)}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\left[ {\varvec{\Delta }}_n(x)^{-1}{\varvec{\Phi }}_{n,+}(x)^{-1} \varvec{\sigma }_3{\varvec{\Phi }}_{n,+}(x) {\varvec{\Delta }}_n(x)\right] _{21}\textrm{d}x \\{} & {} \quad = \int _{D_n}\left( \frac{\mathsf m'(x)}{\mathsf m(x)}-\frac{\psi '(x)}{\psi (x)}\right) \frac{\textrm{i}\sigma _n(x){{\,\mathrm{\mathrm e}\,}}^{-2n\phi _+(x)}}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}} \left[ 1+\frac{\chi _0(x)}{\sigma _n(x)^2}+\mathcal {O}(n^{-1/3}) \right] \textrm{d}x,\quad n\rightarrow \infty , \end{aligned}$$

with, as always, a uniform error term in \(x\in D_n\), \(u,\mathsf t\). Each of the terms \((\mathsf m'/\mathsf m-\psi '/\psi )\) and \(\chi _0/\sigma _n^2\) is bounded on \((-\varepsilon _0,\varepsilon _1)\), and \(|{{\,\mathrm{\mathrm e}\,}}^{-2n\phi _+(x)}|\le |{{\,\mathrm{\mathrm e}\,}}^{-n\phi _+(x)}|\) on this interval, since \({{\,\textrm{Re}\,}}\phi _+=0\) on \((-\varepsilon _0,0)\) while \(\phi \ge 0\) on \((0,\varepsilon _1)\); hence, to bound the integral above it is enough to estimate

$$\begin{aligned} \int _{-\varepsilon _0}^{\varepsilon _1} \frac{\sigma _n(x){{\,\mathrm{\mathrm e}\,}}^{-n\phi _+(x)}}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}x=\int _{-\varepsilon _0}^{0} \frac{\sigma _n(x){{\,\mathrm{\mathrm e}\,}}^{-n\phi _+(x)}}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}x+\int _{0}^{\varepsilon _1} \frac{\sigma _n(x){{\,\mathrm{\mathrm e}\,}}^{-n\phi (x)}}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}x. \end{aligned}$$

For the integral over \((-\varepsilon _0,0)\), we know that \({{\,\textrm{Re}\,}}\phi _+=0\), and then using Tonelli’s Theorem to integrate first in u we obtain that

$$\begin{aligned} \int _{\mathsf s}^{+\infty } \left| \int _{-\varepsilon _0}^{0} \frac{\sigma _n(x){{\,\mathrm{\mathrm e}\,}}^{-n\phi _+(x)}}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}x\right| \textrm{d}u\le \int _{-\varepsilon _0}^0 \frac{1}{1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf s+n^{2/3}Q(x)}}\textrm{d}x. \end{aligned}$$

Changing variables \(v=n^{2/3}Q(x)\) in this last integral, we then see that the right-hand side above is \(\mathcal {O}(n^{-2/3})\), uniformly for \(\mathsf s\ge -\mathsf s_0\) and \(\mathsf t_0\le \mathsf t\le 1/\mathsf t_0\).
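
Both here and in the next step, the \(u\)-integration is explicit once the weight is written in the logistic form \(\sigma _n/(1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q})={{\,\mathrm{\mathrm e}\,}}^{h}/(1+{{\,\mathrm{\mathrm e}\,}}^{h})^2\) with \(h=u+n^{2/3}Q(x)\), an identification suggested by comparing the integrands of (12.18) and (12.20): for fixed \(a\in \mathbb {R}\),

$$\begin{aligned} \int _{\mathsf s}^{+\infty }\frac{{{\,\mathrm{\mathrm e}\,}}^{u+a}}{\left( 1+{{\,\mathrm{\mathrm e}\,}}^{u+a}\right) ^2}\textrm{d}u=\left[ -\frac{1}{1+{{\,\mathrm{\mathrm e}\,}}^{u+a}}\right] _{u=\mathsf s}^{+\infty }=\frac{1}{1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf s+a}}, \end{aligned}$$

and taking \(a=n^{2/3}Q(x)\) recovers the right-hand side above as well as the equality in the next display.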

Finally, for the integral over \((0,\varepsilon _1)\) we now have that \(\phi \ge 0\) in this interval, and it is independent of u, so once again interchanging order of integration we obtain

$$\begin{aligned} 0\le \int _\mathsf s^\infty \int _{0}^{\varepsilon _1} \frac{\sigma _n(x){{\,\mathrm{\mathrm e}\,}}^{-n\phi (x)}}{1+{{\,\mathrm{\mathrm e}\,}}^{u+n^{2/3}Q(x)}}\textrm{d}x \textrm{d}u=\int _0^{\varepsilon _1}\frac{{{\,\mathrm{\mathrm e}\,}}^{-n\phi (x)}}{1+{{\,\mathrm{\mathrm e}\,}}^{\mathsf s+n^{2/3}Q(x)}}\textrm{d}x\le \int _0^{\varepsilon _1}{{\,\mathrm{\mathrm e}\,}}^{-n\phi (x)}\textrm{d}x, \end{aligned}$$

and now changing variables \(v=n\phi \) (which is well defined in this interval because of the local behavior (8.4)) we see that the integral on the right-most side is \(\mathcal {O}(n^{-1})\). This completes the proof. \(\square \)

To conclude, it remains to prove Proposition 11.1.

Proof of Proposition 11.1

Recalling (12.4) and (12.12), the result is an immediate consequence of Lemmas 12.1–12.5. \(\square \)