1 Introduction

Airy kernel Fredholm determinants The Airy point process, or Airy ensemble [39, 42], is one of the most important universal point processes arising in random matrix ensembles and other repulsive particle systems. It describes, among others, the eigenvalues near soft edges in a wide class of ensembles of large random matrices [16, 21, 22, 25, 40], the largest parts of random partitions or Young diagrams with respect to the Plancherel measure [5, 13], and the transition between liquid and frozen regions in random tilings [32]. It is a determinantal point process, which means that its correlation functions can be expressed as determinants built from a correlation kernel, and this kernel characterizes the process. The correlation kernel is given in terms of the Airy function by

$$\begin{aligned} K^{\mathrm{Ai}}(u, v) = \frac{ \mathrm{Ai}(u) \mathrm{Ai}'(v) - \mathrm{Ai}'(u) \mathrm{Ai}(v) }{ u - v }. \end{aligned}$$
(1.1)

Let \(N_A\) denote the number of points of the process contained in the set \(A\subset \mathbb {R}\), let \(A_1,\ldots , A_m\) be disjoint subsets of \(\mathbb {R}\), with \(m\in \mathbb {N}_{>0}\), and let \(s_1,\ldots , s_m\in \mathbb {C}\). Then, the general theory of determinantal point processes [11, 33, 42] implies that

$$\begin{aligned} \mathbb {E}\left( \prod _{j = 1}^m s_j^{N_{A_j}} \right) = \det \left( 1 - \chi _{\cup _j A_j}\sum _{j=1}^m (1 - s_j) \mathcal {K}^{\mathrm{Ai}} \chi _{A_j} \right) , \end{aligned}$$
(1.2)

where the right-hand side of this identity denotes the Fredholm determinant of the operator \(\chi _{\cup _j A_j}\sum _{j=1}^m (1 - s_j) \mathcal {K}^{\mathrm{Ai}} \chi _{A_j}\), with \(\mathcal {K}^{\mathrm{Ai}}\) the integral operator associated to the Airy kernel and \(\chi _A\) the projection operator from \(L^2(\mathbb {R})\) to \(L^2(A)\). The integral operator \(\mathcal {K}^{\mathrm{Ai}}\) is trace-class when acting on bounded real intervals or on unbounded intervals of the form \((x,+\infty )\). Note that, when \(s_j=0\) for \(j\in \mathcal {K} \subset \{1,\ldots ,m\}\), the left-hand side of (1.2) should be interpreted as

$$\begin{aligned}&\mathbb {E}\left( \prod _{j \notin \mathcal {K}} s_j^{N_{A_j}} \times \left\{ \begin{array}{l l} 1, &{}\quad \text{ if } N_{A_{j}} = 0 \text{ for } \text{ all } j \in \mathcal {K}, \\ 0, &{}\quad \text{ otherwise } \end{array} \right\} \right) \\&\quad = \mathbb {E}\left( \prod _{j \notin \mathcal {K}} s_j^{N_{A_j}} \Big | N_{\cup _{j \in \mathcal {K}}A_{j}} = 0 \right) \mathbb {P}\left( N_{\cup _{j \in \mathcal {K}}A_{j}} = 0 \right) . \end{aligned}$$

In what follows, we take the special choice of subsets

$$\begin{aligned} A_j=(x_j,x_{j-1}),\qquad +\,\infty =:x_0>x_1>\cdots>x_m>-\infty , \end{aligned}$$

we restrict to \(s_1,\ldots , s_m\in [0,1]\), and we study the function

$$\begin{aligned} F(\vec {x};\vec {s})= & {} F(x_1,\ldots ,x_{m};s_1,\ldots , s_m)\nonumber \\:= & {} \det \left( 1 - \chi _{(x_m, +\,\infty )} \sum _{j = 1}^m (1 - s_j) \mathcal {K}^{\mathrm{Ai}} \chi _{(x_j, x_{j-1})} \right) . \end{aligned}$$
(1.3)
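Although the analysis below is purely asymptotic, the determinant (1.3) can be evaluated numerically, which is convenient as a sanity check on the formulas that follow. The sketch below (not part of the original analysis; the function name and the truncation parameters n and xmax are ad hoc choices of ours) discretizes the operator in (1.3) by Gauss–Legendre quadrature, in the spirit of Bornemann's method for Fredholm determinants.

```python
# Minimal numerical sketch (not from the paper): approximate F(x;s) of (1.3)
# by a Nystrom discretization of the Fredholm determinant, in the spirit of
# Bornemann's quadrature method. n and xmax are ad hoc choices.
import numpy as np
from scipy.special import airy

def fredholm_F(xs, ss, n=200, xmax=8.0):
    """xs = [x1,...,xm] decreasing, ss = [s1,...,sm] in [0,1]."""
    t, w = np.polynomial.legendre.leggauss(n)
    a, b = xs[-1], xmax                       # truncate (x_m, +inf) at xmax
    u = 0.5 * (b - a) * t + 0.5 * (b + a)
    w = 0.5 * (b - a) * w
    Ai, Aip, _, _ = airy(u)
    U, V = np.meshgrid(u, u, indexing="ij")
    with np.errstate(divide="ignore", invalid="ignore"):
        K = (np.outer(Ai, Aip) - np.outer(Aip, Ai)) / (U - V)
    np.fill_diagonal(K, Aip**2 - u * Ai**2)   # diagonal limit of (1.1), via Ai'' = u Ai
    edges = [np.inf] + list(xs)               # x_0 = +inf, x_1, ..., x_m
    c = np.zeros_like(u)                      # weight sum_j (1-s_j) chi_{(x_j, x_{j-1})}
    for j, s in enumerate(ss, start=1):
        c += (1.0 - s) * ((u > edges[j]) & (u <= edges[j - 1]))
    return np.linalg.det(np.eye(n) - K * (c * w)[np.newaxis, :])

print(fredholm_F([-2.0], [0.0]))              # P(no particle in (-2, +inf))
print(fredholm_F([-1.0, -3.0], [0.0, 0.5]))
```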

The case \(m=1\) corresponds to the Tracy–Widom distribution [43], which can be expressed in terms of the Hastings–McLeod [29] (if \(s_{1} = 0\)) or Ablowitz–Segur [1] (if \(s_{1}\in (0,1)\)) solutions of the Painlevé II equation. It follows directly from (1.2) that \(F(x;0)\) is the distribution function of the largest particle in the Airy point process. The function \(F(x;s)\) for \(s\in (0,1)\) is the distribution function of the largest particle in the thinned Airy point process, which is obtained by removing each particle independently with probability \(s\). Such thinned processes were introduced in random matrix theory by Bohigas and Pato [9, 10] and rigorously studied for the sine process in [15] and for the Airy point process in [14]. For \(m \ge 1\), \(F(\vec {x};\vec {s})\) is the probability of observing a gap on \((x_{m},+\infty )\) in the piecewise constant thinned Airy point process, where each particle on \((x_{j},x_{j-1})\) is removed with probability \(s_{j}\) (see [18] for more details on a similar situation). It was shown recently that the m-point determinants \(F(\vec {x};\vec {s})\) for \(m>1\) can be expressed in terms of solutions to systems of coupled Painlevé II equations [19, 44], which are special cases of integro-differential generalizations of the Painlevé II equation connected to the KPZ equation [2, 20]. We refer the reader to [19] for an overview of other probabilistic quantities that can be expressed in terms of \(F(\vec {x};\vec {s})\) with \(m>1\).

Large gap asymptotics Since \(F(\vec {x};\vec {s})\) is a transcendental function, it is natural to try to approximate it for large values of the components of \(\vec {x}\). Generally speaking, the asymptotics as components of \(\vec {x}\) tend to \(+\infty \) are relatively easy to understand and can be deduced directly from asymptotics for the kernel, but the asymptotics as components of \(\vec {x}\) tend to \(-\infty \) are much more challenging. The problem of finding such large gap asymptotics for universal random matrix distributions has a rich history; see e.g. [35] and [26] for an overview. In general, it is particularly challenging to compute explicitly the multiplicative constant arising in large gap expansions. In the case \(m=1\) with \(s=0\), it was proved in [4, 23] that

$$\begin{aligned} F(x;0)=2^{\frac{1}{24}}e^{\zeta '(-1)}|x|^{-\frac{1}{8}} e^{-\frac{|x|^3}{12}}(1+o(1)),\quad \text{ as }\ x\rightarrow -\infty , \end{aligned}$$
(1.4)

where \(\zeta '\) denotes the derivative of the Riemann zeta function. Tracy and Widom had already obtained this expansion in [43], but without rigorously proving the value \(2^{\frac{1}{24}}e^{\zeta '(-1)}\) of the multiplicative constant. For \(m=1\) with \(s> 0\), it is notationally convenient to write \(s=e^{-2\pi i\beta }\) with \(\beta \in i\mathbb {R}\), and it was proved only recently by Bothner and Buckingham [14] that

$$\begin{aligned} F(x;s= & {} e^{-2\pi i\beta })=G(1+\beta )G(1-\beta )e^{-\frac{3}{2} \beta ^2\log |4x|}e^{-\frac{4i\beta }{3}|x|^{3/2}}(1+o(1)),\quad \nonumber \\&\text{ as }\ x\rightarrow -\infty , \end{aligned}$$
(1.5)

where G is Barnes’ G-function, confirming a conjecture from [8]. The error term in (1.5) is uniform for \(\beta \) in compact subsets of the imaginary line.
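For reference, both multiplicative constants are easily evaluated numerically; the following sketch (ours, assuming mpmath is available) computes \(2^{\frac{1}{24}}e^{\zeta '(-1)}\) and the Barnes prefactor \(G(1+\beta )G(1-\beta )\) at a sample point on the imaginary line.

```python
# Sketch: numerical values of the constants in (1.4) and (1.5),
# using mpmath for zeta'(-1) and the Barnes G-function.
from mpmath import mp, mpf, exp, zeta, barnesg

mp.dps = 25
c = mpf(2)**(mpf(1)/24) * exp(zeta(-1, derivative=1))
print(c)                                       # ~ 0.8724..., the constant in (1.4)

beta = 0.5j                                    # sample point on the imaginary line
print(barnesg(1 + beta) * barnesg(1 - beta))   # Barnes prefactor in (1.5), real > 0
```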

We generalize these asymptotics to general values of m, for \(s_2,\ldots , s_m\in (0,1]\), and \(s_1\in [0,1]\), and show that they exhibit an elegant multiplicative structure. To see this, we need to make a change of variables \(\vec {s}\mapsto \vec {\beta }\), by defining \(\beta _j\in i\mathbb {R}\) as follows. If \(s_1>0\), we define \(\vec {\beta }=(\beta _1,\ldots , \beta _m)\) by

$$\begin{aligned} e^{-2\pi i\beta _j}={\left\{ \begin{array}{ll} \frac{s_{j}}{s_{j+1}} &{}\text{ for }\ j=1,\ldots , m-1,\\ s_m &{}\text{ for }\ j=m, \end{array}\right. } \end{aligned}$$
(1.6)

and if \(s_1=0\), we define \(\vec {\beta }_0=(\beta _2,\ldots , \beta _m)\) with \(\beta _2,\ldots , \beta _m\) again defined by (1.6). We then denote, if \(s_1>0\),

$$\begin{aligned} E(\vec {x};\vec {\beta }):=\mathbb {E}\left( \prod _{j = 1}^m e^{-2\pi i\beta _j N_{(x_j,+\infty )}}\right) =F(\vec {x};\vec {s}), \end{aligned}$$
(1.7)

and if \(s_1=0\),

$$\begin{aligned} E_0(\vec {x};\vec {\beta }_0):= \mathbb {E}'\left( \prod _{j = 2}^m e^{-2\pi i\beta _j N_{(x_j, x_1)}}\right) =\frac{F(\vec {x};\vec {s})}{F(x_1;0)}, \end{aligned}$$
(1.8)

where \(\mathbb {E}'\) denotes the expectation associated to the law of the particles \(\lambda _1\ge \lambda _2\ge \cdots \) conditioned on the event \(\lambda _1 \le x_1\).
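The change of variables (1.6) is easily inverted: \(s_j=e^{-2\pi i\sum _{k=j}^m\beta _k}\). A small sketch (our own notation) implementing both directions:

```python
# Sketch of the change of variables (1.6) and its inverse (our notation).
import numpy as np

def s_to_beta(s):
    """s = (s_1,...,s_m), s_j in (0,1]; returns beta_1,...,beta_m in iR."""
    s = np.asarray(s, dtype=complex)
    ratios = np.append(s[:-1] / s[1:], s[-1])     # s_j/s_{j+1}, ..., then s_m
    return np.log(ratios) / (-2j * np.pi)

def beta_to_s(beta):
    """Inverse map: s_j = exp(-2 pi i sum_{k=j}^m beta_k)."""
    tail = np.cumsum(np.asarray(beta)[::-1])[::-1]
    return np.exp(-2j * np.pi * tail)

s = [0.2, 0.5, 0.8]
print(s_to_beta(s))               # purely imaginary for s_j in (0,1]
print(beta_to_s(s_to_beta(s)))    # recovers s (up to rounding)
```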

Main result for \(s_1>0\). We express the asymptotics for the m-point determinant \(E(\vec {x};\vec {\beta })\) in two different but equivalent ways. First, we write them as the product of the determinants \(E(x_j;\beta _j)\) with only one singularity each (for which asymptotics are given in (1.5)), multiplied by an explicit pre-factor which is bounded in the relevant limit. Second, we write them in a more explicit manner.

Theorem 1.1

Let \(m\in \mathbb {N}_{>0}\), and let \(\vec {x}=(x_1,\ldots , x_m)\) be of the form \(\vec {x}=r\vec {\tau }\) with \(\vec {\tau }=(\tau _1,\ldots , \tau _m)\) and \(0>\tau _1>\tau _2>\cdots >\tau _m\). For any \(\beta _1,\ldots , \beta _m\in i\mathbb {R}\), we have the asymptotics

$$\begin{aligned} E(\vec {x};\vec {\beta })=e^{-4 \pi ^{2}\sum _{1\le k<j \le m}\beta _j\beta _k \Sigma (\tau _k,\tau _j)} \prod _{j=1}^m E(x_j;\beta _j)\ \Big (1+{\mathcal {O}}\Big ( \frac{\log r}{r^{3/2}} \Big )\Big ), \end{aligned}$$
(1.9)

as \(r\rightarrow +\infty \), where \(\Sigma (\tau _k,\tau _j)\) is given by

$$\begin{aligned} \Sigma (\tau _k,\tau _j)=\frac{1}{2\pi ^{2}}\log \frac{\left( |\tau _k|^{\frac{1}{2}}+|\tau _j|^{\frac{1}{2}}\right) ^2}{\tau _k-\tau _j}. \end{aligned}$$
(1.10)

The error term is uniformly small for \(\beta _1,\beta _2,\ldots , \beta _m\) in compact subsets of \(i\mathbb {R}\), and for \(\tau _1,\ldots , \tau _m\) such that \(\tau _1<-\delta \) and \(\min _{1\le k\le m-1}\{\tau _k-\tau _{k+1}\}>\delta \) for some \(\delta >0\). Equivalently,

$$\begin{aligned} E(\vec {x};\vec {\beta })= & {} \exp \left( -2\pi i\sum _{j=1}^m\beta _j\mu (x_j) -2\pi ^2\sum _{j=1}^m\beta _j^2\sigma ^2(x_j)-4\pi ^{2} \sum _{1\le k<j\le m}\beta _j \beta _k \Sigma (\tau _k,\tau _j)\right. \nonumber \\&\left. +\,\sum _{j=1}^m\log G(1+\beta _{j})G(1-\beta _{j}) + {\mathcal {O}}\Big (\frac{\log r}{r^{3/2}} \Big ) \right) , \end{aligned}$$
(1.11)

as \(r\rightarrow +\infty \), with

$$\begin{aligned} \mu (x):= \frac{2}{3\pi }|x|^{3/2}\quad \text{ and } \quad \sigma ^2(x) := \frac{3}{4\pi ^2}\log |4x|. \end{aligned}$$
(1.12)
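All quantities in (1.11) are explicit, so the asymptotic prediction is directly computable. A sketch (ours; it simply drops the \({\mathcal {O}}(\log r/r^{3/2})\) term, uses mpmath's barnesg for the Barnes G-function, and exploits the scale invariance \(\Sigma (r\tau _k,r\tau _j)=\Sigma (\tau _k,\tau _j)\) noted in Remark 1 below):

```python
# Sketch: the explicit expansion (1.11) with the error term dropped.
# Inputs: x decreasing and negative, beta purely imaginary.
from mpmath import barnesg, log, mpc, mpf, pi

def Sigma(a, b):
    """(1.10); scale invariant, so the x_j may be used instead of the tau_j."""
    return log((abs(a)**mpf(0.5) + abs(b)**mpf(0.5))**2 / (a - b)) / (2 * pi**2)

def log_E_prediction(x, beta):
    total = mpc(0)
    for xj, bj in zip(x, beta):
        mu   = 2 * abs(xj)**mpf(1.5) / (3 * pi)        # mu(x) of (1.12)
        sig2 = 3 * log(abs(4 * xj)) / (4 * pi**2)      # sigma^2(x) of (1.12)
        total += -2j * pi * bj * mu - 2 * pi**2 * bj**2 * sig2
        total += log(barnesg(1 + bj) * barnesg(1 - bj))
    for k in range(len(x)):                            # cross terms, 1 <= k < j <= m
        for j in range(k + 1, len(x)):
            total += -4 * pi**2 * beta[j] * beta[k] * Sigma(x[k], x[j])
    return total

print(log_E_prediction([-20.0, -40.0], [0.3j, -0.2j]))
```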

Remark 1

We observe that \(\Sigma (r\tau _k,r\tau _j)=\Sigma (\tau _k,\tau _j)\), hence we could also write \(\Sigma (x_k,x_j)\) in (1.9) such that the right hand side would only involve \(\vec {x}\). We prefer to write \(\Sigma (\tau _k,\tau _j)\) to emphasize that it does not depend on the large parameter r.

Remark 2

The above asymptotics have similarities with the asymptotics for Hankel determinants with m Fisher–Hartwig singularities studied in [17]. This is quite natural, since the Fredholm determinants \(E(\vec {x};\vec {\beta })\) and \(E_0(\vec {x};\vec {\beta }_0)\) can be obtained as scaling limits of such Hankel determinants. However, the asymptotics from [17] were not proved in such scaling limits and cannot be used directly to prove Theorem 1.1. An alternative approach to prove Theorem 1.1 could consist of extending the results from [17] to the relevant scaling limits. This was in fact the approach used in [23] to prove (1.4) in the case \(m=1\), but it is not at all obvious how to generalize this method to general m. Instead, we develop a more direct method to prove Theorem 1.1 which uses differential identities for the Fredholm determinants \(F(\vec {x};\vec {s})\) with respect to the parameter \(s_m\) together with the known asymptotics for \(m=1\). Our approach also allows us to compute the r-independent prefactor \(e^{-4\pi ^{2}\sum _{1\le k<j\le m}\beta _j\beta _k \Sigma (\tau _k,\tau _j)}\) in a direct way.

Average, variance, and covariance in the Airy point process Let us give a more probabilistic interpretation to this result. For \(m=1\), we recall that \(E(x;\beta )=\mathbb {E} e^{-2\pi i\beta N_{(x,+\,\infty )}}\), and we note that, as \(\beta \rightarrow 0\),

$$\begin{aligned} \mathbb {E} e^{-2\pi i\beta N_{(x,+\infty )}}=1-2\pi i\beta \mathbb {E} N_{(x,+\infty )} -2\pi ^2\beta ^2 \mathbb {E} N_{(x,+\infty )}^2 +\mathcal {O}(\beta ^3). \end{aligned}$$

Comparing this to the small \(\beta \) expansion of the right-hand side of (1.11), we see that the average and variance of \(N_{(x,+\infty )}\) behave as \(x\rightarrow -\infty \) like \(\mu (x)\) and \(\sigma ^2(x)\). More precisely, by expanding the Barnes G-functions (see [38, formula 5.17.3]), we obtain

$$\begin{aligned}&\mathbb {E} N_{(x,+\infty )}=\frac{2}{3\pi }|x|^{3/2} +{\mathcal {O}}\Big ( \frac{\log |x|}{|x|^{3/2}} \Big ) ,\quad \\&\mathrm{Var}N_{(x,+\infty )}=\frac{3}{4\pi ^2} \log |4x|+\frac{1+\gamma _E}{2\pi ^2} +{\mathcal {O}}\Big (\frac{\log |x|}{|x|^{3/2}} \Big ) , \end{aligned}$$

where \(\gamma _E\) is Euler’s constant, and asymptotics for higher order moments can be obtained similarly. At least the leading order terms in the above are in fact well-known, see e.g. [6, 28, 41]. For \(m=2\), (1.9) implies that

$$\begin{aligned} \lim _{r\rightarrow \infty }\frac{\mathbb {E} e^{-2\pi i\beta N_{(x_1,+\infty )}}e^{-2\pi i\beta N_{(x_2,+\infty )}}}{\mathbb {E} e^{-2\pi i\beta N_{(x_1,+\infty )}}\ \mathbb {E} e^{-2\pi i\beta N_{(x_2,+\infty )}}}=e^{-4\pi ^{2}\beta ^2 \Sigma (\tau _1,\tau _2)}. \end{aligned}$$

If we expand the above for small \(\beta \) (note that our result holds uniformly for small \(\beta \in i\mathbb {R}\)), we recover the logarithmic covariance structure of the process \(N_{(x,+\infty )}\) (see e.g. [11, 12, 34]): the covariance of \(N_{(x_1,+\infty )}\) and \(N_{(x_2,+\infty )}\) converges as \(r\rightarrow \infty \) to \(\Sigma (\tau _1,\tau _2)\). Note in particular that \(\Sigma (\tau _1,\tau _2)\) blows up like a logarithm as \(\tau _1-\tau _2\rightarrow 0\), and that such log-correlations are common for processes arising in random matrix theory and related fields. We also infer that, given \(0>\tau _1>\tau _2\),

$$\begin{aligned} \mathrm{Var}N_{(r\tau _2,r\tau _1)}= & {} \mathrm{Var}N_{(r\tau _1,\infty )} +\mathrm{Var}N_{(r\tau _2,\infty )}-2\mathrm{Cov}\left( N_{(r\tau _1,\infty )}, N_{(r\tau _2,\infty )}\right) \\= & {} \frac{3}{2\pi ^2}\log r +\frac{3}{4\pi ^2}\log |16\tau _1\tau _2| +\frac{1+\gamma _E}{\pi ^2}-2\Sigma (\tau _1,\tau _2) +{\mathcal {O}}\Big ( \frac{\log r}{r^{3/2}} \Big ) \end{aligned}$$

as \(r\rightarrow +\infty \).
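The constant \(\frac{1+\gamma _E}{2\pi ^2}\) appearing in the variance above comes from the coefficient \(-(1+\gamma _E)\) of \(\beta ^2\) in the expansion of \(\log G(1+\beta )G(1-\beta )\), which can be checked numerically (a sketch with an ad hoc step size h):

```python
# Sketch: the beta^2 coefficient of log[G(1+beta)G(1-beta)] is -(1+gamma_E),
# which gives the constant (1+gamma_E)/(2 pi^2) in Var N_{(x,+inf)} above.
from mpmath import mp, mpf, barnesg, log, euler, pi

mp.dps = 30
h = mpf(1) / 1000
print(log(barnesg(1 + h) * barnesg(1 - h)) / h**2)   # ~ -(1+gamma_E) + O(h^2)
print(-(1 + euler))
print((1 + euler) / (2 * pi**2))                     # ~ 0.0799
```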

We also mention that asymptotics for the first and second exponential moments \(\mathbb {E} e^{-2\pi i\beta N_{(x,+\infty )}}\) and \(\mathbb {E} e^{-2\pi i\beta N_{(x_1,+\infty )}-2\pi i\beta N_{(x_2,+\infty )}}\) of counting functions are generally important in the theory of multiplicative chaos, see e.g. [3, 7, 37]: they allow one to give a precise meaning to limits of random measures like \(\frac{e^{-2\pi i\beta N_{(x,+\infty )}}}{\mathbb {E} e^{-2\pi i\beta N_{(x,+\infty )}}}\), and they provide efficient tools for obtaining global rigidity estimates and statistics of extreme values of the counting function.

Main result for \(s_1=0\). The asymptotics for the determinants \(F(\vec {x};\vec {s})\) if one or more of the parameters \(s_j\) vanish are more complicated. If \(s_j=0\) for some \(j>1\), we expect asymptotics involving elliptic \(\theta \)-functions in analogy to [14], but we do not investigate this situation here. The case where the parameter \(s_1\) associated to the rightmost interval \((x_1,+\infty )\) vanishes is somewhat simpler, and we obtain asymptotics for \(E_0(\vec {x};\vec {\beta }_0)=F(\vec {x};\vec {s})/F(x_{1};0)\) in this case. We first express the asymptotics for \(E_0(\vec {x};\vec {\beta }_0)\) in terms of a Fredholm determinant of the form \(E(\vec {y};\vec {\beta }_0)\) with \(m-1\) jump discontinuities, for which asymptotics are given in Theorem 1.1. Second, we give an explicit asymptotic expansion for \(E_0(\vec {x};\vec {\beta }_0)\).

Theorem 1.2

Let \(m\in \mathbb {N}_{>0}\), let \(\vec {x}=(x_1,\ldots , x_m)\) be of the form \(\vec {x}=r\vec {\tau }\) with \(\vec {\tau }=(\tau _1,\ldots , \tau _m)\) and \(0>\tau _1>\tau _2>\cdots >\tau _m\), and define \(\vec {y}=(y_2,\ldots , y_m)\) by \(y_j=x_j-x_1\). For any \(\beta _2,\ldots , \beta _m\in i\mathbb {R}\), we have as \(r\rightarrow +\infty \),

$$\begin{aligned} E_0(\vec {x};\vec {\beta }_0)=E(\vec {y};\vec {\beta }_0) \prod _{j=2}^m\left[ \left( \frac{2(x_1-x_j)}{x_1-2x_j}\right) ^{\beta _j^2} e^{-2i \beta _j |x_1|\, |x_1-x_j|^{1/2}}\right] \ \Big (1+{\mathcal {O}}\Big ( \frac{\log r}{r^{3/2}} \Big )\Big ).\nonumber \\ \end{aligned}$$
(1.13)

The error term is uniformly small for \(\beta _2,\ldots , \beta _m\) in compact subsets of \(i\mathbb {R}\), and for \(\tau _1,\ldots , \tau _m\) such that \(\tau _1<-\delta \) and \(\min _{1\le k\le m-1}\{\tau _k-\tau _{k+1}\}>\delta \) for some \(\delta >0\).

Equivalently,

$$\begin{aligned} E_0(\vec {x};\vec {\beta }_0)= & {} \exp \left( -2\pi i\sum _{j=2}^m\beta _j\mu _0(x_j) -2\pi ^2\sum _{j=2}^m\beta _j^2\sigma _0^2(x_j) \right. \nonumber \\&-4\pi ^{2} \sum _{2\le k<j\le m} \beta _j\beta _k \Sigma _0(\tau _k,\tau _j) \nonumber \\&\left. +\sum _{j=2}^m\log G(1+\beta _{j})G(1-\beta _{j}) + {\mathcal {O}}\Big ( \frac{\log r}{r^{3/2}} \Big ) \right) , \end{aligned}$$
(1.14)

as \(r\rightarrow +\infty \), with

$$\begin{aligned} \mu _0(x)&:= \frac{2}{3\pi }|x_1-x|^{3/2}+\frac{|x_1|}{\pi }|x_1-x|^{1/2} =\mu (x-x_1)+\frac{|x_1|}{\pi }|x_1-x|^{1/2},\\ \sigma _0^2(x)&:= \frac{1}{2\pi ^2}\log \left( 8|x_1-x|^{3/2}+4|x_1| \,|x_1-x|^{1/2}\right) \\&=\sigma ^2(x-x_1)-\frac{1}{2\pi ^2} \log \frac{2(x_1-x)}{x_1-2x},\\ \Sigma _0(\tau _k,\tau _j)&:=\frac{1}{2\pi ^{2}} \log \frac{\left( |\tau _k-\tau _1|^{\frac{1}{2}} +|\tau _j-\tau _1|^{\frac{1}{2}}\right) ^2}{\tau _k-\tau _j}=\Sigma (\tau _k-\tau _1,\tau _j-\tau _1). \end{aligned}$$
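The second equalities in the definitions of \(\mu _0\), \(\sigma _0^2\) and \(\Sigma _0\) are elementary algebraic identities. The one for \(\sigma _0^2\) is the least obvious; it can be checked numerically at arbitrary test values \(x<x_1<0\) (a sketch):

```python
# Sketch: numerical check of the identity for sigma_0^2 stated above,
# at arbitrary test values x < x1 < 0.
import numpy as np

x1, x = -3.0, -7.0
sig2 = lambda y: 3 * np.log(abs(4 * y)) / (4 * np.pi**2)           # (1.12)
lhs = np.log(8 * abs(x1 - x)**1.5
             + 4 * abs(x1) * abs(x1 - x)**0.5) / (2 * np.pi**2)    # sigma_0^2(x)
rhs = sig2(x - x1) - np.log(2 * (x1 - x) / (x1 - 2 * x)) / (2 * np.pi**2)
print(lhs - rhs)   # ~ 0 up to rounding
```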

Remark 3

We can again give a probabilistic interpretation to this result. In a similar way as explained in the case \(s_1>0\), we can expand the above result for \(m=2\) as \(\beta _2\rightarrow 0\) to conclude that the mean and variance of the random counting function \(N'_{(x_2,x_1)}\), conditioned on the event \(\lambda _1\le x_1\), behave, in the asymptotic scaling of Theorem 1.2, like \(\mu _0(x_2)\) and \(\sigma _0^2(x_2)\). Doing the same for \(m=3\) implies that the covariance of \(N_{(x_2,x_1)}'\) and \(N_{(x_3,x_1)}'\) converges to \(\Sigma _0(\tau _2, \tau _3)\).

Remark 4

Another probabilistic interpretation can be given through the thinned Airy point process, which is obtained by removing each particle in the Airy point process independently with probability \(s=e^{-2\pi i\beta }\), \(s\in (0,1)\). We write \(\mu _1^{(s)}\) for the maximal particle in this thinned process. It is natural to ask what information a thinned configuration gives about the parent configuration. For instance, suppose that we know that \(\mu _1^{(s)}\) is smaller than a certain value \(x_2\); what then is the probability that the largest overall particle \(\lambda _1=\mu _1^{(0)}\) is smaller than \(x_1\)? For \(x_1>x_2\), the joint probability of the events \(\mu _1^{(s)}<x_2\) and \(\lambda _1<x_1\) is given by (see [19, Section 2])

$$\begin{aligned} \mathbb {P}\left( \mu _1^{(s)}<x_2 \text{ and } \lambda _1<x_1\right) =F(x_1,x_2;0,s)=E_0((x_1,x_2);\beta )F(x_{1};0). \end{aligned}$$

If we set \(0>x_1=r\tau _1>x_2=r\tau _2\) and let \(r\rightarrow +\infty \), Theorem 1.2 implies that

$$\begin{aligned}&\mathbb {P}\left( \mu _1^{(s)}<x_2 \text{ and } \lambda _1<x_1\right) \\&\quad =F(x_{1};0)E(x_2-x_1;\beta ) \left( \frac{x_1-2x_{2}}{2(x_1-x_{2})}\right) ^{-\beta ^2} e^{-2i\beta |x_1|\, |x_1-x_2|^{1/2}} \Big (1+{\mathcal {O}}\Big ( \frac{\log r}{r^{3/2}} \Big )\Big ), \end{aligned}$$

or equivalently,

$$\begin{aligned} \mathbb {P}\left( \mu _1^{(s)}<x_2 \text{ and } \lambda _1<x_1\right)= & {} \mathbb {P}(\lambda _1<x_1)\mathbb {P}(\mu _1^{(s)}<x_2-x_1)\\&\times \left( \frac{x_1-2x_2}{2(x_1-x_2)}\right) ^{-\beta ^2} e^{-2i\beta |x_1|\, |x_1-x_2|^{1/2}} \Big (1+{\mathcal {O}}\Big ( \frac{\log r}{r^{3/2}} \Big )\Big ). \end{aligned}$$

This describes the tail behavior of the joint distribution of the largest particle of the Airy point process and the largest particle of the associated thinned process.

Outline In Sect. 2, we will derive a suitable differential identity, which expresses the logarithmic partial derivative of \(F(\vec {x};\vec {s})\) with respect to \(s_m\) in terms of a Riemann–Hilbert (RH) problem. In Sect. 3, we will perform an asymptotic analysis of the RH problem to obtain asymptotics for the differential identity as \(r\rightarrow +\infty \) in the case where \(s_1= 0\). This will allow us to integrate the differential identity asymptotically and to prove Theorem 1.2 in Sect. 4. In Sects. 5 and 6, we carry out a similar analysis, now in the case \(s_1>0\), to prove Theorem 1.1.

2 Differential Identity for F

Deformation theory of Fredholm determinants In this section, we will obtain an identity for the logarithmic derivative of \(F(\vec {x};\vec {s})\) with respect to \(s_m\), which will be the starting point of our proofs of Theorems 1.1 and 1.2. To do this, we follow a general procedure known as the Its–Izergin–Korepin–Slavnov method [30], which applies to integral operators of integrable type: this means that the kernel of the operator can be written in the form \(K(x,y)=\frac{f^T(x)g(y)}{x-y}\), where f(x) and g(y) are column vectors satisfying \(f^T(x)g(x)=0\). The operator \(\mathcal {K}_{\vec {x},\vec {s}}\) defined by

$$\begin{aligned} \mathcal {K}_{\vec {x},\vec {s}}f(x)=\chi _{(x_m, +\infty )}(x) \sum _{j = 1}^m (1 - s_j)\int _{x_j}^{x_{j-1}}K^{\mathrm{Ai}}(x,y)f(y)dy \end{aligned}$$
(2.1)

is of this type, since we can take

$$\begin{aligned} f(x)=\begin{pmatrix} \mathrm{Ai}(x)\chi _{(x_m, +\infty )}(x)\\ \mathrm{Ai}'(x)\chi _{(x_m, +\infty )}(x) \end{pmatrix}, \quad g(y)=\begin{pmatrix} \sum _{j = 1}^m (1 - s_j)\mathrm{Ai}'(y)\chi _{(x_j, x_{j-1})}(y)\\ -\sum _{j = 1}^m (1 - s_j)\mathrm{Ai}(y)\chi _{(x_j, x_{j-1})}(y) \end{pmatrix}. \end{aligned}$$

Using the general theory of integral kernel operators, if \(s_m\ne 1\), we have

$$\begin{aligned} \partial _{s_m}\log \det \left( 1-\mathcal {K}_{\vec {x},\vec {s}}\right)&=-\text {Tr}\left( (1-\mathcal {K}_{\vec {x},\vec {s}})^{-1}\partial _{s_m} \mathcal {K}_{\vec {x},\vec {s}}\right) \\&=\frac{1}{1-s_m}\text {Tr}\left( (1-\mathcal {K}_{\vec {x},\vec {s}})^{-1} \mathcal {K}_{\vec {x},\vec {s}}\chi _{(x_m,x_{m-1})}\right) \\&=\frac{1}{1-s_m}\text {Tr}\left( \mathcal {R}_{\vec {x},\vec {s}} \chi _{(x_m,x_{m-1})}\right) = \frac{1}{1-s_m}\int _{x_m}^{x_{m-1}} R_{\vec {x},\vec {s}}(\xi ,\xi )d\xi , \end{aligned}$$

where \(\mathcal {R}_{\vec {x},\vec {s}}\) is the resolvent operator defined by

$$\begin{aligned} 1+\mathcal {R}_{\vec {x},\vec {s}}=\left( 1-\mathcal {K}_{\vec {x}, \vec {s}}\right) ^{-1}, \end{aligned}$$

and where \(R_{\vec {x},\vec {s}}\) is the associated kernel. Using the Its–Izergin–Korepin–Slavnov method, it was shown in [19, proof of Proposition 1] that the resolvent kernel \(R_{\vec {x},\vec {s}}(\xi ,\xi )\) can be expressed in terms of a RH problem. For \(\xi \in (x_m,x_{m-1})\), we have

$$\begin{aligned} R_{\vec {x},\vec {s}}(\xi ,\xi )=\frac{1-s_m}{2\pi i}\left( \Psi _+^{-1}\Psi _+'\right) _{21}(\zeta =\xi -x_m;x = x_{m},\vec {y},\vec {s}), \end{aligned}$$
(2.2)

where \(\Psi (\zeta )\) is the solution, depending on parameters \(x, \vec {y}=(y_1,\ldots , y_{m-1}), \vec {s}=(s_1,\ldots , s_{m})\), to the following RH problem. The relevant values of the components \(y_j\) of \(\vec {y}\) are given as \(y_j=x_j-x_m > 0\) for all \(j=1,\ldots , m-1\), and the relevant value of x is \(x=x_m\).
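Before stating the RH problem, we note that the trace identity above can be tested numerically on a quadrature discretization of \(\mathcal {K}_{\vec {x},\vec {s}}\). For \(m=1\) it reads \(\partial _{s}\log \det (1-\mathcal {K})=\frac{1}{1-s}\text {Tr}\big ((1-\mathcal {K})^{-1}\mathcal {K}\big )\); a sketch (with our own ad hoc discretization choices):

```python
# Sketch: check the trace identity on the Nystrom discretization for m=1,
# comparing a finite difference of log det with Tr[(1-K)^{-1} K]/(1-s).
import numpy as np
from scipy.special import airy

def disc(x, s, n=150, xmax=8.0):
    """Discretized kernel matrix of (1-s) K^Ai on (x, xmax)."""
    t, w = np.polynomial.legendre.leggauss(n)
    u = 0.5 * (xmax - x) * t + 0.5 * (xmax + x)
    w = 0.5 * (xmax - x) * w
    Ai, Aip, _, _ = airy(u)
    U, V = np.meshgrid(u, u, indexing="ij")
    with np.errstate(divide="ignore", invalid="ignore"):
        K = (np.outer(Ai, Aip) - np.outer(Aip, Ai)) / (U - V)
    np.fill_diagonal(K, Aip**2 - u * Ai**2)
    return (1 - s) * K * w[np.newaxis, :]

x, s, h, n = -2.0, 0.4, 1e-6, 150
fd = (np.linalg.slogdet(np.eye(n) - disc(x, s + h))[1]
      - np.linalg.slogdet(np.eye(n) - disc(x, s - h))[1]) / (2 * h)
M = disc(x, s)
tr = np.trace(np.linalg.solve(np.eye(n) - M, M)) / (1 - s)
print(fd, tr)    # should agree to several digits
```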

Fig. 1 Jump contours for the model RH problem for \(\Psi \) with \(m=3\)

2.1 RH problem for \(\Psi \)

(a)

    \(\Psi : \mathbb {C}\backslash \Gamma \rightarrow \mathbb {C}^{2\times 2}\) is analytic, with

    $$\begin{aligned} \Gamma =\mathbb {R}\cup e^{\pm \frac{2\pi i}{3}} (0,+\infty ) \end{aligned}$$
    (2.3)

    and \(\Gamma \) oriented as in Fig. 1.

(b)

    \(\Psi (\zeta )\) has continuous boundary values as \(\zeta \in \Gamma \backslash \{y_1,\ldots , y_m\}\) is approached from the left (\(+\) side) or from the right (− side) and they are related by

    $$\begin{aligned} \left\{ \begin{array}{ll} \Psi _+(\zeta ) = \Psi _-(\zeta ) \begin{pmatrix} 1 &{}\quad 0 \\ 1 &{}\quad 1 \end{pmatrix} &{}\quad \text {for } \zeta \in e^{\pm \frac{2\pi i}{3}} (0,+\infty ), \\ \Psi _+(\zeta ) = \Psi _-(\zeta ) \begin{pmatrix} 0 &{}\quad 1 \\ -1 &{}\quad 0 \end{pmatrix} &{}\quad \text {for } \zeta \in (-\infty ,0), \\ \Psi _+(\zeta ) = \Psi _-(\zeta ) \begin{pmatrix} 1 &{}\quad s_j \\ 0 &{}\quad 1 \end{pmatrix}&\quad \text {for } \zeta \in (y_{j}, y_{j-1}), j=1,\ldots , m, \end{array}\right. \end{aligned}$$

    where we write \(y_m=0\) and \(y_{0} = + \infty \).

(c)

    As \(\zeta \rightarrow \infty \), there exist matrices \(\Psi _1,\Psi _2\) depending on \(x,\vec {y},\vec {s}\) but not on \(\zeta \) such that \(\Psi \) has the asymptotic behavior

    $$\begin{aligned} \Psi (\zeta ) = \left( I + \Psi _1\zeta ^{-1}+\Psi _2\zeta ^{-2} +{\mathcal {O}}(\zeta ^{-3})\right) \zeta ^{\frac{1}{4} \sigma _3} M^{-1} e^{-(\frac{2}{3}\zeta ^{3/2} + x\zeta ^{1/2}) \sigma _3}, \end{aligned}$$
    (2.4)

    where \(M = (I + i \sigma _1) / \sqrt{2}\), \(\sigma _1 = \begin{pmatrix} 0 &{}\quad 1 \\ 1 &{}\quad 0 \end{pmatrix}\) and \(\sigma _3 = \begin{pmatrix} 1 &{}\quad 0 \\ 0 &{}\quad -1 \end{pmatrix}\), and where principal branches of \(\zeta ^{3/2}\) and \(\zeta ^{1/2}\) are taken.

(d)

    \(\Psi (\zeta ) = \mathcal {O}( \log (\zeta -y_j) )\) as \(\zeta \rightarrow y_j\), \(j = 1, \ldots , m\).

We can conclude from this result that

$$\begin{aligned} \partial _{s_m}\log \det \left( 1-\mathcal {K}_{\vec {x},\vec {s}}\right) =\frac{1}{2\pi i}\int _{x_m}^{x_{m-1}}\left( \Psi _+^{-1}\Psi _+'\right) _{21} (\zeta =\xi -x_m;x=x_m,\vec {y},\vec {s})d\xi .\nonumber \\ \end{aligned}$$
(2.5)

From here on, we could try to obtain asymptotics for \(\Psi \) with \(\vec {y}\) replaced by \(r\vec {y}\) as \(r\rightarrow +\infty \). However, we can simplify the right-hand side of the above identity and evaluate the integral explicitly. To do this, we follow ideas similar to those of [14, Section 3].

Lax pair identities We know from [19, Section 3] that \(\Psi \) satisfies a Lax pair. More precisely, if we define

$$\begin{aligned} \Phi (\zeta ;x) = e^{\frac{1}{4} \pi i \sigma _3}\begin{pmatrix} 1 &{} -\Psi _{1,21} \\ 0 &{} 1 \end{pmatrix} \Psi (\zeta ;x), \end{aligned}$$
(2.6)

then we have the differential equation

$$\begin{aligned} \partial _{\zeta }\Phi (\zeta ;x) = A(\zeta ;x)\Phi (\zeta ;x), \end{aligned}$$

where A is traceless and takes the form

$$\begin{aligned} A(\zeta ;x) = \zeta \sigma _+ + \begin{pmatrix} 0 &{} -i\partial _{x}\Psi _{1,21}+\frac{x}{2} \\ 1 &{} 0 \end{pmatrix} + \sum _{j = 1}^m \frac{1}{\zeta -y_j} A_j(x), \end{aligned}$$
(2.7)

for some matrices \(A_j\) independent of \(\zeta \), and where \(\sigma _{+} = \begin{pmatrix} 0 &{}\quad 1 \\ 0 &{}\quad 0 \end{pmatrix}.\) Therefore, we have

$$\begin{aligned} \partial _{\zeta }\Psi (\zeta ;x) = \widehat{A}(\zeta ;x)\Psi (\zeta ;x), \end{aligned}$$

and we can use the relation \(-i \partial _{x} \Psi _{1,21} + \Psi _{1,21}^{2} = 2 \Psi _{1,11}\) (see [19, (3.20)]) to see that \(\widehat{A}\) takes the form

$$\begin{aligned} \widehat{A}(\zeta ;x)= & {} \displaystyle \begin{pmatrix} 1 &{}\quad \Psi _{1,21} \\ 0 &{}\quad 1 \end{pmatrix}e^{-\frac{\pi i}{4} \sigma _{3}}A(\zeta ;x)e^{\frac{\pi i}{4}\sigma _{3}}\begin{pmatrix} 1 &{}\quad -\Psi _{1,21} \\ 0 &{}\quad 1 \end{pmatrix} \nonumber \\= & {} \displaystyle i \begin{pmatrix} \Psi _{1,21} &{}\quad -\frac{x}{2}-\zeta -2\Psi _{1,11} \\ 1 &{}\quad -\Psi _{1,21} \end{pmatrix} + \sum _{j=1}^{m} \frac{\widehat{A}_{j}(x)}{\zeta -y_{j}}, \end{aligned}$$
(2.8)

where the matrices \(\widehat{A}_{j}(x)\) are independent of \(\zeta \) and have zero trace. It follows that

$$\begin{aligned} \left( \Psi ^{-1}\Psi '\right) _{21}= & {} \left( \Psi ^{-1} \widehat{A}\Psi \right) _{21}=\Psi _{11}^2\widehat{A}_{21} -\Psi _{21}^2\widehat{A}_{12}-2\Psi _{11}\Psi _{21}\widehat{A}_{11} \nonumber \\= & {} \displaystyle (\Psi \sigma _{+} \Psi ^{-1})_{12} \bigg ( i + \sum _{j=1}^{m} \frac{\widehat{A}_{j,21}}{\zeta -y_{j}} \bigg ) +2(\Psi \sigma _{+} \Psi ^{-1})_{11} \bigg ( i\Psi _{1,21} + \sum _{j=1}^{m} \frac{\widehat{A}_{j,11}}{\zeta -y_{j}} \bigg ) \nonumber \\&\displaystyle + (\Psi \sigma _{+} \Psi ^{-1})_{21} \bigg ( -i \Big ( \frac{x}{2}+\zeta +2\Psi _{1,11} \Big ) + \sum _{j=1}^{m} \frac{\widehat{A}_{j,12}}{\zeta -y_{j}} \bigg ). \end{aligned}$$
(2.9)
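The first equality in (2.9) only uses \(\det \Psi \equiv 1\) and the tracelessness of \(\widehat{A}\), and can be verified symbolically (a sketch assuming sympy is available):

```python
# Sketch: symbolic check of the first equality in (2.9), using only
# det Psi = 1 and tracelessness of A-hat.
import sympy as sp

p11, p12, p21, a11, a12, a21 = sp.symbols('p11 p12 p21 a11 a12 a21')
p22 = (1 + p12 * p21) / p11                    # enforces det Psi = 1
Psi = sp.Matrix([[p11, p12], [p21, p22]])
A = sp.Matrix([[a11, a12], [a21, -a11]])       # traceless
lhs = (Psi.inv() * A * Psi)[1, 0]
rhs = p11**2 * a21 - p21**2 * a12 - 2 * p11 * p21 * a11
print(sp.simplify(lhs - rhs))                  # prints 0
```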

Let us define \(\widehat{F}(\zeta )=\partial _{s_m}(\Psi (\zeta )) \ \Psi ^{-1}(\zeta )\). It follows from the RH problem for \(\Psi \) that \(\widehat{F}\) satisfies the following RH problem (recall that \(y_{m} = 0\)):

(a)

    \(\widehat{F}\) is analytic on \(\mathbb {C}{\setminus } [0,y_{m-1}]\).

(b)

    The jumps are given by

    $$\begin{aligned} \widehat{F}_{+}(\zeta ) = \widehat{F}_{-}(\zeta ) + \Psi _{-}(\zeta ) \sigma _{+} \Psi _{-}^{-1}(\zeta ), \qquad \zeta \in (0,y_{m-1}). \end{aligned}$$
    (2.10)
(c)

    As \(\zeta \rightarrow \zeta _{\star } \in \{0,y_{m-1}\}\), we have \(\widehat{F}(\zeta ) = {\mathcal {O}}\big (\log (\zeta - \zeta _{\star })\big )\).

    As \(\zeta \rightarrow \infty \), we have

    $$\begin{aligned} \widehat{F}(\zeta ) = \frac{\partial _{s_{m}}\Psi _{1}}{\zeta } +\frac{\partial _{s_{m}}\Psi _{2} - \partial _{s_{m}} (\Psi _{1}) \Psi _{1}}{\zeta ^{2}} + {\mathcal {O}}(\zeta ^{-3}). \end{aligned}$$
    (2.11)

Thus, by Cauchy’s formula, we have

$$\begin{aligned} \widehat{F}(\zeta ) = \frac{1}{2\pi i} \int _{0}^{y_{m-1}} \frac{\Psi _{-}(\xi )\sigma _{+}\Psi _{-}^{-1}(\xi )}{\xi -\zeta }d\xi . \end{aligned}$$
(2.12)

Expanding the right-hand side of (2.12) as \(\zeta \rightarrow \infty \) and comparing it with (2.11), we obtain the identities

$$\begin{aligned} -\frac{1}{2\pi i}\int _{0}^{y_{m-1}}\Psi _-(\xi )\sigma _+ \Psi _-(\xi )^{-1}d\xi&=\partial _{s_m}\Psi _1, \end{aligned}$$
(2.13)
$$\begin{aligned} -\frac{1}{2\pi i}\int _{0}^{y_{m-1}}\Psi _-(\xi ) \sigma _+\Psi _-(\xi )^{-1}\xi d\xi&= \partial _{s_m}\Psi _2 -\partial _{s_m}(\Psi _1)\ \Psi _1. \end{aligned}$$
(2.14)

Following again [19], see in particular formula (3.15) in that paper, we can express \(\Psi \) in a neighborhood of \(y_j\) as

$$\begin{aligned} \Psi (\zeta )=G_j(\zeta )\left( I+\frac{s_{j+1}-s_j}{2\pi i} \sigma _{+}\log (\zeta -y_j)\right) , \end{aligned}$$
(2.15)

for \(0<\arg (\zeta -y_j)<\frac{2\pi }{3}\) and with \(G_j\) analytic at \(y_j\). This implies that

$$\begin{aligned} \widehat{A}_j= & {} \frac{s_{j+1}-s_{j}}{2\pi i}G_{j}(y_{j}) \sigma _{+}G_{j}(y_{j})^{-1}\nonumber \\= & {} \frac{s_{j+1}-s_j}{2\pi i}\begin{pmatrix} -G_{j,11}(y_j)G_{j,21}(y_j)&{}\quad G_{j,11}^2(y_j) \\ -G_{j,21}^2(y_j)&{}\quad G_{j,11}(y_j)G_{j,21}(y_j) \end{pmatrix}, \end{aligned}$$
(2.16)

for \(j=1,\ldots , m\), where we denoted \(s_{m+1}=1\), and also that

$$\begin{aligned} \widehat{F}(y_j)= & {} (\partial _{s_m}G_j(y_j))\ G_j(y_j)^{-1}, \quad \text{ if }\ j\ne m, m-1, \end{aligned}$$
(2.17)
$$\begin{aligned} \widehat{F}(\zeta )= & {} (\partial _{s_m}G_m(y_m))\ G_m(y_m)^{-1} -\frac{\log (\zeta -y_m)}{1-s_m}\widehat{A}_m +o(1), \quad \text{ as }\ \zeta \rightarrow y_m, \qquad \qquad \end{aligned}$$
(2.18)
$$\begin{aligned} \widehat{F}(\zeta )= & {} (\partial _{s_m}G_{m-1}(y_{m-1})) \ G_{m-1}(y_{m-1})^{-1}\nonumber \\&+\frac{\log (\zeta -y_{m-1})}{s_{m}-s_{m-1}} \widehat{A}_{m-1}+o(1),\quad \text{ as }\ \zeta \rightarrow y_{m-1}. \end{aligned}$$
(2.19)

Using (2.13)–(2.14), (2.16) (in particular the fact that \(\det \widehat{A}_j=0\)) and (2.17)–(2.19) while substituting (2.9) into (2.5), we obtain

$$\begin{aligned}&\partial _{s_m}\log \det (1-\mathcal {K}_{\vec {x},\vec {s}}) =i\partial _{s_m}\left( \Psi _{2,21}-\Psi _{1,12}+\frac{x}{2}\Psi _{1,21}\right) \\&\quad +\,i \Psi _{1,11}\partial _{s_m}\Psi _{1,21} -i \Psi _{1,21}\partial _{s_m}\Psi _{1,11} \\&\quad +\,\sum _{j=1}^m\frac{s_{j+1}-s_j}{2\pi i} \left[ -\left[ (\partial _{s_m}G_j) G_j^{-1}\right] _{12}G_{j,21}^2 +\left[ (\partial _{s_m}G_j) G_j^{-1}\right] _{21} G_{j,11}^2 \right. \\&\left. \quad -\,2\left[ (\partial _{s_m}G_j) G_j^{-1}\right] _{11} G_{j,11}G_{j,21}\right] _{\zeta =y_j}. \end{aligned}$$

The above sum can be simplified using the fact that \(\det G_{j} \equiv 1\), and we finally get

$$\begin{aligned}&\partial _{s_m}\log \det (1-\mathcal {K}_{\vec {x},\vec {s}})\nonumber \\&\quad =i\partial _{s_m}\left( \Psi _{2,21}-\Psi _{1,12} +\frac{x}{2}\Psi _{1,21}\right) +i \Psi _{1,11}\partial _{s_m}\Psi _{1,21} -i \Psi _{1,21}\partial _{s_m}\Psi _{1,11} \nonumber \\&\qquad +\, \sum _{j=1}^m\frac{s_{j+1}-s_j}{2\pi i} [G_{j,11} \partial _{s_{m}} G_{j,21}-G_{j,21}\partial _{s_{m}}G_{j,11}]_{\zeta = y_{j}}, \end{aligned}$$
(2.20)

where \(s_{m+1}=1\). The only quantities appearing on the right-hand side are \(\Psi _1,\Psi _{2,21}\) and \(G_j\). In the next sections, we will derive asymptotics for these quantities as \(\vec {x}=r\vec {\tau }\) with \(r\rightarrow +\infty \).

3 Asymptotic Analysis of RH Problem for \(\Psi \) with \(s_{1}=0\)

We now scale our parameters by setting \(\vec {x}=r\vec {\tau }\), \(\vec {y}=r\vec {\eta }\), with \(\eta _j=\tau _j-\tau _m\). We assume that \(0>\tau _1>\cdots >\tau _m\). The goal of this section is to obtain asymptotics for \(\Psi \) as \(r\rightarrow +\infty \). This will also lead us to large r asymptotics for the differential identity (2.20). In this section, we deal with the case \(s_{1} = 0\). The general strategy in this section has many similarities with the analysis in [17], needed in the study of Hankel determinants with several Fisher–Hartwig singularities.

3.1 Re-scaling of the RH problem

Define the function \(T(\lambda )=T(\lambda ;\vec {\eta }, \tau _m, \vec {s})\) as follows,

$$\begin{aligned} T(\lambda )=\begin{pmatrix} 1&{}\quad \frac{i}{4}(\eta _1^2+2\tau _m\eta _1)r^{3/2}\\ 0&{}\quad 1 \end{pmatrix}r^{-\frac{\sigma _3}{4}} \Psi (r\lambda +r\eta _1;x=r\tau _m,r\vec {\eta },\vec {s}). \end{aligned}$$
(3.1)

The asymptotics (2.4) of \(\Psi \) then imply after a straightforward calculation that T behaves as

$$\begin{aligned} T(\lambda ) = \left( I + T_1\frac{1}{\lambda } +T_2\frac{1}{\lambda ^2}+ \mathcal {O}\left( \frac{1}{\lambda ^3}\right) \right) \lambda ^{\frac{1}{4} \sigma _3} M^{-1} e^{-r^{3/2}(\frac{2}{3}\lambda ^{3/2} +(\tau _m+\eta _1)\lambda ^{1/2}) \sigma _3}, \end{aligned}$$
(3.2)

as \(\lambda \rightarrow \infty \), where the principal branches of the roots are chosen. The entries of \(T_1\) and \(T_2\) are related to those of \(\Psi _1\) and \(\Psi _2\) in (2.4): we have

$$\begin{aligned} T_{1,11}= & {} \frac{\Psi _{1,11}}{r} +\Psi _{1,21}\frac{iA}{4}r+\frac{\eta _1}{4} -\frac{A^2r^3}{32} = -T_{1,22}, \nonumber \\ T_{1,12}= & {} \frac{\Psi _{1,12}}{r^{3/2}} -\frac{iA}{2}\Psi _{1,11}r^{1/2} -\frac{i\eta _1^{2}}{24}(2\eta _{1}+3\tau _{m})r^{3/2} +\frac{A^2}{16}\Psi _{1,21}r^{5/2}+\frac{iA^3}{192}r^{9/2}, \nonumber \\ T_{1,21}= & {} \frac{\Psi _{1,21}}{r^{1/2}}+\frac{iAr^{3/2}}{4}, \nonumber \\ T_{2,21}= & {} \frac{\Psi _{2,21}}{r^{3/2}} -\frac{3\eta _1}{4r^{1/2}}\Psi _{1,21}-\frac{iA}{4}\Psi _{1,11}r^{1/2} -\frac{i\eta _{1}^{2}}{48}(5\eta _{1}+12\tau _{m})r^{3/2} \nonumber \\&+\,\frac{A^2}{32}\Psi _{1,21}r^{5/2} +\frac{iA^{3}}{384}r^{9/2}, \end{aligned}$$
(3.3)

where

$$\begin{aligned} A=(\eta _1^2+2\tau _m\eta _1). \end{aligned}$$

The singularities in the \(\lambda \)-plane are now located at the (non-positive) points \(\lambda _j=\eta _j-\eta _1=\tau _j-\tau _1\), \(j=1,\ldots ,m\).

3.2 Normalization with g-function and opening of lenses

In order to normalize the RH problem at \(\infty \), in view of (3.2), we define the g-function by

$$\begin{aligned} g(\lambda )=-\frac{2}{3}\lambda ^{3/2}-\tau _1\lambda ^{1/2}, \end{aligned}$$
(3.4)

once more with principal branches of the roots. Also, around each interval \((\lambda _{j},\lambda _{j-1})\), \(j = 2,\ldots ,m\), we will split the jump contour into three parts. This procedure is generally called the opening of the lenses. Let us consider lens-shaped contours \(\gamma _{j,+}\) and \(\gamma _{j,-}\), lying in the upper and lower half plane respectively, as shown in Fig. 2. Let us also write \(\Omega _{j,+}\) (resp. \(\Omega _{j,-}\)) for the region inside the lens around \((\lambda _{j},\lambda _{j-1})\) in the upper half plane (resp. in the lower half plane). Then we define S by

$$\begin{aligned} S(\lambda )=T(\lambda )e^{-r^{3/2}g(\lambda )\sigma _3} \prod _{j=2}^{m}\left\{ \begin{array}{l l} \begin{pmatrix} 1 &{} 0 \\ -s_{j}^{-1}e^{-2r^{3/2}g(\lambda )} &{} 1 \end{pmatrix}, &{}\quad \text{ if } \lambda \in \Omega _{j,+}, \\ \begin{pmatrix} 1 &{}\quad 0 \\ s_{j}^{-1}e^{-2r^{3/2}g(\lambda )} &{}\quad 1 \end{pmatrix}, &{}\quad \text{ if } \lambda \in \Omega _{j,-}, \\ I, &{}\quad \text{ if } \lambda \in \mathbb {C}{\setminus }(\Omega _{j,+} \cup \Omega _{j,-}). \end{array} \right. \end{aligned}$$
(3.5)

In order to derive RH conditions for S, we need to use the RH problem for \(\Psi \), the definitions (3.1) of T and (3.5) of S, and the fact that \(g_{+}(\lambda ) + g_{-}(\lambda ) = 0\) for \(\lambda \in (-\infty ,0)\). This allows us to conclude that S satisfies the following RH problem.

Fig. 2 Jump contours \(\Gamma _{S}\) for S with \(m=3\) and \(s_{1} = 0\)

3.2.1 RH problem for S

(a)

    \(S : \mathbb {C}\backslash \Gamma _{S} \rightarrow \mathbb {C}^{2\times 2}\) is analytic, with

    $$\begin{aligned} \Gamma _{S}=(-\infty ,0]\cup \big (\lambda _{m}+e^{\pm \frac{2\pi i}{3}} (0,+\infty )\big )\cup \gamma _{+}\cup \gamma _{-}, \qquad \gamma _{\pm } = \bigcup _{j=2}^{m} \gamma _{j,\pm }, \end{aligned}$$
    (3.6)

    and \(\Gamma _{S}\) oriented as in Fig. 2.

(b)

    The jumps for S are given by

    $$\begin{aligned} S_{+}(\lambda )= & {} S_{-}(\lambda )\begin{pmatrix} 0 &{}\quad s_{j} \\ -s_{j}^{-1} &{}\quad 0 \end{pmatrix}, \quad \lambda \in (\lambda _{j}, \lambda _{j-1}), \, j = 2,\ldots ,m+1, \\ S_{+}(\lambda )= & {} S_{-}(\lambda )\begin{pmatrix} 1 &{}\quad 0 \\ s_{j}^{-1}e^{-2r^{3/2}g(\lambda )} &{}\quad 1 \end{pmatrix}, \qquad \lambda \in \gamma _{j,+} \cup \gamma _{j,-}, \, j = 2,\ldots ,m, \\ S_{+}(\lambda )= & {} S_{-}(\lambda )\begin{pmatrix} 1 &{}\quad 0 \\ e^{-2r^{3/2}g(\lambda )} &{}\quad 1 \end{pmatrix}, \quad \lambda \in \lambda _{m} +e^{\pm \frac{2\pi i}{3}} (0,+\infty ), \end{aligned}$$

    where \(\lambda _{m+1} = - \infty \) and \(s_{m+1} = 1\).

(c)

    As \(\lambda \rightarrow \infty \), we have

    $$\begin{aligned} S(\lambda ) = \left( I + \frac{T_1}{\lambda } +\frac{T_2}{\lambda ^2}+ \mathcal {O}\left( \frac{1}{\lambda ^3}\right) \right) \lambda ^{\frac{1}{4} \sigma _3} M^{-1}. \end{aligned}$$
    (3.7)
(d)

    \(S(\lambda ) = \mathcal {O}( \log (\lambda -\lambda _j) )\) as \(\lambda \rightarrow \lambda _j\), \(j = 1, \ldots , m\).

Let us now take a closer look at the jump matrices on the lenses \(\gamma _{j,\pm }\). By (3.4), we have

$$\begin{aligned} \mathfrak {R}g(\rho e^{i\theta }) = - \frac{2}{3}\rho ^{3/2}\cos (\tfrac{3\theta }{2}) -(\eta _{1}+\tau _{m})\rho ^{1/2}\cos (\tfrac{\theta }{2}), \qquad \text{ for } \theta \in (-\pi ,\pi ], \quad \rho > 0.\nonumber \\ \end{aligned}$$
(3.8)

Since \(\eta _{1}+\tau _{m} = \tau _{1} < 0\), we have

$$\begin{aligned} \mathfrak {R}g(\rho e^{i\theta })> & {} -\frac{2}{3}\rho ^{3/2} \cos \left( \frac{3\theta }{2}\right) > 0, \qquad \frac{\pi }{3}< |\theta | < \pi , \\ \mathfrak {R}g(\rho e^{i\theta })= & {} 0, \qquad |\theta | = \pi . \end{aligned}$$

It follows that the jumps for S are exponentially close to I as \(r \rightarrow + \infty \) on the lenses, and on \(\lambda _{m} + e^{\pm \frac{2\pi i}{3}}(0,+\infty )\). This convergence is uniform outside neighborhoods of \(\lambda _{1},\ldots ,\lambda _{m}\), but is not uniform as \(r \rightarrow + \infty \) and simultaneously \(\lambda \rightarrow \lambda _{j}\), \(j \in \{1,\ldots ,m\}\).

3.3 Global parametrix

We will now construct approximations to S for large r, which will turn out later to be valid in different regions of the complex plane. We need to distinguish between neighborhoods of each of the singularities \(\lambda _1,\ldots , \lambda _m\) and the remaining part of the complex plane. We call the approximation to S away from the singularities the global parametrix. To construct it, we ignore the jump matrices near \(\lambda _1,\ldots , \lambda _m\) and the exponentially small entries in the jumps as \(r\rightarrow +\infty \) on the lenses \(\gamma _{j,\pm }\). In other words, we aim to find a solution to the following RH problem.

3.3.1 RH problem for \(P^{(\infty )}\)

(a)

    \(P^{(\infty )} : \mathbb {C}\backslash (-\infty ,0] \rightarrow \mathbb {C}^{2\times 2}\) is analytic.

(b)

    The jumps for \(P^{(\infty )}\) are given by

    $$\begin{aligned} P^{(\infty )}_{+}(\lambda ) = P^{(\infty )}_{-}(\lambda )\begin{pmatrix} 0 &{} s_{j} \\ -s_{j}^{-1} &{} 0 \end{pmatrix}, \qquad \lambda \in (\lambda _{j}, \lambda _{j-1}), \, j = 2,\ldots ,m+1. \end{aligned}$$
(c)

    As \(\lambda \rightarrow \infty \), we have

    $$\begin{aligned} P^{(\infty )}(\lambda ) = \left( I + \frac{P^{(\infty )}_1}{\lambda } +\frac{P^{(\infty )}_2}{\lambda ^2} + \mathcal {O}\left( \frac{1}{\lambda ^3}\right) \right) \lambda ^{\frac{1}{4} \sigma _3} M^{-1}. \end{aligned}$$
    (3.9)

The solution to this RH problem is not unique unless we specify its local behavior as \(\lambda \rightarrow 0\) and as \(\lambda \rightarrow \lambda _j\). We will construct a solution \(P^{(\infty )}\) which is bounded as \(\lambda \rightarrow \lambda _j\) for \(j=2,\ldots , m\), and which is \({\mathcal {O}}(\lambda ^{-\frac{1}{4}})\) as \(\lambda \rightarrow 0\). We take it of the form

$$\begin{aligned} P^{(\infty )}(\lambda )=\begin{pmatrix} 1&{}id_1\\ 0&{}1 \end{pmatrix}\lambda ^{\frac{1}{4} \sigma _3}M^{-1}D(\lambda )^{-\sigma _3}, \end{aligned}$$
(3.10)

with D a function depending on the \(\lambda _j\)’s and \(\vec {s}\), and where we define \(d_1\) below. In order to satisfy the above RH conditions, we need to take

$$\begin{aligned} D(\lambda )=\exp \left( \frac{\lambda ^{1/2}}{2\pi } \sum _{j=2}^{m} \log s_j \int _{\lambda _j}^{\lambda _{j-1}} (-u)^{-1/2}\frac{du}{\lambda -u}\right) . \end{aligned}$$
(3.11)

For later use, let us now take a closer look at the asymptotics of \(P^{(\infty )}\) as \(\lambda \rightarrow \infty \) and as \(\lambda \rightarrow \lambda _j\). For any \(k \in \mathbb {N}_{>0}\), as \(\lambda \rightarrow \infty \) we have

$$\begin{aligned} D(\lambda )= & {} \exp \left( \sum _{\ell = 1}^{k} \frac{d_{\ell }}{\lambda ^{\ell -\frac{1}{2}}} + {\mathcal {O}}(\lambda ^{-k-\frac{1}{2}}) \right) \nonumber \\= & {} 1 + d_{1} \lambda ^{-1/2}+\frac{d_{1}^{2}}{2}\lambda ^{-1} + \left( \frac{d_{1}^{3}}{6}+d_{2} \right) \lambda ^{-3/2} + {\mathcal {O}}(\lambda ^{-2}), \end{aligned}$$
(3.12)

where

$$\begin{aligned} d_{\ell }= & {} \sum _{j=2}^{m} \frac{(-1)^{\ell - 1} \log s_{j}}{2\pi } \int _{\lambda _{j}}^{\lambda _{j-1}} (-u)^{\ell -\frac{3}{2}}du \nonumber \\= & {} \sum _{j=2}^{m} \frac{(-1)^{\ell - 1} \log s_{j}}{\pi (2\ell -1)} \Big ( |\lambda _{j}|^{\ell -\frac{1}{2}}-| \lambda _{j-1}|^{\ell -\frac{1}{2}} \Big ), \end{aligned}$$
(3.13)

and this also defines the value of \(d_1\) in (3.10). A long but direct computation shows that

$$\begin{aligned} P_1^{(\infty )}=\begin{pmatrix} -\frac{d_1^2}{2}&{}\frac{i}{3}(d_{1}^{3}-3d_{2})\\ id_1&{}\frac{d_1^2}{2} \end{pmatrix},\quad P_2^{(\infty )}=\begin{pmatrix} -\frac{d_{1}^{4}}{8}&{}\frac{i}{30}(d_{1}^{5}+15d_{2}d_{1}^{2}-30d_{3})\\ \frac{i}{6}(d_{1}^{3}+6d_{2})&{}\frac{d_{1}^{4}}{24}+d_{2}d_{1} \end{pmatrix}.\nonumber \\ \end{aligned}$$
(3.14)

To study the local behavior of \(P^{(\infty )}\) near \(\lambda _j\), it is convenient to use a different representation of D, namely

$$\begin{aligned} D(\lambda ) = \prod _{j=2}^{m}D_{j}(\lambda ), \end{aligned}$$
(3.15)

where

$$\begin{aligned} D_{j}(\lambda ) = \left( \frac{(\sqrt{\lambda }-i \sqrt{|\lambda _{j-1}|}) (\sqrt{\lambda }+i \sqrt{|\lambda _{j}|})}{(\sqrt{\lambda } -i \sqrt{|\lambda _{j}|})(\sqrt{\lambda } +i \sqrt{|\lambda _{j-1}|})} \right) ^{\frac{\log s_{j}}{2\pi i}}. \end{aligned}$$
(3.16)
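The equality of the representations (3.11) and (3.15)–(3.16) can be checked numerically at a test point away from \((-\infty ,0]\) (a sketch; the values of \(\lambda _j\), \(s_j\) and the test point are arbitrary, and the integrable endpoint singularity \((-u)^{-1/2}\) is left to the quadrature routine):

```python
# Sketch: compare the integral (3.11) and product (3.15)-(3.16)
# representations of D at a test point (all concrete values arbitrary).
import numpy as np
from scipy.integrate import quad

lam = [0.0, -1.0, -2.5]          # lambda_1 = 0 > lambda_2 > lambda_3
s = [0.4, 0.7]                   # s_2, s_3
z = 1.5 + 0.8j                   # test point off (-inf, 0]

def D_integral(z):
    acc = 0j
    for j in (2, 3):
        g = lambda u: (-u)**-0.5 / (z - u)
        a, b = lam[j - 1], lam[j - 2]          # (lambda_j, lambda_{j-1})
        acc += np.log(s[j - 2]) * (quad(lambda u: g(u).real, a, b)[0]
                                   + 1j * quad(lambda u: g(u).imag, a, b)[0])
    return np.exp(np.sqrt(z) * acc / (2 * np.pi))

def D_product(z):
    r, out = np.sqrt(z), 1 + 0j
    for j in (2, 3):
        aj, ajm = abs(lam[j - 1])**0.5, abs(lam[j - 2])**0.5
        ratio = ((r - 1j * ajm) * (r + 1j * aj)) / ((r - 1j * aj) * (r + 1j * ajm))
        out *= ratio**(np.log(s[j - 2]) / (2j * np.pi))
    return out

print(D_integral(z), D_product(z))   # should agree to quadrature accuracy
```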

From this representation, it is straightforward to derive the following expansions. As \(\lambda \rightarrow \lambda _{j}\), \(j \in \{2,\ldots ,m\}\), \(\mathfrak {I}\lambda > 0\), we have

$$\begin{aligned} D_{j}(\lambda ) = \sqrt{s_{j}} T_{j,j}^{\frac{\log s_{j}}{2\pi i}} (\lambda -\lambda _{j})^{-\frac{\log s_{j}}{2\pi i}} (1+{\mathcal {O}}(\lambda -\lambda _{j})), \quad T_{j,j} = \frac{4|\lambda _{j}|(\sqrt{|\lambda _{j}|} -\sqrt{|\lambda _{j-1}|})}{\sqrt{|\lambda _{j}|}+\sqrt{|\lambda _{j-1}|}}. \end{aligned}$$

As \(\lambda \rightarrow \lambda _{j-1}\), \(j \in \{3,\ldots ,m\}\), \(\mathfrak {I}\lambda >0\), we have

$$\begin{aligned} D_{j}(\lambda )= & {} T_{j,j-1}^{\frac{\log s_{j}}{2\pi i}}(\lambda -\lambda _{j-1})^{\frac{\log s_{j}}{2\pi i}} (1+{\mathcal {O}}(\lambda -\lambda _{j-1})), \quad \\ T_{j,j-1}= & {} \frac{\sqrt{|\lambda _{j}|}+\sqrt{|\lambda _{j-1}|}}{4|\lambda _{j-1}|(\sqrt{|\lambda _{j}|}-\sqrt{|\lambda _{j-1}|})}. \end{aligned}$$

For \(j \in \{2,\ldots ,m\}\), as \(\lambda \rightarrow \lambda _{k}\), \(k\in \{2,\ldots ,m\}\), \(k \ne j,j-1\), \(\mathfrak {I}\lambda > 0\), we have

$$\begin{aligned} D_{j}(\lambda ) = T_{j,k}^{\frac{\log s_{j}}{2\pi i}} (1+{\mathcal {O}}(\lambda -\lambda _{k})), \quad T_{j,k} =\frac{(\sqrt{|\lambda _{k}|}-\sqrt{|\lambda _{j-1}|}) (\sqrt{|\lambda _{k}|}+\sqrt{|\lambda _{j}|})}{(\sqrt{|\lambda _{k}|}-\sqrt{|\lambda _{j}|}) (\sqrt{|\lambda _{k}|}+\sqrt{|\lambda _{j-1}|})}.\nonumber \\ \end{aligned}$$
(3.17)

Note that \(T_{j,k} \ne T_{k,j}\) for \(j \ne k\) and \(T_{j,k}>0\) for all \(j,k\). From the above expansions, we obtain, as \(\lambda \rightarrow \lambda _{j}\), \(\mathfrak {I}\lambda > 0\), \(j \in \{2,\ldots ,m\}\), that

$$\begin{aligned} D(\lambda ) = \sqrt{s_{j}} \left( \prod _{k=2}^{m} T_{k,j}^{\frac{\log s_{k}}{2\pi i}} \right) (\lambda -\lambda _{j})^{\beta _{j}}(1+{\mathcal {O}}(\lambda -\lambda _{j})), \end{aligned}$$
(3.18)

where \(\beta _1,\ldots , \beta _m\) are as in (1.6). The first two terms in the expansion of \(D(\lambda )\) as \(\lambda \rightarrow \lambda _{1} = 0\) are given by

$$\begin{aligned} D(\lambda ) = \sqrt{s_{2}}\Big (1-d_{0}\sqrt{\lambda } +{\mathcal {O}}(\lambda )\Big ), \end{aligned}$$
(3.19)

where

$$\begin{aligned} d_{0} = \frac{\log s_{2}}{\pi \sqrt{|\lambda _{2}|}} -\sum _{j=3}^{m} \frac{\log s_{j}}{\pi } \Big ( \frac{1}{\sqrt{|\lambda _{j-1}|}} -\frac{1}{\sqrt{|\lambda _{j}|}} \Big ). \end{aligned}$$
(3.20)

The above expressions simplify if we write them in terms of \(\beta _2,\ldots , \beta _m\) defined by (1.6). For all \(\ell \in \{0,1,2,\ldots \}\), we have

$$\begin{aligned} d_{\ell } = \frac{2i(-1)^{\ell }}{2\ell - 1}\sum _{j=2}^{m} \beta _{j}|\lambda _{j}|^{\ell -\frac{1}{2}}. \end{aligned}$$
(3.21)

We also have the identity

$$\begin{aligned} \prod _{k=2}^{m} T_{k,j}^{\frac{\log s_{k}}{2\pi i}} =(4|\lambda _{j}|)^{-\beta _{j}}\prod _{\begin{array}{c} k=2 \\ k \ne j \end{array}}^{m} \widetilde{T}_{k,j}^{-\beta _{k}}, \quad \text{ where } \qquad \widetilde{T}_{k,j} = \frac{\left( \sqrt{|\lambda _{j}|} +\sqrt{|\lambda _{k}|}\right) ^2}{\big |{\lambda _{j}}-{\lambda _{k}}\big |}, \end{aligned}$$
(3.22)

which will be useful later on.

3.4 Local parametrices

As a local approximation to S in the vicinity of \(\lambda _{j}\), \(j= 1,\ldots ,m\), we construct a function \(P^{(\lambda _{j})}\) in a fixed disk \(\mathcal {D}_{\lambda _{j}}\) around \(\lambda _{j}\), chosen sufficiently small so that the disks neither intersect nor touch each other. This function should satisfy the same jump relations as S inside the disk, and it should match with the global parametrix at the boundary of the disk. More precisely, we require the matching condition

$$\begin{aligned} P^{(\lambda _{j})}(\lambda ) = (I+o(1))P^{(\infty )}(\lambda ), \quad \text{ as } r \rightarrow +\infty , \end{aligned}$$
(3.23)

uniformly for \(\lambda \in \partial \mathcal {D}_{\lambda _{j}}\). The construction near \(\lambda _1\) is different from the ones near \(\lambda _2,\ldots , \lambda _m\).

3.4.1 Local parametrices around \(\lambda _{j}\), \(j = 2,\ldots ,m\).

For \(j \in \{2,\ldots ,m\}\), \(P^{(\lambda _{j})}\) can be constructed in terms of Whittaker’s confluent hypergeometric functions. This type of construction is well understood and relies on the solution \(\Phi _\mathrm{HG}(z)\) to a model RH problem, which we recall in “Appendix A.3” for the convenience of the reader. For more details about it, we refer to [17, 27, 31]. Let us first consider the function

$$\begin{aligned} \displaystyle f_{\lambda _{j}}(\lambda )= & {} \displaystyle -2 \left\{ \begin{array}{l l} g(\lambda )-g_{+}(\lambda _{j}), &{}\quad \text{ if } \mathfrak {I}\lambda > 0, \\ -(g(\lambda )-g_{-}(\lambda _{j})), &{}\quad \text{ if } \mathfrak {I}\lambda < 0, \end{array} \right. \nonumber \\= & {} \displaystyle - \frac{4i}{3} \left( (-\lambda )^{3/2} -(-\lambda _{j})^{3/2} \right) +2 \tau _1i \left( (-\lambda )^{1/2} -(-\lambda _{j})^{1/2} \right) , \end{aligned}$$
(3.24)

defined in terms of the g-function (3.4). This is a conformal map from \(\mathcal {D}_{\lambda _{j}}\) to a neighborhood of 0, which maps \(\mathbb {R}\cap \mathcal {D}_{\lambda _{j}}\) to a part of the imaginary axis. As \(\lambda \rightarrow \lambda _{j}\), the expansion of \(f_{\lambda _j}\) is given by

$$\begin{aligned} f_{\lambda _{j}}(\lambda ) = i c_{\lambda _{j}} (\lambda -\lambda _{j})(1+{\mathcal {O}}(\lambda -\lambda _{j})), \quad \text{ with } \quad c_{\lambda _{j}} = \frac{2|\lambda _{j}|-\tau _{1}}{\sqrt{|\lambda _{j}|}} > 0. \end{aligned}$$
(3.25)

We need moreover that all parts of the jump contour \(\Gamma _S\cap \mathcal {D}_{\lambda _{j}}\) are mapped onto the jump contour \(\Gamma \) for \(\Phi _\mathrm{HG}\), see Fig. 6. We can achieve this by choosing \(\Gamma _2,\Gamma _3,\Gamma _5, \Gamma _6\) in such a way that \(f_{\lambda _j}\) maps the parts of the lenses \(\gamma _{j,+}, \gamma _{j,-}, \gamma _{j+1,+}, \gamma _{j+1,-}\) inside \(\mathcal {D}_{\lambda _{j}}\) to parts of the respective jump contours \(\Gamma _2, \Gamma _6, \Gamma _3\), \(\Gamma _5\) for \(\Phi _\mathrm{HG}\) in the z-plane.

We can construct a suitable local parametrix \(P^{(\lambda _{j})}\) in the form

$$\begin{aligned} P^{(\lambda _{j})}(\lambda ) = E_{\lambda _{j}}(\lambda ) \Phi _{\mathrm {HG}}(r^{3/2}f_{\lambda _{j}}(\lambda ); \beta _{j})(s_{j}s_{j+1})^{-\frac{\sigma _{3}}{4}} e^{-r^{3/2}g(\lambda )\sigma _{3}}. \end{aligned}$$
(3.26)

If \(E_{\lambda _j}\) is analytic in \(\mathcal {D}_{\lambda _{j}}\), then it follows from the RH conditions for \(\Phi _\mathrm{HG}\) and the construction of \(f_{\lambda _j}\) that \(P^{(\lambda _{j})}\) satisfies exactly the same jump conditions as S on \(\Gamma _S\cap \mathcal {D}_{\lambda _{j}}\). In order to satisfy the matching condition (3.23), we are forced to define \(E_{\lambda _{j}}\) by

$$\begin{aligned} E_{\lambda _{j}}(\lambda ) = P^{(\infty )}(\lambda ) (s_{j} s_{j+1})^{\frac{\sigma _{3}}{4}} \left\{ \begin{array}{l l} \displaystyle \sqrt{\frac{s_{j}}{s_{j+1}}}^{\sigma _{3}}, &{}\quad \mathfrak {I}\lambda > 0 \\ \begin{pmatrix} 0 &{}\quad 1 \\ -1 &{}\quad 0 \end{pmatrix},&\quad \mathfrak {I}\lambda < 0 \end{array} \right\} e^{r^{3/2}g_{+}(\lambda _{j}) \sigma _{3}}(r^{3/2}f_{\lambda _{j}}(\lambda ))^{\beta _{j}\sigma _{3}}.\nonumber \\ \end{aligned}$$
(3.27)

Using the asymptotics of \(\Phi _\mathrm{HG}\) at infinity given in (A.13), we can strengthen the matching condition (3.23) to

$$\begin{aligned} P^{(\lambda _{j})}(\lambda )P^{(\infty )}(\lambda )^{-1} = I + \frac{1}{r^{3/2}f_{\lambda _{j}}(\lambda )}E_{\lambda _{j}}(\lambda ) \Phi _{\mathrm {HG},1}(\beta _{j}) E_{\lambda _{j}}(\lambda )^{-1}+ {\mathcal {O}}(r^{-3}),\nonumber \\ \end{aligned}$$
(3.28)

as \(r \rightarrow + \infty \), uniformly for \(\lambda \in \partial \mathcal {D}_{\lambda _{j}}\), where \(\Phi _{\mathrm {HG},1}\) is a matrix specified in (A.14). Also, a direct computation shows that

$$\begin{aligned} E_{\lambda _{j}}(\lambda _{j}) = \begin{pmatrix} 1 &{}\quad id_{1} \\ 0 &{}\quad 1 \end{pmatrix} e^{\frac{\pi i}{4}\sigma _{3}} | \lambda _{j}|^{\frac{\sigma _{3}}{4}}M^{-1}\Lambda _{j}^{\sigma _{3}}, \end{aligned}$$
(3.29)

where

$$\begin{aligned} \Lambda _{j} = \left( \prod _{\begin{array}{c} k=2 \\ k \ne j \end{array}}^{m} \widetilde{T}_{k,j}^{\beta _{k}} \right) (4|\lambda _{j}|)^{\beta _{j}} e^{r^{3/2}g_{+}(\lambda _{j})}r^{\frac{3}{2}\beta _{j}} c_{\lambda _{j}}^{\beta _{j}}. \end{aligned}$$
(3.30)

3.4.2 Local parametrix around \(\lambda _{1} = 0\).

For the local parametrix \(P^{(0)}\) near 0, we need to use a different model RH problem, whose solution \(\Phi _\mathrm{Be}(z)\) can be expressed in terms of Bessel functions. We recall this construction in “Appendix A.2”, and refer to [36] for more details. Similarly as for the local parametrices from the previous subsection, we first need to construct a suitable conformal map which maps the jump contour \(\Gamma _S\cap \mathcal {D}_0\) in the \(\lambda \)-plane to a part of the jump contour \(\Sigma _{\mathrm {Be}}\) for \(\Phi _\mathrm{Be}\) in the z-plane. This map is given by

$$\begin{aligned} f_{0}(\lambda ) = \frac{g(\lambda )^{2}}{4}, \end{aligned}$$
(3.31)

and it is straightforward to check that it indeed maps \(\mathcal {D}_{0}\) conformally to a neighborhood of 0. Its expansion as \(\lambda \rightarrow 0\) is given by

$$\begin{aligned} f_{0}(\lambda ) = \frac{\tau _1^{2}}{4}\lambda \left( 1+\frac{4}{3\tau _1}\lambda + {\mathcal {O}}(\lambda ^{2})\right) . \end{aligned}$$
(3.32)
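A quick numerical sanity check of (3.31)–(3.32) (a sketch; \(\tau _1\) and the test point are arbitrary):

```python
# Sketch: check the expansion (3.32) of f_0 = g^2/4 near lambda = 0
# (tau_1 and the test point are arbitrary; lambda small, real and positive).
import numpy as np

tau1 = -1.3
g = lambda l: -2.0 / 3.0 * l**1.5 - tau1 * np.sqrt(l)   # g-function (3.4)
lam = 1e-4
exact = g(lam)**2 / 4
approx = tau1**2 / 4 * lam * (1 + 4 * lam / (3 * tau1))
print(exact, approx)   # agree up to O(lam^3)
```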

We can choose the lenses \(\gamma _{2,\pm }\) in such a way that \(f_0\) maps them to the jump contours \(e^{\pm \frac{2\pi i}{3}}\mathbb {R}^+\) for \(\Phi _\mathrm{Be}\).

If we take \(P^{(0)}\) of the form

$$\begin{aligned} P^{(0)}(\lambda ) = E_{0}(\lambda )\Phi _{\mathrm {Be}}(r^{3}f_{0}(\lambda )) s_{2}^{-\frac{\sigma _{3}}{2}}e^{-r^{3/2}g(\lambda )\sigma _{3}}, \end{aligned}$$
(3.33)

with \(E_{0}\) analytic in \(\mathcal {D}_{0}\), then it is straightforward to verify that \(P^{(0)}\) satisfies the same jump relations as S in \(\mathcal {D}_0\). In addition, if we let

$$\begin{aligned} E_{0}(\lambda ) = P^{(\infty )}(\lambda )s_{2}^{\frac{\sigma _{3}}{2}}M^{-1} \left( 2\pi r^{3/2}f_{0}(\lambda )^{1/2} \right) ^{\frac{\sigma _{3}}{2}}, \end{aligned}$$
(3.34)

then matching condition (3.23) also holds. It can be refined using the asymptotics for \(\Phi _\mathrm{Be}\) given in (A.7): we have

$$\begin{aligned} P^{(0)}(\lambda )P^{(\infty )}(\lambda )^{-1} = I + \frac{1}{r^{3/2}f_{0}(\lambda )^{1/2}}P^{(\infty )} (\lambda )s_{2}^{\frac{\sigma _{3}}{2}}\Phi _{\mathrm {Be},1} s_{2}^{-\frac{\sigma _{3}}{2}}P^{(\infty )}(\lambda )^{-1} + {\mathcal {O}}(r^{-3}),\nonumber \\ \end{aligned}$$
(3.35)

as \(r \rightarrow +\infty \) uniformly for \(\lambda \in \partial \mathcal {D}_{0}\). Also, a direct computation in which we use (3.19) and (3.32) yields

$$\begin{aligned} \displaystyle E_{0}(0)= & {} \displaystyle \lim _{\lambda \rightarrow 0} \frac{1}{2}\begin{pmatrix} 1 &{}\quad d_{1} \\ 0 &{}\quad 1 \end{pmatrix} \begin{pmatrix} \Big [ \frac{\sqrt{s_{2}}}{D(\lambda )}-\frac{D(\lambda )}{\sqrt{s_{2}}} \Big ] (\lambda f_{0}(\lambda ))^{\frac{1}{4}} &{}\quad -i \Big [ \frac{\sqrt{s_{2}}}{D(\lambda )}+\frac{D(\lambda )}{\sqrt{s_{2}}} \Big ] \Big ( \frac{\lambda }{f_{0}(\lambda )} \Big )^{\frac{1}{4}} \\ -i \Big [ \frac{\sqrt{s_{2}}}{D(\lambda )}+\frac{D(\lambda )}{\sqrt{s_{2}}} \Big ] \Big ( \frac{f_{0} (\lambda )}{\lambda } \Big )^{\frac{1}{4}} &{}\quad \Big [ \frac{D(\lambda )}{\sqrt{s_{2}}}- \frac{\sqrt{s_{2}}}{D(\lambda )} \Big ] (\lambda f_{0}(\lambda ))^{-\frac{1}{4}} \end{pmatrix}\nonumber \\&\times (2\pi r^{\frac{3}{2}})^{\frac{\sigma _{3}}{2}} \nonumber \\= & {} -i\begin{pmatrix} 1 &{}\quad id_{1} \\ 0 &{}\quad 1 \end{pmatrix}\begin{pmatrix} 0 &{}\quad 1 \\ 1 &{}\quad -id_{0} \end{pmatrix}(\pi |\tau _1| r^{\frac{3}{2}})^{\frac{\sigma _{3}}{2}}. \end{aligned}$$
(3.36)

3.5 Small norm problem

Now that the parametrices \(P^{(\lambda _j)}\) and \(P^{(\infty )}\) have been constructed, it remains to show that they indeed approximate S as \(r\rightarrow +\infty \). To that end, we define

$$\begin{aligned} R(\lambda ) = \left\{ \begin{array}{l l} S(\lambda )P^{(\infty )}(\lambda )^{-1}, &{}\quad \text{ for } \lambda \in \mathbb {C}{\setminus } \bigcup _{j=1}^{m}\mathcal {D}_{\lambda _{j}}, \\ S(\lambda )P^{(\lambda _{j})}(\lambda )^{-1}, &{}\quad \text{ for } \lambda \in \mathcal {D}_{\lambda _{j}}, \, j=1,\ldots ,m. \end{array} \right. \end{aligned}$$
(3.37)

Since the local parametrices were constructed in such a way that they satisfy the same jump conditions as S, it follows that R has no jumps inside the disks \(\mathcal {D}_{\lambda _1},\ldots , \mathcal {D}_{\lambda _m}\) and is hence analytic there. Also, we have already seen that the jump matrices for S are exponentially close to I as \(r\rightarrow +\infty \) on the lips of the lenses outside the local disks, which implies that the jump matrices for R are exponentially small there. On the boundaries of the disks, the jump matrices are close to I with an error of order \({\mathcal {O}}(r^{-3/2})\), by the matching conditions (3.35) and (3.28). The error is moreover uniform in \(\vec {\tau }\) as long as the \(\tau _j\)’s remain bounded away from each other and from 0, and uniform for \(\beta _j\), \(j=2,\ldots , m\), in a compact subset of \(i\mathbb {R}\). By standard theory for RH problems [21], it follows that R exists for sufficiently large r and that it has the asymptotics

$$\begin{aligned} R(\lambda ) = I + \frac{R^{(1)}(\lambda )}{r^{3/2}} + {\mathcal {O}}(r^{-3}), \quad R^{(1)}(\lambda ) = {\mathcal {O}}(1), \end{aligned}$$
(3.38)

as \(r\rightarrow +\infty \), uniformly for \(\lambda \in \mathbb {C}{\setminus } \Gamma _{R}\), where

$$\begin{aligned} \Gamma _{R}=\smash {\bigcup _{j=1}^m}\partial \mathcal {D}_{\lambda _j}\cup \left( \Gamma _S{\setminus } \smash {\bigcup _{j=1}^m} \mathcal {D}_{\lambda _j}\right) \end{aligned}$$

is the jump contour for the RH problem for R, and with the same uniformity in \(\vec {\tau }\) and \(\beta _2,\ldots , \beta _m\) as explained above. The remaining part of this section is dedicated to computing \(R^{(1)}(\lambda )\) explicitly for \(\lambda \in \mathbb {C}{\setminus } \bigcup _{j=1}^{m}\mathcal {D}_{\lambda _{j}}\) and for \(\lambda =0\). Let us take the clockwise orientation on the boundaries of the disks, and let us write \(J_{R}(\lambda )=R_-^{-1}(\lambda )R_+(\lambda )\) for the jump matrix of R for \(\lambda \in \Gamma _R\). Since R satisfies the equation

$$\begin{aligned} R(\lambda )=I+\frac{1}{2\pi i}\int _{\Gamma _R} \frac{R_-(s)(J_R(s)-I)}{s-\lambda }ds, \end{aligned}$$

and since \(J_R\) has the expansion

$$\begin{aligned} J_{R}(\lambda ) = I + \frac{J_{R}^{(1)}(\lambda )}{r^{3/2}} + {\mathcal {O}}(r^{-3}), \end{aligned}$$
(3.39)

as \(r \rightarrow +\infty \) uniformly for \(\lambda \in \bigcup _{j=1}^{m}\partial \mathcal {D}_{\lambda _{j}}\), while \(J_{R}-I\) is exponentially small elsewhere on \(\Gamma _R\), we obtain that \(R^{(1)}\) can be written as

$$\begin{aligned} R^{(1)}(\lambda ) = \frac{1}{2\pi i} \int _{\bigcup _{j=1}^{m}\partial \mathcal {D}_{\lambda _{j}}} \frac{J_{R}^{(1)}(s)}{s-\lambda }ds. \end{aligned}$$
(3.40)

If \(\lambda \in \mathbb {C}{\setminus } \bigcup _{j=1}^{m}\mathcal {D}_{\lambda _{j}}\), by a direct residue calculation, we have

$$\begin{aligned} R^{(1)}(\lambda ) = \frac{1}{\lambda }\text{ Res }(J_{R}^{(1)}(s),s = 0)+ \sum _{j=2}^{m} \frac{1}{\lambda -\lambda _{j}} \text{ Res }(J_{R}^{(1)}(s),s = \lambda _{j}). \end{aligned}$$
(3.41)
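Indeed, since the circles \(\partial \mathcal {D}_{\lambda _{j}}\) carry the clockwise orientation and \(\lambda \) lies outside the disks, each circle contributes minus the usual residue of the integrand at its center: for \(j\in \{2,\ldots ,m\}\),

$$\begin{aligned} \frac{1}{2\pi i} \int _{\partial \mathcal {D}_{\lambda _{j}}} \frac{J_{R}^{(1)}(s)}{s-\lambda }ds = -\frac{1}{\lambda _{j}-\lambda }\text{ Res }(J_{R}^{(1)}(s),s = \lambda _{j}) = \frac{1}{\lambda -\lambda _{j}}\text{ Res }(J_{R}^{(1)}(s),s = \lambda _{j}), \end{aligned}$$

because \(J_{R}^{(1)}\) has a simple pole at \(\lambda _{j}\); the disk around \(\lambda _{1}=0\) is treated in the same way.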

By (3.35) and (A.7),

$$\begin{aligned} \text{ Res }\left( J_{R}^{(1)}(s),s=0 \right) =\frac{d_{1}}{8 |\tau _{1}|}\begin{pmatrix} 1 &{}\quad -id_{1} \\ -id_{1}^{-1} &{}\quad -1 \end{pmatrix}. \end{aligned}$$
(3.42)

Similarly, by (3.28)–(3.30) and (A.13), for \(j \in \{2,\ldots ,m\}\), we have

$$\begin{aligned} \text{ Res }\left( J_{R}^{(1)}(s),s=\lambda _{j} \right)= & {} \frac{\beta _{j}^{2}}{ic_{\lambda _{j}}}\begin{pmatrix} 1 &{}\quad id_{1} \\ 0 &{}\quad 1 \end{pmatrix}e^{\frac{\pi i}{4}\sigma _{3}}|\lambda _{j}|^{\frac{\sigma _{3}}{4}} \\&\times \, M^{-1} \begin{pmatrix} -1 &{}\quad \widetilde{\Lambda }_{j,1} \\ -\widetilde{\Lambda }_{j,2} &{}\quad 1 \end{pmatrix} M |\lambda _{j}|^{-\frac{\sigma _{3}}{4}} e^{-\frac{\pi i}{4}\sigma _{3}}\begin{pmatrix} 1 &{}\quad -id_{1} \\ 0 &{}\quad 1 \end{pmatrix}, \end{aligned}$$

where

$$\begin{aligned} \widetilde{\Lambda }_{j,1} =\frac{- \Gamma \left( -\beta _j \right) }{\Gamma \left( \beta _j + 1 \right) } \Lambda _{j}^{2} \quad \text{ and } \quad \widetilde{\Lambda }_{j,2} = \frac{- \Gamma \left( \beta _j \right) }{\Gamma \left( 1-\beta _j\right) } \Lambda _{j}^{-2}. \end{aligned}$$
(3.43)

We will also need asymptotics for R(0). By a residue calculation, we obtain

$$\begin{aligned} R^{(1)}(0) = -\text{ Res }\left( \frac{J_{R}^{(1)}(s)}{s},s = 0\right) - \sum _{j=2}^{m} \frac{1}{\lambda _{j}}\text{ Res }(J_{R}^{(1)}(s),s = \lambda _{j}). \end{aligned}$$
(3.44)

The above residue at 0 is more involved to compute, but after a careful calculation we obtain

$$\begin{aligned}&\text{ Res }\left( \frac{J_{R}^{(1)}(s)}{s},s = 0\right) \nonumber \\&\quad = \frac{1}{12 \tau _1^2}\begin{pmatrix} -6d_{1} \tau _1d_{0}^{2}-6\tau _1d_{0}+d_{1} &{}\quad -i(-6\tau _1d_{0}^{2}d_{1}^{2}+d_{1}^{2}-12\tau _1d_{0}d_{1} -\frac{9}{2}\tau _1) \\ -i(-6\tau _1d_{0}^{2}+1) &{}\quad 6d_{1} \tau _1d_{0}^{2}+6\tau _1d_{0}-d_{1} \end{pmatrix}. \end{aligned}$$
(3.45)

In addition to asymptotics for R, we will also need asymptotics for \(\partial _{s_m}R\). For this, we note that \(\partial _{s_m}R(\lambda )\) tends to 0 at infinity, that it is analytic in \(\mathbb {C}{\setminus }\Gamma _R\), and that it satisfies the jump relation

$$\begin{aligned} \partial _{s_m}R_+=\partial _{s_m}R_-J_R+R_- \partial _{s_m}J_R,\quad \lambda \in \Gamma _R. \end{aligned}$$

This implies the integral equation

$$\begin{aligned} \partial _{s_m}R(\lambda )=\frac{1}{2\pi i}\int _{\Gamma _R} \big (\partial _{s_m}R_-(\xi )(J_R(\xi )-I)+R_-(\xi ) \partial _{s_m}J_R(\xi )\big )\frac{d\xi }{\xi -\lambda }. \end{aligned}$$

Next, we observe that \(\partial _{s_m}J_R(\xi ) =\partial _{s_m}J_R^{(1)}(\xi )r^{-3/2}+\mathcal {O}(r^{-3}\log r)\) as \(r\rightarrow +\infty \), where the extra logarithm in the error term is due to the fact that \(\partial _{s_m}r^{\frac{3}{2}\beta _j} =\mathcal {O}(\log r)\). Standard techniques [24] then allow one to deduce from the integral equation that

$$\begin{aligned} \partial _{s_m}R(\lambda ) = \partial _{s_m}R^{(1)}(\lambda ) r^{-3/2}+{\mathcal {O}}(r^{-3}\log r) \end{aligned}$$
(3.46)

as \(r\rightarrow +\infty \).

4 Integration of the Differential Identity

The differential identity (2.20) can be written as

$$\begin{aligned} \partial _{s_m}\log \det (1-\mathcal {K}_{r\vec {\tau },\vec {s}}) =A_{\vec {\tau }, \vec {s}}(r)+\sum _{j=1}^m B_{\vec {\tau }, \vec {s}}^{(j)}(r), \end{aligned}$$
(4.1)

where

$$\begin{aligned} A_{\vec {\tau }, \vec {s}}(r) =i\partial _{s_m}(\Psi _{2,21} -\Psi _{1,12}+\frac{r \tau _{m}}{2}\Psi _{1,21}) +i\partial _{s_m}\Psi _{1,21}\ \Psi _{1,11}-i\partial _{s_m} \Psi _{1,11}\ \Psi _{1,21}, \end{aligned}$$

and, by (2.15),

$$\begin{aligned} B_{\vec {\tau }, \vec {s}}^{(j)}(r)=\frac{s_{j+1}-s_j}{2\pi i} \left( G_j^{-1}\partial _{s_m}G_j\right) _{21}(r\eta _j) =\frac{s_{j+1}-s_j}{2\pi i} \left( \Psi ^{-1}\partial _{s_m}\Psi \right) _{21}(r\eta _j), \end{aligned}$$

where we set \(s_{m+1}=1\) as before.

4.1 Asymptotics for \(A_{\vec {\tau }, \vec {s}}(r)\)

For \(|\lambda |\) large, more precisely for \(\lambda \) outside the disks \(\mathcal {D}_{\lambda _j}\), \(j=1,\ldots , m\), and outside the lens-shaped regions, we have

$$\begin{aligned} S(\lambda ) = R(\lambda ) P^{(\infty )}(\lambda ), \end{aligned}$$

by (3.37). As \(\lambda \rightarrow \infty \), we can write

$$\begin{aligned} R(\lambda ) = I + \frac{R_{1}}{\lambda } + \frac{R_{2}}{\lambda ^{2}} + {\mathcal {O}}(\lambda ^{-3}), \end{aligned}$$
(4.2)

for some matrices \(R_1, R_2\) which may depend on r and the other parameters of the RH problem, but not on \(\lambda \). Thus, by (3.7) and (3.9), we have

$$\begin{aligned}&T_{1} = R_{1} + P_{1}^{(\infty )}, \\&T_{2} = R_{2} + R_{1} P_{1}^{(\infty )} + P_{2}^{(\infty )}. \end{aligned}$$

Using (3.38) and the above expressions, we obtain

$$\begin{aligned}&T_{1} = P_{1}^{(\infty )} + \frac{R_{1}^{(1)}}{r^{3/2}} + {\mathcal {O}}(r^{-3}) , \\&T_{2} = P_{2}^{(\infty )} + \frac{R_{1}^{(1)}P_{1}^{(\infty )} +R_{2}^{(1)}}{r^{3/2}} + {\mathcal {O}}(r^{-3}), \end{aligned}$$

as \(r\rightarrow +\infty \), where \(R_{1}^{(1)}\) and \(R_{2}^{(1)}\) are defined through the expansion

$$\begin{aligned} R^{(1)}(\lambda ) = \frac{R_{1}^{(1)}}{\lambda } +\frac{R_{2}^{(1)}}{\lambda ^{2}} + {\mathcal {O}}(\lambda ^{-3}), \quad \text{ as } \lambda \rightarrow \infty . \end{aligned}$$

After a long computation with several cancellations in which we use (3.3), we obtain that \(A_{\vec {\tau }, \vec {s}}(r)\) has the large r asymptotics given by

$$\begin{aligned} A_{\vec {\tau }, \vec {s}}(r)&=i \partial _{s_{m}} \Big ( \Psi _{2,21} - \Psi _{1,12} + \frac{r \tau _{m}}{2}\Psi _{1,21} \Big ) + i \Psi _{1,11} \partial _{s_{m}}\Psi _{1,21} - i \Psi _{1,21} \partial _{s_{m}}\Psi _{1,11} \nonumber \\&=-i\Big ( P_{1,21}^{(\infty )}\partial _{s_{m}}P_{1,11}^{(\infty )} + \partial _{s_{m}}P_{1,12}^{(\infty )} - \frac{\tau _1}{2}\partial _{s_{m}} P_{1,21}^{(\infty )} -P_{1,11}^{(\infty )}\partial _{s_{m}}P_{1,21}^{(\infty )} - \partial _{s_{m}}P_{2,21}^{(\infty )} \Big ) r^{3/2} \nonumber \\&\quad -\,i \Big ( P_{1,21}^{(\infty )}\partial _{s_{m}}R_{1,11}^{(1)} + \partial _{s_{m}}R_{1,12}^{(1)} - \frac{\tau _1}{2}\partial _{s_{m}}R_{1,21}^{(1)} - R_{1,11}^{(1)}\partial _{s_{m}}P_{1,21}^{(\infty )} \nonumber \\&\quad -\, R_{1,22}^{(1)}\partial _{s_{m}}P_{1,21}^{(\infty )} - 2 P_{1,11}^{(\infty )}\partial _{s_{m}}R_{1,21}^{(1)} - P_{1,21}^{(\infty )}\partial _{s_{m}}R_{1,22}^{(1)} - \partial _{s_{m}}R_{2,21}^{(1)} \Big ) +{\mathcal {O}}\Big (\frac{\log r}{r^{3/2}}\Big ). \end{aligned}$$
(4.3)

Using (1.6), (3.14) and (3.41)–(3.45), we can rewrite this more explicitly as

$$\begin{aligned} A_{\vec {\tau }, \vec {s}}(r)= & {} \Big ( -\frac{\tau _1}{2}\partial _{s_{m}}d_{1} - 2\partial _{s_{m}}d_{2} \Big ) r^{3/2} -\sum _{j=2}^m\frac{2|\lambda _j|^{1/2}}{c_{\lambda _j}} \partial _{s_m}(\beta _j^2) \nonumber \\&- \,\partial _{s_{m}}d_{1} \sum _{j=2}^{m} \frac{\beta _{j}^{2}}{c_{\lambda _j}}\big ( \widetilde{\Lambda }_{j,1} +\widetilde{\Lambda }_{j,2} \big ) +\,\tau _1\sum _{j=2}^{m} \frac{\partial _{s_{m}} \big [ \beta _{j}^{2}(\widetilde{\Lambda }_{j,1} -\widetilde{\Lambda }_{j,2}+2i) \big ]}{4 ic_{\lambda _j} {|\lambda _{j}|^{1/2}}} \nonumber \\&\quad + {\mathcal {O}}\Big (\frac{\log r}{r^{3/2}}\Big ), \end{aligned}$$
(4.4)

where we recall the definition (3.13) of \(d_1=d_1(\vec {s})\) and \(d_2=d_2(\vec {s})\).

4.2 Asymptotics for \(B_{\vec {\tau }, \vec {s}}^{(j)}(r)\) with \(j\ne 1\)

Now we focus on \(\Psi (\zeta )\) with \(\zeta \) near \(y_j\). Inverting the transformations (3.37) and (3.5), and using the definition (3.26) of the local parametrix \(P^{(\lambda _{j})}\), we obtain that for \(\lambda \) outside the lenses and inside \(\mathcal {D}_{\lambda _{j}}\), \(j \in \{2,\ldots ,m\}\),

$$\begin{aligned} T(\lambda ) = R(\lambda )E_{\lambda _{j}}(\lambda ) \Phi _{\mathrm {HG}}(r^{3/2}f_{\lambda _{j}}(\lambda );\beta _{j}) (s_{j}s_{j+1})^{-\frac{\sigma _{3}}{4}}. \end{aligned}$$
(4.5)

By (3.1), we have

$$\begin{aligned} B_{\vec {\tau }, \vec {s}}^{(j)}(r)=\frac{s_{j+1}-s_j}{2\pi i} \left( T^{-1}\partial _{s_m}T\right) _{21}(\lambda _j) =B_{\vec {\tau }, \vec {s}}^{(j,1)}(r)+B_{\vec {\tau }, \vec {s}}^{(j,2)}(r) +B_{\vec {\tau }, \vec {s}}^{(j,3)}(r), \end{aligned}$$
(4.6)

with

$$\begin{aligned}&B_{\vec {\tau }, \vec {s}}^{(j,1)}(r)=\frac{s_{j+1}-s_j}{2\pi i\sqrt{s_js_{j+1}}} \left( \Phi _\mathrm{HG}^{-1}(0;\beta _j)\partial _{s_m}\Phi _\mathrm{HG}(0;\beta _j)\right) _{21},\\&B_{\vec {\tau }, \vec {s}}^{(j,2)}(r)=\frac{s_{j+1}-s_j}{2\pi i\sqrt{s_js_{j+1}}} \left( \Phi _\mathrm{HG}^{-1}(0;\beta _j)E_{\lambda _j}^{-1}(\lambda _j)\left( \partial _{s_m}E_{\lambda _j}(\lambda _j)\right) \Phi _\mathrm{HG}(0;\beta _j)\right) _{21},\\&B_{\vec {\tau }, \vec {s}}^{(j,3)}(r)=\frac{s_{j+1}-s_j}{2\pi i\sqrt{s_js_{j+1}}} \left( \Phi _\mathrm{HG}^{-1}(0;\beta _j)E_{\lambda _j}^{-1}(\lambda _j)R^{-1}(\lambda _j)\left( \partial _{s_m}R(\lambda _j)\right) E_{\lambda _j}(\lambda _j) \Phi _\mathrm{HG}(0;\beta _j)\right) _{21}. \end{aligned}$$

Evaluation of \(B_{\vec {\tau }, \vec {s}}^{(j,3)}(r)\). The last term \(B_{\vec {\tau }, \vec {s}}^{(j,3)}(r)\) is the easiest to evaluate asymptotically as \(r\rightarrow +\infty \). By (3.38) and (3.46), we have that

$$\begin{aligned} R^{-1}(\lambda _j)\left( \partial _{s_m}R(\lambda _j)\right) =\mathcal {O}(r^{-3/2}\log r),\quad r\rightarrow +\infty . \end{aligned}$$

Moreover, from (3.29), since \(\beta _{j} \in i \mathbb {R}\), we know that \(E_{\lambda _j}(\lambda _j)=\mathcal {O}(1)\). Using also the fact that \(\Phi _\mathrm{HG}(0;\beta _j)\) is independent of r, we obtain that

$$\begin{aligned} B_{\vec {\tau }, \vec {s}}^{(j,3)}(r)=\mathcal {O}(r^{-3/2}\log r), \quad r\rightarrow +\infty . \end{aligned}$$
(4.7)

Evaluation of \(B_{\vec {\tau }, \vec {s}}^{(j,1)}(r)\). To compute \(B_{\vec {\tau }, \vec {s}}^{(j,1)}(r)\), we need to use the explicit expression for the entries in the first column of \(\Phi _\mathrm{HG}\) given in (A.19). Together with (1.6), this implies that

$$\begin{aligned}&B_{\vec {\tau }, \vec {s}}^{(j,1)}(r)\\&\quad =\frac{s_{j+1}-s_j}{2\pi i\sqrt{s_js_{j+1}}}\left( \Phi _\mathrm{HG}^{-1}(0;\beta _j) \partial _{s_m}\Phi _\mathrm{HG}(0;\beta _j)\right) _{21}\\&\quad ={\left\{ \begin{array}{ll} \frac{(-1)^{m-j+1}}{2\pi is_{m}} \frac{\sin \pi \beta _{j}}{\pi } \left( \Gamma (1+\beta _{j}) \Gamma '(1-\beta _{j})+\Gamma '(1+\beta _{j}) \Gamma (1-\beta _{j})\right) ,&{}\quad \text{ for }\ j\ge \max \{2,m-1\},\\ 0,&{}\quad \text{ otherwise }.\end{array}\right. } \end{aligned}$$

Using also the \(\Gamma \) function relations

$$\begin{aligned} \Gamma (1+\beta )\Gamma (1-\beta )=\frac{\pi \beta }{\sin \pi \beta }, \quad \partial _{\beta }\log \frac{\Gamma (1+\beta )}{\Gamma (1-\beta )} =\frac{\Gamma '(1+\beta )}{\Gamma (1+\beta )} +\frac{\Gamma '(1-\beta )}{\Gamma (1-\beta )}, \end{aligned}$$

we obtain

$$\begin{aligned} \sum _{j=2}^{m}B_{\vec {\tau }, \vec {s}}^{(j,1)}(r) = \frac{\beta _{m-1}}{2\pi is_m}\partial _{\beta _{m-1}}\log \frac{\Gamma (1+\beta _{m-1})}{\Gamma (1-\beta _{m-1})} - \frac{\beta _m}{2\pi is_m}\partial _{\beta _{m}} \log \frac{\Gamma (1+\beta _{m})}{\Gamma (1-\beta _{m})},\quad \end{aligned}$$
(4.8)

for \(m\ge 3\); for \(m=2\) the formula is correct only if we set \(\beta _{1}=0\), which we do here and in the remaining part of this section, so that the first term vanishes.
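As a quick numerical sanity check of these \(\Gamma \) function relations (it plays no role in the proof), both identities can be evaluated at a sample purely imaginary \(\beta \), as is relevant for the \(\beta _j\) here. A minimal sketch, assuming Python with the mpmath library; the value of \(\beta \) is illustrative only:

```python
from mpmath import mp, mpc, gamma, digamma, diff, log, pi, sin

mp.dps = 30
beta = mpc(0, '0.3')  # a sample purely imaginary beta

# reflection-type relation: Gamma(1+b)*Gamma(1-b) = pi*b/sin(pi*b)
print(abs(gamma(1 + beta)*gamma(1 - beta) - pi*beta/sin(pi*beta)))

# derivative relation: d/db log(Gamma(1+b)/Gamma(1-b)) = psi(1+b) + psi(1-b)
lhs = diff(lambda b: log(gamma(1 + b)/gamma(1 - b)), beta)
print(abs(lhs - (digamma(1 + beta) + digamma(1 - beta))))
# both printed differences vanish to the working precision
```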

Evaluation of \(B_{\vec {\tau }, \vec {s}}^{(j,2)}(r)\). We use (3.29) and obtain

$$\begin{aligned}&E_{\lambda _j}^{-1}(\lambda _j)\partial _{s_m}E_{\lambda _{j}}(\lambda _j)\\&\quad =\begin{pmatrix} \partial _{s_m}\log \Lambda _j-\frac{i}{2}|\lambda _j|^{-1/2}\partial _{s_m}d_1 &{}\quad \frac{1}{2}|\lambda _j|^{-1/2}\Lambda _j^{-2}\partial _{s_m}d_1 \\ \frac{1}{2}|\lambda _j|^{-1/2}\Lambda _j^{2}\partial _{s_m}d_1 &{}\quad -\partial _{s_m}\log \Lambda _j+\frac{i}{2}|\lambda _j|^{-1/2} \partial _{s_m}d_1\end{pmatrix}. \end{aligned}$$

By (3.43), we get

$$\begin{aligned} B_{\vec {\tau },\vec {s}}^{(j,2)}(r)&=-2\beta _j\partial _{s_m}\log \Lambda _j + \frac{1}{2} |\lambda _j|^{-1/2}(\partial _{s_m}d_1)\nonumber \\&\quad \times \left( 2i\beta _j -\frac{\beta _j^2\Gamma (-\beta _j)}{2\Gamma (1+\beta _j)}\Lambda _j^{2} -\frac{\beta _j^2\Gamma (\beta _j)}{2\Gamma (1-\beta _j)}\Lambda _j^{-2}\right) \nonumber \\&=-2\beta _j\partial _{s_m}\log \Lambda _j +\frac{1}{2}|\lambda _j|^{-1/2}\partial _{s_m}d_1 \left( 2i\beta _j +\beta _j^2(\tilde{\Lambda }_{j,1} +\tilde{\Lambda }_{j,2})\right) . \end{aligned}$$
(4.9)

By (4.9), (4.8), and (4.7), we obtain

$$\begin{aligned} \sum _{j=2}^mB_{\vec {\tau },\vec {s}}^{(j)}(r)= & {} \frac{\beta _{m-1}}{2\pi is_m} \partial _{\beta _{m-1}}\log \frac{\Gamma (1+\beta _{m-1})}{\Gamma (1-\beta _{m-1})}- \frac{\beta _m}{2\pi is_m}\partial _{\beta _{m}} \log \frac{\Gamma (1+\beta _{m})}{\Gamma (1-\beta _{m})}\nonumber \\&+\,\sum _{j=2}^m\left( -2\beta _j\partial _{s_m}\log \Lambda _j +\frac{\partial _{s_m}d_1}{2|\lambda _j|^{1/2}} \left( 2i\beta _j +\beta _j^2(\tilde{\Lambda }_{j,1} +\tilde{\Lambda }_{j,2})\right) \right) .\nonumber \\ \end{aligned}$$
(4.10)

4.3 Asymptotics for \(B_{\vec {\tau }, \vec {s}}^{(j)}(r)\) with \(j=1\)

For \(j=1\), we have near \(\lambda _1=0\) that

$$\begin{aligned} T(\lambda ) = R(\lambda )E_{0}(\lambda )\Phi _{\mathrm {Be}} (r^{3/2}f_0(\lambda ))s_{2}^{-\frac{\sigma _{3}}{2}}. \end{aligned}$$
(4.11)

By (3.1), we have

$$\begin{aligned} B_{\vec {\tau }, \vec {s}}^{(1)}(r)=\frac{s_{2}}{2\pi i} \left( T^{-1}\partial _{s_m}T\right) _{21}(0)=B_{\vec {\tau }, \vec {s}}^{(1,1)}(r) +B_{\vec {\tau }, \vec {s}}^{(1,2)}(r)+B_{\vec {\tau }, \vec {s}}^{(1,3)}(r), \end{aligned}$$
(4.12)

with

$$\begin{aligned}&B_{\vec {\tau }, \vec {s}}^{(1,1)}(r)=\frac{1}{2\pi i} \left( \Phi _\mathrm{Be}^{-1}(0)\partial _{s_m}\Phi _\mathrm{Be}(0)\right) _{21},\\&B_{\vec {\tau }, \vec {s}}^{(1,2)}(r)=\frac{1}{2\pi i} \left( \Phi _\mathrm{Be}^{-1}(0)E_{0}^{-1}(0) \left( \partial _{s_m}E_{0}(0)\right) \Phi _\mathrm{Be}(0)\right) _{21},\\&B_{\vec {\tau }, \vec {s}}^{(1,3)}(r)=\frac{1}{2\pi i} \left( \Phi _\mathrm{Be}^{-1}(0)E_{0}^{-1}(0)R^{-1}(0) \left( \partial _{s_m}R(0)\right) E_{0}(0) \Phi _\mathrm{Be}(0)\right) _{21}. \end{aligned}$$

Since \(\Phi _\mathrm{Be}(0)\) is independent of \(s_m\), we have \(B_{\vec {\tau }, \vec {s}}^{(1,1)}(r)=0\). For \(B_{\vec {\tau }, \vec {s}}^{(1,2)}(r)\), we use the explicit expressions for the entries in the first column of \(\Phi _\mathrm{Be}\) given in (A.11), together with (3.36), to obtain

$$\begin{aligned} B_{\vec {\tau }, \vec {s}}^{(1,2)}(r)=-\frac{\tau _1}{2}r^{3/2} \partial _{s_m}d_1. \end{aligned}$$
(4.13)

The computation of \(B_{\vec {\tau }, \vec {s}}^{(1,3)}(r)\) is more involved. Using (3.38) and (3.46), we have

$$\begin{aligned} R^{-1}(0)\partial _{s_m}R(0)=\partial _{s_m}R^{(1)}(0)r^{-3/2} +\mathcal {O}(r^{-3}\log r),\qquad r\rightarrow +\infty . \end{aligned}$$

Now we again use (A.11) and (3.36), together with (3.44), to conclude that

$$\begin{aligned} B_{\vec {\tau }, \vec {s}}^{(1,3)}(r)= & {} \frac{-\tau _1}{2} \Big ( d_{1} \partial _{s_{m}} (R_{11}^{(1)}(0) -R_{22}^{(1)}(0)) -id_{1}^{2} \partial _{s_{m}} R_{21}^{(1)}(0) -i \partial _{s_{m}} R_{12}^{(1)}(0) \Big ) \nonumber \\&+\mathcal {O} \Big (\frac{\log r}{r^{3/2}}\Big )\nonumber \\= & {} \frac{d_{0}\partial _{s_{m}}d_{1}}{2} - \sum _{j=2}^{m} \frac{\tau _1|\lambda _j|^{-1/2}}{4 i c_{\lambda _{j}}}\partial _{s_{m}} \big ( \beta _{j}^{2}(\widetilde{\Lambda }_{j,1} -\widetilde{\Lambda }_{j,2}-2i)\big )\nonumber \\&+\,\partial _{s_{m}}d_{1}\sum _{j=2}^{m} \frac{\tau _1\beta _{j}^{2}}{2c_{\lambda _{j}}|\lambda _{j}|} (\widetilde{\Lambda }_{j,1}+\widetilde{\Lambda }_{j,2}) +\mathcal {O}\Big (\frac{\log r}{r^{3/2}}\Big ) \end{aligned}$$
(4.14)

as \(r\rightarrow +\infty \). Substituting (4.13) and (4.14) into (4.12), we obtain

$$\begin{aligned} B_{\vec {\tau }, \vec {s}}^{(1)}(r)= & {} -\frac{\tau _1}{2}r^{3/2} \partial _{s_m}d_1 + \frac{d_{0}\partial _{s_{m}}d_{1}}{2}- \sum _{j=2}^{m} \frac{\tau _1|\lambda _j|^{-1/2}}{4 i c_{\lambda _{j}}} \partial _{s_{m}}\big ( \beta _{j}^{2}(\widetilde{\Lambda }_{j,1} -\widetilde{\Lambda }_{j,2}-2i)\big )\nonumber \\&+\,\partial _{s_{m}}d_{1}\sum _{j=2}^{m} \frac{\tau _1\beta _{j}^{2}}{2c_{\lambda _{j}}|\lambda _{j}|} (\widetilde{\Lambda }_{j,1}+\widetilde{\Lambda }_{j,2}) +\mathcal {O}\Big (\frac{\log r}{r^{3/2}}\Big ) \end{aligned}$$
(4.15)

as \(r\rightarrow +\infty \).

4.4 Asymptotics for the differential identity

We now substitute (4.4), (4.10), and (4.15) into (4.1) and obtain, after a straightforward calculation in which we use (3.25),

$$\begin{aligned} \partial _{s_m}\log F(r\vec {\tau },\vec {s})&=\Big ( -\tau _1\partial _{s_{m}}d_{1} - 2 \partial _{s_{m}}d_{2} \Big )r^{3/2} -2\sum _{j=2}^{m} \beta _{j} \partial _{s_{m}} \log \Lambda _{j} -\sum _{j=2}^m\partial _{s_{m}} (\beta _{j}^{2})\nonumber \\&\quad +\,\beta _{m-1}\partial _{s_m}\log \frac{\Gamma (1+\beta _{m-1})}{\Gamma (1-\beta _{m-1})}+ \beta _m\partial _{s_m} \log \frac{\Gamma (1+\beta _{m})}{\Gamma (1-\beta _{m})} + {\mathcal {O}}\Big ( \frac{\log r}{r^{3/2}}\Big ), \end{aligned}$$
(4.16)

as \(r\rightarrow +\infty \), where we recall that \(\beta _1=0\) if \(m=2\). Now we note that

$$\begin{aligned} -\tau _1\partial _{s_{m}}d_{1} - 2 \partial _{s_{m}}d_{2} =\frac{1}{\pi s_m} \Big (-\tau _1(|\lambda _{m}|^{1/2} -|\lambda _{m-1}|^{1/2})+\frac{2}{3}(|\lambda _{m}|^{3/2} -|\lambda _{m-1}|^{3/2})\Big ) \end{aligned}$$

by (3.13). Next, by (3.30), we have

$$\begin{aligned}&-2\sum _{j=2}^{m}\beta _{j} \partial _{s_{m}} \log \Lambda _{j}\nonumber \\&\quad =\frac{1}{\pi is_m} \beta _{m} \log (4|\lambda _{m}|c_{\lambda _{m}}r^{3/2}) -\frac{1}{\pi is_m} \beta _{m-1} \log (4|\lambda _{m-1}|c_{\lambda _{m-1}}r^{3/2}) \nonumber \\&\qquad +\,\frac{1}{\pi i s_m}\left( \sum _{j=2}^{m-2} \beta _j\left( \log (\widetilde{T}_{m,j}) -\log (\widetilde{T}_{m-1,j})\right) \right. \nonumber \\&\qquad \left. +\,\beta _{m-1}\log (\widetilde{T}_{m,m-1})-\beta _{m} \log (\widetilde{T}_{m-1,m})\right) . \end{aligned}$$
(4.17)

We substitute this in (4.16) and integrate in \(s_m\). For the integration, we recall the relation (1.6) between \(\vec {\beta }\) and \(\vec {s}\), and we note that letting the integration variable \(s_m'=e^{-2\pi i\beta _m'}\) go from 1 to \(s_m=e^{-2\pi i\beta _m}\) boils down to letting \(\beta _m'\) go from 0 to \(-\frac{\log s_m}{2\pi i}\), and at the same time (unless \(m=2\)) to letting \(\beta _{m-1}'\) go from \(\hat{\beta }_{m-1}:=-\frac{\log s_{m-1}}{2\pi i}\) to \(\beta _{m-1}=\frac{\log s_{m}}{2\pi i}-\frac{\log s_{m-1}}{2\pi i}\). If \(m=2\), we set \(\hat{\beta }_{1}=\beta _{1}=0\). We then obtain, also using (3.25) and writing \(\vec {s}_0:=(s_1,\ldots , s_{m-1},1)\),

$$\begin{aligned} \log \frac{F(r\vec {\tau };\vec {s})}{F(r\vec {\tau };\vec {s}_{0})}= & {} -2i\beta _m \Big (-\tau _1(|\lambda _{m}|^{1/2}-|\lambda _{m-1}|^{1/2}) +\frac{2}{3}(|\lambda _{m}|^{3/2}-|\lambda _{m-1}|^{3/2})\Big )r^{3/2}\nonumber \\&-\,\beta _m^2-\beta _{m-1}^2+\hat{\beta }_{m-1}^2 +\int _{\widehat{\beta }_{m-1}}^{\beta _{m-1}} x \partial _{x} \log \frac{\Gamma (1+x)}{\Gamma (1-x)}dx \nonumber \\&+\,\int _{0}^{\beta _{m}} x \partial _{x} \log \frac{\Gamma (1+x)}{\Gamma (1-x)}dx -\,\beta _m^2\log (4|\lambda _m|^{1/2}(\tau _1-2\tau _m) r^{3/2})\nonumber \\&+\,(\widehat{\beta }_{m-1}^2-\beta _{m-1}^2) \log (4|\lambda _{m-1}|^{1/2}(\tau _1-2\tau _{m-1}) r^{3/2})\nonumber \\&-\,2\sum _{j=2}^{m-2}\beta _{j}\beta _{m}\left( \log (\widetilde{T}_{m,j}) -\log (\widetilde{T}_{m-1,j})\right) \nonumber \\&-\,2\beta _m\beta _{m-1}\log (\widetilde{T}_{m-1,m}) +{\mathcal {O}}\Big ( \frac{\log r}{r^{3/2}}\Big ) \end{aligned}$$
(4.18)

as \(r\rightarrow +\infty \). Now we use the following identity for the remaining integrals in terms of Barnes’ G-function (which can be obtained from an integration by parts of [38, formula 5.17.4], see also [17, equations (5.24) and (5.25)]),

$$\begin{aligned} \int _{0}^{\beta } x \partial _{x} \log \frac{\Gamma (1+x)}{\Gamma (1-x)}dx = \beta ^{2} + \log G(1+\beta )G(1-\beta ). \end{aligned}$$
(4.19)
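The identity (4.19) can also be tested numerically (again, this is only a sanity check and not part of the argument). A minimal sketch, assuming mpmath, whose quad routine integrates along the straight segment from 0 to the sample \(\beta \):

```python
from mpmath import mp, mpc, digamma, barnesg, log, quad

mp.dps = 25
beta = mpc(0, '0.4')  # a sample purely imaginary beta

# integrand: x d/dx log(Gamma(1+x)/Gamma(1-x)) = x*(psi(1+x) + psi(1-x))
lhs = quad(lambda x: x*(digamma(1 + x) + digamma(1 - x)), [0, beta])
rhs = beta**2 + log(barnesg(1 + beta)*barnesg(1 - beta))
print(abs(lhs - rhs))  # vanishes to the working precision
```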

Noting that \(\lambda _j=\tau _j-\tau _1\) and \(-\beta _{m} =\beta _{m-1}-\hat{\beta }_{m-1}\), we find after a straightforward calculation that

$$\begin{aligned} \log \frac{F(r\vec {\tau };\vec {s})}{F(r\vec {\tau };\vec {s}_{0})}= & {} -2i\beta _m \Big (|\tau _1| \, |\lambda _{m}|^{1/2} +\frac{2}{3} |\lambda _{m}|^{3/2}\Big ) r^{3/2}\nonumber \\&-\,2i(\beta _{m-1} -\hat{\beta }_{m-1})\Big (|\tau _{1}| \, |\lambda _{m-1}|^{1/2} +\frac{2}{3}|\lambda _{m-1}|^{3/2}\Big )r^{3/2} \nonumber \\&-\,\frac{3}{2}(\beta _m^2+\beta _{m-1}^2-\hat{\beta }_{m-1}^2) \log r +\log \frac{G(1+\beta _{m-1})G(1-\beta _{m-1})}{G(1+\hat{\beta }_{m-1}) G(1-\hat{\beta }_{m-1})}\nonumber \\&+\,\log G(1+\beta _m)G(1-\beta _m)\nonumber \\&-\,\beta _m^2\log \left( 8|\tau _m-\tau _1|^{3/2}-4\tau _1|\tau _m -\tau _1|^{1/2}\right) \nonumber \\&+\,(\hat{\beta }_{m-1}^2-\beta _{m-1}^2) \log \left( 8|\tau _{m-1}-\tau _1|^{3/2}-4\tau _{1}|\tau _{m-1} -\tau _{1}|^{1/2}\right) \nonumber \\&-\,2\sum _{j=2}^{m-2}\left( \beta _{j}\beta _{m}\log (\widetilde{T}_{m,j}) +\beta _j(\beta _{m-1}-\hat{\beta }_{m-1})\log (\widetilde{T}_{m-1,j})\right) \nonumber \\&- \,2\beta _{m-1}\beta _m\log (\widetilde{T}_{m-1,m}) + {\mathcal {O}}\Big ( \frac{\log r}{r^{3/2}}\Big ), \end{aligned}$$
(4.20)

as \(r\rightarrow +\infty \), uniformly in \(\vec {\tau }\) as long as the \(\tau _j\)’s remain bounded away from each other and from 0, and uniformly for \(\beta _2,\ldots , \beta _{m}\) in a compact subset of \(i\mathbb {R}\).

4.5 Proof of Theorem 1.2

We now prove Theorem 1.2 by induction on m. For \(m=1\), the result (1.4) was proved in [14]; we now work under the induction hypothesis that the result holds for up to \(m-1\) discontinuities. We can thus evaluate \(F(r\vec {\tau };\vec {s}_0)\) asymptotically, since this corresponds to an Airy kernel Fredholm determinant with only \(m-1\) discontinuities. In this way, we obtain after another straightforward calculation the large r asymptotics, uniform in \(\vec {\tau }\) and \(\beta _2,\ldots , \beta _m\),

$$\begin{aligned} \log F(r\vec {\tau };\vec {s})= C_1r^3+C_2r^{3/2}+C_3\log r +C_4+\mathcal {O}(r^{-3/2}\log r) \end{aligned}$$

where

$$\begin{aligned} \displaystyle C_{1}= & {} \displaystyle - \frac{|\tau _{1}|^{3}}{12}, \quad C_{2} = \displaystyle - \sum _{j=2}^{m}2i \beta _{j} \Big ( |\tau _{1}| \, |\tau _j-\tau _1|^{1/2} + \frac{2}{3}|\tau _j-\tau _1|^{3/2} \Big ),\\ C_{3}= & {} \displaystyle -\, \frac{1}{8} - \sum _{j=2}^{m} \frac{3}{2}\beta _{j}^{2}, \\ \displaystyle C_{4}= & {} \displaystyle -2\sum _{2\le j<k\le m} \beta _{j}\beta _{k} \log \widetilde{T}_{j,k} - \sum _{j=2}^{m} \beta _{j}^{2} \log \Big ( 4 \big ( 2|\tau _{j}-\tau _1|^{3/2} +|\tau _{1}| \, |\tau _{j}-\tau _1|^{1/2} \big ) \Big ) \\&\displaystyle +\, \sum _{j=2}^{m} \log G(1+\beta _{j})G(1-\beta _{j}) +\frac{\log 2}{24} + \zeta ^{\prime }(-1) - \frac{1}{8} \log |\tau _{1}|. \end{aligned}$$

This implies the explicit form (1.14) of the asymptotics for \(E_0(r\vec {\tau };\vec {\beta }_0)=F(r\vec {\tau }; \vec {s})/F(r\tau _1;0)\). The recursive form (1.9) of the asymptotics follows directly by relying on (1.4) and (1.11). Note that we prove (1.11) independently in the next section.
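To illustrate numerically the \(m=1\) input (1.4) of the induction, with \(\tau _1=-1\) and \(s_1=0\), one can approximate the Airy kernel Fredholm determinant by a Nyström discretization with Gauss–Legendre quadrature, in the spirit of Bornemann's method. This is only a rough cross-check of the constants \(-\frac{1}{12}\), \(-\frac{1}{8}\) and \(\frac{\log 2}{24}+\zeta '(-1)\) appearing in \(C_1\), \(C_3\) and \(C_4\); it is not the method used in this paper, and the helper name log_det_airy, the truncation length L and the number of nodes n below are ad hoc choices. A minimal sketch, assuming numpy and scipy:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import airy

def log_det_airy(s, n=120, L=12.0):
    """Nystrom approximation of log det(1 - K_Ai) on (s, +inf), truncated to (s, s+L)."""
    x, w = leggauss(n)                 # Gauss-Legendre nodes/weights on (-1, 1)
    t = s + (x + 1.0)*L/2.0            # map nodes to (s, s+L)
    w = w*L/2.0
    Ai, Aip = airy(t)[0], airy(t)[1]   # Ai and Ai' at the nodes
    T1, T2 = np.meshgrid(t, t, indexing='ij')
    with np.errstate(divide='ignore', invalid='ignore'):
        K = (np.outer(Ai, Aip) - np.outer(Aip, Ai))/(T1 - T2)  # Airy kernel (1.1)
    np.fill_diagonal(K, Aip**2 - t*Ai**2)  # diagonal limit of the kernel
    sw = np.sqrt(w)
    return np.linalg.slogdet(np.eye(n) - sw[:, None]*K*sw[None, :])[1]

r = 5.0
predicted = -r**3/12 - np.log(r)/8 + np.log(2)/24 - 0.1654211437  # last term: zeta'(-1)
print(log_det_airy(-r), predicted)  # agree up to the subleading corrections
```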

5 Asymptotic Analysis of RH Problem for \(\Psi \) with \(s_{1} > 0\)

We now analyze the RH problem for \(\Psi \) asymptotically in the case where \(s_1> 0\). Although the general strategy of the method is the same as in the case \(s_1=0\) (see Sect. 3), several modifications are needed, the most important ones being a different g-function and the construction of a local Airy parametrix instead of the local Bessel parametrix that we needed for \(s_1=0\). We again write \(\vec {x}=r\vec {\tau }\) and \(\vec {y}=r\vec {\eta }\), with \(\eta _j=\tau _j-\tau _m\).

5.1 Re-scaling of the RH problem

We define T, in a slightly different manner than in (3.1), as follows,

$$\begin{aligned} T(\lambda )=\begin{pmatrix} 1 &{}\quad - \frac{i}{4}\tau _{m}^{2} r^{3/2} \\ 0 &{}\quad 1 \end{pmatrix} r^{-\frac{\sigma _3}{4}}\Psi (r(\lambda -\tau _{m});x =r\tau _m,r\vec {\eta },\vec {s}). \end{aligned}$$
(5.1)

Similarly to the case \(s_1=0\), because of the triangular pre-factor above, we then have

$$\begin{aligned} T(\lambda ) = \left( I + \frac{T_1}{\lambda } +\frac{T_2}{\lambda ^2} + \mathcal {O}\left( \frac{1}{\lambda ^3}\right) \right) \lambda ^{\frac{\sigma _3}{4} } M^{-1} e^{-\frac{2}{3}r^{3/2}\lambda ^{3/2} \sigma _3}, \end{aligned}$$
(5.2)

as \(\lambda \rightarrow \infty \), but with modified expressions for the entries of \(T_1\) and \(T_2\):

$$\begin{aligned}&T_{1,11} = \frac{\Psi _{1,11}}{r}-\frac{i \tau _{m}^{2}}{4}r\Psi _{1,21} -\frac{\tau _{m}}{4}-\frac{\tau _{m}^{4}r^{3}}{32} = -T_{1,22}, \end{aligned}$$
(5.3)
$$\begin{aligned}&T_{1,12} = \frac{\Psi _{1,12}}{r^{\frac{3}{2}}} +\frac{i\tau _{m}^{2}}{2}\Psi _{1,11}r^{1/2} -\frac{i \tau _{m}^{3}}{24}r^{3/2}+\frac{\tau _{m}^{4}}{16}\Psi _{1,21}r^{5/2} -\frac{i \tau _{m}^{6}}{192}r^{9/2}, \end{aligned}$$
(5.4)
$$\begin{aligned}&T_{1,21} = \frac{\Psi _{1,21}}{r^{1/2}} -\frac{i\tau _{m}^{2}}{4}r^{3/2}, \end{aligned}$$
(5.5)
$$\begin{aligned}&T_{2,21} = \frac{\Psi _{2,21}}{r^{3/2}} +\frac{3 \tau _{m}}{4 r^{1/2}}\Psi _{1,21} + \frac{i \tau _{m}^{2}}{4}\Psi _{1,11}r^{1/2} - \frac{7 i \tau _{m}^{3}}{48}r^{3/2}+\frac{\tau _{m}^{4}}{32} \Psi _{1,21}r^{5/2} - \frac{i \tau _{m}^{6}}{384}r^{9/2}. \end{aligned}$$
(5.6)

The singularities of T now lie at the negative points \(\lambda _j=\tau _{j}\), \(j = 1,\ldots ,m\).

5.2 Normalization with g-function and opening of lenses

Instead of the g-function defined in (3.4), we can now use the simpler function \(-\frac{2}{3} \lambda ^{3/2}\), with the principal branch of \(\lambda ^{3/2}\), and define

$$\begin{aligned} S(\lambda )=T(\lambda )e^{\frac{2}{3}(r\lambda )^{3/2}\sigma _3} \prod _{j=1}^{m} \left\{ \begin{array}{l l} \begin{pmatrix} 1 &{}\quad 0 \\ -s_{j}^{-1}e^{\frac{4}{3}(r\lambda )^{3/2}} &{}\quad 1 \end{pmatrix}, &{}\quad \text{ if } \lambda \in \Omega _{j,+}, \\ \begin{pmatrix} 1 &{}\quad 0 \\ s_{j}^{-1}e^{\frac{4}{3}(r\lambda )^{3/2}} &{}\quad 1 \end{pmatrix}, &{}\quad \text{ if } \lambda \in \Omega _{j,-}, \\ I, &{}\quad \text{ if } \lambda \in \mathbb {C}{\setminus }(\Omega _{j,+} \cup \Omega _{j,-}), \end{array} \right. \end{aligned}$$
(5.7)

where \(\Omega _{j,\pm }\) are lens-shaped regions around \((\lambda _{j},\lambda _{j-1})\) as before, but where we note that the index j now starts at \(j=1\) instead of at \(j=2\), and where we define \(\lambda _{0} := 0\); see Fig. 3 for an illustration of these regions. Note that \(\lambda _{0}\) is not a singular point of the RH problem for T, but since \(\mathfrak {R}\lambda ^{3/2} = 0\) on \((-\infty ,0)\), it plays a role in the asymptotic analysis for S. The matrix S satisfies the following RH problem.

Fig. 3: Jump contours \(\Gamma _{S}\) for S with \(m=3\) and \(s_{1} > 0\)

5.2.1 RH problem for S

  1. (a)

    \(S : \mathbb {C}\backslash \Gamma _{S} \rightarrow \mathbb {C}^{2\times 2}\) is analytic, with

    $$\begin{aligned} \Gamma _{S}=(-\infty ,0]\cup \big (\lambda _{m} +e^{\pm \frac{2\pi i}{3}} (0,+\infty )\big ) \cup \gamma _{+}\cup \gamma _{-}, \quad \gamma _{\pm } =\bigcup _{j=1}^{m} \gamma _{j,\pm }, \end{aligned}$$
    (5.8)

    and \(\Gamma _{S}\) oriented as in Fig. 3.

  2. (b)

    The jumps for S are given by

    $$\begin{aligned} S_{+}(\lambda )= & {} S_{-}(\lambda )\begin{pmatrix} 0 &{}\quad s_{j} \\ -s_{j}^{-1} &{}\quad 0 \end{pmatrix}, \quad \lambda \in (\lambda _{j},\lambda _{j-1}), \, j = 1,\ldots ,m+1, \\ S_{+}(\lambda )= & {} S_{-}(\lambda )\begin{pmatrix} 1 &{}\quad 0 \\ s_{j}^{-1}e^{\frac{4}{3}(r\lambda )^{3/2}} &{}\quad 1 \end{pmatrix}, \quad \lambda \in \gamma _{j,+} \cup \gamma _{j,-}, \, j = 1,\ldots ,m, \\ S_{+}(\lambda )= & {} S_{-}(\lambda )\begin{pmatrix} 1 &{}\quad 0 \\ e^{\frac{4}{3}(r\lambda )^{3/2}} &{}\quad 1 \end{pmatrix}, \quad \lambda \in \lambda _{m} +e^{\pm \frac{2\pi i}{3}} (0,+\infty ), \\ S_{+}(\lambda )= & {} S_{-}(\lambda ) \begin{pmatrix} 1 &{}\quad s_{1}e^{-\frac{4}{3}(r\lambda )^{3/2}} \\ 0 &{}\quad 1 \end{pmatrix}, \qquad \lambda \in (0, + \infty ), \end{aligned}$$

    where we set \(\lambda _{m+1}:=-\infty \) and \(\lambda _{0} := 0\).

  3. (c)

    As \(\lambda \rightarrow \infty \), we have

    $$\begin{aligned} S(\lambda ) = \left( I + \frac{T_1}{\lambda } +\frac{T_2}{\lambda ^2}+ \mathcal {O}\left( \frac{1}{\lambda ^3}\right) \right) \lambda ^{\frac{1}{4} \sigma _3} M^{-1}. \end{aligned}$$
    (5.9)
  4. (d)

    \(S(\lambda ) = \mathcal {O}( \log (\lambda -\lambda _j) )\) as \(\lambda \rightarrow \lambda _j\), \(j = 1, \ldots , m\), and \(S(\lambda ) = {\mathcal {O}}(1)\) as \(\lambda \rightarrow 0\).

Inspecting the sign of the real part of \(\lambda ^{3/2}\) on the different parts of the jump contour, we observe that the jumps for S are exponentially close to I as \(r \rightarrow + \infty \) on the lenses, and also on the rays \(\lambda _{m} + e^{\pm \frac{2\pi i}{3}}(0,+\infty )\). This convergence is uniform outside neighborhoods of \(\lambda _0,\lambda _{1},\ldots ,\lambda _{m}\), but breaks down as we let \(\lambda \rightarrow \lambda _{j}\), \(j \in \{0,1,\ldots ,m\}\).
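To make the sign conditions explicit, write \(\lambda =|\lambda |e^{i\theta }\) with \(\theta \in (-\pi ,\pi ]\); the principal branch gives

$$\begin{aligned} \mathfrak {R}\, \lambda ^{3/2} = |\lambda |^{3/2}\cos \frac{3\theta }{2}, \end{aligned}$$

which is strictly negative for \(\frac{\pi }{3}<|\theta |<\pi \) and strictly positive for \(|\theta |<\frac{\pi }{3}\). The lips of the lenses and the rays \(\lambda _{m}+e^{\pm \frac{2\pi i}{3}}(0,+\infty )\) lie in the first region, so \(e^{\frac{4}{3}(r\lambda )^{3/2}}\) decays there, while \((0,+\infty )\) lies in the second region, so \(e^{-\frac{4}{3}(r\lambda )^{3/2}}\) decays there; the decay is lost as \(\theta \rightarrow \pm \pi \), i.e. near \((-\infty ,0)\), where \(\mathfrak {R}\lambda ^{3/2}\) vanishes.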

5.3 Global parametrix

The RH problem for the global parametrix is as follows.

5.3.1 RH problem for \(P^{(\infty )}\)

  1. (a)

    \(P^{(\infty )} : \mathbb {C}\backslash (-\infty ,0] \rightarrow \mathbb {C}^{2\times 2}\) is analytic.

  2. (b)

    The jumps for \(P^{(\infty )}\) are given by

    $$\begin{aligned} P^{(\infty )}_{+}(\lambda ) = P^{(\infty )}_{-}(\lambda )\begin{pmatrix} 0 &{}\quad s_{j} \\ -s_{j}^{-1} &{}\quad 0 \end{pmatrix}, \quad \lambda \in (\lambda _{j},\lambda _{j-1}), \, j = 1,\ldots ,m+1. \end{aligned}$$
  3. (c)

    As \(\lambda \rightarrow \infty \), we have

    $$\begin{aligned} P^{(\infty )}(\lambda ) = \left( I + \frac{P^{(\infty )}_1}{\lambda } +\frac{P^{(\infty )}_2}{\lambda ^2} + \mathcal {O}\left( \frac{1}{\lambda ^3}\right) \right) \lambda ^{\frac{1}{4} \sigma _3} M^{-1}. \end{aligned}$$
    (5.10)

As \(\lambda \rightarrow 0\), we have \(P^{(\infty )}(\lambda ) ={\mathcal {O}}(\lambda ^{-\frac{1}{4}})\). As \(\lambda \rightarrow \lambda _{j}\) with \(j \in \{1,\ldots ,m\}\), we have \(P^{(\infty )}(\lambda ) = {\mathcal {O}}(1)\).

This RH problem is of the same form as the one in the case \(s_1=0\), but with an extra jump on the interval \((\lambda _1,\lambda _0)\). We can construct \(P^{(\infty )}\) in a similar way as before, by setting

$$\begin{aligned} P^{(\infty )}(\lambda )=\begin{pmatrix} 1&{}id_1\\ 0&{}1 \end{pmatrix}\lambda ^{\frac{1}{4}\sigma _3} M^{-1}D(\lambda )^{-\sigma _3}, \end{aligned}$$
(5.11)

with

$$\begin{aligned} D(\lambda )=\exp \left( \frac{\lambda ^{1/2}}{2\pi }\sum _{j=1}^{m} \log s_j \int _{\lambda _j}^{\lambda _{j-1}}(-u)^{-1/2} \frac{du}{\lambda -u}\right) . \end{aligned}$$
(5.12)

We emphasize that the sum in the above expression now starts at \(j=1\). For any positive integer k, as \(\lambda \rightarrow \infty \) we have

$$\begin{aligned} D(\lambda )= & {} \exp \left( \sum _{\ell = 1}^{k} \frac{d_{\ell }}{\lambda ^{\ell -\frac{1}{2}}} + {\mathcal {O}}(\lambda ^{-k-\frac{1}{2}}) \right) = 1 + d_{1} \lambda ^{-1/2}+\frac{d_{1}^{2}}{2}\lambda ^{-1}\nonumber \\&+\left( \frac{d_{1}^{3}}{6}+d_{2} \right) \lambda ^{-3/2} +{\mathcal {O}}(\lambda ^{-2}), \end{aligned}$$
(5.13)

where

$$\begin{aligned} d_{\ell }= & {} \sum _{j=1}^{m} \frac{(-1)^{\ell - 1} \log s_{j}}{2\pi } \int _{\lambda _{j}}^{\lambda _{j-1}}(-u)^{\ell -\frac{3}{2}}du\nonumber \\= & {} \sum _{j=1}^{m} \frac{(-1)^{\ell - 1} \log s_{j}}{\pi (2\ell -1)} \Big ( |\lambda _{j}|^{\ell -\frac{1}{2}} -|\lambda _{j-1}|^{\ell -\frac{1}{2}}\Big ). \end{aligned}$$
(5.14)

This defines the value of \(d_1\) in (5.11), and with these values of \(d_1, d_2\), the expressions (3.14) for \(P_1^{(\infty )}\) and \(P_2^{(\infty )}\) remain valid. As before, we can also write D as

$$\begin{aligned} D(\lambda ) = \prod _{j=1}^{m}D_{j}(\lambda ), \qquad D_{j}(\lambda ) = \left( \frac{(\sqrt{\lambda } -i \sqrt{|\lambda _{j-1}|})(\sqrt{\lambda } +i \sqrt{|\lambda _{j}|})}{(\sqrt{\lambda } -i \sqrt{|\lambda _{j}|})(\sqrt{\lambda } +i \sqrt{|\lambda _{j-1}|})} \right) ^{\frac{\log s_{j}}{2\pi i}}. \end{aligned}$$

This expression allows us, in a similar way as in Sect. 3, to expand \(D(\lambda )\) as \(\lambda \rightarrow \lambda _{j}\), \(\mathfrak {I}\lambda > 0\), \(j \in \{1,\ldots ,m\}\), and to show that

$$\begin{aligned} D(\lambda ) = \sqrt{s_{j}} \left( \prod _{k=1}^{m} T_{k,j}^{\frac{\log s_{k}}{2\pi i}} \right) (\lambda -\lambda _{j})^{\beta _{j}}(1+{\mathcal {O}}(\lambda -\lambda _{j})), \end{aligned}$$
(5.15)

with \(T_{k,j}\) as in (3.17) and the equations just above (3.17) (which are now defined for \(k,j \ge 1\)). The first two terms in the expansion of \(D(\lambda )\) as \(\lambda \rightarrow \lambda _{0}=0\) are given by

$$\begin{aligned} D(\lambda ) = \sqrt{s_{1}}\Big (1-d_{0}\sqrt{\lambda } +{\mathcal {O}}(\lambda )\Big ), \end{aligned}$$
(5.16)

where

$$\begin{aligned} d_{0} = \frac{\log s_{1}}{\pi \sqrt{|\lambda _{1}|}} -\sum _{j=2}^{m} \frac{\log s_{j}}{\pi } \Big ( \frac{1}{\sqrt{|\lambda _{j-1}|}} -\frac{1}{\sqrt{|\lambda _{j}|}} \Big ). \end{aligned}$$
(5.17)

Note again, for later use, that for all \(\ell \in \{0,1,2,\ldots \}\), we can rewrite \(d_{\ell }\) in terms of the \(\beta _{j}\)’s as follows,

$$\begin{aligned} d_{\ell } = \frac{2i(-1)^{\ell }}{2\ell - 1}\sum _{j=1}^{m} \beta _{j}|\lambda _{j}|^{\ell -\frac{1}{2}}, \end{aligned}$$
(5.18)

and that

$$\begin{aligned} \prod _{k=1}^{m} T_{k,j}^{\frac{\log s_{k}}{2\pi i}} = (4|\lambda _{j}|)^{-\beta _{j}}\prod _{\begin{array}{c} k=1 \\ k \ne j \end{array}}^{m} \widetilde{T}_{k,j}^{-\beta _{k}}, \quad \text{ where } \qquad \widetilde{T}_{k,j} = \frac{\left( \sqrt{|\lambda _{j}|} +\sqrt{|\lambda _{k}|}\right) ^2}{\big |\lambda _j-\lambda _k\big |}.\qquad \end{aligned}$$
(5.19)
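The equivalence of the integral representation (5.12) and this product representation can be checked numerically, one factor at a time. A minimal sketch, assuming mpmath; the values of \(\lambda \), \((\lambda _{j},\lambda _{j-1})\) and \(s_{j}\) below are illustrative only:

```python
from mpmath import mp, mpf, mpc, sqrt, log, exp, pi, quad

mp.dps = 25
lam = mpc(1, 1)                # sample evaluation point off (-inf, 0]
lj, ljm1 = mpf(-2), mpf(-1)    # sample interval (lambda_j, lambda_{j-1})
s = mpf('0.5')                 # sample value of s_j

# single-factor version of the definition (5.12)
integral = quad(lambda u: (-u)**mpf('-0.5')/(lam - u), [lj, ljm1])
D_int = exp(sqrt(lam)/(2*pi)*log(s)*integral)

# the corresponding factor D_j of the product representation
ratio = ((sqrt(lam) - 1j*sqrt(-ljm1))*(sqrt(lam) + 1j*sqrt(-lj))) \
      / ((sqrt(lam) - 1j*sqrt(-lj))*(sqrt(lam) + 1j*sqrt(-ljm1)))
print(abs(D_int - ratio**(log(s)/(2j*pi))))  # vanishes to the working precision
```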

5.4 Local parametrices

The local parametrix around \(\lambda _{j}\), \(j \in \{0,\ldots ,m\}\), denoted by \(P^{(\lambda _{j})}\), should satisfy the same jumps as S in a fixed (but sufficiently small) disk \(\mathcal {D}_{\lambda _{j}}\) around \(\lambda _{j}\). Furthermore, we require that

$$\begin{aligned} P^{(\lambda _{j})}(\lambda ) = (I+o(1))P^{(\infty )}(\lambda ), \quad \text{ as } r \rightarrow +\infty , \end{aligned}$$
(5.20)

uniformly for \(\lambda \in \partial \mathcal {D}_{\lambda _{j}}\).

5.4.1 Local parametrices around \(\lambda _{j}\), \(j = 1,\ldots ,m\).

For \(j \in \{1,\ldots ,m\}\), \(P^{(\lambda _{j})}\) can again be explicitly expressed in terms of confluent hypergeometric functions. The construction is the same as in Sect. 3, with the only difference being that \(f_{\lambda _j}\) is now defined as

$$\begin{aligned} f_{\lambda _{j}}(\lambda )= - \frac{4i}{3} \left( (-\lambda )^{3/2}-(-\lambda _{j})^{3/2} \right) , \end{aligned}$$
(5.21)

where the principal branch of \((-\lambda )^{3/2}\) is chosen. This is a conformal map from \(\mathcal {D}_{\lambda _{j}}\) to a neighborhood of 0; it satisfies \(f_{\lambda _{j}}(\mathbb {R}\cap \mathcal {D}_{\lambda _{j}})\subset i \mathbb {R}\), and its expansion as \(\lambda \rightarrow \lambda _{j}\) is given by

$$\begin{aligned} f_{\lambda _{j}}(\lambda ) = i c_{\lambda _{j}} (\lambda -\lambda _{j}) (1+{\mathcal {O}}(\lambda -\lambda _{j})), \quad \text{ with } \quad c_{\lambda _{j}} = 2|\lambda _{j}|^{1/2} > 0. \end{aligned}$$
(5.22)
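Indeed, using the principal branch, \(\frac{d}{d\lambda }(-\lambda )^{3/2} =-\frac{3}{2}(-\lambda )^{1/2}\), so that differentiating (5.21) at \(\lambda _{j}\) gives

$$\begin{aligned} f_{\lambda _{j}}'(\lambda _{j}) = -\frac{4i}{3}\Big ( -\frac{3}{2}\Big )(-\lambda _{j})^{1/2} = 2i|\lambda _{j}|^{1/2} = ic_{\lambda _{j}}, \end{aligned}$$

since \(\lambda _{j}<0\), in agreement with (5.22).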

Similarly as in Sect. 3.4.1, we define

$$\begin{aligned} P^{(\lambda _{j})}(\lambda ) = E_{\lambda _{j}}(\lambda ) \Phi _{\mathrm {HG}}(r^{3/2}f_{\lambda _{j}}(\lambda ); \beta _{j})(s_{j}s_{j+1})^{-\frac{\sigma _{3}}{4}} e^{\frac{2}{3} (r\lambda )^{3/2}\sigma _{3}}, \end{aligned}$$
(5.23)

where \(\Phi _{\mathrm {HG}}\) is the confluent hypergeometric model RH problem presented in “Appendix A.3” with parameter \(\beta =\beta _{j}\). The function \(E_{\lambda _{j}}\) is analytic inside \(\mathcal {D}_{\lambda _{j}}\) and is given by

$$\begin{aligned} E_{\lambda _{j}}(\lambda ) = P^{(\infty )}(\lambda ) (s_{j} s_{j+1})^{\frac{\sigma _{3}}{4}} \left\{ \begin{array}{l l} \displaystyle \sqrt{\frac{s_{j}}{s_{j+1}}}^{\sigma _{3}}, &{}\quad \mathfrak {I}\lambda > 0 \\ \begin{pmatrix} 0 &{} \quad 1 \\ -1 &{}\quad 0 \end{pmatrix},&\quad \mathfrak {I}\lambda < 0 \end{array} \right\} e^{-\frac{2}{3}(r\lambda _j)_+^{3/2} \sigma _{3}}(r^{3/2}f_{\lambda _{j}}(\lambda ))^{\beta _{j}\sigma _{3}}.\nonumber \\ \end{aligned}$$
(5.24)

We will need a more detailed matching condition than (5.20), which we can obtain from (A.13):

$$\begin{aligned} P^{(\lambda _{j})}(\lambda )P^{(\infty )}(\lambda )^{-1} = I + \frac{1}{r^{3/2}f_{\lambda _{j}}(\lambda )}E_{\lambda _{j}}(\lambda ) \Phi _{\mathrm {HG},1}(\beta _{j})E_{\lambda _{j}}(\lambda )^{-1} +{\mathcal {O}}(r^{-3}),\nonumber \\ \end{aligned}$$
(5.25)

as \(r \rightarrow + \infty \) uniformly for \(\lambda \in \partial \mathcal {D}_{\lambda _{j}}\). Moreover, we note for later use that

$$\begin{aligned} E_{\lambda _{j}}(\lambda _{j}) = \begin{pmatrix} 1 &{}\quad id_{1} \\ 0 &{}\quad 1 \end{pmatrix} e^{\frac{\pi i}{4}\sigma _{3}} |\lambda _{j}|^{\frac{\sigma _{3}}{4}}M^{-1}\Lambda _{j}^{\sigma _{3}}, \end{aligned}$$
(5.26)

with

$$\begin{aligned} \Lambda _{j} = \left( \prod _{\begin{array}{c} k=1 \\ k \ne j \end{array}}^{m} \widetilde{T}_{k,j}^{\beta _{k}} \right) (4|\lambda _{j}|)^{\beta _{j}} e^{-\frac{2}{3} (r\lambda _{j})_{+}^{3/2}}r^{\frac{3}{2}\beta _{j}} c_{\lambda _{j}}^{\beta _{j}}. \end{aligned}$$
(5.27)

5.4.2 Local parametrix around \(\lambda _{0} = 0\).

The local parametrix \(P^{(0)}\) can be explicitly expressed in terms of the Airy function. Such a construction is fairly standard; see, e.g., [21, 22]. We can take \(P^{(0)}\) of the form

$$\begin{aligned} P^{(0)}(\lambda ) = E_{0}(\lambda )\Phi _{\mathrm {Ai}}(r\lambda ) s_{1}^{-\frac{\sigma _{3}}{2}}e^{\frac{2}{3} (r\lambda )^{3/2}\sigma _{3}}, \end{aligned}$$
(5.28)

for \(\lambda \) in a sufficiently small disk \(\mathcal {D}_0\) around 0, and where \(\Phi _{\mathrm {Ai}}\) is the Airy model RH problem presented in “Appendix A.1”. The function \(E_{0}\) is analytic inside \(\mathcal {D}_{0}\) and is given by

$$\begin{aligned} E_{0}(\lambda ) = P^{(\infty )}(\lambda )s_{1}^{\frac{\sigma _{3}}{2}}M^{-1} \left( r \lambda \right) ^{\frac{\sigma _{3}}{4}}. \end{aligned}$$
(5.29)

A refined version of the matching condition (5.20) can be derived from (A.2): one shows that

$$\begin{aligned} P^{(0)}(\lambda )P^{(\infty )}(\lambda )^{-1} = I + \frac{1}{r^{3/2}\lambda ^{3/2}}P^{(\infty )} (\lambda )s_{1}^{\frac{\sigma _{3}}{2}}\Phi _{\mathrm {Ai},1} s_{1}^{-\frac{\sigma _{3}}{2}}P^{(\infty )}(\lambda )^{-1} + {\mathcal {O}}(r^{-3}),\nonumber \\ \end{aligned}$$
(5.30)

as \(r \rightarrow +\infty \) uniformly for \(\lambda \in \partial \mathcal {D}_{0}\), where \(\Phi _{\mathrm {Ai},1}\) is given below (A.2). An explicit expression for \(E_0(0)\) is given by

$$\begin{aligned} E_{0}(0) = -i\begin{pmatrix} 1 &{}\quad id_{1} \\ 0 &{}\quad 1 \end{pmatrix}\begin{pmatrix} 0 &{}\quad 1 \\ 1 &{}\quad -id_{0} \end{pmatrix}r^{\frac{\sigma _{3}}{4}}. \end{aligned}$$
(5.31)

5.5 Small norm problem

As in Sect. 3.5, we define R as

$$\begin{aligned} R(\lambda ) = \left\{ \begin{array}{l l} S(\lambda )P^{(\infty )}(\lambda )^{-1}, &{}\quad \text{ for } \lambda \in \mathbb {C}{\setminus } \bigcup _{j=0}^{m}\mathcal {D}_{\lambda _{j}}, \\ S(\lambda )P^{(\lambda _{j})}(\lambda )^{-1}, &{}\quad \text{ for } \lambda \in \mathcal {D}_{\lambda _{j}}, \, j \in \{0,\ldots ,m\}, \end{array} \right. \end{aligned}$$
(5.32)

and we can conclude in the same way as in Sect. 3.5 that (3.38) and (3.46) hold, uniformly for \(\beta _1,\beta _2,\ldots , \beta _m\) in compact subsets of \(i\mathbb {R}\), and for \(\tau _1,\ldots , \tau _m\) such that \(\tau _1<-\delta \) and \(\min _{1\le k\le m-1}\{\tau _k-\tau _{k+1}\}>\delta \) for some \(\delta >0\), with

$$\begin{aligned} R^{(1)}(\lambda ) = \frac{1}{2\pi i}\int _{\bigcup _{j=0}^{m} \partial \mathcal {D}_{\lambda _{j}}} \frac{J_{R}^{(1)}(s)}{s-\lambda }ds, \end{aligned}$$
(5.33)

where \(J_R\) is the jump matrix for R and \(J_R^{(1)}\) is defined by (3.39).

A difference from Sect. 3.5 is that \(J_R^{(1)}\) now has a double pole at \(\lambda =0\), by (5.30). At the other singularities \(\lambda _j\), it has simple poles as before. If \(\lambda \in \mathbb {C}{\setminus } \bigcup _{j=0}^{m}\mathcal {D}_{\lambda _{j}}\), a residue calculation yields

$$\begin{aligned} R^{(1)}(\lambda ) = \sum _{j=1}^{2}\frac{1}{\lambda ^{j}} \text{ Res }(J_{R}^{(1)}(s)s^{j-1},s = 0)+ \sum _{j=1}^{m} \frac{1}{\lambda -\lambda _{j}}\text{ Res }(J_{R}^{(1)}(s),s = \lambda _{j}).\nonumber \\ \end{aligned}$$
(5.34)

From (5.30), we deduce

$$\begin{aligned} \text{ Res }\left( J_{R}^{(1)}(s)s,s=0 \right) = \frac{5d_{1}}{48}\begin{pmatrix} -1 &{}\quad id_{1} \\ id_{1}^{-1} &{}\quad 1 \end{pmatrix} \end{aligned}$$
(5.35)

and

$$\begin{aligned} \text{ Res }\left( J_{R}^{(1)}(s),s=0 \right) = \frac{1}{4} \begin{pmatrix} -d_{1} d_{0}^{2}-d_{0} &{}\quad i\big (d_{0}^{2}d_{1}^{2}+2d_{0}d_{1} + \frac{7}{12}\big ) \\ id_{0}^{2} &{}\quad d_{1}d_{0}^{2}+d_{0} \end{pmatrix}. \end{aligned}$$
(5.36)

By (5.25)–(5.27), for \(j \in \{1,\ldots ,m\}\), we have

$$\begin{aligned}&\text{ Res }\left( J_{R}^{(1)}(s),s=\lambda _{j} \right) \\&\quad = \frac{\beta _{j}^{2}}{ic_{\lambda _{j}}}\begin{pmatrix} 1 &{}\quad id_{1} \\ 0 &{}\quad 1 \end{pmatrix}e^{\frac{\pi i}{4}\sigma _{3}} |\lambda _{j}|^{\frac{\sigma _{3}}{4}} M^{-1} \begin{pmatrix} -1 &{}\quad \widetilde{\Lambda }_{j,1} \\ -\widetilde{\Lambda }_{j,2} &{}\quad 1 \end{pmatrix} M |\lambda _{j}|^{-\frac{\sigma _{3}}{4}} e^{-\frac{\pi i}{4}\sigma _{3}}\begin{pmatrix} 1 &{}\quad -id_{1} \\ 0 &{}\quad 1 \end{pmatrix}, \end{aligned}$$

where

$$\begin{aligned} \widetilde{\Lambda }_{j,1} = \frac{-\Gamma (-\beta _{j})}{\Gamma (1+\beta _{j})} \Lambda _{j}^{2} \qquad \text{ and } \qquad \widetilde{\Lambda }_{j,2} =\frac{-\Gamma (\beta _{j})}{\Gamma (1-\beta _{j})}\Lambda _{j}^{-2}. \end{aligned}$$
(5.37)

6 Integration of the Differential Identity for \(s_{1} > 0\)

As in Sect. 4, (2.20) yields

$$\begin{aligned} \partial _{s_m}\log F(r\vec {\tau };\vec {s}) =A_{\vec {\tau }, \vec {s}}(r)+\sum _{j=1}^m B_{\vec {\tau }, \vec {s}}^{(j)}(r), \end{aligned}$$
(6.1)

with

$$\begin{aligned}&A_{\vec {\tau }, \vec {s}}(r) =i\partial _{s_m} \left( \Psi _{2,21}-\Psi _{1,12}+\frac{r\tau _{m}}{2}\Psi _{1,21}\right) +i \Psi _{1,11} \partial _{s_m}\Psi _{1,21}-i \Psi _{1,21} \partial _{s_m}\Psi _{1,11},\\&B_{\vec {\tau }, \vec {s}}^{(j)}(r)=\frac{s_{j+1}-s_j}{2\pi i} \left( G_j^{-1}\partial _{s_m}G_j\right) _{21}(r \eta _j) =\frac{s_{j+1}-s_j}{2\pi i} \left( \Psi ^{-1}\partial _{s_m}\Psi \right) _{21}(r \eta _j), \end{aligned}$$

where we set \(s_{m+1}=1\). We assume in what follows that \(m\ge 2\).

For the computation of \(A_{\vec {\tau }, \vec {s}}(r)\), we start from the expansion (4.3), which continues to hold for \(s_1>0\), but now with \(P_1^{(\infty )}\) and \(P_2^{(\infty )}\) as in Sect. 5 (i.e. defined by (3.14) but with \(d_1, d_2\) given by (5.18)), and with \(R_{1}^{(1)}\) and \(R_{2}^{(1)}\) defined through the expansion

$$\begin{aligned} R^{(1)}(\lambda ) = \frac{R_{1}^{(1)}}{\lambda } +\frac{R_{2}^{(1)}}{\lambda ^{2}} + {\mathcal {O}}(\lambda ^{-3}), \qquad \text{ as } \lambda \rightarrow \infty , \end{aligned}$$
(6.2)

corresponding to the function \(R^{(1)}\) from Sect. 5, given in (5.33).

Using (3.14), (3.38), (3.46), (5.3)–(5.6), (5.22) and (5.34), we obtain after a long computation the following explicit large r expansion

$$\begin{aligned} A_{\vec {\tau }, \vec {s}}(r)= & {} - 2\partial _{s_{m}}d_{2} r^{3/2} +\frac{d_{0}\partial _{s_{m}}d_{1}}{2} - \partial _{s_{m}}d_{1} \sum _{j=1}^{m} \frac{\beta _{j}^{2}}{c_{\lambda _{j}}} \big (\widetilde{\Lambda }_{j,1}+\widetilde{\Lambda }_{j,2} \big )\nonumber \\&-\sum _{j=1}^{m} \partial _{s_{m}}(\beta _{j}^{2}) +{\mathcal {O}}\Big (\frac{\log r}{r^{3/2}}\Big ). \end{aligned}$$
(6.3)

For the terms \(B_{\vec {\tau }, \vec {s}}^{(j)}(r)\), we proceed as before, splitting each of them in the same way as in (4.6). We can carry out the same analysis as in Sect. 4 for each of the resulting terms. We note that the term corresponding to \(j=1\) can now be computed in the same way as the terms with \(j=2,\ldots , m\). This gives, analogously to (4.10),

$$\begin{aligned} \sum _{j=1}^mB_{\vec {\tau },\vec {s}}^{(j)}(r)&=\frac{\beta _{m-1}}{2\pi is_m} \partial _{\beta _{m-1}}\log \frac{\Gamma (1+\beta _{m-1})}{\Gamma (1-\beta _{m-1})}- \frac{\beta _m}{2\pi is_m}\partial _{\beta _{m}} \log \frac{\Gamma (1+\beta _{m})}{\Gamma (1-\beta _{m})}\nonumber \\&\quad +\,\sum _{j=1}^m\left( -2\beta _j\partial _{s_m}\log \Lambda _j + \frac{\partial _{s_m}d_1}{2|\lambda _j|^{1/2}} \left( 2i\beta _j +\beta _j^2(\tilde{\Lambda }_{j,1} +\tilde{\Lambda }_{j,2})\right) \right) \nonumber \\&\quad +\mathcal {O} \Big (\frac{\log r}{r^{3/2}}\Big ) \end{aligned}$$
(6.4)

as \(r\rightarrow +\infty \).

Summing up (6.3) and (6.4) and using the expressions (5.22) for \(c_{\lambda _{j}}\) and (5.18) for \(d_0\), we obtain the large r asymptotics

$$\begin{aligned} \partial _{s_m}\log F(r\vec {\tau };\vec {s})= & {} - 2\partial _{s_{m}}d_{2} r^{3/2} -\sum _{j=1}^{m} \partial _{s_{m}}(\beta _{j}^{2}) -\sum _{j=1}^m 2\beta _j\partial _{s_m}\log \Lambda _j \nonumber \\&+\,\frac{\beta _{m-1}}{2\pi is_m}\partial _{\beta _{m-1}} \log \frac{\Gamma (1+\beta _{m-1})}{\Gamma (1-\beta _{m-1})}\nonumber \\&-\, \frac{\beta _m}{2\pi is_m}\partial _{\beta _{m}} \log \frac{\Gamma (1+\beta _{m})}{\Gamma (1-\beta _{m})} +\mathcal {O} \Big (\frac{\log r}{r^{3/2}}\Big ) , \end{aligned}$$
(6.5)

uniformly for \(\beta _1,\beta _2,\ldots , \beta _m\) in compact subsets of \(i\mathbb {R}\), and for \(\tau _1,\ldots , \tau _m\) such that \(\tau _1<-\delta \) and \(\min _{1\le k\le m-1}\{\tau _k-\tau _{k+1}\}>\delta \) for some \(\delta >0\). Next, we observe that (5.27) implies the identity

$$\begin{aligned} -\sum _{j=1}^{m} 2\beta _{j} \partial _{s_{m}} \log \Lambda _{j}= & {} -2 \sum _{j=1}^{m} \beta _{j} \partial _{s_{m}}(\beta _{j}) \log (8|\lambda _{j}|^{3/2}r^{3/2})\nonumber \\&-2\sum _{j=1}^{m} \beta _{j} \sum _{\begin{array}{c} \ell = 1 \\ \ell \ne j \end{array}}^{m} \partial _{s_{m}}(\beta _{\ell })\log (\widetilde{T}_{\ell ,j}). \end{aligned}$$
(6.6)

Substituting this identity and using the fact that \(\lambda _j=\tau _j\), we find after a straightforward calculation [using also (1.6)] that, uniformly in \(\vec {\tau }\) and \(\vec {\beta }\) as \(r\rightarrow +\infty \),

$$\begin{aligned} \partial _{s_m}\log F(r\vec {\tau };\vec {s})= & {} - 2\partial _{s_{m}}d_{2} r^{3/2} -\sum _{j=1}^{m}\partial _{s_{m}}(\beta _{j}^{2}) +\frac{\beta _m}{\pi i s_m}\log (8|\tau _{m}|^{3/2}r^{3/2})\nonumber \\&-\,\frac{\beta _{m-1}}{\pi is_m}\log (8|\tau _{m-1}|^{3/2}r^{3/2}) +\frac{\beta _{m-1}}{2\pi is_m}\partial _{\beta _{m-1}} \log \frac{\Gamma (1+\beta _{m-1})}{\Gamma (1-\beta _{m-1})}\nonumber \\&-\,\frac{\beta _m}{2\pi is_m}\partial _{\beta _{m}} \log \frac{\Gamma (1+\beta _{m})}{\Gamma (1-\beta _{m})}\nonumber \\&-\,2\sum _{j=1}^{m} \beta _{j} \sum _{\begin{array}{c} \ell = 1 \\ \ell \ne j \end{array}}^{m} \partial _{s_{m}}(\beta _{\ell })\log (\widetilde{T}_{\ell ,j}) +\mathcal {O} \Big (\frac{\log r}{r^{3/2}}\Big ). \end{aligned}$$
(6.7)

We are now ready to integrate this in \(s_m\). Recall that we need to integrate \(s_m'=e^{-2\pi i\beta _m'}\) from 1 to \(s_m=e^{-2\pi i\beta _m}\), which means that we let \(\beta _m'\) go from 0 to \(-\frac{\log s_m}{2\pi i}\), and at the same time \(\beta _{m-1}'\) go from \(\hat{\beta }_{m-1}:=-\frac{\log s_{m-1}}{2\pi i}\) to \(\beta _{m-1}=\frac{\log s_{m}}{2\pi i}-\frac{\log s_{m-1}}{2\pi i}\). We then obtain, using (4.19) and (5.18), and writing \(\vec {s}_0:=(s_1,\ldots , s_{m-1},1)\),

$$\begin{aligned} \log \frac{F(r\vec {\tau };\vec {s})}{F(r\vec {\tau };\vec {s}_0)}= & {} -2\pi i\beta _m\mu (r\tau _m)-\beta _m^2\log (8|\tau _{m}|^{3/2}r^{3/2})\nonumber \\&-\,(\beta _{m-1}^2 - \hat{\beta }_{m-1}^2) \log (8|\tau _{m-1}|^{3/2}r^{3/2})\nonumber \\&+\,\log G(1+\beta _m)G(1-\beta _m)+\log \frac{G(1+\beta _{m-1}) G(1-\beta _{m-1})}{G(1+\hat{\beta }_{m-1})G(1-\hat{\beta }_{m-1})} \nonumber \\&-\,2\sum _{j=1}^{m-2}\beta _{j}\beta _{m}\left( \log (\widetilde{T}_{m,j}) -\log (\widetilde{T}_{m-1,j})\right) \nonumber \\&-\,2 \beta _m \beta _{m-1} \log (\widetilde{T}_{m-1,m}) +{\mathcal {O}}\Big ( \frac{\log r}{r^{3/2}}\Big ) \end{aligned}$$
(6.8)

as \(r\rightarrow +\infty \), where \(\mu (x)\) is as in Theorem 1.1.

We can now conclude the proof of Theorem 1.1 by induction on m. For \(m=1\), we have (1.5). Assuming that the result (1.11) holds for \(m-1\) singularities, we know the asymptotics for \(F(r\vec {\tau };\vec {s}_0)=E(r\tau _1,\ldots , r\tau _{m-1};\beta _1,\ldots ,\beta _{m-2}, \hat{\beta }_{m-1})\). Substituting these asymptotics in (6.8) and using (5.19), we obtain

$$\begin{aligned} \log F(r\vec {\tau };\vec {s})=C_1r^{3/2}+C_2\log r + C_3 + {\mathcal {O}}\Big ( \frac{\log r}{r^{3/2}}\Big ), \qquad r\rightarrow +\infty , \end{aligned}$$

with

$$\begin{aligned}&C_1=-2\pi i \sum _{j=1}^m \beta _{j}\mu (\tau _j), \quad C_2=-\frac{3}{2}\sum _{j=1}^m\beta _j^2,\\&C_3=\sum _{j=1}^m\log G(1+\beta _j)G(1-\beta _j) -\frac{3}{2}\sum _{j=1}^m\beta _j^2\log (4|\tau _j|) -2\sum _{1\le k<j\le m}\beta _{j}\beta _{k}\log \widetilde{T}_{j,k}. \end{aligned}$$

From this expansion, it is straightforward to derive (1.11). The expansion (1.9) follows from (1.5) after another straightforward calculation. This concludes the proof of Theorem 1.1.