1 Introduction

Since the pioneering work of Lee and Yang [30, 44], much attention in the statistical physics literature has been paid to studying partition functions of various models at complex values of parameters such as complex inverse temperature or complex external magnetic field; see, e.g., [3, 5]. These studies are sometimes referred to as the Lee–Yang program. The motivation is to identify the mechanisms causing phase transitions of the model under study. These transitions manifest themselves in the breaking of analyticity of the logarithm of the partition function which, in turn, is related to the complex zeros of the partition function. Phase transitions are thus associated with the accumulation points of the complex zeros of the partition function on the real axis, in the large system limit. In this respect, complex-valued parameters provide a clean framework for the identification of phase transitions.

The main emphasis of the Lee–Yang program was on the classical lattice models of statistical mechanics. In this work, we focus on the simplest model of a spin glass [7, 43]: the random energy model (REM) introduced by Derrida [11, 12]. Let \(X,X_1,X_2,\ldots \) be independent real standard normal random variables. The partition function of REM at inverse temperature \(\beta \) is defined by

$$\begin{aligned} \mathcal{Z }_N(\beta )=\sum _{k=1}^N \mathrm{e}^{\beta \sqrt{n} X_k}. \end{aligned}$$
(1.1)

Here, \(N\) is a large integer, and we use the notation \(n=\log N\). For real inverse temperature \(\beta >0\) the asymptotic behavior of \(\mathcal{Z }_N(\beta )\) as \(N \rightarrow \infty \) (or equivalently, as \(n \rightarrow \infty \)) has been extensively studied in the literature; see [2, 7, 8, 17, 37]. Specifically, the limiting log-partition function is given by the formula

$$\begin{aligned} p(\beta ) :=\lim _{N \rightarrow \infty } \frac{1}{n} \log | \mathcal{Z }_N(\beta )|= \left\{ \begin{array}{l@{\quad }l} 1+\frac{1}{2}\beta ^2,&0\le \beta \le \sqrt{2},\\ \sqrt{2}\beta ,&\beta \ge \sqrt{2}. \end{array}\right. \end{aligned}$$
(1.2)

Convergence (1.2) holds both a.s. and in \(L^q\), \(q\ge 1\); see [7, 17, 37]. On the level of fluctuations, it was shown in [8] that upon suitable rescaling, the random variable \(\mathcal{Z }_N(\beta )\) becomes asymptotically Gaussian for \(\beta \le \sqrt{2}/2\), whereas for \(\beta >\sqrt{2}/2\) it has a non-Gaussian stable limiting distribution, as \(N \rightarrow \infty \).
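The real-\(\beta \) asymptotics (1.2) is easy to probe by simulation. The following sketch (an illustration only, not part of any argument in this paper; the sample size, seed, and function names are ours) estimates \(p_N(\beta )=\frac{1}{n}\log \mathcal{Z }_N(\beta )\) via a log-sum-exp evaluation and compares it with (1.2). Since \(n=\log N\), convergence is logarithmically slow, so only rough agreement should be expected, especially in the low-temperature regime \(\beta >\sqrt{2}\).

```python
import numpy as np

rng = np.random.default_rng(0)

def log_partition(N, beta, rng):
    """One-sample estimate of p_N = (1/n) log Z_N(beta), n = log N, real beta."""
    n = np.log(N)
    a = beta * np.sqrt(n) * rng.standard_normal(N)
    m = a.max()                      # log-sum-exp for numerical stability
    return (m + np.log(np.exp(a - m).sum())) / n

def p_limit_real(beta):
    """The limit (1.2), for real beta >= 0."""
    return 1 + beta**2 / 2 if beta <= np.sqrt(2) else np.sqrt(2) * beta

for beta in [0.5, 1.0, 2.0]:
    print(f"beta = {beta}: p_N ~ {log_partition(10**6, beta, rng):.3f}, "
          f"p = {p_limit_real(beta):.3f}")
```

For \(\beta >\sqrt{2}\) the sample estimate undershoots the limit noticeably at \(N=10^6\), reflecting the slow convergence of the maximum of \(N\) Gaussians to \(\sqrt{2n}\).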

Using heuristic arguments, Derrida [13] studied the REM at complex inverse temperature \(\beta =\sigma +i\tau \). He derived the following logarithmic asymptotics extending (1.2) to the complex plane:

$$\begin{aligned} p(\beta ) :=\lim _{N \rightarrow \infty } \frac{1}{n} \log | \mathcal{Z }_N(\beta )|= \left\{ \begin{array}{l@{\quad }l} 1+\frac{1}{2}(\sigma ^2 - \tau ^2),&\beta \in \overline{B}_{1},\\ \sqrt{2} |\sigma |,&\beta \in \overline{B}_{2},\\ \frac{1}{2}+\sigma ^2,&\beta \in \overline{B}_{3}, \end{array}\right. \end{aligned}$$
(1.3)

where \(B_{1}, B_{2}, B_{3}\) are three subsets of the complex plane (see Fig. 1) defined by

$$\begin{aligned} B_{1}&= \mathbb{C }\backslash \overline{B_2\cup B_3}, \end{aligned}$$
(1.4)
$$\begin{aligned} B_{2}&= \{\beta \in \mathbb{R }^2 :2\sigma ^2 > 1, |\sigma |+|\tau | > \sqrt{2}\}, \end{aligned}$$
(1.5)
$$\begin{aligned} B_{3}&= \{ \beta \in \mathbb{R }^2 :2\sigma ^2 < 1, \sigma ^2+\tau ^2> 1\}. \end{aligned}$$
(1.6)

Here, \(\bar{A}\) denotes the closure of the set \(A\). Note that the limiting log-partition function p is continuous.
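The phase diagram (1.4)–(1.6) and the continuity of \(p\) can be encoded in a few lines. The sketch below (illustrative only; the function names are ours, and boundary points are arbitrarily assigned to \(\overline{B}_1\)) classifies a point \(\beta =\sigma +i\tau \) and evaluates (1.3). One can check numerically, for instance, that the formulas for \(p\) on \(B_1\) and \(B_3\) agree across the circular boundary \(\sigma ^2+\tau ^2=1\).

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def phase(beta):
    """Classify beta = sigma + i*tau into the phases (1.4)-(1.6);
    boundary points are lumped into the closure of B1."""
    s, t = abs(beta.real), abs(beta.imag)
    if 2 * s**2 > 1 and s + t > SQRT2:
        return "B2"
    if 2 * s**2 < 1 and s**2 + t**2 > 1:
        return "B3"
    return "B1"

def p_limit(beta):
    """Limiting log-partition function (1.3)."""
    s, t = beta.real, beta.imag
    return {"B1": 1 + (s**2 - t**2) / 2,
            "B2": SQRT2 * abs(s),
            "B3": 0.5 + s**2}[phase(beta)]

print(phase(0.5 + 0.5j))   # 2*sigma^2 < 1 and sigma^2 + tau^2 < 1 -> B1
print(phase(0.3 + 1.2j))   # 2*sigma^2 < 1 and sigma^2 + tau^2 > 1 -> B3
print(phase(1.0 + 1.0j))   # 2*sigma^2 > 1 and |sigma| + |tau| > sqrt(2) -> B2
```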

Fig. 1

Complex zeros of \(\mathcal{Z }_N\) in the large \(N\) limit. There are three phases: \(B_1\) (white: no zeros), \(B_2\) (light gray: density of zeros of order 1), \(B_3\) (dark gray: density of zeros of order \(n\)). On the boundary of \(B_1\) the linear density of zeros is of order \(n\). The plot also shows the contour lines (gray curves and lines) of the log-partition function \(p\)

To derive (1.3), Derrida [13] used an approach which can roughly be described as follows. Instead of \(\mathcal{Z }_N(\beta )\), one can consider the truncated sum

$$\begin{aligned} \mathcal{Z }_N^{*}(\beta )=\sum _{k=1}^N \mathrm{e}^{\beta \sqrt{n} X_k}{\small 1}\!\!1_{|X_k|<\sqrt{2n}}. \end{aligned}$$

Indeed, with high probability it holds that \(\mathcal{Z }_N^{*}(\beta )=\mathcal{Z }_N(\beta )\) since the maximum of \(|X_1|,\ldots ,|X_N|\) is of order \(\sqrt{2n}\) and the probability that some outlier satisfies \(|X_k|>\sqrt{2n}\) converges to 0. Note, however, that although \(\mathcal{Z }_N^{*}(\beta )\) and \(\mathcal{Z }_N(\beta )\) are close in probability, their expectations (and standard deviations) may differ substantially, at least for some values of \(\beta \). Derrida derived an asymptotic formula for the expectation of \(\mathcal{Z }_N^{*}(\beta )\), as \(N\rightarrow \infty \), using the saddle-point method. Two cases are possible: the expectation is dominated by the energies \(X_k\) inside the interval \(({-}\sqrt{2n}, \sqrt{2n})\) (equivalently, the contribution of the saddle point dominates the expectation), or by the energies located near one of the boundary points \({\pm }\sqrt{2n}\). He also obtained two similar cases for the standard deviation of \(\mathcal{Z }_N^{*}(\beta )\). Comparing the resulting four formulas, Derrida discovered the three phases \(B_1,B_2,B_3\). The arguments of Derrida [13] are not fully rigorous, although it should be emphasized that he did not use the replica method or other standard non-rigorous spin glass techniques. In the present paper, we make the argument of Derrida rigorous and refine his results by deriving distributional limit theorems for the fluctuations of \(\mathcal{Z }_N(\beta )\) (and for the fluctuations in some more general models, see Sect. 2.3) at complex \(\beta \). An essential feature of the REM at complex temperature is the possibility of cancellation of terms in \(\mathcal{Z }_N(\beta )\) due to the presence of complex amplitudes. It is for this reason that some standard techniques of rigorous spin glass theory [43], such as concentration inequalities or the second-moment method, do not (or do not always) lead to the desired result.
These difficulties will be discussed in more detail in Sect. 3.3.
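The truncation step can be quantified in closed form: since \(\mathbb{P }[|X|\ge \sqrt{2n}]=\mathrm{erfc}(\sqrt{n})\), the probability that the truncation is active is \(\mathbb{P }[\max _k |X_k|\ge \sqrt{2n}]=1-(1-\mathrm{erfc}(\sqrt{n}))^N\), which is of order \(1/\sqrt{\pi n}\) and hence tends to 0, albeit only logarithmically in \(N\). A short illustrative computation (not needed for the arguments of the paper):

```python
import math

def outlier_probability(N):
    """P[ max_k |X_k| >= sqrt(2n) ] for N iid standard normals, n = log N."""
    n = math.log(N)
    p_one = math.erfc(math.sqrt(n))              # P(|X| >= sqrt(2n))
    return -math.expm1(N * math.log1p(-p_one))   # 1 - (1 - p_one)^N

for N in [10**3, 10**6, 10**12]:
    print(N, outlier_probability(N))
```

The printed probabilities decay, but very slowly, which is consistent with the \(1/\sqrt{\pi n}\) rate.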

Based on his formula (1.3) for the limiting log-partition function, Derrida [13] computed the asymptotic distribution of zeros of \(\mathcal{Z }_N\) in the complex plane. His predictions were in good agreement with the numerical simulations of Moukarzel and Parga [32]. Derrida observed that since \(\mathcal{Z }_N(\beta )\) is an analytic function of \(\beta \), its empirical distribution of zeros (a measure \(\Xi _N\) assigning to every zero of \(\mathcal{Z }_N\) a weight equal to its multiplicity) is given by

$$\begin{aligned} \Xi _N=\frac{1}{2\pi }\Delta \log |\mathcal{Z }_N| , \end{aligned}$$
(1.7)

where \(\Delta =\frac{\partial ^2}{\partial \sigma ^2}+\frac{\partial ^2}{\partial \tau ^2} \) denotes the Laplace operator in the \(\beta \)-plane. In fact, identity (1.7) should rigorously be understood in the sense of distributions (=generalized functions), cf. Remark 2.2. Taking the large \(N\) limit, Derrida obtained the formula \(\frac{n}{2\pi } \Delta p\) for the asymptotic distribution of zeros of \(\mathcal{Z }_N\). Since the function \(p\) is harmonic in \(B_1\) and \(B_2\), Derrida predicted that “there should be no zeros (or at least that the density of zeros vanishes) in phases \(B_1\) and \(B_2\)”. In phase \(B_3\), “the density of zeros is uniform” and is asymptotic to \(\frac{n}{\pi }\). Also, since the normal derivative of \(p\) has a jump on the boundary of \(B_1\), but has no jump on the boundary between \(B_2\) and \(B_3\), “the boundaries between phases \(B_1\) and \(B_2\), and between phases \(B_1\) and \(B_3\) are lines of zeros whereas the separation between phases \(B_2\) and \(B_3\) is not”. The argument of Derrida involves interchanging the Laplace operator and the large \(N\) limit. In the present paper, we justify Derrida’s approach rigorously and derive further results on the distribution of zeros of \(\mathcal{Z }_N\). Namely, we relate the zeros of \(\mathcal{Z }_N\) to the zeros of two random analytic functions: a Gaussian analytic function \(\mathbb{G }\) (in phase \(B_3\)), and a zeta-function \(\zeta _P\) associated to the Poisson process (in phase \(B_2\)). Also, we will clarify the local structure of the mysterious “lines of zeros” on the boundary of \(B_1\).
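Identity (1.7) can be probed numerically via the argument principle: the number of zeros of \(\mathcal{Z }_N\) inside a rectangle equals the winding number of \(\mathcal{Z }_N\) along its boundary. The sketch below is a heuristic check only (the sample size \(N\), the rectangles, and the discretization are ad hoc, and a too coarse discretization can miscount). It counts zeros in a rectangle inside \(B_3\), where Derrida's density \(\frac{n}{2\pi }\Delta p\) predicts roughly \(n|A|/\pi \) zeros in a region of area \(|A|\), and in a rectangle well inside \(B_1\), where no zeros are expected.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000
n = np.log(N)
X = rng.standard_normal(N)

def Z(beta):
    """One sampled partition function (1.1), evaluated at complex beta."""
    return np.exp(beta * np.sqrt(n) * X).sum()

def count_zeros(corner, width, height, steps=4000):
    """Zeros of Z inside an axis-parallel rectangle (lower-left corner given),
    counted as the winding number of Z along the rectangle's boundary."""
    v = [corner, corner + width, corner + width + 1j * height,
         corner + 1j * height, corner]
    pts = np.concatenate([a + (b - a) * np.linspace(0, 1, steps, endpoint=False)
                          for a, b in zip(v, v[1:])])
    vals = np.array([Z(p) for p in pts])
    dphi = np.angle(np.roll(vals, -1) / vals)   # per-step phase increments
    return int(round(dphi.sum() / (2 * np.pi)))

# A rectangle inside B3 (sigma in [-0.5, 0.5], tau in [1.1, 4.1], area 3)
print("B3 rectangle:", count_zeros(-0.5 + 1.1j, 1.0, 3.0),
      "zeros; predicted about", round(n * 3.0 / np.pi, 1))
# A small rectangle deep inside B1: no zeros expected
print("B1 rectangle:", count_zeros(0.05 + 0.05j, 0.25, 0.25), "zeros")
```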

For the partition function of REM, considered as a function of a complex external magnetic field, a non-rigorous analysis similar to that of Derrida [13] has been carried out by Moukarzel and Parga [33, 34]. For directed polymers with complex weights on a tree, which is another related model, the logarithmic asymptotics (1.3) has been derived in [14]; see also [4, 29]. Recently, Takahashi [42] and Obuchi and Takahashi [36] studied the complex zeros in the generalized REM and other spin glass models using the non-rigorous replica method. However, spin glasses at complex temperature have not been much studied rigorously in the mathematics literature. Our aim is to fill this gap.

Substantial motivation for the setup with complex random energies comes from quantum mechanics. There, the sums of random exponentials with complex-valued exponents arise naturally in the models of interference in inhomogeneous media [14, 15], and in the studies of the quantum Monte Carlo method [16].

The sum of random exponentials \(\mathcal{Z }_N\) is a natural random analytic function exhibiting, despite its simple form, rather non-trivial behavior. We hope that the methods developed to study this function can be applied to other random analytic functions, for example to random polynomials or random Taylor series. For a recent work in this direction, we refer to [27, 28]. Also, \(\mathcal{Z }_N\) can be interpreted as a (normalized) characteristic function of the i.i.d. normal sample \(X_1,\ldots ,X_N\). This connection will be discussed in Sect. 2.4.

The paper is organized as follows. After introducing some notation in Sect. 2.1, we state our results on zeros and fluctuations in Sects. 2.2 and 2.3, respectively. Proofs can be found in Sects. 3 and 4. In Sect. 2.4, we discuss possible extensions and open problems related to our results.

2 Statement of results

2.1 Notation

We will write the complex inverse temperature \(\beta \) in the form \(\beta =\sigma +i\tau \), where \(\sigma ,\tau \in \mathbb{R }\). We use the notation \(n=\log N\), where \(N\) is a large integer and the logarithm is natural. Note that in the physics literature on the REM, it is customary to take the logarithm to base 2. Replacing \(\beta \) by \(\beta /\sqrt{\log 2}\) in our results, we can easily switch to the physics notation.

We denote by \(N_{\mathbb{R }}(0,s^2)\) the real Gaussian distribution with mean zero and variance \(s^2>0\). By \(N_{\mathbb{C }}(0,s^2)\), we denote the complex Gaussian distribution with density

$$\begin{aligned} z\mapsto \frac{1}{\pi s^2} \mathrm{e}^{-|z/s|^2} \end{aligned}$$

w.r.t. the Lebesgue measure on \(\mathbb{C }\). Note that \(Z\sim N_{\mathbb{C }}(0,s^2)\) iff \(Z=X+iY\), where \(X,Y\sim N_{\mathbb{R }}(0,s^2/2)\) are independent. In this case, \(\mathbb{E }Z=0\) and \(\mathbb{E }|Z|^2=s^2\). A real or complex normal distribution is referred to as standard if \(s=1\). The standard normal distribution function is denoted by \(\Phi \).
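As a quick sanity check of this normalization (illustrative only; sample size and seed are arbitrary), one can sample \(Z=X+iY\) with independent \(X,Y\sim N_{\mathbb{R }}(0,s^2/2)\) and verify \(\mathbb{E }Z=0\) and \(\mathbb{E }|Z|^2=s^2\):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_N_C(s, size, rng):
    """Draws from N_C(0, s^2): Z = X + iY with independent X, Y ~ N_R(0, s^2/2)."""
    scale = s / np.sqrt(2)
    return rng.normal(0.0, scale, size) + 1j * rng.normal(0.0, scale, size)

Z = sample_N_C(2.0, 10**6, rng)
print(np.mean(Z))                # close to 0
print(np.mean(np.abs(Z)**2))     # close to s^2 = 4
```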

Convergence in probability and weak (distributional) convergence will be denoted by \(\overset{{P}}{\underset{}{\longrightarrow }}\) and \(\overset{{w}}{\underset{}{\longrightarrow }}\), respectively. We let \(C\) denote a generic positive constant whose value may change from one occurrence to the next.

2.2 Results on zeros

Let \(\mathcal{Z }_N\) be the partition function of the REM defined as in (1.1). Note the distributional equalities

$$\begin{aligned} \mathcal{Z }_N(\beta )\stackrel{{d}}{=}\mathcal{Z }_N(-\beta ),\quad \mathcal{Z }_N(\overline{\beta })\stackrel{{d}}{=}\overline{\mathcal{Z }_N(\beta )}. \end{aligned}$$
(2.1)

Due to (2.1), it is often enough to consider the case \(\sigma ,\tau \ge 0\). The next result describes the global structure of complex zeros of \(\mathcal{Z }_N\), as \(N\rightarrow \infty \). Let \(\Xi _3\) be the Lebesgue measure restricted to \(B_3\). Also, let \(\Xi _{13}\) be the one-dimensional length measure on the boundary between \(B_1\) and \(B_3\) (which consists of two circular arcs). Finally, let \(\Xi _{12}\) be a measure having the density \(\sqrt{2} |\tau |\) with respect to the one-dimensional length measure restricted to the boundary between \(B_1\) and \(B_2\) (which consists of four line segments). Define a measure \(\Xi =2\Xi _3+\Xi _{12}+\Xi _{13}\).

Theorem 2.1

For every continuous function \(f:\mathbb{C }\rightarrow \mathbb{R }\) with compact support,

$$\begin{aligned} \frac{1}{n} \sum _{\beta \in \mathbb{C }:\mathcal{Z }_N(\beta )=0}f(\beta ) \overset{{P}}{\underset{N\rightarrow \infty }{\longrightarrow }}\frac{1}{2\pi }\int \limits _{\mathbb{C }} f(\beta ) \Xi (\mathrm{d}\beta ). \end{aligned}$$
(2.2)

Remark 2.2

As a consequence, the random measure assigning a weight \(1/n\) to each zero of \(\mathcal{Z }_N\) converges weakly to the deterministic measure \(\frac{1}{2\pi } \Xi \). The limit measure \(\Xi \) is related to the limiting log-partition function \(p\), see (1.3), by the formula \(\Xi =\Delta p\), in accordance with [13]. Here, \(\Delta \) is the Laplace operator which should be understood in the distributional sense. The pointwise Laplacian of \(p\) is easily seen to be \(2 \Xi _3\). However, the distributional Laplacian contains additional terms which come from the jumps of the normal derivative of \(p\) along the boundaries \(\bar{B}_1\cap \bar{B}_2\) and \(\bar{B}_1\cap \bar{B}_3\); on the boundary \(\bar{B}_2\cap \bar{B}_3\) the jump turns out to be 0. Note that \(p\) can be viewed as the two-dimensional electrostatic potential generated by the charge distribution \(\Xi \).

Theorem 2.1 makes the last formula in [13] rigorous. In the next theorems, we investigate finer properties of the zeros of \(\mathcal{Z }_N\). We start by describing the local structure of zeros of \(\mathcal{Z }_N\) in a neighborhood of area of order \(1/n\) around a fixed point \(\beta _0\in B_3\). Let \(\{\mathbb{G }(t) :t\in \mathbb{C }\}\) be a Gaussian random analytic function [35] given by

$$\begin{aligned} \mathbb{G }(t)=\sum _{k=0}^{\infty } \xi _k \frac{t^k}{\sqrt{k!}}, \end{aligned}$$
(2.3)

where \(\xi _0,\xi _1,\ldots \) are independent standard complex Gaussian random variables. The complex zeros of \(\mathbb{G }\) form a remarkable point process which has intensity \(1/\pi \) and is translation invariant. Up to rescaling, this is the only translation invariant zero set of a Gaussian analytic function; see [21, Section 2.5]. This and related zero sets have been much studied; see the monograph [21].

Theorem 2.3

Let \(\beta _0\in B_3\) be fixed. For every continuous function \(f:\mathbb{C }\rightarrow \mathbb{R }\) with compact support,

$$\begin{aligned} \sum _{\beta \in \mathbb{C }:\mathcal{Z }_N(\beta )=0} f(\sqrt{n} (\beta - \beta _0)) \overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}\sum _{\beta \in \mathbb{C }:\mathbb{G }(\beta )=0} f(\beta ). \end{aligned}$$

Remark 2.4

Equivalently, the point process consisting of the points \(\sqrt{n} (\beta -\beta _0)\), where \(\beta \) is a zero of \(\mathcal{Z }_N\), converges weakly to the point process of zeros of \(\mathbb{G }\).

Derrida [13] predicted that the set \(B_1\) should be free of zeros. As we will see below, it is not true that the number of zeros in \(B_1\) converges to 0 in probability since with non-vanishing probability there exist zeros very close to the boundary of \(B_1\). However, a slightly weaker statement is true.

Theorem 2.5

Let \(K\) be a compact subset of \(B_1\). Then, there exists \(\varepsilon >0\) depending on \(K\) such that

$$\begin{aligned} \mathbb{P }[\mathcal{Z }_N(\beta )=0 \text{ for some } \beta \in K] =O(N^{-\varepsilon }),\quad N\rightarrow \infty . \end{aligned}$$

As a consequence, the number of zeros of \(\mathcal{Z }_N\) in \(K\) converges to 0 in probability. It is natural to conjecture that the convergence holds a.s. The number \(\varepsilon \), as provided by the proof of Theorem 2.5, converges to 0 as the distance between \(K\) and the boundary of \(B_1\) shrinks. Hence, the a.s. convergence does not follow from a Borel–Cantelli argument.

Consider now the zeros of \(\mathcal{Z }_N\) in the set \(B_2\). We will show that in the limit as \(N\rightarrow \infty \) the zeros of \(\mathcal{Z }_N\) in \(B_2\) look like the zeros of a certain random analytic function \(\zeta _{P}\), which may be viewed as a zeta-function associated to the Poisson process. It is defined as follows. Let \(P_1<P_2<\ldots \) be the arrival times of a unit intensity homogeneous Poisson process on the positive half-line. That is, \(P_k=\varepsilon _1+\cdots +\varepsilon _k\), where \(\varepsilon _1,\varepsilon _2,\ldots \) are i.i.d. standard exponential random variables, i.e., \(\mathbb{P }[\varepsilon _k>t]=\mathrm{e}^{-t}\), \(t\ge 0\). For \(T>1\), define the random process

$$\begin{aligned} \tilde{\zeta }_P(\beta ; T)=\sum _{k=1}^{\infty } \frac{1}{P_k^{\beta }}{\small 1}\!\!1_{P_k\in [0,T]}-\int \limits _1^T t^{-\beta }\mathrm{d}t, \quad \beta \in \mathbb{C }. \end{aligned}$$
(2.4)

Theorem 2.6

With probability \(1\), the sequence \(\tilde{\zeta }_P(\beta ; T)\) converges as \(T\rightarrow \infty \) to a limiting function denoted by \(\tilde{\zeta }_{P}(\beta )\). The convergence is uniform on compact subsets of the half-plane \(\{\beta \in \mathbb{C }:\mathrm{Re}\beta >1/2\}\).
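Theorem 2.6 is easy to visualize by simulation: along one realization of the Poisson process, the compensated partial sums (2.4) stabilize as \(T\) grows, provided \(\mathrm{Re}\,\beta >1/2\). A sketch (illustrative only; the horizon of 20,000 arrivals is ad hoc):

```python
import numpy as np

rng = np.random.default_rng(4)
# one realization of the arrival times P_1 < P_2 < ... (covers [0, T] for T <= 10^4)
P = np.cumsum(rng.standard_exponential(20_000))

def zeta_tilde(beta, T):
    """The compensated partial sum (2.4) along this realization (beta != 1)."""
    integral = (T**(1 - beta) - 1) / (1 - beta)   # \int_1^T t^{-beta} dt
    return np.sum(P[P <= T]**(-beta)) - integral

beta = 0.8 + 0.5j                                 # Re(beta) > 1/2
for T in [10, 100, 1000, 10_000]:
    print(T, zeta_tilde(beta, T))
```

The printed values settle down as \(T\) increases; the variance of the tail beyond \(T\) is of order \(T^{1-2\sigma }\), which vanishes precisely when \(\sigma >1/2\).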

Corollary 2.7

With probability \(1\), the Poisson process zeta-function

$$\begin{aligned} \zeta _{P}(\beta )=\sum _{k=1}^{\infty } \frac{1}{P_k^{\beta }} \end{aligned}$$
(2.5)

defined originally for \(\mathrm{Re}\beta >1\), admits a meromorphic continuation to the domain \(\mathrm{Re}\beta >1/2\). The function \(\tilde{\zeta }_{P}(\beta )=\zeta _{P}(\beta )-\frac{1}{\beta -1}\) is a.s. analytic in this domain.

The next theorem describes the limiting structure of zeros of \(\mathcal{Z }_N\) in \(B_2\). The form of the process \(\zeta _P\) appearing there is not surprising and can be explained as follows. In phase \(B_2\), the process \(\mathcal{Z }_N\) is dominated by the extremal order statistics of the sample \(X_1,\ldots ,X_N\). These form a Poisson point process in the large N limit, see, e.g., [39, Corollary 4.19(i)], and \(\zeta _P\) is some functional of this process.

Theorem 2.8

Let \(f :B_2\rightarrow \mathbb{R }\) be a continuous function with compact support. Let \(\zeta _P^{(1)}\) and \(\zeta _P^{(2)}\) be two independent copies of \(\zeta _P\). Then,

$$\begin{aligned} \sum _{\beta \in B_2 :\mathcal{Z }_N(\beta )=0} f(\beta ) \overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}\sum _{\begin{array}{c} \beta \in B_2 :\\ \zeta _{P}^{(1)}(\beta / \sqrt{2})=0 \end{array}} f(\beta ) + \sum _{\begin{array}{c} \beta \in B_2 :\\ \zeta _{P}^{(2)}(\beta / \sqrt{2})=0 \end{array}} f({-}\beta ). \end{aligned}$$

Theorem 2.8 tells us that the zeros of \(\mathcal{Z }_N\) in the domain \(\sigma >1/\sqrt{2},|\sigma |+|\tau |>\sqrt{2}\) (which constitutes one half of \(B_2\)) have approximately the same law as the zeros of \(\zeta _P\), as \(N\rightarrow \infty \). Let us stress that the approximation breaks down in the triangle \(\sigma >1/\sqrt{2}\), \(|\sigma |+|\tau |<\sqrt{2}\). Although the function \(\zeta _P\) is well-defined and may have zeros there, the function \(\mathcal{Z }_N\) has, with high probability, no zeros in any compact subset of the triangle by Theorem 2.5.

Next, we state some properties of the function \(\zeta _P\). Let \(\beta >1/2\) be real. For \(\beta \ne 1\), the random variable \(\zeta _{P}(\beta )\) is stable with index \(1/\beta \) and skewness parameter 1. In fact, (2.4) is just the series representation of this random variable; see [41, Theorem 1.4.5]. For \(\beta =1\), the random variable \(\tilde{\zeta }_{P}(1)\) (which is the residue of \(\zeta _{P}\) at 1) is 1-stable with skewness 1. For general complex \(\beta \), we have the following stability property.

Proposition 2.9

If \(\zeta _{P}^{(1)},\ldots , \zeta _{P}^{(k)}\) are independent copies of \(\zeta _{P}\), then we have the following distributional equality of stochastic processes:

$$\begin{aligned} \zeta _{P}^{(1)}+\cdots +\zeta _{P}^{(k)}\stackrel{{d}}{=}k^{\beta }\zeta _{P}. \end{aligned}$$

To see this, observe that the union of k independent unit intensity Poisson processes has the same law as a single unit intensity Poisson process scaled by the factor 1/k. As a corollary, the distribution of the random vector \((\mathrm{Re}\,\zeta _{P}(\beta ), \mathrm{Im}\,\zeta _{P}(\beta ))\) belongs to the family of operator stable laws; see [31].

Proposition 2.10

Fix \(\tau \in \mathbb{R }\). As \(\sigma \downarrow 1/2\), we have

$$\begin{aligned} \sqrt{2\sigma -1}\, \zeta _P(\sigma + i\tau ) \overset{{w}}{\underset{}{\longrightarrow }}\left\{ \begin{array}{ll} N_{\mathbb{C }}(0,1),&\quad \text{if } \tau \ne 0,\\ N_{\mathbb{R }}(0,1),&\quad \text{if } \tau =0. \end{array}\right. \end{aligned}$$

As a corollary, there is a.s. no meromorphic continuation of \(\zeta _P\) beyond the line \(\sigma =1/2\). Using the same method of proof, it can be shown that for any two distinct \(\tau _1,\tau _2>0\) the random variables \(\sqrt{2\sigma -1}\,\zeta _P(\sigma +i\tau _j)\), \(j=1,2\), become asymptotically independent as \(\sigma \downarrow 1/2\). Thus, the function \(\zeta _P\) behaves like white noise near the line \(\sigma =1/2\). The intensity of complex zeros of \(\zeta _P\) at \(\beta \) can be computed by the formula \(g(\beta )=\frac{1}{2\pi } \Delta \mathbb{E }\log |\zeta _P(\beta )|\), where \(\Delta \) is the Laplace operator; see [21, Section 2.4]. Proposition 2.10 suggests that \(g(\beta ) \sim \frac{1}{\pi }\frac{1}{(2\sigma -1)^2}\) as \(\sigma \downarrow 1/2\). In particular, every point of the line \(\sigma =1/2\) should be an accumulation point for the zeros of \(\zeta _P\) with probability 1.

Let us look locally at the zeros of \(\mathcal{Z }_N\) near some \(\beta _0=\sigma _0+i\tau _0\) on one of the boundaries \(\bar{B}_1\cap \bar{B}_3\) or \(\bar{B}_1\cap \bar{B}_2\). We will show that in both cases the zeros form approximately an arithmetic sequence. The structure of the measures \(\Xi _{13}\) and \(\Xi _{12}\) in Theorem 2.1 suggests that the distances between consecutive zeros should behave like \(\frac{2\pi }{n}\) in the first case and like \(\frac{\sqrt{2}\pi }{|\tau _0|n}\) in the second case. The next theorems show that this is indeed true. First, we analyze the boundary \(\bar{B}_1\cap \bar{B}_3\).

Theorem 2.11

Let \(\beta _0=\sigma _0 + i\tau _0\) be such that \(\sigma _0^2+\tau _0^2=1\) and \(\sigma _0^2<1/2\). There exist a complex-valued random variable \(\xi \) and a bounded real sequence \(\delta _N\) such that for every continuous function \(f :\mathbb{C }\rightarrow \mathbb{R }\) with compact support,

$$\begin{aligned} \sum _{\beta \in \mathbb{C }:\mathcal{Z }_N(\beta )=0}f\left(n\left(\frac{\beta }{\beta _0}-1\right)-i \delta _N\right) \overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}\sum _{k\in \mathbb{Z }} f(2\pi i k + \xi ). \end{aligned}$$

Remark 2.12

In other words, the zeros of \(\mathcal{Z }_N\) near \(\beta _0\) are given by the formula

$$\begin{aligned} \beta =\beta _0\left(1+\frac{2\pi i k+\xi +i\delta _N}{n}\right)+o\left(\frac{1}{n}\right),\quad k\in \mathbb{Z }. \end{aligned}$$

As we will see in the proof, the random variable \(\mathrm{Re}\,\xi \) takes negative values with positive probability. It follows that the probability that \(\mathcal{Z }_N\) has a zero in \(B_1\) does not go to \(0\) as \(N\rightarrow \infty \).

The boundary \(\bar{B}_1\cap \bar{B}_2\) consists of \(4\) line segments. By symmetry (2.1), it suffices to consider one of them.

Theorem 2.13

Let \(\beta _0=\sigma _0+i\tau _0\) be such that \(\sigma _0>1/\sqrt{2}, \tau _0>0\) and \(\sigma _0+\tau _0=\sqrt{2}\). There exist a complex-valued random variable \(\eta \) and a complex sequence \(d_N=O(\log n)\) such that for every continuous function \(f:\mathbb{C }\rightarrow \mathbb{R }\) with compact support,

$$\begin{aligned} \sum _{\beta \in \mathbb{C }:\mathcal{Z }_N(\beta )=0}f\left(\mathrm{e}^{\frac{2\pi i}{3}}n(\beta -\beta _0)-d_N\right)\overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}\sum _{k\in \mathbb{Z }} f\left(\frac{2\pi i k+\eta }{\sqrt{2} \tau _0}\right). \end{aligned}$$

Remark 2.14

In other words, the zeros of \(\mathcal{Z }_N\) near \(\beta _0\) are given by the formula

$$\begin{aligned} \beta =\beta _0+\mathrm{e}^{-\frac{2\pi i}{3}}\frac{1}{n}\left(\frac{2\pi i k}{\sqrt{2} \tau _0}+d_N\right)+o\left(\frac{1}{n}\right),\quad k\in \mathbb{Z }. \end{aligned}$$

2.3 Results on fluctuations

We state our results on fluctuations for a generalization of (1.1) which we call the complex random energy model. This model involves complex phases and allows for arbitrary dependence between the energies and the phases. Let \((X,Y), (X_1,Y_1), \ldots \) be i.i.d. zero-mean bivariate Gaussian random vectors with

$$\begin{aligned} \mathrm{Var}X_k=\mathrm{Var}Y_k=1,\quad \text{ Corr}(X_k,Y_k)=\rho . \end{aligned}$$

Here, \({-}1\le \rho \le 1\) is fixed. Recall (1.1) and consider the following partition function:

$$\begin{aligned} \mathcal{Z }_N(\beta )=\sum _{k=1}^{N}\mathrm{e}^{\sqrt{n}(\sigma X_k+i \tau Y_k)},\quad \beta =(\sigma ,\tau )\in \mathbb{R }^2. \end{aligned}$$
(2.6)

For \(\tau =0\), this is the REM of Derrida [12] at real inverse temperature \(\sigma \). For \(\rho =1\), we obtain the REM at the complex inverse temperature \(\beta = \sigma + i \tau \) considered above; see (1.1). For \(\rho =0\), the model is a REM with independent complex phases considered in [14]. Note also that the substitutions \((\beta ,\rho )\mapsto ({-}\beta , \rho )\) and \((\beta ,\rho )\mapsto (\bar{\beta },{-}\rho )\) leave the distribution of \(\mathcal{Z }_N(\beta )\) unchanged.

Recall (1.2). Define the log-partition function as

$$\begin{aligned} p_N(\beta )=\frac{1}{n} \log |\mathcal{Z }_N(\beta )|,\quad \beta =(\sigma ,\tau )\in \mathbb{R }^2. \end{aligned}$$
(2.7)

Theorem 2.15

For every \(\beta \in \mathbb{R }^2\), the limit

$$\begin{aligned} p(\beta ):=\lim _{N \rightarrow \infty } p_N(\beta ) \end{aligned}$$
(2.8)

exists in probability and in \(L^q, q\ge 1\), and is explicitly given as

$$\begin{aligned} p(\beta )=\left\{ \begin{array}{l@{\quad }l} 1+\frac{1}{2}(\sigma ^2-\tau ^2),&\beta \in \overline{B}_{1},\\ \sqrt{2} |\sigma |,&\beta \in \overline{B}_2,\\ \frac{1}{2}+\sigma ^2,&\beta \in \overline{B}_3. \end{array}\right. \end{aligned}$$
(2.9)

Note that the limit in (2.9) does not depend on \(\rho \). However, we will see below that the fluctuations of \(\mathcal{Z }_N(\beta )\) do depend on \(\rho \). The next theorem shows that \(\mathcal{Z }_N(\beta )\) satisfies the central limit theorem in the domain \(\sigma ^2<1/2\).
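The \(\rho \)-independence of the limit (2.9) can be observed numerically. The sketch below (illustrative only; parameters and function names are ours) samples the correlated pairs via the standard construction \(Y_k=\rho X_k+\sqrt{1-\rho ^2}\,X_k'\) with \(X_k'\) independent of \(X_k\), and estimates \(p_N\) at a point of phase \(B_1\) for several values of \(\rho \):

```python
import numpy as np

def log_Z(N, sigma, tau, rho, rng):
    """One-sample estimate of p_N = (1/n) log |Z_N| for the model (2.6)."""
    n = np.log(N)
    X = rng.standard_normal(N)
    Y = rho * X + np.sqrt(1 - rho**2) * rng.standard_normal(N)
    a = np.sqrt(n) * (sigma * X + 1j * tau * Y)
    m = a.real.max()                 # log-sum-exp for numerical stability
    return (m + np.log(np.abs(np.exp(a - m).sum()))) / n

rng = np.random.default_rng(5)
sigma, tau = 0.4, 0.3                # a point in phase B1
print("p =", 1 + (sigma**2 - tau**2) / 2)          # the B1 formula in (2.9)
for rho in [0.0, 0.5, 1.0]:
    print("rho =", rho, "p_N ~", log_Z(10**6, sigma, tau, rho, rng))
```

In phase \(B_1\) the sum concentrates around its expectation, so the estimates agree closely with (2.9) for every \(\rho \); the \(\rho \)-dependence only shows up in the fluctuations, as in Theorem 2.16.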

Theorem 2.16

If \(\sigma ^2<1/2\) and \(\tau \ne 0\), then

$$\begin{aligned} \frac{\mathcal{Z }_N(\beta )-N^{1+\frac{1}{2} (\sigma ^2-\tau ^2)+ i\sigma \tau \rho }}{N^{\frac{1}{2}+\sigma ^2}}\overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}N_{\mathbb{C }}(0,1). \end{aligned}$$
(2.10)

Remark 2.17

If \(\sigma ^2<1/2\) and \(\tau =0\), then the limiting distribution is real normal, as was shown in [8].

Remark 2.18

If in addition to \(\sigma ^2<1/2\) we have \(\sigma ^2+\tau ^2>1\), then \(N^{1+\frac{1}{2}(\sigma ^2-\tau ^2)}=o(N^{\frac{1}{2}+ \sigma ^2})\) and, hence, the theorem simplifies to

$$\begin{aligned} \frac{\mathcal{Z }_N(\beta )}{N^{\frac{1}{2} +\sigma ^2}}\overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}N_{\mathbb{C }}(0,1). \end{aligned}$$
(2.11)

Equation (2.11) explains the difference between phases \(B_1\) and \(B_3\): in phase \(B_1\) the expectation of \(\mathcal{Z }_N(\beta )\) is of larger order than the mean square deviation, whereas in phase \(B_3\) it is the other way around: the mean square deviation is of larger order than the expectation. It is this behavior that leads to the phase transition between \(B_1\) and \(B_3\) in (2.9).

In the boundary case \(\sigma ^2=1/2\), the limiting distribution is still normal, but with variance \(1/2\).

Theorem 2.19

If \(\sigma ^2=1/2\) and \(\tau \ne 0\), then

$$\begin{aligned} \frac{\mathcal{Z }_N(\beta )-N^{1+\frac{1}{2} (\frac{1}{2}-\tau ^2)+ i\sigma \tau \rho }}{N}\overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}N_{\mathbb{C }}(0,1/2). \end{aligned}$$

Next, we describe the fluctuations of \(\mathcal{Z }_N(\beta )\) in the domain \(\sigma ^2>1/2\). Due to (2.1), it is not a restriction of generality to assume that \(\sigma >0\). Let \(b_N\) be a sequence such that \(\sqrt{2\pi }b_N \mathrm{e}^{b_N^2/2} \sim N\) as \(N\rightarrow \infty \). We can take

$$\begin{aligned} b_N=\sqrt{2n}-\frac{\log (4\pi n)}{2\sqrt{2 n}}. \end{aligned}$$
(2.12)
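One can check numerically that (2.12) satisfies the defining relation \(\sqrt{2\pi }b_N \mathrm{e}^{b_N^2/2} \sim N\); the relative error decays only logarithmically in \(N\). An illustrative computation:

```python
import math

def b(N):
    """The normalizing sequence (2.12), with n = log N."""
    n = math.log(N)
    return math.sqrt(2 * n) - math.log(4 * math.pi * n) / (2 * math.sqrt(2 * n))

for N in [10**4, 10**8, 10**16]:
    bN = b(N)
    # the ratio sqrt(2*pi) * b_N * exp(b_N^2 / 2) / N should be close to 1
    print(N, math.sqrt(2 * math.pi) * bN * math.exp(bN**2 / 2) / N)
```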

Theorem 2.20

Let \(\sigma >1/\sqrt{2}\), \(\tau \ne 0\), and \(|\rho |<1\). Then,

$$\begin{aligned} \frac{\mathcal{Z }_N(\beta )-N\mathbb{E }[\mathrm{e}^{\sqrt{n}(\sigma X+i\tau Y)}{\small 1}\!\!1_{X<b_N}]}{\mathrm{e}^{\sigma \sqrt{n}b_N}} \overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}S_{\sqrt{2}/\sigma }, \end{aligned}$$
(2.13)

where \(S_{\alpha }\) denotes a complex isotropic \(\alpha \)-stable random variable with a characteristic function of the form \(\mathbb{E }[\mathrm{e}^{i \mathrm{Re}(S_{\alpha }\overline{z})}]=\mathrm{e}^{-\mathrm{const}\cdot |z|^{\alpha }}\), \(z\in \mathbb{C }\).

Remark 2.21

If \(\sigma >1/\sqrt{2}\) and \(\tau =0\), then the limiting distribution is real totally skewed \(\alpha \)-stable; see [8]. If \(\sigma >1/\sqrt{2}\) and \(\rho =1\) (resp., \(\rho ={-}1\)), then it follows from Theorem 4.8 below that

$$\begin{aligned} \frac{\mathcal{Z }_N(\beta )-N\mathbb{E }[\mathrm{e}^{\beta \sqrt{n} X}{\small 1}\!\!1_{X<b_N}]}{\mathrm{e}^{\beta \sqrt{n} b_N}} \overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}\tilde{\zeta }_{P} \left(\frac{\beta }{\sqrt{2}}\right) \quad \left(\text{resp., } \tilde{\zeta }_{P} \left(\frac{\bar{\beta }}{\sqrt{2}}\right)\right)\!. \end{aligned}$$
(2.14)

Remark 2.22

We will compute asymptotically the truncated expectation on the left-hand side of (2.13) in Sect. 3.2 below. We will obtain that under the assumptions of Theorem 2.20,

$$\begin{aligned}&\frac{\mathcal{Z }_N(\beta )}{\mathrm{e}^{\sigma \sqrt{n}b_N}} \overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}S_{\sqrt{2}/\sigma },\quad \text{if } \sigma +|\tau |>\sqrt{2},\end{aligned}$$
(2.15)
$$\begin{aligned}&\frac{\mathcal{Z }_N(\beta ) - N^{1+\frac{1}{2}(\sigma ^2-\tau ^2)+ i\sigma \tau \rho }}{\mathrm{e}^{\sigma \sqrt{n}b_N}} \overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}S_{\sqrt{2}/\sigma },\quad \text{if } \sigma +|\tau |\le \sqrt{2}. \end{aligned}$$
(2.16)

Similarly, if \(\sigma >1/\sqrt{2}\), but \(\rho =1\), then we have

$$\begin{aligned}&\displaystyle \frac{\mathcal{Z }_N(\beta )}{\mathrm{e}^{\beta \sqrt{n}b_N}} \overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}\zeta _{P} \left(\frac{\beta }{\sqrt{2}}\right),\quad \text{if } \sigma +|\tau |>\sqrt{2},\end{aligned}$$
(2.17)
$$\begin{aligned}&\displaystyle \frac{\mathcal{Z }_N(\beta ) - N^{1+\frac{1}{2} (\sigma ^2-\tau ^2)+ i\sigma \tau }}{\mathrm{e}^{\beta \sqrt{n}b_N}} \overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}\zeta _{P} \left(\frac{\beta }{\sqrt{2}}\right),\quad \text{if } \sigma +|\tau |\le \sqrt{2},\; \sigma \ne \sqrt{2}. \end{aligned}$$
(2.18)

For \(\rho ={-}1\), we have to replace \(\beta \) by \(\bar{\beta }\).

2.4 Discussion, extensions and open questions

The results on fluctuations are closely related, at least on the heuristic level, to the results on the zeros of \(\mathcal{Z }_N\). In Sect. 2.3, we claimed that regardless of the value of \(\beta \ne 0\) we can find normalizing constants \(m_N(\beta )\in \mathbb{C }\), \(v_N(\beta )>0\) such that

$$\begin{aligned} \frac{\mathcal{Z }_N(\beta )-m_N(\beta )}{v_N(\beta )}\overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}Z(\beta ) \end{aligned}$$

for some non-degenerate random variable \(Z(\beta )\). It turns out that in phase \(B_1\) the sequence \(m_N(\beta )\) is of larger order than \(v_N(\beta )\), which suggests that there should be no zeros in this phase. In phases \(B_2\) and \(B_3\), the sequence \(m_N(\beta )\) is of smaller order than \(v_N(\beta )\), which does not rule out the possibility of zeros in these phases. One way to guess the density of zeros in phases \(B_2\) and \(B_3\) is to look more closely at the correlations of the process \(\mathcal{Z }_N\). In phase \(B_3\), it can be seen from Theorem 4.6 below that \(\mathcal{Z }_N(\beta _1)\) and \(\mathcal{Z }_N(\beta _2)\) become asymptotically decorrelated if the distance between \(\beta _1\) and \(\beta _2\) is of order larger than \(1/\!\sqrt{n}\). This suggests that the distances between neighboring zeros in phase \(B_3\) should be of order \(1/\!\sqrt{n}\) and, hence, that the density of zeros should be of order \(n\). Similarly, in phase \(B_2\) the variables \(\mathcal{Z }_N(\beta _1)\) and \(\mathcal{Z }_N(\beta _2)\) remain non-trivially correlated at distances of order 1 by Theorem 4.8 below, which suggests that the density of zeros in this phase should be of order 1.

An additional motivation for studying \(\mathcal{Z }_N\) comes from its connection to the empirical characteristic function. Given an i.i.d. standard normal sample \(X_1,\ldots ,X_N\), the empirical characteristic function is defined by \(c_N(\beta )=\sum _{k=1}^N \mathrm{e}^{i \beta X_k}\). We have \(\mathcal{Z }_N(\beta )=c_N(-i\sqrt{n} \beta )\). The limit behavior of the stochastic process \(\{c_N(\beta ) :\beta \in \mathbb{R }\}\) without rescaling \(\beta \) by the factor \(\sqrt{n}\) has been much studied; see, e.g., [10, 18]. There has also been interest in the behavior of \(R_N=\inf \{\beta >0 :\mathrm{Re}\,c_N(\beta )=0\}\), the first real zero of \(\mathrm{Re}\,c_N\); see [20, 22]. In particular, it has been shown in [20, Corollary 4.5] that, for all \(t\in \mathbb{R }\),

$$\begin{aligned} \lim _{N\rightarrow \infty }\mathbb{P }[R_N^2-n<2t]=\Phi ({-}\sqrt{2} e^{-t}). \end{aligned}$$

Hence, the first real zero of \(\mathrm{Re}\mathcal{Z }_N(\beta )\) restricted to \(\beta \in i \mathbb{R }\) is located near \(i\) with high probability. This is exactly the point where the imaginary axis meets the set \(B_3\).
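The relation \(\mathcal{Z }_N(\beta )=c_N(-i\sqrt{n} \beta )\) is a purely algebraic identity and can be verified numerically; the following Python sketch (with illustrative values of \(N\) and \(\beta \), not taken from the text) checks it on a simulated sample.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500
n = np.log(N)
X = rng.standard_normal(N)

def Z(beta):
    # partition function (1.1) at complex inverse temperature beta
    return np.sum(np.exp(beta * np.sqrt(n) * X))

def c(beta):
    # empirical characteristic function c_N(beta) = sum_k exp(i beta X_k)
    return np.sum(np.exp(1j * beta * X))

beta = 0.4 + 0.3j
# Z_N(beta) = c_N(-i * sqrt(n) * beta)
assert abs(Z(beta) - c(-1j * np.sqrt(n) * beta)) < 1e-8 * abs(Z(beta))
```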

It is possible to extend or strengthen our results in several directions. The statements of Theorems 2.1 and 2.15 should hold almost surely, although it seems difficult to prove this. Several authors considered models involving sums of random exponentials generalizing the REM; see [2, 6, 9, 25]. They analyze the case of real \(\beta \) only. We believe that our results (both on zeros and on fluctuations) can be extended, with appropriate modifications, to these models.

3 Proofs of the results on fluctuations

3.1 Truncated exponential moments

We will often need estimates for the truncated exponential moments of the normal distribution. In the next lemmas, we denote by \(X\) a real standard normal random variable. Let \(\Phi (z)=\frac{1}{\sqrt{2\pi }}\int _{-\infty }^z \mathrm{e}^{-\frac{x^2}{2}}\mathrm{d}x\) be the distribution function of X. It is well-known that

$$\begin{aligned} \Phi (z)=-\frac{1+o(1)}{\sqrt{2\pi } z}\mathrm{e}^{-\frac{z^2}{2}}, \quad z\rightarrow {-}\infty . \end{aligned}$$
(3.1)

The normal distribution function \(\Phi \) can be extended as an analytic function to the entire complex plane. We need an extension of (3.1) to the complex case.

Lemma 3.1

Fix some \(\varepsilon >0\). The following holds as \(|z|\rightarrow \infty \), \(z\in \mathbb{C }\):

$$\begin{aligned} \Phi (z)= \left\{ \begin{array}{ll} -\frac{1+o(1)}{\sqrt{2\pi }z} \mathrm{e}^{-\frac{z^2}{2}},&\text{ if } |\arg z|>\frac{\pi }{4}+\varepsilon ,\\ 1-\frac{1+o(1)}{\sqrt{2\pi }z} \mathrm{e}^{-\frac{z^2}{2}},&\text{ if } |\arg z|<\frac{3\pi }{4}-\varepsilon . \end{array}\right. \end{aligned}$$
(3.2)

In particular, \(\Phi (z)\rightarrow 1\) if \(|z|\rightarrow \infty \) and \(|\arg z|<\frac{\pi }{4}-\varepsilon \).

Remark 3.2

We take the principal value of the argument, ranging in \(({-}\pi ,\pi ]\) and having a jump on the negative half-axis. In the domain \(\frac{\pi }{4}+\varepsilon <|\arg z|<\frac{3\pi }{4}-\varepsilon \) both asymptotics in (3.2) can be applied. To see that they give the same result, note that \(|\frac{1}{z}\mathrm{e}^{-\frac{z^2}{2}}|\rightarrow \infty \) there.

Proof of Lemma 3.1

For the first case of (3.2), see [1, Eq. 7.1.23 on p. 298]. The second case of (3.2) follows from the identity \(\Phi (z)=1-\Phi ({-}z)\). \(\square \)
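As a numerical illustration of Lemma 3.1 (not part of the proof), the analytic continuation of \(\Phi \) can be computed by integrating \(\mathrm{e}^{-t^2/2}\) along the segment from \(0\) to \(z\). The sketch below compares this with the first asymptotic in (3.2) for an illustrative \(z\) on the ray \(\arg z=3\pi /4\); the tolerance reflects the \(O(1/|z|^2)\) correction omitted in (3.2).

```python
import numpy as np

def Phi(z, num=200_000):
    # analytic continuation of the normal distribution function:
    # Phi(z) = 1/2 + (2*pi)^{-1/2} * integral of exp(-t^2/2) from 0 to z,
    # computed with the trapezoidal rule along the straight segment
    t = np.linspace(0.0, z, num)
    f = np.exp(-t * t / 2.0)
    integral = np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(t))
    return 0.5 + integral / np.sqrt(2.0 * np.pi)

# first case of (3.2): |arg z| = 3*pi/4 > pi/4
z = 6.0 * np.exp(3j * np.pi / 4)
asym = -np.exp(-z * z / 2.0) / (np.sqrt(2.0 * np.pi) * z)
rel_err = abs(Phi(z) - asym) / abs(asym)
assert rel_err < 0.05   # next-order correction is O(1/|z|^2) ~ 0.03 here
```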

In the next lemmas, we record several simple facts on the truncated exponential moments which we will often use later. Note that \(\mathbb{E }[\mathrm{e}^{wX}]=\mathrm{e}^{\frac{w^2}{2}}\), for all \(w\in \mathbb{C }\).

Lemma 3.3

Let \(w\in \mathbb{C }\), \(a\in \mathbb{R }\). Then, \(\mathbb{E }[\mathrm{e}^{w X}{\small 1}\!\!1_{X<a}]=\mathrm{e}^{\frac{w^2}{2}} \Phi (a-w)\).

Proof

For \(w\in \mathbb{R }\), we have

$$\begin{aligned} \mathbb{E }[\mathrm{e}^{w X}{\small 1}\!\!1_{X < a}]=\frac{1}{\sqrt{2\pi }} \int \limits _{-\infty }^{a} \mathrm{e}^{wz-\frac{z^2}{2}}\mathrm{d}z=\frac{1}{\sqrt{2\pi }} \mathrm{e}^{\frac{w^2}{2}} \int \limits _{-\infty }^{a} \mathrm{e}^{-\frac{(z-w)^2}{2}}\mathrm{d}z=\mathrm{e}^{\frac{w^2}{2}}\Phi (a-w). \end{aligned}$$

For \(w\in \mathbb{C }\), this holds by analytic continuation. \(\square \)
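For real \(w\), the identity of Lemma 3.3 can be confirmed by direct quadrature; a minimal sketch with illustrative values \(w=0.7\), \(a=1\):

```python
import numpy as np
from math import erf, exp, sqrt

def Phi(x):
    # standard normal distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def truncated_moment(w, a, lo=-12.0, num=500_000):
    # E[e^{wX} 1_{X<a}] by trapezoidal quadrature of e^{wx} * phi(x) on [lo, a]
    x = np.linspace(lo, a, num)
    f = np.exp(w * x - x * x / 2.0) / np.sqrt(2.0 * np.pi)
    return np.sum((f[1:] + f[:-1]) / 2.0) * (x[1] - x[0])

w, a = 0.7, 1.0
lhs = truncated_moment(w, a)
rhs = exp(w * w / 2.0) * Phi(a - w)   # Lemma 3.3
assert abs(lhs - rhs) < 1e-6
```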

Lemma 3.4

Let \(w,a\in \mathbb{R }\). The following estimates hold.

  1. (1)

    If \(w>a\), then  \(\mathbb{E }[\mathrm{e}^{w X}{\small 1}\!\!1_{X < a}]<\mathrm{e}^{aw-\frac{a^2}{2}}\).

  2. (2)

    If \(w<a\), then  \(\mathbb{E }[\mathrm{e}^{w X}{\small 1}\!\!1_{X > a}]<\mathrm{e}^{aw-\frac{a^2}{2}}\).

Proof

Consider the case \(w>a\). By Lemma 3.3, \(\mathbb{E }[\mathrm{e}^{w X}{\small 1}\!\!1_{X < a}]=\mathrm{e}^{\frac{w^2}{2}} \Phi (a-w)\). Using the inequality \(\Phi (z)<\mathrm{e}^{-\frac{z^2}{2}}\) valid for \(z\le 0\), we obtain the statement of case (1). Case (2) can be reduced to case (1) by the substitution \((X,w,a)\mapsto (-X,-w,-a)\). \(\square \)
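The bound in part (1) of Lemma 3.4 can likewise be spot-checked through the closed form of Lemma 3.3; illustrative parameter pairs only:

```python
from math import erf, exp, sqrt

def Phi(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# part (1): for w > a, E[e^{wX} 1_{X<a}] = e^{w^2/2} Phi(a-w) < e^{aw - a^2/2};
# part (2) follows by the symmetry (X, w, a) -> (-X, -w, -a)
for w, a in [(2.0, 1.0), (0.5, 0.0), (3.0, -1.0)]:
    moment = exp(w * w / 2.0) * Phi(a - w)   # exact value by Lemma 3.3
    bound = exp(a * w - a * a / 2.0)
    assert moment < bound
```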

Lemma 3.5

Let \(F(n)=\mathbb{E }[\mathrm{e}^{w \sqrt{n} X}{\small 1}\!\!1_{X<\sqrt{n} a(n)}]\), where \(w=u+iv\in \mathbb{C }\), \(n>0\), and \(a(n)\) is a real-valued function with \(\lim _{n\rightarrow \infty }a(n)=a\). The following hold, as \(n\rightarrow \infty \):

$$\begin{aligned} F(n) \sim \left\{ \begin{array}{ll} \frac{1}{\sqrt{2\pi n}(w-a)}\, \mathrm{e}^{n (a(n) w -\frac{1}{2} a^2(n))},&\text{ if } u+|v|>a,\\ \mathrm{e}^{\frac{1}{2} w^2 n},&\text{ if } u+|v|<a. \end{array}\right. \end{aligned}$$
(3.3)

If \(w\in \mathbb{R }\) and \(a(n)=w+\frac{c}{\sqrt{n}}+o(\frac{1}{\sqrt{n}})\), for some \(c\in \mathbb{R }\), then

$$\begin{aligned} F(n) \sim \Phi (c)\, \mathrm{e}^{\frac{1}{2} w^2 n}, \;\;\;n\rightarrow \infty . \end{aligned}$$
(3.4)

Remark 3.6

The second line in (3.3) can be generalized to the following formula valid in the case \(u-|v|<a\):

$$\begin{aligned} F(n)=\mathrm{e}^{\frac{1}{2} w^2 n}+\frac{1+o(1)}{\sqrt{2\pi n}(w-a)}\, \mathrm{e}^{n (a(n) w -\frac{1}{2} a^2(n))}, \quad n\rightarrow \infty . \end{aligned}$$
(3.5)

Proof of Lemma 3.5

Let \(z(n)=\sqrt{n} a(n)-w \sqrt{n}\). By Lemma 3.3, we have

$$\begin{aligned} F(n)=\mathrm{e}^{\frac{1}{2} w^2 n }\Phi (\sqrt{n} a(n)-w\sqrt{n})=\mathrm{e}^{\frac{1}{2} w^2 n }\Phi (z(n)). \end{aligned}$$
(3.6)

Note that \(z(n)\sim (a-u-iv)\sqrt{n}\), as \(n\rightarrow \infty \).

  • Case 1. If \(u+|v|>a\), then \(|\arg z(n)|>\frac{\pi }{4}+\varepsilon \), for some \(\varepsilon >0\), and all sufficiently large \(n\). Applying the first line of (3.2), we arrive at the first line of (3.3).

  • Case 2. If \(u-|v|<a\), then \(|\arg z(n)|<\frac{3\pi }{4}-\varepsilon \), for some \(\varepsilon >0\), and all sufficiently large \(n\). Applying the second line of (3.2), we get (3.5).

  • Case 2a. If even the stronger condition \(u+|v|<a\) holds, then \(\frac{1}{2} (u^2-v^2)>au-\frac{1}{2} a^2\) and hence, the first term in (3.5) asymptotically dominates the second one. We obtain the second line of (3.3).

  • Case 3. If \(a=w\in \mathbb{R }\) and \(a(n)=w+\frac{c}{\sqrt{n}}+o(\frac{1}{\sqrt{n}})\), for some \(c\in \mathbb{R }\), then \(\lim _{n\rightarrow \infty } z(n)=c\) and we arrive at (3.4).

\(\square \)
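Both regimes of (3.3) are already visible numerically for moderate \(n\). The sketch below (illustrative values, constant \(a(n)=a\)) evaluates \(F(n)\) by quadrature for a complex \(w\) in the second regime and a real \(w\) in the first; the looser tolerance in the first case reflects the \(O(1/n)\) correction to the asymptotic.

```python
import numpy as np

def F(n, w, a, lo=-15.0, num=2_000_000):
    # F(n) = E[e^{w sqrt(n) X} 1_{X < sqrt(n) a}] by trapezoidal quadrature
    x = np.linspace(lo, np.sqrt(n) * a, num)
    f = np.exp(w * np.sqrt(n) * x - x * x / 2.0) / np.sqrt(2.0 * np.pi)
    return np.sum((f[1:] + f[:-1]) / 2.0) * (x[1] - x[0])

n, a = 40.0, 1.0

# second line of (3.3): w = 0.3 + 0.2i has u + |v| = 0.5 < a
w = 0.3 + 0.2j
ratio = F(n, w, a) / np.exp(w * w * n / 2.0)
assert abs(ratio - 1.0) < 1e-3

# first line of (3.3): w = 2 has u + |v| = 2 > a
pred = np.exp(n * (a * 2.0 - a * a / 2.0)) / (np.sqrt(2.0 * np.pi * n) * (2.0 - a))
assert abs(F(n, 2.0, a) / pred - 1.0) < 0.05   # correction is O(1/n)
```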

Lemma 3.7

If \((X,Y)\) is a real Gaussian vector with standard marginals and correlation \(\rho \), then, for \(s,a\in \mathbb{R }\),

$$\begin{aligned} \mathbb{E }[\mathrm{e}^{s(\sigma X+i \tau Y)}{\small 1}\!\!1_{X<a}]=\mathrm{e}^{-s^2\tau ^2(1-\rho ^2)/2} \mathbb{E }[\mathrm{e}^{s(\sigma +i\tau \rho )X}{\small 1}\!\!1_{X<a}]. \end{aligned}$$

In particular, \(\mathbb{E }[\mathrm{e}^{s(\sigma X + i \tau Y)}]=\mathrm{e}^{s^2(\sigma ^2-\tau ^2+2i\sigma \tau \rho )/2}\).

Proof

We have a distributional equality \((X,Y)\overset{d}{=}(X, \rho X+\sqrt{1-\rho ^2} W)\), where \((X,W)\) are independent standard normal real random variables. It follows that

$$\begin{aligned} \mathbb{E }[\mathrm{e}^{s(\sigma X + i \tau Y)} {\small 1}\!\!1_{X<a} ]&= \mathbb{E }[\mathrm{e}^{s(\sigma +i\tau \rho )X+ is\tau \sqrt{1-\rho ^2}W} {\small 1}\!\!1_{X<a}]\\&= \mathrm{e}^{-s^2\tau ^2(1-\rho ^2)/2} \mathbb{E }[\mathrm{e}^{s(\sigma +i\tau \rho )X} {\small 1}\!\!1_{X<a}], \end{aligned}$$

where we have used that \(\mathbb{E }[\mathrm{e}^{tW}]=\mathrm{e}^{t^2/2}\) and that W and X are independent. \(\square \)
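The formula of Lemma 3.7 is a finite Gaussian computation and can be checked by Monte Carlo; the parameters below are illustrative and the seed is fixed for reproducibility:

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative parameters, not from the text
sigma, tau, rho, s, M = 0.6, 0.4, 0.5, 1.0, 1_000_000

# correlated standard Gaussian pair: (X, Y) =d (X, rho*X + sqrt(1-rho^2)*W)
X = rng.standard_normal(M)
W = rng.standard_normal(M)
Y = rho * X + np.sqrt(1.0 - rho**2) * W

emp = np.mean(np.exp(s * (sigma * X + 1j * tau * Y)))
exact = np.exp(s**2 * (sigma**2 - tau**2 + 2j * sigma * tau * rho) / 2.0)
assert abs(emp - exact) < 0.01
```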

3.2 Proof of Theorems 2.16, 2.19, 2.20

The main tool to prove the results on the fluctuations is the summation theory of triangular arrays of random vectors; see [19] and [31]. The following theorem can be found in [19, §25] in the one-dimensional setting and in [40] or in [31, Theorem 3.2.2] in the \(d\)-dimensional setting. Denote by \(|\cdot |\) the Euclidean norm and by \(\langle \cdot ,\cdot \rangle \) the Euclidean scalar product.

Theorem 3.8

For every \(N\in \mathbb{N }\), let \(W_{1,N},\ldots ,W_{N,N}\) be a row of independent random \(d\)-dimensional vectors. Assume that, for some locally finite measure \(\nu \) on \(\mathbb{R }^d\backslash \{0\}\) and some positive semidefinite matrix \(\Sigma \), the following conditions hold:

  1. (1)

    \(\lim _{N\rightarrow \infty } \sum _{k=1}^N \mathbb{P }[W_{k,N}\in B]=\nu (B)\), for every Borel set \(B\subset \mathbb{R }^d\backslash \{0\}\) such that \(\nu (\partial B)=0\).

  2. (2)

    The following limits exist:

    $$\begin{aligned} \Sigma =\lim _{\varepsilon \downarrow 0}\limsup _{N\rightarrow \infty }\sum _{k=1}^N \mathrm{Var}[W_{k,N}{\small 1}\!\!1_{|W_{k,N}|<\varepsilon }]=\lim _{\varepsilon \downarrow 0} \liminf _{N\rightarrow \infty }\sum _{k=1}^N \mathrm{Var}[W_{k,N} {\small 1}\!\!1_{|W_{k,N}|<\varepsilon }]. \end{aligned}$$

Then, the random vector \(S_N:=\sum _{k=1}^N (W_{k,N}-\mathbb{E }[W_{k,N} {\small 1}\!\!1_{|W_{k,N}|<R}])\) converges in distribution, as \(N\rightarrow \infty \), to an infinitely divisible random vector \(S\) whose characteristic function is given by the Lévy–Khintchine representation

$$\begin{aligned} \log \mathbb{E }[\mathrm{e}^{i \langle t, S\rangle }]=-\frac{1}{2} \langle t, \Sigma t\rangle +\int \limits _{\mathbb{R }^d} (\mathrm{e}^{i \langle t, s \rangle }-1- i \langle t, s \rangle {\small 1}\!\!1_{|s|<R}) \nu (\mathrm{d}s),\quad t\in \mathbb{R }^d. \end{aligned}$$

Here, \(R>0\) is any number such that \(\nu \) does not charge the set \(\{s\in \mathbb{R }^d :|s|=R\}\).
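A minimal instance of Theorem 3.8, not used later in the paper, is the Bernoulli triangular array \(W_{k,N}\sim \mathrm{Bernoulli}(\lambda /N)\): condition (1) holds with \(\nu =\lambda \delta _1\), condition (2) with \(\Sigma =0\), and with \(R=1/2\) the centering term vanishes, so \(S_N\) converges to a Poisson(\(\lambda \)) law. A simulation sketch (illustrative \(\lambda \), fixed seed):

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(2)

# row of the array: W_{k,N} ~ Bernoulli(lam/N); condition (1) holds with
# nu = lam * delta_1, condition (2) with Sigma = 0; the limit is Poisson(lam)
lam, N, reps = 3.0, 1000, 200_000
S = rng.binomial(N, lam / N, size=reps)   # S_N = W_{1,N} + ... + W_{N,N}

for m in range(8):
    assert abs(np.mean(S == m) - exp(-lam) * lam**m / factorial(m)) < 0.01
```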

Proof of Theorem 2.16

For \(k=1,\ldots ,N\), define

$$\begin{aligned} W_{k,N}=N^{-\frac{1}{2}-\sigma ^2} \mathrm{e}^{\sqrt{n}(\sigma X_k+i \tau Y_k)}. \end{aligned}$$

Let \(W_N\) be a random variable having the same law as the \(W_{k,N}\)’s. Note that \(N\mathbb{E }[W_{N}]=N^{(1-\sigma ^2-\tau ^2+2i\sigma \tau \rho )/2}\) by Lemma 3.7. To prove the theorem, we need to show that

$$\begin{aligned} \sum _{k=1}^N (W_{k,N}-\mathbb{E }W_{k,N}) \overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}N_{\mathbb{C }}(0,1). \end{aligned}$$

The proof is based on the two-dimensional Lindeberg central limit theorem. We consider \(W_{k,N}\) as an \(\mathbb{R }^2\)-valued random vector \((\mathrm{Re}W_{k,N}, \mathrm{Im}W_{k,N})\). Let \(\Sigma _N\) be the covariance matrix of this vector. First, we show that

$$\begin{aligned} \lim _{N\rightarrow \infty } N\Sigma _N= \left(\begin{array}{ll} 1/2&0\\ 0&1/2 \end{array}\right)\!. \end{aligned}$$
(3.7)

We have

$$\begin{aligned} N\mathbb{E }[(\mathrm{Re}W_N)^2+(\mathrm{Im}W_N)^2]=N\mathbb{E }[|W_N|^2]=N^{-2\sigma ^2} \mathbb{E }[\mathrm{e}^{2\sigma \sqrt{n} X}]=1. \end{aligned}$$
(3.8)

Also, we have \(N \mathbb{E }[W_N^2]=N^{-2\tau ^2 +4i \sigma \tau \rho }\) by Lemma 3.7. Since we assume that \(\tau \ne 0\), this implies that \(\lim _{N\rightarrow \infty }N \mathbb{E }[W_N^2]=0\). By taking real and imaginary parts, we obtain that

$$\begin{aligned} \lim _{N\rightarrow \infty } N\mathbb{E }[(\mathrm{Re}W_N)^2-(\mathrm{Im}W_N)^2]=\lim _{N\rightarrow \infty } N\mathbb{E }[(\mathrm{Re}W_N)(\mathrm{Im}W_N)]=0. \end{aligned}$$
(3.9)

Combining (3.8) and (3.9), we get

$$\begin{aligned} \lim _{N\rightarrow \infty } N\mathbb{E }[(\mathrm{Re}W_N)^2]=\lim _{N\rightarrow \infty } N\mathbb{E }[(\mathrm{Im}W_N)^2]=1/2. \end{aligned}$$
(3.10)

Also, by Lemma 3.7, we have

$$\begin{aligned} \lim _{N\rightarrow \infty } \sqrt{N} \mathbb{E }[W_N] = \lim _{N\rightarrow \infty } N^{-(\sigma ^2+\tau ^2-2i\sigma \tau \rho )/2}=0. \end{aligned}$$
(3.11)

It follows from (3.9), (3.10), (3.11) that (3.7) holds. Fix an arbitrary \(\varepsilon >0\). We complete the proof of the theorem by verifying the Lindeberg condition

$$\begin{aligned} \lim _{N\rightarrow \infty } N \mathbb{E }[|W_N-\mathbb{E }W_N|^2 {\small 1}\!\!1_{|W_N-\mathbb{E }W_N|>\varepsilon }]=0. \end{aligned}$$
(3.12)

Assume first that \(\sigma \ne 0\), say \(\sigma >0\). Write \( a_N=\sigma +\frac{1}{2\sigma }+\frac{\log \varepsilon }{\sigma n}\). Then, \(\lim _{N\rightarrow \infty } a_N=\sigma +\frac{1}{2\sigma }>2\sigma \) by the assumption \(\sigma ^2<1/2\). Hence, by Part 2 of Lemma 3.4, we have

$$\begin{aligned} \lim _{N\rightarrow \infty } N \mathbb{E }[|W_N|^2 {\small 1}\!\!1_{|W_N|>\varepsilon }]=\lim _{N\rightarrow \infty }\mathrm{e}^{-2\sigma ^2 n}\mathbb{E }[\mathrm{e}^{2\sigma \sqrt{n} X}{\small 1}\!\!1_{X>\sqrt{n} a_N}]=0. \end{aligned}$$
(3.13)

This also trivially holds for \(\sigma =0\). Together with (3.11), (3.13) implies (3.12). \(\square \)
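The convergence just proved can be observed in simulation: for parameters in the Gaussian phase, the empirical variances of the real and imaginary parts of the normalized, centered sum approach \(1/2\), and the two parts decorrelate. A sketch with illustrative parameters (\(\rho =0\) for simplicity; the tolerances account for finite-\(N\) bias and Monte Carlo error):

```python
import numpy as np

rng = np.random.default_rng(4)

# illustrative parameters in the Gaussian phase: sigma^2 < 1/2, tau != 0;
# rho = 0, so X_k and Y_k are independent
sigma, tau = 0.4, 0.5
N, reps = 5000, 3000
n = np.log(N)

S = np.empty(reps, dtype=complex)
for r in range(reps):
    X = rng.standard_normal(N)
    Y = rng.standard_normal(N)
    S[r] = np.exp(np.sqrt(n) * (sigma * X + 1j * tau * Y)).sum()

# normalize as in the proof: scale by N^{-1/2-sigma^2}, center by N*E[W_N],
# which is real here since rho = 0
S = S * N ** (-0.5 - sigma**2) - N ** ((1.0 - sigma**2 - tau**2) / 2.0)

# the limit is standard complex Gaussian: Var(Re) = Var(Im) = 1/2, Cov = 0
assert abs(np.var(S.real) - 0.5) < 0.1
assert abs(np.var(S.imag) - 0.5) < 0.1
assert abs(np.mean(S.real * S.imag)) < 0.1
```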

Proof of Theorem 2.19

Without loss of generality, let \(\sigma =1/\sqrt{2}\). For \(k=1,\ldots ,N\), define

$$\begin{aligned} W_{k,N}=N^{-1} \mathrm{e}^{\sqrt{n} (\sigma X_k +i \tau Y_k)}. \end{aligned}$$

Let \(W_N\) be a random variable with the same distribution as the \(W_{k,N}\)’s. To prove the theorem, we need to verify that

$$\begin{aligned} \sum _{k=1}^N (W_{k,N}-\mathbb{E }W_{k,N}) \overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}N_{\mathbb{C }}(0,1/2). \end{aligned}$$

As we will see in equation (3.14) below, the Lindeberg condition (3.12) is not satisfied. We are going to apply Theorem 3.8 instead. Fix \(\varepsilon >0\) and let \(a_N=\sqrt{2}+\frac{\sqrt{2}\log \varepsilon }{n}\). By Lemma 3.5, Eq. (3.4) with \(c=0\), we have

$$\begin{aligned} \lim _{N\rightarrow \infty } N\mathbb{E }[|W_{N}|^2 {\small 1}\!\!1_{|W_N|<\varepsilon }] =\lim _{N\rightarrow \infty } N^{-1}\mathbb{E }[ \mathrm{e}^{\sqrt{2n} X} {\small 1}\!\!1_{ X<\sqrt{n} a_N }] =1/2. \end{aligned}$$
(3.14)

If \(|\rho |\ne 1\), then by Lemma 3.7 and (3.14),

$$\begin{aligned} \lim _{N\rightarrow \infty } \left|N\mathbb{E }[W_{N}^2 {\small 1}\!\!1_{|W_N|<\varepsilon }]\right| \le \lim _{N\rightarrow \infty } \mathrm{e}^{-2n (1-\rho ^2) \tau ^2} N^{-1} \mathbb{E }[\mathrm{e}^{\sqrt{2n} X} {\small 1}\!\!1_{ X<\sqrt{n} a_N }]=0. \end{aligned}$$
(3.15)

The result of (3.15) continues to hold for \(|\rho |=1\) since in this case, Lemma 3.7 and Lemma 3.5 (first part of (3.3)) yield, as \(N\rightarrow \infty \),

$$\begin{aligned} N\mathbb{E }[ W_{N}^2 {\small 1}\!\!1_{|W_N|<\varepsilon } ]=N^{-1}\mathbb{E }[\mathrm{e}^{2\sqrt{n} (\sigma +i\tau \rho )X} {\small 1}\!\!1_{X<\sqrt{n} a_N}]=o(N^{-1}\mathrm{e}^{n(\sqrt{2}a_N-\frac{1}{2} a_N^2)}) \rightarrow 0. \end{aligned}$$

By Remark 3.6 we have, as \(N\rightarrow \infty \),

$$\begin{aligned} \mathbb{E }[\mathrm{e}^{\sqrt{n} \sigma X}{\small 1}\!\!1_{X>\sqrt{n} a_N}] \sim \frac{1}{\sqrt{\pi n}} \mathrm{e}^{n \left(\frac{1}{\sqrt{2}}a_N-\frac{1}{2} a_N^2\right)} \sim \frac{1}{\varepsilon \sqrt{\pi n}}. \end{aligned}$$

It follows that

$$\begin{aligned} \lim _{N\rightarrow \infty } N \mathbb{E }[|W_N|{\small 1}\!\!1_{|W_N|>\varepsilon }]=\lim _{N\rightarrow \infty } \mathbb{E }[\mathrm{e}^{\sqrt{n} \sigma X}{\small 1}\!\!1_{X>\sqrt{n} a_N}]=0. \end{aligned}$$
(3.16)

We consider \(W_N\) as an \(\mathbb{R }^2\)-valued random vector \((\mathrm{Re}W_N, \mathrm{Im}W_N)\). It follows from (3.14), (3.15), (3.16) that the covariance matrix \(\Sigma _N:=\mathrm{Var}[W_N {\small 1}\!\!1_{|W_N|<\varepsilon }]\) satisfies

$$\begin{aligned} \lim _{N\rightarrow \infty } N\Sigma _N= \left(\begin{array}{l@{\quad }l} 1/4&0\\ 0&1/4 \end{array}\right)\!. \end{aligned}$$
(3.17)

It follows from (3.16) that \(\lim _{N\rightarrow \infty } N \mathbb{P }[|W_N|>\varepsilon ] = 0\). Therefore, the conditions of Theorem 3.8 are satisfied with \(\nu =0\) and \(\Sigma \) given by the right-hand side of (3.17). Applying Theorem 3.8, we obtain the required statement. \(\square \)

Proof of Theorem 2.20

Recall that \(\alpha =\sqrt{2}/\sigma \in (0,2)\). For \(k=1,\ldots ,N\), define random variables

$$\begin{aligned} W_{k,N}=\mathrm{e}^{\sqrt{n} (\sigma X_k+i\tau Y_k-\sigma b_N)}. \end{aligned}$$

Let \(W_N\) be a random variable having the same law as the \(W_{k,N}\)’s. We will verify the conditions of Theorem 3.8. To verify the first condition, fix \(0<r_1<r_2\), \(0<\theta _1<\theta _2<2\pi \) and consider the set

$$\begin{aligned} B=\{z\in \mathbb{C }:r_1<|z|<r_2, \theta _1<\arg z<\theta _2\}. \end{aligned}$$

We will show that

$$\begin{aligned} \lim _{N\rightarrow \infty } N\mathbb{P }[W_N\in B]=\left(\frac{1}{r_1^{\alpha }}-\frac{1}{r_2^{\alpha }}\right) \cdot \frac{\theta _2-\theta _1}{2\pi }. \end{aligned}$$
(3.18)

Define a set

$$\begin{aligned} A_N=\bigcup _{j\in \mathbb{Z }} \left(\frac{2\pi j+\theta _1}{\tau \sqrt{n}}, \frac{2\pi j+\theta _2}{\tau \sqrt{n}}\right)\subset \mathbb{R }. \end{aligned}$$

We have

$$\begin{aligned} \mathbb{P }[W_N\in B]&= \mathbb{P }[\mathrm{e}^{\sigma \sqrt{n}(X-b_N)} \in (r_1,r_2),Y \in A_N]\\&= \int \limits _{r_1}^{r_2} \mathbb{P }\left[Y\in A_N \mid \sigma \sqrt{n}(X-b_N)=\log r\right] f_N(r) \mathrm{d}r. \end{aligned}$$

Here, \(f_N(r)\) is the density of the log-normal random variable \(\mathrm{e}^{\sqrt{n} \sigma (X-b_N)}\):

$$\begin{aligned} f_N(r)=\frac{1}{\sqrt{2\pi n}\sigma r} \exp \left\{ -\frac{1}{2}\left(\frac{\log r}{\sigma \sqrt{n}}+b_N\right)^2\right\} \sim \frac{1}{N} \alpha r^{-(1+\alpha )},\quad N\rightarrow \infty ,\qquad \quad \end{aligned}$$
(3.19)

where the asymptotic equivalence holds uniformly in \(r\in [r_1,r_2]\). To prove (3.19), recall that \(\sqrt{2\pi }b_N\mathrm{e}^{b_N^2/2}\sim N\) and \(b_N\sim \sqrt{2n}\). Conditionally on \(\sigma \sqrt{n}(X-b_N)=\log r\), the random variable \(Y\) is normal with mean \(\mu _N=\rho (\frac{\log r}{\sigma \sqrt{n}}+b_N)\) and variance \(1-\rho ^2\). The variance is strictly positive by the assumption \(|\rho |\ne 1\). It follows easily that

$$\begin{aligned} \lim _{N\rightarrow \infty }\mathbb{P }[Y \in A_N \mid \sigma \sqrt{n}(X-b_N)=\log r]=\frac{\theta _2-\theta _1}{2\pi }. \end{aligned}$$

Bringing everything together, we arrive at (3.18). So, the first condition of Theorem 3.8 holds with

$$\begin{aligned} \nu (\mathrm{d}x \mathrm{d}y)=\frac{\alpha }{2\pi }\cdot \frac{\mathrm{d}x\mathrm{d}y}{r^{2+\alpha }},\quad r=\sqrt{x^2+y^2}. \end{aligned}$$

To verify the second condition of Theorem 3.8 with \(\Sigma =0\), it suffices to show that

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}\limsup _{N\rightarrow \infty } N\mathbb{E }[|W_N|^2 {\small 1}\!\!1_{|W_N|\le \varepsilon }]=0. \end{aligned}$$
(3.20)

Condition \(|W_N|\le \varepsilon \) is equivalent to \(X<a_N\), where \(a_N=b_N+\frac{\log \varepsilon }{\sigma \sqrt{n}}\sim \sqrt{2n}\). By Lemma 3.5 (first case of (3.3)) with \(w=2\sigma \) and \(a(n)=a_N/\sqrt{n}\rightarrow \sqrt{2}\), we have

$$\begin{aligned} \mathbb{E }[\mathrm{e}^{2\sigma \sqrt{n} X}{\small 1}\!\!1_{X<a_N}]\sim C n^{-1/2} \mathrm{e}^{2\sigma \sqrt{n} a_N- a_N^2/2}\sim CN^{-1}\mathrm{e}^{2\sigma \sqrt{n} b_N}\varepsilon ^{2-\sqrt{2}/\sigma },\quad N\rightarrow \infty , \end{aligned}$$

where we have again used that \(\sqrt{2\pi }b_N \mathrm{e}^{b_N^2/2}\sim N\). We obtain that

$$\begin{aligned} \lim _{N\rightarrow \infty } N\mathbb{E }[|W_N|^2 {\small 1}\!\!1_{|W_N|\le \varepsilon }]=\lim _{N\rightarrow \infty } N \mathrm{e}^{-2\sigma \sqrt{n} b_N} \mathbb{E }[\mathrm{e}^{2\sigma \sqrt{n} X}{\small 1}\!\!1_{X<a_N}]=C\varepsilon ^{2-\sqrt{2}/\sigma }. \end{aligned}$$

Recalling that \(2>\sqrt{2}/\sigma \), we arrive at (3.20). By Theorem 3.8,

$$\begin{aligned} \sum _{k=1}^N (W_{k,N}-\mathbb{E }[W_{N}{\small 1}\!\!1_{|W_N|<1}])\overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}S_{\alpha }, \end{aligned}$$

where the limiting random vector \(S_{\alpha }\) is infinitely divisible with a characteristic function given by

$$\begin{aligned} \psi (z):=\log \mathbb{E }[\mathrm{e}^{i \langle S_\alpha , z \rangle }]=\frac{\alpha }{2\pi } \int _{\mathbb{R }^2} (\mathrm{e}^{i \langle u, z \rangle }-1-i \langle u, z \rangle {\small 1}\!\!1_{|u|<1}) \frac{\mathrm{d}x \mathrm{d}y}{|u|^{2+\alpha }}, \quad z\in \mathbb{C }. \end{aligned}$$

Here, \(u=x+iy\) and \(\langle u, z \rangle = \mathrm{Re}(u \bar{z})\). Clearly, \(\psi (z)\) depends on \(|z|\) only and satisfies \(\psi (\lambda z)=\lambda ^{\alpha }\psi (z)\) for every \(\lambda >0\). It follows that \(\psi (z)=\text{ const}\cdot |z|^{\alpha }\). \(\square \)
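The radial part of the limit (3.18) can be checked without any simulation: \(N\,\mathbb{P }[|W_N|>r]=N\,(1-\Phi (b_N+\frac{\log r}{\sigma \sqrt{n}}))\) is an explicit Gaussian tail once \(b_N\) is computed from \(\sqrt{2\pi }b_N\mathrm{e}^{b_N^2/2}=N\). A sketch with illustrative values \(n=100\), \(\sigma =1\) (so \(\alpha =\sqrt{2}\)); the tolerance reflects the slow \(O(\log n/\sqrt{n})\) convergence of the tail exponent:

```python
from math import sqrt, log, exp, pi, erfc

def Phi_bar(x):
    # upper tail of the standard normal distribution
    return 0.5 * erfc(x / sqrt(2.0))

def b(N):
    # b_N solving sqrt(2*pi) * b * exp(b^2/2) = N, by bisection on the log scale
    lo, hi = 0.1, 100.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if log(sqrt(2.0 * pi) * mid) + mid * mid / 2.0 < log(N):
            lo = mid
        else:
            hi = mid
    return lo

n = 100.0
N = exp(n)
sigma = 1.0
alpha = sqrt(2.0) / sigma
bN = b(N)

for r in [0.5, 1.0, 2.0, 4.0]:
    # N * P[ e^{sigma*sqrt(n)*(X - b_N)} > r ] should approach r^{-alpha}
    tail = N * Phi_bar(bN + log(r) / (sigma * sqrt(n)))
    assert abs(tail / r ** (-alpha) - 1.0) < 0.1
```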

Proof of Remark 2.22

We prove (2.15) and (2.16) first. Let \(\sigma >1/\sqrt{2}\), \(\tau \ne 0\), and \(|\rho |<1\). By Lemma 3.7, we have

$$\begin{aligned} m_N := N\mathbb{E }[\mathrm{e}^{\sqrt{n}(\sigma X + i\tau Y)}{\small 1}\!\!1_{X<b_N}]=N^{1-\frac{1}{2} \tau ^2(1-\rho ^2)} \mathbb{E }[\mathrm{e}^{\sqrt{n}(\sigma +i\tau \rho )X}{\small 1}\!\!1_{X<b_N}].\qquad \quad \end{aligned}$$
(3.21)

Write \(w=\sigma +i\tau \rho \). Recall from (2.12) that

$$\begin{aligned} \sqrt{2\pi }b_N \mathrm{e}^{b_N^2/2} \sim N, \;\;\; b_N\sim \sqrt{2n},\quad N\rightarrow \infty . \end{aligned}$$
(3.22)

Applying Lemma 3.5, we obtain

$$\begin{aligned} \mathbb{E }[\mathrm{e}^{\sqrt{n}(\sigma +i\tau \rho )X}{\small 1}\!\!1_{X<b_N}] \sim \left\{ \begin{array}{ll} \frac{1}{(w/\sqrt{2})-1} N^{-1}\mathrm{e}^{w\sqrt{n} b_N},&\sigma +|\tau \rho |>\sqrt{2},\\ \mathrm{e}^{\frac{1}{2} w^2n},&\sigma +|\tau \rho | \le \sqrt{2}. \end{array}\right. \end{aligned}$$
(3.23)

In the case \(\sigma +|\tau \rho |=\sqrt{2}\), we applied Remark 3.6 and noted that the first term in (3.5) dominates the second one. \(\square \)

Proof of (2.15)

Assume that \(\sigma +|\tau |>\sqrt{2}\). If even the stronger condition \(\sigma +|\tau \rho |>\sqrt{2}\) is satisfied, then it follows from (3.21) and the first line of (3.23) that

$$\begin{aligned} |m_N|\sim C\mathrm{e}^{\sigma \sqrt{n} b_N-\frac{1}{2} \tau ^2(1-\rho ^2)n}=o(\mathrm{e}^{\sigma \sqrt{n} b_N }). \end{aligned}$$
(3.24)

The last step follows from \(\tau \ne 0\) and \(|\rho |<1\). If \(\sigma +|\tau |>\sqrt{2}\) but \(\sigma +|\tau \rho |\le \sqrt{2}\), then it follows from (3.21) and the second line of (3.23) that

$$\begin{aligned} |m_N|\sim N^{1+\frac{1}{2} (\sigma ^2-\tau ^2)}=o(\mathrm{e}^{\sigma \sqrt{n} b_N}). \end{aligned}$$
(3.25)

The last step follows from the inequality \(1+\frac{1}{2} (\sigma ^2-\tau ^2)<\sqrt{2} \sigma \). It follows from (3.24) and (3.25) that we can rewrite Theorem 2.20 in the form (2.15). \(\square \)

Proof of (2.16)

Assume that \(\sigma +|\tau |\le \sqrt{2}\). Then, \(\sigma -|\tau \rho |<\sqrt{2}\) and it follows from (3.21) and Remark 3.6 that

$$\begin{aligned} m_N=N^{1-\frac{1}{2} \tau ^2(1-\rho ^2)}(\mathrm{e}^{\frac{1}{2} w^2n}+O(N^{-1}\mathrm{e}^{w\sqrt{n} b_N}))=N^{1+\frac{1}{2} (\sigma ^2-\tau ^2)+i\sigma \tau \rho }+o(\mathrm{e}^{\sigma \sqrt{n} b_N}). \end{aligned}$$

The last step follows from \(\tau \ne 0\) and \(|\rho |<1\). Hence, we can rewrite Theorem 2.20 in the form (2.16). \(\square \)

We now proceed to the proof of (2.17) and (2.18). Let \(\sigma >1/\sqrt{2}\) and \(\rho =1\). Our starting point is (2.14).

Proof of (2.17)

Assume that \(\sigma +|\tau |>\sqrt{2}\) and \(\sigma \ne \sqrt{2}\). By Lemma 3.5, first line of (3.3), we have

$$\begin{aligned} m_N := N\mathbb{E }[\mathrm{e}^{\beta \sqrt{n} X}{\small 1}\!\!1_{X<b_N}] \sim \frac{N}{\sqrt{2\pi n}(\beta -\sqrt{2})}\mathrm{e}^{\beta \sqrt{n} b_N- \frac{1}{2} b_N^2} \sim \frac{\sqrt{2}}{\beta -\sqrt{2}}\mathrm{e}^{\beta \sqrt{n} b_N}, \end{aligned}$$

where we have used (3.22). Recall that

$$\begin{aligned} \tilde{\zeta }_P\left(\frac{\beta }{\sqrt{2}}\right)+\frac{\sqrt{2}}{\beta -\sqrt{2}} = \zeta _P\left(\frac{\beta }{\sqrt{2}}\right). \end{aligned}$$
(3.26)

It follows that we can rewrite (2.14) in the form (2.17). \(\square \)

Proof of (2.18)

Assume that \(\sigma +|\tau |\le \sqrt{2}\). By Remark 3.6,

$$\begin{aligned} m_N := N\mathbb{E }[\mathrm{e}^{\beta \sqrt{n} X}{\small 1}\!\!1_{X<b_N}]=N^{1+\frac{1}{2}\beta ^2} + \frac{\sqrt{2}+o(1)}{\beta -\sqrt{2}}\mathrm{e}^{\beta \sqrt{n} b_N}. \end{aligned}$$

It follows that we can rewrite (2.14) in the form (2.18). \(\square \)

3.3 Proof of Theorem 2.15

We will deduce the stochastic convergence of the log-partition function \(p_N(\beta )=\frac{1}{n}\log |\mathcal{Z }_N(\beta )|\) from the weak convergence of \(\mathcal{Z }_N(\beta )\). This will be done via Lemma 3.9 stated below. One may ask whether there exists a more direct way to prove the convergence of \(p_N(\beta )\). A standard method to handle such questions for real \(\beta \) is to use the Gaussian concentration inequality; see [43, Theorem 1.3.4]. To apply it, we need to verify that the function \(p_N(\beta )\) is Lipschitz in the variables \(X_1,\ldots ,X_N\). This is easy to do in the real setting, but if \(\beta \) is complex, we have \(\mathcal{Z }_N(\beta )=0\) for some non-empty set of tuples \(X_1,\ldots ,X_N\). Thus, \(p_N(\beta )\) is not even finite, so that the Lipschitz property does not hold. The possibility of having infinite \(p_N(\beta )\) is not just a technical difficulty, especially in view of the fact that the zeros of \(\mathcal{Z }_N\) become dense in \(B_3\) in the large N limit.

Lemma 3.9

Let \(Z,Z_1,Z_2,\ldots \) be random variables with values in \(\mathbb{C }\) and let \(m_N\in \mathbb{C }\), \(v_N\in \mathbb{C }\backslash \{0\}\) be sequences of normalizing constants such that

$$\begin{aligned} \frac{Z_N-m_N}{v_N}\overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}Z. \end{aligned}$$
(3.27)

The following two statements hold:

  1. (1)

    If \(|v_N|=o(|m_N|)\) and \(|m_N|\rightarrow \infty \) as \(N\rightarrow \infty \), then \(\frac{\log |Z_N|}{\log |m_N|}\overset{{P}}{\underset{N\rightarrow \infty }{\longrightarrow }}1\).

  2. (2)

    If \(|m_N|\!=\!O(|v_N|)\), \(|v_N|\!\rightarrow \! \infty \) as \(N\!\rightarrow \!\infty \) and Z has no atoms, then \(\frac{\log |Z_N|}{\log |v_N|}\overset{{P}}{\underset{N\rightarrow \infty }{\longrightarrow }}1\).

Proof of (1)

Fix \(\varepsilon >0\). For sufficiently large N, we have \(|m_N|>1\) and, hence,

$$\begin{aligned} \mathbb{P }\left[1-\varepsilon <\frac{\log |Z_N|}{\log |m_N|}<1+\varepsilon \right]&= \mathbb{P }[|m_N|^{1-\varepsilon }<|Z_N|<|m_N|^{1+\varepsilon }]\\&\ge \mathbb{P }\left[\left|\frac{Z_N-m_N}{v_N}\right|<\frac{1}{2} \frac{|m_N|}{|v_N|}\right]. \end{aligned}$$

The right-hand side converges to 1 by our assumptions. \(\square \)

Proof of (2)

Fix \(\varepsilon >0\). For sufficiently large N,

$$\begin{aligned} \mathbb{P }\left[\frac{\log |Z_N|}{\log |v_N|}>1+\varepsilon \right] = \mathbb{P }\left[\frac{|Z_N|}{|v_N|}>|v_N|^{\varepsilon }\right] \le \mathbb{P }\left[\left|\frac{Z_N-m_N}{v_N}\right|>\frac{1}{2} |v_N|^{\varepsilon }\right]. \end{aligned}$$

The right-hand side converges to 0 by our assumptions. Consider now

$$\begin{aligned} \mathbb{P }\left[\frac{\log |Z_N|}{\log |v_N|}<1-\varepsilon \right]\!=\!\mathbb{P }\left[\frac{|Z_N|}{|v_N|}<|v_N|^{-\varepsilon }\right]\!=\!\mathbb{P }\left[ \left|\frac{Z_N-m_N}{v_N}+\frac{m_N}{v_N}\right|<|v_N|^{-\varepsilon }\right]. \end{aligned}$$

Assume that there is \(\delta >0\) such that the right-hand side is greater than \(\delta \) for infinitely many N’s. Recall that \(m_N/v_N\) is bounded. Taking a subsequence, we may assume that \(-m_N/v_N\) converges to some \(a\in \mathbb{C }\). Recall that \(|v_N|\rightarrow \infty \). But then, for every \(\eta >0\),

$$\begin{aligned} \mathbb{P }[|Z-a|<\eta ]\ge \limsup _{N\rightarrow \infty }\mathbb{P }\left[\left|\frac{Z_{N}-m_{N}}{v_{N}}-a\right|<\frac{\eta }{2}\right]>\delta . \end{aligned}$$

This contradicts the assumption that \(Z\) has no atoms. \(\square \)
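Case (1) of Lemma 3.9 can be illustrated directly: when \(m_N\) dominates \(v_N\), the fluctuation term perturbs \(\log |Z_N|\) only at relative scale \(|v_N|/|m_N|\). A sketch with an illustrative complex Gaussian limit \(Z\) and the hypothetical choice \(m_N=N^2\), \(v_N=N\):

```python
import numpy as np

rng = np.random.default_rng(5)

# non-degenerate limit Z: standard complex Gaussian (atomless)
reps = 1000
Z = (rng.standard_normal(reps) + 1j * rng.standard_normal(reps)) / np.sqrt(2.0)

# case (1): |v_N| = o(|m_N|) and |m_N| -> infinity
N = 10**6
m_N, v_N = float(N) ** 2, float(N)
Z_N = m_N + v_N * Z          # Z_N realized exactly as the shifted/scaled limit
ratio = np.log(np.abs(Z_N)) / np.log(m_N)
assert np.all(np.abs(ratio - 1.0) < 1e-3)
```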

Proof of Theorem 2.15

(Convergence in probability) Let \(p(\beta )\) be defined by (2.9). Note that \(p\) is a continuous function. We are going to prove that for every \(\beta \in \mathbb{C }\), \(\lim _{N\rightarrow \infty }p_N(\beta )=p(\beta )\) in probability. We may assume that \(\tau \ne 0\) since otherwise the result is known from Eq. (1.2); see [17, 37]. It follows from Theorems 2.16, 2.19, 2.20, and Remark 2.21 that condition (3.27) is satisfied with \(Z_N=\mathcal{Z }_N(\beta )\) and an appropriate choice of normalizing sequences \(m_N,v_N\). We will now verify that Lemma 3.9 is applicable.

  • Case 1. Let \(\sigma ^2\le 1/2\). In this case, \(m_N\) and \(v_N\) are given by Theorems 2.16, 2.19; see also Remark 2.18. Namely, \( |m_N|=N^{1+\frac{1}{2} (\sigma ^2-\tau ^2)} \) and \(v_N=N^{\frac{1}{2}+\sigma ^2}.\)

  • Case 1a. If in addition to \(\sigma ^2\le 1/2\) we have \(\sigma ^2+\tau ^2<1\), then \(1+\frac{1}{2} (\sigma ^2-\tau ^2)>\frac{1}{2}+\sigma ^2\) and we obtain \(|v_N|=o(|m_N|)\).

  • Case 1b. If in addition to \(\sigma ^2\le 1/2\) we have \(\sigma ^2+\tau ^2\ge 1\), then \(1+\frac{1}{2} (\sigma ^2-\tau ^2)\le \frac{1}{2}+\sigma ^2\) and we obtain \(|m_N|=O(|v_N|)\).

  • Case 2. Let \(\sigma ^2>1/2\) and, without restriction of generality, \(\sigma >1/\sqrt{2}\). Then, \(m_N\) and \(v_N\) are given by Remark 2.22. Namely, \(|v_N|=\mathrm{e}^{\sigma \sqrt{n} b_N}\) and the formula for \(m_N\) depends on whether \(\sigma +|\tau |>\sqrt{2}\) or not.

  • Case 2a. If \(\sigma >1/\sqrt{2}\) and \(\sigma + |\tau |>\sqrt{2}\), then \(m_N=0\), see (2.15) and (2.17). Thus, \(|m_N|=o(|v_N|)\) is satisfied.

  • Case 2b. If \(\sigma >1/\sqrt{2}\) and \(\sigma + |\tau |< \sqrt{2}\), then \(|m_N|=N^{1+\frac{1}{2}(\sigma ^2-\tau ^2)}\); see (2.16) and (2.18). From the inequality \(1+\frac{1}{2} (\sigma ^2-\tau ^2)>\sqrt{2} \sigma \) it follows that \(|v_N|=o(|m_N|)\).

  • Case 2c. If \(\sigma >1/\sqrt{2}\) and \(\sigma + |\tau |=\sqrt{2}\), then \(1+\frac{1}{2}(\sigma ^2-\tau ^2)=\sqrt{2} \sigma \). However, since \(\sqrt{n} b_N-\sqrt{2}n\rightarrow {-}\infty \) by (2.12), we still have \(|v_N|=o(|m_N|)\).

To summarize, the normalizing constants \(m_N\) and \(v_N\) satisfy the first condition of Lemma 3.9 if \(\beta \in B_1\) or \(\beta \) belongs to one of four (open) line segments on the boundary of \(B_1\) and \(B_2\). Otherwise, \(m_N\) and \(v_N\) satisfy the second condition of Lemma 3.9. Note that we need also to verify that the random variable \(\zeta _P(\beta /\sqrt{2})\) has no atoms if \(\sigma >1/\sqrt{2}\). This will be done in Lemma 3.10, below. Applying Lemma 3.9, we obtain that \(p_N(\beta )\rightarrow p(\beta )\) in probability. \(\square \)

Lemma 3.10

If \(\sigma >1/2\), then the random variable \(\zeta _P(\beta )\) has no atoms in \(\mathbb{C }\).

Proof

For a random variable \(Y\) with values in \(\mathbb{C }\), let \(Q(Y)=\sup _{y\in \mathbb{C }} \mathbb{P }[Y=y]\) be the weight of the maximal atom of \(Y\). Note that \(Q\) is a special case of the concentration function; see [38, p. 22]. For independent random variables \(Y_1\) and \(Y_2\), the convolution formula implies that

$$\begin{aligned} Q(Y_1+Y_2)\le \min (Q(Y_1),Q(Y_2)). \end{aligned}$$
(3.28)

Also, \(Q(Y+c)=Q(Y)\) for every \(c\in \mathbb{C }\). Let \(P_1<P_2<\ldots \) be the points of a unit intensity Poisson point process on the positive half-line. For \(T>1\), we write

$$\begin{aligned} \zeta _P(\beta )=\tilde{\zeta }_P(\beta ; T)+R(\beta ; T), \end{aligned}$$
(3.29)

where \(R(\beta ;T)\) is a rest term and \(\tilde{\zeta }_P(\beta ;T)\) is defined as in (2.4), that is

$$\begin{aligned} \tilde{\zeta }_P(\beta ;T)=\sum _{k=1}^{\infty }\frac{1}{P_k^{\beta }}{\small 1}\!\!1_{P_k\in [0,T]}-\int \limits _1^Tt^{-\beta }\mathrm{d}t. \end{aligned}$$
(3.30)

Note that \(\tilde{\zeta }_P(\beta ;T)\) and \(R(\beta ; T)\) are independent random variables since \(R(\beta ;T)\) depends only on those points of the Poisson process which are in the interval \((T,\infty )\).

We will show that \(Q(\tilde{\zeta }_P(\beta ;T))\le \mathrm{e}^{-T}\) for every \(T>1\). By (3.28) and (3.29) this implies that \(Q(\zeta _P(\beta ))=0\), which is the desired result. Consider the random events \(A_m(T)=\{\sum _{k=1}^{\infty } {\small 1}\!\!1_{P_k\in [0,T]}=m\}\), \(m\in \mathbb{N }_0\). That is, \(A_m(T)\) occurs if there are exactly \(m\) points of the Poisson point process in \([0,T]\). Note that \(\mathbb{P }[A_0(T)]=\mathrm{e}^{-T}\) and \(\tilde{\zeta }_P(\beta ;T)\) is constant on the event \(A_0(T)\). Let \(z\in \mathbb{C }\). By the total probability formula,

$$\begin{aligned} \mathbb{P }[\tilde{\zeta }_P(\beta ;T)=z] \le \mathrm{e}^{-T}+\sum _{m=1}^{\infty } \mathbb{P }[\tilde{\zeta }_P(\beta ;T)=z| A_m]. \end{aligned}$$
(3.31)

Conditionally on \(A_m\), where \(m\in \mathbb{N }\), the points \(P_1,\ldots ,P_m\) have the same distribution as the order statistics of \(m\) independent random variables \(U_1,\ldots ,U_m\) distributed uniformly on \([0,T]\). It follows that, for every \(m\in \mathbb{N }\),

$$\begin{aligned} \mathbb{P }[\tilde{\zeta }_P(\beta ;T)=z| A_m]\le Q\left(\sum _{k=1}^m U_k^{-\beta }\right) \le Q( U_1^{-\beta })=0, \end{aligned}$$
(3.32)

where the last equality follows from the fact that the random variable \(U_1^{-\beta }\) has no atoms. It follows from (3.31) and (3.32) that \(\mathbb{P }[\tilde{\zeta }_P(\beta ;T)=z]\le \mathrm{e}^{-T}\) for every \(z\in \mathbb{C }\), \(T>1\). This implies that \(Q(\tilde{\zeta }_P(\beta ;T))\le \mathrm{e}^{-T}\) and completes the proof. \(\square \)
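The concentration-function inequality used twice above, in its sharp form \(Q(Y_1+Y_2)\le \min (Q(Y_1),Q(Y_2))\) for independent \(Y_1,Y_2\), can be sanity-checked exactly on discrete laws. In the sketch below (not part of the proof), the two distributions are arbitrary examples with rational probabilities:

```python
from fractions import Fraction
from itertools import product

def Q(dist):
    """Weight of the largest atom of a discrete distribution {value: probability}."""
    return max(dist.values())

def convolve(d1, d2):
    """Distribution of Y1 + Y2 for independent discrete Y1, Y2."""
    out = {}
    for (y1, p1), (y2, p2) in product(d1.items(), d2.items()):
        out[y1 + y2] = out.get(y1 + y2, Fraction(0)) + p1 * p2
    return out

# two small discrete laws (exact rational probabilities)
Y1 = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 4)}
Y2 = {0: Fraction(1, 3), 5: Fraction(1, 3), 7: Fraction(1, 3)}

q_sum = Q(convolve(Y1, Y2))
print(q_sum, min(Q(Y1), Q(Y2)))   # the first value never exceeds the second
```

Here the atoms \(2+5\) and \(0+7\) of the sum collide at \(7\), yet the maximal atom of \(Y_1+Y_2\) (namely \(1/4\)) still stays below \(\min (Q(Y_1),Q(Y_2))=1/3\).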

Proof of Theorem 2.15

(Convergence in \(L^q\)) We are going to show that \(p_N(\beta )\rightarrow p(\beta )\) in \(L^q\), where \(q\ge 1\) is fixed. Since \(p_N(\beta )\rightarrow p(\beta )\) in probability and \(p(\beta )>0\) for every \(\beta \in \mathbb{C }\), we conclude that, for every \(C>p(\beta )\),

$$\begin{aligned} \lim _{N\rightarrow \infty } p_N(\beta ){\small 1}\!\!1_{0 \le p_N(\beta ) \le C+1}=p(\beta ) \text{ in } L^q. \end{aligned}$$

For every \(u\in \mathbb{R }\), we have

$$\begin{aligned} \mathbb{P }[p_N(\beta )\ge u] \le \mathrm{e}^{-n u} \mathbb{E }|\mathcal{Z }_N(\beta )| \le \mathrm{e}^{-n u} N \mathbb{E }[\mathrm{e}^{\sigma \sqrt{n} X}] =\mathrm{e}^{n(C-u)}, \end{aligned}$$

where \(C=1+\sigma ^2/2\). From this, we conclude that

$$\begin{aligned} \mathbb{E }\left[|p_N(\beta )|^q {\small 1}\!\!1_{p_N(\beta )>C+1}\right]&= \sum _{k=1}^{\infty } \mathbb{E }[|p_N(\beta )|^q{\small 1}\!\!1_{C+k < p_N(\beta ) \le C+k+1}]\\&\le \sum _{k=1}^{\infty } \mathrm{e}^{-nk}(C+k+1)^q, \end{aligned}$$

which converges to 0, as \(N\rightarrow \infty \). To complete the proof, we need to show that

$$\begin{aligned} \lim _{N\rightarrow \infty } \mathbb{E }\left[|p_N(\beta )|^q {\small 1}\!\!1_{p_N(\beta )<0}\right]=0. \end{aligned}$$
(3.33)

The problem is to bound the probability of small values of \(\mathcal{Z }_N(\beta )\), where the logarithm has a singularity and \(|p_N(\beta )|\) becomes large. This is non-trivial because of the presence of complex amplitudes in the definition of \(\mathcal{Z }_N(\beta )\); see (2.6). We have to show that there is not much cancellation among the terms in (2.6). Fix a small \(\varepsilon >0\). Clearly,

$$\begin{aligned} \mathbb{E }\left[|p_N(\beta )|^q {\small 1}\!\!1_{-\varepsilon \sigma ^2 \le p_N(\beta )\le 0}\right]\le (\varepsilon \sigma ^2)^q. \end{aligned}$$
(3.34)

To prove (3.33), we would like to estimate from above the probability \(\mathbb{P }[|\mathcal{Z }_N(\beta )|\le r]\) for \(0<r<\mathrm{e}^{-\varepsilon \sigma ^2 n}\). Recall that \(\mathcal{Z }_N(\beta )\) is a sum of N independent copies of the random variable \(\mathrm{e}^{\sqrt{n} (\sigma X+ i\tau Y)}\). Unfortunately, the distribution of the latter random variable does not possess nice regularity properties. For example, in the most interesting case \(\rho =1\) it has no density. This is why we need a smoothing argument. Denote by \(B_r(t)\) the disc of radius \(r\) centered at \(t\in \mathbb{C }\). Fix a large \(A>1\). We will show that uniformly in \(t\in \mathbb{C }\), \(1/A<|\beta | <A\), \(n>(2A)^2\), and \(0<r<\mathrm{e}^{-\varepsilon \sigma ^2 n}\),

$$\begin{aligned} \mathbb{P }[\mathrm{e}^{\sqrt{n} (\sigma X+ i\tau Y)}\in B_r(t)]<C r^{\frac{\varepsilon }{20}}. \end{aligned}$$
(3.35)

This inequality is stated in a form which will be needed later in the Proof of Theorem 2.1.

Let \(|t| \ge \sqrt{r}\) and \(\tau \ge 1/(2A)\). The argument \(\arg t\) of a complex number \(t\) is considered to have values in the circle \(\mathbb{T }=\mathbb{R }/2\pi \mathbb{Z }\). Let \(P:\mathbb{R }\rightarrow \mathbb{T }\) be the canonical projection. Denote by \(I_r(t)\) the sector \(\{z\in \mathbb{C }:|\arg z-\arg t|\le 2\sqrt{r}\}\), where we take the geodesic distance between the arguments. A simple geometric argument shows that the disc \(B_r(t)\) is contained in the sector \(I_r(t)\). The density of the random variable \(P(\tau \sqrt{n} Y)\) at \(\theta \in \mathbb{T }\) is given by

$$\begin{aligned} \mathbb{P }[P(\tau \sqrt{n} Y)\in \mathrm{d}\theta ]=\frac{1}{\sqrt{2\pi n} \tau }\sum _{k\in \mathbb{Z }} \mathrm{e}^{-(\theta +2\pi k)^2/(2\tau ^2 n)}\mathrm{d}\theta . \end{aligned}$$

By considering the right-hand side as a Riemann sum and recalling that \(\tau \ge 1/(2A)\), we see that the density converges to \(1/(2\pi )\) uniformly in \(\theta \in \mathbb{T }\) as \(N\rightarrow \infty \). We have

$$\begin{aligned} \mathbb{P }[\mathrm{e}^{\sqrt{n} (\sigma X+ i\tau Y)}\in B_r(t)] \le \mathbb{P }[\mathrm{e}^{\sqrt{n} (\sigma X+i\tau Y)}\in I_r(t)]<C\sqrt{r}, \end{aligned}$$

which implies (3.35).

Let now \(|t|<\sqrt{r}\). Then, recalling that \(\log r<-\varepsilon \sigma ^2 n\), we obtain

$$\begin{aligned} \mathbb{P }[ \mathrm{e}^{\sqrt{n} (\sigma X+ i\tau Y)}\in B_r(t)]\le \mathbb{P }[\mathrm{e}^{\sigma \sqrt{n} X}<r^{1/4}]=\mathbb{P }\left[X<\frac{\log r}{4\sigma \sqrt{n}}\right]<\mathrm{e}^{-\frac{(\log r)^2}{16\sigma ^2 n}}<r^{\frac{\varepsilon }{16}}. \end{aligned}$$

It remains to consider the case \(|t| \ge \sqrt{r}\), \(|\sigma | \ge 1/(2A)\). The density of the random variable \(\mathrm{e}^{\sigma \sqrt{n} X}\) is given by

$$\begin{aligned} g(x)=\frac{1}{\sqrt{2\pi n}\sigma x}\mathrm{e}^{-\frac{(\log x)^2}{2\sigma ^2 n}},\quad x>0. \end{aligned}$$

It attains its maximum at \(x_0=\mathrm{e}^{-\sigma ^2 n}\). The maximum is equal to \(g(x_0)=\frac{1}{\sqrt{2\pi n}\sigma } \mathrm{e}^{\sigma ^2 n/2}\). Let \(r \le (2\pi n)\sigma ^2 \mathrm{e}^{-\sigma ^2 n}\). Then,

$$\begin{aligned} \mathbb{P }[\mathrm{e}^{\sqrt{n} (\sigma X+ i\tau Y)} \in B_r(t)] \le \mathbb{P }[t-r\le \mathrm{e}^{\sigma \sqrt{n} X}\le t+r]\le \frac{Cr}{\sqrt{n}\sigma } \mathrm{e}^{\sigma ^2 n/2} \le C r^{1/2}. \end{aligned}$$

Let \(r \ge (2\pi n)\sigma ^2 \mathrm{e}^{-\sigma ^2 n}\), which, together with \(|\sigma |>1/(2A)\) and \(n>(2A)^2\), implies that \(r>\mathrm{e}^{-\sigma ^2 n}\). Using the unimodality of the density \(g\) and the inequality \(t-r>r\), we get

$$\begin{aligned} \mathbb{P }[\mathrm{e}^{\sqrt{n} (\sigma X+ i\tau Y)} \in B_r(t)]\le \mathbb{P }[t-r\le \mathrm{e}^{\sigma \sqrt{n} X}\le t+r]<2r g(r)<\mathrm{e}^{-\frac{(\log r)^2}{2\sigma ^2 n}}<r^{\frac{\varepsilon }{2}}. \end{aligned}$$

The last inequality follows from \(r<\mathrm{e}^{-\varepsilon \sigma ^2 n}\). This completes the proof of (3.35).
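The lognormal facts used in the last two cases, namely that \(g\) is maximized at \(x_0=\mathrm{e}^{-\sigma ^2 n}\) with \(g(x_0)=\frac{1}{\sqrt{2\pi n}\sigma }\mathrm{e}^{\sigma ^2 n/2}\), admit a quick numerical sanity check (illustration only; the parameter values \(n=30\), \(\sigma =0.9\) are arbitrary):

```python
import math

n = 30.0
sigma = 0.9

def g(x):
    """Density of exp(sigma * sqrt(n) * X) for X standard normal (lognormal)."""
    return math.exp(-(math.log(x))**2 / (2 * sigma**2 * n)) / (
        math.sqrt(2 * math.pi * n) * sigma * x)

x0 = math.exp(-sigma**2 * n)   # claimed maximizer of g
g_max = math.exp(sigma**2 * n / 2) / (math.sqrt(2 * math.pi * n) * sigma)

# g(x0) matches the closed form, and g decreases on both sides of x0
print(g(x0), g_max, g(0.9 * x0) < g(x0), g(1.1 * x0) < g(x0))
```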

Now we are in a position to complete the proof of (3.33). Let \(U_r\) be a random variable distributed uniformly on the disc \(B_r(0)\) and independent of all variables considered previously. It follows from (3.35) that the density of the random variable \(\mathrm{e}^{\sqrt{n} (\sigma X+ i\tau Y)}+U_r\) is bounded from above by \(Cr^{-2+(\varepsilon /20)}\). Hence, the density of \(\mathcal{Z }_N(\beta )+U_r\) is bounded by the same term \(Cr^{-2+(\varepsilon /20)}\). With the notation \(r=\mathrm{e}^{-kn}\) it follows that, for every \(k\ge \varepsilon \sigma ^2\),

$$\begin{aligned} \mathbb{P }[p_N(\beta ) \le -k] = \mathbb{P }[|\mathcal{Z }_N(\beta )|\le \mathrm{e}^{-kn}] \le \mathbb{P }[|\mathcal{Z }_N(\beta )+U_{r}| \le 2r ] \le Cr^{\frac{\varepsilon }{20}}. \end{aligned}$$

From this, we obtain that

$$\begin{aligned} \mathbb{E }[|p_N(\beta )|^q {\small 1}\!\!1_{p_N(\beta )\in [-k-1,-k]}] \le C (k+1)^{q} \mathrm{e}^{-\frac{\varepsilon kn}{20}}. \end{aligned}$$

Taking the sum over all \(k=\varepsilon \sigma ^2+l\), \(l=0,1,\ldots \), we get

$$\begin{aligned} \mathbb{E }[|p_N(\beta )|^q {\small 1}\!\!1_{p_N(\beta )<-\varepsilon \sigma ^2}] \le C \mathrm{e}^{-\frac{\varepsilon ^2 \sigma ^2 n}{20}} \sum _{l=1}^{\infty } l^{q} \mathrm{e}^{-\frac{\varepsilon ln}{20}} \le C \mathrm{e}^{-\frac{\varepsilon ^2 \sigma ^2 n}{20}}. \end{aligned}$$

Recalling (3.34), we arrive at (3.33). \(\square \)

Remark 3.11

As a byproduct of the proof, we have the following statement. For every \(A>0\), there is a constant \(C=C(A)\) such that \(\mathbb{E }|p_N(\beta )| < C\), for all \(1/A<|\beta |<A\) and sufficiently large \(N\).

4 Proofs of the results on zeros

4.1 Convergence of random analytic functions

In this section, we collect some lemmas on weak convergence of stochastic processes whose sample paths are analytic functions. As we will see, the analyticity assumption simplifies things considerably. For a metric space \(M\), denote by \(C(M)\) the space of complex-valued continuous functions on \(M\) endowed with the topology of uniform convergence on compact sets. Let \(D\subset \mathbb{C }\) be a simply connected domain.

Lemma 4.1

Let \(\{U(t):t\in D\}\) be a random analytic function defined on D. Let \(\Gamma \subset D\) be a closed differentiable contour and let \(K\) be a compact subset located strictly inside \(\Gamma \). Then, for every \(p\in \mathbb{N }_0\), there is a constant \(C=C_p(K,\Gamma )\) such that

$$\begin{aligned} \mathbb{E }\left[\sup _{t\in K} |U^{(p)}(t)|\right] \le C \oint _{\Gamma } \mathbb{E }|U(w)| |\mathrm{d}w|. \end{aligned}$$

Proof

By the Cauchy formula, \(|U^{(p)}(t)|\le C \oint _{\Gamma } |U(w)| |\mathrm{d}w|\), for all \(t\in K\). Take the supremum over \(t\in K\) and then the expectation. \(\square \)
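The pointwise Cauchy bound behind Lemma 4.1, \(|f^{(p)}(t)|\le \frac{p!}{2\pi }\oint _{\Gamma }\frac{|f(w)|}{|w-t|^{p+1}}|\mathrm{d}w|\), can be illustrated deterministically. The sketch below (not part of the proof; the choice \(f=\exp \), center \(0\), unit-circle contour, and step count are all arbitrary) discretizes the contour integral and checks that the bound dominates \(|f^{(p)}(0)|=1\):

```python
import cmath
import math

def cauchy_bound(f, p, center=0.0, radius=1.0, steps=2000):
    """Discretized (p!/(2*pi)) * contour integral of |f(w)| |dw| over a circle,
    divided by radius**(p+1): an upper bound for |f^{(p)}(center)|."""
    total = 0.0
    for k in range(steps):
        w = center + radius * cmath.exp(2j * math.pi * k / steps)
        total += abs(f(w)) * (2 * math.pi * radius / steps)
    return math.factorial(p) * total / (2 * math.pi * radius**(p + 1))

for p in range(4):
    # for f = exp, |f^{(p)}(0)| = 1; the contour bound must dominate it
    print(p, cauchy_bound(cmath.exp, p))
```

For \(p=0\) the bound equals \(\frac{1}{2\pi }\int _0^{2\pi }\mathrm{e}^{\cos \theta }\mathrm{d}\theta =I_0(1)\approx 1.266\).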

It is easy to check that a sequence of stochastic processes with paths in \(C(D)\) is tight (resp., weakly convergent) if and only if it is tight (resp., weakly convergent) in \(C(K)\), for every compact set \(K\subset D\).

Lemma 4.2

Let \(U_1,U_2,\ldots \) be random analytic functions on D. Assume that there is a continuous function \(f:D\rightarrow \mathbb{R }\) such that \(\mathbb{E }|U_N(t)|<f(t)\), for all \(t\in D\), and all \(N\in \mathbb{N }\). Then, the sequence \(U_N\) is tight on \(C(D)\).

Proof

Let \(K\subset D\) be a compact set. Let \(\Gamma \) be a contour enclosing \(K\) and located inside \(D\). By Lemma 4.1,

$$\begin{aligned} \mathbb{E }\left[\sup _{t\in K} |U_N(t)|\right] \le C \oint \limits _{\Gamma }f(w) |\mathrm{d}w|,\quad \mathbb{E }\left[\sup _{t\in K}|U_N^{\prime }(t)|\right]\le C \oint \limits _{\Gamma }f(w)|\mathrm{d}w|. \end{aligned}$$

By standard arguments, this implies that the sequence \(U_N\) is tight on \(C(K)\). \(\square \)

Lemma 4.3

Let \(U,U_1,U_2,\ldots \) be random analytic functions on D such that \(U_N\) converges as \(N\rightarrow \infty \) to \(U\) weakly on \(C(D)\) and \(\mathbb{P }[U\equiv 0]=0\). Then, for every continuous function \(f:D\rightarrow \mathbb{R }\) with compact support, we have

$$\begin{aligned} \sum _{z\in \mathbb{C }:U_N(z)=0}f(z)\overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}\sum _{z\in \mathbb{C }:U(z)=0}f(z). \end{aligned}$$

Remark 4.4

Equivalently, the zero set of \(U_N\), considered as a point process on \(D\), converges weakly to the zero set of \(U\).

Proof

Let \(H\) be the closed linear subspace of \(C(D)\) consisting of all analytic functions. Consider a functional \(\Psi :H\rightarrow \mathbb{R }\) mapping an analytic function \(\varphi \) which is not identically \(0\) to \(\sum _{z}f(z)\), where the sum is over all zeros of \(\varphi \). Define also \(\Psi (0)=0\). It is an easy consequence of Rouché’s theorem that \(\Psi \) is continuous on \(H\backslash \{0\}\). Note that \(H\backslash \{0\}\) is a set of full measure with respect to the law of \(U\). Recall that \(U_N\rightarrow U\) weakly on \(H\). By the continuous mapping theorem [39, §3.5], \(\Psi (U_N)\) converges in distribution to \(\Psi (U)\). This proves the lemma. \(\square \)

4.2 Proof of Theorem 2.1

A standard approximation argument shows that we can assume that \(f\) is infinitely differentiable. Let \(\lambda \) be the Lebesgue measure on \(\mathbb{C }\). In his computation of the limiting density of zeros, Derrida [13] used the fact that \(\frac{1}{2\pi }\Delta \log |\mathcal{Z }_N|\) (where \(\Delta \) is the Laplacian interpreted in the distributional sense) gives the measure counting the zeros of \(\mathcal{Z }_N\). That is,

$$\begin{aligned} \sum _{\beta \in \mathbb{C }:\mathcal{Z }_N(\beta )=0}f(\beta )=\frac{1}{2\pi } \int \limits _{\mathbb{C }}\log |\mathcal{Z }_N(\beta )|\Delta f(\beta )\lambda (\mathrm{d}\beta ). \end{aligned}$$
(4.1)

A proof of (4.1) can be found in [21, Section 2.4]. Recall that \(p(\beta )\) has been defined in Theorem 2.15. We have

$$\begin{aligned} \int \limits _{\mathbb{C }}p(\beta )\Delta f(\beta )\lambda (\mathrm{d}\beta )=\int \limits _{\mathbb{C }} f(\beta )\Xi (\mathrm{d}\beta ). \end{aligned}$$
(4.2)

Indeed, Green’s identity gives

$$\begin{aligned} \int \limits _{B_i}\!\!p(\beta )\Delta f(\beta )\lambda (\mathrm{d}\beta )\!=\!\int \limits _{B_i}\!\!\Delta p(\beta )f(\beta )\lambda (\mathrm{d}\beta )\!+\!\oint \limits _{\partial B_i}\left(p(\beta )\frac{\partial f(\beta )}{\partial \mathbf{n }}-f(\beta )\frac{\partial p(\beta )}{\partial \mathbf{n }}\right)|\mathrm{d}\beta |. \end{aligned}$$

Here, \(\mathbf{n }\) is the unit normal to the boundary of \(B_i\) pointing outward from \(B_i\) and \(\frac{\partial }{\partial \mathbf{n }}\) is the corresponding directional derivative. The first term on the right-hand side is equal to \(2\int _{\mathbb{C }}f(\beta )\Xi _3(\mathrm{d}\beta )\) for \(i=3\) and to 0 for \(i=1,2\). Adding Green’s identities for \(i=1,2,3\), noting that \(\frac{\partial f}{\partial \mathbf{n }}\) has no jumps and computing the jumps of \(\frac{\partial p}{\partial \mathbf{n }}\) on the boundaries between the different \(B_i\)’s, we arrive at (4.2).

Recall that \(p_N(\beta )=\frac{1}{n} \log |\mathcal{Z }_N(\beta )|\). From (4.1) and (4.2), we conclude that Theorem 2.1 is equivalent to

$$\begin{aligned} \int \limits _{\mathbb{C }}p_N(\beta )\Delta f(\beta )\lambda (\mathrm{d}\beta )\overset{{P}}{\underset{N\rightarrow \infty }{\longrightarrow }}\int \limits _{\mathbb{C }}p(\beta )\Delta f(\beta )\lambda (\mathrm{d}\beta ). \end{aligned}$$

We will show that this holds even in \(L^1\). By Fubini’s theorem, it suffices to show that

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\int \limits _{\mathbb{C }}\mathbb{E }|p_N(\beta )-p(\beta )||\Delta f(\beta )|\lambda (\mathrm{d}\beta )=0. \end{aligned}$$
(4.3)

We know from Theorem 2.15 that \(\lim _{N\rightarrow \infty } \mathbb{E }|p_N(\beta )-p(\beta )|=0\), for every \(\beta \in \mathbb{C }\). To complete the proof, we need to interchange the limit and the integral. We may represent \(f\) as a sum of two functions, the first one vanishing on \(\{|\beta |<1/4\}\) and the second one vanishing outside \(\{|\beta |<1/2\}\). Since the contribution of the second function to (2.2) vanishes by Theorem 2.5, we may assume that \(f\) vanishes on \(\{|\beta |<1/4\}\). With this assumption, the use of the dominated convergence theorem is justified by Remark 3.11. \(\square \)

4.3 Proof of Theorem 2.5

The idea of the proof is to show that, in phase \(B_1\), the fluctuations of \(\mathcal{Z }_N(\beta )\) around its expectation are of smaller order than the expectation. We do not rely on the expression for the limiting log-partition function \(p\). Let \(\Gamma \) be a differentiable contour enclosing the set \(K\) and located inside \(B_1\). We have

$$\begin{aligned} \mathbb{P }[\mathcal{Z }_N(\beta )=0 \text{ for some } \beta \in K]&\le \mathbb{P }\left[\sup _{\beta \in K} \left|\frac{\mathcal{Z }_N(\beta )-\mathbb{E }\mathcal{Z }_N(\beta )}{\mathbb{E }\mathcal{Z }_N(\beta )}\right|\ge 1\right]\\&\le \mathbb{E }\sup _{\beta \in K} \left|\frac{\mathcal{Z }_N(\beta )-\mathbb{E }\mathcal{Z }_N(\beta )}{\mathbb{E }\mathcal{Z }_N(\beta )}\right|\\&\le C \oint \limits _{\Gamma } \mathbb{E }\left|\frac{\mathcal{Z }_N(w)-\mathbb{E }\mathcal{Z }_N(w)}{\mathbb{E }\mathcal{Z }_N(w)}\right| |\mathrm{d}w|, \end{aligned}$$

where the last step is by Lemma 4.1. Note that \(|\mathbb{E }\mathcal{Z }_N(\beta )|=N^{1+\frac{1}{2} (\sigma ^2-\tau ^2)}\). To complete the proof, we need to show that there exist \(\varepsilon >0\) and \(C>0\) depending on \(\Gamma \) such that, for every \(\beta \in \Gamma \), \(N\in \mathbb{N }\),

$$\begin{aligned} \mathbb{E }\left|\mathcal{Z }_N(\beta )-\mathbb{E }\mathcal{Z }_N(\beta )\right|<C N^{1-\varepsilon +\frac{1}{2}(\sigma ^2-\tau ^2)}. \end{aligned}$$
(4.4)

Since \(\Gamma \subset B_1\), we can choose \(\varepsilon >0\) so small that \(\Gamma \subset B_1^{\prime }(\varepsilon )\cup B_1^{\prime \prime }(\varepsilon )\), where

$$\begin{aligned} B_1^{\prime }(\varepsilon )&= \{\beta \in \mathbb{C }:\sigma ^2+\tau ^2<1-2\varepsilon \},\\ B_1^{\prime \prime }(\varepsilon )&= \{\beta \in \mathbb{C }:(|\sigma |-\sqrt{2})^2-\tau ^2>2\varepsilon , 1/2<\sigma ^2<2\}. \end{aligned}$$

We have

$$\begin{aligned} \mathbb{E }|\mathcal{Z }_N(\beta )-\mathbb{E }\mathcal{Z }_N(\beta )|^2=N\mathbb{E }|\mathrm{e}^{\beta \sqrt{n} X}-\mathbb{E }\mathrm{e}^{\beta \sqrt{n} X}|^2\le N\mathbb{E }\mathrm{e}^{2\sigma \sqrt{n} X}=N^{1+2\sigma ^2}. \end{aligned}$$

If \(\beta \in B_1^{\prime }(\varepsilon )\), then it follows that

$$\begin{aligned} \mathbb{E }|\mathcal{Z }_N(\beta )-\mathbb{E }\mathcal{Z }_N(\beta )| \le N^{\frac{1}{2} +\sigma ^2} \le N^{1-\varepsilon +\frac{1}{2}(\sigma ^2-\tau ^2)}. \end{aligned}$$

This implies (4.4). Assume now that \(\beta \in B_1^{\prime \prime }(\varepsilon )\) and \(\sigma >0\). For \(k=1,\ldots ,N\), define random variables

$$\begin{aligned} U_{k,N}=\mathrm{e}^{\beta \sqrt{n} X_k-\sigma \sqrt{2}n} {\small 1}\!\!1_{X_k \le \sqrt{2n}},\quad V_{k,N}=\mathrm{e}^{\beta \sqrt{n} X_k-\sigma \sqrt{2}n} {\small 1}\!\!1_{X_k > \sqrt{2n}}. \end{aligned}$$

By Part 1 of Lemma 3.4, we have

$$\begin{aligned} \mathbb{E }\left|\sum _{k=1}^N (U_{k,N}-\mathbb{E }U_{k,N})\right|^2 \le N\mathbb{E }|U_{1,N}|^2 = N\mathrm{e}^{-2\sqrt{2} \sigma n} \mathbb{E }[\mathrm{e}^{2\sigma \sqrt{n} X} {\small 1}\!\!1_{X<\sqrt{2n}}] <1.\qquad \end{aligned}$$
(4.5)

Similarly, by Part 2 of Lemma 3.4,

$$\begin{aligned} \mathbb{E }\left|\sum _{k=1}^N (V_{k,N}-\mathbb{E }V_{k,N})\right|\le 2N\mathbb{E }|V_{1,N}|=2N \mathrm{e}^{-\sigma \sqrt{2} n} \mathbb{E }[\mathrm{e}^{\sigma \sqrt{n} X} {\small 1}\!\!1_{X>\sqrt{2n}}]<2.\qquad \end{aligned}$$
(4.6)

Combining (4.5) and (4.6), we obtain \(\mathbb{E }|\mathcal{Z }_N(\beta )-\mathbb{E }\mathcal{Z }_N(\beta )|\le 3\mathrm{e}^{\sigma \sqrt{2}n}\). Since \(\beta \in B_1^{\prime \prime }(\varepsilon )\), this implies the required estimate (4.4). \(\square \)

As a by-product, we obtain a proof of the formula for the limiting log-partition function in phase \(B_1\) which is simpler than the proof given in Sect. 3.3. We will use only the information about the first two truncated moments of \(\mathcal{Z }_N(\beta )\).

Proposition 4.5

For \(\beta \in B_1\), we have \(\lim _{N\rightarrow \infty } p_N(\beta )=1+\frac{1}{2} (\sigma ^2-\tau ^2)\) in probability.

Proof

We have shown in (4.4) that, for every \(\beta \in B_1\), there exist \(C>0\) and \(\varepsilon >0\) depending on \(\beta \) such that for all \(N\in \mathbb{N }\),

$$\begin{aligned} \mathbb{E }\left|\frac{\mathcal{Z }_N(\beta )}{\mathbb{E }\mathcal{Z }_N(\beta )}-1\right|<C N^{-\varepsilon }. \end{aligned}$$
(4.7)

Let \(p(\beta )=1+\frac{1}{2} (\sigma ^2-\tau ^2)\). Note that \(\frac{1}{n} \log |\mathbb{E }\mathcal{Z }_N(\beta )|=p(\beta )\) for all \(N\in \mathbb{N }\). It follows that for every \(\delta >0\) and all sufficiently large \(N\),

$$\begin{aligned} \mathbb{P }[|p_N(\beta )-p(\beta )|>\delta ]=\mathbb{P }\left[\left|\log \left|\frac{\mathcal{Z }_N(\beta )}{\mathbb{E }\mathcal{Z }_N(\beta )}\right|\right|>n \delta \right] \le \mathbb{P }\left[\left|\frac{\mathcal{Z }_N(\beta )}{\mathbb{E }\mathcal{Z }_N(\beta )}-1\right|>\frac{1}{2} \right].\nonumber \\ \end{aligned}$$
(4.8)

In the last step, we have used that \(|\log |z||>n\delta \) implies that \(|z-1|>1/2\), for large \(n\). By (4.7) and the Markov inequality we can estimate the right-hand side of (4.8) by \(2CN^{-\varepsilon }\), which implies that \(p_N(\beta )\) converges to \(p(\beta )\) in probability. \(\square \)
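The second-moment estimate (4.7) is easy to observe in simulation. The sketch below (illustration only; \(N\), the number of trials, and \(\beta =0.4+0.4i\in B_1\) are arbitrary sample choices) estimates \(\mathbb{E }|\mathcal{Z }_N(\beta )/\mathbb{E }\mathcal{Z }_N(\beta )-1|\), using the exact formula \(\mathbb{E }\mathcal{Z }_N(\beta )=N\mathrm{e}^{\beta ^2 n/2}\):

```python
import cmath
import math
import random

random.seed(1)

N = 20_000
n = math.log(N)
beta = complex(0.4, 0.4)       # sigma^2 + tau^2 = 0.32 < 1: inside phase B_1

EZ = N * cmath.exp(beta**2 * n / 2)   # E Z_N(beta) = N * E exp(beta sqrt(n) X)

trials = 30
err = 0.0
for _ in range(trials):
    Z = sum(cmath.exp(beta * math.sqrt(n) * random.gauss(0.0, 1.0))
            for _ in range(N))
    err += abs(Z / EZ - 1) / trials

print(err)   # small, of order N^{-(1 - sigma^2 - tau^2)/2}
```

For these parameters the predicted order of the error is \(N^{-0.34}\approx 0.03\), which the empirical average reproduces.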

4.4 Proof of Theorems 2.3 and 2.11

Recall that \(\mathbb{G }\) is the Gaussian analytic function defined in (2.3). Theorem 2.3 will be deduced from the following result.

Theorem 4.6

Fix some \(\beta _0=\sigma _0+i\tau _0\) with \(\sigma _0^2<1/2\) and \(\tau _0\ne 0\). Define a random process \(\{G_N(t):t\in \mathbb{C }\}\) by

$$\begin{aligned} G_N(t):=\frac{\mathcal{Z }_N\left(\beta _0+\frac{t}{\sqrt{n}}\right)-N^{1+\frac{1}{2} (\beta _0+\frac{t}{\sqrt{n}})^2}}{N^{\frac{1}{2}+(\sigma _0+\frac{t}{\sqrt{n}})^2}}. \end{aligned}$$
(4.9)

Then, the process \(G_N\) converges weakly, as \(N\rightarrow \infty \), to the process \(\mathrm{e}^{-t^2/2}\mathbb{G }(t)\) on \(C(\mathbb{C })\).

Proof

For \(k=1,\ldots ,N\), define a random process \(\{W_{k,N}(t):t\in \mathbb{C }\}\) by

$$\begin{aligned} W_{k,N}(t)=N^{-1/2}\mathrm{e}^{(\beta _0\sqrt{n} +t)X_k-(\sigma _0 \sqrt{n} + t)^2}. \end{aligned}$$

Then, \( G_N(t)=\sum _{k=1}^N (W_{k,N}(t)-\mathbb{E }W_{k,N}(t)) \). First, we show that the convergence stated in Theorem 4.6 holds in the sense of finite-dimensional distributions. Take \(t_1,\ldots ,t_d\in \mathbb{C }\). Write \(\mathbf{W }_{k,N}=(W_{k,N}(t_1),\ldots ,W_{k,N}(t_d))\). We need to prove that

$$\begin{aligned} \sum _{k=1}^N(\mathbf{W }_{k,N}-\mathbb{E }\mathbf{W }_{k,N})\overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}(\mathrm{e}^{-t_1^2/2}\mathbb{G }(t_1),\ldots ,\mathrm{e}^{-t_d^2/2}\mathbb{G }(t_d)). \end{aligned}$$
(4.10)

Let \(W_N\) be a process having the same law as the \(W_{k,N}\)’s and define \(\mathbf{W }_N=(W_N(t_1),\ldots ,W_N(t_d))\). A straightforward computation shows that for all \(t,s\in \mathbb{C }\),

$$\begin{aligned} N\mathbb{E }[W_N(t)\overline{W_N(s)}]&= \mathrm{e}^{-(t-\overline{s})^2/2},\end{aligned}$$
(4.11)
$$\begin{aligned} \lim _{N\rightarrow \infty } N|\mathbb{E }[W_N(t)W_N(s)]|&= 0. \end{aligned}$$
(4.12)

Also, we have

$$\begin{aligned} \lim _{N\rightarrow \infty } \sqrt{N} |\mathbb{E }[W_N(t)]|=\lim _{N\rightarrow \infty }\mathrm{e}^{-\frac{1}{2} (\sigma _0^2+\tau _0^2) n +O(\sqrt{n})}=0. \end{aligned}$$
(4.13)

Note that by (2.3),

$$\begin{aligned} \mathbb{E }[\mathrm{e}^{-t^2/2} \mathbb{G }(t)\overline{\mathrm{e}^{-s^2/2}\mathbb{G }(s)}]=\mathrm{e}^{-(t-\overline{s})^2/2}, \quad \mathbb{E }[\mathrm{e}^{-t^2/2} \mathbb{G }(t) \mathrm{e}^{-s^2/2}\mathbb{G }(s)]=0. \end{aligned}$$

We see that the covariance matrix of the left-hand side of (4.10) converges to the covariance matrix of the right-hand side of (4.10) if we view both sides as \(2d\)-dimensional real random vectors. To complete the proof of (4.10), we need to verify the Lindeberg condition: for every \(\varepsilon >0\),

$$\begin{aligned} \lim _{N\rightarrow \infty } N \mathbb{E }[|\mathbf{W }_N|^2 {\small 1}\!\!1_{|\mathbf{W }_N|>\varepsilon }]=0. \end{aligned}$$
(4.14)

For \(l=1,\ldots ,d\), let \(A_l\) be the random event \(|W_{N}(t_l)|\ge |W_{N}(t_j)|\) for all \(j=1,\ldots ,d\). On \(A_l\), we have \(|\mathbf{W }_N|^2\le d |W_N(t_l)|^2\). It follows that

$$\begin{aligned} N \mathbb{E }[|\mathbf{W }_N|^2 {\small 1}\!\!1_{|\mathbf{W }_N|>\varepsilon }] \le d\sum _{l=1}^d N \mathbb{E }\left[|W_N(t_l)|^2 {\small 1}\!\!1_{|W_N(t_l)|>\frac{\varepsilon }{\sqrt{d}}}\right] \rightarrow 0, \end{aligned}$$

where the last step is by the same argument as in (3.13). This completes the proof of the finite-dimensional convergence stated in (4.10). The tightness follows from Lemma 4.2 which can be applied since

$$\begin{aligned} \mathbb{E }|G_N(t)|\le \sqrt{\mathbb{E }[|G_N(t)|^2]} \le \sqrt{N \mathbb{E }[|W_N(t)|^2]}=\mathrm{e}^{(\mathrm{Im}t)^2}. \end{aligned}$$

The last equality follows from (4.11). \(\square \)
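The covariance identity (4.11) is pure Gaussian algebra, valid exactly for every \(N\): it reduces to \(\mathbb{E }[\mathrm{e}^{aX}]=\mathrm{e}^{a^2/2}\) with complex \(a\). The following sketch (not part of the proof; the values of \(t,s,\sigma _0,n\) are arbitrary) evaluates both sides in closed form and checks that they agree:

```python
import cmath
import math

def lhs(t, s, sigma0, n):
    """N * E[W_N(t) * conj(W_N(s))] in closed form via E[exp(aX)] = exp(a^2/2).
    In W_N(t)*conj(W_N(s)) the tau_0 parts of beta_0 cancel against the
    conjugate, leaving the coefficient 2a + t + conj(s) in front of X."""
    a = sigma0 * math.sqrt(n)
    return cmath.exp(0.5 * (2 * a + t + s.conjugate())**2
                     - (a + t)**2 - (a + s.conjugate())**2)

def rhs(t, s):
    return cmath.exp(-0.5 * (t - s.conjugate())**2)

t, s = complex(0.3, -0.7), complex(-1.1, 0.2)
diff = abs(lhs(t, s, sigma0=0.5, n=25.0) - rhs(t, s))
print(diff)   # ~0 for any sigma0 and n
```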

Proof of Theorem 2.3

If \(\beta _0\in B_3\), then the expectation term in the definition of \(G_N\), see (4.9), can be ignored: we have \(\lim _{N\rightarrow \infty } |G_N(t)-U_N(t)|=0\) uniformly on compact sets, where

$$\begin{aligned} U_N(t)=N^{-\frac{1}{2}-(\sigma _0+\frac{t}{\sqrt{n}})^2} \mathcal{Z }_N\left(\beta _0+\frac{t}{\sqrt{n}}\right). \end{aligned}$$

It follows from Theorem 4.6 that \(U_N\) converges to the process \(\mathrm{e}^{-t^2/2} \mathbb{G }(t)\) weakly on \(C(\mathbb{C })\). Applying Lemma 4.3, we obtain the statement of Theorem 2.3. \(\square \)

Proof of Theorem 2.11

Let \(\delta _N\) be a bounded sequence such that \(n\sigma _0\tau _0-\delta _N\in 2\pi \mathbb{Z }\). Taking \(t=\beta _0 \frac{s+ i\delta _N}{\sqrt{n}}\) in Theorem 4.6, we obtain that weakly on \(C(\mathbb{C })\),

$$\begin{aligned} G_N\left(\beta _0\frac{s+i \delta _N}{\sqrt{n}}\right) \overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}\mathbb{G }(0). \end{aligned}$$

Doing elementary transformations, we arrive at

$$\begin{aligned} \frac{\mathcal{Z }_N\left(\beta _0\left(1+\frac{s+ i \delta _N}{n}\right)\right)}{N^{\frac{1}{2}+(\sigma _0+\beta _0\frac{s+ i\delta _N }{n})^2}} \overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}\mathrm{e}^{-s}+\mathbb{G }(0). \end{aligned}$$

The zeros of the right-hand side are located at \(s=2 \pi i k +\xi \), \(k\in \mathbb{Z }\), where \(\xi =-\log (-\mathbb{G }(0))\). The proof is completed by applying Lemma 4.3. \(\square \)
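The location of the zeros of \(s\mapsto \mathrm{e}^{-s}+\mathbb{G }(0)\) claimed above can be verified directly. In the sketch below (illustration only), an arbitrary nonzero complex number stands in for the random value \(\mathbb{G }(0)\):

```python
import cmath
import math

g0 = complex(-0.7, 0.4)        # stand-in for the Gaussian value G(0)
xi = -cmath.log(-g0)           # principal value of xi = -log(-G(0))

for k in (-2, -1, 0, 1, 2):
    s = 2j * math.pi * k + xi
    print(k, abs(cmath.exp(-s) + g0))   # each value is ~0: s is a zero
```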

4.5 Proof of Theorems 2.6, 2.8 and 2.13

Proof of Theorem 2.6

Fix a compact set \(K\) contained in the half-plane \(\sigma >1/2\). Define random \(C(K)\)-valued elements \(S_k(\beta )=s_1(\beta )+\cdots +s_k(\beta )\), where

$$\begin{aligned} s_k(\beta )=\sum _{j=1}^{\infty } P_j^{-\beta } {\small 1}\!\!1_{k\le P_j<k+1} -\int \limits _{k}^{k+1}t^{-\beta }\mathrm{d}t, \quad \beta \in K. \end{aligned}$$

Note that \(s_1,s_2,\ldots \) are independent. By the properties of the Poisson process,

$$\begin{aligned} \mathbb{E }[s_k(\beta )]=0,\quad \sum _{k=1}^{\infty }\mathbb{E }[|s_k(\beta )|^2]=\int \limits _{1}^{\infty } t^{-2\sigma }\mathrm{d}t<\infty . \end{aligned}$$
(4.15)

Thus, as long as \(\sigma >1/2\), the sequence \(\{S_k(\beta ) \}_{k\in \mathbb{N }}\) is an \(L^2\)-bounded martingale. Hence, \(S_k(\beta )\) converges a.s. to a limiting random variable denoted by \(S(\beta )\). We need to show that the convergence is uniform a.s. It follows from (4.15) and Lemma 4.2 that the sequence \(S_k\), \(k\in \mathbb{N }\), is tight on \(C(K)\). Hence, \(S_k\) converges weakly on \(C(K)\) to the process \(S\). By the Itô–Nisio theorem [24], this implies that \(S_k\) converges to \(S\) a.s. as a random element of \(C(K)\). This proves the theorem. \(\square \)
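The variance computation (4.15) is an instance of Campbell's theorem: for a unit-intensity Poisson process, \(\mathrm{Var}\sum _j f(P_j)=\int f^2\,\mathrm{d}t\). A Monte Carlo sketch (illustration only; real \(\beta =\sigma =1\) and the interval \([1,2)\), i.e. \(k=1\), are arbitrary sample choices) confirms \(\mathbb{E }[s_1(\beta )]=0\) and \(\mathbb{E }|s_1(\beta )|^2=\int _1^2 t^{-2}\mathrm{d}t=1/2\):

```python
import math
import random

random.seed(2)

sigma = 1.0          # real beta = sigma > 1/2 for simplicity
k = 1                # the interval [k, k+1) = [1, 2)

def poisson1():
    """Poisson(1) sample via Knuth's multiplication method (stdlib only)."""
    m, p = 0, random.random()
    while p > math.exp(-1.0):
        m += 1
        p *= random.random()
    return m

def sample_sk():
    # s_k(sigma) for a unit-intensity Poisson process restricted to [k, k+1)
    pts = [k + random.random() for _ in range(poisson1())]
    return sum(t ** -sigma for t in pts) - (math.log(k + 1) - math.log(k))

trials = 40_000
vals = [sample_sk() for _ in range(trials)]
mean = sum(vals) / trials
second = sum(v * v for v in vals) / trials
print(mean, second)   # mean ~ 0; second moment ~ int_1^2 t^{-2} dt = 1/2
```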

Proof of Theorem 2.8

Let us first describe the idea. Consider the case \(\sigma >1/\sqrt{2}\). Arrange the values \(X_1,\ldots , X_{N}\) in increasing order, obtaining the order statistics \(X_{1:N}\le \cdots \le X_{N:N}\). It turns out that the main contribution to the sum \(\mathcal{Z }_N(\beta )=\sum _{k=1}^{N} \mathrm{e}^{\beta \sqrt{n} X_k}\) comes from the upper order statistics \(X_{N-k:N}\), where \(k=0,1,\ldots \). Their joint limiting distribution is well known in extreme-value theory (see [39, Corollary 4.19(i)]) and is recalled below. Denote by \(\mathbb{M }\) the space of locally finite counting measures on \(\bar{\mathbb{R }}=\mathbb{R }\cup \{+\infty \}\). We endow \(\mathbb{M }\) with the (Polish) topology of vague convergence. A point process on \(\bar{\mathbb{R }}\) is a random element with values in \(\mathbb{M }\). Let \(P_1,P_2,\ldots \) be the arrivals of the unit intensity Poisson process on the positive half-line. Define the sequence \(b_N\) as in (2.12), that is

$$\begin{aligned} b_N=\sqrt{2n}-\frac{\log (4\pi n)}{2\sqrt{2 n}}. \end{aligned}$$

Proposition 4.7

The point process \(\pi _N:=\sum _{k=1}^N \delta (\sqrt{n}(X_k-b_N))\) converges as \(N\rightarrow \infty \) to the point process \(\pi _{\infty }=\sum _{k=1}^{\infty } \delta (-(\log P_k)/\sqrt{2})\) weakly on \(\mathbb{M }\).

Utilizing this result, we will show that it is possible to approximate \(\mathcal{Z }_N(\beta )\) (after appropriate normalization) by \(\tilde{\zeta }_P(\beta /\sqrt{2})\) in the half-plane \(\sigma >1/\sqrt{2}\). Consider now the case \(\sigma <-1/\sqrt{2}\). This time, the main contribution to the sum \(\mathcal{Z }_N(\beta )\) comes from the lower order statistics \(X_{k:N}\), \(k=1,2,\ldots \). Their joint limiting distribution is the same as for the upper order statistics, only the sign should be reversed. Moreover, it is known that the upper and the lower order statistics become asymptotically independent as \(N\rightarrow \infty \); see [23] or [39, Cor. 5.28]. Thus, in the half-plane \(\sigma <-1/\sqrt{2}\) it is possible to approximate \(\mathcal{Z }_N(\beta )\) by an independent copy of \(\zeta _P(-\beta /\sqrt{2})\). In the rest of the proof, we make this idea rigorous. For simplicity of notation, we restrict ourselves to the half-plane \(D=\{\beta \in \mathbb{C }:\sigma >1/\sqrt{2}\}\).

Theorem 4.8

The following convergence holds weakly on \(C(D)\):

$$\begin{aligned} \xi _N(\beta ):=\frac{\mathcal{Z }_N(\beta )-N\mathbb{E }[\mathrm{e}^{\beta \sqrt{n} X}1_{X<b_N}]}{\mathrm{e}^{\beta \sqrt{n} b_N}}\overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}\tilde{\zeta }_{P}\left(\frac{\beta }{\sqrt{2}}\right). \end{aligned}$$

The proof consists of two lemmas. Take \(A>0\) and write \(\xi _N(\beta )=\xi _N^A(\beta )-e_N^A(\beta )+\Delta _N^A(\beta )\), where

$$\begin{aligned} \xi _N^A(\beta )&= \sum _{k=1}^N \mathrm{e}^{\beta \sqrt{n} (X_k-b_N)}{\small 1}\!\!1_{b_N-\frac{A}{\sqrt{n}}<X_k},\\ e_N^A(\beta )&= N\mathbb{E }\left[\mathrm{e}^{\beta \sqrt{n} (X_k-b_N)}{\small 1}\!\!1_{b_N-\frac{A}{\sqrt{n}}\le X_k<b_N}\right],\\ \Delta _N^A(\beta )&= \sum _{k=1}^N\left(\mathrm{e}^{\beta \sqrt{n} (X_k-b_N)}{\small 1}\!\!1_{X_k \le b_N-\frac{A}{\sqrt{n}}}-\mathbb{E }[\mathrm{e}^{\beta \sqrt{n} (X_k-b_N)}{\small 1}\!\!1_{X_k\le b_N-\frac{A}{\sqrt{n}}}]\right). \end{aligned}$$

Lemma 4.9

Let \(\tilde{\zeta }_P(\cdot ;\cdot )\) be defined as in (2.4). Then, the following convergence holds weakly on \(C(D)\):

$$\begin{aligned} \xi _N^A(\beta )-e_N^A(\beta )\overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}\tilde{\zeta }_{P}\left(\frac{\beta }{\sqrt{2}};\mathrm{e}^{\sqrt{2}A}\right). \end{aligned}$$

Proof

Recall that by Proposition 4.7 the point process \(\pi _N\) converges to the point process \(\pi _{\infty }\) weakly on \(\mathbb{M }\). Consider the functional \(\Psi :\mathbb{M }\rightarrow C(D)\) which maps a locally finite counting measure \(\rho =\sum _{i\in I}\delta (y_i)\in \mathbb{M }\) to the function \(\Psi (\rho )(\beta )=\sum _{i\in I} \mathrm{e}^{\beta y_i}1_{y_i>-A}\), where \(\beta \in D\). Here, \(I\) is an at most countable index set. If \(\rho \) charges the point \({+}\infty \), define \(\Psi (\rho )\) to be \(0\), say. The functional \(\Psi \) is continuous on the set of all \(\rho \in \mathbb{M }\) charging neither \({-}A\) nor \(+\infty \), which is a set of full measure with respect to the law of \(\pi _{\infty }\). It follows from the continuous mapping theorem [39, §3.5] that \(\xi _N^A=\Psi (\pi _N)\) converges weakly on \(C(D)\) to \(\Psi (\pi _{\infty })\). Note that

$$\begin{aligned} \Psi (\pi _{\infty })(\beta )=\sum _{k=1}^{\infty }P_k^{-\beta /\sqrt{2}}1_{P_k<\mathrm{e}^{\sqrt{2} A}}. \end{aligned}$$

It remains to prove the convergence of \(e_N^A(\beta )\). Using the change of variables \(\sqrt{n}(x-b_N)=y\), we obtain

$$\begin{aligned} e_N^A(\beta )=\frac{N}{\sqrt{2\pi }}\int \limits _{b_N-\frac{A}{\sqrt{n}}}^{b_N} \mathrm{e}^{\beta \sqrt{n} (x-b_N)}\mathrm{e}^{-\frac{x^2}{2}}\mathrm{d}x=\frac{N}{\sqrt{2\pi n}}\int \limits _{-A}^{0} \mathrm{e}^{\beta y} \mathrm{e}^{-\frac{1}{2} (b_N+\frac{y}{\sqrt{n}})^2}\mathrm{d}y. \end{aligned}$$

Recalling that \(\sqrt{2\pi }b_N \mathrm{e}^{b_N^2/2}\sim N\) and \(b_N\sim \sqrt{2n}\) as \(N\rightarrow \infty \), we obtain that \(\lim _{N\rightarrow \infty }e_N^A(\beta )=\int _{1}^{\mathrm{e}^{\sqrt{2} A}} t^{-\beta /\sqrt{2}}\mathrm{d}t\), as required. \(\square \)
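In more detail, expanding the square in the exponent and using \(N\mathrm{e}^{-b_N^2/2}\sim \sqrt{2\pi }\,b_N\) together with \(b_N/\sqrt{n}\rightarrow \sqrt{2}\), the last step reads

$$\begin{aligned} e_N^A(\beta )=\frac{N\mathrm{e}^{-b_N^2/2}}{\sqrt{2\pi n}}\int \limits _{-A}^{0} \mathrm{e}^{\beta y-\frac{b_N}{\sqrt{n}}y-\frac{y^2}{2n}}\mathrm{d}y \underset{N\rightarrow \infty }{\longrightarrow }\sqrt{2}\int \limits _{-A}^{0} \mathrm{e}^{(\beta -\sqrt{2})y}\mathrm{d}y, \end{aligned}$$

and the substitution \(t=\mathrm{e}^{-\sqrt{2}y}\) transforms the last integral into \(\int _{1}^{\mathrm{e}^{\sqrt{2}A}} t^{-\beta /\sqrt{2}}\mathrm{d}t\).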

Lemma 4.10

For every compact set \(K\subset D\), there is \(C>0\) such that, for all sufficiently large \(N\),

$$\begin{aligned} \mathbb{E }\left[\sup _{\beta \in K}|\Delta _N^A(\beta )|\right] \le C \mathrm{e}^{(1-\sqrt{2} \sigma ) A/2}. \end{aligned}$$

Proof

Let \(\Gamma \) be a contour enclosing \(K\) and located inside \(D\). First, \(\mathbb{E }[\Delta _N^A(\beta )]=0\) by definition. Second, uniformly in \(\beta \in \Gamma \) it holds that

$$\begin{aligned} \mathbb{E }[|\Delta _N^A(\beta )|^2]&\le N \mathbb{E }\left[\mathrm{e}^{2\sigma \sqrt{n} (X-b_N)}1_{X<b_N-\frac{A}{\sqrt{n}}}\right]\\&= N \mathrm{e}^{-2\sigma \sqrt{n} b_N} \mathrm{e}^{2\sigma ^2 n} \Phi \left(b_N-\frac{A}{\sqrt{n}}-2\sigma \sqrt{n}\right)\\&\le C \mathrm{e}^{(1-\sqrt{2} \sigma ) A}, \end{aligned}$$

where the second step follows from Lemma 3.3 and the last step follows from (3.1). Since \(\mathbb{E }|\Delta _N^A(\beta )|\le (\mathbb{E }[|\Delta _N^A(\beta )|^2])^{1/2}\), Lemma 4.1 yields

$$\begin{aligned} \mathbb{E }\left[\sup _{\beta \in K}|\Delta _N^A(\beta )|\right] \le C \oint \limits _{\Gamma } \mathbb{E }|\Delta _N^A(\beta )| |\mathrm{d}\beta | \le C \mathrm{e}^{(1-\sqrt{2} \sigma ) A/2}. \end{aligned}$$

The proof is complete. \(\square \)

Proof of Theorem 4.8

By Theorem 2.6, we have the weak convergence

$$\begin{aligned} \tilde{\zeta }_P\left(\frac{\beta }{\sqrt{2}};\mathrm{e}^{\sqrt{2} A}\right) \overset{d}{\underset{A\rightarrow \infty }{\longrightarrow }} \tilde{\zeta }_P\left(\frac{\beta }{\sqrt{2}}\right). \end{aligned}$$

Together with Lemmas 4.9 and 4.10, this implies Theorem 4.8 by a standard argument; see for example [26, Lemma 6.7]. \(\square \)

The proof of Theorem 2.8 can now be completed as follows. For \(\sigma >1/\sqrt{2}\), Lemma 3.5 yields

$$\begin{aligned} \lim _{N\rightarrow \infty } N \mathrm{e}^{-\beta \sqrt{n} b_N} \mathbb{E }[\mathrm{e}^{\beta \sqrt{n} X}1_{X<b_N}]= \left\{ \begin{array}{l@{\quad }l} \frac{\sqrt{2}}{\beta -\sqrt{2}},&\text{if } |\sigma |+|\tau |>\sqrt{2},\\ \infty ,&\text{if } |\sigma |+|\tau |\le \sqrt{2}. \end{array}\right. \end{aligned}$$

In the first case, the convergence holds uniformly on compact subsets of \(B_2\). By Theorem 4.8, the process \(\mathrm{e}^{-\beta \sqrt{n} b_N}\mathcal{Z }_N(\beta )\) converges to \(\zeta _P(\beta /\sqrt{2})\) weakly on the space of continuous functions on the set \(B_2\cap \{\sigma >1/\sqrt{2}\}\). Similarly, on the space of continuous functions on \(B_2\cap \{\sigma <-1/\sqrt{2}\}\) the same process converges weakly to an independent copy of \(\zeta _P(-\beta /\sqrt{2})\). By Lemma 4.3, this implies Theorem 2.8. \(\square \)
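Observe that the constant \(\frac{\sqrt{2}}{\beta -\sqrt{2}}\) matches the \(A\rightarrow \infty \) limit of the compensating integral from Lemma 4.9: for \(\sigma >\sqrt{2}\),

$$\begin{aligned} \int \limits _{1}^{\infty } t^{-\beta /\sqrt{2}}\mathrm{d}t=\frac{1}{\beta /\sqrt{2}-1}=\frac{\sqrt{2}}{\beta -\sqrt{2}}, \end{aligned}$$

which accounts for the passage from \(\tilde{\zeta }_P(\beta /\sqrt{2})\) in Theorem 4.8 to \(\zeta _P(\beta /\sqrt{2})\) here.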

Proof of Theorem 2.13

Let \(d_N^{\prime }\) be a complex sequence such that

$$\begin{aligned} d_N^{\prime }+\beta _0 \frac{\log (4\pi n)}{2\sqrt{2}}-i\tau _0^2 n\in 2\pi i \mathbb{Z }\quad \text{ and} \quad d_N^{\prime }=O(\log n). \end{aligned}$$

Write \(\beta _N=\beta _0+\frac{s+d_N^{\prime }}{(\beta _0-\sqrt{2})n}\), where \(s\in \mathbb{C }\) is a new variable. Note that \(\lim _{N\rightarrow \infty }\beta _N=\beta _0\). Let \(X\sim N_{\mathbb{R }}(0,1)\). Applying Remark 3.6 and noting that the second term on the right-hand side of (3.5) converges to \(\frac{\sqrt{2}}{\beta _0-\sqrt{2}}\), we obtain

$$\begin{aligned} \lim _{N\rightarrow \infty } N \mathrm{e}^{-\beta _N\sqrt{n} b_N} \mathbb{E }[\mathrm{e}^{\beta _N\sqrt{n} X} 1_{X<b_N}]&= \lim _{N\rightarrow \infty } N \mathrm{e}^{-\beta _N \sqrt{n} b_N}\mathrm{e}^{\frac{\beta _N^2 n}{2} }+\frac{\sqrt{2}}{\beta _0-\sqrt{2}}\\&= \mathrm{e}^{s}+\frac{\sqrt{2}}{\beta _0-\sqrt{2}}. \end{aligned}$$

By Theorem 4.8 and (3.26), the following holds weakly on \(C(\mathbb{C })\):

$$\begin{aligned} \mathrm{e}^{-\beta _N \sqrt{n} b_N}\mathcal{Z }_N\left(\beta _0+\frac{s+d_N^{\prime }}{(\beta _0-\sqrt{2})n} \right) \overset{{w}}{\underset{N\rightarrow \infty }{\longrightarrow }}\mathrm{e}^{s}+\zeta _P\left(\frac{\beta _0}{\sqrt{2}}\right). \end{aligned}$$

The zeros of the right-hand side are located at \(s=2\pi i k+\eta \), \(k\in \mathbb{Z }\), where \(\eta =\log (-\zeta _P(\beta _0/\sqrt{2}))\). Define \(d_N=d_N^{\prime }/(\sqrt{2} \tau _0)\). The theorem follows from Lemma 4.3 after elementary transformations. \(\square \)
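The location of these zeros is elementary to confirm: the solutions of \(\mathrm{e}^{s}+w=0\) are exactly \(s=\log (-w)+2\pi ik\), \(k\in \mathbb{Z }\). A minimal numerical sketch, where the value of \(w\) (standing in for the random quantity \(\zeta _P(\beta _0/\sqrt{2})\)) is arbitrary:

```python
import cmath

w = 0.7 - 1.1j                   # arbitrary stand-in for zeta_P(beta_0 / sqrt(2))
eta = cmath.log(-w)              # principal branch of log(-w)
for k in (-2, 0, 3):
    s = eta + 2j * cmath.pi * k  # claimed zero of exp(s) + w
    assert abs(cmath.exp(s) + w) < 1e-12
```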

4.6 Proof of Proposition 2.10

Let \(\tau \ne 0\) be fixed. Let \(S(\beta )\) be the random variable defined in the proof of Theorem 2.6. Take \(a,b\in \mathbb{R }\). For \(\sigma >1/2\), consider the random variable

$$\begin{aligned} Y(\sigma )=a \mathrm{Re}S(\beta )+b \mathrm{Im}S(\beta )=\lim _{k\rightarrow \infty }\left(\sum _{j=1}^{\infty } f(P_j;\sigma ) 1_{1\le P_j<k}-\int \limits _{1}^{k} f(t;\sigma ) \mathrm{d}t\right), \end{aligned}$$

where \(f(t;\sigma )=\sqrt{a^2+b^2} t^{-\sigma }\cos (\tau \log t-\theta )\) and \(\theta \in \mathbb{R }\) is such that \(\cos \theta =\frac{a}{\sqrt{a^2+b^2}}\) and \(\sin \theta =\frac{b}{\sqrt{a^2+b^2}}\).
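The definition of \(f\) rests on the amplitude–phase identity \(a\cos u+b\sin u=\sqrt{a^2+b^2}\,\cos (u-\theta )\), with \(\theta \) chosen as above and \(u=\tau \log t\). A quick numerical sketch (the values of \(a\), \(b\), \(u\) are arbitrary):

```python
import math

a, b = 1.0, 0.5                   # arbitrary real coefficients
r = math.hypot(a, b)              # sqrt(a^2 + b^2)
theta = math.atan2(b, a)          # cos(theta) = a/r, sin(theta) = b/r
for u in (0.0, 0.7, 2.4, -1.3):   # u plays the role of tau * log t
    lhs = a * math.cos(u) + b * math.sin(u)
    assert abs(lhs - r * math.cos(u - theta)) < 1e-12
```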

We need to show that \(\sqrt{2\sigma -1}\,Y(\sigma )\) converges, as \(\sigma \downarrow 1/2\), to a centered real Gaussian distribution with variance \((a^2+b^2)/2\). By the properties of the Poisson process, the log-characteristic function of \(Y(\sigma )\) is given by

$$\begin{aligned} \log \mathbb{E }\mathrm{e}^{iz Y(\sigma )}=\int \limits _{1}^{\infty }\left(\mathrm{e}^{izf(t;\sigma )}-1-izf(t;\sigma )+\frac{z^2}{2} f^2(t;\sigma )\right)\mathrm{d}t-\frac{z^2}{2} \int \limits _{1}^{\infty } f^2(t;\sigma )\mathrm{d}t. \end{aligned}$$

We will compute the second term and show that the first term is negligible. By elementary integration, we have

$$\begin{aligned} \int \limits _{1}^{\infty } f^2(t;\sigma )\mathrm{d}t&= \frac{a^2+b^2}{2} \int \limits _{1}^{\infty } \frac{1+\cos (2\tau \log t-2\theta )}{t^{2\sigma }}\mathrm{d}t \nonumber \\&= \frac{a^2+b^2}{2} \left(\frac{1}{2\sigma -1}-\mathrm{Re}\frac{\mathrm{e}^{-2\theta i}}{(1-2\sigma )+2 i \tau } \right). \end{aligned}$$
(4.16)
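The closed form (4.16) can be spot-checked numerically. In the sketch below the parameter values are arbitrary, and the integral is truncated at \(t=\mathrm{e}^{60}\), beyond which the integrand is negligible for \(\sigma >1/2\):

```python
import cmath
import math

def f2_closed(sigma, tau, a, b):
    """Right-hand side of (4.16)."""
    theta = math.atan2(b, a)
    osc = (cmath.exp(-2j * theta) / ((1 - 2 * sigma) + 2j * tau)).real
    return (a * a + b * b) / 2 * (1 / (2 * sigma - 1) - osc)

def f2_numeric(sigma, tau, a, b, T=60.0, m=600_000):
    """Trapezoidal rule for \\int_1^{e^T} f(t; sigma)^2 dt after substituting t = e^u."""
    theta = math.atan2(b, a)
    r2 = a * a + b * b
    h = T / m
    total = 0.0
    for i in range(m + 1):
        u = i * h
        w = 0.5 if i in (0, m) else 1.0
        # f(e^u)^2 * e^u, the factor e^u being the Jacobian of t = e^u
        total += w * r2 * math.exp((1 - 2 * sigma) * u) * math.cos(tau * u - theta) ** 2
    return total * h

sigma, tau, a, b = 0.8, 1.3, 1.0, 0.5
assert abs(f2_numeric(sigma, tau, a, b) - f2_closed(sigma, tau, a, b)) < 1e-3
```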

Using the inequalities \(|\mathrm{e}^{ix}-1-ix+\frac{x^2}{2}|\le |x|^3\) and \(|f(t;\sigma )|< C t^{-\sigma }\), we obtain

$$\begin{aligned} \left|\int \limits _{1}^{\infty }\left(\mathrm{e}^{izf(t;\sigma )}-1-izf(t;\sigma )+\frac{z^2}{2} f^2(t;\sigma )\right)\mathrm{d}t\right| \le \frac{C}{3\sigma -1} |z|^3. \end{aligned}$$
(4.17)
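The elementary Taylor-remainder inequality used in (4.17), which in fact holds in the sharper form \(|\mathrm{e}^{ix}-1-ix+\frac{x^2}{2}|\le |x|^3/6\) for real \(x\), can be spot-checked as well:

```python
import cmath

# third-order Taylor remainder of e^{ix}; the classical bound is |x|^3 / 6
for x in (-2.0, -0.3, 0.01, 0.5, 1.7):
    rem = abs(cmath.exp(1j * x) - 1 - 1j * x + x * x / 2)
    assert rem <= abs(x) ** 3 / 6 + 1e-15
```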

Bringing (4.16) and (4.17) together and recalling that \(\tau \ne 0\), we arrive at

$$\begin{aligned} \lim _{\sigma \downarrow 1/2}\log \mathbb{E }\mathrm{e}^{i \sqrt{2\sigma -1}\, z Y(\sigma )}=-\frac{1}{4} (a^2+b^2) z^2. \end{aligned}$$
(4.18)

This proves the result for \(\tau \ne 0\). For \(\tau =0\), the same computation shows that the limit in (4.18) equals \(-a^2z^2/2\). \(\square \)