Introduction

Singular values and condition number

The condition number was introduced independently by von Neumann and Goldstine in (Goldstine and von Neumann 1951; von Neumann et al. 1947), and by Turing in (Turing 1948), for studying the accuracy of the solution of a linear system in the presence of finite-precision arithmetic. Roughly speaking, the condition number measures how much the output of a linear system can change under a small perturbation of the input; see (Smale 1985; Wo 1977) for further details.

Let \(\mathcal {A}\in \mathbb {C}^{n\times m}\) be an \(n\times m\) complex matrix. We denote the singular values of \(\mathcal {A}\), in non-decreasing order, by \(0\le \sigma ^{(n,m)}_{1}(\mathcal {A})\le \cdots \le \sigma ^{(n,m)}_{n}(\mathcal {A})\). That is, they are the square roots of the eigenvalues of the \(n\times n\) matrix \(\mathcal {A}\mathcal {A}^*\), where \(\mathcal {A}^*\) denotes the conjugate transpose of \(\mathcal {A}\). The condition number of \(\mathcal {A}\), \(\kappa (\mathcal {A})\), is defined as

$$\begin{aligned} \kappa (\mathcal {A}):=\frac{\sigma ^{(n,m)}_{n}(\mathcal {A})}{ \sigma ^{(n,m)}_{1}(\mathcal {A})}\quad \text { whenever }\quad \sigma ^{(n,m)}_{1}(\mathcal {A})>0. \end{aligned}$$
(1)

It is known that

$$\sigma ^{(n,m)}_{1}(\mathcal {A})=\inf \{\Vert \mathcal {A}-\mathcal {B}\Vert : \mathcal {B}\in \mathfrak {S}^{(n,m)}\},$$

where

$$\mathfrak {S}^{(n,m)}:=\{\mathcal {B}\in \mathbb {C}^{n\times m}: \mathrm {rank}(\mathcal {B})< \min \{n,m\} \}$$

and \(\Vert \cdot \Vert\) denotes the operator norm. For \(n=m\), the smallest singular value \(\sigma ^{(n,n)}_{1}(\mathcal {A})\) measures the distance between the matrix \(\mathcal {A}\) and the set of singular matrices. We refer to Chapter 1 in (Bürgisser and Cucker 2013) for further details.
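These notions are easy to explore numerically. The following sketch (our illustration using NumPy, with an arbitrary test matrix; it is not part of the referenced material) computes the extremal singular values and the condition number, and checks the variational characterization above: zeroing the smallest singular value produces a rank-deficient matrix \(\mathcal {B}\) at operator-norm distance \(\sigma ^{(n,n)}_{1}(\mathcal {A})\) from \(\mathcal {A}\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

# Singular values, in descending order (NumPy's convention).
U, s, Vh = np.linalg.svd(A)
sigma_min, sigma_max = s[-1], s[0]
kappa = sigma_max / sigma_min          # condition number as in (1)

# Eckart-Young: the closest rank-deficient matrix B in operator norm
# is obtained by zeroing the smallest singular value.
s_trunc = s.copy()
s_trunc[-1] = 0.0
B = (U * s_trunc) @ Vh                 # equals U @ diag(s_trunc) @ Vh

dist = np.linalg.norm(A - B, ord=2)    # operator norm of A - B
assert np.isclose(dist, sigma_min)
assert np.isclose(kappa, np.linalg.cond(A, p=2))
```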

The extremal singular values are ubiquitous in applications and have generated a vast literature in geometric functional analysis, mathematical physics, numerical linear algebra, time series, statistics, etc., see for instance (Bose and Saha 2018; Davis et al. 2016; Edelman 1989; Rider and Sinclair 2014; Rudelson and Vershynin 2010; Tao and Vu 2010) and Chapter 5 in (Vershynin 2012). The study of the largest and the smallest singular values has been very important in the study of sample correlation matrices; we refer to (Heiny and Mikosch 2017, 2018, 2019, 2021) for further details. Moreover, Poisson statistics for the largest eigenvalues in random matrix ensembles (Wigner ensembles for instance) are studied in (Soshnikov 2004, 2006).

Calculating or even estimating the condition number of a generic matrix is a difficult task, see (Sankar et al. 2006). In computational complexity theory, it is of interest to analyze the random condition number, that is, when the matrix \(\mathcal {A}\) given in (1) is a random matrix. In (Edelman 1988), the limiting law of the condition number of a random rectangular matrix with independent and identically distributed (i.i.d. for short) standard Gaussian entries is computed. Moreover, the exact law of the condition number of a \(2\times n\) matrix is derived there. In (Azaïs and Wschebor 2004), non-asymptotic lower and upper bounds for the tail probability of the condition number are established for real square random matrices with i.i.d. standard Gaussian entries. Later, in (Edelman and Sutton 2005), these results are generalized to non-square matrices and analytic expressions for the tail distribution of the condition number are obtained. Lower and upper bounds for the condition number (in p-norms) and the so-called average “loss of precision” are studied in (Szarek 1991) for real and complex square random matrices with i.i.d. standard Gaussian entries. In (Viswanath and Trefethen 1998), the case of random lower triangular matrices \(\mathcal {L}_n\) of dimension n with i.i.d. standard Gaussian entries is studied, and it is shown that \((\kappa (\mathcal {L}_n))^{1/n}\) converges almost surely to 2 as n tends to infinity. In (Pérez et al. 2014), using a Coulomb fluid technique, asymptotics for the cumulative distribution function of the condition number of rectangular matrices with i.i.d. standard Gaussian entries are derived. More recently, distributional properties of the condition number for random matrices with i.i.d. Gaussian entries are established in (Anderson and Wells 2009; Chen and Dongarra 2005; Shakil and Ahsanullah 2018), and large deviation asymptotics for condition numbers of sub-Gaussian distributions are given in (Singull et al. 2021).
We recommend (Bürgisser and Cucker 2013; Cucker 2016; Demmel 1988; Edelman 1989) for a complete and current description of condition numbers of random matrices.

Random circulant matrices

Random circulant matrices and random circulant-type matrices are important objects in different areas of pure and applied mathematics, for instance compressed sensing, cryptography, the discrete Fourier transform, extreme value analysis, information processing, machine learning, numerical analysis, spectral analysis and time series analysis. For more details we refer to (Aldrovandi 2001; Bose et al. 2011, 2012; Davis 1994; Gray 2006; Kra and Simanca 2012; Rauhut 2009) and the monograph on random circulant matrices (Bose and Saha 2018). Some topics that have been studied are spectral norms, extremal distributions, the so-called limiting spectral distribution for random circulant and circulant-type matrices, and process convergence of fluctuations, see (Bose et al. 2002, 2009, 2010, 2011a, b, 2012a, b, 2020).

Let \(\nu _0,\ldots ,\nu _{n-1}\in \mathbb {C}\) be any given complex numbers. We say that an \(n\times n\) complex matrix \(\text {circ}(\nu _0,\ldots ,\nu _{n-1})\) is circulant with generating elements \(\{\nu _0,\ldots ,\nu _{n-1}\}\) if it has the following structure:

$$\begin{aligned} \text {circ}(\nu _0,\ldots ,\nu _{n-1}):=\left[ \begin{array}{ccccc} \nu _0 & \nu _{1} & \cdots & \nu _{n-2} & \nu _{n-1} \\ \nu _{n-1} & \nu _{0} & \cdots & \nu _{n-3} & \nu _{n-2} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \nu _2 & \nu _3 & \cdots & \nu _0 & \nu _1\\ \nu _{1} & \nu _{2} & \cdots & \nu _{n-1} & \nu _{0} \end{array}\right] . \end{aligned}$$

Let \(\mathsf {i}\) be the imaginary unit and \(\omega _n=\exp (\mathsf {i}\cdot 2\pi /n)\) be a primitive n-th root of unity. Define the Fourier unitary matrix of order n by \(\mathcal {F}_n:=\frac{1}{\sqrt{n}}(\omega ^{kj}_n)_{k,j\in \{0,\ldots ,n-1\}}\). It is well-known that \(\text {circ}(\nu _0,\ldots ,\nu _{n-1})\) can be diagonalized as follows:

$$\begin{aligned} \text {circ}(\nu _0,\ldots ,\nu _{n-1})=\mathcal {F}^*_n \mathcal {D}_n \mathcal {F}_n, \end{aligned}$$
(2)

where \(\mathcal {D}_n:=\text {diag}(\lambda ^{(n)}_0,\ldots ,\lambda ^{(n)}_{n-1})\) is a diagonal matrix with entries satisfying

$$\begin{aligned} \lambda ^{(n)}_k=\sum _{j=0}^{n-1}\nu _j \,\omega ^{kj}_n\quad \text { for }\quad k\in \{0,\ldots ,n-1\}. \end{aligned}$$

By (2) we note that \((\lambda ^{(n)}_k)_{k\in \{0,\ldots ,n-1\}}\) are the eigenvalues of \(\text {circ}(\nu _0,\ldots ,\nu _{n-1})\). For random elements \(\xi _0,\ldots ,\xi _{n-1}\), we say that \(\mathcal {C}_n:=\text {circ}(\xi _0,\ldots ,\xi _{n-1})\) is an \(n\times n\) random circulant matrix. Its eigenvalues are then given by

$$\begin{aligned} \lambda ^{(n)}_k=\sum \limits _{j=0}^{n-1}\xi _j\, \omega ^{kj}_n\quad \text { for }\quad k\in \{0,\ldots ,n-1\}. \end{aligned}$$
(3)
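Formula (3) identifies the eigenvalues of a circulant matrix with a discrete Fourier transform of its generating elements, so they can be computed in \(O(n\log n)\) time without forming the matrix. A minimal NumPy sketch of this fact (our illustration; note that np.fft.fft uses the kernel \(e^{-2\pi \mathsf {i}kj/n}\), which for real generating elements yields the same eigenvalue moduli as (3)):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
xi = rng.standard_normal(n)            # generating elements xi_0, ..., xi_{n-1}

# Build circ(xi_0, ..., xi_{n-1}): entry (i, j) equals xi_{(j - i) mod n}.
C = np.array([[xi[(j - i) % n] for j in range(n)] for i in range(n)])

# Eigenvalue moduli from the DFT of the generating vector, as in (3).
moduli_fft = np.sort(np.abs(np.fft.fft(xi)))
moduli_eig = np.sort(np.abs(np.linalg.eigvals(C)))

assert np.allclose(moduli_fft, moduli_eig)
```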

Since any circulant matrix is a normal matrix, its singular values are given by \((|\lambda ^{(n)}_k|)_{k\in \{0,\ldots ,n-1\}}\), where the symbol \(|\cdot |\) denotes the complex modulus. Then the extremal singular values of \(\mathcal {C}_n\) can be written as

$$\begin{aligned} \sigma ^{(n)}_{\min }:=\min _{k\in \{0,\ldots ,n-1\}}\big |\lambda ^{(n)}_k\big | \quad \text { and } \quad \sigma ^{(n)}_{\max }:=\max _{k\in \{0,\ldots ,n-1\}}\big |\lambda ^{(n)}_k\big |. \end{aligned}$$
(4)

By (1) and (4) it follows that the random condition number of \(\mathcal {C}_n\) is given by

$$\begin{aligned} \kappa (\mathcal {C}_n)=\frac{\sigma ^{(n)}_{\max }}{\sigma ^{(n)}_{\min }} \quad \text { whenever }\quad \sigma ^{(n)}_{\min }>0. \end{aligned}$$
(5)

We stress that the random variables \(\sigma ^{(n)}_{\max }\) and \(\sigma ^{(n)}_{\min }\) are not independent. Thus, it is a challenging problem to compute (or estimate) the distribution of condition numbers of circulant matrices for general i.i.d. random coefficients \(\xi _0,\ldots ,\xi _{n-1}\).

Main results

The problem of computing the limiting distribution of the condition number for square matrices with i.i.d. (real or complex) standard Gaussian random entries is completely analyzed in Chapter 7 of (Edelman 1989). In this manuscript we focus on the computation of the limiting distribution of \(\kappa (\mathcal {C}_n)\) for \(\xi _0,\ldots ,\xi _{n-1}\) being i.i.d. real random variables satisfying the so-called Lyapunov condition, see Hypothesis 1.1 below. In fact, the limiting distribution is a Fréchet distribution that belongs to the class of the so-called extreme value distributions (Galambos 1987). Non-asymptotic estimates for the condition number for random circulant and Toeplitz matrices with i.i.d. standard Gaussian random entries are given in (Pan 2001, 2015). The approach and results of (Pan 2001, 2015) are different in nature from our results given in Theorem 1.2.

We assume the following integrability condition. It appears, for instance, in the so-called Lyapunov Central Limit Theorem, see Section 7.3.1 in (Ash 2000). Throughout this manuscript, the set of non-negative integers is denoted by \(\mathbb {N}_0\).

Hypothesis 1.1 (Lyapunov condition)

We assume that \((\xi _j)_{j\in \mathbb {N}_0}\) is a sequence of i.i.d. non-degenerate real random variables on some probability space \((\Omega ,\mathcal {F}, \mathbb {P})\) with zero mean and unit variance. If there exists \(\delta >0\) such that \(\mathbb {E}\left[ |\xi _0|^{2+\delta }\right] <\infty\), where \(\mathbb {E}\) denotes the expectation with respect to \(\mathbb {P}\), we say that \((\xi _j)_{j\in \mathbb {N}_0}\) satisfies the Lyapunov integrability condition.

We note that a sequence of i.i.d. non-degenerate sub-Gaussian random variables satisfies the Lyapunov condition. Before stating the main result and its consequences, we introduce some notation. For brevity, and in a conscious abuse of notation, we write the exponential function indistinctly as \(\exp (a)\) or \(e^{a}\) for \(a\in \mathbb {R}\). We denote by \(\ln (\cdot )\) the Napierian logarithm function, and we use the same notation \(|\cdot |\) for the complex modulus and the absolute value.

The main result of this manuscript is the following.

Theorem 1.2 (Joint asymptotic distribution of the smallest and the largest singular values)

Assume that Hypothesis 1.1 is valid. Then it follows that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\mathbb {P}\left( \sigma ^{(n)}_{\min }\le x,\frac{\sigma ^{(n)}_{\max }-\mathfrak {a}_n}{\mathfrak {b}_n}\le y \right) =R(x)G(y) \quad \text { for any } x\ge 0 \text { and } y\in \mathbb {R}, \end{aligned}$$
(6)

where

$$\begin{aligned} R(x)=1-\exp \left( {-x^2/2}\right) ,~x\ge 0,\quad \text { and }\quad G(y)=\exp \left( {-e^{-y}}\right) ,~ y\in \mathbb {R} \end{aligned}$$
(7)

are the Rayleigh distribution and Gumbel distribution, respectively, and the normalizing constants are given by

$$\begin{aligned} \mathfrak {a}_n=\sqrt{n\ln (n/2)}\quad \text { and } \quad \mathfrak {b}_n=\frac{1}{2}\sqrt{\frac{n}{\ln (n/2)}},\quad n\ge 3. \end{aligned}$$
(8)

The proof of Theorem 1.2 is based on the Davis–Mikosch method, used to prove that a normalized maximum of the periodogram converges in distribution to the Gumbel law, see Theorem 2.1 in (Davis and Mikosch 1999). This method relies on Einmahl’s multivariate extension of the so-called Komlós–Major–Tusnády approximation, see (Einmahl 1989). However, the largest singular value \(\sigma ^{(n)}_{\max }\) and the smallest singular value \(\sigma ^{(n)}_{\min }\) are strongly correlated, and hence the computation of the condition number law is a priori hard. Thus, using the Davis–Mikosch method we compute the limiting law of the random vector

$$\begin{aligned} \left( \sigma ^{(n)}_{\min },\frac{\sigma ^{(n)}_{\max }-\mathfrak {a}_n}{\mathfrak {b}_n}\right) \end{aligned}$$
(9)

whose components are not independent. Applying the Continuous Mapping Theorem we deduce the limiting law of the condition number \(\kappa (\mathcal {C}_n)\), see Corollary 1.3 Item (iv). Along the lines of (Heiny and Mikosch 2021), one can obtain convergence of the point processes of the singular values in the setting of Theorem 1.2. We remark that (3) resembles the discrete Fourier transform and is related to the so-called periodogram, which has been used in many areas of applied science. The maximum of the periodogram has already been studied, for instance in (Davis and Mikosch 1999; Kokoszka and Mikosch 2000; Lin and Liu 2009; Turkman and Walker 1984), and under a suitable rescaling the Gumbel distribution appears as the limiting law. In (Peligrad and Wu 2010), the asymptotic behavior of Fourier transforms of stationary ergodic sequences with finite second moments is analyzed, and it is shown that asymptotically the real and imaginary parts of the Fourier transform decouple into a product of independent Gaussian distributions. The so-called quenched central limit theorem for the discrete Fourier transform of a stationary ergodic process is obtained in (Barrera and Peligrad 2016), and the central limit theorem for discrete Fourier transforms of functional time series is given in (Cerovecki and Hörmann 2017). In addition, in (Cerovecki et al. 2022) the maximum of the periodogram is studied for time series with values in a Hilbert space. However, to the best of our knowledge, Theorem 1.2 is not an immediate consequence of these results. As a consequence of Theorem 1.2 we obtain the limiting distribution of the (normalized) largest singular value, the smallest singular value and the (normalized) condition number, as Corollary 1.3 states.
For an \(n\times n\) symmetric random Toeplitz matrix satisfying Hypothesis 1.1, it is shown in (Sen and Virág 2013) that the largest eigenvalue, scaled by \(\sqrt{2n\ln (n)}\), converges in \(L^{2+\delta }\) as \(n\rightarrow \infty\) to the constant \(\Vert \textsf {Sin}\Vert ^2_{2\rightarrow 4}=0.8288\ldots\).

We point out that Theorem 1 in (Bryc and Sethuraman 2009) yields that the Gumbel distribution is the limiting distribution for the (renormalized) largest singular value for symmetric circulant random matrices with generating i.i.d. sequence (half of its entries) satisfying Hypothesis 1.1. Also, for suitable normalization the limiting law of the largest singular value of Hermitian Gaussian circulant matrices has Gumbel distribution, see Corollary 5 in (Meckes 2009).

Recall that the square root of an Exponential random variable with parameter \(\lambda\) has a Rayleigh distribution with parameter \((2\lambda )^{-1/2}\). The exponential law appears as the limiting distribution of the minimum modulus of trigonometric polynomials, see Theorem 1 in (Yakir and Zeitouni 2021) for the Gaussian case and Theorem 1.2 of (Cook and Nguyen 2021) for the sub-Gaussian case.

The Fréchet distribution appears as the limiting distribution of the (rescaled) largest eigenvalue of random real symmetric matrices with independent heavy-tailed entries, see Corollary 1 in (Auffinger et al. 2009) and (Basrak et al. 2021) for the non-i.i.d. case. The Fréchet distribution in Corollary 1.3 Item (iv) has cumulative distribution function \(F(t)=\exp (-t^{-2}) \mathbbm {1}_{\lbrace t>0 \rbrace }\), with shape parameter 2, scale parameter 1 and location parameter 0. Moreover, it does not possess finite variance. For descriptions of extreme value distributions and limit theorems we refer to (Galambos 1987).
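The infinite-variance claim can be verified directly from the tail of F: if X has distribution function F, then

$$\mathbb {E}\left[ X^2\right] =\int _0^\infty \mathbb {P}\left( X^2>t\right) \,dt=\int _0^\infty \left( 1-e^{-1/t}\right) dt\ge \int _1^\infty \left( 1-e^{-1/t}\right) dt,$$

and since \(1-e^{-1/t}\sim t^{-1}\) as \(t\rightarrow \infty\), the last integral diverges; hence \(\mathbb {E}[X^2]=\infty\).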

Corollary 1.3 (Asymptotic distribution of the largest singular value, the smallest singular value and the condition number)

Let the notation and hypothesis of Theorem 1.2 be valid. The following holds.

  1. (i)

    The normalized maximum \(\frac{\sigma ^{(n)}_{\max }-\mathfrak {a}_n}{\mathfrak {b}_n}\) and the minimum \(\sigma ^{(n)}_{\min }\) are asymptotically independent.

  2. (ii)

    The normalized maximum \(\frac{\sigma ^{(n)}_{\max }-\mathfrak {a}_n}{\mathfrak {b}_n}\) converges in distribution as \(n\rightarrow \infty\) to a Gumbel distribution, i.e.,

    $$\lim \limits _{n\rightarrow \infty }\mathbb {P}\left( \frac{\sigma ^{(n)}_{\max }-\mathfrak {a}_n}{\mathfrak {b}_n}\le y \right) =G(y) \quad \text { for any }\quad y\in \mathbb {R}.$$
  3. (iii)

    The minimum \(\sigma ^{(n)}_{\min }\) converges in distribution as \(n\rightarrow \infty\) to a Rayleigh distribution, i.e.,

    $$\lim \limits _{n\rightarrow \infty }\mathbb {P}\left( \sigma ^{(n)}_{\min }\le x \right) =R(x) \quad \text { for any }\quad x\ge 0.$$
  4. (iv)

    The condition number \(\kappa (\mathcal {C}_n)\) converges in distribution as \(n\rightarrow \infty\) to a Fréchet distribution, i.e.,

    $$\lim \limits _{n\rightarrow \infty } \mathbb {P}\left( \frac{\kappa (\mathcal {C}_n)}{\sqrt{\frac{1}{2}n\ln (n)}}\le z \right) =F(z)\quad \text { for any }\quad z>0,$$

    where \(F(z)=\exp \left( -z^{-2}\right) \mathbbm {1}_{\lbrace z>0 \rbrace }\).
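Items (ii) and (iii) can be illustrated with a quick Monte Carlo experiment. The sketch below is our illustration, taking Gaussian entries as a particular instance of Hypothesis 1.1 and arbitrary choices of \(n\) and the number of trials; convergence to the Gumbel law is slow, so only rough agreement should be expected. It compares empirical probabilities at fixed points with the limits \(R\) and \(G\).

```python
import numpy as np

rng = np.random.default_rng(7)
n, trials = 2000, 2000

xi = rng.standard_normal((trials, n))          # i.i.d. standard Gaussian entries
lam = np.abs(np.fft.fft(xi, axis=1))           # singular values |lambda_k^{(n)}|
smin, smax = lam.min(axis=1), lam.max(axis=1)

a_n = np.sqrt(n * np.log(n / 2))               # normalizing constants (8)
b_n = 0.5 * np.sqrt(n / np.log(n / 2))

# Empirical CDFs at x = 1 and y = 0 versus the Rayleigh and Gumbel laws.
emp_R = np.mean(smin <= 1.0)                   # target: R(1) = 1 - exp(-1/2)
emp_G = np.mean((smax - a_n) / b_n <= 0.0)     # target: G(0) = exp(-1)

print(emp_R, 1 - np.exp(-0.5))
print(emp_G, np.exp(-1.0))
```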

In the sequel, we briefly compare our results with the literature about the limiting law of the condition number, the smallest singular value and the largest singular value.

Remark 1.4 (Fréchet distributions as limiting distributions for condition numbers)

The Fréchet distribution with shape parameter 2, scale parameter 2 and location parameter 0 is the limiting distribution, as the dimension grows, of \(\kappa (\mathcal {A}_n)/n\), where \(\mathcal {A}_n\) is a square matrix of dimension n with i.i.d. complex Gaussian entries, see Theorem 6.2 in (Edelman 1988). When \(\mathcal {A}_n\) has i.i.d. real Gaussian entries, \(\kappa (\mathcal {A}_n)/n\) converges in distribution, as n tends to infinity, to a random variable with an explicit density, see Theorem 6.1 in (Edelman 1988). We stress that this density does not belong to the Fréchet family of distributions. We also point out that the distribution of the so-called Demmel condition number for (real and complex) Wishart matrices is given explicitly in (Edelman 1992).

Remark 1.5 (A word about the smallest singular value \(\sigma _1(\mathcal {A}_n)\))

The behavior of the smallest singular value appears naturally in the numerical inversion of large matrices. For instance, when the random matrix \(\mathcal {A}_n\) has i.i.d. complex Gaussian entries, for all n the random variable \(n\sigma ^2_1(\mathcal {A}_n)\) has the Chi-square distribution with two degrees of freedom, see Corollary 3.3 in (Edelman 1988). For a random matrix \(\mathcal {A}_n\) with i.i.d. real Gaussian entries, it is shown in Corollary 3.1 in (Edelman 1988) that \(n\sigma ^2_1(\mathcal {A}_n)\) converges in distribution, as the dimension increases, to a random variable with an explicit density. For further discussion about the smallest singular value, we refer to (Bai and Yin 1993; Bose and Hachem 2020; Gregoratti and Maran 2021; Heiny and Mikosch 2018; Huang and Tikhomirov 2020; Kostlan 1988; Tao and Vu 2010; Tatarko 2018).

Remark 1.6 (A word about the largest singular value \(\sigma _n(\mathcal {A}_n)\))

A lot is known about the behavior of the largest singular value. As an illustration, for a random matrix \(\mathcal {A}_n\) with i.i.d. real Gaussian entries, it is shown in Lemma 4.1 in (Edelman 1988) that \((1/n)\sigma ^2_n(\mathcal {A}_n)\) converges in probability to 4 as n grows, while for a random matrix \(\mathcal {A}_n\) with i.i.d. complex Gaussian entries, \((1/n)\sigma ^2_n(\mathcal {A}_n)\) converges in probability to 8. We stress that the Gumbel distribution is the limiting law of the spectral radius of the Ginibre ensemble, see Theorem 1.1 in (Rider 2014). Recently, it was shown in Theorem 4 in (Arenas-Velilla and Pérez-Abreu 2021) that the Gumbel distribution is the limiting law of the largest eigenvalue of a Gaussian Laplacian matrix. For further discussion, we refer to (Auffinger et al. 2009; Bai et al. 1988; Bryc and Sethuraman 2009; Cerovecki et al. 2022; Davis and Mikosch 1999; Heiny and Mikosch 2018; Meckes 2009; Soshnikov 2004, 2006).

We continue with the proof of Corollary 1.3.

Proof of Corollary 1.3

Items (i)-(iii) follow directly from Theorem 1.2 and the Continuous Mapping Theorem (see Theorem 13.25 in (Klenke 2014)). In the sequel, we prove Item (iv). By (5) we have

$$\begin{aligned} \frac{\kappa (\mathcal {C}_n)}{\sqrt{\frac{1}{2}n\ln (n)}}=\frac{\mathfrak {b}_n}{{\sqrt{\frac{1}{2}n\ln (n)}}} \frac{1}{\sigma ^{(n)}_{\min }} \frac{\sigma ^{(n)}_{\max }-\mathfrak {a}_n}{\mathfrak {b}_n}+ \frac{\mathfrak {a}_n}{{\sqrt{\frac{1}{2}n\ln (n)}}}\frac{1}{\sigma ^{(n)}_{\min }}. \end{aligned}$$
(10)

Theorem 1.2, the Continuous Mapping Theorem, the limits

$$\lim \limits _{n\rightarrow \infty }\frac{\mathfrak {b}_n}{{\sqrt{\frac{1}{2}n\ln (n)}}}=0,\qquad \lim \limits _{n\rightarrow \infty } \frac{\mathfrak {a}_n}{{\sqrt{\frac{1}{2}n\ln (n)}}}=\sqrt{2},$$

and the Slutsky Theorem (see Theorem 13.18 in (Klenke 2014)) imply

$$\begin{aligned} \begin{array}{c}\frac{\mathfrak {b}_n}{{\sqrt{\frac{1}{2}n\ln (n)}}}\frac{1}{\sigma ^{(n)}_{\min }} \frac{\sigma ^{(n)}_{\max }-\mathfrak {a}_n}{\mathfrak {b}_n}{}\longrightarrow 0\cdot \frac{\mathfrak {G}}{\mathfrak {R}}=0, \quad \text{ in } \text{ distribution, } \quad \text{ as } \quad n\rightarrow \infty ,\\ \frac{\mathfrak {a}_n}{{\sqrt{\frac{1}{2}n\ln (n)}}}\frac{1}{\sigma ^{(n)}_{\min }}{}\longrightarrow \frac{\sqrt{2}}{\mathfrak {R}}, \quad \text{ in } \text{ distribution, } \quad \text{ as } \quad n\rightarrow \infty , \end{array} \end{aligned}$$
(11)

where \(\mathfrak {G}\) and \(\mathfrak {R}\) are random variables with Gumbel and Rayleigh distributions, respectively. The Slutsky Theorem with the help of (10) and (11) implies

$$\begin{aligned} \frac{\kappa (\mathcal {C}_n)}{\sqrt{\frac{1}{2}n\ln (n)}}\longrightarrow \frac{\sqrt{2}}{\mathfrak {R}}, \quad \text { in distribution, } \quad \text { as }\quad n\rightarrow \infty . \end{aligned}$$
(12)

Lemma 3.4 in Appendix A implies that \(\frac{\sqrt{2}}{\mathfrak {R}}\) possesses the Fréchet distribution F. This finishes the proof of Item (iv).\(\square\)
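For completeness, the identity borrowed from Lemma 3.4 can be verified in one line: for any \(z>0\),

$$\mathbb {P}\left( \frac{\sqrt{2}}{\mathfrak {R}}\le z \right) =\mathbb {P}\left( \mathfrak {R}\ge \frac{\sqrt{2}}{z} \right) =\exp \left( -\frac{1}{2}\cdot \frac{2}{z^2}\right) =\exp \left( -z^{-2}\right) =F(z).$$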

Recently, the condition number for powers of matrices has been studied in (Huang and Tikhomirov 2020). As a consequence of Theorem 1.2 we have the following corollary which, in particular, gives the limiting distribution of the condition number for the powers of \(\mathcal {C}_n\).

Corollary 1.7 (Asymptotic distribution of p-th power of the maximum, the minimum and the condition number)

Let the notation and hypothesis of Theorem 1.2 be valid and take \(p\in \mathbb {N}\). The following holds.

  1. (i)

    Asymptotic distribution of the p-th power of the normalized maximum. For any \(y\in \mathbb {R}\) it follows that

    $$\lim \limits _{n\rightarrow \infty }\mathbb {P}\left( \frac{\sigma ^{(n)}_{\max }(\mathcal {C}^p_n)-A_n(p)}{B_n(p)}\le y \right) =G\left( y\right) ,$$

    where \(A_n(p)=\mathfrak {a}^p_n\) and \(B_n(p)=p\mathfrak {b}^p_n(2^p-1)\) for all \(n\ge 3\).

  2. (ii)

    Asymptotic distribution of the p-th power of the minimum. For any \(x\ge 0\) it follows that

    $$\lim \limits _{n\rightarrow \infty }\mathbb {P}\left( \sigma ^{(n)}_{\min }(\mathcal {C}^p_n)\le x \right) =R\left( x^{1/p}\right) .$$
  3. (iii)

    Asymptotic distribution of the p-th power of the condition number. For any \(z>0\) it follows that

    $$\lim \limits _{n\rightarrow \infty } \mathbb {P}\left( \frac{\kappa (\mathcal {C}^p_n)}{(\frac{1}{2}n\ln (n))^{p/2}}\le z \right) =F\left( z^{1/p}\right) .$$

Proof

By (2) we have \(\mathcal {C}^p_n=\mathcal {F}^*_n \mathcal {D}^p_n \mathcal {F}_n\) for any \(p\in \mathbb {N}\). Therefore, \(\mathcal {C}^p_n\) is a normal matrix and then

$$\sigma ^{(n)}_{\min }(\mathcal {C}^p_n)=(\sigma ^{(n)}_{\min }(\mathcal {C}_n))^p,\quad \sigma ^{(n)}_{\max }(\mathcal {C}^p_n)=(\sigma ^{(n)}_{\max }(\mathcal {C}_n))^p \quad \text { and } \quad \kappa (\mathcal {C}^p_n)=(\kappa (\mathcal {C}_n))^p.$$

Consequently, the statements (ii)-(iii) are direct consequences of Theorem 1.2 with the help of Lemma 3.3 in Appendix A and the Continuous Mapping Theorem.
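These identities are straightforward to confirm numerically for a sample circulant matrix (a sketch with illustrative choices of \(n\) and \(p\); the matrix is built directly from its defining structure):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 16, 3
xi = rng.standard_normal(n)

# Circulant matrix with entry (i, j) equal to xi_{(j - i) mod n}.
C = np.array([[xi[(j - i) % n] for j in range(n)] for i in range(n)])
Cp = np.linalg.matrix_power(C, p)

s = np.linalg.svd(C, compute_uv=False)         # descending order
sp = np.linalg.svd(Cp, compute_uv=False)

assert np.isclose(sp[0], s[0] ** p)            # largest singular value
assert np.isclose(sp[-1], s[-1] ** p)          # smallest singular value
assert np.isclose(sp[0] / sp[-1], (s[0] / s[-1]) ** p)   # condition number
```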

In the sequel, we prove (i). For \(p=1\), Item (i) follows directly from Item (ii) of Corollary 1.3. For any \(p\in \mathbb {N}\setminus \{1\}\) set

$$A_n:=A_n(p)=\mathfrak {a}^p_n\quad \text { and } \quad B_n=p\sum \limits _{j=0}^{p-1} \mathfrak {a}^j_n \mathfrak {b}^{p-j}_n.$$

Let \(y\in \mathbb {R}\) and observe that \(B_n y+A_n\rightarrow \infty\) as \(n\rightarrow \infty\) due to \(\mathfrak {b}_n/ \mathfrak {a}_n\rightarrow 0\) as \(n\rightarrow \infty\). Then for n large enough we have

$$\begin{aligned} \begin{aligned}\mathbb {P}\left( \frac{\sigma ^{(n)}_{\max }(\mathcal {C}^p_n)-A_n}{B_n}\le y \right)&=\mathbb {P}\left( \sigma ^{(n)}_{\max }(\mathcal {C}_n)\le (B_n y+A_n)^{1/p} \right) \\&=\mathbb {P}\left( \frac{\sigma ^{(n)}_{\max }(\mathcal {C}_n)-\mathfrak {a}_n}{\mathfrak {b}_n}\le \frac{(B_n y+A_n)^{1/p}-\mathfrak {a}_n}{\mathfrak {b}_n} \right) . \end{aligned} \end{aligned}$$
(13)

We claim that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\frac{(B_n y+A_n)^{1/p}-\mathfrak {a}_n}{\mathfrak {b}_n}=y\quad \text { for any }\quad y\in \mathbb {R}. \end{aligned}$$
(14)

Indeed, notice that

$$\begin{aligned} \frac{(B_n y+A_n)^{1/p}-\mathfrak {a}_n}{\mathfrak {b}_n} =\frac{\mathfrak {a}_n}{\mathfrak {b}_n}\left( \left( p\sum \limits _{j=0}^{p-1} \mathfrak {a}^{j-p}_n \mathfrak {b}^{p-j}_n y+1\right) ^{1/p}-1\right) =\frac{1}{\delta _n}\left( \left( py\sum \limits _{j=1}^{p} \delta ^{j}_n +1\right) ^{1/p}-1\right) , \end{aligned}$$
(15)

where \(\delta _n:=\mathfrak {b}_n/ \mathfrak {a}_n\). Since \(\delta _n\rightarrow 0\) as \(n\rightarrow \infty\), L’Hôpital’s rule yields

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\frac{1}{\delta _n}\left( \left( py\sum \limits _{j=1}^{p} \delta ^{j}_n +1\right) ^{1/p}-1\right) =\lim \limits _{n\rightarrow \infty } \left[ \frac{1}{p}\left( py\sum \limits _{j=1}^{p} \delta ^{j}_n +1\right) ^{1/p-1}py \sum \limits _{j=1}^{p} j\delta ^{j-1}_n\right] =y. \end{aligned}$$
(16)

Hence, (13) with the help of Lemma 3.3 in Appendix A implies

$$\lim \limits _{n\rightarrow \infty }\mathbb {P}\left( \frac{\sigma ^{(n)}_{\max }(\mathcal {C}^p_n)-A_n}{B_n}\le y \right) =G(y)\quad \text { for any }\quad y\in \mathbb {R}.$$

By (8), a straightforward computation yields \(B_n=B_n(p)=p\mathfrak {b}^p_n(2^p-1)\) for all \(n\ge 3\). This finishes the proof of Item (i).\(\square\)

The following proposition states the standard Gaussian case. It is an immediate consequence of Theorem 1.2. However, since the random variables are i.i.d. with standard Gaussian distribution, the computation of the law of (9) and of its limiting distribution can be carried out explicitly. We include this alternative and instructive proof in order to illustrate how we identify the right-hand side of (6).

Proposition 1.8 (Joint asymptotic distribution of the minimum and the maximum: the Gaussian case)

Let \((\xi _j)_{j\in \mathbb {N}_0}\) be a sequence of i.i.d. real Gaussian coefficients with zero mean and unit variance. Then it follows that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\mathbb {P}\left( \sigma ^{(n)}_{\min }\le x,\frac{\sigma ^{(n)}_{\max }-\mathfrak {a}_n}{\mathfrak {b}_n}\le y \right) =R(x)G(y) \quad \text { for any } x\ge 0 \text { and } y\in \mathbb {R}, \end{aligned}$$
(17)

where the functions R and G are given in (7), and the normalizing constants \(\mathfrak {a}_n\), \(\mathfrak {b}_n\) are defined in (8).

Proof

Let \(q_n:=\lfloor n/2\rfloor\). For \(1\le k\le q_n\) we note that \(\lambda ^{(n)}_k = \overline{\lambda ^{(n)}_{n-k}}\), where \(\overline{\lambda }\) denotes the complex conjugate of a complex number \(\lambda\). Then we have

$$\begin{aligned} \sigma ^{(n)}_{\max } = \max \lbrace |\lambda ^{(n)}_k | : 0\le k\le q_n \rbrace \quad \text { and } \quad \sigma ^{(n)}_{\min } = \min \lbrace |\lambda ^{(n)}_k | : 0\le k\le q_n \rbrace . \end{aligned}$$
(18)

The complex modulus of \(\lambda ^{(n)}_k\) is given by \(|\lambda ^{(n)}_{k} |=\sqrt{c_{k,n}^2+s_{k,n}^2}\), where \(c_{k,n} := \sum _{j=0}^{n-1} \xi _j \cos \left( {2\pi \frac{kj}{n}}\right)\) and \(s_{k,n}:= \sum _{j=0}^{n-1} \xi _j \sin \left( {2\pi \frac{kj}{n}}\right)\) for \(0\le k \le q_n\). Note that \(s_{0,n}=0\). By straightforward computations we obtain for any \(n\in \mathbb {N}\)

  1. (i)

    \(\mathbb {E}\left( c_{k,n} \right) =\mathbb {E}\left( s_{k,n} \right) =0\) for \(0\le k\le q_n\),

  2. (ii)

\(\mathbb {E}\left( c^2_{0,n} \right) =n\), \(\mathbb {E}\left( c_{k,n}^2 \right) =\mathbb {E}\left( s_{k,n}^2 \right) =\frac{n}{2}\) for \(1\le k\le q_n\) with \(2k\ne n\), and \(\mathbb {E}\left( c^2_{q_n,n} \right) =n\) when n is even,

  3. (iii)

    \(\mathbb {E}\left( c_{k,n}\cdot s_{l,n} \right) =\mathbb {E}\left( c_{k,n}\cdot c_{l,n} \right) =\mathbb {E}\left( s_{k,n}\cdot s_{l,n} \right) = 0\) for \(0\le l<k\le q_n\).

Then (i), (ii) and (iii) imply that \(\frac{1}{\sqrt{n}}\left( {c_{0,n},c_{1,n},s_{1,n},\ldots ,c_{q_n,n},s_{q_n,n}}\right)\) is a Gaussian vector with uncorrelated, and therefore independent, entries. Thus, the random vector \(\frac{1}{n}\big (c_{0,n}^2,c^2_{1,n}+s^2_{1,n},\ldots ,c^2_{q_n,n}+s^2_{q_n,n}\big ) =\frac{1}{n}\big (|\lambda ^{(n)}_{0}|^2,|\lambda ^{(n)}_{1}|^2,\ldots ,|\lambda ^{(n)}_{q_n}|^2\big )\) has independent entries satisfying

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{1}{n}|\lambda ^{(n)}_{0}|^2 &{} \text {has a Chi-square distribution } \chi _1 \text { with one degree of freedom,}\\ \frac{1}{n}|\lambda ^{(n)}_{k}|^2 &{} \text {has an exponential distribution } \mathsf {E}_1 \text { with parameter one for } 1\le k\le q_n-1, \end{array}\right. } \end{aligned}$$
(19)

and

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{1}{n}|\lambda ^{(n)}_{q_n}|^2 &{} \text {has } \chi _1\text { distribution for } n\text { being an even number }(\text {due to }s_{q_n,n}=0),\\ \frac{1}{n}|\lambda ^{(n)}_{q_n}|^2 &{} \text {has }\mathsf {E}_1\text { distribution for }n\text { being an odd number}. \end{array}\right. } \end{aligned}$$
(20)
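The exponential law in (19) can be checked by simulation (a sketch; since the entries are Gaussian, \(|\lambda ^{(n)}_{k}|^2/n\) is exactly \(\mathsf {E}_1\)-distributed for any fixed \(1\le k\le q_n-1\), so the empirical CDF at \(t=1\) should match \(1-e^{-1}\)):

```python
import numpy as np

rng = np.random.default_rng(11)
n, trials, k = 64, 20000, 5                    # any fixed 1 <= k <= q_n - 1

xi = rng.standard_normal((trials, n))
lam_k = np.fft.fft(xi, axis=1)[:, k]           # lambda_k^{(n)} up to conjugation
v = np.abs(lam_k) ** 2 / n                     # should be exactly Exp(1)

emp = np.mean(v <= 1.0)
assert abs(emp - (1 - np.exp(-1))) < 0.02      # Exp(1): P(V <= 1) = 1 - e^{-1}
```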

For n being an odd number, (19), (20) and Lemma 3.1 in Appendix A imply

$$\begin{aligned} \begin{aligned} \mathbb {P}\left( \sigma ^{(n)}_{\min }\le x,\sigma ^{(n)}_{\max }\le \mathfrak {b}_n y+\mathfrak {a}_n \right)&{}=\prod _{j=0}^{q_n} \mathbb {P}\left( |\lambda ^{(n)}_j|\le \mathfrak {b}_n y+\mathfrak {a}_n \right) - \prod _{j=0}^{q_n} \mathbb {P}\left( x<|\lambda ^{(n)}_j|\le \mathfrak {b}_n y+\mathfrak {a}_n \right) \\&{}=\prod _{j=0}^{q_n} \mathbb {P}\left( \frac{|\lambda ^{(n)}_j|^2}{n}\le \frac{(\mathfrak {b}_n y+\mathfrak {a}_n)^2}{n} \right) - \prod _{j=0}^{q_n} \mathbb {P}\left( \frac{x^2}{n}< \frac{|\lambda ^{(n)}_j|^2}{n}\le \frac{(\mathfrak {b}_n y+\mathfrak {a}_n)^2}{n} \right) \\&{}=\mathbb {P}\left( \chi _1\le \frac{(\mathfrak {b}_n y+\mathfrak {a}_n)^2}{n} \right) \left( \mathbb {P}\left( \textsf {E}_1\le \frac{(\mathfrak {b}_n y+\mathfrak {a}_n)^2}{n} \right) \right) ^{q_n}\\&{}\qquad - \mathbb {P}\left( \frac{x^2}{n}<\chi _1\le \frac{(\mathfrak {b}_n y+\mathfrak {a}_n)^2}{n} \right) \left( \mathbb {P}\left( \frac{x^2}{n}< \textsf {E}_1\le \frac{(\mathfrak {b}_n y+\mathfrak {a}_n)^2}{n} \right) \right) ^{q_n}. \end{aligned} \end{aligned}$$
(21)

We observe that \(\lim \limits _{n\rightarrow \infty } \frac{x^2}{n}=0\) and \(\lim \limits _{n\rightarrow \infty }\frac{(\mathfrak {b}_n y+\mathfrak {a}_n)^2}{n}=\infty\) for any \(x,y\in \mathbb {R}\). Recall that for any non-negative numbers u and v such that \(u<v\) it follows that \(\mathbb {P}\left( u<\textsf {E}_1\le v \right) =\exp (-u)-\exp (-v)\). Then, a straightforward calculation yields

$$\begin{aligned} \left( \mathbb {P}\left( \frac{x^2}{n}< \textsf {E}_1\le \frac{(\mathfrak {b}_n y+\mathfrak {a}_n)^2}{n} \right) \right) ^{q_n} =\exp \left( -\frac{x^2}{2}\right) \left( 1- \frac{\exp \left( \frac{x^2}{n}-\frac{y^2}{4\ln (n/2)}\right) \exp (-y)}{n/2} \right) ^{n/2-1/2}. \end{aligned}$$
(22)

Hence, for any \(x,y\in \mathbb {R}\) it follows that

$$\lim \limits _{n\rightarrow \infty }\left( \mathbb {P}\left( \frac{x^2}{n}< \textsf {E}_1\le \frac{(\mathfrak {b}_n y+\mathfrak {a}_n)^2}{n} \right) \right) ^{q_n}=\exp \left( -\frac{x^2}{2}\right) \exp \left( -e^{-y}\right) .$$

The preceding limit with the help of (21) implies limit (17). Similar reasoning yields the proof when n is an even number. This finishes the proof of Proposition 1.8.\(\square\)
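The limit above can be checked numerically. The following is a minimal sketch, assuming the normalization \(\mathfrak {a}_n=\sqrt{n\ln (n/2)}\) and \(\mathfrak {b}_n=\sqrt{n}/(2\sqrt{\ln (n/2)})\), our reading of (8) consistent with the expansion in (22):

```python
import math

# Hedged sketch: a_n = sqrt(n*ln(n/2)) and b_n = sqrt(n)/(2*sqrt(ln(n/2))) are
# assumed here; they are consistent with the expansion in (22) but (8) itself
# is not reproduced in this section.
def joint_factor(n, x, y):
    """(P(x^2/n < E_1 <= (b_n*y + a_n)^2/n))^{q_n} for odd n, cf. (22)."""
    a = math.sqrt(n * math.log(n / 2))
    b = math.sqrt(n) / (2 * math.sqrt(math.log(n / 2)))
    q = (n - 1) // 2
    p = math.exp(-x**2 / n) - math.exp(-((b * y + a) ** 2) / n)
    return p ** q

x, y = 1.0, 0.5
limit = math.exp(-x**2 / 2) * math.exp(-math.exp(-y))
approx = joint_factor(10**7 + 1, x, y)
print(approx, limit)
```

For n of order \(10^7\) the two printed values agree to roughly three decimal places, reflecting the slow logarithmic speed of convergence in (22).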

The rest of the manuscript is organized as follows. Section 2 is divided into five subsections. In Sect. 2.1 we prove that the asymptotic behavior of \(|\lambda ^{(n)}_0|\) can be removed from our computations. In Sect. 2.2 we establish that the original sequence of random variables can be assumed to be bounded. In Sect. 2.3 we provide a procedure in which we smooth (by a Gaussian perturbation) the bounded sequence obtained in Sect. 2.2. In Sect. 2.4 we prove the main result for the bounded and smooth sequence given in Sect. 2.3. Finally, in Sect. 2.5 we summarize all the results proved in the previous subsections and show Theorem 1.2. The manuscript closes with Appendix 1, which collects the technical results used in the main text.

Komlós–Major–Tusnády approximation

In this section, we show that Theorem 1.2 can be deduced by a Gaussian approximation in which computations can be carried out. Roughly speaking, we show that

$$\begin{aligned} \mathbb {P}\left( \sigma ^{(n)}_{\min }\le x,\; \frac{\sigma ^{(n)}_{\max }-\mathfrak {a}_n}{\mathfrak {b}_n}\le y \right) - \mathbb {P}\left( {\sigma }^{(n,\mathcal {N})}_{\min } \le \sqrt{\frac{2}{n}} x,\; \sigma ^{(n,\mathcal {N})}_{\max }\le \sqrt{\frac{2}{n}}(\mathfrak {b}_n y+\mathfrak {a}_n) \right) =\text {o}_n(1), \end{aligned}$$
(23)

where \(\text {o}_n(1)\rightarrow 0\) as \(n\rightarrow \infty\), \((\mathfrak {a}_n)_{n\in \mathbb {N}}\) and \((\mathfrak {b}_n)_{n\in \mathbb {N}}\) are the sequences defined in (8) in Theorem 1.2, and \(\sigma ^{(n,\mathcal {N})}_{\min }\) and \(\sigma ^{(n,\mathcal {N})}_{\max }\) denote the smallest and the largest singular values, respectively, of a random circulant matrix with generating sequence given by i.i.d. bounded and smooth random variables which can be well-approximated by a standard Gaussian distribution.

Throughout this section, for any set \(A\subset \Omega\) we denote its complement with respect to \(\Omega\) by \(A^{\text {c}}\). We also point out the following immediate relation:

$$\begin{aligned} &{}\mathbb {P}\left( \sigma ^{(n)}_{\min }\le x,\; \frac{\sigma ^{(n)}_{\max }-\mathfrak {a}_n}{\mathfrak {b}_n}\le y \right) \\ &{}= \mathbb {P}\left( \bigcap _{k=0}^{n-1}\lbrace |\lambda ^{(n)}_k |\le \mathfrak {b}_n y + \mathfrak {a}_n \rbrace \right) - \mathbb {P}\left( \bigcap _{k=0}^{n-1}\lbrace x < |\lambda ^{(n)}_k |\le \mathfrak {b}_n y + \mathfrak {a}_n \rbrace \right) . \end{aligned}$$
(24)

Removing the singular value \(|\lambda ^{(n)}_0|\)  

In this subsection we show that \(\left| \lambda ^{(n)}_0\right|\) can be removed from our computations. In other words, it suffices to carry out our computations over the array \(\left( {|\lambda ^{(n)}_k |}\right) _{k\in \{1,\ldots ,n-1\}}\), as the following lemma states.

Lemma 2.1 (Asymptotic behavior of  \(|\lambda ^{(n)}_0|\) is negligible)

Assume that Hypothesis 1.1 is valid. Then for any \(x,y\in \mathbb {R}\) it follows that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty } \left| \mathbb {P}\left( \bigcap _{k=1}^{n-1} \lbrace x< |\lambda ^{(n)}_k |\le \mathfrak {b}_n y + \mathfrak {a}_n \rbrace \right) - \mathbb {P}\left( \bigcap _{k=0}^{n-1} \lbrace x < |{\lambda }^{(n)}_k |\le \mathfrak {b}_n y + \mathfrak {a}_n \rbrace \right) \right| =0. \end{aligned}$$
(25)

Proof

For any \(n\in \mathbb {N}\) we set

$$A:=\bigcap _{k=0}^{n-1} \lbrace x< |\lambda ^{(n)}_k |\le \mathfrak {b}_n y + \mathfrak {a}_n \rbrace \quad \text { and } \quad B:=\bigcap _{k=1}^{n-1} \lbrace x < |{\lambda }^{(n)}_k |\le \mathfrak {b}_n y + \mathfrak {a}_n \rbrace .$$

Since \({A}\subset {B}\), we have

$$\begin{aligned} |\mathbb {P}\left( B \right) -\mathbb {P}\left( A \right) |&{}=\mathbb {P}\left( B \right) -\mathbb {P}\left( A \right) = \mathbb {P}\left( {B}\setminus {A} \right) = \mathbb {P}\left( B\cap \lbrace x< |{\lambda }^{(n)}_0 |\le \mathfrak {b}_n y + \mathfrak {a}_n \rbrace ^\text {c} \right) \\ &{}\le \mathbb {P}\left( \lbrace x < |{\lambda }^{(n)}_0 |\le \mathfrak {b}_n y + \mathfrak {a}_n \rbrace ^\text {c} \right) \\ &{}= \mathbb {P}\left( |\lambda ^{(n)}_0 | \le x \right) + \mathbb {P}\left( |\lambda ^{(n)}_0 |> \mathfrak {b}_n y + \mathfrak {a}_n \right) \\ &{} = \mathbb {P}\left( |\frac{\lambda ^{(n)}_0}{\sqrt{n}} | \le \frac{x}{\sqrt{n}} \right) + \mathbb {P}\left( |\frac{\lambda ^{(n)}_0}{\sqrt{n}} | > \frac{\mathfrak {b}_n y + \mathfrak {a}_n}{\sqrt{n}} \right) . \end{aligned}$$
(26)

By (3) we have \(\lambda ^{(n)}_0=\sum _{j=0}^{n-1}\xi _j\). Since \(\xi _0,\ldots ,\xi _{n-1}\) are i.i.d. non-degenerate zero mean random variables with finite second moment, the Central Limit Theorem yields

$$\begin{aligned} \frac{\lambda ^{(n)}_0}{\sqrt{n}}\longrightarrow \text {N}(0,\mathbb {E}[|\xi _0|^{2}]), \quad \text { in distribution, } \quad \text { as }\quad n\rightarrow \infty , \end{aligned}$$
(27)

where \(\text {N}(0,\mathbb {E}[|\xi _0|^{2}])\) denotes the Gaussian distribution with zero mean and variance \(\mathbb {E}[|\xi _0|^{2}]\). We note that for any \(x,y\in \mathbb {R}\) the following limits hold

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\frac{x}{\sqrt{n}}=0\quad \text { and } \quad \lim \limits _{n\rightarrow \infty }\frac{\mathfrak {b}_n y + \mathfrak {a}_n}{\sqrt{n}}=\infty . \end{aligned}$$
(28)

Hence (26), (27) and (28) with the help of Lemma 3.3 in Appendix 1 imply (25).\(\square\)

Reduction to the bounded case

In this subsection we show that it is enough to prove Theorem 1.2 for bounded random variables. That is, the random variables \((\xi _j)_{j\in \{0,\ldots ,n-1\}}\) in \(\lambda ^{(n)}_k = \sum _{j=0}^{n-1} \omega ^{kj}_n \xi _j\) can be assumed to be bounded. Following the spirit of Lemma 4 in (Bryc and Sethuraman 2009) we obtain the following comparison.

Lemma 2.2 (Truncation procedure)

Assume that Hypothesis 1.1 is valid. For each \(n\in \mathbb {N}\) define the array of random variables \(\big (\widetilde{\xi }_j^{(n)}\big )_{j\in \{0,\ldots ,n-1\}}\) by

$$\begin{aligned} \widetilde{\xi }_j^{(n)}:= \xi _j\mathbbm {1}_{\lbrace |\xi _j |\le n^{1/s} \rbrace } - \mathbb {E}\left( \xi _j\mathbbm {1}_{\lbrace |\xi _j |\le n^{1/s} \rbrace } \right) , \end{aligned}$$
(29)

where \(s=2+\delta\) and \(\delta\) is the constant that appears in Hypothesis 1.1. For each \(k\in \{1,\ldots ,n-1\}\), set

$$\begin{aligned} \widetilde{\lambda }^{(n)}_k:= \sum _{j=0}^{n-1} \widetilde{\xi }^{(n)}_j\, \omega ^{kj}_n. \end{aligned}$$
(30)

Then it follows that

$$\begin{aligned} \mathbb {P}\left( \lim \limits _{n\rightarrow \infty } \max _{1\le k \le n-1} ||\lambda ^{(n)}_k | - |\widetilde{\lambda }^{(n)}_k | |=0\right) =1. \end{aligned}$$
(31)

Proof

Let \(k\in \{1,\ldots ,n-1\}\) be fixed. Recall that \(\omega _n^{k} = \exp (\mathsf {i} k \cdot 2\pi /n)\). Note that \(\omega _n^{k} \ne 1\) and \(\omega _n^{kn}= 1\). Hence the geometric sum \(\sum _{j=0}^{n-1} \omega _n^{kj}=0\). Then

$$\begin{aligned} \sum _{j=0}^{n-1} \omega _n^{kj} \widetilde{\xi }^{(n)}_j = \sum _{j=0}^{n-1} \omega _n^{kj} \xi _j \mathbbm {1}_{\lbrace |\xi _j |\le n^{1/s} \rbrace }. \end{aligned}$$
(32)

Indeed,

$$\begin{aligned}\sum\limits_{j=0}^{n-1} \omega _n^{kj} \widetilde{\xi }^{(n)}_j &{} = \sum _{j=0}^{n-1} \omega _n^{kj}\xi _j\mathbbm {1}_{\lbrace |\xi _j |\le n^{1/s} \rbrace } - \sum _{j=0}^{n-1} \omega _n^{kj}\mathbb {E}\left( \xi _j\mathbbm {1}_{\lbrace |\xi _j |\le n^{1/s} \rbrace } \right) \\ &{} = \sum _{j=0}^{n-1} \omega _n^{kj}\xi _j\mathbbm {1}_{\lbrace |\xi _j |\le n^{1/s} \rbrace } - \mathbb {E}\left( \xi _0\mathbbm {1}_{\lbrace |\xi _0 |\le n^{1/s} \rbrace } \right) \sum _{j=0}^{n-1} \omega _n^{kj} \\ &{} = \sum _{j=0}^{n-1} \omega _n^{kj}\xi _j\mathbbm {1}_{\lbrace |\xi _j |\le n^{1/s} \rbrace }.\end{aligned}$$
(33)
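Identity (32)–(33) can be checked directly: subtracting any constant (in particular the expectation in (29), replaced below by the sample mean purely for illustration) leaves the coefficients with \(k\ge 1\) unchanged, because \(\sum _{j=0}^{n-1}\omega _n^{kj}=0\). A minimal sketch:

```python
import numpy as np

# Sketch of identity (32): centering the truncated variables does not change
# the DFT coefficients lambda_k for k >= 1, since sum_j omega_n^{kj} = 0 for
# k not congruent to 0 mod n. The t-distributed sample and the sample-mean
# centering are illustrative stand-ins for xi_j and the expectation in (29).
rng = np.random.default_rng(0)
n, s = 64, 2.5
xi = rng.standard_t(3, n)
trunc = xi * (np.abs(xi) <= n ** (1 / s))   # xi_j * 1_{|xi_j| <= n^{1/s}}
centered = trunc - trunc.mean()             # proxy for the centering in (29)
# DFT with kernel omega_n^{kj} = exp(i * 2*pi * k*j / n)
omega = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
lam_trunc = omega @ trunc
lam_centered = omega @ centered
print(np.max(np.abs(lam_trunc[1:] - lam_centered[1:])))  # ~0 up to roundoff
```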

As a consequence of (32) and the triangle inequality we have the following estimate

$$\begin{aligned} ||\sum\limits_{j=0}^{n-1} \omega _n^{kj} \xi _j | - |\sum _{j=0}^{n-1} \omega _n^{kj} \widetilde{\xi }^{(n)}_j | | &{} \le |\sum _{j=0}^{n-1} \omega _n^{kj} \xi _j - \sum _{j=0}^{n-1} \omega _n^{kj} \widetilde{\xi }^{(n)}_j | \\ &{} = |\sum _{j=0}^{n-1} \omega _n^{kj} \xi _j - \sum _{j=0}^{n-1} \omega _n^{kj} \xi _j \mathbbm {1}_{\lbrace |\xi _j |\le n^{1/s} \rbrace } | \\ &{} = |\sum _{j=0}^{n-1} \omega _n^{kj} \xi _j \mathbbm {1}_{\lbrace |\xi _j |> n^{1/s} \rbrace } | \le \sum _{j=0}^{n-1} |\xi _j | \mathbbm {1}_{\lbrace |\xi _j |> n^{1/s} \rbrace }. \end{aligned}$$
(34)

Since \(\mathbb {E}\left( |\xi _0 |^s \right) <\infty\), we have \(\sum _{\ell =1}^\infty \mathbb {P}\left( |\xi _0 |^s > \ell \right) <\infty\), (see for instance Theorem 4.26 in (Klenke 2014)). Hence \(\sum _{\ell =1}^\infty \mathbb {P}\left( |\xi _0 | > \ell ^{1/s} \right) <\infty\). By the Borel–Cantelli Lemma (see Theorem 2.7 Item (i) in (Klenke 2014)), we have that

$$\mathbb {P}\left( \limsup _{\ell \rightarrow \infty } \lbrace |\xi _\ell | > \ell ^{1/s} \rbrace \right) =0.$$

In other words, there exists an event \(\Omega ^*\) with \(\mathbb {P}\left( \Omega ^* \right) =1\) such that for each \(v\in \Omega ^*\) there is \(L(v)\in \mathbb {N}\) satisfying

$$\begin{aligned} |\xi _\ell (v) |\le \ell ^{1/s}\quad \text{ for } \text{ all } \ell \ge L(v). \end{aligned}$$
(35)

Let \(v\in \Omega ^*\) and take \(n\ge 1+\max \lbrace L(v),|\xi _0(v) |^s,\ldots ,|\xi _{L(v)}(v) |^s \rbrace\). By (35) and the choice of n, we obtain

$$\begin{aligned} \sum\limits_{j=0}^{n-1} |\xi _j | \mathbbm {1}_{\lbrace |\xi _j |> n^{1/s} \rbrace } &{} \le \sum _{j=0}^{L(v)} |\xi _j | \mathbbm {1}_{\lbrace |\xi _j |> n^{1/s} \rbrace } + \sum _{j=L(v)+1}^{n-1} |\xi _j | \mathbbm {1}_{\lbrace |\xi _j |> n^{1/s} \rbrace } \\ &{} \le \sum _{j=0}^{L(v)} |\xi _j | \mathbbm {1}_{\lbrace |\xi _j |> n^{1/s} \rbrace } + \sum _{j=L(v)+1}^{n-1} |\xi _j | \mathbbm {1}_{\lbrace |\xi _j |> j^{1/s} \rbrace } = 0. \end{aligned}$$
(36)

Combining (34) and (36) we deduce (31).\(\square\)

The next lemma allows us to replace the original array of random variables \((\xi _j)_{j\in \{0,\ldots ,n-1\}}\) by the array \((\widetilde{\xi }^{(n)}_j)_{j\in \{0,\ldots ,n-1\}}\) of bounded random variables defined in (29) of Lemma 2.2.

Lemma 2.3 (Reduction to the bounded case)

Assume that Hypothesis 1.1 is valid. Let

$$\begin{aligned} \widetilde{\sigma }^{(n)}_{\min } := \min _{k\in \lbrace 1,\ldots ,n-1 \rbrace }{ |\widetilde{\lambda }^{(n)}_k |}\quad \text { and } \quad \widetilde{\sigma }^{(n)}_{\max } := \max _{k\in \lbrace 1,\ldots ,n-1 \rbrace }{|\widetilde{\lambda }^{(n)}_k |}, \end{aligned}$$
(37)

where \((\widetilde{\lambda }^{(n)}_k)_{k\in \{1,\ldots , n-1\}}\) is defined in (30) of Lemma 2.2. Then for any \(x,y\in \mathbb {R}\) it follows that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\left| \mathbb {P}\left( \sigma ^{(n)}_{\min }\le x,\frac{\sigma ^{(n)}_{\max }-\mathfrak {a}_n}{\mathfrak {b}_n}\le y \right) - \mathbb {P}\left( \widetilde{\sigma }^{(n)}_{\min }\le x,\frac{\widetilde{\sigma }^{(n)}_{\max }-\mathfrak {a}_n}{\mathfrak {b}_n}\le y \right) \right| = 0. \end{aligned}$$
(38)

Proof

For each \(n\in \mathbb {N}\) let

$$\begin{aligned} A_n:=\bigcap _{k=1}^{n-1} \lbrace x< |\lambda ^{(n)}_k |\le \mathfrak {b}_n y + \mathfrak {a}_n \rbrace \quad \text { and }\quad B_n:=\bigcap _{k=1}^{n-1} \lbrace x < |\widetilde{\lambda }^{(n)}_k |\le \mathfrak {b}_n y + \mathfrak {a}_n \rbrace . \end{aligned}$$

Lemma 2.2 implies that almost surely \(\mathbbm {1}_{A_n} - \mathbbm {1}_{B_n}\rightarrow 0\), as \(n\rightarrow \infty\). Hence, the Dominated Convergence Theorem yields

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\left| \mathbb {P}\left( A_n \right) - \mathbb {P}\left( B_n \right) \right| =0. \end{aligned}$$
(39)

The preceding limit with the help of Lemma 2.1 and (24) implies (38).\(\square\)

Smoothness

In this subsection we introduce a Gaussian perturbation in order to smooth the random variables \((\widetilde{\xi }^{(n)}_j)_{j\in \{0,\ldots ,n-1\}}\) defined in (29) of Lemma 2.2. Let \((N_j)_{j\in \{0,\ldots ,n-1\}}\) be an array of i.i.d. random variables with standard Gaussian distribution. Let \((\mathfrak {s}_n)_{n\in \mathbb {N}}\) be a deterministic sequence of positive numbers satisfying \(\mathfrak {s}_n\rightarrow 0\) as \(n\rightarrow \infty\) in such a way that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty } \frac{\mathfrak {s}_n\mathfrak {b}_n}{\sqrt{n}}=\lim \limits _{n\rightarrow \infty } \frac{\mathfrak {s}_n\mathfrak {a}_n}{\sqrt{n}}=0. \end{aligned}$$
(40)

This is made precise in (52) below. We anticipate that \(\mathfrak {s}_n\approx n^{-\theta }\) for a suitable positive exponent \(\theta\). We define the array \((\gamma ^{(n)}_k)_{k\in \{1,\ldots ,n-1\}}\) as follows

$$\begin{aligned} \gamma ^{(n)}_k := \sum _{j=0}^{n-1} \omega ^{kj}_n \mathfrak {s}_n N_j = \mathfrak {s}_n\sum _{j=0}^{n-1} \omega ^{kj}_n N_j\quad \text { for }\quad k\in \{1,\ldots ,n-1\}. \end{aligned}$$
(41)

Then we set

$$\begin{aligned} &\beta _n:=\sqrt{\frac{2}{n}},\quad \displaystyle \sigma ^{(n,\mathcal {N})}_{\min } := \beta _n\min _{1\le k\le n-1}\lbrace |\widetilde{\lambda }^{(n)}_k + \gamma ^{(n)}_k | \rbrace ,\quad \displaystyle \sigma ^{(n,\mathcal {N})}_{\max }:= \beta _n\max _{1\le k\le n-1}\lbrace |\widetilde{\lambda }^{(n)}_k + \gamma ^{(n)}_k | \rbrace , \end{aligned}$$
(42)

where \((\widetilde{\lambda }^{(n)}_k)_{k\in \{1,\ldots ,n-1\}}\) is defined in (30) of Lemma 2.2.
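Condition (40) can be checked numerically for a sequence \(\mathfrak {s}_n\) of the form (52). The sketch below assumes \(\mathfrak {a}_n=\sqrt{n\ln (n/2)}\) and \(\mathfrak {b}_n=\sqrt{n}/(2\sqrt{\ln (n/2)})\) (our reading of (8), consistent with (22)) and uses the illustrative values \(\delta =0.5\), \(\eta =0.01\), since Hypothesis 1.1 does not fix them:

```python
import math

# Hedged check of (40): with a_n, b_n of order sqrt(n ln n) and
# s_n^2 = n^{-1/2 + (1-eta)/(2+delta)} as in (52) (the variance factor is
# bounded by 1), both s_n*a_n/sqrt(n) and s_n*b_n/sqrt(n) tend to 0,
# although only at a slow polynomial-versus-logarithmic rate.
def ratios(n, delta=0.5, eta=0.01):
    a = math.sqrt(n * math.log(n / 2))
    b = math.sqrt(n) / (2 * math.sqrt(math.log(n / 2)))
    s = math.sqrt(n ** (-0.5 + (1 - eta) / (2 + delta)))
    return s * a / math.sqrt(n), s * b / math.sqrt(n)

for n in (10**4, 10**6, 10**8):
    print(n, ratios(n))  # both coordinates decrease towards 0
```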

Lemma 2.4

Assume that Hypothesis 1.1 is valid. Then for any \(x,y\in \mathbb {R}\) and a suitable sequence \((\mathfrak {s}_n)_{n\in \mathbb {N}}\) that tends to zero as \(n\rightarrow \infty\), it follows that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty } \mathbb {P}\left( \widetilde{\sigma }^{(n)}_{\min }\le x,\; \widetilde{\sigma }^{(n)}_{\max } \le \mathfrak {b}_n y + \mathfrak {a}_n \right) =R(x)G(y), \end{aligned}$$
(43)

where \(\widetilde{\sigma }^{(n)}_{\min }\) and \(\widetilde{\sigma }^{(n)}_{\max }\) are given in (37) of Lemma 2.3 and RG are defined in Theorem 1.2.

Proof

Let \(k\in \{1,\ldots ,n-1\}\). The triangle inequality yields

$$\begin{aligned} |\widetilde{\lambda }^{(n)}_k + \gamma ^{(n)}_k | - \max _{1\le \ell \le n-1}{|\gamma ^{(n)}_\ell |} \le |\widetilde{\lambda }^{(n)}_k | \le |\widetilde{\lambda }^{(n)}_k + \gamma ^{(n)}_k | + \max _{1\le \ell \le n-1}{|\gamma ^{(n)}_\ell |}. \end{aligned}$$
(44)

By (44) we have

$$\begin{aligned} &{} \lbrace \sigma ^{(n,\mathcal {N})}_{\min } + \beta _n \max _{1\le \ell \le n-1}|\gamma ^{(n)}_\ell | \le \beta _n x,\; \sigma ^{(n,\mathcal {N})}_{\max } + \beta _n \max _{1\le \ell \le n-1}{|\gamma ^{(n)}_\ell |}\le \beta _n (\mathfrak {b}_n y + \mathfrak {a}_n) \rbrace \\ &{} \quad \subset \lbrace \sigma ^{(n,\mathcal {N})}_{\min } + \beta _n \max _{1\le \ell \le n-1}|\gamma ^{(n)}_\ell | \le \beta _n x,\; \beta _n \widetilde{\sigma }^{(n)}_{\max } \le \beta _n (\mathfrak {b}_n y + \mathfrak {a}_n) \rbrace \\ &{} \quad \subset \lbrace \beta _n \widetilde{\sigma }^{(n)}_{\min }\le \beta _n x,\; \beta _n \widetilde{\sigma }^{(n)}_{\max } \le \beta _n (\mathfrak {b}_n y + \mathfrak {a}_n) \rbrace \\ &{} \quad = \lbrace \widetilde{\sigma }^{(n)}_{\min }\le \ x,\; \widetilde{\sigma }^{(n)}_{\max } \le (\mathfrak {b}_n y + \mathfrak {a}_n) \rbrace \\ &{} \quad \subset \lbrace \sigma ^{(n,\mathcal {N})}_{\min } - \beta _n \max _{1\le \ell \le n-1}|\gamma ^{(n)}_\ell | \le \beta _n x,\; \beta _n \widetilde{\sigma }^{(n)}_{\max } \le \beta _n (\mathfrak {b}_n y + \mathfrak {a}_n) \rbrace \\ &{} \quad \subset \lbrace \sigma ^{(n,\mathcal {N})}_{\min } - \beta _n \max _{1\le \ell \le n-1}|\gamma ^{(n)}_\ell | \le \beta _n x,\; \sigma ^{(n,\mathcal {N})}_{\max } - \beta _n \max _{1\le \ell \le n-1}|\gamma ^{(n)}_{\ell } |\le \beta _n (\mathfrak {b}_n y + \mathfrak {a}_n) \rbrace , \end{aligned}$$
(45)

where \(\beta _n\), \(\sigma ^{(n,\mathcal {N})}_{\min }\) and \(\sigma ^{(n,\mathcal {N})}_{\max }\) are defined in (42). We claim that

$$\begin{aligned} \beta _n \max _{1\le \ell \le n-1}|\gamma ^{(n)}_\ell |\longrightarrow 0, \quad \text { in distribution, } \quad \text { as }\quad n\rightarrow \infty . \end{aligned}$$
(46)

Indeed, by Proposition 1.8, Slutsky’s Theorem and (40) we have

$$\begin{aligned} \beta _n \max _{1\le \ell \le n-1}|\gamma ^{(n)}_\ell | = \mathfrak {s}_n\beta _n\mathfrak {b}_n\frac{\frac{1}{\mathfrak {s}_n} \max \limits _{1\le \ell \le n-1}|\gamma ^{(n)}_\ell |-\mathfrak {a}_n}{\mathfrak {b}_n}+\mathfrak {s}_n\beta _n \mathfrak {a}_n \longrightarrow 0, \end{aligned}$$
(47)

in distribution, as \(n\rightarrow \infty\). The limit (47) can be also deduced from (1.1), p. 522 in (Davis and Mikosch 1999) or Chapter 10 in (Brockwell and Davis 1991).

Now, we have the necessary elements to conclude the proof of Lemma 2.4. By Lemma 2.5 we have

$$\begin{aligned} \lim \limits _{n\rightarrow \infty } \mathbb {P}\left( \sigma ^{(n,\mathcal {N})}_{\min } \le \beta _n x,\; \sigma ^{(n,\mathcal {N})}_{\max } \le \beta _n(\mathfrak {b}_n y + \mathfrak {a}_n) \right) =R(x)G(y)\quad \text { for any } x\ge 0,~y\in \mathbb {R}. \end{aligned}$$
(48)

By (46) and (48) with the help of the Slutsky Theorem we deduce

$$\begin{aligned} &{}\lim \limits _{n\rightarrow \infty } \mathbb {P}\left( \sigma ^{(n,\mathcal {N})}_{\min } - \beta _n\max _{1\le \ell \le n-1}|\gamma ^{(n)}_\ell | \le \beta _n x,\; \sigma ^{(n,\mathcal {N})}_{\max } - \beta _n \max _{1\le \ell \le n-1}|\gamma ^{(n)}_{\ell } |\le \beta _n(\mathfrak {b}_n y + \mathfrak {a}_n) \right) \\ &{}=\lim \limits _{n\rightarrow \infty } \mathbb {P}\left( \sigma ^{(n,\mathcal {N})}_{\min } + \beta _n\max _{1\le \ell \le n-1}|\gamma ^{(n)}_\ell | \le \beta _n x,\; \sigma ^{(n,\mathcal {N})}_{\max } + \beta _n \max _{1\le \ell \le n-1}|\gamma ^{(n)}_{\ell } |\le \beta _n(\mathfrak {b}_n y + \mathfrak {a}_n) \right) \\ &{} =R(x)G(y)\quad \text { for any } x\ge 0,~y\in \mathbb {R}. \end{aligned}$$
(49)

The preceding limit with the help of (45) implies

$$\lim \limits _{n\rightarrow \infty }\mathbb {P}\left( \widetilde{\sigma }^{(n)}_{\min }\le x,\; \widetilde{\sigma }^{(n)}_{\max } \le \mathfrak {b}_n y + \mathfrak {a}_n \right) =R(x)G(y)\quad \text { for any } x\ge 0,~y\in \mathbb {R}.$$

As a consequence we conclude (43).\(\square\)

Bounded and smooth case

In this subsection we prove the following lemma.

Lemma 2.5 (Gaussian approximation for the bounded and smooth case)

Assume that Hypothesis 1.1 is valid. Let \(\beta _n\), \(\sigma ^{(n,\mathcal {N})}_{\min }\) and \(\sigma ^{(n,\mathcal {N})}_{\max }\) be as in (42). Then for any \(x,y\in \mathbb {R}\) and a suitable sequence \((\mathfrak {s}_n)_{n\in \mathbb {N}}\) that tends to zero as \(n\rightarrow \infty\), it follows that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty } \mathbb {P}\left( \sigma ^{(n,\mathcal {N})}_{\min } \le \beta _n x,\; \sigma ^{(n,\mathcal {N})}_{\max } \le \beta _n(\mathfrak {b}_n y + \mathfrak {a}_n) \right) =R(x)G(y), \end{aligned}$$
(50)

where R and G are defined in Theorem 1.2.

We follow the ideas from (Davis and Mikosch 1999). To prove Lemma 2.5 we introduce some notation and the so-called Einmahl–Komlós–Major–Tusnády approximation. For each \(d\in \mathbb {N}\) and indices \(i_j\in \{1,\ldots ,n-1\}\), \(j=1,\ldots ,d\), we define the Fourier frequencies \(w_{i_j} = \frac{2\pi i_j}{n}\), \(j=1,\ldots ,d\), and then the vector

$$\begin{aligned} v_d(\ell ) = \left( \cos (w_{i_1}\ell ), \sin (w_{i_1}\ell ),\ldots , \cos (w_{i_d}\ell ), \sin (w_{i_d}\ell )\right) ^{\text {T}}\quad \text { for any }\quad \ell \in \mathbb {N}_0, \end{aligned}$$
(51)

where \({\text {T}}\) denotes the transpose operator. The next lemma is the main tool in the proof of Lemma 2.5. It allows us to reduce the problem to a perturbed Gaussian case.

Lemma 2.6 [Lemma 3.4 of (Davis and Mikosch 1999)]

Let \(d,n\in \mathbb {N}\) and denote by \(\widetilde{p}_n\) the continuous density function of the random vector

$$2^{1/2}n^{-1/2} \sum _{\ell =0}^{n-1}\left( {\widetilde{\xi }_{\ell }^{(n)} + s_n N_\ell }\right) v_d(\ell ),$$

where \(\left( {N_\ell }\right) _{\ell\in \mathbb {N}_0}\) is a sequence of i.i.d. standard Gaussian random variables, independent of the sequence \(( \widetilde{\xi }_{\ell }^{(n)})_{\ell \in \{0,\ldots ,n-1\}}\) that is defined in (29) of Lemma 2.2, and \(s_n^2 = \text {Var}\left( {\widetilde{\xi }_0^{(n)}}\right) r_n^2\). If

$$n^{-2c}\ln (n) \le r_n^2 \le 1\quad \text { with }\quad c=\frac{1}{2}-\frac{1-\eta }{2+\delta }$$

for arbitrarily small \(\eta >0\), then the relation

$$\widetilde{p}_n(x) = \varphi _{(1+s^2_n)I_{2d}}(x)(1+\text {o}_n(1))$$

holds uniformly for \(|x |^3=\text {o}^{(d)}_n\left( \min \lbrace n^c,n^{1/2-1/(2+\delta )} \rbrace \right)\), where the implicit constant in the \(\text {o}^{(d)}_n\)-notation depends on d, and \(\varphi _\Sigma\) is the density of a 2d-dimensional zero mean Gaussian vector with covariance matrix \(\Sigma\).

Proof of Lemma 2.5

Let \(x,y\in \mathbb {R}\). The idea is to apply Lemma 2.6 to the random sequence \(\left( \widetilde{\xi }_j^{(n)}+\mathfrak {s}_n N_j\right) _{j\in \mathbb {N}_0}\) with a suitable deterministic sequence \((\mathfrak {s}_n)_{n\in \mathbb {N}}\). We note that the variance of \(\widetilde{\xi }_0^{(n)}\), \(\text {Var}\left( {\widetilde{\xi }_0^{(n)}}\right)\), is bounded by \(\mathbb {E}[|\xi _0|^{2}]=1\). Then we define

$$\begin{aligned} \mathfrak {s}^2_n:= \text {Var}\left( {\widetilde{\xi }_0^{(n)}}\right) n^{-1/2+(1-\eta )/(2+\delta )} \end{aligned}$$
(52)

for sufficiently small \(\eta >0\). We observe that

$$\begin{aligned} &{} \mathbb {P}\left( \sigma ^{(n,\mathcal {N})}_{\min } \le \beta _n x,\; \sigma ^{(n,\mathcal {N})}_{\max }\le \beta _n(\mathfrak {b}_n y + \mathfrak {a}_n) \right) \\ &{} = \mathbb {P}\left( \bigcap _{k=1}^{n-1}\lbrace |\widetilde{\lambda }^{(n)}_k + \gamma ^{(n)}_k |\le \mathfrak {b}_n y + \mathfrak {a}_n \rbrace \right) - \mathbb {P}\left( \bigcap _{k=1}^{n-1}\lbrace x< |\widetilde{\lambda }^{(n)}_k + \gamma ^{(n)}_k |\le \mathfrak {b}_n y + \mathfrak {a}_n \rbrace \right) \\ &{} = \mathbb {P}\left( \bigcap _{k=1}^{q_n}\lbrace |\widetilde{\lambda }^{(n)}_k + \gamma ^{(n)}_k |\le \mathfrak {b}_n y + \mathfrak {a}_n \rbrace \right) - \mathbb {P}\left( \bigcap _{k=1}^{q_n}\lbrace x < |\widetilde{\lambda }^{(n)}_k + \gamma ^{(n)}_k |\le \mathfrak {b}_n y + \mathfrak {a}_n \rbrace \right) , \end{aligned}$$
(53)

where \(q_n:=\lfloor n/2\rfloor\) with \(\lfloor \cdot \rfloor\) being the floor function. Then we define

$$\begin{aligned} J^{(n)}(x,y):=\mathbb {P}\left( \bigcap _{k=1}^{q_n}\lbrace x < |\widetilde{\lambda }^{(n)}_k + \gamma ^{(n)}_k |\le \mathfrak {b}_n y + \mathfrak {a}_n \rbrace \right) . \end{aligned}$$
(54)
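The last equality in (53) reduces the \(n-1\) events to \(q_n\) of them via the conjugate symmetry \(|\lambda ^{(n)}_k|=|\lambda ^{(n)}_{n-k}|\) of the DFT of a real sequence. A quick numerical check (note that `np.fft.fft` uses the conjugate kernel \(\exp (-\mathsf {i}2\pi kj/n)\), which leaves the moduli unchanged):

```python
import numpy as np

# For a real generating sequence, lambda_{n-k} is the complex conjugate of
# lambda_k, so the moduli |lambda_k| and |lambda_{n-k}| coincide; this is
# what allows the intersection in (53) to run over k = 1, ..., q_n only.
rng = np.random.default_rng(1)
n = 31
xi = rng.standard_normal(n)
lam = np.fft.fft(xi)          # lam[k] = sum_j xi_j exp(-i 2 pi k j / n)
# position i of lam[1:] is k = i + 1; reversing pairs k with n - k
sym_gap = np.max(np.abs(np.abs(lam[1:]) - np.abs(lam[1:][::-1])))
print(sym_gap)  # ~0 up to roundoff
```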

In what follows we compute the limit of (54) as \(n\rightarrow \infty\). In fact we prove that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty } J^{(n)}(x,y)=\exp \left( -\left( {\frac{x^2}{2} + e^{-y}}\right) \right) . \end{aligned}$$
(55)

For convenience, let

$$\begin{aligned} A^{(n)}_{k}:=\lbrace x < |\widetilde{\lambda }^{(n)}_k + \gamma ^{(n)}_k |\le \mathfrak {b}_n y + \mathfrak {a}_n \rbrace ^{\text {c}}\quad \text { for all }\quad k\in \{1,\ldots ,q_n\} \end{aligned}$$
(56)

and observe that

$$\begin{aligned} 1-J^{(n)}(x,y)=1 - \mathbb {P}\left( \bigcap _{k=1}^{q_n} \left( A^{(n)}_{k}\right) ^{\text {c}} \right) =\mathbb {P}\left( \bigcup _{k=1}^{q_n}A^{(n)}_{k} \right) . \end{aligned}$$
(57)

By Lemma 3.2 in Appendix 1, for any fixed \(\ell \in \mathbb {N}\) we obtain

$$\begin{aligned} \sum _{j=1}^{2\ell }(-1)^{j-1} S^{(n)}_j \le \mathbb {P}\left( \bigcup _{k=1}^{q_n} A^{(n)}_{k} \right) \le \sum _{j=1}^{2\ell -1} (-1)^{j-1} S^{(n)}_j, \end{aligned}$$
(58)

where

$$\begin{aligned} S^{(n)}_j = \sum _{1\le i_1<\cdots <i_j\le q_n} \mathbb {P}\left( {A^{(n)}_{i_1}\cap \cdots \cap A^{(n)}_{i_j}}\right) . \end{aligned}$$
(59)

We claim that for every fixed \(d\in \mathbb {N}\) the following limit holds true

$$\begin{aligned} \lim \limits _{n\rightarrow \infty } \left( {\begin{array}{c}q_n\\ d\end{array}}\right) \mathbb {P}\left( \bigcap _{k=1}^d A^{(n)}_{k} \right) = \frac{1}{d!}\left( {\frac{x^2}{2} + e^{-y}}\right) ^d, \end{aligned}$$
(60)

where the symbol \(\left( {\begin{array}{c}q_n\\ d\end{array}}\right)\) denotes the binomial coefficient. Indeed, by Lemma 2.6 we have

$$\begin{aligned} \begin{aligned} \mathbb {P}\left( \bigcap _{k=1}^d A^{(n)}_{k} \right)&= \mathbb {P}\left( \bigcap _{k=1}^d \lbrace \frac{2x^2}{n} < |\sqrt{\frac{2}{n}}{\left( \widetilde{\lambda }^{(n)}_{k} + \gamma ^{(n)}_{k}\right) } |^2 \le \frac{2\left( {\mathfrak {b}_n y + \mathfrak {a}_n}\right) ^2}{n} \rbrace ^\text {c} \right) \\&= (1+\text {o}_n(1))\int _{{B}^{(n)}_d} \varphi _{(1+\mathfrak {s}_n^2)I_{2d}}(u) \mathrm {d}u, \end{aligned} \end{aligned}$$
(61)

where \(I_{2d}\) denotes the \(2d\times 2d\) identity matrix, \(\varphi _{(1+\mathfrak {s}_n^2)I_{2d}}\) is the density of a 2d-dimensional Gaussian vector with zero mean and covariance matrix \((1+\mathfrak {s}_n^2)I_{2d}\), \(\text {o}_n(1)\rightarrow 0\) as \(n\rightarrow \infty\), and

$$\begin{aligned}{B}^{(n)}_d:=\Biggl\{\left( {w_1,v_1,\ldots ,w_d,v_d}\right) \in \mathbb {R}^{2d}: \ &w_i^2+v_i^2\le \frac{2x^2}{n}\quad \text { or } \\ & w_i^2+v_i^2 > 2\frac{\left( {\mathfrak {b}_n y + \mathfrak {a}_n}\right) ^2}{n}\quad \text { for all }\quad i\in \{1,\ldots ,d\}\Biggr\}.\end{aligned}$$
(62)

By (8) and since \(x,y\in \mathbb {R}\) are fixed, there exists \(n_0=n_0(x,y)\) such that

$$\begin{aligned} 2x^2/n<2{\left( {\mathfrak {b}_n y + \mathfrak {a}_n}\right) ^2}/n\quad \text { for all }\quad n\ge n_0. \end{aligned}$$

Hence, (61) with the help of Lemma 3.5 in Appendix 1 yields

$$\begin{aligned} \mathbb {P}\left( \bigcap _{k=1}^d A^{(n)}_{k} \right) = (1+\text {o}_n(1))\left( {1-\exp \left( -\frac{x^2}{n(1+\mathfrak {s}^2_n)} \right) + \exp \left( -\frac{(\mathfrak {b}_n y + \mathfrak {a}_n)^2}{n(1+\mathfrak {s}^2_n)} \right) }\right) ^d. \end{aligned}$$
(63)
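Lemma 3.5 is not reproduced here, but the passage from (61) to (63) presumably amounts to the Rayleigh law: for a two-dimensional centered Gaussian vector with covariance \(\sigma ^2 I_2\), \(\mathbb {P}(W^2+V^2\le r^2)=1-\exp (-r^2/(2\sigma ^2))\). A numerical verification of this closed form by radial quadrature:

```python
import math

# The density of sqrt(W^2 + V^2) for (W, V) ~ N(0, sigma2 * I_2) is the
# Rayleigh density t/sigma2 * exp(-t^2/(2*sigma2)); integrating it over
# [0, r] by the midpoint rule recovers 1 - exp(-r^2/(2*sigma2)), the form
# used in (63) with r^2 = 2x^2/n and sigma2 = 1 + s_n^2.
def annulus_cdf_quadrature(r, sigma2, steps=200000):
    h = r / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h  # midpoint rule
        total += (t / sigma2) * math.exp(-t * t / (2 * sigma2)) * h
    return total

r, sigma2 = 1.7, 1.3
closed_form = 1 - math.exp(-r * r / (2 * sigma2))
print(annulus_cdf_quadrature(r, sigma2), closed_form)  # should agree closely
```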

By (8), the Stirling formula (see formula (1) in (Robbins 1955)) and the fact that \(\mathfrak {s}_n\rightarrow 0\) as \(n\rightarrow \infty\) we deduce

$$\begin{aligned} \lim \limits _{n\rightarrow \infty } \left( {\begin{array}{c}q_n\\ d\end{array}}\right) \left( {1-\exp \left( -\frac{x^2}{n(1+\mathfrak {s}^2_n)} \right) + \exp \left( -\frac{(\mathfrak {b}_n y + \mathfrak {a}_n)^2}{n(1+\mathfrak {s}^2_n)} \right) }\right) ^d = \frac{1}{d!}\left( {\frac{x^2}{2} + e^{-y}}\right) ^d. \end{aligned}$$
(64)

As a consequence of (63) and (64) we obtain (60).
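Limit (64) can also be checked numerically. The sketch below assumes the normalization \(\mathfrak {a}_n=\sqrt{n\ln (n/2)}\), \(\mathfrak {b}_n=\sqrt{n}/(2\sqrt{\ln (n/2)})\) (our reading of (8), consistent with (22)) and evaluates at \(\mathfrak {s}_n=0\); since \(\mathfrak {s}_n\rightarrow 0\) this does not affect the limit, while keeping the slowly vanishing \(\mathfrak {s}_n\) of (52) would require astronomically large n to see convergence:

```python
import math

# Numeric check of (64): binom(q_n, d) times the d-th power of the bracket
# in (63) approaches (1/d!) * (x^2/2 + e^{-y})^d. The a_n, b_n below are
# assumptions consistent with (22); s_n is set to 0 (its limiting value).
def lhs(n, d, x, y):
    a = math.sqrt(n * math.log(n / 2))
    b = math.sqrt(n) / (2 * math.sqrt(math.log(n / 2)))
    q = n // 2
    bracket = 1 - math.exp(-x**2 / n) + math.exp(-((b * y + a) ** 2) / n)
    return math.comb(q, d) * bracket**d

d, x, y = 3, 1.0, 0.5
target = (x**2 / 2 + math.exp(-y)) ** d / math.factorial(d)
print(lhs(10**7, d, x, y), target)  # close for large n
```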

Now, we prove (55). By (58), (59),  (60) and the fact that the events \((A^{(n)}_{k})_{k\in \{1,\ldots ,q_n\}}\) are independent, we have for any \(\ell \in \mathbb {N}\)

$$\begin{aligned} \sum _{j=1}^{2\ell }(-1)^{j-1} \frac{1}{j!}\left( {\frac{x^2}{2} + e^{-y}}\right) ^j &{}\le \liminf \limits _{n\rightarrow \infty }\mathbb {P}\left( \bigcup _{k=1}^{q_n} A^{(n)}_{k} \right) \\ &{}\le \limsup \limits _{n\rightarrow \infty }\mathbb {P}\left( \bigcup _{k=1}^{q_n} A^{(n)}_{k} \right) \le \sum _{j=1}^{2\ell -1} (-1)^{j-1} \frac{1}{j!}\left( {\frac{x^2}{2} + e^{-y}}\right) ^j. \end{aligned}$$
(65)

Sending \(\ell \rightarrow \infty\) in the preceding inequality yields

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\mathbb {P}\left( \bigcup _{k=1}^{q_n} A^{(n)}_{k} \right) = \sum _{j=1}^{\infty } (-1)^{j-1} \frac{1}{j!}\left( {\frac{x^2}{2} + e^{-y}}\right) ^j=1-\exp \left( -\left( {\frac{x^2}{2} + e^{-y}}\right) \right) . \end{aligned}$$
(66)

The preceding limit with the help of (57) implies (55).
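The squeeze in (65)–(66) is the standard Bonferroni bracketing of \(1-e^{-z}\) by alternating partial sums of the exponential series, with \(z=\frac{x^2}{2}+e^{-y}\). A short check that the brackets hold and tighten:

```python
import math

# Alternating partial sums of sum_{j>=1} (-1)^{j-1} z^j / j! bracket the
# limit 1 - exp(-z), as in (65); the gap upper - lower shrinks as ell grows.
def partial_sum(z, m):
    """sum_{j=1}^{m} (-1)^{j-1} z^j / j!"""
    return sum((-1) ** (j - 1) * z**j / math.factorial(j) for j in range(1, m + 1))

z = 1.0**2 / 2 + math.exp(-0.5)   # z = x^2/2 + e^{-y} with x = 1, y = 0.5
limit = 1 - math.exp(-z)
for ell in range(1, 5):
    lower, upper = partial_sum(z, 2 * ell), partial_sum(z, 2 * ell - 1)
    print(lower <= limit <= upper, upper - lower)
```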

Finally, by (53), (54) and (55) we obtain

$$\begin{aligned} \lim \limits _{n\rightarrow \infty } \mathbb {P}\left( \sigma ^{(n,\mathcal {N})}_{\min } \le \beta _n x,\; \sigma ^{(n,\mathcal {N})}_{\max }\le \beta _n(\mathfrak {b}_n y + \mathfrak {a}_n) \right) &{}=\lim \limits _{n\rightarrow \infty }\left( J^{(n)}(0,y)-J^{(n)}(x,y)\right) \\ &{}=\exp \left( -e^{-y}\right) -\exp \left( -\left( {\frac{x^2}{2} + e^{-y}}\right) \right) \\ &{}=R(x)G(y), \end{aligned}$$

where R and G are defined in (7).\(\square\)

Proof of Theorem 1.2

In this subsection we stress that Theorem 1.2 is a direct consequence of the results established above.

Proof of Theorem 1.2

Let \(x,y\in \mathbb {R}\). By (43) in Lemma 2.4 we have

$$\begin{aligned} \lim \limits _{n\rightarrow \infty } \mathbb {P}\left( \widetilde{\sigma }^{(n)}_{\min }\le x,\; \widetilde{\sigma }^{(n)}_{\max } \le \mathfrak {b}_n y + \mathfrak {a}_n \right) =R(x)G(y), \end{aligned}$$
(67)

where \(\widetilde{\sigma }^{(n)}_{\min }\) and \(\widetilde{\sigma }^{(n)}_{\max }\) are given in (37) of Lemma 2.3. The preceding limit with the help of (38) in Lemma 2.3 implies

$$\begin{aligned} \lim \limits _{n\rightarrow \infty } \mathbb {P}\left( \sigma ^{(n)}_{\min }\le x,\frac{\sigma ^{(n)}_{\max }-\mathfrak {a}_n}{\mathfrak {b}_n}\le y \right) =R(x)G(y). \end{aligned}$$
(68)

This concludes the proof of Theorem 1.2.\(\square\)