1 Introduction

A sequence \(Z=(z_n)_{n\in \mathbb {N}}\) in the unit disc \(\mathbb {D}\) is interpolating for \(\mathrm {H}^\infty \) if, given any bounded sequence \((w_n)_{n\in \mathbb {N}}\) in \(\mathbb {C}\), there exists a bounded analytic function f on \(\mathbb {D}\) such that \(f(z_n)=w_n\) for all n in \(\mathbb {N}\). The celebrated work of Carleson [9, 10] characterized interpolating sequences in terms of separation properties. To be precise, let

$$\begin{aligned} b_\tau (z):=\frac{\tau -z}{1-\overline{\tau }z},\quad z\in \mathbb {D}, \end{aligned}$$

be the involutive Blaschke factor at \(\tau \) in \(\mathbb {D}\), and let, for any z and w in \(\mathbb {D}\),

$$\begin{aligned} \rho (z, w):=\left| b_z(w)\right| \end{aligned}$$

be the pseudo-hyperbolic distance in \(\mathbb {D}\). Z is

  • weakly separated if

    $$\begin{aligned} \inf _{n\ne k}\rho (z_n, z_k)>0; \end{aligned}$$
  • uniformly separated if

    $$\begin{aligned} \inf _{n\in \mathbb {N}}\prod _{k\ne n}\rho (z_n, z_k)>0. \end{aligned}$$

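Both separation quantities can be probed numerically on finite truncations. The following is a small illustrative sketch (our own, not part of the argument), with the sample radii \(r_n=1-2^{-n}\), which give an exponentially (hence uniformly) separated sequence:

```python
import math

def rho(z, w):
    """Pseudo-hyperbolic distance rho(z, w) = |b_z(w)| in the unit disc."""
    return abs((z - w) / (1 - z.conjugate() * w))

def weak_sep(points):
    """inf_{n != k} rho(z_n, z_k) over a finite truncation."""
    return min(rho(z, w) for i, z in enumerate(points)
                         for j, w in enumerate(points) if i != j)

def unif_sep(points):
    """inf_n prod_{k != n} rho(z_n, z_k) over a finite truncation."""
    return min(math.prod(rho(z, w) for j, w in enumerate(points) if j != i)
               for i, z in enumerate(points))

# Sample radii 1 - 2^{-n}: both infima stay bounded away from zero.
Z = [complex(1 - 2.0 ** (-n), 0.0) for n in range(1, 12)]
print(weak_sep(Z), unif_sep(Z))
```

On this truncation the weak separation constant is close to \(1/3\), the limiting pseudo-hyperbolic distance between consecutive points.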
Carleson proved in [9] that Z is interpolating if and only if it is uniformly separated. Later on, [10], he characterized uniform separation in terms of a measure theoretic condition and weak separation:

Theorem 1.1

(Carleson) A sequence Z in \(\mathbb {D}\) is uniformly separated if and only if it is weakly separated and the measure

$$\begin{aligned} \mu _Z:=\sum _{n\in \mathbb {N}}(1-|z_n|^2)\delta _{z_n} \end{aligned}$$

is a Carleson measure for \(\mathrm {H}^2(\mathbb {D})\).

Throughout this note, a measure \(\mu \) on a domain D will be a Carleson measure for a reproducing kernel Hilbert space \(\mathcal {H}_k\) of holomorphic functions on D if

$$\begin{aligned} \Vert f\Vert _{\mathrm {L}^2(D, \mu )}\le C\Vert f\Vert _{\mathcal {H}_k},\quad f\in \mathcal {H}_k, \end{aligned}$$

for some \(C>0\). Later sections will take \(D=\mathbb {D}^d\), the unit polydisc, or \(D=\mathbb {B}^d\), the unit ball: the kernels chosen for these domains are the Szegö kernel on the polydisc and the Besov–Sobolev kernels on the unit ball.

In certain instances, the randomization of the conditions studied by Carleson becomes more tractable and provides insight into the structure of interpolating sequences. Cochran studied in [12] separation properties of random sequences. A random sequence in the unit disc is defined as follows: let \((\theta _n)_{n\in \mathbb {N}}\) be a sequence of independent random variables, all distributed uniformly in \((0, 2\pi )\) and defined on the same probability space \((\Omega , \mathcal {A}, \mathbb {P})\). Then, for any choice of a deterministic sequence of radii \((r_n)_{n\in \mathbb {N}}\) approaching 1, define

$$\begin{aligned} \lambda _n(\omega ):=r_ne^{i\theta _n(\omega )},\quad \omega \in \Omega . \end{aligned}$$

Considering the random sequence \(\Lambda (\omega )=(\lambda _n(\omega ))_{n\in \mathbb {N}}\), the 0-1 Kolmogorov law yields that events such as

$$\begin{aligned} \begin{aligned} \mathcal {W}&:=\{\Lambda \text { is weakly separated}\}\\ \mathcal {U}&:=\{\Lambda \text { is uniformly separated}\}\\ \mathcal {C}&:=\{\mu _\Lambda \text { is a Carleson measure for }\mathrm {H}^2(\mathbb {D})\}\\ \mathcal {I}&:=\{\Lambda \text { is an interpolating sequence}\} \end{aligned} \end{aligned}$$

have probability zero or one, thanks to the independence of the arguments of the points in \(\Lambda \). Let

$$\begin{aligned} I_j:=\{z\in \mathbb {D}: 1-2^{-j}\le |z|<1-2^{-(j+1)}\},\quad j\in \mathbb {N}, \end{aligned}$$
(1.1)

be the jth dyadic annulus of \(\mathbb {D}\), and let

$$\begin{aligned} N_j:= \# \Lambda \cap I_j. \end{aligned}$$
(1.2)

All the randomness of the sequence lies in the arguments of the points of \(\Lambda \), and therefore \((N_j)_{j\in \mathbb {N}}\) is a deterministic sequence. Cochran proved in [12, Thm. 2] that \(\mathbb {P}(\mathcal {W})=1\) provided that

$$\begin{aligned} \sum _{j\in \mathbb {N}} N_j^22^{-j}<\infty , \end{aligned}$$
(1.3)

and that \(\mathbb {P}(\mathcal {W})=0\) whenever the sum in (1.3) diverges. Later on, Rudowicz showed in [16] that (1.3) is a sufficient condition for \(\mu _\Lambda \) to be a Carleson measure for \(\mathrm {H}^2(\mathbb {D})\) almost surely, and concluded, thanks to Theorem 1.1, that \(\mathbb {P}(\mathcal {I})=1\) if and only if (1.3) holds. In particular, condition (1.3) singles out exactly those random sequences for which \(\mathcal {W}\), \(\mathcal {U}\) and \(\mathcal {I}\) all have probability one.
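
Condition (1.3) is easy to probe numerically. As an illustration (with two hypothetical counting profiles of our choosing): polynomial growth of \(N_j\) keeps the sum finite, while \(N_j=2^{j/2}\) already makes it diverge:

```python
# Partial sums of Cochran's condition (1.3) for two illustrative counting
# profiles N_j (number of points in the j-th dyadic annulus): N_j = j
# satisfies (1.3), while N_j = 2^{j/2} makes every summand equal to 1.
def partial_sum_13(N, J):
    """sum_{j <= J} N(j)^2 * 2^(-j)."""
    return sum(N(j) ** 2 * 2.0 ** (-j) for j in range(J + 1))

poly = partial_sum_13(lambda j: j, 60)               # converges (to 6)
expo = partial_sum_13(lambda j: 2.0 ** (j / 2), 60)  # grows linearly in J
print(poly, expo)
```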

The goal of this paper is to study random interpolating sequences on the polydisc and the d-dimensional unit ball. A sequence \(Z=(z_n)_{n\in \mathbb {N}}\) in \(\mathbb {D}^d\) is interpolating for \(\mathrm {H}^\infty (\mathbb {D}^d)\) if, given any bounded \((w_n)_{n\in \mathbb {N}}\) in \(\mathbb {C}\), there exists a bounded holomorphic function f on \(\mathbb {D}^d\) such that \(f(z_n)=w_n\), for all n. On the polydisc, the deterministic starting point is the following (partial) analogue of the Carleson interpolation theorem for sequences in the polydisc [7]:

Theorem 1.2

(Berndtsson, Chang and Lin) Let \(Z=(z_n)_{n\in \mathbb {N}}\) be a sequence in \(\mathbb {D}^d\), and let (a), (b) and (c) denote the following statements:

  1. (a)
    $$\begin{aligned} \inf _{n\in \mathbb {N}}\prod _{k\ne n}\rho _G(z_n, z_k)>0; \end{aligned}$$
    (1.4)
  2. (b)

    Z is interpolating for \(\mathrm {H}^\infty (\mathbb {D}^d)\);

  3. (c)

    The measure

    $$\begin{aligned} \mu _Z:=\sum _{n\in \mathbb {N}}\left( \prod _{i=1}^d(1-|z_n^i|^2)\right) \delta _{z_n} \end{aligned}$$

    is a Carleson measure for \(\mathrm {H}^2(\mathbb {D}^d)\) and

    $$\begin{aligned} \inf _{n\ne k}\rho _G(z_n, z_k )>0. \end{aligned}$$
    (1.5)

Then (a)\(\implies \)(b)\(\implies \)(c), and none of the converse implications hold.

Conditions (1.4) and (1.5) are separation conditions, both stated in terms of the so called Gleason distance on the polydisc:

$$\begin{aligned} \rho _G(w, z):=\max _{i=1,\dots , d}\rho (z^i, w^i),\quad z, w\in \mathbb {D}^d. \end{aligned}$$

Throughout this note, (1.4) will refer to uniform separation on the polydisc, while (1.5) defines a weakly separated sequence on the polydisc. Theorem 1.2 represents one of the best known attempts to characterize \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating sequences on the polydisc in terms of its hyperbolic geometry. A characterization of interpolating sequences for bounded analytic functions on the bi-disc can be found in [1], stated in terms of uniform separation conditions for an entire class of reproducing kernels on \(\mathbb {D}^2\). The motivation for the first part of this note is to find out whether conditions (a) and (c) of Theorem 1.2 are equivalent at least almost surely. A negative answer would imply that Theorem 1.2 is far from being a characterization. A positive answer would give the 0-1 Kolmogorov law for \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating sequences in the polydisc with random arguments. The construction of a random sequence \(\Lambda \) on the polydisc follows the same outline as in the case of the unit disc. Let \(\mathbb {T}^d\) be the d-dimensional torus in \(\mathbb {C}^d\), and let \((\theta ^1_n, \dots , \theta ^d_n)_{n\in \mathbb {N}}\) be a sequence of independent and identically distributed random variables taking values in \(\mathbb {T}^d\), all distributed uniformly and defined on the same probability space \((\Omega , \mathcal {A}, \mathbb {P})\). Let \((r_n)_{n\in \mathbb {N}}\) be a sequence in \([0, 1)^d\), and define a random sequence \(\Lambda =(\lambda _n)_{n\in \mathbb {N}}\) in \(\mathbb {D}^d\) as

$$\begin{aligned} \lambda _n(\omega )=\left( r^1_ne^{i\theta ^1_n(\omega )}, \dots , r^d_ne^{i\theta ^d_n(\omega )}\right) , \quad \omega \in \Omega . \end{aligned}$$
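
For illustration only, here is a minimal sketch (with sample radii of our own choosing) of how such a random sequence in \(\mathbb {D}^2\) can be simulated, together with the Gleason distance \(\rho _G\) defined above:

```python
import cmath, math, random

random.seed(0)  # reproducibility of the illustration

def rho(z, w):
    """Pseudo-hyperbolic distance in the unit disc."""
    return abs((z - w) / (1 - z.conjugate() * w))

def rho_G(z, w):
    """Gleason distance on the polydisc: max of the coordinatewise rho."""
    return max(rho(zi, wi) for zi, wi in zip(z, w))

# Deterministic radii (sample choice), independent uniform random arguments.
radii = [(1 - 2.0 ** (-n), 1 - 2.0 ** (-n)) for n in range(1, 10)]
Lambda = [tuple(r * cmath.exp(1j * random.uniform(0.0, 2 * math.pi)) for r in rn)
          for rn in radii]
print(min(rho_G(z, w) for i, z in enumerate(Lambda)
                      for j, w in enumerate(Lambda) if i != j))
```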

The events of interest are going to be

$$\begin{aligned} \begin{aligned} \mathcal {W}(\mathbb {D}^d)&:=\{\Lambda \text { is weakly separated in }\mathbb {D}^d\}\\ \mathcal {U}(\mathbb {D}^d)&:=\{\Lambda \text { is uniformly separated in }\mathbb {D}^d\}\\ \mathcal {C}(\mathrm {H}^2(\mathbb {D}^d))&:=\{\mu _\Lambda \text { is a Carleson measure for }\mathrm {H}^2(\mathbb {D}^d)\}\\ \mathcal {I}(\mathbb {D}^d)&:=\{\Lambda \text { is an interpolating sequence for }\mathrm {H}^\infty (\mathbb {D}^d)\}. \end{aligned} \end{aligned}$$

Our first aim is to give necessary conditions and sufficient conditions for \(\Lambda \) to be interpolating for \(\mathrm {H}^\infty (\mathbb {D}^d)\) almost surely. This will be achieved by studying separately the probability of the events \(\mathcal {W}(\mathbb {D}^d)\), \(\mathcal {U}(\mathbb {D}^d)\) and \(\mathcal {C}(\mathrm {H}^2(\mathbb {D}^ d))\), and by applying Theorem 1.2. In search of separation conditions on \((r_n)_{n\in \mathbb {N}}\) that yield almost sure separation properties for \(\Lambda \), we extend (1.1) and (1.2) to the d-dimensional case by considering

$$\begin{aligned} I_m:=\{z\in \mathbb {D}^d: 1-2^{-m_i}\le |z^i|<1-2^{-(m_i+1)},\ i=1,\dots , d\} \end{aligned}$$
(1.6)

and

$$\begin{aligned} N_m= \# \Lambda \cap I_m, \end{aligned}$$

for any multi-index \(m=(m_1,\dots ,m_d)\) in \(\mathbb {N}^d\). Throughout this note, \(|m|=m_1+\dots +m_d\) will denote the length of m.
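
The counts \(N_m\) are straightforward to tabulate. In the following illustrative sketch (radii of our own choosing, \(d=2\)), we bin a sequence of radii into the boxes (1.6) via \(m_i=\lfloor -\log _2(1-r^i_n)\rfloor \) and evaluate the sums appearing later in (1.7) and (1.8):

```python
import math
from collections import Counter

def box_index(r):
    """Multi-index m of the box (1.6) containing a point with radii r,
    i.e. m_i = floor(-log2(1 - r_i)) for radii in [0, 1)."""
    return tuple(int(math.floor(-math.log2(1.0 - ri))) for ri in r)

# Sample radii in [0,1)^2: n points with both radii equal to 1 - 2^{-n},
# so that N_(n,n) = n and all other counts vanish.
radii = [(1 - 2.0 ** (-n),) * 2 for n in range(1, 20) for _ in range(n)]
N = Counter(box_index(r) for r in radii)

d = 2
sum_17 = sum(Nm ** 2 * 2.0 ** (-sum(m)) for m, Nm in N.items())                # (1.7)
sum_18 = sum(Nm ** (1 + 1 / d) * 2.0 ** (-sum(m) / d) for m, Nm in N.items())  # (1.8)
print(sum_17, sum_18)
```

For this profile both sums are finite, so such a random sequence falls in the almost surely interpolating regime described below.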

The first main result partially extends Cochran’s and Rudowicz’s works to the polydisc:

Theorem 1.3

Let \(\Lambda \) be a random sequence in \(\mathbb {D}^d\). Then

  1. (i)

    If

    $$\begin{aligned} \sum _{m\in \mathbb {N}^d}N_m^22^{-|m|}<\infty \end{aligned}$$
    (1.7)

    then \(\mathbb {P}(\mathcal {W}(\mathbb {D}^d))=1\). If the sum in (1.7) diverges, then \(\mathbb {P}(\mathcal {W}(\mathbb {D}^d))=0\).

  2. (ii)

    If

    $$\begin{aligned} \sum _{m\in \mathbb {N}^d}N_m^{{1+1/d}}2^{-|m|/d}<\infty \end{aligned}$$
    (1.8)

    then \(\mathbb {P}(\mathcal {U}(\mathbb {D}^d))=1\).

  3. (iii)

    If (1.7) holds, then \(\mathbb {P}(\mathcal {C}(\mathrm {H}^2(\mathbb {D}^d)))=1\).

Observe that the case \(d=1\) yields Rudowicz’s and Cochran’s characterization of random interpolating sequences on the unit disc. In general, part (i) of the above theorem gives the 0-1 Kolmogorov law for a sequence to be weakly separated. In parts (ii) and (iii), the result gives a sufficient condition for a sequence to be almost surely uniformly separated and to generate a Carleson measure for the Hardy space of the polydisc. In particular, thanks to Theorem 1.2, the cut-off for the 0-1 Kolmogorov law for almost surely interpolating sequences for \(\mathrm {H}^\infty (\mathbb {D}^d)\) lies somewhere in between (1.8) and (1.7):

Corollary 1.4

Let \(\Lambda \) be a random sequence on \(\mathbb {D}^d\). Then

  1. (i)

    If (1.8) holds, then \(\mathbb {P}(\mathcal {I}(\mathbb {D}^d))=1\);

  2. (ii)

    If the sum in (1.7) diverges, then \(\mathbb {P}(\mathcal {I}(\mathbb {D}^d))=0\).

Proposition 3.3 will give an example of a class of random sequences for which the 0-1 Kolmogorov law for almost surely \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating sequences is given precisely by the convergence of the sum in (1.7). Whether this is the case for a general choice of the radii \((r_n)_{n\in \mathbb {N}}\) remains, for us, open. Nevertheless, we will observe in Sect. 3.4 how (1.7) implies that the Szegö Grammian of a random sequence in the polydisc differs from the identity only by a Hilbert–Schmidt operator, a rather strong separation condition for the random kernel functions in the Hardy space associated to \(\Lambda \). In particular, this will give the 0-1 law for a random sequence \(\Lambda \) to be interpolating for \(\mathrm {H}^2(\mathbb {D}^d)\). In the deterministic setting, a sequence \((z_n)_{n\in \mathbb {N}}\) in \(\mathbb {D}^d\) is interpolating for \(\mathrm {H}^2(\mathbb {D}^d)\) if the map

$$\begin{aligned} f\in \mathrm {H}^2(\mathbb {D}^d)\mapsto \left( \prod _{i=1}^d\sqrt{1-|z_n^i|^2}f(z_n)\right) _{n\in \mathbb {N}}\in l^2 \end{aligned}$$

is surjective and bounded. This, in particular, is equivalent to asking that the Szegö Grammian associated to \((z_n)_{n\in \mathbb {N}}\) be bounded above and below. Given a random sequence \(\Lambda \) in \(\mathbb {D}^d\), let

$$\begin{aligned} \tilde{\mathcal {I}}(\mathbb {D}^d):=\{\Lambda \,\text {is interpolating for}\,\mathrm {H}^2(\mathbb {D}^d)\}. \end{aligned}$$

Any \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating sequence in \(\mathbb {D}^d\) is also \(\mathrm {H}^2(\mathbb {D}^d)\)-interpolating, while the converse does not hold, since \(\mathrm {H}^2(\mathbb {D}^d)\) does not have the Pick property (for an example of a sequence which is \(\mathrm {H}^2(\mathbb {D}^2)\)-interpolating but not \(\mathrm {H}^\infty (\mathbb {D}^2)\)-interpolating, see [4]). Therefore, \(\mathcal {I}(\mathbb {D}^d)\subseteq \tilde{\mathcal {I}}(\mathbb {D}^d)\). We show that \(\tilde{\mathcal {I}}(\mathbb {D}^d)\) obeys the same 0-1 law as \(\mathcal {W}(\mathbb {D}^d)\):

Theorem 1.5

Let \(\Lambda \) be a random sequence in \(\mathbb {D}^d\). Then

$$\begin{aligned} \mathbb {P}(\tilde{\mathcal {I}}(\mathbb {D}^d))={\left\{ \begin{array}{ll} 0&{}\quad \text {if}\quad \sum _{m\in \mathbb {N}^d}N_m^22^{-|m|}=\infty \\ 1&{}\quad \text {if}\quad \sum _{m\in \mathbb {N}^d}N_m^22^{-|m|}<\infty \end{array}\right. }. \end{aligned}$$

Related questions about interpolation for function spaces on the unit ball in \(\mathbb {C}^d\) are also considered. The authors of [11] studied interpolating sequences in the Dirichlet spaces over the unit disc, and this serves as part of the motivation for the results in the ball. Section 4 will generalize some theorems of [11] to the unit ball. Since the natural generalization of the Dirichlet space is the Besov–Sobolev space, we study random interpolating sequences in the Besov–Sobolev spaces \(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) \), where \(0<\sigma <\infty \). In [6], a characterization of interpolating sequences in the Besov–Sobolev spaces was given in the case \(0<\sigma \le 1/2\). Because a characterization exists only in this range, that is the case we focus on in this paper.

Let \(\mathbb {B}_{d}\) be the unit ball in \(\mathbb {C}^{d}\). Let dz be Lebesgue measure on \(\mathbb {C}^{d}\) and let

$$\begin{aligned} d \lambda _{d}(z)=\left( 1-|z|^{2}\right) ^{-d-1} d z \end{aligned}$$

be the invariant measure on the ball. For an integer \(m \ge 0\), and for \(0< \sigma <\infty \), \(1<p<\infty \), \(m+\sigma >d / p\) define the analytic Besov–Sobolev spaces \(B_{p}^{\sigma }\left( \mathbb {B}_{d}\right) \) to consist of those holomorphic functions f on the ball such that

$$\begin{aligned} \Vert f\Vert _{B_{p}^{\sigma }\left( \mathbb {B}_{d}\right) }=\left\{ \sum _{k=0}^{m-1}\left| f^{(k)}(0)\right| ^{p} +\int _{\mathbb {B}_{d}}\left| \left( 1-|z|^{2}\right) ^{m+\sigma } f^{(m)}(z)\right| ^{p} d \lambda _{d}(z)\right\} ^{1/p}<\infty . \end{aligned}$$

Here \(f^{(m)}\) is the \(m\)th order complex derivative of f. The spaces \(B_{p}^{\sigma }\left( \mathbb {B}_{d}\right) \) are independent of m and are Banach spaces. A Carleson measure for \(B_{p}^{\sigma }\left( \mathbb {B}_{d}\right) \) is a positive measure on \(\mathbb {B}_{d}\) such that the following Carleson embedding holds for all \(f \in B_{p}^{\sigma }\left( \mathbb {B}_{d}\right) \):

$$\begin{aligned} \int _{\mathbb {B}_{d}}|f(z)|^{p} d \mu \le C_{\mu }\Vert f\Vert _{B_{p}^{\sigma }\left( \mathbb {B}_{d}\right) }^{p}. \end{aligned}$$

Given \(\sigma \) with \(0 < \sigma \le 1 / 2\) and a discrete set \(Z=\left\{ z_{i}\right\} _{i=1}^{\infty } \subset \mathbb {B}_{d}\) define the associated measure

$$\begin{aligned} \mu _{Z}=\sum _{j=1}^{\infty }\left( 1-\left| z_{j}\right| ^{2}\right) ^{2 \sigma } \delta _{z_{j}}. \end{aligned}$$

Z is an interpolating sequence for \(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) \) if the restriction map R defined by \(R f\left( z_{i}\right) =f\left( z_{i}\right) \) for \(z_{i} \in Z\) maps \(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) \) into and onto \(\ell ^{2}\left( Z, \mu _{Z}\right) \).

Theorem 1.6

Let \(\sigma \) be given with \(0 < \sigma \le 1/2\), and let

$$\begin{aligned} \mu _{Z}=\sum _{j=1}^{\infty }\left( 1-\left| z_{j}\right| ^{2}\right) ^{2 \sigma } \delta _{z_{j}}. \end{aligned}$$

Then Z is an interpolating sequence for \(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) \) if and only if Z satisfies the weak separation condition \(\inf _{i \ne j} \beta \left( z_{i}, z_{j}\right) >0\) and \(\mu _{Z}\) is a \(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) \) Carleson measure.

Proof

When \(0< \sigma <1 / 2\), this theorem is given by [6,  Thm.  3]. When \(\sigma =1/2\), since \(B_{2}^{1/2}\left( \mathbb {B}_{d}\right) \) has the complete Pick property, we obtain the theorem from [3,  Thm. 1.1]. \(\square \)

In contrast to the polydisc case, then, interpolating sequences for the Besov–Sobolev spaces are well understood in the deterministic setting: they are characterized by weak separation and a Carleson measure condition. Therefore, in order to find the 0-1 Kolmogorov law for interpolating sequences for \(B_2^\sigma \left( \mathbb {B}_{d}\right) \), it suffices to find the cut-off conditions on the deterministic radii for the associated sequence with randomly chosen arguments to be weakly separated and to generate a Carleson measure almost surely. This is the intent of the second part of our work. Random sequences in the unit ball are constructed as follows. Let \(\Lambda (\omega )=\left\{ \lambda _{j}\right\} \) with \(\lambda _{j}=\rho _{j} \xi _j(\omega )\), where \((\xi _{j}(\omega ))\) is a sequence of independent random variables, all uniformly distributed on the unit sphere, and \((\rho _{j})\subset [0,1)\) is a sequence of a priori fixed radii. An interesting feature arises for random interpolating sequences in the Besov–Sobolev spaces on the unit ball. As we will see, for \(d\ge 2\) a random sequence \(\{\lambda _n\}\) is an interpolating sequence almost surely if and only if \(\mu _\Lambda =\sum _{n}(1-|\lambda _n|^2)^{2\sigma }\delta _{\lambda _n}\) is a Carleson measure for \(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) \) almost surely. Moreover, the characterization of almost surely interpolating sequences is strictly stronger than the characterization of almost surely weakly separated sequences.
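
A standard way to sample \(\xi _j\) uniformly on the unit sphere of \(\mathbb {C}^d\) is to normalize a vector of independent complex Gaussians. A minimal sketch of the construction above (with sample radii of our own choosing):

```python
import math, random

random.seed(1)  # reproducibility of the illustration

def uniform_sphere_point(d):
    """A point uniformly distributed on the unit sphere of C^d, obtained by
    normalizing a vector of independent standard complex Gaussians."""
    v = [complex(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(d)]
    norm = math.sqrt(sum(abs(z) ** 2 for z in v))
    return [z / norm for z in v]

# lambda_j = rho_j * xi_j with deterministic radii rho_j and random xi_j.
rhos = [1 - 2.0 ** (-j) for j in range(1, 8)]
Lam = [[r * z for z in uniform_sphere_point(2)] for r in rhos]
print(Lam[0])
```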

For any \(m\in \mathbb {N}\), let

$$\begin{aligned} N_m= \# \left\{ \lambda _j\in \Lambda (\omega ): m\frac{\ln 2}{2} \le \beta (0,\lambda _j)<(m+1)\frac{\ln 2}{2} \right\} , \end{aligned}$$

where \(\beta \) is the Bergman metric on the unit ball \(\mathbb {B}_{d}\) in \(\mathbb {C}^{d}\). Let

$$\begin{aligned} \mathcal {I}(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) ):=\{\omega :\Lambda (\omega ) \text { is an interpolating sequence for }B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) \}. \end{aligned}$$
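
Since \(\beta (0,z)=\operatorname {artanh}|z|=\frac{1}{2}\log \frac{1+|z|}{1-|z|}\), the shell counts \(N_m\) above are easy to tabulate. An illustrative sketch (radii of our own choosing), evaluating the sum \(\sum _m 2^{-2\sigma m}N_m\) that will govern Theorem 1.7 below:

```python
import math
from collections import Counter

def shell_index(r):
    """m with m*ln(2)/2 <= beta(0, z) < (m+1)*ln(2)/2, where |z| = r and
    beta(0, z) = artanh(r) is the Bergman distance from the origin."""
    return int(math.atanh(r) // (math.log(2.0) / 2.0))

# Sample radii: j points at radius 1 - 2^{-j}, for which the shell counts
# come out as N_m = m, m = 1, ..., 24.
rhos = [1 - 2.0 ** (-j) for j in range(1, 25) for _ in range(j)]
N = Counter(shell_index(r) for r in rhos)

sigma = 0.5
s = sum(Nm * 2.0 ** (-2 * sigma * m) for m, Nm in N.items())
print(s)
```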

The following result gives a 0-1 Kolmogorov law for interpolating sequences on the unit ball. We only consider the case \(0 < \sigma \le 1/2\) and \(d\ge 2\). When \(\sigma =d/2\), it is well known that \(B_{2}^{d/2}\left( \mathbb {B}_{d}\right) \) is the Hardy space. By [14, Thm. 3.3], we know that

$$\begin{aligned} \mathbb {P}\{\mathcal {I}(B_{2}^{d/2}\left( \mathbb {B}_{d}\right) )\}=1 \text { if and only if } \sum _{m=0}^{\infty }2^{-m}N^2_m<\infty . \end{aligned}$$

When \(d=1\) and \(0<\sigma \le 1/4\), by (i) in [11,  Thm. 1.5] we know that

$$\begin{aligned} \mathbb {P}\{\mathcal {I}(B_{2}^{\sigma }\left( \mathbb {D}\right) )\}=1 \text { if and only if } \sum _{m=0}^{\infty }2^{-2\sigma m}N_m<\infty . \end{aligned}$$

When \(d=1\) and \(1/4<\sigma <1/2\), by (ii) in [11,  Thm. 1.5] we know that

$$\begin{aligned} \mathbb {P}\{\mathcal {I}(B_{2}^{\sigma }\left( \mathbb {D}\right) )\}=1 \text { if and only if } \sum _{m=0}^{\infty }2^{-m}N^2_m<\infty . \end{aligned}$$

In our case, we have the following:

Theorem 1.7

Let \(0 < \sigma \le 1/2\) and \(d\ge 2\). Then

  1. (i)

    If

    $$\begin{aligned} \sum _{m=0}^{\infty }2^{-2\sigma m}N_m<\infty , \end{aligned}$$

    then \(\mathbb {P}\{\mathcal {I}(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) )\}=1\);

  2. (ii)

    If

    $$\begin{aligned} \sum _{m=0}^{\infty }2^{-2\sigma m}N_m=\infty , \end{aligned}$$

    then \(\mathbb {P}\{\mathcal {I}(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) )\}=0\).

Section 2 will construct the necessary technical tools for the proof of our main results. Section 3 provides the proofs of Theorems 1.3 and 1.5, and characterizes random interpolating sequences for \(\mathrm {H}^\infty (\mathbb {D}^d)\) for some specific choices of the radii \((r_n)_{n\in \mathbb {N}}\). Finally, Sect. 4 proves Theorem 1.7 and studies uniform separation on the unit ball.

We would like to thank Nikolaos Chalmoukis for some useful comments that led to the final version of Theorem 1.3. We would also like to thank the referees for their valuable suggestions.

2 Preliminary Results

This section contains relatively general results that are going to be used throughout the proof of Theorem 1.3. Deterministic and probabilistic tools will be separately analyzed.

2.1 Deterministic Tools

Double sums are used extensively throughout this work. In particular, we will use the fact that, for a certain class of double sums involving exponential decay, the terms on the diagonal contain all the information necessary to bound the whole sum:

Lemma 2.1

Let \(s\ge 1\), and let \((A_m)_{m\in \mathbb {N}}\) and \((B_k)_{k\in \mathbb {N}}\) be two sequences of positive numbers. Then there exists some constant \(C=C_s>0\) such that

$$\begin{aligned} \sum _{m, k\in \mathbb {N}}\frac{A_mB^{1/s}_k}{(2^m+2^k)^{1/s}}\le C_s\left( \max \left\{ \sum _{m\in \mathbb {N}}A_m^{1+1/s}2^{-m/s}, \sum _{k\in \mathbb {N}}B_k^{1+1/s}2^{-k/s}\right\} +\sum _{m\in \mathbb {N}}A_mB_m^{1/s}2^{-m/s}\right) . \end{aligned}$$

Proof

First observe that

$$\begin{aligned} \sum _{m, k\in \mathbb {N}}\frac{A_mB^{1/s}_k}{(2^m+2^k)^{1/s}}\lesssim \sum _{k> m}\frac{A_mB^{1/s}_k}{(2^m+2^k)^{1/s}}+\sum _{ k<m}\frac{A_mB^{1/s}_k}{(2^m+2^k)^{1/s}}+\sum _{m\in \mathbb {N}}A_mB_m^{1/s}2^{-m/s}. \end{aligned}$$

Let’s first estimate the sum for \(k>m\):

$$\begin{aligned} \begin{aligned} \sum _{k> m}\frac{A_mB^{1/s}_k}{(2^m+2^k)^{1/s}}&\le C_s\sum _{m=1}^\infty A_m2^{-m/s}\sum _{k=1}^\infty B_{m+k}^{1/s}2^{-k/s}\\&=C_s\sum _{k=1}^\infty 2^{-k/(s+1)}\sum _{m=1}^\infty A_m2^{-m/(s+1)}B_{m+k}^{1/s}2^{-(m+k)/(s(s+1))}\\&\le C_s \sum _{k=1}^\infty 2^{-k/(s+1)}\left( \sum _{m=1}^\infty A_m^{1+1/s}2^{-m/s}\right) ^{s/(s+1)}\\&\quad \times \left( \sum _{m=1}^\infty B_{m+k}^{1+1/s}2^{-(m+k)/s}\right) ^{1/(s+1)}\\&\le C_s~\max \left\{ \sum _{m\in \mathbb {N}}A_m^{1+1/s}2^{-m/s}, \sum _{k\in \mathbb {N}}B_k^{1+\frac{1}{s}}2^{-k/s}\right\} , \end{aligned} \end{aligned}$$

thanks to Hölder’s inequality with conjugate exponents \(1+1/s\) and \(s+1\). The sum for \(m>k\) is estimated analogously. This concludes the proof. \(\square \)

Our takeaway from Lemma 2.1 is the following

Corollary 2.2

Let \(s\ge 1\), \(d\ge 1\) and \((N_m)_{m\in \mathbb {N}^d}\) be a sequence of positive numbers so that

$$\begin{aligned} \sum _{m\in \mathbb {N}^d}N_m^{1+1/s}2^{-|m|/s}<\infty . \end{aligned}$$
(2.1)

Then

$$\begin{aligned} \sum _{m\in \mathbb {N}^d}N_m\sum _{k\in \mathbb {N}^d}N_k^{1/s}\left( \prod _{i=1}^d\frac{1}{2^{m_i}+2^{k_i}}\right) ^{1/s}<\infty . \end{aligned}$$
(2.2)

Proof

The proof is by induction on d:

\(d=1\):

apply Lemma 2.1 to \(A_m=B_m=N_m\);

\(d\ge 2\):

suppose that (2.2) holds for \(d-1\), and let \((N_m)_{m\in \mathbb {N}^d}\) be a sequence of positive numbers. Then, by applying Lemma 2.1,

$$\begin{aligned}&\sum _{m\in \mathbb {N}^d}N_m\sum _{k\in \mathbb {N}^d}N_k^{1/s}\left( \prod _{i=1}^d\frac{1}{2^{m_i}+2^{k_i}}\right) ^{1/s}\\&\quad =\sum _{\tilde{m}, \tilde{k} \in \mathbb {N}^{d-1}}\prod _{i=1}^{d-1}\left( \frac{1}{2^{\tilde{k}_i}+2^{\tilde{m}_i}}\right) ^{1/s}\sum _{m_1, k_1\in \mathbb {N}}\frac{N_{(m_1, \tilde{m})}N_{(k_1, \tilde{k})}^{1/s}}{(2^{m_1}+2^{k_1})^{1/s}}\\&\quad \lesssim \sum _{\tilde{m}, \tilde{k} \in \mathbb {N}^{d-1}}\prod _{i=1}^{d-1}\left( \frac{1}{2^{\tilde{k}_i}+2^{\tilde{m}_i}}\right) ^{1/s}\\&\qquad \times \max \left\{ \sum _{m_1\in \mathbb {N}}N_{(m_1, \tilde{m})}^{1+1/s}2^{-m_1/s}\,,\,\sum _{m_1\in \mathbb {N}}N_{(m_1, \tilde{k})}^{1+1/s}2^{-m_1/s}\right\} \\&\qquad +\sum _{\tilde{m}, \tilde{k}\in \mathbb {N}^{d-1}}\prod _{i=1}^{d-1}\left( \frac{1}{2^{\tilde{m}_i}+2^{\tilde{k}_i}}\right) ^{1/s}\sum _{m_1\in \mathbb {N}}N_{(m_1, \tilde{m})}N_{(m_1, \tilde{k})}^{1/s}2^{-m_1/s}\\&\quad \le \sum _{\tilde{m}, \tilde{k} \in \mathbb {N}^{d-1}}\prod _{i=1}^{d-1}\left( \frac{1}{2^{\tilde{k}_i}+2^{\tilde{m}_i}}\right) ^{1/s}\sum _{m_1\in \mathbb {N}}N_{(m_1, \tilde{m})}^{1+1/s}2^{-m_1/s}\\&\qquad +\sum _{\tilde{m}, \tilde{k} \in \mathbb {N}^{d-1}}\prod _{i=1}^{d-1}\left( \frac{1}{2^{\tilde{k}_i}+2^{\tilde{m}_i}}\right) ^{1/s}\sum _{m_1\in \mathbb {N}}N_{(m_1, \tilde{k})}^{1+1/s}2^{-m_1/s}\\&\qquad +\sum _{\tilde{m}, \tilde{k}\in \mathbb {N}^{d-1}}\prod _{i=1}^{d-1}\left( \frac{1}{2^{\tilde{m}_i}+2^{\tilde{k}_i}}\right) ^{1/s}\sum _{m_1\in \mathbb {N}}N_{(m_1, \tilde{m})}N_{(m_1, \tilde{k})}^{1/s}2^{-m_1/s}\\&\quad =: I_1+I_2+I_3, \end{aligned}$$

where the index m in \(\mathbb {N}^d\) is written as \((m_1, \tilde{m})\), with \(m_1\) in \(\mathbb {N}\) and \(\tilde{m}\) in \(\mathbb {N}^{d-1}\). Observe that, thanks to (2.1), \(I_1\) and \(I_2\) converge. As for \(I_3\), we can change the order of summation and apply the case \(d-1\), which yields

$$\begin{aligned} \begin{aligned}&\sum _{m\in \mathbb {N}^d}N_m\sum _{k\in \mathbb {N}^d}N_k^{1/s}\left( \prod _{i=1}^d\frac{1}{2^{m_i}+2^{k_i}}\right) ^{1/s}\\&\quad \lesssim \sum _{m_1\in \mathbb {N}}2^{-m_1/s}\sum _{\tilde{m}, \tilde{k}\in \mathbb {N}^{d-1}}N_{(m_1, \tilde{m})}N_{(m_1, \tilde{k})}^{1/s}\prod _{i=1}^{d-1}\left( \frac{1}{2^{\tilde{m}_i}+2^{\tilde{k}_i}}\right) ^{1/s}\\&\quad \lesssim \sum _{m_1\in \mathbb {N}}2^{-m_1/s}\sum _{\tilde{m}\in \mathbb {N}^{d-1}}N_{(m_1, \tilde{m})}^{1+1/s}2^{-|\tilde{m}|/s}\\&\quad = \sum _{m\in \mathbb {N}^d}N_m^{1+1/s}2^{-|m|/s}<\infty . \end{aligned} \end{aligned}$$

\(\square \)
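
As a quick sanity check of the statement (illustrative only), take \(d=1\), \(s=1\) and the constant profile \(N_m\equiv 1\), for which (2.1) clearly holds; the truncations of the double sum (2.2) then stabilize, and are rigorously bounded by \(\sum _{m,k}2^{-\max (m,k)}=6\):

```python
# Truncations of the double sum (2.2) with d = 1, s = 1 and N_m = 1 for all m;
# each term satisfies 1/(2^m + 2^k) <= 2^{-max(m,k)}, so the full sum is at
# most sum_{m,k} 2^{-max(m,k)} = 6, and the truncations stabilize quickly.
def S(M):
    return sum(1.0 / (2 ** m + 2 ** k) for m in range(M) for k in range(M))

a, b = S(30), S(40)
print(a, b)
```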

2.2 Random Tools

Fairly elementary facts from probability theory are exploited in the proofs. All the events and the random variables that are considered will be defined on the same probability space \((\Omega , \mathcal {A}, \mathbb {P})\). For a comprehensive treatment of the probabilistic results used, see [8].

The first tool is the Borel–Cantelli Lemma. Recall that, given a sequence \((A_n)_{n\in \mathbb {N}}\) of events in \(\mathcal {A}\), then

$$\begin{aligned} \limsup _{n\in \mathbb {N}}A_n:=\bigcap _{k\in \mathbb {N}}\bigcup _{n\ge k }A_n \end{aligned}$$

denotes the event made of those \(\omega \) in \(\Omega \) that belong to infinitely many of the events in \((A_n)_{n\in \mathbb {N}}\).

Theorem 2.3

(Borel–Cantelli Lemma) Let \((A_n)_{n\in \mathbb {N}}\) be a sequence of events in \(\mathcal {A}\). Then

  1. (i)

    If \({\sum _{n\in \mathbb {N}}}\mathbb {P}(A_n)<\infty \), then \(\mathbb {P}\left( \limsup _{n\in \mathbb {N}}A_n\right) =0\);

  2. (ii)

    If \({\sum _{n\in \mathbb {N}}}\mathbb {P}(A_n)=\infty \) and the events in \((A_n)_{n\in \mathbb {N}}\) are independent, then \(\mathbb {P}\left( \limsup _{n\in \mathbb {N}}A_n\right) =1\).

Given a random variable X on \(\Omega \), its mean value (or expectation) will be denoted by

$$\begin{aligned} \mathbb {E}(X):=\int _{\Omega }X\,d\mathbb {P}. \end{aligned}$$

In particular, if \(\mathbb {E}(X)<\infty \), then \(\mathbb {P}\{X=\infty \}=0\).

Another classic tool from probability that will be used is Jensen’s Inequality:

Theorem 2.4

(Jensen’s Inequality) Let X be a real-valued random variable on \(\Omega \), and let \(\phi :\mathbb {R}\rightarrow \mathbb {R}\) be a convex function. Then

$$\begin{aligned} \mathbb {E}(\phi (X))\ge \phi (\mathbb {E}(X)). \end{aligned}$$

In particular, since

$$\begin{aligned} t\in (0, \infty )\mapsto t^{1/s} \end{aligned}$$

is concave, for any \(s\ge 1\), this gives

$$\begin{aligned} \mathbb {E}\left( X^{1/s}\right) \le \mathbb {E}(X)^{1/s}, \end{aligned}$$
(2.3)

for any positive random variable X on \(\Omega \), by applying Jensen’s inequality to \(\phi (t)=-t^{1/s}\).
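
For instance (a toy numerical check of (2.3)): if X takes the values 1 and 4 with probability \(1/2\) each and \(s=2\), then \(\mathbb {E}(X^{1/2})=3/2\) while \(\mathbb {E}(X)^{1/2}=\sqrt{5/2}\approx 1.58\):

```python
# Toy instance of inequality (2.3): a two-point positive random variable.
values, probs = [1.0, 4.0], [0.5, 0.5]

def E(f):
    """Expectation of f(X) for the discrete distribution above."""
    return sum(p * f(v) for v, p in zip(values, probs))

s = 2.0
lhs = E(lambda v: v ** (1 / s))   # E(X^{1/s}) = 1.5
rhs = E(lambda v: v) ** (1 / s)   # E(X)^{1/s} = sqrt(2.5)
print(lhs, rhs)
```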

We can now prove Lemma 2.5, a tool for the proof of Theorem 1.3:

Lemma 2.5

Let \((X^i_{n, j})_{n, j\in \mathbb {N}}\) be a sequence of positive random variables, for any \(i=1,\dots ,d\). Set

$$\begin{aligned} m(n, j):=\min _{i=1,\dots ,d}X^i_{n, j},\quad p(n, j)=\prod _{i=1}^dX^i_{n, j}. \end{aligned}$$

Assume that

$$\begin{aligned} \sum _{j\in \mathbb {N}}\left( \sum _{k\in \mathbb {N}}\mathbb {E}(p(k, j))\right) ^{1/d}<\infty . \end{aligned}$$

Then

$$\begin{aligned} \sup _{n\in \mathbb {N}}\sum _{j\ne n} m(n, j) \end{aligned}$$

is bounded almost surely.

Proof

Since, for any \(n\ne j\) in \(\mathbb {N}\),

$$\begin{aligned} m(n, j)\le p(n, j)^{1/d}\le \left( \sum _{k\ne j}p(k, j)\right) ^{1/d}, \end{aligned}$$

we have

$$\begin{aligned} \sup _{n\in \mathbb {N}}\sum _{j\ne n}m(n, j)\le \sum _{j\in \mathbb {N}}\left( \sum _{k\ne j}p(k, j)\right) ^{1/d}. \end{aligned}$$

Thus

$$\begin{aligned} \mathbb {E}\left( \sup _{n\in \mathbb {N}}\sum _{j\ne n}m(n, j)\right) \le \sum _{j\in \mathbb {N}}\mathbb {E}\left( \left( \sum _{k\in \mathbb {N}}p(k, j)\right) ^{1/d}\right) \le \sum _{j\in \mathbb {N}}\left( \sum _{k\in \mathbb {N}}\mathbb {E}(p(k, j))\right) ^{1/d}. \end{aligned}$$

\(\square \)

3 Random Sequences in the Polydisc

This section is devoted to the proofs of Theorems 1.3 and 1.5. The events \(\mathcal {U}(\mathbb {D}^d)\), \(\mathcal {W}(\mathbb {D}^d)\), \(\mathcal {C}(\mathrm {H}^2(\mathbb {D}^d))\) and \(\tilde{\mathcal {I}}(\mathbb {D}^d)\) will be analyzed separately.

3.1 Weak Separation

For weak separation in the polydisc, it turns out that Cochran’s argument in [12,  Thm. 2] extends to the higher dimensional case:

Proof of Theorem 1.3, (i)

For the sake of readability, we will adapt Cochran’s proof only to the case \(d=2\): the proof will lift appropriately to any \(d>1\). Assume first that \(\sum _{m\in \mathbb {N}^2}N_m^22^{-|m|}=\infty \) and let l be in \(\mathbb {N}\). Define

$$\begin{aligned} A_l:=\bigcup _{r\ne n}\{\rho _G(\lambda _r, \lambda _n)\le 5\cdot 2^{-l} \} \end{aligned}$$

as the set of those \(\omega \) in \(\Omega \) such that there exists a pair of distinct indices n and r so that the Gleason distance between \(\lambda _n(\omega )\) and \(\lambda _r(\omega )\) is controlled by, roughly, \(2^{-l}\). Since \(\mathcal {W}(\mathbb {D}^d)^c\supseteq \bigcap _{l\in \mathbb {N}}A_l\), it suffices to show that \(\mathbb {P}(A_l)=1\) for any l in \(\mathbb {N}\).

For any m in \(\mathbb {N}^2\), partition \(I_{m}\) into \(2^{2l}\) “rectangles” of the form

$$\begin{aligned} \left\{ (z^1,z^2)\in \mathbb {D}^2\,|\, \frac{1}{2^{m_i+1}}+\frac{r_i-1}{2^{m_i+l+1}}\le 1-|z^i|< \frac{1}{2^{m_i+1}}+\frac{r_i}{2^{m_i+l+1}},\ i=1,2\right\} ,\quad r_i=1, \dots , 2^l \end{aligned}$$

and observe that at least one of these rectangles, say \(R_{m}\), must contain at least \(M_m:=N_m/2^{2l}\) points of \(\Lambda \). Let

$$\begin{aligned} B_{m}:=\bigcup _{r\ne n}\left\{ \lambda _r\in R_m, \lambda _n\in R_m, |\theta ^1_r-\theta ^1_n|\le \pi \cdot 2^{-(m_1+l)}, |\theta ^2_n-\theta ^2_r|\le \pi \cdot 2^{-(m_2+l)}\right\} . \end{aligned}$$

Since

$$\begin{aligned} \limsup _{m}B_{m}\subseteq A_l \end{aligned}$$

and the events \(B_{m}\) are independent, by the Borel–Cantelli Lemma, Theorem 2.3, it suffices to show that \(\sum _{m\in \mathbb {N}^2}\mathbb {P}(B_{m})=\infty \).

In order to estimate the probability of each \(B_{m}\) from below, we give an upper bound for \(\mathbb {P}(B_{m}^c)\). If \(\tau \) is in \(\mathbb {T}^2\), let \(S_{m}(\tau )\) be a “rectangle” in \(\mathbb {T}^2\) centered at \(\tau \) with base \(2^{-(m_1+l)}\) and height \(2^{-(m_2+l)}\). If \(\tau _n=(e^{i\theta ^1_n}, e^{i\theta ^2_n})\), then thanks to the independence of \((\tau _n)_{n\in \mathbb {N}}\) we have

$$\begin{aligned} \begin{aligned} \mathbb {P}(B_{m}^c)&\le \mathbb {P}\left( \left\{ \tau _1\in \mathbb {T}^2, \tau _2\in \mathbb {T}^2\setminus S_{m}(\tau _1), \dots , \tau _{M_{m}}\in \mathbb {T}^2\setminus \bigcup _{j=1}^{M_{m}-1}S_{m}(\tau _j)\right\} \right) \\&\le \left( 1-2^{-(|m|+2l)}\right) \left( 1-\frac{3}{2}\cdot 2^{-(|m|+2l)}\right) \cdots \left( 1-\frac{M_{m}}{2}\cdot 2^{-(|m|+2l)}\right) \\&=\prod _{j=2}^{M_{m}}\left( 1-j\cdot 2^{-(|m|+2l+1)}\right) . \end{aligned} \end{aligned}$$

If \(\liminf _{m}\mathbb {P}(B_m^c)<1\), then \(\mathbb {P}(B_{m})\) is bounded away from 0 for infinitely many m, and \(\sum _{m\in \mathbb {N}^2}\mathbb {P}(B_{m})=\infty \) trivially.

On the other hand, if \(\lim _{|m|\rightarrow \infty }\mathbb {P}(B^c_{m})=1\), then

$$\begin{aligned} \begin{aligned} \mathbb {P}(B_{m})&\ge 1-\prod _{j=2}^{M_{m}}\left( 1-j\cdot 2^{-(|m|+2l+1)}\right) \\&\underset{|m|\rightarrow \infty }{\sim }-\log \prod _{j=2}^{M_{m}}\left( 1-j\cdot 2^{-(|m|+2l+1)}\right) \\&=-\sum _{j=2}^{M_{m}}\log \left( 1-j\cdot 2^{-(|m|+2l+1)}\right) \\&\ge \sum _{j=2}^{M_{m}}j\cdot 2^{-(|m|+2l+1)}\\&\underset{|m|\rightarrow \infty }{\sim }\frac{M_m^22^{-|m|}}{2^{2l+2}}\ge \frac{N_{m}^22^{-|m|}}{2^{6l+2}}, \end{aligned} \end{aligned}$$

which is the general term of a divergent series.
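The two elementary facts used above, namely \(1-\prod _j(1-x_j)\sim -\log \prod _j(1-x_j)\) when the product tends to 1 and \(-\log (1-x)\ge x\), can be sanity-checked numerically. A minimal sketch; the concrete values of l, m and \(M_m\) below are hypothetical:

```python
# Hypothetical parameters: l = 2 and m = (6, 6), so q = 2^{-(|m|+2l+1)}.
l, m = 2, (6, 6)
q = 2.0 ** (-(m[0] + m[1] + 2 * l + 1))

# Hypothetical number of points in the distinguished rectangle R_m.
M = 40

# Lower bound for P(B_m): 1 - prod_{j=2}^{M} (1 - j*q).
prod = 1.0
for j in range(2, M + 1):
    prod *= 1.0 - j * q
lhs = 1.0 - prod

# First-order approximation: sum_{j=2}^{M} j*q ~ M^2 * q / 2.
rhs = sum(j * q for j in range(2, M + 1))

# For small q the two quantities agree up to a few percent.
assert abs(lhs - rhs) / rhs < 0.05
```
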

To conclude the proof of Theorem 1.3, part (i), it suffices to show that a random sequence \(\Lambda \) in \(\mathbb {D}^d\) is almost surely weakly separated whenever (1.7) holds. To do so, let

$$\begin{aligned} \Omega _m:=\bigcup _{r\ne n}\left\{ \lambda _r\in I_m, \lambda _n\in I_m, |\theta ^1_r-\theta ^1_n|\le \pi \cdot 2^{-m_1}, |\theta ^2_n-\theta ^2_r|\le \pi \cdot 2^{-m_2}\right\} . \end{aligned}$$

Then

$$\begin{aligned} \mathbb {P}(\Omega _m)\le \left( {\begin{array}{c}N_m\\ 2\end{array}}\right) 2^{-|m|}\le \frac{1}{2}N_{m}^22^{-|m|}, \end{aligned}$$

and the Borel–Cantelli Lemma provides that, almost surely, any pair \((\lambda _n, \lambda _r)\) in all but finitely many “rectangles” \(I_m\) satisfies

$$\begin{aligned} |\theta _n^1-\theta _r^1|>\pi 2^{-m_1}\quad \text {or}\quad |\theta _n^2-\theta _r^2|>\pi 2^{-m_2}. \end{aligned}$$
(3.1)

The same argument applies for the right-shifted “rectangles” \(I'_m\) of the form

$$\begin{aligned} \left\{ 1-\frac{3\cdot 2^{-m_1}}{4}\le |z_1|<1-\frac{3\cdot 2^{-(m_1+1)}}{4}, 1-2^{-m_2}\le |z_2|<1-2^{-(m_2+1)}\right\} \end{aligned}$$

and the up-shifted “rectangles” \(I''_m\) of the form

$$\begin{aligned} \left\{ 1-2^{-m_1}\le |z_1|<1-2^{-(m_1+1)}, 1-\frac{3\cdot 2^{-m_2}}{4}\le |z_2|<1-\frac{3\cdot 2^{-(m_2+1)}}{4}\right\} . \end{aligned}$$

This ensures that all but finitely many pairs \((\lambda _n,\lambda _r)\) in \(\Lambda \) satisfying both

$$\begin{aligned} |\lambda _n^1-\lambda _r^1|\simeq 2^{-m_1} \end{aligned}$$

and

$$\begin{aligned} |\lambda _n^2-\lambda _r^2|\simeq 2^{-m_2} \end{aligned}$$

have property (3.1). Therefore, see [12,  Claim, p. 741], \(\Lambda \) is almost surely weakly separated. \(\square \)

3.2 Uniform Separation

While weak separation behaves essentially in the same way as the dimension d grows, the sufficient condition in (1.8) for almost sure uniform separation picks up a dependence on d. As will be shown, this is due to some estimates on the expected value of quantities related to the (random) Gleason distances between the points in \(\Lambda \).

It will also be explained how (1.8) can be improved for some choices of \((r_n)_{n\in \mathbb {N}}\). As a corollary, we obtain a cutoff condition for some types of random sequences in the polydisc to be almost surely \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating.

Let \(s_d\) be the Szegö kernel on \(\mathbb {D}^d\). Then the Hardy space \(\mathrm {H}^2(\mathbb {D}^d)\) is the reproducing kernel Hilbert space \(\mathcal {H}_{s_d}\). Denote the normalized Szegö kernel by

$$\begin{aligned} S_d(z, w):=\prod _{i=1}^d\frac{\sqrt{(1-|z^i|^2)(1-|w^i|^2)}}{1-z^i\overline{w^i}}, \end{aligned}$$

and observe that, for any z and w in \(\mathbb {D}^d\),

$$\begin{aligned} \rho _G(z, w)^2=1-\min _{i=1,\dots , d}|S_1(z^i, w^i)|^2. \end{aligned}$$
(3.2)
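Identity (3.2) combines \(1-\rho (z,w)^2=|S_1(z,w)|^2\) on the disc with the fact that the Gleason distance on the polydisc is the maximum of the coordinate pseudo-hyperbolic distances. A quick numerical check on the bidisc (the helper names are ours):

```python
import cmath
import random

def pseudo_hyperbolic(z, w):
    # |b_z(w)| on the unit disc
    return abs((z - w) / (1 - z.conjugate() * w))

def normalized_szego(z, w):
    # S_1(z, w) on the unit disc
    num = ((1 - abs(z) ** 2) * (1 - abs(w) ** 2)) ** 0.5
    return num / (1 - z * w.conjugate())

random.seed(0)
for _ in range(100):
    # two random points of the bidisc
    z = [0.9 * random.random() * cmath.exp(1j * random.uniform(0, 2 * cmath.pi)) for _ in range(2)]
    w = [0.9 * random.random() * cmath.exp(1j * random.uniform(0, 2 * cmath.pi)) for _ in range(2)]
    # Gleason distance on the polydisc: maximum of the coordinate distances
    rho_G = max(pseudo_hyperbolic(z[i], w[i]) for i in range(2))
    rhs = 1 - min(abs(normalized_szego(z[i], w[i])) ** 2 for i in range(2))
    assert abs(rho_G ** 2 - rhs) < 1e-12
```
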

Given a random sequence \(\Lambda \) in \(\mathbb {D}^d\) denote, for the sake of readability,

$$\begin{aligned} S^i(n, j):=S_1(\lambda _n^i, \lambda _j^i) \end{aligned}$$

and

$$\begin{aligned} S_d(n, j):=S_d(\lambda _n, \lambda _j). \end{aligned}$$

Thanks to (3.2), uniform separation can be deduced from weak separation together with a uniform bound on sums depending on the random sequences \((S^i(n, j))_{n, j\in \mathbb {N}}\):

$$\begin{aligned} \mathcal {U}(\mathbb {D}^d)=\mathcal {W}(\mathbb {D}^d)\cap \left\{ \sup _{n\in \mathbb {N}}\sum _{j\ne n}\min _{i=1,\dots , d}|S^i(n, j)|^2<\infty \right\} . \end{aligned}$$
(3.3)

Observe that each \((S^i(n, j))_{n, j\in \mathbb {N}}\) is a sequence of random variables on \(\Omega \) which is determined, together with \(\Lambda \), by \((r_{n})_{n\in \mathbb {N}}\). It is not surprising then that the expectation of \(|S^i(n, j)|^2\) depends, for any i, n and j, only on \(r^i_n\) and \(r_j^i\):

Lemma 3.1

Let \(\Lambda \) be a random sequence in \(\mathbb {D}^d\). Then, for any \(n\ne j\) in \(\mathbb {N}\) and for any \(i=1,\dots , d\),

$$\begin{aligned} \mathbb {E}(|S^i(n, j)|^2)=\frac{\bigg (1-(r^i_n)^2\bigg )\bigg (1-(r^i_j)^2\bigg )}{1-\left( r^i_nr^i_j\right) ^2}. \end{aligned}$$

Proof

Observe that

$$\begin{aligned} \begin{aligned} |S^i(n, j)|^2&=\bigg (1-(r^i_n)^2\bigg )\bigg (1-(r^i_j)^2\bigg )\left| \sum _{k=0}^\infty (r^i_nr^i_j)^ke^{-ik(\theta ^i_n-\theta ^i_j)}\right| ^2\\&= \bigg (1-(r^i_n)^2\bigg )\bigg (1-(r^i_j)^2\bigg )\sum _{k=0}^\infty (r^i_nr^i_j)^k\sum _{l=0}^ke^{i(2l-k)(\theta ^i_n-\theta ^i_j)}. \end{aligned} \end{aligned}$$

Therefore, by making use of the independence of \(\theta ^i_n\) and \(\theta ^i_j\),

$$\begin{aligned} \begin{aligned} \mathbb {E}(|S^i(n, j)|^2)&=\bigg (1-(r^i_n)^2\bigg )\bigg (1-(r^i_j)^2\bigg )\sum _{k=0}^\infty (r^i_nr^i_j)^k\\&\quad \times \sum _{l=0}^k\mathbb {E}\left( e^{i(2l-k)\theta ^i_n}\right) \mathbb {E}\left( e^{i(k-2l)\theta ^i_j}\right) \\&=\bigg (1-(r^i_n)^2\bigg )\bigg (1-(r^i_j)^2\bigg )\sum _{k=0}^\infty (r^i_nr^i_j)^{2k}\\&=\frac{\bigg (1-(r^i_n)^2\bigg )\bigg (1-(r^i_j)^2\bigg )}{1-(r^i_nr^i_j)^2}. \end{aligned} \end{aligned}$$

\(\square \)
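Lemma 3.1 lends itself to a direct Monte Carlo check: the sketch below (the function name is ours) compares the sample mean of \(|S_1|^2\) at two independent uniformly rotated points of fixed radii with the closed form:

```python
import cmath
import random

def expected_szego_sq(r1, r2, trials=100_000, seed=1):
    """Monte Carlo estimate of E|S_1(r1*e^{i*t1}, r2*e^{i*t2})|^2,
    with t1, t2 independent and uniform on [0, 2*pi)."""
    rng = random.Random(seed)
    num = (1 - r1 ** 2) * (1 - r2 ** 2)
    total = 0.0
    for _ in range(trials):
        z = r1 * cmath.exp(1j * rng.uniform(0, 2 * cmath.pi))
        w = r2 * cmath.exp(1j * rng.uniform(0, 2 * cmath.pi))
        total += num / abs(1 - z * w.conjugate()) ** 2
    return total / trials

r1, r2 = 0.7, 0.8
closed_form = (1 - r1 ** 2) * (1 - r2 ** 2) / (1 - (r1 * r2) ** 2)
estimate = expected_szego_sq(r1, r2)
# The sample mean matches the closed form of Lemma 3.1 within sampling error.
assert abs(estimate - closed_form) < 0.01
```
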

Remark 3.2

Let m and k be two multi-indices in \(\mathbb {N}^d\), and suppose that \(\lambda _n\) and \(\lambda _j\) belong to \(I_m\) and \(I_k\), respectively. Then, thanks to Lemma 3.1 and (1.6),

$$\begin{aligned} \mathbb {E}(|S^i(n, j)|^2)\simeq \frac{2^{-(m_i+k_i)}}{2^{-m_i}+2^{-k_i}-2^{-(m_i+k_i)}}=\frac{1}{2^{k_i}+2^{m_i}-1}\simeq \frac{1}{2^{k_i}+2^{m_i}}. \end{aligned}$$

In particular, since \(S^i(n, j)\) and \(S^r(n, j)\) are independent for any \(i\ne r\), we have

$$\begin{aligned} \mathbb {E}(|S_d(n, j)|^2)\simeq \prod _{i=1}^d\frac{1}{2^{k_i}+2^{m_i}}. \end{aligned}$$

Part (ii) of Theorem 1.3 can now be proved:

Proof of Theorem 1.3, (ii)

Observe that

$$\begin{aligned} \sum _{m\in \mathbb {N}^d}N_m^22^{-|m|}\le \sum _{m\in \mathbb {N}^d}N_m^{1+1/d}2^{-|m|/d}, \end{aligned}$$

whenever \(N_m\le 2^{|m|}\), and so under our assumption \(\Lambda \) is weakly separated, thanks to Theorem 1.3, part (i). Therefore, thanks to (3.3), it suffices to show that the random sequence \((S_n)_{n\in \mathbb {N}}\) given by

$$\begin{aligned} S_n:=\sum _{j\ne n}\min _{i=1,\dots ,d}|S^i(n, j)|^2 \end{aligned}$$

is bounded almost surely. Thanks to Lemma 2.5, it is enough to show that

$$\begin{aligned} \sum _{j\in \mathbb {N}}\left( \sum _{n\in \mathbb {N}}\mathbb {E}\left( |S_d(n, j)|^2\right) \right) ^{1/d}<\infty . \end{aligned}$$
(3.4)

By regrouping the terms of the double sum in (3.4) with respect to the partition \((I_m)_{m\in \mathbb {N}^d}\) of \(\mathbb {D}^d\) and thanks to Remark 3.2 and (2.3) we get

$$\begin{aligned} \begin{aligned}&\sum _{j\in \mathbb {N}}\left( \sum _{n\in \mathbb {N}}\mathbb {E}\left( |S_d(n, j)|^2\right) \right) ^{1/d}\\&\quad =\sum _{m\in \mathbb {N}^d}\sum _{\lambda _n\in I_m}\left( \sum _{k\in \mathbb {N}^d}\sum _{\lambda _j\in I_k}\mathbb {E}(|S_d(n, j)|^2)\right) ^{1/d}\\&\quad \simeq \sum _{m\in \mathbb {N}^d}N_m\left( \sum _{k\in \mathbb {N}^d}N_k\prod _{i=1}^d\frac{1}{2^{m_i}+2^{k_i}}\right) ^{1/d}\\&\quad \le \sum _{m\in \mathbb {N}^d}N_m\sum _{k\in \mathbb {N}^d}N_k^{1/d}\prod _{i=1}^d\left( \frac{1}{2^{m_i}+2^{k_i}}\right) ^{1/d}. \end{aligned} \end{aligned}$$

Corollary 2.2 with \(s=d\) concludes the proof. \(\square \)

Condition (1.8) is not sharp. Indeed, for some choices of \((r_n)_{n\in \mathbb {N}}\), we can show that the \(0-1\) Kolmogorov law for \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating sequences coincides with the one for weak separation:

Proposition 3.3

Let \(d=2\) and \((t_n)_{n\in \mathbb {N}}\) be a sequence in (0, 1), and consider its Cartesian product with itself

$$\begin{aligned} r_{n}:=(t_{n_1}, t_{n_2}),\quad n=(n_1, n_2)\in \mathbb {N}^2. \end{aligned}$$

Then the random sequence \(\Lambda \) associated with \((r_n)_{n\in \mathbb {N}^2}\) is interpolating for \(\mathrm {H}^\infty (\mathbb {D}^d)\) almost surely if and only if (1.7) holds.

Proof

If \(\sum _{m\in \mathbb {N}^2}N_m^22^{-|m|}=\infty \), then \(\Lambda \) is almost surely not weakly separated, and in particular it is almost surely not interpolating. Thus it suffices to show that \(\Lambda \) is almost surely \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating provided that \(\sum _{m\in \mathbb {N}^2}N_m^22^{-|m|}<\infty \), which, by construction of \((r_n)_{n\in \mathbb {N}^2}\), is equivalent to

$$\begin{aligned} \sum _{n\in \mathbb {N}}T_n^22^{-n}<\infty , \end{aligned}$$

where \(T_n:=\#\{l\in \mathbb {N}\,|\, 1-2^{-n}\le t_l<1-2^{-(n+1)}\}\). By Rudowicz’s Theorem, [16], the random sequence T on \(\mathbb {D}\) given by

$$\begin{aligned} \tau _n:=t_ne^{i\theta _n},\quad n\in \mathbb {N}, \end{aligned}$$

is almost surely interpolating in \(\mathbb {D}\), where \((\theta _n)_{n\in \mathbb {N}}\) is a sequence of i.i.d. random variables defined on a probability space \((\Omega , \mathcal {A}, \mathbb {P})\) and distributed uniformly on the unit circle. In particular, T almost surely admits a sequence of so-called P. Beurling functions; that is, there exists an event \(\Omega '\) with \(\mathbb {P}(\Omega ')=1\) such that, for any \(\omega \) in \(\Omega '\), there exists a sequence of \(\mathrm {H}^\infty (\mathbb {D})\) functions \((F_{\omega , n})_{n\in \mathbb {N}}\) such that

$$\begin{aligned} {\left\{ \begin{array}{ll} F_{\omega , n}(\tau _j(\omega ))=\delta _{n, j}\\ \sup _{z\in \mathbb {D}}\sum _{n\in \mathbb {N}}|F_{\omega , n}(z)|<\infty . \end{array}\right. } \end{aligned}$$

Let us consider now the product probability space \((\tilde{\Omega }, \tilde{\mathcal {A}}, \tilde{\mathbb {P}})\), where \(\tilde{\Omega }:=\Omega \times \Omega \), \(\tilde{\mathcal {A}}\) is the product \(\sigma \)-algebra of \(\mathcal {A}\) with itself, and

$$\begin{aligned} \tilde{\mathbb {P}}(A\times B)=\mathbb {P}(A)\mathbb {P}(B),\quad A, B\in \mathcal {A}. \end{aligned}$$

Then the random variables

$$\begin{aligned} \theta _{n_1, n_2}:\tilde{\Omega }\rightarrow \mathbb {T}^2 \end{aligned}$$

given by

$$\begin{aligned} \theta _{n_1, n_2}(\omega _1, \omega _2):=(\theta _{n_1}(\omega _1), \theta _{n_2}(\omega _2)) \end{aligned}$$

are uniformly distributed in \(\mathbb {T}^2\) and independent. Thus we can think of the random sequence \(\Lambda \) as

$$\begin{aligned} \lambda _{n_1, n_2}(\omega _1, \omega _2):=(t_{n_1}e^{i\theta _{n_1}(\omega _1)}, t_{n_2}e^{i\theta _{n_2}(\omega _2)}),\quad (\omega _1, \omega _2)\in \tilde{\Omega }. \end{aligned}$$

Let \(\Omega '':=\Omega '\times \Omega '\) and define, for any \(n=(n_1, n_2)\) in \(\mathbb {N}^2\) and \(\tilde{\omega }=(\omega _1, \omega _2)\) in \(\Omega ''\) the \(\mathrm {H}^\infty (\mathbb {D}^2)\) function

$$\begin{aligned} G_{\tilde{\omega }, n}(z_1, z_2)=F_{\omega _1, n_1}(z_1)~F_{\omega _2, n_2}(z_2),\quad (z_1, z_2)\in \mathbb {D}^2. \end{aligned}$$

Then \((G_{\tilde{\omega }, n})_{n\in \mathbb {N}^2}\) is a set of Beurling functions for \(\Lambda (\tilde{\omega })\), and in particular \(\Lambda (\tilde{\omega })\) is \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating for any \(\tilde{\omega }\) in \(\Omega ''\). Since \(\tilde{\mathbb {P}}(\Omega '')=\mathbb {P}(\Omega ')^2=1\), \(\Lambda \) is interpolating for \(\mathrm {H}^\infty (\mathbb {D}^d)\) almost surely. \(\square \)

The argument in Proposition 3.3 extends easily to any \(d>1\), showing that, whenever the sequence of radii \((r_n)_{n\in \mathbb {N}}\) is the Cartesian product of d sequences in [0, 1), condition (1.7) characterizes the random sequences that are almost surely interpolating for \(\mathrm {H}^\infty (\mathbb {D}^d)\). For a general choice of \((r_n)_{n\in \mathbb {N}}\) the following question remains open:

Question 1 Is any random sequence \(\Lambda \) in \(\mathbb {D}^d\) satisfying (1.7) uniformly separated? Or else, does there exist a choice of \((r_n)_{n\in \mathbb {N}}\) so that the random sequence \(\Lambda \) obtained is almost surely weakly separated but not uniformly separated?

3.3 Carleson Measures

The same idea that was used for random uniform separation works for the proof of Theorem 1.3, part (iii), modulo some adaptations. Let \(Z=(z_n)_{n\in \mathbb {N}}\) be a sequence in \(\mathbb {D}^d\) and consider the Szegö Grammian

$$\begin{aligned} G:=(S_d(z_n, z_j))_{n, j\in \mathbb {N}} \end{aligned}$$

associated with the sequence Z.

Theorem 3.4

The following are equivalent:

  1. (i)

    \(\mu _Z\) is a Carleson measure for \(\mathrm {H}^2(\mathbb {D}^d)\);

  2. (ii)

    \(G:l^2\rightarrow l^2\) is bounded.

A proof of Theorem 3.4 can be found in [2,  Thm. 9.5]. Moreover, a standard operator theory argument gives that any sufficiently strong decay of the coefficients of G outside its diagonal implies that G is bounded (above and below):

Lemma 3.5

Let \(A=(a_{n, j})_{n, j\in \mathbb {N}}:l^2\rightarrow l^2\) be invertible and self-adjoint. Suppose that \(a_{i,i}=1\) for any i in \(\mathbb {N}\), and that

$$\begin{aligned} \sum _{j\in \mathbb {N}}\sum _{n\ne j}|a_{n, j}|^2=M^2<\infty . \end{aligned}$$
(3.5)

Then A is bounded above and below.

Proof

Such an A can be written as \(A=Id+H\), where H is a Hilbert–Schmidt operator. Let \((y_n)_{n\in \mathbb {N}}\) be the sequence of eigenvalues of A, and let \((x_n)_{n\in \mathbb {N}}\) be the sequence of eigenvalues of H. Since H is a Hilbert–Schmidt operator,

$$\begin{aligned} \sum _{n\in \mathbb {N}}|x_n|^2<\infty , \end{aligned}$$

and since \(A=Id+H\) we have that \(y_n=1+x_n\) for any n. Since A is invertible, none of the \(y_n\) vanishes. Moreover, since A is self-adjoint, it is bounded above by \(\sup _{n\in \mathbb {N}}|y_n|\) and below by \(\inf _{n\in \mathbb {N}}|y_n|\). Since \(x_n\) converges to 0, we have \(\sup _{n\in \mathbb {N}}|y_n|<\infty \) and \(\inf _{n\in \mathbb {N}}|y_n|>0\), hence the result. \(\square \)

Remark 3.6

In the above proof one uses only the fact that \(x_n\) goes to 0, as \(n\rightarrow \infty \). Therefore the same conclusion holds if we assume H to be compact.
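The mechanism behind Lemma 3.5 is easy to visualize in finite dimensions. Under the stronger assumption \(\Vert H\Vert _{HS}<1\) (which already forces invertibility, since the operator norm is dominated by the Hilbert–Schmidt norm), \(A=Id+H\) is bounded above and below with explicit constants \(1\pm \Vert H\Vert _{HS}\). A minimal sketch under that stronger assumption:

```python
import random

random.seed(0)
n = 8

# A real symmetric H with small Frobenius (= Hilbert-Schmidt) norm.
H = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        H[i][j] = H[j][i] = random.uniform(-0.02, 0.02)

hs_norm = sum(H[i][j] ** 2 for i in range(n) for j in range(n)) ** 0.5
assert hs_norm < 1  # operator norm <= HS norm, so A = I + H is invertible

# A = I + H
A = [[(1.0 if i == j else 0.0) + H[i][j] for j in range(n)] for i in range(n)]

def apply(M, x):
    return [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]

def norm(x):
    return sum(t * t for t in x) ** 0.5

# ||Ax|| / ||x|| stays in [1 - ||H||_HS, 1 + ||H||_HS] for every test vector,
# illustrating that A is bounded above and below.
for _ in range(200):
    x = [random.gauss(0, 1) for _ in range(n)]
    q = norm(apply(A, x)) / norm(x)
    assert 1 - hs_norm - 1e-9 <= q <= 1 + hs_norm + 1e-9
```
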

Let \(\Lambda \) be a random sequence in \(\mathbb {D}^d\). Thanks to Lemma 3.5, to show that \(\mathbb {P}(\mathcal {C}(\mathrm {H}^2(\mathbb {D}^d)))=1\) it is enough to show that the random Grammian associated to \(\Lambda \) has a strong decay outside its diagonal almost surely:

Proof of Theorem 1.3,(iii)

It suffices to show that

$$\begin{aligned} \sum _{j\in \mathbb {N}}\sum _{n\ne j}\mathbb {E}(|S_d(n, j)|^2)<\infty . \end{aligned}$$
(3.6)

Indeed, if (3.6) holds, then

$$\begin{aligned} \sum _{j\in \mathbb {N}}\sum _{n\ne j}|S_d(n, j)|^2<\infty \end{aligned}$$

almost surely, and Lemma 3.5 concludes the proof. By Remark 3.2 and by regrouping the sum in (3.6) with respect to the partition \((I_m)_{m\in \mathbb {N}^d}\) of \(\mathbb {D}^d\), one obtains

$$\begin{aligned} \sum _{j\in \mathbb {N}}\sum _{n\ne j}\mathbb {E}(|S_d(n, j)|^2)\le C~\sum _{m, k\in \mathbb {N}^d}N_mN_k\left( \prod _{i=1}^d\frac{1}{2^{m_i}+2^{k_i}}\right) . \end{aligned}$$

Corollary 2.2 with \(s=1\) concludes the proof. \(\square \)

3.4 Almost Orthogonal Random Grammians

Equation (3.5) is a rather strong condition for an infinite matrix A. Indeed, in addition to implying that A is bounded, it says that \(A-Id\) is a Hilbert–Schmidt operator on \(l^2\), i.e., that for any choice of an orthonormal basis \((e_n)_{n\in \mathbb {N}}\) of \(l^2\)

$$\begin{aligned} \sum _{n\in \mathbb {N}}\Vert (A-Id)e_n\Vert ^2<\infty . \end{aligned}$$

If \(A=G\) is a Szegö Grammian associated to a sequence \(Z=(z_n)_{n\in \mathbb {N}}\) in the polydisc, it is natural to ask whether such an almost orthogonality condition on the kernels at the points of Z translates into interpolation properties for the sequence:

Question 2 Let \(d\ge 2\). Is a sequence Z in \(\mathbb {D}^d\) interpolating for \(\mathrm {H}^\infty (\mathbb {D}^d)\), provided that its Szegö Grammian can be written as \(G=Id+H\), where H is a Hilbert–Schmidt operator on \(l^2\)?

The case \(d=1\) of Question 2 has a positive answer. For any sequence Z in the unit disc, let

$$\begin{aligned} \delta _n:=\prod _{j\ne n}\rho (z_n, z_j) \end{aligned}$$

be the product of the pseudo-hyperbolic distances from \(z_n\) to the rest of the sequence. By Carleson's interpolation theorem, Z is interpolating if and only if \(\inf _{n\in \mathbb {N}}\delta _n>0\). On the other hand, [13], \(G-Id\) is a Hilbert–Schmidt operator if and only if

$$\begin{aligned} \sum _{n\in \mathbb {N}}(1-\delta _n)<\infty , \end{aligned}$$

which comfortably implies that Z is interpolating.
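To see the gap between the two conditions quantitatively, consider the geometric sequence \(z_n=1-2^{-n}\), a standard example of a uniformly separated sequence: the constants \(\delta _n\) stay bounded away from 0, while \(\sum _n(1-\delta _n)\) grows linearly, so the Hilbert–Schmidt condition is strictly stronger than uniform separation. A numerical sketch:

```python
def rho(z, w):
    # pseudo-hyperbolic distance between real points of (-1, 1)
    return abs((z - w) / (1 - z * w))

# model geometric sequence z_n = 1 - 2^{-n}, n = 1, ..., 30
z = [1 - 2.0 ** (-n) for n in range(1, 31)]

deltas = []
for n in range(len(z)):
    prod = 1.0
    for j in range(len(z)):
        if j != n:
            prod *= rho(z[n], z[j])
    deltas.append(prod)

# The delta_n stay bounded away from 0 (uniform separation) ...
assert min(deltas) > 0.01
# ... yet sum(1 - delta_n) is already large on 30 points, so the
# Hilbert-Schmidt condition fails while interpolation holds.
assert sum(1 - d for d in deltas) > 10
```
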

Another motivation for answering Question 2 comes from random interpolating sequences for \(\mathrm {H}^\infty (\mathbb {D}^d)\). We proved in Sect. 3.3 that the random Grammian associated to a random sequence \(\Lambda \) in the polydisc differs from the identity by a Hilbert–Schmidt operator, provided that the sum in (1.7) converges. Conversely, if Z is not weakly separated, then infinitely many entries outside the diagonal of its Szegö Grammian are arbitrarily close to 1 in absolute value, hence \(G-Id\) is not Hilbert–Schmidt. Namely,

$$\begin{aligned} \mathbb {P}(G-Id\quad \text {is Hilbert{-}Schmidt})={\left\{ \begin{array}{ll} 1\quad \text {if}\,&{}\quad \sum _{m\in \mathbb {N}^d}N_m^22^{-|m|}<\infty \\ 0\quad \text {if}\,&{}\quad \sum _{m\in \mathbb {N}^d}N_m^22^{-|m|}=\infty . \end{array}\right. } \end{aligned}$$
(3.7)

In particular, a positive answer to Question 2 would imply that the event \(\mathcal {I}(\mathbb {D}^d)\) follows the same \(0-1\) law as (3.7), giving a \(0-1\) law for random \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating sequences.

Moreover, (3.7) helps in understanding interpolating sequences for \(\mathrm {H}^2(\mathbb {D}^d)\), and it implies Theorem 1.5. Indeed, any invertible Szegö Grammian \((S_d(z_n, z_j))_{n, j\in \mathbb {N}}\) that can be written as \(G=Id+H\), where H is Hilbert–Schmidt, is bounded above and below, thanks to Lemma 3.5, which in turn is equivalent to \((z_n)_{n\in \mathbb {N}}\) being interpolating for \(\mathrm {H}^2(\mathbb {D}^d)\). On the other hand, as pointed out above, if Z is not weakly separated then infinitely many pairs of normalized Szegö kernels at the points of Z are at an angle arbitrarily close to 0, and hence G is not bounded below. Thus

$$\begin{aligned} \mathbb {P}(\tilde{\mathcal {I}}(\mathbb {D}^d))={\left\{ \begin{array}{ll} 1\quad \text {if}\,&{}\quad \sum _{m\in \mathbb {N}^d}N_m^22^{-|m|}<\infty \\ 0\quad \text {if}\,&{}\quad \sum _{m\in \mathbb {N}^d}N_m^22^{-|m|}=\infty \end{array}\right. }. \end{aligned}$$

4 Random Separation in the Unit Ball

This section is devoted to the proof of Theorem 1.7. In addition, we will study uniform separation on the unit ball. Compared with the polydisc, we rely more heavily on the spherical geometry of the unit ball than on the Euclidean geometry of the Hardy spaces involved, so the techniques of this section differ from those of the previous ones.

Recall that \(\Lambda (\omega )=\left\{ \lambda _{j}\right\} \) with \(\lambda _{j}=\rho _{j} \xi _j(\omega )\), where \((\xi _{j}(\omega ))_{j\in \mathbb {N}}\) is a sequence of independent random variables, all uniformly distributed on the unit sphere, and \((\rho _{j})_{j\in \mathbb {N}}\) is a sequence of a priori fixed radii in [0,1). Depending on the distribution conditions on \(\{\rho _{j}\}\) discussed below, we study the probability that \(\Lambda (\omega )\) is interpolating for the Besov–Sobolev spaces \(B_{p}^{\sigma }\left( \mathbb {B}_{d}\right) \), where \(0 < \sigma \le 1 / 2\).

The Bergman tree \(\mathcal {T}_{d}\) associated to the ball \(\mathbb {B}_{d}\) with structure constants 1 and \(\ln (2)/2\) is needed in the analysis, so we present some details here. More information can be found in [5,  p. 17]. Let \(\rho \) be the pseudo-hyperbolic distance on the unit ball, so that \(\rho (z,w)=|\varphi _{z}(w)|\), where \(\varphi _{z}\) is the Möbius transformation of \(\mathbb {B}_{d}\) interchanging 0 and z. The Bergman metric on the unit ball \(\mathbb {B}_{d}\) in \(\mathbb {C}^{d}\) is given by

$$\begin{aligned} \beta (z,w)=\frac{1}{2}\log \frac{1+\rho (z,w)}{1-\rho (z,w)}. \end{aligned}$$
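In other words \(\beta =\operatorname {arctanh}\circ \rho \), so the two metrics are increasing functions of one another. A minimal numerical check:

```python
import math
import random

def beta_from_rho(rho):
    # Bergman distance as a function of the pseudo-hyperbolic distance
    return 0.5 * math.log((1 + rho) / (1 - rho))

random.seed(0)
for _ in range(100):
    rho = random.uniform(0.0, 0.99)
    b = beta_from_rho(rho)
    # beta = arctanh(rho), equivalently rho = tanh(beta)
    assert abs(b - math.atanh(rho)) < 1e-12
    assert abs(math.tanh(b) - rho) < 1e-12
```
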

Further, for any \(r>0\), we define

$$\begin{aligned} \mathcal {U}_{r}=\partial B_{\beta }(0, r)=\left\{ z \in \mathbb {B}_{d}: \beta (0, z)=r\right\} . \end{aligned}$$

For any \(N\in \mathbb {N}\), according to [5,  Lem. 2.6] and the fact that \(\mathcal {U}_{r}\) is a compact set, there is a positive integer J, a set of points \(\{z^N_{j}\}_{j=1}^J\) and a set of subsets \(\{Q_j^N\}_{j=1}^J\) of \(\mathcal {U}_{N\ln (2)/2}\) such that

$$\begin{aligned} \mathcal {U}_{N\ln (2)/2}=\bigcup _{j=1}^J Q_j^N, \end{aligned}$$
$$\begin{aligned} Q_i^N \cap Q_j^N=\emptyset \quad \text {when }i\ne j, \end{aligned}$$
$$\begin{aligned} \mathcal {U}_{N\ln (2)/2}\cap B_{\beta }\left( z^N_{j}, 1\right) \subset Q^N_{j} \subset \mathcal {U}_{N\ln (2)/2}\cap B_{\beta }\left( z^N_{j}, 2 \right) . \end{aligned}$$

Let

$$\begin{aligned} K_{j}^{N}=\left\{ z \in \mathbb {B}_{d}: \frac{N\ln 2}{2} \le \beta (0, z)< \frac{(N+1)\ln 2}{2}, P_{N} z \in Q_{j}^{N}\right\} , \end{aligned}$$

where \(P_{N} z\) denotes the radial projection of z onto the sphere \(\mathcal {U}_{N\ln (2)/2}\). Define a tree structure on the collection of sets

$$\begin{aligned} \mathcal {T}_{d}=\left\{ K_{j}^{N}\right\} _{N \ge 0, j \ge 1} \end{aligned}$$

by declaring that \(K_{i}^{N+1}\) is a child of \(K_{j}^{N}\), written \(K_{i}^{N+1} \ge K_{j}^{N}\), if the projection \(P_{N }\left( z_{i}^{N+1}\right) \) of \(z_{i}^{N+1}\) onto the sphere \(\mathcal {U}_{N\ln (2)/2}\) lies in \(Q_{j}^{N} \). For any \(K_{j}^{N}\in \mathcal {T}_{d}\), we define \(d(K_{j}^{N})\) by

$$\begin{aligned} d(K_{j}^{N})=N. \end{aligned}$$

Given a non-negative function h on \(\mathbb {N}\), we say h is summable if

$$\begin{aligned} \sum _{N\in \mathbb {N}}h(N)<+\infty . \end{aligned}$$

For \(\sigma >0\), a measure \(\mu \) satisfies the strengthened simple condition if there is a summable function \(h(\cdot )\) such that

$$\begin{aligned} 2^{2 \sigma d(\alpha )} I^{*} \mu (\alpha ) \le C h(d(\alpha )), \quad \alpha \in \mathcal {T}_{d}, \end{aligned}$$

where

$$\begin{aligned} I^{*} \mu (\alpha )=\sum _{\alpha '\ge \alpha ,\alpha '\in \mathcal {T}_{d}}\mu (\alpha '). \end{aligned}$$

The following lemma follows from [6,  Lem. 32 and Thm. 23].

Lemma 4.1

Let \(\sigma >0 \). If \(\mu \) satisfies the strengthened simple condition, then \(\mu \) is a \(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) \)-Carleson measure on \(\mathbb {B}_{d}\).

The following Lemma can be found in [8].

Lemma 4.2

If X is a binomial random variable with parameters N and p, then for every \(s=0,1,2,\ldots \),

$$\begin{aligned} \lim _{\tiny {\begin{array}{c}N\rightarrow \infty \\ pN\rightarrow 0\end{array}}}\frac{P(X=s)}{(pN)^s}= \lim _{\tiny {\begin{array}{c}N\rightarrow \infty \\ pN\rightarrow 0\end{array}}}\frac{P(X\ge s)}{(pN)^s}=\frac{1}{s!}. \end{aligned}$$
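Lemma 4.2 is the standard Poisson-type asymptotics for rare events, and it is easy to check numerically for small s; the concrete values of N and p below are hypothetical:

```python
import math

def binom_pmf(N, p, s):
    return math.comb(N, s) * p ** s * (1 - p) ** (N - s)

# Regime of Lemma 4.2: N large and pN small. Here pN = 1e-3.
N, p = 10 ** 6, 10 ** -9
lam = p * N

for s in range(4):
    pmf_ratio = binom_pmf(N, p, s) / lam ** s
    # truncate the tail: terms beyond s + 30 are negligible here
    tail_ratio = sum(binom_pmf(N, p, t) for t in range(s, s + 30)) / lam ** s
    # Both ratios approach 1/s! as N -> infinity and pN -> 0.
    assert abs(pmf_ratio - 1 / math.factorial(s)) < 0.01 / math.factorial(s)
    assert abs(tail_ratio - 1 / math.factorial(s)) < 0.01 / math.factorial(s)
```
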

Define

$$\begin{aligned} \mathcal {C}(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) ):=\{\omega : \mu _\Lambda \text { is a Carleson measure for }B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) \}. \end{aligned}$$

Theorem 4.3

Let \(\frac{d}{2}>\sigma >0 \) and \(d\ge 2\). Then

  1. (i)

    If

    $$\begin{aligned} \sum _{m=0}^{\infty }2^{-2\sigma m}N_m<\infty , \end{aligned}$$

    then \(\mathbb {P}\{\mathcal {C}(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) )\}=1\).

  2. (ii)

    If

    $$\begin{aligned} \sum _{m=0}^{\infty }2^{-2\sigma m}N_m=\infty , \end{aligned}$$

    then \(\mathbb {P}\{\mathcal {C}(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) )\}=0\).

Proof

First, it will be shown that if

$$\begin{aligned} \sum _{m=0}^{\infty }2^{-2\sigma m}N_m<\infty , \end{aligned}$$

then

$$\begin{aligned} \mu _{\Lambda (\omega )}=\sum _{j=1}^{\infty }\left( 1-\left| \lambda _{j}\right| ^{2}\right) ^{2 \sigma } \delta _{\lambda _{j}} \end{aligned}$$

is a Carleson measure almost surely.

Since \(d/2>\sigma >0\), there is a constant \(\epsilon >0\) such that \(d>2\sigma +\epsilon \). Next, it will be shown that

$$\begin{aligned} \sup _{\alpha \in \mathcal {T}_{d}}2^{(\epsilon +2\sigma )d(\alpha )}\sum _{\lambda _j\in \beta \ge \alpha }(1-|\lambda _j|^2)^{2\sigma } \end{aligned}$$

is bounded almost surely, that is to say

$$\begin{aligned} 2^{2 \sigma d(\alpha )} I^{*} \mu (\alpha )=2^{2\sigma d(\alpha )}\sum _{\lambda _j\in \beta \ge \alpha }(1-|\lambda _j|^2)^{2\sigma }\lesssim 2^{-\epsilon d(\alpha )}, \end{aligned}$$

which implies \(\mu _{\Lambda (\omega )}\) is a Carleson measure almost surely by Lemma 4.1. For any \(\alpha \), let

$$\begin{aligned} X_{m,\alpha }=\# \{\lambda _j\in \Lambda (\omega ): \lambda _j\in \beta \ge \alpha ,d(\beta )=m \}. \end{aligned}$$

By [5,  Lem. 2.8], we have

$$\begin{aligned} \left| \bigcup _{\beta \ge \alpha , d(\beta )=m} \beta \right| =c_{m,\alpha }2^{-d(\alpha )d}\left| \{z, m\theta \le \beta (0,z)<(m+1)\theta \}\right| , \end{aligned}$$

where \(\theta =\ln (2)/2\). Thus \(X_{m,\alpha }\) follows the binomial distribution \(B(c_{m,\alpha }2^{-d(\alpha )d}, N_m)\) and \(\sup _{m,\alpha }c_{m,\alpha }=c<\infty \). Then

$$\begin{aligned} 2^{(\epsilon +2\sigma )d(\alpha )}\sum _{\lambda _j\in \beta \ge \alpha }(1-|\lambda _j|^2)^{2\sigma }&=2^{(\epsilon +2\sigma )d(\alpha )}\sum _{m=d(\alpha )}^{\infty } \sum _{\lambda _j\in \beta \ge \alpha ,d(\beta )=m}(1-|\lambda _j|^2)^{2\sigma }\\&\lesssim 2^{(\epsilon +2\sigma )d(\alpha )}\sum _{m=d(\alpha )}^{\infty } 2^{-2\sigma m} X_{m,\alpha }\triangleq S_{\alpha }. \end{aligned}$$

Choose \(\gamma \in \mathbb {N}\) such that \(-2\sigma \gamma +2\sigma +2\epsilon <-d\). Let

$$\begin{aligned} Y_{\alpha }=2^{(\epsilon +2\sigma )d(\alpha )}\sum _{m=d(\alpha )}^{d(\alpha )(1+\gamma )-1} 2^{-2\sigma m} X_{m,\alpha } \text { and } R_{\alpha }=2^{(\epsilon +2\sigma )d(\alpha )}\sum _{m=d(\alpha )(1+\gamma )}^{\infty } 2^{-2\sigma m}X_{m,\alpha }. \end{aligned}$$

For any constant A, observe that

$$\begin{aligned} \mathbb {P}\left( \{\omega :S_{\alpha }\ge A \}\right) \le \mathbb {P}\left( \left\{ \omega :Y_{\alpha }\ge \frac{A}{2} \right\} \right) +\mathbb {P}\left( \left\{ \omega :R_{\alpha }\ge \frac{A}{2} \right\} \right) . \end{aligned}$$

For any m and \(\alpha \), there is an open set \(S_{m,\alpha }\) such that

$$\begin{aligned} \bigcup _{\beta \ge \alpha , d(\beta )=m} \beta \subset S_{m,\alpha } \subset \{z, m\theta \le \beta (0,z)<(m+1)\theta \} \end{aligned}$$

and

$$\begin{aligned} |S_{m,\alpha }|=c2^{-d(\alpha )d}|\{z, m\theta \le \beta (0,z)<(m+1)\theta \}|. \end{aligned}$$

Let

$$\begin{aligned} \tilde{X}_{m,\alpha }=\# \{\lambda _j: \lambda _j\in \Lambda (\omega )\cap S_{m,\alpha } \}, \end{aligned}$$

then \(\tilde{X}_{m,\alpha }\) follows the binomial distribution \(B(c2^{-d(\alpha )d}, N_m)\) and \(X_{m,\alpha }\le \tilde{X}_{m,\alpha }\). Let

$$\begin{aligned} \tilde{Y}_{\alpha }= 2^{(\epsilon +2\sigma )d(\alpha )}\sum _{m=d(\alpha )}^{d(\alpha )(1+\gamma )-1} 2^{-2\sigma m} \tilde{X}_{m,\alpha }, \end{aligned}$$

then \(\tilde{Y}_{\alpha }\) follows the binomial distribution

$$\begin{aligned} B\left( c2^{-d(\alpha )d}, 2^{(\epsilon +2\sigma )d(\alpha )}\sum _{m=d(\alpha )}^{d(\alpha )(1+\gamma )-1} 2^{-2\sigma m} N_m\right) . \end{aligned}$$

Then, by Lemma 4.2, it follows that

$$\begin{aligned} \mathbb {P}\{\omega :\tilde{Y}_{\alpha }\ge \frac{A}{2}\}&\lesssim \frac{\Big [c2^{-d(\alpha )d} 2^{(\epsilon +2\sigma )d(\alpha )} \sum _{m=d(\alpha )}^{d(\alpha )(1+\gamma )-1} 2^{-2\sigma m} N_m\Big ]^{A/2}}{(A/2)!}\\&\lesssim \frac{\Big [c2^{-d(\alpha )d} 2^{(\epsilon +2\sigma )d(\alpha )}\Big ]^{A/2}}{(A/2)!}.\\ \end{aligned}$$

Since \(\epsilon +2\sigma -d<0\), choose A big enough such that \((A/2)(\epsilon +2\sigma -d)\le -2d\). Thus

$$\begin{aligned} \mathbb {P}\left( \left\{ \omega :Y_{\alpha }\ge \frac{A}{2}\right\} \right) \lesssim \mathbb {P}\left( \left\{ \omega :\tilde{Y}_{\alpha }\ge \frac{A}{2}\right\} \right) \lesssim 2^{-2d(\alpha )d}. \end{aligned}$$

On the other hand,

$$\begin{aligned} \mathbb {E}(R_{\alpha })&=2^{(\epsilon +2\sigma )d(\alpha )}\sum _{m=d(\alpha )(1+\gamma )}^{\infty } 2^{-2\sigma m}\mathbb {E}(X_{m,\alpha })\\&=2^{(\epsilon +2\sigma )d(\alpha )}\sum _{m=d(\alpha )(1+\gamma )}^{\infty } 2^{-2\sigma m}c_{m,\alpha }2^{-d(\alpha )d} N_m\\&\le 2^{(\epsilon +2\sigma -d)d(\alpha )}\sum _{m=d(\alpha )(1+\gamma )}^{\infty } 2^{-2\sigma m} N_m\le C, \end{aligned}$$

for some constant C. Without loss of generality, suppose \(A/4\ge C\). Then

$$\begin{aligned} \mathbb {P}\left( \left\{ \omega :R_{\alpha }\ge \frac{A}{2}\right\} \right)&= \mathbb {P}\left( \left\{ \omega :R_{\alpha }-\mathbb {E}(R_{\alpha })\ge \frac{A}{2}-\mathbb {E}(R_{\alpha })\right\} \right) \\&\le \mathbb {P}\left( \left\{ \omega :|R_{\alpha }-\mathbb {E}(R_{\alpha })|\ge \frac{A}{4}\right\} \right) \\&\lesssim {\text {Var}}(R_{\alpha })\lesssim 2^{2(\epsilon +2\sigma )d(\alpha )}\sum _{m=d(\alpha )(1+\gamma )}^{\infty } 2^{-4\sigma m}2^{-d(\alpha )d} N_m\\&\lesssim 2^{2(\epsilon +2\sigma )d(\alpha )}2^{-d(\alpha )d}\sum _{m=d(\alpha )(1+\gamma )}^{\infty } 2^{-2\sigma m}\\&\lesssim 2^{2(\epsilon +2\sigma )d(\alpha )}2^{-d(\alpha )d} 2^{-2\sigma d(\alpha )(1+\gamma )}\\&\lesssim 2^{-d(\alpha )d} 2^{d(\alpha )(-2\sigma \gamma +2\sigma +2\epsilon )}\lesssim 2^{-d(\alpha )2d}. \end{aligned}$$

Thus,

$$\begin{aligned} \sum _{\alpha \in \mathcal {T}_{d}} \mathbb {P}\left( \left\{ \omega :S_{\alpha }\ge A \right\} \right)&=\sum _{k=0}^{\infty }\sum _{\alpha \in \mathcal {T}_{d},d(\alpha )=k} \mathbb {P}\left( \left\{ \omega :S_{\alpha }\ge A \right\} \right) \\&\lesssim \sum _{k=0}^{\infty }\sum _{\alpha \in \mathcal {T}_{d},d(\alpha )=k}2^{-d(\alpha )2d}\lesssim \sum _{k=0}^{\infty }2^{-kd}<\infty , \end{aligned}$$

which means, via the Borel–Cantelli Lemma, that only finitely many of the events \(\{S_{\alpha }\ge A\}\) occur, so that \(S_{\alpha }\) is bounded in \(\alpha \) almost surely. Thus \(\mu _{\Lambda (\omega )}\) is a Carleson measure almost surely.

On the other hand, if

$$\begin{aligned} \sum _{m=0}^{\infty }2^{-2\sigma m}N_m=\infty , \end{aligned}$$

then

$$\begin{aligned} \int _{\mathbb {B}_d}d\mu _{\Lambda }=\sum _{j=1}^{\infty }\left( 1-\left| \lambda _{j}\right| ^{2}\right) ^{2 \sigma } \simeq \sum _{m=0}^{\infty }2^{-2\sigma m}N_m=\infty . \end{aligned}$$

Thus, since a Carleson measure is necessarily finite,

$$\begin{aligned} \mathbb {P}\{\mathcal {C}(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) ) \}=0. \end{aligned}$$

\(\square \)

A sequence \(\{z_j\}\) is called weakly separated if \(\inf _{i \ne j} \beta \left( z_{i}, z_{j}\right) >0\). On the unit ball, denote

$$\begin{aligned} \mathcal {W}(\mathbb {B}_{d}):=\{\omega : \Lambda (\omega ) \text { is weakly separated in }\mathbb {B}_{d}\}. \end{aligned}$$

We point out that weak separation with respect to the Bergman metric is equivalent to weak separation with respect to the pseudo-hyperbolic metric. Thus, we have the following lemma.

Lemma 4.4

([14,  Lem. 3.5]) Let \(\Lambda (\omega )=\left\{ \lambda _{j}\right\} \) be a random sequence. Then the following statements hold.

  1. (i)

    If

    $$\begin{aligned} \sum _{m}2^{-dm}N^2_m<\infty , \end{aligned}$$

    then \(\mathbb {P}\{\mathcal {W}(\mathbb {B}_{d}) \}=1\).

  2. (ii)

    If

    $$\begin{aligned} \sum _{m}2^{-dm}N^2_m=\infty , \end{aligned}$$

    then \(\mathbb {P}\{\mathcal {W}(\mathbb {B}_{d}) \}=0\).

By Theorem 1.6, a sequence is interpolating if and only if it is weakly separated and the corresponding measure is a Carleson measure. We can now prove Theorem 1.7.

Proof of Theorem 1.7

Since \(0<\sigma \le 1/2 \) and \(d\ge 2\), we have \(-d+4\sigma \le 0\). If

$$\begin{aligned} \sum _{m=0}^{\infty }2^{-2\sigma m}N_m<\infty , \end{aligned}$$

then

$$\begin{aligned} \sum _{m=0}^{\infty }2^{-4\sigma m}N^2_m<\infty , \end{aligned}$$

which implies

$$\begin{aligned} \sum _{m=0}^{\infty }2^{-dm}N^2_m=\sum _{m=0}^{\infty }2^{(-d+4\sigma )m}2^{-4\sigma m}N^2_m\le \sum _{m=0}^{\infty }2^{-4\sigma m}N^2_m<\infty . \end{aligned}$$
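The first implication above uses only that the terms of a convergent series are bounded: with \(C:=\sup _{m}2^{-2\sigma m}N_m<\infty \),

$$\begin{aligned} 2^{-4\sigma m}N^2_m=\left( 2^{-2\sigma m}N_m\right) 2^{-2\sigma m}N_m\le C2^{-2\sigma m}N_m, \end{aligned}$$

and summing over \(m\) gives \(\sum _{m=0}^{\infty }2^{-4\sigma m}N^2_m\le C\sum _{m=0}^{\infty }2^{-2\sigma m}N_m<\infty \).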

By Theorem 1.6 and Lemma 4.4, the conclusion follows.

On the other hand, if \(\sum _{m=0}^{\infty }2^{-2\sigma m}N_m=\infty \), then

$$\begin{aligned} \mathbb {P}\{\mathcal {C}(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) ) \}=0. \end{aligned}$$

By Theorem 1.6 again, it follows that

$$\begin{aligned} \mathbb {P}\{\mathcal {I}(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) )\}=0. \end{aligned}$$

\(\square \)

Finally, we study uniformly separated sequences on the unit ball for \(d\ge 2\). It is well known that

$$\begin{aligned} 1-\left| \varphi _{z}(w)\right| ^{2}=\frac{\left( 1-|z|^{2}\right) \left( 1-|w|^{2}\right) }{|1-\langle w, z\rangle |^{2}}. \end{aligned}$$

A sequence \(\{z_j\}\) is uniformly separated if \(\inf _{k}\prod _{j\ne k}\rho (z_{j},z_k)>0\), where \(\rho \) is the pseudo-hyperbolic distance on the unit ball. Let

$$\begin{aligned} \mathcal {U}(\mathbb {B}_{d}):=\{\omega : \Lambda (\omega ) \text { is uniformly separated in }\mathbb {B}_{d}\}. \end{aligned}$$

We will also need the following standard estimate; see [15,  Prop. 1.4.10].

Lemma 4.5

For \(z \in \mathbb {B}_d\) and \(c\in \mathbb {R}\), let

$$\begin{aligned} I_{c}(z)=\int _{\partial \mathbb {B}_d} \frac{1}{|1-\langle z, \zeta \rangle |^{d+c}}d \sigma (\zeta ). \end{aligned}$$

If \(c<0\), then \(I_{c}\) is bounded on \(\mathbb {B}_d\). If \(c>0\), then

$$\begin{aligned} I_{c}(z) \approx \left( 1-|z|^{2}\right) ^{-c}. \end{aligned}$$

Finally,

$$\begin{aligned} I_{0}(z) \approx \log \frac{1}{1-|z|^{2}}. \end{aligned}$$
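As a quick numerical sanity check of Lemma 4.5 (an illustration only, not used in the proofs), consider the one-variable analogue, where the integral runs over the unit circle with exponent \(1+c\); for \(c=1\) it equals \((1-|z|^{2})^{-1}\) exactly, so the normalized ratio below should stay near \(1\) even as \(|z|\rightarrow 1\). The function name is ours.

```python
import numpy as np

def I_c(r, c, n=200_000):
    """Periodic quadrature for the d = 1 analogue of I_c:
    (1/2pi) * integral of |1 - r e^{it}|^{-(1+c)} dt, with z = r real, 0 <= r < 1."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return float(np.mean(np.abs(1.0 - r * np.exp(1j * t)) ** (-(1.0 + c))))

# For c = 1 the integral is exactly (1 - r^2)^{-1} (sum the geometric series
# for the Poisson-type kernel), so the ratio below stays ~1 as r -> 1,
# matching the growth I_c(z) ~ (1 - |z|^2)^{-c}.
for r in (0.9, 0.99, 0.999):
    print(r, I_c(r, 1.0) * (1.0 - r ** 2))
```

The equally spaced mean is the periodic trapezoidal rule, which converges rapidly here, so the ratio is accurate even for \(r=0.999\).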

Proposition 4.6

Let \(\Lambda (\omega )=\left\{ \lambda _{j}\right\} \) be a random sequence. Then the following statements hold.

  1. (i)

    If \(d=2\) and

    $$\begin{aligned} \sum _{m=0}^{\infty }N_m 2^{-m}(m+1)<\infty , \end{aligned}$$

    then \(\mathbb {P}\{\mathcal {U}(\mathbb {B}_{d})\}=1\).

  2. (ii)

    If \(d\ge 3\) and

    $$\begin{aligned} \sum _{m=0}^{\infty }N_m2^{-m}<\infty , \end{aligned}$$

    then \(\mathbb {P}\{\mathcal {U}(\mathbb {B}_{d})\}=1\).

  3. (iii)

    If \(d\ge 3\) and

    $$\begin{aligned} \sum _{m=0}^{\infty }N_m2^{-m}=\infty , \end{aligned}$$

    then \(\mathbb {P}\{\mathcal {U}(\mathbb {B}_{d})\}=0\).

Proof

First, observe that, for any fixed sequence, \(\inf _{k}\prod _{j\ne k}\rho (\lambda _{j},\lambda _k)^2>0\) if and only if

$$\begin{aligned} \sup _{k}\sum _{j\ne k}-\log {\rho (\lambda _{j},\lambda _k)^2}<\infty . \end{aligned}$$

Since \(-\log {x}\ge 1-x\) when \(1\ge x>0\), it follows that

$$\begin{aligned} -\log {\rho (\lambda _{j},\lambda _k)^2}\ge 1-\rho (\lambda _j,\lambda _k)^2. \end{aligned}$$

Thus \(\inf _{k}\prod _{j\ne k}\rho (\lambda _{j},\lambda _k)^2>0\) almost surely implies

$$\begin{aligned} \sup _{k}\sum _{j\ne k}[1-\rho (\lambda _j,\lambda _k)^2]<\infty \end{aligned}$$

almost surely.

On the other hand, if \(\inf _{j\ne k}\rho (\lambda _j,\lambda _k)>0 \), then

$$\begin{aligned} -\log {\rho (\lambda _{j},\lambda _k)^2}\lesssim 1-\rho (\lambda _j,\lambda _k)^2. \end{aligned}$$

Thus, in this case,

$$\begin{aligned} \sup _{k}\sum _{j\ne k}[1-\rho (\lambda _j,\lambda _k)^2]<\infty \end{aligned}$$

almost surely implies

$$\begin{aligned} \inf _{k}\prod _{j\ne k}\rho (\lambda _{j},\lambda _k)^2>0 \end{aligned}$$

almost surely.
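Both directions of this equivalence rest on two elementary bounds: \(-\log x\ge 1-x\) on \((0,1]\), and \(-\log x\le (1-x)/\delta \) on \([\delta ,1]\) (since \(-\log x\le (1-x)/x\) there). They can be verified numerically as a sanity check; the function name below is ours.

```python
import numpy as np

def check_log_bounds(delta=0.25, n=100_000):
    """Check the two inequalities used above:
    (a) -log x >= 1 - x on (0, 1];
    (b) -log x <= (1 - x)/delta on [delta, 1]."""
    x = np.linspace(1e-6, 1.0, n)
    lower_ok = bool(np.all(-np.log(x) >= 1.0 - x - 1e-12))
    y = np.linspace(delta, 1.0, n)
    upper_ok = bool(np.all(-np.log(y) <= (1.0 - y) / delta + 1e-12))
    return lower_ok, upper_ok
```

Applied with \(x=\rho (\lambda _j,\lambda _k)^2\), bound (a) gives the forward implication, and bound (b), with \(\delta \) the squared weak-separation constant, gives the converse.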

For any constant \(c>0\), Markov's inequality gives

$$\begin{aligned}&\sum _{k=1}^{\infty }\mathbb {P}\left( \left\{ \omega : \sum _{j\ne k}[1-\rho (\lambda _j,\lambda _k)^2]>c\right\} \right) \le \frac{1}{c}\sum _{k=1}^{\infty }\mathbb {E}\Big [ \sum _{j\ne k}[1-\rho (\lambda _j,\lambda _k)^2]\Big ]\\&\quad =\frac{1}{c}\sum _{k=1}^{\infty }\sum _{j\ne k}\mathbb {E}\Big [1-\rho (\lambda _j,\lambda _k)^2\Big ]\\&\quad =\frac{1}{c}\sum _{k=1}^{\infty }\sum _{j\ne k}\mathbb {E}\Big [\frac{\left( 1-|\lambda _j|^{2}\right) \left( 1-|\lambda _k|^{2}\right) }{|1-\langle \lambda _k, \lambda _j\rangle |^{2}}\Big ]\\&\quad = \frac{1}{c}\sum _{k=1}^{\infty }\sum _{j\ne k} \int _{\partial \mathbb {B}_d}\int _{\partial \mathbb {B}_d}\frac{\left( 1-|\lambda _j|^{2}\right) \left( 1-|\lambda _k|^{2}\right) }{|1-\langle |\lambda _k|\xi _{k}, |\lambda _j|\xi _j\rangle |^{2}}d\sigma (\xi _{j})d\sigma (\xi _{k})\\&\quad \le \frac{1}{c}\sum _{k=1}^{\infty }\sum _{j\ne k} \left( 1-|\lambda _j|^{2}\right) \left( 1-|\lambda _k|^{2}\right) \\&\qquad \times \int _{\partial \mathbb {B}_d}\int _{\partial \mathbb {B}_d}\frac{1}{|1-\langle |\lambda _j||\lambda _k|\xi _{k}, \xi _j\rangle |^{d+(2-d)}}d\sigma (\xi _{j})d\sigma (\xi _{k}). \end{aligned}$$

Next, consider two cases.

If \(d=2\), then by Lemma 4.5 we have

$$\begin{aligned} \int _{\partial \mathbb {B}_d}\frac{1}{|1-\langle |\lambda _j||\lambda _k|\xi _{k}, \xi _j\rangle |^{d+(2-d)}}d\sigma (\xi _{j})\lesssim \log \frac{1}{1-|\lambda _j|^2|\lambda _k|^2}. \end{aligned}$$

Substituting this estimate into the one above yields that

$$\begin{aligned}&\sum _{k=1}^{\infty }\mathbb {P}\left( \left\{ \omega : \sum _{j\ne k}[1-\rho (\lambda _j,\lambda _k)^2]>c\right\} \right) \\&\quad \lesssim \frac{1}{c}\sum _{k=1}^{\infty } \sum _{j=1}^{\infty }\left( 1-|\lambda _j|^{2}\right) \left( 1-|\lambda _k|^{2}\right) \log \frac{1}{1-|\lambda _j|^2|\lambda _k|^2}\\&\quad \lesssim \sum _{l=0}^{\infty }\sum _{m=0}^{\infty }N_lN_m 2^{-l}2^{-m}\Big [\log {(\frac{1}{2^{-l-1}+2^{-m-1}})}+1\Big ]\\&\quad \le 2\sum _{l\ge m}N_lN_m 2^{-l}2^{-m}\Big [\log {(\frac{1}{2^{-l-1}+2^{-m-1}})}+1\Big ]\lesssim \sum _{l\ge m}N_lN_m 2^{-l}2^{-m}(l+1)\\&\quad =\sum _{m=0}^{\infty }N_m 2^{-m} \sum _{l=m}^{\infty }N_l2^{-l}(l+1)\le \left( \sum _{l=0}^{\infty }N_l 2^{-l}(l+1)\right) ^{2}. \end{aligned}$$
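In the estimate above, the logarithm is controlled dyadically: for \(l\ge m\) one has \(2^{-l-1}+2^{-m-1}\ge 2^{-m-1}\), so

$$\begin{aligned} \log \frac{1}{2^{-l-1}+2^{-m-1}}\le \log 2^{m+1}=(m+1)\log 2\le (l+1)\log 2. \end{aligned}$$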

Thus

$$\begin{aligned} \sum _{l=0}^{\infty }N_l 2^{-l}(l+1)<\infty \end{aligned}$$

implies that

$$\begin{aligned} \sup _{k}\sum _{j\ne k}[1-\rho (\lambda _j,\lambda _k)^2]<\infty \end{aligned}$$

almost surely. Since \(\sum _{l}N_l2^{-l}(l+1)<\infty \) forces \(\sum _{l}2^{-2l}N^2_l<\infty \) (the terms \(N_l2^{-l}\) being bounded), Lemma 4.4 also yields that

$$\begin{aligned} \sum _{l=0}^{\infty }N_l 2^{-l}(l+1)<\infty \end{aligned}$$

implies that \(\Lambda (\omega )\) is weakly separated almost surely, thus

$$\begin{aligned} \sum _{l=0}^{\infty }N_l 2^{-l}(l+1)<\infty \end{aligned}$$

implies that \(\Lambda (\omega )\) is uniformly separated almost surely.

If \(d\ge 3\), then Lemma 4.5 (applied with \(c=2-d<0\)) gives

$$\begin{aligned} \int _{\partial \mathbb {B}_d}\frac{1}{|1-\langle |\lambda _j||\lambda _k|\xi _{k}, \xi _j\rangle |^{d+(2-d)}}d\sigma (\xi _{j})\le C_d<\infty . \end{aligned}$$

Then

$$\begin{aligned}&\sum _{k=1}^{\infty }\mathbb {P}\left( \left\{ \omega : \sum _{j\ne k}[1-\rho (\lambda _j,\lambda _k)^2]>c\right\} \right) \\&\quad \lesssim \sum _{k=1}^{\infty }\sum _{j=1}^{\infty }\left( 1-|\lambda _j|^{2}\right) \left( 1-|\lambda _k|^{2}\right) \\&\quad \backsimeq \left( \sum _{m=0}^{\infty }N_{m}2^{-m}\right) ^2. \end{aligned}$$
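The uniform bound \(C_d\) supplied by Lemma 4.5 in the case \(c=2-d<0\) can also be sanity-checked by Monte Carlo for \(d=3\): sampling independent uniform points on the complex unit sphere, the double integral remains bounded as the radius tends to \(1\). This is an illustration only, not part of the proof, and the helper names are ours.

```python
import numpy as np

def uniform_complex_sphere(n, d, rng):
    """n i.i.d. uniform points on the unit sphere of C^d (normalized complex Gaussians)."""
    g = rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))
    return g / np.linalg.norm(g, axis=1, keepdims=True)

def mc_double_integral(a, d=3, n=200_000, seed=0):
    """Monte Carlo estimate of the double sphere integral of
    |1 - <a xi, eta>|^{-2}, the quantity bounded by C_d when d >= 3
    (here a = |lambda_j| |lambda_k| < 1)."""
    rng = np.random.default_rng(seed)
    xi = uniform_complex_sphere(n, d, rng)
    eta = uniform_complex_sphere(n, d, rng)
    inner = np.sum(a * xi * np.conj(eta), axis=1)  # <a xi, eta> in C^d
    return float(np.mean(1.0 / np.abs(1.0 - inner) ** 2))

# The estimates remain bounded as a -> 1, illustrating the c < 0 case:
for a in (0.5, 0.9, 0.99):
    print(a, mc_double_integral(a))
```

For \(d=2\) the same experiment instead shows logarithmic growth in \(1/(1-a^2)\), consistent with the \(c=0\) case used above.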

By a similar argument for the case \(d=2\),

$$\begin{aligned} \sum _{m=0}^{\infty }N_{m}2^{-m}<\infty \end{aligned}$$

implies that \(\Lambda (\omega )\) is uniformly separated almost surely. Conversely, if

$$\begin{aligned} \sum _{m=0}^{\infty }N_{m}2^{-m}=\infty , \end{aligned}$$

then

$$\begin{aligned} \sum _{j=1}^{\infty }\left( 1-|\lambda _j|^{2}\right) =\infty . \end{aligned}$$

Thus, for any \(\lambda _k\), we have

$$\begin{aligned} \sum _{j\ne k}-\log {\rho (\lambda _{j},\lambda _k)^2}&\ge \sum _{j\ne k}[1-\rho (\lambda _j,\lambda _k)^2]= \sum _{j\ne k}\frac{(1-|\lambda _k|^{2})(1-|\lambda _j|^{2})}{|1-\langle \lambda _j,\lambda _k\rangle |^2}\\&\ge \frac{(1-|\lambda _k|^{2})}{(1+|\lambda _k|)^2}\sum _{j\ne k}(1-|\lambda _j|^{2})=\infty , \end{aligned}$$

giving the conclusion that

$$\begin{aligned} \mathbb {P}\{\mathcal {U}(\mathbb {B}_{d}) \}=0. \end{aligned}$$

\(\square \)