Abstract
We study almost surely separating and interpolating properties of random sequences in the polydisc and the unit ball. In the unit ball, we obtain the 0–1 Komolgorov law for a sequence to be interpolating almost surely for all the Besov–Sobolev spaces \(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) \), in the range \(0 < \sigma \le 1 / 2\). For those spaces, such interpolating sequences coincide with interpolating sequences for their multiplier algebras, thanks to the Pick property. This is not the case for the Hardy space \(\mathrm {H}^2(\mathbb {D}^d)\) and its multiplier algebra \(\mathrm {H}^\infty (\mathbb {D}^d)\): in the polydisc, we obtain a sufficient and a necessary condition for a sequence to be \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating almost surely. Those two conditions do not coincide, due to the fact that the deterministic starting point is less descriptive of interpolating sequences than its counterpart for the unit ball. On the other hand, we give the \(0-1\) law for random interpolating sequences for \(\mathrm {H}^2(\mathbb {D}^d)\).
1 Introduction
A sequence \(Z=(z_n)_{n\in \mathbb {N}}\) in the unit disc \(\mathbb {D}\) is interpolating for \(\mathrm {H}^\infty \) if, given any bounded sequence \((w_n)_{n\in \mathbb {N}}\) in \(\mathbb {C}\), there exists a bounded analytic function f on \(\mathbb {D}\) so that \(f(z_n)=w_n\), for any n in \(\mathbb {N}\). The celebrated work of Carleson, [9, 10], characterized interpolating sequences in terms of separation properties. To be precise, let
$$\begin{aligned} b_\tau (z):=\frac{\tau -z}{1-\overline{\tau }z} \end{aligned}$$
be the involutive Blaschke factor at \(\tau \) in \(\mathbb {D}\), and let, for any z and w in \(\mathbb {D}\),
$$\begin{aligned} \rho (z, w):=|b_z(w)| \end{aligned}$$
be the pseudo-hyperbolic distance in \(\mathbb {D}\). Z is
-
weakly separated if
$$\begin{aligned} \inf _{n\ne k}\rho (z_n, z_k)>0; \end{aligned}$$ -
uniformly separated if
$$\begin{aligned} \inf _{n\in \mathbb {N}}\prod _{k\ne n}\rho (z_n, z_k)>0. \end{aligned}$$
Carleson proved in [9] that Z is interpolating if and only if it is uniformly separated. Later on, [10], he characterized uniform separation in terms of a measure theoretic condition and weak separation:
Theorem 1.1
(Carleson) A sequence Z in \(\mathbb {D}\) is uniformly separated if and only if it is weakly separated and the measure
$$\begin{aligned} \mu _Z:=\sum _{n\in \mathbb {N}}(1-|z_n|^2)\delta _{z_n} \end{aligned}$$
is a Carleson measure for \(\mathrm {H}^2(\mathbb {D})\).
Throughout this note, a measure \(\mu \) on a domain D will be a Carleson measure for a reproducing kernel Hilbert space \(\mathcal {H}_k\) of holomorphic functions on D if
$$\begin{aligned} \int _D|f|^2\,d\mu \le C\Vert f\Vert _{\mathcal {H}_k}^2,\qquad f\in \mathcal {H}_k, \end{aligned}$$
for some \(C>0\). Later sections will take \(D=\mathbb {D}^d\), the unit polydisc, or \(D=\mathbb {B}_{d}\), the unit ball, respectively: the kernels that we are going to choose for such domains are the Szegö kernel on the polydisc and the Besov–Sobolev kernels on the unit ball.
In certain instances, the randomization of the conditions studied by Carleson becomes more tractable and provides insight into the structure of interpolating sequences. Cochran studied in [12] separation properties of random sequences. A random sequence in the unit disc is defined as follows: let \((\theta _n)_{n\in \mathbb {N}}\) be a sequence of independent random variables, all distributed uniformly in \((0, 2\pi )\) and defined on the same probability space \((\Omega , \mathcal {A}, \mathbb {P})\). Then, for any choice of a deterministic sequence of radii \((r_n)_{n\in \mathbb {N}}\) approaching 1, define
$$\begin{aligned} \lambda _n(\omega ):=r_ne^{i\theta _n(\omega )}. \end{aligned}$$
Considering the random sequence \(\Lambda (\omega )=(\lambda _n(\omega ))_{n\in \mathbb {N}}\), the 0-1 Kolmogorov law yields that events such as
have probability zero or one, thanks to the independence of the arguments of the points in \(\Lambda \). Let
$$\begin{aligned} \left\{ z\in \mathbb {D}\,:\,1-2^{-j}\le |z|<1-2^{-(j+1)}\right\} \end{aligned}$$(1.1)
be the jth dyadic annulus of \(\mathbb {D}\), and let
$$\begin{aligned} N_j:=\#\left\{ n\in \mathbb {N}\,:\,1-2^{-j}\le r_n<1-2^{-(j+1)}\right\} . \end{aligned}$$(1.2)
All the randomness of the sequence is on the arguments of the points in \(\Lambda \), and therefore \((N_j)_{j\in \mathbb {N}}\) is a deterministic sequence. Cochran proved in [12, Thm. 2] that \(\mathbb {P}(\mathcal {W})=1\) provided that
$$\begin{aligned} \sum _{j\in \mathbb {N}}N_j^22^{-j}<\infty \end{aligned}$$(1.3)
and that \(\mathbb {P}(\mathcal {W})=0\) whenever the sum in (1.3) diverges. Later on, Rudowicz showed in [16] that (1.3) is a sufficient condition for \(\mu _\Lambda \) to be a Carleson measure for \(\mathrm {H}^2(\mathbb {D})\) almost surely, and concluded, thanks to Theorem 1.1, that \(\mathbb {P}(\mathcal {I})=1\) if and only if (1.3) holds. In particular, condition (1.3) encodes all those random sequences so that \(\mathcal {W}\), \(\mathcal {U}\) and \(\mathcal {I}\) all have probability one.
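For a concrete feel for condition (1.3), one can compute the dyadic counts and the partial sum numerically for a given deterministic choice of radii. The sketch below is our own illustration, not part of the paper; the helper names `dyadic_counts` and `cochran_sum` are hypothetical. The sparse choice \(r_n=1-2^{-n}\) places one point per dyadic annulus, so \(N_j=1\) and the sum converges.

```python
def dyadic_counts(radii, j_max):
    """N_j = #{n : 1 - 2^{-j} <= r_n < 1 - 2^{-(j+1)}}, for j = 0, ..., j_max."""
    N = [0] * (j_max + 1)
    for r in radii:
        for j in range(j_max + 1):
            if 1 - 2.0 ** (-j) <= r < 1 - 2.0 ** (-(j + 1)):
                N[j] += 1
                break
    return N

def cochran_sum(radii, j_max=60):
    """Partial sum of sum_j N_j^2 2^{-j}, the quantity in condition (1.3)."""
    N = dyadic_counts(radii, j_max)
    return sum(n * n * 2.0 ** (-j) for j, n in enumerate(N))

# One point per dyadic annulus: r_n = 1 - 2^{-n} gives N_j = 1, j = 1, ..., 39,
# and the partial sum is close to sum_{j >= 1} 2^{-j} = 1.
radii_sparse = [1 - 2.0 ** (-n) for n in range(1, 40)]
```

A denser choice of radii (for instance many points per annulus, with counts growing faster than \(2^{j/2}\)) makes the same partial sums blow up, which is the divergent regime of Cochran's theorem.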
The goal of this paper is to study random interpolating sequences in the polydisc and the d-dimensional unit ball. A sequence \(Z=(z_n)_{n\in \mathbb {N}}\) in \(\mathbb {D}^d\) is interpolating for \(\mathrm {H}^\infty (\mathbb {D}^d)\) if, given any bounded sequence \((w_n)_{n\in \mathbb {N}}\) in \(\mathbb {C}\), there exists a bounded holomorphic function f on \(\mathbb {D}^d\) so that \(f(z_n)=w_n\), for all n. On the polydisc, the deterministic starting point is the following (partial) analogue of the Carleson interpolation theorem for sequences in the polydisc [7]:
Theorem 1.2
(Berndtsson, Chang and Lin) Let \(Z=(z_n)_{n\in \mathbb {N}}\) be a sequence in \(\mathbb {D}^d\), and let (a), (b) and (c) denote the following statements:
-
(a)
$$\begin{aligned} \inf _{n\in \mathbb {N}}\prod _{k\ne n}\rho _G(z_n, z_k)>0; \end{aligned}$$(1.4)
-
(b)
Z is interpolating for \(\mathrm {H}^\infty (\mathbb {D}^d)\);
-
(c)
The measure
$$\begin{aligned} \mu _Z:=\sum _{n\in \mathbb {N}}\left( \prod _{i=1}^d(1-|z_n^i|^2)\right) \delta _{z_n} \end{aligned}$$is a Carleson measure for \(\mathrm {H}^2(\mathbb {D}^d)\) and
$$\begin{aligned} \inf _{n\ne k}\rho _G(z_n, z_k )>0. \end{aligned}$$(1.5)
Then (a)\(\implies \)(b)\(\implies \)(c), and none of the converse implications hold.
Conditions (1.4) and (1.5) are separation conditions, both stated in terms of the so called Gleason distance on the polydisc:
$$\begin{aligned} \rho _G(z, w):=\max _{i=1,\dots ,d}\rho (z^i, w^i). \end{aligned}$$
Throughout this note, (1.4) will refer to uniform separation on the polydisc, while (1.5) defines a weakly separated sequence on the polydisc. Theorem 1.2 represents one of the best known attempts to characterize \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating sequences on the polydisc in terms of its hyperbolic geometry. A characterization of interpolating sequences for bounded analytic functions on the bi-disc can be found in [1], stated in terms of uniform separation conditions for an entire class of reproducing kernels on \(\mathbb {D}^2\). The motivation for the first part of this note is to find out whether conditions (a) and (c) of Theorem 1.2 are equivalent at least almost surely. A negative answer would imply that Theorem 1.2 is far from being a characterization. A positive answer would give the 0–1 Kolmogorov law for \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating sequences in the polydisc with random arguments. The construction of a random sequence \(\Lambda \) on the polydisc follows the same outline as in the case of the unit disc. Let \(\mathbb {T}^d\) be the d-dimensional torus in \(\mathbb {C}^d\), and let \((\theta ^1_n, \dots , \theta ^d_n)_{n\in \mathbb {N}}\) be a sequence of independent and identically distributed random variables taking values in \(\mathbb {T}^d\), all distributed uniformly and defined on the same probability space \((\Omega , \mathcal {A}, \mathbb {P})\). Let \((r_n)_{n\in \mathbb {N}}\) be a sequence in \([0, 1)^d\), and define a random sequence \(\Lambda =(\lambda _n)_{n\in \mathbb {N}}\) in \(\mathbb {D}^d\) as
$$\begin{aligned} \lambda _n(\omega ):=\left( r_n^1e^{i\theta ^1_n(\omega )}, \dots , r_n^de^{i\theta ^d_n(\omega )}\right) . \end{aligned}$$
The events of interest are going to be
Our first aim is to give necessary conditions and sufficient conditions for \(\Lambda \) to be interpolating for \(\mathrm {H}^\infty (\mathbb {D}^d)\) almost surely. This will be achieved by studying separately the probabilities of the events \(\mathcal {W}(\mathbb {D}^d)\), \(\mathcal {U}(\mathbb {D}^d)\) and \(\mathcal {C}(\mathrm {H}^2(\mathbb {D}^d))\), and by applying Theorem 1.2. Looking for conditions on \((r_n)_{n\in \mathbb {N}}\) that yield almost sure separation properties for \(\Lambda \), we extend (1.1) and (1.2) to the d-dimensional case by considering
$$\begin{aligned} I_m:=\left\{ z\in \mathbb {D}^d\,:\,1-2^{-m_i}\le |z^i|<1-2^{-(m_i+1)},\ i=1,\dots ,d\right\} \end{aligned}$$(1.6)
and
$$\begin{aligned} N_m:=\#\left\{ n\in \mathbb {N}\,:\,\lambda _n\in I_m\right\} \end{aligned}$$
for any multi-index \(m=(m_1,\dots ,m_d)\) in \(\mathbb {N}^d\). Throughout this note, \(|m|=m_1+\dots +m_d\) will denote the length of m.
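The objects above are easy to experiment with numerically. The following sketch is our own illustration (not from the paper): it assumes the standard formula \(\rho _G(z,w)=\max _{1\le i\le d}\rho (z^i,w^i)\) for the Gleason distance on the polydisc, and samples a point of \(\mathbb {D}^d\) with i.i.d. uniform arguments as in the construction of \(\Lambda \).

```python
import cmath
import math
import random

def rho(z, w):
    """Pseudo-hyperbolic distance in the unit disc: |z - w| / |1 - conj(w) z|."""
    return abs(z - w) / abs(1 - w.conjugate() * z)

def rho_G(z, w):
    """Gleason distance on the polydisc, taken here to be the maximum of the
    coordinatewise pseudo-hyperbolic distances (an assumption of this sketch)."""
    return max(rho(zi, wi) for zi, wi in zip(z, w))

def random_point(r, rng=random):
    """One random point lambda_n = (r^1 e^{i theta^1}, ..., r^d e^{i theta^d})
    with independent arguments uniform on (0, 2*pi)."""
    return tuple(ri * cmath.exp(1j * rng.uniform(0.0, 2.0 * math.pi)) for ri in r)
```

For instance, `rho_G((0, 0), (0.5, 0.25))` evaluates to 0.5, the larger of the two coordinate distances.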
The first main result partially extends Cochran’s and Rudowicz’s works to the polydisc:
Theorem 1.3
Let \(\Lambda \) be a random sequence in \(\mathbb {D}^d\). Then
-
(i)
If
$$\begin{aligned} \sum _{m\in \mathbb {N}^d}N_m^22^{-|m|}<\infty \end{aligned}$$(1.7)then \(\mathbb {P}(\mathcal {W}(\mathbb {D}^d))=1\). If the sum in (1.7) diverges, then \(\mathbb {P}(\mathcal {W}(\mathbb {D}^d))=0\).
-
(ii)
If
$$\begin{aligned} \sum _{m\in \mathbb {N}^d}N_m^{{1+1/d}}2^{-|m|/d}<\infty \end{aligned}$$(1.8)then \(\mathbb {P}(\mathcal {U}(\mathbb {D}^d))=1\).
-
(iii)
If (1.7) holds, then \(\mathbb {P}(\mathcal {C}(\mathrm {H}^2(\mathbb {D}^d)))=1\).
Observe that the case \(d=1\) yields Rudowicz’s and Cochran’s characterization of random interpolating sequences on the unit disc. In general, part (i) of the above theorem gives the 0–1 Kolmogorov law for a sequence to be weakly separated. Parts (ii) and (iii) give a sufficient condition for a sequence to be almost surely uniformly separated and to generate a Carleson measure for the Hardy space on the polydisc. In particular, thanks to Theorem 1.2, the 0–1 Kolmogorov law for almost surely interpolating sequences for \(\mathrm {H}^\infty (\mathbb {D}^d)\) lies somewhere in between (1.8) and (1.7):
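To compare the two cut-offs concretely, one can evaluate the partial sums in (1.7) and (1.8) for a given sequence of radii. The sketch below is our own illustration with hypothetical helper names: it bins the radii into the dyadic boxes indexed by \(m\in \mathbb {N}^d\) and computes both sums. For the diagonal bidisc choice \(r_n=(1-2^{-n}, 1-2^{-n})\), each box \(m=(n,n)\) holds one point and both sums converge.

```python
import math
from collections import Counter

def annulus_index(t):
    """Dyadic index m with 1 - 2^{-m} <= t < 1 - 2^{-(m+1)}, for t in [0, 1)."""
    return int(math.floor(-math.log2(1.0 - t)))

def sums_17_18(radii, d):
    """Partial sums of (1.7), sum_m N_m^2 2^{-|m|}, and (1.8), sum_m N_m^{1+1/d} 2^{-|m|/d}."""
    N = Counter(tuple(annulus_index(t) for t in r) for r in radii)
    s17 = sum(n ** 2 * 2.0 ** (-sum(m)) for m, n in N.items())
    s18 = sum(n ** (1 + 1.0 / d) * 2.0 ** (-sum(m) / d) for m, n in N.items())
    return s17, s18

# Diagonal radii in the bidisc: one point per box m = (n, n), so the partial
# sums approach sum_n 4^{-n} = 1/3 for (1.7) and sum_n 2^{-n} = 1 for (1.8).
radii = [(1 - 2.0 ** (-n),) * 2 for n in range(1, 30)]
s17, s18 = sums_17_18(radii, d=2)
```

Note that (1.8) is the more restrictive condition in general: its summand decays like \(2^{-|m|/d}\) rather than \(2^{-|m|}\), which is the gap between parts (i) and (ii) of Theorem 1.3.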
Corollary 1.4
Let \(\Lambda \) be a random sequence on \(\mathbb {D}^d\). Then
-
(i)
If (1.8) holds, then \(\mathbb {P}(\mathcal {I}(\mathbb {D}^d))=1\);
-
(ii)
If the sum in (1.7) diverges, then \(\mathbb {P}(\mathcal {I}(\mathbb {D}^d))=0\).
Proposition 3.3 will give an example of a class of random sequences for which the 0–1 Kolmogorov law for almost surely \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating sequences is given by the sum in (1.7). Whether this is the case for a general choice of the radii \((r_n)_{n\in \mathbb {N}}\) remains, for us, open. Nevertheless, we will observe in Sect. 3.4 how (1.7) implies that the Szegö Grammian of a random sequence in the polydisc differs from the identity only by a Hilbert–Schmidt operator, a rather strong separation condition for the random kernel functions in the Hardy space associated to \(\Lambda \). In particular, this will give the 0–1 law for a random sequence \(\Lambda \) to be interpolating for \(\mathrm {H}^2(\mathbb {D}^d)\). In the deterministic setting, a sequence \((z_n)_{n\in \mathbb {N}}\) in \(\mathbb {D}^d\) is interpolating for \(\mathrm {H}^2(\mathbb {D}^d)\) if the map
is surjective and bounded. This, in particular, is equivalent to asking that the Szegö Grammian associated to \((z_n)_{n\in \mathbb {N}}\) be bounded above and below. Given a random sequence \(\Lambda \) in \(\mathbb {D}^d\), let
Any \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating sequence in \(\mathbb {D}^d\) is also \(\mathrm {H}^2(\mathbb {D}^d)\)-interpolating, while the converse does not hold, since \(\mathrm {H}^2(\mathbb {D}^d)\) does not have the Pick property (for an example of a sequence which is \(\mathrm {H}^2(\mathbb {D}^2)\)-interpolating but not \(\mathrm {H}^\infty (\mathbb {D}^2)\)-interpolating, see [4]). Therefore, \(\mathcal {I}(\mathbb {D}^d)\subseteq \tilde{\mathcal {I}}(\mathbb {D}^d)\). We show that \(\tilde{\mathcal {I}}(\mathbb {D}^d)\) obeys the same 0–1 law as \(\mathcal {W}(\mathbb {D}^d)\):
Theorem 1.5
Let \(\Lambda \) be a random sequence in \(\mathbb {D}^d\). Then
Related questions about interpolation for function spaces on the unit ball in \(\mathbb {C}^d\) are also considered. The authors of [11] studied interpolating sequences in the Dirichlet spaces over the unit disc, and this serves as part of the motivation for the results in the ball. Section 4 will generalize some theorems of [11] to the unit ball. Because the Besov–Sobolev spaces generalize the Dirichlet space, random interpolating sequences in the Besov–Sobolev spaces \(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) \) are studied, where \(0<\sigma <\infty \). In [6], a characterization of interpolating sequences in the Besov–Sobolev spaces was given in the case \(0<\sigma \le 1/2\). Because a characterization exists only in this range, that is the case we focus on in this paper.
Let \(\mathbb {B}_{d}\) be the unit ball in \(\mathbb {C}^{d}\). Let dz be Lebesgue measure on \(\mathbb {C}^{d}\) and let
be the invariant measure on the ball. For an integer \(m \ge 0\), and for \(0< \sigma <\infty \), \(1<p<\infty \), \(m+\sigma >d / p\) define the analytic Besov–Sobolev spaces \(B_{p}^{\sigma }\left( \mathbb {B}_{d}\right) \) to consist of those holomorphic functions f on the ball such that
Here \(f^{(m)}\) is the m-th order complex derivative of f. The spaces \(B_{p}^{\sigma }\left( \mathbb {B}_{d}\right) \) are independent of m and are Banach spaces. A Carleson measure for \(B_{p}^{\sigma }\left( \mathbb {B}_{d}\right) \) is a positive measure on \(\mathbb {B}_{d}\) such that the following Carleson embedding holds for all \(f \in B_{p}^{\sigma }\left( \mathbb {B}_{d}\right) \):
Given \(\sigma \) with \(0 < \sigma \le 1 / 2\) and a discrete set \(Z=\left\{ z_{i}\right\} _{i=1}^{\infty } \subset \mathbb {B}_{d}\) define the associated measure
Z is an interpolating sequence for \(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) \) if the restriction map R, defined by \(R f\left( z_{i}\right) =f\left( z_{i}\right) \) for \(z_{i} \in Z\), maps \(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) \) into and onto \(\ell ^{2}\left( Z, \mu _{Z}\right) \).
Theorem 1.6
Given \(\sigma \) with \(0 < \sigma \le 1/2\) and
Then Z is an interpolating sequence for \(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) \) if and only if Z satisfies the weak separation condition \(\inf _{i \ne j} \beta \left( z_{i}, z_{j}\right) >0\) and \(\mu _{Z}\) is a \(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) \) Carleson measure.
Proof
When \(0< \sigma <1 / 2\), this theorem is given by [6, Thm. 3]. When \(\sigma =1/2\), since \(B_{2}^{1/2}\left( \mathbb {B}_{d}\right) \) has the complete Pick property, we obtain the theorem from [3, Thm. 1.1]. \(\square \)
Hence, in contrast with the polydisc case, interpolating sequences for the Besov–Sobolev spaces are well understood in the deterministic setting, being characterized by weak separation and a Carleson measure condition. Therefore, in order to find the 0–1 Kolmogorov law for interpolating sequences for \(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) \), it suffices to find the cut-off conditions on the deterministic radii for the associated sequence with randomly chosen arguments to be weakly separated and to generate a Carleson measure almost surely. This is the intent of the second part of our work. Random sequences in the unit ball are constructed as follows. Let \(\Lambda (\omega )=\left\{ \lambda _{j}\right\} \) with \(\lambda _{j}=\rho _{j} \xi _j(\omega )\), where \((\xi _{j})\) is a sequence of independent random variables, all uniformly distributed on the unit sphere, and \((\rho _{j})\subset [0,1)\) is a sequence of a priori fixed radii. An interesting phenomenon arises for random interpolating sequences in the Besov–Sobolev spaces on the unit ball: as we will see, for \(d\ge 2\) a random sequence \(\{\lambda _n\}\) is an interpolating sequence almost surely if and only if \(\sum _{n}(1-|\lambda _n|)\delta _{\lambda _n}\) is a Carleson measure for \(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) \) almost surely. Moreover, the characterization of almost surely interpolating sequences is strictly stronger than the characterization of almost surely weakly separated sequences.
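The random model on the ball is straightforward to simulate. A standard way to draw \(\xi _j\) uniformly on the unit sphere of \(\mathbb {C}^d\) is to normalize a vector of i.i.d. standard complex Gaussians, by rotation invariance of the Gaussian law; the implementation below is our own sketch, with hypothetical helper names, and is not taken from the paper.

```python
import math
import random

def uniform_sphere_point(d, rng=random):
    """Uniform point on the unit sphere of C^d: normalize a vector of i.i.d.
    standard complex Gaussians (uniformity follows from rotation invariance)."""
    v = [complex(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(d)]
    norm = math.sqrt(sum(abs(z) ** 2 for z in v))
    return [z / norm for z in v]

def random_ball_point(rho_j, d, rng=random):
    """lambda_j = rho_j * xi_j, a random point of the ball with |lambda_j| = rho_j."""
    return [rho_j * z for z in uniform_sphere_point(d, rng)]
```

Drawing `random_ball_point(rho_j, d)` for a fixed sequence of radii \((\rho _j)\) reproduces the model above: all the randomness sits in the directions \(\xi _j\), while the moduli are deterministic.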
For any \(m\in \mathbb {N}\), let
where \(\beta \) is the Bergman metric on the unit ball \(\mathbb {B}_{d}\) in \(\mathbb {C}^{d}\). Let
The following result gives a 0–1 Kolmogorov law for interpolating sequences on the unit ball. We only treat the case \(0 < \sigma \le 1/2\) and \(d\ge 2\). When \(\sigma =d/2\), it is well known that \(B_{2}^{d/2}\left( \mathbb {B}_{d}\right) \) is the Hardy space. By [14, Thm. 3.3], we know that
When \(d=1\) and \(0<\sigma \le 1/4\), by (i) in [11, Thm. 1.5] we know that
When \(d=1\) and \(1/4<\sigma <1/2\), by (ii) in [11, Thm. 1.5] we know that
In our case, we have the following:
Theorem 1.7
Let \(0 < \sigma \le 1/2\) and \(d\ge 2\). Then
-
(i)
If
$$\begin{aligned} \sum _{m=0}^{\infty }2^{-2\sigma m}N_m<\infty , \end{aligned}$$then \(\mathbb {P}\{\mathcal {I}(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) )\}=1\);
-
(ii)
If
$$\begin{aligned} \sum _{m=0}^{\infty }2^{-2\sigma m}N_m=\infty , \end{aligned}$$then \(\mathbb {P}\{\mathcal {I}(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) )\}=0\).
Section 2 will construct the necessary technical tools for the proof of our main results. Section 3 provides the proof of Theorems 1.3 and 1.5, and characterizes random interpolating sequences for \(\mathrm {H}^\infty (\mathbb {D}^d)\) for some specific choice of the radii in \((r_n)_{n\in \mathbb {N}}\). Finally, Sect. 4 proves Theorem 1.7 and studies uniform separation on the unit ball.
We would like to thank Nikolaos Chalmoukis for some useful comments that led to the final version of Theorem 1.3. We would also like to thank the referees for their valuable suggestions.
2 Preliminary Results
This section contains relatively general results that are going to be used throughout the proof of Theorem 1.3. Deterministic and probabilistic tools will be separately analyzed.
2.1 Deterministic Tools
Double sums are used extensively throughout this work. In particular, we will use the fact that, for a certain class of double sums involving exponential decay, the terms on the diagonal contain all the information needed to bound the whole sum:
Lemma 2.1
Let \(s\ge 1\), and let \((A_m)_{m\in \mathbb {N}}\) and \((B_k)_{k\in \mathbb {N}}\) be two sequences of positive numbers. Then there exists some constant \(C=C_s>0\) such that
Proof
First observe that
Let us first estimate the sum for \(k>m\):
thanks to Hölder’s inequality with dual exponents \(1+1/s\) and \(s+1\). The sum for \(m>k\) is estimated analogously. This concludes the proof. \(\square \)
Our takeaway from Lemma 2.1 is the following
Corollary 2.2
Let \(s\ge 1\), \(d\ge 1\) and \((N_m)_{m\in \mathbb {N}^d}\) be a sequence of positive numbers so that
Then
Proof
The proof is by induction on d:
- \(d=1\):
-
apply Lemma 2.1 to \(A_m=B_m=N_m\);
- \(d\ge 2\):
-
suppose that (2.2) is true for \(d-1\), and let \((N_m)_{m\in \mathbb {N}^d}\) be a sequence of positive numbers. Then, by applying Lemma 2.1,
$$\begin{aligned}&\sum _{m\in \mathbb {N}^d}N_m\sum _{k\in \mathbb {N}^d}N_k^{1/s}\left( \prod _{i=1}^d\frac{1}{2^{m_i}+2^{k_i}}\right) ^{1/s}\\&\quad =\sum _{\tilde{m}, \tilde{k} \in \mathbb {N}^{d-1}}\prod _{i=1}^{d-1}\left( \frac{1}{2^{\tilde{k}_i}+2^{\tilde{m}_i}}\right) ^{1/s}\sum _{m_1, k_1\in \mathbb {N}}\frac{N_{(m_1, \tilde{m})}N_{(k_1, \tilde{k})}^{1/s}}{(2^{m_1}+2^{k_1})^{1/s}}\\&\quad \underset{\sim }{<}\sum _{\tilde{m}, \tilde{k} \in \mathbb {N}^{d-1}}\prod _{i=1}^{d-1}\left( \frac{1}{2^{\tilde{k}_i}+2^{\tilde{m}_i}}\right) ^{1/s}\\&\qquad \times \max \left\{ \sum _{m_1\in \mathbb {N}}N_{(m_1, \tilde{m})}^{1+1/s}2^{-m_1/s}\,,\,\sum _{m_1\in \mathbb {N}}N_{(m_1, \tilde{k})}^{1+1/s}2^{-m_1/s}\right\} \\&\qquad +\sum _{\tilde{m}, \tilde{k}\in \mathbb {N}^{d-1}}\prod _{i=1}^{d-1}\left( \frac{1}{2^{\tilde{m}_i}+2^{\tilde{k}_i}}\right) ^{1/s}\sum _{m_1\in \mathbb {N}}N_{(m_1, \tilde{m})}N_{(m_1, \tilde{k})}^{1/s}2^{-m_1/s}\\&\quad \le \sum _{\tilde{m}, \tilde{k} \in \mathbb {N}^{d-1}}\prod _{i=1}^{d-1}\left( \frac{1}{2^{\tilde{k}_i}+2^{\tilde{m}_i}}\right) ^{1/s}\sum _{m_1\in \mathbb {N}}N_{(m_1, \tilde{m})}^{1+1/s}2^{-m_1/s}\\&\qquad +\sum _{\tilde{m}, \tilde{k} \in \mathbb {N}^{d-1}}\prod _{i=1}^{d-1}\left( \frac{1}{2^{\tilde{k}_i}+2^{\tilde{m}_i}}\right) ^{1/s}\sum _{m_1\in \mathbb {N}}N_{(m_1, \tilde{k})}^{1+1/s}2^{-m_1/s}\\&\qquad +\sum _{\tilde{m}, \tilde{k}\in \mathbb {N}^{d-1}}\prod _{i=1}^{d-1}\left( \frac{1}{2^{\tilde{m}_i}+2^{\tilde{k}_i}}\right) ^{1/s}\sum _{m_1\in \mathbb {N}}N_{(m_1, \tilde{m})}N_{(m_1, \tilde{k})}^{1/s}2^{-m_1/s}\\&\quad =: I_1+I_2+I_3, \end{aligned}$$where the index m in \(\mathbb {N}^d\) is written as \((m_1, \tilde{m})\), with \(m_1\) in \(\mathbb {N}\) and \(\tilde{m}\) in \(\mathbb {N}^{d-1}\). Observe that, thanks to (2.1), \(I_1\) and \(I_2\) converge. As for \(I_3\), we can change the order of summation and apply the case \(d-1\), which yields
$$\begin{aligned} \begin{aligned}&\sum _{m\in \mathbb {N}^d}N_m\sum _{k\in \mathbb {N}^d}N_k^{1/s}\left( \prod _{i=1}^d\frac{1}{2^{m_i}+2^{k_i}}\right) ^{1/s}\\&\quad \underset{\sim }{<}\sum _{m_1\in \mathbb {N}}2^{-m_1/s}\sum _{\tilde{m}, \tilde{k}\in \mathbb {N}^{d-1}}N_{(m_1, \tilde{m})}N_{(m_1, \tilde{k})}^{1/s}\prod _{i=1}^{d-1}\left( \frac{1}{2^{\tilde{m}_i}+2^{\tilde{k}_i}}\right) ^{1/s}\\&\quad \underset{\sim }{<}\sum _{m_1\in \mathbb {N}}2^{-m_1/s}\sum _{\tilde{m}\in \mathbb {N}^{d-1}}N_{(m_1, \tilde{m})}^{1+1/s}2^{-|\tilde{m}|/s}\\&\quad = \sum _{m\in \mathbb {N}^d}N_m^{1+1/s}2^{-|m|/s}<\infty . \end{aligned} \end{aligned}$$
\(\square \)
2.2 Random Tools
Fairly elementary facts from probability theory are exploited in the proofs. All the events and the random variables that are considered will be defined on the same probability space \((\Omega , \mathcal {A}, \mathbb {P})\). For a comprehensive treatment of the probabilistic results used, see [8].
The first tool is the Borel–Cantelli Lemma. Recall that, given a sequence \((A_n)_{n\in \mathbb {N}}\) of events in \(\mathcal {A}\), then
denotes the event made of those \(\omega \) in \(\Omega \) that belong to infinitely many of the events in \((A_n)_{n\in \mathbb {N}}\).
Theorem 2.3
(Borel–Cantelli Lemma) Let \((A_n)_{n\in \mathbb {N}}\) be a sequence of events in \(\mathcal {A}\). Then
-
(i)
If \({\sum _{n\in \mathbb {N}}}\mathbb {P}(A_n)<\infty \), then \(\mathbb {P}\left( \limsup _{n\in \mathbb {N}}A_n\right) =0\);
-
(ii)
If \({\sum _{n\in \mathbb {N}}}\mathbb {P}(A_n)=\infty \) and the events in \((A_n)_{n\in \mathbb {N}}\) are independent, then \(\mathbb {P}\left( \limsup _{n\in \mathbb {N}}A_n\right) =1\).
Given a random variable X on \(\Omega \), its mean value (or expectation) will be denoted by
In particular, if \(\mathbb {E}(X)<\infty \), then \(\mathbb {P}\{X=\infty \}=0\).
Another classic tool from probability that will be used is Jensen’s Inequality:
Theorem 2.4
(Jensen’s Inequality) Let X be a real-valued random variable on \(\Omega \), and let \(\phi :\mathbb {R}\rightarrow \mathbb {R}\) be a convex function. Then
In particular, since
is concave, for any \(s\ge 1\), this gives
for any positive random variable X on \(\Omega \), by applying Jensen’s inequality to \(\phi (t)=-t^{1/s}\).
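This consequence of Jensen's inequality, \(\mathbb {E}(X^{1/s})\le \mathbb {E}(X)^{1/s}\), is easy to sanity-check by Monte Carlo. The sketch below is our own illustration; the exponential law is an arbitrary choice of positive random variable, and the helper name is hypothetical.

```python
import random

def check_jensen_power(s=3.0, trials=200_000, seed=1):
    """Monte-Carlo comparison of E(X^{1/s}) with (E X)^{1/s} for a positive X.
    Here X ~ Exp(1); any positive law would do."""
    rng = random.Random(seed)
    xs = [rng.expovariate(1.0) for _ in range(trials)]
    lhs = sum(x ** (1.0 / s) for x in xs) / trials   # empirical E(X^{1/s})
    rhs = (sum(xs) / trials) ** (1.0 / s)            # empirical (E X)^{1/s}
    return lhs, rhs

lhs, rhs = check_jensen_power()
```

For \(X\sim \mathrm {Exp}(1)\) and \(s=3\), the left-hand side estimates \(\Gamma (4/3)\approx 0.893\) while the right-hand side is close to 1, consistent with the inequality.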
We can now prove Lemma 2.5, a tool for the proofs of Theorem 1.3:
Lemma 2.5
Let \((X^i_{n, j})_{n, j\in \mathbb {N}}\) be a sequence of positive random variables, for any \(i=1,\dots ,d\). Set
Assume that
Then
is bounded almost surely.
Proof
Since, for any \(n\ne j\) in \(\mathbb {N}\),
we have
Thus
\(\square \)
3 Random Sequences in the Polydisc
This section is devoted to the proofs of Theorems 1.3 and 1.5. The events \(\mathcal {U}(\mathbb {D}^d)\), \(\mathcal {W}(\mathbb {D}^d)\), \(\mathcal {C}(\mathrm {H}^2(\mathbb {D}^d))\) and \(\tilde{\mathcal {I}}(\mathbb {D}^d)\) will be analyzed separately.
3.1 Weak Separation
For weak separation in the polydisc, it turns out that Cochran’s argument in [12, Thm. 2] extends to the higher dimensional case:
Proof of Theorem 1.3, (i)
For the sake of readability, we will adapt Cochran’s proof only to the case \(d=2\): the proof will lift appropriately to any \(d>1\). Assume first that \(\sum _{m\in \mathbb {N}^2}N_m^22^{-|m|}=\infty \) and let l be in \(\mathbb {N}\). Define
as the set of those \(\omega \) in \(\Omega \) such that there exists a pair of distinct indices n and r so that the Gleason distance between \(\lambda _n(\omega )\) and \(\lambda _r(\omega )\) is controlled by, roughly, \(2^{-l}\). Since \(\bigcap _{l\in \mathbb {N}}A_l\subseteq \mathcal {W}(\mathbb {D}^d)^c\), it suffices to show that \(\mathbb {P}(A_l)=1\) for any l in \(\mathbb {N}\).
For any m in \(\mathbb {N}^2\), partition \(I_{m}\) into \(2^{2l}\) “rectangles” of the form
and observe that at least one of these rectangles, say \(R_{m}\), must contain at least \(M_m:=N_m/2^{2l}\) points of \(\Lambda \). Let
Since
and the events \(B_{m}\) are independent, by the Borel–Cantelli Lemma, Theorem 2.3, it suffices to show that \(\sum _{m\in \mathbb {N}^2}\mathbb {P}(B_{m})=\infty \).
In order to estimate the probability of each \(B_{m}\) from below, we give an upper bound for \(\mathbb {P}(B_{m}^c)\). If \(\tau \) is in \(\mathbb {T}^2\), let \(S_{m}(\tau )\) be a “rectangle” in \(\mathbb {T}^2\) centered at \(\tau \) with basis \(2^{-(m_1+l)}\) and height \(2^{-(m_2+l)}\). If \(\tau _n=(e^{i\theta ^1_n}, e^{i\theta ^2_n})\), then thanks to the independence of \((\tau _n)_{n\in \mathbb {N}}\) we have
If \(\liminf _{m}\mathbb {P}(B_m^c)<1\), then \(\mathbb {P}(B_{m})\) is uniformly bounded away from 0 infinitely many times, and \(\sum _{m\in \mathbb {N}^2}\mathbb {P}(B_{m})=\infty \) trivially.
On the other hand, if \(\lim _{|m|\rightarrow \infty }\mathbb {P}(B^c_{m})=1\), then
which is the general term of a divergent series.
To conclude the proof of Theorem 1.3, part (i), it suffices to show that a random sequence \(\Lambda \) in \(\mathbb {D}^d\) is almost surely weakly separated whenever (1.7) holds. To do so, let
Then
and the Borel–Cantelli Lemma provides that, almost surely, any pair \((\lambda _n, \lambda _r)\) in all but finitely many “rectangles” \(I_m\) satisfies
The same argument applies for the right-shifted “rectangles” \(I'_m\) of the form
and the up-shifted “rectangles” \(I''_m\) of the form
This ensures that all but finitely many pairs \((\lambda _n,\lambda _r)\) in \(\Lambda \) so that both
and
have property (3.1). Therefore (see [12, Claim, p. 741]), \(\Lambda \) is almost surely weakly separated. \(\square \)
3.2 Uniform Separation
While weak separation behaves essentially in the same way as the dimension d grows, the sufficient condition in (1.8) for almost sure uniform separation picks up a dependence on d. As will be shown, this is due to some estimates on the expected value of quantities related to the (random) Gleason distances between the points in \(\Lambda \).
It will also be explained how (1.8) can be improved for some choices of \((r_n)_{n\in \mathbb {N}}\). As a corollary, a cutoff condition for \(\Lambda \) to be almost surely \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating for some types of random sequences in the polydisc will be given.
Let \(s_d\) be the Szegö kernel on \(\mathbb {D}^d\). Then the Hardy space \(\mathrm {H}^2(\mathbb {D}^d)\) is the reproducing kernel Hilbert space \(\mathcal {H}_{s_d}\). Denote the normalized Szegö kernel by
and observe that, for any z and w in \(\mathbb {D}^d\),
Given a random sequence \(\Lambda \) in \(\mathbb {D}^d\) denote, for the sake of readability,
and
Thanks to (3.2), uniform separation can be achieved from weak separation and a uniform bound on sums depending on the random sequences \((S^ i(n, j))_{n, j\in \mathbb {N}}\):
Observe that each \((S^i(n, j))_{n, j\in \mathbb {N}}\) is a sequence of random variables on \(\Omega \) which is determined, together with \(\Lambda \), by \((r_{n})_{n\in \mathbb {N}}\). It is not surprising then that the expectation of \(|S^i(n, j)|^2\) depends, for any i, n and j, only on \(r^i_n\) and \(r_j^i\):
Lemma 3.1
Let \(\Lambda \) be a random sequence in \(\mathbb {D}^d\). Then, for any \(n\ne j\) in \(\mathbb {N}\) and for any \(i=1,\dots , d\),
Proof
Observe that
Therefore, by making use of the independence of \(\theta ^i_n\) and \(\theta ^i_j\),
\(\square \)
Remark 3.2
Let m and k be two multi-indices in \(\mathbb {N}^d\), and suppose that \(\lambda _n\) and \(\lambda _j\) belong to \(I_m\) and \(I_k\), respectively. Then, thanks to Lemma 3.1 and (1.6),
In particular, since \(S^i(n, j)\) and \(S^r(n, j)\) are independent for any \(i\ne r\), we have
Part (ii) of Theorem 1.3 can now be proved:
Proof of Theorem 1.3, (ii)
Observe that
whenever \(N_m\le 2^{|m|}\), and so under our assumption \(\Lambda \) is weakly separated, thanks to Theorem 1.3, part (i). Therefore, thanks to (3.3), it suffices to show that the random sequence \((S_n)_{n\in \mathbb {N}}\) given by
is bounded almost surely. Thanks to Lemma 2.5, it is enough to show that
By regrouping the terms of the double sum in (3.4) with respect to the partition \((I_m)_{m\in \mathbb {N}^d}\) of \(\mathbb {D}^d\) and thanks to Remark 3.2 and (2.3) we get
Corollary 2.2 with \(s=d\) concludes the proof. \(\square \)
Condition (1.8) is not sharp. Indeed, for some choices of \((r_n)_{n\in \mathbb {N}}\), we can show that the 0–1 Kolmogorov law for \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating sequences coincides with the one for weak separation:
Proposition 3.3
Let \(d=2\) and \((t_n)_{n\in \mathbb {N}}\) be a sequence in (0, 1), and consider its Cartesian product with itself
Then the random sequence \(\Lambda \) associated with \((r_n)_{n\in \mathbb {N}^2}\) is interpolating for \(\mathrm {H}^\infty (\mathbb {D}^d)\) almost surely if and only if (1.7) holds.
Proof
If \(\sum _{m\in \mathbb {N}^2}N_m^22^{-|m|}=\infty \), then \(\Lambda \) is not weakly separated almost surely, and in particular it is almost surely not interpolating. Thus it suffices to show that \(\Lambda \) is \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating provided that \(\sum _{m\in \mathbb {N}^2}N_m^22^{-|m|}<\infty \), which, by construction of \((r_n)_{n\in \mathbb {N}^2}\), is equivalent to
where \(T_n:=\#\{l\in \mathbb {N}\quad |\quad 1-2^{-n}\le t_l<1-2^{-(n+1)}\}\). By Rudowicz’s Theorem, [16], the random sequence T on \(\mathbb {D}\) given by
is almost surely interpolating in \(\mathbb {D}\), where \((\theta _n)_{n\in \mathbb {N}}\) is a sequence of i.i.d. random variables defined on a probability space \((\Omega , \mathcal {A}, \mathbb {P})\) and distributed uniformly on the unit circle. In particular, T almost surely has a sequence of so-called P. Beurling functions; that is, there exists an event \(\Omega '\) with \(\mathbb {P}(\Omega ')=1\) such that, for any \(\omega \) in \(\Omega '\), there exists a sequence of \(\mathrm {H}^\infty (\mathbb {D})\) functions \((F_{\omega , n})_{n\in \mathbb {N}}\) such that
Let us consider now the product probability space \((\tilde{\Omega }, \tilde{\mathcal {A}}, \tilde{\mathbb {P}})\), where \(\tilde{\Omega }:=\Omega \times \Omega \), \(\tilde{\mathcal {A}}\) is the product \(\sigma \)-algebra of \(\mathcal {A}\) with itself, and
Then the random variables
given by
are uniformly distributed in \(\mathbb {T}^2\) and independent. Thus we can think of the random sequence \(\Lambda \) as
Let \(\Omega '':=\Omega '\times \Omega '\) and define, for any \(n=(n_1, n_2)\) in \(\mathbb {N}^2\) and \(\tilde{\omega }=(\omega _1, \omega _2)\) in \(\Omega ''\) the \(\mathrm {H}^\infty (\mathbb {D}^2)\) function
Then \((G_{\tilde{\omega }, n})_{n\in \mathbb {N}^2}\) is a set of Beurling functions for \(\Lambda (\tilde{\omega })\), and in particular \(\Lambda (\tilde{\omega })\) is \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating for any \(\tilde{\omega }\) in \(\Omega ''\). Since \(\tilde{\mathbb {P}}(\Omega '')=\mathbb {P}(\Omega ')^2=1\), \(\Lambda \) is interpolating for \(\mathrm {H}^\infty (\mathbb {D}^d)\) almost surely. \(\square \)
The argument in Proposition 3.3 can be easily extended to any \(d>1\) to show that, whenever the sequence of radii \((r_n)_{n\in \mathbb {N}}\) is the Cartesian product of d sequences in [0, 1), then (1.7) encodes all random sequences that are almost surely interpolating for \(\mathrm {H}^\infty (\mathbb {D}^d)\). For a general choice of \((r_n)_{n\in \mathbb {N}}\) the following question remains open:
Question 1 Is any random sequence \(\Lambda \) in \(\mathbb {D}^d\) satisfying (1.7) uniformly separated? Or else, does there exist a choice of \((r_n)_{n\in \mathbb {N}}\) so that the random sequence \(\Lambda \) obtained is almost surely weakly separated but not uniformly separated?
3.3 Carleson Measures
The same idea that was used for random uniform separation works for the proof of Theorem 1.3, part (iii), modulo some adaptations. Let \(Z=(z_n)_{n\in \mathbb {N}}\) be a sequence in \(\mathbb {D}^d\) and consider the Szegö Grammian
associated with the sequence Z. Therefore,
Theorem 3.4
The following are equivalent:
-
(i)
\(\mu _Z\) is a Carleson measure for \(\mathrm {H}^2(\mathbb {D}^d)\);
-
(ii)
\(G:l^2\rightarrow l^2\) is bounded.
A proof of Theorem 3.4 can be found in [2, Thm. 9.5]. Moreover, a standard operator theory argument gives that any sufficiently strong decay of the coefficients of G outside its diagonal implies that G is bounded (above and below):
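As a purely numerical illustration of the equivalence in Theorem 3.4, one can compute a finite section of the normalized Szegö Grammian of a random sequence in \(\mathbb {D}^2\) and inspect its extreme eigenvalues; the radii, truncation size, and random seed below are illustrative choices, not taken from the text.

```python
import numpy as np

def szego(z, w):
    # Szego kernel of H^2(D^d): product over coordinates of 1/(1 - z_i * conj(w_i))
    return np.prod(1.0 / (1.0 - np.asarray(z) * np.conj(np.asarray(w))))

def normalized_grammian(pts):
    # finite section of G, with entries S(z_n, z_j)/sqrt(S(z_n, z_n) S(z_j, z_j))
    n = len(pts)
    G = np.empty((n, n), dtype=complex)
    for a in range(n):
        for b in range(n):
            G[a, b] = szego(pts[a], pts[b]) / np.sqrt(
                szego(pts[a], pts[a]).real * szego(pts[b], pts[b]).real)
    return G

rng = np.random.default_rng(0)
radii = 1.0 - 2.0 ** -np.arange(1, 13)          # radii accumulating at the boundary
pts = [(r * np.exp(2j * np.pi * rng.random()),
        r * np.exp(2j * np.pi * rng.random())) for r in radii]

eigs = np.linalg.eigvalsh(normalized_grammian(pts))
print(eigs.min(), eigs.max())
```

With independent uniform arguments the off-diagonal entries are typically small, so the finite sections stay bounded above and below, consistent with the Carleson-measure criterion.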
Lemma 3.5
Let \(A=(a_{n, j})_{n, j\in \mathbb {N}}:l^2\rightarrow l^2\) be invertible and self-adjoint. Suppose that \(a_{i,i}=1\) for any i in \(\mathbb {N}\), and that
Then A is bounded above and below.
Proof
Such an A can be written as \(A=Id+H\), where H is a Hilbert–Schmidt operator. Let \((y_n)_{n\in \mathbb {N}}\) be the sequence of eigenvalues of A, and let \((x_n)_{n\in \mathbb {N}}\) be the sequence of eigenvalues of H. Since H is a Hilbert–Schmidt operator,
and since \(A=Id+H\) we have \(y_n=1+x_n\) for every n. Since A is invertible, none of the \(y_n\) vanish. Moreover, since A is self-adjoint, it is bounded above by \(\sup _{n\in \mathbb {N}}|y_n|\) and bounded below by \(\inf _{n\in \mathbb {N}}|y_n|\). Since \(x_n\) converges to 0, the supremum is finite and the infimum is positive, hence the result. \(\square \)
Remark 3.6
In the above proof one uses only the fact that \(x_n\) goes to 0, as \(n\rightarrow \infty \). Therefore the same conclusion holds if we assume H to be compact.
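Lemma 3.5 can be seen at work in a minimal numerical sketch: a self-adjoint \(A=Id+H\) with unit diagonal and rapidly decaying off-diagonal entries has its spectrum trapped near 1. The matrix size and decay rate below are artificial choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# self-adjoint H with off-diagonal entries decaying geometrically away from
# the diagonal, so its Frobenius (= Hilbert-Schmidt) norm is finite and small
H = rng.standard_normal((n, n))
H = (H + H.T) / 2
i, j = np.indices((n, n))
H *= 0.1 * 0.3 ** np.abs(i - j)
np.fill_diagonal(H, 0.0)        # so that A = Id + H has unit diagonal, as in Lemma 3.5

A = np.eye(n) + H
eigs = np.linalg.eigvalsh(A)
print(np.linalg.norm(H, 'fro'), eigs.min(), eigs.max())
```

Here the smallest and largest eigenvalues stay close to 1, i.e., A is bounded above and below, exactly as the lemma predicts.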
Let \(\Lambda \) be a random sequence in \(\mathbb {D}^d\). Thanks to Lemma 3.5, to show that \(\mathbb {P}(\mathcal {C}(\mathrm {H}^2(\mathbb {D}^d)))=1\) it is enough to show that the random Grammian associated to \(\Lambda \) has a strong decay outside its diagonal almost surely:
Proof of Theorem 1.3,(iii)
It suffices to show that
Indeed, if (3.6) holds, then
almost surely, and Lemma 3.5 would conclude the proof. By Remark 3.2 and by regrouping the sum in (3.6) with respect to the partition \((I_m)_{m\in \mathbb {N}^d}\) of \(\mathbb {D}^d\), one obtains
Corollary 2.2, \(s=1\), concludes the proof. \(\square \)
3.4 Almost Orthogonal Random Grammians
Equation (3.5) is a rather strong condition for an infinite matrix A. Indeed, in addition to implying that A is bounded, it says that \(A-Id\) is a Hilbert–Schmidt operator on \(l^2\), i.e., that for any choice of an orthonormal basis \((e_n)_{n\in \mathbb {N}}\) of \(l^2\)
If \(A=G\) is a Szegö Grammian associated to a sequence \(Z=(z_n)_{n\in \mathbb {N}}\) in the polydisc, it is natural to ask whether such an almost orthogonality condition on the kernels at the points of Z translates into interpolation properties of the sequence:
Question 2 Let \(d\ge 2\). Is a sequence Z in \(\mathbb {D}^d\) interpolating for \(\mathrm {H}^\infty (\mathbb {D}^d)\), provided that its Szegö Grammian can be written as \(G=Id+H\), where H is a Hilbert–Schmidt operator on \(l^2\)?
The case \(d=1\) of Question 2 has a positive answer. For any sequence Z in the unit disc, let
be the hyperbolic distance from \(z_n\) to the rest of the sequence. By the Carleson interpolation Theorem, Z is interpolating if and only if \(\inf _{n\in \mathbb {N}}\delta _n>0\). On the other hand, [13], \(G-Id\) is a Hilbert–Schmidt operator if and only if
which comfortably implies that Z is interpolating.
Another motivation for answering Question 2 comes from random interpolating sequences for \(\mathrm {H}^\infty (\mathbb {D}^d)\). We proved in Sect. 3.3 that the random Grammian associated to a random sequence \(\Lambda \) in the polydisc differs from the identity by a Hilbert–Schmidt operator, provided that the sum in (1.7) converges. Conversely, if Z is not weakly separated, then infinitely many entries outside the diagonal of its Szegö Grammian are arbitrarily close to 1 in absolute value, hence \(G-Id\) is not Hilbert–Schmidt. Namely,
In particular, a positive answer to Question 2 would imply that the event \(\mathcal {I}(\mathbb {D}^d)\) follows the same \(0-1\) law as (3.7), giving a \(0-1\) law for random \(\mathrm {H}^\infty (\mathbb {D}^d)\)-interpolating sequences.
Moreover, (3.7) helps in understanding interpolating sequences for \(\mathrm {H}^2(\mathbb {D}^d)\), and it implies Theorem 1.5. Indeed, any invertible Szegö Grammian \((S_d(z_n, z_j))_{n, j\in \mathbb {N}}\) that can be written as \(G=Id+H\), where H is Hilbert–Schmidt, is bounded above and below, thanks to Lemma 3.5, which in turn is equivalent to \((z_n)_{n\in \mathbb {N}}\) being interpolating for \(\mathrm {H}^2(\mathbb {D}^d)\). On the other hand, as pointed out above, if Z is not weakly separated then infinitely many pairs of normalized Szegö kernels at the points of Z are at an angle arbitrarily close to 0, and hence G is not bounded below. Thus
4 Random Separation in the Unit Ball
This section is devoted to the proof of Theorem 1.7. In addition, we study uniform separation in the unit ball. Compared with the polydisc case, we rely more heavily on the spherical geometry of the unit ball than on the Euclidean geometry of the Hardy spaces involved, so the techniques of this section differ from those of the previous sections.
Recall that \(\Lambda (\omega )=\left\{ \lambda _{j}\right\} \) with \(\lambda _{j}=\rho _{j} \xi _j(\omega )\), where \((\xi _{j}(\omega ))_{j}\) is a sequence of independent random variables, each uniformly distributed on the unit sphere, and \((\rho _{j})_{j}\subset [0,1)\) is a sequence of a priori fixed radii. Depending on distribution conditions on \(\{\rho _{j}\}\), discussed below, we study the probability that \(\Lambda (\omega )\) is interpolating for the Besov–Sobolev spaces \(B_{p}^{\sigma }\left( \mathbb {B}_{d}\right) \), where \(0 < \sigma \le 1 / 2\).
The Bergman tree \(\mathcal {T}_{d}\) associated to the ball \(\mathbb {B}_{d}\) with the structure constants 1 and \(\ln (2)/2\) is needed in the analysis, so we present some details here. More information can be found in [5, p. 17]. Let \(\rho \) be the pseudo-hyperbolic distance on the unit ball, so that \(\rho (z,w)=|\varphi _{z}(w)|\), where \(\varphi _{z}\) is the Möbius transformation. The Bergman metric on the unit ball \(\mathbb {B}_{d}\) in \(\mathbb {C}^{d}\) is given by
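For reference, in the standard normalization the Bergman metric of the ball is given in terms of the pseudo-hyperbolic distance by
$$\begin{aligned} \beta (z, w)=\frac{1}{2}\log \frac{1+\rho (z, w)}{1-\rho (z, w)}=\tanh ^{-1}\rho (z, w), \qquad z, w\in \mathbb {B}_{d}, \end{aligned}$$
up to the choice of structure constants.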
Further, for any \(r>0\), we define
For any \(N\in \mathbb {N}\), according to [5, Lem. 2.6] and the fact that \(\mathcal {U}_{r}\) is a compact set, there are a positive integer J, a set of points \(\{z^N_{j}\}_{j=1}^J\) and a set of subsets \(\{Q_j^N\}_{j=1}^J\) of \(\mathcal {U}_{N\ln (2)/2}\) such that
Let
where \(P_{N} z\) denotes the radial projection of z onto the sphere \(\mathcal {U}_{N\ln (2)/2}\). Define a tree structure on the collection of sets
by declaring that \(K_{i}^{N+1}\) is a child of \(K_{j}^{N}\), written \(K_{i}^{N+1} \ge K_{j}^{N}\), if the projection \(P_{N }\left( z_{i}^{N+1}\right) \) of \(z_{i}^{N+1}\) onto the sphere \(\mathcal {U}_{N\ln (2)/2}\) lies in \(Q_{j}^{N} \). For any \(K_{j}^{N}\in \mathcal {T}_{d}\), we define \(d(K_{j}^{N})\) by
Given a non-negative function h on \(\mathbb {N}\), we say h is summable if
For \(\sigma >0\), a measure \(\mu \) satisfies the strengthened simple condition if there is a summable function \(h(\cdot )\) such that
where
The following lemma follows from [6, Lem. 32 and Thm. 23].
Lemma 4.1
Let \(\sigma >0 \). If \(\mu \) satisfies the strengthened simple condition, then \(\mu \) is a \(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) \)-Carleson measure on \(\mathbb {B}_{d}\).
The following lemma can be found in [8].
Lemma 4.2
If X is a binomial random variable with parameters p and N, then for every \(s=0,1,2,\ldots \),
Define
Theorem 4.3
Let \(\frac{d}{2}>\sigma >0 \) and \(d\ge 2\). Then
-
(i)
If
$$\begin{aligned} \sum _{m=0}^{\infty }2^{-2\sigma m}N_m<\infty , \end{aligned}$$then \(\mathbb {P}\{\mathcal {C}(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) )\}=1\).
-
(ii)
If
$$\begin{aligned} \sum _{m=0}^{\infty }2^{-2\sigma m}N_m=\infty , \end{aligned}$$then \(\mathbb {P}\{\mathcal {C}(B_{2}^{\sigma }\left( \mathbb {B}_{d}\right) )\}=0\).
Proof
First, it will be shown that if
then
is a Carleson measure almost surely.
Since \(d/2>\sigma >0\), there is a constant \(\epsilon >0\) such that \(d>2\sigma +\epsilon \). Next, it will be shown that
is bounded almost surely, that is to say
which implies \(\mu _{\Lambda (\omega )}\) is a Carleson measure almost surely by Lemma 4.1. For any \(\alpha \), let
By [5, Lem. 2.8], we have
thus \(X_{m,\alpha }\) follows the binomial distribution \(B(c_{m,\alpha }2^{-d(\alpha )d}, N_m)\) and \(\sup _{m,\alpha }c_{m,\alpha }=c<\infty \). Then
Choose \(\gamma \in \mathbb {N}\) such that \(-2\sigma \gamma +2\sigma +2\epsilon <-d\). Let
For any constant A, observe that
For any m and \(\alpha \), there is an open set \(S_{m,\alpha }\) such that
and
Let
then \(\tilde{X}_{m,\alpha }\) follows the binomial distribution \(B(c2^{-d(\alpha )d}, N_m)\) and \(X_{m,\alpha }\le \tilde{X}_{m,\alpha }\). Let
then \(\tilde{Y}_{m,\alpha }\) follows the binomial distribution
Then, by Lemma 4.2, it follows that
Since \(\epsilon +2\sigma -d<0\), choose A big enough such that \((A/2)(\epsilon +2\sigma -d)\le -2d\). Thus
On the other hand,
for some constant C. Without loss of generality, suppose \(A/4\ge C\). Then
Thus,
which means that \(S_{\alpha }\) is bounded almost surely. Thus \(\mu _{\Lambda (\omega )}\) is a Carleson measure almost surely.
On the other hand, if
then
Thus,
\(\square \)
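The binomial counts \(X_{m,\alpha }\) at the heart of this proof can be illustrated by simulation: the number of N independent, uniformly distributed points on the sphere that fall in a fixed region of surface measure p is binomial with parameters p and N. The spherical cap, dimensions, and sample sizes below are hypothetical stand-ins for the Bergman-tree cubes \(Q_\alpha \).

```python
import numpy as np

rng = np.random.default_rng(2)
d = 2                    # complex dimension: the sphere is S^{2d-1} in R^{2d}
N = 50                   # number of random points in one "generation"
trials = 2000

def uniform_sphere(n, dim, rng):
    # uniform points on the unit sphere via normalized Gaussians
    x = rng.standard_normal((n, dim))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

t = 0.5                  # a fixed spherical cap, standing in for a cube Q_alpha

def cap(pts):
    return pts[:, 0] > t

# surface measure p of the cap, estimated once from a large sample
p = cap(uniform_sphere(400_000, 2 * d, rng)).mean()

# the number of the N uniform points falling in the cap is B(p, N)
counts = np.array([cap(uniform_sphere(N, 2 * d, rng)).sum()
                   for _ in range(trials)])
print(p, counts.mean(), N * p, counts.var(), N * p * (1 - p))
```

The empirical mean and variance of the counts match the binomial values \(Np\) and \(Np(1-p)\), which is the mechanism that makes Lemma 4.2 applicable.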
We call a sequence \(\{z_j\}\) weakly separated if \(\inf _{i \ne j} \beta \left( z_{i}, z_{j}\right) >0\). On the unit ball, denote
We point out that weak separation with respect to the Bergman metric is equivalent to weak separation with respect to the pseudo-hyperbolic metric. Thus, we have the following lemma.
Lemma 4.4
([14, Lem. 3.5]) Let \(\Lambda (\omega )=\left\{ \lambda _{j}\right\} \) be a random sequence. Then the following statements hold.
-
(i)
If
$$\begin{aligned} \sum _{m}2^{-dm}N^2_m<\infty , \end{aligned}$$then \(\mathbb {P}\{\mathcal {W}(\mathbb {B}_{d}) \}=1\).
-
(ii)
If
$$\begin{aligned} \sum _{m}2^{-dm}N^2_m=\infty , \end{aligned}$$then \(\mathbb {P}\{\mathcal {W}(\mathbb {B}_{d}) \}=0\).
By Theorem 1.6, a sequence is an interpolating sequence if and only if it is weakly separated and the corresponding measure is a Carleson measure. The proof of Theorem 1.7 is now given.
Proof of Theorem 1.7
Since \(1/2\ge \sigma >0 \) and \(d\ge 2\), we have \(-d+4\sigma \le 0\). If
then
which implies
By Theorem 1.6 and Lemma 4.4, the conclusion follows.
On the other hand, if \(\sum _{m=0}^{\infty }2^{-2\sigma m}N_m=\infty \), then
By Theorem 1.6 again, it follows that
\(\square \)
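The comparison behind the first part of the proof can be sketched as follows: since \(4\sigma -d\le 0\) and all terms are non-negative,
$$\begin{aligned} \sum _{m}2^{-dm}N_m^2=\sum _{m}2^{(4\sigma -d)m}\left( 2^{-2\sigma m}N_m\right) ^2\le \left( \sum _{m}2^{-2\sigma m}N_m\right) ^2<\infty , \end{aligned}$$
so the hypotheses of both Theorem 4.3 and Lemma 4.4 hold simultaneously.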
Finally, we study uniformly separated sequences in the unit ball when \(d\ge 2\). It is well known that
A sequence \(\{z_j\}\) is uniformly separated if \(\inf _{k}\prod _{j\ne k}\rho (z_{j},z_k)>0\), where \(\rho \) is the pseudo-hyperbolic distance on the unit ball. Let
We will also need an important lemma in analysis; see [15, Prop. 1.4.10].
Lemma 4.5
For \(z \in \mathbb {B}_d\) and \(c\in \mathbb {R}\), let
If \(c<0\), then \(I_{c}\) is bounded in \(\mathbb {B}_d\). If \(c>0\), then
Finally,
Proposition 4.6
Let \(\Lambda (\omega )=\left\{ \lambda _{j}\right\} \) be a random sequence. Then the following statements hold.
-
(i)
If \(d=2\) and
$$\begin{aligned} \sum _{m=0}^{\infty }N_m 2^{-m}(m+1)<\infty , \end{aligned}$$then \(\mathbb {P}\{\mathcal {U}(\mathbb {B}_{d})\}=1\).
-
(ii)
If \(d\ge 3\) and
$$\begin{aligned} \sum _{m=0}^{\infty }N_m2^{-m}<\infty , \end{aligned}$$then \(\mathbb {P}\{\mathcal {U}(\mathbb {B}_{d})\}=1\).
-
(iii)
If \(d\ge 3\) and
$$\begin{aligned} \sum _{m=0}^{\infty }N_m2^{-m}=\infty , \end{aligned}$$then \(\mathbb {P}\{\mathcal {U}(\mathbb {B}_{d})\}=0\).
Proof
First, \(\inf _{k}\prod _{j\ne k}\rho (\lambda _{j},\lambda _k)^2>0\) almost surely if and only if
Since \(-\log {x}\ge 1-x\) when \(1\ge x>0\), it follows that
Thus \(\inf _{k}\prod _{j\ne k}\rho (\lambda _{j},\lambda _k)^2>0\) almost surely implies
almost surely.
On the other hand, if \(\inf _{\lambda _j\ne \lambda _k}\rho (\lambda _j,\lambda _k)>0 \), then
Thus, in this case,
almost surely implies
almost surely.
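The two logarithmic comparisons just used can be checked numerically; the grid and the separation bound \(\delta \) below are illustrative choices.

```python
import numpy as np

x = np.linspace(1e-6, 1.0, 100_000)

# one direction: -log x >= 1 - x on (0, 1]
assert np.all(-np.log(x) >= 1.0 - x - 1e-12)

# other direction: under a lower bound x >= delta (the weakly separated case),
# -log x <= (1 - x)/x <= (1 - x)/delta, so the two sums are comparable
delta = 0.25
y = x[x >= delta]
assert np.all(-np.log(y) <= (1.0 - y) / delta + 1e-12)
print("log comparisons verified on a grid")
```

This is why convergence of \(\sum _{j\ne k}-\log \rho (\lambda _j,\lambda _k)^2\) and of \(\sum _{j\ne k}(1-\rho (\lambda _j,\lambda _k)^2)\) are equivalent once the sequence is weakly separated.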
For any constant c, consider
Next, consider two cases.
If \(d=2\), then by Lemma 4.5 we have
Substituting this estimate into the one above yields that
Thus
implies that
almost surely. Lemma 4.4 also yields that
implies that \(\Lambda (\omega )\) is weakly separated almost surely, thus
implies that \(\Lambda (\omega )\) is uniformly separated almost surely.
If \(d\ge 3\), by Lemma 4.5, then
Then
By an argument similar to that for the case \(d=2\),
implies that \(\Lambda (\omega )\) is separated almost surely. Conversely, if
then
Thus, for any \(z_k\), we have
giving the conclusion that
\(\square \)
Notes
The reader should not confuse the index \(i=0,\dots ,d\) and \(i=\sqrt{-1}\)!
References
Agler, J., McCarthy, J. E.: Interpolating sequences on the bidisk. Int. J. Math. 12(9) (2001)
Agler, J., McCarthy, J.E.: Pick interpolation and Hilbert function spaces, Graduate Studies in Mathematics, vol. 44. American Mathematical Society, Providence (2002)
Aleman, A., Hartz, M., McCarthy, J.E., Richter, S.: Interpolating sequences in spaces with the complete Pick property. Int. Math. Res. Not. 2019(12), 3832–3854 (2017)
Amar, D., Amar, E.: Sur les suites d'interpolation en plusieurs variables (French). Pac. J. Math. 75(1), 15–20 (1978)
Arcozzi, N., Rochberg, R., Sawyer, E.: Carleson measures and interpolating sequences for Besov spaces on complex balls. Mem. Am. Math. Soc. 859 (2006)
Arcozzi, N., Rochberg, R., Sawyer, E.: Carleson Measures for the Drury–Arveson Hardy space and other Besov–Sobolev spaces on complex balls. Adv. Math. 218(2), 1107–1180 (2008)
Berndtsson, B., Chang, S.Y.A., Lin, K.-C.: Interpolating sequences in the polydisc. Trans. Am. Math. Soc. 302(1), 161–169 (1987)
Billingsley, P.: Probability and Measure. Wiley, New York (1979)
Carleson, L.: An interpolation problem for bounded analytic functions. Am. J. Math. 80(4), 921–930 (1958)
Carleson, L.: Interpolation by bounded analytic functions and the corona problem. Ann. Math. Second Ser. 76(3), 547–559 (1962)
Chalmoukis, N., Hartmann, A., Kellay, K., Wick, B. D.: Random Interpolating Sequences in Dirichlet Spaces, International Mathematics Research Notices (2021)
Cochran, W.G.: Random Blaschke products. Trans. Am. Math. Soc. 322(2), 731–755 (1990)
Gorkin, P., McCarthy, J.E., Pott, S., Wick, B.: Thin sequences and the Gram matrix. Arch. Math. 103, 93–99 (2014)
Massaneda, X.: Random sequences with prescribed radii in the unit ball. Complex Var. 31(3), 193–211 (1996)
Rudin, W.: Function Theory in the Unit Ball of \(\mathbb{C}^n\). Springer, New York (1980)
Rudowicz, R.: Random sequences interpolating with probability one. Bull. Lond. Math. Soc. 26(2), 160–164 (1994)
Funding
Open access funding provided by NTNU Norwegian University of Science and Technology (incl St. Olavs Hospital - Trondheim University Hospital).
Communicated by Thomas Ransford.
Alberto Dayan was partially supported by National Science Foundation Grant DMS 1565243, Brett D. Wick is partially supported by National Science Foundation DMS grant 1800057, and by Australian Research Council—DP 190100970, Shengkun Wu is supported by CSC201906050022.
Dayan, A., Wick, B.D. & Wu, S. Random Interpolating Sequences in the Polydisc and the Unit Ball. Comput. Methods Funct. Theory 23, 165–198 (2023). https://doi.org/10.1007/s40315-022-00448-2