Random Interpolating Sequences in the Polydisc and the Unit Ball

We study almost sure separating and interpolating properties of random sequences in the polydisc and the unit ball. In the unit ball, we obtain the 0-1 Kolmogorov law for a sequence to be interpolating almost surely for all the Besov-Sobolev spaces $B_{2}^{\sigma}\left(\mathbb{B}_{d}\right)$, in the range $0<\sigma\leq 1/2$. For those spaces, such interpolating sequences coincide with interpolating sequences for their multiplier algebras, thanks to the Pick property. This is not the case for the Hardy space $\mathrm{H}^2(\mathbb{D}^d)$ and its multiplier algebra $\mathrm{H}^\infty(\mathbb{D}^d)$: in the polydisc, we obtain a sufficient condition and a necessary condition for a sequence to be $\mathrm{H}^\infty(\mathbb{D}^d)$-interpolating almost surely. These two conditions do not coincide, because the deterministic starting point is less descriptive of interpolating sequences than its counterpart for the unit ball. On the other hand, we give the 0-1 law for random interpolating sequences for $\mathrm{H}^2(\mathbb{D}^d)$.


Introduction
A sequence $Z = (z_n)_{n\in\mathbb{N}}$ in the unit disc $\mathbb{D}$ is interpolating for $H^\infty$ if, given any bounded sequence $(w_n)_{n\in\mathbb{N}}$ in $\mathbb{C}$, there exists a bounded analytic function $f$ on $\mathbb{D}$ such that $f(z_n) = w_n$ for every $n$ in $\mathbb{N}$. The celebrated work of Carleson, [9] and [10], characterized interpolating sequences in terms of separation properties. To be precise, let $\rho$ denote the pseudo-hyperbolic distance on $\mathbb{D}$; a sequence $Z$ is weakly separated if $\inf_{n\neq k}\rho(z_n, z_k) > 0$. Carleson proved in [9] that $Z$ is interpolating if and only if it is uniformly separated. Later on, in [10], he characterized uniform separation in terms of a measure theoretic condition and weak separation:

Theorem 1.1 (Carleson). A sequence $Z$ in $\mathbb{D}$ is uniformly separated if and only if it is weakly separated and the measure $\mu_Z := \sum_{n\in\mathbb{N}} (1-|z_n|^2)\,\delta_{z_n}$ is a Carleson measure.

Throughout this note, a measure $\mu$ on a domain $D$ will be a Carleson measure for a reproducing kernel Hilbert space $\mathcal{H}_k$ of holomorphic functions on $D$ if
$$\int_D |f|^2\, d\mu \leq C\, \|f\|^2_{\mathcal{H}_k}$$
for some $C > 0$. Later sections will take $D = \mathbb{D}^d$, the unit polydisc, or $D = \mathbb{B}_d$, the unit ball, respectively: the kernels that we are going to choose for such domains are the Szegö kernel on the polydisc and the Besov-Sobolev kernels on the unit ball.
In certain instances, the randomization of the conditions studied by Carleson becomes more tractable and provides insight into the structure of interpolating sequences. Cochran studied in [12] separation properties of random sequences. A random sequence in the unit disc is defined as follows: let $(\theta_n)_{n\in\mathbb{N}}$ be a sequence of independent random variables, all distributed uniformly in $(0, 2\pi)$ and defined on the same probability space $(\Omega, \mathcal{A}, \mathbb{P})$. Then, for any choice of a deterministic sequence of radii $(r_n)_{n\in\mathbb{N}}$ approaching $1$, define
$$\lambda_n(\omega) := r_n e^{i\theta_n(\omega)}, \qquad \omega \in \Omega.$$
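To fix ideas, the construction above can be simulated numerically. The following sketch is our own illustration, not part of the paper's argument: it generates one sample of a random sequence with dyadic radii and inspects the smallest pairwise pseudo-hyperbolic distance of that sample. All helper names are ours.

```python
import cmath
import random

def pseudo_hyperbolic(z, w):
    # rho(z, w) = |z - w| / |1 - conj(z) w|, the pseudo-hyperbolic distance in the disc
    return abs(z - w) / abs(1 - z.conjugate() * w)

def random_sequence(radii, rng):
    # Attach independent arguments theta_n, uniform in (0, 2*pi), to fixed radii r_n
    return [r * cmath.exp(1j * rng.uniform(0.0, 2.0 * cmath.pi)) for r in radii]

rng = random.Random(0)
radii = [1.0 - 2.0 ** (-n) for n in range(1, 8)]  # deterministic radii approaching 1
seq = random_sequence(radii, rng)

# Weak separation asks that inf_{n != k} rho(lambda_n, lambda_k) > 0; for one
# sample we can only inspect the minimum over the finitely many pairs drawn.
min_rho = min(pseudo_hyperbolic(a, b)
              for i, a in enumerate(seq) for b in seq[i + 1:])
```

Repeating the experiment with different seeds gives an empirical feel for how often a given radial distribution produces well-separated samples.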
Considering the random sequence $\Lambda(\omega) = (\lambda_n(\omega))_{n\in\mathbb{N}}$, the 0-1 Kolmogorov law yields that events such as $\mathcal{W} := \{\Lambda \text{ is weakly separated}\}$ have probability either $0$ or $1$. All the randomness of the sequence is in the arguments of the points of $\Lambda$, and therefore the counting sequence $(N_j)_{j\in\mathbb{N}}$, where $N_j$ denotes the number of points of $\Lambda$ with $2^{-(j+1)} < 1 - r_n \leq 2^{-j}$, is a deterministic sequence. Cochran proved in [12, Th. 2] that $\mathbb{P}(\mathcal{W}) = 1$ provided that
$$\sum_{j\in\mathbb{N}} N_j^2\, 2^{-j} < \infty, \tag{1.3}$$
and that $\mathbb{P}(\mathcal{W}) = 0$ whenever the sum in (1.3) diverges. Later on, Rudowicz showed in [15] that (1.3) is a sufficient condition for $\mu_\Lambda$ to be a Carleson measure for $H^2(\mathbb{D})$ almost surely, and concluded, thanks to Theorem 1.1, that $\mathbb{P}(\mathcal{I}) = 1$ if and only if (1.3) holds. In particular, condition (1.3) encodes all those random sequences for which $\mathcal{W}$, $\mathcal{U}$ and $\mathcal{I}$ all have probability one. The goal of this paper is to study random interpolating sequences on the polydisc and the $d$-dimensional unit ball. A sequence $Z = (z_n)_{n\in\mathbb{N}}$ in $\mathbb{D}^d$ is interpolating for $H^\infty(\mathbb{D}^d)$ if, given any bounded $(w_n)_{n\in\mathbb{N}}$ in $\mathbb{C}$, there exists a bounded holomorphic function $f$ on $\mathbb{D}^d$ such that $f(z_n) = w_n$ for all $n$. On the polydisc, the deterministic starting point is the following (partial) analogue of Carleson's interpolation theorem for sequences in the polydisc [7]:

Theorem 1.2 (Berndtsson, Chang and Lin). Let $Z = (z_n)_{n\in\mathbb{N}}$ be a sequence in $\mathbb{D}^d$, and let (a), (b) and (c) denote the following statements: (a) $Z$ is uniformly separated; (b) $Z$ is interpolating for $H^\infty(\mathbb{D}^d)$; (c) $Z$ is weakly separated. Then (a) $\Rightarrow$ (b) $\Rightarrow$ (c), and none of the converse implications hold.
Conditions (1.4) and (1.5) are separation conditions, both stated in terms of the so-called Gleason distance on the polydisc. Throughout this note, (1.4) will refer to uniform separation on the polydisc, while (1.5) defines a weakly separated sequence on the polydisc. Theorem 1.2 represents one of the best known attempts to characterize $H^\infty(\mathbb{D}^d)$-interpolating sequences on the polydisc in terms of its hyperbolic geometry. One can find a characterization of interpolating sequences for bounded analytic functions on the bi-disc in [1], stated in terms of uniform separation conditions on an entire class of reproducing kernels on $\mathbb{D}^2$. The motivation of the first part of this note is to find out whether conditions (a) and (c) of Theorem 1.2 are equivalent at least almost surely. A negative answer would imply that Theorem 1.2 is far from being a characterization. A positive answer would give the 0-1 Kolmogorov law for $H^\infty(\mathbb{D}^d)$-interpolating sequences in the polydisc with random arguments. The construction of a random sequence $\Lambda$ on the polydisc follows the same outline as in the case of the unit disc. Let $\mathbb{T}^d$ be the $d$-dimensional torus in $\mathbb{C}^d$, and let $(\theta^1_n, \dots, \theta^d_n)_{n\in\mathbb{N}}$ be a sequence of independent and identically distributed random variables taking values on $\mathbb{T}^d$, all distributed uniformly and defined on the same probability space $(\Omega, \mathcal{A}, \mathbb{P})$. Let $(r_n)_{n\in\mathbb{N}}$ be a sequence in $[0,1)^d$, and define a random sequence accordingly. Our first aim is to give necessary conditions and sufficient conditions for $\Lambda$ to be interpolating for $H^\infty(\mathbb{D}^d)$ almost surely. This will be achieved by studying separately the probability of the events $\mathcal{W}(\mathbb{D}^d)$, $\mathcal{U}(\mathbb{D}^d)$ and $\mathcal{C}(H^2(\mathbb{D}^d))$. The first main result partially extends Cochran's and Rudowicz's works to the polydisc. Observe that the case $d = 1$ yields Rudowicz's and Cochran's characterization of random interpolating sequences on the unit disc. In general, part (i) of the above theorem gives the 0-1 Kolmogorov law for a sequence to be weakly separated. Parts (ii) and (iii) give a sufficient condition for a sequence to be almost surely uniformly separated and to generate a Carleson measure for the Hardy space in the polydisc. In particular, thanks to Theorem 1.2, the 0-1 Kolmogorov law for almost surely interpolating sequences for $H^\infty(\mathbb{D}^d)$ lies somewhere in between (1.8) and (1.7). Proposition 3.3 will give an example of a class of random sequences for which the 0-1 Kolmogorov law for almost surely $H^\infty(\mathbb{D}^d)$-interpolating sequences coincides with the sum in (1.7). Whether this is the case for a general choice of the radii $(r_n)_{n\in\mathbb{N}}$ remains, for us, open. Nevertheless, we will observe in Section 3.4 how (1.7) implies that the Szegö Grammian for a random sequence in the polydisc differs from the identity only by a Hilbert-Schmidt operator, a rather strong separation condition for the random kernel functions in the Hardy space associated to $\Lambda$. In particular, this will give the 0-1 law for a random sequence $\Lambda$ to be interpolating for $H^2(\mathbb{D}^d)$. In the deterministic setting, a sequence is interpolating for $H^2(\mathbb{D}^d)$ if and only if the associated restriction operator is surjective and bounded. This, in particular, is equivalent to asking that the Szegö
Grammian associated to $(z_n)_{n\in\mathbb{N}}$ is bounded above and below. Given a random sequence $\Lambda$, denote by $\tilde{\mathcal{I}}(\mathbb{D}^d)$ the event that $\Lambda$ is interpolating for $H^2(\mathbb{D}^d)$. Any $H^\infty(\mathbb{D}^d)$-interpolating sequence on $\mathbb{D}^d$ is also $H^2(\mathbb{D}^d)$-interpolating, and the converse does not hold, since $H^2(\mathbb{D}^d)$ does not have the Pick property (for an example of a sequence which is $H^2(\mathbb{D}^2)$-interpolating but not $H^\infty(\mathbb{D}^2)$-interpolating, see [4]). We show that $\tilde{\mathcal{I}}(\mathbb{D}^d)$ obeys the same 0-1 law as $\mathcal{W}(\mathbb{D}^d)$:

Theorem 1.5. Let $\Lambda$ be a random sequence in $\mathbb{D}^d$. Then $\mathbb{P}(\tilde{\mathcal{I}}(\mathbb{D}^d)) = 1$ if and only if the sum in (1.7) converges.

Related questions about interpolation for function spaces on the unit ball in $\mathbb{C}^d$ are also considered. The authors of [11] studied interpolating sequences in the Dirichlet spaces over the unit disc, and this serves as part of the motivation for the results in the ball. Section 4 will generalize some theorems of [11] to the unit ball. Because the Besov-Sobolev spaces generalize the Dirichlet spaces, random interpolating sequences in the Besov-Sobolev spaces $B^\sigma_2(\mathbb{B}_d)$ are studied, where $0 < \sigma < \infty$. In [6], a characterization of interpolating sequences in the Besov-Sobolev spaces in the case $0 < \sigma \leq \frac{1}{2}$ was given. Because a characterization exists only in this range, that is the case focused upon in this paper.
Let $\mathbb{B}_d$ be the unit ball in $\mathbb{C}^d$. Let $dz$ be Lebesgue measure on $\mathbb{C}^d$ and let $d\lambda_d(z) = (1-|z|^2)^{-d-1}\,dz$ be the invariant measure on the ball. For an integer $m \geq 0$, and for $0 < \sigma < \infty$, $1 < p < \infty$, $m + \sigma > d/p$, define the analytic Besov-Sobolev space $B^\sigma_p(\mathbb{B}_d)$ to consist of those holomorphic functions $f$ on the ball whose natural norm, defined in terms of the $m$-th order complex derivative $f^{(m)}$, is finite. The spaces $B^\sigma_p(\mathbb{B}_d)$ are independent of $m$ and are Banach spaces. A Carleson measure for $B^\sigma_p(\mathbb{B}_d)$ is a positive measure defined on $\mathbb{B}_d$ such that the corresponding Carleson embedding holds for $f \in B^\sigma_p(\mathbb{B}_d)$. Namely, in a different fashion with respect to the polydisc case, the deterministic setting for the Besov-Sobolev spaces has its interpolating sequences well understood and characterized by weak separation and a Carleson measure condition. Therefore, in order to find the 0-1 Kolmogorov law for interpolating sequences for $B^\sigma_2$, it suffices to find the cut-off conditions on the deterministic radii for the associated sequence with randomly chosen arguments to be weakly separated and to generate a Carleson measure almost surely. This is the intent of the second part of our work. Random sequences in the unit ball are constructed as follows. Let $\Lambda(\omega) = \{\lambda_j\}$ with $\lambda_j = \rho_j \xi_j(\omega)$, where $(\xi_j(\omega))_j$ is a sequence of independent random variables, all uniformly distributed on the unit sphere, and $(\rho_j)_j \subset [0,1)$ is a sequence of a priori fixed radii. An interesting feature of random interpolating sequences in the Besov-Sobolev spaces on the unit ball is the following: as we will see, for $d \geq 2$ a random sequence $\{\lambda_n\}$ is an interpolating sequence almost surely if and only if a summability condition on the quantities $1 - \rho_n^2$ holds. Moreover, the characterization of almost surely interpolating sequences is strictly stronger than the characterization of almost surely weakly separated sequences.
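As a concrete illustration (ours, not the paper's), points $\xi_j$ uniformly distributed on the unit sphere of $\mathbb{C}^d$ can be sampled by normalizing standard complex Gaussian vectors, whose distribution is invariant under unitaries; the helper names below are our own.

```python
import math
import random

def uniform_on_sphere(d, rng):
    # Normalize a standard complex Gaussian vector in C^d: unitary invariance
    # of the Gaussian law makes the normalized vector uniform on the unit sphere.
    v = [complex(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(d)]
    norm = math.sqrt(sum(abs(z) ** 2 for z in v))
    return [z / norm for z in v]

def random_ball_sequence(rho, d, rng):
    # lambda_j = rho_j * xi_j, with xi_j i.i.d. uniform on the sphere
    return [[r * z for z in uniform_on_sphere(d, rng)] for r in rho]

rng = random.Random(1)
rho = [1.0 - 2.0 ** (-j) for j in range(1, 6)]  # a priori fixed radii
seq = random_ball_sequence(rho, 2, rng)
```

Each sampled point then lies on the sphere of radius $\rho_j$, so all the randomness sits in the "angular" part, exactly as in the construction above.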
For any $m \in \mathbb{N}$, consider the counting function defined in terms of the Bergman metric $\beta$ on the unit ball $\mathbb{B}_d$ in $\mathbb{C}^d$. The following result is obtained regarding a 0-1 Kolmogorov law for interpolating sequences on the unit ball. We only work in the case $0 < \sigma \leq 1/2$. In our setting, we have the following.
Section 2 will construct the necessary technical tools for the proofs of our main results. Section 3 provides the proofs of Theorem 1.3 and Theorem 1.5, and characterizes random interpolating sequences for $H^\infty(\mathbb{D}^d)$ for some specific choices of the radii $(r_n)_{n\in\mathbb{N}}$. Finally, Section 4 proves Theorem 1.7 and studies uniform separation on the unit ball.
We would like to thank Nikolaos Chalmoukis for some useful comments that led to the final version of Theorem 1.3.We would also like to thank the referees for their valuable suggestions.

Preliminary Results
This section contains relatively general results that are going to be used throughout the proof of Theorem 1.3. Deterministic and probabilistic tools will be analyzed separately.
2.1. Deterministic tools. Double sums are used extensively throughout this work. In particular, we use the fact that, for a certain class of double sums involving exponential decay, the terms on the diagonal contain all the information necessary to bound the whole sum:

Lemma 2.1. Let $s \geq 1$, and let $(A_m)_{m\in\mathbb{N}}$ and $(B_k)_{k\in\mathbb{N}}$ be two sequences of positive numbers. Then there exists some constant $C(s) > 0$, depending only on $s$, controlling the corresponding double sum by its diagonal terms.

Proof. We first estimate the sum over $k > m$, thanks to Hölder's inequality with dual exponents $1 + 1/s$ and $s + 1$. The sum over $m > k$ is estimated analogously. This concludes the proof.
Our takeaway from Lemma 2.1 is the following Corollary 2.2. Let $s \geq 1$, $d \geq 1$ and let $(N_m)_{m\in\mathbb{N}^d}$ be a sequence of positive numbers so that (2.1)

Then
(2.2) Proof. The proof is by induction on $d$: assume (2.2) is true for $d-1$, and let $(N_m)_{m\in\mathbb{N}^d}$ be a sequence of positive numbers. Then apply Lemma 2.1, where the index $m$ in $\mathbb{N}^d$ is written as $(m_1, \overline{m})$, with $m_1$ in $\mathbb{N}$ and $\overline{m}$ in $\mathbb{N}^{d-1}$.
Observe that, thanks to (2.1), $I_1$ and $I_2$ converge. As for $I_3$, we can change the order of summation and apply the case $d-1$, which yields the claim.

2.2. Random tools. Fairly elementary facts from probability theory are exploited in the proofs. All the events and random variables that are considered will be defined on the same probability space $(\Omega, \mathcal{A}, \mathbb{P})$. For a comprehensive treatment of the probabilistic results used, see [8].
The first tool is the Borel-Cantelli Lemma. Recall that, given a sequence $(A_n)_{n\in\mathbb{N}}$ of events in $\mathcal{A}$, $\limsup_n A_n$ denotes the event consisting of those $\omega$ in $\Omega$ that belong to infinitely many of the events $A_n$. Given a random variable $X$ on $\Omega$, its mean value (or expectation) will be denoted by $\mathbb{E}(X)$. In particular, if $\mathbb{E}(X) < \infty$, then $\mathbb{P}\{X = \infty\} = 0$. Another classic tool from probability that will be used is Jensen's inequality:

Theorem 2.4 (Jensen's Inequality). Let $X$ be a real-valued random variable on $\Omega$, and let $\varphi : \mathbb{R} \to \mathbb{R}$ be a convex function. Then $\varphi(\mathbb{E}(X)) \leq \mathbb{E}(\varphi(X))$.

In particular, since $t \mapsto t^{1/s}$ is concave for any $s \geq 1$, this gives $\mathbb{E}(X^{1/s}) \leq \mathbb{E}(X)^{1/s}$ for any positive random variable $X$ on $\Omega$, by applying Jensen's inequality to $\varphi(t) = -t^{1/s}$. We can now prove Lemma 2.5, a tool for the proof of Theorem 1.3:

Lemma 2.5. Let $(X^i_{n,j})_{n,j\in\mathbb{N}}$ be a sequence of positive random variables, for any $i = 1, \dots, d$, and assume the corresponding summability condition. Then the associated supremum is bounded almost surely.
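The concavity step above can be checked on an empirical average. Since Jensen's inequality also holds for the empirical distribution of any finite sample, the comparison in this short sketch (our illustration, with an exponential variable chosen purely for concreteness) is exact, not merely approximate:

```python
import random

rng = random.Random(2)
s = 3.0
# A positive random variable X; the exponential law is just an example.
sample = [rng.expovariate(1.0) for _ in range(10_000)]

# Concavity of t -> t**(1/s) for s >= 1 gives E(X**(1/s)) <= E(X)**(1/s);
# applied to the empirical measure of the sample, the inequality is exact.
lhs = sum(x ** (1.0 / s) for x in sample) / len(sample)
rhs = (sum(sample) / len(sample)) ** (1.0 / s)
```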
Proof. Since, for any $n \neq j$ in $\mathbb{N}$, $m(n,j) \leq p(n,j)$ ...

Random Sequences in the Polydisc
This section is devoted to the proofs of Theorem 1.3 and Theorem 1.5. The events $\mathcal{U}(\mathbb{D}^d)$, $\mathcal{W}(\mathbb{D}^d)$, $\mathcal{C}(H^2(\mathbb{D}^d))$ and $\tilde{\mathcal{I}}(\mathbb{D}^d)$ will be analyzed separately.
3.1. Weak Separation. For weak separation in the polydisc, it turns out that Cochran's argument in [12, Th. 2] extends to the higher dimensional case:

Proof of Theorem 1.3, (i). For the sake of readability, we adapt Cochran's proof only to the case $d = 2$: the proof lifts appropriately to any $d > 1$. Assume first that $\sum_{m\in\mathbb{N}^2} N_m^2\, 2^{-|m|} = \infty$ and let $l$ be in $\mathbb{N}$. Define $A_l$ as the set of those $\omega$ in $\Omega$ for which there exists a pair of distinct indices $n$ and $r$ such that the Gleason distance between $\lambda_n(\omega)$ and $\lambda_r(\omega)$ is controlled by, roughly, $2^{-l}$. Since $\mathcal{W}(\mathbb{D}^d)^c \subseteq \bigcap_{l\in\mathbb{N}} A_l$, it suffices to show that $\mathbb{P}(A_l) = 1$ for any $l$ in $\mathbb{N}$.
For any $m$ in $\mathbb{N}^2$, partition $I_m$ into $2^{2l}$ "rectangles" of the prescribed form, and observe that at least one of these rectangles, say $R_m$, must contain a definite proportion of the points of $\Lambda$ in $I_m$. Since the events $B_m$ are independent, by the Borel-Cantelli Lemma (Theorem 2.3) it suffices to show that $\sum_{m\in\mathbb{N}^2} \mathbb{P}(B_m) = \infty$.
In order to estimate the probability of each $B_m$ from below, we give an upper bound for $\mathbb{P}(B_m^c)$. If $\tau$ is in $\mathbb{T}^2$, let $S_m(\tau)$ be a "rectangle" in $\mathbb{T}^2$ centered at $\tau$ with base $2^{-(m_1+l)}$ and height $2^{-(m_2+l)}$. If $\tau_n = (e^{i\theta^1_n}, e^{i\theta^2_n})$, then the independence of $(\tau_n)_{n\in\mathbb{N}}$ yields the desired bound. If $\liminf_m \mathbb{P}(B_m^c) < 1$, then $\mathbb{P}(B_m)$ is uniformly bounded away from $0$ infinitely many times, and $\sum_{m\in\mathbb{N}^2} \mathbb{P}(B_m) = \infty$ trivially.
On the other hand, if $\lim_{|m|\to\infty} \mathbb{P}(B_m^c) = 1$, then $\mathbb{P}(B_m)$ is comparable to the general term of a divergent series. To conclude the proof of Theorem 1.3, part (i), it suffices to show that a random sequence $\Lambda$ in $\mathbb{D}^d$ is almost surely weakly separated whenever (1.7) holds. To do so, the Borel-Cantelli Lemma provides that, almost surely, any pair $(\lambda_n, \lambda_r)$ in all but finitely many "rectangles" $I_m$ satisfies (3.1). The same argument applies to the right-shifted "rectangles" $I'_m$ and to the up-shifted "rectangles" $I''_m$. This ensures the required separation for all but finitely many pairs $(\lambda_n, \lambda_r)$ in $\Lambda$.

3.2. Uniform Separation. While weak separation behaves essentially in the same way as the dimension $d$ grows, the sufficient condition (1.8) for almost sure uniform separation picks up a dependence on $d$. As will be shown, this is due to some estimates on the expected value of quantities related to the (random) Gleason distances between the points of $\Lambda$.
It will also be explained how (1.8) can be improved for some choices of $(r_n)_{n\in\mathbb{N}}$. As a corollary, a cut-off condition for $\Lambda$ to be almost surely $H^\infty(\mathbb{D}^d)$-interpolating will be given for some types of random sequences in the polydisc.
Let the Gleason distance on $\mathbb{D}^d$ be defined coordinate-wise, and observe that, for any $z$ and $w$ in $\mathbb{D}^d$, inequality (3.2) holds. Given a random sequence $\Lambda$ in $\mathbb{D}^d$, denote, for the sake of readability, the random variables $S_i(n,j)$ accordingly. Thanks to (3.2), uniform separation can be achieved from weak separation and a uniform bound on sums depending on the random sequences $(S_i(n,j))_{n,j\in\mathbb{N}}$. Observe that each $(S_i(n,j))_{n,j\in\mathbb{N}}$ is a sequence of random variables on $\Omega$ which is determined, together with $\Lambda$, by $(r_n)_{n\in\mathbb{N}}$. It is not surprising, then, that the expectation of $|S_i(n,j)|^2$ depends, for any $i$, $n$ and $j$, only on $r^i_n$ and $r^i_j$:

Lemma 3.1. Let $\Lambda$ be a random sequence in $\mathbb{D}^d$. Then, for any $n \neq j$ in $\mathbb{N}$ and for any $i = 1, \dots, d$, the stated identity holds. The proof makes use of the independence of $\theta^i_n$ and $\theta^i_j$.

Remark 3.2. Let $m$ and $k$ be two multi-indices in $\mathbb{N}^d$, and suppose that $\lambda_n$ and $\lambda_j$ belong to $I_m$ and $I_k$, respectively. Then, thanks to Lemma 3.1 and (1.6), the expectation can be estimated. In particular, since $S_i(n,j)$ and $S_r(n,j)$ are independent for any $i \neq r$, we obtain the corresponding product bound.

Part (ii) of Theorem 1.3 can now be proved:

Proof of Theorem 1.3, (ii). Observe that (1.8) implies (1.7) whenever $N_m \leq 2^{|m|}$, and so under our assumption $\Lambda$ is weakly separated, thanks to Theorem 1.3, part (i). Therefore, thanks to (3.3), it suffices to show that the random sequence $(S_n)_{n\in\mathbb{N}}$ defined by the corresponding sums is bounded almost surely. Thanks to Lemma 2.5, it is enough to show that (3.4) holds, which follows by regrouping the terms of the double sum in (3.4). Condition (1.8) is not sharp. Indeed, for some choices of $(r_n)_{n\in\mathbb{N}}$, we can show that the 0-1 Kolmogorov law for $H^\infty(\mathbb{D}^d)$-interpolating sequences coincides with the one for weak separation:

Proposition 3.3. Let $d = 2$, let $(t_n)_{n\in\mathbb{N}}$ be a sequence in $(0,1)$, and consider its Cartesian product with itself. Then the random sequence $\Lambda$ associated with $(r_n)_{n\in\mathbb{N}^2}$ is interpolating almost surely under the corresponding summability condition; otherwise $\Lambda$ is not weakly separated almost surely, and in particular it is almost surely not interpolating. Thus it suffices to show that $\Lambda$ is almost surely interpolating in the convergent case.
By Rudowicz's theorem [15], the random sequence $T$ on $\mathbb{D}$ given by the radii $(t_n)_{n\in\mathbb{N}}$ is almost surely interpolating in $\mathbb{D}$, where $(\theta_n)_{n\in\mathbb{N}}$ is a sequence of i.i.d. random variables defined on a probability space $(\Omega, \mathcal{A}, \mathbb{P})$ and distributed uniformly on the unit circle. In particular, $T$ almost surely has a sequence of so-called P. Beurling functions; that is, there exists an event $\Omega'$ with $\mathbb{P}(\Omega') = 1$ such that, for any $\omega$ in $\Omega'$, there exists a sequence of $H^\infty(\mathbb{D})$ functions $(F_{\omega,n})_{n\in\mathbb{N}}$ with the required interpolation and summability properties. Let us now consider the product probability space $(\tilde{\Omega}, \tilde{\mathcal{A}}, \tilde{\mathbb{P}})$, where $\tilde{\Omega} := \Omega \times \Omega$ and $\tilde{\mathcal{A}}$ is the product $\sigma$-algebra of $\mathcal{A}$ with itself. Then the random variables so obtained are uniformly distributed on $\mathbb{T}^2$ and independent. Thus we can think of the random sequence $\Lambda$ as defined on the product space. Let $\Omega'' := \Omega' \times \Omega'$ and define, for any $n = (n_1, n_2)$, the products of the corresponding Beurling functions. Then $(G_{\omega,n})_{n\in\mathbb{N}^2}$ is a set of P. Beurling functions for $\Lambda(\omega)$, and in particular $\Lambda(\omega)$ is interpolating. The argument in Proposition 3.3 extends easily to any $d > 1$ to show that, whenever the sequence of radii $(r_n)_{n\in\mathbb{N}}$ is the Cartesian product of $d$ sequences in $[0,1)$, then (1.7) encodes all random sequences that are almost surely interpolating for $H^\infty(\mathbb{D}^d)$.
For a general choice of $(r_n)_{n\in\mathbb{N}}$ the following question remains open:

Question 1. Is every random sequence $\Lambda$ in $\mathbb{D}^d$ satisfying (1.7) uniformly separated? Or else, does there exist a choice of $(r_n)_{n\in\mathbb{N}}$ so that the random sequence $\Lambda$ obtained is almost surely weakly separated but not uniformly separated?

3.3. Carleson Measures. The same idea that was used for random uniform separation works for the proof of Theorem 1.3, part (iii), modulo some adaptations. Let $Z = (z_n)_{n\in\mathbb{N}}$ be a sequence in $\mathbb{D}^d$ and consider the Szegö Grammian $G$ associated with the sequence $Z$.

Theorem 3.4. The following are equivalent: $\mu_Z$ is a Carleson measure for $H^2(\mathbb{D}^d)$; $G$ is bounded.

A proof of Theorem 3.4 can be found in [2, Th. 9.5]. Moreover, a standard operator theory argument gives that any sufficiently strong decay of the coefficients of $G$ outside its diagonal implies that $G$ is bounded (above and below):

Lemma 3.5. Let $A = (a_{n,j})_{n,j\in\mathbb{N}} : \ell^2 \to \ell^2$ be invertible and self-adjoint. Suppose that $a_{i,i} = 1$ for any $i$ in $\mathbb{N}$, and that (3.5) holds. Then $A$ is bounded above and below.
Proof. Such an $A$ can be written as $A = \mathrm{Id} + H$, where $H$ is a Hilbert-Schmidt operator. Let $(y_n)_{n\in\mathbb{N}}$ be the sequence of eigenvalues of $A$, and let $(x_n)_{n\in\mathbb{N}}$ be the eigenvalues of $H$. Since $H$ is a Hilbert-Schmidt operator, $\sum_n |x_n|^2 < \infty$, and since $A = \mathrm{Id} + H$ we have that $y_n = 1 + x_n$ for any $n$. Since $A$ is invertible, none of the $y_n$ vanishes. Moreover, being a self-adjoint infinite matrix, $A$ is bounded above by $\sup_{n\in\mathbb{N}} |y_n|$ and bounded below by $\inf_{n\in\mathbb{N}} |y_n|$. Since $x_n$ converges to $0$, the two quantities are finite and positive, respectively, hence the result.

Remark 3.6. In the above proof one uses only the fact that $x_n$ goes to $0$ as $n \to \infty$. Therefore the same conclusion holds if we assume $H$ to be compact.
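A finite-dimensional sanity check of Lemma 3.5 can be run numerically. The sketch below is ours: it uses Gershgorin's circle theorem rather than the spectral argument above, and an assumed geometric off-diagonal decay $a_{ij} = q^{|i-j|}$. If the off-diagonal row sums of a truncation of $A = \mathrm{Id} + H$ stay below $1$, every eigenvalue lies in $[1 - R, 1 + R]$ with $R < 1$, so the truncated matrix is bounded above and below.

```python
# Truncated analogue of A = Id + H with geometrically decaying off-diagonal
# entries a_{ij} = q**|i - j|, q = 1/4 (an assumed decay rate, for illustration).
n = 50
q = 0.25

def entry(i, j):
    return 1.0 if i == j else q ** abs(i - j)

# Gershgorin radius of row i: the sum of off-diagonal absolute values.
radii = [sum(entry(i, j) for j in range(n) if j != i) for i in range(n)]
max_radius = max(radii)
# Every eigenvalue of this symmetric matrix lies in [1 - max_radius, 1 + max_radius];
# here max_radius <= 2q/(1-q) = 2/3 < 1, so the truncation is invertible with
# uniformly bounded inverse, independently of n.
```

The bound $2q/(1-q)$ is uniform in the truncation size, which is what makes the finite-dimensional picture consistent with the infinite-dimensional lemma.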
Let $\Lambda$ be a random sequence in $\mathbb{D}^d$. Thanks to Lemma 3.5, to show that $\mathbb{P}(\mathcal{C}(H^2(\mathbb{D}^d))) = 1$ it is enough to show that the random Grammian associated to $\Lambda$ almost surely has a strong decay outside its diagonal. Corollary 2.2 with $s = 1$ concludes the proof.

3.4. Almost Orthogonal Random Grammians. Equation (3.5) is a rather strong condition on an infinite matrix $A$. Indeed, other than implying that $A$ is bounded, it says that $A - \mathrm{Id}$ is a Hilbert-Schmidt operator on $\ell^2$, i.e., that $\sum_n \|(A - \mathrm{Id})e_n\|^2 < \infty$ for any choice of an orthonormal basis $(e_n)_{n\in\mathbb{N}}$ of $\ell^2$. If $A = G$ is a Szegö Grammian associated to a sequence $Z = (z_n)_{n\in\mathbb{N}}$ in the polydisc, it is natural to ask whether such an almost orthogonality condition on the kernels at the points of $Z$ translates into interpolation properties of the sequence:

Question 2. Is a sequence $Z$ interpolating for $H^\infty(\mathbb{D}^d)$, provided that its Szegö Grammian can be written as $G = \mathrm{Id} + H$, where $H$ is a Hilbert-Schmidt operator on $\ell^2$?
The case $d = 1$ of Question 2 has a positive answer. For any sequence $Z$ in the unit disc, let $\delta_n$ be the hyperbolic distance from $z_n$ to the rest of the sequence. By Carleson's interpolation theorem, $Z$ is interpolating if and only if $\inf_{n\in\mathbb{N}} \delta_n > 0$. On the other hand, by [13], $G - \mathrm{Id}$ is a Hilbert-Schmidt operator if and only if a quantitative strengthening of this condition holds, giving that $Z$ is interpolating rather comfortably.
Another motivation to answer Question 2 comes from random interpolating sequences for $H^\infty(\mathbb{D}^d)$. We proved in Section 3.3 that the random Grammian associated to a random sequence $\Lambda$ in the polydisc differs from the identity by a Hilbert-Schmidt operator, provided that the sum in (1.7) converges. Conversely, if $Z$ is not weakly separated, then infinitely many entries outside the diagonal of its Szegö Grammian are arbitrarily close to $1$ in absolute value, hence $G - \mathrm{Id}$ is not Hilbert-Schmidt. In particular, a positive answer to Question 2 would imply that the event $\mathcal{I}(\mathbb{D}^d)$ follows the same 0-1 law as (3.7), giving the 0-1 law for random $H^\infty(\mathbb{D}^d)$-interpolating sequences. Moreover, (3.7) helps in understanding interpolating sequences for $H^2(\mathbb{D}^d)$, and it implies Theorem 1.5. Indeed, any invertible Szegö Grammian $(S^d(z_n, z_j))_{n,j\in\mathbb{N}}$ that can be written as $G = \mathrm{Id} + H$, where $H$ is Hilbert-Schmidt, is bounded above and below, thanks to Lemma 3.5, which in turn is equivalent to $(z_n)_{n\in\mathbb{N}}$ being interpolating for $H^2(\mathbb{D}^d)$. On the other hand, as pointed out above, if $Z$ is not weakly separated then infinitely many pairs of normalized Szegö kernels at the points of $Z$ are at an angle arbitrarily close to $0$, and hence $G$ is not bounded below. Thus the claim follows.

Random Separation in the Unit Ball
This section is devoted to the proof of Theorem 1.7. We also study uniform separation on the unit ball. Compared with the polydisc, we rely more heavily on the spherical geometry of the unit ball than on the Euclidean geometry of the Hardy spaces involved, so the techniques used in this section differ from those of the previous sections.
Recall that $\Lambda(\omega) = \{\lambda_j\}$ with $\lambda_j = \rho_j \xi_j(\omega)$, where $(\xi_j(\omega))_j$ is a sequence of independent random variables, all uniformly distributed on the unit sphere, and $(\rho_j)_j \subset [0,1)$ is a sequence of a priori fixed radii. Depending on the distribution conditions on $\{\rho_j\}$, discussed below, we study the probability that $\Lambda(\omega)$ is interpolating for the Besov-Sobolev spaces $B^\sigma_p(\mathbb{B}_d)$, where $0 < \sigma \leq 1/2$.
The Bergman tree $\mathcal{T}_d$ associated to the ball $\mathbb{B}_d$ with structure constants $1$ and $\frac{\ln 2}{2}$ is needed in the analysis, so we present some details here. More information can be found in [5, pg. 17]. Let $\rho$ be the pseudo-hyperbolic distance on the unit ball, thus $\rho(z,w) = |\varphi_z(w)|$, where $\varphi_z$ is the Möbius transform. The Bergman metric $\beta$ on the unit ball $\mathbb{B}_d$ in $\mathbb{C}^d$ is defined in terms of $\rho$. Further, for any $r > 0$, we define the corresponding sphere $U_r$. For any $N \in \mathbb{N}$, according to [5, Lemma 2.6] and the fact that $U_r$ is a compact set, there is a positive integer $J$, a set of points $\{z^N_j\}_{j=1}^J$ and a set of subsets $\{Q^N_j\}_{j=1}^J$, where $P_N z$ denotes the radial projection of $z$ onto the sphere $U_{N \ln 2 / 2}$. Define a tree structure on the collection of sets $\{K^N_j\}$ by declaring $K^{N+1}_i$ a child of $K^N_j$ whenever the projection of $Q^{N+1}_i$ lies in $Q^N_j$. For any $K^N_j \in \mathcal{T}_d$, we define $d(K^N_j)$ by $d(K^N_j) = N$. Given a non-negative function $h$ on $\mathbb{N}$, we say $h$ is summable if $\sum_{n\in\mathbb{N}} h(n) < \infty$. For $\sigma > 0$, a measure $\mu$ satisfies the strengthened simple condition if there is a summable function $h(\cdot)$ such that the corresponding tree estimate holds. The following lemma follows from [6, Lemma 32 and Theorem 23]. The next lemma can be found in [8].
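For reference, the pseudo-hyperbolic and Bergman distances used above can be computed from the standard identity $1 - |\varphi_z(w)|^2 = (1-|z|^2)(1-|w|^2)/|1-\langle z, w\rangle|^2$ together with $\beta = \frac{1}{2}\log\frac{1+\rho}{1-\rho}$. The sketch below (ours, not the paper's) implements exactly these two formulas.

```python
import math

def herm(z, w):
    # Hermitian inner product <z, w> on C^d
    return sum(a * b.conjugate() for a, b in zip(z, w))

def pseudo_hyperbolic(z, w):
    # rho(z, w) = |phi_z(w)|, via 1 - rho^2 = (1-|z|^2)(1-|w|^2) / |1 - <z,w>|^2
    num = (1.0 - herm(z, z).real) * (1.0 - herm(w, w).real)
    val = 1.0 - num / abs(1.0 - herm(z, w)) ** 2
    return math.sqrt(max(val, 0.0))  # clamp tiny negative rounding at z = w

def bergman(z, w):
    # beta(z, w) = (1/2) * log((1 + rho) / (1 - rho))
    r = pseudo_hyperbolic(z, w)
    return 0.5 * math.log((1.0 + r) / (1.0 - r))
```

With $z = 0$ the identity reduces to $\rho(0, w) = |w|$, which is a convenient sanity check.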
Since $\frac{d}{2} > \sigma > 0$, there is a constant $\epsilon$ such that $d > 2\sigma + \epsilon$. Next, it will be shown that the relevant supremum is bounded almost surely, which implies that $\mu_{\Lambda(\omega)}$ is a Carleson measure almost surely by Lemma 4.1. For any $\alpha$, let $S_\alpha$ denote the corresponding sum. On the other hand, the complementary term is bounded by some constant $C$. Without loss of generality, suppose $A/4 \geq C$. Then $S_\alpha$ is bounded almost surely, and thus $\mu_{\Lambda(\omega)}$ is a Carleson measure almost surely.
On the other hand, the divergent case is handled separately. For a sequence $\{z_j\}$, if $\inf_{i \neq j} \beta(z_i, z_j) > 0$, we call $\{z_j\}$ weakly separated. On the unit ball, denote $\mathcal{W}(\mathbb{B}_d) := \{\omega : \Lambda(\omega) \text{ is weakly separated in } \mathbb{B}_d\}$. We point out that weak separation with respect to the Bergman metric is equivalent to weak separation with respect to the pseudo-hyperbolic metric. Thus, we have the following lemma.
By Theorem 1.6, a sequence is an interpolating sequence if and only if it is weakly separated and the corresponding measure is a Carleson measure. The proof of Theorem 1.7 is now given.
By Theorem 1.6 and Lemma 4.4, the conclusion follows.
On the other hand, the divergent case is treated below. A sequence $\{z_j\}$ is uniformly separated if $\inf_k \prod_{j \neq k} \rho(z_j, z_k) > 0$, where $\rho$ is the pseudo-hyperbolic distance on the unit ball. Let $\mathcal{U}(\mathbb{B}_d)$ denote the corresponding event.

Proof. First, $\inf_k \prod_{j \neq k} \rho(\lambda_j, \lambda_k)^2 > 0$ almost surely if and only if $\sup_k \sum_{j \neq k} \log \rho(\lambda_j, \lambda_k)^{-2} < \infty$ almost surely. Since $\log t^{-1} \geq 1 - t$ for $t \in (0,1]$, $\inf_k \prod_{j \neq k} \rho(\lambda_j, \lambda_k)^2 > 0$ almost surely implies $\sup_k \sum_{j \neq k} [1 - \rho(\lambda_j, \lambda_k)^2] < \infty$ almost surely.
Let $s_d$ be the Szegö kernel on $\mathbb{D}^d$. Then the Hardy space $H^2(\mathbb{D}^d)$ is the reproducing kernel Hilbert space $\mathcal{H}_{s_d}$. Denote the normalized Szegö kernel by
$$S^d(z,w) := \prod_{i=1}^d \frac{(1-|z_i|^2)^{1/2}(1-|w_i|^2)^{1/2}}{1-\overline{w_i}\, z_i}.$$

Proof of Theorem 1.3, (iii). It suffices to show that
$$\sum_{j\in\mathbb{N}} \sum_{n \neq j} \mathbb{E}(|S^d(n,j)|^2) < \infty. \tag{3.6}$$
Indeed, if (3.6) holds, then $\sum_{j\in\mathbb{N}} \sum_{n \neq j} |S^d(n,j)|^2 < \infty$ almost surely, and Lemma 3.5 concludes the proof. By Remark 3.2 and by regrouping the sum in (3.6) with respect to the partition $(I_m)_{m\in\mathbb{N}^d}$ of $\mathbb{D}^d$, one obtains the desired bound.