1 Introduction

Given a measure \(\mu \) in \({\mathbb {R}}^n\) we consider the space \({\mathcal {P}}_k\) of polynomials of total degree at most k in n variables, endowed with the natural scalar product in \(L^2(\mu )\). We assume that the support of \(\mu \) is not contained in the zero set of any \(p\in \mathcal {P}_k\), \(p\ne 0\). In this case, the norm of \(L^2(\mu )\) is also a norm for the space \({\mathcal {P}}_k\), and the point evaluation at any given point \(x\in \mathbb {R}^n\) is a bounded linear functional. Hence, the space \(\mathcal {P}_k\) becomes a reproducing kernel Hilbert space. By the Riesz representation theorem, for any \(x\in \mathbb {R}^n\), there is a unique function \(K_k(\mu , x,\cdot )\in \mathcal {P}_k\) such that

$$\begin{aligned} p(x) = \langle p, K_k(\mu ,x,\cdot )\rangle = \int p(y) K_k(\mu ,x, y) \, d\mu (y) \end{aligned}$$

for every \(p\in {\mathcal {P}}_k\). Given a point \(x\in \mathbb {R}^n\) the normalized reproducing kernel is denoted by \(\kappa _{k,x}\), i.e.

$$\begin{aligned} \kappa _{k,x}(\mu ,y) = \frac{K_k(\mu ,x,y)}{\Vert K_k(\mu ,x,\cdot ) \Vert _{L^2(\mu )}} = \frac{K_k(\mu ,x,y)}{\sqrt{K_k(\mu ,x,x)}}. \end{aligned}$$

We will denote by \(\beta _k(\mu ,x)\) the value of the reproducing kernel on the diagonal

$$\begin{aligned} \beta _k(\mu ,x)=K_k(\mu ,x,x). \end{aligned}$$

The function \(1/\beta _k(\mu ,x)\) is often called the Christoffel function. For brevity we may sometimes omit the dependence on \(\mu \).
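As a concrete illustration (a numerical sketch, not part of the argument), in dimension \(n=1\) with \(\mu \) the Lebesgue measure on \([-1,1]\) the kernel is computable from orthonormalized Legendre polynomials, and \(\beta _k\) and the Christoffel function can be evaluated directly; the function names are ours:

```python
import numpy as np
from numpy.polynomial import legendre as L

def kernel_diag(k, x):
    """beta_k(x) = K_k(x, x): sum of squares of an orthonormal basis of P_k."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for j in range(k + 1):
        c = np.zeros(j + 1); c[j] = 1.0                   # j-th Legendre polynomial
        pj = np.sqrt((2 * j + 1) / 2.0) * L.legval(x, c)  # normalized in L^2(dx)
        total += pj ** 2
    return total

# The Christoffel function 1/beta_k(x) shrinks near the endpoints,
# reflecting the growth of the kernel near the boundary.
christoffel = 1.0 / kernel_diag(20, np.array([0.0, 0.9, 0.999]))
```

Since the basis is orthonormal, \(\int K_k(x,x)\,dx = \dim {\mathcal {P}}_k = k+1\), which gives a simple consistency check on the computation.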

Following Shapiro and Shields in [16] we define sampling and interpolating sets:

Definition 1

A sequence \(\Lambda = \{\Lambda _{k}\}\) of finite sets of points in \(\mathbb {R}^n\) is said to be interpolating for the sequence of spaces \(({\mathcal {P}}_{k},L^2(\mu ))_{k\ge 0}\) if the associated family of normalized reproducing kernels at the points \(\lambda \in \Lambda _k\), i.e. \(\kappa _{k,\lambda }\), is a Riesz sequence in the Hilbert space \(\mathcal {P}_k\), uniformly in k. That is, if there is a constant \(C>0\) independent of k such that for any linear combination of the normalized reproducing kernels we have:

$$\begin{aligned} \frac{1}{C} \sum _{\lambda \in \Lambda _k} |c_\lambda |^2 \le \bigl \Vert \sum _{\lambda \in \Lambda _k} c_\lambda \kappa _{k,\lambda } \bigr \Vert ^2\le C \sum _{\lambda \in \Lambda _k} |c_\lambda |^2. \end{aligned}$$
(1.1)

The definition above is usually decoupled into two separate conditions. The left hand side inequality in (1.1) is usually called the Riesz–Fischer property for the reproducing kernels and is equivalent to the condition that the following moment problem is solvable: for arbitrary values \(\{v_\lambda \}_{\lambda \in \Lambda _k}\) there exists a polynomial \(p\in \mathcal {P}_k\) such that \(p(\lambda )/\sqrt{\beta _k(\lambda )} = \langle p, \kappa _{k,\lambda }\rangle = v_\lambda \) for all \(\lambda \in \Lambda _k\) and

$$\begin{aligned} \Vert p\Vert ^2 \le C \sum _{\lambda \in \Lambda _k} |v_\lambda |^2 = \sum _{\lambda \in \Lambda _k} \frac{|p(\lambda )|^2}{\beta _k(\lambda )}. \end{aligned}$$

This is the reason \(\Lambda \) is called an interpolating family.

The right hand side inequality in (1.1) is called the Bessel property for the normalized reproducing kernels \(\{\kappa _{k,\lambda }\}_{\lambda \in \Lambda _k}\). The Bessel property is equivalent to having

$$\begin{aligned} \sum _{\lambda \in \Lambda _k} \frac{|p(\lambda )|^2}{\beta _k(\lambda )} \le C \Vert p\Vert ^2 \end{aligned}$$
(1.2)

for all \(p\in {\mathcal {P}}_k\). That is, if we denote \(\mu _k := \sum _{\lambda \in \Lambda _k} \delta _{\lambda }/\beta _k(\lambda )\), we are requiring that the identity map is a uniformly bounded embedding of \((\mathcal {P}_k, L^2(\mu ))\) into \((\mathcal {P}_k, L^2(\mu _k))\).
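For a single fixed k, the constants in (1.1) are exactly the extreme eigenvalues of the Gram matrix of the normalized reproducing kernels. A small numerical sketch (dimension one, Lebesgue measure on \([-1,1]\), hand-picked nodes; the function names are ours):

```python
import numpy as np
from numpy.polynomial import legendre as L

def kernel(k, x, y):
    """K_k(x, y) for (P_k, L^2(dx)) on [-1, 1], via orthonormal Legendre."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    total = np.zeros(np.broadcast(x, y).shape)
    for j in range(k + 1):
        c = np.zeros(j + 1); c[j] = 1.0
        total += (2 * j + 1) / 2.0 * L.legval(x, c) * L.legval(y, c)
    return total

def riesz_bounds(k, nodes):
    """Smallest/largest eigenvalue of G[i, j] = <kappa_{k,i}, kappa_{k,j}>."""
    G = kernel(k, nodes[:, None], nodes[None, :])
    d = np.sqrt(np.diag(G))
    G = G / np.outer(d, d)          # Gram matrix of the normalized kernels
    eig = np.linalg.eigvalsh(G)
    return eig[0], eig[-1]

lo, hi = riesz_bounds(12, np.linspace(-0.9, 0.9, 7))
```

A family \(\Lambda \) is interpolating precisely when these two bounds stay away from 0 and \(\infty \) uniformly in k.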

The notion of sampling plays a similar but opposite role.

Definition 2

A sequence \(\Lambda = \{\Lambda _{k}\}\) of finite sets of points in \(\mathbb {R}^n\) is said to be sampling or Marcinkiewicz–Zygmund for the sequence of spaces \(({\mathcal {P}}_{k},L^2(\mu ))_{k\ge 0}\) if each family \(\{\kappa _{k,\lambda }\}_{\lambda \in \Lambda _k}\) is a frame for the corresponding Hilbert space \(\mathcal {P}_k\), and their frame bounds are uniform in k. More precisely, if there is a constant \(C>0\) independent of k such that for any polynomial \(p\in \mathcal {P}_k\):

$$\begin{aligned} \frac{1}{C} \sum _{\lambda \in \Lambda _k} |\langle p, \kappa _{k,\lambda } \rangle |^2 \le \Vert p\Vert ^2 \le C \sum _{\lambda \in \Lambda _k} |\langle p, \kappa _{k,\lambda } \rangle |^2. \end{aligned}$$
(1.3)

Observe that the left hand side inequality in (1.3) is the Bessel condition mentioned above. If we were considering only a single space of polynomials \({\mathcal {P}}_{k_0}\) then the notion of an interpolating family amounts to the linear independence of the corresponding reproducing kernels. On the other hand, the notion of a sampling family corresponds to reproducing kernels that span the whole space \(\mathcal {P}_{k_0}\).
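In dimension one, for a single k, the \(k+1\) Gauss–Legendre nodes give an orthonormal basis of reproducing kernels (the situation that Theorem 2 below rules out for \(n>1\)): the classical identity that the Gauss quadrature weights coincide with the Christoffel function \(1/\beta _k\) at the nodes makes (1.3) an exact equality. A sketch (names ours):

```python
import numpy as np
from numpy.polynomial import legendre as L

def kernel_diag(k, x):
    """beta_k(x) = K_k(x, x) for (P_k, L^2(dx)) on [-1, 1]."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for j in range(k + 1):
        c = np.zeros(j + 1); c[j] = 1.0
        total += (2 * j + 1) / 2.0 * L.legval(x, c) ** 2
    return total

k = 10
nodes, weights = L.leggauss(k + 1)   # Gauss-Legendre rule, exact on P_{2k+1}
beta = kernel_diag(k, nodes)
# weights[j] == 1/beta_k(nodes[j]), hence for every p in P_k
#   sum_j |p(nodes[j])|^2 / beta_k(nodes[j]) = ||p||_{L^2}^2   exactly.
```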

Throughout this paper, \(\Omega \) denotes a smooth, bounded, convex domain. We will restrict our attention to two cases:

  • In the first case we consider measures \(d\mu (x) = \chi _\Omega (x) dV(x)\) on \(\Omega \), where dV is the Lebesgue measure.

  • In the second case we consider the unit ball \(\mathbb {B}= \{x\in \mathbb {R}^n : |x| \le 1\}\) and measures of the form \(d\mu (x) = (1-|x|^2)^{a-1/2}\chi _\mathbb {B}(x) dV(x)\) where \(a\ge 0\).

In these two cases there are good explicit estimates for the size of the reproducing kernel on the diagonal \(K_k(\mu ,x,x)\), and therefore both notions, interpolation and sampling families, become more tangible. In [2] the authors obtained necessary geometric conditions for sampling families in bounded smooth convex sets with weights when the weights satisfy two technical conditions: Bernstein-Markov and moderate growth. These properties are both satisfied for the Lebesgue measure in a convex set. The case of interpolating families in convex sets was not considered, since there were several technical hurdles to applying the same technique.

Our aim in this paper is to fill this gap and obtain necessary geometric conditions for interpolating families in the two settings mentioned above. The geometric conditions that usually appear in this type of problem come in three flavours:

  • A separation condition. This is implied by the Riesz–Fischer condition i.e. the left hand side of (1.1). The requirement that one should be able to interpolate the values one and zero implies that different points \(\lambda , \lambda ' \in \Lambda _k\) with \(\lambda \ne \lambda '\) cannot be too close. The separation conditions in our settings are studied in Sect. 3.1.

  • A Carleson-type condition. This is a condition that ensures the continuity of the embedding as in (1.2). A geometric characterization of the Carleson-type condition is given in Theorem 6 for convex domains and the Lebesgue measure, and in Theorem 7 for the ball and the measures \(\mu _a\).

  • A density condition. This is a global condition that usually follows from both the Bessel and the Riesz–Fischer conditions. A necessary density condition for interpolating sequences on convex sets endowed with the Lebesgue measure is provided in our first main result (see next section for precise definitions):

Theorem 1

Let \(\Omega \) be a smooth, bounded, convex domain in \(\mathbb {R}^n\), and let \(\Lambda =\{ \Lambda _k \}\) be an interpolating sequence. Then for any Euclidean ball \(\mathbb {B}(x,r) \subset \Omega \) the following holds:

$$\begin{aligned} \limsup _{k\rightarrow \infty } \frac{\#(\Lambda _k \cap \mathbb {B}(x,r))}{\dim {\mathcal {P}}_k}\le \mu _{eq}(\mathbb {B}(x,r)). \end{aligned}$$

Here \(\mu _{eq}\) is the normalized equilibrium measure associated to \(\Omega \).

In Theorem 8 we consider the case of the ball with the measure \(\mu _a\). In both cases a matching density result for sampling sequences was proved in [2].

Finally, a natural question is whether or not there exists a family \(\{\Lambda _k\}\) that is both sampling and interpolating. Answering this question is very difficult in general (see [14, Sect. 9], [7]). A particular case is when \(\{\kappa _{k,\lambda }\}_{\lambda \in \Lambda _k}\) form an orthonormal basis. In the last section we study the existence of orthonormal bases of reproducing kernels in the case of the ball with the measures \(\mu _a\). More precisely, if the spaces \(\mathcal {P}_k\) are endowed with the inner product of \(L^2(\mu _a)\), we prove that for k big enough the space \(\mathcal {P}_k\) does not admit an orthonormal basis of reproducing kernels:

Theorem 2

Let \(\mathbb {B}\subset \mathbb {R}^n\) be the unit ball and \(n>1\), and fix any \(a\ge 0\). There exists \(k_0\) such that for any \(k\ge k_0\) the space \(\mathcal {P}_k\) does not admit a basis of reproducing kernels that is orthogonal with respect to the inner product induced by the measure \(d\mu _a=(1-|x|^2)^{a-1/2}dV\).

To determine whether or not there exists a family \(\{\Lambda _k\}\) that is both sampling and interpolating for \((\mathcal {P}_k,\mu _a)\) remains an open problem.

2 Technical Results

Before stating and proving our results we recall the behaviour of the kernel on the diagonal (equivalently, of the Christoffel function), define an appropriate metric, and introduce some tools.

2.1 Christoffel Functions and Equilibrium Measures

To write explicitly the sampling and interpolating conditions we need an estimate for the Christoffel function. In [2] it was observed that in the case of the measure \(d\mu (x) = \chi _\Omega (x) dV(x)\) it is possible to obtain precise estimates for the size of the reproducing kernel on the diagonal:

Theorem 3

Let \(\Omega \) be a smooth bounded convex domain in \(\mathbb {R}^n\). Then the reproducing kernel for \(({\mathcal {P}}_{k},\chi _\Omega dV)\) satisfies

$$\begin{aligned} \beta _k(x)=K_k(x,x) \simeq \min \Bigl ( \frac{k^n}{\sqrt{d(x,\partial \Omega )}}, k^{n+1}\Bigr )\quad \text {for all } x\in \Omega , \end{aligned}$$
(2.1)

where \(d(x,\partial \Omega )\) denotes the Euclidean distance from \(x\in \Omega \) to the boundary of \(\Omega \).
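The two-sided estimate (2.1) can be checked numerically in dimension one, where \(\Omega =(-1,1)\), \(d(x,\partial \Omega )=1-|x|\) and \(\beta _k\) is computable from orthonormal Legendre polynomials (an illustration only; the grid and the constants below are ours):

```python
import numpy as np
from numpy.polynomial import legendre as L

def kernel_diag(k, x):
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for j in range(k + 1):
        c = np.zeros(j + 1); c[j] = 1.0
        total += (2 * j + 1) / 2.0 * L.legval(x, c) ** 2
    return total

def ratio_range(k):
    """Range of beta_k(x) / min(k / sqrt(d(x)), k^2) over a grid in (-1, 1)."""
    x = np.linspace(-0.999, 0.999, 401)
    model = np.minimum(k / np.sqrt(1.0 - np.abs(x)), float(k) ** 2)
    r = kernel_diag(k, x) / model
    return float(r.min()), float(r.max())

# ratio_range(k) stays inside fixed constants as k doubles: 8, 16, 32, ...
```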

For the weight \((1-|x|^2)^{a-1/2}\) in the ball \({\mathbb {B}}\subseteq \mathbb {R}^n\) the behaviour of the Christoffel function is well known.

Proposition 4

For any \(a \ge 0\), let \(d\mu _a(x)=(1-|x|^2)^{a-1/2}\chi _\mathbb {B}(x) dV(x)\). Then the reproducing kernel for \(({\mathcal {P}}_{k},d\mu _a )\) satisfies

$$\begin{aligned} \beta _k(\mu _a,x)=K_k(\mu _a,x,x) \simeq \min \Bigl ( \frac{k^n}{d(x,\partial {\mathbb {B}})^a}, k^{n+2a}\Bigr )\quad \text {for all } x\in \mathbb {B}. \end{aligned}$$
(2.2)

The proof follows from [15, Prop 4.5 and 5.6], the Cauchy–Schwarz inequality and the extremal characterization of the kernel

$$\begin{aligned} K_k(\mu _a;x,x)=\sup \left\{ |p(x)|^2\;\; : \;\;p\in {\mathcal {P}}_k,\int |p|^2 d\mu _a \le 1 \right\} . \end{aligned}$$

The asymptotic behaviour for the case of the ball was established in [4]. See also [1] for the general case.

To define the equilibrium measure we have to introduce a few concepts from pluripotential theory, see [10]. Let \(L\subset \mathbb {R}^n\subset \mathbb {C}^n\) be a non-pluripolar compact set. Given \(\xi \in \mathbb {C}^n\), the Siciak extremal function is defined by

$$\begin{aligned} G_L(\xi )=\sup \left\{ \frac{\log ^+|p(\xi )|}{\deg (p)}\; : \; p\in P(\mathbb {C}^n),\; \sup _L |p|\le 1\right\} , \end{aligned}$$

where \(P(\mathbb {C}^n)\) is the space of holomorphic polynomials. The pluricomplex Green’s function is the upper semicontinuous regularization of \(G_L\) defined for \(z\in \mathbb {C}^n\) by

$$\begin{aligned} G^*_L(z)=\limsup _{\xi \rightarrow z}G_L(\xi ). \end{aligned}$$

The pluripotential equilibrium measure for L is the Monge-Ampère Borel (probability) measure

$$\begin{aligned} d \mu _{eq}=(d d^c G_L^*)^n. \end{aligned}$$

When \(\Omega \) is a smooth bounded convex domain the equilibrium measure is very well understood, see [3, 5]: it behaves roughly as \(d\mu _{eq}\simeq dV/\sqrt{d(x,\partial \Omega )}\). In particular, the pluripotential equilibrium measure of the ball \({\mathbb {B}}\) is given (up to normalization) by \(d \mu _0(x)=dV(x)/\sqrt{1-|x|^2}\).
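In dimension one the equilibrium measure of \([-1,1]\) is the arcsine distribution \(d\mu _{eq}= dx/(\pi \sqrt{1-x^2})\), and the convergence of \(\beta _k\,dV/\dim {\mathcal {P}}_k\) to \(\mu _{eq}\) is visible numerically (a sketch; the degree and tolerance are ours):

```python
import numpy as np
from numpy.polynomial import legendre as L

def kernel_diag(k, x):
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for j in range(k + 1):
        c = np.zeros(j + 1); c[j] = 1.0
        total += (2 * j + 1) / 2.0 * L.legval(x, c) ** 2
    return total

k = 300
x = np.array([-0.5, 0.0, 0.3])
density = kernel_diag(k, x) / (k + 1)            # (1/dim P_k) * beta_k
arcsine = 1.0 / (np.pi * np.sqrt(1.0 - x ** 2))  # equilibrium density
# density approaches arcsine pointwise in the interior as k grows
```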

2.2 An Anisotropic Distance

The natural distance to formulate the separation condition and the Carleson-type condition is not the Euclidean distance. Consider in the unit ball \(\mathbb {B}\subset \mathbb {R}^n\) the following distance:

$$\begin{aligned} \rho (x,y) = \arccos \left\{ \langle x, y\rangle + \sqrt{1-|x|^2}\sqrt{1-|y|^2}\right\} . \end{aligned}$$

This is the geodesic distance between the points \(x'=(x,\sqrt{1-|x|^2})\) and \(y'=(y,\sqrt{1-|y|^2})\) in the sphere \({\mathbb {S}}^{n}\). The anisotropic balls \(B(x,\varepsilon ) = \{y \in {\mathbb {B}}: \rho (x,y) < \varepsilon \}\) are comparable to boxes centered at x (products of intervals) of size \(\varepsilon \) in the tangent directions and size \(\varepsilon ^2 + \varepsilon \sqrt{1-|x|^2}\) in the radial direction. To refer to a Euclidean ball of center x and radius \(\varepsilon \) we use the notation \(\mathbb {B}(x,\varepsilon )\).

The Euclidean volume of a ball \(B(x,\varepsilon )\) is comparable to \(\varepsilon ^{n}\sqrt{1-|x|^2}\) if \((1-|x|^2) > \varepsilon ^2\) and \(\varepsilon ^{n+1}\) otherwise.
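In dimension one the distance reduces to \(\rho (x,y)=|\arccos x-\arccos y|\), and the volume (length) of an anisotropic ball has a closed form; a short sketch of both facts (function names are ours):

```python
import numpy as np

def rho(x, y):
    """The arccos distance; in dimension 1 it equals |arccos x - arccos y|."""
    s = x * y + np.sqrt(1.0 - x ** 2) * np.sqrt(1.0 - y ** 2)
    return np.arccos(np.clip(s, -1.0, 1.0))

def ball_length(x, eps):
    """Euclidean length of the anisotropic ball B(x, eps) inside (-1, 1)."""
    t = np.arccos(x)
    lo, hi = max(t - eps, 0.0), min(t + eps, np.pi)
    return np.cos(lo) - np.cos(hi)

# ball_length(x, eps) is comparable to eps * sqrt(1 - x**2) + eps**2,
# matching the two regimes described in the text.
```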

This distance \(\rho \) can be extended to an arbitrary smooth convex domain \(\Omega \) by using Euclidean balls contained in \(\Omega \) and tangent to the boundary of \(\Omega \), in the following way. Since \(\Omega \) is smooth, there is a tubular neighbourhood \(U\subset \mathbb {R}^n\) of the boundary of \(\Omega \) such that each point \(x\in U\) has a unique closest point \({\tilde{x}}\in \partial \Omega \), and the normal line to \(\partial \Omega \) at \({\tilde{x}}\) passes through x. There is a fixed small radius \(r>0\) such that any point \(x\in U\cap \Omega \) is contained in a ball \(\mathbb {B}(p, r)\subset \Omega \) of radius r that is tangent to \(\partial \Omega \) at \({\tilde{x}}\). At x we define the Riemannian metric obtained by pulling back the standard metric of \(\partial {\tilde{\mathbb {B}}}(p,r)\), where \({\tilde{\mathbb {B}}}(p,r)\) is the ball in \(\mathbb {R}^{n+1}\) centered at (p, 0) and of radius r, under the projection of \(\mathbb {R}^{n+1}\) onto the first n variables. In this way we obtain a Riemannian metric on \(\Omega \cap U\). In the core of \(\Omega \), i.e. far from the boundary, we use the standard Euclidean metric, and we glue the two metrics with a partition of unity.

The resulting metric \(\rho \) on \(\Omega \) has the relevant property that the balls of radius \(\varepsilon \) behave as in the unit ball, that is a ball \(B(x,\varepsilon )\) of center x and of radius \(\varepsilon \) in this metric is comparable to a box of size \(\varepsilon \) in the tangent directions and size \(\varepsilon ^2 + \varepsilon \sqrt{d(x,\partial \Omega )}\) in the normal direction.

2.3 Well Localized Polynomials

The basic tool that we will use to prove the Carleson-type condition and the separation condition is a family of well localized polynomials. These were studied by Petrushev and Xu in the unit ball with the measure \(d\mu _a=(1-|x|^2)^{a-1/2}dV\), for \(a\ge 0\). We recall their basic properties:

Theorem A

(Petrushev and Xu) Let \(d\mu _a=(1-|x|^2)^{a-1/2}dV\) on \(\mathbb {B}\) for \(a\ge 0\). For any \(k\ge 1\) and any \(y\in \mathbb {B}\subset \mathbb {R}^n\) there are polynomials \(L_k^a (\cdot , y) \in {\mathcal {P}}_{2k}\) that satisfy:

  1. \(L_k^a(x,y)\), as a function of x, is a polynomial of degree 2k.

  2. \(L_k^a(x,y) = L_k^a(y, x)\).

  3. \(L_k^a\) reproduces all the polynomials of degree k, i.e.

    $$\begin{aligned} p(y) = \frac{1}{\mu _a(\mathbb {B})} \int _\mathbb {B}L_k^a(x,y) p(x)\, d\mu _a(x),\qquad \text {for all } p\in {\mathcal {P}}_k. \end{aligned}$$
    (2.3)

  4. For any \(\gamma >0\) there is a \(c_\gamma \) such that

    $$\begin{aligned} |L_k^a (x, y)| \le c_\gamma \frac{\sqrt{\beta _k(\mu _a,x) \beta _k(\mu _a,y) }}{(1+ k \rho (x,y))^\gamma }. \end{aligned}$$
    (2.4)

  5. The kernels \(L_k^a\) are Lipschitz with respect to the metric \(\rho \). More concretely, for all \(x\in B(y, 1/k)\)

    $$\begin{aligned} |L_k^a (w, x) - L_k^a (w, y)| \le c_\gamma \frac{k \rho (x,y) \sqrt{ \beta _k(\mu _a,w) \beta _k(\mu _a,y) }}{ (1+k \rho (w,y))^\gamma }. \end{aligned}$$
    (2.5)

  6. There is \(\varepsilon > 0\) such that \(L_k^a (x, y) \simeq K_k(\mu _a; y, y)\) for all \(x\in B(y, \varepsilon /k)\).

Proof

All the properties are proved in [15, Thm 4.2, Prop 4.7 and 4.8] except property (6), the behaviour near the diagonal. Let us start by observing that, by the Lipschitz condition (2.5), it is enough to prove that \(L_k^a (x,x)\simeq K_k(\mu _a;x,x)\).

This follows from the construction of \(L_k^a\), which goes as follows. Let \(V_k\subset L^2(\mathbb {B})\) be the subspace of polynomials of degree k that are orthogonal to all polynomials of lower degree with respect to the measure \(d\mu _a\), and let \(P_k(x, y)\) be the kernel of the orthogonal projection onto \(V_k\): if \(f_1,\ldots , f_r\) is an orthonormal basis for \(V_k\), then \(P_k(x,y) = \sum _{j= 1}^r f_j(x)f_j(y)\). The kernel \(L_k^a\) is defined as

$$\begin{aligned} L_k^a (x,y) = \sum _{j = 0}^\infty {\hat{a}} \left( \frac{j}{k}\right) P_j(x,y). \end{aligned}$$

We assume that \({\hat{a}}\) is compactly supported, \({\hat{a}} \ge 0\), \({\hat{a}} \in {\mathcal {C}}^\infty ({\mathbb {R}})\), \({\text {supp}} {\hat{a}} \subset [0, 2]\), \({\hat{a}}(t) = 1\) on [0, 1] and \({\hat{a}} (t) \le 1\) on [1, 2] as in Fig. 1.

Fig. 1 Graph of \(\hat{a}\).

Then all the terms are positive on the diagonal. Recall that

$$\begin{aligned} K_k(\mu _a;x, y)=\sum _{j=0}^k P_j(x,y). \end{aligned}$$

Hence, we get

$$\begin{aligned} \beta _k (\mu _a,x) = K_k(\mu _a;x, x) \le L_k^a (x, x) \le K_{2k}(\mu _a;x,x) = \beta _{2k}(\mu _a,x). \end{aligned}$$

Since \(\beta _k (\mu _a,x) \simeq \beta _{2k} (\mu _a,x)\) we obtain the desired estimate.

\(\square \)
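The construction above is easy to reproduce numerically in dimension one with \(a=1/2\) (Lebesgue measure on \([-1,1]\)). Below we use a simple continuous surrogate for the smooth cutoff \(\hat{a}\) of the paper, which is enough for the diagonal bounds just derived (names ours):

```python
import numpy as np
from numpy.polynomial import legendre as L

def proj_diag(j, x):
    """P_j(x, x): diagonal of the projection kernel onto V_j (a = 1/2)."""
    c = np.zeros(j + 1); c[j] = 1.0
    return (2 * j + 1) / 2.0 * L.legval(x, c) ** 2

def a_hat(t):
    """Cutoff: 1 on [0, 1], decreasing to 0 on [1, 2], supported in [0, 2]."""
    t = np.asarray(t, dtype=float)
    return np.where(t <= 1.0, 1.0,
                    np.where(t < 2.0, np.cos(0.5 * np.pi * (t - 1.0)), 0.0))

def K_diag(k, x):
    return sum(proj_diag(j, x) for j in range(k + 1))

def L_diag(k, x):
    return sum(a_hat(j / k) * proj_diag(j, x) for j in range(2 * k + 1))
```

Since every \(P_j(x,x)\ge 0\) and \(0\le \hat{a}\le 1\) with \(\hat{a}=1\) on [0, 1], the inequalities \(K_k(x,x)\le L_k^a(x,x)\le K_{2k}(x,x)\) hold term by term, for this surrogate cutoff as well.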

Petrushev and Xu also proved the following integral estimate [15, Lem. 4.6]:

Lemma B

Let \(\alpha >0\) and \(a\ge 0\). If \(\gamma >0\) is big enough we have

$$\begin{aligned} \int _{\mathbb {B}} \frac{K_k(\mu _a,y,y)^\alpha }{(1+k \rho (x,y))^\gamma }d\mu _a (y) \lesssim \frac{1}{K_k(\mu _a,x,x)^{1-\alpha }}. \end{aligned}$$

3 Main Results

3.1 Separation

In our first result we prove that for \(\Lambda =\{ \Lambda _k \}\) interpolating there exists \(\varepsilon >0\) such that

$$\begin{aligned} \inf _{\lambda ,\lambda '\in \Lambda _k,\lambda \ne \lambda '}\rho (\lambda ,\lambda ')\ge \frac{\varepsilon }{k}. \end{aligned}$$

Theorem 5

If \(\Omega \) is a smooth, bounded, convex domain and \(\Lambda =\{ \Lambda _k \}\) is an interpolating sequence for \(({\mathcal {P}}_k,L^2(\chi _\Omega dV))_{k\ge 0}\) then there is an \(\varepsilon >0\) such that the balls \(\{B(\lambda ,\varepsilon /k)\}_{\lambda \in \Lambda _k}\) are pairwise disjoint.

Proof

Consider the metric in \(\Omega \) defined in Sect. 2.2, and let \(r(\Omega )\) be the width of the tubular neighbourhood. If \(d(\lambda ,\partial \Omega )\le r(\Omega )\), then we take the ball of radius \(r(\Omega )\) that contains \(\lambda \) and is tangent to \(\partial \Omega \) at the closest point to \(\lambda \). If \(d(\lambda ,\partial \Omega )> r(\Omega )\), then we take the ball of radius \(r(\Omega )\) centered at \(\lambda \). To simplify the notation assume that \(r(\Omega )=1\); in both cases we denote this ball by \(\mathbb {B}\). Note that, by defining \(\mathbb {B}\) in this way, the anisotropic distance in \(\mathbb {B}\) is equivalent to the anisotropic distance in \(\Omega \), so we denote both by \(\rho \). Suppose that there is another point \(\lambda '\in \Lambda _k\cap \mathbb {B}\) such that \(\rho (\lambda ,\lambda ')<1/k\). Since \(\Lambda \) is interpolating we can build a polynomial \(p\in {\mathcal {P}}_k\) such that \(p(\lambda ') = 0\), \(p(\lambda ) = 1\) and \(\Vert p\Vert ^2 \lesssim 1/K_k(\mu _{1/2},\lambda , \lambda )\). In the ball \(\mathbb {B}\), the kernel \(L_k^{1/2}\) from Theorem A, for the Lebesgue measure \(a=1/2\), is reproducing. Therefore

$$\begin{aligned} 1 = \int _\mathbb {B}(L_k^{1/2}(\lambda , w)-L_k^{1/2}(\lambda ', w)) p(w) dV(w). \end{aligned}$$
(3.1)

We can use the estimate

$$\begin{aligned} |p(w)| \le \sqrt{\beta _k(\mu _{1/2},w)} \Vert p\Vert \le \sqrt{\beta _k(\mu _{1/2},w)/\beta _k(\mu _{1/2},\lambda )} \end{aligned}$$

and the inequality (2.5) to obtain

$$\begin{aligned} 1 \lesssim k\rho (\lambda ,\lambda ') \int _\mathbb {B}\frac{\beta _k(\mu _{1/2},w) dV(w)}{(1+k\rho (w,\lambda ))^\gamma }. \end{aligned}$$

Taking \(\alpha = 1\) and \(a=1/2\) in Lemma B we finally get \(1 \lesssim k\rho (\lambda ,\lambda ')\) as stated. \(\square \)

Observe that considering the general case \(L_k^a\) in (3.1), one can prove the corresponding result for interpolating sequences for \({\mathcal {P}}_k\) with weight

$$\begin{aligned} d\mu _a(x)=(1-|x|^2)^{a-1/2}dV(x) \end{aligned}$$

in the ball \({\mathbb {B}}\).

3.2 Carleson-Type Condition

Let us deal with condition (1.2).

Definition 3

A sequence of measures \(\{\mu _k\}_{k\ge 0}\) is called a sequence of Carleson measures for \(({\mathcal {P}}_k,d\mu )\) if there is a constant \(C>0\), independent of k, such that

$$\begin{aligned} \int _{\Omega } |p(x)|^2\, d\mu _k(x) \le C \Vert p\Vert _{L^2(\mu )}^2, \end{aligned}$$
(3.2)

for all \(p\in {\mathcal {P}}_k\).

In particular, if \(\Lambda =\{\Lambda _k\}\) is an interpolating sequence then the measures \(\mu _k = \sum _{\lambda \in \Lambda _k} \delta _\lambda /\beta _k(\lambda )\) form a Carleson sequence.

The geometric characterization of the Carleson measures when \(\Omega \) is a smooth convex domain is in terms of anisotropic balls.

Theorem 6

Let \(\Omega \) be a smooth, bounded, convex domain in \({\mathbb {R}}^n\). A sequence of measures \(\{\mu _k\}_{k\ge 0}\) is Carleson for \((\mathcal {P}_k, \chi _\Omega dV)\) if and only if there is a constant \(C>0\), independent of k, such that for all points \(x\in \Omega \)

$$\begin{aligned} \mu _k(B(x,1/k)) \le C V(B(x,1/k)). \end{aligned}$$
(3.3)

Proof

We prove the necessity. For any \(x\in \Omega \) there is a cube Q that contains \(\Omega \) which is tangent to \(\partial \Omega \) at a closest point to x as in Fig. 2.

Fig. 2 External tangent cube.

This cube has fixed dimensions independent of the point \(x\in \Omega \). Define the polynomials \(Q_k^x(y)=L_k^{1/2}(x_1,y_1)\cdots L_k^{1/2}(x_n,y_n)\), of degree at most 2kn. We test the Carleson condition (3.2) against these polynomials, which peak at B(x, 1/k), and we get

$$\begin{aligned} \int _{B(x,1/k)} |Q_k^x|^2 d\mu _k \le \int _{\Omega } |Q_k^x|^2 d\mu _k\le C \Vert Q_k^x \Vert _{L^{2}(Q)}^2. \end{aligned}$$

By property (6) in Theorem A the left hand side is bounded below by

$$\begin{aligned} C \mu _k\left( B(x,1/k)\right) \prod _{i=1}^n K_k(\mu _{1/2}; x_i,x_i)^2. \end{aligned}$$

For the right hand side we split the product of integrals and use that

$$\begin{aligned} \int L_k^{1/2} (x_i,y_i)^2 d\mu _{1/2} (y_i)\simeq K_{k}(\mu _{1/2}; x_i,x_i). \end{aligned}$$

Finally the estimate (2.2) applied to these one dimensional kernels and the fact that

$$\begin{aligned} V(B(x,1/k))\simeq \frac{1}{k^n}\left( \sqrt{d(x,\partial \Omega )}+\frac{1}{k}\right) \end{aligned}$$

gives the necessary condition.

For the sufficiency we use the reproducing property of \(L_k^{1/2}(z,y)\). For any point \(x\in \Omega \) there is a Euclidean ball \(\mathbb {B}_x\subset \Omega \) that contains x and is tangent to \(\partial \Omega \) at the closest point to x. Moreover, since \(\Omega \) is a smooth bounded convex domain we can assume that the radius of \(\mathbb {B}_x\) has a lower bound independent of x. In this ball we can reconstruct the square of any polynomial \(p\in \mathcal {P}_k\) using the kernel \(L_{2k}^{1/2}\) relative to the ball \(\mathbb {B}_x\). That is

$$\begin{aligned} \int _\Omega |p(x)|^2 \, d\mu _k(x) \le \int _\Omega \left| \int _{\mathbb {B}_x} L_{2k}^{1/2}(x,y) p^2(y)\, dV(y)\right| \, d\mu _k(x). \end{aligned}$$

We use the estimate (2.4) and we get

$$\begin{aligned} \int _\Omega |p(x)|^2 \, d\mu _k(x) \lesssim \int _\Omega \int _{\mathbb {B}_x} \frac{\sqrt{\beta _{2k}(x)\beta _{2k}(y)}}{(1+ 2k \rho (x,y))^\gamma } |p(y)|^2 dV(y)\, d\mu _k(x). \end{aligned}$$

We break the integral into two regions, according to whether \(\rho (x,y) < 1 \) or not. When \(\gamma \) is big enough we obtain:

$$\begin{aligned}\begin{aligned} \int _\Omega |p(x)|^2 \, d\mu _k(x) \lesssim&\int _\Omega \int _{\mathbb {B}_x \cap \rho (x,y) > 1} |p(y)|^2 dV(y) d\mu _k(x) + \\&\int _\Omega \int _{\mathbb {B}_x\cap \rho (x,y) < 1} \frac{\sqrt{\beta _{2k}(x)\beta _{2k}(y)}}{(1+ 2k \rho (x,y))^\gamma } |p(y)|^2 dV(y)\, d\mu _k(x). \end{aligned} \end{aligned}$$

The first integral on the right hand side is bounded by \(\int _\Omega |p(y)|^2\, dV(y)\) since \(\mu _k(\Omega )\) is bounded by hypothesis (it is possible to cover \(\Omega \) by balls \(\{B(x_j,1/k)\}\) with controlled overlap).

In the second integral, observe that if \(w\in B(x,1/k)\) then \(\rho (w,x)\le 1/k\) and therefore

$$\begin{aligned} \frac{\sqrt{\beta _{2k}(x)\beta _{2k}(y)}}{(1+ 2k \rho (x,y))^\gamma } \lesssim \frac{1}{V(B(x,1/k))} \int _{B(x,1/k)} \frac{\sqrt{\beta _{2k}(w)\beta _{2k}(y)}}{(1+ 2k \rho (w,y))^\gamma } dV(w). \end{aligned}$$

We plug this inequality into the second integral and we can bound it by

$$\begin{aligned} \int _\Omega |p(y)|^2 \int _{\rho (w,y) < 2}\frac{\sqrt{\beta _{2k}(w)\beta _{2k}(y)}}{(1+ 2k \rho (w,y))^\gamma } \frac{\mu _k(B(w,1/k))}{V(B(w,1/k))}dV(w) dV(y). \end{aligned}$$

We use the hypothesis (3.3) and Lemma B with \(\alpha = 1/2\) to bound it finally by \(\int _\Omega |p(y)|^2dV(y)\).

\(\square \)

The weighted case in the unit ball is simpler.

Theorem 7

Let \(d\mu _a (x)=(1-|x|^2)^{a-1/2}dV(x)\) for \(a\ge 0\) be the weight in the unit ball \(\mathbb {B}\subset \mathbb {R}^n\). A sequence of measures \(\{ \mu _k \}_{k\ge 0}\) is Carleson for \((\mathcal {P}_k,\mu _a)_{k\ge 0}\) if and only if there is a constant C, independent of k, such that for all points \(x\in \mathbb {B}\)

$$\begin{aligned} \mu _k(B(x,1/k)) \le C\; \mu _a(B(x,1/k)). \end{aligned}$$
(3.4)

Proof

Suppose that \(\{ \mu _k \}_{k\ge 0}\) is a Carleson sequence of measures. Then for any \(x\in \mathbb {B}\)

$$\begin{aligned} \int _{B(x,1/k)} |L_k^a(x,w)|^2\, d\mu _{2k}(w) \le C \Vert L_k^a(x,\cdot ) \Vert ^2_{L^2(\mu _a)}. \end{aligned}$$
(3.5)

By the definition of the polynomials \(L_k^a\) we get

$$\begin{aligned} \Vert L_k^a(x,\cdot ) \Vert ^2_{L^2(\mu _a)}&= \sum _{j_1,j_2 = 0}^\infty {\hat{a}} \left( \frac{j_1}{k}\right) {\hat{a}} \left( \frac{j_2}{k}\right) \int _\mathbb {B}P_{j_1}(x,y) P_{j_2}(x,y) d\mu _a(y)\\&=\sum _{j = 0}^\infty \left| \hat{a} \left( \frac{j}{k}\right) \right| ^2\int _\mathbb {B}|P_{j}(x,y)|^2 d\mu _a(y) \quad \text{ by } \text{ orthogonality }\\&=\sum _{j = 0}^\infty \left| \hat{a} \left( \frac{j}{k}\right) \right| ^2 P_j(x,x)\quad \text{ by } \text{ the } \text{ projection } \text{ property. }\\ \end{aligned}$$

Since \({\hat{a}}(t) \le 1\), \({\hat{a}}(t)=1\) on [0, 1], and the support of \({\hat{a}}\) is contained in [0, 2] we obtain the following estimates

$$\begin{aligned} K_k(\mu _a,x,x)\le \Vert L_k^a(x,\cdot ) \Vert ^2_{L^2(\mu _a)}\le K_{2k}(\mu _a,x,x). \end{aligned}$$

From these estimates and property (6) in Theorem A we get inequality (3.4). The sufficiency follows exactly as in the unweighted case with the obvious changes. \(\square \)

3.3 Density Condition

In [2, Thm. 4] a necessary density condition for sampling sequences for polynomials in convex domains was obtained. It states the following:

Theorem C

(Berman and Ortega-Cerdà) Let \(\Omega \) be a smooth convex domain in \(\mathbb {R}^n\), and let \(\Lambda \) be a sampling sequence. Then for any \(\mathbb {B}(x,r) \subset \Omega \) the following holds:

$$\begin{aligned} \liminf _{k\rightarrow \infty } \frac{\#(\Lambda _k \cap \mathbb {B}(x,r))}{\dim {\mathcal {P}}_k}\ge \mu _{eq}(\mathbb {B}(x,r)). \end{aligned}$$

Here \(\mu _{eq}\) is the normalized equilibrium measure associated to \(\Omega \).

Let us see how, with a similar technique, a corresponding density condition can be obtained as well in the case of interpolating sequences.

Proof

(Proof of Theorem 1) Let \(F_k\subset \mathcal {P}_k\) be the subspace spanned by

$$\begin{aligned} \kappa _\lambda (x)=\frac{K_k(\lambda ,x)}{\sqrt{\beta _k(\lambda )}}\qquad \text {for all } \lambda \in \Lambda _k. \end{aligned}$$

Denote by \(g_\lambda \) the dual (biorthogonal) basis to \(\kappa _\lambda \) in \(F_k\). The following properties hold:

  • Expanding the reproducing kernel of \(F_k\) in the biorthogonal bases \(\{\kappa _\lambda \}\) and \(\{g_\lambda \}\) we obtain, on the diagonal:

    $$\begin{aligned} \sum _{\lambda \in \Lambda _k} \kappa _\lambda (x)g_\lambda (x)={\mathcal {K}}_k(x,x), \end{aligned}$$

    where \({\mathcal {K}}_k(x,y)\) is the reproducing kernel of the subspace \(F_k\).

  • The norms \(\Vert g_\lambda \Vert \) are uniformly bounded, since \(\{\kappa _\lambda \}\) is a Riesz sequence with bounds uniform in k.

  • \(g_\lambda (\lambda )=\sqrt{\beta _k(\lambda )}\). This is due to the biorthogonality and the reproducing property.

We are going to prove that the measures \(\sigma _k=(1/\dim \mathcal P_k)\sum _{\lambda \in \Lambda _k}\delta _\lambda \) and \(d\nu _k=(1/\dim {\mathcal {P}}_k) {\mathcal {K}}_k(x,x)dV(x)\) are very close to each other. These are two positive measures, not probability measures in general, but they have the same total mass (equal to \(\#\Lambda _k/\dim {\mathcal {P}}_k\le 1\)), so their closeness can be quantified through the Vaserstein 1-distance; for an introduction to the Vaserstein distance see for instance [17]. We want to prove that \(W(\sigma _k,\nu _k)\rightarrow 0\), since the Vaserstein distance metrizes the weak-* topology.

In this case, it is known that \({\mathcal {K}}_k(x,x)\le K_k(x,x)\) and \((1/\dim {\mathcal {P}}_k) \beta _k(x)\,dV(x) \rightarrow \mu _{eq}\) in the weak-* topology, where \(\mu _{eq}\) is the normalized equilibrium measure associated to \(\Omega \) (see for instance [1]). Therefore, \(\limsup _k \sigma _k\le \mu _{eq}\).

In order to prove that \(W(\sigma _k,\nu _k)\rightarrow 0\) we use a signed transport plan as in [12]:

$$\begin{aligned} \rho _k(x,y)=\frac{1}{\dim {\mathcal {P}}_k} \sum _{\lambda \in \Lambda _k} \delta _\lambda (y) \times g_\lambda (x)\kappa _\lambda (x)\,dV(x) \end{aligned}$$

It has the right marginals, \(\sigma _k\) and \(\nu _k\) and we can estimate the integral

$$\begin{aligned} W(\sigma _k,\nu _k)\le \iint _{\Omega \times \Omega } |x-y|d|\rho _k|=O(1/\sqrt{k}). \end{aligned}$$

The only point that merits a clarification is that we need the inequality:

$$\begin{aligned} \begin{aligned} \frac{1}{\dim {\mathcal {P}}_k} \sum _{\lambda \in \Lambda _k} \int _\Omega&|\lambda -x|^2 \frac{|K_k(\lambda ,x)|^2}{ K_k(\lambda ,\lambda )}\,dV(x)\lesssim \\&\frac{1}{\dim {\mathcal {P}}_k} \iint _{\Omega \times \Omega } |y-x|^2 |K_k(y,x)|^2\, dV(x)dV(y). \end{aligned} \end{aligned}$$

This is problematic. We know that \(\Lambda _k\) is an interpolating sequence for the polynomials of degree k. Thus the normalized reproducing kernels at \(\lambda \in \Lambda _k\) form a Bessel sequence for \(\mathcal {P}_k\) but the inequality that we need is applied to \(K_k(x,y)(y_i-x_i)\) for all \(i=1,\ldots , n\), that is to a polynomial of degree \(k+1\). We are going to show that if \(\Lambda _k\) is an interpolating sequence for the polynomials of degree k it is also a Carleson sequence for the polynomials of degree \(k+1\).

Observe that, since it is interpolating, it is uniformly separated, i.e. the balls \(B(\lambda , \varepsilon /k)\), \(\lambda \in \Lambda _k\), are disjoint. In particular, this means that

$$\begin{aligned} \mu _{k} (B(z, 1/(k+1))) \lesssim V(B(z,1/(k+1))). \end{aligned}$$

Thus \(\mu _k\) is a Carleson measure for \(\mathcal {P}_{k+1}\).
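The counting behind this step can be illustrated with a toy computation (ours, with arbitrarily chosen constants): an \(\varepsilon /k\)-separated set in the plane puts a bounded number of points, uniformly in k, in every ball of radius \(1/(k+1)\).

```python
import numpy as np

def separated_set(k, eps=0.5):
    # a grid of spacing eps/k is automatically eps/k-separated; keep the
    # points lying in the planar ball of radius 1/2
    g = np.arange(-0.5, 0.5, eps / k)
    X, Y = np.meshgrid(g, g)
    pts = np.column_stack([X.ravel(), Y.ravel()])
    return pts[np.linalg.norm(pts, axis=1) < 0.5]

def max_count(pts, r):
    # largest number of points inside a ball of radius r centred at a point
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    return int((d < r).sum(axis=1).max())

counts = [max_count(separated_set(k), 1.0 / (k + 1)) for k in (10, 20)]
```

The maximal count per ball is the same for both values of k, reflecting that the bound depends only on the separation constant and the dimension.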

Finally in [2, Thm. 17] it was proved that

$$\begin{aligned} \frac{1}{\dim {\mathcal {P}}_k} \iint _{\Omega \times \Omega } |y-x|^2 |K_k(y,x)|^2\, dV(x)dV(y) = O(1/k). \end{aligned}$$

\(\square \)
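The estimate from [2, Thm. 17] can be probed numerically in dimension one, for Lebesgue measure on \([-1,1]\) and the Legendre kernel (an illustrative sketch of ours; the quadrature is exact since the integrand is a polynomial).

```python
import numpy as np

def kernel_moment(k):
    # (1/dim P_k) * double integral over [-1,1]^2 of (x-y)^2 K_k(x,y)^2,
    # where K_k is the Legendre reproducing kernel; Gauss-Legendre
    # quadrature with k+2 nodes is exact for this degree-(2k+2) integrand
    nodes, weights = np.polynomial.legendre.leggauss(k + 2)
    norms = np.sqrt((2 * np.arange(k + 1) + 1) / 2)   # orthonormalization constants
    V = np.polynomial.legendre.legvander(nodes, k) * norms
    K = V @ V.T                                       # K[i, j] = K_k(nodes[i], nodes[j])
    X, Y = np.meshgrid(nodes, nodes, indexing="ij")
    W = np.outer(weights, weights)
    return float(np.sum(W * (X - Y) ** 2 * K ** 2)) / (k + 1)

moments = {k: kernel_moment(k) for k in (10, 20, 40)}
```

Doubling k roughly halves the value, consistent with the O(1/k) bound.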

From the behaviour of the kernel (2.2) on the diagonal it is easy to check that the kernel is both Bernstein-Markov (sub-exponential) and of moderate growth; see the definitions in [2]. From the characterization of sampling sequences proved in [2, Thm. 1], and with the obvious changes in the proof of the previous theorem, we deduce the following:

Theorem 8

Consider the space of polynomials \({\mathcal {P}}_k\) restricted to the ball \(\mathbb {B}\subset \mathbb {R}^n\) with the measure \(d\mu _a(x)=(1-|x|^2)^{a-1/2}dV\). Let \(\Lambda =\{\Lambda _k \}\) be a sequence of sets of points in \(\mathbb {B}\).

  • If \(\Lambda \) is a sampling sequence then, for each \(x\in \mathbb {B}\) and \(r>0\)

    $$\begin{aligned} \liminf _{k\rightarrow \infty } \frac{\# ( \Lambda _k \cap \mathbb {B}(x,r) )}{\dim {\mathcal {P}}_k}\ge \mu _{eq}(\mathbb {B}(x,r)). \end{aligned}$$
  • If \(\Lambda \) is an interpolating sequence then, for each \(x\in \mathbb {B}\) and \(r>0\)

    $$\begin{aligned} \limsup _{k\rightarrow \infty } \frac{\# ( \Lambda _k \cap \mathbb {B}(x,r) )}{\dim {\mathcal {P}}_k}\le \mu _{eq}(\mathbb {B}(x,r)). \end{aligned}$$

Remark

In the statements of Theorems 1, C and 8 we could have replaced \({\mathbb {B}}(x, r)\) by any open set; in particular, they could have been formulated with balls \(B(x,r)\) in the anisotropic metric.

One can construct interpolating or sampling sequences, with density arbitrarily close to the critical density, consisting of points \(\{\Lambda _k\}\) such that the corresponding Lagrange interpolating polynomials are uniformly bounded. In particular, the above inequalities are sharp. For a similar construction on the sphere see [13].

3.4 Orthonormal Basis of Reproducing Kernels

Sampling and interpolation are somehow dual concepts. Sequences which are both sampling and interpolating (i.e. complete interpolating sequences) are optimal in some sense because they are at the same time minimal sampling sequences and maximal interpolating sequences. In the setting of Theorem 8, such sequences satisfy

$$\begin{aligned} \lim _{k\rightarrow \infty } \frac{\# ( \Lambda _k \cap \mathbb {B}(x,r) )}{\dim {\mathcal {P}}_k}= \mu _{eq}(\mathbb {B}(x,r)). \end{aligned}$$

In general domains, to prove or disprove the existence of such sequences is a difficult problem [14].

If \(\Lambda =\{ \Lambda _k \}\) is a complete interpolating sequence the corresponding reproducing kernels \(\{ \kappa _{k,\lambda } \}\) form a Riesz basis in the space of polynomials (uniformly in the degree). An obvious example of complete interpolating sequences would be sequences providing an orthonormal basis of reproducing kernels. In dimension 1, with the weight \((1-x^2)^{a-1/2}\), a basis of Gegenbauer polynomials \(\{ G^{(a)}_j \}_{j=0,\dots , k}\) is orthogonal and the reproducing kernel in \({\mathcal {P}}_k\) evaluated at the zeros of the polynomial \(G^{(a)}_{k+1}\) gives an orthogonal sequence. In our last result we prove that in higher dimensions there is no orthogonal basis of \({\mathcal {P}}_k\) consisting of reproducing kernels with the measure \(d\mu _a(x)=(1-|x|^2)^{a-1/2}dV(x)\).
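The one-dimensional phenomenon is easy to verify numerically in the Chebyshev case \(a=0\) (weight \((1-x^2)^{-1/2}\)); the following sketch, ours and purely illustrative, checks that the kernel \(K_k\) vanishes at pairs of distinct zeros of \(T_{k+1}\), as the Christoffel-Darboux formula predicts.

```python
import numpy as np

k = 6
j = np.arange(k + 1)
# zeros of the Chebyshev polynomial T_{k+1}
nodes = np.cos((2 * np.arange(k + 1) + 1) * np.pi / (2 * (k + 1)))
# orthonormal Chebyshev polynomials for the weight (1 - x^2)^{-1/2}:
# T_j(x) = cos(j arccos x), with squared norms pi (j = 0) and pi/2 (j >= 1)
V = np.cos(np.outer(np.arccos(nodes), j))
V = V * np.sqrt(np.where(j == 0, 1 / np.pi, 2 / np.pi))
K = V @ V.T   # K[i, m] = K_k(nodes[i], nodes[m])
```

Up to rounding, K is a multiple of the identity matrix: the normalized reproducing kernels at these nodes form an orthonormal basis of \(\mathcal {P}_k\).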

Our first goal is to show that sampling sequences are dense enough (Proposition 10). Recall that in \(\mathbb {B}(0,r)\), for \(r<1\), the Euclidean metric and the metric \(\rho \) are equivalent. In our first result we prove that the right-hand side of (1.3) and the separation condition imply that every ball of sufficiently large radius on the scale 1/k, centred not too far from the origin, contains points of the sequence.

Proposition 9

Let \(d\mu _a (x)=(1-|x|^2)^{a-1/2}dV(x)\) for \(a\ge 0\) be the weight in the unit ball \(\mathbb {B}\subset \mathbb {R}^n\). Let \(\Lambda _k\subset \mathbb {B}\) be a finite subset and \(C,\varepsilon >0\) be constants such that

$$\begin{aligned} \int _\mathbb {B}|P(x)|^2 d\mu _a(x) \le C \sum _{\lambda \in \Lambda _k} \frac{|P(\lambda )|^2}{K_{k}(\mu _a;\lambda ,\lambda )}, \end{aligned}$$
(3.6)

for all \(P\in \mathcal {P}_k\) and

$$\begin{aligned} \inf _{\begin{array}{c} \lambda ,\lambda '\in \Lambda _k \\ \lambda \ne \lambda ' \end{array} }\rho (\lambda ,\lambda ')\ge \frac{\varepsilon }{k}. \end{aligned}$$

Given \(x_0\) with \(|x_0|=C_0<1/4\), there exists \(A>0\) depending only on \(C,\varepsilon ,n\) and \(a\) such that

$$\begin{aligned} \Lambda _k \cap \mathbb {B}(x_0,A/k)\ne \emptyset . \end{aligned}$$

Proof

By the construction of the function \(L_\ell ^a (x,y)\), as we have already mentioned, for any \(\ell \ge 0\)

$$\begin{aligned} K_\ell (\mu _a; x,x)\le \int _{\mathbb {B}}L_\ell ^a (x,y)^2 d\mu _a (y)\le K_{2\ell }(\mu _a; x,x). \end{aligned}$$

Let \(P(x)=L^a_{[k/2]}(x,x_0)\in \mathcal {P}_k\). Suppose that for some \(M>\varepsilon \) and \(k\ge 1\)

$$\begin{aligned} \Lambda _k \cap \mathbb {B}(x_0,M/k)=\emptyset . \end{aligned}$$

From the property above, the hypothesis and Proposition 4 we get

$$\begin{aligned} k^n \simeq K_{[k/2]} (\mu _a; x_0,x_0)\le \int _{\mathbb {B}} P(y)^2 d\mu _a (y)\lesssim \sum _{|\lambda -x_0|>M/k} \frac{|P(\lambda )|^2}{K_k(\mu _a ;\lambda ,\lambda )}. \end{aligned}$$
(3.7)

From [6, Lem. 11.3.6], given \(x\in \mathbb {B}\) and \(0<r<\pi \)

$$\begin{aligned} \mu _a (B(x,r))\simeq r^n (\sqrt{1-|x|^2}+r)^{2a}, \end{aligned}$$
(3.8)

and therefore

$$\begin{aligned} \mu _a(B(x,r))\simeq {\left\{ \begin{array}{ll} r^{n+2a}&{} \text{ if }\;\; 1-|x|^2<r^2, \\ r^n (1-|x|^2)^a &{} \text{ otherwise }, \end{array}\right. } \end{aligned}$$
(3.9)

and

$$\begin{aligned} \mu _a (B(x,r))\gtrsim {\left\{ \begin{array}{ll} r^{n+2 a}&{} \text{ if }\;\; |x|>\frac{1}{2}, \\ r^n &{} \text{ otherwise }. \end{array}\right. } \end{aligned}$$
(3.10)

From (4) in Theorem A, the separation of the sequence, and the estimate (3.10) we get

$$\begin{aligned} \begin{aligned} 0&<c \le \sum _{|\lambda -x_0|>M/k} \frac{1}{(1+[k/2]\rho (x_0,\lambda ))^{2\gamma }} \\&= \sum _{|\lambda -x_0|>M/k} \frac{1}{ \mu _a (B(\lambda ,\varepsilon /2k))} \int _{B(\lambda ,\varepsilon /2k)} \frac{d \mu _a (x)}{(1+[k/2]\rho (x_0,\lambda ))^{2\gamma }} \\&\lesssim \left[ \sum _{\frac{M}{k}<|\lambda -x_0|<\frac{1}{2}}+ \sum _{\frac{1}{2}<|\lambda -x_0|} \right] \frac{1}{ \mu _a (B(\lambda ,\varepsilon /2k))} \int _{B(\lambda ,\varepsilon /2k)} \frac{d \mu _a (x)}{(1+ 2 k \rho (x_0,x))^{2 \gamma }} \\&\lesssim \left( \frac{k}{\varepsilon }\right) ^n \int _{M/k}^{3/4} \frac{r^{n-1}}{(k r)^{2\gamma }}dr + \frac{k^{2a +n-2 \gamma }}{\varepsilon ^{2 a +n}} \mu _a (B(0,1/2)^c). \end{aligned} \end{aligned}$$
(3.11)

Now, for \(\gamma =n+a\) we get

$$\begin{aligned} 0<c\le \frac{1}{k^{n+2a}}\left[ -\frac{1}{r^{n+2a}} \right] _{r=M/k}^{3/4}+ \frac{1}{k^n} , \end{aligned}$$

and hence a uniform (i.e. independent of k) upper bound for M. Taking \(A=A(C,\varepsilon ,n,a)>M\) we get the result. \(\square \)
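The two-sided estimate (3.8) used above is easy to probe numerically for \(n=1\) (an illustrative sketch of ours; we take \(\rho (x,y)=|\arccos x-\arccos y|\) as the metric and \(a=1\)).

```python
import numpy as np

def mu_ball(x, r, a, m=20001):
    # mu_a(B(x, r)) for n = 1, where B(x, r) is a ball in the metric
    # rho(x, y) = |arccos x - arccos y|; the substitution t = cos(phi)
    # turns (1 - t^2)^(a - 1/2) dt into sin(phi)^(2a) dphi
    theta = np.arccos(x)
    phi = np.linspace(max(theta - r, 0.0), min(theta + r, np.pi), m)
    f = np.sin(phi) ** (2 * a)
    return float(np.sum((phi[1:] - phi[:-1]) * (f[1:] + f[:-1]) / 2))

def ratio(x, r, a=1.0):
    # compare with the claimed size r * (sqrt(1 - x^2) + r)^(2a)
    return mu_ball(x, r, a) / (r * (np.sqrt(1 - x ** 2) + r) ** (2 * a))

ratios = [ratio(x, r) for x in (0.0, 0.9, 0.999) for r in (0.01, 0.3)]
```

The computed ratios stay within fixed constants, uniformly over positions near the centre and near the boundary and over small and large radii.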

Proposition 10

Let \(\Lambda =\{\Lambda _k\}\) be a separated sampling sequence for \(\mathbb {B}\subset \mathbb {R}^n\). Then there exists \(M_0>0\) such that for any \(M>M_0\) and all \(k\ge 4M\)

$$\begin{aligned} \# \left( \Lambda _k \cap \mathbb {B}(0,M/k)\right) \simeq M^n. \end{aligned}$$

Proof

Let \(\varepsilon >0\) be the constant from the separation, i.e.

$$\begin{aligned} \inf _{\begin{array}{c} \lambda ,\lambda '\in \Lambda _k \\ \lambda \ne \lambda ' \end{array} }\rho (\lambda ,\lambda ')\ge \frac{\varepsilon }{k}. \end{aligned}$$

Assume that \(M/k\le 1/2\). For \(\lambda \in \Lambda _k \cap \mathbb {B}(0,M/k)\) we have \(V(\mathbb {B}(\lambda ,\varepsilon /k))\simeq (\varepsilon /k)^n\) and therefore

$$\begin{aligned} \# \left( \Lambda _k \cap \mathbb {B}(0,M/k)\right) \left( \frac{\varepsilon }{k}\right) ^n \lesssim \left( \frac{M}{k}\right) ^n. \end{aligned}$$
(3.12)

For the other inequality, take the constant A (assume \(A>\varepsilon \)) given in Proposition 9, which depends on the sampling and separation constants of \(\Lambda \), on n and on a. For \(M>A\) and \(k>0\) such that \(\mathbb {B}(0,M/k)\subset \mathbb {B}(0,1/4)\) one can find N disjoint balls \(\mathbb {B}(x_j,A/k)\), \(j=1,\dots ,N\), contained in \(\mathbb {B}(0,M/k)\) and such that

$$\begin{aligned} N V(\mathbb {B}(0,A/k))>\frac{1}{2} V(\mathbb {B}(0,M/k)). \end{aligned}$$

Observe that by Proposition 9 each ball \(\mathbb {B}(x_j,A/k)\) contains at least one point from \(\Lambda _k\) and therefore

$$\begin{aligned} \# \left( \Lambda _k \cap \mathbb {B}(0,M/k)\right) \ge N \gtrsim \left( \frac{M}{A}\right) ^n. \end{aligned}$$

\(\square \)

We will use the following result from [9].

Theorem D

Let \(\mathbb {B} \subset \mathbb {R}^{n}\), \(n>1\), be the unit ball. There do not exist infinite subsets \(\Lambda \subset \mathbb {R}^{n}\) such that the exponentials \(e^{i\langle x, \lambda \rangle }\), \(\lambda \in \Lambda \), are pairwise orthogonal in \(L^{2}(\mathbb {B})\). Equivalently, there do not exist infinite subsets \(\Lambda \subset \mathbb {R}^{n}\) such that \(|\lambda -\lambda '|\) is a zero of \(J_{n/2}\), the Bessel function of order n/2, for all distinct \(\lambda ,\lambda '\in \Lambda \).

Following ideas from [8] we can now prove our main result about orthogonal bases, Theorem 2. A similar argument can be used on the sphere to study tight spherical designs.

Proof of Theorem 2

The following result can be deduced from [11, Thm. 1.7]: Given \(u,v \in \mathbb {R}^n\), consider two sequences \(\{u_k\}_k\) and \(\{v_k\}_k\) in \(\mathbb {R}^n\) that converge to u and v respectively, and such that \(u_k/k,v_k/k\in \mathbb {B}\) for every \(k\ge 1\). Then

$$\begin{aligned} \lim _{k\rightarrow \infty } \frac{K_k(\mu _a,\frac{u_k}{k},\frac{v_k}{k})}{K_k(\mu _a,0,0)}=\frac{J^*_{n/2}(|u-v|)}{J^*_{n/2}(0)}. \end{aligned}$$
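For \(n=1\) and \(a=1/2\) (Lebesgue measure, Legendre polynomials) the right-hand side reduces to \(\sin (|u-v|)/|u-v|\), and the scaling limit can be probed numerically (an illustrative sketch of ours, not taken from [11]).

```python
import numpy as np

def legendre_kernel(k, x, y):
    # reproducing kernel of P_k on [-1, 1] with Lebesgue measure,
    # via the orthonormal Legendre expansion
    j = np.arange(k + 1)
    Px = np.polynomial.legendre.legvander(np.array([x]), k)[0]
    Py = np.polynomial.legendre.legvander(np.array([y]), k)[0]
    return float(np.sum((2 * j + 1) / 2 * Px * Py))

k, u, v = 500, 1.0, 3.0
scaled = legendre_kernel(k, u / k, v / k) / legendre_kernel(k, 0.0, 0.0)
limit = np.sin(u - v) / (u - v)   # J*_{1/2} normalization in dimension one
```

Already at this degree the scaled kernel is close to the predicted Bessel (here sinc) limit.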

Let \(\Lambda _k\) be such that \(\{\kappa _{\lambda }\}_{\lambda \in \Lambda _k}\) is an orthonormal basis of \(\mathcal {P}_k\) with respect to the measure \(d\mu _a=(1-|x|^2)^{a-1/2}dV\). Then

$$\begin{aligned} K_k(\mu _a,\lambda _{k},\lambda _{k}')=0, \end{aligned}$$

for \(\lambda _{k}\ne \lambda _{k}'\in \Lambda _k\).

We know that \(\Lambda _k\) is uniformly separated: for some \(\varepsilon >0\)

$$\begin{aligned} \rho (\lambda _{k},\lambda _{k}')\ge \frac{\varepsilon }{k}. \end{aligned}$$

Then the sets \(X_k=k(\Lambda _k\cap \mathbb {B}(0,1/2))\subset \mathbb {R}^n\) are uniformly separated

$$\begin{aligned} |\lambda - \lambda '|\gtrsim \varepsilon ,\;\;\lambda \ne \lambda '\in X_k, \end{aligned}$$

and \(X_k\) converges weakly to some uniformly separated set \(X\subset \mathbb {R}^n\). The limit is not empty because, by Proposition 10, for any \(M>0\),

$$\begin{aligned} \# \left( \Lambda _k \cap \mathbb {B}(0,M/k)\right) \simeq M^n. \end{aligned}$$

Observe that this last result would be a direct consequence of the necessary density condition for complete interpolating sets if we could take balls of radius r/k for a fixed \(r>0\) in the condition. Finally, we obtain an infinite set X such that for \(u\ne v\in X\)

$$\begin{aligned} J^*_{n/2}(|u- v|)=0, \end{aligned}$$

in contradiction with Theorem D. \(\square \)

Remark

Note that the fact that the interpolating sequence \(\{\Lambda _k\}\) is complete was used only to guarantee that \(\# \left( \Lambda _k \cap \mathbb {B}(0,M/k)\right) \simeq M^n\). Thus, the above result extends to sequences \(\{\Lambda _k\}\) such that \(\{\kappa _{k,\lambda }\}_{\lambda \in \Lambda _k}\) is orthonormal (but not necessarily a basis of \(\mathcal {P}_k\)), provided \(\Lambda _k \cap \mathbb {B}(0,M/k)\) contains enough points.