Abstract
A theorem of Grothmann states that a sequence of polynomials interpolating a holomorphic function f on a compact set E is maximally convergent to f only if a subsequence of the normalized counting measures of the interpolation points converges to the equilibrium distribution of E in the weak* sense. Grothmann’s proof applies only to connected sets E. The objective of this paper is to provide a new necessary condition for maximal convergence which is the crucial tool for proving Grothmann’s theorem for unconnected sets E.
1 Introduction
For \(B \subset {\mathbb {C}}\), we denote by \({\overline{B}}\) its closure and by \(\partial {B}\) the boundary of B and we use \(\Vert \cdot \Vert _B\) for the supremum norm over B. Let \({\mathcal {A}}(B)\) be the class of functions that are holomorphic (i.e., analytic and single-valued) in a neighborhood of B, and we denote by \({\mathcal {P}}_n\) the set of algebraic polynomials of degree at most n.
Let E be a compact subset of the complex plane \({\mathbb {C}}\), and let \({\mathcal {M}}(E)\) be the collection of all probability measures supported on E. The logarithmic potential of \(\mu \in {\mathcal {M}}(E)\) is then defined by
$$\begin{aligned} U^{\mu }(z) \;=\; \int \log \frac{1}{|z-t|}\, d\mu (t), \end{aligned}$$
and the logarithmic energy \(I(\mu )\) by
$$\begin{aligned} I(\mu ) \;=\; \int U^{\mu }(z)\, d\mu (z) \;=\; \int \!\!\int \log \frac{1}{|z-t|}\, d\mu (t)\, d\mu (z). \end{aligned}$$
Let
$$\begin{aligned} V(E) \;:=\; \underset{\mu \in {\mathcal {M}}(E)}{\inf }\; I(\mu ), \end{aligned}$$
then V(E) is either finite or \(V(E) = +\infty \). The quantity
$$\begin{aligned} \mathrm{cap} \; E \;:=\; e^{-V(E)} \end{aligned}$$
is called the logarithmic capacity, or simply the capacity, of E.
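As a standard illustration (added here; it is not part of the original argument), the capacity of a closed disk can be computed directly from these definitions:

```latex
% Example: E the closed disk of radius R.
% The normalized arc length mu on the circle |z| = R has potential
\[
  U^{\mu }(z) \;=\; \log \frac{1}{R} \qquad (|z| \le R),
\]
% so I(mu) = log(1/R); this measure minimizes the energy, hence
\[
  V(E) \;=\; \log \frac{1}{R},
  \qquad
  \mathrm{cap}\; E \;=\; e^{-V(E)} \;=\; R .
\]
```

In particular, the closed unit disk has capacity 1.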
Let E be compact in the complex plane \({\mathbb {C}}\) with connected complement \(\Omega = {\overline{{\mathbb {C}}}} \setminus E\) in the extended plane \({\overline{{\mathbb {C}}}}\). The domain \(\Omega \) is called regular if the Green function \(G(z) = G(z,\infty )\) on \(\Omega \) with pole at \(\infty \) tends to 0 as \(z\in \Omega \) tends to the boundary \(\partial \Omega \) of \(\Omega \). If \(\Omega \) is regular, then \(\mathrm{cap} \; E > 0\) and there exists a unique measure \(\mu _E \in {\mathcal {M}}(E)\) such that
$$\begin{aligned} I(\mu _E) \;=\; V(E), \end{aligned}$$
and we have
$$\begin{aligned} G(z) \;=\; \log \frac{1}{\mathrm{cap} \; E} \,-\, U^{\mu _E}(z), \qquad z \in \Omega . \end{aligned}$$
The measure \(\mu _E\) is called the equilibrium measure of E.
In the following, let E be compact in \({\mathbb {C}}\) with regular connected complement \(\Omega = {\overline{{\mathbb {C}}}} \setminus E\). Then, we define for \(\sigma > 1\) the Green domains \(E_\sigma \) by
$$\begin{aligned} E_\sigma \;:=\; E \,\cup \,\left\{ z \in \Omega \, : \, G(z) < \log \sigma \right\} \end{aligned}$$
with boundary \(\Gamma _\sigma := \partial E_\sigma \). Since \(\Omega \) is regular, the Green domain \(E_\sigma \) consists of a finite number of Jordan regions which are mutually exterior (cf. Walsh [4], Chapter 4, Section 4.1). Only if E is connected is each \(E_\sigma \) a single Jordan region for every \(\sigma > 1\).
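For orientation (an added example, using the classical Green function of the exterior of the unit disk): if \(E = \{\,|z| \le 1\,\}\), then \(G(z) = \log |z|\) for \(|z| > 1\), and the Green domains are concentric disks:

```latex
% Example: E the closed unit disk, G(z) = log|z| on its complement.
\[
  E_\sigma \;=\; \left\{ \, z \, : \, |z| < \sigma \, \right\} ,
  \qquad
  \Gamma _\sigma \;=\; \left\{ \, z \, : \, |z| = \sigma \, \right\} .
\]
% Here each E_sigma is a single Jordan region, since E is connected;
% for a disconnected E the domains E_sigma remain disconnected until
% the level log(sigma) passes a critical value of G.
```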
If \(f \in {\mathcal {A}}(E)\), then there exist \(\rho > 1\) and polynomials \(p_n \in {\mathcal {P}}_n\), \(n \in {\mathbb {N}}\), such that
$$\begin{aligned} \underset{n\rightarrow \infty }{\lim \sup }\; \Vert f - p_n\Vert ^{1/n}_{E} \;\le \; \frac{1}{\rho } \end{aligned}$$
due to a result of Walsh [4].
Let \(\rho (f)\) denote the maximal parameter \(\rho \), \(1< \rho \le \infty \), such that f is holomorphic on \(E_\rho \). Then, there exist polynomials \(p_n \in {\mathcal {P}}_n\) such that
$$\begin{aligned} \underset{n\rightarrow \infty }{\lim \sup }\; \Vert f - p_n\Vert ^{1/n}_{E} \;=\; \frac{1}{\rho (f)}. \end{aligned}$$(1.1)
Such a sequence \(p_n \in {\mathcal {P}}_n\) is called maximally convergent. Moreover, Walsh ([4], Sect. 4.7, Theorem 7, Theorem 8 and its Corollary, pp. 79–81) proved that for such maximally convergent polynomials
$$\begin{aligned} \underset{n\rightarrow \infty }{\lim \sup }\; \Vert f - p_n\Vert ^{1/n}_{\Gamma _\sigma } \;=\; \frac{\sigma }{\rho (f)} \qquad \text {for all } 1< \sigma < \rho (f). \end{aligned}$$
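A classical example of a maximally convergent sequence (added for illustration): let E be the closed unit disk and \(f(z) = 1/(a - z)\) with \(|a| > 1\), so that \(\rho (f) = |a|\). The Taylor sections of f realize both rates above:

```latex
% Taylor sections of f(z) = 1/(a - z), |a| > 1, on the closed unit disk:
\[
  p_n(z) \;=\; \sum _{k=0}^{n} \frac{z^{k}}{a^{k+1}},
  \qquad
  f(z) - p_n(z) \;=\; \frac{1}{a - z}\left( \frac{z}{a}\right) ^{n+1}.
\]
% Taking n-th roots of the sup norms on E and on the circle |z| = sigma:
\[
  \underset{n\rightarrow \infty }{\lim \sup }\,
  \Vert f - p_n\Vert ^{1/n}_{E} \;=\; \frac{1}{|a|},
  \qquad
  \underset{n\rightarrow \infty }{\lim \sup }\,
  \Vert f - p_n\Vert ^{1/n}_{\Gamma _\sigma } \;=\; \frac{\sigma }{|a|},
  \quad 1< \sigma < |a| .
\]
```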
Consider Lagrange–Hermite interpolation to f on point sets
$$\begin{aligned} Z_n \;=\; \left\{ z_{n,0}, z_{n,1}, \ldots , z_{n,n}\right\} \subset E, \qquad n \in {\mathbb {N}}, \end{aligned}$$
by polynomials \(p_n \in {\mathcal {P}}_n\). Then, it is known, due to Bernstein–Walsh, that the interpolation on such schemes \(Z_n\) yields (1.1) if the normalized counting measures \(\nu _n\), i.e.,
$$\begin{aligned} \nu _n \;:=\; \frac{1}{n+1} \sum _{i=0}^{n} \delta _{z_{n,i}} \end{aligned}$$
(where \(\delta _z\) denotes the unit point mass at z), satisfy
$$\begin{aligned} {\widehat{\nu }}_n \, {\mathop {\rightarrow }\limits ^{*}}\, \mu _E \quad \text {as} \; n \rightarrow \infty \end{aligned}$$
in the weak* sense, where \({\widehat{\nu }}_n\) denotes the balayage measure of \(\nu _n\) on the boundary of E, i.e., \({\widehat{\nu }}_n\) is the measure supported on the boundary of E such that
$$\begin{aligned} U^{{\widehat{\nu }}_n}(z) \;=\; U^{\nu _n}(z) \qquad \text {for all } z \in \Omega . \end{aligned}$$
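As a concrete instance of balayage (an added classical example): sweeping the point mass \(\delta _0\) at the center of the closed unit disk E onto the unit circle produces the normalized arc length measure, and the potentials agree on the complement of E:

```latex
% Balayage of delta_0 out of the unit disk onto |z| = 1:
% hat(delta_0) is the normalized arc length (1/(2 pi)) d(theta).
\[
  U^{\delta _0}(z) \;=\; \log \frac{1}{|z|},
  \qquad
  U^{{\widehat{\delta }}_0}(z) \;=\;
  \begin{cases}
    \log \dfrac{1}{|z|}, & |z| \ge 1,\\[4pt]
    0, & |z| < 1,
  \end{cases}
\]
% so the two potentials coincide at every point outside the disk,
% which is exactly the defining property of the balayage measure.
```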
Conversely, Grothmann stated the following theorem.
Theorem
(Grothmann [1]): Let \(p_n\) be the polynomial of interpolation to f on \(Z_n \subset E\). If \(f \in {\mathcal {A}}(E)\), \(1< \rho (f)< \infty \), and if
$$\begin{aligned} \underset{n\rightarrow \infty }{\lim \sup }\; \Vert f - p_n\Vert ^{1/n}_{E} \;=\; \frac{1}{\rho (f)}, \end{aligned}$$
then \(\mu _E\) is a weak* limit point of \(\left\{ {\widehat{\nu }}_n\right\} _{n \in {\mathbb {N}}}\).
The proof given in [1] applies only if E is connected or, at least, if \(E_{\rho (f)}\) is connected. Hence, one objective of this paper is to provide a proof of Grothmann’s theorem even for unconnected sets E. The crucial tool will be a new necessary condition for maximally convergent polynomials which seems to be of interest in itself.
2 Maximal Convergence and Interpolation
Let \(E \subset {\mathbb {C}}\) be compact with regular connected complement \(\Omega = {\overline{{\mathbb {C}}}} \setminus E\) and let \(p_n \in {\mathcal {P}}_n\), \(n \in {\mathbb {N}}\), denote polynomials such that
$$\begin{aligned} \underset{n\rightarrow \infty }{\lim \sup }\; \Vert f - p_n\Vert ^{1/n}_{E} \;=\; \frac{1}{\rho (f)}, \end{aligned}$$
where \(\rho (f)\) is the maximal parameter of holomorphy of f and \(1< \rho (f) < \infty \).
The Green domains \(E_r\), \(1< r < \infty \), consist of a finite number of disjoint regions \(E_r^{i}\),
$$\begin{aligned} E_r \;=\; \bigcup _{i=1}^{l_r} E^{i}_{r}. \end{aligned}$$(2.1)
Each \(E^{i}_{r}\) is a Jordan region, and we write \(\Gamma _r^{i}=\partial E^{i}_{r}\). Then, the boundary \(\Gamma _r = \partial E_r\) is
$$\begin{aligned} \Gamma _r \;=\; \bigcup _{i=1}^{l_r} \Gamma ^{i}_{r}, \end{aligned}$$
and we note that \(\Gamma ^{i}_{r}\) and \(\Gamma ^{j}_{r}\) may have points in common if \(i \ne j\), but only a finite number of points (cf. Walsh [4], Chapter 4, Section 4.1).
Our first result is a necessary condition for maximal convergence that is new if E is not connected.
Proposition 1
Let \(f\in {\mathcal {A}}(E_\rho )\), \(1< \rho < \infty \), and let \(p_n \in {\mathcal {P}}_n, n \in {\mathbb {N}}\), be polynomials such that
$$\begin{aligned} \underset{n\rightarrow \infty }{\lim \sup }\; \Vert f - p_n\Vert ^{1/n}_{E} \;\le \; \frac{1}{\rho }. \end{aligned}$$(2.2)
If \(1< \sigma < \rho \) and if
$$\begin{aligned} \underset{n\rightarrow \infty }{\lim \sup }\;\; \underset{1 \le i \le l_\sigma }{\min }\; \Vert f - p_n\Vert ^{1/n}_{\Gamma ^{i}_{\sigma }} \;<\; \frac{\sigma }{\rho }, \end{aligned}$$(2.3)
then the maximal parameter \(\rho (f)\) of holomorphy of f satisfies \(\rho (f) > \rho \).
As a consequence of Proposition 1, we get immediately
Theorem 1
Let \(f\in {\mathcal {A}}(E)\), and let \(1< \sigma< \rho (f) < \infty \). Then, the sequence \(\left\{ p_n\right\} _{n\in {\mathbb {N}}}\) with \(p_n\in {\mathcal {P}}_n\) is maximally convergent to f if and only if
$$\begin{aligned} \underset{n\rightarrow \infty }{\lim \sup }\;\; \underset{1 \le i \le l_\sigma }{\min }\; \Vert f - p_n\Vert ^{1/n}_{\Gamma ^{i}_{\sigma }} \;=\; \frac{\sigma }{\rho (f)}. \end{aligned}$$(2.4)
If E is connected, then \(l_\sigma = 1\) for any \(\sigma \) and Theorem 1 coincides with the well-known characterization of Bernstein–Walsh.
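To illustrate why the minimum over the components of \(\Gamma _\sigma \) matters (an added remark with a hypothetical choice of E): take \(E = [-2,-1] \cup [1,2]\). For \(\sigma \) close to 1, the Green domain \(E_\sigma \) consists of two Jordan regions, one around each interval, so \(l_\sigma = 2\) and the condition (2.4) of Theorem 1 only involves

```latex
% For E = [-2,-1] U [1,2] and sigma close to 1:
\[
  E_\sigma \;=\; E^{1}_{\sigma } \cup E^{2}_{\sigma },
  \qquad l_\sigma = 2,
\]
% so the characterization of maximal convergence constrains only
\[
  \underset{1 \le i \le 2}{\min }\;
  \Vert f - p_n\Vert ^{1/n}_{\Gamma ^{i}_{\sigma }},
\]
% i.e., the smaller of the two boundary errors, in contrast to the
% connected case, where l_sigma = 1 and the full norm on Gamma_sigma enters.
```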
Next, we want to use the above results to characterize interpolating polynomials converging maximally to f by the distribution of the interpolation points.
Proposition 2
Let \(f\in {\mathcal {A}}(E_\rho )\), \(1< \rho < \infty \), and let \(p_n \in {\mathcal {P}}_n\) be the interpolating polynomial to f on the point set \(Z_n \subset E\), \(n\in {\mathbb {N}}\), with
$$\begin{aligned} \underset{n\rightarrow \infty }{\lim \sup }\; \Vert f - p_n\Vert ^{1/n}_{\Gamma _{\sigma ^{*}}} \;\le \; \frac{\sigma ^{*}}{\rho } \end{aligned}$$(2.5)
for all \(\sigma ^{*}\), \( 1< \sigma ^{*} <\rho \).
Let \(\Lambda \subset {\mathbb {N}}\) and let \(\mu _E\) be not a weak* limit point of \({\widehat{\nu }}_n\), \(n \in \Lambda \). If \(\sigma \) is fixed with \(1< \sigma < \rho \), then for all \(n \in \Lambda \) there exists an index s(n), \(1 \le s(n) \le l_\sigma \), such that the strict inequality
$$\begin{aligned} \Vert f - p_n\Vert ^{1/n}_{\Gamma ^{s(n)}_{\sigma }} \;<\; \frac{\sigma }{\rho } \end{aligned}$$(2.6)
holds.
Finally, we combine Proposition 2 with Proposition 1 to obtain a characterization of maximally converging interpolation polynomials.
Theorem 2
Let \(f\in {\mathcal {A}}(E)\) with \(1< \rho (f) < \infty \), and let \(\left\{ p_n\right\} _{n\in {\mathbb {N}}}\) be maximally convergent to f on E. If \(p_n\) interpolates f at the interpolation point set \(Z_n\subset {E}\) with counting measure \(\nu _n\) and balayage measure \({\widehat{\nu }}_n\) on \(\partial E\), then the following holds:
(a) \(\mu _E\) is a weak* limit point of \(\left\{ {\widehat{\nu }}_n\right\} _{n \in {\mathbb {N}}}\).
(b) For every fixed \(\sigma \), \(1< \sigma < \rho (f)\), there exists a subset \(\Lambda \subset {\mathbb {N}}\) such that
$$\begin{aligned} \frac{\sigma }{\rho (f)}\; = \;\underset{n\rightarrow \infty }{\lim \sup }\; \Vert f - p_n\Vert ^{1/n}_{\Gamma _\sigma } \;=\; \underset{n\in \Lambda , n\rightarrow \infty }{\lim } \; \underset{1 \le i \le l_\sigma }{\min }\; \Vert f - p_n\Vert ^{1/n}_{\Gamma ^{i}_{\sigma }} \end{aligned}$$(2.7)
and
$$\begin{aligned} {{\widehat{\nu }}_n {\mathop {\rightarrow }\limits ^{*}}\mu _E \quad as \; { n \rightarrow \infty , n\in \Lambda }}. \end{aligned}$$
The first statement (a) is Grothmann’s theorem, proved here even for unconnected E. The second statement (b) describes the subsequences \(\Lambda \subset {\mathbb {N}}\) which lead to \(\mu _E\) as a weak* limit point of \({\widehat{\nu }}_n, n \in \Lambda \).
3 Proof of Proposition 1
The proof is based on constructing a telescoping series of f,
which is holomorphic in a neighborhood of \({\overline{E}}_\rho \).
Because of (2.2), the Lemma of Bernstein–Walsh implies that there exists for \(\varepsilon > 0 \) and \(1 \le r < \rho \) a number \(n_\varepsilon (r) \in {\mathbb {N}}\) such that
for \(n \ge n_\varepsilon (r)\). Because of (2.3), there exists a map
$$\begin{aligned} s :\, {\mathbb {N}} \rightarrow \left\{ 1, \ldots , l_\sigma \right\} \end{aligned}$$
and \(\delta > 0\) and \(n_1(\delta ) \in {\mathbb {N}}\) such that
for all \(n \ge n_1(\delta )\).
3.1 The Starting Telescoping Series
Let us fix a parameter \(\kappa > 1\), then the telescoping series
and the sequence \(\Lambda _1(\kappa ) := \left\{ m_{i}\right\} ^{\infty }_{i=1}\) is defined recursively:
Set \(m_1 := 1\). If \(m_i\) is defined and if there exists \(m > m_i\) with
then we define
Hence, \(\Lambda _1(\kappa ) = \left\{ m_{i}\right\} ^{\infty }_{i=1}\) has the following properties:
or
Next, we decompose \(\Lambda _1(\kappa )\) into
with
and
The next lemma shows how to estimate the norm of the difference
outside of \(E_r\). As an auxiliary tool, we use the harmonic measure
where \(\Gamma _r^{i} = \partial {E^{i}_{r}}\) is the boundary of the Jordan region \(E_r^{i}\) in the decomposition of \(E_r\) in (2.1), i.e., \(h_r^{i}\) is the harmonic function in \({\overline{{\mathbb {C}}}} \setminus \overline{E_r}\) that satisfies the boundary conditions \(h_r^{i} = 1\) on \(\Gamma _r^{i}\) and \(h_r^{i} = 0\) on \(\Gamma _r \setminus \Gamma _r^{i}\), except possibly for a finite number of points (cf. [3], p. 111, Section III.17, or [2]). Then,
and if we define for \(r^{*} > r\)
we obtain
Lemma 1
Let \(n,m \in {\mathbb {N}}\) with \(m < n\) and \(n/m \le \kappa \), and let \(1< r < \rho \) such that
and
where \(\delta _r > 0\). If \(r^{*} > r\), \(s(m)=s(n)\) and if
then there exists \(\delta ^{*}_{r^{*}} > 0\) and \(n^{*} = n^{*}(\kappa )\in {\mathbb {N}}\) such that
for \(m \ge n^{*}\), where
Proof
or
Fix \(0< \varepsilon < \log {\rho /\sigma }\). Because of (3.1), there exists \(n_\varepsilon (r) \in {\mathbb {N}}\) such that
for \(n \ge n_\varepsilon (r)\). Then,
or
Define
and
Then, H(z) is subharmonic in \({\overline{{\mathbb {C}}}} \setminus {\overline{E}}_r\) and the harmonic function
is an upper bound of H(z) in \({\mathbb {C}}\setminus {{\overline{E}}_r}\), where we have taken into account (3.14) and (3.15) and the definition of \(h^{i}_{r}\) for \(i=s(n)=s(m)\). Inserting \(z\in \Gamma _{r^{*}}\), we obtain
and
If we choose \(\kappa \) as in (3.11), we get with
Now, let us define
then \(\delta ^{*}_{r^{*}} > 0\), and we can choose n large enough and \(\varepsilon \) sufficiently small such that
Hence, we can find \(n^{*} = n^{*}(\kappa ) \in {\mathbb {N}}\) such that by (3.16) we obtain
for \(m \ge n^{*}\), and (3.12) and (3.13) are proven. \(\square \)
We know from (3.2) that
Let
and let \(1< \kappa < \kappa _1^{*}\) in the definition of \(\Lambda _1(\kappa )\) in (3.6)−(3.8), then we obtain by Lemma 1 and choosing \(r = \sigma \) and \(r^{*} = \rho \):
There exists \(\delta ^{*}_{1}\ > 0\) and \(n^{*}_{1} = n^{*}_{1}(\kappa ) \in {\mathbb {N}}\) such that
for all \(m_i \in \Lambda _{1,1}(\kappa )\), \(m_i \ge n^{*}_{1}\), and
Corollary 1
Let \(1< \kappa < \kappa ^{*}_{1}\) and assume that \(\Lambda _{1,2}(\kappa )\) is a finite set in the decomposition of \(\Lambda _1(\kappa )\) in (3.6). Then, the telescoping series (3.3) converges uniformly on compact subsets of a neighborhood of \(\overline{E_\rho }\), i.e., \(\rho (f) > \rho .\)
Proof
We apply the Bernstein–Walsh Lemma to the differences
and use the inequality (3.19). \(\square \)
Hence, Proposition 1 is proved for this special situation.
3.2 The Auxiliary Parameter \(1< \sigma _0 < \sigma \)
To restrict \(\kappa \) in the definition of \(\Lambda _1(\kappa )\) completely, we start with the decomposition of \(E_\sigma \) in (2.1),
Then, we can define a parameter \(\sigma _0, 1< \sigma _0 < \sigma \), such that the decomposition of \(E_{\sigma _0}\) into disjoint Jordan regions \(E^{i}_{\sigma _0}\),
satisfies
This can be achieved by the strict monotonicity of \(E_r\) with respect to r and the fact that the Green function G(z) of \(\Omega \) has only a finite number of critical points in \({\mathbb {C}}\setminus E \) (cf. Walsh [4], Chapter 4, Section 4.1).
Lemma 2
Let \(\delta > 0\) and let \(1< \sigma _0 < \sigma \) such that according to (3.2)
Then, there exist \(\delta _0 > 0\) and \(n_0(\delta )\) such that
Proof
Because of (3.1), there exists \(n_\varepsilon (1)\) such that
Let us consider the Dirichlet problem for the harmonic function \(g_{i}(z)\) in the region
with the boundary conditions
where \(\Gamma = \partial E\). Then, \(g_{i}(z) < 0\) for \(z\in E^{i}_\sigma \setminus E\). Define
then \(\beta _{i} < 0\) and also
Moreover, the function
is a harmonic majorant of
That leads to
or
If we define \(\varepsilon := - \beta /2\), then we get
for all \(n \ge n_\varepsilon (1)\). Therefore,
satisfy the statement of Lemma 2. \(\square \)
Lemma 3
Let \(n,m \in {\mathbb {N}}\) with \(m < n\) and \(n/m \le \kappa \). Let \(\delta _0 > 0\) and \(n_{0}(\delta ) \in {\mathbb {N}}\) such that (3.20) holds according to Lemma 2. Moreover, let \(s(m) = s(n)\) and let
then there exists \(\delta ^{*}_{0} > 0\) and \(n^{*}_{2}= n^{*}_{2}(\kappa ) \in {\mathbb {N}}\) such that
for \(m \ge n^{*}_{2}\) and
Proof
Because of Lemma 2, there exists \(n_0(\delta )\) such that
and
for \(m < n\) and \(m \ge n_{0}(\delta )\). Since \(s(m) = s(n)\) and \(\kappa \) satisfies (3.21), then Lemma 1 yields that there exists \(n^{*}_{2} = n^{*}_{2}(\kappa )\in {\mathbb {N}}\) and \(\delta ^{*}_{0} > 0\) such that
for \(m \ge n^{*}_{2}\) and \(\delta ^{*}_{0}\) satisfies (3.23). \(\square \)
3.3 The Final Telescoping Series
We start with the telescoping series associated with
satisfying (3.4) and (3.5) and choosing the parameter \(\kappa \) such that
\(\kappa ^{*}_{1}\) is defined by (3.18), i.e.,
and \(\delta \) satisfies (3.17). \(\kappa ^{*}_{2}\) is defined by (3.21), i.e.,
and \(\delta _0\) satisfies (3.20). \(\kappa ^{*}_{3}\) will be defined by
and \(\delta ^{*}_{0}\) satisfies (3.23). The role of \(\kappa ^{*}_{3}\) will be seen in the proof of Lemma 5. As above, we use the decomposition
Hence, by (3.19) there exist \(\delta ^{*}_{1} > 0\) and \(n^{*}_{1} = n^{*}_{1}(\kappa ) \in {\mathbb {N}}\) such that
for all \(m_i \in \Lambda _{1,1}(\kappa ), \; m_i \ge n^{*}_{1}\). Hence, the critical differences remaining in the telescoping series with respect to \(\Lambda _1(\kappa )\) are \(\;p_{m_{i+1}} - p_{m_i}\), where
In Corollary 1, we have given already the proof of Proposition 1 for the case that \(\Lambda _{1,2}(\kappa )\) is a finite sequence. Therefore, we assume henceforth that
is an infinite sequence.
In the following, we use a real parameter \(c \in {\mathbb {R}}\), \(0< c < 1\).
Lemma 4
Let \(\lambda _k \in \Lambda _{1,2}(\kappa )\) be fixed. Then, there exist at most \(l_\sigma \) elements of \(\Lambda _{1,2}(\kappa )\) in the interval
Moreover, let the parameter \(c \in {\mathbb {R}}\) , \(0< c < 1\), be fixed and let the semi-open intervals \(I(\lambda _k, j)\) be defined by
for \(0 \le j \le l_{\sigma }-1\). Then, there exists \(\widetilde{l_k}\), \(0 \le \widetilde{l_k} \le l_{\sigma }-1\), such that
Proof
Let us assume that there exist at least \(l_\sigma \) elements of \(\Lambda _{1,2}(\kappa )\) in the interval \(\left( \lambda _k,\kappa \lambda _k\right] \). Then, the definition of \(\Lambda _1(\kappa )\), resp. \(\Lambda _{1,2}(\kappa )\), implies that the values of the function s at the points
are all different, which contradicts the definition of \(l_\sigma \).
Let us assume that the second part of the Lemma is false. Then, in each
there exists at least one element of \(\Lambda _{1,2}(\kappa )\). Hence, the interval
contains at least \(l_\sigma \) elements of \(\Lambda _{1,2}(\kappa )\), contradicting the first part of the lemma. \(\square \)
3.3.1 The Telescoping Defining Sequence \(\Lambda (\kappa ,c)\)
Let
satisfy (3.4) and (3.5) with a parameter \(\kappa \) where
\(\kappa ^{*}_{1}\) is defined by (3.18), \(\kappa ^{*}_{2}\) by (3.21), \(\kappa ^{*}_{3}\) by (3.24). As in (3.6) - (3.8), we decompose
Then, we define the sequence
as follows: If \(\Lambda _{1,2}(\kappa )\) is a finite sequence, then \(\Lambda (\kappa ,c) := \Lambda _1(\kappa )\). If \(\Lambda _{1,2}(\kappa )\) is an infinite sequence, we define
and we set
Then, we define \(n_j := m_j\) for \( 1\le m_j \le M \). The remaining elements \(n_j \in \Lambda (\kappa ,c), \;n_j > M\), will be defined recursively:
If \(n_j \ge M\) is already constructed, we note that we obtain by (3.27) for \(0 \le \widetilde{l_{j}} \le l_\sigma -1\)
Then, we fix
and distinguish two cases:
(i) If \(s(m) = s(n_j)\), then \(n_{j+1} := m\).
(ii) If \(s(m) \ne s(n_j)\), then we apply Lemma 4. Hence, there exists \(k_0 \in {\mathbb {N}}\) and \(0 \le \widetilde{l_{j}} \le l_\sigma -1\), such that
$$\begin{aligned} \lambda _{k_0} \le n_j\left( 1 + \left( \frac{c}{1+c}\right) ^{\widetilde{l_{j}}+1}(\kappa -1)\right) \end{aligned}$$and
$$\begin{aligned} \lambda _{k_0+1} > n_j\left( 1 + \left( \frac{c}{1+c}\right) ^{\widetilde{l_{j}}} (\kappa -1)\right) , \end{aligned}$$where we have used (3.28) and the enumeration of \(\Lambda _{1,2}(\kappa )\) as in (3.26). Then, we define
$$\begin{aligned} n_{j+1} := \lambda _{k_0}. \end{aligned}$$
Properties of \(\Lambda (\kappa ,c)\): We always have \(n_{j+1} \le \kappa \, n_j\). If \(s(n_{j+1}) \ne s(n_j)\), then
and
where \(0 \le \widetilde{l_j} \le l_\sigma - 1\). Moreover, \(s(m) = s(n_{j+1})\) for
In the following, we use the decomposition
where
and
Lemma 5
Let \(n_j \in \Lambda _2(\kappa ,c)\), then there exists \(\delta _{2,j} > 0\) and \(n^{*}_{3} = n^{*}_{3}(\kappa ) \in {\mathbb {N}}\) such that
Moreover, \(\delta _{2,j}\) can be chosen in such a way that
with \(\delta ^{*}_{0}\) satisfying (3.22) and (3.23) of Lemma 3.
Proof
We consider the telescoping series
and define
Because of (3.30), we have
keeping in mind that \(n_{j+k_j} \in {\mathbb {N}}\). Now, we write
where
and
Estimation of \(A_j\) on \(\Gamma _\sigma \): Because of the definition of \(k_j\) in (3.34), we use (3.31) and apply Lemma 3 to all differences
occurring in \(A_j\). We obtain with \(\delta ^{*}_{0} > 0\) and \(n^{*}_{2} = n^{*}(\kappa )\in {\mathbb {N}}\) that
where \(\delta ^{*}_{0}\) satisfies the inequality (3.23), since
and \(\delta _0\) is defined by Lemma 2 in (3.20). Then,
for all j with \(n_j \ge n^{*}_{2}\), where \(\beta _1\) is a constant independent of j.
Estimation of \(B_j\) on \(\Gamma _\sigma \): Let us define
Because of (3.1), there exists \(n_\varepsilon (\sigma )\) such that for \(n \ge n_\varepsilon (\sigma )\)
where \(0< \varepsilon < \log (\rho /\sigma )\) is fixed. Then, for \(n_j \ge n_\varepsilon (\sigma )\)
and with (3.35) and (3.37) we obtain
with
Analogously,
for all \(k \ge k_j\) and \(n_j \ge n_\varepsilon (\sigma )\). Hence, for such \(n_j\)
where
Because of (3.29),
and therefore
For abbreviation, we define
and note that
Since \(\varepsilon < \log ({\rho /\sigma })\), multiplication of (3.39) by \(\log ({\sigma /\rho }) + \varepsilon \) yields
Hence, the upper bound in (3.40) leads to
Next, we define
Then the general condition \(\varepsilon < \log (\rho /\sigma )\) is satisfied and the lower bound of (3.40) yields
Therefore, for such \(\varepsilon \) we obtain by (3.38) for \(n_j \in \Lambda _2(\kappa ,c)\) and \(n_j \ge n_\varepsilon (\sigma )\)
where \(\delta ^{*}_{2,j}\) is defined by
and \(\varepsilon \) is defined by (3.41).
Summarizing, by (3.36) and (3.42) we have obtained for \(n_j \in \Lambda _2(\kappa ,c)\) and \(n_j \ge \; \max \,(n^{*}_{2}, n_\varepsilon (\sigma ))\)
\(\beta _1\) and \(\beta _3\) are constants, independent of \(n_j\). Hence, if we define
and if we use the lower bound in (3.40), then there exists \(n^{*}_{3} = n^{*}_{3}(\kappa ,c)\) such that
and (3.32) and (3.33) of Lemma 5 are proven. \(\square \)
3.3.2 Fixing the Parameter c in \(\Lambda (\kappa ,c)\)
In the case that \(n_j \in \Lambda _2(\kappa ,c)\), we have by Lemma 5: There exists \(\delta _{2,j} > 0\) and \(n^{*}_{3} = n^{*}_{3}(\kappa ,c)\) such that
for all \(n_j \ge n^{*}_{3}(\kappa )\). Moreover,
with \(\delta ^{*}_{0} > 0\) as in Lemma 3. Because of (3.43), we have a fortiori
On the other hand, we have by (3.2)
for all \(n_j \ge n_1(\delta )\) with \(\delta > 0\). Now, we can apply Lemma 1 by taking into account (3.45) and (3.46): There exists \(n^{*}_{4} = n^{*}_{4}(\kappa )\) such that
where
provided we can achieve this, i.e., provided we can choose c with \(0< c < 1\) such that
Since
the inequalities (3.48) are fulfilled if
Taking into account (3.44) and
the inequality (3.48) is satisfied if
Because of (3.29), we know that
Therefore, (3.48) is guaranteed if
3.4 Conclusions
We consider the telescoping series
associated with the sequence
The parameter \(\kappa \) satisfies
and we fix a parameter c such that
where \(\kappa ^{*}_{1}\), \(\kappa ^{*}_{2}\), \(\kappa ^{*}_{3}\) are defined by (3.18), (3.21), (3.24). If \(n_j \in \Lambda _1(\kappa ,c)\), then according to (3.25)
for all \(n_j \in \Lambda _1(\kappa ,c)\), \(n_j \ge n^{*}_{1}(\kappa )\). If \(n_j \in \Lambda _2(\kappa ,c)\), then according to (3.47)
for all \(n_j \in \Lambda _2(\kappa ,c)\), \(n_j \ge n^{*}_{4}(\kappa )\), since c satisfies (3.49). Therefore,
for all \(n_j \in \Lambda (\kappa ,c)\) with \(n_j \ge \max \,\left( n^{*}_{1}(\kappa ),n^{*}_{4}(\kappa )\right) \). Finally, the Lemma of Bernstein–Walsh implies that f is holomorphic in a neighborhood of \(\overline{E_\rho }\), i.e.,
and Proposition 1 is proven.
4 Proof of Proposition 2
We choose r and R such that
under the additional condition that in the decomposition of \(E_R\), resp. \(E_\rho \), analogous to (2.1), the numbers \(l_R\) and \(l_\rho \) satisfy \(l_R = l_\rho \). For abbreviation, we define
Now, for all \(z\in \Omega ={\overline{{\mathbb {C}}}}\setminus {E}\) we have
and therefore
Hence, the uniqueness of the equilibrium measure of \(\overline{E_r}\) implies
Next, we fix \(z_n\in \Gamma _r\) such that
and we choose \(s^{*}(n)\in {\mathbb {N}}\) such that
Consider
then \(D^{s^{*}(n)}_{R,r}\) is a region with boundary
where
The Lagrange–Hermite formula for the error \(f - p_n\) at \(z\in \Gamma _r\) is
$$\begin{aligned} f(z) - p_n(z) \;=\; \frac{1}{2\pi i} \int _{\Gamma _R} \frac{\omega _n(z)}{\omega _n(t)}\, \frac{f(t)}{t - z}\, dt \end{aligned}$$
with
$$\begin{aligned} \omega _n(z) \;:=\; \prod _{i=0}^{n} \left( z - z_{n,i}\right) , \end{aligned}$$
where \(z_{n,i},\; 0\le i\le n,\) are the interpolation points of \(Z_n\). Moreover, we can write
$$\begin{aligned} |\omega _n(z)| \;=\; \exp \left( -(n+1)\, U^{\nu _n}(z)\right) \end{aligned}$$
for \(z\in \Gamma _r\). If \(z\in \Gamma _r \cap E^{s^{*}(n)}_{R}\), we may reduce the path of integration to \(\Gamma ^{s^{*}(n)}_{R}\), hence
Let \(\varepsilon > 0\), then (2.5) implies that there exists \(n_0(\varepsilon )\) such that
and
for all \(n \ge n_0(\varepsilon )\). Using (4.4), we may choose \(n_0(\varepsilon )\) in such a way that for all \(z\in \Gamma ^{s^{*}(n)}_{r}\) and \(n\ge n_0(\varepsilon )\)
Since
we get for \(z\in \Gamma ^{s^{*}(n)}_{r}\)
Now, let us consider the difference
which is a harmonic function in \(\Omega \). Then, the maximum of this difference on the level curve \(\Gamma _{\sigma ^{*}}\) is increasing with decreasing \(\sigma ^{*},\; 1< \sigma ^{*} < \infty \). Consequently,
We note for further applications that (4.6) holds also if we replace \(\nu _n\) by any probability measure \(\nu \ne \mu _E\) with support in E.
Because of (4.1) and (4.2) and the choice of \(s^{*}(n)\), we have
Next, we define the Dirichlet problem for the harmonic function \(\Phi _n(z)\) in the region
with the boundary conditions
and
where
Because of (4.3) and (4.6), \(\Phi _n(z_n) < 0\) and therefore \(\Phi _n(z) < 0\) for all \(z\in D^{s^{*}(n)}_{R,r}\). Thus, if we define
then
Moreover, the maximum principle for harmonic functions, together with (4.5), implies that the harmonic function
is an upper bound for the subharmonic function
i.e.,
for all \(z\in D^{s^{*}(n)}_{R,r}\). Hence, we obtain
for all \(z\in \Gamma _\sigma \cap E^{s^{*}(n)}_{R}\) and all \(n \ge n_0(\varepsilon )\).
Now, we claim: There exists \(\delta > 0\) such that
Let us assume that the claim is false:
Then, there exists a subsequence \(\Lambda _1 \subset \Lambda \) such that
By Helly’s theorem, there exists a subsequence \(\Lambda _2\subset \Lambda _1\) such that
with supp\((\nu )\subset E\) and \(\nu \ne \mu _E\). Since there are only l different sets
we can finally choose \(\Lambda _2\) such that the sets
are fixed for all \(n\in \Lambda _2\), i.e., \(s^{*}(n) = j_0\) is fixed for all \(n\in \Lambda _2\).
Because of
there exists \(n_1(\varepsilon ) \ge n_0(\varepsilon )\) such that
for all \(n\in \Lambda _2\), \(n \ge n_1(\varepsilon )\). Then, for \(z\in \Gamma _r\) and \(n \ge n_1(\varepsilon )\)
where we have defined
Therefore, the boundary condition (4.7) can be estimated by
for \(z\in \Gamma _r\cap E^{j_0}_{R}.\)
Now, we consider the Dirichlet Problem for the function \(\Phi (z)\) in the region \(D^{j_0}_{R,r} = E^{j_0}_{R} \setminus \overline{E_r}\) with the boundary conditions
and
where \(c(\nu ;\Gamma _R)\) is defined by (4.10). The continuous functions \(U^{\mu _E} - U^{\nu _n}\) converge in \(\Omega \) uniformly on compact sets, especially on \(\Gamma _r \cup \Gamma _R\), as \(n\in \Lambda _2, n\rightarrow \infty \). Hence, by (4.1) and (4.2)
The last inequality follows from \(\nu \ne \mu _E\), mentioned in the remark following (4.6).
Next, we choose \(\varepsilon > 0\) such that
Hence, the boundary conditions for the harmonic function \(\Phi (z)\) in (4.12) and (4.13) read as \(\Phi (z) \le 0\), but \(\Phi (z)\) is not identically 0 on \(\Gamma _r \cap E^{j_0}_{R}\). Then, the maximum principle for harmonic functions yields
If we compare the Dirichlet problems for \(\Phi _n\) and \(\Phi \), then by (4.11)
Therefore,
for \(n\in \Lambda _2, n \ge n_1(\varepsilon )\), contradicting our assumption that (4.9) is not true.
Hence, (4.8) and (4.9) imply that
for all \(z\in \Gamma _\sigma \cap E^{s^{*}(n)}_{R}\) and \(n \ge n_0(\varepsilon ), n\in \Lambda \). If we choose \(\varepsilon = \delta /4\), then we finally get
for \(z\in \Gamma _\sigma \cap E^{s^{*}(n)}_{R}\) and \(n \ge n_0(\varepsilon ), n\in \Lambda \).
We note that each \(\Gamma _\sigma \cap E^{s^{*}(n)}_{R}, 1 \le s^{*}(n) \le l,\) consists of a finite number of connected components of \(\Gamma _\sigma \). Therefore, because of (4.14) we can define for each \(n\in \Lambda \) a number \(s(n)\), \(1 \le s(n) \le l_\sigma \), such that
5 Proof of the Theorems
We have already mentioned that Theorem 1 is a direct consequence of Proposition 1. More precisely, if the condition (2.4) of Theorem 1 is true, then the sequence \(p_n\in {\mathcal {P}}_n\) is maximally convergent to f, due to Bernstein–Walsh. Conversely, if the condition (2.4) is not true for some \(\sigma , \; 1< \sigma< \rho (f) < \infty \), i.e.,
then Proposition 1 shows that \(\rho (f)\) is not the maximal parameter of holomorphy of f, which is a contradiction.
Concerning part (a) of Theorem 2, let us assume that \(\mu _E\) is not a weak* limit point of \({\widehat{\nu }}_n, n\in {\mathbb {N}}\). Then, Proposition 2 yields (using \(\Lambda = {\mathbb {N}}\)) that there exist parameters \(s(n)\), \(1 \le s(n) \le l_\sigma \), such that
But then, according to Theorem 1, \(\rho (f)\) could not be the maximal parameter of holomorphy of f. This is a contradiction to the maximal convergence of \(\left\{ p_n\right\} _{n\in {\mathbb {N}}}\).
Concerning part (b), we know already that there exists a subsequence \(\Lambda \subset {\mathbb {N}}\) such that (2.7) holds. Let us assume that \(\mu _E\) is not a weak* limit point of \({\widehat{\nu }}_n, \; n\in \Lambda \). Then, Proposition 2 implies that there exist
such that
References
Grothmann, R.: Distribution of interpolation points. Ark. Mat. 34, 103–117 (1996)
Ransford, Th.: Potential Theory in the Complex Plane, London Mathematical Society Student Texts, vol. 28. Cambridge University Press, Cambridge (1995)
Tsuji, M.: Potential Theory in Modern Function Theory. Maruzen Co., Ltd., Tokyo (1959)
Walsh, J.L.: Interpolation and Approximation by Rational Functions in the Complex Domain, Colloquium Publications, vol. 20. American Mathematical Society, Providence (1969)
Funding
Open Access funding enabled and organized by Projekt DEAL.
Communicated by Doron Lubinsky.
Dedicated to the memory of Peter Borwein and Stephan Ruscheweyh.
Blatt, HP. Maximal Convergence and Interpolation on Unconnected Sets. Constr Approx 56, 505–535 (2022). https://doi.org/10.1007/s00365-022-09564-7