Abstract
It is well known that the interpolation error for \(\left| x\right| ^{\alpha },\alpha >0\) in \(L_{\infty }\left[ -1,1\right] \) by Lagrange interpolation polynomials based on the zeros of the Chebyshev polynomials of first kind can be represented in its limiting form by entire functions of exponential type. In this paper, we establish new asymptotic bounds for these quantities when \(\alpha \) tends to infinity. Moreover, we present some explicit constructions for near best approximation polynomials to \(\left| x\right| ^{\alpha },\alpha >0\) in the \(L_{\infty }\) norm which are based on the Chebyshev interpolation process. The resulting formulas possibly indicate a general approach towards the structure of the associated Bernstein constants.
1 The Bernstein Constants and Polynomials of Best Approximation
Let \(\alpha >0\) not be an even integer. Starting in 1913 with the case \(\alpha =1\), and later in 1938 for the general case \(\alpha >0\), Bernstein [1, 2] established the existence of the limit
where
denotes the error in best \(L_{p}\) approximation of a function f on the interval \(\left[ a,b\right] \) by polynomials of degree at most n. The proofs in [1, 2] are long and difficult, and omit many non-trivial technical details. In his 1938 paper, Bernstein made essential use of the homogeneity property of \(\left| x\right| ^{\alpha }\), namely that for \(c>0\) one has \(\left| cx\right| ^{\alpha }=c^{\alpha }\left| x\right| ^{\alpha }\). This enabled Bernstein to relate the best uniform approximation error on \(\left[ -1,1\right] \) to that on \(\left[ -n,n\right] \). Denote by \(P_{n}^{*}\) the best uniform approximation polynomial of order n to \(\left| x\right| ^{\alpha }\) on the interval \(\left[ -1,1\right] \). Then, by the homogeneity property, Bernstein established a representation for the quantities \(\Delta _{\infty ,\alpha }\) in the form of the approximation error on the real line for \(\left| x\right| ^{\alpha }\) by entire functions of exponential type, namely
Recall that an entire function f is said to be of exponential type \(A\ge 0\) if for each \(\varepsilon >0\) there is \(z_{0}=z_{0}\left( \varepsilon \right) \) such that
Moreover, A is taken to be the infimum over all possible numbers for which (1.2) holds. In addition, uniformly on compact subsets we have
and there is exactly one entire function H of exponential type at most 1 which minimizes (1.1). This elegant formulation in terms of entire functions of exponential type extends to spaces other than \(L_{\infty }\). Ganzburg [3] and Lubinsky [5] have shown that for all \(1\le p\le \infty \) a positive constant \(\Delta _{p,\alpha }\) exists, where \(\Delta _{p,\alpha }\) is defined by
From now on the \(\Delta _{p,\alpha }\) are called the Bernstein constants. It is important to note that the Bernstein constants are only known for \(p=1\) [7] and \(p=2\) [11], whereas for \(p=\infty \) not a single value of \(\Delta _{\infty ,\alpha }\) is known. For details on the Bernstein constants, conjectures and numerical computations, we refer the reader to [5, 6, 8, 12, 14, 15].
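As a concrete illustration of the notion of exponential type (our addition, not from the paper): \(\sin z\) is the standard example of an entire function of exponential type 1, since \(\left| \sin \left( iy\right) \right| =\sinh \left| y\right| \sim e^{\left| y\right| }/2\). A quick numerical check that the growth rate \(\log \left| \sin \left( iy\right) \right| /\left| y\right| \) approaches the type 1:

```python
import math

# sin(iy) = i*sinh(y), so |sin(iy)| = sinh(y) for y > 0;
# the exponential type of sin z is the limit of log|sin(iy)| / y as y -> infinity
rates = [math.log(math.sinh(y)) / y for y in (5.0, 20.0, 80.0)]
print(rates)  # increases towards 1 from below
```

The rate stays below 1 for every finite y, reflecting that the infimum in the definition of the type is attained only in the limit.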
Now, turning back to the quantities \(\Delta _{\infty ,\alpha } \), we recall that for each integer n the best approximating polynomial \(P_{n}^{*}\) can be represented as an interpolating polynomial with unknown consecutive nodes in \(\left[ -1,1\right] \). Since \(\left| x\right| ^{\alpha }\) is an even function, a standard argument allows us to restrict ourselves to interpolation polynomials of even order n. Thus, by starting with some well-chosen interpolation polynomials and subsequently analyzing their asymptotic character, one might hope to find explicit expressions for the limiting approximation error which can possibly be connected to the Bernstein constants themselves. To this end, we collect some important results on asymptotic relations and their corresponding entire functions of exponential type with respect to certain interpolation polynomials for \(\left| x\right| ^{\alpha }\).
Let \(\alpha >0\) and \(n\in \mathbb {N}\) and define the modified Chebyshev interpolation system by
Obviously, the \(x_{j,2n}^{\left( 1\right) }\), \(j=1,2,\ldots ,2n\), are the zeros of the Chebyshev polynomial \(T_{2n}\) of the first kind, defined by \(T_{n}\left( x\right) =\cos \left( n\arccos x\right) \), and \(x_{0,2n}^{\left( 1\right) }\) is an additional interpolation node. On the other hand, we take the zeros of \(T_{2n+1}\), defined by
Note that here \(x_{n+1,2n+1}^{\left( 2\right) }=0\) is always a zero of the corresponding \(T_{2n+1}\). Thus, the second node system appears to be the more natural choice compared with the first one. However, as we will see later, the interpolation process turns out to be more complicated here. To proceed further, denote by \(P_{2n}^{\left( 1\right) }\) and \(P_{2n}^{\left( 2\right) } \) the corresponding unique interpolation polynomials of order 2n for \(\left| x\right| ^{\alpha }\). From Ganzburg ([3], Formulas 2.1, 2.7 and 4.14) we have
Theorem 1.1
where
is an entire function of exponential type 1 that interpolates \(\left| x\right| ^{\alpha }\) at the nodes \(\left\{ \left( k+\frac{1}{2}\right) \pi :k\in \mathbb {Z}\right\} \cup \left\{ 0\right\} \). Furthermore, uniformly on compact subsets in \(\left[ 0,\infty \right) \) we have
Let us give two remarks. Firstly, note that in Theorem 1.1 the representation of the limiting error, i.e. the sup-expression, can be given in fully explicit form. In particular, the last equation in Theorem 1.1 follows easily by some standard analysis arguments. Secondly, the sequence of the scaled interpolation polynomials \(\left( 2n\right) ^{\alpha } P_{2n}^{\left( 1\right) }\left( \frac{\cdot }{2n}\right) \) also converges to an entire function of exponential type, thus showing behavior similar to that of the best approximating polynomials in (1.3). For the second node system, we have an analogous representation, but unfortunately it can be given only in a more complicated form. From ([13], Theorems 3.1 and 3.2) we have
Theorem 1.2
where
is an entire function of exponential type 1 that interpolates \(\left| x\right| ^{\alpha }\) at the nodes \(\left\{ k\pi :k\in \mathbb {Z}\right\} \). Furthermore, uniformly on compact subsets in \(\left[ 0,\infty \right) \) we have
As can be seen from the representation of the limiting error term, the exact determination of the quantity on the right-hand side in (1.4) for individual values of \(\alpha \) appears to be a rather difficult challenge.
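To make the two interpolation processes concrete, here is a small numerical sketch (our addition; a plain \(O(n^{2})\) Lagrange evaluation, adequate for small n and not the analytical method of the paper). It builds both node systems for \(2n=16\), interpolates \(\left| x\right| ^{\alpha }\), and measures the uniform error on a grid:

```python
import math

def cheb_zeros(m):
    # zeros of the Chebyshev polynomial T_m of the first kind
    return [math.cos((2 * j - 1) * math.pi / (2 * m)) for j in range(1, m + 1)]

def lagrange(nodes, values, x):
    # direct Lagrange evaluation, O(n^2); fine for small node counts
    total = 0.0
    for j, (xj, yj) in enumerate(zip(nodes, values)):
        term = yj
        for k, xk in enumerate(nodes):
            if k != j:
                term *= (x - xk) / (xj - xk)
        total += term
    return total

alpha, n = 1.0, 8
# first (modified) system: zeros of T_{2n} plus the additional node 0
nodes1 = cheb_zeros(2 * n) + [0.0]
# second system: zeros of T_{2n+1}, whose middle zero is already 0
nodes2 = cheb_zeros(2 * n + 1)
vals1 = [abs(x) ** alpha for x in nodes1]
vals2 = [abs(x) ** alpha for x in nodes2]

grid = [i / 500 for i in range(-500, 501)]
err1 = max(abs(abs(x) ** alpha - lagrange(nodes1, vals1, x)) for x in grid)
err2 = max(abs(abs(x) ** alpha - lagrange(nodes2, vals2, x)) for x in grid)
```

Scaling such errors by \(\left( 2n\right) ^{\alpha }\) and letting n grow is exactly the limit process described in Theorems 1.1 and 1.2.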
In the next sections we investigate this quantity in more detail. Though we are still unable to present a fully explicit expression for (1.4), similar and comparable to that in Theorem 1.1, we can establish an asymptotic expression when \(\alpha \) tends to infinity. We will prove the following
Theorem 1.3
The following asymptotics hold.
Though the result in Theorem 1.3 reveals a more or less simple formula for the asymptotic behavior, the proof is rather long and requires many non-trivial estimates. Many of them were first found for our purposes by extensive use of computational methods (Mathematica). The hardest part of the proof requires the determination of a higher-order asymptotic expansion up to order 5, via the generalized Watson lemma, involving the computation of certain rather complicated constants. Thus, we split the whole proof into two sections, see Sects. 3 and 4. We believe that this sequencing keeps the presentation readable and the development of the main ideas comprehensible.
The rest of the paper is organized as follows.
In Sect. 2 we first collect some definitions for several constants and functions together with some standard results for later use.
In Sect. 3 we study an envelope function, denoted by \(H_{1}\left( \alpha ,\cdot \right) \), for the error term in Theorem 1.3, later to be denoted by \(H\left( \alpha ,\cdot \right) \). Here, we establish in Theorem 3.1 an asymptotic formula for \(H_{1}\left( \alpha ,\cdot \right) \) when \(\alpha \rightarrow \infty .\)
In Sect. 4, by using a higher-order asymptotic expansion and investigating an integral inequality of independent interest, see Theorem 4.1, we finally arrive at the desired asymptotic relation between \(\left\| H_{1}\left( \alpha ,\cdot \right) \right\| _{L_{\infty }\left[ 0,\infty \right) }\) and \(\left\| H\left( \alpha ,\cdot \right) \right\| _{L_{\infty }\left[ 0,\infty \right) }\) when \(\alpha \rightarrow \infty ,\) thus completing the proof of Theorem 1.3.
In the final Sect. 5, to emphasize the importance of the interpolation formulas based on the \(P_{n}^{\left( 1\right) }\) and \(P_{n}^{\left( 2\right) }\) polynomials, we present a compilation of numerical results involving linear combinations of these polynomials together with their corresponding Chebyshev polynomials \(T_{n}\), in order to present explicit formulas for near best approximation polynomials in the \(L_{\infty }\) norm, see formula (5.1), together with their corresponding entire functions of exponential type, see formula (5.2). We hope that these formulas indicate a feasible direction towards explicit asymptotic representations of best approximation polynomials for \(\left| x\right| ^{\alpha }\) in the \(L_{\infty }\) norm and thus for the Bernstein constants \(\Delta _{\infty ,\alpha }\) themselves.
2 Notation
In this section we record the following constants and functions, together with properties which are used later in the paper. We denote by \(\Gamma \left( \cdot \right) \) the usual Gamma function. For \(x\in \mathbb {R}\), let \(\left[ x\right] \) be the floor function, namely \(\left[ x\right] =\max \left\{ m\in \mathbb {Z}:m\le x\right\} \). Obviously, then \(x-1<\left[ x\right] \le x.\) We define the following constants.
Next, we define the following functions.
We proceed further by
We collect the following easy to establish properties.
Note that (2.1f) is not an easy consequence of (2.1e). We also remark that for \(\alpha \ge 1\) Eq. (2.1a) also remains valid for \(x=0\), by interpreting both sides as their \(\lim _{x\rightarrow 0^{+} }\). The same holds true for (2.1b) and (2.1d) for \(\alpha >0.\) We then have
Then, using (2.2), we can check that \(G_{\alpha }^{\left( 2\right) }\) from Theorem 1.2 is a well defined function for all \(\alpha >0\) and \(x\ge 0.\)
Next, we record
Proof
Both equations in (2.3c) as well as (2.3b) are derived directly from ([4], 3.381.4). Equations (2.3a) are an easy consequence of (2.3c) combined with ([4], 3.381.3 and 8.356.2). Inequality (2.3d) can be derived from ([4], 8.327) by some simple manipulations. \(\square \)
3 The Envelope Function
In this section we consider the envelope function \(H_{1}\left( \alpha ,\cdot \right) \) with respect to \(H\left( \alpha ,\cdot \right) \). Our objective is to establish an asymptotic formula for \(\left\| H_{1}\left( \alpha ,\cdot \right) \right\| _{L_{\infty }\left[ 0,\infty \right) }\) when \(\alpha \rightarrow \infty \). We show
Theorem 3.1
Let \(\alpha \ge 2.\) Then we have
Figure 1 shows the functions \(\left| H\left( \alpha ,\cdot \right) \right| \) and \(H_{1}\left( \alpha ,\cdot \right) \) as well as their point evaluations \(H_{1}\left( \alpha ,\alpha \right) \) for the values \(\alpha =1.8\) and \(\alpha =6.4\). The figure suggests that a powerful lower bound for \(\left\| H_{1}\left( \alpha ,\cdot \right) \right\| \) can be derived by determining the point evaluation \(H_{1}\left( \alpha ,\alpha \right) \), at least for large values of \(\alpha \). We now start to prove Theorem 3.1 by splitting it into several lemmas. First, we present the following five lemmas without proof. They can be derived by standard calculus arguments.
Lemma 3.1
The function \(f\left( x\right) =\left( 1+\frac{1}{x}\right) ^{x}\) is monotonically increasing on \(\left( 0,\infty \right) \) and \(f\left( x\right) \le e\) in this interval.
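A quick numerical sanity check of the lemma (our addition): the sequence of values below increases monotonically and stays below \(e\).

```python
# f(x) = (1 + 1/x)**x, evaluated at increasing arguments;
# Lemma 3.1 asserts monotone growth with upper bound e = 2.71828...
vals = [(1 + 1 / x) ** x for x in (0.5, 1.0, 2.0, 10.0, 100.0)]
print(vals)
```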
Lemma 3.2
For \(x>0\) we have
Lemma 3.3
Let \(\alpha >0.\) The function
is convex for \(x\ge 0.\) Here \(f\left( 0\right) =\lim _{x\rightarrow 0^{+} }f\left( x\right) =\frac{1}{2\alpha }.\)
Lemma 3.4
Let \(\alpha >0.\) Then, for \(x\in \left[ 0,1+\frac{1}{2\alpha }\right] ,\) we have
Lemma 3.5
Denote by \(f\left( x\right) =x\left( x+1\right) /\left( x^{2}+1\right) .\) Then, for \(x\ge 0,\) we have
Our first substantial result is now the following
Lemma 3.6
Let \(\alpha \ge 1.\) Then
Proof
By some routine arguments and using Lemma 3.5, (2.3a), (2.3d) and (2.3c), we estimate
We summarize
Now, using (2.1a), (2.1e) together with (3.1), we obtain the final result
\(\square \)
Next, we show
Lemma 3.7
Let \(\alpha >1.\) Then
Proof
From (2.2) it follows that we can restrict ourselves to values \(H_{1}\left( \alpha ,x\right) \) for \(x>0.\) Thus
\(\square \)
Lemma 3.8
Let \(\alpha \ge 2.\) Then
Proof
By using Lemmas 3.4 and 3.2, we begin with
Note that for \(\alpha \ge \frac{1}{2}\) we have \(1/\alpha \ge 1/\left( 2\alpha \right) +1/\left( 4\alpha ^{2}\right) .\) From this, by using (2.3a), it follows that
Then, using Lemma 3.1 and (2.3b), the last expression can be estimated to
We note that for \(\alpha \ge 2\) we have the inequality \(2/\alpha \ge 1/\left( \alpha -1\right) .\) Now, using (2.3d) and (2.3c), we estimate further
Combining all together, we obtain for all \(\alpha \ge 2,\)
Finally, using (2.1f) and (2.1e), we arrive at
\(\square \)
Proof of Theorem 3.1
The theorem is now an easy consequence of the Lemmas 3.6, 3.7 and 3.8. \(\square \)
4 Asymptotics of the Error Function
In this section we establish an asymptotic bound for the norm of the limiting error function, i.e. for \(\left\| H\left( \alpha ,\cdot \right) \right\| _{L_{\infty }\left[ 0,\infty \right) }.\) This section contains the most technical part of this paper. Here, we use the generalized Watson lemma (Laplace's method for integrals with a large parameter) to derive an asymptotic expansion to be used later. As it turns out, we need a higher-order asymptotic expansion up to order 5. We start with a lower estimate for \(\left\| H\left( \alpha ,\cdot \right) \right\| _{L_{\infty }\left[ 0,\infty \right) }.\) Though the process is technical and complicated, the main idea is easy to see, and we illustrate it in Fig. 2 with the functions \(\left| H\left( \alpha ,\cdot \right) \right| \) and \(H_{1}\left( \alpha ,\cdot \right) .\) Figure 2 shows the error function \(\left| H\left( \alpha ,\cdot \right) \right| \) and its envelope \(H_{1}\left( \alpha ,\cdot \right) \) together with the point evaluations \(H_{1}\left( \alpha ,\alpha \right) \) (green) and \(\left| H\left( \alpha ,\beta \right) \right| =H_{1}\left( \alpha ,\beta \right) \) (magenta), where \(\beta =\beta \left( \alpha \right) =\pi \left[ \frac{\alpha }{\pi }\right] +\frac{3}{2}\pi \), for \(\alpha =3.9\) and \(\alpha =8.4\).
Geometrically, the point \(\beta \) is the position of the first or the second relative maximum of \(\left| H\left( \alpha ,\cdot \right) \right| \) to the right of \(\alpha \), where \(H_{1}\left( \alpha ,\cdot \right) \) appears to be descending. For larger values of \(\alpha \), the size of these maxima appears to be of the same magnitude as \(H_{1}\left( \alpha ,\alpha \right) \). We use both observations for the asymptotic analysis. First, we show that \(H_{1}\left( \alpha ,\cdot \right) \) is descending at least for values \(x\ge \alpha \). Then, we derive the asymptotics for the local maximum \(\left| H\left( \alpha ,\beta \right) \right| \). It turns out that the following integral inequality plays an essential role.
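The bookkeeping behind \(\beta \left( \alpha \right) \) is elementary; a small sketch (our addition) verifies that \(\beta \) always lies between \(\pi /2\) and \(3\pi /2\) to the right of \(\alpha \):

```python
import math

def beta(alpha):
    # beta(alpha) = pi*floor(alpha/pi) + (3/2)*pi
    return math.pi * math.floor(alpha / math.pi) + 1.5 * math.pi

# writing alpha = pi*floor(alpha/pi) + r with r in [0, pi)
# gives beta - alpha = 3*pi/2 - r, so beta - alpha lies in (pi/2, 3*pi/2]
gaps = [beta(a) - a for a in (3.9, 8.4, 25.0)]
print(gaps)
```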
Theorem 4.1
There exists a fixed constant \(\alpha _{0}>0\) such that for \(\alpha \ge \alpha _{0},\)
We remark that (4.1) is not true for all values \(\alpha _{0}>0\). This can be seen from Fig. 3. Also, for larger values of \(\alpha \), the positive margin becomes rather small. Numerical experiments suggest that the minimal value of \(\alpha _{0}\) for which (4.1) becomes true lies inside the interval \(\left[ 2.54288, 2.54289\right] \). However, since we are interested in an asymptotic expansion, the determination of the precise minimal value of \(\alpha _{0}\) is not important. From Theorem 4.1 we may then derive our first desired property.
Theorem 4.2
There exists a fixed constant \(\alpha _{0}>0\) such that \(H_{1}\left( \alpha ,\cdot \right) \) is decreasing in x whenever \(x\ge \alpha >\max \left( \alpha _{0},1\right) .\)
As we will see later, Theorem 4.2 yields our final asymptotics in Theorem 1.3.
We first establish Theorem 4.2 by assuming that Theorem 4.1 holds true. Then, we present the proof of Theorem 4.1, which is completely independent of the forthcoming lemmas related to Theorem 4.2. Finally, we present the proof of Theorem 1.3. Without proof, we first present the following
Lemma 4.1
Let \(\alpha >0\) and \(x>0\). Then \(S\left( \alpha ,x\right) \) has the representation
Lemma 4.2
Let \(\alpha >1\) and \(x>0\). Then
Proof
Using (2.1a) and by differentiating under the integral, we get
\(\square \)
Lemma 4.3
Let \(\alpha >1\) and \(x>0\). Then
is an increasing function in x.
Proof
Using Lemma 4.1 and by differentiating again under the integral sign, we get
\(\square \)
The rescaling of \(R\left( \alpha ,\cdot \right) \) in Lemma 4.3 is now very powerful in proving Theorem 4.2. Formula (4.2) is due to my colleague, Dr. Maximilian Thaler, whom I thank for this contribution.
Proof of Theorem 4.2
By assuming the validity of Theorem 4.1 there exists some \(\alpha _{0}>0\), such that \(R\left( \alpha ,\alpha \right) >0\) for all \(\alpha \ge \alpha _{0}\). From this fact and (4.2) we deduce \(S\left( \alpha ,\alpha \right) =\alpha ^{\alpha +2}R\left( \alpha ,\alpha \right) >0\) for all \(\alpha >\max \left( \alpha _{0},1\right) \). Now, combining Lemma 4.2 together with Lemma 4.3, we establish for \(x\ge \alpha > \max \left( \alpha _{0},1\right) \),
\(\square \)
We turn now to the proof of Theorem 4.1. As before, we derive several lemmas.
Lemma 4.4
Let \(\alpha >2\) and \(x>0.\) Then
Proof
By some routine calculations we obtain the representation
Here \(Z\left( \alpha \right) \) denotes the well-known zeta function; from ([4], 9.522.2) we derive, for \(\alpha >1,\)
Combining (4.3) together with (4.4), we obtain for \(\alpha >2\) the right-hand side in Lemma 4.4 by
Similarly, for \(\alpha >0\), the left-hand side in Lemma 4.4 can be derived by
\(\square \)
Lemma 4.5
Let \(\alpha >2\). Then
Proof
By using a routine estimate for the zeta function, namely
we combine this together with Lemma 4.4. For \(\alpha >2\) it then follows
\(\square \)
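The "routine estimate" for the zeta function is not spelled out above; one standard integral-test bound (our choice, possibly differing from the one actually used in the paper) is \(1+2^{-\alpha }<Z\left( \alpha \right) <1+2^{-\alpha }+\frac{2^{1-\alpha }}{\alpha -1}\) for \(\alpha >1\), which is easy to check numerically:

```python
def zeta(a, terms=100000):
    # partial sum of the Dirichlet series; the tail is O(terms**(1-a))
    # and negligible here since we only use a > 2
    return sum(n ** -a for n in range(1, terms + 1))

for a in (3.0, 5.0, 8.0):
    lower = 1 + 2 ** -a
    # the tail sum over n >= 3 is bounded by the integral of x**(-a) from 2
    upper = 1 + 2 ** -a + 2 ** (1 - a) / (a - 1)
    assert lower < zeta(a) < upper
```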
Lemma 4.6
Let \(\alpha >0\) and \(c\ge 0.\) Then, as \(\alpha \rightarrow \infty ,\) we have the following asymptotics.
Proof
We prove the relations with the generalized Watson lemma. Let \(\alpha >0,k=0,1\) and \(c\ge 0.\) Then
with \(f_{k,c}\left( t\right) =t^{k}/\left( e^{ct}\left( 1+t^{2}\right) \right) \) and \(g\left( t\right) =t-\log t\). Before applying the Watson lemma, we have to split the integral into two parts, \(T\left( \alpha +k,\alpha +c\right) =\int _{0}^{\infty }=\int _{1}^{\infty }+\int _{0}^{1},\) because g has exactly one minimum, at \(a=1\). After verifying the conditions of the Watson lemma ([9], Theorem 8.1), we may expand the integral \(\int _{1}^{\infty }\) into an asymptotic series of the form
with certain coefficients \(\lambda ,\mu \) and \(a_{n}^{\left( k,c\right) }\). For the second integral \(\int _{0}^{1}\) we have to apply a suitable transformation before expanding it. It is worth mentioning that the classical textbooks on asymptotic analysis (compare [9], p. 86) contain no general formula for the coefficients \(a_{n}\). Only the first one or two coefficients are derived there and, as can easily be checked, they are of a rather complicated nature. Surprisingly, in the newer literature ([10], Formula 2.3.18) one can find a remarkably suitable representation for these coefficients in terms of certain residues, as well as a reference for its derivation, namely (in our context)
We used symbolic computation software (Wolfram Mathematica 12.0) for the computation of the residues in (4.5), but we do not present the general output of these formulas, which would fill several pages. However, since the calculations are of crucial importance in the proof of Theorem 4.1, we present all relevant outputs. For \(k=0,1,\ldots \), and \(c=0\) we calculate
For \(k=0\) and \(c\ge 0\) we compute \(a_{0}^{\left( 0,c\right) }=e^{-c}\frac{1}{2\sqrt{2}}\) and \(a_{1}^{\left( 0,c\right) }=-e^{-c}\frac{1+3c}{6}.\) With \(\lambda =1\) and \(\mu =2\) we obtain for \(\alpha \rightarrow \infty ,\)
Proceeding in the same way for the second integral \(\int _{0}^{1}\), we compute
For \(k=0\) and \(c\ge 0\) we compute \(a_{0}^{\left( 0,c\right) }=e^{-c}\frac{1}{2\sqrt{2}}\) and \(a_{1}^{\left( 0,c\right) }=e^{-c}\frac{1+3c}{6}.\) Again, with \(\lambda =1\) and \(\mu =2\) we obtain for \(\alpha \rightarrow \infty ,\)
Collecting the results we finally arrive at the expansion in Lemma 4.6. \(\square \)
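For the phase \(g\left( t\right) =t-\log t\), the leading-order behavior of such Laplace-type integrals can be checked against a closed form in the simplest case \(f\equiv 1\) (our simplified stand-in for \(f_{k,c}\); the factor \(1/\left( 1+t^{2}\right) \) only changes the coefficients): \(\int _{0}^{\infty }e^{-\alpha \left( t-\log t\right) }dt=\Gamma \left( \alpha +1\right) /\alpha ^{\alpha +1}\sim \sqrt{2\pi /\alpha }\,e^{-\alpha }\), which is Stirling's formula in disguise.

```python
import math

def log_exact(a):
    # log of Gamma(a+1)/a**(a+1), the closed form of the integral for f == 1
    return math.lgamma(a + 1) - (a + 1) * math.log(a)

def log_laplace(a):
    # leading Laplace term: minimum of g(t) = t - log(t) at t = 1,
    # with g(1) = 1 and g''(1) = 1
    return 0.5 * math.log(2 * math.pi / a) - a

# the ratio exact/leading tends to 1, with relative error ~ 1/(12*alpha)
ratios = [math.exp(log_exact(a) - log_laplace(a)) for a in (5.0, 20.0, 80.0)]
print(ratios)
```

Working in log-space avoids overflow of \(\Gamma \left( \alpha +1\right) \) and \(\alpha ^{\alpha +1}\) for large \(\alpha \).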
Lemma 4.7
There exists some \(\alpha _{1}>0\) such that for \(\alpha \ge \alpha _{1}\),
Proof
From Lemma 4.6, we calculate
Now, combining the last expression again together with Lemma 4.6, we obtain for \(\alpha \rightarrow \infty \) the asymptotics
The assertion now follows. \(\square \)
Proof of Theorem 4.1
Let \(\alpha >\max \left( 2,\alpha _{1}\right) \). Combining Lemma 4.5 together with Lemma 4.7, we deduce
Since \(T\left( \alpha ,\alpha \right) >0\) for all \(\alpha >0,\) a standard calculation reveals that the remaining term in the last expression becomes positive, at least for all \(\alpha \ge \alpha _{0}=\max \left( 14,\alpha _{1}\right) .\) \(\square \)
We turn now to the final proof of Theorem 1.3, again by establishing some lemmas. Without proof, we first present the following
Lemma 4.8
Let \(\alpha >0\) and \(\beta =\beta \left( \alpha \right) =\pi \left[ \frac{\alpha }{\pi }\right] +\frac{3}{2}\pi .\) Then
Lemma 4.9
Let \(\alpha >0\) and \(c\ge 0\). Then
Proof
From Lemma 4.6, we simply derive
\(\square \)
Lemma 4.10
Let \(\alpha >2\). Then
Proof
Using (2.1a), we obtain
Next, using Lemma 4.4 together with a standard estimate for the zeta function, we establish
and
Now, combining (4.6), (4.7) and (4.8) together with Lemma 4.9, we establish the result. \(\square \)
Proof of Theorem 1.3
For \(\alpha \ge 2\), it follows from Theorem 3.1 that
For the reverse direction, let \(\alpha >2.\) From Lemma 4.10 we deduce that there exists a function \(\varepsilon \left( \alpha \right) \rightarrow 0\) as \(\alpha \rightarrow \infty \) for which we estimate
Using Lemma 4.8 and Theorems 4.2 and 3.1, we further obtain for \(\alpha >\max \left( 2,\alpha _{0}\right) \) the estimate
Finally, combining (4.9) with (4.10) establishes the result. \(\square \)
5 Approximation Polynomials in \(L_{\infty }\)
This section is devoted to an explicit construction of near best approximation polynomials to \(\left| x\right| ^{\alpha },\alpha >0\) in the \(L_{\infty }\) norm. The construction involves the polynomials \(P_{n}^{\left( 1\right) }\) and \(P_{n}^{\left( 2\right) }\) together with the Chebyshev polynomials \(T_{n}\), and is based on numerical results. The resulting formulas could indicate a possible general approach to the structure of the Bernstein constants \(\Delta _{\infty ,\alpha }\).
First, we refer back to the content of Theorems 1.1 and 1.2. Based on numerical computations, we made the following observation. For all \(\alpha >0\) (not an even integer) we find that, beginning with the second positive node, all interpolation points of the best approximation polynomials \(P_{2n}^{*}\) are located somewhere between two consecutive interpolation points of the \(P_{2n}^{\left( 1\right) }\) and \(P_{2n}^{\left( 2\right) }\) polynomials, see Fig. 4.
It is well known that \(\left[ 1,x,\ldots ,x^{n},x^{\alpha /2}\right] \) is a hypernormal Haar space of dimension \(n+2\) on the interval \(\left[ 0,1\right] \), see ([15], p. 199). Consequently, we always have an alternation point at \(x=0\). Thus we cannot expect to match the quality of best approximation solely by using the polynomials \(P_{n}^{\left( 1\right) }\) and \(P_{n}^{\left( 2\right) }\), since both of them interpolate at \(x=0\). We therefore consider the following polynomials
where \(c_{1,\alpha }\) and \(c_{2,\alpha }\) are numerical constants depending only on \(\alpha \). As we will see, for good choices of \(c_{1,\alpha }\) and \(c_{2,\alpha }\), the linear combination of \(P_{2n}^{\left( 1\right) }\) and \(P_{2n}^{\left( 2\right) }\) results in a polynomial with almost all the same interpolation points as its best approximation \(P_{2n}^{*}\), while at the same time the last term in (5.1) establishes the alternation property at \(x=0\) and leaves the new interpolation points largely unchanged.
Since we are interested in the asymptotic behavior of the polynomials \(P_{2n}^{\left( 3\right) }\), we pass directly to the resulting scaled limit. From Theorems 1.1, 1.2 and ([13], Lemma 3.6), it follows that uniformly on compact subsets of \(\left[ 0,\infty \right) \) we have
Thus, we try to numerically minimize the quantity
For the moment, we cannot present an explicit formula for the constants \(c_{1,\alpha }\) and \(c_{2,\alpha }\), but based on numerical calculations, we present the following numerical Table 1.
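Since formula (5.1) and the minimized quantity are not reproduced here in explicit form, we give only a generic, heavily simplified sketch (our addition) of the kind of search involved: a brute-force scan of coefficients of a two-term blend minimizing a discrete sup-norm. As a toy target we use the classical fact that the best uniform quadratic approximation to \(\left| x\right| \) on \(\left[ -1,1\right] \) is \(x^{2}+\frac{1}{8}\), with error \(\frac{1}{8}\).

```python
def sup_norm_blend_search(f, g1, g2, xs, c1_grid, c2_grid):
    # brute-force search for (c1, c2) minimizing max_x |f(x) - c1*g1(x) - c2*g2(x)|
    best_err, best_c = float("inf"), None
    for c1 in c1_grid:
        for c2 in c2_grid:
            err = max(abs(f(x) - c1 * g1(x) - c2 * g2(x)) for x in xs)
            if err < best_err:
                best_err, best_c = err, (c1, c2)
    return best_err, best_c

xs = [i / 100 for i in range(-100, 101)]
c1_grid = [0.8 + 0.01 * i for i in range(41)]   # coefficient of x**2
c2_grid = [0.005 * i for i in range(51)]        # coefficient of the constant 1
err, (c1, c2) = sup_norm_blend_search(abs, lambda x: x * x, lambda x: 1.0,
                                      xs, c1_grid, c2_grid)
# equioscillation predicts c1 = 1, c2 = 1/8 and err = 1/8
```

In the paper's setting the role of the two basis functions is played by the limits of the \(P_{2n}^{\left( 1\right) }\) and \(P_{2n}^{\left( 2\right) }\) polynomials, and the search produces the values reported in Table 1.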
Although formula (5.1) is not in full explicit form it appears to be an important step towards a possible representation for the Bernstein constants \(\Delta _{\infty ,\alpha }\). This can be seen from the following observations. First, recall the existence of the unique minimizing entire function \(H_{\alpha }^{*}\) of exponential type 1 from formulas (1.1) and (1.3) together with the facts that \(\Delta _{\infty ,\alpha }=\left\| \left| x\right| ^{\alpha }-H_{\alpha }^{*}\right\| _{L_{\infty }\left( \mathbb {R}\right) }\) and \(\lim _{n\rightarrow \infty }n^{\alpha }P_{n}^{*}\left( \frac{x}{n}\right) =H_{\alpha }^{*}\left( x\right) \). There is also a representation for \(H_{\alpha }^{*}\) as an interpolation series with unknown interpolation points \(0<x_{1}^{*}<x_{2}^{*}<x_{3}^{*}<\cdots \). However, it is known ([6], Formulas 1.4 and 1.5) that
Moreover, from ([5], Formulas 1.6 and 1.7) it follows that the minimizing entire function \(H_{\alpha }^{*}\) satisfies an alternation property with unknown alternation points.
Using our numerical values \(c_{1,\alpha }\) and \(c_{2,\alpha }\), we take the right-hand side of formula (5.2) as an approximation for \(H_{\alpha }^{*}\). First, in Fig. 5 we present some illustrations of the \(P_{n}^{\left( 3\right) }\) polynomials in comparison with their best approximation polynomials \(P_{n}^{*}\) for the values \(\alpha =0.5\) and \(n=4,8\). The same is done in Fig. 6 for \(\alpha = 1\).
In Fig. 7 we present the approximations for \(H_{\alpha }^{*}\) together with their corresponding interpolation points for values \(\alpha =0.5\) and \(\alpha =1\). In Fig. 8 we illustrate the near equioscillating behavior of the error term in (5.2), again for values \(\alpha =0.5\) and \(\alpha =1\), and we compare the maximal error magnitude with the corresponding numerical values for the Bernstein constants
The values for the Bernstein constants are taken from ([15], Table 1.1).
Finally, in the following Table 2 we present the approximations for the best positive interpolation points \(x_{j}^{*}\) for \(H_{\alpha }^{*}\), \(j=1,2,\ldots ,10\), from (5.2), see also Fig. 7.
The values in Table 2 are in accordance with (5.4). Moreover, Table 2 suggests that, for small positive values of \(\alpha \), all interpolation points are slightly shifted to the left. Apparently, this effect becomes greater for those interpolation points which are located closer to the origin. On the other hand, the values in Table 2 suggest that \( x_{n+1}^{*}-x_{n}^{*}\rightarrow \pi \), as \(n\rightarrow \infty \).
We remark that the polynomials \(P_{n}^{\left( 3\right) }\) generally appear to be of extremely good quality compared with the best approximation polynomials \(P_{n}^{*}\), even for small values of n. Presently, we are not aware of similar and comparable constructions of near best approximation polynomials to \(\left| x\right| ^{\alpha }\) in the literature.
References
Bernstein, S.N.: Sur la meilleure approximation de \(\left|x\right|\) par des polynômes des degrés donnés. Acta Math. 37, 1–57 (1913)
Bernstein, S.N.: Sur la meilleure approximation de \(\left|x\right|^{p}\) par des polynômes des degrés trés élevés. Bull. Acad. Sci. USSR Sér. Math. 2, 181–190 (1938)
Ganzburg, M.I.: The Bernstein constant and polynomial interpolation at the Chebyshev nodes. J. Approx. Theory 119, 193–213 (2002)
Gradshteyn, I.S., Ryzhik, I.M.: Table of Integrals, Series and Products, 5th edn. Academic Press, New York (1994)
Lubinsky, D.S.: On the Bernstein constants of polynomial approximation. Constr. Approx. 25(3), 303–366 (2007)
Lubinsky, D.S.: Series representations for best approximating entire functions of exponential type. In: Chen, G., Lai, M. (eds.) Modern Methods in Mathematics, Athens, GA, USA, May 16–19, 2005, Nashboro Press, Brentwood, pp. 356–364 (2006)
Nikolskii, S.M.: On the best mean approximation by polynomials of the functions \(\left|x-c\right|^{s}\). Izvestia Akad. Nauk SSSR 11, 139–180 (1947) (in Russian)
Pachón, R., Trefethen, L.N.: Barycentric–Remez algorithms for best polynomial approximation in the Chebfun system. BIT 49(4), 721–741 (2009)
Olver, F.W.J.: Asymptotics and Special Functions. A K Peters, Wellesley (1997)
Olver, F., Lozier, D., Boisvert, R., Clark, Ch.: NIST (National Institute of Standards and Technology) Handbook of Mathematical Functions, Cambridge University Press (2010)
Raitsin, R.A.: On the best approximation in the mean by polynomials and entire functions of finite degree of functions having an algebraic singularity. Izv. Vysch. Uchebn. Zaved. Mat. 13, 59–61 (1969) (in Russian)
Revers, M.: On the asymptotics of polynomial interpolation to \(\left|x\right|^{\alpha }\) at the Chebyshev nodes. J. Approx. Theory 165, 70–82 (2013)
Revers, M.: Extremal polynomials and entire functions of exponential type. Results Math. 73, Article No. 109 (2018)
Varga, R.S., Carpenter, A.J.: On the Bernstein conjecture in approximation theory. Constr. Approx. 1, 333–348 (1985)
Varga, R.S., Carpenter, A.J.: Some Numerical Results on Best Uniform Polynomial Approximation of \(x^{\alpha }\) on \(\left[0,1\right] \). Lecture Notes in Mathematics, vol. 1550, pp. 192–222. Springer, Berlin (1993)
Funding
Open access funding provided by Paris Lodron University of Salzburg.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Cite this article
Revers, M. Asymptotics of Polynomial Interpolation and the Bernstein Constants. Results Math 76, 100 (2021). https://doi.org/10.1007/s00025-021-01408-3
Keywords
- Bernstein constants
- Chebyshev nodes
- entire functions of exponential type
- higher order asymptotics
- Watson-lemma
- best uniform approximation