Asymptotic and Non-asymptotic Results in the Approximation by Bernstein Polynomials

This paper deals with the approximation of functions by the classical Bernstein polynomials in terms of the Ditzian–Totik modulus of smoothness. Asymptotic and non-asymptotic results are respectively stated for continuous and twice continuously differentiable functions. By using a probabilistic approach, known results are either completed or strengthened.


Introduction and Statements of the Main Results
This work is partially supported by Research Project PGC2018-097621-B-I00. The second author is also supported by Junta de Andalucía Research Group FQM-0178.
The indicator function of a set A is denoted by 1_A, and E stands for mathematical expectation.
The second order central difference of f is defined by

Δ_h^2 f(x) = f(x + h) − 2f(x) + f(x − h).

The classical first order modulus of continuity is simply denoted by ω(f; δ).
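For readers who wish to experiment, the two quantities just introduced are easy to compute. The following sketch (the function names central_diff2 and modulus_of_continuity are ours, not the paper's) evaluates the second order central difference and approximates ω(f; δ) on a uniform grid of [0, 1].

```python
import numpy as np

def central_diff2(f, x, h):
    # Second order central difference: f(x + h) - 2 f(x) + f(x - h).
    return f(x + h) - 2.0 * f(x) + f(x - h)

def modulus_of_continuity(f, delta, grid=2001):
    # Approximate omega(f; delta) = sup |f(x) - f(y)| over x, y in [0, 1]
    # with |x - y| <= delta, using a uniform grid on [0, 1].
    xs = np.linspace(0.0, 1.0, grid)
    vals = f(xs)
    steps = max(1, int(round(delta * (grid - 1))))
    return max(np.max(np.abs(vals[k:] - vals[:-k])) for k in range(1, steps + 1))
```

For instance, the second order central difference of x ↦ x^2 equals 2h^2 at every x, and for f(x) = x the modulus is δ itself.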
In this paper, we will make use of an important inequality proved by Bustamante [2]. Finally, the nth Bernstein polynomial of f is defined as

B_n f(x) = Σ_{k=0}^{n} f(k/n) C(n, k) x^k (1 − x)^{n−k},

where C(n, k) denotes the binomial coefficient. We have the probabilistic representation

B_n f(x) = E f(S_n(x)/n),    (2)

where S_n(x) is a random variable having the binomial law with parameters n and x, that is to say,

P(S_n(x) = k) = C(n, k) x^k (1 − x)^{n−k},  k = 0, 1, . . . , n.

Throughout this paper, whenever we write f, n, x, and y, we are assuming that f ∈ C[0, 1], n ∈ N, and x, y ∈ [0, 1].
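The definition of B_n f and its probabilistic reading translate directly into code. The sketch below (our own illustration; the helper name bernstein is not from the paper) evaluates the Bernstein polynomial as the expectation of f(S_n(x)/n) under the binomial law.

```python
import math

def bernstein(f, n, x):
    # B_n f(x) = sum_{k=0}^{n} f(k/n) C(n, k) x^k (1 - x)^(n - k)
    #          = E f(S_n(x)/n), where S_n(x) ~ Binomial(n, x).
    return sum(f(k / n) * math.comb(n, k) * x**k * (1.0 - x)**(n - k)
               for k in range(n + 1))
```

Bernstein polynomials reproduce affine functions exactly, while for f(t) = t^2 one gets B_n f(x) = x^2 + x(1 − x)/n, in accordance with the variance of S_n(x).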
Following the works by Ditzian and Ivanov [4] and Totik [9], the rates of uniform convergence for the Bernstein polynomials are characterized by statements (4) and (5), provided that f is not an affine function. The contribution of this paper is twofold. First, we strengthen statement (5) by giving a non-asymptotic version of it. In fact, we prove the following result.
Second, we complete statement (4) in the following asymptotic form.

Theorem 2. Let (τ_n)_{n≥1} be a sequence of positive real numbers such that
If f ∈ C[0, 1] is not an affine function, then Moreover, we have in (4), This result is based upon Theorem 3 in Sect. 3, which gives estimates of the form for some explicit constants K_2(n, x) depending on n and x.

The paper is organized as follows. The proof of Theorem 1 is given in Sect. 2. We show Theorem 2 in Sect. 3 with the aid of two kinds of auxiliary results. On the one hand, we define certain smooth approximants Q_h^a f of the function f ∈ C[0, 1], by antisymmetrizing in an appropriate way the classical Steklov means of f. On the other hand, we estimate the tail probabilities and the truncated variance of the random variable S_n(x) appearing in the probabilistic representation of B_n f given in (2).
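To make the quantities in these statements concrete, the following numerical sketch compares sup_x |B_n f(x) − f(x)| with the second order Ditzian–Totik modulus ω_φ^2(f; 1/√n), taking φ(x) = √(x(1 − x)) as is standard in the Ditzian–Totik setting; all function names, grids, and discretization choices here are our own, not the paper's.

```python
import math
import numpy as np

def phi(x):
    # Ditzian-Totik weight phi(x) = sqrt(x(1 - x)).
    return np.sqrt(x * (1.0 - x))

def dt_modulus2(f, delta, grid=401):
    # omega_phi^2(f; delta): sup over 0 < h <= delta and admissible x of
    # |f(x + h phi(x)) - 2 f(x) + f(x - h phi(x))|, keeping x +/- h phi(x) in [0, 1].
    xs = np.linspace(0.0, 1.0, grid)
    best = 0.0
    for h in np.linspace(delta / 20.0, delta, 20):
        lo, hi = xs - h * phi(xs), xs + h * phi(xs)
        ok = (lo >= 0.0) & (hi <= 1.0)
        best = max(best, float(np.max(np.abs(f(hi[ok]) - 2.0 * f(xs[ok]) + f(lo[ok])))))
    return best

def bernstein_sup_error(f, n, grid=401):
    # sup_x |B_n f(x) - f(x)| on a grid, B_n f(x) = sum f(k/n) C(n,k) x^k (1-x)^(n-k).
    fk = np.array([f(k / n) for k in range(n + 1)])
    err = 0.0
    for x in np.linspace(0.0, 1.0, grid):
        w = np.array([math.comb(n, k) * x**k * (1.0 - x)**(n - k) for k in range(n + 1)])
        err = max(err, abs(float(np.dot(w, fk)) - float(f(x))))
    return err
```

For f(x) = |x − 1/2| and n = 64, one finds a sup error of about 0.05 against ω_φ^2(f; 1/8) = 0.125, so the error is indeed controlled by the modulus at scale 1/√n.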

Proof of Theorem 1

Preliminaries
Taylor's formula of order m ∈ N for f ∈ C^m[0, 1], with remainder in integral form, can be written as where β_m is a random variable with the beta density as well as Adding these two identities, we obtain Replacing h by hϕ(x) in (10) and applying the reverse triangle inequality, we have thus completing the proof. Gonska et al. [6] showed that
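The probabilistic reading of the integral-form Taylor remainder above can be checked numerically. In its standard form (which we assume here), β_m has the Beta(1, m) density m(1 − t)^{m−1} on [0, 1], and the order-m remainder equals (h^m/m!) E f^{(m)}(x + h β_m); the sketch below, with names of our own choosing, evaluates this expectation by a midpoint rule.

```python
import math

def taylor_beta_remainder(fm, x, h, m, steps=20000):
    # (h^m / m!) * E fm(x + h * beta_m), where beta_m has the Beta(1, m)
    # density m (1 - t)^(m - 1) on [0, 1] and fm is the m-th derivative of f.
    acc = 0.0
    for i in range(steps):  # midpoint-rule quadrature on [0, 1]
        t = (i + 0.5) / steps
        acc += m * (1.0 - t) ** (m - 1) * fm(x + h * t)
    return (h ** m / math.factorial(m)) * (acc / steps)
```

For f = exp (so that every derivative is again exp), m = 2, x = 0.3, and h = 0.2, the computed remainder matches exp(x + h) − exp(x) − h exp(x).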

Proof of Theorem 1
Statement (6) is an immediate consequence of (11), Lemma 1 with δ = 1/√n, and the reverse and direct triangle inequalities. On the other hand, we have from Lemma 1. Thus, statement (5) readily follows from (6), which completes the proof.

Auxiliary Results
Let 0 < h ≤ 1/3. We consider the Steklov means of f defined as In probabilistic terms, the Steklov means of f can be written as follows. Let V_1 and V_2 be independent, identically distributed random variables having the uniform distribution on [−1, 1], and set V = (V_1 + V_2)/2. Since the probability density of V is the triangular density ρ(v) = 1 − |v|, v ∈ [−1, 1], we can write (12).

Lemma 2. Let 0 < h ≤ 1/3 and let P_h f(y) be as in (12). Then,
Proof. Since V takes values in [−1, 1] and is symmetric (i.e., V and −V have the same law), we see that thus showing (a). On the other hand, it can be checked that is a second antiderivative of f. This readily implies part (b) and completes the proof.
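The random variable V has an explicit law: the average of two independent Uniform[−1, 1] variables has the triangular density ρ(v) = 1 − |v| on [−1, 1]. Assuming the probabilistic form P_h f(y) = E f(y + hV) for the representation (12) (this reading is ours, consistent with the domain [h, 1 − h]), the following Monte Carlo sketch illustrates both facts.

```python
import random

def steklov_mc(f, y, h, samples=200000, seed=1):
    # Monte Carlo estimate of E f(y + h V), with V = (V1 + V2) / 2 and
    # V1, V2 independent Uniform[-1, 1]; V has density rho(v) = 1 - |v|.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        v = (rng.uniform(-1.0, 1.0) + rng.uniform(-1.0, 1.0)) / 2.0
        total += f(y + h * v)
    return total / samples
```

Since E V = 0, linear functions are reproduced (E(y + hV) = y), and under the triangular density P(|V| ≤ 1/2) = 3/4, which the indicator test below recovers.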
We will make use of the approximant P_h f, whose domain is the interval [h, 1 − h], to define a further approximant whose domain is the whole interval [0, 1], while keeping properties analogous to those given in Lemma 2. To this end, we assume that and take It turns out that h ≤ min(ax, 1/3).
Now, we define the approximant Q_h^a f(y) by antisymmetrizing P_h f(y) around the axes y = ax and y = 1 − ax as follows The fact that Q_h^a f is well defined readily follows from (13) and (14). Also, note that Q_h^a f is twice differentiable except at the points ax and 1 − ax, at which it only has one-sided second derivatives. This implies that Under assumptions (13) and (14), we have and Thus, the first inequality in part (a) follows from Lemma 2(a) and definition (16), whereas the second one follows from Lemma 2(b).
Proof. (a) As follows from (3), we have Let θ ≥ 0. By (22) and Chebyshev's inequality, we have where we have used the inequalities Choosing θ = r in (23) (the value minimizing the exponent), we get On the other hand, we claim that Indeed, let 0 ≤ θ ≤ 1. Using the inequalities we have, as in the proof of (24), On the other hand, since 1 − ax ≥ 1/2, we have

k/(k − 1) ≤ (n/2)/(n/2 − 1) = n/(n − 2),  k > n(1 − ax).
We therefore have This, together with (27) and (28), shows part (b) and completes the proof.
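The optimization over θ in the proof above is in the spirit of the classical Chernoff–Hoeffding bound for binomial tails, P(S_n(x)/n − x ≥ t) ≤ exp(−2nt^2). Since the paper's explicit bounds (23)–(24) are not reproduced here, the sketch below uses this standard stand-in and checks it against the exact tail probability.

```python
import math

def binom_tail(n, x, t):
    # Exact P(S_n(x)/n - x >= t) for S_n(x) ~ Binomial(n, x).
    k_min = math.ceil(n * (x + t))
    return sum(math.comb(n, k) * x**k * (1.0 - x)**(n - k)
               for k in range(k_min, n + 1))

def hoeffding_bound(n, t):
    # Hoeffding: P(S_n(x)/n - x >= t) <= exp(-2 n t^2), uniformly in x.
    return math.exp(-2.0 * n * t * t)
```

For n = 100, x = 0.3, and t = 0.1, the exact tail is strictly positive and lies below exp(−2) ≈ 0.135, as the bound predicts.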
We are in a position to give the following local estimate.