Abstract
We introduce a sequence of random linear operators arising from piecewise linear interpolation at a set of random nodes on the unit interval. We show that such operators uniformly converge in probability to the target function, providing at the same time rates of convergence. Analogous results are shown for their deterministic counterparts, derived by taking expectations of the aforementioned random operators. Special attention is paid to the case in which the random nodes are the uniform order statistics, where an explicit form for their associated deterministic operators is provided. This allows us to compare the speed of convergence of the aforementioned operators with that of the random and deterministic Bernstein polynomials.
1 Introduction
Let \(\mathbb {N}\) be the set of positive integers and \(\mathbb {N}_0=\mathbb {N}\cup \{0\}\). Denote by \(1_A\) the indicator function of the set A, and by \(e_i\), \(i\in \mathbb {N}_0\), the monomial function \(e_i(x)=x^i\). As usual, C[0, 1] stands for the space of all real continuous functions defined on [0, 1] endowed with the supremum norm \(\Vert \cdot \Vert _{\infty }\), and \(C^m[0,1]\) denotes the subspace of all m-times continuously differentiable functions. Throughout this paper, we assume that \(n\in \mathbb {N}\) and \(f\in C[0,1]\).
Let \(x\in [0,1]\). The classical Bernstein polynomials, defined by
$$B_n(f;x)=\sum _{k=0}^{n}f\left( \frac{k}{n}\right) \left( {\begin{array}{c}n\\ k\end{array}}\right) x^k(1-x)^{n-k},$$
are the paradigmatic example of positive linear operators. Rates of uniform convergence for these operators are well known. In fact, let \(\omega _2^{\varphi }(f;h)\) be the Ditzian–Totik modulus of smoothness of f with weight function \(\varphi (x)=\sqrt{x(1-x)}\), that is,
$$\omega _2^{\varphi }(f;h)=\sup _{0<t\le h}\ \sup _{x\pm t\varphi (x)\in [0,1]}\left| f(x+t\varphi (x))-2f(x)+f(x-t\varphi (x))\right| ,$$
for \(h\ge 0\). It turns out (cf. Ditzian and Ivanov [7] and Totik [15]) that
$$K_1\,\omega _2^{\varphi }\left( f;n^{-1/2}\right) \le \Vert B_n(f)-f\Vert _{\infty }\le K_2\,\omega _2^{\varphi }\left( f;n^{-1/2}\right) , \qquad (1.1)$$
for some absolute constants \(K_1\) and \(K_2\). Păltănea [12, Corollary 4.1.10] gave \(K_2=2.5\) (see also Bustamante [5]).
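For illustration, the classical Bernstein polynomial can be evaluated directly from its definition. The following is a minimal Python sketch (the function name `bernstein` is ours, not from the paper); it also checks the classical second-moment identity \(B_n(e_2;x)=x^2+x(1-x)/n\), which underlies the rate estimates above.

```python
import math

def bernstein(f, n, x):
    """Evaluate the classical Bernstein polynomial B_n(f; x), x in [0, 1]."""
    return sum(f(k / n) * math.comb(n, k) * x ** k * (1 - x) ** (n - k)
               for k in range(n + 1))

# B_n reproduces affine functions exactly, while for e_2 the classical
# identity B_n(e_2; x) = x^2 + x(1 - x)/n holds.
affine = bernstein(lambda t: 2 * t + 1, 10, 0.3)   # = 1.6 exactly
second = bernstein(lambda t: t * t, 10, 0.5)       # = 0.25 + 0.025 = 0.275
```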
On the other hand, the Bernstein polynomials can be written in probabilistic terms as follows (see, for instance, Adell and de la Cal [3] and Adell and Cárdenas-Morales [1]). Let \((U_k)_{1\le k\le n}\) be a finite sequence of independent identically distributed random variables having the uniform distribution on [0, 1]. Let
$$0=:U_{n:0}\le U_{n:1}\le \cdots \le U_{n:n}\le U_{n:n+1}:=1 \qquad (1.2)$$
be the order statistics obtained by arranging \((U_k)_{1\le k\le n}\) in increasing order of magnitude. Consider the random variable
$$S_n(x)=\sum _{k=1}^{n}1_{[0,x]}(U_k)=\max \left\{ k\in \{0,1,\ldots ,n\}:U_{n:k}\le x\right\} . \qquad (1.3)$$
Since \(S_n(x)\) has the binomial law with parameters n and x, we have
$$B_n(f;x)=\mathbb {E}\,f\left( \frac{S_n(x)}{n}\right) , \qquad (1.4)$$
where \(\mathbb {E}\) stands for mathematical expectation.
Let \(x\in [0,1)\). By (1.2) and the second equality in (1.3), we have the identity of events \(\{S_n(x)=k\}=\{U_{n:k}\le x<U_{n:k+1}\}\), \(k=0,1,\ldots ,n\). Thus, we can write
whereas
Denote by \(\mathbb {U}_n=(U_{n:k})_{0\le k\le n+1}\). Accordingly, we define for \(x\in [0,1)\)
$$B_n(f,\mathbb {U}_n;x):=\sum _{k=0}^{n}f\left( \frac{k}{n}\right) 1_{[U_{n:k},U_{n:k+1})}(x), \qquad (1.5)$$
together with \(B_n(f,\mathbb {U}_n;1):=f(1)\). From (1.4) and (1.5), we see that
$$\mathbb {E}\,B_n(f,\mathbb {U}_n;x)=B_n(f;x),\quad x\in [0,1].$$
In other words, formula (1.5) defines a random positive linear operator whose expectation is the classical Bernstein polynomial of f.
From (1.3), we see that \(S_n(x)\le S_n(y)\), \(0\le x\le y\le 1\), thus implying that the random operator \(B_n(f,\mathbb {U}_n;x)\) preserves monotone functions. However, this operator produces, in general, discontinuous functions. This notwithstanding, \(B_n(f,\mathbb {U}_n;x)-f(x)\) uniformly converges in probability to 0. To be more precise, recall that a sequence of random variables \((X_n)_{n\ge 1}\) converges in probability to 0, denoted by \(X_n{\mathop {\longrightarrow }\limits ^{\text {(P)}}} 0\), if
$$\lim _{n\rightarrow \infty }P\left( |X_n|>\epsilon \right) =0,\quad \text {for every }\epsilon >0.$$
It will be shown in Theorem 5.5 at the end of this paper that
$$\sup _{0\le x\le 1}\left| B_n(f,\mathbb {U}_n;x)-f(x)\right| {\mathop {\longrightarrow }\limits ^{\text {(P)}}}\ 0,$$
providing at the same time rates of convergence.
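The representation \(B_n(f,\mathbb {U}_n;x)=f(S_n(x)/n)\) makes this random operator easy to simulate: one simply counts the sample points falling in [0, x]. A Monte Carlo sketch (all names are ours); averaging over independent samples recovers the expectation, that is, the classical Bernstein polynomial, which is in turn close to f(x) for large n.

```python
import random

def random_bernstein(f, sample, x):
    """B_n(f, U_n; x) = f(S_n(x)/n), where S_n(x) counts the sample
    points lying in [0, x]."""
    s = sum(1 for u in sample if u <= x)
    return f(s / len(sample))

rng = random.Random(0)
n, x = 2000, 0.4
# Average of 200 independent copies: approximates B_n(f; x) ~ f(x) = 0.16.
est = sum(random_bernstein(lambda t: t * t,
                           [rng.random() for _ in range(n)], x)
          for _ in range(200)) / 200
```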
Having in mind the previous considerations, suppose we are given a random vector \(\mathbb {V}_n=(V_k)_{0\le k\le n+1}\) such that
$$0=:V_0<V_1<\cdots <V_n<V_{n+1}:=1,\quad \text {almost surely}. \qquad (1.6)$$
We define the following random linear operator, arising from piecewise linear interpolation at the set of the random nodes defined in (1.6), that is,
$$L_n(f,\mathbb {V}_n;x):=\sum _{k=0}^{n}\left( f(V_k)+\frac{f(V_{k+1})-f(V_k)}{V_{k+1}-V_k}\,(x-V_k)\right) 1_{[V_k,V_{k+1})}(x), \qquad (1.7)$$
for \(0\le x<1\), together with \(L_n(f,\mathbb {V}_n;1)=f(1)\).
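Piecewise linear interpolation at the random nodes is straightforward to implement: locate the pair of consecutive nodes bracketing x and interpolate linearly between them. A Python sketch with our own naming, using sorted uniform samples as nodes; exactness on affine functions reflects the preservation property noted below.

```python
import bisect
import random

def l_n(f, nodes, x):
    """Piecewise linear interpolation of f at the given nodes,
    with nodes[0] = 0 and nodes[-1] = 1, evaluated at x in [0, 1]."""
    if x >= nodes[-1]:
        return f(1.0)
    k = bisect.bisect_right(nodes, x) - 1   # largest k with nodes[k] <= x
    v0, v1 = nodes[k], nodes[k + 1]
    return f(v0) + (f(v1) - f(v0)) * (x - v0) / (v1 - v0)

rng = random.Random(1)
nodes = [0.0] + sorted(rng.random() for _ in range(10)) + [1.0]
val = l_n(lambda t: 3 * t - 1, nodes, 0.37)   # affine, so equals 3*0.37 - 1
```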
As pointed out by the referee, “the algorithm in (1.7) does not have a strong numerical stability from the point of view of a computer programmer. From time to time, any random number generator can produce two consecutive numbers whose difference is so small that many computers treat it as zero due to computers’ limited machine precision ... However, random piecewise linear interpolating is theoretically appealing”. In fact, we describe below the approximation properties of the random operator \(L_n(f,\mathbb {V}_n;x)\) and the deterministic operator \(L_n(f;x)\) defined in (1.8), and compare them with those of Bernstein polynomials.
Taking expectations in (1.7), we obtain the (deterministic) linear operator
$$L_n(f;x):=\mathbb {E}\,L_n(f,\mathbb {V}_n;x), \qquad (1.8)$$
for \(0\le x<1\), whereas \(L_n(f;1)=f(1)\). Observe that
$$L_n(f,\mathbb {V}_n;0)=f(0)=L_n(f;0)\quad \text {and}\quad L_n(f,\mathbb {V}_n;1)=f(1)=L_n(f;1). \qquad (1.9)$$
By construction, both operators are positive and preserve affine functions, since
$$L_n(e_i,\mathbb {V}_n;x)=e_i(x)=L_n(e_i;x),\quad i=0,1. \qquad (1.10)$$
It is also clear that both operators preserve convex functions. Apart from these shape preserving properties, the most interesting fact is that both operators approximate the target function f(x) at a faster rate of convergence than the standard positive linear operators do.
The interest in replacing deterministic nodes by random ones comes from the fact that, in applications, a variety of circumstances may cause the data to be contaminated by random errors. This is one of the main reasons for the introduction of the so-called stochastic Bernstein polynomials by Wu et al. [16], further investigated in recent papers (see [2, 13, 17]).
We point out two main differences between the operators defined in (1.7) and (1.8). In the first place, the construction of the random operator \(L_n(f,\mathbb {V}_n;x)\) is very simple in comparison with that of \(L_n(f;x)\); in this regard, see Proposition 4.1 in Sect. 4 for the specific form of \(L_n(f;x)\) in the case of uniform order statistics. In return, \(L_n(f;x)\) approximates f(x) at slightly better rates of convergence than \(L_n(f,\mathbb {V}_n;x)\) does. As an illustration of this fact, we have from Corollaries 5.1 and 5.2 in Sect. 5 the following approximation results in the case of uniform order statistics
$$\Vert L_n(f)-f\Vert _{\infty }\le \frac{5}{2}\,\omega _2^{\varphi }\left( f;(n+1)^{-1/2}\right) \qquad (1.11)$$
and
for an appropriate \(\tau _n>1\), where
$$\epsilon _n=\left( \frac{\tau _n\log (n+1)}{n}\right) ^{1/2}. \qquad (1.13)$$
Inequality (1.11) tells us that, with probability one, \(L_n(f;x)\) is in the confidence band \(f(x)\pm (5/2)\,\omega _2^{\varphi }(f;(n+1)^{-1/2})\), \(x\in [0,1]\), whereas, by (1.12), the random operator \(L_n(f,\mathbb {V}_n;x)\) is within the confidence band \(f(x)\pm (5/2)\,\omega _2^{\varphi }(f;\epsilon _n)\), \(x\in [0,1]\), with asymptotically high probability. The confidence band in this second case is wider, since \(\epsilon _n>(n+1)^{-1/2}\), as follows from (1.13).
In view of (1.1) and (1.11), it seems that \(L_n(f;x)\) behaves like the classical Bernstein polynomials. However, it will be shown in Corollary 5.1 in Sect. 5 that
showing in this way that the local behavior of \(L_n(f;x)\) is much better than that of \(B_n(f;x)\). In addition, when acting on smooth functions, \(L_n(f;x)\) produces better rates of convergence than \(B_n(f;x)\) (see Corollary 5.3 in Sect. 5). With respect to the random operators defined in (1.5) and (1.7), the rate of uniform convergence in probability of \(L_n(f,\mathbb {V}_n;x)\) is also faster than that of \(B_n(f,\mathbb {U}_n;x)\), as seen in Corollary 5.2 and Theorem 5.5.
We finally mention that the notion of linear interpolation in definition (1.7) could be replaced by polynomial interpolation or by a general random linear operator. In our opinion, the approximation properties of the resulting operators would be an interesting topic of research.
This paper is organized as follows. In the next section, we prove quantitative convergence results for the new sequences of operators defined, respectively, in (1.7) and (1.8). In Sect. 3, we show that the given rates of uniform convergence are improved whenever the target functions belong to \(C^{m+1}[0,1]\), \(m\in \mathbb {N}_0\). In Sects. 4 and 5, special attention is paid to the particular case when the random vector of nodes defined in (1.6) coincides with \(\mathbb {U}_n\). This will allow us to compare the behavior of the operators considered in this paper with the deterministic and random Bernstein polynomials defined in (1.4) and (1.5), respectively.
2 Pointwise and Uniform Quantitative Results
In view of (1.9), we assume here, and onwards, that \(x\in (0,1)\). A basic ingredient in our approach is the following quantitative approximation result shown by Păltănea [12, Theorem 2.5.1]. Given a positive linear operator L, acting on functions \(f\in C[0,1]\), such that \(L(e_i;x)=e_i(x)\), \(i=0,1\), one has
Recalling (1.7), we consider the random variables
and the maximal spacing
$$W_n:=\max _{0\le k\le n}\left( V_{k+1}-V_k\right) . \qquad (2.3)$$
With these notations, we state the following auxiliary result.
Lemma 2.1
We have
and
Proof
The equality in (2.4) readily follows from definition (1.7), whereas the inequality follows from the property
On the other hand, (2.5) follows from the equality in (2.4) and the fact that
\(\square \)
We see from (2.3) and (2.5) that the random variable \(Y_n(x)\) takes values in [0, 1]. In the following result, we give pointwise and uniform quantitative estimates for the deterministic operator \(L_n\) defined in (1.8).
Theorem 2.2
Assume that
Then
and
Proof
Let \(0<h\le 1/2\). By (2.2), we have
We, therefore, obtain from (1.10) and (2.1)
By assumption (2.6), we can choose \(h=\sqrt{\mathbb {E}Y_n(x)}\) in the preceding inequality to show (2.7). Inequality (2.8) directly follows from (2.6) and (2.7). \(\square \)
Inequality (2.5) implies that assumption (2.6) is fulfilled, at least asymptotically, provided that \(W_n\) converges in probability to 0. In the case of the aforementioned uniform order statistics, we will show in Sect. 4 that \(\mathbb {E}Y_n(x)\sim 1/n^2\), as \(n\rightarrow \infty \), for a fixed \(x\in (0,1)\), whereas \(\delta _n\sim 1/n\), as \(n\rightarrow \infty \). This means, in contrast to the classical Bernstein polynomials, that estimate (2.7) may be significantly better than (2.8).
With respect to the random linear operator defined in (1.7), we give the following result.
Theorem 2.3
Let \(0<h\le 1/2\). Then
and
In addition,
whenever \(\mathbb {E}W_n\le 1/4\).
Proof
By (1.10), (2.1), and (2.2), we have
This implies that
as follows from (2.4). This shows (2.9).
The proof of (2.10) follows the same pattern starting from the inequality
which, in turn, follows from (2.5) and (2.12). Finally, taking expectations in (2.13) and choosing \(h=\sqrt{\mathbb {E}W_n}\), we obtain inequality (2.11). \(\square \)
Theorem 2.3 is only meaningful if \(W_n\) converges in probability to 0, as \(n \rightarrow \infty \). By dominated convergence, this implies that \(\mathbb {E}W_n\) converges to 0, as well. This theorem not only implies that
but also that the random operator \(L_n(f,\mathbb {V}_n;x)\) is within the confidence band \(f(x)\pm (5/2)\omega _2^{\varphi }(f;h)\) with high probability, as \(n \rightarrow \infty \). More specific statements will be given in Sect. 5 for the uniform order statistics.
3 Approximation for Smooth Functions
For smooth functions, the rates of convergence given in Theorems 2.2 and 2.3 can be considerably improved. In this respect, recall that the usual first modulus of smoothness of f is defined as
$$\omega _1(f;\delta )=\sup \left\{ |f(x)-f(y)|:x,y\in [0,1],\ |x-y|\le \delta \right\} ,\quad \delta \ge 0.$$
The following subadditivity property is well known:
$$\omega _1(f;\lambda \delta )\le (1+\lambda )\,\omega _1(f;\delta ),\quad \lambda ,\delta >0. \qquad (3.1)$$
We will need the following auxiliary result.
Lemma 3.1
Let \(m\in \mathbb {N}_0\), \(\delta >0\), and \(0\le y\le x <z\le 1\). If \(f\in C^{m+1}[0,1]\), then
where
Proof
For any \(r\in \mathbb {N}\), let \(\beta _r\) be a random variable having the beta density
Set \(\beta _0=1\). Taylor's formula of order \(m+1\) for f with remainder in integral form can be written as
On the other hand, let U be a random variable having the uniform distribution on [0, 1] and independent of the sequence \((\beta _r)_{r\ge 0}\). Denote
By (3.5) and (3.6), we can write
Since \(\mathbb {E}U^r=1/(r+1)\), \(r\in \mathbb {N}_0\), we see from (3.6) that
This, together with (3.5) and (3.7), shows (3.2), provided that we denote by \(R_m\) the sum of the last terms in (3.5) and (3.7).
As follows from (3.4), \(\mathbb {E}\beta _r=1/(r+1)\), \(r\in \mathbb {N}_0\). This fact and (3.1) imply that the absolute value of the last term in (3.5) is bounded above by
Definition (3.6) implies that \(|T-x|\le z-y\). Thus, proceeding as in (3.8), the absolute value of the last term in (3.7) is bounded above by
This, in conjunction with (3.8), shows estimate (3.3) and completes the proof. \(\square \)
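The proof uses \(\mathbb {E}\beta _r=1/(r+1)\). Assuming, consistently with this mean, that \(\beta _r\) follows the Beta(1, r) law with density \(r(1-\theta )^{r-1}\) (our reading of the elided density (3.4)), the identity can be checked by Monte Carlo, since \(1-U^{1/r}\) has that law when U is uniform on [0, 1]:

```python
import random

rng = random.Random(4)
r, trials = 3, 200_000
# If beta_r ~ Beta(1, r), then 1 - U**(1/r) has the same distribution:
# P(1 - U**(1/r) <= x) = 1 - (1 - x)**r.
mean = sum(1 - rng.random() ** (1 / r) for _ in range(trials)) / trials
# mean should be close to 1/(r + 1) = 0.25
```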
Remark 3.2
The term corresponding to \(j=0\) in (3.2) is null. Hence, the main terms in (3.2) for \(m=0\) and \(m=1\) are, respectively,
For any \(r,s\in \mathbb {N}_0\), consider the [0, 1]-valued random variable
together with the moments
Let \(f\in C^{m+1}[0,1]\) and \(\delta >0\). From (1.7), Lemma 3.1, and Remark 3.2, we can write
where the main term \(M_n(x)\) is given by
and the remainder term \(R_n(x)\) satisfies the inequality
In view of (3.9)–(3.13), the following result for the deterministic operator \(L_n(f;x)\) acting on smooth functions f is immediate.
Theorem 3.3
Let \(m\in \mathbb {N}_0\) and \(\delta >0\). If \(f\in C^{m+1}[0,1]\), then
We are also in a position to state and prove the corresponding result for the random operator \(L_n(f,\mathbb {V}_n;x)\).
Theorem 3.4
Let \(m\in \mathbb {N}_0\) and \(\delta >0\). If \(f\in C^{m+1}[0,1]\), then
where \(W_n\) is defined in (2.3).
Proof
From (2.3), (3.9), (3.11), and (3.13), we have
Note that, for any two random variables X and Y, we have
We, therefore, have from (3.14)
thus showing the result. \(\square \)
4 The Case of Uniform Order Statistics: Moment Computations
Let \(\mathbb {U}_n=(U_{n:k})_{0\le k\le n+1}\) be the order statistics defined in (1.2). For the sake of simplicity, denote
$$V_k:=U_{n:k},\quad k=0,1,\ldots ,n+1. \qquad (4.1)$$
We recall the following well-known facts (see, for instance, Arnold et al. [4, Chap. 2]). For any \(k=1,\ldots ,n\), \(V_k\) has probability density
$$g_k(\theta )=k\left( {\begin{array}{c}n\\ k\end{array}}\right) \theta ^{k-1}(1-\theta )^{n-k},\quad 0\le \theta \le 1, \qquad (4.2)$$
whereas for \(k=1,\ldots ,n-1\), the random vector \((V_k,V_{k+1})\) has probability density
$$g_k(\theta ,u)=\frac{n!}{(k-1)!\,(n-k-1)!}\,\theta ^{k-1}(1-u)^{n-k-1},\quad 0\le \theta \le u\le 1. \qquad (4.3)$$
Finally, the random variables \((V_{k+1}-V_k)_{0\le k\le n}\), called spacings, are identically distributed with common beta density
$$h(\theta )=n(1-\theta )^{n-1},\quad 0\le \theta \le 1. \qquad (4.4)$$
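These distributional facts lend themselves to a quick numerical check. The following sketch (our code) verifies that a spacing of the uniform order statistics has mean \(1/(n+1)\), as the common beta density implies.

```python
import random

def spacings(n, rng):
    """Spacings V_{k+1} - V_k of n i.i.d. uniforms on [0, 1],
    with the conventions V_0 = 0 and V_{n+1} = 1."""
    v = [0.0] + sorted(rng.random() for _ in range(n)) + [1.0]
    return [v[k + 1] - v[k] for k in range(n + 1)]

rng = random.Random(2)
n, trials = 20, 20_000
# All spacings share the same law; the mean should be 1/(n + 1) = 1/21.
mean_first = sum(spacings(n, rng)[0] for _ in range(trials)) / trials
```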
In the case at hand, the analytic form of the operator \(L_n\) defined in (1.8) is given in the following result.
Proposition 4.1
Let \(V_k\) be as in (4.1). Then
where
Proof
Note that (4.3) implies that
Hence, the result follows from (1.8) and (4.2). \(\square \)
To apply the results in Sect. 2 to the case of uniform order statistics, we need to compute the moments \(\mathbb {E}Y_n(x)\) and \(\mu _n^{(r,s)}(x)\) (see (2.4) and (3.10), respectively). This is done in the following two auxiliary results. Recall that the random variable \(S_n(x)\) defined in (1.3) has the binomial law with parameters n and x, i.e.,
$$P(S_n(x)=k)=\left( {\begin{array}{c}n\\ k\end{array}}\right) x^k(1-x)^{n-k},\quad k=0,1,\ldots ,n. \qquad (4.5)$$
Lemma 4.2
Let \(Y_n(x)\) be as in (2.4). Then
As a consequence,
Proof
Let \(k=1,\ldots , n-1\). By (4.3), we have
Making the changes of variables \(\theta =xy\) and \(u=x+(1-x)v\), we see that (4.8) equals
Replacing in (4.8) the probability density \(g_k(\theta ,u)\) by \(g_1(\theta )\) and \(g_n(\theta )\), as defined in (4.2), it can be checked that formula (4.9) also holds for \(k=0\) and \(k=n\). Hence, the first equality in (4.6) follows from (2.4), whereas the second is an easy consequence of (4.5).
The inequality in (4.6) follows from the fact that \((j+1)(n-j+1)\ge n+1\), \(j=0,1,\ldots , n\). Finally, formula (4.7) readily follows from the second equality in (4.6) and the fact that \(S_n(0)=0\) and \(S_n(1)=n\). \(\square \)
Lemma 4.3
Let \(\mu _n^{(r,s)}(x)\) be as in (3.10), \(r,s\in \mathbb {N}_0\). Then
As a consequence,
Proof
Proceeding as in the proof of Lemma 4.2, it can be checked that
Thus, formula (4.10) follows from (3.9). Inequality (4.11) follows from (4.10) and the combinatorial identity
\(\square \)
To conclude this section, we compute the expectation \(\mathbb {E}W_n\) of the random variable \(W_n\) defined in (2.3) in terms of the harmonic numbers
$$H_n=\sum _{k=1}^{n}\frac{1}{k},\quad n\in \mathbb {N}.$$
Recall that if X is a nonnegative random variable, then
$$\mathbb {E}X=\int _0^{\infty }P(X>\theta )\,\mathrm {d}\theta . \qquad (4.12)$$
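The tail-probability formula just recalled, \(\mathbb {E}X=\int _0^{\infty }P(X>\theta )\,\mathrm {d}\theta \), admits a quick numerical sanity check: for X uniform on [0, 1] one has \(P(X>\theta )=1-\theta \), and the integral equals \(\mathbb {E}X=1/2\).

```python
# Midpoint-rule evaluation of the tail integral for X uniform on [0, 1],
# where P(X > theta) = 1 - theta on [0, 1] and vanishes beyond.
steps = 100_000
integral = sum((1 - (i + 0.5) / steps) / steps for i in range(steps))
# integral should equal E X = 1/2 (the midpoint rule is exact here,
# since the integrand is linear)
```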
Lemma 4.4
Let \(W_n\) be as in (2.3). Then
$$\mathbb {E}W_n=\frac{H_{n+1}}{n+1}.$$
Proof
It is shown in David [6, p. 81] that
$$P(W_n>\theta )=\sum _{j=1}^{n+1}(-1)^{j+1}\left( {\begin{array}{c}n+1\\ j\end{array}}\right) (1-j\theta )_{+}^{n},\quad \theta \ge 0, \qquad (4.13)$$
where \(x_+=\max (0,x)\). By (4.12), this implies that
$$\mathbb {E}W_n=\frac{1}{n+1}\sum _{j=1}^{n+1}(-1)^{j+1}\left( {\begin{array}{c}n+1\\ j\end{array}}\right) \frac{1}{j}. \qquad (4.14)$$
Let U and V be two independent random variables having the uniform distribution on [0, 1]. Since \(\mathbb {E}(UV)^j=1/(j+1)^2, j\in \mathbb {N}_0\), we have from (4.14)
By Fubini's theorem, the right-hand side in (4.15) equals
This, together with (4.15), completes the proof. \(\square \)
5 The Case of Uniform Order Statistics: Approximation Results
We keep here the same notations as in Sect. 4. In particular, the random nodes defining the operators \(L_n(f;x)\) and \(L_n(f,\mathbb {V}_n;x)\) are the order statistics defined in (4.1). In the first place, we consider the deterministic operator \(L_n(f;x)\).
Corollary 5.1
Let \(n\ge 3\). Then
and
Proof
By Lemma 4.2, assumption (2.6) in Theorem 2.2 is fulfilled. Therefore, the first inequality follows from (2.7) and the first equality in (4.6), whereas the second follows from (2.8) and the upper bound in (4.6). The proof is complete. \(\square \)
Note that for a fixed \(x\in (0,1)\), the argument of the Ditzian–Totik modulus of smoothness of f has the order of \(1/(n+1)\). In contrast, the rate of uniform convergence in Corollary 5.1 is the same as that for the classical Bernstein polynomials, as seen in (1.1).
Corollary 5.2
Let \(\tau _n>1\). If
then
If
then
In addition,
Proof
Inequality (5.3) follows by choosing \(h=\tau _n\log (n+1)/(n\varphi (x))\) in (2.9). Similarly,
Therefore, inequality (5.4) follows by choosing \(h=\sqrt{\tau _n\log (n+1)/n}\) in (2.10). Finally, (5.5) directly follows from (2.11) and Lemma 4.4. \(\square \)
Concerning Corollary 5.2, some remarks are in order. In its proof, we have used (4.4) instead of the exact formula for \(P(W_n>\theta )\) given in (4.13). This approach simplifies the upper bounds without losing accuracy.
On the other hand, set \(\epsilon _n=\left( \tau _n n^{-1}\log (n+1)\right) ^{1/2}\). Inequality (5.4) tells us that the random operator \(L_n(f,\mathbb {V}_n;x)\) is within the confidence band \(f(x)\pm (5/2)\omega _2^\varphi (f;\epsilon _n)\), \(x\in (0,1)\), with (asymptotically) high probability. We are free to choose \(\tau _n\) with the restriction that \(\epsilon _n\rightarrow 0\), as \(n\rightarrow \infty \). In making this choice, one has to balance the length of the confidence band and the speed of convergence of \((n+1)^{1-\tau _n}\) towards 0.
Inequality (5.3) gives us a confidence interval for \(L_n(f,\mathbb {V}_n;x)\), for any fixed \(x\in (0,1)\). In this case, its length is much shorter than that of the aforementioned confidence band.
Comparing (1.7) with Proposition 4.1, we see that the stochastic operator \(L_n(f,\mathbb {V}_n;x)\) is constructed in a much simpler way than the deterministic operator \(L_n(f;x)\). The price to pay is that the rates of convergence to the target function f are slightly worse in the case of \(L_n(f,\mathbb {V}_n;x)\), as shown in Corollaries 5.1 and 5.2.
Finally, the harmonic number \(H_n\) has the same order of magnitude as that of \(\log n\), as \(n\rightarrow \infty \). As follows from (5.5), this implies that \(L_n(f,\mathbb {V}_n;x)\) converges uniformly in \(L^1\) to f(x) at the rate \(\omega _2^\varphi (f;(\log n/n)^{1/2})\).
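The growth claim \(H_n\sim \log n\) is easy to see numerically: the difference \(H_n-\log n\) converges to the Euler–Mascheroni constant \(\gamma \approx 0.5772\).

```python
import math

n = 10 ** 6
h_n = sum(1 / k for k in range(1, n + 1))   # harmonic number H_n
gap = h_n - math.log(n)                     # tends to Euler-Mascheroni gamma
```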
The following two results are concerned with smooth functions.
Corollary 5.3
Let \(m\in \mathbb {N}_0\). If \(f\in C^{m+1}[0,1]\), then
In particular, we have the Voronovskaja’s formula
Proof
The first statement follows by choosing \(\delta =1/(n+m+2)\) in Theorem 3.3 and using estimate (4.11) to bound the moments \(\mu _n^{(m,1)}\) and \(\mu _n^{(m+1,1)}\) from above.
The second statement follows from the first one by choosing \(m=1\). Note that, by (2.2) and Remark 3.2, the main term is given by
where the last equality follows from (4.6). The proof is complete. \(\square \)
It is interesting to note that Corollary 5.3 gives us a quantitative generalized Voronovskaja’s formula with an explicit upper bound for the remainder of the order of
In the analogous result for the classical Bernstein polynomials (see, for instance, Gonska and Păltănea [10], Tachev [14], Gavrea and Ivan [9], and Adell and Cárdenas-Morales [1]), the remainder term has the order of
Corollary 5.4
Let \(m\in \mathbb {N}_0\) and \(\tau _n>1\). If \(f\in C^{m+1}[0,1]\), then
where \(M_n(x)\) is defined in (3.12).
In particular, we have the Voronovskaja’s formula
Proof
Let \(W_n\) be as in (2.3). As in the proof of Corollary 5.2, we have
Hence, (5.6) follows from Theorem 3.4 by choosing \(\delta =\tau _n\log (n+1)/n\). Estimate (5.7) follows from (5.6) by setting \(m=1\). Note that, by (2.2) and Remark 3.2, the main term for \(m=1\) is
This completes the proof. \(\square \)
We conclude this section by considering the approximation properties of the random operator defined in (1.5). To this end, we will use the Dvoretzky–Kiefer–Wolfowitz inequality (cf. [8]) in the final form shown by Massart [11], that is,
$$P\left( \sup _{0\le x\le 1}\left| \frac{S_n(x)}{n}-x\right| >\theta \right) \le 2e^{-2n\theta ^2},\quad \theta >0, \qquad (5.8)$$
where \(S_n(x)/n\) is the empirical distribution function of the sample \((U_k)_{1\le k\le n}\).
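Massart's form of the Dvoretzky–Kiefer–Wolfowitz inequality bounds \(P(\sup _{x}|F_n(x)-x|>\theta )\) by \(2e^{-2n\theta ^2}\) for uniform samples. A Monte Carlo sketch (our code) checking that the empirical exceedance frequency is controlled by this bound, up to sampling noise:

```python
import math
import random

def ks_statistic(sample):
    """sup_x |F_n(x) - x| for a sample from the uniform distribution,
    computed from the sorted sample (the usual one-sample KS statistic)."""
    s = sorted(sample)
    n = len(s)
    return max(max((k + 1) / n - u, u - k / n) for k, u in enumerate(s))

rng = random.Random(3)
n, theta, trials = 200, 0.12, 2000
bound = 2 * math.exp(-2 * n * theta ** 2)   # DKW-Massart bound
freq = sum(ks_statistic([rng.random() for _ in range(n)]) > theta
           for _ in range(trials)) / trials
# freq estimates the exceedance probability, which the bound dominates
```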
Theorem 5.5
Let \(B_n(f,\mathbb {U}_n;x)\) be as in (1.5) and let \(r_n>0\). Then,
and
Proof
Using the subadditivity property in (3.1), we get
thus implying that
where the last inequality follows from (5.8). Thus, (5.9) follows by choosing \(h^2=r_n/n\) in (5.11).
From (4.12) and (5.8), we see that
Therefore, (5.10) follows by taking expectations in (5.11) and setting \(h=1/\sqrt{n}\). \(\square \)
Clearly, estimate (5.9) is only meaningful if \(r_n\rightarrow \infty \) and \(r_n/n\rightarrow 0\), as \(n\rightarrow \infty \). On the other hand, the comments following Corollary 5.2 can be applied, with the obvious modifications, to Theorem 5.5.
Finally, it is interesting to note that the rates of convergence in Corollary 5.2 are much faster than those in Theorem 5.5. Despite the sharpness of inequality (5.8) used in the proof of Theorem 5.5, we cannot expect to replace \(\omega _1(f;\cdot )\) by \(\omega _2^{\varphi }(f;\cdot )\) in (5.9). In contrast to \(L_n(f,\mathbb {V}_n;x)\) as defined in (1.7), the random operator \(B_n(f,\mathbb {U}_n;x)\) does not satisfy (1.10), since
whereas \(\omega _2^{\varphi }(e_1;\delta )=0\), \(\delta \ge 0\). In other words, the event
has probability 1 for any positive constants \(C_n\) and \(\delta _n\).
References
1. Adell, J.A., Cárdenas-Morales, D.: Quantitative generalized Voronovskaja's formulae for Bernstein polynomials. J. Approx. Theory 231, 41–52 (2018)
2. Adell, J.A., Cárdenas-Morales, D.: Stochastic Bernstein polynomials: uniform convergence in probability with rates. Adv. Comput. Math. 46, 16 (2020)
3. Adell, J.A., de la Cal, J.: Bernstein-type operators diminish the \(\Phi \)-variation. Constr. Approx. 12, 489–507 (1996)
4. Arnold, B.C., Balakrishnan, N., Nagaraja, H.N.: A First Course in Order Statistics. SIAM, Philadelphia (2008)
5. Bustamante, J.: Estimates of positive linear operators in terms of second-order moduli. J. Math. Anal. Appl. 345, 203–212 (2008)
6. David, H.A.: Order Statistics. Wiley, New York (1970)
7. Ditzian, Z., Ivanov, K.G.: Strong converse inequalities. J. Anal. Math. 61, 61–111 (1993)
8. Dvoretzky, A., Kiefer, J., Wolfowitz, J.: Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. Ann. Math. Stat. 27(3), 642–669 (1956)
9. Gavrea, I., Ivan, M.: The Bernstein Voronovskaja-type theorem for positive linear approximation operators. J. Approx. Theory 192, 291–296 (2015)
10. Gonska, H., Păltănea, R.: General Voronovskaya and asymptotic theorems in simultaneous approximation. Mediterr. J. Math. 7, 37–49 (2010)
11. Massart, P.: The tight constant in the Dvoretzky–Kiefer–Wolfowitz inequality. Ann. Probab. 18(3), 1269–1283 (1990)
12. Păltănea, R.: Approximation Theory Using Positive Linear Operators. Birkhäuser, Boston (2004)
13. Sun, X., Wu, Z.: Chebyshev type inequality for stochastic Bernstein polynomials. Proc. Am. Math. Soc. 147(2), 671–679 (2019)
14. Tachev, G.T.: The complete asymptotic expansion for Bernstein operators. J. Math. Anal. Appl. 385, 1179–1183 (2012)
15. Totik, V.: Strong converse inequalities. J. Approx. Theory 76(3), 369–375 (1994)
16. Wu, Z., Sun, X., Ma, L.: Sampling scattered data with Bernstein polynomials: stochastic and deterministic error estimates. Adv. Comput. Math. 38(1), 187–205 (2013)
17. Wu, Z., Zhou, X.: Polynomial convergence order of stochastic Bernstein approximation. Adv. Comput. Math. 46, 8 (2020)
Acknowledgements
The authors would like to thank an anonymous referee for her/his careful reading of the manuscript and for her/his constructive criticism, which greatly improved the final outcome.
Funding
Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
Additional information
Dedicated to Professor Francesco Altomare on the occasion of his 70th birthday.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This work is partially supported by Ministerio de Ciencia, Innovación y Universidades, Research Project (PGC2018-097621-B-I00). José A. Adell is also supported by Research Project DGA (E-64). Daniel Cárdenas-Morales is also supported by Junta de Andalucía, Research Group, (FQM-0178).
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Adell, J.A., Cárdenas-Morales, D. Random Linear Operators Arising from Piecewise Linear Interpolation on the Unit Interval. Mediterr. J. Math. 19, 223 (2022). https://doi.org/10.1007/s00009-022-02147-7
Keywords
- Random linear operator
- uniform convergence in probability
- Ditzian–Totik modulus of smoothness
- uniform order statistics
- rate of convergence
- confidence band