1 Introduction

The classical Bernstein operators are the prototypical positive linear operators of Approximation Theory. They approximate continuous functions, preserve the functions \(e_0(x)=1\) and \(e_1(x)=x\), and have remarkable shape-preserving properties. The classical Kantorovich operators can be used to approximate integrable functions and preserve \(e_0\), but not \(e_1\). Their properties have been intensively investigated from several points of view (see, e.g., [2, 3, 7, 8, 14,15,16] and the references therein). Some papers (see, e.g., [6, 28]) are devoted to modifying the Kantorovich operators so that they preserve \(e_0\) and \(e_1\). Besides their importance in classical Approximation Theory, the Kantorovich polynomials play an important role in the theory of generalized sampling operators. The classical sampling operators are of discrete type and use point evaluations of the approximated function f. The Kantorovich-type generalized operators use mean values of f on suitable intervals and consequently perform better than the classical ones from several points of view (see, e.g., [11,12,13] and the references therein).

In this paper we introduce a family of Bernstein–Kantorovich type operators \({{\mathcal {K}}}_n\) preserving the affine functions. Their structure is inspired by the construction of the Kantorovich and Bernstein operators. The family depends on several parameters. In a limiting case the operators \({{\mathcal {K}}}_n\) reduce to Bernstein operators. For special parameters one obtains the operators investigated in [28]. As mentioned above, the classical Kantorovich operators were the starting point for new developments in the theory of generalized sampling operators, leading to important results in digital image processing with medical and industrial applications. Our modified operators \({{\mathcal {K}}}_n\) could also be used in this context.

The operators \({{\mathcal {K}}}_n\) are defined as follows. For a given integer \(n\ge 2\) let \(a_{n,1},\dots , a_{n,n-1}\) be real numbers such that

$$\begin{aligned} 0<a_{n,k}\le \dfrac{1}{n},\,\, k=1,\dots ,n-1. \end{aligned}$$
(1.1)

Define the functionals \(F_{n,k}: C[0,1]\rightarrow {\mathbb R}\) as

$$\begin{aligned} F_{n,0}(f):=f(0),\,\, F_{n,n}(f):=f(1), \end{aligned}$$
$$\begin{aligned} F_{n,k}(f):=\dfrac{1}{2a_{n,k}}\int _{\frac{k}{n}-a_{n,k}}^{\frac{k}{n}+a_{n,k}}f(t)dt,\,\, k=1,\dots , n-1. \end{aligned}$$
(1.2)

Let \(p_{n,k}(x):={n\atopwithdelims ()k}x^k(1-x)^{n-k},\,\, k=0,1,\dots ,n,\,\, x\in [0,1]\).

Consider the operators \({{\mathcal {K}}}_n\) defined by

$$\begin{aligned} {{\mathcal {K}}}_{n}f(x):=\displaystyle \sum _{k=0}^n F_{n,k}(f)p_{n,k}(x),\,\, f\in C[0,1],\,\, x\in [0,1]. \end{aligned}$$
(1.3)

They are positive linear operators on C[0, 1] and

$$\begin{aligned} {{\mathcal {K}}}_ne_0=e_0,\,\, {{\mathcal {K}}}_ne_1=e_1, \end{aligned}$$
(1.4)

where \(e_j(x):=x^j,\,\, j=0,1,\dots ,\,\, x\in [0,1].\)
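For illustration, here is a minimal numerical sketch of the construction (Python; the helper name `K_n` and the use of `scipy.integrate.quad` are our choices, not part of the paper). It evaluates \({{\mathcal {K}}}_nf(x)\) from (1.2)–(1.3) and checks (1.4) at a few points.

```python
import numpy as np
from math import comb
from scipy.integrate import quad

def K_n(f, n, a, x):
    """Evaluate K_n f at x, where a[k-1] = a_{n,k} for k = 1, ..., n-1.

    F_{n,0}, F_{n,n} are point evaluations; the interior functionals are
    mean values of f over [k/n - a_{n,k}, k/n + a_{n,k}], cf. (1.2)."""
    total = f(0.0) * (1 - x) ** n + f(1.0) * x ** n
    for k in range(1, n):
        lo, hi = k / n - a[k - 1], k / n + a[k - 1]
        mean = quad(f, lo, hi)[0] / (hi - lo)            # F_{n,k}(f)
        total += mean * comb(n, k) * x ** k * (1 - x) ** (n - k)
    return total

# reproduction of e_0 and e_1, cf. (1.4)
n = 8
a = np.full(n - 1, 1 / (2 * n))           # any choice with 0 < a_{n,k} <= 1/n
for x in (0.1, 0.35, 0.8):
    assert abs(K_n(lambda t: 1.0, n, a, x) - 1.0) < 1e-12
    assert abs(K_n(lambda t: t, n, a, x) - x) < 1e-12
```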

Section 2 contains the Voronovskaja type formula for the sequence \({{\mathcal {K}}}_n\), which coincides with the corresponding formula for the sequence of Bernstein operators \(B_n\). Proposition 2.1 presents an example of a function for which the approximation in the sense of Voronovskaja’s formula, provided by \({{\mathcal {K}}}_n\), is better than that provided by the classical Kantorovich operators. The central moments of \({{\mathcal {K}}}_n\) are described in Proposition 2.2 and used to present the Voronovskaja formula of order two for the sequence \({{\mathcal {K}}}_n\). The operators \({{\mathcal {K}}}_n\) have useful shape-preserving properties. Theorem 3.1 shows that they preserve monotonicity. The images of a convex function f under \(B_n\), \({{\mathcal {K}}}_n\) and the genuine Bernstein–Durrmeyer operators \(U_n\) are compared in Sect. 4. The proof of the main result, Theorem 4.1, is based on Ohlin’s Lemma, a result from the theory of convex stochastic ordering (see, e.g., [22, 23]). In fact, all the inequalities presented in this paper involving convex functions have natural interpretations in the framework of convex stochastic ordering. Theorem 5.1 deals with the preservation of convexity. Theorems 5.2 and 5.3 describe the behavior of the operators \({{\mathcal {K}}}_n\) with respect to the parameters in their structure. Monotonic convergence under convexity is an important property of the Bernstein operators; Theorem 6.1 shows that the operators \({{\mathcal {K}}}_n\) also have this property. Strongly convex functions and approximately concave functions have been investigated in several papers. The preservation of the corresponding properties under the Bernstein operators was studied in [18]; their preservation under \({{\mathcal {K}}}_n\) is presented in Sect. 7. Section 8 is devoted to conclusions and further work.

2 Voronovskaja type results

Let \(B_n\), \(n\ge 1\), be the classical Bernstein operators,

$$\begin{aligned} B_nf(x):=\displaystyle \sum _{k=0}^n f\left( \dfrac{k}{n}\right) p_{n,k}(x),\,\, f\in C[0,1],\,\, x\in [0,1]. \end{aligned}$$

According to Voronovskaja’s formula,

$$\begin{aligned} \displaystyle \lim _{n\rightarrow \infty } n(B_nf(x)-f(x))=\dfrac{x(1-x)}{2}f^{\prime \prime }(x),\,\, f\in C^2[0,1], \end{aligned}$$
(2.1)

uniformly on [0, 1].

Theorem 2.1

For each \(f\in C^2[0,1]\) we have

$$\begin{aligned} \displaystyle \lim _{n\rightarrow \infty }n\left( {{\mathcal {K}}}_nf(x)-f(x)\right) =\dfrac{x(1-x)}{2}f^{\prime \prime }(x), \end{aligned}$$
(2.2)

uniformly on [0, 1].

Proof

Let us remark that for \(k=1,\dots , n-1\),

$$\begin{aligned} F_{n,k}(e_2)=\left( \dfrac{k}{n}\right) ^2+\dfrac{1}{3}a_{n,k}^2. \end{aligned}$$

On the other hand, for \(f\in C^2[0,1]\) we have (see, e.g., [4, Lemma 4.1])

$$\begin{aligned} F_{n,k}(f)-f\left( \dfrac{k}{n}\right) =\left( F_{n,k}(e_2)-\left( \dfrac{k}{n}\right) ^2\right) \dfrac{f^{\prime \prime }(\xi _{n,k})}{2} \end{aligned}$$

with a certain \(\xi _{n,k}\in [0,1]\). Therefore,

$$\begin{aligned} F_{n,k}(f)-f\left( \dfrac{k}{n}\right) =\dfrac{1}{6}a_{n,k}^2 f^{\prime \prime }(\xi _{n,k}),\,\, k=1,\dots , n-1. \end{aligned}$$
(2.3)

Using (2.3) we can write

$$\begin{aligned} n({{\mathcal {K}}}_nf(x)-f(x))&=n\left( {{\mathcal {K}}}_n f(x)-B_n f(x)\right) +n\left( B_nf(x)-f(x)\right) \\&=\dfrac{1}{6}n\displaystyle \sum _{k=1}^{n-1}p_{n,k}(x)a_{n,k}^2f^{\prime \prime }(\xi _{n,k})+n\left( B_nf(x)-f(x)\right) . \end{aligned}$$

Due to (1.1) we have

$$\begin{aligned} \left| \dfrac{1}{6}n\displaystyle \sum _{k=1}^{n-1}p_{n,k}(x) a_{n,k}^2 f^{\prime \prime }(\xi _{n,k}) \right| \le \dfrac{1}{6n} \Vert f^{\prime \prime }\Vert _{\infty }\left( 1-(1-x)^n-x^n\right) \rightarrow 0 \end{aligned}$$

uniformly on [0, 1]. Combined with (2.1), this leads to (2.2) and the proof is finished. \(\square \)
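As a numerical illustration (a hypothetical check, not part of the proof; it reuses the `K_n` helper sketched in Sect. 1), the quantity \(n({{\mathcal {K}}}_nf(x)-f(x))\) can be compared with the right-hand side of (2.2) for \(f=\cos \):

```python
import numpy as np

f = np.cos                                 # f'' = -cos
x = 0.3
target = x * (1 - x) / 2 * (-np.cos(x))    # right-hand side of (2.2)
for n in (10, 50, 250):
    a = np.full(n - 1, 1 / (2 * n))
    print(n, n * (K_n(f, n, a, x) - f(x)), target)
# the middle column approaches `target` as n grows, as Theorem 2.1 predicts
```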

Let \(K_n\), \(n\ge 1\), be the classical Kantorovich operators,

$$\begin{aligned} K_nf(x):=(n+1)\displaystyle \sum _{k=0}^n p_{n,k}(x)\int _{\frac{k}{n+1}}^{\frac{k+1}{n+1}}f(t)dt,\,\, f\in C[0,1]. \end{aligned}$$

It is known (see [10]) that

$$\begin{aligned}&\displaystyle \lim _{n\rightarrow \infty } n(K_nf(x)-f(x))\\&\quad =\dfrac{x(1-x)}{2}f^{\prime \prime }(x)+\left( \dfrac{1}{2}-x\right) f^{\prime }(x)=:Vf(x),\,\, f\in C^2[0,1],\,\, x\in [0,1]. \end{aligned}$$

Set \(Wf(x):=\dfrac{x(1-x)}{2}f^{\prime \prime }(x)\) (see (2.2)).

Proposition 2.1

Let \(f\in C^2[0,1]\) be decreasing on [0, 1], concave on \(\left[ 0,\dfrac{1}{2}\right] \) and convex on \(\left[ \dfrac{1}{2},1\right] \).

Then,

$$\begin{aligned} |Vf(x)|\ge |Wf(x)|,\,\, x\in [0,1]. \end{aligned}$$
(2.4)

Proof

It is easy to verify that for \(x\in \left[ 0,\frac{1}{2}\right] \),

$$\begin{aligned} \left| Vf(x)\right| =|W f(x)|-\left( \dfrac{1}{2}-x\right) f^{\prime }(x)\ge |Wf(x)|, \end{aligned}$$

while for \(x\in \left[ \dfrac{1}{2},1\right] \),

$$\begin{aligned} \left| Vf(x)\right| =| Wf(x)|+\left( \dfrac{1}{2}-x\right) f^{\prime }(x)\ge |Wf(x)|. \end{aligned}$$

This concludes the proof. \(\square \)

The inequality (2.4) shows that, from the point of view of Voronovskaja’s formula, the approximation of f furnished by \({{\mathcal {K}}}_n\) is better than that provided by \(K_n\). For other results of this type see [5]. In fact, Figs. 1, 2, 3 and 4 suggest that for the functions \(f(x)=\cos (\pi x)\) and \(f(x)=\arctan (x-1/2)\), \(x\in [0,1]\),

$$\begin{aligned} |{{\mathcal {K}}}_nf(x)-f(x)|\le |K_nf(x)-f(x)|. \end{aligned}$$
Fig. 1: Graph of \(K_nf\), \({{\mathcal {K}}}_nf\) and f for \(n=10\) and \(a_{n,k}=1/2n\)

Fig. 2: Graph of \(|K_nf-f|\) and \(|{{\mathcal {K}}}_nf-f|\) for \(n=10\), \(a_{n,k}=1/2n\)

Fig. 3: Graph of \(K_nf\), \({{\mathcal {K}}}_nf\) and f for \(n=4\) and \(a_{n,k}=1/2n\)

Fig. 4: Graph of \(|K_nf-f|\) and \(|{{\mathcal {K}}}_nf-f|\) for \(n=4\), \(a_{n,k}=1/2n\)
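The comparison behind Figs. 2 and 4 can be reproduced with the following sketch (the function `kantorovich` implements the classical \(K_n\) directly from its definition above, and `K_n` is the helper from Sect. 1; both names are our additions):

```python
import numpy as np
from math import comb
from scipy.integrate import quad

def kantorovich(f, n, x):
    """Classical Kantorovich operator K_n f evaluated at x."""
    s = 0.0
    for k in range(n + 1):
        mean = (n + 1) * quad(f, k / (n + 1), (k + 1) / (n + 1))[0]
        s += mean * comb(n, k) * x ** k * (1 - x) ** (n - k)
    return s

f = lambda t: np.cos(np.pi * t)
for n in (4, 10):
    a = np.full(n - 1, 1 / (2 * n))
    for x in np.linspace(0.05, 0.95, 7):
        print(n, x, abs(K_n(f, n, a, x) - f(x)),
              abs(kantorovich(f, n, x) - f(x)))
# the first error column stays below the second, in accordance with the figures
```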

Remark 2.1

If \(a_{n,k}:=\dfrac{1}{n}\dfrac{k}{2k+1}\), \(k=1,\dots , n-1,\) the operators \({{\mathcal {K}}}_n\) reduce to the operators \({K_n^*}\) from [28]. A Voronovskaja type formula for the sequence \(({K_n^*})\) was proved in [28, p. 6194].

In the rest of this section we take \(a_{n,k}=\dfrac{\theta }{n}\), \(k=1,\dots , n-1\), for a given \(\theta \in (0,1]\) and we denote by \({{\mathcal {K}}}_n^{\theta }\) the corresponding operators \({{\mathcal {K}}}_n\). The next result presents the relation between central moments of \({{\mathcal {K}}}_n^{\theta }\) and those of \(B_n\).

Proposition 2.2

For \(k\ge 1\) one has

  (i)

    \({{\mathcal {K}}}_n^{\theta }\left( (t-x)^{2k};x\right) =B_n\left( (t-x)^{2k};x\right) +\displaystyle \sum _{j=1}^k\dfrac{1}{2j+1}{2k\atopwithdelims ()2j}\left( \dfrac{\theta }{n}\right) ^{2j}\left[ B_n\left( (t-x)^{2k-2j};x\right) -(1-x)^nx^{2k-2j}-x^n(1-x)^{2k-2j}\right] \);

  (ii)

    \({{\mathcal {K}}}_n^{\theta }\left( (t-x)^{2k-1};x\right) =B_n\left( (t-x)^{2k-1};x\right) +\displaystyle \sum _{j=1}^{k-1}\dfrac{1}{2j+1}{2k-1\atopwithdelims ()2j}\left( \dfrac{\theta }{n}\right) ^{2j}\left[ B_n\left( (t-x)^{2k-1-2j};x\right) +(1-x)^nx^{2k-1-2j}-x^n(1-x)^{2k-1-2j}\right] \).

Proof

The proof is based on straightforward calculation and we omit it. \(\square \)
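For instance, (i) with \(k=1\) gives \({{\mathcal {K}}}_n^{\theta }\left( (t-x)^{2};x\right) =\dfrac{x(1-x)}{n}+\dfrac{\theta ^2}{3n^2}\left[ 1-(1-x)^n-x^n\right] \). A spot check of this identity (hypothetical, reusing the `K_n` helper from Sect. 1):

```python
import numpy as np

n, theta, x = 12, 0.7, 0.4
a = np.full(n - 1, theta / n)
lhs = K_n(lambda t: (t - x) ** 2, n, a, x)
rhs = x * (1 - x) / n + theta ** 2 / (3 * n ** 2) * (1 - (1 - x) ** n - x ** n)
assert abs(lhs - rhs) < 1e-12              # Proposition 2.2 (i) with k = 1
```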

Remark 2.2

The central moments of Bernstein operators are investigated in detail in [9, Sect. 2.9]. In particular it is known that

$$\begin{aligned} B_n\left( (t-x)^s;x\right) ={{\mathcal {O}}}\left( n^{-\left[ \frac{s+1}{2}\right] }\right) , \end{aligned}$$

where [a] is the integer part of a.

Combined with Proposition 2.2 this leads to

$$\begin{aligned} {{\mathcal {K}}}_n^{\theta }\left( (t-x)^s;x\right) ={{\mathcal {O}}}\left( n^{-\left[ \frac{s+1}{2}\right] }\right) . \end{aligned}$$

The important conclusion is that the classical result of Sikkema [26] can be applied to the sequence \(({{\mathcal {K}}}_n^{\theta })\). So, in this case we have another proof of Theorem 2.1 and moreover, for \(f\in C^4[0,1]\), \(x\in [0,1]\),

$$\begin{aligned}&\displaystyle \lim _{n\rightarrow \infty } n\left[ n\left( {{\mathcal {K}}}_n^{\theta }f(x)-f(x)\right) -\dfrac{x(1-x)}{2}f^{(2)}(x)\right] \\&\quad =\left\{ \begin{array}{ll} 0,&{} x\in \{0,1\},\\ \dfrac{\theta ^2}{6}f^{(2)}(x)+\dfrac{1}{6}x(1-x)(1-2x)f^{(3)}(x)+\dfrac{1}{8}x^2(1-x)^2 f^{(4)}(x),&{} 0<x<1. \end{array}\right. \end{aligned}$$

It is well known that (see, e.g., [1])

$$\begin{aligned}&\displaystyle \lim _{n\rightarrow \infty } n\left[ n\left( B_nf(x)-f(x)\right) -\dfrac{x(1-x)}{2}f^{(2)}(x)\right] \\&\quad =\dfrac{1}{6}x(1-x)(1-2x)f^{(3)}(x)+\dfrac{1}{8}x^2(1-x)^2f^{(4)}(x),\,\,0\le x\le 1. \end{aligned}$$

We conclude that

$$\begin{aligned} \displaystyle \lim _{n\rightarrow \infty }n^2\left( {{\mathcal {K}}}_n^{\theta }f(x)-B_nf(x)\right) =\left\{ \begin{array}{ll} 0,&{}x\in \{0,1\},\\ \dfrac{\theta ^2}{6}f^{(2)}(x),&{} 0<x<1.\end{array}\right. \end{aligned}$$
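This limit can be checked numerically (a hypothetical illustration; `bernstein` implements \(B_n\) directly and `K_n` is the helper from Sect. 1):

```python
import numpy as np
from math import comb

def bernstein(f, n, x):
    """Classical Bernstein operator B_n f evaluated at x."""
    return sum(f(k / n) * comb(n, k) * x ** k * (1 - x) ** (n - k)
               for k in range(n + 1))

f = np.cos
theta, x = 0.8, 0.3
target = theta ** 2 * (-np.cos(x)) / 6     # theta^2 f''(x) / 6
for n in (20, 100, 500):
    a = np.full(n - 1, theta / n)
    print(n, n ** 2 * (K_n(f, n, a, x) - bernstein(f, n, x)), target)
# for 0 < x < 1 the middle column approaches `target`
```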

3 \({{\mathcal {K}}}_n\) and increasing functions

Consider the general case, i.e., \(0<a_{n,k}\le 1/n\).

Theorem 3.1

If \(f\in C[0,1]\) is increasing, then \({{\mathcal {K}}}_nf\) is increasing.

Proof

Using \(p_{n,k}^{\prime }(x)=n\left( p_{n-1,k-1}(x)-p_{n-1,k}(x)\right) \), \(n\ge 1\), we get

$$\begin{aligned} ({{\mathcal {K}}}_nf)^{\prime }(x)=n\displaystyle \sum _{k=0}^{n-1}\left( F_{n,k+1}(f)-F_{n,k}(f)\right) p_{n-1,k}(x). \end{aligned}$$

Clearly

$$\begin{aligned} F_{n,1}(f)-F_{n,0}(f)=\dfrac{1}{2a_{n,1}}\displaystyle \int _{\frac{1}{n}-a_{n,1}}^{\frac{1}{n}+a_{n,1}}f(t)dt-f(0)\ge 0 \end{aligned}$$

and

$$\begin{aligned} F_{n,n}(f)-F_{n,n-1}(f)\ge 0. \end{aligned}$$

So, let \(k\in \{1,\dots , n-1\}\). Then

$$\begin{aligned} F_{n,k+1}(f)-F_{n,k}(f)&=\dfrac{1}{2a_{n,k+1}}\displaystyle \int _{\frac{k+1}{n}-a_{n,k+1}}^{\frac{k+1}{n}+a_{n,k+1}} f(t) dt-\dfrac{1}{2a_{n,k}}\displaystyle \int _{\frac{k}{n}-a_{n,k}}^{\frac{k}{n}+a_{n,k}} f(t) dt\\&=\dfrac{1}{2a_{n,k}}\int _{\frac{k}{n}-a_{n,k}}^{\frac{k}{n}+a_{n,k}}\left[ f\left( \frac{a_{n,k+1}}{a_{n,k}}\left( s-\frac{k}{n}\right) +\dfrac{k+1}{n}\right) -f(s)\right] ds. \end{aligned}$$

Moreover, for \(s\in \left[ \dfrac{k}{n}-a_{n,k},\dfrac{k}{n}+a_{n,k}\right] \) we have

$$\begin{aligned} \dfrac{a_{n,k+1}}{a_{n,k}}\left( s-\dfrac{k}{n}\right) +\dfrac{k+1}{n}\ge s \end{aligned}$$

and so

$$\begin{aligned} F_{n,k+1}(f)-F_{n,k}(f)\ge 0. \end{aligned}$$

Summing up, we see that \(({{\mathcal {K}}}_nf)^{\prime }\ge 0\). \(\square \)
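A quick numerical illustration (hypothetical, reusing the `K_n` helper from Sect. 1, with non-constant admissible weights \(a_{n,k}\)):

```python
import numpy as np

n = 7
a = np.array([(0.3 + 0.05 * k) / n for k in range(1, n)])  # 0 < a_{n,k} <= 1/n
f = lambda t: t ** 3                                       # increasing on [0,1]
xs = np.linspace(0, 1, 21)
vals = [K_n(f, n, a, x) for x in xs]
assert all(u <= v for u, v in zip(vals, vals[1:]))         # K_n f is increasing
```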

4 Approximating convex functions

A well-known consequence of (1.4) is

Proposition 4.1

If \(f\in C[0,1]\) is convex, then

$$\begin{aligned} {{\mathcal {K}}}_n f\ge f,\,\, n\ge 2. \end{aligned}$$
(4.1)

In order to prove the main result, Theorem 4.1, we need two lemmas.

Lemma 4.1

Let \(i\ge 1\), \(j\ge 1\) be integers. Then

$$\begin{aligned} \gamma _{i,j}:=\dfrac{(i+j)!}{(i+j)^{i+j}}\dfrac{i^i}{i!}\dfrac{j^j}{j!}\le \dfrac{1}{2}. \end{aligned}$$
(4.2)

Proof

First, we have

$$\begin{aligned} \dfrac{\gamma _{i,j+1}}{\gamma _{i,j}}=\left( 1+\dfrac{1}{j}\right) ^j\left( 1+\dfrac{1}{i+j}\right) ^{-(i+j)} <1. \end{aligned}$$

Then \(\gamma _{i,j}\le \gamma _{i,1}=\left( 1+\dfrac{1}{i}\right) ^{-i}\le \dfrac{1}{2}\), and the proof is completed. \(\square \)
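A quick numerical confirmation of (4.2) on a finite range (of course not a proof):

```python
from math import factorial

def gamma(i, j):
    return (factorial(i + j) / (i + j) ** (i + j)
            * i ** i / factorial(i) * j ** j / factorial(j))

# gamma_{1,1} = 1/2 attains the bound; all other values are smaller
assert max(gamma(i, j) for i in range(1, 40) for j in range(1, 40)) <= 0.5
```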

Lemma 4.2

(Ohlin’s Lemma [22]) Let X, Y be two random variables such that \({\mathbb E}X={\mathbb E}Y\). If the distribution functions \(F_{X}\), \(F_{Y}\) cross exactly once, i.e., for some \(x_0\) one has

$$\begin{aligned} F_{X}(x)\le F_{Y}(x)\text { if } x<x_0 \text { and } F_{X}(x)\ge F_{Y}(x)\text { if } x> x_0, \end{aligned}$$

then \({\mathbb E}f(X)\le {\mathbb E}f(Y) \), for all convex functions \(f:{\mathbb R}\rightarrow {\mathbb R}\).

Remark 4.1

Szostok noticed in [27] that if the measures \(\mu _X\), \(\mu _Y\) corresponding to X, Y, respectively, are concentrated on the interval [a, b], then, in fact, the relation \({\mathbb E}f(X)\le {\mathbb E}f(Y) \) holds for all convex functions \(f:{\mathbb R}\rightarrow {\mathbb R}\) if and only if this inequality is satisfied for all continuous convex functions \(f:[a,b]\rightarrow {\mathbb R}\).

Consider the genuine Bernstein–Durrmeyer operators \(U_n\) defined by

$$\begin{aligned} U_nf(x):=f(0)p_{n,0}(x)+f(1)p_{n,n}(x)+(n-1)\sum _{k=1}^{n-1}\left( \int _0^1 p_{n-2,k-1}(t)f(t)dt\right) p_{n,k}(x). \end{aligned}$$

Now we are in a position to state

Theorem 4.1

If \(a_{n,k}:=\dfrac{1}{n}\), \(1\le k\le n-1\), and \(f\in C[0,1]\) is convex, then

$$\begin{aligned} f\le B_nf\le {{\mathcal {K}}}_nf\le U_n f. \end{aligned}$$
(4.3)

Proof

The first inequality is well known, as a consequence of the fact that \(B_n\) preserves the affine functions. According to the Hermite–Hadamard inequality we have

$$\begin{aligned} f\left( \dfrac{k}{n}\right) \le \dfrac{n}{2}\int _{\frac{k-1}{n}}^{\frac{k+1}{n}}f(t)dt, k=1,\dots , n-1, \end{aligned}$$

and this implies the second inequality in (4.3). So, it remains to prove that

$$\begin{aligned} \dfrac{n}{2}\int _{\frac{k-1}{n}}^{\frac{k+1}{n}}f(t)dt\le (n-1)\displaystyle \int _0^1 p_{n-2,k-1}(t)f(t)dt,\,\, k=1,\dots , n-1. \end{aligned}$$
(4.4)

To this end, fix \(k\in \{1,\dots ,n-1\}\) and consider a random variable X uniformly distributed on \(\displaystyle \left[ \frac{k-1}{n},\frac{k+1}{n}\right] \) and a Beta-type random variable Y having the density \((n-1)p_{n-2,k-1}(t),\,\, t\in [0,1]\). The distribution function of X is

$$\begin{aligned} F_X(x)=\left\{ \begin{array}{ll} 0,&{} x\le \dfrac{k-1}{n},\\ \dfrac{1}{2}(nx-k+1),&{} \dfrac{k-1}{n}<x\le \dfrac{k+1}{n},\\ 1,&{} x>\dfrac{k+1}{n}, \end{array}\right. \end{aligned}$$

and the distribution function of Y is

$$\begin{aligned} F_Y(x)=\left\{ \begin{array}{ll} 0,&{} x\le 0,\\ \displaystyle \int _0^x (n-1) p_{n-2,k-1}(t)dt,&{} x\in (0,1],\\ 1,&{} x>1. \end{array}\right. \end{aligned}$$

We have \({\mathbb E}X={\mathbb E}Y=\dfrac{k}{n}\), \(k=1,\dots , n-1\).

  (1)

    Let \(k=1\). Then

    $$\begin{aligned} F_X(x)=\left\{ \begin{array}{ll} 0,&{} x\le 0,\\ \dfrac{1}{2}nx,&{} 0<x\le \dfrac{2}{n},\\ 1,&{} x>\dfrac{2}{n}, \end{array}\right. \end{aligned}$$

    and

    $$\begin{aligned} F_Y(x)=\left\{ \begin{array}{ll} 0,&{} x\le 0,\\ 1-(1-x)^{n-1},&{} 0<x\le 1,\\ 1,&{} x\ge 1. \end{array}\right. \end{aligned}$$

    It is easy to prove the existence of the crossing point \(x_0\in (0,1)\) required in Ohlin’s Lemma.

  (2)

    The case \(k=n-1\) can be treated similarly.

  (3)

    It remains to consider the case \(2\le k\le n-2\). Define \(H(x):=F_{Y}(x)-F_X(x)\), \(x\in {\mathbb R}\). Clearly

    $$\begin{aligned} H(x)\ge 0,\,\, x\in \left( -\infty ,\dfrac{k-1}{n}\right) \text { and } H(x)\le 0,\,\, x\in \left( \dfrac{k+1}{n},\infty \right) . \end{aligned}$$

Moreover, \(H\left( \dfrac{k-1}{n}\right) >0\) and \(H\left( \dfrac{k+1}{n}\right) <0\). For \(\dfrac{k-1}{n}<x<\dfrac{k+1}{n}\) we have

$$\begin{aligned} H^{\prime }(x)&=(n-1)p_{n-2,k-1}(x)-\dfrac{n}{2}\\&\le (n-1)p_{n-2,k-1}\left( \dfrac{k-1}{n-2}\right) -\dfrac{n}{2}\\&=(n-1)\dfrac{(n-2)!}{(n-2)^{n-2}}\dfrac{(k-1)^{k-1}}{(k-1)!}\dfrac{(n-k-1)^{n-k-1}}{(n-k-1)!}-\dfrac{n}{2}. \end{aligned}$$

Using Lemma 4.1 we get

$$\begin{aligned} H^{\prime }(x)\le \dfrac{n-1}{2}-\dfrac{n}{2}=-\dfrac{1}{2}. \end{aligned}$$

Therefore H is strictly decreasing on \(\left[ \dfrac{k-1}{n},\dfrac{k+1}{n}\right] \), and the existence of the crossing point \(x_0\) from Ohlin’s Lemma is proved. Summing up, according to Ohlin’s Lemma we have \({\mathbb E}f(X)\le {\mathbb E}f(Y) \), and this is (4.4). \(\square \)
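The chain (4.3) can also be observed numerically. A hypothetical check with the convex function \(f(t)=e^t\), reusing `K_n` (Sect. 1) and `bernstein` (Sect. 2), with \(U_n\) implemented directly from its definition:

```python
import numpy as np
from math import comb
from scipy.integrate import quad

def genuine_durrmeyer(f, n, x):
    """Genuine Bernstein-Durrmeyer operator U_n f evaluated at x."""
    s = f(0.0) * (1 - x) ** n + f(1.0) * x ** n
    for k in range(1, n):
        w = (n - 1) * quad(lambda t: comb(n - 2, k - 1) * t ** (k - 1)
                           * (1 - t) ** (n - 1 - k) * f(t), 0, 1)[0]
        s += w * comb(n, k) * x ** k * (1 - x) ** (n - k)
    return s

f = np.exp
n = 6
a = np.full(n - 1, 1.0 / n)                # a_{n,k} = 1/n, as in Theorem 4.1
for x in np.linspace(0.1, 0.9, 5):
    chain = (f(x), bernstein(f, n, x), K_n(f, n, a, x),
             genuine_durrmeyer(f, n, x))
    assert chain[0] <= chain[1] <= chain[2] <= chain[3]    # (4.3)
```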

Returning to the general case with \(0<a_{n,k}\le \dfrac{1}{n}\), we have the following two remarks.

Remark 4.2

Recall the operators \({K_n^*}\) from [28] (see also Remark 2.1). Using the technique based on Ohlin’s lemma, it is not difficult to prove that for each convex function \(f\in C[0,1]\) we have

  (i)

    If \(0<a_{n,k}\le \dfrac{1}{3n},\,\, k=1,\dots ,n-1\), then \(f\le {{\mathcal {K}}}_nf\le {K_n^*}f\);

  (ii)

    If \(\dfrac{n-1}{2n-1}\dfrac{1}{n}\le a_{n,k}\le \dfrac{1}{n},\,\, k=1,\dots , n-1\), then \(f\le {K_n^*}f\le {{\mathcal {K}}}_n f.\)

Remark 4.3

We have

$$\begin{aligned} {{\mathcal {K}}}_ne_2(x)-e_2(x)= & {} \dfrac{x(1-x)}{n}+\dfrac{1}{3}\displaystyle \sum _{k=1}^{n-1}a_{n,k}^2\, p_{n,k}(x)\le \dfrac{4}{3}\dfrac{x(1-x)}{n}, \\ U_ne_2(x)-e_2(x)= & {} \dfrac{2x(1-x)}{n+1}. \end{aligned}$$

Therefore, in light of the classical result of Shisha–Mond [25],

$$\begin{aligned}&\left| {{\mathcal {K}}}_nf(x)-f(x)\right| \le 2\omega \left( f;\sqrt{\dfrac{4}{3}\dfrac{x(1-x)}{n}}\right) ,\\&\left| { U}_nf(x)-f(x)\right| \le 2\omega \left( f;\sqrt{\dfrac{2x(1-x)}{n+1}}\right) . \end{aligned}$$

So, from this point of view, the approximation of a function \(f\in C[0,1]\) provided by \({{\mathcal {K}}}_n\) is better than that provided by \(U_n\).

5 \({{\mathcal {K}}}_n\) and convex functions

In this section we take \(a_{n,k}:=\dfrac{\theta }{n}\), \(k=1,\dots ,n-1\), for a given \(\theta \in (0,1]\). Consequently, we denote the functionals by \(F_{n,k}^{\theta }\) and the operators by \({{\mathcal {K}}}_n^{\theta }\). So, we have

$$\begin{aligned} F_{n,k}^{\theta }(f):=\dfrac{n}{2\theta }\displaystyle \int _{\frac{k-\theta }{n}}^{\frac{k+\theta }{n}}f(t)dt,\,\, k=1,\dots , n-1. \end{aligned}$$

Theorem 5.1

If \(f\in C[0,1]\) is convex, then \({{\mathcal {K}}}_n^{\theta }f\) is convex.

Proof

First, we have

$$\begin{aligned} ({{\mathcal {K}}}_n^{\theta }f)^{\prime \prime }(x)=n(n-1)\displaystyle \sum _{k=0}^{n-2}\left( F_{n,k+2}^{\theta }(f)-2F_{n,k+1}^{\theta }(f)+F_{n,k}^{\theta }(f)\right) p_{n-2,k}(x). \end{aligned}$$

  (i)

    In order to prove that \(F_{n,2}^{\theta }(f)-2F_{n,1}^{\theta }(f)+F_{n,0}^{\theta }(f)\ge 0\), it suffices to verify the inequality for the functions \(\varphi (t)=1\), \(\varphi (t)=t\), \(\varphi (t)=\max \{t-x,0\}\), \(t,x\in [0,1]\) (see [19, p. 645, B.4. Proposition and B.4.a. Proposition]). This can be done by elementary calculations.

  (ii)

    The proof of the inequality

    $$\begin{aligned} F_{n,n}^{\theta }(f)-2F_{n,n-1}^{\theta }(f)+F_{n,n-2}^{\theta }(f)\ge 0 \end{aligned}$$

    is similar.

  (iii)

    It remains to prove that

    $$\begin{aligned} F_{n,k+2}^{\theta }(f)-2F_{n,k+1}^{\theta }(f)+F_{n,k}^{\theta }(f)\ge 0,\,\, k=1,\dots , n-3. \end{aligned}$$

This follows by integrating

$$\begin{aligned} f\left( t-\frac{1}{n}\right) +f\left( t+\dfrac{1}{n}\right) \ge 2f(t),\,\, \,\,\dfrac{1}{n}\le t\le \dfrac{n-1}{n} \end{aligned}$$

on the interval \(\left[ \dfrac{k+1-\theta }{n},\dfrac{k+1+\theta }{n}\right] \).

\(\square \)
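A hypothetical numerical illustration, reusing the `K_n` helper from Sect. 1: for a convex f, the second differences of \({{\mathcal {K}}}_n^{\theta }f\) on a uniform grid stay nonnegative.

```python
import numpy as np

n, theta = 9, 0.6
a = np.full(n - 1, theta / n)
f = lambda t: np.exp(2 * t)                        # convex on [0,1]
xs = np.linspace(0, 1, 41)
vals = np.array([K_n(f, n, a, x) for x in xs])
assert np.all(np.diff(vals, 2) >= -1e-12)          # discrete convexity check
```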

Theorem 5.2

If \(f\in C[0,1]\) is convex, then \({{\mathcal {K}}}_n^{\sigma }f\le {{\mathcal {K}}}_n^{\tau }f \) whenever \(0<\sigma <\tau \le 1\).

Proof

It suffices to prove that \(F_{n,k}^{\sigma }(f)\le F_{n,k}^{\tau }(f), k=1,\dots , n-1\). In fact, we will prove that

$$\begin{aligned} \dfrac{d}{d\theta } F_{n,k}^{\theta }(f)\ge 0, \,\, 0<\theta \le 1. \end{aligned}$$

Indeed,

$$\begin{aligned} \dfrac{d}{d\theta }F_{n,k}^{\theta }(f)&=-\dfrac{n}{2\theta ^2}\int _{\frac{k-\theta }{n}}^{\frac{k+\theta }{n}}f(t)dt+\dfrac{n}{2\theta }\left[ \dfrac{1}{n}f\left( \dfrac{k+\theta }{n}\right) +\dfrac{1}{n}f\left( \dfrac{k-\theta }{n}\right) \right] \nonumber \\&=\dfrac{1}{\theta }\left[ \dfrac{f\left( \dfrac{k-\theta }{n}\right) +f\left( \dfrac{k+\theta }{n}\right) }{2}-\dfrac{n}{2\theta }\int _{\frac{k-\theta }{n}}^{\frac{k+\theta }{n}}f(t)dt\right] \ge 0, \end{aligned}$$
(5.1)

where the last inequality follows from the Hermite–Hadamard inequality. \(\square \)

In the next result we estimate the difference between \({{\mathcal {K}}}_n^{\tau }f\) and \({{\mathcal {K}}}_n^{\sigma }f\).

Theorem 5.3

Let \(0<\sigma <\tau \le 1\), \(f\in C^2[0,1]\) and \(x\in [0,1]\). Then

$$\begin{aligned} \left| {{\mathcal {K}}}_n^{\tau }f(x)- {{\mathcal {K}}}_n^{\sigma }f(x)\right| \le \tau (\tau -\sigma )\dfrac{1}{3n^2}\Vert f^{\prime \prime }\Vert _{\infty }(1-(1-x)^n-x^n), \end{aligned}$$

where \(\Vert \cdot \Vert _{\infty }\) is the supremum norm.

Proof

Denote \(\psi _{n,k}(\theta ):=F_{n,k}^{\theta }(f),\,\,\theta \in (0,1]\). By the mean value theorem, for a suitable \(\theta \in (\sigma , \tau )\) we have

$$\begin{aligned} |\psi _{n,k}(\tau )-\psi _{n,k}(\sigma )|=(\tau -\sigma )|\psi _{n,k}^{\prime }(\theta )|. \end{aligned}$$
(5.2)

Using (5.1) we can write

$$\begin{aligned} \dfrac{d}{d\theta }\psi _{n,k}(\theta )=\dfrac{1}{\theta }\left[ \dfrac{f\left( \dfrac{k-\theta }{n}\right) +f\left( \dfrac{k+\theta }{n}\right) }{2}-\dfrac{n}{2\theta }\int _{\frac{k-\theta }{n}}^{\frac{k+\theta }{n}}f(t)dt\right] . \end{aligned}$$
(5.3)

Applying the trapezoidal rule we get

$$\begin{aligned} \left| \dfrac{f\left( \dfrac{k-\theta }{n}\right) +f\left( \dfrac{k+\theta }{n}\right) }{2}-\dfrac{n}{2\theta }\int _{\frac{k-\theta }{n}}^{\frac{k+\theta }{n}}f(t)dt\right| \le \dfrac{\theta ^2}{3n^2}\Vert f^{\prime \prime }\Vert _{\infty }. \end{aligned}$$
(5.4)

From (5.3) and (5.4) it follows that

$$\begin{aligned} |\psi _{n,k}^{\prime }(\theta )|\le \dfrac{\tau }{3n^2}\Vert f^{\prime \prime }\Vert _{\infty }. \end{aligned}$$

Combined with (5.2) this leads to

$$\begin{aligned} |F_{n,k}^{\tau }(f)-F_{n,k}^{\sigma }(f)|\le \tau (\tau -\sigma )\dfrac{1}{3n^2}\Vert f^{\prime \prime }\Vert _{\infty }. \end{aligned}$$

Therefore,

$$\begin{aligned} \left| {{\mathcal {K}}}_n^{\tau }f(x)- {{\mathcal {K}}}_n^{\sigma }f(x)\right|&\le \sum _{k=1}^{n-1}p_{n,k}(x)|F_{n,k}^{\tau }(f)-F_{n,k}^{\sigma }(f)|\\&\le \tau (\tau -\sigma )\dfrac{1}{3n^2}\Vert f^{\prime \prime }\Vert _{\infty }(1-(1-x)^n-x^n)\end{aligned}$$

and this concludes the proof. \(\square \)
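A hypothetical sanity check of this estimate with \(f=\sin \) (so that \(\Vert f^{\prime \prime }\Vert _{\infty }\le 1\)), reusing the `K_n` helper from Sect. 1:

```python
import numpy as np

f, d2_sup = np.sin, 1.0                    # |f''| = |sin| <= 1 on [0,1]
n, sigma, tau = 10, 0.3, 0.9
for x in (0.2, 0.5, 0.8):
    lhs = abs(K_n(f, n, np.full(n - 1, tau / n), x)
              - K_n(f, n, np.full(n - 1, sigma / n), x))
    bound = (tau * (tau - sigma) / (3 * n ** 2) * d2_sup
             * (1 - (1 - x) ** n - x ** n))
    assert lhs <= bound                    # the estimate of Theorem 5.3
```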

Remark 5.1

Letting \(\sigma \rightarrow 0\) in Theorem 5.3, we get

$$\begin{aligned} \left| {{\mathcal {K}}}_n^{\tau }f(x)- B_nf(x)\right| \le \dfrac{\tau ^2}{3n^2}\Vert f^{\prime \prime }\Vert _{\infty }(1-(1-x)^n-x^n). \end{aligned}$$

Estimating directly with (2.3) we obtain

$$\begin{aligned} \left| F_{n,k}^{\tau }(f)-f\left( \dfrac{k}{n}\right) \right| =\left( F_{n,k}^{\tau }(e_2)-\left( \dfrac{k}{n}\right) ^2\right) \dfrac{|f^{\prime \prime }(\xi )|}{2}\le \dfrac{1}{6}\left( \dfrac{\tau }{n}\right) ^2\Vert f^{\prime \prime }\Vert _{\infty }. \end{aligned}$$

This produces the sharper estimate

$$\begin{aligned} \left| {{\mathcal {K}}}_n^{\tau }f(x)- B_nf(x)\right| \le \dfrac{\tau ^2}{6n^2}\Vert f^{\prime \prime }\Vert _{\infty }(1-(1-x)^n-x^n). \end{aligned}$$

6 Monotonic convergence under convexity

In this section we consider again the case \(a_{n,k}=\theta /n\), \(n\ge 1\), \(k=1,\dots , n-1\), for a given \(\theta \in (0,1]\).

Theorem 6.1

If \(f\in C[0,1]\) is convex, then \({{\mathcal {K}}}_n^{\theta }f\ge {{\mathcal {K}}}_{n+1}^{\theta }f\), \(n\ge 2\).

Proof

First, we have

$$\begin{aligned}&{{\mathcal {K}}}_n^{\theta }f(x)-{{\mathcal {K}}}_{n+1}^{\theta }f(x)\nonumber \\&\quad =\displaystyle \sum _{k=1}^n{n+1\atopwithdelims ()k}\left[ \dfrac{k}{n+1}F_{n,k-1}^{\theta }(f)+\dfrac{n+1-k}{n+1}F_{n,k}^{\theta }(f)-F_{n+1,k}^{\theta }(f)\right] x^k(1-x)^{n+1-k}. \nonumber \\ \end{aligned}$$
(6.1)

The proof of (6.1) is similar to that of [9, Proposition 2.10] and we omit it. It follows that

$$\begin{aligned}&{{\mathcal {K}}}_n^{\theta }f(x)-{{\mathcal {K}}}_{n+1}^{\theta }f(x)\\&\quad =\dfrac{n(n+1)}{2\theta }x(1-x)^n\int _{\frac{1-\theta }{n}}^{\frac{1+\theta }{n}}\left[ \dfrac{1}{n+1}f(0)+\dfrac{n}{n+1}f(s)-f\left( \dfrac{n}{n+1}s\right) \right] ds\\&\qquad +\dfrac{n(n+1)}{2\theta }x^n(1-x)\int _{\frac{n-1-\theta }{n}}^{\frac{n-1+\theta }{n}}\left[ \dfrac{n}{n+1}f(s)+\dfrac{1}{n+1}f(1)-f\left( \dfrac{n}{n+1}s+\dfrac{1}{n+1}\right) \right] ds\\&\qquad +\dfrac{n}{2\theta }\sum _{k=2}^{n-1}x^k(1-x)^{n+1-k}{n+1\atopwithdelims ()k}\\&\qquad \times \int _{\frac{k-1-\theta }{n}}^{\frac{k-1+\theta }{n}}\left[ \dfrac{k}{n+1}f(s)+\dfrac{n+1-k}{n+1}f\left( s+\dfrac{1}{n}\right) -f\left( \dfrac{n}{n+1}s+\dfrac{1}{n+1}\right) \right] ds\\&\quad =I_1+I_2+I_3. \end{aligned}$$

Since f is convex, by Jensen’s inequality we have

$$\begin{aligned} \dfrac{1}{n+1}f(0)+\dfrac{n}{n+1}f(s)-f\left( \dfrac{n}{n+1}s\right) \ge 0, \\ \dfrac{n}{n+1}f(s)+\dfrac{1}{n+1}f(1)-f\left( \dfrac{n}{n+1}s+\dfrac{1}{n+1}\right) \ge 0, \end{aligned}$$

and so \(I_1\ge 0\), \(I_2\ge 0\). Concerning \(I_3\), we have

$$\begin{aligned} {\tilde{I}}_3&:=\displaystyle \int _{\frac{k-1-\theta }{n}}^{\frac{k-1+\theta }{n}}\left[ \dfrac{k}{n+1}f(s)+\dfrac{n+1-k}{n+1}f\left( s+\dfrac{1}{n}\right) -f\left( \dfrac{n}{n+1}s+\dfrac{1}{n+1}\right) \right] ds\\&\ge \displaystyle \int _{\frac{k-1-\theta }{n}}^{\frac{k-1+\theta }{n}}\left[ f\left( s+\dfrac{n+1-k}{n(n+1)}\right) -f\left( \dfrac{n}{n+1}s+\dfrac{1}{n+1}\right) \right] ds\\&=\displaystyle \int _{\frac{k}{n+1}-\frac{\theta }{n}}^{\frac{k}{n+1}+\frac{\theta }{n}}f(s)ds-\dfrac{n+1}{n}\int _{\frac{k-\theta }{n+1}}^{\frac{k+\theta }{n+1}}f(s)ds. \end{aligned}$$

Let y(s) be a polynomial function of degree at most 1 whose graph is a support line to the graph of f(s) at \(s=\dfrac{k}{n+1}\). Then

$$\begin{aligned} \int _{\frac{k}{n+1}-\frac{\theta }{n}}^{\frac{k}{n+1}+\frac{\theta }{n}}y(s)ds= \dfrac{n+1}{n}\int _{\frac{k-\theta }{n+1}}^{\frac{k+\theta }{n+1}}y(s)ds, \end{aligned}$$

so that

$$\begin{aligned} {\tilde{I}}_3\ge \displaystyle \int _{\frac{k}{n+1}-\frac{\theta }{n}}^{\frac{k}{n+1}+\frac{\theta }{n}}\left( f(s)-y(s)\right) ds-\dfrac{n+1}{n}\int _{\frac{k-\theta }{n+1}}^{\frac{k+\theta }{n+1}}\left( f(s)-y(s)\right) ds. \end{aligned}$$

Setting \(g(s):=f(s)-y(s)\) we have \(g(s)\ge 0\) and the above inequality can be written as

$$\begin{aligned} {\tilde{I}}_3&\ge \left( \int _{\frac{k}{n+1}-\frac{\theta }{n}}^{\frac{k-\theta }{n+1}}g(s)ds-\dfrac{1}{n}\int _{\frac{k-\theta }{n+1}}^{\frac{k}{n+1}}g(s)ds\right) \\&\quad +\left( \int _{\frac{k+\theta }{n+1}}^{\frac{k}{n+1}+\frac{\theta }{n}}g(s)ds-\dfrac{1}{n}\int _{\frac{k}{n+1}}^{\frac{k+\theta }{n+1}}g(s)ds\right) =J_1+J_2. \end{aligned}$$

Let \(h:=g\left( \dfrac{k+\theta }{n+1}\right) \). With notation from Fig. 5, using the convexity of g and elementary geometric considerations we find that \(H=\dfrac{n+1}{n}h\),

$$\begin{aligned} \int _{\frac{k+\theta }{n+1}}^{\frac{k}{n+1}+ \frac{\theta }{n}}g(s)ds\ge & {} \dfrac{h+H}{2}\dfrac{\theta }{n(n+1)}= \dfrac{(2n+1)h\theta }{2n^2(n+1)},\\ \int _{\frac{k}{n+1}}^{\frac{k+\theta }{n+1}}g(s)ds\le & {} \dfrac{h\theta }{2(n+1)}. \end{aligned}$$

Therefore, \(J_2\ge 0\), and similarly \(J_1\ge 0\). We conclude that \({\tilde{I}}_3\ge 0\), hence \(I_3\ge 0\), and the proof is complete.

Fig. 5: Support for the proof of Theorem 6.1

\(\square \)
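A hypothetical numerical illustration of Theorem 6.1 with the convex function \(f(t)=e^t\), reusing the `K_n` helper from Sect. 1:

```python
import numpy as np

f, theta, x = np.exp, 0.5, 0.4
prev = np.inf
for n in (2, 3, 5, 8, 13):
    val = K_n(f, n, np.full(n - 1, theta / n), x)
    assert val <= prev                     # K_n^theta f decreases in n
    prev = val
```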

Remark 6.1

Let \(\mu \), \(\nu \) be probability distributions (Borel measures on \({\mathbb R}\) with \(\mu ({\mathbb R})=\nu ({\mathbb R})=1\)). If \(\int \varphi (x)d\nu (x)\le \int \varphi (x)d\mu (x)\) for each convex \( \varphi :{\mathbb R}\rightarrow {\mathbb R} \), then \(\nu \) is said to be smaller than \(\mu \) in the convex stochastic order. One uses the notation \(\nu \le _{cx}\mu \) (see, e.g., [23, 24]).

The operators in this paper can be represented under the form

$$\begin{aligned} {{\mathcal {L}}}_nf=\int f d\mu _n,\,\,\,{{\mathcal {M}}}_nf=\int f d\nu _n, \end{aligned}$$

with suitable probability distributions \(\mu _n\), \(\nu _n\).

Moreover, they satisfy inequalities of the form (see Theorems 4.1, 5.2, 6.1)

$$\begin{aligned} {{\mathcal {M}}}_nf \le {{\mathcal {L}}}_nf, \text { for all convex functions } f. \end{aligned}$$

This is equivalent to

$$\begin{aligned} \int f d\nu _n\le \int f d\mu _n, \text { for all convex functions } f, \end{aligned}$$

and hence to

$$\begin{aligned} \nu _n\le _{cx}\mu _n. \end{aligned}$$

Therefore, Theorems 4.1, 5.2, 6.1 have natural interpretations in the theory of convex stochastic ordering.

7 \({{\mathcal {K}}}_n^{\theta }\) and strongly convex functions with modulus c

For the definitions of strongly convex functions and approximately concave functions with modulus c see, e.g., [17, 18, 20, 21] and the references therein. We need the following characterizations of these functions.

Lemma 7.1

  (i)

    A function \(f: I \rightarrow {\mathbb R}\) is strongly convex with modulus \(c>0\) if and only if the function \(g:I\rightarrow {\mathbb R}\) defined by \(g=f-ce_2\) is convex.

  (ii)

    A function \(f: I \rightarrow {\mathbb R}\) is approximately concave with modulus \(c>0\) if and only if the function \(g:I\rightarrow {\mathbb R}\) defined by \(g=f-ce_2\) is concave.

Theorem 7.1

  (i)

    If \(f\in C[0,1]\) is strongly convex with modulus c, then \({{\mathcal {K}}}_n^{\theta }f\) is strongly convex with modulus \(c\left( 1-\dfrac{\theta ^2}{6}\right) \dfrac{n-1}{n}\).

  (ii)

    If \(f\in C[0,1]\) is approximately concave with modulus c, then \({{\mathcal {K}}}_n^{\theta }f\) is approximately concave with modulus \(c\left( 1-\dfrac{\theta ^2}{3\cdot 2^{n-2}}\right) \dfrac{n-1}{n}\).

Proof

(i) Let \(f\in C[0,1]\) be strongly convex with modulus c. Then \(f-ce_2\) is convex. According to Theorem 5.1, \({{\mathcal {K}}}_n^{\theta }f-c{{\mathcal {K}}}_n^{\theta }e_2\) is convex. Therefore,

$$\begin{aligned} ({{\mathcal {K}}}_n^{\theta }f)^{\prime \prime }\ge c({{\mathcal {K}}}_n^{\theta }e_2)^{\prime \prime }. \end{aligned}$$

We have

$$\begin{aligned} {{\mathcal {K}}}_n^{\theta }e_2(x)=\dfrac{n-1}{n}x^2+\dfrac{1}{n}x+\dfrac{\theta ^2}{3n^2}\left[ 1-(1-x)^n-x^n\right] , \end{aligned}$$

and so

$$\begin{aligned} ({{\mathcal {K}}}_n^{\theta }e_2)^{\prime \prime }(x)&=2\dfrac{n-1}{n}-\dfrac{\theta ^2}{3}\dfrac{n-1}{n}\left[ (1-x)^{n-2}+x^{n-2}\right] \\&\ge \left( 2-\dfrac{\theta ^2}{3}\right) \dfrac{n-1}{n}. \end{aligned}$$

It follows that

$$\begin{aligned} ({{\mathcal {K}}}_n^{\theta }f)^{\prime \prime }(x)\ge c\left( 2-\dfrac{\theta ^2}{3}\right) \dfrac{n-1}{n}, \end{aligned}$$

i.e., \({{\mathcal {K}}}_n^{\theta }f-c\left( 1-\dfrac{\theta ^2}{6}\right) \dfrac{n-1}{n}e_2\) is convex. This shows that \({{\mathcal {K}}}_n^{\theta }f\) is strongly convex with modulus \(c\left( 1-\dfrac{\theta ^2}{6}\right) \dfrac{n-1}{n}\).

(ii) The proof is similar to the previous one and we omit it. \(\square \)
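A hypothetical spot check of (i) for \(f=e_2\), which is strongly convex with modulus \(c=1\) (reusing the `K_n` helper from Sect. 1):

```python
import numpy as np

n, theta, c = 8, 0.9, 1.0
a = np.full(n - 1, theta / n)
mod = c * (1 - theta ** 2 / 6) * (n - 1) / n
xs = np.linspace(0, 1, 41)
# g = K_n^theta e_2 - mod * e_2 should be convex, i.e. have nonnegative
# second differences on a uniform grid
g = np.array([K_n(lambda t: t * t, n, a, x) - mod * x * x for x in xs])
assert np.all(np.diff(g, 2) >= -1e-12)
```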

8 Conclusions and further work

Let \(c \in \mathbb {R}\) and \(n \in \mathbb {R}\), with \(n > c\) for \(c\ge 0\) and \(-n/c=l \in \mathbb {N}\) for \(c<0\). Furthermore, let \(I_c = [0,\infty )\) for \(c\ge 0\) and \(I_c=[0,-1/c]\) for \(c < 0\). Let \(f:I_c \rightarrow \mathbb {R}\) be such that the corresponding integrals and series are convergent.

Let \(0<a_{n,k}<\dfrac{1}{n}\). Define

$$\begin{aligned} { {\mathcal {K}}}_n^{[c]} (f;x)= & {} f(0)p_{n,0}^{[c]}(x)+\sum _{k=1}^{\infty } \dfrac{p_{n,k}^{[c]}(x)}{2a_{n,k}}\int _{\frac{k}{n}-a_{n,k}}^{\frac{k}{n}+a_{n,k}}f(t)dt,\,\, c\ge 0, \\ { {\mathcal {K}}}_{n}^{[c]} (f;x)= & {} f(0)p_{n,0}^{[c]}(x)+f\left( \dfrac{l}{n}\right) p_{n,l}^{[c]}(x)+\sum _{k=1}^{l-1} \dfrac{p_{n,k}^{[c]}(x)}{2a_{n,k}}\int _{\frac{k}{n}-a_{n,k}}^{\frac{k}{n}+a_{n,k}}f(t)dt,\,\, c<0, \end{aligned}$$

with the corresponding basis functions

$$\begin{aligned} p_{n,k}^{[c]}(x)= \left\{ \begin{array}{ll} \displaystyle \frac{n^k}{k!} x^k e^{-nx}, &{} c = 0,\\ \displaystyle \frac{n^{c,\overline{k}}}{k!} x^k (1+cx)^{-\left( \frac{n}{c}+k\right) }, &{} c \ne 0, \end{array} \right. \end{aligned}$$
(8.1)

where \( a^{c,\overline{k}} := \prod _{l=0}^{k-1} (a+cl),\quad a^{c,\overline{0}} :=1. \) As further work we propose to investigate the operators \({ {\mathcal {K}}}_{n}^{[c]}\).
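Under our reading of (8.1), a short sketch of the basis functions (the helper names are our own): \(c=0\) gives the Szász–Mirakjan weights, \(c=-1\) recovers the Bernstein basis, and \(c=1\) the Baskakov weights.

```python
import math

def rising(a: float, c: float, k: int) -> float:
    """a^{c,k} = a (a + c) ... (a + (k-1)c), with the empty product equal to 1."""
    out = 1.0
    for l in range(k):
        out *= a + c * l
    return out

def p_c(n: float, k: int, c: float, x: float) -> float:
    if c == 0:
        return (n * x) ** k / math.factorial(k) * math.exp(-n * x)
    return (rising(n, c, k) / math.factorial(k)
            * x ** k * (1 + c * x) ** (-(n / c + k)))

# for c = -1 and n = 5 the weights reduce to p_{5,k}(x) and sum to 1
print(sum(p_c(5, k, -1.0, 0.3) for k in range(6)))     # ~ 1.0
```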

We intend to consider all these modified Kantorovich operators in the framework of generalized sampling operators, with applications in medical and industrial domains.