1 Introduction and summary

Throughout, \( I \subset \mathbb {R}\) is an interval. For \( {{\mathbf {z}}} = ( z_1 , z_2 , \ldots , z_n ) \in \mathbb {R}^{n}\) and \( i = 1,2,\ldots ,n \), the symbol \( z_{[i]} \) stands for the \( i \)th largest entry of \( {\mathbf z} \).

An n-tuple \( {{\mathbf {y}} } = ( y_1 , y_2 , \ldots , y_n ) \in I^n \) is said to be weakly majorized by an n-tuple \( {\mathbf x } = ( x_1 , x_2 , \ldots , x_n ) \in I^n \), written as \( {\mathbf y } \prec _w {{\mathbf {x}} } \), if

$$\begin{aligned} \sum \limits _{i=1}^k y_{[i]} \le \sum \limits _{i=1}^k x_{[i]} \;\;\; \hbox { for all}\ k = 1,2,\ldots ,n \end{aligned}$$
(1)

(see [9, p. 12]). If, in addition,

$$\begin{aligned} \sum \limits _{i=1}^n y_i = \sum \limits _{i=1}^n x_i , \end{aligned}$$

then \( {{\mathbf {y}} } \) is said to be majorized by \( {{\mathbf {x}} } \), written as \( {{\mathbf {y}} } \prec {{\mathbf {x}} } \) (see [9, p. 8]).
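
For example, with numbers chosen only for illustration, take \( {{\mathbf {x}} } = (4,1,1) \) and \( {{\mathbf {y}} } = (2,2,2) \). Then

$$\begin{aligned} y_{[1]} = 2 \le 4 = x_{[1]} , \;\;\; y_{[1]} + y_{[2]} = 4 \le 5 = x_{[1]} + x_{[2]} , \;\;\; \sum \limits _{i=1}^3 y_i = 6 = \sum \limits _{i=1}^3 x_i , \end{aligned}$$

so \( {{\mathbf {y}} } \prec {{\mathbf {x}} } \). Dropping the last equality, \( (1,1,1) \prec _w (4,1,1) \), although \( (1,1,1) \not \prec (4,1,1) \).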

A function \( F : I^n \rightarrow \mathbb {R}\) is said to be Schur-convex (resp. Schur-concave) on \( I^n \) if

$$\begin{aligned} {{\mathbf {y}} } \prec {{\mathbf {x}} } \;\;\; \text{ implies } \;\;\; F ( {{\mathbf {y}} } ) \le (\text{ resp. } \ge ) F ( {{\mathbf {x}} }) , \end{aligned}$$

provided \( {{\mathbf {x}} } , {{\mathbf {y}} } \in I^n \) (see [9, p. 80]).

Let \( g_1 , g_2 : [ a , b ] \rightarrow \mathbb {R}\) be two integrable real functions. The function \( g_2 \) is said to be unordered submajorized by \( g_1 \), written as \( g_2 \prec _w^u g_1 \), if

$$\begin{aligned} \int \limits _{a}^s g_2 (t) \, d t \le \int \limits _{a}^s g_1 (t) \, d t \;\;\; \text{ for } s \in [a,b] . \end{aligned}$$

If, moreover,

$$\begin{aligned} \int \limits _{a}^b g_2 (t) \, d t = \int \limits _{a}^b g_1 (t) \, d t , \end{aligned}$$

then \( g_2 \) is said to be (unordered) majorized by \( g_1 \), written as \( g_2 \prec ^u g_1 \) (see [4], cf. [9, p. 22]).
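
To illustrate, consider (purely as an example) \( g_1 (t) = 1 \) and \( g_2 (t) = 2 t \) on \( [0,1] \). Then

$$\begin{aligned} \int \limits _{0}^s g_2 (t) \, d t = s^2 \le s = \int \limits _{0}^s g_1 (t) \, d t \;\;\; \text{ for } s \in [0,1] , \end{aligned}$$

with equality at \( s = 1 \), so \( g_2 \prec ^u g_1 \), even though neither function dominates the other pointwise on the whole interval.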

By a cumulative function induced by an integrable function \( g : [a,b] \rightarrow \mathbb {R}\), we mean the integral function

$$\begin{aligned} G (s) = \int \limits _a^s g (t) \, d t , \;\;\; s \in [a,b] . \end{aligned}$$
(2)

In what follows, we assume that all integrals under consideration exist.

Elezović and Pečarić in [5] established the following result.

Theorem A

[5] Let f be a continuous function on an interval I. Then the function

$$\begin{aligned} F (x,y) = \left\{ \begin{array}{ll} \frac{1}{y-x} \int \limits _x^y f (t) \, d t &{}\quad \text{ for } x , y \in I ,\, x \ne y , \\ f (x) &{}\quad \text{ for } x = y \in I , \\ \end{array} \right. \end{aligned}$$

is Schur-convex (Schur-concave) on \( I^2 \) iff f is convex (concave) on I.

It is well-known that if \( f : I \rightarrow \mathbb {R}\) is a convex function on an interval \( I \subset \mathbb {R}\), \( a , b \in I \) with \( a < b \), then the following Hermite–Hadamard inequality holds:

$$\begin{aligned} f \left( \frac{a+b}{2} \right) \le \frac{1}{b-a} \int \limits _a^b f (x) \, d x \le \frac{ f (a) + f (b) }{2} \end{aligned}$$
(3)

(see [3, p. 137]).
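
As a simple illustration, for \( f (x) = x^2 \) on \( [0,1] \) inequality (3) reads

$$\begin{aligned} f \left( \frac{1}{2} \right) = \frac{1}{4} \le \int \limits _0^1 x^2 \, d x = \frac{1}{3} \le \frac{ f (0) + f (1) }{2} = \frac{1}{2} . \end{aligned}$$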

A more general result is contained in the following theorem [1, 7, 11].

Theorem B

[1] Let \( f : I \rightarrow \mathbb {R}\) be a convex function on an interval \( I \subset \mathbb {R}\), \( a , b \in I \) with \( a < b \), and let \( p : [a,b] \rightarrow \mathbb {R}\) be a non-negative integrable weight function. Assume that p is symmetric about \( \frac{a+b}{2} \). Then the following Fejér inequality holds:

$$\begin{aligned} f \left( \frac{a+b}{2} \right) \int \limits _a^b p (t) \, d t \le \int \limits _a^b f (t) p (t) \, d t \le \frac{ f (a) + f (b) }{2} \int \limits _a^b p (t) \, d t . \end{aligned}$$
(4)
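
For instance, with the illustrative choices \( f (x) = x^2 \) on \( [-1,1] \) and the symmetric weight \( p (t) = 1 - |t| \), inequality (4) becomes

$$\begin{aligned} f (0) \int \limits _{-1}^1 p (t) \, d t = 0 \le \int \limits _{-1}^1 t^2 ( 1 - |t| ) \, d t = \frac{1}{6} \le \frac{ f (-1) + f (1) }{2} \int \limits _{-1}^1 p (t) \, d t = 1 . \end{aligned}$$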

Throughout, we denote by \( G_1 \) and \( G_2 \) the cumulative functions of \( g_1 \) and \( g_2 \) on \( [a,b] \), respectively, in the sense that

$$\begin{aligned} G_1 (s) = \int \limits _a^s g_1 (t) \, d t \;\; \text{ and } \;\; G_2 (s) = \int \limits _a^s g_2 (t) \, d t \;\; \text{ for } s \in [a,b] . \end{aligned}$$
(5)

Likewise, we denote by \( {{\mathcal {G}}}_1 \) and \( {{\mathcal {G}}}_2 \) the cumulative functions of \( G_1 \) and \( G_2 \) on \( [a,b] \), respectively, that is

$$\begin{aligned} {{\mathcal {G}}}_1 (s) = \int \limits _a^s G_1 (t) \, d t \;\; \text{ and } \;\; {{\mathcal {G}}}_2 (s) = \int \limits _a^s G_2 (t) \, d t \;\; \text{ for } s \in [a,b] . \end{aligned}$$
(6)

We now present the Levin–Stečkin theorem [8].

Theorem C

[8] Let \( g_1 , g_2 : [ a , b ] \rightarrow \mathbb {R}\) be integrable functions and \( G_1 , G_2 , {{\mathcal {G}}}_1 , {{\mathcal {G}}}_2 : [ a , b ] \rightarrow \mathbb {R}\) defined by (5), (6) be functions satisfying the condition

$$\begin{aligned} G_1 (b) = G_2 (b) \;\;\; \text{ and } \;\;\; {{\mathcal {G}}}_1 (b) = {{\mathcal {G}}}_2 (b) . \end{aligned}$$
(7)

If

$$\begin{aligned} G_2 \prec ^u_w G_1 , \end{aligned}$$

then

$$\begin{aligned} \int \limits _a^b f (t) g_2 (t) \, d t \le \int \limits _a^b f (t) g_1 (t) \, d t \end{aligned}$$
(8)

for all twice continuously differentiable convex functions \( f : [ a , b ] \rightarrow \mathbb {R}\).

In this paper, we study integral inequalities of type (8) for uniformly convex functions, strongly convex functions and superquadratic functions. Our purpose is to establish some further results related to Theorems A, B and C. Similar problems for real convex functions f are well-known (see [8, 12,13,14, 16,17,18,19]).

The paper is arranged as follows. In Sect. 2, first we point out that for a given generalized uniformly convex function \( f : [a,b] \rightarrow \mathbb {R}\), the unordered submajorization of cumulative functions \( G_1 \) and \( G_2 \) induced by \( g_1 \) and \( g_2 \), respectively, implies a refinement of inequality (8) (see Theorem 1).

Next, we provide some sufficient conditions under which the cumulative functions are unordered submajorized (see Lemma 2). In consequence, we are able to give sufficient conditions on two given functions \( g_1 \) and \( g_2 \) so that the refinement of inequality (8) holds (see Theorem 2). As an application, for uniformly convex functions we refine a result due to Elezović and Pečarić [5] (see Theorem A). This corresponds to the case of Theorem 1 in which \( g_1 \) and \( g_2 \) are probability density functions of uniform distributions.

In Sect. 3 we focus on symmetric functions. This leads to some simplifications of the results of Sect. 2. After giving some properties of cumulative functions (see Lemma 3), we interpret the previous results for symmetric functions (see Theorem 3). We establish a Levin–Stečkin type inequality with uniformly convex f. We also specify the obtained results for symmetric probability density functions (see Corollary 3). Finally, we show applications for Simpson distributions.

2 Results

Let \( I = [a,b] \) be an interval and \( \psi : [0,b-a] \rightarrow \mathbb {R}\) be a function. A function \( f : [a,b] \rightarrow \mathbb {R}\) is said to be generalized \( \psi \)-uniformly convex if

$$\begin{aligned}&f (t x + (1-t) y) \le t f (x) + (1-t) f (y) - t (1-t) \psi (|x-y|)\nonumber \\&\qquad \text{ for } x,y \in I \text{ and } t \in [0,1] \end{aligned}$$
(9)

(cf. [2]). If, in addition, \( \psi \ge 0 \), then f is said to be \(\psi \)-uniformly convex (see [15, 20]).

Observe that the case \( \psi = 0 \) corresponds to usual convex functions. Moreover, a \( \psi \)-uniformly convex function f (so, \( \psi \ge 0 \)) is necessarily convex. Conversely, if \( \psi \le 0 \), then a (usual) convex function f is generalized \(\psi \)-uniformly convex.

In general, if \( \psi _1 \le \psi _2 \), then generalized \(\psi _2\)-uniform convexity implies generalized \(\psi _1\)-uniform convexity.
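
For example, the function \( f (x) = x^2 \) is \( \psi \)-uniformly convex on any interval \( [a,b] \) with \( \psi (t) = t^2 \), since a direct computation gives

$$\begin{aligned} t x^2 + (1-t) y^2 - ( t x + (1-t) y )^2 = t (1-t) (x-y)^2 \;\;\; \text{ for } x,y \in \mathbb {R} \text{ and } t \in [0,1] , \end{aligned}$$

so (9) holds with equality; in the terminology introduced below, \( f (x) = x^2 \) is 2-strongly convex.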

We are now in a position to prove a Levin–Stečkin type theorem for generalized \( \psi \)-uniformly convex functions. Some simplifications of conditions (10) and (11) will be discussed after the proof of Theorem 1. A similar approach for convex or n-convex functions can be found in [17,18,19].

Theorem 1

Let \( I = [a,b] \) be an interval and \( \psi : [0,b-a] \rightarrow \mathbb {R}\) be a function. Let \( f : [ a , b ] \rightarrow \mathbb {R}\) be a twice continuously differentiable generalized \( \psi \)-uniformly convex function on \( [a,b] \). Denote \( \varphi (t) = \frac{\psi (t)}{t^2} \) for \( t \in (0,b-a] \) and \( \varphi (0) = \lim \limits _{t \rightarrow 0^+} \varphi (t) \).

Let \( g_1 , g_2 : [ a , b ] \rightarrow \mathbb {R}\) be integrable functions and \( G_1 , G_2 , {{\mathcal {G}}}_1, {{\mathcal {G}}}_2 : [ a , b ] \rightarrow \mathbb {R}\) defined by (5), (6) be functions satisfying the condition

$$\begin{aligned} f (b) [ G_1 (b) - G_2 (b) ] - f^\prime (b) [ {{\mathcal {G}}}_1 (b) - {{\mathcal {G}}}_2 (b) ] \ge 0 . \end{aligned}$$
(10)

If

$$\begin{aligned} G_2 \prec ^u_w G_1 \end{aligned}$$
(11)

then

$$\begin{aligned} R + \int \limits _a^b f (t) g_2 (t) \, d t \le \int \limits _a^b f (t) g_1 (t) \, d t , \end{aligned}$$
(12)

where \( R = 2 \varphi (0) \int \limits _a^b ( {{\mathcal {G}}}_1 (t) - {{\mathcal {G}}}_2 (t) ) \, d t \). In particular, \( R \ge 0 \) whenever f is a \( \psi \)-uniformly convex function on \( [a,b] \).

Proof

Inequality (11) means that

$$\begin{aligned} \int \limits _a^s G_2 (t) \, d t \le \int \limits _a^s G_1 (t) \, d t \;\;\; \text{ for } s \in [a,b] . \end{aligned}$$
(13)

By using (6) and (13) we obtain

$$\begin{aligned} {{\mathcal {G}}}_2 (t) \le {{\mathcal {G}}}_1 (t) \;\;\; \text{ for } t \in [a,b] . \end{aligned}$$
(14)

By integrating by parts twice [6, p. 129], we have (see (5) and (6))

$$\begin{aligned} \int \limits _a^b f (t) [ g_1 (t) - g_2 (t) ] \, d t= & {} f (t) [ G_1 (t) - G_2 (t) ] \vert _a^b - \int \limits _a^b f^\prime (t) [ G_1 (t) - G_2 (t) ] \, d t\nonumber \\= & {} f (t) [ G_1 (t) - G_2 (t) ] \vert _a^b - f^\prime (t) [ {{\mathcal {G}}}_1 (t) - {{\mathcal {G}}}_2 (t) ] \vert _a^b \nonumber \\&+\, \int \limits _a^b {f^{\prime \prime }} (t) [ {{\mathcal {G}}}_1 (t) - {{\mathcal {G}}}_2 (t) ] \, d t . \end{aligned}$$
(15)

It is easily seen from (5), (6) that

$$\begin{aligned} G_1 (a) = G_2 (a) = 0 \;\;\; \text{ and } \;\;\; {{\mathcal {G}}}_1 (a) = {{\mathcal {G}}}_2 (a) = 0 . \end{aligned}$$
(16)

In consequence, by (16) and (10),

$$\begin{aligned}&f (t) [ G_1 (t) - G_2 (t) ] \vert _a^b - f^\prime (t) [ {{\mathcal {G}}}_1 (t) - {{\mathcal {G}}}_2 (t) ] \vert _a^b \nonumber \\&\quad = f (b) [ G_1 (b) - G_2 (b) ] - f^\prime (b) [ {{\mathcal {G}}}_1 (b) - {{\mathcal {G}}}_2 (b) ] \ge 0 . \end{aligned}$$
(17)

It follows from (9) that

$$\begin{aligned} ( f^\prime (x) - f^\prime (y) ) ( x - y ) \ge 2 \psi (|x-y|) \;\;\; \text{ for } x,y \in I = [a,b] . \end{aligned}$$
(18)

In fact, for \( x,y \in I \) and \( t \in [0,1] \), (9) gives

$$\begin{aligned} f (y + t (x-y) ) - f (y) \le t ( f (x) - f (y) ) - t (1-t) \psi (|x-y|) \end{aligned}$$
(19)

and further for \( t \in (0,1] \),

$$\begin{aligned} \frac{f (y + t (x-y) ) - f (y)}{t} \le f (x) - f (y) - (1-t) \psi (|x-y|) . \end{aligned}$$
(20)

Hence for \( x,y \in I \), \( x \ne y \),

$$\begin{aligned}&\lim \limits _{t \rightarrow 0^+} \frac{f (y + t (x-y) ) - f (y)}{t (x-y)} \left( x - y \right) \nonumber \\&\quad \le \lim \limits _{t \rightarrow 0^+} ( f (x) - f (y) - (1-t) \psi (|x-y|) ) . \end{aligned}$$
(21)

Therefore,

$$\begin{aligned} f^\prime (y) (x-y) \le f (x) - f (y) - \psi (|x-y|) \;\;\; \text{ for } x,y \in I , x \ne y . \end{aligned}$$
(22)

For \( x = y \) inequality (22) also holds, because \( \psi (0) \le 0 \) is satisfied by (9).

By interchanging the roles of x and y in (22), we get

$$\begin{aligned} f^\prime (x) (y-x) \le f (y) - f (x) - \psi (|x-y|) \;\;\; \text{ for } x,y \in I . \end{aligned}$$
(23)

By multiplying both sides by \( -1 \), we obtain

$$\begin{aligned} f^\prime (x) (x-y) \ge f (x) - f (y) + \psi (|x-y|) \;\;\; \text{ for } x,y \in I . \end{aligned}$$
(24)

Now, subtracting inequality (22) from inequality (24) side by side yields (18), as claimed.

It holds that

$$\begin{aligned} f^{\prime \prime } (y) \ge 2 \varphi (0) \;\;\; \text{ for } y \in I = [a,b] . \end{aligned}$$
(25)

To see this, observe that (18) implies

$$\begin{aligned} \frac{ f^\prime (x) - f^\prime (y) }{x-y} \ge 2 \frac{\psi (x-y)}{(x-y)^2} = 2 \varphi (x-y) \;\;\; \text{ for } x,y \in I , x > y , \end{aligned}$$
(26)

because \( \psi (x-y) = (x-y)^2 \varphi (x-y) \).

Consequently,

$$\begin{aligned} f^{\prime \prime } (y) = \lim \limits _{x \rightarrow y^+} \frac{ f^\prime (x) - f^\prime (y) }{x-y} \ge 2 \lim \limits _{x \rightarrow y^+} \varphi (x-y) = 2 \varphi (0) \;\;\; \text{ for } y \in I , \end{aligned}$$
(27)

which gives (25).

In conclusion, we get

$$\begin{aligned} \int \limits _a^b {f^{\prime \prime }} (t) [ {{\mathcal {G}}}_1 (t) - {{\mathcal {G}}}_2 (t) ] \, d t \ge 2 \varphi (0) \int \limits _a^b [ {{\mathcal {G}}}_1 (t) - {{\mathcal {G}}}_2 (t) ] \, d t = R . \end{aligned}$$
(28)

Therefore, by (15), (17) and (28), we deduce that

$$\begin{aligned} \int \limits _a^b f(t) [ g_1 (t) - g_2 (t) ] \, d t \ge R . \end{aligned}$$

In addition, \( R \ge 0 \) provided that f is \(\psi \)-uniformly convex, because \( {{\mathcal {G}}}_1 (t) - {{\mathcal {G}}}_2 (t) \ge 0 \) for \( t \in [a,b] \) by (14), and \( \psi \ge 0 \) implies \( \varphi \ge 0 \).

This completes the proof of (12). \(\square \)

Let \( m \ge 0 \) be a real number. A function \( f : I = [a,b] \rightarrow \mathbb {R}\) is said to be m-strongly convex if it is \( \psi \)-uniformly convex for \( \psi (t) = \frac{m}{2} t^2 \), i.e.,

$$\begin{aligned}&f (t x + (1-t) y) \le t f (x) + (1-t) f (y) - t (1-t) \frac{m}{2} (x-y)^2 \;\;\; \nonumber \\&\quad \text{ for } x,y \in I \hbox { and } t \in [0,1] . \end{aligned}$$
(29)

Note that m-strongly convex functions with \( m = 0 \) are simply convex.

Corollary 1

Under the hypotheses of Theorem 1, let \( f : [ a , b ] \rightarrow \mathbb {R}\) be a twice continuously differentiable m-strongly convex function on \( [a,b] \) with \( m \ge 0 \). If conditions (10), (11) are fulfilled, then inequality (12) holds with \( R = m \int \limits _a^b ( {{\mathcal {G}}}_1 (t) - {{\mathcal {G}}}_2 (t) ) \, d t \).

Proof

It is enough to use Theorem 1 with \( \psi (t) = \frac{m}{2} t^2 \) and \( \varphi (t) = \frac{m}{2} \) for \( t \in [0,b-a] \). \(\square \)

Let \( f : [ 0 , b ] \rightarrow \mathbb {R}\) be a differentiable function. The function f is said to be superquadratic on [0, b] if

$$\begin{aligned} f (x) - f (y) \ge f^{\prime } (y) ( x - y ) + f (|x-y|) \;\;\; \text{ for } x,y \in I = [0,b] . \end{aligned}$$
(30)
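
For instance, \( f (x) = x^2 \) is superquadratic on [0, b], since

$$\begin{aligned} f (x) - f (y) = x^2 - y^2 = 2 y ( x - y ) + ( x - y )^2 = f^{\prime } (y) ( x - y ) + f (|x-y|) , \end{aligned}$$

so (30) holds with equality; more generally, \( f (x) = x^p \) with \( p \ge 2 \) is known to be superquadratic on [0, b].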

Corollary 2

Under the hypotheses of Theorem 1 with \( a = 0 \), let \( f : [ 0 , b ] \rightarrow \mathbb {R}\) be a twice continuously differentiable superquadratic function on [0, b]. If conditions (10), (11) are fulfilled, then inequality (12) holds with \( R = 2 \varphi (0) \int \limits _0^b ( {{\mathcal {G}}}_1 (t) - {{\mathcal {G}}}_2 (t) ) \, d t \), where \( \varphi (t) = \frac{f (t)}{t^2} \) for \( t \in (0,b] \) and \( \varphi (0) = \lim \limits _{t \rightarrow 0^+} \varphi (t) \).

Proof

Proceeding as in the proof of Theorem 1 with \( a = 0 \), \( \psi (t) = f (t) \) for \( t \in [0,b] \), and \( \varphi (t) = \frac{f (t)}{t^2} \) for \( t \in (0,b] \), we can see that the superquadracity of f on [0, b] leads to the validity of inequality (12).

Indeed, property (30) guarantees that inequalities (22) and (24) are met with \( \psi = f \), which implies (18) and (25) with \( \psi = f \) and \( \varphi (t) = \frac{f (t)}{t^2} \) for \( t \in (0,b] \) and \( \varphi (0) = \lim \limits _{t \rightarrow 0^+} \varphi (t) \). Hence (28) is satisfied.

Finally, by compiling (15), (17) and (28) we get

$$\begin{aligned} \int \limits _a^b f(t) [ g_1 (t) - g_2 (t) ] \, d t \ge R . \end{aligned}$$

This completes the proof of (12) for a superquadratic function f. \(\square \)

We now discuss sufficient conditions for majorization inequalities (11) and (13) to be valid.

The following lemma is based on a discrete result due to Marshall et al. (see [9, Proposition B.1., p. 186]). It is also inspired by Ohlin’s Lemma [13], see also [14, Lemma 1].

Lemma 1

Let \( g_1 , g_2 : [ a , b ] \rightarrow \mathbb {R}\) be integrable functions such that

$$\begin{aligned} \int \limits _a^b g_2 (t) \, d t \le \int \limits _a^b g_1 (t) \, d t , \end{aligned}$$
(31)

and, in addition, there exists \( c \in [a,b] \) satisfying

$$\begin{aligned} g_2 (t) \le g_1 (t) \;\;\; \text{ for } t \in [a,c) , \;\;\; \text{ and } \;\;\; g_1 (t) \le g_2 (t) \;\;\; \text{ for } t \in [c,b] . \end{aligned}$$
(32)

Then

$$\begin{aligned} \int \limits _a^s g_2 (t) \, d t \le \int \limits _a^s g_1 (t) \, d t \end{aligned}$$
(33)

for \( s \in [a,b] \).

Proof

It follows from the first inequality in (32) that (33) holds for \( s \in [a,c) \).

Assume that \( s \in [c,b] \). Due to (31) we can see that

$$\begin{aligned} \int \limits _a^s g_2 (t) \, d t= & {} \int \limits _a^b g_2 (t) \, d t - \int \limits _s^b g_2 (t) \, d t \le \int \limits _a^b g_1 (t) \, d t - \int \limits _s^b g_2 (t) \, d t\\\le & {} \int \limits _a^b g_1 (t) \, d t - \int \limits _s^b g_1 (t) \, d t = \int \limits _a^s g_1 (t) \, d t , \end{aligned}$$

the last inequality being a consequence of the second inequality in (32).

Summarizing, inequality (33) holds for all \( s \in [a,b] \). \(\square \)
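
As a simple illustration of Lemma 1, with data chosen only as an example, take \( g_1 (t) = 2 (1-t) \), \( g_2 (t) = 1 \) on \( [0,1] \) and \( c = \frac{1}{2} \). Then (31) holds with equality, (32) is satisfied, and the conclusion (33) reduces to

$$\begin{aligned} \int \limits _0^s g_2 (t) \, d t = s \le 2 s - s^2 = \int \limits _0^s g_1 (t) \, d t \;\;\; \text{ for } s \in [0,1] . \end{aligned}$$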

In the next lemma we utilize interlaced functions \( g_1 \) and \( g_2 \) (see (35), (36)). In consequence we obtain the required inequalities (11) and (13) for the corresponding cumulative functions \( G_1 \) and \( G_2 \) (see (38)).

Lemma 2

Let \( g_1 , g_2 : [ a , b ] \rightarrow \mathbb {R}\) be integrable functions and \( G_1 , G_2 : [ a , b ] \rightarrow \mathbb {R}\) be functions defined by (5). Assume that there exists \( c \in [a,b] \) satisfying

$$\begin{aligned} \int \limits _a^c g_2 (t) \, d t = \int \limits _a^c g_1 (t) \, d t \;\;\; \text{ and } \;\;\; \int \limits _c^b g_1 (t) \, d t = \int \limits _c^b g_2 (t) \, d t , \end{aligned}$$
(34)

and, in addition, there exist \( d_1 \in [a,c) \) and \( d_2 \in [c,b] \) satisfying (a.e.)

$$\begin{aligned} g_2 (t)\le & {} g_1 (t) \;\;\; \text{ for } t \in [a,d_1) , \;\;\; \text{ and } \;\;\; g_1 (t) \le g_2 (t) \;\;\; \text{ for } t \in [d_1,c] , \end{aligned}$$
(35)
$$\begin{aligned} g_1 (t)\le & {} g_2 (t) \;\;\; \text{ for } t \in [c,d_2) , \;\;\; \text{ and } \;\;\; g_2 (t) \le g_1 (t) \;\;\; \text{ for } t \in [d_2,b] . \end{aligned}$$
(36)

If

$$\begin{aligned} \int \limits _a^b G_2 (t) \, d t \le \int \limits _a^b G_1 (t) \, d t , \end{aligned}$$
(37)

then

$$\begin{aligned} \int \limits _a^s G_2 (t) \, d t \le \int \limits _a^s G_1 (t) \, d t \;\;\; \text{ for } s \in [a,b] . \end{aligned}$$
(38)

Proof

We consider the restrictions of \( g_1 \) and \( g_2 \) to the interval \( [a,c] \). In light of Lemma 1 applied to the interval \( [a,c] \), by using (35) and the first part of (34), we find that

$$\begin{aligned} G_2 (t) \le G_1 (t) \;\;\; \text{ for } t \in [a,c] , \end{aligned}$$
(39)

with equality for \( t = c \) (see (34)).

Likewise, consider the restrictions of \( g_1 \) and \( g_2 \) to the interval \( [c,b] \). Denote

$$\begin{aligned} {\widetilde{G}}_1 (t) = \int \limits _c^t g_1 (s) \, d s \;\;\; \text{ for } t \in [c,b] , \;\;\; \text{ and } \;\;\; {\widetilde{G}}_2 (t) = \int \limits _c^t g_2 (s) \, d s \;\;\; \text{ for } t \in [c,b] . \;\;\; \end{aligned}$$

Hence

$$\begin{aligned} G_1 (t) = G_1 (c) + \widetilde{G}_1 (t) \;\;\; \text{ and } \;\;\; G_2 (t) = G_2 (c) + \widetilde{G}_2 (t) \;\; \text{ for } t \in [c,b] . \end{aligned}$$
(40)

By making use of Lemma 1, applied to the interval \( [c,b] \) via (36) and the second part of (34), we derive

$$\begin{aligned} \widetilde{G}_1 (t) \le \widetilde{G}_2 (t) \;\;\; \text{ for } t \in [c,b] , \end{aligned}$$
(41)

with equality for \( t = b \) (see (34)).

By combining (40) and (41), with \( G_1 (c) = G_2 (c) \) (see (34)), we obtain

$$\begin{aligned} G_1 (t) \le G_2 (t) \;\;\; \text{ for } t \in [c,b] . \end{aligned}$$
(42)

According to Lemma 1 applied to the functions \( G_1 \) and \( G_2 \) on the interval \( [a,b] \), properties (39), (42) and (37) imply (38), as desired. \(\square \)

Remark 1

Conditions (35), (36) say that the pair \( (g_2 , g_1 ) \) crosses twice (see [14, Definition 1]).

Remark 2

In Lemma 2, conditions (34), (35), (36) and (37) ensure that

$$\begin{aligned} g_2 \prec ^u g_1 \;\;\; \text{ on } [a,c] , \;\;\;\;\;\; g_1 \prec ^u g_2 \;\;\; \text{ on } [c,b] , \;\;\; \text{ and } \;\;\; G_2 \prec ^u_w G_1 \;\;\; \text{ on } [a,b] . \end{aligned}$$

Theorem 2

Let \( I = [a,b] \) be an interval and \( \psi : [0,b-a] \rightarrow \mathbb {R}\) be a function. Let \( f : [ a , b ] \rightarrow \mathbb {R}\) be a twice continuously differentiable generalized \( \psi \)-uniformly convex function on \( [a,b] \). Denote \( \varphi (t) = \frac{\psi (t)}{t^2} \) for \( t \in (0,b-a] \) and \( \varphi (0) = \lim \limits _{t \rightarrow 0^+} \varphi (t) \).

Let \( g_1 , g_2 : [ a , b ] \rightarrow \mathbb {R}\) be integrable functions and \( G_1 , G_2 , {{\mathcal {G}}}_1 , {{\mathcal {G}}}_2 : [ a , b ] \rightarrow \mathbb {R}\) be functions defined by (5), (6). Assume that there exist \( c \in [a,b] \), \( d_1 \in [a,c) \) and \( d_2 \in [c,b] \) satisfying conditions (34), (35), (36) and (37).

If

$$\begin{aligned} f^\prime (b) [ {{\mathcal {G}}}_1 (b) - {{\mathcal {G}}}_2 (b) ] \le 0 , \end{aligned}$$
(43)

then

$$\begin{aligned} R + \int \limits _a^b f (t) g_2 (t) \, d t \le \int \limits _a^b f (t) g_1 (t) \, d t , \end{aligned}$$
(44)

where \( R = 2 \varphi (0) \int \limits _a^b ( {{\mathcal {G}}}_1 (t) - {{\mathcal {G}}}_2 (t) ) \, d t \). In particular, \( R \ge 0 \) whenever f is a \( \psi \)-uniformly convex function on \( [a,b] \).

Proof

In light of (34) one has \( G_1 (b) = G_2 (b) \), so \( f (b) [ G_1 (b) - G_2 (b) ] = 0 \). Therefore (10) reduces to (43).

Simultaneously, conditions (34), (35), (36) and (37) of Lemma 2 ensure that (38) is satisfied. Therefore (11) is fulfilled. Now, it is sufficient to apply Theorem 1 to get (44). \(\square \)

2.1 Uniform distributions

In order to illustrate the above results, we now show how to use Theorem 2 to extend the sufficiency part of Theorem A [5] to uniformly convex functions.

Let \( I = [a,b] \) be an interval, \( \psi : [0,b-a] \rightarrow \mathbb {R}\) be a function, \( \varphi (t) = \frac{\psi (t)}{t^2} \) for \( t \in (0,b-a] \) and \( \varphi (0) = \lim \limits _{t \rightarrow 0^+} \varphi (t) \). Take \( f : [a,b] \rightarrow \mathbb {R}\) to be a twice continuously differentiable generalized \( \psi \)-uniformly convex function on \( [a,b] \).

Assume that \( x_1 , x_2 , y_1 , y_2 \in [a,b] \) are such that \( (x_2,y_2) \prec (x_1,y_1) \) and \( a \le x_1 \le x_2< \frac{a+b}{2} < y_2 \le y_1 \le b \), with \( c = \frac{a+b}{2} = \frac{x_1 + y_1}{2} = \frac{x_2 + y_2}{2} \). Set

$$\begin{aligned} g_1 (t) {=} \left\{ \begin{array}{ll} \frac{1}{y_1 - x_1} &{}\quad \text{ for } t \in [x_1,y_1] , \\ 0 &{}\quad \text{ otherwise, } \end{array} \right. \;\;\;\;\;\; \text{ and } \;\;\;\;\;\; g_2 (t) = \left\{ \begin{array}{ll} \frac{1}{y_2 - x_2} &{}\quad \text{ for } t \in [x_2,y_2] , \\ 0 &{}\quad \text{ otherwise. } \end{array} \right. \end{aligned}$$

By putting \( d_1 = x_2 \) and \( d_2 = y_2 \), we see that conditions (35), (36) are satisfied. Furthermore, (34) holds in the form

$$\begin{aligned} \int \limits _a^c g_2 (t) \, d t = \int \limits _a^c g_1 (t) \, d t = \frac{1}{2} = \int \limits _c^b g_1 (t) \, d t = \int \limits _c^b g_2 (t) \, d t . \end{aligned}$$

In this way, we have \( G_1 (b) = G_2 (b) \). We also find by a straightforward calculation that

$$\begin{aligned} {{\mathcal {G}}}_1 (b) = \int \limits _a^b G_1 (t) \, d t = \frac{1}{2} (b - a) \;\;\; \text{ and } \;\;\; {{\mathcal {G}}}_2 (b) = \int \limits _a^b G_2 (t) \, d t = \frac{1}{2} (b - a) . \end{aligned}$$

So, we infer that (37) is valid.

Since \( {{\mathcal {G}}}_1 (b) = {{\mathcal {G}}}_2 (b) \), condition (43) is satisfied trivially. Taking Theorem 2 into consideration, we obtain (44) with the above \( g_1 \) and \( g_2 \), as follows:

$$\begin{aligned} R + \frac{1}{ y_2 - x_2 } \int \limits _{x_2}^{y_2} f (t) \, d t \le \frac{1}{ y_1 - x_1 } \int \limits _{x_1}^{y_1} f (t) \, d t , \end{aligned}$$
(45)

where \( R = 2 \varphi (0) \int \limits _a^b ( {{\mathcal {G}}}_1 (t) - {{\mathcal {G}}}_2 (t) ) \, d t \) (see (46)).

By direct computations, we find that

$$\begin{aligned} G_1 (t) {=} \left\{ \begin{array}{ll} 0 &{}\quad \hbox { for}\ t \in [a,x_1) \\ \frac{t - x_1}{y_1 - x_1} &{}\quad \hbox { for}\ t \in [x_1,y_1] \\ 1 &{}\quad \hbox { for}\ t \in (y_1,b] \end{array} \right. \;\;\; \text{ and } \;\;\; G_2 (t) {=} \left\{ \begin{array}{ll} 0 &{}\quad \hbox { for}\ t \in [a,x_2) \\ \frac{t - x_2}{y_2 - x_2} &{}\quad \hbox { for}\ t \in [x_2,y_2] \\ 1 &{}\quad \hbox { for}\ t \in (y_2,b] \end{array} \right. . \end{aligned}$$

Hence we derive

$$\begin{aligned} {{\mathcal {G}}}_1 (u) = \left\{ \begin{array}{ll} 0 &{}\quad \hbox { for}\ u \in [a,x_1) \\ \frac{(u - x_1)^2}{2 (y_1 - x_1)} &{}\quad \hbox { for}\ u \in [x_1,y_1] \\ u - \frac{x_1 + y_1}{2} &{}\quad \hbox { for}\ u \in (y_1,b] \end{array} \right. \text{ and } \;\; {{\mathcal {G}}}_2 (u) = \left\{ \begin{array}{ll} 0 &{}\quad \hbox { for}\ u \in [a,x_2) \\ \frac{(u - x_2)^2}{2 (y_2 - x_2)} &{}\quad \hbox { for}\ u \in [x_2,y_2] \\ u - \frac{x_2 + y_2}{2} &{}\quad \hbox { for}\ u \in (y_2,b] \end{array} \right. . \end{aligned}$$

Therefore we have

$$\begin{aligned} {{\mathcal {G}}}_1 (u) - {{\mathcal {G}}}_2 (u) = \left\{ \begin{array}{ll} 0 &{}\quad \hbox { for}\ u \in [a,x_1) \\ \frac{(u - x_1)^2}{2 (y_1 - x_1)} &{}\quad \hbox { for}\ u \in [x_1,x_2) \\ \frac{(u - x_1)^2}{2 (y_1 - x_1)} - \frac{(u - x_2)^2}{2 (y_2 - x_2)} &{}\quad \hbox { for}\ u \in [x_2,y_2] \\ \frac{(u - x_1)^2}{2 (y_1 - x_1)} - u + \frac{x_2 + y_2}{2} &{}\quad \hbox { for}\ u \in [y_2,y_1] \\ 0 &{}\quad \hbox { for}\ u \in (y_1,b] \end{array} \right. . \end{aligned}$$

Because \( x_1 + y_1 = x_2 + y_2 \), a bit of algebra gives

$$\begin{aligned} \int \limits _{a}^b ( {{\mathcal {G}}}_1 (u) - {{\mathcal {G}}}_2 (u) ) \, d u = \frac{1}{6} \left[ (y_1 - y_2) ( x_1 + y_1 ) - (x_2 - x_1) (x_1 + x_2) \right] . \end{aligned}$$

So, we deduce from (45) that

$$\begin{aligned}&\frac{1}{3} \varphi (0) \left[ (y_1 - y_2) ( x_1 + y_1 ) - (x_2 - x_1) (x_1 + x_2) \right] + \frac{1}{ y_2 - x_2 } \int \limits _{x_2}^{y_2} f (t) \, d t\nonumber \\&\quad \le \frac{1}{ y_1 - x_1 } \int \limits _{x_1}^{y_1} f (t) \, d t . \end{aligned}$$
(46)

In particular, for an m-strongly convex function f we obtain the inequality

$$\begin{aligned}&\frac{1}{6} m \left[ (y_1 - y_2) ( x_1 + y_1 ) - (x_2 - x_1) (x_1 + x_2) \right] + \frac{1}{ y_2 - x_2 } \int \limits _{x_2}^{y_2} f (t) \, d t\\&\quad \le \frac{1}{ y_1 - x_1 } \int \limits _{x_1}^{y_1} f (t) \, d t . \end{aligned}$$

Also, for a superquadratic function f inequality (46) remains valid with \( \varphi (0) = \lim \limits _{t \rightarrow 0^+} \frac{f (t)}{t^2} \). If, moreover, f is positive, then f must be convex, and in this case (46) refines the original inequality of Theorem A due to [5].
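
As a quick sanity check of (46), with data chosen only for illustration, let \( f (t) = t^2 \), which is 2-strongly convex (so \( \varphi (0) = 1 \)), and take \( a = x_1 = 0 \), \( b = y_1 = 2 \), \( x_2 = \frac{1}{2} \), \( y_2 = \frac{3}{2} \). Then

$$\begin{aligned} \frac{1}{3} \varphi (0) \left[ (y_1 - y_2) ( x_1 + y_1 ) - (x_2 - x_1) (x_1 + x_2) \right] + \frac{1}{ y_2 - x_2 } \int \limits _{x_2}^{y_2} t^2 \, d t = \frac{1}{4} + \frac{13}{12} = \frac{4}{3} = \frac{1}{ y_1 - x_1 } \int \limits _{x_1}^{y_1} t^2 \, d t , \end{aligned}$$

so both sides of (46) coincide. This is expected, because \( f^{\prime \prime } \equiv 2 = 2 \varphi (0) \) and \( G_1 (b) = G_2 (b) \), \( {{\mathcal {G}}}_1 (b) = {{\mathcal {G}}}_2 (b) \), so the two estimates used in the proof of Theorem 1 hold with equality.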

3 Applications for symmetric functions

We are interested in simplifying the assumptions of the results in the previous section. To this end we employ symmetric functions.

A function \( g : [a,b] \rightarrow \mathbb {R}\) is said to be symmetric about \( c = \frac{a+b}{2} \) if

$$\begin{aligned} g (c - u ) = g (c + u ) \;\;\; \text{ for } u \in [ 0, \frac{b-a}{2} ] \text{. } \end{aligned}$$
(47)

Lemma 3

Let \( g : [a,b] \rightarrow \mathbb {R}\) be an integrable symmetric function about \( c = \frac{a+b}{2} \), and \( G : [a,b] \rightarrow \mathbb {R}\) be the cumulative function of g defined by (2).

Then

  1. (i)

    G is rotationally symmetric about the point \( (c, G (c)) \), i.e.,

    $$\begin{aligned} G (c) - G (c-u) = G (c+u) - G (c) \;\;\; \text{ for } u \in \left[ 0, \frac{b-a}{2} \right] , \end{aligned}$$
    (48)
  2. (ii)

    the following equality holds:

    $$\begin{aligned} \int \limits _a^b G (t) \, d t = (b-a) G (c) . \end{aligned}$$
    (49)

Proof

  1. (i)

    Fix any \( u \in [ 0, \frac{b-a}{2} ] \). It is not hard to check that

    $$\begin{aligned} G (c-u)= & {} \int \limits _a^{c-u} g (t) \, d t = \int \limits _a^c g (t) \, d t + \int \limits _c^{c-u} g (t) \, d t = G (c) - \int \limits _0^u g (c-v) \, d v ,\\ G (c+u)= & {} \int \limits _a^{c+u} g (t) \, d t = \int \limits _a^c g (t) \, d t + \int \limits _c^{c+u} g (t) \, d t = G (c) + \int \limits _0^u g (c+v) \, d v . \end{aligned}$$

    Therefore, by (47), we derive

    $$\begin{aligned} G (c) - G (c-u) = \int \limits _0^u g (c-v) \, d v = \int \limits _0^u g (c+v) \, d v = G (c+u) - G (c) , \end{aligned}$$

    which proves (48).

  2. (ii)

    It follows that

    $$\begin{aligned} \int \limits _a^c G (t) \, d t= & {} \int \limits _a^c G (c) \, d t - \left( \int \limits _a^c ( G (c) - G (t) ) \, d t \right) \nonumber \\= & {} \int \limits _a^c G (c) \, d t - P_1 = (c-a) G (c) - P_1 , \end{aligned}$$
    (50)

    and

    $$\begin{aligned} \int \limits _c^b G (t) \, d t= & {} \int \limits _c^b G (c) \, d t + \left( \int \limits _c^b ( G (t) - G (c) ) \, d t \right) \nonumber \\= & {} \int \limits _c^b G (c) \, d t + P_2 = (b-c) G (c) + P_2 , \end{aligned}$$
    (51)

    where

    $$\begin{aligned} P_1 = \int \limits _a^c ( G (c) - G (t) ) \, d t = \int \limits _0^{b-c} ( G (c) - G (c-v) ) \, d v \end{aligned}$$

    and

    $$\begin{aligned} P_2 = \int \limits _c^b ( G (t) - G (c) ) \, d t = \int \limits _0^{b-c} ( G (c+v) - G (c) ) \, d v . \end{aligned}$$

    In view of (48) we find that \( P_1 = P_2 \). Hence, by (50) and (51),

    $$\begin{aligned} \int \limits _a^b G (t) \, d t {=} \int \limits _a^c G (t) \, d t {+} \int \limits _c^b G (t) \, d t {=} (c {-} a {+} b {-} c) G (c) {-} P_1 {+} P_2 {=} (b - a) G (c) . \end{aligned}$$

    Thus we see that (49) holds valid.

\(\square \)
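
For instance, with the illustrative choice \( g (t) = t (1-t) \) on \( [0,1] \), which is symmetric about \( c = \frac{1}{2} \), one has \( G (s) = \frac{s^2}{2} - \frac{s^3}{3} \), and (49) is confirmed by

$$\begin{aligned} \int \limits _0^1 G (t) \, d t = \frac{1}{6} - \frac{1}{12} = \frac{1}{12} = (1 - 0) \, G \left( \frac{1}{2} \right) . \end{aligned}$$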

Theorem 3

(Symmetric functions.) Let \( I = [a,b] \) be an interval and \( \psi : [0,b-a] \rightarrow \mathbb {R}\) be a function. Let \( f : [ a , b ] \rightarrow \mathbb {R}\) be a twice continuously differentiable generalized \( \psi \)-uniformly convex function on \( [a,b] \). Denote \( \varphi (t) = \frac{\psi (t)}{t^2} \) for \( t \in (0,b-a] \) and \( \varphi (0) = \lim \limits _{t \rightarrow 0^+} \varphi (t) \).

Let \( g_1 , g_2 : [a,b] \rightarrow \mathbb {R}\) be integrable symmetric functions about \( c = \frac{a+b}{2} \), and \( G_1 , G_2 : [a,b] \rightarrow \mathbb {R}\) be the cumulative functions of \( g_1 \) and \( g_2 \) defined by (5), respectively, and \( {{\mathcal {G}}}_1 , {{\mathcal {G}}}_2 : [a,b] \rightarrow \mathbb {R}\) be the cumulative functions of \( G_1 \) and \( G_2 \) defined by (6), respectively.

Assume that

$$\begin{aligned} G_2 (c) = G_1 (c) \end{aligned}$$
(52)

and, in addition, there exists \( d_2 \in [c,b] \) satisfying (a.e.)

$$\begin{aligned} g_1 (t) \le g_2 (t) \;\;\; \text{ for } t \in [c,d_2) , \;\;\; \text{ and } \;\;\; g_2 (t) \le g_1 (t) \;\;\; \text{ for } t \in [d_2,b] . \end{aligned}$$
(53)

Then

$$\begin{aligned} R + \int \limits _a^b f (t) g_2 (t) \, d t \le \int \limits _a^b f (t) g_1 (t) \, d t , \end{aligned}$$
(54)

where \( R = 2 \varphi (0) \int \limits _a^b ( {{\mathcal {G}}}_1 (t) - {{\mathcal {G}}}_2 (t) ) \, d t \).

Proof

Because of (52) and Lemma 3 (i) (applied with \( u = \frac{b-a}{2} \)), we have \( G_1 (b) = 2 G_1 (c) = 2 G_2 (c) = G_2 (b) \). For symmetric functions, conditions (34), (35) and (36) reduce to (52) and (53). To see (37), we apply \( G_2 (c) = G_1 (c) \) via Lemma 3, part (ii), and we derive

$$\begin{aligned} {{\mathcal {G}}}_2 (b) = \int \limits _a^b G_2 (t) \, d t = (b-a) G_2 (c) = (b-a) G_1 (c) = \int \limits _a^b G_1 (t) \, d t = {{\mathcal {G}}}_1 (b) \end{aligned}$$

(see (6)). Hence \( {{\mathcal {G}}}_1 (b) = {{\mathcal {G}}}_2 (b) \), so condition (43) is fulfilled, too. We now appeal to Theorem 2 to get the desired result. \(\square \)

A result for symmetric probability density functions is given as follows.

Corollary 3

(Symmetric p.d.f.) Under the assumptions of Theorem 3 with condition (52) omitted, let \( g_1 , g_2 : [a,b] \rightarrow \mathbb {R}\) be probability density functions symmetric about \( c = \frac{a+b}{2} \).

Then inequality (54) holds.

Proof

For symmetric probability density functions \( g_1 \) and \( g_2 \), condition (52) holds, because

$$\begin{aligned} G_1 (c) = \int \limits _a^c g_1 (t) \, d t = \frac{1}{2} = \int \limits _a^c g_2 (t) \, d t = G_2 (c) . \end{aligned}$$

So, the result is true according to Theorem 3. \(\square \)

3.1 Levin–Stečkin type inequalities for uniformly convex functions

We now demonstrate the use of Theorem 3 to derive a Levin–Stečkin type inequality with uniformly convex f.

Let \( I = [a,b] \) be an interval and \( \psi : [0,b-a] \rightarrow \mathbb {R}\) be a function. We denote \( \varphi (t) = \frac{\psi (t)}{t^2} \) for \( t \in (0,b-a] \) with \( \varphi (0) = \lim \limits _{t \rightarrow 0^+} \varphi (t) \).

Let \( f : I \rightarrow \mathbb {R}\) be a twice continuously differentiable generalized \( \psi \)-uniformly convex function on I. Let \( p : [a,b] \rightarrow \mathbb {R}\) be a non-negative integrable weight function on \( [a,b] \). Suppose that p is symmetric about \( c = \frac{a+b}{2} \).

We also introduce

$$\begin{aligned} C = \frac{1}{b-a} \int \limits _a^b p (t) \, d t . \end{aligned}$$
(55)

In the case when there exists \( d_2 \in [c,b] \) satisfying (a.e.)

$$\begin{aligned} C \le p (t) \;\;\; \text{ for } t \in [c,d_2) , \;\;\; \text{ and } \;\;\; p (t) \le C \;\;\; \text{ for } t \in [d_2,b] , \end{aligned}$$
(56)

we set

$$\begin{aligned} g_1 (t) = C \;\;\;\;\; \text{ and } \;\;\;\;\; g_2 (t) = p (t) \;\;\; \text{ for } t \in [a,b] . \end{aligned}$$
(57)

Thus (53) is fulfilled.

Since \( c = \frac{a+b}{2} \), we have \( b-c = \frac{1}{2} (b-a) \), and by the symmetry of p about c,

$$\begin{aligned} \int \limits _a^c p (t) \, d t = \int \limits _c^b p (t) \, d t = \frac{1}{2} \int \limits _a^b p (t) \, d t . \end{aligned}$$

From this, by (55) and (57),

$$\begin{aligned} g_1 (t) = C = \frac{1}{b-c} \int \limits _c^b p (t) \, d t = \frac{1}{b-c} \int \limits _c^b g_2 (t) \, d t \;\;\; \text{ for } t \in [a,b] , \end{aligned}$$

which easily leads to (52) as follows

$$\begin{aligned} \int \limits _c^b g_1 (t) \, d t = C (b-c) = \int \limits _c^b g_2 (t) \, d t . \end{aligned}$$

To sum up, inequality (54) in Theorem 3 guarantees that

$$\begin{aligned} R + \int \limits _a^b f (t) p (t) \, d t \le \frac{1}{b-a} \int \limits _a^b f (t) \, d t \int \limits _a^b p (t) \, d t , \end{aligned}$$
(58)

which is a Levin–Stečkin type inequality for a generalized \( \psi \)-uniformly convex function f (cf. [10]). Here \( R = 2 \varphi (0) \int \limits _a^b ( {{\mathcal {G}}}_1 (t) - {{\mathcal {G}}}_2 (t) ) \, d t \) (see below).

Additionally, we have

$$\begin{aligned} G_1 (s) = \int \limits _a^s g_1 (u) \, d u = \int \limits _a^s C \, d u = C (s-a) \;\;\; \text{ for } s \in [a,b] . \end{aligned}$$

Hence

$$\begin{aligned} {{\mathcal {G}}}_1 (t) = \int \limits _a^t G_1 (s) \, d s = \int \limits _a^t C (s-a) \, d s = C \frac{(t-a)^2}{2} \;\;\; \text{ for } t \in [a,b] . \end{aligned}$$

So, we infer that

$$\begin{aligned} R = 2 \varphi (0) \int \limits _a^b \left( C \frac{(t-a)^2}{2} - {{\mathcal {G}}}_2 (t) \right) \, d t = 2 \varphi (0) \left( C \frac{(b-a)^3}{6} - \int \limits _a^b {{\mathcal {G}}}_2 (t) \, d t \right) . \end{aligned}$$
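
As a quick sanity check of (58), consider the illustrative data \( a = 0 \), \( b = 1 \), \( p (t) = t (1-t) \) and \( f (t) = t^2 \) (so \( \varphi (0) = 1 \)). Then \( C = \frac{1}{6} \), condition (56) holds, \( {{\mathcal {G}}}_2 (t) = \frac{t^3}{6} - \frac{t^4}{12} \), and

$$\begin{aligned} R = 2 \left( \frac{1}{36} - \frac{1}{40} \right) = \frac{1}{180} , \;\;\;\;\;\; R + \int \limits _0^1 t^2 \, p (t) \, d t = \frac{1}{180} + \frac{1}{20} = \frac{1}{18} = \int \limits _0^1 t^2 \, d t \int \limits _0^1 p (t) \, d t , \end{aligned}$$

so (58) holds with equality here (note that \( b - a = 1 \)); this is expected for \( f (t) = t^2 \), since then \( f^{\prime \prime } \equiv 2 \varphi (0) \).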

On the other hand, in the case when there exists \( d_2 \in [c,b] \) satisfying (a.e.)

$$\begin{aligned} p (t) \le C \;\;\; \text{ for } t \in [c,d_2) , \;\;\; \text{ and } \;\;\; C \le p (t) \;\;\; \text{ for } t \in [d_2,b] , \end{aligned}$$
(59)

we put

$$\begin{aligned} g_1 (t) = p (t) \;\;\;\;\; \text{ and } \;\;\;\;\; g_2 (t) = C \;\;\; \text{ for } t \in [a,b] . \end{aligned}$$
(60)

Hence (53) is satisfied.

As previously, by the symmetry of p about \( c = \frac{a+b}{2} \), and thanks to (55) and (60) we can write

$$\begin{aligned} g_2 (t) = C = \frac{1}{b-c} \int \limits _c^b p (t) \, d t = \frac{1}{b-c} \int \limits _c^b g_1 (t) \, d t \;\;\; \text{ for } t \in [a,b] . \end{aligned}$$

This forces (52), because

$$\begin{aligned} \int \limits _c^b g_2 (t) \, d t = C (b-c) = \int \limits _c^b g_1 (t) \, d t . \end{aligned}$$

Finally, we deduce from inequality (54) in Theorem 3 that

$$\begin{aligned} R + \frac{1}{b-a} \int \limits _a^b f (t) \, d t \int \limits _a^b p (t) \, d t \le \int \limits _a^b f (t) p (t) \, d t , \end{aligned}$$
(61)

with \( R = 2 \varphi (0) \int \limits _a^b ( {{\mathcal {G}}}_1 (t) - {{\mathcal {G}}}_2 (t) ) \, d t \) (see below). This is a Levin–Stečkin type inequality for a generalized \( \psi \)-uniformly convex function f (cf. [10]).

Furthermore,

$$\begin{aligned} G_2 (s) = \int \limits _a^s g_2 (u) \, d u = \int \limits _a^s C \, d u = C (s-a) \;\;\; \text{ for } s \in [a,b] , \end{aligned}$$

and

$$\begin{aligned} {{\mathcal {G}}}_2 (t) = \int \limits _a^t G_2 (s) \, d s = \int \limits _a^t C (s-a) \, d s = C \frac{(t-a)^2}{2} \;\;\; \text{ for } t \in [a,b] . \end{aligned}$$

Therefore, we conclude that

$$\begin{aligned} R = 2 \varphi (0) \int \limits _a^b \left( {{\mathcal {G}}}_1 (t) - C \frac{(t-a)^2}{2} \right) \, d t = 2 \varphi (0) \left( \int \limits _a^b {{\mathcal {G}}}_1 (t) \, d t - C \frac{(b-a)^3}{6} \right) . \end{aligned}$$

3.2 Simpson distributions

Recall that Theorem A corresponds to the uniform distribution on an interval \( [a,b] \). We shall establish a similar result corresponding to the Simpson (triangular) distribution on an interval \( [a,b] \).

As usual, \( f : [a,b] \rightarrow \mathbb {R}\) is a twice continuously differentiable generalized \( \psi \)-uniformly convex function, where \( \psi : [0,b-a] \rightarrow \mathbb {R}\) is a function. Also, \( \varphi (t) = \frac{\psi (t)}{t^2} \) for \( t \in (0,b-a] \) with \( \varphi (0) = \lim \limits _{t \rightarrow 0^+} \varphi (t) \).

We put \( c = \frac{a+b}{2} \) and take \( x_1 , x_2 , y_1 , y_2 \in [a,b] \) with \( (x_2,y_2) \prec (x_1,y_1) \) and \( a \le x_1< x_2< c< y_2 < y_1 \le b \), where, as in Sect. 2.1, \( c = \frac{x_1 + y_1}{2} = \frac{x_2 + y_2}{2} \).

We define \( g_1 \) and \( g_2 \) to be probability density functions of Simpson distributions on \( [a,b] \) with triangles based on the intervals \( [x_1,y_1] \) and \( [x_2,y_2] \), respectively. That is,

$$\begin{aligned} g_1 (t)&= \left\{ \begin{array}{ll} \frac{4 (t - x_1)}{(y_1 - x_1)^2} &{}\quad \text{ for } t \in [x_1,c] , \\ \frac{4 (y_1 - t)}{(y_1 - x_1)^2} &{}\quad \text{ for } t \in [c,y_1] , \\ 0 &{}\quad \text{ for } t \in [a,x_1] \cup [y_1,b] , \end{array} \right. \\ \quad \text{ and } \quad g_2 (t)&= \left\{ \begin{array}{ll} \frac{4 (t - x_2)}{(y_2 - x_2)^2} &{}\quad \text{ for } t \in [x_2,c] , \\ \frac{4 (y_2 - t)}{(y_2 - x_2)^2} &{}\quad \text{ for } t \in [c,y_2] , \\ 0 &{}\quad \text{ for } t \in [a,x_2] \cup [y_2,b] . \end{array} \right. \end{aligned}$$

By setting \( d_2 = \frac{\xi y_2 - y_1}{\xi -1 } \) with \( \xi = \left( \frac{y_1 - x_1}{y_2 - x_2} \right) ^2 \), we see that condition (53) is satisfied. Taking Corollary 3 into account, we can rewrite (54) as

$$\begin{aligned}&R + \frac{4}{ (y_2 - x_2)^2 } \left( \int \limits _{x_2}^{c} f (t) ( t - x_2 ) \, d t + \int \limits _{c}^{y_2} f (t) ( y_2 - t ) \, d t \right) \\&\quad \le \frac{4}{ (y_1 - x_1)^2 } \left( \int \limits _{x_1}^{c} f (t) ( t - x_1 ) \, d t + \int \limits _{c}^{y_1} f (t) ( y_1 - t ) \, d t \right) , \end{aligned}$$

where \( R = 2 \varphi (0) \int \limits _a^b ( {{\mathcal {G}}}_1 (t) - {{\mathcal {G}}}_2 (t) ) \, d t \).
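
As a final sanity check, with numbers chosen only for illustration, let \( f (t) = t^2 \) (so \( \varphi (0) = 1 \)) and take \( a = 0 \), \( b = 4 \), \( x_1 = 0 \), \( y_1 = 4 \), \( x_2 = 1 \), \( y_2 = 3 \), so that \( c = 2 \). Since \( \int \limits _0^4 t^2 g_i (t) \, d t = \frac{(y_i - x_i)^2}{24} + c^2 \) (the second moment of the symmetric triangular distribution on \( [x_i , y_i] \)), integrating by parts twice and using that \( g_1 \) and \( g_2 \) are probability densities with the same mean c, one finds

$$\begin{aligned} \int \limits _0^4 ( {{\mathcal {G}}}_1 (t) - {{\mathcal {G}}}_2 (t) ) \, d t = \frac{1}{2} \int \limits _0^4 t^2 ( g_1 (t) - g_2 (t) ) \, d t = \frac{1}{2} \left( \frac{16}{24} - \frac{4}{24} \right) = \frac{1}{4} , \end{aligned}$$

so \( R = \frac{1}{2} \), and both sides of the displayed inequality equal \( \frac{1}{2} + \frac{25}{6} = \frac{2}{3} + 4 = \frac{14}{3} \); equality is expected here, since \( f^{\prime \prime } \equiv 2 \varphi (0) \) for \( f (t) = t^2 \).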