1 Introduction

Let f be a continuous function defined on an interval I with nonempty interior, and define

$$ F(x,y)= \textstyle\begin{cases} \frac{1}{y-x}\int _{x}^{y}f(t)\,dt , & x,y\in I,~x\neq y, \\ f(x), & x=y\in I. \end{cases} $$
(1.1)

In a seminal work [14], Wulbert proved that the integral arithmetic mean F, defined in (1.1), is convex on \(I^{2}\) whenever the underlying function f is convex on I. In [15], Zhang and Chu independently rediscovered this result, without reference to Wulbert’s findings, again showing that the convexity of F on \(I^{2}\) follows from the convexity of f on I.
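
As a quick illustration of this result (a minimal numerical sketch, not taken from [14] or [15]; the helper name `integral_mean`, the trapezoidal quadrature, and the choice \(f=\exp \) on \(I=[0,2]\) are our own), one can approximate F and test midpoint convexity on \(I^{2}\):

```python
import numpy as np

def integral_mean(x, y, f=np.exp, npts=2001):
    # F(x, y) from (1.1): the mean value of f over [x, y], with F(x, x) = f(x).
    if x == y:
        return f(x)
    t = np.linspace(x, y, npts)
    w = np.full(npts, 1.0)
    w[0] = w[-1] = 0.5                      # trapezoidal weights
    return (w @ f(t)) * (t[1] - t[0]) / (y - x)

# Midpoint convexity of F on I^2 = [0, 2]^2 for the convex function f = exp:
# F((P + Q)/2) <= (F(P) + F(Q))/2 for random points P, Q in I^2.
rng = np.random.default_rng(0)
for _ in range(1000):
    P, Q = rng.uniform(0.0, 2.0, size=(2, 2))
    mid = integral_mean((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)
    assert mid <= (integral_mean(*P) + integral_mean(*Q)) / 2 + 1e-5
print("midpoint convexity of F verified on 1000 random pairs")
```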

The following notion will be important in our forthcoming analysis. Let f be a real-valued function defined on the interval \([a, b]\). The divided difference of order n of f at distinct points \(x_{0}, x_{1}, \dots , x_{n} \in [a, b]\) is defined recursively (see [1, 9]) by

$$ f[x_{i}]=f(x_{i})\quad (i=0,\ldots ,n) $$

and

$$ f[x_{0},\ldots ,x_{n}]= \frac{f [x_{1},\ldots ,x_{n}]-f[x_{0},\ldots ,x_{n-1}]}{x_{n}-x_{0}}, \quad n\in\mathbb{N}. $$

The value \(f[x_{0},\ldots ,x_{n}]\) remains invariant regardless of the order in which the points \(x_{0},\ldots ,x_{n}\) are arranged.
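
For concreteness, here is a minimal recursive implementation of the divided difference, together with a brute-force check of this permutation invariance (a sketch only; the helper name `divdiff` and the test function \(f(t)=t^{4}\) are our own choices):

```python
from itertools import permutations

def divdiff(f, xs):
    # Recursive divided difference f[x_0, ..., x_n] at pairwise distinct points xs.
    if len(xs) == 1:
        return f(xs[0])
    return (divdiff(f, xs[1:]) - divdiff(f, xs[:-1])) / (xs[-1] - xs[0])

f = lambda t: t ** 4
pts = (0.3, 1.1, 2.0, 2.7)
values = {round(divdiff(f, p), 9) for p in permutations(pts)}
print(values)   # a single value, the same for every ordering of the points
```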

The definition extends to the case where some (or all) of the points coincide: provided that \(f^{(j-1)}(x)\) exists, we set

$$ f[\underbrace{x,\ldots ,x}_{j\text{-times}}]= \frac{f^{(j-1)}(x)}{(j-1)!}. $$
(1.2)
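
As a numerical illustration of (1.2) (our own sketch): for equally spaced points one has \(f[x,x+h,x+2h,x+3h]=\Delta ^{3}_{h}f(x)/(3!h^{3})\), which tends to \(f'''(x)/3!\) as \(h\to 0\); for \(f=\exp \) this limit is \(e^{x}/6\).

```python
import math

x, h = 0.5, 1e-3
f = math.exp

# Third-order divided difference at the equally spaced points x, x+h, x+2h, x+3h:
# the third forward difference divided by 3! * h^3.
dd = (f(x + 3*h) - 3*f(x + 2*h) + 3*f(x + h) - f(x)) / (6 * h**3)

limit = math.exp(x) / 6            # f'''(x)/3! from (1.2) with j = 4
print(dd, limit)                   # the two values agree up to O(h)
assert abs(dd - limit) / limit < 1e-2
```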

The divided difference also admits the explicit representation

$$ f[x_{0},\ldots ,x_{n}]=\sum _{i=0}^{n} \frac{f(x_{i})}{\omega '(x_{i})},\quad \text{where } \omega (x)=\prod_{j=0}^{n}(x-x_{j}). $$

Since \(\omega '(x_{i})=\prod_{j\neq i}(x_{i}-x_{j})\), this can be written as

$$ f[x_{0},\ldots ,x_{n}]=\sum _{i=0}^{n} \frac{f(x_{i})}{\prod_{j=0,j\neq i}^{n}(x_{i}-x_{j})}. $$
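
The recursive definition and this explicit representation are easy to compare numerically (a sketch with our own helper names):

```python
import math

def divdiff_recursive(f, xs):
    if len(xs) == 1:
        return f(xs[0])
    return (divdiff_recursive(f, xs[1:]) - divdiff_recursive(f, xs[:-1])) / (xs[-1] - xs[0])

def divdiff_explicit(f, xs):
    # f[x_0,...,x_n] = sum_i f(x_i) / prod_{j != i} (x_i - x_j)
    total = 0.0
    for i, xi in enumerate(xs):
        denom = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                denom *= (xi - xj)
        total += f(xi) / denom
    return total

pts = [0.2, 0.9, 1.4, 2.3, 3.1]
print(divdiff_recursive(math.exp, pts), divdiff_explicit(math.exp, pts))  # equal up to rounding
```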

If f has a continuous nth derivative on \([a, b]\), then the divided difference \(f[x_{0},\ldots ,x_{n}]\) admits the integral representation (see [9, p. 15])

$$ f[x_{0},\ldots ,x_{n}]= \int _{\Delta _{n}}f^{(n)} \Biggl(\sum _{i=0}^{n}u_{i}x_{i} \Biggr)\,du _{0}\cdots \,du _{n-1}, $$

where

$$ \Delta _{n}= \Biggl\{ (u_{0},\ldots ,u_{n-1}) : u_{i}\geq 0,\sum_{i=0}^{n-1}u_{i} \leq 1 \Biggr\} $$

and \(u_{n}=1-\sum_{i=0}^{n-1}u_{i}\).
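
A Monte Carlo sketch of this representation (our own illustration, assuming NumPy): since \(\Delta _{n}\) has volume \(1/n!\) and \((u_{0},\ldots ,u_{n})\) drawn from the flat Dirichlet distribution is uniform on the simplex, the integral equals the simplex average of \(f^{(n)}(\sum_{i}u_{i}x_{i})\) divided by \(n!\).

```python
import math
import numpy as np

def divdiff(f, xs):
    # Recursive divided difference, for comparison with the integral representation.
    if len(xs) == 1:
        return f(xs[0])
    return (divdiff(f, xs[1:]) - divdiff(f, xs[:-1])) / (xs[-1] - xs[0])

n = 3
xs = [0.2, 0.7, 1.5, 1.9]          # n + 1 = 4 points
f = np.exp                          # f^{(n)} = exp for every n

# (u_0, ..., u_n) ~ Dirichlet(1, ..., 1) is uniform on the simplex; the first n
# coordinates are then uniform on Delta_n, whose volume is 1/n!.
rng = np.random.default_rng(1)
u = rng.dirichlet(np.ones(n + 1), size=400_000)
mc = f(u @ np.array(xs)).mean() / math.factorial(n)

print(mc, divdiff(math.exp, xs))    # agree to Monte Carlo accuracy (roughly 1e-3 relative)
```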

The notion of n-convexity is attributed to Popoviciu [10]. For the present study, we adhere to the definition as presented by Karlin [6].

Definition 1

A function \(f: [a,b]\rightarrow \mathbb{R}\) is said to be n-convex on \([a,b]\), \(n \geq 0\), if for all choices of \((n+1)\) distinct points in \([a,b]\), the nth order divided difference of f satisfies

$$ f[x_{0},\dots ,x_{n}] \geq 0. $$

It is worth noting that Popoviciu also proved the fundamental result that every continuous n-convex function on \([a,b]\) is the uniform limit of a sequence of n-convex polynomials. Moreover, [7] contains an extensive collection of related results and essential inequalities due to Favard, Berwald, and Steffensen.

The proof of the Jensen inequality for divided differences can be found in [4]:

Theorem 1

Let f be an \((n+2)\)-convex function on \((a,b)\) and \({\mathbf {x}}\in (a,b)^{n+1}\). Then

$$ G(\mathbf{x})=f[x_{0},\ldots ,x_{n}] $$

is a convex function of the vector \(\mathbf{x}=(x_{0},\ldots ,x_{n})\). Consequently,

$$ f \Biggl[\sum_{i=0}^{m}a_{i}x^{i}_{0}, \ldots ,\sum_{i=0}^{m}a_{i}x^{i}_{n} \Biggr]\leq \sum_{i=0}^{m}a_{i}f \bigl[x^{i}_{0},\ldots ,x^{i}_{n} \bigr]\quad (i~ \textit{is an upper index}) $$
(1.3)

holds for all \(a_{i}\geq 0\) such that \(\sum_{i=0}^{m}a_{i}=1\).
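
A numerical spot-check of (1.3) (our own sketch): take \(f=\exp \), which is k-convex for every k, and a convex combination of three node vectors.

```python
import math

def divdiff(f, xs):
    if len(xs) == 1:
        return f(xs[0])
    return (divdiff(f, xs[1:]) - divdiff(f, xs[:-1])) / (xs[-1] - xs[0])

f = math.exp                                   # (n+2)-convex for every n
a = [0.5, 0.3, 0.2]                            # a_i >= 0, sum = 1
X = [[0.1, 0.8, 1.5],                          # x^0 = (x^0_0, x^0_1, x^0_2)
     [0.4, 1.1, 2.0],                          # x^1
     [0.2, 0.6, 1.9]]                          # x^2

lhs = divdiff(f, [sum(a[i] * X[i][k] for i in range(3)) for k in range(3)])
rhs = sum(a[i] * divdiff(f, X[i]) for i in range(3))
print(lhs, rhs)
assert lhs <= rhs + 1e-12          # inequality (1.3)
```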

The notion of a generalized divided difference will also be needed later; we recall its definition here.

Consider a real-valued function \(f(x, y)\) defined on \(I \times J\) (\(I=[a,b]\), \(J=[c,d]\)). The divided difference of order \((n,m)\) for the function f at distinct points \(x_{0}, \ldots , x_{n} \in I\) and \(y_{0},\ldots , y_{m}\in J\) is defined as follows (see [9, p. 18]):

$$\begin{aligned} f\begin{bmatrix} x_{0}, \ldots , x_{n} \\ y_{0},\ldots , y_{m}\end{bmatrix} =&f[y_{0},\ldots , y_{m}] [x_{0}, \ldots , x_{n}] \\ =&f[x_{0}, \ldots , x_{n}] [y_{0},\ldots , y_{m}] \\ =&\sum_{i=0}^{n}\sum _{j=0}^{m} \frac{f(x_{i},y_{j})}{\omega '(x_{i})w'(y_{j})}, \end{aligned}$$
(1.4)

where \(\omega (x)=\prod_{i=0}^{n}(x-x_{i})\), \(w(y)=\prod_{j=0}^{m}(y-y_{j})\).
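
A minimal sketch of the double sum in (1.4) (our own helper names): for the separable function \(f(x,y)=x^{3}y^{2}\) the divided difference of order \((3,2)\) factors as \(g[x_{0},\ldots ,x_{3}]\,h[y_{0},\ldots ,y_{2}]\) with \(g(t)=t^{3}\), \(h(t)=t^{2}\), and hence equals 1 for any choice of distinct points.

```python
def divdiff2(f, xs, ys):
    # Divided difference of order (n, m) from (1.4):
    # sum_{i,j} f(x_i, y_j) / (omega'(x_i) * w'(y_j)).
    def wprime(t, ts):
        p = 1.0
        for s in ts:
            if s != t:
                p *= (t - s)
        return p
    return sum(f(xi, yj) / (wprime(xi, xs) * wprime(yj, ys))
               for xi in xs for yj in ys)

xs = [0.1, 0.7, 1.3, 2.2]
ys = [0.4, 1.0, 1.8]
print(divdiff2(lambda x, y: x**3 * y**2, xs, ys))   # approximately 1.0
```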

Using this definition, \((n,m)\)-convexity is introduced as follows (see [9, p. 18]):

Definition 2

A function \(f:I\times J\rightarrow \mathbb{R}\) is said to be \((n,m)\)-convex, or convex of order \((n,m)\), if for all distinct points \(x_{0}, \ldots , x_{n} \in I, y_{0},\ldots , y_{m}\in J\),

$$ f\begin{bmatrix} x_{0}, \ldots , x_{n} \\ y_{0},\ldots , y_{m}\end{bmatrix}\geq 0. $$
(1.5)

If this inequality is strict, then f is said to be strictly \((n,m)\)-convex.

In [11], Popoviciu presented and proved the following theorem:

Theorem 2

If the partial derivative \(f^{(n+m)}_{x^{n}y^{m}}=\partial ^{n+m}f/\partial x^{n}\,\partial y^{m}\) of f exists, then f is \((n,m)\)-convex if and only if

$$ f^{(n+m)}_{x^{n}y^{m}}\geq 0. $$
(1.6)

If the inequality in (1.6) is strict, then f is strictly \((n,m)\)-convex.
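
As a simple illustration of Theorem 2 (our own example): for \(f(x,y)=e^{x+y}\) every mixed partial \(\partial ^{n+m}f/\partial x^{n}\partial y^{m}=e^{x+y}>0\), so f is strictly \((n,m)\)-convex for all n, m; the sketch below spot-checks (1.5) for \((n,m)=(2,1)\).

```python
import math
import random

def divdiff2(f, xs, ys):
    # Double sum from (1.4).
    def wprime(t, ts):
        p = 1.0
        for s in ts:
            if s != t:
                p *= (t - s)
        return p
    return sum(f(xi, yj) / (wprime(xi, xs) * wprime(yj, ys))
               for xi in xs for yj in ys)

f = lambda x, y: math.exp(x + y)     # all mixed partials are exp(x+y) > 0
random.seed(0)
for _ in range(1000):
    xs = random.sample([i * 0.01 for i in range(200)], 3)   # 3 distinct x-points
    ys = random.sample([i * 0.01 for i in range(200)], 2)   # 2 distinct y-points
    assert divdiff2(f, xs, ys) >= 0                          # inequality (1.5)
print("(2,1) divided differences of exp(x+y) are nonnegative on random samples")
```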

This paper builds on a generalization of Levinson’s inequality, so we first state Levinson’s inequality itself (see [8] and [12]):

Theorem 3

Let f be a real-valued 3-convex function on \([0,2a]\). Then for \(0\leq x_{k}\leq a\), \(p_{k}>0\) (\(1\leq k\leq n\)), and \(P_{k}=\sum_{i=1}^{k}p_{i}\) (\(2\leq k\leq n\)) we have

$$\begin{aligned}& \frac{1}{P_{n}}\sum_{k=1}^{n}p_{k}f(x_{k})-f \Biggl(\frac{1}{P_{n}} \sum_{k=1}^{n}p_{k}x_{k} \Biggr) \\& \quad \leq \frac{1}{P_{n}}\sum_{k=1}^{n}p_{k}f(2a-x_{k})-f \Biggl( \frac{1}{P_{n}}\sum_{k=1}^{n}p_{k}(2a-x_{k}) \Biggr). \end{aligned}$$
(1.7)

If \(f'''>0\), then equality holds in (1.7) if and only if \(x_{1}=\cdots =x_{n}\).
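
A numerical spot-check of (1.7) (our own sketch): \(f=\exp \) satisfies \(f'''>0\) and is therefore 3-convex; we draw random \(x_{k}\in [0,a]\) and weights \(p_{k}>0\) and compare the two sides.

```python
import math
import random

def levinson_gap(f, xs, ps):
    # (1/P_n) * sum p_k f(x_k) - f((1/P_n) * sum p_k x_k)
    P = sum(ps)
    return (sum(p * f(x) for p, x in zip(ps, xs)) / P
            - f(sum(p * x for p, x in zip(ps, xs)) / P))

f, a = math.exp, 1.0                     # f''' = exp > 0, so f is 3-convex on [0, 2a]
random.seed(1)
for _ in range(1000):
    xs = [random.uniform(0.0, a) for _ in range(5)]
    ps = [random.uniform(0.1, 2.0) for _ in range(5)]
    lhs = levinson_gap(f, xs, ps)
    rhs = levinson_gap(f, [2 * a - x for x in xs], ps)
    assert lhs <= rhs + 1e-12            # inequality (1.7)
print("Levinson's inequality (1.7) holds on 1000 random samples")
```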

In [2], Bullen proved the following generalization of Theorem 3:

Theorem 4

  (a)

    Let f be a real-valued 3-convex function on \([a,b]\) and \(x_{k}\), \(y_{k}\) (\(1\leq k\leq n\)) be 2n points on \([a,b]\) such that

    $$ \max \{x_{1},\ldots ,x_{n}\}\leq \min \{y_{1},\ldots ,y_{n}\},\qquad x_{1}+y_{1}= \cdots =x_{n}+y_{n}. $$
    (1.8)

    If \(p_{k}>0\) (\(1\leq k\leq n\)), then

    $$ \frac{1}{P_{n}}\sum_{k=1}^{n}p_{k}f(x_{k})-f \Biggl(\frac{1}{P_{n}} \sum_{k=1}^{n}p_{k}x_{k} \Biggr)\leq \frac{1}{P_{n}}\sum_{k=1}^{n}p_{k}f(y_{k})-f \Biggl(\frac{1}{P_{n}}\sum_{k=1}^{n}p_{k}y_{k} \Biggr). $$
    (1.9)

    If f is strictly 3-convex, then equality holds in (1.9) if and only if \(x_{1}=\cdots =x_{n}\).

  (b)

    If (1.9) holds for a continuous function f whenever (1.8) is satisfied by 2n distinct points and \(p_{k}>0\) \((1\leq k\leq n)\), then f is 3-convex.

It is shown in [9] that the condition (1.8) can be weakened, i.e., the following result holds:

Theorem 5

Let f be a 3-convex function on \([a,b]\), \(p_{i}>0\) \((1\leq i \leq n)\), and let \(x_{k}\), \(y_{k}\) \((1\leq k\leq n)\) be points in \([a,b]\) such that

$$ x_{1}+y_{1}=\cdots =x_{n}+y_{n}=2c $$
(1.10)

and

$$\begin{aligned}& x_{i}+x_{n-i+1}\leq 2c, \end{aligned}$$
(1.11)
$$\begin{aligned}& (p_{i}x_{i}+p_{n-i+1}x_{n-i+1})/(p_{i}+p_{n-i+1}) \leq c, \quad \textit{for } 1 \leq i \leq n. \end{aligned}$$
(1.12)

Then (1.9) is valid.
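
The weakening is genuine: in the following sketch (our own example) the points violate Bullen’s ordering condition in (1.8), since \(x_{2}>y_{2}\), yet satisfy (1.10)–(1.12), and (1.9) still holds.

```python
import math

def gap(f, xs, ps):
    # (1/P_n) * sum p_k f(x_k) - f((1/P_n) * sum p_k x_k)
    P = sum(ps)
    return (sum(p * f(x) for p, x in zip(ps, xs)) / P
            - f(sum(p * x for p, x in zip(ps, xs)) / P))

f = math.exp                     # 3-convex, since f''' > 0
c = 1.0
x = [0.5, 1.3]                   # x_2 > c, so max{x_k} > min{y_k}: condition (1.8) fails
y = [2 * c - t for t in x]       # (1.10): x_k + y_k = 2c, hence y = [1.5, 0.7]
p = [1.0, 1.0]

assert x[0] + x[1] <= 2 * c                                   # (1.11)
assert (p[0] * x[0] + p[1] * x[1]) / (p[0] + p[1]) <= c       # (1.12)
print(gap(f, x, p), gap(f, y, p))   # left-hand side <= right-hand side in (1.9)
```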

The primary objective of this paper is to extend Wulbert’s result [14] to 3-convex functions; we also use related results from [13]. Moreover, we establish an inequality for divided differences by means of the generalization of Levinson’s inequality given in [9]. As a consequence, we obtain higher-order convexity of functions defined by divided differences.

2 Inequalities involving averages

Theorem 6

Let f be a real-valued 3-convex function on \([a,b]\) and let F be defined in (1.1). Then for \(p_{i}>0\) (\(1\leq i\leq n\)), \(a\leq x_{k},\tilde{x}_{k},y_{k},\tilde{y}_{k}\leq b\) (\(1\leq k\leq n\)) such that

$$\begin{aligned}& x_{1}+y_{1}=\cdots =x_{n}+y_{n}=2c,\qquad \tilde{x}_{1}+\tilde{y}_{1}=\cdots = \tilde{x}_{n}+\tilde{y}_{n}=2c,\\& x_{i}+x_{n-i+1}\leq 2c,\qquad \tilde{x}_{i}+ \tilde{x}_{n-i+1}\leq 2c,\\& \frac{p_{i}x_{i}+p_{n-i+1}x_{n-i+1}}{p_{i}+p_{n-i+1}}\leq c,\qquad \frac{p_{i}\tilde{x}_{i}+p_{n-i+1}\tilde{x}_{n-i+1}}{p_{i}+p_{n-i+1}} \leq c,\quad 1\leq i\leq n, \end{aligned}$$

and \(P_{k}=\sum_{i=1}^{k}p_{i}\) (\(2\leq k\leq n\)) we have

$$ \frac{1}{P_{n}}\sum_{k=1}^{n}p_{k}F(x_{k}, \tilde{x}_{k})-F(\bar{x}, \bar{\tilde{x}})\leq \frac{1}{P_{n}} \sum_{k=1}^{n}p_{k}F(y_{k}, \tilde{y}_{k})-F(\bar{y}, \bar{\tilde{y}}), $$
(2.1)

where \(\bar{x}=\frac{1}{P_{n}}\sum_{k=1}^{n}p_{k}x_{k}\), \(\bar{\tilde{x}}=\frac{1}{P_{n}}\sum_{k=1}^{n}p_{k}\tilde{x}_{k}\), \(\bar{y}=\frac{1}{P_{n}}\sum_{k=1}^{n}p_{k}y_{k}\), and \(\bar{\tilde{y}}=\frac{1}{P_{n}}\sum_{k=1}^{n}p_{k}\tilde{y}_{k}\).

Consequently, for \(l+m=3\) the integral arithmetic mean (1.1) is \((l,m)\)-convex on \([a,b]^{2}\).

Proof

Since the conditions

$$\begin{aligned}& s\tilde{x}_{1}+(1-s)x_{1}+s\tilde{y}_{1}+(1-s)y_{1}= \cdots =s\tilde{x}_{n}+(1-s)x_{n}+s \tilde{y}_{n}+(1-s)y_{n}=2c,\\& s\tilde{x}_{i}+(1-s)x_{i}+s\tilde{x}_{n-i+1}+(1-s)x_{n-i+1} \leq 2c,\\& \frac{p_{i} (s\tilde{x}_{i}+(1-s)x_{i} )+p_{n-i+1} (s\tilde{x}_{n-i+1}+(1-s)x_{n-i+1} )}{p_{i}+p_{n-i+1}} \leq c, \quad 1\leq i \leq n, \end{aligned}$$

from Theorem 5 are satisfied, by using inequality (1.9), we get

$$\begin{aligned}& \frac{1}{P_{n}}\sum_{k=1}^{n}p_{k}F(x_{k}, \tilde{x}_{k})-F(\bar{x}, \bar{\tilde{x}}) \\& \quad = \frac{1}{P_{n}}\sum_{k=1}^{n}p_{k} \int _{0}^{1}f\bigl(s\tilde{x}_{k}+(1-s)x_{k} \bigr)\,ds \\& \quad \quad {} - \int _{0}^{1}f \Biggl(s\frac{1}{P_{n}} \sum_{k=1}^{n}p_{k} \tilde{x}_{k}+(1-s) \frac{1}{P_{n}}\sum _{k=1}^{n}p_{k}x_{k} \Biggr)\,ds \\& \quad = \int _{0}^{1} \Biggl[\frac{1}{P_{n}}\sum _{k=1}^{n}p_{k}f\bigl(s \tilde{x}_{k}+(1-s)x_{k}\bigr)-f \Biggl( \frac{1}{P_{n}}\sum_{k=1}^{n}p_{k} \bigl(s\tilde{x}_{k}+(1-s)x_{k} \bigr) \Biggr) \Biggr]\,ds \\& \quad \leq \int _{0}^{1} \Biggl[\frac{1}{P_{n}}\sum _{k=1}^{n}p_{k}f\bigl(s \tilde{y}_{k}+(1-s)y_{k}\bigr)-f \Biggl( \frac{1}{P_{n}}\sum_{k=1}^{n}p_{k} \bigl(s\tilde{y}_{k}+(1-s)y_{k} \bigr) \Biggr) \Biggr]\,ds \\& \quad = \frac{1}{P_{n}}\sum_{k=1}^{n}p_{k} \int _{0}^{1}f\bigl(s\tilde{y}_{k}+(1-s)y_{k} \bigr)\,ds \\& \quad \quad {} - \int _{0}^{1}f \Biggl(s\frac{1}{P_{n}} \sum_{k=1}^{n}p_{k} \tilde{y}_{k}+(1-s) \frac{1}{P_{n}}\sum _{k=1}^{n}p_{k}y_{k} \Biggr)\,ds \\& \quad = \frac{1}{P_{n}}\sum_{k=1}^{n}p_{k}F(y_{k}, \tilde{y}_{k})-F( \bar{y},\bar{\tilde{y}}). \end{aligned}$$

Now, if we put \(n=2\), \(x_{1}=x\), \(x_{2}=y_{2}=x+\frac{3h}{2}\), \(y_{1}=x+3h\), \(\tilde{x}_{1}=\tilde{x}_{2}=\tilde{y}_{1}=\tilde{y}_{2}=y\), \(2x+3h=2y=2c\), \(p_{1}=1\), \(p_{2}=2\), then inequality (2.1) reduces to

$$ \frac{1}{3}F(x,y)-F(x+h,y)\leq \frac{1}{3}F(x+3h,y)-F(x+2h,y). $$

Using the definition in (1.4), we get

$$ 2h^{3}\bigl(F[x, x+h, x+2h, x+3h]\bigr)[y]\geq 0. $$

It is known (see [11]) that if this property holds for all admissible \(x,y\in [a,b]\) and \(h>0\), then F is \((3,0)\)-convex.

If we put \(n=2\), \(x_{1}=x\), \(x_{2}=x+2h_{1}\), \(y_{1}=x+2h_{1}\), \(y_{2}=x\), \(\tilde{x}_{1}= \tilde{x}_{2}=y\), \(\tilde{y}_{1}=\tilde{y}_{2}=y+h_{2}\), \(p_{1}=p_{2}=1\), \(2x+2h_{1}=2y+h_{2}=2c\) then inequality (2.1) reduces to

$$\begin{aligned}& \frac{1}{2} \bigl(F(x,y)+F(x+2h_{1},y) \bigr)-F(x+h_{1},y) \\& \quad \leq \frac{1}{2} \bigl(F(x+2h_{1},y+h_{2})+F(x,y+h_{2}) \bigr)-F(x+h_{1},y+h_{2}). \end{aligned}$$

Using the definition in (1.4), we get

$$ h_{1}^{2}h_{2}\bigl(F[x, x+h_{1}, x+2h_{1}]\bigr)[y, y+h_{2}]\geq 0. $$

Continuing the previous arguments, since this property holds for all admissible \(x,y\in [a,b]\) and \(h_{1}, h_{2}>0\), we can deduce that F is \((2,1)\)-convex.

The proofs of \((0,3)\)- and \((1,2)\)-convexity are similar, so we conclude that F is \((l,m)\)-convex on \([a,b]^{2}\) whenever \(l+m=3\). □
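
A numerical spot-check of the first reduction in this proof (our own sketch, with F approximated by a trapezoidal rule): for the 3-convex function \(f=\exp \), the inequality \(\frac{1}{3}F(x,y)-F(x+h,y)\leq \frac{1}{3}F(x+3h,y)-F(x+2h,y)\) holds over a range of x, y, h.

```python
import numpy as np

def integral_mean(x, y, f=np.exp, npts=2001):
    # F(x, y) from (1.1), approximated by the trapezoidal rule.
    if x == y:
        return f(x)
    t = np.linspace(x, y, npts)
    w = np.full(npts, 1.0)
    w[0] = w[-1] = 0.5
    return (w @ f(t)) * (t[1] - t[0]) / (y - x)

rng = np.random.default_rng(2)
for _ in range(500):
    x, y = rng.uniform(0.0, 1.0, size=2)
    h = rng.uniform(0.05, 0.3)
    lhs = integral_mean(x, y) / 3 - integral_mean(x + h, y)
    rhs = integral_mean(x + 3 * h, y) / 3 - integral_mean(x + 2 * h, y)
    assert lhs <= rhs + 1e-6
print("the (3,0) reduction of (2.1) holds for exp on 500 random samples")
```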

Remark 1

Theorem 6 can be regarded as a generalization of Theorem 5 since inequality (2.1) for \(x_{k}=\tilde{x}_{k}\) and \(y_{k}=\tilde{y}_{k}\), \(k=1,\ldots ,n\) reproduces inequality (1.9).

For similar results regarding Jensen’s inequality involving averages of convex functions, refer to [3] and [5].

3 Inequalities for divided differences

Theorem 7

Let f be an \((n+3)\)-convex function on \([a,b]\). Then for points \(x^{i}_{k}, y^{i}_{k}\in [a,b]\) \((0\leq k\leq n,\ 0\leq i\leq m\); here i is an upper index) and weights \(a_{i}>0\) \((0\leq i\leq m)\) with \(\sum_{i=0}^{m}a_{i}=1\) such that

$$\begin{aligned}& x^{i}_{0}+y^{i}_{0}=x^{i}_{1}+y^{i}_{1}= \cdots =x^{i}_{n}+y^{i}_{n}=2c,\\& x^{i}_{k}+x^{m-i}_{k}\leq 2c,\\& \bigl(a_{i}x^{i}_{k}+a_{m-i}x^{m-i}_{k} \bigr)/(a_{i}+a_{m-i})\leq c, \end{aligned}$$

we have

$$\begin{aligned} &\sum_{i=0}^{m}a_{i}f \bigl[x^{i}_{0},\ldots ,x^{i}_{n} \bigr]-f \Biggl[\sum_{i=0}^{m}a_{i}x^{i}_{0}, \ldots ,\sum_{i=0}^{m}a_{i}x^{i}_{n} \Biggr] \\ &\quad \leq \sum_{i=0}^{m}a_{i}f \bigl[y^{i}_{0},\ldots ,y^{i}_{n} \bigr]-f \Biggl[ \sum_{i=0}^{m}a_{i}y^{i}_{0}, \ldots ,\sum_{i=0}^{m}a_{i}y^{i}_{n} \Biggr]. \end{aligned}$$
(3.1)

Consequently,

$$ G(\mathbf{x})=f[x_{0},x_{1},x_{2}] $$

is an \((l_{1},l_{2},l_{3})\)-convex function of the vector \(\mathbf{x}=(x_{0}, x_{1}, x_{2})\), when \(l_{1}+l_{2}+l_{3}=3\).

Proof

Since the conditions

$$\begin{aligned}& \sum_{j=0}^{n}u_{j}x_{j}^{0}+ \sum_{j=0}^{n}u_{j}y_{j}^{0}= \cdots = \sum_{j=0}^{n}u_{j}x_{j}^{m}+ \sum_{j=0}^{n}u_{j}y_{j}^{m}=2c,\\& \sum_{j=0}^{n}u_{j}x_{j}^{i}+ \sum_{j=0}^{n}u_{j}x_{j}^{m-i} \leq 2c,\\& \frac{a_{i}\sum_{j=0}^{n}u_{j}x^{i}_{j}+a_{m-i}\sum_{j=0}^{n}u_{j}x^{m-i}_{j}}{a_{i}+a_{m-i}} \leq c, \quad 0\leq i \leq m, \end{aligned}$$

from Theorem 5 are satisfied, by using inequality (1.9) for the 3-convex function \(f^{(n)}\), we get

$$\begin{aligned}& \sum_{i=0}^{m}a_{i}f \bigl[x^{i}_{0},\ldots ,x^{i}_{n} \bigr]-f \Biggl[\sum_{i=0}^{m}a_{i}x^{i}_{0}, \ldots ,\sum_{i=0}^{m}a_{i}x^{i}_{n} \Biggr] \\& \quad = \sum_{i=0}^{m}a_{i} \int _{\Delta _{n}}f^{(n)} \Biggl(\sum _{j=0}^{n}u_{j}x_{j}^{i} \Biggr)\,du _{0}\cdots \,du _{n-1} \\& \quad \quad {} - \int _{\Delta _{n}}f^{(n)} \Biggl(\sum _{j=0}^{n}u_{j}\sum _{i=0}^{m}a_{i}x_{j}^{i} \Biggr)\,du _{0}\cdots \,du _{n-1} \\& \quad = \int _{\Delta _{n}} \Biggl[\sum_{i=0}^{m}a_{i}f^{(n)} \Biggl(\sum_{j=0}^{n}u_{j}x_{j}^{i} \Biggr)-f^{(n)} \Biggl(\sum_{i=0}^{m}a_{i} \sum_{j=0}^{n}u_{j}x_{j}^{i} \Biggr) \Biggr]\,du _{0}\cdots \,du _{n-1} \\& \quad \leq \int _{\Delta _{n}} \Biggl[\sum_{i=0}^{m}a_{i}f^{(n)} \Biggl( \sum_{j=0}^{n}u_{j}y_{j}^{i} \Biggr)-f^{(n)} \Biggl(\sum_{i=0}^{m}a_{i} \sum_{j=0}^{n}u_{j}y_{j}^{i} \Biggr) \Biggr]\,du _{0}\cdots \,du _{n-1} \\& \quad = \sum_{i=0}^{m}a_{i} \int _{\Delta _{n}}f^{(n)} \Biggl(\sum _{j=0}^{n}u_{j}y_{j}^{i} \Biggr)\,du _{0}\cdots \,du _{n-1} \\& \quad \quad {} - \int _{\Delta _{n}}f^{(n)} \Biggl(\sum _{j=0}^{n}u_{j}\sum _{i=0}^{m}a_{i}y_{j}^{i} \Biggr)\,du _{0}\cdots \,du _{n-1} \\& \quad = \sum_{i=0}^{m}a_{i}f \bigl[y^{i}_{0},\ldots ,y^{i}_{n} \bigr]-f \Biggl[\sum_{i=0}^{m}a_{i}y^{i}_{0}, \ldots ,\sum_{i=0}^{m}a_{i}y^{i}_{n} \Biggr]. \end{aligned}$$

Now, if we put \(n=2\), \(m=1\),

$$\begin{aligned}& x^{0}_{0}=y_{0}, \qquad x_{0}^{1}=y_{0}+\frac{3h}{2}, \\& y^{0}_{0}=y_{0}+3h, \qquad y_{0}^{1}=y_{0}+\frac{3h}{2}, \\& x^{0}_{1}=x^{1}_{1}=y^{0}_{1}=y^{1}_{1}=y_{1}, \\& x_{2}^{0}=x_{2}^{1}=y_{2}^{0}=y_{2}^{1}=y_{2}, \\& a_{0}=\frac{1}{3}, \qquad a_{1}= \frac{2}{3}, \end{aligned}$$

then inequality (3.1) reduces to

$$\begin{aligned}& \frac{1}{3}G(y_{0}, y_{1}, y_{2})-G(y_{0}+h,y_{1},y_{2}) \\& \quad \leq \frac{1}{3}G(y_{0}+3h, y_{1}, y_{2})-G(y_{0}+2h, y_{1}, y_{2}). \end{aligned}$$

Using the generalization of definition (1.4), we get

$$ 2h^{3} \bigl( \bigl(G[y_{0},y_{0}+h,y_{0}+2h,y_{0}+3h] \bigr)[y_{1}] \bigr)[y_{2}]\geq 0. $$

As in the proof of Theorem 6, since this property holds for all admissible \(y_{0}, y_{1}, y_{2}\in [a,b]\) and \(h>0\), we conclude that G is \((3,0,0)\)-convex.

If we put \(n=2\), \(m=1\),

$$\begin{aligned}& x^{0}_{0}=y^{1}_{0}=y_{0}, \qquad x_{0}^{1}=y_{0}^{0}=y_{0}+2h_{0}, \\& x^{0}_{1}=x^{1}_{1}=y_{1}, \\& x_{2}^{0}=x_{2}^{1}=y_{2}^{0}=y_{2}^{1}=y_{2}, \\& y^{0}_{1}=y^{1}_{1}=y_{1}+h_{1}, \\& a_{0}=a_{1}=\frac{1}{2} \end{aligned}$$

then inequality (3.1) reduces to

$$\begin{aligned}& \frac{1}{2}G(y_{0},y_{1},y_{2})+ \frac{1}{2}G(y_{0}+2h_{0},y_{1},y_{2})-G(y_{0}+h_{0},y_{1},y_{2}) \\& \quad \leq \frac{1}{2}G(y_{0}+2h_{0},y_{1}+h_{1},y_{2})+ \frac{1}{2}G(y_{0},y_{1}+h_{1},y_{2})-G(y_{0}+h_{0},y_{1}+h_{1},y_{2}). \end{aligned}$$

Using the generalization of definition (1.4), we get

$$ h_{0}^{2}h_{1} \bigl( \bigl(G[y_{0},y_{0}+h_{0},y_{0}+2h_{0}] \bigr)[y_{1},y_{1}+h_{1}] \bigr)[y_{2}]\geq 0. $$

As before, since this holds for all admissible \(y_{0}, y_{1}, y_{2}\in [a,b]\) and \(h_{0}, h_{1}>0\), G is \((2,1,0)\)-convex.

If we put \(n=2\), \(m=3\),

$$\begin{aligned}& x^{0}_{0}=x_{0}^{2}=y^{1}_{0}=y_{0}^{3}=y_{0}, \qquad x_{0}^{1}=x_{0}^{3}=y_{0}^{0}=y_{0}^{2}=y_{0}+h_{0}, \\& x^{0}_{1}=x^{3}_{1}=y^{1}_{1}=y^{2}_{1}=y_{1}, \qquad x_{1}^{1}=x_{1}^{2}=y_{1}^{0}=y_{1}^{3}=y_{1}+h_{1} \\& x_{2}^{0}=x_{2}^{1}=y_{2}^{2}=y_{2}^{3}=y_{2}, \qquad x_{2}^{2}=x_{2}^{3}=y_{2}^{0}=y_{2}^{1}=y_{2}+h_{2}, \\& a_{0}=a_{1}=a_{2}=a_{3}= \frac{1}{4}, \end{aligned}$$

then inequality (3.1) reduces to

$$\begin{aligned}& \frac{1}{4}G(y_{0},y_{1},y_{2})+ \frac{1}{4}G(y_{0}+h_{0},y_{1}+h_{1},y_{2}) \\& \quad \quad {} +\frac{1}{4}G(y_{0},y_{1}+h_{1},y_{2}+h_{2})+ \frac{1}{4}G(y_{0}+h_{0},y_{1},y_{2}+h_{2}) \\& \quad \leq \frac{1}{4}G(y_{0}+h_{0},y_{1}+h_{1},y_{2}+h_{2})+ \frac{1}{4}G(y_{0},y_{1},y_{2}+h_{2}) \\& \quad \quad {} +\frac{1}{4}G(y_{0}+h_{0},y_{1},y_{2})+ \frac{1}{4}G(y_{0},y_{1}+h_{1},y_{2}). \end{aligned}$$

Using the generalization of definition (1.4), we get

$$ \frac{1}{4}h_{0}h_{1}h_{2} \bigl( \bigl(G[y_{0},y_{0}+h_{0}] \bigr)[y_{1},y_{1}+h_{1}] \bigr)[y_{2},y_{2}+h_{2}]\geq 0. $$

Continuing the previous arguments, since this property holds for all admissible \(y_{0}, y_{1}, y_{2}\in [a,b]\) and \(h_{0}, h_{1}, h_{2}>0\), we conclude that G is \((1,1,1)\)-convex.

The proofs of \((0,3,0)\)-, \((0,0,3)\)-, \((1,2,0)\)-, \((0,2,1)\)-, \((0,1,2)\)-, \((2,0,1)\)-, and \((1,0,2)\)-convexity are similar, so G is \((l_{1},l_{2},l_{3})\)-convex whenever \(l_{1}+l_{2}+l_{3}=3\). □
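
Finally, a numerical spot-check of the \((1,1,1)\) case (our own sketch): for the 5-convex function \(f=\exp \) and \(G(y_{0},y_{1},y_{2})=f[y_{0},y_{1},y_{2}]\), the mixed third difference \(\Delta _{h_{0}}\Delta _{h_{1}}\Delta _{h_{2}}G\), which is \(h_{0}h_{1}h_{2}\) times the corresponding divided difference of G, is nonnegative.

```python
import math
import random

def divdiff(f, xs):
    if len(xs) == 1:
        return f(xs[0])
    return (divdiff(f, xs[1:]) - divdiff(f, xs[:-1])) / (xs[-1] - xs[0])

def G(y0, y1, y2, f=math.exp):
    # G(y_0, y_1, y_2) = f[y_0, y_1, y_2]
    return divdiff(f, [y0, y1, y2])

random.seed(3)
for _ in range(500):
    y0, y1, y2 = (random.uniform(0.0, 1.0) for _ in range(3))
    h0, h1, h2 = (random.uniform(0.1, 0.5) for _ in range(3))
    # Mixed third difference Delta_{h0} Delta_{h1} Delta_{h2} G(y0, y1, y2):
    d = sum((-1) ** (3 - i - j - k) * G(y0 + i * h0, y1 + j * h1, y2 + k * h2)
            for i in (0, 1) for j in (0, 1) for k in (0, 1))
    assert d >= -1e-9
print("the mixed third differences of G are nonnegative on 500 random samples")
```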