1 Introduction

Hopf [8] and Popoviciu [13, 14] introduced the notion of higher order convex functions based on the so-called divided differences. Let \(n\ge 1 \). Given a function f of one real variable on an interval I and a system \(x_0, x_1, \ldots , x_{n+1}\) of pairwise distinct points of I, the divided differences of order \(0,1, \ldots , n+1\) are respectively defined by the formulas

$$\begin{aligned} \left[ x_0;f \right]&=f(x_0),\\ \left[ x_0, x_1;f \right]&=\frac{f(x_1)-f(x_0)}{x_1-x_0},\quad \ldots ,\\ \left[ x_0, x_1,\ldots ,x_{n+1};f \right]&=\frac{\left[ x_1, x_2,\ldots , x_{n+1};f \right] -\left[ x_0, x_1,\ldots , x_{n};f \right] }{x_{n+1}-x_0}. \end{aligned}$$

The function f(x) is called n-convex (n-concave) if \(\left[ x_0, x_1,\ldots ,x_{n+1};f \right] \ge 0\) (\(\le 0\)) for any pairwise distinct points \(x_0, x_1,\ldots ,x_{n+1}\in I\). In particular, the function f is convex exactly when all its divided differences of order two are nonnegative for all systems of pairwise distinct points, i.e. when f is 1-convex.
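The recursion above translates directly into code. The following Python sketch (the helper name `divdiff` is ours, for illustration only) evaluates a divided difference and illustrates that \(f(x)=x^3\) is 2-convex: every divided difference of order three of a cubic equals its leading coefficient.

```python
def divdiff(xs, f):
    """Divided difference [x0, x1, ..., xk; f] over pairwise distinct points,
    computed by the recursion from the definition above."""
    if len(xs) == 1:
        return f(xs[0])
    return (divdiff(xs[1:], f) - divdiff(xs[:-1], f)) / (xs[-1] - xs[0])

# f(x) = x**3 is 2-convex: every divided difference of order three equals 1,
# the leading coefficient of the cubic.
print(divdiff([0.0, 0.7, 1.5, 2.2], lambda x: x**3))  # ≈ 1.0
```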

Proposition 1

[13] A function f(x) is n-convex if and only if its derivative \(f^{(n-1)}(x)\) exists and is convex (with the convention \(f^{(0)}(x)=f(x)\)).

If \(f=f(x,y)\) is a function defined on a rectangle \(I \times J\), and \(x_0, x_1,\ldots ,x_{m}\) are pairwise distinct points of I and \(y_0,y_1,\ldots ,y_n\) are pairwise distinct points of J, one defines the divided double difference of order (m, n) by the formula

$$\begin{aligned} \left[ {\begin{array}{c} {x_0,\quad x_1, \quad \ldots \quad x_m}\\ \\ {y_0, \quad y_1, \quad \ldots \quad y_n} \end{array}};f \right]&= \left[ x_0, x_1,\ldots , x_{m};\left[ y_0, y_1,\ldots ,y_{n};f(x, \cdot ) \right] \right] \end{aligned}$$
(1)
$$\begin{aligned}&= \left[ y_0, y_1,\ldots ,y_{n};\left[ x_0, x_1,\ldots , x_{m};f(\cdot ,y) \right] \right] \end{aligned}$$
(2)
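The two iterated forms (1) and (2) give the same number. A small numerical check (Python; the helper names `divdiff` and `box_divdiff` are ours) makes this symmetry concrete: for \(f(x,y)=x^2y^3\) the divided double difference of order (2, 3) equals 1, the product of the leading coefficients.

```python
def divdiff(xs, f):
    if len(xs) == 1:
        return f(xs[0])
    return (divdiff(xs[1:], f) - divdiff(xs[:-1], f)) / (xs[-1] - xs[0])

def box_divdiff(xs, ys, f):
    """Divided double difference, computed as in (1): an x-difference of
    y-differences."""
    return divdiff(xs, lambda x: divdiff(ys, lambda y: f(x, y)))

f = lambda x, y: x**2 * y**3
xs, ys = [0.0, 0.5, 1.3], [0.1, 0.9, 1.7, 2.4]
a = box_divdiff(xs, ys, f)                                 # form (1)
b = divdiff(ys, lambda y: divdiff(xs, lambda x: f(x, y)))  # form (2)
print(abs(a - b) < 1e-9, abs(a - 1.0) < 1e-9)  # True True
```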

Drawing a parallel to the one dimensional case, Popoviciu [13, p. 78] has called a function \(f:I \times J \rightarrow {\mathbb {R}}\) convex of order (m, n) (box-(m, n)-convex in our terminology) if all divided differences

$$\begin{aligned} \left[ {\begin{array}{c} {x_0,\quad x_1, \quad \ldots \quad x_m}\\ \\ {y_0, \quad y_1, \quad \ldots \quad y_n} \end{array}};f \right] \end{aligned}$$

are nonnegative for all pairwise distinct points \(x_0, x_1,\ldots , x_{m}\) and \(y_0, y_1,\ldots , y_n\). The related notions of box-(m, n)-concave function and box-(m, n)-affine function can be introduced in the standard way.

We say that a function f is box-monotone if it is box-(1, 1)-convex, i.e. it satisfies the condition

$$\begin{aligned} \left[ {\begin{array}{c} {x_0,\quad x_1}\\ \\ {y_0, \quad y_1} \end{array}};f \right] =\frac{f(x_1,y_1)-f(x_0,y_1)-f(x_1,y_0)+f(x_0,y_0)}{(x_1-x_0)(y_1-y_0)}\ge 0 \end{aligned}$$

for all distinct points \(x_0,x_1\in I\) and \(y_0,y_1\in J\). The box-monotonicity of a real-valued function defined on the rectangle \(S=I \times J \) can be discussed in terms of box-increments.

The box-increment of f over a compact subrectangle \(S'=[a_1,a_2]\times [b_1,b_2]\) of S is defined by the formula

$$\begin{aligned} \Delta (f;S')=f(a_1,b_1)-f(a_1,b_2)-f(a_2,b_1)+f(a_2,b_2). \end{aligned}$$

Accordingly, the function f is box-monotone if and only if \(\Delta (f;S')\ge 0\) for all compact subrectangles \(S'\) of S.
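As a quick sanity check (Python; the function name is ours), \(f(x,y)=xy\) is box-monotone on \({\mathbb {R}}^2\), since its box-increment factorizes as \(\Delta (f;S')=(a_2-a_1)(b_2-b_1)\ge 0\) for every compact subrectangle:

```python
def box_increment(f, a1, a2, b1, b2):
    """Delta(f; [a1,a2] x [b1,b2]) as defined above."""
    return f(a1, b1) - f(a1, b2) - f(a2, b1) + f(a2, b2)

# For f(x, y) = x*y the box-increment equals (a2 - a1)*(b2 - b1).
print(box_increment(lambda x, y: x * y, 0.0, 2.0, 1.0, 4.0))  # 6.0
```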

The following proposition offers a differential criterion of box-(m, n)-convexity.

Proposition 2

[13, p. 82] Let \(f:I \times J \rightarrow {\mathbb {R}}\) be an \((m+n)\)-differentiable function. Then f is box-(m, n)-convex if and only if \(\frac{\partial ^{m+n}f }{\partial x^m \partial y^n}\ge 0. \)
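Proposition 2 can be probed numerically. For \(f(x,y)=e^{x+y}\) every mixed partial \(\partial ^{m+n}f/\partial x^m \partial y^n=e^{x+y}\) is positive, so every divided double difference should be positive as well. A Python sketch (helper names are ours):

```python
import math

def divdiff(xs, f):
    if len(xs) == 1:
        return f(xs[0])
    return (divdiff(xs[1:], f) - divdiff(xs[:-1], f)) / (xs[-1] - xs[0])

def box_divdiff(xs, ys, f):
    return divdiff(xs, lambda x: divdiff(ys, lambda y: f(x, y)))

f = lambda x, y: math.exp(x + y)  # all mixed partials equal e^(x+y) > 0
val = box_divdiff([0.0, 0.4, 1.1], [0.2, 0.8, 1.9], f)  # a (2, 2) difference
print(val > 0)  # True, consistent with Proposition 2
```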

An important role in the study of box-(m, n)-convex functions is played by pseudo-polynomials.

Definition 1

[13, p. 65] A pseudo-polynomial of order (m, n) is any function \(W:\mathbb R^2\rightarrow {\mathbb {R}}\) of the form

$$\begin{aligned} W(x,y)=\sum _{i=0}^m x^i A_i(y) + \sum _{j=0}^n y^j B_j(x), \end{aligned}$$

where \(A_i,B_j:{\mathbb {R}}\rightarrow {\mathbb {R}}\) \((i=0,1,\ldots ,m;\) \(j=0,1,\ldots ,n)\) are arbitrary functions.

Proposition 3

[13, p. 65] The box-(m, n)-affine functions \(f:I\times J\rightarrow {\mathbb {R}}\), i.e. the functions satisfying the equation

$$\begin{aligned} \left[ {\begin{array}{c} {x_0,\quad x_1, \quad \ldots \quad x_m}\\ \\ {y_0, \quad y_1, \quad \ldots \quad y_n} \end{array}};f \right] =0 \end{aligned}$$

for all pairwise distinct points, are pseudo-polynomials of order \((m-1,n-1)\).
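The elementary converse is easy to verify numerically: a pseudo-polynomial of order \((m-1,n-1)\) is annihilated by every divided double difference of order (m, n). A Python sketch for \(m=n=2\) (helper names and the particular coefficient functions are ours):

```python
import math

def divdiff(xs, f):
    if len(xs) == 1:
        return f(xs[0])
    return (divdiff(xs[1:], f) - divdiff(xs[:-1], f)) / (xs[-1] - xs[0])

def box_divdiff(xs, ys, f):
    return divdiff(xs, lambda x: divdiff(ys, lambda y: f(x, y)))

# A pseudo-polynomial of order (1, 1) with arbitrary coefficient functions:
# W(x, y) = A0(y) + x*A1(y) + B0(x) + y*B1(x).
W = lambda x, y: math.sin(y) + x * math.cos(y) + math.exp(x) + y * x**3

val = box_divdiff([0.0, 0.6, 1.4], [0.1, 0.9, 2.0], W)
print(abs(val) < 1e-9)  # every (2, 2) divided double difference vanishes
```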

This paper is inspired by the results of two recent papers: by Gal and Niculescu [3] and by Gavrea and Gavrea [5, 6].

In Sect. 2, we give the integral representation of box-monotone functions, without any additional assumptions about their differentiability.

In Sect. 3, we give necessary and sufficient conditions for probability measures to satisfy the Raşa type inequality for box-monotone functions.

In Sect. 4, we give the integral representation and several characterizations of box-(m, n)-convex functions.

In Sect. 5, we introduce a definition of box-(m, n)-convex orders. Based on the integral representation, we obtain a characterization of box-(m, n)-convex orders, which will be used in Sects. 6 and 7.

In Sect. 6, we obtain necessary and sufficient conditions for probability measures to satisfy the Raşa type inequality for box-(m, n)-convex functions (Theorem 24). In this paper, we present a probabilistic version of the Raşa type inequality. Theorem 24 generalizes an analytical version of the Raşa type inequality for binomial distributions, which has been recently proved in [5, 6].

In Sect. 7, we obtain the Hermite–Hadamard type and the Jensen inequalities for box-(m, n)-convex functions. The Jensen inequalities presented in [3] were proved using the box analog of the subdifferential inequality (see [3, Sect. 6]). In this paper, we present a new approach to obtaining the Hermite–Hadamard type and the Jensen inequalities, based on box-(m, n)-convex orders.

In Sect. 8, we introduce the notion of strongly box-(m, n)-convex functions. We obtain the Hermite–Hadamard type and the Jensen inequalities for strongly box-(m, n)-convex functions.

2 Integral Representation of Box-Monotone Functions

Let \(S=[a,b]\times [c,d]\) and \(f:S \rightarrow {\mathbb {R}}\) be a box-monotone function. If \(a\le a_1<a_2\le b,\; c\le c_1<c_2\le d\) and \(S'=[a_1,a_2]\times [c_1,c_2]\subset S\), then \(0\le \Delta (f;S')\le \Delta (f;S)\).

Lemma 1

Let \(S=[a,b]\times [c,d]\), let \(f:S \rightarrow {\mathbb {R}}\) be a box-monotone function and let \(f^*(x,y)=f(x,y)-f(a,y)-f(x,c)+f(a,c), \quad x\in [a,b],\,\, y\in [c,d]\). Then the function \(f^*(x,y)\) is a non-decreasing box-monotone function such that \(f^*(x,c)=f^*(a,y)=0\).

Proof

Let \((x_1,y_1),(x_2,y_2)\in S\) be such that \(x_1\le x_2\), \(y_1\le y_2\). Let \(S_1=[a,x_1]\times [c,y_1]\), \(S_2=[a,x_2]\times [c,y_2]\). Then, taking into account that \(S_1\subset S_2\), we obtain: \(f^*(x_2,y_2)-f^*(x_1,y_1)=[f(x_2,y_2)-f(a,y_2)-f(x_2,c)+f(a,c)]-[f(x_1,y_1)-f(a,y_1)-f(x_1,c)+f(a,c)] =\Delta (f;S_2)-\Delta (f;S_1)\ge 0\). Since \(f(x,y)\) is box-monotone and \(f^*(x,y)-f(x,y)\) is a pseudo-polynomial of order (0, 0), it follows that \(f^*(x,y)\) is box-monotone. Clearly, \(f^*(x,c)=f^*(a,y)=0\). The lemma is proved. \(\square \)

In the following theorem, we give an integral representation of box-monotone functions without any assumptions about their differentiability. The integrals used in the theorem are the Lebesgue–Stieltjes integrals.

Theorem 2

Let \(S=[a,b]\times [c,d]\) and let \(f:S\rightarrow {\mathbb {R}}\) be a function such that the function \(f^*(x,y)=f(x,y)-f(a,y)-f(x,c)+f(a,c)\) is right-continuous. Then f is box-monotone if and only if it is of the form

$$\begin{aligned} f(x,y)=A_0(y) + B_0(x)+\int _a^b \int _c^d \chi _{(-\infty ,x]}(u)\chi _{(-\infty ,y]}(v)dg(u,v), \end{aligned}$$
(3)

where \(A_0:[c,d] \rightarrow {\mathbb {R}}\), \(B_0:[a,b] \rightarrow {\mathbb {R}}\) are arbitrary functions and \(g:S \rightarrow \mathbb R\) is a right-continuous box-monotone function.

Moreover, if f is of the form (3), then

i):

\(A_0(y) + B_0(x)=f(a,y)+f(x,c)-f(a,c), \quad x\in [a,b],\,\, y\in [c,d], \)

ii):

\(g(x,y)=f(x,y) \) up to a pseudo-polynomial of order (0, 0),

iii):

in place of g, one can take the non-decreasing right-continuous box-monotone function \( g^*(u,v)= g(u,v)-g(a,v)- g(u,c)+ g(a,c) \).

Proof

\((\Rightarrow )\) Assume that \(f:S \rightarrow {\mathbb {R}}\) is a box-monotone function. From our assumptions and Lemma 1, we obtain

$$\begin{aligned} f^*(x,y)=\int _a^x \int _c^y df^*(u,v)=\int _a^b \int _c^d \chi _{(-\infty ,x]}(u)\chi _{(-\infty ,y]}(v)df^*(u,v), \end{aligned}$$

consequently

$$\begin{aligned} f(x,y)=f(a,y)+f(x,c)-f(a,c)+\int _a^b \int _c^d \chi _{(-\infty ,x]}(u)\chi _{(-\infty ,y]}(v)\,df^*(u,v), \end{aligned}$$

which implies that (3) is satisfied with \(A_0(y) + B_0(x)=f(a,y)+f(x,c)-f(a,c)\) and \(g=f^*\).

\((\Leftarrow )\) Let f be of the form (3) with right-continuous box-monotone function g. Let \(a\le a_1<a_2\le b\) and \(c\le c_1<c_2\le d\). Since g is box-monotone, \(\Delta (g;[a_1,a_2]\times [c_1,c_2])\ge 0\). It is not difficult to prove that \( \Delta (f;[a_1,a_2]\times [c_1,c_2])=\Delta (g;[a_1,a_2]\times [c_1,c_2]) \). Taking into account that \(\Delta (g;[a_1,a_2]\times [c_1,c_2])\ge 0\), we obtain that \(\Delta (f;[a_1,a_2]\times [c_1,c_2]) \ge 0\), which implies that f is box-monotone.

We will prove that i), ii) and iii) are satisfied. Let f be of the form (3). Let \(a\le a_1<a_2\le b\) and \(c\le c_1<c_2\le d\). It is not difficult to prove that

$$\begin{aligned} \Delta (f^*;[a_1,a_2]\times [c_1,c_2])=\Delta (f;[a_1,a_2]\times [c_1,c_2])=\Delta (g;[a_1,a_2]\times [c_1,c_2]). \end{aligned}$$
(4)

By (4), f can be written in the form (3), with \( f^*\) in place of g. Taking into account that \( \int _a^b \int _c^d \chi _{(-\infty ,x]}(u)\chi _{(-\infty ,y]}(v)df^*(u,v)=f^*(x,y)\), we obtain \(f(x,y)=A_0(y) + B_0(x)+f^*(x,y)=A_0(y) + B_0(x)+[f(x,y)-f(a,y)-f(x,c)+f(a,c)]\), which implies i).

Condition ii) follows immediately from (4).

It is not difficult to prove that the function \(g^*(x,y)=g(x,y)-g(a,y)-g(x,c)+g(a,c)\) is non-decreasing. Since \(g=g^* \) up to a pseudo-polynomial of order (0, 0), the condition iii) is proved. \(\square \)

3 The Raşa Type Inequalities for Box-Monotone Functions

The Raşa inequality [19] can be written in terms of binomially distributed random variables [11] as follows

$$\begin{aligned} \mathbb {E}\,f\left( \frac{X_1+X_2}{2n}\right) +\mathbb {E}\,f\left( \frac{Y_1+Y_2}{2n}\right) -2 \;\mathbb {E}\,f\left( \frac{X+Y}{2n}\right) \ge 0, \end{aligned}$$
(5)

where \(X,X_1,X_2, Y,Y_1,Y_2\) are random variables such that \(X,X_1,X_2\sim B(n,x)\), \(Y,Y_1,Y_2\sim B(n,y)\), \(X,Y\) are independent, \(X_1,X_2\) are independent and \(Y_1,Y_2\) are independent, and \(f:[0,1] \rightarrow {\mathbb {R}}\) is a continuous convex function.
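Inequality (5) can be checked exactly by summing over the binomial probability mass functions. The sketch below (Python; function names are ours) does this for the convex function \(f(t)=t^2\), for which the left-hand side of (5) works out to \((x-y)^2/2\):

```python
from math import comb

def binom_pmf(n, p):
    """Probability mass function of B(n, p) as a list indexed by k."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def rasa_lhs(f, n, x, y):
    """Exact left-hand side of (5) for independent binomial samples."""
    px, py = binom_pmf(n, x), binom_pmf(n, y)
    def e(p, q):  # E f((U + V) / (2n)) for independent U ~ p, V ~ q
        return sum(p[i] * q[j] * f((i + j) / (2 * n))
                   for i in range(n + 1) for j in range(n + 1))
    return e(px, px) + e(py, py) - 2 * e(px, py)

# For the convex f(t) = t^2 the left-hand side equals (x - y)^2 / 2.
val = rasa_lhs(lambda t: t * t, n=5, x=0.3, y=0.8)
print(val)  # ≈ 0.125
```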

Gavrea [4] presented the problem of generalizing the Raşa inequality, which can be written in terms of the expectations of binomially distributed random variables as follows.

Problem 1. [4] Give a characterization of the class of convex functions \(g:[0,1]^2 \rightarrow {\mathbb {R}}\), satisfying

$$\begin{aligned} \mathbb {E}\,g\left( \frac{X_1}{n},\frac{X_2}{n}\right) +\mathbb {E}\,g\left( \frac{Y_1}{n},\frac{Y_2}{n}\right) - 2\;\mathbb {E}\,g\left( \frac{X}{n},\frac{Y}{n}\right) \ge 0, \end{aligned}$$

where \(X,X_1,X_2, Y,Y_1,Y_2\) are random variables such that \(X,X_1,X_2\sim B(n,x)\), \(Y,Y_1,Y_2\sim B(n,y)\), \(X,Y\) are independent, \(X_1,X_2\) are independent and \(Y_1,Y_2\) are independent.

In [11], we proposed a modification of Gavrea’s problem.

Problem 2. [11] Give a characterization of the class of functions \(g:[0,1]^2 \rightarrow {\mathbb {R}}\), satisfying

$$\begin{aligned} \mathbb {E}\,g\left( \frac{X_1}{n},\frac{X_2}{n}\right) +\mathbb {E}\,g\left( \frac{Y_1}{n},\frac{Y_2}{n}\right) - \;\mathbb {E}\,g\left( \frac{X}{n},\frac{Y}{n}\right) -\;\mathbb {E}\,g\left( \frac{Y}{n},\frac{X}{n}\right) \ge 0, \end{aligned}$$
(6)

where \(X,X_1,X_2, Y,Y_1,Y_2\) are random variables such that \(X,X_1,X_2\sim B(n,x)\), \(Y,Y_1,Y_2\sim B(n,y)\), \(X,Y\) are independent, \(X_1,X_2\) are independent and \(Y_1,Y_2\) are independent.

Remark 3

[11] The inequality (6) is not satisfied for all convex functions g. Let us take \(g(x,y)=|x-y|\). Then g is convex, \(g(0,0)= g(1,1)=0\) and \(g(0,1)= g(1,0)=1\). Let \(X,X_1,X_2\sim B(n,0)=\delta _0\), \(Y,Y_1,Y_2\sim B(n,1)=\delta _n\) be independent random variables. We obtain

$$\begin{aligned} \mathbb {E}\,g\left( \frac{X}{n},\frac{Y}{n}\right) + \mathbb {E}\,g\left( \frac{Y}{n},\frac{X}{n}\right) =1+1>0+0=\mathbb {E}\,g\left( \frac{X_1}{n},\frac{X_2}{n}\right) + \mathbb {E}\,g\left( \frac{Y_1}{n},\frac{Y_2}{n}\right) , \end{aligned}$$

consequently the inequality (6) does not hold.

In [11], we gave some sufficient conditions for functions g and random variables \(X,Y\) (which are not necessarily binomially distributed), such that (6) is satisfied (up to a natural number n). We use the following notation: \(X\le _{st}Y\) (\(\mu _X\le _{st}\mu _Y\)) means that \(\mu _X([x,\infty ))\le \mu _Y([x,\infty ))\) for all \(x\in {\mathbb {R}}\), where \(\mu _X\) and \(\mu _Y\) are the probability distributions of the random variables X and Y, respectively. An important characterization of the usual stochastic order \(\le _{st}\) for probability distributions is given in the following proposition.

Proposition 4

[21, p. 5] Two probability distributions \(\mu \) and \(\nu \) satisfy \(\mu \le _{st}\nu \) if and only if there exist two random variables X and Y defined on the same probability space, such that the distribution of X is \(\mu \), the distribution of Y is \(\nu \) and \(P(X\le Y)=1\).
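A standard construction behind Proposition 4 is the quantile coupling: draw a single uniform variable U and set \(X=F^{-1}(U)\), \(Y=G^{-1}(U)\), where \(F^{-1},G^{-1}\) are the generalized inverses of the distribution functions. The discrete Python sketch below (the two distributions and all names are ours, for illustration) exhibits such a coupled pair:

```python
import random

def quantile(cdf, u):
    """Generalized inverse of a discrete CDF given as [(value, F(value)), ...]."""
    for v, F in cdf:
        if u <= F:
            return v
    return cdf[-1][0]

mu = [(0, 0.3), (1, 0.7), (2, 1.0)]  # distribution of X
nu = [(1, 0.3), (2, 0.7), (3, 1.0)]  # distribution of Y; here mu <=_st nu

random.seed(0)
coupled_ok = all(quantile(mu, u) <= quantile(nu, u)
                 for u in (random.random() for _ in range(1000)))
print(coupled_ok)  # True: the coupled pair satisfies X <= Y with probability 1
```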

Theorem 4

Let \(g: {\mathbb {R}}^2 \rightarrow {\mathbb {R}}\) be a function satisfying

$$\begin{aligned} g(x_1,x_2)-g(x_1,y_2)-g(y_1,x_2)+g(y_1,y_2) \ge 0 \quad \text {for } (y_1-x_1)(y_2-x_2)>0.\nonumber \\ \end{aligned}$$
(7)

Let X and Y be two independent random variables, such that

$$\begin{aligned} X\le _{st} Y \quad \text {or} \quad Y\le _{st} X. \end{aligned}$$
(8)

Then

$$\begin{aligned} \mathbb {E}\,g(X_1,X_2)-\mathbb {E}\,g(X_1,Y_2)-\mathbb {E}\,g(Y_1,X_2)+\mathbb {E}\,g(Y_1,Y_2) \ge 0, \end{aligned}$$
(9)

where \(X_1,X_2\) and \(Y_1,Y_2\) are independent random variables such that \(X_1,X_2\sim X\) and \(Y_1,Y_2\sim Y\) (assuming that all the expectations in (9) exist).

Proof

Without loss of generality we may assume that \(X\le _{st} Y\). By Proposition 4, there exist two independent random vectors \((X_1,Y_1)\) and \((X_2,Y_2)\) such that

$$\begin{aligned} X_1,X_2\sim X, \quad Y_1,Y_2\sim Y, \quad P(X_1\le Y_1)=1 \quad \text {and} \quad P(X_2\le Y_2)=1. \end{aligned}$$
(10)

By (10) and (7), we obtain

$$\begin{aligned} P\left( g(X_1,X_2)-g(X_1,Y_2)- g(Y_1,X_2)+g(Y_1,Y_2) \ge 0\right) =1, \end{aligned}$$

which implies (9). The theorem is proved. \(\square \)

Note that condition (7) means that the function g is box-monotone. Thus, condition (8) is a sufficient condition for a box-monotone function to satisfy (9). In the next theorem, we prove that (8) is also a necessary condition.
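The indicator functions \(g_{A,B}(x,y)=\chi _{[A,\infty )}(x)\chi _{[B,\infty )}(y)\), which reappear in the proof of Theorem 5 below, already show why (8) matters: by independence, the left-hand side of (9) factorizes as \([{\overline{F}}(A)-{\overline{G}}(A)][{\overline{F}}(B)-{\overline{G}}(B)]\). A Python sketch with a pair of stochastically ordered discrete distributions chosen by us:

```python
def tail(dist, t):
    """P(Z >= t) for a discrete distribution given as {value: probability}."""
    return sum(p for v, p in dist.items() if v >= t)

def lhs_indicator(dist_x, dist_y, A, B):
    """E g(X1,X2) - E g(X1,Y2) - E g(Y1,X2) + E g(Y1,Y2) for the box-monotone
    g(x, y) = 1[x >= A] * 1[y >= B]; by independence it factorizes."""
    return ((tail(dist_x, A) - tail(dist_y, A))
            * (tail(dist_x, B) - tail(dist_y, B)))

mu = {0: 0.5, 2: 0.5}  # distribution of X
nu = {1: 0.5, 3: 0.5}  # distribution of Y; here mu <=_st nu
vals = [lhs_indicator(mu, nu, A, B) for A in range(4) for B in range(4)]
print(min(vals) >= 0)  # True, as Theorem 4 predicts
```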

Theorem 5

Let \(X,X_1,X_2, Y,\) \(Y_1,Y_2\) be \([a,b]\)-valued random variables, such that \(X_1,X_2\sim X\), \(Y_1,Y_2\sim Y\) and the vectors \((X_1,X_2)\), \((Y_1,Y_2)\) are independent. If

$$\begin{aligned} \mathbb {E}\,g(X_1,X_2)-\mathbb {E}\,g(X_1,Y_2)- \mathbb {E}\,g(Y_1,X_2)+\mathbb {E}\,g(Y_1,Y_2) \ge 0 \end{aligned}$$
(11)

for all functions \(g: {\mathbb {R}}^2\rightarrow {\mathbb {R}}\) satisfying condition (7), then

$$\begin{aligned} X\le _{st} Y \quad \text {or} \quad Y\le _{st} X. \end{aligned}$$
(12)

Proof

Let us assume that the assumptions of the theorem are satisfied. Let \({\overline{F}},{\overline{G}},\mu ,\nu \) be the tail distribution functions and distributions of the random variables \(X,Y\), respectively, i.e. \({\overline{F}}(x)=P(X\ge x)=\mu ([x,\infty ))\), \({\overline{G}}(x)=P(Y\ge x)=\nu ([x,\infty ))\). Since condition (11) is satisfied for functions \(g: {\mathbb {R}}^2\rightarrow {\mathbb {R}}\) satisfying condition (7), it is satisfied for all functions \(g(x,y)\) of the form

$$\begin{aligned} g_{A,B}(x,y)=\chi _{[A,\infty )}(x)\chi _{[B,\infty )}(y), \end{aligned}$$

where \(A,B\in [a,b]\). We obtain

$$\begin{aligned}{} & {} 0\le \mathbb {E}\,g(X_1,X_2)-\mathbb {E}\,g(X_1,Y_2)- \mathbb {E}\,g(Y_1,X_2)+\mathbb {E}\,g(Y_1,Y_2)\\{} & {} \quad =\int _a^b \int _a^b \chi _{[A,\infty )}(x)\chi _{[B,\infty )}(y)\mu (dy)\mu (dx) -\int _a^b \int _a^b \chi _{[A,\infty )}(x)\chi _{[B,\infty )}(y)\nu (dy)\mu (dx)\\{} & {} \qquad -\int _a^b \int _a^b \chi _{[A,\infty )}(x)\chi _{[B,\infty )}(y)\mu (dy)\nu (dx) \!+\!\int _a^b \int _a^b \chi _{[A,\infty )}(x)\chi _{[B,\infty )}(y)\nu (dy)\nu (dx)\\{} & {} \quad = {\overline{F}}(A)\;{\overline{F}}(B)-{\overline{G}}(B)\;{\overline{F}}(A)-{\overline{F}}(B)\;{\overline{G}}(A)+{\overline{G}}(A)\;{\overline{G}}(B)\\{} & {} \quad =\left[ {\overline{F}}(A)\;-\;{\overline{G}}(A)\right] \; \left[ {\overline{F}}(B)\;-\;{\overline{G}}(B)\right] . \end{aligned}$$

There are three possible cases:

(a):

\({\overline{F}}(A)=\;{\overline{G}}(A)\) for all \(A\in [a,b]\). Then \(X\le _{st} Y \quad \text {and} \quad Y\le _{st} X\).

(b):

There exists \(A_0 \in [a,b]\), such that \({\overline{F}}(A_0)\;-\;{\overline{G}}(A_0)>0\). Then \({\overline{F}}(B)\;-\;{\overline{G}}(B)\ge 0\) for all \(B\in [a,b]\), which implies \(Y\le _{st} X\).

(c):

There exists \(A_0 \in [a,b]\), such that \(\overline{F}(A_0)\;-\;{\overline{G}}(A_0)<0\). Then \(\overline{F}(B)\;-\;{\overline{G}}(B)\le 0\) for all \(B\in [a,b]\), which implies \(X\le _{st} Y\).

The theorem is proved. \(\square \)

4 Integral Representation of Box-(m, n)-Convex Functions

Now we are going to develop an integral representation of box-(m, n)-convex functions for \(m,n\ge 2\). Proposition 2 seems to be a convenient tool to study \((m+n)\)-differentiable functions. In this section, however, we are interested in all (even not necessarily continuous) box-(m, n)-convex functions defined on \(I\times J\), where \(I,J\subset {\mathbb {R}}\) are open intervals (bounded or unbounded).

In the subsequent lemmas we need the following notion. Let \(m,n\ge 1\). We say that \(f:I\times J\rightarrow {\mathbb {R}}\) is (m, n)-regular if for every \(y\in J\) the function \(f(\cdot ,y)\) is a linear combination of \((m-1)\)-convex functions, and for every \(x\in I\) the function \(f(x,\cdot )\) is a linear combination of \((n-1)\)-convex functions. Here, by a 0-convex function we mean a non-decreasing, right-continuous function.

Observe that a pseudo-polynomial \(W(x,y)\!=\!\sum _{i=0}^m x^i A_i(y)\! +\! \sum _{j=0}^n y^j B_j(x)\) is (m, n)-regular if and only if \(A_0,\dots ,A_m\) are linear combinations of \((n-1)\)-convex functions, and \(B_0,\dots ,B_n\) are linear combinations of \((m-1)\)-convex functions. In that case, for any \(\alpha \in I\) the function \(\int _\alpha ^xW(t,y)dt\) is an \((m+1,n)\)-regular pseudo-polynomial of order \((m+1,n)\). If, in addition, \(m>1\), then the right-derivative \(W'_R:=W'_{x^+}\) is an \((m-1,n)\)-regular pseudo-polynomial of order \((m-1,n)\).

We also need several lemmas. Two of them concern a regularization of a box-(m, n)-convex function by subtracting an appropriate pseudo-polynomial.

Lemma 6

Let \(m,n\ge 1\) and \(f:I\times J\rightarrow {\mathbb {R}}\) be a box-(m, n)-convex function.

(i):

Let \(u_1,\dots ,u_m\in I\) be pairwise distinct points satisfying \(f(u_k,y)=0\) for each \(k=1,\dots ,m\) and \(y\in J\). Then, for every \(x\in I{\setminus }\{u_1,\dots ,u_m\}\) the function \(g_x(y)=f(x,y)\) is \((n-1)\)-convex, when \(\{k=1,\dots ,m:u_k>x\}\) has an even number of elements, and it is \((n-1)\)-concave, when \(\{k=1,\dots ,m:u_k>x\}\) has an odd number of elements.

(ii):

Let \(v_1,\dots ,v_n\in J\) be pairwise distinct points satisfying \(f(x,v_l)=0\) for each \(l=1,\dots ,n\) and \(x\in I\). Then for every \(y\in J{\setminus }\{v_1,\dots ,v_n\}\) the function \(h_y(x)=f(x,y)\) is \((m-1)\)-convex, when \(\{l=1,\dots ,n:v_l>y\}\) has an even number of elements, and it is \((m-1)\)-concave, when \(\{l=1,\dots ,n:v_l>y\}\) has an odd number of elements.

Proof

It is enough to prove (i). We note that for every \(y\in J\) and \(x\in I{{\setminus }}\{u_1,\dots ,u_m\}\), we have

$$\begin{aligned}{} & {} [x, u_1,\ldots , u_m;f(\cdot ,y)]=\frac{[x, u_1,\ldots , u_{m-1};f(\cdot ,y)]-[u_1,\ldots , u_m;f(\cdot ,y)]}{x-u_m}\\{} & {} \quad =\frac{[x, u_1,\ldots , u_{m-1};f(\cdot ,y)]}{x-u_m} =\frac{[x, u_1,\ldots , u_{m-2};f(\cdot ,y)]}{(x-u_m)(x-u_{m-1})}\\{} & {} \quad =\cdots =\frac{[x;f(\cdot ,y)]}{(x-u_m)(x-u_{m-1})\dots (x-u_1)} =\frac{g_x(y)}{(x-u_m)(x-u_{m-1})\dots (x-u_1)}. \end{aligned}$$

Let \(y_0,y_1,\dots ,y_n\in J\) be pairwise distinct points. Since f is box-(mn)-convex, we obtain

$$\begin{aligned} 0\le \left[ {\begin{array}{c} {x,\quad u_1, \quad \ldots \quad u_m}\\ \\ {y_0, \quad y_1, \quad \ldots \quad y_n} \end{array}};f \right] =\frac{[y_0,y_1,\dots ,y_n;g_x]}{(x-u_m)(x-u_{m-1})\dots (x-u_1)}. \end{aligned}$$

The proof of part (i) is finished. The proof of (ii) is analogous. \(\square \)

Lemma 7

For every function \(f:I\times J\rightarrow {\mathbb {R}}\) and pairwise distinct \(u_1,\dots ,u_m\in I\) and \(v_1,\dots ,v_n\in J\), there exists a pseudo-polynomial V of order \((m-1,n-1)\), such that \((f-V)(u_k,y)=(f-V)(x,v_l)=0\) for each \(k=1,\dots ,m\), \(l=1,\dots ,n\), \(x\in I\) and \(y\in J\). Moreover, if f is (m, n)-regular, then V is also (m, n)-regular. If f is box-(m, n)-convex, then \(f-V\) is (m, n)-regular.

Proof

For every \(y\in J\), let \(A_0(y)\), ..., \(A_{m-1}(y)\in {\mathbb {R}}\) be such that \(\sum _{i=0}^{m-1}A_i(y)x^i\) is the Lagrange interpolation polynomial satisfying \(\sum _{i=0}^{m-1}A_i(y)u_k^i=f(u_k,y)\) for \(k=1,\dots ,m\). Let \({\widetilde{f}}(x,y)= f(x,y)-\sum _{i=0}^{m-1}A_i(y)x^i\). Then \({\widetilde{f}}(u_k,y)=0\) for each \(k=1,\dots ,m\) and \(y\in J\).

Now, for every \(x\in I\), let \(B_0(x)\), ..., \(B_{n-1}(x)\in \mathbb R\) be such that \(\sum _{j=0}^{n-1}B_j(x)y^j\) is the Lagrange interpolation polynomial satisfying \(\sum _{j=0}^{n-1}B_j(x)v_l^j={\widetilde{f}}(x,v_l)\) for \(l=1,\dots ,n\). Then, the pseudo-polynomial \(V(x,y)\!=\!\sum _{i=0}^{m-1}A_i(y)x^i\!+\!\sum _{j=0}^{n-1}B_j(x)y^j\) satisfies \((f-V)(u_k,y)=(f-V)(x,v_l)=0\) for each \(k=1,\dots ,m\), \(l=1,\dots ,n\), \(x\in I\) and \(y\in J\).

Clearly, if f is (m, n)-regular, then the functions \(A_0,\dots ,A_{m-1}\) are linear combinations of \((n-1)\)-convex functions (as linear combinations of the functions \(f(u_k,\cdot )\), \(k=1,\dots ,m\)) and \(B_0,\dots ,B_{n-1}\) are linear combinations of \((m-1)\)-convex functions, hence V is (m, n)-regular.

If f is box-(m, n)-convex, then \(f-V\) is also box-(m, n)-convex. By Lemma 6, \(f-V\) is (m, n)-regular. \(\square \)

In the next two lemmas, we describe how differentiation affects a box-(m, n)-convex function.

Lemma 8

Let \(n\ge 2\), let \(I\subset {\mathbb {R}}\) be an interval, and let \(f:I\rightarrow {\mathbb {R}}\) be a right-differentiable function. For pairwise distinct points \(x_1,\dots ,x_n\in I{\setminus }\{\sup I\}\) and \(k=1,\dots ,n\), the limit \(\lim _{x_0\downarrow x_k}[x_0,x_1,\dots ,x_n;f]\) exists and

$$\begin{aligned}{} & {} \lim _{x_0\downarrow x_k}[x_0,x_1,\dots ,x_n;f]\nonumber \\{} & {} \quad =\frac{f'_R(x_k)}{\prod \nolimits _{\begin{array}{c} i=1\\ i\ne k \end{array}}^n(x_k-x_i)}+\sum \limits _{\begin{array}{c} j=1\\ j\ne k \end{array}}^n\frac{1}{x_j-x_k}\left( \frac{f(x_j)}{\prod \nolimits _{\begin{array}{c} i=1\\ i\ne j \end{array}}^n(x_j-x_i)}+\frac{f(x_k)}{\prod \nolimits _{\begin{array}{c} i=1\\ i\ne k \end{array}}^n(x_k-x_i)}\right) .\nonumber \\ \end{aligned}$$
(13)

Moreover,

$$\begin{aligned}{}[x_1,\dots ,x_n;f'_R]= \sum _{k=1}^n\lim _{x_0\downarrow x_k}[x_0,x_1,\dots ,x_n;f]. \end{aligned}$$

Proof

Due to symmetry, it is enough to show that (13) holds for \(k=1\). We skip an easy proof by induction on n. Using (13), we obtain

$$\begin{aligned} \sum _{k=1}^n\lim _{x_0\downarrow x_k}[x_0,x_1,\dots ,x_n;f]=\sum _{k=1}^n\frac{f'_R(x_k)}{\prod \nolimits _{\begin{array}{c} i=1\\ i\ne k \end{array}}^n(x_k-x_i)}=[x_1,\dots ,x_n;f'_R]. \end{aligned}$$

\(\square \)

Lemma 9

Let \(I,J\subset {\mathbb {R}}\) be open intervals. Let \(m\ge 2\), \(n\ge 1\), \(\alpha \in I\) and let \(f:I\times J\rightarrow {\mathbb {R}}\) be a box-(m, n)-convex and (m, n)-regular function. For every \(y\in J\), let \(F(\cdot ,y)\) be the right-derivative of \(f(\cdot ,y)\) (i.e., \(F=f'_{x^+}\)). Then, the function \(F:I\times J\rightarrow {\mathbb {R}}\) is box-\((m-1,n)\)-convex and \((m-1,n)\)-regular, and \(f(x,y)=f(\alpha ,y)+\int _\alpha ^x F(t,y)dt\) for \(x\in I\), \(y\in J\).

The same holds, when the roles of x and y are exchanged.

Proof

Since \(f(\cdot ,y)\) is a linear combination of \((m-1)\)-convex functions and \(m-1\ge 1\), we obtain that F is well defined and \(f(x,y)=f(\alpha ,y)+\int _\alpha ^x F(t,y)dt\). We need to show that F is box-\((m-1,n)\)-convex and \((m-1,n)\)-regular.

The box-\((m-1,n)\)-convexity. Let \(x_1,\dots ,x_m\in I\) be pairwise distinct and let \(y_0,\dots ,y_n\in J\) be pairwise distinct. Using Lemma 8, we obtain

$$\begin{aligned}{} & {} \left[ {\begin{array}{c} {x_1,\quad x_2, \quad \ldots \quad x_m}\\ \\ {y_0, \quad y_1, \quad \ldots \quad y_n} \end{array}};F\right] =\left[ y_0, y_1,\ldots ,y_n;\left[ x_1, x_2,\ldots , x_m;F(\cdot ,y) \right] \right] \\{} & {} \quad =\left[ y_0, y_1,\ldots ,y_n;\sum _{k=1}^m\lim _{x_0\downarrow x_k}\left[ x_0, x_1,\ldots , x_m;f(\cdot ,y) \right] \right] \\{} & {} \quad =\sum _{k=1}^m\lim _{x_0\downarrow x_k}\left[ {\begin{array}{c} {x_0,\quad x_1, \quad \ldots \quad x_m}\\ \\ {y_0, \quad y_1, \quad \ldots \quad y_n} \end{array}};f\right] \ge 0, \end{aligned}$$

therefore F is box-\((m-1,n)\)-convex.

The \((m-1,n)\)-regularity. For every \(y\in J\) the function \(f(\cdot ,y)\) is a linear combination of \((m-1)\)-convex functions. Therefore \(F(\cdot ,y)=f'_R(\cdot ,y)\) is a linear combination of \((m-2)\)-convex functions. We fix \(x=u_1\in I\), pairwise distinct \(u_2,\dots ,u_m\in I{\setminus }\{x\}\) and pairwise distinct \(v_1,\dots ,v_n\in J\). Let V be an (m, n)-regular pseudo-polynomial of order \((m-1,n-1)\) given by Lemma 7. We denote \({\widetilde{f}}=f-V\) and \(\widetilde{F}(\cdot ,y)={\widetilde{f}}'_R(\cdot ,y)\). Then \(F-{\widetilde{F}}\) is an \((m-1,n)\)-regular pseudo-polynomial of order \((m-2,n-1)\). By (13), we obtain \({\widetilde{F}}(x,y)=\lim _{x_0\downarrow x}[x_0,x,u_2,\dots ,u_m;\widetilde{f}(\cdot ,y)]\cdot \prod _{i=2}^m(x-u_i)\). It follows that for pairwise distinct \(y_0,\dots ,y_n\in J\), we have

$$\begin{aligned}{}[y_0,\dots ,y_n;{\widetilde{F}}(x,\cdot )]=\lim _{x_0\downarrow x}\left[ {\begin{array}{c} {x_0,\quad u_1, \quad \ldots \quad u_m}\\ \\ {y_0, \quad y_1, \quad \ldots \quad y_n} \end{array}};f\right] \cdot \prod _{i=2}^m(x-u_i), \end{aligned}$$

which has the same sign as \(\prod _{i=2}^m(x-u_i)\). It follows that \({\widetilde{F}}(x,\cdot )\) is \((n-1)\)-convex or \((n-1)\)-concave. Therefore, \(F(x,\cdot )\) is a linear combination of \((n-1)\)-convex functions. This finishes the proof of the \((m-1,n)\)-regularity of F.

Obviously, the same holds, when the roles of x and y are exchanged. \(\square \)

The final two lemmas describe how integration affects a box-(m, n)-convex function.

Lemma 10

Let \(n\ge 1\), let \(I\subset {\mathbb {R}}\) be an interval, \(\alpha \in I\) and let \(f:I\rightarrow {\mathbb {R}}\) be an integrable function. For pairwise distinct points \(x_0,x_1,\dots ,x_n\in I\), we have

$$\begin{aligned} \left[ x_0,x_1,\dots ,x_n;\int _\alpha ^\cdot f(t)dt\right] =\int _0^1t^{n-1}[x_{1,t},\dots ,x_{n,t};f]dt, \end{aligned}$$

where \(x_{i,t}=tx_i+(1-t)x_0\) for \(t\in [0,1]\) and \(i=1,2,\dots ,n\).

Proof

The proof of the lemma is by induction on n. For \(n=1\) we have

$$\begin{aligned} \left[ x_0,x_1;\int _\alpha ^\cdot f(t)dt\right] =\frac{\int _{x_0}^{x_1} f(t)dt}{x_1-x_0}=\int _0^1f(x_{1,t})dt=\int _0^1t^{1-1}[x_{1,t};f]dt. \end{aligned}$$

In the induction step, we use the definition of the divided difference and the fact that it does not depend on the permutation of the points \(x_0,\dots ,x_{n+1}\).

$$\begin{aligned}{} & {} \left[ x_0,x_1,\dots ,x_{n+1};\int _\alpha ^\cdot f(t)dt\right] =\left[ x_1,x_0,x_2,\dots ,x_{n+1};\int _\alpha ^\cdot f(t)dt\right] \\{} & {} \quad =\frac{\left[ x_0,x_2,\dots ,x_{n+1};\int _\alpha ^\cdot f(t)dt\right] -\left[ x_0,x_1,\dots ,x_n;\int _\alpha ^\cdot f(t)dt\right] }{x_{n+1}-x_1}\\{} & {} \quad =\int _0^1t^{n-1}\frac{[x_{2,t},\dots ,x_{n+1,t};f]-[x_{1,t},\dots ,x_{n,t};f]}{x_{n+1}-x_1}dt\\{} & {} \quad =\int _0^1t^n\frac{[x_{2,t},\dots ,x_{n+1,t};f]-[x_{1,t},\dots ,x_{n,t};f]}{x_{n+1,t}-x_{1,t}}dt\\{} & {} \quad =\int _0^1t^{(n+1)-1}[x_{1,t},\dots ,x_{n+1,t};f]dt. \end{aligned}$$

\(\square \)
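Lemma 10 can be tested numerically. For \(f(t)=t^2\), \(\alpha =0\) and three points (so \(n=2\)), the left-hand side is \([x_0,x_1,x_2;x^3/3]=(x_0+x_1+x_2)/3\), and a midpoint-rule quadrature of the right-hand side reproduces it. A Python sketch (the quadrature resolution and the point choice are ours):

```python
def divdiff(xs, f):
    if len(xs) == 1:
        return f(xs[0])
    return (divdiff(xs[1:], f) - divdiff(xs[:-1], f)) / (xs[-1] - xs[0])

f = lambda t: t**2
F = lambda x: x**3 / 3            # antiderivative of f with alpha = 0
xs = [0.3, 1.1, 2.0]              # x0, x1, x2; here n = 2

lhs = divdiff(xs, F)              # left-hand side of the identity

def integrand(t):                 # t^(n-1) * [x_{1,t}, ..., x_{n,t}; f]
    pts = [t * x + (1 - t) * xs[0] for x in xs[1:]]
    return t ** (len(xs) - 2) * divdiff(pts, f)

# midpoint rule avoids the endpoint t = 0, where the points x_{i,t} merge
N = 2000
rhs = sum(integrand((i + 0.5) / N) for i in range(N)) / N

print(abs(lhs - rhs) < 1e-6)  # True
```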

Lemma 11

Let \(m,n\ge 1\). Let \(f:I\times J \rightarrow {\mathbb {R}}\) be a box-(m, n)-convex function. For any \(\alpha \in I\) and \(\beta \in J\), let \(\varphi (x,y)=\int _\alpha ^xf(t,y)dt\) and \(\psi (x,y)=\int _\beta ^yf(x,t)dt\). If the functions \(\varphi \) and \(\psi \) are well defined, then \(\varphi \) is box-\((m+1,n)\)-convex and \(\psi \) is box-\((m,n+1)\)-convex. If f is (m, n)-regular, then \(\varphi \) is \((m+1,n)\)-regular and \(\psi \) is \((m,n+1)\)-regular.

Proof

Let \(f:I \times J\rightarrow {\mathbb {R}}\) be a box-(m, n)-convex function, \(\alpha \in I\) and let \(x_0, x_1,\ldots ,x_{m+1}\) be pairwise distinct points of I and \(y_0,y_1,\ldots ,y_n\) be pairwise distinct points of J. Since the function \(\varphi \) is well defined, we may use Lemma 10, and we obtain

$$\begin{aligned}{} & {} \left[ {\begin{array}{c} {x_0,\quad x_1, \quad \ldots \quad x_{m+1}}\\ \\ {y_0, \quad y_1, \quad \ldots \quad y_n} \end{array}};\int _\alpha ^x f(t,y)dt \right] \\{} & {} \quad =\left[ y_0, y_1,\ldots ,y_n;\left[ x_0, x_1,\ldots , x_{m+1};\int _\alpha ^\cdot f(t,y)dt \right] \right] \\{} & {} \quad =\left[ y_0, y_1,\ldots ,y_n;\int _0^1t^m[x_{1,t},\dots ,x_{m+1,t};f(\cdot ,y)]dt\right] \\{} & {} \quad =\int _0^1t^m\left[ {\begin{array}{c} {x_{1,t},\quad x_{2,t}, \quad \ldots \quad x_{m+1,t}}\\ \\ {y_0, \quad y_1, \quad \ldots \quad y_n} \end{array}};f\right] dt\ge 0. \end{aligned}$$

It follows that \(\varphi (x,y)=\int _\alpha ^xf(t,y)dt\) is box-\((m+1,n)\)-convex. The proof that \(\psi (x,y)=\int _\beta ^yf(x,t)dt\) is box-\((m,n+1)\)-convex is analogous.

Assume that f is (m, n)-regular. For every \(y\in J\), the function \(f(\cdot ,y)\) is a linear combination of \((m-1)\)-convex functions, hence \(\varphi (\cdot ,y)\) is a linear combination of m-convex functions. Let \(x\in I\). We fix pairwise distinct \(u_1,\dots ,u_m\in I\) smaller than \(\min (\alpha ,x)\), and pairwise distinct \(v_1,\dots ,v_n\in J\). Let V be an (m, n)-regular pseudo-polynomial of order \((m-1,n-1)\) given by Lemma 7. We denote \({\widetilde{f}}=f-V\) and \({\widetilde{\varphi }}(x,y)=\int _\alpha ^x{\widetilde{f}}(t,y)dt\). Then \(\varphi -{\widetilde{\varphi }}\) is an \((m+1,n)\)-regular pseudo-polynomial of order \((m,n-1)\). By Lemma 6, for every t between \(\alpha \) and x the function \(\widetilde{f}(t,\cdot )\) is \((n-1)\)-convex. It follows that for pairwise distinct \(y_0,\dots ,y_n\in J\) the divided difference

$$\begin{aligned} \left[ y_0,\dots ,y_n;{\widetilde{\varphi }}(x,\cdot )\right] =\int _\alpha ^x[y_0,\dots ,y_n;{\widetilde{f}}(t,\cdot )]dt \end{aligned}$$

is non-negative for \(\alpha \le x\) and non-positive for \(\alpha \ge x\). In either case, \({\widetilde{\varphi }}(x,\cdot )\) is \((n-1)\)-convex or \((n-1)\)-concave. Therefore, \(\varphi (x,\cdot )\) is a linear combination of \((n-1)\)-convex functions. This shows that \(\varphi \) is \((m+1,n)\)-regular. \(\square \)

In the following theorems, we give the integral representations of box-(m, n)-convex functions.

Theorem 12

Let \(I,J\subset {\mathbb {R}}\) be open intervals (bounded or unbounded), \(m,n\ge 2\). Let \(f:I\times J\rightarrow {\mathbb {R}}\) be a function, \(\alpha \in I\) and \(\beta \in J\). Then f is box-(m, n)-convex if and only if it is of the form

$$\begin{aligned}{} & {} f(x,y)=W(x,y)\nonumber \\{} & {} \quad +\int _{\alpha }^{x}\int _{\alpha }^{x_2}\ldots \int _{\alpha }^{x_{m-1}} \int _{\beta }^{y} \int _{\beta }^{y_2}\ldots \int _{\beta }^{y_{n-1}}g(x_m,y_n) dy_n \ldots dy_2 dx_m \ldots d x_2\nonumber \\ \end{aligned}$$
(14)

for all \((x,y)\in I\times J\), where \(W:I\times J \rightarrow \mathbb R\) is a pseudo-polynomial of order \((m-1,n-1)\) and \(g:I\times J \rightarrow {\mathbb {R}}\) is a box-monotone (1, 1)-regular function.

Proof

\((\Rightarrow )\) Let \(f:I\times J\rightarrow {\mathbb {R}}\) be a box-(m, n)-convex function. If f is (m, n)-regular, then (14) is an immediate consequence of Lemma 9 (we skip a trivial induction on \(m+n\)).

If f is not (m, n)-regular, then we first use Lemma 7 (obtaining an appropriate pseudo-polynomial V of order \((m-1,n-1)\)), and then we apply the above argument to the (m, n)-regular function \(f-V\).

\((\Leftarrow )\) Assume that f is of the form (14). By Lemma 11, it follows that f is box-(m, n)-convex. The theorem is proved. \(\square \)

Theorem 13

Let \(m,n\ge 2\), let \(I,J\subset {\mathbb {R}}\) be open intervals, and let \(f:I\times J\rightarrow {\mathbb {R}}\) be a function. Let \(\alpha \in I\) and \(\beta \in J\). Then f is box-(m, n)-convex if and only if it is of the form

$$\begin{aligned}{} & {} f(x,y)=W(x,y)\nonumber \\{} & {} \quad +\int _{\alpha }^{x}\int _{\alpha }^{x_2}\ldots \int _{\alpha }^{x_{m-1}} \int _{\beta }^{y} \int _{\beta }^{y_2}\ldots \int _{\beta }^{y_{n-1}}g(x_m,y_n) dy_n \ldots dy_2 dx_m \ldots d x_2\nonumber \\ \end{aligned}$$
(15)

for all \((x,y)\in I\times J\), where \(W:I\times J \rightarrow \mathbb R\) is a pseudo-polynomial of order \((m-1,n-1)\) and \(g:I\times J \rightarrow {\mathbb {R}}\) is a box-monotone (1, 1)-regular function such that \(g(\alpha , y)=g(x,\beta ) =0\) \((x\in I, y\in J )\).

Proof

By Theorem 12, f is box-(m, n)-convex if and only if it is of the form (14). The function \(g^*_{\alpha ,\beta }(x_m,y_n) =g(x_m,y_n)-g(\alpha ,y_n)-g(x_m,\beta )+g(\alpha ,\beta )\) is a box-monotone (1, 1)-regular function such that \(g^*_{\alpha ,\beta }(\alpha , y)=g^*_{\alpha ,\beta }(x,\beta ) =0\) \((x\in I, y\in J)\), and \(R(x_m,y_n)=g(\alpha ,y_n)+g(x_m,\beta )-g(\alpha ,\beta )\) is a pseudo-polynomial of order (0, 0). Writing \(g=g^*_{\alpha ,\beta }+R\) in (14) and absorbing into W the iterated integral of R, which is a pseudo-polynomial of order \((m-1,n-1)\), we obtain (15) with g replaced by \(g^*_{\alpha ,\beta }\). The theorem is proved. \(\square \)
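The normalization used in this proof is easy to verify directly: for any g, the function \(g^*_{\alpha ,\beta }\) vanishes on the lines \(x=\alpha \) and \(y=\beta \). A short sketch (ours) with an arbitrary sample g of our own choosing:

```python
# g*(x, y) = g(x, y) - g(alpha, y) - g(x, beta) + g(alpha, beta)
# vanishes whenever x = alpha or y = beta.
alpha, beta = 0.5, -1.0
g = lambda x, y: x * x * y + 3.0 * x + y      # arbitrary sample function

def g_star(x, y):
    return g(x, y) - g(alpha, y) - g(x, beta) + g(alpha, beta)

# values on the two axis lines x = alpha and y = beta
vals = ([g_star(alpha, y) for y in (-2.0, 0.0, 4.0)]
        + [g_star(x, beta) for x in (-1.0, 0.5, 2.0)])
```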

If \(g:I\times J \rightarrow {\mathbb {R}}\) is a box-monotone (1, 1)-regular function, then the function \(\int _{\alpha }^x \int _{\beta }^y g(u,v)\;dv\;du \) \( ((x,y)\in I\times J)\) is continuous. Thus, it is not difficult to prove the following lemma and theorem.

Lemma 14

Let

$$\begin{aligned}{} & {} \Psi _{m,n}g(x,y)\\{} & {} \quad =\int _{\alpha }^{x}\int _{\alpha }^{x_2}\ldots \int _{\alpha }^{x_{m-1}} \int _{\beta }^{y} \int _{\beta }^{y_2}\ldots \int _{\beta }^{y_{n-1}}g(x_m,y_n) dy_n \ldots dy_2 dx_m \ldots d x_2 \end{aligned}$$

for all \((x,y)\in I\times J\), where \(g:I\times J \rightarrow {\mathbb {R}}\) is a box-monotone (1, 1)-regular function. Then \(\Psi _{m,n}g\in C^{m+n-4} \) and

$$\begin{aligned} \frac{\partial ^{m+n-4}\;\Psi _{m,n} g}{\partial x^{m-2} \; \partial y^{n-2}} (x,y)=\int _{\alpha }^x \int _{\beta }^y g(u,v)\;dv\;du. \end{aligned}$$

Theorem 15

Let \(I,J\subset {\mathbb {R}}\) be open intervals, \(m,n\ge 2\). Let \(f:I\times J\rightarrow {\mathbb {R}}\) be a function, \(\alpha \in I\) and \(\beta \in J\). Then f is box-(m, n)-convex if and only if it is of the form

$$\begin{aligned} f(x,y)=W(x,y)+\Phi _{m,n}(x,y), \end{aligned}$$

where \(W:I\times J \rightarrow {\mathbb {R}}\) is a pseudo-polynomial of order \((m-1,n-1)\) and \(\Phi _{m,n} :I\times J\rightarrow {\mathbb {R}}\) is a function of the class \(C^{m+n-4}\) such that its partial derivative \( \frac{\partial ^{m+n-4}\;\Phi _{m,n} }{\partial x^{m-2} \; \partial y^{n-2}} (x,y) \) is a box-(2, 2)-convex function.

The characterization of box-(m, n)-convex functions given in Theorem 15 is a counterpart of the characterization of n-convex functions given in Proposition 1.

Theorem 16

Let \(I,J\subset {\mathbb {R}}\) be open intervals (bounded or unbounded), \(m,n\ge 2\). Let \(f:I\times J\rightarrow {\mathbb {R}}\) be a function, \(\alpha \in I\) and \(\beta \in J\). Then f is box-(m, n)-convex if and only if it is of the form

$$\begin{aligned} f(x,y)=W(x,y)+\int _{\alpha }^x \int _{\beta }^y \frac{(x-u)^{m-1}}{(m-1)!}\;\frac{(y-v)^{n-1}}{(n-1)!}\;dg(u,v) \end{aligned}$$

for all \((x,y)\in I\times J\), where \(W:I\times J \rightarrow \mathbb R\) is a pseudo-polynomial of order \((m-1,n-1)\), \(g:I\times J \rightarrow {\mathbb {R}}\) is a box-monotone (1, 1)-regular function, and dg(u, v) is the Borel measure generated by g.

Proof

By Theorem 13, f is box-(m, n)-convex if and only if it is of the form (15). Since \(m,n\ge 2\) and g is a box-monotone (1, 1)-regular function such that \(g(\alpha , y)=g(x,\beta ) =0\) \((x\in I, y\in J )\), it follows that

$$\begin{aligned} \int _{\alpha }^{x_{m-1}} \int _{\beta }^{y_{n-1}}g(x_m,y_n)dy_ndx_m =\int _{\alpha }^{x_{m-1}} \int _{\beta }^{y_{n-1}}\int _{\alpha }^{x_m} \int _{\beta }^{y_n} dg(u,v)dy_ndx_m. \end{aligned}$$

Consequently, changing the order of integration, we obtain

$$\begin{aligned} f(x,y)= & {} W(x,y)\\{} & {} +\int _{\alpha }^{x}\int _{\alpha }^{x_2}\ldots \int _{\alpha }^{x_{m}} \int _{\beta }^{y} \int _{\beta }^{y_2}\ldots \int _{\beta }^{y_{n}}dg(u,v) dy_n \ldots dy_2 dx_m \ldots d x_2\\= & {} W(x,y)\\{} & {} +\int _{\alpha }^x\int _{\beta }^y\int _u^x\int _u^{x_2}\ldots \int _u^{x_{m-1}} \int _v^y \int _v^{y_2}\ldots \\ {}{} & {} \int _v^{y_{n-1}}\ dy_n \ldots dy_2 dx_m \ldots d x_2dg(u,v)\\= & {} W(x,y)+\int _{\alpha }^x \int _{\beta }^y \frac{(x-u)^{m-1}}{(m-1)!}\;\frac{(y-v)^{n-1}}{(n-1)!}\;dg(u,v). \end{aligned}$$

The theorem is proved. \(\square \)
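The key computation in this proof is the Cauchy formula for repeated integration: integrating the constant 1 over \(u\le x_m\le \dots \le x_2\le x\) produces the kernel \((x-u)^{m-1}/(m-1)!\). A small exact-arithmetic sketch (our own illustration, with sample values of our choosing):

```python
from fractions import Fraction
from math import factorial

def repeated_integral_coeffs(m):
    # Start from the constant polynomial 1 and take the antiderivative
    # vanishing at 0 a total of (m - 1) times; the result should be
    # the polynomial s^(m-1)/(m-1)!.
    coeffs = [Fraction(1)]
    for _ in range(m - 1):
        coeffs = [Fraction(0)] + [c / (i + 1) for i, c in enumerate(coeffs)]
    return coeffs

def eval_poly(coeffs, s):
    return sum(c * s ** i for i, c in enumerate(coeffs))

# (m-1)-fold iterated integral of 1 over u <= x_m <= ... <= x_2 <= x
# versus the closed-form kernel (x - u)^(m-1)/(m-1)!.
m, x, u = 4, Fraction(3), Fraction(1)
lhs = eval_poly(repeated_integral_coeffs(m), x - u)
rhs = (x - u) ** (m - 1) / factorial(m - 1)
```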

As an immediate consequence of Theorem 16, we obtain the following theorem, which is a generalization of Theorem 2 on the integral representation of box-monotone functions.

Theorem 17

Let \(I,J\subset {\mathbb {R}}\) be open intervals, \(m,n\ge 2\). Let \(f:I\times J\rightarrow {\mathbb {R}}\) be a function. Then f is box-(m, n)-convex if and only if for all \(a,b\in I\), \(c,d\in J\), such that \(a<b\), \(c<d\),

$$\begin{aligned} f(x,y)=W(x,y)+\int _a^b \int _c^d \frac{(x-u)_+^{m-1}}{(m-1)!}\;\frac{(y-v)_+^{n-1}}{(n-1)!}\;d\mu (u,v) \end{aligned}$$
(16)

for all \((x,y)\in [a,b]\times [c,d]\), where W is a pseudo-polynomial of order \((m-1,n-1)\) and \(\mu \) is a finite Borel measure on \([a,b]\times [c,d]\).

The following theorem can be regarded as a counterpart of the theorem on decomposition of an n-convex function into a sum of n-times monotone functions and a polynomial [15, Theorem 3.2, p. 741].

Theorem 18

Let \(I,J\subset {\mathbb {R}}\) be open intervals, \(m,n\ge 2\). Let \(f:I\times J\rightarrow {\mathbb {R}}\) be a function. Then f is box-(m, n)-convex if and only if for all \(a,b\in I\), \(c,d\in J\), such that \(a<b\), \(c<d\),

$$\begin{aligned} f(x,y)=W(x,y)+P(x,y) \end{aligned}$$
(17)

for all \((x,y)\in [a,b]\times [c,d]\), where W is a pseudo-polynomial of order \((m-1,n-1)\) and \(P:[a,b]\times [c,d]\rightarrow {\mathbb {R}}\) is a box-(k, l)-convex function for all \(k=1,\ldots , m\) and \(l=1,\ldots ,n\).

Proof

\((\Rightarrow )\) Let f be box-(m, n)-convex. Then, by Theorem 17, f is of the form (16). Define the function P(x, y) as follows:

$$\begin{aligned}{} & {} P(x,y)=\int _a^b \int _c^d \frac{(x-u)_+^{m-1}}{(m-1)!}\;\frac{(y-v)_+^{n-1}}{(n-1)!}\;d\mu (u,v)\\{} & {} \quad \qquad =\int _a^b \int _c^d\frac{(x-u)_+^{k-1}}{(k-1)!}\;\frac{(y-v)_+^{l-1}}{(l-1)!}\;d\mu _{(k,l)}(u,v)\\{} & {} d\mu _{(k,l)}(u,v)=\frac{(k-1)!\;(x-u)_+^{m-k}}{(m-1)!}\;\frac{(l-1)!\;(y-v)_+^{n-l}}{(n-1)!}\;d\mu (u,v), \end{aligned}$$

where \(k=1,\ldots , m\) and \(l=1,\ldots , n\). Then, by Theorem 17, the function P(x, y) is box-(k, l)-convex.

(\(\Leftarrow \)) The proof is obvious. The theorem is proved. \(\square \)
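The change of measure in this proof rests on a pointwise factorization of the truncated-power kernel, \((x-u)_+^{m-1}/(m-1)! = \bigl((x-u)_+^{k-1}/(k-1)!\bigr)\cdot \bigl((k-1)!\,(x-u)_+^{m-k}/(m-1)!\bigr)\). The sketch below (ours) checks this identity at a few sample values:

```python
from math import factorial

def tpow(t, p):
    # truncated power t_+^p; for p = 0 this returns 1 even when t < 0,
    # which is harmless below because it is then multiplied by 0
    return max(t, 0.0) ** p

def lhs(x, u, m):
    return tpow(x - u, m - 1) / factorial(m - 1)

def rhs(x, u, m, k):
    return (tpow(x - u, k - 1) / factorial(k - 1)) * \
           (factorial(k - 1) * tpow(x - u, m - k) / factorial(m - 1))

# check both x > u and x < u, for every admissible k
checks = [abs(lhs(x, u, m) - rhs(x, u, m, k)) < 1e-12
          for m in (2, 3, 5)
          for k in range(1, m + 1)
          for (x, u) in ((2.0, 0.5), (0.5, 2.0))]
```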

5 The Box-(m, n)-Convex Orders

By analogy with the n-convex orders, we define the box-(m, n)-convex orders as follows.

Definition 2

Let \(m,n\ge 1\) and let \(\tau _1,\tau _2\) be two signed Borel measures on \([a,b]\times [c,d]\) such that

$$\begin{aligned} \int _a^b\int _c^d f(x,y)d\tau _1(x,y)\ge \int _a^b\int _c^d f(x,y)d\tau _2(x,y) \end{aligned}$$

for all box-(m, n)-convex continuous functions \(f:[a,b]\times [c,d] \rightarrow {\mathbb {R}}\), provided the integrals exist. Then we say that the signed measure \(\tau _1\) on \([a,b]\times [c,d]\) is greater than \(\tau _2\) in the box-(m, n)-convex order.

Theorem 19

Let \(m,n\ge 2\). Let \(S=[a,b]\times [c,d]\). Let \(\tau \) be a signed finite Borel measure on S. Then in order that

$$\begin{aligned} \iint \limits _S f(x,y)d\tau (x,y)\ge 0 \end{aligned}$$
(18)

for all continuous box-(m, n)-convex functions \(f:S\rightarrow {\mathbb R}\), it is necessary and sufficient that

(a):

\(\iint \nolimits _S x^i\,A_i(y)d\tau (x,y)=0, \quad \) \(\iint \nolimits _S y^j\,B_j(x)d\tau (x,y)= 0\) for all continuous functions \(A_i\), \(B_j:{\mathbb R}\rightarrow {\mathbb R}\)    \((i=0,1,\ldots , \ m-1\), \(j=0,1,\ldots , n-1)\),

(b):

\(\iint \nolimits _S \frac{(x-A)_+^{m-1}}{(m-1)!}\;\frac{(y-B)_+^{n-1}}{(n-1)!}d\tau (x,y)\ge 0\)    \((\,(A,B)\in S).\)

Proof

\((\Rightarrow )\) By Theorem 17, we conclude that inequality (18) is satisfied for all continuous box-(m, n)-convex functions \(f:S\rightarrow {\mathbb R}\), if and only if it is satisfied for all functions f of the form: \(f(x,y)=x^i\,A_i(y)\), \(f(x,y)=y^j\,B_j(x)\) and \( f_{A,B}(x,y)=\frac{(x-A)_+^{m-1}}{(m-1)!}\;\frac{(y-B)_+^{n-1}}{(n-1)!}\). Taking \(f(x,y)=x^i\,A_i(y)\), we obtain \(\iint \nolimits _S x^i\,A_i(y)d\tau (x,y)\ge 0 \). For \(f(x,y)=x^i(-A_i(y))\), we obtain \(\iint \nolimits _S x^i(-A_i(y))\) \(d\tau (x,y) \ge 0 \), which implies \(\iint \nolimits _S x^iA_i(y)d\tau (x,y)=0\). Similarly, we obtain \(\iint \nolimits _S y^j\,B_j(x)d\tau (x,y)= 0\). Taking \(f(x,y)=f_{A,B}(x,y)=\frac{(x-A)_+^{m-1}}{(m-1)!}\;\frac{(y-B)_+^{n-1}}{(n-1)!}\), we obtain \(\iint \nolimits _S \frac{(x-A)_+^{m-1}}{(m-1)!}\;\frac{(y-B)_+^{n-1}}{(n-1)!}d\tau (x,y)\ge 0\).

\((\Leftarrow )\) Obviously, if conditions (a) and (b) are satisfied, then by Theorem 17, inequality (18) is satisfied for all continuous box-(m, n)-convex functions \(f:S\rightarrow {\mathbb R}\). The theorem is proved. \(\square \)

Let \(\gamma _1\) and \(\gamma _2\) be two signed finite Borel measures on \([a,b]\) and \([c,d]\), respectively. Then the product measure \(\gamma _1\otimes \gamma _2\) is said to be greater than 0 in the box-(m, n)-convex order if

$$\begin{aligned} \int _a^b\int _c^d g(x,y)\gamma _2(dy)\gamma _1(dx)\ge 0 \end{aligned}$$

for all continuous box-(m, n)-convex functions \(g:[a,b]\times [c,d] \rightarrow {\mathbb {R}}\).

Theorem 20

Let \(m,n\ge 2\). Let \(\tau _1\), \(\tau _2\) be two non-zero signed finite Borel measures on \([a,b]\) and \([c,d]\), respectively, such that \(\tau _1([a,b])=\tau _2([c,d])=0\). Then

$$\begin{aligned} \int _a^b\int _c^d f(x,y)\,\tau _2(dy)\,\tau _1(dx)\ge 0 \end{aligned}$$
(19)

for all continuous box-(m, n)-convex functions \(f:[a,b]\times [c,d]\rightarrow {\mathbb R}\), if and only if the following conditions are satisfied:

(a):

\(\int _a^b x^i\,\tau _1(dx)=0\)   \((i=0,1,\ldots , \ m-1),\)

(b):

\(\int _c^d y^j\,\tau _2(dy)=0\)   \((j=0,1,\ldots , n-1). \)

(c):

\(\int _a^b\frac{(x-A)_+^{m-1}}{(m-1)!}\,\tau _1(dx)\times \int _c^d \frac{(y-B)_+^{n-1}}{(n-1)!}\tau _2(dy)\ge 0\)    \((\,(A,B)\in [a,b]\times [c,d]).\)

Proof

\((\Rightarrow )\) Since \(\tau _2\) is a non-zero measure on \([c,d]\), there exists a continuous function \(A_0:[c,d]\rightarrow {\mathbb R}\) such that \(\int _c^d \,A_0(y)\tau _2(dy)\ne 0\). Taking \(A_i(y)=A_0(y)\), \(i=0,1,\ldots ,m-1\), by Theorem 19, we obtain

$$\begin{aligned} \int _a^b\int _c^d x^i\,A_0(y)\,\tau _2(dy)\,\tau _1(dx)=\int _a^b x^i\,\tau _1(dx)\int _c^d \,A_0(y)\tau _2(dy)= 0. \end{aligned}$$

Taking into account that \(\int _c^d A_0(y)\tau _2(dy)\ne 0\), we conclude that \(\int _a^b x^i\tau _1(dx)=0\). Similarly, we obtain (b). Taking the product measure \(\tau =\tau _1\otimes \tau _2\), (c) is an immediate consequence of Theorem 19 (b).

\((\Leftarrow )\) If conditions (a) and (b) are satisfied, then we have the equalities \(\int _a^b\int _c^d x^i\,A_i(y)\,\tau _2(dy)\,\tau _1(dx)=\int _a^b x^i\,\tau _1(dx)\int _c^d \,A_i(y)\tau _2(dy)= 0\) and \(\int _c^d\int _a^b y^j\,B_j(x)\) \(\tau _1(dx)\,\tau _2(dy)= \int _c^d y^j\,\tau _2(dy)\int _a^b \,B_j(x)\tau _1(dx)= 0\) for all continuous functions \(A_i:[c,d]\rightarrow {\mathbb R}\), \(B_j:[a,b]\rightarrow {\mathbb R}\), \(i=0,1,\ldots ,m-1,\) \(j=0,1,\ldots , n-1\). Taking into account also condition (c), by Theorem 19, we conclude that inequality (19) is satisfied for all continuous box-(m, n)-convex functions \(f:[a,b]\times [c,d]\rightarrow {\mathbb R}\). The theorem is proved. \(\square \)

Theorems 19 and 20 give a characterization of the box-(m, n)-convex ordering. They generalize the Levin–Stečkin theorems as well as the Denuit–Lefèvre–Shaked theorems concerning n-convex orderings (see [18] and the references therein).

6 The Raşa Type Inequality for Box-(m, n)-Convex Functions

Let \(\mu \) and \(\nu \) be two signed Borel measures on \({\mathbb {R}}\) such that

$$\begin{aligned} \int _{{\mathbb {R}}}\varphi (x)\mu (dx)\le \int _{\mathbb R}\varphi (x)\nu (dx) \quad \text {for all { n}-convex functions }\ \varphi :{\mathbb {R}}\rightarrow {\mathbb {R}}, \end{aligned}$$

provided the integrals exist. Then \(\mu \) is said to be smaller than \(\nu \) in the n-convex order (denoted as \(\mu \le _{n-cx}\nu \) ). Then the Raşa type inequality (5) can be written equivalently as

(20)

where \(\mu =B(n,x)\) and \(\nu =B(n,y)\).

In [9, 10], we gave a useful sufficient condition, as well as necessary and sufficient conditions, for Borel measures \(\mu \) and \(\nu \) to satisfy a generalized Raşa inequality.

Theorems 4 and 5, on necessary and sufficient conditions for probability distributions to satisfy the Raşa type inequality for box-(1, 1)-convex functions, can be rewritten in terms of the box-(1, 1)-convex order.

Theorem 21

Let \(\mu \) and \(\nu \) be two probability distributions on \([a,b]\). Then

(21)

if and only if

$$\begin{aligned} \nu -\mu \le _{st}0\quad \text {or} \quad \nu -\mu \ge _{st}0. \end{aligned}$$

Note that inequality (21) is equivalent to inequality (11). We consider the following generalization of the Raşa inequality (21):

(22)

Note that inequality (22) is a probabilistic version of the Raşa inequality. On the other hand, in [5, 6], an analytical version of inequality (22) for binomial distributions has been recently proved. We will prove a necessary and sufficient condition for a version of the Raşa inequality more general than (22).

We will need two lemmas.

Lemma 22

[10, Lemma 5, p. 5] Let \(\gamma _1, \ldots ,\gamma _n\) be signed measures on \({\mathbb R}\) such that \(\gamma _i({\mathbb R})=0\) and \(\int _{-\infty }^{\infty } |x|^{n-1} |\gamma _i|(dx)<\infty \), \(i=1,\ldots , n\). Then

(a):

\(\gamma _1* \ldots *\gamma _k ({\mathbb R})=0 \), \(k=1,\ldots , n\),

(b):

\(\int _{-\infty }^{\infty } x^k \gamma _1* \ldots *\gamma _{m} (dx)=0\) for all integers \(0<k<m\le n\).

Lemma 23

[10, p. 7] Let \(\tau _1, \ldots ,\tau _q\) be signed measures on \({\mathbb R}\) such that \(\tau _i({\mathbb R})=0\), \(i=1,\ldots , q\).

Then for all \(A\in {\mathbb R}\)

$$\begin{aligned} \int _{-\infty }^{\infty } \frac{\bigl (x-A\bigr )^{q-1}_+ }{(q-1)!}\, \tau _1*\cdots *\tau _q(dx)= {\overline{F}}_{\tau _1} * \overline{F}_{\tau _2}*\cdots * {\overline{F}}_{\tau _q}(A). \end{aligned}$$
(23)
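Lemma 23 can be sanity-checked numerically for \(q=2\) with two-point signed measures \(\tau _i=\delta _{b_i}-\delta _{a_i}\). In the sketch below (ours), \({\overline{F}}_\tau (t)=\tau ((t,\infty ))\), the \(*\) on the right-hand side of (23) is interpreted as the ordinary convolution of functions on \({\mathbb R}\), approximated by a Riemann sum, and all concrete numbers are our own choices:

```python
# tau_i = delta_{b_i} - delta_{a_i}; each has total mass 0
a1, b1 = 0.0, 1.0
a2, b2 = 0.0, 2.0

def F_bar(a, b, t):
    # survival function of delta_b - delta_a: tau((t, inf))
    return (1.0 if b > t else 0.0) - (1.0 if a > t else 0.0)

def lhs(A):
    # integral of (x - A)_+ against tau1 * tau2, the four-point signed
    # measure delta_{b1+b2} - delta_{b1+a2} - delta_{a1+b2} + delta_{a1+a2}
    return (max(b1 + b2 - A, 0.0) - max(b1 + a2 - A, 0.0)
            - max(a1 + b2 - A, 0.0) + max(a1 + a2 - A, 0.0))

def rhs(A, h=1e-3):
    # (F_bar_tau1 * F_bar_tau2)(A) as a function convolution,
    # Riemann sum over t in [-1, 4)
    n = int(5 / h)
    return h * sum(F_bar(a1, b1, A - (-1.0 + i * h))
                   * F_bar(a2, b2, -1.0 + i * h) for i in range(n))

gap = max(abs(lhs(A) - rhs(A)) for A in (-0.5, 0.5, 1.5, 2.5, 3.5))
```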

Theorem 24

Let \(m,n\ge 2\). Let \(\mu _1, \ldots , \mu _m\), \(\nu _1, \ldots , \nu _m\), \(\alpha _1, \ldots , \alpha _n\) and \(\beta _1,\) \(\ldots ,\) \(\beta _n\) be probability measures on \({\mathbb R}\), such that \(\int _{-\infty }^{\infty } |x|^{m-1} \mu _i(dx)<\infty \), \(\int _{-\infty }^{\infty } |x|^{m-1} \nu _i(dx)<\infty \) for \(i=1,\dots ,m\), \(\int _{-\infty }^{\infty } |x|^{n-1} \alpha _j(dx)<\infty \), \(\int _{-\infty }^{\infty } |x|^{n-1}\) \(\beta _j(dx)<\infty \) for \(j=1,\dots ,n\). Then the following conditions are equivalent:

a):
b):
$$\begin{aligned}{} & {} \left[ (\overline{F}_{\nu _1}- \overline{F}_{\mu _1} ) *\cdots * (\overline{F}_{\nu _m}- \overline{F}_{\mu _m} )\right] (A)\\{} & {} \quad \times \left[ (\overline{F}_{\alpha _1}- \overline{F}_{\beta _1} ) *\cdots * (\overline{F}_{\alpha _n}- \overline{F}_{\beta _n} )\right] (B) \ge 0 \end{aligned}$$

for all \(A,B\in {\mathbb R}\).

Proof

Let \(\tau _i=\nu _i-\mu _i\), \(i=1,\ldots , m\), \(\eta _j=\alpha _j-\beta _j\), \(j=1,\ldots , n\). Then \(\tau _i\), \(\eta _j\) are signed measures such that \(\tau _i({\mathbb R})=\eta _j({\mathbb R})=0\), \(i=1,\ldots , m\), \(j=1,\ldots , n\). By Lemma 22,

$$\begin{aligned}{} & {} \int _{-\infty }^{\infty } x^i \tau _1* \cdots *\tau _{m} (dx)=0, \quad i=0,1,\ldots , m-1, \\{} & {} \int _{-\infty }^{\infty } y^j \eta _1* \cdots *\eta _{n} (dy)=0, \quad j=0,1,\ldots , n-1. \end{aligned}$$

Then, by Theorem 20,

$$\begin{aligned} \int _{-\infty }^{\infty }\int _{-\infty }^{\infty }f(x,y) \,\tau _1* \cdots *\tau _{m}(dx)\,\eta _1* \cdots *\eta _{n} (dy)\ge 0 \end{aligned}$$
(24)

for all continuous box-(m, n)-convex functions f, if and only if

$$\begin{aligned} \int _{-\infty }^{\infty }\frac{(x-A)_+^{m-1}}{(m-1)!}\,\tau _1* \cdots *\tau _{m}(dx)\times \int _{-\infty }^{\infty } \frac{(y-B)_+^{n-1}}{(n-1)!}\eta _1* \cdots *\eta _{n} (dy)\ge 0 \end{aligned}$$

for all \(A,B\in {\mathbb R}\). Taking into account Lemma 23, inequality (24) is satisfied for all continuous box-(m, n)-convex functions f, if and only if

$$\begin{aligned}{} & {} [{\overline{F}}_{\tau _1} * {\overline{F}}_{\tau _2}*\cdots * {\overline{F}}_{\tau _m}](A)\times [{\overline{F}}_{\eta _1} * {\overline{F}}_{\eta _2}*\cdots * {\overline{F}}_{\eta _n}](B)\\{} & {} \quad = \left[ (\overline{F}_{\nu _1}- \overline{F}_{\mu _1} ) *\cdots * (\overline{F}_{\nu _m}- \overline{F}_{\mu _m} )\right] (A)\\{} & {} \qquad \times \left[ (\overline{F}_{\alpha _1}- \overline{F}_{\beta _1} ) *\cdots * (\overline{F}_{\alpha _n}- \overline{F}_{\beta _n} )\right] (B) \ge 0 \end{aligned}$$

for all \(A,B\in {\mathbb R}\). The theorem is proved. \(\square \)

Taking \(\mu _i=\mu \), \(\nu _i=\nu \) \((i=1,\ldots , m)\), \(\alpha _j=\alpha \), \(\beta _j=\beta \) \((j=1,\ldots , n)\), by Theorem 24, we obtain a necessary and sufficient condition for measures satisfying inequality (22).

Theorem 25

Let \(m,n\ge 2\). Let \(\mu \), \(\nu \), \(\alpha \) and \(\beta \) be probability measures on \({\mathbb R}\), such that \(\int _{-\infty }^{\infty } |x|^{m-1} \mu (dx)<\infty \), \(\int _{-\infty }^{\infty } |x|^{m-1} \nu (dx)<\infty \), \(\int _{-\infty }^{\infty } |x|^{n-1} \alpha (dx)<\infty \) and \(\int _{-\infty }^{\infty } |x|^{n-1} \beta (dx)<\infty \). Then the following conditions are equivalent:

a):
b):

\( \left[ (\overline{F}_{\nu }- \overline{F}_{\mu } ) ^{*m}\right] (A) \times \left[ (\overline{F}_{\alpha }- \overline{F}_{\beta } ) ^{*n}\right] (B)\ge 0 \)  for all \(A,B\in {\mathbb R}.\)

7 The Hermite–Hadamard-Type and the Jensen Inequality for Box-(m, n)-Convex Functions

Let \(f :I\rightarrow \mathbb {R}\) be a convex function defined on a real interval I and \(a,b \in I\) with \(a<b\). The following double inequality

$$\begin{aligned} f\left( \frac{a+b}{2}\right) \le \frac{1}{b-a}\cdot \int _a^b f(x)\,dx\le \frac{f(a)+f(b)}{2} \end{aligned}$$
(25)

is known as the Hermite–Hadamard inequality for convex functions (see [2]).
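A quick numerical illustration of (25) (ours), with \(f(x)=x^2\) on \([0,1]\), where the three quantities are 1/4, 1/3 and 1/2:

```python
# Hermite-Hadamard (25) for the convex function f(x) = x^2 on [a, b] = [0, 1]:
# f((a+b)/2) <= mean value of f <= (f(a) + f(b))/2.
a, b = 0.0, 1.0
f = lambda x: x * x

n = 10_000  # composite midpoint rule for the integral mean
mean_value = sum(f(a + (i + 0.5) * (b - a) / n) for i in range(n)) * (b - a) / n

left = f((a + b) / 2.0)          # 0.25
right = (f(a) + f(b)) / 2.0      # 0.5
```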

A very useful sufficient condition for convex stochastic ordering is the following Ohlin lemma (see [17]).

Lemma 26

[12] Let X, Y be two random variables such that \(\mathbb {E}\,X=\mathbb {E}\,Y\). If the distribution functions \(F_X, F_Y\) cross exactly once, i.e., for some \(x_0\),

$$\begin{aligned} F_X(x) \le F_Y(x) \textit{ if } x < x_0 \textit{ and } F_X(x) \ge F_Y(x) \textit{ if } x > x_0, \end{aligned}$$

then \(X \le _{cx} Y\) \((\mu _X \le _{cx} \mu _Y)\).

Note that inequality (25) can be written as

$$\begin{aligned} \int _{I}f(x)\mu (dx)\le \int _{I}f(x)\nu (dx) \le \int _{I}f(x)\eta (dx), \end{aligned}$$

where \(\mu = \delta _{(a+b)/2}\), \(\nu (dx)=\frac{1}{b-a}\chi _{[a,b]}(x)\, dx\) and \(\eta = \frac{1}{2}(\delta _a+\delta _b)\) (see [17]). Hence the Hermite–Hadamard inequality (25) is equivalent to the convex ordering relations \(\mu \le _{cx}\nu \le _{cx}\eta \), which follow immediately from the Ohlin Lemma.

One of the most familiar and elementary inequalities in the probability theory is the Jensen inequality:

$$\begin{aligned} f\bigl (\mathbb {E}\,X \bigr ) \le \mathbb {E}\,f(X), \end{aligned}$$
(26)

where f is convex over the convex hull of the range of the random variable X (see [1]). Conversely, if (26) holds for all random variables X (for which the expectations exist), then f is a convex function. Inequality (26) can be written as

$$\begin{aligned} f\bigl (m_X\bigr ) \le \int _{-\infty }^{\infty }f(x)\mu _X(dx), \end{aligned}$$
(27)

where \(m_X=\mathbb {E}\,[X]\) and \(\mu _X\) is the distribution of the random variable X.

If the random variable X is \([m,M]\)-valued and \(f :[m,M]\rightarrow {\mathbb R}\) is a convex function, then the converse of the Jensen inequality (see [7]) reads as follows:

$$\begin{aligned} \int _{-\infty }^{\infty }f(x)\mu _X(dx)\le \frac{M-m_X}{M-m}f(m)+\frac{m_X-m}{M-m}f(M). \end{aligned}$$
(28)

The probabilistic version of (28) can be written as

$$\begin{aligned} \mathbb {E}\,f(X)\le \frac{M-\mathbb {E}\,X }{M-m}f(m)+\frac{\mathbb {E}\,X -m}{M-m}f(M). \end{aligned}$$
(29)

Remark 27

Note that inequalities (27) and (28) can be easily proved using the Ohlin Lemma. Let \(\nu =\delta _{m_X}\), \(\mu =\mu _X\) and \(\eta =\frac{M-m_X}{M-m}\delta _m+\frac{m_X-m}{M-m}\delta _M\). Since the pairs \((\nu , \mu )\) and \((\mu , \eta )\) satisfy the assumptions of the Ohlin Lemma, we obtain the relations \(\nu \le _{cx}\mu \) and \(\mu \le _{cx} \eta \), which are equivalent to (27) and (28), respectively.

Note that the Jensen inequality (27) and the converse of the Jensen inequality (28) for convex functions are special cases of the Hermite–Hadamard type inequalities for convex functions.

In this paper we obtain some Hermite–Hadamard inequalities and the Jensen inequality for box-(m, n)-convex functions and strongly box-(m, n)-convex functions.

Theorem 28

Let \(I,J\) be two real intervals and \(m,n\ge 2\). Let \(\mu _1, \nu _1\) be two probability measures on I and \(\mu _2, \nu _2\) be two probability measures on J such that \(\nu _1\le _{(m-1)-cx}\mu _1\) and \(\nu _2\le _{(n-1)-cx}\mu _2\). Then

$$\begin{aligned} \int _I\int _J f(x,y) (\nu _2-\mu _2)(dy)\,(\nu _1-\mu _1)(dx)\ge 0 \end{aligned}$$
(30)

for all continuous box-(m, n)-convex functions \(f:I\times J \rightarrow {\mathbb {R}}\), i.e.,

(31)

Theorem 28 follows immediately from Theorem 20 and the following proposition.

Proposition 5

Corollary 2.1 [18] Let \(\gamma \) be a signed Borel measure on \({\mathbb R}\), which is concentrated on the interval (ab) (bounded or unbounded) and such that \(\int _{a}^{b} |x|^n |\gamma |(dx)< \infty \). Then in order that

$$\begin{aligned} \int _{a}^{b} f(x) \gamma (dx)\ge 0 \end{aligned}$$

for all n-convex Borel functions \(f:(a,b)\rightarrow {\mathbb {R}}\), it is necessary and sufficient that \(\gamma \) verify the following conditions: (a) \( \gamma ((a,b))=0, \) (b) \( \int _{a}^{b} x^k \gamma (dx)=0 \quad \text {for }\,\,k=1,\ldots ,n, \) (c) \( \int _{a}^{b} \bigl (x-A\bigr ) ^n _+ \gamma (dx)\ge 0 \quad \text {for all }\,\,A\in (a,b). \)

Theorem 28 offers an easy way to transfer \((m-1)\)-convex orders and \((n-1)\)-convex orders to box-(m, n)-convex orders.

The following inequalities can be regarded as the Hermite–Hadamard type inequalities for box-(2, 2)-convex functions.

Theorem 29

(Hermite–Hadamard inequality for box-(2,2)-convex functions) Let \(I,J\) be two real intervals. Let \(a,b\in I\), \(c,d\in J\) with \(a<b\) and \(c<d\). Then

$$\begin{aligned}{} & {} \frac{1}{(b-a)(d-c)}\cdot \int _a^b \int _c^d f(x,y)\,dy\, dx -\frac{1}{(b-a)}\int _a^b f\left( x,\frac{c+d}{2}\right) \, dx\nonumber \\{} & {} \quad -\frac{1}{(d-c)}\cdot \int _c^d f\left( \frac{a+b}{2},y\right) \,dy +f\left( \frac{a+b}{2},\frac{c+d}{2} \right) \ge 0, \end{aligned}$$
(32)
$$\begin{aligned}{} & {} \frac{1}{(b-a)(d-c)}\cdot \int _a^b \int _c^d f(x,y)\,dy\, dx\nonumber \\{} & {} \quad -\frac{1}{2(b-a)}\int _a^b [f(x,c)+f(x,d)]\, dx -\frac{1}{2(d-c)}\cdot \int _c^d [f(a,y)+f(b,y)]\,dy\nonumber \\{} & {} \quad +\frac{1}{4}\left( f\left( a,c \right) +f\left( a,d \right) +f\left( b,c \right) +f\left( b,d \right) \right) \ge 0, \end{aligned}$$
(33)

for all continuous box-(2, 2)-convex functions \(f:I\times J \rightarrow {\mathbb {R}}\).

Proof

We consider

$$\begin{aligned}{} & {} \nu _1= \delta _{\frac{a+b}{2}}, \ \mu _1(dx)=\frac{1}{b-a}\chi _{[a,b]}(x)\, dx, \ \eta _1= \frac{1}{2}(\delta _a+\delta _b),\nonumber \\{} & {} \nu _2= \delta _{\frac{c+d}{2}},\ \mu _2(dy)=\frac{1}{d-c}\chi _{[c,d]}(y)\, dy, \ \eta _2= \frac{1}{2}(\delta _c+\delta _d). \end{aligned}$$
(34)

We have \(\nu _1\le _{1-cx}\mu _1\le _{1-cx}\eta _1\) and \(\nu _2\le _{1-cx}\mu _2\le _{1-cx}\eta _2\). Then, by Theorem 28, we obtain

which are equivalent to (32) and (33), respectively. \(\square \)

Similarly, we can obtain other Hermite–Hadamard inequalities for box-(2, 2)-convex functions.
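For instance, for \(f(x,y)=x^2y^2\) on \([a,b]\times [c,d]=[0,1]^2\) the left-hand side of (32) can be computed in closed form; the following exact-arithmetic sketch (our worked example, not part of the paper) gives the value 1/144:

```python
from fractions import Fraction as F

# Inequality (32) for f(x, y) = x^2 y^2 on [0, 1]^2; all terms are exact.
mean_f  = F(1, 3) * F(1, 3)   # (1/((b-a)(d-c))) * double integral of x^2 y^2
mixed_x = F(1, 3) * F(1, 4)   # (1/(b-a)) * integral of x^2 * ((c+d)/2)^2 dx
mixed_y = F(1, 4) * F(1, 3)   # (1/(d-c)) * integral of ((a+b)/2)^2 * y^2 dy
center  = F(1, 16)            # f((a+b)/2, (c+d)/2) = (1/4)(1/4)

gap = mean_f - mixed_x - mixed_y + center   # left-hand side of (32)
```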

In [3], the authors gave a Jensen type inequality for box-(2,2)-convex functions. In the next theorem, using Theorem 28, we easily obtain a new Jensen inequality for box-(2,2)-convex functions.

Theorem 30

(Jensen inequality for box-(2,2)-convex functions) Let \(I,J\) be two real intervals. Let \(a,b\in I\), \(c,d\in J\) with \(a<b\) and \(c<d\). Let \(\mu _X\) be a probability measure on \([a,b]\) with the expectation \(m_X\) and \(\mu _Y\) be a probability measure on \([c,d]\) with the expectation \(m_Y\). Then

$$\begin{aligned}{} & {} \int _a^b \int _c^d f(x,y)\,\mu _Y(dy)\, \mu _X(dx) -\int _a^b f(x,m_Y)\, \mu _X(dx)\nonumber \\{} & {} -\int _c^d f(m_X,y)\,\mu _Y(dy) +f\left( m_X,m_Y \right) \ge 0 \end{aligned}$$
(35)

for all continuous box-(2, 2)-convex functions \(f:I\times J \rightarrow {\mathbb {R}}\).

Proof

We consider \(\nu _1= \delta _{m_X}\), \(\mu _1(dx)=\mu _X(dx)\), \(\nu _2= \delta _{m_Y}\), \(\mu _2(dy)=\mu _Y(dy)\). We have \(\nu _1\le _{1-cx}\mu _1\) and \(\nu _2\le _{1-cx}\mu _2\). Then, by Theorem 28, we obtain

which is equivalent to (35). \(\square \)
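A discrete sanity check of (35) (our example, not from the paper): for X, Y independent and uniform on \(\{0,1\}\) and \(f(x,y)=x^2y^2\), the Jensen gap equals 1/16:

```python
# Check of (35) with X, Y independent, uniform on {0, 1}; m_X = m_Y = 1/2.
f = lambda x, y: x * x * y * y
pts, w = [0.0, 1.0], 0.5
mX = mY = 0.5

E_f_XY = sum(w * w * f(x, y) for x in pts for y in pts)   # 1/4
E_f_X_mY = sum(w * f(x, mY) for x in pts)                 # 1/8
E_f_mX_Y = sum(w * f(mX, y) for y in pts)                 # 1/8
jensen_gap = E_f_XY - E_f_X_mY - E_f_mX_Y + f(mX, mY)     # 1/16
```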

In the following theorem, we give the converse of the Jensen type inequality for box-(2,2)-convex functions.

Theorem 31

(Converse Jensen inequality for box-(2,2)-convex functions) Let \(I,J\) be two real intervals. Let \(a,b\in I\), \(c,d\in J\) with \(a<b\) and \(c<d\). Let \(\mu _X\) be a probability measure on \([a,b]\) with the expectation \(m_X\) and \(\mu _Y\) be a probability measure on \([c,d]\) with the expectation \(m_Y\). Then

$$\begin{aligned}{} & {} \int _a^b \int _c^d f(x,y)\,\mu _Y(dy)\, \mu _X(dx)\nonumber \\{} & {} \quad -\int _a^b \left( \frac{d-m_Y}{d-c}f(x,c)+\frac{m_Y-c}{d-c}f(x,d)\right) \, \mu _X(dx)\nonumber \\{} & {} \quad -\int _c^d \left( \frac{b-m_X}{b-a}f(a,y)+ \frac{m_X-a}{b-a}f(b,y) \right) \,\mu _Y(dy)\nonumber \\{} & {} \quad +\frac{(b-m_X)(d-m_Y)}{(b-a)(d-c)}f(a,c) + \frac{(b-m_X)(m_Y-c)}{(b-a)(d-c)}f(a,d)\nonumber \\{} & {} \quad +\frac{(m_X-a)(d-m_Y)}{(b-a)(d-c)}f(b,c) +\frac{(m_X-a)(m_Y-c)}{(b-a)(d-c)}f(b,d) \ge 0 \end{aligned}$$
(36)

for all continuous box-(2, 2)-convex functions \(f:I\times J \rightarrow {\mathbb {R}}\).

Proof

We consider

$$\begin{aligned}{} & {} \mu _1(dx)=\mu _X(dx), \ \eta _1= \frac{b-m_X}{b-a}\delta _a+\frac{m_X-a}{b-a}\delta _b,\\{} & {} \mu _2(dy)=\mu _Y(dy), \ \eta _2= \frac{d-m_Y}{d-c}\delta _c+\frac{m_Y-c}{d-c}\delta _d. \end{aligned}$$

We have \(\mu _1\le _{1-cx}\eta _1\) and \(\mu _2\le _{1-cx}\eta _2\). Then, by Theorem 28, we obtain

which is equivalent to (36). \(\square \)

In analogy with the probabilistic Jensen inequalities (26) and (29) for convex functions, we now give probabilistic versions of the Jensen inequalities (35) and (36) for box-(2, 2)-convex functions.

Just as the Jensen inequality characterizes the convexity of functions of one variable, the box-(2, 2)-convex counterparts of the Jensen inequality and its converse characterize the box-(2, 2)-convexity of functions.

Let \(I,J\) be two real intervals, \(a,b\in I\), \(c,d\in J\) with \(a<b\) and \(c<d\). Let \(f:I\times J \rightarrow {\mathbb {R}}\) be a continuous function. For independent random variables X, Y, such that X is \([a,b]\)-valued and Y is \([c,d]\)-valued, we define the following Jensen gap type functions:

$$\begin{aligned}{} & {} \mathcal {J}_{(1)}(f;X,Y)=\mathbb {E}\,f(X,Y) -\mathbb {E}\,f(X,\mathbb {E}\,Y)-\mathbb {E}\,f(\mathbb {E}\,X,Y)+f(\mathbb {E}\,X,\mathbb {E}\,Y), \end{aligned}$$
(37)
$$\begin{aligned}{} & {} \mathcal {J}_{(2)}(f;X,Y,a,b,c,d)=\mathbb {E}\,f(X,Y) -\mathbb {E}\,\left( \frac{d-\mathbb {E}\,Y}{d-c}f(X,c)+\frac{\mathbb {E}\,Y-c}{d-c}f(X,d)\right) \,\nonumber \\{} & {} \quad -\mathbb {E}\,\left( \frac{b-\mathbb {E}\,X}{b-a}f(a,Y)+ \frac{\mathbb {E}\,X-a}{b-a}f(b,Y) \right) \,\nonumber \\{} & {} \quad +\frac{(b-\mathbb {E}\,X)(d-\mathbb {E}\,Y)}{(b-a)(d-c)}f(a,c) + \frac{(b-\mathbb {E}\,X)(\mathbb {E}\,Y-c)}{(b-a)(d-c)}f(a,d)\nonumber \\{} & {} \quad +\frac{(\mathbb {E}\,X-a)(d-\mathbb {E}\,Y)}{(b-a)(d-c)}f(b,c) +\frac{(\mathbb {E}\,X-a)(\mathbb {E}\,Y-c)}{(b-a)(d-c)}f(b,d), \end{aligned}$$
(38)
$$\begin{aligned}{} & {} \mathcal {J}_{(3)}(f;X,Y,a,b,c,d)=f\left( \mathbb {E}\,X,\mathbb {E}\,Y \right) \nonumber \\{} & {} \quad -\left( \frac{d-\mathbb {E}\,Y}{d-c}f(\mathbb {E}\,X,c)+\frac{\mathbb {E}\,Y-c}{d-c}f(\mathbb {E}\,X,d)\right) \, \nonumber \\{} & {} \quad - \left( \frac{b-\mathbb {E}\,X}{b-a}f(a,\mathbb {E}\,Y)+ \frac{\mathbb {E}\,X-a}{b-a}f(b,\mathbb {E}\,Y) \right) \,\nonumber \\{} & {} \quad +\frac{(b-\mathbb {E}\,X)(d-\mathbb {E}\,Y)}{(b-a)(d-c)}f(a,c) + \frac{(b-\mathbb {E}\,X)(\mathbb {E}\,Y-c)}{(b-a)(d-c)}f(a,d)\nonumber \\{} & {} \quad +\frac{(\mathbb {E}\,X-a)(d-\mathbb {E}\,Y)}{(b-a)(d-c)}f(b,c) +\frac{(\mathbb {E}\,X-a)(\mathbb {E}\,Y-c)}{(b-a)(d-c)}f(b,d). \end{aligned}$$
(39)

Theorem 32

Let \(I,J\) be two real intervals. Let \(f:I\times J \rightarrow \mathbb R\) be a continuous function. Then the following conditions are equivalent:

(a):

f is box-(2, 2)-convex.

(b):

\( \mathcal {J}_{(1)}(f;X,Y)\ge 0 \) for all independent I-valued random variables X and J-valued random variables Y.

(c):

\(\mathcal {J}_{(2)}(f;X,Y,a,b,c,d)\ge 0\) for all \(a,b\in I\), \(c,d\in J\) with \(a<b\) and \(c<d\), and for all independent \([a,b]\)-valued random variables X and \([c,d]\)-valued random variables Y.

(d):

\(\mathcal {J}_{(3)}(f;X,Y,a,b,c,d)\ge 0\) for all \(a,b\in I\), \(c,d\in J\) with \(a<b\) and \(c<d\), and for all independent \([a,b]\)-valued random variables X and \([c,d]\)-valued random variables Y.

Proof

The implications (a)\(\Rightarrow \)(b) and (a)\(\Rightarrow \)(c) follow immediately from Theorems 30 and 31, taking independent random variables X, Y with distributions \(\mu _X, \mu _Y\) and expectations \(m_X, m_Y\), respectively.

(a)\(\Rightarrow \)(d). Let X, Y be independent random variables such that X is \([a,b]\)-valued and Y is \([c,d]\)-valued. Let \(m_X=\mathbb {E}\,X\), \(m_Y=\mathbb {E}\,Y\) and

$$\begin{aligned}{} & {} \nu _1= \delta _{m_X}, \ \eta _1= \frac{b-m_X}{b-a}\delta _a+\frac{m_X-a}{b-a}\delta _b,\\{} & {} \nu _2= \delta _{m_Y}, \ \eta _2= \frac{d-m_Y}{d-c}\delta _c+\frac{m_Y-c}{d-c}\delta _d \end{aligned}$$

We have \(\nu _1\le _{1-cx}\eta _1\) and \(\nu _2\le _{1-cx}\eta _2\). Then, by Theorem 28, we obtain

which is equivalent to \(\mathcal {J}_{(3)}(f;X,Y,a,b,c,d)\ge 0\).

(b)\(\Rightarrow \)(a). Assume that condition (b) is satisfied. Let \(x_0,x_1, x_2 \in I\), \(y_0,y_1, y_2\in J\) with \(x_0<x_1<x_2\) and \(y_0<y_1<y_2\). We take \(a=x_0\), \(b=x_2\), \(c=y_0\), \(d=y_2\) and independent random variables X, Y such that \(\mu _X=\frac{x_2-x_1}{x_2-x_0}\delta _{x_0}+\frac{x_1-x_0}{x_2-x_0}\delta _{x_2}\), \(\mu _Y=\frac{y_2-y_1}{y_2-y_0}\delta _{y_0}+\frac{y_1-y_0}{y_2-y_0}\delta _{y_2}\). Then

$$\begin{aligned} \left[ {\begin{array}{c} {x_0,\quad x_1, \quad x_2}\\ \\ {y_0, \quad y_1, \quad y_2} \end{array}};f \right] =\frac{\mathcal {J}_{(1)}(f;X,Y)}{(x_2-x_1)(x_1-x_0)(y_2-y_1)(y_1-y_0)}\ge 0, \end{aligned}$$

which implies that f is box-(2, 2)-convex.

(c)\(\Rightarrow \)(a). The proof is similar to that of (b)\(\Rightarrow \)(a). We take \(a=x_0\), \(b=x_2\), \(c=y_0\), \(d=y_2\) and independent random variables X, Y such that \(\mu _X=\delta _{x_1}\), \(\mu _Y=\delta _{y_1}\). Then

$$\begin{aligned} \left[ {\begin{array}{c} {x_0,\quad x_1, \quad x_2}\\ \\ {y_0, \quad y_1, \quad y_2} \end{array}};f \right] =\frac{\mathcal {J}_{(2)}(f;X,Y,a,b,c,d)}{(x_2-x_1)(x_1-x_0)(y_2-y_1)(y_1-y_0)}\ge 0, \end{aligned}$$

which implies that f is box-(2, 2)-convex.

(d)\(\Rightarrow \)(a). We can take either the random variables X, Y defined in the proof of (b)\(\Rightarrow \)(a) or those defined in the proof of (c)\(\Rightarrow \)(a). In both cases we obtain

$$\begin{aligned} \left[ {\begin{array}{c} {x_0,\quad x_1, \quad x_2}\\ \\ {y_0, \quad y_1, \quad y_2} \end{array}};f \right] =\frac{\mathcal {J}_{(3)}(f;X,Y,a,b,c,d)}{(x_2-x_1)(x_1-x_0)(y_2-y_1)(y_1-y_0)}\ge 0, \end{aligned}$$

which implies that f is box-(2, 2)-convex. \(\square \)
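The key identity in the proof of (b)\(\Rightarrow \)(a) is easy to verify numerically. Below is a minimal Python sketch; it assumes that \(\mathcal {J}_{(1)}(f;X,Y)\) denotes the Jensen gap \(\mathbb {E}\,f(X,Y)-\mathbb {E}\,f(\mathbb {E}\,X,Y)-\mathbb {E}\,f(X,\mathbb {E}\,Y)+f(\mathbb {E}\,X,\mathbb {E}\,Y)\), i.e. the expression appearing in (40), and uses the test function \(f(x,y)=e^{xy}\):

```python
import math

def divdiff(xs, vals):
    """Divided difference [x_0, ..., x_k; f] from the values f(x_i)."""
    d = list(vals)
    for k in range(1, len(xs)):
        for i in range(len(xs) - k):
            d[i] = (d[i + 1] - d[i]) / (xs[i + k] - xs[i])
    return d[0]

def double_divdiff(xs, ys, f):
    """Divided double difference via (1): inner differences in y, outer in x."""
    return divdiff(xs, [divdiff(ys, [f(x, y) for y in ys]) for x in xs])

f = lambda x, y: math.exp(x * y)
x0, x1, x2 = 0.0, 0.7, 1.5
y0, y1, y2 = 0.2, 0.9, 2.0

# two-point laws with E X = x1 and E Y = y1, as in the proof of (b) => (a)
p, q = (x2 - x1) / (x2 - x0), (y2 - y1) / (y2 - y0)
Ef = sum(px * py * f(x, y) for px, x in ((p, x0), (1 - p, x2))
                           for py, y in ((q, y0), (1 - q, y2)))
EfEXY = q * f(x1, y0) + (1 - q) * f(x1, y2)   # E f(EX, Y)
EfXEY = p * f(x0, y1) + (1 - p) * f(x2, y1)   # E f(X, EY)
J1 = Ef - EfEXY - EfXEY + f(x1, y1)

lhs = double_divdiff((x0, x1, x2), (y0, y1, y2), f)
rhs = J1 / ((x2 - x1) * (x1 - x0) * (y2 - y1) * (y1 - y0))
print(lhs, rhs)
```

Both sides agree to machine precision, and the common value is nonnegative, as box-(2, 2)-convexity of \(e^{xy}\) on \([0,\infty )^2\) requires.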

Taking in Theorem 32 independent random variables X, Y such that \(\mu _X=\sum _{i=1} ^M \alpha _i \delta _{x_i}\) and \(\mu _Y=\sum _{j=1} ^N \beta _j\delta _{ y_j}\), and writing \(\bar{x}=\sum _{i=1} ^M \alpha _i x_i\), \(\bar{y}=\sum _{j=1} ^N \beta _j y_j\), we obtain the following box-analog of Jensen’s inequality and of the converse Jensen inequality.

Theorem 33

Let I, J be two real intervals. Let \(M,N>1\). Then a function \(f:I\times J \rightarrow {\mathbb {R}}\) is box-(2, 2)-convex if and only if

$$\begin{aligned} \sum _{i=1} ^M \sum _{j=1} ^N \alpha _i \beta _j f(x_i, y_j) - \sum _{j=1} ^N \beta _j f(\bar{x}, y_j) -\sum _{i=1} ^M \alpha _i f(x_i,\bar{y}) +f(\bar{x}, \bar{y})\ge 0 \end{aligned}$$
(40)

for all \(x_1, \ldots , x_M \in I\), \(y_1, \ldots , y_N \in J\), \(\alpha _1, \ldots , \alpha _M, \beta _1, \ldots , \beta _N \ge 0\) such that \(\alpha _1+ \cdots + \alpha _M=1\), \( \beta _1+ \cdots + \beta _N=1\).

Proof

(\(\Leftarrow \)) follows from Theorem 32. We omit the proof of (\(\Rightarrow \)), which is similar to that of (b)\(\Rightarrow \)(a) in Theorem 32. \(\square \)
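Inequality (40) can be checked numerically. A minimal sketch, using the test function \(f(x,y)=e^{x+y}\) (box-(2, 2)-convex, since \(\partial ^4 f/\partial x^2\partial y^2=e^{x+y}>0\)) and arbitrarily chosen points and weights:

```python
import math

f = lambda x, y: math.exp(x + y)   # box-(2,2)-convex test function

xs, alphas = [0.0, 0.5, 1.3, 2.0], [0.1, 0.4, 0.3, 0.2]
ys, betas  = [-1.0, 0.2, 0.8],     [0.25, 0.5, 0.25]
xbar = sum(a * x for a, x in zip(alphas, xs))
ybar = sum(b * y for b, y in zip(betas, ys))

# the Jensen-type gap of inequality (40)
gap = (sum(a * b * f(x, y) for a, x in zip(alphas, xs) for b, y in zip(betas, ys))
       - sum(b * f(xbar, y) for b, y in zip(betas, ys))
       - sum(a * f(x, ybar) for a, x in zip(alphas, xs))
       + f(xbar, ybar))
print(gap)   # nonnegative by Theorem 33
```

For this separable f the gap factors as \((\mathbb {E}\,e^{X}-e^{\bar{x}})(\mathbb {E}\,e^{Y}-e^{\bar{y}})\), so it is strictly positive here.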

Theorem 34

Let I, J be two real intervals. Let \(M,N>1\). Then a function \(f:I\times J \rightarrow {\mathbb {R}}\) is box-(2, 2)-convex if and only if

$$\begin{aligned}{} & {} \sum _{i=1} ^M \sum _{j=1} ^N \alpha _i \beta _j f(x_i, y_j) -\sum _{i=1} ^M \alpha _i \left( \frac{d-\bar{y}}{d-c}f(x_i,c)+\frac{\bar{y}-c}{d-c}f(x_i,d)\right) \nonumber \\{} & {} \quad - \sum _{j=1} ^N \beta _j \left( \frac{b-\bar{x}}{b-a}f(a,y_j)+ \frac{\bar{x}-a}{b-a}f(b,y_j) \right) \nonumber \\{} & {} \quad +\frac{(b-\bar{x})(d-\bar{y})}{(b-a)(d-c)}f(a,c) + \frac{(b-\bar{x})(\bar{y}-c)}{(b-a)(d-c)}f(a,d)\nonumber \\{} & {} \quad +\frac{(\bar{x}-a)(d-\bar{y})}{(b-a)(d-c)}f(b,c) +\frac{(\bar{x}-a)(\bar{y}-c)}{(b-a)(d-c)}f(b,d) \ge 0 \end{aligned}$$
(41)

for all \(x_1, \ldots , x_M \in [a,b]\subset I\), \(y_1, \ldots , y_N \in [c,d]\subset J\), \(\alpha _1, \ldots , \alpha _M, \beta _1, \ldots , \beta _N \ge 0\) such that \(\alpha _1+ \cdots + \alpha _M=1\), \( \beta _1+ \cdots + \beta _N=1\).

Proof

(\(\Leftarrow \)) follows from Theorem 32. We omit the proof of (\(\Rightarrow \)), which is similar to that of (c)\(\Rightarrow \)(a) in Theorem 32. \(\square \)
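Analogously, inequality (41) can be checked numerically; the sketch below again uses \(f(x,y)=e^{x+y}\), with points \(x_i\in [a,b]\) and \(y_j\in [c,d]\):

```python
import math

f = lambda x, y: math.exp(x + y)   # box-(2,2)-convex test function
a, b, c, d = 0.0, 2.0, -1.0, 1.0
xs, alphas = [0.2, 0.9, 1.7], [0.5, 0.2, 0.3]
ys, betas  = [-0.8, 0.1, 0.6], [0.3, 0.3, 0.4]
xbar = sum(al * x for al, x in zip(alphas, xs))
ybar = sum(be * y for be, y in zip(betas, ys))

# the converse-Jensen-type gap of inequality (41), term by term
gap = (sum(al * be * f(x, y) for al, x in zip(alphas, xs) for be, y in zip(betas, ys))
       - sum(al * ((d - ybar)/(d - c)*f(x, c) + (ybar - c)/(d - c)*f(x, d))
             for al, x in zip(alphas, xs))
       - sum(be * ((b - xbar)/(b - a)*f(a, y) + (xbar - a)/(b - a)*f(b, y))
             for be, y in zip(betas, ys))
       + (b - xbar)*(d - ybar)/((b - a)*(d - c)) * f(a, c)
       + (b - xbar)*(ybar - c)/((b - a)*(d - c)) * f(a, d)
       + (xbar - a)*(d - ybar)/((b - a)*(d - c)) * f(b, c)
       + (xbar - a)*(ybar - c)/((b - a)*(d - c)) * f(b, d))
print(gap)   # nonnegative by Theorem 34
```
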

Remark 35

We note that inequality (40) for \(\alpha _1= \cdots =\alpha _M=\frac{1}{M}\), \(\beta _1= \cdots = \beta _N =\frac{1}{N}\) was also proved by Gal and Niculescu [3, p. 909].

8 Strongly Box-(m, n)-Convex Functions

In this paper we also consider convex functions satisfying a stronger condition (cf. [7, 20]). Let \(D\subset {\mathbb R}^n\) be a convex set and let \(C\ge 0\). We say that a function \(g:D\rightarrow {\mathbb R}\) is strongly convex with modulus C if

$$\begin{aligned} g\bigl (tx+(1-t)y\bigr )\le tg(x)+(1-t)g(y)-Ct(1-t)\Vert x-y\Vert ^2 \end{aligned}$$

for all \(x,y\in D\) and for all \(t\in [0,1]\). Obviously every strongly convex function is convex. Strong convexity has a nice characterization.

Proposition 6

[7, p. 73, Proposition 1.1.2] Let \(D\subset {\mathbb R}^n\) be a convex set. The function \(g:D\rightarrow {\mathbb R}\) is strongly convex with modulus C if and only if the function \(g-C\Vert \cdot \Vert ^2\) is convex.

Taking the function \(f(x)=g(x)-Cx^2\) in (25), we obtain the Hermite–Hadamard inequality for a strongly convex function g with modulus C

$$\begin{aligned} g\left( \frac{a+b}{2}\right) + \frac{C}{12}\left( a-b\right) ^2\le \frac{1}{b-a}\cdot \int _a^b g(x)\,dx\le \frac{g(a)+g(b)}{2} -\frac{C}{6}\left( a-b\right) ^2.\nonumber \\ \end{aligned}$$
(42)
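Both bounds in (42) are attained by \(g(x)=Cx^2\) (strongly convex with modulus C, since \(g-Cx^2=0\)); a quick numerical check, with the integral evaluated in closed form:

```python
C, a, b = 3.0, -1.0, 2.0
g = lambda x: C * x * x                    # strongly convex with modulus C

avg = C * (b**3 - a**3) / (3 * (b - a))    # (1/(b-a)) * int_a^b C x^2 dx
left  = g((a + b) / 2) + C * (a - b)**2 / 12
right = (g(a) + g(b)) / 2 - C * (a - b)**2 / 6
print(left, avg, right)                    # all three coincide for g = C x^2
```
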

Similarly, taking \(f(x)=g(x)-Cx^2\) in inequalities (26), (29), we obtain the Jensen inequality for a strongly convex function g with modulus \(C\ge 0\) [16]

$$\begin{aligned} \mathbb {E}\,g(X)-g\bigl (\mathbb {E}\,X\bigr )\ge C\,D^2X, \end{aligned}$$
(43)

and the converse of the Jensen inequality

$$\begin{aligned}{} & {} \frac{M-\mathbb {E}\,X}{M-m}g(m)+\frac{\mathbb {E}\,X -m}{M-m}g(M) - \mathbb {E}\,g(X)\nonumber \\{} & {} \quad \ge C\left[ \frac{M-\mathbb {E}\,X}{M-m}m^2+\frac{\mathbb {E}\,X-m}{M-m}M^2 -\mathbb {E}\,X^2 \right] \nonumber \\ {}{} & {} =C[(M-\mathbb {E}\,X)(\mathbb {E}\,X-m)-D^2X], \end{aligned}$$
(44)

where X is an [m, M]-valued random variable.
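Inequalities (43) and (44) can be illustrated with \(g(x)=x^4+Cx^2\), which is strongly convex with modulus C because \(x^4\) is convex; the sketch below uses a two-point random variable X:

```python
C = 2.0
g = lambda x: x**4 + C * x**2    # strongly convex with modulus C (x^4 is convex)

vals, probs = (0.5, 1.5), (0.5, 0.5)
m, M = 0.0, 2.0                  # X is [m, M]-valued
EX  = sum(p * v for p, v in zip(probs, vals))
DX2 = sum(p * (v - EX)**2 for p, v in zip(probs, vals))
Eg  = sum(p * g(v) for p, v in zip(probs, vals))

jensen_gap   = Eg - g(EX)                                            # inequality (43)
converse_gap = (M - EX)/(M - m)*g(m) + (EX - m)/(M - m)*g(M) - Eg    # inequality (44)
print(jensen_gap >= C * DX2,
      converse_gap >= C * ((M - EX) * (EX - m) - DX2))
```
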

In this paper, we introduce strongly box-(m, n)-convex functions with modulus \(C\ge 0\) and obtain the Hermite–Hadamard inequalities, the Jensen inequality and the converse Jensen inequality for such functions.

Definition 3

Let \(m,n\ge 1\). We say that a function \(g:I\times J \rightarrow {\mathbb {R}}\) is strongly box-(m, n)-convex with modulus \(C\ge 0\) if the function \(f(x,y)=g(x,y)-Cx^m y^n\) is box-(m, n)-convex.

It is not difficult to prove the following lemma.

Lemma 36

Let \(n\ge 1\). Then \([x_0,x_1,\ldots , x_n;x^n]=1\) for all pairwise distinct \(x_0,x_1,\ldots , x_n \in {\mathbb R}\).
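Lemma 36 expresses the fact that the divided difference of order n extracts the leading coefficient of \(x^n\); a short numerical check using the standard recursive divided-difference table:

```python
def divdiff(xs, vals):
    """Divided difference [x_0, ..., x_k; f] from the values f(x_i)."""
    d = list(vals)
    for k in range(1, len(xs)):
        for i in range(len(xs) - k):
            d[i] = (d[i + 1] - d[i]) / (xs[i + k] - xs[i])
    return d[0]

n = 4
xs = [-1.3, 0.2, 0.9, 2.4, 3.1]        # n + 1 pairwise distinct points
val = divdiff(xs, [x**n for x in xs])
print(val)                             # -> 1.0 up to rounding
```
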

Lemma 37

Let \(m,n\ge 1\). Then

$$\begin{aligned} \left[ {\begin{array}{c} {x_0,\quad x_1, \quad \ldots \quad x_m}\\ \\ {y_0, \quad y_1, \quad \ldots \quad y_n} \end{array}};\ Cx^m y^n\right] =C \end{aligned}$$
(45)

for all pairwise distinct points \(x_0, x_1,\ldots , x_{m}\) and \(y_0, y_1,\ldots , y_n\).

Proof

By Lemma 36, we obtain

$$\begin{aligned} \left[ {\begin{array}{c} {x_0,\quad x_1, \quad \ldots \quad x_m}\\ \\ {y_0, \quad y_1, \quad \ldots \quad y_n} \end{array}};\ Cx^m y^n\right] \\ =\left[ y_0, y_1,\ldots , y_{n};\;\left[ x_0, x_1,\ldots , x_{m};\;C(\cdot )^m y^n \right] \right] =\left[ y_0, y_1,\ldots , y_{n};\; C y^n \right] =C. \end{aligned}$$

\(\square \)
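Lemma 37 can likewise be confirmed numerically, computing the divided double difference via formula (1) (inner differences in y, outer differences in x):

```python
def divdiff(xs, vals):
    """Divided difference [x_0, ..., x_k; f] from the values f(x_i)."""
    d = list(vals)
    for k in range(1, len(xs)):
        for i in range(len(xs) - k):
            d[i] = (d[i + 1] - d[i]) / (xs[i + k] - xs[i])
    return d[0]

C, m, n = 2.5, 2, 3
xs = [-0.4, 0.3, 1.1]             # m + 1 pairwise distinct points
ys = [0.1, 0.8, 1.6, 2.9]         # n + 1 pairwise distinct points
val = divdiff(xs, [divdiff(ys, [C * x**m * y**n for y in ys]) for x in xs])
print(val)                        # -> C up to rounding
```
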

Theorem 38

A function \(g:I\times J \rightarrow {\mathbb {R}}\) is strongly box-(m, n)-convex with modulus \(C\ge 0\) if and only if

$$\begin{aligned} \left[ {\begin{array}{c} {x_0,\quad x_1, \quad \ldots \quad x_m}\\ \\ {y_0, \quad y_1, \quad \ldots \quad y_n} \end{array}};\ g \right] \ge C \end{aligned}$$
(46)

for all pairwise distinct points \(x_0, x_1,\ldots , x_{m}\in I\) and \(y_0, y_1,\ldots , y_n \in J\).

Proof

The function g is strongly box-(m, n)-convex with modulus \(C\ge 0\) if and only if

$$\begin{aligned} \left[ {\begin{array}{c} {x_0,\quad x_1, \quad \ldots \quad x_m}\\ \\ {y_0, \quad y_1, \quad \ldots \quad y_n} \end{array}};\ g(x,y)-Cx^m y^n \right] \ge 0 \end{aligned}$$
(47)

for all pairwise distinct points \(x_0, x_1,\ldots , x_{m}\in I\) and \(y_0, y_1,\ldots , y_n \in J\).

Taking into account Lemma 37, inequality (47) is equivalent to

$$\begin{aligned} \left[ {\begin{array}{c} {x_0,\quad x_1, \quad \ldots \quad x_m}\\ \\ {y_0, \quad y_1, \quad \ldots \quad y_n} \end{array}};\ g \right] \ge \left[ {\begin{array}{c} {x_0,\quad x_1, \quad \ldots \quad x_m}\\ \\ {y_0, \quad y_1, \quad \ldots \quad y_n} \end{array}};\ Cx^m y^n\right] =C. \end{aligned}$$
(48)

The theorem is proved. \(\square \)

By Proposition 2, we obtain a differential criterion of strong box-(m, n)-convexity.

Theorem 39

Let \(g:I \times J \rightarrow {\mathbb {R}}\) be an \((m+n)\)-times differentiable function. Then g is strongly box-(m, n)-convex with modulus \(C\ge 0\) if and only if \(\frac{\partial ^{m+n}g }{\partial x^m \partial y^n}\ge C\,m!\,n!. \)
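As an illustration of Theorems 38 and 39, take \(g(x,y)=Cx^2y^2+x^3y^3\) on \([0,\infty )\times [0,\infty )\): then \(\partial ^4 g/\partial x^2\partial y^2=4C+36xy\ge 4C=C\,2!\,2!\), so g is strongly box-(2, 2)-convex with modulus C, and by Theorem 38 every divided double difference of order (2, 2) over nonnegative points is at least C. A numerical sketch:

```python
def divdiff(xs, vals):
    """Divided difference [x_0, ..., x_k; f] from the values f(x_i)."""
    d = list(vals)
    for k in range(1, len(xs)):
        for i in range(len(xs) - k):
            d[i] = (d[i + 1] - d[i]) / (xs[i + k] - xs[i])
    return d[0]

C = 1.5
g = lambda x, y: C * x**2 * y**2 + x**3 * y**3   # d^4 g/dx^2 dy^2 = 4C + 36xy
xs = [0.1, 0.6, 1.4]
ys = [0.3, 1.0, 2.2]
dd = divdiff(xs, [divdiff(ys, [g(x, y) for y in ys]) for x in xs])
print(dd, C)   # dd >= C, consistent with Theorem 38
```

Here \(dd = C + (x_0+x_1+x_2)(y_0+y_1+y_2)\), since the second-order divided difference of \(x^3\) equals \(x_0+x_1+x_2\).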

By Theorem 32, we obtain the following theorem on Jensen type gaps for strongly box-(2, 2)-convex functions with modulus C, which gives counterparts of inequalities (43) and (44) (see also [16]) for strongly convex functions.

Theorem 40

Let I, J be two real intervals and \(C\ge 0\). Let \(g:I\times J \rightarrow {\mathbb {R}}\) be a continuous function. Then the following conditions are equivalent:

(a):

g is strongly box-(2, 2)-convex with modulus C.

(b):

\( \mathcal {J}_{(1)}(g;X,Y)\ge C\,D^2X\, D^2Y \) for all independent I-valued random variables X and J-valued random variables Y.

(c):

\(\mathcal {J}_{(2)}(g;X,Y,a,b,c,d) \ge C \left[ (b-\mathbb {E}\,X)(\mathbb {E}\,X-a)-D^2 X\right] \,\big [ (d-\mathbb {E}\,Y)(\mathbb {E}\,Y-c)-D^2Y\big ]\) for all \(a,b\in I\), \(c,d\in J\) with \(a<b\) and \(c<d\), and for all independent [a, b]-valued random variables X and [c, d]-valued random variables Y.

(d):

\(\mathcal {J}_{(3)}(g;X,Y,a,b,c,d) \ge C (b-\mathbb {E}\,X)(\mathbb {E}\,X-a)(d-\mathbb {E}\,Y)(\mathbb {E}\,Y-c)\) for all \(a,b\in I\), \(c,d\in J\) with \(a<b\) and \(c<d\), and for all independent [a, b]-valued random variables X and [c, d]-valued random variables Y.

By Theorem 29, we obtain the Hermite–Hadamard type inequalities for strongly box-(2, 2)-convex functions with modulus C.

Theorem 41

Let I, J be two real intervals. Let \(a,b\in I\), \(c,d\in J\) with \(a<b\) and \(c<d\). Then

$$\begin{aligned}{} & {} \frac{1}{(b-a)(d-c)}\cdot \int _a^b \int _c^d g(x,y)\,dy\, dx -\frac{1}{(b-a)}\int _a^b g\left( x,\frac{c+d}{2}\right) \, dx\nonumber \\{} & {} \quad -\frac{1}{(d-c)}\cdot \int _c^d g\left( \frac{a+b}{2},y\right) \,dy +g\left( \frac{a+b}{2},\frac{c+d}{2} \right) \ge C \frac{(a-b)^2(c-d)^2}{144}, \nonumber \\\end{aligned}$$
(49)
$$\begin{aligned}{} & {} \frac{1}{(b-a)(d-c)}\cdot \int _a^b \int _c^d g(x,y)\,dy\, dx\nonumber \\{} & {} \quad -\frac{1}{2(b-a)}\int _a^b [g(x,c)+g(x,d)]\, dx -\frac{1}{2(d-c)}\cdot \int _c^d [g(a,y)+g(b,y)]\,dy\nonumber \\{} & {} \quad +\frac{1}{4}\left( g \left( a,c \right) +g \left( a,d \right) +g \left( b,c \right) +g \left( b,d \right) \right) \ge C \frac{(a-b)^2(c-d)^2}{36} \end{aligned}$$
(50)

for all strongly box-(2, 2)-convex functions \(g:I\times J \rightarrow {\mathbb {R}}\) with modulus C.
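For \(g(x,y)=Cx^2y^2\) (strongly box-(2, 2)-convex with modulus C, since \(g-Cx^2y^2=0\)) both (49) and (50) hold with equality; the sketch below evaluates all the integrals in closed form:

```python
C, a, b, c, d = 2.0, 0.0, 1.0, -1.0, 2.0
Ix = (a*a + a*b + b*b) / 3        # (1/(b-a)) * int_a^b x^2 dx
Iy = (c*c + c*d + d*d) / 3        # (1/(d-c)) * int_c^d y^2 dy
mx, my = (a + b) / 2, (c + d) / 2

# left side of (49) for g = C x^2 y^2, term by term
lhs49 = C*Ix*Iy - C*Ix*my*my - C*mx*mx*Iy + C*mx*mx*my*my
# left side of (50) for g = C x^2 y^2, term by term
lhs50 = (C*Ix*Iy - C*Ix*(c*c + d*d)/2 - C*Iy*(a*a + b*b)/2
         + C*(a*a + b*b)*(c*c + d*d)/4)

print(lhs49, C * (a - b)**2 * (c - d)**2 / 144)   # equality in (49)
print(lhs50, C * (a - b)**2 * (c - d)**2 / 36)    # equality in (50)
```
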

Using the box analog of the subdifferential inequality given in [3], we obtain the following box analog of the subdifferential inequality for strongly box-(2, 2)-convex functions.

Theorem 42

A function \(g\in C^2(I\times J)\) is strongly box-(2, 2)-convex with modulus \(C\ge 0\) if and only if

$$\begin{aligned} g(x,y)\ge Ag(a,b)(x,y)+C(x-a)^2(y-b)^2, \end{aligned}$$

where \((x,y),(a,b)\in I\times J\) and

$$\begin{aligned}{} & {} Ag(a,b)(x,y)=g(x,b)+g(a,y)-g(a,b)+(x-a)\frac{\partial g}{\partial u}(a,y) +(y-b)\frac{\partial g}{\partial v}(x,b)\\{} & {} \quad -(x-a)\frac{\partial g}{\partial u}(a,b) -(y-b)\frac{\partial g}{\partial v}(a,b) -(x-a)(y-b)\frac{\partial ^2 g}{\partial u\partial v}(a,b) \end{aligned}$$

is the box-affine part of g.
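The equality case of Theorem 42 can be checked numerically for \(g(x,y)=x^2y^2\), which is strongly box-(2, 2)-convex with modulus \(C=1\): one finds \(g(x,y)-Ag(a,b)(x,y)=(x-a)^2(y-b)^2\) identically. In the sketch below the box-affine part is written with a \(-g(a,b)\) term, the normalization under which \(Ag\) reproduces box-affine functions exactly:

```python
g   = lambda x, y: x*x * y*y
gu  = lambda x, y: 2*x * y*y       # dg/du
gv  = lambda x, y: 2*x*x * y       # dg/dv
guv = lambda x, y: 4*x*y           # d^2 g / du dv

def A(a, b, x, y):
    """Box-affine part of g at (a, b), with the -g(a,b) normalization."""
    return (g(x, b) + g(a, y) - g(a, b)
            + (x - a)*gu(a, y) + (y - b)*gv(x, b)
            - (x - a)*gu(a, b) - (y - b)*gv(a, b)
            - (x - a)*(y - b)*guv(a, b))

a, b = 0.5, -1.0
ok = all(abs(g(x, y) - A(a, b, x, y) - (x - a)**2 * (y - b)**2) < 1e-9
         for x in (-1.0, 0.0, 2.0) for y in (-2.0, 1.0, 3.0))
print(ok)
```
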

In Sects. 7 and 8, we used Theorem 28 to obtain several inequalities for box-(m, n)-convex functions and strongly box-(m, n)-convex functions with \(m=2\) and \(n=2\). The same theorem can also be used to obtain similar inequalities for other m and n.