1 Introduction

Let I denote a nonempty open subinterval of \(\mathbb {R}\) throughout this paper. Given \(n\in \mathbb {N}\), a function \(M_n:I^n\rightarrow I\) is called an n-variable mean if

$$\begin{aligned} { \min (x_1,\dots ,x_n)\le M_n(x)\le \max (x_1,\dots ,x_n) } \end{aligned}$$

holds for all \(x=(x_1,\dots ,x_n)\in I^n\). A function \(M:\bigcup _{n=1}^{\infty } I^n\rightarrow I\) is said to be a mean if, for all \(n\in \mathbb {N}\), the restriction \(M|_{I^n}\) is an n-variable mean.

The Jensen convexity (or Jensen concavity) of means has become a key property in the investigation of Hardy-type inequalities (cf. [11,12,13]). An n-variable mean \(M_n:I^n\rightarrow I\) is said to be Jensen convex if, for all \(x,y\in I^n\),

$$\begin{aligned} {M_n\Big (\frac{x+y}{2}\Big )\le \frac{M_n(x)+M_n(y)}{2}. } \end{aligned}$$

A mean \(M:\bigcup _{n=1}^{\infty } I^n\rightarrow I\) is said to be Jensen convex if, for all \(n\in \mathbb {N}\), the n-variable mean \(M_n:=M|_{I^n}\) is Jensen convex.

Given \(p\in \mathbb {R}\), the \(p^\textrm{th}\) power mean or Hölder mean \(\mathscr {P}_p\) is defined, for \(n\in \mathbb {N}\) and \(x_1,\dots ,x_n\in \mathbb {R}_+\), by

$$\begin{aligned} { \mathscr {P}_p(x_1,\dots ,x_n):= {\left\{ \begin{array}{ll} \bigg (\dfrac{x_1^p+\dots +x_n^p}{n}\bigg )^{\frac{1}{p}} &{}\hbox {if } p\ne 0,\\ \root n \of {x_1\cdots x_n} &{}\hbox {if } p=0. \end{array}\right. } } \end{aligned}$$

Concerning the convexity of Hölder means we have the following classical result.

Theorem A

Let \(n\ge 2\) be fixed. Let \(p\in \mathbb {R}\), and I be a subinterval of \(\mathbb {R}_+\). Then the n-variable mean \(\mathscr {P}_p|_{I^n}\) is Jensen convex if and only if \(p\ge 1\).
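
To see Theorem A in action, one may sample the midpoint inequality numerically. The following Python sketch (ours, not part of the original argument; all helper names are our own) does this for the power mean:

import random

def power_mean(x, p):
    # n-variable Hoelder (power) mean of positive numbers
    n = len(x)
    if p == 0:
        prod = 1.0
        for t in x:
            prod *= t
        return prod ** (1.0 / n)
    return (sum(t ** p for t in x) / n) ** (1.0 / p)

def midpoint_convex(mean, n, trials=20000, lo=0.1, hi=10.0):
    # randomized test of M((x+y)/2) <= (M(x)+M(y))/2 on (lo,hi)^n
    random.seed(0)
    for _ in range(trials):
        x = [random.uniform(lo, hi) for _ in range(n)]
        y = [random.uniform(lo, hi) for _ in range(n)]
        m = [(a + b) / 2 for a, b in zip(x, y)]
        if mean(m) > (mean(x) + mean(y)) / 2 + 1e-12:
            return False
    return True

print(midpoint_convex(lambda v: power_mean(v, 2.0), n=3))  # True, since p = 2 >= 1
print(midpoint_convex(lambda v: power_mean(v, 0.5), n=3))  # False, since p = 1/2 < 1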

An important generalization of Hölder means is the notion of quasiarithmetic means. Given a continuous strictly monotone function \(f :I \rightarrow \mathbb {R}\), the quasiarithmetic mean \(\mathscr {A}_f\) is defined, for \(n\in \mathbb {N}\) and \(x_1,\dots ,x_n\in I\), by

$$\begin{aligned} { \mathscr {A}_f(x_1,\dots ,x_n):=f^{-1}\bigg (\frac{f(x_1)+\dots +f(x_n)}{n}\bigg ). } \end{aligned}$$
(1.1)

If \(p\in \mathbb {R}\setminus \{0\}\) and \(f(x):=x^p\) for \(x\in \mathbb {R}_+\), or if \(p=0\) and \(f(x):=\log (x)\) for \(x\in \mathbb {R}_+\), then \(\mathscr {A}_f=\mathscr {P}_p\), therefore, Hölder means are indeed quasiarithmetic means.
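
As a quick numerical sanity check of this identity, one may compare the two formulas directly (a sketch under the above choice of f; the helper names are ours):

def quasiarithmetic(x, f, f_inv):
    # A_f(x_1,...,x_n) = f^{-1}((f(x_1)+...+f(x_n))/n)
    return f_inv(sum(f(t) for t in x) / len(x))

x, p = [1.3, 2.7, 0.4], 3.0
qa = quasiarithmetic(x, lambda t: t ** p, lambda s: s ** (1 / p))
pm = (sum(t ** p for t in x) / len(x)) ** (1 / p)
print(abs(qa - pm) < 1e-12)  # True: A_f coincides with P_p for f(x) = x^p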

The Jensen convexity of quasiarithmetic means has been completely characterized by combining the results of the papers [4, 14].

Theorem B

Let \(n\ge 2\) be fixed and \(f :I \rightarrow \mathbb {R}\) be continuous and strictly monotone. Then the n-variable mean \(\mathscr {A}_f|_{I^n}\) is Jensen convex if and only if f is twice continuously differentiable with a nonvanishing first derivative and either \(f''\) is identically zero on I, or \(f''\) is nowhere zero and \(f'/f''\) is positive and convex on I.

Example 1.1

Consider now the quasiarithmetic mean \(\mathscr {A}_f\) generated by the function \(f:\mathbb {R}\rightarrow \mathbb {R}\) given by \(f(x):=x^p\), where p is the ratio of two odd integers. Then f is strictly monotone on \(\mathbb {R}\) and its inverse is given by \(f^{-1}(x)=x^{1/p}\). Obviously, the quasiarithmetic mean obtained this way is an extension of the power mean \(\mathscr {P}_p\) to the domain \( \bigcup _{n=1}^{\infty } \mathbb {R}^n\). If \(p<1\), then f is not differentiable at 0, however, it is infinitely many times differentiable on \(\mathbb {R}_0:=\mathbb {R}\setminus \{0\}\), \(f''\) does not vanish on \(\mathbb {R}_0\) and \((f'/f'')(x)=x/(p-1)\). Thus, if \(p<1\), then \(f'/f''\) is positive and convex on \((-\infty ,0)\), which, according to the above theorem, implies that \(\mathscr {A}_f\) is a Jensen convex mean on \((-\infty ,0)\). Since f is an odd function, we have the identity \(\mathscr {A}_f(-x_1,\dots ,-x_n)=-\mathscr {A}_f(x_1,\dots ,x_n)\), hence it follows that \(\mathscr {A}_f\) is Jensen concave on \((0,\infty )\). On the other hand, for any \(a<0<b\), the mean \(\mathscr {A}_f\) is neither Jensen convex nor Jensen concave on \((a,b)\) (because f is not differentiable at \(0\in (a,b)\)).
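
The behaviour described in Example 1.1 can also be observed numerically. The sketch below (our code) takes \(p=1/3\) and realizes f and \(f^{-1}\) by a sign-preserving power, so that the mean is defined on all of \(\bigcup _{n=1}^{\infty }\mathbb {R}^n\):

import random

def odd_pow(t, p):
    # t^p extended oddly to negative arguments (p a ratio of odd integers)
    return t ** p if t >= 0 else -((-t) ** p)

def mean_odd(x, p=1.0 / 3.0):
    # the quasiarithmetic mean generated by f(x) = x^p
    return odd_pow(sum(odd_pow(t, p) for t in x) / len(x), 1.0 / p)

# Jensen convexity holds on (-inf, 0):
x, y = [-4.0, -1.0], [-9.0, -2.0]
m = [(a + b) / 2 for a, b in zip(x, y)]
print(mean_odd(m) <= (mean_odd(x) + mean_odd(y)) / 2)  # True

# ...but fails on intervals containing 0; a random search finds violations:
random.seed(1)
found = False
for _ in range(100000):
    x = [random.uniform(-1, 1) for _ in range(2)]
    y = [random.uniform(-1, 1) for _ in range(2)]
    m = [(a + b) / 2 for a, b in zip(x, y)]
    if mean_odd(m) > (mean_odd(x) + mean_odd(y)) / 2 + 1e-9:
        found = True
        break
print(found)  # True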

Another generalization of Hölder means was introduced by Gini [6]. To recall this definition, for \(q,r\in \mathbb {R}\), define the Gini mean \(\mathscr {G}_{q,r}\), for \(n\in \mathbb {N}\) and \(x_1,\dots ,x_n\in \mathbb {R}_+\), by

$$\begin{aligned} { \mathscr {G}_{q,r}(x_1,\dots ,x_n):= {\left\{ \begin{array}{ll} \bigg (\dfrac{x_1^q+\dots +x_n^q}{x_1^r+\dots +x_n^r}\bigg )^{\frac{1}{q-r}} &{}\hbox {if } q\ne r,\\ \exp \bigg (\dfrac{x_1^q\log (x_1)+\dots +x_n^q\log (x_n)}{x_1^q+\dots +x_n^q}\bigg ) &{}\hbox {if } q=r. \end{array}\right. } } \end{aligned}$$

The characterization of the convexity of Gini means can easily be deduced from the general results of Losonczi [7] and it reads as follows.

Theorem C

Let \(q,r\in \mathbb {R}\). Then the mean \(\mathscr {G}_{q,r}\) is Jensen convex if and only if \(0\le \min (q,r)\le 1\le \max (q,r)\).

It is important to emphasize that, for a fixed number of variables, the characterization is different, as the following result of Losonczi and Páles [8] shows:

Theorem D

Let \(q,r\in \mathbb {R}\). Then the 2-variable mean \(\mathscr {G}_{q,r}|_{\mathbb {R}_+^2}\) is Jensen convex if and only if \(0\le \min (q,r)\le 1\le q+r\).

A class of means which includes Gini as well as quasiarithmetic means was discovered by Bajraktarević in the papers [1, 2]. Given a positive function \(p:I\rightarrow \mathbb {R}_+\) and a continuous strictly monotone function \(f:I\rightarrow \mathbb {R}\), the Bajraktarević mean \(\mathscr {B}_{f,p}\) is defined, for \(n\in \mathbb {N}\) and \(x_1,\dots ,x_n\in I\), by

$$\begin{aligned} { \mathscr {B}_{f,p}(x_1,\dots ,x_n):=f^{-1}\bigg (\frac{p(x_1)f(x_1)+\dots +p(x_n)f(x_n)}{p(x_1)+\dots +p(x_n)}\bigg ). } \end{aligned}$$

If \(q,r\in \mathbb {R}\), \(q\ne r\), \(f(x):=x^{q-r}\), \(p(x):=x^r\) for \(x\in \mathbb {R}_+\), or if \(q=r\in \mathbb {R}\) and \(f(x):=\log (x)\), \(p(x):=x^q\) for \(x\in \mathbb {R}_+\), then \(\mathscr {B}_{f,p}=\mathscr {G}_{q,r}\). Therefore, Gini means form a subclass of Bajraktarević means. On the other hand, if p is a constant function, then one can see that \(\mathscr {B}_{f,p}=\mathscr {A}_f\) and hence quasiarithmetic means are also included in the class of Bajraktarević means.
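
These identities are easy to verify numerically; the following sketch (with our own helper names) computes the Bajraktarević mean directly from its definition and compares it with the Gini mean:

import math

def bajraktarevic(x, f, f_inv, p):
    # B_{f,p}(x) = f^{-1}(sum p(x_i) f(x_i) / sum p(x_i))
    w = [p(t) for t in x]
    return f_inv(sum(wi * f(t) for wi, t in zip(w, x)) / sum(w))

def gini(x, q, r):
    if q != r:
        return (sum(t ** q for t in x) / sum(t ** r for t in x)) ** (1.0 / (q - r))
    s = sum(t ** q for t in x)
    return math.exp(sum(t ** q * math.log(t) for t in x) / s)

x, q, r = [1.2, 3.4, 0.7], 2.0, 0.5
b = bajraktarevic(x, lambda t: t ** (q - r), lambda s: s ** (1 / (q - r)), lambda t: t ** r)
print(abs(b - gini(x, q, r)) < 1e-10)  # True: B_{f,p} = G_{q,r}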

The convexity of Bajraktarević means with sufficiently regular generating functions was characterized by the following result of Losonczi [7].

Theorem E

Let \(p:I\rightarrow \mathbb {R}_+\) be a positive function and \(f:I\rightarrow \mathbb {R}\) be a differentiable strictly monotone function with a nonvanishing first derivative. Then the Bajraktarević mean \(\mathscr {B}_{f,p}\) is convex if and only if the two-variable map \(B_{f,p}:I^2\rightarrow \mathbb {R}\) defined by

$$\begin{aligned} {B_{f,p}(x,u):=\frac{p(x)(f(x)-f(u))}{p(u)f'(u)} } \end{aligned}$$

is convex.

The notions of deviations and quasideviations were introduced by Daróczy in [5] and by Páles [9], respectively. In what follows, we recall Definition 2.1 and Theorem 2.1 from the paper [9]. A two-variable function \(E:I^2\rightarrow \mathbb {R}\) will be called a quasideviation if E possesses the following three properties:

  1. (D1)

    For all \(x,u\in I\), the equality \({{\,\textrm{sign}\,}}E(x,u)={{\,\textrm{sign}\,}}(x-u)\) holds.

  2. (D2)

    For all \(x\in I\), the mapping \(I\ni u\mapsto E(x,u)\) is continuous.

  3. (D3)

    For all \(x,y\in I\) with \(x<y\), the mapping

    $$\begin{aligned} { (x,y)\ni u\mapsto \frac{E(x,u)}{E(y,u)} } \end{aligned}$$

    is strictly decreasing.

We say that E is a deviation (cf. [5]) if E possesses properties (D1), (D2) and, instead of (D3), the following condition holds.

  1. (D3’)

    For all \(x\in I\), the mapping \(I\ni u\mapsto E(x,u)\) is strictly decreasing.

It is not difficult to show that every deviation is also a quasideviation. In order to introduce quasideviation means, the following statement is instrumental (cf. [9, Theorem 2.1]).

Theorem F

Let \(E:I^2\rightarrow \mathbb {R}\) be a quasideviation. Then, for all \(n\in \mathbb {N}\) and \(x_1,\dots ,x_n\in I\), there exists a unique element \(u\in I\) such that

$$\begin{aligned} { E(x_1,u)+\dots +E(x_n,u)=0. } \end{aligned}$$
(1.2)

Furthermore, \(\min (x_1,\dots ,x_n)<u<\max (x_1,\dots ,x_n)\) unless \(x_1=\dots =x_n\).

For \(n\in \mathbb {N}\) and \(x_1,\dots ,x_n\in I\), the solution u of equation (1.2) is called the E-quasideviation mean of \(x_1,\dots ,x_n\) and will be denoted by \(\mathscr {D}_E(x_1,\dots ,x_n)\).

If \(f:I\rightarrow \mathbb {R}\) is strictly increasing and \(p:I\rightarrow \mathbb {R}_+\) is continuous, then \(E(x,u):=p(x)(f(x)-f(u))\) is a deviation (and also a quasideviation) and a simple computation yields that \(\mathscr {D}_E=\mathscr {B}_{f,p}\).
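
Theorem F also suggests a practical way to evaluate quasideviation means: the unique root of (1.2) lies in \([\min (x_1,\dots ,x_n),\max (x_1,\dots ,x_n)]\) and, by property (D1), the sum in (1.2) is nonnegative at the left endpoint and nonpositive at the right one, so bisection applies. A minimal sketch (assuming E is given as a Python function; the names are ours), checked against the explicit Bajraktarević formula:

import math

def quasideviation_mean(E, x, tol=1e-12):
    # solve E(x_1,u) + ... + E(x_n,u) = 0 for u by bisection
    lo, hi = min(x), max(x)
    if lo == hi:
        return lo
    g = lambda u: sum(E(xi, u) for xi in x)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid  # keep the sign change of g bracketed in [lo, hi]
        else:
            hi = mid
    return (lo + hi) / 2

# deviation E(x,u) = p(x)(f(x) - f(u)) with f = log and p = id:
E = lambda x, u: x * (math.log(x) - math.log(u))
x = [1.0, 2.0, 6.0]
u = quasideviation_mean(E, x)
b = math.exp(sum(t * math.log(t) for t in x) / sum(x))  # B_{f,p}(x)
print(abs(u - b) < 1e-9)  # True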

We say that a quasideviation \(E:I\times I\rightarrow \mathbb {R}\) is normalizable (cf. [5]) if, for all \(x\in I\), the function \(u\mapsto E(x,u)\) is differentiable at x and the mapping \(x\mapsto \partial _2E(x,x)\) is strictly negative and continuous on I. The normalization \(E^*:I\times I\rightarrow \mathbb {R}\) of E is defined by

$$\begin{aligned} { E^*(x,u):=-\frac{E(x,u)}{\partial _2E(u,u)}. } \end{aligned}$$

If E is normalizable, then \(E^*\) is also a quasideviation, and \(\partial _2E^*(x,x)=-1\), therefore \((E^*)^*=E^*\).
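
For instance, for the deviation \(E(x,u)=p(x)(f(x)-f(u))\) one has \(\partial _2E(u,u)=-p(u)f'(u)\), so \(E^*(x,u)=p(x)(f(x)-f(u))/(p(u)f'(u))\), which is precisely the two-variable map \(B_{f,p}\) appearing in Theorem E. A small finite-difference sketch (our code, with f and p chosen only for illustration) confirms the normalization property \(\partial _2E^*(x,x)=-1\):

import math

def E(x, u, f=math.log, p=lambda t: t * t):
    return p(x) * (f(x) - f(u))

def d2(F, x, u, h):
    # central finite difference in the second variable
    return (F(x, u + h) - F(x, u - h)) / (2 * h)

E_star = lambda x, u: -E(x, u) / d2(E, u, u, 1e-6)
print(round(d2(E_star, 1.7, 1.7, 1e-4), 4))  # -1.0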

In the class of deviation means generated by normalizable quasideviations the characterization of the Jensen convexity follows from a general result of Daróczy [5].

Theorem G

Let \(E:I\times I\rightarrow \mathbb {R}\) be a normalizable quasideviation. Then \(\mathscr {D}_E\) is Jensen convex if and only if \(E^*\) is convex on \(I\times I\).

Without assuming normalizability, Páles [10, Theorem 11] obtained a general theorem, which implies the following result.

Theorem H

Let \(E:I\times I\rightarrow \mathbb {R}\) be a quasideviation. Then \(\mathscr {D}_E\) is Jensen convex if and only if there exist two functions \(a,b:I\times I\rightarrow \mathbb {R}\) such that, for all \(x,y,u,v\in I\),

$$\begin{aligned} {E\Big (\frac{x+y}{2},\frac{u+v}{2}\Big ) \le a(u,v)E(x,u)+b(u,v)E(y,v). } \end{aligned}$$
(1.3)

Motivated by the characterization theorem about the Jensen convexity of quasiarithmetic means, our aim is to establish a characterization of the Jensen convexity of quasideviation and Bajraktarević means without any additional regularity assumptions, that is, to generalize Theorems E and G. The main starting point of our approach will be Theorem H.

2 Main results

The following auxiliary result will be needed in the sequel.

Lemma 2.1

Let \(I \subset \mathbb {R}\) be an open interval and \(f :I \rightarrow \mathbb {R}\). If \(f(\frac{x+y}{2})\le f(x)\) for all \(x,y \in I\), then f is constant.

Proof

Take any element x of I and \(\varepsilon >0\) such that \(x-2\varepsilon ,x+2\varepsilon \in I\). Now, for all \(\delta \in (-\varepsilon ,\varepsilon )\), we have that \(x-\delta ,x+\delta ,x+2\delta \in (x-2\varepsilon ,x+2\varepsilon )\subseteq I\), and

$$\begin{aligned} { f(x)= f\Big (\tfrac{(x+\delta )+(x-\delta )}{2}\Big )\le f(x+\delta )=f\Big (\tfrac{x+(x+2\delta )}{2}\Big )\le f(x). } \end{aligned}$$

Consequently, \(f(x)=f(x+\delta )\) for all \(\delta \in (-\varepsilon ,\varepsilon )\). Thus f is differentiable at x and \(f'(x)=0\).

Since x was arbitrary, we obtain that f is differentiable on I and \(f'\) is identically zero on it. This implies that f is constant. \(\square \)

To simplify the formulation of some of the results and also the proofs, we introduce the following regularity property: A function \(f:I\rightarrow \mathbb {R}\) is called nearly differentiable if, at every point of I, it has left and right derivatives and the set of those points where these one-sided derivatives are different is at most countable. It is well-known that convex functions admit this regularity property.

Theorem 2.2

Let \(E:I^2\rightarrow \mathbb {R}\) be a quasideviation. Then the following conditions are equivalent to each other:

  1. (i)

The quasideviation mean \(\mathscr {D}_E\) is Jensen convex.

  2. (ii)

    For all \(u\in I\), the map \(x\mapsto E(x,u)\) has a positive right-derivative at \(x=u\), denoted by \(\partial _1^+E(u,u)\), and the mapping \(E^+:I^2\rightarrow \mathbb {R}\) defined by \(E^+(x,u):=\frac{E(x,u)}{\partial _1^+E(u,u)}\) is convex on \(I^2\).

  3. (iii)

    For all \(u\in I\), the map \(x\mapsto E(x,u)\) has a positive left-derivative at \(x=u\), denoted by \(\partial _1^-E(u,u)\), and the mapping \(E^-:I^2\rightarrow \mathbb {R}\) defined by \(E^-(x,u):=\frac{E(x,u)}{\partial _1^-E(u,u)}\) is convex on \(I^2\).

Moreover, if any of the above equivalent conditions is satisfied, then the quasideviation E and the quasideviation mean \(\mathscr {D}_E\) possess the following properties:

  1. (a)

    E is continuous on \(I^2\).

  2. (b)

    For all \(u\in I\), the function \(I\ni x\mapsto E(x,u)\) is convex.

  3. (c)

    The function \(I\ni u\mapsto (\partial _1^-E(u,u),\partial _1^+E(u,u))\) is continuous.

  4. (d)

    The map \(I\ni u \mapsto \frac{\partial _1^+E(u,u)}{\partial _1^-E(u,u)}\) is constant.

  5. (e)

\(\mathscr {D}_E\) is non-smaller than the arithmetic mean.

Proof

Now assume that the condition (i) is satisfied. In view of Theorem H, we know that the quasideviation mean \(\mathscr {D}_E\) is Jensen convex if and only if there exist two functions \(a,b :I^2 \rightarrow \mathbb {R}\) such that (1.3) holds. Interchanging the pair \((x,u)\) with \((y,v)\), it follows that

$$\begin{aligned} { E\Big (\frac{x+y}{2},\frac{u+v}{2}\Big )\le a(v,u) E(y,v)+b(v,u)E(x,u), \qquad x,y,u,v \in I. } \end{aligned}$$

Adding the above inequality to (1.3) side by side, we get

$$\begin{aligned} {E\Big (\frac{x+y}{2},\frac{u+v}{2}\Big )\le c(u,v) E(x,u)+c(v,u)E(y,v), \qquad x,y,u,v \in I, } \nonumber \\ \end{aligned}$$
(2.1)

where \(c:I^2\rightarrow \mathbb {R}\) is defined by

$$\begin{aligned}{ c(u,v):=\frac{a(u,v)+b(v,u)}{2},\qquad u,v \in I. } \end{aligned}$$

Putting \(y:=x\) and \(v:=u\) into (2.1) we obtain

$$\begin{aligned}{ E(x,u)\le 2c(u,u) E(x,u),\qquad x,u \in I. } \end{aligned}$$

Then, by property (D1) of quasideviations, for each \(u \in I\), the function \(E(\cdot ,u)\) takes both positive and negative values, thus we can conclude that

$$\begin{aligned}{ c(u,u)=\frac{1}{2},\qquad u \in I. } \end{aligned}$$

In the next step, substituting \(u:=v\) into (2.1), we get

$$\begin{aligned} { E\Big (\frac{x+y}{2},u\Big )\le \frac{E(x,u)+E(y,u)}{2}, \qquad x,y,u \in I. } \end{aligned}$$

Therefore, for each fixed \(u\in I\), the function \(I \ni x \mapsto E(x,u)\) is Jensen convex. On the other hand, this function is bounded from above by 0 on the open interval \((-\infty ,u)\cap I\). Thus, in view of the Bernstein–Doetsch Theorem [3], it is convex. As a consequence, all such maps are nearly differentiable. Thus, for all \((x,u) \in I^2\), the one-sided partial derivatives \(\partial _1^+E(x,u)\) and \(\partial _1^-E(x,u)\) exist. By applying property (D1) of quasideviations, we also get

$$\begin{aligned} { \partial _1^+E(u,u)\ge \partial _1^-E(u,u)>0,\qquad u\in I.} \end{aligned}$$
(2.2)

Next, putting \(y:=v\) into (2.1) we get

$$\begin{aligned}{ E\Big (\frac{x+v}{2},\frac{u+v}{2}\Big )\le c(u,v) E(x,u),\qquad x,u,v \in I. } \end{aligned}$$

Now, using (D1), we can obtain the double inequality

$$\begin{aligned} { \frac{E\big (\frac{x_2+v}{2},\frac{u+v}{2}\big )}{E(x_2,u)}\le c(u,v) \le \frac{E\big (\frac{x_1+v}{2},\frac{u+v}{2}\big )}{E(x_1,u)}, \quad x_1,x_2,u,v \in I\text { with }x_1<u<x_2. }\nonumber \\ \end{aligned}$$
(2.3)

Since E vanishes on the diagonal (by (D1)), we obtain

$$\begin{aligned} \lim \limits _{x_1\rightarrow u^-}\dfrac{E\big (\frac{x_1+v}{2},\frac{u+v}{2}\big )}{E(x_1,u)}&=\frac{1}{2}\lim \limits _{x_1\rightarrow u^-}\frac{E\big (\frac{x_1+v}{2},\frac{u+v}{2}\big )-E\big (\frac{u+v}{2},\frac{u+v}{2}\big )}{\frac{x_1+v}{2}-\frac{u+v}{2}}\cdot \frac{x_1-u}{E(x_1,u)-E(u,u)}\\&=\frac{\partial _1^-E(\frac{u+v}{2},\frac{u+v}{2})}{2\partial _1^-E(u,u)}. \end{aligned}$$

Similarly

$$\begin{aligned} \lim _{x_2\rightarrow u^+}\frac{E\big (\frac{x_2+v}{2},\frac{u+v}{2}\big )}{E(x_2,u)}=\frac{\partial _1^+E(\frac{u+v}{2},\frac{u+v}{2})}{2\partial _1^+E(u,u)}. \end{aligned}$$

Upon taking the limits \(x_1 \rightarrow u^-\) and \(x_2 \rightarrow u^+\) in the inequalities (2.3), in view of the just proved equalities, we arrive at

$$\begin{aligned} { \frac{\partial _1^+E(\frac{u+v}{2},\frac{u+v}{2})}{2\partial _1^+E(u,u)} \le c(u,v) \le \frac{\partial _1^-E(\frac{u+v}{2},\frac{u+v}{2})}{2\partial _1^-E(u,u)},\qquad u,v \in I. } \end{aligned}$$
(2.4)

By (2.2), we can rewrite this inequality in the following way

$$\begin{aligned}{ \frac{\partial _1^+E(\frac{u+v}{2},\frac{u+v}{2})}{\partial _1^-E(\frac{u+v}{2},\frac{u+v}{2})} \le \frac{\partial _1^+E(u,u)}{\partial _1^-E(u,u)},\qquad u,v \in I. } \end{aligned}$$

Applying Lemma 2.1 to the function \(f(u):=\frac{\partial _1^+E(u,u)}{\partial _1^-E(u,u)}\), we can see that f is constant, thus there exists a constant \(\alpha \in \mathbb {R}\) such that \(\partial _1^+E(u,u)=\alpha \partial _1^-E(u,u)\) for all \(u \in I\). Obviously \(\alpha \ge 1\) since \(\partial _1^+E(u,u)\ge \partial _1^-E(u,u)>0\). Therefore,

$$\begin{aligned}{ \frac{\partial _1^+E(\frac{u+v}{2},\frac{u+v}{2})}{2\partial _1^+E(u,u)}=\frac{\alpha \partial _1^-E(\frac{u+v}{2},\frac{u+v}{2})}{2\alpha \partial _1^-E(u,u)}=\frac{\partial _1^-E(\frac{u+v}{2},\frac{u+v}{2})}{2\partial _1^-E(u,u)},\qquad u,v \in I. } \end{aligned}$$

Consequently, the inequalities in (2.4) yield

$$\begin{aligned}{ c(u,v)=\frac{\partial _1^+E(\frac{u+v}{2},\frac{u+v}{2})}{2\partial _1^+E(u,u)} = \frac{\partial _1^-E(\frac{u+v}{2},\frac{u+v}{2})}{2\partial _1^-E(u,u)},\qquad u,v \in I. } \end{aligned}$$

Thus, substituting these expressions for c into (2.1) and dividing by the positive (cf. (2.2)) quantities \(\partial _1^{\pm }E(\frac{u+v}{2},\frac{u+v}{2})\), we obtain the following Jensen convexity-type properties:

$$\begin{aligned} { \frac{E(\frac{x+y}{2},\frac{u+v}{2})}{\partial _1^+E(\frac{u+v}{2},\frac{u+v}{2})}\le \frac{1}{2} \left( \frac{E(x,u)}{\partial _1^+E(u,u)}+\frac{E(y,v)}{\partial _1^+E(v,v)} \right) ,\qquad x,y,u,v \in I }\nonumber \\ \end{aligned}$$
(2.5)

and

$$\begin{aligned}{ \frac{E(\frac{x+y}{2},\frac{u+v}{2})}{\partial _1^-E(\frac{u+v}{2},\frac{u+v}{2})}\le \frac{1}{2} \left( \frac{E(x,u)}{\partial _1^-E(u,u)}+\frac{E(y,v)}{\partial _1^-E(v,v)} \right) ,\qquad x,y,u,v \in I. } \end{aligned}$$

Equivalently, both \(E^+\) and \(E^-\) are Jensen convex over \(I^2\). On the other hand, the function E and hence also \(E^+\) and \(E^-\) are bounded from above by zero over the open set \(\{(x,u)\in I^2\mid x<u\}\). Therefore, in view of the Bernstein–Doetsch theorem, the Jensen convexity implies the convexity of both \(E^+\) and \(E^-\), i.e., the conditions (ii) and (iii) hold, respectively.

To show the converse implications assume that (ii) holds, that is, for all \(u\in I\), the map \(x\mapsto E(x,u)\) has a positive right-derivative at \(x=u\) and \(E^+\) is convex over \(I^2\). Then (2.1) is satisfied with

$$\begin{aligned}{ c(u,v):=\frac{\partial _1^+E(\frac{u+v}{2},\frac{u+v}{2})}{2\partial _1^+E(u,u)},} \end{aligned}$$

which, by applying Theorem H, implies that \(\mathscr {D}_E\) is Jensen convex. The proof of the implication (iii)\(\Longrightarrow \)(i) is analogous.

To prove the last statements of the theorem, assume that (i) (and hence (ii), (iii)) holds. As we have seen in the proof, this implies that E is convex in its first variable, i.e., (b) is valid. We have also verified assertion (d). It follows from (ii) that the function \(E^+\) is convex, and hence it is continuous on \(I^2\).

To prove assertion (c), let \(u_0\in I\) be fixed and choose \(x\in I{\setminus }\{u_0\}\). Then, using also property (D2) of quasideviations, it follows that the map

$$\begin{aligned}{ u\mapsto \frac{E(x,u)}{E^+(x,u)}=\partial _1^+E(u,u) } \end{aligned}$$

is continuous at \(u_0\). This proves that the map \(I\ni u\mapsto \partial _1^+E(u,u)\) is continuous. Similarly, we can see that the map \(I\ni u\mapsto \partial _1^-E(u,u)\) is also continuous and hence assertion (c) is valid.

Using the equality \(E(x,u)=\partial _1^+E(u,u)\cdot E^+(x,u)\), the continuity of \(E^+\) (which is a consequence of its convexity) and property (c), we can conclude that assertion (a) is also valid.

It easily follows from property (b) of the quasideviation E that (e) holds. Indeed, if for all \(u\in I\), the function \(E(\cdot ,u)\) is convex, then there exists a function \(h:I\rightarrow \mathbb {R}\) such that

$$\begin{aligned}{ h(u)(x-u)\le E(x,u) \qquad (x,u\in I). } \end{aligned}$$

According to the results of the paper [10, Theorem 7, condition (iv)], it follows that \(\mathscr {D}_E\) is non-smaller than the arithmetic mean. \(\square \)
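
Conditions (i) and (ii) of Theorem 2.2 can be sampled side by side for a concrete quasideviation. For the deviation \(E(x,u)=x^2-u^2\) on \(I=(0,\infty )\) one gets \(\partial _1^+E(u,u)=2u\) and \(E^+(x,u)=(x^2-u^2)/(2u)\), and the associated quasideviation mean is the quadratic power mean. A randomized sketch (our code, not part of the proof):

import random

def E_plus(x, u):
    # E^+(x,u) = E(x,u)/d_1 E(u,u) = (x^2 - u^2)/(2u)
    return (x * x - u * u) / (2 * u)

def quad_mean(x):
    # the quasideviation mean of E(x,u) = x^2 - u^2
    return (sum(t * t for t in x) / len(x)) ** 0.5

random.seed(0)
ok = True
for _ in range(20000):
    x1, u1, x2, u2 = (random.uniform(0.1, 10) for _ in range(4))
    if E_plus((x1 + x2) / 2, (u1 + u2) / 2) > (E_plus(x1, u1) + E_plus(x2, u2)) / 2 + 1e-9:
        ok = False
    a = [random.uniform(0.1, 10) for _ in range(3)]
    b = [random.uniform(0.1, 10) for _ in range(3)]
    m = [(s + t) / 2 for s, t in zip(a, b)]
    if quad_mean(m) > (quad_mean(a) + quad_mean(b)) / 2 + 1e-9:
        ok = False
print(ok)  # True: (ii) holds and the mean is Jensen convex, as (i) <=> (ii) predicts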

Theorem 2.3

Let \(E:I^2\rightarrow \mathbb {R}\) be a quasideviation and \(\alpha ,\beta \in (0,\infty )\). Define \(E_{\alpha ,\beta }:I^2\rightarrow \mathbb {R}\) by

$$\begin{aligned} { E_{\alpha ,\beta }(x,u): ={\left\{ \begin{array}{ll} \alpha E(x,u) &{}\text { for }x\le u, \\ \beta E(x,u) &{}\text { for }x>u. \end{array}\right. } } \end{aligned}$$
(2.6)

Then \(E_{\alpha ,\beta }\) is a quasideviation. If, additionally, \(\mathscr {D}_E\) is Jensen convex and \(\alpha \le \beta \), then so is \(\mathscr {D}_{E_{\alpha ,\beta }}\).

Furthermore, if E is differentiable in the sense of Gateaux at every point of the diagonal of \(I^2\) and the map \(u\mapsto \partial _1E(u,u)\) is continuous, then \(\mathscr {D}_{E_{\alpha ,\beta }}\) is Jensen convex if and only if \(\mathscr {D}_E\) is Jensen convex and \(\alpha \le \beta \).

Proof

The properties (D1) and (D2) of quasideviations are obviously satisfied. To check (D3), let \(x,y\in I\) with \(x<y\). Then, for all \(u\in (x,y)\), we have

$$\begin{aligned}{ \frac{E_{\alpha ,\beta }(x,u)}{E_{\alpha ,\beta }(y,u)} =\frac{\alpha E(x,u)}{\beta E(y,u)}. } \end{aligned}$$

The right hand side is a strictly decreasing function of u because E is a quasideviation, therefore, so is the left hand side, which shows that \(E_{\alpha ,\beta }\) also possesses property (D3).

Assume now that \(\mathscr {D}_E\) is Jensen convex and \(\alpha \le \beta \). Then,

$$\begin{aligned}{ E_{\alpha ,\beta }(x,u)=\max (\alpha E(x,u),\beta E(x,u)), \qquad (x,u)\in I^2, } \end{aligned}$$

and, according to condition (ii) of Theorem 2.2, the function \(E^+\) is convex over \(I^2\). On the other hand, for \((x,u)\in I^2\),

$$\begin{aligned} E^+_{\alpha ,\beta }(x,u)&=\dfrac{E_{\alpha ,\beta }(x,u)}{\partial _1^+E_{\alpha ,\beta }(u,u)} =\dfrac{\max (\alpha E(x,u),\beta E(x,u))}{\beta \partial _1^+E(u,u)}\\&=\frac{1}{\beta }\max \big (\alpha E^+(x,u),\beta E^+(x,u)\big ). \end{aligned}$$

Therefore,

$$\begin{aligned} { E^+_{\alpha ,\beta }=\frac{1}{\beta }\max \big (\alpha E^+,\beta E^+\big ), } \end{aligned}$$
(2.7)

which shows that \(E^+_{\alpha ,\beta }\) is the maximum of two convex functions, and hence, itself is convex. Thus, condition (ii) of Theorem 2.2 holds for \(E_{\alpha ,\beta }\) and hence \(\mathscr {D}_{E_{\alpha ,\beta }}\) is Jensen convex.

Now assume that E is differentiable in the sense of Gateaux at every point of the diagonal of \(I^2\), the map \(I\ni u\mapsto \partial _1E(u,u)\) is continuous and \(\mathscr {D}_{E_{\alpha ,\beta }}\) is Jensen convex. Then \(E^+_{\alpha ,\beta }\) is convex.

To prove that \(\alpha \le \beta \), let \(u\in I\) be fixed. Then we have that

$$\begin{aligned}{ \alpha \partial _1 E(u,u)=\partial _1^-E_{\alpha ,\beta }(u,u) \le \partial _1^+E_{\alpha ,\beta }(u,u)=\beta \partial _1 E(u,u). } \end{aligned}$$

Since \(\partial _1 E(u,u)>0\), it follows that \(\alpha \le \beta \).

The Jensen convexity of \(\mathscr {D}_{E_{\alpha ,\beta }}\) implies that \(E^+_{\alpha ,\beta }\) is convex. In view of formula (2.7), we can see that \(E^+\) is convex on both triangles \(\Delta ^+:=\{(x,u)\in I^2\mid x\le u\}\) and \(\Delta ^-:=\{(x,u)\in I^2\mid x\ge u\}\). To prove that \(E^+\) is convex on \(I^2=\Delta ^+\cup \Delta ^-\), it suffices to show that \(E^+\) is convex along any line which crosses the diagonal of \(I^2\).

Let \(u\in I\) be fixed and let \((0,0)\ne (v,w)\in \mathbb {R}^2\) be arbitrary. Then the line \(\mathbb {R}\ni t\mapsto (u+tv,u+tw)\) crosses the diagonal of \(I^2\) at (uu). We are going to show that the function \(e:T\rightarrow \mathbb {R}\) defined by \(e(t):=E^+(u+tv,u+tw)\) is convex over the interval \(T:=\{t\in \mathbb {R}\mid (u+tv,u+tw)\in I^2\}\). The convexity of \(E^+\) over the triangles \(\Delta ^+\) and \(\Delta ^-\) implies that e is convex over the subintervals \(T_-:=(-\infty ,0]\cap T\) and \(T_+:=[0,\infty )\cap T\). On the other hand, using the continuity of the map \(u\mapsto \partial _1E(u,u)\), we can get that

$$\begin{aligned} \lim _{t\rightarrow 0} \frac{e(t)-e(0)}{t}&=\lim \limits _{t\rightarrow 0} \dfrac{E^+(u+tv,u+tw)}{t} =\lim \limits _{t\rightarrow 0} \dfrac{E(u+tv,u+tw)}{\partial _1 E(u+tw,u+tw)\,t}\\&=\frac{1}{\partial _1 E(u,u)} \lim \limits _{t\rightarrow 0} \dfrac{E(u+tv,u+tw)-E(u,u)}{t}. \end{aligned}$$

By the Gateaux differentiability assumption on E, the limit on the right hand side exists, therefore, e is differentiable at \(t=0\). This property of e together with its convexity over the subintervals \(T_-\) and \(T_+\) imply that e is convex over T. Therefore, we have proved that \(E^+\) is convex on \(I^2\) and hence the mean \(\mathscr {D}_E\) is Jensen convex. \(\square \)

Corollary 2.4

Let \(f :I \rightarrow \mathbb {R}\) be a continuous, strictly increasing function and \(\alpha ,\beta \in (0,\infty )\). Then the function \(E_{\alpha ,\beta } :I^2 \rightarrow \mathbb {R}\) given by

$$\begin{aligned} { E_{\alpha ,\beta }(x,u):={\left\{ \begin{array}{ll} \alpha (f(x)-f(u)) &{}\text { for }x\le u; \\ \beta (f(x)-f(u)) &{}\text { for }x>u \end{array}\right. }} \end{aligned}$$
(2.8)

is a quasideviation. Furthermore, \(\mathscr {D}_{E_{\alpha ,\beta }}\) is Jensen convex if and only if \(\alpha \le \beta \), f is twice differentiable with a positive derivative and

$$\begin{aligned} { \text {either } f'' \text { is nonvanishing and } \frac{f'}{f''} \text { is positive and convex, or } f''\equiv 0.}\nonumber \\ \end{aligned}$$
(2.9)

Proof

Define \(E:I^2\rightarrow \mathbb {R}\) by \(E(x,u):=f(x)-f(u)\). Then E is a deviation and hence it is a quasideviation. Thus, by the first statement of Theorem 2.3, we can see that \(E_{\alpha ,\beta }\) is a quasideviation.

Assume first that \(\mathscr {D}_{E_{\alpha ,\beta }}\) is Jensen convex. Then, by assertion (b) of Theorem 2.2, for all \(u\in I\), the map \(x\mapsto E_{\alpha ,\beta }(x,u)\) is convex on I. This implies that \(\alpha f-\alpha f(u)\) is convex on \((-\infty ,u)\cap I\) for all \(u\in I\), and hence, f is convex on I. Therefore, f is nearly differentiable. We can now get, for all \(u\in I\), that \(\partial _1^-E_{\alpha ,\beta }(u,u)=\alpha f'_-(u)\) and \(\partial _1^+E_{\alpha ,\beta }(u,u)=\beta f'_+(u)\). In view of assertion (d) of Theorem 2.2, the ratio function

$$\begin{aligned}{ u\mapsto \frac{\partial _1^+E_{\alpha ,\beta }(u,u)}{\partial _1^-E_{\alpha ,\beta }(u,u)} =\frac{\beta f'_+(u)}{\alpha f'_-(u)} } \end{aligned}$$

is constant on I. Since, except for countably many values of u, we have that \(f'_+(u)=f'_-(u)\), the value of the above ratio equals the constant \(\beta /\alpha \). Thus, for all \(u\in I\), we obtain that \(f'_+(u)=f'_-(u)\), which proves the differentiability of f at every element of I. Thus E is also differentiable over \(I^2\); in particular, it is Gateaux differentiable at the diagonal points of \(I^2\). Moreover, being the derivative of the differentiable convex function f, the map \(u\mapsto \partial _1E(u,u)=f'(u)\) is continuous. Thus, in view of Theorem 2.3, it follows that \(\alpha \le \beta \) and that the mean \(\mathscr {D}_E\) is Jensen convex. According to Theorem B, \(\mathscr {D}_E=\mathscr {A}_f\) is Jensen convex if and only if f is twice differentiable with a positive derivative and (2.9) holds.

Now assume conversely that \(E_{\alpha ,\beta }\) is of the form (2.8) for some \(\alpha ,\beta \in (0,+\infty )\) with \(\alpha \le \beta \) and a function f which satisfies (2.9). Then, by Theorem B, \(\mathscr {D}_E=\mathscr {A}_f\) is Jensen convex and, due to Theorem 2.3, so is the mean \(\mathscr {D}_{E_{\alpha ,\beta }}\). \(\square \)
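
For a concrete feel for Corollary 2.4, take \(f(x):=x\), so that \(f''\equiv 0\) and (2.9) holds. For two variables, a direct computation gives \(\mathscr {D}_{E_{\alpha ,\beta }}(x_1,x_2)=(\alpha \min (x_1,x_2)+\beta \max (x_1,x_2))/(\alpha +\beta )\), and the sketch below (our code, not part of the proof) illustrates that Jensen convexity holds exactly when \(\alpha \le \beta \):

import random

def mean_ab(x, alpha, beta):
    # the E_{alpha,beta}-quasideviation mean for f(x) = x and n = 2
    return (alpha * min(x) + beta * max(x)) / (alpha + beta)

def violates_convexity(alpha, beta, trials=20000):
    random.seed(0)
    for _ in range(trials):
        x = [random.uniform(0, 1) for _ in range(2)]
        y = [random.uniform(0, 1) for _ in range(2)]
        m = [(a + b) / 2 for a, b in zip(x, y)]
        if mean_ab(m, alpha, beta) > (mean_ab(x, alpha, beta) + mean_ab(y, alpha, beta)) / 2 + 1e-9:
            return True
    return False

print(violates_convexity(1.0, 2.0))  # False: alpha <= beta, the mean is Jensen convex
print(violates_convexity(2.0, 1.0))  # True:  alpha > beta destroys Jensen convexity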

3 The case of Bajraktarević means

In what follows, the spaces of k times continuously differentiable functions and k times continuously differentiable functions with a nonvanishing first derivative (which are defined on the open interval I) will be denoted by \(\mathscr {C}^k(I)\) and \(\mathscr {C}^k_{\#}(I)\), respectively.

Theorem 3.1

Let \(f:I\rightarrow \mathbb {R}\) be a strictly monotone and continuous function and \(p:I\rightarrow \mathbb {R}_+\) be a positive function. Then the following conditions are equivalent to each other:

  1. (i)

The Bajraktarević mean \(\mathscr {B}_{f,p}\) is Jensen convex.

  2. (ii)

f is differentiable with a nonvanishing first derivative and the mapping \(B_{f,p}:I^2\rightarrow \mathbb {R}\) defined by

    $$\begin{aligned} {B_{f,p}(x,u):=\frac{p(x)(f(x)-f(u))}{p(u)f'(u)}} \end{aligned}$$

    is convex on \(I^2\).

  3. (iii)

\(f\in \mathscr {C}^2_{\#}(I)\), \(p\in \mathscr {C}^1(I)\), and for all \(x,y,u,v\in I\),

$$\begin{aligned} \frac{p(y)(f(y)-f(v))}{p(v)f'(v)}\ge {}&\frac{p(x)(f(x)-f(u))}{p(u)f'(u)} +\frac{(pf)'(x)-f(u)p'(x)}{(pf')(u)}(y-x)\\&+p(x)\frac{(f(u)-f(x))\cdot (pf')'(u)-(pf')(u)\cdot f'(u)}{(pf')(u)^2}(v-u). \end{aligned}$$
  4. (iv)

\(f\in \mathscr {C}^2_{\#}(I)\), \(p\in \mathscr {C}^1(I)\), and for all \(x,y,u,v\in I\),

$$\begin{aligned}&\bigg (\frac{(pf)'(x)-f(u)p'(x)}{(pf')(u)}-\frac{(pf)'(y)-f(v)p'(y)}{(pf')(v)}\bigg )(x-y)\\&\quad +\bigg (p(x)\frac{(f(u)-f(x))\cdot (pf')'(u)-(pf')(u)\cdot f'(u)}{(pf')(u)^2}\\&\quad \qquad -p(y)\frac{(f(v)-f(y))\cdot (pf')'(v)-(pf')(v)\cdot f'(v)}{(pf')(v)^2}\bigg )(u-v)\ge 0. \end{aligned}$$

Proof

Without loss of generality, we may assume that f is increasing. Define the quasideviation \(E:I^2\rightarrow \mathbb {R}\) by \(E(x,u):=p(x)(f(x)-f(u))\). Then we have that \(\mathscr {D}_E=\mathscr {B}_{f,p}\).

Assume that \(\mathscr {B}_{f,p}\) is Jensen convex. Then, according to assertion (b) of Theorem 2.2, we get that E is convex in its first variable. That is, for all \(u\in I\), the function \(pf-f(u)p\) is convex and hence it is nearly differentiable. Let u, v be distinct elements of I, then

$$\begin{aligned}{ p=\frac{(pf-f(u)p)-(pf-f(v)p)}{f(v)-f(u)}, } \end{aligned}$$

which shows that p is also nearly differentiable. We also have that

$$\begin{aligned} { f=\frac{pf-f(u)p}{p}+f(u), } \end{aligned}$$

which shows that f is also nearly differentiable.

In view of these properties, for all \(u\in I\), we can obtain

$$\begin{aligned} \partial _1^+E(u,u)&=(pf-f(u)p)_+'(u)\\&=p_+'(u)f(u)+p(u)f_+'(u)-f(u)p_+'(u)=p(u)f_+'(u). \end{aligned}$$

Similarly,

$$\begin{aligned}{ \partial _1^-E(u,u)=p(u)f_-'(u). } \end{aligned}$$

By assertion (d) of Theorem 2.2, the ratio function \(u\mapsto \frac{\partial _1^+E(u,u)}{\partial _1^-E(u,u)}\) is constant, therefore, \(f_+'=cf_-'\) for some constant \(c\in \mathbb {R}\). On the other hand, the one-sided derivatives of f coincide except at countably many points, hence, \(c=1\), which yields that f is differentiable everywhere with a positive derivative. Assertion (ii) of Theorem 2.2 now gives us that the function \(B_{f,p}\) defined in assertion (ii) is convex, since \(E^+=B_{f,p}\). Conversely, if (ii) holds, then \(\partial _1^+E(u,u)=p(u)f'(u)>0\) and \(E^+=B_{f,p}\) is convex, so condition (ii) of Theorem 2.2 yields (i). Thus, we have proved the equivalence of assertions (i) and (ii).

Assume now that (ii) holds. It follows from the convexity of \(B_{f,p}\) that, for all \(x\in I\), the map \(u\mapsto B_{f,p}(x,u)\) is convex. Therefore, it is nearly differentiable. For \(x,u\in I\), we have that

$$\begin{aligned} { f'(u)=\frac{p(x)(f(x)-f(u))}{p(u)B_{f,p}(x,u)}. } \end{aligned}$$

For any fixed \(x\in I\), the function on the right hand side is nearly differentiable with respect to u. Consequently, \(f'\) is also nearly differentiable, in particular, \(f'\) is continuous.

Using the convexity of \(B_{f,p}\) again, we can obtain that there exist two functions \(r,s:I^2\rightarrow \mathbb {R}\) such that

$$\begin{aligned} { B_{f,p}(y,v)-B_{f,p}(x,u)\ge r(x,u)(y-x)+s(x,u)(v-u), \quad x,y,u,v\in I. }\nonumber \\ \end{aligned}$$
(3.1)

After substituting \(y:=x\), inequality (3.1) implies that

$$\begin{aligned}{ \frac{(f(x)-f(v))((pf')(u)-(pf')(v))-(pf')(v)(f(v)-f(u))}{(pf')(v)(pf')(u)}\ge \frac{s(x,u)}{p(x)}(v-u). } \end{aligned}$$

If \(v>u\), then dividing the inequality by \((v-u)\) side by side and taking the right limit as \(v\downarrow u\), we get

$$\begin{aligned}{ \frac{(f(u)-f(x))\cdot (pf')_+'(u)-(pf')(u)\cdot f'(u)}{(pf')(u)^2} \ge \frac{s(x,u)}{p(x)}, \qquad x,u\in I. } \end{aligned}$$

Repeating the above argument for \(v<u\), we get that

$$\begin{aligned}{ \frac{s(x,u)}{p(x)} \ge \frac{(f(u)-f(x))\cdot (pf')_-'(u)-(pf')(u)\cdot f'(u)}{(pf')(u)^2}, \qquad x,u\in I. } \end{aligned}$$

Combining the above two inequalities, it follows that

$$\begin{aligned}{ (f(u)-f(x))\cdot [(pf')_+'(u)-(pf')_-'(u)]\ge 0, \qquad x,u\in I. } \end{aligned}$$

Since x is arbitrary, this inequality can hold only if \((pf')_+'(u)=(pf')_-'(u)\) for all \(u\in I\), which proves that \(pf'\) is differentiable everywhere. It follows from this property that, for all \(x,u\in I\),

$$\begin{aligned} { s(x,u)=p(x)\frac{(f(u)-f(x))\cdot (pf')'(u)-(pf')(u)\cdot f'(u)}{(pf')(u)^2}=\partial _2B_{f,p}(x,u). }\nonumber \\ \end{aligned}$$
(3.2)

Now taking (3.1) for \(v:=u\), we get

$$\begin{aligned}{ \frac{p(y)(f(y)-f(u))-p(x)(f(x)-f(u))}{(pf')(u)}\ge r(x,u)(y-x), \qquad x,y,u\in I. } \end{aligned}$$

Therefore, for all \(x,y,u\in I\) with \(y>x\) we obtain

$$\begin{aligned} r(x,u)&\le \frac{p(y)(f(y)-f(u))-p(x)(f(x)-f(u))}{(pf')(u) (y-x)}\\&=\frac{1}{(pf')(u)}\bigg (\frac{(pf)(y)-(pf)(x)}{y-x}-f(u)\frac{p(y)-p(x)}{y-x}\bigg ). \end{aligned}$$

Since both p and f are nearly differentiable, we can take the limit \(y \searrow x\) to obtain

$$\begin{aligned} { \frac{(pf)'_+(x)-f(u)p'_+(x)}{(pf')(u)}\ge r(x,u), \qquad x,u \in I. } \end{aligned}$$
(3.3)

Similarly, for all \(x,y,u \in I\) with \(y<x\), one gets

$$\begin{aligned} r(x,u)&\ge \dfrac{1}{(pf')(u)}\bigg (\dfrac{(pf)(y)-(pf)(x)}{y-x}-f(u)\dfrac{p(y)-p(x)}{y-x}\bigg ), \end{aligned}$$

which, in the limiting case as \(y \nearrow x\), leads us to the inequality

$$\begin{aligned} { r(x,u)\ge \frac{(pf)'_-(x)-f(u)p'_-(x)}{(pf')(u)}, \qquad x,u \in I. } \end{aligned}$$
(3.4)

From the inequalities (3.3) and (3.4), we can conclude that

$$\begin{aligned} \frac{(pf)'_+(x)-f(u)p'_+(x)}{(pf')(u)}&\ge \dfrac{(pf)'_-(x)-f(u)p'_-(x)}{(pf')(u)}, \qquad x,u \in I. \end{aligned}$$

We know that \((pf')(u) >0\), thus we can obtain that

$$\begin{aligned} (pf)'_+(x)-f(u)p'_+(x)&\ge (pf)'_-(x)-f(u)p'_-(x), \qquad x,u \in I. \end{aligned}$$

By the differentiability of f, for all \(x,u \in I\), it follows that

$$\begin{aligned}{ p'_+(x)f(x)+p(x)f'(x)-f(u)p'_+(x)\ge p'_-(x)f(x)+p(x)f'(x)-f(u)p'_-(x), } \end{aligned}$$

which can equivalently be rewritten as

$$\begin{aligned}{ (f(x)-f(u))(p'_+(x)-p'_-(x)) \ge 0, \qquad x,u \in I. } \end{aligned}$$

Since u is arbitrary and f is strictly monotone, this inequality can only hold if \(p'_+(x)-p'_-(x)=0\) for all \(x \in I\), which yields the differentiability of p on the interval I. However, we have already proved that \(pf'\) is differentiable, therefore \(f'=(pf')/p\) is differentiable, i.e., f must be twice differentiable.

Therefore, the upper and lower bounds for the function r given by (3.3) and (3.4) are equal to each other, whence we get that

$$\begin{aligned} { r(x,u)=\frac{(pf)'(x)-f(u)p'(x)}{(pf')(u)}=\partial _1B_{f,p}(x,u), \qquad x,u \in I. } \end{aligned}$$
(3.5)

The differentiability of p and the twice differentiability of f imply that \(B_{f,p}\) is differentiable. On the other hand, it is well known that the partial derivatives of a differentiable convex function are continuous. Therefore, \(\partial _1B_{f,p}\) and \(\partial _2B_{f,p}\) are continuous over \(I^2\).

In view of formula (3.5), for all \(x,u\in I\) with \(x\ne u\), we can obtain that

$$\begin{aligned}{ p'(x)=\frac{\partial _1B_{f,p}(x,u)(pf')(u)-(pf')(x)}{f(x)-f(u)}. } \end{aligned}$$

This shows that \(p'\) is continuous at every point different from u. But, since u was an arbitrary element of I, we get that \(p'\) is continuous on I and hence p belongs to \(\mathscr {C}^1(I)\).

Using formula (3.2), for all \(x,u\in I\) with \(x\ne u\), we can get that

$$\begin{aligned}{ f''(u)=\frac{1}{p(u)}\bigg (\frac{(pf')(u)^2\partial _2 B_{f,p}(x,u)+p(x)(pf')(u)f'(u)}{p(x) (f(u)-f(x))}-(p'f')(u)\bigg ), } \end{aligned}$$

which shows that \(f''\) is continuous at every point different from x. Since x was arbitrary, this implies that \(f''\) is continuous on I and hence f belongs to \(\mathscr {C}^2_{\#}(I)\).

Now, with r and s given by (3.5) and (3.2), the inequality (3.1) can be seen to be equivalent to condition (iii), hence the implication (ii)\(\Rightarrow \)(iii) is verified. On the other hand, if (iii) holds, then \(B_{f,p}\) is the pointwise supremum of affine functions and hence it is convex, i.e., (ii) holds as well. The last condition (iv) expresses the monotonicity of the gradient of \(B_{f,p}\), i.e., for all \(x,y,u,v\in I\), the inequality

$$\begin{aligned}{ (\partial _1 B_{f,p}(x,u)\!-\!\partial _1 B_{f,p}(y,v))(x\!-\!y) +(\partial _2 B_{f,p}(x,u)\!-\!\partial _2 B_{f,p}(y,v))(u\!-\!v)\ge 0 } \end{aligned}$$

holds, which is also known to be equivalent to the convexity of \(B_{f,p}\). \(\square \)
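
Condition (iv) is convenient for numerical experiments because it involves only the first-order partial derivatives of \(B_{f,p}\). As an illustration (a sketch with our own helper names), take \(f(x):=\log (x)\) and \(p(x):=x\) on \(I=(0,\infty )\), i.e., the Gini mean \(\mathscr {G}_{1,1}\); then \(B_{f,p}(x,u)=x\log (x/u)\), \(\partial _1B_{f,p}(x,u)=\log (x/u)+1\) and \(\partial _2B_{f,p}(x,u)=-x/u\), and the gradient monotonicity can be sampled:

import math, random

def grad_B(x, u):
    # gradient of B_{f,p}(x,u) = x*log(x/u) for f = log, p = id
    return (math.log(x / u) + 1.0, -x / u)

random.seed(0)
ok = True
for _ in range(50000):
    x, u, y, v = (random.uniform(0.1, 10) for _ in range(4))
    gx, gu = grad_B(x, u)
    hy, hv = grad_B(y, v)
    if (gx - hy) * (x - y) + (gu - hv) * (u - v) < -1e-9:
        ok = False
print(ok)  # True: B_{f,p} is convex, hence G_{1,1} is Jensen convex (cf. Theorem C)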

4 Convexity of Gini means in subintervals

For \(q,r\in \mathbb {R}\), we need to introduce the following notations.

$$\begin{aligned}{ \gamma _{q,r}(t):= {\left\{ \begin{array}{ll} \dfrac{t^q-t^r}{q-r} &{}\hbox {if } q\ne r,\\ t^q\log t &{}\hbox {if } q=r, \end{array}\right. }\qquad t\in \mathbb {R}_+, } \end{aligned}$$

and

$$\begin{aligned}{ \beta _{q,r}:= {\left\{ \begin{array}{ll} \bigg (\dfrac{q(q-1)}{r(r-1)}\bigg )^{\frac{1}{q-r}} &{}\hbox {if } q\ne r \hbox { and } qr(q-1)(r-1)>0,\\ \exp \bigg (\dfrac{1}{q}+\dfrac{1}{q-1}\bigg ) &{}\hbox {if } q=r \hbox { and } q(q-1)\ne 0. \end{array}\right. } } \end{aligned}$$

Theorem 4.1

Let \(q,r\in \mathbb {R}\), \(0<a<b<\infty \). Then the following three assertions are equivalent to each other.

  1. (i)

The mean \(\mathscr {G}_{q,r}\) is Jensen convex on the interval \((a,b)\).

  2. (ii)

    The function \(\gamma _{q,r}\) is convex on the interval \(\big [\frac{a}{b},\frac{b}{a}\big ]\).

  3. (iii)

    One of the following conditions is valid:

    1. (1)

      \(0\le \min (q,r)\le 1\le \max (q,r)\);

    2. (2)

      \(\max (q,r)<1\le q+r\) and \(\beta _{q,r}\le \frac{a}{b}\);

    3. (3)

      \(\min (q,r)\le 0\), \(1\le q+r\) and \(\beta _{q,r}\ge \frac{b}{a}\);

    4. (4)

      \(1\le \min (q,r)\) and \(\beta _{q,r}\ge \frac{b}{a}\).

Proof

Let \(r<q\) in the subsequent argument. The cases \(q=r\) and \(q<r\) can be dealt with analogously and therefore, they are left to the reader.

Define \(f(x):=x^{q-r}\) and \(p(x):=x^r\) for \(x\in \mathbb {R}_+\). Then the Bajraktarević mean \(\mathscr {B}_{f,p}\) equals the Gini mean \(\mathscr {G}_{q,r}\). Therefore, to characterize the Jensen convexity of \(\mathscr {G}_{q,r}\) on \((a,b)\), we need to describe the Jensen convexity of \(\mathscr {B}_{f,p}\) on \((a,b)\). According to Theorem E or to our Theorem 3.1, this property is equivalent to the convexity of the following mapping

$$\begin{aligned} (a,b)^2\ni (x,u)\mapsto \frac{p(x)(f(x)-f(u))}{p(u)f'(u)}&=\frac{x^r(x^{q-r}-u^{q-r})}{(q-r)u^{r}u^{q-r-1}}\\&=\frac{u}{q-r}\bigg (\Big (\frac{x}{u}\Big )^q-\Big (\frac{x}{u}\Big )^r\bigg )=u\,\gamma _{q,r}\Big (\frac{x}{u}\Big ). \end{aligned}$$

That is, \(\mathscr {G}_{q,r}\) is Jensen convex on \((a,b)\) if and only if, for all \(x,u,y,v\in (a,b)\) and \(t\in [0,1]\),

$$\begin{aligned}{ (tu+(1-t)v)\,\gamma _{q,r}\Big (\frac{tx+(1-t)y}{tu+(1-t)v}\Big ) \le tu\,\gamma _{q,r}\Big (\frac{x}{u}\Big ) +(1-t)v\,\gamma _{q,r}\Big (\frac{y}{v}\Big ). } \end{aligned}$$

This inequality is equivalent to

$$\begin{aligned}&\gamma _{q,r}\Big (\frac{tu}{tu+(1-t)v}\frac{x}{u}+\frac{(1-t)v}{tu+(1-t)v}\frac{y}{v}\Big )\\&\quad \le \frac{tu}{tu+(1-t)v}\gamma _{q,r}\Big (\frac{x}{u}\Big ) +\frac{(1-t)v}{tu+(1-t)v}\gamma _{q,r}\Big (\frac{y}{v}\Big ). \end{aligned}$$

With the substitution \(w:=\frac{x}{u}\), \(z:=\frac{y}{v}\) and \(\lambda :=\frac{tu}{tu+(1-t)v}\) one can easily see that the above inequality holds for all \(x,u,y,v\in (a,b)\) and \(t\in [0,1]\) if and only if

$$\begin{aligned}{ \gamma _{q,r}(\lambda w+(1-\lambda )z) \le \lambda \gamma _{q,r}(w)+(1-\lambda )\gamma _{q,r}(z) } \end{aligned}$$

is valid for all \(w,z\in \big (\frac{a}{b},\frac{b}{a}\big )\) and \(\lambda \in [0,1]\), that is, if \(\gamma _{q,r}\) is convex over the interval \(\big (\frac{a}{b},\frac{b}{a}\big )\).

Thus, we have proved that assertion (i) is equivalent to assertion (ii).

The convexity of \(\gamma _{q,r}\) over \(\big (\frac{a}{b},\frac{b}{a}\big )\) is valid if and only if \(\gamma _{q,r}''(t)\ge 0\) for all \(t\in \big [\frac{a}{b},\frac{b}{a}\big ]\), i.e., if

$$\begin{aligned} { q(q-1)t^{q-2}\ge r(r-1)t^{r-2}, \qquad t\in \big [\tfrac{a}{b},\tfrac{b}{a}\big ]. } \end{aligned}$$
(4.1)

Substituting \(t=1\), we get that \((q-r)(q+r-1)\ge 0\), which implies that \(1\le q+r\).

Then we have the following four possibilities for the location of (q,r) (keeping in mind that \(r<q\)).

$$\begin{aligned}&(1)\quad 0\le r\le 1\le q;\qquad (2)\quad r<q<1\le q+r;\\&(3)\quad r<0 \hbox { and } 1\le q+r;\qquad (4)\quad 1<r<q. \end{aligned}$$

In case (1), the inequality (4.1) holds for all \(t>0\), because the left hand side is nonnegative and the right hand side is nonpositive; moreover, condition (1) is equivalent to (iii)(1).

In case (2), we have that \(r,q\in (0,1)\), therefore both sides of the inequality (4.1) are negative, and hence it is equivalent to the following inequality

$$\begin{aligned}{ \beta _{q,r}\le \frac{1}{t}, \qquad t\in \big [\tfrac{a}{b},\tfrac{b}{a}\big ], } \end{aligned}$$

which turns out to be equivalent to (iii)(2).

In cases (3) and (4), we can see that both sides of the inequality (4.1) are positive and it is equivalent to the following inequality

$$\begin{aligned}{ \beta _{q,r}\ge \frac{1}{t}, \qquad t\in \big [\tfrac{a}{b},\tfrac{b}{a}\big ], } \end{aligned}$$

which turns out to be equivalent to (iii)(3) and (iii)(4), respectively. \(\square \)
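
The equivalences of Theorem 4.1 can be cross-checked numerically: the sketch below (our code, not part of the proof) compares the sign of \(\gamma ''_{q,r}\) on \(\big [\frac{a}{b},\frac{b}{a}\big ]\) with a randomized midpoint-convexity test for \(\mathscr {G}_{q,r}\) on (a,b), for \(q=3\), \(r=2\), where \(\beta _{q,r}=3\):

import math, random

def gini2(x1, x2, q, r):
    return ((x1 ** q + x2 ** q) / (x1 ** r + x2 ** r)) ** (1.0 / (q - r))

def gamma_pp_nonneg(q, r, a, b, steps=1000):
    # gamma''(t) = (q(q-1)t^{q-2} - r(r-1)t^{r-2})/(q - r), assuming q > r
    for i in range(steps + 1):
        t = a / b + (b / a - a / b) * i / steps
        if (q * (q - 1) * t ** (q - 2) - r * (r - 1) * t ** (r - 2)) / (q - r) < -1e-12:
            return False
    return True

def jensen_convex_sampled(q, r, a, b, trials=20000):
    random.seed(0)
    for _ in range(trials):
        x1, x2, y1, y2 = (random.uniform(a, b) for _ in range(4))
        lhs = gini2((x1 + y1) / 2, (x2 + y2) / 2, q, r)
        if lhs > (gini2(x1, x2, q, r) + gini2(y1, y2, q, r)) / 2 + 1e-10:
            return False
    return True

# b/a = 2.9 <= beta_{3,2} = 3: condition (iii)(4) holds
print(gamma_pp_nonneg(3, 2, 1.0, 2.9), jensen_convex_sampled(3, 2, 1.0, 2.9))  # True True
# b/a = 8 > 3: condition (iii)(4) fails
print(gamma_pp_nonneg(3, 2, 1.0, 8.0), jensen_convex_sampled(3, 2, 1.0, 8.0))  # False False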

Corollary 4.2

Let \(q,r\in \mathbb {R}\) such that either \(q+r<1\), or \(q+r=1\) and \(\{q,r\}\ne \{0,1\}\). Then there is no \(0<a<b<\infty \) such that \(\mathscr {G}_{q,r}\) is Jensen convex on the interval \((a,b)\).

Proof

Assume that \(\mathscr {G}_{q,r}\) is Jensen convex on the interval \((a,b)\). Then, according to condition (iii) of the previous theorem, we have that \(q+r\ge 1\). Therefore, \(q+r<1\) cannot be valid and thus \(q+r=1\) must hold. If (iii)(1) were valid, then \(\{q,r\}=\{0,1\}\) would follow, which was excluded by the assumption of this corollary. Thus one of the subconditions (2), (3) or (4) of condition (iii) has to be valid. Due to the equality \(q+r=1\), it follows that \(\beta _{q,r}=1\), hence the inequalities related to \(\beta _{q,r}\) imply that \(b\le a\), which is a contradiction. \(\square \)

We note that one could also prove that if \(q+r=1\) and \(\{q,r\}\ne \{0,1\}\), then there is no \(0<a<b<\infty \) such that \(\mathscr {G}_{q,r}\) is Jensen concave on the interval \((a,b)\).