1 Introduction and preliminaries

A real-valued function f on a convex set I is called convex if the epigraph of f is a convex set. Equivalently, a function f is convex if

$$f\bigl(\mu x + (1-\mu)y\bigr) \leq\mu f(x) + (1-\mu) f(y) $$

for all \(x, y \in I\) and \(\mu\in[0, 1]\).

Geometrically, if P, Q, and R are three points on the graph of a convex function such that Q lies between P and R, then

$$ \operatorname{slope} PQ \leq \operatorname{slope} PR \leq \operatorname{slope} QR. $$
(1)

A generalization of this inequality is the well-known Jensen inequality (see, e.g., [2]). Often called the king of inequalities, it states that if a function \(f : I \rightarrow\mathbb{R}\) is convex, then for all \(x_{1}, x_{2},\ldots, x_{n} \in I\) and nonnegative real numbers \(\mu_{1}, \mu_{2}, \ldots, \mu_{n}\) such that \(\sum_{i=1}^{n} \mu_{i} = 1\), we have

$$f \Biggl(\sum_{i=1}^{n} \mu_{i} x_{i} \Biggr) \leq\sum_{i=1}^{n} \mu_{i} f(x_{i}). $$
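
For instance, with \(n=2\), \(\mu_{1}=\mu_{2}=\frac{1}{2}\), and \(f(x)=x^{2}\), the inequality reduces to

$$\biggl(\frac{x_{1}+x_{2}}{2} \biggr)^{2} \leq\frac{x_{1}^{2}+x_{2}^{2}}{2}, $$

which is equivalent to \((x_{1}-x_{2})^{2}\geq0\).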

The following theorem is a consequence of the Jensen inequality proved by Pečarić and Janić [1].

Theorem 1.1

Let \(f:[0,\infty)\rightarrow\mathbb{R}\) be a nondecreasing convex function, and let \((V,\|\cdot\|)\) be a normed space. Then for all \(x_{i}\in V\) and \(p_{i}\geq0\) (\(1\leq i \leq n\)) such that \(P_{n}=\sum^{n}_{i=1}p_{i}>0\), we have

$$ f \Biggl(\frac{1}{P_{n}}\Biggl\Vert \sum ^{n}_{i=1}p_{i}x_{i}\Biggr\Vert \Biggr)\leq\frac{1}{P_{n}}\sum^{n}_{i=1}p_{i}f \bigl(\Vert x_{i}\Vert \bigr). $$
(2)
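
As a quick numerical illustration of (2), one may take \(V=\mathbb{R}^{2}\) with the Euclidean norm, a few sample points and weights (chosen here only for illustration), and the nondecreasing convex function \(f(t)=e^{t}\). The following minimal sketch checks the inequality:

```python
# A minimal numerical sketch of inequality (2): V = R^2 with the Euclidean
# norm, sample points x_i and weights p_i, and f(t) = e^t (nondecreasing, convex).
import numpy as np

f = np.exp
x = np.array([[1.0, 2.0], [0.5, -1.0], [3.0, 0.0]])   # x_i in V = R^2 (sample data)
p = np.array([0.2, 0.5, 1.3])                          # p_i >= 0 with P_n > 0
P = p.sum()

lhs = f(np.linalg.norm((p[:, None] * x).sum(axis=0)) / P)      # f(||sum p_i x_i|| / P_n)
rhs = (p * f(np.linalg.norm(x, axis=1))).sum() / P             # (1/P_n) sum p_i f(||x_i||)
assert lhs <= rhs + 1e-12
print(lhs, rhs)
```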

In 1929, Bernstein [3] introduced the notion of exponential convexity; later, Widder [4] studied these functions as a subclass of convex functions on a given interval \((a,b)\). Some notable results related to exponential and logarithmic convexity can be found in [5–7]. Pečarić and Perić [8] introduced the concept of n-exponentially convex functions. For several recent results concerning n-exponential convexity, see [9–15]. Mercer [16, 17] gave two mean value theorems of the Lagrange and Cauchy types for the discrete Jensen inequality.

In the next section, we discuss the n-exponential convexity of the functional defined as the difference between the right-hand and left-hand sides of inequality (2). We deduce results about exponential convexity and log-convexity. In Section 3 we give mean value theorems of Lagrange and Cauchy types. Finally, we construct means with the Stolarsky property.

2 Exponential convexity

Let us recall some definitions and notions about n-exponentially convex functions (see [8]).

Definition 1

A real-valued function \(h : I \rightarrow\mathbb{R}\) on an open interval \(I \subset\mathbb{R}\) is called n-exponentially convex in the Jensen sense if

$$\sum_{j,k =1}^{n} b_{j} b_{k} h \biggl(\frac{x_{j} + x_{k}}{2} \biggr) \geq 0 $$

for all \(b_{i} \in\mathbb{R}\) and all \(x_{i} \in I\), \(i=1,\ldots,n\).

A real-valued function \(h : I \rightarrow\mathbb{R}\) is n-exponentially convex on I if it is n-exponentially convex in the Jensen sense and continuous on I.

Remark 2.1

  1. (i)

    From the definition it is obvious that the set of all n-exponentially convex functions on I is a convex cone.

  2. (ii)

    It is less obvious that a product of any two n-exponentially convex functions on I is again of the same type (see [18]).

  3. (iii)

    n-exponentially convex functions are invariant under admissible translations and scalings of the argument, that is, if \(x\mapsto f(x)\) is n-exponentially convex, then \(x\mapsto f(x-c)\) and \(x\mapsto f(x/\lambda)\) are also n-exponentially convex functions (on the correspondingly translated or scaled interval).

Definition 2

A real-valued function \(h : I \rightarrow\mathbb{R}\) is exponentially convex in the Jensen sense if it is n-exponentially convex in the Jensen sense for all \(n\in\mathbb{N}\).

A real-valued function \(h : I \rightarrow\mathbb{R}\) is exponentially convex if it is exponentially convex in the Jensen sense and continuous.

Remark 2.2

Note that a positive real-valued function \(h : I \rightarrow\mathbb{R}\) is log-convex in the Jensen sense if and only if it is 2-exponentially convex in the Jensen sense, that is,

$$b_{1}^{2} h(x)+ 2 b_{1}b_{2} h \biggl(\frac{x+y}{2} \biggr) + b_{2}^{2} h(y) \geq0 $$

for all \(b_{1}, b_{2} \in\mathbb{R}\) and \(x, y \in I\).

If h is 2-exponentially convex, then it is log-convex; the converse holds if h is also continuous. Note that n-exponentially convex functions are not exponentially convex in general; for an example, see [18].

We will use the following basic inequality of log-convex functions.

Lemma 2.3

If \(\Phi: I \rightarrow\mathbb{R}\) is log-convex, then for \(r< s< t\) (\(r,s,t\in I\)),

$$ \bigl(\Phi(s)\bigr)^{t-r}\leq\bigl(\Phi(r) \bigr)^{t-s} \bigl(\Phi(t)\bigr)^{s-r}. $$
(3)

Proof

See [19], p.4. □
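
As a quick sanity check of (3), the following sketch takes the log-convex function \(\Phi(x)=e^{x^{2}}\) (its logarithm \(x^{2}\) is convex) and arbitrarily chosen \(r<s<t\):

```python
# A small numerical sketch of inequality (3) for the log-convex
# function Phi(x) = exp(x^2); r < s < t are chosen arbitrarily.
from math import exp

Phi = lambda x: exp(x ** 2)
r, s, t = 0.3, 1.1, 2.4
assert Phi(s) ** (t - r) <= Phi(r) ** (t - s) * Phi(t) ** (s - r)
```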

Let us give a few basic examples of exponentially convex functions; for details, see [18].

Example 2.4

  1. (i)

    \(f(x)=c\) is exponentially convex on \(\mathbb {R}\) for any \(c\geq0\).

  2. (ii)

    \(f(x)=e^{\alpha x}\) is exponentially convex on \(\mathbb {R}\) for any \(\alpha\in \mathbb {R}\).

  3. (iii)

    \(f(x)=x^{-\alpha}\) is exponentially convex on \((0,\infty)\) for any \(\alpha>0\).
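
Each of these examples can be tested directly against Definition 1: for sample points \(x_{1},\ldots,x_{n}\) in the domain, the matrix \([f((x_{j}+x_{k})/2)]_{j,k=1}^{n}\) must be positive semidefinite. A minimal numerical sketch, with points and parameter values chosen only for illustration:

```python
# For each example function, the matrix [ f((x_j + x_k)/2) ]_{j,k}
# should be positive semidefinite (the Jensen-sense condition of Definition 1).
import numpy as np

xs = np.array([0.4, 1.0, 1.7, 2.5])           # sample points in the domain
examples = [
    lambda x: 3.0 * np.ones_like(x),          # (i)   constant c >= 0
    lambda x: np.exp(-1.5 * x),               # (ii)  e^{alpha x}, alpha = -1.5
    lambda x: x ** (-2.0),                    # (iii) x^{-alpha}, alpha = 2, on (0, inf)
]
for f in examples:
    G = f((xs[:, None] + xs[None, :]) / 2.0)  # Gram-type matrix at the midpoints
    assert np.linalg.eigvalsh(G).min() >= -1e-10
```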

Lemma 2.5

  1. (i)

    For \(p>0\), let \(\varphi_{p}:[0,\infty )\rightarrow \mathbb {R}\) be defined by

    $$ \varphi_{p}(x)=\frac{e^{px^{2}}}{p^{2}}. $$

    Then \(p\mapsto\varphi_{p}(x)\), \(p\mapsto\frac{d}{dx}\varphi_{p}(x)\), and \(p\mapsto\frac{d^{2}}{dx^{2}}\varphi_{p}(x)\) are exponentially convex on \((0, \infty)\) for each \(x\in[0, \infty)\).

  2. (ii)

    For \(p>1\), let \(\phi_{p}:[0,\infty )\rightarrow \mathbb {R}\) be defined by

    $$ \phi_{p}(x)=\frac{x^{p}}{p(p-1)}. $$

    Then \(p\mapsto\phi_{p}(x)\), \(p\mapsto\frac{d}{dx}\phi_{p}(x)\), and \(p\mapsto\frac{d^{2}}{dx^{2}}\phi_{p}(x)\) are exponentially convex on \((1, \infty)\) for each \(x\in[0, \infty)\).

Proof

(i) follows from parts (ii) and (iii) of Example 2.4 and Remark 2.1.

(ii) follows by arguments similar to those in part (i), noting that \(x^{p}=e^{p\ln x}\). □
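
A numerical sanity check of part (i) can be carried out in the same spirit: for a fixed x, the matrices built from \(p\mapsto\varphi_{p}(x)\) and from \(p\mapsto\frac{d^{2}}{dx^{2}}\varphi_{p}(x)=\frac{2e^{px^{2}}(1+2px^{2})}{p}\) at the midpoints \((p_{j}+p_{k})/2\) should be positive semidefinite. A sketch with arbitrarily chosen x and parameters:

```python
# Jensen-sense check of Lemma 2.5(i): for fixed x, the midpoint matrices of
# p -> phi_p(x) = e^{p x^2}/p^2 and of its second x-derivative
# p -> 2 e^{p x^2} (1 + 2 p x^2) / p should be positive semidefinite.
import numpy as np

x = 1.3
ps = np.array([0.5, 1.0, 2.0, 3.5])
P = (ps[:, None] + ps[None, :]) / 2.0                         # midpoints (p_j + p_k)/2

phi = np.exp(P * x ** 2) / P ** 2                             # phi_p(x)
d2phi = 2.0 * np.exp(P * x ** 2) * (1 + 2 * P * x ** 2) / P   # (d^2/dx^2) phi_p(x)

for G in (phi, d2phi):
    assert np.linalg.eigvalsh(G).min() >= -1e-8
```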

The next simple lemma will be useful in our applications.

Lemma 2.6

Let \(f : [0, \infty) \rightarrow\mathbb{R}\) be a differentiable convex function with \(f'(0) = 0\). Then f is a nondecreasing convex function.

Proof

Since f is convex and differentiable, \(f'\) is nondecreasing on \([0,\infty)\). As \(f'(0)=0\), we get \(f'(x)\geq0\) for all \(x\geq0\); hence f is a nondecreasing convex function. □

We adopt an elegant method (see [18]) for constructing n-exponentially convex and exponentially convex functions.

Consider the following functional acting on nondecreasing convex functions:

$$ f\mapsto\Omega(f) = \frac{1}{P_{n}}\sum ^{n}_{i=1}p_{i} f\bigl(\Vert x_{i}\Vert \bigr)- f \Biggl(\frac{1}{P_{n}}\Biggl\Vert \sum ^{n}_{i=1}p_{i}x_{i} \Biggr\Vert \Biggr). $$
(4)

From Theorem 1.1 it follows that \(\Omega(f) \geq0\).
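
For concreteness, the following minimal sketch implements the functional (4), assuming \(V=\mathbb{R}^{d}\) with the Euclidean norm (the data are chosen only for illustration), and checks its nonnegativity for the nondecreasing convex function \(f(t)=t^{2}\):

```python
# A minimal sketch of the functional (4) for V = R^d with the Euclidean norm.
import numpy as np

def omega(f, x, p):
    """Omega(f) for points x[i] in R^d and weights p[i] >= 0 with sum(p) > 0."""
    P = p.sum()
    inner = np.linalg.norm((p[:, None] * x).sum(axis=0)) / P   # ||sum p_i x_i|| / P_n
    return (p * f(np.linalg.norm(x, axis=1))).sum() / P - f(inner)

x = np.array([[1.0, 0.0], [2.0, 2.0], [-1.0, 3.0]])   # sample points
p = np.array([1.0, 0.5, 2.0])                          # sample weights
val = omega(lambda t: t ** 2, x, p)
assert val >= 0.0                                      # nonnegative by Theorem 1.1
print(val)
```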

Theorem 2.7

Let \(f\mapsto\Omega(f)\) be the linear functional defined by (4) and define \(\Phi_{1}:(0,\infty)\rightarrow \mathbb {R}\) and \(\Phi_{2}:(1,\infty)\rightarrow \mathbb {R}\) by

$$ \Phi_{1}(p)=\Omega(\varphi_{p}),\qquad \Phi_{2}(p)=\Omega(\phi_{p}), $$

where \(\varphi_{p}\) and \(\phi_{p}\) are defined in Lemma  2.5. Then:

  1. (i)

    The functions \(\Phi_{1}\) and \(\Phi_{2}\) are continuous on \((0,\infty)\) and \((1,\infty)\), respectively.

  2. (ii)

    If \(n\in \mathbb {N}\), \(p_{1},\ldots,p_{n}\in(0,\infty )\), and \(q_{1},\ldots,q_{n}\in(1,\infty)\), then the matrices

    $$\biggl[\Phi_{1} \biggl(\frac{p_{j}+p_{k}}{2} \biggr) \biggr] _{j,k=1}^{n}, \qquad \biggl[\Phi_{2} \biggl( \frac{q_{j}+q_{k}}{2} \biggr) \biggr]_{j,k=1}^{n} $$

    are positive semidefinite.

  3. (iii)

    The functions \(\Phi_{1}\) and \(\Phi_{2}\) are exponentially convex on \((0,\infty)\) and \((1,\infty)\), respectively.

  4. (iv)

    If \(p,q,r\in(0,\infty)\) are such that \(p< q< r\), then

    $$\begin{aligned}& \biggl( \frac{\sum_{i=1}^{n}p_{i}\exp(q\|x_{i}\| ^{2})-P_{n}\exp ( \frac{q\|\sum_{i=1}^{n}p_{i}x_{i}\|^{2}}{P_{n}^{2}} ) }{q^{2}P_{n}} \biggr)^{r-p} \\& \quad \leq \biggl( \frac{\sum_{i=1}^{n}p_{i}\exp(p\|x_{i}\| ^{2})-P_{n}\exp ( \frac{p\|\sum_{i=1}^{n}p_{i}x_{i}\|^{2}}{P_{n}^{2}} ) }{p^{2}P_{n}} \biggr)^{r-q} \\& \qquad {}\times \biggl( \frac{\sum_{i=1}^{n}p_{i}\exp(r\|x_{i}\| ^{2})-P_{n}\exp ( \frac{r\|\sum_{i=1}^{n}p_{i}x_{i}\|^{2}}{P_{n}^{2}} ) }{r^{2}P_{n}} \biggr)^{q-p}; \end{aligned}$$

    if \(u,v,w\in(1,\infty)\) are such that \(u< v< w\), then

    $$\begin{aligned}& \biggl( \frac{\sum_{i=1}^{n}p_{i}\|x_{i}\| ^{v}}{v(v-1)P_{n}}-\frac{ (\Vert \sum^{n}_{i=1}p_{i}x_{i}\Vert )^{v}}{v(v-1)P_{n}^{v}} \biggr)^{w-u} \\& \quad \leq \biggl( \frac{\sum_{i=1}^{n}p_{i}\|x_{i}\| ^{u}}{u(u-1)P_{n}}-\frac{ (\Vert \sum^{n}_{i=1}p_{i}x_{i}\Vert )^{u}}{u(u-1)P_{n}^{u}} \biggr)^{w-v} \\& \qquad {}\times \biggl( \frac{\sum_{i=1}^{n}p_{i}\|x_{i}\|^{w}}{w(w-1)P_{n}}-\frac{ (\Vert \sum^{n}_{i=1}p_{i}x_{i}\Vert )^{w}}{w(w-1)P_{n}^{w}} \biggr)^{v-u}. \end{aligned}$$

Proof

(i) The continuity of the functions \(p\mapsto\Phi_{i}(p)\), \(i=1,2\), is obvious.

(ii) Let \(n\in \mathbb {N}\), \(p_{1},\ldots,p_{n}\in(0,\infty)\), and \(\xi_{1},\ldots,\xi_{n}\in \mathbb {R}\). Define the auxiliary function \(\Psi_{1} : [0,\infty) \rightarrow \mathbb {R}\) by

$$ \Psi_{1}(x)=\sum_{j,k=1}^{n} \xi_{j}\xi_{k}\varphi_{\frac{p_{j}+p_{k}}{2}}(x). $$

Now \(\Psi_{1}'(0)=0\) since \(\frac{d}{dx}\varphi_{t}(0)=0\), and

$$\Psi_{1}''(x)=\sum _{j,k=1}^{n}\xi_{j}\xi_{k} \frac {d^{2}}{dx^{2}}\varphi_{\frac{p_{j}+p_{k}}{2}}(x) \geq0 $$

for \(x\geq0\) by Lemma 2.5, so that, by Lemma 2.6, \(\Psi_{1}\) is a nondecreasing convex function. Now Theorem 1.1 implies that \(\Omega(\Psi_{1})\geq0\), and by the linearity of Ω we have \(\Omega(\Psi_{1})=\sum_{j,k=1}^{n}\xi_{j}\xi_{k}\Phi_{1} (\frac{p_{j}+p_{k}}{2} )\geq0\). Since the \(\xi_{j}\) are arbitrary, this means that

$$\biggl[\Phi_{1} \biggl(\frac{p_{j}+p_{k}}{2} \biggr) \biggr]_{j,k=1}^{n} $$

is a positive semidefinite matrix.

Similarly, we can define an auxiliary function \(\Psi_{2}\) in terms of the functions \(\phi_{q}\) and conclude that

$$\biggl[\Phi_{2} \biggl(\frac{q_{j}+q_{k}}{2} \biggr) \biggr]_{j,k=1}^{n} $$

is a positive semidefinite matrix.

(iii) and (iv) are simple consequences of (i), (ii), and Lemma 2.3. □
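
The positive semidefiniteness in part (ii) can also be observed numerically. A hedged sketch for \(\Phi_{1}\), with \(V=\mathbb{R}^{2}\), the Euclidean norm, and sample data as in the earlier sketch (the weights of (4) are renamed w below to avoid a clash with the exponent p):

```python
# Numerical check of Theorem 2.7(ii) for Phi_1: the midpoint matrix
# [Phi_1((p_j + p_k)/2)]_{j,k} built from sample data should be PSD.
import numpy as np

x = np.array([[1.0, 0.0], [2.0, 2.0], [-1.0, 3.0]])   # sample points in R^2
w = np.array([1.0, 0.5, 2.0])                          # the weights p_i of (4)
W = w.sum()
norms = np.linalg.norm(x, axis=1)
inner = np.linalg.norm((w[:, None] * x).sum(axis=0)) / W

def Phi1(p):                                           # Phi_1(p) = Omega(varphi_p)
    return ((w * np.exp(p * norms ** 2)).sum() / W - np.exp(p * inner ** 2)) / p ** 2

params = np.array([0.2, 0.6, 1.1])                     # exponents in (0, infinity)
M = np.array([[Phi1((a + b) / 2) for b in params] for a in params])
assert np.linalg.eigvalsh(M).min() >= -1e-9
```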

The following application in probability theory is a consequence of the above theorem and gives an interesting connection between the moments of a discrete random variable.

Corollary 2.8

Let \((V,\|\cdot\|)\) be a normed space, and let X be a discrete random variable defined by \(P(X=x_{i})=p_{i}\), \(x_{i}\in V\), \(p_{i}>0\), \(i=1,\ldots,n\), \(\sum_{i=1}^{n}p_{i}=1\). Then, for \(1< j< k< m\),

$$\begin{aligned}& \bigl\{ \mathbb{E}\bigl[\Vert X\Vert ^{k}\bigr]-\bigl(\bigl\Vert \mathbb{E}[X]\bigr\Vert \bigr)^{k}\bigr\} ^{m-j} \\& \quad \leq C(j,k,m) \bigl\{ \mathbb{E}\bigl[\Vert X\Vert ^{j}\bigr]-\bigl(\bigl\Vert \mathbb{E}[X]\bigr\Vert \bigr)^{j}\bigr\} ^{m-k}\bigl\{ \mathbb {E}\bigl[\Vert X\Vert ^{m}\bigr]-\bigl(\bigl\Vert \mathbb{E}[X] \bigr\Vert \bigr)^{m}\bigr\} ^{k-j}, \end{aligned}$$

where

$$ C(j,k,m)=\frac{\binom{k}{2}^{m-j} }{\binom{j}{2}^{m-k}\binom{m}{2}^{k-j}}. $$
(5)
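
A small numerical sketch of Corollary 2.8, with a three-atom random variable in \(\mathbb{R}^{2}\) (Euclidean norm) and the exponents \(j,k,m=2,3,5\), all chosen only for illustration:

```python
# Checking the moment inequality of Corollary 2.8 on sample data.
import numpy as np
from math import comb

x = np.array([[1.0, 1.0], [0.0, 2.0], [3.0, -1.0]])   # atoms x_i in R^2
p = np.array([0.2, 0.3, 0.5])                          # probabilities, sum = 1
j, k, m = 2, 3, 5                                      # 1 < j < k < m

EX_norm = np.linalg.norm((p[:, None] * x).sum(axis=0))   # || E[X] ||
def D(r):                                                # E[||X||^r] - ||E[X]||^r
    return (p * np.linalg.norm(x, axis=1) ** r).sum() - EX_norm ** r

C = comb(k, 2) ** (m - j) / (comb(j, 2) ** (m - k) * comb(m, 2) ** (k - j))
assert D(k) ** (m - j) <= C * D(j) ** (m - k) * D(m) ** (k - j) + 1e-9
```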

Theorem 2.7 also serves as a model for the following more general result.

Theorem 2.9

Let \(I \subset\mathbb{R}\) be an open interval, and let \(\Gamma= \{\eta_{t} | t \in I\}\) be a family of continuous functions defined on \(J\subseteq [0,\infty) \) such that \(\frac{d}{dx}\eta_{t}(0)=0\) for all \(t\in I\) and \(t\mapsto\frac{d^{2}}{dx^{2}}\eta_{t}(x)\) is n-exponentially convex on I for every \(x\in J\). Consider the functional \(f\mapsto\Omega(f)\) given in (4). Then \(t \mapsto \Omega(\eta_{t})\) is an n-exponentially convex function on I.

Remark 2.10

The other features of Theorem 2.7 can easily be added to the previous theorem.

3 Mean value theorems

The following lemma will be very useful.

Lemma 3.1

Let \(f\in C^{2}([0,a])\) with \(f'(0) = 0\). Denote \(m=\inf_{t\in[0,a]} f''(t)\) and \(M= \sup_{ t\in[0,a]} f''(t)\). Then the functions \(f_{1}, f_{2}:[0,a]\rightarrow\mathbb{R}\) defined by

$$ \begin{aligned} &f_{1}(t)=\frac{M}{2}t^{2}-f(t), \\ &f_{2}(t)=f(t)-\frac{m}{2}t^{2} \end{aligned} $$
(6)

are convex and nondecreasing.

Proof

The functions \(f_{1}\), \(f_{2}\) satisfy the conditions of Lemma 2.6, and the result follows. □

Theorem 3.2

Let \((V,\|\cdot\|)\) be a normed space, let \(x_{i} \in V\) and \(p_{i}\geq0\) (\(i=1,2,\ldots, n\)) be such that \(P_{n}=\sum^{n}_{i=1}p_{i}>0\), and let \(f\in C^{2}([0,a])\) with \(f'(0) = 0\), where \(\max_{i}\|x_{i}\|< a\). Then there exists \(\xi\in[0,a]\) such that

$$ \frac{1}{P_{n}}\sum^{n}_{i=1}p_{i}f \bigl(\Vert x_{i}\Vert \bigr)-f \Biggl(\frac {1}{P_{n}}\Biggl\Vert \sum^{n}_{i=1}p_{i}x_{i} \Biggr\Vert \Biggr) =\varrho f''(\xi), $$
(7)

where

$$ \varrho=\frac{1}{2} \Biggl[\frac{1}{P_{n}}\sum ^{n}_{i=1}p_{i}\|x_{i}\| ^{2}- \Biggl(\frac{1}{P_{n}}\Biggl\Vert \sum ^{n}_{i=1}p_{i}x_{i}\Biggr\Vert \Biggr)^{2} \Biggr]. $$

Proof

Denote \(M=\max_{ t\in[0,a]} f''(t)\) and \(m=\min_{ t\in[0,a]} f''(t)\), which exist since \(f''\) is continuous on \([0,a]\). By Lemma 3.1 the functions \(f_{1}, f_{2}:[0,a]\rightarrow\mathbb{R}\) are convex and nondecreasing, so Theorem 1.1 gives \(\Omega(f_{1})\geq0\) and \(\Omega(f_{2})\geq0\), that is,

$$ \varrho m\leq \frac{1}{P_{n}}\sum^{n}_{i=1}p_{i}f \bigl(\Vert x_{i}\Vert \bigr)-f \Biggl(\frac {1}{P_{n}}\Biggl\Vert \sum^{n}_{i=1}p_{i}x_{i} \Biggr\Vert \Biggr)\leq \varrho M. $$
(8)

Now, if \(\varrho>0\), then dividing (8) by ϱ and applying the Bolzano intermediate value theorem to the continuous function \(f''\) yields \(\xi\in[0,a]\) satisfying (7); if \(\varrho=0\), then both sides of (8) vanish, and (7) holds for every \(\xi\in[0,a]\). □
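
A numerical sketch of Theorem 3.2 with \(f(t)=\cosh t-1\) (so that \(f'(0)=0\) and \(f''=\cosh\)), again taking \(V=\mathbb{R}^{2}\) with the Euclidean norm and sample data:

```python
# Checking Theorem 3.2: the value (left-hand side of (7))/rho must lie
# between m = min f'' = 1 and M = max f'' = cosh(a), so xi = arccosh(value).
import numpy as np

x = np.array([[1.0, 0.0], [0.5, 1.5], [-1.0, 2.0]])   # sample points
p = np.array([1.0, 2.0, 0.5])
P = p.sum()
a = np.linalg.norm(x, axis=1).max() + 0.1              # ensures max_i ||x_i|| < a

f = lambda t: np.cosh(t) - 1.0
norms = np.linalg.norm(x, axis=1)
inner = np.linalg.norm((p[:, None] * x).sum(axis=0)) / P

lhs = (p * f(norms)).sum() / P - f(inner)              # left-hand side of (7)
rho = 0.5 * ((p * norms ** 2).sum() / P - inner ** 2)  # the quantity rho
value = lhs / rho                                      # should equal f''(xi) = cosh(xi)
assert 1.0 <= value <= np.cosh(a)                      # m <= f''(xi) <= M on [0, a]
xi = np.arccosh(value)
print(xi, 0.0 <= xi <= a)
```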

Corollary 3.3

Let \((V,\|\cdot\|)\) be a normed space, let \(x_{i} \in V\) and \(p_{i}\geq0\) (\(i=1,2,\ldots, n\)) be such that \(P_{n}=\sum^{n}_{i=1}p_{i}>0\), and let \(f,g\in C^{2}([0,a])\) with \(f'(0) = g'(0)=0\), where \(\max_{i}\|x_{i}\|< a\). Then there exists \(\xi\in[0,a]\) such that

$$\begin{aligned}& g''(\xi) \Biggl[ \frac{1}{P_{n}}\sum ^{n}_{i=1}p_{i}f\bigl(\Vert x_{i}\Vert \bigr)-f \Biggl(\frac{1}{P_{n}}\Biggl\Vert \sum ^{n}_{i=1}p_{i}x_{i} \Biggr\Vert \Biggr) \Biggr] \\& \quad = f''(\xi) \Biggl[ \frac{1}{P_{n}}\sum^{n}_{i=1}p_{i}g \bigl(\Vert x_{i}\Vert \bigr)-g \Biggl(\frac{1}{P_{n}}\Biggl\Vert \sum^{n}_{i=1}p_{i}x_{i} \Biggr\Vert \Biggr) \Biggr]. \end{aligned}$$
(9)

Proof

Consider the auxiliary function \(k\in C^{2}([0,a])\) defined by \(k(x)=c_{1}f(x)-c_{2}g(x)\), where

$$ c_{1}=\frac{1}{P_{n}}\sum^{n}_{i=1}p_{i}g \bigl(\Vert x_{i}\Vert \bigr)-g \Biggl(\frac {1}{P_{n}}\Biggl\Vert \sum^{n}_{i=1}p_{i}x_{i} \Biggr\Vert \Biggr) $$
(10)

and

$$ c_{2}=\frac{1}{P_{n}}\sum^{n}_{i=1}p_{i}f \bigl(\Vert x_{i}\Vert \bigr)-f \Biggl(\frac {1}{P_{n}}\Biggl\Vert \sum^{n}_{i=1}p_{i}x_{i} \Biggr\Vert \Biggr). $$
(11)

It is obvious that \(\Omega(k)=0\). Further, since \(k'(0)=0\), from Theorem 3.2 it follows that there exists \(\xi\in[0,a]\) such that

$$ \frac{1}{P_{n}}\sum^{n}_{i=1}p_{i}k \bigl(\Vert x_{i}\Vert \bigr)-k \Biggl(\frac {1}{P_{n}}\Biggl\Vert \sum^{n}_{i=1}p_{i}x_{i} \Biggr\Vert \Biggr)= \varrho k''(\xi). $$
(12)

The left-hand side of this equation equals \(\Omega(k)=0\), whereas the factor ϱ on the right-hand side is nonzero, so that \(k''(\xi)=0\), that is, \(c_{1}f''(\xi)=c_{2}g''(\xi)\), which is exactly (9). □

Remark 3.4

If the inverse of \(f''/g''\) exists, then various kinds of means can be defined by (9). That is,

$$ \xi= \biggl(\frac{f''}{g''} \biggr)^{-1} \biggl( \frac{\Omega (f)}{\Omega(g)} \biggr). $$
(13)

In particular, if we substitute \(f(x) = \phi_{p}(x)\) and \(g(x) = \phi_{q}(x)\) into (9) (the functions \(\phi_{p}\) are defined in Lemma 2.5), then we obtain the following expressions:

$$ \mu(p,q;\Omega)=\left \{ \textstyle\begin{array}{l@{\quad}l} \frac{1}{P_{n}} (\frac{q(q-1)}{p(p-1)}\frac{P_{n}^{p-1}\sum_{i=1}^{n}p_{i}\|x_{i}\|^{p}- (\Vert \sum^{n}_{i=1}p_{i}x_{i}\Vert )^{p}}{ P_{n}^{q-1}\sum_{i=1}^{n}p_{i}\|x_{i}\|^{q}- (\Vert \sum^{n}_{i=1}p_{i}x_{i}\Vert )^{q}} )^{\frac{1}{p-q}}, & p\neq q, \\ \exp (\frac{1-2p}{p(p-1)}+\frac{P_{n}^{p-1}\sum_{i=1}^{n}p_{i}\|x_{i}\|^{p}\ln\|x_{i}\|-\|\sum_{i=1}^{n}p_{i}x_{i}\| ^{p}\ln (\Vert \sum_{i=1}^{n}p_{i}x_{i}\Vert /P_{n} )}{P_{n}^{p-1}\sum_{i=1}^{n}p_{i}\|x_{i}\|^{p}-\|\sum_{i=1}^{n}p_{i}x_{i}\|^{p}} ), & p=q\neq1. \end{array}\displaystyle \right . $$
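
Since \(\phi_{p}''(x)/\phi_{q}''(x)=x^{p-q}\), formula (13) gives \(\xi=(\Omega(\phi_{p})/\Omega(\phi_{q}))^{1/(p-q)}\) for \(p\neq q\), and the closed form above can be checked against it numerically. A minimal sketch with sample data (\(V=\mathbb{R}^{2}\), Euclidean norm; the weights are renamed w to avoid a clash with the exponent p):

```python
# The closed form for mu(p, q; Omega), p != q, should agree with xi from (13),
# i.e. with (Omega(phi_p)/Omega(phi_q))^{1/(p-q)}.
import numpy as np

x = np.array([[1.0, 0.0], [2.0, 2.0], [-1.0, 3.0]])   # sample points in R^2
w = np.array([1.0, 0.5, 2.0])                          # the weights p_i
W = w.sum()
norms = np.linalg.norm(x, axis=1)
s_norm = np.linalg.norm((w[:, None] * x).sum(axis=0))  # || sum_i p_i x_i ||

def omega_phi(p):                                      # Omega(phi_p), p > 1
    return ((w * norms ** p).sum() / W - (s_norm / W) ** p) / (p * (p - 1))

p, q = 3.0, 2.0
xi = (omega_phi(p) / omega_phi(q)) ** (1.0 / (p - q))
mu = (1.0 / W) * ((q * (q - 1) / (p * (p - 1)))
                  * (W ** (p - 1) * (w * norms ** p).sum() - s_norm ** p)
                  / (W ** (q - 1) * (w * norms ** q).sum() - s_norm ** q)) ** (1.0 / (p - q))
assert abs(mu - xi) < 1e-10 * max(1.0, abs(xi))
```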

4 Concluding remarks

There are several levels of construction in this paper. Starting from Theorem 1.1, we constructed families of functions with the desired convexity properties (Lemma 2.5). Letting the parameters range over suitable intervals, we obtained exponential convexity, and new exponentially convex functions were produced. In particular, the mean value theorems that we established enabled us to define means built from the constructed exponentially convex functions.