1 Introduction

In mathematics, Jensen's inequality is a powerful tool which relates the value of a convex function of an integral to the integral of the convex function. A basic form of the Jensen weighted integral inequality is given below.

Theorem 1.1 Let $g,p:[a,b]\to\mathbb{R}$ be functions defined on $[a,b]$ and let $J$ be an interval such that $g(x)\in J$ for every $x\in[a,b]$. Let $f:J\to\mathbb{R}$ be a convex function and suppose that $p$, $pg$ and $p(f\circ g)$ are all integrable on $[a,b]$. If $p(u)\ge 0$ on $[a,b]$ and $\int_a^b p(u)\,du>0$, then the inequality

$$f\left(\frac{\int_a^b p(u)g(u)\,du}{\int_a^b p(u)\,du}\right)\le\frac{\int_a^b p(u)f(g(u))\,du}{\int_a^b p(u)\,du}$$
(1)

holds.

Theorem 1.2 Let $g,p:[a,b]\to\mathbb{R}$ be functions defined on $[a,b]$ and let $J$ be an interval such that $g(x)\in J$ for every $x\in[a,b]$. Let $f:J\to\mathbb{R}$ be a convex function and suppose that $p$, $pg$ and $p(f\circ g)$ are all integrable on $[a,b]$. If $g$ is monotonic on $[a,b]$ and $p$ satisfies

$$0\le\int_a^x p(u)\,du\le\int_a^b p(u)\,du\quad\text{for every }x\in[a,b]\quad\text{and}\quad\int_a^b p(u)\,du>0,$$
(2)

then (1) holds.

Inequality (1) under the conditions of Theorem 1.2 is known as the Jensen-Steffensen weighted integral inequality.
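To see how the Jensen-Steffensen setting differs from the classical one, the following small numerical sketch evaluates both sides of (1) for a weight that changes sign but satisfies (2). The concrete choices $p(u)=\cos(2\pi u)+1/2$, $g(u)=u$ and $f(x)=x^2$ on $[0,1]$ are illustrative only and are not taken from the paper.

```python
import numpy as np

# Illustrative data (not from the paper): the weight p changes sign on [0, 1]
# but still satisfies the Steffensen condition (2).
a, b = 0.0, 1.0
u = np.linspace(a, b, 20001)
p = np.cos(2 * np.pi * u) + 0.5          # weight, negative near u = 1/2
g = u                                     # monotone g
f = lambda t: t ** 2                      # convex f

def trap(y):
    # simple trapezoidal rule on the common grid u
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(u)))

P = trap(p)                               # ∫_a^b p(u) du > 0
partial = np.cumsum((p[1:] + p[:-1]) / 2 * np.diff(u))          # ∫_a^x p(u) du
assert partial.min() >= -1e-12 and partial.max() <= P + 1e-12   # condition (2)

lhs = f(trap(p * g) / P)                  # f( ∫ p g / ∫ p )
rhs = trap(p * f(g)) / P                  # ∫ p f(g) / ∫ p
print(lhs, rhs, lhs <= rhs)               # inequality (1): lhs <= rhs
```

With these choices the left-hand side is 0.25 and the right-hand side is roughly 0.43, so (1) holds even though $p$ is not non-negative.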

In Section 2, we present an integral version of some results recently proved in [1]. We define linear functionals constructed from the non-negative difference of the refined inequalities and give mean value theorems for the linear functionals. In Section 3, we give definitions and results that will be needed later. Further, we investigate the n-exponential convexity and log-convexity of the functions associated with the linear functionals and also deduce Lyapunov-type inequalities. We also prove the monotonicity property of the generalized Cauchy means obtained via these functionals. Finally, in Section 4, we give several examples of the families of functions for which the obtained results can be applied.

2 Main results

The following theorem is our first main result.

Theorem 2.1 Let $g,p:[a,b]\to\mathbb{R}$ be functions defined on $[a,b]$ such that $g$ is monotonic and differentiable. Let $J$ be an interval such that $g(x)\in J$ for every $x\in[a,b]$ and let $f:J\to\mathbb{R}$ be a differentiable convex function. If $p$, $pg$ and $p(f\circ g)$ are all integrable on $[a,b]$ and (2) holds, then the function

$$F(x)=\frac{\int_a^x p(u)f(g(u))\,du+f(g(x))\int_x^b p(u)\,du}{\int_a^b p(u)\,du}-f\left(\frac{\int_a^x p(u)g(u)\,du+g(x)\int_x^b p(u)\,du}{\int_a^b p(u)\,du}\right)$$
(3)

is increasing on $[a,b]$, i.e., for all $x,y\in[a,b]$ such that $a\le x\le y\le b$, we have

$$0\le F(x)\le F(y)\le\frac{\int_a^b p(u)f(g(u))\,du}{\int_a^b p(u)\,du}-f\left(\frac{\int_a^b p(u)g(u)\,du}{\int_a^b p(u)\,du}\right).$$
(4)

Proof

We have

$$F'(x)=\frac{g'(x)\int_x^b p(u)\,du}{\int_a^b p(u)\,du}\left[f'(g(x))-f'\left(\frac{\int_a^x p(u)g(u)\,du+g(x)\int_x^b p(u)\,du}{\int_a^b p(u)\,du}\right)\right],$$

where $\frac{\int_x^b p(u)\,du}{\int_a^b p(u)\,du}\ge 0$ since (2) holds. The claim will follow if $F'(x)\ge 0$, i.e., if

$$\frac{g'(x)\int_x^b p(u)\,du}{\int_a^b p(u)\,du}\ge 0$$
(5)

and

$$f'(g(x))-f'\left(\frac{\int_a^x p(u)g(u)\,du+g(x)\int_x^b p(u)\,du}{\int_a^b p(u)\,du}\right)\ge 0$$
(6)

hold, or if

$$\frac{g'(x)\int_x^b p(u)\,du}{\int_a^b p(u)\,du}\le 0$$
(7)

and

$$f'(g(x))-f'\left(\frac{\int_a^x p(u)g(u)\,du+g(x)\int_x^b p(u)\,du}{\int_a^b p(u)\,du}\right)\le 0$$
(8)

hold.

Now, we discuss the following two cases.

Case I. If $g$ is increasing, then (5) holds and $g(x)-\frac{\int_a^x p(u)g(u)\,du+g(x)\int_x^b p(u)\,du}{\int_a^b p(u)\,du}\ge 0$. Since $f$ is a differentiable convex function defined on $J$, $f'$ is increasing on $J$, and so (6) holds, which together with (5) implies that $F'(x)\ge 0$.

Case II. If $g$ is decreasing, then (7) holds and $g(x)-\frac{\int_a^x p(u)g(u)\,du+g(x)\int_x^b p(u)\,du}{\int_a^b p(u)\,du}\le 0$. Again, since $f'$ is increasing by the convexity of $f$, (8) holds, which together with (7) implies that $F'(x)\ge 0$.

Now, as $F$ is increasing on $[a,b]$, for all $x,y\in[a,b]$ such that $a\le x\le y\le b$, we have

$$F(a)\le F(x)\le F(y)\le F(b).$$
(9)

At $x=a$ and at $x=b$, (3) gives $F(a)=0$ and $F(b)=\frac{\int_a^b p(u)f(g(u))\,du}{\int_a^b p(u)\,du}-f\left(\frac{\int_a^b p(u)g(u)\,du}{\int_a^b p(u)\,du}\right)$, respectively. Using these values of $F(a)$ and $F(b)$ in (9), we obtain (4). □
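As a sanity check, the monotonicity of $F$ can also be observed numerically. The sketch below tabulates $F$ from (3) on a grid for illustrative data not taken from the paper ($p(u)=\cos(2\pi u)+1/2$, $g(u)=u$, $f(x)=x^2$ on $[0,1]$) and confirms that $F$ grows from $F(a)=0$ to the Jensen-Steffensen difference on the right of (4).

```python
import numpy as np

# Illustrative data (not from the paper).
a, b = 0.0, 1.0
u = np.linspace(a, b, 4001)
p = np.cos(2 * np.pi * u) + 0.5        # sign-changing weight satisfying (2)
gv = u.copy()                           # g(u) = u, monotone and differentiable
f = lambda t: t ** 2                    # differentiable convex f

def cum(y):
    # cumulative trapezoidal integral: entry k approximates ∫_a^{u_k} y du
    return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) / 2 * np.diff(u))))

cp, cpg, cpfg = cum(p), cum(p * gv), cum(p * f(gv))
P = cp[-1]

# F from (3), evaluated at every grid point x = u_k
F = (cpfg + f(gv) * (P - cp)) / P - f((cpg + gv * (P - cp)) / P)

print(np.all(np.diff(F) >= -1e-9))      # F is nondecreasing, up to discretization error
print(F[0], F[-1])                       # F(a) = 0 and F(b) = the right-hand side of (4)
```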

The second main result states the following.

Theorem 2.2 Let all the conditions of Theorem  2.1 be satisfied. Then the function

$$\bar F(x)=\frac{\int_x^b p(u)f(g(u))\,du+f(g(x))\int_a^x p(u)\,du}{\int_a^b p(u)\,du}-f\left(\frac{\int_x^b p(u)g(u)\,du+g(x)\int_a^x p(u)\,du}{\int_a^b p(u)\,du}\right)$$
(10)

is decreasing on $[a,b]$, i.e., for all $x,y\in[a,b]$ such that $a\le x\le y\le b$, we have

$$0\le\bar F(y)\le\bar F(x)\le\frac{\int_a^b p(u)f(g(u))\,du}{\int_a^b p(u)\,du}-f\left(\frac{\int_a^b p(u)g(u)\,du}{\int_a^b p(u)\,du}\right).$$
(11)

Proof

We have

$$\bar F'(x)=\frac{g'(x)\int_a^x p(u)\,du}{\int_a^b p(u)\,du}\left[f'(g(x))-f'\left(\frac{\int_x^b p(u)g(u)\,du+g(x)\int_a^x p(u)\,du}{\int_a^b p(u)\,du}\right)\right],$$

where $\frac{\int_a^x p(u)\,du}{\int_a^b p(u)\,du}\ge 0$ since (2) holds. The claim will follow if $\bar F'(x)\le 0$, i.e., if

$$\frac{g'(x)\int_a^x p(u)\,du}{\int_a^b p(u)\,du}\ge 0$$
(12)

and

$$f'(g(x))-f'\left(\frac{\int_x^b p(u)g(u)\,du+g(x)\int_a^x p(u)\,du}{\int_a^b p(u)\,du}\right)\le 0$$
(13)

hold, or if

$$\frac{g'(x)\int_a^x p(u)\,du}{\int_a^b p(u)\,du}\le 0$$
(14)

and

$$f'(g(x))-f'\left(\frac{\int_x^b p(u)g(u)\,du+g(x)\int_a^x p(u)\,du}{\int_a^b p(u)\,du}\right)\ge 0$$
(15)

hold.

Now, we discuss the following two cases.

Case I. If $g$ is increasing, then (12) holds and $g(x)-\frac{\int_x^b p(u)g(u)\,du+g(x)\int_a^x p(u)\,du}{\int_a^b p(u)\,du}\le 0$. Since $f$ is a differentiable convex function defined on $J$, $f'$ is increasing on $J$ and so (13) holds, which together with (12) implies that $\bar F'(x)\le 0$.

Case II. If $g$ is decreasing, then (14) holds and $g(x)-\frac{\int_x^b p(u)g(u)\,du+g(x)\int_a^x p(u)\,du}{\int_a^b p(u)\,du}\ge 0$. Again, since $f'$ is increasing by the convexity of $f$, (15) holds, which together with (14) implies that $\bar F'(x)\le 0$.

Now, as $\bar F$ is decreasing on $[a,b]$, for any $x,y\in[a,b]$ such that $a\le x\le y\le b$, we have

$$\bar F(b)\le\bar F(y)\le\bar F(x)\le\bar F(a).$$
(16)

At $x=a$ and at $x=b$, (10) gives $\bar F(a)=\frac{\int_a^b p(u)f(g(u))\,du}{\int_a^b p(u)\,du}-f\left(\frac{\int_a^b p(u)g(u)\,du}{\int_a^b p(u)\,du}\right)$ and $\bar F(b)=0$, respectively. Using these values of $\bar F(a)$ and $\bar F(b)$ in (16), we obtain (11). □

Let us observe the inequalities (4) and (11). Motivated by them, we define two linear functionals $\Phi_i$ ($i=1,2$) by

$$\Phi_1(x,y;p,g,f)=F(y)-F(x),$$
(17)
$$\Phi_2(x,y;p,g,f)=\bar F(x)-\bar F(y),$$
(18)

where $x,y\in[a,b]$, $p$ is a function satisfying (2), $g$ is a monotone differentiable function and the functions $F$ and $\bar F$ are as in (3) and (10), respectively. If $f$ is a differentiable convex function defined on $J$, then Theorems 2.1 and 2.2 imply that $\Phi_i(x,y;p,g,f)\ge 0$, $i=1,2$. Now, we give mean value theorems for the functionals $\Phi_i$, $i=1,2$. These theorems enable us to define various classes of means that can be expressed in terms of linear functionals.
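A short numerical sketch of these functionals follows. It reads (17)-(18) as the differences $\Phi_1=F(y)-F(x)$ and $\Phi_2=\bar F(x)-\bar F(y)$ written above, and the data ($p(u)=\cos(2\pi u)+1/2$, $g(u)=u$, $f=\exp$ on $[0,1]$, the chosen grid indices) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative check that the functionals (17)-(18), read as
# Phi_1 = F(y) - F(x) and Phi_2 = Fbar(x) - Fbar(y), are non-negative.
a, b = 0.0, 1.0
u = np.linspace(a, b, 4001)
p = np.cos(2 * np.pi * u) + 0.5         # weight satisfying (2)
gv = u.copy()                            # monotone differentiable g
f = np.exp                               # a differentiable convex f

def cum(y):
    # cumulative trapezoidal integral over [a, u_k]
    return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) / 2 * np.diff(u))))

cp, cpg, cpfg = cum(p), cum(p * gv), cum(p * f(gv))
P = cp[-1]

F = (cpfg + f(gv) * (P - cp)) / P - f((cpg + gv * (P - cp)) / P)            # (3)
Fbar = ((cpfg[-1] - cpfg) + f(gv) * cp) / P \
       - f(((cpg[-1] - cpg) + gv * cp) / P)                                  # (10)

ix, iy = 800, 3200                       # grid indices with x <= y
print(F[iy] - F[ix] >= 0)                # Phi_1(x, y; p, g, f) >= 0 (Theorem 2.1)
print(Fbar[ix] - Fbar[iy] >= 0)          # Phi_2(x, y; p, g, f) >= 0 (Theorem 2.2)
```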

First, we state the Lagrange-type mean value theorem related to $\Phi_i$, $i=1,2$.

Theorem 2.3 Let $x,y\in[a,b]$ be such that $x\le y$, let $p$ be a function satisfying (2) and let $g$ be a monotone differentiable function. Let $J$ be a compact interval such that $g(x)\in J$ for every $x\in[a,b]$ and let $f\in C^2(J)$. Suppose that $\Phi_1$ and $\Phi_2$ are the linear functionals defined in (17) and (18). Then there exist $\xi_1,\xi_2\in J$ such that

$$\Phi_i(x,y;p,g,f)=\frac{f''(\xi_i)}{2}\,\Phi_i(x,y;p,g,f_0),\quad i=1,2,$$
(19)

where $f_0(x)=x^2$.

Proof Analogous to the proof of Theorem 2.4 in [2]. □

The following theorem is a new analogue of the classical Cauchy mean value theorem, related to the functionals $\Phi_i$, $i=1,2$.

Theorem 2.4 Let $x,y\in[a,b]$ be such that $x\le y$, let $p$ be a function satisfying (2) and let $g$ be a monotone differentiable function. Let $J$ be a compact interval such that $g(x)\in J$ for every $x\in[a,b]$ and let $f,k\in C^2(J)$. Suppose that $\Phi_1$ and $\Phi_2$ are the linear functionals defined in (17) and (18). Then there exist $\xi_1,\xi_2\in J$ such that

$$\frac{\Phi_i(x,y;p,g,f)}{\Phi_i(x,y;p,g,k)}=\frac{f''(\xi_i)}{k''(\xi_i)},\quad i=1,2,$$
(20)

provided that the denominators are not equal to zero.

Proof Analogous to the proof of Theorem 2.6 in [2]. □

Remark 2.5

  1. (i)

    By taking $f(x)=x^s$ and $k(x)=x^q$ in (20), where $s,q\in\mathbb{R}\setminus\{0,1\}$ are such that $s\ne q$, we have

    $$\xi_i^{s-q}=\frac{q(q-1)\,\Phi_i(x,y;p,g,x^s)}{s(s-1)\,\Phi_i(x,y;p,g,x^q)},\quad i=1,2.$$
  2. (ii)

    If the inverse of the function $f''/k''$ exists, then (20) gives

    $$\xi_i=\left(\frac{f''}{k''}\right)^{-1}\left(\frac{\Phi_i(x,y;p,g,f)}{\Phi_i(x,y;p,g,k)}\right),\quad i=1,2.$$

3 n-exponential convexity and log-convexity of the functions associated with integral Jensen-Steffensen differences

In this section, we give definitions and properties which will be needed for the proofs of our results. In the sequel, let I be an open interval in ℝ.

We recall the following definition of a convex function (see [[3], p.2]).

Definition 1 A function $f:I\to\mathbb{R}$ is convex on $I$ if

$$(x_3-x_2)f(x_1)+(x_1-x_3)f(x_2)+(x_2-x_1)f(x_3)\ge 0$$
(21)

holds for all $x_1,x_2,x_3\in I$ such that $x_1<x_2<x_3$.

The following proposition will be useful further (see [[3], p.2]).

Proposition 3.1 If $f$ is a convex function on an interval $I$ and if $x_1\le y_1$, $x_2\le y_2$, $x_1\ne x_2$, $y_1\ne y_2$, then the following inequality is valid:

$$\frac{f(x_2)-f(x_1)}{x_2-x_1}\le\frac{f(y_2)-f(y_1)}{y_2-y_1}.$$
(22)

If the function f is concave, the inequality reverses (see [[3], p.2]).

Another interesting type of convexity we consider is the n-exponential convexity.

Definition 2 A function $h:I\to\mathbb{R}$ is $n$-exponentially convex in the Jensen sense on $I$ if

$$\sum_{i,j=1}^{n}\alpha_i\alpha_j\,h\!\left(\frac{x_i+x_j}{2}\right)\ge 0$$

holds for every $\alpha_i\in\mathbb{R}$ and $x_i\in I$, $i=1,\dots,n$ (see [2, 4]).

Definition 3 A function $h:I\to\mathbb{R}$ is $n$-exponentially convex if it is $n$-exponentially convex in the Jensen sense and continuous on $I$.

Remark 3.2 From the above definition, it is clear that $1$-exponentially convex functions in the Jensen sense are non-negative functions. Also, $n$-exponentially convex functions in the Jensen sense are $k$-exponentially convex in the Jensen sense for every $k\in\mathbb{N}$, $k\le n$.

Positive semi-definite matrices represent a basic tool in our study. By the definition of positive semi-definite matrices and some basic linear algebra, we have the following proposition.

Proposition 3.3 If $h$ is $n$-exponentially convex in the Jensen sense, then the matrix $\left[h\!\left(\frac{x_i+x_j}{2}\right)\right]_{i,j=1}^{k}$ is positive semi-definite for all $k\in\mathbb{N}$, $k\le n$. Particularly,

$$\det\left[h\!\left(\frac{x_i+x_j}{2}\right)\right]_{i,j=1}^{k}\ge 0\quad\text{for every }k\in\mathbb{N},\ k\le n,\ x_i\in I,\ i=1,\dots,k.$$
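As a concrete illustration (not from the paper), for an exponentially convex function such as $h(x)=e^{cx}$ the matrix $\left[h\!\left(\frac{x_i+x_j}{2}\right)\right]$ is a rank-one Gram matrix, so its positive semi-definiteness can be checked directly; the constant $c$ and the points $x_i$ below are arbitrary illustrative choices.

```python
import numpy as np

# For h(x) = exp(c * x), h((x_i + x_j) / 2) = exp(c*x_i/2) * exp(c*x_j/2) is a
# rank-one Gram matrix, hence positive semi-definite as Proposition 3.3 requires.
c = 0.7                                            # illustrative constant
xs = np.array([-1.0, 0.3, 1.1, 2.4])               # illustrative points x_i in I
H = np.exp(c * (xs[:, None] + xs[None, :]) / 2)    # matrix [h((x_i + x_j) / 2)]
print(np.linalg.eigvalsh(H).min() >= -1e-12)       # positive semi-definite
print(np.linalg.det(H) >= -1e-12)                  # det >= 0, up to rounding
```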

Definition 4 A function $h:I\to\mathbb{R}$ is exponentially convex in the Jensen sense if it is $n$-exponentially convex in the Jensen sense for all $n\in\mathbb{N}$.

Definition 5 A function $h:I\to\mathbb{R}$ is exponentially convex if it is exponentially convex in the Jensen sense and continuous.

Lemma 3.4 A function $h:I\to(0,\infty)$ is log-convex in the Jensen sense, that is, for every $x,y\in I$,

$$h^2\!\left(\frac{x+y}{2}\right)\le h(x)\,h(y)$$

holds if and only if the relation

$$\alpha^2 h(x)+2\alpha\beta\,h\!\left(\frac{x+y}{2}\right)+\beta^2 h(y)\ge 0$$

holds for every $\alpha,\beta\in\mathbb{R}$ and $x,y\in I$.

Remark 3.5 It follows that a function is log-convex in the Jensen sense if and only if it is 2-exponentially convex in the Jensen sense. Also, by using the basic convexity theory, a function is log-convex if and only if it is 2-exponentially convex. For more results about log-convexity, see [5] and the references therein.

Definition 6 The second-order divided difference of a function $f:[a,b]\to\mathbb{R}$ at mutually distinct points $y_0,y_1,y_2\in[a,b]$ is defined recursively by

$$[y_i;f]=f(y_i),\quad i=0,1,2,$$
$$[y_i,y_{i+1};f]=\frac{f(y_{i+1})-f(y_i)}{y_{i+1}-y_i},\quad i=0,1,$$
$$[y_0,y_1,y_2;f]=\frac{[y_1,y_2;f]-[y_0,y_1;f]}{y_2-y_0}.$$
(23)

Remark 3.6 The value $[y_0,y_1,y_2;f]$ is independent of the order of the points $y_0$, $y_1$ and $y_2$. This definition may be extended to include the case in which some or all of the points coincide (see [[3], p.16]). Namely, taking the limit $y_1\to y_0$ in (23), we get

$$\lim_{y_1\to y_0}[y_0,y_1,y_2;f]=[y_0,y_0,y_2;f]=\frac{f(y_2)-f(y_0)-f'(y_0)(y_2-y_0)}{(y_2-y_0)^2},\quad y_2\ne y_0,$$

provided that $f'$ exists; and furthermore, taking the limits $y_i\to y_0$, $i=1,2$, in (23), we get

$$\lim_{y_2\to y_0}\lim_{y_1\to y_0}[y_0,y_1,y_2;f]=[y_0,y_0,y_0;f]=\frac{f''(y_0)}{2},$$

provided that $f''$ exists.
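These formulas are easy to check numerically. The sketch below (with the illustrative choice $f=\exp$, not taken from the paper) computes the divided difference from (23) and compares it with the two limiting expressions of Remark 3.6.

```python
import numpy as np

def dd2(f, y0, y1, y2):
    # second-order divided difference from (23)
    d01 = (f(y1) - f(y0)) / (y1 - y0)
    d12 = (f(y2) - f(y1)) / (y2 - y1)
    return (d12 - d01) / (y2 - y0)

f = np.exp          # illustrative choice; f' = f'' = exp
y0, y2 = 0.3, 1.2

# limit y1 -> y0 : (f(y2) - f(y0) - f'(y0)(y2 - y0)) / (y2 - y0)**2
print(dd2(f, y0, y0 + 1e-6, y2),
      (f(y2) - f(y0) - f(y0) * (y2 - y0)) / (y2 - y0) ** 2)

# limits y1, y2 -> y0 : f''(y0) / 2
print(dd2(f, y0, y0 + 1e-5, y0 + 2e-5), f(y0) / 2)
```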

The following definition of a real-valued convex function is characterized by the second-order divided difference (see [[3], p.15]).

Definition 7 A function $f:[a,b]\to\mathbb{R}$ is said to be convex if and only if for all choices of three distinct points $y_0,y_1,y_2\in[a,b]$, $[y_0,y_1,y_2;f]\ge 0$.

Next, we study the $n$-exponential convexity and log-convexity of the functions associated with the linear functionals $\Phi_i$ ($i=1,2$) defined in (17) and (18).

Theorem 3.7 Let $\Omega=\{f_s:s\in I\subseteq\mathbb{R}\}$ be a family of differentiable functions defined on $J$ such that the function $s\mapsto[y_0,y_1,y_2;f_s]$ is $n$-exponentially convex in the Jensen sense on $I$ for every three mutually distinct points $y_0,y_1,y_2\in J$. Let $\Phi_i$ ($i=1,2$) be the linear functionals defined in (17) and (18). Then the following statements hold.

  1. (i)

    The function $s\mapsto\Phi_i(x,y;p,g,f_s)$ is $n$-exponentially convex in the Jensen sense on $I$ and the matrix $\left[\Phi_i\!\left(x,y;p,g,f_{\frac{s_j+s_k}{2}}\right)\right]_{j,k=1}^{m}$ is positive semi-definite for all $m\in\mathbb{N}$, $m\le n$ and $s_1,\dots,s_m\in I$. Particularly,

    $$\det\left[\Phi_i\!\left(x,y;p,g,f_{\frac{s_j+s_k}{2}}\right)\right]_{j,k=1}^{m}\ge 0,\quad m\in\mathbb{N},\ m\le n.$$
  2. (ii)

    If the function $s\mapsto\Phi_i(x,y;p,g,f_s)$ is continuous on $I$, then it is $n$-exponentially convex on $I$.

Proof The idea of the proof is the same as that of Theorem 3.9 in [2].

  1. (i)

    Let $\alpha_j\in\mathbb{R}$ ($j=1,\dots,n$) and consider the function

    $$\varphi(y)=\sum_{j,k=1}^{n}\alpha_j\alpha_k\,f_{\frac{s_j+s_k}{2}}(y),$$

where $s_j\in I$ and $f_{\frac{s_j+s_k}{2}}\in\Omega$. Then

$$[y_0,y_1,y_2;\varphi]=\sum_{j,k=1}^{n}\alpha_j\alpha_k\left[y_0,y_1,y_2;f_{\frac{s_j+s_k}{2}}\right],$$

and since the function $s\mapsto[y_0,y_1,y_2;f_s]$ is $n$-exponentially convex in the Jensen sense on $I$ by assumption, it follows that

$$[y_0,y_1,y_2;\varphi]=\sum_{j,k=1}^{n}\alpha_j\alpha_k\left[y_0,y_1,y_2;f_{\frac{s_j+s_k}{2}}\right]\ge 0.$$

And so, by using Definition 7, we conclude that $\varphi$ is a convex function. Hence,

$$\Phi_i(x,y;p,g,\varphi)\ge 0,\quad i=1,2,$$

which is equivalent to

$$\sum_{j,k=1}^{n}\alpha_j\alpha_k\,\Phi_i\!\left(x,y;p,g,f_{\frac{s_j+s_k}{2}}\right)\ge 0,\quad i=1,2,$$

and so we conclude that the function $s\mapsto\Phi_i(x,y;p,g,f_s)$ is $n$-exponentially convex in the Jensen sense on $I$.

The remaining part follows from Proposition 3.3.

  1. (ii)

    If the function $s\mapsto\Phi_i(x,y;p,g,f_s)$ is continuous on $I$, then from (i) and by Definition 3, it follows that it is $n$-exponentially convex on $I$. □

The following corollary is an immediate consequence of the above theorem.

Corollary 3.8 Let $\Omega=\{f_s:s\in I\subseteq\mathbb{R}\}$ be a family of differentiable functions defined on $J$ such that the function $s\mapsto[y_0,y_1,y_2;f_s]$ is exponentially convex in the Jensen sense on $I$ for every three mutually distinct points $y_0,y_1,y_2\in J$. Let $\Phi_i$ ($i=1,2$) be the linear functionals defined in (17) and (18). Then the following statements hold.

  1. (i)

    The function $s\mapsto\Phi_i(x,y;p,g,f_s)$ is exponentially convex in the Jensen sense on $I$ and the matrix $\left[\Phi_i\!\left(x,y;p,g,f_{\frac{s_j+s_k}{2}}\right)\right]_{j,k=1}^{n}$ is positive semi-definite for all $n\in\mathbb{N}$ and $s_1,\dots,s_n\in I$.

  2. (ii)

    If the function $s\mapsto\Phi_i(x,y;p,g,f_s)$ is continuous on $I$, then it is exponentially convex on $I$.

Corollary 3.9 Let $\Omega=\{f_s:s\in I\subseteq\mathbb{R}\}$ be a family of differentiable functions defined on $J$ such that the function $s\mapsto[y_0,y_1,y_2;f_s]$ is $2$-exponentially convex in the Jensen sense on $I$ for every three mutually distinct points $y_0,y_1,y_2\in J$. Let $\Phi_i$ ($i=1,2$) be the linear functionals defined in (17) and (18). Further, assume that $\Phi_i(x,y;p,g,f_s)$ ($i=1,2$) is strictly positive for $f_s\in\Omega$. Then the following statements hold.

  1. (i)

    If the function $s\mapsto\Phi_i(x,y;p,g,f_s)$ is continuous on $I$, then it is $2$-exponentially convex on $I$ and so it is log-convex, and for $r,s,t\in I$ such that $r<s<t$, we have

    $$\left[\Phi_i(x,y;p,g,f_s)\right]^{t-r}\le\left[\Phi_i(x,y;p,g,f_r)\right]^{t-s}\left[\Phi_i(x,y;p,g,f_t)\right]^{s-r},\quad i=1,2,$$
    (24)

known as Lyapunov’s inequality.

  1. (ii)

    If the function $s\mapsto\Phi_i(x,y;p,g,f_s)$ is differentiable on $I$, then for every $s,q,u,v\in I$ such that $s\le u$ and $q\le v$, we have

    $$\mu_{s,q}(x,y,p,g,\Phi_i,\Omega)\le\mu_{u,v}(x,y,p,g,\Phi_i,\Omega),\quad i=1,2,$$
    (25)

where

$$\mu_{s,q}(x,y,p,g,\Phi_i,\Omega)=\begin{cases}\left(\dfrac{\Phi_i(x,y;p,g,f_s)}{\Phi_i(x,y;p,g,f_q)}\right)^{\frac{1}{s-q}}, & s\ne q,\\[2mm]\exp\!\left(\dfrac{\frac{d}{ds}\Phi_i(x,y;p,g,f_s)}{\Phi_i(x,y;p,g,f_s)}\right), & s=q,\end{cases}$$
(26)

for $f_s,f_q\in\Omega$.

Proof The idea of the proof is the same as that of Corollary 3.11 in [2].

  1. (i)

    The claim that the function $s\mapsto\Phi_i(x,y;p,g,f_s)$ is log-convex on $I$ is an immediate consequence of Theorem 3.7 and Remark 3.5, and (24) can be obtained by replacing the convex function $f$ with the convex function $f(z)=\log\Phi_i(x,y;p,g,f_z)$ for $z=r,s,t$ in (21), where $r,s,t\in I$ are such that $r<s<t$.

  2. (ii)

    Since by (i) the function $s\mapsto\Phi_i(x,y;p,g,f_s)$ is log-convex on $I$, the function $s\mapsto\log\Phi_i(x,y;p,g,f_s)$ is convex on $I$. Applying Proposition 3.1 with $f(z)=\log\Phi_i(x,y;p,g,f_z)$ ($i=1,2$), we get

    $$\frac{\log\Phi_i(x,y;p,g,f_s)-\log\Phi_i(x,y;p,g,f_q)}{s-q}\le\frac{\log\Phi_i(x,y;p,g,f_u)-\log\Phi_i(x,y;p,g,f_v)}{u-v}$$
    (27)

for $s\le u$, $q\le v$, $s\ne q$, $u\ne v$; and therefore, we conclude that

$$\mu_{s,q}(x,y,p,g,\Phi_i,\Omega)\le\mu_{u,v}(x,y,p,g,\Phi_i,\Omega),\quad i=1,2.$$

If $s=q$, we consider the limit as $q\to s$ in (27) and conclude that

$$\mu_{s,s}(x,y,p,g,\Phi_i,\Omega)\le\mu_{u,v}(x,y,p,g,\Phi_i,\Omega),\quad i=1,2.$$

The case u=v can be treated similarly. □
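The monotonicity (25) of the means defined in (26) can also be observed numerically. The sketch below does this for the power family of Example 4.2, reading $\Phi_1$ as the difference $F(y)-F(x)$ reconstructed in (17); the weight, $g$, the grid and the chosen parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative check of (25)-(26) for the power family f_s(z) = z^s / (s(s-1))
# of Example 4.2, with Phi_1 read as the difference F(y) - F(x) from (17).
a, b = 0.0, 1.0
u = np.linspace(a, b, 4001)
p = np.cos(2 * np.pi * u) + 0.5            # sign-changing weight satisfying (2)
gv = u + 1.0                                # monotone g with values in (0, inf)

def cum(y):
    # cumulative trapezoidal integral over [a, u_k]
    return np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) / 2 * np.diff(u))))

cp = cum(p)
P = cp[-1]
ix, iy = 800, 3200                          # grid indices of x <= y

def phi1(s):
    fs = lambda z: z ** s / (s * (s - 1))   # f_s from Omega_2, s != 0, 1
    cpg, cpfg = cum(p * gv), cum(p * fs(gv))
    F = (cpfg + fs(gv) * (P - cp)) / P - fs((cpg + gv * (P - cp)) / P)   # (3)
    return F[iy] - F[ix]                    # Phi_1 as in (17)

def mu(s, q):
    # the Cauchy-type mean (26) for s != q
    return (phi1(s) / phi1(q)) ** (1.0 / (s - q))

# monotonicity (25): s <= u and q <= v imply mu(s, q) <= mu(u, v)
print(mu(2.0, 3.0), mu(4.0, 5.0), mu(2.0, 3.0) <= mu(4.0, 5.0))
# and the mean lies between min g and max g, cf. (31)
print(gv.min() <= mu(2.0, 3.0) <= gv.max())
```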

Remark 3.10 Note that the results from Theorem 3.7, Corollary 3.8 and Corollary 3.9 still hold when two of the points $y_0,y_1,y_2\in J$ coincide, say $y_1=y_0$, for a family of differentiable functions $f_s$ such that the function $s\mapsto[y_0,y_1,y_2;f_s]$ is $n$-exponentially convex in the Jensen sense (exponentially convex in the Jensen sense, log-convex in the Jensen sense) on $I$; and furthermore, they still hold when all three points coincide for a family of twice differentiable functions with the same property. The proofs can be obtained by recalling Remark 3.6 and by using suitable characterizations of convexity.

4 Examples

In this section, we present several families of functions which fulfill the conditions of Theorem 3.7, Corollary 3.8, Corollary 3.9 and Remark 3.10. This enables us to construct large families of functions which are exponentially convex.

Example 4.1

Consider the family of functions

$$\Omega_1=\{g_s:\mathbb{R}\to[0,\infty):s\in\mathbb{R}\}$$

defined by

$$g_s(x)=\begin{cases}\dfrac{1}{s^2}e^{sx}, & s\ne 0,\\[1mm]\dfrac{1}{2}x^2, & s=0.\end{cases}$$

We have $\frac{d^2}{dx^2}g_s(x)=e^{sx}>0$, which shows that $g_s$ is convex on $\mathbb{R}$ for every $s\in\mathbb{R}$, and $s\mapsto\frac{d^2}{dx^2}g_s(x)$ is exponentially convex by definition (see also [6]). In order to prove that the function $s\mapsto[y_0,y_1,y_2;g_s]$ is exponentially convex, it is enough to show that

$$\sum_{j,k=1}^{n}\alpha_j\alpha_k\left[y_0,y_1,y_2;g_{\frac{s_j+s_k}{2}}\right]=\left[y_0,y_1,y_2;\sum_{j,k=1}^{n}\alpha_j\alpha_k\,g_{\frac{s_j+s_k}{2}}\right]\ge 0$$
(28)

for all $n\in\mathbb{N}$ and $\alpha_j,s_j\in\mathbb{R}$, $j=1,\dots,n$. By Definition 7, (28) will hold if $\Upsilon(x):=\sum_{j,k=1}^{n}\alpha_j\alpha_k\,g_{\frac{s_j+s_k}{2}}(x)$ is convex. Since $s\mapsto\frac{d^2}{dx^2}g_s(x)=e^{sx}$ is exponentially convex, we have $\Upsilon''(x)=\sum_{j,k=1}^{n}\alpha_j\alpha_k\,e^{\frac{s_j+s_k}{2}x}\ge 0$ for all $n\in\mathbb{N}$ and $\alpha_j,s_j\in\mathbb{R}$, $j=1,\dots,n$, so $\Upsilon$ is convex and (28) holds. Now, as the function $s\mapsto[y_0,y_1,y_2;g_s]$ is exponentially convex, it is exponentially convex in the Jensen sense, and by using Corollary 3.8 we conclude that $s\mapsto\Phi_i(x,y;p,g,g_s)$ ($i=1,2$) are exponentially convex in the Jensen sense. Since these mappings are continuous (although the mapping $s\mapsto g_s$ is not continuous at $s=0$), $s\mapsto\Phi_i(x,y;p,g,g_s)$ ($i=1,2$) are exponentially convex.
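This exponential-convexity claim can be spot-checked numerically: the matrix of second-order divided differences of $g_{(s_j+s_k)/2}$ should be positive semi-definite, as in Proposition 3.3. The points and parameters in the sketch below are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

# Spot check for Example 4.1: the matrix of divided differences of g_s at
# half-sums of the parameters should be positive semi-definite (Proposition 3.3).
def g(s, x):
    return np.exp(s * x) / s ** 2 if s != 0 else x ** 2 / 2

def dd2(h, y0, y1, y2):
    # second-order divided difference (23)
    return ((h(y2) - h(y1)) / (y2 - y1) - (h(y1) - h(y0)) / (y1 - y0)) / (y2 - y0)

y0, y1, y2 = -0.4, 0.1, 0.9                        # illustrative points
ss = [-1.0, 0.5, 1.3, 2.2]                         # illustrative parameters s_j
M = np.array([[dd2(lambda x: g((si + sj) / 2, x), y0, y1, y2) for sj in ss]
              for si in ss])
print(np.linalg.eigvalsh(M).min() >= -1e-10)       # PSD, as expected
```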

For this family of functions, by taking $\Omega=\Omega_1$ in (26), $\Xi_{s,q;1}^{i}:=\mu_{s,q}(x,y,p,g,\Phi_i,\Omega_1)$ ($i=1,2$) are of the form

where

$$\hat X=\frac{\int_a^x p(u)g(u)\,du+g(x)\int_x^b p(u)\,du}{\int_a^b p(u)\,du},\qquad\hat Y=\frac{\int_a^y p(u)g(u)\,du+g(y)\int_y^b p(u)\,du}{\int_a^b p(u)\,du},$$
$$\tilde X=\frac{\int_x^b p(u)g(u)\,du+g(x)\int_a^x p(u)\,du}{\int_a^b p(u)\,du},\qquad\tilde Y=\frac{\int_y^b p(u)g(u)\,du+g(y)\int_a^y p(u)\,du}{\int_a^b p(u)\,du}.$$
(29)

By using (25), $\Xi_{s,q;1}^{i}$ ($i=1,2$) are monotonic in the parameters $s$ and $q$. By using Theorem 2.4, it follows that

$$M_{s,q}(x,y,p,g,\Phi_i,\Omega_1)=\log\mu_{s,q}(x,y,p,g,\Phi_i,\Omega_1),\quad i=1,2,$$

satisfy $\min_{x\in[a,b]}g(x)\le M_{s,q}(x,y,p,g,\Phi_i,\Omega_1)\le\max_{x\in[a,b]}g(x)$, showing that $M_{s,q}(x,y,p,g,\Phi_i,\Omega_1)$ ($i=1,2$) are means.

Example 4.2

Consider the family of functions

$$\Omega_2=\{f_s:(0,\infty)\to\mathbb{R}:s\in\mathbb{R}\}$$

defined by

$$f_s(x)=\begin{cases}\dfrac{x^s}{s(s-1)}, & s\ne 0,1,\\[1mm]-\ln x, & s=0,\\ x\ln x, & s=1.\end{cases}$$

Here, $\frac{d^2}{dx^2}f_s(x)=x^{s-2}=e^{(s-2)\ln x}>0$, which shows that $f_s$ is convex for $x>0$ and $s\mapsto\frac{d^2}{dx^2}f_s(x)$ is exponentially convex by definition (see also [6]). It is easy to prove that the function $s\mapsto[y_0,y_1,y_2;f_s]$ is exponentially convex. Arguing as in Example 4.1, we conclude that $s\mapsto\Phi_i(x,y;p,g,f_s)$ ($i=1,2$) are exponentially convex.

If $s\in\mathbb{R}\setminus\{0,1\}$ and $r,t\in\mathbb{R}\setminus\{0,1\}$ are such that $r<s<t$, then from (24) we have

$$\Phi_i(x,y;p,g,f_s)\le\left[\Phi_i(x,y;p,g,f_r)\right]^{\frac{t-s}{t-r}}\left[\Phi_i(x,y;p,g,f_t)\right]^{\frac{s-r}{t-r}}.$$
(30)

If r<t<s or s<r<t, then opposite inequalities hold in (30).

Particularly, for 0<s<1 and for i=1,2, we have

$$\frac{1}{s(s-1)}\left(\frac{\int_x^y p(u)g^s(u)\,du+g^s(y)\int_y^b p(u)\,du-g^s(x)\int_x^b p(u)\,du}{\int_a^b p(u)\,du}+\hat X^s-\hat Y^s\right)$$
$$\le\left(-\frac{\int_x^y p(u)\ln(g(u))\,du+\ln(g(y))\int_y^b p(u)\,du-\ln(g(x))\int_x^b p(u)\,du}{\int_a^b p(u)\,du}-\ln\hat X+\ln\hat Y\right)^{1-s}$$
$$\times\left(\frac{\int_x^y p(u)g(u)\ln(g(u))\,du+g(y)\ln(g(y))\int_y^b p(u)\,du-g(x)\ln(g(x))\int_x^b p(u)\,du}{\int_a^b p(u)\,du}+\hat X\ln\hat X-\hat Y\ln\hat Y\right)^{s}$$

and

$$\frac{1}{s(s-1)}\left(\frac{\int_x^y p(u)g^s(u)\,du+g^s(x)\int_a^x p(u)\,du-g^s(y)\int_a^y p(u)\,du}{\int_a^b p(u)\,du}+\tilde Y^s-\tilde X^s\right)$$
$$\le\left(-\frac{\int_x^y p(u)\ln(g(u))\,du+\ln(g(x))\int_a^x p(u)\,du-\ln(g(y))\int_a^y p(u)\,du}{\int_a^b p(u)\,du}-\ln\tilde Y+\ln\tilde X\right)^{1-s}$$
$$\times\left(\frac{\int_x^y p(u)g(u)\ln(g(u))\,du+g(x)\ln(g(x))\int_a^x p(u)\,du-g(y)\ln(g(y))\int_a^y p(u)\,du}{\int_a^b p(u)\,du}+\tilde Y\ln\tilde Y-\tilde X\ln\tilde X\right)^{s}$$

respectively, where $\hat X$, $\hat Y$, $\tilde X$ and $\tilde Y$ are the same as defined in (29).

By taking $\Omega=\Omega_2$ in (26), $\Xi_{s,q;2}^{i}:=\mu_{s,q}(x,y,p,g,\Phi_i,\Omega_2)$ ($i=1,2$), for $x,y\in[a,b]$ with $x,y>0$, are of the form

where $\hat X$, $\hat Y$, $\tilde X$ and $\tilde Y$ are the same as defined in (29).

If $\Phi_i$ ($i=1,2$) are positive, then Theorem 2.4 applied to $J=[\min_{x\in[a,b]}g(x),\max_{x\in[a,b]}g(x)]$, $f=f_s\in\Omega_2$ and $k=f_q\in\Omega_2$ yields that there exists $\xi_i\in J$ such that

$$\xi_i^{s-q}=\frac{\Phi_i(x,y;p,g,f_s)}{\Phi_i(x,y;p,g,f_q)},\quad i=1,2.$$

Since the functions $\xi_i\mapsto\xi_i^{s-q}$ ($i=1,2$) are invertible for $s\ne q$, we have

$$\min_{x\in[a,b]}g(x)\le\left(\frac{\Phi_i(x,y;p,g,f_s)}{\Phi_i(x,y;p,g,f_q)}\right)^{\frac{1}{s-q}}\le\max_{x\in[a,b]}g(x),\quad i=1,2,$$
(31)

which, together with the fact that $\Xi_{s,q;2}^{i}$ ($i=1,2$) are continuous, symmetric and monotonic (by (25)), shows that $\Xi_{s,q;2}^{i}$ are means.

Now, by the substitutions $x\to x^t$, $y\to y^t$, $s\to\frac{s}{t}$, $q\to\frac{q}{t}$ ($t\ne 0$, $s\ne q$), where $x,y\in[a,b]$, from (31) we have

$$\min\left\{\Big(\min_{x\in[a,b]}g(x)\Big)^{t},\Big(\max_{x\in[a,b]}g(x)\Big)^{t}\right\}\le\left(\frac{\Phi_i(x^t,y^t;p,g,f_{s/t})}{\Phi_i(x^t,y^t;p,g,f_{q/t})}\right)^{\frac{t}{s-q}}\le\max\left\{\Big(\min_{x\in[a,b]}g(x)\Big)^{t},\Big(\max_{x\in[a,b]}g(x)\Big)^{t}\right\}.$$

We define a new mean (for i=1,2) as follows:

$$\mu_{s,q;t}(x,y,p,g,\Phi_i,\Omega_2)=\begin{cases}\left(\mu_{\frac{s}{t},\frac{q}{t}}(x^t,y^t,p,g,\Phi_i,\Omega_2)\right)^{\frac{1}{t}}, & t\ne 0,\\[1mm]\mu_{s,q}(\ln x,\ln y,p,g,\Phi_i,\Omega_1), & t=0.\end{cases}$$

These new means are also monotonic. More precisely, for $s,q,u,v\in\mathbb{R}$ such that $s\le u$, $q\le v$, $s\ne q$, $u\ne v$, we have

$$\mu_{s,q;t}(x,y,p,g,\Phi_i,\Omega_2)\le\mu_{u,v;t}(x,y,p,g,\Phi_i,\Omega_2),\quad i=1,2.$$

We know that

$$\mu_{\frac{s}{t},\frac{q}{t}}(x^t,y^t,p,g,\Phi_i,\Omega_2)\le\mu_{\frac{u}{t},\frac{v}{t}}(x^t,y^t,p,g,\Phi_i,\Omega_2),\quad i=1,2,$$

equivalently

$$\left(\frac{\Phi_i(x^t,y^t;p,g,f_{s/t})}{\Phi_i(x^t,y^t;p,g,f_{q/t})}\right)^{\frac{t}{s-q}}\le\left(\frac{\Phi_i(x^t,y^t;p,g,f_{u/t})}{\Phi_i(x^t,y^t;p,g,f_{v/t})}\right)^{\frac{t}{u-v}}$$

for $s,q,u,v\in I$ such that $s/t\le u/t$, $q/t\le v/t$ and $t\ne 0$; since $\mu_{s,q}(x,y,p,g,\Phi_i,\Omega_2)$ ($i=1,2$) are monotonic in both parameters, the claim follows. For $t=0$, we obtain the required result by taking the limit $t\to 0$.

Example 4.3

Consider the family of functions

$$\Omega_3=\{h_s:(0,\infty)\to(0,\infty):s\in(0,\infty)\}$$

defined by

$$h_s(x)=\begin{cases}\dfrac{s^{-x}}{\ln^2 s}, & s\ne 1,\\[1mm]\dfrac{x^2}{2}, & s=1.\end{cases}$$

We have $\frac{d^2}{dx^2}h_s(x)=s^{-x}>0$, which shows that $h_s$ is convex for all $s>0$. Since $s\mapsto\frac{d^2}{dx^2}h_s(x)=s^{-x}$ is the Laplace transform of a non-negative function (see [6, 7]), it is exponentially convex. It is easy to see that the function $s\mapsto[y_0,y_1,y_2;h_s]$ is also exponentially convex. Arguing as in Example 4.1, we conclude that $s\mapsto\Phi_i(x,y;p,g,h_s)$ ($i=1,2$) are exponentially convex.
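The Laplace-transform claim behind this family (with the second derivative rendered as $s^{-x}$ above) can be checked directly: for fixed $x>0$, $s^{-x}=\frac{1}{\Gamma(x)}\int_0^\infty e^{-st}t^{x-1}\,dt$, i.e. $s\mapsto s^{-x}$ is the Laplace transform of the non-negative function $t\mapsto t^{x-1}/\Gamma(x)$. The sketch below is purely an illustrative numerical verification of this classical identity and is not part of the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Classical identity behind the exponential-convexity argument for Omega_3:
# for fixed x > 0,  s**(-x) = (1 / Gamma(x)) * ∫_0^∞ exp(-s*t) * t**(x-1) dt,
# i.e. s -> s**(-x) is the Laplace transform of t -> t**(x-1) / Gamma(x) >= 0.
x = 1.7
for s in (0.5, 1.3, 4.0):
    val, _ = quad(lambda t: np.exp(-s * t) * t ** (x - 1), 0, np.inf)
    print(s ** (-x), val / gamma(x))
```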

In this case, by taking $\Omega=\Omega_3$ in (26), $\Xi_{s,q;3}^{i}:=\mu_{s,q}(x,y,p,g,\Phi_i,\Omega_3)$ ($i=1,2$), for $x,y\in[a,b]$ with $x,y>0$, are of the form

where $\hat X$, $\hat Y$, $\tilde X$ and $\tilde Y$ are the same as in (29). By using (25), $\mu_{s,q}(x,y,p,g,\Phi_i,\Omega_3)$ ($i=1,2$) are monotonic in the parameters $s$ and $q$. By using Theorem 2.4, it can be seen that

$$M_{s,q}(x,y,p,g,\Phi_i,\Omega_3)=-L(s,q)\log\mu_{s,q}(x,y,p,g,\Phi_i,\Omega_3),\quad i=1,2,$$

satisfy $\min_{x\in[a,b]}g(x)\le M_{s,q}(x,y,p,g,\Phi_i,\Omega_3)\le\max_{x\in[a,b]}g(x)$ and so $M_{s,q}(x,y,p,g,\Phi_i,\Omega_3)$ ($i=1,2$) are means, where $L(s,q)=\frac{s-q}{\log s-\log q}$, $s\ne q$, $L(s,s)=s$, is known as the logarithmic mean.

Example 4.4

Consider the family of functions

$$\Omega_4=\{k_s:(0,\infty)\to(0,\infty):s\in(0,\infty)\}$$

defined by

$$k_s(x)=\frac{e^{-x\sqrt{s}}}{s}.$$

Here, $\frac{d^2}{dx^2}k_s(x)=e^{-x\sqrt{s}}>0$, which shows that $k_s$ is convex for all $s>0$. Since $s\mapsto\frac{d^2}{dx^2}k_s(x)=e^{-x\sqrt{s}}$ is the Laplace transform of a non-negative function (see [6, 7]), it is exponentially convex. It is easy to prove that the function $s\mapsto[y_0,y_1,y_2;k_s]$ is also exponentially convex. Arguing as in Example 4.1, we conclude that $s\mapsto\Phi_i(x,y;p,g,k_s)$ ($i=1,2$) are exponentially convex.

In this case, by taking $\Omega=\Omega_4$ in (26), $\Xi_{s,q;4}^{i}:=\mu_{s,q}(x,y,p,g,\Phi_i,\Omega_4)$ ($i=1,2$), for $x,y\in[a,b]$ with $x,y>0$, are of the form

where $\hat X$, $\hat Y$, $\tilde X$ and $\tilde Y$ are the same as in (29).

Remark 4.5

  1. (i)

    If $\Phi_i$ ($i=1,2$) are positive, then applying Theorem 2.4 to $J=[\min_{x\in[a,b]}g(x),\max_{x\in[a,b]}g(x)]$ in Examples 4.1, 4.3 and 4.4, we have

    $$M_{s,q}(x,y,p,g,\Phi_i,\Omega_1)=\log\mu_{s,q}(x,y,p,g,\Phi_i,\Omega_1),$$
(32)
$$M_{s,q}(x,y,p,g,\Phi_i,\Omega_3)=-L(s,q)\log\mu_{s,q}(x,y,p,g,\Phi_i,\Omega_3)$$
(33)

and

$$M_{s,q}(x,y,p,g,\Phi_i,\Omega_4)=-(\sqrt{s}+\sqrt{q})\log\mu_{s,q}(x,y,p,g,\Phi_i,\Omega_4)$$
(34)

($i=1,2$), respectively, where $L(s,q)=\frac{s-q}{\log s-\log q}$, $s\ne q$, $L(s,s)=s$, is known as the logarithmic mean. By the same argument as in Example 4.2, (32), (33) and (34) satisfy $\min_{x\in[a,b]}g(x)\le M_{s,q}(x,y,p,g,\Phi_i,\Omega)\le\max_{x\in[a,b]}g(x)$ for $\Omega=\Omega_1,\Omega_3,\Omega_4$, showing that $M_{s,q}(x,y,p,g,\Phi_i,\Omega)$ ($i=1,2$) are means for $\Omega=\Omega_1,\Omega_3,\Omega_4$. Also, from (25) it is clear that $\mu_{s,q}(x,y,p,g,\Phi_i,\Omega)$ ($i=1,2$) for $\Omega=\Omega_1,\Omega_3$ and $\Omega_4$ are monotonic in the parameters $s$ and $q$.

  1. (ii)

    If we make the substitutions $p(u)=1$ and $g(u)=u$ in our means $\mu_{s,q}(x,y,p,g,\Phi_i,\Omega_2)$ and $\mu_{s,q;t}(x,y,p,g,\Phi_i,\Omega_2)$ ($i=1,2$), then the results for the means $\mu_{s,q}(x,y,\Phi_i,\Omega_2)$ and $\mu_{s,q;t}(x,y,\Phi_i,\Omega_2)$ ($i=1,2$) given in [8] are recovered. In this way, our results generalize the above-mentioned means.