Abstract
In this paper, we give refinements of the integral form of Jensen’s inequality and the Lah–Ribarič inequality. Using these results, we obtain a refinement of the Hölder inequality and a refinement of some inequalities for integral power means and quasiarithmetic means. We also give applications in information theory, namely, we give some interesting estimates for the integral Csiszár divergence and its important particular cases.
1 Introduction
Let I be an interval in \(\mathbb{R}\), and let \(f \colon I \to \mathbb{R}\) be a convex function. If \(\boldsymbol{x}= ( x_{1},\ldots,x_{n} ) \) is any n-tuple in \(I^{n}\) and \(\boldsymbol{p}= ( p_{1},\ldots,p_{n} ) \) is a nonnegative n-tuple such that \(P_{n}=\sum_{i=1}^{n}p_{i}>0\), then the well-known Jensen inequality

$$ f \Biggl( \frac{1}{P_{n}} \sum_{i=1}^{n} p_{i} x_{i} \Biggr) \leq \frac{1}{P_{n}} \sum_{i=1}^{n} p_{i} f ( x_{i} ) $$

(1.1) holds (see [6] or, e.g., [16, p. 43]). If f is strictly convex, then (1.1) is strict unless \(x_{i}=c\) for all \(i\in \{ j:p_{j}>0 \} \).
Jensen’s inequality is probably the most important inequality: it has many applications in mathematics and statistics, and some other well-known inequalities are its particular cases (such as Cauchy’s inequality, Hölder’s inequality, A–G–H inequality, etc.).
One of many generalizations of the Jensen inequality is its integral form (see [1, 7], or, e.g., [8]).
Theorem 1.1
(Integral form of Jensen’s inequality)
Let \(g \colon [a, b] \to \mathbb{R}\) be an integrable function, and let \(p \colon [a, b] \to \mathbb{R}\) be a nonnegative function. If f is a convex function defined on an interval I that includes the image of g, then

$$ f ( \bar{g} ) \leq \frac{1}{P} \int _{a}^{b} p(t) f \bigl( g(t) \bigr) \,dt, $$

(1.2) where

$$ P = \int _{a}^{b} p(t) \,dt \quad\text{and}\quad \bar{g} = \frac{1}{P} \int _{a}^{b} p(t) g(t) \,dt . $$
Our first main result is a refinement of inequality (1.2).
Strongly related to Jensen’s inequality is the Lah–Ribarič inequality (see [11] or, e.g., [13, p. 9]). Its integral form is given in the following theorem.
Theorem 1.2
(Integral form of the Lah–Ribarič inequality)
Let \(g \colon [a, b] \to \mathbb{R}\) be an integrable function such that \(m \leq g(t) \leq M\) for \(t \in [a, b]\), \(m < M\), and let \(p \colon [a, b] \to \mathbb{R}\) be a nonnegative function. If f is a convex function defined on an interval I such that \([m, M] \subseteq I\), then

$$ \frac{1}{P} \int _{a}^{b} p(t) f \bigl( g(t) \bigr) \,dt \leq \frac{M - \bar{g}}{M - m} f(m) + \frac{\bar{g} - m}{M - m} f(M), $$

(1.3) where P is given as before, and

$$ \bar{g} = \frac{1}{P} \int _{a}^{b} p(t) g(t) \,dt . $$
Our second main result is a refinement of inequality (1.3).
Another famous inequality established for the class of convex functions is the Hermite–Hadamard inequality. This double inequality, which was first discovered by Hermite in 1881, is stated as follows (see, e.g., [16, p. 137]). Let f be a convex function on \([ a,b ] \subset \mathbb{R}\), where \(a< b\). Then

$$ f \biggl( \frac{a+b}{2} \biggr) \leq \frac{1}{b-a} \int _{a}^{b} f(x) \,dx \leq \frac{f(a)+f(b)}{2} . $$

(1.4)
This result was later incorrectly attributed to Hadamard, who apparently was not aware of Hermite’s discovery, and today both names are used when referring to (1.4).
This result can be improved by applying (1.4) to each of the subintervals \([ a, \frac{a+b}{2} ]\) and \([ \frac{a+b}{2}, b ]\), which gives (see [14, p. 37])

$$ f \biggl( \frac{a+b}{2} \biggr) \leq l \leq \frac{1}{b-a} \int _{a}^{b} f(x) \,dx \leq L \leq \frac{f(a)+f(b)}{2}, $$

(1.5) where \(l=\frac{1}{2} ( f ( \frac{3a+b}{4} ) + f ( \frac{a + 3b}{4} ) )\) and \(L = \frac{1}{2} ( f ( \frac{a+b}{2} ) + \frac{f(a)+f(b)}{2} )\).
The following improvement of (1.5) is given in [2].
Theorem 1.3
Let \(f \colon I \to \mathbb{R}\) be a convex function on I. Then for all \(\lambda \in [0, 1]\) and \(a, b \in I\), we have
where
and
Inequality (1.6) for \(\lambda = \frac{1}{2}\) gives inequality (1.5). Further improvement was given in [3].
Theorem 1.4
Let \(I \subseteq \mathbb{R}\) be an interval, and let \(f \colon I \to \mathbb{R}\) be a convex function. Let \(\varPhi \colon [a, b] \to I\) be such that \(f \circ \varPhi \) is also convex, where \(a < b\). Then for \(n \in \mathbb{N}\), \(\lambda _{0}=0\), \(\lambda _{n+1}=1\), and arbitrary \(0 \leq \lambda _{1} \leq \cdots \leq \lambda _{n} \leq 1\), we have
where
and
Applying the previous theorem to \(\varPhi (x)=x\) and \(n=1\), we get inequality (1.6).
We also give a refinement of the Hermite–Hadamard inequality. In the last section, we give some interesting estimates for the integral Csiszár divergence and for its important particular cases.
2 New refinements
Our first result is a refinement of the integral form of the Jensen inequality (1.2).
Theorem 2.1
Let g be an integrable function defined on an interval \([a, b]\), and let \(a_{0}, a_{1}, \dots, a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\). If f is a convex function defined on an interval I that includes the image of g, then

$$ f ( \bar{g} ) \leq \frac{1}{P} \sum_{i=1}^{n} P_{i} f ( \bar{g}_{i} ) \leq \frac{1}{P} \int _{a}^{b} p(t) f \bigl( g(t) \bigr) \,dt, $$

(2.1) where \(p \colon [a, b] \to \mathbb{R}\) is a nonnegative function, and

$$ P = \int _{a}^{b} p(t) \,dt, \qquad P_{i} = \int _{a_{i-1}}^{a_{i}} p(t) \,dt, \qquad \bar{g} = \frac{1}{P} \int _{a}^{b} p(t) g(t) \,dt, \qquad \bar{g}_{i} = \frac{1}{P_{i}} \int _{a_{i-1}}^{a_{i}} p(t) g(t) \,dt . $$
Proof
Let \(a_{0}, a_{1}, \dots, a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\). Since \(\bar{g} = \frac{1}{P} \sum_{i=1}^{n} P_{i} \bar{g}_{i}\), using Jensen’s inequality, we have

$$ f ( \bar{g} ) = f \Biggl( \frac{1}{P} \sum_{i=1}^{n} P_{i} \bar{g}_{i} \Biggr) \leq \frac{1}{P} \sum_{i=1}^{n} P_{i} f ( \bar{g}_{i} ), $$
which is the left-hand side of (2.1).
Now we will use inequality (1.2) on each subinterval \([a_{i-1}, a_{i}]\):

$$ \frac{1}{P} \sum_{i=1}^{n} P_{i} f ( \bar{g}_{i} ) \leq \frac{1}{P} \sum_{i=1}^{n} \int _{a_{i-1}}^{a_{i}} p(t) f \bigl( g(t) \bigr) \,dt = \frac{1}{P} \int _{a}^{b} p(t) f \bigl( g(t) \bigr) \,dt, $$
which is the right-hand side of (2.1). □
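The chain in Theorem 2.1 can be checked numerically. The following sketch (our addition, not part of the paper) uses a simple midpoint-rule quadrature; the functions f, g, p and the partition are arbitrary test choices:

```python
import math

def integrate(h, a, b, n=4000):
    # midpoint-rule quadrature; accurate enough for a sanity check
    w = (b - a) / n
    return w * sum(h(a + (k + 0.5) * w) for k in range(n))

f = lambda x: math.exp(x)      # convex on R
g = lambda t: math.sin(t)      # integrable on [a, b]
p = lambda t: 1.0 + t          # nonnegative weight

a, b = 0.0, 2.0
knots = [0.0, 0.5, 1.2, 2.0]   # a = a_0 < a_1 < a_2 < a_3 = b

P = integrate(p, a, b)
g_bar = integrate(lambda t: p(t) * g(t), a, b) / P

# middle term: (1/P) * sum_i P_i f(gbar_i)
middle = 0.0
for lo, hi in zip(knots, knots[1:]):
    P_i = integrate(p, lo, hi)
    g_bar_i = integrate(lambda t: p(t) * g(t), lo, hi) / P_i
    middle += P_i * f(g_bar_i)
middle /= P

right = integrate(lambda t: p(t) * f(g(t)), a, b) / P

print(f(g_bar), middle, right)   # an increasing chain
```

Since f is strictly convex and g is nonconstant on every subinterval, all three printed values are strictly increasing here.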
The next result is a refinement of the integral form of the Lah–Ribarič inequality (1.3). We need the following lemma.
Lemma 2.2
If f is a convex function on an interval I, then for \(a, b, u, c, d \in I\) such that \(a \leq b \leq u \leq c \leq d\), \(b < c\), we have

$$ \frac{c - u}{c - b} f(b) + \frac{u - b}{c - b} f(c) \leq \frac{d - u}{d - a} f(a) + \frac{u - a}{d - a} f(d) . $$
Proof
We can write
and since f is convex,
Now we have
□
Theorem 2.3
Let g be an integrable function defined on an interval \([a, b]\), let \(a_{0}, a_{1}, \dots, a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\), and let \(m_{i} \leq g(t) \leq M_{i}\) for \(t \in [a_{i-1}, a_{i}]\), \(m_{i} < M_{i}\), \(i=1, \dots, n\), \(m=\min_{i \in \{ 1, \dots, n\} } m_{i}\), and \(M=\max_{i \in \{ 1, \dots, n \} } M_{i}\). If f is a convex function defined on an interval I that includes the image of g, then

$$ \begin{aligned} \frac{1}{P} \int _{a}^{b} p(t) f \bigl( g(t) \bigr) \,dt &\leq \frac{1}{P} \sum_{i=1}^{n} P_{i} \biggl( \frac{M_{i} - \bar{g}_{i}}{M_{i} - m_{i}} f ( m_{i} ) + \frac{\bar{g}_{i} - m_{i}}{M_{i} - m_{i}} f ( M_{i} ) \biggr) \\ &\leq \frac{M - \bar{g}}{M - m} f ( m ) + \frac{\bar{g} - m}{M - m} f ( M ), \end{aligned} $$

(2.2) where \(p \colon [a, b] \to \mathbb{R}\) is a nonnegative function,

$$ P = \int _{a}^{b} p(t) \,dt, \qquad P_{i} = \int _{a_{i-1}}^{a_{i}} p(t) \,dt, $$

and \(\bar{g}\), \(\bar{g}_{i}\) are defined as

$$ \bar{g} = \frac{1}{P} \int _{a}^{b} p(t) g(t) \,dt, \qquad \bar{g}_{i} = \frac{1}{P_{i}} \int _{a_{i-1}}^{a_{i}} p(t) g(t) \,dt . $$
Proof
We will use (1.3) on each subinterval \([a_{i-1}, a_{i}]\):
which is the left-hand side of inequality (2.2).
Using \(m \leq m_{i} \leq \bar{g_{i}} \leq M_{i} \leq M, m < M, m_{i} < M_{i}\), and Lemma 2.2, we get
which is the right-hand side of (2.2). □
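A numerical sketch of Theorem 2.3 (our addition, with arbitrary test choices): for \(g(t) = t\) the natural subinterval bounds are \(m_{i} = a_{i-1}\), \(M_{i} = a_{i}\), with \(m = a\), \(M = b\), and the chain can be checked by quadrature.

```python
import math

def integrate(h, a, b, n=4000):
    # midpoint-rule quadrature
    w = (b - a) / n
    return w * sum(h(a + (k + 0.5) * w) for k in range(n))

f = lambda x: math.exp(x)   # convex
p = lambda t: 1.0 + t       # nonnegative weight
a, b = 0.0, 2.0
knots = [0.0, 0.5, 1.2, 2.0]

P = integrate(p, a, b)
left = integrate(lambda t: p(t) * f(t), a, b) / P          # g(t) = t

# per-subinterval Lah-Ribaric bounds, averaged with weights P_i
middle = 0.0
for m_i, M_i in zip(knots, knots[1:]):                     # m_i = a_{i-1}, M_i = a_i
    P_i = integrate(p, m_i, M_i)
    gbar_i = integrate(lambda t: p(t) * t, m_i, M_i) / P_i
    middle += P_i * ((M_i - gbar_i) * f(m_i) + (gbar_i - m_i) * f(M_i)) / (M_i - m_i)
middle /= P

gbar = integrate(lambda t: p(t) * t, a, b) / P
right = ((b - gbar) * f(a) + (gbar - a) * f(b)) / (b - a)  # m = a, M = b

print(left, middle, right)   # an increasing chain
```

The step from `middle` to `right` is exactly Lemma 2.2 applied on each subinterval, since the global chord of a convex f over \([a, b]\) dominates the chord over any \([a_{i-1}, a_{i}]\).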
Remark 2.4
If we set \(p(t)=1\) in Theorem 2.1, then we get (1.7) in the form
In particular, for \(g(t) = t\), this gives
which is a refinement of the left-hand side of (1.6).
Analogously, from Theorem 2.3 we have (for \(p(t)=1\))
and for \(g(t)=t, m_{i}=a_{i-1}, M_{i}=a_{i}\), we get
which is a refinement of the right-hand side of (1.6).
Using our main result, we give a refinement of the Hölder inequality (for more about the Hölder inequality, see [16]).
Corollary 2.5
Let \(p, q \in \mathbb{R}\) be such that \(\frac{1}{p} + \frac{1}{q} = 1\), and let \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\). Let \(w, g_{1}\), and \(g_{2}\) be nonnegative functions defined on \([a, b]\) such that \(w g_{1}^{p}, w g_{2}^{q}, w g_{1} g_{2} \in L^{1}([a, b])\).
- (i)

  If \(p > 1\), then

  $$\begin{aligned} & \int _{a}^{b} w(t) g_{1}(t) g_{2} (t) \,dt \\ & \quad\leq \biggl( \int _{a}^{b} w(t) g_{2}^{q}(t) \,dt \biggr)^{ \frac{1}{q}} \\ &\qquad{}\times\Biggl( \sum_{i=1}^{n} \biggl( \int _{a_{i-1}}^{a_{i}} w(t) g_{2}^{q}(t) \,dt \biggr)^{1-p} \biggl( \int _{a_{i-1}}^{a_{i}} w(t) g_{1}(t) g_{2} (t) \,dt \biggr)^{p} \Biggr)^{\frac{1}{p}} \\ &\quad \leq \biggl( \int _{a}^{b} w(t) g_{1}^{p} (t) \,dt \biggr)^{ \frac{1}{p}} \biggl( \int _{a}^{b} w(t) g_{2}^{q}(t) \,dt \biggr)^{\frac{1}{q}}. \end{aligned}$$
- (ii)

  If \(p < 1\), \(p \neq 0\), then

  $$\begin{aligned} & \biggl( \int _{a}^{b} w(t) g_{1}^{p}(t) \,dt \biggr)^{\frac{1}{p}} \biggl( \int _{a}^{b} w(t) g_{2}^{q}(t) \,dt \biggr)^{\frac{1}{q}} \\ & \quad\leq \sum_{i=1}^{n} \biggl( \int _{a_{i-1}}^{a_{i}} w(t) g_{1}^{p} (t) \,dt \biggr)^{\frac{1}{p}} \biggl( \int _{a_{i-1}}^{a_{i}} w(t) g_{2}^{q}(t) \,dt \biggr)^{\frac{1}{q}} \\ & \quad\leq \int _{a}^{b} w(t) g_{1} (t) g_{2}(t) \,dt. \end{aligned}$$
Proof
For the case \(p > 1\), we use Theorem 2.1 with \(p (t) = w(t) g_{2}^{q}(t)\), \(g(t)=g_{1}(t) g_{2}^{-\frac{q}{p}}(t)\), and the function \(f (x) = x^{p}\), which is convex for \(x > 0, p > 1\). From (2.1) we get
Using \(q - \frac{q}{p} = 1\), multiplying by \(\int _{a}^{b} w(t) g_{2}^{q}(t) \,dt\), and taking the power \(\frac{1}{p}\), we have
Now multiplying by \(( \int _{a}^{b} w(t) g_{2}^{q}(t) \,dt )^{\frac{1}{q}}\), we get
For \(0 < p < 1\), we use Theorem 2.1 with \(p (t) = w(t) g_{2}^{q}(t)\), \(g(t)=g_{1}^{p}(t) g_{2}^{-q}(t)\), and the function \(f (x) = x^{\frac{1}{p}}\), which is convex for \(x > 0, 0 < p < 1\). From (2.1) we get
Now using \(q - \frac{q}{p} = 1\) and multiplying by \(\int _{a}^{b} w(t) g_{2}^{q}(t) \,dt\), we have
If \(p < 0 \), then \(0 < q < 1\), and we have the same result by symmetry. □
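The refined Hölder chain in part (i) can also be checked numerically. The sketch below (our addition; the weight, functions, partition, and exponents are arbitrary test choices) evaluates all three expressions by quadrature for conjugate exponents \(P = Q = 2\):

```python
import math

def integrate(h, a, b, n=4000):
    # midpoint-rule quadrature
    w = (b - a) / n
    return w * sum(h(a + (k + 0.5) * w) for k in range(n))

P, Q = 2.0, 2.0                 # conjugate exponents, 1/P + 1/Q = 1
w  = lambda t: 1.0              # weight
g1 = lambda t: t + 1.0
g2 = lambda t: math.exp(-t)
a, b = 0.0, 2.0
knots = [0.0, 0.5, 1.2, 2.0]

left  = integrate(lambda t: w(t) * g1(t) * g2(t), a, b)
Iq    = integrate(lambda t: w(t) * g2(t) ** Q, a, b)
right = integrate(lambda t: w(t) * g1(t) ** P, a, b) ** (1 / P) * Iq ** (1 / Q)

# middle term of the refinement from part (i)
s = 0.0
for lo, hi in zip(knots, knots[1:]):
    s += (integrate(lambda t: w(t) * g2(t) ** Q, lo, hi) ** (1 - P)
          * integrate(lambda t: w(t) * g1(t) * g2(t), lo, hi) ** P)
middle = Iq ** (1 / Q) * s ** (1 / P)

print(left, middle, right)   # an increasing chain
```

The inequalities are strict here because \(g_{1} g_{2}^{-q/p} = (t+1)e^{t}\) is nonconstant on every subinterval.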
Let p and g be positive integrable functions defined on \([a, b]\). Then the integral power means of order \(r \in \mathbb{R}\) are defined as follows:

$$ M_{r}(g; p) = \textstyle\begin{cases} ( \frac{1}{P} \int _{a}^{b} p(t) g^{r}(t) \,dt )^{\frac{1}{r}}, & r \neq 0, \\ \exp ( \frac{1}{P} \int _{a}^{b} p(t) \log g(t) \,dt ), & r = 0, \end{cases} $$

where \(P = \int _{a}^{b} p(t) \,dt\).
Let \(\mathbf{x}= (x_{1},\ldots,x_{n} )\) and \(\mathbf{w}= (w_{1},\ldots,w_{n} )\) be positive n-tuples. The weighted power mean (of the n-tuple x with weight w) of order \(r\in \mathbb{R}\) is defined as

$$ M_{r} ( \mathbf{x}; \mathbf{w} ) = \textstyle\begin{cases} ( \frac{1}{W_{n}} \sum_{i=1}^{n} w_{i} x_{i}^{r} )^{\frac{1}{r}}, & r \neq 0, \\ ( \prod_{i=1}^{n} x_{i}^{w_{i}} )^{\frac{1}{W_{n}}}, & r = 0, \end{cases} \quad\text{where } W_{n} = \sum_{i=1}^{n} w_{i} . $$
In this paper, it is more suitable to use the notation \(M_{r} (x_{i};w_{i};\overline{1,n} )\).
Using our main result, we obtain the following inequalities for integral power means.
Corollary 2.6
Let p and g be positive integrable functions defined on \([a, b]\), and let \(a_{0}, a_{1}, \dots, a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\). Let \(s, t \in \mathbb{R}\) be such that \(s \leq t\). Then
Proof
We use Theorem 2.1 with \(f(x) = x^{\frac{t}{s}}\) for \(x > 0, s, t \in \mathbb{R}, s, t \neq 0, s \leq t\) (convex on \(\langle 0, + \infty \rangle \)). From (2.1) we get
Substituting g with \(g^{s}\) and taking the power \(\frac{1}{t}\), we get the result.
Similarly, we use Theorem 2.1 with \(f(x) = x^{\frac{s}{t}}\) for \(x > 0, s, t \in \mathbb{R}, s, t \neq 0, s \leq t\) (concave on \(\langle 0, + \infty \rangle \)). From (2.1) we get
Substituting g with \(g^{t}\) and taking the power \(\frac{1}{s}\), we get the result.
The cases \(t = 0\) and \(s = 0\) follow from inequalities (2.3) and (2.4) by simple limiting process. □
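In particular, for \(0 < s \leq t\), applying (2.1) with \(f(x) = x^{t/s}\) sandwiches the mixed mean \(( \frac{1}{P} \sum_{i} P_{i} M_{s}^{t}(g; p; [a_{i-1}, a_{i}]) )^{1/t}\) between \(M_{s}\) and \(M_{t}\). A numerical sketch (our addition; the functions, weight, partition, and the choice \(s = 1\), \(t = 3\) are arbitrary):

```python
import math

def integrate(h, a, b, n=4000):
    # midpoint-rule quadrature
    w = (b - a) / n
    return w * sum(h(a + (k + 0.5) * w) for k in range(n))

def power_mean(r, g, p, a, b):
    # integral power mean M_r(g; p) over [a, b], r != 0
    P = integrate(p, a, b)
    return (integrate(lambda t: p(t) * g(t) ** r, a, b) / P) ** (1.0 / r)

g = lambda t: 2.0 + math.sin(t)   # positive
p = lambda t: 1.0 + t             # positive weight
a, b = 0.0, 2.0
knots = [0.0, 0.5, 1.2, 2.0]
s, t_ = 1.0, 3.0                  # orders with s <= t

P = integrate(p, a, b)
mixed = (sum(integrate(p, lo, hi) * power_mean(s, g, p, lo, hi) ** t_
             for lo, hi in zip(knots, knots[1:])) / P) ** (1.0 / t_)

print(power_mean(s, g, p, a, b), mixed, power_mean(t_, g, p, a, b))
```

Since g is nonconstant on every subinterval, the printed chain is strictly increasing.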
Means of the type
can be regarded as mixed means.
Let p be a positive integrable function defined on \([a, b]\), and let g be any integrable function defined on \([a, b]\). Then for a strictly monotone continuous function h whose domain contains the image of g, the quasiarithmetic mean is defined as follows:

$$ M_{h}(g; p) = h^{-1} \biggl( \frac{1}{P} \int _{a}^{b} p(t) h \bigl( g(t) \bigr) \,dt \biggr), \quad\text{where } P = \int _{a}^{b} p(t) \,dt . $$
Using our main result, we obtain the following inequalities for quasiarithmetic means.
Corollary 2.7
Let p be a positive integrable function defined on \([a, b]\), let g be an integrable function defined on \([a, b]\), and let \(a_{0}, a_{1}, \dots, a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\). Also, assume that h is a strictly monotone continuous function whose domain contains the image of g. If f is a function such that \(f \circ h^{-1}\) is convex, then

$$ f \bigl( M_{h} \bigl( g; p; [a, b] \bigr) \bigr) \leq \frac{1}{P} \sum_{i=1}^{n} P_{i} f \bigl( M_{h} \bigl( g; p; [a_{i-1}, a_{i}] \bigr) \bigr) \leq \frac{1}{P} \int _{a}^{b} p(t) f \bigl( g(t) \bigr) \,dt, $$

where \(M_{h} ( g; p; [c, d] )\) denotes the quasiarithmetic mean taken over \([c, d]\), \(P = \int _{a}^{b} p(t) \,dt\), and \(P_{i} = \int _{a_{i-1}}^{a_{i}} p(t) \,dt\).
Proof
We use Theorem 2.1 with \(f \rightarrow f \circ h^{-1}\) and \(g \rightarrow h \circ g\). □
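As an illustration (our choice, not from the paper), take \(h(x) = \log x\) and \(f(x) = x\), so that \(f \circ h^{-1} = \exp \) is convex: the corollary then says that the weighted geometric mean over \([a, b]\) is at most the \(P_{i}\)-weighted average of the subinterval geometric means, which in turn is at most the weighted arithmetic mean. Numerically:

```python
import math

def integrate(h, a, b, n=4000):
    # midpoint-rule quadrature
    w = (b - a) / n
    return w * sum(h(a + (k + 0.5) * w) for k in range(n))

def geo_mean(g, p, a, b):
    # quasiarithmetic mean with h = log: the weighted geometric mean
    P = integrate(p, a, b)
    return math.exp(integrate(lambda t: p(t) * math.log(g(t)), a, b) / P)

g = lambda t: 1.0 + t * t   # positive
p = lambda t: 1.0 + t       # positive weight
a, b = 0.0, 2.0
knots = [0.0, 0.5, 1.2, 2.0]

P = integrate(p, a, b)
middle = sum(integrate(p, lo, hi) * geo_mean(g, p, lo, hi)
             for lo, hi in zip(knots, knots[1:])) / P
arith = integrate(lambda t: p(t) * g(t), a, b) / P

print(geo_mean(g, p, a, b), middle, arith)   # an increasing chain
```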
3 Applications in information theory
In this section, we give some interesting estimates for the integral Csiszár divergence and for its important particular cases (see, e.g., [4, 5, 9, 10, 12, 15]).
Definition 3.1
(Csiszár divergence)
Let \(f \colon I \to \mathbb{R}\) be a function defined on some positive interval I, and let \(p, q \colon [a, b] \to \mathbb{R}^{+}\) be two probability density functions such that \(\frac{p(t)}{q(t)} \in I\) for \(t \in [a, b]\). The Csiszár divergence is defined as

$$ D_{f}(p, q) = \int _{a}^{b} q(t) f \biggl( \frac{p(t)}{q(t)} \biggr) \,dt . $$
Theorem 3.2
Let \(f \colon I \to \mathbb{R}\) be a convex function defined on a positive interval I, let \(p, q \colon [a, b] \to \mathbb{R}^{+}\) be probability density functions such that \(\frac{p(t)}{q(t)} \in I\) for \(t \in [a, b]\), and let \(a_{0}, a_{1}, \dots, a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\). Then

$$ f(1) \leq \sum_{i=1}^{n} \biggl( \int _{a_{i-1}}^{a_{i}} q(t) \,dt \biggr) f \biggl( \frac{\int _{a_{i-1}}^{a_{i}} p(t) \,dt}{\int _{a_{i-1}}^{a_{i}} q(t) \,dt} \biggr) \leq D_{f}(p, q) . $$
Proof
Using Theorem 2.1 with \(p \to q\) and \(g \to \frac{p}{q}\), we obtain the result.
The condition \(\frac{p(t)}{q(t)} \in I\) for \(t \in [a, b]\) obviously implies that \(1 \in I\) and \(\frac{\int _{a_{i-1}}^{a_{i}} p(t) \,dt}{\int _{a_{i-1}}^{a_{i}} q(t) \,dt} \in I\) for \(i = 1, \dots, n\). □
Theorem 3.3
Let \(f \colon I \to \mathbb{R}\) be a convex function defined on a positive interval I, let \(p, q \colon [a, b] \to \mathbb{R}^{+}\) be probability density functions such that \(\frac{p(t)}{q(t)} \in I\) for \(t \in [a, b]\), and let \(a_{0}, a_{1}, \dots, a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\). Let \(m_{i} \leq \frac{p(t)}{q(t)} \leq M_{i}\) for \(t \in [a_{i-1}, a_{i}]\), \(m_{i} < M_{i}\), \(i=1, \dots, n\), \(m=\min_{i=1, \dots, n} m_{i}\), and \(M=\max_{i=1, \dots, n} M_{i}\). Then

$$ \begin{aligned} D_{f}(p, q) &\leq \sum_{i=1}^{n} \biggl( \frac{M_{i} Q_{i} - P_{i}}{M_{i} - m_{i}} f ( m_{i} ) + \frac{P_{i} - m_{i} Q_{i}}{M_{i} - m_{i}} f ( M_{i} ) \biggr) \\ &\leq \frac{M - 1}{M - m} f ( m ) + \frac{1 - m}{M - m} f ( M ), \end{aligned} $$

where \(P_{i} = \int _{a_{i-1}}^{a_{i}} p(t) \,dt\) and \(Q_{i} = \int _{a_{i-1}}^{a_{i}} q(t) \,dt\).
Proof
Using Theorem 2.3 with \(p \to q\) and \(g \to \frac{p}{q}\), we obtain the result. □
Definition 3.4
(Shannon entropy)
Let \(p \colon [a, b] \to \mathbb{R}^{+}\) be a probability density function. The Shannon entropy is defined as

$$ h(p) = - \int _{a}^{b} p(t) \log p(t) \,dt . $$
Corollary 3.5
Let \(q \colon [a, b] \to \mathbb{R}^{+}\) be a probability density function, and let \(a_{0}, a_{1}, \dots, a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\). Then

$$ h(q) \leq \log (b-a) - \sum_{i=1}^{n} \biggl( \int _{a_{i-1}}^{a_{i}} q(t) \,dt \biggr) \log \frac{ (b-a) \int _{a_{i-1}}^{a_{i}} q(t) \,dt }{ a_{i} - a_{i-1} } \leq \log (b-a) . $$
Proof
Using Theorem 3.2 with \(f(t)=- \log t\), \(t \in \mathbb{R}^{+}\), and \(p(t)=\frac{1}{b-a}\), \(t \in [a, b]\), we obtain the result. □
Corollary 3.6
Let \(q \colon [a, b] \to \mathbb{R}^{+}\) be a probability density function, let \(a_{0}, a_{1}, \dots, a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\), and let \(m_{i} \leq \frac{1}{q(t)} \leq M_{i}\) for \(t \in [a_{i-1}, a_{i}]\), \(m_{i} < M_{i}\), \(i=1, \dots, n\), \(m=\min_{i=1, \dots, n} m_{i}\), and \(M=\max_{i=1, \dots, n} M_{i}\). Then
Proof
Using Theorem 3.3 with \(f(t)=- \log t\), \(t \in \mathbb{R}^{+}\), and \(p(t)=\frac{1}{b -a}\), \(t \in [a, b]\), we obtain the result. □
Definition 3.7
(Kullback–Leibler divergence)
Let \(p, q \colon [a, b]\to \mathbb{R}^{+}\) be two probability density functions. The Kullback–Leibler divergence is defined as

$$ D_{\mathrm{KL}}(p, q) = \int _{a}^{b} p(t) \log \frac{p(t)}{q(t)} \,dt . $$
Corollary 3.8
Let \(p, q \colon [a, b] \to \mathbb{R}^{+}\) be probability density functions, and let \(a_{0}, a_{1}, \dots, a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\). Then

$$ 0 \leq \sum_{i=1}^{n} P_{i} \log \frac{P_{i}}{Q_{i}} \leq D_{\mathrm{KL}}(p, q), $$

where \(P_{i} = \int _{a_{i-1}}^{a_{i}} p(t) \,dt\) and \(Q_{i} = \int _{a_{i-1}}^{a_{i}} q(t) \,dt\).
Proof
Using Theorem 3.2 with \(f(t)= t \log t\), \(t \in \mathbb{R}^{+}\), we obtain the result. □
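As a sanity check (our addition, with illustrative densities on \([0, 1]\)): the Kullback–Leibler divergence of the coarsened distributions \((P_{i})\), \((Q_{i})\) lies between 0 and the integral divergence.

```python
import math

def integrate(h, a, b, n=4000):
    # midpoint-rule quadrature
    w = (b - a) / n
    return w * sum(h(a + (k + 0.5) * w) for k in range(n))

p = lambda t: 0.5 + t   # probability density on [0, 1]
q = lambda t: 1.0       # uniform density on [0, 1]
knots = [0.0, 0.3, 0.7, 1.0]

kl = integrate(lambda t: p(t) * math.log(p(t) / q(t)), 0.0, 1.0)

# discrete KL divergence of the coarsened masses
coarse = 0.0
for lo, hi in zip(knots, knots[1:]):
    P_i = integrate(p, lo, hi)
    Q_i = integrate(q, lo, hi)
    coarse += P_i * math.log(P_i / Q_i)

print(0.0, coarse, kl)   # an increasing chain
```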
Corollary 3.9
Let \(p, q \colon [a, b] \to \mathbb{R}^{+}\) be probability density functions, let \(a_{0}, a_{1}, \dots \), \(a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\), and let \(m_{i} \leq \frac{p(t)}{q(t)} \leq M_{i}\) for \(t \in [a_{i-1}, a_{i}]\), \(m_{i} < M_{i}\), \(i=1, \dots, n\), \(m=\min_{i=1, \dots, n} m_{i}\), and \(M=\max_{i=1, \dots, n} M_{i}\). Then
Proof
Using Theorem 3.3 with \(f(t)= t \log t\), \(t \in \mathbb{R}^{+}\), we obtain the result. □
Definition 3.10
(Variational distance)
Let \(p, q \colon [a, b]\to \mathbb{R}^{+}\) be two probability density functions. The variational distance is defined by

$$ V(p, q) = \int _{a}^{b} \bigl\vert p(t) - q(t) \bigr\vert \,dt . $$
The following corollary can be also proved elementarily using the triangle inequality for integrals.
Corollary 3.11
Let \(p, q \colon [a, b] \to \mathbb{R}^{+}\) be probability density functions, and let \(a_{0}, a_{1}, \dots, a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\). Then

$$ 0 \leq \sum_{i=1}^{n} \biggl\vert \int _{a_{i-1}}^{a_{i}} \bigl( p(t) - q(t) \bigr) \,dt \biggr\vert \leq V(p, q) . $$
Proof
Using Theorem 3.2 with \(f(t)= | t - 1 |\), \(t \in \mathbb{R}^{+}\), we obtain the result. □
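Numerically (our addition, reusing the illustrative densities from before), the coarsened \(L^{1}\) distance of the subinterval masses is indeed dominated by the variational distance:

```python
def integrate(h, a, b, n=4000):
    # midpoint-rule quadrature
    w = (b - a) / n
    return w * sum(h(a + (k + 0.5) * w) for k in range(n))

p = lambda t: 0.5 + t   # probability density on [0, 1]
q = lambda t: 1.0       # uniform density on [0, 1]
knots = [0.0, 0.3, 0.7, 1.0]

V = integrate(lambda t: abs(p(t) - q(t)), 0.0, 1.0)
coarse = sum(abs(integrate(p, lo, hi) - integrate(q, lo, hi))
             for lo, hi in zip(knots, knots[1:]))

print(coarse, V)   # coarse <= V
```

This is the elementary triangle-inequality bound mentioned above, recovered here as a special case of Theorem 3.2.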
Corollary 3.12
Let \(p, q \colon [a, b] \to \mathbb{R}^{+}\) be probability density functions, let \(a_{0}, a_{1}, \dots \), \(a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\), and let \(m_{i} \leq \frac{p(t)}{q(t)} \leq M_{i}\) for \(t \in [a_{i-1}, a_{i}]\), \(m_{i} < M_{i}\), \(i=1, \dots, n\), \(m=\min_{i=1, \dots, n} m_{i}\), and \(M=\max_{i=1, \dots, n} M_{i}\). Then
Proof
Using Theorem 3.3 with \(f(t)= | t - 1 |\), \(t \in \mathbb{R}^{+}\), and noting that \(m \leq 1 \leq M\), we obtain the result. □
Definition 3.13
(Jeffrey’s distance)
Let \(p, q \colon [a, b]\to \mathbb{R}^{+}\) be two probability density functions. The Jeffrey distance is defined as

$$ J(p, q) = \int _{a}^{b} \bigl( p(t) - q(t) \bigr) \log \frac{p(t)}{q(t)} \,dt . $$
Corollary 3.14
Let \(p, q \colon [a, b] \to \mathbb{R}^{+}\) be probability density functions, and let \(a_{0}, a_{1}, \dots, a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\). Then

$$ 0 \leq \sum_{i=1}^{n} ( P_{i} - Q_{i} ) \log \frac{P_{i}}{Q_{i}} \leq J(p, q), $$

where \(P_{i} = \int _{a_{i-1}}^{a_{i}} p(t) \,dt\) and \(Q_{i} = \int _{a_{i-1}}^{a_{i}} q(t) \,dt\).
Proof
Using Theorem 3.2 with \(f(t)= (t - 1) \log t\), \(t \in \mathbb{R}^{+}\), we obtain the result. □
Corollary 3.15
Let \(p, q \colon [a, b] \to \mathbb{R}^{+}\) be probability density functions, let \(a_{0}, a_{1}, \dots \), \(a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\), and let \(m_{i} \leq \frac{p(t)}{q(t)} \leq M_{i} \) for \(t \in [a_{i-1}, a_{i}]\), \(m_{i} < M_{i}\), \(i=1, \dots, n\), \(m=\min_{i=1, \dots, n} m_{i}\), and \(M=\max_{i=1, \dots, n} M_{i}\). Then
Proof
Using Theorem 3.3 with \(f(t)= (t - 1) \log t\), \(t \in \mathbb{R}^{+}\), we obtain the result. □
Definition 3.16
(Bhattacharyya coefficient)
Let \(p, q \colon [a, b]\to \mathbb{R}^{+}\) be two probability density functions. The Bhattacharyya coefficient is defined as

$$ B(p, q) = \int _{a}^{b} \sqrt{p(t) q(t)} \,dt . $$
Corollary 3.17
Let \(p, q \colon [a, b] \to \mathbb{R}^{+}\) be probability density functions, and let \(a_{0}, a_{1}, \dots, a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\). Then

$$ B(p, q) \leq \sum_{i=1}^{n} \sqrt{ \biggl( \int _{a_{i-1}}^{a_{i}} p(t) \,dt \biggr) \biggl( \int _{a_{i-1}}^{a_{i}} q(t) \,dt \biggr)} \leq 1 . $$
Proof
Using Theorem 3.2 with \(f(t)=- \sqrt{t}\), \(t \in \mathbb{R}^{+}\), we obtain the result. □
Corollary 3.18
Let \(p, q \colon [a, b] \to \mathbb{R}^{+}\) be probability density functions, let \(a_{0}, a_{1}, \dots \), \(a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\), and let \(m_{i} \leq \frac{p(t)}{q(t)} \leq M_{i} \) for \(t \in [a_{i-1}, a_{i}]\), \(m_{i} < M_{i}\), \(i=1, \dots, n\), \(m=\min_{i=1, \dots, n} m_{i}\), and \(M=\max_{i=1, \dots, n} M_{i}\). Then
Proof
Using Theorem 3.3 with \(f(t)= - \sqrt{t}\), \(t \in \mathbb{R}^{+}\), we obtain the result. □
Definition 3.19
(Hellinger distance)
Let \(p, q \colon [a, b]\to \mathbb{R}^{+}\) be two probability density functions. The Hellinger distance is defined as

$$ H^{2}(p, q) = \int _{a}^{b} \bigl( \sqrt{p(t)} - \sqrt{q(t)} \bigr)^{2} \,dt . $$
Corollary 3.20
Let \(p, q \colon [a, b] \to \mathbb{R}^{+}\) be probability density functions, and let \(a_{0}, a_{1}, \dots, a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\). Then

$$ 0 \leq \sum_{i=1}^{n} \bigl( \sqrt{P_{i}} - \sqrt{Q_{i}} \bigr)^{2} \leq H^{2}(p, q), $$

where \(P_{i} = \int _{a_{i-1}}^{a_{i}} p(t) \,dt\) and \(Q_{i} = \int _{a_{i-1}}^{a_{i}} q(t) \,dt\).
Proof
Using Theorem 3.2 with \(f(t)=( \sqrt{t} - 1 )^{2}\), \(t \in \mathbb{R}^{+}\), we obtain the result. □
Corollary 3.21
Let \(p, q \colon [a, b] \to \mathbb{R}^{+}\) be probability density functions, let \(a_{0}, a_{1}, \dots \), \(a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\), and let \(m_{i} \leq \frac{p(t)}{q(t)} \leq M_{i}\) for \(t \in [a_{i-1}, a_{i}]\), \(m_{i} < M_{i}\), \(i=1, \dots, n\), \(m=\min_{i=1, \dots, n} m_{i}\), and \(M=\max_{i=1, \dots, n} M_{i}\). Then
Proof
Using Theorem 3.3 with \(f(t)=( \sqrt{t} - 1 )^{2}\), \(t \in \mathbb{R}^{+}\), we obtain the result. □
Definition 3.22
(Triangular discrimination)
Let \(p, q \colon [a, b]\to \mathbb{R}^{+}\) be two probability density functions. The triangular discrimination between p and q is defined as

$$ \Delta (p, q) = \int _{a}^{b} \frac{ ( p(t) - q(t) )^{2}}{p(t) + q(t)} \,dt . $$
Corollary 3.23
Let \(p, q \colon [a, b] \to \mathbb{R}^{+}\) be probability density functions, and let \(a_{0}, a_{1}, \dots, a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\). Then

$$ 0 \leq \sum_{i=1}^{n} \frac{ ( P_{i} - Q_{i} )^{2}}{P_{i} + Q_{i}} \leq \Delta (p, q), $$

where \(P_{i} = \int _{a_{i-1}}^{a_{i}} p(t) \,dt\) and \(Q_{i} = \int _{a_{i-1}}^{a_{i}} q(t) \,dt\).
Proof
Using Theorem 3.2 with \(f(t)= \frac{(t-1)^{2}}{t+1}\), \(t \in \mathbb{R}^{+}\), we obtain the result. □
Corollary 3.24
Let \(p, q \colon [a, b] \to \mathbb{R}^{+}\) be probability density functions, let \(a_{0}, a_{1}, \dots \), \(a_{n-1}, a_{n}\) be such that \(a = a_{0} < a_{1} < \cdots < a_{n-1} < a_{n} = b\), and let \(m_{i} \leq \frac{p(t)}{q(t)} \leq M_{i}\) for \(t \in [a_{i-1}, a_{i}]\), \(m_{i} < M_{i}\), \(i=1, \dots, n\), \(m=\min_{i=1, \dots, n} m_{i}\), and \(M=\max_{i=1, \dots, n} M_{i}\). Then
Proof
Using Theorem 3.3 with \(f(t)= \frac{(t-1)^{2}}{t+1}\), \(t \in \mathbb{R}^{+}\), we obtain the result. □
References
Dragomir, S.S., Khan, M.A., Abathun, A.: Refinement of Jensen’s integral inequality. Open Math. 14, 221–228 (2016)
El Farissi, A.: Simple proof and refinement of Hermite–Hadamard inequality. J. Math. Inequal. 4(3), 365–369 (2010)
Gao, X.: A note on the Hermite–Hadamard inequality. J. Math. Inequal. 4(4), 587–591 (2010)
Ivelić Bradanović, S., Latif, N., Pečarić, Ð., Pečarić, J.: Sherman’s and related inequalities with applications in information theory. J. Inequal. Appl. 2018, 98 (2018)
Jakšetić, J., Pečarić, Ð., Pečarić, J.: Hybrid Zipf–Mandelbrot law. J. Math. Inequal. 13, 275–286 (2019)
Jensen, J.L.W.V.: Om konvexe Funktioner og Uligheder mellem Middelvaerdier. Nyt Tidsskr. Math. 16B, 49–69 (1905) (Danish)
Khalid, S.: On the refinements of the integral Jensen–Steffensen inequality. J. Inequal. Appl. 2013, 20 (2013). https://doi.org/10.1186/1029-242X-2013-20
Khan, M.A., Khan, J., Pečarić, J.: Generalization of Jensen’s and Jensen–Steffensen’s inequalities by generalized majorization theorem. J. Math. Inequal. 11(4), 1049–1074 (2017)
Khan, M.A., Pečarić, Ð., Pečarić, J.: Bounds for Csiszár divergence and hybrid Zipf–Mandelbrot entropy. Math. Methods Appl. Sci. 42(18), 7411–7424 (2019)
Khan, M.A., Pečarić, Ð., Pečarić, J.: New refinement of the Jensen inequality associated to certain functions with applications. J. Inequal. Appl. 2020, Article ID 76 (2020)
Lah, P., Ribarič, M.: Converse of Jensen’s inequality for convex functions. Publ. Elektroteh. Fak. Univ. Beogr., Ser. Mat. Fiz. 412–460, 201–205 (1973)
Mikić, R., Pečarić, Ð., Pečarić, J.: Inequalities of the Jensen and Edmundson–Lah–Ribarič type for 3-convex functions with applications. J. Math. Inequal. 12, 677–692 (2018)
Mitrinović, D.S., Pečarić, J.E., Fink, A.M.: Classical and New Inequalities in Analysis. Mathematics and Its Applications (East European Series), vol. 61. Kluwer Academic, Dordrecht (1993). ISBN 0-7923-2064-6
Niculescu, C.P., Persson, L.-E.: Convex Functions and Their Applications. A Contemporary Approach. CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC, vol. 23. Springer, New York (2006). ISBN 978-0387-24300-9
Pečarić, Ð., Pečarić, J., Rodić, M.: On a Jensen-type inequality for generalized f-divergences and Zipf–Mandelbrot law. Math. Inequal. Appl. 22(4), 1463–1475 (2019)
Pečarić, J.E., Proschan, F., Tong, Y.L.: Convex Functions, Partial Orderings, and Statistical Applications. Mathematics in Science and Engineering, vol. 187. Academic Press, Boston (1992). ISBN 0-12-549250-2
Acknowledgements
The research of the first author was supported by the Ministry of Education and Science of the Russian Federation (the Agreement number No. 02.a03.21.0008).
Availability of data and materials
Not applicable.
Funding
There is no funding for this work.
Contributions
Both authors jointly worked on the results, and they read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Pečarić, J., Perić, J. Refinements of the integral form of Jensen’s and the Lah–Ribarič inequalities and applications for Csiszár divergence. J Inequal Appl 2020, 108 (2020). https://doi.org/10.1186/s13660-020-02369-x