## 1 Introduction

A real-valued function $$\psi : I \rightarrow \mathbb{R}$$ is said to be convex if the inequality

$$\psi (\alpha {\xi }+\beta {\zeta })\leq \alpha \psi (\xi )+ \beta \psi (\zeta )$$

holds for all $$\xi , \zeta \in I$$ and $$\alpha , \beta \geq 0$$ with $$\alpha +\beta =1$$. It is well known that $$\psi : I \rightarrow \mathbb{R}$$ is convex if and only if

$$\psi \Biggl(\sum_{i=1}^{n}\alpha _{i}\xi _{i} \Biggr)\leq \sum_{i=1} ^{n}\alpha _{i}\psi (\xi _{i})$$

for all $$\xi _{i}\in I$$ and $$\alpha _{i}\geq 0$$ with $$\sum_{i=1}^{n} \alpha _{i}=1$$.
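As a quick numerical sanity check, the weighted Jensen inequality can be verified for a concrete convex function; the sketch below uses $$\psi (x)=x^{2}$$ with arbitrarily chosen points and weights (none of which come from the text) and is illustrative only.

```python
import random

def psi(x):
    return x * x  # psi(x) = x^2 is convex on the whole real line

random.seed(0)
xi = [random.uniform(-3.0, 3.0) for _ in range(6)]  # points xi_i in I
raw = [random.uniform(0.1, 1.0) for _ in range(6)]
total = sum(raw)
alpha = [w / total for w in raw]                    # weights summing to 1

lhs = psi(sum(a * x for a, x in zip(alpha, xi)))    # psi of the weighted mean
rhs = sum(a * psi(x) for a, x in zip(alpha, xi))    # weighted mean of psi
assert lhs <= rhs + 1e-12                           # Jensen's inequality
```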

Convex functions have wide applications in pure and applied mathematics, physics, and other natural sciences [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]; they enjoy many important and interesting properties [21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37] such as monotonicity, continuity, and differentiability. Recently, many generalizations and extensions of convexity have been introduced, for example, s-convexity [38], strong convexity [39,40,41], preinvexity [42], GA-convexity [43], GG-convexity [44], Schur convexity [45,46,47,48,49], and others [50,51,52,53,54]. In particular, many remarkable inequalities can be found in the literature [55,56,57,58,59,60,61,62,63,64,65,66,67] via the convexity theory.

Chen [68] generalized the convex function to the s-convex function, gave the relation between convex and s-convex functions, and established Jensen’s inequality for s-convex functions as follows.

Let K be a convex subset of a real linear space and $$s\in (0, \infty )$$ be a fixed positive real number. Then the mapping $$f: K\rightarrow \mathbb{R}$$ is called s-convex on K if

$$f(\alpha x+\beta y)\leq \alpha ^{s}f(x)+\beta ^{s}f(y)$$
(1.1)

for all $$x, y\in K$$ and $$\alpha , \beta \geq 0$$ with $$\alpha +\beta =1$$.

### Lemma 1.1

([68])

Let $$\psi :I \rightarrow \mathbb{R}$$ be a convex function defined on interval I. Then the following statements are true:

1. (i)

If ψ is non-negative, then ψ is s-convex for $$s\in (0, 1]$$.

2. (ii)

If ψ is non-positive, then ψ is s-convex for $$s\in [1, \infty )$$.

### Theorem 1.2

([68])

Let $$i\in \{1,2,\ldots,n\}$$, $$\alpha _{i}\geq 0$$, $$Q_{n}=\sum_{i=1}^{n} \alpha _{i}^{\frac{1}{s}}>0$$, and $$\psi :I\rightarrow \mathbb{R}$$ be an s-convex function. Then

$$\psi \Biggl(\frac{1}{Q_{n}}\sum_{i=1}^{n} \alpha _{i}^{\frac{1}{s}}\xi _{i} \Biggr)\leq \frac{1}{Q_{n}^{s}}\sum_{i=1}^{n}\alpha _{i}\psi (\xi _{i})$$

for all $$\xi _{i}\in I$$.
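Theorem 1.2 can likewise be checked numerically. The sketch below uses $$\psi (x)=x^{2}$$, which is non-negative and convex and hence s-convex for $$s\in (0, 1]$$ by Lemma 1.1; the tuples and the value $$s=\frac{1}{2}$$ are arbitrary illustrative choices, not taken from the text.

```python
s = 0.5  # any s in (0, 1]

def psi(x):
    return x * x  # non-negative and convex, hence s-convex for s in (0, 1]

xi = [0.5, 1.2, 2.0, 3.3]
alpha = [0.2, 0.5, 0.9, 1.4]  # non-negative weights, no normalisation needed

Qn = sum(a ** (1.0 / s) for a in alpha)
lhs = psi(sum(a ** (1.0 / s) * x for a, x in zip(alpha, xi)) / Qn)
rhs = sum(a * psi(x) for a, x in zip(alpha, xi)) / Qn ** s
assert lhs <= rhs + 1e-12  # Jensen's inequality for s-convex functions
```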

## 2 Information divergence measures

A divergence measure quantifies the distance between two probability distributions. Divergence measures were introduced to solve problems related to probability theory, and they have found vast application in a variety of fields such as economics, biology, signal processing, pattern recognition, computational learning, color image segmentation, magnetic resonance image analysis, and so on.

A class of information divergence measures, which is one of the most important owing to its compact form, is the Csiszár ϕ-divergence [69] given below:

$${I}_{\phi }(\boldsymbol{\eta },\boldsymbol{\zeta })=\sum _{i=1} ^{n}\zeta _{i}\phi \biggl( \frac{\eta _{i}}{\zeta _{i}} \biggr),$$

where $$\boldsymbol{\eta }=(\eta _{1},\eta _{2},\dots ,\eta _{n})$$, $$\boldsymbol{\zeta } =(\zeta _{1},\zeta _{2},\dots ,\zeta _{n})$$ are positive real n-tuples.
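A direct implementation of the Csiszár ϕ-divergence is straightforward. The sketch below instantiates it with $$\phi (t)=t\log t$$; the tuples are arbitrary probability vectors chosen for illustration.

```python
import math

def csiszar_divergence(phi, eta, zeta):
    """I_phi(eta, zeta) = sum_i zeta_i * phi(eta_i / zeta_i)."""
    return sum(z * phi(e / z) for e, z in zip(eta, zeta))

eta = [0.2, 0.5, 0.3]   # positive n-tuple summing to 1
zeta = [0.4, 0.4, 0.2]  # positive n-tuple summing to 1

kl = csiszar_divergence(lambda t: t * math.log(t), eta, zeta)
assert kl >= 0.0  # phi is convex and phi(1) = 0, so I_phi >= 0 here
```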

The Csiszár ϕ-divergence is a generalized measure of information based on the convex function $$\phi : \mathbb{R}^{+}\rightarrow \mathbb{R}$$, where the convexity ensures the non-negativity of the divergence measure $${I}_{\phi }(\boldsymbol{\eta }, \boldsymbol{\zeta })$$. The following Theorems 2.1 and 2.2 can be found in the literature [70, 71].

### Theorem 2.1

If $$\phi :[0, \infty )\rightarrow \mathbb{R}$$ is convex, then $${I}_{\phi }(\boldsymbol{\eta },\boldsymbol{\zeta })$$ is jointly convex in η and ζ.

### Theorem 2.2

Let $$\phi : \mathbb{R}^{+}\rightarrow \mathbb{R}^{+}$$ be convex. Then, for all $$\boldsymbol{\eta }, \boldsymbol{\zeta }\in \mathbb{R}^{n}_{+}$$ with $$Q_{n}=\sum_{i=1} ^{n}\zeta _{i}$$, we have

$${I}_{\phi }(\boldsymbol{\eta },\boldsymbol{\zeta })\geq Q_{n} \phi \biggl(\frac{\sum_{i=1}^{n}\eta _{i}}{\sum_{i=1} ^{n}\zeta _{i}} \biggr).$$
(2.1)

If ϕ is strictly convex, then equality holds in (2.1) if and only if

$$\frac{\eta _{1}}{\zeta _{1}}=\frac{\eta _{2}}{\zeta _{2}}=\frac{\eta _{3}}{ \zeta _{3}}=\cdots =\frac{\eta _{n}}{\zeta _{n}}.$$

### Corollary 2.3

Let $$\phi : \mathbb{R}^{+}\rightarrow \mathbb{R}^{+}$$ be convex and normalized ($$\phi (1)=0$$) with $$\sum_{i=1}^{n}\eta _{i}=\sum_{i=1}^{n}\zeta _{i}$$. Then we have

$${I}_{\phi }(\boldsymbol{\eta },\boldsymbol{\zeta })\geq 0.$$
(2.2)

If ϕ is strictly convex, then equality holds in (2.2) if and only if $$\boldsymbol{\eta }=\boldsymbol{\zeta }$$.

Many well-known distance functions or divergences can be obtained from a suitable choice of the function ϕ, and they are frequently used in mathematical statistics, signal processing, and information theory. Among these divergences are the Kullback–Leibler, Rényi, Hellinger, Chi-square, and Jeffreys divergences, the variational distance, and so on. A brief introduction to these divergences is given below.

In probability and statistics, observed data are approximated by a probability distribution, and this approximation results in a loss of information. The primary objective of information theory is to estimate how much information is contained in the data, and entropy is used to measure this information. Approximating the actual distribution $$\boldsymbol{\eta} (\boldsymbol{x})$$ by a distribution $$\boldsymbol{\zeta} (\boldsymbol{x})$$ results in a loss of information. Although not a true metric, the KL-divergence is a useful measure of the distance between the two distributions: it quantifies the inefficiency of encoding the data with respect to the distribution ζ rather than the true distribution η. The formula for the KL-divergence is obtained by choosing $$\phi (t)=t \log t$$ in the Csiszár divergence:

$$K(\boldsymbol{\eta },\boldsymbol{\zeta })=\sum_{i=1}^{n} \eta _{i} \log \biggl(\frac{\eta _{i}}{\zeta _{i}} \biggr).$$

The KL-divergence is non-negative, and it vanishes if and only if $$\boldsymbol{\eta }= \boldsymbol{\zeta }$$. However, it is not a true distance between distributions, since it is not symmetric and does not satisfy the triangle inequality.

A logical alternative or extension to the KL-divergence is Jeffreys’ divergence, which is the sum of the KL-divergences in both directions. It is defined by

$$J(\boldsymbol{\eta },\boldsymbol{\zeta })=\sum_{i=1}^{n}( \eta _{i}- \zeta _{i})\log \biggl(\frac{\eta _{i}}{\zeta _{i}} \biggr),$$

which corresponds to ϕ-divergence for ϕ defined by

$$\phi (z)=(z-1)\log {z}, \quad z>0.$$

Like the KL-divergence, it is non-negative and vanishes only for identical distributions, and unlike the KL-divergence it is symmetric; however, it still does not obey the triangle inequality. Its uses are similar to those of the KL-divergence.

The Bhattacharyya divergence is defined by

$$B(\boldsymbol{\eta }, \boldsymbol{\zeta })= \sum_{i=1}^{n}\sqrt{\eta _{i}\zeta _{i}},$$

which corresponds to ϕ-divergence for ϕ defined by

$$\phi (z)=\sqrt{z}, \quad z>0.$$

It satisfies the first three properties of a metric but does not obey the triangle inequality. A nice feature of the Bhattacharyya divergence is its limited range, which makes it quite attractive for distance comparisons.

The Bhattacharyya divergence is related to the Hellinger divergence

$$H(\boldsymbol{\eta },\boldsymbol{\zeta })=\sum_{i=1}^{n} (\sqrt{ \zeta _{i}}-\sqrt{\eta _{i}} )^{2},$$

corresponding to a ϕ-divergence for ϕ defined by

$$\phi (z)= (1-\sqrt{z})^{2}, \quad z>0.$$

The square root of the Hellinger divergence is in fact a proper metric, as it satisfies the non-negativity, symmetry, and triangle inequality properties. This makes it an ideal candidate for estimation and classification problems. Test statistics based on the Hellinger divergence have been developed for independent samples drawn from two different continuous populations with a common parameter. It is also used as a splitting criterion in decision trees, which is an effective way to address imbalanced data problems. The Hellinger divergence has deep roots in information theory and machine learning and is extensively used in data analysis, especially when the objects being compared are high-dimensional empirical probability distributions built from data.
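Expanding the square in the definition of H gives the identity $$H(\boldsymbol{\eta },\boldsymbol{\zeta })=\sum_{i=1}^{n}\eta _{i}+\sum_{i=1}^{n}\zeta _{i}-2B(\boldsymbol{\eta },\boldsymbol{\zeta })$$, which makes the relation to the Bhattacharyya divergence explicit. A minimal numerical sketch (the tuples are arbitrary illustrative choices):

```python
import math

def hellinger(eta, zeta):
    return sum((math.sqrt(z) - math.sqrt(e)) ** 2 for e, z in zip(eta, zeta))

def bhattacharyya(eta, zeta):
    return sum(math.sqrt(e * z) for e, z in zip(eta, zeta))

eta = [0.1, 0.6, 0.3]
zeta = [0.3, 0.3, 0.4]

h = hellinger(eta, zeta)
b = bhattacharyya(eta, zeta)
assert abs(h - (sum(eta) + sum(zeta) - 2.0 * b)) < 1e-12  # expand the square
assert abs(h - hellinger(zeta, eta)) < 1e-12              # symmetry
```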

Another ϕ-divergence is the total variational distance, a distance measure for probability distributions sometimes called the statistical distance or variational distance; it is defined by

$$V(\boldsymbol{\eta },\boldsymbol{\zeta })=\sum_{i=1}^{n} \vert \eta _{i}- \zeta _{i} \vert ,$$

which corresponds to a ϕ-divergence for ϕ defined by

$$\phi (z)= \vert z-1 \vert , \quad z>0.$$

The variational distance is a fundamental quantity in statistics and probability which appears in many diverse applications. In information theory it is used to define strong typicality and asymptotic equipartition of sequences generated by sampling from a given distribution. In decision problems it arises naturally when discriminating the results of observation of two statistical hypotheses. In studying the ergodicity of Markov chains, it is used to define the Dobrushin coefficient and to establish the contraction property of transition probability distributions. Moreover, the distance in total variation of probability measures is related, via upper and lower bounds, to a variety of other distances and metrics.

Another divergence measure is the Rényi divergence defined as

$$R(\boldsymbol{\eta },\boldsymbol{\zeta })=\sum_{i=1}^{n} \eta _{i}^{ \alpha } \zeta _{i}^{1-\alpha },$$

which corresponds to a ϕ-divergence for ϕ defined by

$$\phi (z)=z^{\alpha }, \quad z>0,$$

where $$\alpha >1$$. The Rényi divergence is related to the Rényi entropy much as the KL-divergence is related to the Shannon entropy.

Some other important divergences that can be obtained from the Csiszár divergence are given below.

Chi-square divergence. For $$\phi (z)=(z-1)^{2}$$ ($$z>0$$) in the ϕ-divergence, the $$\chi ^{2}$$-divergence is given by

$$\chi ^{2}(\boldsymbol{\eta },\boldsymbol{\zeta })=\sum _{i=1}^{n}\frac{( \eta _{i}-\zeta _{i})^{2}}{\zeta _{i}},$$

and $$\chi ^{2}(\boldsymbol{\eta },\boldsymbol{\zeta })+\chi ^{2}( \boldsymbol{\zeta },\boldsymbol{\eta })$$ is known as the symmetric Chi-square divergence.

Triangular discrimination. For $$\phi (z)= \frac{(z-1)^{2}}{z+1}$$ ($$z>0$$), the triangular discrimination is given by

$$\triangle (\boldsymbol{\eta },\boldsymbol{\zeta })=\sum _{i=1} ^{n}\frac{(\eta _{i}-\zeta _{i})^{2}}{\eta _{i}+\zeta _{i}}.$$

Relative arithmetic-geometric divergence. For $$\phi (z)= \frac{z+1}{2}\log \frac{1+z}{2z}$$ ($$z>0$$), the relative arithmetic-geometric divergence is given by

$$G(\boldsymbol{\eta },\boldsymbol{\zeta })= \sum_{i=1}^{n} \frac{\eta _{i}+\zeta _{i}}{2}\log \frac{\eta _{i}+\zeta _{i}}{2\eta _{i}}.$$

## 3 Inequalities for Csiszár divergence

### Theorem 3.1

Let $$\phi :\mathbb{R}^{+}\rightarrow \mathbb{R}$$ be an s-convex function, $$\boldsymbol{\eta }=(\eta _{1},\eta _{2},\ldots,\eta _{n})$$ and $$\boldsymbol{\zeta }=(\zeta _{1},\zeta _{2},\ldots,\zeta _{n})$$ be two positive real n-tuples, and $$Q_{n}=\sum_{i=1}^{n}\zeta _{i} ^{\frac{1}{s}}$$. Then one has

$$I_{\phi }(\boldsymbol{\eta }, \boldsymbol{\zeta }) \geq Q_{n}^{s} \phi \biggl(\frac{\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{ \sum_{i=1}^{n}\zeta ^{\frac{1}{s}}_{i}} \biggr).$$
(3.1)

### Proof

Taking $$\alpha _{i}=\zeta _{i}$$ and $$\xi _{i}=\frac{ \eta _{i}}{\zeta _{i}}$$ in Theorem 1.2, we get

$$\frac{1}{Q_{n}^{s}}\sum_{i=1}^{n}\zeta _{i}\phi \biggl(\frac{ \eta _{i}}{\zeta _{i}} \biggr)\geq \phi \biggl( \frac{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}(\frac{\eta _{i}}{\zeta _{i}})}{Q_{n}} \biggr),$$

which is equivalent to (3.1). □
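Inequality (3.1) can be checked numerically. The sketch below uses $$\phi (z)=z^{2}$$ (non-negative and convex, hence s-convex for $$s\in (0,1]$$ by Lemma 1.1) with arbitrarily chosen positive tuples and $$s=\frac{1}{2}$$; none of these choices come from the text.

```python
def csiszar(phi, eta, zeta):
    return sum(z * phi(e / z) for e, z in zip(eta, zeta))

def lower_bound(phi, eta, zeta, s):
    # Right-hand side of inequality (3.1)
    Qn = sum(z ** (1.0 / s) for z in zeta)
    arg = sum(z ** ((1.0 - s) / s) * e for e, z in zip(eta, zeta)) / Qn
    return Qn ** s * phi(arg)

eta = [0.5, 1.5, 2.5]
zeta = [1.0, 2.0, 0.5]
s = 0.5
phi = lambda t: t * t  # non-negative, convex, hence s-convex for s in (0, 1]

assert csiszar(phi, eta, zeta) >= lower_bound(phi, eta, zeta, s) - 1e-12
```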

### Theorem 3.2

Let $$\boldsymbol{\eta }=(\eta _{1},\eta _{2},\ldots,\eta _{n})$$ and $$\boldsymbol{\zeta }=(\zeta _{1},\zeta _{2},\ldots,\zeta _{n})$$ be two positive real n-tuples, and $$Q_{n}=\sum_{i=1}^{n}\zeta _{i} ^{\frac{1}{s}}$$. Then the following statements are true:

1. (i)

If $$\eta _{i}\geq \zeta _{i}$$ for $$i\in \{1,2,\ldots,n \}$$ and $$s\in (0, 1]$$, then

$$K(\boldsymbol{\eta },\boldsymbol{\zeta })\geq Q_{n}^{s}\frac{\sum_{i=1} ^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{ \frac{1}{s}}}\log \biggl(\frac{\sum_{i=1}^{n} \zeta _{i}^{ \frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}} \biggr).$$
(3.2)
2. (ii)

If $$\eta _{i}<\zeta _{i}$$ for $$i\in \{1,2,\ldots,n\}$$ and $$s\in [1, \infty )$$, then inequality (3.2) holds.

### Proof

(i) If $$\phi (z)=z\log z$$, where $$z>0$$, then $$\phi ^{\prime \prime }(z)= \frac{1}{z}\geq 0$$, so $$\phi (z)$$ is convex on $$(0, \infty )$$. Moreover, if $$z\geq 1$$, then $$\phi (z)\geq 0$$. Hence, by Lemma 1.1, $$\phi (z)$$ is s-convex for $$s\in (0, 1]$$. Using $$\phi (z)=z\log z$$ in Theorem 3.1, we get

$$\sum_{i=1}^{n}\zeta _{i}\frac{\eta _{i}}{\zeta _{i}}\log \biggl(\frac{ \eta _{i}}{\zeta _{i}} \biggr)\geq Q_{n}^{s}\frac{\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}} \log \biggl(\frac{\sum_{i=1}^{n} \zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{ \sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}} \biggr),$$
(3.3)

which is equivalent to (3.2).

(ii) If $$z\leq 1$$, then $$\phi (z)\leq 0$$. Hence, by Lemma 1.1, $$\phi (z)$$ is s-convex for $$s\in [1, \infty )$$; therefore, by utilizing Theorem 3.1, we obtain (3.3). □

### Theorem 3.3

Let $$\boldsymbol{\eta }=(\eta _{1},\eta _{2},\ldots,\eta _{n})$$ and $$\boldsymbol{\zeta }=(\zeta _{1},\zeta _{2},\ldots,\zeta _{n})$$ be two positive real n-tuples, $$Q_{n}=\sum_{i=1}^{n}\zeta _{i}^{ \frac{1}{s}}$$ and $$s \in (0, 1]$$. Then

$$H(\boldsymbol{\eta },\boldsymbol{\zeta }) \geq Q_{n}^{s} \biggl(1-\sqrt{ \frac{ \sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n} \zeta ^{\frac{1}{s}}_{i}}} \biggr)^{2}.$$
(3.4)

### Proof

If $$\phi (z)=(1-\sqrt{z})^{2}$$, where $$z>0$$, then $$\phi ^{\prime \prime }(z)= \frac{1}{2z^{\frac{3}{2}}}\geq 0$$, so $$\phi (z)$$ is convex on $$(0, \infty )$$. Moreover, if $$z>0$$, then $$\phi (z)\geq 0$$. Hence, by Lemma 1.1, $$\phi (z)$$ is s-convex for $$s \in (0, 1]$$. Using $$\phi (z)$$ in Theorem 3.1, we have

\begin{aligned}& \sum_{i=1}^{n}\zeta _{i} \biggl(1-\sqrt{\frac{\eta _{i}}{\zeta _{i}}} \biggr) ^{2} \geq Q_{n}^{s} \biggl(1-\sqrt{\frac{\sum_{i=1}^{n}\zeta _{i}^{ \frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}} \biggr) ^{2}, \\& \sum_{i=1}^{n}(\zeta _{i}+\eta _{i}-2\sqrt{\eta _{i}\zeta _{i}})\geq Q _{n}^{s} \biggl(1-\sqrt{\frac{\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}} \eta _{i}}{\sum_{i=1}^{n}\zeta ^{\frac{1}{s}}_{i}}} \biggr)^{2}, \end{aligned}

which is equivalent to (3.4). □
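A numerical sanity check of (3.4); the tuples and the choice $$s=\frac{1}{2}$$ are arbitrary illustrative values.

```python
import math

eta = [0.4, 0.9, 1.6]
zeta = [0.3, 0.5, 0.7]
s = 0.5  # any s in (0, 1]

H = sum((math.sqrt(z) - math.sqrt(e)) ** 2 for e, z in zip(eta, zeta))
Qn = sum(z ** (1.0 / s) for z in zeta)
A = sum(z ** ((1.0 - s) / s) * e for e, z in zip(eta, zeta)) / Qn
bound = Qn ** s * (1.0 - math.sqrt(A)) ** 2  # right-hand side of (3.4)
assert H >= bound - 1e-12
```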

### Theorem 3.4

Let $$\boldsymbol{\eta }=(\eta _{1},\eta _{2},\ldots,\eta _{n})$$ and $$\boldsymbol{\zeta }=(\zeta _{1},\zeta _{2},\ldots,\zeta _{n})$$ be two positive real n-tuples, $$Q_{n}=\sum_{i=1}^{n}\zeta _{i}^{ \frac{1}{s}}$$ and $$s\in (0,1]$$. Then

$$\chi ^{2}(\boldsymbol{\eta },\boldsymbol{\zeta })\geq Q_{n}^{s} \biggl(\frac{ \sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}-1 \biggr)^{2}.$$
(3.5)

### Proof

If $$\phi (z)=(z-1)^{2}$$, where $$z>0$$, then $$\phi ^{\prime \prime }(z)=2>0$$, so $$\phi (z)$$ is convex on $$(0, \infty )$$. Also, if $$z>0$$, then $$\phi (z)\geq 0$$. Hence, by Lemma 1.1, $$\phi (z)$$ is s-convex for $$s \in (0, 1]$$. Utilizing $$\phi (z)=(z-1)^{2}$$ in Theorem 3.1, we have

$$\sum_{i=1}^{n}\zeta _{i} \biggl( \frac{\eta _{i}}{\zeta _{i}}-1 \biggr) ^{2}\geq Q_{n}^{s} \biggl(\frac{\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}} \eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}-1 \biggr)^{2},$$

which is equivalent to (3.5). □
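A numerical sanity check of (3.5); the tuples and the choice $$s=\frac{1}{2}$$ are arbitrary illustrative values.

```python
eta = [0.7, 1.1, 0.4]
zeta = [0.5, 0.8, 0.9]
s = 0.5  # any s in (0, 1]

chi2 = sum((e - z) ** 2 / z for e, z in zip(eta, zeta))
Qn = sum(z ** (1.0 / s) for z in zeta)
A = sum(z ** ((1.0 - s) / s) * e for e, z in zip(eta, zeta)) / Qn
bound = Qn ** s * (A - 1.0) ** 2  # right-hand side of (3.5)
assert chi2 >= bound - 1e-12
```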

### Theorem 3.5

Let $$\boldsymbol{\eta }=(\eta _{1},\eta _{2},\ldots,\eta _{n})$$ and $$\boldsymbol{\zeta }=(\zeta _{1},\zeta _{2},\ldots,\zeta _{n})$$ be two positive real n-tuples, and $$Q_{n}=\sum_{i=1}^{n}\zeta _{i} ^{\frac{1}{s}}$$. Then the following statements are true:

1. (i)

If $$\eta _{i}\geq \zeta _{i}$$ for $$i\in \{1,2,\ldots,n \}$$ and $$s \in [1, \infty )$$, then

$$K(\boldsymbol{\zeta }, \boldsymbol{\eta })\geq Q_{n}^{s}\log \biggl(\frac{ \sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}{\sum_{i=1}^{n} \zeta _{i}^{ \frac{1-s}{s}}\eta _{i}} \biggr).$$
(3.6)
2. (ii)

If $$\eta _{i}<\zeta _{i}$$ for $$i\in \{1,2,\ldots,n\}$$ and $$s \in (0, 1]$$, then inequality (3.6) holds.

### Proof

(i) Let $$\phi (z)=-\log {z}$$ ($$z>0$$). Then $$\phi ^{\prime \prime }(z)=\frac{1}{z ^{2}}>0$$, so $$\phi (z)$$ is convex on $$(0, \infty )$$. Moreover, if $$z\geq 1$$, then $$\phi (z)\leq 0$$. Hence, by Lemma 1.1, $$\phi (z)$$ is s-convex for $$s\in [1, \infty )$$. Using $$\phi (z)=- \log {z}$$ in Theorem 3.1, we get

$$\sum_{i=1}^{n}\zeta _{i} \biggl(-\log \biggl(\frac{\eta _{i}}{\zeta _{i}} \biggr) \biggr) \geq Q_{n}^{s} \biggl(-\log \biggl(\frac{\sum_{i=1}^{n}\zeta _{i}^{ \frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}} \biggr) \biggr),$$

which is equivalent to (3.6).

(ii) If $$z\leq 1$$, then $$\phi (z)\geq 0$$. Hence, by Lemma 1.1, $$\phi (z)$$ is s-convex for $$s\in (0, 1]$$.

Similarly as above, using the function $$\phi (z)=-\log (z)$$ in Theorem 3.1, we obtain (3.6). □

### Theorem 3.6

Let $$\boldsymbol{\eta }=(\eta _{1},\eta _{2},\ldots,\eta _{n})$$ and $$\boldsymbol{\zeta }=(\zeta _{1},\zeta _{2},\ldots,\zeta _{n})$$ be two positive real n-tuples, $$Q_{n}=\sum_{i=1}^{n}\zeta _{i}^{ \frac{1}{s}}$$ and $$s \in (0, 1]$$. Then

$$J(\boldsymbol{\eta },\boldsymbol{\zeta })\geq Q_{n}^{s-1} \Biggl(\sum _{i=1}^{n} \zeta _{i}^{\frac{1-s}{s}} \eta _{i}-\sum_{i=1}^{n}\zeta _{i}^{ \frac{1}{s}} \Biggr)\log \biggl(\frac{\sum_{i=1}^{n}\zeta _{i}^{ \frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}} \biggr).$$
(3.7)

### Proof

If $$\phi (z)=(z-1)\log z$$ ($$z>0$$), then $$\phi ^{\prime \prime }(z)=\frac{z+1}{z ^{2}}$$, so $$\phi (z)$$ is convex on $$(0, \infty )$$. Moreover, if $$z>0$$, then $$\phi (z)\geq 0$$. Hence, by Lemma 1.1, $$\phi (z)$$ is s-convex for $$s \in (0, 1]$$. Using $$\phi (z)=(z-1)\log z$$ in Theorem 3.1, we have

\begin{aligned}& \sum_{i=1}^{n}\zeta _{i} \biggl( \frac{\eta _{i}}{\zeta _{i}}-1 \biggr) \log \biggl(\frac{\eta _{i}}{\zeta _{i}} \biggr)\geq Q_{n}^{s} \biggl(\frac{ \sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}-1 \biggr)\log \biggl( \frac{\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}} \biggr), \\& \quad \Rightarrow\quad \sum_{i=1}^{n} (\eta _{i}-\zeta _{i} )\log \biggl(\frac{ \eta _{i}}{\zeta _{i}} \biggr)\geq Q_{n}^{s} \biggl(\frac{\sum_{i=1}^{n} \zeta _{i}^{\frac{1-s}{s}}\eta _{i}-\sum_{i=1}^{n}\zeta _{i}^{ \frac{1}{s}}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}} \biggr) \log \biggl( \frac{\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}} \biggr), \end{aligned}

which is equivalent to (3.7). □

### Theorem 3.7

Let $$\boldsymbol{\eta }=(\eta _{1},\eta _{2},\ldots,\eta _{n})$$ and $$\boldsymbol{\zeta }=(\zeta _{1},\zeta _{2},\ldots,\zeta _{n})$$ be two positive real n-tuples, $$Q_{n}=\sum_{i=1}^{n}\zeta _{i}^{ \frac{1}{s}}$$ and $$s \in (0, 1]$$. Then

$$R(\boldsymbol{\eta },\boldsymbol{\zeta })\geq Q_{n}^{s} \biggl(\frac{ \sum_{i=1}^{n} \zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}} \biggr)^{\alpha }.$$
(3.8)

### Proof

For $$\alpha >1$$, the function $$\phi (z)=z^{\alpha }$$ ($$z>0$$) is non-negative and convex. Therefore, by Lemma 1.1, $$\phi (z)$$ is s-convex for $$s \in (0, 1]$$. Using $$\phi (z)=z^{\alpha }$$ in Theorem 3.1, we get

$$\sum_{i=1}^{n}\zeta _{i} \biggl( \frac{\eta _{i}}{\zeta _{i}} \biggr)^{ \alpha }\geq Q_{n}^{s} \biggl( \frac{\sum_{i=1}^{n} \zeta _{i}^{ \frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}} \biggr) ^{\alpha },$$

which is equivalent to (3.8). □

### Theorem 3.8

Let $$\boldsymbol{\eta }=(\eta _{1},\eta _{2},\ldots,\eta _{n})$$ and $$\boldsymbol{\zeta }=(\zeta _{1},\zeta _{2},\ldots,\zeta _{n})$$ be two positive real n-tuples, $$Q_{n}=\sum_{i=1}^{n}\zeta _{i}^{ \frac{1}{s}}$$ and $$s \in (0, 1]$$. Then

$$V(\boldsymbol{\eta },\boldsymbol{\zeta })\geq Q_{n}^{s} \biggl\vert \frac{ \sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}-\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}} \biggr\vert .$$
(3.9)

### Proof

If $$\phi (z)=|z-1|$$ ($$z\in \mathbb{R}$$), then clearly $$\phi (z)$$ is convex on $$\mathbb{R}$$. Moreover, for $$z\in \mathbb{R}$$, $$\phi (z) \geq 0$$. Hence, by Lemma 1.1, $$\phi (z)$$ is s-convex for $$s \in (0, 1]$$. Using $$\phi (z)=|z-1|$$ in Theorem 3.1, we get

$$\sum_{i=1}^{n}\zeta _{i} \biggl\vert \frac{\eta _{i}}{\zeta _{i}}-1 \biggr\vert \geq Q_{n}^{s} \biggl\vert \frac{\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}-1 \biggr\vert ,$$

which is equivalent to (3.9). □
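A numerical sanity check of (3.9); the tuples and the choice $$s=\frac{1}{2}$$ are arbitrary illustrative values.

```python
eta = [1.2, 0.3, 0.8]
zeta = [0.6, 0.9, 0.5]
s = 0.5  # any s in (0, 1]

V = sum(abs(e - z) for e, z in zip(eta, zeta))
Qn = sum(z ** (1.0 / s) for z in zeta)
num = sum(z ** ((1.0 - s) / s) * e for e, z in zip(eta, zeta))
bound = Qn ** s * abs((num - Qn) / Qn)  # right-hand side of (3.9)
assert V >= bound - 1e-12
```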

### Theorem 3.9

Let $$\boldsymbol{\eta }=(\eta _{1},\eta _{2},\ldots,\eta _{n})$$ and $$\boldsymbol{\zeta }=(\zeta _{1},\zeta _{2},\ldots,\zeta _{n})$$ be two positive real n-tuples, $$Q_{n}=\sum_{i=1}^{n}\zeta _{i}^{ \frac{1}{s}}$$ and $$s \in (0, 1]$$. Then

$$\chi ^{2}(\boldsymbol{\eta },\boldsymbol{\zeta })+\chi ^{2}( \boldsymbol{\zeta },\boldsymbol{\eta })\geq Q_{n}^{s} \biggl( \biggl(\frac{ \sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}-1 \biggr)^{2}+ \biggl( \frac{\sum_{i=1}^{n}\eta _{i} ^{\frac{1-s}{s}}\zeta _{i}}{\sum_{i=1}^{n}\eta _{i}^{\frac{1}{s}}}-1 \biggr) ^{2} \biggr).$$
(3.10)

### Proof

If $$\phi (z)=(z-1)^{2}$$ ($$z>0$$), then $$\phi ^{\prime \prime }(z)=2>0$$, so $$\phi (z)$$ is convex on $$(0, \infty )$$. Also, if $$z>0$$, then $$\phi (z)\geq 0$$. Hence, by Lemma 1.1, $$\phi (z)$$ is s-convex for $$s \in (0, 1]$$.

From Theorem 3.4, we have

$$\sum_{i=1}^{n} \frac{(\eta _{i}-\zeta _{i})^{2}}{\zeta _{i}}\geq Q_{n} ^{s} \biggl(\frac{\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{ \sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}-1 \biggr)^{2}.$$
(3.11)

By interchanging $$\eta _{i}$$ and $$\zeta _{i}$$ in Theorem 3.4, we get

$$\sum_{i=1}^{n} \frac{(\zeta _{i}-\eta _{i})^{2}}{\eta _{i}}\geq Q_{n}^{s} \biggl(\frac{\sum_{i=1}^{n}\eta _{i}^{\frac{1-s}{s}}\zeta _{i}}{\sum_{i=1}^{n}\eta _{i}^{\frac{1}{s}}}-1 \biggr)^{2}.$$
(3.12)

Adding (3.11) and (3.12), we get

$$\sum_{i=1}^{n}\frac{(\eta _{i}-\zeta _{i})^{2}}{\zeta _{i}}+\sum _{i=1} ^{n}\frac{(\zeta _{i}-\eta _{i})^{2}}{\eta _{i}}\geq Q_{n}^{s} \biggl(\frac{ \sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}-1 \biggr)^{2}+Q_{n}^{s} \biggl(\frac{\sum_{i=1} ^{n}\eta _{i}^{\frac{1-s}{s}}\zeta _{i}}{\sum_{i=1}^{n}\eta _{i}^{ \frac{1}{s}}}-1 \biggr)^{2},$$

which is equivalent to (3.10). □

### Theorem 3.10

Let $$\boldsymbol{\eta }=(\eta _{1},\eta _{2},\ldots,\eta _{n})$$ and $$\boldsymbol{\zeta }=(\zeta _{1},\zeta _{2},\ldots,\zeta _{n})$$ be two positive real n-tuples, $$Q_{n}=\sum_{i=1}^{n}\zeta _{i}^{ \frac{1}{s}}$$ and $$s \in (0, 1]$$. Then

$$\triangle (\boldsymbol{\eta }, \boldsymbol{\zeta })\geq Q_{n}^{s-1}\frac{ (\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}-\sum_{i=1}^{n}\zeta _{i} ^{\frac{1}{s}} )^{2}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}} \eta _{i}+\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}.$$
(3.13)

### Proof

If $$\phi (z)=\frac{(z-1)^{2}}{z+1}$$ ($$z>0$$), then $$\phi ^{\prime \prime }(z)= \frac{8}{(z+1)^{3}}\geq 0$$, so $$\phi (z)$$ is convex on $$(0, \infty )$$. Moreover, if $$z>0$$, then $$\phi (z)\geq 0$$. Hence, by Lemma 1.1, $$\phi (z)$$ is s-convex for $$s \in (0, 1]$$. Using $$\phi (z)=\frac{(z-1)^{2}}{z+1}$$ in Theorem 3.1, we have

\begin{aligned}& \sum_{i=1}^{n}\zeta _{i} \frac{(\frac{\eta _{i}}{\zeta _{i}}-1)^{2}}{\frac{ \eta _{i}}{\zeta _{i}}+1}\geq Q_{n}^{s}\frac{ \Bigl(\frac{\sum_{i=1} ^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{ \frac{1}{s}}}-1 \Bigr)^{2}}{ \Bigl(\frac{\sum_{i=1}^{n}\zeta _{i}^{ \frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}+1 \Bigr)}, \\& \sum_{i=1}^{n}\frac{(\eta _{i}-\zeta _{i})^{2}}{\eta _{i}+\zeta _{i}} \geq Q_{n}^{s}\frac{ (\sum_{i=1}^{n}\zeta _{i}^{ \frac{1-s}{s}}\eta _{i}-\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}} ) ^{2}}{\sum_{i=1}^{n}{\zeta _{i}^{\frac{1}{s}} (\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}+\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}} )}}, \end{aligned}

which is equivalent to (3.13). □

### Theorem 3.11

Let $$\boldsymbol{\eta }=(\eta _{1},\eta _{2},\ldots,\eta _{n})$$ and $$\boldsymbol{\zeta }=(\zeta _{1},\zeta _{2},\ldots,\zeta _{n})$$ be two positive real n-tuples, and $$Q_{n}=\sum_{i=1}^{n}\zeta _{i} ^{\frac{1}{s}}$$. Then the following statements are true:

1. (i)

If $$\eta _{i}\geq \zeta _{i}$$ for $$i\in \{1,2,\ldots,n \}$$ and $$s \in [1, \infty )$$, then

$$G(\boldsymbol{\eta }, \boldsymbol{\zeta })\geq Q_{n}^{s-1}\frac{\sum_{i=1}^{n} \zeta _{i}^{\frac{1-s}{s}}\eta _{i}+\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}{2} \log \frac{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}+\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{2\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}.$$
(3.14)
2. (ii)

If $$\eta _{i}<\zeta _{i}$$ for $$i\in \{1,2,\ldots,n\}$$ and $$s \in (0, 1]$$, then inequality (3.14) holds.

### Proof

(i) If $$\phi (z)=\frac{z+1}{2}\log \frac{1+z}{2z}$$ ($$z>0$$), then $$\phi ^{\prime \prime }(z)=\frac{1}{2z^{2}(z+1)}>0$$, so $$\phi (z)$$ is convex on $$(0, \infty )$$. Moreover, if $$z\geq 1$$, then $$\phi (z) \leq 0$$. Hence, by Lemma 1.1, $$\phi (z)$$ is s-convex for $$s\in [1, \infty )$$. Using $$\phi (z)$$ in Theorem 3.1, we have

$$\sum_{i=1}^{n} \zeta _{i} \frac{\eta _{i}+\zeta _{i}}{2 \zeta _{i}}\log \frac{ \eta _{i}+\zeta _{i}}{2 \eta _{i}} \geq Q_{n}^{s} \frac{\frac{\sum_{i=1} ^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{ \frac{1}{s}}}+1}{2}\log \frac{1+{\frac{\sum_{i=1}^{n}\zeta _{i}^{ \frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}}}{2\frac{ \sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}},$$

which is equivalent to (3.14).

(ii) If $$z\in (0,1]$$, then $$\phi (z)\geq 0$$. Hence, by Lemma 1.1, $$\phi (z)$$ is s-convex for $$s\in (0, 1]$$. Similar to part (i), using Theorem 3.1, we obtain (3.14). □

### Theorem 3.12

Let $$\boldsymbol{\eta }=(\eta _{1},\eta _{2},\ldots,\eta _{n})$$ and $$\boldsymbol{\zeta }=(\zeta _{1},\zeta _{2},\ldots,\zeta _{n})$$ be two positive real n-tuples, $$Q_{n}=\sum_{i=1}^{n}\zeta _{i}^{ \frac{1}{s}}$$ and $$s \in (0, 1]$$. Then

\begin{aligned} F(\boldsymbol{\eta }, \boldsymbol{\zeta })&=\frac{1}{2}\bigl[G( \boldsymbol{\eta }, \boldsymbol{\zeta })+ G(\boldsymbol{\zeta }, \boldsymbol{\eta })\bigr] \\ &\geq \Biggl(\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}} \Biggr)^{s} \frac{\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}+\sum_{i=1}^{n}\zeta _{i} ^{\frac{1}{s}}}{2\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}\log \sqrt {{\frac{\sum_{i=1}^{n}\zeta _{i}^{ \frac{1}{s}}+\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{2\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}}} \\ &\quad {}+ \Biggl(\sum_{i=1}^{n}\eta _{i}^{\frac{1}{s}} \Biggr)^{s}\frac{\sum_{i=1}^{n}\eta _{i}^{\frac{1-s}{s}}\zeta _{i}+\sum_{i=1}^{n} \eta _{i}^{\frac{1}{s}}}{2\sum_{i=1}^{n}\eta _{i}^{\frac{1}{s}}}\log \sqrt{\frac{ \sum_{i=1}^{n}\eta _{i}^{\frac{1}{s}}+\sum_{i=1}^{n}\eta _{i}^{ \frac{1-s}{s}}\zeta _{i}}{2\sum_{i=1}^{n}\eta _{i}^{\frac{1-s}{s}}\zeta _{i}}}. \end{aligned}

### Proof

If $$\phi (z)=\frac{z+1}{2}\log \frac{1+z}{2z}$$ ($$z>0$$), then $$\phi ^{\prime \prime }(z)=\frac{1}{2z^{2}(z+1)}>0$$, so $$\phi (z)$$ is convex on $$(0, \infty )$$. Moreover, if $$z\in (0, 1]$$, then $$\phi (z)\geq 0$$. Hence, by Lemma 1.1, $$\phi (z)$$ is s-convex for $$s \in (0, 1]$$. From Theorem 3.11 we have

\begin{aligned}& \sum_{i=1}^{n} \frac{\eta _{i}+\zeta _{i}}{2}\log \frac{\eta _{i}+\zeta _{i}}{2 \eta _{i}} \\& \quad \geq Q_{n}^{s}\frac{\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}+ \sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}{2\sum_{i=1}^{n}\zeta _{i}^{ \frac{1}{s}}}\log \frac{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}+\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{2\sum_{i=1}^{n}\zeta _{i} ^{\frac{1-s}{s}}\eta _{i}}. \end{aligned}
(3.15)

By interchanging $$\eta _{i}$$ and $$\zeta _{i}$$ in Theorem 3.11, we get

$$\sum_{i=1}^{n} \frac{\eta _{i}+\zeta _{i}}{2}\log \frac{\eta _{i}+\zeta _{i}}{2 \zeta _{i}}\geq \Biggl(\sum_{i=1}^{n}\eta _{i}^{\frac{1}{s}} \Biggr)^{s} \frac{\sum_{i=1}^{n}\eta _{i}^{ \frac{1-s}{s}}\zeta _{i}+\sum_{i=1}^{n}\eta _{i}^{\frac{1}{s}}}{2\sum_{i=1}^{n}\eta _{i}^{\frac{1}{s}}}\log \frac{\sum_{i=1}^{n}\eta _{i} ^{\frac{1}{s}}+\sum_{i=1}^{n}\eta _{i}^{\frac{1-s}{s}}\zeta _{i}}{2 \sum_{i=1}^{n}\eta _{i}^{\frac{1-s}{s}}\zeta _{i}}.$$
(3.16)

Adding (3.15) and (3.16), we obtain

\begin{aligned} \frac{1}{2}\bigl[G(\boldsymbol{\eta }, \boldsymbol{\zeta })+ G( \boldsymbol{\zeta }, \boldsymbol{\eta })\bigr] &\geq \Biggl(\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}} \Biggr)^{s} \frac{\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}+\sum_{i=1}^{n}\zeta _{i} ^{\frac{1}{s}}}{2\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}} \frac{1}{2} \log \frac{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}+\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{2\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}} \eta _{i}} \\ &\quad {}+ \Biggl(\sum_{i=1}^{n}\eta _{i}^{\frac{1}{s}} \Biggr)^{s}\frac{\sum_{i=1}^{n}\eta _{i}^{\frac{1-s}{s}}\zeta _{i}+ \sum_{i=1}^{n}\eta _{i}^{\frac{1}{s}}}{2\sum_{i=1}^{n}\eta _{i}^{ \frac{1}{s}}}\frac{1}{2}\log \frac{\sum_{i=1}^{n}\eta _{i}^{ \frac{1}{s}}+\sum_{i=1}^{n}\eta _{i}^{\frac{1-s}{s}}\zeta _{i}}{2\sum_{i=1}^{n}\eta _{i}^{\frac{1-s}{s}}\zeta _{i}}, \end{aligned}

namely

\begin{aligned} F(\boldsymbol{\eta }, \boldsymbol{\zeta })&=\sum_{i=1}^{n} \frac{\eta _{i}+\zeta _{i}}{2}\log \frac{\eta _{i}+{\zeta _{i}}}{2\sqrt{\eta _{i} \zeta _{i}}} \\ & \geq \Biggl(\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}} \Biggr)^{s} \frac{\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}+\sum_{i=1}^{n}\zeta _{i} ^{\frac{1}{s}}}{2\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}\log \sqrt {{\frac{\sum_{i=1}^{n}\zeta _{i}^{ \frac{1}{s}}+\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{2\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}}} \\ &\quad {}+ \Biggl(\sum_{i=1}^{n}\eta _{i}^{\frac{1}{s}} \Biggr)^{s}\frac{\sum_{i=1}^{n}\eta _{i}^{\frac{1-s}{s}}\zeta _{i}+\sum_{i=1}^{n} \eta _{i}^{\frac{1}{s}}}{2\sum_{i=1}^{n}\eta _{i}^{\frac{1}{s}}}\log \sqrt{\frac{ \sum_{i=1}^{n}\eta _{i}^{\frac{1}{s}}+\sum_{i=1}^{n}\eta _{i}^{ \frac{1-s}{s}}\zeta _{i}}{2\sum_{i=1}^{n}\eta _{i}^{\frac{1-s}{s}}\zeta _{i}}}. \end{aligned}

□

In the following theorem, we obtain a bound for the Bhattacharyya divergence by utilizing a function that is s-convex but not convex.

### Theorem 3.13

Let $$\boldsymbol{\eta }=(\eta _{1},\eta _{2},\ldots,\eta _{n})$$ and $$\boldsymbol{\zeta }=(\zeta _{1},\zeta _{2},\ldots,\zeta _{n})$$ be two positive real n-tuples, $$Q_{n}=\sum_{i=1}^{n}\zeta _{i}^{ \frac{1}{s}}$$ and $$0< s\leq \frac{1}{2}$$. Then

$$B(\boldsymbol{\eta }, \boldsymbol{\zeta })\geq Q_{n}^{s} \sqrt{\frac{ \sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{\sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}}.$$
(3.17)

### Proof

First we show that $$\phi (z)=\sqrt{z}$$ is s-convex for $$z>0$$ and $$s\in (0, 1/2]$$, namely we show that

$$\sqrt{\lambda {z_{1}}+(1-\lambda ){z_{2}}} \leq \lambda ^{s} \sqrt{z _{1}}+(1-\lambda )^{s} \sqrt{z_{2}}$$
(3.18)

for $$\lambda \in (0, 1)$$ and $$s\in (0, 1/2]$$.

Since both sides are non-negative, squaring shows that (3.18) is equivalent to

$$\lambda {z_{1}}+(1-\lambda )z_{2}\leq \lambda ^{2s} z_{1}+(1-\lambda )^{2s}z_{2}+ 2 \lambda ^{s}(1-\lambda )^{s}\sqrt{z_{1}z_{2}} ,$$

which can be rewritten as

$$\bigl(\lambda ^{2s}-\lambda \bigr)z_{1}+ \bigl((1-\lambda )^{2s}-(1-\lambda ) \bigr)z_{2}+ 2\lambda ^{s}(1- \lambda )^{s}\sqrt{z_{1}z_{2}}\geq 0.$$

Let $$\lambda =1/p$$ ($$p>1$$). Then

$$\lambda ^{2s-1}= p^{1-2s}\geq 1$$

for $$s\in (0, 1/2]$$.

Namely,

$$\lambda ^{2s}- \lambda \geq 0$$
(3.19)

for $$s\in (0, 1/2]$$.

As $$\lambda \in (0, 1)$$, we also have $$1- \lambda \in (0, 1)$$, so applying (3.19) with λ replaced by $$1-\lambda$$ gives

$$(1-\lambda )^{2s} \geq (1-\lambda ).$$
(3.20)

Since the cross term $$2\lambda ^{s}(1-\lambda )^{s}\sqrt{z_{1}z_{2}}$$ is non-negative, (3.19) and (3.20) yield (3.18), namely $$\phi (z)$$ is s-convex for $$s \in (0,\frac{1}{2} ]$$.

Now, using $$\phi (z)=\sqrt{z}$$ in Theorem 3.1, we obtain

$$\sum_{i=1}^{n}\zeta _{i}\sqrt{ \frac{\eta _{i}}{\zeta _{i}}} \geq Q_{n} ^{s} \sqrt{ \frac{\sum_{i=1}^{n}\zeta _{i}^{\frac{1-s}{s}}\eta _{i}}{ \sum_{i=1}^{n}\zeta _{i}^{\frac{1}{s}}}},$$

which is equivalent to (3.17). □
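A numerical sanity check of (3.17); the tuples and the choice $$s=0.4$$ are arbitrary illustrative values with $$s\in (0, \frac{1}{2} ]$$.

```python
import math

eta = [0.5, 1.0, 2.0]
zeta = [0.8, 0.6, 1.1]
s = 0.4  # any s in (0, 1/2]

B = sum(math.sqrt(e * z) for e, z in zip(eta, zeta))
Qn = sum(z ** (1.0 / s) for z in zeta)
A = sum(z ** ((1.0 - s) / s) * e for e, z in zip(eta, zeta)) / Qn
bound = Qn ** s * math.sqrt(A)  # right-hand side of (3.17)
assert B >= bound - 1e-9
```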

## 4 Conclusion

In the literature, there are several results on Jensen’s inequality for convex functions; in particular, Jensen’s inequality for convex functions has many applications in information theory. In this paper, we associated results for s-convex functions with several divergences and proposed several applications of Jensen’s inequality for s-convex functions in information theory. We obtained generalized inequalities for different divergences by using Jensen’s inequality for s-convex functions. The results obtained in this paper may also open the door to further results in information theory for s-convex functions.