1 Introduction and preliminaries

Information theory is a branch of science concerned with the quantification, storage, and transmission of information. Information is difficult to quantify because it is an abstract object. Claude Shannon [1], the founder of information theory, introduced entropy as the basic measure of information in 1948 in the first of his two fundamental theorems. Information can also be measured through probability density functions. The distance between two probability distributions is measured by a divergence measure, a concept of probability theory used to address a variety of problems. Divergence measures, which compare two probability distributions, are widely studied in statistics and information theory. Information and divergence measures are highly valuable and play a significant role in sensor networks [2], finance [3], economics [4], and the approximation of probability distributions [5].

In [6], Adeel et al. generalized the Levinson inequality and gave fruitful results in information theory. Khan et al. [7] used Abel–Gontscharoff interpolation to present Levinson-type inequalities for convex functions of higher order. Adeel et al. [8] estimated the Shannon entropy and the f-divergence by utilizing new Lidstone polynomials and Green functions in connection with Levinson-type inequalities. By using the Hermite interpolating polynomial, Khan et al. [9] obtained Levinson-type inequalities for convex functions of higher order and provided estimates for the Shannon entropy and the f-divergence. In [10], Adeel et al. used Bullen-type inequalities to estimate different entropies and the f-divergence via Fink’s identity. Khan et al. [11] gave various entropy results related to Levinson-type inequalities using Green functions and also presented results for the Hermite interpolating polynomial. For 2n-convex functions, Khan et al. [12] generalized Levinson-type inequalities by applying the Lidstone interpolating polynomial. In [13], Adeel et al. used Green functions to obtain generalized Levinson-type inequalities via the Montgomery identity; they also found bounds for different entropies and divergences. However, in [6, 11, 13], all the generalizations and results are proved using only two Green functions. In this study, four newly defined 3-convex Green functions are used to generalize the Levinson-type inequalities, and bounds for different entropies are given.

Higher-order convex functions are defined using divided difference techniques.

Definition 1.1

[14, p. 14] For a function \(h:[\varpi _{1},\varpi _{2}] \rightarrow \mathbb{R}\), the divided difference of order n at mutually distinct points \(u_{0},\ldots,u_{n}\in [\varpi _{1},\varpi _{2}]\) is recursively defined by

$$\begin{aligned}& [u_{\sigma}; h ] = h (u_{\sigma} ),\quad \sigma = 0, \ldots ,n, \\& [ u_{0},\dots ,u_{n};h ] = \frac{ [ u_{1},\ldots ,u_{n};h ] - [ u_{0},\ldots ,u_{n-1};h ] }{u_{n}-u_{0}}. \end{aligned}$$
(1)

It is clear that (1) is identical to

$$ [ u_{0},\ldots , u_{n};h ]=\sum _{\sigma =0}^{n} \frac{h (u_{\sigma} )}{l^{\prime } (u_{\sigma} )},\quad \text{where } l (u )= \prod_{e=0}^{n} (u-u_{e} ). $$
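For readers who wish to experiment numerically, the following Python sketch (ours, not part of the original text) evaluates a divided difference both by the recursion (1) and by the closed-form sum above; the function and points are arbitrary illustrative choices.

```python
# A small numerical sketch (ours, not from the source): divided differences
# computed via the recursion (1) and via the closed-form sum, for comparison.
from math import prod

def divided_difference_recursive(points, h):
    """[u_0, ..., u_n; h] by the recursive definition (1)."""
    if len(points) == 1:
        return h(points[0])
    return (divided_difference_recursive(points[1:], h)
            - divided_difference_recursive(points[:-1], h)) / (points[-1] - points[0])

def divided_difference_sum(points, h):
    """[u_0, ..., u_n; h] = sum_k h(u_k) / l'(u_k), where l(u) = prod_e (u - u_e)."""
    return sum(h(uk) / prod(uk - ue for j, ue in enumerate(points) if j != k)
               for k, uk in enumerate(points))

pts = [0.0, 0.3, 1.1, 2.0]                      # mutually distinct points
h = lambda u: u**3                               # third divided difference of u^3 is 1
print(divided_difference_recursive(pts, h))      # ~1.0
print(divided_difference_sum(pts, h))            # ~1.0
```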

The nth-order divided difference is used to define n-convexity of a real-valued function in the following formulation (see [14, p. 15]).

Definition 1.2

For \((n+1 )\) distinct points \(u_{0},\ldots ,u_{n} \in [\varpi _{1},\varpi _{2}]\), a function \(f : [\varpi _{1},\varpi _{2}] \rightarrow \mathbb{R}\) is called n-convex \((n\geq 0 )\) if and only if

$$ [u_{0},\ldots ,u_{n};f ]\geq 0 $$

holds.

If \([u_{0},\ldots ,u_{n};f ] \leq 0\), then f is n-concave.

A criterion for n-convexity is given in [14, p. 16] as follows:

Theorem 1.1

f is n-convex if and only if \(f^{(n)} \geq 0\), given that \(f^{(n)}\) exists.

Levinson [15] extended Ky Fan’s inequality to the class of 3-convex functions as follows:

Theorem A

Suppose \(f :\mathbb{I}_{2}=(0, 2\gamma ) \rightarrow \mathbb{R}\) is such that \(f^{(3)}(z)\geq 0\). Consider \(x_{\sigma} \in (0, \gamma )\) and \(p_{\sigma}>0\) for \(\sigma =1,\ldots ,\eta \), and set \(P_{\eta}=\sum_{\sigma =1}^{\eta}p_{\sigma}\). Then

$$\begin{aligned} \frac{1}{P_{\eta}}\sum_{\sigma =1}^{\eta}p_{\sigma}f(x_{\sigma})- f \Biggl(\frac{1}{P_{\eta}}\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma} \Biggr) \leq & \frac{1}{P_{\eta}}\sum_{\sigma =1}^{\eta}p_{\sigma}f(2 \gamma -x_{\sigma}) \\ &{}-f \Biggl(\frac{1}{P_{\eta}}\sum_{\sigma =1}^{\eta}p_{\sigma}(2 \gamma -x_{\sigma}) \Biggr). \end{aligned}$$
(2)
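As a quick numerical illustration (our sketch, not from the source), one may check (2) for the 3-convex function \(f(x)=x^{3}\), whose third derivative is nonnegative on \((0, 2\gamma )\); the weights and points below are arbitrary choices.

```python
# A numerical sanity check (our sketch, not from the source) of Levinson's
# inequality (2) for f(x) = x**3, which has f''' = 6 >= 0 on (0, 2*gamma).
def jensen_gap(f, x, p):
    """(1/P) * sum p*f(x) - f((1/P) * sum p*x)."""
    P = sum(p)
    mean = sum(pi * xi for pi, xi in zip(p, x)) / P
    return sum(pi * f(xi) for pi, xi in zip(p, x)) / P - f(mean)

gamma = 1.0
f = lambda t: t**3
x = [0.2, 0.5, 0.9]                      # points in (0, gamma)
p = [1.0, 2.0, 3.0]                      # positive weights

lhs = jensen_gap(f, x, p)
rhs = jensen_gap(f, [2 * gamma - xi for xi in x], p)
print(lhs <= rhs)                        # expected: True, in accordance with (2)
```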

Popoviciu [16] observed that Levinson’s Inequality (2) remains valid for 3-convex functions on \((0, 2\gamma )\), and Bullen [17] gave a different proof of Popoviciu’s result together with a converse of (2).

Theorem B

(i) Let \(f:\mathbb{I}_{0}=[\varpi _{1}, \varpi _{2}] \rightarrow \mathbb{R}\) be a convex function of order three, and let \(x_{\sigma}, y_{\sigma} \in \mathbb{I}_{0}\) for \(\sigma =1, 2, \ldots , \eta \) and \(p_{\sigma}>0\) be such that

$$ \min \{y_{1},\ldots ,y_{\eta}\}\geq \max \{x_{1},\ldots ,x_{\eta}\} ,\qquad y_{1}+x_{1}= \cdots =y_{\eta}+x_{\eta} $$
(3)

then

$$ \frac{1}{P_{\eta}}\sum_{\sigma =1}^{\eta} p_{\sigma}f(x_{\sigma})-f \Biggl(\frac{1}{P_{\eta}}\sum _{\sigma =1}^{\eta}p_{\sigma}x_{\sigma} \Biggr)\leq \frac{1}{P_{\eta}}\sum_{\sigma =1}^{\eta}p_{\sigma}f(y_{ \sigma})-f \Biggl(\frac{1}{P_{\eta}}\sum_{\sigma =1}^{\eta}p_{\sigma}y_{ \sigma} \Biggr). $$
(4)

(ii) Let \(p_{\sigma}>0\). If f is continuous and (4) holds for all \(x_{\sigma}\), \(y_{\sigma}\) satisfying (3), then f is 3-convex.

In the following result, Pečarić [18] proved Inequality (4) under a weaker condition than (3).

Theorem C

Assume \(f:\mathbb{I}_{0} \rightarrow \mathbb{R}\) is such that \(f^{(3)}(t)\geq 0\) and \(p_{\sigma}>0\). Also, let \(x_{\sigma}, y_{\sigma} \in \mathbb{I}_{0}\) be such that \(x_{\sigma}+y_{\sigma}=2\breve{c}\) for \(\sigma =1, \ldots , \eta \), \(x_{\sigma}+x_{\eta -\sigma +1}\leq 2 \breve{c}\), and \(\frac{p_{\sigma}x_{\sigma}+p_{\eta - \sigma +1}x_{\eta -\sigma +1}}{p_{\sigma}+p_{\eta -\sigma +1}} \leq \breve{c}\). Then (4) is true.

In [19], Mercer showed that (4) remains valid when the symmetry condition on the points is replaced by equality of their weighted variances.

Theorem D

Suppose f is a 3-convex function defined on \(\mathbb{I}_{0}\), and let the weights \(p_{\sigma}\) be such that \(\sum_{\sigma =1}^{\eta}p_{\sigma}=1\). Choose \(x_{\sigma}\), \(y_{\sigma}\) such that \(\min \{y_{1},\ldots ,y_{\eta}\}\geq \max \{x_{1},\ldots ,x_{\eta}\}\) and

$$ \sum_{\sigma =1}^{\eta}p_{\sigma} \Biggl(x_{\sigma}-\sum_{\sigma =1}^{ \eta}p_{\sigma}x_{\sigma} \Biggr)^{2}=\sum_{\sigma =1}^{\eta}p_{ \sigma} \Biggl(y_{\sigma}- \sum_{\sigma =1}^{\eta}p_{\sigma}y_{\sigma} \Biggr)^{2}, $$
(5)

then (4) holds.

Let \(\mathfrak{g}=[\hat{e}_{1},\hat{e}_{2}]\subset \mathbb{R}\), \(\hat{e}_{1}<\hat{e}_{2}\), and \(\lambda =1,\ldots ,4\). In [20], Pečarić et al. defined a new type of Green functions \(\hat{G}_{\lambda}:\mathfrak{g}\times \mathfrak{g}\rightarrow \mathbb{R}\), given as follows:

$$\begin{aligned}& \hat{G}_{1}(\hat{\varphi},\vartheta )=\textstyle\begin{cases} \vartheta -\hat{e}_{1}, & \hat{e}_{1}\leq \vartheta \leq \hat{\varphi}, \\ \hat{\varphi}-\hat{e}_{1}, & \hat{\varphi}\leq \vartheta \leq \hat{e}_{2}, \end{cases}\displaystyle \end{aligned}$$
(6)
$$\begin{aligned}& \hat{G}_{2}(\hat{\varphi},\vartheta )=\textstyle\begin{cases} \hat{\varphi}-\hat{e}_{2}, & \hat{e}_{1}\leq \vartheta \leq \hat{\varphi}, \\ \vartheta -\hat{e}_{2}, & \hat{\varphi}\leq \vartheta \leq \hat{e}_{2}, \end{cases}\displaystyle \end{aligned}$$
(7)
$$\begin{aligned}& \hat{G}_{3}(\hat{\varphi},\vartheta )=\textstyle\begin{cases} \hat{\varphi}-\hat{e}_{1}, & \hat{e}_{1}\leq \vartheta \leq \hat{\varphi}, \\ \vartheta -\hat{e}_{1}, & \hat{\varphi}\leq \vartheta \leq \hat{e}_{2}, \end{cases}\displaystyle \end{aligned}$$
(8)
$$\begin{aligned}& \hat{G}_{4}(\hat{\varphi},\vartheta )=\textstyle\begin{cases} \vartheta -\hat{e}_{2}, & \hat{e}_{1}\leq \vartheta \leq \hat{\varphi}, \\ \hat{\varphi}-\hat{e}_{2}, & \hat{\varphi}\leq \vartheta \leq \hat{e}_{2}. \end{cases}\displaystyle \end{aligned}$$
(9)

Using these Green functions, they also proved the following Abel–Gontscharoff-type identities:

$$\begin{aligned}& f(\hat{\varphi})=f(\hat{e}_{1})+(\hat{\varphi}- \hat{e}_{1})f'(\hat{e}_{2})- \int _{\mathfrak{g}}\hat{G}_{1} (\hat{\varphi},\vartheta )f''( \vartheta )\,d\vartheta , \end{aligned}$$
(10)
$$\begin{aligned}& f(\hat{\varphi})=f(\hat{e}_{2})-(\hat{e}_{2}- \hat{\varphi})f'(\hat{e}_{1})+ \int _{\mathfrak{g}}\hat{G}_{2} (\hat{\varphi},\vartheta )f''( \vartheta )\,d\vartheta , \end{aligned}$$
(11)
$$\begin{aligned}& f(\hat{\varphi})=f(\hat{e}_{2})+(\hat{\varphi}- \hat{e}_{1})f'(\hat{e}_{1})-( \hat{e}_{2}-\hat{e}_{1})f'(\hat{e}_{2}) + \int _{\mathfrak{g}}\hat{G}_{3}( \hat{\varphi},\vartheta )f''(\vartheta )\,d\vartheta , \end{aligned}$$
(12)
$$\begin{aligned}& f(\hat{\varphi})=f(\hat{e}_{1})+(\hat{e}_{2}- \hat{e}_{1})f'(\hat{e}_{1})-( \hat{e}_{2}-\hat{\varphi})f'(\hat{e}_{2}) - \int _{\mathfrak{g}} \hat{G}_{4}(\hat{\varphi},\vartheta )f''(\vartheta )\,d\vartheta , \end{aligned}$$
(13)

where \(f:[\hat{e}_{1},\hat{e}_{2}]\rightarrow \mathbb{R}\) is assumed to be twice continuously differentiable.
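The identities (10)–(13) can be checked numerically; the following sketch (ours, with the arbitrary test function \(f(t)=e^{t}\) and a simple trapezoidal rule) verifies (10):

```python
# A sketch (ours) that checks identity (10) numerically with the arbitrary test
# function f(t) = exp(t), so that f' = f'' = exp, using a simple trapezoidal rule.
from math import exp

def G1_hat(phi, t, e1, e2):
    """The Green function (6)."""
    return t - e1 if t <= phi else phi - e1

def trapezoid(g, a, b, n=20000):
    h = (b - a) / n
    return h * (0.5 * g(a) + sum(g(a + k * h) for k in range(1, n)) + 0.5 * g(b))

e1, e2, phi = 0.0, 2.0, 1.3
f = exp
rhs = f(e1) + (phi - e1) * f(e2) - trapezoid(          # f'(e2) = exp(e2)
    lambda t: G1_hat(phi, t, e1, e2) * f(t), e1, e2)   # f''(t) = exp(t)
print(abs(f(phi) - rhs) < 1e-6)                        # expected: True
```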

This work is arranged as follows. In Sect. 2, a new type of 3-convex Green functions is defined, identities related to these Green functions are established, and Levinson’s inequality for 3-convex functions is generalized by means of them. In Sect. 3, the results are applied to information theory via the f-divergence, the Shannon entropy, the Rényi entropy and divergence, and the Zipf–Mandelbrot law.

2 Main results

First, we define a new type of 3-convex Green functions, illustrate them graphically, and state a lemma based on them. Then we present Levinson-type inequalities obtained via these new Green functions.

Let \(\mathfrak{g}=[\hat{e}_{1},\hat{e}_{2}]\subset (-\infty ,\infty )\) and \(\lambda \in \{1,2,3,4\}\). The new 3-convex Green functions \(G_{\lambda}:\mathfrak{g}\times \mathfrak{g}\rightarrow \mathbb{R}\) are defined as:

$$\begin{aligned}& G_{1}(\hat{\varphi},\vartheta )=\textstyle\begin{cases} \frac{1}{2}(\vartheta -\hat{e}_{1})^{2}+(\hat{\varphi}-\hat{e}_{1})( \hat{\varphi}-\hat{e}_{2}), & \hat{e}_{1}\leq \vartheta \leq \hat{\varphi}, \\ (\hat{\varphi}-\hat{e}_{1})(\vartheta -\hat{e}_{2})+ \frac{(\hat{\varphi}-\hat{e}_{1})^{2}}{2}, &\hat{\varphi}\leq \vartheta \leq \hat{e}_{2}, \end{cases}\displaystyle \end{aligned}$$
(14)
$$\begin{aligned}& G_{2}(\hat{\varphi},\vartheta )=\textstyle\begin{cases} (\hat{\varphi}-\hat{e}_{2})(\vartheta -\hat{e}_{1})+\frac{1}{2}( \hat{\varphi}-\hat{e}_{2})^{2}, & \hat{e}_{1}\leq \vartheta \leq \hat{\varphi}, \\ \frac{(\vartheta -\hat{e}_{2})^{2}}{2}+(\hat{\varphi}-\hat{e}_{1})( \hat{\varphi}-\hat{e}_{2}), & \hat{\varphi}\leq \vartheta \leq \hat{e}_{2}, \end{cases}\displaystyle \end{aligned}$$
(15)
$$\begin{aligned}& G_{3}(\hat{\varphi},\vartheta )=\textstyle\begin{cases} (\hat{\varphi}-\hat{e}_{1})(\vartheta -\hat{e}_{2})+ \frac{(\hat{\varphi}-\hat{e}_{1})^{2}}{2}, & \hat{e}_{1}\leq \vartheta \leq \hat{\varphi}, \\ \frac{1}{2}(\vartheta -\hat{e}_{1})^{2}+(\hat{\varphi}-\hat{e}_{1})( \hat{\varphi}-\hat{e}_{2}), & \hat{\varphi}\leq \vartheta \leq \hat{e}_{2}, \end{cases}\displaystyle \end{aligned}$$
(16)
$$\begin{aligned}& G_{4}(\hat{\varphi},\vartheta )=\textstyle\begin{cases} \frac{(\vartheta -\hat{e}_{2})^{2}}{2}+(\hat{\varphi}-\hat{e}_{2})( \hat{\varphi}-\hat{e}_{1}), & \hat{e}_{1}\leq \vartheta \leq \hat{\varphi}, \\ (\hat{\varphi}-\hat{e}_{2})(\vartheta -\hat{e}_{1})+\frac{1}{2}( \hat{\varphi}-\hat{e}_{2})^{2}, & \hat{\varphi}\leq \vartheta \leq \hat{e}_{2}. \end{cases}\displaystyle \end{aligned}$$
(17)
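For concreteness, the following Python transcription (ours, not part of the original text) of (14)–(17) may be used in numerical experiments; a small check of continuity at \(\vartheta =\hat{\varphi}\) is included.

```python
# A direct transcription (our sketch) of the new Green functions (14)-(17);
# phi plays the role of \hat{\varphi} and t the role of \vartheta, both in [e1, e2].
def G1(phi, t, e1, e2):
    if t <= phi:
        return 0.5 * (t - e1)**2 + (phi - e1) * (phi - e2)
    return (phi - e1) * (t - e2) + 0.5 * (phi - e1)**2

def G2(phi, t, e1, e2):
    if t <= phi:
        return (phi - e2) * (t - e1) + 0.5 * (phi - e2)**2
    return 0.5 * (t - e2)**2 + (phi - e1) * (phi - e2)

def G3(phi, t, e1, e2):
    if t <= phi:
        return (phi - e1) * (t - e2) + 0.5 * (phi - e1)**2
    return 0.5 * (t - e1)**2 + (phi - e1) * (phi - e2)

def G4(phi, t, e1, e2):
    if t <= phi:
        return 0.5 * (t - e2)**2 + (phi - e2) * (phi - e1)
    return (phi - e2) * (t - e1) + 0.5 * (phi - e2)**2

# Each G_k is continuous across t = phi:
phi, e1, e2 = 0.7, 0.0, 2.0
for G in (G1, G2, G3, G4):
    assert abs(G(phi, phi - 1e-12, e1, e2) - G(phi, phi + 1e-12, e1, e2)) < 1e-9
```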

Figure 1 depicts the graphs of \(G_{k}\) (\(k=1,\ldots ,4\)). These new Green functions are used in the following lemma.

Figure 1

Graph of Green functions \(G_{\lambda}\) for different values of φ̂ and ϑ

Lemma 2.1

Let f be a function defined on \(\mathfrak{g}\) such that \(f'''\) exists, and let \(G_{\lambda}\) (\(\lambda =1,\dots ,4\)) be the two-point right focal problem-type Green functions given by (14)–(17). Then the following identities hold:

$$\begin{aligned}& f(\hat{\varphi})=f(\hat{e}_{1})+(\hat{\varphi}- \hat{e}_{1})f'(\hat{e}_{2})+( \hat{\varphi}- \hat{e}_{1}) (\hat{\varphi}-\hat{e}_{2}) f''( \hat{e}_{1})- \frac{(\hat{\varphi}-\hat{e}_{1})^{2}}{2}f''( \hat{e}_{2}) \\& \hphantom{f(\hat{\varphi})=}{}+ \int _{\mathfrak{g}}G_{1} (\hat{\varphi},\vartheta )f'''(\vartheta )\,d\vartheta , \end{aligned}$$
(18)
$$\begin{aligned}& f(\hat{\varphi})=f(\hat{e}_{2})-(\hat{e}_{2}- \hat{\varphi})f'(\hat{e}_{1}) -f''( \hat{e}_{1})\frac{(\hat{e}_{2}-\hat{\varphi})^{2}}{2}+( \hat{\varphi}-\hat{e}_{1}) (\hat{\varphi}-\hat{e}_{2})f''( \hat{e}_{2}) \\& \hphantom{f(\hat{\varphi})=}{}- \int _{\mathfrak{g}}G_{2} (\hat{\varphi},\vartheta )f'''(\vartheta )\,d\vartheta , \end{aligned}$$
(19)
$$\begin{aligned}& f(\hat{\varphi}) = f(\hat{e}_{2})+(\hat{\varphi}- \hat{e}_{1})f'( \hat{e}_{1})-( \hat{e}_{2}-\hat{e}_{1})f'(\hat{e}_{2}) \\& \hphantom{f(\hat{\varphi}) =}{} -f''(\hat{e}_{1}) \biggl[ \frac{(\hat{\varphi}-\hat{e}_{1})^{2}}{2}+( \hat{\varphi}-\hat{e}_{1}) (\hat{e}_{1}- \hat{e}_{2}) \biggr] \\& \hphantom{f(\hat{\varphi}) =}{} + f''(\hat{e}_{2}) \biggl[ \frac{(\hat{e}_{2}-\hat{e}_{1})^{2}}{2}+( \hat{\varphi}-\hat{e}_{1}) (\hat{\varphi}- \hat{e}_{2}) \biggr] - \int _{ \mathfrak{g}}G_{3} (\hat{\varphi},\vartheta )f'''(\vartheta )\,d\vartheta , \end{aligned}$$
(20)
$$\begin{aligned}& f(\hat{\varphi})=f(\hat{e}_{1})+( \hat{e}_{2}-\hat{e}_{1})f'(\hat{e}_{1})-( \hat{e}_{2}-\hat{\varphi})f'(\hat{e}_{2}) \\& \hphantom{f(\hat{\varphi})=}{} +f''(\hat{e}_{1}) \biggl[(\hat{\varphi}- \hat{e}_{2}) (\hat{\varphi}- \hat{e}_{1})+\frac{(\hat{e}_{2}-\hat{e}_{1})^{2}}{2} \biggr] \\& \hphantom{f(\hat{\varphi})=}{}- \biggl[(\hat{\varphi}-\hat{e}_{2}) (\hat{e}_{2}- \hat{e}_{1})+ \frac{(\hat{\varphi}-\hat{e}_{2})^{2}}{2} \biggr]f''( \hat{e}_{2}) + \int _{\mathfrak{g}}f'''(w)G_{4} (\hat{\varphi},\vartheta )\,d\vartheta . \end{aligned}$$
(21)

Proof

All of the above identities can be proved by the same integration-by-parts approach; therefore, we give the proof of (21) only.

$$\begin{aligned}& \int _{\mathfrak{g}}G_{4}(\hat{\varphi},\vartheta )f'''(\vartheta )\,d\vartheta \\& \quad = \int _{\hat{e}_{1}}^{\hat{\varphi}}f'''( \vartheta ) \biggl[ \frac{(\vartheta -\hat{e}_{2})^{2}}{2} +(\hat{\varphi}-\hat{e}_{2}) ( \hat{\varphi}-\hat{e}_{1}) \biggr]\,d\vartheta \\& \qquad {}+ \int _{\hat{\varphi}}^{\hat{e}_{2}}f'''( \vartheta ) \biggl[( \hat{\varphi}-\hat{e}_{2}) (\vartheta - \hat{e}_{1}) + \frac{(\hat{\varphi}-\hat{e}_{2})^{2}}{2} \biggr]\,d\vartheta \\& \quad = \biggl[ \biggl\vert f''(\vartheta )\biggl\{ \frac{(\vartheta -\hat{e}_{2})^{2}}{2} +( \hat{\varphi}-\hat{e}_{2}) (\hat{\varphi}- \hat{e}_{1})\biggr\} \biggr\vert _{\hat{e}_{1}}^{ \hat{\varphi}} - \int _{\hat{e}_{1}}^{\hat{\varphi}}f''( \vartheta ) ( \vartheta -\hat{e}_{2})\,d\vartheta \biggr] \\& \qquad {}+ \biggl[ \biggl\vert f''(\vartheta )\biggl\{ ( \hat{\varphi}-\hat{e}_{2}) (\vartheta - \hat{e}_{1}) + \frac{(\hat{\varphi}-\hat{e}_{2})^{2}}{2}\biggr\} \biggr\vert _{ \hat{\varphi}}^{\hat{e}_{2}} - \int _{\hat{\varphi}}^{\hat{e}_{2}}f''( \vartheta ) (\hat{\varphi}-\hat{e}_{2})\,d\vartheta \biggr] \\& \quad = \biggl[f''(\hat{\varphi})\biggl\{ \frac{(\hat{\varphi}-\hat{e}_{2})^{2}}{2} +(\hat{\varphi}-\hat{e}_{2}) ( \hat{\varphi}- \hat{e}_{1})\biggr\} -f''( \hat{e}_{1})\biggl\{ (\hat{\varphi}- \hat{e}_{2}) (\hat{ \varphi}-\hat{e}_{1}) \\& \qquad {}+\frac{(\hat{e}_{1}-\hat{e}_{2})^{2}}{2}\biggr\} - \bigl\vert f'(\vartheta ) ( \vartheta -\hat{e}_{2}) \bigr\vert _{\hat{e}_{1}}^{\hat{\varphi}} + \int _{ \hat{e}_{1}}^{\hat{\varphi}}f'(\vartheta )\,d\vartheta \biggr] \\& \qquad {}+ \biggl[f''(\hat{e}_{2})\biggl\{ (\hat{ \varphi}-\hat{e}_{2}) (\hat{e}_{2}- \hat{e}_{1}) + \frac{(\hat{\varphi}-\hat{e}_{2})^{2}}{2}\biggr\} -\biggl\{ ( \hat{\varphi}-\hat{e}_{2}) (\hat{ \varphi}-\hat{e}_{1}) \\& \qquad {}+f''(\hat{\varphi})\frac{(\hat{\varphi}-\hat{e}_{2})^{2}}{2}\biggr\} - ( \hat{\varphi}-\hat{e}_{2}) \bigl\vert f'(\vartheta ) \bigr\vert _{\hat{\varphi}}^{\hat{e}_{2}} \biggr] \\& \quad = \biggl[\frac{(\hat{\varphi}-\hat{e}_{2})^{2}}{2}+(\hat{\varphi}- \hat{e}_{2}) (\hat{ \varphi}-\hat{e}_{1}) -(\hat{\varphi}-\hat{e}_{2}) ( \hat{ \varphi}-\hat{e}_{1})-\frac{(\hat{\varphi}-\hat{e}_{2})^{2}}{2} \biggr] f''( \hat{\varphi}) \\& \qquad {}- \biggl[\frac{(\hat{e}_{2}-\hat{e}_{1})^{2}}{2} +(\hat{\varphi}- \hat{e}_{2}) (\hat{ \varphi}-\hat{e}_{1}) \biggr]f''( \hat{e}_{1}) + \biggl[( \hat{\varphi}-\hat{e}_{2}) ( \hat{e}_{2}-\hat{e}_{1}) \\& \qquad {}+\frac{(\hat{\varphi}-\hat{e}_{2})^{2}}{2} \biggr]f''(\hat{e}_{2}) -( \hat{\varphi}-\hat{e}_{2})f'(\hat{\varphi}) +( \hat{e}_{1}-\hat{e}_{2})f'( \hat{e}_{1})+f(\hat{\varphi})-f(\hat{e}_{1}) \\& \qquad {}-(\hat{\varphi}-\hat{e}_{2})f'(\hat{e}_{2})+( \hat{\varphi}-\hat{e}_{2})f'( \hat{\varphi}) \\& \quad = \bigl[(\hat{\varphi}-\hat{e}_{2})-(\hat{\varphi}- \hat{e}_{2}) \bigr]f'(\hat{\varphi}) - \biggl[ \frac{(\hat{e}_{2}-\hat{e}_{1})^{2}}{2}+(\hat{\varphi}-\hat{e}_{2}) ( \hat{\varphi}- \hat{e}_{1}) \biggr]f''(\hat{e}_{1}) \\& \qquad {}+ \biggl[(\hat{\varphi}-\hat{e}_{2}) (\hat{e}_{2}- \hat{e}_{1})+ \frac{(\hat{\varphi}-\hat{e}_{2})^{2}}{2} \biggr] f''( \hat{e}_{2})+( \hat{e}_{1}-\hat{e}_{2})f'( \hat{e}_{1}) \\& \qquad {}-(\hat{\varphi}-\hat{e}_{2})f'(\hat{e}_{2}) +f(\hat{\varphi}) -f( \hat{e}_{1}). \end{aligned}$$

After rearranging, we get Identity (21). □
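As a further sanity check (ours, not part of the source), identity (18) can be verified numerically with the test function \(f(t)=e^{t}\), for which \(f'=f''=f'''=e^{t}\):

```python
# A sanity check (our sketch, not from the source) of identity (18) with the
# test function f(t) = exp(t), for which f' = f'' = f''' = exp.
from math import exp

def G1(phi, t, e1, e2):                               # the Green function (14)
    if t <= phi:
        return 0.5 * (t - e1)**2 + (phi - e1) * (phi - e2)
    return (phi - e1) * (t - e2) + 0.5 * (phi - e1)**2

def trapezoid(g, a, b, n=20000):
    h = (b - a) / n
    return h * (0.5 * g(a) + sum(g(a + k * h) for k in range(1, n)) + 0.5 * g(b))

e1, e2, phi = 0.0, 2.0, 0.7
f = exp
rhs = (f(e1) + (phi - e1) * f(e2)                     # f'(e2)  = exp(e2)
       + (phi - e1) * (phi - e2) * f(e1)              # f''(e1) = exp(e1)
       - 0.5 * (phi - e1)**2 * f(e2)                  # f''(e2) = exp(e2)
       + trapezoid(lambda t: G1(phi, t, e1, e2) * f(t), e1, e2))
print(abs(f(phi) - rhs) < 1e-6)                       # expected: True
```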

Remark 2.1

If we apply integration by parts to the integral part of (10)–(13), choosing \(f''(\vartheta )\) as the first function and \(\widehat{G}_{k}(\hat{\varphi},\vartheta )\) (\(k=1,2,3,4\)) as the second function, we get (18)–(21), respectively.

Next, identities involving the Jensen difference of two distinct data sets are given using the newly defined 3-convex Green functions (14)–(17).

Theorem 2.2

Let \(f: \mathfrak{g}=[\hat{e}_{1}, \hat{e}_{2}] \rightarrow \mathbb{R}\) with \(f\in C^{3}[\hat{e}_{1}, \hat{e}_{2}]\), and let \((q_{1}, \ldots , q_{\varrho}) \in \mathbb{R}^{\varrho}\), \((p_{1}, \ldots , p_{\eta}) \in \mathbb{R}^{\eta}\) be such that \(\sum_{\sigma =1}^{\eta}p_{\sigma}=1\) and \(\sum_{\tau =1}^{\varrho}q_{\tau}=1\). Also, let \(x_{\sigma}\), \(y_{\tau}\), \(\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma}\), \(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} \in \mathfrak{g}\). Then for \(G_{k}\) (\(k=1, 4\))

$$\begin{aligned} \mathfrak{D}\bigl(f(\cdot )\bigr) =&\frac{1}{2} \Biggl[ \sum_{\tau =1}^{\varrho}q_{ \tau}y_{\tau}^{2}- \Biggl(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} \Biggr)^{2}-\sum_{\sigma =1}^{\eta}p_{\sigma} x_{\sigma}^{2}+ \Biggl( \sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma} \Biggr)^{2} \Biggr] \\ &{}\times \bigl(2f^{(2)}(\hat{e}_{1})-f^{(2)}( \hat{e}_{2})\bigr)+ \int _{ \mathfrak{g}}\mathfrak{D}\bigl(G_{k}(., \vartheta ) \bigr)f^{(3)}(\vartheta )\,d\vartheta , \end{aligned}$$
(22)

and for \(G_{k}\) (\(k=2, 3\))

$$\begin{aligned} \mathfrak{D}\bigl(f(\cdot )\bigr) =&\frac{1}{2} \Biggl[ \sum_{\tau =1}^{\varrho}q_{ \tau}y_{\tau}^{2}- \Biggl(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} \Biggr)^{2}-\sum_{\sigma =1}^{\eta}p_{\sigma} x_{\sigma}^{2}+ \Biggl( \sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma} \Biggr)^{2} \Biggr] \\ &{}\times \bigl(2f^{(2)}(\hat{e}_{2})-f^{(2)}( \hat{e}_{1})\bigr)- \int _{ \mathfrak{g}}\mathfrak{D}\bigl(G_{k}(., \vartheta ) \bigr)f^{(3)}(\vartheta )\,d\vartheta , \end{aligned}$$
(23)

where

$$\begin{aligned}& \mathfrak{D}\bigl(f(\cdot )\bigr)=\sum _{\tau =1}^{\varrho}q_{\tau}f(y_{\tau}) -f \Biggl(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} \Biggr) -\sum_{\sigma =1}^{ \eta}p_{\sigma}f(x_{\sigma})+f \Biggl(\sum_{\sigma =1}^{\eta}p_{\sigma}x_{ \sigma} \Biggr), \end{aligned}$$
(24)
$$\begin{aligned}& \mathfrak{D}\bigl(G_{k}(\cdot , \vartheta )\bigr) = \sum_{\tau =1}^{\varrho}q_{ \tau}G_{k}(y_{\tau}, \vartheta )-G_{k} \Biggl(\sum_{\tau =1}^{\varrho}q_{ \tau}y_{\tau}, \vartheta \Biggr)-\sum_{\sigma =1}^{\eta}p_{\sigma} G_{k}(x_{ \sigma}, \vartheta ) \\& \hphantom{\mathfrak{D}\bigl(G_{k}(\cdot , \vartheta )\bigr) =}{} +G_{k} \Biggl(\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma}, \vartheta \Biggr) \end{aligned}$$
(25)

and \(G_{k}\) (\(k=1 ,2,3,4\)) are defined in (14)(17), respectively.

Proof

Let \(k=4\). Using (21) in (24), we have

$$\begin{aligned}& \mathfrak{D}\bigl(f(\cdot )\bigr) \\& \quad = \sum_{\tau =1}^{\varrho}q_{\tau} \biggl[f(\hat{e}_{1})+(\hat{e}_{2} - \hat{e}_{1})f^{(1)}( \hat{e}_{1})-(\hat{e}_{2}-y_{\tau})f^{(1)}( \hat{e}_{2}) \\& \qquad {}+f^{(2)}(\hat{e}_{1}) \biggl(\frac{(\hat{e}_{2}-\hat{e}_{1})^{2}}{2}+(y_{ \tau}- \hat{e}_{1}) (y_{\tau}-\hat{e}_{2}) \biggr) \\& \qquad {}-f^{(2)}(\hat{e}_{2}) \biggl((y_{\tau}- \hat{e}_{2}) (\hat{e}_{2}- \hat{e}_{1}) + \frac{(y_{\tau}-\hat{e}_{2})^{2}}{2} \biggr)+ \int _{ \mathfrak{g}}G_{4}(y_{\tau}, \vartheta )f^{(3)}(\vartheta )\,d\vartheta \biggr] \\& \qquad {}- \Biggl[f(\hat{e}_{1})+(\hat{e}_{2}- \hat{e}_{1})f^{(1)}(\hat{e}_{1}) -\Biggl( \hat{e}_{2}-\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} \Biggr)f^{(1)}(\hat{e}_{2}) \\& \qquad {}+f^{(2)}(\hat{e}_{1}) \Biggl(\frac{(\hat{e}_{2}-\hat{e}_{1})^{2}}{2} +\Biggl( \sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau}- \hat{e}_{1}\Biggr) \Biggl(\sum_{\tau =1}^{ \varrho}q_{\tau}y_{\tau}- \hat{e}_{2}\Biggr) \Biggr) -f^{(2)}(\hat{e}_{2}) \\& \qquad {}\times \Biggl(\Biggl(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau}- \hat{e}_{2}\Biggr) ( \hat{e}_{2}-\hat{e}_{1}) + \frac{(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau}-\hat{e}_{2})^{2}}{2} \Biggr) \\& \qquad {}+ \int _{\mathfrak{g}}G_{4} \Biggl(\sum _{\tau =1}^{\varrho}q_{\tau}y_{ \tau}, \vartheta \Biggr) f^{(3)}(\vartheta )\,d\vartheta \Biggr]-\sum _{ \sigma =1}^{\eta}p_{\sigma} \biggl[f( \hat{e}_{1}) +(\hat{e}_{2}-\hat{e}_{1})f^{(1)}( \hat{e}_{1}) \\& \qquad {}-(\hat{e}_{2}-x_{\sigma})f^{(1)}( \hat{e}_{2}) + f^{(2)}(\hat{e}_{1}) \biggl( \frac{(\hat{e}_{2}-\hat{e}_{1})^{2}}{2}+(x_{\sigma}-\hat{e}_{1}) (x_{\sigma}- \hat{e}_{2}) \biggr) \\& \qquad {}-f^{(2)}(\hat{e}_{2}) \biggl((x_{\sigma}- \hat{e}_{2}) (\hat{e}_{2}- \hat{e}_{1}) + \frac{(x_{\sigma}-\hat{e}_{2})^{2}}{2} \biggr) + \int _{ \mathfrak{g}}G_{4}(x_{\sigma}, \vartheta )f^{(3)}(\vartheta )\,d\vartheta \biggr] \\& \qquad {}+ \Biggl[f(\hat{e}_{1})+(\hat{e}_{2}- \hat{e}_{1})f^{(1)}(\hat{e}_{1})- \Biggl( \hat{e}_{2}-\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma} \Biggr)f^{(1)}( \hat{e}_{2}) \\& \qquad {}+f^{(2)}(\hat{e}_{1}) \Biggl(\frac{(\hat{e}_{2}-\hat{e}_{1})^{2}}{2} +\Biggl( \sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma}- \hat{e}_{1}\Biggr) \Biggl(\sum_{ \sigma =1}^{\eta}p_{\sigma}x_{\sigma}- \hat{e}_{2}\Biggr) \Biggr) \\& \qquad {}-f^{(2)}(\hat{e}_{2}) \Biggl(\Biggl(\sum _{\sigma =1}^{\eta}p_{\sigma}x_{ \sigma}- \hat{e}_{2}\Biggr) (\hat{e}_{2}-\hat{e}_{1}) + \frac{(\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma}-\hat{e}_{2})^{2}}{2} \Biggr) \\& \qquad {}+ \int _{\mathfrak{g}}G_{4} \Biggl(\sum _{\sigma =1}^{\eta}p_{\sigma}x_{ \sigma}, \vartheta \Biggr) f^{(3)}(\vartheta )\,d\vartheta \Biggr] \\& \quad = \Biggl[f(\hat{e}_{1})+(\hat{e}_{2}- \hat{e}_{1})f^{(1)}(\hat{e}_{1}) -\Biggl( \hat{e}_{2}-\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} \Biggr)f^{(1)}(\hat{e}_{2}) \\& \qquad {}+f^{(2)}(\hat{e}_{1}) \Biggl(\frac{(\hat{e}_{2}-\hat{e}_{1})^{2}}{2} + \sum _{\tau =1}^{\varrho}q_{\tau}y_{\tau}^{2} -(\hat{e}_{1}+\hat{e}_{2}) \sum _{\tau =1}^{\varrho}q_{\tau}y_{\tau} + \hat{e}_{1}\hat{e}_{2} \Biggr) \\& \qquad {}- \Biggl(\Biggl(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau}- \hat{e}_{2}\Biggr) ( \hat{e}_{2}-\hat{e}_{1}) + \frac{\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau}^{2}-2\hat{e}_{2}\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} +\hat{e}_{2}^{2}}{2} \Biggr) \\& \qquad {}\times f^{(2)}(\hat{e}_{2})+\sum _{\tau =1}^{\varrho}q_{\tau} \int _{ \mathfrak{g}}G_{4}(y_{\tau}, \vartheta )f^{(3)}(\vartheta )\,d\vartheta -f(\hat{e}_{1})-( \hat{e}_{2}-\hat{e}_{1})f^{(1)}( \hat{e}_{1}) 
\\& \qquad {}+\Biggl(\hat{e}_{2}-\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} \Biggr)f^{(1)}( \hat{e}_{2})-f^{(2)}( \hat{e}_{1}) \Biggl( \frac{(\hat{e}_{2}-\hat{e}_{1})^{2}}{2} +\Biggl(\sum _{\tau =1}^{\varrho}q_{ \tau}y_{\tau} \Biggr)^{2} \\& \qquad {}-(\hat{e}_{1}+\hat{e}_{2}) \sum _{\tau =1}^{\varrho}q_{\tau}y_{\tau}+ \hat{e}_{1}\hat{e}_{2} \Biggr)+f^{(2)}( \hat{e}_{2}) \Biggl(\Biggl(\sum_{\tau =1}^{ \varrho}q_{\tau}y_{\tau}- \hat{e}_{2}\Biggr) (\hat{e}_{2}-\hat{e}_{1}) \\& \qquad {}+ \Biggl\{ \Biggl(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} \Biggr)^{2} -2\hat{e}_{2} \sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau}+ \hat{e}_{2}^{2} \Biggr\} \frac{1}{2} \Biggr)- \int _{\mathfrak{g}}G_{4}\Biggl(\sum _{\tau =1}^{\varrho}q_{ \tau}y_{\tau}, \vartheta \Biggr)f^{(3)}(\vartheta )\,d\vartheta \Biggr] \\& \qquad {}-f(\hat{e}_{1})-(\hat{e}_{2}-\hat{e}_{1})f^{(1)}( \hat{e}_{1}) +\Biggl( \hat{e}_{2}-\sum _{\sigma =1}^{\eta}p_{\sigma}x_{\sigma} \Biggr)f^{(1)}( \hat{e}_{2}) \\& \qquad {}-f^{(2)}(\hat{e}_{1}) \Biggl(\frac{(\hat{e}_{2}-\hat{e}_{1})^{2}}{2}+ \sum _{\sigma =1}^{\eta}p_{\sigma}x_{\sigma}^{2} -(\hat{e}_{1}+ \hat{e}_{2})\sum _{\sigma =1}^{\eta}p_{\sigma}x_{\sigma}+ \hat{e}_{1} \hat{e}_{2} \Biggr) \\& \qquad {}+f^{(2)}(\hat{e}_{2}) \biggl( \frac{\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma}^{2} -2\hat{e}_{2}\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma}+\hat{e}_{2}^{2}}{2} \biggr) \\& \qquad {}-\sum_{\sigma =1}^{\eta}p_{\sigma} \int _{\mathfrak{g}}G_{4}(x_{ \sigma}, \vartheta )f^{(3)}(\vartheta )\,d\vartheta \\& \qquad {}+f(\hat{e}_{1})+(\hat{e}_{2}-\hat{e}_{1})f^{(1)}( \hat{e}_{1}) -\Biggl( \hat{e}_{2}-\sum _{\sigma =1}^{\eta}p_{\sigma}x_{\sigma} \Biggr)f^{(1)}( \hat{e}_{2}) \\& \qquad {}+f^{(2)}(\hat{e}_{1}) \Biggl(\frac{(\hat{e}_{2}-\hat{e}_{1})^{2}}{2}+\Biggl( \sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma} \Biggr)^{2} -(\hat{e}_{1}+ \hat{e}_{2})\sum _{\sigma =1}^{\eta}p_{\sigma}x_{\sigma}+ \hat{e}_{1} \hat{e}_{2} \Biggr) \\& \qquad {}-f^{(2)}(\hat{e}_{2}) \biggl( \frac{(\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma})^{2} -2\hat{e}_{2}\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma}+\hat{e}_{2}^{2}}{2} \biggr) \\& \qquad {}+ \int _{\mathfrak{g}}G_{4}\Biggl(\sum _{\sigma =1}^{\eta}p_{\sigma}x_{ \sigma},\vartheta \Biggr)f^{(3)}(\vartheta )\,d\vartheta \\& \quad = \Biggl[\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau}^{2}- \Biggl(\sum_{ \tau =1}^{\varrho}q_{\tau}y_{\tau} \Biggr)^{2}-\sum_{\sigma =1}^{\eta}p_{ \sigma} x_{\sigma}^{2}+ \Biggl(\sum_{\sigma =1}^{\eta}p_{\sigma}x_{ \sigma} \Biggr)^{2} \Biggr]f^{(2)}(\hat{e}_{1}) \\& \qquad {}-\frac{1}{2} \Biggl[\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau}^{2}- \Biggl(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} \Biggr)^{2}-\sum_{ \sigma =1}^{\eta}p_{\sigma} x_{\sigma}^{2}+ \Biggl(\sum_{\sigma =1}^{ \eta}p_{\sigma}x_{\sigma} \Biggr)^{2} \Biggr]f^{(2)}(\hat{e}_{2}) \\& \qquad {}+\sum_{\tau =1}^{\varrho}q_{\tau} \int _{\mathfrak{g}}G_{4}(y_{\tau}, \vartheta )f^{(3)}(\vartheta )\,d\vartheta - \int _{\mathfrak{g}}G_{4} \Biggl(\sum _{\tau =1}^{\varrho}q_{\tau}y_{\tau},\vartheta \Biggr)f^{(3)}( \vartheta )\,d\vartheta \\& \qquad {}-\sum_{\sigma =1}^{\eta}p_{\sigma} \int _{\mathfrak{g}}G_{4}(x_{ \sigma}, \vartheta )f^{(3)}(\vartheta )\,d\vartheta + \int _{ \mathfrak{g}}G_{4} \Biggl(\sum _{\sigma =1}^{\eta}p_{\sigma}x_{\sigma}, \vartheta \Biggr)f^{(3)}(\vartheta )\,d\vartheta \\& \quad = \frac{1}{2} \Biggl[\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau}^{2}- \Biggl(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} \Biggr)^{2}-\sum_{ \sigma =1}^{\eta}p_{\sigma} x_{\sigma}^{2}+ \Biggl(\sum_{\sigma =1}^{ 
\eta}p_{\sigma}x_{\sigma} \Biggr)^{2} \Biggr] \\& \qquad {}\times \bigl(2f^{(2)}(\hat{e}_{1})-f^{(2)}( \hat{e}_{2})\bigr) + \int _{ \mathfrak{g}}\mathfrak{D}\bigl(G_{k}(.,\vartheta ) \bigr)f^{(3)}(\vartheta )\,d\vartheta . \end{aligned}$$

Similar steps are followed to get (23). □
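The following self-contained sketch (ours, not from the source) verifies identity (22) numerically for \(k=1\) with \(f(t)=e^{t}\) and arbitrarily chosen data; all names are illustrative only.

```python
# A self-contained numerical check (our sketch) of identity (22) for k = 1,
# with f(t) = exp(t) and arbitrarily chosen data.
from math import exp

def G1(phi, t, e1, e2):                               # the Green function (14)
    if t <= phi:
        return 0.5 * (t - e1)**2 + (phi - e1) * (phi - e2)
    return (phi - e1) * (t - e2) + 0.5 * (phi - e1)**2

def trapezoid(g, a, b, n=20000):
    h = (b - a) / n
    return h * (0.5 * g(a) + sum(g(a + k * h) for k in range(1, n)) + 0.5 * g(b))

e1, e2 = 0.0, 2.0
f = exp                                               # f'' = f''' = exp
p, x = [0.3, 0.7], [0.4, 1.1]                         # sum(p) = 1
q, y = [0.5, 0.5], [1.2, 1.8]                         # sum(q) = 1
xbar = sum(pi * xi for pi, xi in zip(p, x))
ybar = sum(qi * yi for qi, yi in zip(q, y))

D_f = (sum(qi * f(yi) for qi, yi in zip(q, y)) - f(ybar)
       - sum(pi * f(xi) for pi, xi in zip(p, x)) + f(xbar))

def D_G1(t):                                          # the Jensen difference (25)
    return (sum(qi * G1(yi, t, e1, e2) for qi, yi in zip(q, y)) - G1(ybar, t, e1, e2)
            - sum(pi * G1(xi, t, e1, e2) for pi, xi in zip(p, x)) + G1(xbar, t, e1, e2))

moments = (sum(qi * yi**2 for qi, yi in zip(q, y)) - ybar**2
           - sum(pi * xi**2 for pi, xi in zip(p, x)) + xbar**2)
rhs = 0.5 * moments * (2 * f(e1) - f(e2)) + trapezoid(lambda t: D_G1(t) * f(t), e1, e2)
print(abs(D_f - rhs) < 1e-6)                          # expected: True
```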

Corollary 2.1

Suppose \(f\in C^{3}[0, 2\hat{\gamma}]\) with \(f: \mathbb{I}_{2}= [0, 2\hat{\gamma}] \rightarrow \mathbb{R}\), \(x_{1}, \ldots , x_{\eta }\in (0, \hat{\gamma})\), and \((p_{1}, \ldots , p_{\eta}) \in \mathbb{R}^{\eta}\) such that \(\sum_{\sigma =1}^{\eta}p_{\sigma}=1\). Let \(x_{\sigma}\), \(\sum_{\sigma =1}^{\eta}p_{\sigma}(2\hat{\gamma}-x_{\sigma})\), and \(\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma} \in \mathfrak{g}\). Then for \(k=1,4\)

$$\begin{aligned} \mathfrak{D}\bigl(f(\cdot )\bigr) =& \int _{\mathfrak{g}}\mathfrak{D} \bigl(G_{k}( \cdot , \vartheta )\bigr)f^{(3)}(\vartheta )\,d\vartheta ,\quad 0\leq \hat{e}_{1}< \hat{e}_{2} \leq 2\hat{\gamma}, \end{aligned}$$
(26)

and \(k=2,3\)

$$\begin{aligned} \mathfrak{D}\bigl(f(\cdot )\bigr) =&- \int _{\mathfrak{g}}\mathfrak{D} \bigl(G_{k}( \cdot , \vartheta )\bigr)f^{(3)}(\vartheta )\,d\vartheta , \quad 0\leq \hat{e}_{1}< \hat{e}_{2} \leq 2\hat{\gamma}, \end{aligned}$$
(27)

where \(\mathfrak{D}(f(\cdot ))\) and \(\mathfrak{D}(G_{k}(\cdot , \vartheta ))\) are given in (24) and (25), respectively.

Proof

Taking \(\mathbb{I}_{2}=[0, 2\hat{\gamma}]\), \(y_{\sigma}=2\hat{\gamma}-x_{\sigma}\), \(x_{1}, \ldots , x_{\eta }\in (0, \hat{\gamma})\), \(q_{\sigma}=p_{\sigma}\), and \(\varrho =\eta \) in Theorem 2.2, after simplification we obtain (26) and (27). □

To avoid repetition of assumptions, we introduce the following set of hypotheses:

ℜ: Let \(f: \mathfrak{g}= [\hat{e}_{1},\hat{e}_{2}] \rightarrow \mathbb{R}\) be a 3-convex function. Assume \((p_{1},\ldots , p_{\eta}) \in \mathbb{R}^{\eta}\) and \((q_{1}, \ldots , q_{\varrho})\in \mathbb{R}^{\varrho}\) to be such that \(\sum_{\sigma =1}^{\eta}p_{\sigma}=1\), \(\sum_{\tau =1}^{\varrho}q_{\tau}=1\), and \(x_{\sigma}\), \(y_{\tau}\), \(\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma}\), \(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} \in \mathfrak{g}\).

Theorem 2.3

Assume ℜ. If

$$\begin{aligned}& \frac{1}{2} \Biggl[\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau}^{2}- \Biggl(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} \Biggr)^{2}-\sum_{ \sigma =1}^{\eta}p_{\sigma} x_{\sigma}^{2}+ \Biggl(\sum_{\sigma =1}^{ \eta}p_{\sigma}x_{\sigma} \Biggr)^{2} \Biggr] \\& \quad {}\times \bigl(2f^{(2)}(\hat{e}_{1})-f^{(2)}( \hat{e}_{2})\bigr) \geq 0 \end{aligned}$$
(28)

and

$$\begin{aligned}& \frac{1}{2} \Biggl[\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau}^{2}- \Biggl(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} \Biggr)^{2}-\sum_{ \sigma =1}^{\eta}p_{\sigma} x_{\sigma}^{2}+ \Biggl(\sum_{\sigma =1}^{ \eta}p_{\sigma}x_{\sigma} \Biggr)^{2} \Biggr] \\& \quad {}\times \bigl(2f^{(2)}(\hat{e}_{2})-f^{(2)}( \hat{e}_{1})\bigr)\geq 0, \end{aligned}$$
(29)

then the following statements are equivalent:

For \(f \in C^{3}[\hat{e}_{1}, \hat{e}_{2}]\)

$$ \sum_{\sigma =1}^{\eta}p_{\sigma}f(x_{\sigma})-f \Biggl(\sum_{\sigma =1}^{ \eta}p_{\sigma}x_{\sigma} \Biggr)\leq \sum_{\tau =1}^{\varrho}q_{\tau}f(y_{ \tau})- f \Biggl(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} \Biggr). $$
(30)

For each \(\vartheta \in \mathfrak{g}\)

$$\begin{aligned} \sum_{\sigma =1}^{\eta}p_{\sigma}G_{k}(x_{\sigma}, \vartheta ) -G_{k} \Biggl(\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma}, \vartheta \Biggr) \leq \sum_{\tau =1}^{\varrho}q_{\tau}G_{k}(y_{\tau}, \vartheta )- G_{k} \Biggl(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau}, \vartheta \Biggr), \end{aligned}$$
(31)

where \(G_{k}(\cdot , \vartheta )\) are given by (14)(17) for \(k=1,\dots ,4\), respectively.

Proof

(30) ⇒ (31): Suppose (30) holds. Since, for each \(\vartheta \in \mathfrak{g}\), the function \(G_{k}(\cdot , \vartheta )\) is continuous and 3-convex, (30) applies to this function as well, which is precisely (31).

(31) ⇒ (30): Let f be a 3-convex function with \(f \in C^{3}[\hat{e}_{1}, \hat{e}_{2}]\), so that \(f'''\) exists. If (31) holds, then f can be written in the form (18), and after simple calculations we have

$$\begin{aligned}& \sum_{\tau =1}^{\varrho}q_{\tau}f(y_{\tau})-f \Biggl(\sum_{\tau =1}^{ \varrho}q_{\tau}y_{\tau} \Biggr)- \sum_{\sigma =1}^{\eta}p_{\sigma}f(x_{ \sigma})+ f \Biggl(\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma} \Biggr) \\& \quad = \frac{1}{2} \Biggl[\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau}^{2}- \Biggl(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} \Biggr)^{2}-\sum_{ \sigma =1}^{\eta} p_{\sigma}x_{\sigma}^{2}+ \Biggl(\sum _{\sigma =1}^{ \eta}p_{\sigma}x_{\sigma} \Biggr)^{2} \Biggr] \\& \qquad {} \times \bigl(2f^{(2)}(\hat{e}_{1})-f^{(2)}( \hat{e}_{2})\bigr)+ \int _{ \mathfrak{g}} \Biggl(\sum_{\tau =1}^{\varrho}q_{\tau}G_{k}(y_{\tau}, \vartheta )-G_{k} \Biggl(\sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau}, \vartheta \Biggr) \\& \qquad {} -\sum_{\sigma =1}^{\eta}p_{\sigma}G_{k}(x_{\sigma}, \vartheta )+G_{k} \Biggl( \sum_{\sigma =1}^{\eta} p_{\sigma}x_{\sigma}, \vartheta \Biggr) \Biggr)f^{(3)}( \vartheta )\,d\vartheta . \end{aligned}$$

Since f is 3-convex, \(f^{(3)}(\vartheta ) \geq 0\) for all \(\vartheta \in \mathfrak{g}\). Hence, if (31) holds for each \(\vartheta \in \mathfrak{g}\), then (30) is valid for every 3-convex function \(f \in C^{3}[\hat{e}_{1}, \hat{e}_{2}]\) defined on \(\mathfrak{g}\). □

Remark 2.2

If the expression

$$ \sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau}^{2}- \Biggl(\sum_{\tau =1}^{ \varrho}q_{\tau}y_{\tau} \Biggr)^{2} -\sum_{\sigma =1}^{\eta}p_{\sigma}x_{ \sigma}^{2}+ \Biggl(\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma} \Biggr)^{2} $$

and any of the factors \((2f^{(2)}(\hat{e}_{1})-f^{(2)}(\hat{e}_{2}))\), \((2f^{(2)}(\hat{e}_{2})-f^{(2)}(\hat{e}_{1}))\) in (28) and (29), respectively, have opposite signs, then inequalities (30) and (31) are reversed.

The next results are related to a generalized Bullen-type inequality (for real weights) presented in [17] (see also [21]).

Theorem 2.4

Assumewith

$$ \max \{x_{1}, \ldots , x_{\eta}\} \leq \min \{y_{1}, \ldots , y_{ \varrho}\} $$
(32)

and

$$ \sum_{\sigma =1}^{\eta}p_{\sigma} \Biggl(x_{\sigma}-\sum_{\sigma =1}^{ \eta}p_{\sigma}x_{\sigma} \Biggr)^{2}=\sum_{\tau =1}^{\varrho}q_{ \tau} \Biggl(y_{\tau}- \sum_{\tau =1}^{\varrho}q_{\tau}y_{\tau} \Biggr)^{2}. $$
(33)

If (28) and (29) hold, then (30) and (31) are equivalent.

Proof

By taking \(x_{\sigma}\) and \(y_{\tau}\) such that (32) and (33) hold in Theorem 2.3, we get the desired result. □

Remark 2.3

If \(x_{\sigma}\), \(y_{\tau}\) satisfy (32) and (33) and \(p_{\sigma}=q_{\tau}\) are positive, then inequality (30) reduces to Bullen inequality [21, p. 32, Theorem 2] for \(\varrho =\eta \).

Theorem 2.5

Assume ℜ. Also, assume \(x_{1},\ldots ,x_{\eta}\) and \(y_{1},\ldots ,y_{\varrho}\) to be such that \(x_{\sigma}+y_{\sigma}=2\breve{c}\) (\(\sigma =1,\ldots ,\eta \)), \(x_{\sigma}+x_{\eta -\sigma +1} \leq 2\breve{c}\), and \(\frac{p_{\sigma}x_{\sigma}+p_{\eta -\sigma +1}x_{\eta -\sigma +1}}{p_{\sigma}+p_{\eta -\sigma +1}} \leq \breve{c}\). If (28) and (29) hold, then (30) and (31) are equivalent.

Proof

Applying Theorem 2.3, with given conditions of the statement, we get the desired result. □

Remark 2.4

If we put \(\varrho =\eta \), take \(p_{\sigma}=q_{\sigma}\) positive, \(x_{\sigma}+y_{\sigma}=2\breve{c}\), \(x_{\sigma}+x_{\eta -\sigma +1} \leq 2\breve{c}\), and \(\frac{p_{\sigma}x_{\sigma}+p_{\eta -\sigma +1}x_{\eta -\sigma +1}}{p_{\sigma}+p_{\eta -\sigma +1}} \leq \breve{c}\) in Theorem 2.3, then (30) becomes the extended form of the Bullen inequality presented in [21, p. 32, Theorem 4].

Next, we consider the Mercer condition (5) in the case \(\sigma =\tau \) and \(\varrho =\eta \).

Theorem 2.6

Suppose \(f: \mathfrak{g}= [\hat{e}_{1}, \hat{e}_{2}] \rightarrow \mathbb{R}\) with \(f \in C^{3}[\hat{e}_{1}, \hat{e}_{2}]\), and let \(p_{\sigma}\), \(q_{\sigma}\) be positive weights such that \(\sum_{\sigma =1}^{\eta}p_{\sigma}=1\) and \(\sum_{\sigma =1}^{\eta}q_{\sigma}=1\). Let \(x_{\sigma}\), \(y_{\sigma}\) satisfy (32) with \(\eta =\varrho \) and

$$ \sum_{\sigma =1}^{\eta}p_{\sigma} \Biggl(x_{\sigma}-\sum_{\sigma =1}^{ \eta}p_{\sigma}x_{\sigma} \Biggr)^{2}=\sum_{\sigma =1}^{\eta}q_{ \sigma} \Biggl(y_{\sigma}-\sum_{\sigma =1} ^{\eta}q_{\sigma}y_{\sigma} \Biggr)^{2}. $$
(34)

If (28) and (29) hold, then (30) and (31) are equivalent.

Proof

For positive weights, using (32) and (34) in Theorem 2.3, statements (30) and (31) are equivalent. □

The following result depends on the generalized form of the Levinson-type inequality given in [15] (see also [21]).

Theorem 2.7

Let \(f: \mathbb{I}_{2}= [0, 2\hat{\gamma}] \rightarrow \mathbb{R}\) be a 3-convex function with \(f \in C^{3}[0, 2\hat{\gamma}]\), let \(x_{1}, \ldots , x_{\eta}\in (0, \hat{\gamma})\), \((p_{1}, \ldots , p_{\eta}) \in \mathbb{R}^{\eta}\), and \(\sum_{\sigma =1}^{\eta}p_{\sigma}=1\). Also, assume \(x_{\sigma}\), \(\sum_{\sigma =1}^{\eta}p_{\sigma}(2\hat{\gamma}-x_{\sigma})\), \(\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma} \in \mathbb{I}_{2}\). Then the following are equivalent:

$$ \sum_{\sigma =1}^{\eta}p_{\sigma}f(x_{\sigma})-f \Biggl(\sum_{\sigma =1}^{ \eta}p_{\sigma}x_{\sigma} \Biggr)\leq \sum_{\sigma =1}^{\eta}p_{\sigma}f(2 \hat{\gamma}-x_{\sigma})- f \Biggl(\sum_{\sigma =1}^{\eta}p_{\sigma}(2 \hat{\gamma}-x_{\sigma}) \Biggr) $$
(35)

and

$$\begin{aligned}& \sum_{\sigma =1}^{\eta}p_{\sigma}G_{k}(x_{\sigma}, \vartheta ) - G_{k} \Biggl(\sum_{\sigma =1}^{\eta}p_{\sigma}x_{\sigma}, \vartheta \Biggr) \\& \quad \leq \sum_{\sigma =1}^{\eta} p_{\sigma}G_{k}(2\hat{\gamma}-x_{\sigma}, \vartheta ) \\& \qquad {} -G_{k} \Biggl(\sum_{\sigma =1}^{\eta}p_{\sigma}(2 \hat{\gamma}-x_{ \sigma}), \vartheta \Biggr), \quad \forall \vartheta \in \mathbb{I}_{2}, \end{aligned}$$
(36)

where \(G_{k}(\cdot , \vartheta )\) (\(k=1,\ldots ,4\)) are given in (14)(17).

Proof

Setting \(\mathbb{I}_{2}=[0, 2\hat{\gamma}]\), \(x_{1}, \ldots , x_{\eta} \in (0, \hat{\gamma})\), \(p_{\sigma}=q_{\tau}\), \(\varrho =\eta \), and \(y_{\tau}=2\hat{\gamma}-x_{\sigma}\) in Theorem 2.3 with \(0 \leq \hat{e}_{1} < \hat{e}_{2} \leq 2\hat{\gamma}\), we obtain the required result. □

Remark 2.5

If \(p_{\sigma}\) are positive in Theorem 2.7, then inequality (35) becomes the Levinson inequality given in [21, p. 32, Theorem 1].
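To illustrate (36) numerically (our sketch, with arbitrarily chosen positive weights and points, and \(\mathbb{I}_{2}=[\hat{e}_{1},\hat{e}_{2}]\)), one may sweep \(\vartheta \) over \(\mathbb{I}_{2}\) and compare both sides for \(G_{1}\):

```python
# A numerical illustration (our sketch, with arbitrarily chosen positive weights
# and points) of inequality (36) for k = 1, sweeping theta over I_2 = [0, 2*gamma_hat].
def G1(phi, t, e1, e2):                               # the Green function (14)
    if t <= phi:
        return 0.5 * (t - e1)**2 + (phi - e1) * (phi - e2)
    return (phi - e1) * (t - e2) + 0.5 * (phi - e1)**2

e1, e2, gamma_hat = 0.0, 2.0, 1.0                     # here [e1, e2] = I_2
p = [0.2, 0.3, 0.5]                                   # positive weights, sum = 1
x = [0.2, 0.55, 0.9]                                  # points in (0, gamma_hat)
xbar = sum(pi * xi for pi, xi in zip(p, x))
ybar = 2 * gamma_hat - xbar                           # = sum p_sigma (2*gamma_hat - x_sigma)

ok = True
for j in range(201):
    theta = e1 + (e2 - e1) * j / 200
    lhs = sum(pi * G1(xi, theta, e1, e2) for pi, xi in zip(p, x)) - G1(xbar, theta, e1, e2)
    rhs = (sum(pi * G1(2 * gamma_hat - xi, theta, e1, e2) for pi, xi in zip(p, x))
           - G1(ybar, theta, e1, e2))
    ok = ok and lhs <= rhs + 1e-12                    # small floating-point tolerance
print(ok)                                             # expected: True
```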

3 Applications to information theory

Levinson-type inequalities play an essential role in generalizing inequalities for divergences between probability distributions. In [6, 11, 13], Adeel et al. applied their findings to information theory by using two 3-convex Green functions. In this section, the key findings of Sect. 2 are linked to information theory via the f-divergence, the Rényi entropy and divergence, the Shannon entropy, and the Zipf–Mandelbrot law, using the newly defined 3-convex Green functions (14)–(17).

3.1 Csiszár divergence

Csiszár [22, 23] presented the following definition.

Definition 3.1

Let \(f: \mathbb{R}_{+}\to \mathbb{R}_{+}\) be a convex function, and choose \(\tilde{\mathbf{v}}, \tilde{\mathbf{l}} \in \mathbb{R}_{+}^{\eta}\) such that \(\sum_{\sigma =1}^{\eta}v_{\sigma}=1\) and \(\sum_{\sigma =1}^{\eta}l_{\sigma}=1\). Then the Csiszár f-divergence is defined as follows:

$$\begin{aligned} \mathbb{I}_{f}(\tilde{\mathbf{v}}, \tilde{\mathbf{l}}) := \sum_{ \sigma =1}^{\eta}l_{\sigma}f \biggl( \frac{v_{\sigma}}{l_{\sigma}} \biggr). \end{aligned}$$
(37)
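As an illustration (our sketch, not from the source), with \(f(t)=t\log t\) the Csiszár f-divergence (37) reduces to the Kullback–Leibler divergence:

```python
# An illustration (our sketch, not from the source): with f(t) = t*log(t) the
# Csiszar f-divergence (37) becomes the Kullback-Leibler divergence D(v || l).
from math import log

def csiszar_divergence(f, v, l):
    """I_f(v, l) = sum_sigma l_sigma * f(v_sigma / l_sigma)."""
    return sum(ls * f(vs / ls) for vs, ls in zip(v, l))

f_kl = lambda t: t * log(t)
v = [0.2, 0.5, 0.3]          # positive probability distributions (each sums to 1)
l = [0.3, 0.4, 0.3]
print(csiszar_divergence(f_kl, v, l))
```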

In [24], Horváth et al. gave a generalization of (37) as follows:

Definition 3.2

Let \(f: \mathbb{I} \rightarrow \mathbb{R}\), where \(\mathbb{I} \subset \mathbb{R}\). Choose \(\tilde{\mathbf{v}}=(v_{1}, \ldots , v_{\eta})\in \mathbb{R}^{\eta}\) and \(\tilde{\mathbf{l}}=(l_{1}, \ldots , l_{\eta})\in (0, \infty )^{\eta}\) such that

$$\begin{aligned} \frac{v_{\sigma}}{l_{\sigma}} \in \mathbb{I}, \quad \sigma = 1, \ldots , \eta . \end{aligned}$$

Then

$$\begin{aligned} \hat{\mathbb{I}}_{f}(\tilde{\mathbf{v}}, \tilde{\mathbf{l}}) : = \sum_{\sigma =1}^{\eta} l_{\sigma}f \biggl( \frac{v_{\sigma}}{l_{\sigma}} \biggr). \end{aligned}$$
(38)

Throughout the paper, we assume that:

\(\mathbb{E}\): Let \(\tilde{\mathbf{v}}= (v_{1}, \ldots , v_{\eta} )\) and \(\tilde{\mathbf{l}}= (l_{1}, \ldots , l_{\eta} )\) be in \((0, \infty )^{\eta}\), and let \(\tilde{\mathbf{s}}= (s_{1}, \ldots , s_{\varrho} )\) and \(\tilde{\mathbf{u}}= (u_{1}, \ldots , u_{\varrho} )\) be in \((0, \infty )^{\varrho}\).

Moreover, define

$$\begin{aligned} \mathbb{H}(s,u,v,l) :=&\frac{1}{\sum_{\tau =1}^{\varrho}u_{\tau}}\sum_{ \tau =1}^{\varrho} \frac{(s_{\tau})^{2}}{u_{\tau}}- \Biggl(\sum_{\tau =1}^{ \varrho} \frac{s_{\tau}}{\sum_{\tau =1}^{\varrho}u_{\tau}} \Biggr)^{2} \\ &{}-\frac{1}{\sum_{\sigma =1}^{\eta}l_{\sigma}}\sum_{\sigma =1}^{\eta} \frac{(v_{\sigma})^{2}}{l_{\sigma}} + \Biggl(\sum_{\sigma =1}^{\eta} \frac{v_{\sigma}}{\sum_{\sigma =1}^{\eta}l_{\sigma}} \Biggr)^{2}. \end{aligned}$$

Theorem 3.1

Let the hypothesis \(\mathbb{E}\) hold and

$$\begin{aligned} \frac{v_{\sigma}}{l_{\sigma}} \in \mathbb{I},\quad \sigma = 1, \ldots , \eta , \end{aligned}$$

and

$$\begin{aligned} \frac{s_{\tau}}{u_{\tau}} \in \mathbb{I}, \quad \tau = 1, \ldots , \varrho . \end{aligned}$$

If

$$\begin{aligned} \mathbb{H}(s,u,v,l) \bigl(2f^{(2)}(\hat{e}_{1})-f^{(2)}( \hat{e}_{2})\bigr) \geq 0, \end{aligned}$$
(39)

and

$$\begin{aligned} \mathbb{H}(s,u,v,l) \bigl(2f^{(2)}(\hat{e}_{2})-f^{(2)}( \hat{e}_{1})\bigr) \geq 0, \end{aligned}$$
(40)

then the following are equivalent.

(i) For each 3-convex and continuous function \(f: \mathbb{I} \rightarrow \mathbb{R}\),

$$ \mathfrak{D}_{\hat{f}}(v,s,l,u)\geq 0. $$
(41)

(ii)

$$ \mathfrak{D}_{G_{k}}(v,s,l,u) \geq 0, $$
(42)

where \(\mathfrak{D}_{G_{k}}(v,s,l,u)\) is defined analogously to (43) below, with f replaced by \(G_{k}(\cdot , \vartheta )\), \(\vartheta \in \mathfrak{g}\), and

$$\begin{aligned} \mathfrak{D}_{\hat{f}}(v,s,l,u) =& \frac{1}{\sum_{\tau =1}^{\varrho}u_{\tau}}\hat{ \mathbb{I}}_{f}( \tilde{\mathbf{s}}, \tilde{\mathbf{u}})- f \Biggl(\sum _{\tau =1}^{ \varrho}\frac{s_{\tau}}{\sum_{\tau =1}^{\varrho}u_{\tau}} \Biggr) \\ &{}-\frac{1}{\sum_{\sigma =1}^{\eta}l_{\sigma}}\hat{\mathbb{I}}_{f}( \tilde{\mathbf{v}}, \tilde{\mathbf{l}})+f \Biggl(\sum_{\sigma =1}^{ \eta} \frac{v_{\sigma}}{\sum_{\sigma =1}^{\eta}l_{\sigma}} \Biggr). \end{aligned}$$
(43)

Proof

Using \(p_{\sigma} = \frac{l_{\sigma}}{\sum_{\sigma =1}^{\eta}l_{\sigma}}\), \(x_{\sigma} = \frac{v_{\sigma}}{l_{\sigma}}\), \(q_{\tau} = \frac{u_{\tau}}{\sum_{\tau =1}^{\varrho}u_{\tau}}\) and \(y_{\tau} = \frac{s_{\tau}}{u_{\tau}}\) in Theorem 2.3, we get the required results. □

Remark 3.1

(i) In Remark 2.1, if we put \(\hat{e}_{2}=\hat{e}_{1}\) and take the constant of integration equal to zero in the first part of the piecewise function \(G_{1}\), then the results of Theorem 3.1 coincide with [6, p. 12, Theorem 6].

(ii) Similarly, in Remark 2.1, if we take the constant of integration equal to zero in the second part of the piecewise function \(G_{2}\) and also replace \(\hat{e}_{2}\) with \(\hat{e}_{1}\) and φ̂ with ϑ, then the results of Theorem 3.1 coincide with [6, p. 12, Theorem 6].

3.2 Shannon entropy

Definition 3.3

(See [25]) For a positive probability distribution \(\tilde{\mathbf{l}}=(l_{1}, \ldots , l_{\eta})\), the Shannon entropy is given by

$$\begin{aligned} \mathbb{S} : = - \sum_{\sigma =1}^{\eta}l_{\sigma} \log (l_{ \sigma}). \end{aligned}$$
(44)
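A direct computation of (44) (our sketch; the natural logarithm is an arbitrary choice of base):

```python
# Shannon entropy (44) of a positive probability distribution (our sketch);
# the natural logarithm is used here, the base being a free choice.
from math import log

def shannon_entropy(l):
    return -sum(ls * log(ls) for ls in l)

print(shannon_entropy([0.5, 0.25, 0.25]))   # ~1.0397 nats
```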

Corollary 3.1

Let the hypothesis \(\mathbb{E}\) hold. If

$$\begin{aligned} \mathbb{H}(s,u,v,l)\geq 0, \end{aligned}$$
(45)

and

$$ \mathfrak{D}_{G_{k}}(v, s, l, u)\leq 0, $$
(46)

then

$$ \mathfrak{D}_{S}(v, s, l, u) \leq 0, $$
(47)

where

$$\begin{aligned}& \mathfrak{D}_{S}(v, s, l, u) \\& \quad = \frac{1}{\sum_{\tau =1}^{\varrho}u_{\tau}} \Biggl[ \tilde{\mathbb{S}}+\sum _{\tau =1}^{\varrho} s_{\tau}\log (u_{ \tau}) \Biggr] + \Biggl[\sum_{\tau =1}^{\varrho} \frac{s_{\tau}}{\sum_{\tau =1}^{\varrho} u_{\tau}}\log \Biggl(\sum_{\tau =1}^{\varrho} \frac{s_{\tau}}{\sum_{\tau =1}^{\varrho}u_{\tau}} \Biggr) \Biggr] \\& \qquad {} - \frac{1}{\sum_{\sigma =1}^{\eta}l_{\sigma}} \Biggl[ \mathbb{S}- \sum _{\sigma =1}^{\eta}v_{\sigma}\log (l_{ \sigma}) \Biggr] - \Biggl[\sum_{\sigma =1}^{\eta} \frac{v_{\sigma}}{\sum_{\sigma =1}^{\eta} l_{\sigma}}\log \Biggl(\sum_{\sigma =1}^{\eta} \frac{v_{\sigma}}{\sum_{\sigma =1}^{\eta}l_{\sigma}} \Biggr) \Biggr] \end{aligned}$$
(48)

and \(\mathbb{S}\) is defined in (44) and

$$ \mathbb{\tilde{S}} : = - \sum_{\tau =1}^{\varrho}s_{\tau} \log (s_{\tau}). $$

If log has base less than 1, then (47) and (46) are reversed.

Proof

If log has base greater than 1, then the function \(x \mapsto -x\log (x)\) is 3-convex. Hence, substituting \(f(x):= -x\log (x)\) in (39) and (41), we obtain the required results by Remark 2.2. □

Remark 3.2

(i) In Remark 2.1, if we put \(\hat{e}_{2}=\hat{e}_{1}\) and take the constant of integration equal to zero in the first part of the piecewise function \(G_{1}\), then the results of Corollary 3.1 coincide with [6, p. 13, Corollary 6].

(ii) If we take the constant of integration equal to zero in the second part of the piecewise function \(G_{2}\) and also replace \(\hat{e}_{2}\) with \(\hat{e}_{1}\) and φ̂ with ϑ in Remark 2.1, then the results of Corollary 3.1 coincide with [6, p. 13, Corollary 6].

3.3 Rényi divergence and entropy

In [26], Rényi divergence and entropy are defined as:

Definition 3.4

Suppose \(\tilde{\mathbf{v}}, \tilde{\mathbf{q}} \in \mathbb{R}_{+}^{\eta}\) are such that \(\sum_{\sigma =1}^{\eta}v_{\sigma}=1\) and \(\sum_{\sigma =1}^{\eta}q_{\sigma}=1\); also let \(\Delta \geq 0\), \(\Delta \neq 1\).

The Rényi divergence of order Δ is

$$\begin{aligned} \mathcal{D}_{\Delta}(\tilde{\mathbf{v}}, \tilde{\mathbf{q}}) : = \frac{1}{\Delta - 1} \log \Biggl(\sum _{\sigma =1}^{\eta}q_{ \sigma} \biggl( \frac{v_{\sigma}}{q_{\sigma}} \biggr)^{\Delta} \Biggr) \end{aligned}$$
(49)

and the Rényi entropy of order Δ is given by

$$\begin{aligned} \mathcal{K}_{\Delta}(\tilde{\mathbf{v}}): = \frac{1}{1 - \Delta} \log \Biggl( \sum_{\sigma =1}^{\eta} v_{\sigma}^{ \Delta} \Biggr). \end{aligned}$$
(50)

For non-negative probability distributions, these definitions are also valid.
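The following sketch (ours, not from the source) computes (49) and (50) for a sample distribution and checks the relation \(\mathcal{K}_{\Delta}(\tilde{\mathbf{v}})=\log (\eta )-\mathcal{D}_{\Delta}(\tilde{\mathbf{v}}, \frac{1}{\eta})\) used later in (58); the natural logarithm is assumed.

```python
# Renyi divergence (49) and entropy (50) of order Delta (our sketch, with the
# natural logarithm); the last line checks K_Delta(v) = log(n) - D_Delta(v, uniform).
from math import log

def renyi_divergence(v, q, delta):
    return log(sum(qs * (vs / qs)**delta for vs, qs in zip(v, q))) / (delta - 1)

def renyi_entropy(v, delta):
    return log(sum(vs**delta for vs in v)) / (1 - delta)

v = [0.2, 0.5, 0.3]
uniform = [1 / 3] * 3
delta = 2.0
print(renyi_divergence(v, uniform, delta))
print(abs(renyi_entropy(v, delta) - (log(3) - renyi_divergence(v, uniform, delta))) < 1e-12)
```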

Theorem 3.2

Let the hypothesis \(\mathbb{E}\) hold and

$$\sum_{1}^{\eta}v_{\sigma}=1,\qquad \sum_{1}^{\eta}l_{\sigma}=1,\qquad \sum_{1}^{\varrho}s_{\tau}=1\quad \textit{and} \quad \sum_{1}^{\varrho}u_{\tau}=1.$$

If either \(1 < \Delta \) and the base of log is greater than 1, or \(\Delta \in [0, 1)\) and the base of log is less than 1, and if

$$\begin{aligned}& \sum_{\tau =1}^{\varrho} \frac{(u_{\tau})^{2}}{s_{\tau}} \biggl( \frac{s_{\tau}}{u_{\tau}} \biggr)^{2\Delta}- \Biggl(\sum _{\tau =1}^{ \varrho}u_{\tau} \biggl( \frac{s_{\tau}}{u_{\tau}} \biggr) ^{\Delta} \Biggr)^{2} -\sum _{\sigma =1}^{\eta}\frac{(l_{\sigma})^{2}}{v_{\sigma}} \biggl( \frac{v_{\sigma}}{l_{\sigma}} \biggr)^{2\Delta} \\& \quad {} + \Biggl(\sum_{\sigma =1}^{\eta}l_{\sigma} \biggl( \frac{v_{\sigma}}{l_{\sigma}} \biggr)^{\Delta} \Biggr)^{2}\geq 0, \end{aligned}$$
(51)

and

$$\begin{aligned}& \sum_{\sigma =1}^{\eta}v_{\sigma}G_{k} \biggl( \biggl( \frac{v_{\sigma}}{l_{\sigma}} \biggr)^{\Delta -1},\vartheta \biggr) -G_{k} \Biggl(\sum_{\sigma =1}^{\eta}v_{\sigma} \biggl( \frac{v_{\sigma}}{l_{\sigma}} \biggr)^{\Delta -1}, \vartheta \Biggr) \\& \quad \geq \sum_{\tau =1}^{\varrho}s_{\tau}G_{k} \biggl( \biggl( \frac{s_{\tau}}{u_{\tau}} \biggr)^{\Delta -1}, \vartheta \biggr) -G_{k} \Biggl(\sum_{\tau =1}^{\varrho}s_{\tau} \biggl( \frac{s_{\tau}}{u_{\tau}} \biggr)^{\Delta -1}, \vartheta \Biggr), \end{aligned}$$
(52)

then

$$\begin{aligned} \sum_{\sigma =1}^{\eta}v_{\sigma } \log \biggl( \frac{v_{\sigma}}{l_{\sigma}} \biggr)-\mathcal{D}_{\Delta}( \tilde{\mathbf{v}}, \tilde{\mathbf{l}}) \geq \sum_{\tau =1}^{\varrho}s_{ \tau } \log \biggl( \frac{s_{\tau}}{u_{\tau}} \biggr) - \mathcal{D}_{\Delta}( \tilde{\mathbf{s}}, \tilde{\mathbf{u}}). \end{aligned}$$
(53)

If either the base of log is less than 1 and \(1 < \Delta \), or \(\Delta \in [0, 1)\) and log has base greater than 1, then (52) and (53) are reversed.

Proof

We give the proof for \(\Delta \in [0, 1)\) and the base of log greater than 1; the other cases can be proved in a similar manner.

Take \(\mathbb{I} = (0, \infty )\) and \(f(x)=\log (x)\); then \(f^{(3)}(x)>0\), so f is 3-convex. Thus, putting \(f(x)=\log (x)\) and making the substitutions

$$ p_{\sigma }: = v_{\sigma},\qquad x_{\sigma }: = \biggl( \frac{v_{\sigma}}{l_{\sigma}} \biggr)^{\Delta - 1},\quad \sigma = 1, \ldots , \eta , $$

and

$$ q_{\tau }: = s_{\tau}, \qquad y_{\tau }: = \biggl( \frac{s_{\tau}}{u_{\tau}} \biggr)^{\Delta - 1},\quad \tau = 1, \ldots , \varrho , $$

in the reverse of (30) (by Remark 2.2), we obtain

$$\begin{aligned}& (\Delta -1)\sum_{\sigma =1}^{\eta}v_{\sigma } \log \biggl( \frac{v_{\sigma}}{l_{\sigma}} \biggr)-\log \Biggl(\sum _{\sigma =1}^{ \eta}l_{\sigma}\biggl(\frac{v_{\sigma}}{l_{\sigma}} \biggr)^{\Delta} \Biggr) \\& \quad \geq (\Delta -1)\sum_{\tau =1}^{\varrho}s_{\tau } \log \biggl( \frac{s_{\tau}}{u_{\tau}} \biggr) -\log \Biggl(\sum _{\tau =1}^{ \varrho}u_{\tau}\biggl(\frac{s_{\tau}}{u_{\tau}} \biggr)^{\Delta} \Biggr). \end{aligned}$$
(54)

Dividing (54) by \((\Delta -1)\) and using

$$\begin{aligned}& \mathcal{D}_{\Delta}(\tilde{\mathbf{v}}, \tilde{\mathbf{l}})= \frac{1}{\Delta -1}\log \Biggl(\sum_{\sigma =1}^{\eta}l_{\sigma } \biggl( \frac{v_{\sigma}}{l_{\sigma}}\biggr)^{\Delta} \Biggr),\\& \mathcal{D}_{\Delta}(\tilde{\mathbf{s}}, \tilde{\mathbf{u}})= \frac{1}{\Delta -1}\log \Biggl(\sum_{\tau =1}^{\varrho} u_{\tau}\biggl( \frac{s_{\tau}}{u_{\tau}}\biggr)^{\Delta} \Biggr) \end{aligned}$$

we obtain (53). □

Remark 3.3

Under the conditions of Remark 3.1(i) and (ii), inequality (53) coincides with the result given in [6, p. 14, inequality (48)].

Corollary 3.2

Let the hypothesis \(\mathbb{E}\) hold with \(\sum_{1}^{\eta}v_{\sigma}=1\) and \(\sum_{1}^{\varrho}s_{\tau}=1\). Also, let

$$\begin{aligned}& \sum_{\tau =1}^{\varrho} \frac{1}{\varrho ^{2}s_{\tau}} ( \varrho s_{\tau} )^{2\Delta}- \Biggl(\sum _{\tau =1}^{\varrho} \frac{1}{\varrho} (\varrho s_{\tau} )^{\Delta} \Biggr)^{2} - \sum_{\sigma =1}^{\eta}\frac{1}{\eta ^{2}v_{\sigma}} (\eta v_{ \sigma} )^{2\Delta}+ \Biggl(\sum_{\sigma =1}^{\eta} \frac{1}{\eta} (\eta v_{\sigma} )^{\Delta} \Biggr)^{2} \geq 0 \end{aligned}$$
(55)

and

$$\begin{aligned}& \sum_{\sigma =1}^{\eta}v_{\sigma}G_{k} \bigl((\eta v_{\sigma})^{ \Delta -1}, \vartheta \bigr) -G_{k} \Biggl(\sum_{\sigma =1}^{\eta}v_{ \sigma} (\eta v_{\sigma})^{\Delta -1}, \vartheta \Biggr) \\& \quad \geq \sum_{\tau =1}^{\varrho}s_{\tau} G_{k} \bigl((\varrho s_{\tau})^{ \Delta -1}, \vartheta \bigr)-G_{k} \Biggl(\sum_{\tau =1}^{\varrho}s_{ \tau}( \varrho s_{\tau})^{\Delta -1}, \vartheta \Biggr). \end{aligned}$$
(56)

If Δ and base of log are greater than 1, then

$$ \sum_{\sigma =1}^{\eta}v_{\sigma} \log (v_{\sigma})+ \mathcal{K}_{\Delta}(\tilde{\mathbf{v}}) \geq \sum_{\tau =1}^{ \varrho}s_{\tau}\log (s_{\tau})+ \mathcal{K}_{\Delta}( \tilde{\mathbf{s}}). $$
(57)

If log has base less than 1, then (56) and (57) are reversed.

Proof

Suppose \(\tilde{\mathbf{l}}= (\frac{1}{\eta}, \ldots , \frac{1}{\eta} )\) and \(\tilde{\mathbf{u}}= (\frac{1}{\varrho}, \ldots , \frac{1}{\varrho} )\). Then from (49), we have

$$\begin{aligned} \mathcal{D}_{\Delta} (\tilde{\mathbf{v}}, \tilde{\mathbf{l}}) = \frac{1}{\Delta - 1} \log \Biggl(\sum_{\sigma =1}^{ \eta} \eta ^{\Delta - 1}v_{\sigma}^{\Delta} \Biggr) = \log \Biggl( \sum _{\sigma =1}^{\eta} v_{\sigma}^{\Delta} \Biggr) \frac{1}{\Delta -1}+\log (\eta ), \end{aligned}$$

and

$$\begin{aligned} \mathcal{D}_{\Delta} (\tilde{\mathbf{s}}, \tilde{\mathbf{u}}) = \frac{1}{\Delta - 1} \log \Biggl(\sum_{\tau =1}^{ \varrho} \varrho ^{\Delta - 1}s_{\tau}^{\Delta} \Biggr) =\log \Biggl( \sum _{\tau =1}^{\varrho}s_{\tau}^{\Delta} \Biggr) \frac{1}{\Delta - 1}+\log (\varrho ). \end{aligned}$$

It implies

$$\begin{aligned} \mathcal{K}_{\Delta}(\tilde{\mathbf{v}}) = \log (\eta ) - \mathcal{D}_{\Delta} \biggl(\tilde{\mathbf{v}}, \frac{1}{\eta}\biggr) \end{aligned}$$
(58)

and

$$\begin{aligned} \mathcal{K}_{\Delta}(\tilde{\mathbf{s}}) = \log (\varrho ) - \mathcal{D}_{\Delta} \biggl(\tilde{\mathbf{s}}, \frac{1}{\varrho}\biggr). \end{aligned}$$
(59)

From Theorem 3.2 with \(\tilde{\mathbf{l}}= (\frac{1}{\eta},\ldots ,\frac{1}{\eta} )\) and \(\tilde{\mathbf{u}}= (\frac{1}{\varrho},\ldots ,\frac{1}{\varrho} )\), together with (58) and (59), we have

$$ \sum_{\sigma =1}^{\eta}v_{\sigma} \log (\eta v_{\sigma})-\log (\eta )+ \mathcal{K}_{\Delta}( \tilde{\mathbf{v}})\geq \sum_{\tau =1}^{ \varrho} s_{\tau}\log (\varrho s_{\tau})-\log (\varrho )+ \mathcal{K}_{\Delta}(\tilde{\mathbf{s}}). $$
(60)

After simple calculations, we obtain (57). □

Remark 3.4

If all the assumptions of parts (i) and (ii) of Remark 3.1 are applied to Corollary 3.2, then the results of Corollary 3.2 coincide with [6, p. 16, Corollary 7].

3.4 Zipf–Mandelbrot law

The Zipf–Mandelbrot law is given as follows (see [27]).

Definition 3.5

A discrete probability distribution with three parameters, \(\phi \in [0, \infty )\), \(\mathcal{M} \in \{1, 2, \ldots \}\), and \(w> 0\), is called the Zipf–Mandelbrot law and is given by

$$\begin{aligned} f(\vartheta ; \mathcal{M}, \phi , w) : = \frac{1}{(\vartheta + \phi )^{w}\mathcal{K}_{\mathcal{M}, \phi , w}},\quad \vartheta = 1, \ldots , \mathcal{M}, \end{aligned}$$

where

$$\begin{aligned} \mathcal{K}_{\mathcal{M}, \phi , w} = \sum_{\sigma =1}^{ \mathcal{M}} \frac{1}{(\sigma + \phi )^{w}}. \end{aligned}$$

If the entire mass of the law is spread over all natural numbers, the density function of the Zipf–Mandelbrot law takes the following form

$$\begin{aligned} f(\vartheta ; \phi , w) = \frac{1}{(\vartheta + \phi )^{w}\mathcal{K}_{\phi , w}}, \end{aligned}$$

for \(0 \leq \phi \), \(1< w\), \(\vartheta \in \mathbb{N}\), where

$$\begin{aligned} \mathcal{K}_{\phi , w} = \sum_{\sigma =1}^{\infty} \frac{1}{(\sigma + \phi )^{w}}. \end{aligned}$$

The Zipf–Mandelbrot law reduces to the Zipf law if \(\phi = 0\).
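A minimal sketch (ours) of the Zipf–Mandelbrot probability mass function for finite \(\mathcal{M}\), confirming that it sums to one:

```python
# The Zipf-Mandelbrot probability mass function of Definition 3.5 (our sketch),
# for finite support size M, shift phi >= 0 and exponent w > 0.
def zipf_mandelbrot_pmf(M, phi, w):
    K = sum(1.0 / (sigma + phi)**w for sigma in range(1, M + 1))
    return [1.0 / ((theta + phi)**w * K) for theta in range(1, M + 1)]

pmf = zipf_mandelbrot_pmf(M=10, phi=1.5, w=1.2)
print(abs(sum(pmf) - 1.0) < 1e-12)   # the law is indeed a probability distribution
```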

Theorem 3.3

Assume that \(\tilde{\mathbf{s}}\) and \(\tilde{\mathbf{v}}\) are Zipf–Mandelbrot laws, and suppose that (55) and (56) hold for \(v_{\sigma}= \frac{1}{(\sigma +l)^{\sigma}{\mathcal{K}_{\mathcal{M}, l, \sigma}}}\) and \(s_{\tau}= \frac{1}{(\tau +s)^{\tau}{\mathcal{K}_{\mathcal{M}, s, \tau}}}\).

If log has base greater than 1, then

$$\begin{aligned}& \sum_{\sigma =1}^{\eta} \frac{1}{(\sigma +l)^{\sigma}{\mathcal{K}_{\mathcal{M}, l,\sigma}}} \log \biggl( \frac{1}{(\sigma +l)^{\sigma} {\mathcal{K}_{\mathcal{M}, l, \sigma}}} \biggr) +\frac{1}{1 - \Delta}\log \Biggl( \frac{1}{\mathcal{K} _{\mathcal{M}, l, \sigma}^{\Delta}}\sum _{\sigma =1}^{\eta} \frac{1}{(\sigma + l)^{\Delta \sigma}} \Biggr) \\& \quad \geq \sum_{\tau =1}^{\varrho} \frac{1}{(\tau +s)^{\tau}{\mathcal{K}_{\mathcal{M}, s, \tau}}} \log \biggl( \frac{1}{(\tau +s)^{\tau} {\mathcal{K}_{\mathcal{M}, s, \tau}}} \biggr) \\& \qquad {} +\frac{1}{1 - \Delta}\log \Biggl( \frac{1}{\mathcal{K}_{\mathcal{M}, s, \tau}^{\Delta}}\sum _{ \tau =1}^{\varrho} \frac{1}{(\tau + s)^{\Delta \tau}} \Biggr). \end{aligned}$$
(61)

If log has base less than 1, then (56) and (61) hold in the opposite direction.

Proof

Similarly to Corollary 3.2, the proof uses Definition 3.5 and the hypotheses given in the statement to obtain the desired result. □

Remark 3.5

Under the same conditions and methodology as in parts (i) and (ii) of Remark 3.1, Inequality (61) reduces to [6, p. 17, inequality (56)].

4 Conclusion

Four newly defined 3-convex Green functions are utilized to obtain generalized Levinson-type inequalities for the class of 3-convex functions. Applications to information theory are given, together with bounds for the resulting entropies and divergences. The newly introduced Green functions generalize the Green functions given in [20] in that they are 3-convex. Other interpolation tools, e.g., Lidstone interpolation, the Hermite interpolating polynomial, and the Montgomery identity, could also be used to explore related results.