1 Introduction

Johan Jensen proved Jensen’s inequality in [9]. It serves as a tool in discrete and continuous analysis for generating classical and new inequalities. The discrete version of Jensen’s inequality reads as follows:

$$ \zeta \biggl( { \frac{{\sum_{m = 1}^{n} {{g_{m}}{z_{m}}} }}{{\sum_{m = 1} ^{n} {{g_{m}}} }}} \biggr) \leq \frac{{\sum_{m = 1}^{n} {{g_{m}}\zeta ( {{z_{m}}} )} }}{{\sum_{m = 1}^{n} {{g_{m}}} }}, $$
(1)

where \(z_{1},\ldots ,z_{n}\in S\), S is an interval in \(\mathbb{R}\), \(({g_{1}},\ldots ,{g_{n}}) \in \mathbb{R}_{+} ^{n}\) (that is, only nonnegative weights are considered), and the function \(\zeta :S \to \mathbb{R}\) is convex on S. Steffensen [16] extended (1) by allowing some of the weights to be negative.
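For illustration, the following Python snippet (a minimal numerical sketch; the convex function \(\zeta (z)=z^{2}\), the points \(z_{m}\), and the weights \(g_{m}\) are arbitrary choices, not taken from the text) evaluates both sides of (1).

```python
import numpy as np

# Numerical sanity check of the discrete Jensen inequality (1)
# with the convex function zeta(z) = z**2 (an arbitrary choice).
zeta = lambda z: z ** 2

z = np.array([0.3, 1.7, 2.4, 5.0])     # points z_m in S = R
g = np.array([0.5, 1.0, 2.0, 0.25])    # nonnegative weights g_m

lhs = zeta(np.sum(g * z) / np.sum(g))  # zeta of the weighted mean
rhs = np.sum(g * zeta(z)) / np.sum(g)  # weighted mean of zeta

print(lhs, rhs, lhs <= rhs)            # expected: lhs <= rhs (True)
```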

The integral version of Jensen’s inequality (1) is as follows: Let \(\tau \in C ([a_{1},a_{2}], (a_{3},a_{4}))\). If \(\zeta \in C ( { ( {a_{3},a_{4}} ),\mathbb{R}} )\) is convex, then

$$ \zeta \biggl( { \frac{{\int _{a_{1}}^{a_{2}} {\tau ( {s} )\,d{s}} }}{{a_{2} - a_{1}}}} \biggr) \le \int _{a_{1}}^{a_{2}} { \frac{{\zeta ( {\tau ( {s} )} )\,d{s}}}{{a_{2} - a_{1}}}}. $$
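The integral version can be checked in the same spirit; in the sketch below \(\zeta =\exp \), \(\tau =\sin \), and the interval \([0,2]\) are arbitrary choices, and the means are computed by numerical quadrature.

```python
import numpy as np
from scipy.integrate import quad

zeta, tau = np.exp, np.sin   # convex zeta and a continuous tau (arbitrary)
a1, a2 = 0.0, 2.0

mean_tau = quad(tau, a1, a2)[0] / (a2 - a1)
mean_zeta_tau = quad(lambda s: zeta(tau(s)), a1, a2)[0] / (a2 - a1)

print(zeta(mean_tau) <= mean_zeta_tau)   # expected: True
```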

Researchers have devised several refinements of Jensen’s discrete and integral inequalities. For instance, improvements of the operator version of Jensen’s inequality are given in [7, 8, 11, 13]. In [5] Čuljak et al. generalized Jensen’s inequality via the Hermite interpolating polynomial. Several researchers have discussed and applied these inequalities on time scales. In [1], Anwar et al. proved Jensen’s inequality for delta integrals:

Suppose \(a_{1},a_{2}\in \mathbb{T}\) are such that \(a_{1}< a_{2}\) and \({F}\in C_{rd}([a_{1},a_{2}]_{\mathbb{T}},\mathbb{R})\) satisfies \(\int _{a_{1}}^{a_{2}}|{F}({s})|\Delta {s} >0\). If \(\zeta \in C ( {S,\mathbb{R}} )\) is convex on an interval \(S \subset \mathbb{R}\) and \(\tau \in {C_{rd}} ( {{{ [ {a_{1},a_{2}} ]}_{\mathbb{T}}},S} )\), then

$$ \zeta \biggl( \frac{\int _{a_{1}}^{a_{2}} \vert {F}({s}) \vert \tau ({s})\Delta {s}}{\int _{a_{1}}^{a_{2}} \vert {F}({s}) \vert \Delta {s}} \biggr) \leq \frac{\int _{a_{1}}^{a_{2}} \vert {F}({s}) \vert \zeta (\tau ({s}))\Delta {s}}{\int _{a_{1}}^{a_{2}} \vert {F}({s}) \vert \Delta {s}}. $$

Under similar hypotheses, the same result is obtained in [12] with the delta integral replaced by the nabla integral.

Sheng et al. [14] introduced a convex combination of the delta and nabla integrals, called the diamond-α integral, where \(\alpha \in [0,1]\). In [15] the following Jensen’s inequality for the diamond-α integral is given:

Let \(\mathbb{T}\) be a time scale and \(a_{1},a_{2}\in \mathbb{T}\) with \(a_{1}< a_{2}\). Suppose \(S\subseteq \mathbb{R}\) is an interval, \(\tau \in {C_{rd}} ( {{{ [{a_{1},a_{2}} ]}_{\mathbb{T}}}, {S} } )\), and \({F} \in C ( { [{a_{1},a_{2}} ],\mathbb{R}} )\) is such that

$$ \int _{a_{1}}^{a_{2}} { \bigl\vert {{F} ( {s} )} \bigr\vert { \diamondsuit _{\alpha }}} {s} > 0. $$

If \(\zeta \in C ( { {S} ,\mathbb{R}} )\) is convex, then

$$ \zeta \biggl( { \frac{{\int _{a_{1}}^{a_{2}} { \vert {{F} ( {s} )} \vert \tau ( {s} ) {\diamondsuit _{\alpha }}{s}} }}{{\int _{a_{1}}^{a_{2}} { \vert {{F} ( {s} )} \vert {\diamondsuit _{\alpha }}{s}} }}} \biggr) \leq \frac{\int _{a_{1}}^{a_{2}} \vert {{F} ( {s} )} \vert \zeta ( {\tau ( {s} )} ) \diamondsuit _{\alpha }s }{{\int _{a_{1}}^{a_{2}} { \vert {{F} ( {s} )} \vert {\diamondsuit _{\alpha }}{s}} }}. $$

In [6] the authors introduced a more general version of the diamond-α integral, termed the diamond integral, which is of special interest even for \(\mathbb{T}=\mathbb{R}\). These integrals bring us closer to a true symmetric integral on time scales.

In [3] Jensen’s inequality is proved for diamond integrals:

Let \(a_{1}, a_{2} \in \mathbb{T}\) with \(a_{1}< a_{2}\), \({F}\in C ( {{{ [ {a_{1},a_{2}} ]}_{\mathbb{T}}} ,{ \mathbb{R}^{+} }} )\), and \(\tau \in C ( {{{ [ {a_{1},a_{2}} ]}_{\mathbb{T}}},S} )\) satisfying \(\int _{a_{1}}^{a_{2}} {{F} ( {s} )\diamondsuit {s} > 0} \). If \(\zeta \in C ( {S,\mathbb{R}} )\) is convex, where \(S = [ {m_{1},m_{2}} ]\) with \(m_{1} = {\min_{{s} \in {{ [ {a_{1},a_{2}} ]}_{\mathbb{T}}}}}\tau ( {s} )\) and \(m_{2} = {\max_{{s} \in {{ [ {a_{1},a_{2}} ]}_{\mathbb{T}}}}}\tau ( {s} )\), then

$$ \zeta \biggl( { \frac{{\int _{a_{1}}^{a_{2}} {{F} ( {s} )\tau ( {s} ) \diamondsuit {s}} }}{{\int _{a_{1}}^{a_{2}} {{F} ( {s} )\diamondsuit {s}} }}} \biggr) \le \frac{{\int _{a_{1}}^{a_{2}} {{F} ( {s} )\zeta ( {\tau ( {s} )} )\diamondsuit {s}} }}{{\int _{a_{1}}^{a_{2}} {{F} ( {s} )\diamondsuit {s}} }}. $$
(2)

Under the conditions of (2), the Jensen-type linear functional on \(\mathbb{T}\) is defined by

$$ J ( \zeta ) = \frac{{\int _{a_{1}}^{a_{2}} { \vert {{F} ( {s} )} \vert \zeta ( {\tau ( {s} )} )\diamondsuit {s}} }}{{\int _{a_{1}}^{a_{2}} { \vert {{F} ( {s} )} \vert \diamondsuit {s}} }} - \zeta \biggl( { \frac{{\int _{a_{1}}^{a_{2}} { \vert {{F} ( {s} )} \vert \tau ( {s} ) \diamondsuit {s}} }}{{\int _{a_{1}}^{a_{2}} { \vert {{F} ( {s} )} \vert \diamondsuit {s}} }}} \biggr). $$
(3)

Remark 1.1

Inequality (2) implies that \(J(\zeta )\geq 0\) for every convex function and that \(J(\zeta )=0\) for the identity and constant functions; since J is linear, it therefore vanishes on all affine functions.
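For \(\mathbb{T}=\mathbb{R}\) the functional (3) reduces to a ratio of ordinary integrals (cf. Remark 3.2 below), so Remark 1.1 can be illustrated numerically. In the sketch the weight F, the map τ, and the test functions are arbitrary choices.

```python
import numpy as np
from scipy.integrate import quad

a1, a2 = 0.0, 1.0
F = lambda s: 1.0 + s            # weight F (arbitrary, nonvanishing)
tau = lambda s: np.cos(3 * s)    # tau maps [a1, a2] into an interval S

def J(zeta):
    """Jensen functional (3) for T = R, evaluated by quadrature."""
    denom = quad(lambda s: abs(F(s)), a1, a2)[0]
    num = quad(lambda s: abs(F(s)) * zeta(tau(s)), a1, a2)[0]
    mean = quad(lambda s: abs(F(s)) * tau(s), a1, a2)[0] / denom
    return num / denom - zeta(mean)

print(J(np.exp))                 # convex zeta: expected >= 0
print(J(lambda z: 2 * z - 1))    # affine zeta: expected approx 0
```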

The aim of the present study is to extend (3) to n-convex functions by means of Green’s function and some types of interpolation introduced by Hermite. In the next section, after defining the diamond derivative and integral, we recall the Hermite interpolating polynomial along with some of its special forms. Section 3 contains the main results of the paper, Sect. 4 provides bounds for the associated identities, and concluding remarks are given in the last section.

2 Preliminaries

2.1 Some essentials from diamond calculus

A time scale \(\mathbb{T}\) is a nonempty closed subset of \(\mathbb{R}\); it may be connected or not. To account for the possible gaps in a time scale, the forward and backward jump operators \(\sigma , \rho :\mathbb{T}\rightarrow \mathbb{T}\) are defined by

$$ \sigma ({s}) = \inf \{u\in \mathbb{T} : u > {s}\}, $$

and

$$ \rho ({s}) = \sup \{u\in \mathbb{T} : u < {s}\}. $$

In general, \(\sigma ({s}) \geq {s}\) and \(\rho ({s}) \leq {s}\). The mappings \(\mu , \nu :\mathbb{T}\rightarrow [0,+\infty )\) defined by \(\mu ({s})= \sigma ({s})-{s}\) and \(\nu ({s})={s}-\rho ({s})\) are called the forward and backward graininess functions, respectively.
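The following Python sketch implements these operators for a small finite time scale (an arbitrary toy example; the usual conventions \(\sigma (\max \mathbb{T})=\max \mathbb{T}\) and \(\rho (\min \mathbb{T})=\min \mathbb{T}\) are adopted).

```python
import numpy as np

# Jump operators and graininess functions for a finite time scale
# given as a sorted array of points (a toy example).
T = np.array([0.0, 0.1, 0.2, 0.5, 0.6, 1.0])

def sigma(s):
    """Forward jump: smallest point of T strictly greater than s."""
    bigger = T[T > s]
    return bigger.min() if bigger.size else s    # sigma(max T) = max T

def rho(s):
    """Backward jump: largest point of T strictly less than s."""
    smaller = T[T < s]
    return smaller.max() if smaller.size else s  # rho(min T) = min T

mu = lambda s: sigma(s) - s   # forward graininess
nu = lambda s: s - rho(s)     # backward graininess

for s in T:
    print(s, sigma(s), rho(s), mu(s), nu(s))
```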

The classification of points on time scales is given below:

For any \({s} \in \mathbb{T}\),

  • if \(\rho ( {s} ) = {s}\), then s is left dense;

  • if \(\sigma ( {s} ) = {s}\), then s is right dense;

  • if \(\rho ( {s} ) < {s}\), then s is left scattered;

  • if \(\sigma ( {s} ) > {s}\), then s is right scattered;

  • if \(\rho ( {s} ) = {s}\) and \(\sigma ( {s} ) = {s}\), then s is dense;

  • if \(\rho ( {s} ) < {s}\) and \(\sigma ( {s} ) >{s}\), then s is isolated.

A mapping \(\varrho : \mathbb{T}\rightarrow \mathbb{R}\) is said to be rd-continuous if it is continuous at every \({s} \in \mathbb{T}\) with \(\sigma ( {s} ) = {s}\) and has a finite left-sided limit at every \({s} \in \mathbb{T}\) with \(\rho ( {s} ) = {s}\). The set of such functions is denoted by \(C_{rd}\).

Definition 2.1

Let \(\Lambda :\mathbb{T} \to \mathbb{R}\) be a mapping and \({s} \in \mathbb{T}_{k}^{k}\). Suppose there exists a finite number \({\Lambda ^{\diamondsuit }} ( {s} )\) with the property that, for every \(\varepsilon > 0\), there exists a neighborhood W of s such that

$$\begin{aligned}& \bigl\vert { \bigl[ {{\Lambda ^{\sigma }} ( {s} ) -\Lambda ( u ) + \Lambda ( {2{s} - u} ) - {\Lambda ^{\rho }} ( {s} )} \bigr] -{\Lambda ^{\diamondsuit }} ( {s} ) \bigl[{\sigma ( {s} ) + 2{s} - 2u - \rho ( {s} )} \bigr]} \bigr\vert \\& \quad \leq \varepsilon \bigl\vert {\sigma ( {s} ) + 2{s} - 2u - \rho ( {s} )} \bigr\vert \end{aligned}$$

holds for all \(u \in W\) for which \(2{s} - u \in W\). Then \({\Lambda ^{\diamondsuit }} ( {s} )\) is known as the diamond derivative of Λ at s.

Definition 2.2

Let \(a_{1},a_{2} \in \mathbb{T}\) and \(\tau :\mathbb{T} \to \mathbb{R}\) be a function. The diamond integral of τ from \(a_{1}\) to \(a_{2}\) is given by

$$ \int _{a_{1}}^{a_{2}} {\tau ( {s} )\diamondsuit {s} := \int _{a_{1}}^{a_{2}}{\gamma ( {s} )\tau ( {s} )\Delta {s} + \int _{a_{1}}^{a_{2}} { \bigl( {1 -\gamma ( {s} )} \bigr)} \tau ( {s} )\nabla {s} } } $$

provided that γτ and \(( {1 - \gamma } )\tau \) are delta and nabla integrable on \({ [ {a_{1},a_{2}} ]_{\mathbb{T}}}\), respectively; here γ denotes the weight function introduced in [6].

It is to be noted that, in general, \({ ( {\int _{b}^{{s}} {\tau ( u )\diamondsuit u} } )^{\diamondsuit }} \ne \tau ( {s} )\) for \({s}\in \mathbb{T}_{k}^{k}\); in other words, the fundamental theorem of calculus does not hold for diamond integrals.

The properties of the diamond integrals are analogous to the properties of the delta, nabla, and diamond-α integrals, see [6].

Remark 2.3

If \(\mathbb{T}=\mathbb{R}\), then

$$ \int _{a_{1}}^{a_{2}} {\tau } ( {s} )\diamondsuit {s} = \int _{a_{1}}^{a_{2}}{\tau }({{{s}}})\,{\mathrm{d}} {{{s}}}; $$

if \(\mathbb{T}=h\mathbb{Z}\) where \(h>0\), then

$$ \int _{a_{1}}^{a_{2}} {\tau } ( {s} )\diamondsuit {s} = \frac{h}{2} \Biggl(\sum_{m={{a_{1}}/h}}^{{{a_{2}}/h-1}}{ \tau }(m h) + \sum_{m={a_{1}}/h+1}^{{a_{2}}/h}{\tau }(m h) \Biggr);$$

if \(\mathbb{T}={q}^{\mathbb{N}_{0}}\) where \({q}>1\), then

$$ \int _{a_{1}}^{a_{2}} {\tau } ( {s} )\diamondsuit {s} = \frac{{q}-1}{{q}+1} \Biggl(\sum_{m=\log _{q}(a_{1})}^{\log _{q}(a_{2})-1}{q}^{m+1}{ \tau }\bigl({q}^{m}\bigr) + \sum_{m=\log _{q}(a_{1})+1}^{\log _{q}({a_{2}})}{q}^{m-1}{ \tau }\bigl({q}^{m}\bigr) \Biggr).$$
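A direct implementation of the two discrete formulas of this remark is sketched below (the test function \(\tau ({s})={s}^{2}\) and the parameters are arbitrary choices). For \(\mathbb{T}=h\mathbb{Z}\) the sum has a trapezoid-like form, so for small h its value is close to \(\int _{0}^{2}{s}^{2}\,d{s}=8/3\).

```python
import numpy as np

def diamond_hZ(tau, a1, a2, h):
    """Diamond integral of tau over [a1, a2] for T = hZ (Remark 2.3)."""
    M, N = round(a1 / h), round(a2 / h)                     # a1 = M*h, a2 = N*h
    s_delta = sum(tau(m * h) for m in range(M, N))          # m = M, ..., N-1
    s_nabla = sum(tau(m * h) for m in range(M + 1, N + 1))  # m = M+1, ..., N
    return (h / 2) * (s_delta + s_nabla)

def diamond_q(tau, a1, a2, q):
    """Diamond integral of tau over [a1, a2] for T = q^{N_0} (Remark 2.3)."""
    la1 = round(np.log(a1) / np.log(q))
    la2 = round(np.log(a2) / np.log(q))
    s_delta = sum(q ** (m + 1) * tau(q ** m) for m in range(la1, la2))
    s_nabla = sum(q ** (m - 1) * tau(q ** m) for m in range(la1 + 1, la2 + 1))
    return (q - 1) / (q + 1) * (s_delta + s_nabla)

tau = lambda s: s ** 2
print(diamond_hZ(tau, 0.0, 2.0, 0.01))      # approx 8/3
print(diamond_q(tau, 1.0, 2.0 ** 10, 2.0))
```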

2.2 Results on Hermite interpolating polynomial

Let \(- \infty < \mu < \nu < \infty \) and let \(\mu = {a_{1}} <\cdots < {a_{r}} = \nu \) (\({r \ge 2} \)) be given points. For \(\zeta \in {C^{n}} [ {\mu ,\nu } ]\), there exists a polynomial \(P _{H} ( {s} )\) of degree \(n-1\) defined by

$$ {P _{H}} ( {s} ) = \sum_{v = 1}^{r} {\sum_{u = 0}^{{k_{v}}} {{H_{uv}}} } ( {s} ){\zeta ^{(u)}} ( {{a_{v}}} ). $$
(4)

It satisfies the Hermite conditions:

$$ {P_{H}}^{ ( u )} ( {{a_{v}}} ) = {\zeta ^{ ( u )}} ( {{a_{v}}} ), \quad 0 \le u \le {k_{v}}, 1 \le v \le r, \qquad \sum _{v = 1}^{r} {{k_{v}} + r = n.} $$

The factors \({H_{uv}}\) are the fundamental polynomials of the Hermite basis; they satisfy the relations

$$\begin{aligned}& H_{uv}^{ ( p )} ( {{a_{d}}} ) = 0, \quad d \ne v, p = 0, \ldots ,{k_{d}},\\& H_{uv}^{ ( p )} ( {{a_{v}}} ) = {\delta _{up}},\quad p = 0,\ldots ,{k_{v}}, \text{for } u = 0,\ldots ,{k_{v}}, \end{aligned}$$

with \(d,v=1, \ldots ,r\) and

$$ {\delta _{up}} = \textstyle\begin{cases} 1, & u = p, \\ 0, & u \ne p. \end{cases} $$

Also \({H_{uv}} ( {s} )\) is given by

$$ {H_{uv}} ( {s} ) = { {\frac{1}{{u!}} \frac{{\omega ( {s} )}}{{{{ ( {{s} - {a_{v}}} )}^{{k_{v}} + 1 - u}}}} \sum_{k = 0}^{{k_{v}} - u} { \frac{1}{{k!}}} \frac{{{d^{k}}}}{{d{{s}^{k}}}} \biggl( { \frac{{{{ ( {{s} - {a_{v}}} )}^{{k_{v}} + 1}}}}{{\omega ( {s} )}}} \biggr)} \bigg|_{{s} = {a_{v}}}} { ( {{s} - {a_{v}}} )^{k}},$$
(5)

with

$$ \omega ( {s} ) = \prod_{v = 1}^{r} {{{ ( {{s} - {a_{v}}} )}^{{k_{v}} + 1}}} .$$
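The construction (4)–(5) can be checked symbolically. The sketch below (the nodes, the values \(k_{v}\), and the test function \(\zeta =\exp \) are arbitrary choices) builds the \(H_{uv}\) from (5) and verifies the Hermite conditions \(P_{H}^{(u)}(a_{v})=\zeta ^{(u)}(a_{v})\).

```python
import sympy as sp

s = sp.symbols('s')
nodes = [0, 1, 2]      # a_1 < a_2 < a_3  (r = 3)
kv = [1, 0, 1]         # n = k_1 + k_2 + k_3 + r = 5
zeta = sp.exp(s)

omega = sp.Mul(*[(s - a) ** (k + 1) for a, k in zip(nodes, kv)])

def H(u, v):
    """Hermite basis polynomial H_{uv}(s) built from formula (5)."""
    a, k_v = nodes[v], kv[v]
    core = sp.cancel((s - a) ** (k_v + 1) / omega)   # smooth at s = a
    total = sum(sp.diff(core, s, k).subs(s, a) / sp.factorial(k)
                * (s - a) ** k for k in range(k_v - u + 1))
    return sp.expand(omega / (s - a) ** (k_v + 1 - u) * total / sp.factorial(u))

P_H = sum(H(u, v) * sp.diff(zeta, s, u).subs(s, nodes[v])
          for v in range(len(nodes)) for u in range(kv[v] + 1))

# Hermite conditions: P_H^(u)(a_v) = zeta^(u)(a_v) for 0 <= u <= k_v.
for v, a in enumerate(nodes):
    for u in range(kv[v] + 1):
        diff = sp.diff(P_H, s, u).subs(s, a) - sp.diff(zeta, s, u).subs(s, a)
        print(v, u, sp.simplify(diff))   # expected: 0 in every case
```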

Hermite conditions encompass the following specific cases:

  • (Lagrange conditions) Let \(r = n\) and \(k_{v}=0\) for all \(v=1,\ldots ,r\). Then we have the Lagrange polynomial \(P_{L}({s})\) satisfying

    $$ {P _{L}} ( {{a_{v}}} ) = \zeta ( {{a_{v}}} ),\quad 1 \le v \le n. $$
  • Conditions for Type \(( {\mathfrak{z},n-\mathfrak{z}} )\): Let \(r = 2\), \(1 \le \mathfrak{z} \le n {-} 1\), \({k_{1}} = \mathfrak{z}{-} 1\), \({k_{2}}=n{-}\mathfrak{z}{-}1\). Then we have the polynomial \(P_{(\mathfrak{z},n)}({s})\) satisfying

    $$\begin{aligned}& P _{ ( {\mathfrak{z},n} )}^{ ( u )} ( \mu ) = {\zeta ^{ ( u )}} ( \mu ),\quad 0 \le u \le \mathfrak{z} - 1, \\& P _{ ( {\mathfrak{z},n} )}^{ ( u )} ( \nu ) = {\zeta ^{ (u )}} ( \nu ), \quad 0 \le u \le n - \mathfrak{z} -1. \end{aligned}$$
  • (Conditions for Taylor’s Two-point Formula) For \(n =2\mathfrak{z}\), \(r = 2\), \({k_{1}} = {k_{2}} = \mathfrak{z}-1\), we have a Taylor two-point interpolating polynomial \(P_{2T} ({s})\), satisfying

    $$ P _{2T}^{ ( u )} ( \mu ) = {\zeta ^{ ( u )}} ( \mu ), \qquad P _{2T}^{ ( u )} ( \nu ) = { \zeta ^{ ( u )}} ( \nu ), \quad 0 \le u \le \mathfrak{z} - 1 .$$

The next theorem is useful for our results and is given in [10].

Theorem 2.4

Suppose we have \(- \infty < \mu < \nu < \infty \), \(\mu = {a_{1}} < \cdots <{a_{r}} = \nu \) (\(r \ge 2 \)), and \(\zeta \in {C^{n}} [{\mu ,\nu } ]\). Then we have

$$ \zeta ( {s} ) = {P _{H}} ( {s} ) + {R_{H}} (\zeta , {s} ), $$

where \({P _{H}}\) is the Hermite interpolating polynomial as defined in (4) and \({R_{H}} ( {\zeta ,{s}} )\) denotes the remainder given by

$$ {R_{H}} ( {\zeta ,{s}} ) = \int _{\mu }^{\nu }{{G_{H,n}} ( {{s},t} ){ \zeta ^{ ( n )}}} ( t )\,dt,$$

where

$$ G_{H,n} ( s,t ) = \textstyle\begin{cases} \sum_{v = 1}^{b} \sum_{u = 0}^{k_{v}} \frac{ ( a_{v} - t )^{n - u - 1}}{ ( n - u - 1 )!} H_{uv} ( s ), &t \le s, \\ - \sum_{v = b + 1}^{r} \sum_{u = 0}^{k_{v}} \frac{ ( a_{v} - t )^{n - u - 1}}{ ( n - u - 1 )!}H_{uv} ( s ), &t \ge s, \end{cases} $$
(6)

for all \({a_{b}} \le t \le {a_{b + 1}}\), \(b = 0,\ldots ,r\) with \({a_{0}} = \mu \) and \({a_{r + 1}} = \nu \).

Remark 2.5

By imposing the Lagrange conditions, Theorem 2.4 takes the form

$$ \zeta ( {s} ) = {P _{L}} ( {s} ) + {R_{L}} ( \zeta ,{s} ). $$

Here \(P_{L}({s})\) represents a Lagrange polynomial, which is

$$ {P _{L}} ( {s} ) = \sum_{v = 1}^{n} {\prod_{\mathop{k = 1} _{k \ne v} }^{n} { \biggl( { \frac{{{s} - {a_{k}}}}{{{a_{v}} - {a_{k}}}}} \biggr)\zeta ( {{a_{v}}} )} }, $$

and \(R_{L}( \zeta , {s})\) is the remainder, defined by

$$ {R_{L}} ( {\zeta ,{s}} ) = \int _{\mu }^{\nu }{{G_{L}} ( {{s},t} ) { \zeta ^{ (n )}}} ( t )\,dt, $$

with

$$ {G_{L}} ( {{s},t} ) = \frac{1}{{ ( {n - 1} )!}} \textstyle\begin{cases} \sum_{v = 1}^{b} {{{ ( {{a_{v}} - t} )}^{n - 1}} \prod_{\mathop{k = 1} _{k \ne v} }^{n} { ( { \frac{{{s} - {a_{k}}}}{{{a_{v}} - {a_{k}}}}} )}},&t\le {s}, \\ - \sum_{v = b+1}^{n} {{{ ( {{a_{v}} - t} )}^{n - 1}} \prod_{\mathop{k = 1} _{k \ne v} }^{n} { ( { \frac{{{s} - {a_{k}}}}{{{a_{v}} - {a_{k}}}}} ),}}&t \ge {s}, \end{cases} $$

for \({a_{b}} \le t \le {a_{b + 1}}\), \(b = 1,\ldots ,n - 1\), with \(a_{1}=\mu \) and \(a_{n}=\nu \).
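The Lagrange form of the remainder can be tested numerically; the sketch below (three nodes and \(\zeta =\exp \) are arbitrary choices, so \(n=3\) and \(\zeta ^{(3)}=\exp \)) compares \(\zeta ({s})\) with \(P_{L}({s})+R_{L}(\zeta ,{s})\).

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

nodes = np.array([0.0, 0.4, 1.0])       # a_1 = mu, ..., a_n = nu
n = len(nodes)
zeta, zeta_n = np.exp, np.exp           # exp equals its own n-th derivative

def ell(v, s):
    """Lagrange fundamental polynomial for the node nodes[v] (0-based)."""
    return np.prod([(s - nodes[k]) / (nodes[v] - nodes[k])
                    for k in range(n) if k != v])

def P_L(s):
    return sum(ell(v, s) * zeta(nodes[v]) for v in range(n))

def G_L(s, t):
    """Green's function of Remark 2.5; b = #{v : a_v <= t}, capped at n - 1."""
    b = min(int(np.searchsorted(nodes, t, side='right')), n - 1)
    vs, sgn = (range(b), 1.0) if t <= s else (range(b, n), -1.0)
    return sgn * sum((nodes[v] - t) ** (n - 1) * ell(v, s)
                     for v in vs) / factorial(n - 1)

s = 0.7
R_L = quad(lambda t: G_L(s, t) * zeta_n(t), nodes[0], nodes[-1])[0]
print(zeta(s), P_L(s) + R_L)            # the two values should agree
```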

Remark 2.6

Similarly, by imposing \((\mathfrak{z},n-\mathfrak{z})\) conditions on Theorem 2.4, one gets

$$ \zeta ( {s} ) = {P _{(\mathfrak{z},n)}} ( {s} ) + {R_{(\mathfrak{z},n)}} ( \zeta ,{s} ), $$

where

$$ {P _{ ( {\mathfrak{z},n} )}} ( {s} ) = \sum_{u = 0}^{\mathfrak{z} - 1} {{\xi _{u}} ( {s} ){ \zeta ^{ ( u )}} ( \mu ) + \sum _{u = 0}^{n - \mathfrak{z} - 1} {{\eta _{u}} ( {s} )} } {\zeta ^{ ( u )}} ( \nu ), $$

with

$$ {\xi _{u}} ( {s} ) = \frac{1}{{u!}}{ ( {{s} - \mu } )^{u}} \biggl( { \frac{{{s} - \nu }}{{\mu - \nu }}} \biggr)^{n - \mathfrak{z}} \sum_{k = 0}^{\mathfrak{z} - 1 - u} \binom{n - \mathfrak{z} + k - 1}{k} \biggl( { \frac{{{s} - \mu }}{{\nu - \mu }}} \biggr)^{k} $$
(7)

and

$$ {\eta _{u}} ( {s} ) = \frac{1}{{u!}}{ ( {{s} - \nu } )^{u}} \biggl( { \frac{{{s} - \mu }}{{\nu - \mu }}} \biggr)^{\mathfrak{z}} \sum_{k = 0}^{n - \mathfrak{z} - 1 - u} \binom{\mathfrak{z} + k - 1}{k} \biggl( { \frac{{{s} - \nu }}{{\mu - \nu }}} \biggr)^{k}. $$
(8)

The remainder \(R_{(\mathfrak{z},n)}( \zeta , {s})\) is given by

$$ {R_{(\mathfrak{z},n)}} ( {\zeta ,{s}} ) = \int _{\mu }^{\nu }{{G_{(\mathfrak{z},n)}} ( {{s},t} ){ \zeta ^{ ( n )}}} ( t )\,dt, $$

with

$$ G_{(\mathfrak{z},n)} ( {{s},t} ) = \textstyle\begin{cases} \sum_{v = 0}^{\mathfrak{z} - 1} \bigl[ \sum_{l = 0}^{\mathfrak{z} - 1 - v} \binom{n - \mathfrak{z} + l - 1}{l} ( \frac{{s} - \nu }{\nu - \mu } )^{l} \bigr] \frac{ ( {{s} - \mu } )^{v} ( {\mu - t} )^{n - v - 1}}{v! ( {n - v - 1} )!} ( \frac{\nu - {s}}{\nu - \mu } )^{n - \mathfrak{z}}, & \mu \le t \le {s} \le \nu , \\ \sum_{u = 0}^{n - \mathfrak{z} - 1} \bigl[ \sum_{l_{1} = 0}^{n - \mathfrak{z} - u - 1} \binom{\mathfrak{z} + l_{1} - 1}{l_{1}} ( \frac{\nu - {s}}{\nu - \mu } )^{l_{1}} \bigr] \frac{ ( {{s} - \nu } )^{u} ( {\nu - t} )^{n - u - 1}}{u! ( {n - u - 1} )!} ( \frac{{s} - \mu }{\nu - \mu } )^{\mathfrak{z}}, & \mu \le {s} \le t \le \nu . \end{cases} $$
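As a consistency check of (7) and (8), the following symbolic sketch (with the arbitrary choices \(\mu =0\), \(\nu =1\), \(\mathfrak{z}=2\), \(n=5\), and \(\zeta =\exp \)) verifies that \(P_{(\mathfrak{z},n)}\) satisfies the type \((\mathfrak{z},n-\mathfrak{z})\) conditions.

```python
import sympy as sp

s, mu, nu = sp.symbols('s'), sp.Integer(0), sp.Integer(1)
z, n = 2, 5
zeta = sp.exp(s)

def xi(u):    # formula (7)
    return (s - mu) ** u / sp.factorial(u) * ((s - nu) / (mu - nu)) ** (n - z) \
        * sum(sp.binomial(n - z + k - 1, k) * ((s - mu) / (nu - mu)) ** k
              for k in range(z - u))

def eta(u):   # formula (8)
    return (s - nu) ** u / sp.factorial(u) * ((s - mu) / (nu - mu)) ** z \
        * sum(sp.binomial(z + k - 1, k) * ((s - nu) / (mu - nu)) ** k
              for k in range(n - z - u))

P = sum(xi(u) * sp.diff(zeta, s, u).subs(s, mu) for u in range(z)) \
    + sum(eta(u) * sp.diff(zeta, s, u).subs(s, nu) for u in range(n - z))

for u in range(z):        # P^(u)(mu) = zeta^(u)(mu), 0 <= u <= z - 1
    print(sp.simplify(sp.diff(P, s, u).subs(s, mu)
                      - sp.diff(zeta, s, u).subs(s, mu)))   # expected: 0
for u in range(n - z):    # P^(u)(nu) = zeta^(u)(nu), 0 <= u <= n - z - 1
    print(sp.simplify(sp.diff(P, s, u).subs(s, nu)
                      - sp.diff(zeta, s, u).subs(s, nu)))   # expected: 0
```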

Remark 2.7

Theorem 2.4 in the form of Taylor two-point formula becomes

$$ \zeta ( {s} ) = {P _{2T}} ( {s} ) + {R_{2T}} ( \zeta ,{s} ), $$

where Taylor two-point interpolating polynomial, \(P_{2T} ({s})\), is defined by

$$ {P _{2T}} ( {s} ) = \sum_{u = 0}^{\mathfrak{z} - 1} \sum_{k = 0}^{\mathfrak{z} - 1 - u} \binom{\mathfrak{z} + k - 1}{k} \biggl[ \frac{ ( {{s} - \mu } )^{u}}{u!} \biggl( \frac{{s} - \nu }{\mu - \nu } \biggr)^{\mathfrak{z}} \biggl( \frac{{s} - \mu }{\nu - \mu } \biggr)^{k} {\zeta ^{ ( u )}} ( \mu ) + \frac{ ( {{s} - \nu } )^{u}}{u!} \biggl( \frac{{s} - \mu }{\nu - \mu } \biggr)^{\mathfrak{z}} \biggl( \frac{{s} - \nu }{\mu - \nu } \biggr)^{k} {\zeta ^{ ( u )}} ( \nu ) \biggr] $$

and \(R_{2T}( \zeta , {s})\) is

$$ {R_{2T}} ( {\zeta ,{s}} ) = \int _{\mu }^{\nu }{{G_{2T}} ( {{s},t} ){ \zeta ^{ ( n )}}} ( t )\,dt, $$

with

$$ {G_{2T}} ( {{s},t} ) = \frac{(-1)^{\mathfrak{z}}}{ ( {2\mathfrak{z} - 1} )!} \textstyle\begin{cases} l^{\mathfrak{z}} ( {{s},t} ) \sum_{v = 0}^{\mathfrak{z} - 1} \binom{\mathfrak{z} - 1 + v}{v} ( {{s} - t} )^{\mathfrak{z} - 1 - v} l_{1}^{v} ( {{s},t} ), & t \le {s} ; \\ l_{1}^{\mathfrak{z}} ( {{s},t} ) \sum_{v = 0}^{\mathfrak{z} - 1} \binom{\mathfrak{z} - 1 + v}{v} ( {t - {s}} )^{\mathfrak{z} - 1 - v} l^{v} ( {{s},t} ), & {s} \le t ; \end{cases} $$

where \(l({s},t)=\frac{(t-\mu )(\nu -{s})}{\nu -\mu }\) and \(l_{1}({s},t)=l(t,{s})\) for all \({s},t \in [\mu ,\nu ]\).

3 Extension of Jensen’s functional via Green’s function and Hermite polynomial

This section begins with the proof of our key identity underlying the extension of Jensen’s inequality. The Green’s function \(G:[\mu ,\nu ]\times [\mu ,\nu ]\rightarrow \mathbb{R}\) is defined by

$$ G ( {{s},t} ) = \textstyle\begin{cases} \frac{{ ( {{s} - \nu } ) ( {t - \mu } )}}{{\nu - \mu }}, & \mu \le t \le {s}; \\ \frac{{ ( {t - \nu } ) ( {{s} - \mu } )}}{{\nu - \mu }}, & {s} \le t \le \nu . \end{cases} $$
(9)

The function G is continuous and symmetric, and it is convex with respect to both s and t.

For \(h\in C^{2}([\mu ,\nu ])\), we have

$$ h ( {s} ) = \frac{{\nu - {s}}}{{\nu - \mu }}h ( \mu ) + \frac{{{s} - \mu }}{{\nu - \mu }}h ( \nu ) + \int _{\mu }^{\nu }{G ( {{s},t} )h'' ( t )\,dt}, $$
(10)

where \(G({s},t)\) is defined in (9).
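Identity (10) is easy to verify numerically; in the sketch below the interval \([0,2]\) and the test function \(h=\cos \) are arbitrary choices.

```python
import numpy as np
from scipy.integrate import quad

mu_, nu_ = 0.0, 2.0
h = np.cos                       # a C^2 test function
h2 = lambda t: -np.cos(t)        # its second derivative

def G(s, t):
    """Green's function (9)."""
    if t <= s:
        return (s - nu_) * (t - mu_) / (nu_ - mu_)
    return (t - nu_) * (s - mu_) / (nu_ - mu_)

s = 1.3
rhs = ((nu_ - s) * h(mu_) + (s - mu_) * h(nu_)) / (nu_ - mu_) \
    + quad(lambda t: G(s, t) * h2(t), mu_, nu_)[0]
print(h(s), rhs)                 # the two values should agree
```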

Theorem 3.1

Let \(- \infty < \mu < \nu < \infty \) and \(\mu = {a_{1}} <\cdots < {a_{r}} = \nu \) (\(r \ge 2\)). Assume that \(\zeta \in C^{n}[\mu ,\nu ]\) is a convex function, while \(H_{uv}\), \(G_{H,n}\), and G are defined as in (5), (6), and (9), respectively. Then

$$ \begin{aligned} J \bigl( \zeta (s) \bigr) ={}& \int _{\mu }^{\nu }J \bigl( {G ( {{s},t} )} \bigr) \sum _{v = 1}^{r} \sum _{u = 0}^{{k_{v}}} {H_{uv}} ( t ){\zeta ^{ ( {u + 2} )}} ( {{a_{v}}} )\,dt \\ &{} + \int _{\mu }^{\nu } \int _{\mu }^{\nu }J \bigl( G ( {{s},t} ) \bigr) {{G_{H,n - 2}} ( {t,x} ) { \zeta ^{ ( n )}} ( x )\,dx}\,dt. \end{aligned} $$
(11)

Proof

Substituting (10) into (3), we have

$$ J \bigl( {\zeta ( {s} )} \bigr)=J \biggl( { \frac{{\nu - {s}}}{{\nu - \mu }} \zeta ( \mu ) + \frac{{{s} - \mu }}{{\nu - \mu }}\zeta ( \nu ) + \int _{\mu }^{\nu }{G ( {{s},t} )\zeta '' ( t )\,dt} } \biggr). $$

Since J is linear, we have

$$ J \bigl( {\zeta ( {s} )} \bigr) = \zeta ( \mu ) J \biggl({\frac{{\nu - {s}}}{{\nu - \mu }}} \biggr) + \zeta ( \nu ) J \biggl({\frac{{{s} - \mu }}{{\nu - \mu }}} \biggr) + \int _{\mu }^{\nu }{J \bigl( {G ( {{s},t} )} \bigr) \zeta '' ( t )\,dt}. $$

Remark 1.1 implies that

$$ J \bigl( {\zeta ( {s} )} \bigr) = \int _{\mu }^{\nu }{J \bigl( {G ( {{s},t} )} \bigr) \zeta '' ( t )\,dt}, $$
(12)

where, by Theorem 2.4 applied to \(\zeta '' \in C^{n-2}[\mu ,\nu ]\),

$$ \zeta '' ( t ) = \sum _{v = 1}^{r} {\sum_{u = 0}^{{k_{v}}} {{H_{uv}} (t ){\zeta ^{ ( {u + 2} )}} ( {{a_{v}}} ) + \int _{\mu }^{\nu }{{G_{H,n - 2}} ( {t,x} ){\zeta ^{ ( n )}} ( x )\,dx} } }. $$
(13)

Substituting (13) into (12) we have

$$ \begin{aligned} J \bigl( {\zeta ( {s} )} \bigr) ={}& \int _{\mu }^{\nu }{J \bigl( {G ( {{s},t} )} \bigr) \Biggl[ {\sum_{v = 1}^{r} {\sum _{u = 0}^{{k_{v}}} {{H_{uv}} ( t ){\zeta ^{ ( {u + 2} )}} ( {{a_{v}}} ) + \int _{\mu }^{\nu }{{G_{H,n - 2}} ( {t,x} ) {\zeta ^{ ( n )}} ( x )\,dx} } } } \Biggr]}\,dt \\ ={}& \int _{\mu }^{\nu }J \bigl( {G ( {{s},t} )} \bigr) \sum _{v = 1}^{r} \sum _{u = 0}^{{k_{v}}} {H_{uv}} ( t ){\zeta ^{ ( {u + 2} )}} ( {{a_{v}}} )\,dt \\ & {}+ \int _{\mu }^{\nu } \int _{\mu }^{\nu }J \bigl( G ( {{s},t} ) \bigr) {{G_{H,n - 2}} ( {t,x} ) { \zeta ^{ ( n )}} ( x )\,dx}\,dt, \end{aligned} $$

as required. □

Remark 3.2

For different time scales, special cases of identity (11) can be deduced. For example, when \(\mathbb{T}=\mathbb{R}\), identity (11) holds with the Jensen functional

$$ J ( \zeta ) = \frac{{\int _{a_{1}}^{a_{2}} { \vert {{F} ( {s} )} \vert \zeta ( {\tau ( {s} )} )\,{\mathrm{d}} {s}} }}{{\int _{a_{1}}^{a_{2}} { \vert {{F} ( {s} )} \vert \,{\mathrm{d}} {s}} }} - \zeta \biggl( { \frac{{\int _{a_{1}}^{a_{2}} { \vert {{F} ( {s} )} \vert \tau ( {s} ) \,{\mathrm{d}} {s}} }}{{\int _{a_{1}}^{a_{2}} { \vert {{F} ( {s} )} \vert \,{\mathrm{d}} {s}} }}} \biggr). $$

Similarly, for \(\mathbb{T}=h\mathbb{Z}\), where \(h>0\), identity (11) holds with the Jensen functional

$$ \begin{aligned} J ( \zeta ) ={}& \frac{{\sum_{m={{a_{1}}/h}}^{{{a_{2}}/h-1}} { \vert {{F} ( m h )} \vert \zeta ( {\tau ( m h )} )} + \sum_{m={{a_{1}}/h} +1}^{{{a_{2}}/h}} { \vert {{F} ( m h )} \vert \zeta ( {\tau ( m h )} )}}}{ {\sum_{m={{a_{1}}/h}}^{{{a_{2}}/h-1}} { \vert {{F} ( m h )} \vert } } + \sum_{m={{a_{1}}/h}+1}^{{{a_{2}}/h}} { \vert {{F} ( m h )} \vert } } \\ &{} - \zeta \biggl( \frac{{\sum_{m={{a_{1}}/h}}^{{{a_{2}}/h-1}} { \vert {{F} ( m h )} \vert \tau ( m h ) } } +{\sum_{m={{a_{1}}/h+1}}^{{{a_{2}}/h}} { \vert {{F} ( m h )} \vert \tau ( m h ) } }}{{\sum_{m={{a_{1}}/h}}^{{{a_{2}}/h-1}} { \vert {{F} ( m h )} \vert }} +{\sum_{m={{a_{1}}/h}+1}^{{{a_{2}}/h}} { \vert {{F} ( m h )} \vert }}} \biggr); \end{aligned} $$

for \(\mathbb{T}=q^{\mathbb{N}_{0}}\), where \(q>1\), identity (11) holds with the Jensen functional

$$ \begin{aligned} J ( \zeta ) ={}& \frac{{\sum_{m=\log _{q}(a_{1})}^{\log _{q}(a_{2})-1} {q}^{m+1}{ \vert {{F} ( q^{m} )} \vert \zeta ( {\tau ( q^{m} )} )} } +{\sum_{m=\log _{q}(a_{1})+1}^{\log _{q}({a_{2}})} {q}^{m-1}{ \vert {{F} ( q^{m} )} \vert \zeta ( {\tau ( q^{m} )} )} }}{ {\sum_{m=\log _{q}(a_{1})}^{\log _{q}(a_{2})-1} {q}^{m+1}{ \vert {{F} ( q^{m} )} \vert } } +{\sum_{m=\log _{q}(a_{1})+1}^{\log _{q}({a_{2}})} {q}^{m-1}{ \vert {{F} ( q^{m} )} \vert } }} \\ & {}- \zeta \biggl( { \frac{{\sum_{m=\log _{q}(a_{1})}^{\log _{q}(a_{2})-1} {q}^{m+1}{ \vert {{F} ( q^{m} )} \vert \tau ( q^{m} ) } } +{\sum_{m=\log _{q}(a_{1})+1}^{\log _{q}({a_{2}})} {q}^{m-1}{ \vert {{F} ( q^{m} )} \vert \tau ( q^{m} ) } }}{{\sum_{m=\log _{q}(a_{1})}^{\log _{q}(a_{2})-1} {q}^{m+1}{ \vert {{F} ( q^{m} )} \vert } } +{\sum_{m=\log _{q}(a_{1})+1}^{\log _{q}({a_{2}})} {q}^{m-1}{ \vert {{F} ( q^{m} )} \vert } }}} \biggr). \end{aligned} $$
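The \(\mathbb{T}=h\mathbb{Z}\) case of this remark can be sketched as follows (the data F, τ, the step h, and the test functions are arbitrary choices); as in Remark 1.1, the value is nonnegative for convex ζ and approximately zero for affine ζ.

```python
import numpy as np

h, a1, a2 = 0.5, 0.0, 3.0
M, N = round(a1 / h), round(a2 / h)
m_delta = range(M, N)            # m = a1/h, ..., a2/h - 1
m_nabla = range(M + 1, N + 1)    # m = a1/h + 1, ..., a2/h
F = lambda s: 1.0 + s ** 2       # weight (arbitrary)
tau = lambda s: np.sin(s)

def J(zeta):
    """Jensen functional of Remark 3.2 for T = hZ."""
    pts = [m * h for m in m_delta] + [m * h for m in m_nabla]
    w = [abs(F(p)) for p in pts]
    x = [tau(p) for p in pts]
    mean = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
    return sum(wi * zeta(xi) for wi, xi in zip(w, x)) / sum(w) - zeta(mean)

print(J(np.exp))                 # convex zeta: expected >= 0
print(J(lambda z: 3 * z + 1))    # affine zeta: expected approx 0
```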

Theorem 3.3

Under the assumptions of Theorem 3.1, if \(\zeta :[\mu ,\nu ]\rightarrow \mathbb{R}\) is n-convex and

$$ \int _{\mu }^{\nu }J \bigl( G ( {{s},t} ) \bigr) {G_{H,n - 2}} ( {t,x} )\,dt \ge 0,\quad x\in [\mu ,\nu ], $$

then

$$ J ( \zeta ) \geq \int _{\mu }^{\nu }J \bigl( {G ( {{s},t} )} \bigr) \sum _{v = 1}^{r} \sum _{u = 0}^{{k_{v}}} {H_{uv}} ( t ){\zeta ^{ ( {u + 2} )}} ( {{a_{v}}} )\,dt. $$
(14)

Proof

As ζ is n-convex and \(\zeta \in C^{n}[\mu ,\nu ]\), we have \(\zeta ^{(n)}({s})\geq 0\) for all \({s}\in [\mu ,\nu ]\), hence

$$ \int _{\mu }^{\nu } \int _{\mu }^{\nu }J \bigl( G ( {{s},t} ) \bigr) {G_{H,n - 2}} ( {t,x} )\,dt {\zeta ^{ ( n )}} ( x )\,dx \ge 0. $$
(15)

Substituting (15) into (11), we have

$$ J ( \zeta ) \geq \int _{\mu }^{\nu }J \bigl( {G ( {{s},t} )} \bigr) \sum _{v = 1}^{r} \sum _{u = 0}^{{k_{v}}} {H_{uv}} ( t ){\zeta ^{ ( {u + 2} )}} ( {{a_{v}}} )\,dt, $$

as required. □

Theorem 3.4

Under the assumptions of Theorem 3.1, suppose that \(\zeta :[\mu ,\nu ]\rightarrow \mathbb{R}\) is n-convex and that

$$ J ( \zeta ) \ge \int _{\mu }^{\nu }{J \bigl( {G ( {{s},t} )} \bigr)B ( t )}\,dt, $$
(16)

where

$$ B ( \cdot ) = \sum_{v = 1}^{r} {\sum_{u = 0}^{{k_{v}}} {{\zeta ^{ ( {u + 2} )}} ( {{a_{v}}} ){H_{uv}} ( \cdot )} } $$
(17)

is nonnegative, then

$$ J(\zeta )\geq 0. $$

Proof

Substituting (17) into (16) and using the fact that \(J ( {G ( {{s},t} )} )\ge 0\) (by Remark 1.1, since \(G(\cdot ,t)\) is continuous and convex), we get

$$ J ( \zeta ) \ge \int _{\mu }^{\nu }J \bigl( {G ( {{s},t} )} \bigr) \sum _{v = 1}^{r} \sum _{u = 0}^{{k_{v}}} {H_{uv}} ( t ){\zeta ^{ ( {u + 2} )}} ( {{a_{v}}} )\,dt\ge 0. $$

 □

The use of the type \((\mathfrak{z},n-\mathfrak{z})\) conditions yields the following result.

Corollary 3.5

Suppose that \(\xi _{u}\), \(\eta _{u}\) are defined as in (7) and (8), respectively, and that \(n-\mathfrak{z}\) is even. Then, for every n-convex function \(\zeta :[\mu ,\nu ]\rightarrow \mathbb{R}\) satisfying

$$ J ( \zeta ) \ge \int _{\mu }^{\nu }{J \bigl( {G ( {{s},t} )} \bigr)} B(t)\,dt, $$
(18)

where

$$ B ( t ) = \Biggl( {\sum_{u = 0}^{\mathfrak{z} - 1} {{\xi _{u}} ( t ){\zeta ^{ ( {u + 2} )}} ( \mu )} + \sum _{u = 0}^{n - \mathfrak{z} - 1} {{ \eta _{u}} ( t ){\zeta ^{ ( {u + 2} )}} ( \nu )} } \Biggr) $$
(19)

is nonnegative, we have

$$ J(\zeta )\geq 0. $$

Proof

Substituting (19) into (18), we get

$$ J ( \zeta ) \ge \int _{\mu }^{\nu }{J \bigl( {G ( {{s},t} )} \bigr)} \Biggl( {\sum_{u = 0}^{\mathfrak{z} - 1} {{ \xi _{u}} ( t ) {\zeta ^{ ( {u + 2} )}} ( \mu )} + \sum _{u = 0}^{n - \mathfrak{z} - 1} {{\eta _{u}} ( t ){ \zeta ^{ ( {u + 2} )}} ( \nu )} } \Biggr)\,dt\ge 0. $$

 □

Application of two-point Taylor conditions gives the following result.

Corollary 3.6

Let \(n=2\mathfrak{z}\) and let \(\zeta :[\mu ,\nu ]\rightarrow \mathbb{R}\) be n-convex. If

$$\begin{aligned} J(\zeta ) \geq & \int _{\mu }^{\nu }J\bigl(G({s},t)\bigr)B(t)\,dt, \end{aligned}$$
(20)

where

$$ B ( t ) = \sum_{u = 0}^{\mathfrak{z} - 1} \sum_{k = 0}^{\mathfrak{z} - 1 - u} \binom{\mathfrak{z} + k - 1}{k} \biggl[ \frac{ ( {t - \mu } )^{u}}{u!} \biggl( \frac{t - \nu }{\mu - \nu } \biggr)^{\mathfrak{z}} \biggl( \frac{t - \mu }{\nu - \mu } \biggr)^{k} {\zeta ^{ ( {u + 2} )}} ( \mu ) + \frac{ ( {t - \nu } )^{u}}{u!} \biggl( \frac{t - \mu }{\nu - \mu } \biggr)^{\mathfrak{z}} \biggl( \frac{t - \nu }{\mu - \nu } \biggr)^{k} {\zeta ^{ ( {u + 2} )}} ( \nu ) \biggr] $$
(21)

is nonnegative, then

$$ J(\zeta )\geq 0. $$

Proof

Substituting (21) into (20), we get

$$ J ( \zeta ) \ge \int _{\mu }^{\nu } J \bigl( {G ( {{s},t} )} \bigr) \sum_{u = 0}^{\mathfrak{z} - 1} \sum_{k = 0}^{\mathfrak{z} - 1 - u} \binom{\mathfrak{z} + k - 1}{k} \biggl[ \frac{ ( {t - \mu } )^{u}}{u!} \biggl( \frac{t - \nu }{\mu - \nu } \biggr)^{\mathfrak{z}} \biggl( \frac{t - \mu }{\nu - \mu } \biggr)^{k} {\zeta ^{ ( {u + 2} )}} ( \mu ) + \frac{ ( {t - \nu } )^{u}}{u!} \biggl( \frac{t - \mu }{\nu - \mu } \biggr)^{\mathfrak{z}} \biggl( \frac{t - \nu }{\mu - \nu } \biggr)^{k} {\zeta ^{ ( {u + 2} )}} ( \nu ) \biggr] \,dt. $$

As the right-hand side is nonnegative, we have

$$ J(\zeta )\geq 0. $$

 □

Remark 3.7

As mentioned in Remark 3.2, we can deduce special cases for the results of this section for different time scales.

4 Bounds for identities associated to the extension of Jensen’s functional

Here we use the Čebyšev functional and Grüss-type inequalities to present a few important results. The Čebyšev functional is given by

$$ {\Upsilon ( {g_{1},g_{2}} )} = \frac{1}{{{\nu }-{\mu } }} \int _{\mu }^{\nu } {g_{1} ( {s} ) g_{2} ( {s} )\,d{s} - \frac{1}{{{\nu }-{\mu }}}} \int _{\mu }^{\nu } {g_{1} ( {s} ) \,d{s} \cdot \frac{1}{{{\nu }-{\mu }}}} \int _{\mu }^{\nu } {g_{2} ( {s} )\,d{s}}. $$

The next two theorems are given in [4].

Theorem 4.1

If \(g_{1},g_{2}:[\mu ,\nu ]\rightarrow \mathbb{R}\) are functions such that \(g_{1}\) is Lebesgue integrable and \(g_{2}\) is absolutely continuous, along with \((\cdot -\mu )(\nu -\cdot )[{g_{2}'}]^{2} \in L[\mu ,\nu ]\), then we have

$$ \bigl\vert {\Upsilon ( {g_{1},g_{2}} )} \bigr\vert \le \frac{1}{{\sqrt{2} }}{ \bigl[{\Upsilon ( {g_{1},g_{1}} )} \bigr]^{\frac{1}{2}}} \frac{1}{{\sqrt{\nu - \mu } }}{ \biggl( { \int _{\mu } ^{\nu } { ( {{s} - \mu } ) ( { \nu - {s}} ){{ \bigl[ {{g_{2}}' ( {s} )} \bigr]}^{2}}\,d{s}} } \biggr)^{\frac{1}{2}}}, $$

where \(\frac{1}{{\sqrt{2} }}\) is the best possible constant.
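For concreteness, the Čebyšev functional and the bound of Theorem 4.1 can be evaluated numerically; in the sketch below the interval \([0,1]\) and the functions \(g_{1}=\sin \), \(g_{2}=\exp \) are arbitrary choices.

```python
import numpy as np
from scipy.integrate import quad

mu_, nu_ = 0.0, 1.0

def upsilon(g1, g2):
    """Cebysev functional of g1, g2 on [mu_, nu_]."""
    L = nu_ - mu_
    mean = lambda g: quad(g, mu_, nu_)[0] / L
    return mean(lambda s: g1(s) * g2(s)) - mean(g1) * mean(g2)

g1, g2, dg2 = np.sin, np.exp, np.exp     # dg2 = g2'

lhs = abs(upsilon(g1, g2))
rhs = (1 / np.sqrt(2)) * np.sqrt(upsilon(g1, g1)) / np.sqrt(nu_ - mu_) \
    * np.sqrt(quad(lambda s: (s - mu_) * (nu_ - s) * dg2(s) ** 2, mu_, nu_)[0])
print(lhs, rhs, lhs <= rhs)              # expected: lhs <= rhs (True)
```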

Theorem 4.2

If \(g_{1},g_{2}:[\mu ,\nu ]\rightarrow \mathbb{R}\) are functions such that \(g_{1}\) is absolutely continuous with \(g_{1}' \in {L_{\infty }} [ {\mu ,\nu } ]\) and \(g_{2}\) is monotonically nondecreasing on \([\mu ,\nu ]\), then we have

$$ \bigl\vert {\Upsilon ( {g_{1},g_{2}} )} \bigr\vert \le \frac{1}{{2 ( {\nu - \mu } )}}{ \bigl\Vert {g_{1}'} \bigr\Vert _{\infty }} \int _{\mu } ^{\nu } { ( {{s} - \mu } ) ( {\nu - {s}} )\,d{g_{2}} ( {s} ),} $$

where \(\frac{1}{{ 2 }}\) is the best possible constant.

Let

$$ \tilde{\psi } ( x ) = \int _{\mu }^{\nu }{J \bigl( {G ( {{s},t} )} \bigr) {G_{H,n - 2}} ( {t,x} )\,dt}. $$
(22)

Then the Čebyšev functional becomes

$$ \Upsilon ( {\tilde{\psi },\tilde{\psi }} ) = \frac{1}{{\nu - \mu }} \int _{\mu }^{\nu }{{\tilde{\psi }^{2}} ( x )\,dx - {{ \biggl( {\frac{1}{{\nu - \mu }} \int _{\mu }^{\nu }{\tilde{\psi } ( x )\,dx} } \biggr)}^{2}}.} $$
(23)

Theorem 4.3

Let \(\zeta :[\mu ,\nu ]\rightarrow \mathbb{R}\) be such that \(\zeta \in {C^{n}}[\mu ,\nu ]\) for some \(n\in \mathbb{N}\) with \((\cdot -\mu ) (\nu -\cdot )[\zeta ^{(n+1)}]^{2} \in L[\mu ,\nu ]\). Suppose \(G_{H,n}\), ψ̃, and ϒ are defined as in (6), (22), and (23), respectively. Then we have

$$ \begin{aligned} J ( \zeta ) ={}& \int _{\mu }^{\nu }{J \bigl( {G ( {{s},t} )} \bigr)} \sum_{v = 1}^{r} {\sum _{u = 0}^{{k_{v}}} {{\zeta ^{ ( {u + 2} )}} ( {{a_{v}}} ){H_{uv}} ( t )\,dt} } \\ &{} + \frac{{{\zeta ^{ ( {n - 1} )}} ( \nu ) - {\zeta ^{ ( {n - 1} )}} ( \mu )}}{{\nu - \mu }} \int _{\mu }^{\nu }{ \tilde{\psi } ( x )\,dx + {{{R} }_{n}} ( {\mu ,\nu ; \zeta } )}, \end{aligned} $$
(24)

where the remainder \({{{R} }_{n}}(\mu ,\nu ;\zeta )\) satisfies the estimate

$$ \bigl\vert {{{R} _{n}} ( {\mu ,\nu ;\zeta } )} \bigr\vert \le { \bigl[ {\Upsilon ( {\tilde{\psi },\tilde{\psi }} )} \bigr]^{\frac{1}{2}}} \sqrt{\frac{{\nu - \mu }}{2}} \biggl\vert { \int _{\mu }^{\nu }{ ( {x - \mu } ) ( {\nu - x} ) {{ \bigl[ {{\zeta ^{ ( {n + 1} )}} ( x )} \bigr]}^{2}}\,dx} } \biggr\vert ^{ \frac{1}{2}}. $$
(25)

Proof

We use Theorem 4.1 with \(g_{1}\rightarrow \tilde{\psi }\) and \(g_{2}\rightarrow \zeta ^{(n)}\) to obtain

$$ \begin{aligned} & \biggl\vert {\frac{1}{{\nu - \mu }} \int _{\mu }^{\nu }{ \tilde{\psi } ( x ){\zeta ^{ ( n )}} ( x )\,dx - \frac{1}{{\nu - \mu }}} \int _{\mu }^{\nu }{\tilde{\psi } ( x )\,dx \cdot \frac{1}{{\nu - \mu }}} \int _{\mu }^{\nu }{{ \zeta ^{ ( n )}} ( x )\,dx} } \biggr\vert \\ &\quad \le \frac{1}{{\sqrt{2} }}{ \bigl[ {\Upsilon ( { \tilde{\psi },\tilde{\psi }} )} \bigr]^{\frac{1}{2}}} \frac{1}{{\sqrt{\nu - \mu } }}{ \biggl\vert { \int _{\mu }^{\nu }{ ( {x - \mu } ) ( {\nu - x} ){{ \bigl[ {{ \zeta ^{ ( {n + 1} )}} ( x )} \bigr]}^{2}}\,dx} } \biggr\vert ^{\frac{1}{2}}}. \end{aligned} $$

Moreover,

$$ \frac{1}{{\nu - \mu }} \int _{\mu }^{\nu }{\tilde{\psi } ( x )\,dx} \cdot \frac{1}{{\nu - \mu }} \int _{\mu }^{\nu }{{\zeta ^{ ( n )}} ( x )\,dx} = \frac{1}{{{{ ( {\nu - \mu } )}^{2}}}} \bigl( {{\zeta ^{ ({n - 1} )}} ( \nu ) - {\zeta ^{ ( {n - 1} )}} ( \mu )} \bigr) \int _{\mu }^{\nu }{ \tilde{\psi } ( x )\,dx}. $$

Hence

$$ \int _{\mu }^{\nu }{\tilde{\psi } ( x ){\zeta ^{ ( n )}} ( x )\,dx = \frac{{{\zeta ^{ ( {n - 1} )}} ( \nu ) - {\zeta ^{ ( {n - 1} )}} ( \mu )}}{{\nu - \mu }}} \int _{\mu }^{\nu }{\tilde{\psi } ( x )\,dx + {{{R} }_{n}} ( {\mu ,\nu ;\zeta } )}, $$

where the remainder \({{{R} }_{n}}(\mu ,\nu ;\zeta )\) satisfies the estimate (25). Now from identity (11) we obtain (24). □

The Grüss-type inequality given below can be obtained by using Theorem 4.2.

Theorem 4.4

Assume that \(\zeta :[\mu ,\nu ]\rightarrow \mathbb{R}\) is such that \(\zeta ^{(n)}\) is absolutely continuous and \(\zeta ^{(n+1)}\geq 0\) on \([\mu ,\nu ]\). Suppose ψ̃ and ϒ are defined as in (22) and (23), respectively. Then we have (24), and the remainder \({R} (\mu ,\nu ;\zeta )\) satisfies the bound

$$ \bigl\vert {{R} ( {\mu ,\nu ;\zeta } )} \bigr\vert \le ({ \nu - \mu } ){ \bigl\Vert {\tilde{\psi }'} \bigr\Vert _{\infty }} \biggl[ { \frac{{{\zeta ^{ ( {n - 1} )}} ( \nu ) + {\zeta ^{ ( {n - 1} )}} ( \mu )}}{{2 }} - \frac{{{\zeta ^{ ( {n - 2} )}} ( \nu ) - {\zeta ^{ ( {n - 2} )}} ( \mu )}}{{\nu - \mu }}} \biggr]. $$
(26)

Proof

Applying Theorem 4.2, for \(g_{1}\rightarrow \tilde{\psi }\) and \(g_{2}\rightarrow \zeta ^{(n)}\), we have

$$ \begin{aligned}& \biggl\vert {\frac{1}{{\nu - \mu }} \int _{\mu }^{\nu }{ \tilde{\psi } ( x ) {\zeta ^{ ( n )}} ( x )\,dx - \frac{1}{{\nu - \mu }}} \int _{\mu }^{\nu }{\tilde{\psi } ( x )\,dx \cdot \frac{1}{{\nu - \mu }}} \int _{\mu }^{\nu }{{ \zeta ^{ ( n )}} ( x )\,dx} } \biggr\vert \\ &\quad \leq \frac{1}{{2 ( {\nu - \mu } )}}{ \bigl\Vert { \tilde{\psi }'} \bigr\Vert _{\infty }} \int _{\mu }^{\nu }{ ( {x - \mu } ) ( {\nu - x} ) {\zeta ^{ ( {n + 1} )}} ( x )\,dx}. \end{aligned} $$
(27)

Since

$$ \begin{aligned} &\int _{\mu }^{\nu }{(x - \mu ) ( {\nu - x} ) {\zeta ^{ ( {n + 1} )}} ( x )\,dx} \\ &\quad = \int _{\mu }^{\nu }{ \bigl[ {2x - ( {\mu + \nu } )} \bigr]{\zeta ^{ ( n )}} ( x )\,dx} \\ &\quad = ( {\nu - \mu } ) \bigl[ {{\zeta ^{ ( {n - 1} )}} ( \nu ) + {\zeta ^{ ( {n - 1} )}} ( \mu )} \bigr] - 2 \bigl[ {{\zeta ^{ ( {n - 2} )}} ( \nu ) - {\zeta ^{ ( {n - 2} )}} ( \mu )} \bigr], \end{aligned} $$

using (11) and (27), we get (26). □

Theorem 4.5

Let all the assumptions of Theorem 3.1 be satisfied. Suppose \((i,j)\) is a pair of conjugate exponents, that is, \(1\leq i,j\leq \infty \) and \(\frac{1}{i}+\frac{1}{j}=1\). Suppose \(|\zeta ^{(n)}|^{i}: [\mu ,\nu ]\rightarrow \mathbb{R}\) is Riemann integrable for some \(n\geq 2\). Then we have

$$\begin{aligned} &\Biggl\vert {J ( \zeta ) - \int _{\mu }^{\nu }{J \bigl({G ( {{s},t} )} \bigr) \sum _{v = 1}^{r} {\sum _{u = 0}^{{k_{v}}} {{\zeta ^{ ( {u + 2} )}} ( {{a_{v}}} ) {H_{uv}} ( t )\,dt} } } } \Biggr\vert \\ &\quad \le { \bigl\Vert {{\zeta ^{ ( n )}}} \bigr\Vert _{i}} { \biggl( { \int _{\mu }^{\nu }{{{ \biggl\vert { \int _{\mu }^{\nu }{J \bigl( {G ( {{s},t} )} \bigr) {G_{H,n - 2}} ( {t,x} )\,dt} } \biggr\vert }^{j}}\,dx} } \biggr)^{\frac{1}{j}}}. \end{aligned}$$
(28)

The constant on the right-hand side of (28) is sharp for \(1< i\le \infty \) and best possible for \(i=1\).

Proof

Set \(W(x)={\int _{\mu }^{\nu }{J ( {G ( {{s},t} )} ) {G_{H,n - 2}} ( {t,x} )\,dt} }\). Hölder’s inequality and identity (11) give us

$$ \begin{aligned} \Biggl\vert {J ( \zeta ) - \int _{\mu }^{\nu }{J \bigl({G ( {{s},t} )} \bigr) \sum _{v = 1}^{r} {\sum _{u = 0}^{{k_{v}}} {{\zeta ^{ ( {u + 2} )}} ( {{a_{v}}} ) {H_{uv}} ( t )\,dt} } } } \Biggr\vert &= \biggl\vert { \int _{\mu }^{\nu }{W ( x ){ \zeta ^{ ( n )}} ( x )\,dx} } \biggr\vert \\ &\le { \bigl\Vert {{\zeta ^{ ( n )}}} \bigr\Vert _{i}} { \biggl( { \int _{\mu }^{\nu }{{{ \bigl\vert {W ( x )} \bigr\vert }^{j}}\,dx} } \biggr)^{\frac{1}{j}}}. \end{aligned} $$

For \(i\in (1,\infty )\), let \(\zeta ^{(n)}(x)=\operatorname{sgn} W(x)|W(x)|^{\frac{1}{i-1}}\), and for \(i=\infty \) let \(\zeta ^{(n)}(x)=\operatorname{sgn} W(x)\); in both cases equality is attained in the last inequality, which proves the sharpness of the constant for \(1< i\le \infty \). We now prove that, for \(i=1\),

$$ { \int _{\mu }^{\nu }{W ( x ) {\zeta ^{ ( n )}} ( x )\,dx} } \leq \mathop{\max } _{x \in [ {\mu ,\nu } ]} \bigl\vert {W ( x )} \bigr\vert \biggl( { \int _{\mu }^{\nu }{ \bigl\vert {{\zeta ^{ ( n )}} ( x )} \bigr\vert dx} } \biggr) $$
(29)

cannot be improved. Let \(|W(x)|\) attain its maximum at \(d\in [\mu ,\nu ]\). Suppose first that \(W(d)>0\). For sufficiently small \(\delta >0\), we define \(\zeta _{\delta }(x)\) by

$$ {\zeta _{\delta }} ( x ): = \textstyle\begin{cases} 0, & \mu \le x \le {d}, \\ \frac{1}{{\delta n!}}{ ( {x - {d}} )^{n}},& {d} \le x \le {d} + \delta , \\ \frac{1}{{n!}}{ ( {x - {d}} )^{n - 1}},& {d} + \delta \le x \le \nu . \end{cases} $$

Then, for δ small enough,

$$ \biggl\vert { \int _{\mu }^{\nu }{W ( x ){\zeta ^{ ( n )}} ( x )\,dx} } \biggr\vert = \biggl\vert { \int _{{d}}^{{d} + \delta } {W ( x )\frac{1}{\delta }\,dx} } \biggr\vert = \frac{1}{\delta } \int _{{d}}^{{d} + \delta } {W ( x )\,dx}. $$

Now from inequality (29) we have

$$ \frac{1}{\delta } \int _{d}^{d + \delta } W ( x )\,dx \le W ( d ) \int _{d}^{d + \delta } \frac{1}{\delta }\,dx = W ( d ). $$

Since \(\lim_{\delta \to 0} \frac{1}{\delta } \int _{d}^{d + \delta } W ( x )\,dx = W ( d )\), the constant in (29) cannot be improved. For \(W(d)<0\), we define \(\zeta _{\delta }(x)\) as

$$ {\zeta _{\delta }} ( x ): = \textstyle\begin{cases} \frac{1}{{n!}}{ ( {x - {d} - \delta } )^{n - 1}},& \mu \le x \le {d}, \\ - \frac{1}{{\delta n!}}{ ( {x - {d} - \delta } )^{n}}, & {d} \le x \le {d} + \delta , \\ 0,& {d} + \delta \le x \le \nu . \end{cases} $$

The rest of the proof is the same as above. □

5 Conclusion

In the present article, Jensen’s functional for the diamond integral (3) is generalized to n-convex functions using Green’s function and the Hermite interpolating polynomial. Different types of Hermite conditions are utilized to derive the corresponding refinements of the functional. As applications, bounds for the quantities associated with the constructed functional are also discussed. Moreover, by defining the functional as the difference of the right- and left-hand sides of the extended inequality (14), it is possible to study n-exponential convexity, exponential convexity, and applications to Stolarsky-type means, as discussed by Aras-Gazić et al. in [2, Sects. 5, 6]. This article extends the results of [5] to time scales.