1 Introduction

There is an important area in the theory of inequalities which involves two kinds of special inequalities. One kind is based on functions with bounded derivatives, or inequalities of Ostrowski type, which have been applied successfully in probability theory, mathematical statistics, information theory, numerical integration, and the theory of integral operators; a chapter of [1] is devoted to inequalities of this kind. The other kind concerns inequalities for the moments of random variables; see [2–8]. Using Ostrowski type inequalities of this kind, one can obtain various tight bounds involving the moments of random variables defined on a finite interval. Numerous works on this topic are available in the literature.

In this paper, we give an inequality for covariance involving functions with bounded derivatives. As applications of the inequality, we obtain some new inequalities similar to the Čebyšev integral inequality.

We assume throughout the paper that ξ is a random variable having the cumulative distribution function F. By Eξ we denote the expectation of ξ, defined by

$$ \mathrm{E}\xi= \int_{-\infty}^{\infty}t\,dF(t), $$
(1.1)

by Dξ the variance of ξ defined by

$$ \mathrm{D}\xi=\mathrm{E}(\xi-\mathrm{E}\xi)^{2}, $$
(1.2)

and by \(\operatorname {Cov}(\xi,\eta)\) the covariance of two random variables ξ, η defined by

$$ \operatorname {Cov}(\xi,\eta)=\mathrm{E}(\xi-\mathrm{E}\xi) (\eta-\mathrm{E}\eta). $$
(1.3)

We often use the following formula to compute \(\operatorname {Cov}(\xi,\eta)\):

$$ \operatorname {Cov}(\xi,\eta)=\mathrm{E}(\xi\eta)-\mathrm{E}\xi \mathrm{E}\eta. $$
(1.4)
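
For completeness, formula (1.4) follows from (1.3) by expanding the product and using the linearity of expectation:

$$ \mathrm{E}(\xi-\mathrm{E}\xi) (\eta-\mathrm{E}\eta)=\mathrm{E}(\xi\eta)-\mathrm{E}\xi \mathrm{E}\eta-\mathrm{E}\eta \mathrm{E}\xi+\mathrm{E}\xi \mathrm{E}\eta=\mathrm{E}(\xi\eta)-\mathrm{E}\xi \mathrm{E}\eta. $$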

2 A new random inequality

This paper gives the following new inequality for covariance involving functions with bounded derivatives.

Theorem 2.1

Assume that the functions \(f,g:[a,b]\rightarrow R\) are continuous on \([a,b]\) and differentiable in \((a,b)\), with bounded derivatives \(f',g':(a,b)\rightarrow R\), and let ξ be a random variable taking values in \([a,b]\) with finite expected value Eξ and finite variance Dξ. Then one has

$$ \bigl\vert \operatorname {Cov}\bigl(f(\xi),g(\xi)\bigr)\bigr\vert \leq 2 \bigl\Vert f'\bigr\Vert _{\infty}\bigl\Vert g' \bigr\Vert _{\infty}\mathrm{D}\xi, $$
(2.1)

where a is a real number or −∞, b is a real number or +∞, and

$$ \bigl\Vert f'\bigr\Vert _{\infty}=\sup _{t\in (a,b)}\bigl\vert f'(t)\bigr\vert < \infty, \qquad \bigl\Vert g'\bigr\Vert _{\infty}=\sup_{t\in (a,b)} \bigl\vert g'(t)\bigr\vert < \infty. $$
(2.2)

Proof

Under the conditions of the theorem, \(\vert f(x)\vert \leq \vert f(x_{0})\vert +\Vert f'\Vert _{\infty}\vert x-x_{0}\vert \) for any fixed \(x_{0}\in (a,b)\), so the boundedness of f′ and the finiteness of Eξ guarantee that the expected value \(\mathrm{E}f(\xi)\) exists and is finite. Applying the Lagrange mean value theorem, one gets, for any \(x\in [a,b]\),

$$\begin{aligned} \bigl[f(x)-\mathrm{E}f(\xi)\bigr]^{2} =& \mathrm{E}^{2}\bigl[f(x)-f(\xi)\bigr] \\ =&\mathrm{E}^{2}\bigl[f'\bigl(\xi+\theta(x- \xi)\bigr) (x-\xi)\bigr] \\ \leq&\bigl\Vert f'\bigr\Vert _{\infty}^{2} \mathrm{E}^{2}\vert x-\xi \vert \leq\bigl\Vert f'\bigr\Vert _{\infty}^{2}\mathrm{E}(x-\xi)^{2} \\ =&\bigl\Vert f'\bigr\Vert _{\infty}^{2} \bigl[(x-\mathrm{E}\xi)^{2}+\mathrm{D}\xi \bigr], \end{aligned}$$
(2.3)

where the parameter \(0\leq \theta \leq 1\) is not a constant but depends on x and ξ, and \(a\leq x\leq b\). Here \(\mathrm{E}^{2}[\,\cdot\,]\) stands for \((\mathrm{E}[\,\cdot\,])^{2}\), the second inequality in the third line follows from Jensen's inequality, \(\mathrm{E}^{2}\vert x-\xi \vert \leq \mathrm{E}(x-\xi)^{2}\), and the last line uses the identity \(\mathrm{E}(x-\xi)^{2}=(x-\mathrm{E}\xi)^{2}+\mathrm{D}\xi\). Letting \(x=\xi\) in inequality (2.3) and then taking expectations on both sides gives

$$\begin{aligned} \mathrm{E}\bigl[f(\xi)-\mathrm{E}f(\xi)\bigr]^{2}\leq \bigl\Vert f'\bigr\Vert _{\infty}^{2} \bigl[\mathrm{E}(\xi-\mathrm{E} \xi)^{2}+\mathrm{D}\xi \bigr] =2 \bigl\Vert f'\bigr\Vert _{\infty}^{2}\mathrm{D}\xi. \end{aligned}$$
(2.4)

That is,

$$\begin{aligned} \mathrm{D}f(\xi)\leq 2\bigl\Vert f'\bigr\Vert _{\infty}^{2}\mathrm{D}\xi. \end{aligned}$$
(2.5)

Similarly we have

$$\begin{aligned} \mathrm{D}g(\xi)\leq 2\bigl\Vert g'\bigr\Vert _{\infty}^{2}\mathrm{D}\xi. \end{aligned}$$
(2.6)

Consequently, by the Cauchy–Schwarz inequality,

$$ \bigl\vert \operatorname {Cov}\bigl(f(\xi),g(\xi)\bigr)\bigr\vert \leq \sqrt{ \mathrm{D}f(\xi)\mathrm{D}g(\xi)}\leq 2 \bigl\Vert f'\bigr\Vert _{\infty}\bigl\Vert g'\bigr\Vert _{\infty}\mathrm{D} \xi. $$
(2.7)

Thus, the inequality is derived. □
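
For example, taking \(f(x)=\sin x\) and \(g(x)=\cos x\) with \(a=-\infty\), \(b=+\infty\), we have \(\Vert f'\Vert _{\infty}=\Vert g'\Vert _{\infty}=1\), and (2.1) gives, for every random variable ξ with finite expected value and variance,

$$ \bigl\vert \operatorname {Cov}(\sin \xi,\cos \xi)\bigr\vert \leq 2\mathrm{D}\xi. $$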

3 Some applications

In this section, we discuss some applications of the inequality (2.1). In fact, if the random variable ξ in (2.1) has a specific distribution, we can derive a corresponding Čebyšev type inequality. First, we recall the famous Čebyšev integral inequality [9].

Let \(f,g:[a,b]\rightarrow R\) be continuous on \([a,b]\) and differentiable in \((a,b)\), with bounded derivatives \(f',g':(a,b)\rightarrow R\). Then

$$ \bigl\vert T(f,g)\bigr\vert \leq \frac{1}{12}(b-a)^{2} \bigl\Vert f'\bigr\Vert _{\infty}\bigl\Vert g'\bigr\Vert _{\infty}, $$
(3.1)

where

$$ T(f,g)=\frac{1}{b-a} \int_{a}^{b}f(x)g(x)\,dx-\frac{1}{b-a} \int_{a}^{b}f(x)\,dx\cdot \frac{1}{b-a} \int_{a}^{b}g(x)\,dx. $$
(3.2)

In 1935, Grüss [10] showed that

$$ \bigl\vert T(f,g)\bigr\vert \leq \frac{1}{4}(M-m) (N-n), $$
(3.3)

provided that M, m, N, n are real numbers satisfying \(-\infty< m\leq f(x)\leq M<+\infty\) and \(-\infty< n\leq g(x)\leq N<+\infty\) for all \(x\in [a,b]\). Moreover, the constant 1/4 is the best possible.

Over the years, the Čebyšev integral inequality has attracted the interest of many researchers, who have given new proofs and various extensions and refinements of it; see, e.g., [9, 11–16] and the references therein.

As a first application of the inequality (2.1), let ξ have the uniform distribution on \([a,b]\); then we obtain the following inequality.

Theorem 3.1

Let \(f,g:[a,b]\rightarrow R\) be continuous on \([a,b]\) and differentiable in \((a,b)\), with bounded derivatives \(f',g':(a,b)\rightarrow R\). Then

$$ \bigl\vert T(f,g)\bigr\vert \leq \frac{1}{6}(b-a)^{2} \bigl\Vert f'\bigr\Vert _{\infty}\bigl\Vert g'\bigr\Vert _{\infty}. $$
(3.4)

Proof

Let ξ be a random variable with the uniform distribution \(U[a,b]\), so that it has the probability density function

$$ \varphi(x)=\textstyle\begin{cases} \frac{1}{b-a}, & a\leq x\leq b,\\ 0, & \mbox{otherwise}. \end{cases} $$
(3.5)

Then one has

$$\begin{aligned} \operatorname {Cov}\bigl(f(\xi),g(\xi)\bigr) =&\mathrm{E}f(\xi)g(\xi)-\mathrm{E}f( \xi)\cdot \mathrm{E}g(\xi) \\ =&\frac{1}{b-a} \int_{a}^{b}f(x)g(x)\,dx-\frac{1}{b-a} \int_{a}^{b}f(x)\,dx\cdot \frac{1}{b-a} \int_{a}^{b}g(x)\,dx \\ =&T(f,g) \end{aligned}$$
(3.6)

and

$$ \mathrm{D}\xi=\frac{(b-a)^{2}}{12}. $$
(3.7)
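
Indeed, \(\mathrm{E}\xi=(a+b)/2\) and \(\mathrm{E}\xi^{2}=(a^{2}+ab+b^{2})/3\), so that

$$ \mathrm{D}\xi=\mathrm{E}\xi^{2}-(\mathrm{E}\xi)^{2}=\frac{a^{2}+ab+b^{2}}{3}-\frac{(a+b)^{2}}{4}=\frac{(b-a)^{2}}{12}. $$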

Substituting (3.6) and (3.7) into (2.1) yields (3.4). Thus, the proof is complete. □

If ξ has the Gamma distribution, we can easily obtain a new inequality from (2.1).

Theorem 3.2

Let \(f,g:[0,+\infty)\rightarrow R\) be continuous on \([0,+\infty)\) and differentiable in \((0,+\infty)\), with bounded derivatives \(f',g':(0,+\infty)\rightarrow R\). Then, for \(\alpha,\lambda>0\),

$$\begin{aligned} & \biggl\vert \int_{0}^{+\infty}f(t)g(t)t^{\alpha-1}e^{-\lambda t}\,dt \\ &\qquad {} -\frac{\lambda^{\alpha}}{\Gamma(\alpha)} \int_{0}^{+\infty}f(t)t^{\alpha-1}e^{-\lambda t}\,dt \int_{0}^{+\infty}g(t)t^{\alpha-1}e^{-\lambda t}\,dt \biggr\vert \\ &\quad \leq\frac{2\alpha\Gamma(\alpha)}{\lambda^{\alpha+2}}\bigl\Vert f'\bigr\Vert _{\infty}\bigl\Vert g'\bigr\Vert _{\infty}, \end{aligned}$$
(3.8)

where \(\Gamma(\alpha)\) is the well-known Gamma function, defined by

$$ \Gamma(\alpha)= \int_{0}^{+\infty}x^{\alpha-1}e^{-x}\,dx. $$
(3.9)

Proof

Let ξ be a random variable with the Gamma distribution \(\Gamma(\alpha,\lambda)\), whose probability density function is

$$ \varphi(x)=\textstyle\begin{cases} \frac{\lambda^{\alpha}}{\Gamma(\alpha)} x^{\alpha-1}e^{-\lambda x}, & x\geq 0,\\ 0, & x< 0, \end{cases} $$
(3.10)

where \(\alpha>0\) and \(\lambda>0\) are parameters. Then it is easy to obtain

$$\begin{aligned} \operatorname {Cov}\bigl(f(\xi),g(\xi)\bigr) =&\mathrm{E}f(\xi)g(\xi)-\mathrm{E}f( \xi)\cdot \mathrm{E}g(\xi) \\ =&\frac{\lambda^{\alpha}}{\Gamma(\alpha)} \int_{0}^{+\infty}f(t)g(t)t^{\alpha-1}e^{-\lambda t}\,dt \\ &{}-\frac{\lambda^{\alpha}}{\Gamma(\alpha)} \int_{0}^{+\infty}f(t)t^{\alpha-1}e^{-\lambda t}\,dt \cdot\frac{\lambda^{\alpha}}{\Gamma(\alpha)} \int_{0}^{+\infty}g(t)t^{\alpha-1}e^{-\lambda t}\,dt \end{aligned}$$
(3.11)

and

$$\begin{aligned} \mathrm{D}\xi=\frac{\alpha}{\lambda^{2}}. \end{aligned}$$
(3.12)
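
Indeed, \(\mathrm{E}\xi^{k}=\Gamma(\alpha+k)/(\lambda^{k}\Gamma(\alpha))\) for \(k=1,2\), so \(\mathrm{E}\xi=\alpha/\lambda\), \(\mathrm{E}\xi^{2}=\alpha(\alpha+1)/\lambda^{2}\), and

$$ \mathrm{D}\xi=\frac{\alpha(\alpha+1)}{\lambda^{2}}-\frac{\alpha^{2}}{\lambda^{2}}=\frac{\alpha}{\lambda^{2}}. $$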

Substituting (3.11) and (3.12) into (2.1) one gets (3.8). Thus, we complete the proof. □

If ξ has the Beta distribution, one has the following inequality from (2.1).

Theorem 3.3

Let \(f,g:[0,1]\rightarrow R\) be continuous on \([0,1]\) and differentiable in \((0,1)\), with bounded derivatives \(f',g':(0,1)\rightarrow R\). Then, for \(a,b>0\),

$$\begin{aligned} & \biggl\vert \int_{0}^{1}f(x)g(x)x^{a-1}(1-x)^{b-1} \,dx \\ &\qquad {}-\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} \int_{0}^{1}f(x)x^{a-1}(1-x)^{b-1} \,dx \int_{0}^{1}g(x)x^{a-1}(1-x)^{b-1} \,dx \biggr\vert \\ &\quad \leq \frac{2ab\Gamma(a)\Gamma(b)\Vert f'\Vert _{\infty} \Vert g'\Vert _{\infty}}{(a+b)^{2}(a+b+1)\Gamma(a+b)}. \end{aligned}$$
(3.13)

Proof

Let ξ be a random variable with the Beta distribution \(\beta(a,b)\), so that it has the probability density function

$$ \varphi(x)=\textstyle\begin{cases} \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} x^{a-1}(1-x)^{b-1}, & 0\leq x\leq1 ,\\ 0, & \mbox{otherwise}, \end{cases} $$
(3.14)

where \(a>0\) and \(b>0\) are parameters. Then one obtains

$$\begin{aligned} \operatorname {Cov}\bigl(f(\xi),g(\xi)\bigr) =&\mathrm{E}f(\xi)g(\xi)-\mathrm{E}f( \xi)\cdot \mathrm{E}g(\xi) \\ =&\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} \int_{0}^{1}f(x)g(x)x^{a-1}(1-x)^{b-1} \,dx \\ &{}-\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} \int_{0}^{1}f(x)x^{a-1}(1-x)^{b-1} \,dx \\ &{}\cdot\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} \int_{0}^{1}g(x)x^{a-1}(1-x)^{b-1} \,dx \end{aligned}$$
(3.15)

and

$$\begin{aligned} \mathrm{D}\xi=\frac{ab}{(a+b)^{2}(a+b+1)}. \end{aligned}$$
(3.16)
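
Indeed, \(\mathrm{E}\xi=a/(a+b)\) and \(\mathrm{E}\xi^{2}=a(a+1)/((a+b)(a+b+1))\), whence

$$ \mathrm{D}\xi=\frac{a(a+1)}{(a+b)(a+b+1)}-\frac{a^{2}}{(a+b)^{2}}=\frac{ab}{(a+b)^{2}(a+b+1)}. $$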

Substituting (3.15) and (3.16) into (2.1) one gets (3.13). Thus, we complete the proof. □

All of the above results deal with continuous random variables. Finally, we give two examples involving discrete random variables.

Theorem 3.4

Let \(f,g:[0,+\infty)\rightarrow R\) be continuous on \([0,+\infty)\) and differentiable in \((0,+\infty)\), with bounded derivatives \(f',g':(0,+\infty)\rightarrow R\). Then, for \(\lambda>0\),

$$\begin{aligned} \Biggl\vert \sum_{k=0}^{\infty}f(k)g(k) \frac{\lambda^{k}}{k!}-e^{-\lambda}\sum_{k=0}^{\infty}f(k) \frac{\lambda^{k}}{k!} \sum_{k=0}^{\infty}g(k) \frac{\lambda^{k}}{k!}\Biggr\vert \leq 2\lambda e^{\lambda}\bigl\Vert f'\bigr\Vert _{\infty}\bigl\Vert g'\bigr\Vert _{\infty}. \end{aligned}$$
(3.17)

Proof

Let ξ be a random variable with the Poisson distribution \(P(\lambda)\), so that it has the probability function

$$ P(\xi=k)=\frac{\lambda^{k}}{k!}e^{-\lambda},\quad k=0,1,2,\ldots, $$
(3.18)

where \(\lambda>0\) is a parameter. Then it is easy to obtain

$$\begin{aligned} \operatorname {Cov}\bigl(f(\xi),g(\xi)\bigr) =&\mathrm{E}f(\xi)g(\xi)-\mathrm{E}f( \xi)\cdot \mathrm{E}g(\xi) \\ =&\sum_{k=0}^{\infty}f(k)g(k) \frac{\lambda^{k}}{k!}e^{-\lambda}-\sum_{k=0}^{\infty}f(k) \frac{\lambda^{k}}{k!}e^{-\lambda} \sum_{k=0}^{\infty}g(k) \frac{\lambda^{k}}{k!}e^{-\lambda} \end{aligned}$$
(3.19)

and

$$\begin{aligned} \mathrm{D}\xi=\lambda. \end{aligned}$$
(3.20)
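
Indeed, \(\mathrm{E}\xi=\lambda\) and \(\mathrm{E}\xi(\xi-1)=\lambda^{2}\), so

$$ \mathrm{D}\xi=\mathrm{E}\xi(\xi-1)+\mathrm{E}\xi-(\mathrm{E}\xi)^{2}=\lambda^{2}+\lambda-\lambda^{2}=\lambda. $$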

Substituting (3.19) and (3.20) into (2.1) yields (3.17). The proof is complete. □

Theorem 3.5

Let \(f,g:[0,+\infty)\rightarrow R\) be continuous on \([0,+\infty)\) and differentiable in \((0,+\infty)\), with bounded derivatives \(f',g':(0,+\infty)\rightarrow R\). Then, for \(0< p<1\) and \(n=0,1,2,\ldots\),

$$\begin{aligned} & \Biggl\vert \sum_{k=0}^{n}f(k)g(k){n \choose k}p^{k}(1-p)^{n-k} \\ &\qquad {}-\sum_{k=0}^{n}f(k){n\choose k}p^{k}(1-p)^{n-k} \sum_{k=0}^{n}g(k){n \choose k}p^{k}(1-p)^{n-k} \Biggr\vert \\ &\quad \leq 2np(1-p)\bigl\Vert f'\bigr\Vert _{\infty} \bigl\Vert g'\bigr\Vert _{\infty}. \end{aligned}$$
(3.21)

Proof

Let ξ be a random variable with the binomial distribution \(B(n,p)\), so that it has the probability function

$$ P(\xi=k)={n\choose k}p^{k}(1-p)^{n-k},\quad k=0,1,2,\ldots,n, $$
(3.22)

where \(0< p<1\) is a parameter. Then it is easy to obtain

$$\begin{aligned} \operatorname {Cov}\bigl(f(\xi),g(\xi)\bigr) =&\mathrm{E}f(\xi)g(\xi)-\mathrm{E}f( \xi)\cdot \mathrm{E}g(\xi) \\ =&\sum_{k=0}^{n}f(k)g(k){n \choose k}p^{k}(1-p)^{n-k} \\ &{}-\sum_{k=0}^{n}f(k){n\choose k}p^{k}(1-p)^{n-k} \sum_{k=0}^{n}g(k){n \choose k}p^{k}(1-p)^{n-k} \end{aligned}$$
(3.23)

and

$$\begin{aligned} \mathrm{D}\xi=np(1-p). \end{aligned}$$
(3.24)
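
Indeed, \(\mathrm{E}\xi=np\) and \(\mathrm{E}\xi(\xi-1)=n(n-1)p^{2}\), so

$$ \mathrm{D}\xi=n(n-1)p^{2}+np-(np)^{2}=np(1-p). $$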

Substituting (3.23) and (3.24) into (2.1) one gets (3.21). Thus, we complete the proof. □