1 Introduction

In this section, we present some fundamental relations for the k-gamma and k-beta functions introduced in [17]. In Section 2, we introduce some k-analog properties of the mapping $l_{p,q}$, which will be useful in the subsequent sections. Sections 3 to 5 are devoted to applications of some integral inequalities, such as the Chebyshev, Grüss, and Ostrowski inequalities, to k-beta mappings. In the last section, we give applications of the said function to the probability distribution and the probability density function.

Recently, Diaz and Pariguan [1] introduced the generalized k-gamma function as

$\Gamma_k(x)=\lim_{n\to\infty}\frac{n!\,k^{n}(nk)^{\frac{x}{k}-1}}{(x)_{n,k}},\quad k>0,\ x\in\mathbb{C}\setminus k\mathbb{Z}^{-},$
(1)

and also gave the properties of the said function. $\Gamma_k$ is a one-parameter deformation of the classical gamma function such that $\Gamma_k\to\Gamma$ as $k\to1$. $\Gamma_k$ is based on the repeated appearance of the expression of the following form:

$\alpha(\alpha+k)(\alpha+2k)(\alpha+3k)\cdots\bigl(\alpha+(n-1)k\bigr).$
(2)

The function of the variable $\alpha$ given by (2), denoted by $(\alpha)_{n,k}$, is called the Pochhammer k-symbol. We obtain the usual Pochhammer symbol $(\alpha)_n$ by taking $k=1$. The definition given in (1) generalizes $\Gamma(x)$, and the integral form of $\Gamma_k$ is given by

$\Gamma_k(x)=\int_0^{\infty}t^{x-1}e^{-\frac{t^{k}}{k}}\,dt,\quad\operatorname{Re}(x)>0.$
(3)

From (3), we can easily show that

$\Gamma_k(x)=k^{\frac{x}{k}-1}\,\Gamma\!\left(\frac{x}{k}\right).$
(4)
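As a quick numerical check of relation (4) against the integral representation (3), one can compare a truncated midpoint-rule evaluation of (3) with the closed form $k^{x/k-1}\Gamma(x/k)$. This is only an illustrative sketch; the sample values of $x$ and $k$ below are arbitrary choices, not taken from the text.

```python
import math

def gamma_k_integral(x, k, upper=30.0, n=200_000):
    # Midpoint-rule approximation of (3): integral of t^{x-1} e^{-t^k/k} over (0, inf),
    # truncated at t = upper (the integrand decays super-exponentially)
    h = upper / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += t ** (x - 1) * math.exp(-(t ** k) / k)
    return total * h

def gamma_k_closed(x, k):
    # Relation (4): Gamma_k(x) = k^{x/k - 1} Gamma(x/k)
    return k ** (x / k - 1) * math.gamma(x / k)

for x, k in [(3.0, 2.0), (2.5, 1.5), (4.0, 3.0)]:
    print(f"x={x}, k={k}: integral={gamma_k_integral(x, k):.6f}, "
          f"closed form={gamma_k_closed(x, k):.6f}")
```

For instance, $\Gamma_2(3)=2^{1/2}\Gamma(3/2)=\sqrt{2\pi}/2\approx1.2533$, and both columns agree to several decimal places.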

The same authors defined the k-beta function as

$\beta_k(x,y)=\frac{\Gamma_k(x)\Gamma_k(y)}{\Gamma_k(x+y)},\quad\operatorname{Re}(x)>0,\ \operatorname{Re}(y)>0,$
(5)

and the integral form of β k (x,y) is

$\beta_k(x,y)=\frac{1}{k}\int_0^{1}t^{\frac{x}{k}-1}(1-t)^{\frac{y}{k}-1}\,dt.$
(6)

From the definition of β k (x,y) given in (5) and (6), we can easily prove that

$\beta_k(x,y)=\frac{1}{k}\,\beta\!\left(\frac{x}{k},\frac{y}{k}\right).$
(7)
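A similar sketch checks (6) and (7) numerically: the midpoint-rule value of the integral (6) should match $\Gamma_k(x)\Gamma_k(y)/\Gamma_k(x+y)$ computed through (4). The parameter values are again illustrative.

```python
import math

def beta_k(x, y, k):
    # beta_k(x, y) = Gamma_k(x) Gamma_k(y) / Gamma_k(x+y), with Gamma_k from (4)
    g = lambda z: k ** (z / k - 1) * math.gamma(z / k)
    return g(x) * g(y) / g(x + y)

def beta_k_integral(x, y, k, n=200_000):
    # Midpoint rule for (6): (1/k) * integral of t^{x/k-1} (1-t)^{y/k-1} over [0, 1]
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += t ** (x / k - 1) * (1 - t) ** (y / k - 1)
    return total * h / k

# Example: beta_2(4, 6) = (1/2) beta(2, 3) = 1/24
print(beta_k(4.0, 6.0, 2.0), beta_k_integral(4.0, 6.0, 2.0))
```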

Also, the researchers in [26] have worked on the generalized k-gamma and k-beta functions and discussed the following properties:

$\Gamma_k(x+k)=x\,\Gamma_k(x),$
(8)
$(x)_{n,k}=\frac{\Gamma_k(x+nk)}{\Gamma_k(x)},$
(9)
$\Gamma_k(k)=1,\quad k>0,$
(10)
$\Gamma_k(x)=a^{\frac{x}{k}}\int_0^{\infty}t^{x-1}e^{-\frac{at^{k}}{k}}\,dt,\quad a>0,$
(11)
$\Gamma_k(\alpha k)=k^{\alpha-1}\,\Gamma(\alpha),\quad k>0,\ \alpha\in\mathbb{R},$
(12)
$\Gamma_k(nk)=k^{n-1}(n-1)!,\quad k>0,\ n\in\mathbb{N},$
(13)
$\Gamma_k\!\left(\frac{(2n+1)k}{2}\right)=k^{\frac{2n-1}{2}}\,\frac{(2n)!\sqrt{\pi}}{2^{2n}\,n!},\quad k>0,\ n\in\mathbb{N}.$
(14)
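Properties (8), (13), and (14) can be spot-checked numerically through relation (4); the values of $k$, $x$, and $n$ below are arbitrary test choices.

```python
import math

def gamma_k(x, k):
    # Gamma_k via relation (4)
    return k ** (x / k - 1) * math.gamma(x / k)

k, x, n = 2.5, 1.7, 3
# (8): Gamma_k(x + k) = x Gamma_k(x)
print(gamma_k(x + k, k), x * gamma_k(x, k))
# (13): Gamma_k(nk) = k^{n-1} (n-1)!
print(gamma_k(n * k, k), k ** (n - 1) * math.factorial(n - 1))
# (14): Gamma_k((2n+1)k/2) = k^{(2n-1)/2} (2n)! sqrt(pi) / (2^{2n} n!)
lhs = gamma_k((2 * n + 1) * k / 2, k)
rhs = (k ** ((2 * n - 1) / 2) * math.factorial(2 * n) * math.sqrt(math.pi)
       / (2 ** (2 * n) * math.factorial(n)))
print(lhs, rhs)
```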

Using (5) and (7), we see that, for x,y>0 and k>0, the following properties of k-beta function are valid (see [2, 3] and [7]):

$\beta_k(x+k,y)=\frac{x}{x+y}\,\beta_k(x,y),$
(15)
$\beta_k(x,y+k)=\frac{y}{x+y}\,\beta_k(x,y),$
(16)
$\beta_k(xk,yk)=\frac{1}{k}\,\beta(x,y),$
(17)
$\beta_k(nk,nk)=\frac{[(n-1)!]^{2}}{k\,(2n-1)!},\quad n\in\mathbb{N},$
(18)
$\beta_k(x,k)=\frac{1}{x},\qquad\beta_k(k,y)=\frac{1}{y}.$
(19)

Note that $\beta_k(x,y)\to\beta(x,y)$ as $k\to1$.
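The $\beta_k$ properties above can likewise be verified numerically; here (15), (18), and (19) are checked, with illustrative parameter values.

```python
import math

def beta_k(x, y, k):
    # beta_k(x, y) = Gamma_k(x) Gamma_k(y) / Gamma_k(x+y), with Gamma_k from (4)
    g = lambda z: k ** (z / k - 1) * math.gamma(z / k)
    return g(x) * g(y) / g(x + y)

k, x, y, n = 1.5, 2.0, 3.5, 4
# (15): beta_k(x+k, y) = x/(x+y) beta_k(x, y)
print(beta_k(x + k, y, k), x / (x + y) * beta_k(x, y, k))
# (18): beta_k(nk, nk) = [(n-1)!]^2 / (k (2n-1)!)
print(beta_k(n * k, n * k, k),
      math.factorial(n - 1) ** 2 / (k * math.factorial(2 * n - 1)))
# (19): beta_k(x, k) = 1/x
print(beta_k(x, k, k), 1 / x)
```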

For more details about the theory of k-special functions (the k-gamma function, the k-polygamma function, the k-beta function, the k-hypergeometric functions, solutions of k-hypergeometric differential equations, contiguous function relations, inequalities with applications, integral representations with applications involving k-gamma and k-beta functions, k-gamma and k-beta probability distributions, and so forth), see [8-15].

2 Main results: some k-analog properties of the mapping l p , q

For the applications of some integral inequalities involving k-gamma and k-beta functions, we first discuss some k-analog properties of the relevant mappings. For this purpose, consider the mapping $l_{p,q}:[0,1]\to\mathbb{R}$ defined by

$l_{p,q}(x)=x^{\frac{p}{k}}(1-x)^{\frac{q}{k}},\quad p,q,k>0,$
(20)

and differentiating the above equation gives

$l'_{p,q}(x)=\frac{1}{k}\,x^{\frac{p-k}{k}}(1-x)^{\frac{q-k}{k}}\bigl[p-(p+q)x\bigr].$
(21)

Here, we see that $l'_{p,q}(x)=0$ has the solution $x_0=\frac{p}{p+q}$ in the interval $(0,1)$. Also, $l'_{p,q}(x)>0$ on $(0,x_0)$ and $l'_{p,q}(x)<0$ on $(x_0,1)$. Thus, we conclude that $x_0$ is the maximum point in the interval $(0,1)$, and consequently we have

$m_{p,q}=\inf_{x\in[0,1]}l_{p,q}(x)=0$
(22)

and

$M_{p,q}=\sup_{x\in[0,1]}l_{p,q}(x)=l_{p,q}\!\left(\frac{p}{p+q}\right)=\frac{p^{\frac{p}{k}}\,q^{\frac{q}{k}}}{(p+q)^{\frac{p+q}{k}}}.$
(23)

Also, we have

$\|l_{p,q}\|_{\infty}=\frac{p^{\frac{p}{k}}\,q^{\frac{q}{k}}}{(p+q)^{\frac{p+q}{k}}},\quad p,q,k>0,$
(24)
$\|l_{p,q}\|_{1}=k\,\beta_k(p+k,q+k),\quad p,q,k>0,$
(25)

and

$\|l_{p,q}\|_{r}=\bigl[k\,\beta_k(pr+k,qr+k)\bigr]^{\frac{1}{r}},\quad p,q,k>0,\ r>1.$
(26)
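The extrema (22)-(23) and the norm formulas (24)-(25) can be sanity-checked on a grid; $p$, $q$, $k$ below are arbitrary sample values.

```python
import math

def beta_k(x, y, k):
    # beta_k via the Gamma_k ratio (5), with Gamma_k from (4)
    g = lambda z: k ** (z / k - 1) * math.gamma(z / k)
    return g(x) * g(y) / g(x + y)

p, q, k = 3.0, 5.0, 2.0
l = lambda x: x ** (p / k) * (1 - x) ** (q / k)   # the mapping (20)

# sup over a fine grid vs. the closed form (23)/(24)
grid_sup = max(l(i / 100_000) for i in range(100_001))
closed_sup = p ** (p / k) * q ** (q / k) / (p + q) ** ((p + q) / k)
print(grid_sup, closed_sup)

# 1-norm by the midpoint rule vs. (25): ||l_{p,q}||_1 = k beta_k(p+k, q+k)
n = 200_000
one_norm = sum(l((i + 0.5) / n) for i in range(n)) / n
print(one_norm, k * beta_k(p + k, q + k, k))
```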

Further, we observe that

$\bigl|l'_{p,q}(x)\bigr|\leq\frac{1}{k}\,x^{\frac{p-k}{k}}(1-x)^{\frac{q-k}{k}}\bigl|p-(p+q)x\bigr|\leq\frac{1}{k}\max\{p,q\}\,l_{p-k,q-k}(x),\quad p,q,k>0,\ x\in(0,1).$

Now, we have the estimations

$\|l'_{p,q}\|_{\infty}\leq\frac{1}{k}\max\{p,q\}\,\frac{(p-k)^{\frac{p-k}{k}}(q-k)^{\frac{q-k}{k}}}{(p+q-2k)^{\frac{p+q-2k}{k}}},\quad p,q>k>0,$
(27)
$\|l'_{p,q}\|_{r}\leq\max\{p,q\}\bigl[\beta_k\bigl(r(p-k)+k,\,r(q-k)+k\bigr)\bigr]^{\frac{1}{r}},\quad p,q>k>0,\ r>1,$
(28)

and

$\|l'_{p,q}\|_{1}\leq\max\{p,q\}\,\beta_k(p,q),\quad p,q,k>0.$
(29)

Again, the second derivative of the said mapping gives

$l''_{p,q}(x)=\bigl[l'_{p-k,q-k}(x)\bigr]\bigl[p-(p+q)x\bigr]-l_{p-k,q-k}(x)(p+q)=\frac{1}{k}\,l_{p-2k,q-2k}(x)\bigl[(p-k)-\bigl((p-k)+(q-k)\bigr)x\bigr]-l_{p-k,q-k}(x)(p+q)=\frac{1}{k}\,l_{p-2k,q-2k}(x)\bigl[(p+q)x^{2}-2(p+q-k)x+(p-k)\bigr].$

Now, consider the mapping g p , q :[0,1)R, defined by

$g_{p,q}(x)=(p+q)x^{2}-2(p+q-k)x+(p-k).$

Here, we have $g_{p,q}(0)=p-k$ and $g_{p,q}(1)=k-q$. If $p,q>k$, then $g_{p,q}$ has one root in the interval $(0,1)$ and one root in the interval $(1,\infty)$. Also, the quadratic function $f(x)=ax^{2}+bx+c$ has its vertex at $x=-\frac{b}{2a}$. So, the coordinates of the vertex are

$x_v=\frac{2(p+q-k)}{2(p+q)}=\frac{p+q-k}{p+q}<1$

and

$y_v=-\frac{q^{2}+pq-pk-qk+k^{2}}{p+q}=-\left(q-k+\frac{k^{2}}{p+q}\right).$

Consequently, we have

$\bigl|g_{p,q}(x)\bigr|\leq\max\bigl\{g_{p,q}(0),\,|y_v|\bigr\}=\max\left\{p-k,\,q-k+\frac{k^{2}}{p+q}\right\}=\max\left\{p,\,q+\frac{k^{2}}{p+q}\right\}-k,$

and then we get

$\bigl|l''_{p,q}(x)\bigr|\leq\left[\max\left\{p,\,q+\frac{k^{2}}{p+q}\right\}-k\right]l_{p-2k,q-2k}(x),\quad p,q>k,\ x\in(0,1).$
(30)

If p,q>2k, we have

$\|l''_{p,q}\|_{\infty}\leq\left[\max\left\{p,\,q+\frac{k^{2}}{p+q}\right\}-k\right]\frac{(p-2k)^{\frac{p-2k}{k}}(q-2k)^{\frac{q-2k}{k}}}{(p+q-4k)^{\frac{p+q-4k}{k}}}.$
(31)

From (30), if p,q>k, we get

$\|l''_{p,q}\|_{1}\leq\left[\max\left\{p,\,q+\frac{k^{2}}{p+q}\right\}-k\right]\beta_k(p-k,q-k)$
(32)

and if p,q>2k

$\|l''_{p,q}\|_{r}\leq\left[\max\left\{p,\,q+\frac{k^{2}}{p+q}\right\}-k\right]\bigl[\beta_k\bigl(r(p-2k)+k,\,r(q-2k)+k\bigr)\bigr]^{\frac{1}{r}}.$
(33)

Remark If $k=1$, we recover the properties of the mapping $l_{p,q}$ given in [16].

3 Chebyshev type inequalities involving k-beta and k-gamma functions

In this section, we prove some inequalities which involve k-gamma and k-beta functions by using some natural inequalities [17]. The following result is well known in the literature as Chebyshev’s integral inequality for synchronous (asynchronous) functions. Here, we use this result to prove some k-analog inequalities.

Lemma 3.1 Let $f,g,h:I\subseteq\mathbb{R}\to\mathbb{R}$ be such that $h(x)\geq0$ for all $x\in I$ and $h$, $hfg$, $hf$, and $hg$ are integrable on $I$. If $f$, $g$ are synchronous (asynchronous) on $I$, i.e.,

$\bigl(f(x)-f(y)\bigr)\bigl(g(x)-g(y)\bigr)\geq(\leq)\,0\quad\text{for all }x,y\in I,$
(34)

then we have the inequality (see [18, 19])

$\int_I h(x)\,dx\int_I h(x)f(x)g(x)\,dx\geq(\leq)\int_I h(x)f(x)\,dx\int_I h(x)g(x)\,dx.$
(35)

Lemma 3.1 can be proved by using Korkine’s identity [20],

$\int_I h(x)\,dx\int_I h(x)f(x)g(x)\,dx-\int_I h(x)f(x)\,dx\int_I h(x)g(x)\,dx=\frac{1}{2}\int_I\int_I h(x)h(y)\bigl(f(x)-f(y)\bigr)\bigl(g(x)-g(y)\bigr)\,dx\,dy,$
(36)

and an inequality generalizing Chebyshev’s inequality is

$\left|\int_I h(x)\,dx\int_I h(x)f(x)g(x)\,dx-\int_I h(x)f(x)\,dx\int_I h(x)g(x)\,dx\right|\leq\|f'\|_{\infty}\|g'\|_{\infty}\left[\int_I x^{2}h(x)\,dx\int_I h(x)\,dx-\left(\int_I xh(x)\,dx\right)^{2}\right],$
(37)

provided that $h(x)>0$ and $f$, $g$ are differentiable with first derivatives bounded on $I$.

Theorem 3.2 For $k>0$, let $m,n,p,q>k$ and $r,s>-k$; then we have the following inequality for the k-beta function:

$\bigl|\beta_k(r+k,s+k)\,\beta_k(m+p+r+k,\,n+q+s+k)-\beta_k(m+r+k,\,n+s+k)\,\beta_k(p+r+k,\,q+s+k)\bigr|\leq M(p,q)\,M(m,n)\bigl[\beta_k(r+3k,\,s+k)\,\beta_k(r+k,\,s+k)-\beta_k^{2}(r+2k,\,s+k)\bigr],$
(38)

where

$M(p,q)=\frac{1}{k}\max\{p,q\}\,\frac{(p-k)^{\frac{p-k}{k}}(q-k)^{\frac{q-k}{k}}}{(p+q-2k)^{\frac{p+q-2k}{k}}},\quad p,q>k>0.$

Proof Consider the mappings

$f(x)=l_{m,n}(x)=x^{\frac{m}{k}}(1-x)^{\frac{n}{k}},\qquad g(x)=l_{p,q}(x)=x^{\frac{p}{k}}(1-x)^{\frac{q}{k}},\qquad h(x)=l_{r,s}(x)=x^{\frac{r}{k}}(1-x)^{\frac{s}{k}},$

defined on the interval [0,1]. Using the generalized version of Lemma 3.1, i.e., (37), along with the mappings defined above, we get

$\left|\int_0^1 x^{\frac{r}{k}}(1-x)^{\frac{s}{k}}\,dx\int_0^1 x^{\frac{r+m+p}{k}}(1-x)^{\frac{s+n+q}{k}}\,dx-\int_0^1 x^{\frac{r+m}{k}}(1-x)^{\frac{s+n}{k}}\,dx\int_0^1 x^{\frac{r+p}{k}}(1-x)^{\frac{s+q}{k}}\,dx\right|\leq\|f'\|_{\infty}\|g'\|_{\infty}\left[\int_0^1 x^{\frac{r}{k}+2}(1-x)^{\frac{s}{k}}\,dx\int_0^1 x^{\frac{r}{k}}(1-x)^{\frac{s}{k}}\,dx-\left(\int_0^1 x^{\frac{r}{k}+1}(1-x)^{\frac{s}{k}}\,dx\right)^{2}\right].$
(39)

Applying (6), inequality (39) gives

$\bigl|k^{2}\beta_k(r+k,s+k)\,\beta_k(m+p+r+k,\,n+q+s+k)-k^{2}\beta_k(m+r+k,\,n+s+k)\,\beta_k(p+r+k,\,q+s+k)\bigr|\leq\|f'\|_{\infty}\|g'\|_{\infty}\bigl[k^{2}\beta_k(r+3k,s+k)\,\beta_k(r+k,s+k)-\bigl(k\,\beta_k(r+2k,s+k)\bigr)^{2}\bigr].$

Now, taking into account the fact

$\|l'_{m,n}\|_{\infty}\leq M(m,n),\qquad\|l'_{p,q}\|_{\infty}\leq M(p,q),$

for all m,n,p,q>k, we can deduce the desired inequality (38). □

Corollary 3.3 For k>0 and m,n,p,q>k, we have the following inequality for the k-beta function:

$\bigl|\beta_k(m+p+k,\,n+q+k)-\beta_k(m+k,\,n+k)\,\beta_k(p+k,\,q+k)\bigr|\leq M(p,q)\,M(m,n).$

Proof Just use r=s=0 in Theorem 3.2 to get the required corollary. □

Theorem 3.4 For $k>0$, $p,q>k$, and $r,s>-k$, we have the following inequality for the k-beta function:

$\bigl|\beta_k(r+k,s+k)\,\beta_k(p+r+k,\,q+s+k)-\beta_k(p+r+k,\,s+k)\,\beta_k(r+k,\,q+s+k)\bigr|\leq\frac{pq}{k^{2}}\bigl[\beta_k(r+3k,\,s+k)\,\beta_k(r+k,\,s+k)-\beta_k^{2}(r+2k,\,s+k)\bigr].$
(40)

Proof Consider the mappings

$f(x)=x^{\frac{p}{k}},\qquad g(x)=(1-x)^{\frac{q}{k}},\qquad h(x)=l_{r,s}(x)=x^{\frac{r}{k}}(1-x)^{\frac{s}{k}},$

defined on the interval [0,1], k>0. Now, we have

$f'(x)=\frac{p}{k}\,x^{\frac{p}{k}-1},\qquad g'(x)=-\frac{q}{k}\,(1-x)^{\frac{q}{k}-1},\qquad\|f'\|_{\infty}=\sup_{t\in(0,1)}|f'(t)|=\frac{p}{k},\qquad\|g'\|_{\infty}=\frac{q}{k}.$

Using the generalized version of Lemma 3.1, i.e., (37), along with the above results, we get

$\left|\int_0^1 x^{\frac{r}{k}}(1-x)^{\frac{s}{k}}\,dx\int_0^1 x^{\frac{r+p}{k}}(1-x)^{\frac{s+q}{k}}\,dx-\int_0^1 x^{\frac{r+p}{k}}(1-x)^{\frac{s}{k}}\,dx\int_0^1 x^{\frac{r}{k}}(1-x)^{\frac{s+q}{k}}\,dx\right|\leq\frac{pq}{k^{2}}\left[\int_0^1 x^{\frac{r}{k}+2}(1-x)^{\frac{s}{k}}\,dx\int_0^1 x^{\frac{r}{k}}(1-x)^{\frac{s}{k}}\,dx-\left(\int_0^1 x^{\frac{r}{k}+1}(1-x)^{\frac{s}{k}}\,dx\right)^{2}\right],$

which is equivalent to (40) on applying (6) to both sides of the above inequality. □

Corollary 3.5 If $r=s=0$ and $p,q>k>0$, then Theorem 3.4 takes the form

$\left|\beta_k(p+k,q+k)-\frac{k}{(p+k)(q+k)}\right|\leq\frac{pq}{12k^{3}}$
(41)

and inequality (41) is equivalent to

$\max\left\{0,\,\frac{12k^{4}-p^{2}q^{2}-p^{2}qk-pq^{2}k-pqk^{2}}{12k^{3}(p+k)(q+k)}\right\}\leq\beta_k(p+k,q+k)\leq\frac{12k^{4}+p^{2}q^{2}+p^{2}qk+pq^{2}k+pqk^{2}}{12k^{3}(p+k)(q+k)}.$
(42)

Proof Taking $r=s=0$ in Theorem 3.4, we get

$\bigl|\beta_k(k,k)\,\beta_k(p+k,q+k)-\beta_k(p+k,k)\,\beta_k(k,q+k)\bigr|\leq\frac{pq}{k^{2}}\bigl[\beta_k(3k,k)\,\beta_k(k,k)-\beta_k^{2}(2k,k)\bigr].$

Use of (5) and (7) implies

$\left|\frac{1}{k}\,\beta_k(p+k,q+k)-\frac{\Gamma_k(p+k)\Gamma_k(k)}{\Gamma_k(p+2k)}\cdot\frac{\Gamma_k(k)\Gamma_k(q+k)}{\Gamma_k(q+2k)}\right|\leq\frac{pq}{k^{2}}\left[\frac{1}{k}\cdot\frac{\Gamma_k(3k)\Gamma_k(k)}{\Gamma_k(4k)}-\left(\frac{\Gamma_k(2k)\Gamma_k(k)}{\Gamma_k(3k)}\right)^{2}\right].$

By (8) and (10), inequality (41) can be obtained and some algebraic calculations give the desired inequality (42). □
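Inequality (41) is easy to test numerically for sample parameter values satisfying $p,q>k>0$; the triples below are arbitrary choices.

```python
import math

def beta_k(x, y, k):
    # beta_k via the Gamma_k ratio (5), with Gamma_k from (4)
    g = lambda z: k ** (z / k - 1) * math.gamma(z / k)
    return g(x) * g(y) / g(x + y)

# Check (41): |beta_k(p+k, q+k) - k/((p+k)(q+k))| <= pq/(12 k^3)
checks = []
for p, q, k in [(2.0, 3.0, 1.5), (5.0, 3.0, 2.0), (3.0, 2.0, 1.0)]:
    lhs = abs(beta_k(p + k, q + k, k) - k / ((p + k) * (q + k)))
    rhs = p * q / (12 * k ** 3)
    checks.append(lhs <= rhs)
    print(f"p={p}, q={q}, k={k}: {lhs:.6f} <= {rhs:.6f}: {checks[-1]}")
```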

4 Some other inequalities for k-beta mappings

In 1935, Grüss established an integral inequality which gives an estimation for the integral of a product in terms of the product of integrals [17]. We use the following lemma [21] to prove our next theorem which is based on the Grüss integral inequality.

Lemma 4.1 If f and g are two functions defined and integrable on (a,b), then

$\left|\frac{1}{b-a}\int_a^b f(x)g(x)\,dx-\frac{1}{b-a}\int_a^b f(x)\,dx\cdot\frac{1}{b-a}\int_a^b g(x)\,dx\right|\leq\begin{cases}\|f'\|_{\infty}\|g\|_{1}, & \text{provided }g\in L_{1}[a,b],\ f'\in L_{\infty}(a,b),\\ \left[\frac{2}{(c+1)(c+2)}\right]^{\frac{1}{c}}(b-a)^{\frac{1}{c}}\|f'\|_{\infty}\|g\|_{d}, & \text{if }g\in L_{d}[a,b],\ f'\in L_{\infty}(a,b),\ c>1,\ \frac{1}{c}+\frac{1}{d}=1,\\ \frac{b-a}{3}\|f'\|_{\infty}\|g\|_{\infty}, & \text{provided }g,f'\in L_{\infty}(a,b).\end{cases}$

Theorem 4.2 Let m,n>k and p,q,k>0, then we have the following inequality for the k-beta mapping:

$\bigl|\beta_k(m+p+k,\,n+q+k)-\beta_k(m+k,n+k)\,\beta_k(p+k,q+k)\bigr|\leq\begin{cases}M(m,n)\,k\,\beta_k(p+k,q+k),\\ \left[\frac{2}{(c+1)(c+2)}\right]^{\frac{1}{c}}M(m,n)\bigl[k\,\beta_k(dp+k,dq+k)\bigr]^{\frac{1}{d}}, & \text{if }c>1,\ \frac{1}{c}+\frac{1}{d}=1,\\ \frac{1}{3}M(m,n)\,\frac{p^{\frac{p}{k}}q^{\frac{q}{k}}}{(p+q)^{\frac{p+q}{k}}},\end{cases}$

where

$M(m,n)=\frac{1}{k}\max\{m,n\}\,\frac{(m-k)^{\frac{m-k}{k}}(n-k)^{\frac{n-k}{k}}}{(m+n-2k)^{\frac{m+n-2k}{k}}},\quad m,n>k>0.$

Proof Consider the mappings

f(x)= l m , n = x m k ( 1 x ) n k ,g(x)= l p , q = x p k ( 1 x ) q k ,

defined on the interval [0,1]. Using Lemma 4.1, along with the mappings defined above, we get

$\left|\int_0^1 x^{\frac{m+p}{k}}(1-x)^{\frac{n+q}{k}}\,dx-\int_0^1 x^{\frac{m}{k}}(1-x)^{\frac{n}{k}}\,dx\int_0^1 x^{\frac{p}{k}}(1-x)^{\frac{q}{k}}\,dx\right|\leq\begin{cases}\|l'_{m,n}\|_{\infty}\|l_{p,q}\|_{1}, & m,n>k,\ p,q,k>0,\\ \left[\frac{2}{(c+1)(c+2)}\right]^{\frac{1}{c}}\|l'_{m,n}\|_{\infty}\|l_{p,q}\|_{d}, & \text{if }m,n>k,\ p,q,k>0,\ c>1,\ \frac{1}{c}+\frac{1}{d}=1,\\ \frac{1}{3}\|l'_{m,n}\|_{\infty}\|l_{p,q}\|_{\infty}, & m,n>k,\ p,q,k>0.\end{cases}$

Applying (24)-(26) and using the fact $\|l'_{m,n}\|_{\infty}\leq M(m,n)$ established in Section 2, we get

$\leq\begin{cases}M(m,n)\,k\,\beta_k(p+k,q+k), & m,n>k,\ p,q,k>0,\\ \left[\frac{2}{(c+1)(c+2)}\right]^{\frac{1}{c}}M(m,n)\bigl[k\,\beta_k(dp+k,dq+k)\bigr]^{\frac{1}{d}}, & m,n,p,q>k>0,\ c>1,\ \frac{1}{c}+\frac{1}{d}=1,\\ \frac{1}{3}M(m,n)\,\frac{p^{\frac{p}{k}}q^{\frac{q}{k}}}{(p+q)^{\frac{p+q}{k}}}, & m,n>k,\ p,q,k>0.\end{cases}$

 □

Theorem 4.3 Let $p>k$ and $q,k>0$; then the k-beta mapping satisfies the inequality

$\left|\beta_k(p+k,q+k)-\frac{k}{(p+k)(q+k)}\right|\leq\begin{cases}\frac{p}{q+k}, & p>k,\ q,k>0,\\ \left[\frac{2}{k(c+1)(c+2)}\right]^{\frac{1}{c}}\frac{p}{(dq+k)^{\frac{1}{d}}}, & p>k,\ q>0,\ c>1,\ \frac{1}{c}+\frac{1}{d}=1,\\ \frac{p}{3k}, & p>k>0.\end{cases}$

Proof Consider the mappings

f(x)= x p k ,g(x)= ( 1 x ) q k ,

defined on the interval [0,1]. Here, we observe that

$f'(x)=\frac{p}{k}\,x^{\frac{p}{k}-1},\qquad\|f'\|_{\infty}=\frac{p}{k},\quad p>k,\qquad\|g\|_{d}=\left(\int_0^1(1-x)^{\frac{dq}{k}}\,dx\right)^{\frac{1}{d}}=\left(\frac{k}{dq+k}\right)^{\frac{1}{d}},$

and

$\|g\|_{1}=\int_0^1(1-x)^{\frac{q}{k}}\,dx=\frac{k}{q+k},\qquad\|g\|_{\infty}=1.$

Now, by Lemma 4.1, we get the required result. □

The following inequality of Grüss type has been established in [22].

Lemma 4.4 If f and g are two functions defined and integrable on (a,b), then

$\left|\frac{1}{b-a}\int_a^b f(x)g(x)\,dx-\frac{1}{b-a}\int_a^b f(x)\,dx\cdot\frac{1}{b-a}\int_a^b g(x)\,dx\right|\leq\frac{1}{6}\,\|f'\|_{c}\,\|g'\|_{d}\,(b-a),$

provided

$f'\in L_{c}(a,b),\qquad g'\in L_{d}(a,b),\qquad c>1,\ \frac{1}{c}+\frac{1}{d}=1.$

Theorem 4.5 Let m,n,p,q,k>0, then we have the following inequality for the k-beta mapping:

$\bigl|\beta_k(m+p+k,\,n+q+k)-\beta_k(m+k,n+k)\,\beta_k(p+k,q+k)\bigr|\leq\frac{k}{6}\max\{m,n\}\max\{p,q\}\bigl[\beta_k\bigl((m-k)c+k,\,(n-k)c+k\bigr)\bigr]^{\frac{1}{c}}\times\bigl[\beta_k\bigl((p-k)d+k,\,(q-k)d+k\bigr)\bigr]^{\frac{1}{d}},$

where $c>1$, $\frac{1}{c}+\frac{1}{d}=1$.

Proof Consider the mappings

$f(x)=x^{\frac{m}{k}}(1-x)^{\frac{n}{k}},\qquad g(x)=x^{\frac{p}{k}}(1-x)^{\frac{q}{k}},\quad m,n,p,q,k>0,$

defined on the interval [0,1]. Using Lemma 4.4, along with the mappings defined above, we get

$\left|\int_0^1 x^{\frac{m+p}{k}}(1-x)^{\frac{n+q}{k}}\,dx-\int_0^1 x^{\frac{m}{k}}(1-x)^{\frac{n}{k}}\,dx\int_0^1 x^{\frac{p}{k}}(1-x)^{\frac{q}{k}}\,dx\right|\leq\frac{1}{6}\,\|f'\|_{c}\,\|g'\|_{d},$

provided

$f'\in L_{c}(0,1),\qquad g'\in L_{d}(0,1),\qquad c>1,\ \frac{1}{c}+\frac{1}{d}=1.$

Now, using the fact that

$\|f'\|_{c}\leq\max\{m,n\}\bigl[k\,\beta_k\bigl((m-k)c+k,\,(n-k)c+k\bigr)\bigr]^{\frac{1}{c}}$
(43)

and

$\|g'\|_{d}\leq\max\{p,q\}\bigl[k\,\beta_k\bigl((p-k)d+k,\,(q-k)d+k\bigr)\bigr]^{\frac{1}{d}},$
(44)

we have our required result. □

Remarks If we take $k=1$, inequalities (43) and (44) reduce to the results for the classical beta function proved in [22].

Lemma 4.6 If f and g are two functions defined and integrable on (a,b), then we have the inequality

$\left|\frac{1}{b-a}\int_a^b f(x)g(x)\,dx-\frac{1}{b-a}\int_a^b f(x)\,dx\cdot\frac{1}{b-a}\int_a^b g(x)\,dx\right|\leq\frac{1}{6}\,\|f'\|_{\infty}\,\|g'\|_{1}\,(b-a).$

Theorem 4.7 If $k>0$, the following inequalities for the k-beta mapping hold:

$\left|\beta_k(p+k,q+k)-\frac{k}{(p+k)(q+k)}\right|\leq\frac{pq}{6k^{2}}\left[\frac{1}{c(p-k)+k}\right]^{\frac{1}{c}}\left[\frac{1}{d(q-k)+k}\right]^{\frac{1}{d}},\quad c>1,\ \frac{1}{c}+\frac{1}{d}=1,$
(45)

and

$\left|\beta_k(p+k,q+k)-\frac{k}{(p+k)(q+k)}\right|\leq\frac{p}{6k^{2}},\quad p>k.$
(46)

Proof Consider the mappings

$f(x)=x^{\frac{p}{k}},\qquad g(x)=(1-x)^{\frac{q}{k}},\quad p,q>k>0,$

defined on the interval [0,1]. Here, we note that

$\|f'\|_{c}=\frac{p}{k}\left[\frac{k}{c(p-k)+k}\right]^{\frac{1}{c}},\qquad\|g'\|_{d}=\frac{q}{k}\left[\frac{k}{d(q-k)+k}\right]^{\frac{1}{d}}.$

Using Lemma 4.4 for the above results, we get

$\left|\int_0^1 x^{\frac{p}{k}}(1-x)^{\frac{q}{k}}\,dx-\int_0^1 x^{\frac{p}{k}}\,dx\int_0^1(1-x)^{\frac{q}{k}}\,dx\right|\leq\frac{1}{6}\cdot\frac{p}{k}\left[\frac{k}{c(p-k)+k}\right]^{\frac{1}{c}}\cdot\frac{q}{k}\left[\frac{k}{d(q-k)+k}\right]^{\frac{1}{d}},$

provided

f L c (a,b), g L d (a,b),c>1, 1 c + 1 d =1.

By (6) and the fact that $\frac{1}{c}+\frac{1}{d}=1$, we have inequality (45). Also, $\|f'\|_{\infty}=\frac{p}{k}$ and $\|g'\|_{1}=1$; thus, using Lemma 4.6, we have inequality (46). □

5 Main results: via Ostrowski’s inequality

In this section, we use the integral inequality known in the literature as Ostrowski's inequality [23]. The following lemma concerning Ostrowski's inequality for absolutely continuous mappings whose derivatives belong to $L_p$ spaces holds [24, 25]. Here, we give some lemmas which are helpful for the results involving the k-beta mapping.

Lemma 5.1 Let $f:[a,b]\to\mathbb{R}$ be an absolutely continuous mapping for which $f'\in L_{p}[a,b]$, $p>1$. Then

$\left|f(x)-\frac{1}{b-a}\int_a^b f(t)\,dt\right|\leq\frac{1}{(q+1)^{\frac{1}{q}}}\left[\left(\frac{x-a}{b-a}\right)^{q+1}+\left(\frac{b-x}{b-a}\right)^{q+1}\right]^{\frac{1}{q}}(b-a)^{\frac{1}{q}}\,\|f'\|_{p}\leq\frac{1}{(q+1)^{\frac{1}{q}}}\,(b-a)^{\frac{1}{q}}\,\|f'\|_{p}$
(47)

for all $x\in[a,b]$, where $\frac{1}{p}+\frac{1}{q}=1$ and

$\|f'\|_{p}=\left(\int_a^b\bigl|f'(t)\bigr|^{p}\,dt\right)^{\frac{1}{p}},$

and the best inequality for (47) is embodied in the form

$\left|f\!\left(\frac{a+b}{2}\right)-\frac{1}{b-a}\int_a^b f(t)\,dt\right|\leq\frac{1}{2}\cdot\frac{(b-a)^{\frac{1}{q}}}{(q+1)^{\frac{1}{q}}}\,\|f'\|_{p}.$

For the application of the above inequalities to some numerical quadrature rules, we have the following lemma.

Lemma 5.2 Let $f:[a,b]\to\mathbb{R}$ be an absolutely continuous mapping for which $f'\in L_{p}[a,b]$, $p>1$. Then for any partition $I_n:a=x_0<x_1<\cdots<x_{n-1}<x_n=b$ of $[a,b]$ and any intermediate point vector $\xi=(\xi_0,\xi_1,\ldots,\xi_{n-1})$ satisfying $\xi_i\in[x_i,x_{i+1}]$ ($i=0,1,\ldots,n-1$), we have

$\int_a^b f(x)\,dx=A_R(f,I_n,\xi)+R_R(f,I_n,\xi).$

Here A R denotes the quadrature rule of the Riemann type defined by

$A_R(f,I_n,\xi)=\sum_{i=0}^{n-1}f(\xi_i)\,h_i,\qquad h_i=x_{i+1}-x_i,$

and the remainder satisfies the estimate

$\bigl|R_R(f,I_n,\xi)\bigr|\leq\frac{\|f'\|_{p}}{(q+1)^{\frac{1}{q}}}\left(\sum_{i=0}^{n-1}\bigl[(\xi_i-x_i)^{q+1}+(x_{i+1}-\xi_i)^{q+1}\bigr]\right)^{\frac{1}{q}}\leq\frac{\|f'\|_{p}}{(q+1)^{\frac{1}{q}}}\left(\sum_{i=0}^{n-1}h_i^{q+1}\right)^{\frac{1}{q}},$

where $h_i=x_{i+1}-x_i$ ($i=0,1,\ldots,n-1$). Lemmas 5.1 and 5.2 are proved in [19], and the best quadrature formula that can be obtained from the above result is the one with $\xi_i=\frac{x_i+x_{i+1}}{2}$, $i=0,1,\ldots,n-1$; it is given in the following corollary.

Corollary 5.3 Let $f$ and $I_n$ be as in Lemma 5.2; then

$\int_a^b f(x)\,dx=A_M(f,I_n)+R_M(f,I_n),$

where $A_M$ denotes the midpoint quadrature rule, i.e.,

$A_M(f,I_n)=\sum_{i=0}^{n-1}f\!\left(\frac{x_i+x_{i+1}}{2}\right)h_i,$

and the remainder R M satisfies the estimation

$\bigl|R_M(f,I_n)\bigr|\leq\frac{1}{2}\cdot\frac{\|f'\|_{p}}{(q+1)^{\frac{1}{q}}}\left(\sum_{i=0}^{n-1}h_i^{q+1}\right)^{\frac{1}{q}}.$
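The midpoint rule and the remainder bound of Corollary 5.3 can be sketched directly. The test function $f(x)=e^x$ on $[0,1]$ and the choice $p=q=2$ are hypothetical, picked only because $\|f'\|_2=\sqrt{(e^2-1)/2}$ then has a closed form.

```python
import math

def midpoint_rule(f, a, b, n):
    # A_M(f, I_n): midpoint quadrature on a uniform partition of [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def remainder_bound(fprime_p_norm, a, b, n, p):
    # |R_M| <= (1/2) ||f'||_p (q+1)^{-1/q} (sum h_i^{q+1})^{1/q}, 1/p + 1/q = 1;
    # on a uniform partition all h_i = (b-a)/n, so the sum is n h^{q+1}
    q = p / (p - 1)
    h = (b - a) / n
    return 0.5 * fprime_p_norm / (q + 1) ** (1 / q) * (n * h ** (q + 1)) ** (1 / q)

exact = math.e - 1                          # integral of e^x over [0, 1]
norm2 = math.sqrt((math.e ** 2 - 1) / 2)    # ||f'||_2 for f(x) = e^x
for n in (4, 16, 64):
    err = abs(midpoint_rule(math.exp, 0.0, 1.0, n) - exact)
    print(n, err, remainder_bound(norm2, 0.0, 1.0, n, 2))
```

The actual error decays like $h^2$ while this $L_2$ bound decays like $h$, so the estimate holds with room to spare.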

We are now able to apply the above results for Euler’s k-beta mapping.

Theorem 5.4 Let $s>1$, $p,q>2k-\frac{k}{s}>k$, and $k>0$. Then we have the following inequality for the k-beta function:

$\left|k\,\beta_k(p,q)-x^{\frac{p}{k}-1}(1-x)^{\frac{q}{k}-1}\right|\leq\frac{1}{(l+1)^{\frac{1}{l}}}\bigl[x^{l+1}+(1-x)^{l+1}\bigr]^{\frac{1}{l}}\max\{p-k,\,q-k\}\times\bigl[\beta_k\bigl(s(p-2k)+k,\,s(q-2k)+k\bigr)\bigr]^{\frac{1}{s}},$

provided that $\frac{1}{s}+\frac{1}{l}=1$.

Proof Consider the mapping $f(t)=t^{\frac{p}{k}-1}(1-t)^{\frac{q}{k}-1}=l_{p-k,q-k}(t)$, $t\in[0,1]$. From Lemma 5.1, applied to this mapping, we get

$\bigl|k\,\beta_k(p,q)-l_{p-k,q-k}(x)\bigr|\leq\frac{1}{(l+1)^{\frac{1}{l}}}\bigl[x^{l+1}+(1-x)^{l+1}\bigr]^{\frac{1}{l}}\,\|l'_{p-k,q-k}\|_{s},\quad x\in[0,1],$
(48)

where $s>1$ and $\frac{1}{s}+\frac{1}{l}=1$. Now, taking the derivative of the above mapping, we have

$l'_{p-k,q-k}(t)=\frac{1}{k}\,l_{p-2k,q-2k}(t)\bigl[(p-k)-(p+q-2k)t\bigr],\quad t\in(0,1).$

If $t\in\bigl(0,\frac{p-k}{p+q-2k}\bigr)$, then $l'_{p-k,q-k}>0$, and if $t\in\bigl(\frac{p-k}{p+q-2k},1\bigr)$, then $l'_{p-k,q-k}<0$, which shows that $l_{p-k,q-k}$ attains its maximum at $t_0=\frac{p-k}{p+q-2k}$, with

$\sup_{t\in(0,1)}l_{p-k,q-k}(t)=l_{p-k,q-k}(t_0)=\frac{(p-k)^{\frac{p-k}{k}}(q-k)^{\frac{q-k}{k}}}{(p+q-2k)^{\frac{p+q-2k}{k}}},\quad p,q>k.$

Consequently, we have

$\bigl|l'_{p-k,q-k}(t)\bigr|\leq\frac{1}{k}\,\bigl|l_{p-2k,q-2k}(t)\bigr|\max_{t\in(0,1)}\bigl|(p-k)-(p+q-2k)t\bigr|\leq\frac{1}{k}\max\{p-k,\,q-k\}\,l_{p-2k,q-2k}(t)$

for all $t\in(0,1)$. Thus

$\|l'_{p-k,q-k}\|_{s}=\left(\frac{1}{k}\int_0^1 l^{s}_{p-2k,q-2k}(t)\,\bigl|(p-k)-(p+q-2k)t\bigr|^{s}\,dt\right)^{\frac{1}{s}}=\left(\frac{1}{k}\int_0^1 t^{\frac{s(p-2k)}{k}}(1-t)^{\frac{s(q-2k)}{k}}\bigl|(p-k)-(p+q-2k)t\bigr|^{s}\,dt\right)^{\frac{1}{s}}\leq\max\{p-k,\,q-k\}\bigl[\beta_k\bigl(s(p-2k)+k,\,s(q-2k)+k\bigr)\bigr]^{\frac{1}{s}}.$
(49)

Using (48) and (49), we get the desired Theorem 5.4. □

Now, we have the result concerning the approximation of the k-beta function in terms of the Riemann sums.

Theorem 5.5 Let $s>1$, $p,q>2k-\frac{k}{s}>k$, and $k>0$. If $I_n:0=x_0<x_1<\cdots<x_{n-1}<x_n=1$ is a division of $[0,1]$ and $\xi_i\in[x_i,x_{i+1}]$, $i=0,1,\ldots,n-1$, is a sequence of intermediate points for $I_n$, then we have the following formula for the k-beta function:

$k\,\beta_k(p,q)=\sum_{i=0}^{n-1}\xi_i^{\frac{p}{k}-1}(1-\xi_i)^{\frac{q}{k}-1}h_i+T_n(p,q),$

where the remainder T n (p,q) satisfies the estimate

$\bigl|T_n(p,q)\bigr|\leq\frac{\max\{p-k,\,q-k\}}{(l+1)^{\frac{1}{l}}}\bigl[\beta_k\bigl(s(p-2k)+k,\,s(q-2k)+k\bigr)\bigr]^{\frac{1}{s}}\times\left(\sum_{i=0}^{n-1}\bigl[(\xi_i-x_i)^{l+1}+(x_{i+1}-\xi_i)^{l+1}\bigr]\right)^{\frac{1}{l}}\leq\frac{\max\{p-k,\,q-k\}}{(l+1)^{\frac{1}{l}}}\bigl[\beta_k\bigl(s(p-2k)+k,\,s(q-2k)+k\bigr)\bigr]^{\frac{1}{s}}\left(\sum_{i=0}^{n-1}h_i^{l+1}\right)^{\frac{1}{l}},$

where $h_i=x_{i+1}-x_i$ ($i=0,1,\ldots,n-1$) and $\frac{1}{s}+\frac{1}{l}=1$.

Proof Taking $f(t)=t^{\frac{p}{k}-1}(1-t)^{\frac{q}{k}-1}$, $t\in[0,1]$, $k>0$, and applying Lemma 5.2, we get Theorem 5.5. The proof of Lemma 5.2 is available in [11], so the details are omitted. □
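A minimal sketch of the approximation in Theorem 5.5: the Riemann-type sum with midpoints as intermediate points approximates the k-beta integral $\int_0^1 t^{p/k-1}(1-t)^{q/k-1}\,dt=k\,\beta_k(p,q)$ from (6). The parameters below are illustrative, chosen so that the integrand is a smooth polynomial.

```python
import math

def beta_k(p, q, k):
    # beta_k via the Gamma_k ratio (5), with Gamma_k from (4)
    g = lambda z: k ** (z / k - 1) * math.gamma(z / k)
    return g(p) * g(q) / g(p + q)

def kbeta_riemann(p, q, k, n):
    # Riemann sum of xi_i^{p/k-1} (1-xi_i)^{q/k-1} h_i on a uniform partition,
    # with the midpoints xi_i = (x_i + x_{i+1})/2; approximates k beta_k(p, q)
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += t ** (p / k - 1) * (1 - t) ** (q / k - 1)
    return total * h

p, q, k = 6.0, 8.0, 2.0    # p/k - 1 = 2 and q/k - 1 = 3: a polynomial integrand
print(kbeta_riemann(p, q, k, 10_000), k * beta_k(p, q, k))   # both approx 1/60
```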

6 Inequalities in probability theory and applications for k-beta function

Here, we give some applications of the Ostrowski type inequality for the k-beta function and cumulative distribution functions. For this purpose, we need some basic concepts of random variable, distribution function, probability density function and expected values.

A process which generates raw data is called an experiment, and an experiment which gives different results under similar conditions, even when repeated a large number of times, is termed a random experiment. A variable whose values are determined by the outcomes of a random experiment is called a random variable or simply a variate. Random variables are usually denoted by capital letters X, Y, and Z, while the values associated with them are denoted by the corresponding small letters x, y, and z. Random variables are classified into two classes, namely discrete and continuous random variables.

A random variable that can assume only a finite or countably infinite number of values is known as a discrete random variable, while a variable which can assume each and every value within some interval is called a continuous random variable. The distribution function of a random variable X, denoted by $F(x)$, is defined by $F(x)=\Pr(X\leq x)$, i.e., the distribution function gives the probability of the event that X takes a value less than or equal to a specified value x.

A random variable X may also be defined as continuous if its distribution function $F(x)$ is continuous and differentiable everywhere except at isolated points in the given range. Let the derivative of $F(x)$ be denoted by $f(x)$, i.e., $f(x)=\frac{d}{dx}F(x)$. Since $F(x)$ is a non-decreasing function of x,

$f(x)\geq0\quad\text{and}\quad F(x)=\int_{-\infty}^{x}f(t)\,dt\quad\text{for all }x.$

Here, the function $f(x)$ is called the probability density function (pdf), or simply the density function, of the random variable X. A probability density function has the properties

$f(x)\geq0\ \text{for all }x\quad\text{and}\quad\int_{-\infty}^{\infty}f(x)\,dx=1,$

and the probability that the random variable X takes on a value in the interval $(a,b]$ is given by

$P(a<X\leq b)=F(b)-F(a)=\int_{-\infty}^{b}f(x)\,dx-\int_{-\infty}^{a}f(x)\,dx=\int_a^b f(x)\,dx,$

which is the area under the curve $y=f(x)$ between $X=a$ and $X=b$.

A moment designates the power to which the deviations are raised before averaging them. In statistics, we have three kinds of moments:

  (i) The moment about any value $x=A$ is the average of the r-th power of the deviations of the variable from A and is called the r-th moment of the distribution about A.

  (ii) The moment about $x=0$ is the average of the r-th power of the deviations of the variable from 0 and is called the r-th moment of the distribution about 0.

  (iii) The moment about the mean, i.e., $x=\mu$, is the average of the r-th power of the deviations of the variable from the mean and is called the r-th moment of the distribution about the mean. If a random variable X assumes all the values from a to b, then for a continuous distribution the r-th moments about the arbitrary number A and about 0 are given, respectively, by $\int_a^b(x-A)^{r}f(x)\,dx$ and $\int_a^b(x-0)^{r}f(x)\,dx$ (see [26-28]).

Definition 6.1 In a random experiment with n outcomes, suppose a variable X assumes the values $x_1,\ldots,x_n$ with corresponding probabilities $p_1,\ldots,p_n$; then the pairing $(x_i,p(x_i))$, $i=1,2,\ldots$, is called a probability distribution, and $\sum p_i=1$ (in the case of discrete distributions). Also, if $f(x)$ is a continuous probability density function defined on an interval $[a,b]$, then $\int_a^b f(x)\,dx=1$. The expected value of the variate is defined as the first moment of the probability distribution about $x=0$, i.e.,

$E(X)=\int_a^b x\,f(x)\,dx.$

Definition 6.2 Let X be a continuous random variable; then it is said to have a beta k-distribution of the first kind with two parameters p and q if its probability k-density function (pkdf) is defined by [8]

$f_k(x)=\begin{cases}\frac{1}{k\,\beta_k(p,q)}\,x^{\frac{p}{k}-1}(1-x)^{\frac{q}{k}-1}, & 0\leq x\leq1;\ p,q,k>0,\\ 0, & \text{elsewhere}.\end{cases}$
(50)

In the above distribution, the k-beta variable of the first kind is referred to as $\beta_{1,k}(p,q)$, and its k-distribution function $F_k(x)$ is given by

$F_k(x)=\begin{cases}0, & x<0,\\ \int_0^{x}\frac{1}{k\,\beta_k(p,q)}\,t^{\frac{p}{k}-1}(1-t)^{\frac{q}{k}-1}\,dt, & 0\leq x\leq1;\ p,q,k>0,\\ 1, & x>1.\end{cases}$
(51)

Remarks We may call the above function an incomplete k-beta function because, for $k=1$, it reduces to the incomplete beta function tabulated in [29, 30].

Proposition 6.3 For the parameters p,q,k>0, the expected value of the k-beta random variable is given by

$E(X)=\frac{p}{p+q}.$

Proof For the k-beta random variable defined above, we observe that

$E(X)=\int_0^1 x\,f_k(x)\,dx=\int_0^1\frac{1}{k\,\beta_k(p,q)}\,x\cdot x^{\frac{p}{k}-1}(1-x)^{\frac{q}{k}-1}\,dx,\quad p,q>0.$

Using (5), (6), and (8), we have

$E(X)=\int_0^1\frac{1}{k\,\beta_k(p,q)}\,x^{\frac{p}{k}}(1-x)^{\frac{q}{k}-1}\,dx=\frac{\beta_k(p+k,q)}{\beta_k(p,q)}=\frac{\Gamma_k(p+k)\Gamma_k(q)}{\Gamma_k(p+q+k)}\cdot\frac{\Gamma_k(p+q)}{\Gamma_k(p)\Gamma_k(q)}=\frac{p}{p+q}.$

 □
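Proposition 6.3 can be confirmed numerically by integrating $x\,f_k(x)$ with the midpoint rule; the values of $p$, $q$, $k$ below are arbitrary sample parameters.

```python
import math

def beta_k(p, q, k):
    # beta_k via the Gamma_k ratio (5), with Gamma_k from (4)
    g = lambda z: k ** (z / k - 1) * math.gamma(z / k)
    return g(p) * g(q) / g(p + q)

def mean_numeric(p, q, k, n=200_000):
    # E(X) = integral of x f_k(x) over [0, 1], where
    # f_k(x) = x^{p/k-1} (1-x)^{q/k-1} / (k beta_k(p, q)) as in (50)
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x ** (p / k) * (1 - x) ** (q / k - 1)
    return total * h / (k * beta_k(p, q, k))

p, q, k = 4.0, 6.0, 2.0
print(mean_numeric(p, q, k), p / (p + q))    # both approx 0.4
```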

Lemma 6.4 Let X be a random variable taking values in the finite interval $[a,b]$, with the cumulative distribution function $F(x)=\Pr(X\leq x)$. Then the following inequalities of Ostrowski type hold:

$\left|\Pr(X\leq x)-\frac{b-E(X)}{b-a}\right|\leq\frac{1}{b-a}\left[\bigl[2x-(a+b)\bigr]\Pr(X\leq x)+\int_a^b\operatorname{sgn}(t-x)\,F(t)\,dt\right]\leq\frac{1}{b-a}\bigl[(b-x)\Pr(X\leq x)+(x-a)\Pr(X\geq x)\bigr]\leq\frac{1}{2}+\frac{\bigl|x-\frac{a+b}{2}\bigr|}{b-a}$

for all $x\in[a,b]$. All the inequalities are sharp and the constant $\frac{1}{2}$ is the best possible. Moreover, by the integration by parts formula for the Riemann-Stieltjes integral, we have

$E(X)=\int_a^b t\,dF(t)=t\,F(t)\Big|_a^b-\int_a^b F(t)\,dt=b\,F(b)-a\,F(a)-\int_a^b F(t)\,dt=b-\int_a^b F(t)\,dt$

and

$1-F(x)=\Pr(X\geq x).$
(52)

The proof of Lemma 6.4 is given in [19, 31]. Now, we are able to give some applications for the k-beta random variable.

Theorem 6.5 Let X be a k-beta random variable with parameters p,q,k>0, then we have the following inequalities:

$\left|\Pr(X\leq x)-\frac{q}{p+q}\right|\leq\frac{1}{2}+\left|x-\frac{1}{2}\right|$

and

$\left|\Pr(X\geq x)-\frac{p}{p+q}\right|\leq\frac{1}{2}+\left|x-\frac{1}{2}\right|$

for all $x\in[0,1]$ and, in particular,

$\left|\Pr\Bigl(X\leq\frac{1}{2}\Bigr)-\frac{q}{p+q}\right|\leq\frac{1}{2}$

and

$\left|\Pr\Bigl(X\geq\frac{1}{2}\Bigr)-\frac{p}{p+q}\right|\leq\frac{1}{2}.$

Proof Using Lemma 6.4 together with the k-beta random variable, the pkdf defined in (50) and (51), and Proposition 6.3 for the expected value, we get

$\left|\Pr(X\leq x)-\frac{q}{p+q}\right|\leq\frac{1}{2}+\left|x-\frac{1}{2}\right|,$

and, by (52), we have

$\left|\Pr(X\geq x)-\frac{p}{p+q}\right|\leq\frac{1}{2}+\left|x-\frac{1}{2}\right|$

for all $x\in[0,1]$. In particular, at the midpoint of the interval $[0,1]$, i.e., at $x=\frac{1}{2}$, we obtain the remaining results of Theorem 6.5. □
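The first inequality of Theorem 6.5 can be illustrated by computing $F_k(x)=\Pr(X\leq x)$ with the midpoint rule; the parameters and the evaluation point below are illustrative choices.

```python
import math

def beta_k(p, q, k):
    # beta_k via the Gamma_k ratio (5), with Gamma_k from (4)
    g = lambda z: k ** (z / k - 1) * math.gamma(z / k)
    return g(p) * g(q) / g(p + q)

def cdf_k(x, p, q, k, n=100_000):
    # F_k(x) from (51): (1/(k beta_k(p,q))) * integral of t^{p/k-1}(1-t)^{q/k-1}
    # over [0, x], approximated by the midpoint rule
    h = x / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += t ** (p / k - 1) * (1 - t) ** (q / k - 1)
    return total * h / (k * beta_k(p, q, k))

p, q, k, x = 4.0, 6.0, 2.0, 0.5
lhs = abs(cdf_k(x, p, q, k) - q / (p + q))
print(lhs, "<=", 0.5 + abs(x - 0.5))
```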

Lemma 6.6 Let X be a random variable with probability density function $f:[a,b]\subset\mathbb{R}\to\mathbb{R}_{+}$ and with cumulative distribution function $F(x)=\Pr(X\leq x)$. If $f\in L_{r}[a,b]$, $r>1$, then the following inequality holds:

$\left|\Pr(X\leq x)-\frac{b-E(X)}{b-a}\right|\leq\frac{s}{s+1}\,\|f\|_{r}\,(b-a)^{\frac{1}{s}}\left[\left(\frac{x-a}{b-a}\right)^{\frac{1+s}{s}}+\left(\frac{b-x}{b-a}\right)^{\frac{1+s}{s}}\right]\leq\frac{s}{s+1}\,\|f\|_{r}\,(b-a)^{\frac{1}{s}}$

for all $x\in[a,b]$, where $\frac{1}{r}+\frac{1}{s}=1$.

Now, we give the application for the k-beta random variable X in terms of the parameter $k>0$. A k-beta random variable X with positive parameters p, q, and k has the probability density function

$f_k(x;p,q)=\frac{x^{\frac{p}{k}-1}(1-x)^{\frac{q}{k}-1}}{k\,\beta_k(p,q)},\quad0\leq x\leq1,$

where β k (p,q) is the k-beta function. Here, we observe that

$\|f_k(\cdot\,;p,q)\|_{r}=\frac{1}{\beta_k(p,q)}\left(\frac{1}{k}\int_0^1 x^{\frac{rp}{k}-r}(1-x)^{\frac{rq}{k}-r}\,dx\right)^{\frac{1}{r}}=\frac{1}{\beta_k(p,q)}\left(\frac{1}{k}\int_0^1 x^{\frac{r(p-k)+k}{k}-1}(1-x)^{\frac{r(q-k)+k}{k}-1}\,dx\right)^{\frac{1}{r}}.$

Thus, we have

$\|f_k(\cdot\,;p,q)\|_{r}=\frac{1}{\beta_k(p,q)}\bigl[\beta_k\bigl(r(p-k)+k,\,r(q-k)+k\bigr)\bigr]^{\frac{1}{r}},$
(53)

provided

$r(p-k)+k>0\ \text{and}\ r(q-k)+k>0,\quad\text{i.e.,}\quad p>k-\frac{k}{r}\ \text{and}\ q>k-\frac{k}{r}.$

Theorem 6.7 Let X be a k-beta random variable with parameters $p,q,k>0$, $p>k-\frac{k}{r}$, and $q>k-\frac{k}{r}$. Then we have the following inequalities:

$\left|\Pr(X\leq x)-\frac{q}{p+q}\right|\leq\frac{s}{s+k}\bigl[x^{\frac{s+k}{s}}+(1-x)^{\frac{s+k}{s}}\bigr]\frac{\bigl[\beta_k\bigl(r(p-k)+k,\,r(q-k)+k\bigr)\bigr]^{\frac{1}{r}}}{\beta_k(p,q)}$

for all $x\in[0,1]$ and, in particular,

$\left|\Pr\Bigl(X\leq\frac{1}{2}\Bigr)-\frac{q}{p+q}\right|\leq\frac{s}{2^{k/s}\,(s+k)}\cdot\frac{\bigl[\beta_k\bigl(r(p-k)+k,\,r(q-k)+k\bigr)\bigr]^{\frac{1}{r}}}{\beta_k(p,q)}.$

Proof Using Lemma 6.6 together with the k-beta random variable, the pkdf defined in (50) and (51), and the norm (53), along with Proposition 6.3 for the expected value, we get the required Theorem 6.7. □