1 Introduction

After Schmitendorf [1], who derived necessary and sufficient optimality conditions for static minimax problems, much attention has been paid to optimality conditions and duality theorems for minimax fractional programming problems [2-17]. For the theory, algorithms, and applications of minimax problems, the reader is referred to [18].

In this paper, we consider the following nondifferentiable minimax fractional programming problem:

\[
\text{(P)}\qquad \min_{x\in\mathbb{R}^n}\ \sup_{y\in Y}\ \frac{f(x,y)+(x^TBx)^{1/2}}{h(x,y)-(x^TDx)^{1/2}}
\quad\text{subject to}\quad g(x)\le 0,
\]

where $Y$ is a compact subset of $\mathbb{R}^l$, $f(\cdot,\cdot):\mathbb{R}^n\times\mathbb{R}^l\to\mathbb{R}$ and $h(\cdot,\cdot):\mathbb{R}^n\times\mathbb{R}^l\to\mathbb{R}$ are twice continuously differentiable on $\mathbb{R}^n\times\mathbb{R}^l$, $g(\cdot):\mathbb{R}^n\to\mathbb{R}^m$ is twice continuously differentiable on $\mathbb{R}^n$, $B$ and $D$ are $n\times n$ positive semidefinite matrices, $f(x,y)+(x^TBx)^{1/2}\ge 0$, and $h(x,y)-(x^TDx)^{1/2}>0$ for each $(x,y)\in J\times Y$, where $J=\{x\in\mathbb{R}^n: g(x)\le 0\}$.

Motivated by [7, 14, 15], Yang and Hou [17] formulated a dual model for the fractional minimax programming problem and proved duality theorems under generalized convex functions. Ahmad and Husain [5] extended this model to the nondifferentiable case and obtained duality relations involving (F,α,ρ,d)-pseudoconvex functions. Jayswal [11] studied duality theorems for two other duals of (P) under α-univex functions. Recently, Ahmad et al. [4] derived sufficient optimality conditions for (P) and established duality relations for its dual problem under B-(p,r)-invexity assumptions. The papers [2, 4-7, 11-15, 17] involve the study of first-order duality for minimax fractional programming problems.

The concept of second-order duality in nonlinear programming was first introduced by Mangasarian [19]. One significant practical advantage of a second-order dual over a first-order dual is that it may provide tighter bounds on the value of the objective function, because more parameters are involved. Hanson [20] pointed out another advantage of second-order duality by means of an example: if a feasible point of the primal is given and the first-order duality conditions do not apply (the first-order dual is infeasible), then second-order duality may still provide a lower bound on the value of the primal problem.

Recently, several researchers [3, 8-10, 16] have considered second-order duals for minimax fractional programming problems. Husain et al. [8] first formulated second-order dual models for a minimax fractional programming problem and established duality relations involving η-bonvex functions. This work was later generalized in [10] by introducing an additional vector r into the dual models, and by Sharma and Gulati [16], who proved the results under second-order generalized α-type I univex functions. The work cited in [3, 8, 10, 16] involves differentiable minimax fractional programming problems. Recently, Hu et al. [9] proved appropriate duality theorems for a second-order dual model of (P) under η-pseudobonvexity/η-quasibonvexity assumptions. In this paper, we formulate two types of second-order dual models for (P) and then derive weak, strong, and strict converse duality theorems under generalized α-univexity assumptions. Further, examples are given to illustrate the existence of second-order α-univex functions. Our study extends some known results in the literature [5, 6, 11, 12, 14].

2 Notations and preliminaries

For each $(x,y)\in\mathbb{R}^n\times\mathbb{R}^l$ and $M=\{1,2,\ldots,m\}$, we define
\[
Y(x)=\Bigl\{y\in Y:\ \frac{f(x,y)+(x^TBx)^{1/2}}{h(x,y)-(x^TDx)^{1/2}}=\sup_{z\in Y}\frac{f(x,z)+(x^TBx)^{1/2}}{h(x,z)-(x^TDx)^{1/2}}\Bigr\},\qquad
J(x)=\bigl\{j\in M: g_j(x)=0\bigr\},
\]
\[
K(x)=\Bigl\{(s,t,\tilde y)\in\mathbb{N}\times\mathbb{R}_+^{s}\times\mathbb{R}^{ls}:\ 1\le s\le n+1,\ t=(t_1,\ldots,t_s)\in\mathbb{R}_+^{s}\ \text{with}\ \sum_{i=1}^{s}t_i=1,\ \tilde y=(\tilde y_1,\ldots,\tilde y_s)\ \text{with}\ \tilde y_i\in Y(x),\ i=1,\ldots,s\Bigr\}.
\]

Definition 2.1 Let $\zeta: X\to\mathbb{R}$ ($X\subseteq\mathbb{R}^n$) be a twice differentiable function. Then $\zeta$ is said to be second-order $\alpha$-univex at $u\in X$ if there exist $\eta: X\times X\to\mathbb{R}^n$, $b_0: X\times X\to\mathbb{R}_+$, $\phi_0:\mathbb{R}\to\mathbb{R}$, and $\alpha: X\times X\to\mathbb{R}_+\setminus\{0\}$ such that for all $x\in X$ and $p\in\mathbb{R}^n$, we have

\[
b_0(x,u)\,\phi_0\Bigl[\zeta(x)-\zeta(u)+\tfrac12\,p^T\nabla^2\zeta(u)p\Bigr]\ \ge\ \alpha(x,u)\,\eta^T(x,u)\bigl[\nabla\zeta(u)+\nabla^2\zeta(u)p\bigr].
\]
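In particular, choosing $b_0\equiv 1$, $\phi_0(t)=t$, and $\alpha\equiv 1$ in the above inequality recovers the usual second-order invexity ($\eta$-bonvexity) condition, which is exactly the condition shown to fail in Example 2.2 below:
\[
\zeta(x)-\zeta(u)+\tfrac12\,p^T\nabla^2\zeta(u)p\ \ge\ \eta^T(x,u)\bigl[\nabla\zeta(u)+\nabla^2\zeta(u)p\bigr].
\]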

Example 2.1 Let $\zeta: X\to\mathbb{R}$ be defined as $\zeta(x)=e^{x}+\sin^{2}x+x^{2}$, where $X=(-1,\infty)$. Also, let $\phi_0(t)=t+18$, $b_0(x,u)=u+1$, $\alpha(x,u)=\frac{u^{2}+2}{x+1}$, and $\eta(x,u)=x+u$. The function $\zeta$ is second-order $\alpha$-univex at $u=1$, since

\[
b_0(x,u)\,\phi_0\Bigl[\zeta(x)-\zeta(u)+\tfrac12\,p^T\nabla^2\zeta(u)p\Bigr]-\alpha(x,u)\,\eta^T(x,u)\bigl[\nabla\zeta(u)+\nabla^2\zeta(u)p\bigr]
=2\bigl(e^{x}+\sin^{2}x+x^{2}\bigr)+1.521+3.886\,(p-1.5)^{2}\ \ge\ 0
\quad\text{for all } x\in X \text{ and } p\in\mathbb{R}.
\]
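The inequality above can be spot-checked numerically. The following Python sketch assumes the data of Example 2.1 as reconstructed here (in particular the forms of $X$, $\alpha$, and $\eta$ are an editorial reading of the source) and samples $x$ and $p$ to confirm that the left-hand side never falls below the right-hand side.

    import math, random

    # Data of Example 2.1 (as reconstructed above).
    zeta   = lambda x: math.exp(x) + math.sin(x) ** 2 + x ** 2
    dzeta  = lambda x: math.exp(x) + math.sin(2 * x) + 2 * x      # first derivative
    d2zeta = lambda x: math.exp(x) + 2 * math.cos(2 * x) + 2      # second derivative

    phi0  = lambda t: t + 18
    b0    = lambda x, u: u + 1
    alpha = lambda x, u: (u ** 2 + 2) / (x + 1)
    eta   = lambda x, u: x + u

    u = 1.0
    for _ in range(100_000):
        x = random.uniform(-1 + 1e-6, 10.0)   # X = (-1, oo), truncated for sampling
        p = random.uniform(-10.0, 10.0)
        lhs = b0(x, u) * phi0(zeta(x) - zeta(u) + 0.5 * p * d2zeta(u) * p)
        rhs = alpha(x, u) * eta(x, u) * (dzeta(u) + d2zeta(u) * p)
        assert lhs >= rhs - 1e-9              # second-order alpha-univexity at u = 1
    print("inequality verified on all sampled (x, p)")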

However, a second-order α-univex function need not be invex. To see this, consider the following example.

Example 2.2 Let $\Omega: X=(0,\infty)\to\mathbb{R}$ be defined as $\Omega(x)=-x^{2}$. Let $\phi_0(t)=-t$, $b_0(x,u)=\frac{1}{u}$, $\alpha(x,u)=2u$, and $\eta(x,u)=\frac{1}{2u}$. Then we have

\[
b_0(x,u)\,\phi_0\Bigl[\Omega(x)-\Omega(u)+\tfrac12\,p^T\nabla^2\Omega(u)p\Bigr]-\alpha(x,u)\,\eta^T(x,u)\bigl[\nabla\Omega(u)+\nabla^2\Omega(u)p\bigr]
=\frac{1}{u}\bigl[x^{2}+(p+u)^{2}\bigr]\ \ge\ 0
\quad\text{for all } x,u\in X \text{ and } p\in\mathbb{R}.
\]

Hence, the function $\Omega$ is second-order $\alpha$-univex but not invex, since for $x=3$, $u=2$, and $p=1$ we obtain

\[
\Omega(x)-\Omega(u)+\tfrac12\,p^T\nabla^2\Omega(u)p-\eta^T(x,u)\bigl[\nabla\Omega(u)+\nabla^2\Omega(u)p\bigr]=-4.5<0.
\]
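Both claims of Example 2.2 can be checked numerically. The sketch below assumes the data as reconstructed above ($\Omega(x)=-x^{2}$, $\phi_0(t)=-t$, $b_0=1/u$, $\alpha=2u$, $\eta=1/(2u)$): it samples the α-univexity inequality and then evaluates the second-order invexity gap at $x=3$, $u=2$, $p=1$.

    import random

    # Data of Example 2.2 (as reconstructed above).
    omega   = lambda x: -x ** 2
    domega  = lambda x: -2.0 * x
    d2omega = lambda x: -2.0

    phi0  = lambda t: -t
    b0    = lambda x, u: 1.0 / u
    alpha = lambda x, u: 2.0 * u
    eta   = lambda x, u: 1.0 / (2.0 * u)

    # Second-order alpha-univexity: the gap equals (1/u) * (x**2 + (p + u)**2) >= 0.
    for _ in range(100_000):
        x, u, p = random.uniform(1e-3, 10), random.uniform(1e-3, 10), random.uniform(-10, 10)
        lhs = b0(x, u) * phi0(omega(x) - omega(u) + 0.5 * p * d2omega(u) * p)
        rhs = alpha(x, u) * eta(x, u) * (domega(u) + d2omega(u) * p)
        assert lhs >= rhs - 1e-9

    # The second-order invexity inequality fails at x = 3, u = 2, p = 1.
    x, u, p = 3.0, 2.0, 1.0
    gap = (omega(x) - omega(u) + 0.5 * p * d2omega(u) * p
           - eta(x, u) * (domega(u) + d2omega(u) * p))
    print(gap)    # -4.5 < 0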

Lemma 2.1 (Generalized Schwarz inequality)

Let $B$ be a positive semidefinite matrix of order $n$. Then, for all $x,w\in\mathbb{R}^n$,

\[
x^TBw\ \le\ (x^TBx)^{1/2}(w^TBw)^{1/2}.
\]

Equality holds if $Bx=\lambda Bw$ for some $\lambda\ge 0$.
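A quick numerical illustration of the lemma (a sketch using NumPy; $B$ is generated as $A^TA$ so that it is positive semidefinite, and the equality case $x=2w$ is one instance of $Bx=\lambda Bw$ with $\lambda\ge 0$):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    A = rng.standard_normal((n, n))
    B = A.T @ A                          # positive semidefinite by construction

    for _ in range(10_000):
        x, w = rng.standard_normal(n), rng.standard_normal(n)
        lhs = x @ B @ w
        rhs = np.sqrt(x @ B @ x) * np.sqrt(w @ B @ w)
        assert lhs <= rhs + 1e-9         # generalized Schwarz inequality

    # Equality case: Bx = lambda * Bw with lambda >= 0, e.g. x = 2 * w.
    w = rng.standard_normal(n)
    x = 2.0 * w
    print(np.isclose(x @ B @ w, np.sqrt(x @ B @ x) * np.sqrt(w @ B @ w)))   # True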

The following Theorem 2.1 ([13], Theorem 3.1) will be required to prove the strong duality theorem.

Theorem 2.1 (Necessary condition)

If $\bar x$ is an optimal solution of problem (P) satisfying $\bar x^TB\bar x>0$, $\bar x^TD\bar x>0$, and $\nabla g_j(\bar x)$, $j\in J(\bar x)$, are linearly independent, then there exist $(\bar s,\bar t,\bar{\tilde y})\in K(\bar x)$, $k_0\in\mathbb{R}_+$, $w,v\in\mathbb{R}^n$, and $\bar\mu\in\mathbb{R}_+^m$ such that

(2.1)
(2.2)
(2.3)
(2.4)
(2.5)

In the above theorem, both matrices $B$ and $D$ are positive semidefinite at $\bar x$. If either $\bar x^TB\bar x$ or $\bar x^TD\bar x$ is zero, then the functions involved in the objective of problem (P) are not differentiable. To derive necessary conditions in this situation, for $(\bar s,\bar t,\bar{\tilde y})\in K(\bar x)$, we define

\[
Z_{\tilde y}(\bar x)=\bigl\{z\in\mathbb{R}^n:\ z^T\nabla g_j(\bar x)\le 0,\ j\in J(\bar x),\ \text{and any one of the conditions (i)-(iii) holds}\bigr\}.
\]

If, in addition, we impose the condition $Z_{\tilde y}(\bar x)=\emptyset$, then the result of Theorem 2.1 still holds.

For the sake of convenience, let

\[
\psi_1(\cdot)=\xi_1(\cdot)+\sum_{j=1}^{m}\mu_j\bigl(g_j(\cdot)-g_j(z)\bigr)
\]
(2.6)

and

\[
\psi_2(\cdot)=\Bigl[\sum_{i=1}^{s}t_i\bigl(h(z,\tilde y_i)-z^TDv\bigr)\Bigr]\Bigl[\sum_{i=1}^{s}t_i\bigl(f(\cdot,\tilde y_i)+(\cdot)^TBw\bigr)+\sum_{j=1}^{m}\mu_jg_j(\cdot)\Bigr]
-\Bigl[\sum_{i=1}^{s}t_i\bigl(f(z,\tilde y_i)+z^TBw\bigr)+\sum_{j=1}^{m}\mu_jg_j(z)\Bigr]\Bigl[\sum_{i=1}^{s}t_i\bigl(h(\cdot,\tilde y_i)-(\cdot)^TDv\bigr)\Bigr],
\]

where

\[
\xi_1(\cdot)=\sum_{i=1}^{s}t_i\Bigl[\bigl(h(z,\tilde y_i)-z^TDv\bigr)\bigl(f(\cdot,\tilde y_i)+(\cdot)^TBw\bigr)-\bigl(f(z,\tilde y_i)+z^TBw\bigr)\bigl(h(\cdot,\tilde y_i)-(\cdot)^TDv\bigr)\Bigr].
\]
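Observe that, directly from these definitions, each of the above functions vanishes at the point $z$ itself; this elementary fact is used repeatedly in the proofs below (for instance in (3.7) and (4.5)):
\[
\xi_1(z)=\sum_{i=1}^{s}t_i\Bigl[\bigl(h(z,\tilde y_i)-z^TDv\bigr)\bigl(f(z,\tilde y_i)+z^TBw\bigr)-\bigl(f(z,\tilde y_i)+z^TBw\bigr)\bigl(h(z,\tilde y_i)-z^TDv\bigr)\Bigr]=0,
\qquad \psi_1(z)=\xi_1(z)=0,\qquad \psi_2(z)=0.
\]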

3 Model I

In this section, we consider the following second-order dual problem for (P):

\[
\max_{(s,t,\tilde y)\in K(z)}\ \sup_{(z,\mu,w,v,p)\in H_1(s,t,\tilde y)}\ F(z),
\]
(DM1)

where $F(z)=\sup_{y\in Y}\dfrac{f(z,y)+(z^TBz)^{1/2}}{h(z,y)-(z^TDz)^{1/2}}$ and $H_1(s,t,\tilde y)$ denotes the set of all $(z,\mu,w,v,p)\in\mathbb{R}^n\times\mathbb{R}_+^m\times\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}^n$ satisfying

(3.1)
(3.2)
(3.3)

If the set $H_1(s,t,\tilde y)$ is empty, we define the supremum of $F(z)$ over $H_1(s,t,\tilde y)$ to be $-\infty$.
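To make the dual objective concrete, the following Python sketch evaluates $F(z)$ for a hypothetical one-dimensional instance (all data here are illustrative assumptions, not taken from the paper): $n=l=1$, $Y=[0,1]$, $f(z,y)=z^{2}+y$, $h(z,y)=2+y$, $B=D=[1]$; the supremum over the compact set $Y$ is approximated on a grid.

    import numpy as np

    # Hypothetical toy data (for illustration only): n = l = 1, Y = [0, 1],
    # f(z, y) = z**2 + y, h(z, y) = 2 + y, B = D = [[1.0]].
    Y = np.linspace(0.0, 1.0, 1001)        # grid over the compact set Y

    def F(z):
        """Approximate sup over y in Y of (f(z,y) + sqrt(z'Bz)) / (h(z,y) - sqrt(z'Dz))."""
        num = z ** 2 + Y + abs(z)          # f(z, y) + (z^T B z)^{1/2}
        den = 2.0 + Y - abs(z)             # h(z, y) - (z^T D z)^{1/2}, positive for |z| < 2
        return float(np.max(num / den))

    print(F(0.5))                          # dual objective value at z = 0.5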

Remark 3.1 If p=0, then using (3.3), the above dual model reduces to the problems studied in [6, 11, 12]. Further, if B and D are zero matrices of order n, then (DM1) becomes the dual model considered in [14].

Next, we establish duality relations between primal (P) and dual (DM1).

Theorem 3.1 (Weak duality)

Let $x$ and $(z,\mu,w,v,s,t,\tilde y,p)$ be feasible solutions of (P) and (DM1), respectively. Assume that

  1. (i)

    $\psi_1(\cdot)$ is second-order $\alpha$-univex at $z$,

  2. (ii)

    $\phi_0(a)\ge 0\ \Rightarrow\ a\ge 0$, and $b_0(x,z)>0$.

Then

\[
\sup_{\tilde y\in Y}\frac{f(x,\tilde y)+(x^TBx)^{1/2}}{h(x,\tilde y)-(x^TDx)^{1/2}}\ \ge\ F(z).
\]

Proof Suppose, contrary to the result, that

\[
\sup_{\tilde y\in Y}\frac{f(x,\tilde y)+(x^TBx)^{1/2}}{h(x,\tilde y)-(x^TDx)^{1/2}}\ <\ F(z).
\]
(3.4)

Since $\tilde y_i\in Y(z)$, $i=1,2,\ldots,s$, we have

\[
F(z)=\frac{f(z,\tilde y_i)+(z^TBz)^{1/2}}{h(z,\tilde y_i)-(z^TDz)^{1/2}}.
\]
(3.5)

From (3.4) and (3.5), for $i=1,2,\ldots,s$, we get

\[
\frac{f(x,\tilde y_i)+(x^TBx)^{1/2}}{h(x,\tilde y_i)-(x^TDx)^{1/2}}\ \le\ \sup_{\tilde y\in Y}\frac{f(x,\tilde y)+(x^TBx)^{1/2}}{h(x,\tilde y)-(x^TDx)^{1/2}}\ <\ \frac{f(z,\tilde y_i)+(z^TBz)^{1/2}}{h(z,\tilde y_i)-(z^TDz)^{1/2}}.
\]

Since $t_i\ge 0$, $i=1,2,\ldots,s$, $t\neq 0$, and $\tilde y_i\in Y(z)$, this further yields

(3.6)

Now,

\[
\begin{aligned}
\xi_1(x)&=\sum_{i=1}^{s}t_i\Bigl[\bigl(h(z,\tilde y_i)-z^TDv\bigr)\bigl(f(x,\tilde y_i)+x^TBw\bigr)-\bigl(f(z,\tilde y_i)+z^TBw\bigr)\bigl(h(x,\tilde y_i)-x^TDv\bigr)\Bigr]\\
&\le\sum_{i=1}^{s}t_i\Bigl[\bigl(h(z,\tilde y_i)-(z^TDz)^{1/2}\bigr)\bigl(f(x,\tilde y_i)+(x^TBx)^{1/2}\bigr)-\bigl(f(z,\tilde y_i)+(z^TBz)^{1/2}\bigr)\bigl(h(x,\tilde y_i)-(x^TDx)^{1/2}\bigr)\Bigr]\quad(\text{using Lemma 2.1 and (3.3)})\\
&<0\quad(\text{from (3.6)}).
\end{aligned}
\]

Therefore,

\[
\xi_1(x)<0=\xi_1(z).
\]
(3.7)

By hypothesis (i), we have

\[
b_0(x,z)\,\phi_0\Bigl[\psi_1(x)-\psi_1(z)+\tfrac12\,p^T\nabla^2\psi_1(z)p\Bigr]\ \ge\ \alpha(x,z)\,\eta^T(x,z)\bigl[\nabla\psi_1(z)+\nabla^2\psi_1(z)p\bigr].
\]

From (3.1), it follows that

\[
b_0(x,z)\,\phi_0\Bigl[\psi_1(x)-\psi_1(z)+\tfrac12\,p^T\nabla^2\psi_1(z)p\Bigr]\ \ge\ 0,
\]

which using hypothesis (ii) yields

\[
\psi_1(x)-\psi_1(z)+\tfrac12\,p^T\nabla^2\psi_1(z)p\ \ge\ 0.
\]

This, together with (2.6), (3.2), and the feasibility of $x$, implies

\[
\xi_1(x)\ \ge\ -\sum_{j=1}^{m}\mu_jg_j(x)\ \ge\ 0=\xi_1(z).
\]

This contradicts (3.7), hence the result. □

Theorem 3.2 (Strong duality)

Let $\bar x$ be an optimal solution for (P) and let $\nabla g_j(\bar x)$, $j\in J(\bar x)$, be linearly independent. Then there exist $(\bar s,\bar t,\bar{\tilde y})\in K(\bar x)$ and $(\bar x,\bar\mu,\bar w,\bar v,\bar p=0)\in H_1(\bar s,\bar t,\bar{\tilde y})$ such that $(\bar x,\bar\mu,\bar w,\bar v,\bar s,\bar t,\bar{\tilde y},\bar p=0)$ is a feasible solution of (DM1) and the two objectives have the same value. If, in addition, the assumptions of Theorem 3.1 hold for all feasible solutions $(x,\mu,w,v,s,t,\tilde y,p)$ of (DM1), then $(\bar x,\bar\mu,\bar w,\bar v,\bar s,\bar t,\bar{\tilde y},\bar p=0)$ is an optimal solution of (DM1).

Proof Since $\bar x$ is an optimal solution of (P) and $\nabla g_j(\bar x)$, $j\in J(\bar x)$, are linearly independent, by Theorem 2.1 there exist $(\bar s,\bar t,\bar{\tilde y})\in K(\bar x)$ and $(\bar x,\bar\mu,\bar w,\bar v,\bar p=0)\in H_1(\bar s,\bar t,\bar{\tilde y})$ such that $(\bar x,\bar\mu,\bar w,\bar v,\bar s,\bar t,\bar{\tilde y},\bar p=0)$ is a feasible solution of (DM1) and the two objectives have the same value. The optimality of $(\bar x,\bar\mu,\bar w,\bar v,\bar s,\bar t,\bar{\tilde y},\bar p=0)$ for (DM1) then follows from Theorem 3.1. □

Theorem 3.3 (Strict converse duality)

Let $\bar x$ be an optimal solution to (P) and $(\bar z,\bar\mu,\bar w,\bar v,\bar s,\bar t,\bar{\tilde y},\bar p)$ be an optimal solution to (DM1). Assume that

  1. (i)

    $\psi_1(\cdot)$ is strictly second-order $\alpha$-univex at $\bar z$,

  2. (ii)

    $\{\nabla g_j(\bar x),\ j\in J(\bar x)\}$ are linearly independent,

  3. (iii)

    $\phi_0(a)>0\ \Rightarrow\ a>0$, and $b_0(\bar x,\bar z)>0$.

Then $\bar z=\bar x$.

Proof By the strict second-order $\alpha$-univexity of $\psi_1(\cdot)$ at $\bar z$, we get
\[
b_0(\bar x,\bar z)\,\phi_0\Bigl[\psi_1(\bar x)-\psi_1(\bar z)+\tfrac12\,\bar p^T\nabla^2\psi_1(\bar z)\bar p\Bigr]\ >\ \alpha(\bar x,\bar z)\,\eta^T(\bar x,\bar z)\bigl[\nabla\psi_1(\bar z)+\nabla^2\psi_1(\bar z)\bar p\bigr],
\]

which, in view of (3.1) and hypothesis (iii), gives

\[
\psi_1(\bar x)-\psi_1(\bar z)+\tfrac12\,\bar p^T\nabla^2\psi_1(\bar z)\bar p\ >\ 0.
\]

Using (2.6), (3.2), and the feasibility of $\bar x$ in the above, we obtain

\[
\xi_1(\bar x)\ >\ 0=\xi_1(\bar z).
\]
(3.8)

Now, we assume that $\bar z\neq\bar x$ and reach a contradiction. Since $\bar x$ and $(\bar z,\bar\mu,\bar w,\bar v,\bar s,\bar t,\bar{\tilde y},\bar p)$ are optimal solutions to (P) and (DM1), respectively, and $\{\nabla g_j(\bar x),\ j\in J(\bar x)\}$ are linearly independent, by Theorem 3.2 we get

\[
\sup_{\tilde y\in Y}\frac{f(\bar x,\tilde y)+(\bar x^TB\bar x)^{1/2}}{h(\bar x,\tilde y)-(\bar x^TD\bar x)^{1/2}}=F(\bar z).
\]
(3.9)

Since $\tilde y_i\in Y(\bar z)$, $i=1,2,\ldots,\bar s$, we have

\[
F(\bar z)=\frac{f(\bar z,\tilde y_i)+(\bar z^TB\bar z)^{1/2}}{h(\bar z,\tilde y_i)-(\bar z^TD\bar z)^{1/2}}.
\]
(3.10)

By (3.9) and (3.10), we get

\[
\bigl(h(\bar z,\tilde y_i)-(\bar z^TD\bar z)^{1/2}\bigr)\bigl(f(\bar x,\tilde y_i)+(\bar x^TB\bar x)^{1/2}\bigr)-\bigl(f(\bar z,\tilde y_i)+(\bar z^TB\bar z)^{1/2}\bigr)\bigl(h(\bar x,\tilde y_i)-(\bar x^TD\bar x)^{1/2}\bigr)\ \le\ 0,
\]

for all $i=1,2,\ldots,\bar s$ and $\tilde y_i\in Y$. From $\tilde y_i\in Y(\bar z)\subseteq Y$ and $\bar t\in\mathbb{R}_+^{\bar s}$ with $\sum_{i=1}^{\bar s}\bar t_i=1$, we obtain

(3.11)

From Lemma 2.1, (3.3), and (3.11), we have

\[
\begin{aligned}
\xi_1(\bar x)&=\sum_{i=1}^{\bar s}\bar t_i\Bigl[\bigl(h(\bar z,\tilde y_i)-\bar z^TD\bar v\bigr)\bigl(f(\bar x,\tilde y_i)+\bar x^TB\bar w\bigr)-\bigl(f(\bar z,\tilde y_i)+\bar z^TB\bar w\bigr)\bigl(h(\bar x,\tilde y_i)-\bar x^TD\bar v\bigr)\Bigr]\\
&\le\sum_{i=1}^{\bar s}\bar t_i\Bigl[\bigl(h(\bar z,\tilde y_i)-(\bar z^TD\bar z)^{1/2}\bigr)\bigl(f(\bar x,\tilde y_i)+(\bar x^TB\bar x)^{1/2}\bigr)-\bigl(f(\bar z,\tilde y_i)+(\bar z^TB\bar z)^{1/2}\bigr)\bigl(h(\bar x,\tilde y_i)-(\bar x^TD\bar x)^{1/2}\bigr)\Bigr]\\
&\le 0=\xi_1(\bar z),
\end{aligned}
\]

which contradicts (3.8), hence the result. □

4 Model II

In this section, we consider another dual problem to (P):

\[
\max_{(s,t,\tilde y)\in K(z)}\ \sup_{(z,\mu,w,v,p)\in H_2(s,t,\tilde y)}\ \frac{\sum_{i=1}^{s}t_i\bigl(f(z,\tilde y_i)+(z^TBz)^{1/2}\bigr)+\sum_{j=1}^{m}\mu_jg_j(z)}{\sum_{i=1}^{s}t_i\bigl(h(z,\tilde y_i)-(z^TDz)^{1/2}\bigr)},
\]
(DM2)

where $H_2(s,t,\tilde y)$ denotes the set of all $(z,\mu,w,v,p)\in\mathbb{R}^n\times\mathbb{R}_+^m\times\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}^n$ satisfying

(4.1)
(4.2)
(4.3)

If the set $H_2(s,t,\tilde y)$ is empty, we define the supremum in (DM2) over $H_2(s,t,\tilde y)$ to be $-\infty$.
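Analogously, the (DM2) objective at a given dual point can be evaluated directly. The sketch below reuses the hypothetical data from the sketch following (DM1) and adds a single illustrative constraint $g_1(z)=z-1$ (again an assumption made only for this example).

    import numpy as np

    # Hypothetical data as in the sketch after (DM1): f(z, y) = z**2 + y,
    # h(z, y) = 2 + y, B = D = [[1.0]], and one constraint g_1(z) = z - 1.
    def dm2_objective(z, mu, t, y_tilde):
        """Objective of (DM2) at a dual point with s = len(t) and m = 1."""
        num = np.dot(t, (z ** 2 + y_tilde) + abs(z)) + mu[0] * (z - 1.0)
        den = np.dot(t, (2.0 + y_tilde) - abs(z))
        return num / den

    print(dm2_objective(z=0.5, mu=[0.2],
                        t=np.array([0.4, 0.6]), y_tilde=np.array([0.3, 1.0])))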

Remark 4.1 If p=0, then using (4.3), the above dual model becomes the dual model considered in [5, 11, 12]. In addition, if B and D are zero matrices of order n, then (DM2) reduces to the problem studied in [14].

Now, we obtain the following appropriate duality theorems between (P) and (DM2).

Theorem 4.1 (Weak duality)

Let $x$ and $(z,\mu,w,v,s,t,\tilde y,p)$ be feasible solutions of (P) and (DM2), respectively. Suppose that the following conditions are satisfied:

  1. (i)

    $\psi_2(\cdot)$ is second-order $\alpha$-univex at $z$,

  2. (ii)

    $\phi_0(a)\ge 0\ \Rightarrow\ a\ge 0$, and $b_0(x,z)>0$.

Then

\[
\sup_{\tilde y\in Y}\frac{f(x,\tilde y)+(x^TBx)^{1/2}}{h(x,\tilde y)-(x^TDx)^{1/2}}\ \ge\ \frac{\sum_{i=1}^{s}t_i\bigl(f(z,\tilde y_i)+(z^TBz)^{1/2}\bigr)+\sum_{j=1}^{m}\mu_jg_j(z)}{\sum_{i=1}^{s}t_i\bigl(h(z,\tilde y_i)-(z^TDz)^{1/2}\bigr)}.
\]

Proof Suppose, contrary to the result, that

\[
\sup_{\tilde y\in Y}\frac{f(x,\tilde y)+(x^TBx)^{1/2}}{h(x,\tilde y)-(x^TDx)^{1/2}}\ <\ \frac{\sum_{i=1}^{s}t_i\bigl(f(z,\tilde y_i)+(z^TBz)^{1/2}\bigr)+\sum_{j=1}^{m}\mu_jg_j(z)}{\sum_{i=1}^{s}t_i\bigl(h(z,\tilde y_i)-(z^TDz)^{1/2}\bigr)}
\]

or

Using $t_i\ge 0$, $i=1,2,\ldots,s$, and (4.3) in the above, we have

(4.4)

Now,

\[
\begin{aligned}
\psi_2(x)&=\Bigl[\sum_{i=1}^{s}t_i\bigl(f(x,\tilde y_i)+x^TBw\bigr)+\sum_{j=1}^{m}\mu_jg_j(x)\Bigr]\Bigl[\sum_{i=1}^{s}t_i\bigl(h(z,\tilde y_i)-z^TDv\bigr)\Bigr]\\
&\quad-\Bigl[\sum_{i=1}^{s}t_i\bigl(h(x,\tilde y_i)-x^TDv\bigr)\Bigr]\Bigl[\sum_{i=1}^{s}t_i\bigl(f(z,\tilde y_i)+z^TBw\bigr)+\sum_{j=1}^{m}\mu_jg_j(z)\Bigr]\\
&\le\Bigl[\sum_{i=1}^{s}t_i\bigl(f(x,\tilde y_i)+(x^TBx)^{1/2}\bigr)+\sum_{j=1}^{m}\mu_jg_j(x)\Bigr]\Bigl[\sum_{i=1}^{s}t_i\bigl(h(z,\tilde y_i)-z^TDv\bigr)\Bigr]\\
&\quad-\Bigl[\sum_{i=1}^{s}t_i\bigl(h(x,\tilde y_i)-(x^TDx)^{1/2}\bigr)\Bigr]\Bigl[\sum_{i=1}^{s}t_i\bigl(f(z,\tilde y_i)+z^TBw\bigr)+\sum_{j=1}^{m}\mu_jg_j(z)\Bigr]\quad(\text{from Lemma 2.1 and (4.3)})\\
&<\Bigl[\sum_{i=1}^{s}t_i\bigl(h(z,\tilde y_i)-z^TDv\bigr)\Bigr]\sum_{j=1}^{m}\mu_jg_j(x)\quad(\text{using (4.4)})\\
&\le 0\quad\Bigl(\text{since } \sum_{i=1}^{s}t_i\bigl(h(z,\tilde y_i)-z^TDv\bigr)>0 \text{ and } \sum_{j=1}^{m}\mu_jg_j(x)\le 0\Bigr).
\end{aligned}
\]

Hence,

\[
\psi_2(x)<0=\psi_2(z).
\]
(4.5)

Now, by the second-order $\alpha$-univexity of $\psi_2(\cdot)$ at $z$, we get

\[
b_0(x,z)\,\phi_0\Bigl[\psi_2(x)-\psi_2(z)+\tfrac12\,p^T\nabla^2\psi_2(z)p\Bigr]\ \ge\ \alpha(x,z)\,\eta^T(x,z)\bigl[\nabla\psi_2(z)+\nabla^2\psi_2(z)p\bigr],
\]

which, using (4.1) and hypothesis (ii), gives

\[
\psi_2(x)-\psi_2(z)+\tfrac12\,p^T\nabla^2\psi_2(z)p\ \ge\ 0.
\]

From (4.2), it follows that

\[
\psi_2(x)\ \ge\ \psi_2(z),
\]

which contradicts (4.5). This proves the theorem. □

In a similar way, we can prove the following theorems between (P) and (DM2).

Theorem 4.2 (Strong duality)

Let $\bar x$ be an optimal solution for (P) and let $\nabla g_j(\bar x)$, $j\in J(\bar x)$, be linearly independent. Then there exist $(\bar s,\bar t,\bar{\tilde y})\in K(\bar x)$ and $(\bar x,\bar\mu,\bar w,\bar v,\bar p=0)\in H_2(\bar s,\bar t,\bar{\tilde y})$ such that $(\bar x,\bar\mu,\bar w,\bar v,\bar s,\bar t,\bar{\tilde y},\bar p=0)$ is a feasible solution of (DM2) and the two objectives have the same value. If, in addition, the assumptions of weak duality (Theorem 4.1) hold for all feasible solutions $(x,\mu,w,v,s,t,\tilde y,p)$ of (DM2), then $(\bar x,\bar\mu,\bar w,\bar v,\bar s,\bar t,\bar{\tilde y},\bar p=0)$ is an optimal solution of (DM2).

Theorem 4.3 (Strict converse duality)

Let $\bar x$ and $(\bar z,\bar\mu,\bar w,\bar v,\bar s,\bar t,\bar{\tilde y},\bar p)$ be optimal solutions of (P) and (DM2), respectively. Assume that

  1. (i)

    $\psi_2(\cdot)$ is strictly second-order $\alpha$-univex at $\bar z$,

  2. (ii)

    $\{\nabla g_j(\bar x),\ j\in J(\bar x)\}$ are linearly independent,

  3. (iii)

    $\phi_0(a)>0\ \Rightarrow\ a>0$, and $b_0(\bar x,\bar z)>0$.

Then $\bar z=\bar x$.

5 Concluding remarks

In the present work, we have formulated two types of second-order dual models for a nondifferentiable minimax fractional programming problem and proved appropriate duality relations involving second-order α-univex functions. Further, examples are given to illustrate the existence of such functions. A natural question is whether the results can be further extended to higher-order nondifferentiable minimax fractional programming problems.