1 Introduction

The mathematical programming problem in which the objective function is a ratio of two numerical functions is called a fractional programming problem. Fractional programming is used in various fields of study. It is used most extensively in business and economics, particularly in situations involving a shortage of financial resources. Fractional programming problems arise in multiobjective programming [1, 2], game theory [3], and goal programming [4]. Problems of this type have been the subject of immense interest in the past few years.

The necessary and sufficient conditions for generalized minimax programming were first developed by Schmitendorf [5]. Tanimoto [6] applied these optimality conditions to define a dual problem and derived duality theorems. Bector and Bhatia [7] relaxed the convexity assumptions in the sufficient optimality condition in [5] and also employed the optimality conditions to construct several dual models involving pseudo-convex and quasi-convex functions, and derived weak and strong duality theorems. Yadav and Mukherjee [8] established optimality conditions to construct two dual problems and derived duality theorems for differentiable fractional minimax programming. Chandra and Kumar [9] pointed out that the formulation of Yadav and Mukherjee [8] has some omissions and inconsistencies; they constructed two modified dual problems and proved duality theorems for differentiable fractional minimax programming.

Lai et al. [10] established necessary and sufficient optimality conditions for nondifferentiable minimax fractional problems with generalized convexity and applied these optimality conditions to construct a parametric dual model, for which they also discussed duality theorems. Lai and Lee [11] obtained duality theorems for two parameter-free dual models of nondifferentiable minimax fractional problems under generalized convexity assumptions.

Convexity plays an important role in deriving sufficient conditions and duality for nonlinear programming problems. Hanson [12] introduced the concept of invexity and established Karush-Kuhn-Tucker type sufficient optimality conditions for nonlinear programming problems; these functions were named invex by Craven [13]. Generalized invexity and duality for multiobjective programming problems are discussed in [14] and, in separable Hilbert spaces, by Soleimani-damaneh [15]. Soleimani-damaneh [16] provides a family of linear infinite problems or linear semi-infinite problems to characterize the optimality of nonlinear optimization problems. Recently, Antczak [17] proved optimality conditions for a class of generalized fractional minimax programming problems involving B-(p, r)-invex functions and established duality theorems for various duality models.

In this article, motivated by Lai et al. [10], Lai and Lee [11], and Antczak [17], we discuss sufficient optimality conditions and duality theorems for a nondifferentiable minimax fractional programming problem with B-(p, r)-invexity. This article is organized as follows: In Section 2, we give some preliminaries. We present an example of a function which is B-(1, 1)-invex but not (p, r)-invex, and another which is (-1, 1)-invex but not convex. In Section 3, we establish the sufficient optimality conditions. Duality results are presented in Section 4.

2 Notations and preliminaries

Definition 1. Let f : X → R (where X ⊆ R^n) be a differentiable function, and let p, r be arbitrary real numbers. Then f is said to be (p, r)-invex (strictly (p, r)-invex) with respect to η at u ∈ X on X if there exists a function η : X × X → R^n such that, for all x ∈ X, the inequalities

\[
\begin{aligned}
&\frac{1}{r}e^{rf(x)} \ge \frac{1}{r}e^{rf(u)}\Bigl[1+\frac{r}{p}\,\nabla f(u)\bigl(e^{p\eta(x,u)}-\mathbf{1}\bigr)\Bigr] && (>\ \text{if } x\ne u) && \text{for } p\ne 0,\ r\ne 0,\\
&\frac{1}{r}e^{rf(x)} \ge \frac{1}{r}e^{rf(u)}\bigl[1+r\,\nabla f(u)\,\eta(x,u)\bigr] && (>\ \text{if } x\ne u) && \text{for } p=0,\ r\ne 0,\\
&f(x)-f(u) \ge \frac{1}{p}\,\nabla f(u)\bigl(e^{p\eta(x,u)}-\mathbf{1}\bigr) && (>\ \text{if } x\ne u) && \text{for } p\ne 0,\ r=0,\\
&f(x)-f(u) \ge \nabla f(u)\,\eta(x,u) && (>\ \text{if } x\ne u) && \text{for } p=0,\ r=0,
\end{aligned}
\]

hold.

Definition 2 [17]. A differentiable function f : X → R (where X ⊆ R^n) is said to be (strictly) B-(p, r)-invex with respect to η and b at u ∈ X on X if there exist a function η : X × X → R^n and a function b : X × X → R_+ such that, for all x ∈ X, the following inequalities

\[
\begin{aligned}
&\frac{1}{r}\,b(x,u)\bigl(e^{r(f(x)-f(u))}-1\bigr) \ge \frac{1}{p}\,\nabla f(u)\bigl(e^{p\eta(x,u)}-\mathbf{1}\bigr) && (>\ \text{if } x\ne u) && \text{for } p\ne 0,\ r\ne 0,\\
&\frac{1}{r}\,b(x,u)\bigl(e^{r(f(x)-f(u))}-1\bigr) \ge \nabla f(u)\,\eta(x,u) && (>\ \text{if } x\ne u) && \text{for } p=0,\ r\ne 0,\\
&b(x,u)\bigl(f(x)-f(u)\bigr) \ge \frac{1}{p}\,\nabla f(u)\bigl(e^{p\eta(x,u)}-\mathbf{1}\bigr) && (>\ \text{if } x\ne u) && \text{for } p\ne 0,\ r=0,\\
&b(x,u)\bigl(f(x)-f(u)\bigr) \ge \nabla f(u)\,\eta(x,u) && (>\ \text{if } x\ne u) && \text{for } p=0,\ r=0,
\end{aligned}
\]

hold. f is said to be (strictly) B-(p, r)-invex with respect to η and b on X if it is B-(p, r)-invex with respect to the same η and b at each u ∈ X on X.

Remark 1 [17]. It should be pointed out that the exponentials appearing on the right-hand sides of the inequalities above are understood to be taken componentwise, and 1 = (1, 1, ..., 1) ∈ R^n.
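Because the definition above is a pointwise inequality over all pairs (x, u), it can be convenient to test a candidate triple (f, η, b) numerically before attempting an analytic verification. The following Python sketch is not part of the original paper and all names in it are illustrative; it evaluates the defining inequality of Definition 2 (case p ≠ 0, r ≠ 0) on a finite grid of scalar points. The trivial demo data f(x) = x, η(x, u) = x − u, b ≡ 1 satisfy the inequality with equality.

```python
import numpy as np

def b_pr_invex_on_grid(f, grad_f, eta, b, grid, p, r):
    """Evaluate the B-(p, r)-invexity inequality (case p != 0, r != 0)
    at every pair (x, u) from a finite grid of scalar points and return
    the worst gap lhs - rhs (a negative value means a violation)."""
    worst = np.inf
    for u in grid:
        for x in grid:
            lhs = (1.0 / r) * b(x, u) * (np.exp(r * (f(x) - f(u))) - 1.0)
            rhs = (1.0 / p) * grad_f(u) * (np.exp(p * eta(x, u)) - 1.0)
            worst = min(worst, lhs - rhs)
    return worst  # nonnegative on the grid is numerical evidence, not a proof

# Illustrative data: f(x) = x with eta(x, u) = x - u and b = 1 gives equality,
# so the worst gap is 0 and the inequality holds for p = r = 1.
f = lambda x: x
grad_f = lambda u: 1.0
eta = lambda x, u: x - u
b = lambda x, u: 1.0
print(b_pr_invex_on_grid(f, grad_f, eta, b, np.linspace(-1.0, 1.0, 21), p=1.0, r=1.0))
```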

Example 1. Let X = [8.75, 9.15] ⊂ R. Consider the function f : X → R defined by

\[ f(x) = \log\bigl(\sin^{2}x\bigr). \]

Let η : X × X → R be given by

\[ \eta(x,u) = \tfrac{1}{2}(1+u). \]

To prove that f is (-1, 1)-invex, we have to show that

\[ \frac{1}{r}e^{rf(x)} - \frac{1}{r}e^{rf(u)}\Bigl[1+\frac{r}{p}\,\nabla f(u)\bigl(e^{p\eta(x,u)}-1\bigr)\Bigr] \ge 0, \quad \text{for } p=-1 \text{ and } r=1. \]

Now, consider

\[ \varphi = e^{f(x)} - e^{f(u)}\Bigl[1-\nabla f(u)\bigl(e^{-\eta(x,u)}-1\bigr)\Bigr] = \sin^{2}x + \sin 2u\,\bigl(e^{-\frac{1}{2}(1+u)}-1\bigr) - \sin^{2}u \ge 0 \quad \forall\, x,u\in X, \]

as can be seen from Figure 1.

Figure 1. \( \varphi = \sin^{2}x + \sin 2u\,\bigl(e^{-\frac{1}{2}(1+u)}-1\bigr) - \sin^{2}u \).

Hence, f is (-1, 1)-invex.

Further, for x = 8.8 and u = 9.1, we have

\[ \vartheta = f(x) - f(u) - (x-u)^{T}\nabla f(u) = 2\log\frac{\sin x}{\sin u} - (x-u)\,\frac{\sin 2u}{\sin^{2}u} = -0.570057225 < 0. \]

Thus, f is not a convex function on X.
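Both claims of Example 1 are easy to reproduce numerically. The short sketch below is added here for illustration only (it is not part of the original text): it evaluates φ on a grid over X = [8.75, 9.15] and computes ϑ at x = 8.8, u = 9.1, using f(x) = log(sin² x) and f′(u) = sin 2u / sin² u.

```python
import numpy as np

# Grid over X = [8.75, 9.15]
pts = np.linspace(8.75, 9.15, 201)
X, U = np.meshgrid(pts, pts)

# phi from Example 1: sin^2 x + sin(2u) * (e^{-(1+u)/2} - 1) - sin^2 u
phi = np.sin(X)**2 + np.sin(2*U) * (np.exp(-0.5*(1 + U)) - 1) - np.sin(U)**2
print("min phi on the grid:", phi.min())     # positive, consistent with Figure 1

# theta = f(x) - f(u) - (x - u) f'(u) at x = 8.8, u = 9.1, where f(x) = log(sin^2 x)
x, u = 8.8, 9.1
theta = 2*np.log(np.sin(x)/np.sin(u)) - (x - u)*np.sin(2*u)/np.sin(u)**2
print("theta:", theta)                       # about -0.570, so f is not convex on X
```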

Example 2. Let X = [0.25, 0.45] ⊂ R. Consider the function f : X → R defined by

\[ f(x) = -x^{2} + \log(8x). \]

Let η : X × X → R and b : X × X → R_+ be given by

\[ \eta(x,u) = \log\bigl(1+2u^{2}\bigr) \]

and

\[ b(x,u) = \frac{4}{\sin^{2}x+\sin^{2}u}, \]

respectively.

The function f defined above is B-(1, 1)-invex as

\[ \phi = b(x,u)\bigl(e^{(f(x)-f(u))}-1\bigr) - \nabla f(u)\bigl(e^{\eta(x,u)}-1\bigr) = \frac{4}{\sin^{2}x+\sin^{2}u}\left(e^{(u^{2}-x^{2})}\,\frac{x}{u}-1\right) - \bigl(u-4u^{3}\bigr) \ge 0 \quad \forall\, x,u\in X, \]

as can be seen from Figure 2.

Figure 2. \( \phi = \frac{4}{\sin^{2}x+\sin^{2}u}\bigl(e^{(u^{2}-x^{2})}\frac{x}{u}-1\bigr) - (u-4u^{3}) \).

However, f is not (p, r)-invex for any p, r ∈ (-10^17, 10^17), since

ψ = 1 r e r f ( x ) - 1 r e r f ( u ) 1 + r p f ( u ) ( e p η ( x , u ) - 1 ) = 1 r e 1 . 4 6 1 2 9 6 1 7 6 × r - 1 r e 1 . 4 6 9 2 9 1 2 5 8 × r 1 + 0 . 4 5 × r p e 0 . 3 0 2 1 7 6 5 1 8 6 × p - 1

(for x = 0.4 and u = 0.42) is negative, as can be seen from Figure 3.

Figure 3. \( \psi = \frac{1}{r}e^{1.461296176\,r} - \frac{1}{r}e^{1.469291258\,r}\bigl[1+0.45\,\frac{r}{p}(e^{0.3021765186\,p}-1)\bigr] \).

Hence f is B-(1, 1)-invex but not (p, r)-invex.
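As a quick numerical complement to Figure 3 (this sketch is not part of the original paper), one can evaluate ψ, with the constants printed above for x = 0.4 and u = 0.42, over a sample of nonzero values of p and r and observe that it stays negative.

```python
import numpy as np

def psi(p, r):
    """psi(p, r) from the displayed expression, at x = 0.4 and u = 0.42."""
    return (np.exp(1.461296176 * r) / r
            - (np.exp(1.469291258 * r) / r)
              * (1 + 0.45 * (r / p) * (np.exp(0.3021765186 * p) - 1)))

# Sample nonzero p and r on a modest range; psi remains negative on the sample.
vals = [psi(p, r)
        for p in np.concatenate([np.linspace(-5, -0.1, 50), np.linspace(0.1, 5, 50)])
        for r in np.concatenate([np.linspace(-5, -0.1, 50), np.linspace(0.1, 5, 50)])]
print("max psi over the sample:", max(vals))
```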

In this article, we consider the following nondifferentiable minimax fractional programming problem:

(FP)

\[ \min_{x\in R^{n}}\ \sup_{y\in Y}\ \frac{l(x,y)+(x^{T}Dx)^{\frac{1}{2}}}{m(x,y)-(x^{T}Ex)^{\frac{1}{2}}} \quad \text{subject to } g(x)\le 0,\ x\in X, \]

where Y is a compact subset of R^m, l(·, ·) : R^n × R^m → R and m(·, ·) : R^n × R^m → R are C¹ functions on R^n × R^m, and g(·) : R^n → R^p is a C¹ function on R^n. D and E are n × n positive semidefinite matrices.

Let S = {x ∈ X : g(x) ≤ 0} denote the set of all feasible solutions of (FP).

Any point x ∈ S is called a feasible point of (FP). For each (x, y) ∈ R^n × R^m, we define

\[ \phi(x,y) = \frac{l(x,y)+(x^{T}Dx)^{\frac{1}{2}}}{m(x,y)-(x^{T}Ex)^{\frac{1}{2}}}, \]

such that for each (x, y) ∈ S × Y,

\[ l(x,y)+(x^{T}Dx)^{\frac{1}{2}} \ge 0 \quad\text{and}\quad m(x,y)-(x^{T}Ex)^{\frac{1}{2}} > 0. \]

For each x ∈ S, we define

\[ H(x) = \{h\in H : g_{h}(x)=0\}, \]

where

\[
\begin{aligned}
H &= \{1,2,\ldots,p\},\\
Y(x) &= \left\{ y\in Y : \frac{l(x,y)+(x^{T}Dx)^{\frac{1}{2}}}{m(x,y)-(x^{T}Ex)^{\frac{1}{2}}} = \sup_{z\in Y}\frac{l(x,z)+(x^{T}Dx)^{\frac{1}{2}}}{m(x,z)-(x^{T}Ex)^{\frac{1}{2}}} \right\},\\
K(x) &= \Bigl\{ (s,t,\bar{y})\in N\times R_{+}^{s}\times R^{ms} : 1\le s\le n+1,\ t=(t_{1},t_{2},\ldots,t_{s})\in R_{+}^{s}\ \text{with}\ \sum_{i=1}^{s}t_{i}=1,\\
&\qquad\quad \bar{y}=(\bar{y}_{1},\bar{y}_{2},\ldots,\bar{y}_{s})\ \text{with}\ \bar{y}_{i}\in Y(x)\ (i=1,2,\ldots,s) \Bigr\}.
\end{aligned}
\]

Since l and m are continuously differentiable and Y is compact in R^m, it follows that for each x* ∈ S, Y(x*) ≠ ∅, and for any ȳ_i ∈ Y(x*), we have a positive constant

\[ k = \phi(x^{*},\bar{y}_{i}) = \frac{l(x^{*},\bar{y}_{i})+(x^{*T}Dx^{*})^{\frac{1}{2}}}{m(x^{*},\bar{y}_{i})-(x^{*T}Ex^{*})^{\frac{1}{2}}}. \]
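To make the roles of φ(x, y), Y(x), and k concrete, the following sketch evaluates them for a small hypothetical instance of (FP); the functions l, m and the matrices D, E below are illustrative choices only and are not taken from the paper. The compact set Y is handled by discretization.

```python
import numpy as np

# Hypothetical data for a toy instance of (FP) with n = 2 and Y = [0, 1].
D = np.array([[2.0, 0.0], [0.0, 1.0]])          # positive semidefinite
E = np.array([[1.0, 0.0], [0.0, 0.5]])          # positive semidefinite
l = lambda x, y: 1.0 + y * x[0]**2 + (1.0 - y) * x[1]**2
m = lambda x, y: 3.0 + y + x[0]**2              # keeps the denominator positive here

def phi(x, y):
    """The ratio phi(x, y) defined above."""
    return (l(x, y) + np.sqrt(x @ D @ x)) / (m(x, y) - np.sqrt(x @ E @ x))

x = np.array([0.5, -0.3])
Y = np.linspace(0.0, 1.0, 101)                   # discretization of the compact set Y
vals = np.array([phi(x, y) for y in Y])
k = vals.max()                                   # approximates sup_{y in Y} phi(x, y)
Y_x = Y[np.isclose(vals, k)]                     # approximates the maximizing set Y(x)
print("k =", k, " Y(x) approx.", Y_x)
```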

2.1 Generalized Schwartz inequality

Let A be a positive-semidefinite matrix of order n. Then, for all x, w ∈ R^n,

\[ x^{T}Aw \le (x^{T}Ax)^{\frac{1}{2}}(w^{T}Aw)^{\frac{1}{2}}. \]
(1)

Equality holds if for some λ ≥ 0,

A x = λ A w .

Evidently, if \( (w^{T}Aw)^{\frac{1}{2}} \le 1 \), we have

\[ x^{T}Aw \le (x^{T}Ax)^{\frac{1}{2}}. \]
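A quick numerical illustration of inequality (1) and of its equality case follows; the snippet is illustrative only and uses a randomly generated positive semidefinite matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B.T @ B                                     # a random positive semidefinite matrix
x = rng.standard_normal(4)
w = rng.standard_normal(4)

lhs = x @ A @ w
rhs = np.sqrt(x @ A @ x) * np.sqrt(w @ A @ w)
print(lhs <= rhs + 1e-12)                       # generalized Schwartz inequality (1)

# Equality case: A x = lambda * A w for some lambda >= 0, e.g. x = 2 w.
x_eq = 2.0 * w
print(np.isclose(x_eq @ A @ w, np.sqrt(x_eq @ A @ x_eq) * np.sqrt(w @ A @ w)))
```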

If the functions l, g, and m in problem (FP) are continuously differentiable with respect to x ∈ R^n, then Lai et al. [10] derived the following necessary conditions for optimality of (FP).

Theorem 1 (Necessary conditions). If x* is a solution of (FP) satisfying x*^T Dx* > 0, x*^T Ex* > 0, and ∇g_h(x*), h ∈ H(x*), are linearly independent, then there exist (s, t*, ȳ) ∈ K(x*), k ∈ R_+, w, v ∈ R^n, and μ* ∈ R_+^p such that

\[ \sum_{i=1}^{s} t_{i}^{*}\bigl\{\nabla l(x^{*},\bar{y}_{i}) + Dw - k\bigl(\nabla m(x^{*},\bar{y}_{i}) - Ev\bigr)\bigr\} + \sum_{h=1}^{p}\mu_{h}^{*}\,\nabla g_{h}(x^{*}) = 0, \]
(2)
\[ l(x^{*},\bar{y}_{i}) + (x^{*T}Dx^{*})^{\frac{1}{2}} - k\bigl(m(x^{*},\bar{y}_{i}) - (x^{*T}Ex^{*})^{\frac{1}{2}}\bigr) = 0, \quad i=1,2,\ldots,s, \]
(3)
\[ \sum_{h=1}^{p}\mu_{h}^{*}\,g_{h}(x^{*}) = 0, \]
(4)
\[ t_{i}^{*}\ge 0\ (i=1,2,\ldots,s), \quad \sum_{i=1}^{s}t_{i}^{*}=1, \]
(5)
\[ w^{T}Dw\le 1,\quad v^{T}Ev\le 1,\quad (x^{*T}Dx^{*})^{\frac{1}{2}} = x^{*T}Dw,\quad (x^{*T}Ex^{*})^{\frac{1}{2}} = x^{*T}Ev. \]
(6)

Remark 2. All the theorems in this article will be proved only in the case p ≠ 0, r ≠ 0; the proofs in the other cases are easier, as follows from the form of the inequalities given in Definition 2. Moreover, without loss of generality, we shall assume that r > 0.

3 Sufficient conditions

Optimality conditions for such problems have been studied in the past few years under smoothness assumptions, together with convexity and generalized convexity. The intrinsic presence of nonsmoothness (the necessity to deal with nondifferentiable functions, sets with nonsmooth boundaries, and set-valued mappings) is one of the most characteristic features of modern variational analysis (see [18, 19]). Recently, nonsmooth optimization has been studied by several authors [20-23]. Optimality conditions for approximate solutions in multiobjective optimization problems have been studied by Gao et al. [24] and, for the nondifferentiable multiobjective case, by Kim et al. [25]. We now prove a sufficient optimality condition for (FP) under the assumptions of B-(p, r)-invexity.

Theorem 2 (Sufficient condition). Let x* be a feasible solution of (FP), and suppose there exist a positive integer s with 1 ≤ s ≤ n + 1, t* ∈ R_+^s, ȳ_i ∈ Y(x*) (i = 1, 2, ..., s), k ∈ R_+, w, v ∈ R^n, and μ* ∈ R_+^p satisfying relations (2)-(6). Assume that

(i) \( \sum_{i=1}^{s} t_{i}^{*}\bigl(l(\cdot,\bar{y}_{i}) + (\cdot)^{T}Dw - k(m(\cdot,\bar{y}_{i}) - (\cdot)^{T}Ev)\bigr) \) is B-(p, r)-invex at x* on S with respect to η and b satisfying b(x, x*) > 0 for all x ∈ S,

(ii) \( \sum_{h=1}^{p}\mu_{h}^{*}g_{h}(\cdot) \) is B_g-(p, r)-invex at x* on S with respect to the same function η and with respect to a function b_g, not necessarily equal to b.

Then x* is an optimal solution of (FP).

Proof. Suppose to the contrary that x* is not an optimal solution of (FP). Then there exists x̄ ∈ S such that

\[ \sup_{y\in Y}\frac{l(\bar{x},y)+(\bar{x}^{T}D\bar{x})^{\frac{1}{2}}}{m(\bar{x},y)-(\bar{x}^{T}E\bar{x})^{\frac{1}{2}}} < \sup_{y\in Y}\frac{l(x^{*},y)+(x^{*T}Dx^{*})^{\frac{1}{2}}}{m(x^{*},y)-(x^{*T}Ex^{*})^{\frac{1}{2}}}. \]

We note that

\[ \sup_{y\in Y}\frac{l(x^{*},y)+(x^{*T}Dx^{*})^{\frac{1}{2}}}{m(x^{*},y)-(x^{*T}Ex^{*})^{\frac{1}{2}}} = \frac{l(x^{*},\bar{y}_{i})+(x^{*T}Dx^{*})^{\frac{1}{2}}}{m(x^{*},\bar{y}_{i})-(x^{*T}Ex^{*})^{\frac{1}{2}}} = k, \]

for ȳ_i ∈ Y(x*), i = 1, 2, ..., s, and

\[ \frac{l(\bar{x},\bar{y}_{i})+(\bar{x}^{T}D\bar{x})^{\frac{1}{2}}}{m(\bar{x},\bar{y}_{i})-(\bar{x}^{T}E\bar{x})^{\frac{1}{2}}} \le \sup_{y\in Y}\frac{l(\bar{x},y)+(\bar{x}^{T}D\bar{x})^{\frac{1}{2}}}{m(\bar{x},y)-(\bar{x}^{T}E\bar{x})^{\frac{1}{2}}}. \]

Thus, we have

\[ \frac{l(\bar{x},\bar{y}_{i})+(\bar{x}^{T}D\bar{x})^{\frac{1}{2}}}{m(\bar{x},\bar{y}_{i})-(\bar{x}^{T}E\bar{x})^{\frac{1}{2}}} < k, \quad \text{for } i=1,2,\ldots,s. \]

It follows that

\[ l(\bar{x},\bar{y}_{i})+(\bar{x}^{T}D\bar{x})^{\frac{1}{2}} - k\bigl(m(\bar{x},\bar{y}_{i})-(\bar{x}^{T}E\bar{x})^{\frac{1}{2}}\bigr) < 0, \quad \text{for } i=1,2,\ldots,s. \]
(7)

From (1), (3), (5), (6) and (7), we obtain

\[
\begin{aligned}
\sum_{i=1}^{s} t_{i}^{*}\bigl\{l(\bar{x},\bar{y}_{i}) + \bar{x}^{T}Dw - k\bigl(m(\bar{x},\bar{y}_{i}) - \bar{x}^{T}Ev\bigr)\bigr\}
&\le \sum_{i=1}^{s} t_{i}^{*}\bigl\{l(\bar{x},\bar{y}_{i}) + (\bar{x}^{T}D\bar{x})^{\frac{1}{2}} - k\bigl(m(\bar{x},\bar{y}_{i}) - (\bar{x}^{T}E\bar{x})^{\frac{1}{2}}\bigr)\bigr\}\\
&< 0\\
&= \sum_{i=1}^{s} t_{i}^{*}\bigl\{l(x^{*},\bar{y}_{i}) + (x^{*T}Dx^{*})^{\frac{1}{2}} - k\bigl(m(x^{*},\bar{y}_{i}) - (x^{*T}Ex^{*})^{\frac{1}{2}}\bigr)\bigr\}\\
&= \sum_{i=1}^{s} t_{i}^{*}\bigl\{l(x^{*},\bar{y}_{i}) + x^{*T}Dw - k\bigl(m(x^{*},\bar{y}_{i}) - x^{*T}Ev\bigr)\bigr\}.
\end{aligned}
\]

It follows that

\[ \sum_{i=1}^{s} t_{i}^{*}\bigl\{l(\bar{x},\bar{y}_{i}) + \bar{x}^{T}Dw - k\bigl(m(\bar{x},\bar{y}_{i}) - \bar{x}^{T}Ev\bigr)\bigr\} < \sum_{i=1}^{s} t_{i}^{*}\bigl\{l(x^{*},\bar{y}_{i}) + x^{*T}Dw - k\bigl(m(x^{*},\bar{y}_{i}) - x^{*T}Ev\bigr)\bigr\}. \]
(8)

As \( \sum_{i=1}^{s} t_{i}^{*}\bigl(l(\cdot,\bar{y}_{i}) + (\cdot)^{T}Dw - k(m(\cdot,\bar{y}_{i}) - (\cdot)^{T}Ev)\bigr) \) is B-(p, r)-invex at x* on S with respect to η and b, we have

\[ \frac{1}{r}\,b(x,x^{*})\left(e^{\,r\left[\sum_{i=1}^{s} t_{i}^{*}\bigl(l(x,\bar{y}_{i})+x^{T}Dw-k(m(x,\bar{y}_{i})-x^{T}Ev)\bigr) - \sum_{i=1}^{s} t_{i}^{*}\bigl(l(x^{*},\bar{y}_{i})+x^{*T}Dw-k(m(x^{*},\bar{y}_{i})-x^{*T}Ev)\bigr)\right]} - 1\right) \ge \frac{1}{p}\sum_{i=1}^{s} t_{i}^{*}\bigl(\nabla l(x^{*},\bar{y}_{i})+Dw-k(\nabla m(x^{*},\bar{y}_{i})-Ev)\bigr)\bigl\{e^{p\eta(x,x^{*})}-1\bigr\} \]

holds for all x ∈ S, and so for x = x̄. Using (8) and b(x̄, x*) > 0 together with the inequality above, we get

\[ \frac{1}{p}\sum_{i=1}^{s} t_{i}^{*}\bigl(\nabla l(x^{*},\bar{y}_{i})+Dw-k(\nabla m(x^{*},\bar{y}_{i})-Ev)\bigr)\bigl\{e^{p\eta(\bar{x},x^{*})}-1\bigr\} < 0. \]
(9)

From the feasibility of x̄ together with μ_h* ≥ 0, h ∈ H, we have

\[ \sum_{h=1}^{p}\mu_{h}^{*}\,g_{h}(\bar{x}) \le 0. \]
(10)

By the B_g-(p, r)-invexity of \( \sum_{h=1}^{p}\mu_{h}^{*}g_{h}(\cdot) \) at x* on S with respect to the same function η and with respect to the function b_g, we have

\[ \frac{1}{r}\,b_{g}(\bar{x},x^{*})\left(e^{\,r\left[\sum_{h=1}^{p}\mu_{h}^{*}g_{h}(\bar{x}) - \sum_{h=1}^{p}\mu_{h}^{*}g_{h}(x^{*})\right]} - 1\right) \ge \frac{1}{p}\sum_{h=1}^{p}\mu_{h}^{*}\,\nabla g_{h}(x^{*})\bigl\{e^{p\eta(\bar{x},x^{*})}-1\bigr\}. \]

Since b_g(x, x*) ≥ 0 for all x ∈ S, by (4) and (10) we obtain

\[ \frac{1}{p}\sum_{h=1}^{p}\mu_{h}^{*}\,\nabla g_{h}(x^{*})\bigl\{e^{p\eta(\bar{x},x^{*})}-1\bigr\} \le 0. \]
(11)

By adding the inequalities (9) and (11), we have

\[ \frac{1}{p}\left[\sum_{i=1}^{s} t_{i}^{*}\bigl(\nabla l(x^{*},\bar{y}_{i})+Dw-k(\nabla m(x^{*},\bar{y}_{i})-Ev)\bigr) + \sum_{h=1}^{p}\mu_{h}^{*}\,\nabla g_{h}(x^{*})\right]\bigl\{e^{p\eta(\bar{x},x^{*})}-1\bigr\} < 0, \]

which contradicts (2). Hence the result.   □

4 Duality results

In this section, we consider the following dual to (FP):

\[ \text{(FD)}\qquad \max_{(s,t,\bar{y})\in K(a)}\ \sup_{(a,\mu,k,v,w)\in H_{1}(s,t,\bar{y})} k, \]

where H_1(s, t, ȳ) denotes the set of all (a, μ, k, v, w) ∈ R^n × R_+^p × R_+ × R^n × R^n satisfying

\[ \sum_{i=1}^{s} t_{i}\bigl\{\nabla l(a,\bar{y}_{i}) + Dw - k\bigl(\nabla m(a,\bar{y}_{i}) - Ev\bigr)\bigr\} + \sum_{h=1}^{p}\mu_{h}\,\nabla g_{h}(a) = 0, \]
(12)
\[ \sum_{i=1}^{s} t_{i}\bigl\{l(a,\bar{y}_{i}) + a^{T}Dw - k\bigl(m(a,\bar{y}_{i}) - a^{T}Ev\bigr)\bigr\} \ge 0, \]
(13)
\[ \sum_{h=1}^{p}\mu_{h}\,g_{h}(a) \ge 0, \]
(14)
\[ (s,t,\bar{y})\in K(a), \]
(15)
\[ w^{T}Dw \le 1, \quad v^{T}Ev \le 1. \]
(16)

If, for a triplet (s, t, ȳ) ∈ K(a), the set H_1(s, t, ȳ) = ∅, then we define the supremum over it to be -∞. For convenience, we let

\[ \psi_{1}(\cdot) = \sum_{i=1}^{s} t_{i}\bigl\{l(\cdot,\bar{y}_{i}) + (\cdot)^{T}Dw - k\bigl(m(\cdot,\bar{y}_{i}) - (\cdot)^{T}Ev\bigr)\bigr\}. \]

Let S_FD denote the set of all feasible solutions of problem (FD). Moreover, let S_1 denote

\[ S_{1} = \{a\in R^{n} : (a,\mu,k,v,w,s,t,\bar{y})\in S_{FD}\}. \]
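For readers who want to experiment with (FD), the sketch below checks the dual feasibility conditions (12)-(16) numerically for a candidate point. It is not taken from the paper: the problem data are supplied as Python callables, gradients are approximated by central differences, and condition (15) is only checked partially (nonnegativity of t and the sum of the t_i being 1), since membership of each ȳ_i in Y(a) depends on the particular problem.

```python
import numpy as np

def num_grad(f, x, h=1e-6):
    """Central-difference approximation of the gradient of a scalar function f at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def check_dual_feasibility(a, mu, k, v, w, t, ybar, l, m, g, D, E, tol=1e-6):
    """Numerically test conditions (12)-(16) of (FD) at a candidate point.
    l(x, y) and m(x, y) are scalar functions, g(x) returns the constraint vector."""
    a = np.asarray(a, dtype=float)
    # (12): stationarity condition.
    grad = sum(t_i * (num_grad(lambda x, y=y_i: l(x, y), a) + D @ w
                      - k * (num_grad(lambda x, y=y_i: m(x, y), a) - E @ v))
               for t_i, y_i in zip(t, ybar))
    grad = grad + sum(mu_j * num_grad(lambda x, j=j: g(x)[j], a)
                      for j, mu_j in enumerate(mu))
    c12 = np.allclose(grad, 0.0, atol=tol)
    # (13)
    c13 = sum(t_i * (l(a, y_i) + a @ D @ w - k * (m(a, y_i) - a @ E @ v))
              for t_i, y_i in zip(t, ybar)) >= -tol
    # (14)
    c14 = float(np.dot(mu, g(a))) >= -tol
    # (15), partial check: t in R_+^s with sum of the t_i equal to 1.
    c15 = np.all(np.asarray(t) >= -tol) and np.isclose(np.sum(t), 1.0)
    # (16)
    c16 = (w @ D @ w <= 1.0 + tol) and (v @ E @ v <= 1.0 + tol)
    return c12 and c13 and c14 and c15 and c16
```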

Now we derive the following weak, strong, and strict converse duality theorems.

Theorem 3 (Weak duality). Let x be a feasible solution of (FP) and (a, μ, k, v, w, s, t, ȳ) be a feasible solution of (FD). Assume that

(i) \( \sum_{i=1}^{s} t_{i}\bigl(l(\cdot,\bar{y}_{i}) + (\cdot)^{T}Dw - k(m(\cdot,\bar{y}_{i}) - (\cdot)^{T}Ev)\bigr) \) is B-(p, r)-invex at a on S ∪ S_1 with respect to η and b satisfying b(x, a) > 0,

(ii) \( \sum_{h=1}^{p}\mu_{h}g_{h}(\cdot) \) is B_g-(p, r)-invex at a on S ∪ S_1 with respect to the same function η and with respect to a function b_g, not necessarily equal to b.

Then,

\[ \sup_{y\in Y}\frac{l(x,y)+(x^{T}Dx)^{\frac{1}{2}}}{m(x,y)-(x^{T}Ex)^{\frac{1}{2}}} \ge k. \]
(17)

Proof. Suppose to the contrary that

\[ \sup_{y\in Y}\frac{l(x,y)+(x^{T}Dx)^{\frac{1}{2}}}{m(x,y)-(x^{T}Ex)^{\frac{1}{2}}} < k. \]

Then, we have

\[ l(x,\bar{y}_{i}) + (x^{T}Dx)^{\frac{1}{2}} - k\bigl(m(x,\bar{y}_{i}) - (x^{T}Ex)^{\frac{1}{2}}\bigr) < 0, \quad \text{for all } \bar{y}_{i}\in Y. \]

It follows from (5) that

\[ t_{i}\bigl\{l(x,\bar{y}_{i}) + (x^{T}Dx)^{\frac{1}{2}} - k\bigl(m(x,\bar{y}_{i}) - (x^{T}Ex)^{\frac{1}{2}}\bigr)\bigr\} \le 0, \quad i=1,2,\ldots,s, \]
(18)

with at least one strict inequality, since t = (t1, t2, ..., t s ) ≠ 0.

From (1), (13), (16) and (18), we have

\[
\begin{aligned}
\psi_{1}(x) &= \sum_{i=1}^{s} t_{i}\bigl\{l(x,\bar{y}_{i}) + x^{T}Dw - k\bigl(m(x,\bar{y}_{i}) - x^{T}Ev\bigr)\bigr\}\\
&\le \sum_{i=1}^{s} t_{i}\bigl\{l(x,\bar{y}_{i}) + (x^{T}Dx)^{\frac{1}{2}} - k\bigl(m(x,\bar{y}_{i}) - (x^{T}Ex)^{\frac{1}{2}}\bigr)\bigr\}\\
&< 0\\
&\le \sum_{i=1}^{s} t_{i}\bigl\{l(a,\bar{y}_{i}) + a^{T}Dw - k\bigl(m(a,\bar{y}_{i}) - a^{T}Ev\bigr)\bigr\} = \psi_{1}(a).
\end{aligned}
\]

Hence

\[ \psi_{1}(x) < \psi_{1}(a). \]
(19)

Since \( \sum_{i=1}^{s} t_{i}\bigl(l(\cdot,\bar{y}_{i}) + (\cdot)^{T}Dw - k(m(\cdot,\bar{y}_{i}) - (\cdot)^{T}Ev)\bigr) \) is B-(p, r)-invex at a on S ∪ S_1 with respect to η and b, we have

\[ \frac{1}{r}\,b(x,a)\left(e^{\,r\left[\sum_{i=1}^{s} t_{i}\bigl(l(x,\bar{y}_{i})+x^{T}Dw-k(m(x,\bar{y}_{i})-x^{T}Ev)\bigr) - \sum_{i=1}^{s} t_{i}\bigl(l(a,\bar{y}_{i})+a^{T}Dw-k(m(a,\bar{y}_{i})-a^{T}Ev)\bigr)\right]} - 1\right) \ge \frac{1}{p}\sum_{i=1}^{s} t_{i}\bigl(\nabla l(a,\bar{y}_{i})+Dw-k(\nabla m(a,\bar{y}_{i})-Ev)\bigr)\bigl\{e^{p\eta(x,a)}-1\bigr\}. \]

From (19) and b(x, a) > 0 together with the inequality above, we get

\[ \frac{1}{p}\sum_{i=1}^{s} t_{i}\bigl(\nabla l(a,\bar{y}_{i})+Dw-k(\nabla m(a,\bar{y}_{i})-Ev)\bigr)\bigl\{e^{p\eta(x,a)}-1\bigr\} < 0. \]
(20)

Using the feasibility of x together with μ_h ≥ 0, h ∈ H, we obtain

\[ \sum_{h=1}^{p}\mu_{h}\,g_{h}(x) \le 0. \]
(21)

From hypothesis (ii), we have

\[ \frac{1}{r}\,b_{g}(x,a)\left(e^{\,r\left[\sum_{h=1}^{p}\mu_{h}g_{h}(x) - \sum_{h=1}^{p}\mu_{h}g_{h}(a)\right]} - 1\right) \ge \frac{1}{p}\sum_{h=1}^{p}\mu_{h}\,\nabla g_{h}(a)\bigl\{e^{p\eta(x,a)}-1\bigr\}. \]

Since b_g(x, a) ≥ 0, by (14) and (21) we obtain

\[ \frac{1}{p}\sum_{h=1}^{p}\mu_{h}\,\nabla g_{h}(a)\bigl\{e^{p\eta(x,a)}-1\bigr\} \le 0. \]
(22)

Thus, by (20) and (22), we obtain the inequality

\[ \frac{1}{p}\left[\sum_{i=1}^{s} t_{i}\bigl(\nabla l(a,\bar{y}_{i})+Dw-k(\nabla m(a,\bar{y}_{i})-Ev)\bigr) + \sum_{h=1}^{p}\mu_{h}\,\nabla g_{h}(a)\right]\bigl\{e^{p\eta(x,a)}-1\bigr\} < 0, \]

which contradicts (12). Hence (17) holds.   □

Theorem 4 (Strong duality). Let x* be an optimal solution of (FP) and let ∇g_h(x*), h ∈ H(x*), be linearly independent. Then there exist (s̄, t̄, ȳ*) ∈ K(x*) and (x*, μ̄, k̄, v̄, w̄) ∈ H_1(s̄, t̄, ȳ*) such that (x*, μ̄, k̄, v̄, w̄, s̄, t̄, ȳ*) is a feasible solution of (FD). Further, if the hypotheses of the weak duality theorem are satisfied for all feasible solutions (a, μ, k, v, w, s, t, ȳ) of (FD), then (x*, μ̄, k̄, v̄, w̄, s̄, t̄, ȳ*) is an optimal solution of (FD), and the two objectives have the same optimal values.

Proof. Since x* is an optimal solution of (FP) and ∇g_h(x*), h ∈ H(x*), are linearly independent, by Theorem 1 there exist (s̄, t̄, ȳ*) ∈ K(x*) and (x*, μ̄, k̄, v̄, w̄) ∈ H_1(s̄, t̄, ȳ*) such that (x*, μ̄, k̄, v̄, w̄, s̄, t̄, ȳ*) is feasible for (FD), the problems (FP) and (FD) have the same objective values, and

\[ \bar{k} = \frac{l(x^{*},\bar{y}_{i}^{*})+(x^{*T}Dx^{*})^{\frac{1}{2}}}{m(x^{*},\bar{y}_{i}^{*})-(x^{*T}Ex^{*})^{\frac{1}{2}}}. \]

The optimality of this feasible solution for (FD) thus follows from Theorem 3.   □

Theorem 5 (Strict converse duality). Let x* and (ā, μ̄, k̄, v̄, w̄, s̄, t̄, ȳ*) be optimal solutions of (FP) and (FD), respectively, and let ∇g_h(x*), h ∈ H(x*), be linearly independent. Suppose that \( \sum_{i=1}^{s} t_{i}\bigl(l(\cdot,\bar{y}_{i}) + (\cdot)^{T}Dw - \bar{k}(m(\cdot,\bar{y}_{i}) - (\cdot)^{T}Ev)\bigr) \) is strictly B-(p, r)-invex at ā on S ∪ S_1 with respect to η and b satisfying b(x, ā) > 0 for all x ∈ S. Furthermore, assume that \( \sum_{h=1}^{p}\mu_{h}g_{h}(\cdot) \) is B_g-(p, r)-invex at ā on S ∪ S_1 with respect to the same function η and with respect to a function b_g, not necessarily equal to the function b. Then x* = ā; that is, ā is an optimal point of (FP) and

\[ \sup_{y\in Y}\frac{l(\bar{a},y)+(\bar{a}^{T}D\bar{a})^{\frac{1}{2}}}{m(\bar{a},y)-(\bar{a}^{T}E\bar{a})^{\frac{1}{2}}} = \bar{k}. \]

Proof. We shall assume that x* ≠ ā and reach a contradiction. From the strong duality theorem (Theorem 4), it follows that

\[ \sup_{y\in Y}\frac{l(x^{*},y)+(x^{*T}Dx^{*})^{\frac{1}{2}}}{m(x^{*},y)-(x^{*T}Ex^{*})^{\frac{1}{2}}} = \bar{k}. \]
(23)

By the feasibility of x* together with μ_h ≥ 0, h ∈ H, we obtain

\[ \sum_{h=1}^{p}\mu_{h}\,g_{h}(x^{*}) \le 0. \]
(24)

By assumption, \( \sum_{h=1}^{p}\mu_{h}g_{h}(\cdot) \) is B_g-(p, r)-invex at ā on S ∪ S_1 with respect to η and with respect to b_g. Then, by Definition 2, there exists a function b_g such that b_g(x, ā) ≥ 0 for all x ∈ S and ā ∈ S_1. Hence, by (14) and (24),

\[ \frac{1}{r}\,b_{g}(x^{*},\bar{a})\left(e^{\,r\left[\sum_{h=1}^{p}\mu_{h}g_{h}(x^{*}) - \sum_{h=1}^{p}\mu_{h}g_{h}(\bar{a})\right]} - 1\right) \le 0. \]

Then, from Definition 2, we get

\[ \frac{1}{p}\sum_{h=1}^{p}\mu_{h}\,\nabla g_{h}(\bar{a})\bigl\{e^{p\eta(x^{*},\bar{a})}-1\bigr\} \le 0. \]
(25)

Therefore, by (12) and (25), we obtain the inequality

\[ \frac{1}{p}\sum_{i=1}^{s} t_{i}\bigl(\nabla l(\bar{a},\bar{y}_{i})+Dw-\bar{k}(\nabla m(\bar{a},\bar{y}_{i})-Ev)\bigr)\bigl\{e^{p\eta(x^{*},\bar{a})}-1\bigr\} \ge 0. \]

Since \( \sum_{i=1}^{s} t_{i}\bigl(l(\cdot,\bar{y}_{i}) + (\cdot)^{T}Dw - \bar{k}(m(\cdot,\bar{y}_{i}) - (\cdot)^{T}Ev)\bigr) \) is strictly B-(p, r)-invex with respect to η and b at ā on S ∪ S_1, by the definition of strict B-(p, r)-invexity and the above inequality, it follows that

\[ \frac{1}{r}\,b(x^{*},\bar{a})\left(e^{\,r\left[\sum_{i=1}^{s} t_{i}\bigl(l(x^{*},\bar{y}_{i})+x^{*T}Dw-\bar{k}(m(x^{*},\bar{y}_{i})-x^{*T}Ev)\bigr) - \sum_{i=1}^{s} t_{i}\bigl(l(\bar{a},\bar{y}_{i})+\bar{a}^{T}Dw-\bar{k}(m(\bar{a},\bar{y}_{i})-\bar{a}^{T}Ev)\bigr)\right]} - 1\right) > 0. \]

From the hypothesis b ( x * , ā ) >0, and the above inequality, we get

\[ \sum_{i=1}^{s} t_{i}\bigl(l(x^{*},\bar{y}_{i})+x^{*T}Dw-\bar{k}(m(x^{*},\bar{y}_{i})-x^{*T}Ev)\bigr) - \sum_{i=1}^{s} t_{i}\bigl(l(\bar{a},\bar{y}_{i})+\bar{a}^{T}Dw-\bar{k}(m(\bar{a},\bar{y}_{i})-\bar{a}^{T}Ev)\bigr) > 0. \]

Therefore, by (13),

\[ \sum_{i=1}^{s} t_{i}\bigl(l(x^{*},\bar{y}_{i})+x^{*T}Dw-\bar{k}(m(x^{*},\bar{y}_{i})-x^{*T}Ev)\bigr) > 0. \]

Since t_i ≥ 0, i = 1, 2, ..., s, there exists an index i* such that

\[ l(x^{*},\bar{y}_{i^{*}})+x^{*T}Dw-\bar{k}\bigl(m(x^{*},\bar{y}_{i^{*}})-x^{*T}Ev\bigr) > 0. \]

Hence, using the generalized Schwartz inequality (1) together with (16), we obtain the following inequality

\[ \frac{l(x^{*},\bar{y}_{i^{*}})+(x^{*T}Dx^{*})^{\frac{1}{2}}}{m(x^{*},\bar{y}_{i^{*}})-(x^{*T}Ex^{*})^{\frac{1}{2}}} > \bar{k}, \]

which contradicts (23). Hence the results.   □

5 Concluding remarks

It is not clear whether the duality results for nondifferentiable minimax fractional programming with B-(p, r)-invexity can be further extended to the second-order case.