1 Introduction

Let $F:\mathbb{R}^n \to \mathbb{R}^n$ be a continuous mapping and let $C \subseteq \mathbb{R}^n$ be a nonempty, closed, and convex set. The inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Consider the problem of finding

$$x^* \in C \quad\text{such that}\quad F(x^*) = 0. \tag{1.1}$$

Let $S$ denote the solution set of (1.1). Throughout this paper, we assume that $S$ is nonempty and that $F$ has the property that

$$\langle F(y),\, y - x^* \rangle \ge 0 \quad\text{for all } y \in C \text{ and all } x^* \in S. \tag{1.2}$$

Property (1.2) holds if $F$ is monotone or, more generally, pseudomonotone on $C$ in the sense of Karamardian [1].

Nonlinear equations arise widely in applications. For example, many problems from chemical technology, economics, and communications can be transformed into nonlinear equations; see [2-5]. In recent years, many numerical methods have been proposed for problem (1.1) with a smooth mapping $F$. These methods include the Newton method, quasi-Newton methods, the Levenberg-Marquardt method, trust region methods, and their variants; see [6-14].

Recently, a hybrid method for solving problem (1.1), combining the Newton, proximal point, and projection methodologies, was proposed in [15]. The method possesses a very nice global convergence property when $F$ is monotone and continuous, and its locally superlinear convergence is proved under assumptions of differentiability and nonsingularity. However, the nonsingularity condition is quite strong. Relaxing this assumption, a modified version of the method with a different projection step was proposed in [16], where it was shown that, under a local error bound condition weaker than nonsingularity, the method converges superlinearly to a solution of problem (1.1). The numerical results reported in [16] show that the method is quite efficient. However, both [15] and [16] require the mapping $F$ to be monotone, which seems too stringent a requirement for ensuring the global convergence and locally superlinear convergence of the hybrid method.

To further relax the monotonicity assumption on $F$, in this paper we propose a new hybrid algorithm for problem (1.1) that covers the one in [16]. The global convergence of our method requires only that $F$ satisfy property (1.2), which is much weaker than monotonicity or, more generally, pseudomonotonicity. We also discuss the superlinear convergence of our method under mild conditions. Preliminary numerical experiments show that our method is efficient.

2 Preliminaries and algorithms

For a nonempty, closed, and convex set $\Omega \subseteq \mathbb{R}^n$ and a vector $x \in \mathbb{R}^n$, the projection of $x$ onto $\Omega$ is defined as

$$\Pi_{\Omega}(x) = \arg\min\bigl\{\|y - x\| \mid y \in \Omega\bigr\}.$$

We have the following property of the projection operator; see [17].

Lemma 2.1 Let $\Omega \subseteq \mathbb{R}^n$ be a closed convex set. Then it holds that

$$\bigl\|x - \Pi_{\Omega}(y)\bigr\|^2 \le \|x - y\|^2 - \bigl\|y - \Pi_{\Omega}(y)\bigr\|^2, \quad \forall x \in \Omega,\ \forall y \in \mathbb{R}^n.$$
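As a quick illustration of Lemma 2.1, the following MATLAB snippet checks the inequality for the box $\Omega = [0,1]^3$, where the projection reduces to componentwise clipping (a toy numerical check, not part of the algorithm):

```matlab
% Check Lemma 2.1 for Omega = [0,1]^3, where Pi_Omega is componentwise clipping.
y  = [-2; 0.5; 3];              % arbitrary point in R^3
x  = [0.2; 0.9; 1.0];           % arbitrary point in Omega
Py = min(max(y, 0), 1);         % Pi_Omega(y)
lhs = norm(x - Py)^2;
rhs = norm(x - y)^2 - norm(y - Py)^2;
fprintf('lhs = %.4f <= rhs = %.4f\n', lhs, rhs);  % prints 0.2000 <= 1.0000
```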

Algorithm 2.1 Choose $x^0 \in C$, parameters $\kappa_0 \in [0,1)$, $\lambda, \beta \in (0,1)$, $\gamma_1, \gamma_2 > 0$, $a, b \ge 0$ with $\max\{a, b\} > 0$, and set $k := 0$.

Step 1. Compute $F(x^k)$. If $F(x^k) = 0$, stop. Otherwise, let $\mu_k = \gamma_1\|F(x^k)\|^{1/2}$ and $\sigma_k = \min\{\kappa_0, \gamma_2\|F(x^k)\|^{1/2}\}$. Choose a positive semidefinite matrix $G_k \in \mathbb{R}^{n\times n}$ and compute $d^k \in \mathbb{R}^n$ such that

$$F(x^k) + (G_k + \mu_k I)\, d^k = r^k, \tag{2.1}$$

where

$$\|r^k\| \le \sigma_k \mu_k \|d^k\|. \tag{2.2}$$

Stop if $d^k = 0$. Otherwise, go to Step 2.

Step 2. Compute $y^k = x^k + t_k d^k$, where $t_k = \beta^{m_k}$ and $m_k$ is the smallest nonnegative integer $m$ satisfying

$$-\bigl\langle F(x^k + \beta^m d^k),\, d^k \bigr\rangle \ge \lambda(1 - \sigma_k)\mu_k\|d^k\|^2. \tag{2.3}$$

Step 3. Compute

$$x^{k+1} = \Pi_{C_k}\bigl(x^k - \alpha_k F(y^k)\bigr),$$

where $C_k := \{x \in C : h_k(x) \le 0\}$,

$$h_k(x) := \bigl\langle a F(x^k) + b F(y^k),\, x - y^k \bigr\rangle + a t_k \bigl\langle F(x^k), d^k \bigr\rangle, \tag{2.4}$$

and $\alpha_k := \dfrac{\langle F(y^k),\, x^k - y^k\rangle}{\|F(y^k)\|^2}$.

Let $k := k + 1$ and return to Step 1.
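To make the steps concrete, here is a minimal MATLAB sketch of one run of Algorithm 2.1. It assumes user-supplied handles `F` and `JF` (the Jacobian, used as $G_k$), takes $r^k = 0$ in (2.1), and describes $C$ as a polyhedron $\{x : Ax \le c\}$ so that the projection onto $C_k$ in Step 3 becomes a quadratic program solvable with `quadprog`. The function name, the polyhedral form of $C$, the iteration cap, and the values $\gamma_2 = 1$, $a = 10^{-15}$ are illustrative assumptions; the remaining parameters follow Section 5.

```matlab
function x = hybrid_algorithm(F, JF, A, c, x0)
% Sketch of Algorithm 2.1 with r^k = 0 and C = {x : A*x <= c}.
% Requires the Optimization Toolbox (quadprog) for the projection step.
kappa0 = 0; lambda = 0.96; beta = 0.7; gamma1 = 1; gamma2 = 1;
a = 1e-15; b = 1;
x = x0; n = numel(x0);
opts = optimset('Display', 'off');
for iter = 1:200                                  % iteration cap (illustrative)
    Fx = F(x); nFx = norm(Fx);
    if nFx <= 1e-6, break; end                    % stopping criterion
    mu    = gamma1 * sqrt(nFx);
    sigma = min(kappa0, gamma2 * sqrt(nFx));
    d = -(JF(x) + mu * eye(n)) \ Fx;              % (2.1) with r^k = 0
    if norm(d) == 0, break; end
    t = 1;                                        % Step 2: line search (2.3)
    while -F(x + t*d)' * d < lambda * (1 - sigma) * mu * norm(d)^2
        t = beta * t;
    end
    y = x + t*d; Fy = F(y);
    alpha = (Fy' * (x - y)) / norm(Fy)^2;         % Step 3
    xt = x - alpha * Fy;                          % point to be projected
    g  = a*Fx + b*Fy;                             % h_k(z) = g'*(z-y) + a*t*Fx'*d
    x  = quadprog(eye(n), -xt, [A; g'], [c; g'*y - a*t*(Fx'*d)], ...
                  [], [], [], [], [], opts);      % projection onto C_k
end
end
```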

Remark 2.1 When we take parameters $a = 0$, $b = 1$, and the search direction $d^k = \bar{x}^k - x^k$, our algorithm reduces to the one in [16]. In the step producing the next iterate, both our projection direction and our projection region also differ from those in [15].

Now we analyze the feasibility of Algorithm 2.1. It is obvious that a $d^k$ satisfying conditions (2.1) and (2.2) exists: taking $d^k = -(G_k + \mu_k I)^{-1} F(x^k)$ gives $r^k = 0$, so (2.1) and (2.2) hold. It remains to show the feasibility of (2.3).
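In passing, note that in MATLAB this particular choice of $d^k$ is a single left-division solve, the same solve used in the experiments of Section 5 (here `Fx`, `Gk`, and `muk` denote $F(x^k)$, $G_k$, and $\mu_k$ from Step 1):

```matlab
% d^k = -(G_k + mu_k*I)^{-1} F(x^k); then r^k = 0 and (2.2) holds trivially.
dk = -(Gk + muk * eye(length(Fx))) \ Fx;
```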

Lemma 2.2 For every nonnegative integer $k$, there exists a nonnegative integer $m_k$ satisfying (2.3).

Proof If $d^k = 0$, then it follows from (2.1) and (2.2) that $F(x^k) = 0$, which means that Algorithm 2.1 terminates with $x^k$ being a solution of problem (1.1).

Now assume that $d^k \ne 0$ for all $k$. By the definition of $r^k$, the Cauchy-Schwarz inequality, and the positive semidefiniteness of $G_k$, we have

$$\bigl\langle F(x^k), d^k \bigr\rangle = -(d^k)^T (G_k + \mu_k I)(d^k) + (d^k)^T r^k \le -\mu_k\|d^k\|^2 + \|r^k\|\|d^k\| \le -(1 - \sigma_k)\mu_k\|d^k\|^2. \tag{2.5}$$

Suppose that the conclusion of Lemma 2.2 does not hold. Then there exists a nonnegative integer $k_0$ such that (2.3) is not satisfied for any nonnegative integer $m$, i.e.,

$$-\bigl\langle F(x^{k_0} + \beta^m d^{k_0}),\, d^{k_0} \bigr\rangle < \lambda\mu_{k_0}(1 - \sigma_{k_0})\|d^{k_0}\|^2, \quad \forall m. \tag{2.6}$$

Letting $m \to \infty$ and using the continuity of $F$, we have

$$-\bigl\langle F(x^{k_0}),\, d^{k_0} \bigr\rangle \le \lambda\mu_{k_0}(1 - \sigma_{k_0})\|d^{k_0}\|^2.$$

This, together with (2.5), $d^{k_0} \ne 0$, and $\sigma_{k_0} \le \kappa_0 < 1$, implies that $\lambda \ge 1$, which contradicts $\lambda \in (0,1)$. This completes the proof. □

3 Convergence analysis

In this section, we first prove two lemmas, and then analyze the global convergence of Algorithm 2.1.

Lemma 3.1 If the sequences $\{x^k\}$ and $\{y^k\}$ are generated by Algorithm 2.1, $\{x^k\}$ is bounded, and $F$ is continuous, then $\{y^k\}$ is also bounded.

Proof Combining inequality (2.5) with the Cauchy-Schwarz inequality, we obtain

$$\mu_k(1 - \sigma_k)\|d^k\|^2 \le -\bigl\langle F(x^k), d^k \bigr\rangle \le \bigl\|F(x^k)\bigr\|\|d^k\|.$$

By the definitions of $\mu_k$ and $\sigma_k$, it follows that

$$\|d^k\| \le \frac{\|F(x^k)\|}{\mu_k(1 - \sigma_k)} \le \frac{\|F(x^k)\|^{1/2}}{\gamma_1(1 - \kappa_0)}.$$

From the boundedness of $\{x^k\}$ and the continuity of $F$, we conclude that $\{d^k\}$ is bounded, and hence so is $\{y^k\}$. This completes the proof. □

Lemma 3.2 Let $x^*$ be a solution of problem (1.1) and let the function $h_k$ be defined by (2.4). If condition (1.2) holds, then

$$h_k(x^k) \ge \lambda b t_k (1 - \sigma_k)\mu_k\|d^k\|^2 \quad\text{and}\quad h_k(x^*) \le 0. \tag{3.1}$$

In particular, if $d^k \ne 0$, then $h_k(x^k) > 0$.

Proof

$$\begin{aligned} h_k(x^k) &= \bigl\langle a F(x^k) + b F(y^k),\, x^k - y^k \bigr\rangle + a t_k \bigl\langle F(x^k), d^k \bigr\rangle \\ &= -a t_k \bigl\langle F(x^k), d^k \bigr\rangle - b t_k \bigl\langle F(y^k), d^k \bigr\rangle + a t_k \bigl\langle F(x^k), d^k \bigr\rangle \\ &= -b t_k \bigl\langle F(y^k), d^k \bigr\rangle \qquad\qquad (3.2) \\ &\ge \lambda b t_k (1 - \sigma_k)\mu_k\|d^k\|^2, \qquad (3.3) \end{aligned}$$

where the inequality follows from (2.3). Moreover,

$$\begin{aligned} h_k(x^*) &= \bigl\langle a F(x^k) + b F(y^k),\, x^* - y^k \bigr\rangle + a t_k \bigl\langle F(x^k), d^k \bigr\rangle \\ &= a \bigl\langle F(x^k),\, x^* - x^k \bigr\rangle + a \bigl\langle F(x^k),\, x^k - y^k \bigr\rangle + b \bigl\langle F(y^k),\, x^* - y^k \bigr\rangle + a t_k \bigl\langle F(x^k), d^k \bigr\rangle \le 0, \end{aligned}$$

where the inequality follows from condition (1.2) and the definition of $y^k$.

If $d^k \ne 0$, then $h_k(x^k) > 0$ because $\sigma_k \le \kappa_0 < 1$. The proof is completed. □

Remark 3.1 Lemma 3.2 means that the hyperplane

$$H_k := \bigl\{x \in \mathbb{R}^n \,\big|\, \bigl\langle a F(x^k) + b F(y^k),\, x - y^k \bigr\rangle + a t_k \bigl\langle F(x^k), d^k \bigr\rangle = 0 \bigr\}$$

strictly separates the current iterate $x^k$ from the solution set of problem (1.1).

Let $x^* \in S$ and $d^k \ne 0$. Since

$$\begin{aligned} \bigl\langle a F(x^k) + b F(y^k),\, x^k - x^* \bigr\rangle &= a \bigl\langle F(x^k),\, x^k - x^* \bigr\rangle + b \bigl\langle F(y^k),\, x^k - x^* \bigr\rangle \\ &= a \bigl\langle F(x^k),\, x^k - x^* \bigr\rangle + b \bigl\langle F(y^k),\, x^k - y^k \bigr\rangle + b \bigl\langle F(y^k),\, y^k - x^* \bigr\rangle \\ &\ge b \bigl\langle F(y^k),\, x^k - y^k \bigr\rangle \\ &\ge \lambda b t_k \mu_k (1 - \sigma_k)\|d^k\|^2 > 0, \end{aligned}$$

where the first inequality follows from condition (1.2), the second from (2.3), and the last from $d^k \ne 0$, this shows that $-(a F(x^k) + b F(y^k))$ is a descent direction of the function $\frac{1}{2}\|x - x^*\|^2$ at the point $x^k$.

We next prove our main result. Certainly, if Algorithm 2.1 terminates at iteration $k$, then $x^k$ is a solution of problem (1.1). Therefore, in the following analysis, we assume that Algorithm 2.1 always generates an infinite sequence.

Theorem 3.1 If $F$ is continuous on $C$, condition (1.2) holds, and $\sup_k \|G_k\| < \infty$, then the sequence $\{x^k\} \subseteq \mathbb{R}^n$ generated by Algorithm 2.1 globally converges to a solution of (1.1).

Proof Let $x^*$ be a solution of problem (1.1). Since $x^{k+1} = \Pi_{C_k}(x^k - \alpha_k F(y^k))$ and $x^* \in C_k$ by Lemma 3.2, it follows from Lemma 2.1 that

$$\begin{aligned} \|x^{k+1} - x^*\|^2 &\le \bigl\|x^k - \alpha_k F(y^k) - x^*\bigr\|^2 - \bigl\|x^{k+1} - x^k + \alpha_k F(y^k)\bigr\|^2 \\ &= \|x^k - x^*\|^2 - 2\alpha_k \bigl\langle F(y^k),\, x^k - x^* \bigr\rangle - \|x^{k+1} - x^k\|^2 - 2\alpha_k \bigl\langle F(y^k),\, x^{k+1} - x^k \bigr\rangle, \end{aligned}$$

i.e.,

$$\begin{aligned} \|x^k - x^*\|^2 - \|x^{k+1} - x^*\|^2 &\ge 2\alpha_k \bigl\langle F(y^k),\, x^k - x^* \bigr\rangle + \|x^{k+1} - x^k\|^2 + 2\alpha_k \bigl\langle F(y^k),\, x^{k+1} - x^k \bigr\rangle \\ &\ge 2\alpha_k \bigl\langle F(y^k),\, x^k - y^k \bigr\rangle + \bigl\|x^{k+1} - x^k + \alpha_k F(y^k)\bigr\|^2 - \alpha_k^2 \bigl\|F(y^k)\bigr\|^2 \\ &\ge 2\alpha_k \bigl\langle F(y^k),\, x^k - y^k \bigr\rangle - \alpha_k^2 \bigl\|F(y^k)\bigr\|^2 \\ &= \frac{\bigl\langle F(y^k),\, x^k - y^k \bigr\rangle^2}{\|F(y^k)\|^2}, \end{aligned}$$

which shows that the sequence $\{\|x^k - x^*\|\}$ is nonincreasing, and hence convergent. Therefore, $\{x^k\}$ is bounded and

$$\lim_{k\to\infty} \frac{\bigl\langle F(y^k),\, x^k - y^k \bigr\rangle^2}{\|F(y^k)\|^2} = 0. \tag{3.4}$$

From Lemma 3.1 and the continuity of $F$, the sequence $\{F(y^k)\}$ is bounded; that is, there exists a positive constant $M$ such that

$$\bigl\|F(y^k)\bigr\| \le M \quad\text{for all } k.$$

By (2.3) and the choices of $\sigma_k$ and $\lambda$, we have

$$\frac{\bigl\langle F(y^k),\, x^k - y^k \bigr\rangle^2}{\|F(y^k)\|^2} = \frac{t_k^2 \bigl\langle F(y^k),\, d^k \bigr\rangle^2}{\|F(y^k)\|^2} \ge \frac{t_k^2 \lambda^2 (1 - \sigma_k)^2 \mu_k^2 \|d^k\|^4}{M^2} \ge \frac{\lambda^2 (1 - \kappa_0)^2 t_k^2 \mu_k^2 \|d^k\|^4}{M^2}.$$

This, together with (3.4), yields

$$\lim_{k\to\infty} t_k \mu_k \|d^k\|^2 = 0.$$

Now, we consider the following two possible cases:

Suppose first that $\limsup_{k\to\infty} t_k > 0$. Then we must have

$$\liminf_{k\to\infty} \mu_k = 0 \quad\text{or}\quad \liminf_{k\to\infty} \|d^k\| = 0.$$

From the definition of $\mu_k$, the choice of $d^k$, and $\sup_k \|G_k\| < \infty$, either case implies that

$$\liminf_{k\to\infty} \bigl\|F(x^k)\bigr\| = 0.$$

Since $F$ is continuous and $\{x^k\}$ is bounded, the sequence $\{x^k\}$ has some accumulation point $\hat{x}$ such that

$$F(\hat{x}) = 0.$$

This shows that $\hat{x}$ is a solution of problem (1.1). Replacing $x^*$ by $\hat{x}$ in the preceding argument, we obtain that the sequence $\{\|x^k - \hat{x}\|\}$ is nonincreasing, and hence converges. Since $\hat{x}$ is an accumulation point of $\{x^k\}$, some subsequence of $\{\|x^k - \hat{x}\|\}$ converges to zero, which implies that the whole sequence $\{\|x^k - \hat{x}\|\}$ converges to zero, and hence $\lim_{k\to\infty} x^k = \hat{x}$.

Suppose now that $\lim_{k\to\infty} t_k = 0$. Let $\bar{x}$ be any accumulation point of $\{x^k\}$ and let $\{x^{k_j}\}$ be a corresponding subsequence converging to $\bar{x}$. By the choice of $t_k$, (2.3) implies that

$$-\bigl\langle F\bigl(x^{k_j} + t_{k_j}\beta^{-1} d^{k_j}\bigr),\, d^{k_j} \bigr\rangle < \lambda(1 - \sigma_{k_j})\mu_{k_j}\|d^{k_j}\|^2 \quad\text{for all } j.$$

Since $F$ is continuous, we obtain by letting $j \to \infty$ that

$$-\bigl\langle F(x^{k_j}),\, d^{k_j} \bigr\rangle \le \lambda(1 - \sigma_{k_j})\mu_{k_j}\|d^{k_j}\|^2. \tag{3.5}$$

From (2.5) and (3.5), we conclude that $\lambda \ge 1$, which contradicts $\lambda \in (0,1)$. Hence the case $\lim_{k\to\infty} t_k = 0$ cannot occur. This completes the proof. □

Remark 3.2 Compared with the conditions used for global convergence in [15, 16], our conditions are weaker.

4 Convergence rate

In this section, we provide a result on the convergence rate of the iterative sequence generated by Algorithm 2.1. To establish this result, we need the following conditions (4.1) and (4.2).

For $x^* \in S$, there are positive constants $\delta$, $c_1$, and $c_2$ such that

$$c_1 \operatorname{dist}(x, S) \le \bigl\|F(x)\bigr\|, \quad \forall x \in N(x^*, \delta), \tag{4.1}$$

and

$$\bigl\|F(x) - F(y) - G_k(x - y)\bigr\| \le c_2 \|x - y\|^2, \quad \forall x, y \in N(x^*, \delta), \tag{4.2}$$

where $\operatorname{dist}(x, S)$ denotes the distance from $x$ to the solution set $S$, and

$$N(x^*, \delta) = \bigl\{x \in \mathbb{R}^n \mid \|x - x^*\| \le \delta\bigr\}.$$

If $F$ is differentiable and $F'(\cdot)$ is locally Lipschitz continuous with modulus $\theta > 0$, then there exists a constant $L_1 > 0$ such that

$$\bigl\|F(y) - F(x) - F'(x)(y - x)\bigr\| \le L_1 \|y - x\|^2, \quad \forall x, y \in N(x^*, \delta). \tag{4.3}$$

In fact, by the mean value theorem for vector-valued functions,

$$\bigl\|F(y) - F(x) - F'(x)(y - x)\bigr\| = \left\| \int_0^1 \bigl[F'\bigl(x + t(y - x)\bigr) - F'(x)\bigr](y - x)\, dt \right\| \le \int_0^1 \theta t \|y - x\|^2\, dt = \frac{\theta}{2}\|y - x\|^2,$$

so (4.3) holds with $L_1 = \theta/2$. Under assumption (4.2) or (4.3), it is readily shown that there exists a constant $L_2 > 0$ such that

$$\bigl\|F(y) - F(x)\bigr\| \le L_2 \|y - x\|, \quad \forall x, y \in N(x^*, \delta). \tag{4.4}$$

In 1998, it was shown in [15] that the proposed method converges superlinearly when the underlying mapping $F$ is monotone and differentiable, $F'(x^*)$ is nonsingular, and $F'$ is locally Lipschitz continuous. It is known that the local error bound condition (4.1) is weaker than nonsingularity. Recently, under conditions (4.1) and (4.2), and with the underlying mapping $F$ monotone and continuous, the locally superlinear rate of convergence of the method in [16] was obtained.

Next, we analyze the superlinear convergence rate of the iterative sequence under a weaker condition. In the rest of this section, we assume that $x^k \to x^*$ as $k \to \infty$, where $x^* \in S$.

Lemma 4.1 Let $G \in \mathbb{R}^{n\times n}$ be a positive semidefinite matrix and let $\mu > 0$. Then

(1) $\bigl\|(G + \mu I)^{-1}\bigr\| \le \frac{1}{\mu}$;

(2) $\bigl\|(G + \mu I)^{-1} G\bigr\| \le 2$.

Proof See [18]. □
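The two bounds are easy to check numerically; the following MATLAB snippet does so for a random positive semidefinite $G$ (illustrative only):

```matlab
% Sanity check of Lemma 4.1 on a random positive semidefinite matrix.
n  = 5; B = randn(n); G = B' * B;     % G is symmetric positive semidefinite
mu = 0.3;
M  = (G + mu * eye(n)) \ eye(n);      % (G + mu*I)^{-1}
fprintf('||(G+mu*I)^{-1}||   = %.4f <= %.4f\n', norm(M), 1/mu);
fprintf('||(G+mu*I)^{-1}*G|| = %.4f <= 2\n', norm(M * G));
```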

Lemma 4.2 Suppose that $F$ is continuous and satisfies conditions (1.2), (4.1), and (4.2). If there exists a positive constant $N$ such that $\|G_k\| \le N$ for all $k$, then for all $k$ sufficiently large,

(1) $c_3 \|d^k\| \le \bigl\|F(x^k)\bigr\| \le c_4 \|d^k\|$;

(2) $\bigl\|F(x^k) + G_k d^k\bigr\| \le c_5 \|d^k\|^{3/2}$, where $c_3$, $c_4$, and $c_5$ are positive constants.

Proof For (1), let $x^k \in N(x^*, \frac{1}{2}\delta)$ and let $\hat{x}^k \in S$ be the solution closest to $x^k$. We have

$$\|\hat{x}^k - x^*\| \le \|\hat{x}^k - x^k\| + \|x^k - x^*\| \le \delta,$$

i.e., $\hat{x}^k \in N(x^*, \delta)$. Thus, by (2.1), (2.2), (4.2), and Lemma 4.1, we have

$$\begin{aligned} \|d^k\| &\le \bigl\|(G_k + \mu_k I)^{-1} F(x^k)\bigr\| + \bigl\|(G_k + \mu_k I)^{-1} r^k\bigr\| \\ &\le \bigl\|(G_k + \mu_k I)^{-1}\bigl[F(\hat{x}^k) - F(x^k) - G_k(\hat{x}^k - x^k)\bigr]\bigr\| + \bigl\|(G_k + \mu_k I)^{-1} G_k (\hat{x}^k - x^k)\bigr\| + \frac{1}{\mu_k}\|r^k\| \\ &\le \frac{c_2}{\mu_k}\|\hat{x}^k - x^k\|^2 + 2\|\hat{x}^k - x^k\| + \sigma_k \|d^k\|. \end{aligned}$$

By $\|x^k - \hat{x}^k\| = \operatorname{dist}(x^k, S)$ and $\sigma_k \le \kappa_0$, it follows that

$$(1 - \kappa_0)\|d^k\| \le \Bigl(\frac{c_2}{\mu_k}\operatorname{dist}(x^k, S) + 2\Bigr)\operatorname{dist}(x^k, S).$$

From (4.1) and the choice of $\mu_k$, it holds that

$$\frac{c_2}{\mu_k}\operatorname{dist}(x^k, S) \le \frac{c_2 c_1^{-1}\|F(x^k)\|}{\gamma_1 \|F(x^k)\|^{1/2}} = \frac{c_2}{\gamma_1 c_1}\bigl\|F(x^k)\bigr\|^{1/2}.$$

From the boundedness of $\{F(x^k)\}$, there exists a positive constant $M_1$ such that

$$\bigl\|F(x^k)\bigr\|^{1/2} \le M_1.$$

Therefore,

$$\|d^k\| \le \frac{c_2 M_1 + 2\gamma_1 c_1}{c_1 \gamma_1 (1 - \kappa_0)}\operatorname{dist}(x^k, S) \le \frac{c_2 M_1 + 2\gamma_1 c_1}{c_1^2 \gamma_1 (1 - \kappa_0)}\bigl\|F(x^k)\bigr\|. \tag{4.5}$$

We obtain the left-hand side of (1) by setting $c_3 := \frac{c_1^2 \gamma_1 (1 - \kappa_0)}{c_2 M_1 + 2\gamma_1 c_1}$.

For the right-hand side, it follows from (2.1) and (2.2) that

$$\bigl\|F(x^k)\bigr\| \le \|G_k + \mu_k I\|\|d^k\| + \|r^k\| \le \bigl(\|G_k\| + \mu_k + \sigma_k \mu_k\bigr)\|d^k\| \le (N + \gamma_1 M_1 + \kappa_0 \gamma_1 M_1)\|d^k\|.$$

We obtain the right-hand side by setting $c_4 := N + \gamma_1 M_1 + \kappa_0 \gamma_1 M_1$.

For (2), using (2.1) and (2.2), we have

$$\bigl\|F(x^k) + G_k d^k\bigr\| \le \mu_k\|d^k\| + \|r^k\| \le (1 + \sigma_k)\mu_k\|d^k\| \le (1 + \kappa_0)\gamma_1 \bigl\|F(x^k)\bigr\|^{1/2}\|d^k\| \le (1 + \kappa_0)\gamma_1 c_4^{1/2}\|d^k\|^{3/2}.$$

By setting $c_5 := (1 + \kappa_0)\gamma_1 c_4^{1/2}$, we obtain the desired result. □

Lemma 4.3 Suppose that the assumptions of Lemma 4.2 hold. Then for all $k$ sufficiently large, it holds that

$$y^k = x^k + d^k.$$

Proof By $\lim_{k\to\infty} x^k = x^*$ and the continuity of $F$, we have

$$\lim_{k\to\infty}\bigl\|F(x^k)\bigr\| = \bigl\|F(x^*)\bigr\| = 0.$$

By Lemma 4.2(1), we obtain

$$\lim_{k\to\infty}\|d^k\| = 0,$$

which means that $x^k + d^k \in N(x^*, \delta)$ for all $k$ sufficiently large. Hence, it follows from (4.2) that

$$F(x^k + d^k) = F(x^k) + G_k d^k + R^k, \tag{4.6}$$

where $\|R^k\| \le c_2\|d^k\|^2$. Using (2.1) and (2.2), (4.6) can be written as

$$F(x^k + d^k) = -\mu_k d^k + r^k + R^k. \tag{4.7}$$

Hence,

$$\begin{aligned} -\bigl\langle F(x^k + d^k),\, d^k \bigr\rangle &= \mu_k \langle d^k, d^k \rangle - \langle r^k, d^k \rangle - \langle R^k, d^k \rangle \\ &\ge \mu_k\|d^k\|^2 - \sigma_k \mu_k\|d^k\|^2 - c_2\|d^k\|^3 \\ &= \Bigl(1 - \frac{c_2\|d^k\|}{\mu_k(1 - \sigma_k)}\Bigr)\mu_k(1 - \sigma_k)\|d^k\|^2. \end{aligned}$$

By Lemma 4.2(1) and the choices of $\mu_k$ and $\sigma_k$, for $k$ sufficiently large we obtain

$$1 \ge 1 - \frac{c_2\|d^k\|}{\mu_k(1 - \sigma_k)} \ge 1 - \frac{c_2 c_3^{-1}\|F(x^k)\|}{(1 - \kappa_0)\gamma_1\|F(x^k)\|^{1/2}} = 1 - \frac{c_2 c_3^{-1}\|F(x^k)\|^{1/2}}{(1 - \kappa_0)\gamma_1} \ge \lambda,$$

where the last inequality follows from $\lim_{k\to\infty}\|F(x^k)\| = 0$.

Therefore,

$$-\bigl\langle F(x^k + d^k),\, d^k \bigr\rangle \ge \lambda\mu_k(1 - \sigma_k)\|d^k\|^2,$$

which implies that (2.3) holds with $t_k = 1$ for all $k$ sufficiently large, i.e., $y^k = x^k + d^k$. This completes the proof. □

From now on, we assume that $k$ is large enough so that $y^k = x^k + d^k$.

Lemma 4.4 Suppose that the assumptions of Lemma 4.2 hold. Set $\tilde{x}^k := x^k - \alpha_k F(y^k)$. Then for all $k$ sufficiently large, there exists a positive constant $c_6$ such that

$$\|y^k - \tilde{x}^k\| \le c_6\|d^k\|^{3/2}.$$

Proof Set

$$H_k^1 = \bigl\{x \in \mathbb{R}^n \mid \bigl\langle F(y^k),\, x - y^k \bigr\rangle = 0 \bigr\}.$$

Then $\tilde{x}^k = \Pi_{H_k^1}(x^k)$ and $y^k \in H_k^1$. Hence, the vectors $x^k - \tilde{x}^k$ and $y^k - \tilde{x}^k$ are orthogonal. That is,

$$\|y^k - \tilde{x}^k\| = \|y^k - x^k\|\sin\theta_k = \|d^k\|\sin\theta_k, \tag{4.8}$$

where $\theta_k$ is the angle between $\tilde{x}^k - x^k$ and $y^k - x^k$. Because $\tilde{x}^k - x^k = -\alpha_k F(y^k)$ and $y^k - x^k = d^k$, the angle between $F(y^k)$ and $-\mu_k d^k$ is also $\theta_k$. By (4.7), we obtain

$$F(y^k) - (-\mu_k d^k) = R^k + r^k,$$

which implies that the vectors $F(y^k)$, $-\mu_k d^k$, and $R^k + r^k$ form a triangle. Since $\lim_{k\to\infty}\mu_k = \lim_{k\to\infty}\gamma_1\|F(x^k)\|^{1/2} = 0$ and $\lim_{k\to\infty}\|d^k\| = 0$, for all $k$ sufficiently large we have

$$\sin\theta_k \le \frac{\|r^k\| + \|R^k\|}{\mu_k\|d^k\|} \le \sigma_k + \frac{c_2\|d^k\|}{\mu_k} \le \gamma_2\bigl\|F(x^k)\bigr\|^{1/2} + \frac{c_2\|F(x^k)\|}{c_3\gamma_1\|F(x^k)\|^{1/2}} = \Bigl(\gamma_2 + \frac{c_2}{c_3\gamma_1}\Bigr)\bigl\|F(x^k)\bigr\|^{1/2},$$

which, together with (4.8) and Lemma 4.2(1), gives

$$\|y^k - \tilde{x}^k\| \le \Bigl(\gamma_2 + \frac{c_2}{c_3\gamma_1}\Bigr)\bigl\|F(x^k)\bigr\|^{1/2}\|d^k\| \le c_6\|d^k\|^{3/2},$$

where $c_6 = c_4^{1/2}\bigl(\gamma_2 + \frac{c_2}{c_3\gamma_1}\bigr)$. This completes the proof. □

Now we turn our attention to the local rate of convergence.

Theorem 4.1 Suppose that the assumptions of Lemma 4.2 hold. Then the sequence $\{\operatorname{dist}(x^k, S)\}$ converges Q-superlinearly to 0.

Proof By the definition of $\tilde{x}^k$, Lemma 4.2(1), and (4.4), for sufficiently large $k$ we have

$$\begin{aligned} \|\tilde{x}^k - x^*\| &= \bigl\|x^k - \alpha_k F(y^k) - x^*\bigr\| \le \|x^k - x^*\| + \frac{\langle F(y^k),\, x^k - y^k\rangle}{\|F(y^k)\|^2}\bigl\|F(y^k)\bigr\| \\ &\le \|x^k - x^*\| + \|x^k - y^k\| = \|x^k - x^*\| + \|d^k\| \\ &\le \|x^k - x^*\| + c_3^{-1}\bigl\|F(x^k)\bigr\| = \|x^k - x^*\| + c_3^{-1}\bigl\|F(x^k) - F(x^*)\bigr\| \\ &\le \bigl(1 + L_2 c_3^{-1}\bigr)\|x^k - x^*\|, \end{aligned}$$

which implies that $\lim_{k\to\infty}\|\tilde{x}^k - x^*\| = 0$ because $\lim_{k\to\infty}\|x^k - x^*\| = 0$. Thus, $\tilde{x}^k \in N(x^*, \delta)$ for $k$ sufficiently large, which, together with (4.2), Lemma 4.2, Lemma 4.4, and the definition of $\tilde{x}^k$, gives

$$\begin{aligned} \bigl\|F(\tilde{x}^k)\bigr\| &\le \bigl\|F(x^k) + G_k(\tilde{x}^k - x^k)\bigr\| + c_2\|\tilde{x}^k - x^k\|^2 \\ &\le \bigl\|F(x^k) + G_k(y^k - x^k)\bigr\| + \|G_k\|\|\tilde{x}^k - y^k\| + c_2\|\tilde{x}^k - x^k\|^2 \\ &\le c_5\|d^k\|^{3/2} + N c_6\|d^k\|^{3/2} + c_2\bigl\|\alpha_k F(y^k)\bigr\|^2 \\ &\le (c_5 + N c_6)\|d^k\|^{3/2} + c_2\|d^k\|^2 \\ &= \bigl(c_5 + N c_6 + c_2\|d^k\|^{1/2}\bigr)\|d^k\|^{3/2} \\ &\le \bigl(c_5 + N c_6 + c_2 c_3^{-1/2}\|F(x^k)\|^{1/2}\bigr)\|d^k\|^{3/2}. \end{aligned}$$

Because $\{F(x^k)\}$ is bounded, there exists a positive constant $c_7$ such that

$$\bigl\|F(\tilde{x}^k)\bigr\| \le c_7\|d^k\|^{3/2}. \tag{4.9}$$

On the other hand, from Lemma 3.2 we know that

$$S \subseteq C_k,$$

where $S$ is the solution set of problem (1.1). Since $x^{k+1} = \Pi_{C_k}(\tilde{x}^k)$, it follows from Lemma 2.1 that

$$\|x^{k+1} - x^*\|^2 \le \|\tilde{x}^k - x^*\|^2 - \|x^{k+1} - \tilde{x}^k\|^2, \quad \forall x^* \in S,$$

which implies that

$$\|x^{k+1} - x^*\| \le \|\tilde{x}^k - x^*\|.$$

Therefore, taking $x^*$ as the solution closest to $\tilde{x}^k$ and using inequalities (4.1), (4.5), and (4.9), we have

$$\operatorname{dist}(x^{k+1}, S) \le \operatorname{dist}(\tilde{x}^k, S) \le \frac{1}{c_1}\bigl\|F(\tilde{x}^k)\bigr\| \le \frac{c_7}{c_1}\|d^k\|^{3/2} \le \frac{c_7}{c_1}\Bigl(\frac{c_2 M_1 + 2\gamma_1 c_1}{c_1\gamma_1(1 - \kappa_0)}\Bigr)^{3/2}\operatorname{dist}^{3/2}(x^k, S),$$

which shows that the order of superlinear convergence is at least 1.5. This completes the proof. □

Remark 4.1 Compared with the conditions used in the proofs of locally superlinear convergence in [15, 16], our conditions are weaker.

5 Numerical experiments

In this section, we present some numerical results to show the efficiency of our method. The MATLAB codes were run on a notebook computer with a 2.10 GHz CPU under MATLAB Version 7.0. As in [16], we take $G_k = F'(x^k)$ and use the left-division operation in MATLAB to solve the system of linear equations (2.1) at each iteration. We choose $b = 1$, $\lambda = 0.96$, $\kappa_0 = 0$, $\beta = 0.7$, and $\gamma_1 = 1$. 'Iter.' denotes the number of iterations and 'CPU' denotes the CPU time in seconds. We use $\|F(x^k)\| \le 10^{-6}$ as the stopping criterion. The following example was also tested in [16].

Example Let

$$F(x) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} + \begin{pmatrix} x_1^3 \\ x_2^3 \\ 2x_3^3 \\ 2x_4^3 \end{pmatrix} + \begin{pmatrix} -10 \\ 1 \\ -3 \\ 0 \end{pmatrix}$$

and the constraint set C be taken as

$$C = \Bigl\{x \in \mathbb{R}^4 \ \Big|\ \sum_{i=1}^4 x_i \le 3,\ x_i \ge 0,\ i = 1, 2, 3, 4 \Bigr\}.$$
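Under the reconstruction above, one can check that $x^* = (2, 0, 1, 0)^T$ satisfies $F(x^*) = 0$ and lies on the boundary of $C$. A MATLAB encoding of the Example that can be passed to the sketch `hybrid_algorithm` given after Algorithm 2.1 might look as follows (the handles and the polyhedral description of $C$ are illustrative):

```matlab
% The Example: F(x) = M*x + (x1^3; x2^3; 2*x3^3; 2*x4^3) + q,
% with C = {x in R^4 : sum(x) <= 3, x >= 0}; F(x*) = 0 at x* = (2;0;1;0).
M = [1 0 0 0; 0 1 -1 0; 0 1 1 0; 0 0 0 0];
q = [-10; 1; -3; 0];
F  = @(x) M*x + [x(1)^3; x(2)^3; 2*x(3)^3; 2*x(4)^3] + q;
JF = @(x) M + diag([3*x(1)^2, 3*x(2)^2, 6*x(3)^2, 6*x(4)^2]);  % G_k = F'(x^k)
A  = [ones(1,4); -eye(4)]; c = [3; zeros(4,1)];                % C as A*x <= c
x  = hybrid_algorithm(F, JF, A, c, zeros(4,1));                % x0 = 0 is in C
fprintf('||F(x)|| = %.2e\n', norm(F(x)));
```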

From Tables 1 and 2, we can see that our algorithm is efficient when the parameters are chosen properly. We can also observe that the algorithm's results change with the value of $a$. When we take $a = 0$, the results are not the best; that is to say, the projection direction $-F(y^k)$ is not an optimal one.

Table 1 Numerical results for the Example with $a = 10^{-15}$
Table 2 Numerical results for the Example with $a = 0$