1 Introduction

The problem we are concerned with in this paper is the following variational inequality: find $u^* \in \Omega$ such that

$(u - u^*)^T F(u^*) \ge 0, \quad \forall u \in \Omega,$
(1.1)

with

$u = \begin{pmatrix} x \\ y \end{pmatrix}, \qquad F(u) = \begin{pmatrix} f(x) \\ g(y) \end{pmatrix} \qquad \text{and} \qquad \Omega = \bigl\{ (x,y) \mid x \in R^n_{++},\ y \in R^m_{++},\ Ax + By = b \bigr\},$
(1.2)

where $A \in R^{l \times n}$, $B \in R^{l \times m}$ are given matrices, $b \in R^l$ is a given vector, and $f: R^n_{++} \to R^n$, $g: R^m_{++} \to R^m$ are given monotone operators. Studies and applications of such problems can be found in [1–11]. By attaching a Lagrange multiplier vector $\lambda \in R^l$ to the linear constraint $Ax + By = b$, the problem (1.1)-(1.2) can be reformulated as finding $w^* \in W$ such that

$(w - w^*)^T Q(w^*) \ge 0, \quad \forall w \in W,$
(1.3)

where

$w = \begin{pmatrix} x \\ y \\ \lambda \end{pmatrix}, \qquad Q(w) = \begin{pmatrix} f(x) - A^T\lambda \\ g(y) - B^T\lambda \\ Ax + By - b \end{pmatrix}, \qquad W = R^n_{++} \times R^m_{++} \times R^l.$
(1.4)

The problem (1.3)-(1.4) is referred to as the structured variational inequality (SVI).

The alternating direction method (ADM), originally proposed by Gabay and Mercier [4] and Gabay [3], is a powerful method for solving the structured problem (1.3)-(1.4), since it decomposes the original problem into a series of smaller subproblems. The classical proximal alternating direction method (PADM) [12–14] is an effective numerical approach for solving variational inequalities with a separable structure. To make the PADM more efficient and practical, He et al. [14] proposed a modified PADM as follows. For a given $(x^k, y^k, \lambda^k) \in R^n_{++} \times R^m_{++} \times R^l$, the new iterate $(x^{k+1}, y^{k+1}, \lambda^{k+1})$ is obtained via the following steps.

Step 1. Solve the following variational inequality to obtain $x^{k+1}$:

$(x - x^{k+1})^T \bigl\{ f(x^{k+1}) - A^T\bigl[\lambda^k - H_k(Ax^{k+1} + By^k - b)\bigr] + R_k(x^{k+1} - x^k) \bigr\} \ge 0, \quad \forall x \in R^n_{++}.$
(1.5)

Step 2. Solve the following variational inequality to obtain $y^{k+1}$:

$(y - y^{k+1})^T \bigl\{ g(y^{k+1}) - B^T\bigl[\lambda^k - H_k(Ax^{k+1} + By^{k+1} - b)\bigr] + S_k(y^{k+1} - y^k) \bigr\} \ge 0, \quad \forall y \in R^m_{++}.$
(1.6)

Step 3. Update $\lambda^k$ via

$\lambda^{k+1} = \lambda^k - H_k(Ax^{k+1} + By^{k+1} - b).$
(1.7)

Yuan and Li [15] developed a logarithmic-quadratic proximal (LQP)-based decomposition method by applying LQP terms to regularize the ADM subproblems: in the alternating direction method (1.5)-(1.7), the terms $R_k(x^{k+1} - x^k)$ and $S_k(y^{k+1} - y^k)$ are replaced by $R[(x - x^k) + \mu(x^k - X_k^2 x^{-1})]$ and $S[(y - y^k) + \mu(y^k - Y_k^2 y^{-1})]$, respectively, where $X_k := \operatorname{diag}(x^k_1, \dots, x^k_n)$, $Y_k := \operatorname{diag}(y^k_1, \dots, y^k_m)$, and $x^{-1}$ is the vector whose $j$th entry is $1/x_j$ (see Lemma 2.2 below). From a given $w^k = (x^k, y^k, \lambda^k) \in R^n_{++} \times R^m_{++} \times R^l$ and $\mu \in (0,1)$, the new iterate $(x^{k+1}, y^{k+1}, \lambda^{k+1})$ in [15] is obtained by solving the following system:

$f(x) - A^T\bigl[\lambda^k - H(Ax + By^k - b)\bigr] + R\bigl[(x - x^k) + \mu(x^k - X_k^2 x^{-1})\bigr] = 0,$
$g(y) - B^T\bigl[\lambda^k - H(Ax^{k+1} + By - b)\bigr] + S\bigl[(y - y^k) + \mu(y^k - Y_k^2 y^{-1})\bigr] = 0,$
$\lambda^{k+1} = \lambda^k - H(Ax^{k+1} + By^{k+1} - b),$
where the solutions of the first and second equations are denoted by $x^{k+1}$ and $y^{k+1}$, respectively.

Note that the LQP method was originally presented in [16]. Later, Bnouhachem et al. [17, 18] proposed a new inexact LQP alternating direction method which solves a series of related systems of nonlinear equations. Very recently, Li [19] presented an LQP-based prediction-correction method in which the new iterate is obtained as a convex combination of the previous point and a point generated by a projection-type step along a descent direction.

In the present paper, inspired by the works cited above and by recent works in this direction, we propose a new LQP-based prediction-correction method in which the new iterate is obtained as a convex combination of the previous point and a point generated by a projection-type step along a different descent direction. Under the same conditions as those in [19], we prove the global convergence of the proposed algorithm. We also prove theoretically that the lower bound of the progress achieved by the proposed method is greater than that of Li's method [19]. The effectiveness and superiority of the proposed method are verified by our preliminary numerical experiments.

2 The proposed method

In this section, we recall some basic definitions and properties which will be used frequently in our later analysis, and we summarize some useful results already proved in the literature. The first lemma collects some basic properties of the projection onto a closed convex set Ω.

Lemma 2.1 Let $G$ be a symmetric positive definite matrix and let $\Omega$ be a nonempty closed convex subset of $R^l$. We denote by $P_{\Omega,G}(\cdot)$ the projection onto $\Omega$ under the $G$-norm, i.e.,

$P_{\Omega,G}(v) = \operatorname{argmin}\bigl\{ \|v - u\|_G \mid u \in \Omega \bigr\}.$

Then we have the following inequalities:

$\bigl(z - P_{\Omega,G}[z]\bigr)^T G \bigl(P_{\Omega,G}[z] - v\bigr) \ge 0, \quad \forall z \in R^l,\ \forall v \in \Omega;$
(2.1)
$\|P_{\Omega,G}[u] - P_{\Omega,G}[v]\|_G \le \|u - v\|_G, \quad \forall u, v \in R^l;$
(2.2)
$\|u - P_{\Omega,G}[z]\|_G^2 \le \|z - u\|_G^2 - \|z - P_{\Omega,G}[z]\|_G^2, \quad \forall z \in R^l,\ \forall u \in \Omega.$
(2.3)
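For the reader's convenience, we note that (2.3) is a direct consequence of (2.1): writing $P = P_{\Omega,G}[z]$ and expanding the $G$-norm, we have

$\|z - u\|_G^2 = \|z - P\|_G^2 + 2(z - P)^T G(P - u) + \|P - u\|_G^2 \ge \|z - P\|_G^2 + \|P - u\|_G^2, \quad \forall u \in \Omega,$

where the inequality uses (2.1) with $v = u$; rearranging gives (2.3).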

Throughout this paper we make the following standard assumptions.

Assumption A $f(x)$ is monotone with respect to $R^n_{++}$ and $g(y)$ is monotone with respect to $R^m_{++}$.

Assumption B The solution set of SVI, denoted by $W^*$, is nonempty.

Now we present the new LQP alternating direction method (LQP-ADM) for solving SVI.

Prediction step: For a given $w^k = (x^k, y^k, \lambda^k) \in R^n_{++} \times R^m_{++} \times R^l$ and $\mu \in (0,1)$, the predictor $\tilde w^k = (\tilde x^k, \tilde y^k, \tilde\lambda^k) \in R^n_{++} \times R^m_{++} \times R^l$ is obtained by solving the following system, where the solutions of (2.4a) and (2.4b) are denoted by $\tilde x^k$ and $\tilde y^k$, respectively:

$f(x) - A^T\bigl[\lambda^k - H(Ax + By^k - b)\bigr] + R\bigl[(x - x^k) + \mu(x^k - X_k^2 x^{-1})\bigr] = 0,$
(2.4a)
$g(y) - B^T\bigl[\lambda^k - H(A\tilde x^k + By - b)\bigr] + S\bigl[(y - y^k) + \mu(y^k - Y_k^2 y^{-1})\bigr] = 0,$
(2.4b)
$\tilde\lambda^k = \lambda^k - H(A\tilde x^k + B\tilde y^k - b).$
(2.4c)

Correction step: The new iterate $w^{k+1}(\alpha_k) = (x^{k+1}, y^{k+1}, \lambda^{k+1})$ is given by

$w^{k+1}(\alpha_k) = (1 - \sigma)w^k + \sigma P_W\bigl[w^k - \alpha_k G^{-1} d(w^k, \tilde w^k)\bigr], \quad \sigma \in (0,1),$
(2.5)

where

$\alpha_k = \frac{\varphi_k}{\|w^k - \tilde w^k\|_G^2},$
(2.6)
$\varphi_k := \|w^k - \tilde w^k\|_M^2 + (\lambda^k - \tilde\lambda^k)^T (By^k - B\tilde y^k), \qquad d(w^k, \tilde w^k) = \begin{pmatrix} f(\tilde x^k) - A^T\tilde\lambda^k + A^T H B(y^k - \tilde y^k) \\ g(\tilde y^k) - B^T\tilde\lambda^k + B^T H B(y^k - \tilde y^k) \\ A\tilde x^k + B\tilde y^k - b \end{pmatrix},$
(2.7)

and

$G = \begin{pmatrix} (1+\mu)R & 0 & 0 \\ 0 & (1+\mu)S + B^T H B & 0 \\ 0 & 0 & H^{-1} \end{pmatrix}, \qquad M = \begin{pmatrix} R & 0 & 0 \\ 0 & S + B^T H B & 0 \\ 0 & 0 & H^{-1} \end{pmatrix}.$
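To make the correction step concrete, the following minimal Matlab sketch computes $\varphi_k$, $\alpha_k$, and the new iterate (2.5) in the special setting $R = rI$, $S = sI$, $H = I$, $A = I$, $B = -I$, $b = 0$ used in Section 6, written with vector variables for readability. In this setting every block of $G$ is a positive multiple of the identity, so the $G$-norm projection onto $W$ reduces to componentwise clipping on the $x$- and $y$-blocks; all variable names and the handles f and g are our own placeholders:

    % One correction step (2.5)-(2.7) with R = r*I, S = s*I, H = I,
    % A = I, B = -I, b = 0.  (xk, yk, lk) is w^k and (xt, yt, lt) is
    % the predictor wtilde^k from (2.4a)-(2.4c).
    dx = xk - xt;  dy = yk - yt;  dl = lk - lt;
    phi = r*(dx'*dx) + (s + 1)*(dy'*dy) + dl'*dl + dl'*(-dy);  % phi_k in (2.7)
    gx = (1 + mu)*r;  gy = (1 + mu)*s + 1;        % diagonal blocks of G
    nG2 = gx*(dx'*dx) + gy*(dy'*dy) + dl'*dl;     % ||w^k - wtilde^k||_G^2
    alpha = phi/nG2;                              % step size (2.6)
    d1 = f(xt) - lt - dy;                         % x-block of d(w^k, wtilde^k)
    d2 = g(yt) + lt + dy;                         % y-block
    d3 = xt - yt;                                 % lambda-block
    xbar = max(xk - alpha*d1/gx, 0);              % G-norm projection onto W
    ybar = max(yk - alpha*d2/gy, 0);
    lbar = lk - alpha*d3;                         % lambda-block is unconstrained
    xk1 = (1 - sigma)*xk + sigma*xbar;            % convex combination (2.5);
    yk1 = (1 - sigma)*yk + sigma*ybar;            % sigma in (0,1) keeps the new
    lk1 = (1 - sigma)*lk + sigma*lbar;            % x- and y-iterates positive

A relaxation factor $\gamma \in (0,2)$ may multiply alpha, as in Theorem 3.2 below.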

Remark 2.1 If we take $x^{k+1} = \tilde x^k$, $y^{k+1} = \tilde y^k$, and $\lambda^{k+1} = \tilde\lambda^k$, with $\tilde x^k$, $\tilde y^k$, and $\tilde\lambda^k$ given by (2.4a), (2.4b), and (2.4c), respectively, we recover the method proposed in [15].

We need the following result in the convergence analysis of the proposed method.

Lemma 2.2 [15]

Let $q(u) \in R^n$ be a monotone mapping of $u$ with respect to $R^n_+$ and let $R \in R^{n\times n}$ be a positive definite diagonal matrix. For given $u^k > 0$, let $U_k := \operatorname{diag}(u^k_1, u^k_2, \dots, u^k_n)$ and let $u^{-1}$ be the $n$-vector whose $j$th element is $1/u_j$. Then the equation

$q(u) + R\bigl[(u - u^k) + \mu(u^k - U_k^2 u^{-1})\bigr] = 0$
(2.8)

has a unique positive solution $u$. Moreover, for any $v \ge 0$, we have

$(v - u)^T q(u) \ge \frac{1+\mu}{2}\bigl(\|u - v\|_R^2 - \|u^k - v\|_R^2\bigr) + \frac{1-\mu}{2}\|u^k - u\|_R^2.$
(2.9)
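For intuition, when $q(u) = u - c$ and $R = rI$ (the structure that arises in the experiments of Section 6), equation (2.8) decouples componentwise: multiplying the $j$th component by $u_j > 0$ gives the quadratic $(1+r)u_j^2 - (c_j + r(1-\mu)u_j^k)u_j - r\mu(u_j^k)^2 = 0$, whose unique positive root is available in closed form. A minimal Matlab sketch of this special case (our illustration, not part of Lemma 2.2):

    % Unique positive solution of (u - c) + r*((u - uk) + mu*(uk - uk.^2./u)) = 0,
    % solved componentwise for vectors c and uk > 0, with r > 0 and mu in (0,1).
    bhat = c + r*(1 - mu)*uk;
    u = (bhat + sqrt(bhat.^2 + 4*(1 + r)*r*mu*uk.^2)) / (2*(1 + r));

Since the constant term $-r\mu(u_j^k)^2$ of each quadratic is negative, it has exactly one positive root, so $u > 0$ automatically, in line with the lemma.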

In the next theorem we show that $\alpha_k$ is bounded below away from zero, which is one of the keys to proving the global convergence results.

Theorem 2.1 For given $w^k \in R^n_{++} \times R^m_{++} \times R^l$, let $\tilde w^k$ be generated by (2.4a)-(2.4c). Then we have

$\varphi_k \ge \frac{1}{2}\bigl(\|A\tilde x^k + By^k - b\|_H^2 + \|w^k - \tilde w^k\|_G^2\bigr)$
(2.10)

and

$\alpha_k \ge \frac{1}{2}.$
(2.11)

Proof It follows from (2.7) that

$\varphi_k = \|w^k - \tilde w^k\|_M^2 + (\lambda^k - \tilde\lambda^k)^T(By^k - B\tilde y^k) = \|x^k - \tilde x^k\|_R^2 + \|y^k - \tilde y^k\|_S^2 + \|By^k - B\tilde y^k\|_H^2 + \|\lambda^k - \tilde\lambda^k\|_{H^{-1}}^2 + (\lambda^k - \tilde\lambda^k)^T(By^k - B\tilde y^k).$
(2.12)

Using (2.4c), we have

$(\lambda^k - \tilde\lambda^k)^T(By^k - B\tilde y^k) + \frac12\bigl(\|By^k - B\tilde y^k\|_H^2 + \|\lambda^k - \tilde\lambda^k\|_{H^{-1}}^2\bigr) = (A\tilde x^k + B\tilde y^k - b)^T H (By^k - B\tilde y^k) + \frac12\bigl(\|By^k - B\tilde y^k\|_H^2 + \|A\tilde x^k + B\tilde y^k - b\|_H^2\bigr) = \frac12\|A\tilde x^k + By^k - b\|_H^2.$
(2.13)

Substituting (2.13) into (2.12), we get

$\varphi_k = \frac12\bigl(\|A\tilde x^k + By^k - b\|_H^2 + \|By^k - B\tilde y^k\|_H^2 + \|\lambda^k - \tilde\lambda^k\|_{H^{-1}}^2\bigr) + \|x^k - \tilde x^k\|_R^2 + \|y^k - \tilde y^k\|_S^2 = \frac12\bigl(\|A\tilde x^k + By^k - b\|_H^2 + \|w^k - \tilde w^k\|_G^2 + (1-\mu)\|x^k - \tilde x^k\|_R^2 + (1-\mu)\|y^k - \tilde y^k\|_S^2\bigr) \ge \frac12\bigl(\|A\tilde x^k + By^k - b\|_H^2 + \|w^k - \tilde w^k\|_G^2\bigr).$

Therefore, it follows from (2.6) and (2.10) that

$\alpha_k = \frac{\varphi_k}{\|w^k - \tilde w^k\|_G^2} \ge \frac{\|A\tilde x^k + By^k - b\|_H^2 + \|w^k - \tilde w^k\|_G^2}{2\|w^k - \tilde w^k\|_G^2} \ge \frac12,$

and this completes the proof. □

3 Basic results

In this section, we prove some basic properties which will be used to establish the convergence of the proposed method. The following results are obtained by applying Lemma 2.2 to the LQP systems in the prediction step of the proposed method.

Lemma 3.1 For given $w^k = (x^k, y^k, \lambda^k) \in R^n_{++} \times R^m_{++} \times R^l$, let $\tilde w^k$ be generated by (2.4a)-(2.4c). Then for any $w^* = (x^*, y^*, \lambda^*) \in W^*$, we have

$(w^k - w^*)^T G (w^k - \tilde w^k) \ge \varphi_k.$
(3.1)

Proof Applying Lemma 2.2 to (2.4a) (by setting $u^k = x^k$, $u = \tilde x^k$, $v = x^*$ in (2.9)) with

$q(u) = f(\tilde x^k) - A^T\bigl[\lambda^k - H(A\tilde x^k + By^k - b)\bigr],$

we get

$(x^* - \tilde x^k)^T\bigl\{f(\tilde x^k) - A^T\bigl[\lambda^k - H(A\tilde x^k + By^k - b)\bigr]\bigr\} \ge \frac{1+\mu}{2}\bigl(\|\tilde x^k - x^*\|_R^2 - \|x^k - x^*\|_R^2\bigr) + \frac{1-\mu}{2}\|x^k - \tilde x^k\|_R^2.$
(3.2)

Recall

$(x^* - \tilde x^k)^T R(x^k - \tilde x^k) = \frac12\bigl(\|\tilde x^k - x^*\|_R^2 - \|x^k - x^*\|_R^2\bigr) + \frac12\|x^k - \tilde x^k\|_R^2.$
(3.3)

Multiplying (3.3) by $(1+\mu)$, subtracting (3.2), and using (2.4c) to write $A^T\bigl[\lambda^k - H(A\tilde x^k + By^k - b)\bigr] = A^T\tilde\lambda^k - A^T H B(y^k - \tilde y^k)$, we obtain

$(x^* - \tilde x^k)^T\bigl\{(1+\mu)R(x^k - \tilde x^k) - f(\tilde x^k) + A^T\tilde\lambda^k - A^T H B(y^k - \tilde y^k)\bigr\} \le \mu\|x^k - \tilde x^k\|_R^2.$
(3.4)

Similarly, applying Lemma 2.2 to (2.4b) (by setting $u^k = y^k$, $u = \tilde y^k$, $v = y^*$, and replacing $R$, $n$ by $S$, $m$, respectively, in (2.9)) with

$q(u) = g(\tilde y^k) - B^T\bigl[\lambda^k - H(A\tilde x^k + B\tilde y^k - b)\bigr],$

we get

$(y^* - \tilde y^k)^T\bigl\{g(\tilde y^k) - B^T\bigl[\lambda^k - H(A\tilde x^k + B\tilde y^k - b)\bigr]\bigr\} \ge \frac{1+\mu}{2}\bigl(\|\tilde y^k - y^*\|_S^2 - \|y^k - y^*\|_S^2\bigr) + \frac{1-\mu}{2}\|y^k - \tilde y^k\|_S^2.$
(3.5)

Recall

$(y^* - \tilde y^k)^T S(y^k - \tilde y^k) = \frac12\bigl(\|\tilde y^k - y^*\|_S^2 - \|y^k - y^*\|_S^2\bigr) + \frac12\|y^k - \tilde y^k\|_S^2.$
(3.6)

Multiplying (3.6) by $(1+\mu)$, subtracting (3.5), and noting that $B^T\bigl[\lambda^k - H(A\tilde x^k + B\tilde y^k - b)\bigr] = B^T\tilde\lambda^k$ by (2.4c), we have

$(y^* - \tilde y^k)^T\bigl\{(1+\mu)S(y^k - \tilde y^k) - g(\tilde y^k) + B^T\tilde\lambda^k\bigr\} \le \mu\|y^k - \tilde y^k\|_S^2.$
(3.7)

Since $(x^*, y^*, \lambda^*)$ is a solution of SVI and $\tilde x^k \in R^n_{++}$, $\tilde y^k \in R^m_{++}$, we have

$(\tilde x^k - x^*)^T\bigl(f(x^*) - A^T\lambda^*\bigr) \ge 0, \qquad (\tilde y^k - y^*)^T\bigl(g(y^*) - B^T\lambda^*\bigr) \ge 0$

and

$Ax^* + By^* - b = 0.$

Using the monotonicity of f and g, we obtain

$\begin{pmatrix} \tilde x^k - x^* \\ \tilde y^k - y^* \\ \tilde\lambda^k - \lambda^* \end{pmatrix}^T \begin{pmatrix} f(\tilde x^k) - A^T\tilde\lambda^k \\ g(\tilde y^k) - B^T\tilde\lambda^k \\ A\tilde x^k + B\tilde y^k - b \end{pmatrix} \ge \begin{pmatrix} \tilde x^k - x^* \\ \tilde y^k - y^* \\ \tilde\lambda^k - \lambda^* \end{pmatrix}^T \begin{pmatrix} f(x^*) - A^T\lambda^* \\ g(y^*) - B^T\lambda^* \\ Ax^* + By^* - b \end{pmatrix} \ge 0.$
(3.8)

Combining (3.4), (3.7), and (3.8), we get

$(w^* - \tilde w^k)^T G(w^k - \tilde w^k) = (x^* - \tilde x^k)^T(1+\mu)R(x^k - \tilde x^k) + (y^* - \tilde y^k)^T\bigl((1+\mu)S + B^T H B\bigr)(y^k - \tilde y^k) + (\lambda^* - \tilde\lambda^k)^T(A\tilde x^k + B\tilde y^k - b)$
$\le \mu\|x^k - \tilde x^k\|_R^2 + (x^* - \tilde x^k)^T A^T H B(y^k - \tilde y^k) + (y^* - \tilde y^k)^T B^T H B(y^k - \tilde y^k) + \mu\|y^k - \tilde y^k\|_S^2$
$= \mu\|x^k - \tilde x^k\|_R^2 - (A\tilde x^k + B\tilde y^k - b)^T H B(y^k - \tilde y^k) + \mu\|y^k - \tilde y^k\|_S^2$
$= \mu\|x^k - \tilde x^k\|_R^2 - (\lambda^k - \tilde\lambda^k)^T(By^k - B\tilde y^k) + \mu\|y^k - \tilde y^k\|_S^2,$
(3.9)

where the last equality follows from (2.4c). It follows from (3.9) that

$(w^k - w^*)^T G(w^k - \tilde w^k) \ge \|w^k - \tilde w^k\|_G^2 - \mu\|x^k - \tilde x^k\|_R^2 - \mu\|y^k - \tilde y^k\|_S^2 + (\lambda^k - \tilde\lambda^k)^T(By^k - B\tilde y^k)$
$= \|x^k - \tilde x^k\|_R^2 + \|y^k - \tilde y^k\|_S^2 + \|By^k - B\tilde y^k\|_H^2 + \|\lambda^k - \tilde\lambda^k\|_{H^{-1}}^2 + (\lambda^k - \tilde\lambda^k)^T(By^k - B\tilde y^k).$

Using the definition of $\varphi_k$ in (2.7), the assertion of this lemma is proved. □

Theorem 3.1 Let $w^* \in W^*$ and let $w^{k+1}(\alpha_k)$ be defined by (2.5). Define

$\Theta(\alpha_k) := \|w^k - w^*\|_G^2 - \|w^{k+1}(\alpha_k) - w^*\|_G^2.$
(3.10)

Then we have

$\Theta(\alpha_k) \ge \sigma\bigl(\|w^k - \bar w^k - \alpha_k(w^k - \tilde w^k)\|_G^2 + 2\alpha_k\varphi_k - \alpha_k^2\|w^k - \tilde w^k\|_G^2\bigr),$
(3.11)

where

$\bar w^k = (\bar x^k, \bar y^k, \bar\lambda^k) := P_W\bigl[w^k - \alpha_k G^{-1} d(w^k, \tilde w^k)\bigr].$
(3.12)

Proof In the same way as for (3.4) and (3.7), with $x^*$ and $y^*$ replaced by $\bar x^k$ and $\bar y^k$, we have

$(\bar x^k - \tilde x^k)^T\bigl\{(1+\mu)R(x^k - \tilde x^k) - f(\tilde x^k) + A^T\tilde\lambda^k - A^T H B(y^k - \tilde y^k)\bigr\} \le \mu\|x^k - \tilde x^k\|_R^2$
(3.13)

and

$(\bar y^k - \tilde y^k)^T\bigl\{(1+\mu)S(y^k - \tilde y^k) - g(\tilde y^k) + B^T\tilde\lambda^k - B^T H B(y^k - \tilde y^k) + B^T H B(y^k - \tilde y^k)\bigr\} \le \mu\|y^k - \tilde y^k\|_S^2.$
(3.14)

It follows from (3.13), (3.14), and the identity $H^{-1}(\lambda^k - \tilde\lambda^k) = A\tilde x^k + B\tilde y^k - b$ (see (2.4c)) that

$\begin{pmatrix} \bar x^k - \tilde x^k \\ \bar y^k - \tilde y^k \\ \bar\lambda^k - \tilde\lambda^k \end{pmatrix}^T \begin{pmatrix} (1+\mu)R(x^k - \tilde x^k) - f(\tilde x^k) + A^T\tilde\lambda^k - A^T H B(y^k - \tilde y^k) \\ \bigl((1+\mu)S + B^T H B\bigr)(y^k - \tilde y^k) - g(\tilde y^k) + B^T\tilde\lambda^k - B^T H B(y^k - \tilde y^k) \\ H^{-1}(\lambda^k - \tilde\lambda^k) - (A\tilde x^k + B\tilde y^k - b) \end{pmatrix} \le \mu\|x^k - \tilde x^k\|_R^2 + \mu\|y^k - \tilde y^k\|_S^2,$

which implies

$2\alpha_k(\bar w^k - \tilde w^k)^T\bigl(G(w^k - \tilde w^k) - d(w^k, \tilde w^k)\bigr) - 2\alpha_k\mu\|x^k - \tilde x^k\|_R^2 - 2\alpha_k\mu\|y^k - \tilde y^k\|_S^2 \le 0.$
(3.15)

Since $w^* \in W$ and $\bar w^k = P_W\bigl[w^k - \alpha_k G^{-1} d(w^k, \tilde w^k)\bigr]$, it follows from (2.3) that

$\|\bar w^k - w^*\|_G^2 \le \|w^k - \alpha_k G^{-1} d(w^k, \tilde w^k) - w^*\|_G^2 - \|w^k - \alpha_k G^{-1} d(w^k, \tilde w^k) - \bar w^k\|_G^2.$
(3.16)

From (2.5), we get

$\|w^{k+1}(\alpha_k) - w^*\|_G^2 = \|(1-\sigma)(w^k - w^*) + \sigma(\bar w^k - w^*)\|_G^2 = (1-\sigma)^2\|w^k - w^*\|_G^2 + \sigma^2\|\bar w^k - w^*\|_G^2 + 2\sigma(1-\sigma)(w^k - w^*)^T G(\bar w^k - w^*).$

Using the following identity:

$2(a + b)^T G b = \|a + b\|_G^2 - \|a\|_G^2 + \|b\|_G^2$

with $a = w^k - \bar w^k$ and $b = \bar w^k - w^*$, together with (3.16), we obtain

$\|w^{k+1}(\alpha_k) - w^*\|_G^2 = (1-\sigma)^2\|w^k - w^*\|_G^2 + \sigma^2\|\bar w^k - w^*\|_G^2 + \sigma(1-\sigma)\bigl\{\|w^k - w^*\|_G^2 - \|w^k - \bar w^k\|_G^2 + \|\bar w^k - w^*\|_G^2\bigr\}$
$= (1-\sigma)\|w^k - w^*\|_G^2 + \sigma\|\bar w^k - w^*\|_G^2 - \sigma(1-\sigma)\|w^k - \bar w^k\|_G^2$
$\le (1-\sigma)\|w^k - w^*\|_G^2 + \sigma\|w^k - \alpha_k G^{-1} d(w^k, \tilde w^k) - w^*\|_G^2 - \sigma\|w^k - \alpha_k G^{-1} d(w^k, \tilde w^k) - \bar w^k\|_G^2 - \sigma(1-\sigma)\|w^k - \bar w^k\|_G^2.$
(3.17)

Using the definition of Θ( α k ) and (3.17), we get

$\Theta(\alpha_k) \ge \sigma\|w^k - \bar w^k\|_G^2 + 2\sigma\alpha_k(\bar w^k - w^k)^T d(w^k, \tilde w^k) + 2\sigma\alpha_k(w^k - w^*)^T d(w^k, \tilde w^k).$
(3.18)

Using the monotonicity of f and g, we obtain

$\begin{pmatrix} \tilde x^k - x^* \\ \tilde y^k - y^* \\ \tilde\lambda^k - \lambda^* \end{pmatrix}^T \begin{pmatrix} f(\tilde x^k) - A^T\tilde\lambda^k \\ g(\tilde y^k) - B^T\tilde\lambda^k \\ A\tilde x^k + B\tilde y^k - b \end{pmatrix} \ge \begin{pmatrix} \tilde x^k - x^* \\ \tilde y^k - y^* \\ \tilde\lambda^k - \lambda^* \end{pmatrix}^T \begin{pmatrix} f(x^*) - A^T\lambda^* \\ g(y^*) - B^T\lambda^* \\ Ax^* + By^* - b \end{pmatrix} \ge 0$

and consequently

$(\tilde w^k - w^*)^T d(w^k, \tilde w^k) \ge (\tilde w^k - w^*)^T \begin{pmatrix} A^T H B(y^k - \tilde y^k) \\ B^T H B(y^k - \tilde y^k) \\ 0 \end{pmatrix} = (A\tilde x^k + B\tilde y^k - b)^T H B(y^k - \tilde y^k) = (\lambda^k - \tilde\lambda^k)^T B(y^k - \tilde y^k)$

and it follows that

$(w^k - w^*)^T d(w^k, \tilde w^k) \ge (w^k - \tilde w^k)^T d(w^k, \tilde w^k) + (\lambda^k - \tilde\lambda^k)^T B(y^k - \tilde y^k).$
(3.19)

Applying (3.19) to the last term on the right-hand side of (3.18), we obtain

$\Theta(\alpha_k) \ge \sigma\|w^k - \bar w^k\|_G^2 + 2\sigma\alpha_k(\bar w^k - w^k)^T d(w^k, \tilde w^k) + 2\sigma\alpha_k\bigl\{(w^k - \tilde w^k)^T d(w^k, \tilde w^k) + (\lambda^k - \tilde\lambda^k)^T B(y^k - \tilde y^k)\bigr\}$
$= \sigma\bigl\{\|w^k - \bar w^k\|_G^2 + 2\alpha_k(\bar w^k - \tilde w^k)^T d(w^k, \tilde w^k) + 2\alpha_k(\lambda^k - \tilde\lambda^k)^T B(y^k - \tilde y^k)\bigr\}.$
(3.20)

Adding (3.15) (multiplied by σ) to (3.20), we get

$\Theta(\alpha_k) \ge \sigma\bigl\{\|w^k - \bar w^k\|_G^2 + 2\alpha_k(\bar w^k - \tilde w^k)^T G(w^k - \tilde w^k) - 2\alpha_k\mu\|x^k - \tilde x^k\|_R^2 - 2\alpha_k\mu\|y^k - \tilde y^k\|_S^2 + 2\alpha_k(\lambda^k - \tilde\lambda^k)^T B(y^k - \tilde y^k)\bigr\}$
$= \sigma\bigl\{\|w^k - \bar w^k - \alpha_k(w^k - \tilde w^k)\|_G^2 - \alpha_k^2\|w^k - \tilde w^k\|_G^2 + 2\alpha_k\|w^k - \tilde w^k\|_G^2 - 2\alpha_k\mu\|x^k - \tilde x^k\|_R^2 - 2\alpha_k\mu\|y^k - \tilde y^k\|_S^2 + 2\alpha_k(\lambda^k - \tilde\lambda^k)^T B(y^k - \tilde y^k)\bigr\},$

and, using the definition of $\varphi_k$ in (2.7), the theorem is proved. □

From the computational point of view, a relaxation factor $\gamma \in (0,2)$ attached to the step size is preferable in the correction step. We are now in a position to prove the contraction property of the iterative sequence.

Theorem 3.2 Let $w^* \in W^*$ be a solution of SVI and let $w^{k+1}(\gamma\alpha_k)$ be generated by (2.5) with $\alpha_k$ replaced by $\gamma\alpha_k$, $\gamma \in (0,2)$. Then $\{w^k\}$ and $\{\tilde w^k\}$ are bounded, and

$\|w^{k+1}(\gamma\alpha_k) - w^*\|_G^2 \le \|w^k - w^*\|_G^2 - c\|w^k - \tilde w^k\|_G^2,$
(3.21)

where

$c := \frac{\sigma\gamma(2-\gamma)}{4} > 0.$

Proof It follows from (3.11), (2.10), and (2.11) that

$\|w^{k+1}(\gamma\alpha_k) - w^*\|_G^2 \le \|w^k - w^*\|_G^2 - \sigma\bigl(2\gamma\alpha_k\varphi_k - \gamma^2\alpha_k^2\|w^k - \tilde w^k\|_G^2\bigr) = \|w^k - w^*\|_G^2 - \gamma(2-\gamma)\alpha_k\sigma\varphi_k \le \|w^k - w^*\|_G^2 - \frac{\sigma\gamma(2-\gamma)}{4}\bigl(\|A\tilde x^k + By^k - b\|_H^2 + \|w^k - \tilde w^k\|_G^2\bigr).$

In particular, (3.21) holds. Since $\gamma \in (0,2)$, we have

$\|w^{k+1} - w^*\|_G \le \|w^k - w^*\|_G \le \cdots \le \|w^0 - w^*\|_G,$

and thus $\{w^k\}$ is a bounded sequence.

It follows from (3.21) that

$\sum_{k=0}^{\infty} c\|w^k - \tilde w^k\|_G^2 < +\infty,$

which means that

$\lim_{k\to\infty} \|w^k - \tilde w^k\|_G = 0.$
(3.22)

Since $\{w^k\}$ is bounded and $\lim_{k\to\infty}\|w^k - \tilde w^k\|_G = 0$, we conclude that $\{\tilde w^k\}$ is also bounded. □

4 Convergence of the proposed method

In this section, we prove the global convergence of the proposed method. The following results can be proved by using the techniques of Lemma 5.1 and Theorem 5.1 in [17].

Lemma 4.1 For given $w^k = (x^k, y^k, \lambda^k) \in R^n_{++} \times R^m_{++} \times R^l$, let $\tilde w^k = (\tilde x^k, \tilde y^k, \tilde\lambda^k)$ be generated by (2.4a)-(2.4c). Then for any $w = (x, y, \lambda) \in W$, we have

$(x - \tilde x^k)^T\bigl(f(\tilde x^k) - A^T\tilde\lambda^k + A^T H B(y^k - \tilde y^k)\bigr) \ge (x^k - \tilde x^k)^T R\bigl\{(1+\mu)x - (\mu x^k + \tilde x^k)\bigr\}$
(4.1)

and

$(y - \tilde y^k)^T\bigl(g(\tilde y^k) - B^T\tilde\lambda^k\bigr) \ge (y^k - \tilde y^k)^T S\bigl\{(1+\mu)y - (\mu y^k + \tilde y^k)\bigr\}.$
(4.2)

Proof Applying Lemma 2.2 to the prediction step of the LQP-ADM (by setting $u^k = x^k$, $u = \tilde x^k$, $q(u) = f(\tilde x^k) - A^T\tilde\lambda^k + A^T H B(y^k - \tilde y^k)$, and $v = x$ in (2.9)), it follows that

$(x - \tilde x^k)^T\bigl(f(\tilde x^k) - A^T\tilde\lambda^k + A^T H B(y^k - \tilde y^k)\bigr) \ge \frac{1+\mu}{2}\bigl(\|\tilde x^k - x\|_R^2 - \|x^k - x\|_R^2\bigr) + \frac{1-\mu}{2}\|x^k - \tilde x^k\|_R^2.$

By a simple manipulation, we have

$\frac{1+\mu}{2}\bigl(\|\tilde x^k - x\|_R^2 - \|x^k - x\|_R^2\bigr) + \frac{1-\mu}{2}\|x^k - \tilde x^k\|_R^2$
$= (1+\mu)x^T R x^k - (1+\mu)x^T R \tilde x^k - (1-\mu)(\tilde x^k)^T R x^k - \mu\|x^k\|_R^2 + \|\tilde x^k\|_R^2$
$= (1+\mu)x^T R(x^k - \tilde x^k) - (x^k - \tilde x^k)^T R(\mu x^k + \tilde x^k)$
$= (x^k - \tilde x^k)^T R\bigl\{(1+\mu)x - (\mu x^k + \tilde x^k)\bigr\},$

and the assertion (4.1) is proved. Similarly we can prove the assertion (4.2). □

Now, we are ready to prove the convergence of the proposed method.

Theorem 4.1 The sequence $\{w^k\}$ generated by the proposed method converges to some $w^\infty$ which is a solution of SVI.

Proof It follows from (3.22) that

$\lim_{k\to\infty}\|x^k - \tilde x^k\|_R = 0, \qquad \lim_{k\to\infty}\|y^k - \tilde y^k\|_S = 0$
(4.3)

and

$\lim_{k\to\infty}\|\lambda^k - \tilde\lambda^k\|_{H^{-1}} = \lim_{k\to\infty}\|A\tilde x^k + B\tilde y^k - b\|_H = 0.$
(4.4)

Moreover, (4.1) and (4.2) imply that

$(x - \tilde x^k)^T\bigl(f(\tilde x^k) - A^T\tilde\lambda^k\bigr) \ge (x^k - \tilde x^k)^T R\bigl\{(1+\mu)x - (\mu x^k + \tilde x^k)\bigr\} - (x - \tilde x^k)^T A^T H B(y^k - \tilde y^k)$

and

$(y - \tilde y^k)^T\bigl(g(\tilde y^k) - B^T\tilde\lambda^k\bigr) \ge (y^k - \tilde y^k)^T S\bigl\{(1+\mu)y - (\mu y^k + \tilde y^k)\bigr\}.$

Since $\{w^k\}$ and $\{\tilde w^k\}$ are bounded, we deduce from (4.3) that

$\begin{cases} \liminf_{k\to\infty}(x - \tilde x^k)^T\bigl\{f(\tilde x^k) - A^T\tilde\lambda^k\bigr\} \ge 0, & \forall x \in R^n_{++}, \\ \liminf_{k\to\infty}(y - \tilde y^k)^T\bigl\{g(\tilde y^k) - B^T\tilde\lambda^k\bigr\} \ge 0, & \forall y \in R^m_{++}. \end{cases}$
(4.5)

Since $\{w^k\}$ is bounded, it has at least one cluster point. Let $w^\infty$ be a cluster point of $\{w^k\}$ and let the subsequence $\{w^{k_j}\}$ converge to $w^\infty$. It follows from (4.4) and (4.5) that

$\begin{cases} \lim_{j\to\infty}(x - x^{k_j})^T\bigl\{f(x^{k_j}) - A^T\lambda^{k_j}\bigr\} \ge 0, & \forall x \in R^n_{++}, \\ \lim_{j\to\infty}(y - y^{k_j})^T\bigl\{g(y^{k_j}) - B^T\lambda^{k_j}\bigr\} \ge 0, & \forall y \in R^m_{++}, \\ \lim_{j\to\infty}\bigl(Ax^{k_j} + By^{k_j} - b\bigr) = 0. \end{cases}$

Consequently

$\begin{cases} (x - x^\infty)^T\bigl\{f(x^\infty) - A^T\lambda^\infty\bigr\} \ge 0, & \forall x \in R^n_{++}, \\ (y - y^\infty)^T\bigl\{g(y^\infty) - B^T\lambda^\infty\bigr\} \ge 0, & \forall y \in R^m_{++}, \\ Ax^\infty + By^\infty - b = 0, \end{cases}$

which means that $w^\infty$ is a solution of SVI.

Now we prove that the whole sequence $\{w^k\}$ converges to $w^\infty$. Since

$\lim_{k\to\infty}\|w^k - \tilde w^k\|_G = 0 \quad \text{and} \quad \{\tilde w^{k_j}\} \to w^\infty,$

for any $\epsilon > 0$ there exists an $l > 0$ such that

$\|\tilde w^{k_l} - w^\infty\|_G < \frac{\epsilon}{2} \quad \text{and} \quad \|w^{k_l} - \tilde w^{k_l}\|_G < \frac{\epsilon}{2}.$
(4.6)

Therefore, for any $k \ge k_l$, it follows from (3.21) (applied with $w^*$ replaced by the solution $w^\infty$) and (4.6) that

$\|w^k - w^\infty\|_G \le \|w^{k_l} - w^\infty\|_G \le \|w^{k_l} - \tilde w^{k_l}\|_G + \|\tilde w^{k_l} - w^\infty\|_G < \epsilon.$

This implies that the sequence $\{w^k\}$ converges to $w^\infty$, which is a solution of SVI. □

5 Comparison

Let

$w_I^{k+1}(\alpha_k) := P_W\bigl[w^k - \alpha_k G^{-1} d(w^k, \tilde w^k)\bigr]$
(5.1)

and

$w_{II}^{k+1}(\alpha_k) := P_W\bigl[w^k - \alpha_k(w^k - \tilde w^k)\bigr]$
(5.2)

denote the new iterates generated by the algorithm presented in this paper and by Li's algorithm in [19], respectively, where $\sigma = 1$. Let

$\Theta_I(\alpha_k) := \|w^k - w^*\|_G^2 - \|w_I^{k+1}(\alpha_k) - w^*\|_G^2$

and

$\Theta_{II}(\alpha_k) := \|w^k - w^*\|_G^2 - \|w_{II}^{k+1}(\alpha_k) - w^*\|_G^2$

measure the progress made by the respective new iterates. From (3.11) (with $\sigma = 1$), we have

$\Theta_I(\alpha_k) \ge q_I(\alpha_k) := \|w^k - w_I^{k+1}(\alpha_k) - \alpha_k(w^k - \tilde w^k)\|_G^2 + 2\alpha_k\varphi_k - \alpha_k^2\|w^k - \tilde w^k\|_G^2.$

Theorem 3.5 of [19] indicates that

$\Theta_{II}(\alpha_k) \ge q_{II}(\alpha_k) := 2\alpha_k\varphi_k - \alpha_k^2\|w^k - \tilde w^k\|_G^2.$

Note that the optimal step sizes used in both methods are identical. Moreover, since $q_I(\alpha_k) - q_{II}(\alpha_k) = \|w^k - w_I^{k+1}(\alpha_k) - \alpha_k(w^k - \tilde w^k)\|_G^2 \ge 0$, we immediately obtain

$q_I(\alpha_k) \ge q_{II}(\alpha_k).$
(5.3)

Inequality (5.3) shows that the lower bound on the progress made at each iteration by the proposed method is at least as large as that of the method in [19]; this explains theoretically why the proposed method can be expected to outperform the method in [19].

6 Preliminary computational results

In this section, we report some numerical results for the proposed method. We consider the following optimization problem with matrix variables:

$\min\Bigl\{\frac12\|X - C\|_F^2 \Bigm| X \in S_+^n\Bigr\},$
(6.1)

where $\|\cdot\|_F$ is the matrix Frobenius norm, i.e., $\|C\|_F = \bigl(\sum_{i=1}^n\sum_{j=1}^n |C_{ij}|^2\bigr)^{1/2}$, and

$S_+^n = \bigl\{H \in R^{n\times n} \mid H^T = H,\ H \succeq 0\bigr\}.$

Note that the matrix Frobenius norm is induced by the inner product

$\langle A, B\rangle = \operatorname{Trace}(A^T B).$

Note that the problem (6.1) is equivalent to the following problem:

$\min\ \frac12\|X - C\|_F^2 + \frac12\|Y - C\|_F^2 \quad \text{s.t.} \quad X - Y = 0,\ X, Y \in S_+^n,$
(6.2)

which in turn is equivalent to the following variational inequality: find $u^* = (X^*, Y^*, Z^*) \in \Omega = S_+^n \times S_+^n \times R^{n\times n}$ such that

$\begin{cases} \langle X - X^*, (X^* - C) - Z^*\rangle \ge 0, \\ \langle Y - Y^*, (Y^* - C) + Z^*\rangle \ge 0, \\ X^* - Y^* = 0, \end{cases} \qquad \forall u = (X, Y, Z) \in \Omega.$
(6.3)

The problem (6.3) is a special case of (1.3)-(1.4) with matrix variables, where $A = I_{n\times n}$, $B = -I_{n\times n}$, $b = 0$, $f(X) = X - C$, $g(Y) = Y - C$, and $W = S_+^n \times S_+^n \times R^{n\times n}$.
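The correction step then requires projections onto $S_+^n$ (in the Frobenius norm, since $R$, $S$, and $H$ are chosen below as multiples of the identity). A minimal Matlab sketch of this standard eigenvalue-based projection (the function name proj_psd is ours):

    function P = proj_psd(M)
    % Project a square matrix M onto S_+^n in the Frobenius norm:
    % symmetrize, then clip the negative eigenvalues at zero.
    M = (M + M')/2;
    [V, D] = eig(M);
    P = V*max(D, 0)*V';
    P = (P + P')/2;      % guard against round-off asymmetry
    end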

For simplicity, we take $R = rI_{n\times n}$, $S = sI_{n\times n}$, and $H = I_{n\times n}$, where $r > 0$ and $s > 0$ are scalars. In all tests we take $\mu = 0.5$, $C = \operatorname{rand}(n)$, and $(X^0, Y^0, Z^0) = (I_{n\times n}, I_{n\times n}, 0_{n\times n})$ as the initial point. The iteration is stopped as soon as

$\max\bigl\{\|X^k - \tilde X^k\|, \|Y^k - \tilde Y^k\|, \|Z^k - \tilde Z^k\|\bigr\} \le 10^{-6}.$
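In Matlab, this stopping test can be written as follows (a sketch assuming the Frobenius norm and variables Xk, Xt, etc. for $X^k$, $\tilde X^k$):

    res = max([norm(Xk - Xt, 'fro'), norm(Yk - Yt, 'fro'), norm(Zk - Zt, 'fro')]);
    if res <= 1e-6, break; end    % leave the main iteration loop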

All codes were written in Matlab, and we compare the proposed method with that in [19]. The iteration numbers, denoted by $k$, and the computational times for the problem (6.1) with different dimensions are given in Tables 1-3.

Table 1 Numerical results for the problem (6.1) with $r = s = 0.8$
Table 2 Numerical results for the problem (6.1) with $r = s = 1$
Table 3 Numerical results for the problem (6.1) with $r = s = 5$

Tables 1-3 show that the proposed method is flexible and efficient. Moreover, they demonstrate computationally that the new method is more effective than the method presented in [19], in the sense that the new method needs fewer iterations and less computational time; this clearly illustrates its efficiency and thus justifies the theoretical assertions.

Remark 6.1 For the example used in the numerical experiments of [19], we obtained the same results as in [19], so we do not include them here.

7 Conclusions

In this paper, we have proposed a new logarithmic-quadratic proximal alternating direction method (LQP-ADM) for solving structured variational inequalities. Each iteration of the new LQP-ADM consists of a prediction step, in which a predictor is obtained as in [15], and a correction step, in which the new iterate is generated as a convex combination of the previous iterate and a point produced by a projection-type step along a new descent direction. Global convergence of the proposed method is proved under mild assumptions. Furthermore, it is proved theoretically that the lower bound of the progress achieved by the proposed method is greater than that of the method in [19]. Some preliminary numerical results verify the efficiency of the proposed LQP-ADM and thus justify the theoretical assertions.