1 Introduction

Consider the following weakly absolute value equations (AVE):

$$ Ax-\vert x\vert =b, $$
(1.1)

where \(b \in\mathbb{R}^{n}\), \(\vert \cdot \vert \) denotes the componentwise absolute value, \(A=W+iT\), where \(W\in\mathbb{R}^{n\times n}\) is a symmetric positive definite matrix and \(T\in\mathbb{R}^{n\times n}\) is a symmetric positive semidefinite matrix, and \(i=\sqrt{-1}\) denotes the imaginary unit. A general form of the AVE (1.1)

$$ Ax+B\vert x\vert =b, $$
(1.2)

was first introduced in [1] and investigated in [2]. The AVE (1.1) is an important class of nonlinear systems, and it often arises from the fact that linear programs, quadratic programs, and bimatrix games can all be reduced to a linear complementarity problem (LCP) [3–5]. This fact implies that the AVE (1.1) is equivalent to the LCP [5] and turns out to be NP-hard; see [2].

In recent years, some efficient methods have been proposed to solve the AVE (1.1), such as the smoothing Newton method [6], the generalized Newton method [7–11], and the sign accord method [12]. For other iteration methods, see [13–15] for more details.

When the involved matrix A in (1.1) is a non-Hermitian positive definite matrix, based on the Hermitian and skew-Hermitian splitting (HSS) iteration method [16], the Picard-HSS iteration method for solving the AVE (1.1) has been proposed in [17]. Numerical results show that the Picard-HSS iteration method outperforms the Picard and generalized Newton methods under certain conditions. Although the Picard-HSS iteration method is efficient and competitive, the numbers of inner HSS iteration steps are often problem-dependent and difficult to determine in actual computations. To overcome this disadvantage and improve the convergence of the Picard-HSS iteration method, the nonlinear HSS-like iteration method has been presented in [18] and its convergence conditions have been established. Numerical experiments demonstrate that the nonlinear HSS-like iteration method is feasible and robust.

When the involved matrix A in (1.1) is \(A = W + iT\), the convergence rate of the aforementioned Picard-HSS and nonlinear HSS-like methods may deteriorate. The reason is that each step of the Picard-HSS and HSS-like iterations requires solving two linear subsystems, one with the symmetric positive definite matrix \(\alpha I +W\) and one with the shifted skew-Hermitian matrix \(\alpha I + iT\). It is well known that the solution of the linear system with the coefficient matrix \(\alpha I + iT\) is not easy to obtain [19]. To overcome this defect, based on the MHSS iteration method [20], we establish the nonlinear MHSS-like iteration method for solving the AVE (1.1). Compared with the nonlinear HSS-like iteration method, the potential advantage of the nonlinear MHSS-like iteration method is that only two linear subsystems with coefficient matrices \(\alpha I +W\) and \(\alpha I +T\), both real and symmetric positive definite, need to be solved at each step. Thus the nonlinear MHSS-like iteration method avoids the shifted skew-Hermitian linear subsystem with coefficient matrix \(\alpha I + iT\). Therefore, in this case, the two linear subsystems can be solved either exactly by a sparse Cholesky factorization or inexactly by a conjugate gradient scheme. The convergence conditions of the nonlinear MHSS-like iteration method are obtained by using a smoothing approximate function.

The remainder of the paper is organized as follows. In Section 2, the MHSS iteration method is briefly reviewed. The nonlinear MHSS-like iteration method is discussed in Section 3. Numerical experiments are reported in Section 4. Finally, in Section 5 we draw some conclusions.

2 The MHSS iteration method

To establish the nonlinear MHSS-like iteration method for solving the AVE (1.1), we first briefly review the MHSS iteration method.

When B is the zero matrix, the generalized AVE (1.2) reduces to the system of linear equations

$$ Ax=b, $$
(2.1)

where \(A=W+iT\) with symmetric positive definite matrix \(W\in\mathbb{R}^{n\times n}\) and symmetric positive semidefinite matrix \(T\in\mathbb{R}^{n\times n}\). In fact, the matrices W and iT are the Hermitian and skew-Hermitian parts of the complex symmetric matrix A, respectively.

Obviously, the linear system (2.1) can be rewritten in the following two equivalent forms:

$$\begin{aligned}& (\alpha I + W)x = (\alpha I -i T )x + b, \end{aligned}$$
(2.2)
$$\begin{aligned}& (\alpha I + T)x = (\alpha I +i W )x- ib. \end{aligned}$$
(2.3)

Combining (2.2) with (2.3), Bai et al. [20] skillfully designed the following modified HSS (MHSS) method for solving the complex symmetric linear system (2.1).

The MHSS method. Let \(x^{(0)}\in\mathbb{C}^{n}\) be an arbitrary initial point. For \(k=0,1,2,\ldots\) , compute the next iterate \(x^{(k+1)}\) according to the following procedure:

$$ \left\{ \textstyle\begin{array}{l} (\alpha I+ W)x^{(k+\frac{1}{2})}=(\alpha I-iT)x^{(k)}+b, \\ (\alpha I+T)x^{(k+1)}=(\alpha I+iW)x^{(k+\frac{1}{2})}-ib, \end{array}\displaystyle \right.$$
(2.4)

where α is a given positive constant and I is the identity matrix.

In matrix-vector form, the MHSS iteration (2.4) can be equivalently rewritten as

$$x^{(k+1)}=M_{\alpha}x^{(k)}+G_{\alpha}b=M_{\alpha}^{k+1}x^{(0)}+ \sum_{j=0}^{k}M_{\alpha}^{j}G_{\alpha}b,\quad k=0,1,\ldots, $$

where

$$\begin{aligned}& M_{\alpha} =(\alpha I+T)^{-1}(\alpha I+iW) (\alpha I+W)^{-1}(\alpha I-iT), \\& G_{\alpha}=(1-i)\alpha(\alpha I+T)^{-1}(\alpha I+ W)^{-1}. \end{aligned}$$

Here, \(M_{\alpha}\) is the iteration matrix of the MHSS method.

Theoretical analysis in [20] shows that the MHSS method converges unconditionally to the unique solution of the complex symmetric linear system (2.1) when \(W\in\mathbb{R}^{n\times n}\) is symmetric positive definite and \(T\in\mathbb{R}^{n\times n}\) is symmetric positive semidefinite.
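For concreteness, the following is a minimal NumPy sketch of the MHSS iteration (2.4); the function name `mhss`, the dense solves via `numpy.linalg.solve`, and the stopping rule are illustrative choices, not the implementation of [20].

```python
import numpy as np

def mhss(W, T, b, alpha, tol=1e-6, maxit=1000):
    """Sketch of the MHSS iteration (2.4) for (W + iT) x = b, with W real
    symmetric positive definite, T real symmetric positive semidefinite,
    and alpha > 0.  Dense solves are used for clarity only."""
    n = W.shape[0]
    I = np.eye(n)
    A = W + 1j * T
    x = np.zeros(n, dtype=complex)
    for k in range(maxit):
        # (alpha I + W) x^{k+1/2} = (alpha I - iT) x^k + b
        x_half = np.linalg.solve(alpha * I + W, (alpha * I - 1j * T) @ x + b)
        # (alpha I + T) x^{k+1} = (alpha I + iW) x^{k+1/2} - i b
        x = np.linalg.solve(alpha * I + T, (alpha * I + 1j * W) @ x_half - 1j * b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit
```

Since both \(\alpha I+W\) and \(\alpha I+T\) are real symmetric positive definite, in practice one would factor them once (e.g., by Cholesky) outside the loop rather than re-solving from scratch at every step.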

3 The nonlinear MHSS-like iteration method

In this section, we establish the nonlinear MHSS-like iteration method for solving the AVE (1.1). Using the techniques in [21], the AVE (1.1) can be rewritten as the systems of nonlinear fixed-point equations

$$\begin{aligned}& (\alpha I + W)x = (\alpha I -i T )x +\vert x\vert + b, \end{aligned}$$
(3.1)
$$\begin{aligned}& (\alpha I + T)x = (\alpha I +i W )x-i\vert x\vert - ib. \end{aligned}$$
(3.2)

We now establish the following nonlinear MHSS-like iteration method for solving the AVE (1.1) by alternately iterating between the two systems of nonlinear fixed-point equations (3.1) and (3.2).

The nonlinear MHSS-like method. Let \(x^{(0)}\in \mathbb{C}^{n}\) be an arbitrary initial point. For \(k=0,1,2,\ldots\) , compute the next iterate \(x^{(k+1)}\) according to the following procedure:

$$ \left\{ \textstyle\begin{array}{l} (\alpha I+ W)x^{(k+\frac{1}{2})}=(\alpha I-iT)x^{(k)}+\vert x^{(k)}\vert +b, \\ (\alpha I+T)x^{(k+1)}=(\alpha I+iW)x^{(k+\frac{1}{2})}-i\vert x^{(k+\frac{1}{2})}\vert -ib, \end{array}\displaystyle \right.$$
(3.3)

where α is a given positive constant and I is the identity matrix.

Evidently, each step of the nonlinear MHSS-like iteration alternates between the two real symmetric positive definite matrices \(\alpha I+W\) and \(\alpha I+T\). Hence, the two linear subsystems involved in each step of the nonlinear MHSS-like iteration can be solved effectively and exactly by the Cholesky factorization. By contrast, in the Picard-HSS and nonlinear HSS-like methods, a shifted skew-Hermitian linear subsystem with coefficient matrix \(\alpha I+iT\) needs to be solved at every iterative step.
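A minimal sketch of the iteration (3.3), assuming dense matrices and SciPy's Cholesky routines, might read as follows; the helper names `nonlinear_mhss_like` and `_solve` are illustrative, and the stopping rule anticipates the criterion used in Section 4. Because both coefficient matrices are real, the complex right-hand sides are split into real and imaginary parts, so only real symmetric positive definite solves are ever performed.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def _solve(factor, rhs):
    # Solve with a real Cholesky factor for a complex right-hand side
    return cho_solve(factor, rhs.real) + 1j * cho_solve(factor, rhs.imag)

def nonlinear_mhss_like(W, T, b, alpha, tol=1e-6, maxit=1000):
    """Sketch of the nonlinear MHSS-like iteration (3.3) for Ax - |x| = b,
    A = W + iT.  The matrices alpha*I + W and alpha*I + T are real SPD,
    so each is Cholesky-factored once and reused in every iteration."""
    n = W.shape[0]
    I = np.eye(n)
    A = W + 1j * T
    fW = cho_factor(alpha * I + W)
    fT = cho_factor(alpha * I + T)
    x = np.zeros(n, dtype=complex)
    for k in range(maxit):
        # (alpha I + W) x^{k+1/2} = (alpha I - iT) x^k + |x^k| + b
        x_half = _solve(fW, (alpha * I - 1j * T) @ x + np.abs(x) + b)
        # (alpha I + T) x^{k+1} = (alpha I + iW) x^{k+1/2} - i|x^{k+1/2}| - i b
        x = _solve(fT, (alpha * I + 1j * W) @ x_half
                       - 1j * np.abs(x_half) - 1j * b)
        if np.linalg.norm(b + np.abs(x) - A @ x) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit
```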

Let

$$ \left\{ \textstyle\begin{array}{l} U(x)=(\alpha I+ W)^{-1}((\alpha I-iT)x+\vert x\vert +b), \\ V(x)=(\alpha I+T)^{-1}((\alpha I+iW)x-i\vert x\vert -ib), \end{array}\displaystyle \right.$$
(3.4)

and

$$ \psi(x)=V\circ U(x)=V\bigl(U(x)\bigr). $$
(3.5)

Then, from (3.4) and (3.5), the nonlinear MHSS-like iteration scheme can be equivalently rewritten as

$$ x^{(k+1)}=\psi\bigl(x^{(k)}\bigr), \quad k=0,1,2,\ldots. $$
(3.6)

Definition 3.1

[22]

Let \(G:\mathbb{D}\subset \mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\). Then \(x^{\ast}\) is a point of attraction of the iteration

$$x^{k+1}=Gx^{k}, \quad k=0,1,2,\ldots, $$

if there is an open neighborhood S of \(x^{\ast}\) such that \(S\subset \mathbb{D}\) and, for any \(x^{0}\in S\), the iterates \(\{x^{k}\}\) all lie in \(\mathbb{D}\) and converge to \(x^{\ast}\).

Lemma 3.1

Ostrowski theorem [22]

Suppose that \(G:\mathbb{D}\subset \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}\) has a fixed point \(x^{\ast}\in \operatorname {int}(\mathbb{D})\) and is F-differentiable at \(x^{\ast}\). If the spectral radius of \(G'(x^{\ast})\) is less than 1, then \(x^{\ast}\) is a point of attraction of the iteration \(x^{k+1}=Gx^{k}\), \(k=0,1,2,\ldots\) .

We note that we cannot directly use the Ostrowski theorem (Theorem 10.1.3 in [22]) to give a local convergence theory for the iteration (3.6), because the nonlinear term \(\vert x\vert +b\) is non-differentiable. To overcome this defect, based on the smoothing approximate function introduced in [6], we establish the following local convergence theory for the nonlinear MHSS-like iteration method.

Define \(\phi:\mathbb{D}\subset\mathbb{C}^{n}\rightarrow\mathbb{C}^{n}\) by

$$\phi(x)=\sqrt{x^{2}+\epsilon^{2}}=\Bigl(\sqrt {x_{1}^{2}+\epsilon^{2}},\ldots ,\sqrt {x_{n}^{2}+\epsilon^{2}} \Bigr)^{T}, \quad \epsilon>0, x\in\mathbb{D}. $$

It is easy to see that \(\phi(x)\) is a smoothing function of \(\vert x\vert \). Based on the results in [6], the following properties of \(\phi(x)\) can easily be given.

Lemma 3.2

\(\phi(x)\) is a uniformly smoothing approximation function of \(\vert x\vert \), i.e.,

$$\bigl\Vert \phi(x)-\vert x\vert \bigr\Vert \leq\sqrt{n}\epsilon. $$

Lemma 3.3

For any \(\epsilon>0\), the Jacobian of \(\phi(x)\) at \(x\in\mathbb{C}^{n}\) is

$$ D=\phi'(x)=\operatorname {diag}\biggl(\frac{x_{i}}{\sqrt{x_{i}^{2}+\epsilon ^{2}}},i=1,2, \ldots,n \biggr). $$
(3.7)
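The function \(\phi\) and its Jacobian (3.7) are simple to realize; the short sketch below (with illustrative names, for real vectors) also verifies the uniform bound of Lemma 3.2 numerically.

```python
import numpy as np

def phi(x, eps):
    """Componentwise smoothing of |x|: phi(x)_i = sqrt(x_i^2 + eps^2)."""
    return np.sqrt(x**2 + eps**2)

def phi_jacobian(x, eps):
    """Diagonal Jacobian D = diag(x_i / sqrt(x_i^2 + eps^2)) from (3.7)."""
    return np.diag(x / np.sqrt(x**2 + eps**2))

# Lemma 3.2: ||phi(x) - |x||| <= sqrt(n) * eps for every real x
x, eps = np.random.randn(1000), 1e-3
assert np.linalg.norm(phi(x, eps) - np.abs(x)) <= np.sqrt(x.size) * eps
```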

Using the smoothing approximate function \(\phi(x)\) instead of \(\vert x\vert \) in (3.4), we have

$$ \left\{ \textstyle\begin{array}{l} \hat{U}(x)=(\alpha I+ W)^{-1}((\alpha I-iT)x+\phi(x)+b), \\ \hat{V}(x)=(\alpha I+T)^{-1}((\alpha I+iW)x-i\phi(x)-ib), \end{array}\displaystyle \right.$$
(3.8)

and

$$ \hat{\psi}(x)=\hat{V}\circ\hat{U}(x)=\hat{V}\bigl(\hat{U}(x) \bigr). $$
(3.9)

Then, from (3.8) and (3.9), the smoothing nonlinear MHSS-like iteration scheme can be equivalently rewritten as

$$ \hat{x}^{(k+1)}=\hat{\psi}\bigl(x^{(k)}\bigr),\quad k=0,1,2,\ldots. $$
(3.10)

Theorem 3.1

If the AVE (1.1) is weakly absolute, then the AVE (1.1) has a unique solution.

Proof

In fact, if the AVE (1.1) is weakly absolute, then the linear term Ax is strongly dominant over the nonlinear term \(\vert x\vert \) in a certain consistent matrix norm [21]. By the equivalence of norms, we may take the 2-norm and obtain

$$\Vert Ax\Vert _{2}>\bigl\Vert \vert x\vert \bigr\Vert _{2}\Leftrightarrow x^{T}A^{T}Ax>x^{T}x \Leftrightarrow x^{T}\bigl(A^{T}A-I\bigr)x>0. $$

This shows that, when the AVE (1.1) is weakly absolute, the matrix \(A^{T}A-I\) is symmetric positive definite. Based on Theorem 4.1 in [23], the result in Theorem 3.1 holds. □

To obtain the convergence conditions of the nonlinear MHSS-like method, we define a matrix norm \(\Vert X\Vert =\Vert (\alpha I +T)X(\alpha I +T)^{-1}\Vert _{2}\) for any \(X\in\mathbb{C}^{n\times n}\). Let \(\theta(\alpha)= \Vert M_{\alpha} \Vert \). For \(\alpha>0\), we have

$$\begin{aligned} \theta(\alpha)&= \Vert M_{\alpha} \Vert =\bigl\Vert (\alpha I+iW) ( \alpha I+W)^{-1}(\alpha I-iT) (\alpha I+T)^{-1}\bigr\Vert _{2} \\ &\leq\bigl\Vert (\alpha I+iW) (\alpha I+W)^{-1} \bigr\Vert _{2}\bigl\Vert (\alpha I-iT) (\alpha I+T)^{-1}\bigr\Vert _{2} \\ &\leq\bigl\Vert (\alpha I+iW) (\alpha I+W)^{-1}\bigr\Vert _{2}< 1; \end{aligned}$$

see [20] for details.
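The strict bound in the last line follows from an eigenvalue argument: since W is symmetric positive definite, \((\alpha I+iW)(\alpha I+W)^{-1}\) is normal, and

$$\bigl\Vert (\alpha I+iW) (\alpha I+W)^{-1}\bigr\Vert _{2}=\max_{\lambda_{j}\in\sigma(W)}\frac{\vert \alpha+i\lambda_{j}\vert }{\alpha+\lambda_{j}} =\max_{\lambda_{j}\in\sigma(W)}\frac{\sqrt{\alpha^{2}+\lambda_{j}^{2}}}{\alpha+\lambda_{j}}< 1, $$

because \(\lambda_{j}>0\) for every eigenvalue of W; the analogous computation with the eigenvalues \(\mu_{j}\geq0\) of T gives \(\Vert (\alpha I-iT)(\alpha I+T)^{-1}\Vert _{2}\leq1\).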

Theorem 3.2

Let \(\phi(x)=\sqrt{x^{2}+\epsilon^{2}}\) (\(\epsilon>0\)) be F-differentiable at a point \(x^{\ast}\in\mathbb{D}\) with \(Ax^{\ast}=\phi(x^{\ast})+b\), and let \(A=W+iT\), where W is symmetric positive definite and T is symmetric positive semidefinite. Let

$$\begin{aligned}& M_{\alpha,x^{\ast}}=(\alpha I +T)^{-1}\bigl(\alpha I +iW-i \phi'\bigl(x^{\ast}\bigr)\bigr) (\alpha I+ W)^{-1} \bigl(\alpha I-iT+\phi'\bigl(x^{\ast}\bigr)\bigr), \\& \delta=\max\bigl\{ \bigl\Vert (\alpha I +W)^{-1}\bigr\Vert _{2}, \bigl\Vert (\alpha I +T)^{-1}\bigr\Vert _{2}\bigr\} , \quad\quad \theta(\alpha)= \Vert M_{\alpha} \Vert . \end{aligned}$$

If

$$ \delta< \sqrt{2- \theta(\alpha)}-1, $$
(3.11)

then \(\rho(M_{\alpha,x^{\ast}})<1\), which implies that \(x^{\ast}\in \mathbb{D}\) is a point of attraction of the smoothing nonlinear MHSS-like iteration (3.10).

Proof

Since

$$\hat{U}'\bigl(x^{\ast}\bigr)=(\alpha I+ W)^{-1} \bigl((\alpha I-iT)+\phi'\bigl(x^{\ast}\bigr)\bigr) \quad \mbox{and} \quad \hat{V}'\bigl(x^{\ast}\bigr)=(\alpha I+T)^{-1} \bigl((\alpha I+iW)-i\phi'\bigl(x^{\ast}\bigr)\bigr), $$

it is easy to obtain \(\hat{\psi}'(x^{\ast})=M_{\alpha,x^{\ast}}\).

By direct computations, we have

$$ \begin{aligned} &\bigl\Vert \phi'\bigl(x^{\ast}\bigr)\bigr\Vert _{2}=\Vert D\Vert _{2}< 1,\quad\quad \bigl\Vert (\alpha I-iT) ( \alpha I +T)^{-1}\bigr\Vert _{2}\leq1, \\ & \bigl\Vert (\alpha I +iW) (\alpha I+W)^{-1}\bigr\Vert _{2}< 1. \end{aligned} $$
(3.12)

Then from (3.12), we have

$$\begin{aligned} \Vert M_{\alpha,x^{\ast}} \Vert &=\bigl\Vert (\alpha I +T)M_{\alpha,x^{\ast}}( \alpha I +T)^{-1}\bigr\Vert _{2} \\ &\leq\bigl\Vert (\alpha I+T)M_{\alpha}(\alpha I+T)^{-1}\bigr\Vert _{2}+\bigl\Vert (\alpha I +iW) (\alpha I+W)^{-1} \phi'\bigl(x^{\ast}\bigr) (\alpha I +T)^{-1}\bigr\Vert _{2} \\ &\quad{} +\bigl\Vert \phi'\bigl(x^{\ast}\bigr) (\alpha I+W)^{-1}(\alpha I-iT) (\alpha I +T)^{-1}\bigr\Vert _{2} \\ &\quad{} +\bigl\Vert \phi'\bigl(x^{\ast}\bigr) (\alpha I+W)^{-1}\phi'\bigl(x^{\ast}\bigr) (\alpha I +T)^{-1}\bigr\Vert _{2} \\ &\leq \Vert M_{\alpha} \Vert +\bigl\Vert (\alpha I +iW) (\alpha I+W)^{-1}\bigr\Vert _{2}\bigl\Vert \phi' \bigl(x^{\ast}\bigr) (\alpha I +T)^{-1}\bigr\Vert _{2} \\ &\quad{} +\bigl\Vert \phi'\bigl(x^{\ast}\bigr) (\alpha I+W)^{-1}\bigr\Vert _{2}\bigl\Vert (\alpha I-iT) (\alpha I +T)^{-1}\bigr\Vert _{2} \\ &\quad{} +\bigl\Vert \phi'\bigl(x^{\ast}\bigr) (\alpha I+W)^{-1}\bigr\Vert _{2}\bigl\Vert \phi' \bigl(x^{\ast}\bigr) (\alpha I +T)^{-1}\bigr\Vert _{2} \\ &\leq \Vert M_{\alpha} \Vert +\bigl\Vert (\alpha I +iW) (\alpha I+W)^{-1}\bigr\Vert _{2}\bigl\Vert (\alpha I +T)^{-1}\bigr\Vert _{2} \\ &\quad{} +\bigl\Vert (\alpha I+W)^{-1}\bigr\Vert _{2}\bigl\Vert (\alpha I-iT) (\alpha I +T)^{-1}\bigr\Vert _{2}+ \bigl\Vert (\alpha I+W)^{-1}\bigr\Vert _{2}\bigl\Vert ( \alpha I +T)^{-1}\bigr\Vert _{2} \\ &\leq\theta(\alpha)+2\delta+\delta^{2}. \end{aligned}$$

Obviously, under the condition (3.11), we have \(\rho(M_{\alpha,x^{\ast}})\leq \Vert M_{\alpha,x^{\ast}} \Vert <1\). Based on Lemma 3.1 (the Ostrowski theorem), the result in Theorem 3.2 holds. □

Observe that the matrices W and T are usually strongly dominant over the matrix \(\phi'(x^{\ast})\) in a certain norm (i.e., \(\Vert W\Vert \gg \Vert \phi'(x^{\ast})\Vert \) and \(\Vert T\Vert \gg \Vert \phi'(x^{\ast})\Vert \) for a certain matrix norm). Therefore, the matrix A satisfies the condition (3.11) with W symmetric positive definite and T symmetric positive semidefinite. In fact, in this case, the matrix \(M_{\alpha,x^{\ast}}\) can be approximated by the matrix \(M_{\alpha}\), and the condition (3.11) reduces to \(\rho(M_{\alpha})<1\). Clearly, \(\rho(M_{\alpha})\leq\theta(\alpha)<1\).
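For small problems the quantities in Theorem 3.2 can be evaluated directly, which gives a quick numerical check of condition (3.11) and of the approximation \(M_{\alpha,x^{\ast}}\approx M_{\alpha}\); the sketch below uses dense linear algebra and illustrative names.

```python
import numpy as np

def check_condition_311(W, T, alpha, x_star, eps):
    """Evaluate theta(alpha), delta, and rho(M_{alpha,x*}) for small dense
    W (SPD) and T (SPSD); a sanity-check sketch, not the paper's code."""
    n = W.shape[0]
    I = np.eye(n)
    inv_W = np.linalg.inv(alpha * I + W)
    inv_T = np.linalg.inv(alpha * I + T)
    D = np.diag(x_star / np.sqrt(x_star**2 + eps**2))    # Jacobian (3.7)
    # theta(alpha) = ||(aI + iW)(aI + W)^{-1}(aI - iT)(aI + T)^{-1}||_2
    theta = np.linalg.norm((alpha * I + 1j * W) @ inv_W
                           @ (alpha * I - 1j * T) @ inv_T, 2)
    delta = max(np.linalg.norm(inv_W, 2), np.linalg.norm(inv_T, 2))
    M = inv_T @ (alpha * I + 1j * W - 1j * D) @ inv_W @ (alpha * I - 1j * T + D)
    rho = np.max(np.abs(np.linalg.eigvals(M)))
    return theta, delta, rho, bool(delta < np.sqrt(2.0 - theta) - 1.0)
```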

Theorem 3.3

Let the condition of Theorem 3.2 be satisfied. For any initial point \(x^{(0)}\in\mathbb{D}\subset\mathbb{C}^{n}\), the iteration sequence \(\{x^{(k)}\}_{k=0}^{\infty}\) generated by the nonlinear MHSS-like iteration (3.6) can be approximated by the sequence produced by its smoothed scheme (3.10), i.e.,

$$\bigl\Vert \psi\bigl(x^{(k)}\bigr)-\hat{\psi}\bigl(x^{(k)} \bigr)\bigr\Vert < \varepsilon, \quad\textit{for any } \varepsilon>0, $$

provided

$$ \epsilon< \frac{\varepsilon}{\sqrt{n}\Vert (\alpha I +T)^{-1}\Vert (2+\Vert (\alpha I+ W)^{-1}\Vert )}. $$
(3.13)

Proof

Based on (3.6) and (3.10), and combining (3.12) with Lemma 3.2, we have

$$\begin{aligned} \bigl\Vert x^{(k+1)}-\hat{x}^{(k+1)}\bigr\Vert &=\bigl\Vert \psi\bigl(x^{(k)}\bigr)-\hat{\psi}\bigl(x^{(k)}\bigr)\bigr\Vert \\ &\leq\bigl\Vert (\alpha I +T)^{-1}(\alpha I +iW) (\alpha I+ W)^{-1}\bigl(\phi\bigl(x^{k}\bigr)-\bigl\vert x^{k}\bigr\vert \bigr)\bigr\Vert \\ &\quad{} +\bigl\Vert (\alpha I +T)^{-1}\bigl(\phi\bigl(\hat{U} \bigl(x^{k}\bigr)\bigr)-\bigl\vert U\bigl(x^{k}\bigr)\bigr\vert \bigr)\bigr\Vert \\ &\leq\bigl\Vert (\alpha I +T)^{-1}(\alpha I +iW) (\alpha I+ W)^{-1}\bigl(\phi\bigl(x^{k}\bigr)-\bigl\vert x^{k}\bigr\vert \bigr)\bigr\Vert \\ &\quad{} +\bigl\Vert (\alpha I +T)^{-1}\bigl(\phi\bigl(\hat{U} \bigl(x^{k}\bigr)\bigr)-\bigl\vert \hat{U}\bigl(x^{k}\bigr) \bigr\vert +\bigl\vert \hat {U}\bigl(x^{k}\bigr)\bigr\vert -\bigl\vert U\bigl(x^{k}\bigr)\bigr\vert \bigr)\bigr\Vert \\ &\leq\bigl\Vert (\alpha I +T)^{-1}(\alpha I +iW) (\alpha I+ W)^{-1}\bigr\Vert \cdot\bigl\Vert \bigl(\phi\bigl(x^{k} \bigr)-\bigl\vert x^{k}\bigr\vert \bigr)\bigr\Vert \\ &\quad{} +\bigl\Vert (\alpha I +T)^{-1}\bigr\Vert \cdot\bigl\Vert \phi \bigl(\hat{U}\bigl(x^{k}\bigr)\bigr)-\bigl\vert \hat{U} \bigl(x^{k}\bigr)\bigr\vert \bigr\Vert \\ &\quad{} +\bigl\Vert (\alpha I +T)^{-1}\bigr\Vert \cdot\bigl\Vert ( \alpha I+ W)^{-1}\bigr\Vert \cdot\bigl\Vert \phi \bigl(x^{k}\bigr)-\bigl\vert x^{k}\bigr\vert \bigr\Vert \\ &\leq\sqrt{n}\epsilon\bigl\Vert (\alpha I +T)^{-1}\bigr\Vert \bigl(2+\bigl\Vert (\alpha I+ W)^{-1}\bigr\Vert \bigr). \end{aligned}$$

For any \(\varepsilon>0\), under the condition (3.13), \(\Vert x^{(k+1)}-\hat{x}^{(k+1)}\Vert <\varepsilon\). This implies that the result in Theorem 3.3 holds. □

Under the condition of Theorem 3.2,

$$\begin{aligned} \bigl\Vert x^{(k+1)}-\hat{x}^{(k+1)}\bigr\Vert \leq&\sqrt{n} \epsilon\delta(2+\delta)\leq\sqrt{n}\epsilon. \end{aligned}$$

This implies that, if we choose ϵ such that \(\sqrt{n}\epsilon<\varepsilon\), the result in Theorem 3.3 holds as well.

Theorem 3.4

Suppose that the condition of Theorem 3.2 is satisfied. Let

$$\begin{aligned}& \delta=\max\bigl\{ \bigl\Vert (\alpha I+W)^{-1}\bigr\Vert _{2},\bigl\Vert (\alpha I+T)^{-1}\bigr\Vert _{2}\bigr\} ,\quad\quad \theta(\alpha)= \Vert M_{\alpha} \Vert , \\& M_{\alpha,x^{\ast}}=(\alpha I +T)^{-1}(\alpha I +iW-iD) (\alpha I+ W)^{-1}(\alpha I-iT+D), \end{aligned}$$

where D is the Jacobian of \(\phi(x)\) at \(x^{\ast}\in \mathbb{N}\subset\mathbb{D}\subset\mathbb{C}^{n}\) defined in (3.7), and \(\mathbb{N}(x^{\ast})\) is an open neighborhood of \(x^{\ast}\). Then \(\rho(M_{\alpha,x^{\ast}})<1\). Furthermore, if

$$ \delta< \sqrt{2- \theta(\alpha)}-1, $$
(3.14)

then, for any initial point \(x^{(0)}\in\mathbb{D}\subset \mathbb{C}^{n}\), the iteration sequence \(\{x^{(k)}\}_{ k=0}^{\infty}\) generated by the nonlinear MHSS-like iteration method (3.6) converges to \(x^{\ast}\).

Proof

Since

$$\bigl\Vert x^{(k+1)}-x^{\ast}\bigr\Vert \leq\bigl\Vert x^{(k+1)}-\hat{x}^{(k+1)}\bigr\Vert +\bigl\Vert \hat {x}^{(k+1)}-x^{\ast}\bigr\Vert =\bigl\Vert \psi \bigl(x^{(k)}\bigr)-\hat{\psi}\bigl(x^{(k)}\bigr)\bigr\Vert + \bigl\Vert \hat{\psi}\bigl(x^{(k)}\bigr)-x^{\ast}\bigr\Vert , $$

then, for any \(\varepsilon>0\), one just needs to prove

$$ \bigl\Vert \psi\bigl(x^{(k)}\bigr)-\hat{\psi} \bigl(x^{(k)}\bigr)\bigr\Vert +\bigl\Vert \hat{\psi} \bigl(x^{(k)}\bigr)-x^{\ast}\bigr\Vert < \varepsilon. $$
(3.15)

To establish (3.15), it suffices to prove

$$\bigl\Vert \psi\bigl(x^{(k)}\bigr)-\hat{\psi}\bigl(x^{(k)} \bigr)\bigr\Vert < \frac{\varepsilon}{2} \quad\mbox{and}\quad \bigl\Vert \hat{\psi} \bigl(x^{(k)}\bigr)-x^{\ast}\bigr\Vert < \frac{\varepsilon}{2}. $$

From Theorems 3.2 and 3.3, obviously, \(\Vert x^{(k+1)}-x^{\ast} \Vert <\varepsilon\) for all sufficiently large k. This completes the proof. □

In fact, under the condition of Theorem 3.3, the solution of \(Ax=\vert x\vert +b\) can be approximated by the point \(x^{\ast}\) satisfying \(Ax^{\ast}=\phi(x^{\ast})+b\). The reason is that, when \(Ax^{\ast}=\vert x^{\ast} \vert +b\), by Lemma 3.2 we have

$$\bigl\Vert \phi\bigl(x^{\ast}\bigr)-\bigl\vert x^{\ast}\bigr\vert \bigr\Vert \leq\sqrt{n}\epsilon, \quad\mbox{for any } \epsilon>0. $$

The convergence speed of the nonlinear MHSS-like method (3.3) may depend on two factors: (1) the nonlinear term \(\vert x\vert +b\); (2) finding the optimal parameter that guarantees that the spectral radius \(\rho(M_{\alpha})\) of the iteration matrix \(M_{\alpha}\) is less than 1. While the former is determined by the problem itself, the latter can be estimated. Based on Corollary 2.1 in [20], the optimal parameter \(\alpha^{\ast}=\sqrt{\lambda_{\max}\lambda_{\min}}\) minimizes the upper bound on the spectral radius \(\rho(M_{\alpha})\) of the MHSS iteration matrix \(M_{\alpha}\), where \(\lambda_{\max}\) and \(\lambda_{\min}\) are the largest and smallest eigenvalues of the symmetric positive definite matrix W, respectively. It is noted that, although this \(\alpha^{\ast}\) does not in general minimize the spectral radius \(\rho(M_{\alpha})\) of the iteration matrix \(M_{\alpha}\) itself, it is still helpful for choosing an effective parameter for the nonlinear MHSS-like method.
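In computations, this starting value is easy to obtain from the extreme eigenvalues of W; a short sketch (illustrative, assuming dense W):

```python
import numpy as np

def optimal_alpha(W):
    """alpha* = sqrt(lambda_max * lambda_min), minimizing the upper bound
    on rho(M_alpha) (Corollary 2.1 in [20]); W symmetric positive definite."""
    lam = np.linalg.eigvalsh(W)   # eigenvalues of symmetric W, ascending
    return float(np.sqrt(lam[0] * lam[-1]))
```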

4 Numerical experiments

In this section, we give some numerical experiments to demonstrate the performance of the nonlinear MHSS-like method for solving the AVE (1.1). Since the numerical results in [18] show that the nonlinear HSS-like method outperforms the Picard and Picard-HSS methods under certain conditions, here we compare the nonlinear MHSS-like method with the nonlinear HSS-like method [18] from the point of view of the number of iterations (denoted IT) and CPU times (denoted CPU) to show the advantage of the nonlinear MHSS-like method. All the tests are performed in MATLAB 7.0.

Example 1

We consider the following AVE:

$$ Ax-\vert x\vert =b, $$
(4.1)

with \(A=W+iT\),

$$T=I \otimes V+V\otimes I \quad\mbox{and} \quad W = 10(I\otimes V_{c}+V_{c}\otimes I ) + 9\bigl(e_{1}e^{T}_{m }+e_{m }e_{1}^{T} \bigr)\otimes I, $$

where \(V=\operatorname {tridiag}(-1,2,-1)\in\mathbb{R}^{m\times m}\), \(V_{c}=V-e_{1}e^{T}_{m }-e_{m }e_{1}^{T} \in\mathbb{R}^{m\times m}\), and \(e_{1}\) and \(e_{m}\) are the first and the last unit vectors in \(\mathbb{R}^{m}\), respectively. Here T and W correspond to the five-point centered difference matrices approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions and periodic boundary conditions, respectively, on a uniform mesh in the unit square \([0, 1]\times[0, 1]\) with mesh-size \(h = \frac{1}{m+1 }\). The right-hand side vector b is taken to be \(b = (A-I)\textbf{1}\), with 1 being the vector of all ones.
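The matrices of Example 1 can be assembled directly with Kronecker products; a dense sketch follows (a sparse version via `scipy.sparse` would be analogous, and the function name is illustrative):

```python
import numpy as np

def build_example1(m):
    """Assemble W, T, and b = (A - I)1 of Example 1, with A = W + iT and
    n = m^2.  Dense construction, for illustration only."""
    I_m = np.eye(m)
    V = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)   # tridiag(-1, 2, -1)
    e1, em = I_m[:, 0], I_m[:, -1]
    Vc = V - np.outer(e1, em) - np.outer(em, e1)
    T = np.kron(I_m, V) + np.kron(V, I_m)
    W = 10 * (np.kron(I_m, Vc) + np.kron(Vc, I_m)) \
        + 9 * np.kron(np.outer(e1, em) + np.outer(em, e1), I_m)
    A = W + 1j * T
    b = (A - np.eye(m * m)) @ np.ones(m * m)
    return W, T, b
```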

In our implementations, the initial point is chosen to be the zero vector and the stopping criterion for the nonlinear MHSS-like and HSS-like methods is

$$\frac{\Vert b+\vert x^{(k)}\vert -Ax^{(k)}\Vert _{2}}{\Vert b\Vert _{2}}\leq10^{-6}. $$
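With the helpers sketched earlier (`build_example1`, `optimal_alpha`, and `nonlinear_mhss_like`, all hypothetical names), a driver implementing this setup might read:

```python
import time
import numpy as np

m = 32                                  # n = m^2 = 1024 unknowns
W, T, b = build_example1(m)
alpha = optimal_alpha(W)                # or the experimentally optimal value

t0 = time.process_time()
x, it = nonlinear_mhss_like(W, T, b, alpha, tol=1e-6)
cpu = time.process_time() - t0

res = np.linalg.norm(b + np.abs(x) - (W + 1j * T) @ x) / np.linalg.norm(b)
print(f"IT = {it}, CPU = {cpu:.2f}s, RES = {res:.2e}")
```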

In the implementations of the nonlinear MHSS-like and HSS-like methods, the optimal parameters have been obtained experimentally so as to yield the least CPU times and iteration numbers for both iteration methods; they are listed in Table 1.

Table 1 The optimal parameters α for the MHSS-like and HSS-like methods

In Table 2, we list the iteration numbers and CPU times for the nonlinear MHSS-like and HSS-like methods by using the optimal parameters in Table 1.

Table 2 CPU and IT for the MHSS-like and HSS-like methods

From Table 2, the iteration numbers and CPU times of the nonlinear MHSS-like method for solving the AVE (1.1) are less than those of the nonlinear HSS-like method. The results in Table 2 show that in all cases the nonlinear MHSS-like method is superior to the nonlinear HSS-like method in terms of iteration numbers and CPU times. Compared with the nonlinear HSS-like method, the nonlinear MHSS-like method for solving the AVE (1.1) may be given priority under certain conditions.

5 Conclusions

In this paper, the nonlinear MHSS-like method has been established for solving the weakly absolute value equations (AVE). In the proposed method, two real linear subsystems with symmetric positive definite matrices \(\alpha I + W\) and \(\alpha I + T\) are solved at each step. In contrast, in the nonlinear HSS-like method a shifted skew-Hermitian linear subsystem with the matrix \(\alpha I + iT\) is solved at each iteration. By using a smoothing approximate function, the local convergence of the proposed method has been analyzed. Numerical experiments have shown that the nonlinear MHSS-like method is feasible, robust, and efficient.