Two new effective iteration methods for nonlinear systems with complex symmetric Jacobian matrices

In this paper, we mainly discuss iterative methods for solving nonlinear systems with complex symmetric Jacobian matrices. By applying an FPAE iteration (a fixed-point iteration adding asymptotical error) as the inner iteration of the Newton method and the modified Newton method, we obtain the so-called Newton-FPAE method and modified Newton-FPAE method. The local and semi-local convergence properties under the Lipschitz condition are analyzed. Finally, some numerical examples are given to demonstrate the feasibility and validity of the two new methods by comparing them with some other iterative methods.


Introduction
Since nonlinear systems are more common than linear systems in most areas of physics, mathematics and engineering, we usually need to solve nonlinear equations. Examples of nonlinear systems include the Schrödinger equation (Sulem and Sulem 1999) and the Ginzburg-Landau equation (Aranson and Kramer 2002).
Let F : D ⊂ C^n → C^n be a continuously differentiable mapping defined on an open convex domain D in the n-dimensional complex space C^n. In this article we consider the large-scale sparse nonlinear system

F(x) = 0, (1)

whose Jacobian matrix F'(x) is assumed to be large, sparse and complex symmetric, i.e.,

F'(x) = W(x) + iT(x),

with W(x) and T(x) both real symmetric matrices; moreover, W(x) is positive definite and T(x) is positive semi-definite. Some nonlinear equations can be solved analytically, but most need to be solved numerically. Nonlinear systems such as (1) often arise in scientific computing and engineering. In this paper, we focus on iterative numerical solutions for such systems.
Perhaps the most natural idea for solving (1) is the Newton method

x_{k+1} = x_k − F'(x_k)^{-1} F(x_k), k = 0, 1, 2, ...,

where x_0 is a given initial guess. It is equivalent to solving the Newton equation

F'(x_k) d_k = −F(x_k), x_{k+1} = x_k + d_k, (4)

at the k-th iteration.
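As a concrete illustration of the outer scheme, here is a minimal NumPy sketch of the Newton iteration on a small hypothetical test system whose Jacobian W(x) + iT(x) is complex symmetric. The test problem, matrices and iteration counts are our own illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical test problem: F(x) = A x + (e^x - 1) with A = W + iT complex
# symmetric, so x* = 0 solves F(x) = 0 and F'(x) = A + diag(e^x).
n = 4
W = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # real SPD tridiagonal
T = 0.5 * np.eye(n)                                     # real PSD
A = W + 1j * T

def F(x):
    return A @ x + (np.exp(x) - 1.0)

def J(x):                                               # complex symmetric Jacobian
    return A + np.diag(np.exp(x))

x = 0.1 * np.ones(n, dtype=complex)
for k in range(20):
    d = np.linalg.solve(J(x), -F(x))  # Newton equation F'(x_k) d_k = -F(x_k)
    x = x + d                         # x_{k+1} = x_k + d_k

print(np.linalg.norm(F(x)))           # residual after 20 Newton steps
```

Here the Newton equation is solved exactly with a direct solver; the methods discussed below replace this inner solve with an iterative one.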
One approach is to use linear iterative methods to solve the Newton equation (4), especially when the problems are large and sparse. In this sense the Newton method becomes an inner-outer iterative method. For example, when the GMRES method is used as an inner solver for the Newton equation (4), the Newton-GMRES method (Bellavia et al. 2001) is obtained, which is widely used. Similar efficient methods include the Newton-Krylov subspace method (Brown and Saad 1994; Knoll and Keyes 2004), the Newton-CG (Conjugate Gradient) method (Sternberg and Hinze 2010) and so on.
Of course the choice of inner iteration methods plays an important role when we use them to solve Newton equations.
In the past few years, some HSS-based methods (HSS is the abbreviation for Hermitian and skew-Hermitian splitting) have been proposed to solve large-scale sparse linear systems. To solve non-Hermitian positive definite linear systems, Bai et al. (2003) first introduced the HSS method. Some algorithms, such as the preconditioned modified HSS (PMHSS) method (Bai et al. 2011) and the single-step HSS (SHSS) method (Li and Wu 2015), were proposed later to improve the HSS method. Because of the efficiency of these HSS-based methods, many scholars have done a lot of research in recent years; see Xiao and Wang (2018); Huang et al. (2018); Siahkolaei and Salkuyeh (2019); Zhang et al. (2019); Wang et al. (2017, 2018). By applying these methods as inner iterations of Newton methods, the corresponding Newton-HSS type methods can be obtained, such as the Newton-HSS method (Bai and Guo 2010) and the Newton-MHSS method (Yang and Wu 2012), which have been used and studied widely.
To promote efficiency, one can also improve the outer iteration to get better iterative methods, for example the modified Newton method (Darvishi and Barati 2007):

y_k = x_k − F'(x_k)^{-1} F(x_k),
x_{k+1} = y_k − F'(x_k)^{-1} F(y_k), k = 0, 1, 2, .... (5)

With just one more evaluation of F per step, the modified Newton method has at least R-order of convergence three, which is much better than the Newton method. Using the HSS method as the inner iteration of the modified Newton method, an effective method named the modified Newton-HSS method (Wu and Chen 2013), which has better properties than the Newton-HSS method, was obtained. Furthermore, the multi-step modified Newton-HSS method (Li and Guo 2017a, b), which includes the Newton-HSS method and the modified Newton-HSS method as special cases, outperforms the modified Newton-HSS method. To our knowledge, a succession of studies has shown the efficiency of such Newton-HSS based methods; see Zhong et al. (2015); Chen and Wu (2018); Xie et al. (2019) for more examples.
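The modified Newton step reuses one Jacobian for two corrections per outer iteration. A minimal NumPy sketch on a small hypothetical complex symmetric test system (all problem data are our own illustration, not from the paper):

```python
import numpy as np

# Hypothetical test problem with x* = 0: F(x) = A x + (e^x - 1),
# A = W + iT complex symmetric, F'(x) = A + diag(e^x).
n = 4
W = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
T = 0.5 * np.eye(n)
A = W + 1j * T

def F(x):
    return A @ x + (np.exp(x) - 1.0)

def J(x):
    return A + np.diag(np.exp(x))

x = 0.1 * np.ones(n, dtype=complex)
for k in range(10):
    Jx = J(x)                          # one Jacobian evaluation per outer step
    y = x + np.linalg.solve(Jx, -F(x)) # first half-step (Newton)
    x = y + np.linalg.solve(Jx, -F(y)) # extra correction reusing J(x_k)
print(np.linalg.norm(F(x)))
```

The extra correction costs one more evaluation of F and one more triangular solve with the already-factored Jacobian, which is the source of the higher R-order quoted above.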
In this paper, we aim to obtain effective iteration methods by applying an FPAE iteration (Xiao and Wang 2018) as the inner iteration of both the Newton method and the modified Newton method. This paper is organized as follows. In Sect. 2, the FPAE iteration (a fixed-point iteration adding asymptotical error) is reviewed. In Sect. 3, the Newton-FPAE method is constructed. The local and semi-local convergence properties of the Newton-FPAE method under the Lipschitz condition are analyzed in Sects. 4 and 5. Then, in Sect. 6, the modified Newton-FPAE method is developed and its convergence properties are presented. Some numerical examples are presented to show the computational efficiency of the Newton-FPAE and modified Newton-FPAE methods in Sect. 7. Finally, in Sect. 8, a brief conclusion is given.

A review: a fixed-point iteration adding the asymptotical error
To solve complex symmetric linear systems, Xiao and Wang (2018) proposed a fixed-point iteration adding the asymptotical error (FPAE). In this section, we first review the FPAE method.
Consider the complex symmetric linear system Ax = b with A = W + iT ∈ C^{n×n}. For any symmetric positive definite matrix V and any α > 0, the FPAE iteration can be written in the standard fixed-point form

x^{k+1} = M(α, V) x^k + B(α, V)^{-1} b,

where A = B(α, V) − C(α, V) and M(α, V) = B(α, V)^{-1} C(α, V). Thus, the splitting matrix B(α, V) can be utilized as a preconditioner for the complex matrix A ∈ C^{n×n}. We naturally hope to adjust B(α, V) to obtain a smaller spectral radius of the iteration matrix M(α, V), so B(α, V) can be called the FPAE preconditioner.
The convergence analysis of the FPAE iteration (Xiao and Wang 2018) showed that the spectral radius of the iteration matrix satisfies ρ(M(α, V)) ≤ δ_V(α). As long as the parameter α is chosen so that δ_V(α) < 1, the FPAE iteration converges to the exact solution.
In practical implementation, for the sake of convenience, V = W is generally taken. Then the FPAE iteration reduces to

x^{k+1} = x^k + α W^{-1} (b − A x^k), (10)

with iteration matrix M(α) = I − α W^{-1} A = (1 − α) I − iα W^{-1} T. The spectral radius ρ(M(α)) is bounded by δ(α), that is,

ρ(M(α)) ≤ δ(α) := sqrt((1 − α)^2 + α^2 ρ^2(W^{-1} T)).

If 0 < α < 2 / (1 + ρ^2(W^{-1} T)), then δ(α) < 1, i.e., the iteration converges. Moreover, the optimal parameter α* which minimizes the upper bound δ(α) is given by

α* = 1 / (1 + ρ^2(W^{-1} T)), with δ(α*) = ρ(W^{-1} T) / sqrt(1 + ρ^2(W^{-1} T)).
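The reduced V = W iteration can be sketched in a few lines of NumPy. The update x^{k+1} = x^k + αW^{-1}(b − Ax^k) and the parameter choice α* = 1/(1 + ρ²(W⁻¹T)) are our reading of the reduced iteration, consistent with the convergence condition stated above; the test matrices are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
Q = rng.standard_normal((n, n))
W = Q @ Q.T + n * np.eye(n)     # real symmetric positive definite
S = rng.standard_normal((n, n))
T = (S @ S.T) / n               # real symmetric positive semi-definite
A = W + 1j * T                  # complex symmetric (not Hermitian)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# rho(W^{-1} T) and the parameter alpha* minimizing the bound delta(alpha)
rho = np.max(np.abs(np.linalg.eigvals(np.linalg.solve(W, T))))
alpha = 1.0 / (1.0 + rho ** 2)

x = np.zeros(n, dtype=complex)
for k in range(50):             # FPAE sweep: x <- x + alpha W^{-1}(b - A x)
    x = x + alpha * np.linalg.solve(W, b - A @ x)

print(np.linalg.norm(A @ x - b))  # residual after 50 sweeps
```

In a practical solver one would factor W once (it is constant across sweeps) instead of calling `solve` in the loop; the dense solve here just keeps the sketch short.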

The Newton-FPAE method
In this section, to solve nonlinear equations with complex symmetric Jacobian matrices, we introduce our Newton-FPAE method.
Applying the FPAE method as the inner iteration of the Newton method, the Newton-FPAE method for solving the nonlinear system (1) can be written as follows.

Algorithm 1 Newton-FPAE method
1. Given an initial guess x_0, a positive constant α, and a positive integer sequence {l_k}_{k=0}^∞.
2. For k = 0, 1, 2, ... until convergence:
2.1. Compute F(x_k).
2.2. For l = 0, 1, 2, ..., l_k − 1, apply the FPAE method to the linear system (4), and obtain d_{k,l_k} such that the inner stopping criterion is satisfied.
2.3. Set x_{k+1} := x_k + d_{k,l_k}.

By straightforward derivation we can obtain uniform expressions of d_{k,l_k} and x_{k+1} in terms of the splitting matrices. Since the selection of V(x) affects the computation and storage, in practical implementation V(x) = W(x) is usually taken for convenience.
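A compact NumPy sketch of the inner-outer structure of Algorithm 1, with the inner solver written in the reduced V = W form d ← d + αW⁻¹(r − Jd). The toy problem, the fixed sweep count l_k = 5 and the parameter α = 0.8 are all our own illustration, not prescriptions from the paper.

```python
import numpy as np

def fpae(Wm, Am, rhs, alpha, sweeps):
    """Approximate Am d = rhs by FPAE sweeps with V = W (illustrative form:
    d <- d + alpha * Wm^{-1} (rhs - Am d)), starting from d = 0."""
    d = np.zeros_like(rhs)
    for _ in range(sweeps):
        d = d + alpha * np.linalg.solve(Wm, rhs - Am @ d)
    return d

# Hypothetical test problem with complex symmetric Jacobian, solution x* = 0.
n = 4
W0 = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = W0 + 0.5j * np.eye(n)

def F(x):
    return A @ x + (np.exp(x) - 1.0)

x = 0.1 * np.ones(n, dtype=complex)
for k in range(30):                          # outer Newton iterations
    Jx = A + np.diag(np.exp(x))              # F'(x_k) = W(x_k) + i T(x_k)
    d = fpae(Jx.real, Jx, -F(x), alpha=0.8, sweeps=5)   # l_k inner sweeps
    x = x + d                                # x_{k+1} = x_k + d_{k,l_k}
print(np.linalg.norm(F(x)))
```

Because the inner solve is only approximate, this is an inexact Newton method; the convergence theory in the following sections controls how many inner sweeps l_k are needed.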

Local convergence property of the Newton-FPAE method
In this section, we prove the local convergence property of the Newton-FPAE method under the Lipschitz condition. In the remainder of this article, x* denotes a solution of F(x) = 0, and N(x*, r) denotes an open ball centered at x* with radius r > 0.
Lemma 1 (Perturbation Lemma, Ortega and Rheinboldt 1970) Let M, N ∈ C^{n×n} and assume that M is nonsingular, with ‖M^{-1}‖ ≤ ξ. If ‖M − N‖ ≤ δ and δξ < 1, then N is also nonsingular, and ‖N^{-1}‖ ≤ ξ / (1 − δξ). The proof of Lemma 1 can be found in Ortega and Rheinboldt (1970).
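A quick numerical sanity check of the lemma's bound ‖N⁻¹‖ ≤ ξ/(1 − δξ), on illustrative random matrices of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # nonsingular perturbed identity
xi = np.linalg.norm(np.linalg.inv(M), 2)           # take xi = ||M^{-1}||
N = M + 0.01 * rng.standard_normal((n, n))         # small perturbation of M
delta = np.linalg.norm(M - N, 2)

assert delta * xi < 1                              # hypothesis of the lemma
bound = xi / (1.0 - delta * xi)                    # Banach lemma bound on ||N^{-1}||
print(np.linalg.norm(np.linalg.inv(N), 2), bound)
```

The printed actual norm never exceeds the printed bound whenever the hypothesis δξ < 1 holds.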
Assumption 1 For any x ∈ N(x*, r) ⊂ N_0, assume the following conditions hold.
(A1) (The bounded condition) There exist positive constants β and γ such that max{‖W(x*)‖, ‖T(x*)‖} ≤ β and ‖F'(x*)^{-1}‖ ≤ γ.
(A2) (The Lipschitz condition) There exist nonnegative constants L_1 and L_2 such that ‖W(x) − W(x*)‖ ≤ L_1 ‖x − x*‖ and ‖T(x) − T(x*)‖ ≤ L_2 ‖x − x*‖.

Lemma 2 Under Assumption 1, for any x, y ∈ N(x*, r) with r ∈ (0, 1/(γL)), where L := L_1 + L_2, the inverse F'(x)^{-1} exists, and in particular

‖F'(x) − F'(x*)‖ ≤ L ‖x − x*‖, ‖F'(x)^{-1}‖ ≤ γ / (1 − γL ‖x − x*‖).

Proof The first inequality follows directly from the Lipschitz condition. Since r ∈ (0, 1/(γL)), we have γL ‖x − x*‖ < 1; then, by the bounded condition ‖F'(x*)^{-1}‖ ≤ γ and Lemma 1, F'(x)^{-1} exists and the stated bound holds. The remaining inequalities follow similarly from the bounded and Lipschitz conditions. This completes the proof of Lemma 2.

Theorem 1 Under the assumptions of Lemma 2, define r_0 := min{r_1, r_2} and suppose that r ∈ (0, r_0). Let u := lim inf_{k→∞} l_k, and suppose that the constant u satisfies the stated lower bound, where the symbol ⌈·⌉ denotes the smallest integer no less than the corresponding real number and τ ∈ (0, (1 − θ)/θ) is a prescribed positive constant. Then, for any x_0 ∈ N(x*, r) and any sequence {l_k}_{k=0}^∞ of positive integers, the iteration sequence {x_k}_{k=0}^∞ generated by the Newton-FPAE method is well defined and converges to x*.

Proof First, we estimate the norm of the iteration matrix M(α; x) of the inner solver for x ∈ N(x*, r): by the bounded condition and the Lipschitz condition, ‖M(α; x)‖ admits a uniform bound strictly less than one. Then we establish a recursive relationship for ‖x_k − x*‖ by mathematical induction: assuming that x_n ∈ N(x*, r) for some k = n, the recursion yields x_{n+1} ∈ N(x*, r) together with a contraction of the error, so the whole sequence remains in N(x*, r) and converges to x*. This ends the proof of Theorem 1.

Semi-local convergence property of the Newton-FPAE method
In this section, we study the semi-local convergence property of the Newton-FPAE method after giving some assumptions on F(x).

Assumption 2
For any x 0 ∈ N 0 , assume the following conditions hold.
(A1) (The bounded condition) There exist positive constants β and γ such that the corresponding bounds hold at x_0.
(A2) (The Lipschitz condition) There exist nonnegative constants L_1 and L_2 such that ‖W(x) − W(y)‖ ≤ L_1 ‖x − y‖ and ‖T(x) − T(y)‖ ≤ L_2 ‖x − y‖ for any x, y ∈ N(x_0, r).

Lemma 3 Under Assumption 2, for any x, y ∈ N(x_0, r) with r ∈ (0, 1/(γL)), where L := L_1 + L_2, the inverse F'(x)^{-1} exists and the corresponding bounds hold.

Proof The proofs of the first and fourth inequalities are similar to those of Lemma 2 and are not repeated; the remaining ones follow from the Lipschitz condition.

We define the function g(t) = (1/2) a t^2 − b t + c, set t_0 = 0, and construct a sequence {t_k} by the corresponding recursion.

Lemma 4 Assume that the constants satisfy the condition guaranteeing that g has a real root, and denote its smaller positive root by t*. Then the sequence {t_k} constructed by the above rules is monotonically increasing.

Proof Since g(t) = (1/2) a t^2 − b t + c is a quadratic function with positive leading coefficient, the following results are easy to obtain by direct computation.
Then, for t ∈ [0, t*], the corresponding inequality holds. Now we prove that t_k < t_{k+1} < t* for any k by mathematical induction. Suppose the claim holds for k − 1, i.e., t_{k−1} < t_k < t*. On the other hand, using the monotonicity of the auxiliary function on [0, t*], the induction step follows. By mathematical induction, t_k < t_{k+1} < t* for any k, hence the sequence {t_k} is monotonically increasing and converges to t*.
Theorem 2 Under the conditions of Assumption 2 and Lemma 4, define r := min{r_1, r_2} and u := lim inf_{k→∞} l_k, where the constant u satisfies the stated lower bound, τ is a prescribed positive constant, and ⌈·⌉ denotes the smallest integer no less than the corresponding value. Then the iteration sequence {x_k} generated by the Newton-FPAE method converges to x*, which satisfies F(x*) = 0.
Proof First, similarly to the proof of Theorem 1, we obtain the estimate of ‖M(α; x)‖ for any x ∈ N(x_0, r). We then prove the conclusions in (22) by mathematical induction. First, we verify that (22) holds when k = 0. Assuming that (22) is true for every non-negative integer less than k, it remains to prove that it is true for k, which follows from the estimates above. Therefore, (22) holds for any non-negative integer k.
Since the sequence {t_k} converges to t*, the sequence {x_k} converges to x*. From ‖M(α; x*)‖ < 1 we conclude that F(x*) = 0. The proof of Theorem 2 is completed.

Modified Newton-FPAE method
In this section, we apply the FPAE method as the inner solver for the modified Newton method (5) to obtain the modified Newton-FPAE method.
Algorithm 2 Modified Newton-FPAE method
1. Given an initial guess x_0, a positive constant α, and two positive integer sequences {l_k}_{k=0}^∞ and {m_k}_{k=0}^∞.
2. For k = 0, 1, 2, ... until convergence:
2.1. Compute F(x_k).
2.2. For l = 0, 1, ..., l_k − 1, apply the FPAE method to the first equation of (5), and obtain d_{k,l_k} such that the inner stopping criterion is satisfied.
2.3. Set y_k := x_k + d_{k,l_k}.
2.4. Compute F(y_k).
2.5. For m = 0, 1, 2, ..., m_k − 1, apply the FPAE method to the second equation of (5), and obtain h_{k,m_k} such that the inner stopping criterion is satisfied.
2.6. Set x_{k+1} := y_k + h_{k,m_k}.

Similarly, in practice V(x) = W(x) is generally taken, which yields an equivalent uniform expression of the iteration. By arguments similar to those for the convergence properties of the Newton-FPAE method, we can derive the following local and semi-local convergence theorems.
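A NumPy sketch of Algorithm 2: one Jacobian per outer step, reused for both FPAE inner solves. The inner update form d ← d + αW⁻¹(r − Jd), the toy problem and the fixed sweep counts l_k = m_k = 5 are our own illustrative choices, not from the paper.

```python
import numpy as np

def fpae(Wm, Am, rhs, alpha, sweeps):
    # Illustrative FPAE inner solver with V = W: d <- d + alpha W^{-1}(rhs - A d)
    d = np.zeros_like(rhs)
    for _ in range(sweeps):
        d = d + alpha * np.linalg.solve(Wm, rhs - Am @ d)
    return d

# Hypothetical test problem with complex symmetric Jacobian, solution x* = 0.
n = 4
W0 = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = W0 + 0.5j * np.eye(n)

def F(x):
    return A @ x + (np.exp(x) - 1.0)

x = 0.1 * np.ones(n, dtype=complex)
for k in range(15):                          # outer modified Newton iterations
    Jx = A + np.diag(np.exp(x))              # one Jacobian, reused for both solves
    y = x + fpae(Jx.real, Jx, -F(x), alpha=0.8, sweeps=5)  # l_k sweeps
    x = y + fpae(Jx.real, Jx, -F(y), alpha=0.8, sweeps=5)  # m_k sweeps
print(np.linalg.norm(F(x)))
```

Compared with the Newton-FPAE sketch of Algorithm 1, each outer step performs one extra function evaluation and one extra batch of inner sweeps, mirroring steps 2.4-2.6.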

Theorem 3 (Local convergence of the modified Newton-FPAE method under the Lipschitz condition) Under the conditions of Assumption 1, let r ≤ r_0 := min{r_1, r_2}, and let {l_k} and {m_k} be any positive integer sequences satisfying the stated lower bound, where ⌈·⌉ denotes the smallest integer no less than the corresponding value and τ ∈ (0, (1 − θ)/θ) is a positive constant. Then, for any x_0 ∈ N(x*, r), the iteration sequence {x_k} generated by the modified Newton-FPAE method converges to x*.

Proof The proof is similar to that of Theorem 1, except for the recursive relationship of ‖x_k − x*‖, which now involves the intermediate point y_k.
Then, by a derivation analogous to that of Theorem 1, we conclude that {x_k} converges to x*.

Theorem 4 (Semi-local convergence of the modified Newton-FPAE method under the Lipschitz condition) Under the conditions of Assumption 2 and Lemma 4, let r ≤ r_0 := min{r_1, r_2}, and let {l_k} and {m_k} be any positive integer sequences satisfying the stated lower bound, where ⌈·⌉ denotes the smallest integer no less than the corresponding value and τ ∈ (0, (1 − θ)/θ) is a positive constant. Then the iteration sequence {x_k} generated by the modified Newton-FPAE method converges to x*, which satisfies F(x*) = 0.

Proof The proof is similar to that of Theorem 2.
Define the sequences {t_k} and {s_k} with t_0 = 0 by the corresponding recursions. Then a conclusion analogous to (22) follows by a similar derivation. The rest of the proof is omitted.

Numerical examples
In this section, we show the validity of the Newton-FPAE method and the modified Newton-FPAE method by comparing them with some other existing methods, for example the Newton-PMHSS method (Zhong et al. 2015). Since the nonlinear HSS-like method in Bai and Yang (2009) and the two-stage relaxation methods in Bai (1997a) and Bai (1997b) can also solve the weakly nonlinear systems in our numerical examples, we compare our methods with these methods as well. In our experiments, we take V(x) = W(x) as the preconditioner of the Newton-FPAE method, the modified Newton-FPAE method and the modified Newton-PMHSS method. The number of iteration steps (denoted "IT") and the CPU running time (denoted "CPU time") are compared. One important issue is how to choose the parameters. In our experiments, we use the experimental optimal parameters α* which minimize the corresponding iteration steps and errors. The numerical results were computed using MATLAB Version R2017b, on a laptop with an AMD A8-7100 CPU and 8.00 GB of RAM. The CPU running time is recorded by the command "tic-toc".

Example 7.1 Consider the following nonlinear Helmholtz equation, where σ_1 and σ_2 are real coefficients and u satisfies the Dirichlet boundary condition in the rectangular region D = [0, 1] × [0, 1]. By applying finite differences on an N × N grid with mesh size h = 1/(N + 1) to discretize the differential equation, complex nonlinear equations of the corresponding form can be derived. In this numerical experiment, we take σ_1 = 1, σ_2 = 10. The initial guess is x_0 = 0, with 0 being the zero vector, and the outer iteration is terminated when the relative residual criterion is met. The prescribed tolerances η_k and η̃_k for controlling the accuracy of the inner iterations are both set to η, which means that the stopping criteria for the inner iterations of the modified Newton-PMHSS (MN-PMHSS), modified Newton-FPAE (MN-FPAE) and Newton-FPAE methods are all governed by η. In this numerical experiment, we choose η_k = η̃_k = η = 0.1, 0.2, 0.4.
The sizes of the grids are N = 30, 60, 90, respectively.
The experimental optimal iteration parameters α* for the MN-PMHSS, Newton-FPAE and MN-FPAE methods with different η at N = 30, 60, 90 are given in Table 1. The experimental results of the three methods at N = 30, 60, 90 are given in Tables 4, 5, 6, respectively. In these tables, "Outer IT" denotes the outer iteration steps, and "Inner IT" denotes the total inner iteration steps.
As can be seen from Tables 2, 3, and 4, the Newton-FPAE and MN-FPAE methods have significant advantages in the number of iterations over the MN-PMHSS method when solving this problem. For example, the inner iteration steps of Newton-FPAE and MN-FPAE are about half of those of MN-PMHSS. The CPU time of Newton-FPAE is a little longer than that of MN-PMHSS, but the CPU time decreases to a little more than half of that of Newton-FPAE and MN-PMHSS when the MN-FPAE method is used. The reason the inner iteration steps of the Newton-FPAE and MN-FPAE methods are fewer than those of the MN-PMHSS method is mainly that the inner iteration of the two methods is better; that is, the FPAE iteration has a higher efficiency than the PMHSS iteration. Since the modified Newton method is superior to the Newton method, the CPU time of Newton-FPAE is a little longer than that of MN-PMHSS, while the MN-FPAE method reduces the CPU time considerably. So we can conclude that the modified Newton-FPAE method has a clear advantage in solving this example. Table 5 shows the experimental optimal iteration parameters of the nonlinear HSS-like method and its numerical results. Table 6 displays the numerical results of the two-stage relaxation method.

Example 7.2 Consider a nonlinear problem posed on a region Ω, where ∂Ω is the boundary of Ω and a positive constant is used to measure the magnitude of the reaction term. By applying the centered finite difference scheme on the equidistant discretization grid with stepsize Δt = h = 1/(N + 1), the system of nonlinear equations (1) is obtained in the corresponding form, where N is a prescribed positive integer and A_N = tridiag(−1, 2, −1). Here ⊗ denotes the Kronecker product, and n = N × N. The Jacobian matrix then follows. Obviously, u* = 0 is a solution of (31), so F'(u*) = M, and the corresponding inequality holds. In actual computations, the coefficients are set to α_1 = 1, β_1 = 0.1, α_2 = 1, β_2 = 0.1. The initial guess is u_0 = 1.
The stopping criterion for the outer iteration is set analogously to Example 7.1. The experimental optimal iteration parameters α* for the MN-PMHSS, Newton-FPAE and MN-FPAE methods at N = 30, 60, 90 are given in Table 7.
Tables 8, 9 and 10 show the numerical results of the MN-PMHSS, Newton-FPAE and MN-FPAE methods. Here "Outer IT" denotes the outer iteration steps and "Inner IT" denotes the total inner iteration steps, just as in Example 7.1. Table 11 lists the optimal parameters of the nonlinear HSS-like method and its numerical results. Table 12 shows the numerical results of the two-stage relaxation method with parameters (ω, γ) = (1.0, 1.0), (1.2, 1.2), (1.5, 1.5), respectively. Here we take B = M, C = O, D = diag(B), s = 2, and L, U as the strictly lower and upper triangular parts of (−B), respectively.
As can be seen from the above tables, in Example 7.2 our methods show advantages similar to those in Example 7.1 when compared with the MN-PMHSS method and the nonlinear HSS-like method. But when compared with the two-stage relaxation method, although the numbers of iterations are still much smaller, our methods sometimes take more CPU running time. That is mainly because the computation in the two-stage relaxation method is faster when the command "sparse()" is used to store and operate on huge-scale matrices in the iteration.

Example 7.3 Consider the nonlinear system F(x) = 0 with x = (x_1, x_2, ..., x_n)^T and F = (F_1, F_2, ..., F_n)^T, where

F_j(x) = ((5 + i) − (2 + i) x_j) x_j − x_{j−1} − x_{j+1} + 1, j = 1, 2, ..., n,

and x_0 = x_{n+1} = 0; the Jacobian matrix of F(x) follows accordingly. We set the initial guess x^{(0)} = (−1, ..., −1)^T. We solve this nonlinear system by the MN-PMHSS, Newton-FPAE and MN-FPAE methods. The stopping criterion for the outer iteration is

‖F(x^{(k)})‖_2 / ‖F(x^{(0)})‖_2 ≤ 10^{−12},

and the stopping criterion of the inner iteration is as before, where x^{(k)} is the result after the k-th iteration. We set the inner stopping tolerances η_k = η̃_k = η = 0.1, 0.2, 0.4, and the dimension of the problem n = 500, 1000, 2000. Of course, the experimentally optimal parameters α are used in the MN-PMHSS, Newton-FPAE and MN-FPAE methods; Table 15 shows these experimental optimal parameters.
From Tables 16, 17 and 18, we can see that the numbers of iterations of the Newton-FPAE and MN-FPAE methods are smaller than those of MN-PMHSS. In particular, the modified Newton-FPAE method takes less CPU running time, similar to Example 7.1.
But when compared with the nonlinear HSS-like method in Table 19 and the two-stage relaxation method in Table 20, they do not show much advantage. In our opinion, the MN-FPAE and Newton-FPAE methods are very suitable for problems whose Jacobian matrices' imaginary parts are not large compared with their real parts. If, in practical computation, the Newton-FPAE and MN-FPAE methods are not superior to some other methods, we can use preconditioning techniques or modified methods in the inner iteration to improve them; for example, see Section 3 of Xiao and Wang (2018) for the PFPAE method.

Conclusions
Iterative methods for solving nonlinear systems are of great significance in practical applications. For nonlinear systems with complex symmetric Jacobian matrices, the most classical iterative scheme is the Newton method. In this paper, by applying an FPAE method as an inner iteration of the Newton method and modified Newton method, the corresponding Newton-FPAE method and the modified Newton-FPAE method are obtained, respectively. Local and semi-local convergence of the two iterative schemes under Lipschitz conditions are proved.