FR-type algorithm for finding approximate solutions to nonlinear monotone operator equations

This paper focuses on the problem of convex constraint nonlinear equations involving monotone operators in Euclidean space. A Fletcher-Reeves-type derivative-free conjugate gradient method is proposed, designed to ensure the descent property of the search direction at each iteration. Furthermore, the convergence of the proposed method is proved under the assumption that the underlying operator is monotone and Lipschitz continuous. Numerical results show that the method is efficient on the given test problems.

We consider the problem of finding a point w ∈ B such that

ϑ(w) = 0, (1)

where the mapping ϑ is from the Euclidean space R^n onto itself. Also, ϑ is monotone and continuous, and the set B is a nonempty, closed, and convex subset of R^n. Problem (1) is called a convex constraint nonlinear equation, and we denote its solution set by Sol(B, ϑ).
The convex constraint nonlinear problem is motivated by several applications in various fields such as financial forecasting problems [9], learning constrained neural networks [8], economic equilibrium problems [14], nonlinear compressed sensing [6], chemical equilibrium systems [27], phase retrieval [7], power flow equations [33], and non-negative matrix factorisation [4,24]. The fast local superlinear convergence of the Newton method, quasi-Newton methods, the Levenberg-Marquardt method, and a variety of their variants [10-12,30] has made them appealing for solving (1). However, to apply these methods, a linear system of equations must be solved using the Jacobian matrix or an approximation of it. To overcome this drawback, many authors have suggested derivative-free methods that do not require the Jacobian matrix or its approximation [18]. Our focus in the present work is on derivative-free methods based on a first-order optimization method, namely the conjugate gradient (CG) method, which is well known for solving large-scale unconstrained optimization problems due to its simplicity and low storage requirements.
In recent years, motivated by the projection scheme proposed by Solodov and Svaiter [31], several derivative-free methods have been proposed for solving (1). For example, Liu and Feng [25] introduced a derivative-free projection method which converges to a solution of the convex constraint problem (1). Their scheme involves only one projection per iteration, under monotonicity and Lipschitz continuity assumptions on the underlying mapping, and can be viewed as a modification of the well-known Dai-Yuan CG method for unconstrained optimization. Also, Ibrahim et al. [19] proposed a derivative-free projection method for solving the nonlinear equation (1), combining the projection technique with the LS-FR CG method proposed by Djordjević [15]; at each iteration, their method does not store any matrix. With the aid of the projection scheme, several other derivative-free methods have been developed. Interested readers can refer to [1-3,20-22,28] and the references therein.
In the present work, based on the Fletcher-Reeves (FR) CG method for unconstrained optimization, we introduce a derivative-free iterative method for solving the constrained problem (1). The proposed method is designed to ensure the descent property of the search direction at each iteration. Furthermore, the convergence of the proposed method is proved under the assumption that the underlying operator is monotone and Lipschitz continuous. The numerical results show that the method is efficient for the given test problems.
Our paper is organized as follows: in Sect. 2, we review some definitions used in the sequel, present the proposed search direction, and establish its global convergence. Section 3 is devoted to the numerical experiments, where results are reported for several examples. In the last section, conclusions and discussions are given.
Notation. Throughout this article, except where stated otherwise, ‖·‖ stands for the Euclidean norm on R^n. In addition, for a nonempty closed and convex set B ⊆ R^n, P_B[·] is the projection mapping from R^n onto B given by P_B[w] = arg min{‖w − y‖ : y ∈ B}.
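For intuition, the projection P_B[·] has a closed form for simple constraint sets. A minimal sketch for a box constraint (an illustrative choice of B, not one fixed by the paper):

```python
import numpy as np

def project_box(w, lo, hi):
    # Euclidean projection onto B = {y : lo <= y <= hi}:
    # componentwise clipping solves arg min ||w - y|| over the box.
    return np.clip(w, lo, hi)
```

For B the nonnegative orthant, this reduces to `np.maximum(w, 0.0)`.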

Algorithm and convergence analysis
Let ϑ : R^n → R^n be a monotone and continuous nonlinear function, and let B be a nonempty closed and convex subset of R^n. Recall that ϑ is said to be:

1. monotone if ⟨ϑ(w) − ϑ(y), w − y⟩ ≥ 0 for all w, y ∈ R^n;
2. L-Lipschitz continuous, with L > 0, if ‖ϑ(w) − ϑ(y)‖ ≤ L‖w − y‖ for all w, y ∈ R^n.

Next, we propose an algorithm based on a modified FR CG method. The modification is made to the FR CG parameter and, by extension, to the direction. Generally, a CG method for solving (1) generates a sequence of iterates from an initial guess w_0 via the formula

w_{k+1} = w_k + α_k d_k,

where α_k is the stepsize computed using a suitable line search procedure and d_k is the search direction, defined by d_0 = −ϑ(w_0) and

d_k = −ϑ(w_k) + β_k d_{k−1}, k ≥ 1.

Here β_k is called the CG parameter and ϑ(w_k) is the evaluation of ϑ at w_k. One of the properties required to establish the convergence of an algorithm for finding an approximate solution to (1) is the sufficient descent property of the direction: a direction d_k is sufficiently descent if there exists c > 0 such that

⟨ϑ(w_k), d_k⟩ ≤ −c ‖ϑ(w_k)‖^2 for all k.

In this paper, to solve problem (1), we consider the FR-type direction given by (6), built on the classical FR parameter

β_k^{FR} = ‖ϑ(w_k)‖^2 / ‖ϑ(w_{k−1})‖^2.

It can be observed that as ϑ(w_{k−1}) → 0, the FR parameter may fail to be defined. To maintain the well-definedness of the parameter, we replace ‖ϑ(w_{k−1})‖^2 in the denominator with a quantity that stays bounded away from zero, obtaining the modified parameter β_k in (8). To make the direction (6) with the parameter defined in (8) a descent direction, we introduce a new term to the direction as follows:

d_k = −ϑ(w_k) + β_k d_{k−1} − ε_k ϑ(w_k), (9)

where ε_k is the correction weight defined in (10).

Remark 2.1 Note that the term −ε_k ϑ(w_k) was specifically introduced so that the direction defined by (9) satisfies the sufficient descent condition (see Lemma 2.5).
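To see how a correction term of this kind can enforce sufficient descent, here is a hedged numerical sketch. The function name and the particular choice of the correction weight below are illustrative assumptions, not the paper's own formulas from (8) and (9): the weight is picked so that the β_k⟨ϑ(w_k), d_{k−1}⟩ cross-term cancels exactly, giving ⟨ϑ(w_k), d_k⟩ = −‖ϑ(w_k)‖^2 (sufficient descent with c = 1).

```python
import numpy as np

def fr_direction(F_k, F_prev, d_prev):
    # Illustrative FR-type direction with a descent-enforcing correction.
    # beta is the classical FR parameter; eps is chosen to cancel the
    # beta * <F_k, d_prev> term, so that <F_k, d_k> = -||F_k||^2.
    beta = np.dot(F_k, F_k) / np.dot(F_prev, F_prev)
    eps = beta * np.dot(F_k, d_prev) / np.dot(F_k, F_k)
    return -F_k + beta * d_prev - eps * F_k
```

Whatever the exact choice of β_k and ε_k, the point is the same: the extra multiple of −ϑ(w_k) lets the method keep descent regardless of the sign of ⟨ϑ(w_k), d_{k−1}⟩.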
In what follows, we give a systematic description of the proposed algorithm for finding approximate solutions to problem (1).
Step 3. Compute the trial point ϕ_k = w_k + α_k d_k, with stepsize α_k = ρ^i, where i is the least non-negative integer satisfying

−⟨ϑ(w_k + ρ^i d_k), d_k⟩ ≥ t ρ^i ‖d_k‖^2.

Step 4. If ϑ(ϕ_k) = 0, then stop. Else, compute

w_{k+1} = P_B[w_k − η λ_k ϑ(ϕ_k)],

where

λ_k = ⟨ϑ(ϕ_k), w_k − ϕ_k⟩ / ‖ϑ(ϕ_k)‖^2.

Step 5. Let k = k + 1 and repeat from Step 1.
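The steps above follow the hyperplane-projection template of Solodov and Svaiter [31]. The following is a self-contained sketch of that template, not the paper's exact Algorithm 1: the search direction is simplified to d_k = −ϑ(w_k) for brevity (the FR-type direction (9) would slot in instead), and the test problem and projection used in the usage line are assumptions for illustration.

```python
import numpy as np

def solve_monotone(F, proj, w0, rho=0.8, t=1e-4, eta=1.0, tol=1e-8, max_iter=2000):
    """Hyperplane-projection sketch for monotone F with feasible set
    projection `proj`.  Direction simplified to steepest descent."""
    w = np.asarray(w0, dtype=float)
    for _ in range(max_iter):
        Fw = F(w)
        if np.linalg.norm(Fw) <= tol:
            break
        d = -Fw
        # Backtracking line search: alpha = rho**i with
        # -<F(w + alpha*d), d> >= t * alpha * ||d||^2.
        alpha = 1.0
        while -np.dot(F(w + alpha * d), d) < t * alpha * np.dot(d, d):
            alpha *= rho
            if alpha < 1e-12:  # safeguard against stalling
                break
        phi = w + alpha * d  # trial point
        Fphi = F(phi)
        if np.linalg.norm(Fphi) <= tol:
            return phi
        # Project w onto the separating hyperplane through phi,
        # then back onto the feasible set B via proj.
        lam = np.dot(Fphi, w - phi) / np.dot(Fphi, Fphi)
        w = proj(w - eta * lam * Fphi)
    return w
```

For example, with the monotone map F(w) = w + sin(w) and B the nonnegative orthant (`proj = lambda z: np.maximum(z, 0.0)`), the iterates converge to the zero solution.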
In what follows, we show that the sequence generated by the algorithm converges. To this end, we require the following conditions and auxiliary lemmas.

Condition 2.2
The constraint set B ⊆ R^n is a nonempty, closed, and convex set.

Condition 2.3
The mapping ϑ is monotone and L-Lipschitz continuous on R n .

Numerical experiments on monotone operator equations
In this section, we give some numerical illustrations of Algorithm 1, called the modified FR derivative-free (MFRDF) method, by solving nonlinear monotone operator equations. We compare the MFRDF method with the MPCGM method proposed in [32] and Algorithm 2.1 proposed in [17]. For the numerical illustrations, we use the proposed method and the compared methods to solve the test problems given in Table 1. For the control parameters, we choose = 1, η = 1.8, ρ = 0.8, μ = 1.3, and t = 10^{-4} for the MFRDF algorithm; for the compared methods, we set the same parameter values as they appear in their respective papers. Additionally, for each algorithm, we take the stopping criterion to be ‖ϑ(w_k)‖ ≤ 10^{-5}.
In the numerical experiments, we consider six different initial points. It can be seen from the tabulated results for the compared algorithms that the proposed MFRDF method performs better than the compared methods in terms of number of iterations, elapsed CPU time, and number of function evaluations. Moreover, to visualize the extent of the performance of the proposed method in comparison with MPCGM and Algorithm 2.1, we adopt the performance profiles from [16]. It can be seen, respectively, from Figs. 1, 2, and 3 that MFRDF achieves more than 75% success in terms of number of iterations, about 72% in terms of CPU time, and 55% in terms of number of function evaluations.
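Performance profiles of the kind used in this comparison can be computed directly from per-problem cost tables (iterations, CPU time, or function evaluations). A minimal sketch of the standard Dolan-Moré construction, with the cost table in the test purely hypothetical:

```python
import numpy as np

def performance_profile(T):
    # T[i, s]: cost of solver s on problem i (e.g. iteration count).
    # r[i, s] is the ratio of that cost to the best cost on problem i;
    # rho(s, tau) is the fraction of problems on which solver s is
    # within a factor tau of the best solver.
    r = T / T.min(axis=1, keepdims=True)
    def rho(s, tau):
        return float(np.mean(r[:, s] <= tau))
    return r, rho
```

Plotting rho(s, tau) against tau for each solver s reproduces the curves of Figs. 1-3: the value at tau = 1 is the fraction of problems on which a solver is the outright best.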

Conclusion
This paper proposed an FR-like derivative-free algorithm combined with the projection technique for solving nonlinear monotone operator equations. The proposed search direction is bounded and satisfies the sufficient descent condition. Under suitable assumptions, global convergence was established. The results of the numerical experiments illustrate the strength and efficiency of the proposed algorithm compared with existing algorithms.