Abstract
In this paper, we propose a dynamic distributed conjugate gradient method for solving the strongly monotone variational inequality problem over the intersection of fixed-point sets of firmly nonexpansive operators. The proposed method allows the independent computation of each firmly nonexpansive operator together with a dynamic weight that is updated at every iteration. This strategy aims to accelerate the convergence of the algorithm by updating the control factors that drive each iterative step. Under suitable control conditions on the corresponding parameters, we prove strong convergence of the iterates to the unique solution of the considered variational inequality problem. We present numerical experiments and discuss several observations by applying the method to an image classification problem via support vector machine learning.
References
Auslender, A.: Optimisation: Methods Numeriques. Masson, Paris (1976)
Bargetz, C., Kolobov, V.I., Reich, S., Zalas, R.: Linear convergence rates for extrapolated fixed point algorithms. Optimization 68, 163–195 (2018)
Bauschke, H.H., Borwein, J.: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367–426 (1996)
Butnariu, D., Censor, Y.: On the behavior of a block-iterative projection method for solving convex feasibility problems. Int. J. Comp. Math. 34, 79–94 (1990)
Cegielski, A.: Iterative Methods for Fixed Point Problems in Hilbert Spaces. Springer, Heidelberg (2012)
Cegielski, A.: Extrapolated simultaneous subgradient projection method for variational inequality over the intersection of convex subsets. J Nonlinear Convex Anal. 15(2), 211–218 (2014)
Cegielski, A., Censor, Y.: Opial-type theorems and the common fixed point problem. In: Bauschke, H.H., Burachik, R.S., Combettes, P.L., Elser, V., Luke, D.R., Wolkowicz, H. (eds.) Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp 155–183. Springer, New York (2011)
Cegielski, A., Censor, Y.: Extrapolation and local acceleration of an iterative process for common fixed point problems. J. Math. Anal. Appl. 394(2), 809–818 (2012)
Cegielski, A., Gibali, A., Reich, S., Zalas, R.: An algorithm for solving the variational inequality problem over the fixed point set of a quasi-nonexpansive operator in Euclidean space. Numer Funct Anal Optim. 34(10), 1067–1096 (2013)
Cegielski, A., Zalas, R.: Methods for variational inequality problem over the intersection of fixed point sets of quasi-nonexpansive operators. Numer Funct Anal Optim. 34(10), 255–283 (2013)
Censor, Y., Elfving, T.: New methods for linear inequalities. Linear Algebra Appl. 42, 199–211 (1982)
Cheng, W.: A two-term PRP-based descent method. Numer Funct Anal Optim. 28, 1217–1230 (2007)
Cimmino, G.: Calcolo approssimato per le soluzioni dei sistemi di equazioni lineari. Ric Sci. 9, 326–333 (1938)
Combettes, P.L.: Inconsistent signal feasibility problems: least-square solutions in a product space. IEEE Trans Signal Process. 42, 2955–2966 (1994)
Combettes, P.L.: Convex set theoretic image recovery by extrapolated iterations of parallel subgradient projections. IEEE Trans Image Process. 6, 493–506 (1997)
Dos Santos, L.T.: A parallel subgradient projections method for the convex feasibility problem. J Comput Appl Math. 18, 307–320 (1987)
Gibali, A., Reich, S., Zalas, R.: Iterative methods for solving variational inequalities in Euclidean space. J Fixed Point Theory Appl. 17(4), 775–811 (2015)
Gibali, A., Reich, S., Zalas, R.: Outer approximation methods for solving variational inequalities in Hilbert space. Optimization. 66(3), 417–437 (2017)
Goldstein, A.A.: Convex programming in Hilbert space. Bull Amer Math Soc. 70, 709–710 (1964)
Iiduka, H.: Three-term conjugate gradient method for the convex optimization problem over the fixed point set of a nonexpansive mapping. Appl Math Comput. 217, 6315–6327 (2011)
Iiduka, H.: Distributed optimization for network resource allocation with nonsmooth utility functions. IEEE Trans Control Netw Syst. 6, 1354–1365 (2019)
Iiduka, H.: Stochastic fixed point optimization algorithm for classifier ensemble. IEEE Trans Cybern. 50(10), 4370–4380 (2020)
Iiduka, H., Yamada, I.: An ergodic algorithm for the power-control games for CDMA data networks. J Math Model Algorithms. 8, 1–18 (2009)
Iiduka, H., Yamada, I.: A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J Optim. 19, 1881–1893 (2009)
Kiwiel, K.C.: Block-iterative surrogate projection methods for convex feasibility problems. Linear Algebra Appl. 215, 225–259 (1995)
Maingé, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008)
Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull Am Math Soc. 73, 591–597 (1967)
Pierra, G.: Decomposition through formalization in a product space. Math Program. 28, 96–115 (1984)
Prangprakhon, M., Nimana, N., Petrot, N.: A sequential constraint method for solving variational inequality over the intersection of fixed point sets. Thai J Math. 18(3), 1105–1123 (2020)
Prangprakhon, M., Nimana, N.: Extrapolated sequential constraint method for variational inequality over the intersection of fixed-point sets. Numer Algor. 88, 1051–1075 (2021)
Slavakis, K., Yamada, I., Sakaniwa, K.: Computation of symmetric positive definite Toeplitz matrices by the hybrid steepest descent method. Signal Process. 83, 1135–1140 (2003)
Slavakis, K., Yamada, I.: Robust wideband beamforming by the hybrid steepest descent method. IEEE Trans Signal Process. 55, 4511–4522 (2007)
Takahashi, N., Yamada, I.: Parallel algorithms for variational inequalities over the Cartesian product of the intersections of the fixed point sets of nonexpansive mappings. J Approx Theory. 153, 139–160 (2008)
Wang, F., Xu, H.-K.: Cyclic algorithms for split feasibility problems in Hilbert spaces. Nonlinear Anal. 74, 4105–4111 (2011)
Xu, H.K.: Iterative algorithms for nonlinear operators. J London Math Soc. 66, 240–256 (2002)
Xu, H.K., Cegielski, A.: The Landweber operator approach to the split equality problem. SIAM J. Optim. 31(1), 626–652 (2021)
Yamada, I.: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In: Butnariu, D., Censor, Y., Reich, S. (eds.) Inherently Parallel Algorithms in Feasibility and Optimization and their Applications, pp 473–504. Amsterdam, Elsevier (2001)
Yamada, I., Ogura, N., Shirakawa, N.: A numerical robust hybrid steepest descent method for the convexly constrained generalized inverse problems. Contemp Math. 313, 269–305 (2002)
Zhang, L., Zhou, W., Li, D.H.: A descent modified Polak-Ribiere-Polyak conjugate gradient method and its global convergence. IMA J Numer Anal. 26, 629–640 (2006)
Zhang, L., Zhou, W., Li, D.H.: Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search. Numer Math. 104, 561–572 (2006)
Zhang, L., Zhou, W., Li, D.H.: Some descent three-term conjugate gradient methods and their global convergence. Optim Methods Softw. 22, 697–711 (2007)
Acknowledgements
The authors are thankful to the Editor and two anonymous referees for comments and remarks which improved the quality and presentation of the paper.
Funding
N. Petrot was supported by the National Research Council of Thailand (Grant No. R2565B074). N. Nimana has received funding support from the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (Grant No. B05F640183).
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare no competing interests.
Additional information
Data availability
The MNIST dataset that supports the findings of this study is available from http://www.cs.nyu.edu/~roweis/data.html.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix 1: Parameter combinations of DDCGM
We start by investigating several parameter combinations for DDCGM, running the algorithm for 100 iterations.
In Table 1, we present the misclassification rate in percentage and the computational runtime (the number in parentheses) when constructing the classifier for various choices of the parameters λk and βk, with the parameters \(\varphi _{k}=\frac {1}{k+1}\), μ = 1, and γk = 0.
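The sweep described above can be sketched as a simple grid search: for each pair (λk, βk) we train the classifier for 100 iterations, record the misclassification rate, and time the run. The function `run_ddcgm` below is a hypothetical stand-in for one training run (the real experiment trains an SVM on MNIST); only the decaying form of the control sequences, e.g., \({\upbeta }_{k}=\frac {\upbeta }{k+1}\), is taken from the paper.

```python
import itertools
import time

def run_ddcgm(lmbda, beta, n_iters=100):
    """Hypothetical stand-in for one DDCGM training run; returns a
    misclassification rate (%). The real experiment trains an SVM
    classifier with phi_k = 1/(k+1), mu = 1, and gamma_k = 0."""
    rate = 0.0
    for k in range(n_iters):
        # Decaying control sequence of the form used in the experiments.
        beta_k = beta / (k + 1)
        # Placeholder computation; the actual iterate update is DDCGM's.
        rate = lmbda * beta_k
    return rate

# Sweep a grid of (lambda_k, beta) pairs and time each run,
# mirroring how Table 1 was produced.
results = {}
for lmbda, beta in itertools.product([1.7, 1.8, 1.9], [0.8, 0.9]):
    t0 = time.perf_counter()
    rate = run_ddcgm(lmbda, beta)
    results[(lmbda, beta)] = (rate, time.perf_counter() - t0)

# Pick the combination with the lowest misclassification rate.
best = min(results, key=lambda p: results[p][0])
```

With the placeholder objective above, `best` is simply the smallest product λ·β; in the actual experiment the winner is determined by the trained classifier's error on the test set.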
We observe from Table 1 that the combinations of \({\upbeta }_{k}=\frac {0.9}{k+1}\) with the relaxation parameters λk = 1.8 and λk = 1.9, and the combination of \({\upbeta }_{k}=\frac {0.8}{k+1}\) with the relaxation parameter λk = 1.9, lead to the lowest misclassification rate of 0.8560%. Among these three combinations, \({\upbeta }_{k}=\frac {0.9}{k+1}\) with λk = 1.8 yields the fastest CPU time of 130.15 sec. It can also be seen that, for each choice of λk, the CPU time increased as the value of βk increased.
In Table 2, we present the misclassification rate and CPU time for combinations of several choices of the parameters φk and λk when the parameters μ = 1 and γk = 0, and the parameter \({\upbeta }_{k}=\frac {0.9}{k+1}\), which is the best choice from Table 1. It can be observed that the least misclassification rate and the least CPU time were achieved when the parameter φk was quite small. The best classifier performance of 0.8560% misclassification rate was obtained by the choices λk ∈ [1.7, 1.9]. The least computational runtime of 125.58 sec. was obtained from the combination of λk = 1.9 with \(\varphi _{k}=\frac {0.2}{k+1}\).
Table 3 shows the misclassification rate in percentage and CPU time for several choices of the parameters μ and γk when the parameters \(\lambda _{k}=1.9, \varphi _{k}=\frac {0.2}{k+1}\), and \({\upbeta }_{k}=\frac {0.9}{k+1}\). The best misclassification rate of 0.8056% was obtained by the combination of μ ∈ [1.3, 1.9] with each parameter \(\gamma _{k}=\frac {\gamma }{k+1}\), where γ ∈ [0, 0.1]. The least computational runtime of 125.98 sec. was observed for the choice μ = 1.9 with \(\gamma _{k}=\frac {0.005}{k+1}\).
Appendix 2: Parameter combinations of HSDM [37]
In this section, we present some parameter combinations of the hybrid steepest descent method (HSDM) [37]. We let the operator F be as in the previous experiments and define the operator \(T={\sum }_{i=1}^{2m}\omega _{i} T_{i},\) where \(\omega _{i}=\frac {1}{2m}\) for all i = 1,…, 2m.
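The equally weighted averaging above can be illustrated with a minimal sketch. The helper `averaged_operator` and the projection operators below are illustrative choices, not part of the paper; projections onto closed balls are used only because they are simple examples of firmly nonexpansive operators.

```python
import numpy as np

def averaged_operator(operators):
    """Build T = sum_i w_i T_i with equal weights w_i = 1/n,
    as in the HSDM experiment (n = 2m there)."""
    n = len(operators)
    def T(x):
        return sum(Ti(x) for Ti in operators) / n
    return T

def proj_ball(r):
    """Metric projection onto the closed ball of radius r centered
    at the origin; a standard firmly nonexpansive operator."""
    def P(x):
        nx = np.linalg.norm(x)
        return x if nx <= r else (r / nx) * x
    return P

# Toy illustration with two firmly nonexpansive operators.
T = averaged_operator([proj_ball(1.0), proj_ball(2.0)])
y = T(np.array([4.0, 0.0]))  # average of [1, 0] and [2, 0]
```

Fixed points of the averaged operator T coincide with the common fixed points of the Ti (here, the intersection of the two balls), which is what lets HSDM run on a single operator.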
Table 4 shows the misclassification rate in percentage and CPU time for several choices of the parameters μ and βk. The best misclassification rate of 1.2085% and the least computational time were obtained by the combination of μ = 1.1 and βk = 0.9.
Appendix 3: Parameter combinations of Iiduka’s [20] HTCGM
In this section, we present some parameter combinations of the hybrid three-term conjugate gradient method (HTCGM) [20, Algorithm 6]. We let the operator F be as in the previous experiments and define the operator \(T={\sum }_{i=1}^{2m+1}\omega _{i} T_{i},\) where \(\omega _{i}=\frac {1}{2m+1}\) for all i = 1,…, 2m + 1.
In Table 5, we present the misclassification rate and CPU time for combinations of several choices of the parameters φk and βk when performing HTCGM with the parameters μ = 1 and γk = 0. The best classifier performance of 1.1581% misclassification rate was obtained by the choices \(\varphi _{k}=\frac {0.8}{k+1}\) and \({\upbeta }_{k}=\frac {0.8}{k+1}\), with the least CPU time of 241.34 sec.
Table 6 shows the misclassification rate in percentage and CPU time for several choices of the parameters μ and γk when performing HTCGM with the parameters \(\varphi _{k}=\frac {0.8}{k+1}\) and \({\upbeta }_{k}=\frac {0.8}{k+1}\). The best misclassification rate of 1.1581% and the least computational time were obtained by the combination of μ = 1 and γk = 0.01.
Appendix 4: Parameter combinations of Cegielski’s [6] ESSPM
In this section, we present some parameter combinations of the extrapolated simultaneous subgradient projection method (ESSPM) [6]. The dataset and experimental settings are the same as above.
In Table 7, we present the misclassification rate and CPU time for combinations of several choices of the parameters λk and βk when performing ESSPM with identical constant weights and the parameter μ = 1. The best classifier performance of 1.1581% misclassification rate was obtained by the choices λk = 1.1 and \({\upbeta }_{k}=\frac {0.9}{k+1}\).
Finally, Table 8 shows the misclassification rate in percentage and CPU time for several choices of the parameters μ and λk when performing ESSPM with identical constant weights and the parameter \({\upbeta }_{k}=\frac {0.9}{k+1}\). It can be observed that the least misclassification rate was achieved when the parameter μ ≥ 1. The best misclassification rate of 1.1581% and the least computational time were obtained by the combination of μ = 1 and λk = 1.9.
Rights and permissions
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Petrot, N., Prangprakhon, M., Promsinchai, P. et al. A dynamic distributed conjugate gradient method for variational inequality problem over the common fixed-point constraints. Numer Algor 93, 639–668 (2023). https://doi.org/10.1007/s11075-022-01430-8