
A dynamic distributed conjugate gradient method for variational inequality problem over the common fixed-point constraints


Abstract

In this paper, we propose a dynamic distributed conjugate gradient method for solving the strongly monotone variational inequality problem over the intersection of the fixed-point sets of firmly nonexpansive operators. The proposed method allows each firmly nonexpansive operator to be computed independently, together with a dynamic weight that is updated at every iteration. This strategy aims to accelerate the convergence of the algorithm by adapting the control factors that drive each iterative step. Under suitable conditions on the corresponding parameters, we prove strong convergence of the iterates to the unique solution of the considered variational inequality problem. We also present numerical experiments and discuss several observations obtained by applying the method to an image classification problem via support vector machine learning.
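For reference, the problem addressed can be written in the following standard form (the notation is ours: F denotes the strongly monotone operator and T1,…,Tm the firmly nonexpansive operators whose common fixed points form the constraint set):

\[
\text{find } x^{*}\in C:=\bigcap_{i=1}^{m}\operatorname{Fix}(T_{i})
\quad\text{such that}\quad
\langle F(x^{*}),\,x-x^{*}\rangle \ge 0 \ \text{ for all } x\in C.
\]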


References

  1. Auslender, A.: Optimisation: Méthodes Numériques. Masson, Paris (1976)

  2. Bargetz, C., Kolobov, V.I., Reich, S., Zalas, R.: Linear convergence rates for extrapolated fixed point algorithms. Optimization 68, 163–195 (2018)

  3. Bauschke, H.H., Borwein, J.: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367–426 (1996)

  4. Butnariu, D., Censor, Y.: On the behavior of a block-iterative projection method for solving convex feasibility problems. Int. J. Comp. Math. 34, 79–94 (1990)

  5. Cegielski, A.: Iterative Methods for Fixed Point Problems in Hilbert Spaces. Springer, Heidelberg (2012)

  6. Cegielski, A.: Extrapolated simultaneous subgradient projection method for variational inequality over the intersection of convex subsets. J Nonlinear Convex Anal. 15(2), 211–218 (2014)

  7. Cegielski, A., Censor, Y.: Opial-type theorems and the common fixed point problem. In: Bauschke, H.H., Burachik, R.S., Combettes, P.L., Elser, V., Luke, D.R., Wolkowicz, H. (eds.) Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp 155–183. Springer, New York (2011)

  8. Cegielski, A., Censor, Y.: Extrapolation and local acceleration of an iterative process for common fixed point problems. J. Math. Anal. Appl. 394(2), 809–818 (2012)

  9. Cegielski, A., Gibali, A., Reich, S., Zalas, R.: An algorithm for solving the variational inequality problem over the fixed point set of a quasi-nonexpansive operator in Euclidean space. Numer Funct Anal Optim. 34(10), 1067–1096 (2013)

  10. Cegielski, A., Zalas, R.: Methods for variational inequality problem over the intersection of fixed point sets of quasi-nonexpansive operators. Numer Funct Anal Optim. 34(10), 255–283 (2013)

  11. Censor, Y., Elfving, T.: New methods for linear inequalities. Lin Algebra Appl. 42, 199–211 (1982)

  12. Cheng, W.: A two-term PRP-based descent method. Numer Funct Anal Optim. 28, 1217–1230 (2007)

  13. Cimmino, G.: Calcolo approssimato per le soluzioni dei sistemi di equazioni lineari. Ric Sci. 9, 326–333 (1938)

  14. Combettes, P.L.: Inconsistent signal feasibility problems: least-square solutions in a product space. IEEE Trans Signal Process. 42, 2955–2966 (1994)

  15. Combettes, P.L.: Convex set theoretic image recovery by extrapolated iterations of parallel subgradient projections. IEEE Trans Image Process. 6, 493–506 (1997)

  16. Dos Santos, L.T.: A parallel subgradient projections method for the convex feasibility problem. J Comput Appl Math. 18, 307–320 (1987)

  17. Gibali, A., Reich, S., Zalas, R.: Iterative methods for solving variational inequalities in Euclidean space. J Fixed Point Theory Appl. 17(4), 775–811 (2015)

  18. Gibali, A., Reich, S., Zalas, R.: Outer approximation methods for solving variational inequalities in Hilbert space. Optimization. 66(3), 417–437 (2017)

  19. Goldstein, A.A.: Convex programming in Hilbert space. Bull Amer Math Soc. 70, 709–710 (1964)

  20. Iiduka, H.: Three-term conjugate gradient method for the convex optimization problem over the fixed point set of a nonexpansive mapping. Appl Math Comput. 217, 6315–6327 (2011)

  21. Iiduka, H.: Distributed optimization for network resource allocation with nonsmooth utility functions. IEEE Trans Control Netw Syst. 6, 1354–1365 (2019)

  22. Iiduka, H.: Stochastic fixed point optimization algorithm for classifier ensemble. IEEE Trans Cybern. 50(10), 4370–4380 (2020)

  23. Iiduka, H., Yamada, I.: An ergodic algorithm for the power-control games for CDMA data networks. J Math Model Algorithms. 8, 1–18 (2009)

  24. Iiduka, H., Yamada, I.: A use of conjugate gradient direction for the convex optimization problem over the fixed point set of a nonexpansive mapping. SIAM J Optim. 19, 1881–1893 (2009)

  25. Kiwiel, K.C.: Block-iterative surrogate projection methods for convex feasibility problems. Lin Algebra Appl. 215, 225–259 (1995)

  26. Maingé, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008)

  27. Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull Am Math Soc. 73, 591–597 (1967)

  28. Pierra, G.: Decomposition through formalization in a product space. Math Program. 28, 96–115 (1984)

  29. Prangprakhon, M., Nimana, N., Petrot, N.: A sequential constraint method for solving variational inequality over the intersection of fixed point sets. Thai J Math. 18(3), 1105–1123 (2020)

  30. Prangprakhon, M., Nimana, N.: Extrapolated sequential constraint method for variational inequality over the intersection of fixed-point sets. Numer Algor. 88, 1051–1075 (2021)

  31. Slavakis, K., Yamada, I., Sakaniwa, K.: Computation of symmetric positive definite Toeplitz matrices by the hybrid steepest descent method. Signal Process. 83, 1135–1140 (2003)

  32. Slavakis, K., Yamada, I.: Robust wideband beamforming by the hybrid steepest descent method. IEEE Trans Signal Process. 55, 4511–4522 (2007)

  33. Takahashi, N., Yamada, I.: Parallel algorithms for variational inequalities over the Cartesian product of the intersections of the fixed point sets of nonexpansive mappings. J Approx Theory. 153, 139–160 (2008)

  34. Wang, F., Xu, H.-K.: Cyclic algorithms for split feasibility problems in Hilbert spaces. Nonlinear Anal. 74, 4105–4111 (2011)

  35. Xu, H.K.: Iterative algorithms for nonlinear operators. J London Math Soc. 66, 240–256 (2002)

  36. Xu, H.K., Cegielski, A.: The Landweber operator approach to the split equality problem. SIAM J. Optim. 31(1), 626–652 (2021)

  37. Yamada, I.: The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In: Butnariu, D., Censor, Y., Reich, S. (eds.) Inherently Parallel Algorithms in Feasibility and Optimization and their Applications, pp 473–504. Amsterdam, Elsevier (2001)

  38. Yamada, I., Ogura, N., Shirakawa, N.: A numerical robust hybrid steepest descent method for the convexly constrained generalized inverse problems. Contemp Math. 313, 269–305 (2002)

  39. Zhang, L., Zhou, W., Li, D.H.: A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence. IMA J Numer Anal. 26, 629–640 (2006)

  40. Zhang, L., Zhou, W., Li, D.H.: Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search. Numer Math. 104, 561–572 (2006)

  41. Zhang, L., Zhou, W., Li, D.H.: Some descent three-term conjugate gradient methods and their global convergence. Optim Methods Softw. 22, 697–711 (2007)

Acknowledgements

The authors thank the Editor and the two anonymous referees for their comments and remarks, which improved the quality and presentation of the paper.

Funding

N. Petrot was supported by the National Research Council of Thailand (Grant No. R2565B074). N. Nimana has received funding support from the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (Grant No. B05F640183).

Author information

Corresponding author

Correspondence to Nimit Nimana.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Data availability

The MNIST dataset that supports the findings of this study is available from http://www.cs.nyu.edu/~roweis/data.html.

Appendices

Appendix 1: Parameter combinations of DDCGM

We start by investigating several combinations of the parameters of DDCGM, running the algorithm for 100 iterations.

Table 1 reports the misclassification rate (in percent) and the computational runtime (the number in parentheses) obtained when constructing the classifier for various choices of the parameters λk and βk, with \(\varphi _{k}=\frac {1}{k+1}\), μ = 1, and γk = 0.

Table 1 Misclassification rate in percentage and CPU time for several choices of parameters λk ∈ (1,2) and \({\upbeta }_{k}=\frac {\upbeta }{k+1}\) where β ∈ (0,1) when \(\varphi _{k}=\frac {1}{k+1}\), μ = 1, and γk = 0

We observe from Table 1 that the combinations of \({\upbeta }_{k}=\frac {0.9}{k+1}\) with the relaxation parameters λk = 1.8 and λk = 1.9, together with the combination of \({\upbeta }_{k}=\frac {0.8}{k+1}\) with λk = 1.9, lead to the lowest misclassification rate of 0.8560%. Among these three combinations, \({\upbeta }_{k}=\frac {0.9}{k+1}\) with λk = 1.8 yields the fastest CPU time of 130.15 sec. We also observe that, for each choice of λk, the CPU time increases as the value of βk increases.
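For concreteness, the decaying parameter schedules and the grid sweep over λk and βk can be set up as in the following Python sketch. The routine run_ddcgm, its signature, and the specific grid values are illustrative placeholders of ours and are not taken from the paper.

import time

def make_schedules(lam, beta, phi=1.0, gamma=0.0):
    """Iteration-dependent parameters used in the sweep: lambda_k is constant,
    while beta_k, phi_k, and gamma_k decay like 1/(k+1)."""
    return {
        "lambda": lambda k: lam,
        "beta":   lambda k: beta / (k + 1),
        "phi":    lambda k: phi / (k + 1),
        "gamma":  lambda k: gamma / (k + 1),
    }

def run_ddcgm(schedules, mu=1.0, iterations=100):
    """Placeholder for the training routine (the algorithm itself is not reproduced
    here); it should return the misclassification rate of the trained classifier."""
    return 0.0  # dummy value so the sweep skeleton below runs

# Illustrative grid over lambda_k in (1, 2) and beta in (0, 1), with mu = 1 and gamma_k = 0.
results = {}
for lam in [1.1, 1.3, 1.5, 1.7, 1.8, 1.9]:
    for beta in [0.1, 0.3, 0.5, 0.7, 0.9]:
        start = time.time()
        error = run_ddcgm(make_schedules(lam, beta), mu=1.0, iterations=100)
        results[(lam, beta)] = (error, time.time() - start)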

In Table 2, we present the misclassification rate and CPU time for several combinations of the parameters φk and λk with μ = 1, γk = 0, and \({\upbeta }_{k}=\frac {0.9}{k+1}\), the best choice from Table 1. It can be observed that the lowest misclassification rate and the shortest CPU time were achieved when the parameter φk was quite small. The best classifier performance, a misclassification rate of 0.8560%, was obtained for λk ∈ [1.7, 1.9]. The shortest computational runtime of 125.58 sec. was obtained by combining λk = 1.9 with \(\varphi _{k}=\frac {0.2}{k+1}\).

Table 2 Misclassification rate in percentage and CPU time for several choices of parameters λk ∈ (1,2) and \(\varphi _{k}=\frac {\varphi }{k+1}\) where φ ∈ (0,1) when \({\upbeta }_{k}=\frac {0.9}{k+1}\), μ = 1, and γk = 0

Table 3 shows the misclassification rate in percent and the CPU time for several choices of the parameters μ and γk with \(\lambda _{k}=1.9, \varphi _{k}=\frac {0.2}{k+1}\), and \({\upbeta }_{k}=\frac {0.9}{k+1}\). The best misclassification rate of 0.8056% was obtained by combining μ ∈ [1.3, 1.9] with any parameter \(\gamma _{k}=\frac {\gamma }{k+1}\) where γ ∈ [0, 0.1]. The shortest computational runtime of 125.98 sec. was observed for μ = 1.9 with \(\gamma _{k}=\frac {0.005}{k+1}\).

Table 3 Misclassification rate in percentage and CPU time for several choices of parameters μ ∈ (0,2) and \(\gamma _{k}=\frac {\gamma }{k+1}\) where γ ∈ [0,1) when λk = 1.9, \(\varphi _{k}=\frac {0.2}{k+1}\), and \({\upbeta }_{k}=\frac {0.9}{k+1}\)

Appendix 2: Parameter combinations of HSDM [37]

In this section, we present some parameter combinations of the hybrid steepest descent method (HSDM) [37]. We keep the operator F as above and define the operator \(T={\sum }_{i=1}^{2m}\omega _{i} T_{i},\) where \(\omega _{i}(x)=\frac {1}{2m}\) for all i = 1,…, 2m.
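As a minimal illustration of this uniformly weighted averaging (assuming each Ti is available as a callable mapping a vector to a vector; the helper names below are ours), the operator T can be assembled as follows. The example operators are metric projections onto half-spaces, which are firmly nonexpansive.

import numpy as np

def average_operator(operators):
    """Return T = sum_i w_i T_i with uniform weights w_i = 1/len(operators)."""
    m = len(operators)
    def T(x):
        x = np.asarray(x, dtype=float)
        return sum(Ti(x) for Ti in operators) / m
    return T

def projection_halfspace(a, b):
    """Metric projection onto the half-space {x : <a, x> <= b}."""
    a = np.asarray(a, dtype=float)
    def P(x):
        x = np.asarray(x, dtype=float)
        violation = max(0.0, float(a @ x) - b)
        return x - (violation / float(a @ a)) * a
    return P

T = average_operator([projection_halfspace([1.0, 0.0], 1.0),
                      projection_halfspace([0.0, 1.0], 2.0)])
print(T(np.array([3.0, 3.0])))  # average of the two projections of (3, 3)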

Table 4 shows the misclassification rate in percent and the CPU time for several choices of the parameters μ and βk. The best misclassification rate of 1.2085% and the shortest computational time were obtained by the combination of μ = 1.1 and \({\upbeta }_{k}=\frac {0.9}{k+1}\).

Table 4 Misclassification rate in percentage and CPU time for several choices of parameters μ ∈ (0,2) and \({\upbeta }_{k}=\frac {\upbeta }{k+1}\), where β ∈ (0,1)

Appendix 3: Parameter combinations of Iiduka’s [20] HTCGM

In this section, we present some parameter combinations of the hybrid three-term conjugate gradient method (HTCGM) [20, Algorithm 6]. We keep the operator F as above and define the operator \(T={\sum }_{i=1}^{2m+1}\omega _{i} T_{i},\) where \(\omega _{i}(x)=\frac {1}{2m+1}\) for all i = 1,…, 2m + 1.

In Table 5, we present the misclassification rate and CPU time for several combinations of the parameters φk and βk when performing HTCGM with the parameters μ = 1 and γk = 0. The best classifier performance, a misclassification rate of 1.1581%, was obtained for \(\varphi _{k}=\frac {0.8}{k+1}\) and \({\upbeta }_{k}=\frac {0.8}{k+1}\), which also gave the shortest CPU time of 241.34 sec.

Table 5 Misclassification rate in percentage and CPU time for several choices of parameters \(\varphi _{k}=\frac {\varphi }{k+1}\) and \({\upbeta }_{k}=\frac {\upbeta }{k+1}\), where φ, β ∈ (0,1], μ = 1, and γk = 0

Table 6 shows the misclassification rate in percent and the CPU time for several choices of the parameters μ and γk when performing HTCGM with the parameters \(\varphi _{k}=\frac {0.8}{k+1}\) and \({\upbeta }_{k}=\frac {0.8}{k+1}\). The best misclassification rate of 1.1581% and the shortest computational time were obtained by the combination of μ = 1 and \(\gamma _{k}=\frac {0.01}{k+1}\).

Table 6 Misclassification rate in percentage and CPU time for several choices of parameters μ ∈ (0,2) and \(\gamma _{k}=\frac {\gamma }{k+1}\) where γ ∈ [0,1) when \(\varphi _{k}=\frac {0.8}{k+1}\) and \({\upbeta }_{k}=\frac {0.8}{k+1}\)

Appendix 4: Parameter combinations of Cegielski’s [6] ESSPM

In this section, we present some parameter combinations of the extrapolated simultaneous subgradient projection method (ESSPM) [6]. The dataset and all experimental settings are the same as above.

In Table 7, we present the misclassification rate and CPU time for several combinations of the parameters λk and βk when performing ESSPM with identical constant weights and the parameter μ = 1. The best classifier performance, a misclassification rate of 1.1581%, was obtained for λk = 1.1 and \({\upbeta }_{k}=\frac {0.9}{k+1}\).

Table 7 Misclassification rate in percentage and CPU time for several choices of parameters λk ∈ (1,2) and \({\upbeta }_{k}=\frac {\upbeta }{k+1}\) where β ∈ (0,1) when \(\varphi _{k}=\frac {1}{k+1}\), μ = 1, and γk = 0

Finally, Table 8 shows the misclassification rate in percent and the CPU time for several choices of the parameters μ and λk when performing ESSPM with identical constant weights and the parameter \({\upbeta }_{k}=\frac {0.9}{k+1}\). It can be observed that the lowest misclassification rate was achieved when μ ≥ 1. The best misclassification rate of 1.1581% and the shortest computational time were obtained by the combination of μ = 1 and λk = 1.9.

Table 8 Misclassification rate in percentage and CPU time for several choices of parameters μ ∈ (0,2) and λk ∈ (1,2) when \({\upbeta }_{k}=\frac {0.9}{k+1}\)

Cite this article

Petrot, N., Prangprakhon, M., Promsinchai, P. et al. A dynamic distributed conjugate gradient method for variational inequality problem over the common fixed-point constraints. Numer Algor 93, 639–668 (2023). https://doi.org/10.1007/s11075-022-01430-8
