On perturbed hybrid steepest descent method with minimization or superiorization for subdifferentiable functions

  • Mohsen Hoseini
  • Shahram Saeidi
  • Do Sang Kim
Original Paper

Abstract

The hybrid steepest descent method (HSDM) can be applied to find the minimum value of a differentiable function over a nonempty closed convex subset of a Hilbert space. In this work, we study perturbed algorithms in line with a generalized HSDM and discuss how certain choices of perturbations can increase the convergence speed. When we specialize these results to constrained minimization, the perturbations become the bounded perturbations used in the superiorization methodology (SM). We show the usefulness of the SM in studying the constrained convex minimization problem for subdifferentiable functions and then examine the computational efficiency of the SM compared with that of the HSDM. In a computational experiment comparing the HSDM with superiorization, the latter appears advantageous for the specific problem considered.
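To make the two schemes in the abstract concrete, the following is a minimal Python sketch, not the paper's algorithm or experiment: it minimizes the subdifferentiable function f(x) = ||Ax - b||_1 over a closed Euclidean ball, once with a basic HSDM-type iteration (a subgradient standing in for the gradient, in the spirit of the paper's subdifferentiable setting) and once with a basic superiorized projection iteration using summable bounded perturbations along negative-subgradient directions. All problem data, step sizes, and helper names (f, subgrad, proj_C, mu, beta0, q) are assumptions made for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
    radius = 1.0  # C = closed Euclidean ball of radius 1 (hypothetical constraint)

    def f(x):
        # Convex and subdifferentiable, but not differentiable: f(x) = ||Ax - b||_1.
        return np.abs(A @ x - b).sum()

    def subgrad(x):
        # One subgradient of f at x.
        return A.T @ np.sign(A @ x - b)

    def proj_C(x):
        # Metric projection onto the ball C.
        nrm = np.linalg.norm(x)
        return x if nrm <= radius else (radius / nrm) * x

    def hsdm(x, iters=500, mu=0.1):
        # HSDM-type step: x_{k+1} = P_C(x_k) - lam_k * mu * g(P_C(x_k)),
        # with lam_k = 1/k, so lam_k -> 0 while the sum of lam_k diverges.
        for k in range(1, iters + 1):
            y = proj_C(x)
            x = y - (mu / k) * subgrad(y)
        return proj_C(x)

    def superiorized(x, iters=500, beta0=1.0, q=0.99):
        # Bounded perturbations x_k + beta_k * v_k, with summable beta_k and
        # v_k a normalized nonascent (negative-subgradient) direction,
        # followed by the feasibility-seeking step P_C.
        beta = beta0
        for _ in range(iters):
            g = subgrad(x)
            v = -g / max(np.linalg.norm(g), 1e-12)
            x = proj_C(x + beta * v)
            beta *= q  # geometric decay, hence summable
        return x

    x0 = rng.standard_normal(10)
    print("HSDM:         f =", f(hsdm(x0.copy())))
    print("Superiorized: f =", f(superiorized(x0.copy())))

Here each superiorized step interleaves one bounded objective-reducing perturbation with one feasibility-seeking projection, mirroring the bounded-perturbation structure the abstract attributes to the SM; any actual performance comparison rests on the paper's own experimental setup.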

Keywords

Variational inequality · Perturbation · Superiorization · Hybrid steepest descent method

Acknowledgments

The authors would like to thank the referee for giving valuable and constructive comments that greatly contributed to improving the final version of this paper.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Mathematics, University of Kurdistan, Sanandaj, Iran
  2. Department of Applied Mathematics, Pukyong National University, Busan, Republic of Korea