Abstract
In this book we study the behavior of algorithms for constrained convex minimization problems in a Hilbert space. Our goal is to obtain a good approximate solution of the problem in the presence of computational errors. It is known that an algorithm generates a good approximate solution if the sequence of computational errors is bounded from above by a small constant. In our study, presented in this book, we take into account the fact that each iteration of an algorithm consists of several steps and that, in general, the computational errors of different steps differ. In this chapter we discuss several algorithms which are studied in this book.
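To make the setting concrete, the following is a minimal sketch of a projected subgradient method in which each iteration consists of two steps (a subgradient step and a projection step) that carry computational errors of different magnitudes. The objective f(x) = ‖x − c‖², the unit-ball constraint, and all names and parameter values here are illustrative assumptions, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(0)
c = np.array([2.0, 1.0])  # unconstrained minimizer, chosen outside the unit ball

def subgradient(x, err):
    """Gradient of f(x) = ||x - c||^2, perturbed by a computational error of size err."""
    noise = rng.standard_normal(x.shape)
    return 2.0 * (x - c) + err * noise / np.linalg.norm(noise)

def project_ball(x, err):
    """Projection onto the unit ball, also perturbed by an error of size err."""
    p = x if np.linalg.norm(x) <= 1.0 else x / np.linalg.norm(x)
    noise = rng.standard_normal(x.shape)
    return p + err * noise / np.linalg.norm(noise)

def projected_subgradient(x0, steps, step_size, grad_err, proj_err):
    # The two steps of each iteration are allowed different error bounds
    # (grad_err vs proj_err) -- the point emphasized in the abstract above.
    x = x0
    for _ in range(steps):
        x = project_ball(x - step_size * subgradient(x, grad_err), proj_err)
    return x

x = projected_subgradient(np.zeros(2), steps=200, step_size=0.05,
                          grad_err=1e-3, proj_err=1e-4)
# With small error bounds, the iterates stay near the true constrained
# minimizer c / ||c||, the projection of c onto the unit ball.
print(x)
```

When the error bounds are small, the iterates settle in a neighborhood of the exact solution whose radius is controlled by the error magnitudes; taking larger values for `grad_err` or `proj_err` visibly enlarges that neighborhood.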
© 2020 Springer Nature Switzerland AG
Zaslavski, A.J. (2020). Introduction. In: Convex Optimization with Computational Errors. Springer Optimization and Its Applications, vol 155. Springer, Cham. https://doi.org/10.1007/978-3-030-37822-6_1
Print ISBN: 978-3-030-37821-9
Online ISBN: 978-3-030-37822-6
eBook Packages: Mathematics and Statistics (R0)