
Introduction

Chapter in: Convex Optimization with Computational Errors

Part of the book series: Springer Optimization and Its Applications (SOIA, volume 155)


Abstract

In this book we study the behavior of algorithms for constrained convex minimization problems in a Hilbert space. Our goal is to obtain a good approximate solution of the problem in the presence of computational errors. It is known that an algorithm generates a good approximate solution if the sequence of computational errors is bounded from above by a small constant. Our study takes into account that each iteration of an algorithm consists of several steps and that, in general, the computational errors of different steps differ. In this chapter we discuss several of the algorithms studied in this book.
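As a toy illustration of the setting (a minimal sketch, not the book's algorithms or analysis): a projected subgradient iteration for the one-dimensional problem of minimizing f(x) = (x - 3)^2 over the feasible interval [0, 2], where a small perturbation delta is injected at each of the two steps of an iteration — the subgradient evaluation and the projection — to model the fact that different steps carry different computational errors. All names and numbers here are illustrative assumptions.

```python
import random

def project(x, lo=0.0, hi=2.0):
    # Exact projection onto the feasible interval [lo, hi].
    return max(lo, min(hi, x))

def noisy_projected_subgradient(x0=0.0, steps=200, step_size=0.05,
                                delta=1e-3, seed=0):
    # Projected subgradient descent for f(x) = (x - 3)^2 on [0, 2],
    # with an independent error of magnitude at most delta injected
    # at BOTH steps of every iteration.
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        # Step 1: inexact subgradient evaluation (true gradient is 2(x - 3)).
        grad = 2.0 * (x - 3.0) + rng.uniform(-delta, delta)
        # Step 2: inexact projection (the result may leave [0, 2] by up to delta).
        x = project(x - step_size * grad) + rng.uniform(-delta, delta)
    return x

x = noisy_projected_subgradient()
print(abs(x - 2.0))  # small: the iterate settles near the true minimizer x* = 2
```

With delta small, the iterates settle in a neighborhood of the constrained minimizer x* = 2 whose size is controlled by delta; the point of modeling the two error sources separately is that they need not be equal and enter the analysis differently.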




Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Zaslavski, A.J. (2020). Introduction. In: Convex Optimization with Computational Errors. Springer Optimization and Its Applications, vol 155. Springer, Cham. https://doi.org/10.1007/978-3-030-37822-6_1
