Abstract
Solving an optimization problem usually consists of generating a sequence by some numerical algorithm. The key question is then to show that this sequence converges to a solution, and to evaluate the efficiency of the convergence process. In general, a coercivity hypothesis on the problem's data is assumed in order to guarantee the asymptotic convergence of the generated sequence. For convex minimization problems, if the generated sequence is stationary, i.e., the corresponding sequence of subgradients approaches zero, it is natural to ask for which class of functions convergence can still be guaranteed under an assumption weaker than coercivity. This chapter introduces the concept of well-behaved asymptotic functions, which in turn is linked to the problem of error bounds associated with a given subset of a Euclidean space. A general framework is developed around these two themes to characterize asymptotic optimality and error bounds for convex inequality systems.
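The phenomenon the abstract alludes to can be sketched numerically. The following minimal Python example (not taken from the chapter) uses the convex function f(x, y) = ||(x, y)|| − x, a standard illustration of a convex function that is not "well behaved": its infimum is 0 (attained along the positive x-axis), yet the sequence (k, √k) is stationary — the gradients tend to zero — while the function values converge to 1/2, so the stationary sequence is not minimizing. Coercivity would rule this out.

```python
import math

def f(x, y):
    # f(x, y) = ||(x, y)|| - x : convex (norm minus a linear term), inf f = 0
    return math.hypot(x, y) - x

def grad_f(x, y):
    # Gradient of f at (x, y) != (0, 0)
    r = math.hypot(x, y)
    return (x / r - 1.0, y / r)

# The sequence (k, sqrt(k)) is stationary but not minimizing:
# ||grad f|| -> 0 while f -> 1/2 > 0 = inf f.
for k in (10, 10**3, 10**6):
    x, y = float(k), math.sqrt(k)
    gx, gy = grad_f(x, y)
    print(f"k={k:>8}  f={f(x, y):.6f}  ||grad f||={math.hypot(gx, gy):.2e}")
```

Running this shows the gradient norm shrinking toward zero while the function values stall near 1/2, the gap that a coercivity (or well-behavedness) assumption is meant to exclude.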
Copyright information
© 2003 Springer-Verlag New York, Inc.
About this chapter
Cite this chapter
(2003). Minimizing and Stationary Sequences. In: Asymptotic Cones and Functions in Optimization and Variational Inequalities. Springer Monographs in Mathematics. Springer, New York, NY. https://doi.org/10.1007/0-387-22590-0_4
Publisher Name: Springer, New York, NY
Print ISBN: 978-0-387-95520-9
Online ISBN: 978-0-387-22590-6