Abstract
The classical proximal point method for optimization was analyzed in detail by Rockafellar [106]. It is devised to minimize a proper, lower semicontinuous, convex function $g\colon H \to (-\infty, +\infty]$ defined on a Hilbert space $H$. The iteration is of the form
$$x_{k+1} = \operatorname*{argmin}_{x \in H}\left\{\, g(x) + \omega_k \|x - x_k\|^2 \right\}, \tag{3.1}$$
where $\{\omega_k\}_{k \in \mathbb{N}}$ is a bounded sequence of positive real numbers. If the function $g$ is differentiable, then $x_{k+1}$ is the unique solution of
$$g'(x) + 2\omega_k (x - x_k) = 0, \tag{3.2}$$
where $g'(x)$ denotes the derivative of $g$ at $x$. For a constrained optimization problem, that is, when the function $g$ is to be minimized over a closed, convex, nonempty subset $C$ of $H$, one replaces in (3.1) the function $g$ by the function $h := g + I_C$, where $I_C$ is the indicator function of the set $C$, i.e., $I_C(x) = 0$ if $x \in C$ and $I_C(x) = +\infty$ otherwise. In this case equation (3.2) becomes the inclusion
$$0 \in g'(x) + 2\omega_k (x - x_k) + N_C(x), \tag{3.3}$$
where $N_C(x)$ is the normal cone to $C$ at $x$.
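The unconstrained iteration can be sketched numerically in one dimension. The following is a minimal illustration, not code from the chapter: assuming the proximal term $\omega_k\|x - x_k\|^2$, each outer step solves the optimality condition $g'(x) + 2\omega_k(x - x_k) = 0$, here by Newton's method (the choice of inner solver, the test function $g(x) = e^x - x$, and the constant sequence $\omega_k = 1$ are all assumptions made for this sketch).

```python
import math

def prox_step(g_prime, g_second, x_k, omega, tol=1e-12, max_iter=100):
    """Solve g'(x) + 2*omega*(x - x_k) = 0 by Newton's method (inner solver)."""
    x = x_k
    for _ in range(max_iter):
        phi = g_prime(x) + 2.0 * omega * (x - x_k)   # residual of the optimality condition
        dphi = g_second(x) + 2.0 * omega             # its derivative; > 0 since g is convex
        step = phi / dphi
        x -= step
        if abs(step) < tol:
            break
    return x

def proximal_point(g_prime, g_second, x0, omegas):
    """Run the outer proximal point iteration for a given sequence {omega_k}."""
    x = x0
    for omega in omegas:
        x = prox_step(g_prime, g_second, x, omega)
    return x

# Test function (an assumption for this sketch): g(x) = exp(x) - x,
# strictly convex with unique minimizer x* = 0.
g_prime = lambda x: math.exp(x) - 1.0
g_second = lambda x: math.exp(x)

x_min = proximal_point(g_prime, g_second, x0=2.0, omegas=[1.0] * 50)
print(x_min)  # converges to the minimizer 0
```

Because each step is the resolvent of the maximal monotone operator $g'$, the outer sequence contracts toward the minimizer for any bounded sequence of positive $\omega_k$, which is the qualitative behavior Rockafellar's analysis establishes in the general Hilbert space setting.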
Copyright information
© 2000 Springer Science+Business Media Dordrecht
Cite this chapter
Butnariu, D., Iusem, A.N. (2000). Infinite Dimensional Optimization. In: Totally Convex Functions for Fixed Points Computation and Infinite Dimensional Optimization. Applied Optimization, vol 40. Springer, Dordrecht. https://doi.org/10.1007/978-94-011-4066-9_3
Publisher Name: Springer, Dordrecht
Print ISBN: 978-94-010-5788-2
Online ISBN: 978-94-011-4066-9