On the Strong Convergence of Subgradients of Convex Functions
Abstract
In this paper, results on the strong convergence of subgradients of convex functions along a given direction are presented; that is, the relative compactness (with respect to the norm) of the union of subdifferentials of a convex function along a given direction is investigated.
Keywords
Convexity · Subdifferentials · Strong convergence of subgradients · Gâteaux derivative
Mathematics Subject Classification
Primary 49J52 · Secondary 52A41 · 41A65
1 Introduction
There are several results on the strong convergence of subgradients of a sequence of convex functions defined on a Banach space. The most celebrated is the Attouch theorem; see, for example, [1], where the equivalence of Mosco convergence of lower semicontinuous convex functions and Painlevé–Kuratowski graph convergence of their subdifferentials is established on reflexive Banach spaces. There are also results extending the Attouch theorem to general Banach spaces; see, for example, [2, 3, 4, 5, 6, 7] and the references therein. To the best of our knowledge, all known results are of the form: there exist sequences of points and subgradients such that the sequences of subgradients converge strongly (with respect to the norm of the space); see, for example, Theorem 3.1 in [7]. This is inconvenient: we simply want subgradients with a desired property, and the mere postulate of existence (“there are”) does not allow one to guarantee that the subgradients are as good as needed. This disadvantage also appears when the directional derivative is calculated. Namely, whenever the limit is taken over a discrete subset, the difference quotients form a sequence of functions of the direction, and it is natural to ask about the convergence of subgradients of these functions; see, for example, Giannessi’s questions, which are recalled in (5). The question of the existence of a convergent subsequence (at least) for this sequence of functions is the question of the existence of a convergent sequence of subgradients along a direction. In the finite-dimensional case, the existence of convergent subsequences is guaranteed by the continuity of the convex function under investigation. However, there can be subsequences with different limits; see [8]; see also [9, 10, 11]. It turns out that the set of “wrong directions” (those along which there is no unique limit) has Lebesgue measure zero; see Lemma 3.1. In the infinite-dimensional setting, it is hard to expect convergence. Thus, the basic question in this case, concerning directional convergence of subgradients, is: when does the union of subdifferentials along a given direction form a relatively compact set (with respect to the norm topology)? We should also ask about the uniqueness of the limit, which is the essence of Giannessi’s questions in the finite-dimensional setting. In the infinite-dimensional case, results of this type are rather unknown, but it would be convenient to have them at hand. For instance, when the limit exists, the limiting subgradients inherit properties of a convergent sequence, such as the size of the norm, membership in a specified closed set, good behavior with respect to weak convergence of arguments, and so on. In Sect. 3, we present a result which guarantees the relative compactness for some special classes of convex functions; see Theorem 3.1. Examples of functions from this class are provided in Lemmas 2.2 (in the Hilbert space setting) and 3.2 (in the reflexive Banach space setting).
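For the reader’s convenience, the standard notions behind the above discussion can be recalled as follows; the notation is ours and may differ slightly from that of the cited works. For a convex function \(f:X\rightarrow ]-\infty ,+\infty ]\), the subdifferential at \(x\) and the directional derivative at \(x\) in a direction \(w\) are
\[ \partial f(x):=\left\{ x^*\in X^*:\ f(y)\ge f(x)+\langle x^*,y-x\rangle \ \text{ for all } y\in X\right\} ,\qquad f'(x;w):=\lim _{t\downarrow 0}\frac{f(x+tw)-f(x)}{t}, \]
and the object whose relative norm-compactness is studied below is the union of subdifferentials along the direction \(w\) over a segment, \(\bigcup _{t\in ]0,\mu ]}\partial f(x+tw)\) with some \(\mu >0\).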
2 Preliminaries
In this section, some basic notions and their properties are gathered.
In the sequel, \((X, \Vert \cdot \Vert )\) stands for a real normed space, \(X^*\) for its dual space, and \({\mathbb {H}}\) for a real Hilbert space (with a real inner product). Weak convergence is denoted by \({\buildrel weak \over \longrightarrow }\), and the limit from the right is denoted by \(t\downarrow a\), which means that \(t>a\) and \(t\longrightarrow a\).
Lemma 2.1
Proof
Lemma 2.2
Proof
3 Relative Compactness of Sets of Subgradients
Let us recall Giannessi’s questions; see [8]; see also [9, 10, 11] for examples of convex functions on two-dimensional spaces for which the limit in (5) does not exist:
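In the form in which it is usually stated (our paraphrase of the standard formulation, not the paper’s display (5)), Giannessi’s problem from [8] asks whether, for a convex function \(f:\mathbb {R}^n\rightarrow \mathbb {R}\) which is differentiable at the points \(x+tw\) for all sufficiently small \(t>0\), the limit
\[ \lim _{t\downarrow 0}\nabla f(x+tw) \]
exists.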
Below, the directions along which we have weak\(^*\) convergence of subgradients are indicated.
Lemma 3.1
Proof
Lemma 3.2
Proof
Theorem 3.1
Proof
There are sequences \(\{t_i\}_{i\in \mathbb {N}}\) such that \(\mu _0>t_i>0\) for all \(i\in \mathbb {N}\), \(t_i\downarrow 0\), and \(\{x_i^*\}_{i\in \mathbb {N}} \) such that \(x_i^*\in \partial f(x+t_iw_0)\) for all \(i\in \mathbb {N}\), and (14) is fulfilled for a sequence of functions \(\{p_n\}_{n\in \mathbb {N}}\) from \({\mathbb {F}}(\{t_i\}_{i\in \mathbb {N}}, \beta _0, Y)\) and positive numbers \(\{\alpha ^n_i\}_{i,n\in \mathbb {N}}\) from \(]0, 1]\).
4 Conclusions
- 1.
It is shown that, for some convex functions and some directions, it is possible to find a convergent sequence of subgradients along the direction; namely, the subgradients belong to the subdifferentials at points of some segment (a convergent sequence of points) and form a convergent sequence of functionals; see Theorem 3.1.
- 2.
Examples of convex functions and directions to which Theorem 3.1 can be applied are provided; see Lemmas 2.2 and 3.2.
- 3.
In Lemma 3.1, an answer to the question of conditions under which the limit in (5) exists is provided. In fact, under the assumptions of Lemma 3.1, we obtain not only the existence of the limit, even for subgradients; due to the Rademacher theorem, the set of directions along which the limit in (5) exists is a set of full measure in the finite-dimensional setting, and it is a dense \(G_{\delta }\) subset in a weak Asplund space whenever weak convergence is postulated instead of strong convergence. Thus, Theorem 3.1, together with Lemma 3.1, gives an answer to Giannessi’s question in the infinite-dimensional setting; the standard facts behind the finite-dimensional, measure-theoretic part of this statement are recalled below.
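The finite-dimensional, measure-theoretic part of this statement rests on two standard facts, recalled here in our own wording: a convex function which is finite on an open set \(U\subset \mathbb {R}^n\) is locally Lipschitz on \(U\), and, by the Rademacher theorem, a locally Lipschitz function is differentiable almost everywhere. Moreover, at every point of differentiability the subdifferential of a convex function reduces to the gradient, so that
\[ \partial f(u)=\{\nabla f(u)\}\quad \text{for almost every } u\in U. \]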
References
- 1. Attouch, H.: Variational Convergence for Functions and Operators. Pitman Advanced Publishing Program, Boston (1984)
- 2. Beer, G., Théra, M.: Attouch–Wets convergence and a differential operator for convex functions. Proc. Am. Math. Soc. 122(3), 851–860 (1994)
- 3. Combari, C., Thibault, L.: On the graph convergence of subdifferentials of convex functions. Proc. Am. Math. Soc. 126(8), 2231–2240 (1998)
- 4. Zagrodny, D.: On the weak\(^*\) convergence of subdifferentials of convex functions. J. Convex Anal. 12(1), 213–219 (2005)
- 5. Zagrodny, D.: Minimizers of the limit of Mosco converging functions. Arch. Math. 85, 440–445 (2005)
- 6. Zagrodny, D.: A weak\(^*\) approximation of subgradient of convex function. Control Cybernet. 36(3), 793–802 (2007)
- 7. Zagrodny, D.: Convergences of subgradients of sequences of convex functions. Nonlinear Anal. 84, 84–90 (2013)
- 8. Giannessi, F.: A problem on convex functions. J. Optim. Theory Appl. 59, 525 (1988)
- 9. Pontini, C.: Solving in the affirmative a conjecture about a limit of gradients. J. Optim. Theory Appl. 70, 623–629 (1991)
- 10. Rockafellar, R.T.: On a special class of convex functions. J. Optim. Theory Appl. 70, 619–621 (1991)
- 11. Zagrodny, D.: An example of bad convex function. J. Optim. Theory Appl. 70, 631–638 (1991)
- 12. Fabian, M., Habala, P., Hájek, P., Montesinos, V., Zizler, V.: Banach Space Theory: The Basis for Linear and Nonlinear Analysis. Springer, New York (2011)
- 13. Correa, R., Gajardo, P., Thibault, L., Zagrodny, D.: Existence of minimizers on drops. SIAM J. Optim. 23, 1154–1166 (2013)
- 14. Zagrodny, D.: On closures of preimages of metric projection mappings in Hilbert spaces. Set-Valued Var. Anal. 23, 581–612 (2015)
- 15. Asplund, E.: Čebyšev sets in Hilbert spaces. Trans. Am. Math. Soc. 144, 235–240 (1969)
- 16. Yosida, K.: Functional Analysis, 6th edn. Springer, Berlin (1980)
- 17. Rudin, W.: Functional Analysis. McGraw-Hill, New York (1973)
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.