
A Survey on Proximal Point Type Algorithms for Solving Vector Optimization Problems

Chapter in: Splitting Algorithms, Modern Operator Theory, and Applications

Abstract

In this survey paper we present the existing generalizations of the proximal point method from scalar to vector optimization problems, discussing their respective advantages and drawbacks, presenting some open challenges, and sketching possible directions for future research.

Dedicated to the memory of J.M. Borwein


References

1. Aliprantis, C., Florenzano, M., da Rocha, V.M., Tourky, R.: Equilibrium analysis in financial markets with countably many securities. Journal of Mathematical Economics 40, 683–699 (2004)
2. Alvarez, F.: On the minimizing property of a second order dissipative system in Hilbert spaces. SIAM Journal on Control and Optimization 38, 1102–1119 (2000)
3. Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Analysis 9, 3–11 (2001)
4. Apolinário, H., Quiroz, E.P., Oliveira, P.: A scalarization proximal point method for quasiconvex multiobjective minimization. Journal of Global Optimization 64, 79–96 (2016)
5. Attouch, H., Garrigos, G.: Multiobjective optimization - an inertial dynamical approach to Pareto optima. arXiv:1506.02823 (2015)
6. Attouch, H., Garrigos, G., Goudou, X.: A dynamic gradient approach to Pareto optimization with nonsmooth convex objective functions. Journal of Mathematical Analysis and Applications 422, 741–771 (2015)
7. Attouch, H., Goudou, X.: A continuous gradient-like dynamical approach to Pareto-optimization in Hilbert spaces. Set-Valued and Variational Analysis 22, 189–219 (2014)
8. Auslender, A., Teboulle, M.: Interior gradient and proximal methods for convex and conic optimization. SIAM Journal on Optimization 16, 697–725 (2006)
9. Bauschke, H., Combettes, P.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics / Ouvrages de mathématiques de la SMC. Springer-Verlag, New York (2011)
10. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2, 183–202 (2009)
11. Benker, H., Hamel, A.H., Tammer, C.: An algorithm for vectorial control approximation problems. In: Multiple Criteria Decision Making (Hagen, 1995), Lecture Notes in Economics and Mathematical Systems, vol. 448, pp. 3–12. Springer-Verlag, Berlin (1997)
12. Bento, G.C., da Cruz Neto, J.X., López, G., Soubeyran, A., Souza, J.C.O.: The proximal point method for locally Lipschitz functions in multiobjective optimization with application to the compromise problem. SIAM Journal on Optimization 28, 1104–1120 (2018)
13. Bento, G.C., da Cruz Neto, J.X., de Meireles, L.V.: Proximal point method for locally Lipschitz functions in multiobjective optimization of Hadamard manifolds. Journal of Optimization Theory and Applications 179, 37–52 (2018)
14. Bento, G.C., da Cruz Neto, J.X., Soubeyran, A.: A proximal point-type method for multicriteria optimization. Set-Valued and Variational Analysis 22, 557–573 (2014)
15. Bento, G.C., Ferreira, O.P., Junior, V.L.S.: Proximal point method for a special class of nonconvex multiobjective optimization functions. Optimization Letters 12, 311–320 (2018)
16. Bento, G.C., Ferreira, O.P., Pereira, Y.R.L.: Proximal point method for vector optimization on Hadamard manifolds. Operations Research Letters 46, 13–18 (2018)
17. Bento, G.C., Ferreira, O.P., Soubeyran, A., de Sousa Júnior, V.L.: Inexact multi-objective local search proximal algorithms: application to group dynamic and distributive justice problems. Journal of Optimization Theory and Applications 177, 181–200 (2018)
18. Boţ, R.I., Csetnek, E.R., Heinrich, A.: A primal-dual splitting algorithm for finding zeros of sums of maximal monotone operators. SIAM Journal on Optimization 23, 2011–2036 (2013)
19. Boţ, R.I., Grad, S.M.: Inertial forward-backward methods for solving vector optimization problems. Optimization 67, 959–974 (2018)
20. Boţ, R.I., Hendrich, C.: A variable smoothing algorithm for solving convex optimization problems. TOP 23, 124–150 (2015)
21. Boţ, R.I., Grad, S.M., Wanka, G.: Duality in Vector Optimization. Vector Optimization. Springer-Verlag, Berlin (2009)
22. Boţ, R.I., Hendrich, C.: A Douglas-Rachford type primal-dual method for solving inclusions with mixtures of composite and parallel-sum type monotone operators. SIAM Journal on Optimization 23, 2541–2565 (2013)
23. Bolintineanu, Ş.: Approximate efficiency and scalar stationarity in unbounded nonsmooth convex vector optimization problems. Journal of Optimization Theory and Applications 106, 265–296 (2000)
24. Bonnel, H., Iusem, A.N., Svaiter, B.F.: Proximal methods in vector optimization. SIAM Journal on Optimization 15, 953–970 (2005)
25. Borwein, J.M.: Proper efficient points for maximizations with respect to cones. SIAM Journal on Control and Optimization 15, 57–63 (1977)
26. Borwein, J.M.: The geometry of Pareto efficiency over cones. Mathematische Operationsforschung und Statistik Series Optimization 11, 235–248 (1980)
27. Buong, N.: Inertial proximal point regularization algorithm for unconstrained vector convex optimization problems. Ukrainian Mathematical Journal 60, 1483–1491 (2008)
28. Ceng, L.C., Mordukhovich, B.S., Yao, J.C.: Hybrid approximate proximal method with auxiliary variational inequality for vector optimization. Journal of Optimization Theory and Applications 146, 267–303 (2010)
29. Ceng, L.C., Yao, J.C.: Approximate proximal methods in vector optimization. European Journal of Operational Research 183, 1–19 (2007)
30. Chen, Z.: Generalized viscosity approximation methods in multiobjective optimization problems. Computational Optimization and Applications 49, 179–192 (2011)
31. Chen, Z.: Asymptotic analysis in convex composite multiobjective optimization problems. Journal of Global Optimization 55, 507–520 (2013)
32. Chen, Z., Huang, H., Zhao, K.: Approximate generalized proximal-type method for convex vector optimization problem in Banach spaces. Computers & Mathematics with Applications 57, 1196–1203 (2009)
33. Chen, Z., Huang, X.X., Yang, X.Q.: Generalized proximal point algorithms for multiobjective optimization problems. Applicable Analysis 90, 935–949 (2011)
34. Chen, Z., Xiang, C., Zhao, K., Liu, X.: Convergence analysis of Tikhonov-type regularization algorithms for multiobjective optimization problems. Applied Mathematics and Computation 211, 167–172 (2009)
35. Chen, Z., Zhao, K.: A proximal-type method for convex vector optimization problem in Banach spaces. Numerical Functional Analysis and Optimization 30, 70–81 (2009)
36. Chuong, T.D.: Tikhonov-type regularization method for efficient solutions in vector optimization. Journal of Computational and Applied Mathematics 234, 761–766 (2010)
37. Chuong, T.D.: Generalized proximal method for efficient solutions in vector optimization. Numerical Functional Analysis and Optimization 32, 843–857 (2011)
38. Chuong, T.D., Mordukhovich, B.S., Yao, J.C.: Hybrid approximate proximal algorithms for efficient solutions in vector optimization. Journal of Nonlinear and Convex Analysis 12, 257–286 (2011)
39. Chuong, T.D., Yao, J.C.: Viscosity-type approximation method for efficient solutions in vector optimization. Taiwanese Journal of Mathematics 14, 2329–2342 (2010)
40. Cruz, J.Y.B.: A subgradient method for vector optimization problems. SIAM Journal on Optimization 23, 2169–2182 (2013)
41. Durea, M., Strugariu, R.: Some remarks on proximal point algorithm in scalar and vectorial cases. Nonlinear Functional Analysis and Applications 15, 307–319 (2010)
42. Fliege, J., Graña Drummond, L.M., Svaiter, B.F.: Newton's method for multiobjective optimization. SIAM Journal on Optimization 20, 602–626 (2009)
43. Gerstewitz, C.: Nichtkonvexe Dualität in der Vektoroptimierung. Wissenschaftliche Zeitschrift der Technischen Hochschule Carl Schorlemmer Leuna-Merseburg 25, 357–364 (1983)
44. Gong, X.H.: Optimality conditions for Henig and globally proper efficient solutions with ordering cone has empty interior. Journal of Mathematical Analysis and Applications 307, 12–31 (2005)
45. Göpfert, A., Riahi, H., Tammer, C., Zălinescu, C.: Variational Methods in Partially Ordered Spaces. CMS Books in Mathematics / Ouvrages de mathématiques de la SMC. Springer-Verlag, New York (2003)
46. Graña Drummond, L.M., Iusem, A.N.: A projected gradient method for vector optimization problems. Computational Optimization and Applications 28, 5–29 (2004)
47. Graña Drummond, L.M., Maculan, N., Svaiter, B.F.: On the choice of parameters for the weighting method in vector optimization. Mathematical Programming 111, 201–216 (2008)
48. Graña Drummond, L.M., Svaiter, B.F.: A steepest descent method for vector optimization. Journal of Computational and Applied Mathematics 175, 395–414 (2005)
49. Grad, S.M.: Vector Optimization and Monotone Operators via Convex Duality. Vector Optimization. Springer-Verlag, Cham (2015)
50. Grad, S.M., Pop, E.L.: Vector duality for convex vector optimization problems by means of the quasi interior of the ordering cone. Optimization 63, 21–37 (2014)
51. Gregório, R.M., Oliveira, P.R.: A logarithmic-quadratic proximal point scalarization method for multiobjective programming. Journal of Global Optimization 49, 281–291 (2011)
52. Ji, Y., Goh, M., de Souza, R.: Proximal point algorithms for multi-criteria optimization with the difference of convex objective functions. Journal of Optimization Theory and Applications 169, 280–289 (2016)
53. Ji, Y., Qu, S.: Proximal point algorithms for vector DC programming with applications to probabilistic lot sizing with service levels. Discrete Dynamics in Nature and Society, Article ID 5675183 (2017)
54. Kiwiel, K.C.: An aggregate subgradient descent method for solving large convex nonsmooth multiobjective minimization problems. In: A. Straszak (ed.) Large Scale Systems: Theory and Applications 1983, International Federation of Automatic Control Proceedings Series, vol. 10, pp. 283–288. Pergamon Press, Oxford (1984)
55. Kiwiel, K.C.: An algorithm for linearly constrained nonsmooth convex multiobjective minimization. In: A. Sydow, S.M. Thoma, R. Vichnevetsky (eds.) Systems Analysis and Simulation 1985 Part I: Theory and Foundations, pp. 236–238. Akademie-Verlag, Berlin (1985)
56. Kiwiel, K.C.: A descent method for nonsmooth convex multiobjective minimization. Large Scale Systems 8, 119–129 (1985)
57. Luc, D.T.: Theory of Vector Optimization, Lecture Notes in Economics and Mathematical Systems, vol. 319. Springer-Verlag, Berlin (1989)
58. Mäkelä, M.M., Karmitsa, N., Wilppu, O.: Proximal bundle method for nonsmooth and nonconvex multiobjective optimization. In: Mathematical Modeling and Optimization of Complex Structures, Computational Methods in Applied Sciences, vol. 40, pp. 191–204. Springer-Verlag, Cham (2016)
59. Martinet, B.: Régularisation d'inéquations variationnelles par approximations successives. Revue Française d'Informatique et de Recherche Opérationnelle 4, 154–159 (1970)
60. Miettinen, K., Mäkelä, M.M.: An interactive method for nonsmooth multiobjective optimization with an application to optimal control. Optimization Methods and Software 2, 31–44 (1993)
61. Miettinen, K., Mäkelä, M.M.: Interactive bundle-based method for nondifferentiable multiobjective optimization: NIMBUS. Optimization 34, 231–246 (1995)
62. Miglierina, E., Molho, E., Recchioni, M.C.: Box-constrained multi-objective optimization: a gradient-like method without "a priori" scalarization. European Journal of Operational Research 188, 662–682 (2008)
63. Moudafi, A., Oliny, M.: Convergence of a splitting inertial proximal method for monotone operators. Journal of Computational and Applied Mathematics 155, 447–454 (2003)
64. Mukai, H.: Algorithms for multicriterion optimization. IEEE Transactions on Automatic Control 25, 177–186 (1980)
65. Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bulletin of the American Mathematical Society 73, 591–597 (1967)
66. Penot, J.P., Théra, M.: Semi-continuous mappings in general topology. Archiv der Mathematik (Basel) 38, 158–166 (1982)
67. Qu, S., Goh, M., Ji, Y., de Souza, R.: A new algorithm for linearly constrained c-convex vector optimization with a supply chain network risk application. European Journal of Operational Research 247, 359–365 (2015)
68. Qu, S.J., Goh, M., de Souza, R., Wang, T.N.: Proximal point algorithms for convex multi-criteria optimization with applications to supply chain risk management. Journal of Optimization Theory and Applications 163, 949–956 (2014)
69. Quiroz, E.A.P., Apolinário, H.C.F., Villacorta, K.D.V., Oliveira, P.R.: A linear scalarization proximal point method for quasiconvex multiobjective minimization. arXiv:1510.00461 (2015)
70. Rocha, R.A., Gregório, R.M.: Um algoritmo de ponto proximal inexato para programação multiobjetivo. In: Proceeding Series of the Brazilian Society of Applied and Computational Mathematics, vol. 6 (2018)
71. Rocha, R.A., Oliveira, P.R., Gregório, R.M., Souza, M.: Logarithmic quasi-distance proximal point scalarization method for multi-objective programming. Applied Mathematics and Computation 273, 856–867 (2016)
72. Rocha, R.A., Oliveira, P.R., Gregório, R.M., Souza, M.: A proximal point algorithm with quasi-distance in multi-objective optimization. Journal of Optimization Theory and Applications 171, 964–979 (2016)
73. Souza, J.C.O.: Proximal point methods for Lipschitz functions on Hadamard manifolds: scalar and vectorial cases. Journal of Optimization Theory and Applications 179, 745–760 (2018)
74. Tang, F.M., Huang, P.L.: On the convergence rate of a proximal point algorithm for vector function on Hadamard manifolds. Journal of the Operations Research Society of China 5, 405–417 (2017)
75. Villacorta, K.D.V., Oliveira, P.R.: An interior proximal method in vector optimization. European Journal of Operational Research 214, 485–492 (2011)


Acknowledgements

This work was partially supported by FWF (Austrian Science Fund), project M-2045, and by DFG (German Research Foundation), project GR 3367/4-1. The author is grateful to an anonymous reviewer for making him aware of the paper [73] and for carefully reading this survey, and to the editors of this volume for the invitation to the CMO-BIRS Workshop on Splitting Algorithms, Modern Operator Theory, and Applications (17w5030) in Oaxaca.


Appendix: Proof of Theorem 11.17

In the following we provide an example of a convergence proof for a proximal point algorithm for determining weakly efficient solutions to a vector optimization problem. It originates from an earlier version of [19] and incorporates some ideas from the proofs of [24, Theorem 3.1] and [3, Theorem 2.1 and Proposition 2.1]. Before formulating it, we recall the celebrated lemma of Opial (cf. [65]).

Lemma 11.2

Let \((x_n)_n \subseteq X\) be a sequence for which there exists a nonempty set \(S \subseteq X\) such that

(a) \(\lim_{n\rightarrow +\infty} \|x_n - x\|\) exists for every \(x \in S\);

(b) if \(x_{n_j} \rightharpoonup \hat x\) for a subsequence \(n_j \rightarrow +\infty\), then \(\hat x\in S\).

Then there exists an \(\bar x \in S\) such that \(x_k \rightharpoonup \bar x\) when \(k \rightarrow +\infty\).

Theorem 11.17

Let F be C-convex and positively C-lower semicontinuous, and let \(F(X) \cap (F(x_1) - C)\) be C-complete. Then any sequence \((x_n)_n\) generated by Algorithm 17 converges weakly towards a weakly efficient solution to (VP).

Proof

We show first that the algorithm is well-defined. Assuming that we have obtained an \(x_n\), where n ≥ 1, we have to secure the existence of \(x_{n+1}\). Take a \(z^*_n\in C^*\setminus \{0\}\) and without loss of generality assume that \(\|z^*_n\|=1\) for all n ≥ 1. Then \(\langle z^*_n, e_n\rangle > 0\) and the function

$$\displaystyle \begin{aligned}x\mapsto \langle z^*_n, \lambda_n F(x) + \frac{\alpha_n}{2}\|x-x_n-\beta_n(x_n-x_{n-1})\|{}^2 e_n \rangle + \delta_{\varOmega_n} (x)\end{aligned}$$

is lower semicontinuous, being a sum of continuous and lower semicontinuous functions, respectively, and strongly convex, being the sum of convex functions and a positive multiple of a squared norm; it therefore has exactly one minimizer. By Lemma 11.1 this minimizer is a weakly efficient solution to the vector optimization problem in Step 3 of Algorithm 17 and we denote it by \(x_{n+1}\).
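To make this construction concrete, the following is a minimal numerical sketch (under simplifying assumptions, not the chapter's Algorithm 17 itself) of one such scalarized inertial proximal step for a bi-objective convex problem on \(X=\mathbb R^2\) ordered by \(C=\mathbb R^2_+\); the objective F, the encoding of \(\varOmega_n\) as componentwise constraints, and all parameter values are illustrative choices.

```python
# One scalarized inertial proximal step, as in the proof above:
#   min_x  lam * <z, F(x)> + (alpha/2) * <z, e> * ||x - y||^2
#   s.t.   F(x) <= F(x_n) componentwise   (i.e. x lies in Omega_n),
# with the inertial point y = x_n + beta * (x_n - x_{n-1}).
import numpy as np
from scipy.optimize import minimize

def F(x):
    # two smooth convex objectives; their Pareto set is the segment
    # between (1, 1) and (-1, -1)
    return np.array([np.sum((x - 1.0) ** 2), np.sum((x + 1.0) ** 2)])

def prox_step(x_n, x_prev, z=np.array([0.5, 0.5]), e=np.array([1.0, 1.0]),
              lam=1.0, alpha=1.0, beta=0.2):
    y = x_n + beta * (x_n - x_prev)  # inertial extrapolation
    F_xn = F(x_n)
    obj = lambda x: lam * z @ F(x) + 0.5 * alpha * (z @ e) * np.sum((x - y) ** 2)
    # Omega_n = {x : F(x) <= F(x_n) componentwise}, as inequality constraints
    cons = [{"type": "ineq", "fun": lambda x, i=i: F_xn[i] - F(x)[i]}
            for i in range(len(F_xn))]
    return minimize(obj, x_n, constraints=cons).x  # unique by strong convexity

x_prev, x_n = np.array([3.0, -2.0]), np.array([2.5, -1.5])
for _ in range(30):
    x_prev, x_n = x_n, prox_step(x_n, x_prev)
print(x_n)  # approaches a weakly efficient solution of the bi-objective problem
```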

The next step is to show the Fejér monotonicity of the sequence \((x_n)_n\) with respect to the set \(\varOmega = \{x \in X : F(x) \leqq_C F(x_k)\ \forall k \geq 0\}\), which is nonempty because of the C-completeness hypothesis. Let n ≥ 1. The function \(x\mapsto \langle z^*_n, \lambda _n F(x) + ({\alpha _n}/{2})\|x-x_n-\beta _n(x_n-x_{n-1})\|{ }^2 e_n \rangle + \delta _{\varOmega _n} (x)\) attains its only minimum at \(x_{n+1}\), and this fact can be equivalently written as

$$\displaystyle \begin{aligned}0\in \partial \big(\langle z^*_n, \lambda_n F(\cdot) + \frac{\alpha_n}{2}\|\cdot-x_n-\beta_n (x_n-x_{n-1})\|{}^2 e_n \rangle+ \delta_{\varOmega_n} (\cdot)\big)(x_{n+1}).\end{aligned}$$

Using the continuity of the norm, this yields (e.g. via [21, Theorem 3.5.6]) \(0\in \partial \big (\langle z^*_n, \lambda _n F(\cdot ) \rangle + \delta _{\varOmega _n} (\cdot )\big ) (x_{n+1}) + \partial \big (({\alpha _n}/{2})\langle z^*_n, e_n\rangle \|\cdot -x_n-\beta _n (x_n-x_{n-1})\|{ }^2 \big )(x_{n+1}) = \partial \big (\langle z^*_n, \lambda _n F(\cdot ) \rangle + \delta _{\varOmega _n} (\cdot )\big ) (x_{n+1}) + \alpha _n \langle z^*_n, e_n\rangle (x_{n+1}-x_n-\beta _n (x_n-x_{n-1}))\). Then, since \(x_{n+1} \in \varOmega_n\), for any \(x \in \varOmega_n\) it holds

$$\displaystyle \begin{aligned} \lambda_n \langle z^*_n, F(x)- F(x_{n+1})\rangle \geq \alpha_n \langle z^*_n, e_n\rangle \langle x_{n+1}-x_n-\beta_n (x_n-x_{n-1}), x_{n+1}-x\rangle. \end{aligned} $$
(11.2)

Let us take an element \(\tilde x\in \varOmega \). By construction \(\tilde x\in \varOmega _n\), thus (11.2) yields, after taking into consideration that \(F(\tilde x)\leqq _C F(x_{n+1})\), \(\lambda_n > 0\) and \(z^*_n\in C^*\setminus \{0\}\), that \( \alpha _n \langle z^*_n, e_n\rangle \langle x_{n+1}-x_n-\beta _n (x_n-x_{n-1}), \tilde x-x_{n+1}\rangle \geq 0\).
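The passage from this inequality to the Fejér-type estimates below rests on the standard Hilbert space identity, recalled here for convenience and applied after expanding the inner product by linearity:

$$\displaystyle \begin{aligned}\langle x_{n+1}-x_n, x_{n+1}-\tilde x\rangle = \frac 12\big(\|x_{n+1}-\tilde x\|{}^2 - \|x_n-\tilde x\|{}^2 + \|x_{n+1}-x_n\|{}^2\big).\end{aligned}$$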

For each k ≥ 0 denote \(\varphi _k=(1/2)\|x_k-\tilde x\|{ }^2\). The previous inequality, after dividing by the positive number \(\alpha _n \langle z^*_n, e_n\rangle \), can be rewritten as

$$\displaystyle \begin{aligned}\varphi_{n+1}-\varphi_n + \frac 12 \|x_{n+1}-x_n\|{}^2 - \beta_n \langle x_n-x_{n-1}, x_{n+1}-\tilde x\rangle \leq 0, \end{aligned}$$

and, since \(\langle x_n-x_{n-1}, x_{n+1}-\tilde x\rangle = \varphi _n-\varphi _{n-1} + (1/2)\|x_n-x_{n-1}\|{ }^2+ \langle x_n-x_{n-1}, x_{n+1}-x_n\rangle \), it turns into

$$\displaystyle \begin{aligned} \varphi_{n+1}-\varphi_n - \beta_n (\varphi_n-\varphi_{n-1}) \leq \frac{\beta_n}{2} \|x_n-x_{n-1}\|{}^2 + \beta_n \langle x_n-x_{n-1}, x_{n+1}-x_n\rangle -\frac 12 \|x_{n+1}-x_n\|{}^2. \end{aligned} $$
(11.3)

Since the right-hand side of (11.3) is less than or equal to \(((\beta_n - 1)/2)\|x_{n+1} - x_n\|^2 + \beta_n \|x_n - x_{n-1}\|^2\), denoting \(\mu_k = \varphi_k - \beta_k \varphi_{k-1} + \beta_k \|x_k - x_{k-1}\|^2\), k ≥ 1, it follows that

$$\displaystyle \begin{aligned} \mu_{n+1}-\mu_n \leq \frac {3\beta - 1}{2}\|x_{n+1}-x_n\|{}^2 \leq 0, \end{aligned} $$
(11.4)

thus the sequence \((\mu_k)_k\) is nonincreasing, as n ≥ 1 was arbitrarily chosen. Then \(\varphi_n \leq \beta^n \varphi_0 + \mu_1/(1 - \beta)\) (see the note after (11.5)) and one also gets \(\|x_{n+1} - x_n\|^2 \leq (2/(1 - 3\beta))(\mu_n - \mu_{n+1})\). Employing (11.4), one obtains then

$$\displaystyle \begin{aligned}\sum_{k=1}^n \|x_{k+1}-x_k\|{}^2 \leq \frac {2}{1-3\beta}(\mu_1- \mu_{n+1}) \leq \frac {2}{1-3\beta} \left( \beta^{n+1}\varphi_0 + \frac{\mu_1}{1-\beta}\right) < +\infty, \end{aligned}$$

in particular

$$\displaystyle \begin{aligned} \sum_{k=1}^{+\infty} \|x_{k+1}-x_k\|{}^2 \leq \frac{2\mu_1}{(1-\beta)(1-3\beta)} < +\infty. \end{aligned} $$
(11.5)
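For completeness, the estimate \(\varphi_n \leq \beta^n \varphi_0 + \mu_1/(1-\beta)\) invoked above can be obtained by unrolling the definition of \(\mu_k\): since \(\mu_k \geq \varphi_k - \beta_k \varphi_{k-1}\), \((\mu_k)_k\) is nonincreasing, \(\varphi_k \geq 0\) and \(\beta_k \leq \beta\) (with β the common upper bound of the inertial parameters, as implicitly used in (11.4)), one has

$$\displaystyle \begin{aligned}\varphi_n \leq \beta_n \varphi_{n-1} + \mu_n \leq \beta \varphi_{n-1} + \mu_1 \leq \ldots \leq \beta^n \varphi_0 + \mu_1 \sum_{k=0}^{n-1}\beta^k \leq \beta^n \varphi_0 + \frac{\mu_1}{1-\beta}.\end{aligned}$$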

The right-hand side of (11.3) can be rewritten as \((1/2)\big(\beta_n(\beta_n + 1)\|x_n - x_{n-1}\|^2 - \|x_{n+1} - x_n - \beta_n(x_n - x_{n-1})\|^2\big)\). Denoting \(\tau_{k+1} = x_{k+1} - x_k - \beta_k(x_k - x_{k-1})\), \(\theta_k = \varphi_k - \varphi_{k-1}\) and \(\delta_k = \beta_k \|x_k - x_{k-1}\|^2\) for k ≥ 0 and taking into consideration that \(\beta_n \in [0, 1/3)\), (11.3) yields

$$\displaystyle \begin{aligned} \theta_{n+1}-\beta_n\theta_n \leq \delta_n-\frac 12 \|\tau_{n+1}\|{}^2. \end{aligned} $$
(11.6)

Then \([\theta_{n+1}]_+ \leq (1/3)[\theta_n]_+ + \delta_n\), followed by \([\theta _{n+1}]_+ \leq (1/3^n) [\theta _1]_+ + \sum _{k=0}^{n-1}\delta _{n-k}/3^k\). Hence \(\sum _{k=0}^{+\infty }[\theta _{k+1}]_+ \leq 3/2 ( [\theta _1]_+ + \sum _{k=0}^{+\infty }\delta _k)\) and, as the right-hand side of this inequality is finite due to (11.5), so is \(\sum _{k=1}^{+\infty }[\theta _k]_+\), too. This yields that the sequence \((w_k)_k\) defined as \(w_k=\varphi _k - \sum _{j=1}^k [\theta _j]_+\), k ≥ 0, is bounded. Moreover, \(w_{k+1}-w_k = \varphi _{k+1}-\varphi _k - [\varphi _{k+1}-\varphi _k]_+ = \varphi _{k+1}-\varphi _k + \min \{0, \varphi _k - \varphi _{k+1}\} \leq 0\) for all k ≥ 1, thus \((w_k)_k\) is convergent. Consequently, \(\lim _{k \rightarrow +\infty } \varphi _k = \lim _{k \rightarrow +\infty } w_k + \sum _{j=1}^{+\infty }[\theta _j]_+\), therefore \((\varphi_k)_k\) is convergent. Finally, \((\|x_k-\tilde x\|{ }^2)_k\) is convergent, too, i.e. (a) in Lemma 11.2 with S = Ω is fulfilled.

We show now that \((x_k)_k\) is weakly convergent. The convergence of \((\varphi_k)_k\) implies that \((x_k)_k\) is bounded, so it has weak cluster points. Let \(\hat x\in X\) be one of them and \((x_{k_j})_j\) the subsequence that converges weakly towards it. Then, as F is positively C-lower semicontinuous and C-convex, it follows that for any \(z^* \in C^*\) the function \(\langle z^*, F(\cdot)\rangle\) is lower semicontinuous and convex, thus

$$\displaystyle \begin{aligned} \langle z^*, F(\hat x)\rangle \leq \lim_{j\rightarrow +\infty}\langle z^*, F(x_{k_j})\rangle = \inf_{k\geq 0} \langle z^*, F(x_k)\rangle, \end{aligned} $$
(11.7)

with the last equality following from the fact that the sequence \((F(x_k))_k\) is by construction nonincreasing. Assuming that there exists a k ≥ 0 such that \(F(\hat x)\nleqq _C F(x_k)\), there exists a \(\tilde z\in C^*\setminus \{0\}\) such that \(\langle \tilde z, F(\hat x) - F(x_k)\rangle > 0\), which contradicts (11.7), consequently \(F(\hat x)\leqq _C F(x_k)\) for all k ≥ 0, i.e. \(\hat x\in \varOmega \), therefore one can employ Lemma 11.2 with S = Ω since its hypothesis (b) is fulfilled as well. This guarantees then the weak convergence of \((x_k)_k\) to a point \(\bar x\in \varOmega \).

The last step is to show that \(\bar x \in \mathcal {W}\mathcal {E}(VP)\). Assuming that \(\bar x\notin \mathcal {W}\mathcal {E} (VP)\), there exists an \(x'\in X\) such that \(F(x')< _C F(\bar x)\). This yields \(x'\in \varOmega\). As \(\|z^*_k\|=1\) for all k ≥ 0, the sequence \((z_k^*)_k\) has a weak cluster point, say \(\bar z^*\), that is the limit of a subsequence \((z^*_{k_j})_j\). Because \(z^*_k\in C^*\) for all k ≥ 0 and \(C^*\) is weakly closed, it follows that \(\bar z^*\in C^*\). Moreover, \(\bar z^*\neq 0\), since it can be shown via [23, Lemma 2.2] that \(\langle \bar z^*, c\rangle > 0\) for any \(c\in \operatorname *{\mathrm {int}} C\). Consequently, \(\langle \bar z^*, F(x') - F(\bar x)\rangle < 0\). For any j ≥ 0 it holds by (11.2)

$$\displaystyle \begin{aligned} \lambda_{k_j}\langle z^*_{k_j}, F(x') - F(x_{k_j+1})\rangle &\geq -\alpha_{k_j}\langle z^*_{k_j}, e_{k_j}\rangle \langle x_{k_j+1}- x_{k_j} - \beta_{k_j}(x_{k_j}-x_{k_j-1}), x' - x_{k_j+1}\rangle\\ &\geq - \alpha_{k_j}\langle z^*_{k_j}, e_{k_j}\rangle \|x' - x_{k_j+1}\| \big(\|x_{k_j+1}- x_{k_j}\| + \beta_{k_j}\|x_{k_j}-x_{k_j-1}\|\big). \end{aligned} $$
(11.8)

Because of (11.5), \((\|x_k - x_{k-1}\|)_k\) converges towards 0 as \(k \rightarrow +\infty\), therefore so does the last expression in the inequality chain (11.8) as \(j \rightarrow +\infty\) as well. Letting j converge towards \(+\infty\), (11.8) yields \(\langle \bar z^*, F(x') - F(\bar x)\rangle \geq 0\), contradicting the inequality obtained above. Consequently, \(\bar x \in \mathcal {W}\mathcal {E}(VP)\). □

Remark 11.43

In order to guarantee the lower semicontinuity of the functions \(\delta _{\varOmega _n}\), n ≥ 1, it is enough to have the vector function F only C-level closed (i.e. the set \(\{x \in X : F(x) \leqq_C y\}\) is closed for any \(y \in Y\)), a hypothesis weaker than the positive C-lower semicontinuity imposed on F in Theorem 11.17 and Theorem 11.18. However, the latter is also necessary in the proofs of these statements in order to guarantee the lower semicontinuity of the functions \((z^*_nF)\), n ≥ 1.


Copyright information

© 2019 Springer Nature Switzerland AG


Cite this chapter

Grad, SM. (2019). A Survey on Proximal Point Type Algorithms for Solving Vector Optimization Problems. In: Bauschke, H., Burachik, R., Luke, D. (eds) Splitting Algorithms, Modern Operator Theory, and Applications. Springer, Cham. https://doi.org/10.1007/978-3-030-25939-6_11
