Abstract
In this survey we present the existing generalizations of the proximal point method from scalar to vector optimization problems, discuss some of their respective advantages and drawbacks, point out some open challenges, and sketch possible directions for future research.
Dedicated to the memory of J.M. Borwein
References
Aliprantis, C., Florenzano, M., da Rocha, V.M., Tourky, R.: Equilibrium analysis in financial markets with countably many securities. Journal of Mathematical Economics 40, 683–699 (2004)
Alvarez, F.: On the minimizing property of a second order dissipative system in Hilbert spaces. SIAM Journal on Control and Optimization 38, 1102–1119 (2000)
Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Analysis 9, 3–11 (2001)
Apolinário, H., Quiroz, E.P., Oliveira, P.: A scalarization proximal point method for quasiconvex multiobjective minimization. Journal of Global Optimization 64, 79–96 (2016)
Attouch, H., Garrigos, G.: Multiobjective optimization - an inertial dynamical approach to Pareto optima. arXiv 1506.02823 (2015)
Attouch, H., Garrigos, G., Goudou, X.: A dynamic gradient approach to Pareto optimization with nonsmooth convex objective functions. Journal of Mathematical Analysis and Applications 422, 741–771 (2015)
Attouch, H., Goudou, X.: A continuous gradient-like dynamical approach to Pareto-optimization in Hilbert spaces. Set-Valued and Variational Analysis 22, 189–219 (2014)
Auslender, A., Teboulle, M.: Interior gradient and proximal methods for convex and conic optimization. SIAM Journal on Optimization 16, 697–725 (2006)
Bauschke, H., Combettes, P.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics / Ouvrages de mathématiques de la SMC. Springer-Verlag, New York (2011)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2, 183–202 (2009)
Benker, H., Hamel, A.H., Tammer, C.: An algorithm for vectorial control approximation problems. In: Multiple Criteria Decision Making (Hagen, 1995), Lecture Notes in Economics and Mathematical Systems, vol. 448, pp. 3–12. Springer-Verlag, Berlin (1997)
Bento, G.C., da Cruz Neto, J.X., López, G., Soubeyran, A., Souza, J.C.O.: The proximal point method for locally Lipschitz functions in multiobjective optimization with application to the compromise problem. SIAM Journal on Optimization 28, 1104–1120 (2018)
Bento, G.C., da Cruz Neto, J.X., de Meireles, L.V.: Proximal point method for locally Lipschitz functions in multiobjective optimization of Hadamard manifolds. Journal of Optimization Theory and Applications 179, 37–52 (2018)
Bento, G.C., da Cruz Neto, J.X., Soubeyran, A.: A proximal point-type method for multicriteria optimization. Set-Valued and Variational Analysis 22, 557–573 (2014)
Bento, G.C., Ferreira, O.P., Junior, V.L.S.: Proximal point method for a special class of nonconvex multiobjective optimization functions. Optimization Letters 12, 311–320 (2018)
Bento, G.C., Ferreira, O.P., Pereira, Y.R.L.: Proximal point method for vector optimization on Hadamard manifolds. Operations Research Letters 46, 13–18 (2018)
Bento, G.C., Ferreira, O.P., Soubeyran, A., de Sousa Júnior, V.L.: Inexact multi-objective local search proximal algorithms: application to group dynamic and distributive justice problems. Journal of Optimization Theory and Applications 177, 181–200 (2018)
Boţ, R.I., Csetnek, E.R., Heinrich, A.: A primal-dual splitting algorithm for finding zeros of sums of maximal monotone operators. SIAM Journal on Optimization 23, 2011–2036 (2013)
Boţ, R.I., Grad, S.M.: Inertial forward-backward methods for solving vector optimization problems. Optimization 67, 959–974 (2018)
Boţ, R.I., Hendrich, C.: A variable smoothing algorithm for solving convex optimization problems. TOP 23(1), 124–150 (2015)
Boţ, R.I., Grad, S.M., Wanka, G.: Duality in Vector Optimization. Vector Optimization. Springer-Verlag, Berlin (2009)
Boţ, R.I., Hendrich, C.: A Douglas-Rachford type primal-dual method for solving inclusions with mixtures of composite and parallel-sum type monotone operators. SIAM Journal on Optimization 23, 2541–2565 (2013)
Bolintineanu, Ş.: Approximate efficiency and scalar stationarity in unbounded nonsmooth convex vector optimization problems. Journal of Optimization Theory and Applications 106, 265–296 (2000)
Bonnel, H., Iusem, A.N., Svaiter, B.F.: Proximal methods in vector optimization. SIAM Journal on Optimization 15, 953–970 (2005)
Borwein, J.M.: Proper efficient points for maximizations with respect to cones. SIAM Journal on Control and Optimization 15, 57–63 (1977)
Borwein, J.M.: The geometry of Pareto efficiency over cones. Mathematische Operationsforschung und Statistik Series Optimization 11, 235–248 (1980)
Buong, N.: Inertial proximal point regularization algorithm for unconstrained vector convex optimization problems. Ukrainian Mathematical Journal 60, 1483–1491 (2008)
Ceng, L.C., Mordukhovich, B.S., Yao, J.C.: Hybrid approximate proximal method with auxiliary variational inequality for vector optimization. Journal of Optimization Theory and Applications 146, 267–303 (2010)
Ceng, L.C., Yao, J.C.: Approximate proximal methods in vector optimization. European Journal of Operational Research 183, 1–19 (2007)
Chen, Z.: Generalized viscosity approximation methods in multiobjective optimization problems. Computational Optimization and Applications 49, 179–192 (2011)
Chen, Z.: Asymptotic analysis in convex composite multiobjective optimization problems. Journal of Global Optimization 55, 507–520 (2013)
Chen, Z., Huang, H., Zhao, K.: Approximate generalized proximal-type method for convex vector optimization problem in Banach spaces. Computers & Mathematics with Applications 57, 1196–1203 (2009)
Chen, Z., Huang, X.X., Yang, X.Q.: Generalized proximal point algorithms for multiobjective optimization problems. Applicable Analysis 90, 935–949 (2011)
Chen, Z., Xiang, C., Zhao, K., Liu, X.: Convergence analysis of Tikhonov-type regularization algorithms for multiobjective optimization problems. Applied Mathematics and Computation 211, 167–172 (2009)
Chen, Z., Zhao, K.: A proximal-type method for convex vector optimization problem in Banach spaces. Numerical Functional Analysis and Optimization 30, 70–81 (2009)
Chuong, T.D.: Tikhonov-type regularization method for efficient solutions in vector optimization. Journal of Computational and Applied Mathematics 234, 761–766 (2010)
Chuong, T.D.: Generalized proximal method for efficient solutions in vector optimization. Numerical Functional Analysis and Optimization 32, 843–857 (2011)
Chuong, T.D., Mordukhovich, B.S., Yao, J.C.: Hybrid approximate proximal algorithms for efficient solutions in vector optimization. Journal of Nonlinear and Convex Analysis 12, 257–286 (2011)
Chuong, T.D., Yao, J.C.: Viscosity-type approximation method for efficient solutions in vector optimization. Taiwanese Journal of Mathematics 14, 2329–2342 (2010)
Cruz, J.Y.B.: A subgradient method for vector optimization problems. SIAM Journal on Optimization 23, 2169–2182 (2013)
Durea, M., Strugariu, R.: Some remarks on proximal point algorithm in scalar and vectorial cases. Nonlinear Functional Analysis and Applications 15, 307–319 (2010)
Fliege, J., Graña Drummond, L.M., Svaiter, B.F.: Newton’s method for multiobjective optimization. SIAM Journal on Optimization 20, 602–626 (2009)
Gerstewitz, C.: Nichtkonvexe Dualität in der Vektoroptimierung. Wissenschaftliche Zeitschrift der Technischen Hochschule Carl Schorlemmer Leuna-Merseburg 25, 357–364 (1983)
Gong, X.H.: Optimality conditions for Henig and globally proper efficient solutions with ordering cone has empty interior. Journal of Mathematical Analysis and Applications 307, 12–31 (2005)
Göpfert, A., Riahi, H., Tammer, C., Zălinescu, C.: Variational Methods in Partially Ordered Spaces. CMS Books in Mathematics / Ouvrages de mathématiques de la SMC. Springer-Verlag, New York (2003)
Graña Drummond, L.M., Iusem, A.N.: A projected gradient method for vector optimization problems. Computational Optimization and Applications 28, 5–29 (2004)
Graña Drummond, L.M., Maculan, N., Svaiter, B.F.: On the choice of parameters for the weighting method in vector optimization. Mathematical Programming 111, 201–216 (2008)
Graña Drummond, L.M., Svaiter, B.F.: A steepest descent method for vector optimization. Journal of Computational and Applied Mathematics 175, 395–414 (2005)
Grad, S.M.: Vector Optimization and Monotone Operators via Convex Duality. Vector Optimization. Springer-Verlag, Cham (2015)
Grad, S.M., Pop, E.L.: Vector duality for convex vector optimization problems by means of the quasi interior of the ordering cone. Optimization 63, 21–37 (2014)
Gregório, R.M., Oliveira, P.R.: A logarithmic-quadratic proximal point scalarization method for multiobjective programming. Journal of Global Optimization 49, 281–291 (2011)
Ji, Y., Goh, M., de Souza, R.: Proximal point algorithms for multi-criteria optimization with the difference of convex objective functions. Journal of Optimization Theory and Applications 169, 280–289 (2016)
Ji, Y., Qu, S.: Proximal point algorithms for vector DC programming with applications to probabilistic lot sizing with service levels. Discrete Dynamics in Nature and Society, Article ID 5675183 (2017)
Kiwiel, K.C.: An aggregate subgradient descent method for solving large convex nonsmooth multiobjective minimization problems. In: A. Straszak (ed.) Large Scale Systems: Theory and Applications 1983, International Federation of Automatic Control Proceedings Series, vol. 10, pp. 283–288. Pergamon Press, Oxford (1984)
Kiwiel, K.C.: An algorithm for linearly constrained nonsmooth convex multiobjective minimization. In: A. Sydow, S.M. Thoma, R. Vichnevetsky (eds.) Systems Analysis and Simulation 1985 Part I: Theory and Foundations, pp. 236–238. Akademie-Verlag, Berlin (1985)
Kiwiel, K.C.: A descent method for nonsmooth convex multiobjective minimization. Large Scale Systems 8, 119–129 (1985)
Luc, D.T.: Theory of Vector Optimization, Lecture Notes in Economics and Mathematical Systems, vol. 319. Springer-Verlag, Berlin (1989)
Mäkelä, M.M., Karmitsa, N., Wilppu, O.: Proximal bundle method for nonsmooth and nonconvex multiobjective optimization. In: Mathematical Modeling and Optimization of Complex Structures, Computational Methods in Applied Sciences, vol. 40, pp. 191–204. Springer-Verlag, Cham (2016)
Martinet, B.: Régularisation d’inéquations variationnelles par approximations successives. Revue Française d’Informatique et de Recherche Opérationnelle 4, 154–159 (1970)
Miettinen, K., Mäkelä, M.M.: An interactive method for nonsmooth multiobjective optimization with an application to optimal control. Optimization Methods and Software 2, 31–44 (1993)
Miettinen, K., Mäkelä, M.M.: Interactive bundle-based method for nondifferentiable multiobjective optimization: NIMBUS. Optimization 34, 231–246 (1995)
Miglierina, E., Molho, E., Recchioni, M.C.: Box-constrained multi-objective optimization: a gradient-like method without “a priori” scalarization. European Journal of Operational Research 188, 662–682 (2008)
Moudafi, A., Oliny, M.: Convergence of a splitting inertial proximal method for monotone operators. Journal of Computational and Applied Mathematics 155, 447–454 (2003)
Mukai, H.: Algorithms for multicriterion optimization. IEEE Transactions on Automatic Control 25, 177–186 (1980)
Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bulletin of the American Mathematical Society 73, 591–597 (1967)
Penot, J.P., Théra, M.: Semi-continuous mappings in general topology. Archiv der Mathematik (Basel) 38, 158–166 (1982)
Qu, S., Goh, M., Ji, Y., de Souza, R.: A new algorithm for linearly constrained c-convex vector optimization with a supply chain network risk application. European Journal of Operational Research 247, 359–365 (2015)
Qu, S.J., Goh, M., de Souza, R., Wang, T.N.: Proximal point algorithms for convex multi-criteria optimization with applications to supply chain risk management. Journal of Optimization Theory and Applications 163, 949–956 (2014)
Quiroz, E.A.P., Apolinário, H.C.F., Villacorta, K.D.V., Oliveira, P.R.: A linear scalarization proximal point method for quasiconvex multiobjective minimization. arXiv 1510.00461 (2015)
Rocha, R.A., Gregório, R.M.: Um algoritmo de ponto proximal inexato para programação multiobjetivo. In: Proceeding Series of the Brazilian Society of Applied and Computational Mathematics, vol. 6 (2018)
Rocha, R.A., Oliveira, P.R., Gregório, R.M., Souza, M.: Logarithmic quasi-distance proximal point scalarization method for multi-objective programming. Applied Mathematics and Computation 273, 856–867 (2016)
Rocha, R.A., Oliveira, P.R., Gregório, R.M., Souza, M.: A proximal point algorithm with quasi-distance in multi-objective optimization. Journal of Optimization Theory and Applications 171, 964–979 (2016)
Souza, J.C.O.: Proximal point methods for Lipschitz functions on Hadamard manifolds: scalar and vectorial cases. Journal of Optimization Theory and Applications 179, 745–760 (2018)
Tang, F.M., Huang, P.L.: On the convergence rate of a proximal point algorithm for vector function on Hadamard manifolds. Journal of the Operations Research Society of China 5, 405–417 (2017)
Villacorta, K.D.V., Oliveira, P.R.: An interior proximal method in vector optimization. European Journal of Operational Research 214, 485–492 (2011)
Acknowledgements
This work was partially supported by the FWF (Austrian Science Fund), project M-2045, and by the DFG (German Research Foundation), project GR 3367/4-1. The author is grateful to an anonymous reviewer for pointing out the paper [73] and for carefully reading this survey, and to the editors of this volume for the invitation to the CMO-BIRS Workshop on Splitting Algorithms, Modern Operator Theory, and Applications (17w5030) in Oaxaca.
Appendix: Proof of Theorem 11.17
In the following we provide an example of a convergence proof for a proximal point algorithm for determining weakly efficient solutions to a vector optimization problem. It originates from an earlier version of [19] and incorporates ideas from the proofs of [24, Theorem 3.1] and [3, Theorem 2.1 and Proposition 2.1]. Before formulating it, we recall the celebrated lemma of Opial (cf. [65]).
Lemma 11.2
Let \((x_n)_n \subseteq X\) be a sequence for which there exists a nonempty set S ⊆ X such that

(a) \(\lim_{n\to +\infty}\|x_n - x\|\) exists for every x ∈ S;

(b) if \(x_{n_j} \rightharpoonup \hat x\) for a subsequence n j → +∞, then \(\hat x\in S\).

Then there exists an \(\bar x \in S\) such that \(x_n \rightharpoonup \bar x\) as n → +∞.
Theorem 11.17
Let F be C-convex and positively C-lower semicontinuous and \(F(X) \cap (F(x_1) - C)\) be C-complete. Then any sequence \((x_n)_n\) generated by Algorithm 17 converges weakly towards a weakly efficient solution to (VP).
Proof
We show first that the algorithm is well-defined. Assuming that we have obtained an \(x_n\), where n ≥ 1, we have to secure the existence of \(x_{n+1}\). Take a \(z^*_n\in C^*\setminus \{0\}\) and assume without loss of generality that \(\|z^*_n\|=1\) for all n ≥ 1. Then \(\langle z^*_n, e_n\rangle > 0\) and the function
\[x\mapsto \Big\langle z^*_n, \lambda_n F(x) + \frac{\alpha_n}{2}\|x-x_n-\beta_n(x_n-x_{n-1})\|^2 e_n \Big\rangle + \delta_{\varOmega_n}(x)\]
is lower semicontinuous, being a sum of continuous and lower semicontinuous functions, and strongly convex, as the sum of convex functions and a squared norm, and thus has exactly one minimizer. By Lemma 11.1 this minimizer is a weakly efficient solution to the vector optimization problem in Step 3 of Algorithm 17, and we denote it by \(x_{n+1}\).
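To make this strongly convex scalarized subproblem concrete, the sketch below instantiates it in the simplest possible setting X = ℝ, Y = ℝ², C = ℝ²₊, with \(F(x) = ((x-a)^2, (x-b)^2)\), \(e_n = (1,1)\), \(z^*_n = (1/2, 1/2)\) and constant parameters λ_n = α_n = 1, β_n = 1/4; all of these are illustrative assumptions, not part of the chapter's general Hilbert-space framework. In one dimension Ω_n is an interval, so the constrained minimizer of the strongly convex quadratic is the unconstrained one clamped to that interval:

```python
def inertial_prox_step(x_n, x_prev, a=0.0, b=1.0, lam=1.0, alpha=1.0, beta=0.25):
    # One step of the scalarized inertial proximal iteration for
    # F(x) = ((x-a)^2, (x-b)^2), z* = (1/2, 1/2), e = (1, 1), so <z*, e> = 1:
    # minimize lam*((x-a)^2 + (x-b)^2)/2 + (alpha/2)*(x - y)**2 over Omega_n,
    # where y is the inertial extrapolation point.
    y = x_n + beta * (x_n - x_prev)
    # Unconstrained minimizer (set the derivative to zero):
    # lam*(2x - a - b) + alpha*(x - y) = 0.
    x_free = (lam * (a + b) + alpha * y) / (2.0 * lam + alpha)
    # Omega_n = {x : (x-a)^2 <= (x_n-a)^2, (x-b)^2 <= (x_n-b)^2}:
    # intersection of two intervals centred at a and b; it contains x_n.
    lo = max(a - abs(x_n - a), b - abs(x_n - b))
    hi = min(a + abs(x_n - a), b + abs(x_n - b))
    # 1-D strongly convex objective: the constrained minimizer is the clamp.
    return min(max(x_free, lo), hi)

def run(x0=3.0, iters=30):
    # Iterate from x_0 = x_1 = x0.
    x_prev, x = x0, x0
    for _ in range(iters):
        x_prev, x = x, inertial_prox_step(x, x_prev)
    return x
```

For a = 0, b = 1 the iterates started at x_0 = x_1 = 3 settle at a weakly efficient point of this bi-objective problem (the Pareto set here is the interval [0, 1]).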
The next step is to show the Fejér monotonicity of the sequence \((x_n)_n\) with respect to the set \(\varOmega = \{x \in X : F(x) \leqq_C F(x_k)\ \forall k \geq 0\}\), which is nonempty because of the C-completeness hypothesis. Let n ≥ 1. The function \(x\mapsto \langle z^*_n, \lambda _n F(x) + ({\alpha _n}/{2})\|x-x_n-\beta _n(x_n-x_{n-1})\|^2 e_n \rangle + \delta _{\varOmega _n} (x)\) attains its only minimum at \(x_{n+1}\), a fact that can be equivalently written as
\[0\in \partial \Big(\big\langle z^*_n, \lambda_n F(\cdot) + \frac{\alpha_n}{2}\|\cdot-x_n-\beta_n(x_n-x_{n-1})\|^2 e_n\big\rangle + \delta_{\varOmega_n}\Big)(x_{n+1}).\]
Using the continuity of the norm, this yields (e.g. via [21, Theorem 3.5.6]) \(0\in \partial \big (\langle z^*_n, \lambda _n F(\cdot ) \rangle + \delta _{\varOmega _n} (\cdot )\big ) (x_{n+1}) + \partial \big (({\alpha _n}/{2})\langle z^*_n, e_n\rangle \|\cdot -x_n-\beta _n (x_n-x_{n-1})\|^2 \big )(x_{n+1}) = \partial \big (\langle z^*_n, \lambda _n F(\cdot ) \rangle + \delta _{\varOmega _n} (\cdot )\big ) (x_{n+1}) + \alpha _n \langle z^*_n, e_n\rangle (x_{n+1}-x_n-\beta _n (x_n-x_{n-1}))\). Then, since \(x_{n+1} \in \varOmega_n\), for any \(x \in \varOmega_n\) it holds
\[\langle z^*_n, \lambda_n(F(x)-F(x_{n+1}))\rangle + \alpha_n \langle z^*_n, e_n\rangle \langle x_{n+1}-x_n-\beta_n(x_n-x_{n-1}), x-x_{n+1}\rangle \geq 0. \tag{11.2}\]
Let us take an element \(\tilde x\in \varOmega \). By construction \(\tilde x\in \varOmega _n\), thus (11.2) yields, after taking into consideration that \(F(\tilde x)\leqq _C F(x_{n+1})\), \(\lambda_n > 0\) and \(z^*_n\in C^*\setminus \{0\}\), that \( \alpha _n \langle z^*_n, e_n\rangle \langle x_{n+1}-x_n-\beta _n (x_n-x_{n-1}), \tilde x-x_{n+1}\rangle \geq 0\).
For each k ≥ 0 denote \(\varphi _k=(1/2)\|x_k-\tilde x\|^2\). The previous inequality, after dividing by the positive number \(\alpha _n \langle z^*_n, e_n\rangle \), can be rewritten as
\[\varphi_{n+1}-\varphi_n+\frac{1}{2}\|x_{n+1}-x_n\|^2 \leq \beta_n \langle x_n-x_{n-1}, x_{n+1}-\tilde x\rangle\]
and, since \(\langle x_n-x_{n-1}, x_{n+1}-\tilde x\rangle = \varphi _n-\varphi _{n-1} + (1/2)\|x_n-x_{n-1}\|^2+ \langle x_n-x_{n-1}, x_{n+1}-x_n\rangle \), it turns into
\[\varphi_{n+1}-\varphi_n-\beta_n(\varphi_n-\varphi_{n-1}) \leq \beta_n\Big(\frac{1}{2}\|x_n-x_{n-1}\|^2+\langle x_n-x_{n-1}, x_{n+1}-x_n\rangle\Big)-\frac{1}{2}\|x_{n+1}-x_n\|^2. \tag{11.3}\]
Since the right-hand side of (11.3) is less than or equal to \(((\beta_n-1)/2)\|x_{n+1}-x_n\|^2 + \beta_n\|x_n-x_{n-1}\|^2\), denoting \(\mu_k = \varphi_k - \beta_k\varphi_{k-1} + \beta_k\|x_k-x_{k-1}\|^2\), k ≥ 1, it follows that
\[\mu_{n+1}-\mu_n \leq -\frac{1-3\beta}{2}\|x_{n+1}-x_n\|^2 \leq 0, \tag{11.4}\]
thus the sequence \((\mu_k)_k\) is nonincreasing, as n ≥ 1 was arbitrarily chosen. Then \(\varphi_n \leq \beta^n \varphi_0 + \mu_1/(1-\beta)\) and one also gets \(\|x_{n+1}-x_n\|^2 \leq (2/(1-3\beta))(\mu_n - \mu_{n+1})\). Employing (11.4), one obtains then
\[\sum_{k=1}^{n}\|x_{k+1}-x_k\|^2 \leq \frac{2}{1-3\beta}(\mu_1-\mu_{n+1}) \leq \frac{2}{1-3\beta}(\mu_1+\beta\varphi_n),\]
in particular
\[\sum_{k=1}^{+\infty}\|x_{k+1}-x_k\|^2 < +\infty. \tag{11.5}\]
The right-hand side of (11.3) can be rewritten as \((1/2)\big(\beta_n(\beta_n+1)\|x_n-x_{n-1}\|^2 - \|x_{n+1}-x_n-\beta_n(x_n-x_{n-1})\|^2\big)\). Denoting \(\tau_{k+1} = x_{k+1}-x_k-\beta_k(x_k-x_{k-1})\), \(\theta_k = \varphi_k - \varphi_{k-1}\) and \(\delta_k = \beta_k\|x_k-x_{k-1}\|^2\) for k ≥ 1 and taking into consideration that \(\beta_n \in [0, 1/3)\), (11.3) yields
\[\theta_{n+1}-\beta_n\theta_n+\frac{1}{2}\|\tau_{n+1}\|^2 \leq \frac{\beta_n(\beta_n+1)}{2}\|x_n-x_{n-1}\|^2 \leq \delta_n. \tag{11.6}\]
Then \([\theta_{n+1}]_+ \leq (1/3)[\theta_n]_+ + \delta_n\), followed by \([\theta _{n+1}]_+ \leq (1/3^n) [\theta _1]_+ + \sum _{k=0}^{n-1}\delta _{n-k}/3^k\). Hence \(\sum _{k=0}^{+\infty }[\theta _{k+1}]_+ \leq (3/2) \big( [\theta _1]_+ + \sum _{k=1}^{+\infty }\delta _k\big)\) and, as the right-hand side of this inequality is finite due to (11.5), so is \(\sum _{k=1}^{+\infty }[\theta _k]_+\), too. This yields that the sequence \((w_k)_k\) defined by \(w_k=\varphi _k - \sum _{j=1}^k [\theta _j]_+\), k ≥ 0, is bounded. Moreover, \(w_{k+1}-w_k = \varphi _{k+1}-\varphi _k - [\varphi _{k+1}-\varphi _k]_+ = \varphi _{k+1}-\varphi _k + \min \{0, \varphi _k - \varphi _{k+1}\} \leq 0\) for all k ≥ 1, thus \((w_k)_k\) is convergent. Consequently, \(\lim _{k \rightarrow +\infty } \varphi _k = \lim _{k \rightarrow +\infty } w_k + \sum _{j=1}^{+\infty }[\theta _{j}]_+\), therefore \((\varphi_k)_k\) is convergent. Finally, \((\|x_k-\tilde x\|^2)_k\) is convergent, too, i.e. (a) in Lemma 11.2 with S = Ω is fulfilled.
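The Fejér-type quantities above can be monitored numerically on a toy instance. In the sketch below all modeling choices — X = ℝ, \(F(x) = (x^2, (x-1)^2)\), \(z^* = (1/2, 1/2)\), \(e = (1,1)\), λ_n = α_n = 1, β_n = 1/4 < 1/3 — are illustrative assumptions; the code runs the scalarized inertial iteration, takes \(\tilde x\) as the limit point (which lies in Ω here because \((F(x_k))_k\) is nonincreasing along the iterates) and checks that \((\mu_k)_k\) is nonincreasing and the steps vanish, in line with (11.4) and (11.5):

```python
def step(x_n, x_prev, a=0.0, b=1.0, lam=1.0, alpha=1.0, beta=0.25):
    # One scalarized inertial proximal step for F(x) = ((x-a)^2, (x-b)^2),
    # z* = (1/2, 1/2), e = (1, 1), minimized over the interval
    # Omega_n = {x : (x-a)^2 <= (x_n-a)^2, (x-b)^2 <= (x_n-b)^2}.
    y = x_n + beta * (x_n - x_prev)                           # inertial extrapolation
    x_free = (lam * (a + b) + alpha * y) / (2.0 * lam + alpha)  # unconstrained minimizer
    lo = max(a - abs(x_n - a), b - abs(x_n - b))
    hi = min(a + abs(x_n - a), b + abs(x_n - b))
    return min(max(x_free, lo), hi)                           # clamp = constrained minimizer

beta = 0.25
xs = [3.0, 3.0]                                               # x_0 = x_1
for _ in range(30):
    xs.append(step(xs[-1], xs[-2], beta=beta))

x_tilde = xs[-1]   # limit point; lies in Omega since (F(x_k))_k is nonincreasing
phi = [0.5 * (x - x_tilde) ** 2 for x in xs]
# mu_k = phi_k - beta_k*phi_{k-1} + beta_k*||x_k - x_{k-1}||^2, k >= 1
mu = [phi[k] - beta * phi[k - 1] + beta * (xs[k] - xs[k - 1]) ** 2
      for k in range(1, len(xs))]

assert all(mu[i + 1] <= mu[i] + 1e-12 for i in range(len(mu) - 1))  # (mu_k) nonincreasing, cf. (11.4)
assert abs(xs[-1] - xs[-2]) < 1e-12                                 # steps vanish, cf. (11.5)
```

The two assertions are exactly the finite-dimensional shadow of the estimates derived above; the general proof, of course, works in an arbitrary Hilbert space.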
We show now that \((x_k)_k\) is weakly convergent. The convergence of \((\varphi_k)_k\) implies that \((x_k)_k\) is bounded, so it has weak cluster points. Let \(\hat x\in X\) be one of them and \((x_{k_j})_j\) a subsequence converging weakly towards it. Then, as F is positively C-lower semicontinuous and C-convex, for any \(z^* \in C^*\) the function \(\langle z^*, F(\cdot)\rangle\) is lower semicontinuous and convex, thus
\[\langle z^*, F(\hat x)\rangle \leq \liminf_{j\to+\infty}\langle z^*, F(x_{k_j})\rangle = \inf_{k\geq 0}\langle z^*, F(x_k)\rangle \leq \langle z^*, F(x_k)\rangle \quad \forall k \geq 0, \tag{11.7}\]
with the equality following from the fact that the sequence \((F(x_k))_k\) is by construction nonincreasing. Assuming that there exists a k ≥ 0 such that \(F(\hat x)\nleqq _C F(x_k)\), there exists a \(\tilde z\in C^*\setminus \{0\}\) such that \(\langle \tilde z, F(\hat x) - F(x_k)\rangle > 0\), which contradicts (11.7); consequently \(F(\hat x)\leqq _C F(x_k)\) for all k ≥ 0, i.e. \(\hat x\in \varOmega \), so one can employ Lemma 11.2 with S = Ω, since its hypothesis (b) is fulfilled as well. This guarantees the weak convergence of \((x_k)_k\) to a point \(\bar x\in \varOmega \).
The last step is to show that \(\bar x \in \mathcal {W}\mathcal {E}(VP)\). Assuming that \(\bar x\notin \mathcal {W}\mathcal {E} (VP)\), there exists an x′∈ X such that \(F(x')< _C F(\bar x)\). This yields x′∈ Ω. As \(\|z^*_k\|=1\) for all k ≥ 0, the sequence \((z_k^*)_k\) has a weak∗ cluster point, say \(\bar z^*\), which is the weak∗ limit of a subsequence \((z^*_{k_j})_j\). Because \(z^*_k\in C^*\) for all k ≥ 0 and \(C^*\) is weakly∗ closed, it follows that \(\bar z^*\in C^*\). Moreover, \(\bar z^*\neq 0\), since it can be shown via [23, Lemma 2.2] that \(\langle \bar z^*, c\rangle > 0\) for any \(c\in \operatorname *{\mathrm {int}} C\). Consequently, \(\langle \bar z^*, F(x') - F(\bar x)\rangle < 0\). For any j ≥ 0 it holds, by (11.2) and \(F(\bar x)\leqq_C F(x_{k_j+1})\),
\[\langle z^*_{k_j}, F(x')-F(\bar x)\rangle \geq \langle z^*_{k_j}, F(x')-F(x_{k_j+1})\rangle \geq -\frac{\alpha_{k_j}}{\lambda_{k_j}}\langle z^*_{k_j}, e_{k_j}\rangle \big(\|x_{k_j+1}-x_{k_j}\|+\beta_{k_j}\|x_{k_j}-x_{k_j-1}\|\big)\|x'-x_{k_j+1}\|. \tag{11.8}\]
Because of (11.5), \((\|x_k - x_{k-1}\|)_k\) converges towards 0 as k → +∞, therefore so does the last expression in the inequality chain (11.8) when j → +∞. Letting j converge towards +∞, (11.8) yields \(\langle \bar z^*, F(x') - F(\bar x)\rangle \geq 0\), contradicting the inequality obtained above. Consequently, \(\bar x \in \mathcal {W}\mathcal {E}(VP)\). □
Remark 11.43
In order to guarantee the lower semicontinuity of the functions \(\delta _{\varOmega _n}\), n ≥ 1, it is enough to have the vector function F only C-level closed (i.e. the set \(\{x \in X : F(x) \leqq_C y\}\) is closed for any y ∈ Y), a hypothesis weaker than the positive C-lower semicontinuity imposed on F in Theorems 11.17 and 11.18. However, the latter is also needed in the proofs of these statements in order to guarantee the lower semicontinuity of the functions \(\langle z^*_n, F(\cdot)\rangle\), n ≥ 1.
© 2019 Springer Nature Switzerland AG
Cite this chapter
Grad, S.M. (2019). A Survey on Proximal Point Type Algorithms for Solving Vector Optimization Problems. In: Bauschke, H., Burachik, R., Luke, D. (eds.) Splitting Algorithms, Modern Operator Theory, and Applications. Springer, Cham. https://doi.org/10.1007/978-3-030-25939-6_11