# Asymptotic analysis in convex composite multiobjective optimization problems


## Abstract

In this paper, we present a unified approach for studying convex composite multiobjective optimization problems via asymptotic analysis. We characterize the nonemptiness and compactness of the weak Pareto optimal solution sets for a convex composite multiobjective optimization problem. Then, we employ the obtained results to propose a class of proximal-type methods for solving the convex composite multiobjective optimization problem, and carry out their convergence analysis under some mild conditions.

### Keywords

Convex composite multiobjective optimization · Asymptotic analysis · Proximal-type method · Nonemptiness and compactness · Weak Pareto optimal solution

### Mathematics Subject Classification

90C25 · 90C48 · 90C29

## 1 Introduction

It is known that the scalar-valued composite optimization model is important in both theory and methodology: it provides a unified framework for studying the convergence behaviour of various algorithms and Lagrangian optimality conditions. The study of the scalar-valued composite optimization model has recently received a great deal of attention in the literature; see, e.g., [4, 10, 15, 17, 25, 27] and the references therein.

However, we are rarely asked to make decisions based on only one criterion; most often, decisions involve several conflicting criteria. The multiobjective optimization model provides the mathematical framework for such situations, and it is without doubt a powerful tool in decision analysis. Moreover, it has found many significant applications in other fields such as economics, management science and engineering design. Many papers study optimality conditions, duality theory and topological properties of solution sets of multiobjective optimization problems (see, e.g., [5, 6, 7, 9, 12, 14, 19, 24]).

The composite multiobjective optimization model is broad and flexible enough to cover many common types of multiobjective optimization problems seen in the literature. Moreover, the model obviously includes the wide class of scalar-valued composite optimization problems, which is now recognized as fundamental for theory and computation in scalar nonsmooth optimization. Recently, several investigations of composite multiobjective optimization models have been carried out. Jeyakumar and Yang [16, 17, 18, 26] investigated first- and second-order optimality conditions for both nonsmooth and smooth convex composite multiobjective optimization problems, and also obtained duality results for these problems even when the objective functions are not cone-convex. Reddy and Mukherjee [21] studied first-order optimality conditions for a class of composite multiobjective optimization problems with \(V-\rho \)-invexity. Bot et al. [3] obtained conjugate duality results for multiobjective composed optimization problems. It is worth noticing that there are fewer results for composite multiobjective optimization problems, owing to the complexity of the objective functions and the variety of solution sets. Furthermore, to the best of our knowledge, no numerical method, not even a conceptual one, has been designed for solving composite multiobjective optimization problems.

For the outer function \(F\), denote by \(domF_i\) the effective domain of \(F_i\), i.e. \(domF_i=\{x\in R^l\mid F_i(x)<+\infty \}\). The inner function \(A: S\rightarrow R^l\) is an \(l\times n\) matrix such that \(A(S)\subset \cap _{i=1}^m dom{F}_i\). Denote by \(r(A)\) and \(A^\top \) the rank and the transpose of the matrix \(A\), respectively. To illustrate the nature of the model \((CMOP)\), let us look at an example.

*Example 1.1*

where \(S\subset R^n\) is closed and convex, \(\Vert \cdot \Vert _i\), \(i=1, 2,\ldots , m\), is a norm on \(R^l\), and, for each \(i=1, 2,\ldots , m\), \(A_i(x)\) is an \(l\times n\) matrix. Various examples of vector approximation problems of this type arising in simultaneous approximation are given in [13, 14].
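A typical vector approximation problem of this kind (a hedged illustration of ours, with assumed constant matrices \(A_i\) and target vectors \(b_i\in R^l\) that do not appear in the original text) can be written as

```latex
\min_{x\in S}\ \bigl(\Vert A_1x-b_1\Vert_1,\ \Vert A_2x-b_2\Vert_2,\ \ldots,\ \Vert A_mx-b_m\Vert_m\bigr),
```

where each component \(F_i(y)=\Vert y-b_i\Vert _i\) is convex and the inner map is linear, so the problem fits the composite structure \(F(A(x))\) of \((CMOP)\).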

The idea is that, by studying the composite model problem \((CMOP)\), a unified framework can be given for the treatment of many questions of theoretical and computational interest in multiobjective optimization. The motivation of this paper is to design an iterative algorithm for solving the model \((CMOP)\) via asymptotic analysis. Although the inner function \(A(x)\) is linear, the composite structure \(F(A(x))\) already captures some essential features of composite optimization. On the other hand, there are technical difficulties in computing the asymptotic function and the subdifferential of a vector-valued composite function when the inner function is not linear.

The paper is organized as follows. In Sect. 2, we present some concepts, basic assumptions and preliminary results. In Sect. 3, we characterize the nonemptiness and compactness of the weak Pareto optimal solution set of the problem (*CMOP*). In Sect. 4, we employ the obtained results to construct a class of proximal-type methods for solving the problem (*CMOP*), and carry out a convergence analysis under some mild conditions. In Sect. 5, we draw some conclusions.

## 2 Preliminaries

In this section, we introduce various notions of Pareto optimal solutions and present some preliminary results that will be used throughout this paper.

It is worth noticing that the binary relation \(\not \le _{intC}\) is closed in the following sense: if \(x_k\rightarrow x^*\) as \(k \rightarrow \infty \) and \(x_k \not \le _{intC} 0 \) for all \(k\), then \(x^* \not \le _{intC} 0\). This follows from the closedness of the set \(W := R^m\setminus (-intC)\).

**Definition 2.1**

**Definition 2.2**

[11] A map \(F: K\subset R^n\rightarrow R^m\cup \{+\infty _C\}\) is said to be C-lsc at \(x_0\in K\) if, for any neighborhood \(V\) of \(F(x_0)\) in \(R^m\), there exists a neighborhood \(U\) of \(x_0\) in \(R^n\) such that \(F(U\cap K)\subseteq V+C\). The map \(F: K\subset R^n\rightarrow R^m\cup \{+\infty _C\}\) is said to be C-lsc on \(K\) if it is C-lsc at every point \(x_0\in K\).

*Remark 2.1*

In fact, when \(C=R^m_+\), the \(R^m_+\)-lower semicontinuity of \(F=(F_1,\ldots , F_m)\) is equivalent to the (usual) lower semicontinuity of each \(F_i\).

**Definition 2.3**

*K* if

**Lemma 2.1**

[11] Let \(K\subset R^n\) be a closed set, and suppose that \(W\subset R^m\) is a closed set such that \(W+C\subseteq W\). Assume that \(F: K\rightarrow R^m\cup \{+\infty _C\}\) is C-lsc. Then, the set \(P=\{x\in K |~F(x)-\lambda \in -W\}\) is closed for all \(\lambda \in R^m\).

**Definition 2.4**

**Lemma 2.2**

[1] A set \(K\subset R^n\) is bounded if and only if its asymptotic cone is just the zero cone: \(K^\infty =\{0\}\).
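Two quick illustrations of Lemma 2.2 (our examples, not the paper's): the unit ball is bounded and its asymptotic cone collapses to the origin, while an unbounded half-space keeps its directions of recession:

```latex
K=\{x\in R^n : \Vert x\Vert \le 1\}\ \Rightarrow\ K^\infty=\{0\},
\qquad
K=\{x\in R^n : \langle a,x\rangle \le \beta\},\ a\ne 0\ \Rightarrow\
K^\infty=\{d\in R^n : \langle a,d\rangle \le 0\}\ne\{0\}.
```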

**Definition 2.5**

**Definition 2.6**

[23] The function \(f: R^n\rightarrow R\cup \{+\infty \}\) is said to be coercive if its asymptotic function satisfies \(f^\infty (d)>0\) for all \(d\in R^n\setminus \{0\}\), and counter-coercive if \(f^\infty (d)=-\infty \) for some \(d\in R^n\setminus \{0\}\).

**Lemma 2.3**

- (a)
f is coercive;

- (b)
the optimal set \(\{x\in R^n |~ f(x)=\inf f\}\) is nonempty and compact;

- (c)
\(\liminf \limits _{\Vert x\Vert \rightarrow +\infty }\frac{f(x)}{\Vert x\Vert }>0.\)
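As a worked illustration of Definition 2.6 and Lemma 2.3 (our example): for the convex quadratic \(f(x)=\tfrac{1}{2}\Vert x\Vert ^2\),

```latex
f^\infty(d)=\lim_{t\to+\infty}\frac{f(x_0+td)-f(x_0)}{t}=+\infty>0
\quad\text{for all } d\ne 0,
```

so \(f\) is coercive and its optimal set \(\{0\}\) is nonempty and compact. By contrast, an affine function \(f(x)=\langle c,x\rangle +\beta \) with \(c\ne 0\) has \(f^\infty (d)=\langle c,d\rangle <0\) for \(d=-c\), so it is not coercive and its infimum is not attained.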

**Definition 2.7**

[19] A cone \(C_2\subseteq R^m\) is called *Daniell* if any decreasing sequence in \(R^m\) having a lower bound converges to its infimum. For example, the cone \(C=R^m_+\) has the Daniell property.

**Definition 2.8**

[24] A set \(S\subset R^m\) is said to have the *domination property* with respect to \(C\), if there exists \(s\in R^m\) such that \(S\subseteq s+C\).

*epigraph* of \(H\), i.e.

**Definition 2.9**

**Proposition 2.1**

*Proof*

*Remark 2.2*

**Proposition 2.2**

Let \(F: R^l\rightarrow R^m\cup \{+\infty _C\}\) be a proper function, let \(A\) be a linear map from \(R^n\) to \(R^l\) with \(A(R^n)\subset domF\), and let \(G(x)=F(Ax)\) be the composite function. If \(F\) is proper, \(C\)-lsc and \(C\)-convex, then \(G\) is \(C\)-lsc and \(C\)-convex.

*Proof*

From Remark 2.1, \(G\) is \(C\)-*lsc* if and only if \(G_i\) is *lsc* for every \(i\in \{1,\ldots ,m\}\). Since \(A\) is a linear (hence continuous) map and each \(F_i\) is *lsc*, the composition \(G_i=F_i\circ A\) is *lsc* for every \(i\in \{1,\ldots ,m\}\); hence \(G\) is \(C\)-*lsc*.
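The \(C\)-convexity part of Proposition 2.2, not spelled out in the proof above, follows in one line from the linearity of \(A\) together with the \(C\)-convexity of \(F\): for any \(x,y\in R^n\) and \(\lambda \in [0,1]\),

```latex
G(\lambda x+(1-\lambda)y)=F\bigl(\lambda Ax+(1-\lambda)Ay\bigr)
\le_C \lambda F(Ax)+(1-\lambda)F(Ay)
=\lambda G(x)+(1-\lambda)G(y).
```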

**Proposition 2.3**

*Proof*

The proof of Proposition 2.3 is straightforward, so we omit it.

Throughout this paper, we denote by \(\bar{X}\) the set of weak Pareto optimal solutions and by \(X^*\) the set of ideal solutions of problem *(CMOP)*, respectively.

## 3 Characterizations of weak Pareto optimal solution sets

**Theorem 3.1**

*Proof*

*Remark 3.1*

When \(A=I\) is the identity mapping, some corresponding results have been obtained in [8, 11].

Next, let us consider some necessary and sufficient conditions for the nonemptiness and compactness of the weak Pareto optimal solution sets of the problem (CMOP).

**Lemma 3.1**

For problem (*CMOP*), we assume that \(F\) is proper, C-lsc and C-convex. Then \(\bar{X}\) is nonempty and compact if and only if

*Proof*

*Remark 3.2*

The linearity of \(A(x)\) makes it possible to obtain the analytical expression of the asymptotic function of the vector-valued function (Proposition 2.3). By virtue of the analytical expression of the asymptotic function, Lemma 3.1 generalizes some corresponding results of [8].

## 4 Proximal-type method for convex composite multiobjective optimization problem

**Lemma 4.1**

This follows immediately from Theorem 2.1 in [2].

Now we make the following assumption:

(A) the set \(\bar{X}\) is nonempty and compact.

Step (1): Take any \(x_0\in R^n\).

Step (2): Given \(x_k\), if \(x_k\in \bar{X}\), then set \(x_{k+p}=x_k\) for all \(p\ge 1\) and stop; otherwise go to Step (3).

Step (3): If \(x_k\notin \bar{X}\), then compute \(x_{k+1}\) satisfying
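Since the display for the subproblem in Step (3) is not reproduced above, the following is only a sketch of one proximal-type step, not the paper's method (VPM) itself: the vector objective is reduced through a fixed weighted scalarization with smooth components \(F_i(y)=\Vert y-b_i\Vert ^2\), and each regularized subproblem is a strongly convex quadratic with a closed-form solution. All names (`As`, `bs`, `lam`, `alpha`) are illustrative assumptions.

```python
import numpy as np

def proximal_step(As, bs, lam, x_k, alpha):
    """One scalarized proximal step: minimize
    sum_i lam[i] * ||A_i x - b_i||^2 + (alpha/2) * ||x - x_k||^2.
    The objective is a strongly convex quadratic, so the minimizer
    solves the linear system H x = g assembled below."""
    n = x_k.shape[0]
    H = alpha * np.eye(n)
    g = alpha * x_k
    for A, b, w in zip(As, bs, lam):
        H += 2.0 * w * A.T @ A
        g += 2.0 * w * A.T @ b
    return np.linalg.solve(H, g)

# Illustrative data: two convex composite components F_i(A_i x).
rng = np.random.default_rng(0)
As = [rng.standard_normal((4, 3)) for _ in range(2)]
bs = [rng.standard_normal(4) for _ in range(2)]
lam = np.array([0.5, 0.5])   # strictly positive weights in the simplex

x = np.zeros(3)
for _ in range(300):         # proximal iterations
    x = proximal_step(As, bs, lam, x, alpha=1.0)
```

With strictly positive weights, the limit of these iterates minimizes the scalarized objective and is therefore a weak Pareto point of the underlying vector problem; the method (VPM) instead selects, at each step, a weak Pareto solution of a regularized vector subproblem, in the spirit of Bonnel et al. [2].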

Next we will establish the main results in this section.

**Theorem 4.1**

In the problem \((MOP)\), let \(F_0: R^l\rightarrow R^m\cup \{+\infty _C\}\) be a proper, \(C\)-convex and \(C\)-lower semicontinuous mapping. Further suppose that \(r(A)=n\). Under assumption (A), any sequence \(\{x_k\}\) generated by the method (VPM) is well defined and bounded.

*Proof*

*Remark 4.1*

The main statements of Theorem 4.1 concern the existence and the boundedness of the generated sequences. Compared with some corresponding results in [2], our contribution is a quite different method, based on asymptotic analysis, for proving the existence of iterates and the boundedness of the sequences. It is worth noticing that when the regularization term in (4.1) is not quadratic, the traditional method cannot handle such cases, whereas the method in this paper still works.

**Lemma 4.2**

*Proof*

The set *E* is nonempty. Since \(x_{k+1}\) is a weak Pareto optimal solution of problem (4.1), there exists \(\lambda _k\in C_1\) such that \(x_{k+1}\) solves the following problem \((MOP_{\lambda _k})\):

**Theorem 4.2**

Let the assumptions in Theorem 4.1 and Lemma 4.2 hold. Then any cluster point of \(\{x_k\}\) belongs to \(\bar{X}\).

*Proof*

(*MOP*). Now suppose that the algorithm does not terminate finitely. Then, by Theorem 4.1, \(\{x_k\}\) is bounded and therefore has cluster points. Next we show that every cluster point is a weak Pareto optimal solution of problem

(*MOP*). Let \(\hat{x}\) be a cluster point of \(\{x_k\}\) and let \(\{x_{k_j}\}\) be a subsequence of \(\{x_k\}\) converging to \(\hat{x}\). Let \(\lambda \in C_1\). Define a function \(\psi _\lambda : S\rightarrow R\cup \{+\infty \}\) by \(\psi _\lambda (x)= \langle F_0(Ax), \lambda \rangle \). Since \(F_0\) is

*C-lsc* and \(C\)-convex, \(\psi _\lambda \) is also lsc and convex; it follows that \(\psi _\lambda (\hat{x})\le \liminf \limits _{j\rightarrow +\infty } \psi _\lambda (x_{k_j})\). By the fact that \(x_{k+1}\in \theta _k\), we can see that \(F_0(Ax_{k+1})\le _C F_0(Ax_k)\) for \(k\in N\). Thus, \(\psi _\lambda (x_{k+1})\le \psi _\lambda (x_{k})\). Therefore,

(*VPM*), we have

**Theorem 4.3**

Adopt the same assumptions as in Theorem 4.2. Then the whole sequence \(\{x_k\}\) converges to a weak Pareto optimal solution of problem (MOP).

*Proof*

(*CMOP*), the sequences \(\Vert \tilde{x}-x_k\Vert \) and \(\Vert \hat{x}-x_k\Vert \) are convergent. Hence there exist \(\tilde{\alpha }, \hat{\alpha }\in R\) such that

## 5 Conclusion

In this paper, we defined the asymptotic function of a vector-valued function, obtained an analytical expression for the asymptotic function of a class of cone-convex vector-valued functions, and characterized the nonemptiness and compactness of the weak Pareto optimal solution sets of a composite multiobjective optimization problem. We then applied these results to construct a proximal-type method for solving the composite multiobjective optimization problem. Under mild conditions, we proved that any sequence generated by this method converges to a weak Pareto optimal solution of the problem.

## Notes

### Acknowledgments

The author thanks two anonymous referees for carefully reading the original submission and supplying many helpful suggestions, which greatly improved the paper. The author thanks Prof. X. Q. Yang, The Hong Kong Polytechnic University, for his teaching. The author also thanks Prof. J. Chen, Tsinghua University, for his useful suggestions on the revised versions. This work is supported by the National Science Foundation of China (11001289), the Key Project of Chinese Ministry of Education (211151), the National Science Foundation of Chongqing Science and Technology Commission (CSTC, 2011BB0118) and the Research Grant of the Education Committee of Chongqing (KJ110633).

### References

- 1. Auslender, A., Teboulle, M.: Asymptotic Cones and Functions in Optimization and Variational Inequalities. Springer, Berlin (2003)
- 2. Bonnel, H., Iusem, A.N., Svaiter, B.F.: Proximal methods in vector optimization. SIAM J. Optim. **15**(4), 953–970 (2005)
- 3. Bot, R., Vargyas, E., Wanka, G.: Conjugate duality for multiobjective composed optimization problems. Acta Mathematica Hungarica **116**, 177–196 (2009)
- 4. Burke, J.V., Ferris, M.C.: A Gauss–Newton method for convex composite optimization. Math. Program. **71**, 179–194 (1995)
- 5. Chen, G.Y., Huang, X.X., Yang, X.Q.: Vector Optimization: Set-Valued and Variational Analysis. Lecture Notes in Economics and Mathematical Systems, vol. 541. Springer, Berlin (2005)
- 6. Chen, Z., Huang, X.X., Yang, X.Q.: Generalized proximal point algorithms for multiobjective optimization problems. Appl. Anal. **90**, 935–949 (2011)
- 7. Chen, Z.: Generalized viscosity approximation methods in multiobjective optimization problems. Comput. Optim. Appl. **49**, 179–192 (2011)
- 8. Deng, S.: Characterizations of nonemptiness and compactness of solution sets in convex optimization. J. Optim. Theory Appl. **96**, 123–131 (1998)
- 9. Deng, S.: Characterizations of the nonemptiness and boundedness of weakly efficient solution sets of convex vector optimization problems in real reflexive Banach spaces. J. Optim. Theory Appl. **140**, 1–7 (2009)
- 10. Fletcher, R.: Practical Methods of Optimization. Wiley, New York (1987)
- 11. Flores-Bazan, F.: Ideal, weakly efficient solutions for vector problems. Math. Program. **93**, 453–475 (2002)
- 12. Huang, X.X., Yang, X.Q.: Characterizations of nonemptiness and compactness of the set of weakly efficient solutions for convex vector optimization and applications. J. Math. Anal. Appl. **264**, 270–287 (2001)
- 13. Jahn, J.: Scalarization in multi-objective optimization. Math. Program. **29**, 203–219 (1984)
- 14. Jahn, J.: Vector Optimization: Theory, Applications and Extensions. Springer, Berlin (2004)
- 15. Jeyakumar, V.: Composite nonsmooth programming with Gateaux differentiability. SIAM J. Optim. **1**, 31–40 (1991)
- 16. Jeyakumar, V., Yang, X.Q.: Convex composite multi-objective nonsmooth programming. Math. Program. **59**, 325–343 (1993)
- 17. Jeyakumar, V., Yang, X.Q.: Convex composite minimization with C1,1 functions. J. Optim. Theory Appl. **86**, 631–648 (1995)
- 18. Jeyakumar, V., Luc, D.T., Tinh, P.N.: Convex composite non-Lipschitz programming. Math. Program. **92**, 177–195 (2002)
- 19. Luc, D.T.: Theory of Vector Optimization. Lecture Notes in Economics and Mathematical Systems, vol. 319. Springer, Berlin (1989)
- 20. Phelps, R.R.: Convex Functions, Monotone Operators and Differentiability. Lecture Notes in Mathematics, vol. 1364. Springer, Berlin (1988)
- 21. Reddy, L.V., Mukherjee, R.N.: Composite nonsmooth multiobjective programs with V-\(\rho \)-invexity. J. Math. Anal. Appl. **235**, 567–577 (1999)
- 22. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
- 23. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (1998)
- 24. Sawaragi, Y., Nakayama, H., Tanino, T.: The Theory of Multiobjective Optimization. Academic Press, New York (1985)
- 25. Womersley, R.S.: Local properties of algorithms for minimizing nonsmooth composite functions. Math. Program. **32**, 69–89 (1985)
- 26. Yang, X.Q., Jeyakumar, V.: First and second-order optimality conditions for convex composite multi-objective optimization. J. Optim. Theory Appl. **95**, 209–224 (1997)
- 27. Yang, X.Q.: Second-order global optimality conditions for convex composite optimization. Math. Program. **81**, 327–347 (1998)