1 Introduction

The notion of proper minimality was developed in vector optimization in order to rule out some undesirable features of the solutions; that is, the main intention of introducing this type of notion is to provide a more refined set of minimal points that satisfy better properties according to some criterion.

One of the first notions in this vein was introduced by Geoffrion for multiobjective optimization problems with the Pareto order (see [1]); he considered the boundedness of trade-off ratios between conflicting objectives. Some years later, Henig proposed a geometrical approach to proper minimality (see [7]) for a general vector optimization problem, based on a sort of stability with respect to order perturbations. Many other authors have proposed alternative versions of proper minimality (for a detailed overview see, e.g., [3]). Since its origin, proper minimality in vector optimization has been deeply related to scalarization techniques [14], in the sense that proper minimal points can be characterized through scalarization approaches. Indeed, this is the main advantage of this type of minimal points.

In the last decades, the attention of many researchers has focused on an extension of vector optimization to a class of problems where the objective functions are set-valued maps. Some interesting applications can be found, for instance, in mathematical finance [6], game theory [20], mathematical economics and robust optimization (see Chapter 15 in [12] and the references therein). A relevant approach in this field is based on comparisons of sets, whose origins go far back in time (see, e.g., [13, 22]). In this paper we deal with a quasiorder relation induced by the original order structure of the space. This approach was exploited by Kuroiwa (see [15, 17], and also [12] and the references therein).

To our knowledge, the only attempt in the literature to extend the notion of proper minimality to the more general framework of set optimization is the very recent paper by Huerga et al. [8], in which several notions of proper efficiency in the sense of Henig were defined and studied for a set-valued optimization problem under the set criterion of solution.

In this paper, we continue the study of this kind of notions and introduce extensions to the set optimization framework of the concepts of proper minimality originally developed by Geoffrion and Henig in the vector optimization setting.

Since, in particular, the concept due to Geoffrion makes sense in finite dimensional spaces, for the sake of technical simplicity we assume that the involved spaces are finite dimensional throughout the whole paper. In this setting, one of the main reasons for studying a definition of proper minimality in the sense of Geoffrion, in addition to the definitions following the line due to Henig, is to provide a more tangible interpretation of proper minimality, based on the boundedness of trade-off ratios between conflicting objectives. Moreover, although the original notion of proper minimality in the sense of Geoffrion is given for the Pareto cone, the extension introduced in this paper is defined for polyhedral cones.

Furthermore, in the multiobjective (vector) optimization setting with the Pareto order, it is known that the notions of proper efficiency given by Henig and Geoffrion are equivalent (see, for instance, [21]). This property is also satisfied by the corresponding extensions of these concepts to the set optimization framework introduced in this work, as shown in Sect. 4. Thus, these extensions turn out to be natural and provide some interesting insights in set optimization.

The paper is organized as follows. In Sect. 2, we present the framework, notation and some preliminary results needed along the paper. In Sects. 3 and 4, we introduce the new concepts of proper minimality in the sense of Henig and Geoffrion, respectively, for the set optimization framework, where the involved spaces are finite dimensional. Also in Sect. 4, we study the equivalence among these new notions when a polyhedral ordering cone is considered. Finally, in Sect. 5, we develop a characterization of proper minimal points by means of nonlinear scalarization techniques, without any convexity assumption. At the end of the paper we state the conclusions.

2 Preliminaries

As mentioned in the introduction, we develop this work in the setting of finite dimensional spaces. It is worth mentioning that the definitions of proper efficiency in the sense of Henig presented in Sect. 3 can be straightforwardly extended to more general spaces but, for the convenience of the reader, we prefer to keep the finite dimensional setting throughout the work.

First of all, we recall that a nonempty set \(K\subset {\mathbb {R}}^p\) is a cone if \(\lambda k\in K\), for all \(\lambda \ge 0\) and all \(k\in K\). In this paper, we consider a closed, convex and pointed (\(K\cap (-K)=\{0\}\)) cone \(K\subset {\mathbb {R}}^p\), which induces in \({\mathbb {R}}^p\) a partial order \(\le _K\), defined as usual:

$$\begin{aligned} x\le _K y\Leftrightarrow y-x\in K,\,\,\forall x,y\in {\mathbb {R}}^p. \end{aligned}$$

Given \(x,y\in {\mathbb {R}}^p\), we denote by \(x(y)\) the usual scalar product of x and y in \({\mathbb {R}}^p\). The polar and the strict positive polar cones of K are denoted, respectively, by \(K^*\) and \(K^{*s}\), i.e.,

$$\begin{aligned} K^{*}=\left\{ x^{*}\in {\mathbb {R}}^p: x^{*}(k)\ge 0\text { for every }k\in K\right\} , \end{aligned}$$

and

$$\begin{aligned} K^{*s}=\left\{ x^{*}\in {\mathbb {R}}^p: x^{*}(k)>0\text { for every }k\in K\setminus \left\{ 0\right\} \right\} . \end{aligned}$$

Given a nonempty set \(F\subseteq {\mathbb {R}}^p\), we denote by \(\text{int} F\) and \(\text {cone} F\) the topological interior of F and the cone generated by F, respectively. We recall that \(\text {cone}F=\displaystyle \bigcup _{\lambda \ge 0}\lambda F\). We say that F is solid when \(\text{int} F\ne \emptyset\). Also, the Euclidean closed unit ball in \({\mathbb {R}}^p\) is denoted by \(B_{{\mathbb {R}}^p}\).

We recall that a nonempty convex subset \({\mathcal {B}}\subset {\mathbb {R}}^p\) is a base of K, if for every \(k\in K\backslash \{0\}\) there exist a unique \(\lambda >0\) and an element \(b\in {\mathcal {B}}\) such that \(k=\lambda b\) (see, for instance, [9, Definition 1.10(d)]).

Since K is a closed convex pointed cone in a finite dimensional space, any base \({\mathcal {B}}\) of K is compact and there exists \(x^*\in K^{*s}\) such that

$$\begin{aligned} {\mathcal {B}}=S_{x^*}:=\{x\in K: x^*(x)=1\} \end{aligned}$$

(see [2, Theorem 2.1.15 and page 3], [11, Theorem 3.8.4] and [10, Lemma 3.21(d)]).

Let \({\mathcal {B}}\) be a base of K and \(\alpha \in (0,\delta )\), where \(\delta =\text {d}(0,{\mathcal {B}})=\inf _{b\in {\mathcal {B}}}\left\| b\right\|\) (we consider the Euclidean norm). Then, \(C_\alpha :=\text {cone}({\mathcal {B}}+\alpha B_{{\mathbb {R}}^p})\) is a dilating cone for K, i.e., \(K\backslash \{0\}\subseteq \text{int} C_\alpha\). It is called the Henig dilating cone ([2]) and satisfies the properties collected in the following proposition (see [2, Lemma 3.2.51] and [13, Proof of Proposition 2.4.6(iii)]).

Proposition 1

  1. (i)

    \(C_\alpha\) is solid, pointed, closed and convex for every \(\alpha \in (0,\delta )\).

  2. (ii)

    If \(0<\alpha _1<\alpha _2<\delta\), then \(K\backslash \{0\}\subseteq C_{\alpha _1}\backslash \{0\}\subseteq \text{int} C_{\alpha _2}\).

  3. (iii)

    For any pointed solid convex cone \(K'\subset {\mathbb {R}}^p\) such that \(K\backslash \{0\}\subseteq \text{int} K'\), there exists \(\eta \in (0,\delta )\) such that \(C_\eta \backslash \{0\}\subseteq \text{int} K'\).
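
As a simple illustration of this construction, take \(K={\mathbb {R}}^2_+\) and \(x^*=(1,1)\in K^{*s}\). Then \({\mathcal {B}}=S_{x^*}\) is the segment with endpoints (1, 0) and (0, 1), \(\delta =\text {d}(0,{\mathcal {B}})=\frac{1}{\sqrt{2}}\) and, for every \(\alpha \in (0,\frac{1}{\sqrt{2}})\),

$$\begin{aligned} C_\alpha =\text {cone}({\mathcal {B}}+\alpha B_{{\mathbb {R}}^2})=\left\{ (r\cos \varphi ,r\sin \varphi ): r\ge 0,\,-\arcsin \alpha \le \varphi \le \tfrac{\pi }{2}+\arcsin \alpha \right\} , \end{aligned}$$

that is, \(C_\alpha\) is the closed convex sector obtained by rotating each boundary ray of \({\mathbb {R}}^2_+\) outwards by the angle \(\arcsin \alpha\), so that \(K\backslash \{0\}\subset \text{int} C_\alpha\).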

In particular, we can consider the special case when K is polyhedral, i.e.,

$$\begin{aligned} K=\left\{ y\in {\mathbb {R}}^{p}:Py\in {\mathbb {R}}_{+}^{m}\right\} , \end{aligned}$$
(1)

where \(P\in {\mathcal {M}}(m,p)\), i.e., P is a matrix with m rows and p columns, with \(m\ge p\); hence K is closed and convex. Moreover, we suppose that P has full rank p, so that the pointedness of K is preserved.

For this case, Kaliszewski [11] defined a family of dilating cones for K, \(\{K_\rho \}_{\rho >0}\), by perturbing matrix P, in the following way

$$\begin{aligned} K_\rho :=\{y\in {\mathbb {R}}^p: Py + \rho U P y\in {\mathbb {R}}_+^m\}, \end{aligned}$$
(2)

where U denotes the all-ones square matrix of order m. This family of dilating cones satisfies the following properties (see [11, Lemma 3.7]).

Proposition 2

  1. (i)

    \(K_\rho\) is solid, closed, convex and pointed for all \(\rho >0\).

  2. (ii)

    \(K\backslash \{0\}\subseteq K_{\rho _1}\backslash \{0\}\subseteq \text{int}K_{\rho _2}\), for all \(0<\rho _1<\rho _2\).

  3. (iii)

    For any pointed solid convex cone \(K'\subset {\mathbb {R}}^p\) such that \(K\backslash \{0\}\subseteq \text{int} K'\), there exists \(\rho > 0\) such that \(K_\rho \backslash \{0\}\subseteq \text{int} K'\).

We underline that, in particular,

$$\begin{aligned} s^*:=\sum _{i=1}^m p_i\in K_\rho ^{*s}, \end{aligned}$$
(3)

for all \(\rho \ge 0\), where \(p_i\) denotes the ith row of matrix P, so

$$\begin{aligned} S^{\rho }_{s^*}:=\{y\in K_\rho : s^*(y)=1\} \end{aligned}$$

is a compact base of \(K_\rho\), for all \(\rho \ge 0\).
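
The polyhedral description in (1)–(3) is straightforward to use computationally: membership in K and in \(K_\rho\) amounts to checking the sign of \(Py\) and of \((P+\rho UP)y\). The following minimal sketch (in Python; the helper name in_cone is ours, not standard) illustrates this for the Pareto cone:

```python
import numpy as np

def in_cone(P, y, rho=0.0, tol=1e-12):
    """Membership in K_rho = {y : P y + rho * U P y >= 0} as in (2);
    rho = 0 gives the original polyhedral cone K of (1)."""
    m = P.shape[0]
    U = np.ones((m, m))                 # all-ones square matrix of order m
    P_rho = P + rho * U @ P             # rows are p_i + rho * sum_l p_l
    return bool(np.all(P_rho @ y >= -tol))

# Example with the Pareto cone K = R^2_+ (P = identity, m = p = 2).
P = np.eye(2)
y = np.array([1.0, -0.1])
print(in_cone(P, y))                    # False: y does not belong to K
print(in_cone(P, y, rho=0.2))           # True: y belongs to the dilating cone K_0.2

s_star = P.sum(axis=0)                  # s* = p_1 + ... + p_m, see (3)
print(s_star @ y)                       # s*(y) = 0.9
```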

In order to consider a set minimization framework, we follow the approach introduced by Kuroiwa [15] (see also [17]), who defined several set relations with respect to a cone. The most popular ones are the upper and the lower set less order relations, which are given, respectively, in the following way:

$$\begin{aligned} A&\preceq _{K}^u B\text { if and only if }A\subseteq B-K,\\ A&\preceq _{K}^l B\text { if and only if }B\subseteq A+K, \end{aligned}$$

for \(\emptyset \ne A,B\subseteq {\mathbb {R}}^p\), where we consider that \(A+\emptyset =\emptyset +A=\emptyset\). It is easy to see that

$$\begin{aligned} A&\preceq _{K}^l B\Longleftrightarrow -B\preceq _K^u -A. \end{aligned}$$

In this paper, we focus on the lower set less relation, which is the best known and most used relation for minimization processes in set optimization. As usual, with respect to \(\preceq _{K}^l\), we introduce an equivalence relation \(\backsim _{K}^l\) defined by:

$$\begin{aligned} A\backsim _{K}^lB\text { if and only if }A\preceq _{K}^l B \text { and }B\preceq _{K}^l A. \end{aligned}$$

Note that \(A\backsim _{K}^l B\) if and only if \(A+K=B+K\).
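
For finite sets and a polyhedral cone K as in (1), the relation \(\preceq _{K}^l\) and the equivalence \(\backsim _{K}^l\) can be tested directly from the definitions, since \(B\subseteq A+K\) means that every element of B dominates some element of A. A minimal sketch (the function names are ours):

```python
import numpy as np

def lower_less(A, B, P, tol=1e-12):
    """A <=^l_K B, i.e. B ⊆ A + K, for finite A, B and K = {y : P y >= 0}:
    every b in B must satisfy P(b - a) >= 0 for some a in A."""
    return all(any(np.all(P @ (b - a) >= -tol) for a in A) for b in B)

def equivalent(A, B, P):
    """A ~^l_K B, i.e. A + K = B + K."""
    return lower_less(A, B, P) and lower_less(B, A, P)

# Pareto cone in the plane.
P = np.eye(2)
A = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]
B = [np.array([1.0, 1.0])]
print(lower_less(A, B, P))   # True:  B ⊆ A + K
print(lower_less(B, A, P))   # False: A ⊄ B + K
print(equivalent(A, B, P))   # False
```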

Given a collection of sets \({{\mathcal {A}}} \subseteq 2^{{{\mathbb {R}}}^{p}}\), we can now recall the definitions of minimality and strict minimality based on relation \(\preceq _{K}^l\) (see, for instance, [5, 16], and the references therein).

Definition 1

Let \(\hat{A}\in {\mathcal {A}}\).

  1. (i)

    It is said that \(\hat{A}\) is minimal in \({\mathcal {A}}\) when for every \(B\in {\mathcal {A}}\), if \(B\preceq _{K}^l\hat{A}\) then \(\hat{A}\backsim _{K}^lB.\)

  2. (ii)

    It is said that \(\hat{A}\) is strictly minimal in \({\mathcal {A}}\) if there is no \(B\in {\mathcal {A}}\) such that \(B\preceq _{K}^l\hat{A}\) and \(B\ne \hat{A}\).

The two definitions coincide when \(A\backsim _{K}^lB\) implies that \(B=A\). In the special case where all the elements of \({\mathcal {A}}\) are singletons, both definitions above collapse into the classical notion of minimal element of a set in the partially ordered space \({\mathbb {R}}^p\) with respect to \(\le _K\).
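
When \({\mathcal {A}}\) is a finite collection of finite sets, Definition 1 can be checked by exhaustive comparison. A minimal sketch (the helper names are ours; the lower set less test is restated for self-containment):

```python
import numpy as np

def lower_less(A, B, P, tol=1e-12):
    """A <=^l_K B iff B ⊆ A + K, for finite A, B and K = {y : P y >= 0}."""
    return all(any(np.all(P @ (b - a) >= -tol) for a in A) for b in B)

def same_set(A, B):
    """Equality of two finite sets given as lists of points."""
    key = lambda S: sorted(map(tuple, np.round(np.asarray(S), 12)))
    return key(A) == key(B)

def is_minimal(A_hat, family, P):
    """Definition 1(i): B <=^l_K A_hat implies A_hat ~^l_K B."""
    return all(not lower_less(B, A_hat, P) or lower_less(A_hat, B, P)
               for B in family)

def is_strictly_minimal(A_hat, family, P):
    """Definition 1(ii): there is no B != A_hat with B <=^l_K A_hat."""
    return not any(lower_less(B, A_hat, P) and not same_set(B, A_hat)
                   for B in family)

# Pareto cone in the plane: A is (strictly) minimal, B is not minimal.
P = np.eye(2)
A = [np.array([0.0, 0.0])]
B = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
family = [A, B]
print(is_minimal(A, family, P), is_strictly_minimal(A, family, P))   # True True
print(is_minimal(B, family, P), is_strictly_minimal(B, family, P))   # False False
```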

3 Henig proper minimality notion in set optimization

In this section we introduce several notions of proper minimality in set optimization based on the concepts due to Henig [7] in vector optimization.

In the original definition introduced by Henig, proper minimality is defined as minimality with respect to a dilating cone C whose interior contains \(K\backslash \{0\}\). Since the notion of minimality in a set optimization framework relies upon a quasiorder relation among sets, the reformulation of Henig’s proper minimality for set optimization involves a set relation formulated with respect to a dilating cone, as well. A further possibility is to consider a notion of strict minimality in the sense of Henig.

Thus, in the following definition we provide three different concepts of proper minimality in the sense of Henig in set optimization.

From now on, we consider a collection of sets \({\mathcal {A}}\subseteq 2^{{\mathbb {R}}^p}\backslash \{\emptyset \}\).

Definition 2

Let \(\hat{A}\in {\mathcal {A}}\).

  1. (i)

    It is said that \(\hat{A}\) is H1-properly minimal (shortly, H1-P minimal) in \({\mathcal {A}}\) if there exists a pointed convex cone C such that \(K\setminus \left\{ 0\right\} \subseteq \text{int}C\) and \(\hat{A}\) is minimal in \({\mathcal {A}}\) with respect to \(\preceq _{C}^l\), i.e.,

    $$\begin{aligned} \text {for every } B\in {\mathcal {A}}, \text { if } B\preceq _{C}^l\hat{A} \text { then } \hat{A}\backsim _{C}^l B. \end{aligned}$$
  2. (ii)

    It is said that \(\hat{A}\) is H2-properly minimal (shortly, H2-P minimal) in \({\mathcal {A}}\) if there exists a pointed convex cone C such that \(K\setminus \left\{ 0\right\} \subseteq \text{int}C\) and

    $$\begin{aligned} \text { for every } B\in {\mathcal {A}}, \text { if } B\preceq _{C}^l \hat{A} \text { then } \hat{A}\backsim _{K}^l B. \end{aligned}$$
  3. (iii)

    It is said that \(\hat{A}\) is H-properly strictly minimal (shortly, H-PS minimal) in \({\mathcal {A}}\) if there exists a pointed convex cone C such that \(K\setminus \left\{ 0\right\} \subseteq \text{int}C\) and \(\hat{A}\) is strictly minimal in \({\mathcal {A}}\) with respect to \(\preceq _{C}^l\), i.e.,

    $$\begin{aligned} \text { there is no } B\in {\mathcal {A}} \text { such that } B\preceq _{C}^l\hat{A} \text { and } B\ne \hat{A}. \end{aligned}$$

Remark 1

  1. (a)

    In the special case of vector optimization, where all the elements of \({\mathcal {A}}\) are singletons, all three definitions coincide with Henig proper minimality, as defined in [7].

  2. (b)

    We point out that, in the set optimization framework, the way to define a notion of proper minimality following the line due to Henig is not unique, as shown in Definition 2. Probably, at first sight, the most natural extension of the concept due to Henig to a set optimization problem is H1-P minimality, based on the idea of replacing the cone K by a dilating cone C. The H1-P minimal points satisfy interesting properties. A first study of this notion can be found in [8], for a set-valued optimization problem and under the set criterion of solution (with the lower set less order relation).

However, H1-P minimality does not imply, in general, minimality (see, for instance, Example 1 below). In the vector optimization framework, the notions of proper minimality arise with the idea of removing some undesirable features from the set of minimal points. Hence, the fact that the set of H1-P minimal points is not a subset of the set of minimal points is an irregularity that deserves further investigation. Nevertheless, the concept of H2-P minimality (as well as the notion of H-PS minimality) overcomes this drawback, as shown in the next proposition, whose proof follows directly from the definitions.

Proposition 3

Let \(\hat{A}\in {\mathcal {A}}\).

  1. (i)

    If \(\hat{A}\) is H-PS minimal in \({\mathcal {A}}\), then \(\hat{A}\) is strictly minimal in \({\mathcal {A}}\).

  2. (ii)

    If \(\hat{A}\) is H-PS minimal in \({\mathcal {A}}\), then \(\hat{A}\) is H2-P minimal in \({\mathcal {A}}\).

  3. (iii)

    If \(\hat{A}\) is H2-P minimal in \({\mathcal {A}}\), then \(\hat{A}\) is a minimal set in \({\mathcal {A}}\).

  4. (iv)

    If \(\hat{A}\) is H2-P minimal in \({\mathcal {A}}\), then \(\hat{A}\) is H1-P minimal in \({\mathcal {A}}\).

In the following examples we point out some features of these notions. In the first one, we show that, in general, H1-P minimality does not imply minimality.

Example 1

Let \(X={\mathbb {R}}^{2}\) ordered by \(K={\mathbb {R}}_{+}^{2}\), and let \({\mathcal {A}}=\left\{ A,B\right\}\), where \(A=\left\{ \left( x,y\right) \in {\mathbb {R}}^{2}:\,y\ge 0\right\}\), and \(B=\left\{ \left( x,y\right) \in {\mathbb {R}}^{2}:y\ge 1\right\}\). Then both A and B are H1-P minimal in \({\mathcal {A}}\), but B is not minimal in \({\mathcal {A}}\).

Moreover, the next examples clarify that none of the implications summarized in Proposition 3 can be reversed.

Example 2

Let \(X={\mathbb {R}}^{2}\) ordered by \(K={\mathbb {R}}_{+}^{2}\), and let \({\mathcal {A}}=\left\{ A,B\right\}\), where \(A=\left\{ \left( x,y\right) \in {\mathbb {R}}^{2}:y\ge 0\right\}\), and \(B=\left\{ \left( x,y\right) \in {\mathbb {R}}^{2}:x\ge 0\right\}\). Then A is a minimal set (and a strictly minimal set). On the other hand, since \(A+K=A\), \(B+K=B\), and \(A+C=B+C={\mathbb {R}}^{2}\) for all convex cones C such that \(K\setminus \left\{ 0\right\} \subseteq \text{int}C\), A is H1-P minimal but it is not H2-P minimal.

Example 3

Let \(X={\mathbb {R}}^{2}\) ordered by \(K={\mathbb {R}}_{+}^{2}\), and let \({\mathcal {A}}=\left\{ A,B\right\}\), where

$$\begin{aligned} A&=\left\{ \left( x,y\right) \in {\mathbb {R}}^{2}:0\le x\le \frac{1}{2},\,0\le y\le \frac{1}{2} \right\} ,\text { and}\\ B&=\left\{ \left( x,y\right) \in {\mathbb {R}}^{2}:0\le x\le 1 ,0\le y\le 1\right\} . \end{aligned}$$

Then, both the sets A and B are H2-P minimal but not H-PS minimal.

The next result straightforwardly follows from Propositions 1 and 2.

We denote by \(\text {Min}({\mathcal {A}},E)\) (respectively, \(\text {SMin}({\mathcal {A}},E)\)) the set of minimal (respectively, strictly minimal) elements of \({\mathcal {A}}\) with respect to a pointed convex cone E. On the other hand, \(\text {Min2}({\mathcal {A}},E)\) represents the set of elements \(A\in {\mathcal {A}}\) for which if \(B\in {\mathcal {A}}\) and \(B\preceq _{E}^lA\), then \(A\backsim _{K}^lB\).

Finally, by \(\text {H1-P}({\mathcal {A}})\) (respectively, \(\text {H2-P}({\mathcal {A}})\), \({\text {H-PS}}({\mathcal {A}})\)) we denote the set of H1-P (respectively, H2-P, H-PS) minimal elements of \({\mathcal {A}}\).

Proposition 4

The following relations hold.

  1. (i)

    \(\text {H1-P}({\mathcal {A}})\supseteq \bigcup _{\alpha \in (0,\delta )}\text {Min}({\mathcal {A}},C_\alpha )\).

  2. (ii)

    \(\text {H2-P}({\mathcal {A}})=\bigcup _{\alpha \in (0,\delta )}\text {Min2}({\mathcal {A}},C_\alpha )\).

  3. (iii)

    \(\text {H-PS}({\mathcal {A}})=\bigcup _{\alpha \in (0,\delta )}\text {SMin}({\mathcal {A}},C_\alpha )\).

If K is polyhedral, defined as in (1), we also have that

  1. (iv)

    \(\text {H1-P}({\mathcal {A}})\supseteq \bigcup _{\rho >0}\text {Min}({\mathcal {A}},K_\rho )\).

  2. (v)

    \(\text {H2-P}({\mathcal {A}})=\bigcup _{\rho >0}\text {Min2}({\mathcal {A}},K_\rho )\).

  3. (vi)

    \(\text {H-PS}({\mathcal {A}})=\bigcup _{\rho >0}\text {SMin}({\mathcal {A}},K_\rho )\).

The proof of this proposition follows in a straightforward way from the definitions of \(C_\alpha\) and \(K_\rho\), and from Propositions 1 and 2.

Remark 2

Clearly, the relations of Proposition 4 also hold if we replace \(C_\alpha\) by \(\text{int}C_\alpha \cup \{0\}\) and \(K_\rho\) by \(\text{int}{K_\rho }\cup \{0\}\).
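
In particular, by Proposition 4, exhibiting a single \(\rho >0\) for which \(\hat{A}\) is strictly minimal with respect to \(\preceq _{K_\rho }^l\) already certifies that \(\hat{A}\) is H-PS minimal. For a finite collection of finite sets this is a purely polyhedral test; a minimal sketch (helper names are ours):

```python
import numpy as np

def strictly_minimal_wrt(A_hat, family, P_E, tol=1e-12):
    """Strict minimality with respect to the cone E = {y : P_E y >= 0}:
    there is no B != A_hat in the family with A_hat ⊆ B + E."""
    def lower_less(A, B):                       # A <=^l_E B, i.e. B ⊆ A + E
        return all(any(np.all(P_E @ (b - a) >= -tol) for a in A) for b in B)
    def same_set(A, B):
        key = lambda S: sorted(map(tuple, np.round(np.asarray(S), 12)))
        return key(A) == key(B)
    return not any(lower_less(B, A_hat) and not same_set(B, A_hat)
                   for B in family)

# Sufficient test for H-PS minimality: strict minimality with respect to K_rho
# for one chosen rho > 0 (here rho = 0.5 and K is the Pareto cone, P = identity).
P, rho = np.eye(2), 0.5
P_rho = P + rho * np.ones((2, 2)) @ P           # rows of the matrix defining K_rho
A_hat = [np.array([0.0, 0.0])]
family = [A_hat, [np.array([2.0, -1.0])]]
print(strictly_minimal_wrt(A_hat, family, P_rho))   # True, so A_hat is H-PS minimal
```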

Finally, when collection \({\mathcal {A}}\) is finite, we have the following equivalences.

Proposition 5

Suppose that \({\mathcal {A}}\) is finite and let \({\hat{A}}\in {\mathcal {A}}\). If all the elements of \({\mathcal {A}}\) are compact sets, then

  1. (i)

    \(\hat{A}\) is H-PS minimal if and only if it is strictly minimal.

  2. (ii)

    \(\hat{A}\) is H2-P minimal if and only if it is minimal.

Proof

The implications from left to right in (i) and (ii) are clear (see Proposition 3(i) and (iii)), so we only have to prove the converse ones.

  1. (i)

    \(\Leftarrow\) Reasoning by contradiction, let us suppose that \(\hat{A}\) is strictly minimal but not H-PS minimal in \({\mathcal {A}}\). Therefore, there exists a sequence of sets \(\left\{ B_{n}\right\} _{n=1}^{\infty }\subseteq {\mathcal {A}}\) such that, for every n, \(B_{n}\ne \hat{A}\) and

    $$\begin{aligned} \hat{A}\subseteq B_{n}+C_{\frac{1}{n}}. \end{aligned}$$

    Since \({\mathcal {A}}\) is finite, by considering a subsequence if necessary, we can suppose that \(B_n=B\), for all n, where \(B\in {\mathcal {A}}\), \(B\ne \hat{A}\). Then, we have that \(\hat{A}\subseteq B+C_{\frac{1}{n}}\), for all \(n\in \mathbb {N}\). We claim that \(\bigcap \nolimits _{n}\left( B+C_{\frac{1}{n}}\right) =B+K\). Indeed, inclusion \(\supseteq\) is clear. Reciprocally, let \(d\in \bigcap \nolimits _{n}\left( B+C_{\frac{1}{n}}\right)\). Then, for every n, there exist \(\hat{b}_n\in B\), \(\alpha _n>0\), \(b_n\in {\mathcal {B}}\) and \(t_n\in B_{{\mathbb {R}}^p}\) such that \(d=\hat{b}_n+\alpha _n\left( b_n+\frac{1}{n}t_n\right)\). Since B, \({\mathcal {B}}\) and \(B_{{\mathbb {R}}^p}\) are compact, by taking subsequences if necessary, we can suppose without loss of generality that there exist \(\hat{b}\in B\), \(b\in {\mathcal {B}}\) and \(t\in B_{{\mathbb {R}}^p}\) such that \(\hat{b}_n\rightarrow \hat{b}\), \(b_n\rightarrow b\) and \(t_n\rightarrow t\). If the sequence \(\left\{ \alpha _n \right\}\) is unbounded, there exists a subsequence \(\left\{ \alpha _{n_j} \right\}\) such that \(\alpha _{n_j}\rightarrow +\infty\). Then, on the one hand, \(\frac{1}{\alpha _{n_j}}d\rightarrow 0\), and on the other hand

    $$\begin{aligned}\frac{1}{\alpha _{n_j}}d=\frac{1}{\alpha _{n_j}}\hat{b}_{n_j}+b_{n_j}+\frac{1}{{n_j}}t_{n_j}\rightarrow b,\end{aligned}$$

    so we reach a contradiction, since \(0\notin {\mathcal {B}}\). Thus, \(\left\{ \alpha _n \right\}\) is bounded and there exists \(\alpha \ge 0\) such that, up to a subsequence, \(\alpha _n\rightarrow \alpha\), from which we have that \(d=\hat{b}+\alpha b\in B+K\). Thus, \(\hat{A}\subseteq B+K\), and we obtain a contradiction, since \(\hat{A}\) is strictly minimal.

  2. (ii)

    \(\Leftarrow\) The proof follows analogously to that of (i). In this case, if \(\hat{A}\) is not H2-P minimal, then there exists a sequence \(\left\{ B_{n}\right\} _{n=1}^{\infty }\subseteq {\mathcal {A}}\) such that

    $$\begin{aligned} \hat{A}\subseteq B_{n}+C_{\frac{1}{n}},\text { and } \hat{A}\not \sim _K^l B_n, \end{aligned}$$

    from which we get \(\hat{A}\subseteq B+K\) and \(\hat{A}\not \sim _K^l B\), for some \(B\in {\mathcal {A}}\), \(B\ne \hat{A}\), which contradicts the minimality of \(\hat{A}\).

\(\square\)

Table 1 collects all the relationships between the various notions of Henig-type proper minimality and the well-known notions of minimality used in set optimization.

Table 1 Relationships between Henig-type proper minimality notions

4 Geoffrion proper minimality notion in set optimization

In this section, we are interested in searching for a notion of proper minimality in set optimization that extends the concept due to Geoffrion [1] in multiobjective optimization, based on bounds of trade-off ratios.

The main idea is to avoid some undesirable situations, where the trade-off ratios between conflicting objectives can be unbounded.

In a set minimization framework, we should compare trade-off ratios between variations based on elements chosen in two sets.

Here, we consider that K is polyhedral, defined as in (1).

Given \(k\in \{1,2,\ldots ,m\}\), \(A, B\in {\mathcal {A}}\) and \(b\in B\), we denote

$$\begin{aligned} \Delta _{k}^{+}(B,A,b)&:=\{a\in A\backslash (B+K) : p_k(b-a)>0\},\\ \Delta _{k}^{-}(B,A,b)&:=\{a\in A\backslash (B+K) : p_k(b-a)<0\}. \end{aligned}$$

With the intent of clarifying the ideas behind our approach, we give an informal interpretation of the above sets in the special case of a multiobjective minimization problem, where the ordering cone K is the nonnegative orthant and hence the matrix P is the identity matrix. Given an element \(b \in B\), \(\Delta _{k}^{+}(B,A,b)\) represents all the elements \(a\in A\backslash (B+K)\) such that the kth component of b is larger than the kth component of a, while \(\Delta _{k}^{-}(B,A,b)\) is the set of all the elements \(a\in A\backslash (B+K)\) such that the kth component of b is smaller than the kth component of a. Since we are considering a minimization problem, roughly speaking, we can interpret \(\Delta _{k}^{+}(B,A,b)\) as the set of elements \(a \in A\backslash (B+K)\) such that the kth component of b worsens with respect to the corresponding component of a, while \(\Delta _{k}^{-}(B,A,b)\) is the set of all the elements \(a \in A\backslash (B+K)\) such that the kth component of b improves with respect to the corresponding component of a.

Definition 3

Let \(\hat{A}\in {\mathcal {A}}\). It is said that \(\hat{A}\) is properly strictly minimal in the sense of Geoffrion (shortly G-PS minimal) in \({\mathcal {A}}\) if

  1. 1.

    \(\hat{A}\) is strictly minimal in \({\mathcal {A}}\), and

  2. 2.

    there exists \(M>0\) such that, for every \(B\in {\mathcal {A}}\), with \(\hat{A}\nsubseteq B+K\), there is \(\hat{a}\in \hat{A}\backslash (B+K)\) such that, for all indices i and all elements \(\hat{b}\in {B}\) with \(\hat{a}\in \Delta _{i}^-(B,\hat{A},\hat{b})\), there is an index j with \(\hat{a}\in \Delta _{j}^+(B,\hat{A},\hat{b})\) and

    $$\begin{aligned} \frac{p_i(\hat{a}-\hat{b})}{p_j(\hat{b}-\hat{a})}\le M. \end{aligned}$$

The above definition can be interpreted as a refinement of strict minimality in the sense of the boundedness of some trade-off ratios. For the convenience of the reader, we focus again on the special case of a multiobjective minimization problem, where \(K={\mathbb {R}}^p_+\) and hence P is the identity matrix. Let \(B\in {\mathcal {A}}\), with \(\hat{A}\nsubseteq B+K\). What the definition says is that if the strictly minimal set \(\hat{A}\) is G-PS minimal, then we can always find \(\hat{a}\in \hat{A}\backslash (B+K)\) in such a way that if there exists an element \(b\in B\) such that the ith component of b is smaller than the ith component of \(\hat{a}\) (hence better, in the minimization framework), then there is another index \(j\ne i\) for which the jth component of b is bigger than the jth component of \(\hat{a}\) and the trade-off between the ith and jth components is bounded.
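
For a finite collection of finite sets, condition 2 of Definition 3 can be checked exhaustively: for each B with \(\hat{A}\nsubseteq B+K\) one searches for an element \(\hat{a}\in \hat{A}\backslash (B+K)\) whose unfavourable components are always compensated, and records the largest trade-off ratio needed. The following minimal sketch (in Python; the function name geoffrion_bound and its helpers are our own choices) returns such a bound M when it exists:

```python
import numpy as np

def in_upper_set(y, B, P, tol=1e-12):
    """y in B + K, with K = {v : P v >= 0} and finite B."""
    return any(np.all(P @ (y - b) >= -tol) for b in B)

def geoffrion_bound(A_hat, family, P, tol=1e-12):
    """Return a finite bound M realizing condition 2 of Definition 3 for a
    finite collection of finite sets, or None if no such bound exists."""
    M = 0.0
    for B in family:
        if all(in_upper_set(a, B, P) for a in A_hat):   # A_hat ⊆ B + K: nothing to check
            continue
        candidates = [a for a in A_hat if not in_upper_set(a, B, P)]
        best = None                                      # smallest bound over admissible a
        for a in candidates:
            worst, ok = 0.0, True                        # bound needed by this a
            for b in B:
                d = P @ (b - a)                          # components p_k(b - a)
                for i in np.where(d < -tol)[0]:          # a in Delta_i^-(B, A_hat, b)
                    plus = np.where(d > tol)[0]          # indices j with a in Delta_j^+
                    if plus.size == 0:
                        ok = False
                        break
                    worst = max(worst, min(-d[i] / d[j] for j in plus))
                if not ok:
                    break
            if ok:
                best = worst if best is None else min(best, worst)
        if best is None:
            return None                                  # condition 2 fails for this B
        M = max(M, best)
    return M

# Pareto cone in the plane: the only required trade-off ratio is bounded by 0.5.
P = np.eye(2)
A_hat = [np.array([0.0, 0.0])]
B = [np.array([-1.0, 2.0])]
print(geoffrion_bound(A_hat, [A_hat, B], P))   # 0.5 = p_1(a - b) / p_2(b - a)
```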

We can also introduce an alternative definition of proper minimality in the sense of Geoffrion by replacing the strict minimality condition in Definition 3 with minimality.

Definition 4

Let \(\hat{A}\in {\mathcal {A}}\). It is said that \(\hat{A}\) is properly minimal in the sense of Geoffrion (shortly G-P minimal) in \({\mathcal {A}}\) if \(\hat{A}\) is minimal in \({\mathcal {A}}\), and condition 2 of Definition 3 is verified.

Remark 3

  1. (a)

    It is clear that if \(\hat{A}\in {\mathcal {A}}\) is G-PS minimal, then it is G-P minimal.

  2. (b)

    If \(K={\mathbb {R}}_+^p\) and all the elements of \({\mathcal {A}}\) are singletons, the notions of G-PS and G-P minimality reduce to the concept of proper minimality due to Geoffrion [1] in multiobjective optimization.

Next, we prove the equivalences between the notions of Henig and Geoffrion proper minimality studied above. We recall that in the setting of multiobjective optimization, with respect to the Pareto cone, both notions provide the same set of proper minimal points (see, for instance, [21]), so it is natural to expect the equivalence of the respective notions in the extended framework of set optimization.

Theorem 1

Let \(\hat{A}\in {\mathcal {A}}\). If \(\hat{A}\) is H-PS minimal in \({\mathcal {A}}\), then \(\hat{A}\) is G-PS minimal in \({\mathcal {A}}\).

Proof

Suppose by reasoning to the contrary that \(\hat{A}\) is H-PS minimal in \({\mathcal {A}}\), but not G-PS minimal. On the one hand, by Proposition 3(i), we have that \(\hat{A}\) is strictly minimal. On the other hand, by Proposition 4(vi) there exists \(\rho >0\) such that \(\hat{A}\in \text {SMin}({\mathcal {A}},K_\rho )\), i.e.,

$$\begin{aligned} \hat{A}\nsubseteq B+K_\rho ,\,\forall B\in {\mathcal {A}}, B\ne \hat{A}. \end{aligned}$$

Fix \(M>\frac{1}{\rho }+m\). Since \(\hat{A}\) is not G-PS minimal, there exists \(B\in {\mathcal {A}}\backslash \{\hat{A}\}\), with \(\hat{A}\nsubseteq B+K\), such that for all \(a\in \hat{A}\backslash (B+K)\) there are an index \(i=i(a)\in \{1,2,\ldots ,m\}\) and an element \(b=b(a)\in B\), with \(a\in \Delta _{i}^-(B,\hat{A},b)\), such that

$$\begin{aligned} \frac{p_i(a-b)}{p_j(b-a)}> M,\,\forall j\in \{1,2,\ldots ,m\}\text { such that }a\in \Delta _{j}^+(B,\hat{A},b). \end{aligned}$$
(4)

Fix \(\hat{a}\in \hat{A}\backslash (B+K_\rho )\subseteq \hat{A}\backslash (B+K)\) (such an element \(\hat{a}\) exists since \(\hat{A}\nsubseteq B+K_\rho\)). For this \(\hat{a}\), there exist \(\hat{b}\in B\) and \(\hat{i}\in \{1,2,\ldots ,m\}\) satisfying the conditions above. Consider the vector \(\Gamma :=\hat{b}-\hat{a}\).

Without loss of generality, we can suppose that \(\hat{i}=1\) and that the components of \(\Gamma\) are ordered according to their signs, as follows:

$$\begin{aligned} p_{s}(\Gamma )\text { is }{\left\{ \begin{array}{ll} <0 &{} \text{if}\,s=1\\ >0 &{} \text{if}\,s=2,...,n\\ \le 0 &{} \text{if}\,s=n+1,...,m \end{array}\right. } \end{aligned}$$

for some \(2\le n\le m\) (observe that \(p_s(\Gamma )>0\) for some \(s\in \{2,\ldots ,m\}\), since \(\hat{a}\notin B+K\)). We note that the ith component of \((P+\rho UP)(\Gamma )\) is given by \(p_i(\Gamma )+\rho \sum _{l=1}^m p_l(\Gamma )\). Then, we have that

$$\begin{aligned} \sum _{i=1}^m p_i(\Gamma )\le \sum _{i=1}^n p_i(\Gamma )&< p_1(\Gamma )-(n-1)\frac{p_1(\Gamma )}{M}=p_1(\Gamma )\left( 1-\frac{n-1}{M}\right) \\ {}&<p_1(\Gamma )\left( 1-\frac{m}{M}\right) <0. \end{aligned}$$

Hence,

$$\begin{aligned} p_j(\Gamma )+\rho \sum _{i=1}^m p_i(\Gamma )&<\rho p_1(\Gamma )\left( 1-\frac{m}{M}\right)<0, \text { for }j=1 \text { and }j\in \{n+1,\ldots ,m\}\\ p_j(\Gamma )+\rho \sum _{i=1}^m p_i(\Gamma )&<-\frac{p_1(\Gamma )}{M}+\rho p_1(\Gamma )\left( 1-\frac{m}{M}\right) \\ {}&= p_1(\Gamma )\left( -\frac{1}{M}+\rho \left( 1-\frac{m}{M}\right) \right) <0, \text { for }j\in \{2,\ldots ,n\}. \end{aligned}$$

Therefore, by (2), we have that \(\hat{b}-\hat{a}\in -\text {int}K_\rho\), so \(\hat{a}\in B+K_\rho\) (in fact, since \(\hat{a}\) is a general element of \(\hat{A}\backslash (B+K_\rho )\), we have that \(\hat{A}\subseteq B+K_\rho\)), and we reach a contradiction. The proof is complete. \(\square\)

Theorem 2

Let \(\hat{A}\in {\mathcal {A}}\). If \(\hat{A}\) is G-PS minimal in \({\mathcal {A}}\), then \(\hat{A}\) is H-PS minimal in \({\mathcal {A}}\).

Proof

By reasoning to the contrary, suppose that \(\hat{A}\) is G-PS minimal but not H-PS minimal in \({\mathcal {A}}\). Hence, since \(\hat{A}\) is strictly minimal but not H-PS minimal, for every \(l\in \mathbb {N}\), there exists \(B_{l}\in {\mathcal {A}}\) such that

$$\begin{aligned} \hat{A}\subseteq B_l+K_{\varepsilon _l}\quad \text{and}\quad \hat{A}\nsubseteq B_l+K, \end{aligned}$$
(5)

where \(\{\varepsilon _l\}\) is a sequence of positive real numbers that tends to zero.

On the other hand, since \(\hat{A}\) is G-PS minimal, there exists \(M>0\) such that for every \(B_l\) there is \(\hat{a}_l\in \hat{A}\backslash (B_l+K)\) satisfying the conditions of statement 2 of Definition 3.

Taking into account statement (5), it follows that for every \(\hat{a}_{l}\) there exists \(\hat{b}_{l}=\hat{b}_l(\hat{a}_{l})\in B_l\) such that \(\Gamma _{l}:=\hat{b}_{l}-\hat{a}_{l}\in -K_{\varepsilon _l}\setminus \{0\}\) and \(b-\hat{a}_{l}\notin \left( -K\right)\) for every \(b\in B_l\). Then, it holds that

$$\begin{aligned} p_i(\Gamma _l)+\varepsilon _l\sum _{k=1}^m p_k(\Gamma _l)\le 0,\,\,\forall i\in \{1,2,\ldots ,m\}. \end{aligned}$$
(6)

Since \(S_{s^*}^{\varepsilon _l}\) is a base of \(K_{\varepsilon _l}\), there exist \(\lambda _l>0\) and \(s_l\in S_{s^*}^{\varepsilon _l}\) such that

$$\begin{aligned} \Gamma _l=-\lambda _l s_l. \end{aligned}$$
(7)

Since \(s_l\in S^{\varepsilon _l}_{s^*}\), by (3) we have \(s^*(s_l)=p_1(s_l)+p_2(s_l)+\cdots +p_m(s_l)=1\), so there exists at least one index \(i\in \{1,2,\ldots ,m\}\) such that \(p_i(s_l)\ge \frac{1}{m}\). Without loss of generality, by considering a subsequence if necessary, we can assume that \(p_1(s_{l})\ge \frac{1}{m}\), for all \(l\in \mathbb {N}\). Thus, \(p_1(\Gamma _l)\le -\lambda _l\frac{1}{m}<0\), for all \(l\in \mathbb {N}\).

Since \(\Gamma _{l}\notin \left( -K\right)\) for every \(l\in \mathbb {N}\), there exists a set \(\emptyset \ne J(l)\subseteq \left\{ 2,...,m\right\}\) such that \(p_{j}(\Gamma _{l})>0\) if and only if \(j\in J(l)\), for every \(l\in \mathbb {N}\). Since the set \(\left\{ 2,...,m\right\}\) is finite, there exist at least one increasing sequence of integers \(\left\{ l_{u}\right\} _{u=1}^{+\infty }\) and a nonempty subset \(J_{0}\subseteq \{2,\dots ,m\}\) such that \(J(l_{u})=J_{0}\) for every \(u\in \mathbb {N}\). Hence, by (6) and (7) we have \(0<p_{j}(\Gamma _{l_{u}})\le \varepsilon _{l_u}\lambda _{l_u}\sum _{i=1}^m p_i(s_{l_u})=\varepsilon _{l_u}\lambda _{l_u}\) for every \(j\in J_{0}\) and for every \(u\in \mathbb {N}\).

Then, for all \(u\in \mathbb {N}\), it follows on the one hand that \(\hat{a}_{l_u}\in \Delta _{1}^{-}(B_{l_u},\hat{A},\hat{b}_{l_u})\), \(\hat{a}_{l_u}\in \Delta _{j}^{+}(B_{l_u},\hat{A},\hat{b}_{l_u})\) if and only if \(j\in J_0\), and for all \(j\in J_0\)

$$\begin{aligned} -\frac{p_{1}(\Gamma _{l_{u}})}{p_{j}(\Gamma _{l_{u}})}\ge \frac{\lambda _{l_u}\frac{1}{m}}{\varepsilon _{l_u}\lambda _{l_u}}=\frac{1}{m\varepsilon _{l_u}} {\mathop {\longrightarrow }\limits ^{u\rightarrow +\infty }}+\infty , \end{aligned}$$

but, on the other hand, by condition 2 of Definition 3, for every \(u\in \mathbb {N}\) there is some \(j\in J_{0}\) such that

$$\begin{aligned} -\frac{p_{1}(\Gamma _{l_{u}})}{p_{j}(\Gamma _{l_{u}})}\le M, \end{aligned}$$

and we obtain a contradiction. The proof is complete. \(\square\)

Then, from Theorems 1 and 2 we deduce the following result.

Theorem 3

Let \(\hat{A}\in {\mathcal {A}}\). It follows that \(\hat{A}\) is H-PS minimal in \({\mathcal {A}}\) if and only if \(\hat{A}\) is G-PS minimal in \({\mathcal {A}}\).

Thus, we have proved that the notions of strict proper minimality in the senses of Henig and Geoffrion are equivalent. The question that arises now is what happens with the notion given in Definition 4. In the following results, we prove that this concept is equivalent to the H2-P minimality notion.

Theorem 4

Let \(\hat{A}\in {\mathcal {A}}\). If \(\hat{A}\) is H2-P minimal in \({\mathcal {A}}\), then \(\hat{A}\) is G-P minimal in \({\mathcal {A}}\).

Proof

By reasoning to the contrary, suppose that \(\hat{A}\) is H2-P minimal in \({\mathcal {A}}\), but not G-P minimal. By Proposition 3(iii) we have that \(\hat{A}\) is minimal and by Proposition 4(v) there exists \(\rho >0\) such that

$$\begin{aligned} \text {for any } B\in {\mathcal {A}}, \text { if } \hat{A}\subseteq B+K_\rho , \text { then } \hat{A}\backsim _K^l B. \end{aligned}$$
(8)

Fix \(M>\frac{1}{\rho }+m\). Since \(\hat{A}\) is not G-P minimal, there exists \(B\in {\mathcal {A}}\), with \(\hat{A}\nsubseteq B+K\), such that for all \({a}\in \hat{A}\backslash (B+K)\) there are an index \(i=i({a})\in \{1,2,\ldots ,m\}\) and an element \(b=b(a)\in B\), with \(a\in \Delta _{i}^-(B,\hat{A},b)\), such that statement (4) holds.

It follows that \(\hat{A}\nsubseteq B+K_\rho\); otherwise, by (8), we would have in particular that \(\hat{A}\subseteq B+K\), reaching a contradiction. The proof then continues in the same way as the proof of Theorem 1. \(\square\)

Theorem 5

Let \(\hat{A}\in {\mathcal {A}}\). If \(\hat{A}\) is G-P minimal in \({\mathcal {A}}\), then \(\hat{A}\) is H2-P minimal in \({\mathcal {A}}\).

Proof

Suppose that \(\hat{A}\) is G-P minimal but not H2-P minimal in \({\mathcal {A}}\). Following the proof of Theorem 2, we have in this case a sequence \(B_{l}\in {\mathcal {A}}\) such that

$$\begin{aligned} \hat{A}\subseteq B_l+K_{\varepsilon _l}\quad \text{and}\quad B_{l}\not \sim _K^l \hat{A}. \end{aligned}$$

If \(\hat{A}\subseteq B_l+K\), then since \(\hat{A}\) is minimal, we have \(B_l\subseteq \hat{A}+K\), which contradicts \(B_{l}\not \sim _K^l \hat{A}\), so \(\hat{A}\nsubseteq B_l+K\) and the proof continues as in Theorem 2. \(\square\)

Then, by Theorems 4 and 5 we obtain the next result.

Theorem 6

Let \(\hat{A}\in {\mathcal {A}}\). It follows that \(\hat{A}\) is G-P minimal in \({\mathcal {A}}\) if and only if \(\hat{A}\) is H2-P minimal in \({\mathcal {A}}\).

5 Scalarization for proper minimality in set optimization

In this section we present a characterization for H-PS minimal elements through nonlinear scalarization, without any convexity assumption, when the ordering cone K is polyhedral, defined as in (1).

Let \(\rho >0\). We recall that the ith row of the matrix \(P+\rho UP\) defining the dilating cone \(K_\rho\) is given by the vector \(p^\rho _i:=p_i+\rho \sum _{l=1}^m p_l\), for \(i\in \{1,2,\ldots ,m\}\).

Theorem 7

Let \(\hat{A}\in {\mathcal {A}}\). It follows that \(\hat{A}\) is H-PS minimal in \({\mathcal {A}}\) if and only if there exists \(\rho >0\) such that for every \(B\in {\mathcal {A}}\), \(B\ne \hat{A}\), there exists \(a\in \hat{A}\backslash B\) satisfying that

$$\begin{aligned} \inf _{b\in B}\max _{i\in \{1,2,\ldots ,m\}}\left\{ {p^\rho _i(b-{a})}\right\} \ge 0. \end{aligned}$$

Proof

By Proposition 4(vi) and Remark 2, we have that \(\hat{A}\) is H-PS minimal if and only if there exists \(\rho >0\) such that

$$\begin{aligned} \hat{A}\nsubseteq B+(\text{int} K_\rho \cup \{0\}), \text { for all } B\in {\mathcal {A}},\, B\ne \hat{A}. \end{aligned}$$

The above is equivalent to saying that for every \(B\in {\mathcal {A}}\), \(B\ne \hat{A}\), there exists \(a\in \hat{A}\backslash B\) such that

$$\begin{aligned} b-{a}\notin -\text{int} K_\rho ,\text { for all } {b}\in B. \end{aligned}$$
(9)

From the definition of \(K_\rho\), it is easy to see that statement (9) is equivalent to saying that for every \(b\in B\), there exists \(i\in \{1,2,\ldots ,m\}\) such that \(p_i^\rho (b-a)\ge 0\), which is equivalent to

$$\begin{aligned} \max _{i\in \{1,2,\ldots ,m\}}\left\{ {p^\rho _i(b-{a})}\right\} \ge 0,\,\text { for all }b\in B, \end{aligned}$$

which completes the proof. \(\square\)

Remark 4

Let us note that when all the elements of \({\mathcal {A}}\) are singletons, i.e., when we consider a vector optimization setting, by Theorem 7 it follows that \(\hat{a}\in {\mathcal {A}}\) is H-PS minimal (that is, Henig proper minimal [7]) if and only if

$$\begin{aligned} \min _{b\in {\mathcal {A}}}\max _{i\in \{1,2,\ldots ,m\}}\left\{ {p^\rho _i(b-\hat{a})}\right\} =0. \end{aligned}$$

This result was proved in [4, Theorem 3.5] for a multiobjective optimization problem (i.e., when \({\mathcal {A}}=f(S)\), where \(f:{\mathbb {R}}^n\rightarrow {\mathbb {R}}^p\) is the multiobjective function and S is the feasible set).
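
The scalarization of Theorem 7 is easy to evaluate when the sets are finite, since the infimum becomes a minimum over finitely many points. A minimal sketch (the function name scalarization_value is ours), which in the singleton case reproduces the quantity of Remark 4:

```python
import numpy as np

def scalarization_value(a, B, P, rho):
    """inf_{b in B} max_i p_i^rho(b - a), where p_i^rho are the rows of
    P + rho * U * P and B is a finite set."""
    m = P.shape[0]
    P_rho = P + rho * np.ones((m, m)) @ P
    return min(float(np.max(P_rho @ (b - a))) for b in B)

# Vector case (singletons), Pareto cone in the plane, rho = 0.5:
# a_hat = (0, 0) is Henig properly minimal and the min-max value is 0.
P, rho = np.eye(2), 0.5
points = [np.array([0.0, 0.0]), np.array([2.0, -1.0]), np.array([1.0, 1.0])]
a_hat = points[0]
print(scalarization_value(a_hat, points, P, rho))   # 0.0, as in Remark 4
```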

6 Conclusions

In this work we have first generalized the notion of proper minimality introduced by Henig to the case of a set optimization problem in finite dimensional spaces. In this vein, we have proposed various definitions by means of dilating cones and a set order relation, and we have analyzed their relationships. Subsequently, we have introduced two Geoffrion-type notions of proper minimality in set optimization, based on the boundedness of trade-off ratios, also for the general case when the ordering cone is polyhedral, and we have proved that they are equivalent to the Henig-type concepts, as happens in the vector optimization framework. All these notions reduce to the concepts given by Henig and Geoffrion, respectively, when the vector optimization setting is considered.

Finally, we have obtained a characterization of strict proper minimality based on nonlinear scalarization techniques, without considering any convexity hypothesis.

To our knowledge, this is the first attempt in the literature to introduce a notion of proper minimality in the sense of Geoffrion, in the framework of set optimization.

The study presented in this work may be considered a starting point for further investigation, for instance the study of optimality conditions through linear and nonlinear scalarization for proper minimal solutions of a set-valued optimization problem. Another future research line related to this work is the study of solution concepts in the spirit of proper minimality for set optimization problems based on lattice structures, associated with infimal and supremal sets, which is an alternative approach to the set criterion considered in this paper (see, for instance, [18, 19]).