1 Introduction

In optimization theory, the coefficients of extremum problems are usually considered to be deterministic values and, therefore, the corresponding solutions are also precise. In reality, this assumption is not satisfied by the great majority of real-life engineering and economic problems, as real-life situations are full of uncertainty and risk. Uncertainty can be handled in various ways, for example by stochastic processes or by fuzzy numbers. However, when data are insufficient, it is sometimes hard to find an appropriate membership function or probability distribution. In recent years, some deterministic frameworks for optimization have been studied to overcome the drawbacks of stochastic optimization and fuzzy optimization. One of these deterministic methods is interval-valued optimization, which provides an alternative way of incorporating uncertainty into extremum problems. The coefficients in interval-valued optimization are assumed to be closed intervals. Various solution concepts have been introduced for interval-valued optimization problems. One of them is LU-optimality, which was originally defined by Wu [1, 2] and follows from the concept of a nondominated solution employed in vector optimization problems. The concept of LU-optimality was used in many works concerning optimality conditions and duality results for interval-valued optimization problems (see, for example, Ahmad et al. [3], Jayswal et al. [4, 5], Sun and Wang [6,7,8], Zhang et al. [9], Zhou and Wang [10], and others).

Several studies have developed efficient and effective optimization techniques for solving optimization problems with interval-valued objectives. Steuer [11] proposed three algorithms, called the F-cone algorithm, E-cone algorithm and all emanating algorithms, to solve linear programming problems with interval-valued objective functions. Ishibuchi and Tanaka [12] focused on an extremum problem whose objective function has interval coefficients and proposed an ordering relation between two closed intervals by considering the maximization and minimization problems separately. In [13], Chanas and Kuchta presented an approach that unifies the solution methods proposed by Ishibuchi and Tanaka [12]. Jiang et al. [14] proposed a method for solving the nonlinear interval number programming problem with uncertain coefficients in both the nonlinear objective function and the nonlinear constraints. Gabrel et al. [15] introduced two different methods for solving interval linear programming problems. In [16], Hladík proposed a technique to determine the optimal bounds for nonlinear mathematical programming problems with interval data that ensures the exact bounds enclose the set of all optimal solutions. Chalco-Cano et al. [17] developed a method for solving an optimization problem with an interval-valued objective function by considering order relations between two closed intervals. Recently, Karmakar and Bhunia [18] proposed an alternative optimization technique for solving interval objective constrained optimization problems via multiobjective programming.

In recent years, however, considerable attention has been given to devising methods for solving nonlinear extremum problems using exact penalty functions. Exact penalty methods for finding an optimal solution of a constrained optimization problem are based on the construction of a function whose unconstrained minimizing points are also optimal solutions of the constrained extremum problem. Nondifferentiable exact penalty functions were introduced for the first time by Eremin [19] and Zangwill [20]. In exact penalty function methods, the given constrained optimization problem is replaced by an unconstrained optimization problem whose objective function is the sum of a certain “merit” function (which reflects the objective function of the given extremum problem) and a penalty term, which reflects the constraint set. The merit function is chosen, in general, as the original objective function, while the penalty term is obtained by multiplying a suitable function, which represents the constraints, by a positive parameter c, called the penalty parameter.

The most frequently used exact penalty function is the exact absolute value penalty function, also known as the exact \(l_{1}\) penalty function, because its penalty term involves the \(l_{1}\) norm of the violated constraints. The exact \(l_{1}\) penalty function method has been widely studied for solving constrained optimization problems (see, for instance, Antczak [21, 22], Bazaraa et al. [23], Bertsekas [24], Bertsekas and Koksal [25], Bonnans et al. [26], Charalambous [27], Di Pillo and Grippo [28], Fletcher [29, 30], Janesch and Santos [31], Mangasarian [32], Mongeau and Sartenaer [33], Nocedal and Wright [34], Peressini et al. [35], Wang and Liu [36], and others).

In the paper, we use the classical exact \(l_{1}\) penalty function method for solving a nondifferentiable interval-valued optimization problem with both inequality and equality constraints. The main purpose of this work is to relate a LU-optimal solution of the original nondifferentiable minimization problem involving locally Lipschitz functions to a LU-minimizer of its associated penalized optimization problem with the interval-valued exact \(l_{1}\) penalty function constructed in the used approach. In the context of this analysis, a threshold of the penalty parameter is given such that, for any value of the penalty parameter exceeding this threshold, a LU-optimal solution of the considered interval-valued minimization problem is also a LU-minimizer of its associated penalized optimization problem with the interval-valued exact \(l_{1}\) penalty function. More specifically, we first prove that a Karush-Kuhn-Tucker point of the considered nondifferentiable interval-valued constrained optimization problem is also a LU-minimizer of its associated penalized optimization problem with the interval-valued exact \(l_{1}\) penalty function. This result is established under convexity hypotheses for all penalty parameters c exceeding the threshold value, which is expressed as a function of the Lagrange multipliers. An immediate corollary of this result is that a LU-optimal solution of the given interval-valued constrained extremum problem is a LU-minimizer of its penalized optimization problem with the interval-valued exact \(l_{1}\) penalty function. We also prove the converse result under convexity hypotheses. Namely, we establish that a LU-minimizer of the penalized optimization problem with the interval-valued exact \(l_{1}\) penalty function is also a LU-optimal solution of the considered interval-valued extremum problem.
The results established in the paper are illustrated by suitable examples of optimization problems with interval-valued objective functions solved by the exact \(l_{1}\) penalty function method. In this way, we extend and improve the results established by Jayswal and Banerjee [37], who used the exact \(l_{1}\) penalty function method for solving differentiable interval-valued optimization problems with inequality constraints only.

2 Preliminaries and Interval-Valued Optimization

In this section, we provide some definitions and some results that we shall use in the sequel. Let \(\textit{IR}_+^n =\left\{ { \left( {x_1 ,\ldots ,x_n } \right) \in \textit{IR}^{n}:x_i \ge \;0,\;i=1,\ldots ,n} \right\} \).

Let \(I(\textit{IR})\) be the class of all closed and bounded intervals in \(\textit{IR}\). Throughout this paper, when we say that A is a closed interval, we mean that A is also bounded in \(\textit{IR}\). If A is a closed interval, we use the notation \({A} = [{a}^{\mathrm{L}},{a}^{\mathrm{U}}]\), where \({a}^{\mathrm{L}}\) and \({a}^\mathrm{U}\) denote the lower and upper bounds of A, respectively. In other words, if \({A} = [{a}^{\mathrm{L}},{a}^{\mathrm{U}}] \in I(\textit{IR})\), then \({A} = [{a}^{\mathrm{L}},{a}^{\mathrm{U}}] = \left\{ {x\in \textit{IR}:a^\mathrm{L}\le x\le a^\mathrm{U}} \right\} \). If \(a^{\mathrm{L}} = a^{\mathrm{U}} = a\), then \(A = [a,a] = a\) is a real number.

Let \({A} = [{a}^{\mathrm{L}},{a}^{\mathrm{U}}]\) and \({B} = [{b}^{\mathrm{L}},{b}^{\mathrm{U}}]\). Then, by definition, we have

  (i) \(A + B = \{ a + b : a\in A\hbox { and }b\in B \} = [{a}^{\mathrm{L}} + {b}^{\mathrm{L}}, {a}^{\mathrm{U}} + {b}^{\mathrm{U}}]\),

  (ii) \(-A = \{-a : a \in A \} = [-a^\mathrm{U},-a^\mathrm{L}]\),

  (iii) \(A - B = \{ a - b : a \in A\hbox { and }b \in B \} = [{a}^{\mathrm{L}} - {b}^{\mathrm{U}}, {a}^{\mathrm{U}} - {b}^{\mathrm{L}}]\),

  (iv) \(k + A = \{ k + a : a \in A \} = [{k} + {a}^{\mathrm{L}}, {k} + {a}^{\mathrm{U}}]\), where k is a real number,

  (v) \(kA=\left\{ {{\begin{array}{lll} {\left[ {ka^\mathrm{L},ka^\mathrm{U}} \right] }&{} \mathrm{if}&{} {k>0} \\ {\left[ {ka^\mathrm{U},ka^\mathrm{L}} \right] }&{} \mathrm{if}&{} {k\le 0} \\ \end{array} }} \right. ,\) where k is a real number.

For more details on the topic of interval analysis, we refer, for example, to Alefeld and Herzberger [38].
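The arithmetic rules (i)–(v) above can be sketched directly in code. The following is a minimal illustration; the `Interval` class and its method names are our own, not from the paper or any interval library:

```python
# Minimal sketch of closed-interval arithmetic, following rules (i)-(v).
# The Interval class and its names are illustrative only.
class Interval:
    def __init__(self, lo, up):
        assert lo <= up, "lower bound must not exceed upper bound"
        self.lo, self.up = lo, up

    def __add__(self, other):          # (i)  A + B
        return Interval(self.lo + other.lo, self.up + other.up)

    def __neg__(self):                 # (ii) -A
        return Interval(-self.up, -self.lo)

    def __sub__(self, other):          # (iii) A - B = A + (-B)
        return self + (-other)

    def __radd__(self, k):             # (iv) k + A for a real scalar k
        return Interval(k + self.lo, k + self.up)

    def __rmul__(self, k):             # (v)  kA for a real scalar k
        if k > 0:
            return Interval(k * self.lo, k * self.up)
        return Interval(k * self.up, k * self.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.up}]"

A, B = Interval(1, 3), Interval(2, 5)
print(A + B)    # [3, 8]
print(A - B)    # [-4, 1]
print(-2 * A)   # [-6, -2]
```

Note that subtraction is defined via (ii) and (i), so `A - B` widens the interval: \([1,3]-[2,5]=[-4,1]\), not \([-1,-2]\).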

In interval mathematics, an order relation is often used to rank interval numbers; it expresses that one interval number is better than another, rather than larger. For \(A = [a^\mathrm{L},a^\mathrm{U}]\) and \(B = [b^\mathrm{L},b^\mathrm{U}]\), we write \(A\le _{\mathrm{LU}} B\) if and only if \(a^\mathrm{L}\le b^\mathrm{L}\) and \(a^\mathrm{U}\le b^\mathrm{U}\). It is easy to see that \(\le _{\mathrm{LU}}\) is a partial order on \(I(\textit{IR})\). Also, we write \(A<_{\mathrm{LU}} B\) if and only if \(A \le _{\mathrm{LU}} B\) and \(A \ne B\). Equivalently, \(A<_{\mathrm{LU}} B\) if and only if (\(a^\mathrm{L}<b^\mathrm{L}, a^\mathrm{U}\le b^\mathrm{U}\)) or (\(a^\mathrm{L}\le b^\mathrm{L}, a^\mathrm{U}<b^\mathrm{U}\)) or (\(a^\mathrm{L}<b^\mathrm{L}, a^\mathrm{U}<b^\mathrm{U}\)).
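The LU order is easy to state as two endpoint comparisons; a short sketch (intervals represented as plain `(lo, up)` tuples, function names our own):

```python
# Sketch of the LU order on intervals A = (aL, aU), B = (bL, bU),
# represented as plain tuples; the function names are illustrative.
def le_LU(A, B):
    """A <=_LU B  iff  aL <= bL and aU <= bU (a partial order)."""
    return A[0] <= B[0] and A[1] <= B[1]

def lt_LU(A, B):
    """A <_LU B  iff  A <=_LU B and A != B."""
    return le_LU(A, B) and A != B

print(le_LU((1, 4), (2, 5)))   # True
print(lt_LU((1, 4), (1, 4)))   # False: strict order excludes equality
print(le_LU((1, 6), (2, 5)))   # False: these intervals are incomparable
```

The last call illustrates why \(\le_{\mathrm{LU}}\) is only a partial order: \([1,6]\) and \([2,5]\) are not comparable in either direction.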

Let X be a nonempty subset of \(\textit{IR}^{n}\). A function \(\varphi \) : \(X \rightarrow I(\textit{IR})\) is called an interval-valued function if \(\varphi (x) = [\varphi ^\mathrm{L}(x),\varphi ^\mathrm{U}(x)]\) with \(\varphi ^\mathrm{L}, \varphi ^\mathrm{U} : X \rightarrow \textit{IR}\) such that \(\varphi ^\mathrm{L}(x)\;\le \;\varphi ^\mathrm{U}(x)\) for each \(x\, \in X\, \).

It is well known that a function \(f : X \rightarrow \textit{IR}\) defined on a nonempty convex set \(X\subset {\textit{IR}}^{n}\) is said to be convex provided that, for all \(x, y\in X\) and any \(\alpha \in [0,1]\), one has

$$\begin{aligned} f(\alpha y+(1-\alpha )x)\;\le \;\alpha f(y)+(1-\alpha )f(x). \end{aligned}$$

Definition 2.1

[39] The subdifferential of a convex function \(f:\textit{IR}^{n}\rightarrow { IR}\) at \(x\in \textit{IR}^{n}\) is defined as follows

$$\begin{aligned} \partial f(x):=\{ \xi \in \textit{IR}^{n}:f(y)-f(x)\ge \;\left\langle {\xi ,y-x} \right\rangle \;\;\forall y\in \textit{IR}^{n}\}. \end{aligned}$$

Remark 2.1

[39] It follows from the definition of the subdifferential of a convex function \(f:\textit{IR}^{n}\rightarrow \textit{IR}\) at x that the inequality

$$\begin{aligned} f(y)-f(x)\ge \left\langle {\xi ,y-x} \right\rangle ,\;\;\forall \xi \in \partial f(x) \end{aligned}$$
(1)

holds for all \(y\in \textit{IR}^{n}\), where \(\partial f(x)\) denotes the subdifferential of f at x.

It is well known (Clarke [40]) that if a locally Lipschitz function f attains its (local) minimum at \(x \in { \textit{IR}}^{n}\), then \(0\in \partial {f(x)}\). If f is a convex function, then the above condition is also sufficient and the minimum is global.

Similar to the definition of convexity for a real-valued function, the notion of convexity is also defined for an interval-valued function (see, for instance, [1, 2]) as follows:

Let X be a nonempty convex subset of \({\textit{IR}}^{n}\) and \(f : X \rightarrow I(\textit{IR})\) be an interval-valued function defined on X. It is said that f is convex on X iff the inequality

$$\begin{aligned} f(\alpha y+(1-\alpha )x)\le _{\mathrm{LU}} \;\alpha f(y)+(1-\alpha )f(x) \end{aligned}$$
(2)

holds for all \(x, y \in X\) and any \(\alpha \in \) [0,1].

The following useful result follows from (2) immediately (see, for instance, [1, 17]).

Proposition 2.1

Let X be a nonempty convex subset of \(\textit{IR}^{n}\) and \(f : X \rightarrow I(\textit{IR})\) be an interval-valued function defined on X. The interval-valued function f is LU-convex at \(x \in X\) if and only if the real-valued functions \(f^\mathrm{L}\) and \(f^\mathrm{U}\) are convex at x.

The following result follows from Proposition 2.1 and Remark 2.1.

Remark 2.2

If \(f : X \rightarrow I(\textit{IR})\) is convex at \(x \in X\) on X, then the following inequalities

$$\begin{aligned}&f^\mathrm{L}(y)-f^\mathrm{L}(x)\ge \;\left\langle {\xi ^\mathrm{L},y-x} \right\rangle ,\;\;\forall \xi ^\mathrm{L}\in \partial f^\mathrm{L}(x), \end{aligned}$$
(3)
$$\begin{aligned}&f^\mathrm{U}(y)-f^\mathrm{U}(x)\;\ge \left\langle {\;\xi ^\mathrm{U},y-x} \right\rangle ,\;\;\forall \xi ^\mathrm{U}\in \partial f^\mathrm{U}(x) \end{aligned}$$
(4)

hold for all \(y \in X\), where \(\partial f^\mathrm{L}(x)\) and \(\partial f^\mathrm{U}(x)\) denote the subdifferentials of \(f^\mathrm{L}\) and \(f^\mathrm{U}\) at x, respectively. If inequalities (3) and (4) are satisfied at every \(x \in X\), then f is a convex function on X.

Definition 2.2

[40] The Clarke normal cone of the convex set \(S \subset \textit{IR}^{n}\) at \(x \in S\) is given by

$$\begin{aligned} N_S (x):=\{ \xi \in \textit{IR}^{n}:\left\langle {\xi ,z-x} \right\rangle \le 0\;\;\forall z\in S\}. \end{aligned}$$
(5)

The extremum problem considered in the paper is a nonlinear constrained optimization problem with the interval-valued objective function and with both inequality and equality constraints:

$$\begin{aligned} \mathrm{(P)} \qquad \quad \quad \text{ min } \, f\left( x \right) \ \text{ s.t. } \ x\in D=\{x\in S:g_{i} \left( x \right) \le 0,\ i\in I, \,h_{j} \left( x \right) =0,\ j\in J\}, \end{aligned}$$

where \(I = \{1,{\ldots },m\}, J = \{1,{\ldots },q\}, f:S\rightarrow I(\textit{IR})\) and, moreover, \(f^\mathrm{L}:S\rightarrow \textit{IR}, f^\mathrm{U}:S\rightarrow \textit{IR}, g_i :S\rightarrow \textit{IR},\;i\in I, \quad h_j :S\rightarrow \textit{IR},\;j\in J\), are locally Lipschitz functions on a nonempty open convex set \(S\subset { IR}^{n}\), and D is the set of all feasible solutions of (P).

For the purpose of simplifying our presentation, we will introduce some notations, which will be used frequently throughout this paper. We will write \({g} : = ({g}_{1},\ldots ,{g}_{m}) : S \rightarrow { \textit{IR}}^{m}\hbox { and }h : = ({h}_{1},{\ldots }, {h}_{q}) :{ S }\rightarrow { \textit{IR}}^{q}\) for convenience. Further, we denote the set of inequality constraint indexes that are active at point \(\bar{{x}}\in D\) by \(I(\bar{{x}}):=\{\;i\in I:g_i (\bar{{x}})=0\;\}\).

Many solution concepts have been introduced for interval-valued optimization problems. One of them is the concept of a nondominated solution (also named a LU-optimal solution) for interval-valued optimization problems given by Wu (see, for example, [1, 2]).

Definition 2.3

It is said that \(\bar{{x}} \in D\) is a LU-optimal solution of problem (P) iff there exists no other \(x \in D\) such that \(f(x) <_{\mathrm{LU}}\, f(\bar{{x}})\).

In [8], Sun and Wang established that the Karush-Kuhn-Tucker conditions are necessary for LU-optimality of a feasible solution \(\bar{{x}}\) for the nonlinear interval-valued optimization problem with inequality constraints only. We now extend these necessary optimality conditions to the case of a nonlinear interval-valued optimization problem with both inequality and equality constraints.

Theorem 2.1

Let \(\bar{{x}}\) be a LU-optimal solution of problem (P) and let the suitable constraint qualification [8] be satisfied at \(\bar{{x}}\). Then there exist Lagrange multipliers \(\bar{{\lambda }}=\left( {\bar{{\lambda }}^\mathrm{L},\bar{{\lambda }}^\mathrm{U}} \right) \in \textit{IR}^{2}\), \(\bar{{\mu }}\in \textit{IR}^{m}\) and \(\bar{{\vartheta }}\in \textit{IR}^{q}\) such that

$$\begin{aligned}&0\in \bar{{\lambda }}^\mathrm{L}\partial f^\mathrm{L}(\bar{{x}})+\bar{{\lambda }}^\mathrm{U}\partial f^\mathrm{U}(\bar{{x}})+\sum _{i=1}^m {\bar{{\mu }}_i \partial g_i (\bar{{x}})} +\sum _{j=1}^q {\bar{{\vartheta }}_j \partial h_j (\bar{{x}})} +N_S (\bar{{x}}), \end{aligned}$$
(6)
$$\begin{aligned}&\bar{{\mu }}_i g_i (\bar{{x}})=0,\;\quad i\in I, \end{aligned}$$
(7)
$$\begin{aligned}&\bar{{\lambda }}=\left( {\bar{{\lambda }}^\mathrm{L},\bar{{\lambda }}^\mathrm{U}} \right) >0,\;\;\bar{{\lambda }}^\mathrm{L}+\bar{{\lambda }}^\mathrm{U}=1,\;\;\bar{{\mu }}\ge \;0, \end{aligned}$$
(8)

where \(N_S (\bar{{x}})\) stands for the Clarke normal cone to S at \(\bar{{x}}\).

In the paper, we will assume that the suitable constraint qualification (for example, constraint qualification of Slater’s type [8]) is satisfied at any LU-optimal solution of the considered interval-valued optimization problem (P).

Definition 2.4

The point \(\bar{{x}}\in D\) is said to be a Karush-Kuhn-Tucker point of the considered interval-valued optimization problem (P) iff there exist Lagrange multipliers \(\bar{{\lambda }}=\left( {\bar{{\lambda }}^\mathrm{L},\bar{{\lambda }}^\mathrm{U}} \right) \in \textit{IR}^{2}\), \(\bar{{\mu }}\in \textit{IR}^{m}\) and \(\bar{{\vartheta }}\in \textit{IR}^{q}\) such that the Karush-Kuhn-Tucker necessary optimality conditions (6)–(8) are satisfied at this point with these Lagrange multipliers.

3 The Exact \(l_{1}\) Penalty Function Method for Solving an Interval-Valued Optimization Problem

In this section, for solving the considered interval-valued optimization problem (P), we use the exact penalty function method. As is well known, in exact penalty function methods, the given constrained extremum problem is transformed into a single unconstrained optimization problem.

Hence, if we use an exact penalty function method for solving the extremum problem (P) with the interval-valued objective function, we define an unconstrained penalized optimization problem with an interval-valued penalty function as follows:

$$\begin{aligned} P\left( {x,c} \right) :=f\left( x \right) +cp\left( x \right) e=\left( {f^\mathrm{L}\left( x \right) +cp\left( x \right) ,f^\mathrm{U}\left( x \right) +cp\left( x \right) } \right) {\rightarrow }{\hbox {min}}, \end{aligned}$$

where f is an interval-valued objective function, p is a suitable penalty function, c is a penalty parameter and \({e} = [1,1]^{T} \in { \textit{IR}}^{2}\).

Now, in a natural way, we extend the definition of exactness of the penalization of an exact scalar penalty function to the case of an exact interval-valued penalty function.

If a threshold value \(\bar{{c}}\) exists such that, for every \(c \ge \bar{{c}}\),

$$\begin{aligned} \hbox {arg LU-optimal}\{P(x,c) : x \in S \} = \hbox {arg LU-optimal} \{f(x) : x \in D\}, \end{aligned}$$

then the function \(P(\cdot ,c)\) is termed an interval-valued exact penalty function.

Now, for a given inequality constraint \(g_{i}\), we define the function \(g_{i}^+\) as follows:

$$\begin{aligned} {g}_{{ i}}^+ (x):=\left\{ {{\begin{array}{lll} {0,}&{} {\mathrm{if}}&{} {{g}_{ i} (x)\le 0,} \\ {{g}_{{ i}} (x),}&{} {\mathrm{if}}&{} {{g}_{ i} (x)>0.} \\ \end{array} }} \right. \end{aligned}$$
(9)

As it follows from (9), the function \(g_{i}^{+} \) is equal to zero for all x that satisfy the constraint, and it has a positive value whenever this constraint is violated. Moreover, large violations in the inequality constraint \({g}_{i}\) result in large values for \(g_{i}^{+} \). Thus, the function \(g_{i}^{+} \) has the penalty features relative to the single inequality constraint \({g}_{i}\).
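The penalty behavior of \(g_i^+\) described above can be written in one line; a minimal sketch (our own illustration):

```python
# The "plus function" (9) attached to an inequality constraint g_i:
# zero wherever the constraint g_i(x) <= 0 is satisfied,
# equal to the violation g_i(x) wherever it is violated.
def g_plus(g_value):
    return max(0.0, g_value)

print(g_plus(-2.0))   # 0.0 : constraint satisfied, no penalty
print(g_plus(0.0))    # 0.0 : active constraint, still no penalty
print(g_plus(3.0))    # 3.0 : penalty grows with the violation
```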

Since we use the exact absolute value penalty function method for solving the considered minimization problem with the interval-valued objective function, we construct an unconstrained interval-valued optimization problem as follows:

$$\begin{aligned}&({P}_{1}({c}))\qquad {P}_1 (x,c):=\left( {{P}_{1}^{\mathrm{L}} (x,c),{P}_{1}^{\mathrm{U}} (x,c)} \right) \nonumber \\&={f(x)}+\,{c} \left[ {\sum _{{ i=1}}^{ m} {{g}_{{ i}}^+ (x) } +\sum _{{ j=1}}^{ q} { \left| {{h}_{{ j}} (x)} \right| } } \right] {e}, \end{aligned}$$
(10)

where \(f : S \rightarrow I(\textit{IR})\) is an interval-valued function, \(f^\mathrm{L}, f^\mathrm{U}, g_i,\ i\in I\), \(h_j,\ j\in J\), are defined as in the definition of (P) and \(e = [1,1]^{T} \in \textit{IR}^{2}\). By the definition of the interval-valued objective function f, (10) can be rewritten as follows:

$$\begin{aligned} (P_{1}(c))\;\; \text {min}\;\; P_1 (x,c):= {} \left( {f}^{\mathrm{L}}({x})+{c} \left[ {\sum _{{ i=1}}^{ m} {g}_{ i}^+ (x) +\sum _{{ j=1}}^{ {q}} { \left| {h_{ j} (x)} \right| } } \right] ,\right. \nonumber \\ \left. {f}^{\mathrm {U}}({x})+ c \left[ \sum _{{i=1}}^m {g}_{ i}^+ ({x}) +\,\sum _{{ j=1}}^{ q} { \left| {h}_{ j} ({x}) \right| } \right] \right) . \end{aligned}$$
(11)

We call the unconstrained optimization problem defined above the interval-valued exact \(l_{1}\) penalized optimization problem or the penalized optimization problem with the interval-valued exact \(l_{1}\) penalty function.

The idea of the exact absolute value penalty function method is to solve the given nonlinear constrained interval-valued optimization problem (P) by means of a single unconstrained interval-valued minimization problem \(({P}_{1}({c}))\). Roughly speaking, the interval-valued exact \(l_{1}\) penalty function for problem (P) is the function \(P_1 (\cdot ,c)\) given by (11), where \(c > 0\) is the penalty parameter, with the property that there exists a lower bound \(\bar{{c}} \ge 0\) such that, for every \(c>\bar{{c}}\), any LU-optimal solution of (P) is also a LU-minimizer of the associated penalized optimization problem \(({P}_{1}({c}))\) with the interval-valued exact \(l_{1}\) penalty function, and conversely.
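The construction in (11) amounts to adding the same scalar \(l_{1}\) penalty term to both endpoints of the interval objective. A hedged sketch (all function handles and the toy problem below are our own illustration, not an example from the paper):

```python
def l1_penalty(x, c, fL, fU, gs, hs):
    """Endpoints of the interval-valued exact l1 penalty function (11).

    fL, fU : endpoint functions of the interval objective f = [fL, fU],
    gs, hs : lists of inequality / equality constraint functions,
    c      : penalty parameter.
    """
    # Scalar l1 penalty term: sum of g_i^+ plus sum of |h_j|.
    term = sum(max(0.0, g(x)) for g in gs) + sum(abs(h(x)) for h in hs)
    return (fL(x) + c * term, fU(x) + c * term)

# Toy problem: f(x) = [x**2, x**2 + 1], g(x) = 1 - x <= 0, no equalities.
fL = lambda x: x ** 2
fU = lambda x: x ** 2 + 1
g = lambda x: 1.0 - x

print(l1_penalty(2.0, 10.0, fL, fU, [g], []))   # (4.0, 5.0): feasible point
print(l1_penalty(0.0, 10.0, fL, fU, [g], []))   # (10.0, 11.0): violation penalized
```

At feasible points the penalty term vanishes, so \(P_1(x,c)\) coincides with \(f(x)\); the interval is merely shifted by the same amount at infeasible points, which is why the LU comparison of penalized values mirrors (11).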

In this section, we completely characterize the property of exactness of the exact \(l_{1}\) penalty function method used for solving the considered constrained interval-valued optimization problem with both inequality and equality constraints. In other words, we prove the equivalence between a LU-optimal solution of the given constrained interval-valued extremum problem (P) and a LU-minimizer of its associated penalized optimization problem with the interval-valued exact \(l_{1}\) penalty function.

However, we first show that a Karush-Kuhn-Tucker point of the given constrained interval-valued minimization problem (P) yields a LU-minimizer of the interval-valued exact \(l_{1}\) penalty function in its associated penalized optimization problem \(({P}_{1}({c}))\) for any penalty parameter c exceeding a threshold, which is expressed as a function of the Lagrange multipliers.

Theorem 3.1

Let \(\bar{{x}}\in D\) be a Karush-Kuhn-Tucker point at which the Karush-Kuhn-Tucker necessary optimality conditions (6)–(8) are satisfied with Lagrange multipliers \(\bar{{\lambda }}=\left( {\bar{{\lambda }}^\mathrm{L},\bar{{\lambda }}^\mathrm{U}} \right) \in \textit{IR}^{2}\), \(\bar{{\mu }}\in \textit{IR}^{m}\) and \(\bar{{\vartheta }}\in \textit{IR}^{q}\). Let \(J^{+}(\bar{{x}}):=\{ j\in J:\bar{{\vartheta }}_j >0 \}\) and \(J^{-}(\bar{{x}}):=\{ j\in J:\bar{{\vartheta }}_j <0 \}\). Furthermore, assume that the functions \(f^\mathrm{L}, f^\mathrm{U}\), \(g_i ,\;i\in I(\bar{{x}}), \quad h_j ,\;j\in J^{+}(\bar{{x}})\), \(-h_j ,\;j\in J^{-}(\bar{{x}})\) are convex at \(\bar{{x}}\) on S. If c is assumed to be sufficiently large (it is sufficient to set \(c\ge \max \left\{ { \bar{{\mu }}_i ,i\in I,\;\left| {\bar{{\vartheta }}_j } \right| ,\;j\in J } \right\} \), where \(\bar{{\mu }}_i , i = 1,\ldots ,m, \bar{{\vartheta }}_j , j = 1,\ldots ,q, \) are the Lagrange multipliers associated with the constraints \(g_{i}\) and \({h}_{j}\), respectively), then \(\bar{{x}}\) is also a LU-optimal solution of its associated penalized optimization problem \((P_{1}(c))\) with the interval-valued exact \(l_{1}\) penalty function.
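The role of the threshold can be seen numerically on a toy problem (our own illustration, not from the paper): minimize \(f(x)=[x^2,x^2+1]\) subject to \(g(x)=1-x\le 0\). The LU-optimal solution is \(\bar{x}=1\), and the stationarity condition \((\bar{\lambda}^\mathrm{L}+\bar{\lambda}^\mathrm{U})\,2\bar{x}-\bar{\mu}=0\) with \(\bar{\lambda}^\mathrm{L}+\bar{\lambda}^\mathrm{U}=1\) gives \(\bar{\mu}=2\), so the theorem suggests \(c\ge 2\). Since both endpoints of \(P_1(x,c)\) share the same minimizer here, it suffices to grid-search the lower endpoint:

```python
# Toy check of the penalty threshold in Theorem 3.1 on the problem
#   min [x**2, x**2 + 1]  s.t.  g(x) = 1 - x <= 0,
# whose LU-optimal solution is xbar = 1 with multiplier mu = 2,
# so exactness is expected for c >= 2.  Illustration only.
def P1_lower(x, c):
    """Lower endpoint of the exact l1 penalty function for this problem."""
    return x ** 2 + c * max(0.0, 1.0 - x)

def argmin_on_grid(c, lo=-2.0, hi=3.0, n=5001):
    """Brute-force minimizer of P1_lower(., c) on a uniform grid."""
    xs = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    return min(xs, key=lambda x: P1_lower(x, c))

print(round(argmin_on_grid(3.0), 2))   # 1.0 : c above the threshold, penalty exact
print(round(argmin_on_grid(1.0), 2))   # 0.5 : c too small, minimizer infeasible
```

With \(c=1<\bar{\mu}\) the unconstrained minimizer drifts to the infeasible point \(x=0.5\), which is exactly the failure of exactness that the threshold rules out.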

Proof

Suppose, by contradiction, that \(\bar{{x}}\) is not a LU-optimal solution of the penalized optimization problem \((P_{1}(c))\) with the interval-valued exact \(l_{1}\) penalty function associated with the considered constrained interval-valued minimization problem (P). Then, by Definition 2.3, there exists \(x_{0} \in S\) such that

$$\begin{aligned} P_{1}(x_{0},c) <_{\mathrm{LU}} P_{1}(\bar{{x}},c). \end{aligned}$$

Then, by definition of the relation \(<_{\mathrm{LU}}\), the above inequality is equivalent to

$$\begin{aligned} \left\{ {{\begin{array}{l} {P_1^L (x_0 ,c)\le P_1^L (\bar{{x}},c)} \\ {P_1^U (x_0 ,c)<P_1^U (\bar{{x}},c)} \\ \end{array} }} \right. \; {\hbox {or}} \left\{ {{\begin{array}{l} {P_1^L (x_0 ,c)<P_1^L (\bar{{x}},c)} \\ {P_1^U (x_0 ,c)\le P_1^U (\bar{{x}},c)} \\ \end{array} }} \right. \; {\hbox {or}} \left\{ {{\begin{array}{l} {P_1^L (x_0 ,c)<P_1^L (\bar{{x}},c)} \\ {P_1^U (x_0 ,c)<P_1^U (\bar{{x}},c)} \\ \end{array} }} \right. \nonumber \\ \end{aligned}$$
(12)

Hence, by the definition of the interval-valued exact \(l_{1}\) penalty function \({P}_{1}(\cdot ,{c})\), inequalities (12) are equivalent to

$$\begin{aligned}&\left\{ {{\begin{array}{l} {f^\mathrm{L}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right] \le f^\mathrm{L}(\bar{{x}})+c \left[ {\sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right] } \\ {f^\mathrm{U}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right]<f^\mathrm{U}(\bar{{x}})+c \left[ {\sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right] } \\ \end{array} }} \right. \;{\mathrm{or}}\nonumber \\&\left\{ {{\begin{array}{l} {f^\mathrm{L}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right]<f^\mathrm{L}(\bar{{x}})+c \left[ {\sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right] } \\ {f^\mathrm{U}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right] \le f^\mathrm{U}(\bar{{x}})+c \left[ {\sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right] } \\ \end{array} }} \right. \;\mathrm{or}\nonumber \\&\left\{ {{\begin{array}{l} {f^\mathrm{L}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right]<f^\mathrm{L}(\bar{{x}})+c \left[ {\sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right] } \\ {f^\mathrm{U}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right] <f^\mathrm{U}(\bar{{x}})+c \left[ {\sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right] } \\ \end{array} }} \right. \;. \end{aligned}$$
(13)

Since \(\bar{{x}} \in D\), by (9), inequalities (13) yield, respectively,

$$\begin{aligned}&\left\{ {{\begin{array}{l} {f^\mathrm{L}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right] \le f^\mathrm{L}(\bar{{x}})} \\ {f^\mathrm{U}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right]<f^\mathrm{U}(\bar{{x}})} \\ \end{array} }} \right. \;{\hbox {or}}\nonumber \\&\left\{ {{\begin{array}{l} {f^\mathrm{L}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right]<f^\mathrm{L}(\bar{{x}})} \\ {f^\mathrm{U}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right] \le f^\mathrm{U}(\bar{{x}})} \\ \end{array} }} \right. \; {\hbox {or}}\nonumber \\&\left\{ {{\begin{array}{l} {f^\mathrm{L}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right]<f^\mathrm{L}(\bar{{x}})} \\ {f^\mathrm{U}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right] <f^\mathrm{U}(\bar{{x}})} \\ \end{array} }} \right. \; \end{aligned}$$
(14)

Multiplying the first inequality in each system of (14) by \(\bar{{\lambda }}^\mathrm{L}>0\), the second one by \(\bar{{\lambda }}^\mathrm{U}>0\), and then adding both sides of the resulting inequalities, we get

$$\begin{aligned}&\bar{{\lambda }}^\mathrm{L}f^\mathrm{L}(x_0 )+\bar{{\lambda }}^\mathrm{U}f^\mathrm{U}(x_0 )+c\left( {\bar{{\lambda }}^\mathrm{L}+\bar{{\lambda }}^\mathrm{U}} \right) \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right] \\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad <\bar{{\lambda }}^\mathrm{L}f^\mathrm{L}(\bar{{x}})+\bar{{\lambda }}^\mathrm{U}f^\mathrm{U}(\bar{{x}}). \end{aligned}$$

By the Karush-Kuhn-Tucker necessary optimality condition (8), we have

$$\begin{aligned} \bar{{\lambda }}^\mathrm{L}f^\mathrm{L}(x_0 )+\bar{{\lambda }}^\mathrm{U}f^\mathrm{U}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right] <\bar{{\lambda }}^\mathrm{L}f^\mathrm{L}(\bar{{x}})+\bar{{\lambda }}^\mathrm{U}f^\mathrm{U}(\bar{{x}}).\nonumber \\ \end{aligned}$$
(15)

By assumption, \({f}^\mathrm{L}, {f}^\mathrm{U}, g_i ,\;i\in I(\bar{{x}}), \quad h_j ,\;j\in J^{+}(\bar{{x}}),\, -h_j ,\;j\in J^{-}(\bar{{x}})\), are convex at \(\bar{{x}}\) on S. Hence, by Remarks 2.1 and 2.2, the following inequalities

$$\begin{aligned}&f^\mathrm{L}(x_0 )-f^\mathrm{L}(\bar{{x}})\ge \left\langle {\xi ^\mathrm{L},x_0 -\bar{{x}}} \right\rangle ,\;\; \end{aligned}$$
(16)
$$\begin{aligned}&f^\mathrm{U}(x_0 )-f^\mathrm{U}(\bar{{x}})\;\ge \;\left\langle {\xi ^\mathrm{U},x_0 -\bar{{x}}} \right\rangle ,\;\; \end{aligned}$$
(17)
$$\begin{aligned}&g_i (x_0 )-g_i (\bar{{x}})\ge \;\left\langle {\zeta _i ,x_0 -\bar{{x}}} \right\rangle ,\;\quad i\in I(\bar{{x}}), \end{aligned}$$
(18)
$$\begin{aligned}&h_j (x_0 )-h_j (\bar{{x}})\ge \;\left\langle {\varsigma _j ,x_0 -\bar{{x}}} \right\rangle ,\;\quad j\in J^{+}(\bar{{x}}), \end{aligned}$$
(19)
$$\begin{aligned}&-\,h_j (x_0 )+h_j (\bar{{x}})\;\ge \;\left\langle {-\varsigma _j ,x_0 -\bar{{x}}} \right\rangle ,\;\quad j\in J^{-}(\bar{{x}})\; \end{aligned}$$
(20)

hold for any \(\xi ^\mathrm{L}\in \partial f^\mathrm{L}(\bar{{x}}),\; \quad \xi ^\mathrm{U}\in \partial f^\mathrm{U}(\bar{{x}}), \quad \zeta _i \in \partial g_i (\bar{{x}}),\; i\in I(\bar{{x}}), \quad \varsigma _j \in \partial h_j (\bar{{x}}),\; j\in J^{+}(\bar{{x}})\;\cup J^{-}(\bar{{x}}),\) respectively. Multiplying each of the inequalities (16)–(20) by the corresponding Lagrange multiplier, we get, respectively,

$$\begin{aligned}&\bar{{\lambda }}^\mathrm{L}f^\mathrm{L}(x_0 )-\bar{{\lambda }}^\mathrm{L}f^\mathrm{L}(\bar{{x}})\ge \;\left\langle {\bar{{\lambda }}^\mathrm{L}\;\xi ^\mathrm{L},x_0 -\bar{{x}}} \right\rangle ,\;\; \end{aligned}$$
(21)
$$\begin{aligned}&\bar{{\lambda }}^\mathrm{U}f^\mathrm{U}(x_0 )-\bar{{\lambda }}^\mathrm{U}f^\mathrm{U}(\bar{{x}})\ge \;\left\langle {\bar{{\lambda }}^\mathrm{U}\xi ^\mathrm{U},x_0 -\bar{{x}}} \right\rangle ,\;\; \end{aligned}$$
(22)
$$\begin{aligned}&\bar{{\mu }}_i g_i (x_0 )-\bar{{\mu }}_i g_i (\bar{{x}})\ge \left\langle {\bar{{\mu }}_i \zeta _i ,x_0 -\bar{{x}}} \right\rangle ,\;\quad i\in I(\bar{{x}}), \end{aligned}$$
(23)
$$\begin{aligned}&\bar{{\vartheta }}_j h_j (x_0 )-\bar{{\vartheta }}_j h_j (\bar{{x}})\ge \left\langle {\bar{{\vartheta }}_j \varsigma _j ,x_0 -\bar{{x}}} \right\rangle ,\;\quad j\in J^{+}(\bar{{x}})\;\cup J^{-}(\bar{{x}}). \end{aligned}$$
(24)

Taking into account that the remaining Lagrange multipliers are equal to 0 and adding both sides of (21)–(24), we obtain

$$\begin{aligned}&\bar{{\lambda }}^\mathrm{L}f(x_0 )+\bar{{\lambda }}^\mathrm{U}f(x_0 )-\bar{{\lambda }}^\mathrm{L}f(\bar{{x}})-\bar{{\lambda }}^\mathrm{U}f(\bar{{x}})+\sum \limits _{i=1}^m {\bar{{\mu }}_i g_i (x_0 )} -\sum \limits _{i=1}^m {\bar{{\mu }}_i g_i (\bar{{x}})} +\nonumber \\&\qquad \sum \limits _{j=1}^q {\bar{{\vartheta }}_j h_j (x_0 )} -\sum \limits _{j=1}^q {\bar{{\vartheta }}_j h_j (\bar{{x}})}\ge \left\langle {\bar{{\lambda }}^\mathrm{L}\xi ^\mathrm{L}+\bar{{\lambda }}^\mathrm{U}\xi ^\mathrm{U}+\sum \limits _{i=1}^m {\bar{{\mu }}_i \zeta _i +\sum \limits _{j=1}^q {\bar{{\vartheta }}_j \varsigma _j } } ,x_0 -\bar{{x}}} \right\rangle .\nonumber \\ \end{aligned}$$
(25)

By definition, for any \(z\in N_S (\bar{{x}})\), it follows that \(\left\langle {z,x_0 -\bar{{x}}} \right\rangle \le 0\). Thus, by (25), the following inequality

$$\begin{aligned}&\bar{{\lambda }}^\mathrm{L}f(x_0 )+\bar{{\lambda }}^\mathrm{U}f(x_0 )-\bar{{\lambda }}^\mathrm{L}f(\bar{{x}})-\bar{{\lambda }}^\mathrm{U}f(\bar{{x}})+\sum \limits _{i=1}^m {\bar{{\mu }}_i g_i (x_0 )} -\sum \limits _{i=1}^m {\bar{{\mu }}_i g_i (\bar{{x}})}+\\&\qquad \sum \limits _{j=1}^q {\bar{{\vartheta }}_j h_j (x_0 )} -\sum \limits _{j=1}^q {\bar{{\vartheta }}_j h_j (\bar{{x}})} \ge \; \left\langle {\bar{{\lambda }}^\mathrm{L}\xi ^\mathrm{L}+\bar{{\lambda }}^\mathrm{U}\xi ^\mathrm{U}+\sum \limits _{i=1}^m {\bar{{\mu }}_i \zeta _i +\sum \limits _{j=1}^q {\bar{{\vartheta }}_j \varsigma _j } } +z,x_0 -\bar{{x}}} \right\rangle \;\; \end{aligned}$$

holds for any \(\xi ^\mathrm{L}\in \partial f^\mathrm{L}(\bar{{x}}), \quad \xi ^\mathrm{U}\in \partial f^\mathrm{U}(\bar{{x}}), \quad \zeta _i \in \partial g_i (\bar{{x}}),\; i\in I, \quad \varsigma _j \in \partial h_j (\bar{{x}}),\; j\in J\).

Hence, by the Karush-Kuhn-Tucker necessary optimality condition (6), the above inequality implies

$$\begin{aligned}&\bar{{\lambda }}^\mathrm{L}f(x_0 )+\bar{{\lambda }}^\mathrm{U}f(x_0 )+\sum \limits _{i=1}^m {\bar{{\mu }}_i g_i (x_0 )} +\sum \limits _{j=1}^q {\bar{{\vartheta }}_j h_j (x_0 )} \ge \nonumber \\&\qquad \bar{{\lambda }}^\mathrm{L}f(\bar{{x}})+\;\bar{{\lambda }}^\mathrm{U}f(\bar{{x}})+\sum \limits _{i=1}^m {\bar{{\mu }}_i g_i (\bar{{x}})} +\sum \limits _{j=1}^q {\bar{{\vartheta }}_j h_j (\bar{{x}})} . \end{aligned}$$
(26)

By the Karush-Kuhn-Tucker necessary optimality condition (7) and the feasibility of \(\bar{{x}}\) in problem (P), (26) gives

$$\begin{aligned} \bar{{\lambda }}^\mathrm{L}f(x_0 )+\bar{{\lambda }}^\mathrm{U}f(x_0 )+\sum \limits _{i=1}^m {\bar{{\mu }}_i g_i (x_0 )} +\sum \limits _{j=1}^q {\bar{{\vartheta }}_j h_j (x_0 )} \ge \bar{{\lambda }}^\mathrm{L}f(\bar{{x}})+\;\bar{{\lambda }}^\mathrm{U}f(\bar{{x}}).\nonumber \\ \end{aligned}$$

Hence, by (9), it follows that

$$\begin{aligned} \bar{{\lambda }}^\mathrm{L}f(x_0 )+\bar{{\lambda }}^\mathrm{U}f(x_0 )+\sum \limits _{i=1}^m {\bar{{\mu }}_i g_i^+ (x_0 )} +\sum \limits _{j=1}^q {\left| {\bar{{\vartheta }}_j } \right| \left| {h_j (x_0 )} \right| } \ge \bar{{\lambda }}^\mathrm{L}f(\bar{{x}})+\;\bar{{\lambda }}^\mathrm{U}f(\bar{{x}})\nonumber \\ \end{aligned}$$
(27)

By assumption, \(c\ge \max \left\{ { \bar{{\mu }}_i ,i\in I,\;\left| {\bar{{\vartheta }}_j } \right| ,\;j\in J } \right\} \). Thus, (27) implies that the following inequality

$$\begin{aligned} \bar{{\lambda }}^\mathrm{L}f(x_0 )+\bar{{\lambda }}^\mathrm{U}f(x_0 )+c\left[ {\sum \limits _{i=1}^m {\bar{{\mu }}_i g_i^+ (x_0 )} +\sum \limits _{j=1}^q {\left| {h_j (x_0 )} \right| } } \right] \ge \bar{{\lambda }}^\mathrm{L}f(\bar{{x}})+\;\bar{{\lambda }}^\mathrm{U}f(\bar{{x}}) \end{aligned}$$
(28)

holds, contradicting (15). This means that \(\bar{{x}}\) is a LU-optimal solution of the penalized optimization problem \(({P}_{1}({c}))\) with the interval-valued exact \(l_{1}\) penalty function associated with the original constrained interval-valued optimization problem (P). Thus, the proof of this theorem is completed.

A direct consequence of the result established in Theorem 3.1 is the following:

Corollary 3.1

Let \(\bar{{x}}\) be a LU-optimal point of the constrained interval-valued optimization problem (P). Furthermore, assume that all hypotheses of Theorem 3.1 are fulfilled. Then \(\bar{{x}}\) is also a LU-minimizer of the penalized optimization problem \(({P}_{1}({c}))\) with the interval-valued exact \(l_{1}\) penalty function.

Before proving the converse of the result given in Corollary 3.1, we establish the following:

Proposition 3.1

Let \(\bar{{x}} \in D\) be a LU-minimizer of the penalized optimization problem \((P_{1}(c))\) with the interval-valued exact \(l_{1}\) penalty function. Then, the following inequality

$$\begin{aligned} f(x)<_{\mathrm{LU}} f(\bar{{x}})\; \end{aligned}$$
(29)

cannot hold for any \(x\in D\).

Proof

We proceed by contradiction. Suppose, contrary to the result, that there exists \({x}_{0} \in D\) such that

$$\begin{aligned} f(x_0 )<_{\mathrm{LU}} f(\bar{{x}}).\; \end{aligned}$$
(30)

Hence, by the definition of \(<_{\mathrm{LU}} \), (30) yields

$$\begin{aligned} \left\{ {{\begin{array}{l} {f^\mathrm{L}(x_0 )\le f^\mathrm{L}(\bar{{x}})} \\ {f^\mathrm{U}(x_0 )<f^\mathrm{U}(\bar{{x}})} \\ \end{array} }} \right. \;{ \mathrm{or} }\left\{ {{\begin{array}{l} {f^\mathrm{L}(x_0 )<f^\mathrm{L}(\bar{{x}})} \\ {f^\mathrm{U}(x_0 )\le f^\mathrm{U}(\bar{{x}})} \\ \end{array} }} \right. \;{ \mathrm{or} }\left\{ {{\begin{array}{l} {f^\mathrm{L}(x_0 )<f^\mathrm{L}(\bar{{x}}) } \\ {f^\mathrm{U}(x_0 )<f^\mathrm{U}(\bar{{x}}).} \\ \end{array} }} \right. \; \end{aligned}$$

Using \({x}_{0} \in { D}\) together with (9), we get

$$\begin{aligned}&\left\{ {{\begin{array}{l} {f^\mathrm{L}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right] \le f^\mathrm{L}(\bar{{x}})+c \left[ {\sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right] } \\ {f^\mathrm{U}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right]<f^\mathrm{U}(\bar{{x}})+c \left[ {\sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right] } \\ \end{array} }} \right. \;{\hbox {or}}\nonumber \\&\left\{ {{\begin{array}{l} {f^\mathrm{L}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right]<f^\mathrm{L}(\bar{{x}})+c \left[ {\sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right] } \\ {f^\mathrm{U}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right] \le f^\mathrm{U}(\bar{{x}})+c \left[ {\sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right] } \\ \end{array} }} \right. \;{\hbox {or}}\nonumber \\&\left\{ {{\begin{array}{l} {f^\mathrm{L}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right]<f^\mathrm{L}(\bar{{x}})+c \left[ {\sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right] } \\ {f^\mathrm{U}(x_0 )+c \left[ {\sum \limits _{i=1}^m {g_i^+ (x_0 ) } +\sum \limits _{j=1}^q { \left| {h_j (x_0 )} \right| } } \right] <f^\mathrm{U}(\bar{{x}})+c \left[ {\sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right] } \\ \end{array} }} \right. \;. \end{aligned}$$
(31)

Using the definition of the interval-valued exact \(l_{1}\) penalty function \({P}_{1}(\cdot ,c)\) together with the definition of the relation \(<_{\mathrm{LU}} \), (31) implies that \(P_1 (x_0 ,c)<_{\mathrm{LU}} P_1 (\bar{{x}},c)\), which contradicts the assumption that \(\bar{{x}} \in D\) is a LU-minimizer of the penalized optimization problem \((P_{1}(c))\) with the interval-valued exact \(l_{1}\) penalty function.

Remark 3.1

Note that the result established in Proposition 3.1 has been proved in a different way than the similar result in [37]. Moreover, some incorrect equivalences used in [37] have been avoided in the proof given in the present paper.

Theorem 3.2

Let \(\bar{{x}}\) be a LU-minimizer of the penalized optimization problem \((P_{1}(c_{0}))\) with the interval-valued exact \(l_{1}\) penalty function. Assume that the functions \(f^\mathrm{L}, f^\mathrm{U}, g_{i}, i \in I, h_j ,\;j\in J^{\ge }(\bar{{x}}):=\left\{ {j\in J:h_j (\bar{{x}})\ge 0} \right\} , -h_j ,\;j\in J^{<}(\bar{{x}}):=\left\{ {j\in J:h_j (\bar{{x}})<0} \right\} \), are convex at \(\bar{{x}}\) on S. Then \(\bar{{x}}\) is also a LU-optimal solution of the considered nondifferentiable interval-valued optimization problem (P).

Proof

First, we show that \(\bar{{x}}\) is a feasible solution of the considered constrained interval-valued optimization problem (P). Suppose, by contradiction, that \(\bar{{x}}\) is not a feasible solution of (P). Then, by (9), it follows that

$$\begin{aligned} \sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } >0. \end{aligned}$$
(32)

Since \(\bar{{x}}\) is a LU-minimizer of \((P_{1}(c_{0}))\), it is also a LU-minimizer of the penalized optimization problem \((P_{1}(c))\) for every \(c \ge c_{0}\). We take a penalty parameter c large enough, that is, satisfying \(c \ge c_{0}\). Since \(\bar{{x}}\) is a LU-minimizer of the penalized optimization problem \((P_{1}(c))\) with the interval-valued exact \(l_{1}\) penalty function, using the weighting method (see, for example, Chankong and Haimes [41], Miettinen [42]), there exist \(\lambda ^\mathrm{L} \ge 0, \lambda ^\mathrm{U} \ge 0, \lambda ^\mathrm{L} + \lambda ^\mathrm{U} = 1\) such that

$$\begin{aligned} 0\in \lambda ^\mathrm{L}\partial P_1^L (\bar{{x}},c)+\lambda ^\mathrm{U}\partial P_1^U (\bar{{x}},c). \end{aligned}$$
(33)

Hence, by the definition of the interval-valued exact \(l_{1}\) penalty function, we have

$$\begin{aligned}&0\in \lambda ^\mathrm{L}\partial \left( {f^\mathrm{L}(\bar{{x}})+c \left[ {\sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right] } \right) \nonumber \\&\qquad +\lambda ^\mathrm{U}\partial \left( {f^\mathrm{U}(\bar{{x}})+c \left[ {\sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right] } \right) . \end{aligned}$$

Thus, by Proposition 2.3.1 (Clarke [40]) and Corollary 2 of Proposition 2.3.3 (Clarke [40]), the above relation implies

$$\begin{aligned}&0\in \lambda ^\mathrm{L}\partial f^\mathrm{L}(\bar{{x}})+c\lambda ^\mathrm{L} \partial \left( { \sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right) \\&+\,\lambda ^\mathrm{U}\partial f^\mathrm{U}(\bar{{x}})+c\lambda ^\mathrm{U}\partial \left( { \sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right) . \end{aligned}$$

Since \(\lambda ^\mathrm{L} + \lambda ^\mathrm{U} = 1\), the above relation gives

$$\begin{aligned} 0\in \lambda ^\mathrm{L}\partial f^\mathrm{L}(\bar{{x}})+\lambda ^\mathrm{U}\partial f^\mathrm{U}(\bar{{x}})+c\partial \left( { \sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right) . \end{aligned}$$
(34)

Hence, by Proposition 2.3.3 (Clarke [40]), (34) yields

$$\begin{aligned} 0\in \lambda ^\mathrm{L}\partial f^\mathrm{L}(\bar{{x}})+\lambda ^\mathrm{U}\partial f^\mathrm{U}(\bar{{x}})+c\left[ {\sum \limits _{i=1}^m {\partial g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \partial \left( {\left| {h_j (\bar{{x}})} \right| } \right) } } \right] . \end{aligned}$$
(35)

By assumption, the functions \(f^\mathrm{L}, f^\mathrm{U}, g_i ,\;i\in I, \quad h_j ,\;j\in J^{\ge }(\bar{{x}}), -h_j ,\;j\in J^{<}(\bar{{x}})\), are convex at \(\bar{{x}}\) on S. Since \(g_i ,\;i\in I,\) are convex at \(\bar{{x}}\) on S, the functions \(g_i^+ ,\;i\in I,\) are also convex at \(\bar{{x}}\) on S. Hence, by definition, the following inequalities

$$\begin{aligned}&f^\mathrm{L}(x)-f^\mathrm{L}(\bar{{x}})\ge \left\langle {\xi ^\mathrm{L},x-\bar{{x}}} \right\rangle ,\;\; \end{aligned}$$
(36)
$$\begin{aligned}&f^\mathrm{U}(x)-f^\mathrm{U}(\bar{{x}})\;\ge \left\langle {\xi ^\mathrm{U},x-\bar{{x}}} \right\rangle ,\;\; \end{aligned}$$
(37)
$$\begin{aligned}&g_i^+ (x)-g_i^+ (\bar{{x}})\ge \left\langle {\zeta _i^+ ,x-\bar{{x}}} \right\rangle ,\;\quad i\in I, \end{aligned}$$
(38)
$$\begin{aligned}&h_j (x)-h_j (\bar{{x}})\ge \left\langle {\varsigma _j ,x -\bar{{x}}} \right\rangle ,\;\quad j\in J^{\ge }(\bar{{x}}), \end{aligned}$$
(39)
$$\begin{aligned}&-\,h_j (x)+h_j (\bar{{x}})\ge \left\langle {-\varsigma _j ,x-\bar{{x}}} \right\rangle ,\;\quad j\in J^{<}(\bar{{x}}) \end{aligned}$$
(40)

hold for all \(x \in S\) and any \(\xi ^\mathrm{L}\in \partial f^\mathrm{L}(\bar{{x}}), \quad \xi ^\mathrm{U}\in \partial f^\mathrm{U}(\bar{{x}}), \quad \zeta _i^+ \in \partial g_i^+ (\bar{{x}}),\; i\in I, \quad \varsigma _j \in \partial \left( { \left| {h_j (\bar{{x}})} \right| } \right) ,\; j\in J,\) respectively. Since inequalities (36)–(40) are satisfied for any \(x \in S\), they are also satisfied for any \(x \in D\). Thus, by the feasibility of x in problem (P) and (9), inequalities (38)–(40) yield, respectively,

$$\begin{aligned}&-\,g_i^+ (\bar{{x}})\;\ge \left\langle {\zeta _i^+ ,x-\bar{{x}}} \right\rangle ,\;\quad i\in I, \end{aligned}$$
(41)
$$\begin{aligned}&-\,h_j (\bar{{x}})\ge \left\langle {\varsigma _j ,x -\bar{{x}}} \right\rangle ,\;\quad j\in J^{\ge }(\bar{{x}}), \end{aligned}$$
(42)
$$\begin{aligned}&h_j (\bar{{x}})\;\ge \left\langle {-\varsigma _j ,x -\bar{{x}}} \right\rangle ,\;\quad j\in J^{<}(\bar{{x}}). \end{aligned}$$
(43)

Multiplying (36) by \(\lambda ^\mathrm{L} \ge 0\), (37) by \(\lambda ^\mathrm{U} \ge 0\), and (41)–(43) by \(c > 0\), we get that the following inequalities

$$\begin{aligned}&\lambda ^\mathrm{L}f^\mathrm{L}(x)-\lambda ^\mathrm{L}f^\mathrm{L}(\bar{{x}})\;\ge \left\langle {\lambda ^\mathrm{L}\xi ^\mathrm{L},x-\bar{{x}}} \right\rangle ,\;\; \end{aligned}$$
(44)
$$\begin{aligned}&\lambda ^\mathrm{U}f^\mathrm{U}(x)-\lambda ^\mathrm{U}f^\mathrm{U}(\bar{{x}})\;\ge \left\langle {\lambda ^\mathrm{U}\xi ^\mathrm{U},x-\bar{{x}}} \right\rangle ,\;\; \end{aligned}$$
(45)
$$\begin{aligned}&-\,cg_i^+ (\bar{{x}})\;\ge \left\langle {c\zeta _i^+ ,x-\bar{{x}}} \right\rangle ,\;\quad i\in I, \end{aligned}$$
(46)
$$\begin{aligned}&-\,c\left| {h_j (\bar{{x}})} \right| \;\ge \left\langle {c\varsigma _j ,x -\bar{{x}}} \right\rangle ,\;\quad j\in J \end{aligned}$$
(47)

hold for all \(x \in D\) and any \(\xi ^\mathrm{L}\in \partial f^\mathrm{L}(\bar{{x}}), \quad \xi ^\mathrm{U}\in \partial f^\mathrm{U}(\bar{{x}}), \quad \zeta _i^+ \in \partial g_i^+ (\bar{{x}}),\; i\in I, \quad \varsigma _j \in \partial \left( { \left| {h_j (\bar{{x}})} \right| } \right) ,\; j\in J,\) respectively. Thus, adding both sides of (44)–(47) (summed over \(i\in I\) and \(j\in J\)), we obtain

$$\begin{aligned}&\lambda ^\mathrm{L}f^\mathrm{L}(x)-\lambda ^\mathrm{L}f^\mathrm{L}(\bar{{x}})+\lambda ^\mathrm{U}f^\mathrm{U}(x)-\lambda ^\mathrm{U}f^\mathrm{U}(\bar{{x}})-c\left[ {\sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right] \nonumber \\&\qquad \ge \left\langle {\lambda ^\mathrm{L}\xi ^\mathrm{L}+\lambda ^\mathrm{U}\xi ^\mathrm{U}+c\sum \limits _{i=1}^m {\zeta _i^+ } +c\sum \limits _{j=1}^q {\varsigma _j } ,x-\bar{{x}}} \right\rangle . \end{aligned}$$
(48)

Combining (35) and (48), we obtain

$$\begin{aligned} \lambda ^\mathrm{L}f^\mathrm{L}(x)-\lambda ^\mathrm{L}f^\mathrm{L}(\bar{{x}})\;+\;\lambda ^\mathrm{U}f^\mathrm{U}(x)-\lambda ^\mathrm{U}f^\mathrm{U}(\bar{{x}})\;-c\left[ {\sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } } \right] \ge 0. \end{aligned}$$
(49)

Hence, by (32), (49) implies that the following inequality

$$\begin{aligned} c\le \frac{\lambda ^\mathrm{L}f^\mathrm{L}(x)+\;\lambda ^\mathrm{U}f^\mathrm{U}(x)-\lambda ^\mathrm{L}f^\mathrm{L}(\bar{{x}})\;-\lambda ^\mathrm{U}f^\mathrm{U}(\bar{{x}})}{\sum \limits _{i=1}^m {g_i^+ (\bar{{x}}) } +\sum \limits _{j=1}^q { \left| {h_j (\bar{{x}})} \right| } }\; \end{aligned}$$

holds, contradicting the fact that the penalty parameter c may be chosen arbitrarily large (not less than \(c_{0}\)). This means that \(\bar{{x}}\) is feasible for problem (P). Hence, its LU-optimality in problem (P) follows directly from Proposition 3.1. Thus, the proof of this theorem is completed.

From Corollary 3.1 and Theorem 3.2, the main result of the paper follows:

Corollary 3.2

Let all hypotheses of Corollary 3.1 and Theorem 3.2 be satisfied. Then, there exists a threshold \(\bar{c}\) such that, for every \(c\ge \bar{c}\), the set of LU-optimal solutions for the given constrained interval-valued extremum problem (P) coincides with the set of LU-minimizers for its associated penalized optimization problem \((P_{1}(c))\) with the interval-valued exact \(l_1\) penalty function.

Remark 3.2

Note that we have proved these results under weaker assumptions than the similar ones established by Jayswal and Banerjee [37] for differentiable interval-valued optimization problems with inequality constraints only. Moreover, some impractical hypotheses used in [37] (for example, in Theorem 4.5 [37]) have been avoided and, therefore, the results established in the present paper are of practical importance.

Now, we illustrate the results established in the paper with the help of an example of a constrained convex nonsmooth optimization problem with an interval-valued objective function. In order to solve it, we use the exact \(l_{1}\) penalty function method.

Example 3.1

Consider the following nonsmooth constrained optimization problem with the interval-valued objective function:

$$\begin{aligned}&(P1) \;{\hbox {min}}\;\quad f(x)=\left[ {1,1} \right] \left( {\left| {x_1 } \right| +\left| {x_2 } \right| } \right) +\left[ {-1,0} \right] \left( {x_1 -x_2 } \right) +\left[ -1,-\frac{1}{2}\right] \\&\qquad s.t.\;\;x\in D=\{x\in \textit{IR}^{2}:g_1 (x)=x_1^2 -2x_1 \le 0,\;g_2 (x)=x_2^2 +2x_2 \le 0,\; \\&\qquad \qquad \qquad \qquad \qquad h_1 (x)=x_1 +x_2 =0\}. \end{aligned}$$

Note that \(\bar{{x}}=(0 ,0)\) is a LU-optimal solution of the considered constrained interval-valued optimization problem (P1). Further, it is not difficult to show that both the objective function f and the constraint functions \(g_{1}, g_{2}\) and \(h_{1}\) are convex on \({ IR}^{2}\). In order to solve the considered optimization problem, we use the exact \(l_{1}\) penalty function method. Thus, the following unconstrained interval-valued optimization problem is constructed in this method:

$$\begin{aligned}&(P{1_1}(c))\;\;\text {min}\;\;P{1_1} {(x,c)=[P1}_{ 1}^\mathrm{L} (x,c),{P1}_{ 1}^\mathrm{U} (x,c)]\\&\qquad \qquad \qquad =\left[ f^\mathrm{L}(x)+c\left( {g_1^+ (x)+g_2^+ (x)+\left| {h_1 (x)} \right| } \right) ,\right. \;\\&\qquad \qquad \qquad \quad \left. f^\mathrm{U}(x)+c\left( {g_1^+ (x)+g_2^+ (x)+\left| {h_1 (x)} \right| } \right) \right] \\&\qquad \qquad \qquad =\left[ {|x}_{ 1} {|}+{|x}_{ 2} {|}-{x}_{ 1} +x_2 -1\right. \\&\qquad \qquad \qquad \quad +\,c\left( {{\mathrm{max}}\{0,{x}_{ 1}^{ 2} -{2x}_{ 1} \}+{\mathrm{max}}\{0,{x}_{ 2}^{ 2} +{2x}_{ 2} \}+\left| {{x}_{ 1} +{x}_{ 2} } \right| } \right) ,\\&\quad \qquad \qquad \qquad \left. |{x}_{ 1} {|}+{|x}_{ 2} |-\frac{1}{2}+c\left( {{\mathrm{max}}\{0,{x}_{ 1}^{ 2} -{2x}_{ 1} \}+{\mathrm{max}}\{0,{x}_{ 2}^{ 2} +{2x}_{ 2} \}+\left| {{x}_{ 1} +{x}_{ 2} } \right| } \right) \right] . \end{aligned}$$

Note that \(\bar{{x}}=(0,0)\) is feasible in the considered interval-valued optimization problem (P1). Further, there exist Lagrange multipliers \(\bar{{\lambda }}=\left( {\bar{{\lambda }}^\mathrm{L},\bar{{\lambda }}^\mathrm{U}} \right) \in \textit{IR}^{2}, \bar{{\mu }}=\left( {\bar{{\mu }}_1 ,\bar{{\mu }}_2 } \right) \in \textit{IR}_+^2 \) and \(\bar{{\vartheta }}_1 \in \textit{IR}\) such that the Karush-Kuhn-Tucker necessary optimality conditions (6)–(8) are satisfied at \(\bar{{x}}=(0,0)\). Further, note that all functions constituting the considered interval-valued optimization problem (P1) are convex on \(\textit{IR}^{2}\). Then, by Theorem 3.1, for any penalty parameter c satisfying \(c\; \ge \;\max \{\;\bar{{\mu }}_1 ,\bar{{\mu }}_2 , \;\left| {\bar{{\vartheta }}_1 } \right| \;\}\; =1\), the point \(\bar{{x}}=(0,0)\) is also a LU-minimizer of the penalized optimization problem \((P1_1 (c))\) with the interval-valued exact \(l_{1}\) penalty function given above. Conversely, note that all hypotheses of Theorem 3.2 are also fulfilled. Indeed, since \(\bar{{x}}=(0,0)\) is a LU-minimizer in the penalized optimization problem \((P1_1 (1))\) and all functions involved in problem (P1) are convex on \(\textit{IR}^{2}\), by Theorem 3.2, it follows that \(\bar{{x}}= (0,0)\) is also a LU-optimal solution of the original constrained interval-valued optimization problem (P1) considered in this example. Further, note that, for the considered interval-valued optimization problem (P1), it is not possible to apply the results established by Jayswal and Banerjee [37], because not all functions constituting problem (P1) are differentiable and the considered nonsmooth extremum problem is an interval-valued optimization problem with both inequality and equality constraints.
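As an illustrative numerical sanity check (not part of the original analysis), one can tabulate the endpoint functions of the penalized problem \((P1_1(1))\) on a grid; the function names and the grid search below are our own illustrative choices:

```python
# Sketch: evaluate the endpoint functions of the penalized problem (P1_1(c))
# from Example 3.1 for c = 1 on a grid around x_bar = (0, 0).

def penalty(x1, x2, c):
    # c * (g_1^+(x) + g_2^+(x) + |h_1(x)|)
    return c * (max(0.0, x1**2 - 2*x1) + max(0.0, x2**2 + 2*x2) + abs(x1 + x2))

def P1_L(x1, x2, c):
    # lower endpoint: f^L(x) = |x1| + |x2| - x1 + x2 - 1, plus the penalty term
    return abs(x1) + abs(x2) - x1 + x2 - 1 + penalty(x1, x2, c)

def P1_U(x1, x2, c):
    # upper endpoint: f^U(x) = |x1| + |x2| - 1/2, plus the penalty term
    return abs(x1) + abs(x2) - 0.5 + penalty(x1, x2, c)

c = 1.0
grid = [i / 10 for i in range(-30, 31)]
min_L = min(P1_L(a, b, c) for a in grid for b in grid)
min_U = min(P1_U(a, b, c) for a in grid for b in grid)
print(min_L, P1_L(0.0, 0.0, c))  # grid minimum of the lower endpoint vs. its value at (0, 0)
print(min_U, P1_U(0.0, 0.0, c))  # grid minimum of the upper endpoint vs. its value at (0, 0)
```

On this grid, each printed pair coincides, so \((0,0)\) attains the grid minimum of both endpoint functions (the lower endpoint also takes the same value along the segment \(x_2 =-x_1\), \(0\le x_1 \le 2\), which does not contradict the LU-optimality of \((0,0)\), since the upper endpoint is strictly larger there).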

In the next example, we consider a constrained interval-valued optimization problem in which not all involved functions are convex. It turns out that, for such interval-valued optimization problems, the equivalence may not hold between the set of LU-optimal solutions for the given interval-valued minimization problem and the set of LU-minimizers for its associated penalized optimization problem with the interval-valued exact \(l_{1}\) penalty function constructed in the used method.

Example 3.2

Consider the following interval-valued optimization problem:

$$\begin{aligned}&(P2)\;\; \text {min}\quad {f(x)}=\left[ {{1,1}} \right] {x}^{{3}}+\left[ {{1,1}} \right] x^{2}+\left[ {1,1} \right] \,\;\\&\qquad s.t.\;x\in D=\{{x}\in {\textit{IR}}:{g}_{ 1} {(x)}=-{x}\le {0,}\;\;{g}_{ 2} {(x)=}-{x}^{{2}}-{x}\le {0}\}. \end{aligned}$$

Note that \({D}=\{{x}\in {\textit{IR}}:{x}\ge 0\}\) and \(x = 0\) is a LU-optimal solution of the considered interval-valued optimization problem (P2). Further, note that the interval-valued objective function f and the constraint function \(g_{2}\) are not convex on \(\textit{IR}\). However, we use the exact absolute value penalty function method for solving the given constrained interval-valued minimization problem (P2). Then, we construct the following unconstrained interval-valued optimization problem:

$$\begin{aligned}&(P2_1 (c))\quad \min \quad P2_1 {(x,c)=[P2}_{ 1}^\mathrm{L} (x,c),{P2}_{ 1}^\mathrm{U} (x,c)]\\&\qquad =\left[ f^\mathrm{L}(x)+c\left( {g_1^+ (x)+g_2^+ (x)} \right) ,\;f^\mathrm{U}(x)+c\left( {g_1^+ (x)+g_2^+ (x)} \right) \right] \\&\qquad =\left[ {x}^{3}+x^{2}+1+c\left( {\mathrm{max}\{0,-{x}\}+\mathrm{max}\{0,-{x}^{{2}}-{x}\}} \right) ,{x}^{3}+x^{2}+1\right. \\&\qquad \quad \left. +c\left( {\mathrm{max}\{0,-{x}\}+\mathrm{max}\{0,-{x}^{{2}}-{x}\}} \right) \right] . \end{aligned}$$

It is not difficult to show that the penalized optimization problem \((P2_1 (c))\) with the interval-valued exact \(l_{1}\) penalty function does not have a LU-minimizer at \(x = 0\) for any penalty parameter \(c > 0\). This follows from the fact that the rate at which \(f^\mathrm{L}\) and \(f^\mathrm{U}\) decrease exceeds the rate at which the penalty term grows as x moves toward smaller values. Indeed, note that \(P2_1^L (x,c)\rightarrow -\infty \) and \(P2_1^U (x,c)\rightarrow -\infty \) as \(x \rightarrow -\infty \) for any \(c > 0\), so that \(\mathop {\inf }\limits _{x\in \textit{IR}} P2_1^L (x,c)=-\infty \) and \(\mathop {\inf }\limits _{x\in \textit{IR}} P2_1^U (x,c)=-\infty \). In this case, for any value of the penalty parameter \(c > 0\), there is no equivalence between the set of LU-optimal solutions for the given interval-valued extremum problem (P2) and the set of LU-minimizers for its associated penalized optimization problem \((P2_1 (c))\) with the interval-valued exact \(l_{1}\) penalty function constructed in the exact absolute value penalty function method. The lack of this equivalence is a consequence of the fact that the functions constituting the considered constrained interval-valued optimization problem are not convex.
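This unboundedness is easy to confirm numerically; the following sketch (our own illustration, with the endpoint function transcribed from \((P2_1(c))\) above, where both endpoints coincide) evaluates the penalized objective at increasingly negative points:

```python
# Sketch: for problem (P2) from Example 3.2, both endpoint functions of the
# penalized problem (P2_1(c)) coincide and tend to -infinity as x -> -infinity
# for any fixed c > 0, so x = 0 cannot be a LU-minimizer of (P2_1(c)).

def P2_endpoint(x, c):
    # f^L(x) = f^U(x) = x**3 + x**2 + 1, plus c * (g_1^+(x) + g_2^+(x))
    return x**3 + x**2 + 1 + c * (max(0.0, -x) + max(0.0, -x**2 - x))

c = 10.0
values = [P2_endpoint(-10.0**k, c) for k in range(1, 5)]  # x = -10, ..., -10**4
print(values)  # strictly decreasing, unbounded below
```

The cubic term of the objective dominates the at most linearly growing penalty term \(c\,\mathrm{max}\{0,-x\}\), whatever the fixed value of \(c > 0\).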

In the next example, we relax the convexity hypotheses on the functions involved in the considered constrained interval-valued optimization problem to the case when they are generalized convex. Namely, we consider a constrained interval-valued optimization problem in which the objective function is pseudoconvex and the constraint function is quasiconvex. It turns out that, also for such nonconvex interval-valued optimization problems, the equivalence may not hold between the set of LU-optimal solutions for the given interval-valued minimization problem and the set of LU-minimizers for its associated penalized optimization problem with the interval-valued exact \(l_{1}\) penalty function constructed in the used method.

Example 3.3

Consider the following nonconvex interval-valued optimization problem:

$$\begin{aligned}&(P3)\quad {\hbox {min}}\quad f(x)=\left[ {1,1} \right] x^{3}+\left[ {1,1} \right] x+\left[ {0,1} \right] \\&\qquad s.t.\quad x\in D=\{x\in \textit{IR}:g(x)=-\arctan x\le 0\}. \end{aligned}$$

Note that \({D}=\{{x}\in {\textit{IR}}:-\arctan x\le 0\}=\{{x}\in {\textit{IR}}:{x}\ge 0\}\) and \(x = 0\) is a LU-optimal solution of the considered interval-valued optimization problem (P3). Further, it is not difficult to show that the objective functions \(f^\mathrm{L}\) and \(f^\mathrm{U}\) are pseudoconvex on \(\textit{IR}\) and the constraint function g is quasiconvex on \(\textit{IR}\). Although the considered constrained interval-valued minimization problem (P3) is nonconvex, we use the exact \(l_{1}\) penalty function method for solving it. Therefore, we construct the following unconstrained optimization problem with the interval-valued objective function:

$$\begin{aligned}&(P{3_1}(c))\;\;\text {min}\;\; P{3_1}(x,c)=[P3_{1}^\mathrm{L} (x,c),P3_{1}^\mathrm{U} (x,c)]\\&\quad =[f^\mathrm{L}(x)+c\, g^{+}(x),\;f^\mathrm{U}(x)+c\, g^{+}(x)]\\&\quad =[x^{3}+x+c\, \mathrm{max}\{ 0,-\arctan x\},\;x^{3}+x+1+c\, \mathrm{max}\{0,-\arctan x\}]. \end{aligned}$$

It is not difficult to show that the penalized optimization problem \((P3_1 (c))\) with the interval-valued exact \(l_{1}\) penalty function does not have a LU-minimizer at \(x = 0\) for any penalty parameter \(c > 0\). This also follows from the fact that the rate at which \(f^\mathrm{L}\) and \(f^\mathrm{U}\) decrease exceeds the rate at which the penalty term grows as x moves toward smaller values. Indeed, note that \(P3_1^L (x,c)\rightarrow -\infty \) and \(P3_1^U (x,c)\rightarrow -\infty \) as \(x \rightarrow -\infty \) for any \(c > 0\), so that \(\mathop {\inf }\limits _{x\in \textit{IR}} P3_1^L (x,c)=-\infty \) and \(\mathop {\inf }\limits _{x\in \textit{IR}} P3_1^U (x,c)=-\infty \). From this example, it follows that if the objective function is pseudoconvex and the constraint functions are quasiconvex in the constrained interval-valued minimization problem, then the equivalence between LU-minimizers in this optimization problem and in its associated penalized optimization problem \((P3_1 (c))\) with the interval-valued exact \(l_{1}\) penalty function constructed in the exact absolute value penalty function method may fail to hold. In other words, there need not exist a threshold of the penalty parameter c such that, for all penalty parameters exceeding this threshold, the set of LU-optimal solutions for the given interval-valued extremum problem coincides with the set of LU-minimizers for its associated penalized optimization problem with the interval-valued exact \(l_{1}\) penalty function constructed in the used penalty method.
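Again, a short numerical sketch (our own illustration) confirms the unboundedness; here the penalty term \(c\,\mathrm{max}\{0,-\arctan x\}\) is even bounded above by \(c\pi /2\):

```python
import math

# Sketch: for problem (P3) from Example 3.3, the lower endpoint function of
# (P3_1(c)) tends to -infinity as x -> -infinity for any fixed c > 0, since
# the penalty term c * max(0, -arctan(x)) is bounded above by c * pi / 2.

def P3_L(x, c):
    # lower endpoint: f^L(x) = x**3 + x, plus the (bounded) penalty term
    return x**3 + x + c * max(0.0, -math.atan(x))

def P3_U(x, c):
    # upper endpoint: f^U(x) = f^L(x) + 1
    return P3_L(x, c) + 1.0

c = 100.0
vals_L = [P3_L(-10.0**k, c) for k in range(1, 4)]  # x = -10, -100, -1000
print(vals_L)  # strictly decreasing, unbounded below for any fixed c
```

Since the penalty term is bounded while the cubic is not, no finite penalty parameter can make \(x = 0\) a LU-minimizer of \((P3_1(c))\).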

4 Conclusions

In the paper, the classical exact \(l_{1}\) penalty function method has been used for solving nondifferentiable interval-valued optimization problems with both inequality and equality constraints. In this method, an associated penalized optimization problem with the interval-valued exact \(l_{1}\) penalty function is constructed for the considered interval-valued minimization problem. The conditions guaranteeing the equivalence between a LU-optimal solution of the considered nondifferentiable interval-valued optimization problem and a LU-minimizer of its associated penalized optimization problem with the interval-valued exact \(l_{1}\) penalty function have been established under convexity hypotheses. The results established in the paper extend and improve those established in [37] for differentiable interval-valued optimization problems with inequality constraints only.

However, some interesting topics for further research remain. It would be of interest to investigate whether these results remain true for larger classes of constrained interval-valued optimization problems, for example, for some classes of nonconvex interval-valued extremum problems. Thus, further research can also focus on the usefulness of the exact \(l_{1}\) penalty function method in solving various classes of nonconvex interval-valued optimization problems. We shall investigate these questions in subsequent papers.