1 Introduction

Fuzzy optimization problems were developed to formulate real-world problems that are usually vague, imprecise, and not well defined. The basic concept of fuzzy decision making was first proposed by Bellman and Zadeh (1970). Since then, fuzzy mathematical programming problems have been studied extensively by many authors. Lai and Hwang (1992), Delgado et al. (1994) and Słowinski (1998) gave insightful surveys summarizing the main ideas on this topic.

Convexity of fuzzy mappings plays a central role in fuzzy mathematics and fuzzy optimization. Since Nanda and Kar (1992) proposed the concept of a convex fuzzy mapping, the convexity of fuzzy mappings and its applications to fuzzy optimization have been studied widely and deeply (see, for example, Ammar 1992; Ammar and Metz 1992; Chalco-Cano et al. 2016; Gong and Hai 2016; Li and Noor 2013; Panigrahi et al. 2008; Syau and Lee 2006; Wang and Wu 2003; Yan and Xu 2002; Zhang et al. 2006, and others). The nondominated solution of a nonlinear optimization problem with a fuzzy-valued objective function was proposed by Wu (2007, 2008). Using the concept of continuous differentiability of fuzzy-valued functions and the \(\alpha \)-cuts to describe the fuzzy objective function, he derived sufficient optimality conditions under (generalized) convexity for obtaining a nondominated solution of a fuzzy optimization problem with a fuzzy-valued objective function and real-valued inequality constraints.

Various methods have been proposed in optimization theory for solving nonlinear optimization problems arising in real life. One of them is the class of exact penalty function methods. Exact penalty function methods for finding an optimal solution of a constrained optimization problem are based on the construction of a (penalty) function whose unconstrained minimizers are also solutions of the constrained extremum problem. One of the most frequently used types of exact penalty functions for solving a constrained optimization problem is the \(l_{1}\) exact penalty function, also known as the exact absolute value penalty function (see, for example, Antczak 2009, 2011, 2013, 2018; Bazaraa et al. 1991; Bertsekas 1982; Bertsekas and Koksal 2000; Bonnans et al. 2003; Binh 2015; Charalambous 1978; Han and Mangasarian 1979; Janesch and Santos 1997; Mangasarian 1985; Peressini et al. 1988; Rosenberg 1984; Sun and Yuan 2006; Wang and Liu 2010; and others). Recently, Antczak (2012) used the vector \(l_{1}\) exact penalty function method for solving convex vector optimization problems with inequality constraints, and he investigated the main property of this method for such convex nonsmooth multiobjective programming problems.

Although many interesting explorations have been made in the study of optimality conditions for fuzzy optimization problems from different viewpoints, it seems that not much progress has been made in introducing methods for solving such extremum problems. Therefore, in the present paper, the \(l_{1}\) exact penalty function method is applied to solve a nondifferentiable optimization problem with a fuzzy objective function and both real-valued inequality and equality constraints. For the considered fuzzy optimization problem, its associated fuzzy penalized optimization problem with the fuzzy \(l_{1}\) exact penalty function is constructed. Further, this paper focuses on the main property of the \(l_{1}\) exact penalty function method, namely the exactness of the penalization. This property is extended to the case when the method is used for finding (weakly) nondominated solutions of the considered fuzzy optimization problem. Thus, the equivalence between the sets of (weakly) nondominated solutions of a nondifferentiable optimization problem with a fuzzy objective function and of its associated fuzzy penalized optimization problem with the fuzzy absolute value penalty function is established under the assumptions that the fuzzy objective function and the inequality constraints are convex and the equality constraints are affine. This result is illustrated by an example of an optimization problem with a fuzzy objective function which is solved by the \(l_{1}\) exact penalty function method.

2 Notations and preliminaries

We first recall some notation, definitions, and results which will be needed in the sequel. Throughout this paper, R denotes the set of all real numbers, endowed with the usual topology.

The fuzzy subset \(\widetilde{u}\) of R is defined by a mapping \(\xi _{\widetilde{u}}:R\rightarrow \left[ 0,1\right] \), which is called a membership function. For each fuzzy set \(\widetilde{u}\), we denote its \(\alpha \)-level set by \(\widetilde{u}_{\alpha }\), defined as \(\widetilde{u}_{\alpha }=\left\{ x\in R:\xi _{\widetilde{u}}\left( x\right) \ge \alpha \right\} \) for each \(\alpha \in (0,1]\). We denote the support of \(\widetilde{u}\) by \(supp\left( \widetilde{u}\right) \), where \(supp\left( \widetilde{u}\right) =\left\{ x\in R:\xi _{\widetilde{u}}\left( x\right) >0\right\} \). The 0-level set \(\widetilde{u}_{0}\) is defined as the closure of the support, i.e., \(\widetilde{u}_{0}=cl\left( supp\left( \widetilde{u}\right) \right) \), where \(cl\left( S\right) \) denotes the closure of a subset \(S\subset R^{n}\) (see, for example, Panigrahi et al. (2008)).

A fuzzy number \(\widetilde{u}\) is a type of a fuzzy set (see Dubois and Prade 1978, 1980) defined as follows:

Definition 1

(Wu 2007, 2008) We denote by \(\mathcal {F}\left( R\right) \) the set of all fuzzy numbers, that is, of all fuzzy subsets \(\widetilde{u}\) of R whose membership function \(\xi _{\widetilde{u}}\) satisfies the following requirements:

(a) \(\widetilde{u}\) is normal, i.e., there exists \(x^{*}\in R\) such that \(\xi _{\widetilde{u}}\left( x^{*}\right) =1\),

(b) \(\xi _{\widetilde{u}}\) is an upper semi-continuous function, i.e., \( \left\{ x:\xi _{\widetilde{u}}\left( x\right) \ge \alpha \right\} \) is a closed subset of R for each \(\alpha \in (0,1]\),

(c) \(\xi _{\widetilde{u}}\) is quasi-concave, i.e., \(\xi _{\widetilde{u} }\left( \lambda x+\left( 1-\lambda \right) y\right) \ge \min \left\{ \xi _{ \widetilde{u}}\left( x\right) ,\xi _{\widetilde{u}}\left( y\right) \right\} \) for all \(x,y\in R\) and any \(\lambda \in \left[ 0,1\right] \),

(d) the 0-level set \(\widetilde{u}_{0}\) is a compact subset of R.

If \(\widetilde{u}\) is a fuzzy number, then, by condition (b) in Definition 1, its \(\alpha \)-level set \(\widetilde{u}_{\alpha }\) is a closed subset of R for each \(\alpha \in \left[ 0,1\right] \), and, by condition (c), it is a convex subset of R. Combining these facts with condition (d) in Definition 1, \(\widetilde{u}_{\alpha }\) is a bounded closed interval in R for each \(\alpha \in \left[ 0,1\right] \). Therefore, the \(\alpha \)-levels of a fuzzy number can be written for each \(\alpha \in \left[ 0,1\right] \) as \(\widetilde{u}_{\alpha }=\left[ \underline{u}_{\alpha },\overline{u}_{\alpha }\right] \), where \(\underline{u}_{\alpha },\overline{u}_{\alpha }\in R\) and \(\underline{u}_{\alpha }\le \overline{u}_{\alpha }\).

Also any \(a\in R\) can be regarded as a fuzzy number \(\widetilde{a}\) with the membership function defined by

$$\begin{aligned} \xi _{\widetilde{a}}\left( x\right) =\left\{ \begin{array}{ccc} 1 &{}\quad \text {if} &{} x=a, \\ 0 &{}\quad \text {if} &{} x\ne a. \end{array} \right. \end{aligned}$$

Definition 2

(Wu 2007) Let \(\widetilde{u}\) be a fuzzy number. We say that \(\widetilde{u} \) is nonnegative if \(\underline{u}_{\alpha }\ge 0\) for each \(\alpha \in \left[ 0,1\right] \). We say that \(\widetilde{u}\) is positive if \(\underline{u }_{\alpha }>0\) for each \(\alpha \in \left[ 0,1\right] \).

Definition 3

(Panigrahi et al. 2008) A fuzzy number \(\widetilde{u}\) with \(\alpha \)-levels \(\widetilde{u}_{\alpha }=\left[ \underline{u}_{\alpha },\overline{u}_{\alpha }\right] \) is said to be a triangular fuzzy number if its endpoint functions \(\underline{u}_{\alpha }\) and \(\overline{u}_{\alpha }\) are linear in \(\alpha \) and \(\underline{u}_{1}=\overline{u}_{1}\). We denote a triangular fuzzy number by \(\widetilde{u}=\left( \underline{u},u,\overline{u}\right) .\)

Example 4

Let \(\widetilde{u}=\left( u_{1},u_{2},u_{3}\right) \) be a triangular fuzzy number. Its membership function is defined by

$$\begin{aligned} \xi _{\widetilde{u}}\left( x\right) =\left\{ \begin{array}{ccc} \frac{x-u_{1}}{u_{2}-u_{1}} &{}\quad \text {if} &{} u_{1}\le x\le u_{2}, \\ \frac{u_{3}-x}{u_{3}-u_{2}} &{}\quad \text {if} &{} u_{2}\le x\le u_{3}, \\ 0 &{} &{} \text {otherwise.} \end{array} \right. \end{aligned}$$

The \(\alpha \)-level set (a closed interval) of a triangular fuzzy number\(\ \widetilde{u}\) is then given as follows:

$$\begin{aligned} \widetilde{u}_{\alpha }=\left[ \underline{u}_{\alpha },\overline{u}_{\alpha } \right] =\left[ \left( 1-\alpha \right) u_{1}+\alpha u_{2}\text { , }\left( 1-\alpha \right) u_{3}+\alpha u_{2}\right] . \end{aligned}$$
(1)
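Formula (1) is immediate to evaluate numerically. The following minimal sketch (the helper name is illustrative and not part of the formal development) computes the \(\alpha \)-cuts of a triangular fuzzy number:

```python
# Alpha-cuts of a triangular fuzzy number u = (u1, u2, u3) via formula (1).

def alpha_cut(u1: float, u2: float, u3: float, alpha: float) -> tuple:
    """Return the closed interval given by (1) at the level alpha."""
    lower = (1.0 - alpha) * u1 + alpha * u2
    upper = (1.0 - alpha) * u3 + alpha * u2
    return (lower, upper)

# For u = (1, 2, 4): the 0-cut is the support [1, 4], the 1-cut is the peak {2}.
print(alpha_cut(1, 2, 4, 0.0))   # (1.0, 4.0)
print(alpha_cut(1, 2, 4, 0.5))   # (1.5, 3.0)
print(alpha_cut(1, 2, 4, 1.0))   # (2.0, 2.0)
```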

Given two fuzzy numbers \(\widetilde{u},\widetilde{v}\in \mathcal {F}\left( R\right) \) represented by \(\widetilde{u}_{\alpha }=\left[ \underline{u}_{\alpha },\overline{u}_{\alpha }\right] \) and \(\widetilde{v}_{\alpha }=\left[ \underline{v}_{\alpha },\overline{v}_{\alpha }\right] \), respectively, and a real number k, the fuzzy addition \(\widetilde{u}+\widetilde{v}\) and the scalar multiplication \(k\widetilde{u}\) are defined as follows (see Panigrahi et al. 2008; Rufián-Lizana et al. 2012; Wang and Wu 2003):

(a) \(\xi _{\left( \widetilde{u}+\widetilde{v}\right) }\left( x\right) = \underset{y+z=x}{\sup }\min [\xi _{\widetilde{u}}\left( y\right) ,\xi _{ \widetilde{v}}\left( z\right) ]\),

(b) \(\xi _{k\widetilde{u}}\left( x\right) =\left\{ \begin{array}{ccc} \xi _{\widetilde{u}}\left( \frac{x}{k}\right) , &{} \text {if} &{} k\ne 0, \\ \xi _{\widetilde{0}}\left( x\right) , &{} \text {if} &{} k=0, \end{array} \right. \) where \(\widetilde{0}\in \mathcal {F}\left( R\right) \) denotes the crisp number 0.

The above operations on fuzzy numbers can equivalently be defined levelwise. Namely, for every \(\alpha \in \left[ 0,1\right] \), we have:

$$\begin{aligned} \left[ \widetilde{u}+\widetilde{v}\right] _{\alpha }=\left[ \underline{ \left( u+v\right) }_{\alpha },\overline{\left( u+v\right) }_{\alpha } \right] =\left[ \underline{u}_{\alpha }+\underline{v}_{\alpha },\overline{u} _{\alpha }+\overline{v}_{\alpha }\right] \end{aligned}$$
(2)

and

$$\begin{aligned} \left[ k\widetilde{u}\right] _{\alpha }{=}\left[ \left( \underline{ku}\right) _{\alpha },\left( \overline{ku}\right) _{\alpha }\right] {=}\left[ \min \left\{ k\underline{u}_{\alpha },k\overline{u}_{\alpha }\right\} ,\max \left\{ k\underline{u}_{\alpha },k\overline{u}_{\alpha }\right\} \right] . \end{aligned}$$
(3)
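Numerically, (2) and (3) act endpointwise on the \(\alpha \)-cuts; in particular, a negative scalar swaps the endpoints. A minimal sketch (with illustrative helper names, not part of the formal development):

```python
# Levelwise fuzzy arithmetic on alpha-cuts, following (2) and (3).

def add_cuts(u_cut, v_cut):
    """Interval addition (2): [u + v]_alpha = [u_l + v_l, u_u + v_u]."""
    (u_l, u_u), (v_l, v_u) = u_cut, v_cut
    return (u_l + v_l, u_u + v_u)

def scale_cut(k, u_cut):
    """Scalar multiplication (3): the endpoints swap when k < 0."""
    (u_l, u_u) = u_cut
    return (min(k * u_l, k * u_u), max(k * u_l, k * u_u))

u, v = (1.0, 3.0), (0.5, 2.0)   # alpha-cuts of u and v at some fixed alpha
print(add_cuts(u, v))           # (1.5, 5.0)
print(scale_cut(-2.0, u))       # (-6.0, -2.0)
```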

Definition 5

(Wu 2007) Let \(\widetilde{u}\) and \(\widetilde{v}\) be two fuzzy numbers. If there exists a unique fuzzy number \(\widetilde{w}\in \mathcal {F}\left( R\right) \) such that \(\widetilde{v}+\widetilde{w}=\widetilde{u}\) (note that addition is commutative), then \(\widetilde{w}\) is called the Hukuhara difference of \(\widetilde{u}\) and \(\widetilde{v}\), and it is denoted by \(\widetilde{u}\ominus _{H}\widetilde{v}\).

In order to compare two fuzzy numbers, various definitions generalizing the order relations on intervals have been proposed in the recent literature (see, for example, Guerra and Stefanini (2012)). In this paper, we use the partial orderings proposed by Wu (2008), which resemble the concepts used for multiobjective programming problems.

Let \(\widetilde{u},\widetilde{v}\in \mathcal {F}\left( R\right) \) be two given fuzzy numbers represented by \(\widetilde{u}_{\alpha }=\left[ \underline{u}_{\alpha },\overline{u}_{\alpha }\right] \) and \(\widetilde{v}_{\alpha }=\left[ \underline{v}_{\alpha },\overline{v}_{\alpha }\right] \), respectively.

Definition 6

(Wu 2008) We say that \(\widetilde{u}\) dominates (is better than) \(\widetilde{v}\) if and only if \(\widetilde{u}_{\alpha }\preceq \widetilde{v}_{\alpha }\) for all \(\alpha \in \left[ 0,1\right] \). In other words, \(\widetilde{u}\) dominates (is better than) \(\widetilde{v}\) if and only if

$$\begin{aligned} \left\{ \begin{array}{c} {\underline{u}_{\alpha }}<{\underline{v}_{\alpha }} \\ {\overline{u}_{\alpha }}\le {\overline{v}_{\alpha }} \end{array} \right. \text {or }\left\{ \begin{array}{c} {\underline{u}_{\alpha }}\le {\underline{v}_{\alpha }} \\ {\overline{u}_{\alpha }}<{\overline{v}_{\alpha }} \end{array} \right. \text {or }\left\{ \begin{array}{c} {\underline{u}_{\alpha }}<{\underline{v}_{\alpha }} \\ {\overline{u}_{\alpha }}<{\overline{v}_{\alpha }} \end{array} \right. \text { for all }\alpha \in \left[ 0,1\right] . \end{aligned}$$

Definition 7

(Wu 2008) We say that \(\widetilde{u}\) strongly dominates \(\widetilde{v}\) if and only if \(\widetilde{u}_{\alpha }\prec \widetilde{v}_{\alpha }\) for all \(\alpha \in \left[ 0,1\right] \). In other words, \(\widetilde{u}\) strongly dominates \(\widetilde{v}\) if and only if

$$\begin{aligned}&\left\{ \begin{array}{c} \underline{u}_{\alpha }<\underline{v}_{\alpha } \\ \overline{u}_{\alpha }\le \overline{v}_{\alpha } \end{array} \right. \text { for all }\alpha \in \left[ 0,1\right] \\&\text {or }\left\{ \begin{array}{c} \underline{u}_{\alpha }\le \underline{v}_{\alpha } \\ \overline{u}_{\alpha }<\overline{v}_{\alpha } \end{array} \right. \text { for all }\alpha \in \left[ 0,1\right] \text { \ \ \ } \\&\text {or }\left\{ \begin{array}{c} \underline{u}_{\alpha }<\underline{v}_{\alpha } \\ \overline{u}_{\alpha }<\overline{v}_{\alpha } \end{array} \right. \text { for all }\alpha \in \left[ 0,1\right] . \end{aligned}$$

Remark 8

It is not difficult to see that if \(\widetilde{u}\) strongly dominates \( \widetilde{v}\), then \(\widetilde{u}\) dominates \(\widetilde{v}\).
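These level-wise orderings can be tested numerically only on a finite grid of \(\alpha \)-levels, which merely approximates the quantifier "for all \(\alpha \in \left[ 0,1\right] \)". A rough sketch of the level-wise check behind Definition 7 (all names are illustrative):

```python
# Check whether u strongly dominates v on a finite grid of alpha-levels:
# at every sampled alpha the interval u_alpha must dominate v_alpha
# (both endpoints <=, at least one strict), cf. Definition 7.

def strongly_dominates(u_cut, v_cut, alphas):
    for a in alphas:
        u_l, u_u = u_cut(a)
        v_l, v_u = v_cut(a)
        if not (u_l <= v_l and u_u <= v_u and (u_l < v_l or u_u < v_u)):
            return False
    return True

# Triangular fuzzy numbers u = (0, 1, 2) and v = (1, 2, 3) via formula (1).
u = lambda a: (a, 2.0 - a)
v = lambda a: (1.0 + a, 3.0 - a)
print(strongly_dominates(u, v, [i / 10 for i in range(11)]))   # True
```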

Now, we recall some definitions and results for nondifferentiable and convex functions.

Let X be a nonempty convex subset of \(R^{n}\). We recall that a crisp function \(f:X\rightarrow R\) is a (strictly) convex function on X if and only if the inequality

$$\begin{aligned} f\left( \lambda y+(1-\lambda )x\right) \le \lambda f(y)+(1-\lambda )f(x) \text { \ }\left( <\right) \end{aligned}$$

holds for all \(y,x\in X\), \(\left( y\ne x\right) \) and any \(\lambda \in \left[ 0,1\right] \), \((\lambda \in \left( 0,1\right) )\).

Definition 9

(Rockafellar 1970) The subdifferential of a (nondifferentiable) convex crisp function \( f:X\rightarrow R\) at \(\widehat{x}\in X\) is defined as follows:

$$\begin{aligned} \partial f(\widehat{x}):=\left\{ \xi \in R^{n}:f(y)-f(\widehat{x})\ge \xi ^{T}\left( y-\widehat{x}\right) \text { for all }y\in X\right\} . \end{aligned}$$

Remark 10

As follows from the definition of a convex function \(f:X\rightarrow R\) on X and the definition of its subdifferential at \(\widehat{x}\), the inequality

$$\begin{aligned} f(y)-f(\widehat{x})\ge \xi ^{T}\left( y-\widehat{x}\right) \end{aligned}$$

holds for all \(y\in X\) and any \(\xi \in \partial f(\widehat{x})\).
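For instance, for the convex function \(f(x)=\left| x\right| \) on \(X=R\), Definition 9 gives

$$\begin{aligned} \partial f(0)=\left\{ \xi \in R:\left| y\right| \ge \xi y\text { for all }y\in R\right\} =\left[ -1,1\right] , \end{aligned}$$

while \(\partial f(\widehat{x})=\left\{ 1\right\} \) for \(\widehat{x}>0\) and \(\partial f(\widehat{x})=\left\{ -1\right\} \) for \(\widehat{x}<0\).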

Lemma 11

(Clarke 1983) Let \(f:X\rightarrow R\) be a locally Lipschitz function on a nonempty open set \(X\subset R^{n}\), let u be an arbitrary point of X and let \(\lambda \in R\). Then,

$$\begin{aligned} \partial \left( \lambda f\right) \left( u\right) \subseteq \lambda \partial f\left( u\right) . \end{aligned}$$

Proposition 12

(Clarke 1983) Let \(f_{i}:X\rightarrow R\), \(i=1,\ldots ,k\), be convex functions on a nonempty open set \(X\subset R^{n}\) and let u be an arbitrary point of X. Then

$$\begin{aligned} \partial \left( \sum _{i=1}^{k}f_{i}\right) \left( u\right) \subseteq \sum _{i=1}^{k}\partial f_{i}\left( u\right) . \end{aligned}$$

Equality holds in the above relation if all but at most one of the functions \(f_{i}\) are strictly differentiable at u.

Corollary 13

(Clarke 1983) For any scalars \(\lambda _{i}\), one has

$$\begin{aligned} \partial \left( \sum _{i=1}^{k}\lambda _{i}f_{i}\right) \left( u\right) \subseteq \sum _{i=1}^{k}\lambda _{i}\partial f_{i}\left( u\right) , \end{aligned}$$

and equality holds if all but at most one of the \(f_{i}\) are strictly differentiable at u.

Remark 14

If each \(f_{i}\) is convex at u, equality holds in Proposition 12. Equality then holds in Corollary 13 as well, if in addition each \(\lambda _{i}\) is nonnegative.

Now, we recall the definition of a fuzzy function (see, for example, Panigrahi et al. (2008)).

Definition 15

(Panigrahi et al. 2008) Let X be a nonempty subset of \(R^{n}\) and \(\widetilde{f}:X\rightarrow \mathcal {F}\left( R\right) \) be a fuzzy mapping. The \(\alpha \)-cut of \(\widetilde{f}\) at \(x\in X\), which is a closed and bounded interval for each \(\alpha \in \left[ 0,1\right] \), is denoted by:

$$\begin{aligned} \widetilde{f}_{\alpha }\left( x\right) =\left[ \underline{f}_{\alpha }\left( x\right) \text { },\text { }\overline{f}_{\alpha }\left( x\right) \right] , \end{aligned}$$
(4)

where \(\underline{f}_{\alpha }\left( x\right) =\min \widetilde{f}_{\alpha }\left( x\right) \) and \(\overline{f}_{\alpha }\left( x\right) =\max \widetilde{f}_{\alpha }\left( x\right) \). Thus, \(\widetilde{f}\) can be understood via two functions \(\underline{f}_{\alpha }\) and \(\overline{f}_{\alpha }\) from \(X\times \left[ 0,1\right] \) to R, where \(\underline{f}_{\alpha }\left( x\right) \) is a bounded increasing function of \(\alpha \), \(\overline{f}_{\alpha }\left( x\right) \) is a bounded decreasing function of \(\alpha \) and, moreover, \(\overline{f}_{\alpha }\left( x\right) \ge \underline{f}_{\alpha }\left( x\right) \) for all \(x\in X\) and each \(\alpha \in \left[ 0,1\right] \). Here, the endpoint functions \(\underline{f}_{\alpha },\overline{f}_{\alpha }:X\times \left[ 0,1\right] \rightarrow R\) are called the lower and upper functions of \(\widetilde{f}\), respectively.
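As a simple illustration of Definition 15 (constructed here via (1), not taken from the cited works), let \(\widetilde{f}:R\rightarrow \mathcal {F}\left( R\right) \) assign to each x the triangular fuzzy number \(\widetilde{f}\left( x\right) =\left( x^{2},2x^{2},3x^{2}\right) \). Then, for each \(\alpha \in \left[ 0,1\right] \),

$$\begin{aligned} \widetilde{f}_{\alpha }\left( x\right) =\left[ \left( 1+\alpha \right) x^{2},\left( 3-\alpha \right) x^{2}\right] , \end{aligned}$$

so \(\underline{f}_{\alpha }\left( x\right) =\left( 1+\alpha \right) x^{2}\) increases and \(\overline{f}_{\alpha }\left( x\right) =\left( 3-\alpha \right) x^{2}\) decreases in \(\alpha \), as required.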

In the paper, we consider fuzzy functions \(\widetilde{f}:R^{n}\rightarrow \mathcal {F}\left( R\right) \) such that their endpoint functions \(\underline{f} _{\alpha }\) and \(\overline{f}_{\alpha }\) are defined at a given point x of interest for each \(\alpha \in \left[ 0,1\right] \).

Proposition 16

(Wu 2007) Let X be a nonempty convex subset of \(R^{n}\) and \(\widetilde{f}:X\rightarrow \mathcal {F} \left( R\right) \) be a fuzzy function defined on X. Then, \(\widetilde{f}\) is convex on X if and only if the functions \(\underline{f}_{\alpha }\), \( \overline{f}_{\alpha }\) are convex on X for each \(\alpha \in \left[ 0,1 \right] \).

Definition 17

The one-sided directional \(\alpha \)-derivative of the fuzzy function \(\widetilde{f}\) (given by (4)) at \(\widehat{x}\) for some \(\alpha \)-cut in the direction d is defined via the one-sided directional \(\alpha \)-derivatives of the lower and upper functions \(\underline{f}_{\alpha }\) and \(\overline{f}_{\alpha }\) at \(\widehat{x}\) in the direction d as follows

$$\begin{aligned}&\widetilde{f}_{\alpha }^{\prime }\left( \widehat{x};d\right) :=\left( \underset{t\downarrow 0}{\lim }\frac{\underline{f}_{\alpha }\left( \widehat{x }+td\right) -\underline{f}_{\alpha }\left( \widehat{x}\right) }{t},\underset{ t\downarrow 0}{\lim }\frac{\overline{f}_{\alpha }\left( \widehat{x} +td\right) -\overline{f}_{\alpha }\left( \widehat{x}\right) }{t}\right) \\&\quad :=\left( \underline{f^{\prime }}_{\alpha }\left( \widehat{x};d\right) , \overline{f^{\prime }}_{\alpha }\left( \widehat{x};d\right) \right) . \end{aligned}$$

Definition 18

We say that the fuzzy function \(\widetilde{f}:X\rightarrow \mathcal {F}\left( R\right) \) is (one-sided) directionally differentiable at \(\widehat{x}\) if \( \widetilde{f}_{\alpha }^{\prime }\left( \widehat{x};d\right) \) exists for each direction d and for all \(\alpha \)-cuts, i.e., for each \(\alpha \in \left[ 0,1\right] \).

Definition 19

Let the convex fuzzy function \(\widetilde{f}:X\rightarrow \mathcal {F}\left( R\right) \) admit the directional \(\alpha \)-derivative at \(\widehat{x}\) in each direction \(d\in R^{n}\) for some \(\alpha \)-cut. The subdifferential of the convex fuzzy function \(\widetilde{f}\) on this \(\alpha \)-cut is defined as the pair of subdifferentials at \(\widehat{x}\) of the functions \(\underline{f}_{\alpha }\) and \(\overline{f}_{\alpha }\) as follows

$$\begin{aligned} \partial \widetilde{f}_{\alpha }(\widehat{x}):=\left( \partial \underline{f} _{\alpha }\left( \widehat{x}\right) ,\partial \overline{f}_{\alpha }\left( \widehat{x}\right) \right) , \end{aligned}$$

where \(\partial \underline{f}_{\alpha }\left( \widehat{x}\right) :=\left\{ \underline{\xi }\in R^{n}:\underline{f^{\prime }}_{\alpha }\left( \widehat{x};d\right) \geqq \underline{\xi }^{T}d\text { for all }d\in R^{n}\right\} \) and \(\partial \overline{f}_{\alpha }\left( \widehat{x}\right) :=\left\{ \overline{\xi }\in R^{n}:\overline{f^{\prime }}_{\alpha }\left( \widehat{x};d\right) \geqq \overline{\xi }^{T}d\text { for all }d\in R^{n}\right\} \).
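For instance (an illustration of ours, not taken from the cited works), consider the convex fuzzy function \(\widetilde{f}\) with endpoint functions \(\underline{f}_{\alpha }\left( x\right) =\left( 1+\alpha \right) \left| x\right| \) and \(\overline{f}_{\alpha }\left( x\right) =\left( 3-\alpha \right) \left| x\right| \) on R. Since \(\underline{f^{\prime }}_{\alpha }\left( 0;d\right) =\left( 1+\alpha \right) \left| d\right| \) and \(\overline{f^{\prime }}_{\alpha }\left( 0;d\right) =\left( 3-\alpha \right) \left| d\right| \), Definition 19 yields

$$\begin{aligned} \partial \widetilde{f}_{\alpha }(0)=\left( \left[ -\left( 1+\alpha \right) ,1+\alpha \right] ,\left[ -\left( 3-\alpha \right) ,3-\alpha \right] \right) . \end{aligned}$$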

In the paper, we consider fuzzy functions \(\widetilde{f}:R^{n}\rightarrow \mathcal {F}\left( R\right) \) such that their endpoint functions \(\underline{f}_{\alpha }\) and \(\overline{f}_{\alpha }\) are locally Lipschitz at a given point x of interest for each \(\alpha \in \left[ 0,1\right] \).

Proposition 20

Let \(\widetilde{f}:X\rightarrow \mathcal {F}\left( R\right) \) be a (strictly) convex fuzzy function and \( \widehat{x}\) be a given point. Assume that \(\widetilde{f}\) admits the directional \(\alpha \)-derivative at \(\widehat{x}\) in each direction \(d\in R^{n}\) for some \(\alpha \)-cut. Then, the following inequalities

$$\begin{aligned}&\underline{f}_{\alpha }(x)-\underline{f}_{\alpha }(\widehat{x})\ge \underline{\xi }^{T}\left( x-\widehat{x}\right) \forall \underline{\xi }\in \partial \underline{f}_{\alpha }\left( \widehat{x} \right) ,\,\left( >\right) \end{aligned}$$
(5)
$$\begin{aligned}&\overline{f}_{\alpha }(x)-\overline{f}_{\alpha }(\widehat{x})\ge \overline{ \xi }^{T}\left( x-\widehat{x}\right) ,\forall \overline{\xi }\in \partial \overline{f}_{\alpha }\left( \widehat{x}\right) \,\left( >\right) \end{aligned}$$
(6)

hold for all \(x\in R^{n}\), (\(x\ne \widehat{x}\)).

3 Nondifferentiable convex fuzzy optimization problem and optimality

In the paper, we consider the constrained optimization problem with a fuzzy-valued objective function and both inequality and equality constraints defined by:

$$\begin{aligned} \begin{array}{c} \text {minimize }\widetilde{f}(x)\\ \text {subject to}\quad g_{j}(x)\le 0,j\in J=\left\{ 1,\ldots ,m\right\} ,\\ h_{i}\left( x\right) =0,i\in I=\left\{ 1,\ldots ,r\right\} ,\\ x\in X, \end{array} \qquad \text {(FO)} \end{aligned}$$

where X is a nonempty convex open subset of \(R^{n}\), \(\widetilde{f} :X\rightarrow \mathcal {F}\left( R\right) \) is a fuzzy function and \( g_{j}:X\rightarrow R\), \(j\in J\), \(h_{i}:X\rightarrow R\), \(i\in I\), are real-valued functions defined on X. Let \(D:=\left\{ x\in X:g_{j}(x)\le 0,j\in J,h_{i}\left( x\right) =0,i\in I\right\} \) be the set of all feasible solutions of the considered fuzzy optimization problem (FO). Further, we denote the set of active inequality constraints at point \(\widehat{x}\in X\) by \(J\left( \widehat{x}\right) =\left\{ j\in J:g_{j}\left( \widehat{x}\right) =0\right\} \).

In this paper, the \(\alpha \)-cuts are used to describe the fuzzy objective function \(\widetilde{f}\), as was done by Wu (2007). Therefore, it is assumed that its left- and right-hand side values are given by the endpoint functions \(\underline{f}_{\alpha }:X\times \left[ 0,1\right] \rightarrow R\) and \(\overline{f}_{\alpha }:X\times \left[ 0,1\right] \rightarrow R\) for each \(\alpha \in \left[ 0,1\right] \), respectively. Throughout the paper, we shall assume that all functions constituting the fuzzy optimization problem (FO), that is, the functions \(\underline{f}_{\alpha }\) and \(\overline{f}_{\alpha }\) for each \(\alpha \in \left[ 0,1\right] \) and the constraint functions \(g_{j}\), \(j\in J\), and \(h_{i}\), \(i\in I\), are locally Lipschitz on X.

Since “\(\preceq \)” and “\(\prec \)” are partial orderings on \(\mathcal {F}\left( R\right) \), we may follow solution concepts similar to those used for multiobjective programming problems. Namely, we use the weakly nondominated and nondominated solutions defined by Wu (2008).

Definition 21

(Wu 2008) It is said that a feasible solution \(\widehat{x}\) of the considered constrained optimization problem (FO) with the fuzzy-valued objective function is its weakly nondominated solution if there is no other \(x\in D\) such that

$$\begin{aligned} \widetilde{f}(x)\prec \widetilde{f}\left( \widehat{x}\right) . \end{aligned}$$

In other words (by Definition 7), if \( \widehat{x}\in D\) is a weakly nondominated solution of the problem (FO), then there is no other \(x\in D\) such that

$$\begin{aligned}&\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( x\right)<\underline{f}_{\alpha }\left( \widehat{x}\right) \\ \overline{f}_{\alpha }\left( x\right) \le \overline{f}_{\alpha }\left( \widehat{x}\right) \end{array} \right. \text { for all }\alpha \in \left[ 0,1\right] \nonumber \\&\quad \text {or }\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( x\right) \le \underline{f}_{\alpha }\left( \widehat{x}\right) \\ \overline{f}_{\alpha }\left( x\right)<\overline{f}_{\alpha }\left( \widehat{ x}\right) \end{array} \right. \text { for all }\alpha \in \left[ 0,1\right] \nonumber \\&\quad \text {or }\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( x\right)<\underline{f}_{\alpha }\left( \widehat{x}\right) \\ \overline{f}_{\alpha }\left( x\right) <\overline{f}_{\alpha }\left( \widehat{ x}\right) \end{array} \right. \text { for all }\alpha \in \left[ 0,1\right] \text {. } \end{aligned}$$
(7)

Definition 22

(Wu 2008) It is said that a feasible solution \(\widehat{x}\) of the considered constrained optimization problem (FO) with the fuzzy-valued objective function is its nondominated solution if there is no other \(x\in D\) such that

$$\begin{aligned} \widetilde{f}(x)\preceq \widetilde{f}\left( \widehat{x}\right) . \end{aligned}$$

In other words (by Definition 6), if \(\widehat{x} \in D\) is a nondominated solution of the problem (FO), then there is no other \(x\in D\) such that

$$\begin{aligned}&\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( x\right)<\underline{f}_{\alpha }\left( \widehat{x}\right) \\ \overline{f}_{\alpha }\left( x\right) \le \overline{f}_{\alpha }\left( \widehat{x}\right) \end{array} \right. \text { or }\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( x\right) \le \underline{f}_{\alpha }\left( \widehat{x}\right) \\ \overline{f}_{\alpha }\left( x\right)<\overline{f}_{\alpha }\left( \widehat{ x}\right) \end{array} \right. \nonumber \\&\quad \text {or }\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( x\right)<\underline{f}_{\alpha }\left( \widehat{x}\right) \\ \overline{f}_{\alpha }\left( x\right) <\overline{f}_{\alpha }\left( \widehat{ x}\right) \end{array} \right. \text { for all }\alpha \in \left[ 0,1\right] . \end{aligned}$$
(8)

Remark 23

Note that any nondominated solution of the problem (FO) is its weakly nondominated solution.

Let us consider, for fixed \(\alpha \in \left[ 0,1\right] \), the following bi-objective programming problem

$$\begin{aligned} \begin{array}{c} \left( \underline{f}_{\alpha }\left( x\right) ,\overline{f}_{\alpha }\left( x\right) \right) \rightarrow \min \\ x\in D. \end{array}\quad \text { (VP}_{\alpha }\text {) } \end{aligned}$$

For such multicriterion optimization problems, an optimal solution is defined in terms of a (weak) Pareto solution in the following sense:

Definition 24

A feasible point \(\widehat{x}\) is said to be a weak Pareto solution of the problem (VP\(_{\alpha }\)) for some \(\alpha \in \left[ 0,1\right] \) if and only if there is no other \(x\in D\) such that

$$\begin{aligned} \underline{f}_{\alpha }\left( x\right)<\underline{f}_{\alpha }(\widehat{x}) \text { and }\overline{f}_{\alpha }\left( x\right) <\overline{f}_{\alpha }( \widehat{x}). \end{aligned}$$

Definition 25

A feasible point \(\widehat{x}\) is said to be a Pareto solution of the problem (VP\(_{\alpha }\)) for some \(\alpha \in \left[ 0,1\right] \) if and only if there is no other \(x\in D\) such that

$$\begin{aligned} \underline{f}_{\alpha }\left( x\right) \le \underline{f}_{\alpha }(\widehat{ x})\text { and }\overline{f}_{\alpha }\left( x\right) \le \overline{f} _{\alpha }(\widehat{x}) \end{aligned}$$

with at least one strict inequality.

For solving the vector optimization problem (VP\(_{\alpha }\)), we use the weighting method. Therefore, for fixed \(\alpha \in \left[ 0,1\right] \), we define the weighting optimization problem associated with (VP\(_{\alpha }\)) as follows:

$$\begin{aligned} \begin{array}{c} \lambda _{1}\underline{f}_{\alpha }\left( x\right) +\lambda _{2}\overline{f} _{\alpha }\left( x\right) \rightarrow \min \\ x\in D,\lambda _{1}\ge 0,\lambda _{2}\ge 0, \lambda _{1}+\lambda _{2}=1. \end{array}\quad \text { (P}_{\alpha }\text {)} \end{aligned}$$

It is well known (see, for example, Miettinen (2004)) that if \(\widehat{x}\in D\) is a minimizer of the weighting optimization problem (P\(_{\alpha }\)), then it is a weak Pareto solution of the problem (VP\(_{\alpha }\)). If, moreover, \(\lambda _{1}>0\) and \(\lambda _{2}>0\), then it is a Pareto solution of the multiobjective programming problem (VP\(_{\alpha }\)). The converse result holds under the assumption that the problem (VP\(_{\alpha }\)) is convex (see Miettinen 2004).
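For a fixed \(\alpha \), the weighting problem (P\(_{\alpha }\)) is an ordinary scalar nonlinear program and can be handed to any standard solver. The following sketch scalarizes a small instance with SciPy's SLSQP method; the endpoint functions, constraints, and weights below are hypothetical data chosen only to make the scalarization concrete:

```python
# Weighting method for (P_alpha): minimize lam1*f_lower + lam2*f_upper over D.
import numpy as np
from scipy.optimize import minimize

alpha = 0.5
f_lower = lambda x: (1 + alpha) * (x[0] - 1) ** 2 + x[1] ** 2
f_upper = lambda x: (3 - alpha) * (x[0] - 1) ** 2 + x[1] ** 2
lam1, lam2 = 0.5, 0.5                       # lam1, lam2 > 0, lam1 + lam2 = 1

weighted = lambda x: lam1 * f_lower(x) + lam2 * f_upper(x)
cons = [
    {"type": "ineq", "fun": lambda x: -(x[0] + x[1] - 2)},   # g(x) <= 0
    {"type": "eq",   "fun": lambda x: x[0] - x[1]},          # h(x) = 0
]
res = minimize(weighted, np.zeros(2), constraints=cons, method="SLSQP")
print(res.x)   # a Pareto solution of (VP_alpha), since lam1, lam2 > 0
```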

Now, we give the Karush–Kuhn–Tucker optimality conditions for the weighting optimization problem (P\(_{\alpha }\)) under the assumption that the involved functions are convex.

Theorem 26

Let \(\widehat{x}\in D\) and let there exist \(\widehat{\lambda }_{1}\in R\), \(\widehat{\lambda }_{2}\in R\), \(\widehat{\mu }\in R^{m}\) and \(\widehat{\vartheta }\in R^{r}\) such that the following Karush–Kuhn–Tucker optimality conditions

$$\begin{aligned}&0\in \partial \left( \widehat{\lambda }_{1}\underline{f}_{\alpha }(\widehat{x })+\widehat{\lambda }_{2}\overline{f}_{\alpha }(\widehat{x})\right) +\sum _{j=1}^{m}\widehat{\mu }_{j}\partial g_{j}(\widehat{x})+\sum _{i=1}^{r} \widehat{\vartheta }_{i}\partial h_{i}(\widehat{x}), \nonumber \\ \end{aligned}$$
(9)
$$\begin{aligned}&\widehat{\mu }_{j}g_{j}(\widehat{x})=0,j\in J, \end{aligned}$$
(10)
$$\begin{aligned}&\widehat{\lambda }_{1}>0,\widehat{\lambda }_{2}>0,\widehat{ \lambda }_{1}+\text { }\widehat{\lambda }_{2}=1,\widehat{\mu }\ge 0 \end{aligned}$$
(11)

hold for fixed \(\alpha \in \left[ 0,1\right] \). Further, assume that the functions \(\underline{f}_{\alpha }\) and \(\overline{f}_{\alpha }\) are convex on D for any fixed \(\alpha \in \left[ 0,1\right] \) and the functions \(g_{j}\), \(j=1,\ldots ,m\), \(h_{i}\), \(i\in I^{+}\left( \widehat{x}\right) :=\left\{ i\in I:\widehat{\vartheta }_{i}>0\right\} \), \(-h_{i}\), \(i\in I^{-}\left( \widehat{x}\right) :=\left\{ i\in I:\widehat{\vartheta }_{i}<0\right\} \), are convex on D. Then \(\widehat{x}\) is an optimal solution of the weighting optimization problem (P\(_{\alpha }\)).

The following results show the connection between a (weakly) nondominated solution of the considered optimization problem (FO) with the fuzzy-valued objective function and a Pareto solution of its associated multiobjective programming problem (VP\(_{\alpha }\)).

Proposition 27

If \(\widehat{x} \in D\) is a Pareto solution of the multiobjective programming problem (VP\(_{ \widehat{\alpha }}\)) for some \(\widehat{\alpha }\in \left[ 0,1\right] \), then \(\widehat{x}\) is also a weakly nondominated solution of the considered fuzzy optimization problem (FO).

Proof

We assume that \(\widehat{x}\in D\) is a Pareto solution of the multiobjective programming problem (VP\(_{\widehat{\alpha }}\)) for some \(\widehat{\alpha }\in \left[ 0,1\right] \) and proceed by contradiction. Suppose, contrary to the result, that \(\widehat{x}\) is not a weakly nondominated solution of (FO). Then, by Definition 21, there exists \(\widetilde{x}\in D\) such that \(\widetilde{f}(\widetilde{x})\prec \widetilde{f}\left( \widehat{x}\right) \). Hence, by (7), the foregoing relation implies that

$$\begin{aligned}&\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( \widetilde{x}\right)<\underline{f}_{\alpha }\left( \widehat{x}\right) \\ \overline{f}_{\alpha }\left( \widetilde{x}\right) \le \overline{f}_{\alpha }\left( \widehat{x}\right) \end{array} \right. \text { for all }\alpha \in \left[ 0,1\right] \\&\text {or }\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( \widetilde{x}\right) \le \underline{f} _{\alpha }\left( \widehat{x}\right) \\ \overline{f}_{\alpha }\left( \widetilde{x}\right)<\overline{f}_{\alpha }\left( \widehat{x}\right) \end{array} \right. \text { for all }\alpha \in \left[ 0,1\right] \\&\text {or }\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( \widetilde{x}\right)<\underline{f}_{\alpha }\left( \widehat{x}\right) \\ \overline{f}_{\alpha }\left( \widetilde{x}\right) <\overline{f}_{\alpha }\left( \widehat{x}\right) \end{array} \right. \text { for all }\alpha \in \left[ 0,1\right] \text {. } \end{aligned}$$

The above relations imply that, for all \(\alpha \in \left[ 0,1\right] \), \( \big (\underline{f}_{\alpha }\left( \widetilde{x}\right) <\underline{f}_{\alpha }\left( \widehat{x}\right) \) and \(\overline{f}_{\alpha }\left( \widetilde{x}\right) \le \overline{f}_{\alpha }\left( \widehat{x}\right) \big )\) or \(\big (\underline{f}_{\alpha }\left( \widetilde{x}\right) \le \underline{f}_{\alpha }\left( \widehat{x}\right) \) and \(\overline{f}_{\alpha }\left( \widetilde{x}\right) <\overline{f}_{\alpha }\left( \widehat{x}\right) \big )\). In particular, this is true for \(\widehat{\alpha }\in \left[ 0,1\right] \). Hence, by Definition 25, this contradicts the assumption that \(\widehat{x}\in D\) is a Pareto solution of the multiobjective programming problem (VP\(_{\widehat{\alpha }}\)) for \(\widehat{\alpha }\in \left[ 0,1\right] \). \(\square \)

Proposition 28

If \(\widehat{x}\in D\) is a Pareto solution of the multiobjective programming problem (VP\(_{\alpha }\)) for each \(\alpha \in \left[ 0,1\right] \), then \(\widehat{x}\) is also a nondominated solution of the considered fuzzy optimization problem (FO).

Proof

We assume that \(\widehat{x}\in D\) is a Pareto solution of the multiobjective programming problem (VP\(_{\alpha }\)) for each \(\alpha \in \left[ 0,1\right] \) and proceed by contradiction. Suppose, contrary to the result, that \(\widehat{x}\) is not a nondominated solution of (FO). Then, by Definition 22, there exists \(\widetilde{x}\in D\) such that \(\widetilde{f}(\widetilde{x})\preceq \widetilde{f}\left( \widehat{x}\right) \). Hence, by (8), the foregoing relation implies that

$$\begin{aligned}&\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( \widetilde{x}\right)<\underline{f}_{\alpha }\left( \widehat{x}\right) \\ \overline{f}_{\alpha }\left( \widetilde{x}\right) \le \overline{f}_{\alpha }\left( \widehat{x}\right) \end{array} \right. \text { or }\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( \widetilde{x}\right) \le \underline{f} _{\alpha }\left( \widehat{x}\right) \\ \overline{f}_{\alpha }\left( \widetilde{x}\right)<\overline{f}_{\alpha }\left( \widehat{x}\right) \end{array} \right. \\&\text {or }\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( \widetilde{x}\right)<\underline{f}_{\alpha }\left( \widehat{x}\right) \\ \overline{f}_{\alpha }\left( \widetilde{x}\right) <\overline{f}_{\alpha }\left( \widehat{x}\right) \end{array} \right. \text { for all }\alpha \in \left[ 0,1\right] . \end{aligned}$$

The above relations imply that there exists \(\widehat{\alpha }\in \left[ 0,1\right] \) such that one of the three above systems of inequalities is satisfied. Hence, by Definition 25, this contradicts the assumption that \(\widehat{x}\in D\) is a Pareto solution of the multiobjective programming problem (VP\(_{\alpha }\)) for each \(\alpha \in \left[ 0,1\right] \). \(\square \)

Now, under convexity hypotheses, we prove the Karush–Kuhn–Tucker optimality conditions for a (weakly) nondominated solution of the considered fuzzy optimization problem (FO).

Theorem 29

Let \(\widehat{x}\) be a feasible solution of the problem (FO) and, for each \(\alpha \in \left[ 0,1\right] \), let there exist \(\widehat{\lambda }_{1}\in R\), \(\widehat{\lambda }_{2}\in R\), \(\widehat{\mu }\in R^{m}\) and \(\widehat{\vartheta }\in R^{r}\) such that the Karush–Kuhn–Tucker optimality conditions

$$\begin{aligned}&0\in \partial \left( \widehat{\lambda }_{1}\underline{f}_{\alpha }(\widehat{x })+\widehat{\lambda }_{2}\overline{f}_{\alpha }(\widehat{x})\right) +\sum _{j=1}^{m}\widehat{\mu }_{j}\partial g_{j}(\widehat{x})+\sum _{i=1}^{r} \widehat{\vartheta }_{i}\partial h_{i}(\widehat{x}), \nonumber \\ \end{aligned}$$
(12)
$$\begin{aligned}&\widehat{\mu }_{j}g_{j}(\widehat{x})=0,j\in J, \end{aligned}$$
(13)
$$\begin{aligned}&\widehat{\lambda }_{1}>0,\widehat{\lambda }_{2}>0,\widehat{ \lambda }_{1}+\text { }\widehat{\lambda }_{2}=1,\widehat{\mu }\ge 0 \end{aligned}$$
(14)

hold. Further, assume that the objective function \(\widetilde{f}\) is a convex fuzzy function on D and the functions \(g_{j}\), \(j=1,\ldots ,m\), \(h_{i}\), \(i\in I^{+}\left( \widehat{x}\right) :=\left\{ i\in I:\widehat{\vartheta }_{i}>0\right\} \), \(-h_{i}\), \(i\in I^{-}\left( \widehat{x}\right) :=\left\{ i\in I:\widehat{\vartheta }_{i}<0\right\} \), are convex on D. Then, \(\widehat{x}\) is a nondominated solution of the considered fuzzy optimization problem (FO).

Proof

By assumption, \(\widehat{x}\) is such a feasible solution of the problem (FO) at which the Karush–Kuhn–Tucker optimality conditions (12)–(14) are fulfilled. We proceed by contradiction. Suppose, contrary to the result, that \(\widehat{x}\) is not a nondominated solution of the considered fuzzy optimization problem (FO). Then, by Definition 22, there exists \(x_{0}\in D\) such that \(\widetilde{f}(x_{0})\preceq \widetilde{f}\left( \widehat{x}\right) \). Since \(\widehat{\lambda }_{1}>0\) and \(\widehat{\lambda }_{2}>0\), this relation implies by (8) that, for each \(\alpha \in \left[ 0,1\right] \),

$$\begin{aligned} \widehat{\lambda }_{1}\underline{f}_{\alpha }\left( x_{0}\right) +\widehat{ \lambda }_{2}\overline{f}_{\alpha }\left( x_{0}\right) <\widehat{\lambda } _{1}\underline{f}_{\alpha }\left( \widehat{x}\right) +\widehat{\lambda }_{2} \overline{f}_{\alpha }\left( \widehat{x}\right) . \end{aligned}$$
(15)

Note that all assumptions of Theorem 26 are fulfilled. Then, by Theorem 26, it follows that \(\widehat{x}\) is a minimizer of the weighting optimization problem (P\(_{\alpha }\)). This means that, for each \(\alpha \in \left[ 0,1\right] \), the following inequality

$$\begin{aligned} \widehat{\lambda }_{1}\underline{f}_{\alpha }\left( x\right) +\widehat{ \lambda }_{2}\overline{f}_{\alpha }\left( x\right) \ge \widehat{\lambda } _{1}\underline{f}_{\alpha }\left( \widehat{x}\right) +\widehat{\lambda }_{2} \overline{f}_{\alpha }\left( \widehat{x}\right) \end{aligned}$$

holds for all \(x\in D\), which contradicts (15). This completes the proof of this theorem. \(\square \)

In the next theorem, under convexity hypotheses, we prove the sufficient optimality conditions of a Karush–Kuhn–Tucker type for a weakly nondominated solution in the considered fuzzy optimization problem (FO).

Theorem 30

Let \(\widehat{x}\) be a given feasible solution of the problem (FO) and, for each \(\alpha \in \left[ 0,1\right] \), let there exist \(\widehat{\mu }\in R^{m}\), \(\widehat{\mu }\ge 0\), and \(\widehat{\vartheta }\in R^{r}\) such that the following Karush–Kuhn–Tucker optimality conditions

$$\begin{aligned}&0\in \partial \underline{f}_{\alpha }(\widehat{x})+\sum _{j=1}^{m}\widehat{ \mu }_{j}\partial g_{j}(\widehat{x})+\sum _{i=1}^{r}\widehat{\vartheta } _{i}\partial h_{i}(\widehat{x}), \end{aligned}$$
(16)
$$\begin{aligned}&0\in \partial \overline{f}_{\alpha }(\widehat{x})+\sum _{j=1}^{m}\widehat{\mu }_{j}\partial g_{j}(\widehat{x})+\sum _{i=1}^{r}\widehat{\vartheta } _{i}\partial h_{i}(\widehat{x}), \end{aligned}$$
(17)
$$\begin{aligned}&\widehat{\mu }_{j}g_{j}(\widehat{x})=0,j\in J \end{aligned}$$
(18)

hold. Further, assume that the objective function \(\widetilde{f}\) is a convex fuzzy function on D and the functions \(g_{j}\), \(j=1,\ldots ,m\), \(h_{i}\), \(i\in I^{+}\left( \widehat{x}\right) :=\left\{ i\in I:\widehat{\vartheta }_{i}>0\right\} \), \(-h_{i}\), \(i\in I^{-}\left( \widehat{x}\right) :=\left\{ i\in I:\widehat{\vartheta }_{i}<0\right\} \), are convex on D. Then, \(\widehat{x}\) is a weakly nondominated solution of the considered fuzzy optimization problem (FO).

Proof

By assumption, \(\widehat{x}\) is such a feasible solution of the problem (FO) at which the Karush–Kuhn–Tucker optimality conditions (16)–(18) are fulfilled with Lagrange multipliers \(\widehat{\mu }\in R^{m}\), \(\widehat{ \mu }\ge 0\) and \(\widehat{\vartheta }\in R^{r}\). We proceed by contradiction. Suppose, contrary to the result, that \(\widehat{x}\) is not a weakly nondominated solution of the considered fuzzy optimization problem (FO). Then, by Definition 21, there exists \(x_{0}\in D\) such that

$$\begin{aligned} \underline{f}_{\alpha }\left( x_{0}\right) <\underline{f}_{\alpha }\left( \widehat{x}\right) \text { for all }\alpha \in \left[ 0,1\right] \end{aligned}$$
(19)

or

$$\begin{aligned} \overline{f}_{\alpha }\left( x_{0}\right) <\overline{f}_{\alpha }\left( \widehat{x}\right) \text { for all }\alpha \in \left[ 0,1\right] . \end{aligned}$$
(20)

By hypothesis, the objective function \(\widetilde{f}\) is a convex fuzzy function on D. Hence, by Proposition 16, the functions \(\underline{f}_{\alpha }\) and \(\overline{f}_{\alpha }\) are convex on D. Combining (5) with (19) and (6) with (20), respectively, we get

$$\begin{aligned} \underline{\xi }^{T}\left( x_{0}-\widehat{x}\right) <0,\forall \underline{\xi }\in \partial \underline{f}_{\alpha }\left( \widehat{x} \right) \text { for all }\alpha \in \left[ 0,1\right] \end{aligned}$$
(21)

or

$$\begin{aligned} \overline{\xi }^{T}\left( x_{0}-\widehat{x}\right) <0,\forall \overline{\xi }\in \partial \overline{f}_{\alpha }\left( \widehat{x}\right) \text { for all }\alpha \in \left[ 0,1\right] . \end{aligned}$$
(22)

By convexity hypotheses imposed on the constraint functions, it follows that

$$\begin{aligned}&g_{j}(x_{0})-g_{j}(\widehat{x})\ge \zeta _{j}^{T}\left( x_{0}-\widehat{x}\right) ,\forall \zeta _{j}\in \partial g_{j}\left( \widehat{x}\right) , j=1,\ldots ,m, \end{aligned}$$
(23)
$$\begin{aligned}&h_{i}(x_{0})-h_{i}(\widehat{x})\ge \varsigma _{i}^{T}\left( x_{0}-\widehat{x }\right) , \forall \varsigma _{i}\in \partial h_{i}\left( \widehat{x} \right) , i\in I^{+}\left( \widehat{x}\right) , \end{aligned}$$
(24)
$$\begin{aligned}&-h_{i}(x_{0})+h_{i}(\widehat{x})\ge -\varsigma _{i}^{T}\left( x_{0}- \widehat{x}\right) , \forall \left( -\varsigma _{i}\right) \in \partial \left( -h_{i}\left( \widehat{x}\right) \right) , i\in I^{-}\left( \widehat{x}\right) . \nonumber \\ \end{aligned}$$
(25)

Multiplying (23)–(25) by the corresponding Lagrange multiplier, we get

$$\begin{aligned}&\widehat{\mu }_{j}g_{j}(x_{0})-\widehat{\mu }_{j}g_{j}(\widehat{x})\ge \widehat{\mu }_{j}\zeta _{j}^{T}\left( x_{0}-\widehat{x}\right) , \forall \zeta _{j}\in \partial g_{j}\left( \widehat{x}\right) , j=1,\ldots ,m, \nonumber \\ \end{aligned}$$
(26)
$$\begin{aligned}&\widehat{\vartheta }_{i}h_{i}(x_{0})-\widehat{\vartheta }_{i}h_{i}(\widehat{x })\ge \widehat{\vartheta }_{i}\varsigma _{i}^{T}\left( x_{0}-\widehat{x} \right) , \forall \varsigma _{i}\in \partial h_{i}\left( \widehat{x} \right) , i\in I^{+}\left( \widehat{x}\right) , \nonumber \\ \end{aligned}$$
(27)
$$\begin{aligned}&\widehat{\vartheta }_{i}h_{i}(x_{0})-\widehat{\vartheta }_{i}h_{i}(\widehat{x })\ge \widehat{\vartheta }_{i}\varsigma _{i}^{T}\left( x_{0}-\widehat{x} \right) ,\forall \varsigma _{i}\in \partial h_{i}\left( \widehat{x} \right) ,i\in I^{-}\left( \widehat{x}\right) .\nonumber \\ \end{aligned}$$
(28)

Using \(x_{0}\), \(\widehat{x}\in D\) and the Karush–Kuhn–Tucker optimality condition (18), we obtain, respectively,

$$\begin{aligned}&\widehat{\mu }_{j}\zeta _{j}^{T}\left( x_{0}-\widehat{x}\right) \le 0\text { , }\forall \zeta _{j}\in \partial g_{j}\left( \widehat{x}\right) ,j\in J, \end{aligned}$$
(29)
$$\begin{aligned}&\widehat{\vartheta }_{i}\varsigma _{i}^{T}\left( x_{0}-\widehat{x}\right) \le 0,\forall \varsigma _{i}\in \partial h_{i}\left( \widehat{x} \right) ,i\in I. \end{aligned}$$
(30)

Combining (21), (22), (29) and (30), respectively, we get that, for any \(\underline{\xi }\in \partial \underline{f}_{\alpha }\left( \widehat{x}\right) \), \(\overline{\xi }\in \partial \overline{f} _{\alpha }\left( \widehat{x}\right) \) for all \(\alpha \in \left[ 0,1\right] \) , \(\zeta _{j}\in \partial g_{j}\left( \widehat{x}\right) \), \(j\in J\), \( \varsigma _{i}\in \partial {h}_{i}\left( \widehat{x}\right) \), \(i\in I\),

$$\begin{aligned} \left( \underline{\xi }^{T}+\sum _{j=1}^{m}\widehat{\mu }_{j}\zeta _{j}^{T}+\sum _{i=1}^{r}\widehat{\vartheta }_{i}\varsigma _{i}^{T}\right) \left( x_{0}-\widehat{x}\right) <0, \end{aligned}$$
(31)

or

$$\begin{aligned} \left( \overline{\xi }^{T}+\sum _{j=1}^{m}\widehat{\mu }_{j}\zeta _{j}^{T}+\sum _{i=1}^{r}\widehat{\vartheta }_{i}\varsigma _{i}^{T}\right) \left( x_{0}-\widehat{x}\right) <0. \end{aligned}$$
(32)

Thus, (31) and (32) imply, respectively, that at least one of the following relations

$$\begin{aligned} 0\notin \partial \underline{f}_{\alpha }(\widehat{x})+\sum _{j=1}^{m}\widehat{\mu }_{j}\partial g_{j}(\widehat{x})+\sum _{i=1}^{r}\widehat{\vartheta }_{i}\partial h_{i}(\widehat{x}) \end{aligned}$$

or

$$\begin{aligned} 0\notin \partial \overline{f}_{\alpha }(\widehat{x})+\sum _{j=1}^{m}\widehat{\mu }_{j}\partial g_{j}(\widehat{x})+\sum _{i=1}^{r}\widehat{\vartheta }_{i}\partial h_{i}(\widehat{x}) \end{aligned}$$

is fulfilled for all \(\alpha \in \left[ 0,1\right] \), which contradicts the Karush–Kuhn–Tucker optimality condition (16) or the Karush–Kuhn–Tucker optimality condition (17). This means that \(\widehat{x}\) is a weakly nondominated solution of the considered fuzzy optimization problem (FO), which completes the proof of this theorem. \(\square \)

We now give the Karush–Kuhn–Tucker necessary optimality conditions for a feasible solution \(\widehat{x}\) to be a weakly nondominated solution of the considered fuzzy optimization problem (FO).

Theorem 31

Let \(\widehat{x}\in D\) be a weakly nondominated solution of the considered fuzzy optimization problem (FO). Further, assume that \(\widetilde{f}\) is a convex fuzzy function on D, the functions \(g_{j}\), \(j=1,\ldots ,m\), \(h_{i}\), \(i\in I^{+}\left( \widehat{x}\right) :=\left\{ i\in I:\widehat{\vartheta }_{i}>0\right\} \), \(-h_{i}\), \(i\in I^{-}\left( \widehat{x}\right) :=\left\{ i\in I:\widehat{\vartheta }_{i}<0\right\} \), are convex on D and, moreover, the Slater constraint qualification is satisfied at \(\widehat{x}\) for (FO). Then, there exist \(\widehat{\alpha }\in \left[ 0,1\right] \), \(\widehat{\lambda }_{1}\in R\), \(\widehat{\lambda }_{2}\in R\), \(\widehat{\mu }\in R^{m}\) and \(\widehat{\vartheta }\in R^{r}\) such that the following Karush–Kuhn–Tucker optimality conditions

$$\begin{aligned}&0\in \partial \left( \widehat{\lambda }_{1}\underline{f}_{\widehat{\alpha }}( \widehat{x})+\widehat{\lambda }_{2}\overline{f}_{\widehat{\alpha }}(\widehat{ x})\right) +\sum _{j=1}^{m}\widehat{\mu }_{j}\partial g_{j}(\widehat{x} )+\sum _{i=1}^{r}\widehat{\vartheta }_{i}\partial h_{i}(\widehat{x}), \nonumber \\ \end{aligned}$$
(33)
$$\begin{aligned}&\widehat{\mu }_{j}g_{j}(\widehat{x})=0,j\in J, \end{aligned}$$
(34)
$$\begin{aligned}&\widehat{\lambda }_{1}\ge 0,\widehat{\lambda }_{2}\ge 0, \widehat{\lambda }_{1}+\text { }\widehat{\lambda }_{2}=1,\widehat{\mu }\ge 0 \end{aligned}$$
(35)

hold.

Now, we introduce two types of a Karush–Kuhn–Tucker point for the considered fuzzy optimization problem (FO).

Definition 32

The point \(\widehat{x}\in D\) is said to be a Karush–Kuhn–Tucker point (a KKT point, for short) if, for each \(\alpha \in \left[ 0,1\right] \), there exist Lagrange multipliers \(\widehat{\lambda }_{1}\in R\), \(\widehat{\lambda }_{2}\in R\), \(\widehat{\mu }\in R^{m}\) and \(\widehat{\vartheta }\in R^{r}\) such that the Karush–Kuhn–Tucker optimality conditions (12)–(14) are satisfied.

Definition 33

The point \(\widehat{x}\in D\) is said to be a strong Karush–Kuhn–Tucker point (a strong KKT point, for short) if, for each \(\alpha \in \left[ 0,1\right] \), there exist Lagrange multipliers \(\widehat{\mu }\in R^{m}\) and \(\widehat{\vartheta }\in R^{r}\) such that the Karush–Kuhn–Tucker optimality conditions (16)–(18) are satisfied.

4 The exactness of the \(l_{1}\) exact penalty function method for fuzzy optimization problem

Methods using an exact penalty function transform a constrained extremum problem into a single unconstrained optimization problem. The constraints are incorporated into the objective function via a penalty parameter c in a way that penalizes any violation of the constraints. The basic idea of any exact penalty function method is to choose a penalty function p and a penalty parameter c so that an optimal solution \(\widehat{x}\) of the penalized optimization problem is also an optimal solution of the given extremum problem.

Now, we use an exact penalty method for solving the considered nonlinear fuzzy optimization problem (FO) with the fuzzy objective function. Therefore, for the given nonlinear fuzzy extremum problem (FO), we define an unconstrained fuzzy penalized optimization problem as follows:

$$\begin{aligned} \text {min }\widetilde{P}(x,c)=\widetilde{f}(x)+\widetilde{1}_{cp(x)},\text { (FP(}c\text {))} \end{aligned}$$
(36)

where \(\widetilde{f}:X\rightarrow \mathcal {F}\left( R\right) \) is a fuzzy function, p is a suitable penalty function, c is a penalty parameter and \(\widetilde{1}_{cp(x)}\) is the crisp number with the value cp(x). As follows from the definition of (FP(c)), the considered constrained fuzzy optimization problem (FO) is replaced by an unconstrained fuzzy optimization problem whose fuzzy objective function is the sum of a certain fuzzy “merit” function (which reflects the fuzzy objective function of the given fuzzy optimization problem) and a penalty term which reflects the constraint set. The fuzzy merit function is chosen as the original fuzzy objective function, while the penalty term is obtained by multiplying a suitable function, which represents the constraints, by a positive parameter c, called the penalty parameter.

Note that the penalized objective function in the unconstrained fuzzy penalized optimization problem is a fuzzy function. Then, as follows from (4), for any fixed \(\alpha \in \left[ 0,1\right] \), we associate with \(\widetilde{P}\) the interval-valued function \(\widetilde{P}_{\alpha }\) given by \(\widetilde{P}_{\alpha }\left( x,c\right) =\left[ \underline{P}_{\alpha }\left( x,c\right) ,\overline{P}_{\alpha }\left( x,c\right) \right] \) for any \(x\in X\), where \(\underline{P}_{\alpha },\overline{P}_{\alpha }:X\times R_{+}\rightarrow R\) are real-valued functions. Thus, an unconstrained fuzzy penalized optimization problem (FP\(_{\alpha }\)(c)) is defined by

$$\begin{aligned} \text {min }\widetilde{P}_{\alpha }(x,c)=\left[ \underline{f}_{\alpha }(x)+cp(x)\text { , }\overline{f}_{\alpha }\left( x\right) +cp(x)\right] . \text { (FP}_{\alpha }\text {(}c\text {))} \end{aligned}$$

Now, in a natural way, we extend the definition of the property of exactness of the penalization for a classical exact penalty function method to the fuzzy case.

Definition 34

Let \(\alpha \) be any fixed number from the interval \(\left[ 0,1\right] \). If a threshold value \(\overline{c}\ge 0\) exists such that, for every \(c\ge \overline{c}\),

$$\begin{aligned}&\arg \,(\text {weakly})\text { nondominated}\left\{ \widetilde{f} (x):x\in D\right\} \\&\quad =\arg \,(\text {weakly})\text { nondominated}\left\{ \widetilde{P}(x,c):x\in R^{n}\right\} , \end{aligned}$$

then the function \(\widetilde{P}\) is termed a fuzzy exact penalty function and, therefore, we call (FP(c)), defined by (36), the fuzzy penalized optimization problem or the penalized optimization problem with the fuzzy objective function.

It is clear that, conceptually, if \(\widetilde{P}\) is a fuzzy exact penalty function, we can find a constrained (weakly) nondominated solution of the considered fuzzy optimization problem (FO) by looking for unconstrained (weakly) nondominated solutions of the function \(\widetilde{P}(x,c)\) for sufficiently large values of the penalty parameter c.

The most popular nondifferentiable exact penalty function method for solving nonlinear optimization problems is the \(l_{1}\) exact penalty function method, also called the absolute value penalty function method. We now define the \(l_{1}\) exact penalty function used for solving the given nonlinear fuzzy optimization problem (FO) as follows

$$\begin{aligned} \text {min }\widetilde{P}(x,c)=\widetilde{f}(x)+\widetilde{1}_{c\left( \sum _{j=1}^{m}\max \left\{ 0,g_{j}(x)\right\} +\sum _{i=1}^{r}\left| h_{i}\left( x\right) \right| \right) }.\quad \text { (FP(}c\text {))} \end{aligned}$$

We call (FP(c)) defined above the fuzzy penalized optimization problem with the fuzzy \(l_{1}\) exact penalty function. Hence, for any fixed \(\alpha \in \left[ 0,1\right] \), we define the fuzzy \(l_{1}\) exact penalty function for the given nonlinear fuzzy optimization problem (FO) as follows

$$\begin{aligned} \begin{array}{c} \widetilde{P}_{\alpha }(x,c)=\left[ \underline{f}_{\alpha }(x)+c\left( \sum _{j=1}^{m}\max \left\{ 0,g_{j}(x)\right\} +\sum _{i=1}^{r}\left| h_{i}\left( x\right) \right| \right) ,\right. \\ \left. \overline{f}_{\alpha }\left( x\right) +c\left( \sum _{j=1}^{m}\max \left\{ 0,g_{j}(x)\right\} +\sum _{i=1}^{r}\left| h_{i}\left( x\right) \right| \right) \right] , \end{array} \end{aligned}$$
(37)

where \(\underline{P}_{\alpha }\left( x,c\right) =\underline{f}_{\alpha }({x})+c\left( \sum _{j=1}^{m}\max \left\{ 0,g_{j}(x)\right\} +\sum _{i=1}^{r}\left| h_{i}\left( x\right) \right| \right) \) and \( \overline{P}_{\alpha }\left( x,c\right) =\overline{f}_{\alpha }\left( x\right) +c\left( \sum _{j=1}^{m}\max \left\{ 0,g_{j}(x)\right\} +\sum _{i=1}^{r}\left| h_{i}\left( x\right) \right| \right) \) are left- and right-hand side values of \(\widetilde{P}_{\alpha }\).

Thus, for fixed \(\alpha \in \left[ 0,1\right] \), the unconstrained fuzzy optimization problem with the fuzzy \(l_{1}\) exact penalty function defined by (37), constructed for the considered fuzzy optimization problem (FO) in the \(l_{1}\) exact penalty function method, can be written in the following form

$$\begin{aligned} \begin{array}{l} \min \text { }\widetilde{P}_{\alpha }(x,c)\\ \quad = \left[ \underline{f}_{\alpha }(x)+c\left( \sum _{j=1}^{m}\max \left\{ 0,g_{j}(x)\right\} +\sum _{i=1}^{r}\left| h_{i}\left( x\right) \right| \right) ,\right. \\ \quad \left. \overline{f}_{\alpha }\left( x\right) +c\left( \sum _{j=1}^{m}\max \left\{ 0,g_{j}(x)\right\} +\sum _{i=1}^{r}\left| h_{i}\left( x\right) \right| \right) \right] \end{array} \text {(FP}_{\alpha }\text {(}c\text {))} \end{aligned}$$

It is well known that, for a given constraint \(g_{j}(x)\le 0\), \(j\in J\), the function \(g_{j}^{+}\) defined by

$$\begin{aligned} g_{j}^{+}(x)=\left\{ \begin{array}{ccc} 0 &{} \text {if} &{} g_{j}(x)\le 0\\ g_{j}(x) &{} \text {if} &{} g_{j}(x)>0 \end{array} \right. \end{aligned}$$
(38)

is zero for all x that satisfy the constraint and has a positive value whenever this constraint is violated. Moreover, large violations of the inequality constraint \(g_{j}\) result in large values of the function \(g_{j}^{+}\). Thus, the function \(g_{j}^{+}\) has the penalty features relative to the single inequality constraint \(g_{j}\). However, observe that at points where \(g_{j}(x)=0\), the function \(g_{j}^{+}\) might not be differentiable, even though \(g_{j}\) is differentiable. Therefore, using (38), the fuzzy penalized optimization problem (FP\(_{\alpha }\)(c)) with the fuzzy \(l_{1}\) exact penalty function can be rewritten for any fixed \(\alpha \in \left[ 0,1\right] \) as

$$\begin{aligned} \begin{array}{l} \min \text { }\widetilde{P}_{\alpha }(x,c)=\\ \left[ \underline{f}_{\alpha }(x)+c\left( \sum _{j=1}^{m}g_{j}^{+}\left( x\right) +\sum _{i=1}^{r}\left| h_{i}\left( x\right) \right| \right) ,\right. \\ \left. \overline{f}_{\alpha }\left( x\right) +c\left( \sum _{j=1}^{m}g_{j}^{+}\left( x\right) +\sum _{i=1}^{r}\left| h_{i}\left( x\right) \right| \right) \right] . \end{array} \text { (FP}_{\alpha }\text {(}c\text {))} \end{aligned}$$
(39)
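For computational purposes, the \(\alpha \)-cut (39) is straightforward to evaluate. The following Python sketch computes the endpoints \(\underline{P}_{\alpha }(x,c)\) and \(\overline{P}_{\alpha }(x,c)\); the helper names (f_lower, f_upper, gs, hs) are our illustrative assumptions, not notation from the paper.

```python
# A minimal sketch of evaluating the alpha-cut (39) of the fuzzy l1 exact
# penalty function; helper names are illustrative assumptions.

def penalty_term(x, gs, hs):
    # sum_j max{0, g_j(x)} + sum_i |h_i(x)|, cf. (38) and (39)
    return sum(max(0.0, g(x)) for g in gs) + sum(abs(h(x)) for h in hs)

def P_alpha(x, c, alpha, f_lower, f_upper, gs, hs):
    # Returns the interval endpoints of (39) for a fixed alpha in [0, 1].
    p = c * penalty_term(x, gs, hs)
    return f_lower(alpha, x) + p, f_upper(alpha, x) + p
```

Note that the same crisp penalty term is added to both endpoints, so the width of the \(\alpha \)-cut of \(\widetilde{P}_{\alpha }(\cdot ,c)\) coincides with that of \(\widetilde{f}_{\alpha }\).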

In the paper, the \(l_{1}\) exact penalty function method is used for solving the considered nonlinear fuzzy optimization problem (FO). We prove the equivalence between the sets of (weakly) nondominated solutions of the problem (FO) and of the fuzzy penalized optimization problem (FP(c)) for all sufficiently large values of the penalty parameter c.

First, we prove that a Karush–Kuhn–Tucker point of the considered fuzzy optimization problem with convex functions is a nondominated solution of the associated fuzzy penalized optimization problem (FP\(_{\alpha }\)(c)) with the fuzzy \(l_{1}\) exact penalty function for any \(\alpha \in \left[ 0,1 \right] \) and for every penalty parameter c not smaller than the threshold equal to the largest absolute value of the Lagrange multipliers associated with the constraints.

Theorem 35

Let \(\widehat{x}\in D\) be a Karush–Kuhn–Tucker point of the considered nonsmooth fuzzy optimization problem (FO), that is, for each \(\alpha \in \left[ 0,1\right] \), there exist Lagrange multipliers \(\widehat{\lambda }_{1}>0\), \(\widehat{\lambda }_{2}>0\), \(\widehat{\lambda }_{1}+\widehat{\lambda }_{2}=1\), \(\widehat{\mu }_{j}\), \( j\in J\), \(\widehat{\vartheta }_{i}\), \(i\in I\) such that the Karush–Kuhn–Tucker optimality conditions (12)–(14) are satisfied at \(\widehat{x}\). Further, assume that the objective function \(\widetilde{f}\) is a convex fuzzy function on X, each inequality constraint function \( g_{j} \), \(j\in J\), each equality constraint function \(h_{i}\), \(i\in I^{+}\left( \widehat{x}\right) :=\left\{ i\in I:\widehat{\vartheta } _{i}>0\right\} \), and each function \(-h_{i}\), \(i\in I^{-}\left( \widehat{x} \right) :=\left\{ i\in I:\widehat{\vartheta }_{i}<0\right\} \), are convex on X. If the penalty parameter c is assumed to be sufficiently large (it is sufficient to set \(c\ge \max \left\{ \widehat{\mu }_{j},j\in J\text { , }\left| \widehat{\vartheta }_{i}\right| ,i\in I\right\} \) ), then \(\widehat{x}\) is a nondominated solution of the fuzzy penalized optimization problem (FP(c)) with the \(l_{1}\) exact penalty function.

Proof

We are going to prove this result by contradiction. Suppose, contrary to the result, that \(\widehat{x}\) is not a nondominated solution of the fuzzy penalized optimization problem (FP(c)) with the fuzzy \(l_{1}\) exact penalty function. Therefore, by Definition 22, there exists \(x_{0}\in X\) such that

$$\begin{aligned} \widetilde{P}_{\alpha }\left( x_{0},c\right) \preceq \widetilde{P}_{\alpha }\left( \widehat{x},c\right) \text { for all }\alpha \in \left[ 0,1\right] . \end{aligned}$$

Hence, by Definition 6, the above relation implies

$$\begin{aligned}&\left\{ \begin{array}{c} \underline{P}_{\alpha }\left( x_{0},c\right)<\underline{P}_{\alpha }\left( \widehat{x},c\right) \\ \overline{P}_{\alpha }\left( x_{0},c\right) \le \overline{P}_{\alpha }\left( \widehat{x},c\right) \end{array} \right. \text { or }\left\{ \begin{array}{c} \underline{P}_{\alpha }\left( x_{0},c\right) \le \underline{P}_{\alpha }\left( \widehat{x},c\right) \\ \overline{P}_{\alpha }\left( x_{0},c\right)<\overline{P}_{\alpha }\left( \widehat{x},c\right) \end{array} \right. \\&\text {or }\left\{ \begin{array}{c} \underline{P}_{\alpha }\left( x_{0},c\right)<\underline{P}_{\alpha }\left( \widehat{x},c\right) \\ \overline{P}_{\alpha }\left( x_{0},c\right) <\overline{P}_{\alpha }\left( \widehat{x},c\right) \end{array} \right. \text { for all }\alpha \in \left[ 0,1\right] . \end{aligned}$$

By the definition of (FP(c)) (see (39)), we have that, for all \( \alpha \in \left[ 0,1\right] \),

$$\begin{aligned}&\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right] \\< \underline{f}_{\alpha }\left( \widehat{x}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right] \\ \overline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right] \\ \le \overline{f}_{\alpha }\left( \widehat{x}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right] \end{array} \right. \nonumber \\&\quad \text {or}\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right] \\ \le \underline{f}_{\alpha }\left( \widehat{x}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right] \\ \overline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right] \\< \overline{f}_{\alpha }\left( \widehat{x}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right] \end{array} \right. \text { } \\&\quad \text {or}\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right] \\< \underline{f}_{\alpha }\left( \widehat{x}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right] \\ \overline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right] \\ < \overline{f}_{\alpha }\left( \widehat{x}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right] . \end{array} \right. \end{aligned}$$

Multiplying the above inequalities by the corresponding Lagrange multipliers \(\widehat{\lambda }_{1}>0\), \(\widehat{\lambda }_{2}>0\) associated with the fuzzy objective function, adding the resulting inequalities and using \(\widehat{\lambda }_{1}+\widehat{\lambda }_{2}=1\), we get

$$\begin{aligned} \begin{array}{c} \widehat{\lambda }_{1}\underline{f}_{\alpha }\left( x_{0}\right) +\widehat{ \lambda }_{2}\overline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+}\left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right] \\ <\widehat{\lambda }_{1}\underline{f}_{\alpha }\left( \widehat{x}\right) + \widehat{\lambda }_{2}\overline{f}_{\alpha }\left( \widehat{x}\right) +c \left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right] . \end{array} \end{aligned}$$
(40)

Since \(\widehat{x}\in D\), by (38), it follows that \( \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| =0\). Moreover, by (38) and the definition of the absolute value function, \(g_{j}\left( x_{0}\right) \le g_{j}^{+}\left( x_{0}\right) \) for each \(j\in J\) and \(h_{i}\left( x_{0}\right) \le \left| h_{i}\left( x_{0}\right) \right| \) for each \(i\in I\). Hence, (40) gives

$$\begin{aligned}&\widehat{\lambda }_{1}\underline{f}_{\alpha }\left( x_{0}\right) +\widehat{ \lambda }_{2}\overline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}\left( x_{0}\right) +\sum _{i=1}^{r}h_{i}\left( x_{0}\right) \right] \nonumber \\&<\widehat{\lambda }_{1}\underline{f}_{\alpha }\left( \widehat{x} \right) +\widehat{\lambda }_{2}\overline{f}_{\alpha }\left( \widehat{x} \right) . \end{aligned}$$
(41)

By assumption, \(c\ge \max \left\{ \widehat{\mu }_{j},j\in J, \left| \widehat{\vartheta }_{i}\right| , i\in I\right\} \) and the multipliers \(\widehat{\mu }_{j}\), \(j\in J\), are nonnegative. Hence, \(\sum _{j=1}^{m}\widehat{\mu }_{j}g_{j}\left( x_{0}\right) +\sum _{i=1}^{r}\widehat{\vartheta }_{i}h_{i}\left( x_{0}\right) \le c\left( \sum _{j=1}^{m}g_{j}^{+}\left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right) \), and so (40) implies

$$\begin{aligned}&\widehat{\lambda }_{1}\underline{f}_{\alpha }\left( x_{0}\right) +\widehat{ \lambda }_{2}\overline{f}_{\alpha }\left( x_{0}\right) +\sum _{j=1}^{m} \widehat{\mu }_{j}g_{j}\left( x_{0}\right) +\sum _{i=1}^{r}\widehat{\vartheta }_{i}h_{i}\left( x_{0}\right) \\&<\widehat{\lambda }_{1}\underline{f}_{\alpha }\left( \widehat{x} \right) +\widehat{\lambda }_{2}\overline{f}_{\alpha }\left( \widehat{x} \right) . \end{aligned}$$

Using the Karush–Kuhn–Tucker condition (13) together with \(\widehat{x} \in D\), we obtain

$$\begin{aligned} \begin{array}{c} \widehat{\lambda }_{1}\underline{f}_{\alpha }\left( x_{0}\right) +\widehat{ \lambda }_{2}\overline{f}_{\alpha }\left( x_{0}\right) +\sum _{j=1}^{m} \widehat{\mu }_{j}g_{j}\left( x_{0}\right) +\sum _{i=1}^{r}\widehat{\vartheta }_{i}h_{i}\left( x_{0}\right) \\ <\widehat{\lambda }_{1}\underline{f}_{\alpha }\left( \widehat{x}\right) + \widehat{\lambda }_{2}\overline{f}_{\alpha }\left( \widehat{x}\right) +\sum _{j=1}^{m}\widehat{\mu }_{j}g_{j}\left( \widehat{x}\right) +\sum _{i=1}^{r}\widehat{\vartheta }_{i}h_{i}\left( \widehat{x}\right) \text {. } \end{array} \end{aligned}$$
(42)

By assumption, the fuzzy objective function \(\widetilde{f}\) is convex on X. Then, by Proposition 20, the inequalities

$$\begin{aligned}&\underline{f}_{\alpha }(x_{0})-\underline{f}_{\alpha }(\widehat{x})\ge \underline{\xi }^{T}\left( x_{0}-\widehat{x}\right) , \forall \underline{\xi }\in \partial \underline{f}_{\alpha }\left( \widehat{x} \right) , \end{aligned}$$
(43)
$$\begin{aligned}&\overline{f}_{\alpha }(x_{0})-\overline{f}_{\alpha }(\widehat{x})\ge \overline{\xi }^{T}\left( x_{0}-\widehat{x}\right) ,\forall \overline{\xi }\in \partial \overline{f}_{\alpha }\left( \widehat{x}\right) \end{aligned}$$
(44)

hold for each \(\alpha \in \left[ 0,1\right] \). Further, each inequality constraint function \(g_{j}\), \(j\in J\), each equality constraint function \( h_{i}\), \(i\in I^{+}\left( \widehat{x}\right) \), and each function \(-h_{i}\), \( i\in I^{-}\left( \widehat{x}\right) \), are also convex on X. Then, by Remark 10, the inequalities

$$\begin{aligned}&g_{j}(x_{0})-g_{j}(\widehat{x})\ge \zeta _{j}^{T}\left( x_{0}-\widehat{x} \right) , \forall \zeta _{j}\in \partial g_{j}\left( \widehat{x} \right) , j=1,...,m, \end{aligned}$$
(45)
$$\begin{aligned}&h_{i}(x_{0})-h_{i}(\widehat{x})\ge \varsigma _{i}^{T}\left( x_{0}-\widehat{x }\right) , \forall \varsigma _{i}\in \partial h_{i}\left( \widehat{x} \right) , i\in I^{+}\left( \widehat{x}\right) , \end{aligned}$$
(46)
$$\begin{aligned}&-h_{i}(x_{0})+h_{i}(\widehat{x})\ge -\varsigma _{i}^{T}\left( x_{0}- \widehat{x}\right) , \forall \left( -\varsigma _{i}\right) \in \partial \left( -h_{i}\left( \widehat{x}\right) \right) \text {, }i\in I^{-}\left( \widehat{x}\right) \nonumber \\ \end{aligned}$$
(47)

hold. Multiplying each inequality (43)–(47) by the corresponding Lagrange multiplier and then adding both sides of the resulting inequalities, we get that the inequality

$$\begin{aligned} \begin{array}{c} \widehat{\lambda }_{1}\underline{f}_{\alpha }\left( x_{0}\right) +\widehat{ \lambda }_{2}\overline{f}_{\alpha }\left( x_{0}\right) +\sum _{j=1}^{m} \widehat{\mu }_{j}g_{j}\left( x_{0}\right) +\sum _{i=1}^{r}\widehat{\vartheta }_{i}h_{i}\left( x_{0}\right) \\ -\left( \widehat{\lambda }_{1}\underline{f}_{\alpha }\left( \widehat{x} \right) +\widehat{\lambda }_{2}\overline{f}_{\alpha }\left( \widehat{x} \right) +\sum _{j=1}^{m}\widehat{\mu }_{j}g_{j}\left( \widehat{x}\right) +\sum _{i=1}^{r}\widehat{\vartheta }_{i}h_{i}\left( \widehat{x}\right) \right) \\ \ge \left( \widehat{\lambda }_{1}\underline{\xi }+\widehat{\lambda }_{2} \overline{\xi }+\sum _{j=1}^{m}\widehat{\mu }_{j}\zeta _{j}+\sum _{i=1}^{r} \widehat{\vartheta }_{i}\varsigma _{i}\right) ^{T}\left( x_{0}-\widehat{x} \right) \end{array} \end{aligned}$$
(48)

holds. Then, by the Karush–Kuhn–Tucker condition (12) and Corollary 13, (48) implies that the inequality

$$\begin{aligned} \begin{array}{c} \widehat{\lambda }_{1}\underline{f}_{\alpha }\left( x_{0}\right) +\widehat{ \lambda }_{2}\overline{f}_{\alpha }\left( x_{0}\right) +\sum _{j=1}^{m} \widehat{\mu }_{j}g_{j}\left( x_{0}\right) +\sum _{i=1}^{r}\widehat{\vartheta }_{i}h_{i}\left( x_{0}\right) \\ \ge \widehat{\lambda }_{1}\underline{f}_{\alpha }\left( \widehat{x}\right) + \widehat{\lambda }_{2}\overline{f}_{\alpha }\left( \widehat{x}\right) +\sum _{j=1}^{m}\widehat{\mu }_{j}g_{j}\left( \widehat{x}\right) +\sum _{i=1}^{r}\widehat{\vartheta }_{i}h_{i}\left( \widehat{x}\right) \end{array} \end{aligned}$$

holds, contradicting (42). Hence, the proof of this theorem is completed. \(\square \)
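Numerically, the penalty threshold appearing in Theorem 35 is immediate to evaluate once the Karush–Kuhn–Tucker multipliers are known. A minimal sketch, assuming hypothetical multiplier values that are not taken from the paper:

```python
# Hypothetical multipliers, for illustration only; the threshold of
# Theorem 35 is c >= max{ mu_j, j in J, |theta_i|, i in I }.
mu = [0.5, 2.0]        # multipliers of the inequality constraints
theta = [-3.0, 1.0]    # multipliers of the equality constraints

c_threshold = max(max(mu, default=0.0),
                  max((abs(t) for t in theta), default=0.0))
print(c_threshold)     # 3.0; any penalty parameter c >= 3.0 suffices here
```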

The following result follows directly from Theorem 35. It says that, under appropriate convexity hypotheses, a nondominated solution of the considered fuzzy optimization problem (FO) is also a nondominated solution of the associated penalized fuzzy optimization problem (FP(c)) with the fuzzy \(l_{1}\) exact penalty function if the penalty parameter c is assumed to be sufficiently large.

Theorem 36

Let \(\widehat{x}\) be a nondominated solution of the considered fuzzy optimization problem (FO) and all hypotheses of Theorem 35 be fulfilled. If the penalty parameter c is assumed to be sufficiently large (it is sufficient to set \(c\ge \max \left\{ \widehat{\mu }_{j}\text {, }j\in J\text { , }\left| \widehat{\vartheta }_{i}\right| \text {, }i\in I\right\} \) ), then \(\widehat{x}\) is also a nondominated solution of the associated fuzzy penalized optimization problem (FP(c)) with the fuzzy \(l_{1}\) exact penalty function.

Now, under appropriate convexity hypotheses, we prove that a strong Karush–Kuhn–Tucker point of the considered nonsmooth fuzzy optimization problem (FO) is also a weakly nondominated solution of the fuzzy penalized optimization problem (FP(c)) with the fuzzy \(l_{1}\) exact penalty function.

Theorem 37

Let \(\widehat{x}\in D\) be a strong Karush–Kuhn–Tucker point of the considered nonsmooth fuzzy optimization problem (FO) and the conditions (16)–(18) be satisfied at \(\widehat{x}\) with Lagrange multipliers \(\widehat{\mu }_{j}\), \( j\in J\), \(\widehat{\vartheta }_{i}\), \(i\in I\). Furthermore, assume that the objective function \(\widetilde{f}\) is a convex fuzzy function on X and the constraints of the problem (FO), that is, the functions \(g_{j}\), \(j=1,\ldots ,m\) , \(h_{i}\), \(i\in I^{+}\left( \widehat{x}\right) \), \(-h_{i}\), \(i\in I^{-}\left( \widehat{x}\right) \), are convex on X. If the penalty parameter c is assumed to be sufficiently large (it is sufficient to set \( c\ge \max \left\{ \widehat{\mu }_{j}\text {, }j\in J\text {, }\left| \widehat{\vartheta }_{i}\right| \text {, }i\in I\right\} \)), then \( \widehat{x}\) is a weakly nondominated solution of the fuzzy penalized optimization problem (FP(c)) with the fuzzy \(l_{1}\) exact penalty function.

Proof

We proceed by contradiction. Suppose, contrary to the result, that \(\widehat{ x}\) is not a weakly nondominated solution of the fuzzy penalized optimization problem (FP(c)) with the fuzzy \(l_{1}\) exact penalty function. Then, by Definition 21, there exists \(x_{0}\in X\) such that

$$\begin{aligned} \widetilde{P}\left( x_{0},c\right) \prec \widetilde{P}\left( \widehat{x} ,c\right) . \end{aligned}$$

In particular, one has for all \(\alpha \in \left[ 0,1\right] \) that

$$\begin{aligned} \underline{P}_{\alpha }\left( x_{0},c\right)<\underline{P}_{\alpha }\left( \widehat{x},c\right) \text { \ or \ }\overline{P}_{\alpha }\left( x_{0},c\right) <\overline{P}_{\alpha }\left( \widehat{x},c\right) . \end{aligned}$$

By (39), the above inequalities yield for all \(\alpha \in \left[ 0,1 \right] \), respectively,

$$\begin{aligned}&\underline{f}_{\alpha }\left( x_{0}\right) +c\left( \sum _{j=1}^{m}g_{j}^{+}\left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right) \nonumber \\&<\underline{f}_{\alpha }\left( \widehat{x}\right) +c\left( \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right) \end{aligned}$$
(49)

or

$$\begin{aligned}&\overline{f}_{\alpha }\left( x_{0}\right) +c\left( \sum _{j=1}^{m}g_{j}^{+}\left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right) \nonumber \\&<\overline{f}_{\alpha }\left( \widehat{x}\right) +c\left( \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right) . \end{aligned}$$
(50)

By \(\widehat{x}\in D\), (49) and (50) imply for all \(\alpha \in \left[ 0,1\right] \), respectively,

$$\begin{aligned} \underline{f}_{\alpha }\left( x_{0}\right) +c\left( \sum _{j=1}^{m}g_{j}^{+}\left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right) <\underline{f}_{\alpha }\left( \widehat{x}\right) \end{aligned}$$
(51)

or

$$\begin{aligned} \overline{f}_{\alpha }\left( x_{0}\right) +c\left( \sum _{j=1}^{m}g_{j}^{+}\left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right) <\overline{f}_{\alpha }\left( \widehat{x}\right) . \end{aligned}$$
(52)

By assumption, \(c\ge \max \left\{ \widehat{\mu }_{j}\text {, }j\in J\text {, } \left| \widehat{\vartheta }_{i}\right| \text {, }i\in I\right\} \). Thus, (51) and (52) imply for all \(\alpha \in \left[ 0,1\right] \) , respectively,

$$\begin{aligned} \underline{f}_{\alpha }\left( x_{0}\right) +\sum _{j=1}^{m}\widehat{\mu } _{j}g_{j}^{+}\left( x_{0}\right) +\sum _{i=1}^{r}\left| \widehat{ \vartheta }_{i}h_{i}\left( x_{0}\right) \right| <\underline{f}_{\alpha }\left( \widehat{x}\right) \end{aligned}$$
(53)

or

$$\begin{aligned} \overline{f}_{\alpha }\left( x_{0}\right) +\sum _{j=1}^{m}\widehat{\mu } _{j}g_{j}^{+}\left( x_{0}\right) +\sum _{i=1}^{r}\left| \widehat{ \vartheta }_{i}h_{i}\left( x_{0}\right) \right| <\overline{f}_{\alpha }\left( \widehat{x}\right) . \end{aligned}$$
(54)

Using again the feasibility of \(\widehat{x}\) in (FO) together with (18) and (38), we get, for all \(\alpha \in \left[ 0,1\right] \),

$$\begin{aligned}&\underline{f}_{\alpha }\left( x_{0}\right) +\sum _{j=1}^{m}\widehat{\mu } _{j}g_{j}\left( x_{0}\right) +\sum _{i=1}^{r}\widehat{\vartheta } _{i}h_{i}\left( x_{0}\right) < \nonumber \\&\underline{f}_{\alpha }\left( \widehat{x}\right) +\sum _{j=1}^{m}\widehat{\mu }_{j}g_{j}\left( \widehat{x}\right) +\sum _{i=1}^{r}\widehat{\vartheta }_{i}h_{i}\left( \widehat{x}\right) \end{aligned}$$
(55)

or

$$\begin{aligned}&\overline{f}_{\alpha }\left( x_{0}\right) +\sum _{j=1}^{m}\widehat{\mu } _{j}g_{j}\left( x_{0}\right) +\sum _{i=1}^{r}\widehat{\vartheta } _{i}h_{i}\left( x_{0}\right) < \nonumber \\&\overline{f}_{\alpha }\left( \widehat{x}\right) +\sum _{j=1}^{m} \widehat{\mu }_{j}g_{j}\left( \widehat{x}\right) +\sum _{i=1}^{r}\widehat{ \vartheta }_{i}h_{i}\left( \widehat{x}\right) . \end{aligned}$$
(56)

By assumption, \(\widehat{x}\) is a feasible solution of the considered nonsmooth fuzzy optimization problem (FO) and the strong Karush–Kuhn–Tucker optimality conditions (16)–(18) are satisfied at \(\widehat{x}\) with Lagrange multipliers \(\widehat{\mu }_{j}\), \(j\in J\), \(\widehat{\vartheta }_{i}\), \(i\in I\). Using the convexity hypotheses, we have, by Proposition 20 and Definition 9, that the inequalities (43)–(47) are satisfied. Multiplying each of the inequalities (45)–(47) by the corresponding Lagrange multiplier and then combining them with (43) and (44), we get for all \(\alpha \in \left[ 0,1\right] \) that the following inequalities

$$\begin{aligned} \begin{array}{c} \underline{f}_{\alpha }\left( x_{0}\right) +\sum _{j=1}^{m}\widehat{\mu } _{j}g_{j}\left( x_{0}\right) +\sum _{i=1}^{r}\widehat{\vartheta } _{i}h_{i}\left( x_{0}\right) \\ -\left( \underline{f}_{\alpha }\left( \widehat{x}\right) +\sum _{j=1}^{m} \widehat{\mu }_{j}g_{j}\left( \widehat{x}\right) +\sum _{i=1}^{r}\widehat{ \vartheta }_{i}h_{i}\left( \widehat{x}\right) \right) \\ \ge \left( \underline{\xi }+\sum _{j=1}^{m}\widehat{\mu }_{j}\zeta _{j}+\sum _{i=1}^{r}\widehat{\vartheta }_{i}\varsigma _{i}\right) ^{T}\left( x_{0}-\widehat{x}\right) \end{array} \end{aligned}$$
(57)
$$\begin{aligned} \begin{array}{c} \overline{f}_{\alpha }\left( x_{0}\right) +\sum _{j=1}^{m}\widehat{\mu } _{j}g_{j}\left( x_{0}\right) +\sum _{i=1}^{r}\widehat{\vartheta } _{i}h_{i}\left( x_{0}\right) \\ -\left( \overline{f}_{\alpha }\left( \widehat{x}\right) +\sum _{j=1}^{m} \widehat{\mu }_{j}g_{j}\left( \widehat{x}\right) +\sum _{i=1}^{r}\widehat{ \vartheta }_{i}h_{i}\left( \widehat{x}\right) \right) \\ \ge \left( \overline{\xi }+\sum _{j=1}^{m}\widehat{\mu }_{j}\zeta _{j}+\sum _{i=1}^{r}\widehat{\vartheta }_{i}\varsigma _{i}\right) ^{T}\left( x_{0}-\widehat{x}\right) \end{array} \end{aligned}$$
(58)

hold for any \(\underline{\xi }\in \partial \underline{f}_{\alpha }\left( \widehat{x}\right) \), \(\overline{\xi }\in \partial \overline{f}_{\alpha }\left( \widehat{x}\right) \), \(\zeta _{j}\in \partial g_{j}\left( \widehat{x} \right) \), \(j=1,\ldots ,m\), \(\varsigma _{i}\in \partial h_{i}\left( \widehat{ x}\right) \), \(i\in I\). Then, by the Karush–Kuhn–Tucker optimality conditions (16) and (17), (57) and (58) yield that the inequalities

$$\begin{aligned}&\underline{f}_{\alpha }\left( x_{0}\right) +\sum _{j=1}^{m}\widehat{\mu } _{j}g_{j}\left( x_{0}\right) +\sum _{i=1}^{r}\widehat{\vartheta } _{i}h_{i}\left( x_{0}\right) \nonumber \\&\ge \underline{f}_{\alpha }\left( \widehat{x}\right) +\sum _{j=1}^{m}\widehat{\mu }_{j}g_{j}\left( \widehat{x}\right) +\sum _{i=1}^{r}\widehat{\vartheta }_{i}h_{i}\left( \widehat{x}\right) , \end{aligned}$$
(59)
$$\begin{aligned}&\overline{f}_{\alpha }\left( x_{0}\right) +\sum _{j=1}^{m}\widehat{\mu } _{j}g_{j}\left( x_{0}\right) +\sum _{i=1}^{r}\widehat{\vartheta } _{i}h_{i}\left( x_{0}\right) \nonumber \\&\ge \overline{f}_{\alpha }\left( \widehat{x}\right) +\sum _{j=1}^{m} \widehat{\mu }_{j}g_{j}\left( \widehat{x}\right) +\sum _{i=1}^{r}\widehat{ \vartheta }_{i}h_{i}\left( \widehat{x}\right) \end{aligned}$$
(60)

hold for all \(\alpha \in \left[ 0,1\right] \), contradicting (55) or (56). Hence, the proof of this theorem is completed. \(\square \)

The following result follows directly from Theorem 37. It shows that, under appropriate convexity hypotheses, a weakly nondominated solution of the considered fuzzy optimization problem (FO) is also a weakly nondominated solution of the associated penalized fuzzy optimization problem (FP(c)) with the fuzzy \(l_{1}\) exact penalty function if the penalty parameter c is assumed to be sufficiently large.

Theorem 38

Let \(\widehat{x}\) be a weakly nondominated solution of the considered fuzzy optimization problem (FO) and all hypotheses of Theorem 37 be fulfilled. If the penalty parameter c is assumed to be sufficiently large (it is sufficient to set \(c\ge \max \left\{ \widehat{\mu }_{j}\text {, }j\in J\text {, }\left| \widehat{\vartheta }_{i}\right| \text {, }i\in I\right\} \)), then \(\widehat{x}\) is also a weakly nondominated solution of the associated penalized fuzzy optimization problem (FP(c)) with the fuzzy \(l_{1}\) exact penalty function.

Now, we prove the converses of the results formulated in Theorems 36 and 38. First, we establish some auxiliary results used in their proofs.

Proposition 39

Let \(\widehat{x}\) be a nondominated solution of the fuzzy penalized optimization problem (FP(c)) with the fuzzy \(l_{1}\) exact penalty function associated with the considered fuzzy optimization problem (FO). Then, there does not exist \(x\in D\) such that

$$\begin{aligned} \widetilde{f}\left( x\right) \preceq \widetilde{f}\left( \widehat{x}\right) . \end{aligned}$$
(61)

Proof

By assumption, \(\widehat{x}\) is a nondominated solution of the fuzzy penalized optimization problem (FP(c)) with the fuzzy \(l_{1}\) exact penalty function. We proceed by contradiction. Suppose, contrary to the result, that there exists \(x_{0}\in D \) such that \(\widetilde{f}\left( x_{0}\right) \preceq \widetilde{f}\left( \widehat{x}\right) \). Hence, for each \(\alpha \in \left[ 0,1\right] \), it follows that

$$\begin{aligned} \left\{ \begin{array}{c} \underline{f}_{\alpha }\left( x_{0}\right)<\underline{f}_{\alpha }\left( \widehat{x}\right) \\ \overline{f}_{\alpha }\left( x_{0}\right) \le \overline{f}_{\alpha }\left( \widehat{x}\right) \end{array} \right. \text {or}\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( x_{0}\right) \le \underline{f}_{\alpha }\left( \widehat{x}\right) \\ \overline{f}_{\alpha }\left( x_{0}\right)<\overline{f}_{\alpha }\left( \widehat{x}\right) \end{array} \right. \text {or}\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( x_{0}\right)<\underline{f}_{\alpha }\left( \widehat{x}\right) \text { }\\ \overline{f}_{\alpha }\left( x_{0}\right) <\overline{f}_{\alpha }\left( \widehat{x}\right) . \end{array} \text { }\right. \end{aligned}$$

Since \(x_{0}\in D\), by (38), we have for each \(\alpha \in \left[ 0,1 \right] \),

$$\begin{aligned}&\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right]<\underline{f}_{\alpha }\left( \widehat{x}\right) \\ \overline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right] \le \overline{f}_{\alpha }\left( \widehat{x}\right) \end{array} \right. \\&\text {or}\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right] \le \underline{f}_{\alpha }\left( \widehat{x}\right) \\ \overline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right]<\overline{f}_{\alpha }\left( \widehat{x}\right) \end{array} \right. \\&\text {or}\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right]<\underline{f}_{\alpha }\left( \widehat{x}\right) \text { }\\ \overline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right] <\overline{f}_{\alpha }\left( \widehat{x}\right) . \end{array} \text { }\right. \text { } \end{aligned}$$

Again using (38), we obtain for each \(\alpha \in \left[ 0,1\right] \),

$$\begin{aligned}&\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right] \\<\underline{f}_{\alpha }\left( \widehat{x}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right] \\ \overline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right] \le \\ \overline{f}_{\alpha }\left( \widehat{x}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right] \end{array} \right. \text { } \\&\text {or}\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right] \le \\ \underline{f}_{\alpha }\left( \widehat{x}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right] \\ \overline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right] \\<\overline{f}_{\alpha }\left( \widehat{x}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right] \end{array} \right. \text { } \\&\quad \text {or}\left\{ \begin{array}{c} \underline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right] \\<\underline{f}_{\alpha }\left( \widehat{x}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right] \\ \overline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right] \\ <\overline{f}_{\alpha }\left( \widehat{x}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right] . \end{array} \right. . \end{aligned}$$

Thus, by the definition of the fuzzy \(l_{1}\) exact penalty function \( \widetilde{P}\) (see (37)), it follows that

$$\begin{aligned}&\left\{ \begin{array}{c} \underline{P}_{\alpha }\left( x_{0},c\right)<\underline{P}_{\alpha }\left( \widehat{x},c\right) \\ \overline{P}_{\alpha }\left( x_{0},c\right) \le \overline{P}_{\alpha }\left( \widehat{x},c\right) \end{array} \right. \text { or}\left\{ \begin{array}{c} \underline{P}_{\alpha }\left( x_{0},c\right) \le \underline{P}_{\alpha }\left( \widehat{x},c\right) \\ \overline{P}_{\alpha }\left( x_{0},c\right)<\overline{P}_{\alpha }\left( \widehat{x},c\right) \end{array} \right. \\&\quad \text {or}\left\{ \begin{array}{c} \underline{P}_{\alpha }\left( x_{0},c\right) <\underline{P}_{\alpha }\left( \widehat{x},c\right) \\ \overline{P}_{\alpha }\left( x_{0},c\right) < \overline{P}_{\alpha }\left( \widehat{x},c\right) \end{array} \right. \text { for all }\alpha \in \left[ 0,1\right] . \end{aligned}$$

Hence, there exists \(x_{0}\in D\subset X\) such that \(\widetilde{P}\left( x_{0},c\right) \preceq \widetilde{P}\left( \widehat{x},c\right) \). This means that \(\widehat{x}\) is not a nondominated solution of the fuzzy penalized optimization problem (FP(c)) with the fuzzy \(l_{1}\) exact penalty function, which is a contradiction. Hence, the proof of this proposition is completed. \(\square \)

Proposition 40

Let \(\widehat{x}\) be a weakly nondominated solution of the fuzzy penalized optimization problem (FP(c)) with the fuzzy \(l_{1}\) exact penalty function associated with the considered fuzzy optimization problem (FO). Then, there does not exist \(x\in D\) such that

$$\begin{aligned} \widetilde{f}\left( x\right) \prec \widetilde{f}\left( \widehat{x}\right) . \end{aligned}$$
(62)

Proof

Suppose, contrary to the result, that there exists \(x_{0}\in D\) such that

$$\begin{aligned} \widetilde{f}\left( x_{0}\right) \prec \widetilde{f}\left( \widehat{x} \right) . \end{aligned}$$
(63)

In particular, there exists \(x_{0}\in D\) such that, for each \(\alpha \in \left[ 0,1\right] \),

$$\begin{aligned} \underline{f}_{\alpha }\left( x_{0}\right)<\underline{f}_{\alpha }\left( \widehat{x}\right) \text { \ or \ }\overline{f}_{\alpha }\left( x_{0}\right) < \overline{f}_{\alpha }\left( \widehat{x}\right) . \end{aligned}$$

Using \(x_{0}\in D\) together with (38), we get

$$\begin{aligned}&\underline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right] \\&<\underline{f}_{\alpha }\left( \widehat{x}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right] \end{aligned}$$

or

$$\begin{aligned}&\overline{f}_{\alpha }\left( x_{0}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+} \left( x_{0}\right) +\sum _{i=1}^{r}\left| h_{i}\left( x_{0}\right) \right| \right] \\&<\overline{f}_{\alpha }\left( \widehat{x}\right) +c\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right] . \end{aligned}$$

Thus, by (37), it follows that, for each \(\alpha \in \left[ 0,1\right] \),

$$\begin{aligned} \underline{P}_{\alpha }\left( x_{0},c\right)<\underline{P}_{\alpha }\left( \widehat{x},c\right) \text { or }\overline{P}_{\alpha }\left( x_{0},c\right) < \overline{P}_{\alpha }\left( \widehat{x},c\right) . \end{aligned}$$

This means that there exists \(x_{0}\in D\) such that the relation

$$\begin{aligned} \widetilde{P}\left( x_{0},c\right) \prec \widetilde{P}\left( \widehat{x},c\right) \end{aligned}$$

holds, contradicting the assumption that \(\widehat{x}\) is a weakly nondominated solution of the fuzzy penalized optimization problem (FP(c)) with the fuzzy \(l_{1}\) exact penalty function. Hence, the proof of this proposition is completed. \(\square \)

Theorem 41

Let D be a compact subset of \(R^{n}\) and \(\widehat{x}\) be a (weakly) nondominated solution of the fuzzy penalized optimization problem (FP(\(\overline{c}\))) with the fuzzy \(l_{1}\) exact penalty function. Further, assume that the fuzzy objective function \( \widetilde{f}\) is convex on X, each inequality constraint function \(g_{j}\), \(j\in J\), is convex on X, and each equality constraint function \(h_{i}\), \(i\in I\), is affine. If the penalty parameter \(\overline{c}\) is sufficiently large, then \( \widehat{x}\) is also a (weakly) nondominated solution of the considered fuzzy optimization problem (FO).

Proof

Assume that \(\widehat{x}\) is a nondominated solution of the fuzzy penalized optimization problem (FP(\(\overline{c}\))) with the fuzzy \(l_{1}\) exact penalty function; the proof for a weakly nondominated solution is analogous.

We consider two cases. First, assume that \(\widehat{x}\in D\). Then, by Proposition 39, there is no \(x\in D\) such that \(\widetilde{f}(x)\preceq \widetilde{f}\left( \widehat{x}\right) \). Hence, by Definition 22, the feasibility of \(\widehat{x}\) in (FO) implies that \(\widehat{x}\) is a nondominated solution of the considered fuzzy optimization problem (FO). Moreover, by Definition 34, \(\widehat{x}\) is a nondominated solution of the fuzzy penalized optimization problem (FP(c)) with the fuzzy \(l_{1}\) exact penalty function for any \(c\ge \overline{c}\).

Now, we prove that, under the assumptions of this theorem, the case \( \widehat{x}\notin D\) is impossible. Suppose, contrary to the result, that \( \widehat{x}\notin D\). Since \(\widehat{x}\) is a nondominated solution of the fuzzy penalized optimization problem (FP(\(\overline{c}\))), there exist \( \widehat{\lambda }_{1}\in R\), \(\widehat{\lambda }_{1}\ge 0\), \(\widehat{ \lambda }_{2}\in R\), \(\widehat{\lambda }_{2}\ge 0\), \(\widehat{\lambda }_{1}+ \widehat{\lambda }_{2}=1\), such that

$$\begin{aligned} 0\in \widehat{\lambda }_{1}\partial \underline{P}_{\alpha }(\widehat{x}, \overline{c})+\widehat{\lambda }_{2}\partial \overline{P}_{\alpha }(\widehat{ x},\overline{c}). \end{aligned}$$
(64)

By definition of the fuzzy \(l_{1}\) exact penalty function, it follows that

$$\begin{aligned}&0\in \widehat{\lambda }_{1}\partial \left( \underline{f}_{\alpha }\left( \widehat{x}\right) +\overline{c}\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right] \right) \nonumber \\&+\widehat{\lambda }_{2}\partial \left( \overline{f}_{\alpha }\left( \widehat{x}\right) +\overline{c}\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right] \right) . \end{aligned}$$
(65)

Since the weights \(\widehat{\lambda }_{1}\) and \(\widehat{\lambda }_{2}\) are nonnegative, by Remark 14, equality holds in Corollary 13. Thus, (65) yields

$$\begin{aligned}&0\in \widehat{\lambda }_{1}\partial \underline{f}_{\alpha }\left( \widehat{x} \right) +\widehat{\lambda }_{2}\partial \overline{f}_{\alpha }\left( \widehat{x}\right) \\&\quad +\left( \widehat{\lambda }_{1}+\widehat{\lambda }_{2}\right) \partial \left( \overline{c}\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right] \right) . \end{aligned}$$

Hence, by \(\widehat{\lambda }_{1}+\widehat{\lambda }_{2}=1\), it follows that

$$\begin{aligned}&0\in \widehat{\lambda }_{1}\partial \underline{f}_{\alpha }\left( \widehat{x} \right) +\widehat{\lambda }_{2}\partial \overline{f}_{\alpha }\left( \widehat{x}\right) \\&\quad +\partial \left( \overline{c}\left[ \sum _{j=1}^{m}g_{j}^{+} \left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x} \right) \right| \right] \right) . \end{aligned}$$

Then, by Lemma 11, it follows that

$$\begin{aligned}&0\in \widehat{\lambda }_{1}\partial \underline{f}_{\alpha }\left( \widehat{x} \right) +\widehat{\lambda }_{2}\partial \overline{f}_{\alpha }\left( \widehat{x}\right) \nonumber \\&\quad +\overline{c}\partial \left( \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\left( \widehat{x}\right) \right| \right) . \end{aligned}$$
(66)

Thus, by Proposition 12, we have

$$\begin{aligned}&0\in \widehat{\lambda }_{1}\partial \underline{f}_{\alpha }\left( \widehat{x} \right) +\widehat{\lambda }_{2}\partial \overline{f}_{\alpha }\left( \widehat{x}\right) \nonumber \\&\quad +\overline{c}\left[ \sum _{j=1}^{m}\partial g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\partial \left( \left| h_{i}\left( \widehat{x}\right) \right| \right) \right] . \end{aligned}$$
(67)

By assumption, the fuzzy objective function \(\widetilde{f}\) is a convex fuzzy mapping. Then, by Proposition 16, the functions \(\underline{f}_{\alpha }\) and \(\overline{f}_{\alpha }\) are convex on X for each \(\alpha \in \left[ 0,1\right] \). Hence, for each \(\alpha \in \left[ 0,1\right] \), by Proposition 20, the following inequalities

$$\begin{aligned}&\underline{f}_{\alpha }(x)-\underline{f}_{\alpha }(\widehat{x})\ge \underline{\xi }^{T}\left( x-\widehat{x}\right) \text {, }\forall \underline{\xi }\in \partial \underline{f}_{\alpha }\left( \widehat{x} \right) \text {,} \end{aligned}$$
(68)
$$\begin{aligned}&\overline{f}_{\alpha }(x)-\overline{f}_{\alpha }(\widehat{x})\ge \overline{ \xi }^{T}\left( x-\widehat{x}\right) \text {, }\forall \overline{\xi }\in \partial \overline{f}_{\alpha }\left( \widehat{x}\right) \end{aligned}$$
(69)

hold for all \(x\in X\). Further, by assumption, each constraint function \( g_{j}\), \(j\in J\), is convex on X; therefore, the functions \(g_{j}^{+}\), \(j\in J\), are also convex on X. Since each function \(h_{i}\), \(i\in I\), is affine, each function \(\left| h_{i}\right| \), \( i\in I\), is convex. Then, the inequalities

$$\begin{aligned}&g_{j}^{+}(x)-g_{j}^{+}(\widehat{x})\ge \left( \zeta _{j}^{+}\right) ^{T}\left( x-\widehat{x}\right) \text {,}\nonumber \\&\forall \zeta _{j}^{+}\in \partial g_{j}^{+}\left( \widehat{x}\right) \text {, }j=1,...,m\text {,} \end{aligned}$$
(70)
$$\begin{aligned}&\left| h_{i}\right| (x)-\left| h_{i}\right| (\widehat{x} )\ge \varsigma _{i}^{T}\left( x-\widehat{x}\right) \text {,}\nonumber \\&\forall \varsigma _{i}\in \partial \left( \left| h_{i}\right| \right) \left( \widehat{x}\right) \text {, }i=1,...,r \end{aligned}$$
(71)

hold for all \(x\in X\). Multiplying (70) and (71) by \(\overline{c}>0\) and then adding the resulting inequalities over \(j=1,\ldots ,m\) and \(i=1,\ldots ,r\), we get

$$\begin{aligned} \begin{array}{c} \overline{c}\left[ \sum _{j=1}^{m}g_{j}^{+}\left( x\right) +\sum _{i=1}^{r}\left| h_{i}\right| (x)\right] \\ -\overline{c}\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\right| (\widehat{x})\right] \\ \ge \overline{c}\left[ \sum _{j=1}^{m}\left( \zeta _{j}^{+}\right) +\sum _{i=1}^{r}\varsigma _{i}\right] ^{T}\left( x-\widehat{x}\right) . \end{array} \end{aligned}$$
(72)

Combining (68), (69) and (72), we have that the inequalities

$$\begin{aligned} \begin{array}{l} \underline{f}_{\alpha }(x)+\overline{c}\left[ \sum _{j=1}^{m}g_{j}^{+}\left( x\right) +\sum _{i=1}^{r}\left| h_{i}\right| (x)\right] \\ \quad - \left( \underline{f}_{\alpha }(\widehat{x})+\overline{c}\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\right| (\widehat{x})\right] \right) \\ \quad \ge \left( \underline{\xi }+\overline{c}\left[ \sum _{j=1}^{m}\left( \zeta _{j}^{+}\right) +\sum _{i=1}^{r}\varsigma _{i}\right] \right) ^{T}\left( x- \widehat{x}\right) \text {,} \end{array} \end{aligned}$$
(73)
$$\begin{aligned} \begin{array}{c} \overline{f}_{\alpha }(x)+\overline{c}\left[ \sum _{j=1}^{m}g_{j}^{+}\left( x\right) +\sum _{i=1}^{r}\left| h_{i}\right| (x)\right] \\ \quad - \left( \overline{f}_{\alpha }(\widehat{x})+\overline{c}\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\right| (\widehat{x})\right] \right) \\ \quad \ge \left( \overline{\xi }+\overline{c}\left[ \sum _{j=1}^{m}\left( \zeta _{j}^{+}\right) +\sum _{i=1}^{r}\varsigma _{i}\right] \right) ^{T}\left( x- \widehat{x}\right) \end{array} \end{aligned}$$
(74)

hold for all \(x\in X\) and for any \(\underline{\xi }\in \partial \underline{f} _{\alpha }\left( \widehat{x}\right) \), \(\overline{\xi }\in \partial \overline{f}_{\alpha }\left( \widehat{x}\right) \), \(\zeta _{j}^{+}\in \partial g_{j}^{+}\left( \widehat{x}\right) \), \(j\in J\), \(\varsigma _{i}\in \partial \left( \left| h_{i}\right| \right) (\widehat{x})\). Multiplying (73) and (74) by \(\widehat{\lambda }_{1}\) and \( \widehat{\lambda }_{2}\) and then adding both sides of the resulting inequalities, we get

$$\begin{aligned} \begin{array}{c} \widehat{\lambda }_{1}\underline{f}_{\alpha }(x)+\widehat{\lambda }_{2} \overline{f}_{\alpha }(x)+\overline{c}\left( \widehat{\lambda }_{1}+\widehat{ \lambda }_{2}\right) \left[ \sum _{j=1}^{m}g_{j}^{+}\left( x\right) \right. \\ \quad \left. +\sum _{i=1}^{r}\left| h_{i}\right| (x)\right] -\left( \widehat{ \lambda }_{1}\underline{f}_{\alpha }(\widehat{x})+\widehat{\lambda }_{2} \overline{f}_{\alpha }(\widehat{x})\right. \\ \quad \left. + \overline{c}\left( \widehat{\lambda }_{1}+\widehat{\lambda } _{2}\right) \left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\right| (\widehat{x})\right] \right) \ge \\ \left( \widehat{\lambda }_{1}\underline{\xi }+\widehat{\lambda }_{2} \overline{\xi }+\overline{c}\left( \widehat{\lambda }_{1}+\widehat{\lambda } _{2}\right) \left[ \sum _{j=1}^{m}\left( \zeta _{j}^{+}\right) +\sum _{i=1}^{r}\varsigma _{i}\right] \right) ^{T}\left( x-\widehat{x}\right) . \end{array} \end{aligned}$$

Since \(\widehat{\lambda }_{1}+\widehat{\lambda }_{2}=1\), the above inequality gives, for all \(x\in X\) and for any \(\underline{\xi }\in \partial \underline{f}_{\alpha }\left( \widehat{x}\right) \), \(\overline{\xi }\in \partial \overline{f}_{\alpha }\left( \widehat{x}\right) \), \(\zeta _{j}^{+}\in \partial g_{j}^{+}\left( \widehat{x}\right) \), \(j\in J\), \( \varsigma _{i}\in \partial \left( \left| h_{i}\right| \right) ( \widehat{x})\), \(i\in I\),

$$\begin{aligned} \begin{array}{c} \widehat{\lambda }_{1}\underline{f}_{\alpha }(x)+\widehat{\lambda }_{2} \overline{f}_{\alpha }(x)+\overline{c}\left[ \sum _{j=1}^{m}g_{j}^{+}\left( x\right) +\sum _{i=1}^{r}\left| h_{i}\right| (x)\right] \\ -\left( \widehat{\lambda }_{1}\underline{f}_{\alpha }(\widehat{x})+\widehat{ \lambda }_{2}\overline{f}_{\alpha }(\widehat{x})+\overline{c}\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\right| (\widehat{x})\right] \right) \\ \ge \left( \widehat{\lambda }_{1}\underline{\xi }+\widehat{\lambda }_{2}\overline{\xi }+\overline{c}\left[ \sum _{j=1}^{m}\left( \zeta _{j}^{+}\right) +\sum _{i=1}^{r}\varsigma _{i}\right] \right) ^{T}\left( x- \widehat{x}\right) . \end{array} \end{aligned}$$
(75)

Hence, by (67), (75) implies that the inequality

$$\begin{aligned} \begin{array}{c} \widehat{\lambda }_{1}\underline{f}_{\alpha }(x)+\widehat{\lambda }_{2} \overline{f}_{\alpha }(x)+\overline{c}\left[ \sum _{j=1}^{m}g_{j}^{+}\left( x\right) +\sum _{i=1}^{r}\left| h_{i}\right| (x)\right] \\ \ge \widehat{\lambda }_{1}\underline{f}_{\alpha }(\widehat{x})+\widehat{\lambda } _{2}\overline{f}_{\alpha }(\widehat{x})+\overline{c}\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\right| (\widehat{x})\right] \end{array} \end{aligned}$$
(76)

holds for all \(x\in X\). By (38), for each \(x\in D\), it follows that

$$\begin{aligned} \sum _{j=1}^{m}g_{j}^{+}\left( x\right) =0\text {, }\sum _{i=1}^{r}\left| h_{i}\right| (x)=0. \end{aligned}$$
(77)

Combining (76) and (77), we get that the inequality

$$\begin{aligned}&\widehat{\lambda }_{1}\underline{f}_{\alpha }(x)+\widehat{\lambda }_{2} \overline{f}_{\alpha }(x)-\left( \widehat{\lambda }_{1}\underline{f}_{\alpha }(\widehat{x})+\widehat{\lambda }_{2}\overline{f}_{\alpha }(\widehat{x} )\right) \nonumber \\&\ge \overline{c}\left[ \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x} \right) +\sum _{i=1}^{r}\left| h_{i}\right| (\widehat{x})\right] \end{aligned}$$
(78)

holds for all \(x\in D\). By supposition, \(\widehat{x}\) is not feasible in the considered fuzzy optimization problem (FO). Hence, by (38), we have that

$$\begin{aligned} \sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x}\right) +\sum _{i=1}^{r}\left| h_{i}\right| (\widehat{x})>0. \end{aligned}$$
(79)

Then, by (79), (78) gives

$$\begin{aligned} \overline{c}\le \max \left\{ \frac{\widehat{\lambda }_{1}\underline{f} _{\alpha }(x)+\widehat{\lambda }_{2}\overline{f}_{\alpha }(x)-\left( \widehat{\lambda }_{1}\underline{f}_{\alpha }(\widehat{x})+\widehat{\lambda } _{2}\overline{f}_{\alpha }(\widehat{x})\right) }{\sum _{j=1}^{m}g_{j}^{+} \left( \widehat{x}\right) +\sum _{i=1}^{r}\left( \left| h_{i}\right| \right) (\widehat{x})}:x\in D\right\} . \end{aligned}$$
(80)

By assumption, \(\overline{c}\) is sufficiently large; in particular, suppose that it satisfies, for each \(\alpha \in \left[ 0,1\right] \), the condition

$$\begin{aligned} \overline{c}>\max \left\{ \frac{\widehat{\lambda }_{1}\underline{f}_{\alpha }(x)+\widehat{\lambda }_{2}\overline{f}_{\alpha }(x)-\left( \widehat{\lambda }_{1}\underline{f}_{\alpha }(\widehat{x})+\widehat{\lambda }_{2}\overline{f} _{\alpha }(\widehat{x})\right) }{\sum _{j=1}^{m}g_{j}^{+}\left( \widehat{x} \right) +\sum _{i=1}^{r}\left( \left| h_{i}\right| \right) (\widehat{x })}:x\in D\right\} . \end{aligned}$$
(81)

We now show that the right-hand side of (81) is a finite nonnegative real number, so that such a choice of \(\overline{c}\) is possible. Indeed, by assumption, \(\widehat{x}\) is a nondominated solution of the fuzzy penalized optimization problem (FP(\(\overline{c}\))) with the fuzzy \(l_{1}\) exact penalty function. Then, by Definition 22, it follows that there does not exist \(x\in D\) such that, for each \(\alpha \in \left[ 0,1\right] ,\)

$$\begin{aligned} \widehat{\lambda }_{1}\underline{f}_{\alpha }(x)+\widehat{\lambda }_{2} \overline{f}_{\alpha }(x)-\left( \widehat{\lambda }_{1}\underline{f}_{\alpha }(\widehat{x})+\widehat{\lambda }_{2}\overline{f}_{\alpha }(\widehat{x} )\right) <0. \end{aligned}$$

Hence, the numerator in (81) is nonnegative for every \(x\in D\) and, since D is a compact subset of \(R^{n}\) and the functions involved are continuous, the maximum in (81) is a finite real number. But the inequality (81) contradicts the inequality (80). This means that the case \(\widehat{x}\notin D\) is impossible. Hence, \(\widehat{x}\) is feasible in the considered fuzzy optimization problem (FO). Thus, the conclusion of this theorem follows directly from Proposition 39 (or Proposition 40). Hence, the proof of this theorem is completed. \(\square \)
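When D is compact, the right-hand side of (81) can also be approximated numerically, for instance by a grid search. The sketch below uses hypothetical one-dimensional data (the feasible set \(D=[0,1]\), a placeholder scalarized objective, and an infeasible point) purely to illustrate the bound in (80)–(81):

```python
import numpy as np

# A rough numerical illustration of (80)-(81) under assumed data:
# D = [0, 1], one inequality constraint, no equality constraints.

def scalarized_f(x, alpha, lam1=0.5, lam2=0.5):
    # lam1 * f_lower_alpha(x) + lam2 * f_upper_alpha(x); placeholder objective
    return lam1 * (alpha * x - 1.0) + lam2 * ((2.0 - alpha) * x - 1.0)

def penalty(x, gs, hs):
    return sum(max(0.0, g(x)) for g in gs) + sum(abs(h(x)) for h in hs)

gs = [lambda x: x ** 2 - x]   # feasible set D = [0, 1]
hs = []                       # no equality constraints in this toy problem
x_hat = -0.5                  # infeasible, so penalty(x_hat, gs, hs) > 0
alpha = 0.5

grid = np.linspace(0.0, 1.0, 1001)  # discretization of the compact set D
num = scalarized_f(grid, alpha) - scalarized_f(x_hat, alpha)
c_bar_bound = num.max() / penalty(x_hat, gs, hs)
print(c_bar_bound)  # 2.0 for this data; any c exceeding it rules out x_hat
```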

We now formulate the main result of this work.

Theorem 42

Let all assumptions of Theorems 36 and 41 (respectively, Theorems 38 and 41) be fulfilled. Then, \(\widehat{x}\) is a (weakly) nondominated solution of the considered fuzzy optimization problem (FO) with the fuzzy objective function if and only if \(\widehat{x}\) is a (weakly) nondominated solution of the fuzzy penalized optimization problem (FP(c)) with the fuzzy \(l_{1}\) exact penalty function.

Now, we illustrate the results established in the paper by an example of a nonlinear convex nondifferentiable fuzzy optimization problem which we solve using the fuzzy \(l_{1}\) exact penalty method.

Example 43

Consider the following nondifferentiable convex fuzzy optimization problem defined by:

$$\begin{aligned} \begin{array}{c} \widetilde{f}\left( x\right) =\max \left\{ \widetilde{f}_{1}\left( x\right) \text { },\text { }\widetilde{f}_{2}\left( x\right) \right\} \rightarrow \min \\ g_{1}\left( x\right) =x^{2}-x\le 0\text {,} \end{array} \text { (FO1)} \end{aligned}$$

where

$$\begin{aligned}&\widetilde{f}_{1}\left( x\right) =\left\{ \begin{array}{ccc} \widetilde{1}x-1 &{} \text {if} &{} x<0\text {,}\\ \left( \widetilde{1}+2\right) x-1 &{} \text {if} &{} x\ge 0\text {,} \end{array} \right. \\&\widetilde{f}_{2}\left( x\right) =-\widetilde{2}x\ominus _{H}\widetilde{3} \end{aligned}$$

and, moreover, \(\widetilde{1}\), \(\widetilde{2}\) and \(\widetilde{3}\) are continuous triangular fuzzy numbers which are defined as triples \(\widetilde{ 1}=\left( 0,1,2\right) \), \(\widetilde{2}=\left( 1,2,4\right) \) and \( \widetilde{3}=\left( 1,3,5\right) \). Hence, by using (1), their \( \alpha \)-cuts are as follows \(\widetilde{1}_{\alpha }=\left[ \alpha \text { , }2-\alpha \right] \), \(\widetilde{2}_{\alpha }=\left[ 1+\alpha \text { , } 4-2\alpha \right] \) and \(\widetilde{3}_{\alpha }=\left[ 1+2\alpha \text { , } 5-2\alpha \right] \), respectively. Note that the set D of all feasible solutions of (FO1) is \(D=\left\{ x\in R:x^{2}-x\le 0\right\} \) and that \( \widehat{x}=0\) is a feasible solution of (FO1). Moreover, by (2), (3) and Definition 5, the \(\alpha \)-level cuts of the fuzzy objective functions \(\widetilde{f}_{1}\) and \(\widetilde{f} _{2}\) are defined for any \(\alpha \in \left[ 0,1\right] \) as follows:

$$\begin{aligned}&\left( \widetilde{f}_{1}\right) _{\alpha }\left( x\right) =\left\{ \begin{array}{ccc} \left[ \left( 2-\alpha \right) x-1\text { },\text { }\left( 2\alpha -4\right) x-2\alpha -1\right] &{} \text {if} &{} x<0\text {,}\\ \left[ \alpha x-1\text { },\text { }\left( 2-\alpha \right) x-1\right] &{} \text { if} &{} x\ge 0\text {,} \end{array} \right. \\&\left( \widetilde{f}_{2}\right) _{\alpha }\left( x\right) =\left\{ \begin{array}{c} \left[ \left( 2\alpha -4\right) x-\left( 2\alpha -1\right) ,-\left( 1+\alpha \right) x+2\alpha -5\right] \\ \text {if }x<0,\\ \left[ -\left( 1+\alpha \right) x-2\alpha -1\text { },\text { }\left( 2\alpha -4\right) x+2\alpha -5\right] \\ \text {if }x\ge 0. \end{array} \right. \end{aligned}$$
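The \(\alpha \)-cuts of \(\widetilde{1}\), \(\widetilde{2}\) and \(\widetilde{3}\) quoted above follow from the standard formula for a triangular fuzzy number \(\widetilde{a}=\left( a_{1},a_{2},a_{3}\right) \), namely \(\widetilde{a}_{\alpha }=\left[ a_{1}+\alpha \left( a_{2}-a_{1}\right) ,a_{3}-\alpha \left( a_{3}-a_{2}\right) \right] \). A quick check in Python (our sketch):

```python
def alpha_cut(a1, a2, a3, alpha):
    # alpha-cut of the triangular fuzzy number (a1, a2, a3)
    return a1 + alpha * (a2 - a1), a3 - alpha * (a3 - a2)

# Reproduces the cuts used in Example 43 at alpha = 0.5:
print(alpha_cut(0, 1, 2, 0.5))  # (0.5, 1.5): [alpha, 2 - alpha]
print(alpha_cut(1, 2, 4, 0.5))  # (1.5, 3.0): [1 + alpha, 4 - 2*alpha]
print(alpha_cut(1, 3, 5, 0.5))  # (2.0, 4.0): [1 + 2*alpha, 5 - 2*alpha]
```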

Hence, the \(\alpha \)-level cut of the fuzzy objective function \(\widetilde{f} \) is defined for any \(\alpha \in \left[ 0,1\right] \) as follows

$$\begin{aligned} \widetilde{f}_{\alpha }\left( x\right) =\left\{ \begin{array}{c} \left[ \max \left\{ \left( 2-\alpha \right) x-1\text { },\text { }\left( 2\alpha -4\right) x-\left( 2\alpha -1\right) \right\} \text { },\right. \\ \left. \max \left\{ \left( 2\alpha -4\right) x-2\alpha -1\text { },\text { } -\left( 1+\alpha \right) x+2\alpha -5\right\} \right] \\ \text {if }x<0,\\ \left[ \max \left\{ \alpha x-1\text { },\text { }-\left( 1+\alpha \right) x-2\alpha -1\right\} \text { },\right. \\ \left. \max \left\{ \left( 2-\alpha \right) x-1\text { },\text { }\left( 2\alpha -4\right) x+2\alpha -5\right\} \right] \\ \text {if }x\ge 0, \end{array} \right. \end{aligned}$$

where

$$\begin{aligned}&\underline{f}_{\alpha }(x)=\left\{ \begin{array}{c} \max \left\{ \left( 2-\alpha \right) x-1\text { },\text { }\left( 2\alpha -4\right) x-\left( 2\alpha -1\right) \right\} \\ \text {if }x<0,\\ \max \left\{ \alpha x-1\text { },\text { }-\left( 1+\alpha \right) x-2\alpha -1\right\} \\ \text { if }x\ge 0\text {,} \end{array} \right. \\&\overline{f}_{\alpha }\left( x\right) =\left\{ \begin{array}{c} \max \left\{ \left( 2\alpha -4\right) x-2\alpha -1\text { },\text { }-\left( 1+\alpha \right) x+2\alpha -5\right\} \\ \text {if }x<0,\\ \max \left\{ \left( 2-\alpha \right) x-1\text { },\text { }\left( 2\alpha -4\right) x+2\alpha -5\right\} \\ \text { if }x\ge 0. \end{array} \right. \end{aligned}$$

Note that the lower function \(\underline{f}_{\alpha }\) and the upper function \(\overline{f}_{\alpha }\) of \(\widetilde{f}\) are convex for each \( \alpha \in \left[ 0,1\right] \) and the constraint function \(g_{1}\) is also convex. Hence, by Proposition 16, the objective function \(\widetilde{f}\) is a convex fuzzy function. Moreover, the lower and upper functions \(\underline{f}_{\alpha }\) and \(\overline{f} _{\alpha }\) of \(\widetilde{f}_{\alpha }\) are not differentiable at \(\widehat{ x}\) for any \(\alpha \in [0,1)\). Then, \(\widetilde{f}\) is not level-wise differentiable at this point (see Definition 4.2 in Wu 2007). Thus, the Karush–Kuhn–Tucker optimality conditions existing in the literature for fuzzy optimization problems are not applicable in the considered case (see, for example, Panigrahi et al. (2008), Wu (2007)). However, the Karush–Kuhn–Tucker optimality conditions (12)–(14) are fulfilled at \(\widehat{x}\) with Lagrange multipliers \(\widehat{\lambda } _{1}\left( \alpha \right) =\frac{1}{2}\), \(\widehat{\lambda }_{2}\left( \alpha \right) =\frac{1}{2}\) and \(\widehat{\mu }_{1}\left( \alpha \right) =0\) for each \(\alpha \in \left[ 0,1\right] \). Now, we use the \(l_{1}\) exact penalty function method for solving the considered nondifferentiable fuzzy optimization problem (FO1) and construct the associated fuzzy penalized optimization problem (FP(c)) with the fuzzy \(l_{1}\) exact penalty function. Hence, the \(\alpha \)-cut of the fuzzy penalized optimization problem (FP\(_{\alpha }\)(c)) is defined for any \(\alpha \in \left[ 0,1 \right] \) as follows:

$$\begin{aligned} \begin{array}{c} \text {min }P_{\alpha }(x,c)=\left[ \underline{f}_{\alpha }(x)+c\max \left\{ 0,x^{2}-x\right\} \text { ,}\right. \text { }\overline{f}_{\alpha }\left( x\right) \\ \left. +c\max \left\{ 0,x^{2}-x\right\} \right] . \end{array} \text { }(\text {FP}_{\alpha }(c)) \end{aligned}$$

Note that all hypotheses of Theorem 35 are fulfilled. This means that \(\widehat{x}=0\) is a nondominated solution of the penalized fuzzy optimization problem (FP(c)) for any penalty parameter \( c>0 \). Further, all hypotheses of Theorem 41 are also fulfilled. Hence, if \(\widehat{x}\) is a nondominated solution of the fuzzy penalized optimization problem (FP(c)) with the fuzzy \(l_{1}\) exact penalty function for some penalty parameter \(c>0\), then it is also a nondominated solution of (FO1), as illustrated by the numerical sketch below.
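The following Python sketch (our illustration, not part of the paper) performs this check numerically: it minimizes the scalarization \(\frac{1}{2}\underline{P}_{\alpha }(x,c)+\frac{1}{2}\overline{P}_{\alpha }(x,c)\) of (FP\(_{\alpha }\)(c)) over a grid, using the endpoint functions \(\underline{f}_{\alpha }\) and \(\overline{f}_{\alpha }\) displayed above, and recovers \(\widehat{x}=0\) for the tested values of \(\alpha \) and of the penalty parameter \(c>0\).

```python
import numpy as np

# A numerical sketch for Example 43: minimize the scalarized penalized
# objective 0.5 * P_lower + 0.5 * P_upper of (FP_alpha(c)) on a grid.

def f_lower(alpha, x):
    return np.where(
        x < 0,
        np.maximum((2 - alpha) * x - 1, (2 * alpha - 4) * x - (2 * alpha - 1)),
        np.maximum(alpha * x - 1, -(1 + alpha) * x - 2 * alpha - 1),
    )

def f_upper(alpha, x):
    return np.where(
        x < 0,
        np.maximum((2 * alpha - 4) * x - 2 * alpha - 1,
                   -(1 + alpha) * x + 2 * alpha - 5),
        np.maximum((2 - alpha) * x - 1, (2 * alpha - 4) * x + 2 * alpha - 5),
    )

def scalarized_P(x, c, alpha):
    pen = c * np.maximum(0.0, x ** 2 - x)   # exact penalty for g1(x) <= 0
    return 0.5 * f_lower(alpha, x) + 0.5 * f_upper(alpha, x) + pen

xs = np.linspace(-2.0, 2.0, 40001)          # grid containing x = 0 exactly
for alpha in (0.0, 0.5):
    for c in (0.5, 5.0):
        x_min = xs[np.argmin(scalarized_P(xs, c, alpha))]
        print(alpha, c, float(np.round(x_min, 4)))  # x_min = 0.0 in each case
```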

5 Conclusions

In the paper, the nonsmooth optimization problem with a fuzzy objective function and both inequality and equality constraints has been considered. Optimality conditions of the Karush–Kuhn–Tucker type have been established for a (weakly) nondominated solution of such nondifferentiable optimization problems under appropriate convexity hypotheses. Further, the \(l_{1}\) exact penalty function method has been used for finding (weakly) nondominated solutions of the considered nonsmooth convex optimization problem with a fuzzy objective function and both inequality and equality constraints, and its associated penalized fuzzy optimization problem has been constructed in this approach. Then, one of the most important properties of this method, the so-called exactness of the penalization, has been defined for the case when the method is used for finding such optimal solutions, and this property has been analyzed under suitable convexity hypotheses imposed on the functions constituting the original nondifferentiable fuzzy optimization problem. Hence, conditions guaranteeing the equivalence of the sets of (weakly) nondominated solutions of the original nonsmooth minimization problem with a fuzzy objective function and of its penalized fuzzy optimization problem have been derived under convexity hypotheses. The results established in the paper thus show that the \(l_{1}\) exact penalty function method can be used for solving a class of nonsmooth extremum problems, namely convex nondifferentiable optimization problems with fuzzy objective functions.

However, some interesting topics for further research remain. It would be of interest to investigate whether it is possible to prove similar results for other classes of fuzzy optimization problems. We shall investigate these questions in subsequent papers.