1 Introduction

Historically, the derivative has been and remains a powerful tool in analyzing optimization problems, and it plays an important role in developing methods for characterizing and computing optimal solutions. If the function \(f:\mathbb {R}^n \rightarrow \overline{\mathbb {R}}\) is convex and differentiable at \( \overline{x}\) with gradient vector \(\nabla f(\overline{x}),\) then the graph of the affine function \(a(x)=f(\overline{x}) + \langle \nabla f(\overline{x}), x-\overline{x} \rangle \) is a supporting hyperplane to \(epi f\) at \((\overline{x},f(\overline{x}))\) and for every \(x \in dom f\) one has

$$\begin{aligned} f(x) - f(\overline{x}) \ge \langle \nabla f(\overline{x}), x-\overline{x} \rangle , \end{aligned}$$
(1)

or \(epi f \subset epi a\) and \(f(\overline{x}) = a(\overline{x}).\)

For a variety of reasons, the differentiability condition may be too restrictive in applications. This situation has led to the development of the theory of generalized differentiation and, consequently, of nonsmooth analysis (see e.g. [1, 3, 9, 11, 17, 19, 23, 25, 30,31,32,33,34, 40]).

As a first step of such a generalization, the two-sided limit \(t \rightarrow 0\) was relaxed to the one-sided limit \(t \downarrow 0.\) The resulting quantity, called in this paper the directional derivative \(f^{\prime }(\overline{x};h)\) of f at \(\overline{x}\) in a direction \(h \in \mathbb {R}^n,\) is defined as

$$\begin{aligned} f^{\prime }(\overline{x};h) = \lim _{ t \downarrow 0} \frac{f(\overline{x}+ th)- f(\overline{x})}{t}, \end{aligned}$$
(2)

if the limit exists.
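For instance, for the norm function \(f(x)=\Vert x\Vert \) and \(\overline{x}=0_{\mathbb {R}^n},\) the quotient in (2) equals \(\Vert h\Vert \) for every \(t>0,\) so \(f^{\prime }(0;h)=\Vert h\Vert \) for every direction h, although f is not differentiable at the origin.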

The relation (1) was extended to the nondifferentiable case by T. Rockafellar in [36], where the subdifferential \(\partial f(\overline{x})\) was defined as the set of subgradient vectors \(v \in \mathbb {R}^n:\)

$$\begin{aligned} \partial f(\overline{x}) = \{v \in \mathbb {R}^n: f(x) - f(\overline{x}) \ge \langle v, x-\overline{x} \rangle , \forall x \in \mathbb {R}^n \}. \end{aligned}$$
(3)

This notion successfully extends the affine support relations given above for the gradient vector \(\nabla f(\overline{x})\) and the affine function a(x). The subdifferentiability of f at \(\overline{x}\) means the existence of a vector \(v \in \mathbb {R}^n\) such that the hyperplane

$$\begin{aligned} H(v,-1) = \{(x,y): \langle (v,-1), (x-\overline{x}, y - f(\overline{x}))\rangle =0 \} \end{aligned}$$

with normal vector \((v,-1),\) is a supporting hyperplane to the epigraph of f at \((\overline{x},f(\overline{x})):\)

$$\begin{aligned} epi f \subset H^-(v,-1) = \{(x,y): \langle (v,-1), (x-\overline{x}, y - f(\overline{x}))\rangle \le 0 \}. \end{aligned}$$
(4)

The global nature of this relation leads to the following necessary and sufficient condition for unconstrained global optimality of \(\overline{x}\):

$$\begin{aligned} 0 \in \partial f(\overline{x}) \Leftrightarrow f(x) \ge f(\overline{x}), \forall x\in \mathbb {R}^n. \end{aligned}$$
(5)

For the problem of minimizing a convex function f(x) subject to \(x \in S, \) where \(S \subset \mathbb {R}^n\) is a convex, closed set, the following necessary and sufficient condition for global optimality was formulated by Rockafellar [37] and Pshenichnyi [35]: \(\overline{x} \in S\) is a global optimal solution to this problem if and only if there exists a subgradient \(v \in \partial f(\overline{x})\) such that

$$\begin{aligned} \langle v, x - \overline{x} \rangle \ge 0, \forall x\in S. \end{aligned}$$
(6)

Subgradients in convex analysis have found important applications in optimization (see e.g. [16, 37, 41]). In the convex case, the subdifferential set \(\partial f(\overline{x})\) can be characterized via the directional derivatives of f at \(\overline{x}:\)

$$\begin{aligned} \partial f(\overline{x})= & {} \{v \in \mathbb {R}^n: f^{\prime }(\overline{x};h) \ge \langle v,h \rangle \text{ for } \text{ all } h \in \mathbb {R}^n \}, \end{aligned}$$
(7)
$$\begin{aligned} f^{\prime }(\overline{x};h)= & {} \max \{\langle v,h \rangle : v \in \partial f(\overline{x}) \} \text{ for } \text{ all } h \in \mathbb {R}^n. \end{aligned}$$
(8)
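For instance, for \(f(x)=|x|\) on \(\mathbb {R}\) and \(\overline{x}=0\) one has \(\partial f(0)=[-1,1]\) by (3) and \(f^{\prime }(0;h)=|h|\) by (2), and the relations (7) and (8) are easily verified: \(|h| = \max \{vh: v \in [-1,1]\}\) for every \(h \in \mathbb {R}.\)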

A major contribution to nonsmooth and nonconvex analysis was made by F.H. Clarke. In [10], Clarke introduced a generalized directional derivative concept \(f^{\circ }\) and showed how the definition of the subdifferential \(\partial f\) can be extended to arbitrary lower semicontinuous, locally Lipschitz functions defined on Banach spaces. Clarke introduced the notion of the generalized subdifferential \(\partial ^{\circ }f(\overline{x})\) as the set of subgradient vectors \(v \in \mathbb {R}^n\) with

$$\begin{aligned} \partial ^{\circ }f(\overline{x}) = \{v \in \mathbb {R}^n: f^{\circ }(\overline{x};h) \ge \langle v,h \rangle , \forall h \in \mathbb {R}^n \}. \end{aligned}$$

The main property of the Clarke stationary points is given in the following theorem.

Theorem 1

[12, Proposition 2.3.2, p. 38] If f attains a local minimum or maximum at \(\overline{x}\) then \(0 \in \partial ^{\circ }f(\overline{x}).\)

Clarke introduced the regularity notion which plays an important role in nonsmooth analysis.

Definition 1

[12, Definition 2.3.4, p. 39] f is said to be regular at x provided that the classical directional derivative \(f^{\prime }(x;h)\) exists and \(f^{\circ }(x;h) = f^{\prime }(x;h) \) for all h.

The regularity property for Clarke's directional derivative was proved by Clarke under the convexity condition (see [12, Proposition 2.3.6 (b), p. 40]). On the other hand, although Clarke extended the subgradient notion to Lipschitz continuous functions, there was nothing analogous for the case of a general f. T. Rockafellar made a significant contribution toward filling this gap by introducing the subderivative function \(df(\overline{x};\cdot )\) [38, 40]. By using this notion, he established a new regularity condition for Clarke's directional derivative in the Lipschitzian case (see Theorem 6 in Sect. 4). The extensions of the derivative notion given by Clarke and Rockafellar have contributed enormously to nonsmooth analysis. However, even under the regularity conditions, the stationary points \(\overline{x}\) with \(f^{\circ }(\overline{x};h) \ge f^{\circ }(\overline{x};0) = 0\) or \(\text {d}f(\overline{x};h) \ge \text {d}f(\overline{x};0) =0 \) for all h, characterize only local extremum points of f.

The analysis given above demonstrates that, for an optimization problem with nonconvex and nondifferentiable functions, formulating conditions that guarantee the existence of supporting surfaces similar to (1) and/or (4), conditions for global optimality similar to (5) and/or (6), or characterization relations similar to (7) and/or (8), is not an easy task and requires additional assumptions and new approaches. The characterization of a global minimum by using derivatives and/or generalized derivatives remains one of the main problems in mathematical programming. It can be expected that such a characterization may also help to develop solution methods for escaping from local minima. One of the main purposes of this paper is to study tools which allow one to analyze these problems.

Two such tools studied in this paper are the radial epiderivative and the weak subdifferential concepts, both introduced earlier by R. Kasimbeyli.

R. Kasimbeyli introduced the concept of the weak subdifferential \(\partial ^wf\) in his dissertation [20] (see also [4, 5, 13,14,15]), as the set of weak subgradients \((v,c) \in \mathbb {R}^n \times \mathbb {R}_+ \) satisfying

$$\begin{aligned} f(x)\ge f(\overline{x})+\langle v,x-\overline{x} \rangle -c \Vert x-\overline{x}\Vert , \quad \forall x \in \mathbb {R}^n. \end{aligned}$$
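As a simple illustration, consider \(f(x)=-\Vert x\Vert \) and \(\overline{x}=0_{\mathbb {R}^n}.\) The classical subdifferential (3) of f at this point is empty, whereas \((0,c) \in \partial ^wf(0)\) for every \(c \ge 1,\) since \(f(x)-f(0)=-\Vert x\Vert \ge \langle 0,x \rangle -c \Vert x\Vert \) holds for all \(x \in \mathbb {R}^n\) whenever \(c \ge 1.\)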

The existence of a weak subgradient at \(\overline{x}\) corresponds to the existence of a conical supporting surface to \({\text {epi}}f\) at \((\overline{x},f(\overline{x}))\) and in this way allows one to handle applications not fitting within the domain of convexity. By using the conical supporting philosophy developed in [20], Kasimbeyli extended the Hahn-Banach separation theorem to the nonconvex case [26, 27], which has played an important role in analyzing nonconvex optimization problems and developing solution methods for them [13, 14, 21, 28, 29]. With the help of the weak subgradients, the global optimality condition (5) was extended to the nonconvex case (see [14, Remark 2.2]):

$$\begin{aligned} (0,0) \in \partial ^w f(\overline{x}) \Leftrightarrow f(x) \ge f(\overline{x}), \forall x\in \mathbb {R}^n. \end{aligned}$$
(9)

Recently, Dinc Yalcin and Kasimbeyli developed a new weak-subgradient-based global solution method for nonconvex box-constrained optimization problems in [14], where a method for approximately computing weak subgradients via directional derivatives was also suggested.

The radial epiderivative concept was first proposed by F. Flores-Bazan in [18] (see also [19]). In this paper, we use the definition of this concept given by Kasimbeyli in [25], in a slightly different setting, for both set-valued maps and real-valued functions. By using the radial epiderivative \(f^r(\overline{x};h)\) of a function \(f: \mathbb {R}^n \rightarrow \overline{\mathbb {R}}\) at a point \( \overline{x}\) in a direction \(h\in \mathbb {R}^n,\) a necessary and sufficient condition for a global minimum \(\overline{x}\) of a real-valued nonconvex function f is given in [18] and [25]:

$$\begin{aligned} f^r(\overline{x};h) \ge f^r(\overline{x};0) = 0, \forall h\in \mathbb {R}^n \Leftrightarrow f(x) \ge f(\overline{x}), \forall x\in \mathbb {R}^n. \end{aligned}$$
(10)

In this paper, this condition is used to establish a descent direction for a nonconvex function.
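As a simple illustration of (10), consider \(f(x)=x^2\) on \(\mathbb {R}\) and \(\overline{x}=0:\) the radial cone of \(epi f\) at the origin is the closed upper half-plane, so \(f^r(0;h)=0 \ge 0\) for every h, in accordance with the fact that 0 is the global minimizer. For \(f(x)=-|x|\) at \(\overline{x}=0,\) on the other hand, \(epi f\) is itself a cone with vertex at the origin, \(f^r(0;h)=-|h|<0\) for \(h \ne 0,\) and indeed 0 is not a global minimizer.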

An extension of the optimality condition (6) to the nonconvex case was established by Kasimbeyli and Mammadov in terms of weak subgradients [29, Theorem 4]. They also proved, under some mild conditions, that the radial epiderivative and the directional derivative can be represented as support functions of the weak subdifferential set; in this way, characterization relations similar to (7) and/or (8) were established in the nonconvex case by using directional derivatives, radial epiderivatives and weak subdifferentials [28, Theorems 4.5 and 4.6].

Although all the above-mentioned theorems and properties were established by using the definition of the radial epiderivative given in (11), it would be interesting to have a definition of this concept formulated in terms of the conventional limit relation of the Newton quotient \( \frac{f(\overline{x}+ tu)- f(\overline{x})}{t}. \) In this paper, we give such a formulation for the radial epiderivative and prove that \(f^r(\overline{x};\cdot )\) is a lower semicontinuous and lower Lipschitz function. We study the relations between the radial epiderivative and the Clarke directional derivative, the subderivative and the directional derivative, and establish new regularity conditions. All these relations allow one to connect the contributions made by the radial epiderivative concept with the vast body of nonsmooth analysis built on the classical directional derivative, the Clarke directional derivative and Rockafellar's subderivative.

All the above-mentioned properties of the radial epiderivative concept and the contributions made by this concept to the theory of nonsmooth and nonconvex analysis make it tempting to answer the question of how the radial epiderivative can be computed. In this paper, we propose two methodologies for approximately computing radial epiderivatives. One of them is based on a computational formula proposed in terms of the weak subgradients. In addition, we formulate an iterative algorithm to compute the radial epiderivative and prove that this algorithm terminates in a finite number of iterations for a certain class of functions.

The paper presents a comprehensive analysis of the proved theorems and established properties by using illustrative examples.

The rest of the paper is organized as follows: The main definitions are given in Sect. 2. Section 3 presents a new formulation and some properties of the radial epiderivative. In this section, we establish a class of radially epidifferentiable functions. The relations between the radial epiderivative and the directional derivative, Clarke's directional derivative and Rockafellar's subderivative are presented in Sect. 4. Section 5 presents two methodologies for approximately computing radial epiderivatives. Section 6 is devoted to the characterization of a global minimum for nonconvex functions. In this section, we formulate and prove a theorem on a global descent direction. Finally, Sect. 7 draws some conclusions from the paper.

2 Preliminaries

We begin this section by first recalling the definitions and some important properties of the main concepts used in this paper.

Definition 2

Let S be a nonempty subset of a real normed space \((\mathbb {X},\Vert \cdot \Vert )\) and \(\overline{x} \in S\) be a given element. The closed radial cone \(R(S; \overline{x})\) of S at \(\overline{x}\) is the set of all \(w \in \mathbb {X} \) such that there are sequences \(\lambda _n > 0\) and \((x_n)_{n \in \mathbb {N}} \subset S\) with \(\lim _{n \rightarrow +\infty } \lambda _n (x_n- \overline{x}) = w\). In other words,

$$\begin{aligned} R(S; \overline{x})=cl(cone(S-\overline{x})), \end{aligned}$$

where cone denotes the conic hull of a set; that is, \(cone(S-\overline{x})\) is the smallest cone containing \(S-\overline{x}.\)
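For example, for \(S=\{(x_1,x_2) \in \mathbb {R}^2: x_2 \ge x_1^2\}\) and \(\overline{x}=(0,0)\) one obtains \(R(S;\overline{x})=\{(x_1,x_2) \in \mathbb {R}^2: x_2 \ge 0\},\) the closed upper half-plane: every \(w=(w_1,w_2)\) with \(w_2>0\) can be written as \(w=\lambda s\) with \(s=(w_1/\lambda ,w_2/\lambda ) \in S\) for any \(\lambda >0\) satisfying \(\lambda \ge w_1^2/w_2,\) and the closure adds the horizontal axis.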

Now, we recall the definitions of the generalized derivatives which will be investigated in this paper. We begin with Clarke's directional derivative.

Definition 3

[12] Let \(\mathbb {X}\) be a Banach space, let \(f: \mathbb {X} \rightarrow \mathbb {R} \) be a locally Lipschitz function and let \(\overline{x} \in \mathbb {X}\) and \(h \in \mathbb {X}\) be given elements. The Clarke directional derivative \(f^{\circ }(\overline{x};h)\) of f at \(\overline{x}\) in the direction h is defined by

$$\begin{aligned} f^{\circ }(\overline{x};h)=\limsup _{t \downarrow 0, y \rightarrow \overline{x}} \frac{f(y+th) - f(y)}{t}. \end{aligned}$$

It was proved that the Clarke directional derivative \(f^{\circ }(\overline{x};\cdot )\) is an upper semicontinuous, positively homogeneous, sublinear and Lipschitz function [12, Proposition 2.1.1, p. 25]. The convexity of \(f^{\circ }(\overline{x};\cdot )\) was proved by Rockafellar [39, Theorem 1].

Now, we recall the definition of Rockafellar’s subderivative [41, Definition 8.1, p.299] (see also [38, 40]).

Definition 4

For a function \(f: \mathbb {R}^n \rightarrow \mathbb {R} \) and a point \(\overline{x} \in \mathbb {R}^n \) with \(f(\overline{x})\) finite, the subderivative \(df(\overline{x};h)\) of the function f at \(\overline{x}\) in a direction \(h \in \mathbb {R}^n \) is defined by

$$\begin{aligned} \text {d}f(\overline{x};h)= \liminf _{t\downarrow 0, u \rightarrow h} \frac{f(\overline{x}+tu) - f(\overline{x})}{t}. \end{aligned}$$
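To contrast the notions recalled so far, consider \(f(x)=-|x|\) on \(\mathbb {R}\) at \(\overline{x}=0.\) Here the directional derivative and the subderivative coincide, \(f^{\prime }(0;h)=\text {d}f(0;h)=-|h|,\) while the Clarke directional derivative equals \(f^{\circ }(0;h)=|h|\) (for \(h>0\) choose \(y<0\) with \(t>0\) small enough that \(y+th<0\) in Definition 3, and analogously for \(h<0\)). This simple example already shows that these generalized derivatives may differ at a given point.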

Remark 1

The definition of the subderivative given in Definition 4 is more specifically the lower subderivative of f given in [40], where the corresponding upper subderivative is defined with “lim sup” in place of “lim inf”. Moreover, T. Rockafellar used the term “the subderivative of f at \(\overline{x}\) for h” for the subderivative \(df(\overline{x};h)\) [41, Definition 8.1, p.299].

Definition 5

The radial epiderivative \(f^r(\overline{x};h)\) of a function \(f: \mathbb {R}^n \rightarrow \overline{\mathbb {R}}\) at a point \( \overline{x}\) in a direction \(h\in \mathbb {R}^n\) is defined through the radial cone \(R(epi f; (\overline{x},f(\overline{x})))\) of the epigraph \(epi f\) of f at \((\overline{x},f(\overline{x})),\) by the relation

$$\begin{aligned} epi f^r(\overline{x};\cdot ) = R(epi f; (\overline{x},f(\overline{x}))). \end{aligned}$$
(11)

In the case when the radial epiderivative \(f^{r}(\overline{x};h)\) exists and is finite for every h, we will say that f is radially epidifferentiable at \(\overline{x}.\)

The radial epiderivative is probably the first derivative concept which extends the global affine support relations (1) and (4) to the nonconvex case by using a global conical supporting surface to the epigraph of the function under consideration.

Remark 2

In the original definition of the radial epiderivative given in [25], the notation \(D_r F( \overline{x}, \overline{y})(\cdot )\) was used for this notion, which was defined for a set-valued map F, where \(\overline{y} \in F(\overline{x})\). Since in this paper we consider real-valued functions, we use the notation \(f^{r}(\overline{x};\cdot ).\) This notation is similar to those used for the directional derivative and for Clarke's and Rockafellar's derivatives; in this way we aim to use unified notation for all the generalized derivatives considered in this paper.

Now we recall the existence condition for the radial epiderivative proved in [25].

Theorem 2

[25, Theorem 3.2] Let \((\mathbb {X}, \Vert .\Vert _\mathbb {X})\) be a real normed space, \(\mathbb {S}\) be a non-empty subset of \(\mathbb {X}, \quad \overline{x} \in \mathbb {S}\) and let \(f:\mathbb {S} \rightarrow \mathbb {R} \cup \{+\infty \}\) be a proper function. Assume that there exist functions \(g_1,g_2: \mathbb {X} \rightarrow \mathbb {R}\) with \(epi(g_1) \subset R(epi(f); (\overline{x}, f(\overline{x}))) \subset epi(g_2).\) Then, the radial epiderivative \(f^r(\overline{x};\cdot )\) is given as

$$\begin{aligned} f^r(\overline{x};h)= \inf \{ y\in \mathbb {R}: (h,y) \in R(epi(f);(\overline{x}, f(\overline{x}))) \}, \forall h \in \mathbb {X}. \end{aligned}$$
(12)

Lemma 1

[28, Lemma 3.7] Let \(f:\mathbb {X}\rightarrow \mathbb {R}\) be a single-valued function having radial epiderivative \(f^r(\overline{x};\cdot )\) given by (12). Then, the radial epiderivative \(f^r(\overline{x};\cdot )\) is a positively homogeneous function.

The generalized derivatives given by Clarke and by Rockafellar were used to define the corresponding generalized subgradients. These generalized subgradients were defined as the normal vectors of supporting hyperplanes to the epigraphs of the corresponding derivatives. In contrast to these concepts, the classical subgradient of a function in convex analysis, introduced by Rockafellar, was defined as the normal vector of a supporting hyperplane to the epigraph of the function under consideration. It is remarkable that in nonconvex analysis this property is preserved, probably only, in the definition of the weak subgradient.

Remark 3

It follows from the definition of the weak subgradient given in Sect. 1 that the pair \((v,c) \in \mathbb {R}^n \times \mathbb {R}_+\) is a weak subgradient of f at \(\overline{x} \in \mathbb {R}^n\) if there exists a continuous (superlinear) concave function

$$\begin{aligned} g(x)= f(\overline{x})+\langle v,x-\overline{x} \rangle -c \Vert x-\overline{x}\Vert , \end{aligned}$$

such that \(g(x) \le f(x),\) for all \(x \in \mathbb {R}^n\) and \(g(\overline{x}) = f(\overline{x}).\) Then clearly the set \({\text {hypo}}(g)=\{(x,\alpha ) \in \mathbb {R}^n \times \mathbb {R}: g(x) \ge \alpha \}\) is a closed convex cone in \(\mathbb {R}^n \times \mathbb {R}\) with vertex at \((\overline{x}, f(\overline{x})),\) and

$$\begin{aligned} {\text {epi}}(f) \subset {\text {epi}}(g), \quad cl({\text {epi}}(f)) \cap graph(g) \ne \emptyset . \end{aligned}$$

The above analysis shows that the class of weakly subdifferentiable functions is essentially larger than the class of subdifferentiable functions, see e.g. [4, Theorems 3.1, 3.2, Corollary 3.1], [5, Theorem 1], [28, Lemma 2.8], [13, Theorem 3], [14, Theorem 2.3]. The following theorem describes a class of weakly subdifferentiable functions, which will be used in the next sections.

Theorem 3

[28, Lemma 2.7] Let \(\mathbb {S}\) be a nonempty subset of a real normed space \((\mathbb {X},\Vert \cdot \Vert )\) and let \(f: \mathbb {S} \rightarrow (- \infty , + \infty ]\) be a given function. If f is a positively homogeneous function bounded from below on some neighborhood of \(0_{\mathbb {X}},\) then f is weakly subdifferentiable at \(0_{\mathbb {X}}.\)

3 Properties of Radial Epiderivatives

This section presents new properties of the radial epiderivative and some illustrative examples. We begin with the lower semicontinuity property for the radially epidifferentiable functions.

Theorem 4

Let \((\mathbb {X}, \Vert .\Vert _\mathbb {X})\) be a real normed space, \(\mathbb {S}\) be a non-empty subset of \( \mathbb {X},\) and let \(f:\mathbb {S} \rightarrow \mathbb {R} \cup \{+\infty \}\) be a proper function. If f is radially epidifferentiable at \(\overline{x} \in \mathbb {X}\), then f is lower semicontinuous at \(\overline{x}.\)

Proof

Let \(f:\mathbb {S} \rightarrow \mathbb {R}\) be a proper function, radially epidifferentiable at \(\overline{x}.\) We need to show that \(\liminf _{ x \rightarrow \overline{x}} f(x) \ge f(\overline{x}).\) Assume to the contrary that this is not true: \(\liminf _{ x \rightarrow \overline{x}} f(x) < f(\overline{x}).\) Then, the radial cone \(R(epi(f);(\overline{x}, f(\overline{x})))\) must contain the vertical line passing through the points \((\overline{x}, f(\overline{x})) \) and \((\overline{x}, \liminf _{ x \rightarrow \overline{x}} f(x))\) with

$$\begin{aligned} \inf \{ y\in \mathbb {R}: (0,y) \in R(epi(f);(\overline{x}, f(\overline{x}))) \} = -\infty \end{aligned}$$

which contradicts the hypothesis that f is radially epidifferentiable at \(\overline{x},\) and hence the proof is completed. \(\square \)

Remark 4

Note that the converse of Theorem 4 is not true. For example, \(f(x) = -\sqrt{|x |}\) is (lower semi)continuous at \(x=0\) but not radially epidifferentiable there.

The following proposition gives an equivalent representation for the radial epiderivative via a limit concept. Note that a similar expression was also given by F. Flores-Bazan in terms of the lower epiderivative (see [19, Corollary 3.4]).

Proposition 1

Let \((\mathbb {X}, \Vert .\Vert _\mathbb {X})\) be a real normed space and let \(\overline{x} \in \mathbb {X}\) be a given element. Assume that the function \(f:\mathbb {X} \rightarrow \mathbb {R} \) is radially epidifferentiable at \(\overline{x}.\) Then, the radial epiderivative \(f^{r}(\overline{x};\cdot )\) can equivalently be defined as follows:

$$\begin{aligned} f^{r}(\overline{x};h) = \inf _{t > 0} \liminf _{ u \rightarrow h} \frac{f(\overline{x}+ tu)- f(\overline{x})}{t} \end{aligned}$$
(13)

for all \(h \in \mathbb {X}.\)

Proof

Let \(\overline{x} \in \mathbb {X}\) and \(\overline{y} = f(\overline{x}).\) By the definition of the radial epiderivative, we have:

$$\begin{aligned}{} & {} R({\text {epi}}(f);(\overline{x}, \overline{y}))= {\text {epi}}(f^{r}(\overline{x};\cdot )) \\= & {} \{ (x,y) \in \mathbb {X} \times \mathbb {R}: \exists \lambda _n> 0, (x_n,y_n) \in {\text {epi}}(f), \lim _{n \rightarrow \infty } \lambda _n ((x_n, y_n)-(\overline{x}, \overline{y}))=(x,y) \} \\= & {} \{(x,y) \in \mathbb {X} \times \mathbb {R}: \exists \lambda _n > 0, (x_n,y_n) \in {\text {epi}}(f),\\{} & {} \lim _{n \rightarrow \infty } \lambda _n (x_n-\overline{x})=x, \lim _{n \rightarrow \infty } \lambda _n (y_n- \overline{y})=y \}. \end{aligned}$$

By Theorem 2,

$$\begin{aligned} f^{r}(\overline{x};h) = \inf \{y: \lambda _n > 0, (x_n,y_n) \in {\text {epi}}(f),&h&=\lim _{n \rightarrow \infty } \lambda _n (x_n-\overline{x}), \qquad \qquad \\&y&=\lim _{n \rightarrow \infty } \lambda _n (y_n- \overline{y}) \}. \end{aligned}$$

The last equality can be written in the following form:

$$\begin{aligned} f^{r}(\overline{x};h) = \inf \{ y: \lambda _n > 0, h =\lim _{n \rightarrow \infty } \lambda _n (x_n-\overline{x}), y=\lim _{n \rightarrow \infty } \lambda _n (f(x_n)- \overline{y}) \}. \end{aligned}$$

By writing \(x_n= (x_n -\overline{x}) + \overline{x},\) we deduce:

$$\begin{aligned} f^{r}(\overline{x};h) = \inf \Big \{&y&: \lambda _n > 0, \\&h&=\lim _{n \rightarrow \infty } \lambda _n (x_n-\overline{x}), y=\lim _{n \rightarrow \infty } \lambda _n (f(x_n -\overline{x} + \overline{x})- f(\overline{x})) \Big \} \end{aligned}$$

or

$$\begin{aligned} f^{r}(\overline{x};h) = \inf \Big \{&y&: \lambda _n > 0, \\&h&=\lim _{n \rightarrow \infty } \lambda _n (x_n-\overline{x}), y=\lim _{n \rightarrow \infty } \frac{f(\frac{\lambda _n (x_n -\overline{x} )}{\lambda _n} + \overline{x} )- f(\overline{x})}{1 / \lambda _n} \Big \}. \end{aligned}$$

Letting \(u_n= \lambda _n (x_n -\overline{x}),\) for every \(n=1,2, \ldots ,\) we obtain:

$$\begin{aligned} f^{r}(\overline{x};h)= \inf \Big \{y: \lambda _n > 0, \text{ for } \text{ every } n=1,2, \ldots , y=\liminf _{n \rightarrow \infty , u_n \rightarrow h} \frac{f(\overline{x} +\frac{u_n}{\lambda _n} )- f(\overline{x})}{1 / \lambda _n} \Big \}. \end{aligned}$$

Now by setting \(t_n = 1 / \lambda _n,\) for every \(n=1,2, \ldots ,\) we can rewrite the last relation as follows:

$$\begin{aligned} f^{r}(\overline{x};h) = \inf \Big \{ y: t_n > 0, \text{ for } \text{ every } n=1,2, \ldots , y=\liminf _{u \rightarrow h} \frac{f(\overline{x} +t_n u)- f(\overline{x})}{t_n}\Big \}, \end{aligned}$$

which completes the proof. \(\square \)

Theorem 5

Let \((\mathbb {X}, \Vert .\Vert _\mathbb {X})\) be a real normed space and \(f:\mathbb {X} \rightarrow \mathbb {R}\) be a proper function finite at \(x=\overline{x}\). If f is lower Lipschitz at \(\overline{x},\) that is, there exists a positive constant L such that

$$\begin{aligned} f(x)-f(\overline{x}) \ge -L \Vert x-\overline{x}\Vert \quad \text{ for } \text{ all } x \in \mathbb {X}, \end{aligned}$$
(14)

then f is radially epidifferentiable at \(\overline{x}.\) If \(\mathbb {X} =\mathbb {R}^n\), then this condition is also necessary.

Proof

Assume that f is lower Lipschitz at \(\overline{x}:\) there exists a positive constant L such that (14) is satisfied for all \(x \in \mathbb {X}.\) Take an arbitrary element \(h \in \mathbb {X}\) and evaluate the expression

$$\begin{aligned} \inf _{t > 0} \liminf _{ u \rightarrow h} \frac{f(\overline{x}+ tu)- f(\overline{x})}{t}. \end{aligned}$$
(15)

By (14) we obtain:

$$\begin{aligned} \inf _{t> 0} \liminf _{ u \rightarrow h} \frac{f(\overline{x}+ tu)- f(\overline{x})}{t} \ge \inf _{t > 0} \liminf _{ u \rightarrow h} \frac{-tL\Vert u\Vert }{t} =-L\Vert h\Vert , \end{aligned}$$

which means that the expression (15) has a finite value, and hence we deduce by (13) that f has a finite radial epiderivative at \(\overline{x}\) in every direction h.

Now assume that \(\mathbb {X} =\mathbb {R}^n\) and that f has a finite radial epiderivative at \(\overline{x}\) in every direction \(h \in \mathbb {R}^n\). We show that there exists a positive constant L such that (14) is satisfied for every \(x \in \mathbb {R}^n.\) Assume to the contrary that this is not true. Then there exist sequences \(\{x_n\} \subset \mathbb {R}^n\) and \(\{L_n\}\) with \(L_n \rightarrow +\infty \) and \(\Vert x_n-\overline{x}\Vert >0\) such that

$$\begin{aligned} f(x_n)-f(\overline{x}) < -L_n \Vert x_n-\overline{x}\Vert \quad \text{ for } \text{ all } n =1,2, \ldots . \end{aligned}$$

Let \(u_n = \frac{x_n-\overline{x}}{\Vert x_n-\overline{x}\Vert }.\) Since the unit sphere in \(\mathbb {R}^n\) is compact, we may assume without loss of generality (passing to a subsequence if necessary) that \(u_n \rightarrow h\) with \(\Vert h\Vert =1;\) let \(t_n = \Vert x_n-\overline{x}\Vert >0\) for all n. Then, we obtain:

$$\begin{aligned} \lim _{ n \rightarrow \infty } \frac{f(\overline{x}+ t_nu_n)- f(\overline{x})}{t_n} < \lim _{ n \rightarrow \infty } \frac{-t_n L_n\Vert u_n\Vert }{t_n} =-\infty , \end{aligned}$$

which contradicts the assumption that f is radially epidifferentiable at \(\overline{x}.\) \(\square \)
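Note that, in particular, every finite-valued convex function \(f:\mathbb {R}^n \rightarrow \mathbb {R}\) is lower Lipschitz at every point \(\overline{x}\) (one can take \(L=\Vert v\Vert \) for any subgradient \(v \in \partial f(\overline{x})\) in (3)), and hence, by Theorem 5, it is radially epidifferentiable everywhere. More generally, every function possessing a weak subgradient \((v,c) \in \partial ^wf(\overline{x})\) is lower Lipschitz at \(\overline{x}\) with \(L=\Vert v\Vert +c.\)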

Remark 5

The lower Lipschitz property is called calmness, or being calm from below, in [41, Chapter 8, Section F, p.322].

We now consider some examples to demonstrate the properties of the radial epiderivative.

Example 1

Let

$$\begin{aligned} f_1(x) = \left\{ \begin{array}{ll} -x+3 &{} \text{ if } x < 1, \\ x &{} \text{ if } x \ge 1. \end{array} \right. \end{aligned}$$

The function \(f_1\) is defined and lower semicontinuous everywhere on \(\mathbb {R}.\) This function is not continuous at \(\overline{x} =1\) and does not satisfy the Lipschitz condition there. It is, however, lower Lipschitz at \(\overline{x} =1.\)

It follows from Theorems 2 and 5 that \(f_1\) has a radial epiderivative \(f^r_1(\overline{x};\cdot )\) at every point \(\overline{x} \in \mathbb {R}.\) By using Proposition 1, we obtain (see also Fig. 1):

$$\begin{aligned} f^r_1(\overline{x};h) = \left\{ \begin{array}{llll} -h &{} \text{ if } \overline{x} \le 1, h< 0, \\ \frac{(\overline{x} - 2)h}{1 - \overline{x}} &{} \text{ if } \overline{x} < 1, h> 0, \\ h &{} \text{ if } \overline{x} = 1, h>0, \\ h &{} \text{ if } \overline{x} > 1, h\in \mathbb {R}. \\ \end{array} \right. \end{aligned}$$
Fig. 1

The graph of radial epiderivative of function \(f_1\) at \(\overline{x}=0\) (left) and \(\overline{x}=1\) (right)
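The closed-form expressions above can also be checked numerically. The following short script is our own illustrative sketch (the routine name, the grids and the tolerances are ad hoc choices and not part of the results of this paper); it approximates the representation (13) by minimizing the difference quotients over a finite grid of values \(t>0\) and over a small neighborhood of the direction h.

```python
import numpy as np

def radial_epiderivative(f, x_bar, h, t_grid=None, u_radius=1e-3, n_u=201):
    """Crude approximation of formula (13):
       f^r(x_bar; h) = inf_{t>0} liminf_{u -> h} [f(x_bar + t*u) - f(x_bar)] / t.
    The liminf is replaced by a minimum over a small grid of u around h, and the
    infimum over t by a minimum over a finite (log-spaced) grid of t values."""
    if t_grid is None:
        t_grid = np.logspace(-4, 4, 4000)   # covers both small and large t
    us = np.linspace(h - u_radius, h + u_radius, n_u)
    f_bar = f(x_bar)
    best = np.inf
    for t in t_grid:
        quotients = (np.array([f(x_bar + t * u) for u in us]) - f_bar) / t
        best = min(best, quotients.min())
    return best

def f1(x):                                   # the function f_1 of Example 1
    return -x + 3.0 if x < 1.0 else x

print(radial_epiderivative(f1, 0.0,  1.0))   # approx -2, in line with (x_bar - 2)h/(1 - x_bar)
print(radial_epiderivative(f1, 1.0, -1.0))   # approx  1, in line with -h
print(radial_epiderivative(f1, 1.0,  1.0))   # approx  1, in line with  h
```

The same routine can be used to check, for instance, the values given by formula (16) in Example 3 below.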

Example 2

Let

$$\begin{aligned} f_2(x) = \left\{ \begin{array}{ll} -x+3 &{} \text{ if } x \le 1, \\ x &{} \text{ if } x > 1. \end{array} \right. \end{aligned}$$

Clearly, it follows from Theorem 2 that \(f_2\) has a radial epiderivative \(f^r_2(\overline{x};h) = f^r_1(\overline{x};h)\) at every point \(\overline{x} \ne 1, \) but at \(\overline{x} =1,\) where the function is not lower semicontinuous (and not lower Lipschitz), we have (see also Theorems 4 and 5):

$$\begin{aligned} f^r_2(1;h) = \left\{ \begin{array}{ll} -h &{} \text{ if } h \le 0, \\ -\infty &{} \text{ if } h > 0. \end{array} \right. \end{aligned}$$

Example 3

Let

$$\begin{aligned} f_3(x) = \left\{ \begin{array}{ll} 4|x+1 |&{} \text{ if } x \le 0, \\ |x-1 |+ 3 &{} \text{ if } x > 0. \end{array} \right. \end{aligned}$$

It follows from Theorem 2 that \(f_3\) has a radial epiderivative \(f^r_3(\overline{x};\cdot )\) at every point \(\overline{x} \in \mathbb {R}.\) Again by applying Proposition 1, we obtain (see Fig. 2 for illustrations):

Fig. 2

The graphs of the radial epiderivatives of function \(f_3\) at \(\overline{x}=-2\), \(\overline{x}=-1\), \(\overline{x}=-3/4\), \(\overline{x}=-1/3\), \(\overline{x}=0\), \(\overline{x}=1/2\), \(\overline{x}=1\), and \(\overline{x}=2\) (from left to right)

$$\begin{aligned} f^r_3(\overline{x};h) = \left\{ \begin{array}{lllllllllll} -4h &{} \text{ if } \overline{x}< -1, h \in \mathbb {R}, \\ -4h &{} \text{ if } \overline{x} = -1, h \le 0, \\ h &{} \text{ if } \overline{x} = -1, h> 0, \\ h &{} \text{ if } -1< \overline{x}< -\frac{2}{3}, h>0,\\ -\frac{(1+4\overline{x})h}{1-\overline{x}} &{} \text{ if } -\frac{2}{3} \le \overline{x}< 0, h> 0, \\ 4h &{} \text{ if } -1< \overline{x}< 0, h<0,\\ -h &{} \text{ if } \overline{x} = 0, h> 0, \\ 4h &{} \text{ if } \overline{x} = 0, h< 0, \\ \frac{(4-\overline{x})h}{\overline{x}+1} &{} \text{ if } 0< \overline{x}< 1, h< 0, \\ -h &{} \text{ if } 0< \overline{x}< 1, h> 0, \\ h &{} \text{ if } \overline{x} = 1, h> 0, \\ \frac{3h}{2} &{} \text{ if } \overline{x} = 1, h< 0, \\ h &{} \text{ if } \overline{x}> 1, h> 0, \\ \frac{(2+\overline{x})h}{1+\overline{x}} &{} \text{ if } \overline{x} > 1, h < 0. \\ \end{array} \right. \end{aligned}$$
(16)

4 Regularity Conditions

In this section, we investigate the relations between the radial epiderivative, the Clarke’s derivative, the Rockafellar’s subderivative and the (classical) directional derivative.

The following theorem, quoted from [41], gives a regularity condition in the nonconvex case and explains a relationship between the directional derivative, the Clarke directional derivative and Rockafellar's subderivative.

Theorem 6

[41, Theorem 9.16, p. 360] A function f that is finite on an open set \(O \subset \mathbb {R}^n\) is both strictly continuous (locally Lipschitz continuous) and regular on O if and only if for every \(x \in O\) and \(h \in \mathbb {R}^n\) the directional derivative \(f^{\prime }(x;h)\) exists, is finite, and depends upper semicontinuously on x for each fixed h. Then \(f^{\prime }(x;h)\) depends upper semicontinuously on x and h together and

$$\begin{aligned} f^{\prime }(x;h) = \lim _{t\downarrow 0, u \rightarrow h} \frac{f(x+tu) - f(x)}{t} = df(x;h) = f^{\circ }(x;h). \end{aligned}$$

It follows from the definitions of the directional derivative, the Clarke’s directional derivative, the subderivative and the radial epiderivative that

$$\begin{aligned} f^r(\overline{x};x-\overline{x}) \le \text {d}f(\overline{x};x-\overline{x}) \le f^{\prime } (\overline{x};x-\overline{x}) \le f^{\circ } (\overline{x};x-\overline{x}) \end{aligned}$$
(17)

for all x.

The following theorem provides a condition (different from that given in Theorem 6) under which a given nonconvex function becomes regular and equality holds in (17).

Theorem 7

Let \((\mathbb {X}, \Vert \cdot \Vert _\mathbb {X})\) be a real normed space, let \(\mathbb {S}\) be a nonempty subset of \(\mathbb {X}\) and let \(f:\mathbb {S} \rightarrow \mathbb {R}\) be a proper function. Assume that f has a finite directional derivative, a finite Clarke directional derivative and a finite subderivative at \(\overline{x} \in \mathbb {X}\) in every direction \(x-\overline{x}\) with arbitrary \(x \in \mathbb {X}.\) Then, f is radially epidifferentiable at \(\overline{x}\) and

$$\begin{aligned} f^r(\overline{x};x-\overline{x})= \text {d}f(\overline{x};x-\overline{x}) = f^{\prime } (\overline{x};x-\overline{x}) = f^{\circ } (\overline{x};x-\overline{x}) \end{aligned}$$
(18)

if and only if

$$\begin{aligned} f(x)-f(\overline{x})\ge f^{\circ } (\overline{x};x-\overline{x}) \quad \text{ for } \text{ all } x \in \mathbb {X}. \end{aligned}$$
(19)

Proof

Assume that f has a finite Clarke directional derivative at \(\overline{x} \in \mathbb {X}\) in every direction \(x-\overline{x}\) with arbitrary \(x \in \mathbb {X}\) and that (19) is satisfied. Since \(f^{\circ }(\overline{x};\cdot )\) is a Lipschitz function by [12, Proposition 2.1.1, p. 25], it is also lower Lipschitz. Then, it follows from (19) that f is lower Lipschitz at \(\overline{x} \in \mathbb {X},\) too. Hence by Theorem 5, f is radially epidifferentiable at \(\overline{x} \in \mathbb {X}\) in every direction \(x-\overline{x}\) with arbitrary \(x \in \mathbb {X}.\) By using the representation (13) for the radial epiderivative, we have:

$$\begin{aligned} f^{r}(\overline{x};h) = \inf _{t> 0} \liminf _{ u \rightarrow h} \frac{f(\overline{x}+ tu)- f(\overline{x})}{t}\ge \inf _{t > 0} \liminf _{ u \rightarrow h} \frac{f^{\circ }(\overline{x}; tu)}{t} =f^{\circ }(\overline{x}; h) \end{aligned}$$

for all \(h \in \mathbb {X},\) where the last equality above is obtained due to the positive homogeneity and the Lipschitz continuity of \(f^{\circ }(\overline{x}; \cdot )\) [12, Proposition 2.1.1 (a),(b), p. 25]. Then, the claim follows from (17).

Now assume that (18) is satisfied for all \(x \in \mathbb {X}.\) Then, the claim follows from the inequality

$$\begin{aligned} f(x)-f(\overline{x}) \ge f^{r}(\overline{x};x-\overline{x}) \end{aligned}$$

for all \(x \in \mathbb {X},\) which is satisfied for the radial epiderivative due to its definition (see (12)). \(\square \)

It follows from the definitions of the generalized derivatives mentioned in Theorem 7 that there may be cases where the directional derivative and the subderivative exist but the Clarke derivative does not. Moreover, there may be cases where the subderivative exists but the directional derivative does not. The following corollaries, which are straightforward from Theorem 7, provide equality relations between the existing derivatives.

Corollary 1

Let \((\mathbb {X}, \Vert .\Vert _\mathbb {X})\) be a real normed space and let \(f:\mathbb {X} \rightarrow \mathbb {R}\) be a proper function. Assume that f has both a finite directional derivative and a finite subderivative at \(\overline{x} \in \mathbb {X}\) in every direction \(x-\overline{x}\) with arbitrary \(x \in \mathbb {X}.\) Then, f is radially epidifferentiable at \(\overline{x}\) and

$$\begin{aligned} f^r(\overline{x};x-\overline{x})= \text {d}f(\overline{x};x-\overline{x}) = f^{\prime } (\overline{x};x-\overline{x}) \end{aligned}$$
(20)

if and only if

$$\begin{aligned} f(x)-f(\overline{x})\ge f^{\prime } (\overline{x};x-\overline{x}) \quad \text{ for } \text{ all } x \in \mathbb {X}. \end{aligned}$$
(21)

Proof

By the hypothesis, we have:

$$\begin{aligned} \inf _{t> 0} \liminf _{ u \rightarrow h} \frac{f(\overline{x}+ tu)- f(\overline{x})}{t}\ge \inf _{t > 0} \liminf _{ u \rightarrow h} \frac{f^{\prime }(\overline{x}; tu)}{t} =f^{\prime }(\overline{x}; h). \end{aligned}$$

This means by the representation (13) of the radial epiderivative that f is radially epidifferentiable at \(\overline{x}.\) Then, (20) follows from (17). The proof of the second part is similar to that of Theorem 7. \(\square \)

Corollary 2

Let \((\mathbb {X}, \Vert .\Vert _\mathbb {X})\) be a real normed space, let \(\mathbb {S}\) be a nonempty subset of \(\mathbb {X}\) and let \(f:\mathbb {S} \rightarrow \mathbb {R}\) be a proper function. Assume that f has a finite subderivative at \(\overline{x} \in \mathbb {X}\) in every direction \(x-\overline{x}\) with arbitrary \(x \in \mathbb {X}.\) Then, f is radially epidifferentiable at \(\overline{x}\) and

$$\begin{aligned} f^r(\overline{x};x-\overline{x})= \text {d}f(\overline{x};x-\overline{x}) \end{aligned}$$

if and only if

$$\begin{aligned} f(x)-f(\overline{x})\ge \text {d}f(\overline{x};x-\overline{x}) \quad \text{ for } \text{ all } x \in \mathbb {X}. \end{aligned}$$
(22)

Proof

The proof is similar to the proof of Corollary 1. \(\square \)

Remark 6

The relation

$$\begin{aligned} f(x)-f(\overline{x})\ge f^r(\overline{x};x-\overline{x}) \quad \text{ for } \text{ all } x \in \mathbb {X}, \end{aligned}$$
(23)

which easily follows from the definition of the radial epiderivative, explains its basic property. Therefore, the conditions (19), (21) and (22), which establish equality between the different kinds of generalized derivatives considered in this paper, are not surprising. The class of convex functions is probably the first class of functions that comes to mind as satisfying these conditions. However, not only convex functions satisfy these conditions. As a simple example, we can consider \(f(x) = -|x |,\) for which the condition (21) is satisfied at \(\overline{x}=0, \) where \(f^r(0;h)= f^{\prime }(0;h), \forall h\in \mathbb {R}.\) Note also that for this function we have \(f^r(x;h) = f(h), \forall x,h\in \mathbb {R}.\) Note also that all DC functions f of the form \(f=f_1-f_2\), where \(-f_2\) is lower Lipschitz, are radially epidifferentiable. \(\Box \)

Next we present examples of nonconvex functions which illustrate different cases treated in the theorems presented above.

Example 4

Consider the function

$$\begin{aligned} f_1(x) = \left\{ \begin{array}{ll} -x+3 &{} \text{ if } x < 1, \\ x &{} \text{ if } x \ge 1 \end{array} \right. \end{aligned}$$

from Example 1, where it has been shown that \(f^{r}_1(1;h)=|h|.\) It is easy to see that

$$\begin{aligned} \text {d}f_1(1;h) =f^{\prime }_1(1;h) =f^{\circ }_1(1;h)= \left\{ \begin{array}{ll} +\infty &{} \text{ if } h < 0, \\ h &{} \text{ if } h>0. \end{array} \right. \end{aligned}$$

Note that this example nicely illustrates Theorems 6 and 7 for different choices of \(\overline{x}\) and h. \(\Box \)

The following example demonstrates the case when the subderivative exists, but both the directional derivative and the Clarke directional derivative do not exist. This example also demonstrates the case when the subderivative and the radial epiderivative are equal.

Example 5

Let

$$\begin{aligned} f_4(x) = \left\{ \begin{array}{ll} x\sin (\frac{1}{x}) &{} \text{ if } x \ne 0, \\ 0 &{} \text{ if } x =0 \end{array} \right. \end{aligned}$$

whose graph is depicted in Fig. 3. Then, the conditions of Corollary 2 are satisfied and

$$\begin{aligned} f^r_4 (0;h) = \text {d}f_4 (0;h)= \left\{ \begin{array}{ll} h &{} \text{ if } h < 0, \\ -h &{} \text{ if } h > 0. \end{array} \right. \end{aligned}$$

It is easy to show that both the directional derivative \(f^{\prime }_4 (0;h)\) and the Clarke directional derivative \(f^{\circ }_{4} (0;h)\) fail to exist.

Fig. 3

The graph of the radial epiderivative and the subderivative of \(f_4\) at \(\overline{x}=0.\)

Example 6

Consider the following function (see also [41, Fig 6-4, p.199]):

$$\begin{aligned} f_{5}(x) = \left\{ \begin{array}{ll} x \sin (\ln |x|) &{} \text{ if } x \ne 0, \\ 0 &{} \text{ if } x =0 \end{array} \right. \end{aligned}$$

Then

$$\begin{aligned} f^r_{5} (0;h) = \left\{ \begin{array}{ll} h &{} \text{ if } h < 0, \\ -h &{} \text{ if } h > 0. \end{array} \right. \end{aligned}$$

Note that the conditions of Corollary 2 are satisfied and hence \(f^r_{5} (0;h) = \text {d}f_{5} (0;h)=-|h |.\) On the other hand \(f^{\prime }_{5} (0;h) \) does not exist, but \(f^{\circ }_{5} (0;h) = |h |\) (See Fig. 4).

Fig. 4

The graph of radial epiderivative, subderivative and Clarke’s derivative of function \(f_5\) at \(\overline{x}=0\) in the interval \(x \in [-0.25,0.25] \) (left) and \(x \in [-1,1] \) (right)

Example 7

Consider the following function (see also [41, Exercise 8.8, p.304] and [12, Example 2.2.3, p.33]):

$$\begin{aligned} f_{6}(x) = \left\{ \begin{array}{ll} x^2\sin (\frac{1}{x}) &{} \text{ if } x \ne 0, \\ 0 &{} \text{ if } x =0 \end{array} \right. \end{aligned}$$

For \(\overline{x}=0\) we have \(f_{6}^{\prime }(0;h) = 0\) for all \(h \in \mathbb {R}. \) Since the derivative mapping \(\nabla f_{6}\) is discontinuous at \(\overline{x} =0,\) the conditions of Theorem 6 are not satisfied. In a similar way, for the subderivative we obtain that \(\text {d}f_{6} (0;h) = 0.\) On the other hand, the condition (21) of Corollary 1 is also not satisfied. For the radial epiderivative at \(\overline{x} =0,\) we have:

$$\begin{aligned} f^r_{6}(0;h) = \left\{ \begin{array}{ll} h &{} \text{ if } h < 0, \\ -\alpha h &{} \text{ if } h \ge 0 \end{array} \right. \end{aligned}$$

where \(-1<-\alpha = x_0^2\sin (\frac{1}{x_0}) <0,\) with \(0.2< x_0 < 0.3\) (see Fig. 5).

For the Clarke directional derivative of \(f_{6}\) at \(\overline{x}=0\), we have \(f_{6}^{\circ }(0;h) = |h |\) for all h (see [12, Example 2.2.3, p.33]). All derivatives for this example are depicted in Fig. 5.

Fig. 5

The graph of radial epiderivative with \(-\alpha =-0.26\), subderivative and Clarke’s derivative of function \(f_6\) at \(\overline{x}=0\)

This function demonstrates the case when the given function has all the generalized derivatives considered in this paper, with

$$\begin{aligned} f^r_{6} (0;h) \ne \text {d}f_{6} (0;h) = f^{\prime }_{6}(0;h)= 0 \ne f^{\circ }_{6} (0;h) = |h |. \end{aligned}$$

Example 8

Let

$$\begin{aligned} f_{7}(x) = \left\{ \begin{array}{ll} x^2\sin ^2 (\frac{1}{x}) &{} \text{ if } x \ne 0, \\ 0 &{} \text{ if } x =0 \end{array} \right. \end{aligned}$$

whose graph is depicted in Fig. 6.

Fig. 6

The graph of function \(f_7\)

For \(\overline{x}=0\), we have \(f_{7}^{\prime }(0;h) =0\) for all \(h \in \mathbb {R}, \) but the derivative mapping \(\nabla f_{7}\) is discontinuous at \(\overline{x} =0\) and the conditions of Theorem 6 are not satisfied. On the other hand, we have \(f_{7}(x) - f_{7}(0) \ge f_{7}^{\prime }(0;x-0)\) for all \(x \in \mathbb {R},\) which shows that the conditions of Corollary 1 are satisfied and as a result, we have:

$$\begin{aligned} f^r_{7} (0;h) = \text {d}f_{7} (0;h) = f^{\prime }_{7}(0;h)=0. \end{aligned}$$

This function also demonstrates the case when the given function is directionally differentiable, but is not Clarke directionally differentiable.

Example 9

Consider the function \(f_3\) from Example 3; we interpret Theorem 7 and Corollary 1 for this function. Recall that

$$\begin{aligned} f_3(x) = \left\{ \begin{array}{ll} 4|x+1| &{} \text{ if } x \le 0, \\ |x-1| + 3 &{} \text{ if } x > 0. \end{array} \right. \end{aligned}$$

Then, all conditions of Theorem 7 and the assumption (19) are satisfied at points \(\overline{x} < -1\) and

$$\begin{aligned} f_3^r(\overline{x};h)= \text {d}f_3(\overline{x};h) = f_3^{\prime } (\overline{x};h) = f_3^{\circ } (\overline{x};h) \end{aligned}$$

for all \(\overline{x} < -1\) and \(h \in \mathbb {R}.\) For \(\overline{x} = -1\) we have:

$$\begin{aligned} f_3^r(-1;h)\ne \text {d}f_3(-1;h) = f_3^{\prime } (-1;h) = f_3^{\circ } (-1;h)=4h \end{aligned}$$

for \(h>0,\) where the assumption (19) is not satisfied at \(\overline{x} = -1.\) Finally, it is clear that the condition (21) of Corollary 1 is satisfied at the point \(\overline{x} = 0,\) where the condition of Theorem 6 is not satisfied and as a result we have:

$$\begin{aligned} f_3^r(0;h)= \text {d}f_3(0;h) = f_3^{\prime } (0;h)=\left\{ \begin{array}{ll} 4h &{} \text{ if } h \le 0, \\ -h &{} \text{ if } h > 0. \end{array} \right. \ne f_3^{\circ }(0;h) \end{aligned}$$

for all \(h \in \mathbb {R},\) where

$$\begin{aligned} f_3^{\circ }(0;h) = \left\{ \begin{array}{ll} -h &{} \text{ if } h \le 0, \\ 4h &{} \text{ if } h > 0. \end{array} \right. \end{aligned}$$

\(\square \)

Remark 7

The extension of the optimality condition (6) to the nonconvex case established in terms of weak subgradients [29, Theorem 4], the representation of the radial epiderivative as a support function of the weak subdifferential set in the nonconvex case (see [28, Theorems 4.5 and 4.6]), Theorem 5 on the necessary and sufficient condition for radial epidifferentiability, the regularity relations given in Theorem 7 and Corollaries 1 and 2, as well as the illustrative examples presented above, help to better understand the radial epiderivative notion. On the other hand, Theorem 7 and Corollaries 1 and 2 show that computational methods and approaches existing for computing the directional derivative, the subderivative and the generalized derivative can be used to estimate the radial epiderivative.

Remark 8

The regularity conditions (19), (21) and (22), given in Theorem 7 and Corollaries 1 and 2 respectively, not only provide necessary and sufficient conditions for the equality of the generalized derivatives under consideration, but also generalize the main affine support relation (1) of convex analysis to the nonconvex case. The examples considered above illustrate that these conditions are valid not only for convex but also for nonconvex functions.

5 Computing Radial Epiderivatives

This section presents two approaches for computing radial epiderivatives. The first of them gives approximate formulas in terms of weak subgradients, while the second provides an iterative algorithm.

5.1 Computing the Radial Epiderivatives in Terms of the Weak Subgradients

This section presents theorems with constructive proofs, which provide explicit formulas for approximately computing weak subgradients at a given point. These theorems can be considered as versions of the same result for the Euclidean (\(\ell _2\)) and the \(\ell _1\) norms.

We first give the following lemma which plays an important role in the proof of the subsequent theorems.

Lemma 2

Let \(f:\mathbb {R}^n\rightarrow \mathbb {R}\) be a proper function, \(\overline{x} \in \mathbb {R}^n\), and \(f(\overline{x})\) be finite. If f has a radial epiderivative \(f^{r}(\overline{x};x)\) for every \(x \in \mathbb {R}^n,\) then \(f^{r}(\overline{x};\cdot )\) is weakly subdifferentiable at \(0_{\mathbb {R}^n},\) f is weakly subdifferentiable at \(\overline{x}\) and

$$\begin{aligned} \partial ^{w}f^{r}(\overline{x};0) = \partial ^{w} f(\overline{x}). \end{aligned}$$
(24)

Proof

Suppose that the radial epiderivative \(f^{r}(\overline{x};\cdot )\) exists and is given by (12). It follows from this relation that \(f^{r}(\overline{x};\cdot )\) is bounded from below on some neighborhood of \(0_{\mathbb {R}^n}\) and that it is a positively homogeneous function (see Lemma 1). Therefore by Theorem 3, it is weakly subdifferentiable at \(0_{\mathbb {R}^n}\).

On the other hand, it follows from the definition of the radial cone that

$$\begin{aligned} (x-\overline{x},f(x)-f(\overline{x}))\in R( epi (f);(\overline{x},f(\overline{x}))). \end{aligned}$$

Indeed, the radial cone \(R( epi (f);(\overline{x},f(\overline{x})))\) consists of elements of the form

$$\begin{aligned} \lim _{n \rightarrow +\infty }t_n[(x_n,y_n) - (\overline{x},f(\overline{x}))], \end{aligned}$$

where in particular we can take \(t_n =1, x_n =x, y_n = f(x)\) for all n,  which leads to the element \((x-\overline{x},f(x)-f(\overline{x})).\) Therefore by (12), we obtain the following relation:

$$\begin{aligned} f^{r}(\overline{x};x-\overline{x})\le f(x)-f(\overline{x}) \end{aligned}$$
(25)

for all \(x \in \mathbb {R}^n.\)

Now we show that f is weakly subdifferentiable at \(\overline{x}\) and that (24) is satisfied. Let \((x^*,c)\in \partial ^{w} f^{r}(\overline{x};0).\) Then

$$\begin{aligned} f^{r}(\overline{x};h) \ge \langle x^*,h \rangle - c\Vert h\Vert \quad \text{ for } \text{ all } h\in \mathbb {R}^n, \end{aligned}$$

or

$$\begin{aligned} f^{r}(\overline{x};x-\overline{x}) \ge \langle x^*,x-\overline{x} \rangle - c\Vert x-\overline{x}\Vert \quad \text{ for } \text{ all } x\in \mathbb {R}^n. \end{aligned}$$

By (25) this implies

$$\begin{aligned} f(x)-f(\overline{x}) \ge \langle x^*,x-\overline{x} \rangle - c\Vert x-\overline{x}\Vert , \quad \text{ for } \text{ all } x\in \mathbb {R}^n, \end{aligned}$$

which means that \((x^*,c)\in \partial ^{w} f(\overline{x}).\)

If \((x^*,c) \in \partial ^{w}f(\overline{x}),\) then for any fixed \(h \in \mathbb {R}^n\) we have:

$$\begin{aligned} f^{r} (\overline{x}; h) = \inf _{t > 0} \liminf _{ u \rightarrow h} \frac{f(\overline{x}+ tu)- f(\overline{x})}{t} \end{aligned}$$
$$\begin{aligned} \ge \inf _{t > 0} \liminf _{ u \rightarrow h} \frac{\langle x^{*},tu \rangle - c\Vert tu\Vert }{t} = \langle x^{*},h \rangle - c\Vert h \Vert \end{aligned}$$

that is \((x^*,c) \in \partial ^{w} f^{r}(\overline{x};0)\) and hence the proof is completed. \(\square \)

Theorem 8

Let \(f:\mathbb {R}^n\rightarrow \mathbb {R}\) be a proper function, \(\overline{x} \in (\mathbb {R}^n,\Vert \cdot \Vert _2)\), and let \(\overline{y}=f(\overline{x})\) be finite. Assume that, f is radially epidifferentiable at \(\overline{x}.\) Then, for every \(\varepsilon > 0\) and \(h \in \mathbb {R}^n {\setminus } \{0_{\mathbb {R}^n}\}\) with \(\Vert h\Vert =1,\) there exists a weak subgradient \((v,c) \in \partial ^wf(\overline{x})\) such that

$$\begin{aligned} f^r(\overline{x};h) = \langle v,h \rangle - c + \varepsilon , \end{aligned}$$
(26)

or

$$\begin{aligned} v=(c+f^r(\overline{x};h)- \varepsilon )h. \end{aligned}$$
(27)

Proof

Let \(h \in \mathbb {R}^n\) be an arbitrary element with \(\Vert h\Vert =1\); by the positive homogeneity of \(f^r(\overline{x};\cdot )\) (see Lemma 1), it is sufficient to consider such elements. Let \(\varepsilon > 0\) be an arbitrary positive number. We will show that there exist a nonnegative number c and a vector \(v \in \mathbb {R}^n\) (possibly depending on c and h) such that the pair \((v,c)\) is a weak subgradient of \(f^r(\overline{x};\cdot )\) at zero, that is, the following inequality is satisfied for every \(x \in \mathbb {R}^n:\)

$$\begin{aligned} f^r(\overline{x};x) - f^r(\overline{x};0) \ge \langle v,x-0 \rangle -c\left\| x-0\right\| . \end{aligned}$$

Since \(f^r(\overline{x};0)=0,\) the above inequality can be written simply in the following form:

$$\begin{aligned} f^r(\overline{x};x) \ge \langle v,x \rangle -c\left\| x\right\| \text{ for } \text{ all } x \in \mathbb {R}^n. \end{aligned}$$

In this proof, we aim not only to construct a weak subgradient \((v,c)\) of the given function f, but also to construct a maximal weak subgradient for a given point h, in the sense that the function \(g(x)= \langle v,x \rangle -c\left\| x \right\| \) is everywhere less than or equal to \( g(h)=f^r(\overline{x};h)- \varepsilon \) and that g achieves its maximum value on the unit sphere \(S_1=\{x \in \mathbb {R}^n: \Vert x\Vert =1 \}\) at the point \(x=h.\)

We will seek a pair \((v,c)\) such that the directional derivative

$$\begin{aligned} g^{\prime }(h;y)= \langle v,y \rangle -c \langle h,y \rangle / \left\| h\right\| \quad \text{ for } \text{ all } y \in \mathbb {R}^n \end{aligned}$$

of the function g at h in a direction y equals zero on the subspace

$$\begin{aligned} \mathbb {H}= \{y \in \mathbb {R}^n: \langle h,y \rangle =0\}. \end{aligned}$$

Then, the equality \(g^{\prime }(h;y)= 0\) on the subspace \(\mathbb {H}\) implies:

$$\begin{aligned} \langle v,y \rangle =0 \quad \text{ for } \text{ all } y \in \mathbb {H}. \end{aligned}$$

Thus, we obtain that the vector v must be orthogonal to the subspace \(\mathbb {H}\). Since \(\mathbb {H}\) is an \((n-1)\)-dimensional subspace of \(\mathbb {R}^n\), there exists an orthonormal basis \(\{e_1, \ldots ,e_{n-1}\}\) of \(\mathbb {H}\). Then, by the orthogonality of v to the subspace \(\mathbb {H},\) we have

$$\begin{aligned} \langle v,e_j \rangle =0 \quad \text{ for } \text{ all } j=1,\ldots ,n-1. \end{aligned}$$
(28)

Now note that the condition \(g(h)=f^r(\overline{x};h)- \varepsilon \) leads to the relation \( \langle v,h\rangle -c\Vert h\Vert =f^r(\overline{x};h)- \varepsilon \). By using the equality \(\Vert h\Vert =1\) and combining this equality with the \(n-1\) relations given in (28), we obtain n equations for \(n+1\) unknown parameters \((v,c)\in \mathbb {R}^n\times \mathbb {R}_+\) in the following form:

$$\begin{aligned} \langle v,h\rangle= & {} c\Vert h\Vert + f^r(\overline{x};h) - \varepsilon ,\end{aligned}$$
(29)
$$\begin{aligned} \langle v,e_j \rangle= & {} 0 \quad \text{ for } \text{ all } j=1,\ldots ,n-1. \end{aligned}$$
(30)

Since the vector h is chosen to be perpendicular to the subspace \(\mathbb {H},\) and the basis vectors \(e_j,\quad j=1,\ldots ,n-1 \) are orthonormal, we obtain that the vectors \(h,e_1,\ldots ,e_{n-1}\) are linearly independent, and therefore, the system of linear equations given by relations (29)–(30) has a unique solution v for each c.

We now find a solution to the system of equations (29)–(30) explicitly. Recall that the vector h is orthogonal to the subspace \(\mathbb {H}\). Therefore, we can seek a solution to the set of equations (30) in the form \(v=\lambda h\), where \(\lambda \) is an unknown coefficient. By substituting this expression for v in (29), we obtain \(\lambda =c+f^r(\overline{x};h)- \varepsilon \).

Thus for any given \(c \ge 0,\) we have obtained a pair \((v_c,c)\in \mathbb {R}^n\times \mathbb {R}_+\) with

$$\begin{aligned} v_c=(c+f^r(\overline{x};h)- \varepsilon )h \end{aligned}$$

such that

$$\begin{aligned} g(x) \le g(h) = f^r(\overline{x};h)- \varepsilon . \end{aligned}$$

Now we show that the number c in the definition of g can be chosen large enough such that

$$\begin{aligned} g_c(x) = \langle v_{c},x \rangle - c\left\| x\right\| \le f^r(\overline{x};x) \text{ for } \text{ all } x \in \mathbb {R}^n. \end{aligned}$$
(31)

For this aim, since g and \(f^r(\overline{x};\cdot )\) are both positively homogeneous functions, it is sufficient to show (31) only for points x in the unit sphere \(S_1.\)

Suppose to the contrary that there exist sequences \(\{c_k\}\) with \(c_k \rightarrow +\infty \) and \(\{x_k\} \subset S_1\) such that

$$\begin{aligned} g_{c_k}(x_{k}) = \langle v_{c_k},x_{k} \rangle - {c_k}\left\| x_{k}\right\| > f^r(\overline{x};x_{k}) \text{ for } \text{ all } k = 1, 2, \ldots \end{aligned}$$

or, since \(\left\| x_{k}\right\| = 1,\)

$$\begin{aligned} c_k (\langle h,x_k \rangle -1) + f^r(\overline{x};h)\langle h,x_k \rangle - f^r(\overline{x};x_{k}) - \varepsilon \langle h,x_k \rangle > 0 \end{aligned}$$
(32)

for all \(k = 1, 2, \ldots \). Without loss of generality, we can assume that \(x_k\) is a convergent sequence. Consider two cases.

(Case 1) Let \(x_k \rightarrow \widetilde{x} \ne h.\) In this case, since both h and \(\widetilde{x}\) lie on the unit sphere, we have \(\langle h,\widetilde{x} \rangle -1 < 0.\) Then, due to the boundedness from below of \(f^r(\overline{x};\cdot )\) on the unit sphere (by the hypothesis, \(f^r(\overline{x};\cdot )\) is given by (12)), the relation (32) leads to a contradiction as \(k \rightarrow \infty .\)

(Case 2) Let \(x_k \rightarrow h.\) Now, since \(\langle h,h \rangle =1,\) by passing to the limit as \(k \rightarrow \infty \), we obtain \(- \varepsilon >0\), which is a contradiction.

Thus, (31) is proved, and it is shown that given any \(\varepsilon > 0,\) there exists a number \(c_\varepsilon > 0\) such that the function \(g_{c_\varepsilon }\) corresponding to the pair \( (v_\varepsilon , c_\varepsilon ) = ((c_\varepsilon + f^r(\overline{x};h) - \varepsilon )h, c_\varepsilon )\), defined as

$$\begin{aligned} g_{c_\varepsilon } (x) = (c_\varepsilon + f^r(\overline{x};h) - \varepsilon ) \langle h,x \rangle -c_\varepsilon \left\| x \right\| \end{aligned}$$

satisfies the following conditions

$$\begin{aligned} g_{c_\varepsilon } (x) \le f^r(\overline{x};x) \text{ for } \text{ all } x \in \mathbb {R}^n, \end{aligned}$$

and

$$\begin{aligned} g_{c_\varepsilon } (x) \le g_{c_\varepsilon } (h) = f^r(\overline{x};h) - \varepsilon . \end{aligned}$$

The first relation, in particular, means that \( (v_\varepsilon , c_\varepsilon ) \in \partial ^wf^r(\overline{x};0).\) Hence by Lemma 2, we obtain that, \( (v_\varepsilon , c_\varepsilon ) \in \partial ^w f(\overline{x}),\) which completes the proof. \(\square \)

Now we give the \(\ell _1\)-norm version of Theorem 8. Since its proof is similar to that of Theorem 8, we state it without proof.

Theorem 9

Let \(f:\mathbb {R}^n\rightarrow \mathbb {R}\) be a proper function, \(\overline{x} \in (\mathbb {R}^n,\Vert \cdot \Vert _1)\), and let \(\overline{y}=f(\overline{x})\) be finite. Assume that f is radially epidifferentiable at \(\overline{x}.\) Then, for every \(\varepsilon >0\) and \(h =(h_1,\ldots ,h_n) \in \mathbb {R}^n \setminus \{0_{\mathbb {R}^n}\},\) there exists a weak subgradient \((v,c) \in \partial ^wf(\overline{x})\) such that

$$\begin{aligned} f^r(\overline{x};h) = \frac{1}{n} \langle v,Sgn(h) \rangle - c + \varepsilon , \end{aligned}$$
(33)

or

$$\begin{aligned} v=(c+f^r(\overline{x};h)- \varepsilon )Sgn(h) \end{aligned}$$
(34)

where Sgn(h) is the n-dimensional vector defined as \(Sgn(h) = (sgn(h_1),sgn(h_2),\ldots , sgn(h_n))\) with \(sgn(h_i) = 1 \) if \(h_i >0\) and \(sgn(h_i) = -1\) if \(h_i <0\), \(i=1,\ldots ,n\).
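For illustration, the following Python snippet evaluates the formulas (33)–(34); the helper name Sgn and the numerical values of \(f^r(\overline{x};h)\), c, \(\varepsilon \) and h are assumptions chosen for the example, not taken from the paper.

```python
import numpy as np

# Evaluating the formulas (33)-(34); the values of f^r(xbar; h), c, eps and h
# below are illustrative assumptions (all components of h are nonzero, since
# the statement above fixes sgn only for h_i != 0).
def Sgn(h):
    h = np.asarray(h, dtype=float)
    return np.where(h > 0, 1.0, -1.0)

fr, c, eps = -1.5, 3.0, 0.5
h = np.array([-1.0, 2.0, 0.5])                     # n = 3
v = (c + fr - eps) * Sgn(h)                        # formula (34)
print(v)                                           # [-1.  1.  1.]
print(np.dot(v, Sgn(h)) / len(h) - c + eps)        # recovers f^r(xbar; h) = -1.5, i.e. (33)
```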

Example 10

Consider the function \(f_3\) from Example 3 to illustrate Theorems 8 and 9. Recall that

$$\begin{aligned} f_3(x) = \left\{ \begin{array}{ll} 4|x+1 |&{} \text{ if } x \le 0, \\ |x-1 |+ 3 &{} \text{ if } x > 0. \end{array} \right. \end{aligned}$$

For this function, we will compute the weak subdifferentials at different points.

First consider the point \(\overline{x} = 1.\) By Lemma 2 we have \(\partial ^{w}f^{r}_3(1;0) = \partial ^{w} f_3(1).\) By definition of the weak subdifferential we obtain:

$$\begin{aligned}{} & {} \partial ^{w} f_3(1) = \partial ^{w}f_3^r(1;0) \nonumber \\= & {} \{(v,c) \in \mathbb {R} \times \mathbb {R}_+: f_3^r(1;h) - f_3^r(1;0) \ge vh - c|h| \text{ for } \text{ all } h \in \mathbb {R} \} \nonumber \\= & {} \{(v,c) \in \mathbb {R} \times \mathbb {R}_+: -c-\frac{3}{2} \le v \le c+1 \}. \end{aligned}$$
(35)

Now we compute weak subgradients by applying Theorems 8 and/or 9. Let \(h=-1, \varepsilon =1/2.\) Then by (16) we have \(f^r_3(1;-1)=-3/2,\) and applying the formula \(v=(c+f^r_3(\overline{x};h)- \varepsilon )h\) we obtain \(v= -c+2.\) By checking with (35), we see that \((v,c) = (-c+2, c) \in \partial ^{w} f_3(1)\) for every \(c \ge 1/2.\)
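A minimal numerical check of this computation, assuming only the values quoted above from (16) and (35) (the loop and the variable names are illustrative):

```python
# Numerical check of Example 10: the candidate v = (c + f_3^r(1; h) - eps) * h of
# Theorem 8 with h = -1, eps = 1/2 and f_3^r(1; -1) = -3/2, tested against the
# description (35) of the weak subdifferential.
fr, h, eps = -1.5, -1.0, 0.5
for c in (0.4, 0.5, 1.0, 3.0):
    v = (c + fr - eps) * h                  # equals -c + 2
    print(c, v, -c - 1.5 <= v <= c + 1)     # membership test from (35); True iff c >= 1/2
```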

5.2 Algorithm for Computing the Radial Epiderivatives

In this section, we present an algorithm for numerically computing the radial epiderivative of continuous functions satisfying the lower Lipschitz condition at a given point. We prove that the algorithm needs only a finite number of iterations to compute the radial epiderivative of a function with the above-mentioned properties at a given point in a given direction. First we present the algorithm.

Assume that \(f:\mathbb {R}^n \rightarrow \mathbb {R} \cup \{+\infty \}\) is a continuous function and satisfies the lower Lipschitz condition (14) at \(\overline{x} \in \mathbb {R}^n\) with Lipschitz constant \(L>0.\)

Algorithm 1

Approximate computation of the radial epiderivative \(f^r(\overline{x};h)\) of the function f at \(\overline{x}\) in the direction h.
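Since Algorithm 1 is displayed as a figure, its listing is not reproduced here. The following Python sketch gives one plausible numerical realization under our own assumptions: it scans the difference quotients \(y_k = (f(\overline{x}+t_k h)-f(\overline{x}))/t_k\) used in the proof of Theorem 10 over a geometric grid of step sizes and over directions sampled near h, returning their running minimum; the grid, the random perturbations and all parameter values are illustrative choices, not the authors' exact scheme.

```python
import numpy as np

def radial_epiderivative(f, x_bar, h, t_min=1e-3, t_max=1e3, n_t=2000,
                         n_dirs=20, radius=1e-3, seed=0):
    """Approximate f^r(x_bar; h) = inf_{t>0} liminf_{u->h} (f(x_bar + t*u) - f(x_bar)) / t
    by scanning a geometric grid of step sizes t, sampling directions u near h,
    and returning the running minimum of the difference quotients."""
    rng = np.random.default_rng(seed)
    x_bar = np.atleast_1d(np.asarray(x_bar, dtype=float))
    h = np.atleast_1d(np.asarray(h, dtype=float))
    f0 = f(x_bar)
    y = np.inf
    for t in np.geomspace(t_min, t_max, n_t):
        # crude surrogate for the liminf over u -> h: h itself plus small perturbations
        dirs = [h] + [h + radius * rng.standard_normal(h.shape) for _ in range(n_dirs)]
        y = min(y, min((f(x_bar + t * u) - f0) / t for u in dirs))
    return y

# Example: f_3 from Example 3 at x_bar = 1 in direction h = -1; by (16), f_3^r(1; -1) = -3/2.
f3 = lambda x: 4 * abs(x[0] + 1) if x[0] <= 0 else abs(x[0] - 1) + 3
print(radial_epiderivative(f3, [1.0], [-1.0]))     # approximately -1.5
```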

We now prove that Algorithm 1 terminates after finitely many iterations.

Theorem 10

Let \(f:\mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\) be a proper lower semicontinuous function and \(\overline{x} \in \mathbb {R}^n.\) Assume that f is lower Lipschitz at \(\overline{x}\) and that \(t_k, y_k, y_{k+1}\) are defined as in Algorithm 1 for every \(k=1,2, \ldots .\) Then, there exists a positive number N such that \(y_k = y_N\) for all \(k>N.\)

Proof

Let \(f:\mathbb {R}^n\rightarrow \mathbb {R}\) be a proper continuous function, \(\overline{x} \in \mathbb {R}^n,\) and let f be lower Lipschitz at \(\overline{x}\) with Lipschitz constant L. Suppose that \(t_k, y_k, y_{k+1}\) are defined as in Algorithm 1 for every \(k=1,2, \ldots .\) Assume to the contrary that Algorithm 1 generates a strictly decreasing sequence of numbers \(y_k\) with \(y_k >y_{k+1}\) for all \(k=1,2, \ldots \) which is not bounded from below. Let the number M be chosen such that \(M > L\Vert h\Vert .\) Then by the assumption, there exists a number k such that \(y_k < -M.\) Then, we have:

$$\begin{aligned} -M > y_k = \frac{f(\overline{x}+t_kh)-f(\overline{x})}{t_k} \ge \frac{-t_k L \Vert h\Vert }{t_k} =-L\Vert h\Vert , \end{aligned}$$

which is a contradiction. \(\square \)

The following section discusses and studies optimality conditions via the radial epiderivatives.

6 Optimality Conditions via Generalized Derivatives

We begin this section with the following result which establishes a necessary and sufficient condition for a descent direction via the radial epiderivative, for nonconvex nonsmooth functions. We will say that \(h \in \mathbb {X}\) is a descent direction for a function \(f:\mathbb {X} \rightarrow \mathbb {R} \cup \{+\infty \}\) at \(\overline{x} \in \mathbb {X},\) if there exists a positive number \(\bar{t}\) such that \(f(\overline{x}+\bar{t}h)<f(\overline{x}).\)

Theorem 11

Let \((\mathbb {X}, \Vert .\Vert _\mathbb {X})\) be a real normed space and let \(f:\mathbb {X} \rightarrow \mathbb {R} \cup \{+\infty \}\) be a proper function. Assume that f is radially epidifferentiable at \(\overline{x} \in \mathbb {X}.\) Then, the vector \(h \in \mathbb {X}\) is a descent direction for f at \(\overline{x}\) if and only if \(f^r(\overline{x}; h) < 0.\)

Proof

Proof of the “if” part. Let f be radially epidifferentiable at \(\overline{x}.\) Assume that \(f^r(\overline{x}; h) < 0\) for some \(h \in \mathbb {X}.\) Then by Proposition 1 we have:

$$\begin{aligned} f^{r}(\overline{x};h) = \inf _{t > 0} \liminf _{ u \rightarrow h} \frac{f(\overline{x}+ tu)- f(\overline{x})}{t} <0. \end{aligned}$$

Then, there exists \(\varepsilon >0\) such that \(f^{r}(\overline{x};h) < -\varepsilon .\) Thus, there exists a positive number \(t_{\varepsilon }\) such that

$$\begin{aligned} \liminf _{ u \rightarrow h} \frac{f(\overline{x}+ t_{\varepsilon }u)- f(\overline{x})}{t_{\varepsilon }} < -\varepsilon . \end{aligned}$$

Since f is radially epidifferentiable at \(\overline{x},\) by Theorem 4, it is lower semicontinuous there and hence, the latter relation implies:

$$\begin{aligned} \frac{f(\overline{x}+ t_{\varepsilon }h)- f(\overline{x})}{t_{\varepsilon }} < -\varepsilon , \end{aligned}$$

which means that h is a descent direction for f at \(\overline{x}.\)

The proof of the “only if” part is similar to that of the “if” part. \(\square \)

Remark 9

Note that Theorem 11 provides a necessary and sufficient condition for a descent direction toward the global minimum. This means that even when the point \(\overline{x} \in \mathbb {X}\) is a local but not global minimum point of f, a vector \(h \in \mathbb {X}\) with \(f^r(\overline{x}; h) < 0\) leads to a “better” point \(x_1\) for f (that is, \(f(x_1) < f(\overline{x})\)).

Now we formulate the following optimality condition, which can easily be obtained from Theorem 11. Note that a similar optimality condition was established earlier by Kasimbeyli in [25, Theorem 3.6].

Corollary 3

Let \((\mathbb {X}, \Vert .\Vert _\mathbb {X})\) be a real normed space and let \(f:\mathbb {X} \rightarrow \mathbb {R} \cup \{+\infty \}\) be a proper function. Assume that f is radially epidifferentiable at \(\overline{x} \in \mathbb {X}.\) Then, f attains its global minimum at \(\overline{x}\) if and only if \(f^r(\overline{x}; h)\) attains its minimum at \(h=0.\)

Proof

The proof easily follows from Theorem 11. \(\square \)

Example 11

Consider the function \(f_3\) from Example 3. Let

$$\begin{aligned} f_3(x) = \left\{ \begin{array}{ll} 4|x+1 |&{} \text{ if } x \le 0, \\ |x-1 |+ 3 &{} \text{ if } x > 0. \end{array} \right. \end{aligned}$$

Obviously, \(h=0\) is a global minimum point of the radial epiderivative

$$\begin{aligned} f^r_3(-1;h) = \left\{ \begin{array}{ll} -4h &{} \text{ if } h \le 0, \\ h &{} \text{ if } h > 0 \end{array} \right. \end{aligned}$$

(see (16)), which illustrates the assertion of Corollary 3 and demonstrates that \(\overline{x} =-1\) is a global minimum of \(f_3.\) On the other hand, since \(\overline{x} =-1\) is a global minimum of \(f_3,\) at this point the other (global) optimality condition \((0,0) \in \partial ^{w}f_3(-1)\) must be satisfied (see (9)). It can easily be checked that \((0,0) \in \partial ^{w}f_3(-1) = \{(v,c) \in \mathbb {R} \times \mathbb {R}_+: -c-4 \le v \le c+1 \}.\)

As another illustration of Corollary 3, consider the point \(\overline{x} =1,\) which is a local (but not global) minimum of \(f_3.\) At this point (see (16)), we have:

$$\begin{aligned} f^r_3(1;h)= \left\{ \begin{array}{ll} \frac{3h}{2} &{} \text{ if } h \le 0, \\ h &{} \text{ if } h > 0. \\ \end{array} \right. \end{aligned}$$

Clearly, \(h=0\) is not a minimum point of \(f^r_3(1;h),\) and as a result, it can easily be seen that \((0,0) \notin \partial ^{w}f_3(1).\)

On the other hand, since \(f^r_3(1;h) <0\) for every \(h<0,\) we obtain that (for example) \(h=-1 \) is a descent direction for \(f_3\) at \(\overline{x} = 1.\) In this case, a better point can be computed in the form \(x = 1 + t (-1)\) and the optimal value \(t=t_{opt}>0\) can be found by solving the scalar problem \(\min \{ f_3(1-t): t>0 \}.\) An easy computation shows that for \(t=2 \), the next iteration gives the global minimum \(x = -1.\) \(\Box \)
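A minimal sketch of the scalar line search just mentioned, assuming a plain grid search over t (the grid bounds and resolution are arbitrary choices, not part of the example):

```python
import numpy as np

# Grid-search line search along the descent direction h = -1 from x = 1:
# minimize the scalar function t -> f_3(1 - t) over t > 0.
def f3(x):
    return 4 * abs(x + 1) if x <= 0 else abs(x - 1) + 3

ts = np.linspace(1e-6, 5.0, 50001)
vals = np.array([f3(1.0 - t) for t in ts])
k = vals.argmin()
print(ts[k], 1.0 - ts[k], vals[k])    # approximately t_opt = 2, x = -1, f_3(-1) = 0
```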

Remark 10

In [14], the authors developed a method for approximately computing the weak subgradients via directional derivatives, which was used there to formulate an algorithm for solving some classes of nonconvex minimization problems. With the help of Theorem 8 or 9, this approximate computing method for the weak subgradients can be used to estimate radial epiderivatives, and in this way, using Theorem 11, one can compute a (global) descent direction for the (nonconvex) function under consideration. On the other hand, if we are given the value of the radial epiderivative, we can estimate the weak subgradients by using Theorem 8 or 9, and then use them in the weak subgradient based solution method given in [14].

Finally, we illustrate the behavior of the generalized derivatives considered in this paper, and the optimality conditions given in Theorem 1 and in relations (9) and (10), on two simple functions.

Example 12

Let

$$\begin{aligned} f_8(x) = \left\{ \begin{array}{ll} x^2 &{} \text{ if } x \le 0, \\ -x + 1 &{} \text{ if } x > 0. \end{array} \right. \end{aligned}$$
(36)
Fig. 7

The radial epiderivative (left), and subderivative and Clarke’s derivative (right) of \(f_8\) at the point \(\overline{x}=0\)

Then,

$$\begin{aligned} f_8^r(0;h)= \left\{ \begin{array}{ll} 0 &{} \text{ if } h \le 0, \\ -h &{} \text{ if } h > 0. \end{array} \right. \end{aligned}$$

Clearly, \(f_8^r(0;h)\rightarrow -\infty \) as \(h \rightarrow +\infty ,\) which indicates that the function \(f_8\) is unbounded from below and hence has no global minimum value; see Fig. 7.

On the other hand,

$$\begin{aligned} \text {d}f_8(0;h) = \left\{ \begin{array}{ll} 0 &{} \text{ if } h \le 0, \\ +\infty &{} \text{ if } h > 0. \end{array} \right. \end{aligned}$$

Consequently, \(f_8^{\circ }(0;h)= \text {d}f_8(0;h).\) Despite the fact that \(x=0\) is not a minimum point of \(f_8,\) the generalized subdifferential set contains the zero element, indicating that the point \(x=0\) is a Clarke stationary point.

Now consider the function

$$\begin{aligned} f_9(x) = \left\{ \begin{array}{ll} x^2 &{} \text{ if } x \le 0, \\ x + 1 &{} \text{ if } x > 0. \end{array} \right. \end{aligned}$$
(37)
Fig. 8

The radial epiderivative (left), and subderivative and Clarke’s derivative (right) of \(f_9\) at the point \(\overline{x}=0\)

Then,

$$\begin{aligned} f_9^r(0;h)= \left\{ \begin{array}{ll} 0 &{} \text{ if } h \le 0, \\ h &{} \text{ if } h > 0. \end{array} \right. \end{aligned}$$

Clearly, \(f_9^r(0;h)\) attains its global minimum value at \(h=0,\) indicating that the function \(f_9\) attains its global minimum at \(\overline{x}=0;\) see Fig. 8. Note also that \((0,0) \in \partial ^w f_9(0),\) which again justifies this assertion.

On the other hand, despite the differences between the functions \(f_8\) and \(f_9\), we obtain the same expressions for the Clarke directional derivative and the subderivative: \(\text {d}f_8(0;h) = \text {d}f_9(0;h)=f_8^{\circ }(0;h)=f_9^{\circ }(0;h)=+\infty \) for \(h>0\). \(\Box \)
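As a rough numerical cross-check of the closed forms above (not the authors' computation), one can approximate \(f_8^r(0;h)\) and \(f_9^r(0;h)\) by minimizing the difference quotients over a wide grid of step sizes t; for these piecewise functions the quotient is continuous in the direction near each sampled h, so the liminf over \(u \rightarrow h\) is dropped in this sketch.

```python
import numpy as np

# Grid approximation of f^r(0; h) for f_8 and f_9 by the minimum of the
# difference quotients (f(t*h) - f(0)) / t over a wide range of t > 0.
f8 = lambda x: x**2 if x <= 0 else -x + 1
f9 = lambda x: x**2 if x <= 0 else  x + 1

ts = np.geomspace(1e-4, 1e4, 4000)
for h in (-1.0, 1.0, 10.0):
    q8 = min((f8(t * h) - f8(0.0)) / t for t in ts)
    q9 = min((f9(t * h) - f9(0.0)) / t for t in ts)
    print(h, round(q8, 3), round(q9, 3))
# expected output close to: (-1.0, 0.0, 0.0), (1.0, -1.0, 1.0), (10.0, -10.0, 10.0),
# matching the closed forms f_8^r(0; h) and f_9^r(0; h) displayed above.
```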

7 Conclusion

This paper studies new properties of the radial epiderivatives and describes a class of radially epidifferentiable functions. We prove that lower Lipschitz functions are radially epidifferentiable and vice versa. It follows from this theorem that all convex functions are radially epidifferentiable and that the radial epiderivative of every convex function coincides with the classic directional derivative. On the other hand, all DC functions f of the form \(f=f_1-f_2\), where \(-f_2\) is lower Lipschitz, become radially epidifferentiable. The paper presents new regularity conditions for establishing equality between the different kinds of generalized derivatives and compares them with the conditions existing in the literature. We establish new global optimality conditions via the radial epiderivatives for nonconvex functions. These conditions are compared with the optimality conditions given in the literature via the generalized derivatives. All the regularity and optimality conditions are demonstrated and illustrated on examples. The paper presents a methodology for finding a global descent direction for nonconvex functions in terms of radial epiderivatives. The paper also presents explicit formulas and an iterative algorithm for approximately computing the radial epiderivatives. We hope that the new computing formulas presented in the paper for the radial epiderivatives, the global descent directions and the optimality conditions can be used for new research directions in the future, such as:

  • Developing new radial-epiderivative-based global solution methods in nonconvex programming (see e.g. [14, 15]).

  • Applications in subdifferential calculus in nonconvex programming (see e.g. [7, 8]).

  • Extensions of this work, e.g., in finance and neuroscience, such as via generalized stochastic optimal control or generalized risk management (see e.g. [24, 42]).

  • Applications in data science (see e.g. [2, 6, 22]).