Abstract
We consider differential inclusions with strengthened one-sided Lipschitz (SOSL) right-hand sides. The class of SOSL multivalued maps is wider than the class of Lipschitz maps and is a subclass of the class of one-sided Lipschitz maps. We prove a Filippov approximation theorem for the solutions of such differential inclusions with perturbations in the right-hand side, both of the set of velocities (outer perturbations) and of the state (inner perturbations). The obtained estimate of the distance between the approximate and exact solutions extends the known Filippov estimate for Lipschitz maps to SOSL ones and improves the order of approximation with respect to the inner perturbation known for one-sided Lipschitz (OSL) right-hand sides from \(\frac{1}{2}\) to 1.
1 Introduction
We consider the differential inclusion
where F is a set-valued map defined on \(\mathbb {R}^{n+1}\) with nonempty compact (possibly convex) sets in \(\mathbb {R}^n\) as values, measurable in the time t for all x and upper semicontinuous (not necessarily continuous) in the state x for almost all \(t \in I = [t_0,T]\).
The solutions of the inclusion are absolutely continuous (AC) functions \(x: I \rightarrow \mathbb {R}^n\) satisfying (1) almost everywhere.
Filippov-type approximation theorems for differential inclusions follow the original theorem of Filippov [38] and provide estimates for approximating the solutions of (1), in the presence of perturbations, by solutions of the original inclusion (1). The perturbations appear in the right-hand side F(t, x) and in the initial set, and the approximation estimates are given in terms of the norms of the perturbations. The theorem of Filippov extends classical results on Lipschitz continuity of the (unique) solution of an ODE with respect to perturbations in the right-hand side and the initial point to Lipschitz stability of the solution set of a differential inclusion. We next recall the classical theorem of Filippov [38] in a slightly simplified form, with a fixed Lipschitz constant instead of a time-dependent one.
Theorem 1.1
(Filippov [38]) Let \(F: I \times \mathbb {R}^n \Rightarrow \mathbb {R}^n\) have closed, nonempty sets as values and consider an approximate solution \(y: I \rightarrow \mathbb {R}^n\) with perturbed initial value \(y(t_0)=y^0\) and
where \(\varepsilon (\cdot )\) is integrable. For \(\rho > 0\) define the tube \(\Omega (t) = y(t) + \rho B_1(0)\) for \(t \in I\) with \(x^0 \in \Omega (t_0)\) and let F be continuous (in the Hausdorff metric in Sect. 2.1) for all \(x \in \Omega (t)\), \(t \in I\) as well as let \(F(t,\cdot )\) be Lipschitz continuous with a constant \(L \ge 0\), i.e.
Then there exists a (neighboring) solution \(x(\cdot )\) of (1) on a subinterval of \(I\) such that
for \(t \in I\) with \(\xi (t) \le \rho \).
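The bound \(\xi (\cdot )\) above is the usual Gronwall-type expression of the classical Filippov theorem; a sketch of the standard estimate in the fixed-constant form (reconstructed from the classical statement, so the exact constants are an assumption here) reads:

```latex
\Vert y(t) - x(t) \Vert_2 \;\le\; \xi(t)
  \;=\; e^{L (t - t_0)} \, \Vert y^0 - x^0 \Vert_2
      \;+\; \int_{t_0}^{t} e^{L (t - s)} \, \varepsilon(s) \, ds ,
\qquad t \in I .
```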
In other words, an approximate solution satisfying (3) with a (time-dependent) \(\varepsilon (\cdot )\)-violation of the velocity from the right-hand side F(t, y(t)) is close to a solution \(x(\cdot )\) of the unperturbed system (1), with a distance proportional to the norm of the violation \(\varepsilon (\cdot )\). The importance of the theorem is confirmed by its numerous applications related to discrete or other approximations of differential inclusions (e.g., [28,29,30,31, 34,35,36, 64]), relaxation theorems (also called Filippov–Ważewski theorems) on the density of the solution set of (1) in the set of relaxed solutions (e.g., [2, Sec. 2.4], [3, Sec. 10.4], [6, 9, 24, 42, 58]), results on the asymptotic behavior of the solutions, and others (e.g., [31,32,33, 36]). That is why extending the scope of the Filippov theorem beyond the family of Lipschitzian maps, and even beyond that of continuous maps, is an attractive field of investigation. For more information we refer to [3, 31, 36, 42].
In this respect see also the discussion in [31] of the theorem of Pliś, which states the existence of a neighboring trajectory for differential inclusions without assuming uniqueness. It is obtained in [56] for right-hand sides with closed values and integrable Lipschitz modulus and also includes an error estimate with a maximal solution of a corresponding ODE.
In this paper, for any given solution \(y(\cdot )\) of the system with inner and outer vector perturbations in \(\mathbb {R}^n\),
we want to obtain the existence of a solution of the original system (1) such that the distance between these two solutions is estimated by some norms of the measurable perturbations \(\overline{\delta }(\cdot ),\overline{\varepsilon }(\cdot ), {\overline{\rho }}^{0}\) and is small if the perturbations are small.
Our motivation for representing the perturbed system in the form (5) and the importance of the inner perturbations \(\overline{\delta }(\cdot )\), which are essential when F is not continuous with respect to the state variable, are discussed in detail in Sect. 2.2.
Removing the continuity of F with respect to the state variable may be problematic, since then the continuous dependence of the solutions on perturbations in the initial condition or the right-hand side may be lost. Fortunately, in some cases the continuous dependence is preserved, possibly in a Hölder form, as in the case of one-sided Lipschitz (OSL) mappings F.
The OSL property of single-valued maps in \(\mathbb {R}^n\) or in Hilbert spaces is known in numerical analysis as a generalization of Lipschitz continuity ([4, 22, 43, Sec. IV.12], [15]).
An early and restrictive extension of the OSL condition to set-valued maps is defined in [37] and [45, 49]. This condition, equivalent to the monotonicity of the map \(\mu I - F\) for some \(\mu \in \mathbb {R}\), may be satisfied only by maps that are a.e. single-valued [67].
A weaker abstract version of the OSL condition in Banach spaces is formulated in [23], and its most popular form for multimaps in \(\mathbb {R}^n\) and Hilbert spaces is coined in [29]. More details on OSL maps may also be found in [25, 27].
Definition 1.2
([29]) The set-valued map F defined from a domain \([t_0,T]\times D\) in \(\mathbb {R}^{n+1}\) to \(\mathbb {R}^n\) is called One-Sided Lipschitz (OSL) in D with constant \(\mu \in \mathbb {R}\) if for a.e. \(t\in [t_0,T] \), every \(x,y\in D\) and every \(v\in F(t,x)\) there is \(w\in F(t,y)\) such that
where \(\Vert \cdot \Vert _2\) denotes the Euclidean norm in \(\mathbb {R}^n\).
The OSL property describes a large family of mappings which contains both Lipschitz and dissipative maps (see also Sect. 2 for examples and a comparison to other known classes of Lipschitzlike maps).
One should note that the constant \(\mu \) may be zero or even negative, in contrast to the case of Lipschitz continuity. OSL systems with a negative OSL constant have a strongly invariant set which is asymptotically stable and attracts every trajectory [32]. In addition, OSL maps are not necessarily continuous, as is shown in Sect. 2: easy examples of discontinuous OSL single-valued functions in \(\mathbb {R}^1\) with OSL constant \(\mu =0\) are monotone decreasing functions.
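As a small numerical illustration of the last observation (a sketch only: the map \(f(x) = -\operatorname{sign}(x)\) and the sampling grid are our illustrative choices, not taken from the paper), one can test the single-valued OSL inequality \((x-y)(f(x)-f(y)) \le \mu (x-y)^2\) with \(\mu = 0\) on sample pairs:

```python
import numpy as np

def osl_defect(f, xs, mu=0.0):
    """Largest violation of (x - y)(f(x) - f(y)) <= mu * (x - y)^2
    over all sample pairs; nonpositive values support the OSL property."""
    worst = -np.inf
    for x in xs:
        for y in xs:
            worst = max(worst, (x - y) * (f(x) - f(y)) - mu * (x - y) ** 2)
    return worst

# monotone decreasing and discontinuous at 0 -- OSL with constant mu = 0
f = lambda x: -np.sign(x)
xs = np.linspace(-1.0, 1.0, 41)
print(osl_defect(f, xs) <= 0.0)  # True
```

A finite sample can of course only support, never prove, the OSL property; the grid deliberately contains the discontinuity point \(x=0\).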
In the case of an OSL map F (even in the presence of discontinuities), a Filippov-type approximation theorem is proved in [29] for inclusions with OSL and convex-valued right-hand sides with outer perturbations, and first order of approximation of the solutions with respect to these perturbations is established. This theorem is applied there to the Euler approximation of differential inclusions, and error estimates are derived implying convergence for right-hand sides which may not be Lipschitz in the state variable (this is easy to see for autonomous inclusions). Effective estimates for the Euler scheme providing convergence for OSL mappings that are discontinuous in the state variable follow from a Filippov-type theorem for OSL mappings with inner perturbations [30], where order \(\frac{1}{2}\) of approximation with respect to the inner perturbations is obtained. This leads to the order \(\mathcal {O}(\sqrt{h})\) of the Euler method for differential inclusions with (discontinuous) OSL right-hand sides.
The Strengthened One-Sided Lipschitz (SOSL) condition we define next is intermediate between the Lipschitz and the OSL condition, i.e. weaker than the Lipschitz condition and stronger than the OSL one.
Definition 1.3
([53, p. 171]) The set-valued map F from a domain \([t_0,T]\times D\) in \(\mathbb {R}^{n+1}\) to \(\mathbb {R}^n\) is called Strengthened One-Sided Lipschitz (SOSL) in D with constant \(\mu \in \mathbb {R}\) if for a.e. \(t\in [t_0,T]\), every \(x=(x_1,\ldots ,x_n)\), \(y=(y_1,\ldots ,y_n)\in D\) and every \(v=(v_1,\ldots ,v_n)\in F(t,x)\) there is \(w=(w_1,\ldots ,w_n)\in F(t,y)\) such that for all \(i \in \{ 1, \ldots , n\}\) we have the implications
and
where \(\Vert \cdot \Vert _\infty \) denotes the maximum norm in \(\mathbb {R}^n\).
The two cases in the definition above can be unified with the trivial case \(x_i = y_i\) as follows:
For a.e. \(t\in [t_0,T]\), every \(x, y \in D\) and every \(w \in F(t,y)\) there is \(v \in F(t,x)\) such that for all \(i \in \{ 1, \ldots , n\}\) the implications (7)–(8) hold, or equivalently
Note that the SOSL constant \(\mu \), too, may be negative, and F is not necessarily a.e. single-valued. For maps with values in \(\mathbb {R}^1\), the SOSL condition is equivalent to the OSL one. Also, the set-valued map F is SOSL iff \({\text {co}}F\) (with convexified values) is SOSL.
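The unified componentwise inequality above can also be probed numerically. The following sketch (our illustrative choice of a componentwise decreasing, single-valued map, so the selections \(v = F(x)\), \(w = F(y)\) are forced) checks \((x_i - y_i)(v_i - w_i) \le \mu \, |x_i - y_i| \, \Vert x - y \Vert_\infty\) on random sample pairs:

```python
import numpy as np

def sosl_defect(F, pairs, mu=0.0):
    """Largest violation of the componentwise SOSL inequality
       (x_i - y_i)(v_i - w_i) <= mu * |x_i - y_i| * ||x - y||_inf
    for a single-valued map F (so v = F(x), w = F(y))."""
    worst = -np.inf
    for x, y in pairs:
        v, w = F(x), F(y)
        dinf = np.max(np.abs(x - y))
        for i in range(len(x)):
            worst = max(worst, (x[i] - y[i]) * (v[i] - w[i])
                        - mu * abs(x[i] - y[i]) * dinf)
    return worst

# componentwise decreasing map: SOSL with constant mu = 0
F = lambda x: np.array([-np.sign(x[0]), -x[1]])
rng = np.random.default_rng(0)
pairs = [(rng.uniform(-1, 1, 2), rng.uniform(-1, 1, 2)) for _ in range(500)]
print(sosl_defect(F, pairs) <= 1e-12)  # True
```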
A somewhat stronger (uniform) version of the SOSL condition appears earlier in [50, 51] (see remarks, e.g., in [8, 9]). First-order convergence of the Euler scheme is derived in [49] for the one-dimensional case and in [50, 51] for higher dimensions for the unique solution of a differential inclusion satisfying this condition. Later, first-order convergence of the solution set of the explicit/implicit Euler method is derived in [9, 53] also for the wider class of SOSL maps as defined here.
The following hierarchy between the classes of OSL, SOSL and Lipschitz (in the Hausdorff metric) mappings with compact values in \(\mathbb {R}^n\) is not hard to verify (see e.g., Example 2.8 and [8, Example 5.1]):
and there is no equality between any two classes.
Although the SOSL condition is weaker than Lipschitz continuity, it is strong enough to provide approximation results for differential inclusions (see [9, 53]) that are better than those for OSL maps and often the same as for Lipschitz maps. This is exactly the case with the Filippov approximation theorem (Theorem 1.1) proved here for SOSL maps in the right-hand side.
As the main result of this paper we prove a Filippov-type theorem for a SOSL right-hand side F with inner and outer perturbations. The obtained estimate of the distance between the perturbed and nonperturbed solutions is of first order, as in the classical Filippov theorem for the Lipschitz case, and improves the corresponding approximation estimate for OSL right-hand sides from [30], removing the square root on the norm of the inner perturbation. Thus we prove the correctness of the conjecture in [30, Remark 3.2] stating that, under a suitably defined SOSL condition, one may obtain first-order convergence with respect to the inner perturbation.
The paper is organized as follows: In the next section general definitions and known facts as well as examples and properties of OSL and SOSL maps are presented. In Section 3 the main theorem of the paper, together with stability results for reachable sets, is presented. In Section 4 an application of this theorem to approximations of dynamical systems, with numerical experiments, is given.
2 Preliminaries and examples
2.1 Notation
We denote vectors in \(\mathbb {R}^n\) by \(x=(x_1,x_2,\ldots ,x_n)\in \mathbb {R}^n\). The (closed) Euclidean unit ball in \(\mathbb {R}^n\) is denoted by \(B_1(0)\), the ball around the center \({x}^{0}\) with radius \(r > 0\) by \(B_r({x}^{0})\). The maximum norm of the vector \(x\in \mathbb {R}^n\) is denoted by \(\Vert x\Vert _\infty =\max _{1\le i\le n}|x_i|\), its Euclidean norm by \(\Vert x\Vert _2\) or simply \(|x|\). The norm of an \(L_\infty \)-function \(f:I\rightarrow \mathbb {R}^n\) for a bounded, nonempty interval \(I = [t_0, T] \subset \mathbb {R}\) is \(\Vert f\Vert _{L_\infty }={\text {ess sup}}_{t\in I}\Vert f(t)\Vert _2\); for f being an \(L_1\)-function we denote the corresponding norm by \(\Vert f\Vert _{L_1}=\int _I \Vert f(t)\Vert _2 \,dt\). For a real number \(\mu \) we denote \(\mu _+=\max \{ 0,\mu \}\), \(\mu _-=\min \{ 0,\mu \}\).
We denote by \(\mathcal {K}(\mathbb {R}^n)\) the set of compact, nonempty subsets of \(\mathbb {R}^n\) and by \(\mathcal {C}(\mathbb {R}^n)\) the set of convex, compact, nonempty subsets of \(\mathbb {R}^n\). To measure distances of bounded, nonempty sets \(A, B \subset \mathbb {R}^n\) we introduce the one-sided Hausdorff distance \({\text {d}}(A, B) = \sup _{a \in A} \, {\text {dist}}(a, B)\) and the (two-sided) Hausdorff distance \({\text {d}}_{{\text {H}}}(A, B) = \max \big \{ {\text {d}}(A, B)\), \({\text {d}}(B, A) \big \}\), where \({\text {dist}}(z, B) = \inf _{b \in B} \Vert z-b\Vert _2\) is the distance of a vector \(z \in \mathbb {R}^n\) to the set B. The norm of a set is defined by \(\Vert A\Vert _2 = {\text {d}}_{{\text {H}}}(A, \{0\}) = \sup \{ \Vert a\Vert _2: a \in A \}\). Recall that the Hausdorff distance in \(\mathcal {K}(\mathbb {R}^n)\) is also obtained via \({\text {d}}_{{\text {H}}}(A, B) = \min \big \{ \varepsilon > 0 \,:\, A \subset B + \varepsilon B_1(0), \ B \subset A + \varepsilon B_1(0) \big \}\). The interior, the boundary and the closure of a set \(A \subset \mathbb {R}^n\) are denoted by \({\text {int}}(A)\), \({\text {bd}}(A)\) and \(\overline{A}\), respectively.
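For finite sets these distances reduce to simple min/max computations; a minimal sketch (the two sample sets are our own choices) is:

```python
import numpy as np

def dist(z, B):
    """Distance dist(z, B) of a point z to a finite set B (rows of a matrix)."""
    return min(np.linalg.norm(z - b) for b in B)

def d_one_sided(A, B):
    """One-sided Hausdorff distance d(A, B) = sup_{a in A} dist(a, B)."""
    return max(dist(a, B) for a in A)

def d_H(A, B):
    """Two-sided Hausdorff distance d_H(A, B)."""
    return max(d_one_sided(A, B), d_one_sided(B, A))

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0]])
print(d_one_sided(A, B))  # 1.0
print(d_one_sided(B, A))  # 0.0
print(d_H(A, B))          # 1.0
```

The example shows that the one-sided distance is not symmetric: \(B \subset A\) gives \({\text {d}}(B, A) = 0\) while \({\text {d}}(A, B) = 1\).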
We fix the time interval \(I = [t_0,T]\) and denote \(F: D \Rightarrow \mathbb {R}^n\) for a setvalued map with domain \(D \subset \mathbb {R}^m\) (usually \(m \in \{n,n+1\}\)) and which has subsets of \(\mathbb {R}^n\) as images. The graph of the setvalued map F is defined as
F is (Lebesgue) measurable if the preimage \(F^{-1}(U) = \{ t \in I :\, F(t) \cap U \ne \emptyset \}\) is a (Lebesgue) measurable set for each open set \(U \subset \mathbb {R}^n\) [3, Sec. 8.1]. For a single-valued map \(F(t) = \{ f(t) \}\) this corresponds to the usual criterion for (Lebesgue) measurable functions \(f: I \rightarrow \mathbb {R}^n\) that the preimage \(f^{-1}(U) = \{ t \in I :\, f(t) \in U \}\) of an open set \(U \subset \mathbb {R}^n\) is (Lebesgue) measurable. F with compact, nonempty images is upper semicontinuous (usc) (in the \(\varepsilon \)-sense) [2, Sec. 1.1, Definition 5], [3, Sec. 1.4, below Definition 1.4.1] if for all \(x \in D\), \(\varepsilon > 0\) there exists \(\delta > 0\) such that for all \(y \in \mathbb {R}^n\) with \(\Vert y-x\Vert _2 < \delta \) the inclusion \(F(y) \subset F(x) + \varepsilon B_1(0)\), i.e. \({\text {d}}(F(y), F(x)) \le \varepsilon \), holds (in contrast to set-valued continuity, where \({\text {d}}(F(x), F(y)) \le \varepsilon \) would also hold).
2.2 Inner and outer perturbations
We use the term “inner perturbation” for the state perturbation \(\overline{\delta }(\cdot )\) in the inclusion (5) and “outer perturbation” for the perturbation \(\overline{\varepsilon }(\cdot )\) of the set of velocities, as it is done in the classical book of Filippov [39, Chap. 2, § 7] and e.g., in [19, Definition 2], [44, Sec. 2], [21, Sec. A.4, (2)], [5, Sec. 5], [12, (14)].
The lack of continuity of \(F(t,\cdot )\) is the main reason to consider separately perturbations of the state variable (the inner perturbations) and perturbations of the set of velocities (the outer perturbations), as in [39, Chap. 2, § 7]. Indeed, if \(F(t,\cdot )\) is Lipschitz continuous with constant L, we have for small \(\Vert \overline{\delta }(t)\Vert _2\) the inclusion
Then any solution \(y(\cdot )\) of the perturbed inclusion (5) fulfills the inclusion
where \(\Vert \overline{\xi }(t) \Vert _2 \le L \Vert \overline{\delta }(t) \Vert _2 + \Vert \overline{\varepsilon }(t) \Vert _2\). In the latter inclusion only a small outer perturbation is present. In this case it is sufficient to consider only outer perturbations in the Filippov-type theorems.
Yet, without continuity of \(F(t,\cdot )\), an element of the set \(F(t,x + \overline{\delta }(t))\) may be far away from the set F(t, x) for small \(\Vert \overline{\delta }(t)\Vert _2\), so that the approximation bound for the outer perturbation \(\Vert \overline{\xi }(t) \Vert _2\) in (11) may be large while the inner perturbations tend to zero.
The following simple example of Filippov illustrates this observation.
Let \(F:\mathbb {R}\Rightarrow \mathbb {R}\) be defined by
The set-valued map in Fig. 1 (right plot) is the convex-valued usc “regularization” of \(-{\text {sign}}(x)\) (see (15), left plot) and is discontinuous, only upper semicontinuous, at \(x=0\).
On the graph of \(F(x)={\text {Sign}}(x)\) we consider the sequences of points \((x_{k},y_{k}) = ({\overline{\delta }_{k}}, -1)\) and \((-x_{k},-y_{k})=(-{\overline{\delta }_{k}}, 1)\) with \({\overline{\delta }_{k}} = \frac{1}{k}\) for \(k \in \mathbb {N}\). In Fig. 1 (right plot) the red graph and the blue points for \(k=2\) are shown.
Due to the upper semicontinuity of F for \(x=0\), i.e. for all \(\varepsilon > 0\) there exists \(\delta > 0\) such that
the sequence \(((x_k,y_k))_k\) with \(y_k \in F(x_k)\) converges to \((0, -1) \in F(0)\). Similarly, the sequence \(((-x_k,-y_k))_k\) converges to \((0, 1) \in F(0)\). The missing lower semicontinuity of F at \(x=0\) implies that the inclusion
holds only with \(\varepsilon \ge 2\) for any small \(\delta >0\), and not for smaller \(\varepsilon > 0\).
Thus, replacing an inner perturbation by an outer one may yield too coarse estimates in the Filippov-type theorem. Considering inner perturbations separately from the outer ones refines the estimates and makes it possible to extend the approximation estimates to set-valued maps F which are discontinuous with respect to the state variable.
In Fig. 2 the graphs of two inner vector perturbations \(F(x+\overline{\delta }_k)\) (in blue) and \(F(x-\overline{\delta }_k)\) (in green) for \(\overline{\delta }_k=\frac{1}{k}\) are shown for \(k=2\) in the left plot, while the right plot shows two outer vector perturbations \(F(x)+\overline{\varepsilon }_k\) (in blue) and \(F(x)-\overline{\varepsilon }_k\) (in green) for \(\overline{\varepsilon }_k=\frac{1}{k}\) and \(k=2\). In both plots the graph of the original mapping \(F(x) = {\text {Sign}}(x)\) (dashed red lines) is also shown.
In Fig. 2 one can check visually that the Hausdorff distance between the graphs of \(F(\cdot )\) and \(F(\cdot + \overline{\delta }_k)\) is bounded by \(\overline{\delta }_k\). The same estimate for the graphs holds for the outer vector perturbation \(F(\cdot ) + \overline{\varepsilon }_k\). Nevertheless, the Hausdorff distance between the values of F and of the perturbed mapping \(F(\cdot + \overline{\delta }_k)\) at the point \(x = 0\) equals 2.
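This gap between graph distance and value-wise distance is easy to reproduce numerically. In the sketch below the usc regularization is discretized by a finite grid of the interval at \(x=0\) (the discretization and the sample values of \(\delta\) are our illustrative choices):

```python
import numpy as np

def Sign(x, m=201):
    """Discretized usc regularization as in Fig. 1 (right):
    {-1} for x > 0, {1} for x < 0 and the interval [-1, 1] (grid) at x = 0."""
    if x > 0:
        return np.array([-1.0])
    if x < 0:
        return np.array([1.0])
    return np.linspace(-1.0, 1.0, m)

def d_H(A, B):
    """Hausdorff distance of finite subsets of R."""
    d = lambda A, B: max(min(abs(a - b) for b in B) for a in A)
    return max(d(A, B), d(B, A))

# the value-wise distance at x = 0 stays 2 for arbitrarily small delta
for delta in (0.5, 0.1, 0.01):
    print(d_H(Sign(0.0), Sign(delta)))  # 2.0 each time
```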
Let us sketch two more motivations for the systems (5) with vector and set-valued perturbations, respectively. Theorem 1.1 requires Lipschitz continuity in the state variable with closed, not necessarily convex values and essentially that the approximate solution fulfills the inequality (2). The latter together with \(\varepsilon _0 = \Vert y^0 - x^0 \Vert _2\) means that \(y(\cdot )\) is a solution of the differential inclusion
with setvalued outer perturbation \(\varepsilon (t) B_1(0)\). In this case we can rewrite the inclusion in the form (5) with \(\overline{\delta }(t) = 0\) by [21, Proposition 3.5].
The second motivation we would like to sketch comes from set-valued discretization methods for solving the differential inclusion (1), such as the set-valued Euler method [10, 17, 35, 66]. A discrete solution for the step size \(h = \frac{T-t_0}{N}\) with a given \(N \in \mathbb {N}\), taking values on the grid points \(t_j = t_0 + j h\), \(j=0,\ldots ,N\), has the form \({y}^{j+1} = {y}^{j} + h {w}^{j}\), \({w}^{j} \in F({y}^{j})\), where we have assumed F to be autonomous for simplicity. To prove convergence of this set-valued method, one essential step is to obtain the existence of a neighboring solution in continuous time. Consider the piecewise linear interpolant
on the subinterval \(I_j = [t_j, t_{j+1}]\), \(j=0,\ldots ,N-1\). It is absolutely continuous with the derivative
in the interior of \(I_j\). The righthand side in (14) can be seen as an inner vector perturbation of the righthand side F(y(t)) in (1), since
Thus, \(y(\cdot )\) is a solution of the perturbed differential inclusion (14), and the Filippov Theorem 3.7 guarantees the existence of a neighboring solution of (1) at a distance \(\mathcal {O}(h)\) for SOSL right-hand sides, if the inner perturbation \(\overline{\delta }(t)\) is \(\mathcal {O}(h)\) in norm. If the original inclusion (1) has a unique solution, this Filippov theorem already implies error estimates of order 1 for the set-valued Euler or some Runge–Kutta methods (see [45, 49]).
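The Euler iteration above can be sketched for a scalar SOSL example (the selection \(-\operatorname{sign}(x)\) of the right-hand side, the initial value and the step size are our illustrative choices):

```python
import numpy as np

def euler(f, x0, t0, T, N):
    """Explicit Euler iteration y^{j+1} = y^j + h * f(y^j) for a fixed
    single-valued selection f of the (autonomous) right-hand side."""
    h = (T - t0) / N
    ys = [x0]
    for _ in range(N):
        ys.append(ys[-1] + h * f(ys[-1]))
    return np.linspace(t0, T, N + 1), np.array(ys)

# selection -sign(x) of the SOSL map Sign(x); the exact solution of
# x' = -sign(x), x(0) = 1 reaches 0 at t = 1 and stays there
t, y = euler(lambda x: -np.sign(x), 1.0, 0.0, 2.0, 200)
h = 2.0 / 200
print(abs(y[-1]) <= h + 1e-9)  # True: the final iterate stays O(h)-close to 0
```

After reaching a neighborhood of the equilibrium, the iterates oscillate with amplitude at most \(h\) around it, consistent with the first-order distance to a neighboring exact solution.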
2.3 Examples for SOSL/OSL setvalued maps
We list some classes of SOSL set-valued maps. An OSL (or SOSL) function in this subsection means a single-valued function taking values in \(\mathbb {R}\) or \(\mathbb {R}^n\). Since every single-valued map whose values come from an OSL function is an OSL set-valued map (see Remark 2.3), we start the discussion with SOSL and OSL (single-valued) functions and the special case of linear functions.
Lemma 2.1
Let \(A \in \mathbb {R}^{n \times n}\) be a matrix and \(b(t) \in \mathbb {R}^n\) for \(t \in I\). Then the affine function \(f(t,x) = A x + b(t)\) for \(x \in \mathbb {R}^n\), \(t \in I\) is

(i)
OSL with constant \(\mu = \lambda _{\text {max}}\), where \(\lambda _{\text {max}}\) is the maximal eigenvalue of the symmetrized matrix \(A_{\text {sym}} = \frac{1}{2} (A + A^\top )\),

(ii)
SOSL with constant \(\mu = \max \limits _{i=1,\ldots ,n} \bigg ( \max \{0, a_{ii} \} + \sum \limits _{\begin{array}{c} {j=1,\ldots ,n}\\ {j \ne i} \end{array}} |a_{ij}| \bigg )\)
The SOSL constant can be estimated from above by \(\max \{0, \max \limits _{i=1,\ldots ,n} a_{ii} \} + \max \limits _{i=1,\ldots ,n} \sum \limits _{\begin{array}{c} {j=1,\ldots ,n} \\ {j \ne i} \end{array}} |a_{ij}|\).
Proof

(i)
Let \(x, y \in \mathbb {R}^n\), \(t \in I\) and \(v = f(t,x) = A x + b(t)\), \(w = f(t,y) = A y + b(t)\). Then,
$$\begin{aligned} \langle x-y,v-w \rangle&= \langle x-y,A(x-y) \rangle = \frac{1}{2} \langle x-y,A(x-y) \rangle \\&\quad + \frac{1}{2} \langle A^\top (x-y),x-y \rangle \\&= \langle x-y, \frac{1}{2} (A + A^\top )(x-y) \rangle \\&= \langle x-y, A_{\text {sym}}(x-y) \rangle \le \lambda _{\text {max}} \Vert x-y\Vert _2^2, \end{aligned}$$so that f is OSL with the claimed constant by the estimate with the Rayleigh quotient.

(ii)
Let \(i \in \{ 1, \ldots , n \}\) and consider \(v_i - w_i\). By \(v = A x + b(t)\), \(w = A y + b(t)\) we have \(v_i - w_i = a_i^\top (x-y)\) with the \(i\)-th row vector \(a_i^\top \) of A. Hence,
$$\begin{aligned} (x_i-y_i)(v_i-w_i)&= (x_i-y_i) \cdot \langle a_i,x-y \rangle = (x_i-y_i) \sum _{j=1}^n a_{ij} (x_j-y_j) \\&\quad \le a_{ii} (x_i-y_i)^2 + |x_i - y_i| \sum _{\begin{array}{c} {j=1,\ldots ,n}\\ {j \ne i} \end{array}} | a_{ij} | \cdot | x_j-y_j | \\&\quad \le \big ( \underbrace{ \max \{0, a_{ii} \} + \sum _{\begin{array}{c} {j=1,\ldots ,n} \\ {j \ne i} \end{array}} | a_{ij}|}_{ =: \mu _i} \big ) |x_i-y_i| \cdot \Vert x-y\Vert _\infty . \end{aligned}$$Obviously,
$$\begin{aligned} \mu _i&\le \max _{k=1,\ldots ,n} \mu _k = \max _{k=1,\ldots ,n} \bigg ( \max \{0, a_{kk} \} + \sum _{\begin{array}{c} {j=1,\ldots ,n}\\ {j \ne k} \end{array}} |a_{kj}| \bigg ) \\&\quad \le \max \{0, \max _{k=1,\ldots ,n} a_{kk} \} + \max _{k=1,\ldots ,n} \sum _{\begin{array}{c} {j=1,\ldots ,n} \\ {j \ne k} \end{array}} |a_{kj}|. \end{aligned}$$
\(\square \)
In the previous lemma we could have estimated the SOSL constant by the bigger row-sum norm \(\Vert A\Vert _\infty = \max \limits _{i=1,\ldots ,n} \sum \limits _{j=1,\ldots ,n} |a_{ij}|\), but then the SOSL constant could no longer be zero, e.g., for diagonal matrices with negative diagonal elements. Both constants can be nonpositive, as is the case for \(f(x) = Ax\) with the matrix \(A = \begin{pmatrix} -2 &{} 1 \\ -1 &{} -1 \end{pmatrix}\) with eigenvalues \(-2\), \(-1\) of the symmetrized matrix \(A_{\text {sym}}\) (the OSL constant is \(\mu =-1\)), or for \(f(x) = B x\) with the diagonal matrix \(B = {\text {diag}}(\{ -2, -1 \})\) and the SOSL constant \(\mu = 0\). It is easy to see with Lemma 2.1 that in the first case \(f(x) = A x\) is also SOSL but with positive constant \(\mu =1\).
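Both constants of the lemma are easy to compute in code; a minimal sketch (the two sample matrices below are our own illustrative choices) is:

```python
import numpy as np

def osl_constant(A):
    """OSL constant of f(x) = A x + b(t): largest eigenvalue of (A + A^T)/2."""
    return float(np.linalg.eigvalsh((A + A.T) / 2.0).max())

def sosl_constant(A):
    """SOSL constant max_i ( max{0, a_ii} + sum_{j != i} |a_ij| )."""
    n = A.shape[0]
    return max(max(0.0, A[i, i]) + sum(abs(A[i, j]) for j in range(n) if j != i)
               for i in range(n))

A = np.array([[-2.0, 1.0], [-1.0, -1.0]])   # nonsymmetric sample matrix
B = np.diag([-2.0, -1.0])                   # diagonal with negative entries
print(osl_constant(A), sosl_constant(A))    # OSL constant ~ -1, SOSL constant 1.0
print(osl_constant(B), sosl_constant(B))    # OSL constant ~ -1, SOSL constant 0.0
```

Note that for A the OSL constant is negative while the SOSL constant is positive, illustrating that the two bounds of the lemma can differ in sign.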
Remark 2.2
Each real-valued monotone decreasing function with domain in \(\mathbb {R}\) is SOSL (hence OSL) with constant \(\mu =0\), and every dissipative function from \(\mathbb {R}^n\) to \(\mathbb {R}^n\) (see [20, Chap. 3, (1)]) is OSL with the same constant.
The negation of the sign function
for \(x \in \mathbb {R}\) (see Fig. 1, left picture) is discontinuous at \(x=0\) and SOSL with constant \(\mu =0\). The function \(g(x) = -x-{\text {sign}}(x)\) is OSL with constant \(-1\) and SOSL with constant 0.
We now list some classes of set-valued SOSL and OSL maps and show connections to notions previously defined in the literature.
Remark 2.3
Let \(F: \mathbb {R}^n \Rightarrow \mathbb {R}^n\) be a set-valued map. Each single-valued map \(F(x) = \{f(x)\}\) with an OSL/SOSL function f is an OSL/SOSL set-valued map.
Let F be dissipative (see [20, Chap. 3, (1)]), i.e. \(G = -F\) is monotone/accretive (see [21, Sec. 4.3]), so that for all \(x,y \in \mathbb {R}^n\) and all \(v \in G(x)\), \(w \in G(y)\) the inequality \(\langle x-y,v-w \rangle \ge 0\) holds. Then F is OSL with constant 0. An important example of a dissipative set-valued map is \(F(x) = -\partial g(x)\) with the Moreau–Rockafellar subdifferential \(\partial g\) of a convex function \(g: \mathbb {R}^n \rightarrow \mathbb {R}\cup \{\infty \}\), see [21, Chap. 1, Sec. 4, Problems 1–2].
We state some more examples of OSL and SOSL maps and refer to [29, 30] for similar example classes and discussions on earlier OSL/SOSL concepts. The next result in (iv) generalizes [51, Lemma 3.6] to SOSL maps.
Proposition 2.4
Let \(F: \mathbb {R}^n \Rightarrow \mathbb {R}^n\) be a setvalued map and let one of the following assumptions hold:

(i)
F is Lipschitz with constant \(L \ge 0\), i.e. \({\text {d}}_{{\text {H}}}(F(x), F(y)) \le L \Vert x-y\Vert _2,\) and set \(\mu _F = \sqrt{n}\, L\).

(ii)
\(G: \mathbb {R}^n \Rightarrow \mathbb {R}^n\) is OSL/SOSL with constant \(\mu _G \in \mathbb {R}\), \(U, V \subset \mathbb {R}^n\) are nonempty and set \(F(x) = G(x + U) + V\), \(\mu _F = \mu _G\).

(iii)
\(G: \mathbb {R}^n \Rightarrow \mathbb {R}^n\) is OSL/SOSL with constant \(\mu _G \in \mathbb {R}\), \(\lambda \ge 0\) and set \(F = \lambda G\), \(\mu _F = \lambda \mu _G\).

(iv)
\(G, H: \mathbb {R}^n \Rightarrow \mathbb {R}^n\) are OSL/SOSL maps with constants \(\mu _G, \mu _H \in \mathbb {R}\) and set \(F = G + H\), \(\mu _F = \mu _G + \mu _H\).

(v)
\(F_i: \mathbb {R}\Rightarrow \mathbb {R}\) are OSL maps with constants \(\mu _i \in \mathbb {R}\), \(i=1,\ldots ,n\) and set \(F(x) = \sum _{i=1}^n F_i(x_i)e^i\) for \(x = (x_1, \ldots ,x_n) \in \mathbb {R}^n\) with the standard unit vectors \(e^i \in \mathbb {R}^n\), \(i=1,\ldots ,n\), the notation
$$\begin{aligned} F_i(x_i)e^i&= \{ v \in \mathbb {R}^n \,:\, v_i \in F_i(x_i) \text { and } v_j = 0 \text { for }j\in \{1,\ldots ,n\}, j \ne i \} \end{aligned}$$(16)and \(\mu _F = \max \{ 0, \max _{i=1,\ldots ,n} \mu _i \}\).
If G, H are SOSL in (ii)–(iv), then F is also SOSL in (i)–(v) with the stated constant \(\mu _F\).
If G, H are OSL in (ii)–(iv), then F is OSL in (i)–(v) with constant \(\mu _F\) (with \(\mu _F = L\) in (i)).
Proof
(i) is simple for the OSL or SOSL case and follows for \(x,y \in \mathbb {R}^n\), \(v \in F(x)\), \(i=1,\ldots ,n\) from
with \(w \in F(y)\) such that \(v \in w + L \Vert x-y\Vert _2 \,B_1(0)\); see (without proofs) [30, Remark 2.1], [29, Remark 2.2], [26, Remark 1].
For (ii) see [30, Lemma 3.1] for OSL maps; for SOSL maps let \(i=1,\ldots ,n\), \(x,y \in \mathbb {R}^n\), \(z \in F(x)\) with \(z = w + v\), where \(w \in G(x+u)\) for some \(u \in U\) and \(v \in V\). Choose \({\widetilde{w}} \in G(y + u)\) such that the SOSL condition holds for G and set \({\widetilde{z}} = {\widetilde{w}} + v \in G(y + U) + V\). Then,
The proofs of (iii)–(iv) are standard and left to the reader.
(v) Let \(x, y \in \mathbb {R}^n\) and \(v \in F(x)\) so that \(v_i \in F_i(x_i)\) for \(i=1,\ldots ,n\). By the OSL condition there exists \(w_i \in F_i(y_i) \subset \mathbb {R}\) with \((x_i - y_i)(v_i - w_i) \le \mu _i |x_i-y_i|^2 \,\). We set \(w = (w_1,\ldots ,w_n)\) so that \(w \in F(y)\) and
which proves the SOSL condition with constant \(\mu _F\). \(\square \)
It is remarkable that many well-known functions (or their negations) in machine learning, electrical engineering, control theory or physics are SOSL (see e.g., [14, Sec. 2], [11, Sec. 2.4], [47, 57]); some of these functions are listed in the following example.
Example 2.5
All functions \(f_i: \mathbb {R}\rightarrow \mathbb {R}\), \(i=1,2\), below have the real numbers as domain and range and belong to the class of SOSL functions.

(i)
The negation of the sigmoidal function
$$\begin{aligned} f_1(x)&= -\sigma (x,\alpha ) \quad \text {with}\quad \sigma (x, \alpha ) = \frac{2}{1 + \exp (-\frac{x}{\alpha })} - 1 \end{aligned}$$(17)for \(x \in \mathbb {R}\) and some fixed \(\alpha > 0\) is SOSL with constant \(\mu =0\), since \(f_1(\cdot )\) is monotone decreasing, and is \(\text{ C}^\infty (\mathbb {R})\); in particular it is Lipschitz with constant \(L_1 = \frac{1}{2 \alpha }\,\).

(ii)
The negation of the saturation function
$$\begin{aligned} f_2(x)&= -{\text {sat}}(\beta x) \quad \text {with}\quad {\text {sat}}(x) = {\left\{ \begin{array}{ll} {\text {sign}}(x) &{} \quad \text {if }|x| > 1, \\ x &{} \quad \text {if }|x| \le 1 \end{array}\right. } \end{aligned}$$(18)for \(x \in \mathbb {R}\) and some fixed \(\beta > 0\) is Lipschitz with constant \(L_2 = \beta \). \(f_2(\cdot )\) is SOSL with constant \(\mu =0\), since \(f_2(\cdot )\) is also monotone decreasing.
The sigmoidal or the saturation function is used in practical realizations (approximations) of the discontinuous sign function from Remark 2.2 (see e.g., [47, Sec. 3.1]) or in the theoretical analysis of discontinuous differential equations. This approximation is usually achieved by choosing small values \(\alpha > 0\) in the sigmoidal function \(f_1(x)\) in (17) or large values \(\beta > 0\) in the saturation function \(f_2(x)\) in (18).
In (i), \(L_1 = \max _{x \in \mathbb {R}} |{\dot{f}}_1(x)| = |{\dot{f}}_1(0)| = \frac{1}{2 \alpha }\) tends to \(\infty \) for \(\alpha \rightarrow 0+\), and in (ii) the Lipschitz constant \(L_2 = \beta \) explodes as the non-saturation zone \([-\frac{1}{\beta }, \frac{1}{\beta }]\) is narrowed for \(\beta \rightarrow \infty \). This behavior can be observed in Fig. 3.
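The blow-up of the Lipschitz constant can be verified numerically. The sketch below assumes the closed form \(f_1(x) = -\sigma(x,\alpha)\) with \(\sigma(x,\alpha) = \frac{2}{1+\exp(-x/\alpha)} - 1\) (our reading of (17)) and approximates the steepest slope by finite differences on a fine grid:

```python
import numpy as np

alpha = 0.1
sigma = lambda x: 2.0 / (1.0 + np.exp(-x / alpha)) - 1.0  # assumed sigmoid form
f1 = lambda x: -sigma(x)                                   # its negation

x = np.linspace(-2.0, 2.0, 400001)
slopes = np.abs(np.diff(f1(x)) / np.diff(x))   # finite-difference slopes
L1 = 1.0 / (2.0 * alpha)                       # predicted Lipschitz constant = 5
print(abs(slopes.max() - L1) < 1e-3)           # True: steepest slope ~ 1/(2*alpha)
```

Halving \(\alpha\) doubles the observed maximal slope, matching \(L_1 = \frac{1}{2\alpha}\).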
Further examples of SOSL (monotone decreasing) functions used in machine learning are the negation of the Heaviside and the ReLU/ramp function (see e.g., [14, Chap. 2]).
Next we present examples of OSL and SOSL set-valued maps which are not single-valued.
Example 2.6
We study examples of SOSL set-valued maps \(F_i: \mathbb {R}\Rightarrow \mathbb {R}\), \(i=1,2\), with convex, compact, nonempty images which are set perturbations of the OSL map \(G(x) = {\text {Sign}}(x)\) in the sense of Proposition 2.4(ii). Compare both perturbations with the original set-valued map G in Fig. 1 (right plot).

(i)
\(F_1(x) = {\text {Sign}}(x) + \frac{1}{4} [-1,1]\) (outer perturbation of the OSL set-valued map G)
in Fig. 4 (left) is OSL (and SOSL) with constant \(\mu =0\) due to Proposition 2.4(ii) (applied with \(U = \{0\}\) and \(V = \frac{1}{4} [-1,1]\)). \(F_1\) is discontinuous (only usc) and not dissipative.

(ii)
\(F_2(x) = {\text {Sign}}(x + \frac{1}{4}[-1,1])\) (inner perturbation of the OSL set-valued map G)
in Fig. 4 (right) is OSL (and SOSL) with constant \(\mu =0\) due to Proposition 2.4(ii) (applied with \(U = \frac{1}{4} [-1,1]\) and \(V = \{0\}\)). \(F_2\) has the same properties as \(F_1\) in (i).
Example 2.7
An example of a discontinuous SOSL set-valued map defined on \(\mathbb {R}\) with nondegenerate intervals as values, with nonpositive SOSL constant \(\mu =0\), which cannot be represented as the sum of a Lipschitz multifunction and a dissipative (SOSL) single-valued function is \(F(x)={\text {co}}\{-{\text {sign}}(x), -({\text {sign}}(x)+x^{1/3}) \}\).
We end this section with an example which is OSL but not SOSL.
Example 2.8
([8, Example 5.6]) Consider the setvalued map \(F(x) = -\partial g(x)\) for \(x \in \mathbb {R}^n\), where \(\partial g\) is the convex subdifferential of Rockafellar/Moreau of the realvalued function \(g(x) = \Vert x \Vert _2\).
Then F is OSL and dissipative by Remark 2.3 (i.e. \(-F\) is monotone) but not SOSL for \(n \ge 2\).
Another example is the Hölder continuous function of degree \(\frac{1}{3}\) from [30, Example 5.4], which is OSL with constant \(\mu =\frac{1}{2}\) but not SOSL. More variants of Lipschitztype or OSLtype setvalued maps and corresponding examples can be found in [9] and [7, 8].
3 Filippovtype theorems for SOSL maps
3.1 Existence and boundedness of solutions
For the proof of Filippov theorems under weaker conditions than Lipschitz continuity we need an existence result for differential inclusions under weak assumptions.
Theorem 3.1
([54, Corollary 6 of Theorem 1]) Consider \(F: I\times \mathbb {R}^n \Rightarrow \mathbb {R}^n\), \(x^0 \in \mathbb {R}^n\) such that

(i)
F(t, x) is an orientor field, i.e. F(t, x) is closed and nonempty,

(ii)
\(F(\cdot , x)\) is measurable in \(t \in I\) for all \(x \in \mathbb {R}^n\),

(iii)
for almost all \(t \in I\)
– either \(F(t, \cdot )\) is upper semicontinuous (= usc) at \(x \in \mathbb {R}^n\) and F(t, x) is convex
– or \(F(t, \cdot )\) is lower semicontinuous (= lsc) in some neighborhood of \(x \in \mathbb {R}^n\)

(iv)
\(F(\cdot ,\cdot )\) is (weakly) locally integrably bounded, i.e. for every bounded set \({\widetilde{S}} \subset I\times \mathbb {R}^n\) the distance function \({\text {dist}}(0, F(t,x))\) is bounded by an integrable function \(k: I\rightarrow \mathbb {R}\) for all \((t,x) \in {\widetilde{S}}\).
Then, there exists a solution of the differential inclusion (1).
Remark 3.2
The assumptions of the theorem provide two options to guarantee the existence: convex images with upper semicontinuity only or lower semicontinuity with nonconvex closed images (similar to the discussion in [2, Sec. 2.1, p. 94]). From now on we mainly follow the first option for the rest of the paper, since the setvalued map \({\text {Sign}}(x)\) which appears in most of our applications is only usc.
A similar local existence result can be found in [63, Theorem 8.13], where (ii) is replaced by the weaker existence of a (strongly) measurable selector of \(F(\cdot , x)\). The global existence follows from Theorem 8.15 together with Example 8.17.
We now summarize our basic assumptions on the righthand side \(F: I\times \mathbb {R}^n \Rightarrow \mathbb {R}^n\) of the differential inclusion. Here, the boundedness condition in (A1) is slightly stronger than (iv) in the previous existence result (which guarantees the boundedness of at least one solution), since we consider a subinclusion of F(t, x) and also need the boundedness of all solutions.
 (A1):

\(F(t,x) \subset \mathbb {R}^n\) is compact and nonempty and is integrably bounded on bounded sets, i.e. for every constant C and for every compact \(S \subset \mathbb {R}^n\) with \(\Vert S \Vert _2 \le C\) there is an \(L_1\)-function \(K_F(\cdot ;C)\) such that
$$\begin{aligned} \Vert F(t,S) \Vert _2&\le K_F(t;C). \end{aligned}$$(19)  (A2):

\(F(\cdot , x)\) is Lebesgue measurable in \(t \in I\) for all \(x \in \mathbb {R}^n\).
 (A3):

\(F(t, \cdot )\) is upper semicontinuous at \(x \in \mathbb {R}^n\) for almost all \(t \in I\).
 (A4):

F(t, x) is convex.
 (A5):

F is SOSL with a constant \(\mu \in \mathbb {R}\).
In the case that (A2)–(A3) hold, F is called upper Carathéodory in [1, Sec. 4].
We first state a version of Gronwall’s lemma in differential form which does not require the usual nonnegativity of functions defining the righthand side (20) of the inequality. It is inspired by the proofs of [65, Lemma 2.4.4] and [21, Sec. 8.5].
Lemma 3.3
Let \(I = [t_0,T]\), let \(k(\cdot )\) and \(p(\cdot )\) be in \(L_1(I)\), let \(\psi : I \rightarrow \mathbb {R}\) be absolutely continuous and
Then,
where \(K(t) = \int _{t_0}^t k(s) \, ds\) and \(\varphi (\cdot )\) solves the initial value problem
Proof
Define the AC function \(\eta (t) = e^{-K(t)} \psi (t)\) for \(t \in I\). Then, via (20)
\(\eta (t) = \eta (t_0) + \int _{t_0}^t {\dot{\eta }}(s) \, ds\) by absolute continuity together with (22) and \(\psi (t) = e^{K(t)} \eta (t)\) yields
\(\square \)
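Assuming that (20) is the differential inequality \(\dot{\psi }(t) \le k(t)\psi (t) + p(t)\) for a.e. \(t \in I\) (the standard differential form of Gronwall's lemma), the conclusion (21) takes the explicit variation-of-constants form:

```latex
% sketch under the assumption that (20) reads
% \dot{\psi}(t) \le k(t)\psi(t) + p(t) for a.e. t in I
\psi(t) \;\le\; \varphi(t)
  \;=\; e^{K(t)}\,\psi(t_0)
  \;+\; \int_{t_0}^{t} e^{K(t)-K(s)}\, p(s)\, ds,
  \qquad t \in I,
```

since the right-hand side is exactly the solution of the linear initial value problem \(\dot{\varphi } = k \varphi + p\), \(\varphi (t_0) = \psi (t_0)\), obtained by variation of constants; note that no sign condition on \(k(\cdot )\) or \(p(\cdot )\) is needed here.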
We prove a technical lemma for the boundedness of solutions with inner vector and outer setvalued perturbations similar to [29, Lemma 3.1] and [30, Lemma 3.2]. Note that the integrable boundedness condition in (A1) (see e.g., [31]) is weaker than simple boundedness on bounded sets [30] and than the linear growth condition \(\Vert F(t, x) \Vert _2 \le c(t)(1 + \Vert x \Vert _2)\) with \(c(\cdot ) \in L_1(I)\) [21, Chap. 2, § 6], but stronger than condition (iv) in Theorem 3.1. The assumption (A1) allows the estimates for all perturbed solutions in the lemma below.
Lemma 3.4
Let \(F: I \times \mathbb {R}^n \Rightarrow \mathbb {R}^n\) fulfill assumption (A1) and be OSL with constant \(\mu \in \mathbb {R}\).
Then for all \(K_\delta , K_\varepsilon , K_0 \ge 0\) there exist constants \(C_B, C_F \ge 0\) such that for all measurable vector perturbations \(\overline{\delta }(\cdot ) \in L_\infty (I)\), \(\overline{\varepsilon }(\cdot ) \in L_1(I)\) and all initial values \({y}^{0} \in \mathbb {R}^n\) with
the solutions \(y(\cdot )\) of the perturbed inclusion (5) satisfy
where
Proof
F is OSL so that for all \(x,{\widetilde{x}} \in \mathbb {R}^n\) and for a.e. \(t \in I\) (see (6) and [29])
For a.e. \(t \in I\), \({\dot{y}}(t) \in F(t, y(t) + \overline{\delta }(t)) + \overline{\varepsilon }(t)\). Then using the above inequality for support functions,
holds for a.e. \(t \in I\) with a suitable \(L_1\)-function \(K_F(\cdot ;K_\delta )\), since the set \(K_\delta B_1(0)\) is bounded. Hence,
Introducing the function \(p(t) = \Vert y(t) \Vert _2\), it is straightforward to show by definition that \(p(\cdot )\) is AC and that \(p^2(\cdot )\) is differentiable at each point of differentiability of \(y(\cdot )\), i.e. almost everywhere in I. Since \(p(t)^2\) is a composition of an (outer) locally Lipschitz function \(g(s) = s^2\) and an AC function p(t), the (extended) chain rule holds for a.e. \(t \in I\) by [59, Theorem 2] yielding
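The identity this chain rule yields is the standard one (stated here for completeness, reconstructed from the definitions of \(p\) and \(g\)):

```latex
\frac{d}{dt}\, p(t)^2
  \;=\; \frac{d}{dt}\,\Vert y(t)\Vert_2^2
  \;=\; 2\,\langle y(t), \dot{y}(t)\rangle
  \quad \text{for a.e. } t \in I.
```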
Next we want to prove (28) for almost every \(t \in I\).
Case 1: Consider the points t where \(p(t) \ne 0\) and \({\dot{p}}(t)\) exists.
In the (measurable) set of points \(t \in I\) where \(p(t) \ne 0\) we can cancel p(t) on both sides of the estimate (27) and get (28).
Case 2: If t lies in the (measurable) set \({\mathcal {N}} = \{ \tau \in I: p(\tau ) = 0 \}\), we can consider only its subset of the points of density (which is of full measure by the Lebesgue density theorem, see [13, Chap. II, Theorem 5.1]), at which also the derivative \({\dot{p}}(t)\) exists, since \(p(\cdot )\) is absolutely continuous. Consider an arbitrary sequence \(\{t_k\}_k\) in \({\mathcal {N}}\) converging to such a density point t and calculate
since \(p(t) = 0\). Then (28) is trivially fulfilled.
In both cases (28) holds for a.e. \(t \in I\) and it follows from the Gronwall inequality (Lemma 3.3) that
which proves the first inequality in (24). Furthermore, we have for a.e. \(t \in I\):
which proves the second inequality in (24). \(\square \)
To prove a Filippovtype theorem for the SOSL case, we state an equivalent condition to the SOSL property which refines the working condition in [53, Sec. 2, (31)] and is applied in the proofs in this section.
Lemma 3.5
Let \(F: I \times \mathbb {R}^n \Rightarrow \mathbb {R}^n\) have nonempty images. The following condition is equivalent to the SOSL condition for F:
For a.e. \(t \in I\) and every \(x,y,{\tilde{y}}\in \mathbb {R}^n\), \(w\in F(t,y)\) there is \(v\in F(t,x)\) such that
for every index \(i \in \{1,\ldots ,n\}\) satisfying
Proof
For given \(t,x,y,{\tilde{y}}\) we denote by J the set of indices satisfying (31).
First, we assume that (30) holds for any given \(t,x,y,{\tilde{y}}\) and indices \(i \in J\). Choosing \({\tilde{y}} = y\), we get from (30) the SOSL condition in the form (9) for a.e. \(t \in I\).
Conversely, let F be SOSL. Then there exists a subset \({\widetilde{I}} \subset I\) of full measure such that the inequalities in Definition 1.3 hold for given \(x,y,{\tilde{y}}\in \mathbb {R}^n\). Let \(t \in {\widetilde{I}}\) and \(i \in J\). Without loss of generality suppose \(x_i>y_i\). Then it follows from (31) that \(x_i > {\tilde{y}}_i\). We obtain from the SOSL condition that for the given \(x,y\in \mathbb {R}^n\), \(w\in F(t,y)\) there is \(v\in F(t,x)\) such that for \(i\in J\)
We multiply this inequality by the positive number \(x_i-{\tilde{y}}_i = |x_i-{\tilde{y}}_i|\) and obtain
Then, for \(\mu \ge 0\) we apply the triangle inequality \(\Vert x-y \Vert _\infty \le \Vert x-{\tilde{y}} \Vert _\infty + \Vert y-{\tilde{y}} \Vert _\infty \) and get
which obviously implies the claim for \(t \in {\widetilde{I}}\).
In the case \(\mu <0\) we use the inverse inequality \(\Vert x-y \Vert _\infty \ge \Vert x-{\tilde{y}} \Vert _\infty - \Vert y-{\tilde{y}} \Vert _\infty \) and get from (32)
which also implies (30). \(\square \)
Remark 3.6
The working condition for SOSL maps in Lemma 3.5 plays a key role in the definition of an auxiliary differential subinclusion in the proof of the Filippov theorem for SOSL maps. The corresponding one for OSL maps
which is equivalent to the OSL condition is used in [30] for the same purpose in the proof of the Filippov theorem in the OSL case.
We recall the working condition for SOSL maps in [53, Sec. 2, (31)]:
For (a.e.) \(t \in I\) and all \(x,y,{\tilde{y}}\in \mathbb {R}^n\), \(v\in F(t,x)\) there is \(w\in F(t,y)\) such that
for indices \(i \in \{1,\ldots ,n\}\) satisfying
Both working conditions (30)–(31) and (33)–(34) are equivalent to the SOSL condition for \(\mu \ge 0\), \(\kappa = 1\), but only (30)–(31) is equivalent to the SOSL property if \(\mu < 0\).
3.2 Filippov approximation theorem for the SOSL case
We now state the main result of this paper, the Filippov theorem for inclusions with SOSL righthand sides.
Theorem 3.7
(Filippovtype theorem for the SOSL case with inner perturbations)
Let \(F: I \times \mathbb {R}^n \Rightarrow \mathbb {R}^n\) satisfy (A1)–(A5), consider the inner vector perturbation \(\overline{\delta }(\cdot ) \in L_\infty (I)\) and let \({\widetilde{y}}(\cdot )\) be a solution of the inclusion
Then there exists a solution \(x(\cdot )\) of the inclusion (1) such that for all \(t \in [t_0,T]\)
Proof
The proof is done in several steps. Denote by \(\Omega \) the measurable set of points t in I at which all \({\widetilde{y}}_i(\cdot )\), \(i=1,\ldots ,n\), are differentiable and at which (9), (30)–(31), (35) and the upper semicontinuity of \(F(t,\cdot )\) hold. Since \({\widetilde{y}}(t)\) is absolutely continuous, \(\Omega \) has full measure in I.
Step 1: definition of an auxiliary differential inclusion involving the criterion of Lemma 3.5
For the given functions \({\widetilde{y}}(\cdot ),\overline{\delta }(\cdot )\) we set \(y(t) = {\widetilde{y}}(t) + \overline{\delta }(t)\) and for any \(x\in \mathbb {R}^n\), \(t\in I=[t_0,T]\) we denote by J(t, x) the set of indices \(i\in \{1,2,\ldots ,n \}\) satisfying the condition
Clearly, for the given \(t, x, y(t), {\widetilde{y}}(t)\), we have \(J(t,x) \subset J\), where J is the set of indices for which (31) holds (see the proof of Lemma 3.5). For \((t,x) \in \Omega \times \mathbb {R}^n\) let us introduce the setvalued mapping
Note that G(t, x) is welldefined by (38) for all \(x \in \mathbb {R}^n\) and \(t \in \Omega \). For \(t\in I {\setminus } \Omega \), \(x \in \mathbb {R}^n\) we define \(G(t,x)=F(t,x)\) and consider the auxiliary differential inclusion
Step 2: verification of the conditions in Theorem 3.1 ensuring the existence of a solution of (39)
(i), (iii) The values of G(t, x) are convex, compact, nonempty.
For \(t \in I {\setminus } \Omega \), \(x \in \mathbb {R}^n\), all three conditions in (i) hold by the assumptions on F, since \(G(t,x) = F(t,x)\).
For \(t \in \Omega \), the above mentioned inclusion \(J(t,x) \subset J\) and Lemma 3.5 imply that \(G(t,x)\ne \emptyset \) for all \(x\in \mathbb {R}^n\), since \(\dot{{\widetilde{y}}}(t) \in F(t, {\widetilde{y}}(t) + \overline{\delta }(t)) = F(t, y(t))\) for \(t \in \Omega \). The convexity and closedness follow directly from (38). For the upper semicontinuity we now rewrite the definition of G(t, x) for \(t \in I\), \(x \in \mathbb {R}^n\). We introduce for \(i \in \{ 1, \ldots ,n \}:\)

The setvalued map \(H_i: \mathbb {R}^n \Rightarrow I\)
$$\begin{aligned} H_i(x)&= \{ t \in I \,:\, \xi _i(t,x) > \Vert \overline{\delta }(\cdot ) \Vert _{L_\infty } \} \quad \text {with}\quad \xi _i(t,x) = |y_i(t) - x_i| \end{aligned}$$collects the times t for which (37) holds.

The setvalued maps \({\widetilde{D}}_i, D_i, D: I \times \mathbb {R}^n \Rightarrow \mathbb {R}^n\), the functions \(\eta _i, \beta _i: I \times \mathbb {R}^n \rightarrow \mathbb {R}\) by
$$\begin{aligned} \eta _i(t,x)&= {\widetilde{y}}_i(t) - x_i, \nonumber \\ \beta _i(t,x)&= {\left\{ \begin{array}{ll} ({\widetilde{y}}_i(t) - x_i) \dot{{\widetilde{y}}}_{i}(t) - \mu |x_i - {\widetilde{y}}_i(t)| \cdot \Vert x - {\widetilde{y}}(t) \Vert _\infty - |\mu | \cdot |x_i - {\widetilde{y}}_i(t)| \cdot \Vert \overline{\delta }(t) \Vert _\infty \\ \quad \text{ for } \; t \in \Omega , x \in \mathbb {R}^n, &{}{} \\ -|{\widetilde{y}}_i(t) - x_i| \cdot K_F(t,\Vert x \Vert _2) \quad \text{ for } \; t \in I {\setminus } \Omega , x \in \mathbb {R}^n, &{}{} \end{array}\right. } \end{aligned}$$(40)$$\begin{aligned} {\widetilde{D}}_i(t,x)&= \{ v \in \mathbb {R}^n \,:\, \eta _i(t,x) v_i \ge \beta _i(t,x) \} \cap F(t,x). \end{aligned}$$(41)Note that for \(t \in I {\setminus } \Omega \) the inequality in (41) is trivially satisfied for every \(v\in F(t,x)\) by (19).
$$\begin{aligned} D_i(t,x)&= \chi _{H_i(x)}(t) {\widetilde{D}}_i(t,x) + (1  \chi _{H_i(x)}(t)) F(t,x), \end{aligned}$$(42)$$\begin{aligned} G(t,x)&= \bigcap _{i=1}^n D_i(t,x). \end{aligned}$$(43)
It is easy to verify by (43) that G has closed values, since the values of F and \(D_i\) are closed for \(t \in I\), \(x \in \mathbb {R}^n\).
(ii) \(G(\cdot ,x)\) is measurable for any \(x \in \mathbb {R}^n\)
Let us first mention that all functions \(\eta _i(t, x) v_i\) and \(\beta _i(t, x)\) for a fixed \(x \in \mathbb {R}^n\) are Carathéodory in \((t,v) \in I \times \mathbb {R}^n\), i.e. measurable in t for fixed v and continuous with respect to v for fixed t.
For a fixed \(x \in \mathbb {R}^n\) the set \(H_i(x)\) is measurable as the preimage of the open interval \(U = (\Vert \overline{\delta }(\cdot ) \Vert _{L_\infty }, \infty )\) under the measurable function \(\varphi (\cdot ) = |y_i(\cdot ) - x_i|\). For fixed \(x \in \mathbb {R}^n\) the first operand in the intersection defining the setvalued map \({\widetilde{D}}_i(t,x)\) is measurable in t by [16, Théorème 3.5]. The measurability of \({\widetilde{D}}_i(\cdot ,x)\) follows from the intersection with the measurable setvalued map \(F(\cdot ,x)\); that of \(D_i(\cdot ,x)\) follows from (42), since \(H_i(x)\) is a measurable set and therefore the characteristic function \(\chi _{H_i(x)}(\cdot )\) is measurable in t by [18, Example 2.1.2], as is the product \(\chi _{H_i(x)}(\cdot ) {\widetilde{D}}_i(\cdot ,x)\) by [16, Corollaire 1]. As a finite intersection in (43), the measurability of \(G(\cdot ,x)\) on I is guaranteed by [3, Theorem 8.2.4].
(iii) \(G(t,\cdot )\) is usc for \(t \in \Omega \)
For this we show that for a fixed \(t \in \Omega \) the graph of \(D_i(t,\cdot )\) is closed for every \(i=1,\ldots ,n\).
For sequences with \(\lim _{k\rightarrow \infty }x^k = {x}^{*}\) and \(\lim _{k\rightarrow \infty }{v}^{k} = {v}^{*}\) with \({v}^{k}\in D_i(t,{x}^{k})\) we show that \({v}^{*} \in D_i(t,{x}^{*})\).
case a: \(t \in H_i({x}^{*})\)
The continuity of \(\xi _i(t,\cdot )\) yields that \(t \in H_i({x}^{k})\) and \({v}^{k} \in {\widetilde{D}}_i(t,{x}^{k})\) from (42) for large k.
The left and righthand sides \(\eta _i(t,x) v_i\) and \(\beta _i(t,x)\) in the inequality (41) are continuous in (x, v), so that the convergence of both sequences \(\{x^{k}\}_{k}\), \(\{v^{k}\}_{k}\) yield the inequality (41) in the first set of the intersection also for \(({x}^{*},{v}^{*})\). Since the graph of \(F(t,\cdot )\) is closed, \({v}^{*} \in {\widetilde{D}}_i(t, {x}^{*})\) is valid and \({v}^{*} \in D_i(t, {x}^{*})\) from \(t \in H_i({x}^{*})\).
case b: \(t \notin H_i({x}^{*})\)
By definition in (42) \(D_i(t, {x}^{*}) = F(t,{x}^{*})\) and \({v}^{*} \in D_i(t, {x}^{*})\) holds trivially.
Therefore in all cases the graphs of \(D_i(t,\cdot )\) and \(G(t,\cdot )\) are closed and \(G(t,\cdot )\) is usc due to [2, Sec. 1.1, Theorem 1] (see also [3, Propositions 1.4.8–1.4.9]), since F(t, x) is compact and \(F(t,\cdot )\) is usc in x.
(iv) G is locally integrably bounded as a subset of F, which is integrably bounded on bounded sets by (A1).
Hence, we have checked all assumptions of the Existence Theorem 3.1.
Step 3: solution of the auxiliary differential inclusion
By Theorem 3.1, there exists a solution x(t) of the auxiliary inclusion (39). We set \(z(t)=x(t)  {\widetilde{y}}(t)\) for the next two steps. Clearly, \(z(\cdot )\) is AC and we can assume without loss of generality (possibly after removing a set of measure zero from \(\Omega \)) that \(x(\cdot )\) and \(z(\cdot )\) are differentiable for \(t \in \Omega \).
In the next steps we prove the estimate (36).
Step 4: local SOSL estimate for \(z_i(\cdot )\) on open subsets of \(\Omega \)
For \(i=1,\ldots ,n\) we define the sets
By the continuity of \(z(\cdot )\) and the measurability of \(|y_i(\cdot ) - x_i(\cdot )|\), \(\theta _{i}\) and \(T_{\max }^{i}\) are measurable sets so that \(\Theta _{i}\) is measurable and open. Define the open set \(\Theta = \bigcup _{i=1}^n \Theta _{i}\). Then
is a closed set. Then clearly \(I = \Theta \cup {\text {int}}(I {\setminus } \Theta ) \cup {\text {bd}}(I {\setminus } \Theta )\). It is wellknown that every open set \(V \subset \mathbb {R}\) is a countable union of disjoint open intervals (see e.g., [60, Theorem 1.3] or [41, Proposition 0.21]). Every such disjoint open interval is the maximal interval (with respect to set inclusion) containing a given point of V. We will call these disjoint open intervals (maximal) components of V.
Step 4a: We now show that for any \(i \in \{1,\ldots ,n\}\) and any (maximal) component of \(\Theta _{i}\), \(\Delta = (t^{\prime }, t^{\prime \prime })\) and every \(t \in {\overline{\Delta }} = [t^{\prime }, t^{\prime \prime }]\) the following estimate holds:
Note that if (45) holds on the open interval \(\Delta \), then it is also true on its closure by the continuity of \(z(\cdot )\) and of the function in the righthand side of (45). For (45) we show the following estimate for a.e. \(t \in \Theta _{i}\):
We use the definition of G(t, x) (see (38)) for \(t \in \Theta _{i} \cap \Omega \), since \({\dot{x}}(t) \in F(t, x(t))\) and (31) holds for \(t \in \Delta \subset \Theta _{i}\). Hence, for \(t \in \Theta _{i} \cap \Omega \)
since \(y(t) = {\widetilde{y}}(t) + \overline{\delta }(t)\).
For the absolutely continuous function \(p(\tau ) = |z_i(\tau )|\) and \(\tau \in I\) we can argue as in the proof of Lemma 3.4 to get that \(p(\cdot )^2\) and \(z_i(\cdot )^2\) are differentiable at the points where \(z_i(\cdot )\) is differentiable, that is w.l.o.g. in \(\Omega \) (possibly after removing a set of measure zero from \(\Omega \)). Furthermore, the (extended) chain rule holds for \(p(\tau )^2 = z_i(\tau )^2\) and a.e. \(\tau \in I\) (w.l.o.g. we may assume that \(\tau \in \Omega \)), yielding together with (47)
We can repeat the arguments of cases 1 and 2 in the proof of Lemma 3.4 to show that (46) holds for \(t\in \Theta _{i} \cap \Omega \). We can apply the Gronwall inequality (Lemma 3.3) together with \(p(t) = |z_i(t)| = \Vert z(t) \Vert _\infty \) for \(t \in \Delta \subset \Theta _{i} \subset T_{\max }^{i}\) and it follows from (46) that (45) holds.
Step 4b: We show that the inequality (45) proved in step 4a for a (maximal) component of \(\Theta _{i}\) also holds for \(t \in \overline{\Delta } = [t^{\prime }, t^{\prime \prime }]\) for any (maximal, possibly larger) component \(\Delta = (t^{\prime }, t^{\prime \prime })\) of \(\Theta = \bigcup _{i=1}^n \Theta _{i}\).
Indeed, take an arbitrary (maximal) component \(\Delta _i = (t_i^{\prime }, t_i^{\prime \prime })\) of \(\Theta _{i}\). If it does not intersect any (maximal) component \(\Delta _j = (t_j^{\prime }, t_j^{\prime \prime })\) of \(\Theta _{j}\) for \(j \ne i\), then \(\Delta _i\) is also a (maximal) component of \(\Theta \) and we can apply the result of step 4a.
If \(\Delta _i \cap \Delta _j \ne \emptyset \) for some \(j \ne i\), we now show that (45) holds in the closure of the interval \(\Delta _i \cup \Delta _j = (t^{\prime }, t^{\prime \prime })\).
There are two possibilities:

a)
the inclusions \(\Delta _i \subset \Delta _j\) or \(\Delta _j \subset \Delta _i\) hold
In this case we simply apply step 4a on the larger interval.

b)
\(\Delta _i\) and \(\Delta _j\) overlap partially, i.e. either \(t_j^\prime \le t_i^\prime < t_j^{\prime \prime } \le t_i^{\prime \prime }\) or \(t_i^\prime \le t_j^\prime < t_i^{\prime \prime } \le t_j^{\prime \prime }\)
Assume for instance the first subcase (the second one is similar to prove). Writing (45) for the interval \([t_j^{\prime },t_i^{\prime }]\), we get
$$\begin{aligned} \Vert z(t_i^\prime ) \Vert _\infty&\le e^{\mu (t_i^\prime - t_j^\prime )} \Vert z(t_j^\prime ) \Vert _\infty + |\mu | \int _{t_j^\prime }^{t_i^\prime } e^{\mu (t_i^\prime - s)} \Vert \overline{\delta }(s) \Vert _2 \, ds. \end{aligned}$$(49)
Let \(t \in [t_j^\prime , t_i^{\prime \prime }]\). If \(t \in [t_j^\prime , t_j^{\prime \prime }]\), then (45) holds by step 4a for this interval. In the other case \(t \in [t_i^\prime , t_i^{\prime \prime }]\). Then we apply (45) in \((t_i^\prime , t_i^{\prime \prime })\), and for \(\Vert z(t_i^\prime ) \Vert _\infty \) we use (49) and get
where we have used (49) in the second estimate. The estimate above implies that (45) holds in the closure of the union \((t^{\prime }, t^{\prime \prime })\) of any two intersecting (maximal) components \(\Delta _i, \Delta _j\) of \(\Theta _{i}\) and \(\Theta _{j}\), respectively.
Since every (maximal) component of \(\Theta \) is a union of countably many intersecting components of \(\Theta _{i}\), \(i=1,\ldots ,n\), using the above argument and induction, we obtain that (45) holds in the closure of any (maximal) component of \(\Theta \).
In the next step we derive an error estimate in \(I {\setminus } \Theta \) representing an error reset in the estimate, since errors at previous times are not accumulated in this case.
Step 4c (SOSL error reset): We now prove that for all \(t \in {\text {int}}(I) {\setminus } \Theta \) we have
Fix \(t \in {\text {int}}(I) {\setminus } \Theta \), and define \(J_{\max }(t) = \{ i \in \{1,\ldots ,n\} \,:\, |z_i(t)| = \Vert z(t) \Vert _\infty \}\) as the set of “maximal” indices. Obviously, \(J_{\max }(t) \ne \emptyset \) and \(t \in T_{\max }^{i}\) for all \(i \in J_{\max }(t)\). Consider the possible cases:

1)
there exists \(i_0 \in J_{\max }(t)\) with \(t \in {\text {int}}(T_{\max }^{i_0})\)
Since \(t \in {\text {int}}(I) {\setminus } \Theta \), it follows from a similar representation as in (44) that \(t \in {\text {int}}(I) {\setminus } {\text {int}}(\Theta _{i_0})\). Hence, there are two subcases:
\(\alpha \)) \(t \in {\text {int}}(I) {\setminus } \overline{\Theta }_{i_0}\), i.e. \(t \notin {\text {int}}(\Theta _{i_0})\) and \(t \notin {\text {bd}}(\Theta _{i_0})\)
Then \(t \notin \Theta _{i_0}\) and
$$\begin{aligned} | x_{i_0}(t) - y_{i_0}(t) | &\le \Vert \overline{\delta }(\cdot ) \Vert _{L_\infty }, \quad | z_{i_0}(t) | = \Vert z(t) \Vert _\infty , \end{aligned}$$thus by the triangle inequality and \(t \in T_{\max }^{i_0}\)
$$\begin{aligned} | z_{i_0}(t) | - | \overline{\delta }_{i_0}(t) | &\le | z_{i_0}(t) - \overline{\delta }_{i_0}(t) | = | x_{i_0}(t) - \big ( {\widetilde{y}}_{i_0}(t) + \overline{\delta }_{i_0}(t) \big ) | \\&= | x_{i_0}(t) - y_{i_0}(t) | \le \Vert \overline{\delta }(\cdot ) \Vert _{L_\infty }\\ \text {so that}\quad \Vert z(t) \Vert _\infty = | z_{i_0}(t) | &\le \Vert \overline{\delta }(\cdot ) \Vert _{L_\infty } + | \overline{\delta }_{i_0}(t) | \le \Vert \overline{\delta }(\cdot ) \Vert _{L_\infty } + \Vert \overline{\delta }(t) \Vert _2 \le 2 \Vert \overline{\delta }(\cdot ) \Vert _{L_\infty }. \end{aligned}$$\(\beta \)) \(t \in {\text {bd}}(\Theta _{i_0})\)
Since \(t \in {\text {bd}}(\Theta _{i_0}) {\setminus } {\text {bd}}(I)\), there exists a sequence \(\{\tau _{k}\}_{k} \subset ({\text {int}}(I) {\setminus } \overline{\Theta }_{i_0}) \cap {\text {int}}(T_{\max }^{i_0})\) converging to t. Hence, by the definition of \(\Theta _{i_0}\) and \(T_{\max }^{i_0}\),
$$\begin{aligned} | x_{i_0}(\tau _k) - y_{i_0}(\tau _k) | &\le \Vert \overline{\delta }(\cdot ) \Vert _{L_\infty }, \quad | z_{i_0}(\tau _k) | = \Vert z(\tau _k) \Vert _\infty \end{aligned}$$for \(k \in \mathbb {N}\). Thus, by the triangle inequality and \(\tau _k \in T_{\max }^{i_0}\)
$$\begin{aligned} | z_{i_0}(\tau _k) | - | \overline{\delta }_{i_0}(\tau _k) | &\le | z_{i_0}(\tau _k) - \overline{\delta }_{i_0}(\tau _k) | \\&= | x_{i_0}(\tau _k) - \big ( {\widetilde{y}}_{i_0}(\tau _k) + \overline{\delta }_{i_0}(\tau _k) \big ) | \\&= | x_{i_0}(\tau _k) - y_{i_0}(\tau _k) | \le \Vert \overline{\delta }(\cdot ) \Vert _{L_\infty },\\ \text {so that}\quad \Vert z(\tau _k) \Vert _\infty = | z_{i_0}(\tau _k) | &\le \Vert \overline{\delta }(\cdot ) \Vert _{L_\infty } + | \overline{\delta }_{i_0}(\tau _k) | \\&\le \Vert \overline{\delta }(\cdot ) \Vert _{L_\infty } + \Vert \overline{\delta }(\tau _k) \Vert _2 \le 2 \Vert \overline{\delta }(\cdot ) \Vert _{L_\infty }. \end{aligned}$$The continuity of \(z(\cdot )\) yields \(\Vert z(t) \Vert _\infty \le 2 \Vert \overline{\delta }(\cdot ) \Vert _{L_\infty }\) and also (50).

2)
for all \(i \in J_{\max }(t)\), \(t \notin {\text {int}}(T_{\max }^{i})\)
Then, since \(t \in \bigcap _{i \in J_{\max }(t)} T_{\max }^{i}\), it follows that there exists \(i_0 \in J_{\max }(t)\) with \(t \in {\text {bd}}(T_{\max }^{i_0})\). \(T_{\max }^{i}\) is closed by the continuity of \(z_i(\cdot )\) and \(z(\cdot )\) so that \({\text {int}}(I) {\setminus } T_{\max }^{i}\) is open and \({\text {bd}}(T_{\max }^{i}) = {\text {bd}}({\text {int}}(I) {\setminus } T_{\max }^{i})\) is contained in a union of countably many points, which has measure 0. Thus we obtain (50) for a.e. \(t \in {\text {int}}(I) {\setminus } \Theta \). By the continuity of \(\Vert z(\cdot ) \Vert _\infty \) we get that (50) holds for every \(t \in {\text {int}}(I) {\setminus } \Theta \).
Step 4d: \(t \in {\text {bd}}(I) = \{ t_0,T \}\). If \(t = t_0\), then
Otherwise, \(t = T\) and the estimate follows either from step 4b and the continuity of \(z(\cdot )\) (if T is at the boundary of \(\Theta \)) with
or from step 4c) and the continuity of \(z(\cdot )\) (if T is at the boundary of \({\text {int}}(I) {\setminus } \Theta \)) with
Step 5: We show that if \(\Delta = (t^{\prime }, t^{\prime \prime })\) is a (maximal) component of \(\Theta \) with smallest value \(t^{\prime } \in I\), then either \(t^{\prime } = t_0\) or \(\Vert z(t^{\prime }) \Vert _\infty \le 2 \Vert \overline{\delta }(\cdot ) \Vert _{L_\infty }\).
Indeed, if \(t^{\prime } > t_0\), then in each left neighborhood \((t^{\prime } - \varepsilon , t^{\prime })\) there is a point \(\tau _\varepsilon \notin \Theta \), since otherwise one could extend \(\Delta \) to the left in \(\Theta \) and it would not be maximal in \(\Theta \). Thus for every \(\varepsilon = \frac{1}{k}\), \(k \in \mathbb {N}\), there is a \(\tau _k \in (t^{\prime } - \frac{1}{k}, t^{\prime }) {\setminus } \Theta \). As in step 4c we get \(\Vert z(\tau _k) \Vert _\infty \le 2 \Vert \overline{\delta }(\cdot ) \Vert _{L_\infty }\), \(k \in \mathbb {N}\). By the continuity of \(z(\cdot )\) and its norm, we get \(\Vert z(t^{\prime }) \Vert _\infty \le 2 \Vert \overline{\delta }(\cdot ) \Vert _{L_\infty }\).
Step 6: We show that the inequality
holds for all \(t \in I\).
Step 6a) We prove that (54) holds for \(t \in \overline{\Theta }\).
Take a (maximal) component \(\Delta = (t^{\prime }, t^{\prime \prime })\) of \(\Theta \). By step 5, either \(t^\prime = t_0\) or \(\Vert z(t^{\prime }) \Vert _\infty \le 2 \Vert \overline{\delta }(\cdot ) \Vert _{L_\infty }\). If \(t^\prime = t_0\), then by step 4b (or (52) for \(t^{\prime \prime }=T\)) we have for \(t \in \overline{\Delta } = [t_0, t^{\prime \prime }]\)
which proves (54) in this case together with \(\Vert z(t_0) \Vert _\infty = \Vert {x}^{0}-{y}^{0} \Vert _\infty \,\).
Let \(t^\prime > t_0\). Then by step 5, \(\Vert z(t^\prime ) \Vert _\infty \le 2 \Vert \overline{\delta }(\cdot ) \Vert _{L_\infty }\) and by this inequality and (45) we have for \(t \in \overline{\Delta }\)
Trivially, (54) holds by (51) for \(t=t_0\). Thus we have shown that (54) holds in each component of \(\Theta \), hence in the closure of \(\Theta \) (by the continuity of \(z(\cdot )\)).
Step 6b) We prove that (54) holds for \(t \in I {\setminus } \overline{\Theta }\).
By step 4c or (53) (if \(T \in {\text {bd}}(\Theta )\)), we have to distinguish three cases, namely \(t=t_0\), \(t \in (t_0,T)\) and \(t=T\). In the first two cases we have \(\Vert z(t_0) \Vert _\infty = e^{\mu (t_0-t_0)} \Vert z(t_0) \Vert _\infty \) by (51) or \(\Vert z(t) \Vert _\infty \le 2 \Vert \overline{\delta }(\cdot ) \Vert _{L_\infty }\) by (50) so that (54) holds. For the third case we use the inequality (53). Therefore, the estimate (54) follows immediately in all cases. \(\square \)
We now prove a version of Filippov’s Theorem for SOSL maps with inner and outer perturbations similar to the OSL case in [29, Theorem 3.2] and [30, Theorem 3.1] with a new proof idea.
Corollary 3.8
(Filippovtype theorem for SOSL maps with inner and outer perturbations) Let \(F: I \times \mathbb {R}^n \Rightarrow \mathbb {R}^n\) satisfy the assumptions (A1)–(A5) and let \(y(\cdot )\) be a solution of the perturbed inclusion (5) with vector perturbations \(\overline{\varepsilon }(\cdot ) \in L_1(I)\), \(\overline{\delta }(\cdot ) \in L_\infty (I)\).
Then, there exists a solution \(x(\cdot )\) of (1) such that for all \(t \in I\)
with \(C_1(\mu ) = |\mu | \cdot \max _{t \in I} \int _{t_0}^t e^{\mu (t-s)} \, ds\), \(C_2(\mu ) = C_1(\mu ) + 1\).
Proof
The function \(z(t) = y(t) - \int _{t_0}^t \overline{\varepsilon }(s) \, ds\) is AC with
satisfies the differential inclusion (35) with righthand side \(F(t, z(t) + {\widetilde{\delta }}(t))\) and a new inner vector perturbation \({\widetilde{\delta }}(t) = \overline{\delta }(t) + \int _{t_0}^t \overline{\varepsilon }(s) \, ds\). \({\widetilde{\delta }}(\cdot )\) is also an \(L_\infty \)function with
Theorem 3.7 guarantees the existence of a solution \(x(\cdot )\) of the original differential inclusion (1) with the estimate (36). Then,
\(\square \)
Note that \(C_1(\mu )\) in Corollary 3.8 can be calculated as 0 for \(\mu =0\) and estimated by 1 for \(\mu < 0\).
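Indeed, evaluating the integral directly gives

```latex
C_1(\mu) \;=\; |\mu| \,\max_{t \in I} \int_{t_0}^{t} e^{\mu(t-s)}\,ds
\;=\;
\begin{cases}
0, & \mu = 0,\\[2pt]
1 - e^{\mu(T-t_0)} \;\le\; 1, & \mu < 0,\\[2pt]
e^{\mu(T-t_0)} - 1, & \mu > 0,
\end{cases}
```

since \(\int _{t_0}^{t} e^{\mu (t-s)}\,ds = \frac{1}{\mu }\bigl(e^{\mu (t-t_0)}-1\bigr)\) for \(\mu \ne 0\) and the maximum over \(t \in I\) is attained at \(t = T\).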
Remark 3.9
Note that the estimate (56) in the SOSL case proves the conjecture of [30, Remark 3.2] and provides order 1 with respect to the norm of the inner perturbation \(\Vert \overline{\delta }(\cdot ) \Vert _{L_\infty }\) and of the outer perturbation \(\Vert \overline{\varepsilon }(\cdot ) \Vert _{L_1}\). In the OSL case in [30, Theorem 3.1] the corresponding estimate
(with a constant C depending only on \(\mu \), \(C_B\), \(C_F\)) is of order 1 in the outer perturbation but only of order \(\frac{1}{2}\) in the inner perturbation. Hence, the SOSL case provides a better order of the estimates, which is also visible in the second motivation of Subsec. 2.2. Under the boundedness assumption (A1), not only are the solutions of the perturbed system (5) bounded by Lemma 3.4, but also the states \({y}^{j}\) and velocities \({w}^{j}\) of Euler's method, uniformly in the stepsize h (see the reasoning in [30] for the OSL case). Then \(\Vert \overline{\delta }(\cdot )\Vert _\infty = \mathcal {O}(h)\) holds for both the SOSL and the OSL case, but only in the SOSL case is the estimate for the Euler polygons in (13) of order \(\mathcal {O}(h)\).
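The order-1 behavior can be illustrated numerically on a minimal example (a sketch, not taken from the paper): the explicit Euler method applied to \(\dot{x} \in -{\text {Sign}}(x)\), \(x(0)=1\), whose righthand side is SOSL with \(\mu = 0\) and whose exact solution is \(x(t) = \max (1-t, 0)\). The uniform error of the Euler polygon stays of size \(\mathcal {O}(h)\).

```python
import numpy as np

def euler_sign(h, T=2.0, x0=1.0):
    # explicit Euler for x' in -Sign(x): the selection sign(x_k) is used,
    # with sign(0) = 0, so the scheme reads x_{k+1} = x_k - h*sign(x_k)
    n = int(round(T / h))
    ts = np.linspace(0.0, T, n + 1)
    xs = np.empty(n + 1)
    xs[0] = x0
    for k in range(n):
        xs[k + 1] = xs[k] - h * np.sign(xs[k])
    return ts, xs

def exact(t):
    # exact solution of x' in -Sign(x), x(0) = 1:
    # slope -1 until the state hits 0 at t = 1, then it stays at 0
    return np.maximum(1.0 - t, 0.0)

for h in (0.1, 0.01, 0.001):
    ts, xs = euler_sign(h)
    err = np.max(np.abs(xs - exact(ts)))
    # the uniform error decreases proportionally to h (first order)
    print(h, err)
```

After the exact solution reaches 0 the Euler polygon oscillates in a band of width h around 0 (the chattering regime), so the uniform error is of size h, in line with the first-order estimate of the SOSL case.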
A direct proof of Corollary 3.8 following the lines of the proof of [30, Theorem 3.1] in the OSL case may improve the constants \(C_1(\mu )\) and \(C_2(\mu )\). On the other hand, the measurability of \(F(\cdot , x+\overline{\delta }(\cdot ))\) is a subtle issue (see the results in [21, Proposition 3.5] for continuous \(\overline{\delta }(\cdot )\)) and would need either an additional upper Scorza-Dragoni property [1, Sec. 5] or another existence result requiring only a strongly measurable selection of \(F(\cdot , x)\) plus assumptions on its boundedness (see [63, Chap. 3, Theorem 8.13 and following results]).
3.3 Stability and approximation results
From the presented results we can easily derive stability results for reachable sets with respect to the initial sets or the vector perturbations.
Definition 3.10
Let \({X}^{0} \subset \mathbb {R}^n\) be a nonempty initial set. The reachable set \(\mathcal {R}(t,t_0,{X}^{0})\), sometimes denoted as \(\mathcal {R}_F(t,t_0,{X}^{0})\), of the differential inclusion (1) at a given time \(t \in I\) with initial condition \(x(t_0) \in {X}^{0}\) and right-hand side F is defined as the set of all end points of solutions at this time, i.e.
$$\begin{aligned} \mathcal {R}(t,t_0,{X}^{0}) = \{ x(t) \,:\, x(\cdot ) \text { is a solution of (1) with } x(t_0) \in {X}^{0} \}. \end{aligned}$$
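To make the definition concrete, here is a hypothetical numerical sketch (not from the paper): for the toy inclusion \(\dot{x} \in [-1,1]\) with \(X^0 = \{0\}\), whose exact reachable set at time t is the interval \([-t,t]\), a crude grid of Euler polygons with finitely many sampled velocities recovers the endpoints.

```python
import numpy as np

def reachable_euler(F_sample, X0, t0, T, N):
    """Propagate a finite set of Euler polygons, sampling finitely many
    admissible velocities from F(t, x) at each step; returns the set of
    endpoints approximating the reachable set at time T."""
    h = (T - t0) / N
    pts = set(X0)
    for j in range(N):
        t = t0 + j * h
        # rounding merges endpoints that agree up to accumulated float error
        pts = {round(x + h * w, 12) for x in pts for w in F_sample(t, x)}
    return pts

# toy inclusion x' in [-1, 1], sampled by three velocities, X0 = {0}
R = reachable_euler(lambda t, x: (-1.0, 0.0, 1.0), {0.0}, 0.0, 1.0, 10)
```

With 10 steps of size 0.1 the endpoints form the grid \(\{-1, -0.9, \dots , 1\}\), a discrete approximation of the exact reachable set \([-1,1]\).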
Corollary 3.11
For reachable sets of (1) starting from two compact, nonempty initial sets \({X}^{0}, {Y}^{0} \subset \mathbb {R}^n\) and \(F: I \times \mathbb {R}^n \Rightarrow \mathbb {R}^n\) satisfying the assumptions (A1)–(A5) we have the estimate
and weak (set-valued) exponential stability holds if the SOSL constant \(\mu \) is negative and \(t \rightarrow \infty \).
The same estimate is stated in [29, Theorem 3.2] for the OSL case. Note that the OSL and SOSL estimates do not differ, since the error terms with respect to the initial condition coincide.
Corollary 3.12
Let \({X}^{0} \subset \mathbb {R}^n\) be a compact, nonempty set and let the assumptions of Corollary 3.8 be satisfied. If \(\mathcal {R}_{\delta , \varepsilon }(t, t_0, {X}^{0})\) denotes the reachable set of the perturbed inclusion (5) at time \(t \in I\) with initial set \({X}^{0}\), then
with \(C_1(\mu ), C_2(\mu )\) as in Corollary 3.8.
This is a direct consequence of Corollary 3.8. The next approximation result is formulated in the spirit of the classical Filippov Theorem 1.1 and focuses on distances of the graphs of the two right-hand sides.
Proposition 3.13
Let \(F: I \times \mathbb {R}^n \Rightarrow \mathbb {R}^n\) satisfy the assumptions (A1)–(A5), and let \({\text {Graph}}F(t, \cdot )\) be measurable (w.r.t. t).

(i)
Let \(y: I \rightarrow \mathbb {R}^n\) be AC such that \(y(t_0) = {y}^{0}\) and
$$\begin{aligned} {\text {dist}}((y(t), {\dot{y}}(t)), {\text {Graph}}F(t,\cdot ))&\le \gamma (t) \quad \text {for a.e.}~t \in I \end{aligned}$$(58)with \(\gamma (\cdot ) \in L_\infty (I)\). Then there exists a solution \(x(\cdot )\) of (1) satisfying
$$\begin{aligned} \Vert y(t) - x(t) \Vert _2&\le \max \bigg \{ e^{\mu (t-t_0)} \Vert {y}^{0} - {x}^{0} \Vert _\infty , \ 2 e^{\mu _+ (t-t_0)} \cdot \big ( \Vert \gamma (\cdot ) \Vert _{L_\infty } + \Vert \gamma (\cdot ) \Vert _{L_1} \big ) \bigg \} \nonumber \\&\quad + C_1(\mu ) \Vert \gamma (\cdot ) \Vert _{L_\infty } + C_2(\mu ) \Vert \gamma (\cdot ) \Vert _{L_1} \quad \text{ for } t \in I \end{aligned}$$(59)with \(C_1(\mu ), C_2(\mu )\) as in Corollary 3.8 and \(\mu \) the SOSL constant of F.

(ii)
If \(G: I \times \mathbb {R}^n \Rightarrow \mathbb {R}^n\) satisfies the assumptions (A1)–(A5) such that \({\text {Graph}}G(t, \cdot )\) is measurable (w.r.t. t) and
$$\begin{aligned} {\text {d}}({\text {Graph}}G(t,\cdot ), {\text {Graph}}F(t,\cdot ))&\le \gamma (t) \quad \text {for a.e.}~t \in I, \end{aligned}$$(60)then the one-sided Hausdorff distance \({\text {d}}(\mathcal {R}_G(t,t_0,{y}^{0}), \mathcal {R}_F(t,t_0,{x}^{0}))\) can be estimated by the same right-hand side as in (59) for \(t \in I\).
Proof
(i) Let \(y(\cdot )\) be given. Then
with \({\widetilde{B}}_1(0)\) the closed unit ball in \(\mathbb {R}^{2n}\). The map \(H: I \Rightarrow \mathbb {R}^{2n}\) with
is measurable by [3, Theorem 8.2.4] and has closed, nonempty images by construction. By [3, Theorem 8.1.4], it has a measurable selection \(\big (z(t), w(t)\big ) \in H(t)\) for \(t \in I\) which satisfies
Then for a.e. \(t \in I\)
where \(\overline{\delta }(\cdot ) = z(\cdot ) - y(\cdot ) \in L_\infty (I)\), \(\overline{\varepsilon }(\cdot ) = {\dot{y}}(\cdot ) - w(\cdot ) \in L_1(I)\). Applying Corollary 3.8 together with (62), there exists a solution \(x(\cdot )\) of (1) such that (59) holds for the given function \(y(\cdot )\).
(ii) For \(y(t) \in \mathcal {R}_G(t,t_0,{y}^{0})\) for \(t \in I\), we have
as well as \(\big (y(t), {\dot{y}}(t)\big ) \in {\text {Graph}}G(t,\cdot )\) so that (61) also holds and the proof above continues as before by using \(x(t) \in \mathcal {R}_F(t,t_0,{x}^{0})\). \(\square \)
Remark 3.14
It follows from the last proposition that the estimate (59) also holds for the (two-sided) Hausdorff distance between the reachable sets of the inclusions (1) and (63) under the assumption that (60) holds for the Hausdorff distance between the graphs of F and G. Then \(\mu = \max \{ \mu _F, \mu _G \}\) and the constants \(C_1\) and \(C_2\) are the maximal corresponding constants.
The last three claims can be considered as both approximation and stability results: if the interval I is finite, the estimates of the Hausdorff distances between the original and “perturbed” reachable sets in all three results are uniform in time. This also implies estimates of the distances between the corresponding solution funnels, i.e. the unions of the graphs of all solutions. On an infinite time interval, the Hausdorff distances between the reachable sets stay small if the SOSL constant is nonpositive and the Hausdorff distance between the initial sets or the norms of the perturbations \(\overline{\delta }(\cdot )\), \(\overline{\varepsilon }(\cdot )\) or of the bound \(\gamma (\cdot )\) for the graphs are small.
For instance, let us consider the right-hand side of the differential inclusion \({\dot{x}}(t) \in -{\text {Sign}}(x(t))\), replaced by a sequence of sigmoidal or saturation functions with growing Lipschitz constants. If the stability with respect to the initial value is studied with the help of the classical Filippov Theorem 1.1 for Lipschitz right-hand sides, the estimate explodes for increasing time. Applying Theorem 3.7 for SOSL right-hand sides, the estimate is uniformly bounded by the Hausdorff distance of the initial sets, since the SOSL constant for all functions of the sequence is 0. The approximation estimates in this case do not suffer from exploding Lipschitz constants (which appear in Example 2.5 if the Filippov theorem for Lipschitz right-hand sides were applied). In contrast to the exploding estimates obtained from the classical Filippov theorem, Proposition 3.13 gives good estimates, since the graphs of the sigmoidal or saturation functions tend to the graph of \(-{\text {Sign}}(\cdot )\) and all SOSL constants are nonpositive.
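The phenomenon can be checked numerically in a sketch that is not from the paper: decreasing saturation functions approximating \(-{\text {Sign}}\) have Lipschitz constants growing like k, yet a one-sided Lipschitz bound with constant 0 survives for all k.

```python
import numpy as np

def sat(k, x):
    """Decreasing saturation approximation of -Sign, Lipschitz constant k."""
    return -np.clip(k * x, -1.0, 1.0)

rng = np.random.default_rng(0)
xs, ys = rng.uniform(-1, 1, 1000), rng.uniform(-1, 1, 1000)
for k in (1.0, 10.0, 1000.0):
    # one-sided estimate (f(x)-f(y))(x-y) <= mu (x-y)^2 holds with mu = 0
    # for every k, although the Lipschitz constant of sat(k, .) is k
    assert np.all((sat(k, xs) - sat(k, ys)) * (xs - ys) <= 1e-12)
```

The assertion passes because each `sat(k, .)` is nonincreasing, so the scalar product on the left is never positive, independently of k.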
4 Examples of differential inclusions with SOSL right-hand sides
In this section we present examples of dynamical systems with SOSL right-hand sides. In the case of Filippov's regularization of discontinuous ODEs with a unique solution, Theorem 3.7 implies first-order convergence of the Euler approximants to this solution, as motivated in Subsection 2.2. The numerical experiments presented here confirm this order of convergence. The combination of the discrete and continuous Filippov-type approximation theorems was successfully applied in [17] to obtain error estimates for the Euler method for Lipschitz differential inclusions with state constraints and may also work in the case of SOSL mappings.
We now consider examples from differential equations based on applications.
Example 4.1
We consider the second-order differential equation on the time interval \(I = [0,T]\) introduced by Flügge-Lotz/Klotter in [40, (1.3a) and (1.5d)]
for \(b > 0\), \(D > 0\), \(\omega > 0\), initial value \({y}^{0} = \genfrac(){0.0pt}1{3}{4}\) and a motion under the influence of bang-bang controls. The example can also be found in [62, Beispiel 1.3] and, with a slightly different factor for \(y(\tau )\), in [51, Example 5.2]. As mentioned in [40], the control function \(u(\tau ) = \rho _1 y(\tau ) + \rho _2 {\dot{y}}(\tau )\) anticipates the behavior of the solution component \(y(\tau )\), acts as a feedback controller and precedes or follows it in time depending on \(\rho = \frac{\rho _2}{\rho _1} > 0\) or \(\rho < 0\). In [40, (1.5d)] the value \(\frac{b}{\omega ^2}\) is set to 1 and the damping factor D to 0.1.
The Filippov regularization is
Let \(\rho _1 = 0\) and \(\rho _2 > 0\):
In this case the model is similar to [57, (2)] (with the right-hand side 0 in (64) replaced by a driving force \(\varphi (\eta \tau )\) with a constant \(\eta \) and the equivalent simpler controller \({\text {sign}}({\dot{y}}(\tau ))\)) and comprises two important engineering equations. One model originates from an electric circuit with capacitor, coil, resistor (which damps the capacitor charging) and rectifier, eventually switching the sign of the capacitor charging, driven by an excitation with a periodic alternating (AC) voltage. The other model describes a mechanical system with a spring driven by forced vibrations with viscous damping as well as combined dry and Coulomb friction. In the latter, D and \(\mu = \frac{b}{\omega ^2}\) are the Coulomb and sliding/dry friction coefficients, respectively.
This equation is also treated in several articles on discontinuous differential equations (e.g., in [62, Beispiel 0.1], [21, Example 13.3] and in [46, (1.4)]). In Fig. 5 (left) the (approximated) solution components \(y_1(t)\) (blue) and \(y_2(t)\) (red) are shown together with the black dashed switching curve \(y_2 = 0\), where \(\frac{b}{\omega ^2} = 4\), \(\eta = \pi \), \(\varphi (s) = 2 \cos (s)\), \(T = 6\). Whenever the solution intersects this curve, the solution component \(y_2(t)\) has a corner due to \({\text {Sign}}(y_2)\) in F(t, y).
The right-hand side is SOSL with constant \(\mu _F = 1\). To see this, rewrite the right-hand side of (65) as \(F(t,y) = A y + b(t) - \mu {\widetilde{S}}(y)\) with \(A = \begin{pmatrix} 0 &{} 1 \\ -1 &{} -2D \end{pmatrix}\), the vector \(b(t) = \genfrac(){0.0pt}1{0}{\varphi (\eta t)}\) and the set-valued map \({\widetilde{S}}(y) = \{ 0 \} \times {\text {Sign}}(y_2)\) for \(y = (y_1, y_2) \in \mathbb {R}^2\). The affine part \(A y + b(t)\) is estimated by Lemma 2.1 with SOSL constant
It is easy to prove that \({\widetilde{S}}(\cdot )\) is SOSL of constant 0 so that F is SOSL (even uniform SOSL) by Proposition 2.4(iv) with constant \(\mu =1\).
With Lemma 2.1 and the symmetrized matrix \(A_{\text {sym}}\) it is straightforward to prove that the right-hand side \(F(t,\cdot )\) is even dissipative (i.e. uniformly OSL with constant \(\mu _F = 0\)).
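For readers who want to reproduce Fig. 5 (left) qualitatively, a minimal explicit-Euler sketch of the Filippov-regularized system follows. The parameter values are taken from the text (D = 0.1, \(\mu = b/\omega ^2 = 4\), \(\eta = \pi \), \(\varphi (s) = 2 \cos s\), \(y(0) = (3,4)\)); the single-valued selection \({\text {sign}}(0) = 0\) from \({\text {Sign}}(0) = [-1,1]\) is our own choice, not prescribed by the paper.

```python
import numpy as np

def rhs(t, y, D=0.1, mu=4.0, eta=np.pi):
    """One admissible velocity of the Filippov regularization of (64)
    with rho_1 = 0: the midpoint selection sign(0) = 0 is used at y_2 = 0."""
    phi = 2.0 * np.cos(eta * t)
    return np.array([y[1], -y[0] - 2.0 * D * y[1] - mu * np.sign(y[1]) + phi])

def euler(t0, T, y0, N):
    """Explicit Euler polygon with N subintervals."""
    h = (T - t0) / N
    y = np.asarray(y0, dtype=float)
    traj = [y.copy()]
    for j in range(N):
        y = y + h * rhs(t0 + j * h, y)
        traj.append(y.copy())
    return np.array(traj)

traj = euler(0.0, 6.0, [3.0, 4.0], 6000)
```

The computed polygon stays bounded, reflecting the dissipativity established above; corners of the second component appear where \(y_2\) changes sign.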
Example 4.2
We continue Example 4.1 with the general model in [40, (1.4) and (1.5d)].
Let \(\rho _1 > 0\) and \(\rho _2 > 0\):
The numerical test with the explicit Euler method on the time interval \(I = [0,3 \pi ]\) and \(\rho =1\) graphically indicates convergence of order 1 with respect to the step size. In Fig. 5 (right) the (approximated) solution components \(y_1(t)\) (blue) and \(y_2(t)\) (red) are shown together with the green dashed function \(y_1(t) + y_2(t)\), where \(\frac{b}{\omega ^2} = 1\), \(\varphi (s) = 0\), \(T = 3 \pi \). Whenever the green function intersects the black dashed axis \(y_2=0\), the solution component \(y_2(t)\) has a kink due to \({\text {Sign}}(y_1+y_2)\) in G(t, y). In Fig. 6 the second components of the Euler polygons for \(N \in \{ 40, 80, 160, 320 \}\) subintervals are shown together with the reference trajectory calculated with \(N_{\text {ref}} = 20480\) (dashed black line). Note that there are corners in the phase portrait of the green trajectory around the points \((2.5, 2.5)\), \((1.5,1.5)\) and \((3.5,4)\), reflecting discontinuities of the velocity when the trajectory crosses the line of discontinuity \(y_1 + y_2 = 0\) of the right-hand side. All solutions in the left plot show small zigzagging behavior near the times t with \(y_1(t)+y_2(t)=0\).
In Table 1 the maximum errors (4th column) of the Euler iteration over all grid points are calculated for various step sizes \(h_k\) with respect to the reference solution. From these data of subsequent step sizes, the error at the kth step size is compared with that at the sixth one. The estimated order is roughly \(\mathcal {O}(h)\). A least-squares fit of the true errors with the unknowns C and p in \(C h^p\) yields approximately \(C = 36.502\), \(p = 1.4350\), whereas \(C = 14.397\) for fixed \(p = 1\).
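The least-squares fit mentioned above can be sketched as follows; the routine and the error data are hypothetical (the true errors of Table 1 are not reproduced here), fitting \(C h^p\) in log-log coordinates.

```python
import numpy as np

def estimate_order(hs, errs):
    """Least-squares fit of err = C * h**p in log-log coordinates;
    returns the pair (C, p)."""
    p, logC = np.polyfit(np.log(hs), np.log(errs), 1)
    return float(np.exp(logC)), float(p)

# hypothetical error data behaving exactly like 3 * h (order 1)
hs = np.array([0.1, 0.05, 0.025, 0.0125])
C, p = estimate_order(hs, 3.0 * hs)
```

For real Euler errors the fitted p estimates the experimental order of convergence; the synthetic data above recovers C close to 3 and p close to 1.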
The special feature of this variant is the linear combination of solution components deciding on the sign switch in the controller \({\text {sign}}(y(t) + \rho {\dot{y}}(t))\), so that it is not clear whether the right-hand side of the differential inclusion is SOSL or not. Nevertheless, the model fits very nicely with the choice of a basis in \(\mathbb {R}^n\) for uniform SOSL set-valued maps in [52].
As suggested in [51], we introduce the transformed system with \(z_1(t) = y_1(t)\), \(z_2(t) = y_1(t) + \rho y_2(t)\) so that we can express \(y_2(t) = \frac{1}{\rho } (z_2(t) - y_1(t))\). Thus, we consider the equivalent differential inclusion \(z'(t) \in G(t, z(t))\) with
where \({\widetilde{D}} = D - \frac{1}{2 \rho }\). We prove the strengthened OSL condition and consider \(z = (z_1, z_2)\), \({\widetilde{z}} = ({\widetilde{z}}_1, {\widetilde{z}}_2) \in \mathbb {R}^2\), \(v = (v_1,v_2) \in G(t,z)\), \(s(z_2) \in {\text {Sign}}(z_2)\). G(t, z) is expressed as \(B z - \mu {\widetilde{S}}(z)\) with the matrix \(B = \begin{pmatrix} -\frac{1}{\rho } &{} \frac{1}{\rho } \\ -\rho + 2 {\widetilde{D}} &{} -2 {\widetilde{D}} \end{pmatrix}\) and \(\mu \), \({\widetilde{S}}(\cdot )\) as in Example 4.1. The linear part \(z \mapsto B z\) is SOSL by Lemma 2.1 with constant
We can argue with Proposition 2.4(iv) as in Example 4.1 to see that the transformed differential inclusion with right-hand side G(t, z) is SOSL (even uniformly) with constant \(\mu _G = \mu _B\).
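The coordinate change can be checked mechanically: with \(T = \begin{pmatrix} 1 & 0 \\ 1 & \rho \end{pmatrix}\) mapping y to \(z = (y_1, y_1 + \rho y_2)\), the linear part transforms as \(T A T^{-1}\) with A as in Example 4.1. The sketch below verifies this identity numerically for the hypothetical parameter choice D = 0.1, \(\rho = 1\); the matrix entries written out for B follow the abbreviation \({\widetilde{D}} = D - \frac{1}{2\rho }\).

```python
import numpy as np

# numerical check of the similarity B = T A T^{-1} for the transformed
# linear part; D and rho are illustrative sample values, not from the text
D, rho = 0.1, 1.0
Dt = D - 1.0 / (2.0 * rho)                      # D-tilde
A = np.array([[0.0, 1.0], [-1.0, -2.0 * D]])    # linear part in y-coordinates
T = np.array([[1.0, 0.0], [1.0, rho]])          # z = T y
B = np.array([[-1.0 / rho, 1.0 / rho],
              [-rho + 2.0 * Dt, -2.0 * Dt]])    # linear part in z-coordinates
assert np.allclose(T @ A @ np.linalg.inv(T), B)
```

The same check can be repeated for other positive values of D and \(\rho \) to confirm the formulas symbolically derived in the example.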
We discuss analytically another higher-dimensional example with three coupled springs and six states.
Example 4.3
([61, 51, Example 5.3], [55, (16)]) Consider the system of three coupled springs with dry friction of second order
on \(t \in I = [0,6]\) with initial condition \(y(0) = (1, 1, 1, 1, 1, 1)^\top \). With the matrix \(A = \begin{pmatrix} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 \\ -2 &{} 1 &{} 0 &{} -1 &{} 0 &{} 0 \\ 1 &{} -2 &{} 1 &{} 0 &{} -1 &{} 0 \\ 0 &{} 1 &{} -1 &{} 0 &{} 0 &{} -1 \end{pmatrix}\), the set-valued map for the inhomogeneity \(B(x) = \sum _{i=1}^6 B_i(x_i) e^i\) and the vector function \(c(t) = \sum _{i=1}^6 c_i(t) e^i\) with the notation (16) used in Proposition 2.4 and \(B_i(x_i) \subset \mathbb {R}\), \(c_i(t) \in \mathbb {R}\) with
Then \(F(t,x) = A x + B(x) + c(t)\).
The diagonal elements of A are either 0 or \(-1\) so that the maximal sum of absolute values of off-diagonal elements is 4 (attained in the fifth row). Hence, the function \((t,x) \mapsto A x + c(t)\) is SOSL with constant \(\mu _A = 4\) by Lemma 2.1. The set-valued map B is strengthened uniform OSL with constant 0 by Proposition 2.4(v). By Proposition 2.4(iv) the set-valued map F is strengthened uniform OSL with constant \(\mu _F = \mu _A = 4\).
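The off-diagonal row-sum computation behind \(\mu _A = 4\) can be checked mechanically. The sketch below assumes the sign pattern of A as in the system of coupled springs and does not reproduce Lemma 2.1 itself; it only reproduces the row-wise bound used in the text.

```python
import numpy as np

# the 6x6 matrix A of the three coupled springs (second-order system
# rewritten as a first-order system in positions and velocities)
A = np.array([
    [ 0,  0,  0,  1,  0,  0],
    [ 0,  0,  0,  0,  1,  0],
    [ 0,  0,  0,  0,  0,  1],
    [-2,  1,  0, -1,  0,  0],
    [ 1, -2,  1,  0, -1,  0],
    [ 0,  1, -1,  0,  0, -1],
], dtype=float)

# absolute off-diagonal row sums; the maximum bounds the SOSL constant
off = np.abs(A) - np.diag(np.abs(np.diag(A)))
row_sums = off.sum(axis=1)
mu_A = row_sums.max()
```

The maximum equals 4 and is attained in the fifth row, matching the computation in the text.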
Example 4.4
Inner set-valued perturbations of the differential inclusion (66)–(69) in Example 4.3 involving \(\delta _i > 0\), \(i=4,5,6\), yield the system
which is SOSL with constant \(\mu = 4\) due to Proposition 2.4(ii) and Example 4.3, but not strengthened uniform OSL.
The new differential inclusion can be seen in the light of a computer implementation of the system of Example 4.3. In practice, an algorithm implementing a discrete set-valued Euler method will not test whether a floating point number \(y_i\), \(i=4,5,6\), is exactly zero or not in order to evaluate \({\text {Sign}}(y_i)\). Due to rounding errors, one would choose an implementation which returns \({\text {Sign}}(0)\) for the argument \(y_i\) if the absolute value of \(y_i\) is less than or equal to \(\delta _i\), a tolerance close to the floating point precision multiplied by a factor depending on an upper bound of \(y_i\), i.e. \(|y_i| \le \delta _i\). This is exactly the case when \(y_i \in \delta _i [-1,1]\) so that \({\text {Sign}}(y_i + \delta _i [-1,1]) = [-1,1]\). Hence, inner set-valued perturbations can incorporate strategies for taking rounding errors in floating point arithmetic into account.
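A tolerance-based implementation of the set-valued Sign described above might look as follows; this is an illustrative sketch, and the function name and the (lo, hi) interval representation are our own.

```python
def sign_interval(y, delta):
    """Set-valued Sign evaluated with an absolute tolerance delta:
    returns the interval Sign(y + delta*[-1,1]) as a (lo, hi) pair.
    For |y| <= delta the whole interval [-1, 1] is returned, which
    models treating y as 'numerically zero' in floating-point code."""
    if abs(y) <= delta:
        return (-1.0, 1.0)
    return (1.0, 1.0) if y > 0 else (-1.0, -1.0)

# a value below the tolerance is treated as numerically zero
assert sign_interval(1e-14, 1e-12) == (-1.0, 1.0)
assert sign_interval(0.5, 1e-12) == (1.0, 1.0)
```

In a set-valued Euler step, the returned interval would then be scaled and added to the single-valued part of the velocity, exactly as in the inner perturbation \(\delta _i [-1,1]\) above.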
Further examples in the analysis of block designs or cascaded state observers [48, (4.2) and below (A.3)] also lead to SOSL systems.
5 Conclusions
Well-posedness and regularity of solutions of perturbed problems is a topic studied persistently by A. Dontchev. In the paper [35], he and his coauthor proved order of convergence 1 for the set-valued Euler method in the Lipschitz case and partially repeated the proof of the celebrated Filippov theorem (Theorem 1.1) for convex and compact-valued right-hand sides, since they were not aware of this theorem. The authors of this paper believe that continuing this tradition is an appropriate way to honor the memory of Asen L. Dontchev.
While the form of the perturbed problem in Theorem 3.7 and Corollary 3.8 is different from that in the original theorem of Filippov, the formulation of Proposition 3.13 is more in the spirit of this theorem.
We are currently preparing a follow-up paper focusing on discrete approximations of differential inclusions with SOSL maps, which will benefit from the available Filippov approximation theorems in continuous time (presented here) and in discrete time [9].
The authors would like to thank the reviewers for their particularly careful reading, the valuable suggestions and the encouragement towards a better presentation of the material. Their remarks helped us to substantially improve the paper.
Data availability
The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.
References
Appell, J., De Pascale, E., Thái, Nguyêñ Hôǹg., Zabreĭko, P.P.: Multivalued superpositions. Diss. Math. (Rozprawy Mat.) 345 (1995)
Aubin, J.P., Cellina, A.: Differential Inclusions. Vol. 264. Grundlehren der mathematischen Wissenschaften, Springer, Berlin, pp. xiii+342 (1984)
Aubin, J.P., Frankowska, H.: SetValued Analysis. Vol. 2. Systems & Control: Foundations & Applications. Birkhauser Boston Inc., Boston, pp. xx+461 (1990)
Auzinger, W., Frank, R., Macsek, F.: Asymptotic error expansions for stiff equations: the implicit Euler scheme. SIAM J. Numer. Anal. 27(1), 67–104 (1990)
Bacciotti, A.: On several notions of generalized solutions for discontinuous differential equations and their relationships. Research Report 19. Dipartimento di Matematica del Politecnico di Torino, Torino (2003). https://citeseerx.ist.psu.edu/doc_view/pid/c3c4eadd04721dbbd301b96edac713d5301d9d6b
Baier, R., Chahma, I.A., Lempio, F.: Stability and convergence of Euler's method for state-constrained differential inclusions. SIAM J. Optim. 18(3), 1004–1026 (2007) (electronic). D. Dentcheva, J. Revalski (eds.), special issue on “Variational Analysis and Optimization”
Baier, R., Farkhi, E.: Regularity of setvalued maps and their selections through set differences. Part 1: Lipschitz continuity. Serdica Math. J. 39, 3–4 (2013). Special issue dedicated to the 65th anniversary of Professor Asen L. Dontchev and to the 60th anniversary of Professor Vladimir M. Veliov, pp. 365–390
Baier, R., Farkhi, E.: Regularity of setvalued maps and their selections through set differences. Part 2: Onesided Lipschitz properties. Serdica Math. J. 39, 3–4 (2013). Special issue dedicated to the 65th anniversary of Professor Asen L. Dontchev and to the 60th anniversary of Professor Vladimir M. Veliov, pp. 391–422
Baier, R., Farkhi, E.: Discrete Filippovtype stability for onesided Lipschitzian difference inclusions. In: Feichtinger, G., Kovacevic, R., Tragler, G. (eds.) Control Systems and Mathematical Methods in Economics. Essays in Honor of Vladimir M. Veliov, vol. 687. Lecture Notes in Economy and Math. Systems. Dedicated to Vladimir Veliov’s 65th birthday. Springer, Cham, pp. 27–55 (2018)
Beyn, W.J., Rieger, J.: Numerical fixed grid methods for differential inclusions. Computing 81(1), 91–106 (2007)
Blanchini, F., Miani, S.: SetTheoretic Methods in Control. Systems & Control: Foundations & Applications. Birkhauser Boston Inc., Boston, pp. xvi+481 (2008)
Bressan, A.: Singularities of stabilizing feedbacks. Rend. Sem. Mat. Univ. Politec. Torino 56(4) (Control Theory and its Applications, Grado, 1998), 87–104 (2001)
Bruckner, A. M.: Differentiation of Real Functions. Vol. 659. Lecture Notes in Math. Berlin: Springer, pp. x+247 (1978)
Calin, O.: Deep Learning Architectures. A Mathematical Approach. Springer Series in the Data Sciences. Springer, Cham, pp. xxx+760 (2020)
Cannarsa, P., Da Prato, G., Frankowska, H.: Invariance for quasi-dissipative systems in Banach spaces. J. Math. Anal. Appl. 457(2), 1173–1187 (2018)
Castaing, C.: Sur les multiapplications mesurables. Rev. Fr. Inform. Rech. Oper. 1(1), 91–126 (1967)
Chahma, I.A.: Setvalued discrete approximation of stateconstrained differential inclusions. Bayreuth. Math. Schr. 67, 3–162 (2003)
Cohn, D.L.: Measure Theory. Second ed. Birkhäuser Advanced Texts: Basler Lehrbücher. [Birkhäuser Advanced Texts: Basel Textbooks]. Birkhäuser/Springer, New York, pp. xxi +457 (2013)
Colombo, G.: Approximate and relaxed solutions of differential inclusions. Rend. Sem. Mat. Univ. Padova 81, 229–238 (1989)
Deimling, K.: Nonlinear Functional Analysis. Springer, Berlin, pp. xiv+450 (1985)
Deimling, K.: Multivalued Differential Equations. Vol. 1. de Gruyter Series in Nonlinear Analysis and Applications. Walter de Gruyter, Berlin (1992)
Dekker, K., Verwer, J.G.: Stability of Runge–Kutta Methods for Stiff Nonlinear Differential Equations. Vol. 2. CWI Monographs. NorthHolland, Amsterdam, pp. ix+307 (1984)
Donchev, T.D.: Functionaldifferential inclusion with monotone righthand side. Nonlinear Anal. 16(6), 533–542 (1991)
Donchev, T.D.: Qualitative properties of a class of differential inclusions. Glas. Mat. Ser. III 31(51)(2), 269–276 (1996)
Donchev, T.D.: Properties of onesided Lipschitz multivalued maps. Nonlinear Anal. 49(1), 13–20 (2002)
Donchev, T.D.: Properties of the reachable set of control systems. Syst. Control Lett. 46(5), 379–386 (2002)
Donchev, T. D.: One sided Lipschitz multifunctions and applications. In: Optimal Control, Stabilization and Nonsmooth Analysis. Vol. 301. Lecture Notes in Control and Inform. Sci. Springer, Berlin, pp. 333–341 (2004)
Donchev, T.D., Dontchev, A.L.: Singular perturbations in infinitedimensional control systems. SIAM J. Control Optim. 42(5), 1795–1812 (2003)
Donchev, T.D., Farkhi, E.: Stability and Euler approximation of onesided Lipschitz differential inclusions. SIAM J. Control Optim. 36(2), 780–796 (1998)
Donchev, T.D., Farkhi, E.: Approximations of onesided Lipschitz differential inclusions with discontinuous righthand sides. In: Calculus of Variations and Differential Equations (Haifa, 1998). Vol. 410. Chapman & Hall/CRC Res. Notes Math. Chapman & Hall/CRC, Boca Raton, FL, pp. 101–118 (2000) isbn: 9781584880240
Donchev, T.D., Farkhi, E.: On the theorem of Filippov–Pliś and some applications. Control Cybern. 38(4A), 1251–1271 (2009)
Donchev, T.D., Farkhi, E., Reich, S.: Fixed set iterations for relaxed Lipschitz multimaps. Nonlinear Anal. 53(7–8), 997–1015 (2003)
Donchev, T.D., Farkhi, E., Reich, S.: Discrete approximations and fixed set iterations in Banach spaces. SIAM J. Optim. 18(3), 895–906 (2007)
Dontchev, A.L., Donchev, T.D., Slavov, Ĭ: A Tikhonovtype theorem for singularly perturbed differential inclusions. Nonlinear Anal. 26(9), 1547–1554 (1996)
Dontchev, A.L., Farkhi, E.: Error estimates for discretized differential inclusions. Computing 41(4), 349–358 (1989)
Dontchev, A.L., Lempio, F.: Difference methods for differential inclusions: a survey. SIAM Rev. 34(2), 263–294 (1992)
Dunn, J.C.: Iterative construction of fixed points for multivalued operators of the monotone type. J. Funct. Anal. 27(1), 38–50 (1978)
Filippov, A.F.: Classical solutions of differential equations with multivalued right-hand side. SIAM J. Control 5, 609–621 (1967)
Filippov, A.F.: Differential Equations with Discontinuous Right-hand Sides. Vol. 18. Mathematics and its Applications (Soviet Series). English translation of the Russian original “Differentsialnye uravneniya s razryvnoi pravoi chastyu”, Nauka, Moscow, 1985. Kluwer Academic Publishers Group, Dordrecht, pp. x+304 (1988)
Flügge-Lotz, I., Klotter, K.: Über Bewegungen eines Schwingers unter dem Einfluss von Schwarz-Weiss-Regelungen. I. Bewegungen eines Schwingers von einem Freiheitsgrad; Regelung mit Stellungszuordnung ohne Schaltverschiebungen [On movements of an oscillator under the influence of black and white controls. I. Movements of a vibrator of one degree of freedom; control with position assignment without switching displacements]. Z. Angew. Math. Mech. 28, 317–337 (1948)
Folland, G.B.: Real Analysis. Modern Techniques and their Applications. Pure and Applied Mathematics (New York), 2nd edn. First edition published in 1984. Wiley, New York, pp. xvi+386 (1999)
Frankowska, H., Rampazzo, F.: Filippov’s and FilippovWażewski’s theorems on closed domains. J. Differ. Equ. 161(2), 449–478 (2000)
Hairer, E., Wanner, G.: Solving Ordinary Differential Equations. II Stiff and DifferentialAlgebraic Problems. Second ed. Vol. 14. Springer Series in Computational Mathematics. Springer, Berlin, pp. xvi+614 (1996)
Hájek, O.: Discontinuous differential equations. I. J. Differ. Equ. 32(2), 149–170 (1979)
Kastner-Maresch, A.E.: Implicit Runge–Kutta methods for differential inclusions. Numer. Funct. Anal. Optim. 11(9–10), 937–958 (1990/91)
Kastner-Maresch, A.E.: The implicit midpoint rule applied to discontinuous differential equations. Computing 49(1), 45–62 (1992)
Krasnova, S.A., Mysik, N.S.: Cascade synthesis of a state observer with nonlinear correcting influences. Autom. Remote Control 75(2), 263–280 (2014)
Krasnova, S.A., Utkin, V.A., Utkin, A.V.: A block approach to the analysis and design of invariant nonlinear tracking systems. Autom. Remote Control 78(12), 2120–2140 (2017)
Lempio, F.: Difference methods for differential inclusions. In: Modern Methods of Optimization. Proceedings of a Summer School at the Schloß Thurnau of the University of Bayreuth, FRG, October 1–6, 1990. Vol. 378. Lecture Notes in Econom. and Math. Systems. Springer, Berlin, pp. 236–273 (1992)
Lempio, F.: Modified Euler methods for differential inclusions. In: SetValued Analysis and Differential Inclusions. A Collection of Papers resulting from a Workshop held in Pamporovo, Bulgaria, September 17–21, 1990. Vol. 16. Progr. Systems Control Theory. Birkhauser, Boston, pp. 131–148. (1993) isbn: 0817637338
Lempio, F.: Euler’s method revisited. Proc. Steklov Inst. Math. 211, 429–449 (1995)
Lempio, F., Silin, D.B.: Differential inclusions with strongly one-sided Lipschitz right-hand sides. Differ. Equ. 32(11), 1485–1491 (1997)
Lempio, F., Veliov, V.M.: Discrete approximations of differential inclusions. Bayreuth. Math. Schr. 54, 149–232 (1998)
Łojasiewicz Jr., S.: Some theorems of Scorza-Dragoni type for multifunctions with application to the problem of existence of solutions for differential multivalued equations. In: Mathematical Control Theory. Vol. 14 (1). Banach Center Publ. PWN, Warsaw, pp. 625–643 (1985)
Marszal, M., Stefański, A.: Synchronization properties in coupled dry friction oscillators. In: Nonlinear Dynamical Systems with SelfExcited and Hidden Attractors. Vol. 133. Stud. Syst. Decis. Control. Springer, Cham, pp. 87–113 (2018)
Pliś, A.: On trajectories of orientor fields. Bull. Acad. Polon. Sci. Ser. Sci. Math. Astronom. Phys. 13, 571–573 (1965)
Reissig, R.: Erzwungene Schwingungen mit zäher Dämpfung und starker Gleitreibung [Forced oscillations with viscous damping and strong sliding friction]. Math. Nachr. 11, 231–238 (1954)
Rieger, J.: A proof of the relaxation theorem for differential inclusions based on Euler approximations. Numer. Funct. Anal. Optim. 33(10), 1244–1249 (2012)
Serrin, J., Varberg, D.E.: A general chain rule for derivatives and the change of variables formula for the Lebesgue integral. Am. Math. Mon. 76, 514–520 (1969)
Stein, E.M., Shakarchi, R.: Real Analysis. Measure Theory, Integration, and Hilbert Spaces. Vol. 3. Princeton Lectures in Analysis. Princeton University Press, Princeton, pp. xx+402 (2005)
Stewart, D.E.: HighAccuracy Numerical Methods for Ordinary Differential Equations with Discontinuous RightHand Side. PhD thesis. The University of Queensland (Australia), Brisbane (1990)
Taubert, K.: Differenzverfahren für Schwingungen mit trockener und zäher Reibung und für Regelungssysteme [Difference methods for oscillations with dry and viscous friction and for control systems]. Numer. Math. 26(4), 379–395 (1976)
Tolstonogov, A.: Differential Inclusions in a Banach Space. Vol. 524. Mathematics and its Applications. Translated from the 1986 Russian original and revised by the author. Kluwer Academic Publishers, Dordrecht, pp. xvi+302 (2000)
Veliov, V.M.: Differential inclusions with stable subinclusions. Nonlinear Anal. 23(8), 1027–1038 (1994)
Vinter, R.: Optimal Control. Systems & Control: Foundations & Applications. Birkhauser, Boston, pp. xviii+507 (2000)
Wolenski, P.R.: The exponential formula for the reachable set of a Lipschitz differential inclusion. SIAM J. Control Optim. 28(5), 1148–1161 (1990)
Zarantonello, E.H.: Dense singlevaluedness of monotone operators. Isr. J. Math. 15, 158–166 (1973)
Funding
Open Access funding enabled and organized by Projekt DEAL. The authors are grateful for the partial support of Tel Aviv University, Mathematical Institute at Tel Aviv “MINT”, University of Bayreuth, and Bavarian Research and Innovation Agency “BayFor” which enabled their participation in two bilateral workshops in Tel Aviv and Bayreuth in 2019 and in which the basis of this paper was laid.
Ethics declarations
Competing Interests
The authors have no competing interests to declare that are relevant to the content of this article.
Dedicated to the memory of Asen L. Dontchev.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Robert Baier: Partially supported by Tel Aviv University, Mathematical Institute at Tel Aviv “MINT”, and by Bavarian Research and Innovation Agency “BayFor”. Elza Farkhi: Partially supported by University of Bayreuth, Mathematical Institute at Tel Aviv “MINT”, and by Bavarian Research and Innovation Agency “BayFor”.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Baier, R., Farkhi, E. A Filippov approximation theorem for strengthened onesided Lipschitz differential inclusions. Comput Optim Appl 86, 885–923 (2023). https://doi.org/10.1007/s10589023005179
Keywords
 Differential inclusions
 Filippov theorem
 (Strengthened) one-sided Lipschitz condition
 Monotonicity
 Set-valued Euler method
 Reachable sets