## 1 Introduction

The focus is the external ellipsoidal approximation of all attainable states in a control system with uncertainties as they occur in many technical applications. This motivates our interest in reachable sets (instead of single state vectors), and we face the challenge that these sets have a feedback influence on the underlying control system and thus on their own evolution. The results here lay the foundations for future descent methods in optimization problems.

As a first motivation, we sketch the optimization problem of output feedback control (OFC) in uncertain systems introduced by Kurzhanski and Varaiya (see also [36, Ch. 10]). In its linear (or linearized) form without available measurements, the state $$x(t) \in {{\mathbb {R}}^n}$$ is determined by $$x' = A(t) \; x + B(t) \; \eta + C(t) \; v$$ with a control $$v(t) \in V$$ and bounded “unknown noise” $$\eta (t)$$. Furthermore, the initial set $$K_0 \subset {{\mathbb {R}}^n}$$ and the set-valued map $$U(\cdot )$$ of constraints on $$\eta (t)$$ are given. For each control $$v \in L^1([0,T], V)$$, the guaranteed state estimation $$K_v(t) \subset {{\mathbb {R}}^n}$$ at time $$t \in [0,T]$$ consists of the values $$x(t) {:}{=} x(t; 0, x_0)$$ of all solutions $$x: [0,t] \longrightarrow {{\mathbb {R}}^n}$$ to $$x' = A\,x + B\,\eta + C\,v$$ with $$x(0) = x_0 \in K_0$$ and any measurable noise $$\eta (t) \in U(t)$$ (e.g., [36, Ch. 9]). The key goal of OFC is to specify a control strategy $$v = v(t, {\tilde{K}}) \in V$$ in terms of the time $$t \in [0,T]$$ and a set-valued state estimation $${\tilde{K}} \subset {{\mathbb {R}}^n}$$ which “for any starting position $$(t_0, K_0)$$, $$0 \le t_0 < T$$, would bring $$x(T; \,t_0, x_0) \in {{\mathbb {R}}^n}$$ [of a solution x with $$x(t_0) = x_0$$] to a preassigned neighborhood of the given target set $$M \subset {{\mathbb {R}}^n}$$ at given time T—whatever” the “uncertain item” $$\eta (s) \in U(s)$$ $$(s \in [t_0, t])$$ is [36, p. 375]. In terms of set-valued analysis, $$K_v(t)$$ is the reachable set of the initial set $$K_0 \subset {{\mathbb {R}}^n}$$ and the differential inclusion $$x' \in A(s) \,x + B(s)\, U(s) + C(s) \; v(s)$$ at time t, and the OFC problem focuses on a closed-loop control $$v = v(t, {\tilde{K}})$$ depending on a set $${\tilde{K}}$$ of states (and not a state vector).

A relaxed OFC problem can be formulated as an optimal control problem: Minimize $$\text{ dist }\big (K(T), \, M \big )$$ $${\mathop {=}\limits ^{\mathrm{\tiny Def.}}}$$ $$\sup _{\xi \,\in \,K(T)} \, \text{ dist }(\xi , M) \ge 0$$ over all set-valued maps $$K:[0,T] \leadsto {{\mathbb {R}}^n}$$ and measurable controls $$v: [0,T] \longrightarrow V$$ such that each K(t) coincides with the reachable set of $$K_0$$ and $$x' \in A(s) \,x + B(s)\, U(s) + C(s) \; v$$ at time t. Indeed, if the minimum is attained and smaller than a given threshold $$\rho > 0$$, then K(T) is contained in the $$\rho$$-neighborhood of M (as demanded originally).

This problem has already been solved by Kurzhanski and Varaiya, even with some generalizations. We mention it here because we consider it an excellent example of how useful sets (as states) instead of vectors can be for handling bounded (but) unknown perturbations deterministically. It is the first step toward a broad class of optimization problems. Further examples of set-oriented descent methods with applications in image processing are discussed in, e.g., [22, 39, 40, 66].

In general, reachable sets play an important role in deterministic systems with (bounded) “unknown noise” or other forms of lacking information (about initial states and parameters) because the reachable set is the smallest set containing all attainable states, whatever the bounded uncertainties are (see, e.g., ). For nonlinear differential inclusions, several established numerical methods are related to grids (e.g., [6, 8, 28, 57]) or based on the level-set formulation and its HJB equation (e.g., [38, 45, 46, 65]). Hence, they are usually expensive and suffer from the “curse of dimensionality”. For linear differential inclusions, the situation is different as the convexity of initial sets is obviously preserved. Chernousko, Kurzhanski and others suggest ellipsoids for approximating these convex sets as supersets and subsets, respectively (see, e.g., [13,14,15,16, 23, 31, 33, 34, 36]). This special subclass of convex sets has the advantage that the evolution of an ellipsoid can be described in the form of ordinary differential equations (ODEs) for its center and the underlying positive definite matrix. Hence, it reduces the numerical effort in high dimensions significantly. As the price to pay, however, this approach is usually applied to linear (or linearized) differential inclusions. For approximating solutions within any given collection of sets, Quincampoix and Veliov suggest a general framework in  (but without a concrete numerical algorithm).

This article aims at a rather cheap numerical approximation algorithm for a larger class of set evolution problems: In comparison with Kurzhanski’s OFC problem, the closed-loop control $$v = v(t, {\tilde{K}})$$ has nonlinear influence on the state vector x(t), i.e., the linear control equation $$x' = A(t) \; x + B(t) \; \eta + C(t) \; v$$ (with bounded “unknown noise” $$\eta (t) \in U(t)$$) is replaced by $$x' = A(t, v) \; x + B(t, v) \; \eta$$. Then, the same ansatz $$v = v(t, {\tilde{K}})$$ leads to a differential inclusion $$x'(t) \in {{{\mathcal {A}}}}\big (t, K(t)\big ) \, x(t) + {{{\mathcal {B}}}}\big (t, K(t) \big ) \, U(t)$$ whose coefficient matrices depend on its own reachable set K(t).

We extend the ellipsoidal approximation which Kurzhanski et al. originally developed for linear differential inclusions. In particular, our sufficient conditions on the coefficients concern two aspects: Firstly, this set evolution problem is well posed. Secondly, the inclusion property is preserved, i.e., whenever K(0) is contained in an ellipsoid E(0), then each of them evolves independently of the other in such a way that $$K(t) \subset E(t)$$ holds for every t.

In the following, the well-posedness of the set evolution problem and the inclusion property of any two solutions are handled even for nonlinear differential inclusions and compact (not necessarily convex) sets. This part extends various results in, e.g., [3, 4, 17,18,19, 37, 41, 44, 49, 53, 54, 62]. Then, in favor of fast numerical methods, the external approximation by ellipsoids is restricted to differential inclusions $$x' \in {{{\mathcal {A}}}}(\cdot , K) \, x + {{{\mathcal {B}}}}(\cdot , K) \, U$$ (i.e., linear in x and $$\eta \in U$$). It is worth mentioning that its linear aspects do not concern the compact set $$K(t) \subset {{\mathbb {R}}^n}$$ or its set properties. We present sufficient conditions on the matrix coefficients such that the wanted set K(t) is contained in the intersection of finitely many ellipsoids whose time-dependent centers and positive definite matrices solve an ODE system. The Pompeiu-Hausdorff distance between K(t) and the intersection can be made arbitrarily small by choosing the number of ellipsoids sufficiently large. We also give an example in which K(t) need not have common boundary points with each of the ellipsoids (as known in the classical case of linear inclusions, see Proposition 3.1 (2.) below).

This article is structured as follows. First, we summarize the notation used below. Section 2 specifies how the evolution of compact sets in time can be characterized in various (but equivalent) ways. It lays the foundations of what we call set evolution equations, and we give results about their initial value problems (IVP). In Sect. 3, we summarize the method by Kurzhanski et al. for external ellipsoidal approximations of solutions to linear control systems. Then, it is extended to a new class of set evolution problems, and a nonlinear ODE system is suggested for any given number of ellipsoids whose intersections serve as approximations. Section 4 contains a numerical example. All proofs are collected in Sect. 5.

Notation    For any dimension $$n\in {\mathbb {N}}$$, $${{{\mathcal {K}}}}({{\mathbb {R}}^n})$$ denotes the set of all nonempty compact subsets of $${{\mathbb {R}}^n}$$. $${{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n})$$ abbreviates the set of all nonempty compact convex subsets of $${{\mathbb {R}}^n}$$. $$\Vert \cdot \Vert$$ is the Euclidean norm on $${{\mathbb {R}}^n}$$, $$\Vert \cdot \Vert _{\text{ op }}$$ the related matrix norm. On the basis of the so-called Pompeiu-Hausdorff excess

\begin{aligned} {\mathbbm {e}}(A, \, B) \,\, {:}{=} \,\, \sup _{x\,\in \,A} \; \text{ dist }(x, B) \,\, {\mathop {=}\limits ^{\mathrm{\tiny Def.}}} \,\, \sup _{x\,\in \,A} \;\, \inf _{y\,\in \,B} \;\, \Vert x - y\Vert \qquad \big (A, B \in {{{\mathcal {K}}}}({{\mathbb {R}}^n}) \big ), \end{aligned}

both $${{{\mathcal {K}}}}({{\mathbb {R}}^n})$$ and its subset $${{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n})$$ are usually supplied with the Pompeiu-Hausdorff metric

\begin{aligned} {\mathbbm {d}}(A, B) \,\, {:}{=} \,\, \max \big \{ {\mathbbm {e}}(A, \, B), \,\; {\mathbbm {e}}(B, \, A) \big \} \,\, = \,\, \sup _{x \,\in \,{{\mathbb {R}}^n}} \; \big | \text{ dist }(x, A) - \text{ dist }(x, B) \big | \qquad \big (A, B \in {{{\mathcal {K}}}}({{\mathbb {R}}^n}) \big ). \end{aligned}

These metric spaces are known to be complete, locally compact and thus separable (e.g., [4, 7, 30, 58]). The gap between $$A, B \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$ is defined as $$\; {\mathbbm {g}}(A, \, B) {:}{=} \inf _{x\,\in \,A} \; \text{ dist }(x, B) {\mathop {=}\limits ^{\mathrm{\tiny Def.}}} \inf _{x\,\in \,A} \;\, \inf _{y\,\in \,B} \;\, \Vert x - y\Vert .$$
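These quantities are directly computable for finite point clouds, which often serve as discrete stand-ins for compact sets in numerical work. A minimal Python sketch (all function names are ours) evaluating $${\mathbbm {e}}$$, $${\mathbbm {d}}$$ and $${\mathbbm {g}}$$ for sets given as rows of NumPy arrays:

```python
import numpy as np

def excess(A, B):
    """Pompeiu-Hausdorff excess e(A, B) = sup_{x in A} dist(x, B)
    for finite point clouds given as (N, n) arrays."""
    # dists[i, j] = ||A[i] - B[j]||
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return dists.min(axis=1).max()

def hausdorff(A, B):
    """Pompeiu-Hausdorff metric d(A, B) = max{e(A, B), e(B, A)}."""
    return max(excess(A, B), excess(B, A))

def gap(A, B):
    """Gap g(A, B) = inf_{x in A} dist(x, B)."""
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return dists.min()
```

Note that the excess is not symmetric, which is why $${\mathbbm {d}}$$ takes the maximum over both directions.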

Moreover, we use the same arrow $$\leadsto$$ for a set-valued (or multivalued) map as, e.g., [4, 5], i.e., for any nonempty sets Y, Z given, $$g: Y \leadsto Z$$ is a mapping relating each element $$y \in Y$$ to a subset $$g(y) \subset Z$$, which might consist of more than one element of Z.

A set-valued map is called a tube whenever it is defined on a subinterval of $${\mathbb {R}}$$ and has nonempty set values. Further properties of its set values like compactness are usually mentioned explicitly (if required).

$${{{\mathcal {L}}}}^n$$ denotes the Lebesgue measure on $${{\mathbb {R}}^n}$$. Set $${{\mathbb {B}}}_R {:}{=} \big \{ x \in {{\mathbb {R}}^n}$$ $$\big |$$ $$\Vert x \Vert < R \big \}$$ and $${\overline{{\mathbb {B}}}}_R {:}{=} \big \{ x \in {{\mathbb {R}}^n}$$ $$\big |$$ $$\Vert x \Vert \le R \big \}$$ for $$R \ge 0$$. Solutions to ordinary differential equations or inclusions are usually understood in the sense of Carathéodory (unless stated otherwise).

## 2 Evolution Equations for Compact Subsets of $${{\mathbb {R}}^n}$$

### 2.1 Reachable Sets Plus Feedback Lead to Set Evolution Equations

Consider a control system $$x' \in g(t, x, U)$$ (a.e.) where a function $$g: [0,T] \times {{\mathbb {R}}^n}\times {{\mathbb {R}}^m}\longrightarrow {{\mathbb {R}}^n}$$ and a nonempty control subset $$U \subset {{\mathbb {R}}^m}$$ are given. The reachable set of an initial set $$K_0 \subset {{\mathbb {R}}^n}$$ at time $$t \in [0,T]$$ is defined as

\begin{aligned} {{{\mathcal {R}}}}_{g(\cdot , \cdot , U)}(t, K_0) \,\, {:}{=} \,\, \big \{ x(t) \,\, \big | \,\, x: [0,t] \longrightarrow {{\mathbb {R}}^n}\text{ absolutely continuous}, \; x' \in g(\cdot , x, U) \text{ a.e. in } [0,t], \; x(0) \in K_0 \big \}. \end{aligned}
(1)

Under appropriate assumptions about g and U, each absolutely continuous solution x :  $$[0,t] \longrightarrow {{\mathbb {R}}^n}$$ to the differential inclusion $$x' \in g \big (\cdot , x, U \big )$$ is related to a Lebesgue measurable control u :  $$[0,t] \longrightarrow U$$ with $$x'(s) = g \big ( s, x(s), u(s) \big )$$ for a.e. $$s \in [0,t]$$ due to Filippov’s well-known selection theorem (e.g., [5, Theorem 8.2.10] or [30, Prop. II.2.25]). Hence, we prefer the focus on differential inclusions instead of ordinary differential equations with time-dependent control.

From the conceptual point of view, reachable sets can be interpreted as a way of “integrating” nonempty (usually closed) subsets w.r.t. time. In the special case, for example, that g does not depend on the state vector $$x \in {{\mathbb {R}}^n}$$ explicitly, i.e., whenever $$g = g(t, u)$$, the reachable set $${{{\mathcal {R}}}}_{g(\cdot , U)}(t, K_0)$$ can be expressed in terms of an Aumann integral (w.r.t. $${{{\mathcal {L}}}}^1$$): $${{{\mathcal {R}}}}_{g(\cdot , U)}(t, K_0) = K_0 + \int _0^t g(s, U) \,\, \mathrm{d} s$$. (More details about this relationship between set integrals and reachable sets, including generalizations, can be found in, e.g., [40, 42, 43].)
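In the convex-valued case, this kind of set integration becomes concrete via support functions: the support function of an Aumann integral is the integral of the support functions. A small Python sketch based on this standard identity (midpoint rule, U replaced by a finite sample; all names are ours):

```python
import numpy as np

def support_of_integral(g, U, t, direction, steps=200):
    """Approximate the support function of I = int_0^t g(s, U) ds via
    h_I(l) = int_0^t max_{u in U} <l, g(s, u)> ds  (midpoint rule,
    U replaced by a finite sample of control values)."""
    h = t / steps
    total = 0.0
    for k in range(steps):
        s = (k + 0.5) * h
        total += h * max(float(np.dot(direction, g(s, u))) for u in U)
    return total
```

For instance, for $$g(s,u) = s\,u$$ in one dimension with $$U = \{-1, 1\}$$, the support value in direction 1 is $$\int _0^1 s \, \mathrm{d} s = 0.5$$.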

In the next step, we aim at an additional feedback w.r.t. the current compact subset of $${{\mathbb {R}}^n}$$. This extension is motivated by Kurzhanski’s OFC problem (mentioned in the introduction) and by examples of descent methods in image segmentation, nonlocal agent-population interactions with closed-loop strategies and deterministic approaches to robust control problems (see, e.g., [17, 22, 40, 43, 66]). Assume that the right-hand side of the differential inclusion depends on a further argument, namely a nonempty compact subset of the state space $${{\mathbb {R}}^n}$$, i.e., we consider the function f :  $$[0,T] \times {{\mathbb {R}}^n}\times {{{\mathcal {K}}}}({{\mathbb {R}}^n}) \times U$$ $$\longrightarrow$$ $${{\mathbb {R}}^n}$$ (with a fixed nonempty compact control set $$U \subset {{\mathbb {R}}^m}$$) instead of $$g: [0,T] \times {{\mathbb {R}}^n}\times {{\mathbb {R}}^m}\longrightarrow {{\mathbb {R}}^n}$$. Each compact-valued tube $$K: [0,T] \leadsto {{\mathbb {R}}^n}$$ (or, equivalently, every single-valued function $$K: [0,T] \longrightarrow {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$) leads to a nonautonomous differential inclusion $$x'(t) \in f \big ( t, \, x(t), \, K(t), \, U \big )$$ (a.e.) and its reachable set $${{{\mathcal {R}}}}_{f(\cdot , \cdot , K, U)}(t, K_0)$$ at time $$t \in [0,T]$$. For f, U and $$K_0 \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$ given, the problem is now to find a compact-valued tube $$K(\cdot )$$ satisfying $$K(t) = {{{\mathcal {R}}}}_{f(\cdot , \cdot , K, U)}(t, K_0)$$ for every $$t \in [0,T]$$. In particular, the feedback mentioned previously concerns the differential inclusion, which depends on $$K(\cdot )$$. Sufficient conditions for well-posedness and several examples are investigated in, e.g., .

### 2.2 Differential Characterizations of These Compact-Valued Solutions

Reachable sets of differential inclusions are also characterized by means of the so-called integral funnel equation. This approach is very popular among Russian mathematicians like Filippova, Kurzhanski, Panasyuk, Tolstonogov and collaborators (e.g., [34, 48, 50,51,52, 61, 62] and related references).

Indeed, let the compact initial set $$K_0 \subset {{\mathbb {R}}^n}$$, the nonempty control set $$U \subset {{\mathbb {R}}^m}$$ and the function $$g: [0,T] \times {{\mathbb {R}}^n}\times U \longrightarrow {{\mathbb {R}}^n}$$ be given. Under appropriate assumptions, the compact-valued tube of reachable sets K :  $$[0,T] \leadsto {{\mathbb {R}}^n}$$, $$t \mapsto K(t) \, {:}{=} \, {{{\mathcal {R}}}}_{g(\cdot , \cdot , U)}(t, K_0)$$ (in the sense of Eq. (1)) fulfills

\begin{aligned} \lim _{h\,\downarrow \,0} \;\, {\textstyle \frac{1}{h}} \cdot {\mathbbm {d}}\Big ( K(t+h), \,\, \bigcup _{x\,\in \, K(t)} \; \big ( x + h \cdot g (t, x, U) \big ) \Big ) \,\, = \,\, 0 \end{aligned}
(2)

at a.e. time instant $$t \in [0,T)$$. Furthermore, slightly stronger hypotheses about U and g even guarantee that it is the only Lipschitz continuous compact-valued tube $$K: [0,T] \leadsto {{\mathbb {R}}^n}$$ with $$K(0) = K_0$$ satisfying this integral funnel equation (2) for a.e. $$t \in [0,T)$$.
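A direct, if combinatorially expensive, discretization of Eq. (2) replaces both K(t) and U by finite samples and applies one explicit Euler step per time increment. The following Python sketch (all names are ours) is meant only to illustrate the structure of the funnel equation, not as a practical scheme, since the sample size grows by a factor of the number of control samples in every step:

```python
import numpy as np

def funnel_euler_step(K, g, U, t, h):
    """One Euler step of the funnel equation: K(t+h) is approximated by
    the union over x in K(t) of x + h * g(t, x, U), with K, U finite."""
    return np.array([x + h * np.asarray(g(t, x, u)) for x in K for u in U])

def funnel_euler(K0, g, U, T, steps):
    """Iterate the step on [0, T]; the point count multiplies by len(U)
    in every step, so keep steps small in this conceptual sketch."""
    h = T / steps
    K = np.asarray(K0, dtype=float)
    for k in range(steps):
        K = funnel_euler_step(K, g, U, k * h, h)
    return K
```

For the pure integrator $$g(t,x,u) = u$$ with $$U = \{-1, 1\}$$ and $$K_0 = \{0\}$$, five steps on [0, 1] produce 32 sample points spread over $$[-1, 1]$$.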

Aubin suggests an alternative criterion of differential type and chooses it as the starting point of his so-called morphological equations (in the metric space $$({{{\mathcal {K}}}}({{\mathbb {R}}^n}), {\mathbbm {d}})$$) (see, e.g., [2,3,4]). At time instant $$t \in [0,T)$$ and for a short period $$h > 0$$, it is now the reachable set $${{{\mathcal {R}}}}_{g(t, \cdot , U)} \big (h, \, K(t) \big ) \subset {{\mathbb {R}}^n}$$ of the autonomous differential inclusion $$y'(s) \in g \big (t, \,y(s), \, U \big )$$ a.e. in [0, h] which induces an approximation of $$K(t+h) \subset {{\mathbb {R}}^n}$$. Similarly to time derivatives of curves in a normed vector space, the distance between them is to vanish “in first order” for $$h \downarrow 0$$.

In shape sensitivity analysis and shape optimization, the special case of $$U \subset {{\mathbb {R}}^m}$$ consisting of just a single vector leads to the so-called shape derivatives used by Delfour, Sokołowski, Zolésio and others in the so-called velocity method (see, e.g., [20, 21, 59] and references therein).

In more detail, sufficient conditions on $$U \subset {{\mathbb {R}}^m}$$ and g :  $$[0,T] \times {{\mathbb {R}}^n}\times U$$ $$\longrightarrow$$ $${{\mathbb {R}}^n}$$ are known such that the tube $$[0,T] \leadsto {{\mathbb {R}}^n}$$, $$t \mapsto {{{\mathcal {R}}}}_{g(\cdot , \cdot , U)}(t, K_0)$$ of reachable sets (of the nonautonomous differential inclusion $$x' \in g(\cdot , x, U)$$) is the only Lipschitz continuous compact-valued tube K :  $$[0,T] \leadsto {{\mathbb {R}}^n}$$ with $$K(0) = K_0 \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$ and

\begin{aligned} \lim _{h\,\downarrow \,0} \;\, {\textstyle \frac{1}{h}} \cdot {\mathbbm {d}}\Big ( K(t+h), \,\, {{{\mathcal {R}}}}_{g(t, \cdot , U)} \big (h, \, K(t) \big ) \Big ) \,\, = \,\, 0. \end{aligned}
(3)

Hence, we have three criteria on the reachable sets of a nonautonomous differential inclusion $$x' \in g(\cdot , x, U)$$ and a compact initial set $$K_0 \subset {{\mathbb {R}}^n}$$. Now we implement the additional aspect of set feedback (again) and obtain the following result about set evolution equations. Roughly speaking, it represents a special case of [42, Theorem 1], which concerns closed-valued tubes evolving along nonautonomous evolution inclusions in a separable Banach space (instead of $${{\mathbb {R}}^n}$$), and thus we do not give a proof in detail.

### Proposition 2.1

(Equivalent criteria for set evolution equations) Let $$T > 0$$, $$U \subset {{\mathbb {R}}^m}$$ be nonempty compact and $$f: [0,T] \times {{\mathbb {R}}^n}\times {{{\mathcal {K}}}}({{\mathbb {R}}^n}) \times U \longrightarrow {{\mathbb {R}}^n}$$ satisfy these conditions:

1. (i)

For all $$t \in [0,T]$$, $$x \in {{\mathbb {R}}^n}$$ and $$M \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, the set $$f(t,x,M,U)$$ $${\mathop {=}\limits ^{\mathrm{\tiny Def.}}}$$ $$\big \{ f(t,x,M,u)$$ $$\, \big | \,$$ $$u \in U \big \}$$ $$\subset$$ $${{\mathbb {R}}^n}$$ is closed and convex.

2. (ii)

(measurable in $$t$$) For all $$x \in {{\mathbb {R}}^n}$$, $$M \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$ and $$u \in U$$, $$f(\,\cdot \,, x, M, u):$$ $$[0,T] \longrightarrow {{\mathbb {R}}^n}$$ is measurable.

3. (iii)

(continuous w.r.t. $$u$$) For all $$t \in [0,T]$$, $$x \in {{\mathbb {R}}^n}$$ and $$M \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, $$f(t, x, M, \,\cdot \,):$$ $$U \longrightarrow {{\mathbb {R}}^n}$$ is continuous.

4. (iv)

(Lipschitz continuous w.r.t. $$x$$) There is $$\lambda \ge 0$$ such that for all $$t \in [0,T]$$, $$u \in U$$ and $$M \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, $$f(t, \,\cdot \,, M,u):$$ $${{\mathbb {R}}^n}\longrightarrow {{\mathbb {R}}^n}$$ is $$\lambda$$-Lipschitz continuous.

5. (v)

(continuous w.r.t. the compact set) For all $$x \in {{\mathbb {R}}^n}$$, $$u \in U$$ and a.e. $$t \in$$ [0, T], the function $$f(t, x, \,\cdot \,,u):$$ $$\big ( {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, $${\mathbbm {d}}\big )$$ $$\longrightarrow$$ $${{\mathbb {R}}^n}$$ is continuous.

6. (vi)

(uniform linear growth w.r.t. x only) There is $$\Gamma \ge 0$$ such that $$\big \Vert f(t,x,M,u) \big \Vert \le \Gamma \; \big ( 1 + \Vert x \Vert \big )$$ for all $$t \in [0,T]$$, $$x \in {{\mathbb {R}}^n}$$, $$M \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$ and $$u \in U$$.

Then, the following statements are equivalent for every compact-valued tube $$K: [0,T] \leadsto {{\mathbb {R}}^n}:$$

1. (1.)

At every time $$t \in [0,T]$$, $$K(t) \subset {{\mathbb {R}}^n}$$ coincides with the simultaneous reachable set of $$K(0) \subset {{\mathbb {R}}^n}$$ and the nonautonomous differential inclusion $$x'(s) \in f \big ( s, \, x(s), \, K(s), \, U \big )$$ for a.e. $$s \in [0,t]$$.

2. (2.)

$$K(\cdot )$$ is Lipschitz (w.r.t. $${\mathbbm {d}})$$ and $$\displaystyle \lim _{h\,\downarrow \,0} {\textstyle \frac{1}{h}} \cdot {\mathbbm {d}}\Big ( K(t+h), \, \bigcup _{x\,\in \, K(t)} \big ( x + h \cdot {f(t, x, K(t), U)} \big ) \Big ) \, = \, 0$$ for a.e. t.

3. (3.)

$$K(\cdot )$$ is Lipschitz (w.r.t. $${\mathbbm {d}})$$ and fulfills $$\lim _{h\,\downarrow \,0} \;\, {\textstyle \frac{1}{h}} \cdot {\mathbbm {d}}\Big ( K(t+h), \,{{{\mathcal {R}}}}_{f(t, \,\cdot \,, \,K(t), \,U)} \big (h, \, K(t) \big ) \Big ) \, = \, 0$$ for a.e. t where $${{{\mathcal {R}}}}_{f(t, \,\cdot \,, \,K(t), \,U)} \big (h, \, K(t) \big ) \subset {{\mathbb {R}}^n}$$ denotes the reachable set of the initial set K(t) and the autonomous differential inclusion $$y' \in f \big (t, \, y(\cdot ), \, K(t), \,U \big )$$ a.e. in [0, h] at time $$h \ge 0$$.

### 2.3 Compact-Valued Solutions to the Initial Value Problem

In the search for solutions to the corresponding IVP without state constraints, standard ODE methods like the Euler method or successive approximation can be adapted to the compact-valued setting. An additional assumption of Lipschitz continuity (w.r.t. the set argument) proves to be sufficient for extending the Cauchy-Lipschitz theorem about both existence and uniqueness. Various theorems about this topic can be found in references like [3, 4, 17, 19, 37, 41, 44, 53, 62].
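To make the successive-approximation idea concrete in the simplest nontrivial setting, the following Python sketch treats a toy one-dimensional example of our own choosing (not taken from the cited references): $$x' \in -x + r\big (K(t)\big ) \cdot [-1,1]$$ with the set-feedback term $$r(K) {:}{=} \sup _{y \in K} |y|$$ and a symmetric interval as initial set.

```python
import numpy as np

def solve_set_ivp(k0, T=1.0, steps=200, iters=25):
    """Successive approximation for the toy set evolution IVP
        x'(t) in -x(t) + r(K(t)) * [-1, 1],   K(0) = [-k0, k0],
    with feedback term r(K) = sup_{y in K} |y|.  Symmetric intervals
    [-k(t), k(t)] are encoded by their radius on a time grid; each sweep
    freezes the previous tube K^m in the set argument and integrates the
    upper-endpoint ODE  k' = -k + r_m(t)  by the explicit Euler method."""
    h = T / steps
    r = np.zeros(steps + 1)            # initial guess K^0(t) = {0}
    for _ in range(iters):
        k = np.empty(steps + 1)
        k[0] = k0
        for i in range(steps):
            k[i + 1] = k[i] + h * (-k[i] + r[i])
        r = k                          # radii of the next iterate K^{m+1}
    return r
```

For this particular f, the feedback exactly balances the contraction, so the constant tube $$K(t) \equiv K(0)$$ is the unique solution tube; the iteration recovers it from the deliberately wrong initial guess $$K^0(t) \equiv \{0\}$$.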

The following statement is a reformulation of [17, Theorem 4.2] under the slightly stronger assumptions that the linear growth condition and the Lipschitz constants are uniform w.r.t. x here (see also, e.g., [4, Theorem 4.1.2], [41, Theorem 1.72], [62, § 1]).

### Proposition 2.2

(Existence and uniqueness of solution tubes) In addition to the assumptions of Proposition 2.1 about the control set $$U \subset {{\mathbb {R}}^m}$$ and the function f :  $$[0,T] \times {{\mathbb {R}}^n}\times {{{\mathcal {K}}}}({{\mathbb {R}}^n}) \times U$$ $$\longrightarrow$$ $${{\mathbb {R}}^n}$$, suppose the following condition:

1. v’

(Lipschitz continuous w.r.t. the compact set) There exists $$\Lambda \in L^1([0,T])$$ such that for all $$x \in {{\mathbb {R}}^n}$$, $$u \in U$$ and a.e. $$t \in$$ [0, T], $$f(t, x, \,\cdot \,,u):$$ $$\big ( {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, $${\mathbbm {d}}\big )$$ $$\longrightarrow$$ $${{\mathbb {R}}^n}$$ is $$\Lambda (t)$$-Lipschitz continuous.

Then for every $$K_0 \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, there exists a unique compact-valued tube K :  $$[0,T] \leadsto {{\mathbb {R}}^n}$$ with $$K(0) = K_0$$ satisfying the three equivalent conditions 2.1(1.)–(3.).

### Definition 2.3

Let $$K_0 \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$ and f :  $$[0,T] \times {{\mathbb {R}}^n}\times {{{\mathcal {K}}}}({{\mathbb {R}}^n}) \times U$$ $$\longrightarrow$$ $${{\mathbb {R}}^n}$$ be given as in Proposition 2.2. The compact-valued tube K :  $$[0,T] \leadsto {{\mathbb {R}}^n}$$ uniquely specified there is called the solution tube of the IVP $$\mathring{K}(t) = f \big ( t, \, \cdot , K(t), \, U \big )$$ in [0, T], $$K(0) = K_0$$.

### Remark 2.4

From now on, we do not really distinguish between two established concepts, i.e., quasidifferential equations by Panasyuk applied to $$\big ( {{{\mathcal {K}}}}({{\mathbb {R}}^n}), {\mathbbm {d}}\big )$$ (e.g., [49, 53, 54]) and morphological equations by Aubin, which are his mutational equations applied to $$\big ( {{{\mathcal {K}}}}({{\mathbb {R}}^n}), {\mathbbm {d}}\big )$$ and the “transitions” induced by reachable sets (see, e.g., [3, 4, 41]). Indeed, Proposition 2.1 states their equivalence under suitable assumptions.

Several publications characterize the solution to a set evolution equation in terms of the Hukuhara derivative (w.r.t. time), and thus, the tubes are always assumed to be convex-valued (see, e.g., [37, 44, 62] and related references). Under suitable assumptions about $${\widetilde{f}}:$$ $$[0,T] \times {{{\mathcal {K}}}}({{\mathbb {R}}^n}) \times U$$ $$\longrightarrow$$ $${{\mathbb {R}}^n}$$, a convex-valued tube K :  $$[0,T] \leadsto {{\mathbb {R}}^n}$$ “solves” the differential equation $$D_H K(t) = {\widetilde{f}} \big (t, K(t), U \big )$$ (in that sense) if and only if it fulfills the following condition on Aumann integrals for every $$t \in [0,T]$$

\begin{aligned} K(t) \,\, = \,\, K(0) \, + \, \int _0^t {\widetilde{f}} \big (s, \, K(s), \, U \big ) \,\; \mathrm{d} s. \end{aligned}
(4)

As a consequence, we consider that concept as the special case of our approach in which the function f :  $$[0,T] \times {{\mathbb {R}}^n}\times {{{\mathcal {K}}}}({{\mathbb {R}}^n}) \times U$$ $$\longrightarrow$$ $${{\mathbb {R}}^n}$$ does not depend on the state $$x \in {{\mathbb {R}}^n}$$ explicitly. Indeed, the right-hand side in Eq. (4) coincides with the reachable set of $$K(0) \subset {{\mathbb {R}}^n}$$ and $$y' \in {\widetilde{f}} \big (\,\cdot \,, \, K(\cdot ), \, U \big )$$ at time t (as mentioned in Sect. 2.1).

It is worth mentioning that even in the autonomous linear case, state $$x \in {{\mathbb {R}}^n}$$ and set $$K(s) \subset {{\mathbb {R}}^n}$$ cannot be simply exchanged with each other. Indeed, Tolstonogov gives the example in $${\mathbb {R}}$$ [62, p. 209 f.] that the reachable interval of the single initial state $$0 \in {\mathbb {R}}$$ and $$x' \in - \alpha x + U$$ (with $$\alpha > 0$$ and $$U = -U \subset {\mathbb {R}}$$ having more than one element) does not coincide with the solution K(t) of $$K(t) = \displaystyle \int _0^t \big (- \alpha \, K(s) + U \big ) \,\, \mathrm{d} s$$.
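For the concrete instance $$U = [-1, 1]$$ (our choice of data within Tolstonogov’s setting), both tubes consist of symmetric intervals whose radii admit closed forms, and the mismatch is plain; a short Python check (function names ours):

```python
import numpy as np

def reach_radius(alpha, t):
    """Radius of the reachable set of x' in -alpha*x + [-1, 1] from {0}:
    the endpoints solve x' = -alpha*x +/- 1, so the radius is
    (1 - exp(-alpha*t)) / alpha."""
    return (1.0 - np.exp(-alpha * t)) / alpha

def integral_radius(alpha, t):
    """Radius of the solution of K(t) = int_0^t (-alpha*K(s) + [-1,1]) ds:
    for a symmetric interval K(s) = [-k, k], the set -alpha*K(s) equals
    [-alpha*k, alpha*k], so k' = alpha*k + 1 and k(t) = (exp(alpha*t)-1)/alpha."""
    return (np.exp(alpha * t) - 1.0) / alpha
```

At $$\alpha = 1$$, $$t = 1$$ the two radii are about 0.632 and 1.718, respectively, confirming that the reachable set and the integral-equation solution differ.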

### Proposition 2.5

(Continuous dependence on data) Let $$U \subset {{\mathbb {R}}^m}$$ and $$f_1, f_2:$$ $$[0,T] \times {{\mathbb {R}}^n}\times {{{\mathcal {K}}}}({{\mathbb {R}}^n}) \times U$$ $$\longrightarrow$$ $${{\mathbb {R}}^n}$$ fulfill the assumptions of Proposition 2.2 (with the same $$\Lambda \in L^1([0,T])$$ and $$\lambda , \Gamma \ge 0$$).

Then for any initial sets $$K_0, M_0 \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$ with $$K_0 \cup M_0 \subset {\overline{{\mathbb {B}}}}_r$$, the respective solution tubes K, M :  $$[0,T] \leadsto {{\mathbb {R}}^n}$$ satisfy an a priori bound with $$R {:}{=} (r + \Gamma \, T) \; e^{\Gamma \, T}$$ at every time $$t \in [0,T]$$.

### 2.4 The Inclusion Principle for Set Evolution Equations

Now we focus on sufficient conditions on the coefficients such that $$K(0) \subset M(0) \cap {\widetilde{M}}(0) \subset {{\mathbb {R}}^n}$$ always implies $$K(t) \subset M(t) \cap {\widetilde{M}}(t)$$ at each time $$t \in [0,T]$$ for the respective solutions K, M, $${\widetilde{M}}:$$ $$[0,T] \leadsto {{\mathbb {R}}^n}$$. The following result extends [4, Theorem 4.3.3] to nonautonomous set evolution equations whose functions on the right-hand side are just measurable w.r.t. time.

### Definition 2.6

[5, Definition 4.1.5] Let $$K \subset {{\mathbb {R}}^n}$$ be nonempty and $$x \in {\overline{K}}$$. The intermediate cone or adjacent cone $$T^\flat _K(x)$$ of K at x is defined as $$T^\flat _K(x) {:}{=} \big \{ v \in {{\mathbb {R}}^n}\; \big | \; \lim _{h\,\downarrow \,0} \,\,{\textstyle \frac{1}{h}} \cdot \text{ dist }(x + h \, v, \, K) \; = \, 0 \big \}.$$

### Proposition 2.7

(Inclusion principle of solution tubes) Let $$T > 0$$, $$U \subset {{\mathbb {R}}^m}$$ and f, g, $${\widetilde{g}}:$$ $$[0,T] \times {{\mathbb {R}}^n}\times {{{\mathcal {K}}}}({{\mathbb {R}}^n}) \times U$$ $$\longrightarrow$$ $${{\mathbb {R}}^n}$$ satisfy the hypotheses of Proposition 2.1 and

1. (v’)

There exists $$\Lambda \ge 0$$ such that $$f(t, x, \,\cdot \,,u)$$, $$g(t, x, \,\cdot \,,u)$$, $${\widetilde{g}}(t, x, \,\cdot \,,u):$$ $$\big ( {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, $${\mathbbm {d}}\big )$$ $$\longrightarrow$$ $${{\mathbb {R}}^n}$$ are $$\Lambda$$-Lipschitz continuous for all $$x \in {{\mathbb {R}}^n}$$, $$u \in U$$ and $$t \in$$ [0, T].

2. (vii’)

For a.e. $$t \in [0,T]$$ and all $$x \in {{\mathbb {R}}^n}$$, $$K, M_1, M_2 \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$ with $$x \in K \subset M_1 \cap M_2$$, it holds

\begin{aligned} f(t, x, K, U) \, \subset \, \big ( g(t, x, M_1, U) + T^\flat _{M_1}(x) \big ) \, \cap \, \big ( {\widetilde{g}}(t, x, M_2, U) + T^\flat _{M_2}(x) \big ). \end{aligned}

For all initial $$K_0$$, $$M_0$$, $${\widetilde{M}}_0 \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$ with $$K_0 \subset M_0 \cap {\widetilde{M}}_0$$, the solution tubes K, M, $${\widetilde{M}}:$$ $$[0,T] \leadsto {{\mathbb {R}}^n}$$ of

\begin{aligned} \mathring{K}(t) = f \big ( t, \, \cdot , \, K(t), \, U \big ), \qquad \mathring{M}(t) = g \big ( t, \, \cdot , \, M(t), \, U \big ), \qquad \mathring{{\widetilde{M}}}(t) = {\widetilde{g}} \big ( t, \, \cdot , \, {\widetilde{M}}(t), \, U \big ) \end{aligned}

in [0, T] with $$K(0) = K_0$$, $$M(0) = M_0$$, $${\widetilde{M}}(0) = {\widetilde{M}}_0$$, respectively, satisfy $$\, K(t) \subset M(t) \,\cap \, {\widetilde{M}}(t) \,$$ for every $$t \in [0,T]$$.

## 3 Ellipsoidal Approach to External Approximations

### 3.1 Ellipsoidal Approximations of Reachable Sets for Linear Control Systems

For several decades, ellipsoids have been very popular for approximating convex compact subsets of $${{\mathbb {R}}^n}$$. Russian mathematicians, in particular, like Chernousko, Filippova, Kurzhanski and collaborators have proposed them for estimating reachable sets of control systems that are usually linear or linearized (see, e.g., [13,14,15,16, 23, 31, 33, 36] and related references).

Their key advantage is the simple algebraic characterization: Each (so-called non-degenerate) ellipsoid is determined completely by its center $$p \in {{\mathbb {R}}^n}$$ and its matrix $$Q \in {{\mathbb {R}}^{n\times n}}$$ (symmetric and positive definite)

\begin{aligned} {{{\mathcal {E}}}}(p, Q) \; {:}{=} \,\, \big \{ x \in {{\mathbb {R}}^n}\; \big | \; (x-p)^{\top }\, Q^{-1} \, (x-p) \le 1 \big \}. \end{aligned}

We start with a linear time-variant control system $$x' \in A(\cdot ) \; x + B(\cdot ) \; U$$ (a.e. in [0, T]) with A :  $$[0,T] \longrightarrow {{\mathbb {R}}^{n\times n}}$$ and B :  $$[0,T] \longrightarrow {{\mathbb {R}}^{n\times m}}$$ given (as in, e.g., [33, 36]). For the sake of simplicity, the ellipsoidal control set $$U {:}{=} {{{\mathcal {E}}}}(q_u, Q_u) \subset {{\mathbb {R}}^m}$$ does not depend on time.
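This algebraic description is also convenient computationally. A brief Python sketch (function names ours) of the membership test and of the standard support-function formula $$h_{{{{\mathcal {E}}}}(p,Q)}(\ell ) = \langle \ell , p \rangle + \sqrt{\langle \ell , Q \, \ell \rangle }$$:

```python
import numpy as np

def in_ellipsoid(x, p, Q, tol=1e-12):
    """Membership test for E(p, Q) = {x : (x-p)^T Q^{-1} (x-p) <= 1},
    Q symmetric positive definite.  Solving Q y = x - p avoids forming
    the inverse Q^{-1} explicitly."""
    d = np.asarray(x, dtype=float) - np.asarray(p, dtype=float)
    y = np.linalg.solve(Q, d)
    return float(d @ y) <= 1.0 + tol

def support_ellipsoid(l, p, Q):
    """Support function h_{E(p,Q)}(l) = <l, p> + sqrt(<l, Q l>)."""
    l = np.asarray(l, dtype=float)
    return float(l @ p + np.sqrt(l @ (Q @ l)))
```

Solving the linear system $$Q y = x - p$$ instead of inverting Q keeps the membership test numerically stable for ill-conditioned shape matrices.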

For each initial $$K_0 \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, the tube K :  $$[0,T] \leadsto {{\mathbb {R}}^n}$$ of reachable sets is characterized by the integral funnel equation (2) with $$g(t, x, U) = A(t) \, x + B(t) \, U$$. In general, the reachable set $$K(t) \subset {{\mathbb {R}}^n}$$ is not an ellipsoid though, even if $$K_0$$ is one.

For characterizing an approximating ellipsoid-valued tube E :  $$[0,T] \leadsto {{\mathbb {R}}^n}$$, Kurzhanski et al. suggest the weaker criterion (5) below based on the Pompeiu-Hausdorff excess $${\mathbbm {e}}$$ instead of the metric $${\mathbbm {d}}$$ (see, e.g., [33, § 3.3], [36, § 3.4]). It leads to the following results reformulating [36, Theorems 3.4.1, 3.4.2]. Its minimal property (2.) (c) indicates in which sense these ellipsoids are “optimal approximations” of $${{{\mathcal {R}}}}_{A\,\cdot \, + B \,U}(t, K_0)$$ within their class.

### Proposition 3.1

[36, 63] Let A :  $$[0,T] \longrightarrow {{\mathbb {R}}^{n\times n}}$$ and B :  $$[0,T] \longrightarrow {{\mathbb {R}}^{n\times m}}$$ be continuous. $$G: \big \{ (t,s) \in [0,T]^2 \; \big | \; t \ge s \big \} \longrightarrow {{\mathbb {R}}^{n\times n}}$$ denotes the fundamental matrix of $$x' = A(t) \,x$$. Suppose that the initial set $$K_0 {:}{=} {{{\mathcal {E}}}}(x_0, X_0) \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$ and the control set $$U {:}{=} {{{\mathcal {E}}}}(q_u, Q_u) \subset {{\mathbb {R}}^m}$$ are non-degenerate.

Then, the following statements hold:

1. (1.)

If an ellipsoid-valued tube E :  $$[0,T] \leadsto {{\mathbb {R}}^n}$$ is Lipschitz continuous with $$E(0) = K_0$$ and

\begin{aligned} 0 \,\, = \,\, \lim _{h\,\downarrow \,0} \,\, {\textstyle \frac{1}{h}} \cdot {\mathbbm {e}}\Big ( \big ( {\mathbbm {1}}+ h \; A(t) \big ) \; E(t) + h \; B(t) \; {{{\mathcal {E}}}}(q_u, Q_u), \;\; E(t+h)\Big ) \end{aligned}
(5)

for a.e. $$t \in [0,T]$$, then the reachable set $${{{\mathcal {R}}}}_{A\,\cdot \, + B \,U}(t, K_0) \subset {{\mathbb {R}}^n}$$ of $$x' \in A(\cdot )\,x + B(\cdot )\, U$$ is contained in E(t) at each time $$t \in [0,T]$$.

2. (2.)

In addition, assume that the control system $$x' \in A(\cdot )\,x + B(\cdot )\, U$$ (a.e.) is completely controllable. For $$\ell _0 \in {{\mathbb {R}}^n}$$ fixed arbitrarily, consider $$\ell :$$ $$[0,T] \longrightarrow {{\mathbb {R}}^n}$$, $$t \longmapsto G(0, t)^{\top }\, \ell _0$$ and let x :  $$[0,T] \longrightarrow {{\mathbb {R}}^n}$$, X :  $$[0,T] \longrightarrow {{\mathbb {R}}^{n\times n}}$$ denote the unique solutions of the IVP

\begin{aligned} x'(t)&= A(t) \; x(t) + B(t) \; q_u,&x(0)&= x_0, \\ X'(t)&= A(t) \, X(t) + X(t) \, A(t)^{\top }+ \pi (t) \, X(t) + \pi (t)^{-1} \, Q_B(t),&X(0)&= X_0 \end{aligned}

with $$Q_B(t) {:}{=} B(t) \; Q_u \; B(t)^{\top }\in {{\mathbb {R}}^{n\times n}}$$, $$\pi (t) {:}{=} \sqrt{\, \frac{ \langle \ell (t), \; Q_B(t) \, \ell (t) \rangle }{ \langle \ell (t), \; X(t) \, \ell (t) \rangle } \,} > 0$$. Then, the ellipsoid-valued tube E :  $$[0,T] \leadsto {{\mathbb {R}}^n}$$, $$t \mapsto {{{\mathcal {E}}}}\big ( x(t), X(t) \big )$$ has these properties:

1. (a)

E satisfies the limit condition (5) for each $$t \in [0,T)$$.

2. (b)

For each t, E(t) is an external approximation of $${{{\mathcal {R}}}}_{A\,\cdot \, + B \,U}(t, K_0)$$, i.e., $${{{\mathcal {R}}}}_{A\,\cdot \, + B \,U}(t, K_0) \subset E(t)$$.

3. (c)

E is minimal in the class of ellipsoids w.r.t. set inclusions, i.e., for every $$t \in [0,T]$$, there does not exist any ellipsoid $${\widetilde{E}}$$ $$\subset$$ $${{\mathbb {R}}^n}$$ with $${{{\mathcal {R}}}}_{A\,\cdot \, + B \,U}(t, K_0) \subset {\widetilde{E}} \subsetneqq E(t)$$.

4. (d)

For every $$t \in [0,T]$$, $${{{\mathcal {R}}}}_{A\,\cdot \, + B \,U}(t, K_0)$$ and E(t) have a boundary point $$\xi (t)$$ in common such that $$\ell (t)$$ is normal to both sets in $$\xi (t)$$, i.e., $$\xi (t) = x(t) +{ \langle \ell (t), \; X(t) \, \ell (t) \rangle ^{- \,\frac{1}{2}}} \; X(t) \; \ell (t) \,.$$
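For a concrete linear system, the ODE system of statement (2.) can be integrated with any standard one-step scheme. The following Python sketch (explicit Euler; the double-integrator matrix A and all numerical data are assumptions chosen only for illustration) integrates the center x(t) and shape matrix X(t) and checks that randomly simulated admissible trajectories end up inside the final ellipsoid:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0], [0.0, 0.0]])    # assumed system matrix (double integrator)
B = np.eye(2)
qu, Qu = np.zeros(2), 0.25 * np.eye(2)    # control set U = E(qu, Qu)
QB = B @ Qu @ B.T
h, steps = 1e-3, 1000                     # Euler step and horizon T = 1

# external ellipsoid E(x(t), X(t)) from Proposition 3.1 (2.), K0 = E(0, identity)
l, x, X = np.array([1.0, 0.0]), np.zeros(2), np.eye(2)
# admissible trajectories: random starts in K0, piecewise-constant boundary controls
w = rng.normal(size=(20, 2)); w /= np.linalg.norm(w, axis=1, keepdims=True)
pts = np.sqrt(rng.uniform(size=(20, 1))) * w
L = np.linalg.cholesky(Qu)
for _ in range(steps):
    pi = np.sqrt((l @ QB @ l) / (l @ X @ l))
    x = x + h * (A @ x + B @ qu)
    X = X + h * (A @ X + X @ A.T + pi * X + QB / pi)
    l = l - h * (A.T @ l)                 # adjoint equation l' = -A^T l
    v = rng.normal(size=(20, 2)); v /= np.linalg.norm(v, axis=1, keepdims=True)
    pts = pts + h * (pts @ A.T + (qu + v @ L.T) @ B.T)

# gauge <= 1 means the point lies in E(x, X) (up to discretization error)
gauge = np.einsum('ij,jk,ik->i', pts - x, np.linalg.inv(X), pts - x)
assert gauge.max() <= 1.0 + 1e-2
```

The same loop with a refined step size h reproduces the external approximation property (b) up to first-order discretization error.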

### 3.2 Some External Approximation for a Solution to a Set Evolution Equation

Now we aim to extend these results from linear time-variant control systems (and their reachable sets) to a class of set evolution equations (and their solution tubes in the sense of Definition 2.3).

Hence, the right-hand side of the set evolution equation (described by f in Sect. 2.2) is now supposed to be linear w.r.t. x and u. Reachable sets of nonautonomous linear differential inclusions are known to be always convex as a consequence of the variation of constants formula. Thus, we focus on convex compact subsets of $${{\mathbb {R}}^n}$$ instead of $${{{\mathcal {K}}}}({{\mathbb {R}}^n})$$. In analogy to the notation in Sect. 3.1, let the coefficient functions

\begin{aligned} {{{\mathcal {A}}}}: \; [0,T] \times {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n}) \longrightarrow {{\mathbb {R}}^{n\times n}}, \quad {{{\mathcal {B}}}}: \; [0,T] \times {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n}) \longrightarrow {{\mathbb {R}}^{n\times m}}\end{aligned}

be given. We consider the set evolution equation with the function

\begin{aligned}&f: \; [0,T] \times {{\mathbb {R}}^n}\times {{{\mathcal {K}}}}({{\mathbb {R}}^n}) \times U \longrightarrow {{\mathbb {R}}^n}, \quad (t, x, M, u) \\&\quad \longmapsto {{{\mathcal {A}}}}(t, \,\overline{\mathrm{co}} \;M) \, x + {{{\mathcal {B}}}}(t, \,\overline{\mathrm{co}} \;M) \, u. \end{aligned}

### Proposition 3.2

Let $${{{\mathcal {A}}}}: [0,T] \times {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n}) \longrightarrow {{\mathbb {R}}^{n\times n}}$$, $${{{\mathcal {B}}}}: [0,T] \times {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n}) \longrightarrow {{\mathbb {R}}^{n\times m}}$$ and $$U \subset {{\mathbb {R}}^m}$$ satisfy the following conditions:

1. (i)

$$U {:}{=} {{{\mathcal {E}}}}(q_u, Q_u) \subset {{\mathbb {R}}^m}$$ is non-degenerate.

2. (ii)

For every $$M \in {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n}), {{{\mathcal {A}}}}(\cdot ,M): [0,T] \longrightarrow {{\mathbb {R}}^{n\times n}}$$ and $${{{\mathcal {B}}}}(\cdot ,M): [0,T] \longrightarrow {{\mathbb {R}}^{n\times m}}$$ are measurable.

3. (iii)

There is $$\Lambda \ge 0$$ such that $${{{\mathcal {A}}}}(t, \,\cdot \,):$$ $$\big ( {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n}), {\mathbbm {d}}\big ) \longrightarrow {{\mathbb {R}}^{n\times n}}$$ and $${{{\mathcal {B}}}}(t, \,\cdot \,):$$ $$\big ( {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n}), {\mathbbm {d}}\big ) \longrightarrow {{\mathbb {R}}^{n\times m}}$$ are $$\Lambda$$-Lipschitz continuous for every $$t \in [0,T]$$.

4. (iv)

There is $$\Gamma \ge 0$$ such that for all $$t \in [0,T]$$ and $$M \in {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n})$$, $$\big \Vert {{{\mathcal {A}}}}(t, M) \big \Vert _{\text{ op }}$$, $$\displaystyle \sup _{u\,\in \,U} \,\big \Vert {{{\mathcal {B}}}}(t, M) \, u \big \Vert \le \Gamma$$.

5. (v)

For a.e. $$t \in [0,T]$$ and all $$x \in {{\mathbb {R}}^n}$$, $$K, M \in {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n})$$ with $$x \in K \subset M$$,

Consider $$K_0 \in {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n})$$ with nonempty interior. Assume for each of the ellipsoid-valued tubes $$E_1, \ldots , E_N: [0,T] \leadsto {{\mathbb {R}}^n}$$ and $${{{\mathcal {E}}}}_\cap (t) {:}{=} \bigcap _{k\,=\,1}^N E_k(t):$$

1. (vi)

$$E_j(\cdot )$$ is Lipschitz continuous with $$K_0 \subset E_j(0)$$.

2. (vii)

For a.e. $$t \in [0,T)$$, $$\; 0 \, = \, \displaystyle \lim _{h\,\downarrow \,0} \,\, {\textstyle \frac{1}{h}} \!\cdot \! {\mathbbm {e}}\Big ( \big ( {\mathbbm {1}}+ h \; {{{\mathcal {A}}}}\big (t, {{{\mathcal {E}}}}_\cap (t) \big ) \big ) \; E_j(t) + h \, {{{\mathcal {B}}}}\big (t, {{{\mathcal {E}}}}_\cap (t) \big ) \, U, \;\; E_j(t+h)\Big ).$$

Then, the unique solution tube K :  $$[0,T] \leadsto {{\mathbb {R}}^n}$$ of the IVP

\begin{aligned} \mathring{K}(t) = {{{\mathcal {A}}}}\big ( t, \, K(t) \big ) \, x + {{{\mathcal {B}}}}\big ( t, \, K(t) \big ) \, U \quad \text{ in } [0,T], \qquad K(0) = K_0 \end{aligned}
(6)

fulfills $$K(t) \subset {{{\mathcal {E}}}}_\cap (t)$$ for every $$t \in [0,T]$$.

### 3.3 A Computational Method for an External Approximation With Ellipsoidal Values

Proposition 3.1 (2.) provides an ODE system which specifies an ellipsoid-valued tube as an external approximation of the reachable set. In more detail, it concerns the reachable set $${{{\mathcal {R}}}}_{A\,\cdot \, + B \,U}(t, K_0)$$ of a nonautonomous linear differential inclusion $$x' \in A(t) \; x + B(t) \; U$$ and the ODE system describes the evolution of the center $$x(t) \in {{\mathbb {R}}^n}$$ and the positive definite symmetric matrix $$X(t) \in {{\mathbb {R}}^{n\times n}}$$ of the time-dependent ellipsoids.

Set evolution equations are essentially based on the notion that the coefficients additionally depend on the current set: $${{{\mathcal {A}}}}= {{{\mathcal {A}}}}(t, M)$$, $${{{\mathcal {B}}}}= {{{\mathcal {B}}}}(t, M)$$ instead of $$A = A(t)$$ and $$B = B(t)$$, respectively. This motivates us to consider the following nonlinear ODE system

\begin{aligned} \ell '(t)&= - \, {{{\mathcal {A}}}}\big (t, \, {{{\mathcal {E}}}}(x(t), X(t)) \big )^{\top } \, \ell (t), \\ x'(t)&= {{{\mathcal {A}}}}\big (t, \, {{{\mathcal {E}}}}(x(t), X(t)) \big ) \, x(t) + {{{\mathcal {B}}}}\big (t, \, {{{\mathcal {E}}}}(x(t), X(t)) \big ) \, q_u, \\ X'(t)&= {{{\mathcal {A}}}}\big (t, \, {{{\mathcal {E}}}}(x(t), X(t)) \big ) \, X(t) + X(t) \, {{{\mathcal {A}}}}\big (t, \, {{{\mathcal {E}}}}(x(t), X(t)) \big )^{\top } + \pi (t) \, X(t) + \pi (t)^{-1} \, Q_{{{\mathcal {B}}}}\big (t, x(t), X(t)\big ) \end{aligned}

with the abbreviations $$Q_{{{\mathcal {B}}}}\big (t, x(t), X(t)\big ) = {{{\mathcal {B}}}}\big (t, \, {{{\mathcal {E}}}}(x(t), X(t)) \big ) \,\, Q_u \,\, {{{\mathcal {B}}}}\big (t, \, {{{\mathcal {E}}}}(x(t), X(t)) \big )\!^{\top }\in {{\mathbb {R}}^{n\times n}}$$ and $$\pi (t) = \sqrt{\; \frac{ \langle \ell (t), \;\, Q_{{{\mathcal {B}}}}(t, x(t), X(t)) \,\, \ell (t) \rangle }{ \langle \ell (t), \;\, X(t) \,\, \ell (t) \rangle } \, }$$.

Strictly speaking, Proposition 3.2 considers the intersection of finitely many ellipsoids as an external approximation of the solution value $$K(t) \subset {{\mathbb {R}}^n}$$. Assumption 3.2 (vii) indicates how to choose the coefficients appropriately, i.e., in terms of their pointwise intersection $${{{\mathcal {E}}}}_\cap (t)$$. It leads directly to ODE system (7) below which is easy to solve numerically (using standard methods for the support function of $${{{\mathcal {E}}}}_\cap (t)$$).

### Proposition 3.3

Let $${{{\mathcal {A}}}}: [0,T] \times {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n}) \longrightarrow {{\mathbb {R}}^{n\times n}}$$, $${{{\mathcal {B}}}}: [0,T] \times {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n})$$ $$\longrightarrow {{\mathbb {R}}^{n\times m}}$$ and $$U = {{{\mathcal {E}}}}(q_u, Q_u) \subset {{\mathbb {R}}^m}$$ satisfy the Assumptions 3.2 (i), (iii) and

1. (ii’)

For every $$M \in {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n})$$, $$\, {{{\mathcal {A}}}}(\cdot ,M):$$ $$[0,T] \longrightarrow {{\mathbb {R}}^{n\times n}}$$ and $${{{\mathcal {B}}}}(\cdot ,M):$$ $$[0,T] \longrightarrow {{\mathbb {R}}^{n\times m}}$$ are continuous.

2. (iv’)

There exists $$\Gamma \ge 0$$ such that for all $$t \in [0,T]$$ and $$M \in {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n})$$,

\begin{aligned}&\max \Big \{ \big \Vert {{{\mathcal {A}}}}(t, M) \big \Vert _{\text{ op }}\, , \,\, \big \Vert {{{\mathcal {A}}}}(t, M)^{\top }\big \Vert _{\text{ op }}, \,\, \\&\quad \sup _{u\,\in \,U} \,\big \Vert {{{\mathcal {B}}}}(t, M) \, u \big \Vert \, , \,\, \big \Vert {{{\mathcal {B}}}}(t, M) \big \Vert _{\text{ op }}\, , \,\, \big \Vert {{{\mathcal {B}}}}(t, M)^{\top }\big \Vert _{\text{ op }}\Big \} \le \Gamma . \end{aligned}
3. (vi’)

$$m= n$$ and $${{{\mathcal {B}}}}(t, M) \in {{\mathbb {R}}^{n\times n}}$$ is invertible, $$\big \Vert ({{{\mathcal {B}}}}(t, M)^{\top })^{-1} \big \Vert _{\text{ op }}\le \Gamma$$ for all $$t \in [0,T]$$, $$M \in {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n})$$.

For $$j = 1,\,\ldots ,N$$, let $$\ell _{0 j} \in {{\mathbb {R}}^n}{\setminus } \{ 0 \}$$, $$x_{0j} \in {{\mathbb {R}}^n}$$ and positive definite symmetric $$X_{0 j} \in {{\mathbb {R}}^{n\times n}}$$ be given such that $$\bigcap _{k=1}^N \,{{{\mathcal {E}}}}(x_{0 k},\, X_{0 k}) \subset {{\mathbb {R}}^n}$$ has nonempty interior. Consider the ODE system

\begin{aligned} \ell _j'(t)&= - \, {{{\mathcal {A}}}}\big (t, \, {{{\mathcal {E}}}}_\cap (t) \big )^{\top } \, \ell _j(t),&\ell _j(0)&= \ell _{0 j}, \\ x_j'(t)&= {{{\mathcal {A}}}}\big (t, \, {{{\mathcal {E}}}}_\cap (t) \big ) \, x_j(t) + {{{\mathcal {B}}}}\big (t, \, {{{\mathcal {E}}}}_\cap (t) \big ) \, q_u,&x_j(0)&= x_{0 j}, \\ X_j'(t)&= {{{\mathcal {A}}}}\big (t, \, {{{\mathcal {E}}}}_\cap (t) \big ) \, X_j(t) + X_j(t) \, {{{\mathcal {A}}}}\big (t, \, {{{\mathcal {E}}}}_\cap (t) \big )^{\top } + \pi _j(t) \, X_j(t) + \pi _j(t)^{-1} \, Q_{{{\mathcal {B}}}}(t),&X_j(0)&= X_{0 j} \end{aligned}
(7)

with the abbreviations

\begin{aligned} {{{\mathcal {E}}}}_\cap (t) {:}{=} \bigcap _{k\,=\,1}^N {{{\mathcal {E}}}}\big (x_k(t), \, X_k(t)\big ), \quad Q_{{{\mathcal {B}}}}(t) {:}{=} {{{\mathcal {B}}}}\big (t, \, {{{\mathcal {E}}}}_\cap (t) \big ) \, Q_u \, {{{\mathcal {B}}}}\big (t, \, {{{\mathcal {E}}}}_\cap (t) \big )^{\top }, \quad \pi _j(t) {:}{=} \sqrt{\, \frac{ \langle \ell _j(t), \; Q_{{{\mathcal {B}}}}(t) \, \ell _j(t) \rangle }{ \langle \ell _j(t), \; X_j(t) \, \ell _j(t) \rangle } \,}. \end{aligned}
(8)

Then, the following statements hold:

1. (1.)

There exist unique solutions $$\ell _j$$, $$x_j:$$ $$[0,T] \longrightarrow {{\mathbb {R}}^n}$$, $$X_j:$$ [0, T] $$\longrightarrow$$ $${{\mathbb {R}}^{n\times n}}$$ to ODE system (7) with the initial values $$\ell _{0 j}$$, $$x_{0 j} \in {{\mathbb {R}}^n}$$, $$X_{0 j} \in {{\mathbb {R}}^{n\times n}}$$, respectively $$(j=1, \,\ldots , N)$$. Moreover, for all $$t \in [0,T]:$$

• $${{{\mathcal {E}}}}_\cap (t)$$ has nonempty interior.

• $$\ell _j(t) \not = 0 \,$$ and $$X_j(t) \in {{\mathbb {R}}^{n\times n}}$$ is symmetric.

• There are $${\widetilde{c}}_j, {\widetilde{C}}_j > 0$$ (depending on $$\Gamma$$, $$Q_u$$) and $$c_j, C_j > 0$$ (depending on $$\Gamma$$, $$Q_u$$, $$X_{0 j}$$) with

2. (2.)

Consider solutions $$\ell _j(\cdot )$$, $$x_j(\cdot )$$, $$X_j(\cdot )$$ $$(j = 1, \,\ldots , N)$$ as characterized in statement (1.). Then, each tube $$E_j : [0,T] \leadsto {{\mathbb {R}}^n}$$, $$t \mapsto {{{\mathcal {E}}}}\big ( x_j(t), \, X_j(t) \big )$$ $$(j = 1, \,\ldots , N)$$ is Lipschitz and satisfies for every $$t \in [0,T)$$

\begin{aligned} 0 \, = \,\, \lim _{h\,\downarrow \,0} \,\, {\textstyle \frac{1}{h}} \cdot {\mathbbm {e}}\Big ( \big ( {\mathbbm {1}}+ h \; {{{\mathcal {A}}}}\big (t, {{{\mathcal {E}}}}_\cap (t) \big ) \big ) \; E_j(t) + h \, {{{\mathcal {B}}}}\big (t, {{{\mathcal {E}}}}_\cap (t) \big ) \, U, \;\; E_j(t+h) \Big ). \end{aligned}

In regard to external approximations for set evolutions, Proposition 3.2 has the direct consequence:

### Corollary 3.4

Let the initial set $$K_0 = {{{\mathcal {E}}}}(x_0, X_0) \subset {{\mathbb {R}}^n}$$ be given. Under the assumptions of Proposition 3.3, suppose $$\ell _j(\cdot )$$, $$x_j(\cdot )$$, $$X_j(\cdot )$$ $$(j = 1, \,\ldots , N)$$ are solutions to ODE system (7) with $$\ell _j(0) \in {{\mathbb {R}}^n}{\setminus } \{0\}$$, $$x_j(0) = x_0$$, $$X_j(0) = X_0$$. Furthermore assume condition 3.2(v).

Then, the unique solution K :  $$[0,T] \leadsto {{\mathbb {R}}^n}$$ of IVP (6) fulfills $$K(t) \subset {{{\mathcal {E}}}}_\cap (t)$$ for every t.

### 3.4 No Minimal Property of This Ellipsoidal Approximation in General

In the established context of linear differential inclusions, Proposition 3.1 (2.) provides the connection between solutions to an ODE system for $$x(\cdot )$$, $$X(\cdot )$$ and the ellipsoid-valued tube E :  $$[0,T] \leadsto {{\mathbb {R}}^n}$$, $$t \mapsto {{{\mathcal {E}}}}\big ( x(t), X(t) \big )$$ with four properties. In connection with the more general problem class of set evolution equations, however, we have not made any comment on the last two features so far, i.e.,

• 3.1 (2.) (c)    E is minimal in the class of ellipsoids w.r.t. set inclusions, i.e., for every $$t \in [0,T]$$, there does not exist any ellipsoid $${\widetilde{E}}$$ $$\subset$$ $${{\mathbb {R}}^n}$$ with $${{{\mathcal {R}}}}_{A\,\cdot \, + B \,U}(t, K_0) \subset {\widetilde{E}} \subsetneqq E(t)$$.

• 3.1 (2.) (d)    For every $$t \in [0,T]$$, $${{{\mathcal {R}}}}_{A\,\cdot \, + B \,U}(t, K_0)$$ and E(t) have a boundary point $$\xi (t)$$ in common such that $$\ell (t)$$ is normal to both sets in $$\xi (t)$$.

The following example shows that such a form of minimality does not hold under the assumptions of Proposition 3.3. In a word, the current set $$K(t) \subset {{\mathbb {R}}^n}$$ might have a significant influence on the coefficient matrices $${{{\mathcal {A}}}}\big ( t, \, K(t) \big )$$, $${{{\mathcal {B}}}}\big ( t, \, K(t) \big ) \in {{\mathbb {R}}^{n\times n}}$$ such that joint boundary points are lost instantaneously.

### Example 3.5

For $$n= 2$$, set $$A_0 {:}{=} B_0 {:}{=} {\mathbbm {1}}$$ (i.e., the unit matrix in $${\mathbb {R}}^{2 \times 2}$$), $$U {:}{=} {{{\mathcal {E}}}}(0, Q_u)$$ and $$K_0 {:}{=} {{{\mathcal {E}}}}(0, X_0) \subset {\mathbb {R}}^2$$ with $$Q_u {:}{=} \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}$$, $$X_0 {:}{=} \begin{pmatrix} 3 & -1 \\ -1 & 3 \end{pmatrix}$$. ($$(1,1)^{\top }$$ and $$(-1,1)^{\top }$$ are eigenvectors of $$X_0$$ associated with the eigenvalues 2, 4, respectively.)

The variation of constants formula provides an explicit representation of the reachable set $$R(t) \subset {\mathbb {R}}^2$$ of the autonomous linear differential inclusion $$x' \in A_0 \, x + B_0 \, U$$ (see, e.g., [36, Lemma 3.1.1])

\begin{aligned} R(t) \;= & {} \; \exp (t \, A_0) \; {{{\mathcal {E}}}}( 0, X_0) + \int _0^t \exp \big ((t-s) \, A_0 \big ) \,\cdot \,B_0 \,\, U \,\, \mathrm{d} s\\ \;= & {} \; e^t \cdot {{{\mathcal {E}}}}( 0, X_0) + (e^t -1) \cdot {{{\mathcal {E}}}}(0, Q_u) \,. \end{aligned}

In particular, $$R(t) \subset {\mathbb {R}}^2$$ is convex, compact, but not an ellipsoid for $$t > 0$$.
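This claim is easy to confirm numerically: a centered convex compact set is an ellipsoid exactly if its squared support function is a quadratic form in the direction vector. The following Python sketch fits the only possible quadratic form to $$h_{R(1)}^2$$ from three directions and exhibits a fourth direction where the ansatz fails:

```python
import numpy as np

X0 = np.array([[3.0, -1.0], [-1.0, 3.0]])
Qu = np.diag([1.0, 2.0])

def h_R(d, t=1.0):
    """Support function of R(t) = e^t E(0, X0) + (e^t - 1) E(0, Qu)."""
    return np.exp(t)*np.sqrt(d @ X0 @ d) + (np.exp(t) - 1.0)*np.sqrt(d @ Qu @ d)

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
m = (e1 + e2) / np.sqrt(2.0)
# if R(1) were an ellipsoid E(0, M), then h_R(d)^2 = <d, M d> for all unit d
M11, M22 = h_R(e1)**2, h_R(e2)**2
M12 = h_R(m)**2 - (M11 + M22) / 2.0       # forced by the diagonal direction
d = np.array([1.0, 2.0]) / np.sqrt(5.0)
quad = M11*d[0]**2 + 2.0*M12*d[0]*d[1] + M22*d[1]**2
assert abs(h_R(d)**2 - quad) > 0.05       # the quadratic-form ansatz fails
```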

In addition to $${{{\mathcal {A}}}}(t, M) {:}{=} A_0 = {\mathbbm {1}}$$, we define $${{{\mathcal {B}}}}:$$ $$[0,1] \times {{{{\mathcal {K}}}}_{\text{ co }}}({\mathbb {R}}^2) \longrightarrow {\mathbb {R}}^{2 \times 2}$$ in such a way that for all $$M \in {{{{\mathcal {K}}}}_{\text{ co }}}({\mathbb {R}}^2)$$ and $$t \in [0,1]$$,    $$R(t) \subsetneqq M \,\, \Longleftrightarrow \,\, {\overline{{\mathbb {B}}}}_1 = {{{\mathcal {B}}}}\big (t, \,R(t) \big ) \, {\overline{{\mathbb {B}}}}_1 \subsetneqq {{{\mathcal {B}}}}(t, M) \, {\overline{{\mathbb {B}}}}_1,$$

e.g., $${{{\mathcal {B}}}}(t, M) {:}{=} \big ( 2 - e^{- {\mathbbm {e}}(M, \, R(t))} \big ) \cdot {\mathbbm {1}} \in {\mathbb {R}}^{2 \times 2}$$. Clearly, $$R(\cdot )$$ is the solution to the IVP (6).

Fixing a unit vector $$\ell _0 \in {\mathbb {R}}^2$$ arbitrarily, Proposition 3.3 and Corollary 3.4 provide an ellipsoid-valued tube E :  $$[0,1] \leadsto {\mathbb {R}}^2$$, $$t \mapsto {{{\mathcal {E}}}}\big (0, \, X(t) \big )$$ with the following properties:

• X :  $$[0,1] \longrightarrow {\mathbb {R}}^{2 \times 2}$$ and $$\ell :$$ $$[0,1] \longrightarrow {\mathbb {R}}^2$$ solve the ODE system

\begin{aligned} \ell '(t) = - \, \ell (t), \qquad X'(t) = 2 \, X(t) + \big ( 2 - e^{- {\mathbbm {e}}({{{\mathcal {E}}}}(0, X(t)), \, R(t))} \big ) \left( \sqrt{\tfrac{\langle \ell (t), \; Q_u \, \ell (t) \rangle }{\langle \ell (t), \; X(t) \, \ell (t) \rangle }} \; X(t) + \sqrt{\tfrac{\langle \ell (t), \; X(t) \, \ell (t) \rangle }{\langle \ell (t), \; Q_u \, \ell (t) \rangle }} \; Q_u \right) \end{aligned}
(9)

with the initial values $$\ell (0) = \ell _0$$ and $$X(0) = X_0 {\mathop {=}\limits ^{\mathrm{\tiny Def.}}} \begin{pmatrix} 3 & -1 \\ -1 & 3 \end{pmatrix}$$. (The equation of the center $$x(\cdot )$$ in ODE system (7) has the unique solution $$x(\cdot ) = 0$$ in this example and so, we do not mention it explicitly any longer.)

• $$R(t) \subset E(t)$$ holds for every $$t \in [0,1]$$.

Proposition 3.1 (2.), however, specifies an ellipsoid-valued tube $${\widetilde{E}}:$$ $$[0,1] \leadsto {{\mathbb {R}}^n}$$ with $$R(t) \subset {\widetilde{E}}(t) \subset E(t)^\circ$$ for all $$t \in (0,1]$$.

Indeed, consider the solution $${\widetilde{X}}:$$ $$[0,1] \longrightarrow {\mathbb {R}}^{2 \times 2}$$ of

\begin{aligned} {\widetilde{X}}'(t) = 2\; {\widetilde{X}}(t) + \textstyle \sqrt{\,\frac{\langle \ell (t), \; Q_u \,\, \ell (t) \rangle }{\langle \ell (t), \; {\widetilde{X}}(t) \,\, \ell (t) \rangle } \,} \; {\widetilde{X}}(t) \, + \, \sqrt{\,\frac{\langle \ell (t), \; {\widetilde{X}}(t) \,\, \ell (t) \rangle }{\langle \ell (t), \; Q_u \,\, \ell (t) \rangle }\,}\,\, Q_u, \qquad {\widetilde{X}}(0) = X_0 . \end{aligned}

(The adjoint equation for $$\ell$$ is the same as before: $$\ell '(t) = - A_0^{\top }\; \ell (t)= -\ell (t)$$.)

Due to Proposition 3.1 (2.) (b), (c), this $${\widetilde{E}}(\cdot )$$ is an external approximation of $$R(\cdot )$$ that is minimal w.r.t. set inclusions at every time; in particular, $$R(t) \subset {\widetilde{E}}(t)$$ for all $$t \in (0,1]$$.

Hence, it remains to verify that $${\widetilde{E}}(t)$$ is contained in the interior $$E(t)^\circ$$ for every $$t \in (0,1]$$. As the reachable set R(t) is not an ellipsoid, we have $$R(t) \subsetneqq {\widetilde{E}}(t)$$ and so $${\mathbbm {e}}\big ( E(t), \, R(t) \big ), {\mathbbm {e}}\big ( {\widetilde{E}}(t), \, R(t) \big ) > 0.$$

First, $$\psi {:}{=} \langle \ell (\cdot ), \; X(\cdot ) \; \ell (\cdot ) \rangle :$$ $$[0,1] \longrightarrow {\mathbb {R}}$$ is Lipschitz continuous with $$\psi '(s) = 2 \, \big ( 2 - e^{- {\mathbbm {e}}({{{\mathcal {E}}}}(0, X(s)), \, R(s))} \big ) \, \sqrt{\psi (s) \; \langle \ell (s), \; Q_u \, \ell (s) \rangle }$$ and thus, $$\textstyle \frac{\hbox {d}}{\hbox {d}s} \sqrt{\psi (s)} \, = \, \frac{1}{2 \; \sqrt{\psi (s)}} \cdot \psi '(s) \, = \, \big ( 2 - e^{- {\mathbbm {e}}({{{\mathcal {E}}}}(0, X(s)), \, R(s))} \big ) \; \sqrt{\langle \ell (s), \; Q_u \, \ell (s) \rangle }$$ for a.e. $$s \in [0,1]$$. Similarly, $$\textstyle \frac{\hbox {d}}{\hbox {d} s} \, \sqrt{\langle \ell (s), \, {\widetilde{X}}(s) \, \ell (s) \rangle } \, = \, \sqrt{\langle \ell (s), \; Q_u \, \ell (s) \rangle }$$ for a.e. $$s \in (0,1]$$ and we conclude $$\frac{\hbox {d}}{\hbox {d} s} \, \sqrt{\langle \ell (s), \, {\widetilde{X}}(s) \, \ell (s) \rangle }\, = \, \eta (s)\cdot \frac{\hbox {d}}{\hbox {d} s} \, \sqrt{\langle \ell (s), \, X(s) \, \ell (s) \rangle }$$ with some $$\eta \in C^0([0,1], (0,1])$$ satisfying $$\eta (s) < 1$$ whenever $${{{\mathcal {E}}}}(0, X(s))$$ $$\not =$$ R(s). Due to $${\widetilde{X}}(0) = X_0 = X(0)$$, it implies $$\langle \ell (t), \, {\widetilde{X}}(t) \, \ell (t) \rangle < \langle \ell (t), \, X(t) \, \ell (t) \rangle$$ for every $$t \in (0, 1]$$.

Second, we consider $$\ell ^\perp (t) {:}{=} O_{\frac{\pi }{2}} \, \ell (t) \in {\mathbb {R}}^2$$ with the rotation matrix $$O_{\frac{\pi }{2}} {:}{=} \left( \genfrac{}{}{0.0pt}{}{0}{1} \genfrac{}{}{0.0pt}{}{-1}{0} \right)$$ in the similar way: ODE system (9) implies for $$X^\perp (t) {:}{=} O_{\frac{\pi }{2}}^{\top }\, X(t) \, O_{\frac{\pi }{2}}$$ at a.e. time instant $$s \in [0,1]$$

\begin{aligned} \textstyle \frac{\hbox {d}}{\hbox {d} s} \,X^\perp (s) \,= & {} \, 2\,X^\perp (s) + \big ( 2 - e^{- {\mathbbm {e}}({{{\mathcal {E}}}}(0, X(s)), \, R(s))} \big ) \; \left( \sqrt{\frac{\langle \ell (s), \; Q_u \,\, \ell (s) \rangle }{\langle \ell (s), \; X(s) \,\, \ell (s) \rangle } } \; X^\perp (s)\right. \nonumber \\&\left. \quad + \sqrt{\frac{\langle \ell (s), \; X(s) \,\, \ell (s) \rangle }{\langle \ell (s), \; Q_u \,\, \ell (s) \rangle }} \, O_{\frac{\pi }{2}}^{\top }\,Q_u O_{\frac{\pi }{2}} \right) \end{aligned}

Hence, the same steps as before lead to the analogous comparison of derivatives and thus, $$\langle \ell ^\perp (t), \, {\widetilde{X}}(t) \, \ell ^\perp (t) \rangle < \langle \ell ^\perp (t), \, X(t) \, \ell ^\perp (t) \rangle$$ for every $$t \in (0, 1]$$.

As a consequence, $${\widetilde{E}}(t) {\mathop {=}\limits ^{\mathrm{\tiny Def.}}} {{{\mathcal {E}}}}\big (0, {\widetilde{X}}(t) \big )$$ is always contained in the interior of $$E(t) {\mathop {=}\limits ^{\mathrm{\tiny Def.}}} {{{\mathcal {E}}}}\big (0, X(t) \big )$$ $$\subset$$ $${\mathbb {R}}^2$$. In particular, the reachable tube $$R(\cdot )$$ is the solution of the underlying IVP (6), but R(t) cannot have a joint boundary point with E(t) for any $$t \in (0,1]$$.
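The strict inclusion $${\widetilde{E}}(t) \subset E(t)^\circ$$ can also be observed numerically. The following Python sketch (explicit Euler, with the excess $${\mathbbm {e}}\big ({{{\mathcal {E}}}}(0, X(s)), R(s)\big )$$ approximated by sampling support directions) integrates the equations for X and $${\widetilde{X}}$$ side by side and checks both quadratic-form inequalities at $$t = 1$$:

```python
import numpy as np

X0 = np.array([[3.0, -1.0], [-1.0, 3.0]])
Qu = np.diag([1.0, 2.0])
ang = np.linspace(0.0, 2.0*np.pi, 360, endpoint=False)
D = np.stack([np.cos(ang), np.sin(ang)], axis=1)     # sampled unit directions
qX0 = np.einsum('ij,jk,ik->i', D, X0, D)             # <d, X0 d> per direction
qQu = np.einsum('ij,jk,ik->i', D, Qu, D)

h, steps = 1e-3, 1000
l = np.array([1.0, 0.0])
X, Xt = X0.copy(), X0.copy()
s = 0.0
for _ in range(steps):
    hR = np.exp(s)*np.sqrt(qX0) + (np.exp(s) - 1.0)*np.sqrt(qQu)   # support of R(s)
    hE = np.sqrt(np.einsum('ij,jk,ik->i', D, X, D))                # support of E(0, X(s))
    b = 2.0 - np.exp(-max(0.0, np.max(hE - hR)))                   # 2 - e^{-excess}
    ql, xl, xtl = l @ Qu @ l, l @ X @ l, l @ Xt @ l
    X  = X  + h*(2.0*X  + b*(np.sqrt(ql/xl)*X + np.sqrt(xl/ql)*Qu))
    Xt = Xt + h*(2.0*Xt + np.sqrt(ql/xtl)*Xt + np.sqrt(xtl/ql)*Qu)
    l = l - h*l                                     # adjoint equation l' = -l
    s += h

lp = np.array([-l[1], l[0]])                        # rotated direction l^perp
assert l @ Xt @ l < l @ X @ l                       # strict gap along l(1)
assert lp @ Xt @ lp < lp @ X @ lp                   # strict gap along l(1)^perp
```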

### 3.5 Approximating the Solution of a Set Evolution Equation with Arbitrary Precision

Example 3.5 shows that the value K(t) of the solution tube might be contained in the interior of any approximating ellipsoid $${{{\mathcal {E}}}}\big ( x(t), \, X(t) \big )$$ based on the ODE system (7), (8). Whenever a joint boundary point (as mentioned in Proposition 3.1 (2.) (d)) does not exist, it is not so obvious how to estimate the gap of the approximation. The following result states that the exact solution can be approximated with arbitrary precision—by choosing the number of ellipsoids sufficiently large.

### Proposition 3.6

Suppose the assumptions of Corollary  3.4 for $${{{\mathcal {A}}}}, {{{\mathcal {B}}}}: [0,T] \times {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n}) \longrightarrow {{\mathbb {R}}^{n\times n}}$$ and $$U = {{{\mathcal {E}}}}(q_u, Q_u) \subset {{\mathbb {R}}^n}$$. Choose any initial set $$K_0 = {{{\mathcal {E}}}}(x_0, X_0) \subset {{\mathbb {R}}^n}$$.

For every $$\varepsilon > 0$$, there exist $$N \in {\mathbb {N}}$$ and unit vectors $$\ell _{0 j} \in {{\mathbb {R}}^n}$$ $$(j = 1,\,\ldots , N)$$ such that the following statement holds:    Let $$\ell _j(\cdot )$$, $$x_j(\cdot )$$, $$X_j(\cdot )$$ $$(j = 1,\,\ldots , N)$$ be the unique solutions to ODE system (7) with the initial values $$\ell _j(0) = \ell _{0 j}$$, $$x_j(0) = x_0$$, $$X_j(0) = X_0$$. Then, the solution tube K :  $$[0,T] \leadsto {{\mathbb {R}}^n}$$ to the IVP (6) satisfies $${\mathbbm {d}}\big ( K(t), \, {{{\mathcal {E}}}}_\cap (t) \big ) < \varepsilon$$ for all $$t \in [0,T]$$.

## 4 A Numerical Example

This example is deliberately short and in two dimensions so that numerical results can be shown in figures. Motivated by challenges of collision avoidance, we consider a simple cart under the influence of bounded “unknown noise” and aim at a guaranteed state estimation of its position and velocity. In addition, security reasons require a safety zone which grows with the “uncertainty” of the state estimation.

Now we suggest a simple model for this situation. Initially (i.e., without any noise or safety zone), the cart moves according to a linear control system for the scalar position $$x_1$$ and the velocity $$x_2 = x_1'$$ in which $$u_2$$ describes its acceleration. Three aspects are implemented additionally: First, the initial position and velocity are imprecise and the error in position grows with the velocity. Here, we choose the initial set $${{{\mathcal {E}}}}(x_0, X_0) \subset {\mathbb {R}}^2$$ with $$x_0 {:}{=} (0, 1)^{\top }$$ and $$X_0$$ having the eigenvector $$(1,1)^{\top }$$. Second, there is a scalar “noise” $$u_1$$ whose order of magnitude is small in comparison with the acceleration $$u_2$$. It affects both the position $$x_1$$ and the velocity $$x_2$$. We choose $$U {:}{=} {{{\mathcal {E}}}}(0, Q_u) \subset {\mathbb {R}}^2$$. The reachable set of $$x' \in A \,x + B_0 \, U$$ consists of all states which the cart can attain—no matter how the acceleration $$u_2$$ and the “noise” $$u_1$$ have evolved. Its set diameter exemplifies a real quantity describing the uncertainty/imprecision of the system—whenever we do not have any influence on $$u_1$$, $$u_2$$.

The third aspect is due to the safety zone. It is induced by an additional noise term representing an ellipsoidal neighborhood of 0 which depends on the set diameter. We define this term by means of the smooth cut-off function $$\varphi (s) {:}{=} \frac{s}{1 \,+\, s}$$ $$(s \ge 0)$$ and consider $$\mathring{K}(t) = A \,x + {{{\mathcal {B}}}}\big ( t, \,K(t) \big ) \, U$$.

On the basis of Proposition 3.6, the solution values K(1), $$K(10) \subset {\mathbb {R}}^2$$ are approximated by the intersections of 400 ellipses with the joint initial set $${{{\mathcal {E}}}}(x_0, X_0) \subset {\mathbb {R}}^2$$ and the respective direction vectors $$\ell _j(0) \in {\mathbb {R}}^2$$ distributed equidistantly on the unit circle.
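A sketch (Python, with two sample ellipses that are assumptions for illustration) of how such an intersection of ellipses is evaluated pointwise: $$p \in \bigcap _j {{{\mathcal {E}}}}(x_j, X_j)$$ holds iff $$\langle p - x_j, \, X_j^{-1} \, (p - x_j) \rangle \le 1$$ for every j, and sampling such membership tests yields distance estimates between the intersection and the solution value.

```python
import numpy as np

def in_intersection(p, ellipsoids, tol=0.0):
    """p lies in the intersection of E(x_j, X_j) iff every gauge value is <= 1."""
    return all((p - x) @ np.linalg.inv(X) @ (p - x) <= 1.0 + tol
               for x, X in ellipsoids)

# two elongated ellipses whose intersection is close to the unit disc
ells = [(np.zeros(2), np.diag([1.0, 9.0])),
        (np.zeros(2), np.diag([9.0, 1.0]))]
assert in_intersection(np.array([0.9, 0.0]), ells)
assert not in_intersection(np.array([2.0, 2.0]), ells)
```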

The table shows the numerical values of the Pompeiu-Hausdorff distance $${\mathbbm {d}}$$ between K(10) and the intersection $${{{\mathcal {E}}}}_{\cap ,N}(10) {\mathop {=}\limits ^{\mathrm{\tiny Def.}}}$$ $$\bigcap _{k=1}^N \, {{{\mathcal {E}}}}\big (x_k(10), \, X_k(10)\big )$$ and the gap between $$\partial {{{\mathcal {E}}}}_{\cap ,N}(10)$$ and K(10) for various N. It is worth mentioning that in this (very simple) example, all these gaps are positive, i.e., K(10) does not have any joint boundary point with $${{{\mathcal {E}}}}_\cap (10)$$.

## 5 Proofs

### 5.1 Tools about Reachable Sets of Differential Inclusions

Proposition 2.1 specifies sufficient conditions on $$U \subset {{\mathbb {R}}^m}$$ and f :  $$[0,T] \times {{\mathbb {R}}^n}\times {{{\mathcal {K}}}}({{\mathbb {R}}^n}) \times U$$ $$\longrightarrow$$ $${{\mathbb {R}}^n}$$ such that the three types of set evolution equations are equivalent to each other. Here, we formulate just the key properties of reachable sets relevant for proving the statements in Sects. 2.2 and 2.3. Essentially the same arguments as in the proof of [41, Lemma 1.58] lead to the following statement underlying the equivalence 2.1 “(2.) $$\Longleftrightarrow$$ (3.)”, for example.

### Lemma 5.1

Let $$T > 0$$, $$U \subset {{\mathbb {R}}^m}$$ be nonempty compact and $$g: [0,T] \times {{\mathbb {R}}^n}\times U \longrightarrow {{\mathbb {R}}^n}$$ satisfy these conditions:

1. (i)

For all $$t \in [0,T]$$ and $$x \in {{\mathbb {R}}^n}$$, the set g(txU) $${\mathop {=}\limits ^{\mathrm{\tiny Def.}}}$$ $$\big \{ g(t,x,u)$$ $$\, \big | \,$$ $$u \in U \big \}$$ $$\subset$$ $${{\mathbb {R}}^n}$$ is closed and convex.

2. (ii)

For all $$x \in {{\mathbb {R}}^n}$$ and $$u \in U$$, $$g(\,\cdot \,, x, u):$$ $$[0,T] \longrightarrow {{\mathbb {R}}^n}$$ is Lebesgue measurable.

3. (iii)

For all $$x \in {{\mathbb {R}}^n}$$ and a.e. $$t \in [0,T]$$, $$g(t, x, \,\cdot \,):$$ $$U \longrightarrow {{\mathbb {R}}^n}$$ is continuous.

4. (iv)

There exists $$\lambda \in L^1([0,T])$$ such that for all $$u \in U$$ and a.e. $$t \in [0,T]$$, the function $$g(t, \,\cdot \,, u):$$ $${{\mathbb {R}}^n}\longrightarrow {{\mathbb {R}}^n}$$ is $$\lambda (t)$$-Lipschitz continuous.

5. (v)

There is $$\Gamma \ge 0$$ with $$\big \Vert g(t,x,u) \big \Vert \le \Gamma \; \big ( 1 + \Vert x \Vert \big )$$ for all $$t \in [0,T]$$, $$x \in {{\mathbb {R}}^n}$$ and $$u \in U$$.

Then, there exists a measurable set $${\widetilde{J}} \subset [0,T]$$ of full measure, i.e., $${{{\mathcal {L}}}}^1 \big ( [0,T] {\setminus } {\widetilde{J}} \big )$$ $$=$$ 0, such that the following statements hold for every $$t \in {\widetilde{J}}:$$

1. (1.)

$$\displaystyle \lim _{h\,\downarrow \,0} \;\; {\textstyle \frac{1}{h}} \cdot {\mathbbm {d}}\Big ( {{{\mathcal {R}}}}_{g(t + \,\cdot \,,\,\cdot \,,U)}(h, \, M_0), \, \bigcup _{x \,\in \, M_0} \big (x + h \cdot g(t, x, U) \big ) \Big ) \, = \, 0 \;\;$$ for every initial set $$M_0 \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$.

2. (2.)

For every initial set $$M_0 \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, the Pompeiu-Hausdorff distance between the reachable sets of the autonomous inclusion $$y' \in g(t,y,U)$$ and the nonautonomous inclusion $$y' \in g(t + \cdot , y, U)$$ satisfies     $$\displaystyle \lim _{h\,\downarrow \,0} \;\; \textstyle \frac{1}{h} \cdot {\mathbbm {d}}\big ( {{{\mathcal {R}}}}_{g(t,\,\cdot \,,U)}(h, \, M_0), \,\, {{{\mathcal {R}}}}_{g(t + \,\cdot \,,\,\cdot \,,U)}(h, \, M_0) \big ) \,\; = \,\, 0.$$

### Lemma 5.2

Let $$U \subset {{\mathbb {R}}^m}$$ and g :  $$[0,T] \times {{\mathbb {R}}^n}\times U$$ $$\longrightarrow$$ $${{\mathbb {R}}^n}$$ satisfy the assumptions of Lemma 5.1. Consider $$K_0 \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$ with $$K_0 \subset {\overline{{\mathbb {B}}}}_r$$. Then,

1. (1.)

For every $$t \in [0,T]$$, $${{{\mathcal {R}}}}_{g(\cdot , \cdot , U)}(t, K_0)$$ is contained in $${\overline{{\mathbb {B}}}}_{R(t)} \subset {{\mathbb {R}}^n}$$ with $$R(t) {:}{=} \big ( r + \Gamma \, t \big ) \cdot e^{\Gamma \, t}$$.

2. (2.)

$${{{\mathcal {R}}}}_{g(\cdot , \cdot , U)}(\cdot , K_0):$$ $$[0,T] \leadsto {{\mathbb {R}}^n}$$ is Lipschitz continuous w.r.t. $${\mathbbm {d}}$$ and, its Lipschitz constant is $$\le \, \Gamma \, \big ( 1 + r + \Gamma \, T \big ) \cdot e^{\Gamma \, T}$$.
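Statement (1.) is the Gronwall-type bound for $$r(t) {:}{=} \Vert x(t) \Vert$$ with $$r' \le \Gamma \, (1 + r)$$: indeed, $$(r + 1) \, e^{\Gamma t} - 1 \le (r + \Gamma \, t) \, e^{\Gamma t}$$ because $$1 - e^{-\Gamma t} \le \Gamma \, t$$. A quick numerical sanity check in Python (the values of $$\Gamma$$, r, T are assumptions for illustration):

```python
import numpy as np

Gamma, r0, T, h = 1.5, 2.0, 1.0, 1e-4
r, t = r0, 0.0
for _ in range(int(T / h)):
    r += h * Gamma * (1.0 + r)                 # worst case: x' points radially outward
    t += h
exact = (r0 + 1.0) * np.exp(Gamma * t) - 1.0   # solution of r' = Gamma (1 + r)
bound = (r0 + Gamma * t) * np.exp(Gamma * t)   # radius R(t) from Lemma 5.2 (1.)
assert abs(r - exact) < 1e-2                   # Euler approximates the exact solution
assert r <= bound and exact <= bound           # both stay below the stated radius
```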

As a consequence of Filippov's well-known theorem about solutions to differential inclusions, the following bound holds for the Pompeiu-Hausdorff distance between reachable sets (see, e.g., the proofs of [4, Proposition 3.7.3], [17, Lemma 5.1] or [41, Propositions 1.50, 2.79]).

### Lemma 5.3

(Reachable sets: Continuous dependence on data) Suppose the assumptions of Lemma 5.1 for $$U \subset {{\mathbb {R}}^m}$$ and $$g_1, g_2:$$ $$[0,T] \times {{\mathbb {R}}^n}\times U$$ $$\longrightarrow$$ $${{\mathbb {R}}^n}$$ (with the same $$\lambda \in L^1([0,T])$$ and $$\Gamma \ge 0$$).

For all initial sets $$K_1$$, $$K_2 \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$ with $$K_1 \cup K_2 \subset {\overline{{\mathbb {B}}}}_r$$, the following estimate with $$R {:}{=} \big ( r + \Gamma \, T \big ) \cdot e^{\Gamma \, T}$$ holds at each time $$t \in [0,T]$$:

\begin{aligned} {\mathbbm {d}}\Big ( {{{\mathcal {R}}}}_{g_1(\cdot ,\, \cdot ,\, U)}(t, K_1), \;\, {{{\mathcal {R}}}}_{g_2(\cdot ,\, \cdot ,\, U)}(t, K_2) \Big ) \; \le \; e^{\int _0^t \lambda (s) \, \mathrm{d} s} \cdot \Big ( {\mathbbm {d}}(K_1, K_2) \, + \int _0^t \; \sup _{\Vert x \Vert \,\le \,R, \; u \,\in \,U} \big \Vert g_1(s,x,u) - g_2(s,x,u) \big \Vert \; \mathrm{d} s \Big ). \end{aligned}

### 5.2 Proof of Proposition 2.5

Consider $$\delta {:}{=} {\mathbbm {d}}\big ( K(\cdot ), \,M(\cdot ) \big ):$$ $$[0,T] \longrightarrow [0,\infty )$$.

Due to Lemma 5.2 (2.), $$\delta (\cdot )$$ is Lipschitz continuous with $$\delta (0) = {\mathbbm {d}}\big ( K(0), \, M(0) \big )$$. We conclude from the criterion 2.1 (3.) and Lemma 5.3 an estimate of the difference quotients of $$\delta$$ for a.e. $$t \in [0,T)$$. By Assumption 2.2 (v’), $$f_1(t, x, \,\cdot \,,u):$$ $$\big ( {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, $${\mathbbm {d}}\big )$$ $$\longrightarrow$$ $${{\mathbb {R}}^n}$$ is $$\Lambda (t)$$-Lipschitz for all x, u and a.e. t. Hence, $$\delta$$ satisfies a linear differential inequality for a.e. $$t \in [0,T]$$ and Gronwall’s inequality leads to the claimed estimate. $$\square$$

### 5.3 Inclusion Principle of Solution Tubes (Proposition 2.7)

The gist of the proof is to reformulate the condition $$K \subset M_1 \cap M_2$$ as a constraint on tuples $$(K, M_1, M_2) \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})^3$$. $${{{\mathcal {C}}}}\, {:}{=} \, \big \{ (K, M_1, M_2) \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})^3 \; \big | \; K \subset M_1 \cap M_2 \big \}$$ proves to be invariant w.r.t. the system

\begin{aligned} \mathring{K}(t) = f \big ( t, \cdot \, , \, K(t), \, U \big ), \quad \mathring{M}(t) = g \big ( t, \, \cdot \, , \, M(t), \, U \big ), \quad \mathring{{\widetilde{M}}}(t) = {\widetilde{g}} \big ( t, \, \cdot , \, {\widetilde{M}}(t), \, U \big ) . \end{aligned}

Weak invariance (a.k.a. viability) has already been investigated by Aubin and Gorre (e.g., [4, § 4.3.3], [26, 27]). Now we use some of their technical results for verifying the (strong) invariance of $${{{\mathcal {C}}}}$$.

### Lemma 5.4

[4, Lemma 4.2.7] Supply the product $${{{\mathcal {K}}}}({{\mathbb {R}}^n})^3$$ with the metric

\begin{aligned}&{\mathbbm {d}}_3: \; {{{\mathcal {K}}}}({{\mathbb {R}}^n})^3 \times {{{\mathcal {K}}}}({{\mathbb {R}}^n})^3 \quad \longrightarrow {\mathbb {R}}, \;\; \big ( (K_1, K_2, K_3), \, (M_1, M_2, M_3) \big )\\&\quad \longmapsto {\mathbbm {d}}(K_1, M_1) + {\mathbbm {d}}(K_2, M_2) + {\mathbbm {d}}(K_3, M_3). \end{aligned}

Then, $${{{\mathcal {C}}}}{\mathop {=}\limits ^{\mathrm{\tiny Def.}}} \big \{ (K, M_1, M_2) \in {{{\mathcal {K}}}}({{\mathbb {R}}^n})^3 \; \big | \; K \subset M_1 \cap M_2 \big \}$$ is closed in $$\big ( {{{\mathcal {K}}}}({{\mathbb {R}}^n})^3, {\mathbbm {d}}_3 \big )$$.

### Lemma 5.5

(Gorre [4, Theorem 4.2.8] ) Let $$U \subset {{\mathbb {R}}^m}$$ be nonempty compact and $${\widetilde{g}}_1$$, $${\widetilde{g}}_2$$, $${\widetilde{g}}_3:$$ $${{\mathbb {R}}^n}\times U \longrightarrow {{\mathbb {R}}^n}$$ satisfy the following conditions:

1. (i)

For all $$x \in {{\mathbb {R}}^n}$$, the set $${\widetilde{g}}_j(x, U)$$ $$\subset$$ $${{\mathbb {R}}^n}$$ is compact and convex.

2. (ii)

For every $$x \in {{\mathbb {R}}^n}$$, $${\widetilde{g}}_j(x, \,\cdot \,):$$ $$U \longrightarrow {{\mathbb {R}}^n}$$ is continuous.

3. (iii)

There exists $$\lambda > 0$$ such that for each $$u \in U$$, $${\widetilde{g}}_j( \,\cdot \,, u):$$ $${{\mathbb {R}}^n}\longrightarrow {{\mathbb {R}}^n}$$ is $$\lambda$$-Lipschitz continuous.

Suppose that $$(K, M_1, M_2) \in {{{\mathcal {C}}}}$$ fulfills $${\widetilde{g}}_1(x, U) \subset \big ( {\widetilde{g}}_2(x, U) + T^\flat _{M_1}(x) \big ) \cap \big ( {\widetilde{g}}_3(x, U) + T^\flat _{M_2}(x) \big ) \,$$ for all $$x \in K$$.

Then, the tuple $$\big ( {\widetilde{g}}_1(\,\cdot \,,U)$$, $${\widetilde{g}}_2(\,\cdot \,,U)$$, $${\widetilde{g}}_3(\,\cdot \,,U)\big )$$ of Lipschitz maps $${{\mathbb {R}}^n}\leadsto {{\mathbb {R}}^n}$$ is contingent to $${{{\mathcal {C}}}}$$ $$\subset$$ $${{{\mathcal {K}}}}({{\mathbb {R}}^n})^3$$ at $$(K, M_1, M_2)$$ in the sense of [4, Definition 1.5.2], i.e., the following equivalent conditions hold:

1. (1.)

$$\liminf _{h\,\downarrow \,0} \,\; \frac{1}{h} \cdot \text{ dist}_{{\mathbbm {d}}_3} \Big ( \big ( {{{\mathcal {R}}}}_{{\widetilde{g}}_1(\,\cdot \,,U)}(h, K), \; {{{\mathcal {R}}}}_{{\widetilde{g}}_2(\,\cdot \,,U)}(h, M_1), \; {{{\mathcal {R}}}}_{{\widetilde{g}}_3(\,\cdot \,,U)}(h, M_2) \big ), \; {{{\mathcal {C}}}}\Big ) = 0$$

2. (2.)

There are sequences $$(h_\ell )_{\ell \,\in \,{\mathbb {N}}}$$ and $$\big ( (K_\ell , M_{1,\ell }, M_{2,\ell }) \big )_{\ell \,\in \,{\mathbb {N}}}$$ in $${\mathbb {R}}$$ and $${{{\mathcal {K}}}}({{\mathbb {R}}^n})^3$$, respectively, such that for every $$\ell \in {\mathbb {N}}$$, the tuple $$(K_\ell , M_{1,\ell }, M_{2,\ell }) \in {{{\mathcal {C}}}}$$ approximates the reachable-set tuple at time $$h_\ell$$ up to order $$o(h_\ell )$$ w.r.t. $${\mathbbm {d}}_3$$.

The next lemma extends (forward) Lebesgue points to measurable functions with values in a metric space Y.

### Lemma 5.6

Let Y be a metric space. Suppose for $$\psi : [0,T] \longrightarrow Y$$ and $$\Delta : Y \times Y \longrightarrow [0,\infty ):$$

1. (i)

$$\psi (\cdot )$$ is Lebesgue measurable, $$\Delta (\cdot )$$ is continuous and satisfies the triangle inequality.

2. (ii)

For some $$y_0 \in Y$$, $$M {:}{=} \max \big \{ \Delta \big (y_0, \, \psi (\cdot )\big ), \,$$ $$\Delta \big (\psi (\cdot ), \, y_0\big ) \big \}: [0,T] \longrightarrow {\mathbb {R}}$$ is integrable.

Then, $$\, \displaystyle \lim _{h\,\downarrow \,0} \,\,{\textstyle \frac{1}{h}} \; \int _t^{t+h} \Delta \big ( \psi (t), \, \psi (s) \big ) \,\, \mathrm{d} s \; = \; 0 \;$$ holds for a.e. $$t \in [0,T)$$.

### Proof

Choose any sequence $$(\varepsilon _\ell )_{\ell \in {\mathbb {N}}}$$ in (0, 1) with $$\sum _{\ell = 1}^\infty \varepsilon _\ell < \infty$$. For each $$\ell \in {\mathbb {N}}$$, Lusin’s Theorem for metric-valued functions provides a compact subset $$I_\ell \subset [0,T]$$ with $${\mathcal L}^1 ( [0,T] {\setminus } I_\ell ) < \varepsilon _\ell$$ such that $$\psi |_{I_\ell }:$$ $$I_\ell \longrightarrow Y$$ is continuous (e.g., [11, Theorem 7.14.25] citing [24, 32]). Set $${\widetilde{J}}_k {:}{=} \bigcap _{\ell \,\ge \, k} \, I_\ell \subset {\mathbb {R}}$$ for $$k \in {\mathbb {N}}$$. Let $$\chi _{[0,T] {\setminus } {\widetilde{J}}_k}: {\mathbb {R}}\longrightarrow \{0,1\}$$ denote the characteristic function of $$[0,T] {\setminus } {\widetilde{J}}_k \subset {\mathbb {R}}$$ $$(k \in {\mathbb {N}})$$. $$\chi _{[0,T] {\setminus } {\widetilde{J}}_k} \cdot M$$ is also integrable. Hence, the set $$J_k$$ of all $$t \in [0,T)$$ with

\begin{aligned}&\lim _{h\,\downarrow \,0} \, {\textstyle \frac{1}{h}} \int _t^{t+h} \! \big | \chi _{[0,T] {\setminus } {\widetilde{J}}_k}(t) - \chi _{[0,T] {\setminus } {\widetilde{J}}_k}(s) \big | \,\, \mathrm{d} s\, = \, 0 \, \\&\quad = \, \lim _{h\,\downarrow \,0} \, {\textstyle \frac{1}{h}} \int _t^{t+h} \! \big | (\chi _{[0,T] {\setminus } {\widetilde{J}}_k} \, M)(t) - (\chi _{[0,T] {\setminus } {\widetilde{J}}_k} \, M)(s) \big | \, \mathrm{d} s \end{aligned}

is of full measure (see, e.g., [60, Ch. 3, Corollary 1.6]). For all $$k \in {\mathbb {N}}$$ and $$t \in {\widetilde{J}}_k \cap J_k$$, the composition $$[0,T] \cap {\widetilde{J}}_k \ni$$ $$s \longmapsto \Delta \big ( \psi (t), \, \psi (s) \big )$$ is continuous, and this implies $$\displaystyle \lim _{h\,\downarrow \,0} \frac{1}{h} \displaystyle \int _t^{t+h} \! \Delta \big ( \psi (t), \, \psi (s) \big ) \,\, \mathrm{d} s = 0$$. Finally, this limit holds for a.e. $$t \in [0,T]$$ since $${\widetilde{J}}_k \subset {\widetilde{J}}_{k+1}$$ for all $$k \in {\mathbb {N}}$$ and $${{{\mathcal {L}}}}^1 \big ( [0,T] {\setminus } {\widetilde{J}}_k \big ) \le \displaystyle \sum \nolimits _{\ell \,=\,k}^\infty \, {{{\mathcal {L}}}}^1 \big ( [0,T] {\setminus } I_\ell \big ) \le \displaystyle \sum \nolimits _{\ell \,=\,k}^\infty \,\varepsilon _\ell \longrightarrow 0$$ $$( k \rightarrow \infty )$$. $$\square$$
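As a quick plausibility check of Lemma 5.6, the following sketch takes $$Y = {\mathbb {R}}$$ with $$\Delta (a,b) = |a-b|$$ and a hypothetical bounded step function $$\psi$$ (so both hypotheses (i), (ii) hold) and approximates the forward averages $$\frac{1}{h} \int_t^{t+h} \Delta \big ( \psi (t), \psi (s) \big ) \, \mathrm{d} s$$ numerically; all names here are illustrative, not from the paper.

```python
import numpy as np

# Y = R, Delta(a, b) = |a - b|, psi a measurable bounded step function.
def psi(s):
    return np.where(s < 0.5, 1.0, 3.0)

# t = 0.49 lies in a constancy interval of psi, so it is a forward
# Lebesgue point: the averages below shrink to 0 as h decreases.
t = 0.49
averages = []
for h in [0.1, 0.01, 0.001]:
    s = np.linspace(t, t + h, 100001)
    # mean of |psi(t) - psi(s)| over [t, t+h] approximates (1/h) * integral
    averages.append(float(np.mean(np.abs(psi(t) - psi(s)))))
```

For $$h = 0.1$$ the window still crosses the jump at $$s = 0.5$$, so the first average is large; for smaller h the averages vanish, matching the lemma.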

### Proof of Proposition 2.7

We adapt the arguments usually used for (strong) invariance theorems of differential equations or inclusions (see, e.g., [1, § 5.3], [41, Proposition A.8], [64, § 10.XVI]).

Consider $$\delta :$$ $$[0,T] \longrightarrow [0, \infty )$$ with

\begin{aligned} \delta (t) {=} \text{ dist}_{{\mathbbm {d}}_3} \Big ( \big ( K(t), M(t), {\widetilde{M}}(t)\big ), \, {{{\mathcal {C}}}}\Big ) {\mathop {=}\limits ^{\mathrm{\tiny Def.}}} \displaystyle \inf _{(M_0, M_1, M_2) \,\in \,{{{\mathcal {C}}}}} \Big ( {\mathbbm {d}}\big ( K(t), \, M_0 \big ) + {\mathbbm {d}}\big ( M(t), \, M_1 \big ) + {\mathbbm {d}}\big ( {\widetilde{M}}(t), \, M_2 \big ) \Big ). \end{aligned}

$$\delta (\cdot )$$ is Lipschitz continuous since so is each of the tubes $$K(\cdot )$$, $$M(\cdot )$$, $${\widetilde{M}}(\cdot )$$. Due to $$\delta (0) = 0$$, it remains to verify $$\delta '(t) \le \big ( \Lambda + \lambda \big ) \cdot \delta (t)$$ for a.e. $$t \in [0,T)$$ because then Gronwall’s inequality leads to $$\delta (t) = 0$$ for all $$t \in [0,T]$$.
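The concluding Gronwall step is the standard one: since $$\delta (\cdot )$$ is Lipschitz, it is absolutely continuous, so

```latex
\delta(t) \;=\; \delta(0) + \int_0^t \delta'(s)\,\mathrm{d}s
\;\le\; (\Lambda + \lambda) \int_0^t \delta(s)\,\mathrm{d}s
\qquad\Longrightarrow\qquad
\delta(t) \;\le\; \delta(0)\, e^{(\Lambda+\lambda)\,t} \;=\; 0 .
```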

The single-valued function $$[0,T] \times \big ( {{\mathbb {R}}^n}\times {{{\mathcal {K}}}}({{\mathbb {R}}^n}) \big )$$ $$\longrightarrow$$ $${{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, $$(t, x, S) \longmapsto f(t, x, S, U)$$ is measurable/Lipschitz in the following sense:

• For all $$(t, x, S) \in [0,T] {\times } {{\mathbb {R}}^n}{\times } {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, the set $$f(t, x, S, U) {\mathop {=}\limits ^{\mathrm{\tiny Def.}}}\big \{ f(t,x,S, u) \, \big | \, u \in U \big \}$$ is compact due to continuity assumption 2.1 (iii) and the compactness of U.

• For every $$(x, S) \in {{\mathbb {R}}^n}\times {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, $$f(\cdot , x, S, U):$$ [0, T] $$\longrightarrow$$ $$\big ({{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, $${\mathbbm {d}}\big )$$ is measurable. Indeed, $$f(\cdot , x, S, \cdot ):$$ $$[0,T] \times U \longrightarrow {{\mathbb {R}}^n}$$ is a Carathéodory function due to Assumptions 2.1 (ii),(iii). Fix $$\varepsilon > 0$$ arbitrarily. Then, the Scorza-Dragoni theorem (e.g., [56, Theorem 1]) provides a closed subset $$J_\varepsilon \subset [0,T]$$ with $${{{\mathcal {L}}}}^1([0,T] {\setminus } J_\varepsilon ) < \varepsilon$$ such that $$f(\cdot , x, S, \cdot ) \big |_{J_\varepsilon \times U}$$ is continuous. This restriction is even uniformly continuous since $$J_\varepsilon \times U$$ is compact. As a consequence, $$f(\cdot , x, S, U) \big |_{J_\varepsilon }:$$ $$J_\varepsilon \longrightarrow \big ({{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, $${\mathbbm {d}}\big )$$ is continuous. We conclude from Lusin’s theorem (e.g., [24, 32]) that $$f(\cdot , x, S, U):$$ [0, T] $$\longrightarrow$$ $$\big ({{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, $${\mathbbm {d}}\big )$$ is measurable.

• For every $$t \in [0,T]$$, $$f(t, \cdot , \cdot , U):$$ $${{\mathbb {R}}^n}\times {{{\mathcal {K}}}}({{\mathbb {R}}^n}) \longrightarrow {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$ is $$(\Lambda + \lambda )$$-Lipschitz continuous as a consequence of assumptions 2.1 (iv) and 2.7 (v’) (about the partial Lipschitz continuity of $$f(t, \cdot , \cdot , u)$$ uniform in t, u).

In particular, for every $$x \in {{\mathbb {R}}^n}$$, $$f(\cdot , x, \cdot , U):$$ $$[0,T] \times {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$ $$\longrightarrow$$ $$\big ({{{\mathcal {K}}}}({{\mathbb {R}}^n}), {\mathbbm {d}}\big )$$, $$(t, S) \longmapsto f(t, x, S, U)$$ is a Carathéodory function. Thus, the composition $$[0,T] \longrightarrow {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, $$t \longmapsto f \big (t, x, K(t), U \big )$$ is measurable. For the same reasons, $$[0,T] \longrightarrow {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, $$t \longmapsto g \big (t, x, M(t), U \big )$$ and $${\widetilde{g}} \big (\cdot , x, {\widetilde{M}}(\cdot ), U \big )$$ are also measurable.

Fix $$R > 0$$ sufficiently large such that K(t), M(t), $${\widetilde{M}}(t)$$ $$\subset$$ $$B_R$$ for all $$t \in [0,T]$$. Let $$\{x_1, \,x_2, \,\ldots \}$$ be a countable dense subset of $${\overline{{\mathbb {B}}}}_R \subset {{\mathbb {R}}^n}$$. Define $$I_0$$ as the set of all $$t \in [0,T)$$ such that

• $$\delta (\cdot )$$ is differentiable at t,

• the inclusion condition 2.7 (vii’) holds at t, and

• for every $$k \in {\mathbb {N}}$$, $$\, \displaystyle \lim _{h\,\downarrow \,0}$$ $$\frac{1}{h} \cdot \displaystyle \int _t^{t+h} {\mathbbm {d}}\big ( f (t,x_k, K(t), U), \; f (s, x_k, K(s), U)\big ) \,\, \mathrm{d} s = 0$$ and the same for g, $${\widetilde{g}}$$.

Due to Rademacher’s theorem and Lemma 5.6, $$I_0 \subset [0,T]$$ is of full measure, i.e., $${{{\mathcal {L}}}}^1([0,T] {\setminus } I_0) = 0$$.

For every $$t \in I_0$$, there is $$\big (K_t, M_t, {\widetilde{M}}_t \big )$$ $$\in$$ $${{{\mathcal {C}}}}$$ $$\subset$$ $${{{\mathcal {K}}}}({{\mathbb {R}}^n})^3$$ with $${\mathbbm {d}}_3 \Big ( \big (K(t), M(t), {\widetilde{M}}(t) \big ), \, \big (K_t, M_t, {\widetilde{M}}_t \big ) \Big ) = \delta (t).$$ Indeed, first, all closed balls in $$\big ( {{{\mathcal {K}}}}({{\mathbb {R}}^n})$$, $${\mathbbm {d}}\big )$$ are compact due to [7, Theorem 3.2.4], and so are all closed balls in $$\big ( {{{\mathcal {K}}}}({{\mathbb {R}}^n})^3$$, $${\mathbbm {d}}_3 \big )$$. Second, $${{{\mathcal {C}}}}$$ is closed w.r.t. $${\mathbbm {d}}_3$$ according to Lemma 5.4.

Due to the inclusion condition 2.7 (vii’), Lemma 5.5 applied to $${\widetilde{g}}_1 {:}{=} f \big (t, \,\cdot \,, K_t, \,\cdot \, \big )$$, $${\widetilde{g}}_2 {:}{=} g \big (t, \,\cdot \,, M_t, \,\cdot \, \big )$$, $${\widetilde{g}}_3 {:}{=} {\widetilde{g}} \big (t, \,\cdot \,,{\widetilde{M}}_t,\,\cdot \,\big ):$$ $${{\mathbb {R}}^n}\times U \longrightarrow {{\mathbb {R}}^n}$$ provides a sequence $$(h_\ell )_{\ell \,\in \, {\mathbb {N}}}$$ in $$(0, T - t)$$ tending to 0 such that for each $$\ell \in {\mathbb {N}}$$,

\begin{aligned} \text{ dist}_{{\mathbbm {d}}_3} \Big ( \big ( {{{\mathcal {R}}}}_{f(t,\cdot , K_t, U)}(h_\ell , K_t), \; {{{\mathcal {R}}}}_{g(t,\cdot , M_t, U)}(h_\ell , M_t), \; {{{\mathcal {R}}}}_{{\widetilde{g}}(t,\cdot , {\widetilde{M}}_t, U)} (h_\ell , {\widetilde{M}}_t) \big ), \; {{{\mathcal {C}}}}\Big ) \, < \, \textstyle \frac{3}{\ell } \; h_\ell . \end{aligned}

Lemma 5.3 yields a corresponding estimate for every $$\ell \in {\mathbb {N}}$$. Fix any $$\varepsilon > 0$$. There exist finitely many $$x_{(k_1)}$$, $$x_{(k_2)}, \ldots , x_{(k_N)} \in {\overline{{\mathbb {B}}}}_R$$ with $${\overline{{\mathbb {B}}}}_R \subset \bigcup _{\nu \,=\,1}^N {{\mathbb {B}}}_{\frac{\varepsilon }{6\,(1 + \lambda )}}(x_{(k_\nu )})$$. Combining this covering with the Lipschitz assumption 2.1 (iv) (w.r.t. x), we finally obtain $$\; \delta '(t) \, = \, \displaystyle \lim _{h\,\rightarrow \,0} \, {\textstyle \frac{\delta (t+h) \,-\,\delta (t)}{h}} \; = \; \lim _{\ell \,\rightarrow \,\infty } \, {\textstyle \frac{\delta (t+h_\ell ) \,-\,\delta (t)}{h_\ell }} \; \le \; \big ( \Lambda + \lambda \big ) \cdot \delta (t) + \varepsilon \;$$ for every $$t \in I_0$$ (with $$\varepsilon > 0$$ fixed arbitrarily small and independently of t).    $$\square$$

### Lemma 5.7

(Intersecting two convex sets [47, §§ 5 – 8])

1. (1.)

Let M and N denote two convex subsets of a normed space E. If N contains an open ball $${{\mathbb {B}}}_\rho (x_0) \subset E$$ with $$x_0 \in M$$ and $$\rho > 0$$, then the following inequality holds for every $$x \in E$$

\begin{aligned} \text{ dist }(x, \, M \cap N) \;\, \le \;\, \left( 1 + {\textstyle \frac{1}{\rho }} \; \Vert x - x_0\Vert \right) \cdot \big ( \text{ dist }(x, \,M) + \text{ dist }(x, \,N) \big ). \end{aligned}
2. (2.)

Let $${{{\mathcal {T}}}}$$ be a topological space, E a normed space, $$t_0 \in {{{\mathcal {T}}}}$$ and M, N :  $${{{\mathcal {T}}}}\leadsto E$$ set-valued maps with convex values. Suppose M and N are continuous (w.r.t. $${\mathbbm {d}})$$ at $$t_0$$ and $$M(t_0) \cap N(t_0)^\circ$$ $$\subset$$ E is nonempty and bounded. Then, $${{{\mathcal {T}}}}\leadsto E$$, $$t \mapsto M(t) \cap N(t)$$ is continuous (w.r.t. $${\mathbbm {d}})$$ at $$t_0$$.

3. (3.)

Let E be a normed space and M, N :  $$[0,T] \leadsto E$$ convex-valued tubes. Assume for each $$t \in [0,T]$$ that $$M(t) \cap N(t)$$ is bounded and the intersection of M(t) and the interior of N(t) is nonempty. If both $$M(\cdot )$$ and $$N(\cdot )$$ are Lipschitz continuous, then so is $$t \mapsto M(t) \cap N(t)$$.
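The distance inequality of Lemma 5.7 (1.) can be sanity-checked numerically on a simple hypothetical configuration in $$E = {\mathbb {R}}^2$$: two half-planes whose intersection is the third quadrant (all names and the choice of sets are illustrative).

```python
import numpy as np

# M = { x : x1 <= 0 }, N = { x : x2 <= 0 }, so M ∩ N is the third quadrant.
# N contains the open ball B_1(x0) around x0 = (-1, -1), and x0 lies in M,
# so Lemma 5.7 (1.) applies with rho = 1.
rng = np.random.default_rng(0)
x0, rho = np.array([-1.0, -1.0]), 1.0

violations = 0
for _ in range(1000):
    x = rng.uniform(-3.0, 3.0, size=2)
    dist_M = max(x[0], 0.0)                       # distance to half-plane M
    dist_N = max(x[1], 0.0)                       # distance to half-plane N
    dist_MN = np.linalg.norm(np.maximum(x, 0.0))  # distance to M ∩ N
    bound = (1.0 + np.linalg.norm(x - x0) / rho) * (dist_M + dist_N)
    if dist_MN > bound + 1e-12:
        violations += 1
# violations stays 0: the inequality holds at every sampled point
```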

### Proof of Proposition 3.2

Set $${{{\mathcal {E}}}}_\cap (t)$$ $${:}{=}$$ $$\bigcap _{j\,=\,1}^N \, E_j(t)$$ for $$t \in [0,T]$$. We consider the auxiliary function

\begin{aligned} \delta : \; [0,T] \, \longrightarrow \, [0, \infty ], \quad t \, \longmapsto \, \left\{ \begin{array}{ll} {\mathbbm {e}}\big ( K(t), \; {{{\mathcal {E}}}}_\cap (t) \big ) &{}\quad \text{ if } \quad {{{\mathcal {E}}}}_\cap (t) \not = \emptyset \\ \infty &{}\quad \text{ if } \; \quad {{{\mathcal {E}}}}_\cap (t) = \emptyset . \end{array} \right. \end{aligned}

and aim to verify $$\delta (t) = 0$$ for all $$t \in [0,T]$$. $$\delta (0) = 0$$ holds due to the assumption $$K(0) = K_0 \subset {{{\mathcal {E}}}}_\cap (0)$$.

At every time instant $$t \in [0,T]$$, the convex set $$K(t) \subset {{\mathbb {R}}^n}$$ coincides with the reachable set of $$K_0$$ and the nonautonomous linear differential inclusion $$x' \in {{{\mathcal {A}}}}\big (s, \,K(s) \big ) \; x + {{{\mathcal {B}}}}\big (s, \,K(s) \big ) \; U$$ (a.e.), which can be represented by means of the variation of constants formula (see, e.g., [33, §§ 1.1, 1.2]). As a consequence, each $$K(t) \subset {{\mathbb {R}}^n}$$, like $$K(0) = K_0$$, has nonempty interior. Furthermore, $$K(\cdot )$$ is Lipschitz continuous. Hence, there exists a radius $$r_0 > 0$$ such that for every $$t \in [0,T]$$, K(t) contains a closed ball with radius $$2 \, r_0$$. Choose $$\Delta > 0$$ sufficiently small such that its product with the maximum of the Lipschitz constants of $$K(\cdot )$$, $$E_1(\cdot ), \ldots , E_N(\cdot )$$ is bounded by $$r_0$$.
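In the notation used here, the variation-of-constants representation behind this argument can be sketched as follows (with $$\Phi$$ the fundamental matrix solution along the tube $$K(\cdot )$$; a sketch, not a verbatim quotation of [33]):

```latex
K(t) \;=\; \Phi(t,0)\,K_0 \;+\; \int_0^t \Phi(t,s)\; \mathcal{B}\big(s, K(s)\big)\, U \,\mathrm{d}s,
\qquad
\partial_t \Phi(t,s) \;=\; \mathcal{A}\big(t, K(t)\big)\,\Phi(t,s), \quad \Phi(s,s) = \mathbbm{1}.
```

Since each $$\Phi(t,0)$$ is invertible, the set $$\Phi(t,0)\,K_0$$, and hence $$K(t)$$, inherits the nonempty interior of $$K_0$$.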

Now consider any $$t_0 \in [0,T]$$ with $$\delta (t_0) = 0$$, i.e., $$K(t_0)$$ $$\subset$$ $${{{\mathcal {E}}}}_\cap (t_0)$$. Then, there exists $$x_0 \in {{\mathbb {R}}^n}$$ such that $${\overline{{\mathbb {B}}}}_{2\,r_0}(x_0)$$ $$\subset$$ $$K(t_0)$$ $$\subset$$ $${{{\mathcal {E}}}}_\cap (t_0)$$. Set $${\widetilde{T}} {:}{=} \min \big \{ t_0 + \Delta$$, $$T \big \}$$ $$\in$$ $$(t_0, T]$$; then we obtain for all $$s \in [t_0, {\widetilde{T}}]$$

\begin{aligned} \max _{k\,=\,1,\ldots ,N} \; {\mathbbm {d}}\big ( E_k(t_0), \, E_k(s) \big ) \,\, \le \,\, r_0 \quad \Longrightarrow \quad {{\mathbb {B}}}_{r_0}(x_0) \, \subset \, {{{\mathcal {E}}}}_\cap (s). \end{aligned}

From now on, we focus on the restriction of $$\delta$$ to $$[t_0, {\widetilde{T}}]$$.

$$\delta$$ is Lipschitz continuous in $$[t_0, {\widetilde{T}}]$$ as a consequence of Lemma 5.7 (3.).

Choose $$\varepsilon \in (0, \,{\widetilde{T}}-t_0)$$ arbitrarily. According to the Scorza-Dragoni theorem, there exists a closed subset $${\widetilde{J}}_\varepsilon$$ $$\subset$$ $$[t_0, {\widetilde{T}}]$$ with $${{{\mathcal {L}}}}^1 \big ( [t_0, {\widetilde{T}}] {\setminus } {\widetilde{J}}_\varepsilon \big ) < \frac{\varepsilon }{4}$$ such that both $${{{\mathcal {A}}}}\big |_{{\widetilde{J}}_\varepsilon \times {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n})}$$ and $${{{\mathcal {B}}}}\big |_{{\widetilde{J}}_\varepsilon \times {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n})}$$ are continuous.

Then, there is a closed subset $$J_\varepsilon$$ of $$[t_0, {\widetilde{T}})$$ with the following properties:

• $$J_\varepsilon \subset {\widetilde{J}}_\varepsilon$$ and $${{{\mathcal {L}}}}^1 \big ( [t_0, {\widetilde{T}}] {\setminus } J_\varepsilon \big ) < \frac{\varepsilon }{2}$$.

• At every time instant $$\tau \in J_\varepsilon$$, each $$E_j(\cdot )$$ $$(j = 1, \ldots , N)$$ satisfies condition 3.2 (vii).

• Each $$\tau \in J_\varepsilon$$ is a (forward) Lebesgue point of the characteristic function $$\chi _{{\widetilde{J}}_\varepsilon }:$$ $$[0,T] \longrightarrow \{0,1\}$$ of $${\widetilde{J}}_\varepsilon$$, i.e.,    $$\displaystyle \lim _{\genfrac{}{}{0.0pt}{}{h\,\downarrow \,0}{(\tau +h \,\in \,[0,T])}} \; {\textstyle \frac{1}{h}} \cdot {{{\mathcal {L}}}}^1 \big ([\tau , \tau +h] \cap {\widetilde{J}}_\varepsilon \big ) \;\, = \;\,\chi _{{\widetilde{J}}_\varepsilon }(\tau ) \,\, = \,\, 1\,.$$

Choose any $$\tau \in J_\varepsilon$$.

Then, the excess condition 3.2 (vii) also holds for $${{{\mathcal {E}}}}_\cap (\tau )$$, i.e.,

\begin{aligned} \displaystyle \lim _{h\,\downarrow \,0} \,\; {\textstyle \frac{1}{h}} \cdot {\mathbbm {e}}\Big ( \big ( {\mathbbm {1}}+ h \; {{{\mathcal {A}}}}\big (\tau , {{{\mathcal {E}}}}_\cap (\tau ) \big ) \big ) \; {{{\mathcal {E}}}}_\cap (\tau ) + h \; {{{\mathcal {B}}}}\big (\tau , {{{\mathcal {E}}}}_\cap (\tau ) \big ) \; U, \;\; {{{\mathcal {E}}}}_\cap (\tau +h)\Big ) \;\, = \,\, 0. \end{aligned}

Indeed, Lemma 5.7 (1.) implies for all $$h \in (0, {\widetilde{T}}-\tau )$$, $$x \in {{{\mathcal {E}}}}_\cap (\tau )$$ and $$u \in U$$

\begin{aligned}&\text{ dist }\Big (\big ( {\mathbbm {1}}+ h \cdot {{{\mathcal {A}}}}\big (\tau , \, {{{\mathcal {E}}}}_\cap (\tau ) \big ) \big ) \,\, x \, + \, h \; {{{\mathcal {B}}}}\big (\tau , \, {{{\mathcal {E}}}}_\cap (\tau ) \big ) \,\, u, \;\; {{{\mathcal {E}}}}_\cap (\tau +h)\Big )\\&\quad \le \,\, C \cdot \displaystyle \sum _{j\,=\,1}^N \; {\mathbbm {e}}\Big (\big ( {\mathbbm {1}}+ h \cdot {{{\mathcal {A}}}}\big (\tau , \, {{{\mathcal {E}}}}_\cap (\tau ) \big ) \big ) \, E_j(\tau ) + h \; {{{\mathcal {B}}}}\big (\tau , \, {{{\mathcal {E}}}}_\cap (\tau ) \big ) \, U, \;\; E_j(\tau +h)\Big ) \end{aligned}

with a constant $$C > 0$$ depending on $$r_0$$, $$x_0$$, N, $$\sup _{\begin{array}{c} {s \,\in \, [0,T] \quad } \\ {k\,=\,1,\ldots ,N} \end{array}} {\mathbbm {d}}\big (\{0\}, \, E_k(s) \big ) < \infty$$.

Let H :  $$[\tau , {\widetilde{T}}] \leadsto {{\mathbb {R}}^n}$$ denote the unique solution tube of the IVP

\begin{aligned} \mathring{H}(s) = {{{\mathcal {A}}}}\big ( s, \, H(s) \big ) \, x + {{{\mathcal {B}}}}\big ( s, \, H(s) \big ) \, U \quad \text{ in } [\tau ,{\widetilde{T}}], \qquad H(\tau ) = {{{\mathcal {E}}}}_\cap (\tau ) \in {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n}) \end{aligned}

according to Proposition 2.2. All its values are convex since each $$H(s) \subset {{\mathbb {R}}^n}$$ coincides with the reachable set of a nonautonomous linear differential inclusion. For every $$s \in [\tau , {\widetilde{T}}]$$, Proposition 2.5 guarantees $${\mathbbm {d}}\big ( K(s), \, H(s)\big )$$ $$\le$$ $$\delta (\tau ) \cdot e^{\mathrm{const}(\Gamma , \Lambda ) \cdot (s-\tau )}$$.

Then, H fulfills the integral funnel condition at time $$\tau$$, i.e.,

\begin{aligned} 0= & {} \displaystyle \lim _{h\,\downarrow \,0} \frac{1}{h} \cdot {\mathbbm {d}}\Big ( H(\tau +h), \displaystyle \bigcup _{x\,\in \, H(\tau )}\Big ( x + h \cdot \big ( {{{\mathcal {A}}}}\big (\tau , \,H(\tau ) \big ) \,x +{{{\mathcal {B}}}}\big (\tau , \,H(\tau ) \big ) \,U \big )\Big ) \Big )\\= & {} \displaystyle \lim _{h\,\downarrow \,0}\frac{1}{h} \cdot {\mathbbm {d}}\Big ( H(\tau +h),\big ( {\mathbbm {1}}+ h \cdot {{{\mathcal {A}}}}\big (\tau , H(\tau ) \big ) \big ) \; H(\tau )+ h \cdot {{{\mathcal {B}}}}\big (\tau , H(\tau ) \big ) \; U \Big ). \end{aligned}

Indeed, the autonomous differential inclusion $$x' \in {{{\mathcal {A}}}}\big (\tau , \,H(\tau ) \big ) \,x + {{{\mathcal {B}}}}\big (\tau , \,H(\tau ) \big ) \,U$$ induces the auxiliary tube R :  $$[0,\infty ) \leadsto {{\mathbb {R}}^n}$$ of reachable sets of $$H(\tau ) \subset {{\mathbb {R}}^n}$$. $$R(\cdot )$$ satisfies the integral funnel condition at (even) every time $$h \in [0, \infty )$$ as a consequence of Proposition 2.1. Furthermore, Proposition 2.5 and Assumptions 3.2 (iii), (iv) lead to an upper bound for $${\mathbbm {d}}\big (H(\tau +h), \; R(h) \big )$$ for each $$h \in [0, \,{\widetilde{T}}-\tau ]$$ with $$\, \rho {:}{=} \displaystyle e^{\Gamma \, T} \cdot \sup \big \{ {\mathbbm {d}}\big (\{0\}, \, E_k(s) \cup K_0 \big ) + \Gamma \, T \, \big | \, s \in [0,T], \,\, k = 1,\ldots ,N \big \}\,$$.

The characterization of $$J_\varepsilon$$ and the continuity of $$H(\cdot )$$ imply $$\; \displaystyle \lim _{h\,\downarrow \,0} \,\, {\textstyle \frac{1}{h}} \cdot {\mathbbm {d}}\big (H(\tau +h), \; R(h) \big ) \, = \, 0 \;$$ and thus, i.e., H satisfies the integral funnel condition at time $$\tau$$.

In the next step, the triangle inequality together with $$H(\tau ) = {{{\mathcal {E}}}}_\cap (\tau )$$ yields an estimate of the difference quotient of $$\delta$$ for every h $$\in$$ $$\big ( 0, \, {\widetilde{T}}-\tau \big )$$. Thus, we have $$\; \displaystyle \limsup _{h\,\downarrow \,0} \, {\textstyle \frac{\delta (\tau +h) \,-\, \delta (\tau )}{h}} \, \le \, C(\Gamma , \Lambda ) \cdot \delta (\tau )$$ for every $$\tau$$ in $$J_\varepsilon \subset [t_0, {\widetilde{T}})$$. As $$\varepsilon \in (0, \,{\widetilde{T}}-t_0)$$ had been chosen arbitrarily, the last inequality holds for a.e. $$\tau$$ $$\in$$ $$[t_0, {\widetilde{T}}]$$ and so, Gronwall’s inequality implies $$\delta = 0$$ in $$[t_0, {\widetilde{T}}]$$, i.e., $$K(t) \subset {{{\mathcal {E}}}}_\cap (t)$$ for each $$t \in [t_0, {\widetilde{T}}]$$.

Finally, it is worth mentioning that $${\widetilde{T}}$$ was chosen as $${\widetilde{T}} = \min \big \{ t_0 + \Delta , \, T \big \}$$ with $$\Delta > 0$$ depending only on $$r_0$$ and the Lipschitz constants of $$K(\cdot )$$, $$E_1(\cdot ), \ldots , E_N(\cdot )$$ (but not on $$t_0$$). As a consequence, we conclude $$\delta = 0$$ on the whole interval [0, T] by covering it with finitely many successive subintervals of the form $$[t_0, {\widetilde{T}}]$$. $$\square$$

### 5.5 A Computational Method for an External Approximation With Ellipsoidal Values (Proposition 3.3)

On our way to proving Proposition 3.3, we regard the required property $$\bigcap _{k=1}^N \,{{{\mathcal {E}}}}\big (x_k(t), \, X_k(t)\big )$$ $$\not =$$ $$\emptyset$$ as a technical challenge, since its relationship with the ODE systems (for all $$x_j(\cdot )$$, $$X_j(\cdot )$$) is not obvious. Hence, for $$\rho \ge 0$$ fixed arbitrarily, we focus on the following auxiliary problem

\begin{aligned} \begin{array}{l} \ell _j'(t) = - {{{\mathcal {A}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \big )\!^{\top }\;\quad \ell _j(t)\\ x_j'(t) = {{{\mathcal {A}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \big )\,\quad x_j(t)+ {{{\mathcal {B}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \big )\,\, q_u,\\ X_j'(t) = {{{\mathcal {A}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \big )\,\, X_j(t)+ {\widetilde{\pi }}_j(t) \; X_j(t) \quad \\ \qquad \qquad \qquad + X_j(t) \;{{{\mathcal {A}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \big )^{\top }+\frac{1}{{\widetilde{\pi }}_j(t)}\; {\widetilde{Q}}_{{{{\mathcal {B}}}}\, \cap } (t) \end{array} \end{aligned}
(10)

$$(j = 1, \,\ldots ,N)$$ with the modified abbreviations

\begin{aligned} \begin{array}{l} \Delta _{1,\rho }(t) {:}{=} \rho , \Delta _{j,\rho }(t)\, {:}{=} \, \rho +2 \, {\mathbbm {g}}\Big ( \displaystyle {{{\mathcal {E}}}}\big (x_j(t), X_j(t)\big ), \; \bigcap _{k\,=\,1}^{j-1} \!\Big ({{{\mathcal {E}}}}\big (x_k(t), X_k(t)\big ) \!+ \! {\overline{{\mathbb {B}}}}_{\Delta _{k,\rho }(t)} \!\Big ) \Big ) \quad (j \!\ge \! 2),\\ {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t){:}{=}\displaystyle \bigcap _{k\,=\,1}^N\Big ({{{\mathcal {E}}}}\big (x_k(t), \, X_k(t)\big ) \, + \,{\overline{{\mathbb {B}}}}_{\rho +\Delta _{k,\rho }(t)} \!\Big ),\\ {\widetilde{Q}}_{{{{\mathcal {B}}}}\, \cap } (t) {:}{=}{{{\mathcal {B}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \big ) \,\; Q_u \,\;{{{\mathcal {B}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \big )\!^{\top }\in {{\mathbb {R}}^{n\times n}}\,, \quad {\widetilde{\pi }}_j(t) \, {:}{=} \,\sqrt{\,\frac{ \langle \ell _j(t), \;\, {\widetilde{Q}}_{{{{\mathcal {B}}}}\, \cap } (t) \,\,\ell _j(t) \rangle }{ \langle \ell _j(t), \;\, X_j(t) \,\, \ell _j(t) \rangle } \,}\,\, . \end{array} \end{aligned}
(11)

In comparison with the original problem (7), (8), the intersection $${{{\mathcal {E}}}}_\cap (t)$$ $${\mathop {=}\limits ^{\mathrm{\tiny Def.}}}$$ $$\bigcap _{k=1}^N \,{{{\mathcal {E}}}}\big (x_k(t), \, X_k(t)\big )$$ is replaced by its superset $${\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \subset {{\mathbb {R}}^n}$$ which has two key properties:

• $${\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \not = \emptyset$$ is convex. If $$\rho > 0$$, it contains a closed ball of radius $$\rho > 0. \;$$ Indeed, for each $$j \in \{2,\,\ldots , N\}$$, $${{{\mathcal {E}}}}\big (x_j(t), X_j(t)\big ) + {\overline{{\mathbb {B}}}}_{\Delta _{j,\rho }(t)}$$ contains a ball with some center $$z_j$$ $$\in$$ $$\bigcap _{k\,=\,1}^{j-1} \, \big ({{{\mathcal {E}}}}\big (x_k(t), X_k(t)\big ) + {\overline{{\mathbb {B}}}}_{\Delta _{k,\rho }(t)} \big )$$ $$\not =$$ $$\emptyset$$ and radius $$\rho$$. Hence, $${\overline{{\mathbb {B}}}}_{\rho }(z_j)$$ $$\subset$$ $$\bigcap _{k\,=\,1}^j \big ({{{\mathcal {E}}}}\big (x_k(t), \, X_k(t)\big ) \, + \, {\overline{{\mathbb {B}}}}_{\rho + \Delta _{k,\rho }(t)} \!\big )$$.

• If $${{{\mathcal {E}}}}_\cap (t) \not = \emptyset$$, we have $$\Delta _{j,\rho }(t)$$ $$=$$ $$\rho$$ for all $$j \in \{1,\,\ldots ,N\}$$ and thus $${{{\mathcal {E}}}}_\cap (t)$$ $$\subset$$ $${\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho }(t)$$ $$=$$ $$\bigcap _{k\,=\,1}^N \big ({{{\mathcal {E}}}}\big (x_k(t), \, X_k(t)\big ) + {\overline{{\mathbb {B}}}}_{2\,\rho } \big )$$. It implies $$\text{ Lim}_{\rho \,\downarrow \,0} \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho }(t) = {{{\mathcal {E}}}}_\cap (t)$$.

### Lemma 5.8

( [25, Theorems 2, 3, 5]) Consider any $$p_1$$, $$p_2 \in {{\mathbb {R}}^n}$$ and positive semidefinite symmetric $$Q_1, Q_2 \in {{\mathbb {R}}^{n\times n}}$$. Set $$M {:}{=} \max \left\{ \sqrt{\Vert Q_1\Vert _{\text{ op }}}, \sqrt{\Vert Q_2\Vert _{\text{ op }}} \right\} > 0. \;$$ Then, the following inequalities hold:

1. (1.)

$$\delta _0 {:}{=} {\mathbbm {d}}\big ( {{{\mathcal {E}}}}(0, Q_1), \; {{{\mathcal {E}}}}(0, Q_2) \big )$$, $$\Delta _0 {:}{=} \sqrt{ \Vert Q_1 - Q_2 \Vert _{\text{ op }}}$$ fulfill $$\frac{\Delta _0^2}{M \, + \, \sqrt{M^2 + \Delta _0^2}}$$ $$\le$$ $$\delta _0$$ $$\le$$ $$\Delta _0$$ $$\le$$ $$\sqrt{\delta _0^2 + 2 \, M \, \delta _0}$$.

2. (2.)

$$\delta {:}{=} {\mathbbm {d}}\big ( {{{\mathcal {E}}}}(p_1, Q_1), \; {{{\mathcal {E}}}}(p_2, Q_2) \big )$$ and $$\Delta {:}{=} \Vert p_1 - p_2\Vert + \sqrt{ \Vert Q_1 - Q_2 \Vert _{\text{ op }}}$$ satisfy $$\frac{\Delta ^2}{2 \,\, ( M + \Delta )}$$ $$\le$$ $$\delta$$ $$\le$$ $$\Delta$$ $$\le$$ $$\delta + \sqrt{\delta ^2 + 2 \, M \, \delta }$$.

3. (3.)

$$\delta {:}{=} {\mathbbm {d}}\big ( {{{\mathcal {E}}}}(p_1, Q_1), \; {{{\mathcal {E}}}}(p_2, Q_2) \big )$$, $$d {:}{=} \Vert p_1 - p_2\Vert + \big \Vert Q_1^{\frac{1}{2}} - Q_2^{\frac{1}{2}} \big \Vert _{\text{ op }}$$ fulfill $$\delta \le d \le \big (1 + 2 \; \sqrt{2} \; n\; (n+2)\big ) \cdot \delta$$.
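The bounds of Lemma 5.8 (1.) can be checked numerically on a hypothetical pair of centered ellipsoids in $${\mathbb {R}}^2$$, assuming the usual parametrization $${{{\mathcal {E}}}}(0, Q) = \{ Q^{1/2} v \; | \; \Vert v\Vert \le 1 \}$$ with support function $$h_Q(\ell ) = \sqrt{\langle \ell , Q \, \ell \rangle }$$; for convex bodies, the Hausdorff distance is the supremum of $$|h_{Q_1} - h_{Q_2}|$$ over unit directions (sampling directions underestimates it slightly).

```python
import numpy as np

# Two hypothetical shape matrices (positive definite, diagonal for simplicity).
Q1 = np.diag([4.0, 1.0])
Q2 = np.diag([1.0, 2.25])

# Sample unit directions and evaluate both support functions.
theta = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
L = np.stack([np.cos(theta), np.sin(theta)], axis=1)
h1 = np.sqrt(np.einsum('ij,jk,ik->i', L, Q1, L))
h2 = np.sqrt(np.einsum('ij,jk,ik->i', L, Q2, L))

delta0 = float(np.max(np.abs(h1 - h2)))              # ~ d(E(0,Q1), E(0,Q2))
Delta0 = float(np.sqrt(np.linalg.norm(Q1 - Q2, 2)))  # sqrt of operator norm
M = max(np.sqrt(np.linalg.norm(Q1, 2)), np.sqrt(np.linalg.norm(Q2, 2)))

# Chain of Lemma 5.8 (1.): lower <= delta0 <= Delta0 <= sqrt(delta0^2 + 2 M delta0)
lower = Delta0**2 / (M + np.sqrt(M**2 + Delta0**2))
```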

### Lemma 5.9

Let $${{{\mathcal {A}}}}:$$ $$[0,T] \times {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n})$$ $$\longrightarrow$$ $${{\mathbb {R}}^{n\times n}}$$, $${{{\mathcal {B}}}}:$$ $$[0,T] \times {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n})$$ $$\longrightarrow$$ $${{\mathbb {R}}^{n\times n}}$$ and $$U = {{{\mathcal {E}}}}(q_u, Q_u) \subset {{\mathbb {R}}^n}$$ satisfy the assumptions of Proposition 3.3. Fix $$\rho > 0$$. For any initial $$\ell _{0 j} \in {{\mathbb {R}}^n}{\setminus } \{ 0 \}$$, $$x_{0j} \in {{\mathbb {R}}^n}$$ and positive definite $$X_{0 j} \in {{\mathbb {R}}^{n\times n}}$$ $$(j = 1,\,\ldots ,N)$$ given, consider system (10) with abbreviations (11).

Then, there exist $$\tau \in (0,T]$$ and at least one tuple of solutions $$\ell _j$$, $$x_j:$$ $$[0,\tau ] \longrightarrow {{\mathbb {R}}^n}$$, $$X_j:$$ $$[0,\tau ] \longrightarrow {{\mathbb {R}}^{n\times n}}$$ $$(j = 1,\,\ldots ,N)$$.

### Proof

Let $$\mathrm{SPD}(n)$$ abbreviate the set of all positive definite symmetric matrices in $${{\mathbb {R}}^{n\times n}}$$. The map $$(q, Q) \longmapsto {{{\mathcal {E}}}}(q,Q)$$ and thus the coefficient functions in (11) are continuous due to Lemma 5.8 and the triangle inequality of $${\mathbbm {d}}$$. For any sets $$M_1$$, $$M_2 \in {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n})$$ with $$M_1^\circ \cap M_2 \not = \emptyset$$, Lemma 5.7 (2.) implies that $${{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n}) \times {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n}) \leadsto {{\mathbb {R}}^n}$$, $$(N_1, N_2) \mapsto N_1 \cap N_2$$ is continuous with nonempty values in a neighborhood of $$(M_1, M_2)$$.

As a consequence, the compositions

\begin{aligned} \; \delta : {{\mathbb {R}}^n}\times \mathrm{SPD}(n) \times {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n}) \,\longrightarrow \, {\mathbb {R}}, \,\, (q, Q, M_2) \,\longmapsto \, \rho + 2 \cdot {\mathbbm {g}}\big ( {{{\mathcal {E}}}}(q,Q), M_2 \big ) \end{aligned}

and $${{\mathbb {R}}^n}\times \mathrm{SPD}(n) \times {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n}) \longrightarrow {{{{\mathcal {K}}}}_{\text{ co }}}({{\mathbb {R}}^n})$$, $$(q, Q, M_2) \longmapsto \big ( {{{\mathcal {E}}}}(q,Q) + {\overline{{\mathbb {B}}}}_{\delta (q,Q,M_2)} \big ) \cap M_2$$ are continuous.

Hence, the right-hand side of the ODE system (10) is continuous w.r.t. state and measurable w.r.t. time. The local existence of solutions results from a well-known theorem in ODE theory (see, e.g., [64, Ch. III, § 10.XVIII Theorem]). $$\square$$

### Lemma 5.10

Under the assumptions of Proposition 3.3, consider solutions $$\ell _j:$$ $$[0,\tau ] \longrightarrow {{\mathbb {R}}^n}$$, $$x_j:$$ $$[0,\tau ] \longrightarrow {{\mathbb {R}}^n}$$, $$X_j:$$ $$[0,\tau ] \longrightarrow {{\mathbb {R}}^{n\times n}}$$ $$(j = 1,2)$$ to system (10) with abbreviations (11), $$\rho > 0$$, $$\tau \in (0, T]$$.

Then, it holds for every $$t \in [0, \tau ]$$ and $$j \in \{1,\,\ldots ,N\}:$$

1. (1.)

$$0 \; < \; \big \Vert \ell _{0 j} \big \Vert \,\, e^{-\Gamma \, t} \; \le \; \big \Vert \ell _j(t) \big \Vert \; \le \; \big \Vert \ell _{0 j} \big \Vert \,\, e^{\Gamma \, t}$$ and $$\Vert x_j(t) \Vert \, \le \, \mathrm{const}(\Gamma ) \cdot \big ( \Vert x_{0 j} \Vert + \Vert q_u\Vert \; t \big ) \cdot e^{\Gamma \, t}$$.

2. (2.)

$$X_j(t) \in {{\mathbb {R}}^{n\times n}}$$ is symmetric.

3. (3.)

There is $$\gamma _j > 0$$ (depending on $$X_{0 j} \in {{\mathbb {R}}^{n\times n}}$$) such that for all $$v \in {{\mathbb {R}}^n}$$, $$\, \gamma _j \; e^{-2\,\Gamma \,t} \;\Vert v\Vert ^2 \le \langle v, \; X_j(t) \, v \rangle$$. Hence, $$X_j(t) \in {{\mathbb {R}}^{n\times n}}$$ is positive definite.

4. (4.)

There exist $${\widetilde{c}}_j, {\widetilde{C}}_j > 0$$ (depending on $$\Gamma$$, $$Q_u$$) and $$c_j, C_j > 0$$ (depending on $$\Gamma$$, $$Q_u$$, $$X_{0 j}$$) with corresponding two-sided bounds.

5. (5.)

$$\big \Vert X_j(t) \big \Vert _{\text{ op }}\le \mathrm{const}(\Gamma , Q_u, X_{0 j}, T) \cdot e^{2 \,\Gamma \, t}$$ and $$\big \Vert X_j ' (t) \big \Vert _{\text{ op }}\le \mathrm{const}(\Gamma , Q_u, X_{0 j}, T) \cdot e^{4 \,\Gamma \, t}$$.

As a consequence of the a priori bounds (1.), (5.), each solution tuple to the IVP with ODE system (10) and $$\rho > 0$$ can be extended to [0, T] (see, e.g., [64, Ch. III, § 10.XX Theorem]).

### Proof

(1.)    $$\ell _j(\cdot )$$ solves the adjoint ODE $$\ell _j'(t) = - {{{\mathcal {A}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \big )\!^{\top }\;\ell _j(t)$$ in $$[0, \tau ]$$ and, $${{{\mathcal {A}}}}^{\top }$$ is bounded by $$\Gamma$$ due to hypothesis 3.3 (iv’). $$x_j(\cdot )$$ is characterized as the solution to an inhomogeneous linear ODE and so, Gronwall’s inequality provides an explicit a priori bound.
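In fact, the two-sided bound in statement (1.) can be spelled out as a standard comparison argument: at a.e. $$t \in [0, \tau ]$$,

\begin{aligned} \Big | \textstyle \frac{\hbox {d}}{\hbox {d} t} \, \big \Vert \ell _j(t) \big \Vert \Big | \;\le \; \big \Vert \ell _j'(t) \big \Vert \; = \; \big \Vert {{{\mathcal {A}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \big )\!^{\top }\, \ell _j(t) \big \Vert \;\le \; \Gamma \, \big \Vert \ell _j(t) \big \Vert \end{aligned}

and so, Gronwall's inequality (applied forward and backward in time) yields $$\Vert \ell _{0 j} \Vert \, e^{-\Gamma \, t} \le \Vert \ell _j(t) \Vert \le \Vert \ell _{0 j} \Vert \, e^{\Gamma \, t}$$. In particular, $$\ell _j(t)$$ never vanishes.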

(2.)   $$Q_u \in {{\mathbb {R}}^{n\times n}}$$ is symmetric by assumption and hence so is $${\widetilde{Q}}_{{{{\mathcal {B}}}}\, \cap } (t) \in {{\mathbb {R}}^{n\times n}}$$. Therefore, both $$X_j(\cdot )$$ and $$X_j(\cdot )^{\top }$$ are Carathéodory solutions $$Y:$$ $$[0, \tau ] \longrightarrow {{\mathbb {R}}^{n\times n}}$$ to the ODE $$Y'(t) \, = \, {{{\mathcal {A}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \big ) \, Y(t) + {\widetilde{\pi }}_j(t) \; Y(t) + Y(t) \; {{{\mathcal {A}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t)\big )^{\top }+ \textstyle \frac{1}{{\widetilde{\pi }}_j(t)}\; {\widetilde{Q}}_{{{{\mathcal {B}}}}\, \cap } (t)$$ with the same $${\widetilde{\pi }}_j(\cdot )$$. Since $$X_j(0) = X_{0j} = X_j(0)^{\top }$$, the uniqueness of solutions to this linear ODE implies $$X_j(t) = X_j(t)^{\top }$$ for every $$t \in [0, \tau ]$$.

(3.)    Choose $$t \in (0, \tau ]$$ and $$v_t \in {{\mathbb {R}}^n}{\setminus } \{ 0\}$$ arbitrarily and let $$v: [0,t] \longrightarrow {{\mathbb {R}}^n}$$ denote the unique solution of the adjoint ODE $$v' = - {{{\mathcal {A}}}}\big (\cdot , \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (\cdot ) \big )\!^{\top }\; v$$ with $$v(t) = v_t$$. We conclude from ODE system (10) at a.e. time $$s \in [0, t)$$ that $$\frac{\hbox {d}}{\hbox {d} s} \, \langle v(s), \; X_j(s) \; v(s) \rangle \, = \, {\widetilde{\pi }}_j(s) \; \langle v(s), \; X_j(s) \; v(s) \rangle + \textstyle \frac{1}{{\widetilde{\pi }}_j(s)} \; \langle v(s), \; {\widetilde{Q}}_{{{{\mathcal {B}}}}\, \cap } (s) \; v(s) \rangle$$ and hence, $$\frac{\hbox {d}}{\hbox {d} s} \, \langle v(s), \; X_j(s) \; v(s) \rangle$$ $$\ge$$ 0. At time $$s = t$$, in particular, we obtain

\begin{aligned}&\langle v_t, \; X_j(t) \; v_t \rangle \; = \; \langle v(t), \; X_j(t) \; v(t) \rangle \; \ge \; \langle v(0), \; X_{0 j} \; v(0) \rangle \; \ge \; \gamma _j \; \Vert v(0) \Vert ^2 \; \\&\ge \; \gamma _j \; e^{-2\,\Gamma \,t} \; \Vert v_t \Vert ^2 \, > \, 0 \end{aligned}

with $$\gamma _j = \mathrm{const}(X_{0 j}) > 0$$, i.e., $$X_j(t) \in {{\mathbb {R}}^{n\times n}}$$ is positive definite.

(4.)    For every $$t \in [0, \tau ]$$, the Cauchy-Schwarz inequality and assumption 3.3 (iv’) lead to

\begin{aligned} \langle \ell _j(t), \, {\widetilde{Q}}_{{{{\mathcal {B}}}}\, \cap } (t) \, \ell _j(t) \rangle\le & {} \Vert Q_u \Vert \,\big \Vert {{{\mathcal {B}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \big )\!^{\top }\big \Vert _{\text{ op }}^2 \, \big \Vert \ell _j(t) \big \Vert ^2\\\le & {} \; \Vert Q_u \Vert \, \Gamma ^2 \cdot \Vert \ell _{0 j} \Vert ^2 \; e^{2\,\Gamma \,t} \,. \end{aligned}

Similarly, we conclude from Eq. (11), $$Q_u \in {{\mathbb {R}}^{n\times n}}$$ being positive definite and hypotheses 3.3 (iv’), (vi’)

\begin{aligned} \langle \ell _j(t), \; {\widetilde{Q}}_{{{{\mathcal {B}}}}\, \cap } (t) \,\, \ell _j(t) \rangle&{=}&\quad \langle {{{\mathcal {B}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \big )\!^{\top }\; \ell _j(t), \;\; Q_u \,\, {{{\mathcal {B}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \big )\!^{\top }\; \ell _j(t) \rangle \\\ge & {} \quad \mathrm{const}(Q_u) \; \big \Vert {{{\mathcal {B}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \big )\!^{\top }\; \ell _j(t) \big \Vert ^2\\\ge & {} \mathrm{const}(\Gamma , Q_u) \; \Vert \ell _{0 j} \Vert ^2 \;e^{-2 \, \Gamma \,t} \,. \end{aligned}

In regard to the other scalar product in the quotient $$\pi _j(t)$$, we have already verified in the proof of the statement (3.) that $$[0, \tau ] \longrightarrow {\mathbb {R}}$$, $$t \longmapsto \langle \ell _j(t), \; X_j(t) \; \ell _j(t) \rangle$$ is non-decreasing and so, for every $$t\in [0, \tau ]$$,

\begin{aligned} \langle \ell _j(t), \; X_j(t) \; \ell _j(t) \rangle \,\; \ge \,\; \langle \ell _j(0), \; X_{0 j} \; \ell _j(0) \rangle \,\; \ge \,\; \mathrm{const}(X_{0 j}) \; \Vert \ell _{0 j} \Vert ^2 \; . \end{aligned}

Furthermore, its time derivative mentioned there implies for a.e. $$t \in [0, \tau ]$$

\begin{aligned} \frac{\hbox {d}}{\hbox {d} t} \, \sqrt{\langle \ell _j(t), \; X_j(t) \; \ell _j(t) \rangle }= & {} \frac{1}{2 \; \sqrt{\langle \ell _j(t), \; X_j(t) \; \ell _j(t) \rangle }} \cdot \frac{\hbox {d}}{\hbox {d} t} \; \langle \ell _j(t), \,\, X_j(t) \; \ell _j(t) \rangle \\= & {} \sqrt{\langle {{{\mathcal {B}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \big )\!^{\top }\; \ell _j(t), \;\; Q_u \,\, {{{\mathcal {B}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \big )\!^{\top }\; \ell _j(t) \rangle \;} \,. \end{aligned}

Assumption 3.3 (iv’) (again) guarantees for a.e. $$t \in [0,\tau ]$$

\begin{aligned} \textstyle \frac{\hbox {d}}{\hbox {d} t} \, \sqrt{\langle \ell _j(t), \; X_j(t) \; \ell _j(t) \rangle } \,\, \le \,\, \mathrm{const}(\Gamma , Q_u) \; \big \Vert \ell _j(t) \big \Vert \,\, \le \,\, \mathrm{const}(\Gamma , Q_u) \; e^{\Gamma \;t} \; \big \Vert \ell _{0 j} \big \Vert \end{aligned}

and thus for all $$t \in [0,\tau ]$$

\begin{aligned}&\sqrt{\langle \ell _j(t), \; X_j(t) \; \ell _j(t) \rangle } \; \\&\quad \le \; \mathrm{const}(\Gamma , Q_u) \; \big \Vert \ell _{0 j} \big \Vert \; \Big ( 1 + \displaystyle \int _0^t e^{\Gamma \;s} \, \mathrm{d} s \Big ) \;\le \; \mathrm{const}(\Gamma , Q_u) \; \big \Vert \ell _{0 j} \big \Vert \; e^{\Gamma \;t}\,. \end{aligned}

(5.)    In combination with Assumptions 3.3 (iv’), (vi’), ODE (10) of $$X_j(\cdot )$$ has the consequence

\begin{aligned}&\big \Vert X_j'(t) \big \Vert _{\text{ op }}\le \Gamma \,\, \big \Vert X_j(t) \big \Vert _{\text{ op }}\\&\qquad + {\big \Vert X_j(t) \big \Vert _{\text{ op }}\; \big \Vert {{{\mathcal {A}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho } (t) \big )\!^{\top }\big \Vert _{\text{ op }}\, + \, {\widetilde{\pi }}_j(t) \; \big \Vert X_j(t) \big \Vert _{\text{ op }}\,+\, \frac{1}{{\widetilde{\pi }}_j(t)}\; \big \Vert {\widetilde{Q}}_{{{{\mathcal {B}}}}\, \cap } (t)\big \Vert _{\text{ op }}}\\&\quad \le 2 \; \Gamma \; \big \Vert X_j(t) \big \Vert _{\text{ op }}+ C_j \; e^{2\,\Gamma \,t} \; \big \Vert X_j(t) \big \Vert _{\text{ op }}\\&\qquad + \frac{1}{c_j} \; e^{2\,\Gamma \,t} \; \big \Vert {\widetilde{Q}}_{{{{\mathcal {B}}}}\, \cap } (t)\big \Vert _{\text{ op }}\\&\le {\mathrm{const}(\Gamma , Q_u, X_{0 j}) \,\cdot \, e^{2 \, \Gamma \, t} \,\; \big \Vert X_j(t) \big \Vert _{\text{ op }}} + \frac{1}{c_j} \; e^{2\,\Gamma \,t} \; \Gamma ^2 \; \Vert Q_u \Vert _{\text{ op }}\end{aligned}

for a.e. $$t \in [0,\tau ]$$. Gronwall’s inequality provides $$\big \Vert X_j(t) \big \Vert _{\text{ op }}\, \le \, \mathrm{const}(\Gamma , Q_u, X_{0 j}, T) \cdot \big ( \Vert X_{0 j} \Vert _{\text{ op }}+ \Vert Q_u \Vert _{\text{ op }}\big ) \cdot e^{2 \,\Gamma \, t}$$ for all $$t \in [0, \tau ] \subset [0,T]$$. $$\square$$
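The symmetry and positive-definiteness statements (2.),(3.) can be illustrated numerically. The following sketch is not part of the proof: it Euler-integrates the matrix ODE $$Y' = A\,Y + Y A^{\top } + \pi \, Y + \frac{1}{\pi } \, Q$$ with arbitrary example data $$A$$, $$Q$$ and a constant stand-in weight $$\pi$$ and checks that $$Y(t)$$ stays symmetric and positive definite:

```python
import numpy as np

# Minimal numerical sketch (arbitrary example data, not from the paper):
# Euler steps for Y' = A Y + Y A^T + pi * Y + (1/pi) * Q with symmetric
# positive definite Y(0) and Q; Lemma 5.10 (2.),(3.) predict that Y(t)
# remains symmetric and positive definite.
A = np.array([[0.0, 1.0], [-1.0, 0.5]])   # arbitrary coefficient matrix
Q = np.array([[2.0, 0.3], [0.3, 1.0]])    # symmetric positive definite
Y = np.eye(2)                             # Y(0), symmetric positive definite
pi = 0.7                                  # constant stand-in for pi_j(t)
h, steps = 1e-4, 10_000                   # integrate on [0, 1]
for _ in range(steps):
    Y = Y + h * (A @ Y + Y @ A.T + pi * Y + Q / pi)

assert np.allclose(Y, Y.T)                # symmetry is preserved
assert np.linalg.eigvalsh(Y).min() > 0.0  # positive definiteness is preserved
```

Symmetry is even preserved exactly by the Euler scheme, since the right-hand side is a symmetric matrix whenever $$Y$$ is.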

The subsequent Lemma 5.12 proves the existence part of Proposition 3.3 (1.); the uniqueness part then follows in Lemma 5.14.

In preparation for both results, the next lemma establishes a technically essential property of solutions both to the “approximating” system (i.e., with $$\rho > 0$$) and to the “exact” system (i.e., $$\rho = 0$$).

### Lemma 5.11

In addition to the assumptions of Proposition 3.3, fix $$\rho \ge 0$$ arbitrarily and let $$\ell _j:$$ $$[0, T] \longrightarrow {{\mathbb {R}}^n}$$, $$x_j:$$ $$[0, T] \longrightarrow {{\mathbb {R}}^n}$$, $$X_j:$$ $$[0, T] \longrightarrow {{\mathbb {R}}^{n\times n}}$$ $$(j = 1,\,\ldots ,N)$$ be solutions to ODE system (10) with initial values $$\ell _{0 j} \in {{\mathbb {R}}^n}{\setminus } \{ 0 \}$$, $$x_{0j} \in {{\mathbb {R}}^n}$$ and positive definite symmetric $$X_{0 j} \in {{\mathbb {R}}^{n\times n}}$$ given such that $$\bigcap _{k = 1}^N \,{{{\mathcal {E}}}}(x_{0 k},\, X_{0 k})$$ $$\subset$$ $${{\mathbb {R}}^n}$$ has nonempty interior.

Then, there exists a radius $${\widehat{r}} = {\widehat{r}}(\Gamma , T, x_{0 \,\cdot \,}, X_{0 \,\cdot \,}) > 0$$ such that for every $$t \in [0,T]$$, the intersection $${{{\mathcal {E}}}}_\cap (t)$$ $${\mathop {=}\limits ^{\mathrm{\tiny Def.}}}$$ $$\bigcap _{k = 1}^N \, {{{\mathcal {E}}}}\big (x_k(t),\, X_k(t) \big )$$ contains a ball with radius $${\widehat{r}}$$.

### Proof

$${{{\mathcal {E}}}}_\cap (0)$$ $${\mathop {=}\limits ^{\mathrm{\tiny Def.}}}$$ $$\bigcap _{k = 1}^N \,{{{\mathcal {E}}}}(x_{0 k},\, X_{0 k})$$ $$\subset$$ $${{\mathbb {R}}^n}$$ has nonempty interior and so, we can choose $$z_0 \in {{\mathbb {R}}^n}$$ and $$R_0 > 0$$ with $${\overline{{\mathbb {B}}}}_{R_0}(z_0)$$ $$\subset$$ $${{{\mathcal {E}}}}_\cap (0)$$. Set $${\widehat{r}} {:}{=} R_0 \; e^{-\Gamma \,T} > 0$$. Let z :  $$[0,T] \longrightarrow {{\mathbb {R}}^n}$$ denote the unique solution of

\begin{aligned} z' \; = \; {{{\mathcal {A}}}}\big (\cdot , \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho }(\cdot ) \big ) \; z + {{{\mathcal {B}}}}\big (\cdot , \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho }(\cdot ) \big ) \, q_u \quad \text{ in } \,[0,T], \qquad z(0) = z_0 \,. \end{aligned}

Then, $${\overline{{\mathbb {B}}}}_{{\widehat{r}}} \big (z(t) \big ) \subset {{\mathbb {R}}^n}$$ is contained in the reachable set R(t) of the differential inclusion $$x' \in {{{\mathcal {A}}}}\big (\cdot , \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho }(\cdot ) \big ) \; x + {{{\mathcal {B}}}}\big (\cdot , \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho }(\cdot ) \big ) \, U$$ and the initial set $${\overline{{\mathbb {B}}}}_{R_0}( z_0)$$ at each time $$t \in [0,T]$$. Indeed, for each $$y_t \in {\overline{{\mathbb {B}}}}_{{\widehat{r}}} \big (z(t) \big )$$, the variation of constants formula provides the solution y :  $$[0,t] \longrightarrow {{\mathbb {R}}^n}$$ of $$y' = {{{\mathcal {A}}}}\big (\cdot , \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho }(\cdot ) \big ) \; y + {{{\mathcal {B}}}}\big (\cdot , \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho }(\cdot ) \big ) \, q_u$$ with $$y(t) = y_t$$ and, Gronwall’s inequality implies

\begin{aligned} \big \Vert y(0) - z_0 \big \Vert \,\, = \,\, \big \Vert y(0) - z(0) \big \Vert \,\, \le \,\, \big \Vert y(t) - z(t) \big \Vert \cdot e^{\Gamma \; t} \,\, \le \,\, {\widehat{r}} \cdot e^{\Gamma \; t} \,\, = \,\, R_0 \end{aligned}

i.e., $$y_t \in R(t)$$. Due to Assumptions 3.2 (iii) and 3.3 (ii’), the compositions $${{{\mathcal {A}}}}\big (\cdot , \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho }(\cdot ) \big )$$, $${{{\mathcal {B}}}}\big (\cdot , \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho }(\cdot ) \big ):$$ $$[0,T] \longrightarrow {{\mathbb {R}}^{n\times n}}$$ are continuous. Hence, we can conclude from Proposition 3.1 (2.) that R(t) is a subset of each $${{{\mathcal {E}}}}\big ( x_j(t), \, X_j(t) \big )$$ $$(j = 1,\,\ldots ,N)$$, i.e., $${\overline{{\mathbb {B}}}}_{{\widehat{r}}} \big (z(t) \big )$$ $$\subset$$ $${{{\mathcal {E}}}}_\cap (t)$$. $$\square$$
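The quantitative core of this proof is that the time-$$t$$ flow of a linear system maps a ball of radius $$R_0$$ onto a set containing a ball of radius $$R_0 \, e^{-\Gamma \, t}$$, because the inverse flow expands norms by at most $$e^{\Gamma \, t}$$. A minimal numerical sketch with a random example matrix (an illustration only, not the construction of the proof):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))          # arbitrary example coefficient matrix
Gamma = np.linalg.norm(A, 2)         # operator-norm bound on A
T, R0 = 1.0, 1.0
Phi = expm(A * T)                    # fundamental matrix of x' = A x at time T
r_hat = R0 * np.exp(-Gamma * T)      # shrunken radius, as in Lemma 5.11
z0 = rng.normal(size=3)              # center of the initial ball
# every endpoint y_T within r_hat of z(T) = Phi z0 starts inside the R0-ball:
for _ in range(100):
    d = rng.normal(size=3)
    d *= r_hat / np.linalg.norm(d)                 # perturbation of norm r_hat
    y0 = np.linalg.solve(Phi, Phi @ z0 + d)        # y(0) for y(T) = z(T) + d
    assert np.linalg.norm(y0 - z0) <= R0 + 1e-9    # pulls back into B_{R0}(z0)
```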

### Lemma 5.12

Suppose the assumptions of Proposition 3.3.

For all initial $$\ell _{0 j} \in {{\mathbb {R}}^n}{\setminus } \{ 0 \}$$, $$x_{0j} \in {{\mathbb {R}}^n}$$ and positive definite $$X_{0 j} \in {{\mathbb {R}}^{n\times n}}$$ $$(j = 1,\,\ldots ,N)$$ given, there exist solutions $$\ell _j:$$ $$[0, T] \longrightarrow {{\mathbb {R}}^n}$$, $$x_j:$$ $$[0, T] \longrightarrow {{\mathbb {R}}^n}$$, $$X_j:$$ $$[0, T] \longrightarrow {{\mathbb {R}}^{n\times n}}$$ $$(j = 1, \,\ldots ,N)$$ to system (10) with $$\rho = 0$$ satisfying for every $$t \in [0,T]$$

• $${{{\mathcal {E}}}}_\cap (t)$$ $${\mathop {=}\limits ^{\mathrm{\tiny Def.}}}$$ $$\bigcap _{k=1}^N \,{{{\mathcal {E}}}}\big ( x_k(t), \, X_k(t) \big )$$ has nonempty interior.

• $$\ell _j(t) \not = 0 \,$$ and $$X_j(t) \in {{\mathbb {R}}^{n\times n}}$$ is symmetric.

• There exist $${\widetilde{c}}_j, {\widetilde{C}}_j > 0$$ (depending on $$\Gamma$$, $$Q_u$$) and $$c_j, C_j > 0$$ (depending on $$\Gamma$$, $$Q_u$$, $$X_{0 j}$$) such that the a priori bounds of Lemma 5.10 (3.),(4.) hold in [0, T].

• $$X_j(\cdot )$$ is Lipschitz continuous with a Lipschitz constant depending only on $$\Gamma$$, $$Q_u$$, $$X_{0 j}$$, T.

### Proof

Let $$(\rho _\iota )_{\iota \in {\mathbb {N}}}$$ denote any sequence in (0, 1) tending to 0. For each $$\iota \in {\mathbb {N}}$$, Lemma 5.9 provides solutions $$\ell _{j,\iota }$$, $$x_{j,\iota }$$, $$X_{j,\iota }$$ $$(j = 1,\,\ldots ,N)$$ to ODE system (10) with $$\rho = \rho _\iota > 0$$. They exist in the whole interval [0, T] due to Lemma 5.10 (1.),(5.).

The a priori estimates in Lemma 5.10 do not depend on $$\rho > 0$$ and thus, they are uniform w.r.t. $$\iota \in {\mathbb {N}}$$. Hence, the $$3\,N$$ sequences $$( \ell _{j,\iota } )_{\iota \in {\mathbb {N}}}$$, $$( x_{j,\iota } )_{\iota \in {\mathbb {N}}}$$, $$( X_{j,\iota } )_{\iota \in {\mathbb {N}}}$$ are uniformly bounded and equi-continuous. The compactness theorem of Arzelà-Ascoli guarantees a joint subsequence $$\iota _\kappa \nearrow \infty$$ of indices and continuous functions $$\ell _j:$$ $$[0, T] \longrightarrow {{\mathbb {R}}^n}$$, $$x_j:$$ $$[0, T] \longrightarrow {{\mathbb {R}}^n}$$, $$X_j:$$ $$[0, T] \longrightarrow {{\mathbb {R}}^{n\times n}}$$ $$(j = 1,\,\ldots ,N)$$ such that

$$\ell _{j, \,\iota _\kappa } \longrightarrow \ell _j, \quad x_{j, \,\iota _\kappa } \longrightarrow x_j, \quad X_{j, \,\iota _\kappa } \longrightarrow X_j \quad \text{ uniformly } \text{ in } \, [0,T] \quad (\kappa \longrightarrow \infty )$$

(see, e.g., [64, Ch. II, § 7.IV Theorem]). We conclude from Lemma 5.8 (2.)

\begin{aligned} {\mathbbm {d}}\big ( {{{\mathcal {E}}}}\big ( x_{j, \,\iota _\kappa }(t), \, X_{j, \,\iota _\kappa }(t) \big ), \;\; {{{\mathcal {E}}}}\big ( x_j(t), \, X_j(t) \big )\big ) \, \longrightarrow \, 0 \quad (\kappa \longrightarrow \infty ) \end{aligned}
(12)

for each $$j \in \{1,\,\ldots ,N\}$$ and $$t \in [0,T]$$. Lemma 5.11 provides a radius $${\widehat{r}} = {\widehat{r}}(\Gamma , T, x_{0 \,\cdot \,}, X_{0 \,\cdot \,}) > 0$$ and a sequence $$\big ( z_\iota (t) \big )_{\iota \in {\mathbb {N}}}$$ of time-dependent centers in $${{\mathbb {R}}^n}$$ satisfying $${\overline{{\mathbb {B}}}}_{{\widehat{r}}} \big ( z_\iota (t) \big ) \subset \bigcap _{j\,=\,1}^N \, {{{\mathcal {E}}}}\big ( x_{j, \,\iota }(t), \, X_{j, \,\iota }(t) \big )$$ for all $$\iota \in {\mathbb {N}}$$ and $$t \in [0,T]$$. Then, at each time $$t \in [0,T]$$, a subsequence of $$\big ( z_{\iota _\kappa }(t) \big )_{\kappa \in {\mathbb {N}}}$$ converges to some $$z(t) \in {{\mathbb {R}}^n}$$. The convergence in (12) implies $${\overline{{\mathbb {B}}}}_{{\widehat{r}}/2} \big ( z(t) \big ) \subset {{{\mathcal {E}}}}_\cap (t)$$. In particular, $${{{\mathcal {E}}}}_\cap (t)$$ has nonempty interior (as claimed).

Next we conclude from Lemma 5.7 (1.) for every $$t \in [0,T]$$

\begin{aligned}&{\mathbbm {d}}\Big ( \displaystyle \bigcap _{j\,=\,1}^N \, {{{\mathcal {E}}}}\big ( x_{j, \,\iota _\kappa }(t), \, X_{j, \,\iota _\kappa }(t) \big ), \; {{{\mathcal {E}}}}_\cap (t) \Big ) \\&\quad \, \le \, \mathrm{const}({\widehat{r}}) \cdot \displaystyle \sum _{j\,=\,1}^N \, {\mathbbm {d}}\Big ( {{{\mathcal {E}}}}\big ( x_{j, \,\iota _\kappa }(t), X_{j, \,\iota _\kappa }(t) \big ), \; {{{\mathcal {E}}}}\big ( x_j(t), X_j(t) \big ) \Big ) {\mathop {\longrightarrow }\limits ^{\kappa \,\rightarrow \,\infty }} 0. \end{aligned}

Thus, the intersections $${\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho _{(\iota _\kappa )}} (t)$$ used in the coefficients of ODE system (10) for $$\rho = \rho _{(\iota _\kappa )}$$ converge to $${{{\mathcal {E}}}}_\cap (t)$$ for $$\kappa \longrightarrow \infty$$ at each time $$t \in [0,T]$$. Assumption 3.2 (iii) leads to

\begin{aligned} \lim _{\kappa \,\rightarrow \,\infty } \; {{{\mathcal {A}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho _{(\iota _\kappa )}} (t) \big ) \, = \, {{{\mathcal {A}}}}\big (t, \, {{{\mathcal {E}}}}_\cap (t) \big ), \quad \lim _{\kappa \,\rightarrow \,\infty } \; {{{\mathcal {B}}}}\big (t, \, {\widetilde{{{{\mathcal {E}}}}}}_{\cap , \rho _{(\iota _\kappa )}} (t) \big ) \, = \, {{{\mathcal {B}}}}\big (t, \, {{{\mathcal {E}}}}_\cap (t) \big ). \end{aligned}

Carathéodory solutions to ODEs are characterized by an integral identity at each time t, which implies their absolute continuity. Due to the uniform bounds in Lemma 5.10, these criteria for each index $$\iota _\kappa \in {\mathbb {N}}$$ and their limits for $$\kappa \longrightarrow \infty$$ reveal that $$\ell _j(\cdot )$$, $$x_j(\cdot )$$, $$X_j(\cdot )$$ $$(j=1,\,\ldots ,N)$$ are absolutely continuous in [0, T] and solve the ODE system (10) (with $$\rho = 0$$). $$\square$$

### Lemma 5.13

([29, Theorem 6.2]) If the Hermitian matrices A, $$B \in {\mathbb {C}}^{n\times n}$$ are positive definite, then for any unitarily invariant norm, it holds that $$\,\big \Vert A^{\frac{1}{2}} \,-\, B^{\frac{1}{2}} \big \Vert \, \le \, \textstyle \frac{1}{\sqrt{\lambda _{\min }(A)} \, + \, \sqrt{\lambda _{\min }(B)}\,} \; \big \Vert A \,-\, B \big \Vert \,$$ where $$\lambda _{\min }$$ denotes the smallest eigenvalue.
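This perturbation bound for matrix square roots is easy to probe numerically; the following sketch with random positive definite test matrices and the spectral norm is only an illustration of the quoted result, not part of its proof:

```python
import numpy as np

def sqrtm_psd(M):
    """Square root of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

rng = np.random.default_rng(1)
for _ in range(200):
    G = rng.normal(size=(4, 4)); A = G @ G.T + np.eye(4)   # random SPD test matrix
    H = rng.normal(size=(4, 4)); B = H @ H.T + np.eye(4)   # random SPD test matrix
    lhs = np.linalg.norm(sqrtm_psd(A) - sqrtm_psd(B), 2)   # |A^(1/2) - B^(1/2)|
    lam = np.sqrt(np.linalg.eigvalsh(A).min()) + np.sqrt(np.linalg.eigvalsh(B).min())
    assert lhs <= np.linalg.norm(A - B, 2) / lam + 1e-9
```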

### Lemma 5.14

Under the assumptions of Proposition 3.3, let $$\ell _j:$$ $$[0, T] \longrightarrow {{\mathbb {R}}^n}$$, $$x_j:$$ $$[0, T] \longrightarrow {{\mathbb {R}}^n}$$, $$X_j:$$ $$[0, T] \longrightarrow {{\mathbb {R}}^{n\times n}}$$ $$(j = 1, \,\ldots ,N)$$ be solutions to ODE system (7) with initial values $$\ell _{0 j} \in {{\mathbb {R}}^n}{\setminus } \{ 0 \}$$, $$x_{0j} \in {{\mathbb {R}}^n}$$ and positive definite symmetric $$X_{0 j} \in {{\mathbb {R}}^{n\times n}}$$ such that $$\bigcap _{k=1}^N \,{{{\mathcal {E}}}}(x_{0 k},\, X_{0 k})$$ $$\subset$$ $${{\mathbb {R}}^n}$$ has nonempty interior.

Then, there does not exist any other solution to this IVP in [0, T].

### Proof

Every solution tuple to an IVP with ODE system (7) also solves the ODE system (10) with $$\rho = 0$$ and abbreviations (11).

Due to Gronwall’s inequality, it is sufficient to prove that the functions on the right-hand side of ODE system (10) are Lipschitz continuous w.r.t. states in a neighborhood of the solution values. In comparison with the uniqueness result in Proposition 2.2, for example, it is worth mentioning that now the states are in the vector spaces

• $${{\mathbb {R}}^n}$$ for the directions $$\ell _j(t)$$ and the centers $$x_j(t)$$ $$(j = 1, \,\ldots ,N)$$

• $${{\mathbb {R}}^{n\times n}}$$ for the matrices $$X_j(t) \in {{\mathbb {R}}^{n\times n}}$$ of ellipsoids $$(j = 1, \,\ldots ,N)$$,

but not in the metric space $${{{\mathcal {K}}}}({{\mathbb {R}}^n})$$ (for the ellipsoids).

In contrast to Lemma 5.10, we now assume the existence of solutions in [0, T] as well as (implicitly) $$\bigcap _{k=1}^N \,{{{\mathcal {E}}}}\big (x_k(t),\, X_k(t) \big )$$ $$\not =$$ $$\emptyset$$ for all $$t \in [0,T]$$. Hence, the same arguments (as for Lemma 5.10 (3.),(4.)) lead to constants $${\widetilde{c}}_j$$, $${\widetilde{C}}_j$$, $$c_j$$, $$C_j = \mathrm{const}(\Gamma , \, Q_u, \, X_{0 j}) > 0$$ such that $$c_j \; e^{-2\,\Gamma \,t} \, \Vert v \Vert ^2 \, \le \, \langle v, \; X_j(t) \, v \rangle$$ for $$j = 1$$, $$\,\ldots$$, N and all $$t \in [0,T]$$, $$v \in {{\mathbb {R}}^n}$$. In particular, the smallest eigenvalue of $$X_j(t) \in {{\mathbb {R}}^{n\times n}}$$ $$(t \in [0,T]$$, $$j \in \{1, \,\ldots ,N\})$$ is bounded from below by $$c_j \; e^{-2\,\Gamma \,T} > 0$$. Furthermore, $$x_j(\cdot )$$ and $$X_j(\cdot )$$ are Lipschitz continuous with a Lipschitz constant depending only on $$\Gamma$$, $$Q_u$$, $$x_{0 j}$$, $$X_{0 j}$$, T. Hence, there exists an open neighborhood $${{{\mathcal {U}}}}\subset \big ( {{\mathbb {R}}^{n\times n}}$$, $$\Vert \cdot \Vert _{\text{ op }}\big )$$ of the compact set $$\big \{ X_j(t) \, \big | \, t \in [0,T]$$, $$j \in \{1, \,\ldots ,N\} \big \}$$ such that each symmetric matrix in $${{{\mathcal {U}}}}$$ has eigenvalues $$> \min _j \, \frac{c_j}{2} \; e^{-2\,\Gamma \,T} > 0$$. According to Lemmas 5.8 (3.) and 5.13, all symmetric $$Q_1$$, $$Q_2 \in {{{\mathcal {U}}}}$$ are positive definite and satisfy for every $$p_1$$, $$p_2 \in {{\mathbb {R}}^n}$$

\begin{aligned}&{\mathbbm {d}}\big ( {{{\mathcal {E}}}}(p_1, Q_1), \,\, {{{\mathcal {E}}}}(p_2, Q_2) \big ) \le \big \Vert p_1 - p_2 \big \Vert + \big \Vert Q_1^{\frac{1}{2}} - Q_2^{\frac{1}{2}} \big \Vert _{\text{ op }}\\&\quad \le \big \Vert p_1 - p_2 \big \Vert + \mathrm{const}(\Gamma , \min _j \,c_j, T) \cdot \big \Vert Q_1 - Q_2 \big \Vert _{\text{ op }}. \end{aligned}

Lemma 5.11 guarantees some $${\widehat{r}} > 0$$ such that for all $$t \in [0,T]$$, a ball with radius $${\widehat{r}}$$ is contained in $$\bigcap _{j=1}^N \,{{{\mathcal {E}}}}\big (x_j(t),\, X_j(t) \big )$$.

There exists an open neighborhood $${{{\mathcal {V}}}}\subset \big ( {{\mathbb {R}}^n}\times {{\mathbb {R}}^{n\times n}}\big ) \!^N$$ of the compact set $$\big \{ \big (x_k(t), \, X_k(t)\big )_{k=1,\,\ldots ,N}$$ $$\big |$$ $$t \in [0,T] \big \}$$ such that for all $$j \in \{1,\,\ldots ,N\}$$, $$p_j \in {{\mathbb {R}}^n}$$ and symmetric $$Q_j \in {{\mathbb {R}}^{n\times n}}$$ with $$(p_k, Q_k)_{k=1,\,\ldots ,N} \in {{{\mathcal {V}}}}$$,

• $$Q_j \in {{{\mathcal {U}}}}\subset {{\mathbb {R}}^{n\times n}}$$ for each $$j \in \{1,\,\ldots ,N\}$$ and

• $$\bigcap _{j=1}^N \,{{{\mathcal {E}}}}(p_j, Q_j)$$ $$\subset$$ $${{\mathbb {R}}^n}$$ contains a ball of radius $$\frac{{\widehat{r}}}{2}$$.

Lemma 5.7 (1.) and the characterization of $${{{\mathcal {U}}}}$$ imply for all $$(p_k, Q_k)_{k=1,\,\ldots ,N}$$, $$({\widetilde{p}}_k, {\widetilde{Q}}_k)_{k=1,\,\ldots ,N} \in {{{\mathcal {V}}}}$$

\begin{aligned} {\mathbbm {d}}\Big ( \displaystyle \bigcap _{k\,=\,1}^N {{{\mathcal {E}}}}(p_k, Q_k) , \; \bigcap _{k\,=\,1}^N {{{\mathcal {E}}}}({\widetilde{p}}_k, {\widetilde{Q}}_k) \Big )\le & {} \mathrm{const}\cdot \displaystyle \sum _{k\,=\,1}^N {\mathbbm {d}}\big ( {{{\mathcal {E}}}}(p_k, Q_k), \; {{{\mathcal {E}}}}({\widetilde{p}}_k, {\widetilde{Q}}_k) \big )\\\le & {} \mathrm{const}\cdot \displaystyle \sum _{k\,=\,1}^N \Big (\big \Vert p_k - {\widetilde{p}}_k \big \Vert + \big \Vert Q_k - {\widetilde{Q}}_k \big \Vert _{\text{ op }}\Big ). \end{aligned}

Together with Assumptions 3.2 (ii), (iii) and 3.3 (iv’),(vi’), this inequality leads to all the aspects of (local) Lipschitz continuity w.r.t. state required for concluding the claimed uniqueness of solutions to ODE system (10) from Gronwall’s inequality. $$\square$$
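The ellipsoid estimate displayed in this proof can be probed numerically: for convex compacts, the Pompeiu-Hausdorff distance equals the supremum over unit directions $$\ell$$ of the gap of support functions, and $${{{\mathcal {E}}}}(p, Q) = p + Q^{\frac{1}{2}} \, {\overline{{\mathbb {B}}}}_1$$ has the support function $$\ell \mapsto \langle p, \ell \rangle + \Vert Q^{\frac{1}{2}} \ell \Vert$$. A sketch with random example data (an illustration only, not part of the proof):

```python
import numpy as np

def sqrtm_psd(M):
    """Square root of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

rng = np.random.default_rng(3)
G1 = rng.normal(size=(3, 3)); Q1 = G1 @ G1.T + np.eye(3); S1 = sqrtm_psd(Q1)
G2 = rng.normal(size=(3, 3)); Q2 = G2 @ G2.T + np.eye(3); S2 = sqrtm_psd(Q2)
p1, p2 = rng.normal(size=3), rng.normal(size=3)
gap = 0.0                                  # sampled support-function gap
for _ in range(500):
    l = rng.normal(size=3)
    l /= np.linalg.norm(l)                 # random unit direction
    h1 = p1 @ l + np.linalg.norm(S1 @ l)   # support function of E(p1, Q1)
    h2 = p2 @ l + np.linalg.norm(S2 @ l)   # support function of E(p2, Q2)
    gap = max(gap, abs(h1 - h2))
bound = np.linalg.norm(p1 - p2) + np.linalg.norm(S1 - S2, 2)
assert gap <= bound + 1e-9   # d(E1,E2) <= |p1-p2| + |Q1^(1/2)-Q2^(1/2)|_op
```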

Finally, we prove Proposition 3.3 (2.) in the form of Lemma 5.16 below:

### Lemma 5.15

([33, § 2.1 and Lemma 2.2.1 (a)]) Let $${{{\mathcal {E}}}}(p_1, Q_1)$$ and $${{{\mathcal {E}}}}(p_2, Q_2)$$ be ellipsoids in $${{\mathbb {R}}^n}$$. Then, these statements hold:

1. (1.)

for all $$A \in {{\mathbb {R}}^{n\times n}}$$, $$b \in {{\mathbb {R}}^n}$$, $$\; A \,\, {{{\mathcal {E}}}}(p_1, Q_1) + b \; = \; {{{\mathcal {E}}}}\big ( A \, p_1 + b, \,\, A \, Q_1 \, A^{\top }\big )$$,

2. (2.)

$${{{\mathcal {E}}}}(p_1, Q_1) + {{{\mathcal {E}}}}(p_2, Q_2) \subset {{{\mathcal {E}}}}\!\left( p_1 \!+\! p_2, \; (1\!+\!\frac{1}{\eta }) \, Q_1 + (1\!+\!\eta ) \, Q_2 \right)$$ for all $$\eta > 0$$.
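Statement (2.) reduces, via support functions, to Young's inequality $$2\,a b \le \frac{1}{\eta } \, a^2 + \eta \, b^2$$: the ellipsoid $${{{\mathcal {E}}}}(p, Q)$$ has the support function $$\ell \mapsto \langle p, \ell \rangle + \sqrt{\langle \ell , Q \, \ell \rangle }$$, and an inclusion of convex compacts is equivalent to the pointwise inequality of their support functions. A numerical sketch with random example matrices (an illustration, not from [33]):

```python
import numpy as np

# Check E(p1,Q1) + E(p2,Q2) inside E(p1+p2, (1+1/eta) Q1 + (1+eta) Q2) through
# support functions: sqrt(l.Q1.l) + sqrt(l.Q2.l) <= sqrt(l.Q.l) for all l.
rng = np.random.default_rng(2)
G1 = rng.normal(size=(3, 3)); Q1 = G1 @ G1.T + np.eye(3)   # random SPD matrix
G2 = rng.normal(size=(3, 3)); Q2 = G2 @ G2.T + np.eye(3)   # random SPD matrix
for eta in (0.1, 1.0, 7.5):
    Q = (1 + 1 / eta) * Q1 + (1 + eta) * Q2
    for _ in range(300):
        l = rng.normal(size=3)
        lhs = np.sqrt(l @ Q1 @ l) + np.sqrt(l @ Q2 @ l)
        assert lhs <= np.sqrt(l @ Q @ l) + 1e-9
```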

### Lemma 5.16

Under the assumptions of Proposition 3.3 consider any solutions $$\ell _j(\cdot )$$, $$x_j(\cdot )$$, $$X_j(\cdot )$$ $$(j = 1,\,\ldots ,N)$$ to ODE system (7) with the additional features mentioned in Lemma 5.12.

Then, $$E_j:$$ $$[0,T] \leadsto {{\mathbb {R}}^n}$$, $$t \mapsto {{{\mathcal {E}}}}\big ( x_j(t), \, X_j(t) \big )$$ $$(j = 1,\,\ldots ,N)$$ have the following properties:

1. (1.)

$$E_j(\cdot )$$ is Lipschitz continuous (w.r.t. $${\mathbbm {d}})$$.

2. (2.)

For every $$t \in [0,T)$$, $$\; 0 \, = \, \displaystyle \lim _{h\,\downarrow \,0} \,\, {\textstyle \frac{1}{h}} \cdot {\mathbbm {e}}\Big ( \big ( {\mathbbm {1}}+ h \; {{{\mathcal {A}}}}\big (t, {{{\mathcal {E}}}}_\cap (t) \big ) \big ) \; E_j(t) + h \, {{{\mathcal {B}}}}\big (t, {{{\mathcal {E}}}}_\cap (t) \big ) \, U, \;\; E_j(t+h) \Big ).$$

### Proof

(1.)    $$X_j(\cdot )$$ is Lipschitz continuous and there exists $$\gamma _j = \gamma _j(X_{0 j}) > 0$$ such that $$\langle v, \; X_j(t) \, v \rangle$$ $$\ge$$ $$\gamma _j\; e^{-2\,\Gamma \,t} \;\Vert v\Vert ^2$$ holds for every $$v \in {{\mathbb {R}}^n}$$ and $$t \in [0,T]$$. We conclude from Lemma 5.13 for all $$s, t \in [0,T]$$

\begin{aligned}&\big \Vert X_j(s)^{\frac{1}{2}} \,-\, X_j(t)^{\frac{1}{2}} \big \Vert _{\text{ op }}\le \mathrm{const}(\gamma _j) \cdot \big \Vert X_j(s) \,-\, X_j(t) \big \Vert _{\text{ op }}\\&\quad \le \mathrm{const}\big (\Gamma , Q_u, X_{0 j}, T \big ) \cdot |s - t| \,. \end{aligned}

Finally, Lemma 5.8 (3.) and the Lipschitz continuity of $$x_j(\cdot )$$ guarantee that $$E_j:$$ $$[0,T] \leadsto {{\mathbb {R}}^n}$$ is Lipschitz continuous (w.r.t. $${\mathbbm {d}}$$).

(2.) Due to Lemma 5.11, we conclude from Lemma 5.7 (1.) that $${{{\mathcal {E}}}}_\cap :$$ [0, T] $$\leadsto$$ $${{\mathbb {R}}^n}$$ is also Lipschitz continuous. Hence, Assumptions 3.2 (iii) and 3.3 (ii’) guarantee the continuity of $${{{\mathcal {A}}}}\big (\cdot , {{{\mathcal {E}}}}_\cap (\cdot ) \big )$$ and $${{{\mathcal {B}}}}\big (\cdot , {{{\mathcal {E}}}}_\cap (\cdot ) \big ):$$ $$[0,T] \longrightarrow {{\mathbb {R}}^{n\times n}}$$.

Finally, Proposition 3.1 (2.) (a) states the claimed limit for each $$t \in [0,T)$$. $$\square$$

### 5.6 Approximating the Set Evolution Solution with Arbitrary Precision (Proposition 3.6)

The proof of [12, Theorem] provides several useful supplementary details.

### Lemma 5.17

() Let $$M \subset {{\mathbb {R}}^n}$$ be a bounded convex subset with nonempty interior and boundary of class $$C^1$$. Fix $$\varepsilon \in \big (0, \frac{1}{32} \big )$$ arbitrarily. Then, the following statements hold:

1. (1.)

There exist finitely many $$x_1, \ldots , x_N \in \partial M$$ and unit vectors $$\nu _1, \ldots , \nu _N \in {{\mathbb {R}}^n}$$ such that for every $$j \in \{1, \,\ldots , N\}$$, $$\nu _j$$ is normal to M in $$x_j$$ and $$\mathrm{co} \;\big \{ x_1, \,\ldots , x_N \big \} \, \subset \, {\overline{M}} \, \subset \, \mathrm{co} \;\big \{ x_1 + \varepsilon \,\nu _1, \ldots , \, x_N + \varepsilon \,\nu _N\big \} \,.$$ In particular, $$\, {\mathbbm {d}}\big ( {\overline{M}}, \, \mathrm{co} \;\big \{ x_1 + \varepsilon \,\nu _1, \ldots , \, x_N + \varepsilon \,\nu _N\big \} \big ) \,\le \,\varepsilon$$.

2. (2.)

Choose $$x_1, \ldots , x_N \in \partial M$$ and normal unit vectors $$\nu _1, \ldots , \nu _N \in {{\mathbb {R}}^n}$$ such that for each $$j \in \{1, \ldots ,N\}$$,

\begin{aligned} \min _{k\,\not = \,j} \;\; \big \Vert \nu _j - \nu _k \big \Vert \,\, \Big ( 1 + \textstyle \frac{3}{16} \; \frac{\Vert \nu _j \,-\, \nu _k \Vert ^2}{1 \,-\, \Vert \nu _j \,-\, \nu _k \Vert ^2} \Big ) \,\; < \,\; \sqrt{2 \,\varepsilon } \,. \end{aligned}

Then, M is contained in the convex hull of $$\big \{ x_j + \varepsilon \, \nu _j \; \big | \; j \in \{1,\ldots ,N\} \big \}$$.
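For the unit disk in the plane, statement (1.) becomes fully explicit: taking $$N$$ equally spaced boundary points $$x_j$$ with outward normals $$\nu _j = x_j$$, the outer polygon $$\mathrm{co} \,\{ x_j + \varepsilon \, \nu _j \}$$ has vertices at radius $$1 + \varepsilon$$ and inradius $$(1 + \varepsilon ) \cos (\pi / N)$$, so it contains the disk as soon as $$(1 + \varepsilon ) \cos (\pi / N) \ge 1$$. A short check of this elementary computation (an illustration, not part of the lemma's proof):

```python
import numpy as np

eps = 0.01
# smallest N with (1+eps) * cos(pi/N) >= 1, i.e. pi/N <= arccos(1/(1+eps)):
N = int(np.ceil(np.pi / np.arccos(1.0 / (1.0 + eps))))
inradius = (1 + eps) * np.cos(np.pi / N)   # inradius of the outer polygon
assert inradius >= 1.0                     # unit disk inside co{x_j + eps*nu_j}
```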

The following lemma results from the observation that every compact convex set in $${{\mathbb {R}}^n}$$ is of arbitrary positive reach (in the sense of Federer) (e.g., [21, Ch. 6, § 6 & Theorem 8.1 and Ch. 7, Theorem 7.1]).

### Lemma 5.18

Let $$M \subset {{\mathbb {R}}^n}$$ be nonempty, compact and convex. For each $$\varepsilon > 0$$, the boundary of $${\overline{{\mathbb {B}}}}_\varepsilon (M) {\mathop {=}\limits ^{\mathrm{\tiny Def.}}} \big \{ x \in {{\mathbb {R}}^n}\, \big | \, \text{ dist }(x, M) \le \varepsilon \big \}$$ is of class $$C^{1,1}$$.

### Proof of Proposition 3.6

Fix any $$\varepsilon \in \big ( 0, \frac{1}{32} \big )$$ and set $${\widetilde{r}} {:}{=} \max \big \{1, \, \big ( {\mathbbm {d}}( \{0\}, K_0) + e^{\Gamma \, T} \big ) \cdot e^{\Gamma \, T}$$, $${\mathbbm {d}}( \{0\}, U) \big \}$$ $$\ge$$ 1, $$\; {\widehat{\varepsilon }} {:}{=} \frac{\varepsilon }{2} \cdot e^{- 4 \; {\widetilde{r}} \; \Lambda \; T \,\cdot \, \exp ((2\,{\widetilde{r}}\,\Lambda + \Gamma ) \,T)}$$ $$\in$$ $$\big (0, \frac{\varepsilon }{2} \big )$$, $$\; {\widetilde{\varepsilon }} {:}{=} \frac{3}{4} \; e^{-\,\Gamma \,T} \cdot {\widehat{\varepsilon }} \in \big (0, \frac{3}{8} \,\varepsilon \big )$$.

Step 1: Choosing $$N \in {\mathbb {N}}$$ and the initial unit vectors $$\ell _{0 j} \in {{\mathbb {R}}^n}$$ $$(j = 1,\,\ldots , N)$$.

Due to Lemma 5.18, $${\overline{{\mathbb {B}}}}_{{\widetilde{\varepsilon }}/4}(K_0) \subset {{\mathbb {R}}^n}$$ has the boundary of class $$C^{1,1}$$ and so, it satisfies the assumptions of Lemma 5.17 in particular. Hence, there exist finitely many points $$\zeta _1$$, $$\,\ldots \,$$, $$\zeta _N$$ $$\in$$ $$\partial {\overline{{\mathbb {B}}}}_{{\widetilde{\varepsilon }}/4}(K_0)$$ and unit vectors $$\ell _{0 1}$$, $$\,\ldots \,$$, $$\ell _{0 N} \in {{\mathbb {R}}^n}$$ such that

• for every $$j \in \{1, \,\ldots , N\}$$, $$\ell _{0 j}$$ is normal to $${\overline{{\mathbb {B}}}}_{{\widetilde{\varepsilon }}/4}(K_0)$$ in $$\zeta _j$$,

• $$\mathrm{co} \;\big \{ \zeta _1, \,\ldots , \zeta _N \big \} \,\, \subset \,\, {\overline{{\mathbb {B}}}}_{{\widetilde{\varepsilon }}/4}(K_0) \,\, \subset \,\, \mathrm{co} \;\big \{ \textstyle \zeta _1 + \frac{{\widetilde{\varepsilon }}}{4} \,\ell _{0 1}, \,\ldots \,, \, \zeta _N + \frac{{\widetilde{\varepsilon }}}{4} \,\ell _{0 N}\big \} \,,$$

• for every $$j \in \{1, \,\ldots , N\}$$, $$\; \text{ dist }\big ( \ell _{0 j}, \, \{ \ell _{0 k} \, | \, k \not = j \} \big ) < {\widetilde{\varepsilon }} \;$$ and $$\; \text{ dist }\big ( \zeta _j, \,\; \{ \zeta _k \, | \, k \not = j \} \big ) < \frac{{\widetilde{\varepsilon }}}{4}$$.

For each $$j \in \{1, \,\ldots \, , N\}$$, $$\xi _{0 j} {:}{=} \zeta _j - \frac{{\widetilde{\varepsilon }}}{4} \,\ell _{0 j}$$ belongs to $$\partial K_0$$ and, $$\ell _{0 j}$$ is normal to $$K_0$$ in $$\xi _{0 j}$$ as a consequence of [21, Ch. 6, Theorem 6.2 (iii)]. Moreover, the triangle inequality leads to $$\; \text{ dist }\big ( \xi _{0 j}, \,\; \{ \xi _{0 k} \, | \, k \not = j \} \big ) < {\widetilde{\varepsilon }}$$.

Step 2: Constructing the ellipsoid-valued maps $$E_j(\cdot )$$ and the auxiliary tube $$R(\cdot )$$.

K :  $$[0,T] \leadsto {{\mathbb {R}}^n}$$ denotes the solution to IVP (6).

Proposition 3.3 provides unique solutions $$\ell _j$$, $$x_j:$$ $$[0,T] \longrightarrow {{\mathbb {R}}^n}$$, $$X_j:$$ $$[0,T] \longrightarrow {{\mathbb {R}}^{n\times n}}$$ $$(j=1, \,\ldots ,N)$$ to ODE system (7) with initial values $$\ell _j(0) = \ell _{0 j}$$, $$x_j(0) = x_{0 j}$$, $$X_j(0) = X_{0 j}$$ (and abbreviations (8)). According to Corollary 3.4, the intersection map $${{{\mathcal {E}}}}_\cap {:}{=} \bigcap _{j=1}^N \, {{{\mathcal {E}}}}\big (x_j, X_j):$$ $$[0,T] \leadsto {{\mathbb {R}}^n}$$ satisfies $$K(t) \subset {{{\mathcal {E}}}}_\cap (t)$$ for every $$t \in [0,T]$$.

Define $$A {:}{=} {{{\mathcal {A}}}}(\cdot , {{{\mathcal {E}}}}_\cap )$$, $$B {:}{=} {{{\mathcal {B}}}}(\cdot , {{{\mathcal {E}}}}_\cap ):$$ $$[0,T] \longrightarrow {{\mathbb {R}}^{n\times n}}$$ and consider the differential inclusion $$x' \in A(\cdot ) \,x + B(\cdot ) \,U$$ in [0, T]. Let $$R(t) \subset {{\mathbb {R}}^n}$$ denote the reachable set of the initial set $$K_0$$ under this inclusion at each time $$t \in [0,T]$$.

Proposition 3.1 (2.) guarantees for every $$j \in \{1,\,\ldots ,N\}$$ and $$t \in [0,T]$$

• $$R(t) \subset E_j(t)$$.

• $$E_j(t)$$ is minimal in the class of ellipsoids w.r.t. set inclusion, i.e., there is no ellipsoid $${\widetilde{E}}$$ $$\subset$$ $${{\mathbb {R}}^n}$$ with $$R(t) \subset {\widetilde{E}} \subsetneqq E_j(t)$$.

• $$\ell _j(t)$$ is normal to R(t) and $$E_j(t)$$ in $$\xi _j(t) : = x_j(t) + { \langle \ell _j(t), \, X_j(t) \, \ell _j(t) \rangle ^{- \,\frac{1}{2}}} \, X_j(t) \ell _j(t)$$ $$\in$$ $$\partial R(t)$$ $$\cap$$ $$\partial E_j(t)$$.

Step 3: An upper bound of the Pompeiu-Hausdorff distance between R(t) and $${{{\mathcal {E}}}}_\cap (t)$$.

For each $$j \in \{1, \,\ldots \, , N\}$$, the choice of $$\ell _j(0) = \ell _{0 j}$$ in Step 1 implies $$\xi _j(0) = \xi _{0 j}$$ $$\in$$ $$\partial K_0$$ $${\mathop {=}\limits ^{\mathrm{\tiny Def.}}}$$ $$\partial {{{\mathcal {E}}}}(x_0, X_0)$$.

Moreover, there exists $$k \in \{1, \,\ldots \, , N\} {\setminus } \{j\}$$ satisfying $$\big \Vert \ell _j(t) - \ell _k(t) \big \Vert < \frac{3}{4} \,{\widehat{\varepsilon }}$$ for all $$t \in [0,T]$$. Indeed, Step 1 provides an index $$k \not = j$$ with $$\big \Vert \ell _{0 j} - \ell _{0 k} \big \Vert < {\widetilde{\varepsilon }}$$. Hence, we conclude from ODE system (7) and Assumption 3.3 (iv’) for each $$t \in [0,T]$$

\begin{aligned} \big \Vert \ell _j(t) - \ell _k(t) \big \Vert\le & {} \big \Vert \ell _{0 j} - \ell _{0 k} \big \Vert + \int _0^t \big \Vert - {{{\mathcal {A}}}}\big (s, \, {{{\mathcal {E}}}}_\cap (s) \big )^{\top }\, \ell _j(s) \,+\, {{{\mathcal {A}}}}\big (s, \, {{{\mathcal {E}}}}_\cap (s) \big )^{\top }\, \ell _k(s) \big \Vert \; \mathrm{d} s\\\le & {} \big \Vert \ell _{0 j} - \ell _{0 k} \big \Vert + \int _0^t \Gamma \,\, \big \Vert \ell _j(s) - \ell _k(s) \big \Vert \,\; \mathrm{d} s. \end{aligned}

Gronwall’s inequality leads to $$\big \Vert \ell _j(t) - \ell _k(t) \big \Vert \le \big \Vert \ell _{0 j} - \ell _{0 k} \big \Vert \; e^{\Gamma \, T} < \frac{3}{4} \,{\widehat{\varepsilon }}$$. In particular, we have

\begin{aligned} \big \Vert \ell _j(t) - \ell _k(t) \big \Vert \, \Big ( 1 + \textstyle \frac{3}{16} \; \frac{\Vert \ell _j(t) \,-\, \ell _k(t) \Vert ^2}{1 \,-\, \Vert \ell _j(t) \,-\, \ell _k(t) \Vert ^2} \Big ) \,\, \le \,\, \frac{4}{3} \, \big \Vert \ell _j(t) - \ell _k(t) \big \Vert \,\,< \; {\widehat{\varepsilon }} \; < \, \sqrt{2 \,\frac{3 \,{\widehat{\varepsilon }}}{4}}. \end{aligned}
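The Gronwall estimate above can be checked numerically for a concrete adjoint equation $$\ell ' = - A(t)^{\top } \ell$$. A sketch with hypothetical data, where `Gamma` is chosen with $$\Gamma \ge \sup _t \Vert A(t) \Vert$$ so that the bound $$\Vert \ell _j(t) - \ell _k(t) \Vert \le \Vert \ell _{0 j} - \ell _{0 k} \Vert \, e^{\Gamma \, t}$$ applies:

```python
import numpy as np

def adjoint_flow(A, ell0, T, steps=1000):
    """Explicit Euler integration of the adjoint equation ell' = -A(t)^T ell."""
    dt = T / steps
    ell = np.array(ell0, dtype=float)
    for k in range(steps):
        ell = ell - dt * (A(k * dt).T @ ell)
    return ell

A = lambda t: np.array([[0.0, 1.0], [-2.0, 0.0]])
Gamma = 2.0            # operator-norm bound for A(t) on [0, T]
T = 1.0
l0j, l0k = np.array([1.0, 0.0]), np.array([0.99, 0.05])
gap_T = np.linalg.norm(adjoint_flow(A, l0j, T) - adjoint_flow(A, l0k, T))
bound = np.linalg.norm(l0j - l0k) * np.exp(Gamma * T)
print(gap_T <= bound)  # True
```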

For each $$t \in [0,T]$$, the boundary of $${\overline{{\mathbb {B}}}}_{{\widehat{\varepsilon }}/4} \big (R(t) \big )$$ is of class $$C^{1,1}$$ due to Lemma 5.18 and so, we can apply Lemma 5.17 (2.) to $$\xi _j(t) + \frac{{\widehat{\varepsilon }}}{4} \, \ell _j(t) \in \partial \,{\overline{{\mathbb {B}}}}_{{\widehat{\varepsilon }}/4} \big (R(t) \big )$$, the normals $$\ell _j(t)$$ $$(j = 1, \,\ldots \,, N)$$ and $$\frac{3}{4} \,{\widehat{\varepsilon }} \in \big (0, \frac{1}{32} \big )$$:

$$\displaystyle \bigcup _{j=1}^N \, \big \{ \xi _j(t) \big \} \,\, \subset \,\, R(t) \,\, \subset \,\, {\overline{{\mathbb {B}}}}_{{\widehat{\varepsilon }}/4} \big (R(t) \big ) \,\, \subset \,\, \mathrm{co} \;\bigcup _{j=1}^N \, \big \{ \xi _j(t) + {\widehat{\varepsilon }} \, \ell _j(t) \big \}.$$

These arguments (with the same points and normals) also hold for $${{{\mathcal {E}}}}_\cap (t)$$, i.e.,

$$\displaystyle \bigcup _{j=1}^N \, \big \{ \xi _j(t) \big \} \,\, \subset \,\, {{{\mathcal {E}}}}_\cap (t) \,\, \subset \,\, {\overline{{\mathbb {B}}}}_{{\widehat{\varepsilon }}/4} \big ({{{\mathcal {E}}}}_\cap (t) \big ) \,\, \subset \,\, \mathrm{co} \;\bigcup _{j=1}^N \, \big \{ \xi _j(t) + {\widehat{\varepsilon }} \, \ell _j(t) \big \}.$$

Hence, we obtain $$\, {\mathbbm {d}}\big ( R(t), \; {{{\mathcal {E}}}}_\cap (t) \big ) \le {\widehat{\varepsilon }} \,$$ for every $$t \in [0,T]$$.
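The Pompeiu-Hausdorff bound can be illustrated on finite samples: for point clouds the distance $${\mathbbm {d}}$$ reduces to a max-min computation. A sketch with hypothetical data (for convex sets one would instead compare sampled support functions; this is only the finite-sample proxy):

```python
import numpy as np

def hausdorff(P, Q):
    """Pompeiu-Hausdorff distance between finite point sets P, Q (rows = points)."""
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise distances
    return max(D.min(axis=1).max(),   # sup over P of dist to Q
               D.min(axis=0).max())   # sup over Q of dist to P

P = np.array([[0.0, 0.0], [1.0, 0.0]])
Q = np.array([[0.0, 0.1], [1.0, 0.0], [2.0, 0.0]])
print(hausdorff(P, Q))  # 1.0  (the point (2, 0) is 1 away from P)
```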

Step 4: An upper bound of the Pompeiu-Hausdorff distance between $$K(t)$$ and $$R(t)$$.

The tube $$R: [0,T] \leadsto {{\mathbb {R}}^n}$$ of reachable sets is a solution of $$\mathring{R}(t) = {{{\mathcal {A}}}}\big ( t, {{{\mathcal {E}}}}_\cap (t) \big ) \, x + {{{\mathcal {B}}}}\big ( t, {{{\mathcal {E}}}}_\cap (t) \big ) \, U$$ with $$R(0) = K_0$$ (in the sense of Definition 2.3). Lemma 5.2 (1.) guarantees $$K(t) \cup R(t) \subset {\overline{{\mathbb {B}}}}_{{\widetilde{r}}}$$ for all $$t \in [0,T]$$. Hence, we conclude from Proposition 2.5 and the Assumptions 3.2 (iii), 3.3 (iv’) for every $$t \in [0,T]$$

\begin{aligned} {\mathbbm {d}}\big ( K(t), \,R(t) \big )\le & {} \displaystyle \int _0^t \,\, \displaystyle \sup _{\begin{array}{c} {x \,\in \,{\overline{{\mathbb {B}}}}_{{\widetilde{r}}}} \\ {u \,\in \,U} \end{array}} \; \big \Vert \, {{{\mathcal {A}}}}\big (s, \, K(s) \big )\, x + {{{\mathcal {B}}}}\big (s, \, K(s) \big )\, u\\&\quad - {{{\mathcal {A}}}}\big (s, \,{{{\mathcal {E}}}}_\cap (s) \big )\, x - {{{\mathcal {B}}}}\big (s, \,{{{\mathcal {E}}}}_\cap (s) \big )\, u \big \Vert \; e^{(2 \, {\widetilde{r}} \, \Lambda +\Gamma ) \, (t-s)} \,\; \mathrm{d} s \\\le & {} {2 \; {\widetilde{r}} \; e^{(2\, {\widetilde{r}} \, \Lambda + \Gamma ) \,T} \; \Lambda \cdot \displaystyle \int _0^t {\mathbbm {d}}\big ( K(s), \, {{{\mathcal {E}}}}_\cap (s) \big ) \,\;\mathrm{d} s.} \end{aligned}

Step 3 and the triangle inequality lead to $$\; {\mathbbm {d}}\big ( K(t), \,{{{\mathcal {E}}}}_\cap (t) \big ) \le 2 \; {\widetilde{r}} \; e^{(2\, {\widetilde{r}} \, \Lambda + \Gamma ) \,T} \; \Lambda \cdot \displaystyle \int _0^t {\mathbbm {d}}\big ( K(s), \, {{{\mathcal {E}}}}_\cap (s) \big ) \,\;\mathrm{d} s \, + \, {\widehat{\varepsilon }}$$.

Finally, Gronwall’s inequality states for every $$t \in [0,T]$$ (due to $${\widetilde{r}} \ge 1$$)

\begin{aligned}&{\mathbbm {d}}\big ( K(t), \,{{{\mathcal {E}}}}_\cap (t) \big ) \\&\quad \le \displaystyle {\widehat{\varepsilon }} \,\cdot \, \Big ( 1 + 2 \; {\widetilde{r}} \; e^{(2\, {\widetilde{r}} \, \Lambda + \Gamma ) \,T} \; \Lambda \cdot \int _0^t \exp \big (2 \; {\widetilde{r}} \; e^{(2\,{\widetilde{r}}\,\Lambda + \Gamma ) \,T} \; \Lambda \,\, (t-s) \big ) \,\; \mathrm{d} s \Big )\\&\quad \le {\widehat{\varepsilon }} \,\cdot \, e^{4 \; {\widetilde{r}} \; \Lambda \; T \cdot e^{(2\,{\widetilde{r}}\,\Lambda + \Gamma ) \,T}} \quad < \,\, \varepsilon \,. \end{aligned}

$$\square$$
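For orientation, the constant $$e^{4 \, {\widetilde{r}} \, \Lambda \, T \cdot e^{(2\,{\widetilde{r}}\,\Lambda + \Gamma ) \,T}}$$ in the final estimate is fully explicit, so a target tolerance $$\varepsilon$$ translates directly into an admissible $${\widehat{\varepsilon }}$$. A sketch with hypothetical parameter values (the factor $$\frac{1}{2}$$ merely enforces the strict inequality):

```python
import math

def eps_hat_for(eps, r_tilde, Lambda, Gamma, T):
    """Choose eps_hat with eps_hat * exp(4 r~ Lambda T e^{(2 r~ Lambda + Gamma) T}) < eps,
    mirroring the final Gronwall estimate of the proof."""
    C = math.exp(4.0 * r_tilde * Lambda * T
                 * math.exp((2.0 * r_tilde * Lambda + Gamma) * T))
    return eps / (2.0 * C)

eps_hat = eps_hat_for(eps=1e-2, r_tilde=1.0, Lambda=0.1, Gamma=0.1, T=1.0)
print(eps_hat < 1e-2)  # True
```

Note the double exponential in $$T$$: the admissible $${\widehat{\varepsilon }}$$ shrinks very quickly as the time horizon grows.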

## 6 Conclusions

Ellipsoids represent a well-established approach to approximation in control theory and related optimization problems (like the OFC by Kurzhanski and Varaiya in [35, 36, Ch. 10]). Now they are used for external approximations of convex-valued solutions to so-called set evolution equations. In short, we focus on reachable sets of linear time-variant control systems whose coefficient matrices depend on their own reachable set. (This very general form of feedback can be formulated equivalently in terms of integral funnel equations, Panasyuk’s quasidifferential equations and Aubin’s morphological equations; see Propositions 2.1 and 2.7.) In regard to future applications, the main result consists of sufficient conditions on the coefficients such that the convex solution values can be approximated by intersections of finitely many ellipsoids with arbitrary precision. Their respective centers and matrices are characterized by a nonlinear ODE system (Sect. 3) and so, they can be computed rather quickly.