Abstract
The matter of approximating the solutions of a differential problem driven by a rough measure by solutions of similar problems driven by “smoother” measures is considered under very general assumptions on the multifunction on the right-hand side. The key tool in our investigation is the notion of uniformly bounded \(\varepsilon \)-variations, which mixes the supremum norm with the uniformly bounded variation condition. Several examples to motivate the generality of our outcomes are included.
1 Introduction
When studying the evolution of many real-life processes, one notices that the measured quantities often have discontinuities. For instance, such features appear whenever discrete perturbations occur during the otherwise continuous progress of the phenomenon.
Properties of the solutions of such dynamical systems are difficult to obtain, especially when there are infinitely many discrete perturbations (i.e. impulses) and the impulse moments accumulate in the time interval under observation. Such a situation is described in the theory of hybrid systems as Zeno behaviour (see [22, 29]), and it is usually avoided in works on classical impulsive differential equations [10, 29].
A convenient tool for treating this matter is offered by the theory of measure differential equations, also known as differential equations driven by measures [6, 13]. For particular measures (absolutely continuous, discrete, or a sum of an absolutely continuous measure and a discrete one), this theory sheds new light on the theories of ordinary differential equations, difference equations and impulsive equations, respectively. Dynamic equations on time scales can also be seen as measure differential equations [13].
We will be interested in the set-valued version (for which motivations can be found in [1]), namely in studying measure differential inclusions of the form
$$\begin{aligned} \mathrm{d}x(t)\in F(t,x(t))\,\mathrm{d}\mu (t),\qquad x(0)=x_0, \end{aligned}$$
(1)
where \(\mu \) is the Stieltjes measure associated with a left-continuous non-decreasing function, \(F:[0,1] \times \mathbb {R}^d \rightarrow \mathcal {P}_{kc}(\mathbb {R}^d)\) is a multifunction (i.e. a function whose values are compact convex subsets of the d-dimensional Euclidean space) and \(x_0 \in \mathbb {R}^d\).
The existence of solutions of bounded variation was obtained under Carathéodory-type hypotheses (e.g. in [8, 12]), but additional conditions are necessary in order to obtain further properties of the solution set. For instance, the possibility of approximating the solution set by that of a similar problem driven by a “smoother” measure (in other words, continuous dependence on the measure driving the problem) is not available unless more hypotheses are imposed. Even the single-valued case is not simple (see [13] in the nonlinear setting and [16, 23] in the linear setting).
In order to achieve this property, in [30] an alternative notion of solution was considered. In [9], a type of convergence for measures adequate to the set-valued case was used instead. Finally, in [25] there are two such continuous dependence results: one assuming a uniformly bounded variation condition on the multifunction w.r.t. the Hausdorff distance and the other one imposing an equiregularity condition (in the sense given in [14]). Obviously, in all these papers the matter of existence of solutions is first treated.
We propose in the present work an approach based on the notion of uniformly bounded \(\varepsilon \)-variation, introduced in [14] and used there to get a very general Helly-type selection principle for regulated functions.
This is an ingenious concept mixing the supremum norm with the bounded variation property, a very natural combination if we bear in mind that the closure of the subspace of BV functions in the \(\sup \)-norm topology is the whole space of regulated functions. It has recently found interesting applications in the study of hysteresis phenomena (see [19, 20] or [5]).
Thus, by imposing a uniformly bounded \(\varepsilon \)-variation condition on the multifunction on the right-hand side of the inclusion (1), we are able to prove the existence of solutions with bounded variation. Moreover, for this set of solutions we obtain continuous dependence on the measure driving the inclusion, i.e. the solution set of the original problem (driven by a possibly very rough measure) can be approximated by the solution sets of approximating differential inclusions driven by “smoother” measures. A first result of this kind imposes the two-norm convergence (which again mixes the supremum norm with the bounded variation assumption) of the distribution functions associated with the approximating measures and uses an appropriate convergence result for Kurzweil–Stieltjes integrals, borrowed from [5]. Another continuous dependence result imposes the strong convergence of the sequence of measures driving the approximating problems.
Along the way, we give examples to motivate our assumptions and, at the same time, to point out the generality of our outcomes.
2 Notions and preliminary facts
When \(u : [0,1] \rightarrow \mathbb {R}^d\) is a function with values in the d-dimensional Euclidean space, the total variation of u will be denoted by \(\mathrm{var}(u)\), and if it is finite, then u will be said to have bounded variation (or to be a BV function). For a real-valued BV function u, by \(\mathrm{d}u\) we denote the corresponding Stieltjes measure. It is defined for half-open subintervals of [0, 1] by
$$\begin{aligned} \mathrm{d}u\left( [a,b)\right) =u(b)-u(a),\quad 0\le a\le b\le 1, \end{aligned}$$
and it is then extended to all Borel subsets of the unit interval in the standard way. We shall consider only positive Borel measures, i.e. Stieltjes measures whose distribution function u is left-continuous and non-decreasing.
Let us also recall that a function \(u : [0,1] \rightarrow \mathbb {R}^d\) is said to be regulated if the one-sided limits \(u(t^+)\) and \(u(s^-)\) exist for all points \(t \in [0,1)\) and \(s \in (0,1]\). The set of discontinuity points of a regulated function is known to be at most countable [17], and BV or continuous functions are obviously regulated. Such functions are also bounded, and the space \(G([0,1],{\mathbb R}^d)\) of regulated functions \(u:[0,1]\rightarrow \mathbb {R}^d\) is a Banach space when endowed with the norm \(\Vert u\Vert _C={\sup _{t\in [0,1]}}\Vert u(t)\Vert \).
Notice that these notions are also available for functions with values in a general Banach space.
A useful characterization of regulated functions was given in [14]:
Proposition 1
([14, Theorem 2.14]) A function \(x:[0,1]\rightarrow \mathbb {R}^d\) is regulated if and only if there exist an increasing continuous function \(\eta :[0,\infty )\rightarrow [0,\infty )\), \(\eta (0)=0\), and an increasing function \(v:[0,1]\rightarrow [0,1]\), \(v(0)=0\), \(v(1)=1\), such that for every \(0\le t_1< t_2\le 1\),
$$\begin{aligned} \Vert x(t_2)-x(t_1)\Vert \le \eta \left( v(t_2)-v(t_1)\right) . \end{aligned}$$
The proof of [14, Theorem 2.14] can be repeated in the case of a Banach space, and so, this characterization is also available for Banach space-valued regulated functions.
Our existence and continuous dependence results rely mainly on the following notion, introduced in [14] in order to get a Helly-type selection principle for regulated functions.
Definition 1
- (i)
For a function \(x:[0,1]\rightarrow \mathbb {R}^d\) and an arbitrary \(\varepsilon >0\), denote by
$$\begin{aligned} \varepsilon -\mathrm{var}\; x=\inf \{ \mathrm{var}(z); z:[0,1]\rightarrow \mathbb {R}^d\; \mathrm{is}\; \mathrm{BV}, \Vert x-z\Vert _C\le \varepsilon \} \end{aligned}$$and call it “the \(\varepsilon \)-variation of x”. We understand that \(\inf \emptyset =\infty \).
- (ii)
A family \(\mathcal {A}\subset G([0,1],{\mathbb R}^d)\) is said to have uniformly bounded \(\varepsilon \)-variations if for every \(\varepsilon >0\) there exists \(L^{\varepsilon }>0\) such that
$$\begin{aligned} \varepsilon -\mathrm{var} \;x\le L^{\varepsilon },\quad \mathrm{for\; every}\; x\in \mathcal {A}. \end{aligned}$$
It is stated in [14, Proposition 3.4] that a function \(x:[0,1]\rightarrow \mathbb {R}^d\) is regulated if and only if \(\varepsilon -\mathrm{var}\; x<\infty \) for every \(\varepsilon >0\).
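To make the \(\varepsilon \)-variation concrete numerically, here is a small sketch of our own (all names are illustrative, not from the paper): a greedy construction in the spirit of the play operator from the hysteresis literature produces a BV function z with \(\Vert x-z\Vert _C\le \varepsilon \), so \(\mathrm{var}(z)\) is an upper bound for the \(\varepsilon \)-variation of the sampled path, and it is finite for every \(\varepsilon >0\) even when the sampled function is far from BV.

```python
import numpy as np

def eps_var_upper_bound(x, eps):
    """Greedy play-type construction: build z with sup|x - z| <= eps that
    moves only when forced; var(z) then upper-bounds the eps-variation
    of the sampled path x (an illustration, not an exact computation)."""
    z = np.empty_like(x)
    z[0] = x[0]
    for k in range(1, len(x)):
        # move z[k-1] by the minimal amount so that |x[k] - z[k]| <= eps
        z[k] = min(max(z[k - 1], x[k] - eps), x[k] + eps)
    return float(np.abs(np.diff(z)).sum())

# a continuous path that oscillates wildly near 0: t * sin(1/t)
t = np.linspace(1e-4, 1.0, 50_000)
x = t * np.sin(1.0 / t)
for eps in (0.2, 0.05, 0.01):
    print(eps, eps_var_upper_bound(x, eps))   # finite for every eps > 0
```

Note that for \(\varepsilon = 0\) the construction returns the full variation sum of the sampled path, consistently with \(\varepsilon \)-variation interpolating between variation and the sup-norm.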
The analogue for regulated functions of the Helly selection principle (originally proved for BV functions) is [14, Theorem 3.8]:
Theorem 1
Let \((x_n)_n \subset G([0,1],{\mathbb R}^d)\) be a sequence with uniformly bounded \(\varepsilon \)-variations such that \((x_n(0))_n\) is bounded. Then, it has a subsequence which converges pointwise on [0, 1] to a regulated function.
Remark 1
Obviously, any sequence of BV functions with uniformly bounded variation has uniformly bounded \(\varepsilon \)-variations. However, this notion is a significant generalization of uniformly bounded variation: e.g. any uniformly convergent sequence of regulated functions has uniformly bounded \(\varepsilon \)-variations, by Krejci and Laurencot [20, Proposition 5.6].
To further illustrate how general this notion is, we recall the following:
Proposition 2
([5, Theorem 2.2]) Let \(\mathcal {A} \subset G([0,1],{\mathbb R}^d)\) be given. Then, the following conditions are equivalent:
- (i)
\(\mathcal {A} \) has uniformly bounded \(\varepsilon \)-variations.
- (ii)
There exists a non-decreasing function \(\phi :\mathbb {R}^+ \rightarrow \mathbb {R}^+\) such that \(\phi (+ \infty ) = + \infty \), and \(\mathcal {A} \) has uniformly bounded \(\phi \)-variations.
Recall ([5, Definition 2.1]) that a set of functions \(\mathcal {A}\) is said to have uniformly bounded \(\phi \)-variations if there exists \(K>0\) such that for every partition \(0=t_0<\cdots <t_m=1\) and every \(x\in \mathcal {A}\) we have:
$$\begin{aligned} \sum _{k=1}^{m}\phi \left( \Vert x(t_k)-x(t_{k-1})\Vert \right) \le K. \end{aligned}$$
In particular, when \(\phi (t)=c\cdot t^2\) (c being a positive constant), we get the functions of bounded 2-variation.
Example 1
Let us consider the space \(\mathrm{BV}_2([0,1],{\mathbb R}^d)\) of functions of bounded 2-variation. It is known that \(\mathrm{BV}([0,1],{\mathbb R}^d) \subset \mathrm{BV}_2([0,1],{\mathbb R}^d)\) and that the inclusion is strict [15].
Let us choose a sequence \((x_n)_n \subset \mathrm{BV}_2([0,1],{\mathbb R}^d){\setminus } \mathrm{BV}([0,1],{\mathbb R}^d)\) such that:
- (1)
\((x_n)_n\) has uniformly bounded 2-variation;
- (2)
\((x_n)_n\) is not uniformly convergent.
Therefore, according to Proposition 2, the sequence \((x_n)_n\) has uniformly bounded \(\varepsilon \)-variations, but it is not uniformly convergent and does not have uniformly bounded variation.
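A numerical illustration of this setting (our own, not from the paper), built on the classical function \(t\sin (1/t)\), which is continuous (hence regulated), of bounded 2-variation, but not BV: on finer grids the first-order variation sums keep growing, while the quadratic variation sums stay bounded.

```python
import numpy as np

def variation_sums(f, n):
    """First- and second-order variation sums of f on a uniform grid
    of n points on [1/n, 1] (illustrative helper, not from the paper)."""
    t = np.linspace(1.0 / n, 1.0, n)
    d = np.abs(np.diff(f(t)))
    return float(d.sum()), float((d ** 2).sum())

f = lambda t: t * np.sin(1.0 / t)   # in BV_2 \ BV (with f(0) = 0)
for n in (10**3, 10**4, 10**5):
    v1, v2 = variation_sums(f, n)
    print(n, round(v1, 3), round(v2, 3))  # v1 keeps growing, v2 stays bounded
```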
The following auxiliary result will be used in the proof of the main results.
Proposition 3
Let \(\mathcal {A}\subset G([0,1],{\mathbb R}^d)\) be a family of functions with uniformly bounded \(\varepsilon \)-variations such that \(\mathcal {A}(0)\) is bounded, and let \(h:[0,1]\rightarrow \mathbb {R}\) be a regulated function. Then,
$$\begin{aligned} h\cdot \mathcal {A}=\{h\cdot y;\; y\in \mathcal {A}\} \end{aligned}$$
is also a family of functions with uniformly bounded \(\varepsilon \)-variations.
Proof
Let \(M>0\) be such that \(\Vert h\Vert _C\le M\) and \(\Vert y(0)\Vert \le M\) for all \(y\in \mathcal {A}\). Let \(\varepsilon >0\). There exists \(K_{\varepsilon }\) such that \(\varepsilon -\mathrm{var} (y)\le K_{\varepsilon }\) for all \(y\in \mathcal {A}\), i.e. for each \(y\in \mathcal {A}\), there is a BV function \(y_{\varepsilon }\) with \(\mathrm{var} (y_{\varepsilon })\le K_{\varepsilon }\) and \(\Vert y-y_{\varepsilon }\Vert _C<\varepsilon \).
By [14, Proposition 3.4], one can find a BV function \(h_{\varepsilon }\) such that
Then, for each \(y\in \mathcal {A}\) there exists the BV function \(h_{\varepsilon }\cdot y_{\varepsilon }\) satisfying
and this shows that \(h\cdot \mathcal {A}\) is indeed a family of functions with uniformly bounded \(\varepsilon \)-variations since
\(\square \)
Now, a few words concerning the integrals that will appear in our computations. Since in general we do not assume continuity, the Riemann–Stieltjes integral might not be well defined. On the other hand, the space of regulated functions is tightly connected to the space of BV functions via Kurzweil–Stieltjes integration (we refer the reader to [21, 26, 27, 31]); therefore, this kind of integral seems to be the most natural choice in our framework. In what follows, we focus on the basic properties of Kurzweil–Stieltjes integrals.
Definition 2
A function \(g:[0,1]\rightarrow \mathbb {R}^d\) is said to be Kurzweil–Stieltjes integrable with respect to \(u:[0,1]\rightarrow \mathbb {R}\) on [0, 1] (shortly, KS-integrable) if there exists \((\mathrm{KS})\int _0^{1}g(s)\mathrm{d}u(s)\in \mathbb {R}^d\) such that, for every \(\varepsilon > 0\), there is a gauge \(\delta _{\varepsilon } \) (a positive function) on [0, 1] with
$$\begin{aligned} \left\| \sum _{i=1}^{p} g(\xi _i)\left[ u(t_i)-u(t_{i-1})\right] -(\mathrm{KS})\int _0^{1}g(s)\mathrm{d}u(s)\right\| \le \varepsilon \end{aligned}$$
for every \(\delta _{\varepsilon }\)-fine partition \(\{([t_{i-1},t_i],\xi _i): \ \xi _i \in [t_{i-1}, t_i], \ i=1,\ldots ,p\}\) of [0, 1].
We recall that a partition \(\{([t_{i-1},t_i],\xi _i): \ i=1,\ldots ,p\}\) is \(\delta \)-fine if \([t_{i-1},t_i] \subset \left]\xi _i-\delta (\xi _i),\xi _i+\delta (\xi _i) \right[\), \( \forall i=1, \dots ,p\).
The KS-integrability is preserved on all subintervals of [0, 1]. The function \(t\mapsto \mathrm{(KS)}\int _0^t g(s)\mathrm{d}u(s)\) is called the KS-primitive of g w.r.t. u on [0, 1].
We mostly deal with the Kurzweil–Stieltjes integral; this is why the notation \(\int _0^t g(s)\mathrm{d}u(s)\) will be preferred to \(\mathrm{(KS)}\int _0^t g(s)\mathrm{d}u(s)\). Note though that, for a bounded variation function u, as a consequence of [24, Theorem 6.11.3], Lebesgue–Stieltjes integrability implies KS-integrability, but the two integrals do not always have the same value. More precisely,
In particular, regulated functions are KS-integrable with respect to bounded variation functions and also bounded variation functions are KS-integrable with respect to regulated functions (see [31]). The following property of the primitive implies that the solutions that will be obtained are functions of bounded variation.
Proposition 4
([31, Proposition 2.3.16]) Let \(u:[0,1]\rightarrow \mathbb {R}\) and \(g:[0,1]\rightarrow \mathbb {R}^d\) be such that the Kurzweil–Stieltjes integral \(\int _0^1 g(s)\mathrm{d}u(s)\) exists. If u is regulated, then so is the primitive \(h:[0,1]\rightarrow {\mathbb R}^d\), \(h(t)=\int _0^t g(s) \mathrm{d}u(s)\), and for every \(t \in [0,1]\),
$$\begin{aligned} h(t^+)-h(t)=g(t)\left[ u(t^+)-u(t)\right] ,\qquad h(t)-h(t^-)=g(t)\left[ u(t)-u(t^-)\right] . \end{aligned}$$
It follows that h is left continuous, respectively right continuous, at the points where u has the corresponding property.
Moreover, when u is of bounded variation and g is bounded, h is also of bounded variation.
The following estimates hold.
Proposition 5
- (i)
([27, Lemma I.4.16]) Let \(g:[0,1]\rightarrow \mathbb {R}^d\) be regulated and \(u :[0,1]\rightarrow \mathbb {R}\) a BV function. Then,
$$\begin{aligned} \left\| \int _0^1 g(t) \mathrm{d}u(t) \right\| \le \Vert g\Vert _C\cdot \mathrm{var}(u). \end{aligned}$$
- (ii)
([31, Theorem 2.3.8]) Let \(g:[0,1]\rightarrow \mathbb {R}^d\) be a BV function and \(u \in G([0,1],\mathbb {R})\). Then,
$$\begin{aligned} \left\| \int _0^1 g(t) \mathrm{d}u(t) \right\| \le \left[ \Vert g(0)\Vert + \Vert g(1)\Vert + \mathrm{var}(g) \right] \Vert u\Vert _C. \end{aligned}$$
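The estimate in Proposition 5(i) can be sanity-checked numerically; the following sketch is our own illustration (not from the paper) for purely atomic Stieltjes measures, where \(\mathrm{var}(u)\) is simply the sum of the absolute jump sizes.

```python
import math
import random

# Numerical sanity check of the estimate || int_0^1 g du || <= ||g||_C * var(u)
# for purely atomic Stieltjes measures du = sum_c s_c * delta_c,
# for which var(u) = sum_c |s_c|.  Illustrative only.
random.seed(1)
g = lambda t: math.sin(7.0 * t)   # |g| <= 1, so ||g||_C <= 1
for _ in range(500):
    atoms = [(random.random(), random.uniform(-1.0, 1.0)) for _ in range(10)]
    integral = sum(g(c) * s for c, s in atoms)
    var_u = sum(abs(s) for _, s in atoms)
    assert abs(integral) <= 1.0 * var_u + 1e-12
print("estimate verified on 500 random atomic measures")
```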
We end this section on Kurzweil–Stieltjes integration with a convergence result which, in a different setting, can be found in [5, Theorem 2.5]. We prefer to give the entire proof here (even though it follows the same lines as in the mentioned work) since in [5] the framework is that of the Young integral (which might, in general, have different properties from the KS-integral) in Hilbert spaces and, besides, a generalized notion of bounded variation is considered there.
Lemma 1
Let \(w:[0,1]\rightarrow \mathbb {R}\) be a step function and \(f_n:[0,1]\rightarrow \mathbb {R}^d\) be pointwise convergent to the null function. Then,
$$\begin{aligned} \lim _{n\rightarrow \infty }\int _0^1 f_n(s)\mathrm{d}w(s)=0. \end{aligned}$$
Proof
Let w be defined as \(w:= \sum _{k=0}^m \hat{c_k} \chi _{\{t_{k}\}} + \sum _{k=1}^m {c_k} \chi _{(t_{k-1}, t_k)} \). Then, by [27, Theorems I.4.21, I.4.22],
$$\begin{aligned} \int _0^1 f_n(s)\mathrm{d}w(s)=\sum _{k=0}^m f_n(t_k)\left( c_{k+1}-c_k\right) , \end{aligned}$$
where \(c_0:=\hat{c_0}\) and \(c_{m+1}:=\hat{c_m}\); therefore, it suffices to pass to the limit as \(n\rightarrow \infty \) to get the assertion. \(\square \)
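For a left-continuous step distribution function, the KS integral reduces to a sum over the atoms of the associated measure; this is the kind of computation behind Lemma 1. A minimal numerical sketch (our own; helper names are illustrative):

```python
def ks_integral_atomic(g, atoms):
    """(KS) int_0^1 g du for a left-continuous step function u whose
    Stieltjes measure du is purely atomic, du = sum_c s_c * delta_c:
    the integral then equals sum_c g(c) * s_c."""
    return sum(g(c) * s for c, s in atoms)

g = lambda t: t ** 2
# a single unit jump at t = 1/2 (a left-continuous Heaviside-type u)
print(ks_integral_atomic(g, [(0.5, 1.0)]))   # g(1/2) = 0.25
# a Zeno-type atomic measure: atoms accumulating at t = 1/2 from the left
atoms = [(0.5 - 1.0 / k, 2.0 ** (-k)) for k in range(3, 40)]
print(ks_integral_atomic(g, atoms))
```

The second measure is of the kind appearing later in Example 2: countably many atoms accumulating inside the interval, which the KS framework handles without difficulty.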
Theorem 2
Let \(f, f_n \in G([0,1],\mathbb {R}^d)\), \(g, g_n \in \mathrm{BV}([0,1],\mathbb {R})\) for \(n \in \mathbb {N}\) be such that the sequence \((f_n)_n\) has uniformly bounded \(\varepsilon \)-variations, \(f_n \rightarrow f\) pointwise, \(\lim _{n \rightarrow \infty }\Vert g-g_n\Vert _C =0\) and \(\sup _{n \in \mathbb {N}} \mathrm{var} (g_n) =M < \infty \). Then,
$$\begin{aligned} \lim _{n\rightarrow \infty }\int _0^1 f_n(s)\mathrm{d}g_n(s)=\int _0^1 f(s)\mathrm{d}g(s). \end{aligned}$$
(2)
Proof
By the hypothesis of uniformly bounded \(\varepsilon \)-variations, for each \(\varepsilon >0\) there exists \(L^{\varepsilon }>0\) such that
$$\begin{aligned} \varepsilon -\mathrm{var}\; f_n\le L^{\varepsilon },\quad \mathrm{for\; every}\; n\in \mathbb {N}. \end{aligned}$$
Thus, one can find \(z^{\varepsilon }\) and \(z_n^{\varepsilon }\) in \(\mathrm{BV}([0,1],{\mathbb R}^d)\) such that \(\Vert f_n - z_n^{\varepsilon }\Vert _C \le {\varepsilon }\) and \(\Vert f - z^{\varepsilon }\Vert _C \le {\varepsilon }\), \(\mathrm{var} (z_n^{\varepsilon }) \le L^{\varepsilon } +1\).
Denote \(\hat{L^{\varepsilon }} = \max \{\mathrm{var} (z^{\varepsilon }), L^{\varepsilon } +1\}\).
The sequence \((f_n)_n\) is bounded in \(G([0,1],\mathbb {R}^d)\) inasmuch as \((f_n(0))_n\) is bounded (being convergent) and for every n and \(t\in [0,1]\),
Taking \(\varepsilon =1\), we obtain a constant \(M_1\), independent of n, such that \(\Vert f_n\Vert _C \le M_1\).
Let now \(\varepsilon >0\) be fixed. Then, there is a step function \(w :[0,1]\rightarrow \mathbb {R}\) such that \(\Vert g-w\Vert _C \le \frac{\varepsilon }{ \hat{L^{\varepsilon }}}\) and \(\mathrm{var}(w) \le M + \delta \) for some \(\delta >0\). Using Lemma 1 and the uniform convergence of \((g_n)_n\), there exists \(n_0\) such that for \(n \ge n_0\) we have
Then,
We are going to estimate each term separately. Thus, by Proposition 5,
Applying again Proposition 5, we get
and
Finally,
Therefore, by (4), (3), (5), (6) and (7) for \(n \ge n_0\), there is a constant K independent of n and \(\varepsilon \) such that
and so, (2) is satisfied. \(\square \)
As usual in measure theory (e.g. [4]), a sequence of measures \((\mu _n)_n\) is said to converge strongly (resp. weakly*) to the measure \(\mu \) if for every bounded measurable (resp. continuous) function \(f:[0,1]\rightarrow \mathbb {R}\),
$$\begin{aligned} \lim _{n\rightarrow \infty }\int _0^1 f(s)\mathrm{d}\mu _n(s)=\int _0^1 f(s)\mathrm{d}\mu (s). \end{aligned}$$
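A quick numerical sketch of the distinction between the two modes of convergence (our own illustration, with hypothetical helper names): ramp distribution functions spread a unit mass over \([c, c+1/n]\); the resulting measures converge weakly* to the Dirac measure \(\delta _c\), but not strongly, as the bounded measurable indicator of \(\{c\}\) shows.

```python
import numpy as np

def ramp_integral(f, c, n, m=20001):
    """int f dmu_n, where mu_n is the uniform unit mass on [c, c + 1/n]
    (distribution function: a continuous ramp approximating a jump at c).
    Approximated by averaging f over an m-point grid."""
    s = np.linspace(c, c + 1.0 / n, m)
    return float(np.mean(f(s)))

c = 0.5
for n in (10, 100, 1000):
    print(n, ramp_integral(np.cos, c, n))   # tends to cos(c): weak* limit delta_c
# strong convergence fails for the bounded measurable indicator of {c}:
ind = lambda s: (s == c).astype(float)
print(ramp_integral(ind, c, 1000))          # near 0, yet int ind d(delta_c) = 1
```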
Finally, for necessary notions of set-valued analysis not mentioned here, we refer the reader to [1, 7, 18]. The space \(\mathcal {P}_{kc}({\mathbb R}^d)\) of all nonempty compact convex subsets of \({\mathbb R}^d\) will be endowed with the Hausdorff distance D (also called the Pompeiu–Hausdorff distance); it is well known that it thus becomes a complete metric space. For \(A\in \mathcal {P}_{kc}({\mathbb R}^d)\), denote \(|A|=D(A,\{0\})\).
The classical Radström embedding theorem ([7, Theorem II.19]) yields that \(\mathcal {P}_{kc}({\mathbb R}^d)\) can be embedded in a Banach space \((X,\Vert \cdot \Vert )\) such that if \(x_1,x_2\in X\) correspond to the sets \(A_1,A_2\in \mathcal {P}_{kc}({\mathbb R}^d)\), then
$$\begin{aligned} \Vert x_1-x_2\Vert =D(A_1,A_2). \end{aligned}$$
A multifunction \(\varGamma :{\mathbb R}^d\rightarrow \mathcal {P}_{kc}({\mathbb R}^d)\) is upper semi-continuous at a point \(x_0\) if for every \(\varepsilon >0\) there exists \(\delta _{\varepsilon } > 0\) such that the excess of \(\varGamma (x)\) over \(\varGamma (x_0)\) (in the sense of Hausdorff) is less than \(\varepsilon \) whenever \(\Vert x-x_0\Vert <\delta _{\varepsilon }\): \(\varGamma (x) \subset \varGamma (x_0) + \varepsilon B^d\), where \(B^d\) is the unit ball of \({\mathbb R}^d\).
We say that a function \(g:[0,1] \rightarrow {\mathbb R}^d\) is a selection of \(\varGamma :[0,1]\rightarrow \mathcal {P}_{kc}({\mathbb R}^d)\) if \(g(t) \in \varGamma (t)\) a.e. Recall that [25, Lemma 3.10] yields that regulated multifunctions possess regulated selections (see also [11] for the infinite-dimensional setting or [3] for a BV selection result).
3 Main results
We shall first study the measure differential multivalued problem (1) from the point of view of existence of solutions. Let us state precisely the notion of solution of such a problem that will be used.
Definition 3
A solution of the problem (1) is a function \(x : [0,1] \rightarrow {{\mathbb R}^d}\) for which there exists a KS-integrable function \(g : [0,1] \rightarrow \mathbb {R}^d\) such that \(g(t) \in F(t,x(t))\) \(\mu \)-a.e. and
$$\begin{aligned} x(t)=x_0+\int _0^t g(s)\,\mathrm{d}\mu (s),\quad \forall \, t\in [0,1]. \end{aligned}$$
Note that here \(\mu \) is a Stieltjes measure associated with a left-continuous BV function, so, by Proposition 4, x is also left continuous; hence, in the preceding definition, we have in fact \(g(t) \in F(t,x(t^-))\) \(\mu \)-a.e. (as in [8]).
For existence results for measure differential inclusions considering this notion of solution, we refer the reader to [8, 9] or [25]. Notice that alternative concepts of solutions were considered in [22, 30] or [29].
We obtain the existence of solutions with special features, namely defined through regulated selections of the multifunction on the right-hand side, under regulatedness assumptions (with respect to the Hausdorff distance) on the multifunction. What is more, this set of solutions will be shown to satisfy two continuous dependence results, i.e. the solution set can be approximated by the solution sets of approximating inclusions. In Theorem 4, the distribution functions associated with the measures driving the approximating inclusions converge in the two-norm sense to the distribution function associated with the measure driving the limit problem. In Theorem 5, the measures driving the approximating inclusions tend strongly to the measure driving the initial problem.
Let us start by proving a key auxiliary lemma concerning the existence of regulated selections for multifunctions which are regulated w.r.t. the Hausdorff distance. Such selections are obtained by considering the classical “Steiner point map” \(s_d(K)\) associated with a convex compact subset \(K \in \mathcal {P}_{kc}({\mathbb R}^d)\) and defined using spherical integration (e.g. [1], page 366 and [18], page 98). It is known that the map \(s_d(\cdot )\) is Lipschitz with constant d:
$$\begin{aligned} \Vert s_d(A)-s_d(B)\Vert \le d\, D(A,B),\quad \forall \, A,B\in \mathcal {P}_{kc}({\mathbb R}^d). \end{aligned}$$
(8)
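In dimension \(d=1\) the Steiner point of a compact interval is simply its midpoint, so the Lipschitz property of the Steiner map can be sanity-checked directly. A toy sketch of our own, under this one-dimensional assumption:

```python
import random

def hausdorff(a, b):
    """Hausdorff distance between compact intervals a = [a0, a1], b = [b0, b1]."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def steiner(a):
    """Steiner point of a compact interval: its midpoint (the d = 1 case)."""
    return 0.5 * (a[0] + a[1])

random.seed(0)
for _ in range(1000):
    a = sorted(random.uniform(-1.0, 1.0) for _ in range(2))
    b = sorted(random.uniform(-1.0, 1.0) for _ in range(2))
    # Lipschitz property with constant d = 1
    assert abs(steiner(a) - steiner(b)) <= hausdorff(a, b) + 1e-12
print("|s(A) - s(B)| <= 1 * D(A, B) held on 1000 random interval pairs")
```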
Lemma 2
Let \(F:[0,1]\rightarrow \mathcal {P}_{kc}({\mathbb R}^d)\) be regulated w.r.t. Hausdorff distance. Then:
- (i)
For every \(\varepsilon >0\), there exists \(F^{\varepsilon }:[0,1]\rightarrow \mathcal {P}_{kc}({\mathbb R}^d)\) which is BV w.r.t. the Hausdorff distance and satisfies the inequality
$$\begin{aligned} \sup _{t\in [0,1]} D\left( F(t),F^{\varepsilon }(t)\right) <\varepsilon . \end{aligned}$$ - (ii)
There exists a regulated selection \(f:[0,1]\rightarrow \mathbb {R}^d\) of F such that for every \(\varepsilon >0\) one can find a selection \(g^{\varepsilon }:[0,1]\rightarrow {\mathbb R}^d\) of \(F^{\varepsilon }\) with \(\mathrm{var}\; g^{\varepsilon }\le d\; \mathrm{var} (F^{\varepsilon })\) satisfying
$$\begin{aligned} \sup _{t\in [0,1]}\left\| f(t)-g^{\varepsilon }(t)\right\| <d\, \varepsilon . \end{aligned}$$
Proof
- (i)
Through the Radström embedding, we may regard the multifunction F as a regulated function with values in an appropriate Banach space. The first assertion then follows from the density of the subset of BV mappings in the space of regulated mappings with values in this Banach space.
- (ii)
Let now f(t) and \(g^{\varepsilon }(t)\) be the Steiner points of F(t) and \(F^{\varepsilon }(t)\), respectively (obtained by means of the Steiner point map). Then, by the inequality (8),
$$\begin{aligned} \sup _{t\in [0,1]} \Vert f(t)-g^{\varepsilon }(t)\Vert \le \sup _{t\in [0,1]} d \;D\left( F(t),F^{\varepsilon }(t)\right) <d\, \varepsilon . \end{aligned}$$
On the other hand, since F is regulated, its Steiner selection f is also regulated. Indeed, by Proposition 1 (in fact, by the remark following Proposition 1, applied in the Banach space in which \(\mathcal {P}_{kc}({\mathbb R}^d)\) is embedded through the Radström embedding), there are an increasing continuous function \(\eta :[0,\infty )\rightarrow [0,\infty )\), \(\eta (0)=0\) and an increasing function \(v:[0,1]\rightarrow [0,1]\), \(v(0)=0\), \(v(1)=1\) such that
$$\begin{aligned} D\left( F(t_1),F(t_2)\right) \le \eta \left( v(t_2)-v(t_1)\right) \end{aligned}$$
for every \(0\le t_1< t_2\le 1\).
It follows that
$$\begin{aligned} \Vert f(t_1)-f(t_2)\Vert \le d\, \eta \left( v(t_2)-v(t_1)\right) , \end{aligned}$$
and so, the regulatedness of f is again a consequence of Proposition 1.
Besides, the variation of \(g^{\varepsilon }\) is majorized by d times the variation of \(F^{\varepsilon }\) since for every \(0\le t_1<t_2\le 1\),
$$\begin{aligned} \Vert g^{\varepsilon }(t_1)-g^{\varepsilon }(t_2)\Vert \le d\, D\left( F^{\varepsilon }(t_1),F^{\varepsilon }(t_2)\right) . \end{aligned}$$
\(\square \)
The following hypotheses will be imposed on the multifunction \(F:[0,1]\times \mathbb {R}^d\rightarrow \mathcal {P}_{kc}({\mathbb R}^d)\) in order to obtain our existence result.
- (H1)
For every BV function \(x:[0,1]\rightarrow {\mathbb R}^{d}\), the map \(F(\cdot ,x(\cdot ))\) is regulated with respect to the Hausdorff distance;
- (H2)
For every \(R>0\) and every \(\varepsilon >0\), there exists \(L_{\varepsilon ,R}>0\) such that for every BV function x with \(\mathrm{var}(x)\le R\) one can find a BV multifunction \(F^{\varepsilon }_x:[0,1]\rightarrow \mathcal {P}_{kc}({\mathbb R}^d)\) such that
$$\begin{aligned} \mathrm{var}(F^{\varepsilon }_x)\le L_{\varepsilon ,R}\quad \mathrm{and}\quad \sup _{t\in [0,1]}D\left( F(t,x(t)),F^{\varepsilon }_x(t)\right) <\varepsilon ; \end{aligned}$$ - (H3)
\(F(t,\cdot )\) is upper semi-continuous for every \(t \in [0,1]\).
Lemma 3
Let \(F:[0,1]\times \mathbb {R}^d\rightarrow \mathcal {P}_{kc}({\mathbb R}^d)\) satisfy the assumptions (H1), (H2). Then, for every \(R>0\) there exists \(M_R>0\) such that for every \(x:[0,1]\rightarrow \mathbb {R}^d\) with \(x(0)=x_0\) and \(\mathrm{var}(x)\le R\),
$$\begin{aligned} \sup _{t\in [0,1]}\left| F(t,x(t))\right| \le M_R. \end{aligned}$$
Proof
Let us first remark that the proof of [14, Proposition 3.7] works not only in \({\mathbb R}^d\), but also in an infinite-dimensional vector space. By applying it in the Banach space in which we embed \(\mathcal {P}_{kc}({\mathbb R}^d)\) (by Radström embedding theorem), for the family
having uniformly bounded \(\varepsilon \)-variations and the additional property that
is bounded, one obtains that there exists a constant \(M_R\) such that
$$\begin{aligned} \sup _{t\in [0,1]}\left| F(t,x(t))\right| \le M_R\quad \mathrm{for\; every\; such}\; x. \end{aligned}$$
\(\square \)
Theorem 3
Let \(\mu \) be the Stieltjes measure associated with a left-continuous non-decreasing function and let \(F: [0,1] {\times }\mathbb {R}^{d} {\rightarrow } \mathcal {P}_{kc}({\mathbb R}^d)\) satisfy hypotheses (H1), (H2) and (H3).
Suppose that one can find \(R_0>0\) satisfying the inequality
$$\begin{aligned} M_{R_0}\, \mu ([0,1])\le R_0. \end{aligned}$$
Then, there exists at least one BV solution \(x:[0,1] \rightarrow \mathbb {R}^{d}\) of the measure differential problem (1) such that
$$\begin{aligned} x(t)=x_0+\int _0^t g(s)\,\mathrm{d}\mu (s),\quad t\in [0,1], \end{aligned}$$
with \(\mathrm{var} (x)\le R_0\), where \(g(\cdot ) \in F(\cdot ,x(\cdot ))\) is regulated and satisfies
$$\begin{aligned} \Vert g\Vert _C\le M_{R_0}. \end{aligned}$$
Proof
We shall construct a sequence of BV functions having the variations majorized by \(R_0\) and, using Theorem 1, we shall prove that it has a convergent subsequence. Its limit will be our solution.
Start by choosing \(x_0(t) = x_0\) for \(t \in [0,1]\). Suppose then that we have already constructed a BV function \(x_n\) on [0, 1] with \(\mathrm{var}(x_n)\le R_0\) and choose \(x_{n+1}\) as described below.
By hypothesis (H1), \(F(\cdot ,x_n(\cdot ))\) is regulated with respect to the Hausdorff distance; therefore, applying Lemma 2, one can find a regulated selection \(g_n(\cdot )\) of \(F(\cdot ,x_n(\cdot ))\) such that for every \(\varepsilon >0\) there exists a selection \(g_{x_n}^{\varepsilon }:[0,1]\rightarrow \mathbb {R}^d\) of \(F_{x_n}^{\varepsilon }\) with \(\mathrm{var} (g_{x_n}^{\varepsilon })\le d\; \mathrm{var} (F_{x_n}^{\varepsilon })\) satisfying
$$\begin{aligned} \sup _{t\in [0,1]}\left\| g_n(t)-g_{x_n}^{\varepsilon }(t)\right\| <d\, \varepsilon . \end{aligned}$$
This means that the sequence \((g_n)_n\) has uniformly bounded \(\varepsilon \)-variations. Define now
$$\begin{aligned} x_{n+1}(t)=x_0+\int _0^t g_n(s)\,\mathrm{d}\mu (s),\quad t\in [0,1]. \end{aligned}$$
Lemma 3 implies that \(\Vert g_n\Vert _C\le M_{R_0}\), whence
$$\begin{aligned} \mathrm{var}(x_{n+1})\le \Vert g_n\Vert _C\, \mu ([0,1])\le M_{R_0}\, \mu ([0,1])\le R_0, \end{aligned}$$
and so, the sequence \((x_n)_n\) indeed has variations majorized by \(R_0\).
Since \((g_n)_n\) has uniformly bounded \(\varepsilon \)-variations and observing that
$$\begin{aligned} \Vert g_n(0)\Vert \le \left| F(0,x_n(0))\right| \le M_{R_0}, \end{aligned}$$
it follows, by the Helly–Frankova selection principle (see Theorem 1), that one can extract a subsequence \((g_{n_k})_k\) pointwise convergent to a regulated function g satisfying
$$\begin{aligned} \Vert g\Vert _C\le M_{R_0}. \end{aligned}$$
We are now able to apply the bounded convergence result [27, Theorem I.4.24] for regulated functions to get
$$\begin{aligned} \int _0^t g_{n_k}(s)\,\mathrm{d}\mu (s)\rightarrow \int _0^t g(s)\,\mathrm{d}\mu (s),\quad \forall \, t\in [0,1], \end{aligned}$$
and so, denoting
$$\begin{aligned} x(t)=x_0+\int _0^t g(s)\,\mathrm{d}\mu (s),\quad t\in [0,1], \end{aligned}$$
it follows that \(x_{n_k}\rightarrow x\) pointwise.
We finally prove that x is a solution of (1), i.e. that \(g(t) \in F(t,x(t))\), using hypothesis (H3): for each \(t\in [0,1]\) and \(\varepsilon >0\),
$$\begin{aligned} g_{n_k}(t)\in F(t,x_{n_k}(t))\subset F(t,x(t))+\varepsilon B^d \end{aligned}$$
for all k greater than some \(k_{\varepsilon ,t}\), whence \(g(t) \in F(t,x(t))\) as pointwise limit of \((g_{n_k})_k\). \(\square \)
Remark 2
Our hypotheses (H1), (H2), (H3) are less restrictive than the assumptions imposed in [25, Theorem 3.5] to get BV solutions and continuous dependence results (in particular, they are satisfied by any Lipschitz continuous multifunction).
Moreover, the result is more general than [25, Theorem 3.11] (as can be seen from the characterization of uniformly bounded \(\varepsilon \)-variations given in [14, Theorem 3.11.(ii)]).
Let us now give an example to motivate the generality of Theorem 3.
Example 2
Let \(\mu \) be the Stieltjes measure associated with a left-continuous non-decreasing function \(u:[0,1]\rightarrow \mathbb {R}\) with a possibly very rough behaviour, such as
where H is the (left-continuous) Heaviside function
$$\begin{aligned} H(t)=0\; \mathrm{for}\; t\le 0,\qquad H(t)=1\; \mathrm{for}\; t>0. \end{aligned}$$
By its expression, u has countably many discontinuity points accumulating at the middle of the unit interval (thus, the studied hybrid system will have a Zeno behaviour, which cannot be studied using classical impulsive differential equations).
Let \(F: [0,1] {\times }\mathbb {R}^{d} {\rightarrow } \mathcal {P}_{kc}({\mathbb R}^d)\) be defined for each \(t\in [0,1]\) and \(x\in \mathbb {R}^{d}\) by
where \(\widetilde{F}: [0,1] {\times }\mathbb {R}^{d} {\rightarrow } \mathcal {P}_{kc}({\mathbb R}^d)\) is a Lipschitz continuous multifunction, i.e. there exists \(K>0\) such that \(K \mu ([0,1])<1\) and for every \(t_1,t_2\in [0,1]\) and \(x_1,x_2\in \mathbb {R}^{d}\),
$$\begin{aligned} D\left( \widetilde{F}(t_1,x_1),\widetilde{F}(t_2,x_2)\right) \le K\left( |t_1-t_2|+\Vert x_1-x_2\Vert \right) , \end{aligned}$$
and \(f:[0,1]\rightarrow {\mathbb R}^{d}\) is a function of bounded 2-variation which is not BV.
In this case, by [25, Remark 3.6], \(\widetilde{F}\) has the property that for each \(R>0\) the family \(\{\widetilde{F}(\cdot ,x(\cdot ));\ \mathrm{var}(x)\le R\}\) has uniformly bounded variation w.r.t. the Hausdorff distance. Moreover, the function f is regulated (but not BV); thus, it has finite \(\varepsilon \)-variation for every \(\varepsilon >0\). Therefore, F satisfies our assumption (H2) (and, obviously, the other two as well).
It can be checked that for each \(R>0\) one can choose
whence any \(R_0\) such that
satisfies the inequality \(M_{R_0}\mu ([0,1])\le R_0\). It follows that the multifunction F satisfies the hypotheses of Theorem 3, but does not satisfy the assumptions of other existence results known for the same problem (1), such as [25, Theorem 3.5].
Remark 3
Moreover, if the multifunction \(\widetilde{F}\) has the property that for each \(R>0\) the family \(\{\widetilde{F}(\cdot ,x(\cdot ));\ \mathrm{var} (x)\le R\}\) has uniformly bounded variation and f is as in the preceding example, then the multifunction
satisfies the hypotheses of Theorem 3, but does not satisfy the assumptions of [25, Theorem 3.11].
Under the assumptions of the previous theorem, we can obtain the continuous dependence on the measure of the set of solutions with the described properties.
Denote by \(\mathcal {S}_n\) and \(\mathcal {S}\) the sets of solutions of the problem (1) driven by \(\mu _n\) and \(\mu \), respectively, where \(x_n(t) = x_0 + \int _0^t g_n(s) \; \mathrm{d}\mu _n(s)\) and \(x(t) = x_0 + \int _0^t g(s) \; \mathrm{d}\mu (s)\), \(\forall \, t \in [0,1]\), are obtained by integrating regulated selections \(g_n \) and g, respectively, with
Theorem 4
Let F satisfy the assumptions (H1), (H2), (H3) of Theorem 3, and let \(\mu ,(\mu _n)_n\) be Stieltjes measures associated with left-continuous non-decreasing functions \(u,u_n\), respectively, such that
$$\begin{aligned} \lim _{n\rightarrow \infty }\Vert u-u_n\Vert _C=0\quad \mathrm{and}\quad \sup _{n \in \mathbb {N}}\mathrm{var}(u_n)<\infty . \end{aligned}$$
Suppose that there exists \(R_0>0\) such that
Then for every sequence \((x_n)_n\), \(x_n \in \mathcal {S}_n\), there exist \(x\in \mathcal {S}\) and a subsequence \((x_{n_k})_k\) converging pointwise to x, such that the sequence of Stieltjes measures \((\mathrm{d}x_{n_k})_k\) converges weakly* to the measure \(\mathrm{d}x\).
Proof
The hypothesis of the existence theorem is verified for \(\mu \) and \(\mu _n\) for all \(n\in \mathbb {N}\); therefore, the sets \(\mathcal {S}_n\) and \(\mathcal {S}\) are nonempty.
Let \((x_n)_n\) be a sequence of solutions of our problem driven by the measures \(\mu _n\), respectively. Then, for each n there exists a selection \(g_n\), with \(g_n(t) \in F(t,x_n(t))\), such that \(x_n(t) = x_0 + \int _0^t g_n(s) \; \mathrm{d}\mu _n(s), \; \forall \; t \in [0,1]\), and \(g_n\) is regulated with
Obviously, the sequence \((g_n)_n\) satisfies the hypotheses of the Helly–Fraňková selection principle, and so one can find a subsequence \((g_{n_k})_k\) pointwise convergent to a regulated function g with
Let us show that
has the property that \((x_{n_k})_k\) converges pointwise to x.
Indeed,
which tends to 0 as \(k\rightarrow \infty \) by Theorem 2.
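The approximation mechanism behind this pointwise convergence can be illustrated numerically. The sketch below is a hypothetical toy instance, not the authors' data: the measures \(\mu_n\) are purely atomic truncations of a limit measure \(\mu\) whose atoms accumulate at 1, the selection g is constant, and the solution formula \(x(t)=x_0+\int_0^t g\,\mathrm{d}\mu\) is evaluated with one common convention for left-continuous distribution functions, namely integration over \([0,t)\).

```python
# Hypothetical illustration (not the authors' example): solutions
# x_n(t) = x_0 + int_0^t g_n dmu_n of a measure-driven problem, where
# mu_n keeps the first n atoms of a limit measure mu whose atoms
# t_k = 1 - 2**-k (with mass 2**-k) accumulate at 1.

def stieltjes_sum(g, atoms, t):
    """Integral of g over [0, t) w.r.t. a purely atomic measure.

    atoms: list of (position, mass) pairs."""
    return sum(mass * g(pos) for pos, mass in atoms if pos < t)

def solution(g, atoms, x0, t):
    # x(t) = x0 + int_[0,t) g dmu
    return x0 + stieltjes_sum(g, atoms, t)

g = lambda s: 1.0          # a constant selection, for simplicity
x0 = 0.0
limit_atoms = [(1 - 2**-k, 2**-k) for k in range(1, 60)]

for n in (5, 10, 20):
    # x_n(1) = sum_{k<=n} 2**-k, which tends to x(1) = 1 as n grows
    print(n, solution(g, limit_atoms[:n], x0, 1.0))

x_lim = solution(g, limit_atoms, x0, 1.0)
```

Here the pointwise convergence \(x_n(1)\to x(1)\) is visible directly; in the proof it is delivered by Theorem 2 for arbitrary selections of uniformly bounded \(\varepsilon\)-variations.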
Concerning the second part of the assertion, namely that \(\mathrm{d}x_{n_k}\) converges weakly* to \(\mathrm{d}x\), take an arbitrary continuous (hence, in particular, regulated) function \(h:[0,1]\rightarrow \mathbb {R}\). The function h is bounded and KS-integrable with respect to \(x_{n_k}\) (regulated functions are KS-integrable w.r.t. BV functions), so the substitution theorem [31, Theorem 2.3.19] can be applied to get:
By Proposition 3, the sequence \((hg_{n_k})_k\) also has uniformly bounded \(\varepsilon \)-variations, so again by Theorem 2, (9) converges to 0 as \(k\rightarrow \infty \).
Besides, [24, Theorem 6.11.3] implies that in this case
and also
and so,
Finally, let us see that \(x\in \mathcal {S}\). This is a consequence of the semi-continuity property of the multifunction F, since it implies that for each \(t\in [0,1]\) and \(\varepsilon >0\),
for all k greater than some \(k_{\varepsilon ,t}\). \(\Box \)
Remark 4
Let us note that in some works (such as [2]), this type of convergence of a sequence of BV functions, i.e.
is called two-norm convergence.
Under the assumptions of the previous theorem, namely that \((u_n)_n\) converges to u in the two-norm sense, the associated Stieltjes measures have the property that
since for any continuous function \(f:[0,1]\rightarrow \mathbb {R}\),
by Theorem 2 combined with [24, Theorem 6.11.3].
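This weak* convergence can be checked numerically on a toy family. The atomic measures below are a hypothetical stand-in with the relevant structure (uniformly bounded variation of the distribution functions, truncation converging to the limit), not the measures of any example in the text; the test function f is an arbitrary continuous function on [0, 1].

```python
# Numerical sketch of weak* convergence d u_n -> d u: for a continuous
# test function f, int f du_n -> int f du.  The measures are purely
# atomic (a hypothetical family, chosen only for this illustration).
import math

atoms = [(1 - 2**-k, 2**-k) for k in range(1, 60)]   # atoms of du

def integrate(f, atoms):
    # int_[0,1] f du for a purely atomic measure
    return sum(mass * f(pos) for pos, mass in atoms)

f = math.cos                  # any continuous test function on [0, 1]

target = integrate(f, atoms)
# du_n keeps only the first n atoms; the integrals approach the target
errors = [abs(integrate(f, atoms[:n]) - target) for n in (5, 10, 20)]
print(errors)
```

The errors decay like the mass of the discarded atoms, i.e. like \(2^{-n}\) for this particular family.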
The continuous dependence result allows us, as stated before, to approximate the solutions of the studied problem in the case of a measure whose distribution function behaves badly by the solutions of approximating problems driven by simpler measures, as the following example shows.
Example 3
Let the sequence \(u_n:[0,1]\rightarrow {\mathbb R}\) be defined for each \(n\in \mathbb {N}\) by
Then, \((u_n)_n\) is a sequence of uniformly bounded variation, since \(\mathrm{var} (u_n)= 1\) for each \(n\in {\mathbb N}\).
Consider now the function \(u :[0,1]\rightarrow {\mathbb R}\) defined as follows:
Also \(\mathrm{var}(u) = 1\). The sequence \((u_n)_n\) converges to u uniformly, since
Hence, \((u_n)_n\) is a sequence of uniformly bounded variation converging uniformly to the BV function u, and so our Theorem 4 is applicable. It yields that, for any multifunction satisfying hypotheses (H1)–(H3), the solutions described in Theorem 3 of problem (1) driven by the measure \(\mathrm{d}u\) can be approximated by the solutions of measure differential problems driven by the measures \(\mathrm {d}u_n\), which behave much better (in particular, \(\mathrm{d}u\) possesses countably many impulses accumulating in the unit interval, while each \(\mathrm {d}u_n\) has only a finite number of impulses).
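Since the displayed formulas for \(u_n\) and u are not reproduced above, the sketch below uses a hypothetical reconstruction sharing the stated properties: \(\mathrm{var}(u_n)=\mathrm{var}(u)=1\), uniform convergence of \(u_n\) to u, countably many atoms of \(\mathrm{d}u\) accumulating at 1, and finitely many atoms for each \(\mathrm{d}u_n\). It may well differ from the authors' formulas.

```python
# Hypothetical reconstruction of Example 3 (the authors' formulas may
# differ): u has a jump of size 2**-k at t_k = 1 - 2**-k for every k,
# so var(u) = 1 and the jumps accumulate at 1; u_n keeps the first n
# jumps and lumps the remaining mass 2**-n into one extra jump, so
# var(u_n) = 1 exactly and u_n has finitely many jumps.

def u(t, terms=60):
    # left-continuous: the jump at t_k is counted only for t > t_k
    return sum(2**-k for k in range(1, terms + 1) if 1 - 2**-k < t)

def u_n(t, n):
    s = sum(2**-k for k in range(1, n + 1) if 1 - 2**-k < t)
    if 1 - 2**-(n + 1) < t:        # the lumped atom of mass 2**-n
        s += 2**-n
    return s

# total variation of u_n: (1 - 2**-n) from the kept jumps + 2**-n lumped
n = 10
var_un = sum(2**-k for k in range(1, n + 1)) + 2**-n

# uniform distance on a grid; the true supremum is 2**-(n + 1)
gap = max(abs(u_n(t, n) - u(t)) for t in [i / 1000 for i in range(1001)])
print(var_un, gap)
```

With this family, \(\sup_t |u_n(t)-u(t)| \le 2^{-(n+1)}\), so the uniform convergence required by Theorem 4 holds.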
Remark 5
If in Theorem 4 we strengthen the two-norm convergence hypothesis by imposing, instead, that
then
whence it can be seen that the rate of convergence of \((x_n)_n\) towards x depends on the rates of convergence of \((g_n)_n\) towards g and of \((u_n)_n\) towards u.
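Since the displayed estimate is not reproduced above, the following is a standard splitting of the error which, under the stated assumptions, yields a bound of the expected shape (the authors' displayed inequality may differ in its constants):

\[
\begin{aligned}
|x_n(t)-x(t)| &\le \int_0^t |g_n(s)-g(s)|\,\mathrm{d}\mu_n(s)
 + \left|\int_0^t g(s)\,\mathrm{d}\mu_n(s)-\int_0^t g(s)\,\mathrm{d}\mu(s)\right|\\
&\le \|g_n-g\|_{\infty}\,\mu_n([0,1])
 + \left|\int_0^t g(s)\,\mathrm{d}(u_n-u)(s)\right|,
\end{aligned}
\]

where the last term is controlled in terms of \(\|u_n-u\|_{\infty}\) (e.g. via integration by parts for the Kurzweil–Stieltjes integral), which makes explicit how the rate depends on both \((g_n)_n\) and \((u_n)_n\).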
Another way to approximate the solutions of problem (1) is to use the convergence result [28, Theorem 2.8].
Theorem 5
Let F satisfy the assumptions (H1), (H2), (H3) of Theorem 3, and let \(\mu ,(\mu _n)_n\) be Stieltjes measures associated with left-continuous non-decreasing functions \(u,u_n\), respectively, such that
Suppose that there exists \(R_0>0\) such that \( \mu _n([0,1])\le \frac{R_0}{M_{R_0}},\forall n\in \mathbb {N}. \)
Then, for every sequence \((x_n)_n\), \(x_n \in \mathcal {S}_n\), there exists \(x\in \mathcal {S}\) towards which a subsequence \((x_{n_k})_k\) converges pointwise and such that the sequence of Stieltjes measures \((\mathrm{d}x_{n_k})_k\) converges strongly to the measure \(\mathrm{d}x\).
Proof
The existence of a selection g of \(F(\cdot ,x(\cdot ))\) towards which \((g_{n_k})_k\) tends pointwise follows as in the previous continuous dependence theorem, using the Helly–Fraňková selection principle. Let us show that \((x_{n_k})_k\) converges pointwise to the function x defined by \(x(t)=x_0+\int _0^t g(s)\,\mathrm{d}\mu (s)\).
Fix \(t\in [0,1]\). We have
We can apply [28, Theorem 2.8], since the uniform integrability is ensured by the hypothesis \( \mu _n([0,1])\le \frac{R_0}{M_{R_0}},\forall n\in \mathbb {N}\), and by the fact that the sequence \((g_{n_k})_k\) is bounded in the supremum norm by \(M_{R_0}\), to obtain
By [24, Theorem 6.11.3],
while
Besides, \(g_{n_k}(t)\rightarrow g(t)\) and \(u_{n_k}(t+)-u_{n_k}(t)\rightarrow u(t+)-u(t)\) (since we can take \(f(t)=\chi _{\{t\}}\) in the definition of strong convergence of the measures \((\mathrm {d}u_n)_n\) towards the measure \(\mathrm{d}u\)). It follows that
In order to get the strong convergence of \((\mathrm{d}x_{n_k})_k\) towards \(\mathrm{d}x\), take an arbitrary bounded and measurable function \(h:[0,1]\rightarrow \mathbb {R}\). The function h is LS-integrable with respect to \(x_{n_k}\) (therefore, KS-integrable), so the substitution theorem [31, Theorem 2.3.19] can be applied to get
This converges to 0 as \(k\rightarrow \infty \) again by [28, Theorem 2.8] combined with [24, Theorem 6.11.3], as before. \(\square \)
It is worthwhile to observe that, although all our results are stated on the interval [0, 1], this interval may be replaced by a general interval [a, b]. If the condition \(\mu ([0,1])M_{R_0}\le R_0\) is not satisfied on [0, 1], but for some subinterval \([0,\alpha ]\subset [0,1]\) one has \(\mu ([0,\alpha ])M_{R_0}\le R_0\), then solutions exist on \([0,\alpha ]\) and the approximation results hold on this interval.
References
Aubin, J.-P., Frankowska, H.: Set-Valued Analysis. Birkhäuser, Boston (1990)
Aye, K.K., Lee, P.Y.: The dual of the space of functions of bounded variations. Math. Bohem. 131, 1–9 (2006)
Belov, S.A., Chistyakov, V.V.: A selection principle for mappings of bounded variation. J. Math. Anal. Appl. 249, 351–366 (2000)
Billingsley, P.: Weak Convergence of Measures: Applications in Probability. Conference Board of the Mathematical Sciences Regional Conference Series in Applied Mathematics, No. 5. Society for Industrial and Applied Mathematics, Philadelphia (1971)
Brokate, M., Krejci, P.: Duality in the space of regulated functions and the play operator. Math. Z. 245, 667–688 (2003)
Cao, Y., Sun, J.: On existence of nonlinear measure driven equations involving non-absolutely convergent integrals. Nonlinear Anal. Hybrid Syst. 20, 72–81 (2016)
Castaing, C., Valadier, M.: Convex Analysis and Measurable Multifunctions. Lecture Notes in Math. 580. Springer, Berlin (1977)
Cichoń, M., Satco, B.: Measure differential inclusions–between continuous and discrete. Adv. Differ. Equ. 56, 18 (2014)
Cichoń, M., Satco, B.: On the properties of solutions set for measure driven differential inclusions. In: Discrete and Continuous Dynamical Systems, Special Issue: SI, pp. 287–296 (2015)
Cichoń, M., Satco, B., Sikorska-Nowak, A.: Impulsive nonlocal differential equations through differential equations on time scales. Appl. Math. Comput. 218, 2449–2458 (2011)
Cichoń, M., Cichoń, K., Satco, B.: Measure differential inclusions through selection principles in the space of regulated functions. Mediterr. J. Math. 15(4), 148 (2018)
Di Piazza, L., Marraffa, V., Satco, B.: Closure properties for integral problems driven by regulated functions via convergence results. J. Math. Anal. Appl. 466, 690–710 (2018)
Federson, M., Mesquita, J.G., Slavík, A.: Measure functional differential equations and functional dynamic equations on time scales. J. Differ. Equ. 252, 3816–3847 (2012)
Fraňková, D.: Regulated functions. Math. Bohem. 116, 20–59 (1991)
Golubov, B.I.: On functions of bounded \(p\)-variation. Math. USSR Izv. 2, 799–819 (1968)
Halas, Z., Tvrdý, M.: Continuous dependence of solutions of generalized linear differential equations on a parameter. Funct. Differ. Equ. 16, 299–313 (2009)
Hönig, C.S.: Volterra Stieltjes—Integral Equations. North-Holland, Amsterdam (1975)
Hu, S., Papageorgiou, N.S.: Handbook of Multivalued Analysis. Kluwer Academic Publisher, Dordrecht (1997)
Krejci, P.: Hysteresis in singularly perturbed problems. In: Mortell, M., O’Malley, R., Pokrovskii, A., Sobolev, V. (eds.) Singular Perturbations and Hysteresis, pp. 73–100. SIAM, Philadelphia (2005)
Krejci, P., Laurencot, P.: Generalized variational inequalities. J. Convex Anal. 9, 159–183 (2002)
Kurzweil, J.: Generalized ordinary differential equations and continuous dependence on a parameter. Czechoslov. Math. J. 7, 418–449 (1957)
Miller, B., Rubinovitch, E.Y.: Impulsive Control in Continuous and Discrete-Continuous Systems. Kluwer Academic Publishers, Dordrecht (2003)
Monteiro, G.A., Tvrdý, M.: Generalized linear differential equations in a Banach space: continuous dependence on a parameter. Discrete Contin. Dyn. Syst. 33, 283–303 (2013)
Monteiro, G.A., Slavik, A., Tvrdý, M.: Kurzweil–Stieltjes Integral: Theory and Applications, Series in Real Analysis, vol. 15. World-Scientific, Singapore (2018)
Satco, B.: Continuous dependence results for set-valued measure differential problems. Electron. J. Qual. Theory Differ. Equ. 79, 1–15 (2015)
Schwabik, Š.: Generalized Ordinary Differential Equations. World Scientific, Singapore (1992)
Schwabik, Š., Tvrdý, M., Vejvoda, O.: Differential and Integral Equations: Boundary Value Problems and Adjoints. Academia, Praha and D. Reidel, Dordrecht (1979)
Serfozo, R.: Convergence of Lebesgue integrals with varying measures. Sankhya Ser. A 44(3), 380–402 (1982)
Sesekin, A.N., Zavalishchin, S.T.: Dynamic Impulse Systems. Kluwer Academic, Dordrecht (1997)
Silva, G.N., Vinter, R.B.: Measure driven differential inclusions. J. Math. Anal. Appl. 202, 727–746 (1996)
Tvrdý, M.: Differential and integral equations in the space of regulated functions. Mem. Differ. Equ. Math. Phys. 25, 1–104 (2002)
The authors were partially supported by the Grant of GNAMPA prot.U-UFMBAZ-2017-001592 22-12-2017. The infrastructure used in this work was partially supported by the project “Integrated Center for research, development and innovation in Advanced Materials, Nanotechnologies, and Distributed Systems for fabrication and control”, Contract No. 671/09.04.2015, Sectoral Operational Program for Increase of the Economic Competitiveness, co-funded by the European Regional Development Fund.
Di Piazza, L., Marraffa, V. & Satco, B. Approximating the solutions of differential inclusions driven by measures. Annali di Matematica 198, 2123–2140 (2019). https://doi.org/10.1007/s10231-019-00857-6