1 Introduction

Relaxation results for set-valued differential inclusions assert that when considering the problem

$$\begin{aligned} x'(t) \in F(t,x(t)),\quad x(0)=\xi _0 \end{aligned}$$
(2)

and its convexified analog

$$\begin{aligned} x'(t) \in {\overline{co}}F(t,x(t)), \quad x(0)=\xi _0 \end{aligned}$$
(3)

on a compact interval, any solution of the second one can be approximated in the uniform topology by a solution of the first one when F is Lipschitz with respect to the state. In the classical case, such results can be found, e.g., in [2] (see also the references therein).

Owing to their great importance in Optimal Control and in Stability Theory for hybrid systems, among other fields, such results were subsequently generalized in many directions (e.g., [14, 19, 25, 32]).

The study of the same problem with infinite-time horizon has been addressed, as far as the authors know, only for classical differential inclusions in [20] and [10].

On the other hand, differential problems with Stieltjes derivative [30] replacing the usual derivative offer a powerful tool for studying dynamics of systems with mixed continuous and discrete behavior.

This theory has received increasing attention in the last decade (see [18, 27, 31, 35] and the references therein), since it has proved useful in modeling the dynamics of various systems involving instantaneous changes (occurring at the discontinuity points of g) and stationary intervals of time (related to intervals where g is constant), such as in [18, 23] or [31]. It is related to the theory of measure differential problems [12, 13, 15, 28, 30], and it is worth mentioning that this setting covers usual differential problems (for an absolutely continuous map g), discrete problems (when g is a sum of step functions), as well as impulsive equations (for g being a combination of these two types of maps).

In the present paper, via an extension to the setting of differential inclusions with Stieltjes derivative of a classical result on continuous selection of solutions of (2) ([9, Theorem 3.1]) and following the steps of proof proposed in [20], we get a relaxation theorem for this very wide framework on an infinite-time interval. The main tool is a Filippov–Ważewski result for g-differential inclusions recently published [25, Theorem 12].

We thus generalize [20, Theorem 1], where g is the identity function.

More precisely, we prove that given \(\xi _0\in \mathbb {R}^d\), \(r:[0,\infty )\rightarrow \mathbb {R}_+\) continuous and \(z:[0,\infty )\rightarrow \mathbb {R}^d\) a solution of

$$\begin{aligned} z'_g(t) \in \left\{ \begin{array}{lllll} {\overline{co}}F(t,z(t)), &{} t \notin D_g, \qquad z(0)=\xi _0, \\ F(t, z(t)), &{} t \in D_g \end{array} \right. \end{aligned}$$

(\(D_g\) is the set of discontinuity points of g), there exists \(\eta ^0\in B(\xi _0,r(0))\) and a solution \(x:[0,\infty )\rightarrow \mathbb {R}^d\) of

$$\begin{aligned} x'_g(t)\in F(t,x(t)), \; x(0)=\eta ^0, \end{aligned}$$

such that

$$\begin{aligned} |z(t)-x(t)|\le r(t),\;\mathrm{for\;every\;}t\in [0,\infty ). \end{aligned}$$

As in the classical case where \(g(t)=t\), we cannot hope to find a solution x approximating z with the same initial value, as is possible on finite time intervals (see the counterexample in [20, Section 3]).

Also, since it was proved [25, Proposition 13] that, on compact intervals, the closure in the uniform norm topology of the solution set of

$$\begin{aligned} x'_g(t)\in F(t,x(t)), \; x(0)=\xi _0 \end{aligned}$$

is contained in the solution set of

$$\begin{aligned} x'_g(t) \in \left\{ \begin{array}{lllll} {\overline{co}}F(t,x(t)), &{} t \notin D_g, \qquad x(0)=\xi _0, \\ F(t, x(t)), &{} t \in D_g, \end{array} \right. \end{aligned}$$

it was expected that, even on infinite-time intervals, at the discontinuity points of g, it is not necessary to consider the closed convex hull of the values of F.

Our motivation comes from the large number of applications of relaxation results to the stability properties of differential inclusions [1], of hybrid systems [4, 5, 33], or in control theory [33, 39].

Moreover, since dynamic inclusions on time scales [8, 15, 37] can also be seen as measure differential inclusions and, therefore, as Stieltjes differential problems, it is possible to deduce a new relaxation result on \([0,\infty )\) for dynamic problems on time scales. The same can be said about generalized differential inclusions [22, 29, 34, 36, 38].

Let us finally acknowledge that Stieltjes differential inclusions include, in particular, impulsive differential inclusions; thus, a new relaxation result on infinite-time intervals can be inferred for such problems (with possibly set-valued impulses), as in [28] or [26].

2 Notations and Preliminaries

Let \(I\subset [0,\infty )\) be an interval containing 0 and \(g:I\rightarrow \mathbb {R}\) be a non-decreasing left-continuous function, which on any compact interval has at most a finite number of accumulation points of its discontinuity points. Without any loss of generality, we may suppose that \(g(0)=0\) and denote by

$$\begin{aligned} \Delta ^+g(t)=g(t+)-g(t), \ \ \ t \in I, \end{aligned}$$

where \(g(t+)\) stands for the right-hand limit (which is well defined, since g is non-decreasing).

The g-topology \(\tau _g\) on I is the topology having as open balls the sets \( \{s\in I:|g(t)-g(s)|<r \}\), for \(t\in I\) and \(r>0\) (see [18]). The dimension of the Euclidean space \(\mathbb {R}^d\) is \(d\ge 1\), \(|\cdot |\) is its norm, and \(B(x,r)=\{y\in \mathbb {R}^d:|x-y|\le r\}\) whenever \(x\in \mathbb {R}^d\) and \(r>0\).

The symbol \(\mathcal {L}_g(I)\) will denote the \(\sigma \)-algebra of g-measurable sets [18], while the Lebesgue–Stieltjes integrability of an \(\mathbb {R}^d\)-valued function on I means the abstract Lebesgue integrability w.r.t. the Stieltjes measure \(\mu _g\) generated by g. The space of such functions LS-integrable w.r.t. g will be denoted by \(L^1_g(I,\mathbb {R}^d)\) and its norm by

$$\begin{aligned} ||f||_{L^1_g}= \int _I |f(\tau )|dg(\tau ),\quad f\in L^1_g \left( I,\mathbb {R}^d \right) . \end{aligned}$$
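To fix ideas, for a g which is, say, absolutely continuous off finitely many jumps, the measure \(\mu _g\) splits into an absolutely continuous part (with the pointwise derivative of g as density) and atoms of mass \(\Delta ^+g(\tau )\) at the jumps of g. The following minimal numerical sketch (with a hypothetical g having unit drift and a unit jump at \(t=1\); not taken from the references) approximates an LS-integral accordingly.

```python
# Numerical sketch (hypothetical example): the measure mu_g induced by a
# left-continuous non-decreasing g splits into an absolutely continuous part
# (density g') plus atoms of mass Delta^+ g(tau) at the jumps of g.
# Here g(t) = t for t <= 1 and g(t) = t + 1 for t > 1 (unit jump at t = 1).

def ls_integral(f, ac_density, jumps, a, b, n=20_000):
    """Integral of f over [a, b) w.r.t. mu_g = (ac part) + sum of atoms."""
    h = (b - a) / n
    # absolutely continuous part: midpoint Riemann sum of f * g'
    acc = sum(f(a + (k + 0.5) * h) * ac_density(a + (k + 0.5) * h) * h
              for k in range(n))
    # atomic part: each jump point tau contributes f(tau) * Delta^+ g(tau)
    acc += sum(f(tau) * mass for tau, mass in jumps if a <= tau < b)
    return acc

f = lambda t: t
val = ls_integral(f, ac_density=lambda t: 1.0, jumps=[(1.0, 1.0)], a=0.0, b=2.0)
print(round(val, 6))  # continuous part contributes 2, the atom f(1)*1 = 1 -> 3.0
```

For \(f(t)=t\) on \([0,2)\), the absolutely continuous part contributes 2 and the atom at \(t=1\) contributes \(f(1)\cdot \Delta ^+g(1)=1\), so the integral equals 3.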

Let us now recall the notion of differentiability related to Stieltjes type integrals introduced in [30] (motivated by [40]).

Definition 2.1

The derivative with respect to g (or the g-derivative, also called Stieltjes derivative) of a function \(f:I \rightarrow \mathbb {R}^d\) at a point \({\overline{t}}\in I\) is defined by

$$\begin{aligned} f'_g({\overline{t}})= & {} \lim _{t\rightarrow {\overline{t}}}\frac{f(t)-f({\overline{t}})}{g(t)-g({\overline{t}})}\quad \hbox { if}\,\, g\,\, \hbox {is continuous at} \,\, {\overline{t}},\\ f'_g({\overline{t}})= & {} \lim _{t\rightarrow {\overline{t}}+}\frac{f(t)-f({\overline{t}})}{g(t)-g({\overline{t}})}\quad \text{ otherwise }, \end{aligned}$$

whenever the limit exists.

The set

$$\begin{aligned} D_g=\{t\in I: \Delta ^+g(t)>0\} \end{aligned}$$

is the set of discontinuity points of g. Notice that if \(t\in D_g\), then the g-derivative \(f'_g(t)\) is well defined if and only if the right limit \(f(t+)\) exists, and in that case

$$\begin{aligned} f'_g(t)=\frac{f(t+)-f(t)}{g(t+)-g(t)}. \end{aligned}$$

Definition 2.1 has no meaning whenever t belongs to

$$\begin{aligned} C_g=\{t\in I: g \text{ is } \text{ constant } \text{ on } (t-\varepsilon ,t+\varepsilon ) \text{ for } \text{ some } \varepsilon >0\}, \end{aligned}$$

but \(\mu _g(C_g)=0\), see [30].

Moreover, we observe that it is possible to define the g-derivative at the points of \(C_g\) as well, as in [16, Definition 3.7] (in a way which is coherent with the previous definition), thus allowing one to study higher order Stieltjes differential equations.
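As a concrete illustration of Definition 2.1, the following sketch (with a hypothetical g and f chosen only for this example, and a crude jump detector adequate for them) approximates \(f'_g\) by difference quotients: away from \(D_g\) one recovers the classical ratio of derivatives, while at a jump of g the one-sided quotient \((f(t+)-f(t))/\Delta ^+g(t)\) appears.

```python
# Hypothetical illustration of Definition 2.1 with g(t) = t for t <= 1,
# g(t) = t + 1 for t > 1 (left-continuous, unit jump at t = 1), and f(t) = t**2.

def g(t):
    return t if t <= 1 else t + 1

def f(t):
    return t ** 2

def g_derivative(f, g, t, h=1e-7):
    jump = g(t + h) - g(t)  # ~ Delta^+ g(t) when t is a discontinuity of g
    if jump > 1e-3:         # crude detector, sufficient for this example
        # at t in D_g: one-sided quotient (f(t+) - f(t)) / (g(t+) - g(t))
        return (f(t + h) - f(t)) / jump
    # at continuity points of g: two-sided difference quotient
    return (f(t + h) - f(t - h)) / (g(t + h) - g(t - h))

print(round(g_derivative(f, g, 0.5), 4))  # away from the jump: f'(0.5)/g'(0.5) = 1.0
print(round(g_derivative(f, g, 1.0), 4))  # at the jump: (f(1+)-f(1))/Delta^+g(1) = 0.0
```

Since f is continuous at \(t=1\) while g jumps there, the g-derivative at the jump vanishes.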

By elementary properties of the Lebesgue–Stieltjes integral, if \(f\in L^1_g(I,\mathbb {R}^d)\), then the primitive \(\int _{[0,\cdot )} f(\tau )dg(\tau )\) is g-absolutely continuous (see [30]), where a function \(u:I\rightarrow \mathbb {R}^d\) is called g-absolutely continuous if, for every \(\varepsilon >0\), there is \(\delta _{\varepsilon }>0\), such that for any family \(\{(t'_j,t''_j)\}_{j=1}^m\) of non-overlapping subintervals of I

$$\begin{aligned} \sum _{j=1}^m(g(t''_j)-g(t'_j))<\delta _{\varepsilon } \Rightarrow \sum _{j=1}^m |u(t''_j) - u(t'_j)|<\varepsilon . \end{aligned}$$

It was proved in [30, Proposition 5.3] that any g-absolutely continuous function has bounded variation; it is left-continuous on I and continuous at every continuity point of g.

By \(AC_g(I,\mathbb {R}^d)\), we mean the space of g-absolutely continuous maps from I to \(\mathbb {R}^d\) endowed with the (Banach space) norm

$$\begin{aligned} \Vert u\Vert _{AC}=|u(0)|+\int _I |u'_g(\tau )|dg(\tau ). \end{aligned}$$

Remark that g-absolutely continuous functions are obviously uniformly g-continuous, i.e., for every \(\varepsilon >0\), there is \(\delta _{\varepsilon }>0\), such that for \(t',t''\in I\)

$$\begin{aligned} |g(t')-g(t'')|< \delta _{\varepsilon }\quad \textrm{implies }\quad |u(t')-u(t'')|<\varepsilon . \end{aligned}$$

Moreover, if a function defined on a bounded interval is uniformly g-continuous, it is bounded ([25, Lemma 2]).

The Stieltjes integrals and the Stieltjes derivative are tightly connected by Fundamental Theorems of Calculus.

Theorem 2.2

(a) ([30, Theorem 2.4]) Let \(T>0\). If \(f:[0,T]\rightarrow \mathbb {R}^d\) is LS-integrable with respect to g, then

$$\begin{aligned} F(t)=\int _{[0,t)} f(s)\,dg(s),\quad t\in [0,T] \end{aligned}$$

defines a map that is g-differentiable \(\mu _g\)-a.e., with the property that \(F_g'(t)=f(t)\), \(\mu _g\)-a.e.

(b) ([30, Theorem 5.4], see also [18, Theorem 5.1]) Let \(T>0\). If \(F:[0,T] \rightarrow \mathbb {R}^d\) is g-absolutely continuous, then \(F_g'\) exists \(\mu _g\)-a.e., and

$$\begin{aligned} F(t)=F(0)+\int _{[0,t)} F_g'(s) \,dg(s)\quad \text{ for } \text{ every } t\in [0,T]. \end{aligned}$$

We recall now some basic notions of set-valued analysis ([2, 3, 6], see also [11]). Let X and Y be two Banach spaces; by \(\mathcal {P}(X)\), we denote the space of non-empty closed subsets of \((X,\Vert \cdot \Vert _X)\) and by \(\mathcal {P}_{k}(X)\) its subspace consisting of the compact subsets; the latter is a complete metric space when endowed with the Hausdorff–Pompeiu distance D, that is

$$\begin{aligned} D(A,B)=\max (e(A,B),e(B,A)), \end{aligned}$$

where the excess \(e(A,B)\) of \(A\in \mathcal {P}_{k}(X)\) over \(B\in \mathcal {P}_{k}(X)\) is defined as

$$\begin{aligned} e(A,B)=\sup \{ d(a,B):a\in A \}=\sup \{ \inf _{b\in B}\Vert a-b\Vert _X: a\in A\}. \end{aligned}$$

For \(A\in \mathcal {P}_{k}(X)\), let \({\overline{co}}A\) denote its closed convex hull and \(|A|=D(A,\{0\})\).
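For finite subsets of \(\mathbb {R}^d\), the excess and the Hausdorff–Pompeiu distance can be computed directly from the definitions; the small sketch below (with hypothetical two-point sets) does exactly that, including \(|A|=D(A,\{0\})\).

```python
# Sketch for finite subsets of R^2 (hypothetical data): the excess e(A, B) and
# the Hausdorff-Pompeiu distance D(A, B) = max(e(A, B), e(B, A)); |A| = D(A, {0}).
import math

def excess(A, B):
    # e(A, B) = sup_{a in A} inf_{b in B} ||a - b||
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    return max(excess(A, B), excess(B, A))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (0.0, 3.0)]
print(excess(A, B))                # 1.0: the point (1,0) is at distance 1 from B
print(hausdorff(A, B))             # 3.0 = max(e(A,B), e(B,A)) = max(1, 3)
print(hausdorff(A, [(0.0, 0.0)]))  # 1.0 = |A|, the largest norm of a point of A
```

Note that the two excesses need not coincide, which is why the distance takes their maximum.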

A multifunction \(\Gamma : X \rightarrow \mathcal {P}(Y)\) is lower semicontinuous (l.s.c.) if for each closed set \(C \subseteq Y\)

$$\begin{aligned} \Gamma ^+(C)=\{x\in X: \Gamma (x)\subset C \} \end{aligned}$$

is closed in X [3]. A function \(\gamma : X \rightarrow Y\) is called a selection of a multifunction \(\Gamma : X \rightarrow \mathcal {P}(Y)\) if \(\gamma (x) \in \Gamma (x)\) for every \(x\in X\).

In [25, Lemma 2], a g-uniformly continuous function is shown to be bounded (for an interesting characterization of g-uniform continuity, see [17]). Following the idea of proof of [25, Lemma 2], we obtain the boundedness of a jointly uniformly continuous multifunction.

Lemma 2.3

Let \(T>0\), \(g:[0,T]\rightarrow \mathbb {R}\) be non-decreasing and left-continuous and \(Q\subset \mathbb {R}^d\) be compact. If \(F:[0,T]\times Q \rightarrow \mathcal {P}_{k}(\mathbb {R}^d)\) is uniformly continuous (in \((t,x)\)) in the product topology of \(\tau _g\) with the usual topology of \(\mathbb {R}^d\) (both generated by pseudometrics), then it is bounded.

Proof

One can find \(\delta _1>0\), such that

$$\begin{aligned} s,t{} & {} \in [0,T],\; |g(t)-g(s)|<\delta _{1} \; \textrm{and} \; x,y\in Q,\; |x-y|<\delta _1 \Rightarrow \\{} & {} \quad \; D(F(t,x),F(s,y))<1. \end{aligned}$$

It is easy to see that there is only a finite number of discontinuity points \(\tau \) of g where

$$\begin{aligned} g(\tau +)-g(\tau )\ge \delta _1; \end{aligned}$$

we denote them by \(\tau _i,i=1,\ldots ,N_1\). To simplify the writing, let \(\tau _0=0\) and \(\tau _{N_1+1}=T\).

Each interval \((\tau _i,\tau _{i+1}), i=0,\ldots ,N_1\) can be split into \(N^i\) subintervals \(I^i_j,j=1,\ldots ,N^i\) on which \(|g(t)-g(s)|<\delta _{1}\). Choose a point \({\overline{t}}^i_{j}\in I^i_j,j=1,\ldots ,N^i\) in every such subinterval.

Also, the compact Q can be covered by a finite number of balls \(B(x_l,\delta _1), l=1,\ldots ,N_2\). Then, for every \(t\in [0,T]{\setminus } \{\tau _i;i=1,\ldots ,N_1 \}\), one can find \(i\in \{0,\ldots ,N_1 \}\), \(j\in \{ 1,\ldots ,N^i \}\), such that \(t\in I^i_j\) and for every \(x\in Q\), one can find \(l\in \{1,\ldots ,N_2\}\), such that \(x\in B(x_l,\delta _1)\), whence

$$\begin{aligned} |F(t,x)|\le D(F(t,x),F({\overline{t}}^i_{j},x_l))+|F({\overline{t}}^i_{j},x_l)|\le 1+|F({\overline{t}}^i_{j},x_l)|. \end{aligned}$$

On the other hand, for each \(i\in \{1,\ldots ,N_1\}\)

$$\begin{aligned} |F(\tau _i,x)|\le D(F(\tau _i,x),F(\tau _i,x_l))+|F(\tau _i,x_l)|\le 1+|F(\tau _i,x_l)|. \end{aligned}$$

In conclusion, if we denote by

$$\begin{aligned} M= & {} \max \left( \{|F({\overline{t}}^i_{j},x_l)|; 0\le i\le N_1, 1\le j\le N^i,1\le l\le N_2 \} \right. \\{} & {} \left. \cup \{|F(\tau _i,x_l)|;1\le i\le N_1,1\le l\le N_2 \}\right) , \end{aligned}$$

any \((t,x)\in [0,T]\times Q\) satisfies the inequality

$$\begin{aligned} |F(t,x)|\le 1+M. \end{aligned}$$

\(\square \)

3 Main Results

The first two results below allow us to foresee some of the difficulties we are facing when the Stieltjes derivative is involved.

Proposition 3.1

([24, Proposition 2.5]) Let \(T>0\) and \(g:[0,T]\rightarrow \mathbb {R}\) be non-decreasing and left-continuous. If \(t \in [0,T]\) and \(f_1\) and \(f_2\) are two real-valued functions defined in a neighborhood of t, g-differentiable at t, then the product \(f_1\cdot f_2\) is g-differentiable as well at this point and

$$\begin{aligned} (f_1 f_2)'_g(t)=(f_1)'_g(t) f_2 (t)+f_1(t)(f_2)'_g(t)+(f_1)'_g(t) (f_2)'_g(t)\cdot \Delta ^+g(t). \end{aligned}$$
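The correction term \((f_1)'_g(t) (f_2)'_g(t)\cdot \Delta ^+g(t)\) is what distinguishes this product rule from the classical one. At a point of \(D_g\), where all g-derivatives reduce to jump quotients, the formula becomes an algebraic identity, as the following numerical check (with arbitrary hypothetical values) confirms.

```python
# Numerical check of Proposition 3.1 at a discontinuity point of g (hypothetical
# values): at t in D_g each g-derivative is a jump quotient, and the extra term
# (f1)'_g (f2)'_g * Delta^+ g(t) is exactly what makes the formula an identity.

f1_t, f1_tp = 2.0, 5.0   # f1(t), f1(t+)  (arbitrary values)
f2_t, f2_tp = -1.0, 4.0  # f2(t), f2(t+)
dg = 0.5                 # Delta^+ g(t) > 0

d1 = (f1_tp - f1_t) / dg  # (f1)'_g(t)
d2 = (f2_tp - f2_t) / dg  # (f2)'_g(t)

lhs = (f1_tp * f2_tp - f1_t * f2_t) / dg    # (f1 * f2)'_g(t), jump quotient
rhs = d1 * f2_t + f1_t * d2 + d1 * d2 * dg  # right-hand side of Proposition 3.1
print(lhs, rhs)  # both equal 44.0
```

Dropping the correction term would give \(d_1 f_2(t)+f_1(t)d_2=14\ne 44\), so the classical Leibniz rule fails at jump points.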

Lemma 3.2

Let \(T>0\) and \(g:[0,T]\rightarrow \mathbb {R}\) be non-decreasing and left-continuous. Then, for every non-negative function \(f:[0,T]\rightarrow \mathbb {R}\) g-differentiable at \(t\in [0,T]\) and for every \(n\ge 1\)

$$\begin{aligned} f^{n-1}(t) f'_g(t)\le \left( \frac{f^n}{n}\right) '_g(t). \end{aligned}$$

Proof

We prove the assertion by mathematical induction. By Proposition 3.1

$$\begin{aligned} (f^2)'_g(t)=2f(t)f'_g(t)+[f'_g(t)]^2\cdot \Delta ^+g(t)\ge 2f(t)f'_g(t), \end{aligned}$$

so the assertion is valid for \(n=2\) (for \(n=1\), it holds trivially with equality). Suppose now

$$\begin{aligned} f^{n-1}(t) f'_g(t)\le \left( \frac{f^n}{n}\right) '_g(t), \end{aligned}$$

and let us prove that \(f^{n}(t) f'_g(t)\le \left( \frac{f^{n+1}}{n+1}\right) '_g(t)\). Again, by Proposition 3.1

$$\begin{aligned} (f^{n+1})'_g(t)= & {} (f^n\cdot f)'_g(t)=(f^n)'_g(t)f(t)+f^n(t)f'_g(t)+(f^n)'_g(t)f'_g(t)\Delta ^+g(t)\\\ge & {} (f^n)'_g(t)f(t)+f^n(t)f'_g(t)+n f^{n-1}(t)(f'_g(t) )^2\Delta ^+g(t)\\\ge & {} (f^n)'_g(t)f(t)+f^n(t)f'_g(t)\\\ge & {} nf^{n-1}(t) f'_g(t)f(t)+f^n(t)f'_g(t)\\= & {} (n+1)f^{n}(t) f'_g(t). \end{aligned}$$

\(\square \)
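At a jump point, the inequality of Lemma 3.2 reduces to \(f^{n-1}(t)\left( f(t+)-f(t)\right) \le \left( f(t+)^n-f(t)^n\right) /n\) for non-negative f, i.e., to the convexity of \(x\mapsto x^n\); the following spot-check (with hypothetical values) verifies it for increasing as well as decreasing jumps.

```python
# Spot-check of Lemma 3.2 at a jump point (hypothetical values): with
# f'_g(t) = (f(t+) - f(t)) / Delta^+ g(t), verify
#     f^{n-1}(t) f'_g(t) <= (f^n / n)'_g(t)   for f >= 0 and n >= 1.

def check(ft, ftp, dg, n):
    d = (ftp - ft) / dg                    # f'_g(t), jump quotient
    lhs = ft ** (n - 1) * d                # f^{n-1}(t) f'_g(t)
    rhs = (ftp ** n - ft ** n) / (n * dg)  # (f^n / n)'_g(t)
    return lhs <= rhs + 1e-12              # tiny slack for rounding

# increasing and decreasing jumps, several exponents
ok = all(check(ft, ftp, 0.5, n)
         for ft, ftp in [(1.0, 2.0), (2.0, 1.0), (0.0, 3.0), (3.0, 0.0)]
         for n in range(1, 6))
print(ok)  # True
```

Note that the inequality also holds when the jump of f is downward, where both sides are negative.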

Solutions for multivalued Stieltjes differential problems are understood in the following sense:

Definition 3.3

Let \(F:I\times \mathbb {R}^d\rightarrow \mathcal {P}_{k}(\mathbb {R}^d)\) and \(x_0\in \mathbb {R}^d\) be given. A function \(x:I\rightarrow \mathbb {R}^d\) is a solution of (1) if it is g-absolutely continuous,

$$\begin{aligned} x'_g(t)\in F(t,x(t)), \; \mu _g \;\text{-a.e. }\;t\in I, \end{aligned}$$

and \(x(0)=x_0\).

Existence results on a bounded interval I for such problems are already known (e.g., [7] for a convex-valued right-hand side or [12, 25, 27] in the not necessarily convex setting).

We recall the following

Proposition 3.4

([25, Corollary 10]) If \(F:[0,T]\times \mathbb {R}^d\rightarrow \mathcal {P}_{k}(\mathbb {R}^d)\) is measurable w.r.t. its first variable and Lipschitz w.r.t. the second one, then

$$\begin{aligned} z'_g(t)\in F(t,z(t)), \quad z(0)=z_0 \end{aligned}$$

has solutions on [0, T].

In the case \(I=[0,\infty )\), we can easily derive the following.

Proposition 3.5

Let \(F:[0,\infty )\times \mathbb {R}^d\rightarrow \mathcal {P}_{k}(\mathbb {R}^d)\) be measurable w.r.t. its first variable and locally Lipschitz w.r.t. the second one, i.e., there exists \(L:[0,\infty )\rightarrow \mathbb {R}\) locally LS-integrable w.r.t. g, such that

$$\begin{aligned} D(F(t,x),F(t,y))\le L(t) |x-y|,\quad \mathrm{for\; every\;} x,y\in \mathbb {R}^d. \end{aligned}$$

Then, (1) has solutions on \([0,\infty )\).

Proof

Consider an increasing sequence \((T_k)_{k\in \mathbb {N}}\) of continuity points of g tending to \(\infty \) with \(T_0=0\).

We start by applying Proposition 3.4 to get a g-absolutely continuous solution \(x_0\) on \([T_{0},T_{1}]\) for the problem

$$\begin{aligned} x'_g(t)\in F(t,x(t)), \; \mu _g \;\text{-a.e. }\;t\in [T_{0},T_{1}], \qquad x(0)=x_0. \end{aligned}$$

Then, we apply again [25, Corollary 10] to get a g-absolutely continuous solution \(x_1\) on \([T_{1},T_{2}]\) for the problem

$$\begin{aligned} x'_g(t)\in F(t,x(t)), \; \mu _g \;\text{-a.e. }\;t\in [T_{1},T_{2}], \qquad x(T_1)=x_0(T_1) \end{aligned}$$

and we repeat this procedure for each \(k\ge 2\) in order to get a g-absolutely continuous solution \(x_k\) on \([T_k,T_{k+1}]\) for

$$\begin{aligned} x'_g(t)\in F(t,x(t)), \; \mu _g \;\text{-a.e. }\;t\in [T_{k},T_{k+1}], \qquad x(T_k)=x_{k-1}(T_k). \end{aligned}$$

Now, we concatenate these solutions to get a solution on \([0,\infty )\). More precisely, the function

$$\begin{aligned} x(t)=\left\{ \begin{array}{lllll} x_0(t), &{} t \in [T_0,T_1), \\ x_1(t), &{} t \in [T_1,T_2), \\ ...\\ x_k(t), &{} t \in [T_k,T_{k+1}) \\ ... \end{array} \right. \end{aligned}$$

is a solution of (1), since its g-derivative satisfies the inclusion at \(\mu _g\)-almost every point in \([0,\infty ){\setminus } \{T_k; k\ge 1\}\) and the exceptional set is \(\mu _g\)-null due to the fact that any \(T_k\) is a continuity point of g. \(\square \)
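The concatenation argument above has a simple computational counterpart. The sketch below is entirely hypothetical (a single-valued right-hand side f standing in for a selection of F, and a g with a jump and a constancy interval): an explicit Euler-type scheme driven by the increments of g, so that jumps of g produce instantaneous state jumps while intervals where g is constant freeze the dynamics, as described in the Introduction.

```python
# Minimal explicit Euler-type scheme for the single-valued problem x'_g = f(t, x)
# (a hypothetical sketch, not the paper's construction). The update uses the
# increments of g: jumps of g yield instantaneous state jumps, and intervals
# where g is constant leave the state unchanged.

def g(t):
    # unit drift, frozen on [2, 3], resumed afterwards; unit jump at t = 1
    base = min(t, 2.0) + max(0.0, t - 3.0)
    return base + (1.0 if t > 1.0 else 0.0)  # left-continuous jump

def f(t, x):
    return -x  # hypothetical right-hand side (plays the role of a selection of F)

def g_euler(f, g, x0, T, n=40_000):
    ts = [k * T / n for k in range(n + 1)]
    x, xs = x0, [x0]
    for k in range(n):
        x = x + f(ts[k], x) * (g(ts[k + 1]) - g(ts[k]))  # x_{k+1} = x_k + f * dg
        xs.append(x)
    return ts, xs

ts, xs = g_euler(f, g, x0=1.0, T=4.0)
i2, i3 = ts.index(2.0), ts.index(3.0)
# on [2, 3] g is constant, so the approximate state does not move there:
print(xs[i3] == xs[i2])  # True: stationary interval
```

On the step crossing \(t=1\), the scheme applies the whole mass \(\Delta ^+g(1)\) at once, which is the discrete analog of the jump condition \(x(t+)-x(t)=f(t,x(t))\Delta ^+g(t)\).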

We also recall the following technical result:

Proposition 3.6

([9, Proposition 2.2]) If \(G:\mathbb {R}^d\rightarrow \mathcal {P}(L^1([0,T],\mathbb {R}^d))\) with non-empty, closed and decomposable values is l.s.c., \(\phi :\mathbb {R}^d\rightarrow L^1([0,T],\mathbb {R}^d)\) and \(\psi :\mathbb {R}^d\rightarrow L^1([0,T],\mathbb {R})\) are continuous, and for every \(s\in \mathbb {R}^d\), the set

$$\begin{aligned} H(s)=\overline{\{ u\in G(s); |u(t)-\phi (s)(t)|\le \psi (s)(t) \;a.e.\}} \end{aligned}$$

is non-empty, then the map \(H:\mathbb {R}^d\rightarrow \mathcal {P}(L^1([0,T],\mathbb {R}^d))\) is l.s.c., and therefore, it admits continuous selections.

It can easily be shown that the previous proposition remains valid for the Stieltjes measure \(\mu _g\). We state it below in this setting.

Proposition 3.7

Let \(T>0\). If \(G:\mathbb {R}^d\rightarrow \mathcal {P}(L^1_g([0,T],\mathbb {R}^d))\) with non-empty, closed, and decomposable values is l.s.c., \(\phi :\mathbb {R}^d\rightarrow L^1_g([0,T],\mathbb {R}^d)\) and \(\psi :\mathbb {R}^d\rightarrow L^1_g([0,T],\mathbb {R})\) are continuous, and for every \(s\in \mathbb {R}^d\), the set

$$\begin{aligned} H(s)=\overline{\{ u\in G(s); |u(t)-\phi (s)(t)|\le \psi (s)(t), \;\mu _g-a.e.\}} \end{aligned}$$

is non-empty, then the map \(H:\mathbb {R}^d\rightarrow \mathcal {P}(L^1_g([0,T],\mathbb {R}^d))\) is l.s.c., and therefore, it admits continuous selections.

We present next a result on continuous selection of trajectory which generalizes the classical [9, Theorem 3.1] to the setting of g-differential inclusions.

Theorem 3.8

Let \(F:[0,T]\times \mathbb {R}^d \times \mathbb {R}^d \rightarrow \mathcal {P}(\mathbb {R}^d)\) satisfy the following conditions:

\((\chi 1)\) F is \(\mathcal {L}_g([0,T]) \otimes \mathcal {B}(\mathbb {R}^d\times \mathbb {R}^d)\)-measurable;

\((\chi 2)\) there exists a map \(s\mapsto L(\cdot ,s)\) continuous from \(\mathbb {R}^d\) to \(L^1_g([0,T],\mathbb {R})\), such that

$$\begin{aligned} D(F(t,x,s),F(t,y,s))\le L(t,s)|x-y|,\mu _g-a.e. \ t\in [0,T], \forall \ s,x,y\in \mathbb {R}^d; \end{aligned}$$

\((\chi 3)\) for each (tx), \(F(t,x,\cdot )\) is l.s.c;

\((\chi 4)\) for any continuous function \(s\mapsto y(\cdot ,s)\) from \(\mathbb {R}^d\) to \(AC_g([0,T],\mathbb {R}^d)\), there is a continuous map \(\beta =\beta _y:\mathbb {R}^d\rightarrow L^1_g([0,T],\mathbb {R})\) with the property that

$$\begin{aligned} d(y'_g(t,s),F(t,y(t,s),s))\le \beta _y(s)(t),\mu _g-a.e. \ t\in [0,T], \forall s\in \mathbb {R}^d. \end{aligned}$$
(4)

Then, for any continuous \(s\mapsto y(\cdot ,s)\) from \(\mathbb {R}^d\) to \(AC_g([0,T],\mathbb {R}^d)\), every \(\beta _y\) satisfying (4) and \(\varepsilon >0\), one can find a function \(x:[0,T]\times \mathbb {R}^d\rightarrow \mathbb {R}^d\), such that:

(a) the map \(s\mapsto x(\cdot , s)\) is continuous from \(\mathbb {R}^d\) to \(AC_g([0,T],\mathbb {R}^d)\);

(b) for each \(s \in \mathbb {R}^d\), \(t\mapsto x(t,s)\) is a solution of

$$\begin{aligned} \left\{ \begin{array}{l} x'_g(t,s) \in F(t,x(t,s),s),\; \mu _g-a.e.\; t\in [0,T]\\ x(0,s)=s; \end{array} \right. \end{aligned}$$

(c) for every \(s\in \mathbb {R}^d\) and \(t\in [0,T]\)

$$\begin{aligned} |{y}(t,s)-x(t,s)|{} & {} \le e^{\int _{[0,t)}L(\tau ,s)dg(\tau )}(\varepsilon +|y(0,s)-s|)\\{} & {} \quad +\int _{[0,t)}{\beta }(s)(\tau )e^{\int _{[\tau ,t)}L(\xi ,s)dg(\xi )}dg(\tau ). \end{aligned}$$

Note that, by hypothesis \((\chi 2)\), \((\chi 4)\) is equivalent to:

\((\chi 4')\) there is \({\overline{\beta }}:\mathbb {R}^d\rightarrow L^1_g([0,T],\mathbb {R}_+)\) continuous, such that

$$\begin{aligned} d(0,F(t,0,s))\le {\overline{\beta }}(s)(t),\mu _g-a.e. \ t\in [0,T], \ \forall s \in \mathbb {R}^d. \end{aligned}$$

Proof

The proof is inspired by that of [9, Theorem 3.1]. By a change of variable if necessary (see [9, page 325]), we may suppose, for the sake of simplicity, that \(y(t,s)=0\) and look for solutions with \(x(0)=0\).

Let \((\varepsilon _n)_n\) be a strictly increasing sequence of positive numbers convergent to \(\varepsilon \) (suppose, without loss of generality, that \(\varepsilon _0 g(T)<\varepsilon _1\)) and let \(\beta _n:\mathbb {R}^d\rightarrow L^1_g([0,T],\mathbb {R})\) be given for each \(n\ge 1\) by

$$\begin{aligned} \beta _n(s)(t)= & {} \int _{[0,t)}{\beta }(s)(u)\frac{\left( \int _{[0,t)}L(\tau ,s)dg(\tau )-\int _{[0,u)}L(\tau ,s)dg(\tau ) \right) ^{n-1}}{(n-1)!}dg(u)\\ & {} +\frac{\left( \int _{[0,t)}L(\tau ,s)dg(\tau ) \right) ^{n-1}}{(n-1)!}\varepsilon _n, \end{aligned}$$

while

$$\begin{aligned} \beta _0(s)(t)=\frac{{\beta }(s)(t)+\varepsilon _0}{L(t,s)}. \end{aligned}$$

Step I. We construct a sequence of approximate solutions \(x_n:[0,T]\times \mathbb {R}^d\rightarrow \mathbb {R}^d\), \(n\in \mathbb {N}\), such that for every \(n\ge 0\), \(x_n(0,s)=0\) and:

(i) \(s\mapsto x_n(\cdot ,s)\) is continuous from \(\mathbb {R}^d\) to \(AC_g([0,T],\mathbb {R}^d)\);

(ii) \((x_{n+1})'_g(t,s)\in F(t,x_n(t,s),s)\), \(\mu _g\)-a.e.;

(iii) \(|(x_{n+1})'_g(t,s)-(x_n)'_g(t,s)|\le L(t,s)\cdot \beta _n(s)(t)\), \(\mu _g\)-a.e.

Let \(x_0:[0,T]\times \mathbb {R}^d\rightarrow \mathbb {R}^d\) be given by

$$\begin{aligned} x_0(t,s)=0. \end{aligned}$$

Obviously, \(x_0(0,s)=0\) and \(s\mapsto x_0(\cdot ,s)\) is continuous from \(\mathbb {R}^d\) to \(AC_g([0,T],\mathbb {R}^d)\). Define \(G_0, H_0:\mathbb {R}^d\rightarrow \mathcal {P}(L^1_g([0,T],\mathbb {R}^d))\) by

$$\begin{aligned} G_0(s)=\{v\in L^1_g([0,T],\mathbb {R}^d); v(t)\in F(t,x_0(t,s),s),\; \mu _g-a.e.\}, \end{aligned}$$

and similarly

$$\begin{aligned} H_0(s)=\overline{\{ v\in G_0(s);|v(t)-(x_0)'_g(t,s)|< {\beta }(s)(t)+\varepsilon _0, \; \mu _g-a.e.\}}. \end{aligned}$$

As a consequence of hypothesis \((\chi 4)\), the multifunction \(G_0\) has non-empty values.

Indeed, one can find a measurable selection \(v_1(\cdot ,s)\) of \(F(\cdot ,0,s)\), such that \(|v_1(t,s)|\le {\beta }(s)(t)\) for every \(t\in [0,T]\). Afterward, let \(v(\cdot ,s)\) be a measurable selection of \(F(\cdot ,x_0(\cdot ,s),s)\), such that for every \(t\in [0,T]\)

$$\begin{aligned} |v_1(t,s)-v(t,s)|=d(v_1(t,s),F(t,x_0(t,s),s)), \end{aligned}$$

and note that

$$\begin{aligned} |v(t,s)|\le & {} |v_1(t,s)|+d(v_1(t,s),F(t,x_0(t,s),s))\\\le & {} {\beta }(s)(t)+D(F(t,0,s),F(t,x_0(t,s),s))\le {\beta }(s)(t)+L(t,s)|x_0(t,s)|, \end{aligned}$$

which is LS-integrable w.r.t. g (as a function of t).

Moreover, as in [9, Proposition 2.1], it follows that condition \((\chi 4')\) implies that the multifunction \(G_0\) is l.s.c.

Besides, \(H_0\) has also non-empty values, because

$$\begin{aligned} d((x_0)'_g(t,s),F(t,x_0(t,s),s))\le {\beta }(s)(t),\; \forall t\in [0,T]; \end{aligned}$$

thus, by Proposition 3.7, \(H_0\) has a continuous selection, i.e., \(h_0:\mathbb {R}^d\rightarrow L^1_g([0,T],\mathbb {R}^d)\) satisfying

$$\begin{aligned} h_0(s)(t)\in F(t,x_0(t,s),s)\;\mathrm{and\;}\; |h_0(s)(t)-(x_0)'_g(t,s)|\le {\beta }(s)(t)+\varepsilon _0,\; \mu _g-a.e. \end{aligned}$$

Next, consider \(x_1(t,s)=\int _{[0,t)}h_0(s)(\tau )dg(\tau )\) for which \(x_1(0,s)=0\) and \((x_1)'_g(t,s)\in F(t,x_0(t,s),s)\) for \(\mu _g\)-almost every t hold; moreover, due to Theorem 2.2

$$\begin{aligned} |x_1(t,s)-x_0(t,s)|= & {} \left| \int _{[0,t)}h_0(s)(\tau )dg(\tau )-\int _{[0,t)}(x_0)'_g(\tau ,s)dg(\tau )\right| \\\le & {} \int _{[0,t)}|h_0(s)(\tau )-(x_0)'_g(\tau ,s)|dg(\tau )\\\le & {} \int _{[0,t)}\left( {\beta }(s)(\tau )+\varepsilon _0\right) dg(\tau )<\beta _1(s)(t) \end{aligned}$$

and

$$\begin{aligned} |(x_1)'_g(t,s)-(x_0)'_g(t,s)|\le {\beta }(s)(t)+\varepsilon _0=L(t,s)\beta _0(s)(t). \end{aligned}$$

Suppose we have defined \(x_0,x_1,\ldots ,x_n\) with the properties (i)–(iii). Then

$$\begin{aligned} d((x_n)'_g(t,s),F(t,x_n(t,s),s))\le & {} D(F(t,x_{n-1}(t,s),s),F(t,x_n(t,s),s))\nonumber \\\le & {} L(t,s) |x_n(t,s)-x_{n-1}(t,s)|. \end{aligned}$$
(5)

From (iii), it follows using Theorem 2.2 and interchanging the order of integration that:

$$\begin{aligned} \begin{array}{ll} |x_n(t,s)-x_{n-1}(t,s)-(x_n(0,s)-x_{n-1}(0,s))|\\ \\ \le \int _{[0,t)}L(\tau ,s)\beta _{n-1}(s)(\tau )dg(\tau )\\ =\int _{[0,t)}L(\tau ,s)\left( \int _{[0,\tau )}{\beta }(s)(u)\frac{\left( \int _{[0,\tau )}L(\xi ,s)dg(\xi )-\int _{[0,u)}L(\xi ,s)dg(\xi ) \right) ^{n-2}}{(n-2)!}dg(u)\right) dg(\tau )\\ \quad +\int _{[0,t)}L(\tau ,s)\frac{\left( \int _{[0,\tau )}L(\xi ,s)dg(\xi ) \right) ^{n-2}}{(n-2)!}\varepsilon _{n-1}dg(\tau )\\ = \int _{[0,t)}{\beta }(s)(u) \left( \int _{[u,t)}L(\tau ,s)\frac{\left( \int _{[0,\tau )}L(\xi ,s)dg(\xi )-\int _{[0,u)}L(\xi ,s)dg(\xi ) \right) ^{n-2}}{(n-2)!}dg(\tau ) \right) dg(u)\\ \quad +\int _{[0,t)}L(\tau ,s)\frac{\left( \int _{[0,\tau )}L(\xi ,s)dg(\xi ) \right) ^{n-2}}{(n-2)!}\varepsilon _{n-1}dg(\tau ). \end{array} \end{aligned}$$

By Lemma 3.2, we get

$$\begin{aligned}{} & {} |x_n(t,s)-x_{n-1}(t,s)-(x_n(0,s)-x_{n-1}(0,s))| \nonumber \\ {}{} & {} \le \int _{[0,t)}{\beta }(s)(u) \left( \int _{[u,t)}\left( \frac{\left( \int _{[u,\cdot )}L(\xi ,s)dg(\xi )\right) ^{n-1}}{(n-1)!}\right) '_g(\tau )dg(\tau ) \right) dg(u)\nonumber \\{} & {} \quad + \int _{[0,t)} \left( \frac{\left( \int _{[0,\cdot )}L(\xi ,s)dg(\xi )\right) ^{n-1}}{(n-1)!}\right) '_g(\tau )\cdot \varepsilon _{n-1} dg(\tau ), \end{aligned}$$

and by Theorem 2.2

$$\begin{aligned}{} & {} |x_n(t,s)-x_{n-1}(t,s)| \nonumber \\ {}{} & {} \le \int _{[0,t)}{\beta }(s)(u)\frac{\left( \int _{[0,t)}L(\xi ,s)dg(\xi )-\int _{[0,u)}L(\xi ,s)dg(\xi ) \right) ^{n-1}}{(n-1)!}dg(u) \nonumber \\{} & {} \quad +\frac{1}{(n-1)!}\left( \int _{[0,t)}L(\tau ,s)dg(\tau ) \right) ^{n-1} \cdot \varepsilon _{n-1} \nonumber \\ {}{} & {} < \beta _n(s)(t). \end{aligned}$$
(6)

Therefore, (5) implies that

$$\begin{aligned} d((x_n)'_g(t,s),F(t,x_n(t,s),s))< L(t,s)\cdot \beta _n(s)(t),\; \mu _g-a.e. \end{aligned}$$

To define \(x_{n+1}\), consider \(G_n, H_n:\mathbb {R}^d\rightarrow \mathcal {P}(L^1_g([0,T],\mathbb {R}^d))\) given by

$$\begin{aligned} G_n(s)=\{v\in L^1_g([0,T],\mathbb {R}^d); v(t)\in F(t,x_n(t,s),s),\; \mu _g-a.e.\} \end{aligned}$$

and

$$\begin{aligned} H_n(s)=\overline{\{ v\in G_n(s);|v(t)-(x_n)'_g(t,s)|< L(t,s)\beta _n(s)(t), \; \mu _g-a.e.\}}. \end{aligned}$$

For the same reasons as before, \(G_n\) is non-empty valued, l.s.c. and decomposable and \(H_n\) is non-empty valued; therefore, Proposition 3.7 yields that \(H_n\) has a continuous selection, i.e., \(h_n:\mathbb {R}^d\rightarrow L^1_g([0,T],\mathbb {R}^d)\) satisfying

$$\begin{aligned} h_n(s)(t){} & {} \in F(t,x_n(t,s),s)\;\mathrm{and\;} |h_n(s)(t)-(x_n)'_g(t,s)|\le L(t,s)\beta _n(s)(t),\;\nonumber \\{} & {} \quad \mu _g-a.e. \end{aligned}$$
(7)

Let

$$\begin{aligned} x_{n+1}(t,s)=\int _{[0,t)}h_n(s)(\tau )dg(\tau ). \end{aligned}$$

It satisfies the conditions (i)–(iii); besides, by (7)

$$\begin{aligned} \begin{array}{ll} \Vert x_{n+1}(\cdot ,s)-x_n(\cdot ,s)\Vert _{AC}\\ \\ =|x_{n+1}(0,s)-x_n(0,s)|+\int _{[0,T]}|(x_{n+1})'_g(\tau ,s)-(x_n)'_g(\tau ,s)|dg(\tau )\\ \\ \le \int _{[0,T]}L(\tau ,s)\beta _n(s)(\tau )dg(\tau )\\ = \int _{[0,T]}L(\tau ,s)\left( \int _{[0,\tau )}{\beta }(s)(u)\frac{\left( \int _{[0,\tau )}L(\xi ,s)dg(\xi )-\int _{[0,u)}L(\xi ,s)dg(\xi ) \right) ^{n-1}}{(n-1)!}dg(u)\right) dg(\tau )\\ \quad +\int _{[0,T]}L(\tau ,s)\frac{\left( \int _{[0,\tau )}L(\xi ,s)dg(\xi ) \right) ^{n-1}}{(n-1)!}\varepsilon _n dg(\tau ) \end{array} \end{aligned}$$

whence, again by interchanging the order of integration and using Lemma 3.2

$$\begin{aligned}{} & {} \Vert x_{n+1}(\cdot ,s)-x_n(\cdot ,s)\Vert _{AC}\\{} & {} \le \int _{[0,T]}{\beta }(s)(\tau )\frac{\left( \int _{[0,T)}L(\xi ,s)dg(\xi )-\int _{[0,\tau )}L(\xi ,s)dg(\xi ) \right) ^{n}}{n!}dg(\tau )\\{} & {} \quad +\frac{\left( \int _{[0,T)}L(\tau ,s)dg(\tau ) \right) ^{n}}{n!}\varepsilon _{n}\\{} & {} <\int _{[0,T]}{\beta }(s)(\tau )\frac{\left\| L(\cdot ,s) \right\| _{L^1_g}^{n}}{n!}dg(\tau )+\frac{\left\| L(\cdot ,s) \right\| _{L^1_g}^{n}}{n!}\varepsilon \\{} & {} =\frac{\left\| L(\cdot ,s) \right\| _{L^1_g}^{n}}{n!}(\Vert {\beta }(s)\Vert _{L^1_g}+\varepsilon ), \end{aligned}$$

so the sequence \((x_n(\cdot ,s))_n\) is Cauchy in \(\Vert \cdot \Vert _{AC}\); this holds uniformly with respect to s on a neighborhood of each point \(s_0\), due to the continuity assumptions on L and \(\beta \).

Step II. Let us define \(x:[0,T]\times \mathbb {R}^d\rightarrow \mathbb {R}^d\) by

$$\begin{aligned} x(t,s)=\lim _{n\rightarrow \infty }x_n(t,s). \end{aligned}$$

It has the property that \(s\mapsto x(\cdot ,s)\) is continuous from \(\mathbb {R}^d\) to \(AC_g([0,T],\mathbb {R}^d)\).

Let us now check that \(t\mapsto x(t,s)\) is a solution of

$$\begin{aligned} \left\{ \begin{array}{l} x'_g(t,s) \in F(t,x(t,s),s),\; \mu _g-a.e.\; t\in [0,T]\\ x(0,s)=0. \end{array} \right. \end{aligned}$$

Remark that

$$\begin{aligned} d((x_{n+1})'_g(t,s),F(t,x(t,s),s))= & {} d(h_n(s)(t),F(t,x(t,s),s)) \nonumber \\\le & {} D(F(t,x_n(t,s),s),F(t,x(t,s),s)) \nonumber \\ {}\le & {} L(t,s)\cdot |x_n(t,s)-x(t,s)|. \end{aligned}$$
(8)

From (7), one deduces that

$$\begin{aligned} |(x_{n+1})'_g(t,s)-(x_n)'_g(t,s)|\le L(t,s)\beta _n(s)(t)\le \frac{\left\| L(\cdot ,s) \right\| _{L^1_g}^{n}}{n!}(\Vert {\beta }(s)\Vert _{L^1_g}+\varepsilon ); \end{aligned}$$

thus, \(((x_n)'_g(t,s))_n\) is a pointwisely Cauchy sequence, therefore pointwisely convergent to some y(t, s). The sequence satisfies the hypotheses of the Lebesgue dominated convergence theorem since, for all \(n\in \mathbb {N}\), one can write

$$\begin{aligned} \begin{array}{ll} |(x_n)'_g(t,s)| \le \sum \limits _{i=0}^{n-1}|(x_{i+1})'_g(t,s)-(x_i)'_g(t,s)|\le \sum \limits _{i=0}^{n-1}L(t,s)\beta _i(s)(t)\\ \\ \quad = L(t,s)\sum \limits _{i=1}^{n-1} \left( \int _{[0,t)}{\beta }(s)(u)\frac{\left( \int _{[u,t)}L(\tau ,s)dg(\tau ) \right) ^{i-1}}{(i-1)!}dg(u)+\frac{\left( \int _{[0,t)}L(\tau ,s)dg(\tau ) \right) ^{i-1}}{(i-1)!}\varepsilon _i \right) \\ \\ \qquad +{\beta }(s)(t)+\varepsilon _0\\ \\ <L(t,s)\int _{[0,t)}{\beta }(s)(u) e^{\int _{[u,t)}L(\xi ,s)dg(\xi )}dg(u)+L(t,s)e^{\int _{[0,t)}L(\xi ,s)dg(\xi )}\varepsilon +{\beta }(s)(t)+\varepsilon _0, \end{array} \end{aligned}$$

and see that the term on the right-hand side is LS-integrable w.r.t. g, since \(\int _{[0,\cdot )}L(\xi ,s)dg(\xi )\) and \(\int _{[0,\cdot )}{\beta }(s)(u) e^{\int _{[u,\cdot )}L(\xi ,s)dg(\xi )}dg(u)\) are g-absolutely continuous (thus, bounded), while \(L(\cdot ,s)\) and \({\beta }(s)\) are LS-integrable. Therefore

$$\begin{aligned} x_n(t,s)=\int _{[0,t)}(x_n)'_g(\tau ,s)dg(\tau )\rightarrow \int _{[0,t)}y(\tau ,s)dg(\tau ). \end{aligned}$$

Consequently, \(\int _{[0,t)}y(\tau ,s)dg(\tau )=x(t,s)\) for every \(t\in [0,T]\) and \(s\in \mathbb {R}^d\), whence by Theorem 2.2, \(y(t,s)=x'_g(t,s)\).

Letting \(n\rightarrow \infty \) in (8), since the right-hand side tends to 0 and F has closed values, we obtain

$$\begin{aligned} x'_g(t,s)\in F(t,x(t,s),s), \end{aligned}$$

and the assertion is proved.
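The factorial decay in the estimates above, which makes \((x_n)\) Cauchy, can be observed numerically in the simplest classical setting. The following minimal Python sketch is our illustration (not part of the paper): it takes \(g(t)=t\) and the hypothetical scalar model \(x'(t)=Lx(t)\), \(x(0)=1\), computes the successive approximations, and records their sup-distances, which decay like \(L^{n+1}/(n+1)!\).

```python
import numpy as np

# Successive approximations x_{n+1}(t) = 1 + int_0^t L*x_n(u) du on [0, 1],
# for the hypothetical model x' = L*x with g(t) = t; the sup-distances of
# consecutive iterates exhibit the factorial decay used in the proof above.
L = 2.0
t = np.linspace(0.0, 1.0, 2001)
x = np.ones_like(t)                      # x_0 == 1 (the initial condition)
sup_diffs = []
for n in range(10):
    integrand = L * x
    # trapezoidal quadrature of int_0^t L*x_n(u) du
    x_new = 1.0 + np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
    sup_diffs.append(float(np.max(np.abs(x_new - x))))
    x = x_new
```

With these values, `sup_diffs[n]` is close to \(L^{n+1}/(n+1)!\), so after a few steps the sequence of iterates is numerically Cauchy in the sup norm.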

Let us finally prove the inequality (c). By (6)

$$\begin{aligned}{} & {} |x_n(t,s)-y(t,s)|=|x_n(t,s)-y(t,s)-(x_n(0,s)-y(0,s))|\\{} & {} \le \sum _{i=1}^n |x_i(t,s)-x_{i-1}(t,s)|\le \sum _{i=1}^n \beta _i(s)(t)\\{} & {} =\sum _{i=1}^n \left( \int _{[0,t)}{\beta }(s)(u)\frac{\left( \int _{[u,t)}L(\tau ,s)dg(\tau ) \right) ^{i-1}}{(i-1)!}dg(u)+\frac{\left( \int _{[0,t)}L(\tau ,s)dg(\tau ) \right) ^{i-1}}{(i-1)!}\varepsilon _i\right) \\{} & {} <\int _{[0,t)}{\beta }(s)(u) e^{\int _{[u,t)}L(\tau ,s)dg(\tau )}dg(u)+e^{\int _{[0,t)}L(\tau ,s)dg(\tau )}\varepsilon ; \end{aligned}$$

letting \(n\rightarrow \infty \) in the left-hand side yields (c).

\(\square \)

The following was given in [20] when \(g(t)=t\).

Corollary 3.9

Let \(T>0\) and \(F:[0,T]\times \mathbb {R}^d \rightarrow \mathcal {P}(\mathbb {R}^d)\) satisfy:

(H1) F is \(\mathcal {L}_g([0,T])\otimes \mathcal {B}( \mathbb {R}^d)\)-measurable;

(H2) One can find a function \(L\in L^1_g([0,T],\mathbb {R}_+)\), such that for \(\mu _g\)-a.e. \(t\in [0,T]\) and every \(x,y\in \mathbb {R}^d\)

$$\begin{aligned} D(F(t,x),F(t,y))\le L(t)|x-y|; \end{aligned}$$

(H3) For some function \(\beta \in L^1_g([0,T],\mathbb {R}_+)\),

$$\begin{aligned} d(0,F(t,0))\le \beta (t), \; \mu _g-a.e. \end{aligned}$$

Then, for each \(\xi _0\in \mathbb {R}^d\), each solution \({\overline{y}}:[0,T]\rightarrow \mathbb {R}^d\) of

$$\begin{aligned} \left\{ \begin{array}{l} x'_g(t) \in F(t,x(t)),\; \mu _g-a.e. \; on\; [0,T]\\ x(0)=\xi _0, \end{array} \right. \end{aligned}$$

and for every \(\varepsilon >0\), there exists a map \(x:[0,T]\times \mathbb {R}^d\rightarrow \mathbb {R}^d\), such that:

(a) for each \(\eta \in \mathbb {R}^d\), \(t\mapsto x(t,\eta )\) is a solution of

$$\begin{aligned} \left\{ \begin{array}{l} x'_g(t) \in F(t,x(t)),\; \mu _g-a.e.\; t\in [0,T]\\ x(0)=\eta ; \end{array} \right. \end{aligned}$$

(b) the map \(\eta \mapsto x(\cdot , \eta )\) is continuous from \(\mathbb {R}^d\) to \(AC_g([0,T],\mathbb {R}^d)\);

(c) for every \(\eta \in \mathbb {R}^d\) and \(t\in [0,T]\)

$$\begin{aligned} |{\overline{y}}(t)-x(t,\eta )|\le e^{\int _{[0,t)}L(\tau )dg(\tau )}(\varepsilon +|\xi _0-\eta |). \end{aligned}$$

Proof

The assertion follows by applying Theorem 3.8 with F independent of s and with \(y(t,s)={\overline{y}}(t)\) for every \(s\in \mathbb {R}^d\) (i.e., constant w.r.t. s); since \({\overline{y}}\) is a solution of the above-mentioned problem, we may take \(\beta (s)(t)=0\) identically. \(\square \)
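In the classical single-valued case, estimate (c) of Corollary 3.9 reduces to the standard continuous-dependence bound. The following minimal Python sketch is our illustration with hypothetical values (not from the paper): it takes \(g(t)=t\), the single-valued map \(F(t,x)=\{Lx\}\) and \(\varepsilon =0\), so the bound can be checked against the explicit solutions.

```python
import math

# Sanity check of estimate (c) for g(t) = t, F(t, x) = {L*x}, epsilon = 0:
# the solutions issuing from xi0 and eta are xi0*exp(L*t) and eta*exp(L*t),
# so their distance equals exp(L*t)*|xi0 - eta|, which is exactly the bound
# (here int_[0,t) L dg = L*t); illustrative values only.
L, xi0, eta = 1.5, 1.0, 1.3
for t in (0.0, 0.4, 0.8):
    diff = abs(xi0 * math.exp(L * t) - eta * math.exp(L * t))
    bound = math.exp(L * t) * abs(xi0 - eta)
    assert diff <= bound + 1e-12
```

For this linear model the bound is attained with equality; for genuinely set-valued F it is only an upper estimate.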

We will combine it with a Relaxation Theorem for Stieltjes differential inclusions recently proved in [25].

Theorem 3.10

([25, Theorem 12]) Let \(T>0\), \(g:[0,T]\rightarrow \mathbb {R}\) be left-continuous, non-decreasing, with the property that its discontinuity points accumulate only finitely many times and let \(F:[0,T]\times \mathbb {R}^d \rightarrow \mathcal {P}_{k}(\mathbb {R}^d)\) satisfy the following:

K1) F is uniformly continuous (in (tx)) in the product topology of \(\tau _g\) with the usual topology of \(\mathbb {R}^d\) (both generated by pseudometrics) on any set \([0,T]\times Q\) with \(Q\subset \mathbb {R}^d\) compact;

K2) there exists \(L>0\) such that for every \(t\in [0,T]\)

$$\begin{aligned} D(F(t,x),F(t,y))\le L |x-y|,\quad \mathrm{for\;all}\; x,y\in \mathbb {R}^d. \end{aligned}$$

Then, for every \(\xi _0\in \mathbb {R}^d\), every solution \(y:[0,T]\rightarrow \mathbb {R}^d\) of

$$\begin{aligned} y'_g(t) \in \left\{ \begin{array}{lllll} {\overline{co}}F(t,y(t)), &{} t \notin D_g, \qquad y(0)=\xi _0 \\ F(t, y(t)), &{} t \in D_g, \end{array} \right. \end{aligned}$$

and for every \(\varepsilon >0\), there exists a solution \({\overline{y}}:[0,T]\rightarrow \mathbb {R}^d\) of

$$\begin{aligned} {\overline{y}}'_g(t)\in F(t,{\overline{y}}(t)),\quad {\overline{y}}(0)=\xi _0, \end{aligned}$$

such that

$$\begin{aligned} |{\overline{y}}(t)-y(t)|\le \varepsilon , \quad \mathrm{for\; every\;} t\in [0,T]. \end{aligned}$$

We proceed with two auxiliary results needed in the main theorem.

Lemma 3.11

Let \(T>0\), \(\xi _0 \in \mathbb {R}^d\) and \(F:[0,T]\times \mathbb {R}^d\rightarrow \mathcal {P}(\mathbb {R}^d)\) be given and consider on [0, T] the problems

$$\begin{aligned} \left\{ \begin{array}{l} x'_g(t) \in F(t,x(t)),\; \mu _g-a.e.\\ x(0)=\xi _0 \end{array} \right. \end{aligned}$$
(9)

and

$$\begin{aligned} x'_g(t) \in \left\{ \begin{array}{lllll} {\overline{co}}F(t,x(t)), &{} t \notin D_g, \qquad x(0)=\xi _0 \\ F(t, x(t)), &{} t \in D_g. \end{array} \right. \end{aligned}$$
(10)

Suppose \(y:[0,T] \rightarrow \mathbb {R}^d\) is a solution of (10) and let \(\varepsilon >0\) and

$$\begin{aligned} \mathcal {T}:= \{ \xi \in \mathbb {R}^d: | \xi - y(t)| \le \varepsilon \ \mathrm{for\ some}\; t \in [0,T]\} \end{aligned}$$

be the \(\varepsilon \)-tube around the image of y.

If F satisfies the assumption K1) and:

(h2) One can find \(L>0\), such that for \(\mu _g\)-a.e. \(t\in [0,T]\) and every \(\xi ,\eta \in B(\mathcal {T},1)\)

$$\begin{aligned} D(F(t,\xi ),F(t,\eta ))\le L|\xi -\eta |, \end{aligned}$$

then there exist \(\delta >0\) and a function \(x:[0,T]\times B(\xi _0, \delta ) \rightarrow \mathbb {R}^d\) with the following properties:

(a) for each \(\eta \in B(\xi _0, \delta )\), \(t\mapsto x(t,\eta )\) is a solution of

$$\begin{aligned} \left\{ \begin{array}{l} x'_g(t) \in F(t,x(t)),\; \mu _g-a.e.\; on \; [0,T]\\ x(0)=\eta ; \end{array} \right. \end{aligned}$$
(11)

(b) the map \(\eta \mapsto x(\cdot , \eta )\) is continuous from \(B(\xi _0, \delta )\) to \(AC_g([0,T],\mathbb {R}^d)\);

(c) for every \(\eta \in B(\xi _0, \delta )\) and \(t\in [0,T]\)

$$\begin{aligned} |y(t)-x(t,\eta )|\le \varepsilon . \end{aligned}$$

Proof

Since y is g-absolutely continuous, it is bounded; therefore, \( B(\mathcal {T},1)\) is bounded, and so, hypothesis K1) implies (using Lemma 2.3) that there exists \(\alpha >0\), such that for \(\xi \in B(\mathcal {T},1)\)

$$\begin{aligned} |F(t,\xi )| \le \alpha , \ \ \ \mu _g-a.e. \ t \in [0,T]. \end{aligned}$$

In particular, F has compact values at any \((t,\xi )\in [0,T]\times B(\mathcal {T},1)\). To obtain the conclusion, we combine Corollary 3.9 with the Relaxation Theorem 3.10. To this aim, we need to modify the function F so that the Lipschitz property holds on the whole space \(\mathbb {R}^d\).

As in [20], let \(\Phi : \mathbb {R}^d \rightarrow [0,1]\) be defined by

$$\begin{aligned} \Phi (x):= \max \{1- d(x, \mathcal {T}), 0\} \end{aligned}$$

and \({\tilde{F}}(t,x):= \Phi (x) F(t,x)\).
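A minimal numerical sketch of this cutoff can be given (our illustration: the tube \(\mathcal {T}\) is approximated by finitely many hypothetical sample points of the trajectory y, so the distance below only approximates \(d(x,\mathcal {T})\)):

```python
import numpy as np

# Phi(x) = max(1 - d(x, T), 0), with the tube T replaced by a finite sample
# of the trajectory y (a discretization assumption on our part).
tube_samples = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 1.0]])  # hypothetical y(t_i)

def phi(x):
    # distance from x to the sampled tube, then the cutoff
    d = float(np.min(np.linalg.norm(tube_samples - x, axis=1)))
    return max(1.0 - d, 0.0)
```

Phi equals 1 on the tube, vanishes at distance at least 1 from it, and is 1-Lipschitz (since the distance to a set is 1-Lipschitz); the Lipschitz property is what produces the constant \(\alpha +L\) in case (i) below, while the vanishing of \(\Phi \) outside \(B(\mathcal {T},1)\) is what cases (ii) and (iii) exploit.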

We prove that \({\tilde{F}}\) satisfies the hypotheses (H1)–(H3) of Corollary 3.9.

Hypothesis (H1) is immediate: being uniformly continuous, F is jointly measurable, and the same is true for \({\tilde{F}}\). Hypothesis (H3) follows by taking \(\beta (t)= \alpha \).

We now prove that (H2) holds. To this purpose, let \(t \in [0,T]\) be such that h2) holds (which is the case for \(\mu _g\)-a.e. t) and let \(x,y \in \mathbb {R}^d\); we have to consider the following cases:

(i) if \(x,y \in B(\mathcal {T}, 1)\)

$$\begin{aligned} \begin{array}{llll} D({\tilde{F}}(t,x), {\tilde{F}}(t,y))= D(\Phi (x){F}(t,x), \Phi (y){F}(t,y))\\ \le D(\Phi (x){F}(t,x), \Phi (y){F}(t,x)) + D(\Phi (y){F}(t,x), \Phi (y){F}(t,y))\\ \le |\Phi (x) -\Phi (y) | | F(t,x)| + |\Phi (y)| D({F}(t,x), {F}(t,y))\\ \le \alpha |x-y| + D({F}(t,x), {F}(t,y)) \le (\alpha +L)|x-y|; \end{array} \end{aligned}$$

(ii) if \(x \in B(\mathcal {T}, 1)\) and \(y \notin B(\mathcal {T}, 1)\)

$$\begin{aligned} \begin{array}{llll} D({\tilde{F}}(t,x), {\tilde{F}}(t,y))= D(\Phi (x){F}(t,x),\{0\})\\ = | \Phi (x) F(t,x)|= |\Phi (x)| | F(t,x)|\\ \le \alpha |\Phi (x)| \le \alpha |\Phi (x)- \Phi (y)| \le \alpha |x-y|; \end{array} \end{aligned}$$

(iii) if \(x, y \notin B(\mathcal {T}, 1)\) then \(D({\tilde{F}}(t,x), {\tilde{F}}(t,y))= D(\{0\},\{0\})=0\).

Hence, the Lipschitz condition is satisfied by \({\tilde{F}}\) on the whole of \(\mathbb {R}^d\):

$$\begin{aligned} D({\tilde{F}}(t,x),{\tilde{F}}(t,y))\le (\alpha +L) |x-y|. \end{aligned}$$
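The scalar-multiplication estimate used in case (i), namely \(D(c_1K,c_2K)\le |c_1-c_2|\,|K|\), can be checked for finite sets with a direct implementation of the Pompeiu–Hausdorff distance (a sketch with hypothetical data; the function and the values are ours, not from the paper):

```python
import numpy as np

def hausdorff(A, B):
    # Pompeiu-Hausdorff distance D(A, B) between finite subsets of R^d
    d_ab = max(min(float(np.linalg.norm(a - b)) for b in B) for a in A)
    d_ba = max(min(float(np.linalg.norm(b - a)) for a in A) for b in B)
    return max(d_ab, d_ba)

# Check D(c1*K, c2*K) <= |c1 - c2| * |K| on a hypothetical finite set K
K = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
c1, c2 = 0.8, 0.3
lhs = hausdorff([c1 * v for v in K], [c2 * v for v in K])
rhs = abs(c1 - c2) * max(float(np.linalg.norm(v)) for v in K)
```

For this K the two sides coincide (both equal \(|c_1-c_2|\cdot |K|=1\)), showing the estimate is sharp.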

Note that \({\tilde{F}}(t,x)\) and \(F(t,x)\) coincide for any \((t,x) \in [0,T] \times \mathcal {T}\); therefore, the solutions of (9), (10), and (11) whose trajectories remain in \(\mathcal {T}\) are exactly the solutions of the corresponding inclusions with \({\tilde{F}}(t,x) \) instead of \(F(t,x)\).

Now, since \(y(t)\in \mathcal {T}\) for any \(t\in [0,T]\), we have \(F(t,y(t))={\tilde{F}}(t,y(t))\); hence, y is also a solution of (10) with \({\tilde{F}}\) instead of F.

By Theorem 3.10, there exists a solution \({\overline{y}}\) of (9) with \({\tilde{F}}\) instead of F which satisfies for all \(t \in [0,T]\)

$$\begin{aligned} | y(t) - {\overline{y}}(t)| \le \frac{\varepsilon }{2}. \end{aligned}$$

Moreover, as \({\overline{y}}(t)\in \mathcal {T}\) for all \(t\in [0,T]\), \({\overline{y}}\) is in fact a solution of (9).

Now, we apply Corollary 3.9 with \({\overline{y}}\) and \(\varepsilon _1=\frac{\varepsilon }{4 e^{\int _{[0,T]}L(\tau ) dg(\tau )}}\). There is a function \(x:[0,T] \times \mathbb {R}^d \rightarrow \mathbb {R}^d\), such that

(a) for every \(\eta \in \mathbb {R}^d\), the function \(t \mapsto x(t, \eta )\) is a solution of

$$\begin{aligned} x'_g(t) \in {\tilde{F}}(t,x(t)),\; x(0)=\eta ; \end{aligned}$$

(b) the map \(\eta \mapsto x(\cdot , \eta )\) is continuous from \(\mathbb {R}^d\) to \(AC_g([0,T],\mathbb {R}^d)\);

(c) for every \(\eta \in \mathbb {R}^d\) and \(t\in [0,T]\)

$$\begin{aligned} |{\overline{y}}(t)-x(t,\eta )|\le e^{\int _{[0,t)}L(\tau )dg(\tau )}(\varepsilon _1+|\xi _0-\eta |). \end{aligned}$$

Setting \(\delta = \frac{\varepsilon }{4e^{\int _{[0,T]}L(\tau ) dg(\tau )}}\), we get for each \(\eta \in B(\xi _0, \delta )\)

$$\begin{aligned} |{\overline{y}}(t)- x(t, \eta ) | \le \frac{\varepsilon }{2} \ \ \ \forall t \in [0,T]. \end{aligned}$$

Therefore, for each \(\eta \in B(\xi _0, \delta )\) and all \(t \in [0,T]\)

$$\begin{aligned} |y(t)- x(t, \eta ) | \le |y(t)- {\overline{y}}(t) | + |{\overline{y}}(t)- x(t, \eta ) | \le \frac{\varepsilon }{2} + \frac{\varepsilon }{2} =\varepsilon . \end{aligned}$$

This implies that for each \(\eta \in B(\xi _0, \delta )\), the solution \(x(\cdot , \eta )\) lies in the tube \(\mathcal {T}\), where \({\tilde{F}}\) and F coincide, so it is a solution of the original inclusion. It follows that the restriction of x to \([0,T] \times B(\xi _0, \delta )\) satisfies the required conditions. \(\square \)

For the main result, we impose the hypotheses below on \(F:[0,\infty )\times \mathbb {R}^d\rightarrow \mathcal {P}(\mathbb {R}^d)\).

(i) F is uniformly continuous in (tx) in the product topology of \(\tau _g\) with the usual topology of \(\mathbb {R}^d\) (both being generated by pseudometrics) on any set \([0,T]\times Q\) with \(T>0\) and \(Q\subset \mathbb {R}^d\) compact;

(ii) for every \(T>0\) and \(R>0\), one can find \(L_{T,R}>0\), such that for any \(\xi ,\eta \in \mathbb {R}^d\) with \(\max (|\xi |,|\eta |)\le R\):

$$\begin{aligned} D(F(t,\xi ),F(t,\eta ))\le L_{T,R} |\xi -\eta |,\;\mu _g\mathrm{-a.e.\; on \;}[0,T]. \end{aligned}$$

Lemma 3.12

Let \(F:[0,\infty )\times \mathbb {R}^d\rightarrow \mathcal {P}_k(\mathbb {R}^d)\) satisfy i) and ii), \(\xi \in \mathbb {R}^d\) and \(z:[0,\infty )\rightarrow \mathbb {R}^d\) be a solution of

$$\begin{aligned} x'_g(t) \in \left\{ \begin{array}{lllll} {\overline{co}}F(t,x(t)), &{} t \notin D_g, \qquad x(0)=\xi \\ F(t, x(t)), &{} t \in D_g. \end{array} \right. \end{aligned}$$

Let also \(r:[0,\infty )\rightarrow \mathbb {R}_+\) be continuous and \((T_k)_{k\in \mathbb {N}}\) increasingly tend to \(\infty \) with \(T_0=0\).

Then, there exists a nonincreasing sequence \((\delta _k)_k\) of positive numbers and, for every \(k\in \mathbb {N}\), a sequence \((\eta _j^k)_{j\ge 1}\), such that:

  • \(\delta _k\le r_{k+1}:=\min \{r(t):t\in [T_k,T_{k+1}]\}\) for each \(k\in \mathbb {N}\);

  • \(\eta _j^k\in B(z(T_k),\delta _k)\) for all \(k\in \mathbb {N}\) and \({j\ge 1}\);

  • for any \(k\ge 1\), if a subsequence of \((\eta _j^k)_{j\ge 1}\), say \((\eta _{j_l}^k)_{l\ge 1}\), tends to some \(\eta ^k\), then \((\eta _{j_l}^{k-1})_{l\ge 1}\) tends to some \(\eta ^{k-1}\) and there exists a solution \({\overline{x}}:[0,T_k-T_{k-1}]\rightarrow \mathbb {R}^d\) of

    $$\begin{aligned} \left\{ \begin{array}{l} x'_g(t) \in F(T_{k-1}+t,x(t)),\; \mu _g-a.e.\\ x(0)=\eta ^{k-1} \end{array} \right. \end{aligned}$$

    with

    $$\begin{aligned} |{\overline{x}}(t)-z(T_{k-1}+t)|\le r(T_{k-1}+t),\; \forall \; t\in [0,T_k-T_{k-1}] \end{aligned}$$
    (12)

    and \({\overline{x}}(T_k-T_{k-1})=\eta ^k\).

Proof

We first remark that assumption i) implies (again by Lemma 2.3) that for every \(T,R>0\), there is \(\alpha _{T,R}>0\), such that

$$\begin{aligned} |F(t,\xi )|\le \alpha _{T,R}, \ \ \ \mu _g-a.e. \ t \in [0,T],\; \forall |\xi | \le R. \end{aligned}$$

For the sake of completeness, we list here all the steps, similar to those in the proof of [20, Lemma 3.2]. For each \(k \ge 1\), we apply Lemma 3.11 to the problem

$$\begin{aligned} x'_g(t)\in -F(T_k-t,x(t)),\quad t\in [0,T_k-T_{k-1}] \end{aligned}$$
(13)

and, respectively,

$$\begin{aligned} x'_g(t)\in {\overline{co}}\left( -F(T_k-t,x(t))\right) ,\quad t\in [0,T_k-T_{k-1}]. \end{aligned}$$
(14)

First, let \(\delta _0=r_1\). By induction, for every \(k\ge 1\), the following will be constructed in the same way as in the proof of [20, Lemma 3.2]:

  • \(0<\delta _k\le r_{k+1}\);

  • \(x_k:[0,T_k-T_{k-1}]\times B(z(T_k),\delta _k)\rightarrow \mathbb {R}^d\) having the following properties:

    • for any \(\eta \), \(x_k(\cdot ,\eta )\) is a solution of (13) with initial value \(x(0)=\eta \);

    • the map \(\eta \mapsto x_k(\cdot ,\eta )\) is continuous from \(B(z(T_k),\delta _k)\) to \(AC_g([0,T_k-T_{k-1}],\mathbb {R}^d)\);

    • for any \(\eta \)

      $$\begin{aligned} |z(T_k-t)-x_k(t,\eta )|\le r_k,\; \forall t\in [0,T_k-T_{k-1}]; \end{aligned}$$
    • for any \(\eta \)

      $$\begin{aligned} x_k(T_k-T_{k-1},\eta )\in B(z(T_{k-1}),\delta _{k-1}). \end{aligned}$$

Now, let us conveniently concatenate these functions to construct (for each k) a map \(y_k:[0,T_k]\rightarrow \mathbb {R}^d\) by

$$\begin{aligned} y_k(t)=\left\{ \begin{array}{l} x_k(t,z(T_k)),\; \mathrm{if\;} t\in [0,T_k-T_{k-1}]\\ x_{k-1}(t-(T_k-T_{k-1}),x_k(T_k-T_{k-1},z(T_k))),\; \mathrm{if\;} t\in [T_k-T_{k-1},T_k-T_{k-2}]\\ ...\\ x_1(t-(T_k-T_1),x_2(T_2-T_1,\ldots ,x_k(T_k-T_{k-1},z(T_k))\ldots )),\; \mathrm{if\;} t\in [T_k-T_1,T_k] \end{array} \right. \end{aligned}$$

and define

$$\begin{aligned} \begin{array}{l} \eta _0^0=z(T_0)\\ \eta _1^0=y_1(T_1),\; \eta _1^1=z(T_1)\\ \eta _2^0=y_2(T_2),\eta _2^1=y_2(T_2-T_1),\eta _2^2=z(T_2)\\ ... \end{array} \end{aligned}$$

Each of them satisfies the condition \(\eta _j^k\in B(z(T_k),\delta _k)\). Besides, if a subsequence \((\eta _{j_l}^k)_{l\ge 1}\) tends to some \(\eta ^k\), then

$$\begin{aligned} \lim _{l\rightarrow \infty }\eta _{j_l}^{k-1}=\lim _{l\rightarrow \infty }x_k(T_k-T_{k-1},\eta _{j_l}^k)=x_k(T_k-T_{k-1},\eta ^k); \end{aligned}$$

so, indeed, \((\eta _{j_l}^{k-1})_{l\ge 1}\) tends to \(\eta ^{k-1}:=x_k(T_k-T_{k-1},\eta ^k)\).

Finally, \({\overline{x}}:[0,T_k-T_{k-1}]\rightarrow \mathbb {R}^d\) defined by \({\overline{x}}(t)=x_k(T_k-T_{k-1}-t,\eta ^k)\) satisfies all the requirements. \(\square \)

We are now ready for the main relaxation result for Stieltjes differential inclusions on infinite-time intervals.

Theorem 3.13

Let \(F:[0,\infty )\times \mathbb {R}^d\rightarrow \mathcal {P}(\mathbb {R}^d)\) satisfy i) and ii), \(\xi \in \mathbb {R}^d\) and \(z:[0,\infty )\rightarrow \mathbb {R}^d\) be a solution of

$$\begin{aligned} x'_g(t) \in \left\{ \begin{array}{lllll} {\overline{co}}F(t,x(t)), &{} t \notin D_g, \qquad x(0)=\xi \\ F(t, x(t)), &{} t \in D_g. \end{array} \right. \end{aligned}$$

Let also \(r:[0,\infty )\rightarrow \mathbb {R}\) be continuous with \(r(t)>0\) on \([0,\infty )\).

Then, there exists \(\eta ^0\in B(\xi ,r(0))\) and a solution \(x:[0,\infty )\rightarrow \mathbb {R}^d\) of

$$\begin{aligned} x'_g(t)\in F(t,x), \; x(0)=\eta ^0 \end{aligned}$$
(15)

with

$$\begin{aligned} |z(t)-x(t)|\le r(t),\;\mathrm{for\;every\;}t\in [0,\infty ). \end{aligned}$$

Proof

Consider an increasing sequence \((T_k)_{k\in \mathbb {N}}\) of continuity points of g tending to \(\infty \) with \(T_0=0\).

Let \((\delta _k)_k\) and, for every \(k\in \mathbb {N}\), \((\eta _j^k)_{j\ge 1}\) be given by Lemma 3.12.

The key fact at this point of the proof is that \((\eta _j^0)_{j\ge 1}\subset B(z(0),r_1)\), which is compact; therefore, it has a subsequence convergent to some \(\eta ^0\in B(z(0),r_1)\). The corresponding subsequence of \((\eta _j^1)_{j\ge 1}\) also has a convergent subsequence, since it is contained in the compact set \(B(z(T_1),\delta _1)\), and so on; thus, the diagonal sequence converges, for each \(k\in \mathbb {N}\), to some \(\eta ^k\in B(z(T_k),\delta _k)\). By Lemma 3.12, there exists a solution \({\overline{x}}_k:[0,T_k-T_{k-1}]\rightarrow \mathbb {R}^d\) of

$$\begin{aligned} \left\{ \begin{array}{l} ({\overline{x}}_k)'_g(t) \in F(T_{k-1}+t,{\overline{x}}_k(t)),\; \mu _g-a.e.\\ {\overline{x}}_k(0)=\eta ^{k-1} \end{array} \right. \end{aligned}$$

with

$$\begin{aligned} |{\overline{x}}_k(t)-z(T_{k-1}+t)|\le r(T_{k-1}+t),\; \forall \; t\in [0,T_k-T_{k-1}], \end{aligned}$$

and \({\overline{x}}_k(T_k-T_{k-1})=\eta ^k\).

Then, the map \(x:[0,\infty )\rightarrow \mathbb {R}^d\) defined as

$$\begin{aligned} x(t)={\overline{x}}_k(t-T_{k-1})\;\mathrm{for\;} t\in [T_{k-1},T_k) \end{aligned}$$

is a solution of (15) and satisfies, due to (12),

$$\begin{aligned} |z(t)-x(t)|\le r(t),\;\mathrm{for\;every\;}t\in [T_{k-1},T_k] \end{aligned}$$

so on the whole interval \([0,\infty )\)

$$\begin{aligned} |z(t)-x(t)|\le r(t). \end{aligned}$$

\(\square \)

Let us note that, in the convexified problem in the main theorem, the convex hull is not necessary at the points of discontinuity of g, since [25, Proposition 13] states that, on finite intervals, any limit of a uniformly convergent sequence of solutions of

$$\begin{aligned} x'_g(t)\in F(t,x), \; x(0)=\xi \end{aligned}$$

is a solution of

$$\begin{aligned} x'_g(t) \in \left\{ \begin{array}{lllll} {\overline{co}}F(t,x(t)), &{} t \notin D_g, \qquad x(0)=\xi \\ F(t, x(t)), &{} t \in D_g. \end{array} \right. \end{aligned}$$

This is consistent with the statements of the relaxation results for difference inclusions [21], for set-valued problems for hybrid systems [4], and for dynamic inclusions on time scales [32].

Remark 3.14

We cannot hope to get solutions of the non-convex problem which approximate a given solution of the relaxed inclusion with the same initial value (see the counterexample in [20, Section 4] in the particular case \(g(t)=t\)).

Our main result allows us to get the relaxation of the problem (1) in a very wide framework, such as in the case where \(F:[0,\infty )\times \mathbb {R}^d\rightarrow \mathcal {P}_k(\mathbb {R}^d)\) satisfies (i) and (ii) and \(g:[0,\infty )\rightarrow \mathbb {R}\) is defined (see [13, Example 2]) by

$$\begin{aligned} g(t)=t+k+\sum _{i=1}^{\infty }\frac{1}{2^i}H\left( t-\left( k+\frac{1}{2}-\frac{1}{3+i}\right) \right) ,\quad t\in [k,k+1),\; k\in \mathbb {N}, \end{aligned}$$

H being the Heaviside function

$$\begin{aligned} H(t)=0\; \textrm{if}\; t\le 0\quad \textrm{and} \quad H(t)=1\; \textrm{if}\; t> 0. \end{aligned}$$

The function g is non-decreasing and left-continuous and its discontinuity points accumulate at the moments \(k+\frac{1}{2}, k\in \mathbb {N}\); thus, on any compact interval, they accumulate at most finitely many times.
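For concreteness, this derivator can be evaluated numerically. The sketch below is ours (not from [13]): it truncates the infinite sum after finitely many terms, so near the accumulation points \(k+\frac{1}{2}\) it only approximates g, with an error bounded by the tail of the geometric series.

```python
import math

def heaviside(t):
    # H(t) = 0 for t <= 0 and 1 for t > 0, so that g stays left-continuous
    return 0.0 if t <= 0 else 1.0

def g(t, n_terms=60):
    # Truncated version of the derivator from [13, Example 2]:
    # g(t) = t + k + sum_{i>=1} 2^{-i} H(t - (k + 1/2 - 1/(3+i))) on [k, k+1)
    k = math.floor(t)
    jump_part = sum(2.0 ** (-i) * heaviside(t - (k + 0.5 - 1.0 / (3 + i)))
                    for i in range(1, n_terms + 1))
    return t + k + jump_part
```

On [0, 1) the jumps of g occur at \(\frac{1}{2}-\frac{1}{3+i}\), \(i\ge 1\), with sizes \(2^{-i}\), accumulating only at \(\frac{1}{2}\); for instance, crossing the first jump point \(\frac{1}{4}\) increases g by \(\frac{1}{2}\).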

We finally remark that, as far as the authors know, this is the first relaxation result on an infinite-time horizon available for dynamics involving discrete perturbations.