1 Introduction

Convergence results for varying measures have significant applications to various fields of pure and applied sciences including stochastic processes, statistics, control and game theories, transportation problems, neural networks, signal and image processing (see, for example, [2, 5,6,7,8, 15, 20, 24, 28, 34]).

E.T. Copson in [4], weakening the monotonicity requirement on a sequence of real numbers by replacing it with a convex inequality involving k consecutive terms of the sequence, gave a sufficient condition guaranteeing the convergence of bounded sequences of real numbers.

Recently, following this idea, the classical Monotone Convergence Theorem was generalized in [23] by replacing monotonicity with a convexity condition on the functions involved.

In the present paper, we go further and continue the investigation started in [11, 14,15,16,17, 21, 22, 25, 32], providing conditions to ensure convergence results for a sequence of functions \((f_n)_n\) integrable with respect to the measures of a sequence \((m_n)_n\), when the functions satisfy a convexity condition. If the sequence of measures \((m_n)_n\) satisfies a convexity condition, convergence theorems for the integrals are obtained as well.

The paper is organized as follows.

In Sect. 2, after the Introduction, convergence theorems for varying measures are given when the sequence of functions \((f_n)_n\) satisfies inequalities of Copson’s type (in both the increasing and the decreasing manner) and the sequence of measures is setwisely or weakly convergent.

In Sect. 3, the convergence of the sequence \(\left( \int _{\Omega } f d m_n\right) _n\) is obtained under a convexity condition on the sequence of measures \((m_n)_n\), when again \((m_n)_n\) converges in the setwise sense.

Finally, Sect. 4 provides a continuous dependence result for measure differential equations under Copson’s type assumptions on the measures driving the equations. Such outcomes are important in applications since they allow one to approximate the solutions of a differential problem driven by a general finite Borel measure by solutions of differential problems driven by measures with nicer behavior (e.g. [13, 33] or [9, 10, 31] for the more general, set-valued setting).

Measure differential equations (which can be equivalently written as Stieltjes differential equations, see [19, 27]) proved themselves very useful in studying real life processes with dead times or abrupt changes occurring in their dynamics, e.g. [1, 19] or [29].

We remark that the main theorem of this section, proved for measure differential equations, could be used to get new continuous dependence results for generalized differential problems ( [33]), for impulsive differential equations with finitely or countably many impulses ( [19, 33]) and also for dynamic equations on time scales ( [13]).

2 Convergence Results Under Convexity Conditions on the Functions

Let \((\Omega , {\mathcal {A}}) \) be a measurable space and denote by \({\mathcal {M}}^+(\Omega )\) the family of finite nonnegative measures on \((\Omega , {\mathcal {A}})\). Let \(m, m_n \in {\mathcal {M}}^+ (\Omega )\) for \(n \in {\mathbb {N}}\) and let \(f, f_n: \Omega \rightarrow {\mathbb {R}}\), for \(n \in {\mathbb {N}}\), be measurable functions. The symbol \(L^1(m)\) stands for the family of Lebesgue integrable functions with respect to (briefly, w.r.t.) the measure m, while \(\int _A f dm\) is the Lebesgue integral of f over a set \(A \in {\mathcal {A}}\).

We recall the following result.

Lemma 2.1

([4, 23, Lemmas 1, 2]) Let \((x_n)_n\) be a sequence of real numbers which satisfies the inequalities

$$\begin{aligned} x_{n+k} \ge \sum _{j=1}^k \alpha _j x_{n+k-j}, \ \ \mathrm{for \ all } \ n \ge 1 \end{aligned}$$
(1)

(\(x_{n+k} \le \sum _{j=1}^k \alpha _j x_{n+k-j}, \ \ \mathrm{for \ all } \ n \ge 1\), respectively), where k is a fixed positive integer, the coefficients \(\alpha _j\) are strictly positive and \(\sum _{j=1}^k \alpha _j =1\).

Then the sequence \(y_n=\min \{x_{n-1}, \dots ,x_{n-k}\}\), \(n \ge k+1\), is increasing (\(y_n=\max \{x_{n-1}, \dots ,x_{n-k}\}\), \(n \ge k+1\) is decreasing, respectively). Moreover, if the sequence \((x_n)_n\) satisfies (1) and if \(\lambda = \lim _{n \rightarrow \infty }x_n \) then \(x_n \le k \lambda \) for all \(n \ge 1\).
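As a quick numerical illustration (our own sketch, not part of the paper), the following Python fragment takes \(k=2\) and \(\alpha _1=\alpha _2=1/2\) with equality in (1) and checks that the auxiliary sequence of minima \(y_n\) is nondecreasing and that \(x_n \le k\lambda \) holds for this choice of initial data:

```python
# Illustrative sketch, not from the paper: k = 2, alpha_1 = alpha_2 = 1/2,
# with equality in (1): x_{n+2} = (x_{n+1} + x_n)/2, starting from x_1 = 0, x_2 = 1.
k = 2
x = [0.0, 1.0]
for _ in range(100):
    x.append(0.5 * x[-1] + 0.5 * x[-2])

# y_n = min of the k preceding terms is nondecreasing (Lemma 2.1)
y = [min(x[n - 1], x[n - 2]) for n in range(2, len(x))]
assert all(a <= b + 1e-12 for a, b in zip(y, y[1:]))

limit = x[-1]                 # for this recurrence, lim x_n = (x_1 + 2 x_2)/3 = 2/3
assert abs(limit - 2 / 3) < 1e-12
assert all(t <= k * limit for t in x)   # the bound x_n <= k * lambda
```

The oscillating sequence \(0, 1, 1/2, 3/4, \dots \) is not monotone, yet the minima of consecutive pairs increase to the limit \(2/3\).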

In the whole Section we will consider a sequence of functions \((f_n)_n\) satisfying

$$\begin{aligned} f_{n+k} (x) \ge \sum _{j=1}^k \alpha _j f_{n+k-j}(x), \ \ \ \mathrm{for \ all \ } n\ge 1, x \in \Omega , \end{aligned}$$
(2)

or, respectively, the reverse inequalities

$$\begin{aligned} f_{n+k}(x) \le \sum _{j=1}^k \alpha _j f_{n+k-j}(x), \ \ \ \mathrm{for \ all \ } n\ge 1, x \in \Omega , \end{aligned}$$
(3)

where k is a fixed positive integer, the coefficients \(\alpha _j\) are strictly positive and \(\sum _{j=1}^k \alpha _j =1\).

2.1 Setwisely Convergent Measures

We now give convergence theorems under convexity conditions on the functions, for setwise converging measures.

We recall ([21, Sect. 2.1], [17, Definition 2.3]) that a sequence \((m_n)_n\) converges setwisely to m (\(m_n \xrightarrow []{s} m\)) if for every \(A \in {\mathcal {A}}\)

$$\begin{aligned} \lim _{n \rightarrow \infty } m_n (A) = m(A). \end{aligned}$$

Definition 2.2

Let \((m_n)_n \subseteq {\mathcal {M}}^+(\Omega )\). We say that a sequence of measurable functions \(f_n: \Omega \rightarrow {\mathbb {R}}\) is uniformly \((m_n)\)-integrable on \(\Omega \) if

$$\begin{aligned} \lim _{\alpha \rightarrow +\infty } \sup _{n \in {{\mathbb {N}}} }\int _{\{|f_n| >\alpha \}} |f_n| dm_n \ =0. \end{aligned}$$
(4)

If \(f_n=f\) for all \(n\in {{\mathbb {N}}}\), then we say that f is uniformly \((m_n)\)-integrable on \(\varOmega \).
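To make the definition concrete, here is a numerical sketch (an illustration of ours, not taken from the paper): on \(\Omega =(0,1]\) with every \(m_n\) equal to Lebesgue measure, the functions \(f_n = n\,\chi _{(0,1/n^2]}\) are uniformly \((m_n)\)-integrable, since the tail integral in (4) equals \(1/n\) when \(n>\alpha \) and vanishes otherwise:

```python
# Illustrative sketch, not from the paper: Omega = (0,1], every m_n = Lebesgue
# measure, f_n = n on (0, 1/n^2] and 0 elsewhere.  The tail integral in (4) is
# int_{|f_n| > alpha} |f_n| dm_n = n * (1/n^2) = 1/n if n > alpha, else 0.
def tail_integral(n, alpha):
    return 1.0 / n if n > alpha else 0.0

def sup_tail(alpha, n_max=10_000):
    # approximates sup over n of the tail integral
    # (the exact value here is 1/(floor(alpha)+1))
    return max(tail_integral(n, alpha) for n in range(1, n_max + 1))

sups = [sup_tail(alpha) for alpha in (1, 10, 100, 1000)]
assert sups == sorted(sups, reverse=True)   # decreasing in alpha
assert sups[-1] < 1e-2                      # tends to 0: uniform (m_n)-integrability
```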

In the proof of our convergence results, we will use the following proposition:

Proposition 2.3

( [11, Proposition 2.10 and Corollary 2.8]) Let \((m_n)_n \subseteq {\mathcal {M}}^+(\Omega )\) converge setwisely to \(m \in {\mathcal {M}}^+(\Omega )\). Moreover, let \(f: \varOmega \rightarrow {\mathbb {R}}\) be uniformly \((m_n)\)-integrable on \(\varOmega \). Then \(f \in L^1(m)\) and for all \(A \in {{\mathcal {A}}}\)

$$\begin{aligned} \lim _{n \rightarrow \infty } \int _{A} f\,dm_n = \int _{A} f\,dm. \end{aligned}$$
(5)

We show that if \((f_n)_n\) satisfies a convexity condition of Copson’s type, then the convergence holds for the sequence \((f_n)_n\), not only for the function f (see (6) below).

Theorem 2.4

Let \(f_n: \Omega \rightarrow [0, +\infty ]\), \(n \in {{\mathbb {N}}}\), be a sequence of measurable functions satisfying (2) and let m and \(m_n\), \(n \in {{\mathbb {N}}}\), belong to \({\mathcal {M}}^+(\Omega )\). Then there exists a measurable function \(f: \Omega \rightarrow [0, +\infty ]\) such that \(\lim _{n \rightarrow \infty } f_n(x) =f(x)\) for all \(x \in \Omega \). Suppose that

(2.4.i) f is uniformly \((m_n)\)-integrable on \(\varOmega \);

(2.4.ii) \((m_n)_n\) is setwisely convergent to m.

Then, for all \(A \in {{\mathcal {A}}}\),

$$\begin{aligned} \lim _{n \rightarrow \infty } \int _{A} f_ndm_n = \int _{A} fdm. \end{aligned}$$
(6)

Proof

According to Copson’s theorem applied to each \(x \in \Omega \), we can find a function \( f: \Omega \rightarrow [0, + \infty ]\) such that

$$\begin{aligned} \lim _{n \rightarrow +\infty }f_n(x)=f(x) \ \ \mathrm{for \ all } \ x \in \Omega . \end{aligned}$$

Since each function \(f_n\) is measurable, the function f is also measurable.

To prove the assertion, it is sufficient to prove the equality (6) for \(A= \varOmega \). Fix \(x \in \Omega \) and define for each \(n \ge k+1\)

$$\begin{aligned} g_n(x)=\min \{f_{n-1}(x), \dots , f_{n-k}(x)\}. \end{aligned}$$

Now by (2) and Lemma 2.1 it follows that

$$\begin{aligned} g_n(x) \le g_{n+1}(x) \le f_{n}(x) \end{aligned}$$

and \(\lim _{n \rightarrow +\infty } g_n(x) =f(x).\)

Therefore, applying the monotone convergence theorem for setwise converging measures ( [17, Corollary 6.2]) to the increasing sequence \((g_n)_n\) we get

$$\begin{aligned} \lim _{n \rightarrow +\infty } \int _{\Omega } g_n d m_n = \int _{\Omega }f dm. \end{aligned}$$

Observe that \(\int _{\Omega }f dm \in [0, + \infty ]\). If \(\int _{\Omega }f dm = + \infty \), then, since \(g_n(x) \le f_n(x)\) for all \(n > k\), passing to the limit we obtain

$$\begin{aligned} \int _{\Omega }f dm = + \infty = \lim _{n \rightarrow +\infty } \int _{\Omega } g_n d m_n =\lim _{n \rightarrow +\infty } \int _{\Omega } f_n d m_n. \end{aligned}$$

Assume that \(\int _{\Omega } f dm < + \infty \) and consider \(h_n(x)=\min \{ l_{n-1}(x), \dots , l_{n-k}(x)\}\) for \(n>k\), where \(l_n(x) = \sum _{j=1}^k f_{n-j}(x)\). The sequence \((l_n)_n\) satisfies the inequality

$$\begin{aligned} l_{n+k} (x) \ge \sum _{j=1}^k \alpha _j l_{n+k-j}(x), \ \ \ \mathrm{for \ all \ } n\ge 1, x \in \Omega ; \end{aligned}$$

therefore, by Lemma 2.1 it follows that \((h_n)_n\) is an increasing sequence with

$$\begin{aligned} \lim _{n \rightarrow + \infty }h_n(x) = k \cdot f(x) \end{aligned}$$

for all \(x \in \Omega \). Moreover, for every \(n>k+1\) the term \(f_{n-k-1}(x)\) is an addend of each of \(l_{n-1}(x), \dots , l_{n-k}(x)\), so

$$\begin{aligned} f_{n-k-1}(x) \le h_n(x) \le k \cdot f(x); \end{aligned}$$

therefore, applying the Dominated Convergence Theorem for varying measures [30, Proposition 18, p.232] we conclude that

$$\begin{aligned} \lim _{n \rightarrow \infty } \int _{\Omega } f_ndm_n = \int _{\Omega } fdm. \end{aligned}$$

\(\square \)
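The following numerical sketch (illustrative only; the three-point space and the particular data are our own choice, not the paper’s) takes equality in (2) with \(k=2\), \(\alpha _1=\alpha _2=1/2\) on \(\Omega =\{0,1,2\}\) and checks the conclusion (6) of Theorem 2.4:

```python
# Illustrative sketch, not from the paper: Omega = {0,1,2}, k = 2,
# alpha_1 = alpha_2 = 1/2, equality in (2), and measures m_n -> m setwise.
f1 = [0.0, 1.0, 2.0]
f2 = [3.0, 0.0, 1.0]
fs = [f1, f2]
for _ in range(120):
    fs.append([0.5 * a + 0.5 * b for a, b in zip(fs[-1], fs[-2])])

# condition (2) holds (here with equality), pointwise on the three atoms
assert all(fs[n + 2][i] >= 0.5 * fs[n + 1][i] + 0.5 * fs[n][i] - 1e-12
           for n in range(len(fs) - 2) for i in range(3))

m = [0.5, 0.25, 0.25]                         # atom weights of the limit measure m
def m_n(n):                                   # m_n -> m setwise as n -> infinity
    return [(1.0 + 2.0 ** (-n)) * w for w in m]

f_lim = [(a + 2 * b) / 3 for a, b in zip(f1, f2)]   # pointwise limit of f_n
def integral(f, mu):
    return sum(fi * wi for fi, wi in zip(f, mu))

vals = [integral(fs[n], m_n(n)) for n in range(len(fs))]
target = integral(f_lim, m)
assert abs(vals[-1] - target) < 1e-9          # int f_n dm_n -> int f dm
```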

If we consider now the decreasing case (3), we have the following

Theorem 2.5

Let \(f_n: \Omega \rightarrow [0, +\infty ]\), \(n \in {{\mathbb {N}}}\), be a sequence of measurable functions such that (3) holds and let m and \(m_n\), \(n \in {{\mathbb {N}}}\), belong to \({\mathcal {M}}^+(\Omega )\). Then there exists a measurable function \(f: \Omega \rightarrow [0, +\infty ]\) such that \(\lim _{n \rightarrow \infty } f_n(x) =f(x)\) for all \(x \in \Omega \).

Suppose that

(2.5.i) \(f_1,f_2,\dots ,f_k \) are uniformly \((m_n)\)-integrable on \(\varOmega \) and \(f_1,f_2,\dots , f_k \in L^1(m) \);

(2.5.ii) \((m_n)_n\) is setwisely convergent to m.

Then, for all \(A \in {{\mathcal {A}}}\),

$$\begin{aligned} \lim _{n \rightarrow \infty } \int _{A} f_ndm_n = \int _{A} fdm. \end{aligned}$$
(7)

Proof

The existence of \( f: \Omega \rightarrow [0, + \infty ]\) such that \(\lim _{n \rightarrow +\infty }f_n(x)=f(x)\) for all \(x \in \Omega \) can be proved as in Theorem 2.4. It is sufficient now to show the equality (7) for \(A= \varOmega \). Fix \(x \in \Omega \) and define for \(n \ge k+1\),

$$\begin{aligned} g_n(x)=\max \{f_{n-1}(x), \dots , f_{n-k}(x)\}. \end{aligned}$$

By Lemma 2.1, \((g_n(x))_n\) is a decreasing sequence satisfying

$$\begin{aligned} f_{n-1}(x) \le g_n(x)\le g_{n-1}(x) \ \mathrm{for \ all} \ n>k+1 ; \end{aligned}$$

moreover, by the definition of \(g_n(x)\) we get

$$\begin{aligned} \lim _{n \rightarrow +\infty } g_n(x) = \lim _{n \rightarrow +\infty } f_n(x)= f(x). \end{aligned}$$

Now observe that if \(f_1\) and \(f_2\) are in \(L^1(m)\) and are uniformly \((m_n)\)-integrable on \(\Omega \), then the same is true for \(\max \{f_1,f_2\}= \frac{|f_1+f_2|+|f_1-f_2|}{2}\). Therefore, the function \(g_{k+1}=\max \{f_{k}, \dots , f_{1}\}\) belongs to \(L^1(m)\) and is uniformly \((m_n)\)-integrable.

By Proposition 2.3 it follows that

$$\begin{aligned} \lim _{n \rightarrow \infty } \int _{\Omega } g_{k+1}dm_n = \int _{\Omega } g_{k+1}dm. \end{aligned}$$

Besides, \(f_n(x) \le g_{n+1}(x) \le g_{k+1}(x)\) for \(n>k\), so applying the Lebesgue convergence theorem for setwise convergent measures ( [30, Proposition 18, p.232]) we get

$$\begin{aligned} \lim _{n \rightarrow \infty } \int _{\Omega } f_ndm_n = \int _{\Omega } fdm. \end{aligned}$$

\(\square \)

2.2 Weakly Convergent Measures

We now consider convergence theorems under convexity conditions on the functions, for weakly converging measures.

For the following two results we suppose that \(\Omega \) is a locally compact Hausdorff space and that \({\mathcal {A}}\) is its Borel \(\sigma \)-algebra. We denote by \(C_b(\Omega )\) the family of all bounded continuous functions on \(\Omega \).

We recall that a sequence \((m_n)_n\) converges weakly to m (\(m_n \xrightarrow []{w} m\), [21, Sect. 2.1]) if

$$\begin{aligned} \int _{\Omega } g dm_n \rightarrow \int _{\Omega } g dm, \ \ \mathrm{for \ all} \ \, g \in C_b(\Omega ). \end{aligned}$$
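A standard example (not from the paper) separating the two modes of convergence: the Dirac measures \(m_n=\delta _{1/n}\) converge weakly to \(\delta _0\) but not setwisely, since \(m_n(\{0\})=0\) for every n while \(\delta _0(\{0\})=1\). A numerical sketch:

```python
import math

# Illustrative sketch, not from the paper: m_n = delta_{1/n}, m = delta_0 on [0,1].
def integral_against_delta(g, point):
    return g(point)            # int g d(delta_point) = g(point)

g = lambda t: math.cos(t) + t ** 2     # a bounded continuous test function
# weak convergence: int g dm_n = g(1/n) -> g(0) = int g dm
errors = [abs(integral_against_delta(g, 1.0 / n) - g(0.0)) for n in (1, 10, 100, 1000)]
assert errors == sorted(errors, reverse=True) and errors[-1] < 1e-3

# setwise convergence fails at A = {0}: m_n({0}) = 0 for every n, while m({0}) = 1
mn_A, m_A = 0.0, 1.0
assert mn_A != m_A
```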

We have the following

Theorem 2.6

Let \(f_n: \Omega \rightarrow [0, +\infty ]\), \(n \in {{\mathbb {N}}}\), be a sequence of lower semicontinuous functions satisfying (2) and let m and \(m_n\), \(n \in {{\mathbb {N}}}\), belong to \({\mathcal {M}}^+(\Omega )\). Then there exists a measurable function \(f: \Omega \rightarrow [0, +\infty ]\) such that \(\lim _{n \rightarrow \infty } f_n(x) =f(x)\) for all \(x \in \Omega \). Suppose that

(2.6.i) f is uniformly \((m_n)\)-integrable on \(\varOmega \);

(2.6.ii) f is continuous;

(2.6.iii) \((m_n)_n\) is weakly convergent to m.

Then, for all \(A \in {{\mathcal {A}}}\),

$$\begin{aligned} \lim _{n \rightarrow \infty } \int _{A} f_ndm_n = \int _{A} fdm. \end{aligned}$$
(8)

Proof

The proof follows as in Theorem 2.4, except that we now apply the monotone convergence theorem for weakly convergent measures ( [17, Theorem 6.1]) when \(\int _{\Omega }f dm = + \infty \).

If \(\int _{\Omega } f dm < + \infty \) then, as in the previous result, we have that

$$\begin{aligned} f_{n-k}(x) \le k \cdot f(x) \ \ \mathrm{for \ all} \ n>k \ \textrm{and} \ x \in \Omega , \end{aligned}$$

and since by (2.6.i) the function f is uniformly \((m_n)\)-integrable it follows that the sequence \((f_n)_n \) is uniformly \((m_n)\)-integrable on \(\varOmega \). Consequently, by the Lebesgue convergence theorem for weakly convergent measures ([17, Corollary 5.1]), we get

$$\begin{aligned} \lim _{n \rightarrow \infty } \int _{\Omega } f_ndm_n = \int _{\Omega } fdm. \end{aligned}$$

\(\square \)

If condition (3) holds instead, then the following can be proved.

Theorem 2.7

Let \(f_n: \Omega \rightarrow [0, +\infty ]\), \(n \in {{\mathbb {N}}}\), be a sequence of lower semicontinuous functions such that (3) holds and let m and \(m_n\), \(n \in {{\mathbb {N}}}\), belong to \({\mathcal {M}}^+(\Omega )\). Then there exists a measurable function \(f: \Omega \rightarrow [0, +\infty ]\) such that \(\lim _{n \rightarrow \infty } f_n(x) =f(x)\) for all \(x \in \Omega \).

Suppose that

(2.7.i) \(f_1,f_2,\dots ,f_k \) are uniformly \((m_n)\)-integrable on \(\varOmega \) and \(f_1,f_2,\dots , f_k \in L^1(m) \);

(2.7.ii) f is continuous;

(2.7.iii) \((m_n)_n\) is weakly convergent to m.

Then, for all \(A \in {{\mathcal {A}}}\),

$$\begin{aligned} \lim _{n \rightarrow \infty } \int _{A} f_ndm_n = \int _{A} fdm. \end{aligned}$$
(9)

Proof

The proof follows as that of Theorem 2.5. Since

$$\begin{aligned} f_n(x) \le g_n(x) \le g_{k+1}(x) \ \ \mathrm{for \ all } \ n>k \end{aligned}$$

and since by (2.7.i) the function \(g_{k+1}\) is uniformly \((m_n)\)-integrable on \(\Omega \) (arguing as in the proof of Theorem 2.5), it follows that the sequence \((f_n)_n \) is uniformly \((m_n)\)-integrable as well. Therefore, applying the Lebesgue convergence theorem for weakly converging measures ( [17, Corollary 5.1]) we deduce that

$$\begin{aligned} \lim _{n \rightarrow \infty } \int _{\Omega } f_ndm_n = \int _{\Omega } fdm. \end{aligned}$$

\(\square \)

3 Convergence Results for Measures Satisfying Convexity Conditions

In this section we will consider limit theorems of the following type:

$$\begin{aligned} \displaystyle {\int }_{\Omega }f\,dm_n \rightarrow \displaystyle {\int }_{\Omega }f \,dm \end{aligned}$$

where the sequence of measures \((m_n)_n\) satisfies a convexity condition.

It is known (see [12, p. 30]) that if \((m_n)_n\) is a sequence of measures converging setwise to a set function m, then m is a measure if one of the following holds:

1) \((m_n)_n\) is an increasing sequence;

2) (Vitali–Hahn–Saks) m is finite-valued.

We first want to prove that if the monotonicity condition is replaced by one of the following inequalities of Copson type,

$$\begin{aligned} m_{n+k}(A) \ge \sum _{j=1}^k \alpha _j m_{n+k-j}(A), \ \ \ \mathrm{for \ all \ } n\ge 1, A \in {{\mathcal {A}}}, \end{aligned}$$
(10)

or

$$\begin{aligned} m_{n+k}(A) \le \sum _{j=1}^k \alpha _j m_{n+k-j}(A), \ \ \ \mathrm{for \ all \ } n\ge 1, A \in {{\mathcal {A}}}, \end{aligned}$$
(11)

where k is a fixed positive integer, the coefficients \(\alpha _j\) are strictly positive and \(\sum _{j=1}^k \alpha _j =1\), we still obtain that m is a measure.

Proposition 3.1

Let \((m_n)_n\) be a sequence in \({\mathcal {M}}^+(\Omega )\) converging setwisely to a set function \(m:{{\mathcal {A}}} \rightarrow {{\mathbb {R}}}\). If (10) holds then m is a measure.

Proof

Fix \(A \in {{\mathcal {A}}}\) and for \(n \ge k+1\) let

$$\begin{aligned} \nu _n(A)=\min \{m_{n-1}(A), \dots , m_{n-k}(A)\}. \end{aligned}$$

Then \(\nu _n(A) \le m_n(A)\) for each \(n>k\); indeed,

$$\begin{aligned} m_n(A) \ge \sum _{j=1}^k \alpha _j m_{n-j}(A) \ge \sum _{j=1}^k \alpha _j \nu _{n}(A) =\nu _n(A). \end{aligned}$$

Moreover, \(\nu _n(A) \le \nu _{n+1}(A)\) for all \(n > k\), since

$$\begin{aligned} \nu _{n+1}(A)= & {} \min \{m_{n}(A), \dots , m_{n-k+1}(A)\}\\\ge & {} \min \{m_{n-k}(A), \min \{m_{n}(A), \dots , m_{n-k+1}(A)\}\}\\= & {} \min \{m_n(A), \nu _n(A)\} = \nu _n(A). \end{aligned}$$

Therefore, \((\nu _n)_n\) is an increasing sequence of measures; thus it converges setwisely to a measure \(\nu \). We want to prove that the sequence \((m_n)_n\) converges to \(\nu \) as well. Fix \(A \in {{\mathcal {A}}}\). If \(\nu (A) = +\infty \), then, since

$$\begin{aligned} \nu _n(A) \le m_n(A) \ \ \mathrm{for \ all } \ n>k, \end{aligned}$$

we get \(m_n(A) \rightarrow +\infty \). Assume now that \(\lim _{n \rightarrow + \infty } \nu _n(A)=\nu (A) < +\infty \). Then for every \(\varepsilon >0 \) there exists \(n_{\varepsilon } \in {{\mathbb {N}}}\) such that, whenever \(n >n_{\varepsilon }\),

$$\begin{aligned} \nu (A) -\varepsilon< \nu _n(A) \le \nu (A). \end{aligned}$$

If \(1 \le s \le k\),

$$\begin{aligned} m_{n+s}(A)\ge & {} \alpha _s m_n(A) + \sum _{t\not = s} \alpha _t m_{n+s-t}(A)\\\ge & {} \alpha _s m_n(A) + \sum _{t\not = s} \alpha _t \nu _{n+s}(A)\\= & {} \alpha _s m_n(A) +(1-\alpha _s)\nu _{n+s}(A)\\\ge & {} \alpha _s m_n(A) +(1-\alpha _s)(\nu (A) - \varepsilon ). \end{aligned}$$

For each \(n > n_{\varepsilon }\) there is \(1 \le {\bar{s} }\le k\) for which \(m_{n+{\bar{s}}}(A)= \nu _{n+k+1}(A)\). Then

$$\begin{aligned} \nu (A)\ge & {} \nu _{n+k+1}(A) = m_{n+{\bar{s}}}(A) \ge (1- \alpha _{\bar{s}}) (\nu (A) -\varepsilon ) + \alpha _{\bar{s}} m_n(A) \\= & {} m_n(A) + (1- \alpha _{\bar{s}}) (\nu (A) -\varepsilon -m_n(A)). \end{aligned}$$

Also \(\nu (A) -\varepsilon < \nu _n(A) \le m_n(A)\), so that \(\nu (A) -\varepsilon -m_n(A) \le 0\); hence, denoting by \(\alpha \) the least of the coefficients \(\alpha _1, \dots , \alpha _k\), from

$$\begin{aligned} \nu (A) \ge m_n(A) + (1- \alpha ) (\nu (A) -\varepsilon -m_n(A))= \alpha m_n(A) + (1-\alpha ) (\nu (A) -\varepsilon ) \end{aligned}$$

we get

$$\begin{aligned} \alpha m_n(A) \le \nu (A) -(1- \alpha )(\nu (A) -\varepsilon ). \end{aligned}$$

Therefore

$$\begin{aligned} m_n(A) \le \nu (A) + \frac{1- \alpha }{\alpha } \varepsilon \end{aligned}$$

whence

$$\begin{aligned} \nu (A) -\varepsilon < \nu _n(A) \le m_n(A) \le \nu (A) + \frac{1- \alpha }{\alpha } \varepsilon . \end{aligned}$$

This implies that the sequence \((m_n)_n\) setwise converges to \(\nu \), which is a measure; since by hypothesis \((m_n)_n\) converges to m, it follows that \(m=\nu \) is a measure. \(\square \)

An analogous result holds in the case of the reverse inequality in the convex combination, assuming that the measures \(m_1, m_2, \dots , m_k\) are finite-valued.

Proposition 3.2

Let \((m_n)_n\) be a sequence in \({\mathcal {M}}^+(\Omega )\) converging setwise to a set function \(m:{{\mathcal {A}}} \rightarrow {{\mathbb {R}}}\). If (11) holds and \(m_1, m_2, \dots , m_k\) are finite-valued, m is a measure.

Proof

The proof is similar to that of Proposition 3.1. In this case, considering for \(n>k\) \(\nu _n(A)=\max \{m_{n-1}(A), \dots , m_{n-k}(A)\}\), we have

$$\begin{aligned} \nu _n(A) \ge m_{n}(A) \ \ \textrm{and} \ \ \nu _n(A) \ge \nu _{n+1}(A). \end{aligned}$$

Since \(\nu _{k+1}(A)=\max \{m_{k}(A), \dots , m_{1}(A)\}\) is finite-valued, reasoning as before we obtain that there is a coefficient \(0< \alpha <1\) such that

$$\begin{aligned} \nu (A) -\frac{1- \alpha }{\alpha } \varepsilon \le m_n(A) \le \nu (A) + \varepsilon \end{aligned}$$

and the thesis follows. \(\square \)

Besides, a sequence \((m_n)_n\) for which (10) holds can be shown to satisfy a domination condition.

Proposition 3.3

Let \((m_n)_n\) be a sequence in \({\mathcal {M}}^+(\Omega )\) converging setwise to a set function \(m: {{\mathcal {A}}} \rightarrow {{\mathbb {R}}}\) and satisfying (10). Then for all \(A \in {{\mathcal {A}}}\) and \(n\ge 1\), \(m_n(A) \le k m(A) \).

Proof

Fix \(A \in {{\mathcal {A}}}\). For \(n \ge k+1\) consider the sequence of measures \((\nu _n)_n\) defined by

$$\begin{aligned} \nu _n(A)= m_{n-1}(A)+\dots +m_{n-k}(A). \end{aligned}$$

Then

$$\begin{aligned} \nu _n(A)\ge & {} \sum _{j=1}^k \alpha _j m_{n-1-j}(A) + \dots + \sum _{j=1}^k \alpha _j m_{n-k-j}(A)\\= & {} \alpha _1(m_{n-2}(A) +\dots + m_{n-k-1}(A))+\alpha _2(m_{n-3}(A) +\dots + m_{n-k-2}(A))\\+ & {} \dots +\alpha _k(m_{n-1-k}(A) +\dots + m_{n-2k}(A))\\= & {} \sum _{j=1}^k \alpha _j \nu _{n-j}(A) \end{aligned}$$

so the sequence \((\nu _n(A))_n\) also verifies Copson’s inequality, and if for \(n \ge k+1\)

$$\begin{aligned} \eta _n(A)= \min \{\nu _{n-1}(A), \dots , \nu _{n-k}(A)\} \end{aligned}$$

then \(\eta _n(A) \le \eta _{n+1}(A)\) for all \(n >k\), and it follows by Lemma 2.1 that the sequence \((\eta _n(A))_n\) either diverges or converges to some \(\lambda (A)\). If it diverges there is nothing to prove; otherwise, assume that it converges to \(\lambda (A)\). Then by Proposition 3.1 it follows that \(\lambda \) is a measure and the sequence \((\nu _n)_n\) is also setwisely convergent to \(\lambda \).

On the other hand by Proposition 3.1 the sequence \((m_n(A))_n\) converges to m(A), and from the definition of the sequence of measures \((\nu _n)_n\) it follows that \(\lambda (A)= k m(A)\).

To prove that \(m_n(A) \le k m(A) \), we observe that

$$\begin{aligned} \eta _n(A)= & {} \min \{\nu _{n-1}(A), \dots , \nu _{n-k}(A)\}\\= & {} \min \{(m_{n-2}(A)+\dots +m_{n-k-1}(A)), \dots , (m_{n-k-1}(A)\\{} & {} +\dots +m_{n-2k}(A))\} \end{aligned}$$

and since the term \(m_{n-k-1}(A)\) is an addend in every entry of the minimum, we get

$$\begin{aligned} m_{n-k-1}(A) \le \eta _n(A) \le km(A). \end{aligned}$$

Therefore

$$\begin{aligned} m_{n}(A) \le \eta _{n+k+1}(A) \le km(A) \ \ \mathrm{for \ all} \ n \in {{\mathbb {N}}} \ \textrm{and} \ A \in {{\mathcal {A}}} \end{aligned}$$

and the thesis follows. \(\square \)
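The domination \(m_n(A)\le k\,m(A)\) can be observed numerically; in the sketch below (our own illustrative data, with equality in (10), \(k=2\), \(\alpha _1=\alpha _2=1/2\), on a two-atom space) the inequality is checked atom by atom, which suffices for every \(A\in {{\mathcal {A}}}\) on a finite space:

```python
# Illustrative sketch, not from the paper: Proposition 3.3 on a two-atom space,
# k = 2, alpha_1 = alpha_2 = 1/2, equality in (10):
# m_{n+2}(A) = (m_{n+1}(A) + m_n(A)) / 2.
k = 2
m1 = [0.4, 0.6]
m2 = [0.5, 0.3]
ms = [m1, m2]
for _ in range(120):
    ms.append([0.5 * a + 0.5 * b for a, b in zip(ms[-1], ms[-2])])

m = [(a + 2 * b) / 3 for a, b in zip(m1, m2)]   # setwise limit measure
assert all(abs(a - b) < 1e-12 for a, b in zip(ms[-1], m))
# every m_n in this example is dominated by k*m, atom by atom
# (hence on every set A of the finite algebra)
assert all(w <= k * mw + 1e-12 for mn in ms for w, mw in zip(mn, m))
```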

Now we are able to prove the convergence results of this section.

Theorem 3.4

Let \(f:\Omega \rightarrow {{\mathbb {R}}}\) be a nonnegative measurable function and let \((m_n)_n\) be a sequence in \({\mathcal {M}}^+(\Omega )\) convergent setwisely to a set function \(m:{{\mathcal {A}}} \rightarrow {{\mathbb {R}}}\) and satisfying (10). Then for all \(E \in {{\mathcal {A}}}\)

$$\begin{aligned} \lim _{n \rightarrow + \infty } \int _E f dm_n =\int _E fdm. \end{aligned}$$
(12)

Proof

It is sufficient to prove the equality (12) for \(E= \varOmega \). For every \(A \in {{\mathcal {A}}}\) and for all \(n \ge k+1\) let

$$\begin{aligned} \nu _n(A)=\min \{m_{n-1}(A), \dots , m_{n-k}(A)\}. \end{aligned}$$

Then as in Proposition 3.1 we get that the sequence \((\nu _n(A))_n\) is increasing and \(\nu _n(A) \le m_n(A)\) for all \(n>k\). Moreover, the sequence \((\nu _n)_n\) converges to m. So it follows by the convergence theorem for monotone measures ( [21, Theorem 2.1 (c)]) that

$$\begin{aligned} \lim _{n \rightarrow + \infty } \int _{\Omega } f d \nu _n =\int _{\Omega } fdm. \end{aligned}$$

If \(\int _{\Omega } fdm = + \infty \), then one can see that

$$\begin{aligned} + \infty = \int _{\Omega } fdm = \lim _{n \rightarrow + \infty } \int _{\Omega } f d \nu _n \le \lim _{n \rightarrow + \infty } \int _{\Omega } f d m_n \end{aligned}$$

and the assertion is proved.

Assume now that \(\int _{\Omega } fdm < + \infty \), i.e. \(f \in L^1(m)\). It follows by Proposition 3.3 that \(m_n(A) \le k m(A) \) for all \(n \in {{\mathbb {N}}}\) and \(A \in {{\mathcal {A}}}\); moreover,

$$\begin{aligned} \int _{\Omega } fd(km) = k \int _{\Omega }fdm < + \infty ; \end{aligned}$$

therefore, the statement follows from [21, Theorem 2.1 (b)]. \(\square \)

For the opposite inequality we have the next result.

Theorem 3.5

Let \(f:\Omega \rightarrow {{\mathbb {R}}}\) be a nonnegative measurable function and let \((m_n)_n\) be a sequence in \({\mathcal {M}}^+(\Omega )\) converging setwise to a set function \(m: {{\mathcal {A}} } \rightarrow {{\mathbb {R}}}\). Assume that (11) holds and \(\int _{\Omega } f dm_j < \infty \), for \(j=1, \dots , k\). Then for all \(E \in {{\mathcal {A}}}\)

$$\begin{aligned} \lim _{n \rightarrow + \infty } \int _E f dm_n =\int _E fdm. \end{aligned}$$
(13)

Proof

It is sufficient to prove the equality (13) for \(E= \varOmega \). For every \(A \in {{\mathcal {A}}}\) and for all \(n \ge k+1\) let

$$\begin{aligned} \nu _n(A)=\max \{m_{n-1}(A), \dots , m_{n-k}(A)\}. \end{aligned}$$

Then as in Proposition 3.2 we get that the sequence \((\nu _n(A))_n\) is decreasing and \(\nu _n(A) \ge m_n(A)\) for all \(n>k\). Moreover, the sequence \((\nu _n)_n\) converges setwisely to the measure m. Also, for all \(n>k\),

$$\begin{aligned} \int _{\Omega } f d\nu _{k+1} \ge \int _{\Omega } f d \nu _n \ge \int _{\Omega } f d m_n , \end{aligned}$$

so it follows by [21, Theorem 2.1 (b)] that

$$\begin{aligned} \lim _{n \rightarrow + \infty } \int _{\Omega } f d m_n =\int _{\Omega } fdm. \end{aligned}$$

\(\square \)
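As a numerical sketch of Theorem 3.5 (illustrative only, not from the paper), take \(m_n=(1+2^{-n})m\) on a three-point space: the factors \(c_n=1+2^{-n}\) satisfy the reverse inequality (11) with \(k=2\), \(\alpha _1=\alpha _2=1/2\), and \(\int f\,dm_n=c_n\int f\,dm\rightarrow \int f\,dm\):

```python
# Illustrative sketch, not from the paper: m_n = (1 + 2^{-n}) m on three atoms.
# The factors c_n = 1 + 2^{-n} satisfy (11): c_{n+2} <= (c_{n+1} + c_n)/2.
c = [1.0 + 2.0 ** (-n) for n in range(1, 42)]
assert all(c[i + 2] <= 0.5 * c[i + 1] + 0.5 * c[i] for i in range(len(c) - 2))

m = [0.5, 0.25, 0.25]          # atom weights of the limit measure m
f = [2.0, 0.0, 4.0]            # a nonnegative function on the three atoms

def integral(n):
    scale = 1.0 + 2.0 ** (-n)  # m_n = c_n * m
    return sum(fi * scale * wi for fi, wi in zip(f, m))

target = sum(fi * wi for fi, wi in zip(f, m))     # int f dm = 2.0
errs = [abs(integral(n) - target) for n in range(1, 40)]
assert all(b <= a for a, b in zip(errs, errs[1:]))
assert errs[-1] < 1e-9         # int f dm_n -> int f dm, as in (13)
```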

4 Application to Measure Differential Equations

In this last section we apply a previously obtained convergence result in order to get a continuous dependence property for the measure differential equation

$$\begin{aligned} dx(t)=f(t,x(t))dm,\quad x(0)=x_0, \end{aligned}$$
(14)

where \(\Omega =[0,1]\) and \({\mathcal {A}}\) is its Borel \(\sigma \)-algebra, \(m\in {\mathcal {M}}^+([0,1])\) and \(f:[0,1]\times {\mathbb {R}}^d\rightarrow {\mathbb {R}}^d\).

A function \(x:[0,1]\rightarrow {\mathbb {R}}^d\) is a solution of this problem if

$$\begin{aligned} x(t)=x_0+\int _{[0,t)} f(s,x(s))dm(s) \quad \text{ for } \text{ every } t\in [0,1], \end{aligned}$$

where the integral is understood in Lebesgue sense.
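When m is purely atomic with finitely many atoms, the integral equation reduces to a finite recursion: the solution is a left-continuous step function that jumps by \(f(t_i,x(t_i))\,m(\{t_i\})\) at each atom \(t_i\). The sketch below (our own illustration; the helper `solve_atomic` is not from the paper) implements this:

```python
# Illustrative sketch; solve_atomic is our own helper, not from the paper.
# For purely atomic m with atoms t_1 < ... < t_N (weights w_i) in [0,1), the
# solution of x(t) = x0 + int_{[0,t)} f(s, x(s)) dm(s) is the left-continuous
# step function with x(t) = x0 + sum_{t_i < t} f(t_i, x(t_i)) * w_i.
def solve_atomic(f, x0, atoms):
    """Return [(t, value just after t)] for t = 0 and each atom (t_i, w_i)."""
    x, out = x0, [(0.0, x0)]
    for t_i, w_i in atoms:
        x = x + f(t_i, x) * w_i        # jump of size f(t_i, x(t_i)) * m({t_i})
        out.append((t_i, x))
    return out

# dx = x dm with m = delta_{1/4} + delta_{1/2} (unit atoms), x(0) = 1:
vals = solve_atomic(lambda t, x: x, 1.0, [(0.25, 1.0), (0.5, 1.0)])
assert [v for _, v in vals] == [1.0, 2.0, 4.0]   # x doubles at each unit atom
```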

We recall that every finite Borel measure on the real line coincides with the Lebesgue–Stieltjes measure induced by some non-decreasing left-continuous function (see [3, Theorem 3.21]); consequently, looking for solutions in the described sense for such an equation is equivalent to looking for solutions of a Stieltjes differential equation (we refer to [19] or [27]).

A global existence and uniqueness result for measure differential equations under Lipschitz assumptions on the right-hand side, stated for the equivalent formulation with Stieltjes derivative, was given in [19, Theorem 7.3] (see also [13, Theorem 5.3], [19, Theorem 7.4] for local results). We can also refer to [33, Theorem 5.4].

Theorem 4.1

Let \(f:[0,1]\times {\mathbb {R}}^d\rightarrow {\mathbb {R}}^d\) satisfy:

i) for every \(x\in {\mathbb {R}}^d\), \(f(\cdot ,x)\) is measurable;

ii) \(f(\cdot ,x_0)\) is Lebesgue-integrable w.r.t. m;

iii) there exists a map \(L:[0,1]\rightarrow [0, + \infty )\) Lebesgue-integrable with respect to m such that

$$\begin{aligned} \Vert f(t,x)-f(t,y)\Vert \le L(t)\Vert x-y\Vert ,\quad \textrm{for} \; m-a.e.\; t\in [0,1],\;x,y\in {\mathbb {R}}^d. \end{aligned}$$

Then (14) has a unique solution on [0, 1].

We remind the reader that a function \(h: [0,1] \rightarrow {\mathbb {R}}\) is called regulated (we refer to [18] for a detailed discussion on this notion) if there exist

$$\begin{aligned} h(t+)=\lim _{t'\rightarrow t+}h(t'), \ \mathrm{for \ all} \ t \in [0,1),\quad h(s-)=\lim _{s'\rightarrow s-}h(s'), \ \mathrm{for \ all} \ s \in (0,1]. \end{aligned}$$

The following related concept ( [18]) is very useful for getting compactness of families of regulated functions: a set \({\mathcal {F}}\) of \({\mathbb {R}}^d\)-valued regulated functions on [0, 1] is said to be equi-regulated if for every \(\varepsilon >0\) and every \(t_0\in [0,1]\) there exists \(\delta >0\) such that, for all \(f\in {\mathcal {F}}\),

i) \(\Vert f(t)-f(t_0-)\Vert <\varepsilon \) whenever \(t_0-\delta<t<t_0\);

ii) \(\Vert f(s)-f(t_0+)\Vert <\varepsilon \) whenever \(t_0<s<t_0+\delta \).

We also recall, for completeness, a recent Gronwall inequality for measure differential equations.

Theorem 4.2

([26, Corollary 4.5]) Let \(u,K,L:[0,1)\rightarrow [0, + \infty )\) be such that \(L, K \cdot L, u \cdot L\) are Lebesgue-integrable w.r.t. the measure \(m\in {\mathcal {M}}^+([0,1])\). If

$$\begin{aligned} u(t)\le K(t)+\int _{[0,t)}L(s)u(s)dm(s),\; \mathrm{for\; every}\;t\in [0,1), \end{aligned}$$

then

$$\begin{aligned} u(t)\le K(t)+\int _{[0,t)}K(s)L(s)e^{\int _{[s,t)}L(\tau )dm(\tau )}dm(s),\; \mathrm{for\; every}\;t\in [0,1). \end{aligned}$$
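The Gronwall bound of Theorem 4.2 can be tested numerically for a purely atomic measure; in the sketch below (illustrative data of ours, with constant K and L) the function u is built so that the hypothesis holds with equality, and the conclusion is verified at \(t=1\):

```python
import math

# Illustrative sketch, not from the paper: Theorem 4.2 for a purely atomic m,
# constant K = L = 1, and u chosen so that the hypothesis holds with equality.
atoms = [(0.2, 0.5), (0.4, 0.5), (0.6, 0.5), (0.8, 0.5)]   # (t_i, m({t_i}))
K, L = 1.0, 1.0

def u_values():
    # u(t_i) = K + int_{[0,t_i)} L u dm, computed recursively over the atoms
    vals = []
    for i in range(len(atoms)):
        vals.append(K + sum(L * vals[j] * atoms[j][1] for j in range(i)))
    return vals

u_atoms = u_values()                              # u at the atoms: 1, 1.5, 2.25, 3.375
u_end = K + sum(L * u * w for u, (_, w) in zip(u_atoms, atoms))   # u(1) = 5.0625

def gronwall_bound(t):
    # K + int_{[0,t)} K L exp(int_{[s,t)} L dm) dm(s), for this atomic m
    return K + sum(K * L * math.exp(sum(L * w for s, w in atoms if t_i <= s < t)) * w_i
                   for t_i, w_i in atoms if t_i < t)

assert u_atoms == [1.0, 1.5, 2.25, 3.375]
assert u_end <= gronwall_bound(1.0)               # the Gronwall estimate holds
```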

We present now the main result of this section on the behavior of the solution of (14) when the measure m is varying.

Theorem 4.3

Let \(f:[0,1]\times {\mathbb {R}}^d\rightarrow {\mathbb {R}}^d\) satisfy:

i) for every \(x\in {\mathbb {R}}^d\), \(f(\cdot ,x)\) is measurable;

ii) \(f(\cdot ,x_0)\in L^1(m)\);

iii) there exists a map \(L:[0,1]\rightarrow [0, + \infty )\) Lebesgue-integrable with respect to m such that

$$\begin{aligned} \Vert f(t,x)-f(t,y)\Vert \le L(t)\Vert x-y\Vert ,\quad \mathrm{for\; all\;}t\in [0,1],\;x,y\in {\mathbb {R}}^d; \end{aligned}$$

iv) there exists a map \(M:[0,1]\rightarrow [0, + \infty )\) Lebesgue-integrable with respect to m such that

$$\begin{aligned} \Vert f(t,x)\Vert \le M(t), \quad \mathrm{for\; all\;}t\in [0,1],\;x\in {\mathbb {R}}^d. \end{aligned}$$

Let \((m_n)_n\) be a sequence in \({\mathcal {M}}^+([0,1])\) setwise convergent to \(m\in {\mathcal {M}}^+ ([0,1])\) and satisfying (10).

Then the sequence \((x_n)_n\) of solutions of the measure differential problems

$$\begin{aligned} dx(t)=f(t,x(t))dm_n,\quad x(0)=x_0 \end{aligned}$$
(15)

converges uniformly on [0, 1] to the solution x of (14).

Proof

Let us first note that, since \(m_n\le k m\) for all \(n\in {\mathbb {N}}\) by Proposition 3.3, the functions \(f(\cdot ,x_0)\), L and M are also Lebesgue-integrable w.r.t. every \(m_n\).

Besides, the assumptions on f ensure that for any function \(y:[0,1]\rightarrow {\mathbb {R}}^d\), the map \(f(\cdot ,y(\cdot ))\) is measurable (therefore, by hypothesis iv) and the previous observation, Lebesgue-integrable w.r.t. m and also w.r.t. \(m_n\), for every \(n\in {\mathbb {N}}\)).

Let us now show that \((\Vert x_n-x\Vert )_n\) is uniformly bounded on [0, 1]. Indeed, for any \(n \in {{\mathbb {N}}}\) and \(t \in [0,1]\),

$$\begin{aligned} \Vert x_n(t)-x(t)\Vert= & {} \left\| \int _{[0,t)}f(s,x_n(s))dm_n(s)- \int _{[0,t)}f(s,x(s))dm(s) \right\| \\\le & {} \int _{[0,t)}\left\| f(s,x_n(s))\right\| dm_n(s)+\int _{[0,t)}\left\| f(s,x(s))\right\| dm(s) \end{aligned}$$

and using Proposition 3.3 brings us to

$$\begin{aligned} \Vert x_n(t)-x(t)\Vert\le & {} \int _{[0,t)}\left\| f(s,x_n(s))\right\| d(km)(s)+\int _{[0,t)}\left\| f(s,x(s))\right\| dm(s)\\\le & {} (k+1)\int _{[0,t)}M(s)dm(s) \le (k+1)\int _{[0,1)}M(s)dm(s). \end{aligned}$$

We can write, for each \(t\in [0,1]\),

$$\begin{aligned} \Vert x_n(t)-x(t)\Vert{} & {} = \left\| \int _{[0,t)}f(s,x_n(s))dm_n(s)-\int _{[0,t)}f(s,x(s))dm(s) \right\| \\{} & {} \le \int _{[0,t)}\left\| f(s,x_n(s))-f(s,x(s))\right\| dm_n(s)\\{} & {} +\left\| \int _{[0,t)}f(s,x(s))d m_n(s)-\int _{[0,t)}f(s,x(s))d m(s) \right\| . \end{aligned}$$

Applying Theorem 3.4,

$$\begin{aligned} \int _{[0,\cdot )}f(s,x(s))d m_n(s) \rightarrow \int _{[0,\cdot )}f(s,x(s))d m(s)\; \mathrm{pointwise.} \end{aligned}$$

But the sequence \(\left( \int _{[0,\cdot )}f(s,x(s))d m_n(s)\right) _n\) is equi-regulated by [18, Theorem 3.10], as there is a nondecreasing function given on [0, 1] by

$$\begin{aligned} h(t)=\int _{[0,t)}\Vert f(s,x(s))\Vert d(km)(s) \end{aligned}$$

satisfying, for every \(0\le t<t'\le 1\) and every \(n \in {\mathbb N}\),

$$\begin{aligned}{} & {} \left\| \int _{[0,t)}f(s,x(s))d m_n(s)-\int _{[0,t')}f(s,x(s))d m_n(s) \right\| \\= & {} \left\| \int _{[t,t')}f(s,x(s))d m_n(s) \right\| \\\le & {} \int _{[t,t')}\left\| f(s,x(s))\right\| d m_n(s) \\\le & {} \int _{[t,t')}\left\| f(s,x(s))\right\| d (km)(s)\\= & {} h(t')-h(t). \end{aligned}$$

As is well known ( [18, Theorem 3.3]), any equi-regulated, pointwise convergent sequence of regulated functions converges uniformly; therefore

$$\begin{aligned} \int _{[0,\cdot )}f(s,x(s))d m_n(s) \rightarrow \int _{[0,\cdot )}f(s,x(s))d m(s)\; \textrm{uniformly,} \end{aligned}$$

i.e. for every \(\varepsilon >0\) one can find \(n_{\varepsilon }\in {\mathbb {N}}\) such that

$$\begin{aligned} \left\| \int _{[0,t)}f(s,x(s))d m_n(s)-\int _{[0,t)}f(s,x(s))d m(s) \right\| <\varepsilon ,\;\mathrm{for\;all\;}n\ge n_{\varepsilon },\;t\in [0,1]. \end{aligned}$$

Using now the Lipschitz assumption on f, for every such n we get

$$\begin{aligned} \Vert x_n(t)-x(t)\Vert \le \int _{[0,t)} L(s)\Vert x_n(s)-x(s)\Vert d m_n(s)+\varepsilon ,\;\mathrm{for\;every\;}t\in [0,1]. \end{aligned}$$

As \((\Vert x_n-x\Vert )_n\) is bounded on [0, 1], we can apply, for each \(n\ge n_{\varepsilon }\), the Gronwall type result ( [26, Corollary  4.5]), in order to deduce that

$$\begin{aligned} \Vert x_n(t)-x(t)\Vert \le \int _{[0,t)} L(s) \varepsilon e^{\int _{[s,t)}L(\tau )d m_n(\tau )} d m_n(s)+\varepsilon ,\;\mathrm{for\;every\;}t\in [0,1). \end{aligned}$$

Again by Theorem 3.4, the sequence \((\int _{[0,1)}L(\tau )d m_n(\tau ))_n\) is convergent, therefore bounded, say by \(M_1>0\), whence

$$\begin{aligned} \int _{[s,t)}L(\tau )d m_n(\tau ) \le M_1,\quad \; \mathrm{for \; all \;}s<t \in [0,1],\; n\in {\mathbb {N}}. \end{aligned}$$

Consequently, for all \(t \in [0,1]\),

$$\begin{aligned} \int _{[0,t)} L(s) e^{\int _{[s,t)}L(\tau )d m_n(\tau )} d m_n(s) \le \int _{[0,t)} L(s) e^{M_1} d m_n(s)\le M_1 e^{M_1},\;\forall \;n\in {\mathbb {N}} \end{aligned}$$

and so

$$\begin{aligned} \Vert x_n(t)-x(t)\Vert \le \varepsilon \left( 1+M_1 e^{M_1}\right) ,\;\mathrm{for\;every\;}t\in [0,1)\;\mathrm{and \;}n\ge n_{\varepsilon }. \end{aligned}$$

Due to the left-continuity of \(x_n\) and x we get

$$\begin{aligned} \Vert x_n(1) -x(1)\Vert \le \varepsilon \left( 1+M_1 e^{M_1}\right) , \;\mathrm{for\;every\;}n\ge n_{\varepsilon } \end{aligned}$$

and the proof is complete. \(\square \)
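A numerical sketch of the continuous dependence (illustrative only, not from the paper): for \(dx=x\,dm_n\), \(x(0)=1\), with purely atomic \(m_n=(1-2^{-n})m\), the scaling factors \(c_n=1-2^{-n}\) satisfy (10) with \(k=2\), \(\alpha _1=\alpha _2=1/2\), the measures \(m_n\) converge setwisely to m, and the solutions converge to the solution driven by m:

```python
# Illustrative sketch, not from the paper: dx = x dm_n, x(0) = 1, with
# m_n = (1 - 2^{-n}) m and m purely atomic with three atoms.
atoms = [(0.25, 0.3), (0.5, 0.2), (0.75, 0.4)]   # (t_i, m({t_i}))

def solution_values(scale):
    # values of the solution after each atom of scale*m (left-continuous steps)
    x, out = 1.0, [1.0]
    for _, w in atoms:
        x *= 1.0 + scale * w        # jump: x -> x + x * (scale*m)({t_i})
        out.append(x)
    return out

# Copson's inequality (10) for the scaling factors c_n = 1 - 2^{-n}:
c = [1.0 - 2.0 ** (-n) for n in range(1, 40)]
assert all(c[i + 2] >= 0.5 * c[i + 1] + 0.5 * c[i] for i in range(len(c) - 2))

x_lim = solution_values(1.0)
sup_err = [max(abs(a - b) for a, b in zip(solution_values(1.0 - 2.0 ** (-n)), x_lim))
           for n in range(1, 30)]
assert all(b <= a for a, b in zip(sup_err, sup_err[1:]))   # errors decrease
assert sup_err[-1] < 1e-7        # x_n -> x uniformly over the jump points
```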