1 Introduction

Assume that \((\Omega ,{\mathcal {A}},P)\) is a probability space, \(f:[0,1]\times \Omega \rightarrow [0,1]\) is a function satisfying the boundary condition

$$\begin{aligned} f(0,\omega )=0\quad \hbox {and}\quad f(1,\omega )=1\quad \hbox {for every }\omega \in \Omega , \end{aligned}$$
(1)

\(g:[0,1]\rightarrow \mathbb R\) is a bounded function with \(g(0)=g(1)=0\) and \(a,b\in \mathbb R\) are fixed numbers. We are interested in bounded solutions \(\varphi :[0,1]\rightarrow \mathbb R\) of the following iterative functional equation

$$\begin{aligned} \varphi (x)=\int _{\Omega }\varphi (f(x,\omega ))dP(\omega )+g(x). \end{aligned}$$
(E\(_g\))

We say that a function \(\varphi :[0,1]\rightarrow \mathbb R\) is a solution of equation (E\(_g\)) if for every \(x\in [0,1]\) the function \(\varphi \circ f(x,\cdot )\) is measurable and (E\(_g\)) holds.

The main purpose of this paper is to describe all solutions of equation (E\(_g\)) in some classes of bounded functions \(h:[0,1]\rightarrow \mathbb R\) such that \(h(0)=a\) and \(h(1)=b\). We are also interested in the question under which assumptions any bounded solution \(\varphi :[0,1]\rightarrow \mathbb R\) of equation (E\(_g\)) with a certain property can be expressed in the form \(\varphi =\Phi +\varphi _*\), where \(\Phi \) is a solution of the equation

$$\begin{aligned} \Phi (x)=\int _{\Omega }\Phi (f(x,\omega ))dP(\omega ) \end{aligned}$$
(E\(_0\))

having the same property as \(\varphi \), and \(\varphi _*\) is a specific solution of equation (E\(_g\)). This problem seems easy to answer, but the difficulty is that the classes considered in this paper are not linear spaces. It is not even clear when the existence of a solution with a certain property of one of the equations (E\(_g\)) and (E\(_0\)) implies the existence of a solution of the other equation with the same property. Such a problem is quite natural in the theory of functional equations and has been studied several times by many authors for different functional equations in various classes of functions, mainly in cases where the class of functions considered forms a vector space.

Functional equations (E\(_0\)) and (E\(_g\)), as well as their generalizations and special cases, are investigated in various classes of functions in connection with their appearance in miscellaneous fields of science (for more details see [1, Chapter XIII], [2, Chapters 6, 7] and [3, Sect. 4]). As emphasized in [2, Sect. 0.3], iteration is the fundamental technique for solving functional equations in a single variable, and iterates usually appear in the formulae for solutions. In most cases such formulae are obtained by taking the limit of sequences in which iterates are involved. In this paper we make use of this fundamental technique, but the goal is to apply a subclass of Banach limits instead of the ordinary limit. The idea of replacing the limit by a Banach limit seems natural, because a Banach limit of a bounded sequence always exists, whereas additional assumptions are needed to guarantee the existence of the ordinary limit of such a sequence.

This paper is organized as follows. Section 2 contains the notation and basic tools required for our considerations. In Sects. 3 and 4 we describe bounded solutions \(\varphi :[0,1]\rightarrow \mathbb R\) with \(\varphi (0)=a\) and \(\varphi (1)=b\) of equations (E\(_0\)) and (E\(_g\)), respectively. Finally, in Sect. 5, we formulate some consequences of the main results obtained and we present a few examples of the possible applications of those results.

2 Preliminaries

Denote by \(B([0,1],\mathbb R)\) the space of all bounded functions \(h:[0,1]\rightarrow \mathbb R\) endowed with the supremum norm, and by \(Borel([0,1],\mathbb R)\), \(C_x([0,1],\mathbb R)\), \(Lip([0,1],\mathbb R)\) and \(BV([0,1],\mathbb R)\), respectively, its subspaces of all functions that are Borel measurable, continuous at \(x\in [0,1]\), Lipschitzian, and of bounded variation (i.e. functions which can be written as a difference of two increasing functions; see [4, Chapter 1.4]). Next denote by \({\mathcal {M}}([0,1],\mathbb R)\) the space consisting of all functions \(h\in B([0,1],\mathbb R)\) such that for every \(x\in [0,1]\) the function \(h\circ f(x,\cdot )\) is measurable. Note that the space \({\mathcal {M}}([0,1],\mathbb R)\) is at least one-dimensional, because every constant function belongs to it.

Define an operator \(T:{\mathcal {M}}([0,1],\mathbb R)\rightarrow B([0,1],\mathbb R)\) by setting

$$\begin{aligned} Th(x)=\int _{\Omega }h(f(x,\omega ))dP(\omega ). \end{aligned}$$

Note that T is linear and continuous with \(\Vert T\Vert =1\). Moreover, equation (E\(_g\)) can now be written in the form

$$\begin{aligned} \varphi =T\varphi +g. \end{aligned}$$
(2)
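For completeness, the claim about the norm of T follows from the estimate below (note also that T fixes every constant function, so the bound is attained):

$$\begin{aligned} |Th(x)|\le \int _{\Omega }|h(f(x,\omega ))|dP(\omega )\le \Vert h\Vert \quad \hbox {for every }x\in [0,1]\hbox { and }h\in {\mathcal {M}}([0,1],\mathbb R). \end{aligned}$$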

Until the end of this paper we fix a subspace \({\mathcal {B}}([0,1],\mathbb R)\) of the space \({\mathcal {M}}([0,1],\mathbb R)\) that is invariant under T, i.e. \({\mathcal {B}}([0,1],\mathbb R)\) is such that

$$\begin{aligned} T({\mathcal {B}}([0,1],\mathbb R))\subset {\mathcal {B}}([0,1],\mathbb {R}). \end{aligned}$$
(3)

Before we give examples showing what the space \({\mathcal {B}}([0,1],\mathbb R)\) can look like in certain situations, let us explain what we mean when writing \({{\mathcal {A}}}=2^{\Omega }\). Namely, in such a case we may (and do) assume that \(\Omega \) is countable; otherwise we can replace \(\Omega \) by its subset \(\{\omega \in \Omega : P(\{\omega \})>0\}\), which is clearly countable. Moreover, integration in (E\(_g\)) reduces to summation and every bounded function is measurable.
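For instance, writing in this case \(\Omega =\{\omega _1,\omega _2,\ldots \}\) and \(p_n=P(\{\omega _n\})\), equation (E\(_g\)) takes the form

$$\begin{aligned} \varphi (x)=\sum _{n}p_n\varphi (f(x,\omega _n))+g(x). \end{aligned}$$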

Example 2.1

If \({{\mathcal {A}}}=2^{\Omega }\), then (3) holds with \({\mathcal {B}}([0,1],\mathbb R)=B([0,1],\mathbb R)\).

Example 2.2

If

(H\(_1\)):

f is increasing with respect to the first variable and measurable with respect to the second variable,

then (3) holds with \({\mathcal {B}}([0,1],\mathbb R)=BV([0,1],\mathbb R)\).

Example 2.3

If

(H\(_2\)):

\(\int _{\Omega }|f(x,\omega )-f(y,\omega )|dP(\omega )\le |x-y|\) for all \(x,y\in [0,1]\) and f is measurable with respect to the second variable,

then (3) holds with \({\mathcal {B}}([0,1],\mathbb R)=Lip([0,1],\mathbb R)\).

Let \({\mathcal {B}}\) be the \(\sigma \)-algebra of all Borel subsets of [0, 1]. Following [5] we say that \(h:[0,1]\times \Omega \rightarrow [0,1]\) is a random-valued function (shortly: an rv-function) if it is measurable with respect to the \(\sigma \)-algebra \({\mathcal {B}}\times {\mathcal {A}}\).

Example 2.4

If f is an rv-function, then (3) holds with \({\mathcal {B}}([0,1],\mathbb R)=Borel([0,1],\mathbb R)\).

Example 2.5

Fix \(x_0\in \{0,1\}\) and let \(f(\cdot ,\omega )\) be continuous at \(x_0\) for every \(\omega \in \Omega \). If \({{\mathcal {A}}}=2^{\Omega }\), then (3) holds with \({\mathcal {B}}([0,1],\mathbb R)=C_{x_0}([0,1],\mathbb R)\). If f is an rv-function, then (3) holds with \({\mathcal {B}}([0,1],\mathbb R)=Borel([0,1],\mathbb R)\cap C_{x_0}([0,1],\mathbb R)\).

To describe solutions of equation (E\(_g\)) in the case where \({\mathcal {A}}=2^\Omega \) we need the concept of Banach limits, introduced in [6]. However, in the general case, when integration is required, we need the concept of medial limits, introduced in [7] (cf. [8]), which form a very special class of Banach limits.

Denote by \(l^\infty (\mathbb N)\) the space of all bounded real sequences equipped with the supremum norm and by \({\mathfrak {B}}\) the family of all Banach limits defined on \(l^\infty (\mathbb N)\). Recall that \(M\in {\mathfrak {B}}\) if \(M:l^\infty (\mathbb N)\rightarrow \mathbb R\) is a linear, positive, shift invariant and normalized operator. It is easy to see that any \(M\in {\mathfrak {B}}\) is continuous with \(\Vert M\Vert =1\). It is known that the cardinality of \({\mathfrak {B}}\) is equal to \(2^{{\mathfrak {c}}}\) (see [9]), and even that the cardinality of the set of all extreme points of \({\mathfrak {B}}\) is equal to \(2^{{\mathfrak {c}}}\) (see [10], cf. [11]); here \({\mathfrak {c}}\) is the cardinality of the continuum.
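Explicitly, \(M\in {\mathfrak {B}}\) means that M is linear, \(M((x_m)_{m\in \mathbb N})\ge 0\) whenever \(x_m\ge 0\) for every \(m\in \mathbb N\), \(M((x_{m+1})_{m\in \mathbb N})=M((x_m)_{m\in \mathbb N})\) and \(M((1,1,1,\ldots ))=1\). Let us also recall two standard consequences of these properties, used repeatedly below:

$$\begin{aligned} \liminf _{m\rightarrow \infty }x_m\le M\big ((x_m)_{m\in \mathbb N}\big )\le \limsup _{m\rightarrow \infty }x_m\quad \hbox {for every }(x_m)_{m\in \mathbb N}\in l^\infty (\mathbb N); \end{aligned}$$

in particular, \(M\big ((x_m)_{m\in \mathbb N}\big )=\lim _{m\rightarrow \infty }x_m\) whenever the sequence \((x_m)_{m\in \mathbb N}\) converges.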

As mentioned above, in the general case we need to integrate the pointwise Banach limit of a bounded sequence of measurable functions. However, the problem is that there is no guarantee that the pointwise Banach limit of a bounded sequence of measurable functions is a measurable function (see [12, page 288]). Fortunately, it is known that there are Banach limits possessing exactly the required property. More precisely, a Banach limit M is called a medial limit if \(\int _{\Omega }M((h_m(\omega ))_{m\in \mathbb N})dP(\omega )\) is defined and equal to \(M((\int _{\Omega }h_m(\omega )dP(\omega ))_{m\in \mathbb N})\) whenever \((h_m)_{m\in \mathbb N}\) is a bounded sequence of measurable real-valued functions on \(\Omega \). It is also known that the continuum hypothesis implies the existence of medial limits. More results on the existence and non-existence of medial limits can be found in [13, Chapter 53] and in [14]. Denote by \({\mathfrak {M}}\) the family of all medial limits, i.e. \(M\in {\mathfrak {M}}\subset {\mathfrak {B}}\) if

$$\begin{aligned} \int _{\Omega }M\big ((h_m(\omega ))_{m\in \mathbb N}\big )dP(\omega ) =M\left( \Big (\int _{\Omega }h_m(\omega )dP(\omega )\Big )_{m\in \mathbb N}\right) \end{aligned}$$

for every sequence \((h_m)_{m\in \mathbb N}\) of bounded measurable real-valued functions defined on \(\Omega \). Note that \({\mathfrak {M}}={\mathfrak {B}}\) in the case where \({\mathcal {A}}=2^\Omega \).

From now on, given a nonempty family \({\mathcal {F}}\subset B([0,1],\mathbb R)\) we denote by \({\mathcal {F}}_a^b\) the family of all \(h\in {\mathcal {F}}\) such that \(h(0)=a\) and \(h(1)=b\). To distinguish two important families let us adopt the shorthand \({\mathcal {M}}_a^b={\mathcal {M}}([0,1],\mathbb R)_a^b\) and \({\mathcal {B}}_a^b={\mathcal {B}}([0,1],\mathbb R)_a^b\). It is clear that \({\mathcal {B}}_0^0\) is a subspace of the space \({\mathcal {B}}([0,1],\mathbb R)\) and that \({\mathcal {B}}_a^b+{\mathcal {B}}_0^0={\mathcal {B}}_a^b\) whenever \({\mathcal {B}}_a^b\ne \emptyset \). It is also clear that if we determine all solutions of equation (E\(_g\)) in the class \({\mathcal {B}}_a^b\), then we can easily describe all solutions of this equation in the class \({\mathcal {B}}([0,1],\mathbb R)\). Now, we begin describing solutions of equation (E\(_g\)) in \({\mathcal {B}}_a^b\), assuming from now on that \({\mathcal {B}}_a^b\ne \emptyset \). Our first lemma is a simple consequence of (1) and (3).

Lemma 2.1

  1. (i)

    If \(h\in {\mathcal {B}}_a^b\), then \(Th\in {\mathcal {B}}_a^b\).

  2. (ii)

    If \(\varphi \in {\mathcal {B}}_a^b\) satisfies (E\(_g\)), then \(g\in {\mathcal {B}}_0^0\).

3 Solutions of equation (E\(_0\))

If \(h\in {\mathcal {B}}([0,1],\mathbb R)\), then \(\sup _{m\in \mathbb N}\Vert T^mh\Vert \le \Vert h\Vert \), and hence \((T^mh(x))_{m\in \mathbb N}\in l^\infty (\mathbb N)\) for every \(x\in [0,1]\). Therefore, for all \(h\in {\mathcal {B}}([0,1],\mathbb R)\) and \(M\in {\mathfrak {B}}\) we define a function \(M_h:[0,1]\rightarrow \mathbb R\) by putting

$$\begin{aligned} M_h(x)=M\big ((T^mh(x))_{m\in \mathbb N}\big ). \end{aligned}$$

The functions \(M_h\) play a crucial role in this section as well as in the rest of this paper, so we need some facts about them.

Lemma 3.1

Assume that \(h\in {\mathcal {B}}_a^b\). If \(M\in {\mathfrak {M}}\), then \(M_h\in {\mathcal {M}}_a^b\) and \(TM_h=M_h\).

Proof

Fix \(M\in {\mathfrak {M}}\). From Lemma 2.1(i) we see that \(T^mh\in {\mathcal {B}}_a^b\) for every \(m\in \mathbb N\). Then

$$\begin{aligned} \sup _{x\in [0,1]}|M_h(x)|\le \sup _{x\in [0,1]}\Vert M\Vert \sup _{m\in \mathbb N}|T^mh(x)|\le \Vert h\Vert . \end{aligned}$$

Thus \(M_h\in B([0,1],\mathbb R)\). Since \(M\in {\mathfrak {M}}\), it follows that \(M_h\in {\mathcal {M}}([0,1],\mathbb R)\). Moreover, (1) implies \(M_h(0)=a\) and \(M_h(1)=b\). In consequence, \(M_h\in {\mathcal {M}}_a^b\).

Applying properties of medial limits we obtain

$$\begin{aligned} TM_h(x)&=\int _{\Omega }M\left( \big (T^mh(f(x,\omega ))\big )_{m\in \mathbb N}\right) dP(\omega )\\&=M\left( \Big (\int _{\Omega }T^mh(f(x,\omega ))dP(\omega )\Big )_{m\in \mathbb N}\right) =M\left( \big (T^{m+1}h(x)\big )_{m\in \mathbb N}\right) \\&=M_h(x) \end{aligned}$$

for every \(x\in [0,1]\). \(\square \)

We now want to find conditions under which \(M_h\in {\mathcal {B}}_a^b\) for every \(h\in {\mathcal {B}}_a^b\). Unfortunately, there is no hope of proving that \(M_h\in {\mathcal {B}}_a^b\) in the general case. Indeed, we would have to show that (3) holds, i.e. that \(TM_h\in {\mathcal {B}}_a^b\), but by Lemma 3.1 we have \(TM_h=M_h\). This observation suggests the following definition.

We say that the class \({\mathcal {B}}_a^b\) is closed under \(M\in {\mathfrak {B}}\) if \(M_h\in {\mathcal {B}}_a^b\) for every \(h\in {\mathcal {B}}_a^b\).

It turns out that there are many interesting classes that are closed under some Banach limits. Let us now give a few examples of such classes. The first two are immediate consequences of Examples 2.1 and 2.4.

Example 3.1

If \({\mathcal {B}}([0,1],\mathbb R)=B([0,1],\mathbb R)\), then \(B([0,1],\mathbb R)_a^b\) is closed under any \(M\in {\mathfrak {B}}\).

Example 3.2

If f is an rv-function and M is a medial limit with respect to a probability Borel measure on [0, 1], then \(Borel([0,1],\mathbb R)_a^b\) is closed under M.

Example 3.3

Fix \(x_0\in \{0,1\}\). If \({\mathcal {A}}=2^\Omega \) and

(H\(_{3}\)):

there exists \(\eta >0\) such that \(\frac{f(x,\omega )-f(x_0,\omega )}{x-x_0}\le 1\) for all \(\omega \in \Omega \) and \(x\in (0,1)\) with \(|x-x_0|\le \eta \),

then \(C_{x_0}([0,1],\mathbb R)_a^b\) is closed under any \(M\in {\mathfrak {B}}\). To prove the conclusion let us put \({\mathcal {B}}([0,1],\mathbb R)=C_{x_0}([0,1],\mathbb R)\); this is possible according to Example 2.5, because (H\(_3\)) yields the continuity of \(f(\cdot ,\omega )\) at \(x_0\) for every \(\omega \in \Omega \). Let \(M\in {\mathfrak {B}}\) and let \(h\in C_{x_0}([0,1],\mathbb R)_a^b\). It is clear that \(M_h(0)=a\) and \(M_h(1)=b\). To prove that \(M_h\) is continuous at \(x_0\) fix \(\varepsilon >0\). Then choose \(\delta \in (0,\eta )\), where \(\eta \) is the number occurring in (H\(_3\)), such that

$$\begin{aligned} \sup _{x\in A}|h(x)-h(x_0)|\le \varepsilon , \end{aligned}$$
(4)

where \(A=\{x\in [0,1]:|x-x_0|\le \delta \}\). Then \(|f(x,\omega )-f(x_0,\omega )|\le \delta \) for all \(\omega \in \Omega \) and \(x\in A\), and by an easy induction, we obtain \(\sup _{x\in A}|T^mh(x)-T^mh(x_0)|\le \sup _{x\in A}|h(x)-h(x_0)|\) for every \(m\in \mathbb N\). This jointly with (4) gives

$$\begin{aligned} \sup _{x\in A}|M_h(x)-M_h(x_0)|&\le \sup _{x\in A}\Vert M\Vert \sup _{m\in \mathbb N}|T^mh(x)-T^mh(x_0)|\le \varepsilon , \end{aligned}$$

which proves that \(M_h\) is continuous at \(x_0\).
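For the reader's convenience, the induction step behind the last estimate is sketched below; recall that \(f(x_0,\omega )=x_0\) by (1) and that \(f(x,\omega )\in A\) for all \(x\in A\) and \(\omega \in \Omega \):

$$\begin{aligned} |T^{m+1}h(x)-T^{m+1}h(x_0)|\le \int _{\Omega }|T^mh(f(x,\omega ))-T^mh(x_0)|dP(\omega )\le \sup _{y\in A}|T^mh(y)-T^mh(x_0)| \end{aligned}$$

for every \(x\in A\).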

Example 3.4

If (H\(_2\)) holds, then \(Lip([0,1],\mathbb R)_a^b\) is closed under any \(M\in {\mathfrak {B}}\). To prove the conclusion we put \({\mathcal {B}}([0,1],\mathbb R)=Lip([0,1],\mathbb R)\); this is acceptable according to Example 2.3. Fix \(M\in {\mathfrak {B}}\) and \(h\in Lip([0,1],\mathbb R)_a^b\). Clearly, \(M_h(0)=a\) and \(M_h(1)=b\). To prove that \(M_h\) is Lipschitzian denote by L the Lipschitz constant of h. A simple induction gives \(|T^mh(x)-T^mh(y)|\le L|x-y|\) for all \(m\in \mathbb N\) and \(x,y\in [0,1]\). Thus,

$$\begin{aligned} |M_h(x)-M_h(y)|&\le \Vert M\Vert \sup _{m\in \mathbb N}|T^mh(x)-T^mh(y)|\le L|x-y| \end{aligned}$$

for all \(x,y\in [0,1]\).
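The induction step here uses (H\(_2\)) directly; a sketch:

$$\begin{aligned} |T^{m+1}h(x)-T^{m+1}h(y)|\le \int _{\Omega }|T^mh(f(x,\omega ))-T^mh(f(y,\omega ))|dP(\omega )\le L\int _{\Omega }|f(x,\omega )-f(y,\omega )|dP(\omega )\le L|x-y|. \end{aligned}$$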

Example 3.5

If (H\(_1\)) holds, then \(BV([0,1],\mathbb R)_a^b\) is closed under any \(M\in {\mathfrak {B}}\). To show that the conclusion holds we put \({\mathcal {B}}([0,1],\mathbb R)=BV([0,1],\mathbb R)\); this is possible according to Example 2.2. Fix \(M\in {\mathfrak {B}}\) and \(h\in BV([0,1],\mathbb R)_a^b\). Obviously, \(M_h(0)=a\) and \(M_h(1)=b\). Moreover, there exist increasing functions \(h_1,h_2\in B([0,1],\mathbb R)\) such that \(h=h_1-h_2\). Thus \(T^mh=T^mh_1-T^mh_2\) for every \(m\in \mathbb N\), and hence \(M_h=M_{h_1}-M_{h_2}\). Finally, (H\(_1\)) jointly with properties of Banach limits implies that both the functions \(M_{h_1}\) and \(M_{h_2}\) are increasing.
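To justify the last claim, observe that if \(h_i\) is increasing and \(0\le x\le y\le 1\), then (H\(_1\)) gives \(f(x,\omega )\le f(y,\omega )\) for every \(\omega \in \Omega \), and hence

$$\begin{aligned} Th_i(x)=\int _{\Omega }h_i(f(x,\omega ))dP(\omega )\le \int _{\Omega }h_i(f(y,\omega ))dP(\omega )=Th_i(y); \end{aligned}$$

thus every \(T^mh_i\) is increasing, and the linearity and positivity of M yield \(M_{h_i}(x)\le M_{h_i}(y)\).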

We are now in a position to formulate the main results of this section. To simplify their statements, let us denote by \(\mathfrak {sol}_a^b\) (E\(_0\)) the family of all functions from \({\mathcal {B}}_a^b\) satisfying equation (E\(_0\)).

Theorem 3.2

For every \(M\in {\mathfrak {M}}\) we have \(\mathfrak {sol}_a^b\)(E\(_0\))\(\subset \left\{ M_h:h\in {\mathcal {B}}_a^b\right\} \). Moreover, if \({\mathcal {B}}_a^b\) is closed under \(M\in {\mathfrak {M}}\), then \(\mathfrak {sol}_a^b\)(E\(_0\))\(=\left\{ M_h:h\in {\mathcal {B}}_a^b\right\} \).

Proof

Fix \(M\in {\mathfrak {M}}\) and \(\Phi \in \mathfrak {sol}_a^b\)(E\(_0\)). Then \(T^m\Phi =\Phi \) for every \(m\in \mathbb N\), and hence

$$\begin{aligned} \Phi (x)=M\big ((T^m\Phi (x))_{m\in \mathbb N}\big )=M_\Phi (x) \end{aligned}$$

for every \(x\in [0,1]\). Thus \(\mathfrak {sol}_a^b\) (E\(_0\))\(\subset \left\{ M_h:h\in {\mathcal {B}}_a^b\right\} \). The opposite inclusion follows from Lemma 3.1. \(\square \)

Solutions of equation (E\(_0\)) were investigated in [15, 16], basically in almost the same classes of bounded functions. However, Theorem 3.2 is incomparable with the results obtained in the papers mentioned, in which the existence and uniqueness problems were considered and properties of the unique solution were studied.

4 Solutions of equation (E\(_g\))

In this section we describe all functions belonging to the class \({\mathcal {B}}_a^b\) which are solutions of equation (E\(_g\)). We also give the formula for these solutions showing that each of them can be written in the form \(\Phi +\varphi _*\), where \(\Phi \in {\mathcal {B}}_a^b\) is a solution of equation (E\(_0\)) and \(\varphi _*\in {\mathcal {B}}_0^0\) is a particular solution of equation (E\(_g\)). To find \(\varphi _*\) we need to define a certain family of functions generated by \(g\in {\mathcal {B}}_0^0\); recall that \(g\in {\mathcal {B}}_0^0\) is a necessary condition for equation (E\(_g\)) to have a solution in the class \({\mathcal {B}}_a^b\) by Lemma 2.1(ii). If \(g\in {\mathcal {B}}_0^0\), then Lemma 2.1(i) yields \(\{T^lg:l\in \mathbb N\}\subset {\mathcal {B}}_0^0\). Therefore, given \(g\in {\mathcal {B}}_0^0\) and \(k\in \mathbb N\) we can define a function \(g_k:[0,1]\rightarrow \mathbb R\) by putting

$$\begin{aligned} g_k(x)=\sum _{l=0}^{k-1}T^lg(x). \end{aligned}$$

Set

$$\begin{aligned} {\mathcal {G}}=\{g_k:k\in \mathbb N\}. \end{aligned}$$

As in the previous section, denote by \(\mathfrak {sol}_a^b\)(E\(_g\)) the family of all functions from \({\mathcal {B}}_a^b\) satisfying equation (E\(_g\)).

Lemma 4.1

If \(\mathfrak {sol}_a^b\)(E\(_g\))\(\ne \emptyset \), then \({\mathcal {G}}\) is a bounded subset of \({\mathcal {B}}_0^0\).

Proof

Fix \(\varphi \in \mathfrak {sol}_a^b\)(E\(_g\)). Then Lemma 2.1 implies that \({\mathcal {G}}\subset {\mathcal {B}}_0^0\). Applying (2) we obtain

$$\begin{aligned} \Vert g_k\Vert&=\sup _{x\in [0,1]}|g_k(x)|=\sup _{x\in [0,1]}\left| \sum _{l=0}^{k-1}T^{l}\varphi (x)-\sum _{l=0}^{k-1}T^{l+1}\varphi (x)\right| \\&\le \sup _{x\in [0,1]}|\varphi (x)|+\sup _{x\in [0,1]}|T^{k}\varphi (x)|\le \Vert \varphi \Vert +\Vert T\Vert ^{k}\Vert \varphi \Vert = 2\Vert \varphi \Vert \end{aligned}$$

for every \(k\in \mathbb N\). \(\square \)

The above lemma shows that boundedness of the family \({\mathcal {G}}\) is a necessary condition for equation (E\(_g\)) to have a solution in the class \({\mathcal {B}}_a^b\). It also demonstrates that \(M_{g_k}\) is well defined for all \(k\in \mathbb N\) and \(M\in {\mathfrak {B}}\) whenever equation (E\(_g\)) has a solution in \({\mathcal {B}}_a^b\).

Lemma 4.2

If \(\mathfrak {sol}_a^b\)(E\(_g\))\(\ne \emptyset \), then \(M_{g_k}=0\) for all \(M\in {\mathfrak {B}}\) and \(k\in \mathbb N\).

Proof

Fix \(\varphi \in \mathfrak {sol}_a^b\)(E\(_g\)), \(M\in {\mathfrak {B}}\) and \(k\in \mathbb N\). From (2) we get

$$\begin{aligned} M_{g}(x)=M((T^{m}g(x))_{m\in \mathbb N})=M((T^{m}\varphi (x))_{m\in \mathbb N})-M((T^{m+1}\varphi (x))_{m\in \mathbb N})=0 \end{aligned}$$

for every \(x\in [0,1]\). Now, it only remains to observe that \(M_{g_k}=kM_{g}\), which follows from the linearity and shift invariance of M, because \(T^mg_k=\sum _{l=0}^{k-1}T^{m+l}g\) for every \(m\in \mathbb N\). \(\square \)

If \(g\in {\mathcal {B}}_0^0\) and \({\mathcal {G}}\) is bounded, then for every \(M\in {\mathfrak {B}}\) we define a function \(M_*:[0,1]\rightarrow \mathbb R\) by putting

$$\begin{aligned} M_*(x)=M((g_k(x))_{k\in \mathbb N}). \end{aligned}$$

Lemma 4.3

Assume that \(g\in {\mathcal {B}}_0^0\) and \({\mathcal {G}}\) is bounded. If \(M\in {\mathfrak {M}}\), then \(M_*\in {\mathcal {M}}_0^0\) and \(M_*=TM_*+g\).

Proof

Fix \(M\in {\mathfrak {M}}\) and observe that

$$\begin{aligned} \sup _{x\in [0,1]}|M_*(x)|\le \sup _{x\in [0,1]}\Vert M\Vert \sup _{k\in \mathbb N}|g_k(x)|\le \sup _{k\in \mathbb N}\Vert g_k\Vert <+\infty . \end{aligned}$$

Thus \(M_*\in B([0,1],\mathbb R)\). Since \(M\in {\mathfrak {M}}\), it follows that \(M_*\in {\mathcal {M}}([0,1],\mathbb R)\). Moreover, it is easy to check that \(M_*(0)=M_*(1)=0\). In consequence, \(M_*\in {\mathcal {M}}_0^0\).

Applying properties of medial limits we obtain

$$\begin{aligned} TM_*(x)&=\int _{\Omega }M\left( \big (g_k(f(x,\omega ))\big )_{k\in \mathbb N}\right) dP(\omega ) =M\left( \Big (\int _{\Omega }g_k(f(x,\omega ))dP(\omega )\Big )_{k\in \mathbb N}\right) \\&=M\left( \Big (\sum _{l=0}^{k-1}\int _{\Omega }T^{l}g(f(x,\omega ))dP(\omega )\Big )_{k\in \mathbb N}\right) =M\left( \Big (\sum _{l=0}^{k-1}T^{l+1}g(x)\Big )_{k\in \mathbb N}\right) \\&=M\left( \big (g_{k+1}(x)\big )_{k\in \mathbb N}-(g(x))_{k\in \mathbb N}\right) =M_*(x)-g(x) \end{aligned}$$

for every \(x\in [0,1]\). \(\square \)

We now want to find conditions under which \(M_*\in {\mathcal {B}}_0^0\). The situation is similar to that for \(M_h\in {\mathcal {B}}_a^b\). Namely, to prove that \(M_*\in {\mathcal {B}}_0^0\), we would have to show that \(TM_*\in {\mathcal {B}}_0^0\), but by Lemma 4.3 we have \(TM_*=M_*-g\). This leads us to the following definition.

We say that a function \(g\in {\mathcal {B}}_0^0\) is admissible for \(M\in {\mathfrak {B}}\) if the family \({\mathcal {G}}\) is bounded and \(M_*\in {\mathcal {B}}_0^0\).

Note that the assumption on boundedness of \({\mathcal {G}}\) in the admissibility definition is not restrictive, because if the family \({\mathcal {G}}\) is unbounded, then equation (E\(_g\)) has no solution in \({\mathcal {B}}_a^b\) by Lemma 4.1, and moreover \(M_*\) is not even defined.

Before we give examples of conditions guaranteeing admissibility of a given function for a Banach limit, let us recall the definition of almost convergence of sequences. Namely, a bounded sequence \((x_{m})_{m\in \mathbb N}\) of real numbers is said to be almost convergent to a real number x if \(M((x_{m})_{m\in \mathbb N})=x\) for every \(M\in {\mathfrak {B}}\). The sequence \((0,1,0,1,0,1,\ldots )\) is a simple example of a non-convergent sequence which is almost convergent. However, almost no sequence consisting of 0's and 1's is almost convergent (see [17]). It is proved in [18] that a sequence \((x_{k})_{k\in \mathbb N}\) is almost convergent to x if and only if \(\lim _{n\rightarrow \infty }\frac{1}{n}\sum _{m=0}^{n-1}x_{k+m}=x\) uniformly in k. Therefore, for a given \(x\in [0,1]\) there exists \(y\in \mathbb R\) such that \(M((g_k(x))_{k\in \mathbb N})=y\) for every \(M\in {\mathfrak {B}}\) if and only if

$$\begin{aligned} \lim _{n\rightarrow \infty }\left( \sum _{l=0}^{n+k-2}T^lg(x)-\sum _{l=k}^{n+k-2}\frac{l+1-k}{n}T^lg(x)\right) =y\quad \hbox {uniformly in }k. \end{aligned}$$
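Indeed, the last equivalence is obtained by applying the above characterization of almost convergence to the sequence \((g_k(x))_{k\in \mathbb N}\); counting, for each l, how many indices \(m\in \{0,\ldots ,n-1\}\) satisfy \(l\le k+m-1\), one checks that

$$\begin{aligned} \frac{1}{n}\sum _{m=0}^{n-1}g_{k+m}(x)=\sum _{l=0}^{n+k-2}T^lg(x)-\sum _{l=k}^{n+k-2}\frac{l+1-k}{n}T^lg(x)\quad \hbox {for all }k,n\in \mathbb N. \end{aligned}$$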

Example 4.1

Assume that \({\mathcal {G}}\subset {\mathcal {B}}_0^0\). If the series \(\sum _{l=0}^{\infty }T^lg\) pointwise almost converges to a function from \({\mathcal {B}}_0^0\), then g is admissible for every \(M\in {\mathfrak {B}}\).

Observe that if \({\mathcal {G}}\subset {\mathcal {B}}_0^0\) and if the series \(\sum _{l=0}^{\infty }T^lg\) pointwise converges to a bounded function, then the series pointwise almost converges to the same bounded function and \(M_*=\sum _{l=0}^{\infty }T^lg\) for any \(M\in {\mathfrak {B}}\). Moreover, since

$$\begin{aligned} M_*(x)=M((T^mg_{k}(x))_{k\in \mathbb N})+\sum _{l=0}^{m-1}T^lg(x) \end{aligned}$$

for all \(x\in [0,1]\), \(M\in {\mathfrak {B}}\) and \(m\in \mathbb N\), it follows that for a fixed \(x\in [0,1]\) the series \(\sum _{l=0}^{\infty }T^lg(x)\) converges if and only if the limit \(\lim _{m\rightarrow \infty }M((T^mg_{k}(x))_{k\in \mathbb N})\) exists for every \(M\in {\mathfrak {B}}\).
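The identity displayed above is a consequence of the shift invariance of M together with the relation

$$\begin{aligned} T^mg_k(x)=\sum _{l=0}^{k-1}T^{m+l}g(x)=g_{m+k}(x)-g_m(x)\quad \hbox {for all }k,m\in \mathbb N\hbox { and }x\in [0,1], \end{aligned}$$

which gives \(M((T^mg_{k}(x))_{k\in \mathbb N})=M((g_{m+k}(x))_{k\in \mathbb N})-g_m(x)=M_*(x)-g_m(x)\).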

Example 4.2

Assume that \({\mathcal {B}}([0,1],\mathbb R)=B([0,1],\mathbb R)\). Then every function \(g\in {\mathcal {B}}_0^0\) for which the family \({\mathcal {G}}\) is bounded is admissible for any \(M\in {\mathfrak {B}}\).

Example 4.3

Assume that \(g\in {\mathcal {B}}_0^0\) and there exists \(m\in \mathbb N\) such that

$$\begin{aligned} T^{m}g=0. \end{aligned}$$
(5)

Then \({\mathcal {G}}=\big \{\sum _{l=0}^{k-1}T^lg:k\in \{1,\ldots ,m\}\big \}\) and \(M_*=\sum _{l=0}^{m-1}T^lg\) for any \(M\in {\mathfrak {B}}\). Therefore g is admissible for any \(M\in {\mathfrak {B}}\).
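Indeed, (5) yields \(T^lg=0\) for every \(l\ge m\), and hence

$$\begin{aligned} g_k(x)=\sum _{l=0}^{m-1}T^lg(x)\quad \hbox {for all }k\ge m\hbox { and }x\in [0,1]; \end{aligned}$$

since every Banach limit extends the ordinary limit, the eventually constant sequence \((g_k(x))_{k\in \mathbb N}\) gives \(M_*(x)=\sum _{l=0}^{m-1}T^lg(x)\) for every \(M\in {\mathfrak {B}}\).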

Let us note that condition (5) is not very far from a necessary condition for g derived in Lemma 4.2, which says that \(M\big ((T^mg(x))_{m\in \mathbb N}\big )=0\) for all \(x\in [0,1]\) and \(M\in {\mathfrak {B}}\).

We now formulate the main result of this paper.

Theorem 4.4

  1. (i)

    Assume that

    $$\begin{aligned} \mathfrak {sol}_a^b(\hbox {E}_g)\ne \emptyset . \end{aligned}$$
    (6)

    Then for every \(M\in {\mathfrak {M}}\) we have \(\mathfrak {sol}_a^b\)(E\(_g\))\(\subset \left\{ M_h+M_*:h\in {\mathcal {B}}_a^b\right\} \). Moreover, g is admissible for any \(M\in {\mathfrak {B}}\) under which \({\mathcal {B}}_a^b\) is closed.

  2. (ii)

    If \({\mathcal {B}}_a^b\) is closed under \(M\in {\mathfrak {M}}\) and \(g\in {\mathcal {B}}_0^0\) is admissible for M, then \(\mathfrak {sol}_a^b\)(E\(_g\))\(=\left\{ M_h+M_*:h\in {\mathcal {B}}_a^b\right\} \).

  3. (iii)

    If \(g\in {\mathcal {B}}_0^0\) is admissible for some \(M\in {\mathfrak {M}}\), then \(\mathfrak {sol}_a^b\)(E\(_g\))\(=\mathfrak {sol}_a^b\)(E\(_0\))\(+M_*\).

Proof

First of all note that assumption (6) implies that \({\mathcal {B}}_a^b\ne \emptyset \).

(i) Fix \(\varphi \in \mathfrak {sol}_a^b\)(E\(_g\)) and \(M\in {\mathfrak {M}}\). Obviously, \(M_\varphi \) is well defined. From Lemma 4.1 we conclude that \(M_*\) is also well defined. Applying induction to (2) we get

$$\begin{aligned} \varphi =T^k\varphi +g_k \end{aligned}$$
(7)

for every \(k\in \mathbb N\); applying M to both sides of (7) and using the linearity of M, we get \(\varphi (x)=M_\varphi (x)+M_*(x)\) for every \(x\in [0,1]\). Thus \(\mathfrak {sol}_a^b\)(E\(_g\))\(\subset \left\{ M_h+M_*:h\in {\mathcal {B}}_a^b\right\} \). Moreover, if \({\mathcal {B}}_a^b\) is closed under \(M\in {\mathfrak {B}}\), then \(M_\varphi \in {\mathcal {B}}_a^b\), and making use of (7) we obtain that \(\sup _{k\in \mathbb N}\Vert g_k\Vert \le 2\Vert \varphi \Vert \) and \(M_*=\varphi -M_\varphi \in {\mathcal {B}}_0^0\).

(ii) Fix \(M\in {\mathfrak {M}}\) and \(h\in {\mathcal {B}}_a^b\). Then Lemmas 3.1, 4.1 and 4.3 give

$$\begin{aligned} T(M_h+M_*)+g=TM_h+TM_*+g=M_h+M_*. \end{aligned}$$

Since \({\mathcal {B}}_a^b\) is closed under \(M\in {\mathfrak {M}}\) and \(g\in {\mathcal {B}}_0^0\) is admissible for M, we have \(M_h+M_*\in {\mathcal {B}}_a^b\). Then \(M_h+M_*\in \mathfrak {sol}_a^b\)(E\(_g\)), and hence \(\left\{ M_h+M_*:h\in {\mathcal {B}}_a^b\right\} \subset \mathfrak {sol}_a^b\)(E\(_g\)). Now it suffices to apply assertion (i).

(iii) Fix \(\varphi \in \mathfrak {sol}_a^b\)(E\(_g\)). Lemma 4.3 jointly with the admissibility of g implies that \(\varphi -M_*\in \mathfrak {sol}_a^b\)(E\(_0\)). Hence \(\varphi =(\varphi -M_*)+M_*\in \mathfrak {sol}_a^b\)(E\(_0\))\(+M_*\). Conversely, fix \(\Phi \in \mathfrak {sol}_a^b\)(E\(_0\)). Then again Lemma 4.3 jointly with the admissibility of g implies that \(\Phi +M_*\in \mathfrak {sol}_a^b\)(E\(_g\)). \(\square \)

Corollary 4.5

Assume that \(g\in {\mathcal {B}}_0^0\). Then equation (E\(_g\)) has a solution in \({\mathcal {B}}_a^b\) if and only if g is admissible for some \(M\in {\mathfrak {B}}\) and equation (E\(_0\)) has a solution in \({\mathcal {B}}_a^b\).

Remark 4.1

If \(\mathfrak {sol}_a^b\)(E\(_g\))\(=\emptyset \), then it may happen that there is no \(M\in {\mathfrak {B}}\) for which \(M_*\) is well defined; see e.g. the equation \(\varphi (x)=\varphi (x)+1\). Therefore, assumption (6) cannot be omitted in assertion (i) of Theorem 4.4. The same equation also shows that the admissibility assumption in assertion (iii) of Theorem 4.4 is necessary.

5 Consequences of the main results

In this section we formulate some exemplary consequences of the main results, making use of the presented examples and applying some known results on equation (E\(_g\)). We begin with the case where \({\mathcal {A}}=2^\Omega \).

Corollary 5.1

Assume

(H\(_4\)):

\((f_n)_{n\in \mathbb N}\) is a sequence of self-mappings of [0, 1] such that \(f_n(0)=0\) and \(f_n(1)=1\) for every \(n\in \mathbb N\), \((p_n)_{n\in \mathbb N}\) is a sequence of nonnegative real numbers summing up to one and \(g\in B([0,1],\mathbb R)_0^0\).

Then the equation

$$\begin{aligned} \varphi (x)=\sum _{n\in \mathbb N}p_n\varphi (f_n(x))+g(x) \end{aligned}$$
(e\(_g\))

has a solution in \(B([0,1],\mathbb R)\) if and only if the family

$$\begin{aligned} \left\{ \sum _{l=1}^{k}\sum _{n_1,\ldots ,n_l\in \mathbb N}p_{n_1}\cdots p_{n_l}(g\circ f_{n_1}\circ \dots \circ f_{n_l}):k\in \mathbb N\right\} \end{aligned}$$
(8)

is bounded. Moreover, if the family given by (8) is bounded, then \(\varphi \in B([0,1],\mathbb R)\) is a solution of equation (e\(_g\)) if and only if \(\varphi =M_h+M_*\) with some \(M\in {\mathfrak {B}}\) and \(h\in B([0,1],\mathbb R)\).

Proof

In view of Examples 2.1, 3.1 and 4.2, it suffices to apply Theorem 3.2 and Corollary 4.5 with \({\mathcal {B}}([0,1],\mathbb R)=B([0,1],\mathbb R)\) and arbitrary \(M\in {\mathfrak {B}}\). \(\square \)

Now we show a possible application of Corollary 5.1.

Example 5.1

Fix \(N\in \mathbb N\), real numbers \(p_0,\ldots ,p_N\ge 0\) summing up to one and a function \(f:[0,1]\rightarrow [0,1]\) such that \(f(0)=0\), \(f(1)=1\) and \(f^{N+1}(x)=x\) for every \(x\in [0,1]\); for a full description of such functions see [1, Theorem 15.1]. Then consider the following functional equation

$$\begin{aligned} \varphi (x)=\sum _{n=0}^{N}p_n\varphi (f^n(x))+g(x), \end{aligned}$$
(9)

which is discussed in more detail in [1, Chapter XIII] and in [2, Subsects. 6.3 and 6.7].

For all \(n\in \{0,\ldots ,N\}\) and \(m\in \mathbb N\) define recursively numbers \(\alpha _{m,n}\) putting

$$\begin{aligned} \alpha _{1,n}=p_n\quad \hbox { and }\quad \alpha _{m+1,n}=\sum _{k=0}^N\alpha _{m,k}p_{(n-k)\mod \!\!(N+1)}. \end{aligned}$$
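This recursion is designed to match the iterates of T: assuming the formula for \(T^mh\) displayed below and using \(f^{N+1}=\mathrm {id}_{[0,1]}\), the induction step reads

$$\begin{aligned} T^{m+1}h=\sum _{k=0}^{N}\alpha _{m,k}\sum _{j=0}^{N}p_j\big (h\circ f^{(k+j)\mod \!\!(N+1)}\big ) =\sum _{n=0}^{N}\Big (\sum _{k=0}^{N}\alpha _{m,k}p_{(n-k)\mod \!\!(N+1)}\Big )(h\circ f^n) =\sum _{n=0}^{N}\alpha _{m+1,n}(h\circ f^n). \end{aligned}$$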

Then applying induction we obtain

$$\begin{aligned} T^mh=\sum _{n=0}^{N}\alpha _{m,n}(h\circ f^n) \end{aligned}$$

for all \(h\in B([0,1],\mathbb R)\) and \(m\in \mathbb N\). Moreover, family (8) takes the form

$$\begin{aligned} \left\{ \sum _{n=0}^{N}\Big (\sum _{m=1}^{k}\alpha _{m,n}\Big )g\circ f^n:k\in \mathbb N\right\} . \end{aligned}$$
(10)

Therefore, from Corollary 5.1 we infer that Eq. (9) has a solution in \(B([0,1],\mathbb R)\) if and only if family (10) is bounded. Moreover, if the family (10) is bounded, then \(\varphi :[0,1]\rightarrow \mathbb R\) is a bounded solution of Eq. (9) if and only if

$$\begin{aligned} \varphi (x)&=\sum _{n=0}^{N}M\left( \big (\alpha _{m,n}\big )_{m\in \mathbb N}\right) h(f^n(x))+g(x)\\&\quad + M\left( \Big (\sum _{n=0}^{N}\Big (\sum _{m=1}^{k}\alpha _{m,n}\Big )g(f^n(x))\Big )_{k\in \mathbb N}\right) \end{aligned}$$

with some \(h\in B([0,1],\mathbb R)\) and \(M\in {\mathfrak {B}}\).

If \(p_0=\dots =p_N=\frac{1}{N+1}\), then \(\alpha _{m,n}=\frac{1}{N+1}\) for all \(m\in \mathbb N\) and \(n\in \{0,\ldots ,N\}\), and hence the family (10) is bounded if and only if

$$\begin{aligned} \sum _{n=0}^{N}g(f^n(x))=0\quad \hbox { for every }x\in [0,1]; \end{aligned}$$
(11)

cf. Example 4.3. In consequence, equation (9) with \(p_0=\dots =p_N=\frac{1}{N+1}\) has a solution \(\varphi \in B([0,1],\mathbb R)\) if and only if (11) holds, and moreover,

$$\begin{aligned} \varphi (x)=\frac{1}{N+1}\sum _{n=0}^{N}h(f^n(x))+g(x) \end{aligned}$$

with some \(h\in B([0,1],\mathbb R)\).

Purely bounded solutions of equation (E\(_g\)) are considered rather rarely. Usually some additional property is required, such as monotonicity (see e.g. [19,20,21]), Borel measurability (see e.g. [22, 23]), or continuity at a point (see e.g. [24]). The next two corollaries concern just such cases. To formulate the first one we need one more notion. Namely, following [5] (cf. [25]) we define iterates of a function \(h:[0,1]\times \Omega \rightarrow [0,1]\) as follows

$$\begin{aligned} h^1(x,\omega )=h(x,\omega _1)\quad \hbox { and }\quad h^{n+1}(x,\omega )=h(h^n(x,\omega ),\omega _{n+1}) \end{aligned}$$

for all \(x\in [0,1]\), \(\omega =(\omega _1,\omega _2,\ldots )\in \Omega ^\infty \) and \(n\in \mathbb N\). Note that if h is an rv-function, then all its iterates are also rv-functions defined on the product space \((\Omega ^\infty ,{\mathcal {A}}^\infty ,P^\infty )\).

Corollary 5.2

Assume that f is an rv-function such that the function \(f(\cdot ,\omega )\) is continuous at 0 and 1 for every \(\omega \in \Omega \) and the function \(m:[0,1]\rightarrow [0,1]\) defined by \(m(x)=\int _\Omega f(x,\omega )dP(\omega )\) is continuous with \(m(x)\ne x\) for every \(x\in (0,1)\). Let \(g\in B([0,1],\mathbb R)_0^0\) be Borel measurable and continuous at 0 and 1, let the family \({\mathcal {G}}\) be bounded, and let M be a medial limit with respect to a probability Borel measure on [0, 1] such that \(M_*\) is continuous at 0 and 1. If \(\varphi \in B([0,1],\mathbb R)_0^1\) is a Borel measurable solution of equation (E\(_g\)) which is continuous at 0 and 1, then

$$\begin{aligned} \varphi (x)=P^\infty \left( \lim _{n\rightarrow \infty }f^n(x,\cdot )=1\right) +M_*(x) \end{aligned}$$

for every \(x\in [0,1]\).

Proof

Choose \({\mathcal {B}}([0,1],\mathbb R)=Borel([0,1],\mathbb R)\cap C_0([0,1],\mathbb R)\cap C_1([0,1],\mathbb R)\); this is possible in view of Examples 2.4 and 2.5. According to [24, Proposition 2.1 and Corollary 2.4] we have \(\mathfrak {sol}_0^1\)(E\(_0\))\(=\{\Phi \}\), where \(\Phi (x)=P^\infty \left( \lim _{n\rightarrow \infty }f^n(x,\cdot )=1\right) \) for every \(x\in [0,1]\). Finally, since g is admissible for M, it is enough to apply Theorem 4.4(iii). \(\square \)

Corollary 5.3

Assume (H\(_4\)). Let \(x_0\in \{0,1\}\) and let there exist \(\eta >0\) such that \(\frac{f_n(x)-f_n(x_0)}{x-x_0}\le 1\) for all \(n\in \mathbb N\) and \(x\in (0,1)\) with \(|x-x_0|\le \eta \). If \(g\in C_{x_0}([0,1],\mathbb R)_0^0\) and the series \(\sum _{l=0}^{\infty }T^lg\) converges uniformly, then \(\varphi \in C_{x_0}([0,1],\mathbb R)\) is a solution of equation (e\(_g\)) if and only if there exists \(h\in C_{x_0}([0,1],\mathbb R)\) such that \(\varphi =M_h+\sum _{l=0}^{\infty }T^lg\) with an arbitrary \(M\in {\mathfrak {B}}\).

Proof

The uniform convergence of the series \(\sum _{l=0}^{\infty }T^lg\) implies its pointwise almost convergence to a function from the class \(C_{x_0}([0,1],\mathbb R)\) as well as the boundedness of the family \({\mathcal {G}}\). Now it is enough to apply Theorems 3.2 and 4.4 with \({\mathcal {B}}([0,1],\mathbb R)=C_{x_0}([0,1],\mathbb R)\) and an arbitrary \(M\in {\mathfrak {B}}\), which is possible in view of Examples 2.5, 3.3 and 4.1. \(\square \)

The next example, which is in the spirit of the manuscript [26], makes use of Corollary 5.3.

Example 5.2

Assume (H\(_4\)) with \(g\in C_0([0,1],\mathbb R)\) and let there exist \(\alpha >1\) such that \(f_n(x)\le x^\alpha \) for all \(n\in \mathbb N\) and \(x\in [0,1]\). Then consider equation (e\(_g\)) and its solutions in the class \(C_0([0,1],\mathbb R)\).

Fix \(h\in C_0([0,1],\mathbb R)\) and \(x\in (0,1)\). By induction on m we obtain \(T^mh(x)=\sum _{n_1,\ldots ,n_m\in \mathbb N}p_{n_1}\cdots p_{n_m}h(f_{n_1}(\ldots ( f_{n_m}(x))\ldots ))\) and \(f_{n_1}(\ldots (f_{n_m}(x))\ldots )\le x^{\alpha ^m}\) for every \(m\in \mathbb N\). Since h is continuous at 0, it follows that \(\lim _{m\rightarrow \infty }T^mh(x)=h(0)\), and hence

$$\begin{aligned} M_h(x)=M((T^mh(x))_{m\in \mathbb N})={\left\{ \begin{array}{ll}h(0),&{}\hbox {if }x\in [0,1),\\ h(1),&{}\hbox {if }x=1\end{array}\right. } \end{aligned}$$

for every \(M\in {\mathfrak {B}}\). If the series \(\sum _{l=0}^{\infty }T^lg\) converges uniformly, then Corollary 5.3 implies that every solution \(\varphi \in C_0([0,1],\mathbb R)\) of equation (e\(_g\)) is of the form

$$\begin{aligned} \varphi (x)={\left\{ \begin{array}{ll} a+\sum _{l=0}^{\infty }T^lg(x),&{}\hbox {if }x\in [0,1),\\ b,&{}\hbox {if }x=1,\end{array}\right. } \end{aligned}$$

where \(a,b\in \mathbb R\).

Lipschitzian solutions of equation (E\(_g\)), in a more general setting than in this paper, were recently examined in [27,28,29,30]. However, the next corollary gives a general formula for a wide class of Lipschitzian solutions of equation (E\(_g\)), in contrast to the papers mentioned, in which the assumptions made force uniqueness, or uniqueness up to an additive constant, of the Lipschitzian solutions of the equations considered.

Corollary 5.4

Assume \((\mathrm{H}_2)\) and let \(g\in Lip([0,1],\mathbb R)\). Then equation (E\(_g\)) has a solution in \(Lip([0,1],\mathbb R)\) if and only if g is admissible for some \(M\in {\mathfrak {M}}\). Moreover, every solution \(\varphi \in Lip([0,1],\mathbb R)\) of equation (E\(_g\)) is of the form \(\varphi =M_h+M_*\) with some \(h\in Lip([0,1],\mathbb R)\) and \(M\in {\mathfrak {M}}\).

Proof

First note that (H\(_2\)) jointly with (1) yields

$$\begin{aligned} \int _{\Omega }f(x,\omega )dP(\omega )=x\quad \hbox { for every }x\in [0,1]. \end{aligned}$$

This condition implies that each affine function is a solution of equation (E\(_0\)). In particular, equation (E\(_0\)) has a Lipschitzian solution. Now, in view of Examples 2.3 and 3.4, it suffices to apply Corollary 4.5 and Theorem 4.4(ii) with \({\mathcal {B}}([0,1],\mathbb R)=Lip([0,1],\mathbb R)\) and suitable \(M\in {\mathfrak {M}}\). \(\square \)

The next corollary gives a formula for the general solution of equation (E\(_g\)) in the space \(BV([0,1],\mathbb R)\), and hence partially solves the problem considered in [31] for a very special case of equation (E\(_0\)).

Corollary 5.5

Assume \((\mathrm{H}_1)\). Let \(g\in BV([0,1],\mathbb R)\). Then equation (E\(_g\)) has a solution in \(BV([0,1],\mathbb R)\) if and only if g is admissible for some \(M\in {\mathfrak {M}}\) and equation (E\(_0\)) has a solution \(\Phi \in BV([0,1],\mathbb R)\). Moreover, \(\varphi \in BV([0,1],\mathbb R)\) satisfies (E\(_g\)) if and only if \(\varphi =M_h+M_*\) with some \(h\in BV([0,1],\mathbb R)\) and \(M\in {\mathfrak {M}}\).

Proof

It is enough to apply Theorems 3.2 and 4.4 with \({\mathcal {B}}([0,1],\mathbb R)=BV([0,1],\mathbb R)\) and arbitrary \(M\in {\mathfrak {M}}\), which is possible in view of Examples 2.2 and 3.5. \(\square \)

Before we formulate the last corollary of this paper let us extend the main result of [31] to equation (E\(_0\)). For this purpose, given \(h\in BV([0,1],\mathbb R)\) we denote by \(h_+\) and \(h_-\) the upper and the lower variation of h, respectively.

Proposition 5.6

Assume \((\mathrm{H}_1)\). If \(\Phi \in BV([0,1],\mathbb R)\) satisfies (E\(_0\)), then also \(\Phi _+\) and \(\Phi _-\) satisfy (E\(_0\)).

Proof

Fix \(\Phi \in BV([0,1],\mathbb R)\) satisfying (E\(_0\)). Define functions \(F,G:[0,1]\rightarrow \mathbb R\) by putting \(F(x)=\int _{\Omega }\Phi _+(f(x,\omega ))dP(\omega )\) and \(G(x)=\int _{\Omega }\Phi _-(f(x,\omega ))dP(\omega )\). Then \(\Phi _+-\Phi _-=F-G\) and by (H\(_1\)) both the functions F and G are increasing. Hence the minimality property of the Jordan decomposition yields \(\Phi _+(y)-\Phi _+(x)\le F(y)-F(x)\) for all \(0\le x\le y\le 1\). Putting in the last inequality \(x=0\) and \(y=1\) in turn and making use of (1), which gives \(F(0)=\Phi _+(0)\) and \(F(1)=\Phi _+(1)\), we obtain \(\Phi _+(y)\le F(y)\) and \(\Phi _+(x)\ge F(x)\) for all \(x,y\in [0,1]\). In consequence, \(\Phi _+=F\) and \(\Phi _-=G\). \(\square \)

The above proposition reduces the problem of determining all solutions of bounded variation of equation (E\(_0\)) to that of finding all increasing solutions of this equation. We end this paper by determining all increasing solutions of equation (E\(_0\)).

Corollary 5.7

Assume \((\mathrm{H}_1)\). Then \(\varphi :[0,1]\rightarrow \mathbb R\) is an increasing solution of equation (E\(_0\)) if and only if \(\varphi =M_h\) with some increasing function \(h:[0,1]\rightarrow \mathbb R\) and \(M\in {\mathfrak {M}}\).

Proof

If \(\varphi :[0,1]\rightarrow \mathbb R\) is an increasing solution of equation (E\(_0\)), then \(\varphi =M_\varphi \) with any \(M\in {\mathfrak {M}}\).

Conversely, if \(h:[0,1]\rightarrow \mathbb R\) is increasing and \(M\in {\mathfrak {M}}\), then \(M_h\) is increasing as well. Moreover, Corollary 5.5 implies that \(M_h\) satisfies (E\(_0\)). \(\square \)