1 Introduction

Gronwall [9] was the first to establish Gronwall’s inequality, in differential form. Later, Bellman [2] put forward the integral form of Gronwall’s inequality, stated in the following proposition.

Proposition 1.1

Assume that \(\alpha \geq 0\) and \(T>0\). If \(\beta (t)\) and \(\mu (t), t\in [0,T]\) are two nonnegative continuous functions satisfying

$$\begin{aligned} \mu (t)\leq \alpha + \int ^{t}_{0}\beta (s)\mu (s)\,\mathrm{d}s,\quad t\in [0,T], \end{aligned}$$

then

$$\begin{aligned} \mu (t)\leq \alpha e^{\int ^{t}_{0}\beta (s)\,\mathrm{d}s},\quad t\in [0,T]. \end{aligned}$$

Since then, motivated by a variety of applications, Gronwall’s inequality has been extended considerably in numerous articles and has become a useful tool for solving many problems in the field of differential equations. We refer the reader to [4, 5, 12, 13, 19] and the references therein. To meet the needs arising in the study of stochastic differential equations, many scholars have generalized Gronwall’s inequality. Wang and Fan [16] established the following backward stochastic Gronwall inequality.

Proposition 1.2

Let \(\beta (\omega, t)\) be a strictly positive \({\mathbb{F}}\)-adapted stochastic process satisfying

$$\begin{aligned} \biggl\Vert \int ^{T}_{0}\beta (\omega, s)\,\mathrm{d}s \biggr\Vert _{ \infty }< \infty. \end{aligned}$$

If the nonnegative \({\mathscr{F}}_{t}\)-adapted stochastic process \(\mu (\omega, t)\) satisfies the following conditions:

1:

\({\mathbb{E}}[\sup_{t\in [0,T]}\mu (\omega, t)]<\infty \).

2:

\(\mu (t)\leq a + {\mathbb{E}} [ \int ^{T}_{t}\beta (s)\mu (s) \,\mathrm{d}s | {\mathscr{F}}_{t} ], t\in [0, T]\), where \(a> 0\) is a constant.

Then, for each \(t\in [0, T]\), we have

$$\begin{aligned} \mu (t)\leq a{\mathbb{E}} \bigl[e^{\int ^{T}_{t}\beta (s)\,\mathrm{d}s } | {\mathscr{F}}_{t} \bigr]. \end{aligned}$$
(1)

In particular, if \(a=0\), we have \(\mu (t)=0\).

They used this proposition to prove a comparison theorem for \(L^{p}\) solutions of 1-dimensional backward stochastic differential equations under a stochastic Lipschitz condition. Hun et al. [10] generalized the backward stochastic Gronwall inequality to the case of a random time horizon. The authors of [1] extended a result of Lipovan on Gronwall-like inequalities to a more general form.

Later, Bihari [3] put forward a useful generalization of the Gronwall–Bellman inequality, called Bihari’s inequality, which provides explicit bounds on unknown functions. This inequality has also found many applications in the field of differential equations. Zhang and Zhu [20] used Bihari’s inequality to study non-Lipschitz stochastic differential equations driven by multi-parameter Brownian motion. In [17], Bihari’s inequality was used to study non-Lipschitz stochastic Volterra type equations with jumps. Furthermore, Wu et al. [18] used Bihari’s inequality to analyze the solvability of anticipated backward stochastic differential equations, and Fan [6] used it to study the existence, uniqueness, and stability of \(L^{1}\) solutions for multidimensional backward stochastic differential equations with generators of one-sided Osgood type. As the study of stochastic differential equations progressed, it was found that the original Bihari inequality could no longer meet the needs of applications, and various generalizations were proposed. The authors of [11] studied new Gronwall–Bellman–Bihari type integral inequalities with singular as well as nonsingular kernels, generalizing some existing results; as an application, they investigated the behavior of solutions of fractional stochastic differential equations. In [8], new nonlinear Gronwall–Bellman–Bihari type inequalities with singular kernels were analyzed via the Riemann–Liouville k-fractional integral; these can be used to study properties of solutions of fractional differential equations.

To the best of our knowledge, there has been little study of backward stochastic Bihari inequalities so far. Motivated by the above articles, in this paper we mainly generalize the following Bihari inequality from [3] to the backward stochastic setting.

Proposition 1.3

Let \(\rho: {\mathbb{R}}^{+}\rightarrow {\mathbb{R}}^{+}\) be a continuous and nondecreasing function, and let \(\beta (s), f(s)\) be two nonnegative functions on \({\mathbb{R}}^{+}\) such that, for some \(a> 0\),

$$\begin{aligned} f(t)\leq a+ \int ^{t}_{0}\beta (s)\rho \bigl(f(s)\bigr)\,\mathrm{d}s,\quad t\geq 0. \end{aligned}$$

Then

$$\begin{aligned} f(t)\leq G^{-1} \biggl( G(a) + \int ^{t}_{0}\beta (s)\,\mathrm{d}s \biggr), \end{aligned}$$

where \(G(x)\doteq \int ^{x}_{c}\frac{1}{\rho (y)}\,\mathrm{d}y\) is well defined for some \(c> 0\) and \(G^{-1}(\cdot )\) is the inverse function of G.
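For orientation, we record the standard special case \(\rho (y)=y\) (included here only as an illustration). In this case

$$\begin{aligned} G(x)= \int ^{x}_{c}\frac{1}{y}\,\mathrm{d}y=\ln \frac{x}{c},\qquad G^{-1}(y)=ce^{y}, \end{aligned}$$

so the bound of Proposition 1.3 becomes

$$\begin{aligned} f(t)\leq G^{-1} \biggl( \ln \frac{a}{c}+ \int ^{t}_{0}\beta (s)\,\mathrm{d}s \biggr)=ae^{\int ^{t}_{0}\beta (s)\,\mathrm{d}s}, \end{aligned}$$

which recovers the Gronwall–Bellman bound of Proposition 1.1.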

We will study several different forms of backward stochastic Bihari’s inequality and give two applications. It is necessary to point out that the proof methods in [3, 14, 15] for Bihari’s inequality are no longer applicable, since \(\beta (s)\) in the theorems of this paper depends on ω, whereas \(\beta (s)\) in Proposition 1.3 is independent of ω.

2 Preliminaries

2.1 Notations

For \(x, y \in {\mathbb{R}}\), we use \(|x |\) to denote the Euclidean norm of x and \(\langle x, y\rangle \) to denote the Euclidean inner product. For \(B\in {\mathbb{R}}^{ d}\), \(|B |\) represents \(\sqrt{\mathrm{Tr} BB^{\ast }}\). Let \((\Omega, {\mathscr{F}}, P)\) be a complete probability space carrying a d-dimensional Brownian motion \(\{W_{t}\}_{0\leq t\leq T}\), and let \({\mathbb{F}}\doteq \{{\mathscr{F}}_{t}\}_{t\in [0,T]}\) be the natural filtration generated by W. For a Euclidean space \({\mathbb{H}}\), we introduce the following spaces:

\(L^{2}_{{\mathscr{F}}_{T}}(\Omega; {\mathbb{H}})\) denotes the space of \({\mathbb{H}}\)-valued \({\mathscr{F}}_{T}\)-measurable random variables ϕ satisfying \(\|\phi \|_{2}\doteq ({\mathbb{E}}[|\phi |^{2}])^{\frac{1}{2}}< \infty \).

\(L^{\infty }_{{\mathscr{F}}_{T}}(\Omega; {\mathbb{H}})\) denotes the space of \({\mathbb{H}}\)-valued \({\mathscr{F}}_{T}\)-measurable random variables ϕ satisfying \(\|\phi \|_{\infty }\doteq \mathrm{esssup}_{\omega \in \Omega }|\phi |< \infty \).

\(L^{2}_{{\mathbb{F}}}(0, T; {\mathbb{H}})\) denotes the space of \({\mathbb{H}}\)-valued \({\mathbb{F}}\)-adapted stochastic processes \(\{\varphi _{s}, s\in [0, T]\}\) satisfying \(\|\varphi \|_{L^{2}_{{\mathbb{F}}}(0, T)}\doteq ( { \mathbb{E}} [ \int ^{T}_{0}|\varphi (s)|^{2}\,\mathrm{d}s ] )^{\frac{1}{2}}< \infty \).

\(L^{\infty }_{{\mathbb{F}}}(0, T; {\mathbb{H}})\) denotes the space of \({\mathbb{H}}\)-valued \({\mathbb{F}}\)-adapted stochastic processes \(\{\varphi _{s}, s\in [0, T]\}\) satisfying \(\|\varphi \|_{L^{\infty }_{{\mathbb{F}}}(0, T)}\doteq \mathrm{esssup}_{( \omega, s)\in \Omega \times [0,T]}|\varphi (s)|< \infty \).

\(S^{2}_{{\mathbb{F}}}(0, T; {\mathbb{H}})\) denotes the space of continuous processes \(\{\varphi _{s}, s\in [0, T]\}\) in \(L^{2}_{{\mathbb{F}}}(0, T; {\mathbb{H}})\) satisfying \(\|\varphi \|_{S^{2}_{{\mathbb{F}}}(0, T)}\doteq ( { \mathbb{E}}[\sup_{0 \leq s \leq T}|\varphi (s)|^{2}] ) ^{\frac{1}{2}}< \infty \).

In the following, ρ is a nondecreasing continuous concave function from \({\mathbb{R}}_{+}\) to \({\mathbb{R}}_{+}\) such that \(\rho (0)=0\) and \(\int _{0+}\frac{1}{\rho (s)}\,\mathrm{d}s= +\infty \). As before, \(G(x)\doteq \int ^{x}_{c}\frac{1}{\rho (y)}\,\mathrm{d}y\) is well defined for some \(c> 0\), and \(G^{-1}(\cdot )\) is the inverse function of G.
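Two standard examples of such a function ρ (recorded here only for illustration) are

$$\begin{aligned} \rho _{1}(u)=u \quad\text{and}\quad \rho _{2}(u)=u\ln \frac{1}{u} \quad\text{for } 0< u\leq \delta,\qquad \rho _{2}(0)=0, \end{aligned}$$

where \(\delta \in (0, e^{-1})\) and \(\rho _{2}\) is extended affinely on \((\delta, \infty )\) so that it remains nondecreasing and concave. Both are concave, vanish at zero, and satisfy \(\int _{0+}\frac{1}{\rho (s)}\,\mathrm{d}s= +\infty \); for \(\rho _{1}\) one has \(G(x)=\ln \frac{x}{c}\) and \(G^{-1}(y)=ce^{y}\).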

Remark 2.1

  1. 1.

    Since ρ is concave and \(\rho (0)=0\), one can find a pair of positive constants a and b such that

    $$\begin{aligned} \rho (u)\leq a+ bu \quad\text{for all u $\geq $ 0}. \end{aligned}$$
  2. 2.

    We make the following convention: the letter \(C^{\prime }\) will denote a positive constant, whose value may vary from one place to another. Moreover, \(C^{\prime }\) only depends on the constants in the following theorems.

3 Main results

Before giving our main results, we need the following lemma.

Lemma 3.1

The function \(G(x), x> 0\), defined above is concave.

Proof

For any \(0< x_{1}< x_{2}\),

$$\begin{aligned} &G \biggl( \frac{x_{1}+x_{2}}{2} \biggr) - \frac{1}{2} \bigl( G(x_{1})+G(x_{2}) \bigr) \\ &\quad = \frac{1}{2} \int ^{\frac{x_{1}+x_{2}}{2}}_{x_{1}} \frac{1}{\rho (x)}\,\mathrm{d}x - \frac{1}{2} \int ^{x_{2}}_{ \frac{x_{1}+x_{2}}{2}}\frac{1}{\rho (x)}\,\mathrm{d}x\geq 0. \end{aligned}$$

The inequality holds because \(\frac{1}{\rho }\) is nonincreasing and the two intervals of integration have the same length \(\frac{x_{2}-x_{1}}{2}\). Since G is continuous, midpoint concavity implies that \(G(x), x> 0\) is a concave function. □

Theorem 3.2

Let \(\beta (\omega, t)\) be a strictly positive \({\mathbb{F}}\)-adapted stochastic process satisfying

$$\begin{aligned} \biggl\Vert \int ^{T}_{0}\beta (\omega, s)\,\mathrm{d}s \biggr\Vert _{ \infty }< \infty. \end{aligned}$$

If the nonnegative \({\mathscr{F}}_{t}\)-adapted stochastic process \(\mu (\omega, t)\) satisfies the following conditions:

1:

\({\mathbb{E}}[\sup_{t\in [0,T]}\mu (\omega, t)]<\infty \).

2:

\(\mu (t)\leq a + {\mathbb{E}} [ \int ^{T}_{t}\beta (s)\rho ( \mu (s)) \,\mathrm{d}s | {\mathscr{F}}_{t} ], t\in [0, T]\), where \(a> 0\) is a constant.

Then, for each \(t\in [0, T]\), we have

$$\begin{aligned} \mu (t)\leq G^{-1} \biggl( G \biggl({\mathbb{E}} \biggl[ G^{-1} \biggl( G(a)+ \int ^{T}_{0}\beta (s)\,\mathrm{d}s \biggr)\bigg| { \mathscr{F}}_{t} \biggr] \biggr) - \int ^{t}_{0} \beta (s)\,\mathrm{d}s \biggr). \end{aligned}$$
(2)

In particular, if \(a=0\), we have \(\mu (t)=0\).

Proof

Set \(\eta = \int ^{T}_{0}\beta (s)\rho (\mu (s))\,\mathrm{d}s\). By the martingale representation theorem, there exists a stochastic process \(\{z(t), t\in [0, T] \}\in L^{2}_{{\mathbb{F}}}(0, T; {\mathbb{R}})\) such that

$$\begin{aligned} {\mathbb{E}}[\eta | {\mathscr{F}}_{t}]= {\mathbb{E}}[\eta ]+ \int ^{t}_{0}z(s) \,\mathrm{d}W(s). \end{aligned}$$

Set

$$\begin{aligned} \bar{\mu }(t)=a+ {\mathbb{E}} \biggl[ \int ^{T}_{t}\beta (s)\rho \bigl( \mu (s)\bigr) \, \mathrm{d}s| {\mathscr{F}}_{t} \biggr]. \end{aligned}$$

By the assumptions of the theorem, we know that \(\mu (t) \leq \bar{\mu }(t)\). Moreover,

$$\begin{aligned} \bar{\mu }(t)&=a+ {\mathbb{E}} \biggl[ \int ^{T}_{t}\beta (s)\rho \bigl( \mu (s)\bigr) \, \mathrm{d}s| {\mathscr{F}}_{t} \biggr] \\ & =a+ {\mathbb{E}} \biggl[ \eta - \int ^{t}_{0}\beta (s)\rho \bigl(\mu (s)\bigr) \, \mathrm{d}s| {\mathscr{F}}_{t} \biggr] \\ &=a+ {\mathbb{E}}[\eta |{\mathscr{F}}_{t}] - \int ^{t}_{0}\beta (s)\rho \bigl( \mu (s)\bigr) \, \mathrm{d}s \\ &=a+{\mathbb{E}}[\eta ]+ \int ^{t}_{0}z(s)\,\mathrm{d}W(s)- \int ^{t}_{0}\beta (s) \rho \bigl(\mu (s)\bigr)\, \mathrm{d}s. \end{aligned}$$

Applying the differential formula to \(G^{-1} ( G(\bar{\mu }(t))+\int ^{t}_{0}\beta (s)\,\mathrm{d}s )\), we have

$$\begin{aligned} & \mathrm{d}G^{-1} \biggl( G\bigl(\bar{\mu }(t)\bigr)+ \int ^{t}_{0}\beta (s)\,\mathrm{d}s \biggr) \\ &\quad =\rho \biggl( G^{-1} \biggl( G\bigl(\bar{\mu }(t)\bigr)+ \int ^{t}_{0} \beta (s)\,\mathrm{d}s \biggr) \biggr) \\ &\qquad{}\times \biggl( \frac{1}{\rho (\bar{\mu }(t))}\bigl(z(t)\,\mathrm{d}W(t)- \beta (t)\rho \bigl(\mu (t)\bigr) \,\mathrm{d}t\bigr) + \beta (t)\,\mathrm{d}t \biggr). \end{aligned}$$

Since ρ is nondecreasing and \(\mu (t) \leq \bar{\mu }(t)\), we have \(\rho (\mu (t)) \leq \rho (\bar{\mu }(t))\). Hence

$$\begin{aligned} \mathrm{d}G^{-1} \biggl( G\bigl(\bar{\mu }(t)\bigr)+ \int ^{t}_{0}\beta (s)\,\mathrm{d}s \biggr)&\geq \rho \biggl( G^{-1} \biggl( G\bigl(\bar{\mu }(t)\bigr)+ \int ^{t}_{0}\beta (s)\,\mathrm{d}s \biggr) \biggr) \frac{1}{\rho (\bar{\mu }(t))}z(t)\,\mathrm{d}W(t)\\ &\geq z(t)\,\mathrm{d}W(t). \end{aligned}$$

Integrating over \([t, T]\), taking the conditional expectation with respect to \({\mathscr{F}}_{t}\), and noting that \(\bar{\mu }(T)=a\), we obtain

$$\begin{aligned} {\mathbb{E}} \biggl[ G^{-1} \biggl( G\bigl(\bar{\mu }(T)\bigr)+ \int ^{T}_{0} \beta (s)\,\mathrm{d}s \biggr)\bigg| { \mathscr{F}}_{t} \biggr] \geq G^{-1} \biggl( G\bigl(\bar{\mu }(t)\bigr)+ \int ^{t}_{0}\beta (s) \,\mathrm{d}s \biggr). \end{aligned}$$

Thus,

$$\begin{aligned} \bar{\mu }(t)\leq G^{-1} \biggl( G \biggl({\mathbb{E}} \biggl[ G^{-1} \biggl( G(a)+ \int ^{T}_{0}\beta (s)\,\mathrm{d}s \biggr)\bigg| { \mathscr{F}}_{t} \biggr] \biggr) - \int ^{t}_{0} \beta (s)\,\mathrm{d}s \biggr). \end{aligned}$$

By \(\mu (t) \leq \bar{\mu }(t)\), we obtain

$$\begin{aligned} \mu (t)\leq G^{-1} \biggl( G \biggl({\mathbb{E}} \biggl[ G^{-1} \biggl( G(a)+ \int ^{T}_{0}\beta (s)\,\mathrm{d}s \biggr)\bigg| { \mathscr{F}}_{t} \biggr] \biggr) - \int ^{t}_{0} \beta (s)\,\mathrm{d}s \biggr). \end{aligned}$$

 □

Remark 3.3

We now show that

$$\begin{aligned} G^{-1} \biggl( G \biggl({\mathbb{E}} \biggl[ G^{-1} \biggl( G(a)+ \int ^{T}_{0}\beta (s)\,\mathrm{d}s \biggr)\bigg| { \mathscr{F}}_{t} \biggr] \biggr) - \int ^{t}_{0}\beta (s) \,\mathrm{d}s \biggr) \geq 0. \end{aligned}$$

Since G is a concave function, by Jensen’s inequality, we have

$$\begin{aligned} &G^{-1} \biggl( G \biggl({\mathbb{E}} \biggl[ G^{-1} \biggl( G(a)+ \int ^{T}_{0}\beta (s)\,\mathrm{d}s \biggr)\bigg| { \mathscr{F}}_{t} \biggr] \biggr) - \int ^{t}_{0}\beta (s) \,\mathrm{d}s \biggr)\\ &\quad \geq G^{-1} \biggl( {\mathbb{E}} \biggl[ G(a)+ \int ^{T}_{t}\beta (s)\,\mathrm{d}s| { \mathscr{F}}_{t} \biggr] \biggr)\geq 0. \end{aligned}$$
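The following computation is not needed in the sequel; we include it only to show how estimate (2) relates to Proposition 1.2. In the special case \(\rho (y)=y\), so that \(G(x)=\ln \frac{x}{c}\) and \(G^{-1}(y)=ce^{y}\), and using that \(\int ^{t}_{0}\beta (s)\,\mathrm{d}s\) is \({\mathscr{F}}_{t}\)-measurable, the right-hand side of (2) reduces to

$$\begin{aligned} &G^{-1} \biggl( G \biggl({\mathbb{E}} \biggl[ G^{-1} \biggl( G(a)+ \int ^{T}_{0}\beta (s)\,\mathrm{d}s \biggr)\bigg| { \mathscr{F}}_{t} \biggr] \biggr) - \int ^{t}_{0}\beta (s) \,\mathrm{d}s \biggr) \\ &\quad ={\mathbb{E}} \bigl[ ae^{\int ^{T}_{0}\beta (s)\,\mathrm{d}s} | {\mathscr{F}}_{t} \bigr] e^{-\int ^{t}_{0}\beta (s)\,\mathrm{d}s} = a{\mathbb{E}} \bigl[ e^{\int ^{T}_{t}\beta (s)\,\mathrm{d}s} | {\mathscr{F}}_{t} \bigr]. \end{aligned}$$

Hence Theorem 3.2 contains the backward stochastic Gronwall inequality (1) as a special case.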

The following lemma is a slight extension of Theorem 3.2: roughly speaking, since \(\alpha (s)\mu (s)+\beta (s)\rho (\mu (s))\leq (\alpha (s)+\beta (s))(\mu (s)+\rho (\mu (s)))\), the argument of Theorem 3.2 can be repeated with \(\rho (y)\) replaced by \(y+\rho (y)\), which explains the appearance of the function \(\bar{W}\) below.

Lemma 3.4

Let \(\alpha (\omega, t), \beta (\omega, t)\) be two nonnegative \({\mathbb{F}}\)-adapted stochastic processes, at least one of which is strictly positive. If the following conditions

1:

\(\| \int ^{T}_{0}\alpha (\omega, s)\,\mathrm{d}s \| _{ \infty }<\infty, \| \int ^{T}_{0}\beta (\omega, s)\,\mathrm{d}s \| _{\infty }<\infty \);

2:

\(\mu (\omega, t)\) is a nonnegative \({\mathbb{F}}\)-adapted stochastic process and \({\mathbb{E}}[\sup_{t\in [0,T]}\mu (\omega, t)]<\infty \);

3:

\(\mu (t)\leq a + {\mathbb{E}} [ \int ^{T}_{t}\alpha (s)\mu (s) \,\mathrm{d}s | {\mathscr{F}}_{t} ] + {\mathbb{E}} [ \int ^{T}_{t}\beta (s)\rho (\mu (s)) \,\mathrm{d}s | { \mathscr{F}}_{t} ], t\in [0, T]\), where \(a> 0\) is a constant,

hold, then, for each \(t\in [0, T]\), we have

$$\begin{aligned} \mu (t)\leq{}& \bar{W}^{-1} \biggl( \bar{W} \biggl({\mathbb{E}} \biggl[ \bar{W}^{-1} \biggl( \bar{W}(a)+ \int ^{T}_{0}\alpha (s) \,\mathrm{d}s + \int ^{T}_{0}\beta (s)\,\mathrm{d}s \biggr)\bigg| { \mathscr{F}}_{t} \biggr] \biggr)\\ &{} - \int ^{t}_{0}\alpha (s) \,\mathrm{d}s- \int ^{t}_{0}\beta (s)\,\mathrm{d}s \biggr), \end{aligned}$$

where \(\bar{W}(x)\doteq \int ^{x}_{c}\frac{1}{y +\rho (y)}\,\mathrm{d}y\) is well defined for some \(c> 0\) and \(\bar{W}^{-1}(\cdot )\) is the inverse function of \(\bar{W}\). In particular, if \(a=0\), we have \(\mu (t)=0\).

In the following theorem, the forward Gronwall–Bellman inequality in [14] is generalized to a backward stochastic Gronwall–Bellman inequality. Moreover, in contrast to the Gronwall–Bellman inequality in [14], the processes \(\alpha (\omega, t), \beta (\omega, t)\) in the following theorem may depend on ω. Thus, the proof method in this paper is also different from the method in [14].

Theorem 3.5

Let \(\alpha (\omega, t), \beta (\omega, t)\) be two nonnegative \({\mathbb{F}}\)-adapted stochastic processes. If the following conditions are satisfied:

1:

\(\| \int ^{T}_{0}\alpha (\omega, s)\,\mathrm{d}s \| _{ \infty }<\infty, \| \int ^{T}_{0}\beta (\omega, s)\,\mathrm{d}s \| _{\infty }<\infty \),

2:

\(\mu (\omega, t)\) is a nonnegative \({\mathbb{F}}\)-adapted stochastic process and \({\mathbb{E}}[\sup_{t\in [0,T]}\mu (\omega, t)]<\infty \),

3:

\(\mu (t)\leq a + {\mathbb{E}} [ \int ^{T}_{t}\alpha (s)\mu (s) \,\mathrm{d}s | {\mathscr{F}}_{t} ] + {\mathbb{E}} [ \int ^{T}_{t} (\alpha (s){\mathbb{E}} [ \int _{s}^{T} \beta (\tau )\mu (\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{s} ] ) \,\mathrm{d}s |{\mathscr{F}}_{t} ], t \in [0, T]\), where \(a> 0\) is a constant,

then, for each \(t\in [0, T]\), we have

$$\begin{aligned} \mu (t)\leq a{\mathbb{E}} \bigl[e^{\int ^{T}_{t}(\alpha (s)+ \beta (s))\,\mathrm{d}s} |{\mathscr{F}}_{t} \bigr]. \end{aligned}$$

In particular, if \(a=0\), we have \(\mu (t)=0\).

Proof

Set

$$\begin{aligned} \bar{\mu }(t)=a + {\mathbb{E}} \biggl[ \int ^{T}_{t}\alpha (s)\mu (s) \,\mathrm{d}s| { \mathscr{F}}_{t} \biggr]+{\mathbb{E}} \biggl[ \int ^{T}_{t} \biggl(\alpha (s){\mathbb{E}} \biggl[ \int _{s}^{T} \beta (\tau )\mu (\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{s} \biggr] \biggr) \,\mathrm{d}s\bigg|{\mathscr{F}}_{t} \biggr]. \end{aligned}$$

It follows that \(\mu (t) \leq \bar{\mu }(t)\). Set

$$\begin{aligned} \eta = \int ^{T}_{0}\alpha (s)\mu (s) \,\mathrm{d}s+ \int ^{T}_{0} \biggl(\alpha (s){\mathbb{E}} \biggl[ \int _{s}^{T} \beta (\tau )\mu (\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{s} \biggr] \biggr) \,\mathrm{d}s. \end{aligned}$$

By the martingale representation theorem, there exists a stochastic process \(\{z(t), t\in [0, T] \}\in L^{2}_{{\mathbb{F}}}(0, T; {\mathbb{R}})\) such that

$$\begin{aligned} {\mathbb{E}}[\eta | {\mathscr{F}}_{t}]= {\mathbb{E}}[\eta ]+ \int ^{t}_{0}z(s) \,\mathrm{d}W(s). \end{aligned}$$

Thus,

$$\begin{aligned} \bar{\mu }(t)={}&a + {\mathbb{E}} \biggl[ \int ^{T}_{t}\alpha (s)\mu (s) \,\mathrm{d}s| { \mathscr{F}}_{t} \biggr]+{\mathbb{E}} \biggl[ \int ^{T}_{t} \biggl(\alpha (s){\mathbb{E}} \biggl[ \int _{s}^{T} \beta (\tau )\mu (\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{s} \biggr] \biggr) \,\mathrm{d}s\bigg|{\mathscr{F}}_{t} \biggr] \\ ={}&a+ {\mathbb{E}} \biggl[\eta - \int ^{t}_{0}\alpha (s)\mu (s) \,\mathrm{d}s- \int ^{t}_{0} \biggl(\alpha (s){\mathbb{E}} \biggl[ \int ^{T}_{s} \beta (\tau )\mu (\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{s} \biggr] \biggr) \,\mathrm{d}s\bigg| {\mathscr{F}}_{t} \biggr] \\ ={}&a+{\mathbb{E}}[\eta ]+ \int ^{t}_{0}z(s)\,\mathrm{d}W(s)- \int ^{t}_{0} \alpha (s)\mu (s) \,\mathrm{d}s\\ &{}- \int ^{t}_{0} \biggl(\alpha (s){\mathbb{E}} \biggl[ \int ^{T}_{s}\beta (\tau )\mu (\tau ) \,\mathrm{d}\tau | { \mathscr{F}}_{s} \biggr] \biggr) \,\mathrm{d}s. \end{aligned}$$

In differential form, this gives

$$\begin{aligned} \mathrm{d}\bar{\mu }(t)& =z(t)\,\mathrm{d}W(t)-\alpha (t)\mu (t) \,\mathrm{d}t - \biggl( \alpha (t){\mathbb{E}} \biggl[ \int ^{T}_{t}\beta (\tau ) \mu (\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{t} \biggr] \biggr)\,\mathrm{d}t \\ &\geq z(t)\,\mathrm{d}W(t)-\alpha (t) \biggl( \bar{\mu }(t)+ \biggl( { \mathbb{E}} \biggl[ \int ^{T}_{t}\beta (\tau )\bar{\mu }(\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{t} \biggr] \biggr) \biggr) \,\mathrm{d}t. \end{aligned}$$

Set

$$\begin{aligned} m(t)=\bar{\mu }(t) + \biggl( {\mathbb{E}} \biggl[ \int ^{T}_{t} \beta (\tau )\bar{\mu }(\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{t} \biggr] \biggr). \end{aligned}$$

From the above, we get

$$\begin{aligned} \mathrm{d}m(t)&=\,\mathrm{d}\bar{\mu }(t) +\,\mathrm{d}\biggl( {\mathbb{E}} \biggl[ \int ^{T}_{t}\beta (\tau )\bar{\mu }(\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{t} \biggr] \biggr) \\ & \geq z(t)\,\mathrm{d}W(t)-\alpha (t)m(t)\,\mathrm{d}t+\mathrm{d}\biggl( { \mathbb{E}} \biggl[ \int ^{T}_{t}\beta (\tau )\bar{\mu }(\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{t} \biggr] \biggr). \end{aligned}$$

Integrating on \([t, T]\) and taking the conditional expectation with respect to \({\mathscr{F}}_{t}\) on both sides of the above inequality, we have

$$\begin{aligned} m(T)- m(t)& \geq -{\mathbb{E}} \biggl[ \int ^{T}_{t}\alpha (s)m(s) \,\mathrm{d}s| { \mathscr{F}}_{t} \biggr]- {\mathbb{E}} \biggl[ \int ^{T}_{t}\beta (\tau )\bar{\mu }(\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{t} \biggr]. \end{aligned}$$

Since \(m(T)=\bar{\mu }(T)=a\) and \(\bar{\mu }(s)\leq m(s)\), it follows that

$$\begin{aligned} m(t)& \leq a + {\mathbb{E}} \biggl[ \int ^{T}_{t}\alpha (s)m(s) \,\mathrm{d}s| { \mathscr{F}}_{t} \biggr] + {\mathbb{E}} \biggl[ \int ^{T}_{t}\beta (\tau )m(\tau ) \,\mathrm{d}\tau | { \mathscr{F}}_{t} \biggr]. \end{aligned}$$

From Theorem 1 in [16], we have

$$\begin{aligned} m(t)& \leq a{\mathbb{E}} \bigl[e^{\int ^{T}_{t}(\alpha (s)+\beta (s)) \,\mathrm{d}s} | {\mathscr{F}}_{t} \bigr]. \end{aligned}$$

Since \(\mu (t)\leq \bar{\mu }(t)\leq m(t)\), we have

$$\begin{aligned} \mu (t)& \leq a{\mathbb{E}} \bigl[e^{\int ^{T}_{t}(\alpha (s)+ \beta (s))\,\mathrm{d}s} | {\mathscr{F}}_{t} \bigr]. \end{aligned}$$

 □
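As a simple consistency check (included only for illustration), if α and β are deterministic functions, then the conditional expectation above is superfluous and the bound of Theorem 3.5 reads

$$\begin{aligned} \mu (t)\leq a e^{\int ^{T}_{t}(\alpha (s)+ \beta (s))\,\mathrm{d}s},\quad t\in [0, T], \end{aligned}$$

which is of the same form as the classical deterministic Gronwall–Bellman bound.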

From Theorem 3.5, we get the following lemma.

Lemma 3.6

Let \(\alpha (\omega, t), \beta (\omega, t)\) be two nonnegative \({\mathbb{F}}\)-adapted stochastic processes satisfying

$$\begin{aligned} \biggl\Vert \int ^{T}_{0}\alpha (\omega, s)\,\mathrm{d}s \biggr\Vert _{ \infty }< \infty, \qquad\biggl\Vert \int ^{T}_{0}\beta (\omega, s)\,\mathrm{d}s \biggr\Vert _{\infty }< \infty. \end{aligned}$$

If the following conditions hold:

1:

\(n(t)\) is a strictly positive \({\mathbb{F}}\)-adapted stochastic process and is decreasing with respect to t;

2:

\(\mu (\omega, t)\) is a nonnegative \({\mathbb{F}}\)-adapted stochastic process and \({\mathbb{E}}[\sup_{t\in [0,T]}\mu (\omega, t)]<\infty \);

3:

\(\mu (t)\leq n(t) + {\mathbb{E}} [ \int ^{T}_{t}\alpha (s) \mu (s) \,\mathrm{d}s | {\mathscr{F}}_{t} ]+{\mathbb{E}} [ \int ^{T}_{t} (\alpha (s){\mathbb{E}} [ \int _{s}^{T}\beta (\tau )\mu (\tau ) \,\mathrm{d}\tau | { \mathscr{F}}_{s} ] ) \,\mathrm{d}s |{\mathscr{F}}_{t} ], t\in [0, T]\),

then, for each \(t\in [0, T]\), we have

$$\begin{aligned} \mu (t)\leq n(t){\mathbb{E}} \bigl[e^{\int ^{T}_{t}(\alpha (s)+ \beta (s))\,\mathrm{d}s} |{\mathscr{F}}_{t} \bigr], \quad \mathrm{d}P\textit{-a.s.} \end{aligned}$$

Proof

Since

$$\begin{aligned} &\mu (t)\leq n(t) + {\mathbb{E}} \biggl[ \int ^{T}_{t}\alpha (s) \mu (s) \,\mathrm{d}s| { \mathscr{F}}_{t} \biggr]+{\mathbb{E}} \biggl[ \int ^{T}_{t} \biggl(\alpha (s){\mathbb{E}} \biggl[ \int _{s}^{T}\beta (\tau )\mu (\tau ) \,\mathrm{d}\tau | { \mathscr{F}}_{s} \biggr] \biggr) \,\mathrm{d}s\bigg|{\mathscr{F}}_{t} \biggr],\\ &\quad t\in [0, T], \end{aligned}$$

dividing by \(n(t)> 0\) and using that n is decreasing (so that \(\frac{1}{n(t)}\leq \frac{1}{n(s)}\) for \(s\geq t\)), we have

$$\begin{aligned} &\frac{\mu (t)}{n(t)}\leq 1 + {\mathbb{E}} \biggl[ \int ^{T}_{t} \alpha (s)\frac{\mu (s)}{n(s)} \,\mathrm{d}s| { \mathscr{F}}_{t} \biggr]+{\mathbb{E}} \biggl[ \int ^{T}_{t} \biggl(\alpha (s){ \mathbb{E}} \biggl[ \int _{s}^{T}\beta (\tau ) \frac{\mu (\tau )}{n(\tau )} \,\mathrm{d}\tau | {\mathscr{F}}_{s} \biggr] \biggr) \,\mathrm{d}s\bigg|{ \mathscr{F}}_{t} \biggr],\\ &\quad t\in [0, T]. \end{aligned}$$

Applying Theorem 3.5 to \(\frac{\mu (t)}{n(t)}\), we complete the proof. □

Next, we will extend the Bellman–Bihari inequality in Theorem 2 of [15] to a backward stochastic Bellman–Bihari inequality. The proof method in this paper is also different from the method in [15].

Theorem 3.7

Let \(\alpha (\omega, t), \beta (\omega, t), \gamma (\omega, t)\) be three nonnegative \({\mathbb{F}}\)-adapted stochastic processes satisfying

$$\begin{aligned} \biggl\Vert \int ^{T}_{0}\alpha (\omega, s)\,\mathrm{d}s \biggr\Vert _{ \infty }< \infty, \qquad\biggl\Vert \int ^{T}_{0}\beta (\omega, s)\,\mathrm{d}s \biggr\Vert _{\infty }< \infty,\qquad \biggl\Vert \int ^{T}_{0}\gamma ( \omega, s)\,\mathrm{d}s \biggr\Vert _{\infty }< \infty. \end{aligned}$$

Assume that at least one of them is strictly positive. If the following conditions are satisfied:

1:

\(\mu (\omega, t)\) is a nonnegative \({\mathbb{F}}\)-adapted stochastic process and \({\mathbb{E}}[\sup_{t\in [0,T]}\mu (\omega, t)]<\infty \);

2:
$$\begin{aligned} \mu (t)\leq {}&a + {\mathbb{E}} \biggl[ \int ^{T}_{t}\alpha (s)\mu (s) \,\mathrm{d}s| { \mathscr{F}}_{t} \biggr]\\ &{}+{\mathbb{E}} \biggl[ \int ^{T}_{t} \biggl(\alpha (s){\mathbb{E}} \biggl[ \int _{s}^{T} \beta (\tau )\mu (\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{s} \biggr] \biggr) \,\mathrm{d}s\bigg|{\mathscr{F}}_{t} \biggr] \\ &{} + {\mathbb{E}} \biggl[ \int ^{T}_{t}\gamma (s)\rho \bigl(\mu (s)\bigr) \, \mathrm{d}s| {\mathscr{F}}_{t} \biggr],\quad t\in [0, T], \end{aligned}$$

then, for each \(t\in [0, T]\), we have

$$\begin{aligned} \mu (t)\leq{}& \bar{W}^{-1} \biggl( \bar{W} \biggl({\mathbb{E}} \biggl[ \bar{W}^{-1} \biggl( \bar{W}(a)+ \int ^{T}_{0}\bigl( \alpha (s)+\beta (s)+\gamma (s) \bigr)\,\mathrm{d}s \biggr)\bigg| { \mathscr{F}}_{t} \biggr] \biggr)\\ &{} - \int ^{t}_{0}\bigl(\alpha (s)+ \beta (s)+\gamma (s) \bigr)\,\mathrm{d}s \biggr), \end{aligned}$$

where \(\bar{W}, \bar{W}^{-1} \) are the functions in Lemma 3.4.

Proof

Set

$$\begin{aligned} \bar{\mu }(t)={}&a + {\mathbb{E}} \biggl[ \int ^{T}_{t}\alpha (s)\mu (s) \,\mathrm{d}s| { \mathscr{F}}_{t} \biggr]+{\mathbb{E}} \biggl[ \int ^{T}_{t} \biggl(\alpha (s){\mathbb{E}} \biggl[ \int _{s}^{T} \beta (\tau )\mu (\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{s} \biggr] \biggr) \,\mathrm{d}s\bigg|{\mathscr{F}}_{t} \biggr] \\ &{} + {\mathbb{E}} \biggl[ \int ^{T}_{t}\gamma (s)\rho \bigl(\mu (s)\bigr) \, \mathrm{d}s| {\mathscr{F}}_{t} \biggr]. \end{aligned}$$

It follows that \(\mu (t) \leq \bar{\mu }(t)\). Set

$$\begin{aligned} \eta ={}& \int ^{T}_{0}\alpha (s)\mu (s) \,\mathrm{d}s+ \int ^{T}_{0} \biggl(\alpha (s){\mathbb{E}} \biggl[ \int _{s}^{T} \beta (\tau )\mu (\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{s} \biggr] \biggr) \,\mathrm{d}s \\ &{} + \int ^{T}_{0}\gamma (s)\rho \bigl(\mu (s)\bigr) \, \mathrm{d}s. \end{aligned}$$

By the martingale representation theorem, there exists a stochastic process \(\{z(t), t\in [0, T] \}\in L^{2}_{{\mathbb{F}}}(0, T; {\mathbb{R}})\) such that

$$\begin{aligned} {\mathbb{E}}[\eta | {\mathscr{F}}_{t}]= {\mathbb{E}}[\eta ]+ \int ^{t}_{0}z(s) \,\mathrm{d}W(s). \end{aligned}$$

Thus,

$$\begin{aligned} \bar{\mu }(t)={}&a + {\mathbb{E}} \biggl[ \int ^{T}_{t}\alpha (s)\mu (s) \,\mathrm{d}s| { \mathscr{F}}_{t} \biggr]\\ &{}+{\mathbb{E}} \biggl[ \int ^{T}_{t} \biggl(\alpha (s){\mathbb{E}} \biggl[ \int _{s}^{T} \beta (\tau )\mu (\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{s} \biggr] \biggr) \,\mathrm{d}s|{\mathscr{F}}_{t} \biggr] \\ & {}+ {\mathbb{E}} \biggl[ \int ^{T}_{t} \gamma (s)\rho \bigl(\mu (s)\bigr) \, \mathrm{d}s| {\mathscr{F}}_{t} \biggr] \\ ={}&a+ {\mathbb{E}} \biggl[\eta - \int ^{t}_{0}\alpha (s)\mu (s) \,\mathrm{d}s\\ &{}- \int ^{t}_{0} \biggl(\alpha (s){\mathbb{E}} \biggl[ \int ^{T}_{s} \beta (\tau )\mu (\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{s} \biggr] \biggr)\,\mathrm{d}s- \int ^{t}_{0}\gamma (s)\rho \bigl(\mu (s)\bigr) \, \mathrm{d}s | {\mathscr{F}}_{t} \biggr] \\ ={}&a+{\mathbb{E}}[\eta ]+ \int ^{t}_{0}z(s)\,\mathrm{d}W(s)- \int ^{t}_{0} \alpha (s)\mu (s) \,\mathrm{d}s\\ &{}- \int ^{t}_{0} \biggl(\alpha (s){\mathbb{E}} \biggl[ \int ^{T}_{s}\beta (\tau )\mu (\tau ) \,\mathrm{d}\tau | { \mathscr{F}}_{s} \biggr] \biggr) \,\mathrm{d}s \\ &{} - \int ^{t}_{0}\gamma (s)\rho \bigl(\mu (s)\bigr) \, \mathrm{d}s. \end{aligned}$$

In differential form, this gives

$$\begin{aligned} \mathrm{d}\bar{\mu }(t)& =z(t)\,\mathrm{d}W(t)-\alpha (t)\mu (t) \,\mathrm{d}t - \biggl( \alpha (t){\mathbb{E}} \biggl[ \int ^{T}_{t}\beta (\tau ) \mu (\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{t} \biggr] \biggr)\,\mathrm{d}t -\gamma (t)\rho \bigl(\mu (t)\bigr) \,\mathrm{d}t \\ &\geq z(t)\,\mathrm{d}W(t)-\alpha (t) \biggl( \bar{\mu }(t) + \biggl( { \mathbb{E}} \biggl[ \int ^{T}_{t}\beta (\tau )\bar{\mu }(\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{t} \biggr] \biggr) \biggr)\,\mathrm{d}t-\gamma (t)\rho \bigl(\bar{\mu }(t)\bigr) \,\mathrm{d}t. \end{aligned}$$

Set

$$\begin{aligned} m(t)=\bar{\mu }(t) + \biggl( {\mathbb{E}} \biggl[ \int ^{T}_{t} \beta (\tau )\bar{\mu }(\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{t} \biggr] \biggr). \end{aligned}$$

From the above, we get

$$\begin{aligned} \mathrm{d}m(t)&=\mathrm{d}\bar{\mu }(t) +\mathrm{d}\biggl( {\mathbb{E}} \biggl[ \int ^{T}_{t}\beta (\tau )\bar{\mu }(\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{t} \biggr] \biggr) \\ & \geq z(t)\,\mathrm{d}W(t)-\alpha (t)m(t)\,\mathrm{d}t+\mathrm{d}\biggl( { \mathbb{E}} \biggl[ \int ^{T}_{t}\beta (\tau )\bar{\mu }(\tau ) \,\mathrm{d}\tau | {\mathscr{F}}_{t} \biggr] \biggr) - \gamma (t)\rho \bigl(\bar{\mu }(t)\bigr) \,\mathrm{d}t. \end{aligned}$$

Integrating on \([t, T]\) and taking the conditional expectation with respect to \({\mathscr{F}}_{t}\) on both sides of the above inequality, we have

$$\begin{aligned} m(T)- m(t)\geq {}&{-}{\mathbb{E}} \biggl[ \int ^{T}_{t}\alpha (s)m(s) \,\mathrm{d}s| { \mathscr{F}}_{t} \biggr]- {\mathbb{E}} \biggl[ \int ^{T}_{t}\beta (s)\bar{\mu }(s) \,\mathrm{d}s | { \mathscr{F}}_{t} \biggr]\\ &{}- {\mathbb{E}} \biggl[ \int ^{T}_{t} \gamma (s)\rho \bigl(\bar{\mu }(s) \bigr) \,\mathrm{d}s| {\mathscr{F}}_{t} \biggr]. \end{aligned}$$

Since \(m(T)=\bar{\mu }(T)=a\) and \(\bar{\mu }(t)\leq m(t) \), we derive

$$\begin{aligned} m(t) \leq{}& a + {\mathbb{E}} \biggl[ \int ^{T}_{t}\alpha (s)m(s) \,\mathrm{d}s| { \mathscr{F}}_{t} \biggr] + {\mathbb{E}} \biggl[ \int ^{T}_{t}\beta (s)m(s) \,\mathrm{d}s | { \mathscr{F}}_{t} \biggr]\\ &{}+{\mathbb{E}} \biggl[ \int ^{T}_{t}\gamma (s)\rho \bigl(m(s)\bigr) \,\mathrm{d}s| {\mathscr{F}}_{t} \biggr]. \end{aligned}$$

From Lemma 3.4 (applied with \(\alpha +\beta \) in place of α and γ in place of β) and \(\mu (t)\leq \bar{\mu }(t)\leq m(t)\), we have

$$\begin{aligned} \mu (t)\leq{}& \bar{W}^{-1} \biggl( \bar{W} \biggl({\mathbb{E}} \biggl[ \bar{W}^{-1} \biggl( \bar{W}(a)+ \int ^{T}_{0}\bigl( \alpha (s)+\beta (s)+\gamma (s) \bigr)\,\mathrm{d}s \biggr)\bigg| { \mathscr{F}}_{t} \biggr] \biggr)\\ &{} - \int ^{t}_{0}\bigl(\alpha (s)+ \beta (s)+\gamma (s) \bigr)\,\mathrm{d}s \biggr). \end{aligned}$$

 □
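For orientation (this is only a remark on the statement and is not used later), taking \(\alpha \equiv \beta \equiv 0\) and γ strictly positive in Theorem 3.7, condition 2 reduces to \(\mu (t)\leq a + {\mathbb{E}} [ \int ^{T}_{t}\gamma (s)\rho (\mu (s)) \,\mathrm{d}s | {\mathscr{F}}_{t} ]\), and the conclusion reads

$$\begin{aligned} \mu (t)\leq \bar{W}^{-1} \biggl( \bar{W} \biggl({\mathbb{E}} \biggl[ \bar{W}^{-1} \biggl( \bar{W}(a)+ \int ^{T}_{0}\gamma (s)\,\mathrm{d}s \biggr)\bigg| { \mathscr{F}}_{t} \biggr] \biggr) - \int ^{t}_{0}\gamma (s)\,\mathrm{d}s \biggr), \end{aligned}$$

which has the same form as estimate (2) in Theorem 3.2 with G replaced by \(\bar{W}\).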

4 Application

4.1 Application 1

In this subsection, we will give an application of Theorem 3.2. Let \((Y^{(1)}, Z^{(1)}), (Y^{(2)}, Z^{(2)})\) be the solutions of the following two 1-dimensional BSDEs, respectively:

$$\begin{aligned} Y^{(j)}(t)= \xi ^{(j)}+ \int ^{T}_{t}f_{j}\bigl(s, Y^{(j)}(s), Z^{(j)}(s)\bigr) \,\mathrm{d}s - \int ^{T}_{t}Z^{(j)}(s)\,\mathrm{d}W(s),\quad t\in [0, T], \end{aligned}$$
(3)

where \(j=1,2\).

  1. (H1)

    Assume that

    $$\begin{aligned} f(\cdot,\cdot, \cdot, \cdot ): \Omega \times [0, T] \times { \mathbb{R}}\times { \mathbb{R}}^{ d}\rightarrow {\mathbb{R}}, \end{aligned}$$

    and f satisfies the following condition:

    $$\begin{aligned} {\mathbb{E}} \biggl[ \int ^{T}_{0} \bigl\vert f(s,0,0) \bigr\vert ^{2}\,\mathrm{d}s \biggr]< \infty. \end{aligned}$$
  2. (H2)

    For all \(s\in [0, T], y,y^{\prime }\in {\mathbb{R}}, z, z^{\prime }\in { \mathbb{R}}^{d} \), we have

    $$\begin{aligned} \bigl\vert f(s,y,z) - f\bigl(s,y^{\prime },z^{\prime } \bigr) \bigr\vert ^{2} \leq \epsilon (s)\rho \bigl( \bigl\vert y- y^{\prime } \bigr\vert ^{2}\bigr) + C \bigl\vert z-z^{\prime } \bigr\vert ^{2}, \end{aligned}$$

    where \(C> 0\) is a constant and \(\epsilon (s)> 0\) is an \({\mathbb{F}}\)-adapted stochastic process satisfying \(\|\int ^{T}_{0}\epsilon (s)\,\mathrm{d}s \|_{\infty } < \infty \).

  3. (H3)

    Assume that \(\xi ^{(j)} \in L^{2}_{{\mathscr{F}}_{T}}(\Omega; {\mathbb{R}})\), \((Y^{(j)}, Z^{(j)}) \in S^{2}_{{\mathbb{F}}}(0, T; {\mathbb{R}})\times L^{2}_{{ \mathbb{F}}}(0, T; {\mathbb{R}}^{d}), j=1,2\).

Theorem 4.1

Assume that \(f_{1}, f_{2}\) satisfy \((\mathrm{H}1)\) and \((\mathrm{H}2)\) and condition \((\mathrm{H}3)\) holds. If \(\xi ^{(1)}\leq \xi ^{(2)}\) and \(f_{1}(t, y, z)\leq f_{2}(t, y, z)\) for all \(t\in [0, T], y\in {\mathbb{R}}, z\in {\mathbb{R}}^{d}\), we have

$$\begin{aligned} Y^{(1)}(t)\leq Y^{(2)}(t), \quad\textit{a.e., a.s.} \end{aligned}$$

Proof

Set

$$\begin{aligned} \hat{Y}(t)=Y^{(1)}(t)-Y^{(2)}(t), \qquad \hat{Z}(t)=Z^{(1)}(t)-Z^{(2)}(t). \end{aligned}$$

Applying Itô’s formula to \(e^{\beta t}|\hat{Y}^{+}(t)|^{2}\), we have

$$\begin{aligned} &e^{\beta t} \bigl\vert \hat{Y}^{+}(t) \bigr\vert ^{2} + {\mathbb{E}} \biggl[ \int ^{T}_{t} \beta e^{\beta s} \bigl\vert \hat{Y}^{+}(s) \bigr\vert ^{2}\,\mathrm{d}s|{ \mathscr{F}}_{t} \biggr]+ {\mathbb{E}} \biggl[ \int ^{T}_{t}1_{\hat{Y}^{+}(s)>0}e^{ \beta s} \bigl\vert \hat{Z}(s) \bigr\vert ^{2}\,\mathrm{d}s|{\mathscr{F}}_{t} \biggr] \\ &\quad \leq \lambda {\mathbb{E}} \biggl[ \int ^{T}_{t} e^{\beta s} \bigl\vert \hat{Y}^{+}(s) \bigr\vert ^{2}\,\mathrm{d}s|{ \mathscr{F}}_{t} \biggr] + \frac{1}{\lambda }{\mathbb{E}} \biggl[ \int ^{T}_{t}1_{\hat{Y}^{+}(s)>0}e^{ \beta s} \bigl\vert f_{1}\bigl(s, Y^{(1)}(s), Z^{(1)}(s)\bigr) \\ &\qquad{} - f_{1}\bigl(s, Y^{(2)}(s), Z^{(2)}(s)\bigr) \bigr\vert ^{2}\,\mathrm{d}s|{\mathscr{F}}_{t} \biggr] \\ &\quad \leq \lambda {\mathbb{E}} \biggl[ \int ^{T}_{t} e^{\beta s} \bigl\vert \hat{Y}^{+}(s) \bigr\vert ^{2}\,\mathrm{d}s|{ \mathscr{F}}_{t} \biggr] + C'{ \mathbb{E}} \biggl[ \int ^{T}_{t}e^{\beta s}\epsilon (s)\rho \bigl( \bigl\vert \hat{Y}^{+}(s) \bigr\vert ^{2}\bigr)\,\mathrm{d}s|{\mathscr{F}}_{t} \biggr] \\ &\qquad{} + \frac{C^{\prime }}{\lambda }{\mathbb{E}} \biggl[ \int ^{T}_{t}1_{ \hat{Y}^{+}(s)>0}e^{\beta s} \bigl\vert \hat{Z}(s) \bigr\vert ^{2}\,\mathrm{d}s| { \mathscr{F}}_{t} \biggr]. \end{aligned}$$

Letting \(\lambda = 2C'\) and \(\beta >\lambda \), we obtain

$$\begin{aligned} \bigl\vert \hat{Y}^{+}(t) \bigr\vert ^{2} \leq C^{\prime }{\mathbb{E}} \biggl[ \int ^{T}_{t} \epsilon (s)\rho \bigl( \bigl\vert \hat{Y}^{+}(s) \bigr\vert ^{2}\bigr)\,\mathrm{d}s|{ \mathscr{F}}_{t} \biggr]. \end{aligned}$$

So, according to Theorem 3.2, we obtain that \(\hat{Y}^{+}(t) =0\), a.s., that is to say, \(Y^{(1)}(t)\leq Y^{(2)}(t)\), a.e., a.s. □

Remark 4.2

Obviously, since \(\epsilon (s)\) is a stochastic process, assumption \((\mathrm{H}2)\) on f is different from the corresponding assumption in [7].
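We also note, only for illustration, that in the special case \(\rho (u)=u\) assumption \((\mathrm{H}2)\) reduces to the stochastic Lipschitz type condition

$$\begin{aligned} \bigl\vert f(s,y,z) - f\bigl(s,y^{\prime },z^{\prime } \bigr) \bigr\vert ^{2} \leq \epsilon (s) \bigl\vert y- y^{\prime } \bigr\vert ^{2} + C \bigl\vert z-z^{\prime } \bigr\vert ^{2}, \end{aligned}$$

so Theorem 4.1 covers this case as well as genuinely non-Lipschitz choices of ρ.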

4.2 Application 2

In this subsection, we will give an application of Theorem 3.7. Let \((Y^{(1)}, Z^{(1)}), (Y^{(2)}, Z^{(2)})\) be the solutions of the following two 1-dimensional BSDEs, respectively:

$$\begin{aligned} Y^{(j)}(t)={}& \xi ^{(j)}+ \int ^{T}_{t}f_{j}\bigl(s, Y^{(j)}(s), Z^{(j)}(s)\bigr) \,\mathrm{d}s + \int ^{T}_{t}\epsilon _{2}(s){\mathbb{E}} \biggl[ \int ^{T}_{s} \epsilon _{3}(r)Y^{(j)}(r) \,\mathrm{d}r|{\mathscr{F}}_{s} \biggr] \,\mathrm{d}s \\ & {}- \int ^{T}_{t}Z^{(j)}(s)\,\mathrm{d}W(s),\quad t\in [0, T], \end{aligned}$$
(4)

where \(\epsilon _{2}, \epsilon _{3} \) are two nonnegative processes satisfying

$$\begin{aligned} \biggl\Vert \int ^{T}_{0}\epsilon _{2}(\omega, s)\, \mathrm{d}s \biggr\Vert _{\infty }< \infty,\qquad \biggl\Vert \int ^{T}_{0} \bigl\vert \epsilon _{3}( \omega, s) \bigr\vert ^{2}\,\mathrm{d}s \biggr\Vert _{\infty }< \infty,\quad \text{respectively.} \end{aligned}$$
  1. (H1)’

    Assume that

    $$\begin{aligned} f(\cdot,\cdot, \cdot, \cdot ): \Omega \times [0, T] \times { \mathbb{R}}\times { \mathbb{R}}^{ d}\rightarrow {\mathbb{R}}, \end{aligned}$$

    and f satisfies the following condition:

    $$\begin{aligned} {\mathbb{E}} \biggl[ \int ^{T}_{0} \bigl\vert f(s,0,0) \bigr\vert ^{2}\,\mathrm{d}s \biggr]< \infty. \end{aligned}$$
  2. (H2)’

    For all \(s\in [0, T], y,y^{\prime }\in {\mathbb{R}}, z, z^{\prime }\in { \mathbb{R}}^{d} \), we have

    $$\begin{aligned} \bigl\vert f(s,y,z) - f\bigl(s,y^{\prime },z^{\prime } \bigr) \bigr\vert ^{2} \leq \epsilon _{1}(s) \rho \bigl( \bigl\vert y- y^{\prime } \bigr\vert ^{2}\bigr)+ \epsilon _{2}(s) \bigl\vert y- y^{\prime } \bigr\vert ^{2} + C \bigl\vert z-z^{ \prime } \bigr\vert ^{2}, \end{aligned}$$

    where \(C> 0\) is a constant and \(\epsilon _{1}(s)\) is an \({\mathbb{F}}\)-adapted positive stochastic process satisfying \(\|\int ^{T}_{0}\epsilon _{1}(s)\,\mathrm{d}s\|_{\infty } < \infty \).

  3. (H3)’

    Assume that \(\xi ^{(j)} \in L^{2}_{{\mathscr{F}}_{T}}(\Omega; {\mathbb{R}})\), \((Y^{(j)}, Z^{(j)}) \in S^{2}_{{\mathbb{F}}}(0, T; {\mathbb{R}})\times L^{2}_{{ \mathbb{F}}}(0, T; {\mathbb{R}}^{d}), j=1,2\).

Theorem 4.3

Assume that \(f_{1}, f_{2}\) satisfy \(\mathrm{(H1)}\)’, \(\mathrm{(H2)}\)’ and condition \((\mathrm{H}3)\)’ holds. If \(\xi ^{(1)}\leq \xi ^{(2)}\) and \(f_{1}(t, y, z)\leq f_{2}(t, y, z)\) for all \(t\in [0, T], y\in {\mathbb{R}}, z\in {\mathbb{R}}^{d}\), we have

$$\begin{aligned} Y^{(1)}(t)\leq Y^{(2)}(t), \quad\textit{a.e.}, \textit{ a.s.} \end{aligned}$$

Proof

Set

$$\begin{aligned} \hat{Y}(t)=Y^{(1)}(t)-Y^{(2)}(t),\qquad \hat{Z}(t)=Z^{(1)}(t)-Z^{(2)}(t). \end{aligned}$$

Applying Itô’s formula to \(e^{\beta t}|\hat{Y}^{+}(t)|^{2}\), we have

$$\begin{aligned} &e^{\beta t} \bigl\vert \hat{Y}^{+}(t) \bigr\vert ^{2} + {\mathbb{E}} \biggl[ \int ^{T}_{t} \beta e^{\beta s} \bigl\vert \hat{Y}^{+}(s) \bigr\vert ^{2}\,\mathrm{d}s|{ \mathscr{F}}_{t} \biggr]+ {\mathbb{E}} \biggl[ \int ^{T}_{t}1_{\hat{Y}^{+}(s)>0}e^{ \beta s} \bigl\vert \hat{Z}(s) \bigr\vert ^{2}\,\mathrm{d}s|{\mathscr{F}}_{t} \biggr] \\ &\quad \leq \lambda {\mathbb{E}} \biggl[ \int ^{T}_{t} e^{\beta s} \bigl\vert \hat{Y}^{+}(s) \bigr\vert ^{2}\,\mathrm{d}s|{ \mathscr{F}}_{t} \biggr] + \frac{1}{\lambda }{\mathbb{E}} \biggl[ \int ^{T}_{t}1_{\hat{Y}^{+}(s)>0}e^{ \beta s} \bigl\vert f_{1}\bigl(s, Y^{(1)}(s), Z^{(1)}(s)\bigr) \\ & \qquad{}- f_{1}\bigl(s, Y^{(2)}(s), Z^{(2)}(s)\bigr) \bigr\vert ^{2}\,\mathrm{d}s|{\mathscr{F}}_{t} \biggr] \\ &\qquad{} +C'{\mathbb{E}} \biggl\{ \int ^{T}_{t}e^{\beta s}\epsilon _{2}(s){ \mathbb{E}} \biggl[ \int ^{T}_{s} \bigl\vert \epsilon _{3}(r) \bigr\vert ^{2} \bigl\vert \hat{Y}^{+}(r) \bigr\vert ^{2} \,\mathrm{d}r|{ \mathscr{F}}_{s} \biggr]\,\mathrm{d}s\bigg|{ \mathscr{F}}_{t} \biggr\} \\ &\qquad{}+{\mathbb{E}} \biggl\{ \int ^{T}_{t}e^{ \beta s}\epsilon _{2}(s) \bigl\vert \hat{Y}^{+}(s) \bigr\vert ^{2}\,\mathrm{d}s|{ \mathscr{F}}_{t} \biggr\} \\ & \quad\leq \lambda {\mathbb{E}} \biggl[ \int ^{T}_{t} e^{\beta s} \bigl\vert \hat{Y}^{+}(s) \bigr\vert ^{2}\,\mathrm{d}s|{ \mathscr{F}}_{t} \biggr] + C'{ \mathbb{E}} \biggl[ \int ^{T}_{t}e^{\beta s}\epsilon _{1}(s)\rho \bigl( \bigl\vert \hat{Y}^{+}(s) \bigr\vert ^{2}\bigr)\,\mathrm{d}s|{\mathscr{F}}_{t} \biggr] \\ &\qquad{} + \frac{C^{\prime }}{\lambda }{\mathbb{E}} \biggl[ \int ^{T}_{t}1_{ \hat{Y}^{+}(s)>0}e^{\beta s} \bigl\vert \hat{Z}(s) \bigr\vert ^{2}\,\mathrm{d}s| { \mathscr{F}}_{t} \biggr] \\ &\qquad{} +C'{\mathbb{E}} \biggl\{ \int ^{T}_{t}e^{\beta s}\epsilon _{2}(s){ \mathbb{E}} \biggl[ \int ^{T}_{s} \bigl\vert \epsilon _{3}(r) \bigr\vert ^{2} \bigl\vert \hat{Y}^{+}(r) \bigr\vert ^{2} \,\mathrm{d}r|{ \mathscr{F}}_{s} \biggr]\,\mathrm{d}s\bigg|{ \mathscr{F}}_{t} \biggr\} \\ &\qquad{}+{\mathbb{E}} \biggl\{ \int ^{T}_{t}e^{ \beta s}\epsilon _{2}(s) \bigl\vert \hat{Y}^{+}(s) \bigr\vert ^{2}\,\mathrm{d}s|{ \mathscr{F}}_{t} \biggr\} . \end{aligned}$$

Letting \(\lambda = 2C'\) and \(\beta >\lambda \), we obtain

$$\begin{aligned} \bigl\vert \hat{Y}^{+}(t) \bigr\vert ^{2} \leq{}& C^{\prime }{\mathbb{E}} \biggl[ \int ^{T}_{t} \epsilon _{1}(s)\rho \bigl( \bigl\vert \hat{Y}^{+}(s) \bigr\vert ^{2}\bigr)\,\mathrm{d}s|{ \mathscr{F}}_{t} \biggr] \\ &{} +C'{\mathbb{E}} \biggl\{ \int ^{T}_{t}e^{\beta s}\epsilon _{2}(s){ \mathbb{E}} \biggl[ \int ^{T}_{s} \bigl\vert \epsilon _{3}(r) \bigr\vert ^{2} \bigl\vert \hat{Y}^{+}(r) \bigr\vert ^{2} \,\mathrm{d}r|{ \mathscr{F}}_{s} \biggr]\,\mathrm{d}s\bigg|{ \mathscr{F}}_{t} \biggr\} \\ &{}+{\mathbb{E}} \biggl\{ \int ^{T}_{t}e^{ \beta s}\epsilon _{2}(s) \bigl\vert \hat{Y}^{+}(s) \bigr\vert ^{2}\,\mathrm{d}s|{ \mathscr{F}}_{t} \biggr\} . \end{aligned}$$

So, according to Theorem 3.7, we obtain that \(\hat{Y}^{+}(t) =0\), a.s., that is to say,

$$\begin{aligned} Y^{(1)}(t)\leq Y^{(2)}(t),\quad \text{ a.e.,} \text{ a.s.} \end{aligned}$$

 □

5 Conclusion

In this paper, we mainly studied several different forms of backward stochastic Bellman–Bihari inequalities. Our approach is based on the methods used for backward stochastic Gronwall inequalities and for forward Bellman–Bihari inequalities. As far as we know, there has been little study of backward stochastic Bellman–Bihari inequalities. Just as backward stochastic Gronwall inequalities have some essentially different features compared with forward stochastic Gronwall inequalities, backward stochastic Bellman–Bihari inequalities also enjoy some essentially different features, which can be applied to solve problems on BSDEs. Our further interest lies in more applications of backward stochastic Bellman–Bihari inequalities.