1 Introduction

Given a filtered probability space \((\Omega ,\mathcal {F},(\mathcal {F}_t)_{t\in [0,T]},P)\), Pardoux and Peng [20] first introduced the following type of nonlinear backward stochastic differential equations (BSDEs for short):

$$\begin{aligned} Y_t=\xi +\int _t^T f(s,Y_s,Z_s)\mathrm{d}s-\int _t^T Z_s\mathrm{d}B_s, \end{aligned}$$

where the generator \(f(\cdot ,y,z)\) is progressively measurable and Lipschitz continuous with respect to (y, z), and \(\xi \) is an \(\mathcal {F}_T\)-measurable, square integrable terminal value. They proved that there exists a unique pair of progressively measurable processes (Y, Z) satisfying this equation. The BSDE theory has attracted a great deal of attention due to its wide applications in mathematical finance, stochastic control and quasilinear partial differential equations (see [8, 21], etc.).

One of the most important extensions is the reflected BSDE initiated by El Karoui et al. [7]. In addition to the generator f and the terminal value \(\xi \), an additional continuous process S, called the obstacle, is prescribed in this problem. The reflection means that the solution is forced to stay above this given process S. More precisely, the solution of the reflected BSDE with parameters \((\xi ,f,S)\) is a triple of processes (Y, Z, L) such that

$$\begin{aligned}&Y_t=\xi +\int _t^T f(s,Y_s,Z_s)\mathrm{d}s+L_T-L_t-\int _t^T Z_s\mathrm{d}B_s,\\&Y_t\ge S_t, \ t\in [0,T], \text { and } \int _0^T(Y_s-S_s)\mathrm{d}L_s=0, P\text {-a.s.}, \end{aligned}$$

where L is an increasing process that pushes the solution upward. Besides, it should act in a minimal way, meaning that L acts only when the solution Y reaches the obstacle S. This requirement corresponds to the mathematical expression \( \int _0^T(Y_s-S_s)\mathrm{d}L_s=0\), called the Skorohod condition. The reflected BSDE is a useful tool for pricing American options and for studying the obstacle problem for quasilinear PDEs as well as variational inequalities (see [1, 7]).

Building upon these results, Cvitanic and Karatzas [3] studied BSDEs with two reflecting obstacles, in which the solution Y is forced to stay between a lower obstacle L and an upper obstacle U. This can be achieved by the combined actions of two increasing processes: one pushes the solution upward, the other pushes it downward, and both act in a minimal way when Y tries to cross the obstacles. They also established the relation between the solution of the doubly reflected BSDE and the value function of a Dynkin game. For more details on this topic, we refer to the papers [2, 6, 9,10,11, 25].

Note that the classical BSDE and reflected BSDE theory can only deal with financial problems under mean uncertainty, not volatility uncertainty, and can give probabilistic interpretations for quasilinear PDEs, not fully nonlinear ones. Motivated by these facts, Peng [22, 23] systematically established the G-expectation theory. A new type of Brownian motion B, called G-Brownian motion, whose increments are stationary and independent, was constructed. Different from the classical case, the quadratic variation process \(\langle B\rangle \) is not deterministic. The basic notions and tools, such as the stochastic integral with respect to G-Brownian motion B and G-Itô’s formula, were also established.

A few years later, Hu et al. [12] established the well-posedness of BSDEs driven by G-Brownian motion (G-BSDEs for short) of the following form:

$$\begin{aligned} Y_t=\xi +\int _t^T f(s,Y_s,Z_s)\mathrm{d}s+\int _t^T g(s,Y_s,Z_s)\mathrm{d}\langle B\rangle _s-\int _t^T Z_s\mathrm{d}B_s-(K_T-K_t), \end{aligned}$$

where the generators f, g are Lipschitz continuous with respect to (y, z). Under conditions similar to those of the classical case, applying the Galerkin approximation technique and the PDE approach, they proved that there exists a unique solution (Y, Z, K) to this equation, where K is a decreasing G-martingale. Besides, in the accompanying paper [13], they also obtained the comparison theorem, the Girsanov transformation and the nonlinear Feynman–Kac formula.

Li et al. [15] first studied the reflected G-BSDE with a lower obstacle. Due to the appearance of the decreasing G-martingale, the Skorohod condition was replaced by a martingale condition in order to obtain the uniqueness of the solutions. The existence was proved by an approximation method via penalization. Li and Peng [14] also considered the upper obstacle case. However, in order to pull the solution down below the upper obstacle, one needs to add a decreasing process L to the G-BSDE. Hence, the main difficulty is that the process \(L-K\) is not monotone, unlike in the lower obstacle case. Although they did not obtain uniqueness, they showed, by a variant of the comparison theorem, that the solution constructed by a penalization method is a maximal one.

In this paper, we investigate the doubly reflected BSDE driven by G-Brownian motion with two obstacles (L, U). As in the classical case, there should be two increasing processes \(A^+, A^-\): one aims to push the solution upward while the other pulls the solution downward, and both processes behave in a minimal way so that they satisfy the Skorohod condition. Besides, there will also be a decreasing G-martingale K as in the G-BSDE, which exhibits the uncertainty of the model. Therefore, it is natural to conjecture that a solution to this doubly reflected G-BSDE should be a 5-tuple of processes \((Y, Z, K, A^+, A^-)\) with \(L_t\le Y_t\le U_t\) satisfying

$$\begin{aligned} {\left\{ \begin{array}{ll} Y_t=\xi +\int _t^T f(s,Y_s,Z_s)\mathrm{d}s+\int _t^T g(s,Y_s,Z_s)\mathrm{d}\langle B\rangle _s-\int _t^T Z_s\mathrm{d}B_s\\ \ \ \ \ \ \ +(A_T^{+}-A_t^{+})-(A_T^{-}-A_t^{-})-(K_T-K_t),\\ \int _0^T (Y_s-L_s)\mathrm{d}A^+_s=\int _0^T (U_s-Y_s)\mathrm{d}A^-_s=0. \end{array}\right. } \end{aligned}$$

However, the processes \(A^+, \ A^-\) and K here are mixed together, and the above Skorohod condition is not applicable. In this paper, we write A for \(A^+-A^--K\) and replace the Skorohod condition by a new kind of approximate Skorohod condition, which reduces to the martingale condition when there is only one obstacle.

The uniqueness of the solution is obtained via a priori estimates, which require some delicate analysis. In order to prove the existence, we consider the following G-BSDEs parameterized by \(n=1,2,\ldots \):

$$\begin{aligned} Y_t^n&=\xi +\int _t^T f(s,Y_s^n,Z^n_s)\mathrm{d}s+\int _t^T g(s,Y_s^n,Z_s^n)\mathrm{d}\langle B\rangle _s-\int _t^T Z_s^n\mathrm{d}B_s\\&\quad +(A_T^{n,+}-A_t^{n,+})-(A_T^{n,-}-A_t^{n,-})-(K_T^n-K_t^n), \end{aligned}$$

where \(A_t^{n,+}=\int _0^t n(Y_s^n-L_s)^-\mathrm{d}s\), \(A_t^{n,-}=\int _0^t n(Y_s^n-U_s)^+\mathrm{d}s\).
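The effect of the penalty terms can be seen in a deterministic toy analogue of this scheme, in which the stochastic integrals and the G-martingale are dropped, so that \(Y^n\) solves a backward ODE. This is only an illustration, not the G-BSDE itself; the data \(f\equiv 1\), \(L\equiv -0.5\), \(U\equiv 0.5\), \(\xi =0\) are hypothetical choices.

```python
# Deterministic toy analogue of the penalized equation: dropping the
# dB, d<B> and G-martingale terms leaves the backward ODE
#   -dY/dt = f + n (Y - L)^- - n (Y - U)^+ ,   Y_T = xi.
# Hypothetical data: f = 1, L = -0.5, U = 0.5, xi = 0, T = 1.

def penalized_toy(n, f=1.0, L=-0.5, U=0.5, xi=0.0, T=1.0, steps=1000):
    dt = T / steps
    y = xi
    for _ in range(steps):               # integrate backward from T to 0
        push_up = max(L - y, 0.0)        # n (y - L)^-  pushes y above L
        pull_down = max(y - U, 0.0)      # n (y - U)^+  pulls y below U
        y += dt * (f + n * push_up - n * pull_down)
    return y                             # approximates Y_0^n

print(penalized_toy(0))      # no penalty: the drift carries Y_0 above U
print(penalized_toy(100))    # penalized: Y_0 is pinned near U = 0.5
```

For large n the overshoot \((Y_0^n-U)^+\) in this toy model is of order 1/n, in line with the convergence rate established in Stage 2.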

The objective, similar to the classical case studied by Cvitanic and Karatzas [3], is to show that the sequence \((Y^n,Z^n,A^n)\), where \(A^n=A^{n,+}-A^{n,-}-K^n\), converges to a triple of processes (Y, Z, A), and that (Y, Z, A) is a solution to the doubly reflected G-BSDE. To this end, the dominated convergence theorem and weak compactness played a crucial role in Cvitanic and Karatzas [3]. However, these tools are not available under the G-expectation framework.

Our proof is divided into two stages.

Stage 1 We establish uniform estimates for \(Y^n\) under the norm \(\Vert \cdot \Vert _{S_G^\alpha }\), and prove that \((Y^n-U)^+\) and \((Y^n-L)^-\) converge to 0 under the norm \(\Vert \cdot \Vert _{S_G^\alpha }\). These properties hold under the assumption that the upper and lower obstacles belong to the space \(S_G^\beta (0,T)\) and are separated by some generalized G-Itô process (see (A3’)). The latter implies that the limit Y (if it exists) lies between the upper and lower obstacles.

Stage 2 We show that the sequences \(A^{n,+}_T\), \(A^{n,-}_T\), \(K^n_T\) (resp. \(Z^n\)) are uniformly bounded under the norm \(\Vert \cdot \Vert _{L_G^\alpha }\) (resp. \(\Vert \cdot \Vert _{H^{\alpha }_G}\)). For this purpose, we prove that \((Y^n-U)^+\) converges to 0 with the explicit rate \(\frac{1}{n}\), which requires that the upper obstacle is a generalized G-Itô process.

Based on the above analysis, we obtain the convergence of \((Y^n,Z^n,A^n)\), and consequently the existence of the doubly reflected G-BSDE.

Recall that the G-expectation can be represented as the supremum of the linear expectations \(E_P\) over all \(P\in \mathcal {P}\), where \(\mathcal {P}\) is a collection of mutually singular martingale measures. Therefore, the G-expectation theory shares many similarities with the quasi-sure analysis of Denis and Martini [5] and the second-order BSDEs of Soner et al. [27] and Matoussi et al. [17]. Compared with these works, one advantage of the G-expectation framework is that the solution to a G-BSDE is a (generalized) G-Itô process, and that the decomposition of (generalized) G-Itô processes is unique. This amounts to saying that the derivatives \(\partial _tu\), \(\partial _xu\) and the second-order derivative \(\partial ^2_xu\) of a function u(t, x) are all well defined in the G-expectation space, which is crucial for giving probabilistic representations of (path dependent) fully nonlinear PDEs. In other words, the solutions of G-BSDEs have strong regularity and can be universally defined in the spaces of the G-framework, which strengthens the results in [27] and [17].

The problem considered in this paper is closely related to Matoussi et al. [16], which studied the second-order BSDEs with general reflections, but it is formulated in a quite different way.

  1.

    The solution (Y, Z, A) to the doubly reflected G-BSDE is defined in the G-framework, in which the processes have strong regularity and remarkable properties. As mentioned above, in the G-framework, the unique decomposition of G-Itô processes implies that the derivative \(\partial ^2_xu\) is well defined, which embodies the advantage of the G-expectation over linear expectations.

  2.

    In [16] and the corrigendum [18], the process V (corresponding to the process A in this paper) is defined and characterized by the Skorohod condition individually for each probability P in \(\mathcal {P}\). In this paper, the process A and the corresponding approximate Skorohod condition are given universally with respect to all probabilities P in \(\mathcal {P}\).

This paper is organized as follows. In Sect. 2, we present some notions and results on G-expectation and G-BSDEs as preliminaries. In Sect. 3, we first state the definition of solutions to doubly reflected G-BSDEs and establish some a priori estimates from which we can derive the uniqueness of the solution. We then introduce the penalization method to prove the existence of the solution in Sect. 4.

2 Preliminaries

In this section, we review notation and results in the G-expectation framework concerning the G-Itô calculus and BSDEs driven by G-Brownian motion. For simplicity, we only consider the one-dimensional case. For more details, we refer to the papers [12, 13, 22,23,24].

Let \(\Omega =C_{0}([0,\infty );\mathbb {R})\), the space of real-valued continuous functions starting from the origin, be endowed with the following distance:

$$\begin{aligned} \rho (\omega ^1,\omega ^2):=\sum _{i=1}^\infty 2^{-i}[(\max _{t\in [0,i]}|\omega _t^1-\omega _t^2|)\wedge 1], \text { for } \omega ^1,\omega ^2\in \Omega . \end{aligned}$$
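For concreteness, this distance can be evaluated on paths sampled on a uniform time grid; the truncation level and the example paths below are arbitrary illustrative choices.

```python
import numpy as np

def rho(w1, w2, dt=0.01, terms=20):
    # w1, w2: arrays sampling omega_t on the grid t = 0, dt, 2*dt, ...
    # The i-th summand caps the sup-distance on [0, i] at 1, so rho <= 1.
    total = 0.0
    for i in range(1, terms + 1):
        m = min(int(i / dt) + 1, len(w1))          # grid points with t <= i
        sup = float(np.max(np.abs(w1[:m] - w2[:m])))
        total += 2.0 ** (-i) * min(sup, 1.0)
    return total

t = np.arange(0.0, 3.0, 0.01)
w1 = np.sin(t)                 # both paths start from the origin
w2 = np.cos(t) - 1.0
print(rho(w1, w1), rho(w1, w2))
```

Truncating at `terms` terms discards a tail of at most \(2^{-\mathtt{terms}}\), since each summand is bounded by \(2^{-i}\).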

Let B be the canonical process on \(\Omega \). Set

$$\begin{aligned} L_{ip} (\Omega ):=\{ \varphi (B_{t_{1}},\ldots ,B_{t_{n}}): \ n\in \mathbb {N}, \ t_{1} ,\ldots , t_{n}\in [0,\infty ), \ \varphi \in C_{b,Lip}(\mathbb {R}^{ n})\}, \end{aligned}$$

where \(C_{b,Lip}(\mathbb {R}^{ n})\) denotes the set of bounded Lipschitz functions on \(\mathbb {R}^{n}\). Let \((\Omega ,L_{ip}(\Omega ),\hat{\mathbb {E}})\) be the G-expectation space, where the function \(G:\mathbb {R}\rightarrow \mathbb {R}\) is defined by

$$\begin{aligned} G(a):=\frac{1}{2}\hat{\mathbb {E}}[aB_1^2]=\frac{1}{2}({\bar{\sigma }}^2a^+-\underline{\sigma }^2a^-). \end{aligned}$$
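In code, G is the piecewise-linear function of a determined by the two volatility bounds; the values \(\underline{\sigma }=0.5\) and \({\bar{\sigma }}=1\) below are illustrative.

```python
def G(a, sigma_low=0.5, sigma_high=1.0):
    """G(a) = (sigma_high^2 * a^+ - sigma_low^2 * a^-) / 2.

    When sigma_low == sigma_high == sigma, this reduces to the linear map
    a -> sigma^2 * a / 2, i.e., the classical (linear) expectation case.
    """
    a_plus, a_minus = max(a, 0.0), max(-a, 0.0)
    return 0.5 * (sigma_high ** 2 * a_plus - sigma_low ** 2 * a_minus)

print(G(2.0), G(-2.0))   # prints 1.0 -0.25
```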

In this paper, we always assume that G is non-degenerate, i.e., \(\underline{\sigma }^2 >0\). In fact, the (conditional) G-expectation for \(\xi \in L_{ip}(\Omega )\) can be calculated as follows. Assume that \(\xi \) can be represented as

$$\begin{aligned} \xi =\varphi (B_{{t_1}}, B_{t_2},\ldots ,B_{t_n}). \end{aligned}$$

Then, for \(t\in [t_{k-1},t_k)\), \(k=1,\ldots ,n\),

$$\begin{aligned} \hat{\mathbb {E}}_{t}[\varphi (B_{{t_1}}, B_{t_2},\ldots ,B_{t_n})]=u_k(t, B_t;B_{t_1},\ldots ,B_{t_{k-1}}), \end{aligned}$$

where, for any \(k=1,\ldots ,n\), \(u_k(t,x;x_1,\ldots ,x_{k-1})\) is a function of (t, x) parameterized by \((x_1,\ldots ,x_{k-1})\) that solves the following fully nonlinear PDE on \([t_{k-1},t_k)\times \mathbb {R}\):

$$\begin{aligned} \partial _t u_k+G(\partial _x^2 u_k)=0 \end{aligned}$$

with terminal conditions

$$\begin{aligned} u_k(t_k,x;x_1,\ldots ,x_{k-1})=u_{k+1}(t_k,x;x_1,\ldots ,x_{k-1},x), \ k<n \end{aligned}$$

and \(u_n(t_n,x;x_1,\ldots ,x_{n-1})=\varphi (x_1,\ldots ,x_{n-1},x)\). Hence, the G-expectation of \(\xi \) is \(\hat{\mathbb {E}}_0[\xi ]\).
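This recursion can be checked numerically in the simplest case \(n=1\): \(\hat{\mathbb {E}}[\varphi (B_T)]=u(0,0)\), where u solves \(\partial _t u+G(\partial _x^2 u)=0\) with \(u(T,\cdot )=\varphi \). The explicit finite-difference sketch below (all grid sizes and volatility bounds are illustrative choices) recovers the known values \(\hat{\mathbb {E}}[B_T^2]={\bar{\sigma }}^2T\) for the convex payoff \(x^2\) and \(\hat{\mathbb {E}}[-B_T^2]=-\underline{\sigma }^2T\) for the concave one.

```python
import numpy as np

def g_heat(phi, T=1.0, sigma_low=0.5, sigma_high=1.0, dx=0.1, x_max=15.0):
    # Explicit backward scheme for  du/dt + G(d2u/dx2) = 0,  u(T, x) = phi(x).
    dt = dx ** 2 / sigma_high ** 2        # stability limit of the scheme
    steps = int(round(T / dt))
    x = np.arange(-x_max, x_max + dx / 2, dx)
    u = phi(x)
    for _ in range(steps):
        d2 = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
        Gd2 = 0.5 * (sigma_high ** 2 * np.maximum(d2, 0.0)
                     - sigma_low ** 2 * np.maximum(-d2, 0.0))
        u[1:-1] = u[1:-1] + dt * Gd2      # boundary values stay frozen
    return u[len(x) // 2]                 # u(0, 0), i.e., E_hat[phi(B_T)]

print(g_heat(lambda x: x ** 2))           # ~ sigma_high^2 * T = 1.0
print(g_heat(lambda x: -x ** 2))          # ~ -sigma_low^2 * T = -0.25
```

Boundary values are frozen, so `x_max` must be large enough that boundary errors cannot reach the origin within the `steps` time steps; with these parameters the error front travels 100 cells while the origin is 150 cells from the boundary.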

For each \(p\ge 1\), the completion of \(L_{ip} (\Omega )\) under the norm \(\Vert \xi \Vert _{L_{G}^{p}}:=(\hat{\mathbb {E}}[|\xi |^{p}])^{1/p}\) is denoted by \(L_{G}^{p}(\Omega )\). The conditional G-expectation \(\mathbb {{\hat{E}}}_{t}[\cdot ]\) can be extended continuously to the completion \(L_{G}^{p}(\Omega )\). The canonical process B is the 1-dimensional G-Brownian motion in this space.

For each fixed \(T\ge 0\), set \(\Omega _T=\{\omega _{\cdot \wedge T}:\omega \in \Omega \}\). We may define \(L_{ip}(\Omega _T)\) and \(L_G^p(\Omega _T)\) similarly. Besides, Denis et al. [4] proved that the G-expectation has the following representation.

Theorem 2.1

[4] There exists a weakly compact set \(\mathcal {P}\) of probability measures on \((\Omega ,\mathcal {B}(\Omega ))\), such that

$$\begin{aligned} \hat{\mathbb {E}}[\xi ]=\sup _{P\in \mathcal {P}}E_{P}[\xi ] \text { for all } \xi \in {L}_{G}^{1}{(\Omega )}. \end{aligned}$$

\(\mathcal {P}\) is called the set that represents \(\hat{\mathbb {E}}\).
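To see the representation at work, one can restrict attention to the sub-family of scenarios in which B evolves with a constant volatility \(\sigma \in [\underline{\sigma },{\bar{\sigma }}]\), under which \(E_P[B_T^2]=\sigma ^2T\) in closed form. For this convex payoff the supremum over the sub-family already attains \(\hat{\mathbb {E}}[B_T^2]={\bar{\sigma }}^2T\) (the full class \(\mathcal {P}\) is much larger; this is only a sketch with illustrative bounds).

```python
import numpy as np

def scenario_value(sigma, T=1.0):
    # E_P[B_T^2] under a scenario where B has constant volatility sigma,
    # i.e., B_T ~ N(0, sigma^2 * T).
    return sigma ** 2 * T

sigma_low, sigma_high = 0.5, 1.0          # illustrative volatility bounds
sigmas = np.linspace(sigma_low, sigma_high, 101)
sup_val = max(scenario_value(s) for s in sigmas)
inf_val = min(scenario_value(s) for s in sigmas)
print(sup_val, inf_val)   # prints 1.0 0.25: E_hat[B_T^2] and -E_hat[-B_T^2]
```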

Let \(\mathcal {P}\) be a weakly compact set that represents \(\hat{\mathbb {E}}\). For this \(\mathcal {P}\), we define the capacity

$$\begin{aligned} c(A):=\sup _{P\in \mathcal {P}}P(A),\ A\in \mathcal {B}(\Omega ). \end{aligned}$$

A set \(A\in \mathcal {B}(\Omega _T)\) is called polar if \(c(A)=0\). A property holds “quasi-surely” (q.s.) if it holds outside a polar set. In the following, we do not distinguish two random variables X and Y if \(X=Y\) q.s.

For \(\xi \in L_{ip}(\Omega _T)\), let \(\mathcal {E}(\xi )=\hat{\mathbb {E}}[\sup _{t\in [0,T]}\hat{\mathbb {E}}_t[\xi ]]\) and \(\mathcal {E}\) is called the G-evaluation. For \(p\ge 1\) and \(\xi \in L_{ip}(\Omega _T)\), define \(\Vert \xi \Vert _{p,\mathcal {E}}=[\mathcal {E}(|\xi |^p)]^{1/p}\) and denote by \(L_{\mathcal {E}}^p(\Omega _T)\) the completion of \(L_{ip}(\Omega _T)\) under \(\Vert \cdot \Vert _{p,\mathcal {E}}\). The following theorem can be regarded as Doob’s maximal inequality under G-expectation.

Theorem 2.2

[28] For any \(\alpha \ge 1\) and \(\delta >0\), \(L_G^{\alpha +\delta }(\Omega _T)\subset L_{\mathcal {E}}^{\alpha }(\Omega _T)\). More precisely, for any \(1<\gamma <\beta :=(\alpha +\delta )/\alpha \), \(\gamma \le 2\), we have

$$\begin{aligned} \Vert \xi \Vert _{\alpha ,\mathcal {E}}^{\alpha }\le \gamma ^*\{\Vert \xi \Vert _{L_G^{\alpha +\delta }}^{\alpha }+14^{1/\gamma } C_{\beta /\gamma }\Vert \xi \Vert _{L_G^{\alpha +\delta }}^{(\alpha +\delta )/\gamma }\},\quad \forall \xi \in L_{ip}(\Omega _T), \end{aligned}$$

where \(C_{\beta /\gamma }=\sum _{i=1}^\infty i^{-\beta /\gamma }\), \(\gamma ^*=\gamma /(\gamma -1)\).

For \(T>0\) and \(p\ge 1\), the following spaces will be frequently used in this paper.

  • \(M_G^0(0,T):=\{\eta : \eta _{t}(\omega )=\sum _{j=0}^{N-1}\xi _{j}(\omega )\mathbf{1} _{[t_{j},t_{j+1})}(t),\) where \(\xi _j\in L_{ip}(\Omega _{t_j})\), \(0=t_0\le \cdots \le t_N=T\) is a partition of \([0,T]\}\);

  • \(M_G^p(0,T)\) is the completion of \(M_G^0(0,T)\) under the norm \(\Vert \eta \Vert _{M_{G}^{p}}\);

  • \(H_G^p(0,T)\) is the completion of \(M_G^0(0,T)\) under the norm \(\Vert \eta \Vert _{H_G^p}\);

  • \(S_G^0(0,T)=\{h(t,B_{t_1\wedge t}, \ldots ,B_{t_n\wedge t}):t_1,\ldots ,t_n\in [0,T],h\in C_{b,Lip}(\mathbb {R}^{n+1})\}\);

  • \(S_G^p(0,T)\) is the completion of \(S_G^0(0,T)\) under the norm \(\Vert \eta \Vert _{S_G^p}\),

where \(\Vert \eta \Vert _{M_{G}^{p}}:=(\mathbb {{\hat{E}}}[\int _{0}^{T}|\eta _{s}|^{p}\mathrm{d}s])^{1/p}\), \(\Vert \eta \Vert _{H_G^p}:=\{\hat{\mathbb {E}}[(\int _0^T|\eta _s|^2\mathrm{d}s)^{p/2}]\}^{1/p}\) and \(\Vert \eta \Vert _{S_G^p}=\{\hat{\mathbb {E}}[\sup _{t\in [0,T]}|\eta _t|^p]\}^{1/p}\).

We denote by \(\langle B\rangle \) the quadratic variation process of the G-Brownian motion B. For two processes \(\eta \in M_G^p(0,T)\) and \(\zeta \in H_G^p(0,T)\), Peng established the G-Itô integrals \(\int _0^\cdot \eta _s \mathrm{d}\langle B\rangle _s\) and \(\int _0^\cdot \zeta _s \mathrm{d}B_s\). Similar to the classical Burkholder–Davis–Gundy inequality, the following property holds.

Proposition 2.1

[13] If \(\eta \in H_G^{\alpha }(0,T)\) with \(\alpha \ge 1\) and \(p\in (0,\alpha ]\), then \(\sup _{u\in [t,T]}|\int _t^u\eta _s \mathrm{d}B_s|^p\in L_G^1(\Omega _T)\) and

$$\begin{aligned} \underline{\sigma }^p c_p\hat{\mathbb {E}}_t\left[ \left( \int _t^T |\eta _s|^2\mathrm{d}s\right) ^{p/2}\right] \le \hat{\mathbb {E}}_t \left[ \sup _{u\in [t,T]} \left| \int _t^u\eta _s \mathrm{d}B_s \right| ^p \right] \le {\bar{\sigma }}^p C_p\hat{\mathbb {E}}_t\left[ \left( \int _t^T |\eta _s|^2\mathrm{d}s\right) ^{p/2}\right] , \end{aligned}$$

where \(0<c_p<C_p<\infty \) are constants.
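Under any single scenario with constant volatility \(\sigma =1\), the statement reduces to the classical Burkholder–Davis–Gundy inequality, which can be checked by Monte Carlo for \(\eta \equiv 1\) and \(p=2\); the sample sizes and seed below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 20000, 200, 1.0
dt = T / n_steps
# Under one scenario with sigma = 1, M_t = int_0^t 1 dB_s is a standard
# Brownian motion, simulated here by cumulative sums of Gaussian increments.
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
M = np.cumsum(dB, axis=1)
lhs = float(np.mean(np.max(np.abs(M), axis=1) ** 2))   # E[ sup_t |M_t|^2 ]
rhs = T                                                # E[ int_0^T |eta_s|^2 ds ]
print(lhs / rhs)   # between 1 (since sup |M| >= |M_T|) and Doob's constant 4
```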

We now introduce some basic results of G-BSDEs. Consider the following type of G-BSDE

$$\begin{aligned} Y_t=\xi +\int _t^T f(s,Y_s,Z_s)\mathrm{d}s+\int _t^T g(s,Y_s,Z_s)\mathrm{d}\langle B\rangle _s-\int _t^T Z_s \mathrm{d}B_s-(K_T-K_t), \end{aligned}$$
(1)

where

$$\begin{aligned} f(t,\omega ,y,z), \ g(t,\omega ,y,z):[0,T]\times \Omega _T\times \mathbb {R}\times \mathbb {R}\rightarrow \mathbb {R} \end{aligned}$$

satisfy the following properties:

  1. (H1)

    There exists some \(\beta >1\) such that for any \(y, z \in \mathbb {R}\), \(f(\cdot ,\cdot ,y,z), \ g(\cdot ,\cdot ,y,z)\in M_G^{\beta }(0,T)\);

  2. (H2)

    There exists some \(L>0\) such that

    $$\begin{aligned} |f(t,y,z)-f(t,y',z')|+|g(t,y,z)-g(t,y',z')|\le L(|y-y'|+|z-z'|). \end{aligned}$$

For simplicity, we denote by \(\mathfrak {S}_G^{\alpha }(0,T)\) the collection of processes (YZK) such that \(Y\in S_G^{\alpha }(0,T)\), \(Z\in H_G^{\alpha }(0,T)\), and K is a decreasing G-martingale with \(K_0=0\) and \(K_T\in L_G^{\alpha }(\Omega _T)\). Hu et al. [12, 13] established the existence and uniqueness result for Eq. (1) as well as the comparison theorem.

Theorem 2.3

[12] Assume that \(\xi \in L_G^{\beta }(\Omega _T)\) and fg satisfy (H1) and (H2) for some \(\beta >1\). Then, for any \(1<\alpha <\beta \), Eq. (1) has a unique solution \((Y,Z,K)\in \mathfrak {S}_G^{\alpha }(0,T)\). Moreover, we have

$$\begin{aligned} |Y_t|^\alpha \le C\hat{\mathbb {E}}_t\left[ |\xi |^\alpha +\int _t^T \left( |f(s,0,0)|^\alpha +|g(s,0,0)|^\alpha \right) \mathrm{d}s\right] , \end{aligned}$$

where the constant C depends on \(\alpha \), T, \(\underline{\sigma }\) and L.

Below is a generalization of Proposition 3.5 in [12].

Theorem 2.4

Let fg satisfy (H1) and (H2) for some \(\beta >1\). Assume

$$\begin{aligned} Y_{t}&=\xi +\int _{t}^{T}f(s,Y_{s},Z_{s})\mathrm{d}s+\int _t^T g(s,Y_s,Z_s)\mathrm{d}\langle B\rangle _s\\&\quad -\int _{t}^{T}Z_{s}\mathrm{d}B_{s} -(K_{T}-K_{t})+(A_T-A_t), \end{aligned}$$

where \(Y\in S_G^{\alpha }(0,T)\), \(Z\in H_G^{\alpha }(0,T)\), and K, A are both decreasing processes with \(A_0=K_{0}=0\) and \(A_T, K_{T}\in L_G^{\alpha }(\Omega _{T})\) for some \(\beta \ge \alpha >1\). Then there exists a constant \(C_{\alpha }:=C(\alpha ,T,\underline{\sigma },{\bar{\sigma }},L)>0\) such that

$$\begin{aligned} \begin{aligned} \mathbb {{\hat{E}}}\left[ \left( \int _{0}^{T}|Z_{s}|^{2}\mathrm{d}s\right) ^{\frac{\alpha }{2}}\right]&\le C_{\alpha }\bigg \{ \Vert Y\Vert _{S_G^{\alpha }}^{\alpha }+\Vert Y\Vert _{S_G^{\alpha }}^{\frac{\alpha }{2}} \times \bigg (\left( \mathbb {{\hat{E}}}\left[ \left( \int _{0}^{T}f_{s}^{0}\mathrm{d}s\right) ^{\alpha }\right] \right) ^{\frac{1}{2}}\\&\quad +\left( \mathbb {{\hat{E}}}\left[ \left( \int _{0}^{T}g_{s}^{0}\mathrm{d}s\right) ^{\alpha }\right] \right) ^{\frac{1}{2}}+\big (m_\alpha ^{A, K}\big )^{1/2}\bigg )\bigg \}, \end{aligned} \end{aligned}$$

where \(f_{s}^{0}=|f(s,0,0)|\), \(g_{s}^{0}=|g(s,0,0)|\), \(m_\alpha ^{A,K}=\min \{\hat{\mathbb {E}}[|A_T|^\alpha ], \hat{\mathbb {E}}[|K_T|^\alpha ]\}\).

Proof

Applying G-Itô’s formula to \(|Y_{t}|^{2}\), we have

$$\begin{aligned} |Y_{0}|^{2}+\int _{0}^{T}|Z_{s}|^{2}\mathrm{d}\langle B\rangle _{s}&=|\xi |^{2}-\int _{0}^{T}2Y_{s}Z_{s}\mathrm{d}B_{s}-\int _{0}^{T}2Y_{s}d(K_{s}-A_{s})\\&\quad +\int _{0}^{T}2Y_{s}f(s)\mathrm{d}s+\int _{0}^{T}2Y_{s}g(s)\mathrm{d}\langle B\rangle _s, \end{aligned}$$

where \(f(s)=f(s,Y_{s},Z_{s})\) and \(g(s)=g(s,Y_{s},Z_{s})\). Then

$$\begin{aligned} \left( \int _{0}^{T}|Z_{s}|^{2}\mathrm{d}\langle B\rangle _{s}\right) ^{\frac{\alpha }{2}}&\le C_{\alpha }\bigg \{|\xi |^{\alpha }+ \left| \int _{0}^{T}Y_{s}f(s)\mathrm{d}s \right| ^{\frac{\alpha }{2}} +\left| \int _{0}^{T}Y_{s}Z_{s}\mathrm{d}B_{s}\right| ^{\frac{\alpha }{2}}\\&\quad +\left| \int _{0}^{T}Y_{s}g(s)\mathrm{d}\langle B\rangle _s \right| ^{\frac{\alpha }{2}}+ \left| \int _{0}^{T}Y_{s} \mathrm{d}K_{s}\right| ^{\frac{\alpha }{2}}+\left| \int _{0}^{T}Y_{s} \mathrm{d}A_{s} \right| ^{\frac{\alpha }{2}}\bigg \}. \end{aligned}$$

Estimating each term on the right-hand side by the BDG-type inequality (Proposition 2.1) and Young’s inequality, and noting that \(\int _0^T|Z_s|^2\mathrm{d}\langle B\rangle _s\ge \underline{\sigma }^2\int _0^T|Z_s|^2\mathrm{d}s\), we can obtain

$$\begin{aligned} \begin{aligned} \mathbb {{\hat{E}}}\left[ \left( \int _{0}^{T}|Z_{s}|^{2}\mathrm{d}s\right) ^{\frac{\alpha }{2}}\right] \le&C_{\alpha }\bigg \{ \Vert Y\Vert _{S_G^{\alpha }}^{\alpha }+\Vert Y\Vert _{S_G^{\alpha }}^{\frac{\alpha }{2}}\bigg [(\mathbb {{\hat{E}}}[|K_{T}|^{\alpha }])^{\frac{1}{2}}+(\mathbb {{\hat{E}}}[|A_{T}|^{\alpha }])^{\frac{1}{2}}\\&\quad +\left( \mathbb {{\hat{E}}}\left[ \left( \int _{0}^{T}f_{s}^{0}\mathrm{d}s\right) ^{\alpha }\right] \right) ^{\frac{1}{2}}+\left( \mathbb {{\hat{E}}}\left[ \left( \int _{0}^{T}g_{s}^{0}\mathrm{d}s\right) ^{\alpha }\right] \right) ^{\frac{1}{2}}\bigg ]\bigg \}. \end{aligned} \end{aligned}$$
(2)

On the other hand, noting that

$$\begin{aligned} K_{T}=\xi -Y_{0}+\int _{0}^{T}f(s)\mathrm{d}s+\int _0^T g(s)\mathrm{d}\langle B\rangle _s-\int _{0}^{T}Z_{s}\mathrm{d}B_{s}+A_T, \end{aligned}$$

we get

$$\begin{aligned} \begin{aligned} \mathbb {{\hat{E}}}[|K_{T}|^{\alpha }]&\le C_{\alpha }\bigg \{ \Vert Y\Vert _{S_G^{\alpha }}^{\alpha }+\mathbb {{\hat{E}}}\left[ \left( \int _{0}^{T}|Z_{s}|^{2}\mathrm{d}s\right) ^{\frac{\alpha }{2}}\right] +\mathbb {{\hat{E}}}[|A_{T}|^{\alpha }]\\&\quad +\mathbb {{\hat{E}}}\left[ \left( \int _{0}^{T}f_{s}^{0}\mathrm{d}s\right) ^{\alpha }\right] +\mathbb {{\hat{E}}}\left[ \left( \int _{0}^{T}g_{s}^{0}\mathrm{d}s\right) ^{\alpha }\right] \bigg \}. \end{aligned} \end{aligned}$$
(3)

Suppose that \(\hat{\mathbb {E}}[|K_T|^\alpha ]\ge \hat{\mathbb {E}}[|A_T|^\alpha ]\). By (2) and (3), we have

$$\begin{aligned} \mathbb {{\hat{E}}}\left[ \left( \int _{0}^{T}|Z_{s}|^{2}\mathrm{d}s\right) ^{\frac{\alpha }{2}}\right]&\le C_{\alpha }\bigg \{ \Vert Y\Vert _{S_G^{\alpha }}^{\alpha }+\Vert Y\Vert _{S_G^{\alpha }}^{\frac{\alpha }{2}} \times \bigg (\left( \mathbb {{\hat{E}}}\left[ \left( \int _{0}^{T}f_{s}^{0}\mathrm{d}s\right) ^{\alpha }\right] \right) ^{\frac{1}{2}}\\&\quad +\left( \mathbb {{\hat{E}}}\left[ \left( \int _{0}^{T}g_{s}^{0}\mathrm{d}s\right) ^{\alpha }\right] \right) ^{\frac{1}{2}} +\big (\hat{\mathbb {E}}[|A_T|^\alpha ]\big )^{1/2}\bigg )\bigg \}. \end{aligned}$$

By the symmetry between K and A, we get the desired result. \(\square \)

Theorem 2.5

[13] Let \((Y_t^l,Z_t^l,K_t^l)_{t\le T}\), \(l=1,2\), be the solutions of the following G-BSDEs:

$$\begin{aligned} Y^l_t&=\xi ^l+\int _t^T f^l(s,Y^l_s,Z^l_s)\mathrm{d}s+\int _t^T g^l(s,Y^l_s,Z^l_s)\mathrm{d}\langle B\rangle _s\\&\quad +V_T^l-V_t^l-\int _t^T Z^l_s \mathrm{d}B_s-(K^l_T-K^l_t), \end{aligned}$$

where processes \(\{V_t^l\}_{0\le t\le T}\) are assumed to be right-continuous with left limits (RCLL), q.s., such that \(\hat{\mathbb {E}}[\sup _{t\in [0,T]}|V_t^l|^\beta ]<\infty \), \(f^l,\ g^l\) satisfy (H1) and (H2), \(\xi ^l\in L_G^{\beta }(\Omega _T)\) with \(\beta >1\). If \(\xi ^1\ge \xi ^2\), \(f^1\ge f^2\), \(g^1\ge g^2\) and \(V_t^1-V_t^2\) is an increasing process, then \(Y_t^1\ge Y_t^2\).

Compared to the classical BSDE, the BSDE driven by G-Brownian motion involves an additional non-increasing G-martingale K, which exhibits the uncertainty of the model. The difficulty in the analysis of G-BSDEs mainly lies in the presence of this component. Song [29] proved that a non-increasing G-martingale cannot be of the form \(\{\int _0^t \eta _s\mathrm{d}s\}\) or \(\{\int _0^t \gamma _s\mathrm{d}\langle B\rangle _s\}\), where \(\eta ,\gamma \in M_G^1(0,T)\). More generally, he proved the following result.

Theorem 2.6

[29] Assume that for \(t\in [0,T]\), \(\int _0^t \zeta _s \mathrm{d}B_s+\int _0^t \eta _s\mathrm{d}s+K_t=L_t\), where \(\zeta \in H_G^1(0,T)\), \(\eta \in M_G^1(0,T)\) and K, L are non-increasing G-martingales. Then we have \(\int _0^t \zeta _s\mathrm{d}B_s=0\), \(\int _0^t \eta _s\mathrm{d}s=0\) and \(K_t=L_t\).

Remark 2.1

A process of the following form is called a generalized G-Itô process:

$$\begin{aligned} u_t=u_0+\int _0^t \eta _s\mathrm{d}s+\int _0^t \zeta _s \mathrm{d}B_s+K_t, \end{aligned}$$

where \(\eta \in M_G^1(0,T)\), \(\zeta \in H_G^1(0,T)\) and K is a non-increasing G-martingale. Theorem 2.6 shows that the decomposition for generalized G-Itô processes is unique.

3 G-BSDE with Two Reflection Barriers

In this section, we give the formulation of the doubly reflected BSDE driven by G-Brownian motion. Particularly, the approximate Skorohod condition is introduced to guarantee the uniqueness of the solutions, which will be proved via some a priori estimates given later.

3.1 Formulation of Doubly Reflected BSDE Driven by G-Brownian Motion

We formulate the doubly reflected BSDE driven by G-Brownian motion in detail. For simplicity, we only consider the case of 1-dimensional G-Brownian motion. However, our results and methods still hold for the case \(d>1\). We are given the following data: the generators f and g, the lower obstacle process \(\{L_t\}_{t\in [0,T]}\), the upper obstacle process \(\{U_t\}_{t\in [0,T]}\) and the terminal value \(\xi \).

Here f and g are maps

$$\begin{aligned} f(t,\omega ,y,z),g(t,\omega ,y,z):[0,T]\times \Omega _T\times \mathbb {R}^2\rightarrow \mathbb {R}. \end{aligned}$$

Below, we list the assumptions on the data of the doubly reflected G-BSDEs.

There exists some \(\beta >2\) such that

  1. (A1)

    for any y, z, \(f(\cdot ,\cdot ,y,z)\), \(g(\cdot ,\cdot ,y,z)\in S_G^\beta (0,T)\);

  2. (A2)

    \(|f(t,\omega ,y,z)-f(t,\omega ,y',z')|+|g(t,\omega ,y,z)-g(t,\omega ,y',z')|\le \kappa (|y-y'|+|z-z'|)\) for some \(\kappa >0\);

  3. (A3)

    \(\{L_t\}_{t\in [0,T]}\), \(\{U_t\}_{t\in [0,T]}\in S_G^\beta (0,T)\), \(L_t\le U_t\), \(t\in [0,T]\), q.s. and the upper obstacle is a generalized G-Itô process of the following form

    $$\begin{aligned} U_t=U_0+\int _0^t b(s)\mathrm{d}s+\int _0^t \sigma (s)\mathrm{d}B_s+K_t, \end{aligned}$$

    where \(\{b(t)\}_{t\in [0,T]},\{\sigma (t)\}_{t\in [0,T]}\in S_G^\beta (0,T)\), \(K\in S_G^\beta (0,T)\) is a non-increasing G-martingale;

  4. (A4)

    \(\xi \in L_G^\beta (\Omega _T)\) and \(L_T\le \xi \le U_T\), q.s.

Remark 3.1

Notice that Assumptions (A1)–(A4) are quite similar to those in [3], since the non-increasing G-martingale K vanishes when G reduces to a linear function.

We call a triple of processes (Y, Z, A) with \(Y, A\in S_G^\alpha (0,T)\), \(Z\in H_G^\alpha (0,T)\), for some \(2\le \alpha \le \beta \), a solution to the doubly reflected G-BSDE with data \((\xi , f, g, L, U)\) if the following properties hold:

  1. (S1)

    \(L_t\le Y_t\le U_t\), \(t\in [0,T]\);

  2. (S2)

    \(Y_t=\xi +\int _t^T f(s,Y_s,Z_s)\mathrm{d}s+\int _t^T g(s,Y_s,Z_s)\mathrm{d}\langle B\rangle _s-\int _t^T Z_s \mathrm{d}B_s+(A_T-A_t)\);

  3. (S3)

    (Y, A) satisfies the approximate Skorohod condition of order \(\alpha \) \((ASC _{\alpha })\).

Condition \((ASC _{\alpha })\): We say that a pair of processes (Y, A) with \(Y, A\in S_G^\alpha (0,T)\) satisfies the approximate Skorohod condition of order \(\alpha \) (with respect to the obstacles L, U) if there exist non-decreasing processes \(\{A^{n,+}\}_{n\in \mathbb {N}}\), \(\{A^{n,-}\}_{n\in \mathbb {N}}\) and non-increasing G-martingales \(\{K^n\}_{n\in \mathbb {N}}\), such that

  • \(\hat{\mathbb {E}}[|A_T^{n,+}|^\alpha +|A_T^{n,-}|^\alpha +|K^n_T|^\alpha ]\le C\), where C is independent of n;

  • \(\hat{\mathbb {E}}[\sup \limits _{t\in [0,T]}|A_t-(A_t^{n,+}-A_t^{n,-}-K_t^n)|^\alpha ]\rightarrow 0\), as \(n\rightarrow \infty \);

  • \(\lim \limits _{n\rightarrow \infty }\hat{\mathbb {E}}[|\int _0^T (Y_s-L_s)d A_s^{n,+}|^{\alpha /2}]=0\);

  • \(\lim \limits _{n\rightarrow \infty }\hat{\mathbb {E}}[|\int _0^T (U_s-Y_s)d A_s^{n,-}|^{\alpha /2}]=0\).

Below is the main result of this paper, which gives the well-posedness of the doubly reflected G-BSDE.

Theorem 3.1

Suppose that \(\xi \), f, g, L and U satisfy (A1)–(A4). Then the doubly reflected G-BSDE with data \((\xi ,f,g,L,U)\) has a unique solution (Y, Z, A). Moreover, for any \(2\le \alpha <\beta \) we have \(Y\in S^\alpha _G(0,T)\), \(Z\in H_G^\alpha (0,T)\) and \(A\in S_G^{\alpha }(0,T)\).

Remark 3.2

Since the solution to a doubly reflected G-BSDE is constructed by a family of penalized G-BSDEs (see Sect. 4), the existence holds only for the case \(\alpha <\beta \) due to Theorem 2.3. However, if there are two solutions \((Y^i,Z^i,A^i)\) with \(Y^i, A^i\in S^\beta _G(0,T)\) and \(Z^i\in H_G^\beta (0,T)\), \(i=1,2\), then by Proposition 3.1 we have \(Y^1\equiv Y^2\). Consequently, \(Z^1\equiv Z^2\) and \(A^1\equiv A^2\). Therefore, the uniqueness result still holds when \(\alpha =\beta \).

Remark 3.3

Recall that in the classical case (see [3]), the Skorohod condition below is required to guarantee the uniqueness of the solution (Y, Z, A) to the doubly reflected BSDE with parameters \((\xi ,f,L,U)\): \(\int _0^T (Y_s-L_s)\mathrm{d}A^+_s=\int _0^T (U_s-Y_s)\mathrm{d}A^-_s=0\), where \(A^+\), \(A^-\) are two non-decreasing processes and \(A=A^+-A^-\).

Therefore, a more natural definition of the solution to the G-RBSDE \((\xi ,f,g,L,U)\) is a triple of processes (Y, Z, A) satisfying (S1), (S2) and the following Skorohod condition.

Condition \((SC )\): The process A is decomposed as \(A={\tilde{A}}-K\) with \({\tilde{A}}\) a finite variation process and K a non-increasing G-martingale, such that

$$\begin{aligned} \int _0^T (Y_s-L_s)d {\tilde{A}}_s^{+}=\int _0^T (U_s-Y_s)d {\tilde{A}}_s^{-}=0, \end{aligned}$$

where \({\tilde{A}}^+\), \({\tilde{A}}^-\) are two non-decreasing processes and \({\tilde{A}}={\tilde{A}}^+-{\tilde{A}}^-\).

Since the Skorohod condition is stronger than the approximate Skorohod condition, it follows from Theorem 3.1 that the solution satisfying Condition (SC) is unique. The existence of a solution satisfying Condition (SC) is equivalent to proving the following decomposition of the process A in Theorem 3.1:

\(A={\tilde{A}}-K,\) where \({\tilde{A}}\) is a finite variation process satisfying the Skorohod condition and K is a non-increasing G-martingale.

The existence and uniqueness of this decomposition are both interesting problems, which will be considered in future work.

Remark 3.4

Suppose that \(U\equiv \infty \), i.e., the doubly reflected G-BSDE reduces to the reflected G-BSDE with a lower obstacle. We can show that \(A\in S_G^\alpha (0,T)\) is non-decreasing and satisfies the martingale condition, that is, \(\{-\int _0^t (Y_s-L_s)\mathrm{d}A_s\}_{t\in [0,T]}\) is a non-increasing G-martingale, which is the definition of a solution to the reflected G-BSDE with a lower obstacle (see [15]).

In fact, let \(\{A^{n,+}\}_{n\in \mathbb {N}}\), \(\{A^{n,-}\}_{n\in \mathbb {N}}\) and \(\{K^n\}_{n\in \mathbb {N}}\) be the approximation sequences for A. It is clear that \(A^{n,-}\equiv 0\) for any \(n\in \mathbb {N}\). Note that \(\{A^{n,+}-K^n\}\) is non-decreasing and

$$\begin{aligned} \lim _{n\rightarrow \infty }\hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|A_t-(A_t^{n,+}-K_t^n)|^\alpha \right] =0, \end{aligned}$$

then A is non-decreasing. Since \(Y\ge L\) and \(K^n\) is a non-increasing G-martingale, it follows that \(\{\int _0^t (Y_s-L_s)\mathrm{d}K^n_s\}_{t\in [0,T]}\) is a non-increasing G-martingale for any \(n\in \mathbb {N}\). It suffices to show that

$$\begin{aligned} \lim _{n\rightarrow \infty }\hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|-\int _0^t (Y_s-L_s)\mathrm{d}A_s-\int _0^t(Y_s-L_s)\mathrm{d}K_s^n|\right] =0. \end{aligned}$$

It is easy to check that

$$\begin{aligned}&\hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|-\int _0^t (Y_s-L_s)\mathrm{d}A_s-\int _0^t(Y_s-L_s)\mathrm{d}K_s^n| \right] \\&\quad \le \hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}\left| \int _0^t (Y_s-L_s)d(A_s-{\tilde{A}}_s^{n}) \right| \right] +\hat{\mathbb {E}}\left[ \sup _{t\in [0,T]} \left| \int _0^t (Y_s-L_s)\mathrm{d}A_s^{n,+}\right| \right] , \end{aligned}$$

where \({\tilde{A}}^n=A^{n,+}-K^n\). Applying Lemma 3.1 below to the first term and the approximate Skorohod condition to the second yields the desired result.

By a similar analysis, if \(L\equiv -\infty \), the definition of the solution to the doubly reflected G-BSDE reduces to that of the upper obstacle case studied in [14].

Remark 3.5

For some results, we will replace Assumptions (A1), (A3) by the following weaker ones.

(A1’):

For any \(y,z\), \(f(\cdot ,\cdot ,y,z)\), \(g(\cdot ,\cdot ,y,z)\in M_G^\beta (0,T)\);

(A3’):

\(\{L_t\}_{t\in [0,T]}\), \(\{U_t\}_{t\in [0,T]}\in S_G^\beta (0,T)\), \(L_t\le U_t\), \(t\in [0,T]\), q.s. and there exists a generalized G-Itô process I such that \(L\le I\le U\), where

$$\begin{aligned}I_t=I_0+\int _0^t b^I(s)\mathrm{d}s+\int _0^t \sigma ^I(s)\mathrm{d}B_s+K^I_t,\end{aligned}$$

with \(b^I\in M_G^\beta (0,T), \sigma ^I\in H_G^\beta (0,T)\), \(K^I_0=0\) and \(K^I\in S_G^\beta (0,T)\) a non-increasing G-martingale.

Remark 3.6

Since the generator g plays the same role as f, in the remainder of this paper we only consider the case \(g=0\).

3.2 Some A Priori Estimates

In this subsection, we give an a priori estimate for solutions of the doubly reflected G-BSDE, which implies the uniqueness of the solution. In the rest of this paper, we denote by C a constant depending on \(\alpha , T, \kappa ,\underline{\sigma }\), but not on n, which may vary from line to line.

Let us denote by \(Var_0^T(A)\) the total variation of a process A on [0, T]. We first introduce the following lemma.

Lemma 3.1

For \(\alpha >1\), let A, \(\{A^n\}_{n\in \mathbb {N}}\subset S_G^\alpha (0,T)\) be processes such that \(\hat{\mathbb {E}}[|Var_0^T(A^n)|^\alpha ]\le C\) and

$$\begin{aligned} \lim _{n\rightarrow \infty }\hat{\mathbb {E}} \left[ \sup _{t\in [0,T]}|A_t-A_t^n|^\alpha \right] =0, \end{aligned}$$

where C is independent of n. Then, we have \(\hat{\mathbb {E}}[|Var_0^T(A)|^\alpha ]\le C\). Moreover, if \(Y\in S_G^{p}(0,T)\), with \(p=\frac{\alpha }{\alpha -1}\), we have

$$\begin{aligned}\lim _{n\rightarrow \infty }\hat{\mathbb {E}} \left[ \sup _{t\in [0,T]}\left| \int _0^t Y_sd(A_s-A_s^n)\right| \right] =0.\end{aligned}$$

Proof

We first show that A is a finite variation process. Let

$$\begin{aligned} \mathcal {A}=\left\{ \sum _{i=1}^{n-1} a_i I_{(t_i,t_{i+1}]}(s);|a_i|=1, 0\le t_0<\cdots <t_n=T,n\in \mathbb {N}\right\} . \end{aligned}$$

Since \(\sup _{t\in [0,T]}|A_t-A_t^n|\) converges to 0 under the norm \(\Vert \cdot \Vert _{L_G^1}\), we may choose a subsequence, still denoted by \(A^n\), such that \(\sup _{t\in [0,T]}|A_t-A_t^n|\) converges to 0, q.s. It follows that for any \(a\in \mathcal {A}\)

$$\begin{aligned} \lim _{n\rightarrow \infty }\int _0^T a(s)\mathrm{d}A_s^n=\int _0^T a(s)\mathrm{d}A_s. \end{aligned}$$

Then we have

$$\begin{aligned} Var_0^T(A)&=\sup _{a\in \mathcal {A}}\int _0^T a(s)\mathrm{d}A_s=\sup _{a\in \mathcal {A}}\liminf _n\int _0^T a(s)\mathrm{d}A_s^n\\&\le \liminf _n\sup _{a\in \mathcal {A}}\int _0^T a(s)\mathrm{d}A_s^n=\liminf _n Var_0^T(A^n). \end{aligned}$$

Hence, it follows from the assumption that \(\hat{\mathbb {E}}[|Var_0^T(A)|^\alpha ]\le C\). It remains to prove that for any \(Y\in S_G^{p}(0,T)\), with \(p=\frac{\alpha }{\alpha -1}\), we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\hat{\mathbb {E}} \left[ \sup _{t\in [0,T]}\left| \int _0^t Y_sd(A_s-A_s^n)\right| \right] =0. \end{aligned}$$

In fact, for each \(m\in \mathbb {N}\), let \({\widetilde{Y}}^m_t=\sum _{i=0}^{m-1}Y_{t_i^m}I_{[t_i^m,t_{i+1}^m)}(t)\), where \(t_i^m=\frac{iT}{m}\), \(i=0,1,\ldots ,m\). Set

$$\begin{aligned} \mathbf{I} =\sup _{t\in [0,T]}\left| \int _0^t {\widetilde{Y}}^m_s d(A_s-A_s^n)\right| ,\ \mathbf{II} =\sup _{t\in [0,T]}\left| \int _0^t (Y_s-{\widetilde{Y}}_s^m)d(A_s-A_s^n)\right| . \end{aligned}$$
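By the triangle inequality, since \(Y_s={\widetilde{Y}}^m_s+(Y_s-{\widetilde{Y}}^m_s)\), these two terms control the quantity of interest:

$$\begin{aligned} \sup _{t\in [0,T]}\left| \int _0^t Y_sd(A_s-A_s^n)\right| \le \mathbf{I} +\mathbf{II} . \end{aligned}$$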

By simple calculation, we have

$$\begin{aligned} \hat{\mathbb {E}}[\mathbf{I} ]&\le \sum _{i=0}^{m-1}\hat{\mathbb {E}}\left[ \sup _{s\in [0,T]}|Y_s|\left( |A^n_{t_{i+1}^m}-A_{t_{i+1}^m}|+|A^n_{t_{i}^m}-A_{t_{i}^m}|\right) \right] \\&\le \left( \hat{\mathbb {E}}\left[ \sup _{s\in [0,T]}|Y_s|^p\right] \right) ^{1/p} \sum _{i=0}^{m-1}\left\{ \left( \hat{\mathbb {E}}\left[ |A^n_{t_{i+1}^m}-A_{t_{i+1}^m}|^\alpha \right] \right) ^{1/\alpha }+\left( \hat{\mathbb {E}}\left[ |A^n_{t_{i}^m}-A_{t_{i}^m}|^\alpha \right] \right) ^{1/\alpha }\right\} ,\\ \hat{\mathbb {E}}[\mathbf{II} ]&\le \left( \hat{\mathbb {E}}\left[ \sup _{s\in [0,T]}|Y_s-{\widetilde{Y}}_s^m|^p\right] \right) ^{1/p}\left\{ \left( \hat{\mathbb {E}}\left[ |Var_0^T(A^n)|^\alpha \right] \right) ^{1/\alpha }+\left( \hat{\mathbb {E}}\left[ |Var_0^T(A)|^\alpha \right] \right) ^{1/\alpha }\right\} . \end{aligned}$$

For each fixed \(m\in \mathbb {N}\), letting n tend to infinity yields that \(\hat{\mathbb {E}}[\mathbf{I} ]\rightarrow 0\). By Lemma 3.2 in [12], \(\hat{\mathbb {E}}[\mathbf{II} ]\rightarrow 0\) as m tends to infinity, uniformly in n. The proof is complete. \(\square \)

Proposition 3.1

Let \((\xi ^1,f^1,L,U)\) and \((\xi ^2,f^2,L,U)\) be two sets of data, each satisfying Assumptions (A1)–(A4). Let \((Y^i,Z^i,A^i)\) be a solution of the doubly reflected G-BSDE with data \((\xi ^i,f^i,L,U)\), \(i=1,2\), respectively. Set \({\hat{Y}}_t=Y^1_t-Y^2_t\), \({\hat{\xi }}=\xi ^1-\xi ^2\). Then, for any \(2\le \alpha \le \beta \), there exists a constant \(C:=C(\alpha ,T, \kappa ,\underline{\sigma })>0\) such that

$$\begin{aligned} |{\hat{Y}}_t|^\alpha \le C\hat{\mathbb {E}}_t[|{\hat{\xi }}|^\alpha +\int _t^T|{\hat{\lambda }}_s|^\alpha \mathrm{d}s], \end{aligned}$$

where \({\hat{\lambda }}_s=|f^1(s,Y_s^2,Z_s^2)-f^2(s,Y_s^2,Z_s^2)|\).

Proof

Set \({\hat{Z}}_t=Z_t^1-Z_t^2\), \({\hat{A}}_t=A_t^1-A_t^2\). By the G-Itô formula, we have

$$\begin{aligned} d|{\hat{Y}}_t|^2=-2{\hat{Y}}_t(f^1(t,Y_t^1,Z_t^1)-f^2(t,Y_t^2,Z_t^2))dt+2{\hat{Y}}_t{\hat{Z}}_t\mathrm{d}B_t+{\hat{Z}}_t^2 \mathrm{d}\langle B\rangle _t-2{\hat{Y}}_td{\hat{A}}_t. \end{aligned}$$

For any \(r>0\), applying G-Itô’s formula to \(H_t^{\alpha /2}e^{rt}=(|{\hat{Y}}_t|^2)^{\alpha /2} e^{rt}\), we have

$$\begin{aligned} \begin{aligned}&H_t^{\alpha /2}e^{rt}+\int _t^T re^{rs}H_s^{\alpha /2}\mathrm{d}s+\int _t^T \frac{\alpha }{2} e^{rs} H_s^{\alpha /2-1}({\hat{Z}}_s)^2\mathrm{d}\langle B\rangle _s\\&\quad =|{\hat{\xi }}|^\alpha e^{rT}+ \alpha (1-\frac{\alpha }{2})\int _t^Te^{rs}H_s^{\alpha /2-2}({\hat{Y}}_s)^2({\hat{Z}}_s)^2\mathrm{d}\langle B\rangle _s\\&\qquad +\int _t^T{\alpha } e^{rs}H_s^{\alpha /2-1}{\hat{Y}}_s(f^1(s,Y_s^1,Z_s^1)-f^2(s,Y_s^2,Z_s^2))\mathrm{d}s \\&\qquad +\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}{\hat{Y}}_sd{\hat{A}}_s-\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}{\hat{Y}}_s{\hat{Z}}_s\mathrm{d}B_s. \end{aligned} \end{aligned}$$
(4)

From Assumption (A2) of \(f^1\), we have

$$\begin{aligned}&\int _t^T{\alpha } e^{rs}H_s^{\alpha /2-1}{\hat{Y}}_s(f^1(s,Y_s^1,Z_s^1)-f^2(s,Y_s^2,Z_s^2))\mathrm{d}s\\&\quad \le \int _t^T{\alpha }e^{rs}H_s^{\frac{\alpha -1}{2}}\{|f^1(s,Y_s^1,Z_s^1)-f^1(s,Y_s^2,Z_s^2)|+{\hat{\lambda }}_s\}\mathrm{d}s\\&\quad \le \int _t^T{\alpha }e^{rs}H_s^{\frac{\alpha -1}{2}}\{\kappa (|{\hat{Y}}_s|+|{\hat{Z}}_s|)+{\hat{\lambda }}_s\}\mathrm{d}s\\&\quad \le {\tilde{r}}\int _t^T e^{rs}H_s^{\alpha /2}\mathrm{d}s +\frac{\alpha (\alpha -1)}{4}\int _t^Te^{rs}H_s^{\alpha /2-1}({\hat{Z}}_s)^2\mathrm{d}\langle B\rangle _s\\&\qquad + \int _t^T{\alpha } e^{rs}H_s^{\alpha /2-1/2}|{\hat{\lambda }}_s|\mathrm{d}s, \end{aligned}$$

where \({\tilde{r}}=\alpha \kappa +\frac{\alpha \kappa ^2}{\underline{\sigma }^2(\alpha -1)}\). Then by Young’s inequality, we obtain

$$\begin{aligned} \int _t^T{\alpha } e^{rs}H_s^{\alpha /2-1/2}|{\hat{\lambda }}_s|\mathrm{d}s \le (\alpha -1)\int _t^T e^{rs}H_s^{\alpha /2}\mathrm{d}s+\int _t^T e^{rs}|{\hat{\lambda }}_s|^\alpha \mathrm{d}s. \end{aligned}$$

Let \(\{A^{i,n,+}\}_{n\in \mathbb {N}}\), \(\{A^{i,n,-}\}_{n\in \mathbb {N}}\) and \(\{K^{i,n}\}_{n\in \mathbb {N}}\) be the approximation sequences for \(A^{i}\), \(i=1,2\). Set \(A^{i,n}=A^{i,n,+}-A^{i,n,-}-K^{i,n}\), \(i=1,2\). It is easy to check that

$$\begin{aligned}&\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}{\hat{Y}}_s\mathrm{d}A^{1}_s\\&\quad = \int _t^T\alpha e^{rs}H_s^{\alpha /2-1}{\hat{Y}}_sd({A}^1_s-A_s^{1,n})+\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}{\hat{Y}}_sd{A}^{1,n}_s\\&\quad \le \left| \int _t^T\alpha e^{rs}H_s^{\alpha /2-1}{\hat{Y}}_sd({A}^1_s-A_s^{1,n})\right| +\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}({\hat{Y}}_s)^+\mathrm{d}A_s^{1,n,+}\\&\qquad +\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}({\hat{Y}}_s)^-\mathrm{d}A_s^{1,n,-}-\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}({\hat{Y}}_s)^+\mathrm{d}K_s^{1,n}. \end{aligned}$$

By Lemma 3.1, we have for any \(t\in [0,T]\)

$$\begin{aligned} \lim _{n\rightarrow \infty }\hat{\mathbb {E}}\left[ \left| \int _t^T\alpha e^{rs}H_s^{\alpha /2-1}{\hat{Y}}_sd({A}^1_s-A_s^{1,n})\right| \right] =0. \end{aligned}$$

Note that \(Y_s^i\ge L_s\), for any \(s\in [0,T]\) and \(i=1,2\), which implies that \({\hat{Y}}_s\le Y_s^1-L_s\). Hence, we have \(({\hat{Y}}_s)^+\le Y_s^1-L_s\). By simple calculation, we obtain that

$$\begin{aligned}&\hat{\mathbb {E}}[\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}({\hat{Y}}_s)^+\mathrm{d}A_s^{1,n,+}]\\&\quad \le C\hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}(|Y_t^1|+|Y_t^2|)^{\alpha -2}\int _t^T ({\hat{Y}}_s)^+d A_s^{1,n,+}\right] \\&\quad \le C(\hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}(|Y_t^1|^\alpha +|Y_t^2|^\alpha )\right] )^{\frac{\alpha -2}{\alpha }}(\hat{\mathbb {E}}\left[ \left| \int _t^T ({\hat{Y}}_s)^+d A_s^{1,n,+} \right| ^{\frac{\alpha }{2}} \right] )^{\frac{2}{\alpha }}. \end{aligned}$$

Recalling the definition of approximate Skorohod condition, we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\hat{\mathbb {E}}\left[ \left| \int _t^T\alpha e^{rs}H_s^{\alpha /2-1}({\hat{Y}}_s)^+\mathrm{d}A_s^{1,n,+}\right| \right] =0. \end{aligned}$$

Similar analysis as above yields that

$$\begin{aligned}&\lim _{n\rightarrow \infty }\hat{\mathbb {E}}\left[ \left| \int _t^T\alpha e^{rs}H_s^{\alpha /2-1}({\hat{Y}}_s)^-\mathrm{d}A_s^{1,n,-}\right| \right] =0,\\&\lim _{n\rightarrow \infty }\hat{\mathbb {E}}\left[ \left| \int _t^T\alpha e^{rs}H_s^{\alpha /2-1}({\hat{Y}}_s)^+\mathrm{d}A_s^{2,n,-}\right| \right] =0,\\&\lim _{n\rightarrow \infty }\hat{\mathbb {E}}\left[ \left| \int _t^T\alpha e^{rs}H_s^{\alpha /2-1}({\hat{Y}}_s)^-\mathrm{d}A_s^{2,n,+} \right| \right] =0. \end{aligned}$$

Set \(M^n_t=\int _0^t\alpha e^{rs}H_s^{\alpha /2-1}({\hat{Y}}_s{\hat{Z}}_s\mathrm{d}B_s+({\hat{Y}}_s)^{+}\mathrm{d}K^{1,n}_s+({\hat{Y}}_s)^-\mathrm{d}K_s^{2,n})\), \(n\ge 1\). By Lemma 3.4 in [12], \(M^n\) is a G-martingale. Let \(r={\tilde{r}}+\alpha \). Combining the above inequalities, we get

$$\begin{aligned}&H_t^{\alpha /2}e^{rt}+(M^n_T-M^n_t)\\&\quad \le |{\hat{\xi }}|^{\alpha } e^{rT}+\int _t^T e^{rs}|{\hat{\lambda }}_s|^\alpha \mathrm{d}s+\sum _{i=1}^2\left| \int _t^T\alpha e^{rs}H_s^{\alpha /2-1}{\hat{Y}}_sd({A}^i_s-A_s^{i,n}) \right| \\&\qquad +\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}({\hat{Y}}_s)^+d(A_s^{1,n,+}+A_s^{2,n,-})\\&\qquad +\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}({\hat{Y}}_s)^-d(A_s^{1,n,-}+A_s^{2,n,+}) \end{aligned}$$

Taking conditional expectations on both sides and letting \(n\rightarrow \infty \), there exists a constant \(C:=C(\alpha ,T, \kappa ,\underline{\sigma })>0\) such that

$$\begin{aligned} |{\hat{Y}}_t|^\alpha \le C\hat{\mathbb {E}}_t[|{\hat{\xi }}|^\alpha +\int _t^T|{\hat{\lambda }}_s|^\alpha \mathrm{d}s]. \end{aligned}$$

The proof is complete. \(\square \)

4 Proof of the Main Result

In this section, we will focus on the penalization method in order to get the existence of solutions to doubly reflected G-BSDEs. For \(n\in \mathbb {N}\), consider the following family of G-BSDEs

$$\begin{aligned} \begin{aligned} Y_t^n&=\xi +\int _t^T f(s,Y_s^n,Z_s^n)\mathrm{d}s-\int _t^T Z_s^n\mathrm{d}B_s-(K_T^n-K_t^n)\\&\quad +n\int _t^T(Y_s^n-L_s)^-\mathrm{d}s -n\int _t^T(Y_s^n-U_s)^+\mathrm{d}s. \end{aligned} \end{aligned}$$
(5)

Now let \(A_t^{n,-}=n\int _0^t (Y_s^n-U_s)^+\mathrm{d}s\), \(A_t^{n,+}=n\int _0^t (Y_s^n-L_s)^-\mathrm{d}s\). Then \(\{A_t^{n,\pm }\}_{t\in [0,T]}\) are nondecreasing processes. We can rewrite G-BSDE (5) as

$$\begin{aligned} \begin{aligned} Y_t^n&= \xi +\int _t^T f(s,Y_s^n,Z_s^n)\mathrm{d}s-\int _t^T Z_s^n\mathrm{d}B_s-(K_T^n-K_t^n)\\&\quad +(A_T^{n,+}-A_t^{n,+})-(A_T^{n,-}-A_t^{n,-}). \end{aligned} \end{aligned}$$
(6)

4.1 Uniform Estimates of \(Y^n\)

Under weaker Assumptions (A1’), (A2), (A3’), (A4), we show that \(\{Y^n\}_{n=1}^\infty \) are uniformly bounded under the norm \(\Vert \cdot \Vert _{S_G^\alpha }\).

Lemma 4.1

For \(2\le \alpha <\beta \), there exists a constant C independent of n, such that

$$\begin{aligned} \hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|Y_t^n|^\alpha \right] \le C. \end{aligned}$$

Proof

Let \(I_t=I_0+\int _0^t b^I(s)\mathrm{d}s+\int _0^t \sigma ^I(s)\mathrm{d}B_s+K^I_t\) be the generalized G-Itô process such that \(L\le I\le U\). Set \({\bar{Y}}_t^n=Y_t^n-{I}_t,\) \({\bar{Z}}_t^n=Z_t^n-\sigma ^I(t)\), \(H_t=({\bar{Y}}^n_t)^2\), \({\bar{U}}_t=U_t-I_t\), \({\bar{L}}_t=L_t-I_t\), and \({\bar{f}}_t=f(t,Y^n_t, Z^n_t)+b^I(t)\). G-BSDE (5) can be rewritten as

$$\begin{aligned} {\bar{Y}}_t^n&=\xi -I_T+\int _t^T {\bar{f}}(s)\mathrm{d}s+n\int _t^T({\bar{Y}}_s^n-{\bar{L}}_s)^-\mathrm{d}s -n\int _t^T({\bar{Y}}_s^n-{\bar{U}}_s)^+\mathrm{d}s\\&\quad -\int _t^T {\bar{Z}}_s^n\mathrm{d}B_s-(K_T^n-K_t^n) +(K^I_T-K^I_t). \end{aligned}$$

For any \(r>0\), applying Itô’s formula to \(H_t^{\alpha /2}e^{rt}\), we get

$$\begin{aligned} \begin{aligned}&H_t^{\alpha /2}e^{rt}+\int _t^T re^{rs}H_s^{\alpha /2}\mathrm{d}s+\int _t^T \frac{\alpha }{2} e^{rs}H_s^{\alpha /2-1}({\bar{Z}}_s^n)^2\mathrm{d}\langle B\rangle _s\\&\quad =|\xi -{I}_T|^\alpha e^{rT}+\alpha (1-\frac{\alpha }{2})\int _t^Te^{rs}H_s^{\alpha /2-2}({\bar{Y}}_s^n)^2({\bar{Z}}_s^n)^2\mathrm{d}\langle B\rangle _s\\&\qquad -\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}n{\bar{Y}}_s^n({\bar{Y}}_s^n-{\bar{U}}_s)^+\mathrm{d}s+\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}n{\bar{Y}}_s^n({\bar{Y}}_s^n-{\bar{L}}_s)^-\mathrm{d}s\\&\qquad +\int _t^T{\alpha } e^{rs}H_s^{\alpha /2-1}{\bar{Y}}_s^n{\bar{f}}_s\mathrm{d}s-\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}({\bar{Y}}_s^n{\bar{Z}}_s^n\mathrm{d}B_s+{\bar{Y}}_s^n\mathrm{d}K_s^n-{\bar{Y}}^n_s\mathrm{d}K^I_s). \end{aligned} \end{aligned}$$

Noting that \(-{\bar{Y}}_s^n({\bar{Y}}_s^n-{\bar{U}}_s)^+\le 0\) and \({\bar{Y}}_s^n({\bar{Y}}_s^n-{\bar{L}}_s)^-\le 0\), we get

$$\begin{aligned} \begin{aligned}&H_t^{\alpha /2}e^{rt}+\int _t^T re^{rs}H_s^{\alpha /2}\mathrm{d}s+\int _t^T \frac{\alpha }{2} e^{rs}H_s^{\alpha /2-1}({\bar{Z}}_s^n)^2\mathrm{d}\langle B\rangle _s\\&\quad \le |\xi -{I}_T|^\alpha e^{rT}+\alpha (1-\frac{\alpha }{2})\int _t^Te^{rs}H_s^{\alpha /2-2}({\bar{Y}}_s^n)^2({\bar{Z}}_s^n)^2\mathrm{d}\langle B\rangle _s\\&\qquad +\int _t^T{\alpha } e^{rs}H_s^{\alpha /2-1/2}|{\bar{f}}_s|\mathrm{d}s-(M_T-M_t), \end{aligned} \end{aligned}$$

where

$$\begin{aligned} M_t=\int _0^t\alpha e^{rs}H_s^{\alpha /2-1}({\bar{Y}}_s^n{\bar{Z}}_s\mathrm{d}B_s+({\bar{Y}}_s^n)^+\mathrm{d}K_s^n+({\bar{Y}}^n_s)^-\mathrm{d}K^I_s) \end{aligned}$$

is a G-martingale. From Assumption (A2) of f, we have

$$\begin{aligned}&\int _t^T{\alpha } e^{rs}H_s^{\alpha /2-1/2}|{\bar{f}}_s|\mathrm{d}s\\&\quad \le \int _t^T{\alpha } e^{rs}H_s^{\alpha /2-1/2}\{|f(s,0,0)|+|b^I(s)|+\kappa [|{\bar{Y}}_s^n|+|{\bar{Z}}_s^n|+|{I}_s|+|\sigma ^I(s)|]\}\mathrm{d}s\\&\quad \le \left( \alpha \kappa +\frac{\alpha \kappa ^2}{\underline{\sigma }^2(\alpha -1)}\right) \int _t^T e^{rs}H_s^{\alpha /2}\mathrm{d}s+\frac{\alpha (\alpha -1)}{4}\int _t^Te^{rs}H_s^{\alpha /2-1}({\bar{Z}}_s^n)^2\mathrm{d}\langle B\rangle _s\\&\qquad +\int _t^T \alpha e^{rs}H_s^{\alpha /2-1/2}[|f(s,0,0)|+|b^I(s)|+\kappa (|{I}_s|+|\sigma ^I(s)|)] \mathrm{d}s. \end{aligned}$$

By Young’s inequality, we obtain

$$\begin{aligned}&\int _t^T \alpha e^{rs}H_s^{\alpha /2-1/2}[|f(s,0,0)|+|b^I(s)|+\kappa (|{I}_s|+|\sigma ^I(s)|)] \mathrm{d}s\\&\quad \le \int _t^T e^{rs}[|f(s,0,0)|^\alpha +|b^I(s)|^\alpha +\kappa ^\alpha |{I}_s|^\alpha +\kappa ^\alpha |\sigma ^I(s)|^\alpha ]\mathrm{d}s\\&\qquad +4(\alpha -1)\int _t^T e^{rs}H_s^{\alpha /2}\mathrm{d}s . \end{aligned}$$

Combining the above inequalities, we get

$$\begin{aligned}&H_t^{\frac{\alpha }{2}}e^{rt}+\int _t^T (r-{\tilde{\alpha }})e^{rs}H_s^{\frac{\alpha }{2}}\mathrm{d}s+\int _t^T \frac{\alpha (\alpha -1)}{4} e^{rs}H_s^{\frac{\alpha -2}{2}}({\bar{Z}}_s^n)^2\mathrm{d}\langle B\rangle _s+(M_T-M_t)\\&\quad \le |\xi -{I}_T|^\alpha e^{rT}+\int _t^T e^{rs}[|f(s,0,0)|^\alpha +|b^I(s)|^\alpha +\kappa ^\alpha |{I}_s|^\alpha +\kappa ^\alpha |\sigma ^I(s)|^\alpha ]\mathrm{d}s, \end{aligned}$$

where \({\tilde{\alpha }}=4(\alpha -1)+\alpha \kappa +\frac{\alpha \kappa ^2}{\underline{\sigma }^2(\alpha -1)}\). Setting \(r={\tilde{\alpha }}+1\) and taking conditional expectations on both sides, we derive that

$$\begin{aligned} H_t^{\alpha /2}e^{rt}\le \hat{\mathbb {E}}_t[|\xi -{I}_T|^\alpha e^{rT}+\int _t^T e^{rs}[|f(s,0,0)|^\alpha +|b^I(s)|^\alpha +\kappa ^\alpha |I_s|^\alpha +\kappa ^\alpha |\sigma ^I(s)|^\alpha ]\mathrm{d}s]. \end{aligned}$$

Then, there exists a constant C independent of n such that

$$\begin{aligned} |{\bar{Y}}_t^n|^\alpha \le C\hat{\mathbb {E}}_t[|\xi -{I}_T|^\alpha +\int _t^T [|f(s,0,0)|^\alpha +|b^I(s)|^\alpha +|\sigma ^I(s)|^\alpha +|{I}_s|^\alpha ]\mathrm{d}s]. \end{aligned}$$

Noting that \(|Y_t^n|^\alpha \le C(|{\bar{Y}}_t^n|^\alpha +|I_t|^\alpha )\) and applying Theorem 2.2, we finally get the desired result. \(\square \)

4.2 Convergence of \((Y^n-U)^+\) and \((Y^n-L)^-\)

Under Assumptions (A1’), (A2), (A3’), (A4), we show that \((Y^n-U)^+\) and \((Y^n-L)^-\) converge to 0 under the norm \(\Vert \cdot \Vert _{S_G^\alpha }\). First, we prove a simple lemma.

Lemma 4.2

For \(S\in S^{\beta }_G(0,T)\) with \(\beta >1\), define \( \int _s^te^{-nu}\mathrm{d}S_u:=e^{-nt}S_t-e^{-ns}S_s+\int _s^tn S_ue^{-nu}\mathrm{d}u,\) and set \(i_n(t)=\hat{\mathbb {E}}_t[|\int _t^Te^{-n(s-t)}\mathrm{d}S_s|^{\alpha }]\) for some \(1\le \alpha <\beta \). Then, as \(n\rightarrow \infty \), we have

$$\begin{aligned} \hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|i_n(t)|\right] \rightarrow 0. \end{aligned}$$

Proof

Notice that the mappings \(D^n: S^\beta _G(0,T)\rightarrow S^1_G(0,T)\) defined by \(D^n(S)=i_n\) are continuous, uniformly in n, i.e.,

$$\begin{aligned}&\Vert D^n(S)-D^n(S')\Vert _{S^1_G}\\&\quad \le 3^{\alpha }\alpha \hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}\hat{\mathbb {E}}_t[\sup _{s\in [0,T]}|S_s-S'_s|\sup _{s\in [0,T]}|S^{\theta }_s|^{\alpha -1}]\right] \\&\quad \le 3^\alpha \alpha \bigg (\hat{\mathbb {E}}\bigg [\sup _{t\in [0,T]}\hat{\mathbb {E}}_t[\sup _{s\in [0,T]}|S_s-S'_s|^\alpha ]\bigg ]\bigg )^{\frac{1}{\alpha }}\bigg (\hat{\mathbb {E}}\bigg [\sup _{t\in [0,T]}\hat{\mathbb {E}}_t[\sup _{s\in [0,T]}|S^\theta _s|^\alpha ]\bigg ]\bigg )^{\frac{\alpha -1}{\alpha }}. \end{aligned}$$

where \(S^\theta =\theta S+(1-\theta )S'\) for some \(\theta \in [0,1]\). By Theorem 2.2, it suffices to prove this lemma for a dense subset of \(S^\beta _G(0,T)\). For a G-Itô process \(S_t=S_0+\int _0^t b^S(s)\mathrm{d}s+\int _0^t \sigma ^S(s)\mathrm{d}B_s+\int _0^tc^S(s)\mathrm{d}\langle B\rangle _s\) with \(b^S, c^S, \sigma ^S\in M^0_G(0,T)\), we have

$$\begin{aligned} |i_n(t)|\le&C_{\alpha }\bigg (\hat{\mathbb {E}}_t\bigg [\big |\int _t^Te^{-n(s-t)}(|b^S(s)|+|c^S(s)|)\mathrm{d}s\big |^{\alpha }\bigg ]\\&\quad +\hat{\mathbb {E}}_t\bigg [\big |\int _t^Te^{-n(s-t)}\sigma ^S(s)\mathrm{d}B_s\big |^\alpha \bigg ]\bigg )\\ \le&\frac{C_{\alpha }}{n^{\frac{\alpha }{2}}}\bigg (\frac{1}{n^{\frac{\alpha }{2}}}\hat{\mathbb {E}}_t\bigg [\sup _{s\in [0,T]}(|b^S(s)|+|c^S(s)|)^{\alpha }\bigg ]+ \hat{\mathbb {E}}_t\bigg [\sup _{s\in [0,T]}|\sigma ^S(s)|^{\alpha }\bigg ]\bigg ). \end{aligned}$$
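In the last step, the stochastic-integral term is controlled by a Burkholder–Davis–Gundy type inequality under \(\hat{\mathbb {E}}\) (with the bound on \(\mathrm{d}\langle B\rangle _s/\mathrm{d}s\) absorbed into \(C_{\alpha }\)), together with \(\int _t^T e^{-2n(s-t)}\mathrm{d}s\le \frac{1}{2n}\):

$$\begin{aligned} \hat{\mathbb {E}}_t\bigg [\Big |\int _t^Te^{-n(s-t)}\sigma ^S(s)\mathrm{d}B_s\Big |^{\alpha }\bigg ]\le C_{\alpha }\hat{\mathbb {E}}_t\bigg [\Big (\int _t^Te^{-2n(s-t)}|\sigma ^S(s)|^{2}\mathrm{d}s\Big )^{\frac{\alpha }{2}}\bigg ]\le \frac{C_{\alpha }}{n^{\frac{\alpha }{2}}}\hat{\mathbb {E}}_t\bigg [\sup _{s\in [0,T]}|\sigma ^S(s)|^{\alpha }\bigg ]. \end{aligned}$$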

So, we get \(\hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|i_n(t)|\right] \rightarrow 0\) as n goes to \(\infty \). \(\square \)

Lemma 4.3

Let \({\tilde{Y}}^n, {\tilde{M}}^n\in S^\alpha _G(0,T)\) and \({\tilde{f}}^n\in M^\alpha _G(0,T)\) for some \(1<\alpha \le \beta \) satisfy

$$\begin{aligned} {\tilde{Y}}^n_t=\xi +\int _t^T{\tilde{f}}^n(s)\mathrm{d}s+n\int _t^T({\tilde{Y}}_s^n-L_s)^-\mathrm{d}s -n\int _t^T({\tilde{Y}}_s^n-U_s)^+\mathrm{d}s-({\tilde{M}}^n_T-{\tilde{M}}^n_t). \end{aligned}$$

Assuming that \({\tilde{M}}^n\) is a martingale under a time-consistent sublinear expectation \(\tilde{\mathbb {E}}\), we have

$$\begin{aligned}&({\tilde{Y}}_t^n-U_t)^+\le \bigg |\mathbb {{\tilde{E}}}_t[\int _t^Te^{-n(s-t)}{\tilde{f}}^n(s)\mathrm{d}s+\int _t^Te^{-n(s-t)}dU_s]\bigg |,\end{aligned}$$
(7)
$$\begin{aligned}&({\tilde{Y}}_t^n-L_t)^-\le \bigg |\mathbb {{\tilde{E}}}_t[\int _t^Te^{-n(s-t)}{\tilde{f}}^n(s)\mathrm{d}s+\int _t^Te^{-n(s-t)}\mathrm{d}L_s]\bigg |. \end{aligned}$$
(8)

Proof

For \(S\in S^{\beta }_G(0,T)\), setting \({\bar{Y}}^n_t={\tilde{Y}}^n_t-S_t\), \({\bar{U}}_t=U_t-S_t\) and \({\bar{L}}_t=L_t-S_t\), we have

$$\begin{aligned}&e^{-nt}{\bar{Y}}^n_t+\int _t^T e^{-ns}d{\tilde{M}}^n_s\\&\quad =e^{-nT}(\xi -S_T)+\int _t^T ne^{-ns}\bigg ({\bar{Y}}_s^n-({\bar{Y}}_s^n-{\bar{U}}_s)^++({\bar{Y}}^n_s-{\bar{L}}_s)^-\bigg )\mathrm{d}s\\&\qquad +\int _t^Te^{-ns}{\tilde{f}}^n(s)\mathrm{d}s+\int _t^Te^{-ns}\mathrm{d}S_s. \end{aligned}$$
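This identity follows by applying integration by parts to \(e^{-ns}{\bar{Y}}^n_s\) and integrating over \([t,T]\):

$$\begin{aligned} d\big (e^{-ns}{\bar{Y}}^n_s\big )=-ne^{-ns}{\bar{Y}}^n_s\mathrm{d}s+e^{-ns}\mathrm{d}{\tilde{Y}}^n_s-e^{-ns}\mathrm{d}S_s. \end{aligned}$$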

(1) If \(S_t=U_t\), we have \(\xi - S_T=\xi -U_T\le 0\), and

$$\begin{aligned} {\bar{Y}}_s^n-({\bar{Y}}_s^n-{\bar{U}}_s)^++({\bar{Y}}^n_s-{\bar{L}}_s)^-=-({\tilde{Y}}^n_s-U_s)^-+({\tilde{Y}}^n_s-L_s)^-\le 0. \end{aligned}$$
(9)

So, we have

$$\begin{aligned} ({\tilde{Y}}_t^n-U_t)^+\le \bigg |\mathbb {{\tilde{E}}}_t[\int _t^Te^{-n(s-t)}{\tilde{f}}^n(s)\mathrm{d}s+\int _t^Te^{-n(s-t)}dU_s]\bigg |. \end{aligned}$$

(2) If \(S_t=L_t\), we have \(\xi - S_T=\xi -L_T\ge 0\), and

$$\begin{aligned} {\bar{Y}}_s^n-({\bar{Y}}_s^n-{\bar{U}}_s)^++({\bar{Y}}^n_s-{\bar{L}}_s)^-=({\tilde{Y}}^n_s-L_s)^+-({\tilde{Y}}^n_s-U_s)^-\ge 0. \end{aligned}$$
(10)

So, we have

$$\begin{aligned} ({\tilde{Y}}_t^n-L_t)^-\le \bigg |\mathbb {{\tilde{E}}}_t[\int _t^Te^{-n(s-t)}{\tilde{f}}^n(s)\mathrm{d}s+\int _t^Te^{-n(s-t)}\mathrm{d}L_s]\bigg |. \end{aligned}$$

The proof is complete. \(\square \)

Lemma 4.4

Assume that (A1’), (A2), (A3’) and (A4) hold. As n goes to \(\infty \), for any \(2\le \alpha <\beta \), we have

$$\begin{aligned} \hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|(Y_t^n-U_t)^+|^\alpha \right] \rightarrow 0, \ \hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|(Y_t^n-L_t)^-|^\alpha \right] \rightarrow 0. \end{aligned}$$
(11)

Proof

For each given \(\varepsilon >0\), we can choose a Lipschitz function \(l(\cdot )\) such that \(I_{[-\varepsilon ,\varepsilon ]}(x)\le l(x)\le I_{[-2\varepsilon ,2\varepsilon ]}(x)\). Thus we have

$$\begin{aligned}&f(s,Y_s^n,Z_s^n)-f(s,Y_s^n,0)\\&\quad =(f(s,Y_s^n, Z_s^n)-f(s, Y_s^n,0))l(Z_s^n)+a^{\varepsilon ,n}_sZ_s^n=:m_s^{\varepsilon ,n}+a^{\varepsilon ,n}_sZ_s^n, \end{aligned}$$

where \(a^{\varepsilon ,n}_s=(1-l(Z_s^n))(f(s,Y_s^n, Z_s^n)-f(s, Y_s^n,0))(Z_s^n)^{-1}\in M_G^2(0,T)\) with \(|a^{\varepsilon ,n}_s|\le \kappa \). It is easy to check that \(|m_s^{\varepsilon ,n}|\le 2\kappa \varepsilon \). Then we can get

$$\begin{aligned} f(s, Y_s^n, Z_s^n)=f(s, Y_s^n,0)+a^{\varepsilon ,n}_s Z_s^n+m_s^{\varepsilon ,n}. \end{aligned}$$
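The bound \(|m_s^{\varepsilon ,n}|\le 2\kappa \varepsilon \) holds because \(l\) is bounded by 1, vanishes outside \([-2\varepsilon ,2\varepsilon ]\), and f is \(\kappa \)-Lipschitz in z:

$$\begin{aligned} |m_s^{\varepsilon ,n}|=|f(s,Y_s^n,Z_s^n)-f(s,Y_s^n,0)|\, l(Z_s^n)\le \kappa |Z_s^n| I_{\{|Z_s^n|\le 2\varepsilon \}}\le 2\kappa \varepsilon . \end{aligned}$$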

Now we consider the following G-BSDE:

$$\begin{aligned} Y^{\varepsilon ,n}_t=\xi +\int _t^T a^{\varepsilon ,n}_sZ^{\varepsilon ,n}_s\mathrm{d}s-\int _t^T Z^{\varepsilon ,n}_s\mathrm{d}B_s-(K^{\varepsilon ,n}_T-K^{\varepsilon ,n}_t). \end{aligned}$$

For each \(\xi \in L_G^p(\Omega _T)\) with \(p>1\), define

$$\begin{aligned} \tilde{\mathbb {E}}^{\varepsilon ,n}_t[\xi ]:=Y^{\varepsilon ,n}_t, \end{aligned}$$

which is a time-consistent sublinear expectation. Set \({\tilde{B}}^{\varepsilon ,n}_t=B_t-\int _0^t a^{\varepsilon ,n}_s \mathrm{d}s\). By Theorem 5.2 in [13], \(\{{\tilde{B}}^{\varepsilon ,n}_t\}\) is a G-Brownian motion under \(\tilde{\mathbb {E}}^{\varepsilon ,n}[\cdot ]\).

We rewrite G-BSDE (5) as the following

$$\begin{aligned} \begin{aligned} Y_t^n&=\xi +\int _t^T f^{\varepsilon ,n}(s)\mathrm{d}s-\int _t^Tn(Y_s^n-U_s)^+\mathrm{d}s+\int _t^Tn(Y_s^n-L_s)^-\mathrm{d}s\\&\quad -\int _t^T Z_s^nd{\tilde{B}}^{\varepsilon ,n}_s-(K_T^n-K_t^n), \end{aligned} \end{aligned}$$
(12)

where \(f^{\varepsilon ,n}(s)=f(s, Y_s^n,0)+m^{\varepsilon ,n}_s\). Since \(K^n\) is a martingale under \(\tilde{\mathbb {E}}^{\varepsilon ,n}[\cdot ]\) by Theorem 5.1 in [13], it follows from (7) in Lemma 4.3 that

$$\begin{aligned} (Y_t^n-U_t)^+&\le \bigg |\tilde{\mathbb {E}}^{\varepsilon , n}_t[\int _t^Te^{-n(s-t)}f^{\varepsilon , n}(s)\mathrm{d}s+\int _t^Te^{-n(s-t)}dU_s]\bigg |. \end{aligned}$$

By Theorem 2.3, for \(2\le \alpha <\beta \), it follows that

$$\begin{aligned} \begin{aligned}&\hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|(Y^n_t-U_t)^+|^\alpha \right] \\&\quad \le \hat{\mathbb {E}}\bigg [\sup _{t\in [0,T]}\bigg |\tilde{\mathbb {E}}^{\varepsilon , n}_t[\int _t^Te^{-n(s-t)}f^{\varepsilon , n}(s)\mathrm{d}s+\int _t^Te^{-n(s-t)}dU_s]\bigg |^{\alpha }\bigg ]\\&\quad \le C_{\alpha }\hat{\mathbb {E}}\bigg [\sup _{t\in [0,T]}\hat{\mathbb {E}}_t[\bigg |\int _t^Te^{-n(s-t)}f^{\varepsilon , n}(s)\mathrm{d}s+\int _t^Te^{-n(s-t)}dU_s\bigg |^{\alpha }]\bigg ], \end{aligned} \end{aligned}$$
(13)

which converges to 0 as n goes to \(\infty \) by Lemma 4.2. Similarly, we can prove

$$\begin{aligned} \lim _{n\rightarrow \infty }\hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|(Y_t^n-L_t)^-|^\alpha \right] =0. \end{aligned}$$

The proof is complete. \(\square \)

4.3 Uniform Estimates of \(Z^n\), \(K^n\), \(A^{n,-}\) and \(A^{n,+}\)

In this subsection, we give the uniform estimates for \(Z^n\), \(K^n\), \(A^{n,+}\) and \(A^{n,-}\) under Assumptions (A1)–(A4). To this end, we prove that \((Y^n-U)^+\) converges to 0 at the explicit rate \(\frac{1}{n}\), which requires the upper obstacle to be a generalized G-Itô process.

Lemma 4.5

For \(2\le \alpha <\beta \), there exists a constant C independent of n, such that

$$\begin{aligned} \hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|(Y_t^n-U_t)^+|^\alpha \right] \le \frac{C}{n^\alpha }. \end{aligned}$$

Proof

Now \(U_t=U_0+\int _0^tb(s)\mathrm{d}s+\int _0^t\sigma (s)\mathrm{d}B_s+K_t\) with \(b, \ \sigma \in S_G^\beta (0,T)\), and \(K\in S_G^\beta (0,T)\) a non-increasing G-martingale. Below, we employ the notations in the proof of Lemma 4.4.

We rewrite \(U_t\) as

$$\begin{aligned} U_t=U_0+\int _0^tb^{\varepsilon ,n}(s)\mathrm{d}s+\int _0^t\sigma (s)d{\tilde{B}}^{\varepsilon , n}_s+K_t, \end{aligned}$$
(14)

where \(b^{\varepsilon ,n}(s)=b(s)+a_s^{\varepsilon , n}\sigma (s)\). By (13), we have, for \(2\le \alpha <\beta \),

$$\begin{aligned}&\hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|(Y^n_t-U_t)^+|^\alpha \right] \\&\quad \le \hat{\mathbb {E}}\bigg [\sup _{t\in [0,T]}\bigg |\tilde{\mathbb {E}}^{\varepsilon , n}_t[\int _t^Te^{-n(s-t)}f^{\varepsilon , n}(s)\mathrm{d}s+\int _t^Te^{-n(s-t)}dU_s]\bigg |^{\alpha }\bigg ]\\&\quad \le \hat{\mathbb {E}}\bigg [\sup _{t\in [0,T]}\bigg |\tilde{\mathbb {E}}^{\varepsilon , n}_t[\int _t^Te^{-n(s-t)}(|f^{\varepsilon , n}(s)|+|b^{\varepsilon ,n}(s)|)\mathrm{d}s]\bigg |^{\alpha }\bigg ]. \end{aligned}$$

By Theorem 2.3, it follows that

$$\begin{aligned} \hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|(Y^n_t-U_t)^+|^\alpha \right] \le&\frac{1}{n^\alpha }\hat{\mathbb {E}}\bigg [\sup _{t\in [0,T]}\bigg |\tilde{\mathbb {E}}^{\varepsilon , n}_t[\sup _{s\in [0,T]}(|f^{\varepsilon , n}(s)|+|b^{\varepsilon , n}(s)|)]\bigg |^{\alpha }\bigg ]\\ \le&C_{\alpha }\frac{1}{n^\alpha }\hat{\mathbb {E}}\bigg [\sup _{t\in [0,T]}\hat{\mathbb {E}}_t[\sup _{s\in [0,T]}(|f^{\varepsilon , n}(s)|+|b^{\varepsilon , n}(s)|)^{\alpha }]\bigg ]. \end{aligned}$$

Since \(\hat{\mathbb {E}}\bigg [\sup _{t\in [0,T]}\hat{\mathbb {E}}_t[\sup _{s\in [0,T]}(|f^{\varepsilon , n}(s)|+|b^{\varepsilon , n}(s)|)^{\alpha }]\bigg ]\) is bounded uniformly in n, we get the desired result. \(\square \)

Lemma 4.6

For \(2\le \alpha <\beta \), there exists a constant C independent of n, such that

$$\begin{aligned} \hat{\mathbb {E}}[|K_T^n|^\alpha ]\le C, \ \hat{\mathbb {E}}[|A_T^{n,-}|^\alpha ]\le C,\ \hat{\mathbb {E}}[|A_T^{n,+}|^\alpha ]\le C, \text { and } \mathbb {{\hat{E}}}\left[ \left( \int _{0}^{T}|Z^n_{s}|^{2}\mathrm{d}s\right) ^{\frac{\alpha }{2}}\right] \le C. \end{aligned}$$

Proof

By Lemma 4.5, there exists a constant C independent of n such that

$$\begin{aligned} \hat{\mathbb {E}}[|A_T^{n,-}|^\alpha ]=n^\alpha \hat{\mathbb {E}}\left[ \left( \int _0^T (Y_s^n-U_s)^+\mathrm{d}s\right) ^\alpha \right] \le C. \end{aligned}$$
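Indeed, bounding the integrand by its supremum and applying Lemma 4.5,

$$\begin{aligned} n^\alpha \hat{\mathbb {E}}\left[ \left( \int _0^T (Y_s^n-U_s)^+\mathrm{d}s\right) ^\alpha \right] \le n^\alpha T^\alpha \hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|(Y_t^n-U_t)^+|^\alpha \right] \le CT^\alpha . \end{aligned}$$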

Then, it follows from Theorem 2.4 that \(\mathbb {{\hat{E}}}[(\int _{0}^{T}|Z^n_{s}|^{2}\mathrm{d}s)^{\frac{\alpha }{2}}]\) are uniformly bounded. Noting that

$$\begin{aligned} K_T^n-A_T^{n,+}=\xi -Y_0^n+\int _0^T f(s,Y_s^n,Z_s^n)\mathrm{d}s-\int _0^T Z_s^n\mathrm{d}B_s-A_T^{n,-}, \end{aligned}$$

we conclude that \(\hat{\mathbb {E}}[|K_T^n-A_T^{n,+}|^\alpha ]\) is bounded uniformly in n. Since \(K_T^n\le 0\le A_T^{n,+}\), we have \(|K_T^n|+A_T^{n,+}=|K_T^n-A_T^{n,+}|\), and the proof is complete. \(\square \)

4.4 Proof of Theorem 3.1

In this subsection, we prove that \(Y^n\), \(Z^n\), and \(A^n=A^{n,+}-K^n-A^{n,-}\), \(n\ge 1\), are Cauchy sequences with respect to the norms \(\Vert \cdot \Vert _{S^\alpha _G}\), \(\Vert \cdot \Vert _{H^\alpha _G}\) and \(\Vert \cdot \Vert _{S^\alpha _G}\), respectively, and that their limits form a solution to the doubly reflected G-BSDE.

Lemma 4.7

For \(2\le \alpha <\beta \), we have

$$\begin{aligned}&\lim _{n,m\rightarrow \infty }\hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|Y_t^n-Y_t^m|^\alpha \right] =0,\\&\quad \lim _{n,m\rightarrow \infty }\hat{\mathbb {E}}\left[ \left( \int _0^T|Z_s^n-Z_s^m|^2\mathrm{d}s\right) ^{\frac{\alpha }{2}}\right] =0, \\&\quad \lim _{n,m\rightarrow \infty }\hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|A_t^n-A_t^m|^\alpha \right] =0. \end{aligned}$$

Proof

For any \(r>0\), and \(n, m\in \mathbb {N}\), set

$$\begin{aligned} \begin{array}{lll}{{\hat{Y}}_t=Y_t^n-Y_t^m,} &{} {{\hat{Z}}_t=Z_t^n-Z_t^m,} &{} {{\hat{K}}_t=K_t^n-K_t^m,} \\ {{\hat{A}}_t^+=A_t^{n,+}-A_t^{m,+},} &{} {{\hat{A}}_t^-=A_t^{n,-}-A_t^{m,-},} &{} {{\hat{f}}_t=f(t,Y_t^n,Z_t^n)-f(t,Y_t^m,Z_t^m).} \end{array} \end{aligned}$$

Denote \(H_t=|{\hat{Y}}_t|^2\). Applying Itô’s formula to \(H_t^{\alpha /2}e^{rt}\), we get

$$\begin{aligned} \begin{aligned}&H_t^{\alpha /2}e^{rt}+\int _t^T re^{rs}H_s^{\alpha /2}\mathrm{d}s+\int _t^T \frac{\alpha }{2} e^{rs} H_s^{\alpha /2-1}({\hat{Z}}_s)^2\mathrm{d}\langle B\rangle _s\\&\quad = \alpha (1-\frac{\alpha }{2})\int _t^Te^{rs}H_s^{\alpha /2-2}({\hat{Y}}_s)^2({\hat{Z}}_s)^2\mathrm{d}\langle B\rangle _s +\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}{\hat{Y}}_sd({\hat{A}}_s^+-{\hat{A}}_s^-)\\&\qquad +\int _t^T{\alpha } e^{rs}H_s^{\alpha /2-1}{\hat{Y}}_s{\hat{f}}_s\mathrm{d}s-\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}({\hat{Y}}_s{\hat{Z}}_s\mathrm{d}B_s+{\hat{Y}}_sd{\hat{K}}_s). \end{aligned} \end{aligned}$$

Noting that \(A_t^{n,-}=n\int _0^t (Y_s^n-U_s)^+\mathrm{d}s\), \(A_t^{n,+}=n\int _0^t (Y_s^n-L_s)^-\mathrm{d}s\), we have

$$\begin{aligned}&\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}{\hat{Y}}_sd({\hat{A}}_s^+-{\hat{A}}_s^-)\\&\quad =\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}\bigg [(Y^n_s-L_s)-(Y^m_s-L_s)\bigg ](\mathrm{d}A^{n,+}_s-\mathrm{d}A^{m,+}_s)\\&\qquad -\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}\bigg [(Y^n_s-U_s)-(Y^m_s-U_s)\bigg ](\mathrm{d}A^{n,-}_s-\mathrm{d}A^{m,-}_s)\\&\quad \le \int _t^T\alpha e^{rs}H_s^{\alpha /2-1}\bigg [(Y^n_s-L_s)^-\mathrm{d}A^{m,+}_s+(Y^m_s-L_s)^{-}\mathrm{d}A^{n,+}_s\bigg ]\\&\qquad +\int _t^T\alpha e^{rs}H_s^{\alpha /2-1}\bigg [(Y^n_s-U_s)^{+}\mathrm{d}A^{m,-}_s+(Y^m_s-U_s)^+\mathrm{d}A^{n,-}_s\bigg ]=:\int _t^T\Delta _s\mathrm{d}s. \end{aligned}$$

Therefore,

$$\begin{aligned} \begin{aligned}&H_t^{\alpha /2}e^{rt}+\int _t^T re^{rs}H_s^{\alpha /2}\mathrm{d}s+\int _t^T \frac{\alpha }{2} e^{rs} H_s^{\alpha /2-1}({\hat{Z}}_s)^2\mathrm{d}\langle B\rangle _s\\&\quad \le \alpha (1-\frac{\alpha }{2})\int _t^Te^{rs}H_s^{\alpha /2-2}({\hat{Y}}_s)^2({\hat{Z}}_s)^2\mathrm{d}\langle B\rangle _s+\int _t^T{\alpha } e^{rs}H_s^{\alpha /2-1}{\hat{Y}}_s{\hat{f}}_s\mathrm{d}s\\&\qquad +\int _t^T\Delta _s\mathrm{d}s-(M_T-M_t ), \end{aligned} \end{aligned}$$

where \(M_t=\int _0^t \alpha e^{rs}H_s^{\alpha /2-1}({\hat{Y}}_s{\hat{Z}}_s\mathrm{d}B_s+({\hat{Y}}_s)^+\mathrm{d}K_s^m+({\hat{Y}}_s)^-\mathrm{d}K_s^n)\) is a G-martingale. By the Lipschitz condition on f and Young’s inequality, we have

$$\begin{aligned} \int _t^T{\alpha } e^{rs}H_s^{\frac{\alpha -1}{2}}|{\hat{f}}_s|\mathrm{d}s \le&(\alpha \kappa +\frac{\alpha \kappa ^2}{\underline{\sigma }^2(\alpha -1)})\int _t^T e^{rs}H_s^{\alpha /2}\mathrm{d}s\\&+\frac{\alpha (\alpha -1)}{4}\int _t^Te^{rs}H_s^{\alpha /2-1}({\hat{Z}}_s)^2\mathrm{d}\langle B\rangle _s. \end{aligned}$$
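For clarity, the constants in this estimate can be traced as follows: the Lipschitz bound \(|{\hat{f}}_s|\le \kappa (|{\hat{Y}}_s|+|{\hat{Z}}_s|)\) gives \(\alpha H_s^{\frac{\alpha -1}{2}}|{\hat{f}}_s|\le \alpha \kappa H_s^{\alpha /2}+\alpha \kappa H_s^{\frac{\alpha -1}{2}}|{\hat{Z}}_s|\), and Young’s inequality \(xy\le \frac{x^2}{4c}+cy^2\) with \(c=\frac{(\alpha -1)\underline{\sigma }^2}{4}\) yields

$$\begin{aligned} \alpha \kappa H_s^{\frac{\alpha -1}{2}}|{\hat{Z}}_s|=\alpha \big (\kappa H_s^{\alpha /4}\big )\big (H_s^{\frac{\alpha -2}{4}}|{\hat{Z}}_s|\big )\le \frac{\alpha \kappa ^2}{\underline{\sigma }^2(\alpha -1)}H_s^{\alpha /2}+\frac{\alpha (\alpha -1)\underline{\sigma }^2}{4}H_s^{\alpha /2-1}|{\hat{Z}}_s|^2. \end{aligned}$$

Since \(\mathrm{d}\langle B\rangle _s\ge \underline{\sigma }^2\mathrm{d}s\), the last term is dominated by \(\frac{\alpha (\alpha -1)}{4}H_s^{\alpha /2-1}|{\hat{Z}}_s|^2\,\mathrm{d}\langle B\rangle _s/\mathrm{d}s\), which gives the second integral above.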

Letting \(r=1+\alpha \kappa +\frac{\alpha \kappa ^2}{\underline{\sigma }^2(\alpha -1)}\), we have

$$\begin{aligned} H_t^{\alpha /2}e^{rt}+(M_T-M_t)\le \int _t^T\Delta _s\mathrm{d}s. \end{aligned}$$

Taking conditional expectation on both sides of the above inequality, it follows that

$$\begin{aligned} H_t^{\alpha /2}e^{rt}\le \hat{\mathbb {E}}_t[\int _t^T \Delta _s\mathrm{d}s]. \end{aligned}$$
(15)

Consequently, we have

$$\begin{aligned} \hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|{\hat{Y}}_t|^\alpha \right] \le \hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}\hat{\mathbb {E}}_t[\int _0^T \Delta _s\mathrm{d}s]\right] . \end{aligned}$$
(16)

By symmetry and Theorem 2.2, it suffices to prove that there exists some \(\gamma >1\), such that

$$\begin{aligned} \lim _{n,m\rightarrow \infty }\hat{\mathbb {E}}\left[ \left( \int _0^T H_s^{\alpha /2-1}(Y_s^n-L_s)^-\mathrm{d}A^{m,+}_s\right) ^{\gamma }\right] =0. \end{aligned}$$
(17)

For \(1<\gamma <\beta /\alpha \),

$$\begin{aligned}&\hat{\mathbb {E}}\left[ \left( \int _0^T H_s^{\alpha /2-1}(Y_s^n-L_s)^-\mathrm{d}A^{m,+}_s\right) ^{\gamma }\right] \\&\quad \le \hat{\mathbb {E}}\left[ \sup _{s\in [0,T]}|{\hat{Y}}_s|^{(\alpha -2)\gamma }\sup _{s\in [0,T]}\big ((Y_s^n-L_s)^-\big )^{\gamma }\big (A^{m,+}_T\big )^{\gamma }\right] \\&\quad \le \bigg (\hat{\mathbb {E}}\bigg [\sup _{s\in [0,T]}|{\hat{Y}}_s|^{\alpha \gamma }\bigg ]\bigg )^{\frac{\alpha -2}{\alpha }}\bigg (\hat{\mathbb {E}}\bigg [\sup _{s\in [0,T]}\big ((Y_s^n-L_s)^-\big )^{\alpha \gamma }\bigg ]\bigg )^{\frac{1}{\alpha }}\bigg (\hat{\mathbb {E}}\big [\big (A^{m,+}_T\big )^{\alpha \gamma }\big ]\bigg )^{\frac{1}{\alpha }}, \end{aligned}$$

which converges to 0 as \(n\rightarrow \infty \), uniformly in m, by Lemmas 4.1, 4.4 and 4.6.

By an analysis similar to that in the proof of Theorem 5.1 in [15], for each \(2\le \alpha <\beta \), we get that

$$\begin{aligned}&\hat{\mathbb {E}}\left[ \left( \int _0^T |{\hat{Z}}_t|^2\mathrm{d}t\right) ^{\frac{\alpha }{2}}\right] \le C\left\{ \hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|{\hat{Y}}_t|^\alpha \right] +\left( \hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|{\hat{Y}}_t|^\alpha \right] \right) ^{1/2}\right\} ,\\&\hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|{\hat{A}}_t|^\alpha \right] \le C\left\{ \hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|{\hat{Y}}_t|^\alpha \right] +\hat{\mathbb {E}}\left[ \left( \int _0^T |{\hat{Z}}_t|^2\mathrm{d}t\right) ^{\frac{\alpha }{2}}\right] \right\} . \end{aligned}$$

The proof is complete. \(\square \)

Now, we prove our main result.

Proof of Theorem 3.1

First we prove the uniqueness. Suppose that \((Y^i,Z^i,A^i)\), \(i=1,2\), are solutions of the doubly reflected G-BSDE with data \((\xi ,f,L,U)\). Proposition 3.1 yields that \(Y^1\equiv Y^2\). Applying G-Itô’s formula to \((Y_t^1-Y_t^2)^2\), which vanishes identically, and taking expectations (see Equation (4)), we get

$$\begin{aligned} \hat{\mathbb {E}}\left[ \left( \int _0^T |Z_s^1-Z_s^2|^2\mathrm{d}\langle B\rangle _s\right) ^{\alpha /2}\right] =0. \end{aligned}$$

It follows that \(Z^1\equiv Z^2\). Then it is easy to check that \(A^1\equiv A^2\).

Now we are in a position to show the existence. By Lemma 4.7, there exist \(Y\in S_G^\alpha (0,T)\), \(Z\in H_G^\alpha (0,T)\) and a finite variation process \(A\in S_G^\alpha (0,T)\) such that, as \(n\rightarrow \infty \),

$$\begin{aligned} \hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|Y_t^n-Y_t|^\alpha \right] \rightarrow 0,\ \hat{\mathbb {E}}\left[ \left( \int _0^T |Z_t^n-Z_t|^2\mathrm{d}t\right) ^{\frac{\alpha }{2}}\right] \rightarrow 0,\ \hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|A_t^n-A_t|^\alpha \right] \rightarrow 0, \end{aligned}$$

where \(A_t^n=A_t^{n,+}-K_t^n-A_t^{n,-}\). By Lemma 4.4, we derive that \(L_t\le Y_t\le U_t\) for any \(t\in [0,T]\). It remains to show that A satisfies the approximate Skorohod condition with order \(\alpha \). We claim that \(\{A^{n,+}\}_{n\in \mathbb {N}}\), \(\{A^{n,-}\}_{n\in \mathbb {N}}\) and \(\{K^{n}\}_{n\in \mathbb {N}}\) are the approximation sequences. It is sufficient to prove that

$$\begin{aligned} \lim _{n\rightarrow \infty }\hat{\mathbb {E}}\left[ \left| \int _0^T (Y_s-L_s)\mathrm{d}A_s^{n,+} \right| ^{\alpha /2} \right] =0. \end{aligned}$$

In fact, it is easy to check that

$$\begin{aligned} \int _0^T (Y_s-L_s)\mathrm{d}A_s^{n,+}&=\int _0^T (Y_s-Y_s^n)\mathrm{d}A_s^{n,+}+\int _0^T (Y_s^n-L_s)n(Y_s^n-L_s)^-\mathrm{d}s\\&\le \sup _{s\in [0,T]}|Y_s-Y_s^n|A_T^{n,+}. \end{aligned}$$

It follows that

$$\begin{aligned} \hat{\mathbb {E}}\left[ \left| \int _0^T(Y_s-L_s)\mathrm{d}A_s^{n,+} \right| ^{\alpha /2} \right] \le \left( \hat{\mathbb {E}}\left[ \sup _{t\in [0,T]}|Y_t-Y_t^n|^\alpha \right] \right) ^{1/2}\left( \hat{\mathbb {E}}\left[ |A_T^{n,+}|^\alpha \right] \right) ^{1/2}. \end{aligned}$$

Hence, the claim holds and the proof is complete. \(\square \)
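The mechanism behind the penalization scheme can be illustrated numerically. The following is a minimal zero-noise sketch, not part of the paper: in a deterministic setting the Brownian part, Z and K all vanish, and the penalized equation reduces to a backward ODE. The driver and obstacles below are illustrative choices; the point is that the violation of the lower obstacle constraint decays as the penalty parameter n grows, mirroring Lemma 4.4.

```python
import numpy as np

def penalized_solution(n_pen, xi=0.0, T=1.0, steps=20000):
    """Backward Euler for the zero-noise penalized equation
        -dY_t = [f(t, Y_t) + n (Y_t - L_t)^- - n (Y_t - U_t)^+] dt,  Y_T = xi.
    Driver and obstacles are illustrative (not from the paper)."""
    f = lambda t, y: -0.5 * y                   # Lipschitz driver, kappa = 1/2
    L = lambda t: np.sin(2 * np.pi * t)         # lower obstacle
    U = lambda t: np.sin(2 * np.pi * t) + 2.0   # upper obstacle, L < U
    dt = T / steps
    t, y = T, xi
    ys = [y]
    for _ in range(steps):
        t -= dt
        # penalty n (y - L)^- pushes Y upward, n (y - U)^+ pushes it downward
        y = y + dt * (f(t, y) + n_pen * max(L(t) - y, 0.0)
                              - n_pen * max(y - U(t), 0.0))
        ys.append(y)
    ts = np.linspace(0.0, T, steps + 1)
    return ts, np.array(ys[::-1])

# the maximal violation of the lower obstacle decays as the penalty grows
ts, y_small = penalized_solution(10)
_, y_large = penalized_solution(1000)
L_vals = np.sin(2 * np.pi * ts)
viol_small = float(np.max(np.maximum(L_vals - y_small, 0.0)))
viol_large = float(np.max(np.maximum(L_vals - y_large, 0.0)))
```

Note that an explicit scheme like this requires \(n\,\Delta t\) to stay small for stability, which is why the step count is large relative to the penalty parameter.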

Remark 4.1

The analysis of the penalization method above also applies to the single obstacle case, which extends the results in [15] and [14] to a more general setting. More precisely, suppose that \(L\in S_G^\beta (0,T)\) is bounded from above by some generalized G-Itô process I satisfying (A3’). Then the reflected G-BSDE with a lower obstacle and parameters \((\xi ,f,g,L)\) admits a unique solution, where (f, g) satisfies (A1’), (A2) and \(\xi \in L^\beta _G(\Omega _T)\) satisfies \(\xi \ge L_T\). Similarly, the reflected G-BSDE with an upper obstacle and parameters \((\xi ,f,g,U)\) admits a unique solution, where (f, g, U) satisfies (A1)–(A3) and \(\xi \in L^\beta _G(\Omega _T)\) satisfies \(\xi \le U_T\).

By the construction via penalization, we obtain the following comparison theorem for doubly reflected G-BSDEs.

Theorem 4.1

Let \((\xi ^i,f^i,L^i,U^i)\) be two sets of data satisfying (A1)–(A4), \(i=1,2\). We furthermore assume the following:

  1. (i)

    \(\xi ^1\le \xi ^2\), q.s.;

  2. (ii)

    \(f^1(t,y,z)\le f^2(t,y,z)\), \(\forall (y,z)\in \mathbb {R}^2\);

  3. (iii)

    \(L_t^1\le L^2_t\), \(U_t^1\le U_t^2\), \(0\le t\le T\), q.s.

Let \((Y^i,Z^i,A^i)\) be the solutions of the doubly reflected G-BSDE with data \((\xi ^i,f^i,L^i,U^i)\), \(i=1,2\), respectively. Then

$$\begin{aligned} Y_t^1\le Y^2_t, \quad 0\le t\le T \quad q.s. \end{aligned}$$

Proof

For \(i=1,2\), consider the following G-BSDEs parameterized by \(n=1,2,\ldots \),

$$\begin{aligned} Y_t^{i,n}&=\xi ^{i}+\int _t^T f^i(s,Y_s^{i,n},Z_s^{i,n})\mathrm{d}s-\int _t^T Z_s^{i,n}\mathrm{d}B_s-(K_T^{i,n}-K_t^{i,n})\\&\quad -n\int _t^T (Y^{i,n}_s-U_s^i)^+\mathrm{d}s+\int _t^T n(Y_s^{i,n}-L_s^i)^-\mathrm{d}s. \end{aligned}$$

By Theorem 2.5, for any \(t\in [0,T]\) and \(n=1,2,\ldots \), we have \(Y^{1,n}_t\le Y^{2,n}_t\). Letting n go to infinity, we get the desired result. \(\square \)
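The structure of this proof (compare at the penalized level, then pass to the limit) can be sketched in the same zero-noise toy setting as before. All data below are hypothetical, chosen only to satisfy the orderings (i)–(iii); for small \(n\,\Delta t\) each backward Euler step is a monotone map, so the ordering of the penalized solutions propagates from the terminal time.

```python
import numpy as np

def penalized_path(n_pen, xi, f, L, U, T=1.0, steps=20000):
    """Backward Euler for the zero-noise penalized equation
        -dY_t = [f(t, Y_t) + n (Y_t - L_t)^- - n (Y_t - U_t)^+] dt,  Y_T = xi."""
    dt = T / steps
    t, y = T, xi
    ys = [y]
    for _ in range(steps):
        t -= dt
        y = y + dt * (f(t, y) + n_pen * max(L(t) - y, 0.0)
                              - n_pen * max(y - U(t), 0.0))
        ys.append(y)
    return np.array(ys[::-1])

# ordered data: xi^1 <= xi^2, f^1 <= f^2, L^1 <= L^2, U^1 <= U^2 (illustrative)
y1 = penalized_path(200, 0.0, lambda t, y: -0.5 * y,
                    lambda t: np.sin(2 * np.pi * t) - 0.2, lambda t: 2.0)
y2 = penalized_path(200, 0.3, lambda t, y: -0.5 * y + 0.1,
                    lambda t: np.sin(2 * np.pi * t), lambda t: 2.5)
# comparison at the penalized level: y1 <= y2 pointwise (up to rounding)
ordered = bool(np.all(y1 <= y2 + 1e-9))
```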

Remark 4.2

A similar comparison theorem holds when the function g corresponding to the \(\mathrm{d}\langle B\rangle \) term is not 0.