1 Introduction

The Dynkin game was first introduced and studied by Dynkin in [6]. Later, Neveu [18], Elbakidze [7], Kifer [15] and Ohtsubo [19, 20] considered the Dynkin game in the discrete-time case, with or without a finite constraint. The continuous-time version has also been studied extensively (see, e.g., Morimoto [17], Stettner [26], Krylov [16] and the references therein).

A general formulation of the Dynkin game is stated as follows. The lower and upper value functions

$$\begin{aligned} \underline{V}_t:=ess\sup _{\tau \in {\mathcal {T}_t}}ess\inf _{\sigma \in {\mathcal {T}_t}}E[R_t(\tau ,\sigma ) |\mathcal {F}_t] \end{aligned}$$
(1)

and

$$\begin{aligned} \overline{V}_t:=ess\inf _{\sigma \in {\mathcal {T}_t}}ess\sup _{\tau \in {\mathcal {T}_t}}E[R_t(\tau ,\sigma ) |\mathcal {F}_t] \end{aligned}$$
(2)

are defined, respectively, where \(R_t(\tau ,\sigma )\) is a functional of the two stopping times \(\tau \) and \(\sigma \). One then looks for sufficient conditions under which \(\overline{V}_t =\underline{V}_t\) holds. Obviously, \(\overline{V}_t \ge \underline{V}_t\). To obtain the reverse inequality, the standard method is to find a saddle point \((\tau _t^*,\sigma _t^*)\), i.e., a pair of stopping times in \(\mathcal {T}_t\) such that

$$\begin{aligned} E[R_t(\tau ,\sigma _t^*)|\mathcal {F}_t]\le E[R_t(\tau _t^*,\sigma _t^*)|\mathcal {F}_t]\le E[R_t(\tau _t^*,\sigma )|\mathcal {F}_t] \end{aligned}$$
(3)

holds for all \(\tau ,\sigma \in \mathcal {T}_t\). If (3) is true, we write \(V(t):=\overline{V}_t =\underline{V}_t\), and V(t) is called the value function of the Dynkin game.
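Indeed, if (3) holds for all \(\tau ,\sigma \in \mathcal {T}_t\), then

$$\begin{aligned} \overline{V}_t\le ess\sup _{\tau \in {\mathcal {T}_t}}E[R_t(\tau ,\sigma _t^*)|\mathcal {F}_t]\le E[R_t(\tau _t^*,\sigma _t^*)|\mathcal {F}_t]\le ess\inf _{\sigma \in {\mathcal {T}_t}}E[R_t(\tau _t^*,\sigma )|\mathcal {F}_t]\le \underline{V}_t, \end{aligned}$$

which, combined with \(\overline{V}_t \ge \underline{V}_t\), yields the equality.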

The Dynkin game can be seen as an extension of the optimal stopping problem. The martingale approach has been used to find a saddle point, and the value function is then obtained by solving this double optimal stopping problem (for example, see Dynkin [6], Krylov [16] and the references therein). In Friedman [8] and Bensoussan and Friedman [1], the authors developed the analytical theory of stochastic differential games with stopping times in the Markov setting. They studied the value function and found the saddle point of the Dynkin game by using the theories of partial differential equations, variational inequalities and free-boundary problems. Later, the reflected backward stochastic differential equation (reflected BSDE for short) with one lower obstacle was found to be very useful for solving the optimal stopping problem. Since then, many researchers have investigated the Dynkin game by solving the reflected BSDE with lower and upper obstacles (for example, see Cvitanic and Karatzas [5], Hamadène and Lepeltier [11] and the references therein). In addition, there are other ways to solve this game, such as the pathwise approach (see Karatzas [13]) and the connection with a bounded-variation problem (see Karatzas and Wang [14]).

Inspired by Cvitanic and Karatzas [5], in this paper we study the Dynkin game in a stochastic environment with ambiguity and evaluate the reward process by the g-expectations induced by BSDEs. Our problem can be formulated as follows. We define the lower and upper value functions

$$\begin{aligned} \underline{V}_t:=ess\sup _{\tau \in {\mathcal {T}_t}} ess\inf _{\sigma \in {\mathcal {T}_t}}\mathcal {E}^g_t[R(\tau ,\sigma )] \end{aligned}$$
(4)

and

$$\begin{aligned} \overline{V}_t=ess\inf _{\sigma \in {\mathcal {T}_t}} ess\sup _{\tau \in {\mathcal {T}_t}}\mathcal {E}^g_t[R(\tau ,\sigma )], \end{aligned}$$
(5)

respectively, where \(R(\tau ,\sigma ):=L(\tau )1_{\{\tau \le \sigma \}}+U(\sigma )1_{\{\sigma <\tau \}}\) and \(\mathcal {T}_t\) is the set of all stopping times taking values in \([t,T]\). Under suitable assumptions on the two processes L(t) and U(t), we want to find a saddle point \((\tau _t^*,\sigma _t^*)\) such that

$$\begin{aligned} \mathcal {E}^g_t[R(\tau ,\sigma _t^*)]\le \mathcal {E}^g_t[R(\tau _t^*,\sigma _t^*)]\le \mathcal {E}^g_t[R(\tau _t^*,\sigma )] \end{aligned}$$
(6)

holds for any \(\tau ,\sigma \in \mathcal {T}_t\), and then the game has a value function \(V(t)=\overline{V}_t =\underline{V}_t\).

This problem looks very similar to the one stated and solved in Cvitanic and Karatzas [5], but there are some differences; in Sect. 3, we point out the main one. Furthermore, the more complicated case with a constraint is considered, in which the reward process is evaluated by the \(g_\Gamma \)-expectation. The notion of \(g_\Gamma \)-expectation is given in Sect. 2.

This paper is organized as follows. In Sect. 2, we introduce the notation, assumptions, notions and propositions about BSDEs, reflected BSDEs and BSDEs with a constraint that are used in this paper. The main results and proofs are stated in Sect. 3.

2 BSDE, reflected BSDE and BSDE with a constraint

In this section, we present the notation, assumptions, notions and propositions about BSDEs, reflected BSDEs and BSDEs with a constraint that are used in this paper.

Let \((\Omega ,{{\mathcal {F}}},P)\) be a probability space and \((W_t)_{t\ge 0}\) a d-dimensional standard Brownian motion, and let \(({{\mathcal {F}}}_t)_{t\ge 0}\) be the filtration generated by the Brownian motion and all P-null subsets, i.e.,

$$\begin{aligned} {{{\mathcal {F}}}_t}=\sigma \{W_s;s\le t\}\vee {{\mathcal {N}}}, \end{aligned}$$

where \({{\mathcal {N}}}\) is the set of all P-null subsets. Fix a real number \(T>0\).

Define the following usual spaces of random variables and processes:

\(L^2(\mathcal {F}_T)=\{\xi :\xi \) is an \({{\mathcal {F}}}_T\)-measurable random variable such that \(E[|\xi |^2]<\infty \}\);

\(H_T^2(R^d)=\{V:V_t\) is an \(R^d\)-valued, \(\mathcal{F}_t\)-adapted process such that \(E[\int _0^T|V_s|^2\mathrm{d}s]<\infty \}\);

\(S_T^2(R)=\{V:V_t\) is an R-valued, RCLL, \(\mathcal{F}_t\)-adapted process such that \(E[\sup _{0\le t\le T}|V_t|^2]<\infty \}\);

\(S_{c,i;T}^2(R)=\{A:A_t\) is an R-valued, continuous, increasing, \({{\mathcal {F}}}_t\)-adapted process with \(A_0=0\) such that \(E[A_T^2]<\infty \}\);

\(S_{i;T}^2(R)=\{A:A_t\) is an R-valued, increasing, RCLL, \({{\mathcal {F}}}_t\)-adapted process with \(A_0=0\) such that \(E[A_T^2]<\infty \}\).

Now, we consider the following one-dimensional BSDE:

$$\begin{aligned} Y_t=\xi +\int _t^T g(s,Y_s,Z_s)\mathrm{d}s-\int _t^T Z_s\cdot \mathrm{d}W_s,\quad t\in [0,T]. \end{aligned}$$
(7)

Some assumptions are given as follows:

  1. (A.0)

Let \(g:\Omega \times [0,T]\times R\times R^d\mapsto R\) be such that for any \((y,z)\in R\times R^d\), \(g(t,y,z)\) is \(\mathcal{F}_t\)-progressively measurable.

  2. (A.1)

    \(g(\cdot ,0,0)\in H_T^2(R)\);

  3. (A.2)

    There exists a positive constant \(\mu \) such that for all \(y_1,y_2\in R\), \(z_1,z_2\in R^d\),

    $$\begin{aligned}|g(t,y_1,z_1)-g(t,y_2,z_2)|\le \mu \left( |y_1-y_2|+|z_1-z_2|\right) .\end{aligned}$$
  4. (A.3)

    \(g(\cdot ,y,0)\equiv 0,\ \ \ \forall y\in R\).

From Pardoux and Peng [21], we know that if g satisfies (A.0)–(A.2), then for any \(\xi \in L^2(\mathcal {F}_T)\), BSDE (7) has a unique adapted solution \((Y,Z)\in S_T^2(R)\times H_T^2(R^d)\).

Next, some notions and propositions of g-expectation induced by BSDE are presented. For more details on g-expectation, see Briand et al. [2], Chen et al. [3, 4], He et al. [9], Hu [10], Jiang [12], Peng [22, 23] and Rosazza Gianin [25].

Definition 2.1

(g-expectation, see [22]) Suppose that g satisfies (A.0)–(A.2). For any \(\xi \in L^2(\mathcal {F}_T)\), let (Y, Z) be the solution of BSDE (7) with terminal value \(\xi \). Consider the mapping \({\mathcal {E}}^g_{t,T}[\cdot ]:L^2(\mathcal {F}_T)\mapsto L^2(\mathcal {F}_t)\) defined by \({\mathcal {E}}^g_{t,T}[\xi ]=Y_t\). We call \({\mathcal {E}}^g_{t,T}[\xi ]\) the g-expectation of \(\xi \). Furthermore, if g also satisfies (A.3), we write \(\mathcal {E}^g_t[\xi ]:=\mathcal {E}^g_{t,T}[\xi ]\).
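Two examples may help fix ideas. If \(g\equiv 0\), then \(\mathcal {E}^g_{t,T}[\xi ]=E[\xi |\mathcal {F}_t]\) is the classical conditional expectation. If \(d=1\) and \(g(t,y,z)=\mu |z|\) for some constant \(\mu >0\) (this g satisfies (A.0)–(A.3)), then, as is known from the g-expectation literature cited above,

$$\begin{aligned} \mathcal {E}^g_{t,T}[\xi ]=ess\sup _{|\theta |\le \mu }E_{Q^\theta }[\xi |\mathcal {F}_t],\qquad \frac{\mathrm{d}Q^\theta }{\mathrm{d}P}=\exp \left( \int _0^T\theta _s\mathrm{d}W_s-\frac{1}{2}\int _0^T\theta _s^2\mathrm{d}s\right) , \end{aligned}$$

where the essential supremum ranges over progressively measurable \(\theta \) bounded by \(\mu \). This drift ambiguity is the sense in which evaluating rewards by \(\mathcal {E}^g\) models a stochastic environment with ambiguity.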

Proposition 2.2

(see [23]) Suppose that g satisfies (A.0)–(A.2), and \(\sigma \), \(\tau \) are two stopping times satisfying \(\tau \le \sigma \le T\). Let \(\zeta \in L^2(\mathcal {F}_\sigma )\) and let (Y, Z) be the solution of the following BSDE

$$\begin{aligned}Y_\tau =\zeta +\int _\tau ^\sigma g(s,Y_s,Z_s)\mathrm{d}s -\int _\tau ^\sigma Z_s\cdot \mathrm{d}W_s.\end{aligned}$$

Then,

  1. (i)

If \(\xi \), \(\eta \in L^2(\mathcal {F}_\sigma )\) and \(\xi \ge \eta \) a.s., then \(\mathcal {E}^g_{\tau ,\sigma }[\xi ]\ge \mathcal {E}^g_{\tau ,\sigma }[\eta ]\) a.s.

  2. (ii)

If g satisfies (A.3), then \(\mathcal {E}^g_{\tau ,\sigma }[\zeta ]=\mathcal {E}^g_{\tau ,T}[\zeta ]\) a.s., so we write \(\mathcal {E}^g_\tau [\zeta ]:=\mathcal {E}^g_{\tau ,\sigma }[\zeta ]\).

  3. (iii)

    If g satisfies (A.3), \(\mathcal {E}^g_\tau [\mathcal {E}^g_\sigma [\zeta ]]=\mathcal {E}^g_\tau [\zeta ]\) a.s.

  4. (iv)

    Let \(\xi _n,\xi ,\eta \in L^2(\mathcal {F}_\sigma )\), \(n=1,2,\dots \). If \(\lim \nolimits _{n\rightarrow \infty }\xi _n=\xi \) a.s. and \(|\xi _n|\le \eta \) a.s., we have \(\lim \nolimits _{n\rightarrow \infty }\mathcal {E}^g_{\tau ,\sigma }[\xi _n]=\mathcal {E}^g_{\tau ,\sigma }[\xi ]\) a.s.

Definition 2.3

(g-martingale, g-supermartingale, g-submartingale, see [23]) Suppose that g satisfies (A.0)–(A.2). An \({{\mathcal {F}}}_t\)-progressively measurable R-valued process X is called a g-martingale (resp., g-supermartingale, g-submartingale), if for each \(0\le t\le T\), \(E[|X_t|^2]<\infty \), and \({\mathcal {E}}^g_{s,t}[X_t]=X_s\) a.s., (resp., \({\mathcal {E}}^g_{s,t}[X_t]\le X_s\) a.s., \({\mathcal {E}}^g_{s,t}[X_t]\ge X_s\) a.s.) for all \(s\in [0,t]\).
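For example, under (A.0)–(A.2), the first component Y of the solution of BSDE (7) is itself a g-martingale on [0, T]: restricted to [s, t], (Y, Z) solves the BSDE with terminal value \(Y_t\), so by the uniqueness of solutions

$$\begin{aligned} {\mathcal {E}}^g_{s,t}[Y_t]=Y_s \quad \hbox {a.s.},\quad 0\le s\le t\le T. \end{aligned}$$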

The theory of BSDE has been widely used in many fields such as financial mathematics and stochastic control. Let us mention that the reflected BSDE has been found useful for solving the optimal stopping problem and for investigating the Dynkin game. In Cvitanic and Karatzas [5], the Dynkin game is investigated by solving the reflected BSDE with double obstacles. In the following, the notion of reflected BSDE is given.

Definition 2.4

(Reflected BSDE, see [24]) Suppose that \(\xi \in L^2(\mathcal {F}_T)\) and g satisfies (A.0)–(A.2). Consider two processes L, \(U\in S_T^2(R)\) satisfying

$$\begin{aligned}L(t) \le U(t) \quad \hbox {a.s.}, \quad \forall t\in [0,T)\quad \hbox {and} \quad L(T)\le \xi \le U(T)\quad \hbox {a.s.}\end{aligned}$$

We say that a quadruple \((Y,Z,A,K)\in S_T^2(R)\times H_T^2(R^d)\times S_T^2(R)\times S_T^2(R)\) is a solution of reflected BSDE (BSDE reflected between a lower obstacle L and an upper obstacle U) with parameters \((\xi ,g)\), if

  1. (i)

    A, K are increasing: \(\mathrm{d}A\ge 0\), \(\mathrm{d}K\ge 0\);

  2. (ii)

    (YZ) solves the following BSDE on [0, T]:

    $$\begin{aligned} Y_t= \xi +\int _t^Tg(s,Y_s,Z_s)\mathrm{d}s +A_T-A_t-(K_T-K_t)-\int _t^TZ_s\cdot \mathrm{d}W_s; \end{aligned}$$
    (8)
  3. (iii)

    Y is dominated by L and U, i.e.,

    $$\begin{aligned} L(t) \le Y_t \le U(t) \quad \hbox {a.s.}, \quad \forall t\in [0,T]; \end{aligned}$$
    (9)
  4. (iv)

    The following Skorohod condition holds:

    $$\begin{aligned} \int _0^T(Y_{t-}-L(t-))\mathrm{d}A_t=\int _0^T(U(t-)-Y_{t-})\mathrm{d}K_t=0. \end{aligned}$$
    (10)

Remark 2.5

(1) In order for the Dynkin game studied here to be meaningful, we need the following assumptions to solve the reflected BSDE (8) formulated in Definition 2.4:

(B.1) Let L(t), U(t) be two continuous processes belonging to \(S_T^2(R)\) and satisfying

$$\begin{aligned} L(t)< U(t) \quad \hbox {a.s.}, \quad \forall t\in [0,T)\quad \hbox {and} \quad L(T)\le \xi \le U(T)\quad \hbox {a.s.}; \end{aligned}$$

(B.2) Mokobodski's condition: there exist two non-negative supermartingales h and \(h^{'}\) in \(S_T^2(R)\) such that

$$\begin{aligned} L(t)\le h_t-h^{'}_t\le U(t),\ \ \ \forall t\in [0,T]. \end{aligned}$$
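Two simple sufficient conditions for (B.2): if \(L(t)\le 0\le U(t)\) a.s. for all t, one may take \(h=h^{'}\equiv 0\); more generally, if there is a martingale \(M\in S_T^2(R)\) with \(L(t)\le M_t\le U(t)\) for all t, one may take

$$\begin{aligned} h_t=E[M_T^+|\mathcal {F}_t],\qquad h^{'}_t=E[M_T^-|\mathcal {F}_t], \end{aligned}$$

which are non-negative martingales in \(S_T^2(R)\) with \(h_t-h^{'}_t=E[M_T|\mathcal {F}_t]=M_t\).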

(2) From Cvitanic and Karatzas [5], we know that if assumptions (A.0)–(A.2), (B.1) and (B.2) hold, then for each \(\xi \in L^2(\mathcal {F}_T)\), there exists a unique solution \((Y,Z,A,K)\in S_T^2(R)\times H_T^2(R^d)\times S_{c,i;T}^2(R)\times S_{c,i;T}^2(R)\) of the reflected BSDE (8) formulated in Definition 2.4.

(3) In particular, throughout this paper, let \((Y^*,Z^*,A^*,K^*)\) denote the unique solution of reflected BSDE (8) with terminal value L(T).

In this paper, the constrained case of the Dynkin game is also considered, so we introduce the theory of BSDE with a constraint. First, the notion of g-supersolution is presented.

Definition 2.6

(g-supersolution, see [23]) Suppose that \(\xi \in L^2(\mathcal {F}_T)\) and g satisfies (A.0)–(A.2). If a triple \((Y, Z, A)\in S_T^2(R)\times H_T^2(R^d)\times S_{i;T}^2(R)\) is such that (Y, Z) solves the following BSDE on [0, T]:

$$\begin{aligned} Y_t= \xi +\int _t^Tg(s,Y_s,Z_s)\mathrm{d}s +A_T-A_t-\int _t^TZ_s\cdot \mathrm{d}W_s, \end{aligned}$$
(11)

then we call Y a g-supersolution on [0, T].

Let \(\phi :\Omega \times [0,T]\times R\times R^d\mapsto R^{+}\) be a given non-negative, \({{\mathcal {F}}}_t\)-progressively measurable function such that \(\phi (t,0,0)\in H_T^2(R)\) and \(\phi \) is Lipschitz with respect to (y, z), that is, there exists a positive constant \(\mu \) such that for all \(y_1,y_2\in R\), \(z_1,z_2\in R^d\),

$$\begin{aligned} |\phi (t,y_1,z_1)-\phi (t,y_2,z_2)|\le \mu (|y_1-y_2|+|z_1-z_2|). \end{aligned}$$

Now, we consider BSDE of the form (11) with a constraint

$$\begin{aligned} K_t(\omega ):=\{(y,z)\in R\times R^d;\phi (\omega ,t,y,z)=0\} \end{aligned}$$

imposed on the solution, that is, the solution of BSDE of the form (11) also satisfies that

$$\begin{aligned} \phi (t,Y_t,Z_t)=0\ \ \hbox {a.s.} \end{aligned}$$
(12)
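A typical example of such a constraint, recorded here for illustration: with \(d=2\) and \(z=(z^1,z^2)\), take

$$\begin{aligned} \phi (\omega ,t,y,z)=|z^2|,\qquad K_t(\omega )=\{(y,z)\in R\times R^2:z^2=0\}. \end{aligned}$$

This \(\phi \) is non-negative, Lipschitz and satisfies \(\phi (\cdot ,y,0)\equiv 0\), and the constraint (12) forces the second component of Z to vanish, the situation that arises when hedging is restricted to a strict subset of the tradable assets.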

To make the problem of BSDE with a constraint meaningful, we need the following assumption:

(H) There exists at least one g-supersolution \(\hat{Y}\), with corresponding decomposition \((\hat{Z},\hat{A})\) and terminal value \(\xi \), such that the constraint (12) holds.

Definition 2.7

(The smallest g-supersolution subject to a constraint, \(g_\Gamma \)-expectation, see [23]) Given a terminal value \(\xi \), a g-supersolution Y with decomposition (Z, A) is said to be the smallest g-supersolution subject to the constraint (12) if, for any g-supersolution \(\hat{Y}\) with terminal value \(\xi \) and corresponding decomposition \((\hat{Z},\hat{A})\) subject to \(\phi (t,\hat{Y}_t,\hat{Z}_t)=0\) a.s., we have \(Y_t\le \hat{Y}_t\) a.s. In the case \(g(\cdot ,y,0)=0\) and \(\phi (\cdot ,y,0)=0\), \(\forall y\in R\), the smallest g-supersolution \(Y_t\) subject to the constraint (12) is denoted by \(\mathcal {E}_t^{g,\phi }[\xi ]\). We call \(\mathcal {E}_t^{g,\phi }[\xi ]\) the \(g_\Gamma \)-expectation of \(\xi \).

Remark 2.8

To construct the smallest g-supersolution of BSDE (11) subject to the constraint (12), Peng [23] introduced the following BSDEs on [0, T]:

$$\begin{aligned} Y_t^n=\xi +\int _t^Tg_n(s,Y_s^n,Z_s^n)\mathrm{d}s-\int _t^TZ_s^n\cdot \mathrm{d}W_s, n=1,2,\ldots , \end{aligned}$$
(13)

where \(g_n:=g+n\phi \). In [23], the author proved that \(Y^n\) converges increasingly to Y, and that Y is the smallest g-supersolution of BSDE (11) subject to the constraint (12).
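To make this monotone convergence concrete, the following is a minimal numerical sketch of the penalization scheme (13). The scheme (backward Euler with least-squares regression on a polynomial basis of \(W_t\)), the driver \(g\equiv 0\), the constraint function \(\phi (t,y,z)=|z|\) (which forces \(Z=0\)) and all grid parameters are our own illustrative choices, not taken from [23].

```python
import numpy as np

# Illustrative penalization of (13): g = 0, phi(t, y, z) = |z|, so
# g_n(t, y, z) = n*|z| and the constraint (12) forces Z = 0.  The terminal
# value xi = 1_{W_T > 0} is bounded by D = 1 (cf. Remark 2.9 below).
rng = np.random.default_rng(0)
M, N, T = 50_000, 50, 1.0                     # paths, time steps, horizon
dt = T / N
dW = rng.normal(0.0, np.sqrt(dt), size=(M, N))
W = np.hstack([np.zeros((M, 1)), dW.cumsum(axis=1)])

def regress(x, target):
    """E[target | W_t = x] approximated by polynomial least squares."""
    X = np.vstack([np.ones_like(x), x, x**2, x**3]).T
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return X @ coef

def penalized_Y0(n):
    """Y^n_0 for BSDE (13) with g_n(t, y, z) = n*|z|, by backward Euler."""
    Y = (W[:, -1] > 0).astype(float)          # xi = 1_{W_T > 0}
    for i in range(N - 1, -1, -1):
        Z = regress(W[:, i], Y * dW[:, i]) / dt   # Z_i ~ E[Y_{i+1} dW_i]/dt
        Y = regress(W[:, i], Y) + n * np.abs(Z) * dt
    return Y.mean()                           # Y_0 is deterministic

for n in (0, 1, 4, 16):
    print(f"n = {n:2d}:  Y^n_0 ~ {penalized_Y0(n):.3f}")
```

For \(n=0\) the scheme returns the classical expectation \(E[\xi ]=1/2\); as n grows, \(Y^n_0\) should increase toward \(\sup \xi =1\), which is exactly the bound D appearing in Remark 2.9 below.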

Remark 2.9

Suppose that for any \(y\in R\), \(g(\cdot ,y,0)\equiv 0\) and \(\phi (\cdot ,y,0)\equiv 0\). Then the smallest g-supersolution subject to the constraint (12) is well defined for any terminal value \(\xi \) in \(L^\infty (\mathcal {F}_T)\), the space of all essentially bounded \(\mathcal {F}_T\)-measurable random variables. In fact, for a given \(\xi \in L^\infty (\mathcal {F}_T)\) (i.e., there exists a positive constant D such that \(|\xi |\le D\) a.s.), the following Y, with corresponding decomposition (Z, A), is a g-supersolution of BSDE (11) such that the constraint (12) holds, where

$$\begin{aligned} Y_t=\left\{ \begin{array}{ll} D, &{}\quad t\in [0,T)\\ \xi , &{}\quad t=T \end{array}\right. \qquad \hbox {and}\qquad A_t=\left\{ \begin{array}{ll} 0, &{}\quad t\in [0,T)\\ D-\xi , &{}\quad t=T \end{array}\right. \end{aligned}$$

and \(Z_t=0\). Thus, the smallest g-supersolution subject to the constraint (12) exists.
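Indeed, plugging this triple into (11) and using \(g(s,Y_s,0)\equiv 0\): for \(t<T\),

$$\begin{aligned} \xi +\int _t^Tg(s,Y_s,0)\mathrm{d}s+A_T-A_t-\int _t^TZ_s\cdot \mathrm{d}W_s=\xi +(D-\xi )=D=Y_t, \end{aligned}$$

and the constraint (12) holds because \(\phi (t,Y_t,0)=0\).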

Finally, we give a comparison theorem for the smallest g-supersolution subject to the constraint (12).

Proposition 2.10

Suppose g and \(\phi \) satisfy the assumptions (A.0), (A.2) and (A.3). If \(\xi ,\eta \in L^2(\mathcal {F}_T)\) and \(\xi \le \eta \) a.s., then

$$\begin{aligned} \mathcal {E}_t^{g,\phi }[\xi ]\le \mathcal {E}_t^{g,\phi }[\eta ] \quad \hbox {a.s.}, \quad \forall t\in [0,T]. \end{aligned}$$
(14)

Proof

Let \(\left( Y^{1,n},Z^{1,n}\right) \) and \(\left( Y^{2,n},Z^{2,n}\right) \) be the solutions of the following BSDEs on [0, T]:

$$\begin{aligned} Y_t^{1,n}=\xi +\int _t^T\left[ g\left( s,Y_s^{1,n},Z_s^{1,n}\right) +n\phi \left( s,Y_s^{1,n},Z_s^{1,n}\right) \right] \mathrm{d}s-\int _t^TZ_s^{1,n}\cdot \mathrm{d}W_s, n=1,2,\dots \end{aligned}$$

and

$$\begin{aligned} Y_t^{2,n}=\eta +\int _t^T\left[ g\left( s,Y_s^{2,n},Z_s^{2,n}\right) +n\phi \left( s,Y_s^{2,n},Z_s^{2,n}\right) \right] \mathrm{d}s-\int _t^TZ_s^{2,n}\cdot \mathrm{d}W_s, n=1,2,\dots , \end{aligned}$$

respectively. By the comparison theorem of BSDE (see Peng [22]), we can obtain that for each n,

$$\begin{aligned} Y_t^{1,n}\le Y_t^{2,n} \quad \hbox {a.s.}, \quad \forall t\in [0,T]. \end{aligned}$$

From Remark 2.8, we know that \(\lim \limits _{n\rightarrow \infty }Y_t^{1,n}=\mathcal {E}_t^{g,\phi }[\xi ]\) a.s. and \(\lim \limits _{n\rightarrow \infty }Y_t^{2,n}=\mathcal {E}_t^{g,\phi }[\eta ]\) a.s., \(\forall t\in [0,T]\). Letting \(n\rightarrow \infty \) then yields

$$\begin{aligned} \mathcal {E}_t^{g,\phi }[\xi ]\le \mathcal {E}_t^{g,\phi }[\eta ] \quad \hbox {a.s.}, \quad \forall t\in [0,T]. \end{aligned}$$

The proof of Proposition 2.10 is complete. \(\square \)

3 Dynkin game under ambiguity

In this section, we first study the Dynkin game without constraints, and then investigate the more complicated case with a constraint.

3.1 Dynkin game without constraints

In Cvitanic and Karatzas [5], the method for studying the Dynkin game is stated as follows. Define the lower and upper value functions

$$\begin{aligned} \underline{V}(t):=ess\sup _{\tau \in {\mathcal {T}_t}}ess\inf _{\sigma \in {\mathcal {T}_t}}E[\hat{R}_t(\tau ,\sigma ) |\mathcal {F}_t] \end{aligned}$$

and

$$\begin{aligned} \overline{V}(t):=ess\inf _{\sigma \in {\mathcal {T}_t}}ess\sup _{\tau \in {\mathcal {T}_t}}E[\hat{R}_t(\tau ,\sigma ) |\mathcal {F}_t], \end{aligned}$$

respectively, where \(\mathcal {T}_t\) is the set of all stopping times taking values in \([t,T]\) and

$$\begin{aligned} \begin{aligned} \hat{R}_t(\tau ,\sigma )=\int _t^{\tau \wedge \sigma }g(s,Y_s^*,Z_s^*)\mathrm {d}s + L(\tau )1_{\{\tau \le \sigma \}}+U(\sigma )1_{\{\sigma <\tau \}}. \end{aligned} \end{aligned}$$
(15)

A saddle point \((\tau _t^*,\sigma _t^*)\) can be found which satisfies

$$\begin{aligned} \begin{aligned} E[\hat{R}_t(\tau ,\sigma _t^*)|\mathcal {F}_t]\le E[\hat{R}_t(\tau _t^*,\sigma _t^*)|\mathcal {F}_t]\le E[\hat{R}_t(\tau _t^*,\sigma )|\mathcal {F}_t], \end{aligned} \end{aligned}$$

and then the game has a value function \(V(t)=\overline{V}(t) =\underline{V}(t)\). Furthermore,

$$\begin{aligned} Y^*_t=V(t) \quad \hbox {a.s.}, \quad \forall t\in [0,T]. \end{aligned}$$
(16)

In our framework, we additionally need assumption (A.3) to hold. The method for studying the Dynkin game in our framework can be formulated as follows. Define the lower and upper value functions

$$\begin{aligned} \underline{V}_t:=ess\sup _{\tau \in {\mathcal {T}_t}} ess\inf _{\sigma \in {\mathcal {T}_t}}\mathcal {E}^g_t[R(\tau ,\sigma )] \end{aligned}$$
(17)

and

$$\begin{aligned} \overline{V}_t=ess\inf _{\sigma \in {\mathcal {T}_t}} ess\sup _{\tau \in {\mathcal {T}_t}}\mathcal {E}^g_t[R(\tau ,\sigma )], \end{aligned}$$
(18)

respectively, where \(R(\tau ,\sigma ):=L(\tau )1_{\{\tau \le \sigma \}}+U(\sigma )1_{\{\sigma <\tau \}}\). By Definition 2.1, we know that

$$\begin{aligned} \mathcal {E}^g_t[R(\tau ,\sigma )]=E\left[ \int _t^{\tau \wedge \sigma }g(s,Y^{\tau ,\sigma }_s,Z^{\tau ,\sigma }_s)\mathrm{d}s + L(\tau )1_{\{\tau \le \sigma \}}+U(\sigma )1_{\{\sigma <\tau \}}|\mathcal {F}_t\right] , \end{aligned}$$
(19)

where \((Y^{\tau ,\sigma },Z^{\tau ,\sigma })\) is the solution of BSDE (7) with terminal value \(\eta :=L(\tau )1_{\{\tau \le \sigma \}}+U(\sigma )1_{\{\sigma <\tau \}}\). We aim to find a saddle point \((\tau _t^*,\sigma _t^*)\) such that

$$\begin{aligned} \mathcal {E}^g_t[R(\tau ,\sigma _t^*)]\le \mathcal {E}^g_t[R(\tau _t^*,\sigma _t^*)]\le \mathcal {E}^g_t[R(\tau _t^*,\sigma )] \end{aligned}$$
(20)

holds for any \(\tau ,\sigma \in \mathcal {T}_t\), and then the game has a value function \(V(t)=\overline{V}_t =\underline{V}_t\). Furthermore, we can show that (16) also holds.

From the expressions (15) and (19), we can easily see the difference between our method and that of Cvitanic and Karatzas [5]. In (15), \((Y^{*},Z^{*})\) is the solution of the reflected BSDE (8) formulated in Definition 2.4 with terminal value L(T), and it does not depend on the stopping times \(\sigma \) and \(\tau \). But in (19), \((Y^{\tau ,\sigma },Z^{\tau ,\sigma })\) is the solution of BSDE (7) with terminal value \(L(\tau )1_{\{\tau \le \sigma \}}+U(\sigma )1_{\{\sigma <\tau \}}\), and it does depend on the stopping times \(\sigma \) and \(\tau \).

With the help of the theory of g-expectation, we can find a saddle point of the Dynkin game. The main reason is that the g-expectation enjoys almost all the properties of the classical expectation except linearity, and, as the proof of Theorem 3.1 shows, linearity is not crucial for studying the Dynkin game without constraints. Theorem 3.1 is the main result on the Dynkin game without constraints.

Theorem 3.1

Suppose that (A.0), (A.2), (A.3) and (B.1), (B.2) hold. Let \((Y^*,Z^*,A^*,K^*)\) be the unique solution of reflected BSDE (8) with terminal value L(T). Then the Dynkin game stated in (17) and (18) has a saddle point \((\tau _t^*,\sigma _t^*)\) such that

$$\begin{aligned} \mathcal {E}^g_t[R(\tau ,\sigma _t^*)]\le \mathcal {E}^g_t\left[ Y^*_{\tau \wedge \sigma _t^*}\right] \le \mathcal {E}^g_t\left[ Y^*_{\tau _t^*\wedge \sigma _t^*}\right] \le \mathcal {E}^g_t\left[ Y^*_{\tau _t^*\wedge \sigma }\right] \le \mathcal {E}^g_t\left[ R(\tau _t^*,\sigma )\right] \end{aligned}$$
(21)

holds for any \(\tau ,\sigma \in \mathcal {T}_t\), and hence

$$\begin{aligned} \underline{V}_t=\overline{V}_t=Y^*_t\quad \hbox {a.s.}, \quad \forall t\in [0,T], \end{aligned}$$

where

$$\begin{aligned} \tau _t^*=\inf \{s\ge t: L(s)=Y^*_s\}\wedge T \end{aligned}$$
(22)

and

$$\begin{aligned} \sigma _t^*=\inf \{s\ge t:U(s)=Y^*_s\}\wedge T. \end{aligned}$$
(23)

Proof

For any \(\tau \in \mathcal {T}_t\), we have

$$\begin{aligned} \mathcal {E}^g_t[R(\tau ,\sigma _t^*)]= & {} \mathcal {E}^g_t \left[ L(\tau )1_{\{\tau \le \sigma _t^*\}}+U(\sigma _t^*)1_{\{\sigma _t^*<\tau \}}\right] \\\le & {} \mathcal {E}^g_t\left[ Y^*_\tau 1_{\{\tau \le \sigma _t^*\}}+U(\sigma _t^*)1_{\{\sigma _t^*<\tau \}}\right] \\= & {} \mathcal {E}^g_t\left[ Y^*_\tau 1_{\{\tau \le \sigma _t^*\}}+Y^*_{\sigma _t^*}1_{\{\sigma _t^*<\tau \}}\right] \\= & {} \mathcal {E}^g_t\left[ Y^*_{\tau \wedge \sigma _t^*}\right] . \end{aligned}$$

Now, we prove that for any \(\tau \in \mathcal {T}_t\),

$$\begin{aligned} \mathcal {E}^g_t\left[ Y^*_{\tau \wedge \sigma _t^*}\right] \le \mathcal {E}^g_t\left[ Y^*_{\tau _t^*\wedge \sigma _t^*}\right] \end{aligned}$$
(24)

holds.

Case 1: On the set \(\{\omega :\sigma _t^*<\tau \le \tau _t^*\}\), (24) obviously holds.

Case 2: On the set \(\{\omega :\tau \le \tau _t^*<\sigma _t^*\}\), by (10), (22) and (23), we have

$$\begin{aligned} A^*_{\tau _t^*}-A^*_\tau =0 \end{aligned}$$

and

$$\begin{aligned} K^*_{\tau _t^*}-K^*_\tau =0. \end{aligned}$$

So BSDE (8) can be rewritten as follows:

$$\begin{aligned} Y^*_\tau = Y^*_{\tau _t^*}+\int _{\tau }^{\tau _t^*}g(s,Y^*_s,Z^*_s)\mathrm{d}s -\int _{\tau }^{\tau _t^*}Z^*_s\cdot \mathrm{d}W_s. \end{aligned}$$

This means that

$$\begin{aligned} \mathcal {E}^g_t\left[ Y^*_\tau \right] = \mathcal {E}^g_t\left[ Y^*_{\tau _t^*}\right] . \end{aligned}$$
(25)

Case 3: On the set \(\{\omega :\tau \le \sigma _t^*\le \tau _t^*\}\), by (10), (22) and (23), we have

$$\begin{aligned} A^*_{\sigma _t^*}-A^*_\tau =0 \end{aligned}$$

and

$$\begin{aligned} K^*_{\sigma _t^*}-K^*_\tau =0. \end{aligned}$$

So BSDE (8) can be rewritten as follows:

$$\begin{aligned} Y^*_\tau = Y^*_{\sigma _t^*}+\int _{\tau }^{\sigma _t^*}g(s,Y^*_s,Z^*_s)\mathrm{d}s -\int _{\tau }^{\sigma _t^*}Z^*_s\cdot \mathrm{d}W_s. \end{aligned}$$

This means that

$$\begin{aligned} \mathcal {E}^g_t\left[ Y^*_\tau \right] = \mathcal {E}^g_t\left[ Y^*_{\sigma _t^*}\right] . \end{aligned}$$
(26)

From Cases 1–3, we can obtain that on the set \(\{\omega :\tau \le \tau _t^*\}\),

$$\begin{aligned} \mathcal {E}^g_t[Y^*_{\tau \wedge \sigma _t^*}] =\mathcal {E}^g_t[Y^*_{\tau _t^*\wedge \sigma _t^*}]. \end{aligned}$$
(27)

Case 4: On the set \(\{\omega :\sigma _t^*\le \tau _t^*<\tau \}\), (24) obviously holds.

Case 5: On the set \(\{\omega :\tau _t^*<\tau \le \sigma _t^*\}\), by (10), (22) and (23), we have

$$\begin{aligned} K^*_\tau -K^*_{\tau _t^*}=0. \end{aligned}$$

So BSDE (8) can be rewritten as follows:

$$\begin{aligned} Y^*_{\tau _t^*} = Y^*_\tau +\int _{\tau _t^*}^{\tau }g(s,Y^*_s,Z^*_s)\mathrm{d}s+A^*_\tau -A^*_{\tau _t^*} -\int _{\tau _t^*}^{\tau }Z^*_s\cdot \mathrm{d}W_s. \end{aligned}$$

This means that

$$\begin{aligned} \mathcal {E}^g_t\left[ Y^*_\tau \right] \le \mathcal {E}^g_t\left[ Y^*_{\tau _t^*}\right] . \end{aligned}$$
(28)

Case 6: On the set \(\{\omega :\tau _t^*<\sigma _t^*<\tau \}\), by (10), (22) and (23), we have

$$\begin{aligned} K^*_{\sigma _t^*}-K^*_{\tau _t^*}=0. \end{aligned}$$

So BSDE (8) can be rewritten as follows:

$$\begin{aligned} Y^*_{\tau _t^*} = Y^*_{\sigma _t^*}+\int _{\tau _t^*}^{\sigma _t^*}g(s,Y^*_s,Z^*_s)\mathrm{d}s+A^*_{\sigma _t^*}-A^*_{\tau _t^*} -\int _{\tau _t^*}^{\sigma _t^*}Z^*_s\cdot \mathrm{d}W_s. \end{aligned}$$

This means that

$$\begin{aligned} \mathcal {E}^g_t\left[ Y^*_{\sigma _t^*}\right] \le \mathcal {E}^g_t\left[ Y^*_{\tau _t^*}\right] . \end{aligned}$$
(29)

From Cases 4–6, we can obtain that on the set \(\{\omega :\tau _t^*<\tau \}\),

$$\begin{aligned} \mathcal {E}^g_t[Y^*_{\tau \wedge \sigma _t^*}] \le \mathcal {E}^g_t[Y^*_{\tau _t^*\wedge \sigma _t^*}]. \end{aligned}$$
(30)

Hence from (27) and (30), we know that (24) holds.

Next, we prove that for any \(\sigma \in \mathcal {T}_t\),

$$\begin{aligned} \mathcal {E}^g_t\left[ Y^*_{\tau _t^*\wedge \sigma _t^*}\right] \le \mathcal {E}^g_t \left[ Y^*_{\tau _t^*\wedge \sigma }\right] \end{aligned}$$
(31)

holds. In a similar manner, we can obtain that on the set \(\{\omega :\sigma \le \sigma _t^*\}\),

$$\begin{aligned} \mathcal {E}^g_t\left[ Y^*_{\tau _t^*\wedge \sigma _t^*}\right] =\mathcal {E}^g_t \left[ Y^*_{\tau _t^*\wedge \sigma }\right] \end{aligned}$$
(32)

and on the set \(\{\omega :\sigma >\sigma _t^*\}\),

$$\begin{aligned} \mathcal {E}^g_t\left[ Y^*_{\tau _t^*\wedge \sigma _t^*}\right] \le \mathcal {E}^g_t \left[ Y^*_{\tau _t^*\wedge \sigma }\right] . \end{aligned}$$
(33)

Hence from (32) and (33), we know that (31) holds.

Finally, we prove that for any \(\sigma \in \mathcal {T}_t\),

$$\begin{aligned} \mathcal {E}^g_t\left[ Y^*_{\tau _t^*\wedge \sigma }\right] \le \mathcal {E}^g_t\left[ R(\tau _t^*,\sigma )\right] \end{aligned}$$
(34)

holds. In fact,

$$\begin{aligned} R(\tau _t^*,\sigma )=L(\tau _t^*)1_{\{\tau _t^*\le \sigma \}}+U(\sigma )1_{\{\sigma<\tau _t^*\}} \ge Y^*_{\tau _t^*}1_{\{\tau _t^*\le \sigma \}}+Y^*_{\sigma }1_{\{\sigma <\tau _t^*\}}=Y^*_{\tau _t^*\wedge \sigma }. \end{aligned}$$

This completes the proof of Theorem 3.1. \(\square \)

Remark 3.2

Compared with the method used to prove Theorem 4.1 in Cvitanic and Karatzas [5], our method is simpler. The main reason is that we use the theory of g-expectation to handle this problem. In Sect. 3.2, it will be seen that the theory of g-expectation is also very convenient for handling the constrained case.

3.2 Dynkin game with a constraint

The Dynkin game with a constraint that we study can be formulated as follows. Define the lower and upper value functions

$$\begin{aligned} \underline{V}_t:=ess\sup _{\tau \in {\mathcal {T}_t}}ess\inf _{\sigma \in {\mathcal {T}_t}} \mathcal {E}^{g,\phi }_t[R(\tau ,\sigma )] \end{aligned}$$
(35)

and

$$\begin{aligned} \overline{V}_t=ess\inf _{\sigma \in {\mathcal {T}_t}}ess\sup _{\tau \in {\mathcal {T}_t}} \mathcal {E}^{g,\phi }_t[R(\tau ,\sigma )], \end{aligned}$$
(36)

respectively, where \(R(\tau ,\sigma )=L(\tau )1_{\{\tau \le \sigma \}}+U(\sigma )1_{\{\sigma <\tau \}}\). Our aim is to find a saddle point \((\tau _t^*,\sigma _t^*)\) such that

$$\begin{aligned} \mathcal {E}^{g,\phi }_t[R(\tau ,\sigma _t^*)]\le \mathcal {E}^{g,\phi }_t[R(\tau _t^*,\sigma _t^*)]\le \mathcal {E}^{g,\phi }_t[R(\tau _t^*,\sigma )] \end{aligned}$$
(37)

holds for any \(\tau ,\sigma \in \mathcal {T}_t\), and then the game has a value function \(V(t)=\overline{V}_t =\underline{V}_t\).

Remark 3.3

Suppose that (A.0), (A.2), (A.3) and (B.1), (B.2) hold. For each \(m\in N\), we consider the following BSDE on [0, T]:

$$\begin{aligned} Y_t^m= L(T)+\int _t^Tg_m(s,Y^m_s,Z^m_s)\mathrm{d}s +A^m_T-A^m_t-(K^m_T-K^m_t)-\int _t^TZ^m_s\cdot \mathrm{d}W_s, \end{aligned}$$
(38)

where \(g_m:=g+m\phi \). Let \((Y^m,Z^m,A^m,K^m)\in S_T^2(R)\times H_T^2(R^d) \times S_{c,i;T}^2(R)\times S_{c,i;T}^2(R)\) be the unique solution of reflected BSDE (38) satisfying

$$\begin{aligned} L(t) \le Y^m_t \le U(t) \quad \hbox {a.s.}, \qquad \forall t\in [0,T] \end{aligned}$$
(39)

and

$$\begin{aligned} \int _0^T(Y_{t}^m-L(t))\mathrm{d}A_t^m=\int _0^T(U(t)-Y_{t}^m)\mathrm{d}K_t^m=0 \quad \hbox {a.s.} \end{aligned}$$
(40)

From Peng and Xu [24], we know that \((Y^m,Z^m,A^m,K^m)\) can be obtained by the penalization method. For each \(m,n\in N\), let \((Y^{n,m},Z^{n,m})\) be the solution of the following BSDE on [0, T]:

$$\begin{aligned} Y^{n,m}_t= & {} L(T)+\int _t^Tg_m(s,Y^{n,m}_s,Z^{n,m}_s)\mathrm{d}s+n\int _t^T(Y^{n,m}_s-L(s))^-\mathrm{d}s\\&-n\int _t^T(Y^{n,m}_s-U(s))^+\mathrm{d}s-\int _t^TZ^{n,m}_s\cdot \mathrm{d}W_s. \end{aligned}$$

Fixing n, by the comparison theorem of BSDE, we have that \(Y^{n,m}\) is increasing with respect to m. By Theorem 3.1 in Peng and Xu [24], we know that

$$\begin{aligned} \left( Y^{n,m}_t,Z^{n,m}_t, n\int _t^T(Y^{n,m}_s-L(s))^-\mathrm{d}s,n\int _t^T(Y^{n,m}_s-U(s))^+\mathrm{d}s\right) \rightarrow (Y^m_t,Z^m_t,A^m_t,K^m_t), \end{aligned}$$

as \(n\rightarrow \infty \); moreover, \(Y^m\) is increasing with respect to m and Y (the limit of \(Y^m\)) is RCLL.
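As a concrete illustration of this penalization, the following numerical sketch solves the doubly penalized BSDE above for \(Y^{n,m}\) with \(m=0\) (no constraint), so that by Theorem 3.1 the limit in n approximates the value \(Y^*_0\) of the unconstrained Dynkin game. The regression scheme and all concrete choices (obstacles \(L(t)=W_t-1\), \(U(t)=W_t+1\), terminal value L(T), driver \(g(t,y,z)=\mu |z|\), grid) are ours, for illustration only; note that (B.1) holds, and (B.2) holds with \(h_t=E[W_T^+|\mathcal {F}_t]\) and \(h^{'}_t=E[W_T^-|\mathcal {F}_t]\).

```python
import numpy as np

# Doubly penalized BSDE for Y^{n,0}: driver g(t,y,z) = mu*|z| plus the
# obstacle penalties n*(Y - L)^- - n*(Y - U)^+, with L = W - 1, U = W + 1,
# terminal value L(T) = W_T - 1.  All choices are illustrative.
rng = np.random.default_rng(1)
M, N, T, mu = 50_000, 50, 1.0, 0.5
dt = T / N
dW = rng.normal(0.0, np.sqrt(dt), size=(M, N))
W = np.hstack([np.zeros((M, 1)), dW.cumsum(axis=1)])

def regress(x, target):
    """E[target | W_t = x] approximated by polynomial least squares."""
    X = np.vstack([np.ones_like(x), x, x**2, x**3]).T
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return X @ coef

def game_value(n):
    """Y^{n,0}_0: backward Euler for the doubly penalized BSDE."""
    Y = W[:, -1] - 1.0                            # terminal value L(T)
    for i in range(N - 1, -1, -1):
        Z = regress(W[:, i], Y * dW[:, i]) / dt   # Z_i ~ E[Y_{i+1} dW_i]/dt
        Ybar = regress(W[:, i], Y)                # E[Y_{i+1} | F_i]
        L, U = W[:, i] - 1.0, W[:, i] + 1.0
        Y = Ybar + (mu * np.abs(Z)
                    + n * np.maximum(L - Ybar, 0.0)      # n*(Y - L)^-
                    - n * np.maximum(Ybar - U, 0.0)) * dt  # n*(Y - U)^+
    return Y.mean()                               # Y_0 is deterministic

for n in (0, 5, 20, 50):
    print(f"n = {n:2d}:  Y^n_0 ~ {game_value(n):+.3f}")
```

As n grows, the two penalties pin Y between the obstacles, and \(Y^{n,0}_0\) should approach the game value \(V(0)=Y^*_0\), which by (9) must lie in \([L(0),U(0)]=[-1,1]\).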

The following theorem is the main result of the Dynkin game with a constraint.

Theorem 3.4

Suppose that (A.0), (A.2), (A.3) and (B.1), (B.2) hold. We also assume that L(t), U(t) satisfy the following assumptions:

(C) L(t) is increasing and there exists some constant \(B>0\) such that

$$\begin{aligned}L(t)\le U(t) \le B \quad \hbox {a.s.}, \qquad \forall t\in [0,T]\end{aligned}$$

and

(D) \(\phi (\cdot ,y,0)\equiv 0\), \(\forall y\in R\).

Then, the Dynkin game stated in (35) and (36) has a saddle point \((\tau _t^*,\sigma _t^*)\) such that

$$\begin{aligned} \mathcal {E}^{g,\phi }_t[R(\tau ,\sigma _t^*)]\le \mathcal {E}^{g,\phi }_t\left[ Y_{\tau \wedge \sigma _t^*}\right] \le \mathcal {E}^{g,\phi }_t\left[ Y_{\tau _t^*\wedge \sigma _t^*}\right] \le \mathcal {E}^{g,\phi }_t\left[ Y_{\tau _t^*\wedge \sigma }\right] \le \mathcal {E}^{g,\phi }_t\left[ R(\tau _t^*,\sigma )\right] \end{aligned}$$
(41)

holds for any \(\tau ,\sigma \in \mathcal {T}_t\), and hence

$$\begin{aligned} \underline{V}_t=\overline{V}_t=Y_t\quad \hbox {a.s.}, \quad \forall t\in [0,T], \end{aligned}$$

where

$$\begin{aligned} \tau _t^*=\inf \{s\ge t: L(s)=Y_s\}\wedge T \end{aligned}$$
(42)

and

$$\begin{aligned} \sigma _t^*=\inf \{s\ge t:U(s)=Y_s\}\wedge T. \end{aligned}$$
(43)

Proof

Define

$$\begin{aligned} \tau _t^*(m):=\inf \{s\ge t: L(s)=Y^m_s\}\wedge T \end{aligned}$$

and

$$\begin{aligned} \sigma _t^*(m):=\inf \{s\ge t:U(s)=Y^m_s\}\wedge T. \end{aligned}$$

It is easy to check that \(\{\tau _t^*(m)\}\) is increasing in m with \(\tau _t^*\ge \tau _t^*(m)\) for any m, and \(\{\sigma _t^*(m)\}\) is decreasing in m with \(\sigma _t^*\le \sigma _t^*(m)\) for any m, since \(Y^m\) increases to Y as \(m\rightarrow \infty \).

From the expressions of \(\tau _t^*\) and \(\sigma _t^*\), it is easy to obtain that for any \(\tau ,\sigma \in \mathcal {T}_t\),

$$\begin{aligned} R(\tau ,\sigma _t^*)=L(\tau )1_{\{\tau \le \sigma _t^*\}}+U(\sigma _t^*)1_{\{\sigma _t^*<\tau \}} \le Y_\tau 1_{\{\tau \le \sigma _t^*\}}+Y_{\sigma _t^*}1_{\{\sigma _t^*<\tau \}} =Y_{\tau \wedge \sigma _t^*} \end{aligned}$$

and

$$\begin{aligned} R(\tau _t^*,\sigma )=L(\tau _t^*)1_{\{\tau _t^*\le \sigma \}}+U(\sigma )1_{\{\sigma<\tau _t^*\}} \ge Y_{\tau _t^*}1_{\{\tau _t^*\le \sigma \}}+Y_{\sigma }1_{\{\sigma <\tau _t^*\}}=Y_{\tau _t^*\wedge \sigma }. \end{aligned}$$

Hence, by Proposition 2.10, we have

$$\begin{aligned} \mathcal {E}^{g,\phi }_t[R(\tau ,\sigma _t^*)]\le \mathcal {E}^{g,\phi }_t\left[ Y_{\tau \wedge \sigma _t^*}\right] \end{aligned}$$
(44)

and

$$\begin{aligned} \mathcal {E}^{g,\phi }_t\left[ Y_{\tau _t^*\wedge \sigma }\right] \le \mathcal {E}^{g,\phi }_t\left[ R(\tau _t^*,\sigma )\right] . \end{aligned}$$
(45)

Now, we prove that for any \(\tau \in \mathcal {T}_t\),

$$\begin{aligned} \mathcal {E}^{g,\phi }_t[Y_{\tau \wedge \sigma _t^*}]\le Y_t \end{aligned}$$
(46)

holds. For any \(n\le m\), by the comparison theorem of BSDE and Theorem 3.1, we have

$$\begin{aligned} \mathcal {E}^{g_n}_t\left[ Y^m_{\tau \wedge \sigma _t^*}\right] \le \mathcal {E}^{g_m}_t\left[ Y^m_{\tau \wedge \sigma _t^*}\right] =\mathcal {E}^{g_m}_t\left[ Y^m_{\tau \wedge \sigma _t^*(m)}\right] \le \mathcal {E}^{g_m}_t\left[ Y^m_{\tau _t^*(m)\wedge \sigma _t^*(m)}\right] =Y^m_t. \end{aligned}$$
(47)

By Proposition 2.2 (iv), fixing n and letting \(m\rightarrow \infty \) in (47), we obtain

$$\begin{aligned} \mathcal {E}^{g_n}_t \left[ Y_{\tau \wedge \sigma _t^*}\right] \le Y_t. \end{aligned}$$
(48)

Then, by Remark 2.8, letting \(n\rightarrow \infty \) in (48) shows that (46) holds. In particular,

$$\begin{aligned} \mathcal {E}^{g,\phi }_t \left[ Y_{\tau _t^*\wedge \sigma _t^*}\right] \le Y_t. \end{aligned}$$
(49)

Next we prove that for any \(\sigma \in \mathcal {T}_t\),

$$\begin{aligned} Y_t\le \mathcal {E}^{g,\phi }_t[Y_{\tau _t^*\wedge \sigma }] \end{aligned}$$
(50)

holds. By Theorem 3.1 and Remark 2.8, it follows that

$$\begin{aligned} Y^m_t=\mathcal {E}^{g_m}_t\left[ Y^m_{\tau _t^*(m)\wedge \sigma _t^*(m)}\right] \le \mathcal {E}^{g_m}_t\left[ Y^m_{\tau _t^*(m)\wedge \sigma }\right] \le \mathcal {E}^{g,\phi }_t\left[ Y^m_{\tau _t^*(m)\wedge \sigma }\right] . \end{aligned}$$
(51)

Case 1: On the set \(\{\omega :\sigma <\tau _t^*(m)\le \tau _t^*\}\), by Remark 3.3, we have

$$\begin{aligned} Y^m_{\tau _t^*(m)\wedge \sigma }=Y^m_\sigma =Y^m_{\tau _t^*\wedge \sigma }\le Y_{\tau _t^*\wedge \sigma }. \end{aligned}$$
(52)

Case 2: On the set \(\{\omega :\tau _t^*(m)\le \sigma \le \tau _t^*\}\), by the increasing property of L(t), we have

$$\begin{aligned} Y^m_{\tau _t^*(m)\wedge \sigma }=Y^m_{\tau _t^*(m)}=L(\tau _t^*(m))\le L(\sigma )\le Y_\sigma = Y_{\tau _t^*\wedge \sigma }. \end{aligned}$$
(53)

Case 3: On the set \(\{\omega :\tau _t^*(m)\le \tau _t^*<\sigma \}\), by the increasing property of L(t), we have

$$\begin{aligned} Y^m_{\tau _t^*(m)\wedge \sigma }=Y^m_{\tau _t^*(m)}=L(\tau _t^*(m))\le L(\tau _t^*)=Y_{\tau _t^*}= Y_{\tau _t^*\wedge \sigma }. \end{aligned}$$
(54)

From Cases 1–3 and by Proposition 2.10, we can obtain that

$$\begin{aligned} \mathcal {E}^{g,\phi }_t\left[ Y^m_{\tau _t^*(m)\wedge \sigma }\right] \le \mathcal {E}^{g,\phi }_t\left[ Y_{\tau _t^*\wedge \sigma }\right] . \end{aligned}$$
(55)

From (51) and (55), letting \(m\rightarrow \infty \) in \(Y_t^m\), (50) holds. In particular,

$$\begin{aligned} Y_t\le \mathcal {E}^{g,\phi }_t[Y_{\tau _t^*\wedge \sigma _t^*}]. \end{aligned}$$
(56)

Thus, it follows from (44)–(46), (49), (50) and (56) that (41) holds. This completes the proof of Theorem 3.4. \(\square \)

A natural application of the Dynkin game under ambiguity is the pricing and hedging of game options. Our results can be used to analyze such game options in an incomplete market. Further study of this topic will appear in our future work.