Proof of Theorem 3.3
Proof
To verify that \((\mathbf {\varvec{\delta }}_{\Omega })^{*}\) is the equilibrium control profile and V is the equilibrium value function, it suffices to check that they satisfy the equilibrium condition in (2.4). For any market maker in \(\Omega _{\mathrm{mm}}\), given the other market makers’ strategies in \((\mathbf {\varvec{\delta }}_{\Omega })^{*}\) and any admissible strategy \(\varvec{\delta }\), we must prove:
$$\begin{aligned} \begin{aligned}&J(\varvec{\delta }, (\mathbf {\varvec{\delta }}_{-0})^{*}, t, S, x, q) \le J((\varvec{\delta })^{*}, (\mathbf {\varvec{\delta }}_{-0})^{*}, t, S, x, q) = V(t, S, x, q). \end{aligned} \end{aligned}$$
Assume the reference market maker takes an arbitrary strategy \(\varvec{\delta }\), while every other market maker sets his/her bid/ask spread by \((\delta ^{a})^{*}(t, S_t, X_t, q_{t}) = \pi ^{a}_{q_{t}}(t)\) and \((\delta ^{b})^{*}(t, S_t, X_t, q_{t}) = \pi ^{b}_{q_{t}}(t)\). Denote the reference market maker’s cash position at any time t by \(X^{*, \varvec{\delta }}_{t}\) and their inventory by \(q^{*, \varvec{\delta }}_{t}\). Then, for any time \(t \in [0, T]\), by the ansatz (3.1) and Itô’s lemma applied to the function \(\theta \), we get the following:
$$\begin{aligned} \begin{aligned}&V(T, S_{T}, X^{*, \varvec{\delta }}_{T}, q^{*, \varvec{\delta }}_{T}) = X^{*, \varvec{\delta }}_{T} + q^{*, \varvec{\delta }}_{T} S_{T} + \theta _{q^{*, \varvec{\delta }}_{T}}(T) = x + q S + \theta _q(t) \\&\quad + \int _{t}^{T} \delta ^{b}_{u} I^{b}(q^{*, \varvec{\delta }}_{u}) {\mathrm{d}}N_{u}^{b} + \int _{t}^{T} \delta ^{a}_{u} I^{a}(q^{*, \varvec{\delta }}_{u}) {\mathrm{d}}N_{u}^{a} + \int _{t}^{T} q^{*, \varvec{\delta }}_{u} {\mathrm{d}}S_u + \int _{t}^{T} \theta '_{q^{*, \varvec{\delta }}_{u}}(u){\mathrm{d}}u \\&\quad + \int _{t}^{T} (\theta _{q^{*, \varvec{\delta }}_{u} + 1}(u) - \theta _{q^{*, \varvec{\delta }}_{u}}(u) ) I^{b}(q^{*, \varvec{\delta }}_{u}) {\mathrm{d}}N_{u}^{b} + \int _{t}^{T} (\theta _{q^{*, \varvec{\delta }}_{u} - 1}(u) - \theta _{q^{*, \varvec{\delta }}_{u}}(u) ) I^{a}(q^{*, \varvec{\delta }}_{u}) {\mathrm{d}}N_{u}^{a}. \end{aligned} \end{aligned}$$
(5.1)
As \(q^{*, \varvec{\delta }}_{u}\) takes values in the finite set \(\mathbf {Q}\), and the solution of the ODE system exists on the compact set [0, T], both \(\theta _{q}(u)\) and \(\theta '_{q}(u)\) are uniformly bounded on [0, T] for all \(q \in \mathbf {Q}\), and:
$$\begin{aligned} \begin{aligned}&\mathbb {E}\left[ \int _{t}^{T} (q^{*, \varvec{\delta }}_{u})^2 {\mathrm{d}}u\right]< + \infty , \quad \mathbb {E}\left[ \int _{t}^{T} (\theta '_{q^{*, \varvec{\delta }}_{u}}(u))^2 {\mathrm{d}}u\right] < + \infty . \end{aligned} \end{aligned}$$
Moreover, from the assumption that \(f(\delta , x) \le \lambda (\delta )\) for all x, any admissible control satisfies (see [10, page 16]):
$$\begin{aligned} \begin{aligned}&\mathbb {E}\left[ \sum _{j = -Q+1}^{Q} P^{a}_{j} \int _{t}^{T} f(\delta ^{a}_{u}, \pi ^{a}_{j}(t)) I^{a}(q^{*, \varvec{\delta }}_{u}) |\delta ^{a}_{u} + \theta _{q^{*, \varvec{\delta }}_{u} - 1}(u) - \theta _{q^{*, \varvec{\delta }}_{u}}(u)| {\mathrm{d}}u\right]< + \infty \\&\mathbb {E}\left[ \sum _{j = -Q}^{Q-1} P^{b}_{j} \int _{t}^{T} f(\delta ^{b}_{u}, \pi ^{b}_{j}(t)) I^{b}(q^{*, \varvec{\delta }}_{u}) |\delta ^{b}_{u} + \theta _{q^{*, \varvec{\delta }}_{u} + 1}(u) - \theta _{q^{*, \varvec{\delta }}_{u}}(u)| {\mathrm{d}}u\right] < + \infty . \end{aligned} \end{aligned}$$
Taking expectations on both sides of (5.1), we have:
$$\begin{aligned} \begin{aligned}&\mathbb {E}\left[ V(T, S_{T}, X^{*, \varvec{\delta }}_{T}, q^{*, \varvec{\delta }}_{T})\right] = V(t, S, x, q) + \mathbb {E}\left[ \int _{t}^{T} \theta '_{q^{*, \varvec{\delta }}_{u}}(u) {\mathrm{d}}u\right] \\&+ \mathbb {E}\left[ \int _{t}^{T} \eta ^{a}(\theta (u),\delta ^{a}_{u},\pi ^{a}(u), q^{*, \varvec{\delta }}_{u}) I^{a}(q^{*, \varvec{\delta }}_{u}) {\mathrm{d}}u\right] \\&+ \mathbb {E}\left[ \int _{t}^{T} \eta ^{b}(\theta (u),\delta ^{b}_{u},\pi ^{b}(u), q^{*, \varvec{\delta }}_{u}) I^{b}(q^{*, \varvec{\delta }}_{u}) {\mathrm{d}}u\right] . \end{aligned} \end{aligned}$$
Here \(\eta ^{a}\) and \(\eta ^{b}\) are defined in (3.6). Hence, we have:
$$\begin{aligned} \begin{aligned}&\mathbb {E}\left[ V(T, S_{T}, X^{*, \varvec{\delta }}_{T}, q^{*, \varvec{\delta }}_{T}) \right] \le V(t, S, x, q) + \mathbb {E}\left[ \int _{t}^{T} \theta '_{q^{*, \varvec{\delta }}_{u}}(u) {\mathrm{d}}u\right] \\&+ \mathbb {E}\left[ \int _{t}^{T} \sup _{\delta _{u}^{a}} \eta ^{a}(\theta (u),\delta ^{a}_{u},\pi ^{a}(u), q^{*, \varvec{\delta }}_{u}) I^{a}(q^{*, \varvec{\delta }}_{u}) {\mathrm{d}}u\right] + \mathbb {E}\left[ \int _{t}^{T} \sup _{\delta _{u}^{b}} \eta ^{b}(\theta (u),\delta ^{b}_{u},\pi ^{b}(u), q^{*, \varvec{\delta }}_{u}) I^{b}(q^{*, \varvec{\delta }}_{u}){\mathrm{d}}u\right] . \end{aligned} \end{aligned}$$
(5.2)
As \(\theta \) satisfies the ODE system (3.5) for every \(u \in [0, T]\), we substitute it into the corresponding part of (5.2) and obtain the following:
$$\begin{aligned} \begin{aligned}&J(\varvec{\delta }, (\mathbf {\varvec{\delta }}_{-0})^{*}, t, S, x, q) = \mathbb {E}\left[ V(T, S_{T}, X^{*, \varvec{\delta }}_{T}, q^{*, \varvec{\delta }}_{T}) - \frac{1}{2} \gamma \sigma ^2 \int _{t}^{T} (q^{*, \varvec{\delta }}_{u})^2 {\mathrm{d}}u\right] \le V(t, S, x, q). \end{aligned} \end{aligned}$$
On the other hand, if the reference market maker also takes the equilibrium control, her cash position and inventory are denoted by \(X^{*}_{t}\) and \(q^{*}_{t}\), respectively, and we have the following:
$$\begin{aligned} \begin{aligned}&\eta ^{a}(\theta (t),\pi ^{a}_{q}(t),\pi ^{a}(t), q) = \sup _{\delta } \eta ^{a}(\theta (t), \delta , \pi ^{a}(t), q), \quad \\&\eta ^{b}(\theta (t),\pi ^{b}_{q}(t),\pi ^{b}(t), q) = \sup _{\delta } \eta ^{b}(\theta (t), \delta , \pi ^{b}(t), q). \end{aligned} \end{aligned}$$
Substituting the equilibrium control defined in (3.4) into (5.2) concludes the proof as follows:
$$\begin{aligned} \begin{aligned} J((\varvec{\delta })^{*}, (\mathbf {\varvec{\delta }}_{-0})^{*}, t, S, x, q)&= \mathbb {E}\left[ V(T, S_{T}, X^{*}_{T}, q^{*}_{T}) - \frac{1}{2} \gamma \sigma ^2 \int _{t}^{T} (q^{*}_{u})^2 {\mathrm{d}}u\right] \\&= V(t, S, x, q) \ge J(\varvec{\delta }, (\mathbf {\varvec{\delta }}_{-0})^{*}, t, S, x, q). \end{aligned} \end{aligned}$$
\(\square \)
Proof of Theorem 3.4
The proof of Theorem 3.4 consists of three steps:

1. There exist functions \(w^{a}\), \(w^{b}\) such that for any vector \(\mu \in \mathbb {R}^{M}\), \(w^{a}(\mu )\) and \(w^{b}(\mu )\) satisfy Equation (3.7).

2. \(w^{a}\) and \(w^{b}\) are unique and locally Lipschitz continuous, which guarantees that the RHS of the ODE system (3.8) is also locally Lipschitz continuous.

3. There exists a unique classical solution to the ODE system (3.8).
The key to proving Steps 1 and 2 is to characterize the vectors \(w^{a}(\mu )\) and \(w^{b}(\mu )\) by the first-order condition of the Hamiltonian: they are the solutions to a certain system of equations, and Steps 1 and 2 follow by analyzing the zero points of that system. The key to proving Step 3 is to obtain an upper bound estimate for \(\theta \). This is done by showing that \(\theta \) is also a solution to another system of ODEs, which admits a comparison principle and hence an upper bound on its solution. When no confusion arises, we write \(w^{a}(\mu )\) and \(w^{b}(\mu )\) as
$$\begin{aligned} \begin{aligned}&w^{a}(\mu ) = w^{a} = (w_{-Q+1}^{a}, \ldots , w_{Q}^{a}), \quad w^{b}(\mu ) = w^{b} = (w_{-Q}^{b}, \ldots , w_{Q-1}^{b}). \end{aligned} \end{aligned}$$
Proof of Step 1
We first show that \(w^{a}\) and \(w^{b}\) satisfy the equilibrium condition of the Hamiltonian system. We provide some preliminary results on the existence and uniqueness of the maximum point of the Hamiltonian \(G_{q}^{a}(\delta ) := \eta ^{a}(\mu ,\delta , w, q)\) given any vectors \(\mu \in \mathbb {R}^{M}\) and \(w \in \mathbb {R}^{M-1}\). We can define \(G_{q}^{b}(\delta )\) and prove the same result similarly.
Lemma 5.1
Assume intensity function f satisfies all the assumptions in Theorem 2.1. Then, given any vectors \(w = (w_{-Q+1}, \ldots , w_{Q}) \in \mathbb {R}^{M-1}\) and \(\mu \), the maximum point exists and is unique for function \(G_{q}^{a}\) when \(q = -Q + 1, \ldots , Q\). Furthermore, the maximum point of \(G_{q}^{a}(\delta )\) satisfies the first-order condition:
$$\begin{aligned} \begin{aligned}&\frac{d G_{q}^{a}(\delta )}{d \delta } = 0. \end{aligned} \end{aligned}$$
Proof
Given any vector \(\mu \) and w, the expected intensity function d is defined by
$$\begin{aligned} \begin{aligned}&d(\delta ) := \sum _{j = -Q+1}^{Q} P^{a}_{j} f(\delta ,w_{j}). \end{aligned} \end{aligned}$$
From Assumption 2.1, we know for any \(\delta \), x and y:
$$\begin{aligned} \begin{aligned}&f(\delta , x) f''_{1 1}(\delta , y) + f(\delta , y) f''_{1 1}(\delta , x) < 4 f'_{1}(\delta , x) f'_{1}(\delta , y). \end{aligned} \end{aligned}$$
(5.3)
Simple calculation shows
$$\begin{aligned} \begin{aligned}&d(\delta ) d''(\delta ) < 2 (d'(\delta ))^2, \end{aligned} \end{aligned}$$
which implies \(\delta + \mu _{q-1} - \mu _{q}+ d(\delta )/d'(\delta )\) is a strictly increasing function of \(\delta \). Combining with \(d'(\delta )<0\), we conclude that there exists a unique \(\delta ^*\) such that \(\frac{d G_{q}^{a}(\delta ^*)}{d \delta } = 0\) and \(G_{q}^{a}(\delta )\) is strictly increasing for \(\delta <\delta ^*\) and strictly decreasing for \(\delta >\delta ^*\), that is, \(\delta ^*\) is the unique global maximum point of \(G_{q}^{a}\). \(\square \)
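To make this argument concrete, here is a small numerical check (not part of the paper) using a hypothetical separable intensity \(f(\delta , x) = A e^{-k\delta }\sigma (x)\) with \(\sigma \) a sigmoid; it is decreasing in \(\delta \), increasing in x, bounded by \(\lambda (\delta ) = A e^{-k\delta }\), and satisfies (5.3) since \(2k^2 f f < 4k^2 f' f'\). For this family \(d'(\delta ) = -k\, d(\delta )\), so the first-order condition \(\delta + \mu _{q-1} - \mu _{q} + d(\delta )/d'(\delta ) = 0\) has the closed-form root \(\delta ^* = 1/k - (\mu _{q-1} - \mu _{q})\), which we can compare against a direct numerical maximization of \(G^{a}_{q}\):

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

# Hypothetical intensity f(delta, x) = A*exp(-k*delta)*sigmoid(x):
# decreasing in delta, increasing in x, bounded by lambda(delta) = A*exp(-k*delta).
A, k = 1.4, 0.3
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
f = lambda delta, x: A * np.exp(-k * delta) * sigmoid(x)

P = np.array([0.2, 0.5, 0.3])    # weights P^a_j (illustrative values)
w = np.array([-0.5, 0.1, 0.8])   # other market makers' quotes w_j (illustrative)
dmu = 0.25                       # mu_{q-1} - mu_q

d = lambda delta: np.dot(P, f(delta, w))                 # expected intensity d(delta)
G = lambda delta: d(delta) * (delta + dmu)               # Hamiltonian G^a_q(delta)
dG = lambda delta: d(delta) * (1.0 - k * (delta + dmu))  # G', using d'(delta) = -k*d(delta)

root = brentq(dG, -10.0, 50.0)   # unique root of the first-order condition
num = minimize_scalar(lambda x: -G(x), bounds=(-10.0, 50.0), method="bounded").x

print(root, 1.0 / k - dmu)       # FOC root equals the closed form 1/k - (mu_{q-1} - mu_q)
print(abs(num - root))           # numerical argmax agrees with the FOC root
```

Here `brentq` finds the unique sign change of \(G'\), matching Lemma 5.1: \(G^{a}_{q}\) increases before \(\delta ^*\) and decreases after it.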
Step 1 is equivalent to the following theorem, which proves that the generalized Isaacs condition in Definition 3.2 holds for any vector \(\mu \in \mathbb {R}^M\). We only focus on \(w^{a}\), as the proof for \(w^{b}\) is similar.
Theorem 5.2
Assume the intensity function f satisfies Assumption 2.1. Then, for any fixed vector \(\mu = (\mu _{-Q}, \ldots , \mu _{Q}) \in \mathbb {R}^M\), there exists vector \(w^{a} = (w_{-Q+1}^{a}, \ldots , w_{Q}^{a})\) such that for \(q = -Q + 1, \ldots , Q\),
$$\begin{aligned} \begin{aligned}&w_{q}^{a} = {\mathop {{{\,\mathrm{\arg \!\max }\,}}}\limits _{\delta }} \{\eta ^{a}(\mu , \delta , w^{a}, q) \}. \end{aligned} \end{aligned}$$
(5.4)
Define a mapping \(T: \mathbb {R}^{M-1} \rightarrow \mathbb {R}^{M-1}\) as
$$\begin{aligned} \begin{aligned}&T_{q}(w) = {\mathop {{{\,\mathrm{\arg \!\max }\,}}}\limits _{\delta \in \mathbb {R}}} \{ \eta ^{a}(\mu , \delta , w, q) \}, \quad \forall q \in \{-Q + 1, \ldots , Q \} \\&T(w) := (T_{-Q+1}(w), \ldots , T_{Q}(w)), \end{aligned} \end{aligned}$$
(5.5)
Equation (5.4) is equivalent to \(w^{a} = T(w^{a})\); namely, \(w^{a}\) is a fixed point of the mapping T. We need the following Schauder fixed-point theorem to prove the existence of \(w^a\).
Theorem 5.3
(Schauder) If K is a nonempty convex closed subset of a Hausdorff topological vector space V and T is a continuous mapping of K into itself such that T(K) is contained in a compact subset of K, then T has a fixed point.
To apply Theorem 5.3, we need to show the existence of K and the continuity of T. The next lemma confirms the first requirement.
Lemma 5.4
Given any vector \(\mu = (\mu _{-Q}, \ldots , \mu _{Q}) \in \mathbb {R}^M\) and the mapping T defined in (5.5), there exists a nonempty convex compact set \(K \subset \mathbb {R}^{M-1}\) such that \(T(K) \subset K\).
Proof
Firstly, for any vector \(w \in \mathbb {R}^{M-1}\), define \(\mathbf {y} = (y_{-Q+1}, \ldots , y_{Q}) = T(w)\). There exists a uniform \(\delta _{\mathrm{min}} \in \mathbb {R}\) such that for every q,
$$\begin{aligned} \begin{aligned}&y_{q} \ge \delta _{\mathrm{min}}. \end{aligned} \end{aligned}$$
(5.6)
We prove this by contradiction. Assume there were no lower bound for \(y_{q}\). Defining \(G^{a}_{q}(\delta ) = \eta ^{a}(\mu ,\delta , y, q)\) for \(q = -Q+1, \ldots , Q\), we know
$$\begin{aligned} \begin{aligned}&y_{q} = {\mathop {{{\,\mathrm{\arg \!\max }\,}}}\limits _{\delta }} \{ G^{a}_{q}(\delta ) \}. \end{aligned} \end{aligned}$$
Denote the uniform upper and lower bounds of \(\mu _{q-1} - \mu _{q}\) over all \(q \in \mathbf {Q}\) by \(M_d\) and \(m_d\), respectively. We have
$$\begin{aligned} \begin{aligned}&y_{q} > - M_d. \end{aligned} \end{aligned}$$
Otherwise, \(G^{a}_{q}(y_{q}) \le 0\), which contradicts the facts that \(G^{a}_{q}(\delta ) > 0\) for any \(\delta > - m_d\) and that \(y_{q} = {\mathop {{{\,\mathrm{\arg \!\max }\,}}}\limits _{\delta }} \{ G^{a}_{q}(\delta ) \}\). Hence, we can conclude that
$$\begin{aligned} \begin{aligned}&y_{q} \ge \delta _{\mathrm{min}} := - M_d. \end{aligned} \end{aligned}$$
Secondly, for any vector \(w \in [\delta _{\mathrm{min}}, +\infty )^{M-1}\), define \(\mathbf {y} = (y_{-Q+1}, \ldots , y_{Q}) = T(w)\). There exists a uniform \(\delta _{\mathrm{max}} \in \mathbb {R}\) such that for every q,
$$\begin{aligned} \begin{aligned}&y_{q} \le \delta _{\mathrm{max}}. \end{aligned} \end{aligned}$$
(5.7)
Define \(\delta _0 := - m_d + 1\). By definition of \(m_d\), for every q we have
$$\begin{aligned} \begin{aligned}&\delta _0 + \mu _{q-1} - \mu _{q} \ge 1 > 0. \end{aligned} \end{aligned}$$
Hence, for every \(q \in \mathbf {Q}\), \(G^{a}_{q}(\delta _0) > 0\). Moreover, as f is increasing in its second argument, for any vector \(w \in [\delta _{\mathrm{min}}, +\infty )^{M-1}\), we have:
$$\begin{aligned} \begin{aligned}&G^{a}_{q}(\delta _0) \ge \sum _{j = -Q+1}^{Q} P^{a}_{j} f(\delta _0,\delta _{\mathrm{min}}). \end{aligned} \end{aligned}$$
(5.8)
By assumption \(\lim _{\delta \rightarrow +\infty } \lambda (\delta ) \delta = 0\), there exists \(\delta _{\mathrm{max}} > \delta _0\) such that
$$\begin{aligned} \begin{aligned}&\max _{q} \{ \sum _{j = -Q+1}^{Q} P^{a}_{j} \lambda (\delta _{\mathrm{max}}) (\delta _{\mathrm{max}} + \mu _{q-1} - \mu _{q}) \} < \sum _{j = -Q+1}^{Q} P^{a}_{j} f(\delta _0,\delta _{\mathrm{min}}). \end{aligned} \end{aligned}$$
(5.9)
As \(f(\delta _{\mathrm{max}}, \cdot )\) is bounded by \(\lambda (\delta _{\mathrm{max}})\) uniformly, (5.8) and (5.9) imply that for any vector \(w \in [\delta _{\mathrm{min}}, +\infty )^{M-1}\) and every \(q \in \mathbf {Q}\),
$$\begin{aligned} \begin{aligned}&G^{a}_{q}(\delta _{\mathrm{max}}) <G^{a}_{q}(\delta _0). \end{aligned} \end{aligned}$$
Since \(\delta _{\mathrm{max}} > \delta _0\) and \(G^{a}_{q}(\delta _{\mathrm{max}}) <G^{a}_{q}(\delta _0)\), the maximum point \(\delta ^*\) of \(G^{a}_{q}\) cannot lie in the interval \((\delta _{\mathrm{max}}, \infty )\), as this would contradict \(G^{a}_{q}(\delta )\) being strictly increasing in \(\delta \) for \(\delta <\delta ^*\). Hence, for any \(q \in \mathbf {Q}\),
$$\begin{aligned} \begin{aligned}&y_q \in [\delta _{\mathrm{min}}, \delta _{\mathrm{max}}], \end{aligned} \end{aligned}$$
which shows \(T(K) \subset K\), where \(K = [\delta _{\mathrm{min}}, \delta _{\mathrm{max}}]^{M-1}\). \(\square \)
To prove T is a continuous mapping, we need the following Berge’s maximum theorem.
Theorem 5.5
(Berge) Let X and \(\Theta \) be metric spaces, \(f:X\times \Theta \rightarrow \mathbb {R}\) be a function jointly continuous in its two arguments and \(C:\Theta \rightarrow X\) be a compact-valued correspondence. For x in X and \(\theta \) in \(\Theta \), let
$$\begin{aligned} f^{*}(\theta )=\max \{f(x,\theta )|x\in C(\theta )\}, \end{aligned}$$
and
$$\begin{aligned} x^{*}(\theta )={\mathrm {arg}} \max \{f(x,\theta )|x\in C(\theta )\} =\{x\in C(\theta )\,|\,f(x,\theta )=f^{*}(\theta )\}. \end{aligned}$$
If C is continuous at some \(\theta \), then \(f^{*}\) is continuous at \(\theta \) and \(x^*\) is nonempty, compact-valued and upper hemicontinuous at \(\theta \), that is, if \(\theta _{n}\rightarrow \theta \) and \(b_n\rightarrow b\) as \(n\rightarrow \infty \) with \(b_n\in x^*(\theta _{n})\), then \(b\in x^*(\theta )\).
The next lemma shows that any single-valued, bounded, upper hemicontinuous mapping is a continuous function.
Lemma 5.6
Let A, B be two Euclidean spaces, \(\Gamma : A \rightarrow B\) be a single-valued, bounded and upper hemicontinuous mapping, then \(\Gamma \) is a continuous function.
Proof
For any sequence \(a_n\rightarrow a\) and \(b_n= \Gamma (a_n)\) (\(\Gamma \) is a single-valued mapping), if \(b_n\) tends to a limit b, then we must have \(b=\Gamma (a)\) by the upper hemicontinuity of \(\Gamma \) and we are done. Assume the sequence \(b_n\) did not have a limit. Since \(b_n\) is a bounded sequence, there exist at least two subsequences \(b_{n_k}\) and \(b_{n_k'}\) that converge to two different values b and \(b'\). Since \(a_n\rightarrow a\), both \(a_{n_k}\) and \(a_{n_k'}\) tend to a; the upper hemicontinuity of \(\Gamma \) then implies \(b=\Gamma (a)\) and \(b'=\Gamma (a)\), a contradiction to \(b\ne b'\). Therefore, \(\Gamma \) is continuous. \(\square \)
We can now prove that the mapping T defined in (5.5) is continuous on K.
Lemma 5.7
(Continuous Mapping T in \(\mathbb {R}^M\)) Given any vector \(\mu = (\mu _{-Q}, \ldots , \mu _{Q}) \in \mathbb {R}^M\) and the bounded set K defined in Lemma 5.4, the mapping T defined in (5.5) is continuous on K.
Proof
We prove that, given the vector \(\mu \), each element \(T_q(w)\) of the mapping T is continuous with respect to w. As the maximum point of \(\eta ^{a}(\mu , \cdot , w, q)\) exists and is unique for every \(q \in \{-Q + 1, \ldots , Q\}\), \(T_q\) is a well-defined single-valued mapping. Moreover, \(\eta ^{a}(\mu , \delta , w, q)\) is jointly continuous w.r.t. \(\delta \) and w. By Berge’s maximum theorem, \(T_q\) is upper hemicontinuous in w on the bounded set K. Therefore, by Lemma 5.6, for every \(q \in \{-Q + 1, \ldots , Q\}\), \(T_q\) is continuous in w. We conclude that, given the vector \(\mu \), T is a continuous mapping from K to K. \(\square \)
Finally, we can prove Theorem 5.2, which concludes the proof of Step 1.
Proof of Theorem 5.2
As the intensity function f satisfies Assumption 2.1, by Lemma 5.1, the maximum point of \(G^{a}_{q}(\delta )\) exists and is unique for every q. Fix a vector \(\mu \in \mathbb {R}^M\) and define the mapping \(T: \mathbb {R}^{M-1} \rightarrow \mathbb {R}^{M-1}\) as in (5.5); then \(w^{a}\) satisfies (5.4) if and only if it is a fixed point of T. To show the existence of a fixed point of this mapping, the Schauder fixed-point theorem is applied to T through the following steps.
Firstly, by Lemma 5.4, there exists a bounded closed set \(K \subset \mathbb {R}^{M-1}\), which is therefore compact, such that \(T(K) \subset K\). From the proof of Lemma 5.4, the compact set K is convex.
Secondly, by Lemmas 5.1 and 5.7, T is a single-valued continuous mapping from K to K. By Theorem 5.3, T has a fixed point for every given \(\mu \), denoted by \(w^{a}\), and
$$\begin{aligned} \begin{aligned}&w^{a} = T(w^{a})\in K. \end{aligned} \end{aligned}$$
(5.10)
This concludes the proof of Step 1. \(\square \)
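The Schauder argument above is non-constructive; in practice, a simple fixed-point (Picard) iteration of T often converges. The sketch below is purely illustrative and assumes a hypothetical logistic-type intensity with made-up parameters (A, k, c, P, \(\mu \), and Q = 1, so \(M - 1 = 2\)) not taken from the paper; it iterates \(w \leftarrow T(w)\), computing each coordinate \(T_q(w)\) by numerical maximization, and checks the fixed-point residual \(w^{a} = T(w^{a})\):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical logistic-type intensity: decreasing in delta, increasing in x.
A, k, c = 1.0, 1.5, 0.4
f = lambda delta, x: A / (1.0 + np.exp(k * (delta - c * np.tanh(x))))

Q = 1                                   # so M - 1 = 2Q = 2 and q in {0, 1}
qs = [0, 1]
P = np.array([0.5, 0.5])                # weights P^a_j (illustrative)
mu = {-1: 0.3, 0: 0.1, 1: -0.2}         # fixed vector mu = (mu_{-Q}, ..., mu_Q)

def T(w):
    """One application of the best-response map T_q(w) = argmax_delta eta^a."""
    out = []
    for q in qs:
        dmu = mu[q - 1] - mu[q]
        eta = lambda delta: np.dot(P, f(delta, np.asarray(w))) * (delta + dmu)
        res = minimize_scalar(lambda x: -eta(x), bounds=(-5.0, 20.0), method="bounded")
        out.append(res.x)
    return np.array(out)

w = np.zeros(2)                         # start anywhere in K
for _ in range(100):                    # Picard iteration w <- T(w)
    w = T(w)

residual = np.max(np.abs(T(w) - w))     # fixed-point residual
print(w, residual)
```

Convergence of this iteration is not guaranteed by the theorem (Schauder only gives existence); it is simply a practical way to locate the fixed point whose existence Step 1 establishes.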
Proof of Step 2
We first state a global implicit function theorem from [9, Theorem 4], which is used in the proof.
Theorem 5.8
Assume \(F : \mathbb {R}^{n} \times \mathbb {R}^{m} \rightarrow \mathbb {R}^{n}\) is a locally Lipschitz mapping such that
- For every \(y \in \mathbb {R}^{m}\), the function \(\phi _{y} : \mathbb {R}^{n} \rightarrow \mathbb {R}\), defined by \( \phi _{y}(x) = \frac{1}{2} ||F(x,y)||^2\), is coercive, i.e., \(\lim _{||x|| \rightarrow \infty } \phi _{y}(x) = +\infty \).
- The set \(\partial _{x}F(x, y)\) is of maximal rank for all \((x, y) \in \mathbb {R}^{n} \times \mathbb {R}^{m}\).
Then, there exists a unique locally Lipschitz function \(f: \mathbb {R}^{m} \rightarrow \mathbb {R}^{n}\) such that equations \(F(x, y) = 0\) and \(x = f(y)\) are equivalent in the set \(\mathbb {R}^{n} \times \mathbb {R}^{m}\).
With the help of Theorem 5.8, we can show the local Lipschitz continuity of functions \(w^{a}\) and \(w^{b}\).
Theorem 5.9
Assume the intensity function f satisfies Assumption 2.1. Then, there are single-valued and locally Lipschitz continuous functions \(w^{a}, w^{b} : \mathbb {R}^{M} \rightarrow \mathbb {R}^{M-1}\) that satisfy the generalized Isaacs condition (3.7) in Definition 3.2 for any given vector \(\mu \in \mathbb {R}^{M}\).
Proof
We provide the proof for \(w^{a}\) only. The proof for \(w^{b}\) is similar.
To begin with, from Assumption 2.1, we have (5.3) for all \(\delta \), x and y. From Lemma 5.1, the maximum point of \(G_{q}^{a}(\delta ) = \eta ^{a}(\mu ,\delta , w^{a}, q)\) is unique and satisfies the first-order condition. Hence, given any vector \(\mu \), a vector \(w^{a}\) that satisfies the generalized Isaacs condition in Definition 3.2 is also a solution to the following first-order condition for every q,
$$\begin{aligned} \begin{aligned}&\sum _{j = -Q+1}^{Q} P^{a}_{j} [ f(w_{q}^{a},w_{j}^{a}) + f'_{1}(w_{q}^{a},w_{j}^{a}) (w_{q}^{a} + \mu _{q-1} - \mu _{q}) ] = 0. \end{aligned} \end{aligned}$$
For any vectors \(\mu \) and \(\delta = (\delta _{-Q+1}, \ldots , \delta _{Q})\), define the function \(F_q : \mathbb {R}^{M-1} \times \mathbb {R}^{M} \rightarrow \mathbb {R}\) for every \(q \in \{-Q+1, \ldots , Q\}\) as follows:
$$\begin{aligned} \begin{aligned}&F_q(\delta , \mu ) := - \frac{\sum _{j = -Q+1}^{Q} P^{a}_{j} f(\delta _{q},\delta _{j})}{\sum _{j = -Q+1}^{Q} P^{a}_{j} f'_{1}(\delta _{q},\delta _{j}) } - \delta _{q} - (\mu _{q-1} - \mu _{q}). \end{aligned} \end{aligned}$$
Define mapping \(F: \mathbb {R}^{M-1} \times \mathbb {R}^{M} \rightarrow \mathbb {R}^{M-1}\) as
$$\begin{aligned} \begin{aligned}&F(\delta , \mu ) := (F_{-Q+1}(\delta , \mu ), \ldots , F_{Q}(\delta , \mu )). \end{aligned} \end{aligned}$$
F is continuously differentiable and \(w^{a}\) is determined implicitly by \( F(w^{a}, \mu ) = 0\). From the proof of Step 1, there exists a function \(w^{a}: \mathbb {R}^{M} \rightarrow \mathbb {R}^{M-1}\) such that \(F(w^{a}(\mu ),\mu ) = 0\) for any vector \(\mu \). If we can verify that the assumptions of Theorem 5.8 hold in this case, the function \(w^{a}\) satisfying \(F(w^{a}(\mu ),\mu ) = 0\) must be unique and locally Lipschitz continuous, which concludes our proof. Hence, the next step is to verify the assumptions of Theorem 5.8.
Firstly, we prove that the Jacobian matrix of F is everywhere nonsingular. Denote the Jacobian matrix of F with respect to \(\delta \) by \(\partial _{\delta } F\), a \(2Q\times 2Q\) matrix whose component at (i, m) is \(\frac{\partial F_i}{\partial \delta _{m}}(\delta , \mu )\) for \(i,m=-Q+1,\ldots , Q\). For \(i \in \{ -Q+1, \ldots , Q \}\), denote
$$\begin{aligned} \begin{aligned}&D_{i} := \left( \sum _{m = -Q+1}^{Q} P^{a}_{m} f'_{1}(\delta _{i},\delta _{m}) \right) ^2 > 0 \\&A_{i} := \frac{1}{D_{i}} \sum _{m = -Q+1}^{Q} \sum _{j = -Q+1}^{Q} P^{a}_{m} P^{a}_{j} [ f''_{1 1}(\delta _{i},\delta _{m}) f(\delta _{i},\delta _{j}) - f'_{1}(\delta _{i},\delta _{m}) f'_{1}(\delta _{i},\delta _{j})] \\&I_{i m} := \frac{1}{D_{i}} P^{a}_{m} \sum _{j = -Q+1}^{Q} P^{a}_{j} [ f(\delta _{i},\delta _{j}) f''_{1 2}(\delta _{i},\delta _{m}) - f'_{1}(\delta _{i},\delta _{j}) f'_{2}(\delta _{i},\delta _{m}) ]. \end{aligned} \end{aligned}$$
For \(m = i\), we have:
$$\begin{aligned} \begin{aligned}&\frac{\partial F_i}{\partial \delta _{i}}(\delta , \mu ) = -1 + A_{i} + I_{i i}. \end{aligned} \end{aligned}$$
From Assumption 2.1, we have (5.3), and simple calculation shows:
$$\begin{aligned} \begin{aligned}&-1 + A_{i} = \frac{1}{D_{i}} \sum _{m = -Q+1}^{Q} \sum _{j = -Q+1}^{Q} P^{a}_{m} P^{a}_{j} [ f''_{1 1}(\delta _{i},\delta _{m}) f(\delta _{i},\delta _{j}) - 2 f'_{1}(\delta _{i},\delta _{m}) f'_{1}(\delta _{i},\delta _{j})] < 0. \end{aligned} \end{aligned}$$
Hence,
$$\begin{aligned} \begin{aligned}&\left| \frac{\partial F_i}{\partial \delta _{i}}(\delta , \mu )\right| \ge 1 - A_{i} - \left| I_{i i}\right| . \end{aligned} \end{aligned}$$
For \(i \ne m\), the nondiagonal element of the Jacobian matrix \(\partial _{\delta } F\) is given by:
$$\begin{aligned} \begin{aligned}&\frac{\partial F_i}{\partial \delta _{m}}(\delta , \mu ) = I_{i m}. \end{aligned} \end{aligned}$$
To compare the diagonal element with the sum of nondiagonal elements, we have:
$$\begin{aligned} \begin{aligned}&\left| \frac{\partial F_i}{\partial \delta _{i}}(\delta , \mu )\right| - \sum _{m \ne i}\left| \frac{\partial F_i}{\partial \delta _{m}}(\delta , \mu )\right| \ge 1 - A_{i} - \sum _{m = -Q+1}^{Q} |I_{i m}|. \end{aligned} \end{aligned}$$
(5.11)
From the definition of \(A_{i}\) and \(I_{i m}\),
$$\begin{aligned} \begin{aligned}&1 - A_{i} - \sum _{m = -Q+1}^{Q} |I_{i m}| \\&= \frac{1}{D_{i}} \sum _{m = -Q+1}^{Q} P^{a}_{m} \{ \sum _{j = -Q+1}^{Q} P^{a}_{j} [ 2 f'_{1}(\delta _{i},\delta _{m}) f'_{1}(\delta _{i},\delta _{j}) - f''_{1 1}(\delta _{i},\delta _{m}) f(\delta _{i},\delta _{j})] \\&\quad - |\sum _{j = -Q+1}^{Q} P^{a}_{j} [f(\delta _{i},\delta _{j}) f''_{1 2}(\delta _{i},\delta _{m}) - f'_{1}(\delta _{i},\delta _{j}) f'_{2}(\delta _{i},\delta _{m})] | \}. \end{aligned} \end{aligned}$$
(5.12)
By the assumption of f in (2.1), we have
$$\begin{aligned} \begin{aligned}&\sum _{j = -Q+1}^{Q} P^{a}_{j} [ 2 f'_{1}(\delta _{i},\delta _{m}) f'_{1}(\delta _{i},\delta _{j}) - f''_{1 1}(\delta _{i},\delta _{m}) f(\delta _{i},\delta _{j}) ] \\&\qquad \pm \left[ \sum _{j = -Q+1}^{Q} P^{a}_{j}\left[ - f'_{2}(\delta _{i},\delta _{m}) f'_{1}(\delta _{i},\delta _{j}) + f''_{1 2}(\delta _{i},\delta _{m}) f(\delta _{i},\delta _{j}) \right] \right] > 0. \end{aligned} \end{aligned}$$
(5.13)
Therefore, as \(D_i > 0\), from (5.11), (5.12) and (5.13), we conclude that
$$\begin{aligned} \begin{aligned}&|\frac{\partial F_i}{\partial \delta _{i}}(\delta , \mu )| - \sum _{m \ne i}|\frac{\partial F_i}{\partial \delta _{m}}(\delta , \mu )| > 0. \end{aligned} \end{aligned}$$
The Jacobian matrix \(\partial _{\delta } F(\delta , \mu )\) is strictly diagonally dominant and is therefore a nonsingular matrix.
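The diagonal-dominance computation can be sanity-checked numerically at a sample point by building F from a concrete intensity and forming the Jacobian \(\partial _{\delta } F\) by central finite differences. The intensity below is a hypothetical logistic family with made-up parameters (whether it satisfies Assumption 2.1 must be verified separately); the sketch only demonstrates how to test strict diagonal dominance at a point:

```python
import numpy as np

# Hypothetical logistic intensity; f'_1 computed analytically for this form.
A, k, c = 1.0, 1.5, 0.4
s = lambda x: c * np.tanh(x)
f = lambda delta, x: A / (1.0 + np.exp(k * (delta - s(x))))
f1 = lambda delta, x: -k * f(delta, x) * (1.0 - f(delta, x) / A)  # df/d(delta) < 0

Q = 2
idx = range(2 * Q)                        # components q = -Q+1, ..., Q
P = np.full(2 * Q, 1.0 / (2 * Q))         # weights P^a_j (illustrative)
mu = np.linspace(0.5, -0.5, 2 * Q + 1)    # vector mu (illustrative)
dmu = mu[:-1] - mu[1:]                    # mu_{q-1} - mu_q for each component

def F(delta):
    out = np.empty(2 * Q)
    for i in idx:
        num = np.dot(P, f(delta[i], delta))   # sum_j P_j f(delta_i, delta_j)
        den = np.dot(P, f1(delta[i], delta))  # sum_j P_j f'_1(delta_i, delta_j)
        out[i] = -num / den - delta[i] - dmu[i]
    return out

delta0 = np.array([0.8, 1.0, 1.2, 0.9])   # a sample point
eps = 1e-6
J = np.empty((2 * Q, 2 * Q))              # finite-difference Jacobian d_delta F
for m in idx:
    e = np.zeros(2 * Q); e[m] = eps
    J[:, m] = (F(delta0 + e) - F(delta0 - e)) / (2 * eps)

# Strict diagonal dominance: |diagonal| exceeds the sum of off-diagonal magnitudes.
gap = np.abs(np.diag(J)) - (np.sum(np.abs(J), axis=1) - np.abs(np.diag(J)))
print(gap)
```

A strictly positive `gap` in every row reproduces, at this one point, the conclusion that \(\partial _{\delta } F\) is strictly diagonally dominant and hence nonsingular.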
Secondly, we show that, given any fixed vector \(\mu \), whenever \(|| \delta || \rightarrow \infty \), \(|| F(\delta , \mu ) || \rightarrow \infty \). Consider any vector sequence \({{\mathbf {\delta }^{k}}}, k = 1, 2, \ldots \), with \(|| {{\mathbf {\delta }^{k}}} || \rightarrow \infty \). Then, there exists a sequence \(n_k \in \{-Q+1, \ldots , Q\}, k = 1, 2, \ldots \), such that \(| \delta ^{k}_{n_k} | \rightarrow \infty \), where \(\delta ^{k}_{n_k}\) is the \(n_k\)th element of the vector \({{\mathbf {\delta }^{k}}}\). In the case that \(\delta ^{ k}_{n_k} \rightarrow -\infty \), we have
$$\begin{aligned} \begin{aligned}&L_{n_k}({{\mathbf {\delta }^{k}}}) := \frac{\sum _{m = -Q+1}^{Q} P^{a}_{m} f(\delta ^{k}_{n_{k}},\delta ^{k}_{m})}{\sum _{m = -Q+1}^{Q} P^{a}_{m} f'_{1}(\delta ^{k}_{n_{k}},\delta ^{k}_{m}) } < 0. \end{aligned} \end{aligned}$$
Hence, as \(k \rightarrow +\infty \), we get:
$$\begin{aligned} \begin{aligned}&F_{n_k}({{\mathbf {\delta }^{k}}}, \mu ) = - L_{n_k}({{\mathbf {\delta }^{k}}}) - \delta ^{k}_{n_k} - (\mu _{n_k-1} - \mu _{n_k}) > - \delta ^{k}_{n_k} - (\mu _{n_k-1} - \mu _{n_k}) \rightarrow + \infty . \end{aligned} \end{aligned}$$
This means that when \(\delta ^{ k}_{n_k} \rightarrow -\infty \), \(|| F({{{\mathbf {\delta }}^{k}}}, \mu ) || \rightarrow \infty \).
On the other hand, in the case that \(\delta ^{k}_{n_k} \rightarrow +\infty \), we can always assume \(\delta ^{k}_{n_k} = \max \{\delta ^{k}_{i} \}_{i \in \mathbf {Q}, i>-Q}\). As \(f'_{1} < 0\), \(f>0\) and f is increasing in its second variable, we have the following estimate for \(F_{n_k}({{\mathbf {\delta }^{k}}}, \mu )\):
$$\begin{aligned} \begin{aligned}&F_{n_k}({{\mathbf {\delta }^{k}}}, \mu ) = - \frac{\sum _{m = -Q+1}^{Q} P^{a}_{m} f(\delta ^{k}_{n_{k}},\delta ^{k}_{m})}{\sum _{m = -Q+1}^{Q} P^{a}_{m} f'_{1}(\delta ^{k}_{n_{k}},\delta ^{k}_{m}) } - \delta ^{k}_{n_k} - (\mu _{n_k-1} - \mu _{n_k}) \\&\quad \le - \frac{\sum _{m = -Q+1}^{Q} P^{a}_{m} f(\delta ^{k}_{n_{k}},\delta ^{k}_{n_{k}})}{P^{a}_{n_{k}} f'_{1}(\delta ^{k}_{n_{k}},\delta ^{k}_{n_{k}})} - \delta ^{k}_{n_k} - (\mu _{n_k-1} - \mu _{n_k}). \end{aligned} \end{aligned}$$
From the assumption that \( \lim _{\delta \rightarrow +\infty } - \frac{f'_{1}(\delta , \delta )}{f(\delta , \delta )} > 0\), we have:
$$\begin{aligned} \begin{aligned}&0< - \lim _{\delta ^{k}_{n_{k}} \rightarrow +\infty } \frac{\sum _{m = -Q+1}^{Q} P^{a}_{m} f(\delta ^{k}_{n_{k}},\delta ^{k}_{n_{k}})}{P^{a}_{n_{k}} f'_{1}(\delta ^{k}_{n_{k}},\delta ^{k}_{n_{k}})} < +\infty . \end{aligned} \end{aligned}$$
Then, by taking \(\delta ^{k}_{n_k} \rightarrow +\infty \), we finally have:
$$\begin{aligned} \begin{aligned}&\lim _{\delta ^{k}_{n_k} \rightarrow +\infty } F_{n_k}({{\mathbf {\delta }^{k}}}, \mu ) = - \infty . \end{aligned} \end{aligned}$$
Hence, for fixed \(\mu \) and \(\delta ^{k}_{n_k} \rightarrow +\infty \), we also get \(|| F({{\mathbf {\delta }^{k}}}, \mu ) || \rightarrow \infty \). Moreover, if \(\delta ^{k}_{n_k}\) consists of two subsequences, one converging to \(+\infty \) and another to \(-\infty \), then by combining the above arguments, we still get \(|| F({{\mathbf {\delta }^{k}}}, \mu ) || \rightarrow \infty \). We conclude that whenever \(|| \delta || \rightarrow \infty \), \(|| F(\delta , \mu ) || \rightarrow \infty \).
Theorem 5.8 implies that there exists a function \(w^{a}: \mathbb {R}^{M} \rightarrow \mathbb {R}^{M-1}\) such that \(F(w^{a}(\mu ), \mu ) = 0\) and \(w^{a}\) is unique and locally Lipschitz continuous, which concludes the proof of Step 2. \(\square \)
Proof of Step 3
We next prove that there exists a unique classical solution \(\theta \) to the ODE system (3.8) on [0, T]. The proof is divided into two parts. Firstly, we show the solution to the ODE system (3.8) is bounded if it exists. Secondly, we prove the existence and uniqueness of the classical solution to the ODE system (3.8).
Lemma 5.10
Assume the intensity function f satisfies Assumption 2.1. If \(\theta : [0, T] \rightarrow \mathbb {R}^M\) is a solution to the ODE system (3.8), then for all \(q \in \mathbf {Q}\) we have
$$\begin{aligned} - \frac{1}{2} \gamma \sigma ^2 Q^2 T -l(Q) \le \theta _{q}(t) \le 2 \sup _{\delta } \lambda (\delta ) \delta T. \end{aligned}$$
Proof
We first prove the upper bound. From the assumption on f and the proofs of Steps 1 and 2, the ODE system (3.8) is well defined. Since \(\theta \) is assumed to be a solution, define the twice continuously differentiable functions \(d^{0}\) and \(d^{1}\) as
$$\begin{aligned} \begin{aligned}&d^{0}(t, \delta ) := \sum _{j = -Q}^{Q-1} P^{b}_{j} f(\delta , w^{b}_{j}(\theta (t))) \le \lambda (\delta ) \\&d^{1}(t, \delta ) := \sum _{j = -Q+1}^{Q} P^{a}_{j} f(\delta , w^{a}_{j}(\theta (t))) \le \lambda (\delta ). \end{aligned} \end{aligned}$$
From Assumption 2.1, we have (5.3) for all \(\delta \), x and y. Simple calculation shows that \(d^{0}\) and \(d^{1}\) satisfy
$$\begin{aligned} \begin{aligned}&d^{\zeta }(t, \delta ) \le \lambda (\delta ), \quad \frac{\partial ^2 d^{\zeta }}{\partial \delta ^2}(t, \delta ) d^{\zeta }(t, \delta ) < 2 (\frac{\partial d^{\zeta }}{\partial \delta }(t, \delta ))^2, \quad \zeta = 0, 1. \end{aligned} \end{aligned}$$
On the other hand, \(\theta \) is also the solution to the following ODE system for all \(q \in \mathbf {Q}\):
$$\begin{aligned} \begin{aligned}&\theta '_{q}(t) = \frac{1}{2} \gamma \sigma ^2 q^2 - \sup _{\delta } \{ d^{0}(t, \delta ) (\delta + \theta _{q + 1}(t)-\theta _{q}(t)) \} I^{b}(q) \\&\quad - \sup _{\delta } \{ d^{1}(t, \delta ) (\delta + \theta _{q - 1}(t)-\theta _{q}(t)) \} I^{a}(q) \\&\theta _{q}(T) = -l(|q|). \end{aligned} \end{aligned}$$
(5.14)
The comparison principle for the ODE system (5.14) can be proved with an argument similar to the proof of the comparison principle in Guéant [10]. Define the operator \(H^{\zeta } : [0, T] \times \mathbb {R} \rightarrow \mathbb {R}\) for \(\zeta = 0, 1\) as
$$\begin{aligned} \begin{aligned}&H^{\zeta }(t, \Delta \mu ) := \sup _{\delta } \{ d^{\zeta }(t, \delta ) (\delta + \Delta \mu ) \}. \end{aligned} \end{aligned}$$
Then, from Guéant [10], we know \(H^{\zeta }\) is an increasing and nonnegative function of \(\Delta \mu \), and
$$\begin{aligned} \begin{aligned}&\max _{t \in [0, T], \zeta = 0, 1} H^{\zeta }(t, 0) \le \sup _{\delta } \{ \lambda (\delta ) \delta \}. \end{aligned} \end{aligned}$$
Define \(\bar{\theta } : [0, T] \rightarrow \mathbb {R}^M\) as follows:
$$\begin{aligned} \begin{aligned}&\bar{\theta }_{q}(t) = 2 \sup _{\delta } \{ \lambda (\delta ) \delta \} (T - t). \end{aligned} \end{aligned}$$
Substituting \(\bar{\theta }\) into the ODE system (5.14) and noting that \(\bar{\theta }_{q}\) does not depend on q, so \(\bar{\theta }_{q \pm 1}(t) - \bar{\theta }_{q}(t) = 0\), we have
$$\begin{aligned} \begin{aligned}&- \bar{\theta }'_{q}(t) + \frac{1}{2} \gamma \sigma ^2 q^2 - H^{0}(t, \bar{\theta }_{q+1}(t)-\bar{\theta }_{q}(t))I^{b}(q) - H^{1}(t, \bar{\theta }_{q-1}(t)-\bar{\theta }_{q}(t))I^{a}(q) \\&\quad \ge \sum _{\zeta =0}^{1} \Big ( \sup _{\delta } \{ \lambda (\delta ) \delta \} - H^{\zeta }(t, 0) \Big ) + \frac{1}{2} \gamma \sigma ^2 q^2 \ge 0, \\&\bar{\theta }_{q}(T) = 0 \ge \theta _{q}(T) = - l(|q|). \end{aligned} \end{aligned}$$
Then, by the comparison principle from Guéant [10], we know for every \(q \in \mathbf {Q}\),
$$\begin{aligned} \begin{aligned}&\theta _{q}(t) \le \bar{\theta }_{q}(t) \le 2 \sup _{\delta } \{ \lambda (\delta ) \delta \} T. \end{aligned} \end{aligned}$$
We next prove the lower bound. Let \(\tilde{\theta }: [0, T] \rightarrow \mathbb {R}^M\) satisfy the following ODE system for all \(q \in \mathbf {Q}\):
$$\begin{aligned} \begin{aligned}&\tilde{\theta }'_{q}(t) - \frac{1}{2} \gamma \sigma ^2 q^2 = 0 \\&\tilde{\theta }_{q}(T) = -l(|q|). \end{aligned} \end{aligned}$$
(5.15)
The closed-form solution is given by
$$\begin{aligned} \begin{aligned}&\tilde{\theta }_{q}(t) = \frac{1}{2} \gamma \sigma ^2 q^2 (t - T) -l(|q|). \end{aligned} \end{aligned}$$
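This follows by integrating (5.15) from t to T:
$$\begin{aligned} \begin{aligned}&\tilde{\theta }_{q}(t) = \tilde{\theta }_{q}(T) - \int _{t}^{T} \tilde{\theta }'_{q}(u) \, {\mathrm{d}}u = -l(|q|) - \frac{1}{2} \gamma \sigma ^2 q^2 (T - t). \end{aligned} \end{aligned}$$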
Note that we have the following estimate: for every vector \(\mu \in \mathbb {R}^M\) and every \(q \in \mathbf {Q}\),
$$\begin{aligned} \begin{aligned}&\eta ^{a}(\mu , w^{a}_{q}(\mu ), w^{a}(\mu ), q) \ge 0, \quad \eta ^{b}(\mu , w^{b}_{q}(\mu ), w^{b}(\mu ), q) \ge 0. \end{aligned} \end{aligned}$$
Since \(\tilde{\theta }_{q}(T) \le \theta _{q}(T)\) and \(\tilde{\theta }'_{q}(t) \ge \theta '_{q}(t)\), it can be shown, similarly to the proof of the upper bound, that for every \(q \in \mathbf {Q}\):
$$\begin{aligned} \begin{aligned}&\theta _{q}(t)\ge \tilde{\theta }_{q}(t) \ge - \frac{1}{2} \gamma \sigma ^2 Q^2 T -l(Q). \end{aligned} \end{aligned}$$
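The last inequality uses \(t - T \in [-T, 0]\), \(|q| \le Q\) and, assuming the terminal penalty l is nondecreasing, \(l(|q|) \le l(Q)\):
$$\begin{aligned} \begin{aligned}&\tilde{\theta }_{q}(t) = \frac{1}{2} \gamma \sigma ^2 q^2 (t - T) - l(|q|) \ge - \frac{1}{2} \gamma \sigma ^2 Q^2 T - l(Q). \end{aligned} \end{aligned}$$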
\(\square \)
To prove the existence of a classical solution to the coupled ODE system (3.8), we cite the Picard–Lindelöf theorem from ODE theory, which provides local existence and uniqueness of solutions.
Theorem 5.11
(Picard–Lindelöf theorem) Consider the initial value problem in \(\mathbb {R}^M\):
$$\begin{aligned} y'(t) = F(t, y(t)),\; y(t_0) = y_0, \end{aligned}$$
where \(F : \mathbb {R} \times \mathbb {R}^M \rightarrow \mathbb {R}^M\) is uniformly Lipschitz continuous in y with Lipschitz constant L (independent of t) and continuous in t. Then, for some value \(\varepsilon > 0\), there exists a unique solution y(t) to the initial value problem on the interval \( [t_{0}-\varepsilon ,t_{0}+\varepsilon ]\).
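The fixed-point scheme behind the theorem can be illustrated numerically. The following sketch (a hypothetical scalar example, not part of the proof) computes the Picard iterates \(y_{n+1}(t) = y_0 + \int _{t_0}^{t} F(s, y_n(s)) \, {\mathrm{d}}s\) on a grid for \(F(t, y) = y\), \(y(0) = 1\), whose unique solution is \(e^{t}\):

```python
# Illustrative sketch: Picard iteration for the scalar IVP
#   y'(t) = y(t), y(0) = 1,  exact solution y(t) = exp(t).
# Each iterate applies the integral operator of the Picard-Lindelof proof,
# with the integral approximated by the trapezoidal rule on a grid.
import math

def picard_iterate(F, y0, t_grid, n_iter):
    """Return the n_iter-th Picard iterate of y' = F(t, y), y(t_grid[0]) = y0."""
    y = [y0 for _ in t_grid]                      # y_0(t) := y0 (constant guess)
    for _ in range(n_iter):
        y_new = [y0]
        integral = 0.0
        for k in range(1, len(t_grid)):
            h = t_grid[k] - t_grid[k - 1]
            # trapezoidal approximation of the integral over [t_{k-1}, t_k]
            integral += 0.5 * h * (F(t_grid[k - 1], y[k - 1]) + F(t_grid[k], y[k]))
            y_new.append(y0 + integral)
        y = y_new
    return y

t_grid = [i / 100 for i in range(51)]             # [0, 0.5], inside the local interval
y = picard_iterate(lambda t, y: y, 1.0, t_grid, n_iter=20)
err = max(abs(yk - math.exp(tk)) for tk, yk in zip(t_grid, y))
```

After 20 iterations the iterates have converged (the contraction factor shrinks like \(t^n/n!\)), and the remaining error is the trapezoidal discretization error.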
The next lemma is a direct consequence of the proof of Theorem 5.11; see Teschl [18]. It allows us to extend the local existence and uniqueness of the solution to global existence and uniqueness.
Lemma 5.12
Let \(C_{{a,b}}=[t_{0}-a,t_{0}+a]\times B_{b}(y_{0})\), where \(B_{b}(y_{0})\) is a closed ball in \(\mathbb {R}^M\) with center \(y_0\) and radius b. Define
$$\begin{aligned} {\begin{aligned}&\bar{M} := \sup _{{(t, y) \in C_{{a,b}}}} \Vert F(t,y)\Vert . \end{aligned}} \end{aligned}$$
Then, the solution to the initial value problem in Theorem 5.11 exists and is unique on the interval \([t_0 - \epsilon , t_0 + \epsilon ]\), provided \(\epsilon \) satisfies the following:
$$\begin{aligned} {\begin{aligned}&\epsilon < \min \left\{ \frac{b}{\bar{M}},\frac{1}{L}, a\right\} . \end{aligned}} \end{aligned}$$
Theorem 5.13
Consider the terminal-value ODE problem on [0, T]:
$$\begin{aligned} \begin{aligned}&\theta '(t) = F(t,\theta (t)), \; \theta (T) = \theta _0, \end{aligned} \end{aligned}$$
(5.16)
where \(F : [0, T] \times \mathbb {R}^{M} \rightarrow \mathbb {R}^{M}\) is jointly locally Lipschitz continuous. Assume that there exists a constant K such that, whenever a solution \(\theta \) exists on a subinterval of [0, T], \(\theta (t) \in [-K, K]^M\) on that subinterval. Then, there exists a unique solution to (5.16) on [0, T].
Proof
Define \(A_{T, 2 \sqrt{M} K} := [0, T] \times [-2 \sqrt{M} K, 2 \sqrt{M} K]^M\). Since F is continuous and \(A_{T, 2 \sqrt{M} K}\) is compact, the following constant is finite:
$$\begin{aligned} {\begin{aligned}&C := \sup _{(t,y) \in A_{T, 2 \sqrt{M} K}} \Vert F(t,y)\Vert . \end{aligned}} \end{aligned}$$
(5.17)
Since F is jointly locally Lipschitz continuous, there exists a collection of open sets \(A_{i}\) such that F is Lipschitz continuous on each \(A_{i}\) with Lipschitz coefficient \(L_i\), and \(A_{T, 2 \sqrt{M} K} \subset \cup _{i} A_{i}\). By the Heine–Borel theorem, there is a finite index set I such that \(A_{T, 2 \sqrt{M} K} \subset \cup _{i \in I} A_{i}\). Defining \(L := \max _{i \in I} L_i\), we conclude that F is Lipschitz continuous on the compact set \(A_{T, 2 \sqrt{M} K}\) with uniform Lipschitz coefficient L.
As the terminal value \(\theta _{0} \in [-K, K]^M\), define \(C^{0}_{T, \sqrt{M} K} := [0, T] \times B_{\sqrt{M} K}(\theta _{0})\); then \(C^{0}_{T, \sqrt{M} K} \subset A_{T, 2 \sqrt{M} K}\). For \(\epsilon := \min \{\frac{\sqrt{M} K}{C}, \frac{1}{L}, T \}\), the solution \(\theta \) to the ODE system (5.16) exists and is unique on \([T - \epsilon , T]\). If \(\epsilon = T\), we are done; otherwise, set the new terminal time \(\tilde{T} := T - \epsilon \). Since \(\theta (\tilde{T}) \in [-K, K]^M\) by assumption, we may take the new terminal value \(\theta _{0} := \theta (\tilde{T})\) and define \(C^{1}_{\tilde{T}, \sqrt{M} K} := [0, \tilde{T}] \times B_{\sqrt{M} K}(\theta (\tilde{T})) \subset A_{T, 2 \sqrt{M} K}\). For \(\epsilon := \min \{\frac{\sqrt{M} K}{C}, \frac{1}{L}, \tilde{T} \}\), the solution \(\theta \) to (5.16) exists and is unique on \([\tilde{T} - \epsilon , \tilde{T}]\), and hence also on \([\tilde{T} - \epsilon , T]\). Since the step size \(\min \{\frac{\sqrt{M} K}{C}, \frac{1}{L} \}\) is bounded away from zero, repeating this process reaches \(\epsilon = \tilde{T}\) after a finite number of steps, which proves the existence and uniqueness of the solution \(\theta \) to (5.16) on the whole interval [0, T]. \(\square \)
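The chunked backward-extension argument can be sketched numerically. The example below is not the paper's code: it specializes to the decoupled benchmark system (5.15), with hypothetical parameters \(\gamma , \sigma , Q, T\) and penalty l, integrates backward from T in chunks of length at most \(\epsilon \) (restarting from the value reached at each new terminal time), and compares the result at t = 0 with the closed-form solution.

```python
# Sketch of the proof's extension scheme: solve the terminal-value problem on
# [T - eps, T], take theta(T - eps) as the new terminal value, and repeat until
# t = 0 is reached.  Classical RK4 is used inside each chunk.

def solve_terminal_value(F, T, theta_T, eps, n_sub=50):
    """Integrate theta'(t) = F(t, theta) backward from theta(T) = theta_T."""
    t_end, theta = T, list(theta_T)
    while t_end > 0.0:
        t_start = max(0.0, t_end - eps)        # next (earlier) terminal time
        h = (t_start - t_end) / n_sub          # negative step: backward in time
        t = t_end
        for _ in range(n_sub):                 # RK4 steps within the chunk
            k1 = F(t, theta)
            k2 = F(t + h / 2, [y + h / 2 * k for y, k in zip(theta, k1)])
            k3 = F(t + h / 2, [y + h / 2 * k for y, k in zip(theta, k2)])
            k4 = F(t + h, [y + h * k for y, k in zip(theta, k3)])
            theta = [y + h / 6 * (a + 2 * b + 2 * c + d)
                     for y, a, b, c, d in zip(theta, k1, k2, k3, k4)]
            t += h
        t_end = t_start                        # restart with the value reached

    return theta

gamma, sigma, Q, T = 0.1, 2.0, 3, 1.0          # hypothetical parameters
l = lambda q: 0.5 * q                          # hypothetical terminal penalty
qs = list(range(-Q, Q + 1))
# Right-hand side of (5.15): theta'_q(t) = 0.5 * gamma * sigma^2 * q^2
F = lambda t, theta: [0.5 * gamma * sigma ** 2 * q ** 2 for q in qs]
theta0 = solve_terminal_value(F, T, [-l(abs(q)) for q in qs], eps=0.3)
# Closed form from (5.15) at t = 0: theta_q(0) = -0.5*gamma*sigma^2*q^2*T - l(|q|)
closed = [-0.5 * gamma * sigma ** 2 * q ** 2 * T - l(abs(q)) for q in qs]
```

Because the right-hand side of (5.15) is constant in \(\theta \), the comparison with the closed form is exact up to floating-point rounding; for the coupled system (3.8) the same chunking applies with a numerical error controlled by the step size.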
Combining Lemma 5.10, Theorem 5.9, and Theorem 5.13, we can now show that the ODE system (3.8) has a unique classical solution.
Theorem 5.14
There exists a unique classical solution \(\theta \) to the ODE system (3.8) on [0, T].
Proof
According to Lemma 5.10, if a solution \(\theta \) exists on any subinterval of [0, T], then there exists a constant \(K \ge 0\) such that
$$\begin{aligned} \begin{aligned}&-K \le \theta _{q}(t) \le K. \end{aligned} \end{aligned}$$
Define \(F: [0, T] \times \mathbb {R}^{M} \rightarrow \mathbb {R}^{M}\) as
$$\begin{aligned} \begin{aligned}&F_{q}(t,\theta (t)) := \frac{1}{2} \gamma \sigma ^2 q^2 - \eta ^{a}(\theta (t),w^{a}_{q}(\theta (t)),w^{a}(\theta (t)), q) I^{a}(q) \\&\qquad \qquad \qquad - \eta ^{b}(\theta (t),w^{b}_{q}(\theta (t)),w^{b}(\theta (t)), q) I^{b}(q) \\&F(t,\theta (t)) := (F_{-Q}(t,\theta (t)), \ldots , F_{Q}(t,\theta (t))). \end{aligned} \end{aligned}$$
As the inventory set \(\mathbf {Q}\) is finite, the original ODE system (3.8) can be rewritten in vector form with F as in (5.16). Then, F is jointly locally Lipschitz continuous, and whenever a solution \(\theta \) exists on a subinterval of [0, T], \(\theta (t) \in [-K, K]^M\). By Theorem 5.13, the ODE system has a unique solution on [0, T]. This concludes the proof of Step 3. \(\square \)
Completion of Proof of Theorem 3.4
From Steps 1, 2 and 3, there exist unique locally Lipschitz continuous functions \(w^{a}, w^{b}\) that satisfy the generalized Isaacs condition in Definition 3.2, the ODE system (3.8) is well defined and equivalent to the ODE system (3.5), and there exists a unique classical solution to (3.8). Define the equilibrium value function for \(G_{\mathrm{mm}}\) by (3.1) and the equilibrium controls by (3.9). As \(\theta \) is the classical solution to the ODE system (3.8), it is continuous on [0, T] and hence bounded, so both \(\pi ^{a}(t) = w^{a}(\theta (t))\) and \(\pi ^{b}(t) = w^{b}(\theta (t))\) are bounded on [0, T]. Moreover, \(\theta \), \(\pi ^{a}(t)\) and \(\pi ^{b}(t)\) satisfy the ODE system (3.5). Hence, from the verification Theorem 3.3, the equilibrium for the game \(G_{\mathrm{mm}}\) exists. On the other hand, as the solution to the ODE system (3.5) is unique, by Theorem 3.1 the equilibrium point is also unique.