Appendix A: Proof of Lemma 2.1
It can be easily checked that the drift and volatility coefficients satisfy the Yamada–Watanabe condition of Proposition 5.2.13 in [27]. An application of that result yields the existence of a unique non-explosive strong solution to the SDE (2.1).
Next we prove the positivity of this strong solution. Since (2.1) has only positive jumps, it suffices to check that the intensity \(\xi^{(k)}\) stays positive when \(c_k=d_k=0\). Let \(\widetilde{\xi}^{(k)}\) be the corresponding (strong) solution to the SDE (2.1) with \(c_k=d_k=0\). Let \(\ell^{a+}(M) = (\ell^{a+}_{t}(M);\ t\geq0)\) be the upper local time process of a general continuous semimartingale \(M=(M_t;\ t\geq0)\) at the point \(a\in\mathbb{R}\). Then \(\ell^{a+}(M)\) may be expressed as
$$\begin{aligned} \ell^{a+}_t(M) = \lim_{\varepsilon\downarrow0}\frac{1}{\varepsilon}\int_0^t\mathbf {1}_{\{a\leq M_s<a+\varepsilon\}}\,d\langle M,M\rangle_s,\quad t\geq0. \end{aligned}$$
We next verify that the upper local time process \(\ell^{0+}(\widetilde{\xi}^{(k)})\) of the continuous semimartingale \(\widetilde{\xi}^{(k)}\) at 0 is zero. When \(\rho>\frac{1}{2}\), for all t>0 and ε>0, we have
$$\begin{aligned} \frac{1}{\varepsilon}\int_0^t\mathbf{1}_{\{0\leq \widetilde{\xi}^{(k)}_s<\varepsilon\}}\,d\langle \widetilde{\xi}^{(k)},\widetilde{\xi}^{(k)} \rangle_s=\frac{\sigma_k^2}{\varepsilon}\int_0^t\mathbf{1}_{\{0\leq \widetilde{\xi}^{(k)}_s<\varepsilon\}}(\widetilde{\xi}^{(k)}_s)^{2\rho}\,ds \leq\sigma_k^2\varepsilon^{2\rho-1}t, \end{aligned}$$
which approaches zero as ε↓0. This shows that \(\ell^{0+}(\widetilde{\xi}^{(k)})\equiv0\) when \(\rho>\frac {1}{2}\). For \(\rho=\frac{1}{2}\), using the occupation time formula, we get
$$\begin{aligned} \int_{\mathbb{R}}\frac{1}{|a|}\mathbf{1}_{\{a\neq0\}}\ell^{a+}_t(\widetilde{\xi}^{(k)})\,da =&\sigma_k^2\int_0^t\frac{1}{|\widetilde{\xi}^{(k)}_s|}\mathbf{1}_{\{|\widetilde{\xi}^{(k)}_s|>0\}}|\widetilde{\xi}^{(k)}_s|^{2\rho}\,ds \\ =&\sigma_k^2\int_0^t\mathbf{1}_{\{|\widetilde{\xi}^{(k)}_s|>0\}}\,ds\leq\sigma_k^2t,\quad t\geq0. \end{aligned}$$
Note that \(|a|^{-1}\) is not integrable in any neighborhood of a=0. Hence it must hold that \(\ell^{0+}_{t}(\widetilde{\xi}^{(k)})=0\) for all t≥0. Using Tanaka's formula, it follows that
$$\begin{aligned} \mathbb{E} \big[(\widetilde{\xi}^{(k)}_{t\wedge\varsigma _m})_{-}\big] =&\mathbb{E} \big[(\xi^{(k)}_{0})_{-}\big] -\mathbb{E} \left[\int_0^{t\wedge\varsigma_m}\mathbf{1}_{\{ \widetilde{\xi}^{(k)}_s\leq0\}}d\widetilde{\xi}^{(k)}_s\right] +\frac{1}{2}\mathbb{E} \big[\ell_{t\wedge\varsigma _m}^{0+}(\widetilde{\xi}^{(k)})\big] \\ =&-\alpha_k\mathbb{E} \left[\int_0^{t\wedge\varsigma_m}\mathbf {1}_{\{\widetilde{\xi}^{(k)}_s\leq0\}}\,ds\right] +\kappa_k\mathbb{E} \left[\int_0^{t\wedge\varsigma_m}\mathbf {1}_{\{\widetilde{\xi}^{(k)}_s\leq0\}}\widetilde{\xi}^{(k)}_s\,d s\right] \\ \leq& 0,\quad t\geq0, \end{aligned}$$
where \(\varsigma_{m}=\inf\{t>0;\ |\widetilde{\xi}^{(k)}_t|\geq m\}\) with \(m\in\mathbb{N} \). This implies that \(\widetilde{\xi}^{(k)}_{t\wedge\varsigma_{m}}\geq0\) \(\mathbb{P} \)-a.s. for each \(m\in\mathbb{N} \). Letting m→∞, we conclude that \(\widetilde{\xi}^{(k)}_{t} \geq0\) \(\mathbb{P} \)-a.s. for all t≥0.
By virtue of the Feller boundary classification criteria, we have that the boundary 0 is unattainable for \(\widetilde{\xi}^{(k)}\) when \(\rho>\frac{1}{2}\). Thus the proof of the lemma is complete. □
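As a purely numerical illustration of the positivity statement (separate from the proof above), one can simulate the jump-free dynamics \(d\xi_t=(\alpha_k-\kappa_k\xi_t)\,dt+\sigma_k\xi_t^{\rho}\,dW_t\) with a full-truncation Euler scheme, which clips negative excursions of the discrete iterates at 0. All parameter values below are arbitrary illustrative choices, not taken from the model.

```python
import numpy as np

# Full-truncation Euler scheme for the jump-free CEV intensity
#   d xi_t = (alpha - kappa * xi_t) dt + sigma * xi_t^rho dW_t.
# The scheme clips negative excursions at 0, so discrete paths stay
# nonnegative by construction, mirroring the lemma. Parameters are
# illustrative only.

def simulate_cev(alpha=0.1, kappa=0.5, sigma=0.3, rho=0.75,
                 x0=0.2, T=5.0, n_steps=5000, n_paths=200, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)
    min_seen = x0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        drift = (alpha - kappa * x) * dt
        diff = sigma * np.maximum(x, 0.0) ** rho * dw
        x = np.maximum(x + drift + diff, 0.0)  # full truncation at 0
        min_seen = min(min_seen, x.min())
    return x, min_seen

final, min_seen = simulate_cev()
print(min_seen >= 0.0)  # True: the truncated scheme never goes below 0
```

The clipping is a property of the discretisation, not a new proof; it simply keeps the scheme consistent with the nonnegativity established by the Tanaka argument.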
Appendix B: Proofs related to weak convergence analysis
B.1 Moment estimate of intensities of K-names
Recall that the intensity process \(\xi^{(k)}=(\xi^{(k)}_{t};\ t\geq0)\) of the kth name follows the CEV process with jumps in (2.1).
Lemma B.1
Let Assumption (A2) hold. Then, for any T>0,
$$\begin{aligned} \sup_{0\leq t\leq T, K\in\mathbb{N} }\frac{1}{K}\sum _{k=1}^K\mathbb{E} \big[\big|\xi_{t}^{(k)}\big|^\beta\big]<+\infty, \end{aligned}$$
(B.1)
where \(1\leq\beta\leq4\).
The proof of the moment estimate (B.1) follows from standard arguments: apply Itô's formula, then use Hölder's inequality, the Burkholder–Davis–Gundy inequality and Gronwall's lemma. The full details are omitted here.
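For the jump-free case, the first moment can be sanity-checked against its exact ODE: formally taking expectations in \(d\xi_t=(\alpha-\kappa\xi_t)\,dt+\sigma\xi_t^{\rho}\,dW_t\) gives \(m'(t)=\alpha-\kappa m(t)\), so \(m(t)=\frac{\alpha}{\kappa}+(x_0-\frac{\alpha}{\kappa})e^{-\kappa t}\), which is bounded on [0,T]. A Monte Carlo sketch under arbitrary illustrative parameters:

```python
import numpy as np

# Monte Carlo first moment of the jump-free CEV intensity
#   d xi_t = (alpha - kappa * xi_t) dt + sigma * xi_t^rho dW_t
# versus the exact ODE solution m(t) = alpha/kappa + (x0 - alpha/kappa) e^{-kappa t}.
# All parameter values are illustrative only.
rng = np.random.default_rng(2)
alpha, kappa, sigma, rho = 0.2, 1.0, 0.3, 0.75
x0, T, n_steps, n_paths = 0.5, 2.0, 1000, 50_000
dt = T / n_steps
x = np.full(n_paths, x0)
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    x = np.maximum(x + (alpha - kappa * x) * dt
                   + sigma * np.maximum(x, 0.0) ** rho * dw, 0.0)
mc_mean = x.mean()
exact = alpha / kappa + (x0 - alpha / kappa) * np.exp(-kappa * T)
print(mc_mean, exact)  # the estimate tracks the ODE solution
```

This only checks β=1 without jumps; the lemma's uniform-in-K bound for 1≤β≤4 with jumps is what the omitted Itô/Gronwall argument delivers.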
B.2 Proof of Lemma 4.2
It follows from the definition (2.3) of default times that for each k∈{1,2,…,K},
$$\begin{aligned} {\mathcal{M}}_t^{(k)}:=H_t^{(k)} - \int_0^t\overline{H}_s^{(k)}\xi _s^{(k)}\,ds,\quad t\geq0 \end{aligned}$$
(B.2)
is a \((\mathbb{P} ,\mathcal{G}_{t}^{(k)})\)-martingale. Hence the third term on the right-hand side of Equation (4.2) may be rewritten as
$$\begin{aligned} &\frac{1}{K}\int_0^t\sum_{k=1}^Kf\big(p_k,(Y_1^{(k)},\widetilde {Y}_1^{(k)}),\xi_{s-}^{(k)}\big)\,d\overline{H}_s^{(k)} \\ &\quad =-\frac{1}{K}\int_0^t\sum_{k=1}^Kf \big(p_k,(Y_1^{(k)},\widetilde {Y}_1^{(k)}),\xi_{s-}^{(k)}\big)\,d{H}_s^{(k)} \\ &\quad =-\frac{1}{K}\int_0^t\sum_{k=1}^Kf \big(p_k,(Y_1^{(k)},\widetilde{Y}_1^{(k)}),\xi_{s-}^{(k)}\big)\,d \mathcal{M}_s^{(k)} \\ &\qquad{}-\frac{1}{K}\int_0^t\sum_{k=1}^K\xi_{s}^{(k)}f \big(p_k,(Y_1^{(k)},\widetilde{Y}_1^{(k)}),\xi_{s-}^{(k)}\big)\overline {H}_s^{(k)}\,d s. \end{aligned} $$
Thus there exists a (local) martingale \(\widehat{\mathcal{M}}^{(K)}=(\widehat{\mathcal{M}}_{t}^{(K)};\ t\geq 0)\) such that
$$\begin{aligned} {\varPhi}(\nu_t^{(K)}) =& {\varPhi}(\nu_0^{(K)}) + \sum _{m=1}^M\int_0^t\frac{\partial\varphi}{\partial x_m}\big(\nu _s^{(K)}(\boldsymbol{f})\big) \nu_s^{(K)}(\mathcal{L}_{11}f_m)\,ds + \widehat{\mathcal {M}}^{(K)}_t \\ &{}+\frac{1}{2K^2}\sum_{m,n=1}^M\int_0^t\frac{\partial^2\varphi }{\partial x_m\partial x_{n}}\big(\nu_s^{(K)}(\boldsymbol{f})\big) \\ &\begin{aligned}[c] {}\times\bigg(\sum_{k=1}^K&\sigma_k^2\frac{\partial f_m}{\partial x}\big(p_k,(Y_1^{(k)},\widetilde{Y}_1^{(k)}),\xi_s^{(k)}\big) \\ &{}\times\frac{\partial f_n}{\partial x}\big(p_k,(Y_1^{(k)},\widetilde{Y}_1^{(k)}),\xi_s^{(k)}\big)(\xi _s^{(k)})^{2\rho}\overline{H}_s^{(k)}\bigg)\,ds \end{aligned} \\ &{}-\frac{1}{K} \sum_{m=1}^M \int_0^t \frac{\partial\varphi }{\partial x_m}\big(\nu_s^{(K)}(\boldsymbol{f})\big) \sum_{k=1}^K\xi_{s}^{(k)} f_m\big(p_k,(Y_1^{(k)},\widetilde{Y}_1^{(k)} ),\xi_{s-}^{(k)}\big)\overline{H}_s^{(k)} ds \\ &{}+\sum_{k=1}^K\widehat{\lambda}_k\int_0^t\Big(\varphi\big(\nu _s^{(K)}(\boldsymbol{f})+\boldsymbol{J}_s^{(K,k)}(Y_1^{(k)},\widetilde {Y}_1^{(k)})\big) -\varphi\big(\nu_s^{(K)}(\boldsymbol{f})\big)\Big)\,ds \\ &{}+\widehat{\lambda}_c\int_0^t\Big(\varphi\big(\nu_s^{(K)}(\boldsymbol{f})+\boldsymbol{J}_s^{(K,c)}(\boldsymbol{Y}_1, \widetilde{\boldsymbol{Y}}_1)\big)-\varphi \big(\nu_s^{(K)}(\boldsymbol{f})\big)\Big)\,ds, \end{aligned}$$
(B.3)
where t≥0. Note that the fifth line of the above equation may be rewritten as
$$-\sum_{m=1}^M\int_0^t\frac{\partial\varphi}{\partial x_m}\big(\nu _s^{(K)}(\boldsymbol{f})\big) \nu_{s}^{(K)}(\chi_0f_m)\,ds, $$
where \(\chi_0 f(p,y,x)=xf(p,y,x)\) with \((p,y,x)\in\mathcal{O}\). Let \(\boldsymbol{J}_{m,\cdot}^{(K,k)}\) (resp. \(\boldsymbol{J}_{m,\cdot}^{(K,c)}\)) be the mth component of \(\boldsymbol{J}_{\cdot}^{(K,k)}(Y_{1}^{(k)},\widetilde{Y}_{1}^{(k)})\) (resp. \(\boldsymbol{J}_{\cdot}^{(K,c)}(\boldsymbol{Y}_{1},\widetilde{\boldsymbol{Y}}_{1})\)) for m=1,2,…,M. Observe that
$$\begin{aligned} &\varphi\big(\nu_s^{(K)}(\boldsymbol{f})+\boldsymbol{J}_s^{(K,k)}(Y_1^{(k)},\widetilde{Y}_1^{(k)})\big)-\varphi\big(\nu _s^{(K)}(\boldsymbol{f})\big)\\ &\quad \simeq\sum_{m=1}^M\frac{\partial\varphi}{\partial x_m}\big(\nu _s^{(K)}(\boldsymbol{f})\big)\boldsymbol{J}_{m,s}^{(K,k)} \\ & \quad \simeq\sum_{m=1}^M\frac{\partial\varphi}{\partial x_m}\big(\nu_s^{(K)}(\boldsymbol{f})\big) \frac{\partial f_m}{\partial x}\big(p_k,(Y_1^{(k)},\widetilde{Y}_1^{(k)}),\xi_{s-}^{(k)}\big)\frac{ d_k\widetilde{Y}_1^{(k)}}{K}\overline{H}_s^{(k)} , \end{aligned}$$
and
$$\begin{aligned} &\varphi\big(\nu_s^{(K)}(\boldsymbol{f})+\boldsymbol{J}_s^{(K,c)}(\boldsymbol{Y}_1)\big)-\varphi\big(\nu_s^{(K)}(\boldsymbol{f})\big)\\ &\quad \simeq\sum_{m=1}^M\frac{\partial\varphi}{\partial x_m}\big(\nu _s^{(K)}(\boldsymbol{f})\big)\boldsymbol{J}_{m,s}^{(K,c)} \\ &\quad \simeq\sum_{m=1}^M\frac{\partial\varphi}{\partial x_m}\big(\nu_s^{(K)}(\boldsymbol{f})\big) \sum_{k=1}^K\frac{\partial f_m}{\partial x}\big(p_k,(Y_1^{(k)},\widetilde{Y}_1^{(k)}),\xi_{s-}^{(k)}\big)\frac{c_kY_1^{(k)}}{K}\overline{H}_s^{(k)} , \end{aligned}$$
where \(a_K\simeq b_K\) means that \(\lim_{K\to\infty}|a_K-b_K|=0\). Accordingly, the sixth line of (B.3) may be rewritten as
$$\begin{aligned} &\sum_{m=1}^M\int_0^t\frac{\partial\varphi}{\partial x_m}\big(\nu_s^{(K)}(\boldsymbol{f})\big) \frac{1}{K}\sum_{k=1}^K\widehat{\lambda}_k \frac{\partial f_m}{\partial x}\big(p_k,(Y_1^{(k)},\widetilde {Y}_1^{(k)}),\xi_{s-}^{(k)}\big)\,d_k\widetilde{Y}_1^{(k)}\overline {H}_s^{(k)} \,ds \\ & \quad \simeq\sum_{m=1}^M\int_0^t\frac{\partial\varphi}{\partial x_m}\big(\nu_s^{(K)}(\boldsymbol{f})\big)\nu_s^{(K)}(\mathcal{L}_{21}f_m)\,ds, \end{aligned}$$
and the seventh line of (B.3) may be rewritten as
$$\begin{aligned} &\widehat{\lambda}_c\sum_{m=1}^M\int_0^t\frac{\partial\varphi }{\partial x_m}\big(\nu_s^{(K)}(\boldsymbol{f})\big)\ \frac{1}{K}\sum_{k=1}^K \frac{\partial f_m}{\partial x}\big(p_k,(Y_1^{(k)},\widetilde {Y}_1^{(k)}),\xi_{s-}^{(k)}\big)c_kY_1^{(k)}\overline{H}_s^{(k)}\, ds \\ &\quad \simeq\widehat{\lambda}_c\sum_{m=1}^M\int_0^t\frac{\partial \varphi}{\partial x_m}\big(\nu_s^{(K)}(\boldsymbol{f})\big)\nu_s^{(K)}(\mathcal{L}_{22}f_m)\,ds. \end{aligned}$$
Finally, the second to fourth line of (B.3) may be rewritten as
$$\begin{aligned} \frac{1}{2K}\sum_{m,n=1}^M\int_0^t\frac{\partial^2\varphi }{\partial x_m\partial x_{n}}\big(\nu_s^{(K)}(\boldsymbol{f})\big)\nu_s^{(K)}\big(\chi_1(\mathcal{L}_2f_m) \chi_1(\mathcal {L}_2f_n)\big)\,ds, \end{aligned}$$
(B.4)
with the operators \(\chi_1 f(p,y,x)=\sigma x^{\rho}f(p,y,x)\) and \({\mathcal{L}}_{2}f(p,y,x)=\frac{\partial f}{\partial x}(p,y,x)\). We now prove that (B.4) approaches zero as K→∞. Indeed, let
$$\varsigma_a^{(K)}=\inf\Big\{ t\geq0 ; \max_{k=1,\dots, K} |\xi_t^{(k)} |\geq a\Big\} $$
for a>0. Then, for each fixed a>0, as K→∞,
$$\begin{aligned} \Bigg|\frac{1}{2K}\sum_{m,n=1}^M\int_0^{t\wedge\varsigma _a^{(K)}}\frac{\partial^2\varphi}{\partial x_m\partial x_{n}}\big(\nu_s^{(K)}(\boldsymbol{f})\big)\nu_s^{(K)}\big(\chi_1(\mathcal{L}_2f_m) \chi_1(\mathcal {L}_2f_n)\big)\,ds\Bigg|\leq\frac{C_a}{K}\longrightarrow0. \end{aligned}$$
Letting a→∞, we conclude that the quantity in (B.4) approaches zero as K→∞ since \(\varsigma_{a}^{(K)}\to+\infty\). This completes the proof of the lemma. □
B.3 Proof of Lemma 4.3
Let 0≤t≤T. Recall that the decomposition of \(\nu_{t}^{(K)}(f)\) for \(f\in C^{\infty}(\mathcal{O})\) admits the form
$$\begin{aligned} \nu_t^{(K)}(f)=\nu_0^{(K)}(f) + A_t^{(K)} + \widehat{A}_t^{(K)} + B_t^{(K)} + \widehat{B}_t^{(K)}, \end{aligned}$$
(B.5)
where we have defined
$$\begin{aligned} A_t^{(K)} =& \int_0^t\nu_s^{(K)}(\mathcal{L}_{11}f)\,ds, \\ \widehat{A}_t^{(K)} =&\frac{1}{K}\int_0^t\sum_{k=1}^Kf\big(p_k,(Y_1^{(k)},\widetilde{Y}_1^{(k)}),\xi_{s-}^{(k)}\big)\,d\overline {H}_s^{(k)}, \\ B_t^{(K)} =&\frac{1}{K}\int_0^t\sum_{k=1}^K \sigma_k\frac{\partial f}{\partial x} \big(p_k,(Y_1^{(k)},\widetilde{Y}_1^{(k)}),\xi_s^{(k)}\big)\overline{H}_s^{(k)}(\xi_s^{(k)})^{\rho}\, dW_s^{(k)} , \\ \widehat{B}_t^{(K)} =&\begin{aligned}[t] \frac{1}{K}\int_0^t\sum_{k=1}^K \Big(&f \big(p_k,(Y_1^{(k)},\widetilde{Y}_1^{(k)}),\xi_{s-}^{(k)}+ d_k\widetilde{Y}_1^{(k)}\big)\\ &{}-f \big(p_k,(Y_1^{(k)},\widetilde {Y}_1^{(k)}),\xi_{s-}^{(k)}\big)\Big)\overline{H}_s^{(k)}\,d \widehat{N}_s^{(k)} \end{aligned} \\ &\begin{aligned}[c] {}+ \frac{1}{K}\int_0^t \sum_{k=1}^K\Big(&f \big(p_k,(Y_1^{(k)},\widetilde{Y}_1^{(k)}),\xi_{s-}^{(k)}+ c_kY_1^{(k)}\big) \\ &{}-f \big(p_k,(Y_1^{(k)},\widetilde {Y}_1^{(k)}),\xi_{s-}^{(k)}\big)\Big)\overline{H}_s^{(k)} \,d \widehat{N}_s^{(c)}. \end{aligned} \end{aligned}$$
(B.6)
Then for any T>0, we have
$$\begin{aligned} \sup_{0\leq t\leq T}\big|\nu_t^{(K)}(f)\big| \leq&\sup_{0\leq t\leq T}\big|A_t^{(K)}\big| +\sup_{0\leq t\leq T}\big|\widehat{A}_t^{(K)}\big| \\ &{}+\sup_{0\leq t\leq T}\big|B_t^{(K)}\big|+\sup_{0\leq t\leq T}\big|\widehat{B}_t^{(K)}\big|. \end{aligned}$$
(B.7)
Next we estimate the expectation of each term on the right-hand side of the above equation. First, by Assumption (A2), we have
$$\begin{aligned} \mathbb{E} \bigg[\sup_{0\leq t\leq T}\big|A_t^{(K)}\big|\bigg] \leq&\begin{aligned}[t] \frac{1}{K}\sum_{k=1}^K\mathbb{E} \bigg[\int_0^T&\bigg|\frac{1}{2}\sigma_k^2(\xi_s^{(k)})^{2\rho} \frac{\partial^2 f}{\partial x^2}\big(p_k,(Y_1^{(k)},\widetilde{Y}_1^{(k)} ),\xi_s^{(k)}\big)\\ &{}+(\alpha_k-\kappa_k\xi_{s}^{(k)})\frac {\partial f}{\partial x}\big(p_k,(Y_1^{(k)},\widetilde {Y}_1^{(k)}),\xi_s^{(k)}\big)\bigg|\,ds\bigg] \end{aligned}\\ \leq&\frac{C_p^2}{2}\left\|\frac{\partial^2 f}{\partial x^2}\right\|\int_0^T \frac{1}{K}\sum_{k=1}^K\mathbb{E} \big[(\xi _s^{(k)})^{2\rho}\big] \,ds \\ &{} +C_p\left\|\frac{\partial f}{\partial x}\right\|\int_0^T \frac {1}{K}\sum_{k=1}^K\mathbb{E} \big[\xi_s^{(k)}\big] \,ds +C_p T\left\|\frac{\partial f}{\partial x}\right\|, \end{aligned}$$
where, for a given function \(f\in C^{\infty}(\mathcal{O})\), \(\|f\|\) denotes the supremum norm on \(\mathcal{O}\), i.e., \(\|f\| = \sup_{(p,y,x)\in\mathcal{O}} |f(p,y,x)|\). The constant \(C_{p}>0\) above is chosen as \(C_{p} =\max_{k\in\{1,\dots,K\}}\{\alpha_{k},\kappa_{k},\sigma_{k},c_{k},d_{k},\widehat{\lambda}_{k},m_{k}^{Y}, m_{k}^{\tilde{Y}}\}\), which is finite by Assumption (A2).
We can bound the second term on the right-hand side of (B.7) as
$$\begin{aligned} \begin{aligned}[t] \mathbb{E} \bigg[\sup_{0\leq t\leq T}\big|\widehat{A}_t^{(K)}\big|\bigg]&\leq \frac{1}{K}\mathbb{E} \bigg[\int_0^T\sum_{k=1}^K\big|f \big(p_k,(Y_1^{(k)},\widetilde{Y}_1^{(k)}),\xi_{s-}^{(k)}\big)\big|\,d{H}_s^{(k)}\bigg]\\ &\leq\left\|f\right\|\mathbb{E} \bigg[\frac{1}{K}\sum _{k=1}^K{H}_T^{(k)}\bigg]\leq\|f\|, \end{aligned} \end{aligned}$$
where we have used \(\frac{1}{K}\sum_{k=1}^{K}{H}_{T}^{(k)}\leq1\) for all \(K\in\mathbb{N} \). Using the Burkholder–Davis–Gundy inequality, we can bound the third term on the right-hand side of (B.7) as
$$\begin{aligned} \mathbb{E} \bigg[\sup_{0\leq t\leq T}\big|B_t^{(K)}\big|\bigg] \leq&\frac{1}{K} \mathbb{E} \bigg[\int_0^T\sum_{k=1}^K\sigma_k^2\left|\frac {\partial f}{\partial x}\big(p_k,(Y_1^{(k)},\widetilde {Y}_1^{(k)}),\xi_s^{(k)}\big)\right|^2\overline{H}_s^{(k)}(\xi _s^{(k)})^{2\rho}\,ds\bigg]^{\frac{1}{2}} \\ \leq&{C_p}\left\|\frac{\partial f}{\partial x}\right\|\mathbb{E} \bigg[\int_0^T\frac{1}{K}\sum_{k=1}^K(\xi_s^{(k)})^{2\rho}\,ds\bigg]^{\frac{1}{2}} \\ \leq&\frac{C_p}{2}\left\|\frac{\partial f}{\partial x}\right\| \bigg(\int_0^T\frac{1}{K}\sum_{k=1}^K\mathbb{E} \big[(\xi _s^{(k)})^{2\rho}\big]\,ds+\frac{1}{K}\bigg) \\ \leq&\frac{C_p}{2}\left\|\frac{\partial f}{\partial x}\right\| \bigg(\int_0^T\frac{1}{K}\sum_{k=1}^K\mathbb{E} \big[(\xi _s^{(k)})^{2\rho}\big]\,ds+1\bigg). \end{aligned}$$
Finally, we have
$$\begin{aligned} \mathbb{E} \bigg[\sup_{0\leq t\leq T}\big|\widehat{B}_t^{(K)}\big|\bigg] \leq& \begin{aligned}[t] {\frac{1}{K}}\mathbb{E} \bigg[\int_0^T\sum_{k=1}^K&\big|f \big(p_k,(Y_1^{(k)},\widetilde{Y}_1^{(k)}),\xi_{s-}^{(k)}+d_k \widetilde{Y}_1^{(k)}\big)\\ &{}-f\big(p_k,(Y_1^{(k)},\widetilde{Y}_1^{(k)}),\xi _{s-}^{(k)}\big)\big|\overline{H}_s^{(k)}d\widehat{N}_s^{(k)}\bigg] \end{aligned}\\ &\begin{aligned}[c] +\,\frac{1}{K}\mathbb{E} \bigg[\int_0^T\sum_{k=1}^K&\big|f \big(p_k,(Y_1^{(k)},\widetilde{Y}_1^{(k)}),\xi_{s-}^{(k)}+c_k Y_1^{(k)}\big)\\ &{}-f \big(p_k,(Y_1^{(k)},\widetilde{Y}_1^{(k)}),\xi _{s-}^{(k)}\big)\big|\overline{H}_s^{(k)}\,d\widehat{N}_s^{(c)}\bigg] \end{aligned}\\ \leq&\left\|\frac{\partial f}{\partial x}\right\|\frac{1}{K}\sum_{k=1}^K \big(\lambda_k(c_k\vee d_k)m_k^Y\big) \\ \leq& C_p^2(C_p+\widehat{\lambda}_c)\left\|\frac{\partial f}{\partial x}\right\|, \end{aligned}$$
where we have used the mean-value theorem in the last inequality.
Note that \(\mathbb{E} [\nu_{0}^{(K)}(f)]\leq\|f\|\). Using (B.1) in Lemma B.1, we can find a constant \(C=C(T,\|f\|,\|\frac{\partial f}{\partial x}\| ,\|\frac{\partial^{2} f}{\partial x^{2}}\|)>0\) such that
$$\begin{aligned} \sup_{K\in\mathbb{N} }\mathbb{E} \bigg[\sup_{0\leq t\leq T}\big|\nu_t^{(K)}(f)\big|\bigg]<C<+\infty. \end{aligned}$$
From Chebyshev’s inequality, it follows that (4.6) holds. □
B.4 Proof of Lemma 4.4
From the decomposition (B.5), it follows that
$$\begin{aligned} (\nu_{t+u}^{(K)}-\nu _{t}^{(K)})(f) =&A_{t+u}^{(K)}-A_{t}^{(K)}+\widehat {A}_{t+u}^{(K)}-\widehat{A}_{t}^{(K)} +B_{t+u}^{(K)}-B_{t}^{(K)} \\ &{}+\widehat{\mathcal{M}}_{t+u}^{(K)}-\widehat{\mathcal {M}}_{t}^{(K)} + P_{t+u}^{(K)} - P^{(K)}_t, \end{aligned}$$
(B.8)
where \(A_{t}^{(K)},\widehat{A}_{t}^{(K)},B_{t}^{(K)}\) are given by (B.6) and we have defined
$$\begin{aligned} \widehat{\mathcal{M}}_{t}^{(K)} =&\frac{1}{K}\sum_{k=1}^K\int _0^t\int_{\mathbb{R}_+}\big( f(p_k,y,\xi _{s-}^{(k)}+d_ky_2)-f(p_k,y,\xi_{s-}^{(k)})\big) \overline{H}_s^{(k)}\widehat{N}^{(k)}(ds,dy_2)\\ &\begin{aligned}[c] +\,\frac{1}{K}\sum_{k=1}^K\int_0^t\int_{\mathbb{R}_+}&\big(f(p_k,y,\xi_{s-}^{(k)}+c_ky_1)-f(p_k,y,\xi_{s-}^{(k)})\big)\\ &{}\times \overline{H}_s^{(k)}\widehat{N}^{(c)}(ds,dy_1), \end{aligned}\\ P^{(K)}_t =&\begin{aligned}[t] \frac{1}{K}\sum_{k=1}^K\widehat{\lambda}_k\int_0^t\int _{\mathbb{R}_+}&\big(f(p_k,y,\xi_{s-}^{(k)}+d_ky_2)-f(p_k,y,\xi _{s-}^{(k)})\big)\\ &{} \times \overline{H}_s^{(k)}F^{(k)}_{\widetilde{Y}}(dy_2)\,ds \end{aligned}\\ &\begin{aligned}[c] +\,\frac{1}{K}\sum_{k=1}^K\widehat{\lambda}_c\int_0^t\int _{\mathbb{R}_+}&\big(f(p_k,y,\xi_{s-}^{(k)}+c_ky_1)-f(p_k,y,\xi _{s-}^{(k)})\big)\\ &{} \times \overline{H}_s^{(k)}F^{(k)}_{Y}(dy_1)\,ds. \end{aligned} \end{aligned}$$
Here, for \((t,y)=(t,y_{1},y_{2})\in\mathbb{R}_{+}^{3}\), \(\widehat{N}^{(k)}(dt,d y_{2})\) and \(\widehat{N}^{(c)}(dt,dy_{1})\) denote the compensated Poisson random measures associated, respectively, with the idiosyncratic compound Poisson process \(\sum_{\ell=1}^{\widehat{N}_{\cdot}^{(k)}}\widetilde{Y}_{\ell}^{(k)}\) and with the systematic one given by \(\sum_{i=1}^{\widehat{N}_{\cdot}^{(c)}}Y_{i}^{(k)}\). Moreover, the measures \(F_{Y}^{(k)}(dy_{1})\) and \(F_{\widetilde{Y}}^{(k)}(dy_{2})\) are the distributions of the jump amplitudes \(Y_{1}^{(k)}\) and \(\widetilde{Y}_{1}^{(k)}\), respectively. Then
$$\begin{aligned} h^2\big(\nu_{t+u}^{(K)}(f),\nu_{t}^{(K)}(f)\big) \leq& 8\big(\big|A_{t+u}^{(K)}-A_{t}^{(K)}\big|+\big|\widehat {A}_{t+u}^{(K)}-\widehat{A}_{t}^{(K)}\big| +\big|P_{t+u}^{(K)}-P_{t}^{(K)}\big| \\ &+\big|{B}_{t+u}^{(K)}-B_{t}^{(K)}\big|^2+\big|\widehat{\mathcal {M}}_{t+u}^{(K)}-\widehat{\mathcal{M}}_{t}^{(K)} \big|^2\big). \end{aligned}$$
First, we have for 0≤u≤δ that
$$\begin{aligned} \big|A_{t+u}^{(K)}-A_{t}^{(K)}\big| \leq& \frac{C_p^2}{2}\left\|\frac{\partial^2 f}{\partial x^2}\right\| \int_t^{t+u} \frac{1}{K}\sum_{k=1}^K(\xi_s^{(k)})^{2\rho} \,ds \\ &{}+C_p\left\|\frac{\partial f}{\partial x}\right\|\int_{{t}}^{t+u} \frac{1}{K}\sum_{k=1}^K\xi_s^{(k)} \,ds +C_p\left\|\frac{\partial f}{\partial x}\right\|u \\ \leq&\frac{C_p^2}{4}\left\|\frac{\partial^2 f}{\partial x^2}\right\|\delta^{\frac{1}{4}}\bigg(1+\int_0^{T} \frac {1}{K}\sum_{k=1}^K(\xi_s^{(k)})^{4\rho} \,ds\bigg)+C_p\left\|\frac {\partial f}{\partial x}\right\|\delta \\ &{}+\frac{C_p}{2}\left\|\frac{\partial f}{\partial x}\right\|\delta ^{\frac{1}{4}}\bigg(1+\int_0^{T} \frac{1}{K}\sum_{k=1}^K(\xi _s^{(k)})^{2} \,ds\bigg)=:H_K^{1}(\delta). \end{aligned}$$
Next, we have
$$\begin{aligned} \big|\widehat{A}_{t+u}^{(K)}-\widehat{A}_{t}^{(K)}\big| \leq\frac{1}{K}\sum_{k=1}^K\int_t^{t+u}\left\|f\right\| d{H}_s^{(k)}=\left\|f\right\|\frac{1}{K}\sum _{k=1}^K(H_{t+u}^{(k)}-H_t^{(k)}). \end{aligned}$$
Note that the difference \(H_{t+u}^{(k)}-H_{t}^{(k)}\) admits the decomposition
$$\begin{aligned} H_{t+u}^{(k)}-H_t^{(k)}={\mathcal{M}}_{t+u}^{(k)}-{\mathcal {M}}_{t}^{(k)} +\int_t^{t+u}\overline{H}^{(k)}_s\xi_s^{(k)}\,ds, \end{aligned}$$
where the martingale \({\mathcal{M}}^{(k)}=({\mathcal{M}}_{t}^{(k)};\ t\geq0)\) is defined by (B.2). Then it holds that
$$\begin{aligned} &\mathbb{E} \bigg[\big|\widehat{A}_{t+u}^{(K)}-\widehat {A}_{t}^{(K)}\big|\bigg|\bigvee_{k=1}^K\mathcal{G}_t^{(k)}\bigg] \\ &\quad\leq\left\|f\right\|\frac{1}{K}\sum_{k=1}^K\mathbb{E} \bigg[{\mathcal{M}}_{t+u}^{(k)}-{\mathcal{M}}_{t}^{(k)} +\int _t^{t+u}\overline{H}^{(k)}_s\xi_s^{(k)}\,ds\bigg|\bigvee _{k=1}^K\mathcal{G}_t^{(k)}\bigg] \\ &\quad\leq\left\|f\right\|\mathbb{E} \bigg[\int_t^{t+u}\frac {1}{K}\sum_{k=1}^K\xi_s^{(k)}\,ds\bigg|\bigvee_{k=1}^K\mathcal {G}_t^{(k)}\bigg] \\ &\quad\leq\mathbb{E} \bigg[\frac{\|f\|}{2}\delta^{\frac {1}{4}}\bigg(1+\int_0^T\frac{1}{K}\sum_{k=1}^K(\xi _s^{(k)})^2\,ds\bigg)\bigg|\bigvee_{k=1}^K{\mathcal{G}}_t^{(k)}\bigg] =:\mathbb{E} \bigg[H_K^{2}(\delta)\bigg|\bigvee_{k=1}^K\mathcal {G}_t^{(k)}\bigg]. \end{aligned}$$
For the third difference term on the right-hand side of (B.8), we have
$$\begin{aligned} &\mathbb{E} \bigg[\big|B_{t+u}^{(K)}-B_t^{(K)}\big|^2\bigg|\bigvee_{k=1}^K\mathcal{G}_t^{(k)}\bigg]\\ &\quad = \mathbb{E} \bigg[\big|B_{t+u}^{(K)}\big|^2-\big|B_t^{(K)}\big|^2\bigg|\bigvee_{k=1}^K\mathcal{G}_t^{(k)}\bigg] \\ &\quad \leq C_p^2\left\|\frac{\partial f}{\partial x}\right\|^2\mathbb {E} \bigg[\int_t^{t+u}\frac{1}{K}\sum_{k=1}^K(\xi_s^{(k)})^{2\rho }\,ds\bigg|\bigvee_{k=1}^K\mathcal{G}_t^{(k)}\bigg] \\ &\quad \leq\mathbb{E} \bigg[\frac{C_p^2}{2}\left\|\frac{\partial f}{\partial x}\right\|^2\delta^{\frac{1}{4}}\bigg(1+\int _0^{T}\frac{1}{K}\sum_{k=1}^K(\xi_s^{(k)})^{4\rho}\,ds\bigg)\bigg|\bigvee_{k=1}^K\mathcal{G}_t^{(k)}\bigg]\\ &\quad =:\mathbb{E} \bigg[H_K^{3}(\delta)\bigg|\bigvee_{k=1}^K\mathcal {G}_t^{(k)}\bigg]. \end{aligned}$$
Next, we consider the fourth difference term on the right-hand side of (B.8). Using the inequality \((x+y)^2\leq2(x^2+y^2)\), the martingale property of \((\widehat{\mathcal{M}}^{(K)}_{t};\ t\geq0)\), the mean-value theorem and Assumption (A2), we have
$$\begin{aligned} &\mathbb{E} \bigg[ \big|\widehat{\mathcal {M}}^{(K)}_{t+u}-\widehat{\mathcal{M}}^{(K)}_{t}\big|^2\bigg|\bigvee_{k=1}^K\mathcal{G}_t^{(k)}\bigg] \\ &\quad \begin{aligned}[c] {}\leq2\mathbb{E} \Bigg[\frac{1}{K^2}\sum_{k=1}^K\int_t^{t+u}\int _{\mathbb{R}_+}&\big|f(p_k,y,\xi_{s-}^{(k)}+d_ky_2)-f(p_k,y,\xi _{s-}^{(k)})\big|^2 \\ &{} \times\overline{H}_s^{(k)}\widehat{N}^{(k)}(ds,dy_2)\bigg|\bigvee _{k=1}^K\mathcal{G}_t^{(k)}\Bigg] \end{aligned} \\ &\qquad \begin{aligned}[c] {}+2\mathbb{E} \Bigg[\frac{1}{K^2}\int_t^{t+u}\int_{\mathbb {R}_+}&\bigg(\sum_{k=1}^K\big|f(p_k,y,\xi _{s-}^{(k)}+c_ky_1)-f(p_k,y,\xi_{s-}^{(k)})\big| \overline{H}_s^{(k)}\bigg)^2 \\ &{} \times\widehat{N}^{(c)}(ds,dy_1)\bigg|\bigvee_{k=1}^K\mathcal {G}_t^{(k)}\Bigg] \end{aligned} \\ &\quad {}\leq2\mathbb{E} \bigg[\frac{1}{K^2}\left\|\frac{\partial f}{\partial x}\right\|^2\sum_{k=1}^K\widehat{\lambda}_kd_k^2\int _t^{t+u}\int_{\mathbb{R}_+}y_2^2{F}_{\widetilde {Y}}^{(k)}(dy_2)\,ds\bigg|\bigvee_{k=1}^K\mathcal{G}_t^{(k)}\bigg] \\ &\qquad{}+2\mathbb{E} \bigg[\frac{1}{K^2}\left\|\frac{\partial f}{\partial x}\right\|^2\widehat{\lambda}_c\int_t^{t+u}\int _{\mathbb{R}_+}\bigg(\sum_{k=1}^Kc_k\overline{H}_s^{(k)}\bigg)^2y_1^2F_Y^{(k)}(d y_1)\,ds\bigg|\bigvee_{k=1}^K\mathcal{G}_t^{(k)}\bigg] \\ &\quad {}\leq 2 C_p^2(C_p+\widehat{\lambda}_c)\left\|\frac{\partial f}{\partial x}\right\|^2\delta=:H_K^{4}(\delta). \end{aligned}$$
Finally, we have
$$\begin{aligned} \big|P_{t+u}^{(K)}-P_t^{(K)}\big| \leq&C_p(C_p+\widehat{\lambda }_c)\left\|\frac{\partial f}{\partial x}\right\| \frac{1}{K}\sum_{k=1}^K\int_t^{t+u}\int_{\mathbb {R}_+}y_1F_{Y}^{(k)}(dy_1)\,ds \\ &{}+2C_p^2\left\|\frac{\partial f}{\partial x}\right\|\frac {1}{K}\sum_{k=1}^K\int_t^{t+u}\int_{\mathbb{R}_+}y_2F_{\widetilde {Y}}^{(k)}(dy_2)\,ds \\ =&C_p(3C_p+\widehat{\lambda}_c)\left\|\frac{\partial f}{\partial x}\right\|\frac{1}{K}\sum_{k=1}^K\int_t^{t+u} \big(\mathbb{E} [Y_1^{(k)}]+\mathbb{E} [\widetilde{Y}_1^{(k)}]\big)\,ds \\ \leq& 2C_p^2(3C_p+\widehat{\lambda}_c)\left\|\frac{\partial f}{\partial x}\right\|\delta=:H_K^{5}(\delta). \end{aligned}$$
Note that \(h^{2}(\nu_{t}^{(K)}(f),\nu_{t-v}^{(K)}(f))\leq1\). Let \(H_{K}(\delta)=\sum_{n=1}^{5}H_{K}^{n}(\delta)\). This satisfies
$$\lim_{\delta\to0}\sup_{K\in\mathbb{N} }\mathbb{E} [H_K(\delta)]=0 $$
and (4.7) holds, due to the above estimates and (B.1) from Lemma B.1. □
Appendix C: Proofs related to Sect. 5
Lemma C.1
The default times \(\tau_1,\dots,\tau_K\), \(\tau_A\) and \(\tau_B\) are conditionally independent. Namely, for any \(t_1,\dots,t_K,t_A,t_B\geq0\) and \(T\geq\max\{t_1,\dots,t_K,t_A,t_B\}\), we have
$$\begin{aligned} & \mathbb{P} \big(\tau_1>t_1,\dots,\tau_K>t_K,\tau_A>t_A,\tau _B>t_B\big|\mathcal{F}_{T}^{(K,A,B)}\big) \\ &\quad=\prod_{j\in\{1,\dots,K,A,B\}}\exp\left(-\int _{0}^{t_j}\xi_s^{(j)}\,ds\right). \end{aligned}$$
Proof
The proof follows immediately from the discussion in Sect. 9.1.1 of [10]. □
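The product formula can be illustrated with a toy Cox-time construction: each default time is the first passage of its integrated intensity over an independent unit-exponential threshold. With constant (deterministic) intensities, which is an illustrative simplification and not the paper's setting, the joint survival probability factorises exactly as above:

```python
import numpy as np

# Toy check of conditional independence of Cox default times:
# tau_j = inf{t : integral_0^t xi^{(j)} ds >= E_j}, with E_j i.i.d. Exp(1).
# For deterministic constant intensities, P(tau_j > t_j for all j) equals
# prod_j exp(-xi_j * t_j). Intensities and horizons are illustrative.
rng = np.random.default_rng(1)
xi = np.array([0.3, 0.8, 1.2])        # constant intensities xi^{(1..3)}
t_levels = np.array([1.0, 0.5, 0.7])  # thresholds t_1, t_2, t_3
n = 200_000
E = rng.exponential(1.0, size=(n, 3))
tau = E / xi                          # first-passage times for constant intensity
mc = np.mean(np.all(tau > t_levels, axis=1))
exact = np.exp(-(xi * t_levels).sum())
print(mc, exact)                      # Monte Carlo estimate vs product formula
```

In the lemma the intensities are stochastic, so the factorisation holds only conditionally on \(\mathcal{F}_{T}^{(K,A,B)}\); freezing them at constants makes that conditioning trivial.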
Proof of Theorem 5.3
First, we define the conditional cumulative distribution function associated with the default times \((\tau_{X}^{*},\tau_{A},\tau_{B})\). For \(\min\{t^*,t_A,t_B\}>t\), set
$$\begin{aligned} P(t;t^*,t_A,t_B):=\mathbb{P} \big(\tau_X^*\leq t^*,\tau_A\leq t_A,\tau_B\leq t_B \big|\mathcal{G}_t^{(K,A,B)}\big). \end{aligned}$$
Then
$$\begin{aligned} P(t;t^*,t_A,t_B) &=1-\mathbb{P} \big(\tau_A> t_A\big|\mathcal{G}_t^{(K,A,B)} \big)-\mathbb{P} \big(\tau_B> t_B\big|\mathcal{G}_t^{(K,A,B)}\big) \\ &\quad {}+\mathbb{P} \big(\tau_A> t_A,\tau_B> t_B\big|\mathcal {G}_t^{(K,A,B)} \big) \\ &\quad {}-\Big(\mathbb{P} \big(\tau_X^*>t^*\big|\mathcal{G}_t^{(K,A,B)} \big)-\mathbb{P} \big(\tau_A>t_A,\tau_X^*>t^*\big|\mathcal {G}_t^{(K,A,B)} \big)\Big) \\ &\quad {}+\mathbb{P} \big(\tau_B>t_B,\tau_X^*>t^*\big|\mathcal {G}_t^{(K,A,B)} \big) \\ &\quad {}-\mathbb{P} \big(\tau_A>t_A,\tau_B>t_B,\tau_X^*>t^*\big|\mathcal{G}_t^{(K,A,B)}\big). \end{aligned}$$
(C.1)
Using Lemma 5.1, and the definition of the limit default time \(\tau_{X}^{*}\) given in Sect. 4.3, on the event \(\{\tau_{X}^{*}>t,\tau_{A}>t_{A},\tau_{B}>t_{B}\}\), we have
$$\begin{aligned} &\mathbb{P} \big(\tau_X^*>t^*,\tau_A> t_A,\tau_B> t_B \big|\mathcal{G}_t^{(K,A,B)}\big) \\ &\qquad=\widehat{F}(t,t^*)\mathbb{E} \left[\exp\left(-\int _t^{t_A}\xi_s^{(A)}\,ds-\int_t^{t_B}\xi_s^{(B)}\,ds\right)\bigg|\mathcal{F}_t^{(K,A,B)}\right]. \end{aligned}$$
By virtue of (C.1), it follows that
$$\begin{aligned} \frac{\partial P(t;t^*,t_A,t_B)}{\partial t_B}=\big(1-\widehat {F}(t,t^*)\big)\frac{\partial P(t;\infty,t_A,t_B)}{\partial t_B}, \end{aligned}$$
(C.2)
where
$$ \begin{aligned}[b] \frac{\partial P(t;\infty,t_A,t_B)}{\partial t_B}:={}&\frac{\partial P(t;t^*,t_A,t_B)}{\partial t_B}\bigg|_{t^*=\infty}\\ ={}&\mathbb{E} \left[\exp\left(-\int_t^{t_B}\xi_s^{(B)}d s\right)\xi_{t_B}^{(B)}\bigg|\mathcal{F}_t^{(K,A,B)}\right]\\ &-\mathbb{E} \left[\exp\left(-\int_t^{t_A}\xi_s^{(A)}\,d s-\int_t^{t_B}\xi_s^{(B)}\,d s\right)\xi_{t_B}^{(B)}\bigg|\mathcal{F}_t^{(K,A,B)}\right]. \end{aligned} $$
(C.3)
Hence we have that
$$\begin{aligned} B^{(K,*)}(t,T) &{}=\begin{aligned}[t]\int_{t}^T\int_{t}^{\infty}\int_{t}^{ \infty}&\mathbb{E} \Big[\mathbf{1}_{\{t_B\leq t_A\}}\mathbf{1}_{\{t_B<t^*\}}D(t,t_B)\varepsilon_+^{(K,*)}(t_B,T) \\ &{}\times\frac{\partial^3P(t;t^*,t_A,t_B)}{\partial t^*\partial t_A\partial t_B}\Big|\mathcal{F}_t^{(K,A,B)}\Big]\,d t^*\,dt_A\,dt_B \end{aligned} \\ &{}=\begin{aligned}[t] \int_{t}^T\int_{t}^\infty &\mathbb{E} \Big[\mathbf{1}_{\{t_B<t^*\}}D(t,t_B)\varepsilon_+^{(K,*)}(t_B,T) \\ &{}\times\frac{\partial^2P(t;t^*,t_A,t_B)}{\partial t^*\partial t_B}\Big|_{t_A=t_B}^{\infty}\Big|\mathcal {F}_t^{(K,A,B)}\Big]\,d t^*\,dt_B \end{aligned} \\ &{}=\begin{aligned}[t]\int_{t}^T\int_{t}^\infty &\mathbb{E} \Big[\mathbf{1}_{\{t_B<t^*\}}D(t,t_B)\varepsilon_+^{(K,*)}(t_B,T) \\ &{} \times\frac{\partial^2P(t;t^*,\infty,t_B)}{\partial t^*\partial t_B}\Big|\mathcal{F}_t^{(K,A,B)}\Big]\,dt^*\,dt_B \end{aligned} \\ &\quad \begin{aligned}[c] {}-\int_{t}^T\int_{t}^\infty &\mathbb{E} \Big[\mathbf{1}_{\{t_B<t^*\}}D(t,t_B)\varepsilon_+^{(K,*)}(t_B,T) \\ &{}\times\frac{\partial^2P(t;t^*,t_B,t_B)}{\partial t^*\partial t_B}\Big|\mathcal{F}_t^{(K,A,B)}\Big]\,d t^*\,dt_B \end{aligned} \\ &{}=\begin{aligned}[t]\int_{t}^T&D(t,t_B)\varepsilon_+^{(K,*)}(t_B,T) \\ &{}\times\mathbb{E} \Big[\frac{\partial P(t;\infty,\infty,t_B)}{\partial t_B}-\frac{\partial P(t;t_B,\infty,t_B)}{\partial t_B}\Big|\mathcal{F}_t^{(K,A,B)}\Big]\,dt_B \end{aligned} \\ &\quad \begin{aligned}[c] {}-\int_{t}^T&D(t,t_B)\varepsilon_+^{(K,*)}(t_B,T) \\ &{}\times\mathbb{E} \Big[\frac{\partial P(t;\infty,t_B,t_B)}{\partial t_B}-\frac{\partial P(t;t_B,t_B,t_B)}{\partial t_B}\Big|\mathcal{F}_t^{(K,A,B)}\Big]\,dt_B \end{aligned} \\ &{}=\begin{aligned}[t] \int_{t}^T&D(t,t_B)\varepsilon_+^{(K,*)}(t_B,T)\widehat{F}(t,t_B) \\ &{} \times\mathbb{E} \Big[\frac{\partial P(t;\infty,\infty,t_B)}{\partial t_B}-\frac{\partial P(t;\infty,t_B,t_B)}{\partial t_B}\Big|\mathcal{F}_t^{(K,A,B)}\Big]\,dt_B, \end{aligned} \end{aligned}$$
(C.4)
where we have used (C.2) to obtain the last equality in (C.4). Using (C.3), we have
$$\begin{aligned} B^{(K,*)}(t,T) =& \begin{aligned}[t] \int_t^T&\mathbb{E} \Bigg[D(t,t_B)\varepsilon _{+}^{(K,*)}(t_B,T)\widehat{F}(t,t_B)\\ &{} \times\exp\left(-\int_t^{t_B}\xi_s^{(A)}+\xi_s^{(B)}\,ds\right)\xi _{t_B}^{(B)}\bigg|\mathcal{F}_t^{(K,A,B)}\Bigg]\,dt_B \end{aligned} \\ &\begin{aligned}[c] {} -\int_t^T&\mathbb{E} \Bigg[D(t,t_B)\varepsilon _{+}^{(K,*)}(t_B,T)\widehat{F}(t,t_B) \\ &{} \times\exp\left(-\int_t^{\infty}\xi_s^{(A)}\,ds - \int_t^{t_B}\xi_s^{(B)}\,d s\right)\xi_{t_B}^{(B)}\bigg|\mathcal{F}_t^{(K,A,B)}\Bigg]\,dt_B. \end{aligned} \end{aligned}$$
For \(t\leq t_B\leq T\) and \((x_{A},x_{B})\in\mathbb{R}_{+}^{2}\), define the function
$$\begin{aligned} &\widehat{H}_1(t_B-t,x_A,x_B) \\ &\quad:=\mathbb{E} \left[\exp\left(-\int_t^{\infty}\xi_s^{(A)}\,d s-\int_t^{t_B}\xi_s^{(B)}\,d s\right)\xi_{t_B}^{(B)}\bigg|\xi_t^{(A)}=x_A,\xi_t^{(B)}=x_B\right ]. \end{aligned}$$
(C.5)
Using the definition of the function \(H_1\) given in (5.6) along with (C.5), we obtain that \(B^{(K,*)}(t,T)\) is given by
$$\begin{aligned} \begin{aligned}[t] B^{(K,*)}(t,T) & =\mathbb{E} \left[\mathbf{1}_{\{t<\tau_B \leq\min(\tau_A,T)\} }\mathbf{1}_{\{\tau_B<\tau_X^*\}}D(t,\tau_B)\varepsilon _{+}^{(K,*)}(\tau_B,T)\Big|\mathcal{G}_t^{(K,A,B)}\right] \\ &{}=\begin{aligned}[t] \int_t^T&D(t,t_B)\varepsilon_{+}^{(K,*)}(t_B,T)\widehat {F}(t,t_B)\\ &{}\times\big(H_1(t_B-t,\xi_t^{(A)},\xi _t^{(B)})-\widehat{H}_1(t_B-t,\xi_t^{(A)},\xi_t^{(B)})\big)\,d t_B. \end{aligned} \end{aligned} \end{aligned}$$
Next, we prove that the function \(\widehat{H}_{1}\) defined in (C.5) vanishes. To this end, let \(\xi^{(A,\mathrm{noj})}=(\xi_{t}^{(A,\mathrm{noj})};\ t\geq0)\) be the CEV process satisfying the SDE
$$\begin{aligned} d\xi_t^{(A,\mathrm{noj})} = -\kappa_A \xi_t^{(A,\mathrm{noj})}\,dt + \sigma_A (\xi_t^{(A,\mathrm{noj})})^{\widehat{\rho}}\,dW_t^{(A)},\quad \xi_0^{(A,\mathrm{noj})}=x_A>0, \end{aligned}$$
where \(\kappa_A,\sigma_A>0\) and \(\widehat{\rho}\in[\frac{1}{2},1)\) are the parameters specified in (2.2). Then we have
$$\begin{aligned} 0 \leq& \mathbb{E} _{t_B,x_A}\left[\exp\left(-\int_{t_B}^{\infty }\xi_s^{(A)}\,d s\right)\right] \\ \leq& \mathbb{E} _{t_B,x_A}\left[\exp\left(-\int_{t_B}^{\infty }\xi_s^{(A,\mathrm{noj})}\,d s\right)\right]=:M(t_B,x_A), \end{aligned}$$
where \(\mathbb{E} _{t_{B},x_{A}}[\ \cdot\ ]\) denotes the expectation conditional on the underlying state process being equal to \(x_A\) at time \(t_B\). Hereafter, we write \(\mathbb{E} _{x_{A}}[\ \cdot\ ]:=\mathbb{E} _{0,x_{A}}[\ \cdot\ ]\).
We want to verify that \(M(t_B,x_A)=0\) for fixed \(t_B,x_A>0\). By the Markov property of \(\xi^{(A,\mathrm{noj})}\),
$$\begin{aligned} M(t_B,x_A) = M(x_A):= \mathbb{E} _{x_A}\left[\exp\left(-\int_{0}^{\infty}\xi_s^{(A,\mathrm{noj})}\,d s\right)\right],\quad x_A>0. \end{aligned}$$
Let \(\mbox{BESQ}_{\delta,x_{A}}=(\mbox{BESQ}_{\delta,x_{A}}(t);\ t\geq 0)\) denote a squared Bessel process with dimension \(\delta\in\mathbb{R}\). This is a particular CIR-type process satisfying the SDE
$$\begin{aligned} d\mbox{BESQ}_{\delta,x_A}(t) = \delta \,dt + 2\sqrt{\text{BESQ}_{\delta,x_A}(t)}\,dW_t^{(A)}, \end{aligned}$$
(C.6)
where \(\mbox{BESQ}_{\delta,x_{A}}(0)=x_{A}\). From Proposition 2.3 in [4], it follows that
$$\begin{aligned} \xi_t^{(A,\mathrm{noj})} = e^{-\kappa_At}\left(\mbox{BESQ}_{\delta,x_A^{1/p}}\big(a(t) \big)\right)^{p},\quad t\geq0, \end{aligned}$$
where \(p=\frac{1}{2(1-\widehat{\rho})}>1\), \(\delta=\frac{2\widehat{\rho}-1}{\widehat{\rho}-1}\), and the time-change function a(t) is defined as
$$\begin{aligned} a(t) = \frac{(1-\widehat{\rho})\sigma_A^2}{2\kappa_A}\big(e^{2(1-\widehat{\rho})\kappa_A t}-1\big)=\frac{1}{\ell_A}\big(e^{(\kappa_A/p)t}-1\big). \end{aligned}$$
Here \(\ell_{A}=\frac{2\kappa_{A}}{(1-\widehat{\rho})\sigma_{A}^{2}}\). Then
$$\begin{aligned} M(x_A) = \mathbb{E} _{x_A}\left[\exp\left(-\int_{0}^{\infty}e^{-\kappa _As}\left(\mbox{BESQ}_{\delta,x_A^{1/p}}\big(a(s)\big)\right)^{p}\,d s\right)\right]. \end{aligned}$$
Set the time variable v=a(s). Then \(s=a^{-1}(v)=\frac{p}{\kappa_{A}}\log(\ell_{A}v+1)\). Observing that a(0)=0, we obtain that
$$\begin{aligned} M(x_A) = \mathbb{E} _{x_A}\left[\exp\left(-\frac{p\ell_A}{\kappa_A}\int _{0}^{\infty}\frac{1}{(\ell_Av+1)^{p+1}}\big(\mbox{BESQ}_{\delta ,x_A^{1/p}}(v)\big)^{p}\,d v\right)\right]. \end{aligned}$$
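Both algebraic identities behind the last two displays, namely \(a(t)=(e^{(\kappa_A/p)t}-1)/\ell_A\) and the change-of-variables relation \(e^{-\kappa_A a^{-1}(v)}\,(a^{-1})'(v)=\frac{p\ell_A}{\kappa_A}(\ell_Av+1)^{-(p+1)}\), can be checked numerically. A minimal sketch, assuming NumPy is available and using illustrative parameter values (not values from the paper):

```python
import numpy as np

# Illustrative (hypothetical) parameter values, not taken from the paper
kappa_A, sigma_A, rho_hat = 0.8, 0.5, 0.75

p = 1.0 / (2.0 * (1.0 - rho_hat))                 # p = 1/(2(1 - rho_hat))
ell_A = 2.0 * kappa_A / ((1.0 - rho_hat) * sigma_A**2)

def a(t):
    # time change in the CEV-to-squared-Bessel representation
    return (1.0 - rho_hat) * sigma_A**2 / (2.0 * kappa_A) \
        * (np.exp(2.0 * (1.0 - rho_hat) * kappa_A * t) - 1.0)

def a_inv(v):
    # inverse time change s = a^{-1}(v) = (p/kappa_A) log(ell_A v + 1)
    return p / kappa_A * np.log(ell_A * v + 1.0)

# identity a(t) = (exp((kappa_A/p) t) - 1) / ell_A
t = np.linspace(0.0, 5.0, 101)
id_err = np.max(np.abs(a(t) - (np.exp(kappa_A / p * t) - 1.0) / ell_A))

# substitution identity: e^{-kappa_A s} ds = (p ell_A/kappa_A) (ell_A v + 1)^{-(p+1)} dv
v = np.linspace(0.0, 10.0, 201)
dv = 1e-6
jac = (a_inv(v + dv) - a_inv(v)) / dv             # numerical (a^{-1})'(v)
rhs = p * ell_A / kappa_A * (ell_A * v + 1.0) ** (-(p + 1.0))
sub_err = np.max(np.abs(np.exp(-kappa_A * a_inv(v)) * jac - rhs) / rhs)
```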
For any T>0, define
$$\begin{aligned} M_T(x_A) :=& \mathbb{E} _{x_A}\left[\exp\left(-\frac{p\ell_A}{\kappa_A}\int _{0}^{T}\frac{1}{(\ell_Av+1)^{p+1}}\big(\mbox{BESQ}_{\delta ,x_A^{1/p}}(v)\big)^{p}\,d v\right)\right] \\ \leq&\mathbb{E} _{x_A}\left[\exp\left(\frac{p\ell_A}{\kappa _A}\frac{1}{(\ell_AT+1)^{p+1}}-\int_{0}^{T}\big(\mbox{BESQ}_{\delta ,x_A^{1/p}}(v)\big)^{p}\,d v\right)\right] \\ =&g(T)\mathbb{E} _{x_A}\left[\exp\left(-\int_{0}^{T}\big(\mbox {BESQ}_{\delta,x_A^{1/p}}(v)\big)^{p}\,d v\right)\right], \end{aligned}$$
with the function
$$g(T)=\exp\left(\frac{p\ell_A}{\kappa_A}\frac{1}{(\ell _AT+1)^{p+1}}\right),\quad \text{and hence}\quad \lim_{T\to\infty}g(T)=1. $$
Next, we prove the limit
$$\begin{aligned} \lim_{T\to\infty}\mathbb{E} _{x_A}\left[\exp\left(-\int _{0}^{T}\big(\mbox{BESQ}_{\delta,x_A^{1/p}}(v)\big)^{p}\,d v\right)\right]=0. \end{aligned}$$
(C.7)
Let \(\nu=\frac{\delta-2}{2}\), so that ν=−p. Rewriting (C.6) in terms of ν, we have
$$\begin{aligned} d\mbox{BESQ}_{\delta,x_A}(t) = 2(\nu+1) \,dt + 2\sqrt{\mbox{BESQ}_{\delta,x_A}(t)}\,dW_t^{(A)}, \end{aligned}$$
where \(\mbox{BESQ}_{\delta,x_{A}}(0)=x_{A}\). Note that ν<0 and p≥1. Using Lemma 2.1 in Çetin [16], we have that for any α>0, the process
$$\begin{aligned} M_t^{(u)}:=u\big(\sqrt{X_t}\big)X_t^{p/2}\exp\left(-\frac{\alpha }{2}\int_0^t X_s^p\,d s\right),\quad t\geq0, \end{aligned}$$
is a (local) martingale, where \(X=(X_{t};\ t\geq0)\) denotes any squared Bessel process starting at x>0 with the above dimension δ>0. The function u satisfies the ODE
$$\begin{aligned} x^2u''(x) + xu'(x) -u(x)\big(p^2+\alpha x^{2(p+1)}\big)=0,\quad x>0. \end{aligned}$$
Then for the stopping time \(\tau_{R}:=\inf\{t\geq0;\ X_{t}\geq R\}\) with R≥x, \(x=x_{A}^{1/p}\) and \(X_{t}=\mbox{BESQ}_{\delta,x_{A}^{1/p}}(t)\), it holds that
$$\begin{aligned} \mathbb{E} _{x_A}\left[M^{(u)}_{t\wedge\tau_R}\right]=u\big(x_A^{1/(2p)}\big)x_A^{1/2}\quad \text{for all}\ t>0. \end{aligned}$$
Letting t→∞, it follows that
$$\begin{aligned} \mathbb{E} _{x_A}\left[\exp\left(-\frac{\alpha}{2}\int_0^{\tau _R} X_s^p\,d s\right)\right] = \frac{u(x_A^{1/(2p)})x_A^{1/2}}{u(\sqrt{R})R^{p/2}}. \end{aligned}$$
Since we consider the case \(R\geq x_{A}^{1/p}\), where \(x_{A}^{1/p}\) is the starting value of the squared Bessel process \(\mbox{BESQ}_{\delta,x_{A}^{1/p}}\), the function \(R\mapsto u(\sqrt{R})R^{p/2}\) must be increasing. This is because when R increases, \(\tau_{R}\) increases, and hence \(\mathbb{E} _{x_{A}} [\exp(-\frac{\alpha}{2}\int_{0}^{\tau_{R}} X_{s}^{p}\,d s ) ]\) decreases with respect to R. Using Eq. (2.8) from [16], we deduce that
$$\begin{aligned} \mathbb{E} _{x_A}\left[\exp\left(-\frac{\alpha}{2}\int_0^{\tau _R} X_s^p\,d s\right)\right] = \frac{u_0(x_A^{1/(2p)})x_A^{1/2}}{u_0(\sqrt{R})R^{p/2}}, \end{aligned}$$
(C.8)
where the function \(u_{0}\) is defined as
$$u_0(x) =I_{\frac{p}{p+1}}\Big(\frac{1}{p+1}\sqrt{\alpha}x^{p+1}\Big),\quad x>0. $$
Here \(I_{b}\) is the modified Bessel function of the first kind with \(b>-\frac{1}{2}\), defined by
$$\begin{aligned} I_b(x) = \frac{(x/2)^{b}}{\varGamma(b+\frac{1}{2})\varGamma(\frac {1}{2})}\int_{-1}^{1}e^{-xt}(1-t^2)^{b-\frac{1}{2}}\,d t,\quad x>0. \end{aligned}$$
Using the fact that \(\lim_{x\to\infty}I_{b}(x)=+\infty\) and taking R→∞ in (C.8), it follows that for any α>0,
$$\begin{aligned} \mathbb{E} _{x_A}\left[\exp\left(-\frac{\alpha}{2}\int_0^{\infty}X_s^p\,d s\right)\right] = 0. \end{aligned}$$
(C.9)
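The integral representation of \(I_b\) and the growth behind the passage from (C.8) to (C.9) can be checked numerically. The sketch below, with an illustrative value of p and α=2 as in the proof (and assuming NumPy/SciPy are available), compares the representation against `scipy.special.iv` and confirms that \(R\mapsto u_0(\sqrt{R})R^{p/2}\) is increasing and unbounded:

```python
import numpy as np
from scipy.special import iv, gamma
from scipy.integrate import quad

def I_int(b, x):
    # integral representation of I_b (valid for b > -1/2)
    integrand = lambda t: np.exp(-x * t) * (1.0 - t * t) ** (b - 0.5)
    val, _ = quad(integrand, -1.0, 1.0)
    return (x / 2.0) ** b / (gamma(b + 0.5) * gamma(0.5)) * val

p, alpha = 2.0, 2.0                     # illustrative p; alpha = 2 as in the proof
b = p / (p + 1.0)
rel_errs = [abs(I_int(b, x) - iv(b, x)) / iv(b, x) for x in (0.5, 1.0, 2.0, 5.0)]

def u0_term(R):
    # R |-> u_0(sqrt(R)) R^{p/2}, the denominator appearing in (C.8)
    return iv(b, np.sqrt(alpha) / (p + 1.0) * R ** ((p + 1.0) / 2.0)) * R ** (p / 2.0)

Rs = np.linspace(0.5, 6.0, 30)
vals = np.array([u0_term(R) for R in Rs])
is_increasing = bool(np.all(np.diff(vals) > 0.0))
```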
Accordingly, the limit equality (C.7) is proved by taking the parameter α=2 in (C.9), and hence \(M(t_{B},x_{A})=0\). This results in \(\mathbb{E} _{t_{B},x_{A}} [\exp(-\int_{t_{B}}^{\infty}\xi_{s}^{(A)}\,d s ) ]=0\). For \(t\leq t_{B}\leq T\), using the tower property, it follows that
$$\begin{aligned} &\widehat{H}_1(t_B-t,\xi_t^{(A)},\xi_t^{(B)}) \\ &\quad =\mathbb{E} \left[\exp\left(-\int_t^{\infty}\xi_s^{(A)}\,d s - \int_t^{t_B}\xi_s^{(B)}\,ds\right)\xi_{t_B}^{(B)}\Bigg|\mathcal {F}_t^{(K,A,B)}\right] \\ &\quad = \mathbb{E} \bigg[\mathbb{E} \left[\exp\left(-\int_t^{\infty }\xi_s^{(A)}\,d s - \int_t^{t_B}\xi_s^{(B)}\,d s\right)\xi_{t_B}^{(B)}\bigg|\mathcal{F}_{t_B}^{(K,A,B)}\right ]\bigg|\mathcal{F}_t^{(K,A,B)}\bigg] \\ &\quad \begin{aligned}[t] = \mathbb{E} \bigg[&\exp\left(-\int_t^{t_B}(\xi_s^{(A)}+\xi_{s}^{(B)})\,d s\right)\xi_{t_B}^{(B)} \\ &\times\mathbb{E} \left[\exp\left(-\int_{t_B}^{\infty }\xi_s^{(A)}\,d s\right) \bigg|\mathcal{F}_{t_B}^{(K,A,B)}\right]\bigg|\mathcal {F}_{t}^{(K,A,B)}\bigg] = 0. \end{aligned} \end{aligned}$$
This yields (5.4). Similarly, for the second term on the right-hand side of (5.2), we have, on the event \(\{\tau_{X}^{*}\wedge\tau_{A}\wedge\tau_{B}>t\}\),
$$\begin{aligned} A^{(K,*)}(t,T)= \int_t^T&D(t,t_A)\varepsilon_{-}^{(K,*)}(t_A,T)\widehat {F}(t,t_A) \\ &{}\times\big(H_2(t_A-t,\xi_t^{(A)},\xi_t^{(B)})-\widehat {H}_2(t_A-t,\xi_t^{(A)},\xi_t^{(B)})\big)\,d t_A, \end{aligned}$$
where the function \(H_{2}\) is defined as in (5.6) and the function \(\widehat{H}_{2}\) is given by
$$\begin{aligned} &\widehat{H}_2(t_A-t,x_A,x_B) \\ & \quad:=\mathbb{E} \left[\exp\left(-\int_t^{t_A}\xi_s^{(A)}\,d s-\int_t^{\infty}\xi_s^{(B)}\,d s\right)\xi_{t_A}^{(A)}\bigg|\xi_t^{(A)}=x_A,\xi_t^{(B)}=x_B\right]. \end{aligned}$$
Using a symmetric argument to the one used to show that \(\widehat{H}_{1}(t_{B}-t,x_{A},x_{B})=0\), it follows that \(\widehat{H}_{2}(t_{A}-t,x_{A},x_{B})=0\). Hence, we obtain that \(A^{(K,*)}(t,T)\) is given by (5.5). This completes the proof of Theorem 5.3. □
Appendix D: Solutions to Riccati equations
Lemma D.1
The explicit solution to the Riccati equation
$$\begin{aligned} B'(u) = -\kappa B(u) + \frac{1}{2}\sigma^2 B^2(u) -1,\quad B(0) = 0, \end{aligned}$$
is given by
$$\begin{aligned} B(\kappa,\sigma;u) = -\frac{2(e^{\varpi u}-1)}{2\varpi+ (\kappa+\varpi)(e^{\varpi u}-1)},\quad 0\leq u\leq T, \end{aligned}$$
(D.1)
where κ>0, σ>0 and \(\varpi=\sqrt{\kappa^{2}+2\sigma^{2}}\).
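Solution (D.1) is easy to validate numerically by integrating the Riccati ODE directly; a minimal sketch with illustrative parameter values, assuming NumPy/SciPy are available:

```python
import numpy as np
from scipy.integrate import solve_ivp

kappa, sigma = 1.2, 0.7                 # illustrative positive parameters
varpi = np.sqrt(kappa**2 + 2.0 * sigma**2)

def B(u):
    # closed-form solution (D.1)
    e = np.exp(varpi * u) - 1.0
    return -2.0 * e / (2.0 * varpi + (kappa + varpi) * e)

# integrate B'(u) = -kappa B + (sigma^2/2) B^2 - 1 with B(0) = 0
sol = solve_ivp(lambda u, y: -kappa * y + 0.5 * sigma**2 * y**2 - 1.0,
                (0.0, 3.0), [0.0], rtol=1e-10, atol=1e-12, dense_output=True)
u_grid = np.linspace(0.0, 3.0, 50)
max_err = np.max(np.abs(sol.sol(u_grid)[0] - B(u_grid)))
```

The same check also confirms that B stays nonpositive, a fact used later in the proof of Lemma D.5.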
Lemma D.2
Let \(b\in\mathbb{R}\), b≠0. Then the explicit solution to the Riccati equation
$$\begin{aligned} \beta'(u) = -\kappa\beta(u) + \frac{1}{2}\sigma^2 \beta^2(u) -1, \quad\beta(0) = b \end{aligned}$$
(D.2)
is given by
$$\begin{aligned} \beta(\kappa,\sigma,b;u) = B(\kappa,\sigma;u) + e^{\phi(u)}\frac{1}{\frac{1}{b}-\frac{\sigma^2}{2}\int_0^u e^{\phi(v)}\,dv}, \end{aligned}$$
(D.3)
where the function B(κ,σ;u) is given by (D.1), and
$$\phi(u)=\sigma^2\int_0^u B(\kappa,\sigma;v)\,dv-\kappa u,\quad 0\leq u\leq T. $$
Moreover, we have
$$\begin{aligned} \int_0^u e^{\phi(v)}\,dv =&\int_0^ue^{\varpi v}\left(\frac{2\varpi}{(\varpi-\kappa)+e^{\varpi v}(\kappa+\varpi)}\right)^2\,dv =\frac{2}{\kappa+ \varpi\mathrm{coth} (\frac{\varpi u}{2} )}, \end{aligned}$$
where \(\mathrm{coth}(u)=\frac{\cosh(u)}{\sinh(u)}\) is the hyperbolic cotangent of u.
Proof
For b≠0, we define the function
$$\begin{aligned} f(u) = B(\kappa,\sigma;u) + e^{\phi(u)}\frac{1}{C-\frac{\sigma^2}{2}\int_0^u e^{\phi(v)}\,dv}, \end{aligned}$$
(D.4)
where C is an unspecified real constant. A straightforward computation gives \(f'(u) = -1-\kappa f(u)+\frac{\sigma^{2}}{2}f^{2}(u)\), so the function given by (D.4) is the general solution to the Riccati equation (D.2). Taking the initial condition β(0)=b into account, we obtain the constant \(C=\frac{1}{b}\) in (D.4), since B(κ,σ;0)=0. □
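Both the coth closed form for \(\int_0^ue^{\phi(v)}\,dv\) and the solution formula (D.3) admit a quick numerical check. A sketch with illustrative parameter values and b<0 (so that the denominator in (D.3) never vanishes), assuming NumPy/SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

kappa, sigma, b = 1.2, 0.7, -0.4        # illustrative values, b != 0
varpi = np.sqrt(kappa**2 + 2.0 * sigma**2)

def B(u):
    # solution (D.1) with B(0) = 0
    e = np.exp(varpi * u) - 1.0
    return -2.0 * e / (2.0 * varpi + (kappa + varpi) * e)

def e_phi(v):
    # e^{phi(v)} = e^{varpi v} (2 varpi / ((varpi-kappa) + e^{varpi v}(kappa+varpi)))^2
    D = (varpi - kappa) + np.exp(varpi * v) * (kappa + varpi)
    return np.exp(varpi * v) * (2.0 * varpi / D) ** 2

def int_e_phi(u):
    # closed form 2 / (kappa + varpi coth(varpi u / 2))
    return 2.0 / (kappa + varpi / np.tanh(varpi * u / 2.0))

def beta(u):
    # closed form (D.3) with C = 1/b
    return B(u) + e_phi(u) / (1.0 / b - 0.5 * sigma**2 * int_e_phi(u))

# check the coth identity against numerical quadrature
quad_err = max(abs(quad(e_phi, 0.0, u)[0] - int_e_phi(u)) for u in (0.25, 1.0, 2.5))

# check that (D.3) solves beta' = -kappa beta + (sigma^2/2) beta^2 - 1, beta(0) = b
sol = solve_ivp(lambda u, y: -kappa * y + 0.5 * sigma**2 * y**2 - 1.0,
                (0.0, 2.0), [b], rtol=1e-10, atol=1e-12)
ode_err = abs(sol.y[0, -1] - beta(2.0))
```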
Lemma D.3
Let \(b\in\mathbb{R}\) and \(a_{\ell}>0\). Then the explicit solution to the Riccati equation
$$\begin{aligned} \widehat{\beta}'(u) =& -\kappa\widehat{\beta} (u) + \frac{1}{2}\sigma^2 \widehat{\beta}^2(u) - a_\ell, \quad \widehat{\beta}(0) = b \end{aligned}$$
(D.5)
is given by
$$\begin{aligned} \widehat{\beta}(\kappa,\sigma,a_\ell,b;u) =a_\ell\beta\Big(\kappa,\sigma\sqrt{a_\ell},\frac{b}{a_{\ell}};u \Big), \end{aligned}$$
(D.6)
where the function β(κ,σ,b;u) is given by (D.3).
Proof
Let \(g(u)=\frac{\widehat{\beta}(u)}{a_{\ell}}\). Then the function g satisfies the Riccati equation (D.5) with coefficient σ and initial value b replaced by \(\sigma\sqrt{a_{\ell}}\) and \(\frac{b}{a_{\ell}}\), respectively. Thus \(g(u) = \beta(\kappa,\sigma\sqrt{a_{\ell}},\frac{b}{a_{\ell}};u )\) and hence the solution (D.6) follows. □
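The scaling step in this proof can be confirmed numerically by integrating (D.5) and the rescaled unit-constant Riccati equation separately and comparing; an illustrative sketch assuming NumPy/SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp

kappa, sigma, a_ell, b = 1.2, 0.7, 2.5, -0.4   # illustrative values
u_end = 2.0

# beta_hat' = -kappa beta_hat + (sigma^2/2) beta_hat^2 - a_ell, beta_hat(0) = b
hat = solve_ivp(lambda u, y: -kappa * y + 0.5 * sigma**2 * y**2 - a_ell,
                (0.0, u_end), [b], rtol=1e-10, atol=1e-12)

# g = beta_hat / a_ell solves the unit-constant Riccati with sigma -> sigma sqrt(a_ell)
# and initial value b / a_ell
s2 = sigma * np.sqrt(a_ell)
g = solve_ivp(lambda u, y: -kappa * y + 0.5 * s2**2 * y**2 - 1.0,
              (0.0, u_end), [b / a_ell], rtol=1e-10, atol=1e-12)

err = abs(hat.y[0, -1] - a_ell * g.y[0, -1])
```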
Lemma D.4
Assume the default intensities \(\xi^{(A)}\) and \(\xi^{(B)}\) of the two counterparties to be CIR processes (i.e., \(\widehat{\rho}=\frac{1}{2}\) in (2.2)). Define the conditional expectation
$$\begin{aligned} & Q_{t,T}g(x_A,x_B) \\ & \quad := \mathbb{E} \left[\exp\left(-\int_{t}^{T}\ell(\xi_s^{(A)},\xi_s^{(B)})\,d s\right)g(\xi_T^{(A)},\xi_T^{(B)})\bigg|\xi_t^{(A)}=x_A, \xi_t^{(B)}=x_B\right]. \end{aligned}$$
(D.7)
Assume that the functions ℓ and g on \(\mathbb{R}_{+}^{2}\) are of the form
$$\begin{aligned} \ell(x_A,x_B) =&a_{\ell}x_A + b_{\ell}x_B + c_{\ell}, \\ g(x_A,x_B) =&(a_g + b_g x_A+ c_g x_B)e^{d_g + e_g x_A + f_g x_B}, \end{aligned}$$
where \(d_{g},e_{g},f_{g}\) and \(a_{i},b_{i},c_{i}\) for i∈{ℓ,g} are real constants. Then we have
$$\begin{aligned} Q_{t,T}g(x_A,x_B) =& \big(\theta_{AB}(T-t)+\theta_{A}(T-t)x_A +\theta_{B}(T-t)x_B\big) \\ &{}\times e^{\beta_{AB}(T-t) + \beta_A(T-t)x_A + \beta_B(T-t)x_B},\quad 0\leq t\leq T, \end{aligned}$$
(D.8)
where the unspecified functions in (D.8) satisfy the generalized Riccati equations
$$\begin{aligned} R(a_\ell,b_\ell,c_\ell){:}\quad \begin{cases} -\beta_A'(u)-\kappa_A\beta_A(u)+\frac{1}{2}\sigma_A^2\beta _A^2(u)-a_{\ell}=0,\\ -\beta_B'(u)-\kappa_B\beta_B(u)+\frac{1}{2}\sigma_B^2\beta _B^2(u)-b_{\ell}=0,\\ \alpha_A\beta_A(u)+\alpha_B\beta_B(u)-\lambda-c_{\ell}+\widehat {\lambda}_c{\varPhi}\big(c_A\beta_A(u),c_B\beta_B(u)\big)\\ \quad+\widehat{\lambda}_A\widetilde{{\varPhi}}\big(d_A\beta _A(u),0\big)+\widehat{\lambda}_B\widetilde{{\varPhi}}\big(0,d_B\beta_B(u)\big)=\beta_{AB}'(u), \end{cases} \end{aligned}$$
(D.9)
and
$$\begin{aligned} \begin{cases} -\theta'_A(u) - \kappa_A \theta_A(u)+ \sigma^2_A \theta_A(u) \beta_A(u)=0,\\ -\theta'_B(u) - \kappa_B \theta_B(u)+ \sigma^2_B \theta_B(u) \beta _B(u)=0,\\ \alpha_A \theta_A(u) + \alpha_B \theta_B(u)+ \widehat{\lambda}_cc_A\theta_A(u)\frac{\partial{\varPhi}(c_A \beta_A(u), c_B \beta_B(u))}{\partial\theta_A}\\ \quad+ \widehat{\lambda}_cc_B\theta_B(u)\frac{\partial {\varPhi}(c_A \beta_A(u), c_B \beta_B(u))}{\partial\theta_B} + \widehat{\lambda}_A\,d_A\theta_A(u) \frac{\partial \widetilde{{\varPhi}}(d_A\beta_A(u), 0)}{\partial\theta_A}\\ \quad+ \widehat{\lambda}_Bd_B\theta_B(u) \frac{\partial \widetilde{{\varPhi}}(0, d_B \beta_B(u))}{\partial\theta_B} = \theta'_{AB}(u). \end{cases} \end{aligned}$$
(D.10)
Here the function \(\varPhi(\theta_{A},\theta_{B})\) denotes the moment-generating function of the bivariate random variable \((Y_{1}^{(A)},Y_{1}^{(B)})\), defined by
$$\begin{aligned} {\varPhi}(\theta_A,\theta_B)=\int_{\mathbb{R}_+^2}e^{\theta_A y_A+\theta_B y_B}F_{AB}(dy_A,dy_B),\quad \theta_A,\theta_B\leq0. \end{aligned}$$
(D.11)
Similarly, \(\widetilde{{\varPhi}}(\theta_{A},\theta_{B})\) denotes the moment-generating function of the bivariate random variable \((\widetilde{Y}_{1}^{(A)},\widetilde{Y}_{1}^{(B)})\), defined by
$$\begin{aligned} \widetilde{{\varPhi}}(\theta_A,\theta_B)=\int_{\mathbb {R}_+^2}e^{\theta_A \widetilde{y}_A+\theta_B \widetilde{y}_B}\widetilde{F}_{AB}(d \widetilde{y}_A,d\widetilde{y}_B),\quad \theta_A,\theta_B\leq0, \end{aligned}$$
(D.12)
with \(F_{AB}(dy_{A},dy_{B})\) and \(\widetilde{F}_{AB}(d \widetilde{y}_{A},d\widetilde{y}_{B})\) denoting the joint distribution functions of \((Y_{1}^{(A)},Y_{1}^{(B)})\) and \((\widetilde{Y}_{1}^{(A)},\widetilde{Y}_{1}^{(B)})\), respectively. Moreover, \(\lambda= \widehat{\lambda}_{A} + \widehat{\lambda}_{B} + \widehat{\lambda}_{c}\), with \(\widehat{\lambda}_{A}\) and \(\widehat{\lambda}_{B}\) being the intensities of the idiosyncratic Poisson processes associated with A and B, while \(\widehat{\lambda}_{c}\) is the intensity of the common Poisson process. The initial conditions of the unspecified functions in (D.8) are given by
$$\begin{aligned} \begin{array}{@{}r@{}l@{}l@{}l@{}} \theta_{AB}(0) &=a_g,&\qquad \theta_A(0)=b_g,&\qquad \theta_B(0)=c_g,\\ \beta_{AB}(0)&=d_g,&\qquad \beta_A(0) =e_g,&\qquad \beta_B(0)=f_g. \end{array} \end{aligned}$$
(D.13)
Proof
Applying the Feynman–Kac formula to (D.7), it follows that \(Q_{t,T}g(x_{A},x_{B})\) satisfies on \(\mathbb{R}_{+}^{2}\) the equation
$$\begin{aligned} \begin{aligned}[c] \Big(\frac{\partial}{\partial t} + \mathcal{L}\Big)f(t,x_A,x_B) &= \ell(x_A,x_B)f(t,x_A,x_B), \\ f(T,x_A,x_B)&={g(x_A,x_B)}, \end{aligned} \end{aligned}$$
(D.14)
where the integro-differential operator \(\mathcal{L}\) acting on \(h\in C^{2}(\mathbb{R}_{+}^{2})\) is given by
$$\begin{aligned} \mathcal{L}h(x_A,x_B) =& \frac{1}{2}\sigma_A^2x_A\frac{\partial ^2h}{\partial x_A^2}(x_A,x_B) + \frac{1}{2}\sigma_B^2x_B\frac {\partial^2h}{\partial x_B^2}(x_A,x_B) \\ &{}+(\alpha_A-\kappa_Ax_A)\frac{\partial h}{\partial x_A}(x_A,x_B) \\ & {}+ (\alpha_B-\kappa_Bx_B)\frac{\partial h}{\partial x_B}(x_A,x_B)-\lambda h(x_A,x_B) \\ &{}+\widehat{\lambda}_A\int_{\mathbb{R}_+}h(x_A+ d_A\widetilde{y}_A,x_B)\widetilde{F}_{A}(d\widetilde{y}_A) \\ &{}+\widehat{\lambda}_B\int_{\mathbb{R}_+}h(x_A,x_B+ d_B\widetilde {y}_B)\widetilde{F}_{B}(d\widetilde{y}_B) \\ &{}+\widehat{\lambda}_c\int_{\mathbb{R}_+^2}h(x_A+c_Ay_A,x_B+c_By_B)F_{AB}(d y_A,dy_B). \end{aligned}$$
Plugging the solution form (D.8) into the PIDE (D.14), we obtain
$$\begin{aligned} \frac{\partial f}{\partial t}(t,x_A,x_B) =&-f(t,x_A,x_B)\big(\beta _{AB}'(T-t)+\beta_A'(T-t)x_A + \beta_B'(T-t)x_B\big) \\ & {}-\left(\theta'_{AB}(T-t) + \theta'_A(T-t) x_A + \theta'_B(T-t) x_B\right) \\ &\quad {}\times e^{\beta_{AB}(T-t) + \beta_A(T-t)x_A + \beta _B(T-t)x_B}, \\ \frac{\partial f}{\partial x_A}(t,x_A,x_B) =&f(t,x_A,x_B)\beta _A(T-t) \\ &{}+ \theta_A(T-t) e^{\beta_{AB}(T-t) + \beta_A(T-t)x_A + \beta _B(T-t)x_B}, \\ \frac{\partial f}{\partial x_B}(t,x_A,x_B) =&f(t,x_A,x_B)\beta _B(T-t) \\ &{}+ \theta_B(T-t) e^{\beta_{AB}(T-t) + \beta_A(T-t)x_A + \beta _B(T-t)x_B}, \\ \frac{\partial^2 f}{\partial x_A^2}(t,x_A,x_B) =&f(t,x_A,x_B)\beta _A^2(T-t) \\ &{}+ 2 \theta_A(T-t) \beta_A(T-t) e^{\beta_{AB}(T-t) + \beta _A(T-t)x_A + \beta_B(T-t)x_B} , \\ \frac{\partial^2 f}{\partial x_B^2}(t,x_A,x_B) =&f(t,x_A,x_B)\beta _B^2(T-t) \\ &{}+ 2 \theta_B(T-t) \beta_B(T-t) e^{\beta_{AB}(T-t) + \beta _A(T-t)x_A + \beta_B(T-t)x_B} \end{aligned}$$
and
$$\begin{aligned} & \int_{\mathbb{R}_+}f(t,x_A+ d_A\widetilde{y}_A,x_B)\widetilde{F}_{A}(d\widetilde{y}_A) \\ &\quad = f(t,x_A,x_B)\int_{\mathbb{R}_+}e^{\beta_A(T-t) d_A\widetilde{y}_A}\widetilde{F}_{A}(d\widetilde{y}_A) \\ &\qquad {}+ e^{\beta_{AB}(T-t) + \beta_A(T-t)x_A + \beta_B(T-t)x_B} \int_{\mathbb{R}_+} \theta_A(T-t) d_A\widetilde{y}_A e^{\beta_A(T-t) d_A\widetilde{y}_A} \widetilde{F}_{A}(d\widetilde{y}_A), \\ & \int_{\mathbb{R}_+}f(t,x_A,x_B+ d_B\widetilde{y}_B)\widetilde{F}_{B}(d\widetilde{y}_B) \\ &\quad = f(t,x_A,x_B)\int_{\mathbb{R}_+}e^{\beta_B(T-t) d_B\widetilde{y}_B}\widetilde{F}_{B}(d\widetilde{y}_B) \\ &\qquad {} + e^{\beta_{AB}(T-t) + \beta_A(T-t)x_A + \beta_B(T-t)x_B} \int_{\mathbb{R}_+} \theta_B(T-t) d_B\widetilde{y}_B e^{\beta_B(T-t) d_B\widetilde{y}_B} \widetilde{F}_{B}(d\widetilde{y}_B), \\ & \int_{\mathbb{R}_+^2}f(t,x_A+c_Ay_A,x_B+c_By_B)F_{AB}(dy_A,dy_B) \\ & \quad = f(t,x_A,x_B) \int_{\mathbb{R}_+^2}e^{\beta_A(T-t)c_Ay_A +\beta_B(T-t)c_B y_B} F_{AB}(dy_A,dy_B) \\ &\qquad {} + \begin{aligned}[t] &e^{\beta_{AB}(T-t) + \beta_{A}(T-t) x_A + \beta _{B}(T-t) x_B} \int_{\mathbb{R}_+^2} \big( \theta_A(T-t) c_A y_A + \theta_B(T-t) c_B y_B \big) \\ &{}\times e^{\beta_A(T-t) c_A y_A + \beta_B(T-t) c_B y_B} F_{AB}(dy_A,dy_B). \end{aligned} \end{aligned}$$
In order for (D.14) to hold, we need that for all u∈[0,T] and \((x_{A},x_{B})\in\mathbb{R}_{+}^{2}\), the following two equalities are satisfied:
$$\begin{aligned} &x_A\Big(-\beta_A'(u)-\kappa_A\beta_A(u)+\frac{1}{2}\sigma _A^2\beta_A^2(u)-a_{\ell}\Big) \\ & \quad {}+x_B\Big(-\beta_B'(u)-\kappa_B\beta_B(u)+\frac{1}{2}\sigma _B^2\beta_B^2(u)-b_{\ell}\Big) \\ & \quad {}+\alpha_A\beta_A(u)+\alpha_B\beta_B(u)-\beta_{AB}'(u)-\lambda -c_{\ell} \\ &\quad {} +\widehat{\lambda}_c{\varPhi}\big(c_A\beta_A(u),c_B\beta _B(u)\big) +\widehat{\lambda}_A\widetilde{{\varPhi}}\big(d_A\beta_A(u),0\big)+\widehat{\lambda}_B\widetilde{{\varPhi}}\big(0,d_B\beta _B(u)\big)=0 \end{aligned}$$
and
$$\begin{aligned} & x_A \big(-\theta'_A(u) - \kappa_A \theta_A(u)+ \sigma^2_A \theta_A(u) \beta_A(u) \big) \\ & \quad {}+ x_B\big(-\theta'_B(u) - \kappa_B \theta_B(u)+ \sigma^2_B \theta_B(u) \beta_B(u)\big) \\ & \quad {}-\theta'_{AB}(u) + \alpha_A \theta_A(u) + \alpha_B \theta_B(u) \\ & \quad {}+ \widehat{\lambda}_cc_A\theta_A(u)\frac{\partial{\varPhi}(c_A \beta_A(u), c_B \beta_B(u))}{\partial\theta_A} + \widehat{\lambda}_cc_B\theta_B(u)\frac{\partial{\varPhi}(c_A \beta_A(u), c_B \beta_B(u))}{\partial\theta_B} \\ & \quad {}+ \widehat{\lambda}_Ad_A\theta_A(u) \frac{\partial \widetilde{{\varPhi}}(d_A\beta_A(u), 0)}{\partial\theta_A} + \widehat{\lambda}_Bd_B\theta_B(u) \frac{\partial \widetilde{{\varPhi}}(0, d_B \beta_B(u))}{\partial\theta_B} = 0. \end{aligned} $$
Hence, the unspecified functions in (D.8) satisfy the Riccati equations (D.9) and (D.10). From the terminal condition in (D.14), we further have the initial conditions given by (D.13). This completes the proof of the lemma. □
The following lemmas give the explicit solutions to the generalized Riccati equations (D.9) and (D.10).
Lemma D.5
Assume the initial values \(\beta_{A}(0)=e_{g}\leq0\) and \(\beta_{B}(0)=f_{g}\leq0\). If the constants \(a_{\ell},b_{\ell}\) are strictly positive, then the generalized Riccati equation (D.9) admits the explicit solution
$$\begin{aligned} \beta_A(u) =&a_\ell\beta\Big(\kappa_A,\sigma_A\sqrt{a_\ell },\frac{e_g}{a_{\ell}};u\Big), \end{aligned}$$
(D.15)
$$\begin{aligned} \beta_B(u) =&b_\ell\beta\Big(\kappa_B,\sigma_B\sqrt{b_\ell },\frac{f_g}{b_{\ell}};u\Big), \end{aligned}$$
(D.16)
$$\begin{aligned} \beta_{AB}(u) =& \begin{aligned}[t] \int_0^u&\Big(\alpha_A\beta_A(v)+\alpha_B\beta_B(v)+\widehat {\lambda}_c{\varPhi}\big(c_A\beta_A(v),c_B\beta_B(v)\big) \\ &{}+\widehat{\lambda}_A\widetilde{\varPhi}\big(d_A\beta_A(v),0\big)+\widehat{\lambda}_B\widetilde{\varPhi}\big(0,d_B\beta_B(v)\big)\Big)\,d v \end{aligned} \\ & {}+d_g - (\lambda+c_{\ell})u, \end{aligned}$$
(D.17)
where the function \(\beta(\kappa,\sigma,b;u)\) is given by (D.3), and \(\varPhi(\theta_{A},\theta_{B})\), \(\widetilde{\varPhi}(\theta_{A},\theta_{B})\) are the moment-generating functions defined in (D.11) and (D.12) with \(\theta_{A},\theta_{B}\leq0\).
Proof
The solutions given in (D.15), (D.16) can be obtained by an immediate application of Lemma D.3. Note that the bivariate random variables \((Y_{1}^{(A)},Y_{1}^{(B)})\) and \((\widetilde{Y}_{1}^{(A)},\widetilde{Y}_{1}^{(B)})\) associated with the jumps of the counterparties are assumed to take values in \(\mathbb{R}_{+}^{2}\). Then the corresponding moment-generating function \(\varPhi(\theta_{A},\theta_{B})\) exists if \(\theta_{A},\theta_{B}\leq0\). From Lemma D.2, it follows that the solution (D.6) is given by
$$\begin{aligned} \beta(\kappa,\sigma,b;u) = B(\kappa,\sigma;u) + e^{\phi(u)}\frac{1}{\frac{1}{b}-\frac{\sigma^2}{2}\int_0^u e^{\phi(v)}\,dv}\leq0,\quad 0\leq u\leq T, \end{aligned}$$
provided the initial value β(κ,σ,b;0)=b≤0. This is because B(κ,σ;u)≤0 for all 0≤u≤T, by (D.1). Hence, if the initial values \(\beta_{A}(0)=e_{g}\leq0\) and \(\beta_{B}(0)=f_{g}\leq0\) in (D.15) and (D.16), then \(\varPhi(c_{A}\beta_{A}(u),c_{B}\beta_{B}(u))\), \(\widetilde{\varPhi}(d_{A}\beta_{A}(u),0)\) and \(\widetilde{\varPhi}(0,d_{B}\beta_{B}(u))\) exist since \(c_{A},c_{B},d_{A},d_{B}>0\), and they can be computed using (D.11) and (D.12). Hence, we can derive the solution \(\beta_{AB}(u)\) to the third equation in (D.9), which is given by (D.17). □
Based on the above explicit solution to the generalized Riccati equation (D.9), we immediately obtain the following result.
Lemma D.6
The generalized Riccati equation (D.10) admits the explicit solution
$$\begin{aligned} \theta_A(u) =&b_g\exp\left(-\kappa_Au + \sigma_A^2\int_0^u\beta_A(v)\,dv\right), \\ \theta_B(u) =&c_g\exp\left(-\kappa_Bu + \sigma_B^2\int_0^u\beta_B(v)\,dv\right), \\ \theta_{AB}(u) =&a_g + \int_0^u\Bigg(\alpha_A \theta_A(v) + \alpha_B \theta_B(v)+ \widehat{\lambda}_cc_A\theta_A(v)\frac{\partial{\varPhi}(c_A \beta_A(v), c_B \beta_B(v))}{\partial\theta_A} \\ &{}+ \widehat{\lambda}_cc_B\theta_B(v)\frac{\partial{\varPhi}(c_A \beta_A(v), c_B \beta_B(v))}{\partial\theta_B} + \widehat{\lambda}_Ad_A\theta_A(v) \frac{\partial \widetilde{\varPhi}(d_A\beta_A(v), 0)}{\partial\theta_A} \\ &{}+ \widehat{\lambda}_Bd_B\theta_B(v) \frac{\partial\widetilde{\varPhi}(0, d_B \beta_B(v))}{\partial \theta_B}\Bigg)\,dv, \quad0\leq u\leq T. \end{aligned}$$
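The closed form for \(\theta_A\) above can be checked against a direct numerical integration of the first equations of (D.9) and (D.10), tracking \(\int_0^u\beta_A(v)\,dv\) alongside. A sketch with illustrative (hypothetical) parameter values, assuming NumPy/SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters, not calibrated values from the paper
kappa_A, sigma_A, a_ell, e_g, b_g = 1.2, 0.7, 2.5, -0.4, 1.5
u_end = 2.0

def rhs(u, y):
    beta, theta, int_beta = y
    return [-kappa_A * beta + 0.5 * sigma_A**2 * beta**2 - a_ell,  # beta_A' (first eq. of (D.9))
            (-kappa_A + sigma_A**2 * beta) * theta,                # theta_A' (first eq. of (D.10))
            beta]                                                  # running integral of beta_A

sol = solve_ivp(rhs, (0.0, u_end), [e_g, b_g, 0.0], rtol=1e-10, atol=1e-12)
beta_end, theta_end, int_beta = sol.y[:, -1]

# compare the ODE solution with theta_A(u) = b_g exp(-kappa_A u + sigma_A^2 int_0^u beta_A)
theta_formula = b_g * np.exp(-kappa_A * u_end + sigma_A**2 * int_beta)
err = abs(theta_end - theta_formula)
```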
Appendix E: Proof of Proposition 6.2
Recall from (4.12) that for 0≤t≤s≤T,
$$\begin{aligned} \widehat{F}(t,s)=\mathbb{E} \left[\int_{\mathcal{O}}\mathbb{E} \left[\exp\left(-\int_t^{{s}}X_u(\boldsymbol{p})\,du\right)\right ]q(dp)\eta(dy)\phi_0(d x)\right], \end{aligned}$$
where \(\boldsymbol{p}=(p,y,x)\in\mathcal{O}\) with \(p=(\alpha,\kappa,\sigma,c,d,\widehat{\lambda})\in\mathcal{O}_{p}\). The limit process \(X(\boldsymbol{p})=(X_{t}(\boldsymbol{p});\ t\geq0)\) is a shifted square-root diffusion process given by
$$\begin{aligned} X_t(\boldsymbol{p}) = x + \int_0^t \big(D(\boldsymbol{p}) + \alpha- \kappa X_u(\boldsymbol{p}) \big) \,du + \sigma\int_0^t \big(X_u(\boldsymbol{p})\big)^{1/2} \,d W_u, \end{aligned}$$
where the drift D(p) is given by (4.10), i.e., \(D(\boldsymbol{p})=dy_{2}\widehat{\lambda}+cy_{1}\widehat{\lambda}_{c}\).
Note that the limit process X(p) is an affine process. Using Lemma D.4, we have, for 0≤t≤s,
$$\mathbb{E} \left[\exp\left(-\int_t^sX_u(\boldsymbol{p})\,du\right)\bigg|X_t(\boldsymbol{p})=x\right]=\exp\left(A_{\boldsymbol{p}}(s-t) + B_{\boldsymbol{p}}(s-t) x\right), $$
where the functions \(A_{\boldsymbol{p}}\) and \(B_{\boldsymbol{p}}\) satisfy the system of Riccati equations
$$\begin{aligned} \begin{cases} -A'_{\boldsymbol{p}}(u) + \big(D(\boldsymbol{p})+\alpha\big)B_{\boldsymbol{p}}(u) =0, \\ -B_{\boldsymbol{p}}'(u)-\kappa B_{\boldsymbol{p}}(u) + \frac{1}{2}\sigma^2B_{\boldsymbol{p}}^2(u)-1=0, \end{cases} \end{aligned}$$
(E.1)
with the initial conditions
$$\begin{aligned} A_{\boldsymbol{p}}(0) = B_{\boldsymbol{p}}(0) = 0. \end{aligned}$$
(E.2)
By Lemma D.1, the solution to the second equation of the system (E.1) is given by
$$\begin{aligned} B_{\boldsymbol{p}}(u) = -\frac{2 \left(e^{\varpi u } - 1\right)}{2 \varpi+ \left( \kappa+\varpi\right) \left(e^{\varpi u } -1 \right) },\quad 0\leq u\leq s, \end{aligned}$$
where \(\varpi= \sqrt{\kappa^{2} + 2 \sigma^{2}}\). Using the first equation of the system (E.1) and the initial conditions (E.2), it follows that
$$\begin{aligned} e^{A_{\boldsymbol{p}}(s)}=\exp\left(\big(\alpha+ D(\boldsymbol{p})\big)\int_0^s B_{\boldsymbol{p}}(u)\,du\right), \end{aligned}$$
where
$$\begin{aligned} \int_0^s B_{\boldsymbol{p}}(u) \,du = \frac{2s}{\varpi-\kappa}+\frac{4}{\varpi^2-\kappa^2}\log\frac{2 \varpi}{(1+e^{\varpi s})\varpi+(e^{\varpi s}-1)\kappa} . \end{aligned}$$
Hence, we obtain
$$\begin{aligned} &\int_{\mathcal{O}}\mathbb{E} \left[\exp\left(-\int _t^{s}X_u(\boldsymbol{p})du\right)\right]q(dp)\eta(dy)\phi_0(dx) \\ &\quad=\int_{\mathcal{O}}\exp\left(A_{\boldsymbol{p}}(s-t) + B_{\boldsymbol{p}}(s-t) x\right) q(dp) \eta(dy)\phi_0(dx) \\ &\quad= e^{x^* B_{p^*}(s-t)} \int_{\mathbb{R}_+^2} e^{A_{(p^*,y_1,y_2)}(s-t)} \delta_Y(dy_1) \delta_{\widetilde{Y}} (dy_2) \\ &\quad=e^{x^* B_{p^*}(s-t)+{A_{(p^*,Y,\widetilde {Y})}(s-t)}} \\ &\quad= e^{x^* B_{p^*}(s-t)} \exp\left((\alpha^*+ d^*\widehat{\lambda}^*\widetilde{Y} + c^*\widehat{\lambda}_cY)\int_0^{s-t} B_{p^*}(u)\,du\right). \end{aligned}$$
Using the independence of the exponential random variables Y and \(\widetilde{Y}\), we have
$$\begin{aligned} \widehat{F}(t,s) =& e^{x^* B_{p^*}(s-t)} \mathbb{E} \left[\exp\left((\alpha^*+ d^*\widehat{\lambda}^*\widetilde{Y} + c^*\widehat{\lambda}_cY) \int_0^{s-t} B_{p^*}(u)\,du\right)\right ] \\ =&\exp\left(x^* B_{p^*}(s-t) + \alpha^*\int_0^{s-t} B_{p^*}(u)\,du \right) \\ &{}\times\mathbb{E} \left[\exp\left(\widetilde{Y}d^*\widehat {\lambda}^*\int_0^{s-t} B_{p^*}(u)\,du\right)\right] \mathbb{E} \left[\exp\left(Yc^*\widehat{\lambda}_c\int_0^{s-t} B_{p^*}(u)\,du\right)\right] \\ =&\exp\left(x^* B_{p^*}(s-t) + \alpha^*\int_0^{s-t} B_{p^*}(u)\,du \right) \\ &{}\times\frac{\gamma_1}{\gamma_1-c^*\widehat{\lambda}_c\int_0^{s-t} B_{p^*}(u)\,du} \frac{\gamma_2}{\gamma_2-d^*\widehat{\lambda}^*\int_0^{s-t} B_{p^*}(u)\,du}, \end{aligned}$$
since \(\int_{0}^{u} B_{p^{*}}(z)\,dz<0\) for all u>0. Hence, the proof of the proposition is complete. □
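Both the closed form for \(\int_0^sB_{p^*}(u)\,du\) and its negativity, used in the final step of the proof, can be verified numerically; a minimal sketch with illustrative parameters, assuming NumPy/SciPy:

```python
import numpy as np
from scipy.integrate import quad

kappa, sigma = 0.9, 0.6                 # illustrative components of p*
varpi = np.sqrt(kappa**2 + 2.0 * sigma**2)

def B(u):
    # B_{p*}(u) from Lemma D.1
    e = np.exp(varpi * u) - 1.0
    return -2.0 * e / (2.0 * varpi + (kappa + varpi) * e)

def int_B(s):
    # closed form for int_0^s B_{p*}(u) du
    return 2.0 * s / (varpi - kappa) + 4.0 / (varpi**2 - kappa**2) \
        * np.log(2.0 * varpi / ((1.0 + np.exp(varpi * s)) * varpi
                                + (np.exp(varpi * s) - 1.0) * kappa))

ss = [0.5, 1.5, 4.0]
errs = [abs(quad(B, 0.0, s)[0] - int_B(s)) for s in ss]
```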
Appendix F: Proof of Proposition 6.3
For \(t\leq t_{B}\leq T\) and \((x_{A},x_{B})\in\mathbb{R}_{+}^{2}\), using (D.7), we obtain
$$\begin{aligned} & H_1(t_B-t,x_A,x_B) \\ &\quad =\mathbb{E} \left[\exp\left(-\int_t^{t_B} (\xi_s^{(A)}+\xi _s^{(B)})\, ds\right)\xi_{t_B}^{(B)}\bigg|\xi_t^{(A)}=x_A,\xi _t^{(B)}=x_B\right] \\ &\quad =\big(h_1(t_B-t)+h_A(t_B-t)x_A+h_B(t_B-t)x_B\big) \\ & \qquad{}\times\exp\big(\widehat{h}_1(t_B-t) + \widehat{h}_A(t_B-t)x_A+ \widehat{h}_B(t_B-t)x_B\big), \end{aligned}$$
where the functions \((\widehat{h}_{1}(u),\widehat{h}_{A}(u),\widehat{h}_{B}(u))\) satisfy the generalized Riccati equation R(1,1,0) given by (D.9), while the functions \((h_{1}(u),h_{A}(u),h_{B}(u))\) satisfy the generalized Riccati equation given by (D.10). The initial conditions are given by
$$\begin{aligned} h_1(0)=h_A(0)=\widehat{h}_1(0)=\widehat{h}_A(0)=\widehat{h}_B(0)=0,\quad \mbox{and}\quad h_B(0)=1. \end{aligned}$$
Solving the corresponding Riccati equations via Lemmas D.5 and D.6, we obtain the solutions (6.2) and (6.3), respectively.
By virtue of Lemma D.4, we have, for \(t\leq t_{A}\leq T\),
$$\begin{aligned} H_2(t_A-t,x_A,x_B) =&\big(w_1(t_A-t)+w_A(t_A-t)x_A+w_B(t_A-t)x_B\big) \\ &{} \times\exp\big(\widehat{w}_1(t_A-t) +\widehat {w}_A(t_A-t)x_A+\widehat{w}_B(t_A-t)x_B\big), \end{aligned}$$
where the functions \((\widehat{w}_{1}(u),\widehat{w}_{A}(u),\widehat{w}_{B}(u))\) satisfy the generalized Riccati equation R(1,1,0) given by (D.9), while the functions \((w_{1}(u),w_{A}(u),w_{B}(u))\) satisfy the generalized Riccati equation given by (D.10). The initial conditions are given by
$$\begin{aligned} w_1(0)=w_B(0)=\widehat{w}_1(0)=\widehat{w}_A(0)=\widehat{w}_B(0)=0,\quad \mbox{and}\quad w_A(0)=1. \end{aligned}$$
Solving the corresponding Riccati equations by using Lemmas D.5 and D.6, we get that the solutions are given by (6.4) and (6.5), respectively. Hence, the proof of the proposition is complete. □