Locally Risk-Minimizing Hedging of Counterparty Risk for Portfolio of Credit Derivatives

Abstract

We discuss dynamic hedging of counterparty risk for a portfolio of credit derivatives by the local risk-minimization approach. We study the problem from the perspective of an investor who, trading with credit default swaps (CDS) referencing the counterparty, wants to protect herself/himself against the loss incurred at the default of the counterparty. We propose an intensity-based credit risk model with interacting default intensities that accounts for direct contagion effects. The portfolio of defaultable claims is of generic type, including CDS portfolios, risky bond portfolios and first-to-default claims, with payments allowed to depend on the default state of the reference firms and of the counterparty. Using the martingale representation of the conditional expectation of the counterparty risk price payment stream under the minimal martingale measure, we recover a closed-form representation for the locally risk-minimizing strategy in terms of classical solutions to nonlinear recursive systems of Cauchy problems. We also discuss applications of our framework to the most prominent classes of credit derivatives.

Notes

  1. Canabarro [13] argues that the high market volatility experienced during the global financial crisis created challenges for the dynamic hedge of CVA.

References

  1. Ansel, J., Stricker, C.: Unicité et existence de la loi minimale. In: Séminaire de Probabilités XXVII, 22–29. Springer, New York (1993)

  2. Arai, T.: Minimal martingale measures for jump diffusion processes. J. Appl. Probab. 41, 263–270 (2004)

  3. Azizpour, S., Giesecke, K., Schwenkler, G.: Exploring the sources of default clustering. J. Financ. Econom. Forthcoming (2017)

  4. Biagini, F., Cretarola, A.: Quadratic hedging methods for defaultable claims. Appl. Math. Optim. 56, 425–443 (2007)

  5. Biagini, F., Cretarola, A.: Local risk-minimization for defaultable markets. Math. Financ. 19, 669–689 (2009)

  6. Biagini, F., Cretarola, A.: Local risk-minimization for defaultable claims with recovery process. Appl. Math. Optim. 65, 293–314 (2012)

  7. Bielecki, T., Jeanblanc, M., Rutkowski, M.: Hedging of defaultable claims. In: Carmona, R.A., Cinlar, E., Ekeland, I., Jouini, E., Touzi, N. (eds.) Paris-Princeton Lectures on Mathematical Finance 2003. Lecture Notes in Mathematics, pp. 1–32. Springer, Berlin (2004a)

  8. Bielecki, T., Jeanblanc, M., Rutkowski, M.: Pricing and hedging of credit risk: replication and mean-variance approaches I. In: Yin, G., Zhang, Q. (eds.) Mathematics of Finance, pp. 37–53. AMS, Providence, RI (2004b)

  9. Bielecki, T., Jeanblanc, M., Rutkowski, M.: Pricing and trading credit default swaps in a hazard process model. Ann. Appl. Probab. 18, 2495–2529 (2008)

  10. Bo, L., Capponi, A.: Portfolio choice with market-credit risk dependencies. SIAM J. Control Optim. 56(4), 3050–3091 (2018)

  11. Bo, L., Capponi, A., Chen, P.C.: Credit Portfolio selection with decaying contagion intensities. Math. Financ. (2018). https://doi.org/10.1111/mafi.12177

  12. Brigo, D., Capponi, A., Pallavicini, A.: Arbitrage-free bilateral counterparty risk valuation under collateralization and application to credit default swaps. Math. Financ. 24, 125–146 (2014)

  13. Canabarro, E.: Pricing and hedging counterparty risk: lessons re-learned? Chapter 6. In: Canabarro, E. (ed.) Counterparty Credit Risk. Risk Books, London (2010)

  14. Capponi, A.: Pricing and mitigation of counterparty credit exposure. In: Fouque, J.P., Langsam, J. (eds.) Handbook of Systemic Risk. Cambridge University Press, Cambridge (2013)

  15. Ceci, C., Colaneri, K., Cretarola, A.: Local risk-minimization under restricted information on asset prices. Electron. J. Probab. 20, 1–30 (2015)

  16. Ceci, C., Colaneri, K., Cretarola, A.: Unit-linked life insurance policies: optimal hedging in partially observable market models. Insurance Math. Econ. 76, 149–163 (2017)

  17. Choulli, T., Krawczyk, L., Stricker, C.: \({\mathcal {E}}\)-martingales and their applications in mathematical finance. Ann. Probab. 26, 853–876 (1998)

  18. Choulli, T., Vandaele, N., Vanmaele, M.: The Föllmer-Schweizer decomposition: comparison and description. Stoch. Process. Appl. 120, 853–872 (2010)

  19. Föllmer, H., Sondermann, D.: Hedging of non-redundant contingent claims. In: Hildenbrand, W., Mas-Colell, A. (eds.) Contributions to Mathematical Economics, pp. 205–223. Elsevier, Amsterdam (1985)

  20. Frei, C., Capponi, A., Brunetti, C.: Managing counterparty risk in OTC markets. Finance and Economics Discussion Series Divisions of Research & Statistics and Monetary Affairs Federal Reserve Board, Washington, DC. https://www.federalreserve.gov/econres/feds/files/2017083pap.pdf (2017)

  21. Frey, R., Backhaus, J.: Dynamic hedging of synthetic CDO tranches with spread risk and default contagion. J. Econ. Dyn. Contr. 34, 710–724 (2010)

  22. Frey, R., Schmidt, T.: Pricing and hedging of credit derivatives via the innovations approach to nonlinear filtering. Financ. Stoch. 16, 105–133 (2012)

  23. Gregory, J.: Counterparty Credit Risk: The New Challenge for Global Financial Markets. Wiley, Chichester (2010)

  24. Heath, D., Schweizer, M.: Martingales versus PDEs in finance: an equivalence result with examples. J. Appl. Probab. 37, 947–957 (2000)

  25. Okhrati, R., Balbás, A., Garridoz, J.: Hedging of defaultable claims in a structural model using a locally risk-minimizing approach. Stoch. Process. Appl. 124, 2868–2891 (2014)

  26. Protter, P.: Stochastic Integration and Differential Equations, 2nd edn. Springer, New York (2005)

  27. Schweizer, M.: Hedging of Options in a General Semimartingale Model. Diss. ETH No. 8615, Zurich (1988)

  28. Schweizer, M.: On the minimal martingale measure and the Föllmer-Schweizer decomposition. Stoch. Anal. Appl. 13, 573–599 (1995)

  29. Schweizer, M.: A guided tour through quadratic hedging approaches. In: Jouini, E., Cvitanic, J., Musiela, M. (eds.) Option Pricing, Interest Rates and Risk Management, pp. 538–574. Cambridge University Press, Cambridge (2001)

  30. Schweizer, M.: Local risk-minimization for multidimensional assets and payment streams. Banach Cent. Publ. 83, 213–229 (2008)

  31. Tankov, P.: Pricing and hedging in exponential Lévy models: review of recent results. In: Paris-Princeton Lectures on Mathematical Finance 2010. Lecture Notes in Mathematics. Springer, New York (2010)

  32. Wang, W., Zhou, J., Qian, L., Su, X.: Local risk minimization for vulnerable European contingent claims on non tradable assets under regime switching models. Stoch. Anal. Appl. 34, 662–678 (2016)

Acknowledgements

The authors would like to thank two anonymous referees for their careful reading and helpful comments, which improved the presentation of this paper. The research of L. Bo is supported by the Natural Science Foundation of China under Grant 11471254.

Corresponding author

Correspondence to Claudia Ceci.

A Proofs

Proof of Lemma 2.2

By Definition 2.2, we have the representation of the dividend process of the first-to-default claim given by

$$\begin{aligned} D(t) = -\varepsilon \int _{0}^{t \wedge T} (1-K(u))du + \sum _{i=1}^N\int _{0}^{t \wedge T} L_i(H(u))H_i(u) d K(u). \end{aligned}$$
(89)

The last term of the above dividend process is in fact given by

$$\begin{aligned} \sum _{i=1}^N\int _{0}^{t \wedge T} L_i(H(u))H_i(u) d K(u)&=\sum _{i=1}^NL_i(H({\bar{\tau }}_1))H_i({\bar{\tau }}_1)\mathbf{1}_{{\bar{\tau }}_1\le t\wedge T}\\ {}&=\sum _{i=1}^NL_i(H({\bar{\tau }}_1))\mathbf{1}_{\tau _i\le {\bar{\tau }}_1}\mathbf{1}_{{\bar{\tau }}_1\le t\wedge T}. \end{aligned}$$

Notice that for all \(i=1,\ldots ,N\), we have \(\tau _i\ge {\bar{\tau }}_1=\tau _1\wedge \cdots \wedge \tau _N\), a.s. Hence \(\mathbf{1}_{\tau _i\le {\bar{\tau }}_1}=\mathbf{1}_{\tau _i={\bar{\tau }}_1}\), a.s. Thus the above equality becomes

$$\begin{aligned} \sum _{i=1}^N\int _{0}^{t \wedge T} L_i(H(u))H_i(u) d K(u)&=\sum _{i=1}^NL_i(H({\bar{\tau }}_1))\mathbf{1}_{\tau _i\le {\bar{\tau }}_1}\mathbf{1}_{{\bar{\tau }}_1\le t\wedge T}\\ {}&=\sum _{i=1}^NL_i(H({\bar{\tau }}_1))\mathbf{1}_{\tau _i={\bar{\tau }}_1}\mathbf{1}_{{\bar{\tau }}_1\le t\wedge T}. \end{aligned}$$

This results in the dividend representation given by Eq. (11). \(\square \)

Proof of Proposition 2.3

Using (7), it holds that, for \(t\in [0,T]\),

$$\begin{aligned} D(T)-D(t)&=\xi (H(T))(1-K(T))\mathbf{1}_{t\ne T}+\int _{t}^{T} (1-K(u)) a(u) du \\&\quad \, + \int _{t}^{T} Z(u) d K(u). \end{aligned}$$

Then, it follows from (15) that, for \(t\in [0,T]\),

$$\begin{aligned} {S}(t,T)&= {\mathbb {E}} ^{ {\mathbb {Q}} }\Bigg [ \xi (H(T))(1-K(T))\mathbf{1}_{t\ne T}+ \int _{t}^{T} (1-K(H(u))) a(H(u)) du\\&\quad + \, \int _{t}^{T} Z(H(u)) d K(H(u))\Big |{\mathcal {G}}_t\Bigg ]. \end{aligned}$$

Recall that \(Z(z)\) and \(K(z)\) are deterministic functions of \(z\in {{{\mathcal {S}}}}=\{0,1\}^{N+1}\).

Using integration by parts, it follows that

$$\begin{aligned} Z(H(T))K(H(T))=\,&Z(H(t))K(H(t))+\int _t^T Z(H(u))dK(H(u))\nonumber \\&\quad +\int _t^T K(H(u^-))dZ(H(u)). \end{aligned}$$
(90)
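
For completeness, Eq. (90) is the integration by parts (product) rule for the finite variation pure-jump processes \(Z(H)\) and \(K(H)\), with the covariation term absorbed into the first integral; a short sketch:

$$\begin{aligned} d\big (Z(H(u))K(H(u))\big )&=Z(H(u^-))dK(H(u))+K(H(u^-))dZ(H(u))+\Delta Z(H(u))\Delta K(H(u))\\&=Z(H(u))dK(H(u))+K(H(u^-))dZ(H(u)), \end{aligned}$$

since \(Z(H(u))\Delta K(H(u))=Z(H(u^-))\Delta K(H(u))+\Delta Z(H(u))\Delta K(H(u))\); integrating over \([t,T]\) gives (90).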

On the other hand, Itô’s formula gives that for \(u\in [t,T]\),

$$\begin{aligned} dZ(H(u))&=\sum _{j=1}^{N+1} [Z(H^j(u^-))-Z(H(u^-))]dH_j(u)\\ {}&=\sum _{j=1}^{N+1} [Z(H^j(u^-))-Z(H(u^-))]dM_j^{ {\mathbb {Q}} }(u) \\&\quad +\sum _{j=1}^{N+1}[Z(H^j(u))-Z(H(u))](1-H_j(u))(1+\vartheta _j(u))X_j(u)du. \end{aligned}$$

For \(j=1,\ldots ,N+1\), \(M_j^{ {\mathbb {Q}} }=(M_j^{ {\mathbb {Q}} }(t))_{t\in [0,T]}\) is the \({\mathbb {G}}\)-martingale given in Proposition 2.1. Hence, Eq. (90) yields that

$$\begin{aligned} \int _t^T Z(H(u))dK(H(u))&=Z(H(T))K(H(T))-Z(H(t))K(H(t))\\&\qquad -\int _t^T K(H(u^-))dZ(H(u))\\&=Z(H(T))K(H(T))-Z(H(t))K(H(t))\\&\qquad -\sum _{j=1}^{N+1}\int _t^T K(H(u^-))[Z(H^j(u^-))-Z(H(u^-))]dM_j^{ {\mathbb {Q}} }(u)\\&\qquad -\sum _{j=1}^{N+1}\int _t^TK(H(u)) [Z(H^j(u))\\&\qquad -Z(H(u))](1-H_j(u))(1+\vartheta _j(u))X_j(u)du. \end{aligned}$$

This results in the price representation given by \(S(t,T)=F(t,X(t),H(t))-Z(H(t))K(H(t))\), where

$$\begin{aligned} F(t,x,z)&:= {\mathbb {E}} _{t,x,z}^{ {\mathbb {Q}} }\bigg [ \xi (H(T))(1-K(H(T)))\mathbf{1}_{t\ne T}+ Z(H(T))K(H(T))\nonumber \\&\quad +\int _{t}^{T} (1-K(H(u))) a(H(u)) du\nonumber \\&\quad -\sum _{j=1}^{N+1}\int _t^T K(H(u))[Z(H^j(u))\nonumber \\&\quad -Z(H(u))](1-H_j(u))(1+\vartheta _j(u))X_j(u)du\bigg ], \end{aligned}$$
(91)

using that the pair \((X,H)\) is a \({\mathbb {G}}\)-adapted Markov process. Then the price representation (17) follows from the decomposition of \(F(t,x,z)\) given by

$$\begin{aligned} F(t,x,z)=\mathbf{1}_{t\ne T} \Lambda _1(t,x,z) + \Lambda _2(t,x,z),\qquad (t,x,z)\in [0,T]\times \mathbb {R}_+^{N+1}\times {{{\mathcal {S}}}}. \end{aligned}$$
(92)

This completes the proof of the proposition. \(\square \)

Proof of Proposition 2.4

For \((t,x)\in [0,T)\times \mathbb {R}_+^{N+1}\), we rewrite (26) as follows:

$$\begin{aligned}&\left( \frac{\partial }{\partial t}+\tilde{{{\mathcal {A}}}}^{ {\mathbb {Q}} }\right) u(t,x)+h(x)u(t,x)+w(t,x)=0 \end{aligned}$$
(93)

with \(u(T,x)=\alpha _1\xi ^{(l)}(1-K^{(l)})+\alpha _2Z^{(l)}K^{(l)}\) for all \(x\in \mathbb {R}_+^{N+1}\). The coefficients are given by

$$\begin{aligned} h(x)&:= -\sum _{j\notin \{j_1,\ldots ,j_l\}}(1+\vartheta _j^{(l)}(x))x_j,\\ w(t,x)&:=\sum _{j\notin \{j_1,\ldots ,j_l\}}(1+\vartheta _j^{(l)}(x))x_j\big [F_{\alpha }^{(l+1),j}(t,x+w_j)\\&\quad -\alpha _3K^{(l)} (Z^{(l+1),j}-Z^{(l)})\big ]+\alpha _3(1-K^{(l)})a^{(l)}. \end{aligned}$$

We apply Theorem 1 of Heath and Schweizer [24] to prove existence and uniqueness of classical solutions to Eq. (93) by verifying that their conditions [A1], [A2], [A3’] and [A3a’]-[A3e’] hold in our case. Consider a sequence of bounded domains \(D_n:=(\frac{1}{n},n)^{N+1}\), \(n\in \mathbb {N} \), with smoothed corners and satisfying \(\bigcup _{n=1}^{\infty }D_n=\mathbb {R}_+^{N+1}\). This shows that the condition [A3’] on the domain of the equation holds. By the assumptions (A1)–(A3), the conditions [A1] and [A2] for the coefficients \(\mu (x)+\sigma (x)\widetilde{\theta }(x,z)\) and \(\sigma (x)\) are satisfied. This also implies that [A3a’] holds. Moreover, since \(\sigma \sigma ^{\top }(x)\) is continuous and invertible under the assumptions (A1) and (A2), \(\sigma \sigma ^{\top }(x)\) is uniformly elliptic on \([0,T]\times {\overline{D}}_n\), i.e. [A3b’] holds. Notice that \(F_{\alpha }^{(l+1),j}(t,x+w_j)\) is bounded and \(C^{1,2}\) in \((t,x)\) by the induction hypothesis. Additionally, notice that \(h(x)\) is linear in \(x\). Then the conditions [A3c’] and [A3d’] on the coefficients \(h(x)\) and \(w(t,x)\) on \((t,x)\in [0,T]\times {\overline{D}}_n\) are satisfied. Finally we need to verify [A3e’]. For this, it suffices to prove the uniform integrability of the family

$$\begin{aligned} \left\{ \int _t^Tw(s,{\check{X}}^{(t,x)}(s))e^{\int _t^sh({\check{X}}^{(t,x)}(u))du}ds;\ (t,x)\in [0,T]\times \mathbb {R}_+^{N+1}\right\} . \end{aligned}$$
(94)

Here, the underlying \(\mathbb {R}_+^{N+1}\)-valued process \(({\check{X}}^{(t,x)}(s))_{s\in [t,T]}\) is the unique strong solution of

$$\begin{aligned} d{\check{X}}^{(t,x)}(s)&=(\mu ({\check{X}}^{(t,x)} (s))+\sigma ({\check{X}}^{(t,x)}(s))\widetilde{\theta }(\check{X}^{(t,x)}(s),0^{j_1,\ldots ,j_l}))ds\\&\quad +\sigma ({\check{X}}^{(t,x)}(s))dW(s),\ {\check{X}}^{(t,x)}(t)=x. \end{aligned}$$

By the inductive hypothesis that \(F_{\alpha }^{(l+1),j}(t,x)\) is nonnegative and bounded on \([0,T]\times \mathbb {R}_+^{N+1}\) for all \(j\notin \{j_1,\ldots ,j_l\}\), there exists a constant \(C>0\) independent of \((t,x)\) such that for all \((t,x)\in [0,T]\times \mathbb {R}_+^{N+1}\),

$$\begin{aligned}&{\mathbb {E}} \left[ \left| \int _t^Tw(s,{\check{X}}^{(t,x)}(s))e^{\int _t^sh({ \check{X}}^{(t,x)}(u))du}ds\right| ^2\right] \nonumber \\&\quad \le C {\mathbb {E}} \left[ \left| \int _t^Te^{-\int _t^s(\sum _{k \notin \{j_1,\ldots ,j_l\}}(1+\vartheta _k({\check{X}}^{(t,x)}(u))){ \check{X}}_k^{(t,x)}(u))du} \right. \right. \nonumber \\&\left. \left. \qquad \times \left( 1+\sum _{j\notin \{j_1,\ldots ,j_l\}}(1+\vartheta _j({\check{X}}^{(t,x)}(s))){ \check{X}}_j^{(t,x)}(s)\right) ds\right| ^2 \right] \nonumber \\&\quad \le 2CT^2\nonumber \\&\qquad +2C {\mathbb {E}} \left[ \left| \int _t^Te^{-\int _t^s(\sum _{k\notin \{j_1,\ldots ,j_l\}} (1+\vartheta _k({\check{X}}^{(t,x)}(u))){\check{X}}_k^{(t,x)}(u))du} \right. \right. \nonumber \\&\qquad \left. \left. \times \, d\left( \int _t^s\sum _{j\notin \{j_1,\ldots ,j_l\}}(1+\vartheta _j({ \check{X}}^{(t,x)}(u))){\check{X}}_j^{(t,x)}(u)du\right) \right| ^2\right] \nonumber \\&\quad \le 2CT^2+2C\left\{ 1+\left| {\mathbb {E}} \left[ e^{-\int _t^T(\sum _{k\notin \{j_1,\ldots ,j_l\}} (1+\vartheta _k({\check{X}}^{(t,x)}(u))){\check{X}}_k^{(t,x)}(u))du}\right] \right| ^2\right\} \nonumber \\&\quad \le 2CT^2+4C. \end{aligned}$$
(95)
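
Behind the final steps of (95) (and of the analogous estimates in the proof of Theorem 3.5 below) lies the elementary identity, valid for any nonnegative integrable function \(f\) on \([t,T]\):

$$\begin{aligned} \int _t^T f(s)e^{-\int _t^s f(u)du}ds=1-e^{-\int _t^T f(u)du}\in [0,1]. \end{aligned}$$

Here it is applied with \(f(s)=\sum _{k\notin \{j_1,\ldots ,j_l\}}(1+\vartheta _k({\check{X}}^{(t,x)}(s))){\check{X}}_k^{(t,x)}(s)\), which is nonnegative since the \( {\mathbb {Q}} \)-default intensities \((1+\vartheta _j(u))X_j(u)\) are nonnegative.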

This yields the existence of a constant \(C>0\), independent of \((t,x)\), such that

$$\begin{aligned} \sup _{(t,x)\in [0,T]\times \mathbb {R}_+^{N+1}} {\mathbb {E}} \left[ \left| \int _t^Tw(s, {\check{X}}^{(t,x)}(s))e^{\int _t^sh({\check{X}}^{(t,x)}(u))du}ds\right| ^2\right] \le C<+\infty . \end{aligned}$$

This yields the uniform integrability of the family (94), and hence the condition [A3e’] of Heath and Schweizer [24] is satisfied. By Theorem 1 of Heath and Schweizer [24], Eq. (93) admits a unique classical solution u(t,x) on \([0,T]\times \mathbb {R}_+^{N+1}\).
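
In the setting of Theorem 1 of Heath and Schweizer [24], this solution moreover coincides with a Feynman-Kac-type stochastic representation; a sketch, using the terminal condition stated above:

$$\begin{aligned} u(t,x)= {\mathbb {E}} \left[ \big (\alpha _1\xi ^{(l)}(1-K^{(l)})+\alpha _2Z^{(l)}K^{(l)}\big )e^{\int _t^Th({\check{X}}^{(t,x)}(u))du}+\int _t^Tw(s,{\check{X}}^{(t,x)}(s))e^{\int _t^sh({\check{X}}^{(t,x)}(u))du}ds\right] . \end{aligned}$$

Since \(h\le 0\) (the \( {\mathbb {Q}} \)-intensities \((1+\vartheta _j^{(l)}(x))x_j\) being nonnegative), the first term is bounded by the bounded terminal data, while the absolute value of the second term is controlled, via Jensen's inequality, by the square root of the bound in (95); this is the source of the boundedness claim below.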

Further, the estimate (95) implies that this solution is bounded for all \((t,x)\in [0,T]\times \mathbb {R}_+^{N+1}\). This completes the proof of the proposition. \(\square \)

Proof of Lemma 2.5

It follows from Eq. (7) that

$$\begin{aligned} D(T) = \xi (H(T))(1-K(T)) +\int _{0}^{T} (1-K(u)) a(u) du + \int _{0}^{T} Z(u) d K(u). \end{aligned}$$

Using integration by parts (90), we have that

$$\begin{aligned} D(T)&= \xi (H(T))(1-K(H(T))) + \int _{0}^{T} (1-K(H(u))) a(u) du + Z(H(T))K(H(T))\\&\quad -Z(H(0))K(H(0))-\int _{0}^{T} K(H(u^-)) d Z(H(u)). \end{aligned}$$

Since \(K(0)=0\), it follows from Proposition 2.4 that

$$\begin{aligned} Y(t)&= F_{(1,1,1)}(t,X(t),H(t))+\int _{0}^{t} (1-K(u)) a(u) du -\int _{0}^{t} K(H(u^-)) d Z(H(u)). \end{aligned}$$
(96)

Above, \(F_{(1,1,1)}(t,x,z)\) is the unique bounded classical solution to the recursive system of backward Cauchy problems given, for \((t,x,z)\in [0,T)\times \mathbb {R}_+^{N+1}\times {{{\mathcal {S}}}}\), by

$$\begin{aligned}&\left( \frac{\partial }{\partial t} + {{{\mathcal {A}}}}^{ {\mathbb {Q}} }\right) F_{(1,1,1)}(t,x,z)+(1-K(z))a(z)\nonumber \\&\quad - \sum _{j=1}^{N+1}K(z)[Z(z^j)-Z(z)](1-z_j)(1+\vartheta _j(x))x_j= 0 \end{aligned}$$
(97)

with the terminal condition

$$\begin{aligned} F_{(1,1,1)}(T,x,z)=\xi (z)(1-K(z))+Z(z)K(z),\qquad (x,z)\in \mathbb {R}_+^{N+1}\times {{{\mathcal {S}}}}. \end{aligned}$$
(98)

Applying Itô’s formula and (97), we obtain that

$$\begin{aligned}&F_{(1,1,1)}(t,X(t),H(t)) =F_{(1,1,1)}(0,X(0),H(0))\\&\qquad +\int _0^t\left\{ \sum _{j=1}^{N+1}K(H(u))[Z(H^j(u))-Z(H(u))] (1-H_j(u))\right. \\&\left. \qquad (1 +\,\vartheta _j(u))X_j(u)-(1-K(H(u)))a(H(u))\right\} du\\&\qquad + \int _0^t D_xF_{(1,1,1)}(u,X(u),H(u))^{\top }\sigma (X(u))dW^{ {\mathbb {Q}} }(u)\\&\qquad +\sum _{j=1}^{N+1} \int _0^t [F_{(1,1,1)}(u,X(u^-)+w_j,H^j(u^-))\\&\qquad - F_{(1,1,1)}(u,X(u^-),H(u^-))]dM_j^{ {\mathbb {Q}} }(u). \end{aligned}$$

Using Eq. (96), we deduce

$$\begin{aligned} dY(t)&=D_xF_{(1,1,1)}(t,X(t),H(t))^{\top }\sigma (X(t))dW^{ {\mathbb {Q}} }(t)\\&\quad +\sum _{j=1}^{N+1}[F_{(1,1,1)}(t,X(t^-)+w_j,H^j(t^-))-F_{(1,1,1)} (t,X(t^-),H(t^-))]dM_j^{ {\mathbb {Q}} }(t)\\&\quad -\sum _{j=1}^{N+1}K(H(t^-))[Z(H^j(t^-))-Z(H(t^-))]dM_j^{ {\mathbb {Q}} }(t). \end{aligned}$$

This yields the dynamics (28) of the gain process. \(\square \)

Proof of Lemma 3.3

We first verify that the density process \(\xi \) is strictly positive and square integrable. The assumption \(0<1+{\hat{\lambda }}(t,x,z)\Psi _j(t,x,z)<\nu _j\) implies, via the SDE representation of the stochastic exponential, that \(\xi \) is strictly positive. We next introduce the so-called mean-variance trade-off process given by

$$\begin{aligned} \Xi (t)&:=\int _0^t {\hat{\lambda }}(s,X(s),H(s))^2 d\left<Q\right>(s) \nonumber \\&=\int _0^t\frac{|\Upsilon (s) \widetilde{\theta }(s)+\sum _{j=1}^{N+1}\Psi _j(s)(1-H_j(s)) \vartheta _j(s)X_j(s)|^2}{|\Upsilon (s)|^2+\sum _{j=1}^{N+1} \Psi _j^2(s)(1-H_j(s))X_j(s)}ds\nonumber \\&\le 2\int _0^t|\widetilde{\theta }(s)|^2ds+2\sum _{j=1}^{N+1}\int _0^t\vartheta _j^2(s)X_j(s)ds. \end{aligned}$$
(99)
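
The last inequality in (99) follows from \((a+b)^2\le 2a^2+2b^2\) together with the Cauchy-Schwarz inequality applied to the jump part; a sketch:

$$\begin{aligned} \Big |\sum _{j=1}^{N+1}\Psi _j(s)(1-H_j(s))\vartheta _j(s)X_j(s)\Big |^2\le \left( \sum _{j=1}^{N+1}\Psi _j^2(s)(1-H_j(s))X_j(s)\right) \left( \sum _{j=1}^{N+1}(1-H_j(s))\vartheta _j^2(s)X_j(s)\right) , \end{aligned}$$

so that, after dividing by the denominator in (99) and using \(1-H_j(s)\le 1\), the jump contribution is bounded by \(2\sum _{j=1}^{N+1}\vartheta _j^2(s)X_j(s)\), while the Brownian contribution is bounded by \(2|\widetilde{\theta }(s)|^2\).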

Then Assumption (A3) yields that \(\Xi =(\Xi (t))_{t\in [0,T]}\) is uniformly bounded. Using Proposition 3.7 of Choulli et al. [17], the process \(\xi \) satisfies the reverse Hölder inequality, see also Assumption 3.2 in Arai [2]. On the other hand, the structural condition given by \(B=-\int _0^{\cdot }{\hat{\lambda }}(s,X(s^-),H(s^-))d\left<Q\right>(s)\) implies that \(Y_{N+1}\xi \) is a local \( {\mathbb {P}} \)-martingale (see Ansel and Stricker [1]). Using the arguments in Sect. 3 of Arai [2], we have that \(\xi \) is the density process of the MMM \({\hat{ {\mathbb {P}} }}\) w.r.t. \( {\mathbb {P}} \). \(\square \)

Proof of Theorem 3.5

Without loss of generality, we set \(L_{N+1}(z)=1\) for all \(z\in {{{\mathcal {S}}}}\). Then, in the default state \(z=0^{j_1,\ldots ,j_l}\), we rewrite Eq. (62) in the following abstract form: for \((t,x)\in [0,T)\times \mathbb {R}_+^{N+1}\),

$$\begin{aligned} \left( \frac{\partial }{\partial t}+\bar{{{\mathcal {A}}}}\right) u(t,x)+h(t,x)u(t,x)+w(t,x)=0 \end{aligned}$$
(100)

with \(u(T,x)=0\) for all \(x\in \mathbb {R}_+^{N+1}\). The coefficients are given by

$$\begin{aligned} h(t,x)&:=-\sum _{j\notin \{j_1,\ldots ,j_l\}} {\hat{F}}_j^{(l)}(t,x),\\ w(t,x)&:=\left\{ \sum _{i=1}^{{\bar{N}}} b_i(1-K_i^{(l+1),N+1})\big [F_{(1,1,1)i}(t,x\right. \\&\left. \quad +\,w_{N+1},0^{j_1, \ldots ,j_l,N+1})-Z_i^{(l+1),N+1}K_i^{(l+1),N+1}\big ]\right\} _+\\&\qquad \times {\hat{F}}_{N+1}^{(l)}(t,x)\mathbf{1}_{j_1,\ldots ,j_l\ne N+1}+\sum _{j\notin \{j_1,\ldots ,j_l\}}g^{(l+1),j}(t,x+w_j){\hat{F}}_j^{(l)}(t,x). \end{aligned}$$

We next apply Theorem 1 of Heath and Schweizer [24] to prove existence and uniqueness of classical solutions of Eq. (100) by verifying that their conditions [A1], [A2], [A3’] and [A3a’]-[A3e’] hold in our case. We first consider bounded domains \(D_n:=(\frac{1}{n},n)^{N+1}\), \(n\in \mathbb {N} \), with smoothed corners such that \(\bigcup _{n=1}^{\infty }D_n=\mathbb {R}_+^{N+1}\), so that the condition [A3’] on the domain of the equation holds. Using assumptions (A1)–(A3), the conditions [A1] and [A2] hold. The same assumptions also imply that [A3a’] holds. Moreover, \(\sigma \sigma ^{\top }(x)\) is uniformly elliptic on \([0,T]\times {\overline{D}}_n\), i.e. [A3b’] holds. Notice that the solution \(g^{(l+1),j}(t,x+w_j)\) is bounded and \(C^{1,2}\) in \((t,x)\) by the induction hypothesis for \(j\notin \{j_1,\ldots ,j_l\}\). The function \(F_{(1,1,1)i}(t,x)\) is also bounded and \(C^{1,2}\) in \((t,x)\) for \(i=1,\ldots ,{\bar{N}}\) by Proposition 2.4. Note that the positive function \({\hat{F}}_j^{(l)}(t,x)\) is \(C^1\) in \((t,x)\). Then the conditions [A3c’] and [A3d’] on the coefficients \(h(t,x)\) and \(w(t,x)\), \((t,x)\in [0,T]\times {\overline{D}}_n\), are satisfied. It remains to verify [A3e’]. For this, it suffices to prove the uniform integrability of the family

$$\begin{aligned} \left\{ \int _t^Tw(s,{\hat{X}}^{(t,x)}(s))e^{\int _t^sh(u,{\hat{X}}^{(t,x)}(u))du}ds;\ (t,x)\in [0,T]\times \mathbb {R}_+^{N+1}\right\} . \end{aligned}$$
(101)

Here, for \(t\in [0,T]\), the \((N+1)\)-dimensional Markov process \(({\hat{X}}^{(t,x)}(s))_{s\in [t,T]}\) satisfies an SDE with initial condition \({\hat{X}}^{(t,x)}(t)=x\) and infinitesimal generator given by \(\bar{{{\mathcal {A}}}}\) in (60).

Consider first the case \(N+1\in \{j_1,\ldots ,j_l\}\). Because \(g^{(l+1),j}(t,x)\) is bounded on \([0,T]\times \mathbb {R}_+^{N+1}\) by the induction hypothesis, there exists a constant \(C>0\) such that

$$\begin{aligned}&{\mathbb {E}} \left[ \left| \int _t^Tw(s,{\hat{X}}^{(t,x)}(s))e^{\int _t^sh(u,{ \hat{X}}^{(t,x)}(u))du}ds\right| ^2\right] \\&\quad \le C {\mathbb {E}} \left[ \left| \int _t^T\left( \sum _{j\notin \{j_1,\ldots ,j_l\}}{ \hat{F}}_j^{(l)}(s,{\hat{X}}^{(t,x)}(s))\right) e^{-\int _t^s\sum _{k \notin \{j_1,\ldots ,j_l\}} {\hat{F}}_k^{(l)}(u,{\hat{X}}^{(t,x)}(u))du}ds \right| ^2\right] \\&\quad =C {\mathbb {E}} \left[ \left| \int _t^Te^{-\int _t^s\sum _{k\notin \{j_1,\ldots ,j_l\}} {\hat{F}}_k^{(l)}(u,{\hat{X}}^{(t,x)}(u))du}d\left( \int _t^s\sum _{j \notin \{j_1,\ldots ,j_l\}}{\hat{F}}_j^{(l)}(u,{\hat{X}}^{(t,x)}(u))du\right) \right| ^2\right] \\&\quad \le C\left\{ 1+\left| {\mathbb {E}} \left[ e^{-\int _t^T\sum _{k\notin \{j_1,\ldots ,j_l\}} {\hat{F}}_k^{(l)}(u,{\hat{X}}^{(t,x)}(u))du}\right] \right| ^2\right\} \\&\quad \le C, \end{aligned}$$

where \(C>0\) is independent of \((t,x)\). Next, consider the case \(N+1\notin \{j_1,\ldots ,j_l\}\). Also notice that \(F_{(1,1,1)i}(t,x)\) is bounded and \(C^{1,2}\) in \((t,x)\) for \(i=1,\ldots ,{\bar{N}}\) by Proposition 2.4. Then there exists a constant \(C>0\) such that

$$\begin{aligned}&{\mathbb {E}} \left[ \left| \int _t^Tw(s,{\hat{X}}^{(t,x)}(s))e^{\int _t^sh(u,{\hat{X}}^{(t,x)} (u))du}ds\right| ^2\right] \\&\quad \le C {\mathbb {E}} \left[ \left| \int _t^Te^{-\int _t^s\sum _{k\notin \{j_1,\ldots ,j_l\}} {\hat{F}}_k^{(l)}(u,{\hat{X}}^{(t,x)}(u))du}\right. \right. \\&\left. \left. \qquad \times \left( {\hat{F}}_{N+1}^{(l)} (s,{\hat{X}}^{(t,x)}(s))+\sum _{j\notin \{j_1,\ldots ,j_l\}}{\hat{F}}_{j}^{(l)} (s,{\hat{X}}^{(t,x)}(s))\right) ds\right| ^2 \right] . \end{aligned}$$

Since \(N+1\in \{j_1,\ldots ,j_l\}^c\), \({\hat{F}}_{N+1}^{(l)}(s,{\hat{X}}^{(t,x)}(s))\le \sum _{k\notin \{j_1,\ldots ,j_l\}}{\hat{F}}_{k}^{(l)}(s,{\hat{X}}^{(t,x)}(s))\), a.s. This implies that

$$\begin{aligned}&{\mathbb {E}} \left[ \left| \int _t^Tw(s,{\hat{X}}^{(t,x)}(s))e^{\int _t^sh(u, {\hat{X}}^{(t,x)}(u))du}ds\right| ^2\right] \\&\quad \le 4C {\mathbb {E}} \left[ \left| \int _t^Te^{-\int _t^s\sum _{k\notin \{j_1,\ldots ,j_l\}} {\hat{F}}_k^{(l)}(u,{\hat{X}}^{(t,x)}(u))du}d\left( \int _t^s\sum _{j \notin \{j_1,\ldots ,j_l\}}{\hat{F}}_{j}^{(l)}(u,{\hat{X}}^{(t,x)}(u))du\right) \right| ^2\right] \\&\quad \le 4C\left\{ 1+\left| {\mathbb {E}} \left[ e^{-\int _t^T\sum _{k\notin \{j_1,\ldots ,j_l\}} {\hat{F}}_k^{(l)}(u,{\hat{X}}^{(t,x)}(u))du}\right] \right| ^2\right\} \\&\quad \le 4C, \end{aligned}$$

where \(C>0\) is independent of \((t,x)\). Thus we have verified the existence of a constant \(C>0\), independent of \((t,x)\), such that

$$\begin{aligned} \sup _{(t,x)\in [0,T]\times \mathbb {R}_+^{N+1}} {\mathbb {E}} \left[ \left| \int _t^Tw(s,{ \hat{X}}^{(t,x)}(s))e^{\int _t^sh(u,{\hat{X}}^{(t,x)}(u))du}ds\right| ^2\right] \le C<+\infty . \end{aligned}$$

This yields the uniform integrability of the family (101), and hence the condition [A3e’] of Heath and Schweizer [24] holds. Using Theorem 1 of Heath and Schweizer [24], we conclude that Eq. (100) admits a unique classical solution u(t,x) on \([0,T]\times \mathbb {R}_+^{N+1}\).

We next prove that the solution is nonnegative and bounded on \([0,T]\times \mathbb {R}_+^{N+1}\). Using the Feynman-Kac representation of the classical solution u(t,x), we have, for \((t,x)\in [0,T]\times \mathbb {R}_+^{N+1}\),

$$\begin{aligned} u(t,x)&= {\mathbb {E}} \Bigg [\int _t^Te^{-\int _t^s\sum _{k\notin \{j_1,\ldots ,j_l\}} {\hat{F}}_k^{(l)}(u,{\hat{X}}^{(t,x)}(u))du}\nonumber \\&\quad \times \Bigg (\sum _{j\notin \{j_1,\ldots ,j_l\}} {\hat{F}}_j^{(l)}(s,{\hat{X}}^{(t,x)}(s))g^{(l+1),j}(t,{\hat{X}}^{(t,x)} (s)+w_j)\nonumber \\&\qquad +\left\{ \sum _{i=1}^{{\bar{N}}} b_i(1-K_i^{(l+1),N+1})\big [F_{(1,1,1)i}(s,{\hat{X}}^{(t,x)}(s) \right. \nonumber \\&\left. \quad +\,w_{N+1},0^{j_1,\ldots ,j_l,N+1})-Z_i^{(l+1),N+1}K_i^{(l+1),N+1}\big ] \right\} _+\nonumber \\&\qquad \times {\hat{F}}_{N+1}^{(l)}(s,{\hat{X}}^{(t,x)}(s))\mathbf{1}_{j_1,\ldots ,j_l\ne N+1}\Bigg )ds \Bigg ]. \end{aligned}$$
(102)

If \(N+1\in \{j_1,\ldots ,j_l\}\), then Eq. (102) reduces to

$$\begin{aligned} u(t,x)&= {\mathbb {E}} \left[ \int _t^Te^{-\int _t^s\sum _{k\notin \{j_1,\ldots ,j_l\}} {\hat{F}}_k^{(l)}(u,{\hat{X}}^{(t,x)}(u))du}\right. \\&\quad \left. \times \left( \sum _{j \notin \{j_1,\ldots ,j_l\}}{\hat{F}}_j^{(l)}(s, {\hat{X}}^{(t,x)}(s))g^{(l+1),j}(t,{\hat{X}}^{(t,x)}(s)+w_j)\right) ds\right] . \end{aligned}$$

Since the nonnegative function \(g^{(l+1),j}(t,x)\) is bounded on \([0,T]\times \mathbb {R}_+^{N+1}\) by the inductive hypothesis, there exists a constant \(C>0\) such that

$$\begin{aligned} 0\le u(t,x)&\le C\sum _{j\notin \{j_1,\ldots ,j_l\}} {\mathbb {E}} \left[ \int _t^T{\hat{F}}_j^{(l)} (s,{\hat{X}}^{(t,x)}(s))e^{-\int _t^s\sum _{k\notin \{j_1,\ldots ,j_l\}} {\hat{F}}_k^{(l)}(u,{\hat{X}}^{(t,x)}(u))du}ds\right] \\&=C\left\{ 1- {\mathbb {E}} \left[ e^{-\int _t^T\sum _{k\notin \{j_1,\ldots ,j_l\}} {\hat{F}}_k^{(l)}(u,{\hat{X}}^{(t,x)}(u))du}\right] \right\} . \end{aligned}$$

The above inequality yields the existence of a constant \(C>0\) such that \(0\le u(t,x)\le C\) for all \((t,x)\in [0,T]\times \mathbb {R}_+^{N+1}\). Next, consider the case \(N+1\notin \{j_1,\ldots ,j_l\}\). It follows from (102) that

$$\begin{aligned} u(t,x)&= {\mathbb {E}} \Bigg [\int _t^Te^{-\int _t^s\sum _{k\notin \{j_1,\ldots ,j_l\}} {\hat{F}}_k^{(l)}(u,{\hat{X}}^{(t,x)}(u))du}\\&\qquad \quad \times \Bigg (\sum _{j\notin \{j_1,\ldots ,j_l\}} {\hat{F}}_j^{(l)}(s,{\hat{X}}^{(t,x)}(s))g^{(l+1),j}(t, {\hat{X}}^{(t,x)}(s)+w_j)\\&\qquad +\left\{ \sum _{i=1}^{{\bar{N}}} b_i(1-K_i^{(l+1),N+1})\big [F_{(1,1,1)i}(t,{\hat{X}}^{(t,x)} (s) \right. \\&\left. \qquad +\,w_{N+1},0^{j_1,\ldots ,j_l,N+1})-Z_i^{(l+1),N+1}K_i^{ (l+1),N+1}\big ]\right\} _+ \\&\qquad \times {\hat{F}}_{N+1}^{(l)}(s,{\hat{X}}^{(t,x)}(s))\Bigg )ds \Bigg ]. \end{aligned}$$

Then there exists a constant \(C>0\) such that

$$\begin{aligned} 0\le u(t,x)&\le C {\mathbb {E}} \left[ \int _t^Te^{-\int _t^s\sum _{k \notin \{j_1,\ldots ,j_l\}}{\hat{F}}_k^{(l)}(u,{\hat{X}}^{(t,x)}(u))du} \right. \\&\left. \quad \times \left( \sum _{j\notin \{j_1,\ldots ,j_l\}}{\hat{F}}_j^{(l)}(s,{\hat{X}}^{(t,x)}(s)) +{\hat{F}}_{N+1}^{(l)}(s,{\hat{X}}^{(t,x)}(s))\right) ds \right] . \end{aligned}$$

Since \(N+1\in \{j_1,\ldots ,j_l\}^c\), we have that

$$\begin{aligned} 0\le u(t,x)&\le 2C\left\{ 1- {\mathbb {E}} \left[ e^{-\int _t^T\sum _{k\notin \{j_1, \ldots ,j_l\}} {\hat{F}}_k^{(l)}(u,{\hat{X}}^{(t,x)}(u))du}\right] \right\} . \end{aligned}$$

The above inequality gives a constant \(C>0\) such that \(0\le u(t,x)\le C\) for all \((t,x)\in [0,T]\times \mathbb {R}_+^{N+1}\). This completes the proof of the theorem. \(\square \)

Proof of Lemma 3.8

It follows from (69) that

$$\begin{aligned}&\int _0^tD_{x}g(s,X(s),H(s))^{\top }\sigma (X(s))d{W}(s) \\&\quad =\int _0^tD_xg(s,X(s), H(s))^{\top }\sigma (X(s))({\hat{\lambda }}\Upsilon )(s,X(s),H(s))^{\top }ds \\&\quad \quad +\int _0^tL_{N+1}(H^{N+1}(s))\left\{ \sum _{i=1}^{{\bar{N}}}b_i (1-K_i(H^{N+1}(s)))F_{i}(t,X(s)+w_{N+1},H^{N+1}(s))\right\} _+ \\&\quad \quad \times (1-H_{N+1}(s)){\hat{F}}_{N+1}(s,X(s),H(s))ds+g(t,X(t), H(t))-g(0,X(0),H(0)) \\&\quad \quad +\sum _{j=1}^{N+1}\int _0^t\left[ g(s,{X}(s)+w_j,H^j(s))-g(s,{X}(s),H(s)) \right] \\&\qquad \times (1-H_j(s))\vartheta _j(X(s),H(s))X_j(s)ds \\&\quad \quad -\sum _{j=1}^{N+1}\int _0^t\left[ g(s,{X}(s^-)+w_j,H^j(s^-))-g(s,{X}(s^-),H(s^-)) \right] d{M}_j(s) \\&\quad =:\int _0^tD_xg(s,X(s),H(s))^{\top }\sigma (X(s))({\hat{\lambda }}\Upsilon ) (s,X(s),H(s))^{\top }ds+E(t). \end{aligned}$$

Then, for any \(\varepsilon >0\),

$$\begin{aligned}&{\mathbb {E}} \left[ \left| \int _0^tD_{x}g(s,X(s),H(s))^{\top }\sigma (X(s))d{W}(s)\right| ^2\right] \\&\qquad = {\mathbb {E}} \left[ \left| \int _0^tD_xg(s,X(s),H(s))^{\top }\sigma (X(s)) ({\hat{\lambda }}\Upsilon )(s,X(s),H(s))^{\top }ds+E(t)\right| ^2\right] \\&\qquad \le (1+\varepsilon ) {\mathbb {E}} \left[ \left| \int _0^tD_xg(s,X(s),H(s))^{\top }\sigma (X(s)) ({\hat{\lambda }}\Upsilon )(s,X(s),H(s))^{\top }ds\right| ^2\right] \\&\qquad \quad +\left( 1+\frac{1}{\varepsilon }\right) {\mathbb {E}} [|E(t)|^2]\\&\qquad \le (1+\varepsilon ) {\mathbb {E}} \left[ \left( \int _0^t\left| ({\hat{\lambda }}\Upsilon ) (s,X(s),H(s))\right| ^2ds\right) \right. \\&\left. \qquad \quad \times \left( \int _0^t|D_xg(s,X(s),H(s))^{\top } \sigma (X(s))|^2ds\right) \right] \\&\qquad \quad +\left( 1+\frac{1}{\varepsilon }\right) {\mathbb {E}} [|E(t)|^2]. \end{aligned}$$

By the assumption of the lemma, we have that \( {\mathbb {E}} [\int _0^T|({\hat{\lambda }}\Upsilon )(s,X(s),H(s))|^2ds]\le |{\hat{\lambda }}\Upsilon |_{\infty }^2T\). By Theorem 3.5, g(t,x,z) is the unique bounded classical solution of Eq. (59). Further, by Proposition 2.4 and Assumption (A3), there exists a constant \(C=C(T)>0\) such that \( {\mathbb {E}} [|E(T)|^2]\le C(T)+C(T) {\mathbb {E}} [\int _0^T\sum _{j=1}^{N+1}X_j^2(s)ds]\). This, together with the Itô isometry, gives that

$$\begin{aligned}&{\mathbb {E}} \left[ \int _0^T\left| D_{x}g(s,X(s),H(s))^{\top }\sigma (X(s))\right| ^2ds\right] \le \left( 1+\frac{1}{\varepsilon }\right) C(T) \\&\qquad + (1+\varepsilon )|{\hat{\lambda }}\Upsilon |_{\infty }^2T {\mathbb {E}} \left[ \int _0^T \left| D_{x}g(s,X(s),H(s))^{\top }\sigma (X(s))\right| ^2ds\right] \\&\qquad +\left( 1+\frac{1}{\varepsilon }\right) C(T) {\mathbb {E}} \left[ \int _0^T \sum _{j=1}^{N+1}X_j^2(s)ds\right] . \end{aligned}$$

Using the condition \((1+\varepsilon )|{\hat{\lambda }}\Upsilon |_{\infty }^2T<1\) for some \(\varepsilon >0\), we get the estimate (70).
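
Indeed, writing \(I:= {\mathbb {E}} [\int _0^T|D_{x}g(s,X(s),H(s))^{\top }\sigma (X(s))|^2ds]\) (a shorthand used only in this sketch), the previous inequality rearranges to

$$\begin{aligned} \big (1-(1+\varepsilon )|{\hat{\lambda }}\Upsilon |_{\infty }^2T\big )I\le \left( 1+\frac{1}{\varepsilon }\right) C(T)\left( 1+ {\mathbb {E}} \left[ \int _0^T\sum _{j=1}^{N+1}X_j^2(s)ds\right] \right) , \end{aligned}$$

and dividing by the positive factor on the left-hand side gives the claimed estimate.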

\(\square \)

Cite this article

Bo, L., Ceci, C. Locally Risk-Minimizing Hedging of Counterparty Risk for Portfolio of Credit Derivatives. Appl Math Optim 82, 799–850 (2020). https://doi.org/10.1007/s00245-018-9549-y
