
A general approach for Parisian stopping times under Markov processes

Published in: Finance and Stochastics

Abstract

We propose a method based on continuous-time Markov chain (CTMC) approximation to compute the distribution of Parisian stopping times and to price options of Parisian style under general one-dimensional Markov processes. We prove the convergence of the method under a general setting and obtain sharp estimates of the convergence rate for diffusion models. Our theoretical analysis reveals how to design the grid of the CTMC to achieve faster convergence. Numerical experiments are conducted to demonstrate the accuracy and efficiency of our method for both diffusion and jump models. To show the versatility of our approach, we develop extensions for multi-sided Parisian stopping times, the joint distribution of Parisian stopping times and first passage times, Parisian bonds, regime-switching models and stochastic volatility models.

References

  1. Abate, J., Whitt, W.: The Fourier-series method for inverting transforms of probability distributions. Queueing Syst. 10, 5–87 (1992)

  2. Albrecher, H., Kortschak, D., Zhou, X.: Pricing of Parisian options for a jump-diffusion model with two-sided jumps. Appl. Math. Finance 19, 97–129 (2012)

  3. Athanasiadis, C., Stratis, I.G.: On some elliptic transmission problems. Ann. Pol. Math. 63, 137–154 (1996)

  4. Avellaneda, M., Wu, L.: Pricing Parisian-style options with a lattice method. Int. J. Theor. Appl. Finance 2, 1–16 (1999)

  5. Baldi, P., Caramellino, L., Iovino, M.G.: Pricing complex barrier options with general features using sharp large deviation estimates. In: Niederreiter, H., Spanier, J. (eds.) Monte Carlo and Quasi-Monte Carlo Methods 1998, pp. 149–162. Springer, Berlin (2000)

  6. Bernard, C., Le Courtois, O., Quittard-Pinon, F.: A new procedure for pricing Parisian options. J. Deriv. 12(4), 45–53 (2005)

  7. Cai, N., Kou, S., Song, Y.: A unified framework for regime-switching models. Preprint (2019). Available online at https://ssrn.com/abstract=3310365

  8. Cai, N., Song, Y., Kou, S.: A general framework for pricing Asian options under Markov processes. Oper. Res. 63, 540–554 (2015)

  9. Chesney, M., Gauthier, L.: American Parisian options. Finance Stoch. 10, 475–506 (2006)

  10. Chesney, M., Jeanblanc-Picqué, M., Yor, M.: Brownian excursions and Parisian barrier options. Adv. Appl. Probab. 29, 165–184 (1997)

  11. Chesney, M., Vasiljević, N.: Parisian options with jumps: a maturity–excursion randomization approach. Quant. Finance 18, 1887–1908 (2018)

  12. Cui, Z., Kirkby, J.L., Nguyen, D.: A general framework for discretely sampled realized variance derivatives in stochastic volatility models with jumps. Eur. J. Oper. Res. 262, 381–400 (2017)

  13. Cui, Z., Kirkby, J.L., Nguyen, D.: A general valuation framework for SABR and stochastic local volatility models. SIAM J. Financ. Math. 9, 520–563 (2018)

  14. Cui, Z., Kirkby, J.L., Nguyen, D.: A general framework for time-changed Markov processes and applications. Eur. J. Oper. Res. 273, 785–800 (2019)

  15. Cui, Z., Lee, C., Liu, Y.: Single-transform formulas for pricing Asian options in a general approximation framework under Markov processes. Eur. J. Oper. Res. 266, 1134–1139 (2018)

  16. Czarna, I., Palmowski, Z.: Ruin probability with Parisian delay for a spectrally negative Lévy risk process. J. Appl. Probab. 48, 984–1002 (2011)

  17. Dassios, A., Lim, J.W.: Parisian option pricing: a recursive solution for the density of the Parisian stopping time. SIAM J. Financ. Math. 4, 599–615 (2013)

  18. Dassios, A., Lim, J.W.: An analytical solution for the two-sided Parisian stopping time, its asymptotics, and the pricing of Parisian options. Math. Finance 27, 604–620 (2017)

  19. Dassios, A., Lim, J.W.: Recursive formula for the double-barrier Parisian stopping time. J. Appl. Probab. 55, 282–301 (2018)

  20. Dassios, A., Lim, J.W., Qu, Y.: Azéma martingales for Bessel and CIR processes and the pricing of Parisian zero-coupon bonds. Math. Finance 30, 1497–1526 (2020)

  21. Dassios, A., Wu, S.: Parisian ruin with exponential claims. Working paper (2008). Available online at https://eprints.lse.ac.uk/32033/

  22. Dassios, A., Wu, S.: Perturbed Brownian motion and its application to Parisian option pricing. Finance Stoch. 14, 473–494 (2010)

  23. Dassios, A., Wu, S.: Double-barrier Parisian options. J. Appl. Probab. 48, 1–20 (2011)

  24. Dassios, A., Zhang, Y.Y.: The joint distribution of Parisian and hitting times of Brownian motion with application to Parisian option pricing. Finance Stoch. 20, 773–804 (2016)

  25. Doetsch, G.: Introduction to the Theory and Application of the Laplace Transformation. Springer, Berlin (1974)

  26. Ekström, E., Tysk, J.: Boundary conditions for the single-factor term structure equation. Ann. Appl. Probab. 21, 332–350 (2011)

  27. Eriksson, B., Pistorius, M.R.: American option valuation under continuous-time Markov chains. Adv. Appl. Probab. 47, 378–401 (2015)

  28. Feng, L., Linetsky, V.: Pricing options in jump-diffusion models: an extrapolation approach. Oper. Res. 56, 304–325 (2008)

  29. Fulton, C.T., Pruess, S.A.: Eigenvalue and eigenfunction asymptotics for regular Sturm-Liouville problems. J. Math. Anal. Appl. 188, 297–340 (1994)

  30. Geršgorin, S.: Über die Abgrenzung der Eigenwerte einer Matrix. Bull. Acad. Sci. URSS 6, 749–754 (1931)

  31. Gilbarg, D., Trudinger, N.S.: Elliptic Partial Differential Equations of Second Order. Springer, Berlin (2015)

  32. Haber, R.J., Schönbucher, P.J., Wilmott, P.: Pricing Parisian options. J. Deriv. 6(3), 71–79 (1999)

  33. Jacod, J., Shiryaev, A.: Limit Theorems for Stochastic Processes, 2nd edn. Springer, Berlin (2003)

  34. Kim, K.K., Lim, D.Y.: Risk analysis and hedging of Parisian options under a jump-diffusion model. J. Futures Mark. 36, 819–850 (2016)

  35. Kong, Q., Zettl, A.: Dependence of eigenvalues of Sturm–Liouville problems on the boundary. J. Differ. Equ. 126, 389–407 (1996)

  36. Kou, S.G., Wang, H.: Option pricing under a double exponential jump diffusion model. Manag. Sci. 50, 1178–1192 (2004)

  37. Labart, C.: Parisian option. In: Cont, R. (ed.) Encyclopedia of Quantitative Finance, pp. 1355–1357. Wiley, New York (2010)

  38. Labart, C., Lelong, J.: Pricing double barrier Parisian options using Laplace transforms. Int. J. Theor. Appl. Finance 12, 19–44 (2009)

  39. Le, N.T., Lu, X., Zhu, S.P.: An analytical solution for Parisian up-and-in calls. ANZIAM J. 57, 269–279 (2016)

  40. Li, L., Zeng, P., Zhang, G.: Speed and duration of drawdown under general Markov models. Preprint (2022). Available online at https://ssrn.com/abstract=4222362

  41. Li, L., Zhang, G.: Error analysis of finite difference and Markov chain approximations for option pricing. Math. Finance 28, 877–919 (2018)

  42. Loeffen, R., Czarna, I., Palmowski, Z.: Parisian ruin probability for spectrally negative Lévy processes. Bernoulli 19, 599–609 (2013)

  43. Lu, X., Le, N.T., Zhu, S.P., Chen, W.: Pricing American-style Parisian up-and-out call options. Eur. J. Appl. Math. 29, 1–29 (2018)

  44. Madan, D.B., Carr, P.P., Chang, E.C.: The variance gamma process and option pricing. Rev. Finance 2, 79–105 (1998)

  45. Mijatović, A., Pistorius, M.: Continuously monitored barrier options under Markov processes. Math. Finance 23, 1–38 (2013)

  46. Quarteroni, A., Sacco, R., Saleri, F.: Numerical Mathematics, 1st edn. Springer, Berlin (2000)

  47. Song, Y., Cai, N., Kou, S.: Computable error bounds of Laplace inversion for pricing Asian options. INFORMS J. Comput. 30, 634–645 (2018)

  48. Zhang, G., Li, L.: Analysis of Markov chain approximation for option pricing and hedging: grid design and convergence behavior. Oper. Res. 67, 407–427 (2019)

  49. Zhang, G., Li, L.: A general approach for lookback option pricing under Markov models. Preprint (2021). Available online at https://arxiv.org/abs/2112.00439

  50. Zhang, G., Li, L.: A general method for analysis and valuation of drawdown risk under Markov models. J. Econ. Dyn. Control 152, 104669 (2023)

  51. Zhang, G., Li, L.: Analysis of Markov chain approximation for diffusion models with nonsmooth coefficients for option pricing. SIAM J. Financ. Math. 13, 1144–1190 (2022)

  52. Zhang, X., Li, L., Zhang, G.: Pricing American drawdown options under Markov models. Eur. J. Oper. Res. 293, 1188–1205 (2021)

  53. Zhu, S.P., Chen, W.T.: Pricing Parisian and Parasian options analytically. J. Econ. Dyn. Control 37, 875–896 (2013)

Author information

Corresponding author

Correspondence to Lingfei Li.

Ethics declarations

Competing Interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The research of Lingfei Li was supported by Hong Kong Research Grant Council General Research Fund Grants 14202117 and 14207019. The research of Gongqiu Zhang was supported by National Natural Science Foundation of China Grant 12171408, and Shenzhen Fundamental Research Program Project JCYJ20190813165407555.

Appendices

Appendix A: Several extensions

In this appendix, we generalise our method to several more complicated settings to demonstrate the versatility of our approach.

A.1 Multi-sided Parisian options

For an arbitrary set \(A \subseteq \mathbb{R}\), define the Parisian stopping time

$$ \tau _{A}^{D} = \inf \{ t \ge 0 : 1_{\{ Y_{t} \in A \}} (t - g_{A, t}) \ge D \}, \qquad g_{A, t} = \sup \{s\le t : Y_{s} \notin A\}. $$

If \(A = (-\infty , L)\), then \(\tau _{A}^{D} = \tau _{L, D}^{-}\), and if \(A = (U, \infty )\), then \(\tau _{A}^{D} = \tau _{U, D}^{+}\). The multi-sided Parisian stopping time can be defined as

$$ \tau _{\mathcal{A}}^{D} = \min \{ \tau _{A}^{D}: A \in \mathcal{A} \}, $$

where \(\mathcal{A}\) is a collection of subsets of ℝ. The double-barrier case corresponds to \(\mathcal{A} = \{ (-\infty , L), (U, \infty ) \}\) with \(L< U\). We also let

$$ T^{-}_{A} = \inf \{ t \ge 0: Y_{t} \notin A \}, \qquad T^{+}_{A} = \inf \{ t \ge 0: Y_{t} \in A \}. $$

Consider a finite-state CTMC \(Y\). For any \(x, y\) in its state space \(\mathbb{S}\), let

$$ h(q, x; y) = \mathbb{E}_{x} \big[ e^{-q \tau _{\mathcal{A}}^{D}} 1_{ \{ Y_{\tau _{\mathcal{A}}^{D}} = y \}} \big]. $$

Define \(B=\bigcup _{A\in \mathcal{A}}A\). Then by a conditioning argument, we obtain

$$\begin{aligned} h(q, x; y) &= \sum _{A \in \mathcal{A}} e^{-qD} \mathbb{E}_{x} \big[ 1_{ \{ T_{A}^{-} \ge D, Y_{D} = y \}} \big] 1_{\{ x \in A \}} \\ & \phantom{=:} + \sum _{A \in \mathcal{A}} \sum _{z \notin A} \mathbb{E}_{x}\big[ e^{-q T_{A}^{-}} 1_{\{ T_{A}^{-} < D, Y_{T_{A}^{-}} = z \}} \big] h(q, z; y) 1_{\{ x \in A \}} \\ & \phantom{=:} + \sum _{z \in B}\mathbb{E}_{x} \big[ e^{-q T_{B}^{+}} 1_{\{ Y_{T_{B}^{+}} = z \}} \big] h(q, z; y) 1_{\{ x \notin B \}}. \end{aligned}$$

Let \(v_{A}(D, x; y) = \mathbb{E}_{x} [ 1_{\{ T_{A}^{-} \ge D, Y_{D} = y \}} ]\). It solves

$$ \textstyle\begin{cases} \displaystyle \frac{\partial v_{A}}{\partial D}(D, x; y) = \mathbb{G} v_{A}(D, x; y) ,\qquad x \in A, D > 0, \\ v_{A}(D, x; y) = 0,\qquad x \notin A, D > 0, \\ v_{A}(0, x; y) = 1_{\{ x = y \}}. \end{cases} $$

Let \(u^{-}_{A}(q, D, x; z) = \mathbb{E}_{x}[ e^{-q T_{A}^{-}} 1_{\{ T_{A}^{-} < D, Y_{T_{A}^{-}} = z \}} ]\). It satisfies

$$ \textstyle\begin{cases} \displaystyle \frac{\partial u^{-}_{A}}{\partial D}(q, D, x; z) = \mathbb{G} u^{-}_{A}(q, D, x; z) - q u^{-}_{A}(q, D, x; z),\qquad x \in A, D > 0, \\ u^{-}_{A}(q, D, x; z) = 1_{\{ x = z \}},\qquad x \notin A, D > 0, \\ u^{-}_{A}(q, 0, x; z) = 0. \end{cases} $$

We can rewrite it as \(u^{-}_{A}(q, D, x; z) = u^{-}_{1,A}(q, x; z) - u^{-}_{2,A}(q, D, x; z)\), and these two parts satisfy

$$ \textstyle\begin{cases} \mathbb{G} u^{-}_{1,A}(q, x; z) - q u^{-}_{1,A}(q, x; z) = 0,\qquad x \in A, \\ u^{-}_{1,A}(q, x; z) = 1_{\{ x = z \}},\qquad x \notin A, \end{cases} $$

and

$$ \textstyle\begin{cases} \displaystyle \frac{\partial u^{-}_{2,A}}{\partial{D}}(q, D, x; z) = \mathbb{G} u^{-}_{2,A}(q, D, x; z) - q u^{-}_{2,A}(q, D, x; z), \qquad x \in A, \\ u^{-}_{2,A}(q, D, x; z) = 0,\qquad x \notin A, D > 0, \\ u^{-}_{2,A}(q, 0, x; z) = u^{-}_{1,A}(q, x; z). \end{cases} $$

Let \(u^{+}(q, x; z) = \mathbb{E}_{x} [ e^{-q T_{B}^{+}} 1_{\{ Y_{T_{B}^{+}} = z \}} ]\). It is the solution to

$$ \textstyle\begin{cases} \mathbb{G} u^{+}(q, x; z) - q u^{+}(q, x; z) = 0,\qquad x \notin B, \\ u^{+}(q, x; z) = 1_{\{ x = z \}},\qquad x \in B. \end{cases} $$

Let \({ I}_{A} = \operatorname{diag}((1_{\{x \in A\}})_{x \in \mathbb{S}})\). The solutions to the above equations are given as

$$\begin{aligned} V_{A} &= \big( v_{A}(D, x; y) \big)_{x,y \in \mathbb{S}}= \exp ( { I}_{A} { G}D ) { I}_{A}, \\ U_{1,A}^{-}(q) &= \big( u^{-}_{1,A}(q, x; z) \big)_{x, z\in \mathbb{S}} = \big( { I} - { I}_{A} - { I}_{A} ({ G} - q{ I} ) \big)^{-1} ({ I} - { I}_{A}), \\ U_{2,A}^{-}(q) &= \big( u^{-}_{2,A}(q,D, x; z) \big)_{x, z\in \mathbb{S}} = \exp \big( { I}_{A} ({ G} - q{ I}) D \big) { I}_{A} { U}_{1,A}^{-}(q), \\ U^{-}_{A}(q) &= \big( u^{-}_{A}(q,D, x; z) \big)_{x, z\in \mathbb{S}} = { U}_{1,A}^{-}(q) - { U}_{2,A}^{-}(q), \\ U^{+}(q) &= \big( u^{+}(q, x; z) \big)_{x, z\in \mathbb{S}} = \big( { I}_{B} - ({ I} - { I}_{B}) ({ G } - q{ I}) \big)^{-1} { I}_{B}. \end{aligned}$$

Then we obtain

$$\begin{aligned} { H}(q) &= \big( h(q, x; y) \big)_{x, y \in \mathbb{S}} \\ &= e^{-qD} \bigg( { I} - \sum _{A \in \mathcal{A}} { I}_{A} { U}^{-}_{A}(q) - ({ I} - { I}_{B}) { U}^{+}(q) \bigg)^{-1} \sum _{A \in \mathcal{A}} { I}_{A} { V}_{A}. \end{aligned}$$

Now consider a multi-sided Parisian option with price

$$ u(t, x) = \mathbb{E}_{x}\big[ 1_{\{ \tau _{\mathcal{A}}^{D} \le t \}} f(Y_{t}) \big]. $$

Using the arguments from the single-sided case, we can derive that the Laplace transform \(\tilde{u}(q, x) = \int _{0}^{\infty} e^{-qt} u(t, x) dt\) is given by

$$ \widetilde{ u}(q) = \big( \tilde{u}(q, x) \big)_{x \in \mathbb{S}} = { H}(q) (q{ I} - { G})^{-1}{ f}. $$
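For illustration, the matrix formulas above translate directly into a few lines of dense linear algebra. The following Python sketch is not part of the original paper: all function and variable names are ours, and it assumes a moderate state-space size so that dense NumPy/SciPy routines suffice. It assembles \({ H}(q)\) and \(\widetilde{ u}(q)\) from a generator matrix, a list of boolean masks representing the sets in \(\mathcal{A}\), the duration \(D\) and a payoff vector.

```python
import numpy as np
from scipy.linalg import expm, solve

def parisian_laplace(G, masks, D, q, f):
    """Assemble H(q) and the Laplace-transformed option prices u~(q)
    for the multi-sided Parisian stopping time under a CTMC.

    G     : (N, N) generator matrix of the approximating CTMC
    masks : list of boolean arrays over the state space, one per set A
    D     : excursion duration threshold
    q     : Laplace argument with Re(q) > 0 (may be complex)
    f     : payoff vector over the state space
    """
    N = G.shape[0]
    I = np.eye(N)
    B = np.logical_or.reduce(masks)              # union of the sets in A
    I_B = np.diag(B.astype(float))

    S = np.zeros((N, N), dtype=complex)          # accumulates sum_A I_A U_A^-(q)
    V_sum = np.zeros((N, N))                     # accumulates sum_A I_A V_A
    for mask in masks:
        I_A = np.diag(mask.astype(float))
        V_A = expm(I_A @ G * D) @ I_A
        U1 = solve(I - I_A - I_A @ (G - q * I), I - I_A)
        U2 = expm(I_A @ (G - q * I) * D) @ I_A @ U1
        S += I_A @ (U1 - U2)
        V_sum += I_A @ V_A

    U_plus = solve(I_B - (I - I_B) @ (G - q * I), I_B)
    H = np.exp(-q * D) * solve(I - S - (I - I_B) @ U_plus, V_sum)
    u_tilde = H @ solve(q * I - G, f)            # Laplace transform of the price
    return H, u_tilde
```

A numerical Laplace inversion in \(q\) (see the sketch in Sect. A.3 below) then recovers the distribution function or the option price; for diffusion models, \(G\) is tridiagonal and the dense solves can be replaced by banded ones.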

A.2 Mixed barrier and Parisian options

The CTMC approximation can be generalised to derive the joint distribution of Parisian stopping times and first passage times. For example, Dassios and Zhang [24] introduce the so-called MinParisianHit option, which is triggered either when the age of an excursion above \(L\) reaches the duration \(D\) or when a barrier \(B > L\) is crossed by the underlying asset price process \(S\). The price of the MinParisianHit option can be approximated as

$$ \text{minPHC}_{i}^{u}(t, x; L, D, B) = e^{-r_{f}t} \mathbb{E}_{x} \big[ 1_{\{ \tau _{L, D}^{+} \wedge \tau _{B}^{+} \le t \}} f(Y_{t}) \big], $$

where \(Y\) is a CTMC with state space \(\mathbb{S}\) and transition rate matrix \({ G}\) that approximates the underlying price process. To price the option, it suffices to substitute \(\tau _{L, D}^{+} \wedge \tau _{B}^{+}\) for \(\tau _{L, D}^{-}\) in the proof of Theorem 2.7 and find the Laplace transform of \(\tau _{L, D}^{+} \wedge \tau _{B}^{+}\) under the model \(Y\). Let \(h_{1}(q, x; y) = \mathbb{E}_{x}[e^{-q(\tau _{L, D}^{+} \wedge \tau _{B}^{+})} 1_{\{ Y_{\tau _{L, D}^{+} \wedge \tau _{B}^{+}} = y \}}]\). Using a conditioning argument, we can show that \(h_{1}(q, x; y)\) satisfies the linear system

$$\begin{aligned} h_{1}(q, x; y) &= 1_{\{ x\ge B, x = y \}} + 1_{\{ x \le L \}} \sum _{z > L} \mathbb{E}_{x}\big[ e^{-q \overline{\tau}_{L}^{+}} 1_{\{ Y_{ \overline{\tau}_{L}^{+}} = z \}} \big] h_{1}(q, z; y) \end{aligned}$$
(A.1)
$$\begin{aligned} & \phantom{=:} + 1_{\{ L < x < B \}} \mathbb{E}_{x}\big[ e^{-q\tau _{B}^{+}} 1_{\{ \tau _{B}^{+} < D, Y_{\tau _{B}^{+}} = y \}} \big] \end{aligned}$$
(A.2)
$$\begin{aligned} & \phantom{=:} + 1_{\{ L < x < B \}} \sum _{z \le L} \mathbb{E}_{x}\big[ e^{-q \overline{\tau}_{L}^{-}} 1_{\{ \overline{\tau}_{L}^{-} < D \le \tau _{B}^{+}, Y_{\overline{\tau}_{L}^{-}} = z \}} \big] h_{1}(q, z; y) \end{aligned}$$
(A.3)
$$\begin{aligned} & \phantom{=:} + 1_{\{ L < x < B \}} e^{-qD} \mathbb{E}_{x}\big[ 1_{\{ \overline{\tau}_{L}^{-} \wedge \tau _{B}^{+} \ge D, Y_{D} = y \}} \big], \end{aligned}$$
(A.4)

where \(\overline{\tau}_{L}^{+} = \inf \{ t \ge 0: Y_{t} > L \}\) and \(\overline{\tau}_{L}^{-} = \inf \{ t \ge 0: Y_{t} \le L \}\) (they are slightly different from \(\tau _{L}^{+}\) and \(\tau _{L}^{-}\) defined in Sect. 2).

We next analyse each term. For (A.1), let \(\overline{u}(q, x; z) = \mathbb{E}_{x}[ e^{-q \overline{\tau}_{L}^{+}} 1_{\{ Y_{\overline{\tau}_{L}^{+}} = z \}} ]\). It satisfies

$$ \textstyle\begin{cases} \mathbb{G} \overline{u}(q, x; z) - q \overline{u}(q, x; z) = 0, \qquad x \in (-\infty , L] \cap \mathbb{S}, \\ \overline{u}(q, x; z) = 1_{\{ x = z \}},\qquad x \in (L, \infty ) \cap \mathbb{S}. \end{cases} $$

For (A.2), let \(v(q, D, x; y) = \mathbb{E}_{x}[ e^{-q\tau _{B}^{+}} 1_{\{ \tau _{B}^{+} < D, Y_{\tau _{B}^{+}} = y \}} ]\). It is the solution to

$$ \textstyle\begin{cases} \displaystyle \frac{\partial v}{\partial D}(q, D, x; y) = \mathbb{G} v(q, D, x; y) - q v(q, D, x; y), \\ \qquad \quad \ \, \qquad \qquad D >0, x \in (-\infty , B) \cap \mathbb{S}, \\ v(q, D, x; y) = 1_{\{ x = y \}},\qquad D>0, x \in [B, \infty ) \cap \mathbb{S}, \\ v(q, 0, x; y) = 0,\qquad x \in \mathbb{S}. \end{cases} $$

\(v(q, D, x; y)\) can be split as \(v_{1}(q, x; y) - v_{2}(q, D, x; y)\) with \(v_{1}(q, x ; y)\) satisfying

$$ \textstyle\begin{cases} \mathbb{G} v_{1}(q,x; y) - qv_{1}(q, x; y) = 0,\qquad x \in (-\infty , B) \cap \mathbb{S}, \\ v_{1}(q, x; y) = 1_{\{ x = y \}},\qquad x \in [B, \infty ) \cap \mathbb{S}, \end{cases} $$

and \(v_{2}(q, D, x; y)\) satisfying

$$ \textstyle\begin{cases} \displaystyle \frac{\partial v_{2}}{\partial D}(q, D, x; y) = \mathbb{G} v_{2}(q, D, x; y) - q v_{2}(q, D, x; y), \\ \qquad \quad \ \, \qquad \qquad D >0, x \in (-\infty , B) \cap \mathbb{S}, \\ v_{2}(q, D, x; y) = 0,\qquad D>0, x \in [B, \infty ) \cap \mathbb{S}, \\ v_{2}(q, 0, x; y) = v_{1}(q, x; y),\qquad x \in \mathbb{S}. \end{cases} $$

For (A.3), let \(u(q, D, x; z) = \mathbb{E}_{x}[ e^{-q\overline{\tau}_{L}^{-}} 1_{\{ \overline{\tau}_{L}^{-} < D \le \tau _{B}^{+}, Y_{\overline{\tau}_{L}^{-}} = z \}} ]\). It solves

$$ \textstyle\begin{cases} \displaystyle \frac{\partial u}{\partial D}(q, D, x; z) = \mathbb{G} u(q, D, x; z) - q u(q, D, x; z), D > 0, x \in (L, B) \cap \mathbb{S}, \\ u(q, D, x; z) = 1_{\{ x = z \}} \mathbb{E}_{x} [ 1_{\{ \tau _{B}^{+} \ge D \}} ],\qquad D>0, x \in (-\infty , L] \cap \mathbb{S}, \\ u(q, D, x; z) = 0,\qquad D > 0, x \in [B, \infty ) \cap \mathbb{S}, \\ u(q, 0, x; z) = 0, \qquad x \in \mathbb{S}. \end{cases} $$

The term \(u(q, D, x; z)\) can be split as \(u_{1}(q, x; z) - u_{2}(q, D, x; z)\) with \(u_{1}(q, x; z)\) satisfying

$$ \textstyle\begin{cases} \mathbb{G} u_{1}(q, x; z) - q u_{1}(q, x; z) = 0, \qquad x \in (L, B) \cap \mathbb{S}, \\ u_{1}(q, x; z) = 1_{\{ x = z \}} \mathbb{E}_{x} [ 1_{\{ \tau _{B}^{+} \ge D \}} ], \qquad x \in (-\infty , L] \cap \mathbb{S}, \\ u_{1}(q, x; z) = 0, \qquad x \in [B, \infty ) \cap \mathbb{S}, \end{cases} $$

and \(u_{2}(q, D, x; z)\) satisfying

$$ \textstyle\begin{cases} \displaystyle \frac{\partial u_{2}}{\partial D}(q, D, x; z) = \mathbb{G} u_{2}(q, D, x; z) - q u_{2}(q, D, x; z), \\ \qquad \quad \ \, \qquad \qquad D > 0, x \in (L, B) \cap \mathbb{S}, \\ u_{2}(q, D, x; z) = 0, \qquad D>0, x \in (-\infty , L] \cap \mathbb{S}, \\ u_{2}(q, D, x; z) = 0, \qquad D > 0, x \in [B, \infty ) \cap \mathbb{S}, \\ u_{2}(q, 0, x; z) = u_{1}(q, x; z),\qquad x \in \mathbb{S}. \end{cases} $$

For (A.4), let \(w(D, x; y) = \mathbb{E}_{x}[ 1_{\{ \overline{\tau}_{L}^{-} \wedge \tau _{B}^{+} \ge D, Y_{D} = y \}} ]\). It solves

$$ \textstyle\begin{cases} \displaystyle \frac{\partial w}{\partial D}(D, x; y) = \mathbb{G} w(D, x; y), \qquad D>0, x \in (L, B) \cap \mathbb{S}, \\ w(D, x; y) = 0, \qquad D>0, x \in \mathbb{S} \backslash (L, B), \\ w(0, x; y) = 1_{\{ x = y \}}, \qquad x \in \mathbb{S}. \end{cases} $$

Let \(\overline{ I}_{L}^{+} = \operatorname{diag}((1_{\{ x > L \}})_{x \in \mathbb{S}})\), \(\overline{ I}_{L}^{-} = \operatorname{diag}((1_{\{ x \le L \}})_{x \in \mathbb{S}})\), \({ I}_{L, B} = \overline{ I}_{L}^{+} { I}_{B}^{-}\) (they are slightly different from \({ I}_{L}^{+}\) and \({ I}_{L}^{-}\) defined in Sect. 2). The solutions to the above equations are given by

$$\begin{aligned} \overline{U}(q) &= \big( \overline{u}(q, x; z) \big)_{x, z \in \mathbb{S}} = ( q\overline{ I}_{L}^{-} - \overline{ I}_{L}^{-}{G} + \overline{ I}_{L}^{+})^{-1} \overline{ I}_{L}^{+}, \\ {V}_{1}(q) &= \big( v_{1}(q, x; y) \big)_{x, y \in \mathbb{S}}= ( q { I}_{B}^{-} - { I}_{B}^{-} { G} + { I}_{B}^{+} )^{-1} { I}_{B}^{+}, \\ {V}_{2}(q) &= \big( v_{2}(q, D, x; y) \big)_{x, y \in \mathbb{S}} = \exp \big( ({ I}_{B}^{-} { G} - q{ I}_{B}^{-}) D \big) { I}_{B}^{-} { V}_{1}(q), \\ {U}_{1}(q) &= \big( u_{1}(q, x; z) \big)_{x, z \in \mathbb{S}} \\ &= ( q{ I}_{L, B} - { I}_{L, B} { G} + { I} - { I}_{L, B} )^{-1} \overline{ I}_{L}^{-} \operatorname{diag} \big(\exp ({I}_{B}^{-} { G} D ) { 1}_{B}^{-}\big) , \\ {U}_{2}(q) &= \big( u_{2}(q, D, x; z) \big)_{x, z \in \mathbb{S}} = \exp \big( ({ I}_{L, B} { G} - q { I}_{L, B}) D \big) { I}_{L, B} { U}_{1}(q), \\ {W} &= \big( w(D, x; y) \big)_{x, y \in \mathbb{S}} = \exp ({I}_{L, B} { G} D ) { I}_{L, B}, \end{aligned}$$

where \({ 1}_{B}^{-} = (1_{\{ x < B \}})_{x \in \mathbb{S}}\). Let \({ H}_{1}(q) = ( h_{1}(q,x; y) )_{x, y \in \mathbb{S}}\), which satisfies

$$\begin{aligned} { H}_{1}(q) &= { I}_{B}^{+} + { I}_{L, B} \big({ V}_{1}(q) - { V}_{2}(q) \big) + { I}_{L, B} \big({ U}_{1}(q) - { U}_{2}(q) \big) { H}_{1}(q) \\ & \phantom{=:} + \overline{ I}_{L}^{-} \overline{ U}(q) { H}_{1}(q) + e^{-qD} { I}_{L, B} { W}. \end{aligned}$$

The solution is given by

$$ { H}_{1}(q) = \big( { I} - { U}(q) \big)^{-1} { V}(q), $$

where

$$\begin{aligned} &{ U}(q) = { I}_{L, B} \big({ U}_{1}(q) - { U}_{2}(q) \big) + \overline{ I}_{L}^{-} \overline{ U}(q), \\ &{ V}(q) = { I}_{B}^{+} + { I}_{L, B} \big({ V}_{1}(q) - { V}_{2}(q) \big) + e^{-qD} { I}_{L, B} { W}. \end{aligned}$$

Consider the Laplace transform

$$ \widetilde{u}_{i}(q, x) = \int _{0}^{\infty} e^{-qt} \text{minPHC}_{i}^{u}(t, x; L, D, B) dt,\qquad \Re (q) > 0. $$

It can be obtained as

$$ \widetilde{ u}_{i}(q) = \big( \widetilde{u}_{i}(q, x) \big)_{x \in \mathbb{S}} = { H}_{1}(q + r_{f}) \big( (q + r_{f}) { I} - { G} \big)^{-1} { f}. $$

A.3 Pricing Parisian bonds

Recently, Dassios et al. [20] proposed a Parisian type of bond whose payoff depends on whether the excursion of the interest rate above some level \(L\) exceeds a given duration \(D\) before maturity. The payoff is \(f(R_{\tau}) 1_{\{\tau < T \}}\), where \(T\) is the bond maturity, \(R\) is the short rate process, \(f(\cdot )\) is the payoff function and

$$ \tau = \inf \{ t > 0: U_{t} = D \}, \qquad U_{t} = t - \sup \{ s < t: R_{s} \le L \}. $$

The bond price can be written as

$$ P(T, x) = \mathbb{E}_{x}\big[ e^{-\int _{0}^{\tau} R_{t} dt} f(R_{ \tau}) 1_{\{ \tau < T \}} \big], $$

where \(\mathbb{E}_{x}[\cdot ] = \mathbb{E}[\cdot |R_{0} = x]\). We can calculate its Laplace transform with respect to \(T\) as

$$ \widetilde{P}(q, x) = \int _{0}^{\infty} e^{-qT} P(T, x) dT = \frac{1}{q} \mathbb{E}_{x}\big[ e^{-\int _{0}^{\tau} (q + R_{t}) dt} f(R_{ \tau}) \big]. $$

Suppose that \(R\) is a CTMC with state space \(\mathbb{S}_{R}\) approximating the original short rate model (e.g., the CIR model considered in Dassios et al. [20]). Let

$$ h(q, x) = \mathbb{E}_{x}\big[ e^{-\int _{0}^{\tau} (q + R_{t}) dt} f(R_{ \tau}) \big] $$

and define \(\overline{\tau}_{L}^{+}\) and \(\overline{\tau}_{L}^{-}\) as in Sect. A.2. Then using a conditioning argument, we obtain

$$\begin{aligned} h(q, x) &= \mathbb{E}_{x}\big[ e^{-\int _{0}^{D} (q + R_{t}) dt} f(R_{D}) 1_{\{ \overline{\tau}_{L}^{-} \ge D \}} \big] 1_{\{ x > L \}} \\ & \phantom{=:} + \sum _{z \le L} \mathbb{E}_{x}\big[ e^{-\int _{0}^{\overline{\tau}_{L}^{-}} (q + R_{t}) dt} 1_{\{ \overline{\tau}_{L}^{-} < D, R_{\overline{\tau}_{L}^{-}} = z \}} \big] 1_{\{ x > L \}} h(q, z) \\ & \phantom{=:} + \sum _{z > L} \mathbb{E}_{x}\big[ e^{-\int _{0}^{\overline{\tau}_{L}^{+}} (q + R_{t}) dt} 1_{\{ R_{\overline{\tau}_{L}^{+}} = z \}} \big] 1_{\{ x \le L \}} h(q, z). \end{aligned}$$

Let \({ G}\) be the generator matrix of \(R\). Consider

$$ v(q, D, x) = \mathbb{E}_{x}\big[ e^{-\int _{0}^{D} (q + R_{t}) dt} f(R_{D}) 1_{\{ \overline{\tau}_{L}^{-} \ge D \}} \big]. $$

It satisfies

$$ \textstyle\begin{cases} \displaystyle \frac{\partial v}{\partial D}(q, D, x) = (\mathbb{G} - q-x) v(q, D, x), \qquad D > 0, x \in (L, \infty ) \cap \mathbb{S}_{R}, \\ v(q, D, x) = 0,\qquad D > 0, x \in (-\infty , L] \cap \mathbb{S}_{R}, \\ v(q, 0, x) = f(x), \qquad x \in \mathbb{S}_{R}. \end{cases} $$

Let \(u^{-}(q, D, x; z) = \mathbb{E}_{x}[ e^{-\int _{0}^{\overline{\tau}_{L}^{-}} (q + R_{t}) dt} 1_{\{ \overline{\tau}_{L}^{-} < D, R_{\overline{\tau}_{L}^{-}} = z \}} ]\). It solves

$$ \textstyle\begin{cases} \displaystyle \frac{\partial u^{-}}{\partial D}(q, D, x; z) = ( \mathbb{G} - q-x) u^{-}(q, D, x; z), \qquad D > 0, x \in (L, \infty ) \cap \mathbb{S}_{R}, \\ u^{-}(q, D, x; z) = 1_{\{ x = z \}}, \qquad D>0, x \in (-\infty , L] \cap \mathbb{S}_{R}, \\ u^{-}(q, 0, x; z) = 0. \end{cases} $$

We can decompose \(u^{-}(q, D, x; z)\) as \(u^{-}(q, D, x; z) = u_{1}^{-}(q, x; z) - u_{2}^{-}(q, D, x; z)\), with the two parts satisfying

$$ \textstyle\begin{cases} (\mathbb{G} - q-x) u_{1}^{-}(q, x; z) = 0, \qquad x \in (L, \infty ) \cap \mathbb{S}_{R}, \\ u_{1}^{-}(q, x; z) = 1_{\{ x = z \}}, \qquad x \in (-\infty , L] \cap \mathbb{S}_{R}, \end{cases} $$

and

$$ \textstyle\begin{cases} \displaystyle \frac{\partial u_{2}^{-}}{\partial D}(q, D, x; z) = ( \mathbb{G} - q-x) u_{2}^{-}(q, D, x; z), \qquad D > 0, x \in (L, \infty ) \cap \mathbb{S}_{R}, \\ u_{2}^{-}(q, D, x; z) = 0, \qquad D>0, x \in (-\infty , L] \cap \mathbb{S}_{R}, \\ u_{2}^{-}(q, 0, x; z) = u_{1}^{-}(q, x; z), \qquad x \in \mathbb{S}_{R}. \end{cases} $$

Let \(u^{+}(q, x; z) = \mathbb{E}_{x}[ e^{-\int _{0}^{\overline{\tau}_{L}^{+}} (q + R_{t}) dt} 1_{\{ R_{\overline{\tau}_{L}^{+}} = z \}} ]\). It satisfies

$$ \textstyle\begin{cases} (\mathbb{G} - q-x) u^{+}(q, x; z) = 0, \qquad x \in (-\infty , L] \cap \mathbb{S}_{R}, \\ u^{+}(q, x; z) = 1_{\{ x = z \}}, \qquad x \in (L, \infty ) \cap \mathbb{S}_{R}. \end{cases} $$

Let \({ f} = (f(x))_{x \in \mathbb{S}_{R}}\) and

$$\begin{aligned} h(q) &= \big( h(q, x) \big)_{x \in \mathbb{S}_{R}},\qquad { v}(q) = \big( v(q, D, x) \big)_{x \in \mathbb{S}_{R}}, \\ U^{-}(q) &= \big( u^{-}(q, D, x; z) \big)_{x, z \in \mathbb{S}_{R}}, \\ U_{1}^{-}(q) &= \big( u_{1}^{-}(q, x; z) \big)_{x, z \in \mathbb{S}_{R}}, \qquad { U}_{2}^{-}(q) = \big( u_{2}^{-}(q, D, x; z) \big)_{x, z \in \mathbb{S}_{R}}, \\ U^{+}(q) &= \big( u^{+}(q, x; z) \big)_{x, z \in \mathbb{S}_{R}}, \qquad { R}_{q} = \text{diag}\big( (q + x)_{x \in \mathbb{S}_{R}} \big). \end{aligned}$$

They can be calculated as

$$\begin{aligned} v(q) &= \exp \big( \overline{ I}^{+}_{L} ({ G} - { R}_{q}) D \big) \overline{ I}^{+}_{L} { f}, \\ U_{1}^{-}(q) &= \big( \overline{{ I}}^{-}_{L} - \overline{ I}^{+}_{L} ({ G} - { R}_{q}) \big)^{-1} \overline{{ I}}^{-}_{L}, \\ U_{2}^{-}(q) &= \exp \big( \overline{ I}^{+}_{L} ({ G} - { R}_{q}) D \big) \overline{ I}^{+}_{L} { U}_{1}^{-}(q), \\ U^{-}(q) &= { U}_{1}^{-}(q) - { U}_{2}^{-}(q), \\ U^{+}(q) &= \big( \overline{ I}^{+}_{L} - \overline{ I}^{-}_{L} ({ G} - { R}_{q}) \big)^{-1} \overline{ I}^{+}_{L}, \\ h(q) &= \big( { I} - \overline{ I}^{+}_{L} { U}^{-}(q) \overline{ I}^{-}_{L} - \overline{ I}^{-}_{L} { U}^{+}(q) \overline{ I}^{+}_{L} \big)^{-1} { v}(q). \end{aligned}$$

We then calculate \(\widetilde{P}(q, x)\) by dividing \(h(q,x)\) by \(q\) and obtain the bond price \(P(T, x)\) by Laplace inversion.
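The Laplace inversion in the final step can be performed with the Fourier-series method of Abate and Whitt [1]. The routine below is a minimal sketch of the Euler-summation variant of that method; it is our own generic implementation (the parameter values \(A = 18.4\), \(N = 15\), \(M = 11\) are conventional defaults, not values prescribed by the paper), and `F` can be any callable returning \(\widetilde{P}(q, x)\) at a complex argument \(q\) for the state \(x\) of interest.

```python
import numpy as np
from scipy.special import comb

def euler_laplace_inversion(F, t, A=18.4, N=15, M=11):
    """Invert a Laplace transform F at time t > 0 using the Euler algorithm
    (trapezoidal discretisation of the Bromwich integral + Euler summation)."""
    base = np.exp(A / 2.0) / t
    running = 0.5 * np.real(F(A / (2.0 * t)))
    s = np.empty(N + M + 1)                      # partial sums of the series
    for k in range(N + M + 1):
        if k > 0:
            qk = (A + 2.0j * np.pi * k) / (2.0 * t)
            running += (-1) ** k * np.real(F(qk))
        s[k] = base * running
    weights = np.array([comb(M, j) for j in range(M + 1)]) / 2.0 ** M
    return float(np.dot(weights, s[N:N + M + 1]))   # binomial averaging
```

For transforms of bounded functions, the discretisation error of this scheme is roughly of order \(e^{-A}\), which is more than sufficient at the default choice of \(A\).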

A.4 Regime-switching and stochastic volatility models

Suppose the asset price satisfies \(S_{t} = \zeta (X_{t}, \widetilde{v}_{t})\) for some function \(\zeta (\cdot , \cdot )\), where \(X\) is a regime-switching process modulated by the regime process \(\widetilde{v}\). The regime process \(\widetilde{v}\) takes values in \(\mathbb{S}_{v} = \{ v_{1}, v_{2}, \dots , v_{n_{v}} \}\), and in each regime, \(X\) is a general jump-diffusion. We approximate the dynamics of \(X\) in each regime by a CTMC \(\widetilde{X}\) with state space \(\mathbb{S}_{X} = \{ x_{1}, x_{2}, \dots , x_{n} \}\). Hence \(S_{t}\) can be approximated by \(\widetilde{S}_{t} = \zeta (\widetilde{X}_{t}, \widetilde{v}_{t})\). The analysis of single-barrier Parisian stopping times for this type of model can be carried out similarly to Sect. 2. Let

$$ \widetilde{ x} = \big((x_{1}, v_{1}), (x_{1}, v_{2}), \dots , (x_{1}, v_{n_{v}}), \dots , (x_{n}, v_{1}), (x_{n}, v_{2}), \dots , (x_{n}, v_{n_{v}}) \big) \in \mathbb{R}^{nn_{v}}. $$

Let \(\widetilde{{ G}}\) be the generator matrix of \((\widetilde{X}, \widetilde{v})\), which can be constructed as

$$\begin{aligned} \widetilde{ G} = { I} \otimes \Lambda + \sum _{i = 1}^{n_{v}} { G}_{i} \otimes { I}_{i} \in \mathbb{R}^{nn_{v} \times nn_{v}}, \end{aligned}$$
(A.5)

where \(I\) is the identity matrix in \(\mathbb{R}^{n \times n}\), \(\Lambda \in \mathbb{R}^{n_{v}\times n_{v}}\) is the generator matrix of \(\widetilde{v}\), ⊗ stands for the Kronecker product, \(G_{i}\) is the generator matrix of \(\widetilde{X}\) in regime \(v_{i}\), and \({ I}_{i}\) is a matrix in \(\mathbb{R}^{n_{v}\times n_{v}}\) with the \((i, i)\)-element being 1 and all other elements being zero. Let

$$ \widetilde{{ H}}(q) = \big(h(q, x, v; y, u)\big)_{x, y \in \mathbb{S}_{X}, u, v \in \mathbb{S}_{v}} $$

with

$$ h(q, x, v; y, u) = \mathbb{E}_{x, v}\big[ e^{-q \widetilde{\tau}_{L, D}^{-}} 1_{\{ \widetilde{X}_{\widetilde{\tau}_{L, D}^{-}} = y, \widetilde{v}_{ \widetilde{\tau}_{L, D}^{-}} = u \}} \big], $$

where \(\widetilde{\tau}_{L, D}^{-} = \inf \{ t \ge 0: 1_{\{ \widetilde{S}_{t} < L \}} (t - \widetilde{g}^{-}_{L, t}) \ge D \}\) with

$$ \widetilde{g}^{-}_{L, t} = \sup \{ s \le t: \widetilde{S}_{s} \ge L \}. $$

We can solve the Parisian problem in the same way as for 1D CTMCs. Let

$$\begin{aligned} \widetilde{{ V}} &= \exp \big( \widetilde{ I}_{L}^{-} \widetilde{{ G}} D \big) \widetilde{ I}_{L}^{-}, \\ \widetilde{ U}^{+}_{1}(q) &= \widetilde{ I}_{L}^{-} \big(q \widetilde{ I}_{L}^{-} - \widetilde{ I}_{L}^{-} \widetilde{ G} + \widetilde{ I}_{L}^{+} \big)^{-1} \widetilde{ I}_{L}^{+}, \\ \widetilde{ U}^{+}_{2}(q) & = \widetilde{ I}_{L}^{-} \exp \big( \widetilde{ I}_{L}^{-} (\widetilde{ G} - q \widetilde{ I}) D \big) \widetilde{ I}_{L}^{-} \widetilde{ U}_{1}^{+}(q), \\ \widetilde{ U}^{-}(q) &= \widetilde{ I}_{L}^{+}\big(q\widetilde{ I}_{L}^{+} - \widetilde{ I}_{L}^{+} \widetilde{ G} + \widetilde{ I}_{L}^{-} \big)^{-1} \widetilde{ I}_{L}^{-}, \\ \widetilde{ U}(q) &= \widetilde{ U}_{1}^{+}(q) - \widetilde{ U}_{2}^{+}(q) + \widetilde{ U}^{-}(q), \end{aligned}$$

where \(\widetilde{ I}_{L}^{+} = \operatorname{diag}(1_{\{ \zeta (\widetilde{ x}) \ge L \}})\) and \(\widetilde{ I}_{L}^{-} = \operatorname{diag}(1_{\{ \zeta (\widetilde{ x}) < L \}})\). Then we have

$$ \widetilde{ H}(q) = e^{-qD} \widetilde{ V} + \widetilde{ U}(q) \widetilde{ H}(q). $$

We solve for \(\widetilde{ H}(q)\) and obtain

$$ \widetilde{ H}(q) = e^{-qD} \big(\widetilde{ I} - \widetilde{ U}(q) \big)^{-1} \widetilde{ V}, $$

where \(\widetilde{ I}\) is the identity matrix in \(\mathbb{R}^{nn_{v} \times n n_{v}}\). For option pricing, let

$$ \widetilde{{ u}}(q) = \big( \widetilde{u}(q, x, v) \big)_{x\in \mathbb{S}_{X}, v \in \mathbb{S}_{v}}, $$

where \(\widetilde{u}(q, x, v) \) is the Laplace transform of the option price given by

$$ \widetilde{u}(q, x, v) = \int _{0}^{\infty} e^{-qt} \mathbb{E}_{x, v} \big[ f(\widetilde{X}_{t}, \widetilde{v}_{t}) 1_{\{ \widetilde{\tau}_{L, D}^{-} \le t \}} \big] dt. $$

We obtain \(\widetilde{{ u}}(q)\) as

$$ \widetilde{{ u}}(q) = e^{-qD} \big(\widetilde{ I} - \widetilde{ U}(q) \big)^{-1} \widetilde{ V} (q \widetilde{ I} - \widetilde{ G} )^{-1} \widetilde{ f}, $$

where \(\widetilde{{ f}} = { {\mathbf{1}}}_{n_{v}} \otimes (f(x_{1}), f(x_{2}), \dots , f(x_{n}))\), and \({ {\mathbf{1}}}_{n_{v}}\) is an all-one vector in \(\mathbb{R}^{n_{v}}\).
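As a concrete illustration of (A.5), the generator \(\widetilde{ G}\) can be assembled with two Kronecker products. The following sketch (our own code, assuming the per-regime generators \({ G}_{i}\) and \(\Lambda \) are available as dense arrays) follows the state ordering of \(\widetilde{ x}\) above, in which the regime index varies fastest.

```python
import numpy as np

def regime_switching_generator(G_list, Lambda):
    """Build G~ = I (x) Lambda + sum_i G_i (x) I_i as in (A.5).

    G_list : list of n_v per-regime generator matrices, each (n, n)
    Lambda : (n_v, n_v) generator matrix of the regime process
    States are ordered ((x_1, v_1), ..., (x_1, v_{n_v}), ..., (x_n, v_{n_v})).
    """
    n_v = Lambda.shape[0]
    n = G_list[0].shape[0]
    G_tilde = np.kron(np.eye(n), Lambda)          # I (x) Lambda
    for i, G_i in enumerate(G_list):
        E_i = np.zeros((n_v, n_v))
        E_i[i, i] = 1.0                           # the matrix I_i in (A.5)
        G_tilde += np.kron(G_i, E_i)              # G_i (x) I_i
    return G_tilde
```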

Remark A.1

Cui et al. [13] show that general stochastic volatility models can be approximated by a regime-switching CTMC. Consider

$$ \textstyle\begin{cases} dS_{t}=\omega (S_{t}, v_{t}) d t+m (v_{t} ) \Gamma (S_{t}) d W_{t}^{(1)}, \\ d v_{t}=\mu (v_{t}) d t+\sigma (v_{t}) d W_{t}^{(2)}, \end{cases} $$

where \([W^{(1)}, W^{(2)} ]_{t} = \rho t\) with \(\rho \in [-1, 1]\). As in Cui et al. [13], consider

$$ X_{t} = g(S_{t}) - \rho f(v_{t}) $$

with \(g(x):=\int _{0}^{x} \frac{1}{\Gamma (u)} d u\) and \(f(x):=\int _{0}^{x} \frac{m(u)}{\sigma (u)} d u\). It follows that

$$ dX_{t} =\theta (X_{t}, v_{t}) d t+\sqrt{1-\rho ^{2}} m (v_{t} ) d W_{t}^{*}, $$

where \(W^{\ast}\) is a standard Brownian motion independent of \(W^{(2)}\) and

$$ \theta (x, v) = \frac{\omega (\zeta (x, v), v)}{\Gamma (\zeta (x, v))}- \frac{\Gamma ^{\prime}(\zeta (x, v))}{2} m^{2} (v )-\rho h (v ) $$

with

$$\begin{aligned} \zeta (x, v)& := g^{-1}\big(x + \rho f(v)\big), \\ h(x)&: =\mu (x) \frac{m(x)}{\sigma (x)}+\frac{1}{2}\big(\sigma (x) m^{ \prime}(x)-\sigma ^{\prime}(x) m(x)\big). \end{aligned}$$

Then we can use a two-layer approximation for \((X, v)\). First, construct a CTMC \(\widetilde{v}\) with state space \(\mathbb{S}_{v} = \{ v_{1}, \dots , v_{n_{v}} \}\) and generator matrix \(\Lambda \in \mathbb{R}^{n_{v} \times n_{v}}\) to approximate \(v\). Second, for each \(v_{\ell }\in \mathbb{S}_{v}\), construct a CTMC with state space \(\mathbb{S}_{X} = \{ x_{1}, \dots , x_{n} \}\) and generator matrix \(\mathcal{G}_{v_{\ell}}\) to approximate the dynamics of \(X\) conditionally on \(\widetilde{v}=v_{\ell}\). Then \((X, v)\) is approximated by a regime-switching CTMC \((\widetilde{X}, \widetilde{v})\), where \(\widetilde{X}\) transitions on \(\mathbb{S}_{X}\) according to \(\mathcal{G}_{v_{\ell}}\) whenever \(\widetilde{v} = v_{\ell}\), and \(\widetilde{v}\) evolves according to its transition rate matrix \(\Lambda \).
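The second layer of this construction needs a generator matrix for a one-dimensional diffusion on a given grid. The sketch below uses the standard local drift and variance matching scheme on a possibly nonuniform grid (one common choice in the CTMC approximation literature; the paper's own grid design, which governs the convergence rate, may differ in detail).

```python
import numpy as np

def diffusion_generator(grid, mu, sigma):
    """Tridiagonal generator of a birth-and-death chain approximating
    dX_t = mu(X_t) dt + sigma(X_t) dW_t on a (possibly nonuniform) grid.

    The transition rates match the local drift and variance exactly:
      q_{i,i-1} (-d_minus) + q_{i,i+1} d_plus   = mu(x_i)
      q_{i,i-1} d_minus^2  + q_{i,i+1} d_plus^2 = sigma(x_i)^2
    They are assumed nonnegative (take the grid fine enough).
    """
    n = len(grid)
    G = np.zeros((n, n))
    for i in range(1, n - 1):
        d_minus = grid[i] - grid[i - 1]
        d_plus = grid[i + 1] - grid[i]
        s2, m = sigma(grid[i]) ** 2, mu(grid[i])
        G[i, i - 1] = (s2 - m * d_plus) / (d_minus * (d_minus + d_plus))
        G[i, i + 1] = (s2 + m * d_minus) / (d_plus * (d_minus + d_plus))
        G[i, i] = -(G[i, i - 1] + G[i, i + 1])
    # the two boundary states are left absorbing in this simple sketch
    return G
```

Calling this routine once per variance level \(v_{\ell}\), with drift \(\theta (\cdot , v_{\ell})\) and volatility \(\sqrt{1-\rho ^{2}}\, m(v_{\ell})\), produces candidate matrices \(\mathcal{G}_{v_{\ell}}\), which can then be combined with \(\Lambda \) through (A.5).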

Remark A.2

We can significantly reduce the complexity of our algorithm when \(X\) is a regime-switching diffusion. In this case, it is approximated by a regime-switching birth-and-death process \(\widetilde{X}\). As \(\widetilde{X}\) can only move to the neighbouring states at each transition, we have

$$\begin{aligned} h(q, x, v; y, u) &= \mathbb{E}_{x, v}\big[ e^{-q \widetilde{\tau}_{L, D}^{-}} 1_{\{ \widetilde{X}_{\widetilde{\tau}_{L, D}^{-}} = y, \widetilde{v}_{ \widetilde{\tau}_{L, D}^{-}} = u \}} \big] \\ &= 1_{\{ x < L \}} e^{-qD} \widetilde{v}(D, x, v; y, u) \\ & \phantom{=:} + \sum _{w \in \mathbb{S}_{v}} 1_{\{ x < L \}} \widetilde{u}^{+}(q, D, x, v; w) h(q, L^{+}, w; y, u) \\ & \phantom{=:} + \sum _{w \in \mathbb{S}_{v}} 1_{\{ x \ge L \}} \widetilde{u}^{-}(q, x, v; w) h(q, L^{-}, w; y, u), \end{aligned}$$

where

$$\begin{aligned} &\widetilde{v}(D, x, v; y, u) = \mathbb{E}_{x, v}\big[ 1_{\{ \widetilde{T}_{L}^{+} \ge D, \widetilde{X}_{D} = y, \widetilde{v}_{D} = u \}} \big], \\ &\widetilde{u}^{+}(q, D, x, v; w) = \mathbb{E}_{x, v} \big[ e^{-q \widetilde{T}_{L}^{+}} 1_{\{ \widetilde{v}_{\widetilde{T}_{L}^{+}} = w, \widetilde{T}_{L}^{+} < D \}} \big], \\ &\widetilde{u}^{-}(q, x, v; w) = \mathbb{E}_{x, v} \big[ e^{-q \widetilde{T}_{L}^{-}} 1_{\{ \widetilde{v}_{\widetilde{T}_{L}^{-}} = w \}} \big]. \end{aligned}$$

Setting \(x = L^{+},L^{-}\) yields

$$\begin{aligned} h(q, L^{+}, v; y, u) &= \sum _{w \in \mathbb{S}_{v}} \widetilde{u}^{-}(q, L^{+}, v; w) h(q, L^{-}, w; y, u), \\ h(q, L^{-}, v; y, u) &= e^{-qD} \widetilde{v}(D, L^{-}, v; y, u) \\ & \phantom{=:} + \sum _{w \in \mathbb{S}_{v}} \widetilde{u}^{+}(q, D, L^{-}, v; w) h(q, L^{+}, w; y, u). \end{aligned}$$

As each \({ G}_{i}\), \(1 \le i \le n_{v}\), is a tridiagonal matrix, \(\widetilde{ G}\) defined in (A.5) is a block tridiagonal matrix. We can solve a block tridiagonal linear system associated with \(\widetilde{ G}\) using the block LU decomposition (see Quarteroni et al. [46, Sect. 3.8.3]), and the cost is only \(\mathcal{O}(n_{v}^{3} n)\), which is linear in \(n\). Using the analysis in Remark 2.6, we can conclude that the cost of computing \(\widetilde{{ u}}(q)\) is \(\mathcal{O}(m n_{v}^{3} n)\) if the approximation (2.6) is applied.
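A minimal sketch of the block LU (block Thomas) sweep referred to above is given below. The splitting of the block tridiagonal matrix into its diagonal, sub- and superdiagonal \(n_{v} \times n_{v}\) blocks is left to the caller, and all names are ours.

```python
import numpy as np

def block_tridiag_solve(lower, diag, upper, rhs):
    """Solve a block tridiagonal linear system by block LU (block Thomas).

    lower : list of n-1 subdiagonal blocks   (each n_v x n_v)
    diag  : list of n   diagonal blocks      (each n_v x n_v)
    upper : list of n-1 superdiagonal blocks (each n_v x n_v)
    rhs   : right-hand side of length n * n_v (or shape (n * n_v, m))
    The cost is O(n_v^3 n), i.e., linear in the number of spatial points n.
    """
    n = len(diag)
    n_v = diag[0].shape[0]
    y = list(np.asarray(rhs).reshape(n, n_v, -1))     # per-block right-hand sides
    d = list(diag)
    # forward elimination: d_i <- A_i - B_i d_{i-1}^{-1} C_{i-1}
    for i in range(1, n):
        L = np.linalg.solve(d[i - 1].T, lower[i - 1].T).T   # B_i @ inv(d_{i-1})
        d[i] = diag[i] - L @ upper[i - 1]
        y[i] = y[i] - L @ y[i - 1]
    # back substitution: x_n = d_n^{-1} y_n, x_i = d_i^{-1} (y_i - C_i x_{i+1})
    x = [None] * n
    x[n - 1] = np.linalg.solve(d[n - 1], y[n - 1])
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(d[i], y[i] - upper[i] @ x[i + 1])
    return np.concatenate(x, axis=0).reshape(np.shape(rhs))
```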

Appendix B: Proofs

Proof of Proposition 2.3

We derive the equation for \(v_{n}(D, x; y)\); the others can be obtained in a similar manner. Let \(\mathbb{T}_{\delta }= \{ i\delta : i = 0, 1, 2, \dots \}\) and correspondingly \(\tau _{L}^{\delta ,+} = \inf \{ t \in \mathbb{T}_{\delta}: Y_{t} \ge L \}\). As \(Y\) is a piecewise constant process, we have \(\tau _{L}^{\delta ,+} \downarrow \tau _{L}^{+}\) as \(\delta \to 0\). Using the monotone convergence theorem, we have

$$ v_{n,\delta}(D, x; y) := \mathbb{E}_{x}\big[ 1_{\{ \tau _{L}^{\delta ,+} \ge D \}} 1_{\{ Y_{D} = y \}} \big] \longrightarrow v_{n}(D, x; y). $$

Denote the transition probability of \(Y\) by \(p_{n}(\delta , x, z)\). We have that

$$ p_{n}(\delta , x, z) = { G}(x, z) \delta + o(\delta ) $$

for \(z \ne x\) and \(p_{n}(\delta , x, x) = 1 + { G}(x, x) \delta \). Then for \(x < L\),

$$\begin{aligned} v_{n,\delta}(D, x; y) &= \sum _{z \in \mathbb{S}_{n}} p_{n}(\delta , x, z) v_{n,\delta}(D-\delta , z; y) \\ &= v_{n,\delta}(D-\delta , x; y) + \delta \, \mathbb{G} v_{n,\delta}(D-\delta , x; y) + o(\delta ). \end{aligned}$$

Therefore,

$$ \frac{v_{n,\delta}(D, x; y) - v_{n,\delta}(D-\delta , x; y)}{\delta} = \mathbb{G} v_{n,\delta}(D-\delta , x; y) + o(1). $$

Letting \(\delta \to 0\) yields the ODE. The boundary and initial conditions are obvious. □

Proof of Proposition 2.4

We first show that the solution matrix for \({ V}\) is unique. Suppose there are two solutions \({ V'}\) and \({ V''}\). Their difference \(V'- V''\) satisfies a homogeneous linear ODE system with zero initial and boundary conditions, so it must equal the zero matrix, and hence \({V}'= {V}''\). Similarly, we obtain the uniqueness of the solution matrix for \({ U}^{+}_{2}(q)\). The solution matrix for \({ U}^{+}_{1}(q)\) is also unique. Note that for \(D>0\) and \(\Re (q)>0\), we have

$$ (q{ I}_{L}^{-} - { I}_{L}^{-} { G} + { I}_{L}^{+} ){ U}^{+}_{1}(q) = { I}_{L}^{+}. $$

Let \({A}=(a_{ij})=q{ I}_{L}^{-} - { I}_{L}^{-} { G} + { I}_{L}^{+}\). Observe that

$$ \Re (a_{ii}) > \sum _{j \ne i} |a_{ij} |, \qquad i = 0, 1, \ldots , n. $$

Indeed, if \(x_{i} < L\), then \(a_{ii} = q - { G}(x_{i}, x_{i}) = q + \sum _{j \ne i} { G}(x_{i}, x_{j})\) and \(|a_{ij}| = { G}(x_{i}, x_{j})\) for \(j \ne i\), so that \(\Re (a_{ii}) = \Re (q) + \sum _{j \ne i} |a_{ij}| > \sum _{j \ne i} |a_{ij}|\); if \(x_{i} \ge L\), the \(i\)th row of \({A}\) is the corresponding row of the identity matrix. By Gershgorin’s circle theorem (see Geršgorin [30]), all the eigenvalues of \({A}\) have a strictly positive real part, which implies the invertibility of \({A}\). Similarly, we can show the uniqueness of the solution matrix for \({ U}^{-}(q)\). □

Proof of Theorem 3.3

The assumptions imply that for any \(\varepsilon > 0\), there exists \(M > 0\) such that for any \(n\), we have

$$\begin{aligned} \mathbb{E}\big[ |f(Y_{t}^{(n)}) | 1_{\{ |f(Y_{t}^{(n)})| \ge M \}} \big] < \varepsilon , \\ \mathbb{E}\big[ |f(X_{t}) | 1_{\{ |f(X_{t})| \ge M \}}\big] < \varepsilon . \end{aligned}$$

It follows that

$$\begin{aligned} & \big|\mathbb{E}\big[ 1_{\{ \tau _{L, D}^{(n), -} \le t \}} f(Y^{(n)}_{t}) \big] - \mathbb{E}\big[ 1_{\{ \tau _{L, D}^{-} \le t \}} f(X_{t}) \big] \big| \\ &\le \big|\mathbb{E}\big[ 1_{\{ \tau _{L, D}^{(n), -} \le t \}} f_{M}(Y^{(n)}_{t}) \big] - \mathbb{E}\big[ 1_{\{ \tau _{L, D}^{-} \le t \}} f_{M}(X_{t}) \big] \big| + 2\varepsilon , \end{aligned}$$

where

$$ f_{M}(x) = M 1_{\{ f(x) > M \}} + f(x) 1_{\{ |f(x)| \le M \}} - M 1_{ \{ f(x) < - M \}}. $$

Noting that \(\varepsilon \) can be taken arbitrarily small, it suffices to show that

$$ \mathbb{E}\big[ 1_{\{ \tau _{L, D}^{(n), -} \le t \}} f_{M}(Y^{(n)}_{t}) \big] \longrightarrow \mathbb{E}\big[ 1_{\{ \tau _{L, D}^{-} \le t \}} f_{M}(X_{t}) \big] \qquad \text{as } n \to \infty . $$

But \(Y^{(n)} \Rightarrow X\) implies that if \(g: D(\mathbb{R}) \to \mathbb{R}\) is bounded and continuous on some subset \(C\) of \(D(\mathbb{R})\) such that \(\mathbb{P}[X \in C] = 1\), then

$$ \mathbb{E} [g(Y^{(n)}) ] \longrightarrow \mathbb{E} [g(X) ] \qquad \text{as } n \to \infty . $$

With the assumptions that

$$\begin{aligned} \mathbb{P}[X_{t} \in \mathcal{D}] &=\mathbb{P}[\tau _{L, D}^{-} = t] = 0, \\ \mathbb{P}[X \in V] &= \mathbb{P}[X \in W] = \mathbb{P}[X \in U^{+}] = \mathbb{P}[X \in U^{-}] = 1, \end{aligned}$$

it suffices to establish the continuity of \(\tau _{L, D}^{-}(\omega )\) on \(V \cap W \cap U^{+} \cap U^{-}\).

For \(\omega \in V \cap W \cap U^{+} \cap U^{-}\), suppose \(\omega ^{(n)} \to \omega \) as \(n \to \infty \) on \(D(\mathbb{R})\). Note that for \(\omega \in U^{+} \cap U^{-}\), we have \(\sigma _{k+1}^{-}(\omega )>\sigma _{k}^{+}(\omega )>\sigma _{k}^{-}( \omega )\) for \(k\ge 1\).

– Suppose \(\tau _{L, D}^{-}(\omega ) < s\). There exists \(k \ge 1\) such that \(\sigma _{k}^{+}(\omega ) \wedge s - \sigma ^{-}_{k}(\omega ) > D\). Let \(J(\omega )=\{t: \omega (t)\ne \omega (t-)\}\). Since \(\mathbb{R}_{+}\backslash J(\omega )\) is dense, we can find an \(\varepsilon > 0\) that is small enough such that \(\sigma _{k}^{+}(\omega ) \wedge s - \sigma ^{-}_{k}(\omega ) - 2 \varepsilon > D\), \(\omega \) is continuous at \(\sigma _{k}^{+}(\omega ) \wedge s - \varepsilon \) and \(\sigma ^{-}_{k}(\omega )+\varepsilon \), and

$$ \sup \{ \omega _{u}: \sigma ^{-}_{k}(\omega ) + \varepsilon \le u \le \sigma _{k}^{+}(\omega ) \wedge s - \varepsilon \} < L. $$

By Jacod and Shiryaev [33, Proposition 2.4], which shows the continuity of the supremum process, we have \(\sup \{ \omega _{u}^{(n)}: \sigma ^{-}_{k}(\omega ) + \varepsilon \le u \le \sigma _{k}^{+}(\omega ) \wedge s - \varepsilon \} < L\) for sufficiently large \(n\). Since \(\sigma _{k}^{+}(\omega ) \wedge s - \sigma ^{-}_{k}(\omega ) - 2 \varepsilon > D\), we conclude that \(\tau _{L, D}^{ -}(\omega ^{(n)}) < s\) for sufficiently large \(n\). Therefore, we have

$$ \limsup _{n \to \infty} \tau _{L, D}^{ -}(\omega ^{(n)}) \le \tau _{L, D}^{-}(\omega ). $$

– Suppose \(\tau _{L, D}^{-}(\omega ) > s\). Since \(\omega \in V\), there is \(\overline{k} \ge 1\) such that \(\sigma _{\overline{k}}^{+}(\omega ) \ge s \) and \(\sigma _{\overline{k}}^{-}(\omega ) < s\). As \(\omega \in W\), \(\max \{ \sigma _{k}^{+}(\omega ) \wedge s - \sigma _{k}^{-}(\omega ) : 1\le k\le \overline{k}\} < D\). Due to the denseness of \(\mathbb{R}_{+}\backslash J(\omega )\), we can find a small enough \(\varepsilon >0\) such that \(\sigma _{k}^{+}(\omega ) \wedge s - \sigma _{k}^{-}(\omega ) + 2\varepsilon < D\), \(\omega \) is continuous at \(\sigma _{k}^{+} \wedge s + \varepsilon \) and \(\sigma _{k}^{-}-\varepsilon \) for any \(1 \le k \le \overline{k}\), and \(\inf \{ \omega _{u}: \sigma _{k}^{+}(\omega ) + \varepsilon \le u \le \sigma _{k+1}^{-}(\omega ) -\varepsilon \} > L\) for any \(1 \le k < \overline{k}\). In the same spirit as [33, Proposition 2.4], we can show the continuity of the infimum process. For sufficiently large \(n\), we thus have \(\inf \{ \omega _{u}^{(n)}: \sigma _{k}^{+}(\omega ) + \varepsilon \le u \le \sigma _{k+1}^{-}(\omega ) - \varepsilon \} > L\) for any \(1 \le k < \overline{k}\). It follows that the age of the excursion below \(L\) of \(\omega ^{(n)}\) up to time \(s\) cannot exceed \(\max \{ \sigma _{k}^{+}(\omega ) \wedge s - \sigma _{k}^{-}(\omega ) + 2\varepsilon : 1\le k\le \overline{k}\} < D\). This implies \(\tau _{L, D}^{-}(\omega ^{(n)}) > s\) for sufficiently large \(n\). Therefore,

$$ \liminf _{n \to \infty} \tau _{L, D}^{ -}(\omega ^{(n)}) \ge \tau _{L, D}^{-}(\omega ). $$

Combining the arguments above, we obtain the continuity of the Parisian stopping time \(\tau _{L,D}^{-}(\omega )\) on \(V \cap W \cap U^{+} \cap U^{-}\). This concludes the proof. □

Proof of Proposition 3.4

As

$$ Y_{t}^{(n)} - Y_{0}^{(n)} - \int _{0}^{t} \mathbb{G}g(Y_{s}^{(n)}) ds, \qquad t \geq 0, $$

is a martingale with \(g(y) = y\), we have

$$ \mathbb{E}_{x}[Y_{t}^{(n)}] = x + \mathbb{E}_{x} \bigg[ \int _{0}^{t} \mathbb{G}g(Y_{s}^{(n)}) ds \bigg]. $$

We next bound \(\mathbb{G}g(Y_{s}^{(n)})\) by writing

$$\begin{aligned} \mathbb{G}g(Y_{s}^{(n)}) &= \sum _{y \in \mathbb{S}_{n}} { G}(Y_{s}^{(n)}, y) y = \sum _{y \in \mathbb{S}_{n}} { G}(Y_{s}^{(n)}, y) (y - Y_{s}^{n}) \\ &= \widetilde{\mu}(Y_{s}^{(n)}) + \sum _{y \in \mathbb{S}_{n} \backslash \{ Y_{s}^{(n)} \}} (y - Y_{s}^{(n)}) \int _{I_{y}- Y_{s}^{(n)}} \nu (Y_{s}^{(n)}, dz) \\ &= \mu (Y_{s}^{(n)}) - \sum _{y \in \mathbb{S}_{n} \backslash \{ Y_{s}^{(n)} \}} (y - Y_{s}^{(n)}) \int _{I_{y} - Y_{s}^{(n)}} 1_{\{ |z| \le 1 \}} \nu (Y_{s}^{(n)}, dz) \\ & \phantom{=:} + \sum _{y \in \mathbb{S}_{n}\backslash \{ Y_{s}^{(n)} \}} (y - Y_{s}^{(n)}) \int _{I_{y}- Y_{s}^{(n)}} \nu (Y_{s}^{(n)}, dz) \\ &= \mu (Y_{s}^{(n)}) + \sum _{y \in \mathbb{S}_{n} \backslash \{ Y_{s}^{(n)} \}} (y - Y_{s}^{(n)}) \int _{I_{y} - Y_{s}^{(n)}} 1_{\{ |z| > 1 \}} \nu (Y_{s}^{(n)}, dz) \\ &\le c_{1} Y_{s}^{(n)} + c_{2}. \end{aligned}$$

It follows that

$$ \mathbb{E}_{x}[Y_{t}^{(n)}] \le x + c_{1} \int _{0}^{t} \mathbb{E}_{x}[Y_{s}^{(n)}] ds + c_{2}t. $$

Using Gronwall’s inequality, we obtain

$$ \mathbb{E}_{x}[Y_{t}^{(n)}] \le x + c_{2}t + c_{1}\int _{0}^{t} (x+c_{2}s)e^{c_{1}(t-s)}ds, $$

which shows the claim. □

Proof of Lemma 4.4

(1) This result can be found in Zhang and Li [48, Proposition 1 and Corollary 1].

(2) We first apply a Liouville transform to the eigenvalue problem. Let

$$\begin{aligned} &y = \int _{\ell}^{x} \frac{1}{\sigma (z)} dz, \qquad B = \int _{ \ell}^{b} \frac{1}{\sigma (z)} dz, \\ & Q(y) = \frac{ ( (m (x(y) )/s (x(y) ) )^{1/4} )''}{ (m (x(y) )/s (x(y) ) )^{1/4}}, \qquad \mu _{k}^{+}(B) = \lambda ^{+}_{k}(b). \end{aligned}$$

Then the eigenvalue problem is cast in the Liouville normal form as (see Fulton and Pruess [29, Eqs. (2.1)–(2.5)])

$$ \textstyle\begin{cases} -\partial _{yy} \phi _{k}^{+}(y, B) + Q(y) \phi _{k}^{+}(y, B) = - \mu _{k}^{+}(B) \phi _{k}^{+}(y, B), \qquad y \in (0, B), \\ \phi _{k}^{+}( 0, B) = \phi _{k}^{+}(B, B) = 0. \end{cases} $$

As shown in [29, Eq. (2.13)],

$$ \| \varphi _{k}^{+}( \cdot , b) \|_{2}^{2} = \int _{\ell}^{b} \varphi _{k}^{+}( x, b)^{2} m(x) dx = \int _{0}^{B} \phi _{k}^{+}(y, B)^{2} dy = \| \phi _{k}^{+}(\cdot , B) \|_{2}^{2}. $$

Hence the normalised eigenfunction can be recovered as

$$ \varphi _{k}^{+}( x, b) = \bigg(\frac{s (x(y) )}{m (x(y) )}\bigg)^{1/4} \frac{\phi _{k}^{+}(y, B)}{ \| \phi _{k}^{+}(\cdot , B) \|}. $$

We study the sensitivities of \(\phi _{k}^{+}( y, B)\) and \(\mu _{k}^{+}( B)\) with respect to \(y\) and \(B\), from which we can obtain the sensitivities of \(\varphi _{k}^{+}( x, b)\) and \(\lambda _{k}^{+}( b)\) with respect to \(x\) and \(b\). Let

$$ s_{k}(B) = \sqrt{\mu _{k}^{+}( B)}. $$

By [29, Eq. (3.7)], we have

$$\begin{aligned} \phi _{k}^{+}( y, B) &= \frac{\sin (s_{k}(B) y) }{s_{k}(B) } \\ & \phantom{=:} + \frac{1}{s_{k}(B) } \int _{0}^{y} \sin \big(s_{k}(B) (y - z)\big) Q(z) \phi _{k}^{+}( z, B) dz. \end{aligned}$$
(B.1)

By Gronwall’s inequality, \(\phi _{k}^{+}( y, B) = \mathcal{O}(1/k)\). Differentiating on both sides yields

$$\begin{aligned} \partial _{y} \phi _{k}^{+}( y, B) &= \sin \big(s_{k}(B) y\big)+\int _{0}^{y} \cos \big(s_{k}(B) (y - z)\big) Q(z) \phi _{k}^{+}( z, B) dz, \\ \end{aligned}$$
(B.2)
$$\begin{aligned} \partial _{y}^{2} \phi _{k}^{+}( y, B) &= s_{k}(B) \cos \big(s_{k}(B) y \big) + Q(y) \phi _{k}^{+}( y, B) \\ & \phantom{=:} -s_{k}(B) \int _{0}^{y} \sin \big(s_{k}(B) (y - z)\big) Q(z) \phi _{k}^{+}( z, B) dz, \\ \end{aligned}$$
(B.3)
$$\begin{aligned} \partial _{y}^{3} \phi _{k}^{+}( y, B) &= -s_{k}^{2}(B) \sin \big(s_{k}(B) y\big) + Q'(y) \phi _{k}^{+}( y, B) + Q(y) \partial _{y} \phi _{k}^{+}( y, B) \\ & \phantom{=:} -s_{k}^{2}(B) \int _{0}^{y} \cos \big(s_{k}(B) (y - z)\big) Q(z) \phi _{k}^{+}( z, B) dz. \end{aligned}$$
(B.4)

Thus we get

$$ | \partial _{y} \phi _{k}^{+}( y, B) | = \mathcal{O}(1), \qquad | \partial _{y}^{2} \phi _{k}^{+}( y, B) | = \mathcal{O}(k), \qquad | \partial _{y}^{3} \phi _{k}^{+}( y, B) | = \mathcal{O}(k^{2}). $$

By Kong and Zettl [35, Theorem 3.2], we have

$$ s_{k}'(B) = - \frac{ (\partial _{y} \phi _{k}^{+}(B, B) )^{2}}{ \| \phi _{k}^{+}(\cdot , B) \|^{2} }. $$
(B.5)

As in [29, Eq. (6.11)], we have \(1/ \| \phi _{k}^{+}(\cdot , B) \| = \mathcal{O}(k)\). Moreover, (B.2) implies that \(\partial _{y} \phi _{k}^{+}( B, B) = \mathcal{O}(1)\). Therefore, \(s_{k}'(B) = \mathcal{O}(k^{2})\).

Differentiating on both sides of (B.1), we obtain

$$\begin{aligned} \partial _{B} \phi ^{+}_{k}( y, B) &= \frac{y \cos (s_{k}(B) y) s_{k}^{\prime}(B) s_{k}(B)-\sin (s_{k}(B) y) s_{k}^{\prime}(B)}{s_{k}(B)^{2}} \\ & \phantom{=:} +\int _{0}^{y} \frac{(y-z) \cos (s_{k}(B)(y-z)) s_{k}^{\prime}(B) s_{k}(B)}{s_{k}(B)^{2}} Q(z) \phi ^{+}_{k}( z, B) d z \\ & \phantom{=:} -\int _{0}^{y} \frac{\sin (s_{k}(B)(y-z)) s_{k}^{\prime}(B)}{s_{k}(B)^{2}} Q(z) \phi ^{+}_{k}( z, B) d z \\ & \phantom{=:} +\int _{0}^{y} \sin \big(s_{k}(B)(y-z)\big) Q(z) \partial _{B} \phi ^{+}_{k}( z, B) d z. \end{aligned}$$

Thus there exist constants \(c_{2}, c_{3} > 0\) such that

$$ |\partial _{B} \phi ^{+}_{k}( y, B) | \le c_{2} k + c_{3} \int _{0}^{y} |\partial _{B} \phi ^{+}_{k}( z, B) | dz. $$

Applying Gronwall’s inequality again shows that

$$ |\partial _{B} \phi ^{+}_{k}( y, B) | \le c_{2}k \exp ( c_{3} B ) = \mathcal{O}(k). $$

Differentiating with respect to \(B\) on both sides of (B.2)–(B.4) and applying the above estimate, we obtain

$$\begin{aligned} | \partial _{B} \partial _{y} \phi _{k}^{+}( y, B) | &= \mathcal{O}(k^{2}), \\ | \partial _{B} \partial _{y}^{2} \phi _{k}^{+}( y, B) | &= \mathcal{O}(k^{3}), \\ | \partial _{B} \partial _{y}^{3} \phi _{k}^{+}( y, B) | &= \mathcal{O}(k^{4}). \end{aligned}$$

We can also derive that

$$ \partial _{B} \| \phi _{k}^{+}( \cdot , B) \|^{2} = \big(\phi _{k}^{+}( B, B)\big)^{2} + 2\int _{0}^{B} \partial _{B} \phi _{k}^{+}( z, B) \phi _{k}^{+}( z, B) dz = \mathcal{O}(1/k). $$

Differentiating on both sides of (B.5), we obtain

$$ s_{k}''(B) = \frac{\partial _{B} \| \phi _{k}^{+}(\cdot , B) \|^{2} (\partial _{y} \phi _{k}^{+}(B, B))^{2} - \| \phi _{k}^{+}(\cdot , B) \|^{2} \partial _{B} (\partial _{y} \phi _{k}^{+}(B, B))^{2}}{ \| \phi _{k}^{+}(\cdot , B) \|^{4}} = \mathcal{O}(k^{4}). $$

Subsequently, we obtain

$$\begin{aligned} \partial _{B} \mu _{k}( B) &= 2s_{k}(B) s_{k}'(B) = \mathcal{O}(k^{3}), \\ \partial _{BB} \mu _{k}( B) &= 2s_{k}(B) s_{k}''(B) + 2 \big(s_{k}'(B) \big)^{2} = \mathcal{O}(k^{5}). \end{aligned}$$

After some calculations based on the relationship between \((\lambda _{k}^{+}( b) , \varphi _{k}^{+} ( x, b))\) and \((\mu _{k}^{+}( B), \phi _{k}^{+}(y, B))\), we obtain

$$\begin{aligned} \partial _{b} \lambda ^{+}_{k}( b) &= \mathcal{O}(k^{3}), \qquad \partial _{bb} \lambda ^{+}_{k}( b) = \mathcal{O}(k^{5}), \\ | \partial _{b} \varphi _{k}^{+} ( x, b) | &= \mathcal{O}(k^{2}), \qquad | \partial _{b} \partial _{x} \varphi _{k}^{+} ( x, b) | = \mathcal{O}(k^{3}), \\ | \partial _{b} \partial _{x}^{2} \varphi _{k}^{+} ( x, b) | & = \mathcal{O}(k^{4}), \qquad | \partial _{b} \partial _{x}^{3} \varphi _{k}^{+} ( x, b) | = \mathcal{O}(k^{5}). \end{aligned}$$

By Zhang and Li [48, Proposition 2], we have \(\lambda _{n, k}^{+} = \lambda _{k}( L^{+}) + \mathcal{O}(k^{4} \delta _{n}^{2})\) and hence

$$ \lambda _{n, k}^{+} = \lambda _{k}( L) + k^{3}\mathcal{O}(L^{+} - L) + \mathcal{O}(k^{4} \delta _{n}^{2}). $$

For (4.5), by [48, Proposition 3], we have

$$ \varphi _{n, k}^{+}( x) = \varphi _{k}^{+}( x, L^{+}) + \mathcal{O}(k^{4} \delta _{n}^{2}) = \varphi _{k}^{+}( x, L) + k \mathcal{O}(L^{+} - L) + \mathcal{O}(k^{4} \delta _{n}^{2}). $$

The same proposition also shows that \(\nabla ^{+} \varphi ^{+}_{n, k}( L^{-}) = \nabla ^{+} \varphi ^{+}_{k}( L^{-}; L^{+}) + \mathcal{O}(k^{6} \delta _{n}^{2})\). Thus we obtain

$$\begin{aligned} \varphi ^{+}_{n, k}( L^{-}) &= \varphi ^{+}_{k}( L^{-}, L^{+}) + \mathcal{O}(k^{6} \delta _{n}^{3}) \\ &= \varphi ^{+}_{k}( L^{-}, L^{+}) - \varphi ^{+}_{k}( L, L^{+}) + \varphi ^{+}_{k}( L, L^{+})- \varphi ^{+}_{k}( L^{+}, L^{+}) + \mathcal{O}(k^{6} \delta _{n}^{3}) \\ &= \partial _{x} \varphi ^{+}_{k}( L, L^{+}) (L^{-} - L) + \frac{1}{2} \partial _{xx} \varphi ^{+}_{k}( L, L^{+}) (L^{-} - L)^{2} \\ & \phantom{=:} - \partial _{x} \varphi ^{+}_{k}( L, L^{+}) (L^{+} - L) - \frac{1}{2} \partial _{xx} \varphi ^{+}_{k}( L, L^{+}) (L^{+} - L)^{2} + \mathcal{O}(k^{6} \delta _{n}^{3}) \\ &= -\partial _{x} \varphi ^{+}_{k}( L, L^{+}) \delta ^{+} L^{-} + \frac{1}{2} \partial _{xx} \varphi ^{+}_{k}( L, L^{+}) (\delta ^{+} L^{-})^{2} + \mathcal{O}(k^{6} \delta _{n}^{3}) \\ & \phantom{=:} + \partial _{xx} \varphi ^{+}_{k}( L, L^{+}) \delta ^{+} L^{-}(L^{+} - L) + \mathcal{O}(k^{6} \delta _{n}^{3}) \\ &= -\partial _{x} \varphi ^{+}_{k}( L, L) \delta ^{+} L^{-} + \frac{1}{2} \partial _{xx} \varphi ^{+}_{k}( L, L) (\delta ^{+} L^{-})^{2} + k^{4} \delta ^{+} L^{-} \mathcal{O}(L^{+} - L) \\ & \phantom{=:} + \mathcal{O}(k^{6} \delta _{n}^{3}). \end{aligned}$$
(B.6)

The rest of the results in the second part follows from [48, Lemmas 3, 5 and 6].

(3) These results can be proved using arguments similar to those for Lemma 4.6 and Proposition 4.7. □

Proof of Lemma 4.6

Let \(\psi ^{+}(\cdot )\) and \(\psi ^{-}(\cdot )\) be two independent solutions to the Sturm–Liouville equation \(\mu (x) \psi '(x) + \frac{1}{2} \sigma ^{2}(x) \psi ''(x) - q\psi (x) = 0\), where \(\psi ^{+}(\cdot )\) is strictly increasing, \(\psi ^{-}(\cdot )\) is strictly decreasing, and both are \(C^{4}\) by Gilbarg and Trudinger [31, Theorem 6.19]. Then we can construct \(u_{1}^{+}(q, x, b)\) as

$$ u_{1}^{+}(q, x, b) = \frac{\psi ^{+}(\ell ) \psi ^{-}(x) - \psi ^{+}(x) \psi ^{-}(\ell )}{\psi ^{+}(\ell ) \psi ^{-}(b) - \psi ^{+}(b) \psi ^{-}(\ell )}, $$

from which it is easy to see that \(b\mapsto u_{1}^{+}(q, x, b)\in C^{3}\).

Similarly to Li and Zhang [41, Theorem 3.22], we can show that

$$ u_{1, n}^{+}(q, x; L^{+}) = u_{1}^{+}(q, x, L^{+}) + \mathcal{O}( \delta _{n}^{2}). $$

So the first equation in the lemma follows by the smoothness of \(b \mapsto u_{1}^{+}(q, x; b)\). For the second, let \(e_{n}(x) = u_{1, n}^{+}(q, x; L^{+}) - u^{+}_{1}(q, x; L^{+})\). For \(x \in \mathbb{S}_{n} \cap (\ell , L^{+})\), we have

$$\begin{aligned} &\mathbb{G}_{n} e_{n}(x) \\ &= \mathbb{G}_{n} u_{1, n}^{+}(q, x; L^{+}) - \big(\mathbb{G}_{n} u^{+}_{1}(q, x; L^{+}) - \mathcal{G} u^{+}_{1}(q, x, L^{+}) \big) + \mathcal{G} u^{+}_{1}(q, x, L^{+}) \\ &= q \big( u_{1, n}^{+}(q, x; L^{+}) - u_{1}^{+}(q, x; L^{+}) \big) - \big(\mathbb{G}_{n} u_{1}^{+}(q, x, L^{+}) - \mathcal{G} u_{1}^{+}(q, x, L^{+}) \big) \\ &= \mathcal{O}(\delta _{n}^{2}). \end{aligned}$$

Therefore, for any \(x, y \in \mathbb{S}_{n} \cap [\ell , L^{-}]\), we obtain

$$ \frac{1}{s_{n}(y)} \nabla ^{+} e_{n}(y) - \frac{1}{s_{n}(x)} \nabla ^{+} e_{n}(x) = \sum _{z \in \mathbb{S}_{n} \cap [x^{+}, y]} m_{n}(z) \delta z \mathbb{G}_{n} e_{n}(z) = \mathcal{O}(\delta _{n}^{2}). $$

We also have \(\sum _{z \in \mathbb{S}_{n} \cap [\ell , L^{-}]} \delta ^{+} z \nabla ^{+} e_{n}(z) = e_{n}(L^{+}) - e_{n}(\ell ) = 0\), from which we conclude that there must exist \(x, y \in \mathbb{S}_{n} \cap [\ell , L^{-}]\) such that

$$ \frac{1}{s_{n}(y)} \nabla ^{+} e_{n}(y) \cdot \frac{1}{s_{n}(x)} \nabla ^{+} e_{n}(x) \le 0. $$

Therefore we obtain

$$ \bigg| \frac{1}{s_{n}(x)} \nabla ^{+} e_{n}(x) \bigg| \le \bigg| \frac{1}{s_{n}(y)} \nabla ^{+} e_{n}(y) - \frac{1}{s_{n}(x)} \nabla ^{+} e_{n}(x) \bigg| = \mathcal{O}(\delta _{n}^{2}). $$

It follows that

$$ \bigg| \frac{1}{s_{n}(L^{-})} \nabla ^{+} e_{n}(L^{-}) \bigg| \le \bigg| \frac{1}{s_{n}(x)} \nabla ^{+} e_{n}(x) \bigg| + \mathcal{O}( \delta _{n}^{2}) = \mathcal{O}(\delta _{n}^{2}), $$

and hence \(\nabla ^{+} e_{n}(L^{-}) = \mathcal{O}(\delta _{n}^{2})\) holds. Therefore we get

$$ u_{1, n}^{+}(q, L^{-}; L^{+}) - u_{1}^{+}(q, L^{-}, L^{+}) = -\delta ^{+} L^{-} \nabla ^{+} e_{n}(L^{-}) = \mathcal{O}(\delta _{n}^{3}). $$

Using the arguments for obtaining (B.6), we arrive at the second equation. □

Proof of Lemma 4.8

We can construct \(u^{-}(q, x, b)\) as

$$ u^{-}(q, x, b) = \frac{\psi ^{+}(r) \psi ^{-}(x) - \psi ^{+}(x) \psi ^{-}(r)}{\psi ^{+}(r) \psi ^{-}(b) - \psi ^{+}(b) \psi ^{-}(r)}, $$

where \(\psi ^{+}\) and \(\psi ^{-}\) are given in the proof of Lemma 4.6. From this expression, it is easy to deduce that \(b \mapsto u^{-}(q, x, b)\) is \(C^{3}\).

Direct calculations show that

$$\begin{aligned} \partial _{b} u^{-}(q, b, b) &= - \frac{\psi ^{+}(r) (\psi ^{-})'(b) - (\psi ^{+})'(b) \psi ^{-}(r)}{\psi ^{+}(r) \psi ^{-}(b) - \psi ^{+}(b) \psi ^{-}(r)}, \\ \partial _{bb} u^{-}(q, b, b) &= - \frac{\psi ^{+}(r) (\psi ^{-})''(b) - (\psi ^{+})''(b) \psi ^{-}(r)}{\psi ^{+}(r) \psi ^{-}(b) - \psi ^{+}(b) \psi ^{-}(r)} \\ & \phantom{=:} + 2 \frac{(\psi ^{+}(r) (\psi ^{-})'(b) - (\psi ^{+})'(b) \psi ^{-}(r))^{2}}{(\psi ^{+}(r) \psi ^{-}(b) - \psi ^{+}(b) \psi ^{-}(r))^{2}}. \end{aligned}$$

Using these equations, we can verify that

$$\begin{aligned} &\mu (L) \partial _{b} u^{-}(q, L, L) - \sigma ^{2}(L) \big(\partial _{b} u^{-}(q, L, L)\big)^{2} + \frac{1}{2} \sigma ^{2}(L) \partial _{bb} u^{-}(q, L, L) + q \\ &= - \frac{\psi ^{+}(r) ( \mu (L) (\psi ^{-})'(L) + \frac{1}{2} \sigma ^{2}(L) (\psi ^{-})''(L) - q\psi ^{-}(L) ) }{\psi ^{+}(r) \psi ^{-}(L) - \psi ^{+}(L) \psi ^{-}(r)} \\ & \phantom{=:} + \frac{\psi ^{-}(r) ( \mu (L) (\psi ^{+})'(L) + \frac{1}{2} \sigma ^{2}(L) (\psi ^{+})''(L) - q\psi ^{+}(L) ) }{\psi ^{+}(r) \psi ^{-}(L) - \psi ^{+}(L) \psi ^{-}(r)} = 0. \end{aligned}$$
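As a quick sanity check, the identity above can be verified numerically in the constant-coefficient case, where \(\psi ^{\pm}(x) = e^{\beta _{\pm} x}\) with \(\frac{1}{2}\sigma ^{2}\beta _{\pm}^{2} + \mu \beta _{\pm} - q = 0\). The sketch below is an illustration under our own hypothetical parameter values (and helper name `den`), not part of the proof.

```python
import numpy as np

# Numerical check of the identity above for Brownian motion with drift, where
# psi^{+/-}(x) = exp(beta_{+/-} x) solve mu*psi' + 0.5*sigma^2*psi'' = q*psi.
# All parameter values are hypothetical; r plays the role of the upper endpoint.
mu, sigma, q, r = 0.05, 0.2, 0.1, 2.0
beta_p = (-mu + np.sqrt(mu**2 + 2 * q * sigma**2)) / sigma**2
beta_m = (-mu - np.sqrt(mu**2 + 2 * q * sigma**2)) / sigma**2

def den(b, k=0):
    """k-th derivative in b of  psi^+(r)*psi^-(b) - psi^+(b)*psi^-(r)."""
    return (np.exp(beta_p * r) * beta_m**k * np.exp(beta_m * b)
            - beta_p**k * np.exp(beta_p * b) * np.exp(beta_m * r))

L = 1.0                                                    # plays the role of the barrier L
u_b  = -den(L, 1) / den(L)                                 # d/db u^-(q, b, b) at b = L
u_bb = -den(L, 2) / den(L) + 2 * (den(L, 1) / den(L))**2   # d^2/db^2 u^-(q, b, b) at b = L
print(mu * u_b - sigma**2 * u_b**2 + 0.5 * sigma**2 * u_bb + q)   # ~ 0 up to rounding
```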

We can factorise \(u_{n}^{-}(q, x; L^{-})\) as

$$\begin{aligned} u_{n}^{-}(q, x; L^{-}) &= \mathbb{E}_{x}\big[ e^{-q\tau _{L}^{-}} 1_{ \{ Y_{\tau _{L}^{-}} = L^{-} \}} \big] \\ &= \mathbb{E}_{x}\big[ e^{-q\tau _{L^{++}}^{-}} 1_{\{ Y_{\tau _{L^{++}}^{-}} = L^{+} \}} \big] \mathbb{E}_{L^{+}}\big[ e^{-q\tau _{L}^{-}} 1_{\{ Y_{ \tau _{L}^{-}} = L^{-} \}} \big] \\ &= \widetilde{u}_{n}^{-}(q, x; L^{+}) u_{n}^{-}(q, L^{+}; L^{-}). \end{aligned}$$

To derive this, we use the fact that for \(Y^{(n)}\) to reach \(L^{-}\) from above, it must first touch \(L^{+}\), because \(Y^{(n)}\) is a birth-and-death process. Using the smoothness of \(u^{-}(q, x, b)\) in \(b\) and

$$ u_{n}^{-}(q, x; L^{+}) = u^{-}(q, x; L^{+}) + \mathcal{O}(\delta _{n}^{2}), $$

we obtain

$$ u_{n}^{-}(q, x; L^{+}) = u^{-}(q, x, L) + \mathcal{O}(L^{+} - L) + \mathcal{O}(\delta _{n}^{2}). $$

Similarly to the proof of Lemma 4.6, we can show that

$$ u_{n}^{-}(q, L^{+}; L^{-}) = u^{-}(q, L^{+}, L^{-}) + \mathcal{O}( \delta _{n}^{3}). $$

Thus we obtain

$$\begin{aligned} &u^{-}(q, L^{+}, L^{-}) - 1 \\ &= u^{-}(q, L^{+}, L^{-}) - u^{-}(q, L^{+}, L) + u^{-}(q, L^{+}, L) - u^{-}(q, L^{+}, L^{+}) \\ &= \partial _{b} u^{-}(q, L^{+}, L)(L^{-} - L) + \frac{1}{2} \partial _{bb} u^{-}(q, L^{+}, L)(L^{-} - L)^{2} \\ &\quad - \partial _{b} u^{-}(q, L^{+}, L)(L^{+} - L) - \frac{1}{2} \partial _{bb} u^{-}(q, L^{+}, L)(L^{+} - L)^{2} + \mathcal{O}( \delta _{n}^{3}) \\ &= -\partial _{b} u^{-}(q, L^{+}, L) \delta ^{+} L^{-} + \frac{1}{2} \partial _{bb} u^{-}(q, L^{+}, L)(\delta ^{+} L^{-})^{2} \\ & \phantom{=:} - \partial _{bb} u^{-}(q, L^{+}, L) \delta ^{+} L^{-} (L^{+} - L) + \mathcal{O}(\delta _{n}^{3}) \\ &= -\partial _{b} u^{-}(q, L^{+}, L) \delta ^{+} L^{-} + \frac{1}{2} \partial _{bb} u^{-}(q, L^{+}, L)(\delta ^{+} L^{-})^{2} + \mathcal{O}(\delta ^{+} L^{-}) (L^{+} - L) \\ & \phantom{=:} + \mathcal{O}(\delta _{n}^{3}). \end{aligned}$$

This concludes the proof. □
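The strong Markov factorisation used in this proof can be checked numerically on a small birth-and-death chain. The sketch below is an illustration with hypothetical constant rates, absorbing endpoints and arbitrarily chosen indices playing the roles of \(L^{-}\) and \(L^{+}\); the helper `laplace_down_crossing` is our own construction and is not the paper's CTMC design.

```python
import numpy as np

# Check of the factorisation: to enter {0,...,k_minus} from above, a birth-and-death
# chain must first pass through k_plus = k_minus + 1, so the Laplace transforms multiply.
N, q = 60, 0.1
birth, death = 1.5, 1.2                      # hypothetical constant rates
Q = np.zeros((N + 1, N + 1))
for i in range(1, N):
    Q[i, i - 1], Q[i, i + 1] = death, birth
    Q[i, i] = -(birth + death)               # states 0 and N stay absorbing (zero rows)

def laplace_down_crossing(target):
    """phi(i) = E_i[exp(-q * tau)], tau = first entry time into {0,...,target}.
    Started above `target`, a birth-and-death chain enters this set at `target`."""
    phi = np.zeros(N + 1)
    phi[:target + 1] = 1.0
    free = list(range(target + 1, N))        # transient states above the target set
    A = q * np.eye(len(free)) - Q[np.ix_(free, free)]
    rhs = Q[np.ix_(free, list(range(target + 1)))].sum(axis=1)
    phi[free] = np.linalg.solve(A, rhs)      # phi[N] stays 0: the set is never reached from N
    return phi

k_minus, k_plus, x = 20, 21, 35
lhs = laplace_down_crossing(k_minus)[x]
rhs = laplace_down_crossing(k_plus)[x] * laplace_down_crossing(k_minus)[k_plus]
print(lhs, rhs)     # the two numbers agree up to numerical round-off
```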

Proof of Proposition 4.5

By Lemma 4.4, there exist constants \(c_{1}, c_{2} > 0\) such that

$$\begin{aligned} &v_{n}(D, x; y) / \big(m_{n}(y) \delta y\big) \\ &= \sum _{k = 1}^{n_{e}} e^{-\lambda _{n, k}^{+} D} \varphi _{n, k}^{+}(x) \varphi _{n, k}^{+}(y) \\ &= \sum _{k = 1}^{n_{e}} e^{-\lambda _{k}^{+}(L)D} \varphi _{k}^{+}(x, L) \varphi _{k}^{+}(y, L) + \big( \mathcal{O}(L^{+} - L) + \mathcal{O}(\delta _{n}^{2}) \big)\sum _{k = 1}^{n_{e}} k^{11} e^{-c_{1}k^{2} D} \\ &= \sum _{k = 1}^{\infty} e^{-\lambda _{k}^{+}(L)D} \varphi _{k}^{+}(x, L) \varphi _{k}^{+}(y, L) + \mathcal{O}\bigg(\sum _{k = n_{e} + 1}^{ \infty} e^{-c_{2} k^{2} D}\bigg) \\ & \phantom{=:} + \big( \mathcal{O}(L^{+} - L) + \mathcal{O}(\delta _{n}^{2}) \big) \sum _{k = 1}^{\infty} k^{11} e^{-c_{1}k^{2} D} \\ &= \bar{v}(D, x, y) + \mathcal{O}(L^{+} - L) + \mathcal{O}(\delta _{n}^{2}). \end{aligned}$$

In the last equality, we use the fact that \(\sum _{k = n_{e} + 1}^{\infty} e^{-c_{2} k^{2} D} \le c_{3} e^{-c_{4}/ \delta _{n}^{2}} \le c_{5} \delta _{n}^{2}\) for some constants \(c_{3}, c_{4}, c_{5} > 0\). We use this result directly hereafter. Using (4.5), we also obtain

$$\begin{aligned} v_{n}(D, L^{-}; y) / \big(m_{n}(y) \delta y\big) &= \sum _{k = 1}^{n_{e}} e^{-\lambda _{n, k}^{+} D} \varphi _{n, k}^{+}(L^{-}) \varphi _{n, k}^{+}(y) \\ &= \sum _{k = 1}^{n_{e}} e^{-\lambda _{k}^{+}(L) D} \varphi _{n, k}^{+}(L^{-}) \varphi _{k}^{+}(y, L) \\ & \phantom{=:} + \mathcal{O}\big((L^{+} - L)\delta ^{+}L^{-}\big) + \mathcal{O}( \delta _{n}^{3}) \\ &= -\delta ^{+} L^{-}\sum _{k = 1}^{n_{e}} e^{-\lambda _{k}^{+}(L) D} \partial _{x} \varphi ^{+}_{k}(L, L) \varphi _{k}^{+}(y, L) \\ & \phantom{=:} + \frac{1}{2} (\delta ^{+} L^{-})^{2} \sum _{k = 1}^{n_{e}} e^{- \lambda _{k}^{+}(L) D} \partial _{xx} \varphi ^{+}_{k}(L, L) \varphi _{k}^{+}(y, L) \\ & \phantom{=:} + \mathcal{O}\big((L^{+} - L)\delta ^{+}L^{-}\big) + \mathcal{O}( \delta _{n}^{3}) \\ &= -\delta ^{+} L^{-}\sum _{k = 1}^{\infty} e^{-\lambda _{k}^{+}(L) D} \partial _{x} \varphi ^{+}_{k}(L, L) \varphi _{k}^{+}(y, L) \\ & \phantom{=:} + \frac{1}{2} (\delta ^{+} L^{-})^{2} \sum _{k = 1}^{\infty} e^{- \lambda _{k}^{+}(L) D} \partial _{xx} \varphi ^{+}_{k}(L, L) \varphi _{k}^{+}(y, L) \\ & \phantom{=:} + \mathcal{O}\big((L^{+} - L)\delta ^{+}L^{-}\big) + \mathcal{O}( \delta _{n}^{3}) \\ &= - \partial _{x} \bar{v}(D, L; y) \delta ^{+} L^{-} + \frac{1}{2} \partial _{xx} \bar{v}(D, L; y) ( \delta ^{+} L^{-})^{2} \\ & \phantom{=:} + \mathcal{O}\big((L^{+} - L)\delta ^{+}L^{-}\big) + \mathcal{O}( \delta _{n}^{3}). \end{aligned}$$

For (4.8), we have

$$ \mu (L)\partial _{x} \varphi _{k}^{+}(L, L) + \frac{1}{2} \sigma ^{2}(L) \partial _{xx} \varphi _{k}^{+}(L, L) - q \varphi _{k}^{+}(L, L) = - \lambda _{k}(L) \varphi _{k}^{+}(L, L). $$

As \(\varphi _{k}^{+}(L,L) = 0\), we get \(\mu (L)\partial _{x} \varphi _{k}^{+}(L, L) + \frac{1}{2} \sigma ^{2}(L) \partial _{xx} \varphi _{k}^{+}(L, L) = 0\). It follows that

$$\begin{aligned} &\mu (L) \partial _{x} \bar{v}(D, L; y) + \frac{1}{2} \sigma ^{2}(L) \partial _{xx} \bar{v}(D, L; y) \\ &= \sum _{k = 1}^{\infty }e^{-\lambda _{k}^{+}(L) D} \bigg( \mu (L) \partial _{x} \varphi _{k}^{+}(L, L) + \frac{1}{2} \sigma ^{2}(L) \partial _{xx} \varphi _{k}^{+}(L, L) \bigg) = 0, \end{aligned}$$

where the interchange of summation and differentiation can be verified with the estimates of \(\lambda _{k}^{+}(L)\) and \(\varphi _{k}^{+}(L, L)\) in Lemma 4.4.
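To fix ideas, in the special case of a standard Brownian motion (\(\mu \equiv 0\), \(\sigma \equiv 1\)) on \((\ell , L)\) with Dirichlet conditions at both endpoints — a purely illustrative example, not an assumption of the proof — the eigenpairs are, up to the normalisation induced by the speed measure,

$$ \varphi _{k}^{+}(x, L) \propto \sin \bigg( \frac{k\pi (x - \ell )}{L - \ell} \bigg), \qquad \lambda _{k}^{+}(L) = q + \frac{k^{2}\pi ^{2}}{2(L - \ell )^{2}}, \qquad k \ge 1, $$

so that \(\varphi _{k}^{+}(L, L) = 0\), \(\partial _{x} \varphi _{k}^{+}(L, L) \ne 0\) and \(\lambda _{k}^{+}(L)\) grows quadratically in \(k\), which is consistent with the bounds of the form \(e^{-c_{1}k^{2}D}\) used earlier in this proof.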

The results for \(v_{n}(D, x;\ell )\) and \(v(D, L, L)\) can be proved by arguments similar to those for the third part of Lemma 4.6 and the equation

$$ v(D, x, L) = v_{1}(x, L) - v_{2}(D, x, L) $$

along with the differential equation for \(v_{1}(x, L)\) and the eigenfunction expansion for \(v_{2}(D, x, L)\). □

Proof of Proposition 4.7

For the first and second claims, we only prove that

$$ c_{n, k}(q) = c_{k}(q, L) + \mathcal{O}(k^{4} \delta _{n}^{2}). $$

The other steps are almost identical to those in the proof of Proposition 4.5. Indeed, we have

$$\begin{aligned} &c_{n, k}(q) - c_{k}(q, L) \\ &= \sum _{y \in \mathbb{S}_{n}^{o} \cap (-\infty , L^{+})}\!\!\! \varphi _{n, k}^{+}(y) u_{1, n}^{+}(q, y; L^{+}) m_{n}(y) \delta y - \int _{\ell}^{L} \varphi _{k}^{+}(y, L) u_{1}^{+}(q, y, L) m(y) dy \\ &= \sum _{y \in \mathbb{S}_{n}^{o} \cap (-\infty , L^{+})} \!\!\! \varphi _{k}^{+}(y, L) u_{1}^{+}(q, y, L) m_{n}(y) \delta y + \mathcal{O}\big(k^{4} \delta _{n}^{2}\big) + \mathcal{O}\big(k^{2} (L^{+} - L)\big) \\ & \phantom{=:} \qquad - \int _{\ell}^{L} \varphi _{k}^{+}(y, L) u_{1}^{+}(q, y, L) m(y) dy \\ &= \sum _{y \in \mathbb{S}_{n}^{o} \cap (-\infty , L^{+})} \!\!\! \varphi _{k}^{+}(y, L) u_{1}^{+}(q, y, L) m(y) \delta y + \mathcal{O} \big(k^{4} \delta _{n}^{2}\big) + \mathcal{O}\big(k^{2} (L^{+} - L) \big) \\ & \phantom{=:} \qquad - \int _{\ell}^{L} \varphi _{k}^{+}(y, L) u_{1}^{+}(q, y, L) m(y) dy \\ &= \sum _{z \in \mathbb{S}_{n}^{-} , z< L^{-}} \bigg( \frac{1}{2} \big( \varphi _{k}^{+}(z, L) u_{1}^{+}(q, z, L) m(z) + \varphi _{k}^{+}(z^{+}, L) u_{1}^{+}(q, z^{+}, L) m(z^{+}) \big) \delta ^{+} z \\ & \phantom{=:} \quad - \int _{z}^{z^{+}} \varphi _{k}^{+}(y, L) u_{1}^{+}(q, y, L) m(y) dy \bigg) + \int _{L^{-}}^{L} \varphi _{k}^{+}(y, L) u_{1}^{+}(q, y, L) m(y) dy \\ & \phantom{=:} \quad + \mathcal{O} (k^{4} \delta _{n}^{2} ) + \mathcal{O}\big(k^{2} (L^{+} - L)\big) \\ &= \mathcal{O} (k^{4} \delta _{n}^{2} ) + \mathcal{O}\big(k^{2} (L^{+} - L)\big). \end{aligned}$$

For the last claim, using the arguments in the proof of Proposition 4.5, we obtain

$$ \mu (L) \partial _{x} u_{2}^{+}(q, D, L; L) + \frac{1}{2} \sigma ^{2}(L) \partial _{xx} u^{+}_{2}(q, D, L; L) = 0. $$

It follows that

$$\begin{aligned} &\mu (L) \partial _{x} u^{+}(q, D, L, L) + \frac{1}{2} \sigma ^{2}(L) \partial _{xx} u^{+}(q, D, L, L) - q \\ &= \mu (L) \partial _{x} u_{1}^{+}(q, L, L) + \frac{1}{2} \sigma ^{2}(L) \partial _{xx} u_{1}^{+}(q, L, L) - qu_{1}^{+}(q, L, L) = 0, \end{aligned}$$

where we use the differential equation for \(u_{1}^{+}(q, x, L)\) at \(x = L\) and the boundary condition \(u_{1}^{+}(q, L, L) = 1\). □

Proof of Proposition 4.10

By Athanasiadis and Stratis [3, Theorem 1.4], (4.17) admits a unique solution \(w(\cdot )\) that belongs to \(C^{1}([\ell , r]) \cap C^{2}([\ell , r] \backslash \mathcal{D})\). The equation for \(\widetilde{w}(q, z)\) can be written in a self-adjoint form as

$$ \frac{1}{m(x)} \partial _{x}\bigg( \frac{1}{s(x)} \partial _{x} \widetilde{w}(q, x)\bigg) - q \widetilde{w}(q, x) = f(x). $$
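For concreteness, recall that with the scale and speed densities in their standard normalisation (assumed here only to fix notation; any convention differing by constants gives the same operator),

$$ s(x) = \exp \bigg( -\int _{x_{0}}^{x} \frac{2\mu (u)}{\sigma ^{2}(u)}\, du \bigg), \qquad m(x) = \frac{2}{\sigma ^{2}(x) s(x)}, $$

for an arbitrary reference point \(x_{0} \in (\ell , r)\), a direct computation gives

$$ \frac{1}{m(x)} \partial _{x}\bigg( \frac{1}{s(x)} \partial _{x} \widetilde{w}(q, x)\bigg) = \frac{1}{2} \sigma ^{2}(x) \partial _{xx} \widetilde{w}(q, x) + \mu (x) \partial _{x} \widetilde{w}(q, x), $$

so the self-adjoint form above is indeed a rewriting of the original equation for \(\widetilde{w}(q, \cdot )\).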

Multiplying both sides by \(m(x)\) and integrating from \(\ell ^{+}_{1/2} = \ell + \delta ^{+} \ell /2\) to \(z \in \mathbb{S}_{n}\) yields

$$\begin{aligned} &\frac{1}{s(z)} \partial _{x} \widetilde{w}(q, z) - \frac{1}{s(\ell ^{+}_{1/2})} \partial _{x} \widetilde{w}(q, \ell ^{+}_{1/2}) - q \int _{\ell ^{+}_{1/2}}^{z} m(y) \widetilde{w}(q, y) dy \\ &= \int _{\ell ^{+}_{1/2}}^{z} m(y) f(y) dy. \end{aligned}$$

Further multiplying both sides by \(s(z)\) and integrating from \(x\) to \(x^{+}\) gives

$$\begin{aligned} &\widetilde{w}(q, x^{+}) - \widetilde{w}(q, x) - \frac{\int _{x}^{x^{+}} s(z) dz}{s(\ell ^{+}_{1/2})} \partial _{x} \widetilde{w}(q, \ell ^{+}_{1/2}) \\ & \phantom{=::} \qquad \, \ \ - q \int _{x}^{x^{+}} s(z) \int _{\ell ^{+}_{1/2}}^{z} m(y) \widetilde{w}(q, y) dy dz \\ &= \int _{x}^{x^{+}} s(z) \int _{\ell ^{+}_{1/2}}^{z} m(y) f(y) dy dz. \end{aligned}$$

Moreover, it is clear that \(\widetilde{w}_{n}(q, x)\) satisfies

$$ \frac{\delta ^{-} x}{m_{n}(x) \delta x} \nabla ^{-}\bigg( \frac{1}{s_{n}(x)} \nabla ^{+} \widetilde{w}_{n}(q, x) \bigg) - q \widetilde{w}_{n}(q, x) = f(x). $$

Multiplying both sides by \(m_{n}(x) \delta x\) and summing from \(\ell ^{+}\) to \(x\), we obtain

$$\begin{aligned} &\frac{1}{s_{n}(x)} \nabla ^{+} \widetilde{w}_{n}(q, x) - \frac{1}{s_{n}(\ell )} \nabla ^{+} \widetilde{w}_{n}(q,\ell ) - q \sum _{\ell < y \le x} \widetilde{w}_{n}(q, y) m_{n}(y) \delta y \\ &= \sum _{\ell < y \le x} f(y) m_{n}(y) \delta y. \end{aligned}$$

It follows that

$$\begin{aligned} s_{n}(x) \delta ^{+} x \sum _{\ell < y \le x} f(y) m_{n}(y) \delta y &= \widetilde{w}_{n}(q, x^{+}) - \widetilde{w}_{n}(q, x) \\ & \phantom{=:} - s_{n}(x) \frac{\delta ^{+} x}{s_{n}(\ell )} \nabla ^{+} \widetilde{w}_{n}(q,\ell ) \\ & \phantom{=:} - q s_{n}(x) \delta ^{+} x\sum _{\ell < y \le x} \widetilde{w}_{n}(q, y) m_{n}(y) \delta y. \end{aligned}$$

Let \(e(x) = \widetilde{w}_{n}(q, x) - \widetilde{w}(q, x)\). We have

$$\begin{aligned} &e(x^{+}) - e(x) \\ &= s_{n}(x) \delta ^{+} x \bigg( \frac{1}{s_{n}(\ell ^{+}_{1/2})} \nabla ^{+} \widetilde{w}_{n}(q,\ell ) - \frac{1}{s(\ell ^{+}_{1/2})} \partial _{x} \widetilde{w}(q,\ell ) \bigg) \\ & \phantom{=:} + \bigg( s_{n}(x) \delta ^{+} x - \int _{x}^{x^{+}} s(z) dz \bigg) \frac{1}{s(\ell ^{+}_{1/2})} \partial _{x} \widetilde{w}(q,\ell ) \end{aligned}$$
(B.7)
$$\begin{aligned} & \phantom{=:} + q s_{n}(x) \delta ^{+} x \sum _{\ell < y \le x} e(y) m_{n}(y) \delta y \\ & \phantom{=:} + q \bigg(s_{n}(x) \delta ^{+} x - \int _{x}^{x^{+}} s(z) dz\bigg) \sum _{\ell < y \le x} \widetilde{w}(q, y) m_{n}(y) \delta y \end{aligned}$$
(B.8)
$$\begin{aligned} & \phantom{=:} + q \int _{x}^{x^{+}} s(z) \bigg( \sum _{\ell < y \le x} \widetilde{w}(q, y) m_{n}(y) \delta y - \int _{\ell ^{+}_{1/2}}^{z} \widetilde{w}(q, y) m(y) dy \bigg) dz \end{aligned}$$
(B.9)
$$\begin{aligned} & \phantom{=:} + \bigg(s_{n}(x) \delta ^{+} x - \int _{x}^{x^{+}} s(z) dz\bigg) \sum _{\ell < y \le x} f(y) m_{n}(y) \delta y \end{aligned}$$
(B.10)
$$\begin{aligned} & \phantom{=:} + \int _{x}^{x^{+}} s(z) \bigg( \sum _{\ell < y \le x} f(y) m_{n}(y) \delta y - \int _{\ell ^{+}_{1/2}}^{z} f(y) m(y) dy \bigg) dz. \end{aligned}$$
(B.11)

The quantities in (B.7), (B.8) and (B.10) are all \(\mathcal{O}(\delta _{n}^{3})\). For the quantity in (B.11), note that

$$\begin{aligned} s(z) &= s(x) + s'(x)(z - x) + \mathcal{O}(\delta _{n}^{2}), \\ \int _{\ell ^{+}_{1/2}}^{z} f(y) m(y) dy &= \int _{\ell ^{+}_{1/2}}^{x^{+}_{1/2}} f(y) m(y) dy + f(x) m(x) (z - x^{+}_{1/2}) \\ & \phantom{=:} + \mathcal{O}(\delta _{n}^{\gamma }1_{\{ x \in \mathcal{D}_{N} \}} + \delta _{n}^{2}), \end{aligned}$$

where \(x^{+}_{1/2} = x + \delta ^{+}x/2\) for \(z \in [x, x^{+}]\),

$$ \int _{\ell ^{+}_{1/2}}^{x^{+}_{1/2}} f(y) m(y) dy = \sum _{\ell < y \le x} f(y) m_{n}(y) \delta y + \mathcal{O}(\delta _{n}^{\gamma}), $$

and \(\mathcal{D}_{N} = \{ y \in \mathbb{S}_{n}^{o}: [y^{-}, y^{+}] \cap \mathcal{D} \ne \emptyset \}\). Therefore,

$$\begin{aligned} \text{(B.11)} &= s(x) \delta ^{+} x \bigg( \sum _{\ell < y \le x} f(y) m_{n}(y) \delta y - \int _{\ell ^{+}_{1/2}}^{x^{+}_{1/2}} f(y) m(y) dy \bigg) \\ & \phantom{=:} + s(x) f(x) m(x) \int _{x}^{x^{+}} (x^{+}_{1/2} - z) dz + \mathcal{O} ( \delta _{n}^{1 + \gamma} 1_{\{ x\in \mathcal{D}_{N} \}} + \delta _{n}^{3} + \delta _{n}^{1 + \gamma} ) \\ &= \mathcal{O} (\delta _{n}^{1 + \gamma} 1_{\{ x\in \mathcal{D}_{N} \}} + \delta _{n}^{3} ). \end{aligned}$$

By the same argument, we can show that \(\text{(B.9)} = \mathcal{O}(\delta _{n}^{3})\). Putting these estimates back and letting \(e^{\ast}(x) = -e(x)\), we deduce that there exists a constant \(c_{1} > 0\) independent of \(\delta _{n}\) such that

$$\begin{aligned} e(x^{+}) &\le s_{n}(x) \delta ^{+} x \bigg( \frac{1}{s_{n}(\ell ^{+}_{1/2})} \nabla ^{+} \widetilde{w}_{n}(q, \ell ) - \frac{1}{s(\ell ^{+}_{1/2})} \partial _{x} \widetilde{w}(q, \ell ) \bigg) \\ & \phantom{=:} + e(x) + q s_{n}(x) \delta ^{+} x \sum _{\ell < y \le x} e(y) m_{n}(y) \delta y + c_{1} (\delta _{n}^{1 + \gamma} 1_{\{ x\in \mathcal{D}_{N} \}} + \delta _{n}^{3} ), \\ e^{\ast}(x^{+}) &\le -s_{n}(x) \delta ^{+} x \bigg( \frac{1}{s_{n}(\ell ^{+}_{1/2})} \nabla ^{+} \widetilde{w}_{n}(q, \ell ) - \frac{1}{s(\ell ^{+}_{1/2})} \partial _{x} \widetilde{w}(q, \ell ) \bigg) \\ & \phantom{=:} + e^{\ast}(x) + q s_{n}(x) \delta ^{+} x \sum _{\ell < y \le x} e^{ \ast}(y) m_{n}(y) \delta y + c_{1} (\delta _{n}^{1 + \gamma} 1_{\{ x \in \mathcal{D}_{N} \}} + \delta _{n}^{3} ). \end{aligned}$$

Using the positive lower and upper bounds for \(s_{n}(x)\) and \(m_{n}(x)\), which are independent of \(\delta _{n}\), together with the discrete Gronwall inequality, we conclude that there exist constants \(c_{2}, c_{3}, c_{4}, c_{5} > 0\) independent of \(\delta _{n}\) such that

$$\begin{aligned} e(x) &\le c_{2}\bigg( c_{1} \delta _{n}^{\gamma }+ c_{3} \Big( \frac{1}{s_{n}(\ell ^{+}_{1/2})} \nabla ^{+} \widetilde{w}_{n}(q, \ell ) - \frac{1}{s(\ell ^{+}_{1/2})} \partial _{x} \widetilde{w}(q, \ell ) \Big) \bigg), \end{aligned}$$
(B.12)
$$\begin{aligned} e^{\ast}(x) &\le c_{4}\bigg( c_{1} \delta _{n}^{\gamma }- c_{5} \Big( \frac{1}{s_{n}(\ell ^{+}_{1/2})} \nabla ^{+} \widetilde{w}_{n}(q, \ell ) - \frac{1}{s(\ell ^{+}_{1/2})} \partial _{x} \widetilde{w}(q, \ell ) \Big) \bigg). \end{aligned}$$
(B.13)

Note that \(e(r) = e^{\ast}(r) = 0\). Then there exist constants \(c_{6}, c_{7} > 0\) independent of \(\delta _{n}\) such that

$$\begin{aligned} \bigg( \frac{1}{s_{n}(\ell ^{+}_{1/2})} \nabla ^{+} \widetilde{w}_{n}(q, \ell ) - \frac{1}{s(\ell ^{+}_{1/2})} \partial _{x} \widetilde{w}(q, \ell ) \bigg) &\le c_{6} \delta _{n}^{\gamma}, \\ -\bigg( \frac{1}{s_{n}(\ell ^{+}_{1/2})} \nabla ^{+} \widetilde{w}_{n}(q, \ell ) - \frac{1}{s(\ell ^{+}_{1/2})} \partial _{x} \widetilde{w}(q, \ell ) \bigg) &\le c_{7} \delta _{n}^{\gamma}. \end{aligned}$$

Hence \(| \frac{1}{s_{n}(\ell ^{+}_{1/2})} \nabla ^{+} \widetilde{w}_{n}(q, \ell ) - \frac{1}{s(\ell ^{+}_{1/2})} \partial _{x} \widetilde{w}(q, \ell ) | = \mathcal{O}(\delta _{n}^{\gamma})\). Substituting this estimate back into (B.12) and (B.13) bounds both \(e(x)\) and \(e^{\ast}(x) = -e(x)\) from above by \(\mathcal{O}(\delta _{n}^{\gamma})\). Thus \(e(x) = \mathcal{O}(\delta _{n}^{\gamma})\) and the proof is complete. □
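The two-point problem analysed in this proof can be discretised and solved directly. The sketch below is our own minimal illustration (uniform grid, Dirichlet conditions at both endpoints, standard normalisation of \(s\) and \(m\), and hypothetical coefficient functions; it is not the exact CTMC construction of the paper) of assembling the tridiagonal system for the self-adjoint operator and solving it with a right-hand side \(f\) that has a single discontinuity. Comparing solutions on successively refined grids against a fine-grid reference is one way to probe the \(\mathcal{O}(\delta _{n}^{\gamma})\) behaviour established above.

```python
import numpy as np

# Finite-difference sketch of  (1/m) d/dx( (1/s) dw/dx ) - q w = f  on (ell, r),
# with w(ell) = w(r) = 0 and a right-hand side f that jumps at one point.
# Grid, boundary conditions and coefficient functions are our own illustrative choices.
ell, r, q = 0.0, 1.0, 0.1
mu_fun    = lambda x: 0.05 * np.ones_like(x)
sigma_fun = lambda x: 0.2 + 0.1 * x
f_fun     = lambda x: np.where(x < 0.6, 1.0, -0.5)      # discontinuity at x = 0.6

def solve(n):
    x = np.linspace(ell, r, n + 1)
    h = x[1] - x[0]
    # scale density s(x) = exp(-int 2 mu/sigma^2), evaluated by a cumulative trapezoidal rule
    integrand = 2.0 * mu_fun(x) / sigma_fun(x) ** 2
    s = np.exp(-np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * h))))
    m = 2.0 / (sigma_fun(x) ** 2 * s)
    s_half = 0.5 * (s[1:] + s[:-1])                      # s at the midpoints x_{i+1/2}
    # tridiagonal system at the interior nodes
    lower = 1.0 / (m[1:-1] * h**2 * s_half[:-1])
    upper = 1.0 / (m[1:-1] * h**2 * s_half[1:])
    main  = -(lower + upper) - q
    A = np.diag(main) + np.diag(lower[1:], -1) + np.diag(upper[:-1], 1)
    w = np.zeros(n + 1)
    w[1:-1] = np.linalg.solve(A, f_fun(x[1:-1]))
    return x, w

x, w = solve(200)
print(w[::40])    # a few values of the approximate solution
```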

Proof of Theorem 4.9

The smoothness of \(h(q, x; y)\) and its limit at \(y =\ell \) and value at \(y = L\) are direct consequences of the properties of \(v(D, x; y)\). By (4.7), (4.11) and (4.15), we have

$$\begin{aligned} &u_{n}^{-}(q, L^{+}; L^{-}) v_{n}(D, L^{-}; y) / \big(m_{n}(y) \delta y\big) \\ &= \bigg(- \bar{v}_{x} \delta ^{+} L^{-} + \frac{1}{2} \bar{v}_{xx} ( \delta ^{+} L^{-})^{2} + \mathcal{O}(\delta ^{+} L^{-})(L^{+} - L) \bigg) \\ & \phantom{=:} \times \bigg(1 - u^{-}_{b} \delta ^{+} L^{-} + \frac{1}{2} u^{-}_{bb} ( \delta ^{+} L^{-})^{2} + \mathcal{O}(\delta ^{+} L^{-})(L^{+} - L) \bigg) + \mathcal{O}(\delta _{n}^{3}) \\ &= -\bar{v}_{x} \delta ^{+}L^{-} + (\delta ^{+}L^{-})^{2} \bigg( \bar{v}_{x} u_{b}^{-} + \frac{1}{2} \bar{v}_{xx} \bigg) + \mathcal{O}( \delta ^{+} L^{-}) (L^{+} - L) + \mathcal{O}(\delta _{n}^{3}). \end{aligned}$$

Here, we omit the arguments \((D, L; y)\) of \(\bar{v}\) and \((q, L, L)\) of \(u^{-}\) to shorten the equations. Moreover, using (4.11) and (4.15), we obtain

$$\begin{aligned} &1 - u_{n}^{+}(q,D, L^{-}; L^{+}) u_{n}^{-}(q, L^{+}; L^{-}) \\ &= 1 - \bigg(1 - u^{+}_{x} \delta ^{+} L^{-} + \frac{1}{2} u^{+}_{xx} ( \delta ^{+} L^{-})^{2} + \mathcal{O}(\delta ^{+} L^{-})(L^{+} - L) \bigg) \\ & \phantom{::} \qquad \times \bigg(1 - u^{-}_{b} \delta ^{+} L^{-} + \frac{1}{2} u^{-}_{bb} (\delta ^{+} L^{-})^{2} + \mathcal{O}(\delta ^{+} L^{-})(L^{+} - L) \bigg) + \mathcal{O}(\delta _{n}^{3}) \\ &= \delta ^{+} L^{-} \big( u_{b}^{-} + u_{x}^{+} \big) - (\delta ^{+} L^{-})^{2} \bigg( \frac{1}{2} u_{bb}^{-} + u_{x}^{+} u_{b}^{-} + \frac{1}{2} u_{xx}^{+} \bigg) \\ & \phantom{=:} + \mathcal{O}(\delta ^{+} L^{-}) (L^{+} - L) + \mathcal{O}(\delta _{n}^{3}). \end{aligned}$$

Here, we also omit the arguments \((q, D, L, L)\) of \(u^{+}\) for the same reason. It follows that

$$\begin{aligned} &h_{n}(q, L^{+}; y)/\big(m_{n}(y)\delta y\big) \\ &= e^{-qD} \frac{-\bar{v}_{x} + \delta ^{+}L^{-} ( \bar{v}_{x} u_{b}^{-} + \frac{1}{2} \bar{v}_{xx} ) + \mathcal{O}(L^{+} - L) + \mathcal{O}(\delta _{n}^{2})}{ u_{b}^{-} + u_{x}^{+} - \delta ^{+} L^{-} ( \frac{1}{2} u_{bb}^{-} + u_{x}^{+} u_{b}^{-} + \frac{1}{2} u_{xx}^{+} ) + \mathcal{O} (L^{+} - L) + \mathcal{O}(\delta _{n}^{2})} \\ &= h(q, L; y) / m(y) \\ & \phantom{=:} + e^{-qD} \delta ^{+} L^{-} \frac{-\frac{1}{2} \bar{v}_{x} u_{bb}^{-} - \frac{1}{2} \bar{v}_{x} u_{xx}^{+} + \bar{v}_{x} (u_{b}^{-} )^{2} + \frac{1}{2} \bar{v}_{xx} u_{b}^{-} + \frac{1}{2} u_{x}^{+} \bar{v}_{xx} }{ ( u_{b}^{-} + u_{x}^{+} )^{2} + \mathcal{O}(\delta _{n})} \\ & \phantom{=:} + \mathcal{O}(L^{+} - L) + \mathcal{O}(\delta _{n}^{2}). \end{aligned}$$

By (4.12) and (4.8), we have \(\bar{v}_{xx} = -\frac{2\mu (L)}{\sigma ^{2}(L)} \bar{v}_{x}\) and \(u^{+}_{xx} = -\frac{2\mu (L)}{\sigma ^{2}(L)} u^{+}_{x} + \frac{2q}{\sigma ^{2}(L)}\). Subsequently, we obtain

$$\begin{aligned} &-\frac{1}{2} \bar{v}_{x} u_{bb}^{-} - \frac{1}{2} \bar{v}_{x} u_{xx}^{+} + \bar{v}_{x} (u_{b}^{-} )^{2} + \frac{1}{2} \bar{v}_{xx} u_{b}^{-} + \frac{1}{2} u_{x}^{+} \bar{v}_{xx} \\ &= -\frac{1}{2} \bar{v}_{x} u_{bb}^{-}- \frac{1}{2} \bar{v}_{x} \bigg( -\frac{2\mu (L)}{\sigma ^{2}(L)} u^{+}_{x} + \frac{2q}{\sigma ^{2}(L)} \bigg) \\ & \phantom{=:} + \bar{v}_{x} (u_{b}^{-} )^{2} - \frac{1}{2} \frac{2\mu (L)}{\sigma ^{2}(L)} \bar{v}_{x} u_{b}^{-} - \frac{1}{2} \frac{2\mu (L)}{\sigma ^{2}(L)} \bar{v}_{x} u_{x}^{+} \\ &= -\frac{\bar{v}_{x}}{\sigma ^{2}(L) }\bigg( \frac{1}{2} \sigma ^{2}(L) u_{bb}^{-} + q - \sigma ^{2}(L) (u_{b}^{-})^{2} + \mu (L) u_{b}^{-} \bigg) = 0. \end{aligned}$$

The last equality follows from (4.13). Therefore, we have that

$$ h_{n}(q, L^{+}; y)/\big(m_{n}(y)\delta y\big) = h(q, L; y) / m(y) + \mathcal{O}(L^{+} - L) + \mathcal{O}(\delta _{n}^{2}). $$

Then by (4.14), we obtain

$$\begin{aligned} &h_{n}(q, L^{-}; y) u_{n}^{-}(q, x; L^{-}) / \big(m_{n}(y) \delta y \big) \\ &= h_{n}(q, L^{-}; y) u_{n}^{-}(q, x; L^{+}) u_{n}^{-}(q, L^{+}; L^{-}) / \big(m_{n}(y) \delta y\big) \\ &= h_{n}(q, L^{+}; y) u_{n}^{-}(q, x; L^{+}) / \big(m_{n}(y) \delta y \big) \\ &= h(q, L; y) u_{n}^{-}(q, x; L^{+}) / m(y) + \mathcal{O}(L^{+} - L) + \mathcal{O}(\delta _{n}^{2}) \\ &= h(q, L; y) u^{-}(q, x, L) / m(y) + \mathcal{O}(L^{+} - L) + \mathcal{O}(\delta _{n}^{2}). \end{aligned}$$

Based on the previous estimates, we deduce that

$$\begin{aligned} &h_{n}(q, x; y) / \big(m_{n}(y) \delta y\big) \\ &= 1_{\{ x< L \}} e^{-qD} v_{n}(D, x; y) / \big(m_{n}(y) \delta y \big) \\ & \phantom{=:} + 1_{\{ x < L \}} u_{n}^{+}(q, D, x; L^{+}) h_{n}(q, L^{+}; y) / \big(m_{n}(y) \delta y\big) \\ & \phantom{=:} + 1_{\{ x \ge L \}} u^{-}_{n}(q, x; L^{-}) h_{n}(q, L^{-}; y) / \big(m_{n}(y) \delta y\big) \\ &= 1_{\{ x< L \}} e^{-qD} \bar{v}(D, x; y) + 1_{\{ x < L \}} u^{+}(q, D, x, L) h(q, L; y) / m(y) \\ & \phantom{=:} + 1_{\{ x \ge L \}} h(q, L; y) u^{-}(q, x, L) / m(y) + \mathcal{O}(L^{+} - L) + \mathcal{O}(\delta _{n}^{2}). \end{aligned}$$

 □

Proof of Theorem 4.11

Recall that

$$\begin{aligned} h_{n}(q, x;\ell ) &= h(q, x;\ell ) + \mathcal{O}(L^{+} - L) + \mathcal{O}(\delta _{n}^{2}), \\ \widetilde{w}(q,\ell ) &= \widetilde{w}_{n}(q,\ell ) = f(\ell )/q, \end{aligned}$$

which will be used below. We have

$$\begin{aligned} &\widetilde{u}_{n}(q, x) - \widetilde{u}(q, x) \\ &= \sum _{z \in \mathbb{S}_{n} \cap (\ell , L)} \!\!\! h_{n}(q, x; z) \widetilde{w}_{n}(q, z) - \int _{\ell}^{L} h(q, x; z) \widetilde{w}(q, z) dz \\ & \phantom{=:} \quad \ \ + h_{n}(q, x;\ell ) \widetilde{w}_{n}(q,\ell ) - h(q, x; \ell ) \widetilde{w}(q,\ell ) \\ &= \sum _{z \in \mathbb{S}_{n} \cap (\ell , L)}\!\!\! h_{n}(q, x; z) \widetilde{w}_{n}(q, z) - \int _{\ell}^{L} h(q, x; z) \widetilde{w}(q, z) dz + \mathcal{O}(L^{+} - L) + \mathcal{O}(\delta _{n}^{2}) \\ &= \sum _{z \in \mathbb{S}_{n} \cap (\ell , L)} \!\!\! h_{n}(q, x; z) / \big(m_{n}(z) \delta z\big) \widetilde{w}_{n}(q, z) m_{n}(z) \delta z - \int _{\ell}^{L} h(q, x; z) \widetilde{w}(q, z) dz \\ & \phantom{=:} \quad \ \ + \mathcal{O}(L^{+} - L) + \mathcal{O}(\delta _{n}^{2}) \\ &= \sum _{z \in \mathbb{S}_{n} \cap (\ell , L)}\!\!\! h(q, x; z) \widetilde{w}(q, z) m_{n}(z) / m(z) \delta z - \int _{\ell}^{L} h(q, x; z) \widetilde{w}(q, z) dz \\ & \phantom{=:} \quad \ \ + \mathcal{O}(L^{+} - L) + \mathcal{O}(\delta _{n}^{\gamma}) \\ &= \sum _{z \in \mathbb{S}_{n} \cap (\ell , L)} \!\!\! h(q, x; z) \widetilde{w}(q, z) \delta z - \int _{\ell}^{L} h(q, x; z) \widetilde{w}(q, z) dz \\ & \phantom{=:} \quad \ \ + \frac{1}{2}\sum _{z \in \mathbb{S}_{n} \cap (\ell , L)} \! \!\! h(q, x; z) \widetilde{w}(q, z) \frac{\mu (z)}{\sigma ^{2}(z) } \big( (\delta ^{+} z)^{2} - (\delta ^{-} z)^{2} \big) \\ & \phantom{=:} \quad \ \ + \mathcal{O}(L^{+} - L) + \mathcal{O}(\delta _{n}^{\gamma}) \\ &= \sum _{z \in \mathbb{S}_{n}^{-} \cap (\ell , L^{-})}\!\! \bigg( \frac{1}{2} \big(h(q, x; z) \widetilde{w}(q, z) + h(q, x; z^{+}) \widetilde{w}(q, z^{+})\big) \delta ^{+} z - \int _{z}^{z^{+}} h(q, x; y) \widetilde{w}(q, y) dy \bigg) \\ & \phantom{=:} \quad \ \ + h(q, x; \ell ^{+}) \widetilde{w}(q, \ell ^{+}) \delta ^{-} \ell ^{+}/2 + h(q, x; L^{-}) \widetilde{w}(q, L^{-}) \delta ^{+} L^{-}/2 \\ & \phantom{=:} \quad \ \ - \int _{y \in (\ell , \ell ^{+}) \cup (L^{-}, L)} h(q, x; y) \widetilde{w}(q, y) dy \\ & \phantom{=:} \quad \ \ + \frac{1}{2} \sum _{z \in \mathbb{S}_{n}^{-} \cap ( \ell , L^{-})} \!\! \bigg( h(q, x; z) \widetilde{w}(q, z) \frac{\mu (z)}{\sigma ^{2}(z) } \\ & \phantom{=:} \qquad \qquad \qquad \qquad \ \ - h(q, x; z^{+}) \widetilde{w}(q, z^{+}) \frac{\mu (z^{+})}{\sigma ^{2}(z^{+}) } \bigg) (\delta ^{+} z)^{2} \\ & \phantom{=:} \quad \ \ - \frac{1}{2} h(q, x; \ell ^{+}) \widetilde{w}(q, \ell ^{+}) \frac{\mu (\ell ^{+})}{\sigma ^{2}(\ell ^{+}) } (\delta ^{-}\ell ^{+})^{2} \\ & \phantom{=:} \quad \ \ + \frac{1}{2} h(q, x; L^{-}) \widetilde{w}(q, L^{-}) \frac{\mu (L^{-})}{\sigma ^{2}(L^{-}) } (\delta ^{+}L^{-})^{2} \\ & \phantom{=:} \quad \ \ + \mathcal{O}(L^{+} - L) + \mathcal{O}(\delta _{n}^{ \gamma}) \\ &= \sum _{z \in \mathbb{S}_{n} \cap (\ell , L^{-}), [z, z^{+}] \cap \mathcal{D} = \emptyset} \max _{y \in [z, z^{+}]} \partial _{yy} \big( h(q, x; \cdot ) \widetilde{w}(q, \cdot ) \big) (y) \mathcal{O}( \delta _{n}^{3} ) \\ & \phantom{=:} \qquad \qquad \ + \sum _{z \in \mathbb{S}_{n} \cap (\ell , L^{-}), [z, z^{+}] \cap \mathcal{D} \ne \emptyset} \max _{y \in [z, z^{+}]} \partial _{y} \big( h(q, x; \cdot ) \widetilde{w}(q, \cdot ) \big) (y) \mathcal{O}( \delta _{n}^{2} ) \\ & \phantom{=:} \qquad \qquad \ + \mathcal{O}(L^{+} - L) + \mathcal{O}( \delta _{n}^{\gamma}) \\ &=\mathcal{O}(L^{+} - L) + \mathcal{O}(\delta _{n}^{\gamma}). \end{aligned}$$

In the second-to-last equality, we use the error estimate of the trapezoidal rule and the smoothness of \(h(q, x; z)\) and \(\widetilde{w}(q, z)\). □
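The two per-cell error orders invoked in the last display can be seen in a few lines of computation. The integrands below are hypothetical stand-ins for \(h(q, x; \cdot ) \widetilde{w}(q, \cdot )\): one is \(C^{2}\), the other has a kink playing the role of a point of \(\mathcal{D}\); both cell integrals are evaluated in closed form for comparison.

```python
import numpy as np

# Per-cell trapezoidal errors: O(delta^3) on a cell where the integrand is C^2,
# and only O(delta^2) on a cell that straddles a kink (a point of D).
for delta in (1e-2, 5e-3, 2.5e-3):
    # smooth cell: g(y) = sin(3y) on [a, a + delta]
    a = 0.3
    trap  = 0.5 * (np.sin(3 * a) + np.sin(3 * (a + delta))) * delta
    exact = (np.cos(3 * a) - np.cos(3 * (a + delta))) / 3.0
    err_smooth = abs(trap - exact)
    # kink cell: g(y) = |y - 0.5| on a cell containing 0.5, where g' jumps
    a = 0.5 - delta / 3.0
    trap  = 0.5 * (abs(a - 0.5) + abs(a + delta - 0.5)) * delta
    exact = ((0.5 - a) ** 2 + (a + delta - 0.5) ** 2) / 2.0
    err_kink = abs(trap - exact)
    print(f"delta={delta:.4g}:  smooth cell error={err_smooth:.2e}  kink cell error={err_kink:.2e}")
```

Halving \(\delta\) divides the smooth-cell error by roughly 8 and the kink-cell error by roughly 4, matching the orders used above.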

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, G., Li, L. A general approach for Parisian stopping times under Markov processes. Finance Stoch 27, 769–829 (2023). https://doi.org/10.1007/s00780-023-00505-1
