
Modeling Fast Diffusion Processes in Time Integration of Stiff Stochastic Differential Equations

  • Original Paper
  • Published in Communications on Applied Mathematics and Computation

Abstract

Numerical algorithms for stiff stochastic differential equations are developed using linear approximations of the fast diffusion processes, under the assumption that the fast and slow processes decouple. Three numerical schemes are proposed, all based on the linearized formulation but with different degrees of approximation. The schemes are of complexity comparable to the classical explicit Euler-Maruyama scheme, yet achieve better accuracy at larger time steps in stiff systems. A convergence analysis is carried out for one of the schemes, showing that it has a strong convergence order of 1/2 and a weak convergence order of 1. The approximations leading to the other two schemes are discussed. Numerical experiments examine the convergence of the proposed schemes on model problems.
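The strong convergence order of 1/2 quoted above can be checked empirically. The sketch below uses only the classical explicit Euler-Maruyama scheme, not the schemes proposed in the paper; the test SDE \(\mathrm{d}X = \mu X\,\mathrm{d}t + \sigma X\,\mathrm{d}W\), its parameter values, and the path counts are illustrative assumptions:

```python
# A minimal, self-contained check of strong order 1/2 for the classical explicit
# Euler-Maruyama scheme. The test SDE, its parameters, and the path counts are
# illustrative choices, not taken from the paper.
# Test SDE: dX = mu*X dt + sigma*X dW, with exact solution
#   X(t) = x0 * exp((mu - sigma^2/2) t + sigma W(t)).
import math
import random

def strong_error(dt, n_paths=2000, T=1.0, mu=-1.0, sigma=0.5, x0=1.0, seed=0):
    """Monte Carlo estimate of the strong endpoint error E|X(T) - x_N|."""
    rng = random.Random(seed)
    n_steps = round(T / dt)
    err = 0.0
    for _ in range(n_paths):
        x, w = x0, 0.0
        for _ in range(n_steps):
            dw = rng.gauss(0.0, math.sqrt(dt))
            x += mu * x * dt + sigma * x * dw  # Euler-Maruyama update
            w += dw                            # same Brownian path for the exact solution
        err += abs(x - x0 * math.exp((mu - 0.5 * sigma**2) * T + sigma * w))
    return err / n_paths

# Estimate the convergence order as the log-log slope of error versus dt.
dts = [0.1, 0.05, 0.025, 0.0125]
errs = [strong_error(dt) for dt in dts]
order = math.log(errs[0] / errs[-1]) / math.log(dts[0] / dts[-1])
print(f"estimated strong order: {order:.2f}")  # should land near the theoretical 1/2
```

Halving the step size should shrink the strong error by roughly \(\sqrt{2}\), which is what the fitted slope reflects.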


Notes

  1. We make the following choices: \(\beta =(0.05,0.05,5\times 10^{-7})\) for the linear (\(p=1\)) case, and \((0.04,0.04,5\times 10^{-4})\) for the quadratic (\(p=2\)) case.


Acknowledgements

The authors acknowledge technical discussions with Prof. Mauro Valorani of Sapienza University of Rome, which helped with the development of ideas. This work was partially supported by the Simons Foundation (Collaboration Grants for Mathematicians No. 419717), and by the US Department of Energy (DOE), Office of Basic Energy Sciences (BES), Division of Chemical Sciences, Geosciences, and Biosciences. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA-0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.

The authors would also like to thank the anonymous referees for their insightful and constructive comments, which have significantly improved the quality of the manuscript.

Author information

Corresponding author

Correspondence to Xiaoying Han.

Ethics declarations

Conflict of Interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Appendix A Details of Proofs in Convergence Analysis

Throughout the analysis in this section, \(c_T\) and \(C_T\) denote generic constants that depend on T but not on \(\Delta t\), and that may change from line to line.

1.1 A.1 Proof of Lemma 1

We first consider the case with \(p=2\). Note that due to (5) and the Itô isometry we have

$$\begin{aligned} {\mathbb {E}}\left[ \left| {\hat{X}}(s) - x_{n_s}\right| ^2\right]\leqslant & {} M {\mathbb {E}}\left[ \sum ^M_{k=1} \Big (\int ^s_{t_{n_s}} g_k({\hat{X}}(\tau )) \mathrm {d}W_k(\tau )\Big )^2\right] \\\leqslant & {} M \sum ^M_{k=1} {\mathbb {E}}\left[ \int ^s_{t_{n_s}} g^2_k({\hat{X}}(\tau )) \mathrm {d}\tau \right] , \quad s \geqslant 0. \end{aligned}$$

Then by (41) we have

$$\begin{aligned} g^2_k({\hat{X}}(\tau )) \leqslant 2 \left( L^2_k \left| {\hat{X}}(\tau ) - x_{n_s}\right| ^2 + 2 g_k^2(0) + 2 L^2_k x_{n_s}^2\right) , \quad \tau \in [t_{n_s}, s), \end{aligned}$$

and thus due to Assumption (A2) we have

$$\begin{aligned}&\quad{\mathbb {E}}\left[ \left| {\hat{X}}(s) - x_{n_s}\right| ^2\right] \\&\leqslant 2 M \sum ^M_{k=1} \int ^s_{t_{n_s}} \left( L^2_k {\mathbb {E}}\left[ \left| {\hat{X}}(\tau ) - x_{n_s}\right| ^2\right] +2 g_k^2(0) + 2 L^2_k {\mathbb {E}}\left[ x_{n_s}^2\right] \right) \mathrm {d}\tau \\&\leqslant 2 M \sum ^M_{k=1}L^2_k \int ^s_{t_{n_s}} {\mathbb {E}}\left[ \left| {\hat{X}}(\tau ) - x_{n_s}\right| ^2\right] \mathrm {d}\tau + 4M \sum ^M_{k=1}\left( g^2_k(0) + L^2_k \Lambda _T \right) \Delta t \end{aligned}$$

for \(s \geqslant 0.\) It then follows from Gronwall’s inequality that

$$\begin{aligned} {\mathbb {E}}\left[ \left| {\hat{X}}(s) - x_{n_s}\right| ^2\right] \leqslant 4M \sum ^M_{k=1}\left( g^2_k(0) + L^2_k \Lambda _T \right) \Delta t \text{e}^{ 2 M \sum ^M_{k=1}L^2_k \Delta t} \leqslant c_T \Delta t \end{aligned}$$
(A1)

for \(s \geqslant 0\), where \(c_T\) is a constant that depends on M, \(\Lambda _T\), T, \(L_k\), and \(g_k(0)\) for \(k=1, \cdots , M\), but is independent of \(\Delta t\).

For \(p > 2\), using again (5), the Itô isometry, Hölder’s inequality, and the Cauchy-Schwarz inequality, we have

$$\begin{aligned} {\mathbb {E}}\left[ \left| {\hat{X}}(s) - x_{n_s}\right| ^p\right]\leqslant & {} c_{p, M} {\mathbb {E}}\left[ \sum ^M_{k=1} \Big (\int ^s_{t_{n_s}} g_k({\hat{X}}(\tau )) \mathrm {d}W_k(\tau )\Big )^p\right] \\\leqslant & {} c_{p, M} \sum ^M_{k=1} {\mathbb {E}}\left[ \left( \int ^s_{t_{n_s}} g^2_k({\hat{X}}(\tau )) \mathrm {d}\tau \right) ^{p/2}\right] \\\leqslant & {} \cdots \leqslant c_{p, M} (\Delta t)^{p/2-1} \sum ^M_{k=1} {\mathbb {E}}\left[ \int ^s_{t_{n_s}} g^p_k({\hat{X}}(\tau )) \mathrm {d}\tau \right] , \quad s \geqslant 0, \end{aligned}$$

where here and below \(c_{p,M}\) is a generic constant depending on p and M that may change from line to line. On the other hand by (41) we have

$$\begin{aligned} g^p_k({\hat{X}}(\tau ))\leqslant & {} c_{p, M}\left( L^2_k \left| {\hat{X}}(\tau ) - x_{n_s}\right| ^2 + 2 g_k^2(0) + 2 L^2_k x_{n_s}^2\right) ^{p/2} \\\leqslant & {} \cdots \\\leqslant & {} c_{p, M} \left( L^p_k \left| {\hat{X}}(\tau ) - x_{n_s}\right| ^p + g_k^p(0) + L^p_k x_{n_s}^p\right) , \end{aligned}$$

and thus

$$\begin{aligned}&\quad{\mathbb {E}}\left[ \left| {\hat{X}}(s) - x_{n_s}\right| ^p\right] \\& \leqslant c_{p, M} (\Delta t)^{p/2-1} \sum ^M_{k=1} \int ^s_{t_{n_s}} \left( L^p_k {\mathbb {E}}\left[ \left| {\hat{X}}(\tau ) - x_{n_s}\right| ^p\right] + g_k^p(0) + L^p_k {\mathbb {E}}\left[ x_{n_s}^p\right] \right) \mathrm {d}\tau \\& \leqslant c_{p, M} (\Delta t)^{p/2-1} \sum ^M_{k=1}L^p_k \int ^s_{t_{n_s}} {\mathbb {E}}\left[ \left| {\hat{X}}(\tau ) - x_{n_s}\right| ^p\right] \mathrm {d}\tau \\ &\quad + c_{p, M} (\Delta t)^{p/2} \sum ^M_{k=1}\left( g^p_k(0) + L^p_k \Lambda _T\right) . \end{aligned}$$

Similar to (A1), applying Gronwall’s inequality to the above estimate results in

$$\begin{aligned} {\mathbb {E}}\left[ \left| {\hat{X}}(s) - x_{n_s}\right| ^p\right] \leqslant c_{p, M} (\Delta t)^{p/2} \sum ^M_{k=1}\left( g^p_k(0) + L^p_k \Lambda _T\right) \text{e}^{(\Delta t)^{p/2} \sum ^M_{k=1}L^p_k }, \end{aligned}$$

which implies the desired assertion for every \(p \geqslant 2\) with

$$\begin{aligned} C_{p,T} = c_{p, M} \sum ^M_{k=1}\left( g^p_k(0) + L^p_k \Lambda _T\right) \text{e}^{ \sum ^M_{k=1}L^p_k }, \end{aligned}$$

together with the assumption, made without loss of generality, that \(\Delta t \leqslant 1\). The proof is complete.
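For the \(p=2\) case, the \(\Delta t\) scaling just proved can be illustrated with a single driftless linear diffusion, for which the one-step second moment is available in closed form. The snippet below is an illustration only; the diffusion \(\mathrm{d}X = \sigma X\,\mathrm{d}W\) and the values of \(\sigma\) and \(x_0\) are arbitrary choices, not from the paper:

```python
# Closed-form sanity check of the Lemma 1 bound E|X_hat(s) - x_{n_s}|^2 <= c_T * dt.
# For the illustrative driftless linear diffusion dX = sigma*X dW (sigma and x0
# are arbitrary choices), the one-step second moment is exact:
#   E|X(dt) - x0|^2 = x0^2 * (exp(sigma^2 * dt) - 1) = O(dt).
import math

def second_moment(dt, sigma=2.0, x0=1.5):
    """Exact value of E|X(dt) - x0|^2 for dX = sigma*X dW started at x0."""
    return x0**2 * (math.exp(sigma**2 * dt) - 1.0)

# The ratio E|X(dt) - x0|^2 / dt stays bounded as dt -> 0 and tends to
# x0^2 * sigma^2 = 9.0, consistent with the O(dt) bound in (A1).
ratios = [second_moment(dt) / dt for dt in (1e-2, 1e-3, 1e-4)]
print(ratios)
```

The bounded, monotonically decreasing ratios mirror the role of the constant \(c_T\) in (A1): the second moment is \(O(\Delta t)\) with a coefficient independent of \(\Delta t\).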

1.2 A.2 Proof of Lemma 2

First, by Hölder’s inequality and the Lipschitz condition on f,

$$\begin{aligned} \sup _{0 \leqslant t \leqslant T}{\mathcal {E}}_1^2(t)\leqslant & {} T \sup _{0 \leqslant t \leqslant T}\int ^t_0 \big \vert f(X(s))-f({\tilde{x}}(s))\big \vert ^2 \mathrm {d}s \\\leqslant & {} T L_f^2 \sup _{0 \leqslant t \leqslant T} \int ^t_0 \big \vert X(s)-{\tilde{x}}(s)\big \vert ^2 \mathrm {d}s . \end{aligned}$$

Taking the expectation of the above inequality and using Doob’s maximal inequality gives

$$\begin{aligned} {\mathbb {E}}\left[ \sup _{0 \leqslant t \leqslant T}{\mathcal {E}}_1^2(t)\right]\leqslant & {} 4 T L^2_f {\mathbb {E}}\left[ \int ^T_0 \big \vert X(s)-{\tilde{x}}(s)\big \vert ^2 \mathrm {d}s\right] \nonumber \\\leqslant & {} 4 T L^2_f \left( \int ^T_0 {\mathbb {E}}\Bigg [\sup _{0 \leqslant t \leqslant s} {\mathcal {E}}_0^2(t)\Bigg ] \mathrm {d}s +\int ^T_0 {\mathbb {E}}\left[ \big \vert y(s)-{\tilde{x}}(s)\big \vert ^2 \right] \mathrm {d}s\right) . \end{aligned}$$
(A2)

Using (42) to express the term \(y(s)-{\tilde{x}}(s)\) and squaring it gives

$$\begin{aligned} \vert y(s)-{\tilde{x}}(s)\vert ^2\leqslant & {} (m+1) \left( f^2(x_{n_s}) (\Delta t)^2 + \sum ^m_{k=M+1} g^2_k(x_{n_s}) \left( W_k(s) - W_k(t_{n_s})\right) ^2 \right) \\&+ (m+1) \sum ^M_{k=1} \left( \int ^s_{t_{n_s}}\Big ( g_k(x_{n_s}) + g^\prime _k(x_{n_s})({\hat{X}}(\tau ) - x_{n_s})\Big)\mathrm {d}W_k(\tau )\right) ^2 . \end{aligned}$$

Then taking the expectation of the above inequality, and using (41) and the Itô isometry, we deduce

$$\begin{aligned}&\quad \, {\mathbb {E}}\left[ \vert y(s)-{\tilde{x}}(s)\vert ^2\right] \nonumber \\&\leqslant 2 (m+1) \left( (\Delta t)^2 \big (f^2(0) + L^2_f {\mathbb {E}}[x_{n_s}^2]\big ) + \sum ^m_{k=M+1}{\mathbb {E}}\Big [\big (g_k^2(0) + L^2_k x_{n_s}^2 \big ) \big (W_k(s) - W_k(t_{n_s})\big )^2 \Big ]\right) \\& \quad + 2 (m+1) \sum ^M_{k=1} {\mathbb {E}}\left[ \int ^s_{t_{n_s}}\Big ( g^2_k(x_{n_s}) + (g^\prime _k(x_{n_s}))^2({\hat{X}}(\tau ) - x_{n_s})^2\Big ) \mathrm {d}\tau \right] \nonumber \\&\leqslant 2 (m+1) \left( (\Delta t)^2 \big (f^2(0) + L^2_f {\mathbb {E}}[x_{n_s}^2]\big ) + \sum ^m_{k=1}\Big (\big (g_k^2(0) + L^2_k {\mathbb {E}}[x_{n_s}^2] \big ) \Delta t \Big )\right) \nonumber \\& \quad + 2 (m+1) {\mathbb {E}}\left[ \int ^s_{t_{n_s}} (g^\prime _k(x_{n_s}))^2({\hat{X}}(\tau ) - x_{n_s})^2 \mathrm {d}\tau \right] . \end{aligned}$$
(A3)

Using Hölder’s inequality, Lemma 1, Assumptions (A2) and (A3), the last term of (A3) satisfies

$$\begin{aligned}&\qquad \,{\mathbb {E}}\left[ \int ^s_{t_{n_s}} (g^\prime _k(x_{n_s}))^2({\hat{X}}(\tau ) - x_{n_s})^2 \mathrm {d}\tau \right] \nonumber \\&\quad \leqslant \left( {\mathbb {E}}\big [ \big (g^\prime _k(x_{n_s})\big )^4 \big ]\right) ^{1/2} \left( {\mathbb {E}}\Big [ \Big (\int ^s_{t_{n_s}} ({\hat{X}}(\tau ) - x_{n_s})^2 \mathrm {d}\tau \Big )^2\Big ] \right) ^{1/2} \nonumber \\&\quad \leqslant \lambda ^2 \left( {\mathbb {E}}\big [ \big (1 + \vert x_{n_s}\vert ^ \gamma \big )^4\big ]\right) ^{1/2} \left( \Delta t \int ^s_{t_{n_s}} {\mathbb {E}}\Big [ ({\hat{X}}(\tau ) - x_{n_s})^4 \Big ] \mathrm {d}\tau \right) ^{1/2} \nonumber \\&\quad \leqslant c_T (\Delta t)^2, \end{aligned}$$
(A4)

where \(c_T\) depends on T, \(\lambda\), \(\gamma\), \(\Lambda _T\), \(L_k\), and \(g_k(0)\) for \(k=1, \cdots , m\), but is independent of \(\Delta t\).

Now inserting (A4) into (A3), using Assumption (A2), and integrating from 0 to T gives

$$\begin{aligned} \int ^T_0 {\mathbb {E}}\left[ \vert y(s)-{\tilde{x}}(s)\vert ^2\right] \mathrm {d}s\leqslant & {} 2(m+1) T \Bigg [ (\Delta t)^2 \big (f^2(0) + L^2_f \Lambda _T + c_T \big ) \nonumber \\&+ \Delta t \sum ^m_{k=1}\Big ( g_k^2(0) + L^2_k \Lambda _T \Big )\Bigg ]. \end{aligned}$$
(A5)

Consequently, (A2) can be further estimated to satisfy

$$\begin{aligned} {\mathbb {E}}\left[ \sup _{0 \leqslant t \leqslant T}{\mathcal {E}}_1^2(t)\right] \leqslant 4T L^2_f \int ^T_0 {\mathbb {E}}\Bigg [\sup _{0 \leqslant t \leqslant s} {\mathcal {E}}_0^2(t)\Bigg ] \mathrm {d}s + 2(m+1) T^2 L^2_f\left( (\Delta t)^2 c_1 + \Delta t c_2 \right) \end{aligned}$$

with \(c_1 = f^2(0) + L^2_f \Lambda _T + c_T\) and \(c_2 = \sum ^m_{k=1} \big (g_k^2(0) + L^2_k \Lambda _T\big )\). Setting

$$C_T = 2(m+1) T^2 L^2_f \max \{c_1, c_2\}$$

implies the desired assertion. The proof is complete.

1.3 A.3 Proof of Lemma 3

First, by the Cauchy-Schwarz inequality, Doob’s martingale maximal inequality, Itô’s isometry, and Assumption (A1), we have

$$\begin{aligned} {\mathbb {E}}\left[ \sup _{0 \leqslant t \leqslant T}{\mathcal {E}}_2^2(t)\right]&\leqslant (m-M) {\mathbb {E}}\left[ \sup _{0 \leqslant t \leqslant T} \sum ^m_{k=M+1} \left| \int ^t_0 \big (g_k(X(s)) - g_k({\tilde{x}}(s))\big ) \mathrm {d}W_k(s)\right| ^2 \right] \nonumber \\&\leqslant 4(m-M) {\mathbb {E}}\left[ \sum ^m_{k=M+1} \left| \int ^T_0 \big (g_k(X(s)) - g_k({\tilde{x}}(s))\big ) \mathrm {d}W_k(s)\right| ^2\right] \nonumber \\&= 4(m-M) {\mathbb {E}}\left[ \sum ^m_{k=M+1} \int ^T_0 \big (g_k(X(s)) - g_k({\tilde{x}}(s))\big )^2 \mathrm {d}s \right] \nonumber \\&\leqslant 4(m-M) \sum ^m_{k=M+1} L_{k}^2 {\mathbb {E}}\left[ \int ^T_0 \big \vert X(s)-{\tilde{x}}(s)\big \vert ^2 \mathrm {d}s\right] . \end{aligned}$$

Similar to (A2) in Lemma 2,

$$\begin{aligned} {\mathbb {E}}\left[ \int ^T_0 \big \vert X(s)-{\tilde{x}}(s)\big \vert ^2 \mathrm {d}s\right] \leqslant \int ^T_0 {\mathbb {E}}\Bigg [\sup _{0 \leqslant t \leqslant s} {\mathcal {E}}_0^2(t)\Bigg ] \mathrm {d}s + {\mathbb {E}}\left[ \int ^T_0 \big \vert y(s)-{\tilde{x}}(s)\big \vert ^2 \mathrm {d}s \right] \end{aligned}$$

and then it follows from the estimate (A5) that

$$\begin{aligned} {\mathbb {E}}\left[ \int ^T_0 \big \vert X(s)-{\tilde{x}}(s)\big \vert ^2 \mathrm {d}s\right]\leqslant & {} \int ^T_0 {\mathbb {E}}\Bigg [\sup _{0 \leqslant t \leqslant s} {\mathcal {E}}_0^2(t)\Bigg ] \mathrm {d}s \\&+ 2(m+1) T \sum ^m_{k=M+1} L_{k}^2\left( (\Delta t)^2 c_1 + \Delta t c_2 \right) , \end{aligned}$$

where \(c_1\) and \(c_2\) are the same as in Lemma 2. Setting \(C_T = 4(m-M)(2m+1)T \sum ^m_{k=M+1} L_{k}^2 \max \{c_1, c_2\}\) implies the desired assertion. The proof is complete.

1.4 A.4 Proof of Lemma 4

First, it follows from the Cauchy-Schwarz inequality, Doob’s martingale maximal inequality, and Itô’s isometry that

$$\begin{aligned} {\mathbb {E}}\left[ \sup _{0 \leqslant t \leqslant T}{\mathcal {E}}_3^2(t)\right] \leqslant 4 M \sum ^M_{k=1} \int ^T_0 {\mathbb {E}}\left[ \Big ( g_k(X(s)) - g_k({\tilde{x}}(s)) - g^\prime _k({\tilde{x}}(s))({\hat{X}}(s) - {\tilde{x}}(s))\Big )^2 \right] \mathrm {d}s . \end{aligned}$$
(A6)

Note that by the mean value theorem, and then by Assumption (A4), for every \(s \geqslant 0\) there exists \(y_s\) between \({\hat{X}}(s)\) and \(x_{n_s}\) such that

$$\begin{aligned}&\quad \left| g_k({\hat{X}}(s)) - g_k(x_{n_s} ) - g^\prime _k({\tilde{x}}(s))({\hat{X}}(s) - x_{n_s}) \right| \nonumber \\&= \left| \big (g^{\prime }(y_s) - g^\prime _k(x_{n_s}) \big ) ({\hat{X}}(s) - x_{n_s}) \right| \leqslant D_k ({\hat{X}}(s) - x_{n_s}) ^2. \end{aligned}$$
(A7)

Writing \(g_k(X(s)) - g_k({\tilde{x}}(s))\) in (A6) as \(g_k(X(s)) - g_k({\hat{X}}(s)) +g_k({\hat{X}}(s)) - g_k(x_{n_s})\), it then follows from (A7), Lemma 1, and Assumption (A1) that

$$\begin{aligned} {\mathbb {E}}\left[ \sup _{0 \leqslant t \leqslant T}{\mathcal {E}}_3^2(t)\right]&\leqslant 8 M \sum ^M_{k=1} \left( \int ^T_0 {\mathbb {E}}\Big [\big ( g_k(X(s)) - g_k({\hat{X}}(s)) \big )^2 \Big ] \mathrm {d}s \right. \nonumber \\&\quad \left. + D_k^2\int ^T_0{\mathbb {E}}\Big [({\hat{X}}(s) - x_{n_s}) ^4\Big ] \mathrm {d}s \right) \nonumber \\&\leqslant 8 M \sum ^M_{k=1} \Bigg ( L_k^2 \int ^T_0 {\mathbb {E}}\left[ \big (X(s) - {\hat{X}}(s)\big )^2 \right] \mathrm {d}s \nonumber \\&\quad + D_k^2 T C_{4,T} (\Delta t)^2 \Bigg ), \end{aligned}$$
(A8)

where \(C_{4,T}\) is the constant in Lemma 1 which is independent of \(\Delta t\).

It remains to estimate \({\mathbb {E}}\left[ \big \vert X(s) - {\hat{X}}(s)\big \vert ^2 \right]\). In fact, by (1) and (5), the Cauchy-Schwarz inequality, and Hölder’s inequality, we have

$$\begin{aligned}&\quad\, {\mathbb {E}}\left[ \vert X(s) - {\hat{X}}(s)\big \vert ^2\right] \nonumber \\&\leqslant (m+1) {\mathbb {E}}\left[ \Bigg \vert \int ^s_{t_{n_s}} f(X(\tau )) \mathrm {d}\tau \Bigg \vert ^2 \right] \nonumber \\& \quad \,+ (m+1) \sum ^m_{k=M+1}{\mathbb {E}}\left[ \Bigg \vert \int ^s_{t_{n_s}} g_k(X(\tau )) \mathrm {d}W_k(\tau ) \Bigg \vert ^2 \right] \nonumber \\& \quad + (m+1) \sum ^M_{k=1}{\mathbb {E}}\left[ \Bigg \vert \int ^s_{t_{n_s}} \big (g_k(X(\tau ))-g_k({\hat{X}}(\tau ))\big ) \mathrm {d}W_k(\tau ) \Bigg \vert ^2\right] \nonumber \\& \leqslant (m+1) \left( \Delta t \int ^s_{t_{n_s}} {\mathbb {E}}\big [ f^2(X(\tau )) \big ]\mathrm {d}\tau + \sum ^m_{k=M+1} \int ^s_{t_{n_s}} {\mathbb {E}}\big [ g^2_k(X(\tau )) \big ] \mathrm {d}\tau \right) \nonumber \\& \quad + (m+1) \sum ^M_{k=1}\int ^s_{t_{n_s}} {\mathbb {E}}\big [(g_k(X(\tau ))-g_k({\hat{X}}(\tau )))^2 \big ] \mathrm {d}\tau . \end{aligned}$$
(A9)

By Assumption (A1), \(f^2(X(\tau )) \leqslant 2 f^2(x_{n_s}) + 2 L^2_f \vert X(\tau ) - x_{n_s}\vert ^2\) and

$$g^2_k(X(\tau )) \leqslant 2 g_k^2(x_{n_s}) + 2 L^2_k \vert X(\tau ) - x_{n_s}\vert ^2$$

and thus

$$\begin{aligned} \int ^s_{t_{n_s}} {\mathbb {E}}\big [ f^2(X(\tau )) \big ]\mathrm {d}\tau\leqslant & {} 2\Delta t \left( 2 f^2(0) + 2L^2_f \Lambda _T + L^2_f {\mathbb {E}}\left[ \sup _{0 \leqslant t \leqslant s} {\mathcal {E}}^2_0(t)\right] \right) , \quad \end{aligned}$$
(A10)
$$\begin{aligned} \int ^s_{t_{n_s}} {\mathbb {E}}\big [ g_k^2(X(\tau )) \big ]\mathrm {d}\tau\leqslant & {} 2\Delta t \left( 2 g_k^2(0) + 2L^2_k \Lambda _T + L^2_k {\mathbb {E}}\left[ \sup _{0 \leqslant t \leqslant s} {\mathcal {E}}^2_0(t)\right] \right) . \quad \end{aligned}$$
(A11)

Inserting (A10)–(A11) into (A9) and using Assumption (A1) again we obtain

$$\begin{aligned} {\mathbb {E}}\left[ \vert X(s) - {\hat{X}}(s)\vert ^2\right]\leqslant & {} c_3 \,{\mathbb {E}}\left[ \sup _{0 \leqslant t \leqslant s} {\mathcal {E}}^2_0(t)\right] + c_4 \\&+ (m+1) \sum ^M_{k=1} L_k^2 \int ^s_{t_{n_s}} {\mathbb {E}}\big [\vert X(\tau ) - {\hat{X}}(\tau )\vert ^2\big ] \mathrm {d}\tau , \end{aligned}$$

where due to Assumption (A2)

$$\begin{aligned} c_3= & {} 2(m+1) \Delta t \left( \Delta t L_f^2 + \sum ^m_{k=M+1} L_k^2 \right) , \\ c_4= & {} 2(m+1) \Delta t \left( \Delta t + 2 \Delta t( f^2(0) + L^2_f \Lambda _T) + 2 \sum ^m_{k=M+1}(g^2_k(0) + L^2_k \Lambda _T )\right) . \end{aligned}$$

It then follows from Gronwall’s inequality that

$$\begin{aligned} {\mathbb {E}}\left[ \vert X(s) - {\hat{X}}(s)\vert ^2\right]\leqslant & {} c_3 \,{\mathbb {E}}\left[ \sup _{0 \leqslant t \leqslant s} {\mathcal {E}}^2_0(t)\right] + c_4 \nonumber \\&+ (m+1) \sum ^M_{k=1} L_k^2 \int ^s_{t_{n_s}} \left( c_3 \,{\mathbb {E}}\Big [ \sup _{0 \leqslant t \leqslant \tau } {\mathcal {E}}^2_0(t)\Big ] + c_4\right) \text{e}^{(m+1) \sum ^M_{k=1} L_k^2 (s - \tau )} \mathrm {d}\tau \nonumber \\\leqslant & {} \left( c_3 \,{\mathbb {E}}\left[ \sup _{0 \leqslant t \leqslant s} {\mathcal {E}}^2_0(t)\right] + c_4 \right) \text{e}^{(m+1) \sum ^M_{k=1} L_k^2 \Delta t} \nonumber \\\leqslant & {} c \Delta t \,{\mathbb {E}}\left[ \sup _{0 \leqslant t \leqslant s} {\mathcal {E}}^2_0(t)\right] + c_T \Delta t, \end{aligned}$$
(A12)

where \(c = 2(m+1) (\Delta t L^2_f + \sum _{k=M+1}^m L^2_k)\text{e}^{(m+1) \sum ^M_{k=1} L_k^2 }\) and \(c_T\) is a generic constant dependent on \(\Lambda _T\), m, \(f^2(0)\), \(g_k^2(0)\), \(L_f^2\), and \(L_k^2\), but independent of \(\Delta t\). Finally, inserting (A12) into (A8) results in the desired assertion by setting

$$\begin{aligned} C= 16 M (m+1) \left( \Delta t L^2_f + \sum _{k=M+1}^m L^2_k \right) \text{e}^{(m+1) \sum ^M_{k=1} L_k^2 } \sum _{k=1}^M L_k^2 . \end{aligned}$$
(A13)

The proof is complete.

1.5 A.5 Proof of Lemma 6

Consider the integral representation of the piecewise continuous process \(\eta (t)\):

$$\begin{aligned} \eta (t) = \phi _n(t) \sum _{k=1}^M \left( b_{k,n} \left( - a_{k,n} \int ^t_{t_n} \mathrm {d}s + \int ^t_{t_n} \mathrm {d}W_k(s) \right) \right) , \quad t \in [t_n, t_{n+1}), \end{aligned}$$
(A14)

where \(\phi _n(t)\) is defined in (24). Then

$$\begin{aligned} \vert \eta (t) - Y_n(t)\vert= & {} \phi _n(t) \sum _{k=1}^M \left| b_{k,n} \left( - a_{k,n} \int ^t_{t_n} (\phi ^{-1}_n(s) - 1) \mathrm {d}s \right. \right. \\&+ \left. \left. \int ^t_{t_n} (\phi ^{-1}_n(s) - 1) \mathrm {d}W_k(s) \right) \right| , \quad t \in [t_n, t_{n+1}), \end{aligned}$$

and it follows from the Cauchy-Bunyakovsky-Schwarz inequality and Hölder’s inequality that

$$\begin{aligned} \Big ({\mathbb {E}}\vert \eta (t) - Y_n(t)\vert \Big )^2\leqslant & {} 2 {\mathbb {E}}\left[ \phi ^2_n(t)\right] {\mathbb {E}}\left[ \sum ^M_{k=1} \left| b_{k,n} \left( - a_{k,n} \int ^t_{t_n} (\phi ^{-1}_n(s) - 1) \mathrm {d}s \right. \right. \right. \nonumber \\&\left. \left. \left. + \int ^t_{t_n} (\phi ^{-1}_n(s) - 1) \mathrm {d}W_k(s) \right) \right| ^2 \right] \nonumber \\\leqslant & {} 2 M {\mathbb {E}}\left[ \phi ^2_n(t)\right] \sum ^M_{k=1} {\mathbb {E}}\left[ b^2_{k,n} \left( a_{k,n}^2 \Delta t \int ^t_{t_n} (\phi ^{-1}_n(s) - 1)^2 \mathrm {d}s \right. \right. \nonumber \\&\left. \left. + \Bigg ( \int ^t_{t_n} (\phi ^{-1}_n(s) - 1) \mathrm {d}W_k(s) \Bigg )^2\right) \right] . \end{aligned}$$
(A15)

Noting that at each t, \(\phi _n(t)\) follows a log-normal distribution, i.e.,

$$\begin{aligned} \ln \phi _n(t) \sim {\mathcal {N}}\left( -\frac{1}{2} \sum _{k=1}^M a_{k,n}^2(t-t_n), \sum _{k=1}^M a_{k,n}^2(t-t_n)\right) , \end{aligned}$$
(A16)

by Assumptions (A2) and (A3) the first term on the right-hand side of (A15) satisfies

$$\begin{aligned} {\mathbb {E}}\left[ \phi ^2_n(t)\right]= & {} \text{e}^{\sum _{k=1}^M a_{k,n}^2(t-t_n)} \leqslant 1 + \sum _{k=1}^M {\mathbb {E}}\left[ (g^\prime _k(x_n))^2\right] \Delta t + {\mathcal {O}}(\Delta t^2) \nonumber \\\leqslant & {} 1 + \lambda ^2 \sum _{k=1}^M \left( {\mathbb {E}}[(1 +\vert x_n\vert ^\gamma )^4]\right) ^{1/2} \Delta t + {\mathcal {O}}(\Delta t^2) \leqslant c. \end{aligned}$$
(A17)
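The log-normal moment computation in (A16)–(A17) admits a quick numerical sanity check. The sketch below is an illustration only, not part of the proof; the scalar `s` stands in for \(\sum _{k=1}^M a_{k,n}^2(t-t_n)\), and its value is arbitrary. With \(\ln \phi \sim {\mathcal {N}}(-s/2, s)\), the log-normal moment formula gives \({\mathbb {E}}[\phi ] = 1\) and \({\mathbb {E}}[\phi ^2] = \text{e}^{s}\), matching the first equality in (A17).

```python
import numpy as np

# Monte Carlo check of the log-normal moments used in (A17).
# ln(phi) ~ N(-s/2, s)  =>  E[phi] = exp(-s/2 + s/2) = 1,
#                           E[phi^2] = exp(-s + 2s) = exp(s).
rng = np.random.default_rng(0)
s = 0.3  # stands in for sum_k a_{k,n}^2 (t - t_n); arbitrary test value
phi = np.exp(rng.normal(-0.5 * s, np.sqrt(s), size=1_000_000))

mean_phi = phi.mean()            # should be close to 1
second_moment = (phi**2).mean()  # should be close to exp(s)
exact = np.exp(s)
```

The martingale property \({\mathbb {E}}[\phi ] = 1\) is exactly the mean-variance pairing in (A16): the drift \(-s/2\) compensates the variance s.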

Then using (A16) again, the two integrals on the right-hand side of (A15) satisfy, respectively,

$$\begin{aligned} {\mathbb {E}}\left[ \int ^t_{t_n} (\phi ^{-1}_n(s) - 1)^2 \mathrm {d}s \right]\leqslant & {} 2 \int ^t_{t_n} {\mathbb {E}}\left[ \phi ^{-2}_n(s) + 1 \right] \mathrm {d}s \nonumber \\\leqslant & {} 2 (c + 1) \Delta t, \end{aligned}$$
(A18)
$$\begin{aligned} {\mathbb {E}}\left[ \Big ( \int ^t_{t_n} (\phi ^{-1}_n(s) - 1) \mathrm {d}W_k(s) \Big )^2 \right]= & {} {\mathbb {E}}\left[ \int ^t_{t_n} (\phi ^{-1}_n(s) - 1)^2 \mathrm {d}s\right] \nonumber \\\leqslant & {} 2(c+1) \Delta t. \end{aligned}$$
(A19)
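The equality in (A19) is the Itô isometry, which can also be checked on a toy integrand. The sketch below is illustrative only; the adapted integrand \(W(s)\) is a hypothetical stand-in for \(\phi ^{-1}_n(s) - 1\), chosen because \({\mathbb {E}}\big [\big (\int _0^T W \,\mathrm {d}W\big )^2\big ] = \int _0^T {\mathbb {E}}[W^2(s)]\,\mathrm {d}s = T^2/2\) is known in closed form.

```python
import numpy as np

# Left-point (Ito) discretization check of the isometry
# E[(int_0^T f dW)^2] = E[int_0^T f^2 ds] with f(s) = W(s).
rng = np.random.default_rng(0)
T, n, paths = 1.0, 100, 40_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), (paths, n))
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((paths, 1)), W[:, :-1]])  # adapted left endpoints

stoch = (W_left * dW).sum(axis=1)          # ~ int_0^T W dW per path
lhs = (stoch**2).mean()                    # E[(int W dW)^2]
rhs = (W_left**2 * dt).sum(axis=1).mean()  # E[int W^2 ds]
```

Both sides approximate \(T^2/2 = 0.5\) up to discretization and sampling error; the left-point rule is essential, since the isometry holds only for the Itô (non-anticipating) integral.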

Inserting (A17)–(A19) into (A15), and using again the boundedness of \({\mathbb {E}}[b_{k,n}^2]\) and \({\mathbb {E}}[a_{k,n}^2]\) implied by Assumptions (A1)–(A3), we obtain

$$\begin{aligned} \Big ({\mathbb {E}}[\vert \eta (t) - Y_n(t)\vert ]\Big )^2\leqslant & {} 2 M c \left\{ \Delta t {\mathbb {E}}[b_{k,n}^2] {\mathbb {E}}[a_{k,n}^2] {\mathbb {E}}\left[ \int ^t_{t_n} (\phi ^{-1}_n(s) - 1)^2 \mathrm {d}s \right] \right. \\&\left. + {\mathbb {E}}[b_{k,n}^2] {\mathbb {E}}\left[ \left( \int ^t_{t_n} (\phi ^{-1}_n(s) - 1) \mathrm {d}W_k(s) \right) ^2 \right] \right\} \nonumber \\\leqslant & {} 2 M c \left( (\Delta t)^2 + \Delta t \right) , \quad t \in [t_n, t_{n+1}), \,\, n = 0, \cdots , N, \end{aligned}$$

in which c is a generic constant, independent of \(\Delta t\), that may differ from line to line. It follows immediately that

$$\begin{aligned} \sup _{0 \leqslant t \leqslant T} {\mathbb {E}}[\vert \eta (t) - Y(t)\vert ] \leqslant C (\Delta t)^{1/2}, \end{aligned}$$

in which C is a constant depending on M, T, \(\Lambda _T\), \(L_k\), and \(D_k\), but independent of \(\Delta t\). The proof is complete.
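The \((\Delta t)^{1/2}\) rate above is the familiar strong order of an Euler-Maruyama-type discretization under multiplicative noise, and the scaling is easy to observe numerically. The sketch below is a generic illustration on geometric Brownian motion (not the scheme analyzed in this paper; all parameter values are arbitrary): it integrates the same refined Brownian paths at two step sizes and compares the strong errors against the known exact solution.

```python
import numpy as np

# dX = mu X dt + sigma X dW with exact solution
# X(T) = x0 exp((mu - sigma^2/2) T + sigma W(T)); EM has strong order 1/2.
rng = np.random.default_rng(1)
mu, sigma, x0, T = 1.0, 0.8, 1.0, 1.0
paths, n_fine = 20_000, 512
dW = rng.normal(0.0, np.sqrt(T / n_fine), (paths, n_fine))

def em_strong_error(level):
    """Strong EM error at step size 2**level * (T/n_fine), same Brownian paths."""
    n = n_fine // 2**level
    dWc = dW.reshape(paths, n, 2**level).sum(axis=2)  # coarsened increments
    dt = T / n
    x = np.full(paths, x0)
    for i in range(n):
        x = x + mu * x * dt + sigma * x * dWc[:, i]
    exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum(axis=1))
    return np.abs(x - exact).mean()

err_coarse = em_strong_error(4)  # dt = 1/32
err_fine = em_strong_error(1)    # dt = 1/256
```

Halving the step size by a factor of 8 should shrink the strong error by roughly \(\sqrt{8} \approx 2.8\), consistent with order 1/2.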

A.6 Proof of Theorem 2

Note that since \(x(t_n) = x_n\) for \(n = 0, 1, \cdots , N\), the weak discretization error satisfies

$$\begin{aligned} {{\mathfrak {E}}^w}&: =\big \vert {\mathbb {E}}[ \psi (x(T)) ] - {\mathbb {E}}[\psi (X(T))] \big \vert \\&\leqslant {} \big \vert {\mathbb {E}}[ \psi (y(T)) ] - {\mathbb {E}}[\psi (X(T))] \big \vert + \big \vert {\mathbb {E}}[ \psi (x(T)) ] - {\mathbb {E}}[\psi (y(T))] \big \vert , \end{aligned}$$

where x(t) and y(t) satisfy (33) and (39), respectively. Similar to the proof of strong convergence, we will first estimate

$$\begin{aligned} {{\mathfrak {E}}^w_1} := \big \vert {\mathbb {E}}[ \psi (y(T)) ] - {\mathbb {E}}[\psi (X(T))] \big \vert . \end{aligned}$$

To that end, let \(u(t, y)\) be a solution of the following Feynman-Kac partial differential equation:

$$\begin{aligned} u_t(t, y) + f(y)u_y(t, y) +\frac{1}{2} u_{yy}(t, y) \sum ^m_{k=1} g^2_k(y) = 0 \quad \text{ for } \, t\in [0, T],\,\, y \in {\mathbb {R}} \end{aligned}$$

with \(u(T, y) = \psi (y).\) Applying Itô’s formula to \(u(t, y(t))\), with y(t) satisfying (39), and using the above equation yields

$$\begin{aligned} \mathrm {d}u(t, y(t))= & {} \left( u_t(t, y(t)) + u_y(t, y(t)) f({\tilde{x}}(t)) \right) \mathrm {d}t \nonumber \\&+ \frac{1}{2} u_{yy}(t, y(t)) \left( \sum ^m_{k=M+1} g^2_k({\tilde{x}}(t)) \right. \\&\left. + \sum ^M_{k=1} \Big ( g_k({\tilde{x}}(t)) + g^\prime _k({\tilde{x}}(t))({\hat{X}}(t) - {\tilde{x}}(t))\Big )^2 \right) \mathrm {d}t \nonumber \\&+ u_y(t, y(t)) \left( \sum ^m_{k=M+1} g_k({\tilde{x}}(t)) \mathrm {d}W_k(t) \right. \\&\left. + \sum ^M_{k=1} \Big ( g_k({\tilde{x}}(t)) + g^\prime _k({\tilde{x}}(t))({\hat{X}}(t) - {\tilde{x}}(t))\Big ) \mathrm {d}W_k(t)\right), \\ \mathrm {d}u(t, y(t))= & {} \Bigg ( u_y(t, y(t)) \left( f({\tilde{x}}(t)) - f(y(t)) \right) \\&+ \frac{1}{2} u_{yy}(t, y(t)) \left( \sum ^m_{k=M+1} \Big (g_k^2({\tilde{x}}(t)) - g_k^2(y(t))\Big ) \right) \Bigg ) \mathrm {d}t \\&+ \frac{1}{2} u_{yy}(t, y(t)) \sum _{k=1}^M \Big ( \Big ( g_k({\tilde{x}}(t)) + g^\prime _k({\tilde{x}}(t))({\hat{X}}(t) - {\tilde{x}}(t))\Big )^2 \\&- g^2_k(y(t)) \Big ) \mathrm {d}t \nonumber \\&+ u_y(t, y(t)) \Bigg (\sum ^m_{k=M+1} g_k({\tilde{x}}(t)) \mathrm {d}W_k(t) \\&+ \sum ^M_{k=1} \Big ( g_k({\tilde{x}}(t)) + g^\prime _k({\tilde{x}}(t))({\hat{X}}(t) - {\tilde{x}}(t))\Big ) \mathrm {d}W_k(t)\Bigg ). \end{aligned}$$

Notice that, by the Feynman-Kac formula, we have \(u(0, x_0) = {\mathbb {E}}[\psi (X(T))]\). Then, integrating the above equation from 0 to T, using \(u(T, y(T)) = \psi (y(T))\), and taking the expectation of the resulting equation gives

$$\begin{aligned}&\quad \, {\mathbb {E}}[ \psi (y(T)) ] - {\mathbb {E}}[\psi (X(T))]\\& = {\mathbb {E}}\left[ \int ^T_0 \Bigg (u_y(t, y(t)) \left( f({\tilde{x}}(t)) - f(y(t)) \right) \right. \\& \quad \left. + \frac{1}{2} u_{yy}(t, y(t)) \sum ^m_{k=M+1} \Big (g_k^2({\tilde{x}}(t)) - g_k^2(y(t))\Big ) \Bigg ) \mathrm {d}t \right] \\& \quad + \frac{1}{2}{\mathbb {E}}\left[ \int ^T_0 u_{yy}(t, y(t)) \sum _{k=1}^M \left( \Big ( g_k({\tilde{x}}(t)) + g^\prime _k({\tilde{x}}(t))({\hat{X}}(t) - {\tilde{x}}(t))\Big )^2 - g^2_k(y(t)) \right) \mathrm {d}t \right] , \end{aligned}$$

which implies that

$$\begin{aligned} {{\mathfrak {E}}^w_1} \leqslant \int ^T_0 \left| {\mathbb {E}}[{\mathfrak {e}}_1(t, y(t))]\right| \mathrm {d}t + \int ^T_0 \left| {\mathbb {E}}[{\mathfrak {e}}_2(t, y(t))]\right| \mathrm {d}t +\int ^T_0 \left| {\mathbb {E}}[{\mathfrak {e}}_3(t, y(t))]\right| \mathrm {d}t , \end{aligned}$$
(A20)

where

$$\begin{aligned} {\mathfrak {e}}_1(t, y(t))= & {} u_y(t, y(t)) \left( f({\tilde{x}}(t)) - f(y(t)) \right) , \\ {\mathfrak {e}}_2(t, y(t))= & {} u_{yy}(t, y(t)) \sum ^m_{k=M+1} \Big (g_k^2({\tilde{x}}(t)) - g_k^2(y(t))\Big ), \\ {\mathfrak {e}}_3(t, y(t))= & {} u_{yy}(t, y(t)) \sum _{k=1}^M \left( \Big ( g_k({\tilde{x}}(t)) + g^\prime _k({\tilde{x}}(t))({\hat{X}}(t) - {\tilde{x}}(t))\Big )^2 - g^2_k(y(t)) \right) . \end{aligned}$$

Note that \({\mathfrak {e}}_1(t_n, y(t_n)) = {\mathfrak {e}}_2(t_n, y(t_n)) = {\mathfrak {e}}_3(t_n, y(t_n)) = 0.\) We next estimate each of \({\mathfrak {e}}_1\), \({\mathfrak {e}}_2\), and \({\mathfrak {e}}_3\).

First, apply the Itô formula to \({\mathfrak {e}}_1(t, y(t))\) to obtain

$$\begin{aligned} \mathrm {d}{\mathfrak {e}}_1(t, y(t))= & {} \left( \frac{\partial {\mathfrak {e}}_1}{\partial t} + \frac{\partial {\mathfrak {e}}_1}{\partial y} f({\tilde{x}}(t)) \right. \nonumber \\&+ \frac{1}{2} \frac{\partial ^2 {\mathfrak {e}}_1}{\partial y^2} \Big ( \sum ^m_{k=M+1} g^2_k({\tilde{x}}(t)) \nonumber \\&\left. + \sum ^M_{k=1} \big ( g_k({\tilde{x}}(t)) + g^\prime _k({\tilde{x}}(t))({\hat{X}}(t) - {\tilde{x}}(t))\big )^2 \Big ) \right) \mathrm {d}t \nonumber \\&+ \frac{\partial {\mathfrak {e}}_1}{\partial y} \sum _{k=1}^m {\mathcal {G}}_k \mathrm {d}W_k(t), \end{aligned}$$
(A21)

where

$$\begin{aligned} {\mathcal {G}}_k = \left\{ \begin{array}{ll} g_k({\tilde{x}}(t)) + g^\prime _k({\tilde{x}}(t))({\hat{X}}(t) - {\tilde{x}}(t)) &{} \,\, \text{ for }\,\, k=1, \cdots , M, \\ g_k({\tilde{x}}(t)) &{} \,\, \text{ for }\,\, k=M+1, \cdots , m . \end{array} \right. \end{aligned}$$
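For concreteness, the piecewise coefficient \({\mathcal {G}}_k\) transcribes directly into code. The helper below is a hypothetical sketch, not part of the paper's implementation; the containers `g` and `gprime`, holding callables for \(g_k\) and \(g^\prime _k\), are assumptions of this illustration.

```python
def calG(k, M, g, gprime, x_tilde, X_hat):
    """Piecewise diffusion coefficient: linearized around x_tilde for the
    fast channels k <= M, frozen at x_tilde for the remaining slow channels."""
    if k <= M:
        return g[k](x_tilde) + gprime[k](x_tilde) * (X_hat - x_tilde)
    return g[k](x_tilde)

# Hypothetical example with m = 2 channels, of which M = 1 is fast:
g = {1: lambda x: x**2, 2: lambda x: x}
gprime = {1: lambda x: 2.0 * x, 2: lambda x: 1.0}
fast = calG(1, 1, g, gprime, 2.0, 2.5)  # 2^2 + (2*2)*(2.5 - 2.0) = 6.0
slow = calG(2, 1, g, gprime, 2.0, 2.5)  # g_2 evaluated at x_tilde: 2.0
```

The fast branch is exactly the first-order Taylor expansion of \(g_k\) about \({\tilde{x}}(t)\) evaluated at \({\hat{X}}(t)\).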

Integrating (A21) from \(t_n\) to \(t \in [t_n, t_{n+1})\) using \({\mathfrak {e}}_1(t_n, y(t_n)) = 0\) then taking expectation of the resulting equation gives

$$\begin{aligned} {\mathbb {E}}[{\mathfrak {e}}_1(t, y(t))]= & {} \int ^t_{t_n} \Bigg \{{\mathbb {E}}\Bigg [ \frac{\partial {\mathfrak {e}}_1}{\partial t}(s, y(s)) \Bigg ] + {\mathbb {E}}\Bigg [ \frac{\partial {\mathfrak {e}}_1}{\partial y}(s, y(s)) f(x_n) \Bigg ] \\&+ \frac{1}{2} {\mathbb {E}}\left[ \frac{\partial ^2 {\mathfrak {e}}_1}{\partial y^2} \sum ^m_{k=M+1} g^2_k(x_n) \right] \Bigg \}\mathrm {d}s \nonumber \\&+ \frac{1}{2} \int ^t_{t_n} {\mathbb {E}}\left[ \frac{\partial ^2 {\mathfrak {e}}_1}{\partial y^2}\sum ^M_{k=1} \big ( g_k(x_n) + g^\prime _k(x_n)({\hat{X}}(s) - x_n)\big )^2\right] \mathrm {d}s. \end{aligned}$$

Then by the Cauchy inequality, the Itô isometry, and (5) we have

$$\begin{aligned} {\mathbb {E}}[{\mathfrak {e}}_1(t, y(t))]\leqslant & {} \int ^t_{t_n} \left\{ {\mathbb {E}}\Bigg [ \frac{\partial {\mathfrak {e}}_1}{\partial t}(s, y(s)) \Bigg ] + {\mathbb {E}}\Bigg [ \frac{\partial {\mathfrak {e}}_1}{\partial y}(s, y(s)) f(x_n) \Bigg ] \right. \\&+ \left. {\mathbb {E}}\left[ \frac{\partial ^2 {\mathfrak {e}}_1}{\partial y^2} \left( \sum ^m_{k=1} g^2_k(x_n) + \sum _{k=1}^M (g^\prime _k(x_n))^2 \int ^s_{t_n} \sum ^M_{j=1} g^2_j({\hat{X}}(\tau )) \mathrm {d}\tau \right) \right] \right\} \mathrm {d}s. \end{aligned}$$

For simplicity, assume that the functions f and \(g_k\) satisfy conditions under which all expectations appearing in the above inequality are bounded. Then there exists \(C_1 > 0\) such that

$$\begin{aligned} \vert {\mathbb {E}}[{\mathfrak {e}}_1(t, y(t))]\vert \leqslant C_1 \Delta t . \end{aligned}$$
(A22)

Following a similar analysis, we can obtain that there exist \(C_2 > 0\) and \(C_3 > 0\) such that

$$\begin{aligned} \vert {\mathbb {E}}[{\mathfrak {e}}_2(t, y(t))]\vert \leqslant C_2 \Delta t , \quad \vert {\mathbb {E}}[{\mathfrak {e}}_3(t, y(t))]\vert \leqslant C_3 \Delta t. \end{aligned}$$
(A23)

Finally, inserting (A22)–(A23) into (A20), we conclude that there exists \(C_T>0\) such that \({{\mathfrak {E}}^w_1} \leqslant C_T \Delta t\).

Note that in the FPM-LP scheme, x(t) is essentially an EM approximation of y(t), combined with an exponential approximation whose strong convergence order, implied by (45), is 3/2. It then follows immediately that there exists \(C_T>0\) such that the error

$$\begin{aligned} {{\mathfrak {E}}^w_2}: = \big \vert {\mathbb {E}}[ \psi (x(T)) ] - {\mathbb {E}}[\psi (y(T))] \big \vert \leqslant C_T \Delta t, \end{aligned}$$

which implies that the FPM-LP scheme has a weak convergence order of 1. The proof is complete.
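The weak order-1 behaviour can be illustrated without any sampling. For the generic linear test SDE \(\mathrm {d}X = \mu X \,\mathrm {d}t + \sigma X \,\mathrm {d}W\) (an illustration of EM weak error, not of the FPM-LP scheme; the parameter values below are arbitrary), the EM mean satisfies \({\mathbb {E}}[X_{n+1}] = (1+\mu \Delta t)\,{\mathbb {E}}[X_n]\) regardless of \(\sigma\), so for \(\psi (x) = x\) the weak error is exactly \(\vert x_0 \text{e}^{\mu T} - x_0(1+\mu \Delta t)^N\vert\). Halving \(\Delta t\) should then roughly halve the error:

```python
import math

# Exact weak error of EM for psi(x) = x on dX = mu X dt + sigma X dW:
# the noise does not affect the mean recursion, so no Monte Carlo is needed.
mu, x0, T = 1.0, 1.0, 1.0

def weak_err(n):
    dt = T / n
    return abs(x0 * math.exp(mu * T) - x0 * (1.0 + mu * dt) ** n)

ratio = weak_err(100) / weak_err(200)  # should be close to 2 for weak order 1
```

A ratio near 2 when the step count doubles is the signature of first-order weak convergence.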


Han, X., Najm, H.N. Modeling Fast Diffusion Processes in Time Integration of Stiff Stochastic Differential Equations. Commun. Appl. Math. Comput. 4, 1457–1493 (2022). https://doi.org/10.1007/s42967-022-00188-z
