
A Fourier Transform Method for Solving Backward Stochastic Differential Equations


Abstract

We propose a method based on the Fourier transform for numerically solving backward stochastic differential equations. Time discretization is applied to the forward equation of the state variable as well as the backward equation to yield a recursive system with terminal conditions. By assuming the integrability of the functions in the terminal conditions and applying truncation, the solutions of the system are shown to be integrable and we derive recursions in the Fourier space. The fractional FFT algorithm is applied to compute the Fourier and inverse Fourier transforms. We showcase the efficiency of our method through various numerical examples.


References

  • Applebaum D (2009) Lévy processes and stochastic calculus. Cambridge University Press, Cambridge
  • Bailey DH, Swarztrauber PN (1991) The fractional Fourier transform and applications. SIAM Rev 33(3):389–404
  • Bender C, Steiner J (2012) Least-squares Monte Carlo for backward SDEs. In: Numerical Methods in Finance. Springer, pp 257–289
  • Bertoin J (1996) Lévy processes, vol 121. Cambridge University Press, Cambridge
  • Bismut J-M (1973) Conjugate convex functions in optimal stochastic control. J Math Anal Appl 44(2):384–404
  • Bouchard B, Touzi N (2004) Discrete-time approximation and Monte Carlo simulation of backward stochastic differential equations. Stoch Process Appl 111(2):175–206
  • Carr P, Madan D (1999) Option valuation using the fast Fourier transform. J Comput Finance 2(4):61–73
  • Chen J, Fan L, Li L, Zhang G (2021) Sinc approximation of multidimensional Hilbert transform and its applications. Preprint. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3091664
  • Chourdakis K (2005) Option pricing using the fractional FFT. J Comput Finance 8(2):1–18
  • Douglas J, Ma J, Protter P (1996) Numerical methods for forward-backward stochastic differential equations. Ann Appl Probab 6(3):940–968
  • El Karoui N, Peng S, Quenez MC (1997) Backward stochastic differential equations in finance. Math Financ 7(1):1–71
  • Fang F, Oosterlee CW (2008) A novel pricing method for European options based on Fourier-cosine series expansions. SIAM J Sci Comput 31(2):826–848
  • Fang F, Oosterlee CW (2009) Pricing early-exercise and discrete barrier options by Fourier-cosine series expansions. Numer Math 114(1):27
  • Feng L, Linetsky V (2008) Pricing discretely monitored barrier options and defaultable bonds in Lévy process models: a fast Hilbert transform approach. Math Financ 18(3):337–384
  • Gobet E, Lemor J-P, Warin X (2005) A regression-based Monte Carlo method to solve backward stochastic differential equations. Ann Appl Probab 15(3):2172–2202
  • Gobet E, Turkedjiev P (2016) Linear regression MDP scheme for discrete backward stochastic differential equations under general conditions. Math Comput 85(299):1359–1391
  • Huijskens T, Ruijter MJ, Oosterlee CW (2016) Efficient numerical Fourier methods for coupled forward–backward SDEs. J Comput Appl Math 296:593–612
  • Hurd TR, Zhou Z (2010) A Fourier transform method for spread option pricing. SIAM J Financial Math 1(1):142–157
  • Iserles A (2009) A first course in the numerical analysis of differential equations, no 44. Cambridge University Press, Cambridge
  • Jackson KR, Jaimungal S, Surkov V (2008) Fourier space time-stepping for option pricing with Lévy models. J Comput Finance 12(2):1–29
  • Lemor J-P, Gobet E, Warin X (2006) Rate of convergence of an empirical regression method for solving generalized backward stochastic differential equations. Bernoulli 12(5):889–916
  • Linetsky V, Mendoza R (2010) Constant elasticity of variance (CEV) diffusion model. In: Encyclopedia of Quantitative Finance. Wiley
  • Longstaff FA, Schwartz ES (2001) Valuing American options by simulation: a simple least-squares approach. Rev Financ Stud 14(1):113–147
  • Lord R, Fang F, Bervoets F, Oosterlee CW (2008) A fast and accurate FFT-based method for pricing early-exercise options under Lévy processes. SIAM J Sci Comput 30(4):1678–1705
  • Ma J, Morel JM, Yong J (1999) Forward-backward stochastic differential equations and their applications. Springer, Berlin
  • Ma J, Shen J, Zhao Y (2008) On numerical approximations of forward-backward stochastic differential equations. SIAM J Numer Anal 46(5):2636–2661
  • Milstein GN, Tretyakov MV (2006) Numerical algorithms for forward-backward stochastic differential equations. SIAM J Sci Comput 28(2):561–582
  • Pardoux E, Peng S (1990) Adapted solution of a backward stochastic differential equation. Syst Control Lett 14(1):55–61
  • Pardoux E, Peng S (1992) Backward stochastic differential equations and quasilinear parabolic partial differential equations. In: Stochastic Partial Differential Equations and Their Applications. Springer, pp 200–217
  • Pham H (2009) Continuous-time stochastic control and optimization with financial applications. Springer, Berlin
  • Ruijter MJ, Oosterlee CW (2015a) A Fourier cosine method for an efficient computation of solutions to BSDEs. SIAM J Sci Comput 37(2):A859–A889
  • Ruijter MJ, Oosterlee CW (2015b) Supplementary materials to “A Fourier cosine method for an efficient computation of solutions to BSDEs”. Available from https://epubs.siam.org/doi/suppl/10.1137/130913183
  • Ruijter MJ, Oosterlee CW (2016) Numerical Fourier method and second-order Taylor scheme for backward SDEs in finance. Appl Numer Math 103:1–26
  • van der Have Z, Oosterlee CW (2018) The COS method for option valuation under the SABR dynamics. Int J Comput Math 95(2):444–464
  • Zhang J (2017) Backward stochastic differential equations: from linear to fully nonlinear theory. Springer, Berlin
  • Zhang J (2004) A numerical scheme for BSDEs. Ann Appl Probab 14(1):459–488
  • Zhao W, Li Y, Zhang G (2012) A generalized θ-scheme for solving backward stochastic differential equations. Discrete Contin Dyn Syst Ser B 17(5):1585–1603


Acknowledgements

This research was supported by Hong Kong Research Grant Council General Research Fund Grant No. 14203418. We thank the associate editor and the reviewer for their comments that led to substantial improvements in the paper.

Corresponding author

Correspondence to Lingfei Li.

Appendices

Appendix A: Proofs

Proof of Proposition 1

Consider the j-th component of \(\mathbb {E}_{m}^{\boldsymbol {x}}\left [v\left (t_{m+1}, \boldsymbol {X}_{m+1}^{\Delta }\right ) {\Delta } \boldsymbol {B}_{m+1}\right ]\), which can be calculated as follows:

$$ \begin{array}{@{}rcl@{}} &&\mathbb{E}_{m}^{\boldsymbol{x}}\left[v\left( t_{m+1},\boldsymbol{x}+\boldsymbol{\mu}{\Delta} t+\boldsymbol{\Sigma}{\Delta} \boldsymbol{B}_{m+1}\right) {\Delta} {B}_{m+1,j}\right]\\ &=&\frac{1}{(2\pi{\Delta} t)^{d/2}}{\int}_{\mathbb{R}^{d-1}}\exp\left( -\frac{\boldsymbol{\zeta}_{[j]}^{\intercal}\boldsymbol{\zeta}_{[j]}}{2{\Delta} t}\right)d\boldsymbol{\zeta}_{[j]}{\int}_{\mathbb{R}}v\left( t_{m+1},\boldsymbol{x}+\boldsymbol{\mu}{\Delta} t+\boldsymbol{\Sigma}\boldsymbol{\zeta}\right)\zeta_{j}\exp\left( -\frac{\zeta_{j}^{2}}{2{\Delta} t}\right)d\zeta_{j}\\ &=&\frac{{\Delta} t}{(2\pi{\Delta} t)^{d/2}}{\int}_{\mathbb{R}^{d-1}}\exp\left( -\frac{\boldsymbol{\zeta}_{[j]}^{\intercal}\boldsymbol{\zeta}_{[j]}}{2{\Delta} t}\right)d\boldsymbol{\zeta}_{[j]}\\ &&\times{\int}_{\mathbb{R}}\left( {\Sigma}_{1,j}\frac{\partial}{\partial x_{1}}+\cdots+{\Sigma}_{k,j}\frac{\partial}{\partial x_{k}} \right)v\left( t_{m+1},\boldsymbol{x}+\boldsymbol{\mu}{\Delta} t+\boldsymbol{\Sigma}\boldsymbol{\zeta}\right)\exp\left( -\frac{\zeta_{j}^{2}}{2{\Delta} t}\right)d\zeta_{j}\\ &=&{\Delta} t\boldsymbol{\tilde{\Sigma}}_{j,(\cdot)}\mathbb{E}_{m}^{\boldsymbol{x}}\left[ D_{\boldsymbol{x}}v\left( t_{m+1},\boldsymbol{X}^{\Delta}_{m+1}\right) \right], \end{array} $$

where ζ[j] is the vector ζ without the j-th coordinate and \(\boldsymbol {\tilde {\Sigma }}_{j,(\cdot )}\) is the j-th row vector of \(\boldsymbol {\tilde {\Sigma }}\). The second equality follows from integration by parts in ζj, using \(\zeta _{j}\exp (-\zeta _{j}^{2}/(2{\Delta } t))=-{\Delta } t\frac {\partial }{\partial \zeta _{j}}\exp (-\zeta _{j}^{2}/(2{\Delta } t))\) together with the chain rule. □
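To make the identity concrete, here is a quick Monte Carlo sanity check in one dimension (d = k = 1, where \(\boldsymbol{\tilde{\Sigma}}\) reduces to the scalar Σ). The test function, parameter values and sample size are illustrative choices of ours, not from the paper.

```python
import numpy as np

# Sanity check of Proposition 1 in 1D:
# E[v(x + mu*dt + Sigma*dB) * dB] = dt * Sigma * E[v'(x + mu*dt + Sigma*dB)].
rng = np.random.default_rng(0)
x, mu, Sigma, dt, n = 0.2, 0.1, 0.5, 0.01, 2_000_000
dB = rng.normal(0.0, np.sqrt(dt), n)      # Delta B ~ N(0, dt)
X = x + mu * dt + Sigma * dB
v, dv = np.sin, np.cos                    # smooth test function and its derivative
lhs = np.mean(v(X) * dB)
rhs = dt * Sigma * np.mean(dv(X))
print(lhs, rhs)                           # the two agree up to Monte Carlo error
```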

Proof of Theorem 2

We prove the following two properties by mathematical induction:

  1. (1)

    \(y^{m}(\boldsymbol {x})\in L^{1}(\mathbb {R}^{k})\), \(\boldsymbol {z}^{m}(\boldsymbol {x})\in L^{1}(\mathbb {R}^{k})\),

  2. (2)

    \(D_{\boldsymbol {x}}y^{m}(\boldsymbol {x})\in L^{1}(\mathbb {R}^{k})\),

for any m = M,M − 1,⋯ ,0. The results obviously hold for m = M in light of Assumption 2 and Eq. 3.1. Next, for any m = M − 1,⋯ ,1, we assume the above properties hold at m + 1. Let \(\mathcal {S}^{k}(K)\) denote the k-dimensional ball of radius K, and recall that fm+1(x) is truncated to 0 outside this ball (throughout the proof fm+1 denotes the truncated version). From Assumption 1 (3), fm+1(x) is continuous on the compact ball \(\mathcal {S}^{k}(K)\) and hence bounded there. Since fm+1(x) vanishes outside \(\mathcal {S}^{k}(K)\), both fm+1(x) and Dxfm+1(x) are integrable. Together we have

$$ y^{m+1}(\boldsymbol{x}), {z}^{m+1}(\boldsymbol{x}), f^{m+1}(\boldsymbol{x}), D_{\boldsymbol{x}}y^{m+1}(\boldsymbol{x}), D_{\boldsymbol{x}}f^{m+1}(\boldsymbol{x})\in L^{1}(\mathbb{R}^{k}). $$

Theorem 1 shows \(\mathcal {P}_{\Delta }\) preserves integrability. Thus, we have

$$ P_{\Delta}y^{m+1}(\boldsymbol{x}), P_{\Delta}\boldsymbol{z}^{m+1}(\boldsymbol{x}), P_{\Delta}f^{m+1}(\boldsymbol{x}), P_{\Delta} D_{\boldsymbol{x}}y^{m+1}(\boldsymbol{x}), P_{\Delta} D_{\boldsymbol{x}}f^{m+1}(\boldsymbol{x})\in L^{1}(\mathbb{R}^{k}). $$

Therefore, we have \({z}_{j}^{m}(\boldsymbol {x})\in L^{1}(\mathbb {R}^{k})\), since all the terms on the right-hand side of Eq. 3.2 are integrable. We next turn to Eq. 3.3, where

$$ y^{m}(\boldsymbol{x})=P_{\Delta}y^{m+1}(\boldsymbol{x})+{\Delta} t \theta_{1}f(t_{m},\boldsymbol{x},y^{m}(\boldsymbol{x}),\boldsymbol{z}^{m}(\boldsymbol{x}))+ {\Delta} t(1-\theta_{1})P_{\Delta}f^{m+1}(\boldsymbol{x}). $$

Starting from the initial guess \(y^{m,0}(\boldsymbol {x})=P_{\Delta }y^{m+1}(\boldsymbol {x})\in L^{1}(\mathbb {R}^{k})\), the Picard iteration Eq. 3.5 generates a sequence of functions {ym,p(x)}, which are also in \(L^{1}(\mathbb {R}^{k})\). Moreover,

$$ \begin{array}{@{}rcl@{}} &&{\int}_{\mathbb{R}^{k}}|y^{m,p+1}(\boldsymbol{x})-y^{m,p}(\boldsymbol{x})|d\boldsymbol{x}\\ &=&{\Delta} t\theta_{1} {\int}_{\mathbb{R}^{k}}| f(t_{m},\boldsymbol{x},y^{m,p}(\boldsymbol{x}),\boldsymbol{z}^{m}(\boldsymbol{x}))-f(t_{m},\boldsymbol{x},y^{m,p-1}(\boldsymbol{x}),\boldsymbol{z}^{m}(\boldsymbol{x})) |d\boldsymbol{x}\\ &\leq&{\Delta} t\theta_{1} L{\int}_{\mathbb{R}^{k}}|y^{m,p}(\boldsymbol{x})-y^{m,p-1}(\boldsymbol{x})|d\boldsymbol{x}, \end{array} $$

where the last inequality follows from Assumption 1, with L the Lipschitz constant of f. Since Δt𝜃1L < 1, {ym,p(x)} is a Cauchy sequence in \(L^{1}(\mathbb {R}^{k})\) and converges to a limit in \(L^{1}(\mathbb {R}^{k})\). The limit solves Eq. 3.3 and is thus the ym(x) we seek. This shows \(y^{m}(\boldsymbol {x})\in L^{1}(\mathbb {R}^{k})\).

Next we analyze Dxym(x). We have for i = 1,⋯ ,k,

$$ D_{\boldsymbol{x},i}y^{m}(\boldsymbol{x})=D_{\boldsymbol{x},i}\left( P_{\Delta}y^{m+1}(\boldsymbol{x})\right) +{\Delta} t \theta_{1}D_{\boldsymbol{x},i}f^{m}(\boldsymbol{x})+ {\Delta} t(1-\theta_{1})D_{\boldsymbol{x},i}\left( P_{\Delta}f^{m+1}(\boldsymbol{x})\right). $$

As fm(x) is truncated, Dx,ifm(x) is integrable. We next show that \(D_{\boldsymbol{x},i}\left(P_{\Delta}y^{m+1}(\boldsymbol{x})\right)\) is integrable; \(D_{\boldsymbol{x},i}\left(P_{\Delta}f^{m+1}(\boldsymbol{x})\right)\) can be treated in a similar way. Note that

$$ P_{\Delta}y^{m+1}(\boldsymbol{x})={\int}_{\mathbb{R}^{k}} y^{m+1}(\boldsymbol{\delta})p_{\Delta}(\boldsymbol{\delta};\boldsymbol{x})d\boldsymbol{\delta}, $$

where pΔ(δ; x) is the conditional probability density function for \(X^{\Delta }_{m+1}\) conditioned on \(X^{\Delta }_{m}=\boldsymbol {x}\) and it is given by

$$ {p_{\Delta}}(\boldsymbol{\delta};\boldsymbol{x})=\frac{1}{\sqrt{(2\pi)^{k}\det(\boldsymbol{\Sigma}\tilde{\boldsymbol{\Sigma}}{\Delta} t)}}\exp\left[ -\frac{1}{2}(\boldsymbol{\delta}-(\boldsymbol{x}+\boldsymbol{\mu}{\Delta} t))^{\intercal}(\boldsymbol{\Sigma}\tilde{\boldsymbol{\Sigma}}{\Delta} t)^{-1}(\boldsymbol{\delta}-(\boldsymbol{x}+\boldsymbol{\mu}{\Delta} t))\right]. $$

It is straightforward to show that

$$ \frac{\partial {p_{\Delta}}}{\partial \delta_{i}}(\boldsymbol{\delta};\boldsymbol{x})=-\frac{\partial {p_{\Delta}}}{\partial x_{i}}(\boldsymbol{\delta};\boldsymbol{x}). $$

Then,

$$ \begin{array}{@{}rcl@{}} D_{\boldsymbol{x},i}\left( P_{\Delta}y^{m+1}(\boldsymbol{x})\right) &=&{\int}_{\mathbb{R}^{k}} y^{m+1}(\boldsymbol{\delta})\frac{\partial {p_{\Delta}}}{\partial x_{i}}(\boldsymbol{\delta};\boldsymbol{x})d\boldsymbol{\delta}\\ &=&-{\int}_{\mathbb{R}^{k}} y^{m+1}(\boldsymbol{\delta})\frac{\partial {p_{\Delta}}}{\partial \delta_{i}}(\boldsymbol{\delta};\boldsymbol{x})d\boldsymbol{\delta}\\ &=&{\int}_{\mathbb{R}^{k-1}}\!\left[\!- y^{m+1}(\boldsymbol{\delta}){p_{\Delta}}(\boldsymbol{\delta};\boldsymbol{x})\big|_{\delta_{i}=-\infty}^{\infty} + D_{\boldsymbol{\delta},i}y^{m+1}(\boldsymbol{\delta}){p_{\Delta}}(\boldsymbol{\delta};\boldsymbol{x})d\delta_{i}\!\right]\!d\boldsymbol{\delta}_{[i]}\\ &=&{\int}_{\mathbb{R}^{k}} D_{\boldsymbol{\delta},i}y^{m+1}(\boldsymbol{\delta}){p_{\Delta}}(\boldsymbol{\delta};\boldsymbol{x})d\boldsymbol{\delta},\\ &=&P_{\Delta}\left( D_{\boldsymbol{x},i} y^{m+1}(\boldsymbol{x})\right), \end{array} $$
(A.1)

where δ[i] denotes the vector δ without the i-th coordinate. Since \(D_{\boldsymbol {x},i} y^{m+1}\in L^{1}(\mathbb {R}^{k})\) and PΔ preserves integrability, Eq. A.1 shows that \(D_{\boldsymbol {x},i}\left (P_{\Delta }y^{m+1}(\boldsymbol {x})\right )\) is also integrable. Hence \(D_{\boldsymbol {x},i}y^{m}(\boldsymbol {x})\in L^{1}(\mathbb {R}^{k})\), which completes the proof. □

Proof of Lemma 1

The first part is obtained in Bertoin (1996). For the second part, we have

$$ \mathcal{F}_{k}\left( P_{\Delta}D_{\boldsymbol{x},j}v(\boldsymbol{x})\right)(\boldsymbol{\xi})=\phi_{\Delta}(-\boldsymbol{\xi})\mathcal{F}_{k}\left( D_{\boldsymbol{x},j}v(\boldsymbol{x})\right)(\boldsymbol{\xi}). $$

Using integration by parts, we obtain

$$ \begin{array}{@{}rcl@{}} &&\mathcal{F}_{k}\left( D_{\boldsymbol{x},j}v(\boldsymbol{x})\right)(\boldsymbol{\xi})\\ &=&{\int}_{\mathbb{R}^{k-1}}e^{i\boldsymbol{\xi}^{\intercal}_{[j]} \boldsymbol{x}_{[j]}}{\int}_{\mathbb{R}} e^{i{\xi}_{j}{x}_{j}}D_{\boldsymbol{x},j}v(\boldsymbol{x})dx_{j}d\boldsymbol{x}_{[j]}\\ &=&{\int}_{\mathbb{R}^{k-1}}e^{i\boldsymbol{\xi}^{\intercal}_{[j]} \boldsymbol{x}_{[j]}} \left( e^{i\xi_{j} x_{j}}v(\boldsymbol{x})\bigg|_{x_{j}=-\infty}^{\infty}-i\xi_{j}{\int}_{\mathbb{R}}e^{i\xi_{j} x_{j}}v(\boldsymbol{x})dx_{j} \right) d\boldsymbol{x}_{[j]}, \end{array} $$

where x[j] is x without the j-th coordinate, and similarly for ξ[j]. Since \(v(\boldsymbol {x})\in L^{1}(\mathbb {R}^{k})\), the boundary term in the parentheses vanishes, and we obtain the result in the statement. □

Proof of Theorem 4

We prove the result by mathematical induction. At m = M, since \(\boldsymbol {\tilde {\Sigma }}(\boldsymbol {x})\) is Lipschitz continuous by Assumption 1 and truncated outside the k-dimensional ball \(\mathcal {S}^{k}(K)\), it is easy to see that yM(x) and zM(x) are integrable. Next, for any m = M − 1,⋯ ,1, we assume that \(y^{m+1}(\boldsymbol {x})\in L^{1}(\mathbb {R}^{k})\) and \({\boldsymbol {z}}^{m+1}(\boldsymbol {x})\in L^{1}(\mathbb {R}^{k})\). Unlike in Theorem 2, we no longer have \(D_{\boldsymbol {x}}y^{m+1}(\boldsymbol {x})\in L^{1}(\mathbb {R}^{k})\) in this general setting, so below we use a different argument to prove that PΔDxym+1(x) is bounded.

Note that the weak derivative Dx,jym+1(x) is in \(L^{1}_{loc}(\mathbb {R}^{k})\), and since ym+1(x) is integrable, we can find a sufficiently large region Ω such that |Dx,jym+1(x)| < C for any x ∈ Ωc. We have

$$ \begin{array}{@{}rcl@{}} |P_{\Delta}D_{\boldsymbol{x},j}y^{m+1}(\boldsymbol{x})|&=&\left|{\int}_{\mathbb{R}^{k}} D_{\boldsymbol{x},j}y^{m+1}(\boldsymbol{\delta})p_{\Delta}(\boldsymbol{\delta};\boldsymbol{x})d\boldsymbol{\delta}\right|\\ &\leq& \left|{\int}_{\Omega}D_{\boldsymbol{x},j}y^{m+1}(\boldsymbol{\delta})p_{\Delta}(\boldsymbol{\delta};\boldsymbol{x})d\boldsymbol{\delta}\right|+\left|{\int}_{{\Omega}^{\mathsf{c}}}D_{\boldsymbol{x},j}y^{m+1}(\boldsymbol{\delta})p_{\Delta}(\boldsymbol{\delta};\boldsymbol{x})d\boldsymbol{\delta}\right| \end{array} $$

where pΔ(δ; x) is the conditional probability density function of \(X^{\Delta }_{m+1}\) given \(X^{\Delta }_{m}=\boldsymbol {x}\), with state-dependent parameters μ(x) and Σ(x). Since Dx,jym+1(x) is in \(L^{1}_{loc}(\mathbb {R}^{k})\) and pΔ is a bounded normal density, the first term is bounded. Furthermore, |Dx,jym+1(x)| < C for any x ∈ Ωc and the integral of the density over Ωc is at most 1, so the second term is bounded as well. Consequently, PΔDx,jym+1(x) is bounded. Multiplying it by the truncated \(\boldsymbol {\tilde {\Sigma }}(\boldsymbol {x})\), we obtain \(\boldsymbol {\tilde {\Sigma }}(\boldsymbol {x})P_{\Delta }D_{\boldsymbol {x}}y^{m+1}(\boldsymbol {x}) \in L^{1}(\mathbb {R}^{k})\). As fm+1(x) is truncated, fm+1(x) and Dx,ifm+1(x) are in \(L^{1}(\mathbb {R}^{k})\). Using the fact that \(\mathcal {P}_{\Delta }\) preserves integrability, the induction assumption and Eq. 4.2, we obtain \(\boldsymbol {z}^{m}(\boldsymbol {x})\in L^{1}(\mathbb {R}^{k})\). The integrability of ym(x) follows from the same arguments as in Theorem 2. □

Appendix B: Fractional Fast Fourier Transform (FrFFT)

B.1 One-Dimensional Fractional Fast Fourier Transform

Consider calculating the integral numerically:

$$ \begin{array}{@{}rcl@{}} \hat{f}(\xi_{\tilde{n}})&=&{{\int}_{a}^{b}}\exp{(i\xi_{\tilde{n}}x)} f(x)dx\\ &\approx& h_{x} \sum\limits_{n=0}^{N-1}\exp(i\xi_{\tilde{n}}x_{n})f(x_{n}) \end{array} $$

where

$$ \begin{array}{@{}rcl@{}} x_{n}=a+ nh_{x}, \quad h_{x}&=\frac{b-a}{N}, \quad n=0,\cdots,N-1\\ \xi_{\tilde{n}}=\tilde{a}+ \tilde{n}h_{\xi}, \quad h_{\xi}&=\frac{\tilde{b}-\tilde{a}}{N}, \quad \tilde{n}=0,\cdots,N-1. \end{array} $$

Hence,

$$ \begin{array}{@{}rcl@{}} \hat{f}(\xi_{\tilde{n}})&\approx& h_{x} \sum\limits_{n=0}^{N-1}\exp(i(\tilde{a}+\tilde{n}h_{\xi})(a+nh_{x}))f(x_{n})\\ &=&h_{x} e^{ia\tilde{a}}e^{ia\tilde{n}h_{\xi}}\sum\limits_{n=0}^{N-1}e^{i\tilde{a}nh_{x}}e^{in\tilde{n}h_{\xi}h_{x}}f(x_{n}) \end{array} $$

Setting \(f^{*}_{n}=e^{i\tilde {a}nh_{x}}f(x_{n})\) and \(\theta =-\frac {h_{x}h_{\xi }}{2\pi }\), the equation above can be rewritten as follows:

$$ \hat{f}(\xi_{\tilde{n}})=h_{x} e^{ia\tilde{a}}e^{ia\tilde{n}h_{\xi}}\sum\limits_{n=0}^{N-1}e^{-2\pi in\tilde{n} \theta}f^{*}_{n}, \quad \tilde{n}=0,\cdots,N-1, $$

which is in the form of fractional discrete Fourier transform. The sum for all points on the ξ-grid can be computed efficiently using the fractional fast Fourier transform algorithm (see Bailey and Swarztrauber (1991)). We need to define a vector

$$ c=\left( e^{i\pi0^{2}\theta},e^{i\pi1^{2}\theta},\cdots,e^{i\pi(N-1)^{2}\theta},0,\cdots,0,e^{i\pi(N-1)^{2}\theta},\cdots e^{i\pi1^{2}\theta}\right)^{\intercal} $$

where c is an \(N^{*}\times 1\) vector and \(N^{*}\) is the smallest power of 2 such that \(N^{*}\geqslant 2N-1\). Denote

$$ \begin{array}{@{}rcl@{}} g_{n}&=&e^{-i\pi n^{2}\theta}f^{*}_{n},\quad n=0,\cdots,N-1\\ g^{*}&=&\left( g_{0},g_{1},\cdots,g_{N-1},0,0\cdots,0\right), \end{array} $$

where \(g^{*}\) is also an \(N^{*}\times 1\) vector. Then we may numerically calculate the Fourier transform \(\hat {f}\) by

$$ \hat{f}(\xi_{\tilde{n}})=h_{x}e^{ia\tilde{a}}e^{ia\tilde{n}h_{\xi}}e^{-i\pi \tilde{n}^{2}\theta}\left( \mathbb{F}^{-1}\left( \mathbb{F}(c)\cdot \mathbb{F}(g^{*}) \right) \right)_{\tilde{n}}, \quad \tilde{n}=0,1,\cdots,N-1, $$

which can reduce the computational complexity from \(\mathcal {O}(N^{2})\) to \(\mathcal {O}(N\log N)\).
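For concreteness, the following Python sketch implements the steps above with NumPy's FFT. The function names and the test grid are our own illustrative choices; the convolution structure follows Bailey and Swarztrauber (1991).

```python
import numpy as np

def frfft(fstar, theta):
    """Fractional discrete Fourier transform:
    G[k] = sum_{n=0}^{N-1} fstar[n] * exp(-2j*pi*n*k*theta), k = 0, ..., N-1,
    computed as a zero-padded circular convolution via three FFTs."""
    N = len(fstar)
    Nstar = 1 << int(np.ceil(np.log2(2 * N - 1)))   # first power of 2 >= 2N - 1
    n = np.arange(N)
    w = np.exp(1j * np.pi * n**2 * theta)           # e^{i*pi*n^2*theta}
    g = np.zeros(Nstar, dtype=complex)
    g[:N] = fstar / w                               # g_n = e^{-i*pi*n^2*theta} f*_n
    c = np.zeros(Nstar, dtype=complex)
    c[:N] = w                                       # the vector c defined above ...
    c[Nstar - N + 1:] = w[:0:-1]                    # ... with its wrap-around tail
    conv = np.fft.ifft(np.fft.fft(c) * np.fft.fft(g))
    return conv[:N] / w                             # restore the factor e^{-i*pi*k^2*theta}

def frfft_grid(fvals, a, b, atil, btil):
    """Approximate hat_f(xi) = int_a^b exp(1j*xi*x) f(x) dx on the grid
    xi = atil + k*h_xi, given samples fvals[n] = f(a + n*h_x)."""
    N = len(fvals)
    hx, hxi = (b - a) / N, (btil - atil) / N
    n = np.arange(N)
    theta = -hx * hxi / (2 * np.pi)
    fstar = np.exp(1j * atil * n * hx) * fvals      # f*_n
    return hx * np.exp(1j * a * (atil + n * hxi)) * frfft(fstar, theta)

# Example: for f(x) = exp(-x^2/2), hat_f(xi) is close to sqrt(2*pi)*exp(-xi^2/2).
x = -10 + 20 / 256 * np.arange(256)
fhat = frfft_grid(np.exp(-x**2 / 2), -10.0, 10.0, -5.0, 5.0)
```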

B.2 Multidimensional Fractional Fast Fourier Transform

Consider the k-dimensional integral

$$ \begin{array}{@{}rcl@{}} \hat{f}(\boldsymbol{\xi}_{\boldsymbol{\tilde{n}}})&=&{\int}_{[\boldsymbol{a},\boldsymbol{b}]}e^{i\boldsymbol{\xi}_{\boldsymbol{\tilde{n}}}^{\intercal}\boldsymbol{x}}f(\boldsymbol{x})d\boldsymbol{x}\\ &\approx& h_{x_{1}}\cdots h_{x_{k}}\sum\limits_{n_{1}=0}^{N-1}\cdots\sum\limits_{n_{k}=0}^{N-1}e^{i(\xi^{1}_{\tilde{n}_{1}}x^{1}_{n_{1}}+\cdots+\xi^{k}_{\tilde{n}_{k}}x^{k}_{n_{k}})}f(x^{1}_{n_{1}},\cdots,x^{k}_{n_{k}}), \end{array} $$

where

$$ \begin{array}{@{}rcl@{}} x^{i}_{n_{i}}=a_{i}+ n_{i}h_{x_{i}}, \quad h_{x_{i}}&=\frac{b_{i}-a_{i}}{N}, \quad n_{i}=0,\cdots,N-1, \\ \xi^{i}_{\tilde{n}_{i}}=\tilde{a}_{i}+ \tilde{n}_{i}h_{\xi_{i}}, \quad h_{\xi_{i}}&=\frac{\tilde{b}_{i}-\tilde{a}_{i}}{N}, \quad \tilde{n}_{i}=0,\cdots,N-1. \end{array} $$

To evaluate this k-dimensional sum, we can proceed as follows.

$$ \hat{f}(\xi^{1}_{\tilde{n}_{1}},\cdots,\xi^{k}_{\tilde{n}_{k}})\approx h_{x_{1}}\sum\limits_{n_{1}=0}^{N-1}e^{i\xi^{1}_{\tilde{n}_{1}}x^{1}_{n_{1}}}g(x^{1}_{n_{1}},\xi^{2}_{\tilde{n}_{2}},\cdots,\xi^{k}_{\tilde{n}_{k}}), $$
(B.1)

where

$$ g(x^{1}_{n_{1}},\xi^{2}_{\tilde{n}_{2}},\cdots,\xi^{k}_{\tilde{n}_{k}})=h_{x_{2}}{\cdots} h_{x_{k}}\sum\limits_{n_{2}=0}^{N-1}\cdots\sum\limits_{n_{k}=0}^{N-1}e^{i(\xi^{2}_{\tilde{n}_{2}}x^{2}_{n_{2}}+\cdots+\xi^{k}_{\tilde{n}_{k}}x^{k}_{n_{k}})}f(x^{1}_{n_{1}},x^{2}_{n_{2}},\cdots,x^{k}_{n_{k}}). $$
(B.2)

If we are given \(g(x^{1}_{n_{1}},\xi ^{2}_{\tilde {n}_{2}},\cdots ,\xi ^{k}_{\tilde {n}_{k}})\), we can apply the one-dimensional FrFFT to compute Eq. B.1. In turn, Eq. B.2 is a (k − 1)-dimensional sum, which we can break down into a one-dimensional sum of functions defined by a (k − 2)-dimensional sum of the same form. Repeating this procedure reduces the multidimensional problem to a sequence of one-dimensional problems, each of which can be computed by the one-dimensional FrFFT; a code sketch follows the complexity count below.

Let \(\mathcal {O}(F_{k})\) be the computational complexity of the k-dimensional problem (k ≥ 2). Then we have the following recursion:

$$ \mathcal{O}(F_{k})=N\mathcal{O}(F_{k-1})+N^{k-1}\mathcal{O}(N\log(N)). $$

Since \(\mathcal {O}(F_{1})=\mathcal {O}(N\log (N))\), we obtain \(\mathcal {O}(F_{k})=\mathcal {O}(N^{k}\log (N))\).
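A minimal sketch of this dimension-by-dimension reduction, reusing the one-dimensional frfft_grid from the sketch in B.1 (array and argument names are illustrative):

```python
def frfft_nd(fvals, a, b, atil, btil):
    """k-dimensional transform: sweep the 1D FrFFT along each axis in turn,
    mirroring the reduction from Eq. B.1 to Eq. B.2. The arguments a, b,
    atil, btil are length-k sequences of grid endpoints."""
    out = np.asarray(fvals, dtype=complex)
    for axis in range(out.ndim):
        out = np.apply_along_axis(
            lambda v, ax=axis: frfft_grid(v, a[ax], b[ax], atil[ax], btil[ax]),
            axis, out)
    return out

# Example for k = 2: frfft_nd(F, (-10, -10), (10, 10), (-5, -5), (5, 5))
# costs O(N^2 log N), matching the recursion above.
```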

Appendix C: Least Squares Monte Carlo (LSM) Regression

We simulate Xt using the Euler scheme. Using the approximations in Bouchard and Touzi (2004) and Zhang et al. (2004), we have a backward scheme given by:

$$ \begin{array}{@{}rcl@{}} Y_{M}^{\Delta}&=&g\left( \boldsymbol{X}_{M}^{\Delta}\right), \end{array} $$
(C.1)
$$ \begin{array}{@{}rcl@{}} Z_{m,j}^{\Delta}&=&\frac{1}{\Delta t} \mathbb{E}_{m}\left[Y_{m+1}^{\Delta} {\Delta} {B}_{m+1,j}\right], \quad j=1,2,\cdots,d, \end{array} $$
(C.2)
$$ \begin{array}{@{}rcl@{}} Y_{m}^{\Delta}&=&\mathbb{E}_{m}\left[Y_{m+1}^{\Delta}+{\Delta} t f({t_{m}},\boldsymbol{X}_{m}^{\Delta},Y_{{m+1}}^{\Delta},\boldsymbol{Z}_{{m}}^{\Delta}) \right],\ \end{array} $$
(C.3)

where m = M − 1,…, 0. As \(\boldsymbol {X}_{{m}}^{\Delta }\) is Markovian, there exist functions \(y_{m}^{\Delta }(\boldsymbol {x})\) and \(z_{m,j}^{\Delta }(\boldsymbol {x})\) such that

$$ Y_{m}^{\Delta}=y_{m}^{\Delta}(\boldsymbol{X}_{m}^{\Delta}), Z_{m,j}^{\Delta}=z_{m,j}^{\Delta}(\boldsymbol{X}_{m}^{\Delta}), \quad j=1,\cdots,d. $$

The conditional expectations can be computed by the least squares Monte Carlo method. Consider d + 1 sets of basis functions:

$$ \boldsymbol{\eta}_{0}(m, \boldsymbol{x})=\left( \eta_{0,1}(m, \boldsymbol{x}), \ldots, \eta_{0, K}(m, \boldsymbol{x})\right)^{\intercal} $$
(C.4)

for estimating \(y_{m}^{\Delta }(\boldsymbol {x})\) and

$$ \boldsymbol{\eta}_{j}(m, \boldsymbol{x})=\left( \eta_{j,1}(m, \boldsymbol{x}), \ldots, \eta_{j, K}(m, \boldsymbol{x})\right)^{\intercal},\quad j=1,\cdots, d $$
(C.5)

for estimating \(z_{m,j}^{\Delta }(\boldsymbol {x})\). After generating L independent sample paths \(({\Delta }{\boldsymbol {B}}_{m,\lambda }, {\boldsymbol {X}}^{\Delta }_{m,\lambda })\), m = 1,⋯ ,M, λ = 1,⋯ ,L, we can follow Eqs. C.1, C.2 and C.3 to approximate \(y_{m}^{\Delta }(\boldsymbol {x})\) and \(z_{m,j}^{\Delta }(\boldsymbol {x})\) as

$$ \begin{array}{@{}rcl@{}} \tilde{y}_{M}^{\Delta, K, L}(\boldsymbol{x})&=&g(\boldsymbol{x}),\\ \tilde{z}_{m,j}^{\Delta, K, L}(\boldsymbol{x})&=&\boldsymbol{\eta}_{j}(m, \boldsymbol{x})^{\intercal} \boldsymbol{\alpha}_{m,j}^{\Delta, K, L}, \quad j=1, \ldots, d, \end{array} $$
(C.6)
$$ \begin{array}{@{}rcl@{}} \tilde{y}_{m}^{\Delta, K, L}(\boldsymbol{x})&=&\boldsymbol{\eta}_{0}(m, \boldsymbol{x})^{\intercal} \boldsymbol{\alpha}_{m, 0}^{\Delta, K, L}, \end{array} $$
(C.7)

for m = M − 1,…,0, where

$$ \begin{array}{@{}rcl@{}} \boldsymbol{\alpha}_{m, j}^{\Delta, K, L}&=&\arg \min\limits_{\boldsymbol{\alpha} \in \mathbb{R}^{K}} \frac{1}{L} \sum\limits_{\lambda=1}^{L}\left[\boldsymbol{\eta}_{j}\left( m, {\boldsymbol{X}}^{\Delta}_{m,\lambda}\right)^{\intercal} \boldsymbol{\alpha}-\frac{\Delta{B}_{m+1,j,\lambda}}{\Delta t} \tilde{y}_{m+1}^{\Delta, K, L}\left( {\boldsymbol{X}}_{m+1,\lambda}^{\Delta}\right)\right]^{2} \end{array} $$
(C.8)
$$ \begin{array}{@{}rcl@{}} \boldsymbol{\alpha}_{m, 0}^{\Delta, K, L}&=&\arg \min\limits_{\boldsymbol{\alpha} \in \mathbb{R}^{K}} \frac{1}{L} \sum\limits_{\lambda=1}^{L}\left[\boldsymbol{\eta}_{0}\left( m, {\boldsymbol{X}}_{m,\lambda}^{\Delta}\right)^{\intercal} \boldsymbol{\alpha}-\tilde{y}_{m+1}^{\Delta, K, L}\left( {\boldsymbol{X}}_{m+1,\lambda}^{\Delta}\right)\right.\\ &&\left.-{\Delta} tf\left( t_{m}, {\boldsymbol{X}}_{m,\lambda}^{\Delta}, \tilde{y}_{m+1}^{\Delta, K, L}\left( {\boldsymbol{X}}_{m+1,\lambda}^{\Delta}\right), \tilde{\boldsymbol{z}}_{m}^{\Delta, K, L}\left( {\boldsymbol{X}}_{m,\lambda}^{\Delta}\right)\right) \right]^{2}. \end{array} $$
(C.9)

In general, there are various ways to choose the basis functions in Eqs. C.4 and C.5. Here, we consider the hypercube indicator basis functions used in Gobet et al. (2005). We choose a domain \(H\subset \mathbb {R}^{k}\) centered on x0 = (x0,1,⋯ ,x0,k) as

$$ H=[x_{0,1}-q_{1},x_{0,1}+q_{1}]\times\cdots\times[x_{0,k}-q_{k},x_{0,k}+q_{k}] $$

for some positive numbers q1,⋯ ,qk. Then we divide this domain H into K equal hypercubes, i.e., H = ∪k= 1,⋯ ,KHk. Thus we have the following indicator basis functions:

$$ \eta_{j,k}(m, \boldsymbol{x})=\mathbf{1}_{\{\boldsymbol{x}\in H_{k}\}}, $$

where j = 0, 1,⋯ ,d and k = 1,⋯ ,K. We summarize the algorithm as follows:

[Algorithm 3: the LSM regression scheme]

In our implementation, we follow Lemor et al. (2006) to set the parameters:

$$ M=2\sqrt{2}^{J-1},\ L=2\sqrt{2}^{3(J-1)},\ K=(2q/\delta)^{k},\ \delta=C_{d}/\sqrt{2}^{J-1}, $$
(C.10)

where J is any positive integer and Cd is a constant; we simply set q1 = ⋯ = qk = q. We also run Algorithm 3 \(\mathcal {R}\) times and average the results to obtain more accurate approximations of the solutions. The computation time grows linearly in MLK.
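For concreteness, here is a minimal one-dimensional sketch of one backward step of the scheme Eqs. C.2 and C.3 with interval indicator basis functions. The function names, the use of numpy.linalg.lstsq and the treatment of out-of-domain samples are our own illustrative choices rather than the exact implementation.

```python
import numpy as np

def lsm_backward_step(tm, dt, X_m, dB, y_mp1, f, edges):
    """One backward LSM step (k = d = 1). X_m: samples of X at t_m;
    dB: Brownian increments over (t_m, t_{m+1}]; y_mp1: simulated Y at t_{m+1};
    f: driver, assumed vectorized; edges: cell boundaries, e.g.
    np.linspace(x0 - q, x0 + q, K + 1)."""
    K = len(edges) - 1
    # indicator basis: one row per sample, one column per cell H_k;
    # samples outside [edges[0], edges[-1]] get an all-zero row
    A = (np.searchsorted(edges, X_m)[:, None] - 1 == np.arange(K)).astype(float)
    # regression (C.8) for z_m with target (dB / dt) * y_{m+1}
    alpha_z, *_ = np.linalg.lstsq(A, dB / dt * y_mp1, rcond=None)
    z_m = A @ alpha_z
    # regression (C.9) for y_m with target y_{m+1} + dt * f(t_m, X_m, y_{m+1}, z_m)
    alpha_y, *_ = np.linalg.lstsq(A, y_mp1 + dt * f(tm, X_m, y_mp1, z_m), rcond=None)
    return A @ alpha_y, z_m
```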

Appendix D: The Backward COS Method

We summarize the BCOS method of Ruijter and Oosterlee (2015a) for the one-dimensional problem, which is the case considered in their paper. Although the results can be extended to multiple dimensions, we refrain from presenting the extension, as it involves heavier notation and is not needed for our numerical examples. Let [a,b] be a sufficiently large interval. Using the COS method, the conditional expectations are approximated as:

$$ \begin{array}{@{}rcl@{}} \mathbb{E}_{m}^{x}\left[v\left( t_{m+1}, X_{m+1}^{\Delta}\right)\right]&\approx&\frac{b-a}{2} {\sum\limits_{k=0}^{N-1}}{\prime} \mathcal{V}_{k}\left( t_{m+1}\right) {\Phi}_{k, m}(x),\\ \mathbb{E}_{m}^{x}\left[v\left( t_{m+1}, X_{m+1}^{\Delta}\right) {\Delta} {B}_{m+1}\right] &=& \sigma\left( t_{m}, x\right) {\Delta} t \mathbb{E}_{m}^{x}\left[D_{x} v\left( t_{m+1}, X_{m+1}^{\Delta}\right)\right]\\ &\approx& \sigma\left( t_{m}, x\right) {\Delta} t \frac{b-a}{2} \sum\limits_{k=0}^{N-1}{\prime} \mathcal{V}_{k}\left( t_{m+1}\right) {\Phi}_{k,m}^{\prime}(x), \end{array} $$

where

$$ \begin{array}{@{}rcl@{}} \mathcal{V}_{k}\left( t_{m+1}\right) &:=&\frac{2}{b-a} {{\int}_{a}^{b}} v\left( t_{m+1}, x \right) \cos \left( k \pi \frac{x-a}{b-a}\right) d x,\\ {\Phi}_{k,m}(x)&:=&\frac{2}{b-a} \Re\left( \phi_{\Delta}\left( \frac{k \pi}{b-a} ; x\right) e^{i k \pi \frac{x-a}{b-a}}\right), \end{array} $$
(D.1)

\({\Phi }_{k,m}^{\prime }(x)\) is the derivative of Φk,m(x), \({\sum }{\prime }\) indicates that the first term of the summation is weighted by 1/2, ℜ(⋅) denotes the real part of a complex number, and ϕΔ(⋅; x) is the conditional characteristic function of \(X^{\Delta }_{m+1}-X^{\Delta }_{m}\) given \(X^{\Delta }_{m}=x\).
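For intuition, the following sketch evaluates the first approximation above for a Gaussian step with constant μ and σ (the setting of Appendix D.1). The function v, the parameters and the interval [a,b] are hypothetical choices for illustration.

```python
import numpy as np

mu, sigma, dt, a, b, N = 0.1, 0.3, 0.01, -4.0, 4.0, 128
v = lambda y: np.maximum(np.exp(y) - 1.0, 0.0)   # illustrative v(t_{m+1}, .)

k = np.arange(N)
u = k * np.pi / (b - a)
x_grid = a + (k + 0.5) * (b - a) / N             # midpoint grid, as in Eq. D.8

# cosine coefficients V_k(t_{m+1}) by the midpoint rule
V = 2.0 / N * np.cos(np.outer(u, x_grid - a)) @ v(x_grid)

def cond_expectation(x):
    """E_m^x[v(t_{m+1}, X_{m+1})] ~ ((b-a)/2) * sum' V_k * Phi_{k,m}(x)."""
    phi = np.exp(1j * u * mu * dt - 0.5 * u**2 * sigma**2 * dt)  # char. fn of increment
    terms = V * np.real(phi * np.exp(1j * u * (x - a)))
    return terms.sum() - 0.5 * terms[0]          # sum' halves the k = 0 term
```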

The COS series approximations can be applied to compute the conditional expectations in Eqs. 2.3–2.5, for which we need \(\mathcal {Y}_{k}\left (t_{m+1}\right )\), \(\mathcal {Z}_{k}\left (t_{m+1}\right )\) and \(\mathcal {F}_{k}\left (t_{m+1}\right )\), defined as

$$ \begin{array}{@{}rcl@{}} \!\!\!\!\!\!\!\mathcal{Y}_{k}\left( t_{m+1}\right)& = &\frac{2}{b-a} {{\int}_{a}^{b}} y\left( t_{m+1}, x\right) \cos \left( k \pi \frac{x-a}{b-a}\right) d x, \end{array} $$
(D.2)
$$ \begin{array}{@{}rcl@{}} \!\!\!\!\!\!\!\mathcal{Z}_{k}\left( t_{m+1}\right)& = &\frac{2}{b-a} {{\int}_{a}^{b}} z\left( t_{m+1}, x\right) \cos \left( k \pi \frac{x-a}{b-a}\right) d x, \end{array} $$
(D.3)
$$ \begin{array}{@{}rcl@{}} \!\!\!\!\!\!\!\mathcal{F}_{k}\left( t_{m+1}\right)& = &\frac{2}{b-a} {{\int}_{a}^{b}}\! f\left( t_{m+1}, x, y\left( t_{m+1}, x\right), z\left( t_{m+1}, x\right)\right) \cos \left( k \pi \frac{x - a}{b - a}\right) d x. \end{array} $$
(D.4)

Thus, from Eq. 2.4, z(tm,x) can be approximated as

$$ \begin{array}{@{}rcl@{}} z\left( t_{m}, x\right) &\approx&-\frac{1-\theta_{2}}{\theta_{2}} \frac{b-a}{2} \sum\limits_{k=0}^{N-1}{\prime} \mathcal{Z}_{k}\left( t_{m+1}\right) {\Phi}_{k,m}(x) \\ &&+\frac{b-a}{2} \sum\limits_{k=0}^{N-1}{\prime}\left( \frac{1}{\Delta t \theta_{2}} \mathcal{Y}_{k}\left( t_{m+1}\right)+\frac{1-\theta_{2}}{\theta_{2}} \mathcal{F}_{k}\left( t_{m+1}\right)\right) \sigma\left( t_{m}, x\right) {\Delta} t {\Phi}_{k,m}^{\prime}(x). \end{array} $$
(D.5)

From Eq. 2.5, we have

$$ y\left( t_{m}, x\right)={\Delta} t \theta_{1} f\left( t_{m}, x, y\left( t_{m}, x\right), z\left( t_{m}, x\right)\right)+h\left( t_{m}, x\right), $$
(D.6)

where

$$ \begin{array}{@{}rcl@{}} h\left( t_{m}, x\right) &:=&\mathbb{E}_{m}^{x}\left[Y_{m+1}^{\Delta}\right]+{\Delta} t\left( 1-\theta_{1}\right) \mathbb{E}_{m}^{x}\left[f\left( t_{m+1}, X_{m+1}^{\Delta}\right)\right] \\ & \approx& \frac{b-a}{2} \sum\limits_{k=0}^{N-1}{\prime}\left( \mathcal{Y}_{k}\left( t_{m+1}\right)+{\Delta} t\left( 1-\theta_{1}\right) \mathcal{F}_{k}\left( t_{m+1}\right)\right) {\Phi}_{k,m}(x). \end{array} $$
(D.7)

Integrals like Eq. D.1 are approximated using the mid-point rule as

$$ \begin{array}{@{}rcl@{}} \mathcal{V}_{k}\left( t_{m+1}\right) &\approx& \sum\limits_{n=0}^{N-1} \frac{2}{b-a} v\left( t_{m+1}, x_{n}\right) \cos \left( k \pi \frac{x_{n}-a}{b-a}\right) {\Delta} x\\ &=&\sum\limits_{n=0}^{N-1} v\left( t_{m+1}, x_{n}\right) \cos \left( k \pi \frac{2 n+1}{2 N}\right) \frac{2}{N} \end{array} $$
(D.8)

for k = 0, 1,⋯ ,N − 1, where

$$ x_{n}:=a+(n+\frac{1}{2}){\Delta} x, \quad {\Delta} x=\frac{b-a}{N}, $$

for n = 0, 1,⋯ ,N − 1. The sum Eq. D.8 is a discrete cosine transform (DCT), which can be computed by FFT with complexity \(\mathcal {O}\left (N\log N\right )\).
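In Python, for instance, Eq. D.8 is exactly an (unnormalized) type-II discrete cosine transform, so all coefficients can be obtained in one call (the interval, grid size and samples v_vals below are assumed inputs for illustration):

```python
import numpy as np
from scipy.fft import dct

a, b, N = -4.0, 4.0, 128                          # illustrative interval and grid size
x_n = a + (np.arange(N) + 0.5) * (b - a) / N      # midpoint grid
v_vals = np.maximum(np.exp(x_n) - 1.0, 0.0)       # samples v(t_{m+1}, x_n)

# scipy's type-II DCT returns 2 * sum_n v_vals[n] * cos(pi*k*(2n+1)/(2N)),
# so Eq. D.8 is recovered by dividing by N.
V = dct(v_vals, type=2) / N
```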

D.1 Constant Drift μ and Volatility σ

In this case, ϕΔ(⋅; x) is independent of x. From Eq. D.5, we obtain

$$ \begin{array}{@{}rcl@{}} \mathcal{Z}_{k}\left( t_{m}\right) &=&\frac{2}{b-a} {{\int}_{a}^{b}} z\left( t_{m}, x\right) \cos \left( k \pi \frac{x-a}{b-a}\right) d x\\ & \approx& \Re\!\left( \sum\limits_{j=0}^{N-1}{\prime}\!\left[\!-\frac{1 - \theta_{2}}{\theta_{2}} \mathcal{Z}_{j}\!\left( t_{m+1}\right) + \frac{i j \pi}{b - a} \sigma {\Delta} t\!\left( \!\frac{1}{\Delta t \theta_{2}} \mathcal{Y}_{j}\left( t_{m+1}\right) + \frac{1 - \theta_{2}}{\theta_{2}} \mathcal{F}_{j}\left( t_{m+1}\right)\right)\!\right]\right. \\ && \left.\phi_{\Delta}\left( \frac{j \pi}{b-a}\right) \mathcal{M}_{k, j}\right), \end{array} $$
(D.9)

for k = 0, 1,⋯ ,N − 1, where

$$ \begin{array}{@{}rcl@{}} \phi_{\Delta}(u)&=&\exp\left( iu\mu{\Delta} t-\frac{1}{2}u^{2}\sigma^{2}{\Delta} t\right),\\ \mathcal{M}_{k, j} &:=&\frac{2}{b-a} {{\int}_{a}^{b}} e^{i j \pi \frac{x-a}{b-a}} \cos \left( k \pi \frac{x-a}{b-a}\right) d x. \end{array} $$

Similarly, from Eq. D.7 we have

$$ \begin{array}{@{}rcl@{}} \!\!\!\!\!\!\!\!\mathcal{H}_{k}\left( t_{m}\right) &=&\!\frac{2}{b-a} {{\int}_{a}^{b}} h\left( t_{m}, x\right) \cos \left( k \pi \frac{x-a}{b-a}\right) d x \\ & \approx&\! \Re\!\left( \sum\limits_{j=0}^{N-1}{\prime}\left[\mathcal{Y}_{j}\left( t_{m+1}\right) + {\Delta} t\left( 1 - \theta_{1}\right) \mathcal{F}_{j}\left( t_{m+1}\right)\right] \phi_{\Delta}\!\left( \!\frac{j \pi}{b - a}\right) \mathcal{M}_{k, j}\!\right). \end{array} $$
(D.10)

These Fourier cosine coefficients can be computed using FFT (see Fang and Oosterlee (2009)) and the complexity of obtaining all N coefficients is \(\mathcal {O}\left (N\log N\right )\) instead of \(\mathcal {O}\left (N^{2}\right )\).

To obtain y(tm,x), we solve Eq. D.6 by performing P Picard iterations. We use the solution to calculate \(\mathcal {F}_{k}\left (t_{m}\right )\) defined by Eq. D.4 via Eq. D.8 and then calculate \(\mathcal {Y}_{k}\left (t_{m}\right )\) as

$$ \mathcal{Y}_{k}\left( t_{m}\right) = {\Delta} t \theta_{1} \mathcal{F}_{k}\left( t_{m}\right)+\mathcal{H}_{k}\left( t_{m}\right). $$
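A minimal sketch of this Picard solve on the x-grid, assuming a driver f and precomputed grid values h_vals = h(tm, ·) and z_vals = z(tm, ·) (all names here are illustrative):

```python
def picard_solve(tm, x, z_vals, h_vals, f, dt, theta1, P):
    """Solve y = dt*theta1*f(tm, x, y, z) + h (Eq. D.6) by P Picard iterations;
    the map is a contraction when dt*theta1*L < 1, with L the Lipschitz
    constant of f. Works vectorized over the grid x."""
    y = h_vals                                   # initial guess y^{(0)} = h(t_m, x)
    for _ in range(P):
        y = dt * theta1 * f(tm, x, y, z_vals) + h_vals
    return y
```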

We summarize the algorithm as follows:

[Algorithm: the BCOS recursion]

The complexity is as follows: \(\mathcal {O}(N\log N)\) for the initialization and, for each step of the recursion, \(\mathcal {O}(N\log N)\) for (i), \(\mathcal {O}(PN)\) for (ii) and \(\mathcal {O}(N\log N)\) for (iii). The total complexity is \(\mathcal {O}\left (M\left (N \log N+P N\right )\right )\).

D.2 General Drift μ(x) and Volatility σ(x)

In the general case, ϕΔ(u; x) depends on x. As a result, in Step (i) of the recursion, \(z\left (t_{m}, x\right )\) and \(h\left (t_{m}, x\right )\) can no longer be computed with the FFT and instead require \(\mathcal {O}\left (N^{2}\right )\) operations. Moreover, Eqs. D.9 and D.10 no longer hold, but the DCT can still be applied in Step (iii) of the recursion to calculate \(\mathcal {Z}_{k}(t_{m})\) and \(\mathcal {H}_{k}(t_{m})\) at cost \(\mathcal {O}\left (N\log N\right )\) by discretizing the integrals that define them (see Eq. D.8). The total computational complexity of this algorithm is \(\mathcal {O}\left (M\left (N^{2}+N \log N+P N\right )\right )\).


Cite this article

Ge, Y., Li, L. & Zhang, G. A Fourier Transform Method for Solving Backward Stochastic Differential Equations. Methodol Comput Appl Probab 24, 385–412 (2022). https://doi.org/10.1007/s11009-021-09860-y
