
Second order multiscale stochastic volatility asymptotics: stochastic terminal layer analysis and calibration


Abstract

Multiscale stochastic volatility models have been developed as an efficient way to capture the principal effects on derivative pricing and portfolio optimization of randomly varying volatility. The recent book by Fouque et al. (Multiscale Stochastic Volatility for Equity, Interest-Rate and Credit Derivatives, 2011) analyzes models in which the volatility of the underlying is driven by two diffusions – one fast mean-reverting and one slowly varying – and provides a first order approximation for European option prices and for the implied volatility surface, which is calibrated to market data. Here, we present the full second order asymptotics, which are considerably more complicated due to a terminal layer near the option expiration time. We find that to second order, the implied volatility approximation depends quadratically on log-moneyness, capturing the convexity of the implied volatility curve seen in data. We introduce a new probabilistic approach to the terminal layer analysis needed for the derivation of the second order singular perturbation term, and calibrate to S&P 500 options data.


References

  1. Alòs, E.: A generalization of the Hull and White formula with applications to option pricing approximation. Finance Stoch. 10, 353–365 (2006)

  2. Chernov, M., Gallant, R., Ghysels, E., Tauchen, G.: Alternative models for stock price dynamics. J. Econom. 116, 225–257 (2003)

  3. Conlon, J., Sullivan, M.: Convergence to Black–Scholes for ergodic volatility models. Eur. J. Appl. Math. 16, 385–409 (2005)

  4. Fouque, J.-P., Jaimungal, S., Lorig, M.: Spectral decomposition of option prices in fast mean-reverting stochastic volatility models. SIAM J. Financ. Math. 2, 665–691 (2011)

  5. Fouque, J.-P., Papanicolaou, G., Sircar, R., Sølna, K.: Short time-scale in S&P 500 volatility. J. Comput. Finance 6(4), 1–23 (2003)

  6. Fouque, J.-P., Papanicolaou, G., Sircar, R., Sølna, K.: Singular perturbations in option pricing. SIAM J. Appl. Math. 63, 1648–1665 (2003)

  7. Fouque, J.-P., Papanicolaou, G., Sircar, R., Sølna, K.: Multiscale stochastic volatility asymptotics. SIAM J. Multiscale Model. Simul. 2, 22–42 (2004)

  8. Fouque, J.-P., Papanicolaou, G., Sircar, R., Sølna, K.: Multiscale Stochastic Volatility for Equity, Interest-Rate and Credit Derivatives. Cambridge Univ. Press, Cambridge (2011)

  9. Fournié, E., Lebuchoux, J., Touzi, N.: Small noise expansion and importance sampling. Asymptot. Anal. 14, 361–376 (1997)

  10. Fukasawa, M.: Asymptotic analysis for stochastic volatility: Edgeworth expansion. Electron. J. Probab. 16, 764–791 (2011)

  11. Fukasawa, M.: Asymptotic analysis for stochastic volatility: martingale expansion. Finance Stoch. 15, 635–654 (2011)

  12. Gatheral, J.: The Volatility Surface: A Practitioner’s Guide. Wiley, New York (2006)

  13. Gobet, E., Miri, M.: Time dependent Heston model. SIAM J. Financ. Math. 1, 289–325 (2010)

  14. Heston, S.: A closed-form solution for options with stochastic volatility with applications to bond and currency options. Rev. Financ. Stud. 6, 327–343 (1993)

  15. Hillebrand, E.: Neglecting parameter changes in GARCH models. J. Econom. 129, 121–138 (2005)

  16. Howison, S.: Matched asymptotic expansions in financial engineering. J. Eng. Math. 53, 385–406 (2005)

  17. LeBaron, B.: Stochastic volatility as a simple generator of apparent financial power laws and long memory. Quant. Finance 1, 621–631 (2001)

  18. Lee, R.: Local volatilities under stochastic volatility. Int. J. Theor. Appl. Finance 4, 45–89 (1999)

  19. Lewis, A.: Option Valuation under Stochastic Volatility. Finance Press, New York (2000)

  20. Lorig, M., Pagliarani, S., Pascucci, A.: Explicit implied volatilities for multifactor local-stochastic volatility models. Math. Finance (2015, to appear). doi:10.1111/mafi.12105

  21. Meyn, S.P., Tweedie, R.L.: Stability of Markovian processes, I: criteria for discrete-time chains. Adv. Appl. Probab. 24, 542–574 (1992)

  22. Meyn, S.P., Tweedie, R.L.: Stability of Markovian processes, III: Foster–Lyapunov criteria for continuous-time processes. Adv. Appl. Probab. 25, 518–548 (1993)

  23. Rogers, L.C.G., Williams, D.: Diffusions, Markov Processes and Martingales, vol. 2, Itô Calculus. Cambridge Univ. Press, Cambridge (2000)

  24. Sircar, R., Papanicolaou, G.: Stochastic volatility, smile and asymptotics. Appl. Math. Finance 6, 107–145 (1999)

  25. Souza, M., Zubelli, J.: On the asymptotics of fast mean-reversion stochastic volatility models. Int. J. Theor. Appl. Finance 10, 817–835 (2007)


Author information

Corresponding author

Correspondence to Ronnie Sircar.

Additional information

Fouque’s work is supported by NSF grants DMS-0806461 and DMS-1107468.

Lorig’s work is partially supported by NSF grant DMS-0739195.

Sircar’s work is partially supported by NSF grant DMS-1211906.

Appendices

Appendix A: Proof of accuracy for smooth payoffs

In this appendix, we derive the accuracy result for options with smooth payoffs \(h\) as described in Remark 2.1 following Assumption 9. This is needed in order to give a meaning to the terminal value \(P_{2,0}(T,x,y,z)\) studied in Sect. 2.3.2 and to justify the regularization argument for general payoffs given in Appendix B.

In what follows, we make use of the following lemma several times.

Lemma A.1

Let \(h\) be a smooth payoff function, that is, \(h\) is \(C^{\infty}(0,\infty)\) and it and all its derivatives grow at most polynomially at 0 and \(\infty\). Then its Black–Scholes price \(P_{\mathrm{BS}}(\tau,x;\sigma)\) is also \(C^{\infty}(0,\infty)\) in \(x\), and \(\partial_{x}^{k}P_{\mathrm{BS}}\) (\(k\geq 0\)) are also at most polynomially growing at 0 and \(+\infty\) in the current stock price \(x\), and are bounded uniformly in \(\tau\in[0,T]\) for fixed \(x>0\).

Proof

From the formula (2.56), we see that \(P_{\mathrm{BS}}\) is \(C^{\infty}(0,\infty)\) in \(x\) and grows at most polynomially in \(x\) at 0 and \(+\infty\) as inherited from the behavior of \(h\). Then we compute

$$ \partial_{x}^{k}P_{\mathrm{BS}}(\tau,x;\sigma) = e^{-r\tau}\int h^{(k)}\left (xe^{(r-\frac{1}{2}\sigma^{2})\tau + \sigma\sqrt{\tau}\,\xi} \right ) \left (e^{(r-\frac{1}{2}\sigma^{2})\tau + \sigma\sqrt{\tau}\,\xi} \right )^{k}\frac{e^{-\xi^{2}/2}}{\sqrt{2\pi}}\,d\xi, $$

where \(h^{(k)}\) is the \(k\)th derivative of \(h\), which is at most polynomially growing by assumption. Therefore, \(\partial_{x}^{k}P_{\mathrm{BS}}\) is also at most polynomially growing at 0 and \(+\infty\) in \(x\), and uniformly bounded in \(\tau\in[0,T]\) for fixed \(x>0\). □
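Explicitly, \(P_{\mathrm{BS}}\) admits the standard lognormal representation (a restatement here of the formula used in the proof above)

$$ P_{\mathrm{BS}}(\tau,x;\sigma) = e^{-r\tau}\int h\left (xe^{(r-\frac{1}{2}\sigma^{2})\tau + \sigma\sqrt{\tau}\,\xi} \right )\frac{e^{-\xi^{2}/2}}{\sqrt{2\pi}}\,d\xi, $$

from which the displayed formula for \(\partial_{x}^{k}P_{\mathrm{BS}}\) follows by differentiating under the integral sign, justified by the polynomial growth of \(h\) and its derivatives.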

We note that this lemma does not hold for the nonsmooth case of puts and calls where the derivatives of the payoff are singular at the strike price.

Remark A.2

Since we have \(P_{0,0}(t,x,z)=P_{\mathrm{BS}}(T-t,x;\bar{\sigma }(z))\), it follows that the function \(\mathscr{D}_{k}P_{0,0}=x^{k}\partial^{k}_{x}P_{0,0}\) is at most polynomially growing in \(x\), and bounded uniformly in \(\tau\in[0,T]\) for fixed \(x>0\).

We also use the fact that \(Y\) and \(Z\) have moments of all orders uniformly bounded in \(\varepsilon \) and \(\delta \) (thanks to Assumptions 6 and 7 made on \(Y^{(1)}\) and \(Z^{(1)}\) in Sect. 2.1).

Lemma A.3

If \(J(y,z)\) is at most polynomially growing, then for every \((y,z)\), there exists a positive constant \(C<\infty\) such that

$$ \sup_{t\leq T}\sup_{\varepsilon ,\delta \leq 1} \mathbb{E}^{\star }[ |J(Y_{t},Z_{t})| \mid Y_{0} = y, Z_{0} = z ] \leq C . $$

The proof of this lemma can be found following Lemma 4.9 in [8].

The following property will also be used in what follows.

Lemma A.4

For each \(k\in \mathbb{Z}\), there exists a constant \(C_{k}<\infty\) depending on \(x\) and \(T\) such that

$$\begin{aligned} \sup_{t\leq T}\sup_{\varepsilon ,\delta \leq 1} \mathbb{E}^{\star }\big[ |X_{t}|^{k} \,\big|\, X_{0}=x, Y_{0} = y, Z_{0} = z \big] \leq C_{k} . \end{aligned}$$

Proof

This is a simple consequence of (2.2) and the boundedness of \(f(y,z)\) (Assumption 2 of Sect. 2.1); in fact,

$$\begin{aligned} |X_{t}|^{k}&=x^{k} \exp \left (krt-\frac{k}{2}\int_{0}^{t}f^{2}(Y_{s},Z_{s})ds+k\int_{0}^{t}f(Y_{s},Z_{s}) dW_{s}^{\star(0)}\right ) \\ &=x^{k} \exp \left (krt+\frac{k^{2}-k}{2}\int_{0}^{t}f^{2}(Y_{s},Z_{s})ds\right ) \\ &\quad{} \times \exp \left ( -\frac{k^{2}}{2}\int_{0}^{t}f^{2}(Y_{s},Z_{s})ds+k\int_{0}^{t}f(Y_{s},Z_{s}) dW_{s}^{\star(0)}\right ) \\ &\leq x^{k} \exp \left (krt+\frac{k^{2}-k}{2}\overline{c}^{2}t\right ) \\ &\quad{}\times \exp \left ( -\frac{k^{2}}{2}\int_{0}^{t}f^{2}(Y_{s},Z_{s})ds+k\int_{0}^{t}f(Y_{s},Z_{s}) dW_{s}^{\star(0)}\right ), \end{aligned}$$

where \(\overline{c}\) is the upper bound on the volatility function \(f\) in Assumption 2. The second exponential factor above is an exponential martingale with \(\mathbb{E}^{\star }\)-expectation one (Novikov's condition holds since \(f\) is bounded). Taking expectations therefore gives

$$ \mathbb{E}^{\star }\big[ |X_{t}|^{k} \big] \leq x^{k} \exp \left (krt+\frac{k^{2}-k}{2}\overline{c}^{2}t\right ). $$

 □

A.1 Intermediate lemmas

Lemma A.5

Let \(\xi(x,z)\) and \(\chi(y,z)\) be functions that are at most polynomially growing, with \(\left \langle \chi(\cdot,z) \right \rangle = 0\) for all \(z\). Assume further that \(\xi(x,z)\) is smooth in \((x,z)\) with derivatives at most polynomially growing, and \(\chi(y,z)\) is smooth in \(z\) with derivatives at most polynomially growing as well. Then we have

$$\begin{aligned} \mathbb{E}^{\star }_{t,x,y,z} [ \chi(Y_{T},Z_{T}) \xi(X_{T},Z_{T})]={\mathscr{O}}\big(\varepsilon ^{q/2}+ \sqrt{\delta}\big)\quad \textit{for}\ q< 1. \end{aligned}$$
(A.1)

In order to establish Lemma A.5, we need the following result.

Lemma A.6

Let \(\chi(y,z)\) be a function that is at most polynomially growing, with \(\left \langle \chi(\cdot,z) \right \rangle = 0\) for all \(z\). Then for \(q<1\) and \(z\) fixed, there exist \(\bar{\varepsilon }>0\) and a polynomial \(C(y)\) such that

$$\big| \mathbb{E}^{\star }_{t,y} [\chi(Y_{s},z) | Y_{s-\varepsilon ^{q}} ] \big|\leq \sqrt{ \varepsilon } |C(Y_{s-\varepsilon ^{q}}) | \quad \textit{for any}\ 0< \varepsilon \leq\bar{\varepsilon }\ \textit{and}\ s\geq t+\varepsilon ^{q}. $$

The proof of Lemma A.6 is given at the end of this section.

Proof of Lemma A.5

First, we replace \(Z_{T}\) with \(z=Z_{t}\). This replacement results in an \({\mathscr{O}}(\sqrt{ \delta })\) error, i.e.,

$$\begin{aligned} \mathbb{E}^{\star }_{t,x,y,z} [ \chi(Y_{T},Z_{T}) \xi(X_{T},Z_{T})] - \mathbb{E}^{\star }_{t,x,y,z} [ \chi(Y_{T},z) \xi(X_{T},z)] &= {\mathscr{O}}\big(\sqrt{ \delta }\big) . \end{aligned}$$
(A.2)

To see this, we observe from (2.2) that

$$\begin{aligned} Z_{T} &= z + \delta \int_{t}^{T} c(Z_{s}) ds - \sqrt{ \delta } \int_{t}^{T} \varGamma (Y_{s},Z_{s}) g(Z_{s}) ds + \sqrt{ \delta } \int_{t}^{T} g(Z_{s}) dW_{s}^{\star(2)} . \end{aligned}$$

The error (A.2) is then deduced by Taylor expanding \(\chi(y,z) \xi(x,z)\) with respect to \(z\) and using the linear growth of the coefficients in Assumption 1 in Sect. 2.1, the polynomial growth of the functions \(\chi\), \(\xi\) and their derivatives, and the uniform finiteness of the moments of all orders in Lemma A.3.
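To make the order of this error explicit, a first order Taylor expansion in the \(z\)-variable (a sketch; \(\bar{Z}\) denotes a random point between \(z\) and \(Z_{T}\)) combined with the Cauchy–Schwarz inequality gives

$$\begin{aligned} \big| \mathbb{E}^{\star }_{t,x,y,z} \big[ \chi(Y_{T},Z_{T}) \xi(X_{T},Z_{T}) - \chi(Y_{T},z) \xi(X_{T},z) \big] \big| \leq \mathbb{E}^{\star }_{t,x,y,z} \Big[ \big| \partial_{z}\big(\chi(Y_{T},\cdot)\,\xi(X_{T},\cdot)\big)(\bar{Z}) \big|^{2} \Big]^{1/2} \mathbb{E}^{\star }_{t,x,y,z} \big[ |Z_{T}-z|^{2} \big]^{1/2} , \end{aligned}$$

where the first factor is bounded uniformly in \(\varepsilon ,\delta \leq 1\) by the polynomial growth assumptions and the moment bounds of Lemmas A.3 and A.4, and the second factor is \({\mathscr{O}}(\sqrt{ \delta })\) by the above expression for \(Z_{T}\) and the linear growth of the coefficients.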

Next, we replace \(X_{T}\) by \(X_{T-\varepsilon ^{q}}\), where \(q<1\). This results in an \({\mathscr{O}}(\varepsilon ^{q/2})\) error, i.e.,

$$\begin{aligned} \mathbb{E}^{\star }_{t,x,y,z} [ \chi(Y_{T},z) \xi(X_{T},z)] - \mathbb{E}^{\star }_{t,x,y,z} [ \chi(Y_{T},z) \xi(X_{T-\varepsilon ^{q}},z)] &= {{\mathscr{O}}\big(\varepsilon ^{q/2}\big) }. \end{aligned}$$
(A.3)

The error (A.3) is deduced by using (2.2) to write

$$\begin{aligned} X_{T} &= X_{T-\varepsilon ^{q}} + r \int_{T-\varepsilon ^{q}}^{T} X_{s} ds + \int_{T-\varepsilon ^{q}}^{T} f(Y_{s},Z_{s}) X_{s} {dW_{s}^{\star(0)} } , \end{aligned}$$

and then by Taylor expanding \(\xi(x,z)\) about the point \(x=X_{T-\varepsilon ^{q}}\) and once again using that \(\xi(x,z)\) and its derivatives are at most polynomially growing in \(x\) and the moment estimate in Lemma A.4.
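The quantitative ingredient behind this step is the estimate (a sketch, using the Minkowski and Itô isometry inequalities, the bound \(f\leq\overline{c}\) from Assumption 2 and Lemma A.4)

$$\begin{aligned} \mathbb{E}^{\star }_{t,x,y,z} \big[ |X_{T}-X_{T-\varepsilon ^{q}}|^{2} \big]^{1/2} &\leq r\int_{T-\varepsilon ^{q}}^{T} \mathbb{E}^{\star }_{t,x,y,z} \big[ X_{s}^{2} \big]^{1/2} ds + \bigg( \int_{T-\varepsilon ^{q}}^{T} \mathbb{E}^{\star }_{t,x,y,z} \big[ f^{2}(Y_{s},Z_{s}) X_{s}^{2} \big] ds \bigg)^{1/2} \\ &\leq C\big(\varepsilon ^{q}+\varepsilon ^{q/2}\big) \leq C'\,\varepsilon ^{q/2} , \end{aligned}$$

so that the first order Taylor remainder of \(\xi(\cdot,z)\) around \(X_{T-\varepsilon ^{q}}\) is \({\mathscr{O}}(\varepsilon ^{q/2})\) after an application of the Cauchy–Schwarz inequality.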

Now observe that

$$\begin{aligned} \mathbb{E}^{\star }_{t,x,y,z} [ \chi(Y_{T},z) \xi(X_{T-\varepsilon ^{q}},z) ] &= \mathbb{E}^{\star }_{t,x,y,z} \bigl[ \xi(X_{T-\varepsilon ^{q}},z) \mathbb{E}^{\star }[\chi(Y_{T},z) |\mathscr{F}_{T-\varepsilon ^{q}} ] \bigr] \\ &= \mathbb{E}^{\star }_{t,x,y,z} \bigl[ \xi(X_{T-\varepsilon ^{q}},z) \mathbb{E}^{\star }[\chi(Y_{T},z) |Y_{T-\varepsilon ^{q}} ] \bigr] . \end{aligned}$$

Using Lemma A.6 at \(s=T\), polynomial growth and the moment estimates, we deduce that the expectation in (A.1) is \({\mathscr{O}}(\varepsilon ^{q/2}+ \sqrt{\delta})\) for \(q<1\). □

A.2 Proof of Theorem 2.6 for smooth payoffs

Now we recall our price approximation \(\widetilde{P}^{\varepsilon ,\delta }\) from (2.53); this is

$$\begin{aligned} P^{\varepsilon ,\delta }\approx \widetilde{P}^{\varepsilon ,\delta } &= P_{0,0} + \sqrt{ \varepsilon } \, P_{1,0} + \sqrt{ \delta } \, P_{0,1} + \sqrt{ \varepsilon \,\delta } \, P_{1,1} + \varepsilon \, P_{2,0} + \delta \, P_{0,2} , \end{aligned}$$

where \(P_{i,j}, i+j\leq 2\), are given in Proposition 2.2. The singular perturbation proof involves terms with higher order in \(\varepsilon \), and so we introduce

$$\begin{aligned} \widehat{P}^{\varepsilon ,\delta } &= \widetilde{P}^{\varepsilon ,\delta }+\varepsilon ^{3/2} P_{3,0} + \varepsilon ^{2} P_{4,0}+\varepsilon \sqrt{ \delta } P_{2,1}+\varepsilon ^{3/2}\sqrt{ \delta } P_{3,1}. \end{aligned}$$

Remark A.7

The additional terms \(P_{3,0}, P_{4,0}, P_{2,1}, P_{3,1}\) are solutions of the Poisson equations (2.18), (2.19), (2.38) and (2.39) whose centering conditions have been used to obtain lower order terms in the price expansion. Since these four additional terms are not part of our approximation, but used only for the proof of accuracy, we simply need them to be any solution of these four Poisson equations, which are all of the form

$$\mathscr{L}_{0} P = \sum_{k\geq1} c_{k}(t,y,z)\mathscr{D}_{k}P_{0,0}, $$

where the sum is finite, the \(c_{k}(t,y,z)\) are at most polynomially growing in \(y\) and \(z\) and bounded uniformly in \(t\in[0,T]\), and the \(\mathscr{D}_{k}P_{0,0}\) are at most polynomially growing in \(x\) and bounded uniformly in \(t\in[0,T]\) for fixed \(x>0\) by Remark A.2. Therefore, by Assumption 8, the solutions \(P_{3,0}, P_{4,0}, P_{2,1}, P_{3,1}\) are at most polynomially growing in \((x,y,z)\) and bounded uniformly in \(t\in[0,T]\).

Next, we define the residual

$$\begin{aligned} R^{\varepsilon ,\delta } &:= P^{\varepsilon ,\delta } - \widehat{P}^{\varepsilon ,\delta } . \end{aligned}$$

The proof of Theorem 2.6 consists of showing that

$$R^{\varepsilon ,\delta }={\mathscr{O}}\big(\varepsilon ^{1+q/2}+\varepsilon \sqrt{ \delta }+\delta \sqrt{ \varepsilon }+\delta^{3/2}\big)\quad \text{ for } q< 1. $$

By the choices made in Sects. 2.3.1, 2.3.3 and 2.3.4, when applying the operator \(\mathscr{L}^{\varepsilon ,\delta }\) to the function \(R^{\varepsilon ,\delta }\), all of the terms of order

$$\varepsilon ^{-1},\varepsilon ^{-1/2},1,\varepsilon ^{1/2},\varepsilon , \delta ^{1/2}\varepsilon ^{-1},\delta ^{1/2}\varepsilon ^{-1/2}, \delta ^{1/2},\delta ^{1/2}\varepsilon ^{1/2},\delta \varepsilon ^{-1},\delta \varepsilon ^{-1/2},\delta $$

cancel, as does the term \(\mathscr{L}^{\varepsilon ,\delta }P^{\varepsilon ,\delta }\). Hence we deduce that the residual \(R^{\varepsilon ,\delta }\) satisfies the PDE

$$\begin{aligned} \mathscr{L}^{\varepsilon ,\delta } R^{\varepsilon ,\delta } + \mathcal{S}^{\varepsilon ,\delta } &=0 , \end{aligned}$$
(A.4)

pointwise in \((t,x,y,z)\), where the source term \(\mathcal{S}^{\varepsilon ,\delta }\) in (A.4) is quite lengthy to write out explicitly. However, it is straightforward to check that it is a finite sum of the form

$$\begin{aligned} \mathcal{S}^{\varepsilon ,\delta }=\sum_{i,j:\, i + j \geq 3} \sqrt{ \varepsilon }^{\,i} \sqrt{ \delta }^{\,j} \sum_{k \geq 1} C_{i,j,k}(t,y,z) \mathscr{D}_{k} P_{0,0} , \end{aligned}$$

where the coefficients \(C_{i,j,k}(t,y,z)\) are bounded uniformly in \(t \in [0,T]\) and at most polynomially growing in \(y\) and \(z\). We know the terms \(\mathscr{D}_{k} P_{0,0}\) are at most polynomially growing in \(x\) and bounded uniformly in \(t\in[0,T]\) for fixed \(x\) by Lemma A.1 and the observation in Remark A.2. Consequently, the source term in (A.4) is at most polynomially growing in \(x, y\) and \(z\) and uniformly bounded in \(t\in[0,T]\) and \(\varepsilon ,\delta \leq 1\). Thus we have \(\mathcal{S}^{\varepsilon ,\delta }={\mathscr{O}}(\varepsilon ^{3/2}+\varepsilon \sqrt{ \delta }+\delta \sqrt{ \varepsilon }+\delta^{3/2})\).

Using the terminal conditions for \(P_{i,j}, i+j\leq 2\), we deduce the terminal condition for the residual as

$$\begin{aligned} R^{\varepsilon ,\delta }(T,x,y,z) &= -\varepsilon P_{2,0}(T,x,y,z) + \mathcal{S}^{\varepsilon ,\delta }_{T}, \end{aligned}$$
(A.5)

pointwise in \((x,y,z)\), where again the terms in \(\mathcal{S}^{\varepsilon ,\delta }_{T}\) come from the Poisson equations discussed in Remark A.7. It is straightforward to check that \(\mathcal{S}^{\varepsilon ,\delta }_{T}\) is of the form

$$\begin{aligned} \mathcal{S}^{\varepsilon ,\delta }_{T}(x,y,z)=\sum_{i,j:\, i + j \geq 3} \sqrt{ \varepsilon }^{\,i} \sqrt{ \delta }^{\,j} \sum_{k \geq 1} C_{i,j,k}(y,z) \mathscr{D}_{k} h(x), \end{aligned}$$

where again the sum is finite and the coefficients \(C_{i,j,k}(y,z)\) are at most polynomially growing in \(y\) and \(z\). The terms \(\mathscr{D}_{k}h(x)\) are at most polynomially growing in \(x\) by the assumption in Theorem 2.6. Consequently, the term \(\mathcal{S}^{\varepsilon ,\delta }_{T}\) in (A.5) is at most polynomially growing in \(x, y\) and \(z\), uniformly in \(\varepsilon ,\delta \leq 1\). Thus we have \(\mathcal{S}^{\varepsilon ,\delta }_{T}={\mathscr{O}}(\varepsilon ^{3/2}+\varepsilon \sqrt{ \delta }+\delta \sqrt{ \varepsilon }+\delta^{3/2})\). The same polynomial growth condition holds for

$$\begin{aligned} P_{2,0}(T,x,y,z)=-\frac{1}{2}\phi(y,z)\mathscr{D}_{2}P_{0,0}(T,x,z)=-\frac{1}{2}\phi(y,z)\mathscr{D}_{2}h(x). \end{aligned}$$
(A.6)

It is important to note that the nonvanishing terminal value \(P_{2,0}(T,x,y,z)\) plays a particular role since it appears at the \(\varepsilon \) order. The probabilistic representation of \(R^{\varepsilon ,\delta }\), solution to the Cauchy problem (A.4), (A.5), is therefore

$$\begin{aligned} R^{\varepsilon ,\delta }(t,x,y,z) &= \frac{\varepsilon }{2}\, \mathbb{E}^{\star }_{t,x,y,z} \big[ e^{-r(T-t)} \phi(Y_{T},Z_{T}) \mathscr{D}_{2}h(X_{T})\big] \\ &\phantom{=:} + {\mathscr{O}}\big(\varepsilon ^{3/2}+\varepsilon \sqrt{ \delta }+\delta \sqrt{ \varepsilon }+\delta^{3/2}\big), \end{aligned}$$
(A.7)

where \(\mathbb{E}^{\star }_{t,x,y,z}\) denotes expectation under the \((\varepsilon ,\delta )\)-dependent dynamics (2.2) starting at time \(t< T\) from \((x,y,z)\). The \({\mathscr{O}}(\varepsilon ^{3/2}+\varepsilon \sqrt{ \delta }+\delta \sqrt{ \varepsilon }+\delta^{3/2})\)-term comes from \(\mathcal{S}^{\varepsilon ,\delta }\) in (A.4) and \(\mathcal{S}^{\varepsilon ,\delta }_{T}\) in (A.5), and it retains the same order because of the uniform control of the moments of \(X\), \(Y\) and \(Z\) recalled in Lemmas A.3 and A.4 at the beginning of this section. We next examine the above expectation in (A.7) in detail.

From Lemma A.5 with \(\xi= \mathscr{D}_{2}h\) and \(\chi=\phi\), where smoothness in \(z\) follows from the smoothness of \(f\) (Assumption 8 in Sect. 2.1), we have

$$\mathbb{E}^{\star }_{t,x,y,z} \big[ e^{-r(T-t)} \phi(Y_{T},Z_{T}) \mathscr{D}_{2}h(X_{T})\big]={\mathscr{O}}\big(\varepsilon ^{q/2} + \sqrt{\delta}\big)\quad \mbox{for $q< 1$}, $$

by our choice (2.32). We then conclude from (A.7) that the residual \(R^{\varepsilon ,\delta }\) is indeed \({\mathscr{O}}(\varepsilon ^{1+q/2}+\varepsilon \sqrt{ \delta }+\delta \sqrt{ \varepsilon }+\delta^{3/2})\) for any \(q < 1\), which establishes Theorem 2.6.  □

Remark A.8

This is exactly where we see that our choice of terminal condition (2.31) for \(P_{2,0}\), which leads to (2.32), was necessary, because if \(\left \langle \phi(\cdot,z) \right \rangle \ne 0\), then the expectation in (A.7) would be of order 1 and the residual would be of order \(\varepsilon \).

A.3 Proof of Lemma A.6

Let us first consider the case \(\varLambda=0\). For \(z\) fixed, \(\chi(y,z)\) being at most polynomially growing in \(y\), there exist \(a>0\) and an integer \(k\) such that \(|\chi(y,z)|\leq a(y^{2k}+1)\). By Assumption 4 in Sect. 2.1, we have that the process \(Y^{(1)}\) is a regular diffusion and thus, by [23, Proposition V.50.1], a Feller process, as is any skeleton chain of \(Y^{(1)}\). Moreover, any skeleton chain of \(Y^{(1)}\) is \(\varPi\)-irreducible and the support of \(\varPi\) has a non-empty interior (since by Assumption 4, \(\varPi\) has a density \(\pi\)). Therefore, by [21, Theorem 3.4(ii)], all compact subsets of the state space of \(Y^{(1)}\) are petite for some skeleton chain of \(Y^{(1)}\). This allows us to apply [22, Theorem 6.1], from which it follows that there exist \(b<\infty\) and \(\lambda>0\) such that

$$\big| \mathbb{E}^{\star }_{y}\big[\chi\big(Y^{(1)}_{t},z\big)\big]-\langle \chi(\cdot,z)\rangle\big| =\big| \mathbb{E}^{\star }_{y}\big[\chi\big(Y^{(1)}_{t},z\big)\big]\big|\leq ab(y^{2k}+1)e^{-\lambda t} \quad \mbox{for every } \, t. $$

By time homogeneity and the fact that \(Y\) has the same law as \(Y^{(1)}\) run at the time scale \(1/\varepsilon \), one deduces that for \(s-\varepsilon ^{q}\geq t\),

$$\big| \mathbb{E}^{\star }_{y,s-\varepsilon ^{q}}[\chi(Y_{s},z)]-\langle \chi(\cdot,z)\rangle\big| =\big| \mathbb{E}^{\star }_{y}\big[\chi\big(Y^{(1)}_{1/\varepsilon ^{1-q}},z\big)\big]\big|\leq ab(y^{2k}+1)e^{-\lambda /\varepsilon ^{1-q}}, $$

and consequently

$$\big| \mathbb{E}^{\star }_{t,y}[\chi(Y_{s},z)|Y_{s-\varepsilon ^{q}}]\big|\leq ab(Y_{s-\varepsilon ^{q}}^{2k}+1)e^{-\lambda /\varepsilon ^{1-q}}. $$

Lemma A.6 follows by using \(e^{-\lambda /\varepsilon ^{1-q}}\leq \sqrt{ \varepsilon }\) for \(\varepsilon \leq \bar{\varepsilon }\) small enough. Note that this last inequality is what we need for the second order accuracy studied in this paper, but it can be improved to any power of \(\varepsilon \) (up to a multiplicative constant, or for \(\varepsilon \) small enough).
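Indeed, for any \(m\geq 0\) and \(q<1\), we have \(\varepsilon ^{-m}e^{-\lambda/\varepsilon ^{1-q}}\to 0\) as \(\varepsilon \downarrow 0\), so that

$$ e^{-\lambda/\varepsilon ^{1-q}} \leq C_{m}\,\varepsilon ^{m} \quad \text{for } 0< \varepsilon \leq 1, \qquad\text{and in particular}\qquad e^{-\lambda/\varepsilon ^{1-q}} \leq \sqrt{ \varepsilon } \quad \text{for } \varepsilon \leq\bar{\varepsilon }, $$

for some constant \(C_{m}<\infty\) and some \(\bar{\varepsilon }=\bar{\varepsilon }(\lambda,q)>0\); this is the origin of the constant \(\bar{\varepsilon }\) in the statement of Lemma A.6.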

However, under the pricing measure \(\mathbb{P}^{\star }\), due to the presence of the possibly nonzero market price of volatility risk \(\varLambda (y)\), we need to deal with the perturbed infinitesimal generator \(\mathscr{L}_{0}-\sqrt{ \varepsilon }\beta(y)\varLambda(y)\partial_{y}\) and its associated diffusion process denoted by \(Y^{(1,\varepsilon )}\) which satisfies

$$\begin{aligned} \begin{aligned} dY^{(1,\varepsilon )}_{t} &= \big( \alpha(Y^{(1,\varepsilon )}_{t}) - \sqrt{ \varepsilon } \beta(Y^{(1,\varepsilon )}_{t}) \varLambda (Y_{t}^{(1,\varepsilon )}) \big) dt + \beta(Y^{(1,\varepsilon )}_{t})\,dW_{t}^{\star(1)} , \\ Y_{0}^{(1,\varepsilon )} &= y . \end{aligned} \end{aligned}$$
(A.8)

The process \(Y^{(1,\varepsilon )}\) in (A.8) admits the invariant distribution \(\varPi_{\varepsilon }\) with density

$$\begin{aligned} \pi_{\varepsilon }(y) &= \frac{J_{\varepsilon }}{\beta^{2}(y)}\exp \left (2 \int_{0}^{y} \frac{\alpha(u) - \sqrt{ \varepsilon }\beta(u)\varLambda (u)}{\beta^{2}(u)}du \right ) , \end{aligned}$$

where \(J_{\varepsilon }\) is a normalization factor. Using Assumption 5 and following the argument given above in the case \(\varLambda=0\), we obtain that there exist \(b<\infty\) and \(\lambda>0\) independent of \(\varepsilon \leq 1\) such that

$$ \big| \mathbb{E}^{\star }_{t,y}[\chi(Y_{s},z)|Y_{s-\varepsilon ^{q}}]-\left \langle \chi(\cdot,z)\right \rangle _{\varepsilon }\big| \leq ab(Y_{s-\varepsilon ^{q}}^{2k}+1)e^{-\lambda /\varepsilon ^{1-q}} \leq ab(Y_{s-\varepsilon ^{q}}^{2k}+1)\sqrt{ \varepsilon }. $$

Now, expanding \(\pi_{\varepsilon }\) (including \(J_{\varepsilon }\)), we derive for any \(g \in L_{1}(\varPi_{\varepsilon })\) that

$$\begin{aligned} \left \langle g\right \rangle _{\varepsilon }&= \langle g\rangle - 2 \sqrt{ \varepsilon } \left \langle \left (\int_{0}^{\cdot} \frac{\varLambda (u)}{\beta(u)}du\right ) \big( g(\cdot) - \langle g\rangle \big)\right \rangle + {{\mathscr{O}}(\varepsilon )}. \end{aligned}$$
(A.9)
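To see where (A.9) comes from, write \(F(y)=\int_{0}^{y}\varLambda (u)/\beta(u)\,du\). A formal expansion (a sketch, assuming enough integrability to expand under the invariant distributions) gives

$$ \pi_{\varepsilon }(y) = \frac{J_{\varepsilon }}{J}\,\pi(y)\,e^{-2\sqrt{ \varepsilon }\,F(y)} = \frac{J_{\varepsilon }}{J}\,\pi(y)\big(1-2\sqrt{ \varepsilon }\,F(y)+{\mathscr{O}}(\varepsilon )\big), $$

where \(J\) is the normalization factor of \(\pi\), and the condition \(\int\pi_{\varepsilon }=1\) forces \(J_{\varepsilon }/J=1+2\sqrt{ \varepsilon }\left \langle F\right \rangle +{\mathscr{O}}(\varepsilon )\). Consequently,

$$ \left \langle g\right \rangle _{\varepsilon } = \langle g\rangle - 2\sqrt{ \varepsilon }\,\big(\left \langle Fg\right \rangle -\left \langle F\right \rangle \langle g\rangle \big)+{\mathscr{O}}(\varepsilon ) = \langle g\rangle - 2\sqrt{ \varepsilon }\left \langle F\big( g - \langle g\rangle \big)\right \rangle +{\mathscr{O}}(\varepsilon ), $$

which is (A.9).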

Hence, using the fact that \(\left \langle \chi(\cdot,z)\right \rangle =0\) and the triangle inequality, Lemma A.6 follows. Note that the term in \(\sqrt{ \varepsilon }\) in (A.9) would generate a contribution of order \(\sqrt{ \varepsilon }\) from the \(\varepsilon \)-order term \(P_{2,0}\), which would contribute a term of order \(\varepsilon ^{3/2}\) if one were to seek an expansion of the price at that order.  □

Appendix B: Proof of Theorem 2.6

In this appendix, we consider payoffs \(h\) satisfying Assumption 9. We regularize such a payoff \(h\) by replacing it with its Black–Scholes price with time to maturity \(\varDelta >0\) and volatility \(\bar{\sigma }(z)\), which is treated as a constant volatility, \(z\) being a parameter. Accordingly, we define

$$\begin{aligned} h^{ \varDelta }(x,z) &= P_{\mathrm{BS}}\big( \varDelta ,x;\bar{\sigma }(z)\big), \end{aligned}$$
(B.1)

where \(P_{\mathrm{BS}}(\tau,x;\sigma)\) is the Black–Scholes price of an option with payoff \(h\) as a function of the time to maturity \(\tau\), the stock price \(x\) and the volatility \(\sigma\). We note that for \(\varDelta >0\), the regularized payoff \(h^{ \varDelta }\), as a function of \(x\), is \(C^{\infty}\), and it and its derivatives are at most polynomially growing at 0 and \(+\infty\). As such, \(h^{ \varDelta }\) is smooth in the sense considered in Appendix A.
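Concretely, by the lognormal representation of the Black–Scholes price, the regularization (B.1) can be written as

$$ h^{ \varDelta }(x,z) = e^{-r \varDelta }\int h\left (xe^{(r-\frac{1}{2}\bar{\sigma }^{2}(z)) \varDelta + \bar{\sigma }(z)\sqrt{ \varDelta }\,\xi} \right )\frac{e^{-\xi^{2}/2}}{\sqrt{2\pi}}\,d\xi , $$

so that \(h^{ \varDelta }(\cdot,z)\) is a Gaussian smoothing of \(h\), and \(h^{ \varDelta }(x,z)\to h(x)\) as \( \varDelta \downarrow 0\) at every continuity point \(x\) of \(h\).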

The price \(P^{\varepsilon ,\delta, \varDelta }(t,x,y,z)\) of the option with the regularized payoff satisfies

$$ \mathscr{L}^{\varepsilon ,\delta} P^{\varepsilon ,\delta, \varDelta } = 0 , \quad P^{\varepsilon ,\delta, \varDelta }(T,x,y,z) = h^{ \varDelta }(x,z) , $$

where the operator \(\mathscr{L}^{\varepsilon ,\delta}\) is given in (2.6). Corresponding to the price approximation \(\widetilde{P}^{\varepsilon ,\delta }\) given in (2.53), we introduce the second order approximation of the regularized option price denoted by \(\widetilde{P}^{\varepsilon ,\delta , \varDelta }\), i.e.,

$$\begin{aligned} \widetilde{P}^{\varepsilon ,\delta , \varDelta } &= P_{0,0}^{ \varDelta } + \sqrt{ \varepsilon } P_{1,0}^{ \varDelta } + \sqrt{\delta} P_{0,1}^{ \varDelta } +\varepsilon P_{2,0}^{ \varDelta }+ \sqrt{ \varepsilon }\sqrt{\delta} P_{1,1}^{ \varDelta } +\delta P_{0,2}^{ \varDelta }, \end{aligned}$$
(B.2)

where, from Proposition 2.2, \(P_{0,0}^{ \varDelta }\) is the Black–Scholes price of the option maturing at \(T\) with payoff \(h^{ \varDelta }(x,z)\), evaluated at volatility \(\bar{\sigma }(z)\). Since we have regularized the payoff in (B.1) by using the Black–Scholes price with volatility \(\bar{\sigma }(z)\), it follows that \(P_{0,0}^{ \varDelta }\) is given by

$$\begin{aligned} P_{0,0}^{ \varDelta }(t,x,z)=P_{0,0}(t- \varDelta ,x,z) &= P_{\mathrm{BS}}\big(T-t+ \varDelta ,x;\bar{\sigma }(z)\big). \end{aligned}$$

Similarly, the other terms in (B.2) are solutions of the PDE problems in (2.54) with \(h\) replaced by \(h^{\varDelta }\), and they are given explicitly in Proposition 2.2. Note that the term \(\varepsilon P_{2,0}^{ \varDelta }\) in (B.2) plays a particular role. From (A.6), it is given by

$$\begin{aligned} \varepsilon P_{2,0}^{ \varDelta }(t,x,y,z)=-\frac{1}{2}\varepsilon \phi(y,z)\mathscr{D}_{2}P^{ \varDelta }_{0,0}(t,x,z), \end{aligned}$$
(B.3)

where \(\phi\) is centered, and at maturity, this term becomes \(-\frac{1}{2}\varepsilon \phi(y,z)\mathscr{D}_{2}h^{ \varDelta }(x,z)\).

The proof of Theorem 2.6 relies on the following three lemmas, which we prove below.

Lemma B.1

For a fixed point \((t,x,y,z)\) with \(t< T\), there exist constants \(\bar{ \varDelta }_{1}>0\), \(\bar{\varepsilon }_{1} > 0\) and \(c_{1}>0\) such that

$$\begin{aligned} | P^{\varepsilon ,\delta}(t,x,y,z) - P^{\varepsilon ,\delta, \varDelta }(t,x,y,z) | \leq c_{1} \varDelta , \end{aligned}$$

for all \(0< \varDelta \leq \bar{ \varDelta }_{1}\) and \(0 < \varepsilon \leq \bar{\varepsilon }_{1}\).

Lemma B.1 controls the error between the model price and the model price with the regularized payoff.

Lemma B.2

For a fixed point \((t,x,y,z)\) with \(t< T\), there exist constants \(\bar{ \varDelta }_{2}>0\), \(\bar{\varepsilon }_{2} > 0\) and \(c_{2}>0\) such that

$$\begin{aligned} | \widetilde{P}^{\varepsilon ,\delta }(t,x,y,z) - \widetilde{P}^{\varepsilon ,\delta , \varDelta }(t,x,y,z) | \leq c_{2} \varDelta , \end{aligned}$$

for all \(0< \varDelta \leq \bar{ \varDelta }_{2}\) and \(0 < \varepsilon \leq \bar{\varepsilon }_{2}\).

Lemma B.2 controls the error between the approximated price and the approximated price with the regularized payoff.

Lemma B.3

For a fixed point \((t,x,y,z)\) with \(t< T\), there exist constants \(\bar{ \varDelta }_{3}>0\), \(\bar{\varepsilon }_{3} > 0\) and \(c_{3}>0\) such that

$$\begin{aligned} | P^{\varepsilon ,\delta, \varDelta }(t,x,y,z) - \widetilde{P}^{\varepsilon ,\delta , \varDelta }(t,x,y,z) | \leq c_{3} \big( \varepsilon ^{1+q/2} +\varepsilon \sqrt{\delta}+\delta\sqrt{ \varepsilon }+\delta^{3/2}\big), \end{aligned}$$

for all \(0 < \varepsilon \leq \bar{\varepsilon }_{3}\), any \(q<1\), and uniformly in \(\varDelta \leq \bar{ \varDelta }_{3}\).

Lemma B.3 controls the error between the model price and the approximated price, both with the regularized payoff.

B.1 Proof of Theorem 2.6 for general payoffs

The proof follows directly from Lemmas B.1–B.3. Take \(\bar{\varepsilon }= \min(\bar{\varepsilon }_{1},\bar{\varepsilon }_{2},\bar{\varepsilon }_{3})\) and choose \(\varDelta =\varepsilon ^{3/2}\). Then, using Lemmas B.1–B.3, we find

$$\begin{aligned} |P^{\varepsilon ,\delta} - \widetilde{P}^{\varepsilon ,\delta }| &\leq |P^{\varepsilon ,\delta} - P^{\varepsilon ,\delta, \varDelta }| + | P^{\varepsilon ,\delta, \varDelta } - \widetilde{P}^{\varepsilon ,\delta , \varDelta }| + | \widetilde{P}^{\varepsilon ,\delta , \varDelta }- \widetilde{P}^{\varepsilon ,\delta } | \\ &\leq 2 \max(c_{1},c_{2}) \varepsilon ^{3/2} + c_{3} \big( \varepsilon ^{1+q/2} +\varepsilon \sqrt{\delta}+\delta\sqrt{ \varepsilon }+\delta^{3/2}\big) ,\\ &={\mathscr{O}}\big(\varepsilon ^{3/2-}+\varepsilon \sqrt{ \delta }+\delta \sqrt{ \varepsilon }+\delta^{3/2}\big), \end{aligned}$$

where the functions are evaluated at a fixed \((t,x,y,z)\) with \(t< T\).  □

B.2 Proofs of Lemmas B.1 and B.2

Proof of Lemma B.1

The proof is a straightforward extension of [6, Lemma 4.1]. It requires a multi-factor “correlated Hull–White formula” with general payoffs which is in [8, Sect. 2.5.4]. We give some details here since it introduces notations that are also used in the proof of Lemma B.4 below. Conditioning on the volatility path \((Y_{u},Z_{u})_{t\leq u\leq T}\) (or their driving Brownian motions \((W_{u}^{\star(1)},W_{u}^{\star(2)})_{t\leq u\leq T}\)), we obtain the representations

$$\begin{aligned} P^{\varepsilon ,\delta}(t,x,y,z)&= \mathbb{E}^{\star }_{t,x,y,z} \big[P_{\mathrm{BS}}\big(t,xe^{\zeta_{t,T}}; \bar{\sigma }_{\perp,t,T}\big)\big],\\ P^{\varepsilon ,\delta, \varDelta }(t,x,y,z)&= \mathbb{E}^{\star }_{t,x,y,z} \big[P_{\mathrm{BS}}\big(t,xe^{\zeta_{t,T}+r\varDelta }; \bar{\sigma }^{\varDelta }_{\perp,t,T}\big)\big], \end{aligned}$$

where \(P_{\mathrm{BS}}\) is the Black–Scholes price with payoff \(h\) and maturity \(T\), and for \(t< s\leq T\),

$$\begin{aligned} \zeta_{t,s} &=\rho_{1}\int_{t}^{s}f(Y_{u},Z_{u})dW_{u}^{\star(1)}+\rho_{2}\int_{t}^{s}f(Y_{u},Z_{u})dW_{u}^{\star(2)} \\ &\phantom{=:} -\frac{1}{2}(\rho_{1}^{2}+\rho_{2}^{2}) \int_{t}^{s}f(Y_{u},Z_{u})^{2}du , \end{aligned}$$
(B.4)
$$\begin{aligned} \bar{\sigma }_{\perp,t,s}^{2} &=\frac{c_{0}^{2}}{s-t} \int_{t}^{s}f(Y_{u},Z_{u})^{2}du, \end{aligned}$$
(B.5)

with

$$\begin{aligned} 0< c_{0}^{2} &:= \frac{1-\rho_{1}^{2}-\rho_{2}^{2}-\rho_{12}^{2}+2\rho_{1}\rho_{2}\rho_{12}}{1-\rho_{12}^{2}}\leq 1, \\ (\bar{\sigma }^{\varDelta }_{\perp,t,s})^{2} &\phantom{:}=\bar{\sigma }_{\perp,t,s}^{2}+\frac{\varDelta \bar{\sigma }^{2}(z)}{s-t}. \end{aligned}$$

Therefore,

$$\begin{aligned} & | P^{\varepsilon ,\delta}(t,x,y,z) - P^{\varepsilon ,\delta, \varDelta }(t,x,y,z) | \\ &\quad\leq e^{-r(T-t)} \mathbb{E}^{\star }_{t,x,y,z} \bigg[\int |h(xe^{\eta'+\zeta_{t,T}})| |p_{1}(\eta')-p_{2}(\eta')|d\eta'\bigg], \end{aligned}$$

where \(p_{1}\) denotes the Gaussian density of

$$\mathcal{N}\bigg(\Big(r-\frac{\bar{\sigma }_{\perp,t,T}^{2}}{2}\Big)(T-t), \bar{\sigma }_{\perp,t,T}^{2}(T-t)\bigg) $$

and \(p_{2}\) the Gaussian density of

$$\mathcal{N}\bigg(r\varDelta +\Big(r-\frac{(\bar{\sigma }^{\varDelta }_{\perp,t,T})^{2}}{2}\Big)(T-t), (\bar{\sigma }^{\varDelta }_{\perp,t,T})^{2}(T-t)\bigg) . $$
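To quantify the difference between \(p_{1}\) and \(p_{2}\), write \(m_{i}\) and \(v_{i}\) for the mean and variance of \(p_{i}\); a direct computation from the definitions above gives

$$ m_{2}-m_{1} = r \varDelta - \frac{1}{2}\, \varDelta \,\bar{\sigma }^{2}(z) , \qquad v_{2}-v_{1} = \varDelta \,\bar{\sigma }^{2}(z) , $$

both of order \( \varDelta \).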

Observe that the variance \(\bar{\sigma }_{\perp,t,T}^{2}(T-t)\) is bounded and bounded below by \(c_{0}^{2}\underline{c}^{2}(T-t)\), where \(0<\underline{c}\leq f(y,z)\) from Assumption 2 in Sect. 2.1. Lemma B.1 follows by using the polynomial growth of \(h\) at 0 and \(\infty\), and the exponential moments of \(\zeta_{t,T}\). □

Proof of Lemma B.2

From Proposition 2.2, we can express each \(P_{i,j}\), \(i+j\leq 2\), as an operator acting on \(P_{\mathrm{BS}}(T-t,x;\bar{\sigma }(z))\). Since derivatives with respect to \(\sigma\) can be converted to derivatives with respect to \(x\) by the vega–gamma relation (2.59), we can write the price approximation \(\widetilde{P}^{\varepsilon ,\delta }\) in (2.53) as \(\widetilde{P}^{\varepsilon ,\delta }(t,x,y,z)=\mathcal{G}P_{\mathrm{BS}}(T-t,x;\bar{\sigma }(z))\), where the operator \(\mathcal{G}\) is a polynomial in the \(\mathscr{D}_{i}\) with bounded coefficients for given \((y,z)\). Similarly, we can express \(\widetilde{P}^{\varepsilon ,\delta , \varDelta }\) as \(\widetilde{P}^{\varepsilon ,\delta , \varDelta }=\mathcal{G}P_{\mathrm{BS}}(T-t+\varDelta ,x;\bar{\sigma }(z))\), and therefore

$$\begin{aligned} \widetilde{P}^{\varepsilon ,\delta }- \widetilde{P}^{\varepsilon ,\delta , \varDelta } &= \mathcal{G}\Big(P_{\mathrm{BS}}\big(T-t,x;\bar{\sigma }(z)\big)-P_{\mathrm{BS}}\big(T-t+\varDelta ,x;\bar{\sigma }(z)\big)\Big). \end{aligned}$$

Using the differentiability of \(P_{\mathrm{BS}}\) and \(\{\mathscr{D}_{i}\} P_{\mathrm{BS}}\) with respect to \(t\) at \(t< T\), Lemma B.2 follows easily. □

B.3 Estimates on greeks

The key to proving Lemma B.3 is the following lemma providing uniform estimates.

Lemma B.4

As in Lemma B.3, in what follows, \(t\) is fixed such that \(t< T\). Let \(\chi(y,z)\) be a function which is at most polynomially growing in \((y,z)\), and smooth in \(z\) with partial derivatives with respect to \(z\) that are at most polynomially growing in \((y,z)\). Denote by \(\eta_{s}=\log X_{s}\) the log-process and by \(\eta=\log x\) the corresponding log-variable. Then for any integer \(k\), there exists a finite constant \(c > 0\), which may depend on \((t,x,y,z,T)\), such that uniformly in \(\varepsilon \), \(\delta\), \(\varDelta >0\) and \(t\leq s\leq T\),

$$\begin{aligned} \big| { \mathbb{E}^{\star }_{t,x,y,z}\big[ \chi(Y_{s},Z_{s})\partial_{\eta}^{k} P_{0,0}^{ \varDelta }(s,e^{\eta_{s}},Z_{s})\big] } \big| &\leq c , \end{aligned}$$
(B.6)

and, for a given \(p\geq 0\),

$$\begin{aligned} \bigg| \mathbb{E}^{\star }_{t,x,y,z}\bigg[ \int_{t}^{T} (T-s)^{p} e^{-r(s-t)} \chi(Y_{s},Z_{s}) \partial_{\eta}^{k} P_{0,0}^{ \varDelta }(s,e^{\eta_{s}},Z_{s}) ds\bigg] \bigg| &\leq c . \end{aligned}$$
(B.7)

Additionally, if \(\chi\) is centered, \(\left \langle \chi(\cdot,z) \right \rangle =0\) for all \(z\), then for any \(q<1\) and any integer \(k\), there exists a finite constant \(c > 0\), which may depend on \((t,x,y,z,T)\), such that for any \(\varepsilon \) satisfying \(\varepsilon ^{q}\leq T-t\) and any \(s\) satisfying \(t+\varepsilon ^{q}\leq s \leq T\), uniformly in \(\varDelta >0\), we have

$$\begin{aligned} \big| { \mathbb{E}^{\star }_{t,x,y,z}\big[ \chi(Y_{s},Z_{s}) \partial_{\eta}^{k} P_{0,0}^{ \varDelta }(s,e^{\eta_{s}},Z_{s})\big] } \big| &\leq c \big(\varepsilon ^{q/2}+\sqrt{\delta}\big) , \end{aligned}$$
(B.8)

and, for a given \(p\geq 0\),

$$\begin{aligned} &\bigg| \mathbb{E}^{\star }_{t,x,y,z}\bigg[ \int_{t}^{T} (T-s)^{p} e^{-r(s-t)} \chi(Y_{s},Z_{s}) \partial_{\eta}^{k} P_{0,0}^{ \varDelta }(s,e^{\eta_{s}},Z_{s}) ds\bigg] \bigg| \\ &\quad\leq c \big(\varepsilon ^{q/2}+ \sqrt{\delta}\big). \end{aligned}$$
(B.9)

Proof

This is an improved version of Lemma 5.2 in [6], where the proof consisted in an explicit computation of \(\partial_{\eta}^{k} P_{0,0}^{ \varDelta }\) in the case of a call payoff. Here, we aim at estimates which are uniform in \(\varDelta \). Conditioning on the volatility path \((Y_{u},Z_{u})_{t\leq u\leq s}\) and using the notations introduced in the proof of Lemma B.1 in Sect. B.2, we get

$$\begin{aligned} & \mathbb{E}^{\star }_{t,x,y,z}\big[ \chi(Y_{s},Z_{s})\partial_{\eta}^{k} P_{0,0}^{ \varDelta }(s,e^{\eta_{s}},Z_{s})\big] \\ &\quad= \mathbb{E}^{\star }_{t,x,y,z}\bigg[ \chi(Y_{s},Z_{s})\int h(e^{\eta'+\zeta_{t,s}})\partial_{\eta}^{k} p(\eta'-\eta)d\eta'\bigg] , \end{aligned}$$
(B.10)

where \(p\) is the Gaussian density of \(\mathcal{N} \left ( M, \varSigma \right ) \) with

$$\begin{aligned} &M = \bigg(r-\frac{1}{2}\bar{\sigma }_{\perp,t,s}^{2}\bigg)(s-t)+\bigg(r-\frac{1}{2}\bar{\sigma }(Z_{s})^{2}\bigg)(T+\varDelta -s), \\ &\varSigma = \bar{\sigma }_{\perp,t,s}^{2}(s-t)+\bar{\sigma }(Z_{s})^{2}(T+\varDelta -s) , \end{aligned}$$
(B.11)

and \(\zeta_{t,s}\) and \(\bar{\sigma }_{\perp,t,s}^{2}\) are defined for \(s>t\) in (B.4) and (B.5), respectively. Note that for \(s=t\), \(\zeta_{t,t}=0\) and the Gaussian distribution is simply

$$\mathcal{N}\bigg(\Big(r-\frac{1}{2}\bar{\sigma }(z)^{2}\Big)(T+\varDelta -t), \bar{\sigma }(z)^{2}(T+\varDelta -t)\bigg). $$

The uniform bound (B.6) follows from the uniform lower bound of the variance of \(p\), the polynomial growth of \(h\), the uniform moments of \(Y\) and \(Z\) (Lemma A.3), and the exponential moments of \(\zeta_{t,s}\). The bound (B.7) is a direct consequence of (B.6).

If in addition \(\chi\) is centered, we define

$$\xi_{s}= \mathbb{E}^{\star }\big[ \partial_{\eta}^{k} P_{0,0}^{ \varDelta }(s,e^{\eta_{s}},Z_{s})\,\big|\, (Y_{u},Z_{u})_{t\leq u\leq s}\big], $$

and we write for \(s\geq t+\varepsilon ^{q}\) that

$$\begin{aligned} & \mathbb{E}^{\star }_{t,x,y,z}\big[ \chi(Y_{s},Z_{s}) \partial_{\eta}^{k} P_{0,0}^{ \varDelta }(s,e^{\eta_{s}},Z_{s})\big] \\ &\quad= \mathbb{E}^{\star }_{t,x,y,z}[ \chi(Y_{s},Z_{s}) \xi_{s}], \\ &\quad= \mathbb{E}^{\star }_{t,x,y,z}[ \chi(Y_{s},Z_{s})( \xi_{s}-\xi_{s-\varepsilon ^{q}})] \\ &\qquad{} + \mathbb{E}^{\star }_{t,x,y,z}\big[ \xi_{s-\varepsilon ^{q}} \mathbb{E}^{\star }[\chi(Y_{s},Z_{s})\,|\, \mathcal{F}_{s-\varepsilon ^{q}}]\big].\quad \end{aligned}$$
(B.12)

The second term \(\mathbb{E}^{\star }_{t,x,y,z}[ \xi_{s-\varepsilon ^{q}} \mathbb{E}^{\star }[\chi(Y_{s},Z_{s})\,|\, \mathcal{F}_{s-\varepsilon ^{q}}]]\) in (B.12) is treated as in the proof of Lemma A.5. Replacing \(\mathbb{E}^{\star }[\chi(Y_{s},Z_{s})\,|\, \mathcal{F}_{s-\varepsilon ^{q}}]\) with \(\mathbb{E}^{\star }[\chi(Y_{s},z)\,|\, \mathcal{F}_{s-\varepsilon ^{q}}]\) results in an \(\mathcal{O}(\sqrt{\delta})\) error. Lemma A.6 (using the centering condition) and the argument given above to prove (B.6) give

$$\begin{aligned} \big| \mathbb{E}^{\star }_{t,x,y,z}\left[ \xi_{s-\varepsilon ^{q}} \mathbb{E}^{\star }[\chi(Y_{s},Z_{s})\,|\, \mathcal{F}_{s-\varepsilon ^{q}}]\right]\big| &\leq c\big(\sqrt{ \varepsilon }+\sqrt{\delta}\big). \end{aligned}$$
(B.13)

Regarding the first term \(\mathbb{E}^{\star }_{t,x,y,z}[ \chi(Y_{s},Z_{s})( \xi_{s}-\xi_{s-\varepsilon ^{q}})]\) in (B.12), we write as in (B.10)

$$\begin{aligned} & \mathbb{E}^{\star }_{t,x,y,z}[ \chi(Y_{s},Z_{s})( \xi_{s}-\xi_{s-\varepsilon ^{q}})] \\ &\quad= \mathbb{E}^{\star }_{t,x,y,z}\bigg[ \chi(Y_{s},Z_{s})\int h(e^{\eta'+\zeta_{t,s}})\,\partial_{\eta}^{k} (p-\tilde{p})(\eta'-\eta)\,d\eta'\bigg] , \end{aligned}$$

where \(p\) is the Gaussian density of (B.11) and \(\tilde{p}\) is the Gaussian density of \(\mathcal{N}\left (M,\varSigma \right )\) with

$$\begin{aligned} M&= -\zeta_{\tilde{s},s}+\bigg(r-\frac{1}{2}\bar{\sigma }_{\perp,t,\tilde{s}}^{2}\bigg)(\tilde{s}-t)+\bigg(r-\frac{1}{2}\bar{\sigma }(Z_{\tilde{s}})^{2}\bigg)(T+\varDelta -\tilde{s}), \\ \varSigma &= \bar{\sigma }_{\perp,t,\tilde{s}}^{2}(\tilde{s}-t)+\bar{\sigma }(Z_{\tilde{s}})^{2}(T+\varDelta -\tilde{s}) , \end{aligned}$$

where \(\tilde{s}=s-\varepsilon ^{q}\). Using the differentiability with respect to the mean and variance of a normal density (with variance bounded away from zero) and, as in the proof of (B.6), the polynomial growth of \(h\), the uniform moments of \(Y\) and \(Z\) (Lemma A.3) and the exponential moments of \(\zeta_{t,s}\), we deduce that

$$\begin{aligned} \big| \mathbb{E}^{\star }_{t,x,y,z}[ \chi(Y_{s},Z_{s})( \xi_{s}-\xi_{s-\varepsilon ^{q}})]\big|&\leq c\,\varepsilon ^{q/2}. \end{aligned}$$
(B.14)

Combining (B.13) and (B.14) with \(q<1\) gives (B.8).

The uniform bound (B.9) follows easily by decomposing the integral over \([t,T]\) into two integrals, one over \([t,t+\varepsilon ^{q}]\) and using the bound (B.6), and the other one over \([t+\varepsilon ^{q}, T]\) and using the bound (B.8). Note that the factor \((T-s)^{p}\) in the integral is simply uniformly bounded by \((T-t)^{p}\). □

B.4 Proof of Lemma B.3

The proof essentially follows the proof of Theorem 2.6 in Appendix A.2. We define the residual \(R^{\varepsilon ,\delta, \varDelta }\) for the regularized payoff via the equation

$$ P^{\varepsilon ,\delta, \varDelta } =\widetilde{P}^{\varepsilon ,\delta , \varDelta }+\varepsilon ^{3/2} P_{3,0}^{ \varDelta } + \varepsilon ^{2} P^{ \varDelta }_{4,0}+\varepsilon \sqrt{\delta}P^{ \varDelta }_{2,1} +\varepsilon ^{3/2}\sqrt{\delta}P^{ \varDelta }_{3,1}+ R^{\varepsilon ,\delta, \varDelta }, $$
(B.15)

where the approximation \(\widetilde{P}^{\varepsilon ,\delta , \varDelta }\) is given by (B.2), and as in the proof in the smooth case in Sect. A.2, we have introduced the additional terms \((P_{3,0}^{ \varDelta }, P^{ \varDelta }_{4,0}, P^{ \varDelta }_{2,1}, P^{ \varDelta }_{3,1})\). As we discussed in Remark A.7 in that section, they are solutions of the Poisson equations (2.18), (2.19), (2.38) and (2.39) (augmented with the \(\varDelta \) superscript), whose centering conditions have been used to obtain lower order terms in the price expansion.

More precisely, applying the operator \(\mathscr{L}^{\varepsilon ,\delta}\) to \(R^{\varepsilon ,\delta, \varDelta }\), we find the analogue of (A.4), namely

$$\begin{aligned} \mathscr{L}^{\varepsilon ,\delta } R^{\varepsilon ,\delta, \varDelta } &= {G^{\varepsilon , \varDelta } + J^{\varepsilon ,\delta , \varDelta } }, \end{aligned}$$

where the source term \(G^{\varepsilon , \varDelta }\) is given by

$$\begin{aligned} G^{\varepsilon , \varDelta } &= -\big( \varepsilon ^{3/2}(\mathscr{L}_{1} P^{ \varDelta }_{4,0} + \mathscr{L}_{2} P^{ \varDelta }_{3,0}) + \varepsilon ^{2} \mathscr{L}_{2} P_{4,0}^{ \varDelta }\big) , \end{aligned}$$

and \(J^{\varepsilon ,\delta , \varDelta }\) is given by

$$\begin{aligned} &J^{\varepsilon ,\delta , \varDelta } \\ &= - \sqrt{ \delta } \big( \varepsilon (\mathscr{L}_{2} P^{ \varDelta }_{2,1}+\mathscr{L}_{1} P^{ \varDelta }_{3,1}+\mathscr{M}_{3} P^{ \varDelta }_{3,0}+ \mathscr{M}_{1} P_{2,0}^{ \varDelta }) \big) \\ &\phantom{=:} - \sqrt{ \delta } \big( \varepsilon ^{3/2} ( \mathscr{L}_{2} P^{ \varDelta }_{3,1}+\mathscr{M}_{1} P^{ \varDelta }_{3,0}+\mathscr{M}_{3} P^{ \varDelta }_{4,0}) + \varepsilon ^{2} (\mathscr{M}_{1} P^{ \varDelta }_{4,0}) \big) \\ &\phantom{=:} - \delta \big( \sqrt{ \varepsilon } ( \mathscr{M}_{2} P_{1,0}^{ \varDelta }+ \mathscr{M}_{1} P_{1,1}^{ \varDelta }+ \mathscr{M}_{3} P_{2,1}^{ \varDelta } ) + \varepsilon ( \mathscr{M}_{1} P^{ \varDelta }_{2,1}+ \mathscr{M}_{3} P^{ \varDelta }_{3,1} + \mathscr{M}_{2} P_{2,0}^{ \varDelta } ) \\ &\phantom{=:- \delta \big(} + \varepsilon ^{3/2}( \mathscr{M}_{2} P^{ \varDelta }_{3,0} + \mathscr{M}_{1} P^{ \varDelta }_{3,1} ) + \varepsilon ^{2} \mathscr{M}_{2} P^{ \varDelta }_{4,0} \big) \\ &\phantom{=:} - \delta ^{3/2} \big( \mathscr{M}_{2} P_{0,1}^{ \varDelta }+ \mathscr{M}_{1} P_{0,2}^{ \varDelta } + \sqrt{ \varepsilon } \mathscr{M}_{2} P_{1,1}^{ \varDelta } +\varepsilon \mathscr{M}_{2} P_{2,1}^{ \varDelta }+ \varepsilon ^{3/2} \mathscr{M}_{2} P_{3,1}^{ \varDelta }\big) \\ & \phantom{=:} - \delta ^{2} \mathscr{M}_{2} P_{0,2}^{ \varDelta } . \end{aligned}$$
(B.16)

We have separated the terms involving singular perturbation only, that is, \(G^{\varepsilon , \varDelta }\), and the terms involving regular perturbation as well, that is, \(J^{\varepsilon ,\delta , \varDelta }\). With the same decomposition in mind, at the maturity date \(T\), we have

$$\begin{aligned} R^{\varepsilon ,\delta, \varDelta }(T,x,y,z) &= H^{\varepsilon , \varDelta }(x,y,z) + { K^{\varepsilon ,\delta , \varDelta }(x,y,z) }, \end{aligned}$$

where the functions \(H^{\varepsilon , \varDelta }\) and \(K^{\varepsilon ,\delta , \varDelta }\) are given by

$$\begin{aligned} H^{\varepsilon , \varDelta }(x,y,z) &= - \varepsilon P_{2,0}^{ \varDelta }(T,x,y,z) - \varepsilon ^{3/2} P_{3,0}^{ \varDelta }(T,x,y,z) - \varepsilon ^{2} P_{4,0}^{ \varDelta }(T,x,y,z) , \\K^{\varepsilon ,\delta , \varDelta }(x,y,z) &= { -\varepsilon \sqrt{\delta}P^{ \varDelta }_{2,1}(T,x,y,z) - \varepsilon ^{3/2}\sqrt{\delta}P^{ \varDelta }_{3,1}(T,x,y,z) ,} \end{aligned}$$
(B.17)

and the particular term \(\varepsilon P_{2,0}^{ \varDelta }(T,x,y,z)\) is given in (B.3). The residual \(R^{\varepsilon ,\delta, \varDelta }\) has the stochastic representation

$$\begin{aligned} R^{\varepsilon ,\delta, \varDelta }(t,x,y,z) &= \mathbb{E}^{\star }_{t,x,y,z}\bigg[ - \int_{t}^{T} e^{-r(s-t)} G^{\varepsilon , \varDelta }(X_{s},Y_{s},Z_{s}) ds + e^{-r(T-t)} H^{\varepsilon , \varDelta }(X_{T},Y_{T},Z_{T}) \bigg] \\ &\phantom{=} + \mathbb{E}^{\star }_{t,x,y,z}\bigg[ - \int_{t}^{T} e^{-r(s-t)} J^{\varepsilon ,\delta , \varDelta }(X_{s},Y_{s},Z_{s}) ds + e^{-r(T-t)} K^{\varepsilon ,\delta , \varDelta }(X_{T},Y_{T},Z_{T}) \bigg] . \end{aligned}$$
(B.18)

At this point, in order to apply the bounds in Lemma B.4, it is useful to change variables to \(\eta(x) = \log x\). We note that for a function \(\xi\) that is at least \(n+2m\) times differentiable, we have

$$\begin{aligned} \mathscr{D}_{1}^{n} \mathscr{D}_{2}^{m} \xi \big(\eta(x)\big) &= \sum_{k=n+m}^{n+2m} a_{k} \partial_{\eta}^{k}\xi\big(\eta(x)\big) , \end{aligned}$$

where the \(a_{k} \) are integers; for instance, \(\mathscr{D}_{1}=x\partial_{x}=\partial_{\eta}\) and \(\mathscr{D}_{2}=x^{2}\partial_{x}^{2}=\partial_{\eta}^{2}-\partial_{\eta}\). Denoting \(\tau=T-t\), a direct computation shows that \(G^{\varepsilon , \varDelta }\) is of the form

$$\begin{aligned} &G^{\varepsilon , \varDelta }(t,e^{\eta},y,z) \\ &= \, \varepsilon ^{3/2}\bigg( \sum_{k=1}^{5} g_{k}^{(0)}(y,z) \partial_{\eta}^{k} + \tau \sum_{k=1}^{7} g_{k}^{(1)}(y,z) \partial_{\eta}^{k} + \tau^{2} \sum_{k=1}^{9} g_{k}^{(2)}(y,z) \partial_{\eta}^{k} \bigg) P_{0,0}^{ \varDelta }(t, e^{\eta},z) \\ & \phantom{=:} + \varepsilon ^{2} \bigg( \sum_{k=1}^{6} g_{k}^{(3)}(y,z) \partial_{\eta}^{k} + \tau \sum_{k=1}^{8} g_{k}^{(4)}(y,z) \partial_{\eta}^{k} + \tau^{2} \sum_{k=1}^{10} g_{k}^{(5)}(y,z) \partial_{\eta}^{k}\bigg) P_{0,0}^{ \varDelta }(t, e^{\eta},z). \end{aligned}$$
(B.19)

Likewise, one finds that \(H^{\varepsilon , \varDelta }\) is of the form

$$\begin{aligned} &{H^{\varepsilon , \varDelta }(e^{\eta},y,z) } \\ &= { \bigg( \varepsilon \sum_{k=1}^{2}h_{k}^{(0)}(y,z) \partial_{\eta}^{k} + \varepsilon ^{3/2} \sum_{k=1}^{3} h_{k}^{(1)}(y,z) \partial_{\eta}^{k} + \varepsilon ^{2} \sum_{k=1}^{4} h_{k}^{(2)}(y,z) \partial_{\eta}^{k} \bigg) P_{0,0}^{ \varDelta }(T,e^{\eta},z) } , \end{aligned}$$
(B.20)

where \(\langle h_{1}^{(0)} \rangle = \langle h_{2}^{(0)} \rangle=0\). Then, by the expressions (B.19) and (B.20) and Lemma B.4 (bounds (B.8) and (B.9) for the terms in \(\varepsilon \), and bounds (B.6) and (B.7) for the other terms), there exists a constant \(c>0\) such that uniformly in \(\varDelta >0\),

$$\begin{aligned} \big| \mathbb{E}^{\star }_{t,x,y,z} \big[ H^{\varepsilon , \varDelta }(X_{T},Y_{T},Z_{T}) \big] \big| &\leq c \big(\varepsilon ^{1+q/2} +\varepsilon \sqrt{\delta}\big) , \end{aligned}$$
(B.21)
$$\begin{aligned} \bigg| \mathbb{E}^{\star }_{t,x,y,z} \bigg[ \int_{t}^{T} e^{-r(s-t)} G^{\varepsilon , \varDelta }(X_{s},Y_{s},Z_{s}) ds \bigg]\bigg| &\leq c \big(\varepsilon ^{1+q/2} +\varepsilon \sqrt{\delta}\big) . \end{aligned}$$
(B.22)

Next, analyzing the terms \(J^{\varepsilon ,\delta , \varDelta }\) and \(K^{\varepsilon ,\delta , \varDelta }\) given by (B.16) and (B.17), respectively, we find there exists a constant \(c>0\) such that uniformly in \(\varDelta >0\),

$$\begin{aligned} &\big| \mathbb{E}^{\star }_{t,x,y,z} \big[ K^{\varepsilon ,\delta , \varDelta }(X_{T},Y_{T},Z_{T}) \big] \big| \leq { c \, \varepsilon \sqrt{ \delta } }, \end{aligned}$$
(B.23)
$$\begin{aligned} &\bigg| \mathbb{E}^{\star }_{t,x,y,z}\bigg[ \int_{t}^{T} e^{-r(s-t)} J^{\varepsilon ,\delta , \varDelta }(X_{s},Y_{s},Z_{s}) ds \bigg]\bigg| \leq c \big( \varepsilon \sqrt{ \delta } + \delta \sqrt{ \varepsilon } + \delta ^{3/2} \big) . \end{aligned}$$
(B.24)

Here, we omit the lengthy details, which consist of writing decomposition formulas for \(J^{\varepsilon ,\delta , \varDelta }\) and \(K^{\varepsilon ,\delta , \varDelta }\) similar to the ones obtained for \(G^{\varepsilon , \varDelta }\) and \(H^{\varepsilon , \varDelta }\) in (B.19) and (B.20). The terms \(J^{\varepsilon ,\delta , \varDelta }\) and \(K^{\varepsilon ,\delta , \varDelta }\) correspond to first performing a regular perturbation, which brings a factor \(\sqrt{ \delta }\), and then a first order singular perturbation, which does not involve boundary layer terms.

Putting together the definition (B.15), the representation formula (B.18) and the bounds (B.21)–(B.24), we deduce that for fixed \((t,x,y,z)\) with \(t< T\) and \(q<1\), there exists a constant \(c\) such that

$$\begin{aligned} \big| P^{\varepsilon ,\delta, \varDelta } - \widetilde{P}^{\varepsilon ,\delta , \varDelta } \big| &= { \big|\varepsilon ^{3/2} P_{3,0}^{ \varDelta } + \varepsilon ^{2} P^{ \varDelta }_{4,0}+\varepsilon \sqrt{\delta}P^{ \varDelta }_{2,1}+\varepsilon ^{3/2}\sqrt{\delta}P^{ \varDelta }_{3,1}+ R^{\varepsilon ,\delta, \varDelta }\big| } \\ &\leq c \big( \varepsilon ^{1+q/2} +\varepsilon \sqrt{\delta}+\delta\sqrt{ \varepsilon }+\delta^{3/2}\big) , \end{aligned}$$

which concludes the proof of Lemma B.3. □

Appendix C: Proof of accuracy after parameter reduction in Sect. 2.6.1

Throughout this section, we use the notation \({\mathscr{O}}(\varepsilon ^{3/2-})\) to indicate terms that are of order \({\mathscr{O}}(\varepsilon ^{1+q/2})\) for any \(q<1\). Recall from (2.66) that \(\sigma ^{*2}=\bar{\sigma }^{2}+2\sqrt{ \varepsilon }V_{2}\), where we do not show the \(z\)-dependence for simplicity of notation.

We show that replacing \(\widetilde{P}^{\varepsilon ,\delta }\) in Theorem 2.6 by \(P^{*,\varepsilon ,\delta }\) defined in (2.67) does not alter the order of accuracy of the approximation. Note that we are in fact performing a regular perturbation on the volatility. We provide here a PDE-based proof assuming smooth payoffs as in Appendix A, and we omit the details of the regularization argument which is a simple application of Lemma B.2 and its extension to the regularization of the approximation \(P^{*,\varepsilon ,\delta }\).

First, we note that \(( P_{0,0} - P_{0,0}^{*} )= {\mathscr{O}}(\sqrt{ \varepsilon })\) since

$$\begin{aligned} \left \langle \mathscr{L}_{2} \right \rangle \left ( P_{0,0} - P_{0,0}^{*} \right ) = \sqrt{ \varepsilon }\,V_{2} \mathscr{D}_{2} P_{0,0}^{*} ,\quad P_{0,0}(T,x,z) - P_{0,0}^{*}(T,x,z) = 0 . \end{aligned}$$
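Indeed, by the Feynman–Kac formula (a sketch, in the smooth-payoff setting of Appendix A), denoting by \(\bar{X}\) the geometric Brownian motion with volatility \(\bar{\sigma }(z)\) and rate \(r\) associated with \(\left \langle \mathscr{L}_{2} \right \rangle \), the zero terminal condition gives

$$ P_{0,0}(t,x,z) - P_{0,0}^{*}(t,x,z) = -\sqrt{ \varepsilon }\,V_{2}\, \mathbb{E}\bigg[ \int_{t}^{T} e^{-r(s-t)}\, \mathscr{D}_{2}P_{0,0}^{*}(s,\bar{X}_{s},z)\,ds \,\bigg|\, \bar{X}_{t}=x \bigg] , $$

where the expectation is bounded uniformly in \(\varepsilon \) by the polynomial growth of \(\mathscr{D}_{2}P_{0,0}^{*}\) and the moment bounds on \(\bar{X}\). The same argument, applied to the equations below (whose source terms are of the indicated orders and whose terminal conditions vanish or are of higher order), yields the orders claimed for \(E_{1}^{\varepsilon ,\delta }\), \(E_{2}^{\varepsilon }\), \(E_{3}^{\varepsilon }\) and \(E_{4}^{\varepsilon }\).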

Next, we define \(E_{1}^{\varepsilon ,\delta }(t,x,z)\) by

$$\begin{aligned} E_{1}^{\varepsilon ,\delta } &:= \big( P_{0,0} + \sqrt{ \varepsilon }P_{1,0} + \sqrt{ \delta } P_{0,1} \big) -\big( P_{0,0}^{*} + \sqrt{ \varepsilon }P_{1,0}^{*} + \sqrt{ \delta } P_{0,1}^{*} \big), \end{aligned}$$

the difference in the first order approximations. Note that \(E_{1}^{\varepsilon ,\delta }(T,x,z)=0\) and

$$\begin{aligned} \left \langle \mathscr{L}_{2}\right \rangle E_{1}^{\varepsilon ,\delta } &= \Big( \sqrt{ \varepsilon } \left ( \mathscr{V}^{*} + V_{2} \mathscr{D}_{2} \right ) + \sqrt{ \delta } \left \langle \mathscr{M}_{1} \right \rangle \Big) \left ( P_{0,0}^{*} - P_{0,0} \right ) \\ &{}\phantom{=:} + \varepsilon V_{2} \mathscr{D}_{2} P_{1,0}^{*} + \sqrt{ \varepsilon \delta } V_{2} \mathscr{D}_{2} P_{0,1}^{*}. \end{aligned}$$

Thus, we conclude that \(E_{1}^{\varepsilon ,\delta } = {\mathscr{O}}(\varepsilon + \sqrt{ \varepsilon \delta })\).

Similarly incorporating the order \(\varepsilon \) term, we define \(E_{2}^{\varepsilon }(t,x,y,z)\) by

$$\begin{aligned} E_{2}^{\varepsilon } := \left ( P_{0,0} + \sqrt{ \varepsilon }P_{1,0} + \varepsilon P_{2,0} \right ) - \left ( P_{0,0}^{*} + \sqrt{ \varepsilon }P_{1,0}^{*} + \varepsilon P_{2,0}^{*} \right ) . \end{aligned}$$

From equation (A.9) and by using \(\mathscr{D}_{2}( P_{0,0} - P_{0,0}^{*} ) = {\mathscr{O}}(\sqrt{ \varepsilon })\), one can show that \(E_{2}^{\varepsilon }(T,x,y,z)={\mathscr{O}}(\varepsilon ^{3/2-})\). We then compute

$$\begin{aligned} \left \langle \mathscr{L}_{2}\right \rangle E_{2}^{\varepsilon } &= \sqrt{ \varepsilon } \mathscr{V}\Big( \big( P_{0,0}^{*} + \sqrt{ \varepsilon }P_{1,0}^{*}\big) -\big( P_{0,0} + \sqrt{ \varepsilon }P_{1,0}\big) \Big) \\ &\phantom{=:} + \varepsilon \mathscr{A}\left ( P_{0,0}^{*} - P_{0,0} \right ) + \varepsilon ^{3/2} V_{2} \mathscr{D}_{2} P_{2,0}^{*} . \end{aligned}$$

Incorporating the order \(\sqrt{ \varepsilon \delta }\) term, we define \(E_{3}^{\varepsilon }(t,x,z)\) by

$$\begin{aligned} E_{3}^{\varepsilon } := \left ( P_{0,1} + \sqrt{ \varepsilon } P_{1,1} \right ) - \left ( P_{0,1}^{*} + \sqrt{\varepsilon } P_{1,1}^{*} \right ) . \end{aligned}$$

Note that \(E_{3}^{\varepsilon }(T,x,z)=0\) and

$$\begin{aligned} \left \langle \mathscr{L}_{2}\right \rangle E_{3}^{\varepsilon } &= \left \langle \mathscr{M}_{1} \right \rangle \Big(\big( P_{0,0}^{*} + \sqrt{ \varepsilon } P_{1,0}^{*} \big) -\big( P_{0,0} + \sqrt{ \varepsilon } P_{1,0} \big) \Big) \\&\phantom{=:} + \sqrt{ \varepsilon } \frac{1}{\bar{\sigma }'} \mathscr{C}\partial_{z} \left ( P_{0,0}^{*} - P_{0,0} \right ) + \sqrt{ \varepsilon } \mathscr{V}\left ( P_{0,1}^{*} - P_{0,1} \right ) . \end{aligned}$$

Now define \(E_{4}^{\varepsilon }(t,x,z)\) by

$$\begin{aligned} E_{4}^{\varepsilon } := P_{0,2} - P_{0,2}^{*} . \end{aligned}$$

Note that \(E_{4}^{\varepsilon }(T,x,z)=0\) and

$$\begin{aligned} \left \langle \mathscr{L}_{2}\right \rangle E_{4}^{\varepsilon } &= \left \langle \mathscr{M}_{1}\right \rangle \left ( P_{0,1}^{*} -P_{0,1} \right ) + \mathscr{M}_{2} \left ( P_{0,0}^{*} -P_{0,0} \right ) + \sqrt{ \varepsilon } V_{2} \mathscr{D}_{2} P_{0,2}^{*} . \end{aligned}$$

Finally,

$$\begin{aligned} &\left \langle \mathscr{L}_{2}\right \rangle \big( E_{2}^{\varepsilon } + \sqrt{ \delta } E_{3}^{\varepsilon } + \delta E_{4}^{\varepsilon } \big) \\ &\quad= \big( \sqrt{ \varepsilon } \mathscr{V}+ \sqrt{ \delta } \left \langle \mathscr{M}_{1} \right \rangle \big) E_{1}^{\varepsilon ,\delta } + \varepsilon ^{3/2} V_{2} \mathscr{D}_{2} P_{2,0}^{*} + \sqrt{ \varepsilon } \delta V_{2} \mathscr{D}_{2} P_{0,2}^{*} \\ &{}\qquad + \bigg( \varepsilon \mathscr{A}+ \sqrt{ \varepsilon \delta } \frac{1}{\bar{\sigma }'} \mathscr{C}\partial_{z}\bigg) \left ( P_{0,0}^{*} - P_{0,0} \right ) + \delta \mathscr{M}_{2} \left ( P_{0,0}^{*} -P_{0,0} \right ). \end{aligned}$$

Hence, we conclude that

$$\begin{aligned} E_{2}^{\varepsilon } + \sqrt{ \delta } E_{3}^{\varepsilon } + \delta E_{4}^{\varepsilon } &= {\mathscr{O}}\big( \varepsilon ^{3/2-} + \varepsilon \sqrt{ \delta } + \sqrt{ \varepsilon } \, \delta \big) . \end{aligned}$$

 □

Cite this article

Fouque, JP., Lorig, M. & Sircar, R. Second order multiscale stochastic volatility asymptotics: stochastic terminal layer analysis and calibration. Finance Stoch 20, 543–588 (2016). https://doi.org/10.1007/s00780-016-0298-y
