
The Martingale Approach

Chapter in Derivative Security Pricing

Abstract

The martingale approach is widely used in the literature on contingent claim analysis. Following the definition of a martingale process, we give some examples, including the Wiener process, the stochastic integral, and the exponential martingale. We then present Girsanov's theorem on the change of measure. As an application, we derive the Black–Scholes formula under the risk-neutral measure. A brief discussion of the pricing kernel representation and the Feynman–Kac formula is also included.


Notes

  1. Note that we use ξ(t, T) to denote ξ(T)∕ξ(t), where ξ(t) is defined in Eq. (8.38).

  2. The notation δ(x − X) should be interpreted as

     $$\displaystyle{\delta (x_{1} - X_{1})\delta (x_{2} - X_{2})\cdots \delta (x_{n} - X_{n}).}$$

  3. For the purposes of the discussion in this section it is useful to have a notation for the expectation operator that indicates both the time, t, and the initial value, x, of the underlying stochastic process when expectations are formed. We shall not use this notation elsewhere.
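Concretely, this notation amounts to a conditional expectation,

$$\displaystyle{\mathbb{E}_{x,t}\left [\,\cdot \,\right ] = \mathbb{E}\left [\,\cdot \mid x(t) = x\right ],}$$

so that, for example, \(\mathbb{E}_{x,t}f(x(T))\) is the expectation of f(x(T)) given that the process takes the value x at time t.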

References

  • Baxter, M., & Rennie, A. (1996). Financial calculus: An introduction to derivative pricing. Cambridge: Cambridge University Press.

  • Chung, K. L., & Williams, R. J. (1990). Introduction to stochastic integration (2nd ed.). Boston: Birkhäuser.

  • Gihman, I. I., & Skorohod, A. V. (1979). The theory of stochastic processes. New York: Springer.

  • Harrison, M. J. (1990). Brownian motion and stochastic flow systems. Malabar, FL: Robert E. Krieger Publishing Co.

  • Harrison, M. J., & Kreps, D. M. (1979). Martingales and arbitrage in multiperiod securities markets. Journal of Economic Theory, 20, 381–408.

  • Harrison, M. J., & Pliska, S. R. (1981). Martingales and stochastic integrals in the theory of continuous trading. Stochastic Processes and Their Applications, 11, 215–260.

  • Musiela, M., & Rutkowski, M. (1997). Martingale methods in financial modelling. New York: Springer.

  • Neftci, S. N. (2000). An introduction to the mathematics of financial derivatives (2nd ed.). New York: Academic Press.

  • Oksendal, B. (2003). Stochastic differential equations (6th ed.). New York: Springer.

  • Sundaram, R. K. (1997). Equivalent martingale measures and risk-neutral pricing: An expository note. Journal of Derivatives, Fall, 85–98.

Appendices

Appendix 8.1 Proof of Proposition 8.1

This proof is based on that given in Gihman and Skorohod (1979); here we consider just the one-dimensional case so as to illustrate the essential ideas. We note that (see Fig. 8.5)

$$\displaystyle{ u(x,t) = \mathbb{E}_{x,t}f(x(T)) = \mathbb{E}_{x,t}\big[\mathbb{E}_{x_{t+\varDelta t},t+\varDelta t}f(x(T))\big] = \mathbb{E}_{x,t}u(x(t +\varDelta t),t +\varDelta t). }$$
(8.134)
Fig. 8.5 Successive initial points for u(x, t)

By Ito’s Lemma

$$\displaystyle\begin{array}{rcl} & & u(x(t +\varDelta t),t +\varDelta t) = u(x,t +\varDelta t) \\ & & +\int _{t}^{t+\varDelta t}\left [\mu (x(s),s)\frac{\partial u} {\partial x}(x(s),t +\varDelta t) + \frac{1} {2}\sigma ^{2}(x(s),s)\frac{\partial ^{2}u} {\partial x^{2}}(x(s),t +\varDelta t)\right ]\mathit{ds} \\ & & +\int _{t}^{t+\varDelta t}\sigma (x(s),s)\frac{\partial u} {\partial x}(x(s),t +\varDelta t)\mathit{dz}(s). {}\end{array}$$
(8.135)

Applying the expectation operator \(\mathbb{E}_{x,t}\) across the last equation and bearing in mind that the expectation of the stochastic integral on the right hand side is zero and that \(\mathbb{E}_{x,t}u(x,t +\varDelta t) = u(x,t +\varDelta t)\) we obtain from (8.134) and (8.135) that

$$\displaystyle\begin{array}{rcl} u(x,t)& =& u(x,t +\varDelta t) + \mathbb{E}_{x,t}\int _{t}^{t+\varDelta t}\bigg[\mu (x(s),s)\frac{\partial u} {\partial x}(x(s),t +\varDelta t) \\ & & \quad + \frac{1} {2}\sigma ^{2}(x(s),s)\frac{\partial ^{2}u} {\partial x^{2}}(x(s),t +\varDelta t)\bigg]\mathit{ds} {}\end{array}$$
(8.136)

so that, on application of the mean value theorem for integrals

$$\displaystyle\begin{array}{rcl} 0& =& \,u(x,t +\varDelta t) - u(x,t) + \mathbb{E}_{x,t}\bigg[\mu (x(s'),s')\frac{\partial u} {\partial x}(x(s'),t +\varDelta t) \\ & & \quad + \frac{1} {2}\sigma ^{2}(x(s'),s')\frac{\partial ^{2}u} {\partial x^{2}}(x(s'),t +\varDelta t)\bigg]\varDelta t {}\end{array}$$
(8.137)

where s′ ∈ (t, t + Δt). Dividing the last equation by Δt and passing to the limit Δt → 0 we obtain

$$\displaystyle{ \frac{\partial u} {\partial t} (x,t) +\mu (x,t)\frac{\partial u} {\partial x}(x,t) + \frac{1} {2}\sigma ^{2}(x,t)\frac{\partial ^{2}u} {\partial x^{2}}(x,t) = 0, }$$
(8.138)

which is Eq. (8.123). The initial condition \(\lim _{t\rightarrow T}u(x,t) = f(x)\) follows since the \(\mathbb{E}_{x,t}\) and lim operations may be interchanged. Thus

$$\displaystyle{\lim _{t\rightarrow T}u(x,t) =\lim _{t\rightarrow T}\mathbb{E}_{x,t}f(x(T)) = \mathbb{E}_{x,t}\lim _{t\rightarrow T}f(x(T)) = f(x).}$$

We can use (8.138) to prove an important subsidiary result that will be useful in proving Proposition 8.2. Note that (8.134) may be written

$$\displaystyle{ \frac{\mathbb{E}_{x,t}u(x(t +\varDelta t),t +\varDelta t) - u(x,t +\varDelta t)} {\varDelta t} = \frac{-(u(x,t +\varDelta t) - u(x,t))} {\varDelta t}. }$$

Taking the limit Δ t → 0 we obtain the result

$$\displaystyle{ \lim _{\varDelta t\rightarrow 0}\frac{\mathbb{E}_{x,t}u(x(t +\varDelta t),t +\varDelta t) - u(x,t +\varDelta t)} {\varDelta t} = -\frac{\partial u} {\partial t}, }$$

which by use of (8.138) and the definition of the operator \(\mathcal{K}\) becomes

$$\displaystyle{ \lim _{\varDelta t\rightarrow 0}\frac{\mathbb{E}_{x,t}u(x(t +\varDelta t),t +\varDelta t) - u(x,t +\varDelta t)} {\varDelta t} = \mathcal{K}u. }$$
(8.139)

The relationship (8.139) holds for any functional of the process x.
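As a simple check of (8.138), consider the geometric Brownian motion case: if \(\mu (x,t) =\mu x\), \(\sigma (x,t) =\sigma x\) and \(f(x) = x\), then \(u(x,t) = \mathbb{E}_{x,t}x(T) = xe^{\mu (T-t)}\) and

$$\displaystyle{\frac{\partial u} {\partial t} +\mu x\frac{\partial u} {\partial x} + \frac{1} {2}\sigma ^{2}x^{2}\frac{\partial ^{2}u} {\partial x^{2}} = -\mu xe^{\mu (T-t)} +\mu xe^{\mu (T-t)} + 0 = 0,}$$

with \(u(x,T) = x = f(x)\), so that both (8.138) and the initial condition are satisfied in this simple case.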

Appendix 8.2 Proof of Proposition 8.2

We note first of all that by integration by parts

$$\displaystyle\begin{array}{rcl} & & \lambda \int _{t'}^{t''}\exp \left [\lambda \int _{ s}^{T}g(x(u),u)\mathit{du}\right ]g(x(s),s)\mathit{ds} \\ & & =\exp \left [\lambda \int _{t'}^{T}g(x(u),u)\mathit{du}\right ] -\exp \left [\lambda \int _{ t''}^{T}g(x(u),u)\mathit{du}\right ].{}\end{array}$$
(8.140)

Taking t′ < t″ and applying the operator \(\mathbb{E}_{x,t'}\) across the last equation, we obtain (see Fig. 8.6)

$$\displaystyle\begin{array}{rcl} & & v(x,t') - \mathbb{E}_{x,t'}\bigg(\exp \left [\lambda \int _{t''}^{T}g(x(u),u)\mathit{du}\right ]\bigg) \\ & & =\lambda \int _{ t'}^{t''}\mathbb{E}_{ x,t'}\bigg(g(x(s),s)\exp \left [\lambda \int _{s}^{T}g(x(u),u)\mathit{du}\right ]\bigg)\mathit{ds} \\ & & =\lambda \int _{ t'}^{t''}\mathbb{E}_{ x,t'}\bigg(g(x(s),s)\,\mathbb{E}_{x(s),s}\bigg[\exp \left [\lambda \int _{s}^{T}g(x(u),u)\mathit{du}\right ]\bigg]\bigg)\mathit{ds} \\ & & =\lambda \int _{ t'}^{t''}\mathbb{E}_{ x,t'}\bigg(g(x(s),s) \cdot v(x(s),s)\bigg)\mathit{ds}. {}\end{array}$$
(8.141)
Fig. 8.6 The telescoping of expectations for Eq. (8.141)

Note also that

$$\displaystyle\begin{array}{rcl} \mathbb{E}_{x,t'}\bigg(\exp \left [\lambda \int _{t''}^{T}g(x(u),u)\mathit{du}\right ]\bigg)& =& \mathbb{E}_{ x,t'}\bigg(\mathbb{E}_{x(t''),t''}\exp \left [\lambda \int _{t''}^{T}g(x(u),u)\mathit{du}\right ]\bigg) \\ & =& \mathbb{E}_{x,t'}v(x(t''),t''), {}\end{array}$$
(8.142)

so that (8.141) becomes

$$\displaystyle{ v(x,t') - \mathbb{E}_{x,t'}v(x(t''),t'') =\lambda \int _{ t'}^{t''}\mathbb{E}_{ x,t'}\bigg(g(x(s),s)v(x(s),s)\bigg)\mathit{ds}. }$$
(8.143)

The rest of the proof consists in setting \(t' = t,t'' = t + h\) and considering the limit as h → 0. Thus (8.143) becomes

$$\displaystyle{ v(x,t) - \mathbb{E}_{x,t}v(x(t + h),t + h) =\lambda \int _{ t}^{t+h}\mathbb{E}_{ x,t}[g(x(s),s)v(x(s),s)]\mathit{ds}, }$$
(8.144)

or, adding and subtracting v(x, t + h) on the left and applying the mean value theorem for integrals on the right of (8.144)

$$\displaystyle\begin{array}{rcl} v(x,t)& -& v(x,t + h) + v(x,t + h) - \mathbb{E}_{x,t}v(x(t + h),t + h) \\ & =& \lambda g(x(t),t) \cdot v(x(t),t) \cdot h + o(h). {}\end{array}$$
(8.145)

Dividing the last equation through by h and taking the limit h → 0, we obtain

$$\displaystyle{ -\lim _{h\rightarrow 0}\frac{\mathbb{E}_{x,t}v(x(t + h),t + h) - v(x,t + h)} {h} = \frac{\partial v} {\partial t} +\lambda g(x,t)v. }$$

Applying the result (8.139) in the present context we have that

$$\displaystyle{ \lim _{h\rightarrow 0}\frac{\mathbb{E}_{x,t}v(x(t + h),t + h) - v(x,t + h)} {h} = \mathcal{K}v(x,t). }$$
(8.146)

Hence we obtain the result that v(x, t) satisfies the partial differential equation

$$\displaystyle{ \frac{\partial v} {\partial t} + \mathcal{K}v +\lambda \mathit{gv} = 0. }$$
(8.147)

The initial condition \(\lim _{t\rightarrow T}v(x,t) = 1\) follows fairly simply from the definition of v(x, t).
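As a simple check of (8.147), take \(g \equiv 1\). Then \(v(x,t) = \mathbb{E}_{x,t}\exp \left [\lambda \int _{t}^{T}\mathit{du}\right ] = e^{\lambda (T-t)}\), which is independent of x, so that \(\mathcal{K}v = 0\) and

$$\displaystyle{\frac{\partial v} {\partial t} + \mathcal{K}v +\lambda gv = -\lambda e^{\lambda (T-t)} + 0 +\lambda e^{\lambda (T-t)} = 0,}$$

with \(v(x,T) = 1\), as required.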

Appendix 8.3 Proof of Proposition 8.3

The proof of Proposition 8.3 is more conveniently carried out by defining a function g such that (8.128) may be rewritten as

$$\displaystyle{v(x,t) = \mathbb{E}_{x,t}\left [\exp \left (\lambda \int _{t}^{T}g(x(s),s)\mathit{ds} + f(x(T))\right )\right ].}$$

It suffices to modify the first line of the proof of Proposition 8.2 (i.e. Eq. (8.140)) to

$$\displaystyle\begin{array}{rcl} & & \qquad \lambda \int _{t'}^{t''}\exp \left [\lambda \int _{ s}^{T}g(x(u),u)\mathit{du} + f(x(T))\right ] \cdot g(x(s),s)\mathit{ds} \\ & & \,=\,\exp \left [\lambda \int _{t'}^{T}g(x(u),u)\mathit{du} + f(x(T))\right ] -\exp \left [\lambda \int _{ t''}^{T}g(x(u),u)\mathit{du} + f(x(T))\right ].{}\end{array}$$
(8.148)

Applying the operator \(\mathbb{E}_{x,t'}\) across this last equation, bearing in mind the definition of v(x, t) and performing similar manipulations to those leading to Eq. (8.143), we find that

$$\displaystyle{ v(x,t') - \mathbb{E}_{x,t'}v(x(t''),t'') =\lambda \int _{ t'}^{t''}\mathbb{E}_{ x,t'}\bigg[g(x(s),s)v(x(s),s)\bigg]\mathit{ds}. }$$
(8.149)

This last equation is the analogue in the present context of Eq. (8.143) in Proposition 8.2. By letting \(t' = t,t'' = t + h\) and considering the limit as h → 0 we will analogously find that v(x, t) satisfies the partial differential equation

$$\displaystyle{\frac{\partial v} {\partial t} + \mathcal{K}v +\lambda g(x,t)v(x,t) = 0.}$$

The initial condition (8.130) follows easily by taking the limit t → T in (8.128).
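For orientation, note the connection with the risk-neutral pricing relation (8.107) used in Problem 8.1 below: setting \(\lambda = -1\), \(g(x,s) \equiv r\) (a constant interest rate) and interpreting \(e^{f(x(T))}\) as a terminal payoff, the definition of v becomes

$$\displaystyle{v(x,t) = e^{-r(T-t)}\mathbb{E}_{x,t}\left [e^{f(x(T))}\right ],}$$

and the partial differential equation of this proposition reduces to

$$\displaystyle{\frac{\partial v} {\partial t} + \mathcal{K}v - rv = 0,}$$

which, when x is taken to follow the risk-neutral dynamics \(\mathit{dS} = \mathit{rSdt} +\sigma \mathit{Sd}\tilde{z}(t)\) of Problem 8.1, is the Black–Scholes partial differential equation.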

Appendix 8.4 Proof of Proposition 8.4

We define the function H(x, t) satisfying

$$\displaystyle{ H_{x}(x,t) = \frac{\partial H(x,t)} {\partial x} = \frac{h(x,t)} {\sigma (x,t)}. }$$
(8.150)

By application of Ito’s lemma we find that

$$\displaystyle\begin{array}{rcl} \mathit{dH}(x(s),s)& =& \left [H_{t}(x(s),s) + \mathcal{K}H(x(s),s)\right ]\mathit{ds} +\sigma (x(s),s)H_{x}(x(s),s)\mathit{dz}(s) \\ & =& \left [H_{t}(x(s),s) + \mathcal{K}H(x(s),s)\right ]\mathit{ds} + h(x(s),s)\mathit{dz}(s), {}\end{array}$$
(8.151)

by use of (8.150), where \(H_{t}(x,t) = \partial H(x,t)/\partial t\). Hence integrating the last equation over the interval [t, T] we obtain

$$\displaystyle\begin{array}{rcl} \int _{t}^{T}h(x(s),s)\mathit{dz}(s)\,& =& \,H(x(T),T)\,-\,H(x,t) -\int _{ t}^{T}\left [H_{ t}(x(s),s)\right. \\ & & +\left.\mathcal{K}H(x(s),s)\right ]\mathit{ds}. {}\end{array}$$
(8.152)

Substituting Eq. (8.152) into Eq. (8.131) we find that ψ(x, t) may be expressed as

$$\displaystyle\begin{array}{rcl} \psi (x,t)& =& \,e^{-\gamma H(x,t)}\mathbb{E}_{ x,t}\bigg[\exp \bigg\{\lambda \int _{t}^{T}g(x(s),s)\mathit{ds} {}\\ & & -\gamma \int _{t}^{T}\left [H_{ t}(x(s),s) + \mathcal{K}H(x(s),s)\right ]\mathit{ds}\bigg\}e^{\gamma H(x(T),T)}f(x(T))\bigg]. {}\\ \end{array}$$

Finally we apply Proposition 8.3 to the function \(\phi (x,t) \equiv e^{\gamma H(x,t)}\psi (x,t)\) to obtain

$$\displaystyle\begin{array}{rcl} & & \frac{\partial } {\partial t}\left [e^{\gamma H(x,t)}\psi (x,t)\right ] + \mathcal{K}\left [e^{\gamma H(x,t)}\psi (x,t)\right ] \\ & & \quad + \left [\lambda g(x,t) -\gamma H_{t}(x,t) -\gamma \mathcal{K}H(x,t)\right ]e^{\gamma H(x,t)}\psi (x,t) = 0.{}\end{array}$$
(8.153)

Noting that

$$\displaystyle\begin{array}{rcl} \mathcal{K}[e^{\gamma H(x,t)}\psi (x,t)]& =& e^{\gamma H(x,t)}\left [\frac{1} {2}\sigma ^{2} \frac{\partial ^{2}\psi } {\partial x^{2}} +\gamma \sigma h \frac{\partial \psi } {\partial x} + \frac{1} {2}(\gamma \sigma ^{2}\frac{\partial ^{2}H} {\partial x^{2}} +\gamma ^{2}h^{2})\psi \right ] {}\\ & & +e^{\gamma H(x,t)}\left [\mu \frac{\partial \psi } {\partial x} +\mu \gamma \frac{\partial H} {\partial x} \psi \right ], {}\\ \end{array}$$

and that

$$\displaystyle{ \mathcal{K}H(x,t) = \frac{1} {2}\sigma ^{2}\frac{\partial ^{2}H} {\partial x^{2}} +\mu \frac{\partial H} {\partial x}, }$$

we find after some algebraic manipulations that ψ(x, t) indeed satisfies the partial differential equation (8.132). Finally we note directly from (8.131) that

$$\displaystyle{ \lim _{t\rightarrow T}\psi (x,t) = f(x(T)), }$$

which provides the initial condition (8.133).
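As a simple special case, suppose that μ ≡ 0 and that σ and h are constants. Then \(H(x,t) = hx/\sigma\) satisfies (8.150), \(H_{t} = 0\) and \(\mathcal{K}H = 0\), so that (8.152) reduces to \(\int _{t}^{T}h\mathit{dz}(s) = h(x(T) - x)/\sigma\) and the representation of ψ(x, t) above becomes

$$\displaystyle{\psi (x,t) = e^{-\gamma hx/\sigma }\mathbb{E}_{x,t}\left [\exp \left \{\lambda \int _{t}^{T}g(x(s),s)\mathit{ds}\right \}e^{\gamma hx(T)/\sigma }f(x(T))\right ],}$$

to which Proposition 8.3 applies directly.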

Problems

Problem 8.1

Consider the expression (8.107) for the price of a European call option, namely

$$\displaystyle{C(S,t) = e^{-r(T-t)}\tilde{\mathbb{E}}_{ t}[C(S_{T},T)],}$$

where \(\tilde{\mathbb{E}}_{t}\) is generated according to the process

$$\displaystyle{\mathit{dS} = \mathit{rSdt} +\sigma \mathit{Sd}\tilde{z}(t).}$$
  (i)

    By simulating M paths for S, approximate the expectation with

    $$\displaystyle{ \frac{1} {M}\sum _{i=1}^{M}C(S_{ T}^{(i)},T),}$$

    where i indicates the ith path. Take r = 5% p.a., σ = 20% p.a., S = 100, E = 100 and T = 6 months.

  (ii)

    Compare graphically the simulated values for various M with the true Black–Scholes value.

  (iii)

    Instead of using discretisation to simulate paths for S, use the result in Eq. (6.16). We know from Problem 6.16 that this involves no discretisation error (a minimal simulation sketch along these lines is given after this problem).
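A minimal Python sketch of parts (i)–(iii) might proceed as follows. It simulates \(S_{T}\) directly from the exact lognormal solution of the risk-neutral dynamics (presumably the result referred to in Eq. (6.16)); the function names, the use of NumPy/SciPy and the fixed seed are illustrative choices rather than part of the text.

```python
# Illustrative sketch for Problem 8.1: Monte Carlo pricing of a European call
# under the risk-neutral measure, compared with the Black-Scholes value.
import numpy as np
from scipy.stats import norm


def black_scholes_call(S, E, r, sigma, tau):
    """Closed-form Black-Scholes price of a European call with strike E."""
    d1 = (np.log(S / E) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - E * np.exp(-r * tau) * norm.cdf(d2)


def mc_call(S, E, r, sigma, tau, M, rng):
    """Monte Carlo estimate using the exact lognormal terminal distribution,
    so the only error is statistical (no discretisation error)."""
    Z = rng.standard_normal(M)
    S_T = S * np.exp((r - 0.5 * sigma**2) * tau + sigma * np.sqrt(tau) * Z)
    payoff = np.maximum(S_T - E, 0.0)
    return np.exp(-r * tau) * payoff.mean()


if __name__ == "__main__":
    S, E, r, sigma, tau = 100.0, 100.0, 0.05, 0.20, 0.5  # T - t = 6 months
    rng = np.random.default_rng(0)
    bs = black_scholes_call(S, E, r, sigma, tau)
    for M in (100, 1_000, 10_000, 100_000):
        est = mc_call(S, E, r, sigma, tau, M, rng)
        print(f"M = {M:>7d}: MC = {est:.4f}   Black-Scholes = {bs:.4f}")
```

For part (ii) one would collect the estimates over a range of M and plot them (for example with matplotlib) against the constant Black–Scholes value.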


Copyright information

© 2015 Springer-Verlag Berlin Heidelberg


Cite this chapter

Chiarella, C., He, XZ., Nikitopoulos, C.S. (2015). The Martingale Approach. In: Derivative Security Pricing. Dynamic Modeling and Econometrics in Economics and Finance, vol 21. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-45906-5_8
