
Lévy Processes and Applications


Part of the book series: Universitext (UTX)

Abstract

We define and characterise the class of Lévy processes. To illustrate the variety of processes captured within the definition of a Lévy process, we explore briefly the relationship between Lévy processes and infinitely divisible distributions. We also discuss some classical applied probability models, which are built on the strength of well-understood path properties of elementary Lévy processes. We hint at how generalisations of these models may be approached using more sophisticated Lévy processes. At a number of points later on in this text, we handle these generalisations in more detail.


Notes

  1.

    We shall also repeatedly abuse this notation throughout the book as, on occasion, we will need to talk about a Lévy process, X, referenced against a random time horizon, say e, which is independent of X and exponentially distributed. In that case, we shall use \(\mathbb{P}\) (and accordingly \(\mathbb{E}\)) for the product law associated with X and e.

  2.

    While we have assumed natural enlargement here, it is commonplace in other literature to assume that the filtration \(\mathbb{F}\) satisfies “les conditions habituelles”; in particular, for each t≥0, \(\mathcal{F}_{t}\) is complete with respect to all null sets of \(\mathbb{P}\). This can create problems, for example, when looking at changes of measure (as indeed we will in this book). The reader is encouraged to consult Warning 1.3.39 of Bichteler (2002) for further details.

  3.

    Here and throughout the remainder of the book, we use the convention that, for any n=0,1,2,…, \(\sum_{n+1}^{n}\cdot=0\).

  4.

    The notation ℜz refers to the real part of z.

  5.

    We assume that the reader is familiar with the basic notion of a stopping time for a Markov process as well as the strong Markov property. Both will be dealt with in more detail for a general Lévy process in Chap. 3.

  6.

    Following standard notation, the measure δ_0 is the Dirac measure, which assigns a unit atom to the point 0.

References

  • Barndorff-Nielsen, O.E. and Shephard, N. (2001) Modelling by Lévy Processes for Financial Econometrics. In: Lévy Processes: Theory and Applications. O.E. Barndorff-Nielsen, T. Mikosch, and S. Resnick (Eds.), Birkhäuser, Basel, 283–318.

  • Bichteler, K. (2002) Stochastic Integration with Jumps. Cambridge University Press, Cambridge.

  • Bingham, N.H. (1975) Fluctuation theory in continuous time. Adv. Appl. Probab. 7, 705–766.

  • Bingham, N.H. and Kiesel, R. (2004) Risk-Neutral Valuation. Pricing and Hedging of Financial Derivatives. Springer, Berlin.

  • Boyarchenko, S.I. and Levendorskii, S.Z. (2002a) Perpetual American options under Lévy processes. SIAM J. Control Optim. 40, 1663–1696.

  • Caballero, M.-E., Pardo, J.C. and Pérez, J.L. (2010) On Lamperti stable processes. Probab. Math. Stat. 30, 1–28.

  • Carr, P., Geman, H., Madan, D. and Yor, M. (2003) Stochastic volatility for Lévy processes. Math. Finance 13, 345–382.

  • Chan, T. (2004) Some applications of Lévy processes in insurance and finance. Finance 25, 71–94.

  • Cramér, H. (1994a) Collected Works. Vol. I. Edited and with a preface by Anders Martin-Löf. Springer, Berlin.

  • Cramér, H. (1994b) Collected Works. Vol. II. Edited and with a preface by Anders Martin-Löf. Springer, Berlin.

  • de Finetti, B. (1929) Sulle funzioni ad incremento aleatorio. Rend. Accad. Naz. Lincei 10, 163–168.

  • Eberlein, E. (2001) Application of generalized hyperbolic Lévy motions to finance. In: Lévy Processes: Theory and Applications. O.E. Barndorff-Nielsen, T. Mikosch, and S. Resnick (Eds.), Birkhäuser, Basel, 319–337.

  • Feller, W. (1971) An Introduction to Probability Theory and Its Applications. Vol. II. 2nd Edition. Wiley, New York.

  • Good, I.J. (1953) The population frequencies of species and the estimation of population parameters. Biometrika 40, 260–273.

  • Grosswald, E. (1976) The Student t-distribution of any degree of freedom is infinitely divisible. Z. Wahrscheinlichkeitstheor. Verw. Geb. 36, 103–109.

  • Halgreen, C. (1979) Self-decomposability of the generalized inverse Gaussian and hyperbolic distributions. Z. Wahrscheinlichkeitstheor. Verw. Geb. 47, 13–18.

  • Heyde, C.C. and Seneta, E. (1977) I. J. Bienaymé: Statistical Theory Anticipated. Studies in the History of Mathematics and Physical Sciences, 3. Springer, Berlin.

  • Hougaard, P. (1986) Survival models for heterogeneous populations derived from stable distributions. Biometrika 73, 386–396.

  • Huzak, M., Perman, M., Šikić, H. and Vondraček, Z. (2004a) Ruin probabilities and decompositions for general perturbed risk processes. Ann. Appl. Probab. 14, 1378–1397.

  • Huzak, M., Perman, M., Šikić, H. and Vondraček, Z. (2004b) Ruin probabilities for competing claim processes. J. Appl. Probab. 41, 679–690.

  • Ismail, M.E.H. (1977) Bessel functions and the infinite divisibility of the Student t-distribution. Ann. Probab. 5, 582–585.

  • Ismail, M.E.H. and Kelker, D.H. (1979) Special functions, Stieltjes transforms and infinite divisibility. SIAM J. Math. Anal. 10, 884–901.

  • Itô, K. (1942) On stochastic processes. I. (Infinitely divisible laws of probability). Jpn. J. Math. 18, 261–301.

  • Johnson, N.L. and Kotz, S. (1970) Distributions in Statistics. Continuous Univariate Distributions. Vol. 1. Wiley, New York.

  • Jørgensen, B. (1982) Statistical Properties of the Generalized Inverse Gaussian Distribution. Lecture Notes in Statistics, 9. Springer, Berlin.

  • Khintchine, A. (1937) A new derivation of a formula of P. Lévy. Bull. Mosc. State Univ. I, 1–5.

  • Klüppelberg, C., Kyprianou, A.E. and Maller, R.A. (2004a) Ruin probabilities for general Lévy insurance risk processes. Ann. Appl. Probab. 14, 1766–1801.

  • Klüppelberg, C., Lindner, A. and Maller, R. (2004b) A continuous-time GARCH process driven by a Lévy process: stationarity and second order behaviour. J. Appl. Probab. 41, 601–622.

  • Kolmogorov, N.A. (1932) Sulla forma generale di un processo stocastico omogeneo (un problema di B. de Finetti). Atti Accad. Naz. Lincei, Rend. 15, 805–808.

  • Koponen, I. (1995) Analytic approach to the problem of convergence of truncated Lévy flights towards the Gaussian stochastic process. Phys. Rev. E 52, 1197–1199.

  • Kuznetsov, A. (2010a) Wiener–Hopf factorization and distribution of extrema for a family of Lévy processes. Ann. Appl. Probab. 20, 1801–1830.

  • Lamperti, J. (1967a) Continuous-state branching processes. Bull. Am. Math. Soc. 73, 382–386.

  • Lamperti, J. (1967b) The limit of a sequence of branching processes. Z. Wahrscheinlichkeitstheor. Verw. Geb. 7, 271–288.

  • Lebedev, N.N. (1972) Special Functions and Their Applications. Dover, New York.

  • Lévy, P. (1924) Théorie des erreurs. La loi de Gauss et les lois exceptionnelles. Bull. Soc. Math. Fr. 52, 49–85.

  • Lévy, P. (1925) Calcul des Probabilités. Gauthier-Villars, Paris.

  • Lévy, P. (1934a) Sur les intégrales dont les éléments sont des variables aléatoires indépendantes. Ann. Sc. Norm. Super. Pisa 3, 337–366.

  • Lévy, P. (1934b) Sur les intégrales dont les éléments sont des variables aléatoires indépendantes. Ann. Sc. Norm. Super. Pisa 4, 217–218.

  • Lévy, P. (1948) Processus Stochastiques et Mouvement Brownien. Gauthier-Villars, Paris (Second edition 1965).

  • Lukacs, E. (1970) Characteristic Functions. 2nd Edition, revised and enlarged. Hafner, New York.

  • Lundberg, F. (1903) Approximerad framställning av sannolikhetsfunktionen. Återförsäkring av kollektivrisker. Akad. Afhandling. Almqvist och Wiksell, Uppsala.

  • McKean, H. (1965) Appendix: a free boundary problem for the heat equation arising from a problem of mathematical economics. Ind. Manage. Rev. 6, 32–39.

  • Samorodnitsky, G. and Taqqu, M.S. (1994) Stable Non-Gaussian Random Processes. Chapman and Hall/CRC, Boca Raton.

  • Sato, K. (1999) Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge.

  • Schoutens, W. and Teugels, J.L. (1998) Lévy processes, polynomials and martingales. Commun. Stat., Stoch. Models 14, 335–349.

  • Steutel, F.W. (1970) Preservation of Infinite Divisibility under Mixing and Related Topics. Math. Centre Tracts, 33. Math. Centrum, Amsterdam.

  • Steutel, F.W. (1973) Some recent results in infinite divisibility. Stoch. Process. Appl. 1, 125–143.

  • Thorin, O. (1977a) On the infinite divisibility of the Pareto distribution. Scand. Actuar. J. 31–40.

  • Thorin, O. (1977b) On the infinite divisibility of the lognormal distribution. Scand. Actuar. J. 121–148.

  • Tweedie, M.C.K. (1984) An index which distinguishes between some important exponential families. In: Statistics: Applications and New Directions. Calcutta, 1981, 579–604. Indian Statist. Inst., Calcutta.

  • Watson, H.W. and Galton, F. (1874) On the probability of the extinction of families. J. Anthropol. Inst. G. B. Irel. 4, 138–144.

  • Zolotarev, V.M. (1986) One Dimensional Stable Distributions. American Mathematical Society, Providence.


Exercises

1.1

Prove that, in order to check for stationary and independent increments of the process \(\{X_{t}: t\geq0\}\), it suffices to check that, for all \(n\in\mathbb{N}\), \(0\leq s_{1}\leq t_{1}\leq\cdots\leq s_{n}\leq t_{n}<\infty\) and \(\theta_{1},\ldots, \theta_{n}\in\mathbb{R}\),

$$\mathbb{E} \Biggl[\prod_{j=1}^n{\rm e}^{\mathrm{i}\theta_j (X_{t_j}-X_{s_j}) } \Biggr] = \prod_{j=1}^n \mathbb{E} \bigl[{\rm e}^{\mathrm{i}\theta_j X_{t_j-s_j} } \bigr]. $$

Show, moreover, that the sum of two (or indeed any finite number of) independent Lévy processes is again a Lévy process.

1.2

Suppose that \(S=\{S_{n}: n\geq0\}\) is any random walk and \(\varGamma_{p}\) is an independent random variable with a geometric distribution on {0,1,2,…}, with parameter p.

  1. (i)

    Show that Γ p is infinitely divisible.

  2. (ii)

    Show that \(S_{\boldsymbol{\Gamma}_{p}}\) is infinitely divisible.
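
One standard route through part (i) is to note that the geometric probability generating function p/(1−(1−p)s) is the n-th power of the negative binomial pgf with parameters 1/n and p. The sketch below (our own helpers `nb_pmf` and `nfold_geometric`, not the book's notation) checks this numerically by convolving n copies of that negative binomial mass function and recovering the geometric one:

```python
import math

def nb_pmf(k, r, p):
    # negative binomial mass on {0,1,...}: Gamma(k+r)/(Gamma(r) k!) * p^r (1-p)^k
    return math.gamma(k + r) / (math.gamma(r) * math.factorial(k)) * p ** r * (1.0 - p) ** k

def convolve(a, b):
    # truncated convolution of two mass sequences of equal length
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(len(a))]

def nfold_geometric(p, n, kmax=60):
    # convolve n copies of NB(1/n, p); the n-th power of its pgf is p/(1-(1-p)s)
    base = [nb_pmf(k, 1.0 / n, p) for k in range(kmax + 1)]
    out = [1.0] + [0.0] * kmax  # unit mass at 0
    for _ in range(n):
        out = convolve(out, base)
    return out

p, n = 0.4, 7
approx = nfold_geometric(p, n)
exact = [p * (1.0 - p) ** k for k in range(len(approx))]
err = max(abs(a - e) for a, e in zip(approx[:20], exact[:20]))
```

The agreement of the first terms illustrates that Γ_p decomposes into n i.i.d. pieces for every n, which is exactly infinite divisibility.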

1.3

(Proof of Lemma 1.7)

In this exercise, we derive the Frullani identity.

  1. (i)

    Show that, for any function f such that f′ exists and is continuous and f(0) and f(∞) are finite,

    $$\int_0^\infty\frac{ f(ax)-f(bx)}{x}{\mathrm{d}}x = \bigl(f(0)- f(\infty)\bigr)\log\biggl(\frac{b}{a} \biggr), $$

    where b>a>0.

  2. (ii)

    By choosing \(f(x)=\mathrm{e}^{-x}\), a=α>0 and b=α−z, where z<0, show that

    $$ \frac{1}{(1- z/\alpha)^\beta} = \mathrm{e}^{-\int_0^\infty (1-\mathrm{e}^{zx}) \frac{\beta}{x}\mathrm{e}^{-\alpha x}{\mathrm {d}}x} $$
    (1.24)

    and hence, by analytic extension, show that the above identity is still valid for all \(z\in\mathbb{C}\) such that ℜz≤0.
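
The Frullani identity of part (i) can be sanity-checked numerically for the concrete choice f(x)=e^{−x}, where the right-hand side reduces to log(b/a). This is only a crude trapezoidal-rule sketch; the helper name `frullani_lhs` and its cutoffs are our own choices:

```python
import math

def frullani_lhs(a, b, eps=1e-8, upper=60.0, n=200000):
    # trapezoidal rule for int_eps^upper (e^{-a x} - e^{-b x}) / x dx;
    # the integrand extends continuously to x = 0 with value b - a
    h = (upper - eps) / n
    total = 0.0
    for i in range(n + 1):
        x = eps + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * (math.exp(-a * x) - math.exp(-b * x)) / x
    return total * h

a, b = 1.5, 4.0
lhs = frullani_lhs(a, b)
rhs = math.log(b / a)  # (f(0) - f(infinity)) log(b/a) for f(x) = e^{-x}
```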

1.4

Establishing formulae (1.9) and (1.10) from the Lévy measure given in (1.11) is the result of a series of technical manipulations of special integrals. In this exercise, we work through them. In the following text, we will use the gamma function Γ(z), defined by

$$\varGamma(z) = \int_0^\infty t^{z-1} \mathrm{e}^{-t}{\mathrm{d}}t, $$

for z>0. Note that the gamma function can also be analytically extended so that it is defined on \(\mathbb{R}\backslash\{ 0,-1,-2,\ldots\}\) (see Lebedev 1972). Whilst the specific definition of the gamma function for negative numbers will not play an important role in this exercise, the following two facts, which can be derived from it, will. For \(z\in\mathbb{R}\backslash\{0,-1,-2,\ldots\}\), the gamma function satisfies the recursion Γ(1+z)=zΓ(z), and \(\varGamma(1/2)=\sqrt{\pi}\).

  1. (i)

    Suppose that 0<α<1. Prove that for u>0,

    $$ \int_{0}^{\infty}\bigl(\mathrm{e}^{-ur}-1 \bigr)r^{-\alpha-1}{\mathrm{d}}r=\varGamma(-\alpha)u^{\alpha} $$

    and show that the same equality is valid when −u is replaced by any complex number w≠0 with ℜw≤0. Conclude, by considering w=i, that

    $$ \int_{0}^{\infty}\bigl(1- \mathrm{e}^{\mathrm{i}r} \bigr)r^{-\alpha -1}{\mathrm{d}}r= -\varGamma(-\alpha)\mathrm{e}^{-\mathrm{i}\pi \alpha/2} $$
    (1.25)

    and similarly for the complex conjugate of both sides of (1.25). Deduce (1.9) by considering the integral

    $$ \int_{0}^{\infty}\bigl(1 - \mathrm{e}^{\mathrm{i}\xi\theta r} \bigr)r^{-\alpha-1}{\mathrm{d}}r $$

    for ξ=±1 and \(\theta\in\mathbb{R}\). Note that you will have to take \(a=\eta-\int_{\mathbb{R}}x\mathbf{1}_{(|x|<1)} \varPi ( {\mathrm{d}}x ) \), which you should check is finite.

  2. (ii)

    Now suppose that α=1. First prove that

    $$\int_{|x|<1}\mathrm{e}^{\mathrm{i}\theta x}\bigl(1-|x|\bigr){\mathrm{d}}x = 2 \biggl( \frac{1- \cos\theta}{\theta^2} \biggr), $$

    for \(\theta\in\mathbb{R}\). Hence, by Fourier inversion, show that

    $$\int_0^\infty\frac{1- \cos r}{r^2}{\mathrm{d}}r = \frac{\pi}{2}. $$

    Use this identity to show that for z>0,

    $$\int_0^\infty\bigl(1- \mathrm{e}^{\mathrm{i}rz} + \mathrm{i}zr\mathbf{1}_{(r<1)}\bigr)\frac{1}{r^2}{\mathrm{d}}r = \frac{\pi}{2}z + \mathrm{i}z\log z - \mathrm{i}kz, $$

    for some constant \(k\in\mathbb{R}\). By considering the complex conjugate of the above integral, establish the expression in (1.10). Note that you will need a different choice of a from that in part (i).

  3. (iii)

    Now suppose that 1<α<2. Integrate (1.25) by parts to get

    $$ \int_{0}^{\infty }\bigl(\mathrm{e}^{\mathrm{i}r}-1- \mathrm{i}r\bigr)r^{-\alpha -1}{\mathrm{d}}r=\varGamma(-\alpha)\mathrm{e}^{-\mathrm{i}\pi \alpha/2}. $$

    Deduce the identity (1.9) in a similar manner to the proof of (i) and (ii).
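
The first identity of part (i) can be checked numerically. We take the illustrative values α=1/2 and u=2 (our choice) and substitute r=s², which keeps the integrand bounded near the origin; the tail beyond r=smax² is handled with the analytic estimate −R^{−α}/α, valid once e^{−ur} is negligible:

```python
import math

def stable_lhs(u, smax=7.0, n=200000):
    # alpha = 1/2: substitute r = s^2 in int_0^inf (e^{-u r} - 1) r^{-3/2} dr
    # to get int_0^inf 2 (e^{-u s^2} - 1) / s^2 ds, bounded near s = 0
    h = smax / n
    total = 0.5 * (-2.0 * u)  # limiting integrand value at s = 0, trapezoid weight 1/2
    for i in range(1, n + 1):
        s = i * h
        w = 0.5 if i == n else 1.0
        total += w * 2.0 * (math.exp(-u * s * s) - 1.0) / (s * s)
    total *= h
    # tail beyond r = smax^2, where e^{-u r} is negligible: int_R^inf (-1) r^{-3/2} dr
    return total - 2.0 / smax

u = 2.0
approx = stable_lhs(u)
exact = math.gamma(-0.5) * math.sqrt(u)  # Gamma(-alpha) u^alpha at alpha = 1/2
```

Here math.gamma evaluates the analytically extended gamma function at the negative argument −1/2.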

1.5

For any \(\theta\in\mathbb{R}\), prove that

$$\exp\bigl\{ \mathrm{i}\theta X_t + t \varPsi(\theta)\bigr\} , \quad t \geq0, $$

is a martingale where {X t :t≥0} is a Lévy process with characteristic exponent Ψ.
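
The martingale can be illustrated by Monte Carlo at a fixed time t in the special case of standard Brownian motion, for which Ψ(θ)=θ²/2. Sample size, seed and parameter values below are arbitrary choices of ours:

```python
import cmath
import math
import random

def mc_char(theta, t, n=200000, seed=7):
    # Monte Carlo estimate of E[exp(i theta B_t)] for standard Brownian motion
    rng = random.Random(seed)
    total = 0j
    sd = math.sqrt(t)
    for _ in range(n):
        total += cmath.exp(1j * theta * rng.gauss(0.0, sd))
    return total / n

theta, t = 1.0, 1.0
# Psi(theta) = theta^2 / 2 for standard Brownian motion, so taking expectations
# in the martingale at time t should give a value close to 1
mart = mc_char(theta, t) * cmath.exp(t * theta * theta / 2.0)
```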

1.6

In this exercise, we will work out in detail some features of the inverse Gaussian process discussed earlier on in this chapter. Recall that τ={τ s :s≥0} is a non-decreasing Lévy process defined by τ s =inf{t≥0:B t +bt>s}, s≥0, where B={B t :t≥0} is a standard Brownian motion and b>0.

  1. (i)

    Argue along the lines of Exercise 1.5 to show that, for each λ>0,

    $$\mathrm{e}^{\lambda B_t - \frac{1}{2}\lambda^2 t},\quad t\geq0, $$

    is a martingale. Use Doob’s Optional Sampling Theorem to obtain

    $$\mathbb{E}\bigl(\mathrm{e}^{-(\frac{1}{2}\lambda^2 + b\lambda )\tau_s}\bigr) = \mathrm{e}^{-\lambda s}. $$

    Use analytic extension to deduce further that τ s has characteristic exponent

    $$\varPsi_s(\theta)= s\bigl(\sqrt{-2\mathrm{i}\theta+ b^2} - b\bigr), $$

    for all \(\theta\in\mathbb{R}\).

  2. (ii)

    Defining the measure \(\varPi({\mathrm{d}}x)=(2\pi x^{3})^{-1/2} \mathrm{e}^{-xb^{2}/2}{\mathrm{d}}x\) on x>0, check, using (1.25) from Exercise 1.4, that

    $$\int_0^\infty\bigl(1-\mathrm{e}^{\mathrm{i}\theta x} \bigr)\varPi({\mathrm{d}}x) = \varPsi(\theta), $$

    for all \(\theta\in\mathbb{R}\). Confirm that the triple (a,σ,Π), appearing in the Lévy–Khintchine formula, is thus σ=0, Π as above and \(a= - 2sb^{-1}\int_{0}^{b} (2\pi)^{-1/2} \mathrm{e}^{-y^{2}/2}{\mathrm{d}}y\).

  3. (iii)

    Taking

    $$\mu_s({\mathrm{d}}x) = \frac{s}{\sqrt{2\pi x^3}} \mathrm{e}^{sb} \mathrm{e}^{-\frac{1}{2}(s^2 x^{-1} + b^2 x)}{\mathrm{d}}x, \quad x>0, $$

    show that

    $$\begin{aligned} \int_0^\infty\mathrm{e}^{-\lambda x} \mu_s ({\mathrm{d}}x) =& \mathrm{e}^{bs - s\sqrt{b^2 + 2\lambda}} \int _0^\infty\frac{s}{\sqrt{2\pi x^3}}\mathrm{e}^{-\frac{1}{2}(\frac {s}{\sqrt{x}} - \sqrt{(b^2 + 2\lambda)x} )^2 } {\mathrm{d}}x \\ =& \mathrm{e}^{bs - s\sqrt{b^2 + 2\lambda}} \int_0^\infty \sqrt{\frac{2\lambda+ b^2}{2\pi u}} \mathrm{e}^{-\frac{1}{2} (\frac{s}{\sqrt{u}} - \sqrt{(b^2 + 2\lambda)u})^2}{\mathrm{d}}u. \end{aligned}$$

    Hence, by adding the last two integrals together deduce that

    $$\int_0^\infty\mathrm{e}^{-\lambda x} \mu_s ({\mathrm{d}}x) = \mathrm{e}^{-s(\sqrt{b^2 + 2\lambda} -b)}, $$

    thereby confirming both that μ s (dx) is a probability distribution on (0,∞), and that it is the probability distribution of τ s .
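
Part (iii) can be verified directly by quadrature: integrating e^{−λx} against the claimed density μ_s(dx) should reproduce e^{−s(√(b²+2λ)−b)}. A minimal sketch, taking the illustrative values s=b=1 (our choice):

```python
import math

def ig_laplace(lam, s=1.0, b=1.0, lo=1e-3, hi=30.0, n=200000):
    # trapezoidal quadrature of e^{-lam x} against the density mu_s(dx)
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        dens = (s / math.sqrt(2.0 * math.pi * x ** 3)) * math.exp(s * b) \
            * math.exp(-0.5 * (s * s / x + b * b * x))
        total += w * math.exp(-lam * x) * dens
    return total * h

lam = 1.0
approx = ig_laplace(lam)
exact = math.exp(-(math.sqrt(1.0 + 2.0 * lam) - 1.0))  # s = b = 1
```

Evaluating at λ=0 also confirms numerically that μ_s is a probability distribution on (0,∞).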

1.7

Show that for a standard Brownian motion \(B=\{B_{t}: t>0\}\), the first-passage process \(\tau=\{\tau_{s}: s>0\}\), where \(\tau_{s}=\inf\{t\geq0: B_{t}>s\}\), is a stable process with parameters α=1/2 and β=1.

1.8

(Proof of Theorem 1.9)

As we shall see in this exercise, the proof of Theorem 1.9 follows from the proof of a more general result given by the conclusion of parts (i)–(iv) below for random walks.

  1. (i)

    Suppose that S={S n :n≥0} is a random walk with S 0=0 and jump distribution Q on \(\mathbb{R}\). By considering the variables \(S^{*}_{k} := S_{n} - S_{n-k}\) for k=0,1,…,n and noting that the joint distributions of (S 0,…,S n ) and \((S^{*}_{0},\ldots,S^{*}_{n})\) are identical, show that for all y>0 and n≥1,

    $$\begin{aligned} &P(S_n \in{\mathrm{d}}y \text{ and }S_n> S_j \text{ for } j= 0, \ldots, n-1) \\ &\quad= P(S_n \in{\mathrm{d}}y \text{ and }S_j >0 \text{ for } j=1,\ldots,n). \end{aligned}$$

    Hint: it may be helpful to draw a diagram of the path of the first n steps of S and to rotate it by 180°.

  2. (ii)

    Define

    $$T^-_0 = \inf\{n> 0 : S_n \leq0 \}\quad\text{and}\quad T^+_0 = \inf\{n> 0 : S_n > 0\}. $$

    By summing both sides of the equality

    $$\begin{aligned} & P(S_1>0, \ldots, S_n >0, S_{n+1}\in{\mathrm{d}}x ) \\ &\quad= \int_{(0,\infty)} P(S_1>0, \ldots, S_n>0, S_n \in{\mathrm{d}}y) Q({\mathrm{d}}x-y) \end{aligned}$$

    over n, show that for x≤0,

    $$P(S_{T^-_0}\in{\mathrm{d}}x)= \int_{[0,\infty)} V({\mathrm{d}}y) Q({\mathrm{d}}x - y), $$

    where, for y≥0,

    $$V({\mathrm{d}}y)= \delta_0 ({\mathrm{d}}y) + \sum_{n\geq1}P(H_n \in{\mathrm{d}}y ) $$

    and H={H n :n≥0} is a random walk with H 0=0 and step distribution given by \(P(S_{T^{+}_{0}}\in{\mathrm{d}}z)\), for z≥0.

  3. (iii)

    Embedded in the Cramér–Lundberg model is a random walk S whose increments are equal in distribution to \(c\mathbf{e}_{\lambda} - \xi_{1}\), where \(\mathbf{e}_{\lambda}\) is an independent exponential random variable with mean 1/λ. Noting (with obvious notation) that \(c\mathbf{e}_{\lambda}\) has the same distribution as \(\mathbf{e}_{\beta}\), where β=λ/c, show that the step distribution of this random walk satisfies

    $$Q(z,\infty)= \biggl(\int_0^\infty \mathrm{e}^{-\beta u} F({\mathrm{d}}u) \biggr)\mathrm{e}^{-\beta z}, \quad z\geq 0, $$

    and

    $$Q(-\infty, -z) = E\bigl(\overline{F}(\mathbf{e}_\beta+ z)\bigr),\quad z>0, $$

    where \(\overline{F}(x) = 1-F(x)\), for all x≥0, and E is expectation with respect to the random variable e β .

  4. (iv)

    Since upward jumps are exponentially distributed in this random walk, use the lack-of-memory property to reason that

    $$V({\mathrm{d}}y)= \delta_0 ({\mathrm{d}}y) + \beta{\mathrm {d}}y,\quad y\geq0. $$

    Hence deduce from parts (ii) and (iii) that

    $$P( - S_{T^-_0}> x)=E \biggl(\overline{F}(\mathbf{e}_\beta + x) + \int_x^\infty\beta\overline{F}( \mathbf{e}_\beta+ z){\mathrm{d}}z \biggr) $$

    and so, by writing out the above identity with the density of the exponential distribution, show that the conclusions of Theorem 1.9 hold.
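
The duality identity of part (i) admits an exact brute-force check for the simple ±1 random walk (a particular choice of Q, made by us for illustration): enumerating all 2^n paths, the two events must produce identical counts for every terminal value y>0:

```python
from itertools import product

def duality_counts(n=10):
    # exact enumeration of all 2^n simple +-1 walk paths started at 0,
    # comparing the two events of part (i) for each terminal value y > 0
    left, right = {}, {}
    for steps in product((-1, 1), repeat=n):
        path = [0]
        for step in steps:
            path.append(path[-1] + step)
        y = path[-1]
        if y <= 0:
            continue
        if all(y > path[j] for j in range(n)):          # S_n > S_j, j = 0,...,n-1
            left[y] = left.get(y, 0) + 1
        if all(path[j] > 0 for j in range(1, n + 1)):   # S_j > 0, j = 1,...,n
            right[y] = right.get(y, 0) + 1
    return left, right

left, right = duality_counts(10)
```

The equality of the two dictionaries is exactly the path-reversal (rotation by 180°) argument of the hint, checked event by event.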

1.9

(Proof of Theorem 1.11)

Suppose that X is a compound Poisson process of the form

$$X_t = t - \sum_{i=1}^{N_t} \xi_i, \quad t\geq0, $$

where the process \(N=\{N_{t}: t\geq0\}\) is a Poisson process with rate λ>0, and \(\{\xi_{i}: i\geq1\}\) are positive, independent and identically distributed random variables with common distribution F having finite mean μ.

  1. (i)

    Show by direct computation that, for all θ≥0, \(\mathbb{E}(\mathrm{e}^{\theta X_{t}})= \mathrm{e}^{\psi(\theta)t} \), where

    $$\psi(\theta)=\theta- \lambda\int_{(0,\infty)} \bigl(1- \mathrm{e}^{-\theta x}\bigr)F({\mathrm{d}}x). $$

    Show that ψ is strictly convex, is equal to zero at the origin and tends to infinity at infinity. Further, show that ψ(θ)=0 has one additional root in [0,∞) other than θ=0 if and only if ψ′(0+)<0.

  2. (ii)

    Show that \(\{\exp\{\theta^{*} X_{t\wedge\tau^{+}_{x}} \}: t\geq0\}\) is a martingale, where \(\tau^{+}_{x} = \inf\{t > 0: X_{t} >x\}\), x>0, and θ^∗ is the largest root described in the previous part of the question. Show further that

    $$\mathbb{P}(\overline{X}_\infty> x) = \mathrm{e}^{-\theta^* x}, $$

    for all x>0.

  3. (iii)

    Show that for all t≥0,

    $$\int_0^t \mathbf{1}_{(W_s = 0 )}{\mathrm{d}}s = ( \overline{X}_t- w)\vee0, $$

    where \(W_{t} = (w\vee\overline{X}_{t}) - X_{t}\).

  4. (iv)

    Deduce that \(I: = \int_{0}^{\infty}\mathbf{1}_{(W_{s} = 0 )}{\mathrm{d}}s=\infty\) if λμ≤1.

  5. (v)

    Assume that λμ>1. Show that

    $$\mathbb{P}\bigl(I\in{\mathrm{d}}x; \tau^+_w =\infty| W_0 =w \bigr) = \bigl(1- \mathrm{e}^{-\theta^* w}\bigr)\delta_0 ({\mathrm{d}}x), \quad x\geq0. $$

    Next use the lack-of-memory property to deduce that

    $$\mathbb{P}\bigl(I\in{\mathrm{d}}x; \tau^+_w <\infty| W_0 =w \bigr) = \theta^* \mathrm{e}^{-\theta^*(w+x)}{\mathrm{d}}x. $$
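
The root θ^∗ of part (i) can be computed numerically once F is specified. Taking exponential claims with mean μ (our illustrative choice, not imposed by the exercise), the integral in ψ evaluates to θμ/(1+θμ), and bisection recovers the closed-form root λ−1/μ:

```python
def psi(theta, lam, mu):
    # psi(theta) = theta - lam * int_(0,inf) (1 - e^{-theta x}) F(dx); for
    # exponential claims with mean mu the integral equals theta*mu/(1 + theta*mu)
    return theta - lam * theta * mu / (1.0 + theta * mu)

def theta_star(lam, mu, lo=1e-6, hi=100.0, tol=1e-12):
    # bisection for the strictly positive root, which exists iff psi'(0+) = 1 - lam*mu < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if psi(mid, lam, mu) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam, mu = 2.0, 1.0
root = theta_star(lam, mu)  # closed form for exponential claims: lam - 1/mu
```

With θ^∗ in hand, part (ii)'s identity \(\mathbb{P}(\overline{X}_\infty>x)=\mathrm{e}^{-\theta^{*}x}\) is fully explicit.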

1.10

Here, we solve a considerably simpler optimal stopping problem than (1.20). Suppose, as in the aforementioned problem, that X is a linear Brownian motion with scaling parameter σ>0 and drift \(\gamma\in\mathbb{R}\). Fix K>0 and let

$$ v(x) = \sup_{a\in\mathbb{R}}\mathbb{E}_x\bigl( \mathrm{e}^{-q \tau^-_a}\bigl(K - \mathrm{e}^{X_{\tau^-_a}} \bigr)^+\bigr), $$
(1.26)

where

$$\tau^-_a = \inf\{t>0 : X_t < a\}. $$
  1. (i)

    Following similar arguments to those in Exercises 1.5 and 1.9, show that \(\{\exp\{\theta X_{t} - \psi(\theta)t\}: t\geq0\}\) is a martingale, where \(\psi(\theta)=\sigma^{2}\theta^{2}/2+\gamma\theta\).

  2. (ii)

    By considering the martingale in part (i) at the stopping time \(t\wedge\tau^{+}_{x}\) and then letting t↑∞, deduce that

    $$\mathbb{E}\bigl(\mathrm{e}^{-q\tau^{+}_x}\bigr) = \mathrm{e}^{ - x(\sqrt{\gamma^2 + 2\sigma^2 q} - \gamma)/\sigma^2} $$

    and hence deduce that for a≥0,

    $$\mathbb{E}\bigl(\mathrm{e}^{-q\tau^{-}_{-a}}\bigr) = \mathrm{e}^{ - a(\sqrt{\gamma^2 + 2\sigma^2 q} +\gamma)/\sigma^2}. $$
  3. (iii)

    Let \(v(x,a) = \mathbb{E}_{x}(\mathrm{e}^{-q\tau^{-}_{-a}} (K - \exp\{X_{\tau^{-}_{-a}}\}))\). For each fixed x differentiate v(x,a) in the variable a and show that the solution to (1.26) is the same as the solution given in Theorem 1.13.
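
In part (ii), the rate x(√(γ²+2σ²q)−γ)/σ² appearing in the exponent of \(\mathbb{E}(\mathrm{e}^{-q\tau^{+}_{x}})\) is precisely the positive root of ψ(θ)=q, with ψ from part (i). A minimal numerical check (parameter values are arbitrary choices of ours):

```python
import math

def psi(theta, sigma, gamma):
    # Laplace exponent from part (i): psi(theta) = sigma^2 theta^2 / 2 + gamma theta
    return 0.5 * sigma * sigma * theta * theta + gamma * theta

def first_passage_rate(q, sigma, gamma):
    # positive root of psi(theta) = q, so that E(e^{-q tau_x^+}) = e^{-x * rate}
    return (math.sqrt(gamma * gamma + 2.0 * sigma * sigma * q) - gamma) / (sigma * sigma)

q, sigma, gamma = 0.7, 1.3, -0.4
rate = first_passage_rate(q, sigma, gamma)
```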

1.11

In this exercise, we characterise the Laplace exponent of the continuous-time Markov branching process, Y, described in Sect. 1.3.4.

  1. (i)

    Show that for ϕ>0 and t≥0 there exists some function u t (ϕ)>0 satisfying

    $$E_y\bigl(\mathrm{e}^{-\phi Y_t}\bigr) = \mathrm{e}^{- y u_t (\phi)}, $$

    where y∈{0,1,2,…}.

  2. (ii)

    Show that for s,t≥0,

    $$u_{t+s}(\phi) = u_s\bigl(u_t(\phi)\bigr). $$
  3. (iii)

    Appealing to the infinitesimal behaviour of the Markov chain Y, show that

    $$\frac{\partial u_t (\phi)}{\partial t} = \psi\bigl(u_t (\phi)\bigr) $$

    and u 0(ϕ)=ϕ, where

    $$\psi(q) = \lambda\int_{[-1,\infty)} \bigl(1 - \mathrm{e}^{-qx} \bigr) F({\mathrm{d}}x) $$

    and F is given in (1.22).
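
The evolution equation of part (iii) can be integrated numerically for a concrete mechanism. Taking λ=1 and F=(δ_{−1}+δ_{+1})/2 (an illustrative binary branching choice of ours, so that ψ(q)=λ(1−cosh q)), a classical Runge–Kutta sketch then exhibits the semigroup property u_{t+s}=u_s∘u_t of part (ii):

```python
import math

LAM = 1.0  # branching rate (illustrative)

def branch_psi(q):
    # psi(q) = LAM * int_[-1,inf) (1 - e^{-q x}) F(dx) with F = (delta_{-1} + delta_{+1})/2,
    # which evaluates to LAM * (1 - cosh q)
    return LAM * (1.0 - math.cosh(q))

def u(t, phi, steps=20000):
    # classical RK4 for du/dt = psi(u), with u_0(phi) = phi
    h = t / steps
    q = phi
    for _ in range(steps):
        k1 = branch_psi(q)
        k2 = branch_psi(q + 0.5 * h * k1)
        k3 = branch_psi(q + 0.5 * h * k2)
        k4 = branch_psi(q + h * k3)
        q += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return q

phi, s, t = 1.0, 0.3, 0.5
lhs = u(s + t, phi)
rhs = u(s, u(t, phi))  # semigroup property from part (ii)
```

Since ψ(q)<0 for q>0 and ψ(0)=0, the computed u_t(φ) decreases from φ but stays strictly positive, as part (i) requires.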


Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Kyprianou, A.E. (2014). Lévy Processes and Applications. In: Fluctuations of Lévy Processes with Applications. Universitext. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-37632-0_1
