Abstract
Section 7.1 begins with formal definitions and contains an extensive discussion of the basic properties of characteristic functions, including those related to the nature of the underlying distributions. Section 7.2 presents the proofs of the inversion formulas for both densities and distribution functions, and also in the space of square integrable functions. Then the fundamental continuity theorem relating pointwise convergence of characteristic functions to weak convergence of the respective distributions is proved in Sect. 7.3. The result is illustrated by proving the Poisson theorem, with a bound for the convergence rate, in Sect. 7.4. After that, the previously presented theory is extended in Sect. 7.5 to the multivariate case. Some applications of characteristic functions are discussed in Sect. 7.6, including the stability properties of the normal and Cauchy distributions and an in-depth discussion of the gamma distribution and its properties. Section 7.7 introduces the concept of generating functions and uses it to analyse the asymptotic behaviour of a simple Markov discrete time branching process. The obtained results include the formula for the eventual extinction probability, the asymptotic behaviour of the non-extinction probabilities in the critical case, and convergence in that case of the conditional distributions of the scaled population size given non-extinction to the exponential law.
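The Poisson theorem of Sect. 7.4 admits a quick numerical check. The sketch below is an illustration only, not the book's argument: the Bernoulli success probabilities `p` are made up, and the convergence-rate bound used, the classical Le Cam-type bound \(\sum p_k^2\) on the total variation distance, may differ in form from the bound proved in the text. It compares the exact distribution of a sum of independent Bernoulli variables with the Poisson law of the same mean.

```python
import numpy as np
from math import exp, factorial

# Hypothetical success probabilities of independent Bernoulli variables.
p = np.array([0.02, 0.05, 0.03, 0.04, 0.01])

# Exact pmf of S_n = xi_1 + ... + xi_n via repeated convolution.
pmf = np.array([1.0])
for pk in p:
    pmf = np.convolve(pmf, [1.0 - pk, pk])

# Poisson pmf with the same mean, truncated to the support {0, ..., n} of S_n.
lam = p.sum()
pois = np.array([exp(-lam) * lam**k / factorial(k) for k in range(len(pmf))])

# Total variation distance: half the l1 distance on {0, ..., n}, plus half the
# Poisson tail mass beyond n (where P(S_n = k) = 0).
tv = 0.5 * (np.abs(pmf - pois).sum() + (1.0 - pois.sum()))

print(tv, (p**2).sum())  # the distance lies well within the bound sum p_k^2
```

The distributions agree closely here because all \(p_k\) are small, which is exactly the regime of the Poisson theorem.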
Notes
- 1.
More precisely, in classical mathematical analysis, the Fourier transform φ(t) of a function f(x) from the space \(L_1\) of integrable functions is defined by the equation
$$\varphi(t)=\frac{1}{\sqrt{2\pi}} \int e^{itx} f(x) \,dx $$(the difference from the ch.f. consists in the factor \(1/\sqrt{2\pi}\)). Under this definition the inversion formula has a “symmetric” form: if \(\varphi\in L_1\), then
$$f(x)=\frac{1}{\sqrt{2\pi}}\int e^{-itx}\varphi(t) \,dt. $$This representation is more symmetric than the inversion formula (7.2.1) for ch.f.s in Sect. 7.2 below.
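In the ch.f. convention, without the \(1/\sqrt{2\pi}\) factor, the inversion formula for densities reads \(f(x)=\frac{1}{2\pi}\int e^{-itx}\varphi(t)\,dt\), and it can be verified numerically. The sketch below is an illustration, not an example from the text: it takes the standard normal law, whose ch.f. \(e^{-t^2/2}\) is real and rapidly decaying, and recovers the density at a point by a plain Riemann sum.

```python
import numpy as np

# Ch.f. of the standard normal law: phi(t) = exp(-t^2 / 2).
t = np.linspace(-40.0, 40.0, 400001)
dt = t[1] - t[0]
phi = np.exp(-t**2 / 2.0)

# Inversion formula for densities: f(x) = (1/2pi) * int e^{-itx} phi(t) dt,
# approximated by a Riemann sum (the integrand is negligible beyond |t| = 40).
x = 1.0
f_x = (np.exp(-1j * t * x) * phi).sum().real * dt / (2.0 * np.pi)

print(f_x)  # close to the N(0,1) density at 1: exp(-1/2)/sqrt(2*pi) ~ 0.24197
```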
- 2.
In the literature, the inversion formula is often given in the form
$$F(y) -F(x) =\frac{1}{2\pi}\lim_{A \to\infty} \int _{-A}^A \frac{e^{-itx} -e^{-ity}}{it} \varphi(t) \,dt $$which is equivalent to (7.2.7).
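For the particular choice x=−1, y=1 the kernel \((e^{-itx}-e^{-ity})/(it)\) simplifies to \(2\sin t/t\), which makes this form of the inversion formula easy to check numerically. The sketch below is an illustration with the standard normal law, not an example from the text; it compares the truncated integral with F(1)−F(−1).

```python
import numpy as np

# Ch.f. of N(0,1) and the inversion kernel for x = -1, y = 1:
# (e^{-itx} - e^{-ity})/(it) = (e^{it} - e^{-it})/(it) = 2*sin(t)/t.
A = 40.0
t = np.linspace(-A, A, 400001)
dt = t[1] - t[0]
phi = np.exp(-t**2 / 2.0)
kernel = 2.0 * np.sinc(t / np.pi)  # np.sinc(u) = sin(pi*u)/(pi*u), so this is 2*sin(t)/t

# Truncated inversion integral (the limit A -> infinity is effectively reached,
# since phi decays like exp(-t^2/2)).
approx = (kernel * phi).sum() * dt / (2.0 * np.pi)

print(approx)  # close to F(1) - F(-1) ~ 0.6826895 for the standard normal law
```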
- 3.
Formula (7.2.8) can also be obtained from (7.2.1) without integration by noting that (F(y)−F(x))/(y−x) is the value at zero of the convolution of two densities: f(x) and the uniform density over the interval [−y,−x] (see also the remark at the end of Sect. 3.6). The ch.f. of the convolution is equal to \(\frac{e^{-itx} -e^{-ity}}{(y-x)it}\varphi (t)\).
- 4.
Here we again omit the factor \(\frac{1}{\sqrt{2\pi}}\) (cf. the footnote on page 154).
- 5.
This extension is not really substantial, since close results could be established using Theorem 5.2.2, in which the \(\xi_k\) can only take the values 0 and 1. It suffices to observe that the probability of the event \(A=\bigcup_k \{\xi_k \ne 0,\ \xi_k \ne 1\}\) is bounded by the sum \(\sum q_k\), where \(q_k = \mathbf{P}(\xi_k \ne 0,\ \xi_k \ne 1)\), and therefore
$$\mathbf{P}(S_n =k) =\theta_1 \sum q_k + \Bigl(1-\theta_2 \sum q_k \Bigr) \mathbf{P}(S_n =k | \overline{A} ), \quad\theta_i \le1 , \ i=1,2, $$where \(\mathbf{P}(S_{n} =k | \overline{A} )=\mathbf{P}(S_{n}^{*} =k)\) and \(S_{n}^{*}\) are sums of independent random variables \(\xi_{k}^{*}\) with
$$\mathbf{P}\bigl(\xi_k^* =1\bigr) =p_k^* =\frac{p_k}{1-q_k}, \qquad \mathbf{P}\bigl(\xi_k^* =0\bigr) =1-p_k^* . $$
- 6.
The simple proof of Theorem 7.7.3 that we presented here is due to K.A. Borovkov.
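The extinction probability formula of Sect. 7.7 lends itself to a short numerical sketch. For a branching process whose offspring distribution has generating function f, the eventual extinction probability q is the smallest non-negative root of f(s)=s, and iterating f from 0 converges to it monotonically. The example below is a made-up illustration, not one from the text: it takes a Poisson offspring law with mean 1.5, a supercritical case, so q<1.

```python
from math import exp

# Offspring generating function of a Poisson(1.5) law: f(s) = exp(lam*(s - 1)).
lam = 1.5

def f(s: float) -> float:
    return exp(lam * (s - 1.0))

# The extinction probability q is the smallest fixed point of f in [0, 1];
# the iterates f(0), f(f(0)), ... increase monotonically to it.
q = 0.0
for _ in range(200):
    q = f(q)

print(q)  # ~0.4172 < 1, since the mean offspring number 1.5 exceeds 1
```

In the critical and subcritical cases (mean offspring number at most 1) the same iteration converges to q=1, in agreement with the dichotomy described in Sect. 7.7.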
Copyright information
© 2013 Springer-Verlag London
About this chapter
Cite this chapter
Borovkov, A.A. (2013). Characteristic Functions. In: Probability Theory. Universitext. Springer, London. https://doi.org/10.1007/978-1-4471-5201-9_7
DOI: https://doi.org/10.1007/978-1-4471-5201-9_7
Publisher Name: Springer, London
Print ISBN: 978-1-4471-5200-2
Online ISBN: 978-1-4471-5201-9
eBook Packages: Mathematics and Statistics (R0)