
Gaussian Signals, Covariance Matrices, and Sample Path Properties

Chapter in:
A First Course in Statistics for Signal Analysis

Part of the book series: Statistics for Industry, Technology, and Engineering ((SITE))

Abstract

In general, determination of the shape of the sample paths of a random signal X(t) requires knowledge of n-D (or, in the terminology of signal processing, n-point) probabilities

$$\displaystyle \begin{aligned}{\mathbf{P}}\Bigl( a_1<X(t_1)<b_1, \dots, a_n<X(t_n)<b_n\Bigr), \end{aligned}$$

for an arbitrary n and arbitrary windows a_1 < b_1, …, a_n < b_n. Usually, this information cannot be recovered if the only known signal characteristic is the autocorrelation function: the latter depends on the 2-point distributions but does not uniquely determine them. In the case of Gaussian signals, however, the autocovariances determine not only the 2-point probability distributions but all the n-point probability distributions, so that complete information is available within the second-order theory. In particular, this means that one only has to estimate means and covariances to obtain a complete model. Also, in the Gaussian universe, weak stationarity implies strict stationarity as defined in Chap. 4. For the sake of simplicity, all signals in this chapter are assumed to be real-valued. The chapter ends with a more subtle analysis of sample path properties of stationary signals, such as continuity and differentiability; in the Gaussian case these questions have fairly complete answers.
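Since a zero-mean Gaussian signal is completely determined by its autocovariance, its sample paths can be simulated exactly at any finite set of instants. A minimal sketch in NumPy, assuming a hypothetical exponential autocovariance γ(τ) = e^{−|τ|} (an illustrative choice, not one taken from the text):

```python
import numpy as np

# Hypothetical autocovariance of a weakly stationary signal
# (an Ornstein-Uhlenbeck-type choice, purely for illustration):
def gamma(tau):
    return np.exp(-np.abs(tau))

t = np.arange(0.0, 5.0, 0.5)            # sampling instants t_1, ..., t_n
Sigma = gamma(t[:, None] - t[None, :])  # covariance matrix Sigma_ij = gamma(t_i - t_j)

# Because the signal is Gaussian, the mean (here 0) and Sigma determine
# the full n-point distribution, so exact sample paths can be drawn:
rng = np.random.default_rng(0)
L = np.linalg.cholesky(Sigma)           # Sigma = L L^T
path = L @ rng.standard_normal(len(t))  # one sample path at the instants t
```

The same two ingredients, mean and covariance, are exactly what the chapter says must be estimated to obtain the complete model.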

Of course, when faced with real-world data, the hypothesis that they are distributed according to a Gaussian distribution must be tested rigorously. Many such tests have been developed by statisticians.1 In other cases, one can make an argument in favor of such a hypothesis based on the Central Limit Theorem (4.5.5) and (4.5.6).

1See, e.g., M. Denker and W.A. Woyczyński’s book mentioned in previous chapters.
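As one illustration of such a test, the Shapiro–Wilk test available in SciPy is a standard way to assess the Gaussian hypothesis; the data below are synthetic, standing in for recorded signal samples:

```python
import numpy as np
from scipy import stats

# Hypothetical data; in practice these would be the recorded signal samples.
rng = np.random.default_rng(1)
data = rng.normal(loc=0.0, scale=2.0, size=500)

# Shapiro-Wilk tests the null hypothesis that the sample is Gaussian;
# a small p-value is evidence against normality.
stat, pvalue = stats.shapiro(data)
gaussian_plausible = pvalue > 0.05
```

A rejection here would rule out the Gaussian model before any second-order analysis is attempted.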


Notes

  1. Remember that, for matrices M and N (square and invertible where inverses are taken), we have (MN)^T = N^T M^T, (MN)^{-1} = N^{-1} M^{-1}, and (M^T)^{-1} = (M^{-1})^T.

  2. Note that, for some simple (complex-valued) Gaussian stationary signals, such as X(t) = X · e^{jt}, where X ∼ N(0, 1), one can choose the t_i's so that the determinant of the covariance matrix is zero; take, for example, N = 2, t_1 = π, and t_2 = 2π. Then the joint p.d.f. of the Gaussian random vector (X(t_1), …, X(t_N))^T is not of the form (9.3.2). Such signals are called degenerate.

  3. Recall that a sequence (X_n) of random quantities is said to converge to X in the mean-square if E|X_n − X|^2 → 0 as n → ∞.

  4. This argument relies on the so-called Cauchy criterion of convergence for random quantities with finite variance: a sequence X_n converges in the mean-square as n → ∞, that is, there exists a random quantity X such that lim_{n→∞} E(X_n − X)^2 = 0, if and only if lim_{m,n→∞} E(X_n − X_m)^2 = 0. This criterion permits verification of the convergence without knowing what the limit is; see, e.g., Theorem 11.4.2 in W. Rudin, Principles of Mathematical Analysis, McGraw-Hill, New York, 1976.

  5. For details, see M. Loève, Probability Theory, Van Nostrand, Princeton, 1963, Section 34.3.

  6. For a more complete discussion of this theorem and its consequences for sample path continuity and differentiability of random signals, see, for example, M. Loève, Probability Theory, Van Nostrand, Princeton, 1963, Section 35.3.

  7. This inequality is known as the Chebyshev inequality, and its proof here has been carried out only in the case of absolutely continuous probability distributions. The proof in the discrete case is left to the reader as an exercise; see Sect. 9.5.
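The matrix identities recalled in Note 1 are easy to check numerically; a quick sketch with random matrices (which are almost surely invertible):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))
N = rng.standard_normal((3, 3))

# (MN)^T = N^T M^T
assert np.allclose((M @ N).T, N.T @ M.T)
# (MN)^{-1} = N^{-1} M^{-1}
assert np.allclose(np.linalg.inv(M @ N),
                   np.linalg.inv(N) @ np.linalg.inv(M))
# (M^T)^{-1} = (M^{-1})^T
assert np.allclose(np.linalg.inv(M.T), np.linalg.inv(M).T)
```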
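The degenerate example in Note 2 can likewise be verified numerically: at t_1 = π and t_2 = 2π, the values of X(t) = X · e^{jt} are −X and X, so the covariance matrix of the pair is singular:

```python
import numpy as np

# Covariance of (X(t_1), X(t_2)) for X(t) = X * exp(jt), X ~ N(0, 1):
# E[X(t_k) X(t_l)*] = exp(j(t_k - t_l)), since E[X^2] = 1.
t = np.array([np.pi, 2 * np.pi])
Sigma = np.exp(1j * (t[:, None] - t[None, :]))  # equals [[1, -1], [-1, 1]]
det = np.linalg.det(Sigma)
# det is (numerically) zero, so Sigma cannot be inverted and the joint
# density of the form (9.3.2) does not exist: the signal is degenerate.
```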
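The Chebyshev inequality of Note 7, P(|X − EX| ≥ a) ≤ Var(X)/a², can be illustrated with a quick Monte Carlo check; the exponential distribution below is an arbitrary test case, not one used in the chapter:

```python
import numpy as np

# Monte Carlo check of P(|X - EX| >= a) <= Var(X) / a^2
# for X ~ Exponential(1), so EX = 1 and Var(X) = 1.
rng = np.random.default_rng(3)
x = rng.exponential(scale=1.0, size=100_000)
a = 2.0
empirical = np.mean(np.abs(x - 1.0) >= a)  # estimate of P(|X - EX| >= a)
bound = 1.0 / a**2                         # Chebyshev bound: Var(X)/a^2 = 0.25
# The empirical frequency (about exp(-3) here) stays below the bound.
```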

Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Woyczyński, W.A. (2019). Gaussian Signals, Covariance Matrices, and Sample Path Properties. In: A First Course in Statistics for Signal Analysis. Statistics for Industry, Technology, and Engineering. Birkhäuser, Cham. https://doi.org/10.1007/978-3-030-20908-7_9
