
Sampling, Sample Moments and Sampling Distributions

Abstract

Prior to this point, our study of probability theory and its implications has essentially addressed questions of deduction, being of the type: “Given a probability space, what can we deduce about the characteristics of outcomes of an experiment?” Beginning with this chapter, we turn this question around, and focus our attention on statistical inference and questions of the form: “Given characteristics associated with the outcomes of an experiment, what can we infer about the probability space?”

Notes

  1. More formally, the term stochastic process refers to any collection of random variables indexed by some index set T, i.e., \( \{X_t,\, t \in T\} \) is a stochastic process.

  2. This follows straightforwardly from the definitions of marginal and conditional density functions, as the reader should verify.

  3. These conditions, coupled with the condition that \( x \in \{0, 1\} \), ensure that the denominators in the density expressions are positive and that the numerators are nonnegative.

  4. An alternative proof, requiring only that moments up to the rth order exist, can be based on Khinchine’s WLLN. Although we will not use the property later, the reader can utilize Kolmogorov’s SLLN to also demonstrate that \( M'_r \overset{\text{as}}{\to} \mu'_r \) (see Chapter 5). A simulation sketch illustrating this convergence follows these notes.

  5. Some authors define the sample variance as \( S_n^2 = \left( n/(n-1) \right) M_2 \), so that \( \mathrm{E}\left( S_n^2 \right) = \sigma^2 \), which identifies \( S_n^2 \) as an unbiased estimator of \( \sigma^2 \) (see Section 7.2). However, this definition would be inconsistent with the aforementioned fact that \( M_2 \), and not \( \left( n/(n-1) \right) M_2 \), is the second moment about the mean, and thus the variance, of the sample or empirical distribution function, \( \hat{F}_n \). A simulation sketch comparing the two definitions follows these notes.

  6. This is an example of a joint sample moment about the origin, the general definition being given by \( M'_{r,s} = (1/n) \sum_{i=1}^n X_i^r Y_i^s \). The definition for the case of a joint sample moment about the mean replaces \( X_i \) with \( X_i - \overline{X} \), \( Y_i \) with \( Y_i - \overline{Y} \), and \( M'_{r,s} \) with \( M_{r,s} \) (see the sketch following these notes).

  7. See Kendall and Stuart (1977), Advanced Theory, Vol. 1, pp. 246–251, for an approach based on Taylor series expansions that can be used to approximate moments of the sample correlation.

  8. The theorem can be extended to the case where \( X \sim N(\mu, \Sigma) \), in which case the condition for independence is that \( B \Sigma A = 0 \).

  9. This interval and associated probability were obtained by noting that if \( Y \sim \chi_{199}^2 \), then \( P(161.83 \le Y \le 239.96) = .95 \), which was obtained via numerical integration of the \( \chi_{199}^2 \) density, leaving .025 probability in each of the right and left tails of the density. Using the relationship \( S^2 \sim (.015625/200)\, Y \) then leads to the stated interval. (A sketch reproducing these quantiles follows these notes.)

  10. Although, as we have mentioned previously, the density function can always be identified in principle by an integration problem involving the MGF in the integrand.

  11. Here, and elsewhere, we are suppressing a technical requirement that f be continuous at the point \( g^{-1}(b) \), so that we can invoke Lemma 6.1 for differentiation of the cumulative distribution function. Even if f is discontinuous at \( g^{-1}(b) \), we can nonetheless define h(b) as indicated above, since a density function can be redefined arbitrarily at a finite number of points of discontinuity without affecting the assignment of probabilities to any events.

  12. These properties define a function that is piecewise invertible on the domain \( \cup_{i=1}^n D_i \).

  13. Note that I(y) is an index set containing the indices of all of the \( D_i \) sets that have an element whose image under the function g is the value y.

  14. If the max or min does not exist, it is replaced by the sup or inf, respectively.

  15. We are continuing to use the convention that \( y^{1/2} \) refers to the positive square root of y, so that \( -y^{1/2} \) refers to the negative square root. (A worked example of the piecewise-inversion case \( Y = X^2 \) follows these notes.)
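
Illustrating note 4, the following is a minimal simulation sketch (ours, not the text’s) of the convergence of the sample moment \( M'_r = (1/n) \sum_{i=1}^n X_i^r \) to the population moment \( \mu'_r \); the standard normal population, the moment order r = 2, and the sample sizes are arbitrary choices.

```python
# Sketch (not from the text): M'_r computed from the first n observations
# approaches mu'_r as n grows, consistent with the WLLN/SLLN discussion
# in note 4.  For X ~ N(0, 1) and r = 2, mu'_2 = E(X^2) = 1.
import numpy as np

rng = np.random.default_rng(0)
r = 2
x = rng.standard_normal(1_000_000)

for n in (100, 10_000, 1_000_000):
    m_r = np.mean(x[:n] ** r)   # M'_r based on the first n observations
    print(f"n = {n:>9,}: M'_{r} = {m_r:.4f}  (mu'_{r} = 1)")
```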
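Illustrating note 5, this sketch (our construction, not the text’s) approximates the expectations of \( M_2 \) and \( S_n^2 = (n/(n-1)) M_2 \) by averaging over many simulated samples; the normal population and all parameter values are arbitrary assumptions.

```python
# Sketch: E(M_2) = ((n-1)/n) sigma^2 (biased), whereas E(S_n^2) = sigma^2
# (unbiased), approximated by Monte Carlo averaging over many replications.
import numpy as np

rng = np.random.default_rng(1)
sigma2, n, reps = 4.0, 10, 200_000

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
m2 = x.var(axis=1, ddof=0)   # M_2: second sample moment about the mean (divisor n)
s2 = x.var(axis=1, ddof=1)   # S_n^2 = (n/(n-1)) M_2 (divisor n - 1)

print(f"E(M_2)   ~ {m2.mean():.4f}  (((n-1)/n) sigma^2 = {sigma2 * (n - 1) / n:.1f})")
print(f"E(S_n^2) ~ {s2.mean():.4f}  (sigma^2 = {sigma2:.1f})")
```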
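The joint sample moments of note 6 can be computed directly from their definitions; the function names below are ours, not the text’s.

```python
# Sketch of the joint sample moment definitions in note 6.
import numpy as np

def joint_moment_origin(x, y, r, s):
    """M'_{r,s} = (1/n) * sum_i X_i^r * Y_i^s (moment about the origin)."""
    return np.mean(x**r * y**s)

def joint_moment_mean(x, y, r, s):
    """M_{r,s}: same form with X_i - Xbar and Y_i - Ybar replacing X_i, Y_i."""
    return np.mean((x - x.mean())**r * (y - y.mean())**s)

rng = np.random.default_rng(2)
x, y = rng.standard_normal(1_000), rng.standard_normal(1_000)
print(joint_moment_origin(x, y, 1, 1))  # (1/n) * sum X_i * Y_i
print(joint_moment_mean(x, y, 1, 1))    # sample covariance (divisor n)
```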
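The \( \chi_{199}^2 \) quantiles cited in note 9 can be reproduced with an off-the-shelf quantile routine rather than direct numerical integration of the density; this sketch assumes SciPy is available.

```python
# Sketch: recover the chi-square(199) quantiles from note 9, each tail
# holding .025 probability, then rescale via S^2 ~ (.015625/200) Y.
from scipy import stats

lo = stats.chi2.ppf(0.025, df=199)   # leaves .025 in the left tail
hi = stats.chi2.ppf(0.975, df=199)   # leaves .025 in the right tail
print(f"P({lo:.2f} <= Y <= {hi:.2f}) = 0.95")   # ~ 161.83 and 239.96

c = 0.015625 / 200
print(f"S^2 interval: ({c * lo:.6f}, {c * hi:.6f})")
```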
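As a compact illustration of the piecewise-inversion machinery in notes 12–15, consider \( Y = g(X) = X^2 \) (our example, not the text’s): g is not invertible on \( \mathbb{R} \), but is piecewise invertible on \( D_1 = (-\infty, 0) \) and \( D_2 = [0, \infty) \), with inverses \( g_1^{-1}(y) = -y^{1/2} \) and \( g_2^{-1}(y) = y^{1/2} \). For y > 0 the index set is \( I(y) = \{1, 2\} \), and the change-of-variables formula yields

\[ h(y) = \sum_{i \in I(y)} f\left( g_i^{-1}(y) \right) \left| \frac{d\, g_i^{-1}(y)}{dy} \right| = \frac{f\left( -y^{1/2} \right) + f\left( y^{1/2} \right)}{2\, y^{1/2}}, \qquad y > 0. \]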


Copyright information

© 2013 Springer Science+Business Media New York

Cite this chapter

Mittelhammer, R.C. (2013). Sampling, Sample Moments and Sampling Distributions. In: Mathematical Statistics for Economics and Business. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-5022-1_6
