
Dependence or Independence of the Sample Mean and Variance In Non-IID or Non-Normal Cases and the Role of Some Tests of Independence

Chapter in Recent Advances in Applied Probability

Abstract

Let \(X_1, \ldots, X_n\) be independent and identically distributed (iid) random variables. We denote the sample mean \(\overline X = n^{ - 1} \sum\nolimits_{i = 1}^n {X_i }\) and the sample variance \(S^2 = (n - 1)^{ - 1} \sum\nolimits_{i = 1}^n {(X_i - \overline X )^2 }\) for \(n \ge 2\). It is well known that if the underlying common probability model for the X's is \(N(\mu, \sigma^2)\), then the sample mean \(\overline X\) and the sample variance \(S^2\) are independently distributed. Conversely, it is also known that if \(\overline X\) and \(S^2\) are independently distributed, then the underlying common probability model for the X's must be normal (Zinger (1958)). Theorem 1.1 summarizes these two results. But what can one expect regarding the independence or dependence of \(\overline X\) and \(S^2\) when the random variables are allowed to be non-iid or non-normal? In direct contrast with the message of Theorem 1.1, we find, interestingly, that \(\overline X\) and \(S^2\) may or may not follow independent probability models when the observations \(X_i\) are not iid or when they follow non-normal probability laws. With the help of examples, we highlight a number of interesting scenarios. These examples point toward an opening for the development of important characterization results, and we hope to see progress on this in the future.

Illustrations are provided in which we apply the t-test based on the Pearson sample correlation coefficient, a traditional nonparametric test based on the Spearman rank correlation coefficient, and the chi-square test to "validate" independence or dependence in the observed \((\bar x, s)\) data. On a number of occasions, the t-test and the traditional nonparametric test unfortunately arrived at conflicting conclusions from the same data. We therefore raise the possibility of a major problem in implementing either the t-test or the nonparametric test as an exploratory data analytic (EDA) tool for examining dependence or association in paired data in practice. The chi-square test, however, correctly validated dependence whenever the \((\bar x, s)\) data were dependent, and it never sided against the correct conclusion that the paired data were independent whenever the paired variables were in fact independent. It is safe to say that, among the three contenders, the chi-square test stood out as the most reliable EDA tool for validating the true state of dependence (or independence) between \(\overline X\) and \(S^2\), as evidenced by the observed \((\bar x, s)\) data, whether the observations \(X_1, \ldots, X_n\) were iid, non-iid, or non-normal.
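The chapter's full computations are not shown in this preview, but the EDA comparison described above is straightforward to reproduce in outline. Below is a minimal sketch, not the chapter's own code, assuming standard NumPy/SciPy routines: it simulates replicated \((\bar x, s)\) pairs under a normal parent and under a non-normal (exponential) parent, then applies the three tests named in the abstract, namely the t-test based on the Pearson sample correlation coefficient (the p-value reported by scipy.stats.pearsonr), the nonparametric test based on the Spearman rank correlation coefficient, and a chi-square test of independence on a binned two-way table. The sample size, number of replications, bin count, and parent distributions are all illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2005)

def mean_sd_pairs(sampler, n=10, replications=2000):
    """Draw `replications` iid samples of size n; return the (x-bar, s) pairs."""
    data = sampler(size=(replications, n))
    return data.mean(axis=1), data.std(axis=1, ddof=1)  # divisor n-1 for s

for label, sampler in [("normal", rng.normal), ("exponential", rng.exponential)]:
    xbar, s = mean_sd_pairs(sampler)

    # (i) t-test based on the Pearson sample correlation coefficient
    _, p_pearson = stats.pearsonr(xbar, s)

    # (ii) nonparametric test based on the Spearman rank correlation coefficient
    _, p_spearman = stats.spearmanr(xbar, s)

    # (iii) chi-square test of independence on a 4x4 binned contingency table
    table, _, _ = np.histogram2d(xbar, s, bins=4)
    _, p_chi2, _, _ = stats.chi2_contingency(table)

    print(f"{label:12s}  Pearson p = {p_pearson:.4f}  "
          f"Spearman p = {p_spearman:.4f}  chi-square p = {p_chi2:.4f}")
```

Under the normal parent, \(\bar x\) and \(s\) are independent, so all three tests should typically fail to reject; under the exponential parent they are positively dependent, so all three should tend to reject, consistent with the dependence that the chi-square test is reported to validate correctly in the chapter.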


References

  • H. Cramér, Mathematical Methods of Statistics, Princeton Univ. Press: Princeton, 1946.
  • J. D. Gibbons and S. Chakraborti, Nonparametric Statistical Inference, second edition, Marcel Dekker: New York, 1992.
  • A. Kagan, Yu. V. Linnik, and C. R. Rao, Characterization Problems of Mathematical Statistics, John Wiley & Sons: New York, 1973.
  • E. L. Lehmann, Testing Statistical Hypotheses, second edition, Springer-Verlag: New York, 1986.
  • E. Lukacs, Characteristic Functions, Charles Griffin: London, 1960.
  • N. Mukhopadhyay, Probability and Statistical Inference, Marcel Dekker: New York, 2000.
  • G. E. Noether, Introduction to Statistics: The Nonparametric Way, Springer-Verlag: New York, 1991.
  • B. R. Ramachandran, Advanced Theory of Characteristic Functions, Statistical Publishing Society: Calcutta, 1967.
  • C. R. Rao, Linear Statistical Inference and Its Applications, second edition, John Wiley & Sons: New York, 1973.
  • A. A. Zinger, "The independence of quasi-polynomial statistics and analytical properties of distributions," Theory Probab. Appl., vol. 3, pp. 247–265, 1958.


Copyright information

© 2005 Springer Science + Business Media, Inc.


Cite this chapter

Mukhopadhyay, N. (2005). Dependence or Independence of the Sample Mean and Variance In Non-IID or Non-Normal Cases and the Role of Some Tests of Independence. In: Baeza-Yates, R., Glaz, J., Gzyl, H., Hüsler, J., Palacios, J.L. (eds) Recent Advances in Applied Probability. Springer, Boston, MA. https://doi.org/10.1007/0-387-23394-6_17
