
Testing for Independence of Observations

  • Albert Madansky
Part of the Springer Texts in Statistics book series (STS)

Abstract

The problem addressed in this chapter is that of testing whether a sequence of random variables $x_1, \ldots, x_n$ is independent, based on a set of observations $x_1, \ldots, x_n$ of these random variables. One approach to this problem is to make an assumption about the distribution of the $x_i$, both when they are independent and under the alternative that they are not independent, and to test one hypothesis against the other. The simplest such pair of assumptions is that the $x_i$ are identically distributed $N(0, \sigma^2)$ random variables and, in the case of dependence, that the dependence can be modeled as a first-order autoregressive series wherein
$$x_1 = u_1, \qquad x_i = \rho x_{i-1} + u_i, \quad i = 2, \ldots, n,$$
and the $u_i$ are independent $N(0, \sigma^2)$ random variables. The case $\rho = 0$ corresponds to the case in which the $x_i$ are independent $N(0, \sigma^2)$ random variables. Under the alternative hypothesis, $C(x_i, x_j) = \rho^{|i-j|} V(x_{\min(i,j)})$ and $V(x_i) = \sigma^2(1 + \rho^2 + \cdots + \rho^{2(i-1)}) = \sigma^2(1 - \rho^{2i})/(1 - \rho^2)$, so that the $x_i$ are not even identically distributed. Section 1 studies tests of this and more complex autoregressive models as alternatives to independence, both for a sequence of observations and for regression residuals.
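The closed-form moments above are easy to check numerically. The following is a minimal simulation sketch, not part of the chapter itself; the values of $n$, $\rho$, and $\sigma$ are illustrative assumptions. It generates the first-order autoregressive series defined above and compares the empirical variances of $x_1, \ldots, x_n$ with $\sigma^2(1 - \rho^{2i})/(1 - \rho^2)$.

```python
# Minimal sketch (illustrative, not from the chapter): simulate the
# non-stationary AR(1) alternative
#   x_1 = u_1,  x_i = rho * x_{i-1} + u_i,  u_i ~ N(0, sigma^2),
# and compare empirical variances with sigma^2 (1 - rho^(2i)) / (1 - rho^2).
import numpy as np

rng = np.random.default_rng(0)
n, rho, sigma = 10, 0.6, 1.0   # assumed illustrative values
reps = 200_000                 # Monte Carlo replications

# Simulate `reps` independent series of length n.
u = rng.normal(0.0, sigma, size=(reps, n))
x = np.empty_like(u)
x[:, 0] = u[:, 0]
for i in range(1, n):
    x[:, i] = rho * x[:, i - 1] + u[:, i]

# Empirical variance of x_i across replications versus the closed form.
i = np.arange(1, n + 1)
theory = sigma**2 * (1 - rho**(2 * i)) / (1 - rho**2)
print("empirical:", np.round(x.var(axis=0), 3))
print("theory:   ", np.round(theory, 3))
```

The empirical and theoretical variances agree to Monte Carlo accuracy, and the increase of $V(x_i)$ with $i$ illustrates why, under the alternative, the $x_i$ are not identically distributed.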

Keywords

Serial Correlation · American Statistical Association · Regression Residual · Parametric Procedure · Nonparametric Procedure



Copyright information

© Springer-Verlag New York Inc. 1988

Authors and Affiliations

  • Albert Madansky, Graduate School of Business, University of Chicago, Chicago, USA
