
Unification of Statistical Methods for Continuous and Discrete Data

  • Emanuel Parzen
Conference paper

Abstract

We propose the concept of unification of statistical methods in order to develop a general philosophy of statistical data analysis. We propose that ways of thinking about statistical ends (goals) and means (procedures) are needed that provide a framework for implementing and comparing several different approaches to a data analysis problem. We believe that unification has benefits that include: existing (often parametric) methods will be better understood, and many new (often nonparametric) methods will be developed. The new methods are usually computer-intensive; consequently, unification of statistical methods can be considered closely related to computational statistics. We characterize computational statistical methods as graphics-intensive and number-crunching-intensive.
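The abstract does not spell out the machinery behind the keywords, but a central tool in Parzen's nonparametric program is the comparison distribution D(u) = F(Q_G(u)), whose derivative is the comparison density: when two samples come from the same distribution, D(u) is close to the identity. The following is a minimal illustrative sketch (not code from the paper) of the empirical versions, using a left-continuous empirical quantile function; the function names are ours.

```python
import numpy as np

def empirical_quantile(sample, u):
    """Left-continuous empirical quantile Q(u) = inf{x : F_n(x) >= u}."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = len(xs)
    # order statistic with index ceil(n*u), clipped into valid range
    idx = np.clip(np.ceil(n * np.asarray(u)).astype(int) - 1, 0, n - 1)
    return xs[idx]

def empirical_cdf(sample, x):
    """Empirical distribution function F_n(x) = (# observations <= x) / n."""
    xs = np.sort(np.asarray(sample, dtype=float))
    return np.searchsorted(xs, x, side="right") / len(xs)

def comparison_distribution(f_sample, g_sample, u):
    """Empirical comparison distribution D(u) = F_m(Q_G(u))
    comparing the sample from F with the sample from G."""
    return empirical_cdf(f_sample, empirical_quantile(g_sample, u))

# When both samples coincide, D(u) tracks the identity D(u) = u,
# signalling that the two distributions agree.
g = [1.0, 2.0, 3.0, 4.0, 5.0]
u = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
print(comparison_distribution(g, g, u))  # → [0.2 0.4 0.6 0.8 1. ]
```

Departures of the plotted D(u) from the diagonal are what two-sample tests built from linear rank statistics or quadratic detectors are designed to detect.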

Keywords

Quantile Function; Quadratic Detector; Comparison Density; Linear Rank Statistic; Parametric Probability Model



Copyright information

© Springer-Verlag New York, Inc. 1992

Authors and Affiliations

  • Emanuel Parzen
  1. Department of Statistics, Texas A&M University, USA
