Computationally Intensive Techniques

  • Wolfgang Karl Härdle
  • Léopold Simar
Chapter

Abstract

It is generally accepted that training in statistics must include some exposure to the mechanics of computational statistics. This exposure becomes essential when we consider extremely high-dimensional data: computer-aided techniques can help us discover dependencies in high dimensions without complicated mathematical tools.
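As a flavor of such a computer-aided search for structure, the following minimal sketch scans random one-dimensional projections of a high-dimensional data set and keeps the "most interesting" one, in the spirit of projection pursuit. The data set, the kurtosis-based index, and all names are invented for illustration; real implementations optimize the projection index numerically rather than sampling directions at random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 500 points in 10 dimensions.  Coordinate 0 hides a
# bimodal (two-cluster) signal; the remaining coordinates are pure noise.
n, p = 500, 10
X = rng.standard_normal((n, p))
X[:, 0] += np.where(rng.random(n) < 0.5, -4.0, 4.0)

def interestingness(z):
    """Absolute excess kurtosis of a 1-D projection.

    Gaussian projections score near 0; strongly clustered (bimodal)
    projections score well above 1, so a large value flags structure.
    """
    z = (z - z.mean()) / z.std()
    return abs(np.mean(z**4) - 3.0)

# Crude search: score many random unit directions and keep the one whose
# projection looks least Gaussian.
best_dir, best_score = None, -np.inf
for _ in range(5000):
    a = rng.standard_normal(p)
    a /= np.linalg.norm(a)          # unit-length projection direction
    score = interestingness(X @ a)
    if score > best_score:
        best_dir, best_score = a, score

# The winning direction should load mainly on coordinate 0, where the
# two-cluster structure was planted.
print(round(abs(best_dir[0]), 2), round(best_score, 2))
```

No mathematical derivation of the cluster structure is needed here; the computer simply finds the direction in which the data look least Gaussian.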

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Ladislaus von Bortkiewicz Chair of Statistics, Humboldt-Universität zu Berlin, Berlin, Germany
  2. Institute of Statistics, Biostatistics and Actuarial Sciences, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
