Abstract
To get started thinking about statistics, consider three famous problems:
- Suppose you have a bag filled with colored marbles. You close your eyes, reach into the bag, and pull out a handful of marbles. What can you say about what is in the bag?
- You arrive in a strange town and you need a taxicab. You look out the window, and in the dark, you can just barely make out the number on the roof of one of the cabs. In this town, you know they label the cabs sequentially. How many cabs does the town have?
- You have already taken the entrance exam twice and you want to know if it’s worth it to take it a third time in the hopes that your score will improve. Because only the last score is reported, you are worried that you may do worse the third time. How do you decide whether or not to take the test again?
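To give a flavor of what is to come, the taxicab question can already be attacked numerically. The sketch below uses one common estimator for this problem (the estimator choice and the function name `estimate_total` are ours, not the chapter's): given serial numbers sampled from 1..N, estimate N from the sample maximum.

```python
import random

def estimate_total(observed):
    # A standard estimator for this problem: m + m/k - 1, where m is the
    # largest serial number seen and k is how many cabs were observed.
    m, k = max(observed), len(observed)
    return m + m / k - 1

random.seed(0)
true_n = 200  # unknown in practice; used here only to simulate observations
sample = random.sample(range(1, true_n + 1), 5)
print(sample, "->", estimate_total(sample))
```

The intuition is that the gap between the largest observed label and the true total should resemble the average gap between observed labels.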
Notes
1. We will explain the null hypothesis and the rest of this terminology later.
2. This is also known as the invariance property of maximum likelihood estimators. It states that the maximum likelihood estimator of any function \(h(\theta )\) is simply \(h\) evaluated at the maximum likelihood estimator of \(\theta \); namely, \(h(\theta _{ML})\).
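The invariance property can be illustrated with a short sketch (the setup here, a normal sample and \(h(\theta ) = e^{\theta }\), is our own illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=1.0, size=10_000)

# For a normal sample, the maximum likelihood estimator of the mean
# theta is the sample mean.
theta_ml = x.mean()

# Invariance: the MLE of h(theta) = exp(theta) is exp(theta_ml) directly;
# no separate optimization over h(theta) is required.
h_ml = np.exp(theta_ml)
print(theta_ml, h_ml)
```

This is convenient in practice: once \(\theta _{ML}\) is computed, estimates of derived quantities come for free.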
3. It turns out that the central limit theorem augmented with an Edgeworth expansion tells us that convergence is regulated by the skewness of the distribution [1]. In other words, the more symmetric the distribution, the faster it converges to the normal distribution according to the central limit theorem.
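A quick simulation, under our own choice of distributions, makes this plausible: sample means drawn from a symmetric parent (uniform) shed their skewness much faster than those from a skewed parent (exponential) at the same sample size.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials = 30, 5000

# Sampling distributions of the mean for a symmetric and a skewed parent.
uniform_means = rng.uniform(size=(trials, n)).mean(axis=1)
expon_means = rng.exponential(size=(trials, n)).mean(axis=1)

# The exponential parent leaves more residual skew in the sample mean,
# so its normal approximation is worse at this n.
print(stats.skew(uniform_means), stats.skew(expon_means))
```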
4. Certain technical regularity conditions must hold for this property of maximum likelihood estimators to work. See [2] for more details.
5. The space of all vectors \(\mathbf {a}\) such that \(\langle \mathbf {a},\mathbf {1} \rangle = 0\) is denoted \(\mathbf {1}^\perp \).
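Concretely, since \(\langle \mathbf {a},\mathbf {1} \rangle \) is just the sum of the components of \(\mathbf {a}\), membership in \(\mathbf {1}^\perp \) means the components sum to zero; a minimal check (the example vector is ours):

```python
import numpy as np

# A vector whose components sum to zero lies in 1-perp,
# because <a, 1> is simply the sum of a's components.
a = np.array([1.0, -2.0, 1.0])
ones = np.ones_like(a)
print(np.dot(a, ones))  # 0.0, so a is orthogonal to the all-ones vector
```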
6. The F-distribution \(F(m, n)\) has two integer degrees-of-freedom parameters, m and n.
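In `scipy.stats`, the two parameters appear as `dfn` (numerator, m) and `dfd` (denominator, n); the specific values below are an arbitrary illustration:

```python
from scipy import stats

# F(m, n) with m = 3 numerator and n = 10 denominator degrees of freedom.
f = stats.f(dfn=3, dfd=10)

# For n > 2 the mean is n / (n - 2), independent of m.
print(f.mean())  # 1.25
```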
7. The last term is of no interest because we are only interested in relative changes in the ISE.
References
1. W. Feller, An Introduction to Probability Theory and Its Applications, vol. 1 (Wiley, New York, 1950)
2. L. Wasserman, All of Statistics: A Concise Course in Statistical Inference (Springer, Berlin, 2004)
3. R.A. Maronna, R.D. Martin, V.J. Yohai, Robust Statistics: Theory and Methods. Wiley Series in Probability and Statistics (Wiley, New York, 2006)
4. D.G. Luenberger, Optimization by Vector Space Methods. Professional Series (Wiley, New York, 1968)
5. C. Loader, Local Regression and Likelihood (Springer, Berlin, 2006)
6. T. Hastie, R. Tibshirani, J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics (Springer, New York, 2013)
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this chapter
Unpingco, J. (2019). Statistics. In: Python for Probability, Statistics, and Machine Learning. Springer, Cham. https://doi.org/10.1007/978-3-030-18545-9_3
Print ISBN: 978-3-030-18544-2
Online ISBN: 978-3-030-18545-9