Full reality cannot be included in a model; thus we seek a good model to approximate the effects or factors supported by the empirical data. The selection of an appropriate approximating model is critical to statistical inference from many types of empirical data. This chapter introduces concepts from information theory (see Guiasu 1977), which has been a discipline only since the mid-1940s and covers a variety of theories and methods that are fundamental to many of the sciences (see Cover and Thomas 1991 for an exciting overview; Figure 2.1 is reproduced from their book and shows their view of the relationship of information theory to several other fields). In particular, the Kullback-Leibler "distance," or "information," between two models (Kullback and Leibler 1951) is introduced, discussed, and linked to Boltzmann's entropy in this chapter. Akaike (1973) found a simple relationship between the Kullback-Leibler distance and Fisher's maximized log-likelihood function (see de Leeuw 1992 for a brief review). This relationship leads to a simple, effective, and very general methodology for selecting a parsimonious model for the analysis of empirical data.
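As a brief sketch of the two quantities the chapter connects (standard notation, not reproduced from the chapter itself): let f denote full reality, g an approximating model with parameters θ, and K the number of estimated parameters.

```latex
% Kullback-Leibler distance (information) between full reality f
% and an approximating model g with parameters \theta:
I(f, g) = \int f(x)\,\log\!\left(\frac{f(x)}{g(x \mid \theta)}\right) dx

% Akaike's (1973) result: the maximized log-likelihood, penalized by
% the number of estimated parameters K, estimates relative expected
% K-L distance, giving
\mathrm{AIC} = -2 \log\bigl(\mathcal{L}(\hat{\theta} \mid \mathrm{data})\bigr) + 2K
```

Among a set of candidate models, the model with the smallest AIC is estimated to lose the least information relative to full reality, which is the sense in which the criterion selects a parsimonious approximating model.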
Keywords
- Model Selection
- Candidate Model
- Bootstrap Sample
- Akaike Weight
- Full Reality