An information criterion for model selection with missing data via complete-data divergence


Abstract

We derive an information criterion for selecting a parametric model of the complete-data distribution when only incomplete or partially observed data are available. Compared with AIC, our new criterion has an additional penalty term for missing data, expressed in terms of the Fisher information matrices of the complete data and the incomplete data. We prove that our criterion is an asymptotically unbiased estimator of the complete-data divergence, namely the expected Kullback–Leibler divergence between the true distribution and the estimated distribution of the complete data, whereas AIC is the corresponding estimator for the incomplete data. The additional missing-data penalty term of our criterion turns out to be only half the value of that in the previously proposed information criteria PDIO and AICcd. This difference in the penalty term is attributed to the fact that our criterion is derived under a weaker assumption. A simulation study under the weaker assumption shows that our criterion is unbiased while the other two criteria are biased. In addition, we review the geometrical view of the alternating minimization in the EM algorithm, which plays an important role in deriving our new criterion.
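As a rough illustration of the structure described above (the notation here is assumed for exposition and is not taken verbatim from the paper): writing $\ell_y(\hat\theta)$ for the incomplete-data log-likelihood at the maximum-likelihood estimate, $k$ for the number of free parameters, and $I_x$, $I_y$ for the Fisher information matrices of the complete and incomplete data, a criterion of this kind has the schematic form

$$\mathrm{IC} \;=\; -2\,\ell_y(\hat\theta) \;+\; 2k \;+\; \Delta\bigl(I_x(\hat\theta),\, I_y(\hat\theta)\bigr),$$

where $\Delta$ denotes the additional missing-data penalty, which the paper shows to be half of the corresponding term in PDIO and AICcd.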

Keywords

Akaike information criterion · Alternating projections · Data manifold · EM algorithm · Fisher information matrix · Incomplete data · Kullback–Leibler divergence · Misspecification · Takeuchi information criterion

References

  1. Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19, 716–723.
  2. Amari, S. (1995). Information geometry of the EM and em algorithms for neural networks. Neural Networks, 8, 1379–1408.
  3. Amari, S., Nagaoka, H. (2007). Methods of information geometry (Vol. 191). Providence, RI: American Mathematical Society.
  4. Bozdogan, H. (1987). Model selection and Akaike's information criterion (AIC): The general theory and its analytical extensions. Psychometrika, 52, 345–370.
  5. Burnham, K. P., Anderson, D. R. (2002). Model selection and multimodel inference: A practical information-theoretic approach. New York: Springer.
  6. Byrne, W. (1992). Alternating minimization and Boltzmann machine learning. IEEE Transactions on Neural Networks, 3, 612–620.
  7. Cavanaugh, J. E., Shumway, R. H. (1998). An Akaike information criterion for model selection in the presence of incomplete data. Journal of Statistical Planning and Inference, 67, 45–65.
  8. Chapelle, O., Schölkopf, B., Zien, A. (2006). Semi-supervised learning. Cambridge: The MIT Press.
  9. Claeskens, G., Consentino, F. (2008). Variable selection with incomplete covariate data. Biometrics, 64, 1062–1069.
  10. Csiszár, I. (1975). I-divergence geometry of probability distributions and minimization problems. The Annals of Probability, 3, 146–158.
  11. Csiszár, I., Tusnády, G. (1984). Information geometry and alternating minimization procedures. Statistics and Decisions, Supplement Issue, 1, 205–237.
  12. Dempster, A. P., Laird, N. M., Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 39, 1–38.
  13. Ip, E. H., Lalwani, N. (2000). A note on the geometric interpretation of the EM algorithm in estimating item characteristics and student abilities. Psychometrika, 65, 533–537.
  14. Kawakita, M., Takeuchi, J. (2014). Safe semi-supervised learning based on weighted likelihood. Neural Networks, 53, 146–164.
  15. Konishi, S., Kitagawa, G. (2008). Information criteria and statistical modeling. New York: Springer.
  16. Meng, X.-L., Rubin, D. B. (1991). Using EM to obtain asymptotic variance-covariance matrices: The SEM algorithm. Journal of the American Statistical Association, 86, 899–909.
  17. Seghouane, A. K., Bekara, M., Fleury, G. (2005). A criterion for model selection in the presence of incomplete data based on Kullback's symmetric divergence. Signal Processing, 85, 1405–1417.
  18. Shimodaira, H. (1994). A new criterion for selecting models from partially observed data. In Selecting Models from Data (pp. 21–29). New York: Springer.
  19. Shimodaira, H. (2000). Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90, 227–244.
  20. White, H. (1982). Maximum likelihood estimation of misspecified models. Econometrica, 50, 1–25.
  21. Yamazaki, K. (2014). Asymptotic accuracy of distribution-based estimation of latent variables. The Journal of Machine Learning Research, 15, 3541–3562.

Copyright information

© The Institute of Statistical Mathematics, Tokyo 2017

Authors and Affiliations

  1. Division of Mathematical Science, Graduate School of Engineering Science, Osaka University, Toyonaka, Japan
  2. RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
  3. Kawasaki Heavy Industries, Ltd., Akashi, Japan