We derive an information criterion for selecting a parametric model of the complete-data distribution when only incomplete or partially observed data are available. Compared with AIC, our new criterion has an additional penalty term for missing data, expressed in terms of the Fisher information matrices of the complete data and the incomplete data. We prove that our criterion is an asymptotically unbiased estimator of the complete-data divergence, namely the expected Kullback–Leibler divergence between the true distribution and the estimated distribution of the complete data, whereas AIC is an asymptotically unbiased estimator of the corresponding divergence for the incomplete data. The additional penalty term of our criterion turns out to be only half the value of that in the previously proposed information criteria PDIO and AICcd. This difference in the penalty term is attributed to the fact that our criterion is derived under a weaker assumption. A simulation study under the weaker assumption shows that our criterion is unbiased while the other two criteria are biased. In addition, we review the geometrical view of the EM algorithm as alternating minimizations; this view plays an important role in deriving our new criterion.
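To make the structure of such a criterion concrete, the following is a minimal sketch, not the paper's exact formula: it assumes a schematic form IC = −2·loglik + 2k + c·tr(I_co I_in⁻¹ − I), where I_co and I_in are the complete-data and incomplete-data Fisher information matrices and c is a constant (c = 1 here versus c = 2 for PDIO/AICcd, reflecting the "half the value" relationship stated in the abstract). The function name, the toy matrices, and the constant are illustrative assumptions.

```python
import numpy as np

def missing_data_ic(loglik, k, I_complete, I_incomplete, c=1.0):
    """AIC-type criterion with an extra trace penalty for missing data.

    Schematic form (an assumption for illustration, not the paper's
    exact expression):
        IC = -2 * loglik + 2 * k + c * tr(I_co @ inv(I_in) - I_k)
    """
    penalty = c * np.trace(I_complete @ np.linalg.inv(I_incomplete) - np.eye(k))
    return -2.0 * loglik + 2.0 * k + penalty

# Toy 2-parameter example with made-up Fisher information matrices.
I_co = np.array([[2.0, 0.0], [0.0, 4.0]])   # complete-data information
I_in = np.array([[1.0, 0.0], [0.0, 2.0]])   # incomplete-data information
ic = missing_data_ic(loglik=-100.0, k=2, I_complete=I_co, I_incomplete=I_in)
print(ic)  # 200 + 4 + tr(diag(2, 2) - I) = 206.0
```

When no data are missing, I_co = I_in and the trace penalty vanishes, so the criterion reduces to the usual AIC; more missing information inflates I_co I_in⁻¹ and hence the penalty.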
Keywords: Akaike information criterion · Alternating projections · Data manifold · EM algorithm · Fisher information matrix · Incomplete data · Kullback–Leibler divergence · Misspecification · Takeuchi information criterion
We would like to thank the reviewers for their comments, which helped improve the manuscript. We are grateful to Kei Hirose and Shinpei Imori for their suggestions and comments. For an earlier version of the manuscript, published as Shimodaira (1994), Hidetoshi Shimodaira is indebted to Shun-ichi Amari for the geometrical view of the EM algorithm and to Noboru Murata for the derivation of the Takeuchi information criterion.
Csiszár, I., Tusnády, G. (1984). Information geometry and alternating minimization procedures. Statistics and Decisions, Supplement Issue, 1, 205–237.
Dempster, A. P., Laird, N. M., Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 39, 1–38.
Ip, E. H., Lalwani, N. (2000). A note on the geometric interpretation of the EM algorithm in estimating item characteristics and student abilities. Psychometrika, 65, 533–537.
Kawakita, M., Takeuchi, J. (2014). Safe semi-supervised learning based on weighted likelihood. Neural Networks, 53, 146–164.
Konishi, S., Kitagawa, G. (2008). Information criteria and statistical modeling. New York: Springer.
Meng, X.-L., Rubin, D. B. (1991). Using EM to obtain asymptotic variance–covariance matrices: The SEM algorithm. Journal of the American Statistical Association, 86, 899–909.
Seghouane, A. K., Bekara, M., Fleury, G. (2005). A criterion for model selection in the presence of incomplete data based on Kullback's symmetric divergence. Signal Processing, 85, 1405–1417.
Shimodaira, H. (1994). A new criterion for selecting models from partially observed data. In Selecting Models from Data (pp. 21–29). New York: Springer.
Shimodaira, H. (2000). Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90, 227–244.