
Improving the EM Algorithm

  • David Lansky
  • George Casella

Abstract

The EM algorithm is often a practical method for obtaining maximum likelihood estimates. For the vector-parameter case, we provide a method, faster than that of Meng and Rubin (1989), for obtaining the derivative of the EM mapping, which can then be used to compute the observed variance-covariance matrix. Our method exhibits good behavior for a simple example. Aitken's acceleration is commonly used to speed the convergence of EM near a solution. Because Aitken's acceleration often fails to converge, we propose a mixture of EM and Aitken-accelerated EM that satisfies the generalized EM (GEM) criteria, assuring convergence. We show that such a mixture sequence exists and demonstrate good convergence behavior for a heuristic approximation to this mixture.
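To make the acceleration-with-safeguard idea concrete, the following is a minimal sketch, not the authors' algorithm: it applies a componentwise Aitken delta-squared extrapolation to two ordinary EM steps and keeps the extrapolated point only when it does not lower the observed-data log-likelihood, falling back to the plain EM iterate otherwise. The helpers `em_step` (one EM iteration, returning the updated parameter vector) and `loglik` (the observed-data log-likelihood) are hypothetical, user-supplied functions.

```python
import numpy as np

def safeguarded_aitken_em(theta0, em_step, loglik, max_iter=200, tol=1e-8):
    """Hybrid of plain EM and Aitken-accelerated EM (illustrative sketch only).

    theta0  : starting parameter vector (array-like)
    em_step : hypothetical callable theta -> M(theta), one EM iteration (ndarray out)
    loglik  : hypothetical callable theta -> observed-data log-likelihood
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        t1 = em_step(theta)            # M(theta)
        t2 = em_step(t1)               # M(M(theta))
        d1 = t1 - theta                # first difference
        d2 = t2 - 2.0 * t1 + theta     # second difference
        # Componentwise Aitken delta-squared extrapolation, guarding tiny denominators.
        safe = np.abs(d2) > 1e-12
        accel = np.where(safe, theta - d1**2 / np.where(safe, d2, 1.0), t2)
        # Keep the accelerated point only if its likelihood is at least that of
        # the plain double EM step; otherwise fall back to the EM iterate t2.
        new = accel if loglik(accel) >= loglik(t2) else t2
        if np.max(np.abs(new - theta)) < tol:
            return new
        theta = new
    return theta
```

Because the fallback point is itself an EM image of the current iterate, every accepted move leaves the observed-data likelihood non-decreasing, so the monotone-likelihood property of GEM sequences is preserved in this sketch.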

Keywords

Fisher Information · Heuristic Approximation · Likelihood Surface · Complete Data Likelihood · Good Convergence Behavior
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum Likelihood from Incomplete Data via the EM Algorithm (with discussion). JRSS B 39(1):1–38.
  2. Dennis, J. E. and Schnabel, R. B. (1983). Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Englewood Cliffs, NJ.
  3. Efron, B. and Hinkley, D. V. (1978). Assessing the Accuracy of the Maximum Likelihood Estimator: Observed Versus Expected Fisher Information. Biometrika 65(3):457–487.
  4. Little, R. J. A. and Rubin, D. B. (1987). Statistical Analysis With Missing Data. Wiley, New York.
  5. Louis, T. A. (1982). Finding the Observed Information Matrix when Using the EM Algorithm. JRSS B 44(2):226–233.
  6. Meilijson, I. (1989). A Fast Improvement to the EM Algorithm on its Own Terms. JRSS B 51(1):127–138.
  7. Meng, X. and Rubin, D. B. (1989). Obtaining Asymptotic Variance-Covariance Matrices by the EM Algorithm. Dept. of Statistics, Harvard University, Cambridge, MA 02138.
  8. Wu, C. F. J. (1983). On the Convergence Properties of the EM Algorithm. The Annals of Statistics 11(1):95–103.

Copyright information

© Springer-Verlag New York, Inc. 1992

Authors and Affiliations

  • David Lansky (1)
  • George Casella (1)

  1. Biometrics Unit, Cornell University, Ithaca, NY, USA
