
\(\mathcal{U}\)-Likelihood and \(\mathcal{U}\)-Updating Algorithms: Statistical Inference in Latent Variable Models

  • Jaemo Sung
  • Sung-Yang Bang
  • Seungjin Choi
  • Zoubin Ghahramani
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3720)

Abstract

In this paper we consider latent variable models and introduce a new \(\mathcal{U}\)-likelihood concept for estimating the distribution over hidden variables. One can derive an estimate of parameters from this distribution. Our approach differs from the Bayesian and Maximum Likelihood (ML) approaches: it provides an alternative to Bayesian inference when we do not want to define a prior over parameters, and an alternative to the ML method when we want a better estimate of the distribution over hidden variables. As a practical implementation, we present a \(\mathcal{U}\)-updating algorithm based on mean field theory to approximate the distribution over hidden variables from the \(\mathcal{U}\)-likelihood. This algorithm captures some of the correlations among hidden variables by estimating reaction terms, which are found to penalize the likelihood. We show that the \(\mathcal{U}\)-updating algorithm reduces to the EM algorithm as a special case in the large sample limit. The useful behavior of our method is confirmed for the case of a mixture of Gaussians by comparing it to the EM algorithm.
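The abstract describes the \(\mathcal{U}\)-updating algorithm only at a high level; its update equations appear in the full paper and are not reproduced here. As background for the reported comparison, the sketch below shows only the standard EM algorithm for a mixture of Gaussians, the special case that the \(\mathcal{U}\)-updating algorithm is stated to recover in the large sample limit. The function name `em_gmm` and all variable names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumption): vanilla EM for a K-component Gaussian mixture,
# i.e. the baseline the paper compares against. This is NOT the U-updating
# algorithm, whose reaction terms and update equations are defined in the paper.
import numpy as np

def em_gmm(X, K, n_iter=100, seed=0):
    """Fit a K-component Gaussian mixture to rows of X (shape (N, D)) with EM."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    # Initialise means from random data points; start with the data covariance.
    means = X[rng.choice(N, K, replace=False)]
    covs = np.array([np.cov(X.T) + 1e-6 * np.eye(D) for _ in range(K)])
    weights = np.full(K, 1.0 / K)

    for _ in range(n_iter):
        # E-step: responsibilities q(z_n = k) under the current parameters.
        log_r = np.zeros((N, K))
        for k in range(K):
            diff = X - means[k]
            _, logdet = np.linalg.slogdet(covs[k])
            maha = np.einsum('ni,ij,nj->n', diff, np.linalg.inv(covs[k]), diff)
            log_r[:, k] = np.log(weights[k]) - 0.5 * (logdet + maha + D * np.log(2 * np.pi))
        log_r -= log_r.max(axis=1, keepdims=True)   # numerical stability
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: re-estimate weights, means, covariances from responsibilities.
        Nk = r.sum(axis=0)
        weights = Nk / N
        means = (r.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - means[k]
            covs[k] = (r[:, k, None] * diff).T @ diff / Nk[k] + 1e-6 * np.eye(D)

    return weights, means, covs, r
```

A call such as `em_gmm(X, K=3)` on data of shape (N, D) returns the mixture weights, component means and covariances, and the per-point responsibilities; the E-step responsibilities are the quantity whose estimation the \(\mathcal{U}\)-updating algorithm aims to improve.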

Keywords

Mixture Model · Bayesian Inference · Bayesian Approach · Hidden Variable · Marginal Probability


Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Jaemo Sung (1)
  • Sung-Yang Bang (1)
  • Seungjin Choi (1)
  • Zoubin Ghahramani (2)
  1. Department of Computer Science, POSTECH, Republic of Korea
  2. Gatsby Computational Neuroscience Unit, University College London, London, England
