Machine Learning, Volume 50, Issue 1, pp 175–196

Population Markov Chain Monte Carlo

Authors

  • Kathryn Blackmond Laskey
    • Department of Systems Engineering and Operations Research, George Mason University
  • James W. Myers
    • TRW, VAR1/9D02
DOI: 10.1023/A:1020206129842

Cite this article as:
Laskey, K.B. & Myers, J.W. Machine Learning (2003) 50: 175. doi:10.1023/A:1020206129842

Abstract

Stochastic search algorithms inspired by physical and biological systems are applied to the problem of learning directed graphical probability models in the presence of missing observations and hidden variables. For this class of problems, deterministic search algorithms tend to halt at local optima, requiring random restarts to obtain solutions of acceptable quality. We compare three stochastic search algorithms: a Metropolis-Hastings Sampler (MHS), an Evolutionary Algorithm (EA), and a new hybrid algorithm called Population Markov Chain Monte Carlo, or popMCMC. PopMCMC uses statistical information from a population of MHSs to inform the proposal distributions for individual samplers in the population. Experimental results show that popMCMC and EAs learn more efficiently than the MHS with no information exchange. Populations of MCMC samplers exhibit more diversity than populations evolving according to EAs not satisfying physics-inspired local reversibility conditions.
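The abstract's core idea — a population of Metropolis-Hastings samplers whose proposal distributions are shaped by statistics gathered across the population — can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's algorithm: the target is a stand-in one-dimensional mixture (the paper learns graphical model structures), and the only population information exchanged is the spread of the current states, which sets a symmetric random-walk proposal scale so the plain Metropolis acceptance ratio still applies.

```python
import random
import math

def log_target(x):
    # Toy target: equal mixture of two unit-variance Gaussians at -2 and +2.
    # A hypothetical stand-in for the posterior over models in the paper.
    return math.log(0.5 * math.exp(-0.5 * (x + 2.0) ** 2)
                    + 0.5 * math.exp(-0.5 * (x - 2.0) ** 2))

def pop_mcmc(n_chains=10, n_iters=2000, seed=0):
    rng = random.Random(seed)
    states = [rng.uniform(-5.0, 5.0) for _ in range(n_chains)]
    samples = []
    for _ in range(n_iters):
        # Population statistic: the spread of current states informs the
        # (symmetric) random-walk proposal scale used by every chain.
        mean = sum(states) / n_chains
        var = sum((s - mean) ** 2 for s in states) / n_chains
        scale = max(math.sqrt(var), 0.1)
        for i in range(n_chains):
            prop = states[i] + rng.gauss(0.0, scale)
            # Symmetric proposal, so the acceptance ratio reduces to the
            # ratio of target densities (plain Metropolis step).
            if math.log(rng.random()) < log_target(prop) - log_target(states[i]):
                states[i] = prop
        samples.extend(states)
    return samples
```

Because the proposal scale adapts to the whole population, chains spread across well-separated modes keep proposing moves large enough to jump between them — a crude analogue of the diversity-preserving information exchange the abstract attributes to popMCMC.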

Keywords: Markov chain Monte Carlo · Metropolis-Hastings algorithm · graphical probabilistic models · Bayesian networks · Bayesian learning · evolutionary algorithms

Copyright information

© Kluwer Academic Publishers 2003