Machine Learning, Volume 93, Issue 1, pp 53–69

The flip-the-state transition operator for restricted Boltzmann machines


DOI: 10.1007/s10994-013-5390-3

Cite this article as:
Brügge, K., Fischer, A. & Igel, C. Mach Learn (2013) 93: 53. doi:10.1007/s10994-013-5390-3


Abstract

Most learning and sampling algorithms for restricted Boltzmann machines (RBMs) rely on Markov chain Monte Carlo (MCMC) methods using Gibbs sampling. The most prominent examples are Contrastive Divergence (CD) learning and its variants as well as Parallel Tempering (PT). The performance of these methods depends strongly on the mixing properties of the Gibbs chain. We propose a Metropolis-type MCMC algorithm based on a transition operator that maximizes the probability of state changes. We show that this operator induces an irreducible, aperiodic, and hence properly converging Markov chain, also for the periodic update schemes typically used in practice. The transition operator can replace Gibbs sampling in RBM learning algorithms without computational overhead. Empirical results show that this leads to faster mixing and, in turn, to more accurate learning.
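To make the contrast concrete, the following sketch compares a standard Gibbs update with a Metropolis-type update that maximizes the probability of a state change, for binary units with given conditional activation probabilities. The function names and the vectorized form are my own; the flip rule shown (flip with probability min(1, p(opposite state)/p(current state)), i.e. Metropolized Gibbs) is an assumed reading of the "maximal probability of state changes" idea in the abstract, not the paper's exact pseudocode.

```python
import numpy as np

def gibbs_update(s, p_on, rng):
    """Standard Gibbs step: resample each binary unit from its
    conditional distribution, ignoring the current state s."""
    return (rng.random(s.shape) < p_on).astype(int)

def flip_the_state_update(s, p_on, rng):
    """Metropolis-type step that maximizes the probability of a state
    change: flip each unit with probability
    min(1, p(opposite state) / p(current state)).
    Assumes 0 < p_on < 1 so the ratio is well defined."""
    p_cur = np.where(s == 1, p_on, 1.0 - p_on)   # prob. of the current state
    q = np.minimum(1.0, (1.0 - p_cur) / p_cur)   # flip probability
    flip = rng.random(s.shape) < q
    return np.where(flip, 1 - s, s)
```

Both operators leave the conditional distribution invariant: with p_on = 0.3, detailed balance holds for the flip operator since 0.7 · min(1, 0.3/0.7) = 0.3 · min(1, 0.7/0.3) = 0.3, while the unit changes state as often as possible, which is the intuition behind the faster mixing claimed in the abstract.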


Keywords: Restricted Boltzmann machine · Markov chain Monte Carlo · Gibbs sampling · Mixing rate · Contrastive divergence learning · Parallel tempering

Copyright information

© The Author(s) 2013

Authors and Affiliations

  1. Department of Computer Science, University of Helsinki, Helsinki, Finland
  2. Helsinki Institute for Information Technology HIIT, Helsinki, Finland
  3. Institut für Neuroinformatik, Ruhr-Universität Bochum, Bochum, Germany
  4. Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
