A tutorial on variational Bayesian inference

Published in: Artificial Intelligence Review

Abstract

This tutorial describes the mean-field variational Bayesian approximation to inference in graphical models, using modern machine learning terminology rather than statistical physics concepts. It begins by seeking a factorised (mean-field) distribution that is close to the target joint distribution in the KL-divergence sense, then derives local node updates and reviews the recent Variational Message Passing framework.
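The mean-field scheme the abstract summarises can be illustrated with the standard textbook example (not taken from this paper): a univariate Gaussian with unknown mean and precision under a conjugate Normal-Gamma prior, where coordinate-ascent updates of the factors q(mu) and q(tau) minimise KL(q || p). The function name and hyperparameter defaults below are illustrative; this is a minimal sketch, not the paper's implementation.

```python
import numpy as np

def mean_field_vb_gaussian(x, mu0=0.0, lam0=1.0, a0=1.0, b0=1.0,
                           n_iter=100, tol=1e-8):
    """Mean-field VB for a Gaussian with unknown mean mu and precision tau.

    Model (conjugate Normal-Gamma prior):
        x_n ~ N(mu, 1/tau),  mu ~ N(mu0, 1/(lam0*tau)),  tau ~ Gamma(a0, b0)
    Approximation: q(mu, tau) = q(mu) q(tau), updated by coordinate ascent,
    which minimises KL(q || p) (equivalently, maximises the ELBO).
    """
    x = np.asarray(x, dtype=float)
    N, xbar = len(x), np.mean(x)

    # q(mu) = N(mu_N, 1/lam_N); q(tau) = Gamma(a_N, b_N).
    # mu_N and a_N have closed forms that do not change across iterations.
    mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)
    a_N = a0 + (N + 1) / 2.0
    E_tau = a0 / b0                        # initial guess for E[tau]
    b_N = b0

    for _ in range(n_iter):
        lam_N = (lam0 + N) * E_tau         # update q(mu) given current E[tau]
        # Expectations under q(mu) needed by the q(tau) update:
        E_sq_dev = np.sum((x - mu_N) ** 2) + N / lam_N
        E_prior = lam0 * ((mu_N - mu0) ** 2 + 1.0 / lam_N)
        b_N = b0 + 0.5 * (E_sq_dev + E_prior)   # update q(tau) given q(mu)
        new_E_tau = a_N / b_N
        if abs(new_E_tau - E_tau) < tol:
            E_tau = new_E_tau
            break
        E_tau = new_E_tau

    lam_N = (lam0 + N) * E_tau             # keep q(mu) consistent with final q(tau)
    return mu_N, lam_N, a_N, b_N

# Synthetic data: 500 draws from N(2.0, 0.5^2), i.e. true precision 1/0.25 = 4.
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=0.5, size=500)
mu_N, lam_N, a_N, b_N = mean_field_vb_gaussian(x)
print(mu_N, a_N / b_N)   # posterior mean near 2.0, E[tau] near 4
```

Each update conditions one factor on the expected sufficient statistics of the other, which is the "local node update" structure that Variational Message Passing generalises to arbitrary conjugate-exponential graphical models.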

References

  • Attias H (2000) A variational Bayesian framework for graphical models. In: Advances in neural information processing systems. MIT Press

  • Bernardo JM, Smith AFM (2000) Bayesian theory. Wiley, London

  • Bishop CM, Winn JM, Spiegelhalter D (2002) VIBES: a variational inference engine for Bayesian networks. In: Advances in neural information processing systems

  • Winn J, Bishop C (2005) Variational message passing. J Mach Learn Res 6: 661–694


Author information

Corresponding author

Correspondence to Charles W. Fox.

About this article

Cite this article

Fox, C.W., Roberts, S.J. A tutorial on variational Bayesian inference. Artif Intell Rev 38, 85–95 (2012). https://doi.org/10.1007/s10462-011-9236-8
