Learning in Graphical Models

  • Michael I. Jordan

Part of the NATO ASI Series book series (ASID, volume 89)

Table of contents

  1. Front Matter
    Pages i-5
  2. Inference

    1. Front Matter
      Pages 7-7
    2. Robert Cowell
      Pages 27-49
    3. Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, Lawrence K. Saul
      Pages 105-161
    4. Tommi S. Jaakkola, Michael I. Jordan
      Pages 163-173
    5. D. J. C. Mackay
      Pages 175-204
  3. Independence

    1. Front Matter
      Pages 229-229
    2. Thomas S. Richardson
      Pages 231-259
  4. Foundations for Learning

    1. Front Matter
      Pages 299-299
    2. David Heckerman
      Pages 301-354
  5. Learning from Data

    1. Front Matter
      Pages 369-369
    2. Christopher M. Bishop
      Pages 371-403
    3. Nir Friedman, Moises Goldszmidt
      Pages 421-459
    4. Dan Geiger, David Heckerman, Christopher Meek
      Pages 461-477
    5. Geoffrey E. Hinton, Brian Sallans, Zoubin Ghahramani
      Pages 479-494
    6. Michael Kearns, Yishay Mansour, Andrew Y. Ng
      Pages 495-520
    7. Stefano Monti, Gregory F. Cooper
      Pages 521-540
    8. Lawrence Saul, Michael Jordan
      Pages 541-554
    9. Peter W. F. Smith, Joe Whittaker
      Pages 555-574
    10. D. J. Spiegelhalter, N. G. Best, W. R. Gilks, H. Inskip
      Pages 575-598
  6. Back Matter
    Pages 623-630

About this book


In the past decade, several research communities within the computational sciences have studied learning in networks, each starting from a different point of view. There has been substantial progress in these communities, and a surprising convergence has developed between their formalisms. The awareness of this convergence, and the growing interest of researchers in understanding the essential unity of the subject, underlie the current volume.
Two research communities that have used graphical or network formalisms to particular advantage are the belief network community and the neural network community. Belief networks arose within computer science and statistics and were developed with an emphasis on prior knowledge and exact probabilistic calculations. Neural networks arose within electrical engineering, physics and neuroscience and have emphasised pattern recognition and systems modelling problems. This volume draws together researchers from these two communities and presents both kinds of networks as instances of a general unified graphical formalism. The book focuses on probabilistic methods for learning and inference in graphical models, on algorithm analysis and design, and on theory and applications. Exact methods, sampling methods and variational methods are discussed in detail.
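To make the exact-inference setting concrete, here is a minimal sketch (not taken from the book) of Bayes' rule applied in a two-node belief network, Rain → WetGrass; the network structure and all probability values are hypothetical, chosen purely for illustration:

```python
# Hypothetical two-node belief network: Rain -> WetGrass.
# All numbers below are illustrative assumptions, not from the book.

p_rain = {True: 0.2, False: 0.8}            # prior P(Rain)
p_wet_given_rain = {True: 0.9, False: 0.1}  # P(WetGrass=True | Rain)

def posterior_rain_given_wet():
    """Exact inference by enumeration: P(Rain=True | WetGrass=True)."""
    # Joint P(Rain=r, WetGrass=True) for both values of Rain
    joint = {r: p_rain[r] * p_wet_given_rain[r] for r in (True, False)}
    evidence = sum(joint.values())          # marginal P(WetGrass=True)
    return joint[True] / evidence           # Bayes' rule

print(posterior_rain_given_wet())
```

Enumeration like this is exponential in the number of variables; the exact, sampling and variational methods surveyed in the volume are precisely the techniques for scaling such calculations to large networks.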
Audience: A wide cross-section of computationally oriented researchers, including computer scientists, statisticians, electrical engineers, physicists and neuroscientists.


Keywords: Bayesian network; latent variable model; Monte Carlo method; algorithms; clustering; data analysis; electrical engineering; expectation–maximization algorithm; learning; linear regression; proving; visualization

Editors and affiliations

  • Michael I. Jordan
    1. Massachusetts Institute of Technology, Cambridge, USA

Bibliographic information

  • Copyright Information Kluwer Academic Publishers 1998
  • Publisher Name Springer, Dordrecht
  • eBook Packages Springer Book Archive
  • Print ISBN 978-94-010-6104-9
  • Online ISBN 978-94-011-5014-9
  • Series Print ISSN 0258-123X