Data-Driven Discovery Using Probabilistic Hidden Variable Models

  • Padhraic Smyth
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4264)

Abstract

Generative probabilistic models have proven to be a very useful framework for machine learning from scientific data. Key ideas that underlie the generative approach include (a) representing complex stochastic phenomena using the structured language of graphical models, (b) using latent (hidden) variables to make inferences about unobserved phenomena, and (c) leveraging Bayesian ideas for learning and prediction. This talk will begin with a brief review of learning from data with hidden variables and then discuss some exciting recent work in this area that has direct application to a broad range of scientific problems. A number of different scientific data sets will be used as examples to illustrate the application of these ideas in probabilistic learning, such as time-course microarray expression data, functional magnetic resonance imaging (fMRI) data of the human brain, text documents from the biomedical literature, and sets of cyclone trajectories.
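The central idea above, using latent (hidden) variables to reason about unobserved phenomena, can be illustrated with a minimal sketch. The example below is not taken from the paper; the data and parameter choices are hypothetical. It fits a two-component Gaussian mixture with EM, where the hidden variable is each point's unobserved component assignment.

    # Minimal latent-variable sketch: two-component Gaussian mixture fit with EM.
    # The hidden variable is the (unobserved) mixture component of each data point.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic 1-D data drawn from two Gaussians (hypothetical example data).
    x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])

    # Initial parameter guesses: mixing weights, means, standard deviations.
    pi = np.array([0.5, 0.5])
    mu = np.array([-1.0, 1.0])
    sigma = np.array([1.0, 1.0])

    def normal_pdf(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    for _ in range(100):
        # E-step: posterior probability (responsibility) of each hidden component.
        dens = pi * normal_pdf(x[:, None], mu, sigma)        # shape (n, 2)
        resp = dens / dens.sum(axis=1, keepdims=True)

        # M-step: re-estimate parameters from the expected complete-data statistics.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

    print("weights:", pi, "means:", mu, "stds:", sigma)

The same expectation-maximization pattern, alternating between inferring the hidden variables and re-estimating parameters, underlies the richer models discussed in the talk, such as topic models for text and mixture models for trajectories.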

Keywords

Graphical models, scientific data, functional magnetic resonance imaging (fMRI), Bayesian methods, text documents

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Padhraic Smyth
  1. Information and Computer Science, University of California, Irvine, USA