Extensions of the Informative Vector Machine

  • Neil D. Lawrence
  • John C. Platt
  • Michael I. Jordan
Conference paper

DOI: 10.1007/11559887_4

Part of the Lecture Notes in Computer Science book series (LNCS, volume 3635)
Cite this paper as:
Lawrence, N.D., Platt, J.C., Jordan, M.I. (2005). Extensions of the Informative Vector Machine. In: Winkler, J., Niranjan, M., Lawrence, N. (eds) Deterministic and Statistical Methods in Machine Learning. Lecture Notes in Computer Science, vol. 3635. Springer, Berlin, Heidelberg.

Abstract

The informative vector machine (IVM) is a practical method for Gaussian process regression and classification. The IVM produces a sparse approximation to a Gaussian process by combining assumed density filtering with a heuristic that selects points to minimize the posterior entropy. This paper extends the IVM in several ways. First, we propose a novel noise model that allows the IVM to be applied to a mixture of labeled and unlabeled data. Second, we apply the IVM with a block-diagonal covariance matrix to "learning to learn" from related tasks. Third, we modify the IVM to incorporate prior knowledge from known invariances. All of these extensions are tested on artificial and real data.
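To make the selection heuristic concrete, below is a minimal sketch of IVM-style greedy point selection for Gaussian process regression. It is an illustration under stated assumptions, not the paper's implementation: it assumes an RBF kernel and a Gaussian noise model with precision beta, and it keeps the full N x N posterior covariance for clarity (the actual IVM maintains low-rank representations to stay efficient). The function names `rbf_kernel` and `ivm_select` are hypothetical.

```python
# Illustrative sketch only: assumes an RBF kernel and Gaussian noise
# (precision beta). Stores the full covariance, O(N^2); the real IVM
# uses low-rank updates for scalability.
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def ivm_select(X, y, d, beta=100.0):
    """Greedily pick d of N points, each step maximizing the reduction
    in posterior entropy, with an ADF-style rank-one posterior update."""
    N = X.shape[0]
    A = rbf_kernel(X)      # posterior covariance over latent f (starts at prior)
    mu = np.zeros(N)       # posterior mean
    active, remaining = [], set(range(N))
    for _ in range(d):
        # Entropy reduction from including point i: 0.5 * log(1 + beta * var_i)
        scores = {i: 0.5 * np.log1p(beta * A[i, i]) for i in remaining}
        i = max(scores, key=scores.get)
        # Rank-one Gaussian update (assumed density filtering step)
        denom = A[i, i] + 1.0 / beta
        a_i = A[:, i].copy()
        mu += a_i * (y[i] - mu[i]) / denom
        A -= np.outer(a_i, a_i) / denom
        active.append(i)
        remaining.remove(i)
    return active, mu, A

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
    active, mu, A = ivm_select(X, y, d=20)
    print("active set:", sorted(active))
```

For homoscedastic Gaussian noise the score reduces to picking the candidate with the largest posterior variance; the entropy-based form above is shown because it generalizes to the non-Gaussian noise models the paper considers.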

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Neil D. Lawrence (1)
  • John C. Platt (2)
  • Michael I. Jordan (3)
  1. Department of Computer Science, University of Sheffield, Sheffield, U.K.
  2. Microsoft Research, Microsoft Corporation, Redmond, U.S.A.
  3. Computer Science and Statistics, University of California, Berkeley, U.S.A.