Deterministic and Statistical Methods in Machine Learning

Volume 3635 of the series Lecture Notes in Computer Science, pp. 56–87

Extensions of the Informative Vector Machine

  • Neil D. Lawrence, Department of Computer Science, University of Sheffield
  • John C. Platt, Microsoft Research, Microsoft Corporation
  • Michael I. Jordan, Computer Science and Statistics, University of California

The informative vector machine (IVM) is a practical method for Gaussian process regression and classification. The IVM produces a sparse approximation to a Gaussian process by combining assumed density filtering with a heuristic that selects points to minimize the posterior entropy. This paper extends the IVM in several ways. First, we propose a novel noise model that allows the IVM to be applied to a mixture of labeled and unlabeled data. Second, we apply the IVM with a block-diagonal covariance matrix, for “learning to learn” from related tasks. Third, we modify the IVM to incorporate prior knowledge from known invariances. All of these extensions are tested on artificial and real data.
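To make the selection heuristic concrete, the sketch below is a minimal illustration of IVM-style greedy point selection for GP regression, not the authors' implementation. It assumes homoscedastic Gaussian noise, under which the entropy reduction for including point i works out to ½ log(1 + s_i/σ²), where s_i is the current posterior variance at i, so the greedy choice is simply the point with the largest posterior variance. The RBF kernel, its parameters, and the function names (rbf_kernel, ivm_select) are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of IVM-style greedy point
# selection for GP regression with Gaussian noise. Kernel choice and
# all parameter values are illustrative assumptions.
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix for inputs X of shape (n, d)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def ivm_select(X, y, d, noise_var=0.1):
    """Greedily pick d points by maximal entropy reduction.

    For Gaussian noise the entropy change of including point i is
    dH_i = 0.5 * log(1 + s_i / noise_var), where s_i is the current
    posterior variance at i, so each step takes the point with the
    largest s_i and performs a rank-1 ADF update of the posterior.
    """
    n = X.shape[0]
    Sigma = rbf_kernel(X)          # posterior covariance (dense, small-n sketch)
    mu = np.zeros(n)               # posterior mean
    active, candidates = [], set(range(n))
    for _ in range(d):
        s = np.diag(Sigma)
        i = max(candidates, key=lambda j: s[j])   # max entropy reduction
        nu = 1.0 / (noise_var + s[i])             # ADF precision coefficient
        g = (y[i] - mu[i]) * nu                   # ADF mean coefficient
        col = Sigma[:, i].copy()
        mu += g * col                             # rank-1 mean update
        Sigma -= nu * np.outer(col, col)          # rank-1 covariance update
        active.append(i)
        candidates.remove(i)
    return active, mu, Sigma

# Toy usage: 1-D regression, keep 10 of 100 points.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)
active, mu, Sigma = ivm_select(X, y, d=10)
print("selected points:", sorted(active))
```

The dense covariance update keeps the sketch short; the actual IVM maintains a low-rank representation of the posterior so that selection scales to large datasets, and its ADF updates generalize to non-Gaussian noise models such as the semi-supervised one proposed in the paper.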