International Journal of Computer Vision, Volume 40, Issue 1, pp 25–47

Learning Low-Level Vision

  • William T. Freeman, Mitsubishi Electric Research Labs
  • Egon C. Pasztor, MIT Media Laboratory
  • Owen T. Carmichael, Carnegie Mellon University



We describe a learning-based method for low-level vision problems—estimating scenes from images. We generate a synthetic world of scenes and their corresponding rendered images, modeling their relationships with a Markov network. Bayesian belief propagation allows us to efficiently find a local maximum of the posterior probability for the scene, given an image. We call this approach VISTA—Vision by Image/Scene TrAining.
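The core inference step described above is belief propagation on a Markov network of scene candidates. As a minimal sketch (not the paper's implementation, and restricted to a chain rather than the image-grid topology used in VISTA), max-product message passing over discrete candidate states looks like this; the names `phi` (image compatibility) and `psi` (scene–scene compatibility) are illustrative:

```python
import numpy as np

def max_product_chain(phi, psi):
    """Max-product belief propagation on a chain-structured Markov network.

    phi: (n_nodes, n_states) local evidence for each candidate scene state.
    psi: (n_states, n_states) compatibility between neighboring states.
    Returns the MAP state index for each node (exact on a chain).
    """
    n, k = phi.shape
    # Forward messages: fwd[i][s] = best score arriving at node i in state s
    # from everything to its left.
    fwd = np.ones((n, k))
    for i in range(1, n):
        fwd[i] = np.max(psi * (phi[i - 1] * fwd[i - 1])[:, None], axis=0)
    # Backward messages, symmetrically from the right.
    bwd = np.ones((n, k))
    for i in range(n - 2, -1, -1):
        bwd[i] = np.max(psi.T * (phi[i + 1] * bwd[i + 1])[:, None], axis=0)
    # Beliefs combine local evidence with both incoming messages.
    beliefs = phi * fwd * bwd
    return beliefs.argmax(axis=1)
```

On loopy graphs such as the image grids in the paper, the same message updates are iterated and yield an approximate (local) maximum of the posterior rather than an exact one.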

We apply VISTA to the “super-resolution” problem (estimating high frequency details from a low-resolution image), showing good results. To illustrate the potential breadth of the technique, we also apply it in two other problem domains, both simplified. We learn to distinguish shading from reflectance variations in a single image under particular lighting conditions. For the motion estimation problem in a “blobs world”, we show figure/ground discrimination, solution of the aperture problem, and filling-in arising from application of the same probabilistic machinery.
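The training-based flavor of the super-resolution application can be illustrated with a deliberately simplified sketch: store (low-frequency patch, high-frequency detail) pairs from training images, then estimate the missing detail of a test patch by nearest-neighbor lookup. All function names here are hypothetical, and this omits the Markov-network step that VISTA uses to keep neighboring patch choices compatible:

```python
import numpy as np

def build_dictionary(low_patches, high_patches):
    """Pair each training low-frequency patch with its high-frequency detail."""
    return np.asarray(low_patches, dtype=float), np.asarray(high_patches, dtype=float)

def predict_detail(low_patch, low_dict, high_dict):
    """Return the stored detail of the nearest low-frequency training patch."""
    dists = np.sum((low_dict - np.asarray(low_patch, dtype=float)) ** 2, axis=1)
    return high_dict[np.argmin(dists)]
```

In the full method, each image node keeps several such candidate details rather than one, and belief propagation selects among them.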

Keywords: vision and learning, belief propagation, low-level vision, super-resolution, shading and reflectance, motion estimation