International Journal of Computer Vision, Volume 40, Issue 1, pp 25–47

Learning Low-Level Vision

  • William T. Freeman
  • Egon C. Pasztor
  • Owen T. Carmichael

DOI: 10.1023/A:1026501619075

Cite this article as:
Freeman, W.T., Pasztor, E.C. & Carmichael, O.T. International Journal of Computer Vision (2000) 40: 25. doi:10.1023/A:1026501619075

Abstract

We describe a learning-based method for low-level vision problems—estimating scenes from images. We generate a synthetic world of scenes and their corresponding rendered images, modeling their relationships with a Markov network. Bayesian belief propagation allows us to efficiently find a local maximum of the posterior probability for the scene, given an image. We call this approach VISTA—Vision by Image/Scene TrAining.
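As a concrete illustration of the inference step named above, the following is a minimal sketch, not the authors' implementation, of max-product belief propagation on a chain-structured Markov network with discrete candidate scene states at each node. The NumPy dependency, the function name, and the placeholder unary/pairwise compatibility tables are assumptions added here for illustration; the paper itself passes messages between networks of image and scene patches.

    import numpy as np

    def max_product_chain(unary, pairwise):
        """Approximate MAP estimate on a chain-structured Markov network.

        unary:    list of length-K vectors; unary[i][k] is the compatibility
                  between candidate scene state k at node i and the local
                  image evidence.
        pairwise: K x K matrix of neighbor compatibilities psi(x_i, x_j).
        Returns one selected candidate index per node.
        """
        N, K = len(unary), len(unary[0])
        msg_left = [np.ones(K) for _ in range(N)]   # message arriving from the left neighbor
        msg_right = [np.ones(K) for _ in range(N)]  # message arriving from the right neighbor

        # One forward and one backward sweep is exact on a chain; grid-structured
        # (loopy) networks instead repeat such updates until convergence.
        for i in range(1, N):                        # left-to-right messages
            b = unary[i - 1] * msg_left[i - 1]
            m = np.max(pairwise * b[:, None], axis=0)
            msg_left[i] = m / m.sum()                # normalize for numerical stability
        for i in range(N - 2, -1, -1):               # right-to-left messages
            b = unary[i + 1] * msg_right[i + 1]
            m = np.max(pairwise.T * b[:, None], axis=0)
            msg_right[i] = m / m.sum()

        # Combine local evidence with incoming messages and pick the best state.
        return [int(np.argmax(unary[i] * msg_left[i] * msg_right[i])) for i in range(N)]

    # Toy usage: 5 nodes, 3 candidate states each, smoothness-favoring pairwise term.
    unary = [np.random.rand(3) for _ in range(5)]
    pairwise = np.array([[1.0, 0.3, 0.1],
                         [0.3, 1.0, 0.3],
                         [0.1, 0.3, 1.0]])
    print(max_product_chain(unary, pairwise))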

We apply VISTA to the “super-resolution” problem (estimating high frequency details from a low-resolution image), showing good results. To illustrate the potential breadth of the technique, we also apply it in two other problem domains, both simplified. We learn to distinguish shading from reflectance variations in a single image under particular lighting conditions. For the motion estimation problem in a “blobs world”, we show figure/ground discrimination, solution of the aperture problem, and filling-in arising from application of the same probabilistic machinery.
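For the super-resolution application, the following is a hypothetical sketch of a candidate-generation step consistent with the training setup described above: synthetic training images supply co-located (low-resolution patch, high-frequency patch) pairs, and each input patch retrieves its closest low-resolution matches, whose high-frequency partners become the discrete candidates scored by belief propagation. The function names, patch size, and nearest-neighbor matching rule are assumptions, not details taken from the paper.

    import numpy as np

    def make_training_pairs(low_images, high_images, patch=7):
        """Collect co-located (low-res patch, high-frequency patch) pairs.
        Assumes each low-resolution image has been interpolated to the size
        of its high-frequency counterpart."""
        lows, highs = [], []
        for lo, hi in zip(low_images, high_images):
            H, W = lo.shape
            for r in range(0, H - patch, patch):
                for c in range(0, W - patch, patch):
                    lows.append(lo[r:r + patch, c:c + patch].ravel())
                    highs.append(hi[r:r + patch, c:c + patch].ravel())
        return np.array(lows), np.array(highs)

    def candidate_patches(query_patch, train_low, train_high, k=10):
        """Return the k high-frequency patches whose low-resolution partners
        best match the query patch (smallest squared distance)."""
        d = np.sum((train_low - query_patch.ravel()) ** 2, axis=1)
        idx = np.argsort(d)[:k]
        return train_high[idx], d[idx]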

Keywords: vision and learning, belief propagation, low-level vision, super-resolution, shading and reflectance, motion estimation

Copyright information

© Kluwer Academic Publishers 2000

Authors and Affiliations

  1. William T. Freeman, Mitsubishi Electric Research Labs., Cambridge, USA
  2. Egon C. Pasztor, MIT Media Laboratory, Cambridge, USA
  3. Owen T. Carmichael, 209 Smith Hall, Carnegie-Mellon University, Pittsburgh, USA