International Journal of Computer Vision, Volume 40, Issue 1, pp. 25–47

Learning Low-Level Vision

  • William T. Freeman
  • Egon C. Pasztor
  • Owen T. Carmichael


We describe a learning-based method for low-level vision problems—estimating scenes from images. We generate a synthetic world of scenes and their corresponding rendered images, modeling their relationships with a Markov network. Bayesian belief propagation allows us to efficiently find a local maximum of the posterior probability for the scene, given an image. We call this approach VISTA—Vision by Image/Scene TrAining.
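As a rough illustration of the inference step described above (not the paper's implementation), max-product belief propagation on a one-dimensional chain of scene nodes can be sketched as follows. The node count, state count, and random evidence/compatibility tables are placeholders; on a chain the max-product beliefs yield the exact MAP scene estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 5  # scene nodes (e.g. image patches)
K = 4  # candidate scene states per node

# Local evidence phi[i, k]: compatibility of candidate k at node i with the image data.
phi = rng.random((N, K))
# Pairwise compatibility psi[k, l] between neighboring scene candidates.
psi = rng.random((K, K))

# Max-product belief propagation on a chain: forward and backward message passes.
fwd = np.ones((N, K))
for i in range(1, N):
    # Message into node i from the left: max over states k of the left neighbor.
    fwd[i] = np.max(psi.T * (phi[i - 1] * fwd[i - 1]), axis=1)
bwd = np.ones((N, K))
for i in range(N - 2, -1, -1):
    # Message into node i from the right: max over states l of the right neighbor.
    bwd[i] = np.max(psi * (phi[i + 1] * bwd[i + 1]), axis=1)

# Belief at each node combines local evidence with incoming messages;
# the per-node argmax gives the MAP scene estimate (exact on a chain).
beliefs = phi * fwd * bwd
map_states = beliefs.argmax(axis=1)
print(map_states)
```

The full method runs the analogous message updates on a two-dimensional grid of image patches, where the propagation is iterative and approximate rather than exact.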

We apply VISTA to the “super-resolution” problem (estimating high-frequency details from a low-resolution image), showing good results. To illustrate the potential breadth of the technique, we also apply it in two other, simplified problem domains. We learn to distinguish shading from reflectance variations in a single image under particular lighting conditions. For the motion estimation problem in a “blobs world”, we show figure/ground discrimination, solution of the aperture problem, and filling-in, all arising from the same probabilistic machinery.
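The image/scene training idea behind the super-resolution application can be caricatured with a toy example-based lookup on 1-D signals. This is a deliberate simplification that omits the Markov-network compatibilities between neighboring patches; every name and size here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy synthetic "training world": 1-D high-resolution scene patches and
# their low-resolution renderings (2x decimation by pairwise averaging).
def render(hi):
    return hi.reshape(-1, 2).mean(axis=1)

train_hi = rng.random((500, 8))                      # high-res "scene" patches
train_lo = np.array([render(h) for h in train_hi])   # low-res "image" patches

# Estimate a high-res patch for a new low-res patch by nearest-neighbor
# lookup in the training set, borrowing the matching example's detail.
def super_resolve(lo_patch):
    d = np.linalg.norm(train_lo - lo_patch, axis=1)
    return train_hi[d.argmin()]

test_hi = rng.random(8)
est = super_resolve(render(test_hi))
print(est.shape)
```

In the full method, several candidate high-resolution patches are kept per location and belief propagation selects among them so that neighboring patches agree.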

Keywords: vision and learning, belief propagation, low-level vision, super-resolution, shading and reflectance, motion estimation





Copyright information

© Kluwer Academic Publishers 2000

Authors and Affiliations

  • William T. Freeman, Mitsubishi Electric Research Labs, Cambridge, USA
  • Egon C. Pasztor, MIT Media Laboratory, Cambridge, USA
  • Owen T. Carmichael, 209 Smith Hall, Carnegie-Mellon University, Pittsburgh, USA
