A wide variety of data structures are used to represent images. At the low level, raw grey-level or binary images are represented by arrays of pixels (with square, triangular or hexagonal connectivity). Object boundaries are described by Fourier descriptors or strings (Freeman chain code, symbolic strings). The adjacency of object regions is described by graph structures such as the region adjacency graph. Finally, hierarchical or pyramidal data structures [192], which describe an image at a series of different levels or resolutions, have proved useful (e.g. quad trees [193]).
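The hierarchical structures mentioned above can be illustrated with a region quadtree for a binary image: a homogeneous block becomes a leaf, while a mixed block is split into four quadrants and the subdivision recurses. The sketch below is illustrative only (the function name, the nested-tuple representation, and the assumption of a square image with power-of-two side are not from the original text):

```python
def quadtree(img, x=0, y=0, size=None):
    """Build a region quadtree of a square binary image (nested lists).

    Returns a leaf value (0 or 1) for a homogeneous block, or a tuple
    ('node', nw, ne, sw, se) for a block that had to be subdivided.
    Assumes the side length of the image is a power of two.
    """
    if size is None:
        size = len(img)
    first = img[y][x]
    # Leaf case: every pixel in this block has the same value.
    if all(img[y + dy][x + dx] == first
           for dy in range(size) for dx in range(size)):
        return first
    # Otherwise split into four half-size quadrants.
    h = size // 2
    return ('node',
            quadtree(img, x,     y,     h),   # north-west
            quadtree(img, x + h, y,     h),   # north-east
            quadtree(img, x,     y + h, h),   # south-west
            quadtree(img, x + h, y + h, h))   # south-east

img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 1, 0]]
tree = quadtree(img)
# The top-left and top-right quadrants collapse to single leaves;
# only the bottom-right quadrant needs further subdivision.
```

Here `tree` is `('node', 1, 0, 0, ('node', 0, 0, 1, 0))`: three of the four quadrants are homogeneous leaves, so the tree describes the image at two levels of resolution, which is the economy such pyramidal structures aim for.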


Keywords: Optical Flow, Fourier Descriptor, Symbolic String, Quad Tree, Intrinsic Image




  1. Tanimoto, S. and Klinger, A. Structured Computer Vision. Academic Press, New York, 1980.
  2. Nagel, H.H. On the estimation of optical flow: relations between different approaches and some new results. Artificial Intelligence, 33:299–324, 1987.
  3. Fu, K.S. and Mui, J.K. A survey on image segmentation. Pattern Recognition, 13:3–16, 1981.
  4. Brady, M. Computational approaches to image understanding. ACM Computing Surveys, 14:3–71, 1982.
  5. Bundy, A. Incidence Calculus: A Mechanism for Probabilistic Reasoning. Journal of Automated Reasoning, 1:263–284, 1985. Also available as DAI Research Paper No 216, Edinburgh University.
  6. Quinlan, J.R. Inferno: a cautious approach to uncertain inference. The Computer Journal, 26(3), 1983.
  7. Sussman, G.J. A computational model of skill acquisition. American Elsevier, New York, 1975.
  8. Tate, A. Interacting goals and their use. In Proceedings of IJCAI-79, International Joint Conference on Artificial Intelligence, 1979.
  9. Waldinger, R. Achieving several goals simultaneously. Technical Note 107, SRI AI Center, Menlo Park, 1975.
  10. Warren, D.H.D. WARPLAN: A system for generating plans. Memo 76, Dept. of Artificial Intelligence, Edinburgh, 1974.
  11. Allen, J. Toward a general model of action and time. Artificial Intelligence, 23, 1984.
  12. Brady, M. Computational approaches to image understanding. ACM Computing Surveys, 14(1):2–71, 1982.
  13. Barrow, H.G. and Tenenbaum, J.M. Computational vision. In Proc IEEE 6, pages 572–596, IEEE, 1981.
  14. McAllester, D. Reasoning Utility Package User's Manual Version One. Memo 667, MIT AI Lab, Cambridge, Mass., April 1982.
  15. Muggleton, S. and Buntine, W. Machine invention of first-order predicates by inverting resolution. In Proceedings of the Fifth International Conference on Machine Learning, pages 339–352. Morgan Kaufmann, San Mateo, California, 1988.
  16. Fahlman, S.E. NETL, a system for representing and using real-world knowledge. MIT Press, Cambridge, Mass., 1979.
  17. Woods, W. et al. Speech Understanding Systems, Final Report Vol. 4. Report 3438, Bolt Beranek and Newman Inc., 1976.
  18. Korf, R. Depth-first iterative-deepening: an optimal admissible tree search. Artificial Intelligence, 27(1):97–109, 1985.

Copyright information

© Springer-Verlag Berlin Heidelberg 1990

Authors and Affiliations

  • Alan Bundy
  1. Department of Artificial Intelligence, University of Edinburgh, Edinburgh, Scotland, UK
