Generation of semantic regions from image sequences

  • Jonathan H. Fernyhough
  • Anthony G. Cohn
  • David C. Hogg
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1065)


The simultaneous interpretation of object behaviour from real-world image sequences is a highly desirable goal in machine vision. Although this is a sophisticated task, one method of reducing its complexity in stylized domains is to provide a context-specific spatial model of the domain. Such a model of space is particularly useful for spatial event detection, where the location of an object can indicate its behaviour within the domain. To date, this approach has suffered the drawback that the spatial representation must be generated by hand for each new domain. A method is described, complete with experimental results, for automatically generating a region-based, context-specific model of space for strongly stylized domains from the movement of objects within the domain.
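The core idea — deriving a region-based spatial model from observed object movement rather than hand-crafting it — can be sketched in a minimal form. The snippet below is an illustrative reconstruction, not the authors' implementation: it accumulates trajectory points into a cell-occupancy grid, then groups frequently visited cells into connected regions. The grid size, visit threshold, and 4-connectivity are all assumptions chosen for the sketch.

```python
# Hedged sketch (assumed interface, not the paper's method): derive labelled
# regions of a discretized scene from object trajectories by (1) counting how
# often each grid cell is visited, then (2) labelling 4-connected components
# of cells visited at least `threshold` times.
from collections import deque

def occupancy_grid(trajectories, width, height):
    """Count visits to each grid cell across all trajectories."""
    grid = [[0] * width for _ in range(height)]
    for path in trajectories:
        for x, y in path:
            if 0 <= x < width and 0 <= y < height:
                grid[y][x] += 1
    return grid

def label_regions(grid, threshold):
    """Label 4-connected components of frequently visited cells.

    Returns (labels, n_regions); cells below threshold keep label 0.
    """
    height, width = len(grid), len(grid[0])
    labels = [[0] * width for _ in range(height)]
    n_regions = 0
    for sy in range(height):
        for sx in range(width):
            if grid[sy][sx] >= threshold and labels[sy][sx] == 0:
                n_regions += 1
                labels[sy][sx] = n_regions
                queue = deque([(sx, sy)])       # flood-fill this component
                while queue:
                    x, y = queue.popleft()
                    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                        if (0 <= nx < width and 0 <= ny < height
                                and grid[ny][nx] >= threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = n_regions
                            queue.append((nx, ny))
    return labels, n_regions
```

For example, two repeated trajectories along separate rows of a 3x3 grid would yield two distinct regions, since the intervening row is never visited often enough to join them.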


Keywords: spatial representation · scene understanding



Copyright information

© Springer-Verlag Berlin Heidelberg 1996

Authors and Affiliations

  • Jonathan H. Fernyhough¹
  • Anthony G. Cohn¹
  • David C. Hogg¹
  1. Division of Artificial Intelligence, School of Computer Studies, University of Leeds, Leeds, UK
