International Journal of Computer Vision, Volume 4, Issue 1, pp 39–57

Detecting moving objects

  • William B. Thompson
  • Ting-Chuen Pong


The detection of moving objects is important in many tasks. This paper examines moving object detection based primarily on optical flow. We conclude that in realistic situations, detection using visual information alone is quite difficult, particularly when the camera may also be moving. The availability of additional information about camera motion and/or scene structure greatly simplifies the problem. Two general classes of techniques are examined. The first is based upon the motion epipolar constraint—translational motion produces a flow field radially expanding from a “focus of expansion” (FOE). Epipolar methods depend on knowing at least partial information about camera translation and/or rotation. The second class of methods is based on comparison of observed optical flow with other information about depth, for example from stereo vision. Examples of several of these techniques are presented.
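The first class of techniques described above can be illustrated with a small sketch: for a purely translating camera, every optical-flow vector must point radially away from the focus of expansion, so flow that deviates from the radial direction signals an independently moving object. The function below is an illustrative reconstruction, not the authors' implementation; the function name, the fixed angular threshold, and the toy data are all assumptions.

```python
import numpy as np

def epipolar_violation(points, flow, foe, angle_thresh_deg=15.0):
    """Flag flow vectors that violate the motion epipolar constraint.

    For a purely translating camera, the flow at image point p lies
    along the ray from the focus of expansion (FOE) through p.  Flow
    deviating from that radial direction by more than angle_thresh_deg
    is taken as evidence of independent object motion.
    """
    radial = points - foe                        # expected flow directions
    r_norm = np.linalg.norm(radial, axis=1, keepdims=True)
    f_norm = np.linalg.norm(flow, axis=1, keepdims=True)
    # Guard against zero-length vectors (point at the FOE, or no flow).
    valid = (r_norm[:, 0] > 1e-9) & (f_norm[:, 0] > 1e-9)
    r_unit = np.where(r_norm > 1e-9, radial / r_norm, 0.0)
    f_unit = np.where(f_norm > 1e-9, flow / f_norm, 0.0)
    # Angle between observed flow and the expected radial direction.
    cos_angle = np.clip(np.sum(r_unit * f_unit, axis=1), -1.0, 1.0)
    return valid & (np.degrees(np.arccos(cos_angle)) > angle_thresh_deg)

# Toy example: camera translating forward, FOE at the image origin.
foe = np.array([0.0, 0.0])
pts = np.array([[10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
flows = np.array([[2.0, 0.0],    # radial: consistent with egomotion
                  [0.0, 3.0],    # radial: consistent with egomotion
                  [-1.0, 1.0]])  # non-radial: independently moving
print(epipolar_violation(pts, flows, foe))  # → [False False True]
```

Note that this sketch presumes the FOE (i.e., the camera's translation direction) is known; as the abstract observes, the method depends on at least partial knowledge of camera motion.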







Copyright information

© Kluwer Academic Publishers 1990

Authors and Affiliations

  • William B. Thompson (1)
  • Ting-Chuen Pong (1)

  1. Computer Science Department, University of Minnesota, Minneapolis
