
Hierarchical model-based motion estimation

  • James R. Bergen
  • P. Anandan
  • Keith J. Hanna
  • Rajesh Hingorani
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 588)

Abstract

This paper describes a hierarchical estimation framework for the computation of diverse representations of motion information. The key features of the resulting framework (or family of algorithms) are a global model that constrains the overall structure of the estimated motion, a local model that is used in the estimation process, and a coarse-fine refinement strategy. Four specific motion models (affine flow, planar surface flow, rigid body motion, and general optical flow) are described along with their application to specific examples.
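The abstract describes the framework only at a high level. The sketch below illustrates, in NumPy, one way the three ingredients it names (a global affine motion model, a local gradient-based brightness-constancy model, and coarse-to-fine refinement over an image pyramid) can fit together. It is an illustrative reconstruction under stated simplifying assumptions (2x2 block averaging instead of a Gaussian/Laplacian pyramid, nearest-neighbour warping, plain Gauss-Newton updates), not the authors' implementation; all function names (pyramid, warp_affine, affine_step, coarse_to_fine_affine) are hypothetical.

```python
# Minimal coarse-to-fine affine motion estimation sketch (NumPy only).
# Illustrative reconstruction of the general approach named in the abstract,
# not the authors' algorithm; simplifications are noted in the comments.
import numpy as np

def pyramid(img, levels):
    """Build a crude multi-resolution pyramid by 2x2 block averaging (coarsest level first)."""
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        im = pyr[-1]
        h, w = (im.shape[0] // 2) * 2, (im.shape[1] // 2) * 2
        pyr.append(im[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyr[::-1]

def warp_affine(img, p):
    """Backward-warp img by the affine flow u = p0 + p1*x + p2*y, v = p3 + p4*x + p5*y (nearest neighbour)."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(np.float64)
    xs = np.clip(np.rint(x + p[0] + p[1] * x + p[2] * y), 0, w - 1).astype(int)
    ys = np.clip(np.rint(y + p[3] + p[4] * x + p[5] * y), 0, h - 1).astype(int)
    return img[ys, xs]

def affine_step(I0, I1, p):
    """One Gauss-Newton update of the six affine parameters from the linearized brightness-constancy error."""
    I1w = warp_affine(I1, p)
    Iy, Ix = np.gradient(I1w)          # local model: spatial gradients of the warped image
    It = I1w - I0                      # temporal difference after warping
    h, w = I0.shape
    y, x = np.mgrid[0:h, 0:w].astype(np.float64)
    # Jacobian of the affine flow with respect to p, weighted by the image gradients.
    A = np.stack([Ix, Ix * x, Ix * y, Iy, Iy * x, Iy * y], axis=-1).reshape(-1, 6)
    b = -It.reshape(-1)
    dp, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p + dp

def coarse_to_fine_affine(I0, I1, levels=4, iters=5):
    """Estimate a global affine motion between I0 and I1 with coarse-to-fine refinement."""
    p = np.zeros(6)
    for l0, l1 in zip(pyramid(I0, levels), pyramid(I1, levels)):
        p[[0, 3]] *= 2.0               # rescale translation for the finer grid (no-op at the coarsest level)
        for _ in range(iters):
            p = affine_step(l0, l1, p)
    return p
```

On real imagery one would use a proper Gaussian or Laplacian pyramid, sub-pixel interpolation in the warp, and some robustness to outliers; the sketch only shows how a global model, a local model, and coarse-fine refinement are combined, and the same structure carries over to the other global models listed in the abstract.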

Keywords

Flow Field, Optical Flow, Local Model, Motion Estimation, Motion Model



Copyright information

© Springer-Verlag Berlin Heidelberg 1992

Authors and Affiliations

  • James R. Bergen (1)
  • P. Anandan (1)
  • Keith J. Hanna (1)
  • Rajesh Hingorani (1)
  1. David Sarnoff Research Center, Princeton, USA
