Pixel matching and motion segmentation in image sequences

  • Narendra Ahuja
  • Ram Charan
Structure from Motion
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1035)


This paper presents a coarse-to-fine algorithm that obtains pixel trajectories in a long image sequence and segments the sequence into subsets corresponding to distinctly moving objects. Much of the previous related work has addressed the computation of optical flow over two frames, or of sparse feature trajectories over sequences. The features used are often small in number, and restrictive assumptions are made about them, such as the visibility of features in all frames. The algorithm described here uses a coarse-scale point feature detector to form a 3-D dot pattern in spatio-temporal space. Trajectories are extracted, via perceptual grouping, as the 3-D curves formed by these points. Increasingly dense correspondences are then obtained iteratively from the sparse feature trajectories. At the finest level, which is the focus of this paper, all pixels are matched and the finest boundaries of the moving objects are obtained.
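The coarse-to-fine idea described above can be sketched generically with an image pyramid and per-pixel block matching, where a displacement estimated at a coarse level seeds a small refinement search at the next finer level. This is an illustrative toy, not the paper's perceptual-grouping algorithm; all function names and parameters are assumptions.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 averaging (one pyramid level)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def match_block(ref, tgt, cy, cx, guess, radius=2, half=3):
    """Refine a displacement guess by SSD search in a small window."""
    best, best_d = guess, np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + guess[0] + dy, cx + guess[1] + dx
            if (y - half < 0 or x - half < 0 or
                    y + half + 1 > tgt.shape[0] or x + half + 1 > tgt.shape[1]):
                continue  # candidate window falls outside the target frame
            d = np.sum((ref[cy-half:cy+half+1, cx-half:cx+half+1] -
                        tgt[y-half:y+half+1, x-half:x+half+1]) ** 2)
            if d < best_d:
                best_d, best = d, (guess[0] + dy, guess[1] + dx)
    return best

def coarse_to_fine_match(ref, tgt, cy, cx, levels=3):
    """Estimate the displacement of ref pixel (cy, cx) into tgt."""
    pyr_ref, pyr_tgt = [ref], [tgt]
    for _ in range(levels - 1):
        pyr_ref.append(downsample(pyr_ref[-1]))
        pyr_tgt.append(downsample(pyr_tgt[-1]))
    dy, dx = 0, 0
    for lvl in range(levels - 1, -1, -1):     # coarsest to finest
        dy, dx = 2 * dy, 2 * dx               # propagate coarse estimate
        y, x = cy // (2 ** lvl), cx // (2 ** lvl)
        dy, dx = match_block(pyr_ref[lvl], pyr_tgt[lvl], y, x, (dy, dx))
    return dy, dx

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
tgt = np.roll(ref, (4, 8), axis=(0, 1))       # shift ref by (4, 8)
print(coarse_to_fine_match(ref, tgt, 32, 20))
```

A dense version would run this at every pixel; the paper instead bootstraps dense matching from sparse feature trajectories, which avoids exhaustive search at fine levels.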


Keywords: Motion Segmentation, Perceptual Grouping, Pixel Matching, Triangulation, Feature Matching, Optical Flow





Copyright information

© Springer-Verlag Berlin Heidelberg 1996

Authors and Affiliations

  • Narendra Ahuja (Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, USA)
  • Ram Charan (Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, USA)
