Multiple constraints for optical flow

  • Massimo Tistarelli
Optical Flow and Motion Fields
Part of the Lecture Notes in Computer Science book series (LNCS, volume 800)


The computation of the optical flow field from an image sequence requires the definition of constraints on the temporal change of image features. In general, these constraints limit the motion of the body in space and/or of the features on the image plane.

In this paper the implications of using multiple constraints in the computational schema are considered. It is shown that differential constraints correspond to an implicit feature tracking. Consequently, the results depend strictly on the local gray-level structure. The best results (both in terms of measurement accuracy and computational speed) are obtained by selecting and applying the constraints which are best “tuned” to the particular image feature under consideration.

Several experiments are presented, both on a synthetic scene and on real image sequences.
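The differential constraints discussed above are built on the brightness constancy equation, I_x·u + I_y·v + I_t = 0, which under-determines the flow at a single pixel and is therefore combined with additional assumptions tied to the local gray-level structure. As a minimal sketch of this idea (not the paper's own method), the following assumes local constancy of the flow and solves the constraint in least squares over a small window, in the style of Lucas–Kanade; the function name and window parameter are illustrative:

```python
import numpy as np

def local_flow(I0, I1, y, x, win=3):
    """Estimate (u, v) at pixel (y, x) by stacking the brightness-constancy
    constraint Ix*u + Iy*v + It = 0 over a (2*win+1)^2 window and solving
    it in least squares. Assumes the flow is constant inside the window."""
    Iy_, Ix_ = np.gradient(I0.astype(float))      # spatial derivatives
    It_ = I1.astype(float) - I0.astype(float)     # temporal derivative
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    A = np.stack([Ix_[sl].ravel(), Iy_[sl].ravel()], axis=1)
    b = -It_[sl].ravel()
    # The system is well conditioned only where the local gray-level
    # structure has two strong, independent gradient directions --
    # which is why results depend on the feature under consideration.
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic example: a Gaussian blob shifted right by one pixel.
yy, xx = np.mgrid[0:64, 0:64]
I0 = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 50.0)
I1 = np.roll(I0, 1, axis=1)   # true displacement: u = 1, v = 0
u, v = local_flow(I0, I1, 32, 30)
```

Near a well-textured point the recovered (u, v) approximates the true shift; on a uniform region the normal matrix becomes rank-deficient and the estimate degrades, mirroring the aperture problem.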



Copyright information

© Springer-Verlag Berlin Heidelberg 1994

Authors and Affiliations

  • Massimo Tistarelli
    Department of Communication, Computer and Systems Science, Integrated Laboratory for Advanced Robotics (LIRA-Lab), University of Genoa, Genoa, Italy
