
International Journal of Computer Vision, Volume 17, Issue 1, pp 7–41

Motion of points and lines in the uncalibrated case

  • Thierry Viéville
  • Olivier Faugeras
  • Quang-Tuan Luong

Abstract

In the present paper we address the problem of computing structure and motion, given a set of point and/or line correspondences in a monocular image sequence, when the camera is not calibrated.

Considering point correspondences first, we analyse how to parameterize the retinal correspondences as a function of the chosen geometry: Euclidean, affine or projective. The simplest of these parameterizations is called the FQs-representation and is a composite projective representation. The main result is that, considering N+1 views in such a monocular image sequence, the retinal correspondences are parameterized by 11N−4 parameters in the general projective case. Moreover, 3 additional parameters are required to work in the affine case and 5 more in the Euclidean case. These 8 parameters are “calibration” parameters and must be computed from at least 8 external pieces of information or constraints. The method being constructive, all these representations are made explicit.
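As a hedged illustration of this count (a standard projection-matrix argument consistent with, but not quoted from, the paper): each of the N+1 views contributes a 3×4 projection matrix, and the reconstruction is only defined up to a 3D projective transformation.

```latex
% Parameter count for N+1 uncalibrated views (projective case):
% each view has a 3x4 projection matrix with 11 d.o.f. (12 entries minus
% one overall scale), and the reconstruction is defined only up to a 3D
% projective transformation with 15 d.o.f.
(N+1)\times 11 \;-\; 15 \;=\; 11N - 4 .
% For N = 1 (two views) this gives 7, the number of d.o.f. of the
% fundamental matrix.  Upgrading to affine fixes the plane at infinity
% (3 parameters) and upgrading to Euclidean fixes the absolute conic on
% that plane (5 parameters): the 8 "calibration" parameters of the abstract.
```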

Then, considering line correspondences, we show how the same parameterizations can be used to analyse the motion of lines in the uncalibrated case. The case of three views is studied extensively and a geometrical interpretation is proposed, introducing the notion of trifocal geometry, which generalizes the well-known epipolar geometry. We also discuss how to introduce line correspondences into a framework based on point correspondences, using the same equations.
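As a hedged pointer, in the trifocal-tensor notation that became standard after this paper (not the paper's own FQs formulation), the three-view constraint on lines plays the role that the epipolar constraint plays for points in two views:

```latex
% Two-view (epipolar) constraint on corresponding points x <-> x':
x'^{\top} F\, x = 0 ,
% and the three-view line-transfer relation, in trifocal-tensor notation,
% for corresponding lines l <-> l' <-> l'' (equality up to scale):
l_i \;\simeq\; \sum_{j,k} l'_j\, l''_k\, T_i^{\,jk} .
```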

Finally, considering the FQs-representation, an implementation is proposed as a “motion module”, taking retinal correspondences as input and providing an estimate of the 11N−4 retinal motion parameters. As discussed in this paper, this module can also estimate the 3D depth of the points up to an affine and projective transformation, defined by the 8 parameters identified in the first section. Experimental results are provided.
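For a rough feel of what such a module consumes and produces, the sketch below is a minimal two-view stand-in, not the paper's FQs module: it estimates the fundamental matrix from point correspondences with the normalized 8-point algorithm, covering the 7 = 11·1−4 projective motion parameters of the N+1 = 2 case.

```python
# Minimal two-view sketch (assumption: NOT the paper's FQs module).
# Estimates the fundamental matrix F, with x2^T F x1 = 0, from >= 8
# point correspondences using the normalized 8-point algorithm.
import numpy as np

def normalize(pts):
    """Translate/scale 2D points so the centroid is 0 and the mean distance is sqrt(2)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1.0]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ pts_h.T).T, T

def fundamental_8pt(p1, p2):
    """Normalized 8-point estimate of F from (n,2) arrays of matched points."""
    x1, T1 = normalize(p1)
    x2, T2 = normalize(p2)
    # Each correspondence contributes one row of the linear system A f = 0,
    # where f is the row-major flattening of F.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0],            x1[:, 1],            np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2, the defining constraint of a fundamental matrix.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                      # undo the normalization
    return F / np.linalg.norm(F)

def epipoles(F):
    """Epipoles as the right/left null vectors of F (homogeneous, unit norm)."""
    _, _, Vt = np.linalg.svd(F)
    e1 = Vt[-1]                            # epipole in image 1: F e1 = 0
    _, _, Vt = np.linalg.svd(F.T)
    e2 = Vt[-1]                            # epipole in image 2: F^T e2 = 0
    return e1, e2
```

The paper's module goes further: it handles N+1 views, lines as well as points, and the FQs parameterization, and it produces projective depths that can be upgraded using the 8 calibration parameters; the sketch above only covers the two-view point-based core.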

Keywords

Motion Parameter, Geometrical Interpretation, Projective Geometry, External Information, Projective Transformation



Copyright information

© Kluwer Academic Publishers 1996

Authors and Affiliations

  • Thierry Viéville
  • Olivier Faugeras
  • Quang-Tuan Luong
  1. INRIA Sophia Antipolis, Valbonne, France
