
International Journal of Computer Vision, Volume 29, Issue 3, pp 181–202

On the Importance of Being Asymmetric in Stereopsis—Or Why We Should Use Skewed Parallel Cameras

  • Antônio Francisco
  • Fredrik Bergholm

Abstract

This paper presents a new camera sensor design in which relative pose determination is not needed and which is at the same time capable of vergence micromovements. Sweeping depth using vergence micromovements promises subpixel depth precision, since zero disparity is measured at each time instant. We show that the curves preserving zero disparity are exactly conics, nondegenerate or degenerate. Oddly enough, only circles (Vieth-Müller circles) are routinely considered in vergence stereo, whether in theory or in practical work. Horopters in human vision, cf. Ogle (1932), closely resemble conics.
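
For reference, the classical zero-disparity locus for a verging pair follows from the inscribed-angle theorem; the sketch below states the condition in our own assumed notation, not the paper's.

```latex
% Sketch: zero-disparity locus for a verging pinhole pair (assumed notation).
% C_L, C_R are the optical centres, F the fixation point, and
% \gamma = \angle C_L F C_R the vergence angle at fixation.
% A point P has zero horizontal (angular) disparity iff it subtends the same
% angle over the baseline as the fixation point does:
\[
  \angle\, C_L P C_R \;=\; \angle\, C_L F C_R \;=\; \gamma .
\]
% By the inscribed-angle theorem, the locus of all such points P is the circle
% through C_L, C_R and F: the Vieth-Müller circle, i.e., the nondegenerate
% conic case mentioned above.
```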

We introduce translational vergence by suggesting the use of a pair of shift-optics CCD cameras. This nonrigidity causes the zero-disparity curves to become planes, one for each fixation (they are degenerate conics). The optical axes remain parallel, while the left and right primary lines of sight are slanted, and during vergence movements the primary lines of sight move over time. This has far-reaching consequences: binocular head-eye systems ordinarily rotate the cameras relative to one another in order to fixate, but such camera rotation is unnecessary. Hence, for relative depth maps, there is no need to measure camera rotation (relative camera pose) from mechanical sources, nor are algorithms needed for calculating epipolar lines. The suggested technique thus removes the need for camera rotations about the optical centers in a binocular head-eye system.
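
To make the zero-disparity-plane claim concrete, here is a minimal numerical sketch under an assumed pinhole model: with parallel optical axes, baseline b, focal length f and a total inward sensor shift s (all symbols and values below are illustrative assumptions, not parameters from the paper), disparity is d(Z) = fb/Z - s, so it vanishes on the frontoparallel plane Z = fb/s, and sweeping s over time sweeps that plane through depth.

```python
# Minimal sketch of "translational vergence" with a skewed parallel pair,
# under an assumed pinhole model.  The symbols f (focal length in pixels),
# b (baseline in metres) and the sensor shift s (pixels) are illustrative
# assumptions, not parameters taken from the paper.

def disparity(Z, f, b, s):
    """Horizontal disparity of a point at depth Z when the two sensors are
    shifted inward by a total of s pixels (optical axes stay parallel)."""
    return f * b / Z - s

def zero_disparity_depth(f, b, s):
    """Depth of the frontoparallel plane on which disparity vanishes."""
    return f * b / s

if __name__ == "__main__":
    f, b = 800.0, 0.12                # assumed focal length [px] and baseline [m]
    for s in (4.0, 8.0, 16.0):        # sweeping the shift sweeps the plane in depth
        Z0 = zero_disparity_depth(f, b, s)
        print(f"shift {s:4.1f} px -> zero-disparity plane at Z = {Z0:5.2f} m "
              f"(check d(Z0) = {disparity(Z0, f, b, s):+.1e})")
```

In this simplified model, relative depth can be read off as the shift value at which the local disparity crosses zero, which is why no relative camera rotation, and hence no mechanical pose measurement, would be required.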

Keywords: stereo · stereopsis · vergence · horopters · Vieth-Müller circles · corresponding retinal points · image sensor shifts


References

  1. Ahuja, N. and Abbott, A.L. 1993. Active stereo: Integrating disparity, vergence, focus, aperture, and calibration for surface estimation. IEEE Trans. on PAMI, 15(10):1007-1030.
  2. Andersson, M. and Bergholm, F. 1997. What is gained by monocular or binocular micromovements? Tech. Report KTH/NA/P-97/04, CVAP 208, CVAP, NADA, KTH, Stockholm, Sweden.
  3. Ayres, F., Jr. 1967. Theory and Problems of Projective Geometry. Schaum’s Outline Series in Mathematics, McGraw-Hill Book Company: New York, St. Louis, San Francisco, Toronto, Sydney.
  4. Carpenter, R.H.S. 1988. Movements of the Eyes. 2nd edition, Pion Limited: London.
  5. Coombs, D. and Brown, C. 1993. Real time binocular smooth pursuit. IJCV, 11(2):147-164.
  6. Francisco, A. 1991. The role of vergence micromovements on depth perception. Tech. Report MS-CIS-91-37, GRASP Lab., CIS, Univ. of Pennsylvania, Philadelphia, PA, USA.
  7. Francisco, A. 1993. Relative depth from vergence micromovements. In Proc. 4th ICCV, Berlin, Germany, pp. 481-486.
  8. Francisco, A., Uhlin, T., and Eklundh, J.-O. 1993. Continuous vergence movements for relative depth acquisition. In Proc. SCIA 1993, Tromsoe, Norway, pp. 97-104.
  9. Fry, G.A. 1984. Binocular vision. In Foundations of Sensory Science, W.W. Dawson (Ed.), Springer Verlag, Chap. 8.
  10. Grosso, E. and Tistarelli, M. 1995. Active/dynamic stereo vision. IEEE Trans. on PAMI, 17(11):1117-1128.
  11. Horn, B.K.P. 1986. Robot Vision. MIT Press: Cambridge, MA.
  12. Jain, R., Bartlett, S.L., and O’Brien, N. 1987. Complex logarithm mapping. IEEE Trans. on PAMI, 9(3).
  13. Krishnan, A. and Ahuja, N. 1993. Use of a non-frontal camera for extended depth of field in wide scenes. In Proc. of SPIE, Intelligent Robots and Computer Vision XII: Active Vision and 3D Methods, Vol. 2056, Boston, MA, USA.
  14. Lindsay, P.H. and Norman, D.A. 1977. Human Information Processing. 2nd edition, Academic Press: New York.
  15. Marr, D. and Poggio, T. 1979. A computational theory of human stereo vision. In Proc. of the Royal Society of London B, Vol. 204, pp. 301-328.
  16. Newport Catalog, Precision Laser & Optics Products, European Office: Newport GmbH, Darmstadt, Germany.
  17. Ogle, K.N. 1932. An analytical treatment of the longitudinal horopter: Its measurement and application to related phenomena.... Journal of Opt. Soc. Am., 22(12):665-728.
  18. Ogle, K.N. 1950. Researches in Binocular Vision. Saunders: Philadelphia. Reprinted: Hafner Publ. Company: London, New York, 1964.
  19. Records, R.E. 1979. Physiology of the Human Eye and Visual System, C.W. Tyler and A.B. Scott (Eds.), Harper & Row, Publishers: Hagerstown, Chap. 22.
  20. Tistarelli, M. and Sandini, G. 1990. Estimation of depth from motion using an anthropomorphic visual sensor. Image and Vision Computing, 8(4):271-278.
  21. Zhizhuo, W. 1990. Principles of Photogrammetry. ISBN 7-81030-000-8/P, Publishing House of Surveying and Mapping: Beijing.
  22. Zielke, T., Storjohann, K., Mallot, H.A., and von Seelen, W. 1990. Adapting computer vision systems to the visual environment: Topographic mapping. In Proc. of First ECCV ’90, Lecture Notes in Computer Science 427, Springer Verlag, pp. 613-615.

Copyright information

© Kluwer Academic Publishers 1998

Authors and Affiliations

  • Antônio Francisco 1
  • Fredrik Bergholm 1

  1. Computational Vision and Active Perception Laboratory (CVAP), Department of Numerical Analysis and Computer Science, Royal Institute of Technology, Stockholm, Sweden
