Machine Vision and Applications, Volume 24, Issue 1, pp. 133–144

Generating near-spherical range panoramas by fusing optical flow and stereo from a single-camera folded catadioptric rig

  • Igor Labutov
  • Carlos Jaramillo
  • Jizhong Xiao
Original Paper

Abstract

We design a novel “folded” spherical catadioptric rig (formed by two coaxially aligned spherical mirrors of distinct radii and a single perspective camera) to recover near-spherical range panoramas (about 360° × 153°) by fusing depth from optical flow and stereoscopy. We observe that, for rigid motion parallel to a plane, optical flow and stereo generate nearly complementary distributions of depth resolution: optical flow provides strong depth cues in the periphery and near the poles of the view-sphere, while stereo generates reliable depth only in a narrow band about the equator. We exploit this dual-modality principle by modeling the depth resolution of optical flow and stereo separately and then fusing the two estimates on a probabilistic spherical panorama. We achieve a desired vertical field-of-view and optical resolution by deriving a linearized model of the rig in terms of three parameters (the radii of the two mirrors and the axial distance between the mirrors’ centers). We analyze the error caused by violating the single-viewpoint constraint and formulate additional design constraints that minimize this error. We evaluate the proposed method on a synthetic model and on real-world prototypes by computing dense spherical depth panoramas of cluttered indoor environments from the fused modalities.
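
The fusion the abstract describes, weighting each modality by its modeled depth resolution at every pixel of the spherical panorama, can be illustrated with a standard per-pixel inverse-variance (Gaussian maximum-likelihood) combination. The sketch below is a minimal illustration of that idea, not the authors' implementation: the function `fuse_depth`, the elevation-dependent variance profiles (stereo reliable near the equator, optical flow reliable toward the poles), and all numeric values are assumptions for demonstration only.

```python
import numpy as np

def fuse_depth(d_stereo, var_stereo, d_flow, var_flow, eps=1e-12):
    """Per-pixel inverse-variance fusion of two depth panoramas.

    All inputs are 2D arrays indexed by (elevation, azimuth); NaN marks
    pixels where a modality yields no estimate. Returns the fused depth
    map and its variance.
    """
    # Weights are reciprocal variances; missing estimates get zero weight.
    w_s = np.where(np.isnan(d_stereo), 0.0, 1.0 / var_stereo)
    w_f = np.where(np.isnan(d_flow), 0.0, 1.0 / var_flow)
    w_sum = np.maximum(w_s + w_f, eps)
    fused = (w_s * np.nan_to_num(d_stereo)
             + w_f * np.nan_to_num(d_flow)) / w_sum
    fused_var = 1.0 / w_sum
    return fused, fused_var

# Toy elevation-dependent variance profiles mimicking the complementarity
# described in the abstract: stereo degrades toward the poles, optical
# flow degrades toward the equator. Illustrative values only.
elev = np.linspace(-np.pi / 2, np.pi / 2, 180)[:, None]  # elevation angle
azim = np.linspace(-np.pi, np.pi, 360)[None, :]          # azimuth angle
var_stereo = 0.01 + np.abs(np.sin(elev)) * np.ones_like(azim)
var_flow   = 0.01 + np.abs(np.cos(elev)) * np.ones_like(azim)

d_stereo = np.full((180, 360), 2.0)  # stand-in depth estimates (meters)
d_flow   = np.full((180, 360), 2.2)

fused, fused_var = fuse_depth(d_stereo, var_stereo, d_flow, var_flow)
```

Because the weights are reciprocals of the modeled variances, each modality dominates exactly where its depth resolution is modeled as highest, which matches the complementarity argued in the abstract.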

Keywords

Catadioptrics · Sensor fusion · Omnidirectional vision · Stereoscopy · Optical flow


Copyright information

© Springer-Verlag 2011

Authors and Affiliations

  • Igor Labutov (1, 2)
  • Carlos Jaramillo (1, 3)
  • Jizhong Xiao (4)

  1. Computer Engineering Department, The City College, City University of New York (CUNY City College), New York, USA
  2. Cornell University, Ithaca, USA
  3. The Graduate Center, City University of New York, New York, USA
  4. Electrical Engineering Department, The City College, City University of New York (CUNY City College), New York, USA
