Sensor Pose Estimation from Multi-center Cylindrical Panoramas

  • Fay Huang
  • Reinhard Klette
  • Yun-Hao Xie
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5414)


Cylindrical panoramas can be classified into various types according to their basic scanning properties and mutual spatial alignment, such as single-center (e.g., as in QTVR), concentric, multi-center, symmetric, or (after a transformation onto a cylinder) catadioptric panoramas. This paper deals with a solution to the sensor pose estimation problem using (previously computed) corresponding points in multi-center panoramas. All other types of panoramas can be described by this general multi-center model. Due to the non-linearity of the multi-center projection geometry, modeling sensor pose estimation typically results in non-linear and highly complicated formulations, which incur numerical instability. This paper shows that linear models for sensor pose estimation exist under minor geometrical constraints, namely for symmetric and leveled panoramas. The presented approaches are important for solving the 3D data fusion problem for multiple panoramas; they are also fundamental for an in-depth analysis of the multi-view geometry of panoramic images.
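To make the single-center case concrete, the sketch below projects a 3D point onto a cylindrical panorama: the azimuth of the viewing ray selects the image column, and a perspective division along the cylinder axis gives the image row. This is a minimal illustration of the standard single-center cylindrical model, not the paper's multi-center formulation; the function and parameter names are assumptions for this example.

```python
import math

def project_to_cylinder(point, width, height, f, v0=None):
    """Project a camera-centered 3D point onto a single-center
    cylindrical panorama (illustrative sketch; names are assumed).

    point  : (X, Y, Z) with Z forward, X right, Y down
    width  : panorama width in pixels, covering 360 degrees
    height : panorama height in pixels
    f      : cylinder radius in pixels (plays the role of a focal length)
    v0     : vertical principal point (defaults to height / 2)
    """
    X, Y, Z = point
    if v0 is None:
        v0 = height / 2.0
    theta = math.atan2(X, Z)          # azimuth of the viewing ray
    r = math.hypot(X, Z)              # distance from the cylinder axis
    u = (theta / (2.0 * math.pi)) * width % width   # column from azimuth
    v = v0 + f * Y / r                # perspective scaling along the axis
    return u, v
```

A point straight ahead of the sensor lands at column 0 and on the horizon row; rotating the point about the cylinder axis shifts only the column, which is the property that makes the linear formulations for symmetric and leveled panoramas tractable.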





Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Fay Huang (1)
  • Reinhard Klette (2)
  • Yun-Hao Xie (1)
  1. Institute of Computer Science and Information Engineering, National Ilan University, Taiwan, R.O.C.
  2. Department of Computer Science, The University of Auckland, New Zealand
