
Estimating Camera Position and Posture by Using Feature Landmark Database

  • Motoko Oe
  • Tomokazu Sato
  • Naokazu Yokoya
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3540)

Abstract

Estimating camera position and posture has applications in augmented reality and robot navigation. To obtain the absolute position and posture of a camera in these fields, both sensor-based methods using GPS and magnetic sensors and vision-based methods using images captured by the camera have been investigated. However, sensor-based methods make it difficult to synchronize the camera and the sensors accurately, and the environments in which they can be used are limited by the choice of sensors. Vision-based methods, on the other hand, require many artificial markers to be placed in the environment; otherwise the estimation error accumulates, which makes them difficult to use in large, natural environments. This paper proposes a vision-based camera position and posture estimation method for large environments that requires neither sensors nor artificial markers: natural feature points are detected in image sequences captured beforehand and used as landmarks.
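The pipeline described above matches natural feature landmarks (3D points with image templates, reconstructed beforehand) against the current camera frame and then recovers the camera position and posture from the resulting 2D-3D correspondences. As a minimal sketch of that last step, and not the authors' actual implementation, the pose can be estimated with a RANSAC-based PnP solver; the landmark coordinates, camera intrinsics, and noise values below are invented purely for illustration.

```python
import numpy as np
import cv2

# Hypothetical landmark database: 3D positions of natural feature points in the
# world coordinate system, reconstructed beforehand from image sequences.
# All names and values here are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
landmark_points_3d = rng.uniform([-2, -2, 4], [2, 2, 8], size=(20, 3))

# Assumed intrinsic camera parameters.
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(4)  # assume negligible lens distortion

# Simulate the 2D positions where the landmarks are found in the current frame
# (in the real system these would come from matching image templates stored in
# the landmark database against the input image).
true_rvec = np.array([0.05, -0.10, 0.02])
true_tvec = np.array([0.3, -0.2, 0.5])
detected_points_2d, _ = cv2.projectPoints(
    landmark_points_3d, true_rvec, true_tvec, camera_matrix, dist_coeffs)
detected_points_2d = detected_points_2d.reshape(-1, 2)
detected_points_2d += rng.normal(0, 0.5, detected_points_2d.shape)  # pixel noise

# Robustly estimate camera posture (rotation) and position (translation):
# RANSAC discards mismatched landmarks, and PnP solves for the pose from the rest.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    landmark_points_3d, detected_points_2d, camera_matrix, dist_coeffs,
    reprojectionError=3.0)

if ok:
    R, _ = cv2.Rodrigues(rvec)             # camera posture as a rotation matrix
    camera_position = -R.T @ tvec.ravel()  # camera position in world coordinates
    print("estimated camera position:", camera_position)
```

The same estimate-and-verify pattern applies frame by frame: as long as enough landmarks are matched in each input image, the pose is obtained directly in the world coordinate system, so no error accumulates over the sequence.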

Keywords

Input Image, Augmented Reality, Camera Position, Image Template, World Coordinate System

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Motoko Oe (1)
  • Tomokazu Sato (2)
  • Naokazu Yokoya (2)
  1. IBM, Japan
  2. Nara Institute of Science and Technology, Japan
