Vision-Based Real-Time Camera Matchmoving with a Known Marker

  • Bum-Jong Lee
  • Jong-Seung Park
  • Mee Young Sung
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4161)


A primary requirement for practical augmented reality systems is a method of accurate and reliable camera tracking. In this paper, we propose a fast and stable camera matchmoving method aimed at real-time augmented reality applications. A known marker is used for fast detection and tracking of feature points. By tracking the feature points of one of three different marker types in a single frame, we estimate the camera pose, i.e., the rotation and translation parameters. The entire pose estimation process is linear, and initial estimates are not required. As an application of the proposed method, we implemented a video augmentation system that replaces the marker in the image frames with a virtual 3D graphical object during marker tracking. Experimental results showed that the proposed camera tracking method is robust and fast enough for interactive video-based applications.
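The paper's exact linear pose algorithm is not reproduced in this abstract. As an illustration of the kind of linear, initialization-free estimation described above, the following sketch recovers camera rotation and translation from a planar square marker using a DLT homography followed by a Zhang-style decomposition, assuming known camera intrinsics `K` and the marker lying in the Z=0 plane. The function names and marker geometry are hypothetical, not taken from the paper.

```python
import numpy as np

def homography_dlt(world_pts, image_pts):
    """Direct Linear Transform: planar homography from >= 4 point pairs."""
    A = []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        A.append([-X, -Y, -1,  0,  0,  0, u * X, u * Y, u])
        A.append([ 0,  0,  0, -X, -Y, -1, v * X, v * Y, v])
    # The homography is the right singular vector of A with the
    # smallest singular value (the null vector for exact data).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def pose_from_homography(H, K):
    """Linear pose for a Z=0 marker plane: H ~ K [r1 r2 t] (Zhang-style)."""
    B = np.linalg.inv(K) @ H
    if B[2, 2] < 0:              # marker must lie in front of the camera
        B = -B
    lam = 1.0 / np.linalg.norm(B[:, 0])
    r1, r2, t = lam * B[:, 0], lam * B[:, 1], lam * B[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)  # re-project onto the nearest rotation matrix
    return U @ Vt, t
```

With the four marker corners detected in an image frame, `pose_from_homography` gives the rotation and translation needed to render a virtual object in the marker's place; both steps are linear (an SVD each) and require no initial guess, matching the properties the abstract claims.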


Keywords: Feature Point, Singular Value Decomposition, Augmented Reality, Video Stream, Virtual Object





Copyright information

© IFIP International Federation for Information Processing 2006

Authors and Affiliations

  • Bum-Jong Lee (1)
  • Jong-Seung Park (1)
  • Mee Young Sung (1)
  1. Dept. of Computer Science & Engineering, University of Incheon, Incheon, Republic of Korea
