The Visual Computer, Volume 26, Issue 6–8, pp 1145–1154

Scalable real-time planar targets tracking for digilog books

Original Article

Abstract

We propose a novel 3D tracking method that supports several hundred pre-trained potential planar targets without losing real-time performance. This goes well beyond the state of the art. To reach this level of performance, two threads run in parallel: the foreground thread tracks feature points frame-to-frame to ensure real-time performance, while the background thread recognizes the visible targets and estimates their poses. The latter relies on a coarse-to-fine approach: assuming that one target is visible at a time, which is reasonable for digilog book applications, it first recognizes the visible target with an image retrieval algorithm, then matches feature points between the target and the input image to estimate the target's pose. The background thread is more demanding than the foreground one, and is therefore several times slower. We therefore propose a simple but effective mechanism for the background thread to communicate its results to the foreground thread without lag. Our implementation runs at more than 125 frames per second with 314 potential planar targets. Its applicability is demonstrated with an Augmented Reality book application.
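The lag-free handoff between the two threads can be sketched as follows. This is a minimal single-threaded simulation, not the authors' implementation: poses are simplified to 1-D translations (in the paper they would be full homographies or 6-DoF poses), and all class and method names are illustrative. The key idea is that the foreground thread accumulates frame-to-frame motion since the snapshot it handed to the slow background recognizer, then composes the background's delayed absolute pose with that accumulated motion so the result refers to the current frame.

```python
from collections import deque

class ForegroundTracker:
    """Accumulates frame-to-frame motion since each snapshot sent to the
    (slow) background recognizer, and composes the background's delayed
    absolute pose with that accumulated motion. Illustrative sketch only."""

    def __init__(self):
        self.pose = None          # current absolute pose (unknown at start)
        self.pending = deque()    # inter-frame deltas since the last snapshot
        self.snapshot_id = None

    def take_snapshot(self, frame_id):
        # Frame handed to the background thread; start logging deltas.
        self.snapshot_id = frame_id
        self.pending.clear()

    def track_frame(self, delta):
        # Cheap real-time frame-to-frame tracking result.
        self.pending.append(delta)
        if self.pose is not None:
            self.pose += delta

    def apply_background_result(self, frame_id, absolute_pose):
        # Background finished recognizing the snapshot frame: compose its
        # (old) absolute pose with all motion accumulated meanwhile, so the
        # corrected pose refers to the *current* frame, without lag.
        assert frame_id == self.snapshot_id
        self.pose = absolute_pose + sum(self.pending)

# Simulated run: the background needs three frames to recognize frame 0,
# while the camera keeps moving.
fg = ForegroundTracker()
fg.take_snapshot(frame_id=0)
for delta in (0.5, 0.25, 0.25):
    fg.track_frame(delta)
fg.apply_background_result(frame_id=0, absolute_pose=10.0)
print(fg.pose)  # 10.0 + (0.5 + 0.25 + 0.25) = 11.0
```

With real transforms, the addition above would be matrix composition, but the communication pattern is the same: the background result is always relative to an old frame and must be propagated forward before it is displayed.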

Keywords

Planar target tracking · Augmented reality · Digilog book · Vocabulary tree



Copyright information

© Springer-Verlag 2010

Authors and Affiliations

  1. U-VR Laboratory, GIST, Gwangju, S. Korea
  2. CVLab, EPFL, Lausanne, Switzerland
