
A LINE-MOD-based markerless tracking approach for AR applications

  • Yue Wang
  • Shusheng Zhang (corresponding author)
  • Sen Yang
  • Weiping He
  • Xiaoliang Bai
  • Yifan Zeng
ORIGINAL ARTICLE

Abstract

Markerless tracking remains a challenging problem in augmented reality applications, especially when the real elements are textureless. In this paper, we propose a model-based method to tackle the markerless tracking problem. Motivated by the LINE-MOD algorithm, one of the state-of-the-art object detection methods, and by multiview-based 3D model retrieval, we built a camera tracking system based on image retrieval. In the off-line training stage, 3D models are used to generate templates automatically. To estimate the camera pose accurately in the online matching stage, LINE-MOD is adapted into a scale-invariant descriptor using depth information obtained from a SoftKinetic camera, and an interpolation method combined with other mathematical calculations is used for camera pose refinement. The experimental results show that the proposed method is fast and robust for markerless tracking in augmented reality environments, and its tracking accuracy approaches that of ARToolKit markers.
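The core ideas in the abstract — LINE-MOD-style matching of quantized gradient orientations, and depth-driven scale normalisation of templates — can be sketched roughly as follows. This is a minimal illustration, not the paper's exact formulation: the function names, the one-bin orientation tolerance, and the nearest-neighbour resize are assumptions made for the sketch.

```python
import numpy as np

def quantize_orientations(gx, gy, n_bins=8, mag_thresh=1e-3):
    """Quantize gradient orientations into n_bins (LINE-MOD style).
    Orientation is taken modulo 180 degrees so contrast polarity is
    ignored; pixels with weak gradients are labelled -1."""
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # angle in [0, pi)
    bins = np.floor(ang / np.pi * n_bins).astype(int) % n_bins
    bins[mag < mag_thresh] = -1
    return bins

def similarity(template_bins, image_bins, n_bins=8):
    """Score a template against an equally sized image patch as the
    fraction of template pixels whose orientation bin matches (with a
    one-bin tolerance that wraps around the orientation circle)."""
    valid = template_bins >= 0
    if not valid.any():
        return 0.0
    diff = np.abs(template_bins[valid] - image_bins[valid])
    diff = np.minimum(diff, n_bins - diff)              # circular distance
    return float(np.mean(diff <= 1))

def rescale_for_depth(template_bins, d_train, d_obs):
    """Depth-based scale normalisation: a template rendered at depth
    d_train is resized by d_train / d_obs before matching at the observed
    depth d_obs. Nearest-neighbour indexing keeps bin labels integral."""
    s = d_train / d_obs
    h, w = template_bins.shape
    nh, nw = max(1, round(h * s)), max(1, round(w * s))
    rows = (np.arange(nh) / s).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / s).astype(int).clip(0, w - 1)
    return template_bins[np.ix_(rows, cols)]
```

In this reading, the off-line stage stores one set of orientation bins per rendered template view, and the online stage rescales each candidate template by the ratio of its training depth to the depth reported by the sensor before computing the similarity score, which is what makes the descriptor scale-invariant.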

Keywords

Augmented reality · Tracking · Markerless · LINE-MOD



Copyright information

© Springer-Verlag London 2016

Authors and Affiliations

Yue Wang, Shusheng Zhang (corresponding author), Sen Yang, Weiping He, Xiaoliang Bai, and Yifan Zeng

  1. Key Laboratory of Contemporary Designing and Integrated Manufacturing Technology, Ministry of Education, Northwestern Polytechnical University, Xi’an, China
