
Integration Methods of Model-Free Features for 3D Tracking

  • Ville Kyrki
  • Kerstin Schmock
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3540)

Abstract

A number of approaches for 3D pose tracking have been introduced recently, most of them utilizing an edge (wireframe) model of the target. However, relying on an edge model alone causes significant problems in complex scenes due to background clutter, occlusions, and multiple edge responses. Integration of model-free information has recently been proposed to mitigate these problems.

In this paper, we propose two integration methods for model-free point features that enhance the robustness and increase the performance of real-time model-based tracking. The relative pose change between frames is estimated using an optimization approach, which allows the pose change to be integrated very efficiently in a Kalman filter. Our first approach estimates the pose change in a least-squares sense, while the second uses M-estimators to reduce the effect of outliers. Experiments demonstrate that the proposed methods outperform earlier approaches.
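To make the two estimation variants concrete, the following Python sketch illustrates the idea. It is not the authors' implementation: the pinhole projection model, the Rodrigues pose parameterization, the camera intrinsics, the Huber threshold (`f_scale`), and the helper names (`project`, `estimate_pose_change`) are assumptions made for this example, and SciPy's `least_squares` with a Huber loss stands in for the paper's M-estimator formulation. Given 3D feature positions from the previous frame and their observed pixel locations in the current frame, it estimates a six-parameter relative pose change by minimizing reprojection residuals, once with plain least squares and once robustly.

```python
"""Illustrative sketch (not the authors' code) of estimating the relative
pose change between two frames from tracked, model-free point features."""
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(points_3d, pose, K):
    """Apply the pose change (rotation vector + translation) and project
    the points with the pinhole intrinsics K."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = points_3d @ R.T + pose[3:]      # feature points in the current frame
    uv_h = cam @ K.T                      # homogeneous pixel coordinates
    return uv_h[:, :2] / uv_h[:, 2:3]


def reprojection_residuals(pose, points_3d, uv_obs, K):
    """Stacked pixel residuals used by both estimators."""
    return (project(points_3d, pose, K) - uv_obs).ravel()


def estimate_pose_change(points_3d, uv_obs, K, robust=True):
    """Plain least squares (robust=False) or a Huber M-estimator (robust=True)."""
    p0 = np.zeros(6)                      # assume small motion between frames
    result = least_squares(
        reprojection_residuals, p0,
        args=(points_3d, uv_obs, K),
        loss="huber" if robust else "linear",
        f_scale=2.0,                      # assumed Huber threshold, in pixels
    )
    return result.x


if __name__ == "__main__":
    # Synthetic example: 40 features, small pose change, a few gross outliers.
    rng = np.random.default_rng(0)
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    X = rng.uniform([-0.5, -0.5, 2.0], [0.5, 0.5, 4.0], size=(40, 3))
    true_pose = np.array([0.02, -0.01, 0.03, 0.05, 0.0, -0.02])
    uv = project(X, true_pose, K) + rng.normal(0.0, 0.5, size=(40, 2))
    uv[:5] += 40.0                        # simulate mismatched features
    print("plain LS :", estimate_pose_change(X, uv, K, robust=False))
    print("Huber    :", estimate_pose_change(X, uv, K, robust=True))
```

In this toy setup the Huber variant should be far less affected by the injected outliers than the plain least-squares fit. In a tracking pipeline, the estimated pose change (and a covariance derived from the residuals, if desired) would then be fed to a Kalman filter as a relative-pose measurement, as described above.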

Keywords

Interest Point, Direct Integration, Basic Optimization, Basic Optimization Approach, Measurement Covariance Matrix


Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Ville Kyrki¹
  • Kerstin Schmock¹
  1. Laboratory of Information Processing, Lappeenranta University of Technology, Lappeenranta, Finland
