Tracking the Untrackable: How to Track When Your Object Is Featureless

  • Karel Lebeda
  • Jiri Matas
  • Richard Bowden
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7729)

Abstract

We propose a novel approach to tracking objects via low-level line correspondences. In our implementation we show that this approach remains usable even when tracking objects that lack texture, exploiting situations where feature-based trackers fail due to the aperture problem. Furthermore, we suggest an approach to failure detection and recovery to maintain long-term stability. This is achieved by remembering configurations that led to good pose estimates and using them later for tracking corrections.

We carried out experiments on several sequences of different types. The proposed tracker proves competitive with, or superior to, state-of-the-art trackers in both standard and low-textured scenes.
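
As a rough illustration of the idea only (not the authors' implementation), the sketch below estimates a 4-DOF similarity transform from putative line-segment correspondences with a plain RANSAC loop. Hypotheses are scored by the perpendicular distance of transformed endpoints to the corresponding target line, so that sliding along the line (the aperture problem) does not penalise a correct pose. The segment representation, the similarity motion model, the plain RANSAC variant (the paper uses LO-RANSAC), the thresholds and the failure test are all assumptions made for this sketch.

    import numpy as np

    def to_line(p, q):
        """Infinite line through segment endpoints p, q as (a, b, c) with a^2 + b^2 = 1."""
        l = np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])
        return l / np.linalg.norm(l[:2])

    def fit_similarity(segs_prev, lines_curr):
        """Least-squares 4-DOF similarity T = [[a, -b, tx], [b, a, ty]] such that both
        endpoints of every previous segment land on the corresponding current line:
        l . T(p) = 0 for each endpoint p."""
        A = []
        for (p, q), l in zip(segs_prev, lines_curr):
            for x, y in (p, q):
                # l0*(a*x - b*y + tx) + l1*(b*x + a*y + ty) + l2 = 0
                A.append([l[0] * x + l[1] * y,   # coefficient of a
                          l[1] * x - l[0] * y,   # coefficient of b
                          l[0], l[1], l[2]])
        A = np.asarray(A)
        params, *_ = np.linalg.lstsq(A[:, :4], -A[:, 4], rcond=None)
        return params  # (a, b, tx, ty)

    def line_errors(params, segs_prev, lines_curr):
        """Per-correspondence error: largest point-to-line distance of the two
        transformed endpoints (insensitive to sliding along the line)."""
        a, b, tx, ty = params
        T = np.array([[a, -b, tx], [b, a, ty]])
        e = [abs(l @ np.append(T @ [pt[0], pt[1], 1.0], 1.0))
             for (p, q), l in zip(segs_prev, lines_curr) for pt in (p, q)]
        return np.asarray(e).reshape(-1, 2).max(axis=1)

    def track_step(segs_prev, segs_curr, iters=200, thresh=2.0, seed=0):
        """Plain RANSAC over line correspondences (2 per minimal sample) with a refit
        on the consensus set; returns (params, inlier_mask, failed)."""
        rng = np.random.default_rng(seed)
        lines_curr = [to_line(p, q) for p, q in segs_curr]
        best_params, best_inl = None, np.zeros(len(segs_prev), dtype=bool)
        for _ in range(iters):
            idx = rng.choice(len(segs_prev), size=2, replace=False)
            params = fit_similarity([segs_prev[i] for i in idx],
                                    [lines_curr[i] for i in idx])
            inl = line_errors(params, segs_prev, lines_curr) < thresh
            if inl.sum() > best_inl.sum():
                best_inl = inl
                best_params = fit_similarity(   # refit on all inliers
                    [s for s, keep in zip(segs_prev, inl) if keep],
                    [l for l, keep in zip(lines_curr, inl) if keep])
        # crude stand-in for the paper's failure detection: too few inliers
        # means the pose estimate should not be trusted
        failed = best_inl.sum() < 4
        return best_params, best_inl, failed

    # toy usage: four edges of a rectangle translated by (3, 1)
    prev = [((0, 0), (10, 0)), ((0, 0), (0, 10)), ((10, 0), (10, 10)), ((0, 10), (10, 10))]
    curr = [((3, 1), (13, 1)), ((3, 1), (3, 11)), ((13, 1), (13, 11)), ((3, 11), (13, 11))]
    params, inliers, failed = track_step(prev, curr)
    print(params, inliers, failed)  # expected approx. (1, 0, 3, 1), all inliers, not failed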

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Karel Lebeda (1)
  • Jiri Matas (1)
  • Richard Bowden (2)
  1. Center for Machine Perception, Czech Technical University in Prague, Czech Republic
  2. Centre for Vision, Speech and Signal Processing, University of Surrey, UK
