Dual-Force Metric Learning for Robust Distracter-Resistant Tracker

  • Zhibin Hong
  • Xue Mei
  • Dacheng Tao
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7572)

Abstract

In this paper, we propose a robust, distracter-resistant tracking approach that learns a discriminative metric which adaptively weights feature importance on the fly. The metric is tailored to the tracking problem through a margin-based objective function that combines distance-margin maximization with a reconstruction-error constraint; the constraint acts as a first force, pushing distracters out of the positive space and into the negative space. Because negative samples in tracking are highly varied, we further introduce a similarity-propagation technique that exerts a second force on distracters from within the negative space. The resulting discriminative metric preserves the information most useful for separating the target from distracters while keeping the optimal metric stable. We seamlessly combine this metric with the popular L1-minimization tracker, so our tracker is not only resistant to distracters but also inherits the occlusion robustness of the L1 tracker. Quantitative comparisons with several state-of-the-art algorithms on many challenging video sequences show that our method resists distracters well and achieves superior performance.
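The L1-minimization tracker that the abstract builds on scores each candidate patch by sparsely coding it over target templates plus trivial (identity) templates that absorb occluded pixels, then taking the reconstruction error against the target templates as the observation likelihood. The following is a minimal, illustrative sketch of that scoring step only, not the authors' dual-force method or implementation; the ISTA solver, function names, and parameter values are assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_sparse_code(y, D, lam=0.01, n_iter=500):
    """Solve min_c 0.5*||y - D c||_2^2 + lam*||c||_1 by ISTA (proximal gradient)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth term
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ c - y)           # gradient of the least-squares term
        c = soft_threshold(c - grad / L, lam / L)
    return c

def candidate_likelihood(y, T, lam=0.01):
    """Score a flattened candidate patch y (d-dim) against target templates T (d x k).

    Positive and negative identity columns play the role of trivial templates
    that soak up occluded or corrupted pixels; a learned metric would enter
    here as a linear transform applied to y and T before coding.
    """
    d = y.shape[0]
    D = np.hstack([T, np.eye(d), -np.eye(d)])    # target + trivial templates
    c = l1_sparse_code(y, D, lam)
    a = c[:T.shape[1]]                           # target coefficients only
    recon_err = np.linalg.norm(y - T @ a) ** 2   # error w.r.t. target subspace
    return np.exp(-recon_err)                    # likelihood in (0, 1]
```

A candidate well explained by the target templates yields a small reconstruction error and a likelihood near 1, while distracters and background patches are penalized.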

Keywords

Visual tracking · Distracter · Distance metric · Similarity propagation


Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Zhibin Hong¹
  • Xue Mei²
  • Dacheng Tao¹

  1. Centre for Quantum Computation and Intelligent Systems, Faculty of Engineering and Information Technology, University of Technology, Sydney, Australia
  2. Center for Automation Research, Electrical & Computer Engineering Department, University of Maryland, College Park, USA