
Robust visual tracking using information theoretical learning

Published in: Annals of Mathematics and Artificial Intelligence

Abstract

This paper presents a novel online object tracking algorithm with sparse representation for learning effective appearance models under a particle filtering framework. Compared with the state-of-the-art ℓ1 sparse tracker, which simply assumes that the image pixels are corrupted by independent Gaussian noise, our proposed method is based on information theoretical learning and is much less sensitive to corruptions; it achieves this by assigning small weights to occluded pixels and outliers. The most appealing aspect of this approach is that it can yield robust estimates without using the trivial templates adopted by the previous sparse tracker. A sparse representation of the target candidate is learned by solving a weighted linear least squares problem with non-negativity constraints at each iteration; to further improve tracking performance, the target templates are dynamically updated to capture appearance changes. In our template update mechanism, the similarity between the templates and the target candidates is measured by the earth mover's distance (EMD). Using the largest open benchmark for visual tracking, we also empirically compare two ensemble methods constructed from six state-of-the-art trackers against the individual trackers. The proposed tracking algorithm runs in real time and, on challenging sequences, performs favorably against state-of-the-art algorithms in terms of efficiency, accuracy, and robustness.
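The fitting step described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it alternates (i) a non-negative least squares fit of the candidate against the templates (here a simple multiplicative update, which assumes nonnegative image data) with (ii) Gaussian correntropy re-weighting of the per-pixel residuals, so occluded pixels and outliers receive small weights. A 1-D earth mover's distance is included as one way to score template/candidate histogram similarity for the update step. All function names and parameters (`sigma`, the iteration counts) are hypothetical.

```python
import numpy as np

def correntropy_weighted_code(T, y, sigma=0.1, outer=10, inner=50):
    """Alternate a weighted non-negative least squares fit of y against
    the template matrix T with Gaussian re-weighting of the residuals,
    so heavily corrupted pixels barely influence the final coefficients.
    T: (d, n) nonnegative templates (columns); y: (d,) nonnegative candidate.
    Returns the nonnegative coefficients x and the per-pixel weights w."""
    d, n = T.shape
    w = np.ones(d)              # per-pixel weights (all trusted at first)
    x = np.full(n, 1.0 / n)     # strictly positive starting coefficients
    eps = 1e-12
    for _ in range(outer):
        # weighted NNLS via multiplicative updates (valid since T, y >= 0):
        # x <- x * (T^T W y) / (T^T W T x)
        num = T.T @ (w * y)
        for _ in range(inner):
            x *= num / (T.T @ (w * (T @ x)) + eps)
        r = y - T @ x
        # Gaussian (correntropy) weights: large residuals -> small weight
        w = np.exp(-r**2 / (2 * sigma**2))
    return x, w

def emd_1d(p, q):
    """Earth mover's distance between two 1-D histograms that each sum
    to 1; for 1-D histograms it reduces to the L1 distance between the
    cumulative distributions."""
    return np.abs(np.cumsum(np.asarray(p, float) - np.asarray(q, float))).sum()
```

As a usage sketch, fitting a candidate that matches one template except for a simulated occlusion should recover that template's coefficient while driving the occluded pixels' weights toward zero.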



Acknowledgments

This work was supported by the National Basic Research Program of China (973 Program) under Grant no. 2013CB329404, the Major Research Project of the National Natural Science Foundation of China under Grant no. 91230101, the National Natural Science Foundation of China under Grants no. 61075006 and 11201367, the Key Project of the National Natural Science Foundation of China under Grant no. 11131006, the Research Fund for the Doctoral Program of Higher Education of China under Grant no. 20100201120048, and the Natural Science Fund of Ningxia, China under Grant no. NZ12209.

Author information

Corresponding author

Correspondence to Weifu Ding.

About this article

Cite this article

Ding, W., Zhang, J. Robust visual tracking using information theoretical learning. Ann Math Artif Intell 80, 113–129 (2017). https://doi.org/10.1007/s10472-017-9543-0

