Object tracking based on Huber loss function

Original Article

Abstract

In this paper, we present a novel visual tracking algorithm in which object tracking is achieved by combining subspace learning with Huber loss regularization in a particle filter framework. The changing appearance of the tracked target is modeled by principal component analysis basis vectors and row-group sparsity. This method exploits the strengths of subspace representation and explicitly accounts for the underlying relationships between particle candidates. The representation of each particle is learned via multi-task sparse learning, and the Huber loss function is employed to model the error between candidates and templates, yielding robust tracking. We use the alternating direction method of multipliers to solve the proposed representation model. In experiments, we tested sixty representative video sequences that reflect specific tracking challenges and evaluated our tracker with both qualitative and quantitative metrics. The results demonstrate that the proposed tracking algorithm achieves superior performance compared to nine state-of-the-art tracking methods.
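To illustrate the robustness argument behind the abstract, the sketch below compares the Huber loss with the squared loss on a residual. The Huber loss is quadratic for small residuals and linear for large ones, so outliers (e.g. occluded pixels in a candidate patch) contribute far less to the objective than under a squared loss. The threshold parameter `delta` and the function names here are illustrative, not the paper's actual formulation.

```python
import numpy as np

def huber_loss(r, delta=1.0):
    """Huber loss: 0.5*r^2 for |r| <= delta, and
    delta*(|r| - 0.5*delta) beyond, bounding outlier influence."""
    r = np.asarray(r, dtype=float)
    quad = 0.5 * r**2
    lin = delta * (np.abs(r) - 0.5 * delta)
    return np.where(np.abs(r) <= delta, quad, lin)

# A small residual is penalized quadratically (like least squares),
# while a large residual grows only linearly instead of quadratically.
print(huber_loss([0.5, 3.0]))  # [0.125 2.5]  (squared loss would give 0.125 and 4.5)
```

Because the loss grows only linearly past `delta`, a few grossly corrupted entries cannot dominate the reconstruction error between a particle candidate and the templates, which is the source of the robustness claimed in the abstract.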

Keywords

Object tracking · Subspace learning · Huber loss function · Alternating direction method of multipliers · Multi-task sparse learning


Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Canada
  2. School of Aeronautics and Astronautics, Shanghai Jiao Tong University, Shanghai, China
  3. Departments of Radiology, Biomedical Informatics, Bioengineering, and Intelligent Systems (Computer Science), University of Pittsburgh, Pittsburgh, USA
