Abstract
In this paper we present a novel visual tracking algorithm in which object tracking is achieved via subspace learning and Huber loss regularization within a particle filter framework. The changing appearance of the tracked target is modeled by principal component analysis basis vectors and row-group sparsity. This method exploits the strengths of subspace representation and explicitly takes the underlying relationship between particle candidates into account in the tracker. The representation of each particle is learned via a multi-task sparse learning method, and the Huber loss function is employed to model the error between candidates and templates, yielding robust tracking. We use the alternating direction method of multipliers to solve the proposed representation model. In experiments, we tested sixty representative video sequences that reflect specific tracking challenges and used both qualitative and quantitative metrics to evaluate performance. The results demonstrate that the proposed tracking algorithm achieves superior performance compared to nine state-of-the-art tracking methods.
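The Huber loss mentioned above is quadratic for small residuals and linear for large ones, which is what makes it robust to outliers such as occlusion. A minimal numpy sketch of the elementwise loss (the threshold parameter `delta` and function name are illustrative, not from the paper):

```python
import numpy as np

def huber_loss(r, delta=1.0):
    """Elementwise Huber loss on residuals r.

    Quadratic (0.5 * r^2) for |r| <= delta, linear
    (delta * (|r| - 0.5 * delta)) beyond, so large
    reconstruction errors are penalized less harshly
    than under a pure least-squares loss.
    """
    r = np.asarray(r, dtype=float)
    abs_r = np.abs(r)
    quadratic = 0.5 * r ** 2
    linear = delta * (abs_r - 0.5 * delta)
    return np.where(abs_r <= delta, quadratic, linear)
```

For example, `huber_loss(0.5)` gives 0.125 (the quadratic branch), while `huber_loss(2.0)` gives 1.5 rather than the 2.0 a squared loss would produce.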
Additional information
This work was jointly supported by the National Natural Science Foundation of China (No. 61374161) and China Aviation Science Foundation (No. 20142057006). This work was also partially supported by a National Institutes of Health (NIH)/National Cancer Institute (NCI) R01 Grant (#1R01CA193603).
Cite this article
Wang, Y., Hu, S. & Wu, S. Object tracking based on Huber loss function. Vis Comput 35, 1641–1654 (2019). https://doi.org/10.1007/s00371-018-1563-1