Channel Pruning for Visual Tracking

  • Manqiang Che
  • Runling Wang
  • Yan Lu
  • Yan Li
  • Hui Zhi
  • Changzhen Xiong (corresponding author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11129)

Abstract

Correlation filter trackers based on deep convolutional features have achieved record-breaking accuracy, but their high computational cost limits practical application. In this paper, we build on the efficient convolution operators (ECO) tracker, which ranked first in the VOT-2016 challenge. First, we introduce a channel-pruned VGG16 model to quickly extract the most representative channels of the deep features. We then propose an Average Feature Energy Ratio method to select the advantageous convolution channels, and design an adaptive iterative strategy to refine the object location. Finally, extensive experiments on four benchmarks, OTB-2013, OTB-2015, VOT-2016, and VOT-2017, demonstrate that our tracker performs favorably against state-of-the-art methods.
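The channel-selection step described above can be sketched as follows. The abstract does not reproduce the exact definition of the Average Feature Energy Ratio, so this sketch assumes a plausible form: each channel's mean squared activation divided by the average energy over all channels, keeping channels whose ratio exceeds a threshold. The function names and the threshold value are illustrative, not the paper's implementation.

```python
import numpy as np

def average_feature_energy_ratio(feature_map):
    """feature_map: (C, H, W) conv activations for the search region.

    Returns one ratio per channel: the channel's mean squared
    activation divided by the average energy across all channels.
    """
    energy = (feature_map ** 2).mean(axis=(1, 2))     # per-channel energy
    return energy / (energy.mean() + 1e-12)           # ratio to the average

def select_channels(feature_map, threshold=1.0):
    """Keep only channels whose energy ratio reaches the threshold."""
    ratio = average_feature_energy_ratio(feature_map)
    keep = np.flatnonzero(ratio >= threshold)
    return feature_map[keep], keep
```

A ratio above 1.0 simply means the channel carries more energy than the average channel, so thresholding at 1.0 discards the weaker half of the response mass; the paper's actual criterion may differ.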

Keywords

Correlation filter · Deep feature · Channel pruning · Iterative optimization

Supplementary material

Supplementary material 1 (pdf, 2631 KB)

Supplementary material 2 (mp4, 33169 KB)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Manqiang Che (1)
  • Runling Wang (1)
  • Yan Lu (1)
  • Yan Li (1)
  • Hui Zhi (1)
  • Changzhen Xiong (1) (corresponding author)

  1. North China University of Technology, Beijing, China