Cross-Layer Convolutional Siamese Network for Visual Tracking

  • Yanyin Chen
  • Xing Chen
  • Huibin Tan
  • Xiang Zhang
  • Long Lan
  • Xuhui Huang
  • Zhigang Luo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11302)

Abstract

Among approaches to visual tracking, Siamese network based trackers construct a pair of twin branches to learn a similarity metric between the tracked object and a search region, which is then used to predict the object's position in the coming frame. These trackers have achieved impressive performance in both speed and accuracy. However, most current Siamese network based trackers do not fully exploit semantic features from different layers. To this end, we propose a cross-layer convolutional Siamese network tracker (Siam-CC) that explores the semantic features of different layers from two aspects. First, we combine shallow-to-deep cross-layer convolutional response maps to capture features at various semantic levels while encouraging Siam-CC to focus only on the most promising location, since richer semantic information reduces the negative effect of the background. Second, to further boost the discriminative power of the responses, an adaptive contrastive loss is developed alongside the traditional logistic loss, which, to some extent, helps filter out noisy responses. Experiments on a large-scale benchmark dataset demonstrate the effectiveness of Siam-CC compared to state-of-the-art trackers.
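As a rough illustration of the first idea, the sketch below (not the authors' implementation) cross-correlates template features against search-region features at several layers and fuses the shallow-to-deep response maps into a single score map. The layer choices, the resizing of response maps to a common resolution, and the fusion weights are all assumptions, written in PyTorch-style Python.

```python
# Minimal sketch of shallow-to-deep response-map fusion in a
# SiamFC-style tracker. Layer choices, resizing strategy, and
# fusion weights are assumptions, not the paper's exact design.
import torch
import torch.nn.functional as F

def xcorr(template_feat, search_feat):
    """Cross-correlate template features (C, th, tw) against
    search-region features (C, sh, sw) by using the template as a
    convolution kernel; returns a (1, 1, rh, rw) response map."""
    return F.conv2d(search_feat.unsqueeze(0), template_feat.unsqueeze(0))

def fused_response(template_feats, search_feats, weights):
    """Fuse per-layer response maps (lists ordered shallow to deep)
    into one score map via a weighted sum. All maps are resized to
    the resolution of the deepest response (an assumption)."""
    responses = [xcorr(t, s) for t, s in zip(template_feats, search_feats)]
    target = responses[-1].shape[-2:]
    responses = [F.interpolate(r, size=target, mode='bilinear',
                               align_corners=False) for r in responses]
    return sum(w * r for w, r in zip(weights, responses))

# Toy usage with random features from three hypothetical layers.
t_feats = [torch.randn(c, 6, 6) for c in (64, 128, 256)]
s_feats = [torch.randn(c, 22, 22) for c in (64, 128, 256)]
score_map = fused_response(t_feats, s_feats, weights=(0.2, 0.3, 0.5))
```

For the second idea, the abstract does not give the exact form of the adaptive contrastive loss, so the sketch below (continuing from the imports above) pairs a SiamFC-style logistic loss with a plain contrastive term (Hadsell et al., 2006) as a stand-in; the score-to-distance mapping and the trade-off weight `alpha` are assumptions.

```python
def logistic_loss(response, labels):
    """SiamFC-style logistic loss; labels are +1/-1 per position."""
    return torch.log1p(torch.exp(-labels * response)).mean()

def contrastive_loss(response, labels, margin=1.0):
    """Plain contrastive term (Hadsell et al., 2006), a stand-in for
    the paper's adaptive variant. Low scores are treated as large
    'distances' via a sigmoid (an assumption)."""
    pos = labels.eq(1).float()
    d = torch.sigmoid(-response)
    loss = pos * d.pow(2) + (1.0 - pos) * F.relu(margin - d).pow(2)
    return 0.5 * loss.mean()

def total_loss(response, labels, alpha=0.5):
    # Weighted combination; the trade-off weight alpha is assumed.
    return logistic_loss(response, labels) + alpha * contrastive_loss(response, labels)
```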

Keywords

Visual tracking · Cross-layer convolution · Contrastive loss

Acknowledgment

This work was supported by the National Natural Science Foundation of China [61806213, U1435222] and the National High-tech R&D Program [2015AA020108].

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Science and Technology on Parallel and Distributed Laboratory, NUDT, Changsha, People's Republic of China
  2. College of Computer, NUDT, Changsha, People's Republic of China
  3. HPCL, NUDT, Changsha, People's Republic of China
  4. Department of Computer Science and Technology, College of Computer, NUDT, Changsha, People's Republic of China
