
Correlation filter tracking algorithm based on multiple features and average peak correlation energy

  • Xiyan Sun
  • Kaidi Zhang
  • Yuanfa Ji
  • Shouhua Wang
  • Suqing Yan
  • Sunyong Wu

Abstract

Traditional target tracking algorithms rely on hand-crafted features, which are not robust enough to describe the appearance of a target, making these algorithms difficult to apply to complex scenes. Moreover, traditional tracking algorithms do not measure the confidence of the response: when the confidence is low, the appearance model of the target is easily corrupted and tracking performance degrades. This paper proposes the Multiple Features and Average Peak Correlation Energy (MFAPCE) tracking algorithm, which combines deep features with color features and uses the average peak correlation energy to measure confidence. The algorithm describes the target appearance with features from multiple convolutional layers together with a color histogram. The response is obtained within a correlation filter framework that exploits context information. The average peak correlation energy then determines the final confidence of the response and thus whether the model is updated. Experiments show that MFAPCE improves tracking performance compared with traditional tracking algorithms.
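To make the confidence test concrete, the following is a minimal Python/NumPy sketch of how an average peak correlation energy (APCE) check can gate the model update. The function names, the comparison against running historical means, and the threshold ratio are illustrative assumptions for this sketch, not the paper's exact implementation.

    import numpy as np

    def apce(response):
        # Average peak correlation energy of a correlation response map:
        #   APCE = |F_max - F_min|^2 / mean((F - F_min)^2)
        # A single sharp peak gives a high APCE; a noisy, multi-modal map
        # (occlusion, background clutter) gives a low APCE.
        f_max, f_min = response.max(), response.min()
        return (f_max - f_min) ** 2 / np.mean((response - f_min) ** 2)

    def confident(response, apce_history, peak_history, ratio=0.5):
        # Treat the detection as reliable, and allow the appearance model
        # to be updated, only when both the current APCE and the current
        # peak exceed a fixed fraction of their historical means
        # (the ratio of 0.5 is an illustrative value).
        cur_apce, cur_peak = apce(response), response.max()
        if not apce_history:
            return True  # first frame: always update
        return (cur_apce >= ratio * np.mean(apce_history)
                and cur_peak >= ratio * np.mean(peak_history))

In a tracking loop, the APCE and peak histories would be appended to on each confident frame, so the gate adapts to the statistics of the sequence being tracked.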

Keywords

Correlation filter · Target tracking · Deep features · Average peak correlation energy


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Nos. 61561016 and 11603041), a project funded by the Guangxi Information Science Experiment Center, and the Department of Science and Technology of the Guangxi Zhuang Autonomous Region (Nos. AC16380014, AA17202048, and AA17202033).


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Guangxi Key Laboratory of Precision Navigation Technology and Application, Guilin University of Electronic Technology, Guilin, China
