On large appearance change in visual tracking

  • Yun Liang
  • Mei-hua Wang
  • Yan-wen Guo
  • Wei-shi Zheng (corresponding author)
Original Article

Abstract

This paper addresses the challenges caused by drastic appearance change in visual tracking, especially long-term appearance variation due to occlusion or large object deformation. We aim to build a long-term appearance model that is robust against large appearance change in two new respects: using historical and distinguishing cues to model the target representation, and extracting effective spatial objectness features from each frame to distinguish outliers. For the first purpose, an adaptive superpixel-based appearance model is formulated. Unlike previous superpixel-based trackers, a complementary feature set is defined for the update model to preserve the features of object parts that temporarily disappear, especially under occlusion and large deformation. For the second purpose, three new spatial objectness cues specially designed for tracking are defined: surrounding comparison, edge density change, and weighted superpixel straddling. These spatial objectness cues facilitate target localization and ensure that the target has a similar edge distribution across adjacent frames, greatly improving the ability of our method to distinguish the target from its surrounding background. The adaptive appearance model retains valuable features from historical results, while the spatial objectness cues are extracted from the current frame; the two therefore complement each other in handling large appearance changes. Extensive evaluations on the CVPR 2013 online object tracking benchmark and the VOT 2014 dataset demonstrate the effectiveness of our method compared with related trackers.
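
To make two of the spatial objectness cues more concrete, the following is a minimal NumPy sketch of a weighted superpixel-straddling score and an edge-density-change measure, written as a reader might prototype them, not as the paper's exact formulation. The function names, the per-superpixel weighting, the normalisation by box area, and the use of precomputed superpixel labels and binary edge maps (e.g. from SLIC and a Canny detector) are illustrative assumptions.

```python
import numpy as np

def weighted_superpixel_straddling(labels, box, weights=None):
    """Score a candidate box by how cleanly it respects superpixel boundaries.

    labels  : (H, W) integer array of superpixel ids (e.g. from SLIC).
    box     : (x0, y0, x1, y1) candidate target box.
    weights : optional per-superpixel weights, a hypothetical stand-in for
              target-confidence weights (higher = more likely foreground).

    Superpixels lying entirely inside or entirely outside the box add no
    penalty; superpixels straddling the box border are penalised, in the
    spirit of the classic objectness straddling cue.
    """
    x0, y0, x1, y1 = box
    inside = np.zeros(labels.shape, dtype=bool)
    inside[y0:y1, x0:x1] = True

    ids, areas = np.unique(labels, return_counts=True)
    if weights is None:
        weights = {sp: 1.0 for sp in ids}

    box_area = float(max((x1 - x0) * (y1 - y0), 1))
    penalty = 0.0
    for sp, area in zip(ids, areas):
        in_area = np.count_nonzero(inside & (labels == sp))
        # Penalise the smaller of the inside/outside parts of each superpixel.
        penalty += weights[sp] * min(in_area, area - in_area) / box_area
    return 1.0 - penalty  # higher = better aligned with superpixel boundaries


def edge_density_change(edges_prev, box_prev, edges_cur, box_cur):
    """Relative change in edge density between the previous and current
    target boxes; a small value means the edge distribution stayed similar."""
    def density(edges, box):
        x0, y0, x1, y1 = box
        patch = edges[y0:y1, x0:x1]
        return float(patch.mean()) if patch.size else 0.0

    d_prev = density(edges_prev, box_prev)
    d_cur = density(edges_cur, box_cur)
    return abs(d_cur - d_prev) / (d_prev + 1e-6)
```

A tracker along these lines would evaluate such cues for every candidate box and fuse them with the superpixel-based appearance confidence; the paper's exact fusion scheme is not reproduced here.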

Keywords

Superpixel-based appearance model · Spatial objectness cues · Complementary feature set · Adaptive online update

Notes

Acknowledgements

Many thanks go to the anonymous reviewers for their careful work and thoughtful suggestions, which have helped us substantially improve this paper. This work was partially supported by the National Natural Science Foundation of China (61772209, 61772257, 61672279) and the Science and Technology Planning Project of Guangdong Province (2016A050502050). Wei-shi Zheng is the corresponding author of this paper.

Compliance with ethical standards

Conflict of interest

All authors declare that they have no conflict of interest.

Supplementary material

Supplementary material 1: 521_2019_4094_MOESM1_ESM.docx (DOCX 8731 kb)
Supplementary material 2: 521_2019_4094_MOESM2_ESM.docx (DOCX 3689 kb)
Supplementary material 3: 521_2019_4094_MOESM3_ESM.docx (DOCX 5902 kb)

Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  • Yun Liang (1)
  • Mei-hua Wang (1)
  • Yan-wen Guo (2)
  • Wei-shi Zheng (3, corresponding author)

  1. College of Mathematics and Informatics, South China Agricultural University, Guangzhou, China
  2. National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
  3. School of Data and Computer Science, Sun Yat-sen University, Guangzhou, China
