
Robust Visual Tracking Based on Convolutional Features with Illumination and Occlusion Handing

  • Regular Paper
  • Published in: Journal of Computer Science and Technology

Abstract

Visual tracking is an important area of computer vision, and handling illumination change and occlusion remains a challenging problem. This paper presents a novel and efficient tracking algorithm that addresses both issues. On one hand, a target's initial appearance always has a clear contour, which is invariant to illumination change. On the other hand, features play an important role in tracking, and convolutional features in particular have shown favorable performance. We therefore adopt convolved contour features to represent the target appearance. First-order derivative edge-gradient operators detect contours efficiently when convolved with an image: the Prewitt operator is more sensitive to horizontal and vertical edges, whereas the Sobel operator is more sensitive to diagonal edges, so the two operators are inherently complementary. Accordingly, this paper designs two groups of Prewitt and Sobel edge detectors to extract a complete set of convolutional features covering horizontal, vertical, and diagonal edges. In the first frame, contour features are extracted from the target to construct the initial appearance model. Analysis of experimental images with these contour features shows that the bright parts often provide more useful information for describing target characteristics. We therefore propose a method that compares the similarity between a candidate sample and the trained model using only the bright pixels, which enables the tracker to handle partial occlusion. After the new target is located, we apply a corresponding online strategy to incrementally update the model so that it adapts to appearance change. Experiments show that convolutional features extracted by the well-integrated Prewitt and Sobel edge detectors are sufficient to learn a robust appearance model. Extensive results on nine challenging sequences show that the proposed approach is effective and robust in comparison with state-of-the-art trackers.
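To make the feature-extraction and bright-pixel matching steps concrete, the following is a minimal sketch in Python. It is not the authors' implementation: the specific Prewitt/Sobel kernel variants, the brightness threshold (a quantile here), and the use of NumPy/SciPy are assumptions for illustration only.

```python
# Hedged sketch: convolutional contour features from complementary
# Prewitt (horizontal/vertical) and Sobel (diagonal) edge detectors.
# Kernel choices and thresholds are illustrative assumptions, not the paper's exact design.
import numpy as np
from scipy.ndimage import convolve

# Prewitt kernels: respond to horizontal and vertical edges.
PREWITT_H = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], dtype=float)
PREWITT_V = PREWITT_H.T

# Diagonal Sobel kernels: respond to 45-degree and 135-degree edges.
SOBEL_D1 = np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], dtype=float)
SOBEL_D2 = np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], dtype=float)

def contour_features(patch):
    """Stack edge-response maps of a grayscale target patch (H x W float array)."""
    kernels = [PREWITT_H, PREWITT_V, SOBEL_D1, SOBEL_D2]
    responses = [np.abs(convolve(patch, k, mode='nearest')) for k in kernels]
    return np.stack(responses, axis=0)  # shape: (4, H, W)

def bright_pixel_similarity(candidate, model, quantile=0.7):
    """Score a candidate's features against the model using only the model's
    'bright' (high-response) pixels -- a rough stand-in for the paper's
    occlusion-aware matching; the quantile threshold is an assumption."""
    mask = model > np.quantile(model, quantile)   # keep the most informative pixels
    diff = np.abs(candidate - model)[mask]
    return -diff.mean() if diff.size else -np.inf  # higher score = more similar
```

Under this sketch, contour_features would be computed on the initial target patch in the first frame to build the appearance model; candidate patches in later frames would be scored with bright_pixel_similarity and the best-scoring one taken as the new target before the model is incrementally updated.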

Author information

Correspondence to Fa-Zhi He.

Electronic supplementary material

Below is the link to the electronic supplementary material.

ESM 1 (PDF 72 kb)

About this article

Cite this article

Li, K., He, FZ. & Yu, HP. Robust Visual Tracking Based on Convolutional Features with Illumination and Occlusion Handing. J. Comput. Sci. Technol. 33, 223–236 (2018). https://doi.org/10.1007/s11390-017-1764-5

