Abstract
CNN-based visual trackers have been successfully applied to surveillance networks. Some trackers use a sliding-window method to generate candidate samples that serve as the network's input. However, candidate samples containing too much background are sometimes mistakenly used for target tracking, which leads to drift. To mitigate this problem, we propose a novel Context Adaptive Visual Tracker (CAVT), which discards patches containing too much background and constructs a robust appearance model of the tracked target. The proposed method first formulates a weighted similarity function to construct a pure target region. The pure target region and the area surrounding the bounding box serve as a target prior and a background prior, respectively. The method then exploits both priors to distinguish target regions from background regions within the bounding box. Experiments on the challenging OTB benchmark demonstrate that the proposed CAVT algorithm performs favorably against several state-of-the-art methods.
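The abstract does not give the form of the weighted similarity function or the prior comparison, so the following is only a minimal illustrative sketch of the general idea: score each candidate patch against a target prior and a background prior with a weighted similarity, and discard candidates that resemble the background more than the target. All function names, the cosine form, and the `margin` parameter are assumptions, not the paper's actual formulation.

```python
import math

def weighted_similarity(candidate, template, weights):
    """Weighted cosine similarity between two feature vectors; higher weights
    emphasize feature dimensions believed to belong to the target region."""
    num = sum(w * c * t for w, c, t in zip(weights, candidate, template))
    den = (math.sqrt(sum(w * c * c for w, c in zip(weights, candidate)))
           * math.sqrt(sum(w * t * t for w, t in zip(weights, template))))
    return num / (den + 1e-12)

def filter_candidates(candidates, target_prior, background_prior, weights, margin=0.0):
    """Keep only candidates that are more similar to the target prior than to
    the background prior; the rest are treated as background-dominated patches."""
    return [c for c in candidates
            if weighted_similarity(c, target_prior, weights)
             - weighted_similarity(c, background_prior, weights) > margin]
```

In a real tracker the feature vectors would come from CNN activations over each sliding-window patch, and the surviving candidates would be passed on to the appearance model for scoring.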
Copyright information
© 2019 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
About this paper
Cite this paper
Feng, W., Li, M., Zhou, Y., Li, Z., Li, C. (2019). Context Adaptive Visual Tracker in Surveillance Networks. In: Han, S., Ye, L., Meng, W. (eds) Artificial Intelligence for Communications and Networks. AICON 2019. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 286. Springer, Cham. https://doi.org/10.1007/978-3-030-22968-9_33
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-22967-2
Online ISBN: 978-3-030-22968-9
eBook Packages: Computer Science, Computer Science (R0)