Spatio-Context-Based Target Tracking with Adaptive Multi-Feature Fusion for Real-World Hazy Scenes

Article

Abstract

Severe air pollution such as haze and fog, together with other complex scenes, poses great challenges for target tracking in computer vision, and single-feature approaches often perform unstably and track poorly under such conditions. Inspired by the multi-feature fusion and focus-of-attention (FOA) mechanisms of the biological visual system, these problems in complex scenes can be alleviated. Accordingly, a tracking algorithm based on multi-feature fusion and spatio-context correlation is proposed. The target is represented by fusing color, texture, and edge features, and the fusion weights are adaptively updated using information entropy, which greatly improves adaptability to environmental variations. Combined with the spatio-temporal context algorithm, the target can then be located accurately. Compared with state-of-the-art tracking algorithms, the experimental results validate the effectiveness of our method in hazy scenes; moreover, our method also improves the visual quality of the images. Specifically, the average center error is reduced to 0.9440 pixels, while the average overlap rate and the frame rate rise to 0.8700 and 4.4302 FPS, respectively. In addition, the halo artifacts and color cast produced by popular dehazing algorithms are avoided in the restored images. The proposed tracking algorithm outperforms state-of-the-art methods in accuracy, robustness, and real-time performance, even in complex real-world hazy scenes.
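As a rough illustration of the entropy-based adaptive weighting described above, the following is a minimal sketch (not the authors' implementation; the function names, the histogram-based entropy estimate, and the inverse-entropy weighting rule are assumptions for demonstration). Each feature channel (color, texture, edge) yields a confidence map over the search region; a map whose response distribution has lower entropy is treated as more discriminative and receives a larger fusion weight.

```python
import numpy as np

def map_entropy(conf_map, bins=32):
    """Shannon entropy of a confidence map's histogram (assumed entropy measure)."""
    hist, _ = np.histogram(conf_map, bins=bins, range=(0.0, 1.0))
    p = hist.astype(np.float64)
    p = p / max(p.sum(), 1e-12)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fuse_feature_maps(feature_maps):
    """Fuse color/texture/edge confidence maps with entropy-driven adaptive weights."""
    # Normalize every map to [0, 1] so entropies are comparable.
    norm_maps = []
    for m in feature_maps:
        m = m.astype(np.float64)
        rng = m.max() - m.min()
        norm_maps.append((m - m.min()) / rng if rng > 0 else np.zeros_like(m))

    # Lower entropy -> sharper, more reliable response -> larger weight (illustrative rule).
    entropies = np.array([map_entropy(m) for m in norm_maps])
    raw_w = 1.0 / (entropies + 1e-6)
    weights = raw_w / raw_w.sum()

    fused = sum(w * m for w, m in zip(weights, norm_maps))
    return fused, weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    color = rng.random((64, 64))                                 # flat, uninformative map
    texture = np.zeros((64, 64)); texture[30:34, 30:34] = 1.0    # peaked, informative map
    edge = rng.random((64, 64)) * 0.3
    fused, w = fuse_feature_maps([color, texture, edge])
    print("fusion weights (color, texture, edge):", np.round(w, 3))
```

In this toy example the sharply peaked texture map obtains the largest weight, which mirrors the intended behavior: more discriminative features dominate the fused response before it is fed to the spatio-temporal context tracker.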

Keywords

Target tracking · Multi-feature fusion · Adaptive weighted update · Spatio-context · Hazy scenes

Notes

Funding

This study was funded by the National Natural Science Foundation of China (Grant No. 91026005) and the Fundamental Research Funds for the Central Universities (Grant Nos. ZYGX2016J131, ZYGX2016J138).

Compliance with Ethical Standards

Ethical Approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu, China
  2. Tencent TEG OMD, Chengdu, China
  3. School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, China