
Signal, Image and Video Processing, Volume 12, Issue 7, pp 1227–1235

Tracking occluded objects using chromatic co-occurrence matrices and particle filter

  • Issam Elafi
  • Mohamed Jedra
  • Noureddine Zahid
Original Paper

Abstract

In the field of computer vision, many real-world applications rely on detecting and tracking moving objects, and one of the most important challenges in these applications is tracking occluded objects. When two or more objects occlude one another, the tracking system suffers from a loss of information that degrades its tracking performance. The present paper introduces a new method to overcome this problem using only a single target image and without any classification or learning phase. A tracking system is established by combining chromatic co-occurrence matrices with a particle filter in order to estimate the position of the occluded target. Qualitative and quantitative studies show that the results obtained by the proposed approach are highly competitive with several state-of-the-art methods.
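
The abstract describes the descriptor-plus-filter combination only at a high level, so the following is a minimal sketch of that kind of pipeline rather than the authors' implementation: colors are quantized to a few levels per channel, co-occurrences of neighboring color indices form the target descriptor, and a bootstrap particle filter weights candidate windows by a Bhattacharyya-based likelihood. All names and parameter values (`bins`, `offset`, `lam`, the random-walk motion model) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def chromatic_cooccurrence(patch, bins=4, offset=(0, 1)):
    """Normalized co-occurrence matrix of quantized colors in an RGB patch."""
    # Quantize each channel to `bins` levels and fold into one color index.
    q = patch.astype(np.int64) // (256 // bins)
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    dy, dx = offset
    h, w = idx.shape
    a = idx[0:h - dy, 0:w - dx].ravel()   # reference pixels
    b = idx[dy:h, dx:w].ravel()           # neighbors at the chosen offset
    n = bins ** 3
    m = np.bincount(a * n + b, minlength=n * n).astype(np.float64)
    return m / m.sum()

def track_step(frame, particles, ref_desc, win=(32, 32), sigma=5.0, lam=20.0):
    """One predict / weight / resample cycle; particles are (N, 2) = (row, col)."""
    h, w = frame.shape[:2]
    wh, ww = win
    # Predict: random-walk motion model, kept inside the image bounds.
    particles = particles + np.random.normal(0.0, sigma, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], wh // 2, h - wh // 2 - 1)
    particles[:, 1] = np.clip(particles[:, 1], ww // 2, w - ww // 2 - 1)
    # Weight: compare each candidate window's descriptor to the reference.
    weights = np.empty(len(particles))
    for i, (cy, cx) in enumerate(particles.astype(int)):
        box = frame[cy - wh // 2:cy + wh // 2, cx - ww // 2:cx + ww // 2]
        bc = np.sum(np.sqrt(ref_desc * chromatic_cooccurrence(box)))
        weights[i] = np.exp(-lam * (1.0 - bc))   # Bhattacharyya-based likelihood
    weights /= weights.sum()
    # Estimate: weighted mean of particle positions; then resample.
    estimate = (weights[:, None] * particles).sum(axis=0)
    keep = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[keep], estimate

# Hypothetical usage on one synthetic frame (real use would loop over video frames).
np.random.seed(0)
frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
ref = chromatic_cooccurrence(frame[104:136, 144:176])     # single target image
particles = np.array([120.0, 160.0]) + np.random.normal(0, 3, (200, 2))
particles, est = track_step(frame, particles, ref)
```

Because the descriptor is computed from a single reference image and the weighting needs no training, this matches the abstract's claim of tracking without a classification or learning phase; robustness under occlusion would come from the particle spread keeping hypotheses alive while the target is hidden.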

Keywords

Occlusion · Chromatic co-occurrence matrices · Tracking moving objects · Single camera · Particle filter

Supplementary material

Supplementary material 1 (AVI 2128 KB)
Supplementary material 2 (AVI 4634 KB)
Supplementary material 3 (AVI 1426 KB)
Supplementary material 4 (AVI 7975 KB)
Supplementary material 5 (AVI 4533 KB)
Supplementary material 6 (AVI 1479 KB)
Supplementary material 7 (AVI 1517 KB)


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2018

Authors and Affiliations

  1. Laboratory of Conception and Systems, Faculty of Science, Mohammed V University, Rabat, Morocco
