
Tracking using Numerous Anchor Points

  • Original Paper
  • Published in: Machine Vision and Applications

Abstract

In this paper, an online adaptive model-free tracker is proposed for tracking single objects in video sequences under real-world challenges such as low resolution, object deformation, occlusion and motion blur. The novelty lies in the construction of a strong appearance model that captures features from the initialized bounding box and assembles them into anchor point features. These features memorize the global pattern of the object and have an internal star graph-like structure. They are distinctive yet flexible, and allow tracking of generic and deformable objects without being restricted to specific object categories. In addition, the relevance of each feature is evaluated online using short-term and long-term consistency measures. These measures are adapted so that consistent features, which vote for the object location, are retained while outliers are handled in long-term tracking scenarios. Additionally, casting the votes in a Gaussian manner counteracts the inherent noise of the tracking system and yields accurate object localization. Furthermore, the proposed tracker uses a pairwise distance measure to cope with scale variations and combines pixel-level binary features with global weighted color features for model update. Finally, experimental results on a visual tracking benchmark dataset demonstrate the effectiveness and competitiveness of the proposed tracker.
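The localization and scale-handling steps summarized above can be illustrated with a short sketch: each anchor point casts a Gaussian-weighted vote for the object centre through its stored offset (the star-graph edge to the centre), and the relative scale is recovered from ratios of pairwise distances between matched anchors. This is only a minimal sketch of those two ideas under stated assumptions, not the authors' implementation (their code is available at the Bitbucket link in the Notes below); all class and function names here are illustrative, and the anchor weights, which the tracker would adapt online from short-term and long-term consistency, are kept fixed for brevity.

```python
import numpy as np

class AnchorPoint:
    """A keypoint that remembers its offset to the object centre (one star-graph edge)."""
    def __init__(self, position, offset_to_centre, weight=1.0):
        self.position = np.asarray(position, dtype=float)        # current (x, y) in the frame
        self.offset = np.asarray(offset_to_centre, dtype=float)  # vector from point to centre at init
        self.weight = weight                                     # relevance (fixed here; adapted online in the paper)

def vote_for_centre(anchors, frame_shape, sigma=5.0):
    """Accumulate Gaussian votes for the object centre and return the peak location."""
    h, w = frame_shape
    vote_map = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[0:h, 0:w]
    for a in anchors:
        cx, cy = a.position + a.offset                 # where this anchor believes the centre is
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
        vote_map += a.weight * g                       # soft vote tolerates localization noise
    peak = np.unravel_index(np.argmax(vote_map), vote_map.shape)
    return np.array([peak[1], peak[0]])                # (x, y) of the strongest consensus

def estimate_scale(prev_positions, curr_positions):
    """Relative scale as the median ratio of pairwise distances between matched anchors."""
    prev = np.asarray(prev_positions, dtype=float)
    curr = np.asarray(curr_positions, dtype=float)
    ratios = []
    for i in range(len(prev)):
        for j in range(i + 1, len(prev)):
            d_prev = np.linalg.norm(prev[i] - prev[j])
            d_curr = np.linalg.norm(curr[i] - curr[j])
            if d_prev > 1e-6:
                ratios.append(d_curr / d_prev)
    return float(np.median(ratios)) if ratios else 1.0
```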


Notes

  1. https://bitbucket.org/tanushri/tuna.

  2. http://opencv.org/.


Acknowledgements

This work was supported in part by FRQ-NT team Grant #167442 and by the REPARTI (Regroupement pour l'étude des environnements partagés intelligents répartis) FRQ-NT strategic cluster.

Author information

Correspondence to Tanushri Chakravorty.


Cite this article

Chakravorty, T., Bilodeau, GA. & Granger, É. Tracking using Numerous Anchor Points. Machine Vision and Applications 29, 247–261 (2018). https://doi.org/10.1007/s00138-017-0898-3

