
Discriminative Spatial Attention for Robust Tracking

  • Jialue Fan
  • Ying Wu
  • Shengyang Dai
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6311)

Abstract

A major cause of tracking failure is spatial distractors that exhibit visual appearances similar to the target: they also produce good matches to the target model and thus mislead the tracker. This situation is in general very difficult to handle. Within a selective attention tracking paradigm, this paper advocates a new approach, discriminative spatial attention, which identifies special regions on the target, called attentional regions (ARs). An AR exhibits strong discriminative power within its discriminative domain, i.e., no similar-looking regions are observed there. This paper presents an efficient two-stage method that divides the discriminative domain into a local part and a semi-local part. In the local domain, the visual appearance of an attentional region is locally linearized, and its discriminative power is closely related to the properties of the associated linear manifold, so a gradient-based search is designed to locate the set of local ARs. Based on these, the set of semi-local ARs is identified through an efficient branch-and-bound procedure that guarantees optimality. Extensive experiments show that such discriminative spatial attention leads to superior performance in many challenging target tracking tasks.
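The selection criterion described above can be illustrated with a minimal sketch, under assumptions that are not taken from the paper: grayscale input, a sum-of-squared-differences matching cost, a fixed square search window standing in for the discriminative domain, and exhaustive scoring in place of the paper's gradient-based search and branch-and-bound procedures. The idea shown is only that a region is discriminative when its best-matching neighbor in the surrounding window is still a poor match.

```python
# Minimal sketch (not the paper's method): score candidate regions inside
# the target box by how dissimilar they are to their best-matching
# neighbor in a semi-local window, then keep the top-k as ARs.
import numpy as np

def discriminative_score(image, top, left, size=16, search_radius=24, stride=4):
    """SSD to the best-matching non-overlapping neighbor in the window.
    A large minimum distance means no look-alike distractors nearby,
    hence a more discriminative region. `image` must be a float array."""
    patch = image[top:top + size, left:left + size]
    best = np.inf
    h, w = image.shape
    for dy in range(-search_radius, search_radius + 1, stride):
        for dx in range(-search_radius, search_radius + 1, stride):
            if abs(dy) < size and abs(dx) < size:
                continue  # skip shifts that overlap the candidate itself
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > h or x + size > w:
                continue
            other = image[y:y + size, x:x + size]
            best = min(best, float(np.sum((patch - other) ** 2)))
    return best  # higher = more discriminative

def select_attentional_regions(image, target_box, size=16, stride=8, k=4):
    """Exhaustively score candidates inside target_box = (top, left, h, w)
    and return the top-k (top, left, size) tuples as ARs."""
    image = np.asarray(image, dtype=np.float64)
    top0, left0, height, width = target_box
    candidates = []
    for top in range(top0, top0 + height - size + 1, stride):
        for left in range(left0, left0 + width - size + 1, stride):
            s = discriminative_score(image, top, left, size)
            candidates.append((s, top, left))
    candidates.sort(reverse=True)
    return [(top, left, size) for _, top, left in candidates[:k]]
```

In a tracker, the returned ARs would then be matched frame to frame, which is where the paper's local linearization and branch-and-bound optimality guarantee come into play.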

Keywords

Discriminative Power, Attentional Region, Robust Tracking, Discriminative Score, Spatial Selection


Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Jialue Fan (1)
  • Ying Wu (1)
  • Shengyang Dai (2)
  1. Northwestern University, Evanston
  2. Sony US Research Center, San Jose
