Visual Attention Based Motion Object Detection and Trajectory Tracking

  • Wen Guo
  • Changsheng Xu
  • Songde Ma
  • Min Xu
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6298)

Abstract

A motion trajectory tracking method based on a novel visual attention model and kernel density estimation is proposed in this paper. As a crucial step, moving objects are detected using visual attention. The visual attention model is built by combining a static feature attention map, a motion feature attention map, and a Karhunen-Loeve transform (KLT) distribution map. Since the visual attention analysis is conducted at the object level rather than the pixel level, the proposed method can detect any kind of salient moving object, unaffected by object appearance and the surrounding environment. After the region of the moving object is located, the kernel density is estimated for trajectory tracking. Experimental results show that the proposed method is promising for moving object detection and trajectory tracking.
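
The pipeline described in the abstract lends itself to a minimal sketch: fuse the static, motion, and KLT-based cue maps into one attention map, threshold it to locate the salient moving region, and fit a kernel density over that region to estimate the object centre used for trajectory tracking. The sketch below is an illustrative assumption of this flow, not the authors' implementation; the fusion weights, threshold, and function names are hypothetical, and SciPy's gaussian_kde stands in for the paper's kernel density estimator.

    # Hedged sketch (not the authors' code): fuse three attention cues,
    # threshold the result to detect the salient moving region, and fit a
    # kernel density over that region's pixel locations for tracking.
    import numpy as np
    from scipy.stats import gaussian_kde

    def fuse_attention(static_map, motion_map, klt_map, weights=(0.4, 0.4, 0.2)):
        """Normalize each cue map to [0, 1] and blend with illustrative weights."""
        def norm(m):
            m = m.astype(np.float64)
            rng = m.max() - m.min()
            return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
        w_s, w_m, w_k = weights
        return w_s * norm(static_map) + w_m * norm(motion_map) + w_k * norm(klt_map)

    def detect_object_region(attention, thresh=0.6):
        """Return a 2 x N array of (x, y) locations whose attention exceeds thresh."""
        rows, cols = np.nonzero(attention > thresh)
        return np.vstack([cols, rows])

    def fit_location_kde(samples):
        """Kernel density over salient pixel locations; its mode approximates
        the object centre propagated frame to frame during tracking."""
        return gaussian_kde(samples)

    # Toy usage with random maps standing in for real saliency computations.
    h, w = 120, 160
    static_map = np.random.rand(h, w)
    motion_map = np.random.rand(h, w)
    klt_map = np.random.rand(h, w)

    attention = fuse_attention(static_map, motion_map, klt_map)
    samples = detect_object_region(attention, thresh=0.9)
    if samples.shape[1] > 2:
        kde = fit_location_kde(samples)
        centre = samples[:, np.argmax(kde(samples))]  # highest-density sample
        print("estimated object centre (x, y):", centre)

In a real tracker the three cue maps would come from per-frame saliency, frame-difference or optical-flow energy, and the incremental KLT subspace respectively, and the density estimated in one frame would seed the search for the object centre in the next.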

Keywords

Visual attention · Object detection · Trajectory tracking

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Wen Guo (1, 2)
  • Changsheng Xu (1)
  • Songde Ma (1)
  • Min Xu (3)
  1. National Lab of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
  2. Shandong Institutes of Business and Technology, Electronic Engineering Department, Yantai, China
  3. Faculty of Engineering and Information Technology, University of Technology, Sydney, Australia