Decision Fusion for Target Detection Using Multi-spectral Image Sequences from Moving Cameras

  • Luis López-Gutiérrez
  • Leopoldo Altamirano-Robles
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3523)


This paper presents an approach to automatic target detection and tracking using multisensor image sequences in the presence of camera motion. The approach consists of three parts. The first part uses a motion segmentation method to detect targets in the visible image sequence. The second part uses a Gaussian background model to detect objects present in the infrared sequence, which is preprocessed to compensate for camera motion. The third part combines the individual results of the detection systems; it extends the Joint Probabilistic Data Association (JPDA) algorithm to handle an arbitrary number of sensors. Our approach is tested on image sequences with high clutter in dynamic environments. Experimental results show that the system detects 99% of the targets in the scene, and the fusion module removes 90% of the false detections.
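The Gaussian background model of the second stage can be sketched as a per-pixel running Gaussian: each pixel keeps a mean and variance, pixels that deviate by more than a few standard deviations are flagged as foreground, and the model is updated only where the scene is background. This is a minimal illustrative sketch (the function and parameter names `update_background`, `alpha`, and `k` are ours, not the authors'), assuming camera motion has already been compensated:

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """Per-pixel running Gaussian background model (illustrative sketch).

    frame, mean, var: float arrays of the same shape.
    alpha: learning rate for the running estimates.
    k: detection threshold in standard deviations.
    Returns a boolean foreground mask and the updated mean/variance.
    """
    diff = np.abs(frame - mean)
    # Flag pixels more than k standard deviations from the background mean
    foreground = diff > k * np.sqrt(var)
    # Update the model only at background pixels, so moving targets
    # are not absorbed into the background estimate
    mean = np.where(foreground, mean, (1 - alpha) * mean + alpha * frame)
    var = np.where(foreground, var, (1 - alpha) * var + alpha * diff ** 2)
    return foreground, mean, var
```

In use, the model would be initialized from the first motion-compensated infrared frame and updated once per frame; the resulting masks feed the JPDA-based fusion stage.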


Keywords: Target Detection · Camera Motion · Motion Segmentation · Decision Fusion · False Target





Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Luis López-Gutiérrez¹
  • Leopoldo Altamirano-Robles¹
  1. National Institute of Astrophysics, Optics and Electronics, Puebla, México
