Decision Fusion for Object Detection and Tracking Using Mobile Cameras
In this paper, an approach to automatic target detection and tracking using multisensor image sequences in the presence of camera motion is presented. The approach consists of three parts. The first part uses a motion segmentation method for target detection in the visible image sequence. The second part uses a background model for detecting objects present in the infrared sequence, which is preprocessed to compensate for the camera motion. The third part combines the individual results of the detection systems; it extends the Joint Probabilistic Data Association (JPDA) algorithm to handle an arbitrary number of sensors. Our approach is tested on image sequences with high clutter in dynamic environments. Experimental results show that the system detects 99% of the targets in the scene, and the fusion module removes 90% of the false detections.
Keywords: Target Detection · Camera Motion · Motion Segmentation · Decision Fusion · False Target
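The background-model detection step described in the abstract can be illustrated with a minimal sketch. The paper does not specify its exact model, so the code below assumes a simplified per-pixel running Gaussian background (a single-mode variant of the adaptive mixture models commonly used for this task); the class name, learning rate `alpha`, and threshold `k` are illustrative choices, not the authors' parameters.

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel running Gaussian background model (illustrative sketch).

    A pixel is flagged as foreground when it deviates from the modeled
    background mean by more than k standard deviations; the model is
    updated only where the pixel matched the background.
    """

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full(first_frame.shape, 25.0)  # assumed initial variance
        self.alpha = alpha  # learning rate (assumed value)
        self.k = k          # foreground threshold in std deviations (assumed)

    def apply(self, frame):
        frame = frame.astype(np.float64)
        diff = frame - self.mean
        # foreground test: squared deviation exceeds (k * sigma)^2
        fg = diff ** 2 > (self.k ** 2) * self.var
        # adapt mean and variance only at background pixels
        a = self.alpha * (~fg)
        self.mean += a * diff
        self.var += a * (diff ** 2 - self.var)
        return fg
```

In a full pipeline such as the one the paper describes, this detector would run on the motion-compensated infrared sequence, and the resulting foreground masks would feed the fusion module alongside the visible-band motion segmentation.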