Pattern Recognition
Lecture Notes in Computer Science, Volume 5748, pp. 101-110

High-Level Fusion of Depth and Intensity for Pedestrian Classification

  • Marcus Rohrbach (Environment Perception, Group Research, Daimler AG; Dept. of Computer Science, TU Darmstadt)
  • Markus Enzweiler (Image & Pattern Analysis Group, Dept. of Math. and Computer Science, Univ. of Heidelberg)
  • Dariu M. Gavrila (Environment Perception, Group Research, Daimler AG; Intelligent Systems Lab, Fac. of Science, Univ. of Amsterdam)


Abstract

This paper presents a novel approach to pedestrian classification that involves a high-level fusion of depth and intensity cues. Instead of utilizing depth information only in a pre-processing step, we propose to extract discriminative spatial features (gradient orientation histograms and local receptive fields) directly from (dense) depth and intensity images. Both modalities are represented in terms of individual feature spaces, in each of which a discriminative model is learned to distinguish between pedestrians and non-pedestrians. We refrain from the construction of a joint feature space, and instead employ a high-level fusion of depth and intensity at the classifier level.
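To make the per-modality representation concrete, the sketch below computes gradient orientation histograms separately from an intensity image and a dense depth image using skimage.feature.hog. The window size and HOG parameters are illustrative assumptions and not the settings used in the paper; the key point is that the two modalities yield separate feature vectors rather than one concatenated vector.

```python
import numpy as np
from skimage.feature import hog

def extract_hog_per_modality(intensity_img, depth_img):
    """Compute gradient orientation histogram (HOG) features separately for
    the intensity image and the dense depth image of one candidate window.
    Parameter values are illustrative, not the paper's settings."""
    params = dict(orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), block_norm='L2-Hys')
    f_intensity = hog(intensity_img, **params)
    f_depth = hog(depth_img, **params)
    # The two modalities are kept as separate feature vectors; they are NOT
    # concatenated into a joint feature space.
    return f_intensity, f_depth
```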

Our experiments on a large real-world dataset demonstrate a significant performance improvement of the combined intensity-depth representation over depth-only and intensity-only models (a factor-of-four reduction in false positives at comparable detection rates). Moreover, high-level fusion outperforms low-level fusion using a joint feature space approach.
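The classifier-level fusion itself can be sketched as follows. Linear SVMs and a weighted sum of decision scores are assumed purely for illustration; the abstract does not fix the discriminative models or the fusion rule.

```python
import numpy as np
from sklearn.svm import LinearSVC

class HighLevelFusionClassifier:
    """One discriminative model per modality, fused at classifier level
    (here: a weighted sum of decision scores; an illustrative choice)."""

    def __init__(self, weight_intensity=0.5, weight_depth=0.5):
        self.clf_intensity = LinearSVC(C=1.0)
        self.clf_depth = LinearSVC(C=1.0)
        self.w_i = weight_intensity
        self.w_d = weight_depth

    def fit(self, X_intensity, X_depth, y):
        # Each modality keeps its own feature space and its own classifier.
        self.clf_intensity.fit(X_intensity, y)
        self.clf_depth.fit(X_depth, y)
        return self

    def decision_function(self, X_intensity, X_depth):
        # Fuse the per-modality confidences into a single pedestrian score.
        s_i = self.clf_intensity.decision_function(X_intensity)
        s_d = self.clf_depth.decision_function(X_depth)
        return self.w_i * s_i + self.w_d * s_d

    def predict(self, X_intensity, X_depth):
        return (self.decision_function(X_intensity, X_depth) > 0).astype(int)
```

Keeping one model per modality lets each classifier adapt to the statistics of its own feature space, while the fusion weights expose the intensity/depth trade-off as an explicit, tunable parameter.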