High-level fusion of depth and intensity for pedestrian classification

Authors
Publication date 2009
Host editors
  • J. Denzler
  • G. Notni
  • H. Süße
Book title Pattern Recognition
Book subtitle 31st DAGM Symposium, Jena, Germany, September 9-11, 2009: proceedings
ISBN
  • 9783642037979
ISBN (electronic)
  • 9783642037986
Series Lecture Notes in Computer Science
Event 31st Annual Symposium of the Deutsche Arbeitsgemeinschaft für Mustererkennung (DAGM 2009), Jena, Germany
Pages (from-to) 101-110
Publisher Berlin: Springer
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
This paper presents a novel approach to pedestrian classification which involves a high-level fusion of depth and intensity cues. Instead of utilizing depth information only in a pre-processing step, we propose to extract discriminative spatial features (gradient orientation histograms and local receptive fields) directly from (dense) depth and intensity images. Both modalities are represented in terms of individual feature spaces, in each of which a discriminative model is learned to distinguish between pedestrians and non-pedestrians. We refrain from the construction of a joint feature space, but instead employ a high-level fusion of depth and intensity at classifier-level.
Our experiments on a large real-world dataset demonstrate a significant performance improvement of the combined intensity-depth representation over depth-only and intensity-only models (factor four reduction in false positives at comparable detection rates). Moreover, high-level fusion outperforms low-level fusion using a joint feature space approach.
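The classifier-level fusion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes linear SVMs as the per-modality discriminative models and an equal-weight sum rule for fusing their decision scores (the paper's choice of classifiers and fusion rule may differ), and it uses synthetic stand-ins for the per-modality feature vectors.

```python
# High-level (classifier-level) fusion sketch: one discriminative model
# per modality, fused at the score level rather than in a joint feature space.
# Assumptions: linear SVMs, equal-weight sum-rule fusion, synthetic features.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-modality spatial features (e.g., gradient
# orientation histograms computed from intensity and from dense depth).
n = 400
y = rng.integers(0, 2, n)  # 1 = pedestrian, 0 = non-pedestrian
X_intensity = rng.normal(y[:, None], 1.0, (n, 32))
X_depth = rng.normal(y[:, None], 1.2, (n, 32))

# Learn a discriminative model in each individual feature space.
clf_int = LinearSVC().fit(X_intensity[:300], y[:300])
clf_dep = LinearSVC().fit(X_depth[:300], y[:300])

# High-level fusion: combine classifier outputs, not feature vectors.
s_int = clf_int.decision_function(X_intensity[300:])
s_dep = clf_dep.decision_function(X_depth[300:])
fused = 0.5 * (s_int + s_dep)  # equal-weight sum rule (one of several options)

pred = (fused > 0).astype(int)
acc = (pred == y[300:]).mean()
print(f"fused accuracy on held-out samples: {acc:.2f}")
```

By contrast, a low-level (joint feature space) baseline would concatenate `X_intensity` and `X_depth` column-wise and train a single classifier on the stacked vectors; the paper reports that the classifier-level scheme above outperforms that approach.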
Document type Conference contribution
Language English
Published at https://doi.org/10.1007/978-3-642-03798-6_11