
International Journal of Computer Vision, Volume 63, Issue 2, pp. 153–161

Detecting Pedestrians Using Patterns of Motion and Appearance

  • Paul Viola
  • Michael J. Jones
  • Daniel Snow

Abstract

This paper describes a pedestrian detection system that integrates image intensity information with motion information. A detection-style algorithm scans a detector over two consecutive frames of a video sequence. The detector is trained (using AdaBoost) to exploit both motion and appearance information to detect a walking person. Past approaches have built detectors based on motion information alone or on appearance information alone; ours is the first to combine both sources of information in a single detector. The implementation described runs at about 4 frames/second, detects pedestrians at very small scales (as small as 20 × 15 pixels), and has a very low false positive rate.
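The AdaBoost training mentioned above produces a strong classifier that is a weighted vote over simple "weak" classifiers. The sketch below illustrates that voting scheme in minimal form; the feature values, thresholds, polarities, and weights are purely illustrative and not taken from the paper, where the weak classifiers operate on rectangle filter responses.

```python
# Minimal sketch of an AdaBoost-style strong classifier: a weighted
# vote over simple threshold weak learners. All numbers and parameter
# names here are illustrative, not from the paper.

def weak_classifier(feature_value, threshold, polarity=1):
    """Return +1 if the polarity-adjusted feature exceeds the threshold,
    else -1."""
    return 1 if polarity * feature_value > polarity * threshold else -1

def strong_classifier(feature_values, weak_params, alphas):
    """Weighted majority vote of weak classifiers: sign of the
    alpha-weighted sum of weak decisions."""
    score = sum(a * weak_classifier(f, t, p)
                for a, f, (t, p) in zip(alphas, feature_values, weak_params))
    return 1 if score > 0 else -1
```

In the cascaded detectors of this family, a sequence of such strong classifiers is applied at every window position, with early stages rejecting most non-pedestrian windows cheaply.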

Our approach builds on the detection work of Viola and Jones. The novel contributions of this paper are: (i) a representation of image motion that is extremely efficient to compute, and (ii) a state-of-the-art pedestrian detection system that operates on low-resolution images under difficult conditions (such as rain and snow).
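The efficiency of the motion representation rests on summed-area tables (integral images, see Crow 1984 in the references), which allow the sum of any rectangle in a frame-difference image to be read off with four table lookups. The sketch below shows that mechanism under simplifying assumptions (plain nested lists, a single absolute frame difference); the paper's actual filters also use shifted difference images, which are omitted here.

```python
# Sketch of the efficiency mechanism behind rectangle filters on a
# frame-difference image: O(1) rectangle sums via a summed-area table.
# Function names are illustrative, not the paper's.

def integral_image(img):
    """Summed-area table with a leading zero row/column:
    ii[y][x] = sum of img over rows < y and columns < x."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum over the w-by-h rectangle with top-left corner (x, y),
    using four lookups regardless of rectangle size."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def abs_difference(frame_a, frame_b):
    """Absolute frame difference |A - B|: large where motion occurred."""
    return [[abs(a - b) for a, b in zip(ra, rb)]
            for ra, rb in zip(frame_a, frame_b)]
```

Because the table is built once per frame pair, every rectangle filter evaluated at every window position and scale costs the same constant time, which is what makes dense scanning over two consecutive frames practical.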

Keywords

pedestrian detection · human sensing · boosting · tracking


References

  1. Avidan, S. 2001. Support vector tracking. In IEEE Conference on Computer Vision and Pattern Recognition.
  2. Crow, F. 1984. Summed-area tables for texture mapping. In Proceedings of SIGGRAPH, Vol. 18, No. 3, pp. 207–212.
  3. Cutler, R. and Davis, L. 2000. Robust real-time periodic motion detection: Analysis and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, pp. 781–796.
  4. Philomin, V. and Gavrila, D. 1999. Real-time object detection for “smart vehicles.” In IEEE International Conference on Computer Vision, pp. 87–93.
  5. Freund, Y. and Schapire, R.E. 1995. A decision-theoretic generalization of on-line learning and an application to boosting. In Computational Learning Theory: EuroCOLT ’95. Springer-Verlag, pp. 23–37.
  6. Hoffman, D.D. and Flinchbaugh, B.E. 1982. The interpretation of biological motion. Biological Cybernetics, pp. 195–204.
  7. Lee, L. 2001. Gait dynamics for recognition and classification. MIT AI Lab Memo AIM-2001-019, MIT.
  8. Liu, F. and Picard, R. 1998. Finding periodicity in space and time. In IEEE International Conference on Computer Vision, pp. 376–383.
  9. Papageorgiou, C., Oren, M., and Poggio, T. 1998. A general framework for object detection. In International Conference on Computer Vision.
  10. Polana, R. and Nelson, R. 1994. Detecting activities. Journal of Visual Communication and Image Representation.
  11. Rowley, H., Baluja, S., and Kanade, T. 1998. Neural network-based face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, pp. 22–38.
  12. Schapire, R. and Singer, Y. 1999. Improving boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):297–336.
  13. Schneiderman, H. and Kanade, T. 2000. A statistical method for 3D object detection applied to faces and cars. In International Conference on Computer Vision.
  14. Viola, P. and Jones, M. 2001. Rapid object detection using a boosted cascade of simple features. In IEEE Conference on Computer Vision and Pattern Recognition.

Copyright information

© Springer Science + Business Media, Inc. 2005

Authors and Affiliations

  1. Microsoft Research, One Microsoft Way, Redmond, USA
  2. Mitsubishi Electric Research Laboratories, Cambridge, USA
