Depth Estimation in Image Sequences in Single-Camera Video Surveillance Systems

  • Aleksander Lamża
  • Zygmunt Wróbel
  • Andrzej Dziech
Part of the Communications in Computer and Information Science book series (CCIS, volume 368)

Abstract

Depth estimation plays a key role in numerous applications, including video surveillance, target tracking, robotics, and medicine. The standard way of obtaining depth information is to use stereovision systems, which require at least two cameras. In some applications this is a significant hindrance because of size, cost, or power-consumption constraints. There is therefore a need for an efficient depth-estimation method that can be applied in single-camera vision systems. Several techniques for this task can be found in the literature; however, they require either modifying the camera (in mirror-based methods) or changing the lens parameters (in focus-based methods). In this paper, a new method is presented that works on image sequences from cameras with standard fixed-focal-length, fixed-focus lenses.
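The abstract does not spell out the algorithm itself; as a rough illustration of how dense optical flow from a single moving camera can provide a depth cue (motion parallax), the sketch below uses OpenCV's Farneback flow. The pure-lateral-translation assumption and the calibration values (focal length `f_px`, per-frame baseline `baseline_m`) are hypothetical illustrations, not taken from the paper, and this is not the authors' method.

```python
# Minimal sketch (not the paper's algorithm): a motion-parallax depth cue
# from two consecutive frames of a single translating camera.
# Assumes pure lateral camera translation with a known per-frame baseline.
import cv2
import numpy as np

def relative_depth(prev_frame, next_frame, f_px=700.0, baseline_m=0.05):
    """Coarse relative depth map from two consecutive BGR frames.

    f_px and baseline_m are hypothetical calibration values; with a real
    camera they would come from calibration and ego-motion estimation.
    """
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)

    # Dense optical flow (Farneback); any dense flow estimator would do.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)

    # For a laterally translating camera, image displacement behaves like
    # stereo disparity: Z = f * b / d, so depth is inversely proportional
    # to the flow magnitude.
    eps = 1e-3  # avoid division by zero in static regions
    return f_px * baseline_m / np.maximum(magnitude, eps)
```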

Keywords

depth estimation · optical flow · video surveillance · single-camera vision



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Aleksander Lamża 1
  • Zygmunt Wróbel 1
  • Andrzej Dziech 2
  1. Department of Biomedical Computer Systems, Institute of Computer Science, Faculty of Computer and Materials Science, University of Silesia in Katowice, Sosnowiec, Poland
  2. Department of Telecommunications, AGH University of Science and Technology, Krakow, Poland
