Direct differential range estimation using optical masks

  • Eero P. Simoncelli
  • Hany Farid
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1065)


Abstract

We describe a novel formulation of the range recovery problem, based on computation of the differential variation in image intensities with respect to changes in camera position. The method uses a single stationary camera and a pair of calibrated optical masks to directly measure this differential quantity. The subsequent computation of the range image is simple and should be suitable for real-time implementation. We also describe a variant of this technique, based on direct measurement of the differential change in image intensities with respect to aperture size. These methods are comparable in accuracy to other single-lens ranging techniques. We demonstrate the potential of our approach with a simple example.
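The underlying differential relation can be sketched numerically. For a small camera translation b, brightness constancy implies that the change in intensity with viewpoint satisfies I_v ≈ -d(x)·I_x, where d(x) = f·b/Z(x) is the local disparity, so range follows directly from the ratio of two derivative measurements. The sketch below is a minimal one-dimensional simulation under that assumption, not the authors' optical-mask implementation: the masks measure I_v directly in a single camera, whereas here it is approximated by a finite difference between two synthetic views.

```python
import numpy as np

# Hypothetical illustration of differential range estimation:
# brightness constancy for a small baseline b gives
#   I_v(x) ~= -d(x) * I_x(x),   with  d(x) = f * b / Z(x).
# We image a fronto-parallel textured plane from two nearby
# viewpoints and recover its depth from the two derivatives.

f, b = 1.0, 0.01            # focal length and (small) baseline, arbitrary units
Z_true = 2.0                # constant scene depth
d_true = f * b / Z_true     # resulting image shift (disparity)

x = np.linspace(0, 2 * np.pi, 512)
dx = x[1] - x[0]
texture = lambda u: np.sin(3 * u) + 0.5 * np.sin(7 * u + 1.0)

I0 = texture(x)             # reference view
I1 = texture(x - d_true)    # slightly translated view

I_x = np.gradient(I0, dx)   # spatial intensity derivative
I_v = I1 - I0               # finite-difference stand-in for the
                            # viewpoint derivative the masks measure

# Least-squares disparity over the image (one depth for the plane),
# minimizing sum((I_v + d * I_x)^2) over d:
d_est = -np.sum(I_v * I_x) / np.sum(I_x * I_x)
Z_est = f * b / d_est
print(Z_est)                # close to Z_true = 2.0
```

In the paper's setting no second exposure from a shifted camera is needed; the calibrated masks yield the differential quantity in place, which is what makes the subsequent per-pixel range computation simple enough for real-time use.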


Keywords: Range Image · Pinhole Camera · Point Light Source · Brightness Constancy · Mask Function



Copyright information

© Springer-Verlag Berlin Heidelberg 1996

Authors and Affiliations

  • Eero P. Simoncelli (GRASP Laboratory, University of Pennsylvania, Philadelphia, USA)
  • Hany Farid (GRASP Laboratory, University of Pennsylvania, Philadelphia, USA)
