Monocular 3D Scene Reconstruction at Absolute Scales by Combination of Geometric and Real-Aperture Methods

  • Annika Kuhl
  • Christian Wöhler
  • Lars Krüger
  • Pablo d’Angelo
  • Horst-Michael Groß
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4174)


We propose a method that combines geometric and real-aperture techniques for monocular 3D reconstruction of static scenes at absolute scales. Our algorithm relies on a sequence of images of the object, acquired from different viewpoints by a monocular camera with a fixed focal setting. Object features are tracked over a range of distances from the camera with a small depth of field, leading to a varying degree of defocus for each feature. Information about absolute depth is obtained with a Depth-from-Defocus approach. The parameters of the point spread functions estimated by Depth-from-Defocus serve as a regularisation term for Structure-from-Motion: the reprojection error obtained from Bundle Adjustment and the absolute depth error obtained from Depth-from-Defocus are minimised simultaneously for all tracked object features. The proposed method yields absolutely scaled 3D coordinates of the scene points without any prior knowledge about the structure of the scene. Evaluating the algorithm on real-world data, we demonstrate that it yields typical relative errors between 2 and 3 percent. Possible applications of our approach are self-localisation and mapping for mobile robotic systems and pose estimation in industrial machine vision.
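The core idea of the abstract — using Depth-from-Defocus depth estimates as a regularisation term alongside the Bundle-Adjustment reprojection error, so that the otherwise scale-ambiguous Structure-from-Motion solution is pinned to absolute scale — can be sketched as a joint least-squares problem. The following is a minimal illustrative sketch, not the authors' implementation: the pinhole projection model, the single fixed camera pose, the weighting factor, and all function names are assumptions introduced here for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative sketch (not the paper's implementation): minimise the
# Bundle-Adjustment reprojection error together with a weighted absolute-depth
# error, the latter standing in for depth estimates obtained from the defocus
# (point spread function) of each tracked feature.

def project(points, pose, f):
    """Pinhole projection of 3D points (N, 3) with pose (R, t) and focal length f."""
    R, t = pose
    cam = points @ R.T + t              # world -> camera coordinates
    return f * cam[:, :2] / cam[:, 2:3]

def combined_residuals(params, observations, depths_dfd, pose, f, weight):
    """Residual vector: reprojection errors plus weighted absolute-depth errors."""
    pts = params.reshape(-1, 3)
    reproj = (project(pts, pose, f) - observations).ravel()
    # Absolute depths (here assumed to come from Depth-from-Defocus) act as a
    # regulariser that removes the global scale ambiguity:
    cam_z = (pts @ pose[0].T + pose[1])[:, 2]
    depth_err = weight * (cam_z - depths_dfd)
    return np.concatenate([reproj, depth_err])

# Toy example: three scene points observed by one camera at the origin.
true_pts = np.array([[0.1, 0.2, 2.0], [-0.3, 0.1, 2.5], [0.2, -0.1, 3.0]])
pose = (np.eye(3), np.zeros(3))
f = 800.0                               # focal length in pixels (assumed)
obs = project(true_pts, pose, f)
depths = true_pts[:, 2]                 # absolute depths, as Depth-from-Defocus would supply

# Start from a wrongly scaled structure; reprojection alone cannot detect the
# wrong scale, but the depth term recovers it.
x0 = (0.5 * true_pts).ravel()
sol = least_squares(combined_residuals, x0, args=(obs, depths, pose, f, 10.0))
recovered = sol.x.reshape(-1, 3)
```

Note that the initial guess `0.5 * true_pts` has zero reprojection error (a uniformly rescaled scene projects identically), which is exactly the scale ambiguity of monocular Structure-from-Motion; only the depth-error term drives the optimiser to the absolutely scaled solution.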


Keywords: Point Spread Function · Absolute Scale · Bundle Adjustment · Reprojection Error · Scene Point





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Annika Kuhl¹,²
  • Christian Wöhler¹
  • Lars Krüger¹
  • Pablo d’Angelo¹
  • Horst-Michael Groß²

  1. DaimlerChrysler AG, Group Research, Machine Perception, Ulm, Germany
  2. Faculty of Computer Science and Automation, Technical University of Ilmenau, Ilmenau, Germany
