Alignment of Time-of-Flight and Stereoscopic Data

  • Miles Hansard
  • Seungkyu Lee
  • Ouk Choi
  • Radu Horaud
Part of the SpringerBriefs in Computer Science book series.


An approximately Euclidean representation of the visible scene can be obtained directly from a time-of-flight camera. An uncalibrated binocular system, in contrast, gives only a projective reconstruction of the scene. This chapter analyzes the geometric mapping between the two representations, without requiring an intermediate calibration of the binocular system. The mapping can be found by either of two new methods, one of which requires point correspondences between the range and color cameras, and one of which does not. It is shown that these methods can be used to reproject the range data into the binocular images, which makes it possible to associate high-resolution color and texture with each point in the Euclidean representation. The extension of these methods to multiple time-of-flight systems is demonstrated, and the associated problems are examined. An evaluation metric, which distinguishes calibration error from combined calibration and depth error, is developed. This metric is used to evaluate a system that is based on three time-of-flight cameras.


Keywords: Depth and color combination, Projective alignment, Time-of-flight camera calibration, Multicamera systems
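The mapping described in the abstract is a 4×4 collineation acting on homogeneous 3-D points, taking the Euclidean time-of-flight reconstruction into the projective binocular reconstruction. As a minimal sketch of the correspondence-based variant (not the chapter's actual algorithm), such a collineation can be estimated by a direct linear transform: each point pair constrains H X to be parallel to Y, and the stacked constraints are solved by SVD. The function name `fit_projective_3d` and the unweighted least-squares formulation are illustrative assumptions.

```python
import numpy as np

def fit_projective_3d(X, Y):
    """Estimate a 4x4 collineation H such that Y_i ~ H X_i (up to scale).

    X, Y: (N, 4) arrays of corresponding homogeneous 3-D points.
    Returns H with unit Frobenius norm (sign/scale is arbitrary).
    """
    rows = []
    for x, y in zip(X, Y):
        # Row j of kron(I4, x) dotted with vec(H) gives (H x)_j.
        Hx = np.kron(np.eye(4), x)          # shape (4, 16)
        for j in range(4):
            for k in range(j + 1, 4):
                # Parallelism constraint: y_k (Hx)_j - y_j (Hx)_k = 0.
                rows.append(y[k] * Hx[j] - y[j] * Hx[k])
    A = np.array(rows)
    # vec(H) is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(4, 4)
```

With noise-free correspondences the null space of the constraint matrix recovers H exactly (up to scale); with real data, the smallest singular vector gives the algebraic least-squares fit, which would normally be refined and preceded by coordinate normalization.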



Copyright information

© Miles Hansard 2013

Authors and Affiliations

  • Miles Hansard (1)
  • Seungkyu Lee (2)
  • Ouk Choi (2)
  • Radu Horaud (3)
  1. Electronic Engineering and Computer Science, Queen Mary, University of London, London, UK
  2. Samsung Advanced Institute of Technology, Yongin-si, Republic of Korea
  3. INRIA Grenoble Rhône-Alpes, Montbonnot Saint-Martin, France
