Image-guided ToF depth upsampling: a survey

  • Original Paper
  • Published in: Machine Vision and Applications

Abstract

Recent years have seen a remarkable growth of interest in the development and applications of time-of-flight (ToF) depth cameras. Despite continual improvements in their characteristics, the practical applicability of ToF cameras is still limited by the low resolution and quality of their depth measurements. This has motivated many researchers to combine ToF cameras with other sensors in order to enhance and upsample depth images. In this paper, we review approaches that couple ToF depth images with high-resolution optical images. Other classes of upsampling methods are also briefly discussed. Finally, we provide an overview of the performance evaluation tests presented in the related studies.
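
To make the image-guided idea concrete, the minimal Python sketch below performs plain joint bilateral upsampling in the spirit of Kopf et al. [66], one of the approaches covered by this survey: each high-resolution depth value is a weighted average of nearby low-resolution ToF samples, with weights combining spatial proximity and color similarity in the high-resolution guidance image. This is an illustrative sketch, not the implementation of any surveyed method; the function name, parameter values, and the assumption of a guidance image normalized to [0, 1] are ours.

    import numpy as np

    def joint_bilateral_upsample(depth_lr, rgb_hr, sigma_s=2.0, sigma_r=0.1, radius=3):
        """Upsample depth_lr (h, w) to the resolution of the guidance image rgb_hr (H, W, 3).

        sigma_s is the spatial bandwidth in low-resolution pixels; sigma_r is the
        range bandwidth for guidance colors, assumed to lie in [0, 1].
        """
        H, W = rgb_hr.shape[:2]
        h, w = depth_lr.shape
        sy, sx = h / H, w / W                    # high-res -> low-res scale factors
        out = np.zeros((H, W), dtype=np.float64)
        for y in range(H):
            for x in range(W):
                cy, cx = y * sy, x * sx          # target pixel in low-res coordinates
                y0, y1 = max(0, int(cy) - radius), min(h, int(cy) + radius + 1)
                x0, x1 = max(0, int(cx) - radius), min(w, int(cx) + radius + 1)
                num = den = 0.0
                for j in range(y0, y1):
                    for i in range(x0, x1):
                        # spatial weight: distance in the low-resolution grid
                        ws = np.exp(-((j - cy) ** 2 + (i - cx) ** 2) / (2 * sigma_s ** 2))
                        # range weight: guidance color at the target pixel vs. the
                        # guidance color at the location of the depth sample
                        gj = min(H - 1, int(round(j / sy)))
                        gi = min(W - 1, int(round(i / sx)))
                        dc = rgb_hr[y, x] - rgb_hr[gj, gi]
                        wr = np.exp(-np.dot(dc, dc) / (2 * sigma_r ** 2))
                        num += ws * wr * depth_lr[j, i]
                        den += ws * wr
                out[y, x] = num / den
        return out

The double loop is written for clarity only; practical pipelines reach real-time rates with the constant-time bilateral filtering schemes of [100, 140] or with GPU implementations.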

Notes

  1. Data courtesy of Zinemath Zrt [152].

  2. The methods have been developed by the authors of this survey. The algorithms are presented in [20].

References

  1. Alenya, G., Dellen, B., Torras, C.: 3D modelling of leaves from color and ToF data for robotized plant measuring. In: IEEE International Conference on Robotics and Automation, pp. 3408–3414 (2011)

  2. Awate, S.P., Whitaker, R.T.: Higher-order image statistics for unsupervised, information-theoretic, adaptive, image filtering. Proceedings of Conference on Computer Vision and Pattern Recognition 2, 44–51 (2005)

  3. Balure, C.S., Kini, M.R.: Depth image super-resolution: a review and wavelet perspective. In: International Conference on Computer Vision and Image Processing, pp. 543–555 (2017)

  4. Barash, D.: Fundamental relationship between bilateral filtering, adaptive smoothing, and the nonlinear diffusion equation. IEEE Trans. Pattern Anal. Mach. Intell. 24, 844–847 (2002)

  5. Bartczak, B., Koch, R.: Dense depth maps from low resolution time-of-flight depth and high resolution color views. In: Advances in Visual Computing, pp. 228–239. Springer, Heidelberg (2009)

  6. Beder, C., Bartczak, B., Koch, R.: A comparison of PMD-cameras and stereo-vision for the task of surface reconstruction using patchlets. In: Proceedings of Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2007)

  7. Bevilacqua, A., Di Stefano, L., Azzari, P.: People tracking using a time-of-flight depth sensor. In: Proceedings of International Conference on Video and Signal Based Surveillance (2006)

  8. Bredies, K., Kunisch, K., Pock, T.: Total generalized variation. SIAM J. Imaging Sci. 3, 492–526 (2010)

  9. Buades, A., Coll, B., Morel, J.-M.: A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 4, 490–530 (2005)

  10. Carter, J., Schmid, K., Waters, K., Betzhold, L., Hadley, B., Mataosky, R., Halleran, J.: Lidar 101: an introduction to lidar technology, data, and applications. Technical report, NOAA Coastal Services Center, Charleston, USA (2012)

  11. Čech, J., Šara, R.: Efficient sampling of disparity space for fast and accurate matching. In: BenCOS Workshop, CVPR, pp. 1–8 (2007)

  12. Chan, D., Buisman, H., Theobalt, C., Thrun, S.: A noise-aware filter for real-time depth upsampling. In: Proceedings of the ECCV Workshop on Multi-camera and Multi-modal Sensor Fusion Algorithms and Applications (2008)

  13. Choi, J., Min, D., Ham, B., Sohn, K.: Spatial and temporal up-conversion technique for depth video. In: Proceedings of International Conference on Image Processing, pp. 3525–3528 (2009)

  14. Choi, O., Lim, H., Kang, B., Kim, Y.S., Lee, K., Kim, J.D.K., Kim, C.-Y.: Discrete and continuous optimizations for depth image super-resolution. In: Proceedings of IS&T/SPIE Electronic Imaging, pp. 82900C–82900C (2012)

  15. Cui, Y., Schuon, S., Chan, D., Thrun, S., Theobalt, C.: 3D shape scanning with a time-of-flight camera. In: Proceedings of Conference on Computer Vision and Pattern Recognition, pp. 1173–1180 (2010)

  16. De Cubber, G., Doroftei, D., Sahli, H., Baudoin, Y.: Outdoor terrain traversability analysis for robot navigation using a time-of-flight camera. In: Proceedings of RGB-D Workshop on 3D Perception in Robotics (2011)

  17. Dellen, B., Alenyà, G., Foix, S., Torras, C.: 3D object reconstruction from Swissranger sensor data using a spring-mass model. In: Proceedings of International Conference on Computer Vision Theory and Applications, vol. 2, pp. 368–372 (2009)

  18. Diebel, J., Thrun, S.: An application of Markov random fields to range sensing. In: Proceedings of Advances in Neural Information Processing Systems, pp. 291–298 (2005)

  19. Dolson, J., Baek, J., Plagemann, C., Thrun, S.: Upsampling range data in dynamic environments. In: Proceedings of Conference on Computer Vision and Pattern Recognition, pp. 1141–1148 (2010)

  20. Eichhardt, I., Jankó, Z., Chetverikov, D.: Novel methods for image-guided ToF depth upsampling. In: IEEE International Conference on Systems, Man, and Cybernetics, pp. 002073–002078 (2016)

  21. Einramhof, P., Olufs, S., Vincze, M.: Experimental evaluation of state of the art 3D-sensors for mobile robot navigation. In: Proceedings of Austrian Association for Pattern Recognition Workshop, pp. 153–160 (2007)

  22. Falie, D., Buzuloiu, V.: Wide range time of flight camera for outdoor surveillance. In: Proceedings of IEEE Symposium on Microwaves, Radar and Remote Sensing, pp. 79–82 (2008)

  23. Fattal, R.: Image upsampling via imposed edge statistics. ACM Trans. Graphics 26, 95 (2007)

  24. Ferstl, D., Reinbacher, C., Ranftl, R., Rüther, M., Bischof, H.: Image guided depth upsampling using anisotropic total generalized variation. In: Proceedings of International Conference on Computer Vision, pp. 993–1000 (2013)

  25. Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24, 381–395 (1981)

  26. Fofi, D., Sliwa, T., Voisin, Y.: A comparative survey on invisible structured light. Electron. Imaging 2004, 90–98 (2004)

  27. Foix, S., Alenya, G., Andrade-Cetto, J., Torras, C.: Object modeling using a ToF camera under an uncertainty reduction approach. In: Proceedings of International Conference on Robotics and Automation, pp. 1306–1312 (2010)

  28. Foix, S., Alenya, G., Torras, C.: Lock-in time-of-flight (ToF) cameras: a survey. IEEE Sens. J. 11(9), 1917–1926 (2011)

  29. Foix, S., Alenyà, G., Torras, C.: Exploitation of time-of-flight (ToF) cameras. Technical Report IRI-DT-10-07, IRI-UPC (2010)

  30. Fu, M., Zhou, W.: Depth map super-resolution via extended weighted mode filtering. Vis. Commun. Image Process. 1, 1–4 (2016)

  31. Fuchs, S., May, S.: Calibration and registration for precise surface reconstruction with time-of-flight cameras. Int. J. Intell. Syst. Technol. Appl. 5, 274–284 (2008)

  32. Fukushima, N., Takeuchi, K., Kojima, A.: Self-similarity matching with predictive linear upsampling for depth map. In: 3DTV-Conference: The True Vision-Capture, Transmission and Display of 3D Video, pp. 1–4 (2016)

  33. Garcia, F., Mirbach, B., Ottersten, B., Grandidier, F., Cuesta, A.: Pixel weighted average strategy for depth sensor data fusion. In: Proceedings of International Conference on Image Processing, pp. 2805–2808 (2010)

  34. Gemeiner, P., Jojic, P., Vincze, M.: Selecting good corners for structure and motion recovery using a time-of-flight camera. In: International Conference on Intelligent Robots and Systems, pp. 5711–5716 (2009)

  35. Gokturk, S.B., Tomasi, C.: 3D head tracking based on recognition and interpolation using a time-of-flight depth sensor. Proceedings of Conference on Computer Vision and Pattern Recognition 2, 211–217 (2004)

  36. Gong, M., Yang, L., Wang, R., Gong, M.: A performance study on different cost aggregation approaches used in real-time stereo matching. Int. J. Comput. Vis. 75, 283–296 (2007)

  37. Grzegorzek, M., Theobalt, C., Koch, R., Kolb, A. (eds.): Time-of-Flight and Depth Imaging, Sensors, Algorithms, and Applications. Springer, Heidelberg (2013)

  38. Guan, H., Li, J., Yu, Y., Chapman, M., Wang, C.: Automated road information extraction from mobile laser scanning data. IEEE Trans. Intell. Transp. Syst. 16, 194–205 (2015)

  39. Guðmundsson, S.A., Aanæs, H., Larsen, R.: Fusion of stereo vision and time-of-flight imaging for improved 3D estimation. Int. J. Intell. Syst. Technol. Appl. 5(3), 425–433 (2008)

  40. Guðmundsson, S.A., Larsen, R., Aanæs, H., Pardas, M., Casas, J.R.: ToF imaging in smart room environments towards improved people tracking. In: Proceedings of Conference on Computer Vision and Pattern Recognition Workshops, pp. 1–6 (2008)

  41. Hahne, U., Alexa, M.: Combining time-of-flight depth and stereo images without accurate extrinsic calibration. Int. J. Intell. Syst. Technol. Appl. 5, 325–333 (2008)

  42. Hahne, U., Alexa, M.: Depth imaging by combining time-of-flight and on-demand stereo. In: Kolb, A., Koch, R. (eds.) Dynamic 3D Imaging, pp. 70–83. Springer, Berlin (2009)

  43. Hahne, U., Alexa, M.: Exposure fusion for time-of-flight imaging. In: Computer Graphics Forum, vol. 30, pp. 1887–1894. Wiley, London (2011)

  44. Han, Y., Lee, J.-Y., Kweon, I.: High quality shape from a single RGB-D image under uncalibrated natural illumination. In: Proceedings of International Conference on Computer Vision, pp. 1617–1624 (2013)

  45. Hansard, M., Lee, S., Choi, O., Horaud, R.: Time-of-Flight Cameras. Springer, Berlin (2013)

  46. Harrison, A., Newman, P.: Image and sparse laser fusion for dense scene reconstruction. In: Howard, A., Iagnemma, K., Kelly, A. (eds.) Field and Service Robotics, pp. 219–228. Springer, Berlin (2010)

  47. He, K., Sun, J., Tang, X.: Guided image filtering. In: Proceedings of European Conference on Computer Vision, pp. 1–14 (2010)

  48. Heidelberg Collaboratory for Image Processing, Ruprecht-Karl University: Time of flight stereo fusion collection. http://hci.iwr.uni-heidelberg.de/Benchmarks/ (2016)

  49. Herbort, S., Wöhler, C.: An introduction to image-based 3D surface reconstruction and a survey of photometric stereo methods. 3D Res. 2(3), 1–17 (2011)

  50. Herrera, C., Kannala, J., Heikkilä, J.: Joint depth and color camera calibration with distortion correction. IEEE Trans. Pattern Anal. Mach. Intell. 34, 2058–2064 (2012)

  51. Ho, Y.-S., Kang, Y.-S.: Multi-view depth generation using multi-depth camera system. In: International Conference on 3D Systems and Application, pp. 67–70 (2010)

  52. Hornacek, M., Rhemann, C., Gelautz, M., Rother, C.: Depth super resolution by rigid body self-similarity in 3D. In: Proceedings of Conference on Computer Vision and Pattern Recognition, pp. 1123–1130 (2013)

  53. Huhle, B., Schairer, T., Jenke, P., Straßer, W.: Fusion of range and color images for denoising and resolution enhancement with a non-local filter. Comput. Vis. Image Underst. 114, 1336–1345 (2010)

  54. Hui, T.-W., Loy, C.C., Tang, X.: Depth map super-resolution by deep multi-scale guidance. In: European Conference on Computer Vision, pp. 353–369 (2016)

  55. Hussmann, S., Liepert, T.: Robot vision system based on a 3D-ToF camera. In: Proceedings of Conference on Instrumentation and Measurement Technology, pp. 1–5 (2007)

  56. Ihrke, I., Kutulakos, K.N., Lensch, H.P.A., Magnor, M., Heidrich, W.: State of the art in transparent and specular object reconstruction. In: EUROGRAPHICS 2008 State of the Art Reports (2008)

  57. Izadi, S., Kim, D., Hilliges, O., Molyneaux, D., Hodges, S., Kohli, P., Shotton, J., Davison, A.J., Fitzgibbon, A.: KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera. In: Proceedings of ACM Symposium on User Interface Software and Technology, pp. 559–568 (2011)

  58. Jung, J., Lee, J.-Y., Jeong, Y., Kweon, I.: Time-of-flight sensor calibration for a color and depth camera pair. IEEE Trans. Pattern Anal. Mach. Intell. 37, 1501–1513 (2015)

  59. Kamilov, U.S., Boufounos, P.T.: Depth superresolution using motion adaptive regularization. In: IEEE International Conference on Multimedia & Expo Workshops, pp. 1–6 (2016)

  60. Kang, Y.-S., Ho, Y.-S.: High-quality multi-view depth generation using multiple color and depth cameras. In: IEEE International Conference on Multimedia and Expo, pp. 1405–1410 (2010)

  61. Katz, S., Adler, A., Yahav, G.: Combined depth filtering and super resolution. US Patent 8,660,362. www.google.com/patents/US8660362 (2014)

  62. Kim, C., Yu, H., Yang, G.: Depth super resolution using bilateral filter. In: Proceedings of International Congress on Image and Signal Processing, vol. 2, pp. 1067–1071 (2011)

  63. Kim, Y.M., Theobalt, C., Diebel, J., Kosecka, J., Miscusik, B., Thrun, S.: Multi-view image and ToF sensor fusion for dense 3D reconstruction. In: ICCV Workshops, pp. 1542–1549 (2009)

  64. Knoop, S., Vacek, S., Dillmann, R.: Sensor fusion for 3D human body tracking with an articulated 3D body model. In: Proceedings of International Conference on Robotics and Automation, pp. 1686–1691 (2006)

  65. Kolb, A., Barth, E., Koch, R., Larsen, R.: Time-of-flight cameras in computer graphics. Comput. Graphics Forum 29, 141–159 (2010)

  66. Kopf, J., Cohen, M.F., Lischinski, D., Uyttendaele, M.: Joint bilateral upsampling. ACM Trans. Graphics 26, 673–678 (2007)

  67. Kuhnert, K.-D., Stommel, M.: Fusion of stereo-camera and PMD-camera data for real-time suited precise 3D environment reconstruction. In: Proceedings of International Conference on Intelligent Robots and Systems, pp. 4780–4785 (2006)

  68. Kuznetsova, A.: A ToF camera calibration toolbox. http://github.com/kapibara/ToF-Calibration (2015)

  69. Kuznetsova, A., Rosenhahn, B.: On calibration of a low-cost time-of-flight camera. In: ECCV Workshop on Consumer Depth Cameras for Computer Vision (2014)

  70. Lai, K., Bo, L., Ren, X., Fox, D.: A large-scale hierarchical multi-view RGB-D object dataset. In: IEEE International Conference on Robotics and Automation, pp. 1817–1824 (2011)

  71. Langmann, B., Hartmann, K., Loffeld, O.: Comparison of depth super-resolution methods for 2D/3D images. Int. J. Comput. Inf. Syst. Ind. Manag. Appl. 3, 635–645 (2011)

  72. Lefloch, D., Nair, R., Lenzen, F., Schäfer, H., Streeter, L., Cree, M.J., Koch, R., Kolb, A.: Technical foundation and calibration methods for time-of-flight cameras. In: Grzegorzek, M., Theobalt, C., Koch, R., Kolb, A. (eds.) Time-of-Flight and Depth Imaging. Sensors, Algorithms, and Applications, pp. 3–24. Springer, Berlin (2013)

  73. Li, J., Lu, Z., Zeng, G., Gan, R., Wang, L., Zha, H.: A joint learning-based method for multi-view depth map super resolution. In: Proceedings of Asian Conference on Pattern Recognition, pp. 456–460 (2013)

  74. Li, J., Zeng, G., Gan, R., Zha, H., Wang, L.: A Bayesian approach to uncertainty-based depth map super resolution. In: Proceedings of Asian Conference on Computer Vision, pp. 205–216 (2012)

  75. Li, L.: Time-of-flight camera—an introduction. Technical report SLOA190B, Texas Instruments (2014). www.ti.com/lit/wp/sloa190b/sloa190b.pdf

  76. Li, Y., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep joint image filtering. In: European Conference on Computer Vision, pp. 154–169 (2016)

  77. Lindner, M., Schiller, I., Kolb, A., Koch, R.: Time-of-flight sensor calibration for accurate range sensing. Comput. Vis. Image Underst. 114, 1318–1328 (2010)

  78. Liu, M., Zhao, Y., Liang, J., Lin, C., Bai, H., Yao, C.: Depth map up-sampling with fractal dimension and texture-depth boundary consistencies. Neurocomputing (2017). doi:10.1016/j.neucom.2016.11.067

  79. Liu, X., Fujimura, K.: Hand gesture recognition using depth data. In: Proceedings of International Conference on Automatic Face and Gesture Recognition, pp. 529–534 (2004)

  80. Lo, K.-H., Wang, Y.-C., Hua, K.-L.: Edge-preserving depth map upsampling by joint trilateral filter. IEEE Trans. Cybern. pp. 1–14 (2017). doi:10.1109/TCYB.2016.2637661

  81. Lu, J., Min, D., Pahwa, R.S., Do, M.N.: A revisit to MRF-based depth map super-resolution and enhancement. In: International Conference on Acoustics, Speech and Signal Processing, pp. 985–988 (2011)

  82. Lu, S., Ren, X., Liu, F.: Depth enhancement via low-rank matrix completion. In: Proceedings of Conference on Computer Vision and Pattern Recognition, pp. 3390–3397 (2014)

  83. Mac Aodha, O., Campbell, N.D.F., Nair, A., Brostow, G.J.: Patch based synthesis for single depth image super-resolution. In: Proceedings of European Conference on Computer Vision, pp. 71–84 (2012)

  84. Mandal, S., Bhavsar, A., Sao, A.K.: Depth map restoration from undersampled data. IEEE Trans. Image Process. 26, 119–134 (2017)

  85. May, S., Droeschel, D., Holz, D., Wiesen, C., Fuchs, S.: 3D pose estimation and mapping with time-of-flight cameras. In: Proceedings of IROS Workshop on 3D Mapping (2008)

  86. Milanfar, P.: A tour of modern image filtering: new insights and methods, both practical and theoretical. IEEE Signal Process. Mag. 30, 106–128 (2013)

  87. Min, D., Lu, J., Do, M.N.: Depth video enhancement based on weighted mode filtering. IEEE Trans. Image Process. 21, 1176–1190 (2012)

  88. Murphy, K.P.: Machine Learning: A Probabilistic Perspective. MIT press, Cambridge (2012)

  89. Nair, R., Meister, S., Lambers, M., Balda, M., Hofmann, H., Kolb, A., Kondermann, D., Jähne, B.: Ground truth for evaluating time of flight imaging. In: Grzegorzek, M., Theobalt, C., Koch, R., Kolb, A. (eds.) Time-of-Flight and Depth Imaging. Sensors, Algorithms, and Applications, pp. 52–74. Springer, Berlin (2013)

  90. Nair, R., Ruhl, K., Lenzen, F., Meister, S., Schäfer, H., Garbe, C.S., Eisemann, M., Magnor, M., Kondermann, D.: A survey on time-of-flight stereo fusion. In: Grzegorzek, M., Theobalt, C., Koch, R., Kolb, A. (eds.) Time-of-Flight and Depth Imaging. Sensors, Algorithms, and Applications, pp. 105–127. Springer, Berlin (2013)

  91. Nanda, H., Fujimura, K.: Visual tracking using depth data. In: Proceedings of Conference on Computer Vision and Pattern Recognition Workshops (2004)

  92. Nehab, D., Rusinkiewicz, S., Davis, J., Ramamoorthi, R.: Efficiently combining positions and normals for precise 3D geometry. ACM Trans. Graphics 24, 536–543 (2005)

  93. Newcombe, R.A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A.J., Kohli, P., Shotton, J., Hodges, S., Fitzgibbon, A.: KinectFusion: real-time dense surface mapping and tracking. In: Proceedings of IEEE International Symposium on Mixed and Augmented Reality, pp. 127–136 (2011)

  94. Nguyen, C.V., Izadi, S., Lovell, D.: Modeling Kinect sensor noise for improved 3D reconstruction and tracking. In: Second Joint 3DIM/3DPVT Conference: 3D Imaging, Modeling, Processing, Visualization & Transmission, pp. 524–530 (2012)

  95. Or-El, R., Rosman, G., Wetzler, A., Kimmel, R., Bruckstein, A.M.: RGBD-fusion: real-time high precision depth recovery. In: Proceedings of Conference on Computer Vision and Pattern Recognition, pp. 5407–5416 (2015)

  96. Paris, S., Kornprobst, P., Tumblin, J., Durand, F.: Bilateral Filtering. Now Publishers Inc., Norwell (2009)

  97. Park, J., Kim, H., Tai, Y.-W., Brown, M.S., Kweon, I.: High quality depth map upsampling for 3D-ToF cameras. In: Proceedings of International Conference on Computer Vision, pp. 1623–1630 (2011)

  98. Park, J., Kim, H., Tai, Y.-W., Brown, M.S., Kweon, I.: High-quality depth map upsampling and completion for RGB-D cameras. IEEE Trans. Image Process. 23, 5559–5572 (2014)

  99. Pfeifer, N., Lichti, D., Böhm, J., Karel, W.: 3D cameras: errors, calibration and orientation. In: Remondino, F., Stoppa, D. (eds.) TOF Range-Imaging Cameras, pp. 117–138. Springer, Berlin (2013)

  100. Porikli, F.: Constant time O(1) bilateral filtering. In: Proceedings of Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2008)

  101. Prusak, A., Melnychuk, O., Roth, H., Schiller, I.: Pose estimation and map building with a time-of-flight-camera for robot navigation. Int. J. Intell. Syst. Technol. Appl. 5, 355–364 (2008)

  102. Remondino, F., Stoppa, D.: ToF Range-Imaging Cameras. Springer, Berlin (2013)

  103. Richardt, C., Stoll, C., Dodgson, N.A., Seidel, H.-P., Theobalt, C.: Coherent spatiotemporal filtering, upsampling and rendering of RGBZ videos. Comput. Graphics Forum 31, 247–256 (2012)

  104. Riegler, G., Rüther, M., Bischof, H.: ATGV-net: accurate depth super-resolution. In: European Conference on Computer Vision, pp. 268–284 (2016)

  105. Riemens, A.K., Gangwal, O.P., Barenbrug, B., Berretty, R.-P.M.: Multistep joint bilateral depth upsampling. In: IS&T/SPIE Electronic Imaging, pp. 72570M–72570M (2009)

  106. Robot Vision Laboratory, Graz University of Technology: ToFMark—Depth upsampling evaluation dataset. http://rvlab.icg.tugraz.at/tofmark/ (2014)

  107. Rotman, D., Gilboa, G.: A depth restoration occlusionless temporal dataset. In: International Conference on 3D Vision, pp. 176–184 (2016)

  108. Ruiz-Sarmiento, J.R., Galindo, C., Gonzalez, J.: Improving human face detection through ToF cameras for ambient intelligence applications. In: Ambient Intelligence-Software and Applications, pp. 125–132. Springer, Berlin (2011)

  109. Salvi, J., Fernandez, S., Pribanic, T., Llado, X.: A state of the art in structured light patterns for surface profilometry. Pattern Recogn. 43, 2666–2680 (2010)

  110. Scharstein, D., Hirschmüller, H., Kitajima, Y., Krathwohl, G., Nesic, N., Wang, X., Westling, P.: Middlebury stereo datasets. http://vision.middlebury.edu/stereo/data/ (2001–2014)

  111. Scharstein, D., Szeliski, R.: A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comput. Vis. 47, 7–42 (2002)

  112. Schaul, L., Fredembach, C., Süsstrunk, S.: Color image dehazing using the near-infrared. In: Proceedings of International Conference on Image Processing, pp. 1629–1632 (2009)

  113. Schmidt, M.: Analysis, Modeling and Dynamic Optimization of 3D Time-of-Flight Imaging Systems. PhD thesis, Ruperto-Carola University of Heidelberg, Germany (2011)

  114. Schoenberg, J.R., Nathan, A., Campbell, M.: Segmentation of dense range information in complex urban scenes. In: Proceedings of International Conference on Intelligent Robots and Systems, pp. 2033–2038. IEEE (2010)

  115. Schuon, S., Theobalt, C., Davis, J., Thrun, S.: Lidarboost: depth superresolution for ToF 3D shape scanning. In: Proceedings of Conference on Computer Vision and Pattern Recognition, pp. 343–350 (2009)

  116. Schwarz, S., Olsson, R., Sjostrom, M.: Depth sensing for 3DTV: a survey. IEEE MultiMedia 20, 10–17 (2013)

  117. Schwarz, S., Sjöström, M., Olsson, R.: Multivariate sensitivity analysis of time-of-flight sensor fusion. 3D Res. 5, 1–16 (2014)

  118. Schwarz, S., Sjöström, M., Olsson, R.: Temporal consistent depth map upscaling for 3DTV. In: IS&T/SPIE Electronic Imaging, pp. 901302–901302 (2014)

  119. Schwarz, S., Sjöström, M., Olsson, R.: Weighted optimization approach to time-of-flight sensor fusion. IEEE Trans. Image Process. 23, 214–225 (2014)

  120. Schwarz, S.: Gaining Depth: Time-of-Flight Sensor Fusion for Three-Dimensional Video Content Creation. PhD thesis, Mittuniversitetet, Sweden (2014)

  121. Seitz, S.M., Curless, B., Diebel, J., Scharstein, D., Szeliski, R.: A comparison and evaluation of multi-view stereo reconstruction algorithms. Conference on Computer Vision and Pattern Recognition 1, 519–528 (2006)

  122. Snavely, N., Zitnick, C.L., Kang, S.B., Cohen, M.: Stylizing 2.5-D video. In: Proceedings of 4th International Symposium on Non-photorealistic Animation and Rendering, pp. 63–69. ACM (2006)

  123. Soh, Y., Sim, J.Y., Kim, C.S., Lee, S.U.: Superpixel-based depth image super-resolution. In: IS&T/SPIE Electronic Imaging, pp. 82900D–82900D. Int. Society for Optics and Photonics (2012)

  124. Sonka, M., Hlavac, V., Boyle, R.: Image Processing, Analysis, and Machine Vision. Thomson, Stamford (2008)

  125. Stoykova, E., Alatan, A.A., Benzie, P., Grammalidis, N., Malassiotis, S., Ostermann, J., Piekh, S.: 3-D time-varying scene capture technologies–a survey. IEEE Trans. Circuits Syst. Video Technol. 17, 1568–1586 (2007)

  126. Szeliski, R.: Computer Vision: Algorithms and Applications. Springer, Berlin (2010)

  127. Tallón, M., Babacan, S.D., Mateos, J., Do, M.N., Molina, R., Katsaggelos, A.K.: Upsampling and denoising of depth maps via joint-segmentation. In: Proceedings of European Signal Processing Conference, pp. 245–249 (2012)

  128. Thielemann, J.T., Breivik, G.M., Berge, A.: Pipeline landmark detection for autonomous robot navigation using time-of-flight imagery. In: Proceedings of Conference on Computer Vision and Pattern Recognition Workshops, pp. 1–7 (2008)

  129. Tian, J., Ma, K.-K.: A survey on super-resolution imaging. Signal Image Video Process. 5, 329–342 (2011)

  130. Tomasi, C., Manduchi, R.: Bilateral filtering for gray and color images. In: Proceedings of International Conference on Computer Vision, pp. 839–846 (1998)

  131. Van Ouwerkerk, J.D.: Image super-resolution survey. Image Vis. Comput. 24, 1039–1052 (2006)

  132. Villena-Martínez, V., Fuster-Guilló, A., Azorín-López, J., Saval-Calvo, M., Mora-Pascual, J., Garcia-Rodriguez, J., Garcia-Garcia, A.: A quantitative comparison of calibration methods for RGB-D sensors using different technologies. Sensors 17, 243 (2017)

  133. Vosters, L., Varekamp, C., de Haan, G.: Evaluation of efficient high quality depth upsampling methods for 3DTV. In: IS&T/SPIE Electronic Imaging, pp. 865005–865005 (2013)

  134. Vosters, L., Varekamp, C., de Haan, G.: Overview of efficient high-quality state-of-the-art depth enhancement methods by thorough design space exploration. J. Real-Time Image Process. 1–21 (2015). doi:10.1007/s11554-015-0537-z

  135. Weingarten, J.W., Gruener, G., Siegwart, R.: A state-of-the-art 3D sensor for robot navigation. Proceedings of International Conference on Intelligent Robots and Systems 3, 2155–2160 (2004)

  136. Xiang, X., Li, G., Tong, J., Zhang, M., Pan, Z.: Real-time spatial and depth upsampling for range data. Trans. Comput. Sci. XII: Special Issue Cyberworlds 6670, 78 (2011)

  137. Xu, X., Po, L.-M., Ng, K.-H., Feng, L., Cheung, K.-W., Cheung, C.-H., Ting, C.-W.: Depth map misalignment correction and dilation for DIBR view synthesis. Signal Process. Image Commun. 28, 1023–1045 (2013)

  138. Yang, K., Dou, Y., Chen, X., Lv, S., Qiao, P.: Depth enhancement via non-local means filter. In: International Conference on Advanced Computational Intelligence, pp. 126–130 (2015)

  139. Yang, Q., Ahuja, N., Yang, R., Tan, K.-H., Davis, J., Culbertson, B., Apostolopoulos, J., Wang, G.: Fusion of median and bilateral filtering for range image upsampling. IEEE Trans. Image Process. 22, 4841–4852 (2013)

  140. Yang, Q., Tan, K.-H., Ahuja, N.: Real-time O(1) bilateral filtering. In: Proceedings of Conference on Computer Vision and Pattern Recognition, pp. 557–564 (2009)

  141. Yang, Q., Tan, K.H., Culbertson, B., Apostolopoulos, J.: Fusion of active and passive sensors for fast 3D capture. In: Proceedings of IEEE International Workshop on Multimedia Signal Processing, pp. 69–74 (2010)

  142. Yang, Q., Yang, R., Davis, J., Nistér, D.: Spatial-depth super resolution for range images. In: Proceedings of Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2007)

  143. Yin, L., Yang, R., Gabbouj, M., Neuvo, Y.: Weighted median filters: a tutorial. IEEE Trans. Circuits Syst. II Analog. Digital Signal Process 43, 157–192 (1996)

  144. Yu, L.-F., Yeung, S.-K., Tai, Y.-W., Lin, S.: Shading-based shape refinement of RGBD images. In: Proceedings of Conference on Computer Vision and Pattern Recognition, pp. 1415–1422 (2013)

  145. Yuan, F., Swadzba, A., Philippsen, R., Engin, O., Hanheide, M., Wachsmuth, S.: Laser-based navigation enhanced with 3D time-of-flight data. In: Proceedings of International Conference on Robotics and Automation, pp. 2844–2850 (2009)

  146. Yuan, L., Jin, X., Yuan, C.: Enhanced joint trilateral up-sampling for super-resolution. In: Pacific Rim Conference on Multimedia, pp. 518–526 (2016)

  147. Zhang, Z.: A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 22, 1330–1334 (2000)

  148. Zhang, Z.: Microsoft Kinect sensor and its effect. IEEE MultiMedia 19, 4–10 (2012)

  149. Zhu, J., Wang, L., Gao, J., Yang, R.: Spatial-temporal fusion for high accuracy depth maps using dynamic MRFs. IEEE Trans. Pattern Anal. Mach. Intell. 32, 899–909 (2010)

  150. Zhu, J., Wang, L., Yang, R., Davis, J.E.: Fusion of time-of-flight depth and stereo for high accuracy depth maps. In: Proceedings of Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2008)

  151. Zhu, J., Wang, L., Yang, R., Davis, J.E., Pan, Z.: Reliability fusion of time-of-flight depth and stereo geometry for high quality depth maps. IEEE Trans. Pattern Anal. Mach. Intell. 33(7), 1400–1414 (2011)

  152. Zinemath Zrt.: The zLense platform. www.zinemath.com/ (2014)

Acknowledgements

We are grateful to Zinemath Zrt for providing test data. This research was supported in part by the programme ‘Highly industrialised region on the west part of Hungary with limited R&D capacity: Research and development programs related to strengthening the strategic future oriented industries manufacturing technologies and products of regional competences carried out in comprehensive collaboration’ of the Hungarian National Research, Development and Innovation Fund (NKFIA), Grant #VKSZ_12-1-2013-0038. This work was also supported by the NKFIA Grant #K-120233.

Author information

Corresponding author

Correspondence to Dmitry Chetverikov.


About this article

Cite this article

Eichhardt, I., Chetverikov, D. & Jankó, Z. Image-guided ToF depth upsampling: a survey. Machine Vision and Applications 28, 267–282 (2017). https://doi.org/10.1007/s00138-017-0831-9

