Edge-preserving interpolation of depth data exploiting color information

  • Valeria Garro
  • Carlo Dal Mutto
  • Pietro Zanuttigh
  • Guido M. Cortelazzo

Abstract

The extraction of depth information from dynamic scenes is an intriguing topic because of its prospective role in many applications, including free-viewpoint and 3D video systems. Time-of-flight (ToF) range cameras allow for the acquisition of depth maps at video rate, but they are characterized by a limited resolution, especially when compared with standard color cameras. This paper presents a super-resolution method for depth maps that exploits side information from a standard color camera: the proposed method uses a segmented version of the high-resolution color image to identify the main objects in the scene, and a novel surface prediction scheme to interpolate the depth samples provided by the ToF camera. Effective solutions are provided for critical issues such as the joint calibration of the two devices and the unreliability of the acquired data. Experimental results on both synthetic and real-world scenes show that the proposed method yields more accurate interpolation than standard interpolation approaches and state-of-the-art joint depth and color interpolation schemes.
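The core idea described above, using a high-resolution color segmentation to keep depth interpolation from bleeding across object boundaries, can be illustrated with a simplified sketch. This is not the paper's actual surface-prediction scheme; it is a hypothetical stand-in that averages only those low-resolution depth samples whose position falls in the same color segment as the target pixel, falling back to the nearest sample otherwise.

```python
import numpy as np

def segment_guided_upsample(depth_lr, labels_hr, scale):
    """Upsample a low-res depth map using a high-res segmentation
    as side information (simplified illustration, not the paper's
    surface-prediction scheme).

    depth_lr : (h, w) array of low-resolution depth samples
    labels_hr: (h*scale, w*scale) integer segment labels from the
               color image
    scale    : integer upsampling factor
    """
    h, w = depth_lr.shape
    H, W = h * scale, w * scale
    assert labels_hr.shape == (H, W)

    out = np.empty((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            # 3x3 neighbourhood of candidate samples on the low-res grid.
            y0, y1 = max(0, i // scale - 1), min(h, i // scale + 2)
            x0, x1 = max(0, j // scale - 1), min(w, j // scale + 2)
            best, best_d = None, None
            vals = []
            for yy in range(y0, y1):
                for xx in range(x0, x1):
                    # Segment label at the sample's high-res position.
                    same = labels_hr[yy * scale, xx * scale] == labels_hr[i, j]
                    d = (yy * scale - i) ** 2 + (xx * scale - j) ** 2
                    if same:
                        vals.append(depth_lr[yy, xx])
                    if best_d is None or d < best_d:
                        best_d, best = d, depth_lr[yy, xx]
            # Average samples from the same segment; otherwise fall back
            # to the nearest sample, avoiding bleeding across depth edges.
            out[i, j] = np.mean(vals) if vals else best
    return out
```

On a scene with a sharp depth discontinuity aligned with a segment boundary, this scheme preserves the edge exactly, whereas bilinear interpolation would blur depth values across it.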

Keywords

Depth map · Interpolation · Super resolution · Calibration · Time of flight


Copyright information

© Institut Mines-Télécom and Springer-Verlag France 2013

Authors and Affiliations

  • Valeria Garro (1)
  • Carlo Dal Mutto (2)
  • Pietro Zanuttigh (2)
  • Guido M. Cortelazzo (2)
  1. Department of Computer Science, University of Verona, Verona, Italy
  2. Department of Information Engineering, University of Padova, Padova, Italy
