Multimedia Tools and Applications

Volume 76, Issue 3, pp 4357–4380

Depth completion for Kinect v2 sensor

  • Wanbin Song
  • Anh Vu Le
  • Seokmin Yun
  • Seung-Won Jung
  • Chee Sun Won

Abstract

Kinect v2 adopts a time-of-flight (ToF) depth sensing mechanism, which produces different types of depth artifacts compared to the original Kinect v1. The goal of this paper is to propose a depth completion method designed specifically for the Kinect v2 depth artifacts. Observing the types of depth errors specific to the Kinect v2, such as thin hole-lines along object boundaries and a new type of hole in the image corners, we exploit the positions of edges extracted from the Kinect v2 color image to guide accurate hole-filling around object boundaries. Since our approach requires precise registration between the color and depth images, we also introduce a transformation matrix that yields point-to-point correspondence with pixel accuracy. Experimental results demonstrate the effectiveness of the proposed depth image completion algorithm for the Kinect v2 in terms of completion accuracy and execution time.
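To make the two steps in the abstract concrete, the Python sketch below illustrates (a) a generic pinhole reprojection that registers a depth map to the color camera and (b) hole-filling that discards depth samples lying on color edges. This is a minimal sketch under stated assumptions, not the paper's method: the calibration inputs K_d, K_c, R, t, the window size, and the convention that holes are zero-valued pixels are hypothetical placeholders, and the color image is assumed to have been resampled to the depth resolution.

```python
# Illustrative sketch (not the paper's algorithm) of depth-to-color
# registration and color-edge-guided hole-filling.
import numpy as np
import cv2


def register_depth_to_color(depth, K_d, K_c, R, t):
    """Map each valid depth pixel into the color camera's image plane.

    K_d, K_c: 3x3 intrinsics; R, t: depth-to-color extrinsics.
    All are hypothetical placeholders for calibration results.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth.astype(np.float64)
    mask = z > 0
    # Back-project valid depth pixels to 3-D points in the depth camera frame.
    x = (u - K_d[0, 2]) * z / K_d[0, 0]
    y = (v - K_d[1, 2]) * z / K_d[1, 1]
    pts = np.stack([x[mask], y[mask], z[mask]])       # 3 x N points
    pts_c = R @ pts + t.reshape(3, 1)                 # depth -> color frame
    # Project into the color image and keep points that land inside it.
    uc = np.round(K_c[0, 0] * pts_c[0] / pts_c[2] + K_c[0, 2]).astype(int)
    vc = np.round(K_c[1, 1] * pts_c[1] / pts_c[2] + K_c[1, 2]).astype(int)
    ok = (uc >= 0) & (uc < w) & (vc >= 0) & (vc < h)
    registered = np.zeros_like(depth)
    registered[vc[ok], uc[ok]] = z[mask][ok].astype(depth.dtype)
    return registered


def fill_holes_with_color_edges(depth, color, window=5):
    """Fill zero-valued holes from nearby depths, guided by color edges."""
    gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                  # color-edge map
    filled = depth.copy()
    half = window // 2
    for yh, xh in np.argwhere(depth == 0):
        y0, y1 = max(0, yh - half), min(depth.shape[0], yh + half + 1)
        x0, x1 = max(0, xh - half), min(depth.shape[1], xh + half + 1)
        patch = depth[y0:y1, x0:x1]
        edge_patch = edges[y0:y1, x0:x1]
        # Discard samples on color edges so foreground and background
        # depths are not mixed at object boundaries.
        valid = patch[(patch > 0) & (edge_patch == 0)]
        if valid.size:
            filled[yh, xh] = np.median(valid)
    return filled
```

A full implementation would instead use the paper's calibrated transformation matrix and would also handle the image-corner holes noted above.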

Keywords

Kinect v2 · Hole-filling · Depth completion · Depth and color fusion


Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  • Wanbin Song (1)
  • Anh Vu Le (1)
  • Seokmin Yun (1)
  • Seung-Won Jung (2)
  • Chee Sun Won (1)

  1. Department of Electronics and Electrical Engineering, Dongguk University-Seoul, Seoul, South Korea
  2. Department of Multimedia Engineering, Dongguk University-Seoul, Seoul, South Korea
