A Memory-Efficient KinectFusion Using Octree

  • Ming Zeng
  • Fukai Zhao
  • Jiaxiang Zheng
  • Xinguo Liu
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7633)

Abstract

KinectFusion is a real-time 3D reconstruction system based on a low-cost moving depth camera and commodity graphics hardware. It represents the reconstructed surface as a signed distance function and stores it in a uniform volumetric grid. Although the uniform grid representation has advantages for parallel computation on the GPU, it requires a huge amount of GPU memory. This paper presents a memory-efficient implementation of KinectFusion. The basic idea is to design an octree-based data structure on the GPU and store the signed distance function on its data nodes. Based on the octree structure, we redesign the reconstruction update and surface prediction steps to fully exploit GPU parallelism. In the reconstruction update step, we first perform "add nodes" operations in a level-order manner and then update the signed distance function. In the surface prediction step, we adopt a top-down ray tracing method to estimate the surface of the scene. In our experiments, our method uses less than 10% of the memory of KinectFusion while remaining fast. Consequently, our method can reconstruct scenes 8 times larger than the original KinectFusion on the same hardware setup.
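
The octree-based TSDF storage, the level-order "add nodes" pass, and the weighted signed-distance update described above can be illustrated with a minimal, CPU-side sketch. The names (Octree, addNode, integrate) and the hashing scheme are illustrative assumptions for exposition, not the authors' GPU data structure.

```cpp
// Illustrative sketch only: a sparse octree whose finest-level "data nodes"
// store truncated signed distance (TSDF) and weight values. New nodes are
// added level by level, mirroring the level-order "add nodes" pass.
#include <cstddef>
#include <cstdio>
#include <unordered_map>

struct DataNode {            // leaf node holding the signed distance function
    float tsdf   = 1.0f;     // truncated signed distance, initialised to "far"
    float weight = 0.0f;     // accumulated integration weight
};

// A node is addressed by (level, x, y, z); the children of (l, x, y, z) are
// (l+1, 2x+dx, 2y+dy, 2z+dz) with dx, dy, dz in {0, 1}.
struct Key {
    int level, x, y, z;
    bool operator==(const Key& o) const {
        return level == o.level && x == o.x && y == o.y && z == o.z;
    }
};
struct KeyHash {
    size_t operator()(const Key& k) const {
        return ((size_t)k.level * 73856093u) ^ ((size_t)k.x * 19349663u) ^
               ((size_t)k.y * 83492791u) ^ ((size_t)k.z * 2654435761u);
    }
};

struct Octree {
    int maxLevel;
    std::unordered_map<Key, DataNode, KeyHash> nodes;  // sparse node storage

    explicit Octree(int levels) : maxLevel(levels) {}

    // "Add nodes" in a level-order manner: ensure every ancestor of the
    // requested finest-level cell exists before the data node itself.
    DataNode& addNode(int x, int y, int z) {
        for (int level = 0; level <= maxLevel; ++level) {
            int shift = maxLevel - level;
            Key k{level, x >> shift, y >> shift, z >> shift};
            nodes.emplace(k, DataNode{});   // no-op if the node already exists
        }
        return nodes[Key{maxLevel, x, y, z}];
    }

    // Weighted running-average TSDF update, as in KinectFusion's
    // reconstruction-update step, applied only to allocated data nodes.
    void integrate(int x, int y, int z, float sdf, float w) {
        DataNode& n = addNode(x, y, z);
        n.tsdf = (n.tsdf * n.weight + sdf * w) / (n.weight + w);
        n.weight += w;
    }
};

int main() {
    Octree tree(8);                      // 2^8 = 256^3 virtual resolution
    tree.integrate(10, 20, 30, 0.02f, 1.0f);
    tree.integrate(10, 20, 30, -0.01f, 1.0f);
    std::printf("allocated nodes: %zu\n", tree.nodes.size());
}
```

Because only cells near the observed surface ever allocate nodes, memory grows with the surface area of the scene rather than with the full volume, which is the source of the reported memory savings.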

Keywords

Octree, GPU, KinectFusion, 3D Reconstruction

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Ming Zeng (1)
  • Fukai Zhao (1)
  • Jiaxiang Zheng (1)
  • Xinguo Liu (1)

  1. State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
