Points2Surf: Learning Implicit Surfaces from Point Clouds

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12350)


A key step in any scanning-based asset creation workflow is to convert unordered point clouds to a surface. Classical methods (e.g., Poisson reconstruction) start to degrade in the presence of noisy and partial scans. Hence, deep-learning-based methods have recently been proposed to produce complete surfaces, even from partial scans. However, such data-driven methods struggle to generalize to new shapes with large geometric and topological variations. We present Points2Surf, a novel patch-based learning framework that produces accurate surfaces directly from raw scans without normals. Learning a prior over a combination of detailed local patches and coarse global information improves generalization performance and reconstruction accuracy. Our extensive comparison on both synthetic and real data demonstrates a clear advantage of our method over state-of-the-art alternatives on previously unseen classes (on average, Points2Surf reduces reconstruction error by 30% over SPR and by 270%+ over deep-learning-based state-of-the-art methods), at the cost of longer computation times and a slight increase in small-scale topological noise in some cases. Our source code, pre-trained model, and dataset are available at:
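To make the "detailed local patch plus coarse global information" idea concrete, the sketch below assembles the two point-set inputs such a patch-based method would feed a network for a single query point: a normalized local neighborhood and a query-centered global subsample. Function names, the neighborhood size `k`, and the normalization choices are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def make_query_input(points, q, k=300, n_global=1000, seed=None):
    """Build the two network inputs for a query point q:
    - a local patch: the k nearest neighbours of q, centred on q and
      scaled by the patch radius (captures fine detail);
    - a global context: a random subsample of the whole cloud, centred
      on q and scaled by the shape's bounding radius (captures coarse
      shape). Normalisation details here are illustrative only."""
    rng = np.random.default_rng(seed)

    # Local patch: k nearest neighbours, normalised to unit radius.
    dist = np.linalg.norm(points - q, axis=1)
    nn_idx = np.argsort(dist)[:k]
    patch = (points[nn_idx] - q) / dist[nn_idx].max()

    # Global context: uniform subsample, centred on q, scaled globally.
    sub_idx = rng.choice(len(points), size=min(n_global, len(points)),
                         replace=False)
    bound = np.linalg.norm(points - points.mean(axis=0), axis=1).max()
    global_ctx = (points[sub_idx] - q) / bound

    return patch, global_ctx

# Toy usage: a random cloud and a query at the origin. A network would
# map (patch, global_ctx) to a sign and an absolute distance, and a
# marching-cubes pass over many such queries would extract the surface.
pts = np.random.default_rng(0).normal(size=(2000, 3))
patch, ctx = make_query_input(pts, np.zeros(3), k=300, n_global=500, seed=1)
```

Keeping the local patch scale-free while scaling the global subsample by the whole shape is one plausible way to let the network learn detail and coarse structure independently, which is the intuition the abstract credits for the improved generalization.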


Keywords: Surface reconstruction · Implicit surfaces · Point clouds · Patch-based · Local and global · Deep learning · Generalization



This work has been supported by the FWF projects no. P24600, P27972 and P32418 and the ERC Starting Grant SmartGeometry (StG-2013-335373).

Supplementary material

Supplementary material 1: 504441_1_En_7_MOESM1_ESM.pdf (3.8 MB)



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. TU Wien, Vienna, Austria
  2. Adobe, London, UK
  3. VRVis, Vienna, Austria
  4. University College London, London, UK
