Meshing Point Clouds with Predicted Intrinsic-Extrinsic Ratio Guidance

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12353)

Abstract

We are interested in reconstructing the mesh representation of object surfaces from point clouds. Surface reconstruction is a prerequisite for downstream applications such as rendering, collision avoidance for planning, and animation. However, the task is challenging if the input point cloud has a low resolution, which is common in real-world scenarios (e.g., from LiDAR or Kinect sensors). Existing learning-based mesh generation methods mostly predict the surface by first building a shape embedding at the whole-object level, a design that struggles to produce fine-grained details and to generalize to unseen categories. Instead, we propose to leverage the input point cloud as much as possible, by only adding connectivity information to the existing points. In particular, we predict which triplets of points should form faces. Our key innovation is a surrogate of local connectivity, computed by comparing intrinsic and extrinsic metrics. We learn to predict this surrogate using a deep point cloud network and then feed it to an efficient post-processing module for high-quality mesh generation. Experiments on synthetic and real data demonstrate that our method not only preserves details and handles ambiguous structures, but also generalizes well to unseen categories.
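The intrinsic-extrinsic ratio compares the along-surface (geodesic) distance between two points to their straight-line (Euclidean) distance: for a pair lying on the same local surface patch the ratio is close to 1, while for a pair that is only close in Euclidean space (e.g., the two sides of a thin slab) it is large. The sketch below shows how such a ratio could gate candidate triangles; it is a minimal illustration, not the authors' implementation, and the names `intrinsic_extrinsic_ratio`, `filter_candidate_triangles`, the threshold `tau`, and the stand-in `predicted_ratio` callable are all hypothetical.

```python
import numpy as np
from itertools import combinations

def intrinsic_extrinsic_ratio(geodesic_d, euclidean_d, eps=1e-8):
    """Ratio of intrinsic (along-surface) to extrinsic (straight-line)
    distance: ~1 for locally connected point pairs, large for pairs
    that are only close in Euclidean space (e.g. across a thin gap)."""
    return geodesic_d / (euclidean_d + eps)

def filter_candidate_triangles(candidates, predicted_ratio, tau=1.2):
    """Keep a candidate triangle only if every one of its three edges
    looks locally connected, i.e. its predicted ratio is <= tau.

    candidates      : iterable of (i, j, k) vertex-index triplets
    predicted_ratio : callable (i, j) -> float; stands in for the
                      per-pair output of a trained point cloud network
    tau             : hypothetical acceptance threshold
    """
    return np.asarray(
        [tri for tri in candidates
         if all(predicted_ratio(i, j) <= tau
                for i, j in combinations(tri, 2))])

# Toy usage: the edge (0, 3) spans a gap (ratio 3.0), so the second
# candidate triangle is rejected and only (0, 1, 2) survives.
ratio = lambda i, j: 3.0 if {i, j} == {0, 3} else 1.0
print(filter_candidate_triangles([(0, 1, 2), (0, 1, 3)], ratio))  # [[0 1 2]]
```

In the paper's pipeline, the per-pair prediction comes from a trained deep point cloud network and feeds an efficient post-processing module; the hand-written `ratio` lambda above merely marks one pair as spanning a gap to show the filtering behavior.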

Keywords

Mesh reconstruction · Point cloud

Notes

Acknowledgements

This work was funded in part by Kuaishou Technology, NSF grant IIS-1764078, NSF grant 1703957, the Ronald L. Graham chair and the UC San Diego Center for Visual Computing.

Supplementary material

Supplementary material 1: 504445_1_En_5_MOESM1_ESM.pdf (PDF, 9.8 MB)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

University of California, San Diego, USA
