Abstract
Explicit neural surface representations allow for exact and efficient extraction of the encoded surface at arbitrary precision, as well as analytic derivation of differential geometric properties such as surface normal and curvature. These desirable properties, absent in their implicit counterparts, make them ideal for various applications in computer vision, graphics, and robotics. However, state-of-the-art works are limited in the topology they can effectively describe, the distortion they introduce when reconstructing complex surfaces, and their model efficiency. In this work, we present Minimal Neural Atlas, a novel atlas-based explicit neural surface representation. At its core is a fully learnable parametric domain, given by an implicit probabilistic occupancy field defined on an open square of the parametric space. In contrast, prior works generally predefine the parametric domain. The added flexibility enables charts to admit arbitrary topology and boundary. Thus, our representation can learn a minimal atlas of 3 charts with distortion-minimal parameterization for surfaces of arbitrary topology, including closed and open surfaces with arbitrary connected components. Our experiments support these hypotheses and show that our reconstructions are more accurate in overall geometry, owing to the separation of concerns between topology and geometry.
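The sampling pipeline implied by the abstract — draw UV samples on the open square, keep only those the learned occupancy field marks as inside the chart's parametric domain, then map the survivors to 3D via the chart's embedding — might be sketched as follows. This is a minimal illustrative stand-in, not the paper's networks: `occupancy` and `chart_map` here are toy analytic functions where the paper would use learned MLPs.

```python
import numpy as np

rng = np.random.default_rng(0)

def occupancy(uv):
    """Toy occupancy field: keep points inside a disk on the open square.
    In the paper this is a learned probabilistic occupancy field that
    carves out each chart's parametric domain (arbitrary topology/boundary)."""
    return np.linalg.norm(uv - 0.5, axis=-1) < 0.4

def chart_map(uv, offset=0.0):
    """Toy chart embedding: lift the 2D domain onto a paraboloid patch.
    In the paper this is a learned map from the parametric domain to R^3."""
    x, y = uv[:, 0], uv[:, 1]
    return np.stack([x, y, offset + (x - 0.5) ** 2 + (y - 0.5) ** 2], axis=-1)

def sample_surface(n_charts=3, n_per_chart=1024):
    """Sample a point cloud from a minimal atlas of n_charts charts."""
    points = []
    for c in range(n_charts):
        uv = rng.uniform(0.0, 1.0, size=(n_per_chart, 2))  # open square (0,1)^2
        uv = uv[occupancy(uv)]            # discard samples outside the domain
        points.append(chart_map(uv, offset=float(c)))
    return np.concatenate(points, axis=0)

pts = sample_surface()
```

Because occupancy is evaluated per sample, charts need not cover their full square: a chart can represent a domain with holes or disconnected pieces, which is what lets a fixed budget of 3 charts cover surfaces of arbitrary topology.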
Acknowledgements
This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2021-024), and the Tier 2 grant MOE-T2EP20120-0011 from the Singapore Ministry of Education.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Low, W.F., Lee, G.H. (2022). Minimal Neural Atlas: Parameterizing Complex Surfaces with Minimal Charts and Distortion. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13662. Springer, Cham. https://doi.org/10.1007/978-3-031-20086-1_27
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-20085-4
Online ISBN: 978-3-031-20086-1