
Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12363)

Abstract

We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views. Previous work on learning shape reconstruction from multiple views uses discrete representations such as point clouds or voxels, while continuous surface generation approaches lack multi-view consistency. We address these issues by designing neural networks capable of generating high-quality parametric 3D surfaces that are also consistent between views. Furthermore, the generated 3D surfaces preserve accurate correspondences between image pixels and 3D surface points, allowing us to lift texture information and reconstruct shapes with rich geometry and appearance. Our method is supervised and trained on a public dataset of shapes from common object categories. Quantitative results indicate that our method significantly outperforms previous work, while qualitative results demonstrate the high quality of our reconstructions.
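To make the core idea concrete, here is a minimal sketch (not the paper's implementation) of a parametric surface network in the spirit described above: a small MLP that maps 2D surface parameters (u, v), conditioned on a latent code from an image encoder, to 3D points. All names and dimensions (ParametricSurfaceMLP, latent_dim, etc.) are illustrative assumptions.

```python
# Minimal sketch of a continuous parametric surface model.
# This is NOT the authors' code; names and sizes are illustrative.
import torch
import torch.nn as nn

class ParametricSurfaceMLP(nn.Module):
    """Maps (u, v) in [0, 1]^2 plus an image latent code to a 3D point."""
    def __init__(self, latent_dim=256, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # (x, y, z) in a canonical object frame
        )

    def forward(self, uv, z):
        # uv: (B, N, 2) sampled surface parameters; z: (B, latent_dim)
        z = z.unsqueeze(1).expand(-1, uv.shape[1], -1)
        return self.net(torch.cat([uv, z], dim=-1))  # (B, N, 3)

model = ParametricSurfaceMLP()
uv = torch.rand(1, 1024, 2)   # random samples in the UV parameter domain
z = torch.randn(1, 256)       # stand-in for a learned image feature
points = model(uv, z)         # (1, 1024, 3) predicted surface points
```

Because the surface is a continuous function of (u, v), a regular UV grid yields arbitrarily dense samplings without a fixed mesh, and if each foreground pixel is assigned a UV coordinate, its RGB value can be carried onto the corresponding 3D point, which is one way to realize the texture lifting the abstract mentions.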

Keywords

3D reconstruction · Multi-view · Single-view · Parametrization

Notes

Acknowledgement

We thank the anonymous reviewers for their comments and suggestions. This work was supported by a Vannevar Bush Faculty Fellowship, NSF grant IIS-1763268, grants from the Stanford GRO Program, the SAIL-Toyota Center for AI Research, the AWS Machine Learning Awards Program, the UCL AI Center, and a gift from Adobe.

Supplementary material

Supplementary material 1 (PDF, 24.2 MB)

Supplementary material 2 (MP4, 70.5 MB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Zhejiang University, Hangzhou, China
  2. Stanford University, Stanford, USA
  3. Adobe Research, London, UK
  4. University College London, London, UK
