3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction

  • Christopher B. Choy
  • Danfei Xu
  • JunYoung Gwak
  • Kevin Chen
  • Silvio Savarese
Conference paper

DOI: 10.1007/978-3-319-46484-8_38

Volume 9912 of the book series Lecture Notes in Computer Science (LNCS)
Cite this paper as:
Choy C.B., Xu D., Gwak J., Chen K., Savarese S. (2016) 3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction. In: Leibe B., Matas J., Sebe N., Welling M. (eds) Computer Vision – ECCV 2016. ECCV 2016. Lecture Notes in Computer Science, vol 9912. Springer, Cham

Abstract

Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2). The network learns a mapping from images of objects to their underlying 3D shapes from a large collection of synthetic data [13]. Our network takes in one or more images of an object instance from arbitrary viewpoints and outputs a reconstruction of the object in the form of a 3D occupancy grid. Unlike most of the previous works, our network does not require any image annotations or object class labels for training or testing. Our extensive experimental analysis shows that our reconstruction framework (i) outperforms the state-of-the-art methods for single view reconstruction, and (ii) enables the 3D reconstruction of objects in situations when traditional SFM/SLAM methods fail (because of lack of texture and/or wide baseline).
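The pipeline the abstract describes — encode each view, fuse the per-view features recurrently, then decode the hidden state into a voxel occupancy grid — can be sketched with NumPy. This is a minimal illustrative sketch, not the authors' implementation: the real 3D-R2N2 uses a learned 2D-CNN encoder, a 3D convolutional LSTM/GRU, and a 3D deconvolutional decoder, and all dimensions, weight matrices, and the single-vector GRU update below are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions; the paper uses e.g. a 32^3 grid).
FEAT = 32   # encoder feature length per view
GRID = 4    # occupancy grid resolution per axis

# Random matrices stand in for learned parameters.
W_enc = rng.standard_normal((FEAT, 127 * 127 * 3)) * 0.01
W_z = rng.standard_normal((GRID**3, FEAT)) * 0.1
U_z = rng.standard_normal((GRID**3, GRID**3)) * 0.01
W_h = rng.standard_normal((GRID**3, FEAT)) * 0.1
U_h = rng.standard_normal((GRID**3, GRID**3)) * 0.01

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(image):
    """Encoder stub: flatten one view and project it to a feature vector."""
    return np.tanh(W_enc @ image.ravel())

def gru_step(h, feat):
    """GRU-style update: fuse one view's features into the hidden state."""
    z = sigmoid(W_z @ feat + U_z @ h)       # update gate
    h_cand = np.tanh(W_h @ feat + U_h @ h)  # candidate state
    return (1.0 - z) * h + z * h_cand

def reconstruct(views):
    """Fuse any number of views recurrently, then threshold to occupancy."""
    h = np.zeros(GRID**3)
    for img in views:                 # views may come from arbitrary viewpoints
        h = gru_step(h, encode(img))
    probs = sigmoid(h)                # decoder stub: per-voxel probability
    return (probs > 0.5).reshape(GRID, GRID, GRID)

views = [rng.random((127, 127, 3)) for _ in range(3)]  # three arbitrary views
grid = reconstruct(views)
print(grid.shape, grid.dtype)  # (4, 4, 4) bool
```

The recurrent fusion is what makes the framework "unified": a single image yields one GRU step, while additional views simply refine the same hidden state, with the gates deciding how much each new view overwrites.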

Keywords

  • Multi-view reconstruction
  • Recurrent neural network

Supplementary material

Supplementary material 1: 419983_1_En_38_MOESM1_ESM.pdf (PDF, 10.4 MB)

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Christopher B. Choy (1)
  • Danfei Xu (1)
  • JunYoung Gwak (1)
  • Kevin Chen (1)
  • Silvio Savarese (1)

  1. Stanford University, Stanford, USA