Point Cloud Colorization Based on Densely Annotated 3D Shape Dataset

  • Xu Cao
  • Katashi Nagao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11295)

Abstract

This paper introduces DensePoint, a densely sampled and annotated point cloud dataset containing over 10,000 single objects across 16 categories, constructed by merging different kinds of information from two existing datasets. Each point cloud in DensePoint contains 40,000 points, and each point is associated with two kinds of information: an RGB value and a part annotation. In addition, we propose a method for point cloud colorization based on Generative Adversarial Networks (GANs). The network generates colours for the point cloud of a single object given only the point cloud itself. Experiments on DensePoint show that the generated colours exhibit clear boundaries between different parts of an object, suggesting that the proposed network is able to generate reasonably good colours. Our dataset is publicly available on the project page (http://rwdc.nagao.nuie.nagoya-u.ac.jp/DensePoint).
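The paper describes the full architecture; purely as an illustration of the idea summarized above, a per-point colour generator can be sketched as a PointNet-style network that takes the 40,000 xyz coordinates of a DensePoint object and predicts an RGB value for every point. The sketch below is a minimal PyTorch example; all layer sizes, module names, and the use of a noise vector are our own illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (PyTorch) of a PointNet-style colour generator.
# input  : (B, N, 3) point coordinates  (N = 40,000 in DensePoint)
# output : (B, N, 3) RGB values in [0, 1]
# Layer sizes and names are illustrative, not the authors' architecture.
import torch
import torch.nn as nn

class ColorGenerator(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.latent_dim = latent_dim
        # Shared per-point MLP (implemented as 1x1 convolutions over points).
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 256, 1), nn.ReLU(),
        )
        # Colour head: local feature + global shape feature + noise -> RGB.
        self.color_mlp = nn.Sequential(
            nn.Conv1d(256 + 256 + latent_dim, 256, 1), nn.ReLU(),
            nn.Conv1d(256, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 3, 1), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) -> (B, 3, N) for 1D convolutions.
        x = xyz.transpose(1, 2)
        local_feat = self.point_mlp(x)                             # (B, 256, N)
        global_feat = local_feat.max(dim=2, keepdim=True).values   # (B, 256, 1)
        n_points = x.shape[2]
        z = torch.randn(x.shape[0], self.latent_dim, 1, device=x.device)
        feat = torch.cat(
            [local_feat,
             global_feat.expand(-1, -1, n_points),
             z.expand(-1, -1, n_points)], dim=1)
        rgb = self.color_mlp(feat)                                 # (B, 3, N)
        return rgb.transpose(1, 2)                                 # (B, N, 3)

# Example: colourize two point clouds (random stand-ins for DensePoint objects).
if __name__ == "__main__":
    points = torch.rand(2, 40000, 3)
    colors = ColorGenerator()(points)
    print(colors.shape)  # torch.Size([2, 40000, 3])
```

In a GAN setup such as the one the abstract describes, a generator of this kind would be trained against a discriminator that judges whether a coloured point cloud looks realistic; the sketch only covers the generator side.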

Keywords

Point cloud dataset · Colorization · Generative adversarial networks


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Intelligent Systems, Graduate School of Informatics, Nagoya University, Nagoya, Japan
