
PointALCR: adversarial latent GAN and contrastive regularization for point cloud completion

  • Original article
  • Published in: The Visual Computer

Abstract

The development of LiDAR sensors and depth cameras has made it easy to capture point cloud data of real-world objects. However, the resulting point clouds often suffer from sparsity and loss of detail. Methods developed so far typically reconstruct incomplete point clouds with either GAN-based or autoencoder-based networks alone. In this paper, we propose PointALCR, which combines GAN-based and autoencoder-based frameworks with contrastive regularization to improve both the representative and generative abilities for point cloud completion. A module named Adversarial Latent GAN learns a latent space of the input/target point cloud representations and extends the generative and discriminative abilities of the GAN training procedure. Contrastive regularization encourages the reconstructed objects to be close to the ground truth and far from the incomplete input in feature space. Experimental results demonstrate that PointALCR outperforms previous methods on challenging point cloud completion tasks.
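The contrastive regularization described in the abstract (pulling the reconstruction's feature toward the ground truth while pushing it away from the incomplete input) can be sketched as an InfoNCE-style loss with a single negative. The function names, the use of cosine similarity, and the temperature `tau` below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_regularization(f_rec, f_gt, f_in, tau=0.1):
    """InfoNCE-style regularizer with a single negative: encourage the
    reconstructed feature f_rec to be close to the ground-truth feature
    f_gt and far from the incomplete input's feature f_in."""
    pos = np.exp(cosine_sim(f_rec, f_gt) / tau)  # positive pair score
    neg = np.exp(cosine_sim(f_rec, f_in) / tau)  # negative pair score
    return -np.log(pos / (pos + neg))
```

Under this sketch, the loss is small when the reconstruction's feature aligns with the ground truth and large when it stays close to the incomplete input, which matches the behavior the abstract attributes to the regularizer.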



Acknowledgements

This work was supported by the National Key Research and Development Program of China (2019YFC1521104), National Natural Science Foundation of China (72192821, 61972157, 61872241), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), Shanghai Science and Technology Commission (21511101200, 22YF1420300).

Author information

Correspondence to Bin Sheng or Lizhuang Ma.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Liu, Q., Zhao, J., Cheng, C. et al. PointALCR: adversarial latent GAN and contrastive regularization for point cloud completion. Vis Comput 38, 3341–3349 (2022). https://doi.org/10.1007/s00371-022-02550-x
