
Multi-feature fusion point cloud completion network


Abstract

In the real world, 3D point cloud data are typically acquired by LiDAR scanning. Because real-world objects occlude one another, the point clouds captured by LiDAR are often partially missing. In this paper, we improve PF-Net, a learning-based point cloud completion network, so that it captures point cloud features more effectively. Specifically, our improved network follows an encoder-decoder-discriminator structure and takes the incomplete point cloud directly as input, without additional preprocessing. In the encoder, we use an ALL-MLP (ALL-Multi-Layer Perceptron) method to extract features from the point cloud: it fuses the features produced by each convolution during feature extraction and passes the combined representation to the decoder. The decoder generates a prediction for the missing part of the point cloud, and the discriminator feeds its judgment of the generated result back to the decoder, yielding more realistic completions. Our experiments show that the improved network achieves better accuracy than state-of-the-art methods in most categories and generates relatively complete point clouds, fulfilling the goal of completing missing point cloud data.
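
To make the fusion idea concrete, below is a minimal PyTorch sketch of a PointNet-style encoder in the spirit of the ALL-MLP described above: each shared-MLP (1x1 convolution) stage produces an intermediate feature map, every stage's output is max-pooled into a global descriptor, and the descriptors are concatenated into the latent code handed to the decoder. The layer widths, input size, and the name FusionEncoder are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class FusionEncoder(nn.Module):
    def __init__(self, widths=(64, 128, 256, 512)):
        super().__init__()
        self.stages = nn.ModuleList()
        in_ch = 3  # x, y, z coordinates per point
        for w in widths:
            # A Conv1d with kernel_size=1 acts as a shared MLP applied to every point.
            self.stages.append(nn.Sequential(
                nn.Conv1d(in_ch, w, kernel_size=1),
                nn.BatchNorm1d(w),
                nn.ReLU(inplace=True),
            ))
            in_ch = w

    def forward(self, points):
        # points: (batch, 3, num_points)
        feats = []
        x = points
        for stage in self.stages:
            x = stage(x)
            # Max pooling over points gives a permutation-invariant
            # summary of this stage's features.
            feats.append(x.max(dim=2).values)
        # Fuse every stage's summary (64 + 128 + 256 + 512 = 960 dims here)
        # into the single latent vector passed to the decoder.
        return torch.cat(feats, dim=1)

encoder = FusionEncoder()
partial_cloud = torch.rand(8, 3, 1024)  # a batch of 8 incomplete point clouds
latent = encoder(partial_cloud)         # shape: (8, 960)

Concatenating low- and high-level summaries this way preserves fine geometric cues that pooling only the final layer would discard, which is the motivation the abstract gives for fusing the features from every convolution.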



References

  1. Achlioptas P., Diamanti O., Mitliagkas I., and Guibas L. J., “Learning representations and generative models for 3D point clouds,” ICML, 2018.

  2. Cai J., Han H., Cui J., Chen J., Liu L., and Zhou S. K., “Semi-Supervised Natural Face De-Occlusion,” IEEE Trans. Inf. Forensics Secur, 2021.

  3. Dai A., Qi C. R., and Nießner M., "Shape completion using 3D-Encoder-Predictor CNNs and shape synthesis," CVPR, 2017.

  4. Dong J., Zhang L., Zhang H., and Liu W., "Occlusion-Aware GAN for Face De-Occlusion in the Wild," ICME, 2020.

  5. Fan H., Su H., and Guibas L., "A point set generation network for 3D object reconstruction from a single image," CVPR, 2017.

  6. Gadelha M., Wang R., and Maji S., "Multiresolution tree networks for 3D point cloud processing," ECCV, 2018.

  7. Goodfellow I. J., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A., and Bengio Y., "Generative Adversarial Networks," arXiv preprint arXiv:1406.2661, 2014.

  8. Han X., Li Z., Huang H., Kalogerakis E., and Yu Y., "High-resolution shape completion using deep neural networks for global structure and local geometry inference," arXiv preprint arXiv:1709.07599, 2017.

  9. Huang Z., Yu Y., Xu J., Ni F., and Le X., "PF-Net: Point Fractal Network for 3D Point Cloud Completion," CVPR, 2020.

  10. Lin C., Kong C., and Lucey S., "Learning efficient point cloud generation for dense 3D object reconstruction," AAAI, 2018.

  11. Lu H., Zhang M., Xu X., Li Y., and Shen H. T., "Deep Fuzzy Hashing Network for Efficient Image Retrieval," IEEE Trans. Fuzzy Syst., 2021.

  12. Lu H., Zhang Y., Li Y., Jiang C., and Abbas H., "User-Oriented Virtual Mobile Network Resource Management for Vehicle Communications," IEEE Transactions on Intelligent Transportation Systems, 2020.

  13. Lu H., Tang Y., and Sun Y., "DRRS-BC: Decentralized Routing Registration System Based on Blockchain," IEEE/CAA Journal of Automatica Sinica, 2020.

  14. Lu H., Yang R., Deng Z., Zhang Y., Gao G., and Lan R., "Chinese image captioning via fuzzy attention-based DenseNet-BiLSTM," ACM Transactions on Multimedia Computing, Communications, and Applications, 2021.

  15. Ma C., Li X., Li Y., and Tian X., "Visual information processing for deep-sea visual monitoring system," Cognitive Robotics, 2021.

  16. Martínez A. M., "Recognizing imprecisely localized, partially occluded, and expression variant faces from a single sample per class," IEEE Trans. Pattern Anal. Mach. Intell., 2002.

  17. Oh H. J., Lee K. M., and Lee S. U., "Occlusion invariant face recognition using selective local non-negative matrix factorization basis images," Image Vis. Comput., 2008.

  18. Qi C. R., Yi L., Su H., and Guibas L. J., "PointNet++: Deep hierarchical feature learning on point sets in a metric space," NeurIPS, 2017.

  19. Qi C. R., Su H., Mo K., and Guibas L. J., "PointNet: Deep learning on point sets for 3D classification and segmentation," CVPR, 2017.

  20. Sarmad M., Lee H., and Kim Y. M., "RL-GAN-Net: A reinforcement learning agent controlled GAN network for real-time point cloud shape completion," CVPR, 2019.

  21. Sharma A., Grau O., and Fritz M., "VConv-DAE: Deep volumetric shape learning without object labels," ECCV, 2016.

  22. Smith E. and Meger D., "Improved adversarial systems for 3D object generation and reconstruction," arXiv preprint arXiv:1707.09557, 2017.

  23. Stutz D. and Geiger A., "Learning 3D shape completion from laser scan data with weak supervision," CVPR, 2018.

  24. Sun J. and Li Y., "Multi-feature fusion network for road scene semantic segmentation," Computers & Electrical Engineering, 2021.

  25. Thanh Nguyen D., Hua B. S., Tran K., Pham Q. H., and Yeung S. K., "A field model for repairing 3D shapes," CVPR, 2016.

  26. Varley J., DeChant C., Richardson A., Ruales J., and Allen P., "Shape completion enabled robotic grasping," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017.

  27. Wang W., Huang Q., You S., Yang C., and Neumann U., "Shape inpainting using 3D generative adversarial network and recurrent convolutional networks," ICCV, 2017.

  28. Wang P., Wang D., Zhang X., Li X., Peng T., Lu H., and Tian X., "Numerical and Experimental Study on the Maneuverability of an Active Propeller Control Based Wave Glider," Applied Ocean Research, 2020.

  29. Yang Y., Feng C., Shen Y., and Tian D., "FoldingNet: Point cloud auto-encoder via deep grid deformation," CVPR, 2018.

  30. Yang B., Wen H., Wang S., Clark R., Markham A., and Trigoni N., "3D object reconstruction from a single depth view with adversarial learning," ICCV Workshops, 2017.

  31. Yang B., Rosa S., Markham A., Trigoni N., and Wen H., "3D object dense reconstruction from a single depth view," arXiv preprint arXiv:1802.00411, 2018.

  32. Yi L., Kim V. G., Ceylan D., Shen I., Yan M., Su H., Lu C., Huang Q., Sheffer A., and Guibas L. J., "A scalable active framework for region annotation in 3D shape collections," ACM Trans. on Graphics, 2016.

  33. Yuan X. and Park I. K., "Face De-Occlusion Using 3D Morphable Model and Generative Adversarial Network," ICCV, 2019.

  34. Yuan W., Khot T., Held D., Mertz C., and Hebert M., "PCN: Point completion network," 3DV, 2018.

  35. Zhan X., Pan X., Dai B., Liu Z., Lin D., and Loy C. C., "Self-Supervised Scene De-Occlusion," CVPR, 2020.

  36. Zhao Y., Birdal T., Deng H., and Tombari F., "3D point capsule networks," CVPR, 2019.

  37. Zheng Q., Zhu J., Tang H., Liu X., Li Z., and Lu H., "Generalized Label Enhancement with Sample Correlations," IEEE Transactions on Knowledge and Data Engineering, 2021.

  38. Zhou Q., Wang Y., Liu J., and Jin X., "An open-source project for real-time image semantic segmentation," Sci. China Inf. Sci., 2019.

  39. Zhou Q., Wang Y., Fan Y., Wu X., Zhang S., and Kang B., "AGLNet: Towards real-time semantic segmentation of self-driving images via attention-guided lightweight network," Appl. Soft Computing, 2020.


Author information

Correspondence to Xiu Chen or Yujie Li.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article belongs to the Topical Collection: Special Issue on Synthetic Media on the Web. Guest Editors: Huimin Lu, Xing Xu, Jože Guna, and Gautam Srivastava.


About this article


Cite this article

Chen, X., Li, Y. & Li, Y. Multi-feature fusion point cloud completion network. World Wide Web 25, 1551–1564 (2022). https://doi.org/10.1007/s11280-021-00938-8


