
Partial point cloud registration algorithm based on deep learning and non-corresponding point estimation

  • Original article
  • Published in: The Visual Computer (2023)

Abstract

To address the limitations of global-feature-based deep learning point cloud registration algorithms in partial point cloud registration, this paper proposes NcPE-PNLK, a partial point cloud registration algorithm that combines global features with correspondence estimation. NcPE-PNLK introduces a feature interaction module that exchanges information between the two point clouds during feature extraction, improving the reliability of the extracted features. In addition, a non-corresponding point estimation module predicts the correspondence status of each point, reducing the influence of non-overlapping regions on the global feature and mitigating the performance degradation that global-feature registration algorithms suffer on partial point clouds. We evaluate the registration performance of NcPE-PNLK on a synthetic scene dataset and a real dataset. The experimental results show that NcPE-PNLK effectively reduces the impact of non-overlapping regions during registration and achieves better performance than global-feature-based registration algorithms. Moreover, unlike correspondence-based registration algorithms, NcPE-PNLK does not need to compute correspondences precisely, so it achieves high-precision partial point cloud registration while maintaining efficiency.
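To illustrate the core idea the abstract describes, the minimal sketch below shows how downweighting points predicted to lie in non-overlapping regions keeps them from biasing a rigid alignment. This is not the authors' implementation (NcPE-PNLK aligns learned global features in a PointNetLK-style iteration); here the hypothetical weight vector `w` simply stands in for the output of a non-corresponding point estimation step, feeding a standard weighted Kabsch/Procrustes solve.

```python
import numpy as np

def weighted_rigid_align(src, tgt, w):
    """Weighted Kabsch: find R, t minimizing sum_i w_i ||R @ src_i + t - tgt_i||^2.

    src, tgt: (N, 3) arrays of putatively corresponding points.
    w:        (N,) nonnegative weights; points predicted as non-corresponding
              (e.g. in non-overlapping regions) get weights near zero.
    """
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)          # weighted centroids
    mu_t = (w[:, None] * tgt).sum(axis=0)
    # Weighted cross-covariance of the centered point sets
    H = (src - mu_s).T @ (np.diag(w) @ (tgt - mu_t))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_s
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.normal(size=(100, 3))
    theta = 0.7
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    t_true = np.array([0.2, -0.1, 0.5])
    tgt = src @ R_true.T + t_true
    # Corrupt the last 30 points to mimic a non-overlapping region
    tgt[70:] += rng.normal(scale=5.0, size=(30, 3))
    # Hypothetical mask from a non-corresponding point estimator
    w = np.ones(100)
    w[70:] = 1e-8
    R, t = weighted_rigid_align(src, tgt, w)
    print(np.allclose(R, R_true, atol=1e-5), np.allclose(t, t_true, atol=1e-5))
```

With a uniform mask the corrupted points would pull the estimate far off; with the near-zero weights the clean 70-point subset recovers the ground-truth transform, which is the same effect the abstract attributes to suppressing non-overlapping regions before computing the global feature.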



Data availability

The experimental data that support the findings of this study are available from the Princeton ModelNet project (https://modelnet.cs.princeton.edu) and the Stanford 3D Scanning Repository (https://graphics.stanford.edu/data/3Dscanrep).


Funding

This study was supported by the Science and Technology Project of Hebei Education Department and Tianjin Research Program of Application Foundation and Advanced Technology of China under Grant numbers ZD2018045 and 15JCYBJC17100.

Author information


Contributions

SW developed the idea, implemented the code, and wrote the first draft of the manuscript. YZ and YC assisted with the experiment. All authors commented on previous versions of the manuscript and approved the final manuscript.

Corresponding author

Correspondence to Yanju Guo.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, S., Kang, Z., Chen, L. et al. Partial point cloud registration algorithm based on deep learning and non-corresponding point estimation. Vis Comput (2023). https://doi.org/10.1007/s00371-023-03103-6

