
PFRNet: 3-D partial-to-full point cloud registration network for arbitrary pose matching

  • Original article
  • Published:
The Visual Computer

Abstract

3-D point cloud registration algorithms have been widely studied and effectively applied to object pose estimation. Because of the limited field of view of a 3-D camera, only a partial point cloud of the observed object can be captured in each frame. The registration problem between partial and full point clouds therefore remains challenging due to missing data and arbitrary pose matching. This paper proposes a novel partial-to-full registration network (PFRNet) that establishes point-wise correspondences under a full range of pose uncertainty. Specifically, an effective descriptor is developed that generates distance histograms systematically capturing the geometric information of each point, easing training when a full range of pose uncertainty is considered. A compensation network is then proposed to adjust the histogram descriptor extracted from the partial point cloud by learning the differences caused by missing data. Next, the two descriptors are fed into a shared local feature extractor to generate per-point learned features. In addition, to establish corresponding point pairs, a deep network is applied to estimate the outlier and annealing parameters. Finally, the proposed architecture adopts a differentiable singular value decomposition module to output the rigid transformation. Experimental results show that PFRNet achieves high precision, outperforming baseline methods while maintaining fast estimation on both the synthetic ModelNet40 and the realistic S3DIS datasets.
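To make two of the building blocks above concrete, the sketch below illustrates, in generic form, a per-point distance-histogram descriptor and a weighted, differentiable SVD (Kabsch) solver that recovers a rigid transformation from soft correspondences. This is a minimal PyTorch sketch under assumed interfaces; the function names, bin count, and weighting scheme are illustrative choices, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' code): a per-point distance-histogram
# descriptor and a weighted, differentiable SVD (Kabsch) alignment step, in the
# spirit of the components described in the abstract. Function names, the bin
# count, and the weighting scheme are assumptions made for illustration.
import torch


def distance_histograms(points: torch.Tensor, num_bins: int = 32) -> torch.Tensor:
    """For each point, histogram its distances to all other points.

    points: (N, 3) tensor. Returns (N, num_bins) descriptors normalized to sum to 1;
    a rotation-invariant summary of this kind is one way to ease training under
    arbitrary poses.
    """
    dists = torch.cdist(points, points)                        # (N, N) pairwise distances
    max_d = dists.max().clamp(min=1e-8)
    edges = torch.linspace(0.0, float(max_d), num_bins + 1, device=points.device)
    bin_idx = torch.bucketize(dists, edges[1:-1])               # assign each distance to a bin
    hist = torch.zeros(points.shape[0], num_bins, device=points.device)
    hist.scatter_add_(1, bin_idx, torch.ones_like(dists))       # count distances per bin
    return hist / hist.sum(dim=1, keepdim=True)


def weighted_svd_transform(src: torch.Tensor, tgt: torch.Tensor, weights: torch.Tensor):
    """Closed-form rigid transform (R, t) aligning src to tgt with per-pair weights.

    src, tgt: (N, 3) corresponding points; weights: (N,) non-negative soft-match weights.
    """
    w = (weights / weights.sum().clamp(min=1e-8)).unsqueeze(1)  # (N, 1) normalized weights
    src_c = (w * src).sum(dim=0)                                # weighted centroids
    tgt_c = (w * tgt).sum(dim=0)
    H = ((src - src_c) * w).t() @ (tgt - tgt_c)                 # 3x3 weighted cross-covariance
    U, _, Vt = torch.linalg.svd(H)
    # Reflection correction keeps the result a proper rotation (det(R) = +1).
    d = torch.sign(torch.linalg.det(Vt.t() @ U.t()))
    S = torch.diag(torch.stack([torch.ones_like(d), torch.ones_like(d), d]))
    R = Vt.t() @ S @ U.t()
    t = tgt_c - R @ src_c
    return R, t
```

Because torch.linalg.svd is differentiable, a loss on the predicted (R, t) can back-propagate into whatever network produces the per-pair weights, which is the property that lets an SVD head sit at the end of a learned registration pipeline.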



Funding

This research was supported by the Ministry of Science and Technology, Taiwan, R.O.C., under Grant MOST 110-2221-E-027-117-MY3.

Author information


Corresponding author

Correspondence to Wen-Chung Chang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Chang, WC., Pham, VT. PFRNet: 3-D partial-to-full point cloud registration network for arbitrary pose matching. Vis Comput (2023). https://doi.org/10.1007/s00371-023-03209-x


