Abstract
In this paper, we introduce PCR-CG, a novel 3D point cloud registration module that explicitly embeds color signals into the geometry representation. Unlike previous state-of-the-art methods that rely on geometry alone, our module is designed to effectively correlate color and geometry for the point cloud registration task. Our key contribution is a 2D-3D cross-modality learning algorithm that embeds features learned from color signals into the geometry representation. With our 2D-3D projection module, pixel features in a square region centered at each correspondence perceived from images are correlated with the point cloud representation. In this way, overlap regions can be inferred not only from the point cloud but also from texture appearance. Adding color is non-trivial: we compare against a variety of baselines for adding color to 3D, such as exhaustively adding per-pixel features or RGB values in an implicit manner. We adopt Predator as our baseline method and incorporate our module into it. Our experiments show a significant improvement on the 3DLoMatch benchmark: with our module, we improve registration recall by \(6.5\%\) with 5000 sampled points over the baseline. To validate the effectiveness of 2D features in 3D, we ablate different 2D pre-trained networks and show a positive correlation between the quality of the pre-trained weights and task performance. Our study reveals a significant advantage of correlating explicit deep color features with the point cloud in the registration task.
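The core idea of the 2D-3D projection module, as described above, is to project 3D points into the image, gather 2D backbone features from a square window around each projected location, and attach them to the point representation. The following is a minimal numpy sketch of that projection-and-gather step; the function names, window size, and mean-pooling over the window are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def project_points(points, K, T):
    """Project Nx3 world points into the image plane.

    K: 3x3 camera intrinsics, T: 4x4 world-to-camera extrinsics.
    Returns Nx2 integer pixel coordinates and a validity mask
    (True for points in front of the camera).
    """
    ones = np.ones((points.shape[0], 1))
    cam = (T @ np.hstack([points, ones]).T).T[:, :3]   # world -> camera frame
    valid = cam[:, 2] > 1e-6                           # keep points with positive depth
    uv = (K @ cam.T).T
    uv = uv[:, :2] / np.maximum(uv[:, 2:3], 1e-6)      # perspective divide
    return uv.astype(int), valid

def gather_patch_features(feat_map, uv, valid, radius=2):
    """Average 2D features over a (2*radius+1)^2 window around each pixel.

    feat_map: HxWxC feature map from a 2D backbone; uv: Nx2 pixel coords.
    Points that project outside the image (or behind the camera) get zeros.
    """
    H, W, C = feat_map.shape
    out = np.zeros((uv.shape[0], C))
    for i, (u, v) in enumerate(uv):
        if not valid[i] or not (0 <= u < W and 0 <= v < H):
            continue
        u0, u1 = max(u - radius, 0), min(u + radius + 1, W)
        v0, v1 = max(v - radius, 0), min(v + radius + 1, H)
        out[i] = feat_map[v0:v1, u0:u1].reshape(-1, C).mean(axis=0)
    return out
```

The gathered pixel features would then be concatenated with the per-point geometric features (e.g. `np.concatenate([point_feats, pixel_feats], axis=1)`) before being fed to the registration backbone.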
References
Ao, S., Hu, Q., Yang, B., Markham, A., Guo, Y.: SpinNet: learning a general surface descriptor for 3D point cloud registration. In: CVPR, pp. 11753–11762 (2021)
Aoki, Y., Goforth, H., Srivatsan, R.A., Lucey, S.: PointNetLK: robust & efficient point cloud registration using PointNet. In: CVPR, pp. 7163–7172 (2019)
Armeni, I., et al.: 3D semantic parsing of large-scale indoor spaces. In: CVPR (2016)
Arun, K.S., Huang, T.S., Blostein, S.D.: Least-squares fitting of two 3-D point sets. TPAMI 5, 698–700 (1987)
Bai, X., et al.: PointDSC: robust point cloud registration using deep spatial consistency. In: CVPR, pp. 15859–15869 (2021)
Bai, X., Luo, Z., Zhou, L., Fu, H., Quan, L., Tai, C.L.: D3Feat: joint learning of dense detection and description of 3D local features. In: CVPR, pp. 6359–6367 (2020)
Balntas, V., Doumanoglou, A., Sahin, C., Sock, J., Kouskouridas, R., Kim, T.K.: Pose guided RGBD feature learning for 3D object pose estimation. In: CVPR, pp. 3856–3864 (2017)
Besl, P.J., McKay, N.D.: Method for registration of 3-D shapes. In: Sensor Fusion IV: Control Paradigms and Data Structures, vol. 1611, pp. 586–606. International Society for Optics and Photonics (1992)
Chang, A., et al.: Matterport3D: learning from RGB-D data in indoor environments. arXiv preprint arXiv:1709.06158 (2017)
Choy, C., Park, J., Koltun, V.: Fully convolutional geometric features. In: ICCV, pp. 8958–8966 (2019)
Dai, A., Chang, A.X., Savva, M., Halber, M., Funkhouser, T., Nießner, M.: ScanNet: richly-annotated 3D reconstructions of indoor scenes. In: CVPR (2017)
Dai, A., Nießner, M.: 3DMV: joint 3D-multi-view prediction for 3D semantic scene segmentation. In: ECCV, pp. 452–468 (2018)
Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: CVPR (2009)
El Banani, M., Gao, L., Johnson, J.: UnsupervisedR&R: unsupervised point cloud registration via differentiable rendering. In: CVPR, pp. 7129–7139 (2021)
El Banani, M., Johnson, J.: Bootstrap your own correspondences. In: ICCV, pp. 6433–6442 (2021)
Gojcic, Z., Zhou, C., Wegner, J.D., Wieser, A.: The perfect match: 3D point cloud matching with smoothed densities. In: CVPR, pp. 5545–5554 (2019)
He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. In: ICCV (2017)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)
Hou, J., Dai, A., Nießner, M.: 3D-SIS: 3D semantic instance segmentation of RGB-D scans. In: CVPR (2019)
Hou, J., Dai, A., Nießner, M.: RevealNet: seeing behind objects in RGB-D scans. In: CVPR (2020)
Hou, J., Xie, S., Graham, B., Dai, A., Nießner, M.: Pri3D: can 3D priors help 2D representation learning? In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5693–5702 (2021)
Hu, W., Zhao, H., Jiang, L., Jia, J., Wong, T.T.: Bidirectional projection network for cross dimension scene understanding. In: CVPR, pp. 14373–14382 (2021)
Huang, S., Gojcic, Z., Usvyatsov, M., Wieser, A., Schindler, K.: PREDATOR: registration of 3D point clouds with low overlap. In: CVPR, pp. 4267–4276 (2021)
Lahoud, J., Ghanem, B., Pollefeys, M., Oswald, M.R.: 3D instance segmentation via multi-task metric learning. In: ICCV (2019)
Liu, Y., Fan, Q., Zhang, S., Dong, H., Funkhouser, T., Yi, L.: Contrastive multimodal fusion with TupleInfoNCE. In: ICCV, pp. 754–763 (2021)
Liu, Y., Yi, L., Zhang, S., Fan, Q., Funkhouser, T., Dong, H.: P4contrast: contrastive learning with pairs of point-pixel pairs for RGB-D scene understanding. arXiv preprint arXiv:2012.13089 (2020)
Liu, Z., Qi, X., Fu, C.W.: 3D-to-2D distillation for indoor scene parsing. In: CVPR, pp. 4464–4474 (2021)
Lowe, D.G.: Distinctive image features from scale-invariant keypoints. IJCV 60(2), 91–110 (2004)
Niethammer, M., Kwitt, R., Vialard, F.X.: Metric learning for image registration. In: ICCV, pp. 8463–8472 (2019)
Park, J., Zhou, Q.Y., Koltun, V.: Colored point cloud registration revisited. In: ICCV, pp. 143–152 (2017)
Qi, C.R., Chen, X., Litany, O., Guibas, L.J.: ImVoteNet: boosting 3D object detection in point clouds with image votes. In: CVPR (2020)
Qi, C.R., Litany, O., He, K., Guibas, L.J.: Deep Hough voting for 3D object detection in point clouds. In: ICCV, pp. 9277–9286 (2019)
Qi, C.R., Su, H., Mo, K., Guibas, L.J.: PointNet: deep learning on point sets for 3D classification and segmentation. In: CVPR (2017)
Qin, Z., Yu, H., Wang, C., Guo, Y., Peng, Y., Xu, K.: Geometric transformer for fast and robust point cloud registration. In: CVPR, pp. 11143–11152 (2022)
Revaud, J., et al.: R2D2: repeatable and reliable detector and descriptor. arXiv preprint arXiv:1906.06195 (2019)
Rusinkiewicz, S., Levoy, M.: Efficient variants of the ICP algorithm. In: Proceedings Third International Conference on 3-D Digital Imaging and Modeling, pp. 145–152. IEEE (2001)
Sarlin, P.E., DeTone, D., Malisiewicz, T., Rabinovich, A.: SuperGlue: learning feature matching with graph neural networks. In: CVPR, pp. 4938–4947 (2020)
Schönberger, J.L., Frahm, J.M.: Structure-from-motion revisited. In: CVPR (2016)
Schönberger, J.L., Zheng, E., Frahm, J.-M., Pollefeys, M.: Pixelwise view selection for unstructured multi-view stereo. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9907, pp. 501–518. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46487-9_31
Song, S., Xiao, J.: Sliding shapes for 3D object detection in depth images. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8694, pp. 634–651. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10599-4_41
Srinivasan, P.P., Wang, T., Sreelal, A., Ramamoorthi, R., Ng, R.: Learning to synthesize a 4D RGBD light field from a single image. In: CVPR, pp. 2243–2251 (2017)
Stückler, J., Gutt, A., Behnke, S.: Combining the strengths of sparse interest point and dense image registration for RGB-D odometry. In: ISR/Robotik; International Symposium on Robotics, pp. 1–6. VDE (2014)
Thomas, H., Qi, C.R., Deschaud, J.E., Marcotegui, B., Goulette, F., Guibas, L.J.: KPConv: flexible and deformable convolution for point clouds. In: ICCV (2019)
Xu, C., et al.: Image2Point: 3D point-cloud understanding with 2D image pretrained models (2021)
Yu, H., Li, F., Saleh, M., Busam, B., Ilic, S.: CofiNet: reliable coarse-to-fine correspondences for robust point cloud registration. In: NeurIPS, vol. 34 (2021)
Zeng, A., Song, S., Nießner, M., Fisher, M., Xiao, J., Funkhouser, T.: 3DMatch: learning local geometric descriptors from RGB-D reconstructions. In: CVPR (2017)
Zhou, Q., Sattler, T., Leal-Taixé, L.: Patch2Pix: epipolar-guided pixel-level correspondences. In: CVPR, pp. 4669–4678 (2021)
Acknowledgments
This work is supported by the Joint Funds of Zhejiang NSFC (LTY22F020001) and Open Research Fund of State Key Laboratory of Transient Optics and Photonics. Yu Zhang is the corresponding author.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Zhang, Y., Yu, J., Huang, X., Zhou, W., Hou, J. (2022). PCR-CG: Point Cloud Registration via Deep Explicit Color and Geometry. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13670. Springer, Cham. https://doi.org/10.1007/978-3-031-20080-9_26
DOI: https://doi.org/10.1007/978-3-031-20080-9_26
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-20079-3
Online ISBN: 978-3-031-20080-9