Abstract
Object pose estimation is a crucial task in computer vision and augmented reality. Two of its key challenges are the difficulty of annotating real training data and the frequent lack of textured CAD models. Pipelines that do not require CAD models and that can be trained with few labeled images are therefore desirable. We propose a weakly-supervised approach for object pose estimation from RGB-D data whose training sets consist of very few images with pose annotations, complemented by weakly-labeled images that carry only ground-truth segmentation masks and no pose labels. We achieve this by learning to annotate the weakly-labeled training data through shape alignment while simultaneously training a pose prediction network. Point cloud alignment is performed using structure- and rotation-invariant feature-based losses. We further learn an implicit shape representation, which allows the method to work without a known CAD model and also contributes to pose alignment and pose refinement during training on weakly-labeled images. The experimental evaluation shows that our method achieves state-of-the-art results on LineMOD, Occlusion LineMOD and T-LESS despite being trained using relative poses and on only a fraction of the labeled data used by other methods. We also achieve results comparable to state-of-the-art RGB-D pose estimation approaches even when further reducing the amount of unlabeled training data. In addition, our method works when relative camera poses, which are typically easier to obtain, are given instead of object pose annotations.
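The shape alignment at the core of the approach builds on classical least-squares rigid registration between corresponding point sets, as in Arun et al. and Kabsch (see references). As a minimal illustrative sketch (not the paper's full pipeline, which adds rotation-invariant feature losses and an implicit shape model), the closed-form alignment of two corresponding point clouds can be written as:

```python
import numpy as np

def kabsch_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding 3D points.
    Returns R (3x3 rotation) and t (3,) such that dst ~= src @ R.T + t.
    """
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # 3x3 cross-covariance of the centered point sets
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

In the paper's setting, correspondences between an observed point cloud and the learned shape representation are not given a priori, which is why feature-based losses are needed on top of such a rigid solver.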
Acknowledgements
This work was partially funded by the German BMWK under grant GEMIMEG-II-01MT20001A.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Vutukur, S.R., Shugurov, I., Busam, B., Hutter, A., Ilic, S. (2022). WeLSA: Learning to Predict 6D Pose from Weakly Labeled Data Using Shape Alignment. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13668. Springer, Cham. https://doi.org/10.1007/978-3-031-20074-8_37
Print ISBN: 978-3-031-20073-1
Online ISBN: 978-3-031-20074-8