
WeLSA: Learning to Predict 6D Pose from Weakly Labeled Data Using Shape Alignment

Conference paper in Computer Vision – ECCV 2022 (ECCV 2022)

Abstract

Object pose estimation is a crucial task in computer vision and augmented reality. One of its key challenges is the difficulty of annotating real training data and the frequent lack of textured CAD models. Pipelines that require neither CAD models nor many labeled images are therefore desirable. We propose a weakly supervised approach to object pose estimation from RGB-D data whose training sets consist of very few images with pose annotations together with weakly labeled images that carry ground-truth segmentation masks but no pose labels. We achieve this by learning to annotate the weakly labeled training data through shape alignment while simultaneously training a pose prediction network. Point clouds are aligned using structural and rotation-invariant feature-based losses. We further learn an implicit shape representation, which allows the method to work without a known CAD model and also contributes to pose alignment and pose refinement during training on weakly labeled images. The experimental evaluation shows that our method achieves state-of-the-art results on LineMOD, Occlusion-LineMOD and T-LESS despite being trained with relative poses and on only a fraction of the labeled data used by other methods. We also achieve results comparable to state-of-the-art RGB-D based pose estimation approaches even when the amount of unlabeled training data is further reduced. In addition, our method works even when relative camera poses, which are typically easier to obtain, are given instead of object pose annotations.
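As an illustrative sketch only (not the authors' pipeline, which aligns point clouds via learned, rotation-invariant feature losses), the rigid least-squares core underlying such shape alignment is the classic Kabsch solution. The snippet below assumes known point correspondences, an assumption the paper's method does not need:

```python
import numpy as np

def kabsch_align(P, Q):
    """Find rotation R and translation t minimizing ||(P @ R.T + t) - Q||.

    P, Q: (N, 3) arrays of corresponding 3D points (hypothetical inputs;
    correspondences are assumed given here, unlike in feature-based alignment).
    """
    p_mean = P.mean(axis=0)
    q_mean = Q.mean(axis=0)
    # 3x3 cross-covariance of the centered point sets
    H = (P - p_mean).T @ (Q - q_mean)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

Given two clouds related by an unknown rigid transform, this recovers that transform in closed form; iterative or learned schemes typically wrap such a solver inside a correspondence-estimation loop.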



Acknowledgements

This work was partially funded by the German BMWK under grant GEMIMEG-II-01MT20001A.

Author information

Correspondence to Shishir Reddy Vutukur.


Electronic supplementary material

Below are the links to the electronic supplementary material.

Supplementary material 1 (pdf 1860 KB)

Supplementary material 2 (mp4 10406 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Vutukur, S.R., Shugurov, I., Busam, B., Hutter, A., Ilic, S. (2022). WeLSA: Learning to Predict 6D Pose from Weakly Labeled Data Using Shape Alignment. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13668. Springer, Cham. https://doi.org/10.1007/978-3-031-20074-8_37

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20073-1

  • Online ISBN: 978-3-031-20074-8

  • eBook Packages: Computer Science; Computer Science (R0)
