
Meta-sampler: Almost-Universal yet Task-Oriented Sampling for Point Clouds

  • Conference paper

Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13662)

Abstract

Sampling is a key operation in point-cloud tasks, acting to increase computational efficiency and tractability by discarding redundant points. Universal sampling algorithms (e.g., Farthest Point Sampling) work without modification across different tasks, models, and datasets, but by their very nature are agnostic about the downstream task/model. As such, they have no implicit knowledge about which points would be best to keep and which to reject. Recent work has shown how task-specific point cloud sampling (e.g., SampleNet) can be used to outperform traditional sampling approaches by learning which points are more informative. However, these learnable samplers face two inherent issues: i) overfitting to a model rather than a task, and ii) requiring training of the sampling network from scratch, in addition to the task network, somewhat countering the original objective of down-sampling to increase efficiency. In this work, we propose an almost-universal sampler, in our quest for a sampler that can learn to preserve the most useful points for a particular task, yet be inexpensive to adapt to different tasks, models, or datasets. We first demonstrate how training over multiple models for the same task (e.g., shape reconstruction) significantly outperforms the vanilla SampleNet in terms of accuracy by not overfitting the sampling network to a particular task network. Second, we show how we can train an almost-universal meta-sampler across multiple tasks. This meta-sampler can then be rapidly fine-tuned when applied to different datasets, networks, or even different tasks, thus amortizing the initial cost of training. Code is available at https://github.com/ttchengab/MetaSampler.
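For context on the universal baseline the abstract contrasts against, the following is a minimal, illustrative pure-Python sketch of greedy Farthest Point Sampling; it is not the paper's implementation, and the function name and tuple-based point representation are choices made here for clarity. The algorithm starts from one point and repeatedly adds the point farthest from the already-chosen set, giving good spatial coverage with no knowledge of the downstream task.

```python
import math
import random

def farthest_point_sampling(points, k, seed=0):
    """Greedy FPS over a list of (x, y, z) tuples: start from a random
    point, then repeatedly add the point farthest from the chosen set."""
    rng = random.Random(seed)
    chosen = [rng.randrange(len(points))]
    # nearest[i] = distance from points[i] to its closest chosen point
    nearest = [math.dist(p, points[chosen[0]]) for p in points]
    for _ in range(k - 1):
        # Pick the point whose nearest chosen point is farthest away.
        nxt = max(range(len(points)), key=nearest.__getitem__)
        chosen.append(nxt)
        # Update each point's distance to its nearest chosen point.
        for i, p in enumerate(points):
            d = math.dist(p, points[nxt])
            if d < nearest[i]:
                nearest[i] = d
    return chosen
```

Because the selection rule depends only on geometry, the same routine applies unchanged to any task, model, or dataset; the paper's argument is precisely that this task-agnosticism leaves accuracy on the table, motivating a learned sampler that remains cheap to adapt.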



Acknowledgements

This work was partially supported by Amazon AWS, Oxford Singapore Human-Machine Collaboration Programme, and EPSRC ACE-OPS grant.

Author information


Corresponding author

Correspondence to Qingyong Hu.


Electronic Supplementary Material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 999 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Cheng, TY., Hu, Q., Xie, Q., Trigoni, N., Markham, A. (2022). Meta-sampler: Almost-Universal yet Task-Oriented Sampling for Point Clouds. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13662. Springer, Cham. https://doi.org/10.1007/978-3-031-20086-1_40


  • DOI: https://doi.org/10.1007/978-3-031-20086-1_40

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20085-4

  • Online ISBN: 978-3-031-20086-1

  • eBook Packages: Computer Science, Computer Science (R0)
