
Deep active sampling with self-supervised learning

  • Letter
  • Published in Frontiers of Computer Science



Author information

Corresponding author

Correspondence to Haochen Shi.

About this article


Cite this article

Shi, H., Zhou, H. Deep active sampling with self-supervised learning. Front. Comput. Sci. 17, 174323 (2023). https://doi.org/10.1007/s11704-022-2189-z
