
An Evaluation of Self-supervised Pre-training for Skin-Lesion Analysis

  • Conference paper
Computer Vision – ECCV 2022 Workshops (ECCV 2022)

Abstract

Self-supervised pre-training appears as an advantageous alternative to supervised pre-training for transfer learning. By synthesizing annotations on pretext tasks, self-supervision allows pre-training models on large amounts of pseudo-labels before fine-tuning them on the target task. In this work, we assess self-supervision for diagnosing skin lesions, comparing three self-supervised pipelines to a challenging supervised baseline, on five test datasets comprising in- and out-of-distribution samples. Our results show that self-supervision is competitive both in improving accuracies and in reducing the variability of outcomes. Self-supervision proves particularly useful for low training-data scenarios (<1500 and <150 samples), where its ability to stabilize the outcomes is essential to provide sound results.
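
The recipe the abstract describes, pre-training on a pretext task and then fine-tuning on the labeled target data, can be summarized with a short PyTorch sketch. The snippet below is an illustration under assumptions (a ResNet-50 backbone, a hypothetical self-supervised checkpoint file, an ImageFolder-style dataset path, and a binary label set), not the authors' implementation; their official code is linked in the Notes.

```python
# Minimal sketch of the pre-train-then-fine-tune workflow described in the
# abstract, NOT the authors' exact pipeline. The checkpoint file, dataset
# path, and class count are illustrative assumptions only.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# 1) Backbone whose weights would come from self-supervised pre-training on a
#    pretext task (e.g., a SimCLR- or BYOL-style checkpoint on unlabeled images).
backbone = models.resnet50(weights=None)
# state = torch.load("ssl_resnet50_checkpoint.pth")   # hypothetical checkpoint
# backbone.load_state_dict(state, strict=False)

# 2) Swap the classification head for the target skin-lesion task.
num_classes = 2  # e.g., melanoma vs. benign -- assumption for illustration
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

# 3) Fine-tune end-to-end on the (possibly small) labeled target set.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("path/to/skin_lesion_train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

backbone.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
```

In this framing, the self-supervised weights simply replace the usual ImageNet-supervised initialization; the fine-tuning stage on the skin-lesion data is otherwise unchanged.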

Notes

  1. https://github.com/VirtualSpaceman/ssl-skin-lesions.


Acknowledgements

L. Chaves is partially funded by Santander and Google LARA 2021. A. Bissoto is funded by FAPESP 2019/19619-7. E. Valle is partially funded by CNPq 315168/2020-0. S. Avila is partially funded by CNPq 315231/2020-3, FAPESP 2013/08293-7, 2020/09838-0, and Google LARA 2021. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code 001. The Recod.ai lab is supported by projects from FAPESP, CNPq, and CAPES.

Author information

Corresponding author

Correspondence to Levy Chaves.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Chaves, L., Bissoto, A., Valle, E., Avila, S. (2023). An Evaluation of Self-supervised Pre-training for Skin-Lesion Analysis. In: Karlinsky, L., Michaeli, T., Nishino, K. (eds) Computer Vision – ECCV 2022 Workshops. ECCV 2022. Lecture Notes in Computer Science, vol 13804. Springer, Cham. https://doi.org/10.1007/978-3-031-25069-9_11

  • DOI: https://doi.org/10.1007/978-3-031-25069-9_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-25068-2

  • Online ISBN: 978-3-031-25069-9

  • eBook Packages: Computer Science, Computer Science (R0)
