The devil is in the details: a small-lesion sensitive weakly supervised learning framework for prostate cancer detection and grading

  • Original Article
  • Published in: Virchows Archiv

Abstract

Prostate cancer (PCa) is a significant health concern in aging males, and its diagnosis depends primarily on histopathological assessment to determine tumor size and Gleason score. This process is highly time-consuming, subjective, and reliant on the extensive experience of pathologists. Deep learning-based artificial intelligence has shown the ability to match pathologists in many prostate cancer diagnostic scenarios, but it is prone to errors on hard cases with small tumor areas, given the extremely high resolution of whole slide images (WSIs). The absence of fine-grained, large-scale annotations of such small tumor lesions makes this problem even more challenging. Existing methods usually crop the WSI foreground uniformly and then use convolutional neural networks as the backbone to predict the classification result; however, uniform cropping can break up the structure of tiny tumors and degrade classification accuracy. To solve this problem, we propose an Intensive-Sampling Multiple Instance Learning framework (ISMIL), which focuses on tumor regions and improves the recognition of small tumor regions by intensively sampling the crucial regions. Experiments on prostate cancer detection show that our method achieves an area under the receiver operating characteristic curve (AUC) of 0.987 on the PANDA dataset and improves recall on hard cases by at least 33% over current mainstream methods while maintaining higher specificity. ISMIL also performs comparably to human experts on the prostate cancer grading task. Moreover, ISMIL shows good robustness on independent cohorts, making it a potential tool for improving the diagnostic efficiency of pathologists.
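To make the abstract's description concrete, below is a minimal, hypothetical PyTorch sketch of the two ideas it names: attention-based multiple instance learning over patch features and a second, intensive sampling pass around the highest-attention ("crucial") regions. This is not the authors' ISMIL implementation; the class and function names, the top_k cut-off, and the offset grid are illustrative assumptions only.

    # Hypothetical sketch only -- not the authors' ISMIL code.
    import torch
    import torch.nn as nn

    class AttentionMIL(nn.Module):
        """Attention-based MIL pooling over a bag of patch features."""
        def __init__(self, feat_dim=512, hidden_dim=128, n_classes=2):
            super().__init__()
            self.attn = nn.Sequential(
                nn.Linear(feat_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1))
            self.classifier = nn.Linear(feat_dim, n_classes)

        def forward(self, feats):                      # feats: (n_patches, feat_dim)
            weights = torch.softmax(self.attn(feats), dim=0)   # attention over the bag
            slide_feat = (weights * feats).sum(dim=0)  # weighted slide-level feature
            return self.classifier(slide_feat), weights.squeeze(-1)

    def intensive_resample(coords, attn_weights, top_k=16,
                           offsets=((0, 0), (0, 128), (128, 0), (128, 128))):
        """Return extra patch coordinates clustered around the top-k
        high-attention patches, to be re-cropped and re-embedded."""
        top_idx = torch.topk(attn_weights, k=min(top_k, attn_weights.numel())).indices
        extra = [coords[i] + torch.tensor(o) for i in top_idx for o in offsets]
        return torch.stack(extra)

    # Example: 1,000 random embeddings stand in for patch features from a frozen encoder.
    feats = torch.randn(1000, 512)
    coords = torch.randint(0, 40000, (1000, 2))        # patch top-left pixel coordinates
    model = AttentionMIL()
    logits, attn = model(feats)
    dense_coords = intensive_resample(coords, attn)    # crops for a second, finer pass

In such a scheme the dense coordinates would be cropped from the WSI, embedded, and aggregated again, so that small lesions contribute more patches to the slide-level prediction; the actual ISMIL sampling strategy is described in the full text.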



Data availability

PANDA, DiagSet-B and PAIP datasets are publicly available (https://www.kaggle.com/competitions/prostate-cancer-grade-assessment/data, https://paperswithcode.com/dataset/diagset, http://www.wisepaip.org/paip/). The other datasets generated and/or analyzed during the current study are available from the corresponding authors on reasonable request.

Code availability

Our code will be available online at https://github.com/YangZyyyy/Intensive-sampling-MIL.

Notes

  1. https://paip2021.grand-challenge.org/

  2. https://portal.gdc.cancer.gov/

  3. http://www.wisepaip.org/paip

  4. https://github.com/Xiyue-Wang/TransPath


Acknowledgements

The PAIP dataset used in this research was provided by Seoul National University Hospital through a grant from the Korea Health Technology R&D Project of the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (grant number: HI18C0316). We also thank Junhan Zhao from Harvard Medical School for helping us address the comments related to the discussion of clinical applications and for polishing our presentation.

Author information

Contributions

ZY, XW, JX designed the study. JZ, SY, WY, ZL, XH collected public datasets. XW, YL collected tissue samples and scanned them into WSIs. ZY, XW, JX drafted the manuscript, and all authors approved the final version to be published.

Corresponding authors

Correspondence to Jun Zhang, Zhongyu Li or Yueping Liu.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Zhongyi Yang, Xiyue Wang, and Jinxi Xiang contributed equally to this work.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yang, Z., Wang, X., Xiang, J. et al. The devil is in the details: a small-lesion sensitive weakly supervised learning framework for prostate cancer detection and grading. Virchows Arch 482, 525–538 (2023). https://doi.org/10.1007/s00428-023-03502-z

