Structure-Aware Staging for Breast Cancer Metastases

  • Songtao Zhang
  • Li Sun
  • Ruiqiao Wang
  • Hongping Tang
  • Jin Zhang
  • Lin Luo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11040)


Determining the stage of breast cancer metastases is an important component of cancer surveillance and control. Manually examining large amounts of biological tissue is laborious for pathologists, and the process is error-prone. Deep learning methods can automatically detect cancer metastases and identify cancer subtypes. However, because of the memory and computational cost of processing a gigapixel whole-slide histopathological image (WSI) at once, current deep learning methods mainly focus on local patches and ignore the overall structure of the lymph tissue. In this paper, we propose a structure-aware deep learning framework for staging breast cancer metastases, in which lymph structure information guides both the selection of training patches and the design of prediction features. Our approach achieves \(85.1\%\) accuracy at the slide level and a kappa score of 0.80 at the patient level. Moreover, introducing global structure information yields performance gains of \(6.1\%\) and \(5\%\) on slide-level and patient-level classification, respectively.
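The slide-to-patient pipeline implied by the abstract can be sketched as follows. This is a minimal illustration, assuming the CAMELYON17 conventions the paper builds on: each patient contributes five slides, each slide is labeled negative / itc / micro / macro, the patient is assigned one of five pN stages, and patient-level agreement is scored with a quadratic-weighted Cohen's kappa. The function names and the exact node-counting rule are assumptions for illustration, not the paper's own code.

```python
# Ordinal pN stages, from least to most severe (CAMELYON17 convention).
STAGES = ["pN0", "pN0(i+)", "pN1mi", "pN1", "pN2"]


def pn_stage(slide_labels):
    """Map one patient's per-slide labels to a pN stage.

    slide_labels: list of labels, each "negative", "itc", "micro", or "macro".
    """
    n_macro = slide_labels.count("macro")
    n_micro = slide_labels.count("micro")
    n_itc = slide_labels.count("itc")
    if n_macro == 0 and n_micro == 0:
        return "pN0(i+)" if n_itc else "pN0"
    if n_macro == 0:
        return "pN1mi"  # micrometastases only, no macrometastasis
    # Assumption: isolated tumor cells do not count toward involved nodes.
    involved = n_macro + n_micro
    return "pN1" if involved <= 3 else "pN2"


def quadratic_weighted_kappa(y_true, y_pred, n_classes=len(STAGES)):
    """Quadratic-weighted Cohen's kappa over ordinal class indices."""
    n = len(y_true)
    # Observed confusion matrix.
    obs = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        obs[t][p] += 1
    row = [sum(r) for r in obs]
    col = [sum(obs[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            # Quadratic disagreement weight, 0 on the diagonal.
            w = (i - j) ** 2 / (n_classes - 1) ** 2
            num += w * obs[i][j]
            den += w * row[i] * col[j] / n  # expected counts under chance
    return 1.0 - num / den
```

Patient-level evaluation would then convert predicted and reference stages to indices via `STAGES.index(...)` and pass both lists to `quadratic_weighted_kappa`; a score of 1.0 means perfect agreement and 0.0 means chance-level agreement.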


Keywords: Cancer staging · Structure-aware · Deep learning



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Songtao Zhang (1)
  • Li Sun (1)
  • Ruiqiao Wang (1)
  • Hongping Tang (2)
  • Jin Zhang (1)
  • Lin Luo (3)
  1. Southern University of Science and Technology, Shenzhen, China
  2. Department of Pathology, Shenzhen Maternity and Child Healthcare Hospital Affiliated to Southern Medical University, Shenzhen, China
  3. Peking University, Beijing, China
