Cervical Nuclei Segmentation in Whole Slide Histopathology Images Using Convolution Neural Network

  • Qiuju Yang
  • Kaijie Wu (corresponding author)
  • Hao Cheng
  • Chaochen Gu
  • Yuan Liu
  • Shawn Patrick Casey
  • Xinping Guan
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 937)


Pathologists generally assess the malignancy of cervical cancer, and whether cancer cells have the potential to spread to other organs, by examining whole slide histopathology images using virtual microscopy. In this process, the morphology of nuclei, including their size, orientation, and arrangement, is one of the most significant diagnostic indices, so accurate segmentation of nuclei is a crucial step in clinical diagnosis. Several challenges exist, however. A single whole slide image (WSI) often occupies a large amount of memory, making it difficult to manipulate. Moreover, the extremely high density of nuclei, their varying shapes and sizes, overlapping instances, low contrast, weakly defined boundaries, and differences in staining methods and image acquisition techniques all make accurate segmentation difficult. A method comprised of two main parts is proposed to achieve lesion localization and automatic segmentation of nuclei. First, a U-Net model is used to localize and segment lesions. Then, a multi-task cascade network is proposed to combine nuclei foreground and edge information to obtain instance segmentation results. Evaluation of the proposed method for lesion localization and nuclei segmentation on a dataset of cervical tissue sections annotated by experienced pathologists, along with comparative experiments, demonstrates its outstanding performance.
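The abstract does not specify how the cascade's foreground and edge predictions are fused into per-nucleus instances. A common post-processing step for such contour-aware networks is to suppress predicted edge pixels from the foreground mask so that touching nuclei separate, then label connected components. The sketch below illustrates that idea only; the thresholds (`t_fg`, `t_edge`) and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage


def instances_from_maps(fg_prob, edge_prob, t_fg=0.5, t_edge=0.5):
    """Fuse a nuclei-foreground probability map with an edge probability map.

    Pixels confidently predicted as edges are removed from the foreground,
    splitting touching nuclei; the remaining interior regions are then
    labelled by connected components to yield an instance map.
    """
    interior = (fg_prob > t_fg) & (edge_prob < t_edge)
    labels, n = ndimage.label(interior)  # 4-connected components by default
    return labels, n


# Synthetic example: one foreground blob split in two by a predicted edge band.
fg = np.zeros((8, 8))
fg[1:7, 1:7] = 0.9            # foreground probability high inside the blob
edge = np.zeros((8, 8))
edge[1:7, 3:5] = 0.9          # vertical edge band through the middle
labels, n = instances_from_maps(fg, edge)
print(n)  # prints 2: the edge band separates the blob into two instances
```

In practice the suppressed edge pixels are usually reassigned to the nearest instance (e.g. by morphological dilation or watershed) so that nuclei boundaries are not eroded; that refinement is omitted here for brevity.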


Keywords: Nuclei segmentation · Whole slide histopathology image · Deep learning · Convolutional neural networks · Cervical cancer



This work was supported by the National Key Scientific Instruments and Equipment Development Program of China (2013YQ03065101) and partially supported by the National Natural Science Foundation (NNSF) of China under Grant 61503243 and the National Science Foundation (NSF) of China under Grant 61521063.



Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Qiuju Yang (1)
  • Kaijie Wu (1, corresponding author)
  • Hao Cheng (1)
  • Chaochen Gu (1)
  • Yuan Liu (2)
  • Shawn Patrick Casey (1)
  • Xinping Guan (1)
  1. Department of Automation, Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai Jiao Tong University, Shanghai, China
  2. Pathology Department, International Peace Maternity and Child Health Hospital of China Welfare Institute, Shanghai, China
