Handling Missing Annotations for Semantic Segmentation with Deep ConvNets

  • Olivier Petit
  • Nicolas Thome
  • Arnaud Charnoz
  • Alexandre Hostettler
  • Luc Soler
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11045)

Abstract

Annotation of medical images for semantic segmentation is a very time-consuming and difficult task. Moreover, clinical experts often focus on specific anatomical structures and thus produce partially annotated images. In this paper, we introduce SMILE, a new deep convolutional neural network that addresses the issue of learning with incomplete ground truth. SMILE identifies ambiguous labels and ignores them during training, so that incorrect or noisy information is not propagated. A second contribution is SMILEr, which uses SMILE as initialization to automatically relabel missing annotations following a curriculum strategy. Experiments on three organ classes (liver, stomach, pancreas) show the relevance of the proposed approach for semantic segmentation: with 70% of annotations missing, SMILEr performs similarly to a baseline trained with complete ground-truth annotations.
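The two ideas the abstract describes can be sketched in a few lines: a loss that is averaged only over pixels whose labels are trusted (ambiguous pixels are masked out, so no incorrect signal is propagated), and a curriculum-style relabeling pass that fills in ignored pixels whose predicted class is sufficiently confident. This is a minimal NumPy illustration of those two mechanisms, not the paper's actual implementation; the function names and the confidence threshold are illustrative assumptions.

```python
import numpy as np

def masked_cross_entropy(probs, labels, ignore_mask):
    """Cross-entropy averaged only over pixels NOT flagged as ambiguous.

    probs:       (H, W, C) predicted class probabilities
    labels:      (H, W)    integer ground-truth labels (possibly incomplete)
    ignore_mask: (H, W)    bool, True where the label is ambiguous/missing
    """
    keep = ~ignore_mask
    if not keep.any():
        return 0.0
    h, w = labels.shape
    # Probability assigned to the ground-truth class at each pixel.
    p = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    losses = -np.log(np.clip(p, 1e-12, None))
    # Average only over trusted pixels; masked pixels contribute nothing.
    return float(losses[keep].mean())

def relabel_confident(probs, labels, ignore_mask, threshold=0.95):
    """Curriculum-style relabeling sketch: fill in ignored pixels whose
    top predicted probability exceeds `threshold`, then shrink the mask.
    Repeating this with a decreasing threshold mimics a curriculum that
    relabels easy pixels first."""
    conf = probs.max(axis=-1)
    pred = probs.argmax(axis=-1)
    fill = ignore_mask & (conf >= threshold)
    new_labels = labels.copy()
    new_labels[fill] = pred[fill]
    new_ignore = ignore_mask & ~fill
    return new_labels, new_ignore
```

In a training loop, one would alternate the two steps: train with the masked loss, then periodically call the relabeling pass so that confidently predicted pixels rejoin the supervised set.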

Keywords

Medical images · Deep learning · Convolutional Neural Networks · Incomplete ground truth annotation · Noisy labels · Missing labels

Supplementary material

473898_1_En_3_MOESM1_ESM.pdf (855 kb)
Supplementary material 1 (pdf 855 KB)


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Olivier Petit (1, 2)
  • Nicolas Thome (1)
  • Arnaud Charnoz (2)
  • Alexandre Hostettler (3)
  • Luc Soler (2, 3)
  1. CEDRIC - Conservatoire National des Arts et Metiers, Paris, France
  2. Visible Patient SAS, Strasbourg, France
  3. IRCAD, Strasbourg, France