
FU-Net: Multi-class Image Segmentation Using Feedback Weighted U-Net

  • Mina Jafari
  • Ruizhe Li
  • Yue Xing
  • Dorothee Auer
  • Susan Francis
  • Jonathan Garibaldi
  • Xin Chen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11902)

Abstract

In this paper, we present a generic deep convolutional neural network (DCNN) for multi-class image segmentation. It is based on a well-established supervised end-to-end DCNN model, known as U-net. U-net is first modified by adding widely used batch normalization and residual blocks (named BRU-net) to improve the efficiency of model training. Based on BRU-net, we further introduce a dynamically weighted cross-entropy loss function. The weights are computed from the pixel-wise prediction accuracy during training; assigning higher weights to pixels with lower segmentation accuracy enables the network to learn more from poorly predicted image regions. We name our method feedback weighted U-net (FU-net). We evaluated the method on T1-weighted brain MRI for segmentation of the midbrain and substantia nigra, where the number of pixels per class is extremely unbalanced. Measured by the Dice coefficient, the proposed FU-net outperformed BRU-net and U-net with statistical significance, especially when only a small number of training examples was available. The code is publicly available on GitHub: https://github.com/MinaJf/FU-net.
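
The exact weighting scheme is defined in the full paper; as a rough illustration of the idea only, the sketch below re-weights the per-pixel cross-entropy by the current prediction error in PyTorch. All names and details here are assumptions for illustration, not the authors' implementation (see the GitHub repository above for the actual code).

    import torch
    import torch.nn.functional as F

    def feedback_weighted_cross_entropy(logits, targets, eps=1e-6):
        # logits:  (N, C, H, W) raw network outputs
        # targets: (N, H, W) integer class labels (dtype long)
        probs = F.softmax(logits, dim=1)
        # probability currently assigned to the true class at each pixel
        p_true = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
        # feedback weight: larger where the prediction is currently poor;
        # detached so gradients do not flow through the weights themselves
        weights = (1.0 - p_true).detach() + eps
        # standard per-pixel cross-entropy, re-weighted by the feedback term
        pixel_ce = F.cross_entropy(logits, targets, reduction="none")
        return (weights * pixel_ce).mean()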

Keywords

Convolutional neural network · Medical image segmentation · U-net · Weighted cross entropy

Acknowledgement

The authors acknowledge Nvidia for donating a graphics card for this research.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Mina Jafari (1)
  • Ruizhe Li (1)
  • Yue Xing (2)
  • Dorothee Auer (2)
  • Susan Francis (3)
  • Jonathan Garibaldi (1)
  • Xin Chen (1)
  1. School of Computer Science, University of Nottingham, Nottingham, UK
  2. School of Medicine, University of Nottingham, Nottingham, UK
  3. Sir Peter Mansfield Imaging Centre, University of Nottingham, Nottingham, UK
