Multi-scale Convolutional-Stack Aggregation for Robust White Matter Hyperintensities Segmentation

  • Hongwei Li
  • Jianguo Zhang
  • Mark Muehlau
  • Jan Kirschke
  • Bjoern Menze
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11383)

Abstract

Segmentation of both large and small white matter hyperintensities/lesions in brain MR images is a challenging task that has drawn much attention in recent years. We propose a multi-scale aggregation framework to deal with lesions of widely varying volume. First, we present a network specifically designed for small-lesion segmentation, called Stack-Net, in which multiple convolutional layers are connected one after another, aiming to preserve rich local spatial information of small lesions before the sub-sampling layer. Second, we aggregate multi-scale Stack-Nets with different receptive fields to learn multi-scale contextual information of both large and small lesions. Our model is evaluated on the recent MICCAI WMH Challenge dataset and outperforms the state of the art on lesion recall and lesion F1-score under 5-fold cross-validation. It claimed first place on the hidden test set after independent evaluation by the challenge organizers. In addition, we further test our pre-trained models on a Multiple Sclerosis lesion dataset of 30 subjects under cross-center evaluation. The results show that the aggregation model is effective in learning multi-scale spatial information.
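
To make the two ideas in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' released code: the class names (ConvStack, StackNet, MultiScaleAggregation), the layer depth, channel widths, the kernel sizes (3 vs. 5) used to realise two different receptive fields, and the simple averaging of probability maps are all illustrative assumptions rather than the exact configuration reported in the paper.

```python
# Hedged sketch of "stack several conv layers before sub-sampling" plus
# multi-scale aggregation. Hyper-parameters below are assumptions, not the
# published configuration.
import torch
import torch.nn as nn


class ConvStack(nn.Module):
    """Several conv layers applied back-to-back at full resolution, so fine
    spatial detail of small lesions is kept before any pooling."""

    def __init__(self, in_ch: int, out_ch: int, depth: int = 3, kernel_size: int = 3):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(depth):
            layers += [
                nn.Conv2d(ch, out_ch, kernel_size, padding=kernel_size // 2),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            ]
            ch = out_ch
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)


class StackNet(nn.Module):
    """Toy encoder-decoder: one ConvStack before each down-sampling step."""

    def __init__(self, in_ch: int = 2, base: int = 32, kernel_size: int = 3):
        super().__init__()
        self.enc1 = ConvStack(in_ch, base, kernel_size=kernel_size)
        self.pool = nn.MaxPool2d(2)
        self.enc2 = ConvStack(base, base * 2, kernel_size=kernel_size)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = ConvStack(base * 2 + base, base, kernel_size=kernel_size)
        self.head = nn.Conv2d(base, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d)  # logits; apply sigmoid for a lesion probability map


class MultiScaleAggregation(nn.Module):
    """Average probability maps of Stack-Nets with different receptive fields
    (here realised via different kernel sizes) to cover small and large lesions."""

    def __init__(self):
        super().__init__()
        self.small_rf = StackNet(kernel_size=3)  # smaller receptive field
        self.large_rf = StackNet(kernel_size=5)  # larger receptive field

    def forward(self, x):
        return 0.5 * (torch.sigmoid(self.small_rf(x)) + torch.sigmoid(self.large_rf(x)))


if __name__ == "__main__":
    # FLAIR + T1 slices as a 2-channel input, as provided in the WMH challenge.
    x = torch.randn(1, 2, 128, 128)
    print(MultiScaleAggregation()(x).shape)  # torch.Size([1, 1, 128, 128])
```

In this sketch the "stack" simply means that several convolutions run at the input resolution before the first pooling layer, which is one plausible reading of the abstract's description; the aggregation step is shown as averaging two networks' outputs, with other fusion rules equally compatible with the text.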

Keywords

White matter hyperintensities · Deep learning

Notes

Acknowledgements

This work was supported in part by an NSFC grant (No. 61628212), a Royal Society International Exchanges grant (No. 170168), and the Macau Science and Technology Development Fund (grant 112/2014/A3). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Hongwei Li (1)
  • Jianguo Zhang (3), corresponding author
  • Mark Muehlau (2)
  • Jan Kirschke (2)
  • Bjoern Menze (1)

  1. Technical University of Munich, Munich, Germany
  2. Klinikum rechts der Isar, Munich, Germany
  3. University of Dundee, Dundee, UK