
A Multi-resolution Coarse-to-Fine Segmentation Framework with Active Learning in 3D Brain MRI

  • Zhenxi Zhang
  • Jie Li
  • Zhusi Zhong
  • Zhicheng Jiao
  • Xinbo Gao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11935)

Abstract

Precise segmentation of key tissues in medical images is of great significance. Although deep neural networks have achieved promising results in many medical image segmentation tasks, volumetric medical image segmentation remains challenging due to limited computing resources and annotated datasets. In this paper, we propose a multi-resolution coarse-to-fine segmentation framework to perform accurate segmentation. The framework consists of a coarse stage and a fine stage: the coarse stage, operating on low-resolution data, provides high-level semantic cues for the fine stage. Moreover, we embed an active learning process into the coarse-to-fine framework for sparse annotation; the proposed multiple-query-criteria active learning method selects high-value slices for labeling. We evaluated the effectiveness of the proposed framework on two public brain MRI datasets. Our coarse-to-fine networks outperform other competitive methods under fully supervised training. In addition, the proposed active learning method needs only 30% to 40% of the slices of a scan to produce better dense prediction results than both the non-active-learning baseline and single-query-criterion active learning methods.
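The abstract only outlines the two-stage design and the slice-selection strategy, so the sketch below illustrates one plausible reading of it rather than the authors' implementation. `CoarseNet`/`FineNet` stand-ins, the trilinear resampling, the choice of mean slice entropy and predicted foreground area as the two query criteria, their equal weighting, and the 35% budget are all assumptions; only the 30%–40% annotation budget range comes from the abstract.

```python
# Minimal sketch (assumed, not the authors' code) of a two-stage coarse-to-fine
# inference pass followed by a multi-criteria active-learning slice query.
import torch
import torch.nn.functional as F


def coarse_to_fine_predict(volume, coarse_net, fine_net, scale=0.5):
    """volume: (1, 1, D, H, W) MRI tensor; returns fine-stage class probabilities."""
    # Coarse stage: segment a downsampled copy to obtain cheap semantic context.
    low_res = F.interpolate(volume, scale_factor=scale, mode="trilinear",
                            align_corners=False)
    coarse_prob = F.interpolate(torch.softmax(coarse_net(low_res), dim=1),
                                size=volume.shape[2:], mode="trilinear",
                                align_corners=False)
    # Fine stage: full-resolution input concatenated with the upsampled coarse
    # probabilities, so the fine network can focus on boundary refinement.
    fine_logits = fine_net(torch.cat([volume, coarse_prob], dim=1))
    return torch.softmax(fine_logits, dim=1)


def query_slices(prob, budget_ratio=0.35):
    """Rank the axial slices of one scan by combined query criteria and return
    the indices to send for annotation (roughly 30%-40% of the slices)."""
    eps = 1e-8
    # Criterion 1: mean voxel-wise entropy per slice (uncertainty).
    entropy = -(prob * torch.log(prob + eps)).sum(dim=1)   # (1, D, H, W)
    uncertainty = entropy.mean(dim=(2, 3)).squeeze(0)      # (D,)
    # Criterion 2: predicted foreground area per slice (assumes channel 0 = background).
    coverage = (1.0 - prob[:, 0]).mean(dim=(2, 3)).squeeze(0)  # (D,)
    # Combine min-max normalised criteria; equal weights are an arbitrary choice here.
    norm = lambda x: (x - x.min()) / (x.max() - x.min() + eps)
    score = 0.5 * norm(uncertainty) + 0.5 * norm(coverage)
    k = max(1, int(budget_ratio * score.numel()))
    return torch.topk(score, k).indices.sort().values
```

In this reading, the selected slice indices would be labeled and used as sparse supervision for the next training round, while the remaining slices of the scan stay unannotated.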

Keywords

Deep learning · Coarse-to-fine framework · Active learning · Sparse annotation · Tissue segmentation

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grants 61432014, 61772402, U1605252, and 61671339, and in part by the National High-Level Talents Special Support Program of China under Grant CS31117200001.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Zhenxi Zhang (1)
  • Jie Li (1)
  • Zhusi Zhong (1)
  • Zhicheng Jiao (2)
  • Xinbo Gao (1)
  1. School of Electronic Engineering, Xidian University, Xi'an, China
  2. Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA