AssemblyNet: A Novel Deep Decision-Making Process for Whole Brain MRI Segmentation

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11766)


Whole brain segmentation using deep learning (DL) is a very challenging task since the number of anatomical labels is very high compared to the number of available training images. To address this problem, previous DL methods have used either a single global convolutional neural network (CNN) or a few independent CNNs. In this paper, we present a novel ensemble method based on a large number of CNNs processing different overlapping brain areas. Inspired by parliamentary decision-making systems, we propose a framework called AssemblyNet, made of two "assemblies" of U-Nets. Such a parliamentary system is capable of dealing with complex decisions and reaching a consensus quickly. AssemblyNet introduces sharing of knowledge among neighboring U-Nets, an "amendment" procedure performed by the second assembly at higher resolution to refine the decision taken by the first one, and a final decision obtained by majority voting. When using the same 45 training images, AssemblyNet outperforms a global U-Net by 28% in terms of the Dice metric, patch-based joint label fusion by 15%, and SLANT-27 by 10%. Finally, AssemblyNet demonstrates a high capacity to deal with limited training data while achieving whole brain segmentation in practical training and testing times.
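The abstract states that the final decision is obtained by majority voting over the predictions of many U-Nets, each covering an overlapping brain sub-volume. As an illustration only (not the authors' implementation), this fusion step could be sketched as a per-voxel vote in NumPy; the function `majority_vote`, its argument names, and the offset convention are all hypothetical:

```python
import numpy as np

def majority_vote(label_maps, offsets, volume_shape, n_labels):
    """Fuse label maps predicted on overlapping sub-volumes by
    per-voxel majority vote (illustrative sketch, not AssemblyNet code).

    label_maps:   list of integer label arrays, one per sub-volume
    offsets:      (z, y, x) position of each sub-volume in the full volume
    volume_shape: shape of the full segmentation volume
    n_labels:     number of anatomical labels
    """
    # One vote counter per voxel and per candidate label.
    votes = np.zeros(volume_shape + (n_labels,), dtype=np.int32)
    for labels, (z, y, x) in zip(label_maps, offsets):
        dz, dy, dx = labels.shape
        # One-hot encode this sub-volume's labels and add them as votes;
        # overlapping sub-volumes accumulate votes on shared voxels.
        votes[z:z + dz, y:y + dy, x:x + dx] += np.eye(n_labels, dtype=np.int32)[labels]
    # The fused label at each voxel is the one with the most votes.
    return votes.argmax(axis=-1)
```

In a coarse-to-fine design like the one described, such a vote would be applied to the second (higher-resolution) assembly's outputs, so that each voxel's final label reflects the consensus of all U-Nets whose sub-volumes cover it.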


Keywords: Whole brain segmentation · CNN · Ensemble learning · Transfer learning · Multiscale framework



This work benefited from the support of the project DeepvolBrain of the French National Research Agency (ANR-18-CE45-0013). This study was achieved within the context of the Laboratory of Excellence TRAIL ANR-10-LABX-57 for the BigDataBrain project. Moreover, we thank the Investments for the Future program IdEx Bordeaux (ANR-10-IDEX-03-02, HL-MRI project), the Cluster of Excellence CPU and the CNRS. This study has also been supported by the DPI2017-87743-R grant from the Spanish Ministerio de Economía, Industria y Competitividad. The authors gratefully acknowledge the support of NVIDIA Corporation with the donation of the TITAN Xp GPU used in this research.


  1. Wang, H., Yushkevich, P.: Multi-atlas segmentation with joint label fusion and corrective learning—an open source implementation. Front. Neuroinform. 7, 27 (2013)
  2. de Brebisson, A., Montana, G.: Deep neural networks for anatomical brain segmentation. In: IEEE CVPR Workshops, pp. 20–28 (2015)
  3. Wachinger, C., et al.: DeepNAT: deep convolutional neural network for segmenting neuroanatomy. NeuroImage 170, 434–445 (2018)
  4. Roy, A.G., Conjeti, S., Sheet, D., Katouzian, A., Navab, N., Wachinger, C.: Error corrective boosting for learning fully convolutional networks with limited data. In: Descoteaux, M., Maier-Hein, L., Franz, A., Jannin, P., Collins, D.L., Duchesne, S. (eds.) MICCAI 2017. LNCS, vol. 10435, pp. 231–239. Springer, Cham (2017)
  5. Wong, K.C.L., Moradi, M., Tang, H., Syeda-Mahmood, T.: 3D segmentation with exponential logarithmic loss for highly unbalanced object sizes. In: Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G. (eds.) MICCAI 2018. LNCS, vol. 11072, pp. 612–619. Springer, Cham (2018)
  6. Huo, Y., et al.: 3D whole brain segmentation using spatially localized atlas network tiles. NeuroImage 194, 105–119 (2019)
  7. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015)
  8. Manjón, J.V., et al.: Adaptive non-local means denoising of MR images with spatially varying noise levels. JMRI 31(1), 192–203 (2010)
  9. Tustison, N.J., et al.: N4ITK: improved N3 bias correction. IEEE Trans. Med. Imaging 29(6), 1310–1320 (2010)
  10. Avants, B.B., et al.: A reproducible evaluation of ANTs similarity metric performance in brain image registration. NeuroImage 54(3), 2033–2044 (2011)
  11. Manjón, J.V., et al.: Robust MRI brain tissue parameter estimation by multistage outlier rejection. Magn. Reson. Med. 59(4), 866–873 (2008)
  12. Manjón, J.V., et al.: Nonlocal intracranial cavity extraction. IJBI 2014, 10 (2014)
  13. Marcus, D.S., et al.: Open access series of imaging studies (OASIS): cross-sectional MRI data in young, middle aged, nondemented, and demented older adults. J. Cogn. Neurosci. 19(9), 1498–1507 (2007)
  14. Collins, D.L., et al.: Design and construction of a realistic digital brain phantom. IEEE TMI 17(3), 463–468 (1998)
  15. Kennedy, D.N., et al.: CANDIShare: a resource for pediatric neuroimaging data. Neuroinformatics 10(3), 319–322 (2012)
  16. Zhang, H., et al.: Mixup: beyond empirical risk minimization. arXiv:1710.09412 (2017)
  17. Izmailov, P., et al.: Averaging weights leads to wider optima and better generalization. arXiv:1803.05407 (2018)
  18. Gal, Y., Ghahramani, Z.: A theoretically grounded application of dropout in recurrent neural networks. In: Advances in Neural Information Processing Systems, pp. 1019–1027 (2016)
  19. Balakrishnan, G., et al.: VoxelMorph: a learning framework for deformable medical image registration. IEEE Trans. Med. Imaging (2019)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. CNRS, Univ. Bordeaux, Bordeaux INP, LaBRI, UMR 5800, Talence, France
  2. Bordeaux INP, Univ. Bordeaux, CNRS, IMS, UMR 5218, Talence, France
  3. CNRS, Univ. Bordeaux, IMB, UMR 5251, Talence, France
  4. ITACA, Universitat Politècnica de València, Valencia, Spain
