One Network to Segment Them All: A General, Lightweight System for Accurate 3D Medical Image Segmentation
Many recent medical segmentation systems rely on powerful deep learning models to solve highly specific tasks. To maximize performance, it is standard practice to evaluate numerous pipelines with varying model topologies, optimization parameters, pre- and postprocessing steps, and even model cascades. It is often unclear how well the resulting pipeline transfers to different tasks.
We propose a simple and thoroughly evaluated deep learning framework for segmentation of arbitrary medical image volumes. The system requires no task-specific information and no human interaction, and is based on a fixed model topology and a fixed hyperparameter set, eliminating the process of model selection and its inherent tendency to cause method-level over-fitting. The system is available as open source and does not require deep learning expertise to use. Without task-specific modifications, the system performed better than, or comparably to, highly specialized deep learning methods across 3 separate segmentation tasks. In addition, it ranked 5th and 6th in the first and second rounds of the 2018 Medical Segmentation Decathlon, comprising another 10 tasks.
The system relies on multi-planar data augmentation, which enables a single 2D architecture based on the familiar U-Net to be applied to 3D volumes. Multi-planar training combines the parameter efficiency of a 2D fully convolutional neural network with a systematic train- and test-time augmentation scheme that allows the 2D model to learn a representation of the 3D image volume that fosters generalization.
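The multi-planar idea can be sketched in a few lines of NumPy. This is a hypothetical, simplified illustration only: it samples slices along just the three canonical planes and fuses per-view predictions by plain averaging, whereas the actual system samples many view axes and uses its own fusion scheme. The function names `sample_multiplanar_slices` and `fuse_multiplanar_predictions` are invented here for illustration.

```python
import numpy as np

def sample_multiplanar_slices(volume, seed=0):
    """Extract one 2D slice from a 3D volume along each canonical axis.

    Simplified stand-in for multi-planar sampling: the real system
    samples isotropic 2D images along many view axes; here we take a
    random slice along each of the three canonical planes.
    """
    rng = np.random.default_rng(seed)
    slices = []
    for axis in range(3):  # e.g. axial, coronal, sagittal
        idx = rng.integers(volume.shape[axis])
        slices.append(np.take(volume, idx, axis=axis))
    return slices

def fuse_multiplanar_predictions(per_view_probs):
    """Fuse per-view class-probability volumes by averaging.

    Each entry is a probability array mapped back into the common 3D
    frame; averaging is one simple fusion rule (a sketch, not the
    paper's exact fusion procedure).
    """
    return np.mean(np.stack(per_view_probs, axis=0), axis=0)
```

At test time, the same 2D network is run over slices from every view, the per-view predictions are resampled into the 3D frame, and a fusion step like the one above produces the final volumetric segmentation.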
We would like to thank both Microsoft and NVIDIA for providing computational resources on the Azure platform for this project.