Abdominal Adipose Tissue Segmentation in MRI with Double Loss Function Collaborative Learning
Deep learning has shown promising progress in computer-aided medical image diagnosis in recent years, such as adipose tissue segmentation. Generally, training a high-performance deep segmentation model requires a large amount of labeled images. However, in clinical practice many labels are saved in numerical form rather than image form, while relabeling images with manual segmentation is extremely time-consuming and laborious. To bridge this gap between numerical labels and image-based labels, we propose a novel double loss function to train an adipose segmentation model through collaborative learning. Specifically, the double loss function leverages a large volume of available numerical labels and a small volume of image labels. To validate our collaborative learning model, we collect one dataset of 300 high-quality MR images with pixel-level segmentation labels and another dataset of 9000 clinical quantitative MR images with numerical labels giving the pixel counts of subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), and non-adipose tissue. Our approach achieves 94.3% and 90.8% segmentation accuracy for SAT and VAT, respectively, on the dataset with image labels, and 93.6% and 88.7% segmentation accuracy on the dataset with only numerical labels. The proposed approach can generalize to a broad range of clinical problems with different types of ground-truth labels.
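The double loss described above can be sketched as a sum of two terms: a standard pixel-wise segmentation loss for the few images with pixel-level labels, and a count-matching penalty for the many images carrying only numerical pixel counts per tissue class. The sketch below is a minimal illustration under stated assumptions, not the paper's actual formulation: the function name `double_loss`, the use of an L1 penalty on soft pixel counts, and the weighting factor `alpha` are all hypothetical choices for exposition.

```python
import torch
import torch.nn.functional as F


def double_loss(logits, pixel_labels=None, pixel_counts=None, alpha=1.0):
    """Illustrative double loss combining two supervision signals.

    logits: (N, 3, H, W) network outputs for SAT, VAT, non-adipose.
    pixel_labels: (N, H, W) long tensor of per-pixel class indices,
        available only for the small pixel-labeled dataset.
    pixel_counts: (N, 3) float tensor of per-class pixel counts,
        available for the large numerically labeled dataset.
    alpha: hypothetical weight balancing the two terms.
    """
    loss = logits.new_zeros(())
    if pixel_labels is not None:
        # Term 1: ordinary pixel-wise cross-entropy segmentation loss.
        loss = loss + F.cross_entropy(logits, pixel_labels)
    if pixel_counts is not None:
        # Term 2: match predicted soft pixel counts to the numerical labels.
        # Summing class probabilities over the image gives a differentiable
        # estimate of how many pixels the model assigns to each tissue.
        probs = F.softmax(logits, dim=1)
        pred_counts = probs.sum(dim=(2, 3))  # (N, 3)
        loss = loss + alpha * F.l1_loss(pred_counts, pixel_counts)
    return loss
```

In a collaborative-learning setup, batches from the two datasets would pass `pixel_labels` or `pixel_counts` (or both) as available, so a single model is trained from both label types.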
Keywords: Segmentation · Adipose · Weakly supervised data · Deep learning · Multi-correlation data
This work was supported in part by the National Natural Science Foundation of China under Grant 61872241 and Grant 61572316, in part by the National Key Research and Development Program of China under Grant 2017YFE0104000 and Grant 2016YFC1300302, in part by the Hong Kong Research Grants Council (No. PolyU 152035/17E), in part by the Science and Technology Commission of Shanghai Municipality One Belt And One Road International Joint Laboratory Construction Project under Grant 18410750700 and in part by the Science and Technology Commission of Shanghai Municipality under Grant 17411952600 and Grant 16DZ0501100.