Abdominal Adipose Tissue Segmentation in MRI with Double Loss Function Collaborative Learning

  • Siyuan Pan
  • Xuhong Hou
  • Huating Li
  • Bin Sheng
  • Ruogu Fang
  • Yuxin Xue
  • Weiping Jia
  • Jing Qin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)

Abstract

Deep learning has shown promising progress in computer-aided medical image diagnosis in recent years, such as adipose tissue segmentation. Generally, training a high-performance deep segmentation model requires a large amount of labeled images. However, in clinical practice many labels are saved in numerical form rather than image form, and relabelling images with manual segmentation is extremely time-consuming and laborious. To bridge this gap between numerical labels and image-based labels, we propose a novel double loss function to train an adipose segmentation model through collaborative learning. Specifically, the double loss function leverages a large volume of available numerical labels and a small volume of image labels. To validate our collaborative learning model, we collect one dataset of 300 high-quality MR images with pixel-level segmentation labels and another dataset of 9000 clinical quantitative MR images with numerical labels, namely the number of pixels in subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), and non-adipose tissue. Our approach achieves 94.3% and 90.8% segmentation accuracy for SAT and VAT respectively on the dataset with image labels, and 93.6% and 88.7% segmentation accuracy on the dataset with only numerical labels. The proposed approach can be generalized to a broad range of clinical problems with different types of ground-truth labels.
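The paper's exact loss formulation is not reproduced on this page, so the sketch below is only an illustration of the general idea, assuming a PyTorch setup: a pixel-wise cross-entropy term supervises the fully segmented images, while a count-based L1 term compares soft per-class pixel counts against the numerical labels (SAT, VAT, non-adipose). The class layout, the L1 choice, and the `count_weight` parameter are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class DoubleLoss(nn.Module):
    """Illustrative double loss: pixel-level supervision (segmentation masks)
    combined with numerical supervision (per-class pixel counts)."""

    def __init__(self, count_weight=1.0):
        super().__init__()
        self.seg_loss = nn.CrossEntropyLoss()   # for images with full masks
        self.count_loss = nn.L1Loss()           # for images with pixel counts only
        self.count_weight = count_weight        # assumed balancing hyperparameter

    def forward(self, logits, mask=None, pixel_counts=None):
        # logits:       (B, C, H, W) raw network outputs for C classes
        # mask:         (B, H, W) integer class labels, or None if unavailable
        # pixel_counts: (B, C) ground-truth pixel counts per class, or None
        loss = logits.new_zeros(())
        if mask is not None:
            loss = loss + self.seg_loss(logits, mask)
        if pixel_counts is not None:
            # Soft pixel count: sum of per-class probabilities over the image.
            probs = torch.softmax(logits, dim=1)
            predicted_counts = probs.sum(dim=(2, 3))        # (B, C)
            loss = loss + self.count_weight * self.count_loss(
                predicted_counts, pixel_counts)
        return loss


# Minimal usage sketch with random tensors standing in for MR slices.
if __name__ == "__main__":
    criterion = DoubleLoss(count_weight=0.01)
    logits = torch.randn(2, 3, 64, 64)            # SAT, VAT, non-adipose classes
    mask = torch.randint(0, 3, (2, 64, 64))       # pixel-level labels
    counts = torch.rand(2, 3) * 64 * 64           # numerical labels (pixel counts)
    print(criterion(logits, mask=mask))           # image-labeled batch
    print(criterion(logits, pixel_counts=counts)) # numerically labeled batch
```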

Keywords

Segmentation · Adipose · Weakly supervised data · Deep learning · Multi-correlation data

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 61872241 and Grant 61572316, in part by the National Key Research and Development Program of China under Grant 2017YFE0104000 and Grant 2016YFC1300302, in part by the Hong Kong Research Grants Council (No. PolyU 152035/17E), in part by the Science and Technology Commission of Shanghai Municipality One Belt And One Road International Joint Laboratory Construction Project under Grant 18410750700 and in part by the Science and Technology Commission of Shanghai Municipality under Grant 17411952600 and Grant 16DZ0501100.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
  2. Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China
  3. Department of Biomedical Engineering, University of Florida, Gainesville, USA
  4. Centre for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
