A Segmentation-Aware Deep Fusion Network for Compressed Sensing MRI

  • Zhiwen Fan
  • Liyan Sun
  • Xinghao Ding (corresponding author)
  • Yue Huang
  • Congbo Cai
  • John Paisley
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11210)


Compressed sensing MRI is a classic inverse problem in computational imaging that accelerates MR imaging by measuring less k-space data. Deep neural network models offer stronger representation ability and faster reconstruction than “shallow” optimization-based methods. However, existing deep CS-MRI models overlook the high-level semantic supervision available from the many segmentation labels in MRI datasets. In this paper, we propose a segmentation-aware deep fusion network, called SADFN, for compressed sensing MRI. A multilayer feature aggregation (MLFA) method fuses the features from all layers of the segmentation network, and the aggregated feature maps, which carry semantic information, are then supplied to each layer of the reconstruction network through a feature fusion strategy. This makes the reconstruction network aware of the different regions in the image it reconstructs, simplifying the function mapping it must learn. We demonstrate the utility of this cross-layer, cross-task information fusion strategy in a comparative study. Extensive experiments on the brain segmentation benchmarks MRBrainS and BraTS15 validate that the proposed SADFN model achieves state-of-the-art accuracy in compressed sensing MRI. This paper thus provides a novel approach to guiding a low-level vision task with information from a mid- or high-level task.
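The core mechanism described above (aggregating features from every segmentation-network layer, then concatenating the result into the reconstruction network) can be sketched in a few lines. This is a minimal NumPy illustration of the idea only, not the authors' architecture: the function names, layer shapes, and nearest-neighbour upsampling are assumptions made for the example.

```python
import numpy as np

def upsample_nn(feat, size):
    """Nearest-neighbour upsampling of a (C, H, W) feature map to (C, size, size)."""
    c, h, w = feat.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return feat[:, rows[:, None], cols[None, :]]

def aggregate_multilayer(features, size):
    """MLFA-style aggregation (sketch): bring every layer's feature map
    to a common resolution and stack them along the channel axis."""
    return np.concatenate([upsample_nn(f, size) for f in features], axis=0)

def fuse(recon_feat, seg_feat):
    """Feature fusion (sketch): concatenate a reconstruction-network feature
    map with the aggregated segmentation features along channels; in a real
    network a 1x1 convolution would typically follow to mix the channels."""
    return np.concatenate([recon_feat, seg_feat], axis=0)

# Toy segmentation features from three layers at decreasing resolutions.
f1 = np.random.rand(8, 64, 64)
f2 = np.random.rand(16, 32, 32)
f3 = np.random.rand(32, 16, 16)

agg = aggregate_multilayer([f1, f2, f3], size=64)   # shape (56, 64, 64)
fused = fuse(np.random.rand(8, 64, 64), agg)        # shape (64, 64, 64)
```

In the paper's setting the fused maps would feed every layer of the reconstruction network, so each layer "sees" which tissue region it is reconstructing.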


Keywords: Compressed sensing · Magnetic resonance imaging · Medical image segmentation · Deep neural network

Supplementary material

Supplementary material 1: 474211_1_En_4_MOESM1_ESM.pdf (675 KB)



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Zhiwen Fan (1)
  • Liyan Sun (1)
  • Xinghao Ding (1), corresponding author
  • Yue Huang (1)
  • Congbo Cai (1)
  • John Paisley (2)
  1. Fujian Key Laboratory of Sensing and Computing for Smart City, Xiamen University, Xiamen, China
  2. Department of Electrical Engineering, Columbia University, New York, USA
