Attention-Guided Decoder in Dilated Residual Network for Accurate Aortic Valve Segmentation in 3D CT Scans

  • Bowen Fan
  • Naoki Tomii
  • Hiroyuki Tsukihara
  • Eriko Maeda
  • Haruo Yamauchi
  • Kan Nawata
  • Asuka Hatano
  • Shu Takagi
  • Ichiro Sakuma
  • Minoru Ono
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11794)

Abstract

Automatic aortic valve segmentation in cardiac CT scans is of high significance for surgeons' diagnosis of aortic valve disease and for planning aortic valve-sparing surgery. However, the very fast flapping motion, ambiguous shape, and extremely thin structure of the aortic valve make automatic segmentation difficult. In this paper, we propose an end-to-end deep learning method for segmenting the aortic valve from cardiac CT scans. Our method uses a 3D voxel-wise dilated residual network (DRN) as the backbone and equips it with novel attention-guided decoder modules that suppress non-valve artifacts and noise and attend to the fine leaflets, yielding accurate valve segmentation results. We conducted qualitative and quantitative comparisons with state-of-the-art (SOTA) 3D medical image segmentation models, and the experimental results corroborate that the proposed method is highly competitive.
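
The abstract does not detail the attention-guided decoder module itself. As a rough illustration only, the sketch below shows one common way such a decoder-side attention gate can be realized in PyTorch for 3D volumes, using additive attention in the spirit of Attention U-Net: a coarse decoder (gating) signal reweights the encoder skip features so that non-valve voxels are suppressed before upsampling. The class name, channel sizes, and layer choices are assumptions for illustration and do not reproduce the authors' exact module.

```python
import torch
import torch.nn as nn

class AttentionGate3D(nn.Module):
    """Illustrative 3D additive attention gate for a decoder path.

    This is a sketch of the general technique (attention-gated skip
    connections), not the paper's published architecture.
    """

    def __init__(self, skip_channels, gating_channels, inter_channels):
        super().__init__()
        # 1x1x1 projections of the skip and gating features into a shared space.
        self.theta = nn.Conv3d(skip_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv3d(gating_channels, inter_channels, kernel_size=1)
        # Collapse to a single attention coefficient per voxel.
        self.psi = nn.Conv3d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, skip, gating):
        # Resample the coarse gating signal onto the skip connection's grid.
        g = nn.functional.interpolate(
            gating, size=skip.shape[2:], mode="trilinear", align_corners=False
        )
        # Additive attention: voxels salient in both streams receive high weight.
        attn = self.sigmoid(self.psi(self.relu(self.theta(skip) + self.phi(g))))
        # Suppress non-valve voxels in the skip features before decoding.
        return skip * attn


# Hypothetical usage with made-up feature-map sizes:
gate = AttentionGate3D(skip_channels=64, gating_channels=128, inter_channels=32)
skip = torch.randn(1, 64, 32, 32, 32)      # encoder feature map
gating = torch.randn(1, 128, 16, 16, 16)   # coarser decoder feature map
gated_skip = gate(skip, gating)            # same shape as `skip`
```
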

Keywords

Aortic valve · 3D segmentation · Attention-guided decoder


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Bowen Fan (1)
  • Naoki Tomii (2)
  • Hiroyuki Tsukihara (3)
  • Eriko Maeda (3)
  • Haruo Yamauchi (3)
  • Kan Nawata (3)
  • Asuka Hatano (1)
  • Shu Takagi (1)
  • Ichiro Sakuma (1)
  • Minoru Ono (3)

  1. Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
  2. Center for Disease Biology and Integrative Medicine, The University of Tokyo, Tokyo, Japan
  3. The University of Tokyo Hospital, Tokyo, Japan
