Conditional Generative Adversarial Networks for Metal Artifact Reduction in CT Images of the Ear

  • Jianing Wang
  • Yiyuan Zhao
  • Jack H. Noble
  • Benoit M. Dawant
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11070)

Abstract

We propose an approach based on a conditional generative adversarial network (cGAN) for the reduction of metal artifacts (RMA) in computed tomography (CT) ear images of cochlear implant (CI) recipients. Our training set contains paired pre-implantation and post-implantation CTs of 90 ears. In the training phase, the cGAN learns a mapping from the artifact-affected CTs to the artifact-free CTs. In the inference phase, given new metal-artifact-affected CTs, the cGAN produces CTs in which the artifacts are removed. As a pre-processing step, we also propose a band-wise normalization method, which splits a CT image into three channels according to the intensity value of each voxel, and we show that this method improves the performance of the cGAN. We test our cGAN on post-implantation CTs of 74 ears, and the quality of the artifact-corrected images is evaluated quantitatively by comparing segmentations of intra-cochlear anatomical structures, obtained with a previously published method, in the real pre-implantation and the artifact-corrected CTs. We show that the proposed method leads to an average surface error of 0.18 mm, which is about half of what could be achieved with a previously proposed technique.
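The band-wise normalization step described above splits each CT into three intensity channels before the volume is fed to the cGAN. The exact intensity cut-offs are not given in the abstract, so the sketch below is only illustrative: it assumes each band is defined by a hypothetical (low, high) intensity range, clipped, and linearly rescaled to [-1, 1]. The function name `bandwise_normalize` and the threshold values are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def bandwise_normalize(ct, bands=((-1000.0, 300.0), (300.0, 3000.0), (3000.0, 6000.0))):
    """Split a CT volume into per-band channels and rescale each band to [-1, 1].

    `bands` holds hypothetical (low, high) intensity cut-offs; the paper's actual
    thresholds are not stated in the abstract, so these values are placeholders.
    """
    channels = []
    for low, high in bands:
        band = np.clip(ct, low, high)                    # restrict voxels to this band's range
        band = 2.0 * (band - low) / (high - low) - 1.0   # linear rescale of the band to [-1, 1]
        channels.append(band.astype(np.float32))
    return np.stack(channels, axis=-1)                   # channel-last output, shape (..., 3)

# Usage example with a synthetic volume in Hounsfield-like units
volume = np.random.uniform(-1000, 6000, size=(64, 64, 64))
three_channel = bandwise_normalize(volume)
print(three_channel.shape)  # (64, 64, 64, 3)
```

Splitting by intensity in this way keeps the very bright metal voxels from dominating a single globally normalized channel, which is one plausible reason such a pre-processing step could help the cGAN.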

Keywords

Conditional generative adversarial networks · Metal artifact reduction · Cochlear implants

Acknowledgements

This work has been supported in part by NIH grants R01DC014037 and R01DC014462 and by the Advanced Computing Center for Research and Education (ACCRE) of Vanderbilt University. The content is solely the responsibility of the authors and does not necessarily represent the official views of these institutions.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Jianing Wang (1)
  • Yiyuan Zhao (1)
  • Jack H. Noble (1)
  • Benoit M. Dawant (1)

  1. Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, USA
