Encoding CT Anatomy Knowledge for Unpaired Chest X-ray Image Decomposition

  • Zeju Li
  • Han Li
  • Hu Han
  • Gonglei Shi
  • Jiannan Wang
  • S. Kevin Zhou
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)

Abstract

Although a chest X-ray (CXR) offers only a 2D projection with overlapping anatomies, it is widely used for clinical diagnosis. There is clinical evidence that decomposing an X-ray image into different components (e.g., bone, lung, and soft tissue) improves diagnostic value. We therefore propose a decomposition generative adversarial network (DecGAN) to anatomically decompose a CXR image using only unpaired data. We leverage the anatomical knowledge embedded in CT, which provides 3D volumes with clearly visible anatomies. Our key idea is to embed the prior decomposition knowledge from CT into the latent space of an unpaired CXR autoencoder. Specifically, we train DecGAN with a decomposition loss, adversarial losses, cycle-consistency losses, and a mask loss to guarantee that the decomposed results in the latent space preserve realistic body structures. Extensive experiments demonstrate that DecGAN achieves superior unsupervised CXR bone suppression and enables modulating CXR components via latent space disentanglement. Furthermore, we illustrate the diagnostic value of DecGAN and show that it outperforms state-of-the-art approaches in predicting 11 out of 14 common lung diseases.
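The abstract lists four loss terms used to train DecGAN. As a schematic illustration only (the weights and symbol names below are assumed for exposition, not taken from the paper), the combined training objective can be written as a weighted sum:

```latex
% Schematic DecGAN objective; the \lambda weights and loss symbols
% are illustrative assumptions, not the paper's notation.
\mathcal{L}_{\mathrm{DecGAN}}
  = \lambda_{\mathrm{dec}}\,\mathcal{L}_{\mathrm{dec}}
  + \lambda_{\mathrm{adv}}\,\mathcal{L}_{\mathrm{adv}}
  + \lambda_{\mathrm{cyc}}\,\mathcal{L}_{\mathrm{cyc}}
  + \lambda_{\mathrm{mask}}\,\mathcal{L}_{\mathrm{mask}}
```

Here the decomposition term encodes the CT-derived prior, the adversarial and cycle-consistency terms handle the unpaired CXR domain (as in CycleGAN-style training), and the mask term constrains the decomposed components to realistic body structures.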

Notes

Acknowledgements

H. Han’s work is supported in part by the Natural Science Foundation of China (grants 61732004 and 61672496), External Cooperation Program of Chinese Academy of Sciences (CAS) (grant GJHZ1843), and Youth Innovation Promotion Association CAS (grant 2018135). We thank Cheng Ouyang for helpful comments.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Biomedical Image Analysis Group, Imperial College London, London, UK
  2. University of Chinese Academy of Sciences, Beijing, China
  3. Medical Imaging, Robotics, Analytic Computing Laboratory/Engineering (MIRACLE), Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing, China
  4. Peng Cheng Laboratory, Shenzhen, China
  5. Department of Computer Science and Technology, Southeast University, Nanjing, China
  6. College of Electronic Information and Optical Engineering, Nankai University, Tianjin, China