
COMe-SEE: Cross-modality Semantic Embedding Ensemble for Generalized Zero-Shot Diagnosis of Chest Radiographs

  • Angshuman Paul
  • Thomas C. Shen
  • Niranjan Balachandar
  • Yuxing Tang
  • Yifan Peng
  • Zhiyong Lu
  • Ronald M. Summers
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12446)

Abstract

Zero-shot learning, in spite of its recent popularity, remains an unexplored area for medical image analysis. We introduce a first-of-its-kind generalized zero-shot learning (GZSL) framework that utilizes information from two different imaging modalities (CT and x-ray) for the diagnosis of chest radiographs. Our model makes use of CT radiology reports to create a semantic space consisting of signatures corresponding to different chest diseases and conditions. We introduce a CrOss-Modality Semantic Embedding Ensemble (COMe-SEE) for zero-shot diagnosis of chest x-rays by relating an input x-ray to a signature in the semantic space. The ensemble, designed using a novel semantic saliency preserving autoencoder, utilizes the visual and the semantic saliency to facilitate GZSL. The use of an ensemble not only helps in dealing with noise but also makes our model useful across different datasets. Experiments on two publicly available datasets show that the proposed model can be trained using one dataset and still be applied to data from another source for zero-shot diagnosis of chest x-rays.
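
As a rough illustration of the generalized zero-shot inference step sketched in the abstract, the snippet below is a minimal, hypothetical sketch only, not the authors' released implementation: it assumes stand-in inputs (a visual feature vector for the input x-ray, per-disease semantic signatures built from CT report text, and simple projection matrices in place of the trained semantic-saliency-preserving autoencoder ensemble) and combines ensemble members by averaging before nearest-signature matching.

```python
# Illustrative sketch only -- not the paper's released code. Inputs are
# hypothetical: `xray_features` stands in for CNN visual features of a chest
# x-ray, `signatures` for per-disease semantic vectors derived from CT
# radiology reports, and `projections` for mappings that, in the paper,
# would come from trained semantic-saliency-preserving autoencoders.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def gzsl_diagnose(xray_features, projections, signatures):
    """Zero-shot diagnosis by cross-modality semantic embedding.

    xray_features : (d_vis,) visual feature vector of the input x-ray
    projections   : list of (d_sem, d_vis) matrices, one per ensemble member,
                    mapping visual features into the semantic space
    signatures    : dict {disease_name: (d_sem,) semantic signature}
    """
    # Each ensemble member embeds the x-ray into the semantic space;
    # averaging the embeddings is one simple way to combine the ensemble.
    embeddings = [W @ xray_features for W in projections]
    z = np.mean(embeddings, axis=0)

    # Relate the embedded x-ray to the closest disease signature.
    scores = {name: cosine(z, s) for name, s in signatures.items()}
    return max(scores, key=scores.get), scores

# Toy usage with random stand-ins for the real features and signatures.
rng = np.random.default_rng(0)
d_vis, d_sem = 1024, 300
projections = [rng.standard_normal((d_sem, d_vis)) for _ in range(3)]
signatures = {d: rng.standard_normal(d_sem)
              for d in ("cardiomegaly", "effusion", "pneumonia")}
label, scores = gzsl_diagnose(rng.standard_normal(d_vis), projections, signatures)
print(label)
```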

Keywords

Zero-shot learning · Chest x-ray · Semantic saliency · Autoencoder · Ensemble

Notes

Acknowledgment

This project was supported by the Intramural Research Programs of the National Institutes of Health, Clinical Center and National Library of Medicine. We thank NVIDIA for GPU card donation.

Supplementary material

Supplementary material 1: 506088_1_En_11_MOESM1_ESM.pdf (PDF, 129 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Angshuman Paul (1, corresponding author)
  • Thomas C. Shen (1)
  • Niranjan Balachandar (1)
  • Yuxing Tang (1)
  • Yifan Peng (2)
  • Zhiyong Lu (2)
  • Ronald M. Summers (1)
  1. Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, USA
  2. Biomedical Text Mining Group, National Center for Biotechnology Information, National Institutes of Health, Bethesda, USA
