
Probabilistic Radiomics: Ambiguous Diagnosis with Controllable Shape Analysis

  • Jiancheng Yang
  • Rongyao Fang
  • Bingbing Ni (corresponding author)
  • Yamin Li
  • Yi Xu
  • Linguo Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)

Abstract

Radiomics analysis has achieved great success in recent years. However, conventional Radiomics analysis suffers from insufficiently expressive hand-crafted features. Recently, deep learning techniques, e.g., convolutional neural networks (CNNs), have come to dominate research in Computer-Aided Diagnosis (CADx). Unfortunately, as black-box predictors, we argue that CNNs are “diagnosing” voxels (or pixels) rather than lesions; in other words, the visual saliency of a trained CNN is not necessarily concentrated on the lesions. On the other hand, classification in clinical applications suffers from inherent ambiguity: radiologists may produce diverse diagnoses on challenging cases. To this end, we propose a controllable and explainable Probabilistic Radiomics framework that combines Radiomics analysis with probabilistic deep learning. In our framework, 3D CNN features are extracted from the lesion region only and then encoded into a lesion representation by a controllable Non-local Shape Analysis Module (NSAM) based on self-attention. Inspired by variational auto-encoders (VAEs), an Ambiguity PriorNet is used to approximate the ambiguity distribution over human experts. The final diagnosis is obtained by combining the ambiguity prior sample with the lesion representation, and the whole network, named \(DenseSharp^{+}\), is end-to-end trainable. We apply the proposed method to lung nodule diagnosis on the LIDC-IDRI database to validate its effectiveness.
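
The pipeline described above can be made concrete with a short sketch. The following PyTorch code is a minimal illustration, not the authors' implementation: it shows (i) a self-attention module standing in for the Non-local Shape Analysis Module, applied to CNN features gathered from the lesion region only, and (ii) a VAE-style Gaussian "ambiguity prior" whose sample is concatenated with the lesion representation before the diagnostic head. All class names (NSAMSketch, AmbiguityPriorSketch, DenseSharpPlusSketch), layer sizes, and the unconditional Gaussian prior are illustrative assumptions.

```python
# Minimal sketch of the abstract's pipeline (an assumption, not the authors' code):
# self-attention over lesion-only voxel features, plus a VAE-style ambiguity prior
# whose sample is fused with the lesion representation before classification.
import torch
import torch.nn as nn


class NSAMSketch(nn.Module):
    """Self-attention over a set of lesion-voxel feature vectors of shape (B, N, C)."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attended, _ = self.attn(x, x, x)          # non-local interactions among voxels
        x = self.norm(x + attended)               # residual connection + normalization
        return x.mean(dim=1)                      # pool to one (B, C) lesion descriptor


class AmbiguityPriorSketch(nn.Module):
    """VAE-style Gaussian latent intended to model inter-rater ambiguity."""

    def __init__(self, channels: int, latent: int = 8):
        super().__init__()
        self.to_stats = nn.Linear(channels, 2 * latent)

    def forward(self, lesion_feat: torch.Tensor):
        mu, logvar = self.to_stats(lesion_feat).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, kl


class DenseSharpPlusSketch(nn.Module):
    """Combines the lesion representation with an ambiguity sample for diagnosis."""

    def __init__(self, channels: int = 64, latent: int = 8, classes: int = 2):
        super().__init__()
        self.nsam = NSAMSketch(channels)
        self.prior = AmbiguityPriorSketch(channels, latent)
        self.head = nn.Linear(channels + latent, classes)

    def forward(self, lesion_voxel_feats: torch.Tensor):
        # lesion_voxel_feats: (B, N, C) CNN features gathered from the lesion region only.
        lesion = self.nsam(lesion_voxel_feats)
        z, kl = self.prior(lesion)
        logits = self.head(torch.cat([lesion, z], dim=-1))
        return logits, kl


if __name__ == "__main__":
    feats = torch.randn(2, 128, 64)               # 2 lesions, 128 voxels, 64-dim features
    logits, kl = DenseSharpPlusSketch()(feats)
    print(logits.shape, kl.item())                # torch.Size([2, 2]) and a scalar KL
```

In the paper's setting the prior is meant to capture the spread of expert annotations, so during training the KL term would be weighted against the classification loss; the sketch only exposes the two outputs (logits and KL) needed for such an objective.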

Keywords

Radiomics · Deep learning · Attention · Computer-Aided Diagnosis (CADx) · Explainable Artificial Intelligence (XAI)

Notes

Acknowledgment

This work was supported by the National Science Foundation of China (U1611461, 61502301, 61521062), the SJTU-UCLA Joint Center for Machine Perception and Inference, China's Thousand Youth Talents Plan, STCSM 17511105401 and 18DZ2270700, and the MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China. This work was also jointly supported by an SJTU-Minivision joint research grant.

Supplementary material

Supplementary material 1: 490281_1_En_73_MOESM1_ESM.pdf (PDF, 525 KB)


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Jiancheng Yang (1, 2, 3)
  • Rongyao Fang (1)
  • Bingbing Ni (1, 2, 3) (corresponding author)
  • Yamin Li (1)
  • Yi Xu (1)
  • Linguo Li (1)

  1. Shanghai Jiao Tong University, Shanghai, China
  2. MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China
  3. Shanghai Institute for Advanced Communication and Data Science, Shanghai, China
