Learning Visual Context by Comparison

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12350)

Abstract

Finding diseases from an X-ray image is an important yet highly challenging task. Current methods for solving this task exploit various characteristics of the chest X-ray image, but one of the most important characteristics is still missing: the necessity of comparison between related regions in an image. In this paper, we present the Attend-and-Compare Module (ACM) for capturing the difference between an object of interest and its corresponding context. We show that explicit difference modeling can be very helpful in tasks that require direct comparison between distant locations in an image. This module can be plugged into existing deep learning models. For evaluation, we apply our module to three chest X-ray recognition tasks and the COCO object detection & segmentation tasks, and observe consistent improvements across tasks. The code is available at https://github.com/mk-minchul/attend-and-compare.
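As a rough illustration of the idea in the abstract, explicit difference modeling between an attended object descriptor and an attended context descriptor might be sketched as below. This is not the authors' exact ACM (see the linked repository for the reference implementation); the function name, the softmax-attention pooling, and the residual injection of the difference are all simplifying assumptions made here for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_and_compare_sketch(feat, w_obj, w_ctx):
    """Simplified difference-modeling sketch (hypothetical, not the paper's ACM).

    feat:  (C, HW) feature map flattened over spatial positions.
    w_obj, w_ctx: (C,) projection vectors producing two spatial
    attention maps, one for the object and one for its context.
    """
    # Two attention distributions over spatial locations.
    a_obj = softmax(w_obj @ feat)   # (HW,)
    a_ctx = softmax(w_ctx @ feat)   # (HW,)
    # Attention-weighted pooled descriptors.
    v_obj = feat @ a_obj            # (C,)
    v_ctx = feat @ a_ctx            # (C,)
    # Explicit difference between object and context descriptors,
    # broadcast back to every spatial position as a residual signal.
    diff = v_obj - v_ctx            # (C,)
    return feat + diff[:, None]     # (C, HW)

C, HW = 8, 16
rng = np.random.default_rng(0)
out = attend_and_compare_sketch(rng.normal(size=(C, HW)),
                                rng.normal(size=C),
                                rng.normal(size=C))
print(out.shape)  # (8, 16)
```

The key point the sketch mirrors is that the comparison is explicit: the module subtracts one pooled descriptor from another rather than relying on a generic attention map to learn the contrast implicitly.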

Keywords

Context modeling, Attention mechanism, Chest X-ray

Supplementary material

Supplementary material 1: 504441_1_En_34_MOESM1_ESM.pdf (PDF, 24.1 MB)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Lunit Inc., Seoul, Republic of Korea
  2. Seoul National University Hospital, Seoul, Republic of Korea