MVP-Net: Multi-view FPN with Position-Aware Attention for Deep Universal Lesion Detection

  • Zihao Li
  • Shu Zhang
  • Junge Zhang
  • Kaiqi Huang
  • Yizhou Wang
  • Yizhou Yu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)

Abstract

Universal lesion detection (ULD) on computed tomography (CT) images is an important but underdeveloped problem. Recently, deep learning-based approaches have been proposed for ULD, aiming to learn representative features from annotated CT data. However, the data hunger of deep learning models and the scarcity of annotated medical data hinder these approaches from advancing further. In this paper, we propose to incorporate domain knowledge from clinical practice into the design of universal lesion detectors. Specifically, since radiologists tend to inspect multiple intensity windows for an accurate diagnosis, we explicitly model this process and propose a multi-view feature pyramid network (FPN), where multi-view features are extracted from images rendered with varied window widths and window levels; to effectively combine this multi-view information, we further propose a position-aware attention module. With the proposed model design, the data-hunger problem is alleviated, as the correctly induced clinical-practice prior makes the learning task easier. We show promising results with the proposed model, achieving an absolute gain of \(\mathbf {5.65\%}\) (in the sensitivity of FPs@4.0) over the previous state-of-the-art on the NIH DeepLesion dataset.
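The multi-view input described above relies on standard CT intensity windowing: the same Hounsfield-unit (HU) slice is rendered several times, each with a different window width (WW) and window level (WL), and the renderings are stacked as channels. A minimal sketch of this rendering step, where the specific window settings are illustrative and not necessarily those used by the authors:

```python
import numpy as np

def apply_ct_window(hu_slice, width, level):
    """Map a slice of Hounsfield units to [0, 1] using a given
    window width (WW) and window level (WL)."""
    low = level - width / 2.0
    high = level + width / 2.0
    windowed = np.clip(hu_slice, low, high)
    return (windowed - low) / (high - low)

def multi_view_stack(hu_slice, windows):
    """Render one slice under several (WW, WL) settings, producing
    one channel per view as a multi-view network input."""
    return np.stack([apply_ct_window(hu_slice, w, l) for w, l in windows],
                    axis=0)

# Illustrative (WW, WL) settings: soft tissue, lung, and bone windows.
WINDOWS = [(400, 50), (1500, -600), (1800, 400)]
hu = np.random.uniform(-1000, 1000, size=(512, 512))  # stand-in CT slice
views = multi_view_stack(hu, WINDOWS)
print(views.shape)  # (3, 512, 512)
```

Each channel emphasizes different tissue contrasts, which is what allows the multi-view FPN branches to extract complementary features from a single slice.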

Keywords

Universal lesion detection · Multi-view · Position-aware attention

Notes

Acknowledgement

This work is funded by the National Natural Science Foundation of China (Grant No. 61876181, 61721004, 61403383, 61625201, 61527804) and the Projects of Chinese Academy of Sciences (Grant QYZDB-SSW-JSC006 and Grant 173211KYSB20160008). We would like to thank Feng Liu for valuable discussions.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Zihao Li (1, 2)
  • Shu Zhang (3)
  • Junge Zhang (1)
  • Kaiqi Huang (1)
  • Yizhou Wang (2, 3, 4)
  • Yizhou Yu (2)

  1. Institute of Automation, Chinese Academy of Sciences, Beijing, China
  2. Deepwise AI Lab, Beijing, China
  3. Department of Computer Science, Peking University, Haidian District, Beijing, China
  4. Peng Cheng Laboratory, Shenzhen, China