
Deep Active Self-paced Learning for Biomedical Image Analysis

  • Wenzhe Wang
  • Ruiwei Feng
  • Xuechen Liu
  • Yifei Lu
  • Yanjie Wang
  • Ruoqian Guo
  • Zhiwen Lin
  • Tingting Chen
  • Danny Z. Chen
  • Jian Wu (corresponding author)
Chapter
Part of the Intelligent Systems Reference Library book series (ISRL, volume 171)

Abstract

Automatic and accurate analysis of biomedical images (e.g., image classification, lesion detection, and segmentation) plays an important role in computer-aided diagnosis of common human diseases. However, this task is challenging due to the need for sufficient training data with high-quality annotation, which is both time-consuming and costly to obtain. In this chapter, we propose a novel Deep Active Self-paced Learning (DASL) strategy that reduces annotation effort and also makes use of unannotated samples, based on a combination of Active Learning (AL) and Self-Paced Learning (SPL). To evaluate the performance of the DASL strategy, we apply it to two typical problems in biomedical image analysis: pulmonary nodule segmentation in 3D CT images and diabetic retinopathy (DR) identification in digital retinal fundus images. For each scenario, we propose a novel deep learning model and train it with the DASL strategy. Experimental results show that models trained with our DASL strategy perform much better than those trained without it on the same amount of annotated samples.
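The abstract describes DASL only at a high level, as a combination of AL (querying annotations for informative samples) and SPL (gradually adding confidently pseudo-labeled "easy" samples). The sketch below is a minimal, generic illustration of how one such training round could be structured; the function names, the entropy-based uncertainty measure, and the thresholds are assumptions for illustration only and are not the chapter's actual implementation.

```python
# A minimal, generic sketch of one combined Active + Self-Paced Learning round.
# Illustrative only: `predict_proba`, `dasl_round`, and the thresholds are
# assumed names/values, not the DASL method as published in this chapter.
import numpy as np

rng = np.random.default_rng(0)

def predict_proba(model, samples):
    """Stand-in for a trained network's per-sample class probabilities."""
    logits = samples @ model
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

def uncertainty(probs):
    """Predictive entropy: high entropy = informative sample for AL."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def dasl_round(model, unlabeled, query_size=8, spl_threshold=0.9):
    probs = predict_proba(model, unlabeled)
    ent = uncertainty(probs)

    # Active Learning step: send the most uncertain samples to an annotator.
    query_idx = np.argsort(ent)[-query_size:]

    # Self-Paced Learning step: keep only confident ("easy") samples and
    # use the model's own predictions as pseudo-labels for them.
    confidence = probs.max(axis=1)
    easy_idx = np.setdiff1d(np.where(confidence >= spl_threshold)[0], query_idx)
    pseudo_labels = probs[easy_idx].argmax(axis=1)

    return query_idx, easy_idx, pseudo_labels

# Toy usage: 100 unlabeled 16-dim feature vectors and a random linear "model".
unlabeled = rng.normal(size=(100, 16))
model = rng.normal(size=(16, 2))
query_idx, easy_idx, pseudo = dasl_round(model, unlabeled)
print(len(query_idx), "samples queried for annotation;",
      len(easy_idx), "pseudo-labeled for self-paced training")
```

In an actual DASL iteration, the newly annotated and pseudo-labeled samples would be added to the training set and the network retrained before the next round.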


Acknowledgements

The research of Jian Wu was partially supported by the Ministry of Education of China under grant No. 2017PT18, the Zhejiang University Education Foundation under grants No. K18-511120-004, No. K17-511120-017, and No. K17-518051-021, the Major Scientific Project of Zhejiang Lab under grant No. 2018DG0ZX01, and the National Natural Science Foundation of China under grant No. 61672453. The research of Danny Z. Chen was supported in part by NSF Grant CCF-1617735.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Wenzhe Wang 1, 2
  • Ruiwei Feng 1, 2
  • Xuechen Liu 1, 2
  • Yifei Lu 1, 2
  • Yanjie Wang 1, 2
  • Ruoqian Guo 1, 2
  • Zhiwen Lin 1, 2
  • Tingting Chen 1, 2
  • Danny Z. Chen 2, 3
  • Jian Wu 1, 2 (corresponding author)

  1. College of Computer Science and Technology, Zhejiang University, Hangzhou, China
  2. Real Doctor AI Research Centre, Zhejiang University, Hangzhou, China
  3. Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, USA
