Leveraging Other Datasets for Medical Imaging Classification: Evaluation of Transfer, Multi-task and Semi-supervised Learning

  • Hong Shang
  • Zhongqian Sun
  • Wei Yang
  • Xinghui Fu
  • Han Zheng
  • Jia Chang
  • Junzhou Huang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11768)

Abstract

To address the data scarcity challenge in developing deep-learning-based medical imaging classification, a widely used strategy is to leverage other available datasets during training. Three machine learning paradigms fall under this strategy: transfer learning (TL), multi-task learning (MTL) and semi-supervised learning (SSL). TL and MTL bring in another labeled dataset, usually from different categories, while SSL utilizes an unlabeled dataset from the same category. Each has proven useful for medical imaging tasks. In this work, we unify these three algorithms into one framework in order to directly compare their individual contributions and to combine them for additional performance gains. For SSL, state-of-the-art consistency-based methods were evaluated, including the \(\varPi \)-Model and virtual adversarial training. Experiments were conducted on classifying gastric diseases from endoscopic images, trained with varying amounts of data. We observed that, individually, TL yields the largest performance gain and SSL the smallest. When used together, their contributions build up constructively, leading to further improved performance, especially with a larger-capacity network. This work helps guide the application of TL, MTL and SSL, individually or in combination, to other medical applications.
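The unified framework the abstract describes can be illustrated with a minimal sketch: a shared feature extractor with two task heads (hard-parameter-sharing MTL), trained with a combined objective of target-task cross-entropy, an auxiliary-task loss, and a \(\varPi \)-Model style consistency penalty between two stochastically perturbed forward passes on unlabeled images. TL would correspond to initializing the shared weights from a pretrained model rather than randomly. All names, shapes, and loss weights below are illustrative assumptions, not the authors' implementation:

```python
# Minimal NumPy sketch (illustrative only) of a combined
# supervised + MTL + Pi-Model-consistency training objective.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true class.
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

class SharedBackboneModel:
    """Shared feature extractor with a target head and an auxiliary head."""
    def __init__(self, in_dim=8, feat_dim=4, n_target=3, n_aux=5):
        # For TL, self.W would be loaded from a pretrained model instead.
        self.W = rng.normal(size=(in_dim, feat_dim)) * 0.1      # shared
        self.H_t = rng.normal(size=(feat_dim, n_target)) * 0.1  # target head
        self.H_a = rng.normal(size=(feat_dim, n_aux)) * 0.1     # auxiliary head

    def forward(self, x, noise=0.0):
        # The Pi-Model relies on stochastic perturbations; here, input noise.
        if noise > 0:
            x = x + rng.normal(scale=noise, size=x.shape)
        feat = np.tanh(x @ self.W)
        return softmax(feat @ self.H_t), softmax(feat @ self.H_a)

def total_loss(model, x_lab, y_lab, x_aux, y_aux, x_unlab,
               w_mtl=0.5, w_cons=1.0):
    p_t, _ = model.forward(x_lab)
    l_sup = cross_entropy(p_t, y_lab)              # target-task CE
    _, p_a = model.forward(x_aux)
    l_aux = cross_entropy(p_a, y_aux)              # auxiliary-task CE (MTL)
    p1, _ = model.forward(x_unlab, noise=0.05)     # two stochastic passes
    p2, _ = model.forward(x_unlab, noise=0.05)
    l_cons = ((p1 - p2) ** 2).sum(axis=1).mean()   # Pi-Model consistency
    return l_sup + w_mtl * l_aux + w_cons * l_cons

model = SharedBackboneModel()
loss = total_loss(model,
                  x_lab=rng.normal(size=(6, 8)), y_lab=rng.integers(0, 3, 6),
                  x_aux=rng.normal(size=(6, 8)), y_aux=rng.integers(0, 5, 6),
                  x_unlab=rng.normal(size=(10, 8)))
print(loss)
```

In practice each term would be computed per mini-batch and backpropagated through a deep network; the sketch only shows how the three data sources (target labels, auxiliary labels, unlabeled images) contribute to one scalar objective.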

Keywords

Machine learning · Computer-aided diagnosis

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Hong Shang (1)
  • Zhongqian Sun (1)
  • Wei Yang (1)
  • Xinghui Fu (1)
  • Han Zheng (1)
  • Jia Chang (2)
  • Junzhou Huang (1)
  1. Tencent AI Lab, Shenzhen, China
  2. Tencent AIMIS, Shenzhen, China