Knowledge-Guided Pretext Learning for Utero-Placental Interface Detection

  • Conference paper
  • In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12261)

Abstract

Modern machine learning systems, such as convolutional neural networks, rely on a rich collection of training data to learn discriminative representations. In many medical imaging applications, unfortunately, collecting a large set of well-annotated data is prohibitively expensive. To overcome this data shortage and facilitate representation learning, we develop Knowledge-guided Pretext Learning (KPL), which learns anatomy-related image representations in a pretext task under the guidance of knowledge from the downstream target task. In the context of utero-placental interface detection in placental ultrasound, we find that KPL substantially improves the quality of the learned representations without consuming data from external sources such as ImageNet. It outperforms the widely adopted supervised pre-training and self-supervised learning approaches across model capacities and dataset scales. Our results suggest that pretext learning is a promising direction for representation learning in medical image analysis, especially in the small data regime.
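For intuition, here is a minimal sketch (in PyTorch, with hypothetical names) of the generic pretext-pretrain-then-fine-tune pattern the abstract describes; the paper's actual knowledge-guided pretext task and loss are defined in the full text, so treat this only as a schematic.

    import torch
    import torch.nn as nn

    def pretrain_on_pretext(encoder: nn.Module, pretext_head: nn.Module,
                            pretext_loader, epochs: int = 10) -> nn.Module:
        # Stage 1: learn representations on a pretext task whose labels are
        # generated automatically, so no manual annotation is consumed.
        model = nn.Sequential(encoder, pretext_head)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for patches, pseudo_labels in pretext_loader:
                opt.zero_grad()
                criterion(model(patches), pseudo_labels).backward()
                opt.step()
        return encoder  # Stage 2: reuse the encoder to initialise the UPI detector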


Notes

  1. Tiny-VGG builds on VGG-13 [22], with 16-32-64-128-256 channels in five blocks. See Appendix and the sketch below.
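
    As a rough illustration, the block structure implied by this footnote and Table 5 could be written as follows in PyTorch. The two-convolutions-per-block layout follows VGG-13; the pooling placement, input channels, and classifier head are assumptions rather than the paper's exact configuration.

        import torch.nn as nn

        def conv3(in_ch: int, out_ch: int) -> nn.Sequential:
            # 'conv3-x' module from Table 5: 3x3 convolution with x channels,
            # followed by batch norm and a ReLU non-linearity.
            return nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )

        def tiny_vgg(num_classes: int, in_ch: int = 1) -> nn.Sequential:
            layers, ch = [], in_ch
            for width in (16, 32, 64, 128, 256):              # five blocks
                layers += [conv3(ch, width), conv3(width, width),
                           nn.MaxPool2d(2)]                   # two convs per block, as in VGG-13
                ch = width
            layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                       nn.Linear(256, num_classes)]           # 'FC-X' head
            return nn.Sequential(*layers)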

References

  1. Shin, H.-C., et al.: Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans. Med. Imaging 35(5), 1285–1298 (2016)

  2. Gulshan, V., et al.: Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 316(22), 2402–2410 (2016)

  3. Esteva, A., et al.: Dermatologist-level classification of skin cancer with deep neural networks. Nature 542(7639), 115–118 (2017)

  4. Lee, H., et al.: An explainable deep-learning algorithm for the detection of acute intracranial haemorrhage from small datasets. Nat. Biomed. Eng. 3(3), 173 (2019)

  5. Qi, H., et al.: UPI-Net: semantic contour detection in placental ultrasound. In: ICCV-VRMI (2019)

  6. Noroozi, M., Favaro, P.: Unsupervised learning of visual representations by solving jigsaw puzzles. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9910, pp. 69–84. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46466-4_5

  7. Taleb, A., et al.: Multimodal self-supervised learning for medical image analysis. arXiv preprint arXiv:1912.05396 (2019)

  8. Raghu, M., et al.: Transfusion: understanding transfer learning for medical imaging. In: NeurIPS (2019)

  9. He, K., et al.: Rethinking ImageNet pre-training. In: ICCV (2019)

  10. Szegedy, C., et al.: Going deeper with convolutions. In: CVPR (2015)

  11. Misra, I., van der Maaten, L.: Self-supervised learning of pretext-invariant representations. arXiv preprint arXiv:1912.01991 (2019)

  12. Gidaris, S., et al.: Unsupervised representation learning by predicting image rotations. In: ICLR (2018)

  13. Tajbakhsh, N., et al.: Surrogate supervision for medical image analysis: effective deep learning from limited quantities of labeled data. In: ISBI (2019)

  14. Doersch, C., et al.: Unsupervised visual representation learning by context prediction. In: ICCV (2015)

  15. Bai, W., et al.: Self-supervised learning for cardiac MR image segmentation by anatomical position prediction. In: Shen, D., et al. (eds.) MICCAI 2019. LNCS, vol. 11765, pp. 541–549. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32245-8_60

  16. Goyal, P., et al.: Scaling and benchmarking self-supervised visual representation learning. In: ICCV (2019)

  17. Xie, S., Tu, Z.: Holistically-nested edge detection. In: ICCV (2015)

  18. Yu, Z., et al.: CASENet: deep category-aware semantic edge detection. In: CVPR (2017)

  19. Lawrence, S., Burns, I., Back, A., Tsoi, A.C., Giles, C.L.: Neural network classification and prior class probabilities. In: Montavon, G., Orr, G.B., Müller, K.-R. (eds.) Neural Networks: Tricks of the Trade. LNCS, vol. 7700, pp. 295–309. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-35289-8_19

  20. Collins, S.L., et al.: Influence of power Doppler gain setting on virtual organ computer-aided analysis indices in vivo: can use of the individual sub-noise gain level optimize information? Ultrasound Obstet. Gynecol. 40(1), 75–80 (2012)

  21. Irvin, J., et al.: CheXpert: a large chest radiograph dataset with uncertainty labels and expert comparison. In: AAAI (2019)

  22. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)

  23. Arbelaez, P., et al.: Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 33(5), 898–916 (2010)

Acknowledgements

Huan Qi is supported by a China Scholarship Council doctoral research fund (grant No. 201608060317). The NIH Eunice Kennedy Shriver National Institute of Child Health and Human Development Human Placenta Project UO1 HD 087209, EPSRC grant EP/M013774/1, and ERC-ADG-2015 694581 are also acknowledged.

Author information

Correspondence to Huan Qi.

Appendix

See Tables 5, 6, and 7.

Table 5. Network configurations of the three backbone CNNs: Tiny-VGG, VGG-13 and VGG-19. Each ‘conv3-x’ module contains a \(3\times 3\) convolution layer with x channels, followed by a batch-norm layer and a ReLU non-linearity. ‘FC-X’ means that the network outputs X class scores, where X is determined by the application: \(X=1000\) for ImageNet and \(X=5\) for CheXpert.
Table 6. Hyper-parameter settings for ImageNet and CheXpert image classification pre-training.
Table 7. Basic geometric transforms and contrast jittering were applied as routine data augmentation for SSL, KPL and subsequent UPI detection. For SSL and KPL, data augmentation was applied to image patches; for UPI detection, it was applied to image planes.
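
As an illustration only, routine augmentation of the kind described in Table 7 might be composed as follows with torchvision; the specific transforms and jitter ranges here are assumptions, not the paper's settings.

    from torchvision import transforms

    # Hypothetical ranges; Table 7 lists the settings actually used.
    routine_augmentation = transforms.Compose([
        transforms.RandomHorizontalFlip(),                            # basic geometry
        transforms.RandomAffine(degrees=10, translate=(0.05, 0.05)),  # basic geometry
        transforms.ColorJitter(contrast=0.2),                         # contrast jittering
        transforms.ToTensor(),
    ])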


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Qi, H., Collins, S., Noble, J.A. (2020). Knowledge-Guided Pretext Learning for Utero-Placental Interface Detection. In: Martel, A.L., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. Lecture Notes in Computer Science, vol. 12261. Springer, Cham. https://doi.org/10.1007/978-3-030-59710-8_57

  • DOI: https://doi.org/10.1007/978-3-030-59710-8_57

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59709-2

  • Online ISBN: 978-3-030-59710-8

  • eBook Packages: Computer Science; Computer Science (R0)
