Abstract
The application of deep learning to medical imaging has attracted growing attention in recent years. However, medical images are scarce, cannot be produced at will, and are costly to annotate, which makes training deep models difficult. Masked Autoencoders (MAE), a recent self-supervised approach built on the Vision Transformer (ViT) encoder, raise the difficulty of the pretext task by masking a large fraction of the non-overlapping image patches, forcing the model to learn deeper image representations. In this paper, we pre-train with the MAE scheme on three different small-scale datasets and evaluate on a medical image segmentation task. We find experimentally that a hybrid pre-training dataset, formed by mixing the training set of the downstream task with other datasets, performs well: the results improve over training without pre-training and surpass those obtained with any single dataset, demonstrating the potential of self-supervised hybrid pre-training for medical segmentation tasks.
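To make the masking step described above concrete, the following is a minimal PyTorch sketch of MAE-style random patch masking. It is not the authors' implementation; the function name random_masking, the 75% mask ratio, and the tensor shapes are illustrative assumptions.

```python
import torch


def random_masking(patch_tokens: torch.Tensor, mask_ratio: float = 0.75):
    """Randomly hide a fraction of patch tokens, in the spirit of MAE pre-training.

    patch_tokens: (batch, num_patches, dim) embeddings of the non-overlapping patches.
    Returns the visible tokens fed to the encoder, a binary mask (1 = masked),
    and the permutation needed to restore the original patch order for the decoder.
    """
    b, n, d = patch_tokens.shape
    n_keep = int(n * (1.0 - mask_ratio))

    noise = torch.rand(b, n, device=patch_tokens.device)  # one random score per patch
    ids_shuffle = torch.argsort(noise, dim=1)             # ascending: lowest scores are kept
    ids_restore = torch.argsort(ids_shuffle, dim=1)       # inverse permutation

    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(
        patch_tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d)
    )

    mask = torch.ones(b, n, device=patch_tokens.device)
    mask[:, :n_keep] = 0                                   # 0 = kept, 1 = masked (shuffled order)
    mask = torch.gather(mask, 1, ids_restore)              # back to the original patch order
    return visible, mask, ids_restore


# Example: a 224x224 image cut into 16x16 patches gives 14*14 = 196 tokens;
# with a 75% mask ratio only 49 visible patches pass through the ViT encoder.
tokens = torch.randn(2, 196, 768)
visible, mask, ids_restore = random_masking(tokens, mask_ratio=0.75)
print(visible.shape)  # torch.Size([2, 49, 768])
```

Sorting per-sample random noise is a simple way to draw a different subset of patches without replacement for every image in the batch, which is what makes the reconstruction pretext task hard enough to yield useful representations.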
Cite this paper
Han, Y., Chen, H., Xu, P., Li, Y., Li, K., Yin, J. (2022). Hybrid Pre-training Based on Masked Autoencoders for Medical Image Segmentation. In: Cai, Z., Chen, Y., Zhang, J. (eds) Theoretical Computer Science. NCTCS 2022. Communications in Computer and Information Science, vol 1693. Springer, Singapore. https://doi.org/10.1007/978-981-19-8152-4_12