Abstract
While convolutional neural networks (CNNs) dominate computer-aided 3D medical image diagnosis, they cannot capture global information due to the intrinsic locality of convolution. Transformers, a family of neural networks built on the self-attention mechanism, excel at representing global relations, yet they are computationally expensive and generalize poorly on small datasets. Applying Transformers to 3D medical images raises two major problems: 1) medical 3D volumes are far larger than natural images, which makes training computationally impractical; and 2) 3D medical image datasets are usually much smaller than natural image datasets, since medical images are expensive to collect. In this paper, we propose the 3D Medical image Transformer (3DMeT) to address these two issues. 3DMeT replaces the original linear embedding with 3D convolutional layers that perform block embedding, cutting the computational cost. Additionally, we propose a teacher-student training strategy that addresses the data-hungry issue by adapting convolutional layers' weights from a CNN teacher. Experiments on knee images show that 3DMeT (70.2) clearly outperforms 3D CNNs (65.3) and the Vision Transformer (58.7).
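The block embedding described above can be illustrated with a minimal NumPy sketch. A non-overlapping 3D convolution with kernel size and stride both equal to the block size is mathematically a shared linear projection applied to each block, so the sketch implements it as a reshape followed by a matrix product. The function name, block size, embedding dimension, and random projection weights below are illustrative assumptions, not the paper's actual configuration; 3DMeT would learn these weights (and, per the abstract, initialize them from a CNN teacher).

```python
import numpy as np

def block_embed_3d(volume, block=8, dim=64, rng=None):
    """Embed a 3D volume as a token sequence via non-overlapping blocks.

    Equivalent to a 3D convolution with kernel size = stride = `block`:
    each b*b*b block of voxels is flattened and linearly projected to `dim`.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    b = block
    D, H, W = volume.shape
    assert D % b == 0 and H % b == 0 and W % b == 0, "volume must tile evenly"

    # Cut the volume into (D/b * H/b * W/b) blocks of b**3 voxels each.
    blocks = volume.reshape(D // b, b, H // b, b, W // b, b)
    blocks = blocks.transpose(0, 2, 4, 1, 3, 5).reshape(-1, b ** 3)

    # Hypothetical shared projection; in 3DMeT these weights are learned.
    weight = rng.standard_normal((b ** 3, dim)) / np.sqrt(b ** 3)
    return blocks @ weight  # shape: (num_blocks, dim)

# A 32x64x64 volume with 8^3 blocks yields (32/8)*(64/8)*(64/8) = 256 tokens,
# versus 131,072 tokens if every voxel were embedded individually.
vol = np.zeros((32, 64, 64), dtype=np.float32)
tokens = block_embed_3d(vol, block=8, dim=64)
print(tokens.shape)  # (256, 64)
```

Shortening the token sequence this way is what makes self-attention, whose cost grows quadratically with sequence length, tractable on volumetric data.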
S. Wang and Z. Zhuang—Contributed equally.
This work was supported by the National Key Research and Development Program of China (2018YFC0116400), National Natural Science Foundation of China (NSFC) grants (62001292), Shanghai Pujiang Program(19PJ1406800), and Interdisciplinary Program of Shanghai Jiao Tong University.
© 2021 Springer Nature Switzerland AG
Cite this paper
Wang, S. et al. (2021). 3DMeT: 3D Medical Image Transformer for Knee Cartilage Defect Assessment. In: Lian, C., Cao, X., Rekik, I., Xu, X., Yan, P. (eds) Machine Learning in Medical Imaging. MLMI 2021. Lecture Notes in Computer Science, vol 12966. Springer, Cham. https://doi.org/10.1007/978-3-030-87589-3_36
Print ISBN: 978-3-030-87588-6
Online ISBN: 978-3-030-87589-3