Abstract
Medical visual question answering (VQA) is a challenging task that requires answering clinical questions about a given medical image by jointly reasoning over visual and language information. However, because training data for medical VQA is scarce, the pre-training and fine-tuning paradigm has become a common way to improve model generalization. In this paper, we present a novel self-supervised approach that learns unimodal and multimodal feature representations of input images and text from medical image-caption datasets, using both unimodal and multimodal contrastive losses together with masked language modeling and image-text matching as pre-training objectives. The pre-trained model is then transferred to downstream medical VQA tasks. The proposed approach achieves state-of-the-art (SOTA) performance on three publicly available medical VQA datasets, with significant accuracy improvements of 2.2%, 14.7%, and 1.7%, respectively. In addition, we conduct a comprehensive analysis to validate the effectiveness of the different components of the approach and to study different pre-training settings. Our code and models are available at https://github.com/pengfeiliHEU/MUMC.
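To make the combination of pre-training objectives concrete, the following is a minimal PyTorch-style sketch, not the authors' released implementation: the function names, loss weights, and the simple in-batch symmetric InfoNCE formulation are illustrative assumptions, showing how an image-text contrastive loss can be combined with masked language modeling (MLM) and image-text matching (ITM) losses into a single pre-training objective.

```python
# Minimal sketch (illustrative, not the authors' code): combining an
# image-text contrastive loss with MLM and ITM losses for pre-training.
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)                 # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)             # text -> image direction
    return (loss_i2t + loss_t2i) / 2

def pretraining_loss(image_emb, text_emb, mlm_loss, itm_loss,
                     w_itc=1.0, w_mlm=1.0, w_itm=1.0):
    """Weighted sum of the three objectives; the weights are hypothetical."""
    itc_loss = contrastive_loss(image_emb, text_emb)
    return w_itc * itc_loss + w_mlm * mlm_loss + w_itm * itm_loss
```

In the actual model, the MLM and ITM losses would typically be computed by the multimodal fusion module, over masked caption tokens and over matched versus mismatched image-text pairs, respectively, while the contrastive terms operate on the unimodal encoder outputs.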
Acknowledgement
This work was supported by the Natural Science Foundation of Heilongjiang Province under grant number LH2021F015.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Li, P., Liu, G., He, J., Zhao, Z., Zhong, S.: Masked Vision and Language Pre-training with Unimodal and Multimodal Contrastive Losses for Medical Visual Question Answering. In: Greenspan, H., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. Lecture Notes in Computer Science, vol. 14220. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-43907-0_36