Abstract
Visual Question Answering (VQA) is a popular research topic that has drawn attention from diverse fields such as computer vision and natural language processing. While existing English foundation models transfer well to downstream tasks such as visual question answering, research on Vietnamese VQA (ViVQA) remains scarce. Attention-based models have proven effective at producing spatial maps over the image regions or sentence parts relevant to a question, a key contributor to their success. In this article, we propose a joint attention-based modeling approach for language and vision in ViVQA, called the SCBM system, which achieves an Accuracy of 0.6201, a WUPS@0.9 score of 0.6814, and a WUPS@0.0 score of 0.8719 on the ViVQA benchmark dataset. These results are twice as good as the previous baseline (co-attention combined with PhoW2V), opening up possibilities for further advancements in ViVQA. We also discuss the development path of ViVQA systems toward breakthroughs in this field.
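The architectural details of SCBM are not given in this excerpt. As a rough, hypothetical illustration of the attention mechanism the abstract refers to, the sketch below applies scaled dot-product attention (Vaswani et al., 2017) over image-region features conditioned on a pooled question embedding; the feature shapes and the use of NumPy are assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(question, regions):
    """Attend over image-region features using the question as the query.

    question: (d,) pooled question embedding (e.g. from a PhoBERT-style encoder)
    regions:  (n, d) image-region features (e.g. from a ViT-style encoder)
    Returns a (d,) attended visual summary and the (n,) attention weights,
    which can be read as a spatial map over the n regions.
    """
    d = question.shape[-1]
    scores = regions @ question / np.sqrt(d)  # (n,) similarity scores
    weights = softmax(scores)                 # (n,) spatial attention map
    return weights @ regions, weights         # (d,) weighted region summary

# Toy usage with random features standing in for encoder outputs.
rng = np.random.default_rng(0)
q = rng.normal(size=8)        # pooled question embedding
r = rng.normal(size=(5, 8))   # five image-region features
summary, w = cross_attention(q, r)
```

The attended summary would then be fused with the question embedding and fed to an answer classifier; co-attention variants additionally attend over question tokens conditioned on the image.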
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Trung, H.L., Cong, T.D., Quoc, T.N., Hoang, V.T. (2023). SCBM: A Hybrid Model for Vietnamese Visual Question Answering. In: Dao, NN., Thinh, T.N., Nguyen, N.T. (eds) Intelligence of Things: Technologies and Applications. ICIT 2023. Lecture Notes on Data Engineering and Communications Technologies, vol 187. Springer, Cham. https://doi.org/10.1007/978-3-031-46573-4_26
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-46572-7
Online ISBN: 978-3-031-46573-4
eBook Packages: Intelligent Technologies and Robotics (R0)