
SurgicalGPT: End-to-End Language-Vision GPT for Visual Question Answering in Surgery

  • Conference paper
  • In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 (MICCAI 2023)

Abstract

Advances in GPT-based large language models (LLMs) are revolutionizing natural language processing and rapidly expanding its use across various domains. Incorporating uni-directional attention, these autoregressive LLMs can generate long, coherent paragraphs. However, visual question answering (VQA) tasks require both vision and language processing, so models with bi-directional attention or fusion techniques are often employed to capture the context of both modalities at once. As GPT does not natively process vision tokens, to exploit the advancements in GPT models for VQA in robotic surgery, we design an end-to-end trainable Language-Vision GPT (LV-GPT) model that extends the GPT2 model with a vision (image) input. The proposed LV-GPT incorporates a feature extractor (vision tokenizer) and vision token embeddings (token type and pose). Given the unidirectional attention of GPT models and their strength in generating coherent long paragraphs, we carefully sequence the word tokens before the vision tokens, mimicking the human thought process of first understanding the question and then inferring an answer from the image. Quantitatively, we show that the LV-GPT model outperforms other state-of-the-art VQA models on two publicly available surgical-VQA datasets (based on the endoscopic vision challenge robotic scene segmentation 2018 and CholecTriplet2021) and on our newly annotated dataset (based on the holistic surgical scene dataset). We further annotate all three datasets with question-type labels to allow sub-type analysis. Furthermore, we extensively study and report the effects of token sequencing and of token-type and pose embeddings for vision tokens in the LV-GPT model.

L. Seenivasan and M. Islam are co-first authors.
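The abstract outlines the LV-GPT design at a high level: a vision tokenizer turns image features into tokens, the question's word tokens are placed before the vision tokens, and the vision tokens receive their own token-type and pose (position) embeddings before the combined sequence passes through GPT2. Below is a minimal sketch of that idea, not the authors' implementation (their code is at github.com/lalithjets/SurgicalGPT); the class name `LVGPTSketch`, the ResNet18 vision tokenizer, the classification head, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import GPT2Model, GPT2Tokenizer
from torchvision.models import resnet18


class LVGPTSketch(nn.Module):
    """Hypothetical LV-GPT-style model: question tokens first, then vision tokens."""

    def __init__(self, num_answer_classes: int, num_vision_tokens: int = 49):
        super().__init__()
        self.gpt2 = GPT2Model.from_pretrained("gpt2")            # language backbone
        hidden = self.gpt2.config.n_embd                          # 768 for gpt2

        # Vision tokenizer: CNN feature map whose spatial grid becomes a token sequence.
        cnn = resnet18(weights=None)
        self.vision_backbone = nn.Sequential(*list(cnn.children())[:-2])  # (B, 512, 7, 7)
        self.vision_proj = nn.Linear(512, hidden)

        # Separate token-type and pose (position) embeddings for vision tokens.
        self.token_type = nn.Embedding(2, hidden)                 # 0 = word, 1 = vision
        self.vision_pos = nn.Embedding(num_vision_tokens, hidden)

        self.classifier = nn.Linear(hidden, num_answer_classes)   # answer-class head

    def forward(self, input_ids, attention_mask, images):
        # Word embeddings come from GPT-2's own embedding table.
        word = self.gpt2.wte(input_ids) + self.token_type(torch.zeros_like(input_ids))

        # Vision tokens: flatten the CNN grid, project to GPT-2 width, add embeddings.
        feat = self.vision_backbone(images)                       # (B, 512, 7, 7)
        b = feat.size(0)
        vis = self.vision_proj(feat.flatten(2).transpose(1, 2))   # (B, 49, hidden)
        pos_ids = torch.arange(vis.size(1), device=vis.device)
        type_ids = torch.ones(b, vis.size(1), dtype=torch.long, device=vis.device)
        vis = vis + self.vision_pos(pos_ids) + self.token_type(type_ids)

        # Key ordering from the abstract: word tokens BEFORE vision tokens, so the
        # unidirectional attention reads the question first, then looks at the image.
        seq = torch.cat([word, vis], dim=1)
        mask = torch.cat([attention_mask, attention_mask.new_ones(b, vis.size(1))], dim=1)

        out = self.gpt2(inputs_embeds=seq, attention_mask=mask).last_hidden_state
        return self.classifier(out[:, -1])                        # predict answer class


if __name__ == "__main__":
    tok = GPT2Tokenizer.from_pretrained("gpt2")
    tok.pad_token = tok.eos_token                                 # GPT-2 has no pad token
    enc = tok(["what is the surgical phase?"], return_tensors="pt", padding=True)
    model = LVGPTSketch(num_answer_classes=18)                    # 18 classes: illustrative
    logits = model(enc["input_ids"], enc["attention_mask"], torch.randn(1, 3, 224, 224))
    print(logits.shape)                                           # torch.Size([1, 18])
```

Here the answer is framed as classification over a fixed set of answer classes, which matches how surgical-VQA benchmarks are typically evaluated; swapping the ResNet18 tokenizer for another feature extractor (e.g., a Swin transformer) would follow the same pattern.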


Notes

  1. chat.openai.com.

  2. One class shares a common name with a surgical phase class.

  3. Code available: github.com/lalithjets/SurgicalGPT


Acknowledgement

This work was supported by the Hong Kong Research Grants Council (RGC) Collaborative Research Fund (CRF C4026-21GF and CRF C4063-18G) and the Shun Hing Institute of Advanced Engineering (BME-p1-21/8115064) at The Chinese University of Hong Kong. M. Islam was funded by EPSRC grant [EP/W00805X/1].

Author information

Corresponding author

Correspondence to Hongliang Ren.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 60 KB)


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Seenivasan, L., Islam, M., Kannan, G., Ren, H. (2023). SurgicalGPT: End-to-End Language-Vision GPT for Visual Question Answering in Surgery. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14228. Springer, Cham. https://doi.org/10.1007/978-3-031-43996-4_27

  • DOI: https://doi.org/10.1007/978-3-031-43996-4_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43995-7

  • Online ISBN: 978-3-031-43996-4

  • eBook Packages: Computer Science, Computer Science (R0)
