Leveraging Large Language Models for Literature Review Tasks - A Case Study Using ChatGPT

  • Conference paper
  • Advanced Research in Technologies, Information, Innovation and Sustainability (ARTIIS 2023)

Abstract

Literature reviews constitute an indispensable component of research endeavors; however, they often prove laborious and time-intensive. This study explores the potential of ChatGPT, a prominent large language model, to facilitate the literature review process. By contrasting outcomes from a manual literature review with those achieved using ChatGPT, we ascertain the accuracy of ChatGPT's responses. Our findings indicate that ChatGPT aids researchers in swiftly perusing vast and heterogeneous collections of scientific publications, enabling them to extract pertinent information related to their research topic with an overall accuracy of 70%. Moreover, we demonstrate that ChatGPT offers a more economical and expeditious means of achieving this level of accuracy compared to human researchers. Nevertheless, we conclude that although ChatGPT exhibits promise in generating a rapid and cost-effective general overview of a subject, it presently falls short of producing the comprehensive literature overview required for scientific applications. Lastly, we propose avenues for future research to enhance the performance and utility of ChatGPT as a literature review assistant.
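The full text is not available in this preview, so the study's exact prompts and model configuration are not reproduced here. As a rough illustration of the workflow the abstract describes (posing a fixed question about each publication to ChatGPT and later comparing the answers against a manual review), the following minimal Python sketch uses the OpenAI chat API. The model choice, prompt wording, and all function and variable names are assumptions for illustration, not the authors' actual setup.

```python
# Hypothetical sketch of LLM-assisted literature screening; the paper's
# actual prompts, model, and parameters are not shown on this page.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a research assistant. Answer only from the abstract provided. "
    "If the abstract does not contain the answer, reply 'not stated'."
)

def screen_abstract(abstract: str, question: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask one screening question about a single publication abstract."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # keep answers as repeatable as possible
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Abstract:\n{abstract}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content.strip()

# Example usage: screen a batch of abstracts exported from a literature
# database, then compare the model's answers with a manual review to
# estimate accuracy (the study reports roughly 70% overall).
abstracts = ["<abstract text 1>", "<abstract text 2>"]
question = "Which machine learning methods does this paper apply?"
answers = [screen_abstract(a, question) for a in abstracts]
```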



Acknowledgments

This research was funded by the Government of Upper Austria as part of the research grant Logistikum.Retail, and by the Christian Doppler Gesellschaft as part of the Josef Ressel Centre PREVAIL.

Author information

Corresponding author

Correspondence to Robert Zimmermann.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Zimmermann, R., Staab, M., Nasseri, M., Brandtner, P. (2024). Leveraging Large Language Models for Literature Review Tasks - A Case Study Using ChatGPT. In: Guarda, T., Portela, F., Diaz-Nafria, J.M. (eds) Advanced Research in Technologies, Information, Innovation and Sustainability. ARTIIS 2023. Communications in Computer and Information Science, vol 1935. Springer, Cham. https://doi.org/10.1007/978-3-031-48858-0_25

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-48858-0_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-48857-3

  • Online ISBN: 978-3-031-48858-0

  • eBook Packages: Computer Science; Computer Science (R0)
