
Text Summarization for Big Data Analytics: A Comprehensive Review of GPT 2 and BERT Approaches

  • Chapter
  • First Online:
Data Analytics for Internet of Things Infrastructure

Part of the book series: Internet of Things ((ITTCC))

Abstract

Automatic text summarization aims to construct summaries that capture the essential information from one or more input texts. The Transformer became the most prominent NLP architecture because its ability to process text non-sequentially made it possible to train very large models. Big data methodologies are frequently used to handle and transform these massive volumes of information. This chapter examines big data methodologies and models such as Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-trained Transformer 2 (GPT-2) for multi-document summarization. In text summarization, the Transformer, BERT, and GPT-2 models yield very similar accuracy, so they must be compared directly to determine which performs better. In this chapter, the two models are compared, and our results show that BERT outperforms GPT-2, as measured by ROUGE metrics on a news-article dataset of 100 text files.
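The ROUGE-based comparison described in the abstract can be illustrated with a minimal sketch. The snippet below is a simplified, pure-Python implementation of ROUGE-N F1 (n-gram overlap between a candidate summary and a reference summary) written for illustration only; the chapter's evaluation presumably uses a standard ROUGE toolkit, and the example sentences are invented:

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count the n-grams in a token list as a multiset."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=1):
    """Simplified ROUGE-N F1: harmonic mean of n-gram precision and recall."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    cand_ngrams = ngram_counts(cand, n)
    ref_ngrams = ngram_counts(ref, n)
    # Multiset intersection gives clipped overlap counts.
    overlap = sum((cand_ngrams & ref_ngrams).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_ngrams.values())
    recall = overlap / sum(ref_ngrams.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical candidate/reference pair, for illustration only.
score = rouge_n("the cat sat on the mat", "the cat is on the mat", n=1)
```

Comparing BERT and GPT-2 summaries then reduces to computing such scores (typically ROUGE-1, ROUGE-2, and ROUGE-L) for each model's output against the reference summaries and averaging over the dataset.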



Author information

Correspondence to G. Bharathi Mohan.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Bharathi Mohan, G., Prasanna Kumar, R., Parathasarathy, S., Aravind, S., Hanish, K.B., Pavithria, G. (2023). Text Summarization for Big Data Analytics: A Comprehensive Review of GPT 2 and BERT Approaches. In: Sharma, R., Jeon, G., Zhang, Y. (eds) Data Analytics for Internet of Things Infrastructure. Internet of Things. Springer, Cham. https://doi.org/10.1007/978-3-031-33808-3_14


  • DOI: https://doi.org/10.1007/978-3-031-33808-3_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-33807-6

  • Online ISBN: 978-3-031-33808-3

  • eBook Packages: Computer Science (R0)
