Abstract
Bio-BERT (BERT for Biomedical Text Mining) is a Natural Language Processing (NLP) model pre-trained on large-scale biomedical corpora, and it is effective across a wide variety of NLP tasks applied to biomedical text. BERTSUM, BERTSUMABS, and BERTSUMEXTABS are NLP models built for Extractive Text Summarization (ETS) and Abstractive Text Summarization (ATS), and are typically evaluated on the CNN/DailyMail and Extreme Summarization (XSum) datasets. The objective of this chapter is to achieve ETS and ATS on the CORD-19 dataset. To this end, a hybrid NLP model based on Bio-BERT, BERTSUM, BERTSUMABS, and BERTSUMEXTABS is proposed. Because the target is summarization of biomedical text, Bio-BERT is chosen for its pre-training on full-text biomedical articles from PubMed, while BERTSUM, BERTSUMABS, and BERTSUMEXTABS are chosen because they are fine-tuned specifically for text summarization. Given the rapid acceleration of novel COVID-19 publications, concise summaries of these publications are needed to save readers' time, and the generated summaries must be on par with human-written ones. Experiments were conducted on the CORD-19 dataset, and the proposed hybrid model was evaluated using the ROUGE metric. Compared with the BERT-based BERTSUM, BERTSUMABS, and BERTSUMEXTABS models on CORD-19, the proposed model achieves the highest ROUGE scores for both ETS and ATS.
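The pipeline the abstract describes can be illustrated with a short Python sketch: embed each sentence of an article with a Bio-BERT checkpoint, select the sentences closest to the document centroid as an extractive summary, and score the result against a reference abstract with ROUGE. This is a minimal sketch, not the chapter's actual fine-tuned model: the dmis-lab/biobert-base-cased-v1.1 checkpoint, the centroid-similarity scoring heuristic, the value of k, and the toy sentences are all illustrative assumptions standing in for the BERTSUM-style sentence scoring that the authors fine-tune.

# Minimal sketch: Bio-BERT sentence embeddings -> centroid-based extractive
# summary -> ROUGE evaluation. The checkpoint and scoring heuristic are
# assumptions for illustration, not the paper's exact setup.
import torch
from transformers import AutoModel, AutoTokenizer
from rouge_score import rouge_scorer

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
model = AutoModel.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
model.eval()

def embed(sentences):
    # Mean-pooled Bio-BERT embeddings, one vector per sentence.
    batch = tokenizer(sentences, padding=True, truncation=True,
                      max_length=128, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state      # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)       # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)        # (B, H)

def extractive_summary(sentences, k=3):
    # Score each sentence by cosine similarity to the document centroid
    # and return the top-k sentences in their original order.
    vecs = embed(sentences)
    centroid = vecs.mean(0, keepdim=True)
    scores = torch.nn.functional.cosine_similarity(vecs, centroid)
    top = scores.topk(min(k, len(sentences))).indices.sort().values
    return " ".join(sentences[int(i)] for i in top)

# Toy article and reference summary (invented for illustration only).
sentences = [
    "COVID-19 is caused by the SARS-CoV-2 virus.",
    "The weather was unremarkable during the study period.",
    "Severe cases frequently require mechanical ventilation.",
    "Vaccination reduces the risk of severe COVID-19 outcomes.",
]
reference = "SARS-CoV-2 causes COVID-19; vaccination lowers the risk of severe disease."

summary = extractive_summary(sentences, k=2)
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
print(summary)
print(scorer.score(reference, summary))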
References
Atri, Y.K., Pramanick, S., Goyal, V., Chakraborty, T.: See, hear, read: Leveraging multimodality with guided attention for abstractive text summarization. Knowledge-Based Systems 227, 107152 (2021)
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of NAACL-HLT 2019, Volume 1, pp. 4171–4186 (2019)
Gao, Y., Xu, Y., Huang, H., Liu, Q., Wei, L., Liu, L.: Jointly learning topics in sentence embedding for document summarization. IEEE Transactions on Knowledge and Data Engineering 32(4), 688–699 (2020)
Gidiotis, A., Tsoumakas, G.: A divide-and-conquer approach to the summarization of long documents. IEEE/ACM Trans. on Audio, Speech, and Lang. Processing 28, 3029–3040 (2020)
Gu, J., Lu, Z., Li, H., Li, V.O.: Incorporating copying mechanism in sequence-to-sequence learning. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1), pp. 1631–1640 (2016)
Gulcehre, C., Ahn, S., Nallapati, R., Zhou, B., Bengio, Y.: Pointing the unknown words. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1), pp. 140–149 (2016)
Joshi, A., Fidalgo, E., Alegre, E., Fernández-Robles, L.: Summcoder: An unsupervised framework for extractive text summarization based on deep auto-encoders. Expert Systems with Applications 129, 200–215 (2019)
Khanam, S.A., Liu, F., Chen, Y.P.P.: Joint knowledge-powered topic level attention for a convolutional text summarization model. Knowledge-Based Systems 228, 107273 (2021)
Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., Zettlemoyer, L.: BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7871–7880 (2020)
Li, H., Zhu, J., Ma, C., Zhang, J., Zong, C.: Read, watch, listen, and summarize: Multi-modal summarization for asynchronous text, image, audio and video. IEEE Transactions on Knowledge and Data Engineering 31(5), 996–1009 (2019)
Lin, C.Y.: ROUGE: A package for automatic evaluation of summaries. In: Text Summarization Branches Out, ACL, pp. 74–81 (2004)
Liu, Y.: Fine-tune BERT for extractive summarization. arXiv preprint arXiv:1903.10318 (2019)
Liu, Y., Lapata, M.: Text summarization with pretrained encoders. arXiv preprint arXiv:1908.08345 (2019)
Ma, T., Pan, Q., Rong, H., Qian, Y., Tian, Y., Al-Nabhan, N.: T-BERTSum: Topic-aware text summarization based on BERT. IEEE Transactions on Computational Social Systems, pp. 1–12 (2021)
Moradi, M., Dorffner, G., Samwald, M.: Deep contextualized embeddings for quantifying the informative content in biomedical text summarization. Computer Methods and Programs in Biomedicine 184, 105117 (2020)
Muthu, B., Cb, S., Kumar, P.M., Kadry, S.N., Hsu, C.H., Sanjuan, O., Crespo, R.G.: A framework for extractive text summarization based on deep learning modified NN classifier. ACM Trans. Asian Low-Resour. Lang. Inf. Process. 20(3) (2021)
Nallapati, R., Zhai, F., Zhou, B.: Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In: Thirty-First AAAI Conference on Artificial Intelligence (2017)
Nallapati, R., Zhou, B., dos Santos, C., Gülçehre, Ç., Xiang, B.: Abstractive text summarization using sequence-to-sequence RNNs and beyond. In: Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL), pp. 280–290 (2016)
Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering 22(10), 1345–1359 (2010)
Radford, A., Narasimhan, K., Salimans, T., Sutskever, I.: Improving language understanding by generative pre-training (2018)
Raffel, C., Shazeer, N.M., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., Liu, P.J.: Exploring the limits of transfer learning with a unified text-to-text transformer. ArXiv abs/1910.10683 (2020)
Rush, A.M., Chopra, S., Weston, J.: A neural attention model for abstractive sentence summarization. In: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 379–389 (2015)
See, A., Liu, P.J., Manning, C.D.: Get to the point: Summarization with pointer-generator networks. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Volume 1, pp. 1073–1083 (2017)
Su, M.H., Wu, C.H., Cheng, H.T.: A two-stage transformer-based approach for variable-length abstractive summarization. IEEE/ACM Trans. on Audio, Speech, and Lang. Processing 28, 2061–2072 (2020)
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS'17), pp. 6000–6010 (2017)
Wang, L.L., Lo, K., Chandrasekhar, Y., Reas, R., Yang, J., Eide, D., Funk, K., Kinney, R., Liu, Z., Merrill, W., et al.: CORD-19: The COVID-19 Open Research Dataset. ArXiv (2020)
Yang, M., Li, C., Shen, Y., Wu, Q., Zhao, Z., Chen, X.: Hierarchical human-like deep neural networks for abstractive text summarization. IEEE Transactions on Neural Networks and Learning Systems 32(6), 2744–2757 (2021)
Yao, K., Zhang, L., Du, D., Luo, T., Tao, L., Wu, Y.: Dual encoding for abstractive text summarization. IEEE Transactions on Cybernetics 50(3), 985–996 (2020)
Zeng, W., Luo, W., Fidler, S., Urtasun, R.: Efficient summarization with read-again and copy mechanism. CoRR abs/1611.03382 (2016)
Zhang, J., Zhao, Y., Saleh, M., Liu, P.: PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In: International Conference on Machine Learning, pp. 11328–11339. PMLR (2020)
Zhang, Y., Li, D., Wang, Y., Fang, Y., Xiao, W.: Abstract text summarization with a convolutional seq2seq model. Applied Sciences 9(8) (2019)
Zhou, Q., Yang, N., Wei, F., Huang, S., Zhou, M., Zhao, T.: A joint sentence scoring and selection framework for neural extractive document summarization. IEEE/ACM Trans. on Audio, Speech, and Lang. Processing 28, 671–681 (2020)
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Nunna, J.L.D., Hanuman Turaga, V.K., Chebrolu, S. (2023). Extractive and Abstractive Text Summarization Model Fine-Tuned Based on BERTSUM and Bio-BERT on COVID-19 Open Research Articles. In: Misra, R., Omer, R., Rajarajan, M., Veeravalli, B., Kesswani, N., Mishra, P. (eds) Machine Learning and Big Data Analytics. ICMLBDA 2022. Springer Proceedings in Mathematics & Statistics, vol 401. Springer, Cham. https://doi.org/10.1007/978-3-031-15175-0_17
DOI: https://doi.org/10.1007/978-3-031-15175-0_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-15174-3
Online ISBN: 978-3-031-15175-0
eBook Packages: Mathematics and Statistics (R0)