
Transformer models used for text-based question answering systems

Published in: Applied Intelligence

Abstract

Question answering is a frequently studied task in natural language processing (NLP) because of its wide range of applications. A question-answering system answers questions posed in natural language. The task is generally addressed with datasets consisting of an input text, a query, and the segment (span) of the input text that answers the query. The ability to make human-level predictions from data has improved significantly thanks to deep learning models, particularly the Transformer architecture, which has defined the state of the art in text-based models in recent years. This paper reviews studies on the use of Transformer models to implement question-answering (QA) systems. We first focus on attention and Transformer models, briefly describing their architectures and classifying them into encoder-based, decoder-based, and encoder-decoder models. We then examine recent research trends in textual QA datasets, highlighting the architecture of QA systems and categorizing them according to various criteria. We also survey a significant set of evaluation metrics developed to assess model performance. Finally, we highlight solutions built to simplify the implementation of Transformer models.
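The span-extraction formulation and the exact-match / token-F1 metrics commonly used to evaluate such systems (e.g., on SQuAD-style data) can be sketched in plain Python. The example context, question, and answer below are illustrative, not drawn from any dataset discussed in the paper:

```python
import collections
import re
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the normalized prediction equals the normalized gold answer."""
    return float(normalize(prediction) == normalize(gold))

def token_f1(prediction: str, gold: str) -> float:
    """Harmonic mean of token-level precision and recall after normalization."""
    pred_toks = normalize(prediction).split()
    gold_toks = normalize(gold).split()
    common = collections.Counter(pred_toks) & collections.Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

# A SQuAD-style triple: the answer is a span of the context,
# identified here by its character offset.
context = "The Transformer architecture was introduced in 2017."
question = "When was the Transformer architecture introduced?"
answer_start = context.find("2017")
gold_answer = context[answer_start:answer_start + 4]

print(exact_match("2017", gold_answer))             # 1.0
print(round(token_f1("in 2017", gold_answer), 2))   # 0.67
```

A partially overlapping prediction ("in 2017") scores zero on exact match but earns partial credit under token F1, which is why both metrics are typically reported together.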



Notes

  1. https://github.com/huggingface

  2. https://huggingface.co/
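As background for the attention mechanism that these Transformer models build on, the scaled dot-product attention introduced by Vaswani et al. (2017) is:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$

where $Q$, $K$, and $V$ are the query, key, and value matrices and $d_k$ is the key dimension; multi-head attention applies this operation in parallel over several learned projections and concatenates the results.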

References

  1. Mishra A, Jain SK (2016) A survey on question answering systems with classification. J King Saud Univ - Comput Inf Sci 28:345–361

    Google Scholar 

  2. Victoria F (2021) The advantages of human evaluation of sociomedical question answering systems. Int J Open Inf Technol 9:53–59

    Google Scholar 

  3. Vakulenko S, Longpre S, Tu Z, Anantha R (2021) Question rewriting for conversational question answering. In: Proceedings of the 14th ACM International conference on web search and data mining, pp 355–363

  4. Sachan DS, Reddy S, Hamilton WL, Dyer C, Yogatama D (2021) End-to-end training of multi-document reader and retriever for open-domain question answering. In: Advances in neural information processing systems, NeurIPS

  5. Scheider S, Nyamsuren E, Kruiger H, Xu H (2021) Geo-analytical question-answering with gis. Int J Digit Earth 14:1–14

    Google Scholar 

  6. Menaha R, Jayanthi V, Krishnaraj N, Sundra Kumar NP (2021) A cluster-based approach for finding domain wise experts in community question answering system. J Phys Conf Ser 1767:012035

    Google Scholar 

  7. Jiang Z, Chi C, Zhan Y (2021) Research on medical question answering system based on knowledge graph. IEEE Access 9:21094–21101

    Google Scholar 

  8. Roy PK (2021) Deep neural network to predict answer votes on community question answering sites. Neural Process Lett 53:1633–1646

    Google Scholar 

  9. Loginova E, Varanasi S, Neumann G (2021) Towards end-to-end multilingual question answering. Inf Syst Front 23:227–241

    Google Scholar 

  10. Do P, Phan THV, Gupta BB (2021) Developing a vietnamese tourism question answering system using knowledge graph and deep learning. ACM Trans Asian Low-Resour Lang Inf Process 20:1–18

    Google Scholar 

  11. Bulla M, Hillebrand L, Lübbering M, Sifa R (2021) Knowledge graph based question answering system for financial securities. In: German conference on artificial intelligence (künstliche intelligenz), pp 44–50

  12. Chen Z, Chen W, Smiley C, Shah S, Borova I, Langdon D, Moussa R, Beane M, Huang T-H, Routledge B, Wang WY (2021) Fin QA : A dataset of numerical reasoning over financial data. In: Proceedings of the 2021 Conference on empirical methods in natural language processing. Association for computational linguistics, online and punta Cana, dominican republic, pp 3697–3711

  13. Sakata W, Shibata T, Tanaka R, Kurohashi S (2019) FAQ retrieval using query-question similarity and bert-based query-answer relevance. In: Proceedings of the 42nd international acm sigir conference on research and development in information retrieval

  14. Abbasiantaeb Z, Momtazi S (2021) Text-based question answering from information retrieval and deep neural network perspectives: A survey. WIREs Data Mining and Knowledge Discovery, vol 11(6)

  15. Otegi A, San Vicente I, Saralegi X, Peñas A, Lozano B, Agirre E (2022) Information retrieval and question answering: a case study on covid-19 scientific literature. Knowl-Based Syst 240:108072

    Google Scholar 

  16. Datta S, Roberts K (2022) Fine-grained spatial information extraction in radiology as two-turn question answering. Int J Med Inform 158:104628

    Google Scholar 

  17. Ali I, Yadav D, Sharma A (2022) Question answering system for semantic web: a review. Int J Adv Intell Paradig 22(1-2):114–147

    Google Scholar 

  18. Yin D, Cheng S, Pan B, Qiao Y, Zhao W, Wang D (2022) Chinese named entity recognition based on knowledge based question answering system. Appl Sci 12(11):5373

    Google Scholar 

  19. Skrebeca J, Kalniete P, Goldbergs J, Pitkevica L, Tihomirova D, Romanovs A (2021) Modern development trends of chatbots using artificial intelligence (ai). In: 62nd International scientific conference on information technology and management science of riga technical university (ITMS), pp 1–6

  20. Amer E, Hazem A, Farouk O, Louca A, Mohamed Y, Ashraf M (2021) A proposed chatbot framework for covid-19. In: International mobile, intelligent, and ubiquitous computing conference (MIUCC), pp 263–268

  21. Tarek A, El Hajji M, Youssef E-S, Fadili H (2022) Towards highly adaptive edu-chatbot. Procedia Comput Sci 198:397–403

    Google Scholar 

  22. Fuad A, Al-Yahya M (2022) Araconv: developing an arabic task-oriented dialogue system using multi-lingual transformer model mt5. Appl Sci 12(4):1881

    Google Scholar 

  23. Miao Y, Liu K, Yang W, Yang C (2022) A novel transformer-based model for dialog state tracking. In: International conference on human-computer interaction, pp 148–156

  24. Xie R, Lu Y, Lin F, Lin L (2020) Faq-based question answering via knowledge anchors. In: Zhu X, Zhang M, Hong Y, He R (eds) Natural language processing and chinese computing, pp 3–15. Springer, Cham

  25. Pan Y, Ma M, Pflugfelder B, Groh G (2021) How to build robust FAQ chatbot with controllable question generator? CoRR arXiv:2112.03007

  26. Riloff E, Thelen M (2000) A rule-based question answering system for reading comprehension tests. In: Proceedings of the ANLP/NAACL Workshop on reading comprehension tests as evaluation for computer-based language understanding sytems - vol 6, Washington, USA, pp 13–19

  27. Šuster S, Daelemans W (2018) CliCR: A dataset of clinical case reports for machine reading comprehension. In: Proceedings of the 2018 Conference of the north american chapter of the association for computational linguistics: human language technologies, vol 1 (long papers). Association for computational linguistics, pp 1551–1563

  28. Lai G, Xie Q, Liu H, Yang Y, Hovy E (2017) RACE: Large-scale Reading comprehension dataset from examinations. In: Proceedings of the 2017 conference on empirical methods in natural language processing. Association for computational linguistics, pp 785–794

  29. Hu M, Peng Y, Huang Z, Li D (2019) Retrieve, read, rerank: Towards end-to-end multi-document reading comprehension. In: Proceedings of the 57th annual meeting of the association for computational linguistics. Association for computational linguistics, pp 2285–2295

  30. He S, Han D (2020) An effective dense co-attention networks for visual question answering. Sensors:4897

  31. Boukhers Z, Hartmann T, Jürjens J (2022) Coin: counterfactual image generation for vqa interpretation. Sensors

  32. Naseem U, Khushi M, Kim J (2022) Vision-language transformer for interpretable pathology visual question answering. IEEE J Biomed Health Inform

  33. Bansal A, Zhang Y, Chellappa R (2020) Visual question answering on image sets. In: Proceedings of the european conference on computer vision (ECCV)

  34. Gasmi K, Ltaifa IB, Lejeune G, Alshammari H, Ammar LB, Mahmood MA (2022) Optimal deep neural network-based model for answering visual medical question. Cybern Syst 53:403– 424

    Google Scholar 

  35. Wu Q, Teney D, Wang P, Shen C, Dick A, van den Hengel A (2017) Visual question answering: a survey of methods and datasets. Comput Vis Image Underst 163:21–40

    Google Scholar 

  36. Yang Z, Garcia N, Chu C, Otani M, Nakashima Y, Takemura H (2020) Bert representations for video question answering. In: IEEE winter conference on applications of computer vision (WACV), pp 1545–1554

  37. Gupta P, Gupta M (2022) Knowledge-aware news video question answering. In: Pacific-asia conference on knowledge discovery and data mining, pp 3–15

  38. Yang Z, Garcia N, Chu C, Otani M, Nakashima Y, Takemura H (2021) A comparative study of language transformers for video question answering. Neurocomputing 445:121–133

    Google Scholar 

  39. Wu T, Garcia N, Otani M, Chu C, Nakashima Y, Takemura H (2021) Transferring domain-agnostic knowledge in video question answering. In: The 32nd british machine vision conference

  40. He W, Liu K, Liu J, Lyu Y, Zhao S, Xiao X, Liu Y, Wang Y, Wu H, She Q, Liu X, Wu T, Wang H (2018) Dureader: a chinese machine reading comprehension dataset from real-world applications. In: Proceedings of the Workshop on machine reading for question answering. Association for computational linguistics, pp 37–46

  41. Dhingra B, Mazaitis K, Cohen WW (2017) Quasar: datasets for question answering by search and reading. CoRR arXiv:1707.03904

  42. Qi P, Lee H, Sido OT, Manning CD (2021) Retrieve, rerank, read, then iterate: answering open-domain questions of arbitrary complexity from text. In: The conference on empirical methods in natural language processing, EMNLP

  43. Biten AF, Litman R, Xie Y, Appalaraju S, Manmatha R (2022) Latr: layout-aware transformer for scene-text vqa. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 16548–16558

  44. Walmer M, Sikka K, Sur I, Shrivastava A, Jha S (2022) Dual-key multimodal backdoors for visual question answering. In: Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition, pp 15375–15385

  45. Xu F, Lin Q, Liu J, Zhang L, Zhao T, Chai Q, Pan Y (2021) Moca: incorporating multi-stage domain pretraining and cross-guided multimodal attention for textbook question answering. CoRR arXiv:2112.02839

  46. Zhang XF (2021) Towards robustness against natural language word substitutions. In: The international conference on learning representations (ICLR)

  47. Gholamian S (2021) Leveraging code clones and natural language processing for log statement prediction. In: 36th IEEE/ACM international conference on automated software engineering (ASE), pp 1043–1047

  48. Akdemir A, Jeon Y (2021) DPRK-BERT: the supreme language model. CoRR arXiv:2112.00567

  49. Khodadadi A, Ghandiparsi S, Chuah C-N (2021) A natural language processing and deep learning based model for automated vehicle diagnostics using free-text customer service reports. CoRR arXiv:2111.14977

  50. Devlin J, Chang M-W, Lee K, Toutanova K (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, vol 1 (long and short papers). Association for computational linguistics, pp 4171–4186

  51. Rajpurkar P, Zhang J, Lopyrev K, Liang P (2016) SQUAD: 100,000+ questions for machine comprehension of text. In: Proceedings of the 2016 Conference on empirical methods in natural language processing. Association for computational linguistics, pp 2383–2392

  52. Rouhou AC, Dhiaf M, Kessentini Y, Salem SB (2022) Transformer-based approach for joint handwriting and named entity recognition in historical document. Pattern Recogn Lett 155:128–134

    Google Scholar 

  53. AlBadani B, Shi R, Dong J, Al-Sabri R, Moctard OB (2022) Transformer-based graph convolutional network for sentiment analysis. Appl Sci 12(3):1316

    Google Scholar 

  54. Cambazoglu BB, Sanderson M, Scholer F, Croft B (2021) A review of public datasets in question answering research. SIGIR Forum, vol 54

  55. Green BF, Wolf AK, Chomsky C, Laughery K (1961) Baseball: an automatic question-answerer. In: Papers presented at the 9-11 May 1961, western joint IRE-AIEE-ACM computer conference, New York, pp 219–224

  56. Woods WA (1973) Progress in natural language understanding: an application to lunar geology. In: Proceedings of national computer conference and exposition, AFIPS ’73, New York, pp 441–450

  57. Androutsopoulos I, Ritchie G, Thanisch P (1993) Masque/sql – an efficient and portable natural language query interface for relational databases. In: Proceeding of the 6th international conference on industrial & engineering applications of artificial intelligence and expert systems, pp 327–330

  58. Androutsopoulos I, Ritchie GD, Thanisch P (1995) Natural language interfaces to databases–an introduction. Nat Lang Eng 1:29–81

    Google Scholar 

  59. Lopez V, Uren V, Sabou M, Motta E (2011) Is question answering fit for the semantic web? a survey. Semant Web 2:125–155

    Google Scholar 

  60. Burke RD, Hammond KJ, Kulyukin V, Lytinen SL, Tomuro N, Schoenberg S (1997) Question answering from frequently asked question files: experiences with the faq finder system. AI Mag 18:57

    Google Scholar 

  61. Peñas A, Magnini B, Forner P, Sutcliffe R, Rodrigo A, Giampiccolo D (2012) Question answering at the cross-language evaluation forum 2003—2010. Lang Resour Eval 46:177–217

    Google Scholar 

  62. Voorhees EM (2001) Question answering in trec. In: Proceedings of the tenth international conference on information and knowledge management, Georgia, USA, pp 535–537

  63. Voorhees E (2002) Overview of the TREC 2001 question answering track. In: Proceedings of the tenth text retrieval conference (TREC). TREC’01, pp 42–51

  64. Voorhees EM (2003) Overview of the TREC 2002 question answering track. In: Proceedings of The eleventh text retrieval conference

  65. Voorhees E (2004) Overview of the trec 2003 question answering track, pp 54–68. Other, national institute of standards and technology, Gaithersburg, MD

  66. Ellen V (2005) Overview of the trec 2004 question answering track. In: Proceedings of the thirteenth text retrieval conference, TREC’04

  67. Voorhees E, Dang H (2006) Overview of the trec 2005 question answering track

  68. Dang H, Lin J, Kelly D (2008) Overview of the trec 2006 question answering track. Special publication (NIST SP), national institute of standards and technology, Gaithersburg MD

  69. Mitamura T, Shima H, Sakai T, Kando N, Mori T, Takeda K, Lin C-Y, Song R, Lin C-J, Lee C-W (2008) Overview of the ntcir-7 aclia tasks: advanced cross-lingual information access. In: Proceedings of the 7th NTCIR workshop meeting on evaluation of information access technologies: information retrieval, question answering and cross-lingual information Access

  70. Lee Y-H, Lee C-W, Sung C-L, Tzou M-T, Wang C-C, Liu S-H, Shih C-W, Yang P-Y, Hsu W-L (2008) Complex question answering with asqa at ntcir 7 aclia. In: Proceedings of the 7th NTCIR workshop meeting on evaluation of information access technologies: information retrieval, question answering and cross-lingual information access

  71. Huang Z, Thint M, Qin Z (2008) Question classification using head words and their hypernyms. In: Proceedings of the Conference on empirical methods in natural language processing, pp 927–936

  72. Loni B, Khoshnevis SH, Wiggers P (2011) Latent semantic analysis for question classification with neural networks. In: IEEE workshop on automatic speech recognition & understanding, pp 437–442

  73. Nedumaran A, Babu RG, Kassa MM, Karthika P (2020) Machine level classification using support vector machine. In: AIP conference proceedings, vol 2207, p 020013

  74. Joseph J, Panicker JR, Meera M (2016) An efficient natural language interface to xml database. In: International conference on information science (ICIS), pp 207–212

  75. Nguyen DQ, Nguyen DQ, Pham SB (2017) Ripple down rules for question answering. Semantic Web 8(4):511–532

    MathSciNet  Google Scholar 

  76. Huang Z, Xu S, Hu M, Wang X, Qiu J, Fu Y, Zhao Y, Peng Y, Wang C (2020) Recent trends in deep learning based open-domain textual question answering systems. IEEE Access 8:94341–94356

    Google Scholar 

  77. Lei T, Shi Z, Liu D, Yang L, Zhu F (2018) A novel cnn-based method for question classification in intelligent question answering. In: Proceedings of the international conference on algorithms, computing and artificial intelligence, pp 1–6

  78. Xia W, Zhu W, Liao B, Chen M, Cai L, Huang L (2018) Novel architecture for long short-term memory used in question classification. Neurocomputing 299:20–31

    Google Scholar 

  79. Khattab O, Potts C, Zaharia M (2021) Relevance-guided Supervision for openQA with colBERT. Transactions of the association for computational linguistics 9:929–944

    Google Scholar 

  80. Karpukhin V, Oguz B, Min S, Lewis P, Wu L, Edunov S, Chen D, Yih W-T (2020) Dense passage retrieval for open-domain question answering. In: Proceedings of the 2020 conference on empirical methods in natural language processing (EMNLP). Association for computational linguistics, pp 6769–6781

  81. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I (2017) Attention is all you need. Adv Neural Inf Process Syst:30

  82. Radford A, Wu J, Child R, Luan D, Amodei D, Sutskever I et al (2019) Language models are unsupervised multitask learners. OpenAI blog 1:9

    Google Scholar 

  83. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, Levy O, Lewis M, Zettlemoyer L, Stoyanov V (2020) Roberta: a robustly optimized BERT pretraining approach. In: 8Th international conference on learning representations, ICLR

  84. Zamani H, Craswell N (2020) Macaw: an extensible conversational information seeking platform. Association for computing machinery, pp 2193–2196

  85. Krishna K, Roy A, Iyyer M (2021) Hurdles to progress in long-form question answering. In: Proceedings of the 2021 Conference of the north american chapter of the association for computational linguistics: human language technologies. Association for computational linguistics, pp 4940–4957

  86. Nakano R, Hilton J, Balaji S, Wu J, Ouyang L, Kim C, Hesse C, Jain S, Kosaraju V, Saunders W, Jiang X, Cobbe K, Eloundou T, Krueger G, Button K, Knight M, Chess B, Schulman J (2021) Browser-assisted question-answering with human feedback. CoRR

  87. Jin Q, Yuan Z, Xiong G, Yu Q, Ying H, Tan C, Chen M, Huang S, Liu X, Yu S (2022) Biomedical question answering: a survey of approaches and challenges. ACM Comput Surv (CSUR) 55:1–36

    Google Scholar 

  88. Kim Y, Bang S, Sohn J, Kim H (2022) Question answering method for infrastructure damage information retrieval from textual data using bidirectional encoder representations from transformers. Automation in construction:134

  89. Nambiar RS, Gupta D (2022) Dedicated farm-haystack question answering system for pregnant women and neonates using corona virus literature. In: 12th International conference on cloud computing, data science & engineering (confluence), pp 222–227

  90. Chen C, Tan Z, Cheng Q, Jiang X, Liu Q, Zhu Y, Gu X (2022) Utc: a unified transformer with inter-task contrastive learning for visual dialog. In: Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition, pp 18103–18112

  91. Raza S, Schwartz B, Rosella LC (2022) Coquad: a covid-19 question answering dataset system, facilitating research, benchmarking, and practice. BMC Bioinform 23:1–28

    Google Scholar 

  92. Deng L, Liu Y (2018) Deep learning in natural language processing. Springer

  93. Kamath U, Liu J, Whitaker J (2019) Deep learning for NLP and speech recognition. Springer

  94. Lopez MM, Kalita J (2017) Deep learning applied to NLP. CoRR arXiv:1703.03091

  95. Lauriola I, Lavelli A, Aiolli F (2022) An introduction to deep learning in natural language processing: models, techniques, and tools. Neurocomputing 470:443–456

    Google Scholar 

  96. Jacovi A, Sar Shalom O, Goldberg Y (2018) Understanding convolutional neural networks for text classification. In: Proceedings of the EMNLP workshop BlackboxNLP: analyzing and interpreting neural networks For NLP, Brussels, Belgium, pp 56–65

  97. Mou L, Meng Z, Yan R, Li G, Xu Y, Zhang L, Jin Z (2016) How transferable are neural networks in NLP applications?. In: Proceedings of the 2016 conference on empirical methods in natural language processing. Association for computational linguistics, pp 479–489

  98. Sutskever I, Martens J, Hinton G (2011) Generating text with recurrent neural networks. In: Proceedings of the 28th international conference on international conference on machine learning. ICML’11, Washington, USA, pp 1017–1024

  99. Sutskever I, Hinton G, Taylor G (2008) The recurrent temporal restricted boltzmann machine. In: Proceedings of the 21st international conference on neural information processing systems. NIPS’08, British Columbia, Canada, pp 1601–1608

  100. Hochreiter S, Schmidhuber J (1996) Lstm can solve hard long time lag problems. In: Proceedings of the 9th international conference on neural information processing systems. NIPS’96, Denver, Colorado, pp 473–479

  101. Bahar P, Brix C, Ney H (2018) Towards two-dimensional sequence to sequence model in neural machine translation. In: Proceedings of the 2018 Conference on empirical methods in natural language processing. Association for computational linguistics, pp 3009–3015

  102. He X, Haffari G, Norouzi M (2018) Sequence to sequence mixture model for diverse machine translation. In: Proceedings of the 22nd Conference on computational natural language learning. Association for computational linguistics, pp 583– 592

  103. Mohammad Masum AK, Abujar S, Islam Talukder MA, Azad Rabby AKMS, Hossain SA (2019) Abstractive method of text summarization with sequence to sequence rnns. In: 10th International conference on computing, communication and networking technologies, pp 1–5

  104. Shi T, Keneshloo Y, Ramakrishnan N, Reddy CK (2021) Neural abstractive text summarization with sequence-to-sequence models. ACM/IMS Trans ata Sci:2

  105. Huang L, Wang W, Chen J, Wei X-Y (2019) Attention on attention for image captioning. 2019 IEEE/CVF Int conf Comput Vis (ICCV):4633–4642

  106. Aneja J, Deshpande A, Schwing AG (2018) Convolutional image captioning. In: 2018 IEEE/CVF conference on computer vision and pattern recognition (CVPR). IEEE computer society, pp 5561–5570

  107. Sutskever I, Vinyals O, Le QV (2014) Sequence to sequence learning with neural networks. Adv Neural Inf Process Syst:27

  108. Cho K, van Merriënboer B, Gulcehre C, Bahdanau D, Bougares F, Schwenk H, Bengio Y (2014) Learning phrase representations using RNN encoder–decoder for statistical machine translation. In: Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). Association for computational linguistics, pp 1724–1734

  109. Wang J, Peng B, Zhang X (2018) Using a stacked residual lstm model for sentiment intensity prediction. Neurocomputing 322:93–101

    Google Scholar 

  110. Landi F, Baraldi L, Cornia M, Cucchiara R (2021) Working memory connections for lstm. Neural Netw 144:334–341

    Google Scholar 

  111. Lechner M, Hasani RM (2022) Mixed-memory rnns for learning long-term dependencies in irregularly sampled time series. In: The international conference on learning representations (ICLR)

  112. Larochelle H, Hinton G (2010) Learning to combine foveal glimpses with a third-order boltzmann machine. In: Proceedings of the 23rd international conference on neural information processing systems - vol 1. NIPS’10, British Columbia, Canada, pp 1243–1251

  113. Bahdanau D, Cho K, Bengio Y (2015) Neural machine translation by jointly learning to align and translate. In: 3rd International conference on learning representations, ICLR

  114. Luong T, Pham H, Manning CD (2015) Effective approaches to attention-based neural machine translation. In: Proceedings of the 2015 conference on empirical methods in natural language processing. Association for computational linguistics, pp 1412–1421

  115. Cheng J, Dong L, Lapata M (2016) Long short-term memory-networks for machine reading. In: Proceedings of the 2016 Conference on empirical methods in natural language processing. Association for computational linguistics, pp 551–561

  116. Parikh A, Täckström O, Das D, Uszkoreit J (2016) A decomposable attention model for natural language inference. In: Proceedings of the conference on empirical methods in natural language processing. Association for computational linguistics, pp 2249–2255

  117. Paulus R, Xiong C, Socher R (2018) A deep reinforced model for abstractive summarization. In: 6th International conference on learning representations, ICLR

  118. Gehring J, Auli M, Grangier D, Yarats D, Dauphin Y (2017) Convolutional sequence to sequence learning. In: Thirty-fourth international conference on machine learning, ICML

  119. Karita S, Chen N, Hayashi T, Hori T, Inaguma H, Jiang Z, Someki M, Soplin NEY, Yamamoto R, Wang X, Watanabe S, Yoshimura T, Zhang W (2019) A comparative study on transformer vs rnn in speech applications. In: IEEE automatic speech recognition and understanding workshop (ASRU), pp 449–456

  120. Lin Z, Feng M, dos Santos CN, Yu M, Xiang B, Zhou B, Bengio Y (2017) A structured self-attentive sentence embedding

  121. Torrey L, Shavlik J (2010) Transfer learning. In: Handbook of research on machine learning applications and trends: algorithms, methods, and techniques. IGI global, pp 242–264

  122. Pan SJ, Yang Q (2010) A survey on transfer learning. IEEE Trans Knowl Data Eng 22:1345–1359

    Google Scholar 

  123. WEI Y, Zhang Y, Huang J, Yang Q (2018) Transfer learning via learning to transfer. In: Proceedings of the 35th international conference on machine learning, pp 5085–5094

  124. Dai AM, Le QV (2015) Semi-supervised sequence learning. In: Proceedings of the 28th international conference on neural information processing systems. MIT Press - vol 2, pp 3079–3087

  125. Howard J, Ruder S (2018) Universal language model fine-tuning for text classification. In: Proceedings of the 56th annual meeting of the association for computational linguistics (vol 1: long papers). Association for computational linguistics, pp 328–339

  126. Hooshmand A, Sharma R (2019) Energy predictive models with limited data using transfer learning. In: Proceedings of the tenth ACM international conference on future energy systems, pp 12–16

  127. Pinto G, Wang Z, Roy A, Hong T, Capozzoli A (2022) Transfer learning for smart buildings: a critical review of algorithms, applications, and future perspectives. Adv Appl Energy:100084

  128. Albahli S, Albattah W (2021) Deep transfer learning for covid-19 prediction: case study for limited data problems. Current Med Imaging 17:973

    Google Scholar 

  129. Bashath S, Perera N, Tripathi S, Manjang K, Dehmer M, Streib FE (2022) A data-centric review of deep transfer learning with applications to text data. Inf Sci 585:498–528

    Google Scholar 

  130. Mikolov T, Sutskever I, Chen K, Corrado GS, Dean J (2013) Distributed representations of words and phrases and their compositionality. Adv Neural Inf Process Syst, vol 26

  131. Cer D, Yang Y, Kong S-Y, Hua N, Limtiaco N, St John R, Constant N, Guajardo-Cespedes M, Yuan S, Tar C, Strope B, Kurzweil R (2018) Universal sentence encoder for english. In: Proceedings of the 2018 Conference on empirical methods in natural language processing: system demonstrations. Association for computational linguistics, pp 169–174

  132. Le Q, Mikolov T (2014) Distributed representations of sentences and documents. In: Proceedings of the 31st international conference on machine learning. Proceedings of machine learning research. Bejing, China, vol 32, pp 1188–1196

  133. Houlsby N, Giurgiu A, Jastrzebski S, Morrone B, De Laroussilhe Q, Gesmundo A, Attariyan M, Gelly S (2019) Parameter-efficient transfer learning for nlp. In: International conference on machine learning, pp 2790–2799

  134. Rebuffi S-A, Bilen H, Vedaldi A (2017) Learning multiple visual domains with residual adapters. In: Proceedings of the 31st International conference on neural information processing systems. Curran Associates Inc, pp 506–516

  135. Chaudhari S, Mithal V, Polatkan G, Ramanath R (2021) An attentive survey of attention models. ACM Trans Intell Syst Technol, vol 12

  136. Wolf T, Debut L, Sanh V, Chaumond J, Delangue C, Moi A, Cistac P, Rault T, Louf R, Funtowicz M, Davison J, Shleifer S, von Platen P, Ma C, Jernite Y, Plu J, Xu C, Le Scao T, Gugger S, Drame M, Lhoest Q, Rush A (2020) Transformers: state-of-the-art natural language processing. In: Proceedings of the Conference on empirical methods in natural language processing: system demonstrations, Online, pp 38–45

  137. Soni S, Roberts K (2020) Evaluation of dataset selection for pre-training and fine-tuning transformer language models for clinical question answering. In: Proceedings of the 12th Language resources and evaluation conference, Marseille, France, pp 5532–5538

  138. Li F, Jin Y, Liu W, Rawat BPS, Cai P, Yu H (2019) Fine-tuning bidirectional encoder representations from transformers (bert)–based models on large-scale electronic health record notes: an empirical study. JMIR Med Inform 7:14830

    Google Scholar 

  139. Braşoveanu AMP, Andonie R (2020) Visualizing transformers for nlp: a brief survey. In: 24th International conference information visualisation (IV), pp 270–279

  140. Bartolo M, Roberts A, Welbl J, Riedel S, Stenetorp P (2020) Beat the AI: investigating adversarial human annotation for reading comprehension. Trans Assoc Comput Linguist 8:662–678

    Google Scholar 

  141. Peters ME, Neumann M, Iyyer M, Gardner M, Clark C, Lee K, Zettlemoyer L (2018) Deep contextualized word representations. In: Proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, vol 1 (long papers). Association for computational linguistics, pp 2227–2237

  142. Ruder S (2019) Neural transfer learning for natural language processing. PhD thesis, NUI Galway

  143. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R (2020) Albert: a lite bert for self-supervised learning of language representations. In: 8th international conference on learning representations, ICLR

  144. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov RR, Le QV (2019) Xlnet: generalized autoregressive pretraining for language understanding. Adv Neural Inf Process Syst, vol 32

  145. Wang A, Singh A, Michael J, Hill F, Levy O, Bowman S (2018) GLUE: A multi-task benchmark and analysis platform for natural language understanding. In: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: analyzing and interpreting neural networks for NLP. Association for computational linguistics, pp 353–355

  146. Clark K, Luong M, Le QV, Manning CD (2020) ELECTRA: pre-training text encoders as discriminators rather than generators. In: 8th International conference on learning representations, ICLR

  147. Joshi M, Chen D, Liu Y, Weld DS, Zettlemoyer L, Levy O (2020) Spanbert: improving pre-training by representing and predicting spans. Trans Assoc Comput Linguistics 8:64–77

  148. Buciluǎ C, Caruana R, Niculescu-Mizil A (2006) Model compression. In: Proceedings of the 12th ACM SIGKDD international conference on knowledge discovery and data mining. KDD ’06, New York, pp 535–541

  149. Hinton G, Vinyals O, Dean J (2015) Distilling the knowledge in a neural network. In: NIPS 2014 deep learning workshop, Montreal, Canada

  150. Gou J, Yu B, Maybank SJ, Tao D (2021) Knowledge distillation: a survey. Int J Comput Vis 129(6):1789–1819

  151. Sanh V, Debut L, Chaumond J, Wolf T (2019) DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In: The 5th workshop on energy efficient machine learning and cognitive computing - NeurIPS

  152. Jiao X, Yin Y, Shang L, Jiang X, Chen X, Li L, Wang F, Liu Q (2020) TinyBERT: distilling BERT for natural language understanding. In: Findings of the association for computational linguistics: EMNLP. Association for computational linguistics, pp 4163–4174

  153. Beltagy I, Peters ME, Cohan A (2020) Longformer: the long-document transformer. CoRR arXiv:2004.05150

  154. Dai Z, Yang Z, Yang Y, Carbonell J, Le Q, Salakhutdinov R (2019) Transformer-XL: attentive language models beyond a fixed-length context. In: Proceedings of the 57th annual meeting of the association for computational linguistics. Association for computational linguistics, pp 2978–2988

  155. Wang W, Wei F, Dong L, Bao H, Yang N, Zhou M (2020) MiniLM: deep self-attention distillation for task-agnostic compression of pre-trained transformers. Adv Neural Inf Process Syst 33:5776–5788

  156. Mirzadeh SI, Farajtabar M, Li A, Levine N, Matsukawa A, Ghasemzadeh H (2020) Improved knowledge distillation via teacher assistant. In: The AAAI conference on artificial intelligence

  157. Zhang Z, Han X, Liu Z, Jiang X, Sun M, Liu Q (2019) ERNIE: enhanced language representation with informative entities. In: Proceedings of the 57th annual meeting of the association for computational linguistics. Association for computational linguistics, pp 1441–1451

  158. Du N, Huang Y, Dai AM, Tong S, Lepikhin D, Xu Y, Krikun M, Zhou Y, Yu AW, Firat O, Zoph B, Fedus L, Bosma M, Zhou Z, Wang T, Wang YE, Webster K, Pellat M, Robinson K, Meier-Hellstern K, Duke T, Dixon L, Zhang K, Le QV, Wu Y, Chen Z, Cui C (2021) GLaM: efficient scaling of language models with mixture-of-experts. CoRR arXiv:2112.06905

  159. Radford A, Narasimhan K, Salimans T, Sutskever I (2018) Improving language understanding by generative pre-training. https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf. Accessed 11 June 2018

  160. Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A et al (2020) Language models are few-shot learners. Adv Neural Inf Process Syst 33:1877–1901

  161. Lewis M, Liu Y, Goyal N, Ghazvininejad M, Mohamed A, Levy O, Stoyanov V, Zettlemoyer L (2020) BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In: Proceedings of the 58th annual meeting of the association for computational linguistics. Association for Computational Linguistics, pp 7871– 7880

  162. Dong L, Yang N, Wang W, Wei F, Liu X, Wang Y, Gao J, Zhou M, Hon H-W (2019) Unified language model pre-training for natural language understanding and generation. Adv Neural Inf Process Syst, vol 32

  163. Raffel C, Shazeer N, Roberts A, Lee K, Narang S, Matena M, Zhou Y, Li W, Liu PJ (2020) Exploring the limits of transfer learning with a unified text-to-text transformer. J Mach Learn Res 21(140):1–67

  164. Xue L, Constant N, Roberts A, Kale M, Al-Rfou R, Siddhant A, Barua A, Raffel C (2021) mT5: a massively multilingual pre-trained text-to-text transformer. In: Proceedings of the 2021 conference of the north american chapter of the association for computational linguistics: human language technologies. Association for computational linguistics, pp 483–498

  165. Ganguli D, Hernandez D, Lovitt L, Askell A, Bai Y, Chen A, Conerly T, Dassarma N, Drain D, Elhage N, El Showk S, Fort S, Hatfield-Dodds Z, Henighan T, Johnston S, Jones A, Joseph N, Kernian J, Kravec S, Mann B, Nanda N, Ndousse K, Olsson C, Amodei D, Brown T, Kaplan J, McCandlish S, Olah C, Amodei D, Clark J (2022) Predictability and surprise in large generative models. In: 2022 ACM conference on fairness, accountability, and transparency. Association for computing machinery, pp 1747–1764

  166. Rosset C (2020) Turing-NLG: a 17-billion-parameter language model by Microsoft. Microsoft Blog, vol 1

  167. Shoeybi M, Patwary M, Puri R, LeGresley P, Casper J, Catanzaro B (2019) Megatron-LM: training multi-billion parameter language models using model parallelism. CoRR arXiv:1909.08053

  168. Kupiec J (1993) Murax: a robust linguistic approach for question answering using an on-line encyclopedia. In: Proceedings of the 16th annual international ACM SIGIR conference on research and development in information retrieval, Pittsburgh, Pennsylvania, USA, pp 181–190

  169. Kwok C, Etzioni O, Weld DS (2001) Scaling question answering to the web. ACM Trans Inf Syst 19:242–262

  170. Brill E, Dumais S, Banko M (2002) An analysis of the AskMSR question-answering system. In: Proceedings of the ACL-02 conference on empirical methods in natural language processing - vol 10, Philadelphia, USA, pp 257–264

  171. Sun R, Jiang J, Fan Y, Hang T, Chua T-S, Kan M-Y (2005) Using syntactic and semantic relation analysis in question answering. In: Proceedings of the fourteenth text retrieval conference, pp 15–18

  172. Xu J, Croft WB (2017) Query expansion using local and global document analysis. SIGIR Forum 51:168–175

  173. Quirk C, Brockett C, Dolan WB (2004) Monolingual machine translation for paraphrase generation. In: Proceedings of the conference on empirical methods in natural language processing, pp 142–149

  174. Bannard C, Callison-Burch C (2005) Paraphrasing with bilingual parallel corpora. In: Proceedings of the 43rd annual meeting on association for computational linguistics, ACL ’05, Ann Arbor, Michigan, pp 597–604

  175. Zhao S, Niu C, Zhou M, Liu T, Li S (2008) Combining multiple resources to improve SMT-based paraphrasing model. In: Proceedings of ACL-08: HLT, Columbus, Ohio, pp 1021–1029

  176. Wubben S, van den Bosch A, Krahmer E (2010) Paraphrase generation as monolingual translation: data and evaluation. In: Proceedings of the 6th international natural language generation conference, INLG ’10, Trim, Ireland, pp 203–207

  177. Li X, Roth D (2002) Learning question classifiers. In: Proceedings of the 19th international conference on computational linguistics - vol 1, COLING ’02, pp 1–7

  178. Suzuki J, Taira H, Sasaki Y, Maeda E (2003) Question classification using HDAG kernel. In: Proceedings of the ACL workshop on multilingual summarization and question answering - vol 12, MultiSumQA ’03, pp 61–68

  179. Rahman Khilji AFU, Manna R, Rahman Laskar S, Pakray P, Das D, Bandyopadhyay S, Gelbukh A (2020) Question classification and answer extraction for developing a cooking qa system. Computación y Sistemas 24:927–933

  180. Zhang D, Lee WS (2003) Question classification using support vector machines. In: Proceedings of the 26th annual international ACM SIGIR conference on research and development in informaion retrieval, pp 26–32

  181. Ferrucci D, Brown E, Chu-Carroll J, Fan J, Gondek D, Kalyanpur AA, Lally A, Murdock JW, Nyberg E, Prager J et al (2010) Building Watson: an overview of the DeepQA project. AI Magazine 31:59–79

  182. Tayyar Madabushi H, Lee M (2016) High accuracy rule-based question classification using question syntax and semantics. In: Proceedings of COLING, the 26th international conference on computational linguistics: technical papers, Osaka, Japan, pp 1220–1230

  183. Croft B, Lafferty J (2003) Language modeling for information retrieval. Springer, vol 13

  184. Robertson S, Zaragoza H (2009) The probabilistic relevance framework: BM25 and beyond. Found Trends Inf Retr 3:333–389

  185. Schütze H, Manning CD, Raghavan P (2008) Introduction to information retrieval. Cambridge University Press, Cambridge

  186. Xiaoli L, Xiaokai Y, Kan L (2021) An improved model of document retrieval efficiency based on information theory. J Physics Conf Series 1848:012094

  187. Izacard G, Petroni F, Hosseini L, Cao ND, Riedel S, Grave E (2020) A memory efficient baseline for open domain question answering. CoRR arXiv:2012.15156

  188. Breja M, Jain SK (2022) Analyzing linguistic features for answer re-ranking of why-questions. J Cases Inf Technol (JCIT) 24:1–16

  189. Ozyurt IB (2021) End-to-end biomedical question answering via bio-answerfinder and discriminative language representation models. CLEF (working notes)

  190. Allam AMN, Haggag MH (2012) The question answering systems: a survey. Int J Res Rev Inf Sci (IJRRIS), vol 2

  191. Wang M et al (2006) A survey of answer extraction techniques in factoid question answering. Comput Linguistics 1:1–14

  192. Mollá D, Van Zaanen M, Smith D (2006) Named entity recognition for question answering. In: Proceedings of the australasian language technology workshop, pp 51–58

  193. Burger J, Cardie C, Chaudhri V, Gaizauskas R, Harabagiu S, Israel D, Jacquemin C, Lin C-Y, Maiorano S, Miller G, Moldovan D, Ogden B, Prager J, Riloff E, Singhal A, Shrihari R, Strazalkowski T, Voorhees E, Weishedel R (2003) Issues, tasks and program structures to roadmap research in question & answering (q & a). In: Document understanding conference

  194. Kolomiyets O, Moens M-F (2011) A survey on question answering technology from an information retrieval perspective. Inf Sci 181:5412–5434

  195. Azad HK, Deepak A (2019) Query expansion techniques for information retrieval: a survey. Inf Process & Manag 56:1698–1735

  196. Garg R, Oh E, Naidech A, Kording K, Prabhakaran S (2019) Automating ischemic stroke subtype classification using machine learning and natural language processing. J Stroke Cerebrovasc Dis 28:2045–2051

  197. Kim C, Zhu V, Obeid J, Lenert L (2019) Natural language processing and machine learning algorithm to identify brain MRI reports with acute ischemic stroke. PLoS One 14:e0212778

  198. Ofer D, Brandes N, Linial M (2021) The language of proteins: NLP, machine learning & protein sequences. Comput Struct Biotechnol J 19:1750–1758

  199. Zhou G, Xie Z, Yu Z, Huang JX (2021) DFM: a parameter-shared deep fused model for knowledge base question answering. Inf Sci 547:103–118

  200. Chen Y, Li H, Hua Y, Qi G (2021) Formal query building with query structure prediction for complex question answering over knowledge base. In: Proceedings of the twenty-ninth international joint conference on artificial intelligence

  201. Abdelaziz I, Ravishankar S, Kapanipathi P, Roukos S, Gray A (2021) A semantic parsing and reasoning-based approach to knowledge base question answering. Proc AAAI Conf Artif Intell 35:15985–15987

  202. Yogish D, Manjunath T, Hegadi RS (2016) A survey of intelligent question answering system using NLP and information retrieval techniques. Int J Adv Res Comput Commun Eng 5:536–540

  203. Pathak A, Manna R, Pakray P, Das D, Gelbukh A, Bandyopadhyay S (2021) Scientific text entailment and a textual-entailment-based framework for cooking domain question answering. Sādhanā 46:1–19

  204. Kaur H, Kumari R (2013) Novel scoring system for identify accurate answers for factoid questions. Int J Sci Res (IJSR) 29:294–297

  205. Moldovan D, Paşca M, Harabagiu S, Surdeanu M (2002) Performance issues and error analysis in an open-domain question answering system. In: Proceedings of the 40th annual meeting on association for computational linguistics, ACL ’02, pp 33–40

  206. Benamara F (2004) Cooperative question answering in restricted domains: the WEBCOOP experiment. In: Proceedings of the conference on question answering in restricted domains, ACL’04, Barcelona, Spain, pp 31–38

  207. Bu F, Zhu X, Hao Y, Zhu X (2010) Function-based question classification for general QA. In: Proceedings of the conference on empirical methods in natural language processing, ACL’10, Cambridge, MA, pp 1119–1128

  208. Indurkhya N, Damerau FJ (2010) Handbook of natural language processing, 2nd edn. Chapman & Hall/CRC

  209. Suresh Kumar G, Zayaraz G (2015) Concept relation extraction using naïve Bayes classifier for ontology-based question answering systems. J King Saud Univ Comput Inf Sci 27:13–24

    Google Scholar 

  210. Dwivedi SK, Singh V (2013) Research and reviews in question answering system. Procedia Technol 10:417–424

  211. Moldovan D, Harabagiu S, Pasca M, Mihalcea R, Girju R, Goodrum R, Rus V (2000) The structure and performance of an open-domain question answering system. In: Proceedings of the 38th annual meeting of the association for computational linguistics, ACL’00, Hong Kong, Chine, pp 563–570

  212. Higashinaka R, Isozaki H (2008) Corpus-based question answering for why-questions. In: Proceedings of the third international joint conference on natural language processing: vol-I, IJCNLP’08, pp 418–425

  213. Verberne S, Boves L, Oostdijk N, Coppen P-A (2008) Using syntactic information for improving why-question answering. In: Proceedings of the 22nd international conference on computational linguistics (Coling), Manchester, UK, pp 953–960

  214. Verberne S, Boves L, Oostdijk N, Coppen P-A (2010) What is not in the bag of words for why-QA? Comput Linguistics 36:229–245

  215. Wu Y, Hori C, Kashioka H, Kawai H (2015) Leveraging social q&a collections for improving complex question answering. Comput Speech & Language 29:1–19

  216. Cui H, Kan M-Y, Chua T-S (2007) Soft pattern matching models for definitional question answering. ACM Trans Inf Syst 25:8

  217. Missen MMS, Boughanem M, Cabanac G (2009) Challenges for sentence level opinion detection in blogs. In: Eighth IEEE/ACIS international conference on computer and information science, pp 347–351

  218. Missen MMS, Boughanem M, Cabanac G (2010) Opinion finding in blogs: a passage-based language modeling approach. In: Adaptivity, personalization and fusion of heterogeneous information, RIAO ’10, pp 148–152

  219. Poria S, Gelbukh A, Cambria E, Yang P, Hussain A, Durrani T (2012) Merging senticnet and wordnet-affect emotion lists for sentiment analysis. In: IEEE 11th international conference on signal processing, pp 1251–1255

  220. Poria S, Gelbukh A, Das D, Bandyopadhyay S (2012) Fuzzy clustering for semi-supervised learning–case study: construction of an emotion lexicon. In: Mexican international conference on artificial intelligence, MICAI’12, pp 73–86

  221. Poria S, Cambria E, Winterstein G, Huang G-B (2014) Sentic patterns: dependency-based rules for concept-level sentiment analysis. Knowl-Based Syst 69:45–63

  222. Basuki S, Purwarianti A (2016) Statistical-based approach for indonesian complex factoid question decomposition. Int J Electr Eng Inf 8:356–373

  223. Yao X, Van Durme B (2014) Information extraction over structured data: Question answering with Freebase. In: Proceedings of the 52nd annual meeting of the association for computational linguistics (vol 1: long papers), Baltimore, Maryland, pp 956–966

  224. Sacaleanu B, Orasan C, Spurk C, Ou S, Ferrandez O, Kouylekov M, Negri M (2008) Entailment-based question answering for structured data. In: Coling: companion volume: demonstrations, manchester, UK, pp 173–176

  225. Oguz B, Chen X, Karpukhin V, Peshterliev S, Okhonko D, Schlichtkrull MS, Gupta S, Mehdad Y, Yih S (2020) Unified open-domain question answering with structured and unstructured knowledge. CoRR arXiv:2012.14610

  226. Zhu F, Lei W, Huang Y, Wang C, Zhang S, Lv J, Feng F, Chua T-S (2021) TAT-QA: a question answering benchmark on a hybrid of tabular and textual content in finance. In: Proceedings of the 59th annual meeting of the association for computational linguistics and the 11th international joint conference on natural language processing (vol 1: long papers). Association for Computational Linguistics, pp 3277–3287

  227. Pinto D, Branstein M, Coleman R, Croft WB, King M, Li W, Wei X (2002) QuASM: a system for question answering using semi-structured data. In: Proceedings of the 2nd ACM/IEEE-CS joint conference on digital libraries, pp 46–55

  228. Norvig P, Lakoff G (1987) Taking: a study in lexical network theory. In: Annual meeting of the berkeley linguistics society, BLS’87, pp 195–206

  229. Seo MJ, Kembhavi A, Farhadi A, Hajishirzi H (2017) Bidirectional attention flow for machine comprehension. In: 5th International conference on learning representations, ICLR

  230. Hermann KM, Kocisky T, Grefenstette E, Espeholt L, Kay W, Suleyman M, Blunsom P (2015) Teaching machines to read and comprehend. Adv Neural Inf Process Syst, vol 28

  231. Chen M, D’arcy M, Liu A, Fernandez J, Downey D (2019) CODAH: an adversarially-authored question answering dataset for common sense. In: Proceedings of the 3rd workshop on evaluating vector space representations for NLP. Association for Computational Linguistics, pp 63–69

  232. Reddy S, Chen D, Manning CD (2019) CoQA: a conversational question answering challenge. Trans Assoc Comput Linguist 7:249–266

  233. Yang Z, Qi P, Zhang S, Bengio Y, Cohen W, Salakhutdinov R, Manning CD (2018) HotpotQA: a dataset for diverse, explainable multi-hop question answering. In: Proceedings of the 2018 conference on empirical methods in natural language processing. Association for Computational Linguistics, pp 2369–2380

  234. Nguyen T, Rosenberg M, Song X, Gao J, Tiwary S, Majumder R, Deng L (2017) MS MARCO: a human generated machine reading comprehension dataset. In: 5th International conference on learning representations, ICLR

  235. Khashabi D, Chaturvedi S, Roth M, Upadhyay S, Roth D (2018) Looking beyond the surface: a challenge set for reading comprehension over multiple sentences. In: Proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, vol 1 (long papers), New Orleans, Louisiana, pp 252–262

  236. Kwiatkowski T, Palomaki J, Redfield O, Collins M, Parikh A, Alberti C, Epstein D, Polosukhin I, Kelcey M, Devlin J, Lee K, Toutanova K, Jones L, Chang M-W, Dai A, Uszkoreit J, Le Q, Petrov S (2019) Natural questions: a benchmark for question answering research. Trans Assoc Comput Linguist 7:452–466

  237. Trischler A, Wang T, Yuan X, Harris J, Sordoni A, Bachman P, Suleman K (2017) NewsQA: a machine comprehension dataset. In: Proceedings of the 2nd workshop on representation learning for NLP. Association for Computational Linguistics, pp 191–200

  238. Welbl J, Stenetorp P, Riedel S (2018) Constructing datasets for multi-hop reading comprehension across documents. Trans Assoc Comput Linguist 6:287–302

  239. Choi E, He H, Iyyer M, Yatskar M, Yih W-T, Choi Y, Liang P, Zettlemoyer L (2018) QuAC: question answering in context. In: Proceedings of the 2018 conference on empirical methods in natural language processing. Association for Computational Linguistics, pp 2174–2184

  240. Mostafazadeh N, Roth M, Louis A, Chambers N, Allen J (2017) LSDSEm 2017 shared task: the story cloze test. In: Proceedings of the 2nd workshop on linking models of lexical, sentential and discourse-level semantics, Valencia, Spain, pp 46–51

  241. Chambers N, Jurafsky D (2008) Unsupervised learning of narrative event chains. In: Proceedings of ACL-08: HLT, Columbus, Ohio, pp 789–797

  242. Zellers R, Bisk Y, Schwartz R, Choi Y (2018) SWAG: a large-scale adversarial dataset for grounded commonsense inference. In: Proceedings of the 2018 conference on empirical methods in natural language processing. Association for Computational Linguistics, pp 93–104

  243. Rohrbach A, Torabi A, Rohrbach M, Tandon N, Pal C, Larochelle H, Courville A, Schiele B (2017) Movie description. Int J Comput Vis 123(1):94–120

  244. Krishna R, Hata K, Ren F, Fei-Fei L, Carlos Niebles J (2017) Dense-captioning events in videos. In: Proceedings of the IEEE international conference on computer vision, pp 706–715

  245. Heilbron FC, Escorcia V, Ghanem B, Niebles JC (2015) Activitynet: a large-scale video benchmark for human activity understanding. In: IEEE Conference on computer vision and pattern recognition (CVPR), pp 961–970

  246. Yagcioglu S, Erdem A, Erdem E, Ikizler-Cinbis N (2018) RecipeQA: a challenge dataset for multimodal comprehension of cooking recipes. In: Proceedings of the 2018 conference on empirical methods in natural language processing. Association for Computational Linguistics, pp 1358–1368

  247. Kočiský T, Schwarz J, Blunsom P, Dyer C, Hermann KM, Melis G, Grefenstette E (2018) The NarrativeQA reading comprehension challenge. Trans Assoc Comput Linguist 6:317–328

  248. Joshi M, Choi E, Weld D, Zettlemoyer L (2017) TriviaQA: a large scale distantly supervised challenge dataset for reading comprehension. In: Proceedings of the 55th annual meeting of the association for computational linguistics (vol 1: long papers). Association for computational linguistics, pp 1601–1611

  249. Dua D, Wang Y, Dasigi P, Stanovsky G, Singh S, Gardner M (2019) DROP: a reading comprehension benchmark requiring discrete reasoning over paragraphs. In: Proceedings of the 2019 conference of the north american chapter of the association for computational linguistics:human language technologies, vol 1 (long and short papers). Association for computational linguistics, pp 2368–2378

  250. Huang L, Le Bras R, Bhagavatula C, Choi Y (2019) Cosmos QA: machine reading comprehension with contextual commonsense reasoning. In: Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP). Association for computational linguistics, Hong Kong, China, pp 2391–2401

  251. Yu W, Jiang Z, Dong Y, Feng J (2020) Reclor: a reading comprehension dataset requiring logical reasoning. In: 8th International conference on learning representations, ICLR

  252. Dunn M, Sagun L, Higgins M, Güney VU, Cirik V, Cho K (2017) SearchQA: a new Q&A dataset augmented with context from a search engine. CoRR arXiv:1704.05179

  253. Usbeck R, Gusmita RH, Ngomo AN, Saleem M (2018) 9th challenge on question answering over linked data (QALD-9) (invited paper). In: Joint proceedings of the 4th workshop on semantic deep learning (SemDeep-4) and NLIWoD4: natural language interfaces for the web of data (NLIWOD-4) and 9th question answering over linked data challenge (QALD-9) co-located with 17th international semantic web conference (ISWC 2018), pp 58–64

  254. Raghavan P, Liang JJ, Mahajan D, Chandra R, Szolovits P (2021) emrKBQA: a clinical knowledge-base question answering dataset. In: Proceedings of the 20th workshop on biomedical language processing, Online, pp 64–73

  255. Kusner MJ, Sun Y, Kolkin NI, Weinberger KQ (2015) From word embeddings to document distances. In: Proceedings of the 32nd international conference on international conference on machine learning - vol 37, ICML’15, pp 957–966

  256. Mikolov T, Chen K, Corrado G, Dean J (2013) Efficient estimation of word representations in vector space. In: 1st International conference on learning representations, ICLR

  257. Pennington J, Socher R, Manning C (2014) Glove: global vectors for word representation. In: Proceedings of the conference on empirical methods in natural language processing (EMNLP), Doha, Qatar, pp 1532–1543

  258. Joulin A, Grave E, Bojanowski P, Mikolov T (2017) Bag of tricks for efficient text classification. In: Proceedings of the 15th conference of the european chapter of the association for computational linguistics: vol 2, short papers. Association for computational linguistics, pp 427–431

  259. Sethy A, Ramabhadran B (2008) Bag-of-word normalized n-gram models. In: INTERSPEECH, 9th annual conference of the international speech communication association, ISCA’08

  260. Papineni K, Roukos S, Ward T, Zhu W-J (2002) BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th annual meeting on association for computational linguistics, Philadelphia, Pennsylvania, pp 311–318

  261. Lavie A, Agarwal A (2007) METEOR: an automatic metric for MT evaluation with high levels of correlation with human judgments. In: Proceedings of the second workshop on statistical machine translation, StatMT ’07, Prague, Czech Republic, pp 228–231

  262. Denkowski M, Lavie A (2014) Meteor universal: language specific translation evaluation for any target language. In: Proceedings of the ninth workshop on statistical machine translation, ACL’14, Baltimore, USA, pp 376–380

  263. Guo Y, Hu J (2019) Meteor++ 2.0: adopt syntactic level paraphrase knowledge into machine translation evaluation. In: Proceedings of the fourth conference on machine translation, Florence, Italy, pp 501–506

  264. Clark E, Celikyilmaz A, Smith NA (2019) Sentence mover’s similarity: automatic evaluation for multi-sentence texts. In: Proceedings of the 57th annual meeting of the association for computational linguistics, ACL’19, Florence, Italy, pp 2748–2760

  265. Lowe R, Noseworthy M, Serban IV, Angelard-Gontier N, Bengio Y, Pineau J (2017) Towards an automatic turing test: learning to evaluate dialogue responses. In: Proceedings of the 55th annual meeting of the association for computational linguistics (vol 1: long papers). Association for computational linguistics, pp 1116–1126

  266. Zhang T, Kishore V, Wu F, Weinberger KQ, Artzi Y (2020) BERTScore: evaluating text generation with BERT. In: 8th International conference on learning representations, ICLR

  267. Tao C, Mou L, Zhao D, Yan R (2018) RUBER: an unsupervised method for automatic evaluation of open-domain dialog systems. In: Proceedings of the thirty-second AAAI conference on artificial intelligence, (AAAI-18), the 30th innovative applications of artificial intelligence (IAAI-18), and the 8th AAAI symposium on educational advances in artificial intelligence (EAAI-18)

  268. Grice HP (1975) Logic and conversation. In: Cole P, Morgan JL (eds) Syntax and semantics: vol 3: speech acts. Academic Press, pp 41–58

  269. Vu T, Moschitti A (2021) AVA: an automatic evaluation approach for question answering systems. In: Proceedings of the 2021 conference of the north american chapter of the association for computational linguistics: human Language technologies. Association for computational linguistics, pp 5223–5233

  270. Liu X, Wang Y, Ji J, Cheng H, Zhu X, Awa E, He P, Chen W, Poon H, Cao G, Gao J (2020) The Microsoft toolkit of multi-task deep neural networks for natural language understanding. In: Proceedings of the 58th annual meeting of the association for computational linguistics: system demonstrations. Association for computational linguistics, pp 118–126

Acknowledgements

This research was enabled in part by support provided by the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference number RGPIN-2018-06233.

Author information

Corresponding author

Correspondence to Khalid Nassiri.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Moulay Akhloufi contributed equally to this work.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Nassiri, K., Akhloufi, M. Transformer models used for text-based question answering systems. Appl Intell 53, 10602–10635 (2023). https://doi.org/10.1007/s10489-022-04052-8
