
A General Framework for Multiple Choice Question Answering Based on Mutual Information and Reinforced Co-occurrence

  • Jorge Martinez-Gil
  • Bernhard Freudenthaler
  • A Min Tjoa
Chapter
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11860)

Abstract

As the volume of available information continues to grow, browsing and querying textual sources in search of specific facts has become a tedious task, made worse by the fact that data is often not presented in a way that meets users' needs. To address these ever-increasing needs, we have designed an adaptive and intelligent solution for automatically answering multiple-choice questions based on the concept of mutual information. An empirical evaluation over several general-purpose benchmark datasets suggests that this solution is promising.
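
Although this page carries only the abstract, the core idea it refers to, ranking candidate answers by their mutual information with the question, can be sketched. The following is a minimal, hypothetical sketch in Python, assuming co-occurrence statistics (term_counts, pair_counts, total) have already been gathered from some reference corpus; the function names and the simple averaging scheme are illustrative assumptions, not the authors' actual implementation.

```python
import math
from collections import Counter
from itertools import product

def pmi(pair_count: int, x_count: int, y_count: int, total: int) -> float:
    """Pointwise mutual information of two terms from raw co-occurrence counts."""
    if pair_count == 0 or x_count == 0 or y_count == 0:
        return 0.0
    return math.log2((pair_count / total) / ((x_count / total) * (y_count / total)))

def score_candidate(question_terms, candidate_terms,
                    term_counts: Counter, pair_counts: Counter, total: int) -> float:
    """Average PMI between every question term and every candidate-answer term."""
    pairs = list(product(question_terms, candidate_terms))
    if not pairs:
        return 0.0
    return sum(
        pmi(pair_counts[frozenset((q, c))], term_counts[q], term_counts[c], total)
        for q, c in pairs
    ) / len(pairs)

def answer(question_terms, candidates, term_counts, pair_counts, total):
    """Return the candidate whose terms co-occur most strongly with the question."""
    return max(candidates,
               key=lambda cand: score_candidate(question_terms, cand,
                                                term_counts, pair_counts, total))
```

In this sketch, each candidate answer is scored by the average pointwise mutual information between its terms and the question terms, and the highest-scoring candidate is selected; the actual system described in the chapter may aggregate co-occurrence evidence differently.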

Keywords

Expert systems · Knowledge engineering · Information retrieval · Question answering

Acknowledgements

We would like to thank the anonymous reviewers for their helpful suggestions to improve this work. This research has been supported by the Austrian Ministry for Transport, Innovation and Technology, the Federal Ministry of Science, Research and Economy, and the Province of Upper Austria within the framework of the COMET center SCCH.


Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  • Jorge Martinez-Gil (1)
  • Bernhard Freudenthaler (1)
  • A Min Tjoa (1, 2)

  1. Software Competence Center Hagenberg GmbH, Hagenberg, Austria
  2. Vienna University of Technology, Vienna, Austria
