
Automatic Answering Method Considering Word Order for Slot Filling Questions of University Entrance Examinations

  • Ryo Tagami
  • Tasuku Kimura
  • Hisashi Miyamori
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10647)

Abstract

Recently, automatic answering technologies such as question answering have attracted attention as a means of satisfying users' diverse information needs. In this paper, we propose an automatic answering method that considers word order for slot-filling questions in the world history problems of university entrance examinations. Specifically, when analyzing a question sentence, the answer category is estimated from the words surrounding the slot to be filled and is used to extract answer candidates. These candidates are then evaluated with an indicator that combines consistency with the estimated category and the occurrence of the surrounding words. In the experiments, we first compare the accuracy of the word prediction models, and then compare the proposed method with the baseline method to examine how the correct answer rate changes.
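The following is a minimal sketch, not the authors' implementation, of the candidate-scoring idea described above: a context vector is built from the words around the blank, an answer category is estimated from it, and each candidate is scored by a weighted combination of category consistency and context fit. The embedding dictionary, category prototype vectors, and the weight alpha are illustrative assumptions, and the actual method's handling of word order is not reproduced here.

import numpy as np

def cosine(u, v):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def score_candidates(surrounding_words, candidates, embeddings, category_vectors, alpha=0.5):
    """Rank candidate answers for a blank given the words around it.

    embeddings: dict mapping word -> np.ndarray (e.g. vectors trained with word2vec)
    category_vectors: dict mapping answer category (person, place, ...) -> prototype vector
    alpha: illustrative weight balancing category consistency and context fit
    """
    # Context vector: average of the embeddings of the surrounding words
    # (assumes at least one surrounding word is in the vocabulary).
    ctx_vecs = [embeddings[w] for w in surrounding_words if w in embeddings]
    context = np.mean(ctx_vecs, axis=0)

    # Estimate the answer category as the prototype closest to the context.
    category = max(category_vectors, key=lambda c: cosine(context, category_vectors[c]))

    scores = {}
    for cand in candidates:
        if cand not in embeddings:
            continue
        cat_consistency = cosine(embeddings[cand], category_vectors[category])
        context_fit = cosine(embeddings[cand], context)
        # Linear combination of the two evidence sources.
        scores[cand] = alpha * cat_consistency + (1 - alpha) * context_fit
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return category, ranked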

Keywords

Factoid question answering · Automatic answering · University entrance examination · Distributed representation · Word order

Notes

Acknowledgment

A part of this work was supported by Kyoto Sangyo University Research Grants.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Division of Frontier Informatics, Graduate School of Kyoto Sangyo University, Kyoto-shi, Japan
