Neural architecture for question answering using a knowledge graph and web corpus

Abstract

In Web search, entity-seeking queries often trigger a special question answering (QA) system. Such a system may use a parser to interpret the question into a structured query, execute it on a knowledge graph (KG), and return direct entity responses. QA systems based on precise parsing tend to be brittle: minor syntax variations may dramatically change the response. Moreover, KG coverage is patchy. At the other extreme, a large corpus may provide broader coverage, but in an unstructured, unreliable form. We present AQQUCN, a QA system that gracefully combines KG and corpus evidence. AQQUCN accepts a broad spectrum of query syntax, from well-formed questions to short “telegraphic” keyword sequences. In the face of inherent query ambiguities, AQQUCN aggregates signals from KGs and large corpora to directly rank KG entities, rather than committing to one semantic interpretation of the query. AQQUCN models the ideal interpretation as an unobservable or latent variable. Interpretations and candidate entity responses are scored as pairs, by combining signals from multiple convolutional networks that operate collectively on the query, KG and corpus. On four public query workloads, amounting to over 8000 queries with diverse syntax, we see 5–16% absolute improvement in mean average precision (MAP) over the entity ranking performance of recent systems. Our system is also competitive at entity set retrieval, almost doubling F1 scores for challenging short queries.
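The ranking scheme sketched in the abstract — score (interpretation, entity) pairs, treat the interpretation as latent, and rank entities directly — can be illustrated with a minimal sketch. All names here (`rank_entities`, `score_pair`) are hypothetical, not from AQQUCN's code, and the pair scorer stands in for the combined convolutional-network signals.

```python
def rank_entities(query, interpretations, candidates, score_pair, top_k=10):
    """Rank candidate entities without committing to a single query
    interpretation: every (interpretation, entity) pair is scored jointly,
    and an entity keeps its best score over all latent interpretations."""
    best = {}
    for interp in interpretations:
        for ent in candidates:
            # score_pair stands in for the aggregated signals of the
            # convolutional networks over query, KG and corpus evidence.
            s = score_pair(query, interp, ent)
            if s > best.get(ent, float("-inf")):
                best[ent] = s
    return sorted(best, key=best.get, reverse=True)[:top_k]
```

A max over interpretations is only one possible aggregator; a soft combination (e.g. log-sum-exp) would slot into the same loop.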



Notes

  1.

    These are known as KBQA or “Knowledge Base Question Answering” systems.

  2.

    Our system is named AQQUCN because it augments the AQQU system of Bast and Haußmann (2015) with convolutional networks.

  3.

    Hints of relation r may be distributed among multiple disjoint spans, but this is not a serious problem for our proposed system, because we allow spans with multiple roles.

  4.

    As in all QA systems, \(e_1\)-linking accuracy does affect QA accuracy, but the variation is hard to characterize without a battery of entity linking methods with carefully controlled recall/precision profiles. AQQU gave slightly better accuracy with TagMe than with its own linker, so we used TagMe for all experiments. SMAPH (Cornolti et al. 2014) would be a better choice, but it is provided only as a network service, and it in turn needs Google search as yet another network service, which imposes severe usage-volume restrictions.

  5.

    Two hops are needed to traverse mediator nodes like m.

  6.

    For simplicity, we describe the single-relation case; multi-hop cases with mediator nodes are handled analogously.

  7.

    We use K for the number of top entities in the response to the user, and \(K'\) for the number of interpretations to be used internally.

  8.

    Also see Chapter 11 (End-to-end Deep Learning) of http://www.mlyearning.org/.

  9.

    https://github.com/marcotcr/lime.

  10.

    For three cases, only the code of AQQU (Bast and Haußmann 2015) and Sempre (Berant and Liang 2015) was available. Text2KB is available at https://github.com/DenXX/aqqu, but with missing corpus files and no format specification.
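The two-hop traversal through mediator nodes mentioned in footnote 5 can be sketched as below. The adjacency representation — a dict mapping (node, relation) pairs to neighbor lists — is an assumption for illustration, not Freebase's actual storage format.

```python
def two_hop_answers(kg, e1, rel1, rel2):
    """Collect entities reachable from e1 by following rel1 to a mediator
    node m (a Freebase CVT-style node), then rel2 from m to the answer."""
    answers = set()
    for m in kg.get((e1, rel1), ()):           # hop 1: entity -> mediator
        answers.update(kg.get((m, rel2), ()))  # hop 2: mediator -> answer
    return answers
```

The single-relation case of footnote 6 is the degenerate version with one hop; multi-hop interpretations just chain more relation lookups.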

References

  1. Andreas, J., Rohrbach, M., Darrell, T., & Klein, D. (2016). Learning to compose neural networks for question answering. arXiv preprint arXiv:1601.01705.

  2. Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. CoRR. arXiv:1409.0473

  3. Balog, K., Azzopardi, L., & de Rijke, M. (2006). Formal models for expert finding in enterprise corpora. In SIGIR conference (pp. 43–50). https://doi.org/10.1145/1148170.1148181. http://staff.science.uva.nl/~kbalog/files/sigir2006-expertsearch.pdf

  4. Balog, K., Azzopardi, L., & de Rijke, M. (2009). A language modeling framework for expert finding. Information Processing and Management, 45(1), 1–19. https://doi.org/10.1016/j.ipm.2008.06.003


  5. Bast, H., & Buchhold, B. (2017). QLever: A query engine for efficient SPARQL+text search. In CIKM (pp. 647–656). https://github.com/ad-freiburg/QLever

  6. Bast, H., & Haußmann, E. (2015). More accurate question answering on Freebase. In CIKM (pp. 1431–1440). http://ad-publications.informatik.uni-freiburg.de/CIKM_freebase_qa_BH_2015.pdf

  7. Berant, J., & Liang, P. (2015). Imitation learning of agenda-based semantic parsers. TACL 3, 545–558. https://www.transacl.org/ojs/index.php/tacl/article/viewFile/646/160

  8. Berant, J., Chou, A., Frostig, R., & Liang, P. (2013). Semantic parsing on Freebase from question-answer pairs. In EMNLP conference (pp. 1533–1544). http://aclweb.org/anthology//D/D13/D13-1160.pdf

  9. Bollacker, K., Evans, C., Paritosh, P., Sturge, T., & Taylor, J. (2008). Freebase: A collaboratively created graph database for structuring human knowledge. In SIGMOD conference (pp. 1247–1250). http://ids.snu.ac.kr/w/images/9/98/sc17.pdf

  10. Bordes, A., Chopra, S., & Weston, J. (2014). Question answering with subgraph embeddings. arXiv preprint arXiv:1406.3676.

  11. Bordes, A., Usunier, N., Chopra, S., & Weston, J. (2015). Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075.

  12. Cardie, C. (2012). CS 4740: Introduction to natural language processing. https://www.cs.cornell.edu/courses/cs4740/2012sp/lectures.htm

  13. Chakrabarti, S. (2010). Bridging the structured-unstructured gap: Searching the annotated Web. Keynote talk at WSDM 2010. https://www.cse.iitb.ac.in/~soumen/doc/wsdm2010/TalkSlides.pdf

  14. ClueWeb09. (2009). http://www.lemurproject.org/clueweb09.php/

  15. CodaLab. (2016). Webquestions benchmark for question answering. http://bit.ly/2kvXroJ

  16. Cornolti, M., Ferragina, P., Ciaramita, M., Rued, S., & Schuetze, H. (2014). The SMAPH system for query entity recognition and disambiguation. In ERD challenge workshop. https://research.google.com/pubs/archive/42720.pdf

  17. CSAW. (2018). The CSAW project at IIT Bombay. https://www.cse.iitb.ac.in/~soumen/doc/CSAW/

  18. Dalton, J., Dietz, L., & Allan, J. (2014). Entity query feature expansion using knowledge base links. In SIGIR conference. http://ciir-publications.cs.umass.edu/pub/web/getpdf.php?id=1143

  19. Dong, L., & Lapata, M. (2016). Language to logical form with neural attention. In ACL (Vol. 1, pp. 33–43). arXiv:1601.01280.

  20. Dong, L., Wei, F., Zhou, M., & Xu, K. (2015). Question answering over Freebase with multi-column convolutional neural networks. In ACL conference.

  21. Fang, Y., Si, L., & Mathur, A. P. (2010). Discriminative models of integrating document evidence and document-candidate associations for expert search. In SIGIR conference. http://www.cs.purdue.edu/homes/fangy/SIGIR2010_Expert_Search.pdf

  22. Ferragina, P., & Scaiella, U. (2010). TAGME: On-the-fly annotation of short text fragments (by wikipedia entities). arXiv:1006.3498.

  23. Gabrilovich, E., Ringgaard, M., & Subramanya, A. (2013). FACC1: Freebase annotation of ClueWeb corpora. http://lemurproject.org/clueweb12/, version 1 (Release date 2013-06-26, Format version 1, Correction level 0).

  24. Ganea, O. E., & Hofmann, T. (2017). Deep joint entity disambiguation with local neural attention. arXiv preprint arXiv:1704.04920.

  25. Globerson, A., Lazic, N., Chakrabarti, S., Subramanya, A., Ringgaard, M., & Pereira, F. (2016). Collective entity resolution with multi-focal attention. In ACL conference (pp. 621–631). https://www.aclweb.org/anthology/P/P16/P16-1059.pdf

  26. Huang, E. H., Socher, R., Manning, C. D., & Ng, A. Y. (2012). Improving word representations via global context and multiple word prototypes. In ACL conference.

  27. Hui, K., Yates, A., Berberich, K., & de Melo, G. (2017). PACRR: A position-aware neural IR model for relevance matching. arXiv preprint arXiv:1704.03940.

  28. Hui, K., Yates, A., Berberich, K., & de Melo, G. (2018). Co-PACRR: A context-aware neural IR model for ad-hoc retrieval. In WSDM conference (pp. 279–287).

  29. Iyyer, M., Boyd-Graber, J., Claudino, L., Socher, R., & Daumé, III. H. (2014). A neural network for factoid question answering over paragraphs. In EMNLP conference (pp. 633–644).

  30. Joachims, T. (2002). Optimizing search engines using clickthrough data. In SIGKDD conference, ACM (pp. 133–142). http://www.cs.cornell.edu/People/tj/publications/joachims_02c.pdf

  31. Joshi, M., Sawant, U., & Chakrabarti, S. (2014). Knowledge graph and corpus driven segmentation and answer inference for telegraphic entity-seeking queries. In EMNLP conference (pp. 1104–1114). http://www.emnlp2014.org/papers/pdf/EMNLP2014117.pdf, download http://bit.ly/1OCKbVW

  32. Kasneci, G., Suchanek, F. M., Ifrim, G., Ramanath, M., & Weikum, G. (2008). NAGA: Searching and ranking knowledge. In ICDE, IEEE.

  33. Kim, Y. (2014). Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882.

  34. Kwiatkowski, T., Choi, E., Artzi, Y., & Zettlemoyer, L. S. (2013). Scaling semantic parsers with on-the-fly ontology matching. In EMNLP conference (pp. 1545–1556). http://homes.cs.washington.edu/~lsz/papers/kcaz-emnlp13.pdf

  35. Liang, C., Berant, J., Le, Q., Forbus, K. D., & Lao, N. (2016). Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. arXiv:1611.00020.

  36. Lin, T., Pantel, P., Gamon, M., Kannan, A., & Fuxman, A. (2012). Active objects: Actions for entity-centric search. In WWW conference, ACM (pp. 589–598). https://doi.org/10.1145/2187836.2187916. http://research.microsoft.com/apps/pubs/default.aspx?id=161389

  37. Ling, X., & Weld, D. S. (2012). Fine-grained entity recognition. In AAAI conference. http://xiaoling.github.io/pubs/ling-aaai12.pdf

  38. Liu, T. Y. (2009). Learning to rank for information retrieval. In Foundations and trends in information retrieval (Vol. 3, pp. 225–331). Now Publishers. https://doi.org/10.1561/1500000016. http://www.nowpublishers.com/product.aspx?product=INR&doi=1500000016

  39. Lv, Y., & Zhai, C. (2009). Positional language models for information retrieval. In SIGIR conference (pp. 299–306). https://doi.org/10.1145/1571941.1571994. http://sifaka.cs.uiuc.edu/czhai/pub/sigir09-PLM.pdf

  40. MacAvaney, S., Yates, A., Cohan, A., Soldaini, L., Hui, K., Goharian, N., & Frieder, O. (2018). Characterizing question facets for complex answer retrieval. arXiv preprint arXiv:1805.00791.

  41. Macdonald, C., & Ounis, I. (2006). Voting for candidates: Adapting data fusion techniques for an expert search task. In CIKM (pp. 387–396). https://doi.org/10.1145/1183614.1183671.

  42. Macdonald, C., & Ounis, I. (2011). Learning models for ranking aggregates. In Advances in information retrieval. LNCS (Vol. 6611, pp. 517–529). New York: Springer. http://www.dcs.gla.ac.uk/~craigm/publications/macdonald11learned.pdf

  43. Miller, A. H., Fisch, A., Dodge, J., Karimi, A., Bordes, A., & Weston, J. (2016). Key-value memory networks for directly reading documents. arXiv:1606.03126.

  44. Murdock, J. W., Kalyanpur, A., Welty, C., Fan, J., Ferrucci, D. A., Gondek, D. C., Zhang, L., & Kanayama, H. (2012). Typing candidate answers using type coercion. IBM Journal of Research and Development 56(3/4), 7:1–7:13. https://pdfs.semanticscholar.org/765d/0956e46846a33a1062749daede11ba71680f.pdf

  45. Petkova, D., & Croft, W. B. (2007). Proximity-based document representation for named entity retrieval. In CIKM (pp. 731–740). ACM. https://doi.org/10.1145/1321440.1321542. http://portal.acm.org/citation.cfm?id=1321440.1321542

  46. Pound, J., Hudek, A. K., Ilyas, I. F., & Weddell, G. (2012). Interpreting keyword queries over Web knowledge bases. In CIKM. https://cs.uwaterloo.ca/~jpound/pubs/pound-cikm2012.pdf

  47. Reed, S., & De Freitas, N. (2015). Neural programmer-interpreters. arXiv preprint arXiv:1511.06279.

  48. Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?” Explaining the predictions of any classifier. In SIGKDD conference (pp. 1135–1144).

  49. Roth, D. (2017). On the necessity of learning and reasoning: A perspective from natural language understanding. McCarthy award acceptance speech at IJCAI 2017. https://www.youtube.com/watch?v=tAKn3Gt75rg

  50. Saha, A., Pahuja, V., Khapra, M. M., Sankaranarayanan, K., & Chandar, S. (2018). Complex sequential question answering: Towards learning to converse over linked question answer pairs with a knowledge graph. arXiv preprint arXiv:1801.10314.

  51. Savenkov, D., & Agichtein, E. (2016). When a knowledge base is not enough: Question answering over knowledge bases with external text data. In SIGIR conference (pp. 235–244). https://dl.acm.org/citation.cfm?id=2911536

  52. Savenkov, D., & Agichtein, E. (2017). Evinets: Neural networks for combining evidence signals for factoid question answering. In ACL conference (Vol. 2, pp. 299–304). http://aclweb.org/anthology/P17-2047

  53. Sawant, U., & Chakrabarti, S. (2013). Features and aggregators for web-scale entity search. arXiv:1303.3164.

  54. Severyn, A., & Moschitti, A. (2015). Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval (pp. 373–382). http://disi.unitn.it/~severyn/papers/sigir-2015-long.pdf

  55. Shalev-Shwartz, S., & Shashua, A. (2016). On the sample complexity of end-to-end training vs. semantic abstraction training. arXiv:1604.06915

  56. Wang, M. (2006). A survey of answer extraction techniques in factoid question answering. Computational Linguistics 1(1). https://nlp.stanford.edu/mengqiu/publication/LSII-LitReview.pdf

  57. West, R., Gabrilovich, E., Murphy, K., Sun, S., Gupta, R., & Lin, D. (2014). Knowledge base completion via search-based question answering. In WWW conference (pp. 515–526). https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/42024.pdf

  58. Xiong, C., Callan, J., & Liu, T. Y. (2017). Word-entity duet representations for document ranking. In SIGIR conference (pp. 763–772). arXiv:1706.06636

  59. Xu, K., Reddy, S., Feng, Y., Huang, S., & Zhao, D. (2016). Question answering on Freebase via relation extraction and textual evidence. arXiv preprint arXiv:1603.00957.

  60. Yahya, M., Berberich, K., Elbassuoni, S., Ramanath, M., Tresp, V., & Weikum, G. (2012). Natural language questions for the Web of data. In EMNLP conference, Jeju Island, Korea (pp. 379–390). http://www.aclweb.org/anthology/D12-1035

  61. Yang, M. C., Duan, N., Zhou, M., & Rim, H. C. (2014). Joint relational embeddings for knowledge-based question answering. In EMNLP conference (pp. 645–650).

  62. Yao, X. (2015). Lean question answering over Freebase from scratch. In NAACL conference (pp. 66–70). http://www.aclweb.org/website/old_anthology/N/N15/N15-3014.pdf

  63. Yao, X., & Van Durme, B. (2014). Information extraction over structured data: Question answering with Freebase. In ACL conference, ACL. http://www.cs.jhu.edu/~xuchen/paper/yao-jacana-freebase-acl2014.pdf

  64. Yavuz, S., Gur, I., Su, Y., Srivatsa, M., & Yan, X. (2016). Improving semantic parsing via answer type inference. In EMNLP conference (pp. 149–159). http://cs.ucsb.edu/~ysu/papers/emnlp16_type.pdf

  65. Yih, S. Wt., Chang, M. W., He, X., & Gao, J. (2015). Semantic parsing via staged query graph generation: Question answering with knowledge base. In ACL conference (pp. 1321–1331). http://anthology.aclweb.org/P/P15/P15-1128.pdf

  66. Zhiltsov, N., Kotov, A., & Nikolaev, F. (2015). Fielded sequential dependence model for ad-hoc entity retrieval in the Web of data. In SIGIR conference (pp. 253–262). http://www.cs.wayne.edu/kotov/docs/zhiltsov-sigir15.pdf

  67. Zhong, V., Xiong, C., & Socher, R. (2017). Seq2SQL: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103.


Acknowledgements

Thanks to the reviewers for their constructive suggestions. Thanks to Elmar Haußmann for generous help with AQQU. Thanks to Doug Oard for advice on set versus ranked retrieval. Thanks to Saurabh Sarda for migrating the code of Joshi et al. (2014) to use AQQU. Partly supported by grants from IBM and nVidia.

Author information


Corresponding author

Correspondence to Soumen Chakrabarti.



About this article


Cite this article

Sawant, U., Garg, S., Chakrabarti, S. et al. Neural architecture for question answering using a knowledge graph and web corpus. Inf Retrieval J 22, 324–349 (2019). https://doi.org/10.1007/s10791-018-9348-8


Keywords

  • Question answering
  • Knowledge graph
  • Neural network
  • Convolutional network
  • Entity ranking