Knowledge Augmented Inference Network for Natural Language Inference

  • Shan Jiang
  • Bohan Li
  • Chunhua Liu
  • Dong Yu
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 957)

Abstract

This paper proposes a Knowledge Augmented Inference Network (K-AIN) that can effectively incorporate external knowledge into existing neural network models for the Natural Language Inference (NLI) task. Unlike previous works, which use one-hot representations to describe external knowledge, we employ the TransE model to encode various semantic relations extracted from an external Knowledge Base (KB) as distributed relation features. We use these distributed relation features to construct knowledge augmented word embeddings and integrate them into current neural network models. Experimental results show that our model achieves better performance than a strong baseline on the SNLI dataset and surpasses the current state-of-the-art models on the SciTail dataset.
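To make the abstract's core idea concrete, the sketch below shows one plausible way to encode KB relations with a TransE-style model and concatenate the resulting relation feature onto each word vector to form a knowledge augmented word embedding. This is a minimal PyTorch illustration, not the authors' implementation; the class names, dimensions, and masking scheme are assumptions.

```python
# Hypothetical sketch of TransE relation features fused into word embeddings.
# All names and sizes are illustrative, not the paper's actual configuration.
import torch
import torch.nn as nn


class TransE(nn.Module):
    """Minimal TransE scorer: score(h, r, t) = -||h + r - t||_1 (higher is better)."""

    def __init__(self, n_entities: int, n_relations: int, dim: int = 100):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        nn.init.xavier_uniform_(self.ent.weight)
        nn.init.xavier_uniform_(self.rel.weight)

    def forward(self, head, relation, tail):
        h, r, t = self.ent(head), self.rel(relation), self.ent(tail)
        return -(h + r - t).abs().sum(dim=-1)


class KnowledgeAugmentedEmbedding(nn.Module):
    """Concatenate each word's pre-trained vector with the TransE relation
    feature linking it to an aligned word (zeroed when no KB relation holds)."""

    def __init__(self, word_emb: nn.Embedding, rel_emb: nn.Embedding):
        super().__init__()
        self.word_emb = word_emb
        self.rel_emb = rel_emb

    def forward(self, word_ids, rel_ids, has_rel):
        w = self.word_emb(word_ids)                        # (batch, len, d_word)
        r = self.rel_emb(rel_ids) * has_rel.unsqueeze(-1)  # zero where no relation
        return torch.cat([w, r], dim=-1)                   # (batch, len, d_word + d_rel)


# Toy usage with made-up vocabulary and relation inventory sizes.
word_emb = nn.Embedding(10000, 300)
transe = TransE(n_entities=5000, n_relations=20, dim=100)
augment = KnowledgeAugmentedEmbedding(word_emb, transe.rel)

ids = torch.randint(0, 10000, (2, 7))
rels = torch.randint(0, 20, (2, 7))
mask = torch.randint(0, 2, (2, 7)).float()
print(augment(ids, rels, mask).shape)  # torch.Size([2, 7, 400])
```

The augmented embeddings would then feed an existing NLI encoder in place of plain word vectors, which is the integration the abstract describes at a high level.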

Keywords

Natural language inference · External knowledge · Knowledge graph embedding

Notes

Acknowledgements

This work is funded by the Beijing Advanced Innovation for Language Resources of BLCU, the Fundamental Research Funds for the Central Universities in BLCU (No. 17PT05), and the BLCU Academic Talents Support Program for the Young and Middle-Aged.

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. Beijing Advanced Innovation for Language Resources of BLCU, Beijing, China
  2. Beijing Language and Culture University, Beijing, China