
Natural Language Processing Neural Network for Recall and Inference

  • Tsukasa Sagara
  • Masafumi Hagiwara
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6354)

Abstract

In this paper, we propose a novel neural network that can learn knowledge from natural language documents and perform recall and inference. The proposed network has a sentence layer, a knowledge layer, ten kinds of deep case layers, and a dictionary layer. In the learning step, connections are updated based on Hebb's learning rule. By incorporating the deep case layers, the proposed network can handle complicated sentences, and it can retrieve unlearned knowledge from the dictionary layer. For the dictionary layer, Goi-Taikei, a Japanese lexicon containing about 400,000 words, is employed. Two kinds of experiments were carried out using the goo encyclopedia and Wikipedia as knowledge sources, and the superior performance of the proposed neural network was confirmed.
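The abstract states that connections between layers are updated by Hebb's learning rule, which strengthens a connection when the units on both ends are active together. A minimal sketch of such an update between two layers is shown below; the function name, the learning rate `eta`, and the toy layer sizes are illustrative assumptions, not details from the paper.

```python
import numpy as np

def hebbian_update(W, x, y, eta=0.1):
    """Hebbian rule: strengthen W[i, j] when pre-unit x[i]
    and post-unit y[j] are co-active (delta = eta * x_i * y_j)."""
    return W + eta * np.outer(x, y)

# Toy example: 3 "sentence layer" units projecting to 2 "knowledge layer" units.
W = np.zeros((3, 2))
x = np.array([1.0, 0.0, 1.0])   # active pre-layer units
y = np.array([0.0, 1.0])        # active post-layer unit
W = hebbian_update(W, x, y)
# Only connections between co-active units are strengthened:
# W[0, 1] and W[2, 1] become 0.1; all other weights stay 0.
```

The key property, which the paper exploits for associating sentences with knowledge, is that weights grow only on connections whose endpoints fire together.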

Keywords

Natural Language Processing · Knowledge Source · Question Answering · Input Sentence · Recall Phase


References

  1. Tamura, A., Anzai, Y.: Natural language processing system based on connectionist models. IPSJ 28(2), 202–210 (1987)
  2. McClelland, J., Rumelhart, D.: Explorations in Parallel Distributed Processing: A Handbook of Models, Programs, and Exercises. MIT Press, Cambridge (1988)
  3. Cangelosi, A., Parisi, D.: The processing of verbs and nouns in neural networks. Brain and Language 89, 401–408 (2004)
  4. Laukaitis, R., Laukaitis, A.: Natural Language Processing and the Conceptual Model Self-organizing Map. In: Kedad, Z., Lammari, N., Métais, E., Meziane, F., Rezgui, Y. (eds.) NLDB 2007. LNCS, vol. 4592, pp. 193–203. Springer, Heidelberg (2007)
  5. Hummel, J., Holyoak, K.: A symbolic-connectionist theory of relational inference and generalization. Psychological Review 110, 220–264 (2003)
  6. Sakakibara, K., Hagiwara, M.: A proposal of 3-dimensional self-organizing memory and its application to knowledge extraction from natural language. Transactions of the Japanese Society for Artificial Intelligence 21, 73–80 (2006)
  7. Saito, M., Hagiwara, M.: Natural language processing neural network for analogical inference. IEICE Technical Report, NC 108(480), 1–6 (2009)
  8. Kudo, T., Matsumoto, Y.: Japanese dependency analysis using cascaded chunking. In: CoNLL 2002: Proceedings of the 6th Conference on Natural Language Learning (COLING 2002 Post-Conference Workshops), pp. 63–69 (2002)
  9. Watanabe, T., Ohta, M., Ohta, K., Ishikawa, H.: A page fusion method using case grammar. DBSJ 2004(72), 653–660 (2004)
  10. Watanabe, T., Ohno, S., Ohta, M., Katayama, K., Ishikawa, H.: A distinction emphasis multi-document fusion technique. In: DEWS 2005 (2005)
  11. Oishi, T., Matsumoto, Y.: Lexical knowledge acquisition for Japanese verbs based on surface case pattern analysis. IPSJ 36, 2597–2610 (1995)
  12. Shirai, S., Ooyama, Y., Ikehara, S., Miyazaki, M., Yokoo, A.: Introduction to Goi-Taikei: A Japanese Lexicon. IPSJ SIG Notes 98, 47–52 (1998)
  13. NTT Resonant Inc.: goo dictionary, http://dictionary.goo.ne.jp/
  14. Wikipedians: Wikipedia, http://ja.wikipedia.org/wiki/

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Tsukasa Sagara¹
  • Masafumi Hagiwara¹

  1. Department of Information and Computer Science, Keio University, Yokohama, Japan
