Applied Intelligence, Volume 19, Issue 1–2, pp 83–99

Towards Effective Parsing with Neural Networks: Inherent Generalisations and Bounded Resource Effects

  • Peter C.R. Lane
  • James B. Henderson

DOI: 10.1023/A:1023820807862

Cite this article as:
Lane, P.C. & Henderson, J.B. Applied Intelligence (2003) 19: 83. doi:10.1023/A:1023820807862

Abstract

This article explores how the effectiveness of learning to parse with neural networks can be improved by including two architectural features relevant to language: generalisations across syntactic constituents and bounded resource effects. A number of neural network parsers have recently been proposed, each with a different approach to the representational problem of outputting parse trees. In addition, some of the parsers have explicitly attempted to capture an important regularity within language, which is to generalise information across syntactic constituents. A further property of language is that natural bounds exist for the number of constituents which a parser need retain for later processing. Both the generalisations and the resource bounds may be captured in architectural features which enhance the effectiveness and efficiency of learning to parse with neural networks. We describe a number of different types of neural network parser, and compare them with respect to these two features. These features are both explicitly present in the Simple Synchrony Network parser, and we explore and illustrate their impact on the process of learning to parse in some experiments with a recursive grammar.
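The bounded-resource idea mentioned above can be pictured with a toy sketch (this code and its names are illustrative assumptions, not the paper's Simple Synchrony Network): open constituents are kept in a fixed-capacity store, so once the bound is reached the oldest constituent is no longer available for later attachment.

```python
from collections import deque

def parse_with_bounded_memory(tokens, max_constituents=3):
    """Toy shift/close loop (hypothetical, for illustration only).

    Open constituents live on a deque with a fixed maxlen, so only the
    most recent `max_constituents` remain available for attachment;
    older ones are silently dropped, mimicking a bounded resource.
    """
    open_constituents = deque(maxlen=max_constituents)  # bounded store
    closed = []
    for i, tok in enumerate(tokens):
        if tok == "(":                 # open a new constituent
            open_constituents.append(i)
        elif tok == ")":               # close the most recent one, if any
            if open_constituents:
                closed.append(open_constituents.pop())
        # ordinary words would attach to open_constituents[-1] here
    return closed, list(open_constituents)
```

With a bound of 2, deeply nested input `"(((("` leaves only the two most recently opened constituents in memory, so the earlier ones can never be closed; with shallow nesting the bound is never exceeded and behaviour matches an ordinary stack.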

Keywords: neural networks, resource effects, structured representations, syntactic parsing, systematicity

Copyright information

© Kluwer Academic Publishers 2003

Authors and Affiliations

  • Peter C.R. Lane (1)
  • James B. Henderson (2)
  1. Department of Computer Science, University of Hertfordshire, Hatfield Campus, Hatfield, UK
  2. Department of Computer Science, University of Geneva, Genève 4, Switzerland
