
Neural networks in higher levels of abstraction


Abstract.

Existing artificial neural network models are not very successful in understanding or generating natural language texts. It is therefore proposed to design novel neural network structures at higher levels of abstraction. This concept leads to a hierarchy of network layers, each of which extracts and stores local details and transfers the remaining nonlocal context information to higher levels. At the same time, data compression is provided from layer to layer. The use of the same network elements (meta-words) at higher levels for different word series at the base level that are grammatically identical or similar is introduced and discussed. In this way, text can be compressed to forms that are almost free of redundancy. Possible applications include the storage, transmission, understanding, generation and translation of texts.
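The abstract describes the layered meta-word idea only in outline, not the actual network model. As a rough illustration of the compression aspect alone, the following is a minimal sketch of a layer-by-layer pair-merging scheme over a word sequence: each layer stores one local mapping (a meta-word for a recurring word pair) and passes the shortened sequence upward. The function names (`compress_layer`, `build_hierarchy`) and the merging rule are assumptions made here for illustration and are not taken from the paper.

```python
from collections import Counter

def compress_layer(tokens, layer_id):
    """Replace the most frequent adjacent token pair with a new meta-word.

    Returns (new_tokens, (pair, meta_word)), or (tokens, None) if no pair
    occurs more than once.
    """
    pairs = Counter(zip(tokens, tokens[1:]))
    if not pairs:
        return tokens, None
    pair, count = pairs.most_common(1)[0]
    if count < 2:
        return tokens, None
    meta = f"<M{layer_id}>"          # meta-word introduced at this layer
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(meta)         # local detail stored in this layer's rule
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out, (pair, meta)

def build_hierarchy(text, max_layers=10):
    """Apply pair merging layer by layer; each layer keeps its own mapping."""
    tokens = text.split()
    layers = []
    for layer_id in range(max_layers):
        tokens, rule = compress_layer(tokens, layer_id)
        if rule is None:
            break
        layers.append(rule)
    return tokens, layers

if __name__ == "__main__":
    sample = "the cat sat on the mat and the cat sat on the chair"
    top, layers = build_hierarchy(sample)
    print("top-level sequence:", top)
    for pair, meta in layers:
        print(f"{meta} := {pair[0]} {pair[1]}")
```

The sketch only mirrors the compression hierarchy; it does not attempt the grammatical-similarity aspect, where a single meta-word would also stand for different but structurally equivalent word series.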



Additional information

Received: 6 May 1993 / Accepted in revised form: 24 July 1996


Cite this article

Hilberg, W. Neural networks in higher levels of abstraction. Biol Cybern 76, 23–40 (1997). https://doi.org/10.1007/s004220050318

