Abstract.
Existing artificial neural network models have had limited success in understanding or generating natural-language text. It is therefore proposed to design novel neural network structures at higher levels of abstraction. This concept leads to a hierarchy of network layers, each of which extracts and stores local details and transfers the remaining nonlocal context information to the next-higher level. At the same time, data is compressed from layer to layer. The reuse of the same network elements (meta-words) at higher levels for different word series at the basis level is introduced and discussed with respect to grammatical identity or similarity. Text can thus be compressed into forms that are almost free of redundancy. Possible applications include the storage, transmission, understanding, generation, and translation of texts.
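The layered scheme the abstract describes — each layer absorbing local regularities and passing the residual context upward, with recurring word series replaced by shared meta-words — can be illustrated with a minimal symbolic sketch. This is only an analogy: a word-pair substitution pass stands in for one neural layer, and the meta-word names (`<a+b>`) are invented here for illustration, not taken from the paper.

```python
from collections import Counter

def compress_layer(tokens):
    """One 'layer': replace the most frequent adjacent word pair
    with a single meta-word, extracting that local redundancy."""
    pairs = Counter(zip(tokens, tokens[1:]))
    if not pairs:
        return tokens, None
    (a, b), count = pairs.most_common(1)[0]
    if count < 2:
        return tokens, None  # no repeated local structure left to extract
    meta = f"<{a}+{b}>"      # illustrative meta-word naming convention
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
            out.append(meta)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out, (meta, (a, b))

def compress(tokens, max_layers=10):
    """Stack layers until no redundancy remains; the surviving token
    sequence is the 'nonlocal context' passed to the top level."""
    rules = []
    for _ in range(max_layers):
        tokens, rule = compress_layer(tokens)
        if rule is None:
            break
        rules.append(rule)
    return tokens, rules

words = "the cat sat on the mat the cat ran".split()
compressed, rules = compress(words)
# The repeated series "the cat" is absorbed into one meta-word,
# shortening the sequence at the higher level.
```

With the sample sentence, the first layer replaces both occurrences of "the cat" with one meta-word, after which no pair repeats and compression stops — mirroring, in miniature, how each layer strips local redundancy before handing the remainder upward.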
Received: 6 May 1993 / Accepted in revised form: 24 July 1996
Hilberg, W. Neural networks in higher levels of abstraction. Biol Cybern 76, 23–40 (1997). https://doi.org/10.1007/s004220050318