Abstract
Naturally structured information is typical of symbolic processing. By contrast, learning in connectionist models usually involves poorly organized data, such as arrays or sequences, for which classical neural networks are proven universal approximators.
Recently, recursive networks were introduced in order to deal with structured data. They, too, constitute a universal tool for approximating mappings from graphs to real vector spaces. This paper surveys the present state of the art on approximation by recursive networks. Finally, some results on generalization are reviewed, establishing the VC dimension of recursive architectures of fixed size.
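The core idea behind recursive networks is that a node's state is computed from its own label together with the states of its children, so an entire labelled tree (or DAG) is folded bottom-up into a single fixed-size real vector. The sketch below is illustrative only and is not taken from the paper: the weight matrices, dimensions, and `tanh` transition are hypothetical choices used to make the recursion concrete.

```python
# Illustrative sketch of recursive (structure-driven) processing:
# state(node) = f(label(node), states of its children).
import numpy as np

rng = np.random.default_rng(0)
STATE, LABEL, MAX_CHILDREN = 4, 3, 2  # hypothetical sizes

# Hypothetical random weights of the state-transition function f.
W_label = rng.standard_normal((STATE, LABEL))
W_child = rng.standard_normal((STATE, MAX_CHILDREN * STATE))

def encode(node):
    """node = (label_vector, [child nodes]); returns its state vector."""
    label, children = node
    child_states = [encode(c) for c in children]
    # Missing children are replaced by the frontier (zero) state.
    while len(child_states) < MAX_CHILDREN:
        child_states.append(np.zeros(STATE))
    x = np.concatenate(child_states)
    return np.tanh(W_label @ np.asarray(label) + W_child @ x)

leaf = (np.array([1.0, 0.0, 0.0]), [])
tree = (np.array([0.0, 1.0, 0.0]), [leaf, leaf])
vec = encode(tree)  # fixed-size vector encoding of the whole tree
```

The resulting vector `vec` can then be fed to an ordinary output network, which is the setting in which the approximation and VC-dimension results discussed in the paper apply.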
Copyright information
© 1999 Springer-Verlag London Limited
Cite this paper
Bianchini, M., Gori, M., Scarselli, F. (1999). Recursive Networks: An Overview of Theoretical Results. In: Marinaro, M., Tagliaferri, R. (eds) Neural Nets WIRN Vietri-99. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0877-1_25
Publisher Name: Springer, London
Print ISBN: 978-1-4471-1226-6
Online ISBN: 978-1-4471-0877-1