Neural networks for processing data structures

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1387)

Abstract

The ability to represent and process structures in a neural network greatly increases the computational capability of neural networks. Besides providing a new tool for the classification of structures, this capability can also be exploited to integrate neural networks and symbolic systems into a hybrid system. In fact, structures generated by a symbolic module can be evaluated by this type of network, and their evaluation can be used to modify the behavior of the symbolic module. An instance of this integration scheme is given, for example, by learning heuristics for automated deduction systems: Goller reported very successful results using a Backpropagation Through Structure network within the SETHEO theorem prover [8]. Conversely, it is not difficult to see, in analogy with the extraction of finite state automata from recurrent networks, how to extract tree automata from a neural network for structures. This would allow the above scheme to work the other way around, with a neural module driven by a symbolic subsystem.
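
Concretely, the networks in question compute a distributed representation of a structure by recursing over it bottom-up: each node's state is a function of the node's label and of the states of its children, and an output layer reads the root state. The following is a minimal sketch of this recursion, not the chapter's exact formulation; the dimensions, weight matrices, and function names are assumptions for the example, and training, e.g. by Backpropagation Through Structure [9], which propagates the error back through the same recursion, is omitted.

```python
# Minimal sketch of a recursive neural network evaluating a labeled tree
# bottom-up. Illustrative only: all dimensions, weights, and names are
# assumptions, and no training is performed.
import numpy as np

rng = np.random.default_rng(0)
LABEL_DIM, STATE_DIM, MAX_CHILDREN = 4, 8, 2  # assumed toy sizes

# Shared transition: state = tanh(W_label @ label + sum_i W_child[i] @ state_i)
W_label = rng.normal(scale=0.1, size=(STATE_DIM, LABEL_DIM))
W_child = rng.normal(scale=0.1, size=(MAX_CHILDREN, STATE_DIM, STATE_DIM))
w_out = rng.normal(scale=0.1, size=STATE_DIM)  # read-out applied at the root

def node_state(label, children):
    """State vector of a node, computed from its label and its children."""
    pre = W_label @ label
    for i, child in enumerate(children):
        pre += W_child[i] @ node_state(*child)
    return np.tanh(pre)

def evaluate(tree):
    """Scalar evaluation of a whole structure from its root state."""
    return float(w_out @ node_state(*tree))

# A tree is (label, [children]); here the toy term f(a, g(b)).
a, b = (np.eye(LABEL_DIM)[0], []), (np.eye(LABEL_DIM)[1], [])
term = (np.eye(LABEL_DIM)[3], [a, (np.eye(LABEL_DIM)[2], [b])])
print(evaluate(term))
```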

The computational results presented in this tutorial, however, need to be extended to DOAGs (directed ordered acyclic graphs) in order to fully characterize the kind of symbolic computations that can be performed by recursive neural networks. Nevertheless, it must be pointed out that, due to their numerical nature, recursive neural networks perform a kind of computation that cannot easily be reproduced by a symbolic system. This new style of computation should find useful applications in domains where structure and numerical values are both important aspects of the computational problem, such as the prediction of the biological activity of drugs in chemistry.
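
As an illustration of what the extension to DOAGs involves, the sketch below reuses the toy weights and dimensions from the previous example and memoizes node states by node identity, so that a sub-structure shared by several parents is encoded exactly once; the memoization scheme is again an assumption for exposition, not a prescribed implementation.

```python
# Sketch of the same bottom-up recursion on a DOAG: caching states by node
# identity ensures each shared sub-structure is encoded only once.
# Reuses np, W_label, W_child, w_out, LABEL_DIM from the previous sketch.
def graph_state(node, cache):
    """State of a DOAG node, with shared descendants computed once."""
    if id(node) not in cache:
        label, children = node
        pre = W_label @ label
        for i, child in enumerate(children):
            pre += W_child[i] @ graph_state(child, cache)
        cache[id(node)] = np.tanh(pre)
    return cache[id(node)]

# The subterm `shared` occurs twice but is encoded once: f(a, g(a)).
shared = (np.eye(LABEL_DIM)[0], [])
doag = (np.eye(LABEL_DIM)[3], [shared, (np.eye(LABEL_DIM)[2], [shared])])
print(float(w_out @ graph_state(doag, {})))
```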

References

  1. N. Alon, A. K. Dewdney, and T. J. Ott. Efficient simulation of finite automata by neural nets. Journal of the Association for Computing Machinery, 38(2):495–514, 1991.

  2. L. Atlas et al. A performance comparison of trained multilayer perceptrons and trained classification trees. Proceedings of the IEEE, 78:1614–1619, 1990.

  3. Y. Bengio, P. Frasconi, and P. Simard. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, March 1994. Special Issue on Recurrent Neural Networks.

  4. L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth International Group, 1984.

  5. S. E. Fahlman. The recurrent cascade-correlation architecture. Technical Report CMU-CS-91-100, Carnegie Mellon University, 1991.

  6. S. E. Fahlman and C. Lebiere. The cascade-correlation learning architecture. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 2, pages 524–532. Morgan Kaufmann, San Mateo, CA, 1990.

  7. C. L. Giles, D. Chen, G. Z. Sun, H. H. Chen, Y. C. Lee, and M. W. Goudreau. Constructive learning of recurrent neural networks: Limitations of recurrent cascade correlation and a simple solution. IEEE Transactions on Neural Networks, 6(4):829–836, 1995.

  8. C. Goller. A Connectionist Approach for Learning Search-Control Heuristics for Automated Deduction Systems. PhD thesis, Technical University of Munich, Computer Science, 1997.

  9. C. Goller and A. Küchler. Learning task-dependent distributed structure-representations by backpropagation through structure. In IEEE International Conference on Neural Networks, pages 347–352, 1996.

  10. R. C. Gonzalez and M. G. Thomason. Syntactical Pattern Recognition. Addison-Wesley, 1978.

  11. M. W. Goudreau, C. L. Giles, S. T. Chakradhar, and D. Chen. First-order vs. second-order single layer recurrent neural networks. IEEE Transactions on Neural Networks, 5(3):511–513, 1994.

  12. L. H. Hall and L. B. Kier. The molecular connectivity chi indexes and kappa shape indexes in structure-property modeling. In Reviews in Computational Chemistry, chapter 9, pages 367–422. VCH Publishers, New York, 1991.

  13. B. Hammer and V. Sperschneider. Neural networks can approximate mappings on structured objects. In Proceedings of the 2nd International Conference on Computational Intelligence and Neuroscience, Research Triangle Park, USA, 1997.

  14. B. G. Horne and D. R. Hush. Bounds on the complexity of recurrent neural network implementations of finite state machines. Neural Networks, 9(2):243–252, 1996.

  15. S. C. Kremer. Comments on "Constructive learning of recurrent neural networks: ...": cascading the proof describing limitations of recurrent cascade correlation. IEEE Transactions on Neural Networks, 1995. In press.

  16. S. C. Kremer. Finite state automata that recurrent cascade-correlation cannot represent. In D. Touretzky, M. Mozer, and M. Hasselmo, editors, Advances in Neural Information Processing Systems 8, pages 612–618. MIT Press, 1996.

  17. A. Küchler and C. Goller. Inductive learning in symbolic domains using structure-driven recurrent neural networks. In Günther Görz and Steffen Hölldobler, editors, KI-96: Advances in Artificial Intelligence, Lecture Notes in Computer Science (LNCS 1137), pages 183–197. Springer, Berlin, 1996.

  18. T. Li, L. Fang, and A. Jennings. Structurally adaptive self-organizing neural trees. In International Joint Conference on Neural Networks, pages 329–334, 1992.

  19. C. W. Omlin and C. L. Giles. Constructing deterministic finite-state automata in recurrent neural networks. Journal of the ACM, 43(6):937–972, 1996.

  20. M. P. Perrone. A soft-competitive splitting rule for adaptive tree-structured neural networks. In International Joint Conference on Neural Networks, pages 689–693, 1992.

  21. M. P. Perrone and N. Intrator. Unsupervised splitting rules for neural tree classifiers. In International Joint Conference on Neural Networks, pages 820–825, 1992.

  22. J. B. Pollack. Recursive distributed representations. Artificial Intelligence, 46(1–2):77–106, 1990.

  23. D. E. Rumelhart and J. L. McClelland. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. MIT Press, 1986.

  24. A. Sankar and R. Mammone. Neural tree networks. In Neural Networks: Theory and Applications, pages 281–302. Academic Press, 1991.

  25. S. Schulz, A. Küchler, and C. Goller. Some experiments on the applicability of folding architecture networks to guide theorem proving. In Proceedings of the 10th International FLAIRS Conference, 1997.

  26. I. K. Sethi. Entropy nets: From decision trees to neural networks. Proceedings of the IEEE, 78:1605–1613, 1990.

  27. H. T. Siegelmann and E. D. Sontag. On the computational power of neural nets. Journal of Computer and System Sciences, 50(1):132–150, 1995.

  28. J. A. Sirat and J.-P. Nadal. Neural trees: a new tool for classification. Network, 1:423–438, 1990.

  29. K.-Y. Siu, V. Roychowdhury, and T. Kailath. Discrete Neural Computation. Prentice Hall, Englewood Cliffs, NJ, 1995.

  30. A. Sperduti and A. Starita. Supervised neural networks for the classification of structures. IEEE Transactions on Neural Networks, 8(3):714–735, 1997.

  31. A. Sperduti, A. Starita, and C. Goller. Learning distributed representations for the classification of terms. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 509–515, 1995.

  32. J. W. Thatcher. Tree automata: An informal survey. In A. V. Aho, editor, Currents in the Theory of Computing. Prentice-Hall, Englewood Cliffs, NJ, 1973.

  33. R. J. Williams and D. Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1:270–280, 1989.

Editor information

C. Lee Giles, Marco Gori

Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Sperduti, A. (1998). Neural networks for processing data structures. In: Giles, C.L., Gori, M. (eds) Adaptive Processing of Sequences and Data Structures. NN 1997. Lecture Notes in Computer Science, vol 1387. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0053997

  • DOI: https://doi.org/10.1007/BFb0053997

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64341-8

  • Online ISBN: 978-3-540-69752-7
