Projecting sub-symbolic onto symbolic representations in artificial neural networks

  • Marco Gori
  • Giovanni Soda
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 728)


In this paper we introduce a novel approach to learning in artificial neural networks in which the architecture plays a central role. The function to be learned is assumed to be partially determined by prior rules, and the process of learning from examples therefore consists of discovering solutions that take this prior knowledge into account.

We discuss the basic ideas in the case of nondeterministic automata, which can be injected into special architectures (symbolic architectures) by suitable constraints in the weight space. We consider fully-connected networks embedding these symbolic architectures. Learning is conceived in such a way that it favors the development of symbolic representations. This is closely related to many existing pruning techniques, but the pruning process we introduce forces learning toward the development of high-level representations.
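The injection idea can be illustrated with a minimal sketch. Assuming one-hot state units and a hard threshold as a stand-in for a saturated sigmoid (the concrete NFA below is hypothetical, not taken from the paper), a nondeterministic automaton's transition relation can be written directly into the recurrent weight matrices, so that one network step simulates one automaton step:

```python
import numpy as np

# A tiny illustrative NFA over alphabet {0, 1} with states q0, q1, q2;
# start state q0, accepting state q2 (hypothetical example).
# transitions[symbol][j, i] = 1 means: reading `symbol` in state i may move to state j.
transitions = {
    0: np.array([[1, 0, 0],
                 [1, 0, 0],
                 [0, 0, 1]]),
    1: np.array([[0, 0, 0],
                 [1, 1, 0],
                 [0, 1, 1]]),
}

def step(state, symbol):
    # One recurrent step: multiply by the weight (transition) matrix for
    # `symbol`, then apply a hard threshold. A unit is active iff at least
    # one active predecessor state has a transition into it, which is
    # exactly the NFA's subset-style update.
    return (transitions[symbol] @ state >= 1).astype(int)

def accepts(word):
    state = np.array([1, 0, 0])  # only the start state q0 is active
    for symbol in word:
        state = step(state, symbol)
    return bool(state[2])  # accepted iff the accepting unit q2 is active
```

The weight constraints the paper refers to would, in this picture, tie the trainable weights to (or near) such 0/1 transition entries, so that learning refines the automaton rather than discarding it.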

Index terms

Artificial Intelligence, connectionist models, learning from examples, pruning





Copyright information

© Springer-Verlag Berlin Heidelberg 1993

Authors and Affiliations

  • Marco Gori
  • Giovanni Soda
  1. Dipartimento di Sistemi e Informatica, Firenze, Italy
