Representation of knowledge and learning on automata networks

  • Françoise Fogelman Soulie
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 316)


We have seen that new techniques make it possible to design automata networks capable of learning how to solve high-order problems. These techniques clearly go beyond the limitations of the perceptron and differ markedly, in spirit, from the techniques usually used in Artificial Intelligence.

They are based on a representation of knowledge which is distributed and fault tolerant, and on an inference mechanism which is dynamical. These aspects make this approach resemble the way the brain works more closely than the usual AI approach does.
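A distributed, fault-tolerant representation with a dynamical inference mechanism can be illustrated by a Hopfield-style threshold network (reference [18]): patterns are stored in the weights by a Hebbian rule, and "inference" is the relaxation of the network dynamics to a stored pattern. The following sketch is purely illustrative; the patterns and network size are arbitrary choices, not taken from the paper.

```python
import numpy as np

# Two orthogonal patterns of +/-1 states, stored in one weight matrix.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1,  1, 1,  1, -1, -1, -1, -1]])
n = patterns.shape[1]

# Hebbian storage: W accumulates the outer product of each pattern;
# the knowledge of both patterns is distributed over all weights.
W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0)  # no self-connections

def recall(state, sweeps=5):
    """Dynamical inference: asynchronous threshold updates until relaxation."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt one unit of the first pattern; the dynamics repair it,
# since no single weight or unit holds the pattern by itself.
noisy = patterns[0].copy()
noisy[0] = -noisy[0]
print(recall(noisy))  # relaxes back to patterns[0]
```

The fault tolerance discussed in the text shows up here directly: flipping a unit (or even zeroing a few weights) still leaves enough redundant, distributed information for the dynamics to converge to the stored pattern.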

In the case of the learning-from-examples problem, these methods can be envisioned as automatically generating predicates characteristic of the set of examples, which could then serve as first steps in more classical AI systems (expert systems, for example).


Keywords: Associative Memory, Decision Unit, Hidden Unit, Learning Session, Boltzmann Machine
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



7. References

  [1] D.H. ACKLEY, G.E. HINTON, T.J. SEJNOWSKI: A Learning Algorithm for Boltzmann Machines. Cognitive Science, 9, pp 147–169, 1985.
  [2] J.A. ANDERSON: Cognitive Capabilities of a Parallel System. In [3], pp 209–226.
  [3] E. BIENENSTOCK, F. FOGELMAN SOULIE, G. WEISBUCH Eds: «Disordered systems and biological organization», Springer-Verlag, NATO ASI Series in Systems and Computer Science, F20, 1986.
  [4] R.O. DUDA, P.E. HART: Pattern classification and scene analysis. Wiley, 1973.
  [5] J.A. FELDMAN: Connections. Byte, pp 277–283, April 1985.
  [6] F. FOGELMAN SOULIE: Pattern Recognition by Threshold Networks. In «Actes du Colloque International d'Intelligence Artificielle, Marseille», 1984.
  [7] F. FOGELMAN SOULIE: Brains and Machines: architectures for to-morrow? «Cognitive 85», CESTA-AFCET Ed., forum, (in French), 1985.
  [8] F. FOGELMAN SOULIE, E. GOLES-CHACC: Knowledge representation by automata networks. In «Computers and Computing», P. Chenin, C. di Crescenzo, F. Robert Eds, Masson-Wiley, pp 175–180, 1986.
  [9] F. FOGELMAN SOULIE, G. WEISBUCH: Random iterations of threshold networks and associative memory. SIAM J. on Computing, to appear.
  [10] F. FOGELMAN SOULIE, P. GALLINARI, S. THIRIA: Learning and associative memory. In «Pattern Recognition, Theory and Applications», P.A. Devijver Ed., NATO ASI Series in Computer Science, Springer-Verlag, to appear.
  [11] F. FOGELMAN SOULIE, P. GALLINARI, Y. LE CUN, S. THIRIA: Automata Networks and Artificial Intelligence. In «Computing on Automata Networks», F. Fogelman Soulié, Y. Robert, M. Tchuente Eds, Manchester Univ. Press, to appear.
  [12] E. GOLES-CHACC: Comportement Dynamique de Réseaux d'Automates. Thesis, Grenoble, 1985.
  [13] E. GOLES-CHACC: this volume.
  [14] T.N.E. GREVILLE: Some applications of the pseudo inverse of a matrix. SIAM Rev. 2, pp 15–22, 1960.
  [15] D.O. HEBB: The Organization of Behavior. Wiley, 1949.
  [16] G.E. HINTON: Learning in Parallel Networks. Byte, pp 265–273, April 1985.
  [17] G.E. HINTON, J.A. ANDERSON (Eds): Parallel Models of Associative Memory. Hillsdale, Erlbaum, 1981.
  [18] J.J. HOPFIELD: Neural Networks and Physical Systems with Emergent Collective Computational Abilities. P.N.A.S. USA, vol 79, pp 2554–2558, 1982.
  [19] T. KOHONEN: Self-Organization and Associative Memory. Springer Series in Information Sciences, vol 8, Springer-Verlag, 1984.
  [20] Y. LE CUN: A learning scheme for asymmetric threshold network. In «Cognitive 85», CESTA-AFCET Ed., pp 599–604, 1985.
  [21] Y. LE CUN: Learning process in an asymmetric threshold network. In «Disordered systems and biological organization», E. Bienenstock, F. Fogelman Soulié, G. Weisbuch Eds, Springer-Verlag, NATO ASI Series in Systems and Computer Science, no 20, pp 233–240, 1986.
  [22] Y. LE CUN, F. FOGELMAN SOULIE: Modèles Connexionnistes de l'Apprentissage. Special issue on "Apprentissage et Machine", Intellectica, to appear.
  [23] W.S. MCCULLOCH, W. PITTS: A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophysics, 5, pp 115–133, 1943.
  [24] R.S. MICHALSKI, J.G. CARBONELL, T.M. MITCHELL: Machine Learning. Tioga, 1983.
  [25] M. MINSKY, S. PAPERT: Perceptrons, an Introduction to Computational Geometry. Cambridge, MIT Press, 1969.
  [26] J. von NEUMANN: Theory of self reproducing automata. A.W. Burks Ed. Univ. Illinois Press, 1966.
  [27] D.C. PLAUT, S.J. NOWLAN, G.E. HINTON: Experiments on Learning by Back Propagation. Carnegie Mellon University Technical Report, CMU-CS-86-126, 1986.
  [28] F. ROSENBLATT: Principles of Neurodynamics. Spartan, 1962.
  [29] D.E. RUMELHART, G.E. HINTON, R.J. WILLIAMS: Learning internal representations by error propagation. In [30].
  [30] D.E. RUMELHART, J.L. MCCLELLAND Eds: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, Foundations. MIT Press, 1986.
  [31] D.E. RUMELHART, J.L. MCCLELLAND: On learning the past tenses of English verbs. In [30].
  [32] T.J. SEJNOWSKI, G.E. HINTON: Separating figure from ground with a Boltzmann machine. In «Vision, brain and cooperative computation», M.A. Arbib, A.R. Hanson Eds, Cambridge, MIT Press, 1985.
  [33] T.J. SEJNOWSKI, P.K. KIENKER, G.E. HINTON: Learning symmetry groups with hidden units: beyond the perceptron. To appear in Physica D.
  [34] T.J. SEJNOWSKI, C.R. ROSENBERG: NETtalk: a parallel network that learns to read aloud. Johns Hopkins Technical Report JHU/EECS-86/01.
  [35] D.L. WALTZ, J.B. POLLACK: Massively Parallel Parsing: a Strongly Interactive Model of Natural Language Interpretation. Cognitive Science, no 9, pp 51–74, 1985.
  [36] G. WEISBUCH, F. FOGELMAN SOULIE: Scaling laws for the attractors of Hopfield networks. J. Phys. Lett., 46, 623–630, 1985.
  [37] B. WIDROW, M.E. HOFF: Adaptive switching circuits. IRE WESCON Conv. Record, part 4, pp 96–104, 1960.

Copyright information

© Springer-Verlag Berlin Heidelberg 1988

Authors and Affiliations

  • Françoise Fogelman Soulie (1, 2)
  1. Laboratoire LIA, Ecole des Hautes Etudes en Informatique, Université Paris 5, Paris, France
  2. Laboratoire de Dynamique des Réseaux, c/o CESTA, Paris, France
