On neural network programming

  • P. De Pinto
  • M. Sette
Short Papers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 549)

Abstract

In this paper we investigate how a neural network can be regarded as a massively parallel computer architecture. To this end, we focus our attention not on a stochastic asynchronous model but on the McCulloch and Pitts network [MP43], as modified by Caianiello [Cai61], and we specify what we mean by the environment in which the network operates: it is essentially the entity that assigns meaning to the network's input and output nodes. Changing the definition of the environment implies dealing with different neural architectures.
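As a concrete illustration of this class of model (our sketch, not taken from the paper), the following shows a discrete-time McCulloch and Pitts network updated by Caianiello's neuronic equation; the weights, thresholds, and the meaning assigned to the nodes are illustrative assumptions standing in for an environment.

```python
# Minimal sketch (illustrative, not the paper's construction) of a
# discrete-time McCulloch-Pitts network under Caianiello's neuronic equation:
#     x_i(t+1) = H( sum_j w_ij * x_j(t) - theta_i ),
# where H is the Heaviside step. The "environment" is what assigns meaning
# to nodes: here nodes 0 and 1 are declared inputs, node 2 an AND output.
import numpy as np

def step(x, W, theta):
    """One synchronous update of every neuron in the network."""
    return (W @ x > theta).astype(int)

W = np.array([[0, 0, 0],      # node 0: input node, no afferent connections
              [0, 0, 0],      # node 1: input node, no afferent connections
              [1, 1, 0]])     # node 2: receives both inputs
theta = np.array([0, 0, 1])   # node 2 fires only when summed input exceeds 1

x = np.array([1, 1, 0])       # both input nodes active at t = 0
print(step(x, W, theta))      # -> [0 0 1]: the AND node fires at t = 1
```

Reassigning which nodes the environment treats as inputs and outputs, with the same update rule, yields a different architecture in the sense used above.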

To show how to program a neural architecture, we introduce a model of the environment that helps us choose functions suitable for pipelining, as in a dataflow architecture. As an example, we sketch the working of a parallel multiplier function.
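The dataflow reading of such a multiplier can be sketched as follows (our illustration, under assumed bit widths and staging, not the paper's network construction): stage one forms all partial products in parallel, and each subsequent stage folds one shifted row into an accumulator, so successive multiplications can occupy successive stages as in a pipeline.

```python
# Illustrative sketch of a parallel multiplier viewed as a dataflow pipeline
# (assumed staging; the paper's actual construction may differ).
# Stage 1: a grid of AND gates forms every partial-product bit at once.
# Stages 2..n: each stage adds one shifted partial-product row.

def to_bits(n, width):
    """LSB-first bit vector of n."""
    return [(n >> i) & 1 for i in range(width)]

def partial_products(a, b):
    """Stage 1: p[i][j] = a_i AND b_j, all computable in parallel."""
    return [[ai & bj for bj in b] for ai in a]

def add_row(acc, i, row):
    """One pipeline stage: fold row i, shifted left by i, into the accumulator."""
    return acc + (sum(bit << j for j, bit in enumerate(row)) << i)

a, b = to_bits(6, 4), to_bits(5, 4)
acc = 0
for i, row in enumerate(partial_products(a, b)):  # each iteration = one stage
    acc = add_row(acc, i, row)
print(acc)  # -> 30, i.e. 6 * 5
```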

Keywords

Neural networks · Parallel processing · Parallel architectures · Architectures, languages and environments


References

  1. [Cai61] E. R. Caianiello. Outline of a theory of thought processes and thinking machines. Journal of Theoretical Biology, 1(2):204–235, 1961.
  2. [Den80] J. B. Dennis. Data flow supercomputers. Computer, 13(11):48–56, November 1980.
  3. [DP90] P. De Pinto. The implementation of the assembler associated to a general purpose neural network. In Third Italian Workshop on Parallel Architectures and Neural Networks, Vietri sul Mare (SA), May 1990.
  4. [DPLS90] P. De Pinto, F. E. Lauria, and M. Sette. On the Hebb rule and unlimited precision arithmetic in a McCulloch and Pitts network. In R. Eckmiller, editor, Advanced Neural Computers (International Symposium on Neural Networks for Sensory and Motor Systems), pages 121–128. Elsevier Science, Amsterdam, 1990.
  5. [Hew85] C. Hewitt. The challenge of open systems. BYTE, 10(4):223–242, April 1985.
  6. [HU79] J. E. Hopcroft and J. D. Ullman. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, 1979.
  7. [Lau88] F. E. Lauria. A connectionist approach to knowledge acquisition. Cybernetics and Systems: An International Journal, (5):19–35, 1988.
  8. [Lau89] F. E. Lauria. A general purpose neural network as a new computing paradigm. In E. R. Caianiello, editor, Second Italian Workshop on Parallel Architectures and Neural Networks, pages 131–144. World Scientific, Singapore, 1989.
  9. [MP43] W. S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5:115–133, 1943.
  10. [RM86] D. E. Rumelhart and J. L. McClelland. Parallel Distributed Processing, Volume 1. MIT Press, Cambridge, MA, 1986.
  11. [Set90] M. Sette. An OCCAM simulation of a general purpose neural network. In Third Italian Workshop on Parallel Architectures and Neural Networks, Vietri sul Mare (SA), May 1990.
  12. [Tra91] G. Trautteur. Problems with Symbols. A commentary to: Herbert Simon, "Scientific Discovery as Problem Solving". RISESST, (1), 1991. To be published.

Copyright information

© Springer-Verlag Berlin Heidelberg 1991

Authors and Affiliations

  • P. De Pinto (1)
  • M. Sette (1)

  1. Istituto per la Ricerca sui Sistemi Informatici Paralleli, CNR, Napoli
