MapA: An array processor architecture for neural networks

  • J. Ortega
  • F. J. Pelayo
  • A. Prieto
  • B. Pino
  • C. G. Puntonet
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 686)


After discussing the requirements that any hardware implementation of neural networks should meet, we propose an array processor architecture that not only exploits the data parallelism inherent in neural networks but is also easy to program. As an example of the usefulness and programmability of this architecture, we provide the software implementing the recalling and learning modes of a Multilayer Perceptron (MLP) on an array processor with the proposed architecture, whose processing elements are connected by a Multi-Ring (MR) network.
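The paper's array-processor software is not reproduced on this page. As a rough illustration of the two MLP modes it implements, the following is a minimal sketch in plain NumPy: "recalling" as a forward pass and "learning" as one backpropagation weight update. All function names, layer sizes, and the sigmoid/learning-rate choices here are assumptions for illustration, not the paper's mapping onto the Multi-Ring network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def recall(x, weights):
    """Recalling mode: propagate input x through each weight layer."""
    activations = [x]
    for W in weights:
        x = sigmoid(W @ x)
        activations.append(x)
    return activations

def learn(x, target, weights, lr=0.5):
    """Learning mode: one backpropagation step; returns updated weights."""
    acts = recall(x, weights)
    # Output-layer error term (sigmoid derivative is a * (1 - a))
    delta = (acts[-1] - target) * acts[-1] * (1.0 - acts[-1])
    new_weights = list(weights)
    for i in range(len(weights) - 1, -1, -1):
        grad = np.outer(delta, acts[i])
        if i > 0:
            # Propagate the error through the (pre-update) weights
            delta = (weights[i].T @ delta) * acts[i] * (1.0 - acts[i])
        new_weights[i] = weights[i] - lr * grad
    return new_weights

# Tiny 2-3-1 network trained on a single example (illustrative only)
rng = np.random.default_rng(0)
weights = [rng.normal(scale=0.5, size=(3, 2)),
           rng.normal(scale=0.5, size=(1, 3))]
x, t = np.array([0.0, 1.0]), np.array([1.0])
for _ in range(200):
    weights = learn(x, t, weights)
output = recall(x, weights)[-1]
```

In the paper's setting, the matrix-vector products in `recall` and the outer products in `learn` are the operations distributed across the processing elements; here they run on a single NumPy array.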





Copyright information

© Springer-Verlag Berlin Heidelberg 1993

Authors and Affiliations

  • J. Ortega
  • F. J. Pelayo
  • A. Prieto
  • B. Pino
  • C. G. Puntonet

  1. Depto. de Electrónica y Tecnología de Computadores, Universidad de Granada, Granada, Spain
