
Evaluation of the CNAPS neuro-computer for the simulation of MLPs with receptive fields

  • Bertrand Granado
  • Patrick Garda
Neural Nets Simulation, Emulation and Implementation
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1240)

Abstract

In this paper we evaluate the performance of the CNAPS neuro-computer for the simulation of the MLPs with receptive fields used in handwritten character recognition. Our measurements show that the effective performance falls far below both the peak performance and the performance achieved for fully connected MLPs. We introduce a new methodology for predicting the simulation time on CNAPS of any MLP. Finally, we show that the performance of CNAPS is close to that of direct hardware implementations, and that it can even be improved.
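In an MLP with receptive fields, each hidden unit is connected only to a small local window of the input rather than to every input, which is what distinguishes it from the fully connected MLPs mentioned above. A minimal sketch of such a locally connected forward pass is given below; the window size, stride, shared weights, and toy input are illustrative assumptions, not the exact architecture evaluated in the paper.

```python
import numpy as np

def receptive_field_layer(x, w, b, stride=1):
    """Forward pass of a 1-D locally connected layer with shared weights.

    Each output unit sees only a small window (its receptive field) of
    the input, unlike a fully connected layer where every unit sees all
    inputs.  With shared weights this sweep is a small convolution.
    """
    k = len(w)                          # receptive field size
    n_out = (len(x) - k) // stride + 1  # number of output units
    y = np.empty(n_out)
    for i in range(n_out):
        window = x[i * stride : i * stride + k]
        y[i] = np.tanh(window @ w + b)  # sigmoid-like activation
    return y

x = np.linspace(-1.0, 1.0, 16)          # toy input (e.g. one image row)
w = np.array([0.5, -0.25, 0.5])         # shared 3-tap receptive-field weights
y = receptive_field_layer(x, w, b=0.1)
print(y.shape)                          # (14,)
```

Because each output involves only k multiply-accumulates instead of one per input, the connectivity is sparse, which is precisely why the effective throughput of a SIMD neuro-computer such as CNAPS drops relative to the dense, fully connected case.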



Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Bertrand Granado¹
  • Patrick Garda¹

  1. Laboratoire des Instruments et Systèmes, Boîte 203, Université Pierre et Marie Curie, Paris Cedex 05
