
An Efficient Hardware Implementation of Feed-Forward Neural Networks*

  • Tamás Szabó
  • Gábor Horváth
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2070)

Abstract

This paper proposes a new way of implementing the nonlinear activation functions of feed-forward neural networks in digital hardware. The basic idea of this realization is that the nonlinear functions can be implemented using a matrix-vector multiplication. Recently a new approach was proposed for the realization of matrix-vector multiplications, and this approach can also be applied to the nonlinear functions if they are approximated by simple basis functions. The paper proposes B-spline basis functions for approximating the nonlinear sigmoidal functions, shows that this approximation fulfills the general requirements on activation functions, presents the details of the proposed hardware implementation, and summarizes an extensive study of the effects of the B-spline nonlinear function realization on the size and trainability of feed-forward neural networks.
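To illustrate the core idea, the sketch below (an illustration under stated assumptions, not the paper's implementation) fits first-order, piecewise-linear B-spline basis functions to the logistic sigmoid by least squares, so that evaluating the activation reduces to a product between the vector of basis values and the fitted coefficient vector. The spline order, knot count, and the input range [-8, 8] are assumptions made for the example; the paper's hardware realization of the multiplication itself is not reproduced here.

    # A minimal sketch (not the paper's implementation): approximate the
    # logistic sigmoid with first-order (piecewise-linear) B-spline basis
    # functions, so that evaluating the activation becomes a matrix-vector
    # multiplication. Knot count, spline order, and the [-8, 8] range are
    # illustrative assumptions.
    import numpy as np

    def hat_basis(x, knots):
        # First-order B-splines on uniform knots are "hat" functions:
        # 1 at their own knot, falling linearly to 0 at the neighbours.
        h = knots[1] - knots[0]
        d = np.abs(x[:, None] - knots[None, :]) / h
        return np.clip(1.0 - d, 0.0, None)

    knots = np.linspace(-8.0, 8.0, 17)            # 17 uniform knots
    xs = np.linspace(-8.0, 8.0, 801)              # dense sample points
    target = 1.0 / (1.0 + np.exp(-xs))            # logistic sigmoid

    # Least-squares fit of the spline coefficients to the sigmoid.
    B = hat_basis(xs, knots)
    coeffs, *_ = np.linalg.lstsq(B, target, rcond=None)

    # The activation is now evaluated as a matrix-vector product.
    x_new = np.array([-3.2, 0.0, 1.5])
    approx = hat_basis(x_new, knots) @ coeffs
    exact = 1.0 / (1.0 + np.exp(-x_new))
    print(np.max(np.abs(approx - exact)))         # small error, ~1e-2 at this knot spacing

Note that at most two first-order basis functions are nonzero at any input, so each evaluation touches only two coefficients; higher-order splines trade a few more nonzero terms for a smoother approximation.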

Keywords

Mean Square Error · Activation Function · Hidden Neuron · Order Spline · Nonlinear Activation Function



Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Tamás Szabó (1)
  • Gábor Horváth (1)

  1. Department of Measurement and Information Systems, Technical University of Budapest, Budapest, Hungary
