An Efficient Hardware Architecture for a Neural Network Activation Function Generator
This paper proposes an efficient hardware architecture for a function generator suitable for an artificial neural network (ANN). A spline-based approximation function is designed that provides a good trade-off between accuracy and silicon area, whilst also being inherently scalable and adaptable to numerous activation functions. This is achieved by using minimax polynomials and by placing the approximating polynomial segments optimally, guided by the results of a genetic algorithm. The approximation error of the proposed method compares favourably with all related research in this field. Efficient hardware multiplication circuitry is used in the implementation, which reduces the area overhead and increases the throughput.
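The core idea of the paper, approximating an activation function by low-degree polynomials on a small number of sub-intervals, can be illustrated with a short sketch. The breakpoints below are uniform placeholders rather than the genetically optimised positions from the paper, and an ordinary least-squares fit stands in for the true minimax fit; the sketch also shows the standard range-reduction identity sigmoid(-x) = 1 - sigmoid(x), which halves the domain that must be approximated.

```python
import numpy as np

def sigmoid(x):
    """Reference activation: 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical uniform segment boundaries on [0, 8]; the paper instead
# places breakpoints optimally using a genetic algorithm.
breakpoints = np.linspace(0.0, 8.0, 9)

# Fit one degree-2 polynomial per segment (least squares here, as a
# stand-in for a true minimax/Chebyshev fit).
segments = []
for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
    xs = np.linspace(lo, hi, 64)
    segments.append((lo, hi, np.polyfit(xs, sigmoid(xs), 2)))

def approx_sigmoid(x):
    # Range reduction: sigmoid(-x) = 1 - sigmoid(x).
    if x < 0:
        return 1.0 - approx_sigmoid(-x)
    # Saturate beyond the approximated range.
    if x >= breakpoints[-1]:
        return 1.0
    for lo, hi, c in segments:
        if lo <= x < hi:
            return float(np.polyval(c, x))

# Maximum absolute error over the full (reduced and mirrored) range.
max_err = max(abs(approx_sigmoid(float(x)) - sigmoid(x))
              for x in np.linspace(-8.0, 8.0, 1001))
```

In a hardware realisation, each segment's coefficients would be stored in a small lookup table indexed by the high-order bits of the input, with the polynomial evaluated by the shared multiplication circuitry described in the paper.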
Keywords: Activation Function, Hardware Implementation, Hardware Architecture, Area Overhead, Range Reduction