Artificial Neural Network Engine: Parallel and Parameterized Architecture Implemented in FPGA
In this paper we present and analyze a hardware engine for artificial neural networks, describing its architecture and implementation. The engine was designed to overcome the performance limitations of serial software implementations and is based on a hierarchical, parallel, and parameterized architecture. Verification results show that the engine improves computational performance, achieving speedups from 52.3 to 204.5 over the serial software baseline, while its architectural parameterization provides greater flexibility.
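The speedups reported above are ratios of serial software runtime to hardware engine runtime. A minimal sketch of that metric, with hypothetical runtimes chosen only to bracket the reported 52.3–204.5 range (no timing data appears in this excerpt):

```python
def speedup(t_serial: float, t_hardware: float) -> float:
    """Speedup of the hardware engine over the serial software baseline."""
    return t_serial / t_hardware

# Hypothetical runtimes in seconds, for illustration only.
print(speedup(104.6, 2.0))  # lower end of the reported range: 52.3
print(speedup(409.0, 2.0))  # upper end of the reported range: 204.5
```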
Keywords: Artificial Neural Network · FPGA Implementation · Solve Performance Problem · Neuron Module · Temporal Parallelism