FPGA Implementations of Neural Networks, pp. 103–136
FPNA: Applications and Implementations
Abstract
Neural networks are usually considered naturally parallel computing models, but the number of operators and the complex connection graphs of standard neural models cannot be handled directly by digital hardware devices. The Field Programmable Neural Arrays (FPNA) framework introduced in Chapter 3 reconciles simple hardware topologies with complex neural architectures, thanks to configurable hardware principles applied to neural computation. This two-chapter study gathers the results that have been published about the FPNA concept, as well as some unpublished ones. This second part shows how FPNAs lead to powerful neural architectures that are easy to map onto digital hardware: applications and implementations are described, focusing on a class of synchronous FPNA-derived neural networks for which on-chip learning is also available.
Keywords
neural networks, fine-grain parallelism, digital hardware, FPGA