
FPGA Implementation of a Pipelined On-Line Backpropagation

Abstract

This paper describes the implementation of a systolic array for a multilayer perceptron with a hardware-friendly learning algorithm. A pipelined modification of the on-line backpropagation algorithm is presented and explained; it exploits parallelism more fully because the forward and backward phases can be performed simultaneously. The neural network performance of the proposed modification is evaluated on typical benchmark databases, at the various precisions required, and compared with the standard on-line backpropagation algorithm. Although the preliminary results are positive, further theoretical analysis and experiments with different training sets will be necessary. For this reason, our VLSI systolic architecture, combined with the reconfiguration properties of FPGAs and a design flow based on generic VHDL, provides a reusable, flexible, and fast method of designing a complete ANN on a single FPGA, and permits very fast hardware verification of both the pipelined on-line backpropagation algorithm and the standard algorithm.
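To illustrate the idea behind the pipelined modification, the following is a minimal software sketch, not the paper's systolic implementation: the network size, learning rate, and the one-pattern update delay are illustrative assumptions. In the pipelined variant, the forward pass of pattern k+1 begins before the backward pass of pattern k has written its weight update, so each phase sees slightly stale weights — the hazard that overlapping the two phases introduces.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """Minimal 2-layer perceptron (illustrative sizes, not the paper's)."""
    def __init__(self, n_in, n_hid, n_out, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.5, (n_hid, n_in + 1))   # +1 column: bias
        self.W2 = rng.normal(0, 0.5, (n_out, n_hid + 1))
        self.lr = lr

    def forward(self, x):
        a0 = np.append(x, 1.0)                      # input with bias term
        a1 = np.append(sigmoid(self.W1 @ a0), 1.0)  # hidden activations
        a2 = sigmoid(self.W2 @ a1)                  # output activations
        return a0, a1, a2

    def gradients(self, a0, a1, a2, t):
        d2 = (a2 - t) * a2 * (1 - a2)               # output-layer deltas
        d1 = (self.W2[:, :-1].T @ d2) * a1[:-1] * (1 - a1[:-1])
        return np.outer(d1, a0), np.outer(d2, a1)

def train_online(net, X, T, epochs):
    """Standard on-line BP: each pattern's backward pass finishes (and the
    weights are updated) before the next pattern's forward pass starts."""
    for _ in range(epochs):
        for x, t in zip(X, T):
            a0, a1, a2 = net.forward(x)
            g1, g2 = net.gradients(a0, a1, a2, t)
            net.W1 -= net.lr * g1
            net.W2 -= net.lr * g2

def train_pipelined(net, X, T, epochs):
    """Pipelined sketch: the forward pass of pattern k+1 runs before the
    backward update of pattern k is applied, so forward passes see weights
    that are one update stale (the price of overlapping the two phases)."""
    pending = None                                  # update left in the pipe
    for _ in range(epochs):
        for x, t in zip(X, T):
            a0, a1, a2 = net.forward(x)             # pre-update weights
            if pending is not None:                 # apply previous update
                g1, g2 = pending
                net.W1 -= net.lr * g1
                net.W2 -= net.lr * g2
            pending = net.gradients(a0, a1, a2, t)
    if pending is not None:                         # drain the pipeline
        g1, g2 = pending
        net.W1 -= net.lr * g1
        net.W2 -= net.lr * g2
```

In practice the one-update staleness perturbs the gradient slightly, which is why the abstract hedges that further theoretical analysis and experiments are needed; in exchange, a hardware pipeline can keep both the forward and backward datapaths busy on every cycle.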



Author information

Correspondence to Rafael Gadea Gironés.

Additional information

Rafael Gadea-Gironés received the M.Sc. and Ph.D. degrees from the Universidad Politécnica de Valencia, Spain, in 1990 and 2000, respectively. Since 1992 he has been a lecturer in the Department of Electronics at the Universidad Politécnica de Valencia. Currently, he is assistant professor at the Telecommunications Engineering School of the Universidad Politécnica de Valencia, Spain. His areas of research interest include hardware description languages, design of FPGA-based systems, and design of neural networks and cellular automata.

Ricardo José Colom-Palero received the M.Sc. and Ph.D. degrees from the Universidad Politécnica de Valencia, Spain, in 1993 and 2001, respectively. Since 1993 he has been a lecturer in the Department of Electronics at the Universidad Politécnica de Valencia. Currently, he is assistant professor at the Telecommunications Engineering School of the Universidad Politécnica de Valencia, Spain. His areas of research interest include VLSI signal processing, design of FPGA-based systems, and custom digital signal processing for audio and video applications.

Joaquín Cerdá-Boluda received the M.Sc. and Ph.D. degrees from the Universidad Politécnica de Valencia, Spain, in 1998 and 2004, respectively. Since 1999 he has been a lecturer in the Department of Electronics at the Universidad Politécnica de Valencia. Currently, he is assistant professor at the Telecommunications Engineering School of the Universidad Politécnica de Valencia, Spain. His areas of research interest include design of FPGA-based systems and design of neural networks and cellular automata.

Angel Sebastia received his MSc degree in Electronic Engineering from the Polytechnic University of Valencia, Spain, in 1985, and his PhD degree in Electronic Engineering from the University of Valencia, in 1991. He is currently a professor at the Department of Electronic Engineering at the Polytechnic University of Valencia. His research interests are high speed data acquisition systems, electronics in nuclear instrumentation and FPGA-based system design.


About this article

Cite this article

Gironés, R.G., Palero, R.C., Boluda, J.C. et al. FPGA Implementation of a Pipelined On-Line Backpropagation. J VLSI Sign Process Syst Sign Image Video Technol 40, 189–213 (2005). https://doi.org/10.1007/s11265-005-4961-3


Keywords

  • FPGA implementation
  • Artificial Neural Networks (ANN)
  • backpropagation algorithm
  • VLSI systolic architecture