
Neural network training based on FPGA with floating point number format and its performance

  • Original Article
  • Published in Neural Computing and Applications

Abstract

This paper presents the training of a two-layer feed-forward artificial neural network (ANN) by back propagation and its implementation on an FPGA (field programmable gate array) using floating-point number formats of different bit lengths, evaluated on the EX-OR problem. In keeping with the inherently parallel data-processing nature of ANNs, the training operations are carried out in parallel on the FPGA. Training is performed on a Virtex2vp30 chip of the Xilinx FPGA family, and the network created on the FPGA is coded in VHDL. Compared with results available in the literature, the technique developed here consumes less area for an ANN of the same structure and bit length and achieves better performance.
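Only the abstract of the method is available here, so the following is a minimal software sketch, in Python rather than the paper's VHDL, of what the hardware realizes: a two-layer feed-forward network trained by back propagation on the EX-OR problem, with a crude mantissa-rounding step standing in for the reduced-bit-length floating-point formats studied in the paper. The network size (two hidden neurons), learning rate, logistic activation and the quantize helper are illustrative assumptions, not details taken from the paper.

import numpy as np

def quantize(x, mantissa_bits=10):
    # Crude stand-in for a shorter floating-point format: round the mantissa
    # of each value to a fixed number of bits (illustrative only).
    m, e = np.frexp(x)                 # x = m * 2**e with 0.5 <= |m| < 1
    return np.ldexp(np.round(m * 2.0**mantissa_bits) / 2.0**mantissa_bits, e)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # EX-OR inputs
T = np.array([[0.], [1.], [1.], [0.]])                   # EX-OR targets

W1 = rng.uniform(-1, 1, (2, 2)); b1 = np.zeros((1, 2))   # input -> hidden
W2 = rng.uniform(-1, 1, (2, 1)); b2 = np.zeros((1, 1))   # hidden -> output
eta = 0.5                                                # learning rate (assumed)

for epoch in range(20000):
    # Forward pass; on the FPGA the neuron computations run in parallel.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Backward pass: standard back-propagation of the squared error.
    dY = (Y - T) * Y * (1.0 - Y)
    dH = (dY @ W2.T) * H * (1.0 - H)
    # Weight updates, rounded to mimic a reduced-precision number format.
    W2 = quantize(W2 - eta * H.T @ dY)
    b2 = quantize(b2 - eta * dY.sum(0, keepdims=True))
    W1 = quantize(W1 - eta * X.T @ dH)
    b1 = quantize(b1 - eta * dH.sum(0, keepdims=True))

# Outputs should approach [0, 1, 1, 0]; convergence depends on the random
# initialization, so a different seed or more hidden neurons may be needed.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 3))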


Author information

Corresponding author

Correspondence to Mehmet Ali Çavuşlu.


About this article

Cite this article

Çavuşlu, M.A., Karakuzu, C., Şahin, S. et al. Neural network training based on FPGA with floating point number format and it’s performance. Neural Comput & Applic 20, 195–202 (2011). https://doi.org/10.1007/s00521-010-0423-3


