Optimized FPGA Implementation of an Artificial Neural Network Using a Single Neuron

  • Conference paper
Computer Science and Education in Computer Science (CSECS 2023)

Abstract

Since their emergence in the early 1940s as a connectionist approximation of the functioning of neurons in the brain, artificial neural networks have undergone significant development. Their complexity has grown at a steady, nearly exponential pace, and the variety of models keeps increasing. This is due, on the one hand, to achievements in microelectronics and, on the other, to the growing interest in and development of the mathematical apparatus of artificial intelligence. It can be argued, however, that overcomplicating the structure of an artificial neural network is no guarantee of success. Following this reasoning, the paper continues the authors' previous research on creating an optimized neural network designed for use on resource-constrained hardware. The new solution presents a design procedure for building neural networks that use only a single hardware neuron, relying on context switching and time multiplexing with the aid of an FPGA device. This leads to a significant reduction in computational requirements and makes it possible to create small but very efficient artificial neural networks.
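
To make the principle concrete, the following minimal Python sketch illustrates the idea described in the abstract: a single neuron is reused for every logical node of the network by switching its weight and bias context while the data are time-multiplexed through it. This is only a behavioural model under assumed names and sizes, not the authors' FPGA (RTL) design; the network shape, the weights, and the sigmoid activation are illustrative assumptions.

import math

def neuron(inputs, weights, bias):
    # The single physical neuron: multiply-accumulate followed by activation.
    acc = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-acc))  # sigmoid, chosen only for illustration

def run_network(x, layers):
    # Evaluate a whole feed-forward network on the one neuron above.
    # `layers` is a list of (weight_matrix, bias_vector) pairs; each inner
    # iteration corresponds to one context switch: the weights and bias of
    # the next logical node are loaded into the single neuron, which then
    # processes the current activations.
    activations = x
    for weight_matrix, biases in layers:
        outputs = []
        for weights, bias in zip(weight_matrix, biases):
            outputs.append(neuron(activations, weights, bias))  # context switch
        activations = outputs
    return activations

# Example: a 2-3-1 network evaluated node by node on the single neuron.
layers = [
    ([[0.5, -0.4], [0.3, 0.8], [-0.6, 0.1]], [0.0, 0.1, -0.2]),  # hidden layer
    ([[1.0, -1.0, 0.5]], [0.05]),                                # output layer
]
print(run_network([0.7, 0.2], layers))

In hardware, the same schedule trades time for area: one physical neuron evaluated N times in sequence replaces N parallel neurons, which is the source of the resource savings the abstract refers to. A real FPGA realisation would replace the Python loop with a sequencer, a context memory holding the per-node weights, and a single MAC-plus-activation unit.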


Acknowledgments

This research paper was written in connection with the agreements between the New Bulgarian University, the China University of Mining and Technology, and the University of Mining and Geology “St. Ivan Rilski” on the subjects “Research and improvement of nodes and elements of the control of mechatronic systems” (MEMF-175/10.05.2023), “Joint Research and Development of key technologies for autonomous control systems”, and “Construction of International Joint Laboratory for new energy power generation and electric vehicles”.

Author information

Corresponding author

Correspondence to Yassen Gorbounov.

Copyright information

© 2023 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper

Cite this paper

Gorbounov, Y., Chen, H. (2023). Optimized FPGA Implementation of an Artificial Neural Network Using a Single Neuron. In: Zlateva, T., Tuparov, G. (eds) Computer Science and Education in Computer Science. CSECS 2023. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 514. Springer, Cham. https://doi.org/10.1007/978-3-031-44668-9_19

  • DOI: https://doi.org/10.1007/978-3-031-44668-9_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44667-2

  • Online ISBN: 978-3-031-44668-9

  • eBook Packages: Computer Science, Computer Science (R0)
