Abstract
With the development of artificial intelligence, neural-network-based algorithms are widely applied in various fields, making network structures more complex and increasing the number of neurons. The activation function is the most important part of a neuron and provides its nonlinear characteristics. Consequently, modeling activation functions on an FPGA faces the problems of too many iterations and heavy hardware-resource consumption. In this paper, a modeling method for the Sigmoid function based on an FPGA multiprocessor is proposed. In this method, different processor cores control different modules, such as communication with peripherals, scheduling of computing resources, and sharing of hardware resources, so as to optimize hardware-resource usage and accelerate computation. A case study with the Sigmoid function shows that the proposed method is feasible, and a comparison and analysis of resource usage and simulation results are given.
Acknowledgement
This work is supported by the National Key Research and Development Program of China (No. 2018YFB1701602).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Yue, L., Chun, Z., Chuan, X. (2022). A Modeling Method of Neural Network Activation Function Based on FPGA Multiprocessor. In: Jia, Y., Zhang, W., Fu, Y., Yu, Z., Zheng, S. (eds) Proceedings of 2021 Chinese Intelligent Systems Conference. Lecture Notes in Electrical Engineering, vol 803. Springer, Singapore. https://doi.org/10.1007/978-981-16-6328-4_41
DOI: https://doi.org/10.1007/978-981-16-6328-4_41
Publisher Name: Springer, Singapore
Print ISBN: 978-981-16-6327-7
Online ISBN: 978-981-16-6328-4
eBook Packages: Intelligent Technologies and Robotics (R0)