
FPGA Implementation of Efficient Softmax Architecture for Deep Neural Networks

  • Conference paper
Emerging Electronics and Automation (E2A 2022)

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 1088)


Abstract

Neural networks are in widespread use and are continually being improved to meet the demands of future technological advances. The softmax function performs multi-class logistic regression and final classification after the input data have been processed by the preceding convolutional layers. It is hardware-intensive because it requires exponentiation and division operations. In recent years, the gap between highly optimized, hardware-efficient neural network implementations and softmax implementations has widened, creating a bottleneck. A hardware-efficient implementation of the function is therefore required for networks such as CNNs and DNNs. We propose a hardware-efficient softmax architecture that supports multiple numbers of classes and implement it on FPGAs using appropriate approximation techniques.
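For context, softmax maps a vector of scores x_1, ..., x_n to probabilities softmax(x_i) = e^{x_i} / sum_j e^{x_j}, so the exponentiation and the per-class division mentioned above dominate hardware cost. The C sketch below illustrates one common family of approximations, under stated assumptions rather than the architecture proposed in this paper: scores are held in Q8.8 fixed point, e^x is rewritten as 2^{x * log2(e)} so that the integer part of the exponent becomes a bit shift and the fractional part is served by a small lookup table, and the maximum score is subtracted first so every exponent is non-positive. The function names, the Q8.8 format, and the 16-entry table size are illustrative choices, not taken from the paper.

    #include <stdint.h>

    /* Illustrative fixed-point softmax approximation (Q8.8), sketching the
       kind of hardware-friendly rewriting an FPGA design might use: e^x is
       computed as 2^(x*log2(e)), the integer part of the exponent becomes
       a right shift, and the fractional part comes from a small lookup
       table. A generic sketch, not the architecture proposed in the paper. */

    #define FRAC_BITS 8            /* Q8.8 fixed point: 256 represents 1.0 */
    #define LOG2E_Q8  369          /* log2(e) = 1.4427 in Q8.8             */

    /* 16-entry table of round(256 * 2^(i/16)), i = 0..15 */
    static const uint16_t pow2_lut[16] = {
        256, 267, 279, 292, 304, 318, 332, 347,
        362, 378, 395, 412, 431, 450, 470, 490
    };

    /* Approximate 256 * 2^(x/256) for x <= 0 (inputs are max-subtracted). */
    static uint32_t exp2_q8(int32_t x)
    {
        int32_t ipart = x >> FRAC_BITS;     /* floor of the exponent       */
        uint32_t frac = (x & 0xFF) >> 4;    /* top 4 fractional bits       */
        if (ipart <= -16)
            return 0;                       /* result underflows to zero   */
        return pow2_lut[frac] >> (-ipart);  /* 2^ipart realized as a shift */
    }

    /* in: Q8.8 scores; out: Q8.8 probabilities; n <= 64 classes assumed. */
    void softmax_q8(const int32_t *in, uint32_t *out, int n)
    {
        int32_t maxv = in[0];
        for (int i = 1; i < n; i++)
            if (in[i] > maxv) maxv = in[i];

        uint32_t e[64], sum = 0;
        for (int i = 0; i < n; i++) {
            /* subtracting the maximum keeps every exponent <= 0 */
            int32_t z = (int32_t)(((int64_t)(in[i] - maxv) * LOG2E_Q8)
                                  >> FRAC_BITS);
            e[i] = exp2_q8(z);
            sum += e[i];
        }
        /* one division per class; a hardware design would typically use a
           single reciprocal (or shift-based normalization) instead */
        for (int i = 0; i < n; i++)
            out[i] = (uint32_t)(((uint64_t)e[i] << FRAC_BITS) / sum);
    }

As a sanity check, scores of 1.0 and 0.0 (256 and 0 in Q8.8) yield outputs of about 189 and 66, roughly 0.74 and 0.26 against the exact softmax values (0.731, 0.269); a finer lookup table narrows the gap at the cost of more on-chip memory.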



Author information

Corresponding author

Correspondence to Velmathi Guruviah.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Gokula Kannan, R., Hari Raghavan, V., Guruviah, V. (2024). FPGA Implementation of Efficient Softmax Architecture for Deep Neural Networks. In: Gabbouj, M., Pandey, S.S., Garg, H.K., Hazra, R. (eds) Emerging Electronics and Automation. E2A 2022. Lecture Notes in Electrical Engineering, vol 1088. Springer, Singapore. https://doi.org/10.1007/978-981-99-6855-8_47
