
Behavioural synthesis of SGD using the CCC framework: a simple XOR-solving MLP

Applied Intelligence

Abstract

Behavioural synthesis automates the design process by generating task-specific hardware for FPGA and SoC platforms or for custom silicon devices such as ASICs. The flows of relevant commercial tools can bring significant benefits to software developers with no hardware design expertise. In this work, our Custom Coprocessor Compilations (CCC) high-level synthesis tool is leveraged to synthesize an FPGA design for stochastic gradient descent (SGD), a cornerstone optimization method in today's deep neural networks. A simple multilayer perceptron (MLP) that solves the 3-input XOR problem is implemented and transformed into a Register Transfer Level (RTL) VHDL hardware microarchitecture using the CCC hardware synthesizer. The produced VHDL is subsequently verified for correct functionality in GNU Ada. The results validate our motivation: accelerated performance targeted at low-powered, autonomous devices.
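As a point of reference for the behaviour being synthesized, the following is a minimal software sketch of an MLP trained with per-sample SGD to learn 3-input XOR (odd parity). The layer sizes, learning rate, initialization, and epoch count are illustrative assumptions for this sketch, not the authors' exact configuration or the CCC-generated microarchitecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# All 8 input patterns and their 3-input-XOR (odd-parity) targets.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
y = (X.sum(axis=1) % 2).reshape(-1, 1)

# One hidden layer of 12 sigmoid units, one sigmoid output unit
# (sizes are illustrative assumptions).
W1 = rng.normal(0.0, 1.0, (3, 12)); b1 = np.zeros(12)
W2 = rng.normal(0.0, 1.0, (12, 1)); b2 = np.zeros(1)
lr = 1.0

for epoch in range(10000):
    for i in rng.permutation(len(X)):    # stochastic: update after each sample
        x, t = X[i:i+1], y[i:i+1]
        h = sigmoid(x @ W1 + b1)         # forward pass, hidden layer
        o = sigmoid(h @ W2 + b2)         # forward pass, output
        d_o = (o - t) * o * (1 - o)      # backprop of squared error through output sigmoid
        d_h = (d_o @ W2.T) * h * (1 - h) # backprop into hidden layer
        W2 -= lr * h.T @ d_o; b2 -= lr * d_o[0]
        W1 -= lr * x.T @ d_h; b1 -= lr * d_h[0]

# Threshold the trained network's outputs on all 8 patterns.
pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print((pred == y).mean())  # fraction of the 8 patterns classified correctly
```

The per-sample weight updates inside the inner loop are what distinguish SGD from full-batch gradient descent, and they are the operations the paper maps to RTL hardware.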



Data availability

The data that support the findings of this study are not openly available for reasons of patent protection; they are available from the corresponding author upon reasonable request, via a controlled-access repository where relevant.


Author information

Corresponding author

Correspondence to Michael Dossis.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article


Cite this article

Amanatidis, D., Dossis, M. Behavioural synthesis of SGD using the CCC framework: a simple XOR-solving MLP. Appl Intell 52, 15226–15236 (2022). https://doi.org/10.1007/s10489-022-03376-9

