
Efficient Hardware Implementation of Decision Tree Training Accelerator

  • Original Research
  • Published in SN Computer Science

Abstract

In this paper, a serial architecture for accelerating the Decision Tree (DT) training algorithm in hardware is proposed. The architecture supports both 32-bit integer and fixed-point training data. In the worst case, the FPGA implementation of the proposed architecture for the Two Means DT (TMDT) algorithm is shown to run at least \(28\times \) faster than the conventional C4.5 training algorithm widely used in machine learning classification. The proposed architecture is implemented on an FPGA platform operating at a maximum frequency of 62 MHz. Furthermore, the hardware implementation is shown to run at least \(10\times \) faster than the software implementation in the worst case. The design has been tested on five binary datasets of varying size and dimension; thus, the proposed hardware realisation is compatible with a wide range of training datasets.
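The abstract names the Two Means DT (TMDT) algorithm without detailing its split rule. The details of TMDT are not given here, so the following is only a hypothetical sketch of what the name suggests: at each node, cluster a feature's values into two groups (1-D k-means with k = 2) and place the split threshold midway between the two cluster means. The function name and iteration count are illustrative assumptions, not the authors' method.

```python
# Hypothetical "two means" split rule for a single feature, assuming the
# split threshold is the midpoint between two 1-D k-means cluster centres.

def two_means_threshold(values, iters=20):
    """Return a split threshold for a 1-D list of numbers."""
    m1, m2 = min(values), max(values)    # initialise the two means at the extremes
    for _ in range(iters):
        # assign each value to its nearer mean
        left = [v for v in values if abs(v - m1) <= abs(v - m2)]
        right = [v for v in values if abs(v - m1) > abs(v - m2)]
        if not left or not right:        # degenerate split; stop early
            break
        m1 = sum(left) / len(left)       # update the two cluster means
        m2 = sum(right) / len(right)
    return (m1 + m2) / 2                 # threshold midway between the means

feature = [1, 2, 3, 10, 11, 12]
t = two_means_threshold(feature)         # -> 6.5 for this sample
left = [v for v in feature if v <= t]
right = [v for v in feature if v > t]
```

A rule of this shape maps naturally onto serial hardware, since each iteration needs only comparisons, accumulations, and two divisions, which is consistent with the abstract's claim of supporting integer and fixed-point data.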



Author information

Corresponding author

Correspondence to Rituparna Choudhury.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.

Data availability

The data are publicly available at https://archive.ics.uci.edu/ml/index.php.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection “Hardware for AI, Machine Learning and Emerging Electronic Systems” guest edited by Himanshu Thapliyal, Saraju Mohanty and VS Kanchana Bhaaskaran.


About this article


Cite this article

Choudhury, R., Ahamed, S.R. & Guha, P. Efficient Hardware Implementation of Decision Tree Training Accelerator. SN COMPUT. SCI. 2, 360 (2021). https://doi.org/10.1007/s42979-021-00748-9
