Hardware-Aware Evolutionary Approaches to Deep Neural Networks

Chapter in: Handbook of Evolutionary Machine Learning

Part of the book series: Genetic and Evolutionary Computation (GEVO)

Abstract

This chapter gives an overview of evolutionary algorithm (EA)-based methods applied to the design of efficient implementations of deep neural networks (DNNs). We introduce various hardware acceleration platforms for DNNs, developed especially for energy-efficient computing on edge devices. In addition to the evolutionary optimization of their particular components or settings, we describe neural architecture search (NAS) methods adapted to directly design highly optimized DNN architectures for a given hardware platform. We emphasize techniques that co-optimize the hardware platform and the neural network architecture to maximize the accuracy-energy trade-off. Case studies are primarily devoted to NAS for image classification. Finally, we discuss the open challenges of this popular research area.
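To make the multi-objective co-optimization idea concrete, below is a minimal sketch of an evolutionary NAS loop that maintains a Pareto front over two objectives: predicted accuracy (to be maximized) and estimated energy on a target platform (to be minimized). Everything here is an illustrative assumption rather than a method surveyed in the chapter: the toy search space (lists of layer widths) and both objective functions are placeholders. In practice, accuracy would come from (proxy) training and energy from a hardware cost model or on-device measurement.

```python
import random

# Toy search space: an architecture is a list of layer widths.
# Both objective functions are illustrative placeholders.

def estimate_accuracy(arch):
    # Placeholder: larger networks score higher, with diminishing returns.
    return 1.0 - 1.0 / (1.0 + 0.01 * sum(arch))

def estimate_energy(arch):
    # Placeholder: energy grows with the number of MAC operations
    # between consecutive layers.
    macs = sum(a * b for a, b in zip(arch, arch[1:]))
    return macs * 1e-6

def dominates(f1, f2):
    # f = (accuracy, energy); maximize accuracy, minimize energy.
    return f1[0] >= f2[0] and f1[1] <= f2[1] and f1 != f2

def pareto_front(pop):
    # Keep only the non-dominated architectures.
    fits = {id(a): (estimate_accuracy(a), estimate_energy(a)) for a in pop}
    return [a for a in pop
            if not any(dominates(fits[id(b)], fits[id(a)]) for b in pop)]

def mutate(arch):
    # Perturb one layer width; keep widths positive.
    child = arch[:]
    i = random.randrange(len(child))
    child[i] = max(8, child[i] + random.choice([-16, 16]))
    return child

def evolve(generations=50, pop_size=20, n_layers=4):
    pop = [[random.choice([16, 32, 64, 128]) for _ in range(n_layers)]
           for _ in range(pop_size)]
    for _ in range(generations):
        front = pareto_front(pop)
        offspring = [mutate(random.choice(front)) for _ in range(pop_size)]
        # Naive truncation of the front; a real implementation would use
        # non-dominated sorting with a diversity measure instead.
        pop = pareto_front(front + offspring)[:pop_size]
    return pop

if __name__ == "__main__":
    for arch in evolve():
        print(arch, round(estimate_accuracy(arch), 3),
              round(estimate_energy(arch), 4))
```

Replacing the naive Pareto filter and truncation with NSGA-II-style non-dominated sorting and crowding distance is the usual next step once the population grows; the loop structure itself stays the same.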



Acknowledgements

This work was supported by the Czech Science Foundation project "Automated design of hardware accelerators for resource-aware machine learning", no. 21-13001S.

Author information

Correspondence to Lukas Sekanina.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Sekanina, L., Mrazek, V., Pinos, M. (2024). Hardware-Aware Evolutionary Approaches to Deep Neural Networks. In: Banzhaf, W., Machado, P., Zhang, M. (eds) Handbook of Evolutionary Machine Learning. Genetic and Evolutionary Computation. Springer, Singapore. https://doi.org/10.1007/978-981-99-3814-8_12

  • DOI: https://doi.org/10.1007/978-981-99-3814-8_12

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-3813-1

  • Online ISBN: 978-981-99-3814-8

  • eBook Packages: Computer Science (R0)
