
Neural Architecture Search for Spiking Neural Networks

Part of the Lecture Notes in Computer Science book series (LNCS, volume 13684)

Abstract

Spiking Neural Networks (SNNs) have attracted considerable attention as a potential energy-efficient alternative to conventional Artificial Neural Networks (ANNs) due to their inherently sparse activations. However, most prior SNN methods reuse ANN-like architectures (e.g., VGG-Net or ResNet), which can be sub-optimal for processing temporal sequences of binary spike information in SNNs. To address this, we introduce a novel Neural Architecture Search (NAS) approach for finding better SNN architectures. Inspired by recent NAS approaches that identify the optimal architecture from activation patterns at initialization, we select, without any training, the architecture that can represent diverse spike activation patterns across different data samples. Moreover, to further leverage the temporal information among the spikes, we search for backward connections (i.e., temporal feedback connections) between layers in addition to feed-forward connections. Interestingly, SNASNet, the architecture found by our search algorithm, achieves higher performance with backward connections, demonstrating the importance of designing SNN architectures that suitably exploit temporal information. We conduct extensive experiments on three image recognition benchmarks and show that SNASNet achieves state-of-the-art performance with significantly fewer timesteps (5 timesteps). Code is available on GitHub.
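
The selection criterion sketched above, picking the candidate whose spike activation patterns at initialization are the most diverse across input samples, can be illustrated with a NASWOT-style log-determinant score. The following is a minimal sketch under stated assumptions rather than the authors' implementation: the agreement kernel, the eps regularizer, and the names hamming_kernel and spike_diversity_score are hypothetical, and random binary matrices stand in for the leaky integrate-and-fire (LIF) spike patterns a real candidate network would produce on a minibatch.

    import numpy as np

    def hamming_kernel(patterns):
        # patterns: (num_samples, num_units) binary {0, 1} spike activations.
        # Entry (i, j) counts the units on which samples i and j agree,
        # i.e. num_units minus their Hamming distance.
        p = patterns.astype(np.float64)
        return p @ p.T + (1.0 - p) @ (1.0 - p).T

    def spike_diversity_score(patterns, eps=1e-6):
        # Training-free fitness: log-determinant of the agreement kernel.
        # Diverse patterns across samples give a well-conditioned kernel and
        # hence a high score; near-identical patterns collapse it.
        k = hamming_kernel(patterns)
        sign, logdet = np.linalg.slogdet(k + eps * np.eye(k.shape[0]))
        return logdet if sign > 0 else float("-inf")

    # Toy comparison of two hypothetical candidates with random stand-in patterns.
    rng = np.random.default_rng(0)
    batch, units = 16, 256
    diverse = (rng.random((batch, units)) < 0.1).astype(np.uint8)
    collapsed = np.tile(rng.random((1, units)) < 0.1, (batch, 1)).astype(np.uint8)
    print(spike_diversity_score(diverse))    # higher: distinct patterns per sample
    print(spike_diversity_score(collapsed))  # much lower: identical rows

In a search loop of this kind, each candidate would be instantiated with random weights, run on a single minibatch for a few timesteps to record its binary spike patterns, and scored in this manner; only the highest-scoring architecture would then be trained.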

Keywords

  • Spiking neural networks
  • Neural architecture search
  • Neuromorphic computing



Acknowledgment

This work was supported in part by C-BRIC (a JUMP center sponsored by DARPA and SRC), a Google Research Scholar Award, the National Science Foundation (Grant #1947826), TII (Abu Dhabi), and the DARPA AI Exploration (AIE) program.

Author information

Corresponding author

Correspondence to Youngeun Kim.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 805 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Kim, Y., Li, Y., Park, H., Venkatesha, Y., Panda, P. (2022). Neural Architecture Search for Spiking Neural Networks. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13684. Springer, Cham. https://doi.org/10.1007/978-3-031-20053-3_3


  • DOI: https://doi.org/10.1007/978-3-031-20053-3_3


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20052-6

  • Online ISBN: 978-3-031-20053-3

  • eBook Packages: Computer Science, Computer Science (R0)