
Like an Open Book? Read Neural Network Architecture with Simple Power Analysis on 32-Bit Microcontrollers

  • Conference paper
Smart Card Research and Advanced Applications (CARDIS 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14530)



Model extraction is a growing concern for the security of AI systems. For deep neural network models, the architecture is the most important information an adversary aims to recover. Because they are sequences of repeated computation blocks, neural network models deployed on edge devices generate distinctive side-channel leakages, which can be exploited to extract critical information when the targeted platforms are physically accessible. By combining theoretical knowledge of deep learning practices with an analysis of a widespread implementation library (ARM CMSIS-NN), we aim to answer a critical question: how much architecture information can be extracted by simply examining an EM side-channel trace? For the first time, we propose an extraction methodology for traditional MLP and CNN models running on a high-end 32-bit microcontroller (Cortex-M7) that relies only on simple pattern-recognition analysis. Despite a few challenging cases, we claim that, contrary to parameter extraction, the complexity of the attack is relatively low, and we highlight the urgent need for practicable protections that can fit the strong memory and latency requirements of such platforms.
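The repeated-block structure mentioned in the abstract can be illustrated with a toy sketch: if each layer's computation produces a burst of activity in the side-channel trace, separated by quieter transitions, then simply counting above-threshold activity segments estimates the number of layers. This is purely an illustrative sketch with a synthetic trace; the thresholding function and all names here are ours, not the paper's methodology.

```python
def count_activity_segments(trace, threshold, min_len=3):
    """Count contiguous runs of samples whose magnitude exceeds `threshold`.

    Each sufficiently long run is treated as one computation block
    (e.g., one layer's inference) in a side-channel trace.
    """
    count, run = 0, 0
    for x in trace:
        if abs(x) > threshold:
            run += 1
        else:
            if run >= min_len:
                count += 1
            run = 0
    if run >= min_len:  # trace may end mid-burst
        count += 1
    return count

# Synthetic trace: 4 "layer" bursts separated by quiet gaps.
trace = []
for _ in range(4):
    trace += [0.05] * 10 + [0.9] * 20
trace += [0.05] * 10

print(count_activity_segments(trace, threshold=0.5))  # -> 4
```

Real traces would of course require preprocessing (filtering, envelope extraction) before such a segmentation is meaningful, but the principle of counting repeated patterns is the same.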



Notes

  2. With \(P=0\), borders are not considered and the output tensor is shorter.

  3. \(S=1\) is the standard default sliding, one element at a time.

  6. In [20], the authors consider four variants of each of AlexNet, Inception-v3, ResNet-50 and ResNet-101, for a total of 16 architectures.

  9. Raw traces are available in the public repository.

  10. We assume that the size of the inputs feeding the model is known to the adversary (e.g., \(28\times 28\) for MNIST and \(32\times 32\times 3\) for Cifar-10).

  11. Equivalent functions are available for non-square input tensors.

  12. If the dense layer had 4.5 times more neurons, the MAC complexity of the dense and convolutional layers would be equal. However, such a large number of neurons (i.e., trainable parameters) in a dense layer is usually unsuitable, as it leads to classical overfitting issues.
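The quantities discussed in the notes on padding, stride and MAC complexity follow from the standard convolution formulas. As a minimal sketch (function names are illustrative, not taken from the paper): the output spatial size of a convolution is \(\lfloor (N - K + 2P)/S \rfloor + 1\), and the MAC count of a layer is what lets one compare dense and convolutional complexity.

```python
def conv_output_size(n, k, p=0, s=1):
    """Output spatial size of a conv layer: floor((n - k + 2p)/s) + 1.

    With p=0, borders are not considered and the output is shorter;
    s=1 is the default one-element-at-a-time sliding.
    """
    return (n - k + 2 * p) // s + 1

def conv_macs(n_out, k, c_in, c_out):
    # Square conv: each of the n_out*n_out output positions, per output
    # channel, needs k*k*c_in multiply-accumulate operations.
    return n_out * n_out * k * k * c_in * c_out

def dense_macs(n_in, n_out):
    # One MAC per (input, neuron) pair.
    return n_in * n_out

# Example: Cifar-10 input (32x32x3), 3x3 conv with 32 filters, P=0, S=1.
n_out = conv_output_size(32, 3)           # -> 30
print(conv_macs(n_out, 3, 3, 32))         # -> 777600
print(dense_macs(30 * 30 * 32, 10))       # -> 288000 (flattened to 10 classes)
```

Comparing such MAC counts is what makes note 12 concrete: a dense layer only matches a convolution's complexity at an impractically large neuron count.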


  1. Batina, L., Jap, D., Bhasin, S., Picek, S.: CSI NN: reverse engineering of neural network architectures through electromagnetic side channel. In: 28th USENIX Security Symposium. USENIX Association (2019)

  2. Carlini, N., Jagielski, M., Mironov, I.: Cryptanalytic extraction of neural network models. In: Micciancio, D., Ristenpart, T. (eds.) CRYPTO 2020. LNCS, vol. 12172, pp. 189–218. Springer, Cham (2020)

  3. Chmielewski, Ł., Weissbart, L.: On reverse engineering neural network implementation on GPU. In: Zhou, J., et al. (eds.) ACNS 2021. LNCS, vol. 12809, pp. 96–113. Springer, Cham (2021)

  4. Duddu, V., Samanta, D., Rao, D.V., Balas, V.E.: Stealing neural networks via timing side channels. arXiv preprint arXiv:1812.11720 (2018)

  5. Gongye, C., Fei, Y., Wahl, T.: Reverse-engineering deep neural networks using floating-point timing side-channels. In: 57th ACM/IEEE Design Automation Conference (DAC). IEEE (2020)

  6. Hector, K., Moellic, P.-A., Dumont, M., Dutertre, J.-M.: Fault injection and safe-error attack for extraction of embedded neural network models. arXiv preprint arXiv:2308.16703 (2023)

  7. Jagielski, M., Carlini, N., Berthelot, D., Kurakin, A., Papernot, N.: High accuracy and high fidelity extraction of neural networks. In: Proceedings of the 29th USENIX Conference on Security Symposium (2020)

  8. Joud, R., Moëllic, P.-A., Pontié, S., Rigaud, J.-B.: A practical introduction to side-channel extraction of deep neural network parameters. In: Buhan, I., Schneider, T. (eds.) CARDIS 2022. LNCS, vol. 13820, pp. 45–65. Springer, Cham (2023)

  9. Lai, L., Suda, N., Chandra, V.: CMSIS-NN: efficient neural network kernels for Arm Cortex-M CPUs. arXiv preprint arXiv:1801.06601 (2018)

  10. Lin, J., Chen, W.M., Lin, Y., Gan, C., Han, S., et al.: MCUNet: tiny deep learning on IoT devices. In: Advances in Neural Information Processing Systems, vol. 33 (2020)

  11. Luo, Y., Duan, S., Gongye, C., Fei, Y., Xu, X.: NNReArch: a tensor program scheduling framework against neural network architecture reverse engineering. In: IEEE 30th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM). IEEE (2022)

  12. Ma, J.: A higher-level neural network library on microcontrollers (NNoM) (2020)

  13. Maji, S., Banerjee, U., Chandrakasan, A.P.: Leaky nets: recovering embedded neural network models and inputs through simple power and timing side-channels - attacks and defenses. IEEE Internet Things J. 8(15) (2021)

  14. Nguyen, B., Moëllic, P.-A., Blayac, S.: Evaluation of convolution primitives for embedded neural networks on 32-bit microcontrollers. In: Abraham, A., Pllana, S., Casalino, G., Ma, K., Bajaj, A. (eds.) ISDA 2022. LNNS, vol. 646, pp. 427–437. Springer, Cham (2022)

  15. Orekondy, T., Schiele, B., Fritz, M.: Knockoff nets: stealing functionality of black-box models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2019)

  16. Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B., Swami, A.: Practical black-box attacks against machine learning. In: Proceedings of the ACM Asia Conference on Computer and Communications Security (2017)

  17. Papernot, N., McDaniel, P., Sinha, A., Wellman, M.P.: SoK: security and privacy in machine learning. In: 2018 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 399–414. IEEE (2018)

  18. Rakin, A.S., Chowdhuryy, M.H.I., Yao, F., Fan, D.: DeepSteal: advanced model extractions leveraging efficient weight stealing in memories. In: IEEE Symposium on Security and Privacy (SP). IEEE (2022)

  19. Tramèr, F., Zhang, F., Juels, A., Reiter, M.K., Ristenpart, T.: Stealing machine learning models via prediction APIs. In: USENIX Security Symposium, vol. 16 (2016)

  20. Xiang, Y., Chen, Z., Chen, Z., et al.: Open DNN box by power side-channel attack. IEEE Trans. Circ. Syst. II: Express Briefs 67(11) (2020)

  21. Yli-Mäyry, V., Ito, A., Homma, N., Bhasin, S., Jap, D.: Extraction of binarized neural network architecture and secret parameters using side-channel information. In: IEEE International Symposium on Circuits and Systems (ISCAS). IEEE (2021)

  22. Yu, H., Ma, H., Yang, K., Zhao, Y., Jin, Y.: DeepEM: deep neural networks model recovery through EM side-channel information leakage. In: IEEE International Symposium on Hardware Oriented Security and Trust (HOST). IEEE (2020)



Acknowledgements

This work was supported (CEA-Leti) by the EU project InSecTT (ECSEL JU 876038) and by the French ANR in the framework of the Investissements d'avenir program (ANR-10-AIRT-05, IRT Nanoelec), and (Mines Saint-Étienne) by the ANR PICTURE program (AAPG2020). This work benefited from the French Jean Zay supercomputer through the AI dynamic access program.

Corresponding author

Correspondence to Pierre-Alain Moëllic.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Joud, R., Moëllic, PA., Pontié, S., Rigaud, JB. (2024). Like an Open Book? Read Neural Network Architecture with Simple Power Analysis on 32-Bit Microcontrollers. In: Bhasin, S., Roche, T. (eds) Smart Card Research and Advanced Applications. CARDIS 2023. Lecture Notes in Computer Science, vol 14530. Springer, Cham.

Download citation


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-54408-8

  • Online ISBN: 978-3-031-54409-5

  • eBook Packages: Computer Science (R0)
