
An Automata-Theoretic Approach to Synthesizing Binarized Neural Networks

  • Conference paper
  • In: Automated Technology for Verification and Analysis (ATVA 2023)

Abstract

Deep neural networks (DNNs) have been widely used in various tasks and have proven successful. However, the accompanying expensive computation and storage costs make deployment on resource-constrained devices a significant concern. To address this issue, quantization has emerged as an effective way to reduce the cost of DNNs with little accuracy degradation, by quantizing floating-point numbers to low-bit-width fixed-point representations. Quantized neural networks (QNNs) have been developed accordingly, with binarized neural networks (BNNs), restricted to binary values, as a special case. Another concern about neural networks is their vulnerability and lack of interpretability. Despite active research on the trustworthiness of DNNs, few approaches have been proposed for QNNs. To this end, this paper presents an automata-theoretic approach to synthesizing BNNs that meet designated properties. More specifically, we define a temporal logic, called BLTL, as the specification language. We show that each BLTL formula can be transformed into an automaton on finite words. To deal with the state-explosion problem, we provide a tableau-based approach in the actual implementation. For the synthesis procedure, we utilize SMT solvers to detect the existence of a model (i.e., a BNN) during the construction process. Notably, synthesis provides a way to determine the hyper-parameters of the network before training. Moreover, we experimentally evaluate our approach and demonstrate its effectiveness in improving the individual fairness and local robustness of BNNs while largely maintaining accuracy.
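The two ingredients the abstract names can be illustrated concretely. A BNN restricts weights and activations to {-1, +1} (binarization via the sign function), and the synthesis step asks an SMT solver whether *some* assignment of binary weights satisfies a property. The sketch below is illustrative only and is not the paper's implementation: it uses a brute-force enumeration over binary weights in place of a real SMT query, and the equal-output constraint is a crude stand-in for an individual-fairness-style BLTL property.

```python
# Minimal sketch of a binarized layer and a toy "does a model exist?" query.
# All names here (bnn_layer, exists_model) are hypothetical, for illustration.
import itertools

def sign(x):
    # Binarization: map a real value to {-1, +1} (0 maps to +1 here).
    return 1 if x >= 0 else -1

def bnn_layer(weights, inputs):
    # One binarized neuron per weight row: sign of a +/-1 dot product.
    return [sign(sum(w * x for w, x in zip(row, inputs))) for row in weights]

def exists_model(inputs_a, inputs_b, n_neurons=2):
    # Brute-force stand-in for the SMT query: is there *some* assignment
    # of binary weights under which the layer gives the same output on
    # both inputs (a crude individual-fairness-style constraint)?
    n = len(inputs_a)
    for flat in itertools.product([-1, 1], repeat=n_neurons * n):
        weights = [list(flat[i * n:(i + 1) * n]) for i in range(n_neurons)]
        if bnn_layer(weights, inputs_a) == bnn_layer(weights, inputs_b):
            return weights  # a witness model exists
    return None  # property unsatisfiable for this architecture

witness = exists_model([1, -1, 1], [1, -1, -1])
print(witness is not None)  # → True
```

An actual implementation would hand the same existence question to an SMT solver (the paper uses SMT solvers such as Z3) rather than enumerate, which is what makes the approach scale beyond toy sizes.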



Acknowledgement

This work is partially supported by the National Key R & D Program of China (2022YFA1005101), the National Natural Science Foundation of China (61872371, 62072309, 62032024), CAS Project for Young Scientists in Basic Research (YSBR-040), and ISCAS New Cultivation Project (ISCAS-PYFX-202201).

Author information

Correspondence to Wanwei Liu.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Tao, Y., Liu, W., Song, F., Liang, Z., Wang, J., Zhu, H. (2023). An Automata-Theoretic Approach to Synthesizing Binarized Neural Networks. In: André, É., Sun, J. (eds) Automated Technology for Verification and Analysis. ATVA 2023. Lecture Notes in Computer Science, vol 14215. Springer, Cham. https://doi.org/10.1007/978-3-031-45329-8_18


  • DOI: https://doi.org/10.1007/978-3-031-45329-8_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-45328-1

  • Online ISBN: 978-3-031-45329-8

  • eBook Packages: Computer Science (R0)
