Bisimulations for Neural Network Reduction

  • Conference paper
  • In: Verification, Model Checking, and Abstract Interpretation (VMCAI 2022)
  • Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 13182)

Abstract

We present a notion of bisimulation that induces a reduced network which is semantically equivalent to the given neural network, and we provide a minimization algorithm that constructs the smallest bisimulation-equivalent network. Because reductions that preserve exact semantic equivalence are limited in how much they can shrink a network, we also present an approximate notion of bisimulation that guarantees semantic closeness rather than semantic equivalence, and we quantify the semantic deviation between neural networks that are approximately bisimilar. The latter provides a trade-off between the amount of reduction and the deviation in semantics.
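
To make the idea concrete, the following is a minimal sketch of an exact, bisimulation-style reduction of a fully connected network. It is an illustration under simplifying assumptions rather than the paper's algorithm: all neurons of a layer are assumed to share one activation function, equality of weights and biases is tested exactly, and output neurons are never merged. Two hidden neurons are merged when they have the same bias and the same total incoming weight from every class of already-merged neurons in the previous layer, which guarantees they compute identical values on all inputs. The function name reduce_network and the layout of the weight and bias arrays are illustrative choices, not taken from the paper.

    # Sketch only: merge neurons layer by layer when they provably compute the
    # same value on every input (same bias, same class-summed incoming weights).
    import numpy as np

    def reduce_network(weights, biases):
        """weights[k] has shape (n_k, n_{k+1}); biases[k] has shape (n_{k+1},).
        Returns the weight matrices and bias vectors of the merged network."""
        # Input neurons are only equivalent to themselves.
        prev_classes = [[i] for i in range(weights[0].shape[0])]
        red_weights, red_biases = [], []
        for layer, (W, b) in enumerate(zip(weights, biases)):
            # Total incoming weight from each previous-layer class to each neuron.
            summed = np.stack([W[idx, :].sum(axis=0) for idx in prev_classes])
            if layer == len(weights) - 1:
                # Never merge output neurons, so the output dimension is preserved.
                classes = [[j] for j in range(W.shape[1])]
            else:
                # Group neurons that agree exactly on bias and class-summed weights.
                sig_to_class, classes = {}, []
                for j in range(W.shape[1]):
                    sig = (float(b[j]), tuple(summed[:, j].tolist()))
                    if sig not in sig_to_class:
                        sig_to_class[sig] = len(classes)
                        classes.append([])
                    classes[sig_to_class[sig]].append(j)
            reps = [members[0] for members in classes]  # one representative per class
            red_weights.append(summed[:, reps])
            red_biases.append(b[reps])
            prev_classes = classes
        return red_weights, red_biases

    # Example: the two ReLU hidden neurons below are duplicates, so the hidden
    # layer shrinks to one neuron and its outgoing weight becomes 3 + 4 = 7.
    W1, b1 = np.array([[1.0, 1.0], [2.0, 2.0]]), np.array([0.5, 0.5])
    W2, b2 = np.array([[3.0], [4.0]]), np.array([0.0])
    rw, rb = reduce_network([W1, W2], [b1, b2])

In this exact setting the reduced network is semantically equivalent by construction. The paper's approximate bisimulation can be thought of as relaxing the equality tests to a tolerance and then bounding how far the outputs of the reduced network may deviate from those of the original.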

Author information

Correspondence to Pavithra Prabhakar.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Prabhakar, P. (2022). Bisimulations for Neural Network Reduction. In: Finkbeiner, B., Wies, T. (eds.) Verification, Model Checking, and Abstract Interpretation. VMCAI 2022. Lecture Notes in Computer Science, vol. 13182. Springer, Cham. https://doi.org/10.1007/978-3-030-94583-1_14

  • DOI: https://doi.org/10.1007/978-3-030-94583-1_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-94582-4

  • Online ISBN: 978-3-030-94583-1
