Open- and Closed-Loop Neural Network Verification Using Polynomial Zonotopes

  • Conference paper
NASA Formal Methods (NFM 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13903)

Abstract

We present a novel approach to efficiently compute tight non-convex enclosures of the image through neural networks with ReLU, sigmoid, or hyperbolic tangent activation functions. In particular, we abstract the input-output relation of each neuron by a polynomial approximation, which is evaluated in a set-based manner using polynomial zonotopes. While our approach can also be beneficial for open-loop neural network verification, our main application is reachability analysis of neural network controlled systems, where polynomial zonotopes are able to capture the non-convexity caused by the neural network as well as the system dynamics. This results in superior performance compared to other methods, as we demonstrate on various benchmarks.


Notes

1.

    In contrast to [25, Def. 1], we explicitly do not integrate the constant offset c in G. Moreover, we omit the identifier vector used in [25] for simplicity.

2.

    https://codeocean.com/capsule/8237552/tree/v1.

References

  1. Althoff, M.: Reachability analysis and its application to the safety assessment of autonomous cars. Ph.D. thesis, Technical University of Munich (2010)

  2. Althoff, M.: Reachability analysis of nonlinear systems using conservative polynomialization and non-convex sets. In: Proceedings of the International Conference on Hybrid Systems: Computation and Control, pp. 173–182 (2013)

  3. Althoff, M.: An introduction to CORA 2015. In: Proceedings of the International Workshop on Applied Verification for Continuous and Hybrid Systems, pp. 120–151 (2015)

4. Bak, S., Liu, C., Johnson, T.: The second international verification of neural networks competition (VNN-COMP 2021): summary and results. arXiv:2109.00498 (2021)

  5. Beaufays, F., Abdel-Magid, Y., Widrow, B.: Application of neural networks to load-frequency control in power systems. Neural Netw. 7(1), 183–194 (1994)

  6. Bogomolov, S., et al.: JuliaReach: a toolbox for set-based reachability. In: Proceedings of the International Conference on Hybrid Systems: Computation and Control, pp. 39–44 (2019)

  7. Bunel, R., et al.: Branch and bound for piecewise linear neural network verification. J. Mach. Learn. Res. 21(42) (2020)

  8. Cheng, C.H., Nührenberg, G., Ruess, H.: Maximum resilience of artificial neural networks. In: Proceedings of the International Symposium on Automated Technology for Verification and Analysis, pp. 251–268 (2017)

  9. Christakou, C., Vrettos, S., Stafylopatis, A.: A hybrid movie recommender system based on neural networks. Int. J. Artif. Intell. Tools 16(5), 771–792 (2007)

  10. Clavière, A., et al.: Safety verification of neural network controlled systems. In: Proceedings of the International Conference on Dependable Systems and Networks, pp. 47–54 (2021)

  11. David, O.E., Netanyahu, N.S., Wolf, L.: DeepChess: end-to-end deep neural network for automatic learning in chess. In: Proceedings of the International Conference on Artificial Neural Networks, pp. 88–96 (2016)

  12. Dutta, S., et al.: Learning and verification of feedback control systems using feedforward neural networks. In: Proceedings of the International Conference on Analysis and Design of Hybrid Systems, pp. 151–156 (2018)

13. Dutta, S., et al.: Sherlock - a tool for verification of neural network feedback systems. In: Proceedings of the International Conference on Hybrid Systems: Computation and Control, pp. 262–263 (2019)

  14. Dutta, S., Chen, X., Sankaranarayanan, S.: Reachability analysis for neural feedback systems using regressive polynomial rule inference. In: Proceedings of the International Conference on Hybrid Systems: Computation and Control, pp. 157–168 (2019)

  15. Ehlers, R.: Formal verification of piece-wise linear feed-forward neural networks. In: Proceedings of the International Symposium on Automated Technology for Verification and Analysis, pp. 269–286 (2017)

16. Fan, J., Huang, C., et al.: ReachNN*: a tool for reachability analysis of neural-network controlled systems. In: Proceedings of the International Symposium on Automated Technology for Verification and Analysis, pp. 537–542 (2020)

  17. Goubault, E., Putot, S.: RINO: robust inner and outer approximated reachability of neural networks controlled systems. In: Proceedings of the International Conference on Computer Aided Verification, pp. 511–523 (2022)

  18. Huang, C., et al.: ReachNN: reachability analysis of neural-network controlled systems. Trans. Embed. Comput. Syst. 18(5s) (2019)

  19. Huang, C., et al.: POLAR: A polynomial arithmetic framework for verifying neural-network controlled systems. In: Proceedings of the International Symposium on Automated Technology for Verification and Analysis, pp. 414–430 (2022)

  20. Ivanov, R., et al.: Verisig: verifying safety properties of hybrid systems with neural network controllers. In: Proceedings of the International Conference on Hybrid Systems: Computation and Control, pp. 169–178 (2019)

  21. Ivanov, R., et al.: Verisig 2.0: verification of neural network controllers using Taylor model preconditioning. In: Proceedings of the International Conference on Computer Aided Verification, pp. 249–262 (2021)

  22. Katz, G., et al.: Reluplex: an efficient SMT solver for verifying deep neural networks. In: Proceedings of the International Conference on Computer Aided Verification, pp. 97–117 (2017)

  23. Khan, S., et al.: Facial recognition using convolutional neural networks and implementation on smart glasses. In: Proceedings of the International Conference on Information Science and Communication Technology (2019). Article 19

  24. Khedr, H., Ferlez, J., Shoukry, Y.: PEREGRiNN: penalized-relaxation greedy neural network verifier. In: Proceedings of the International Conference on Computer Aided Verification, pp. 287–300 (2021)

  25. Kochdumper, N., Althoff, M.: Sparse polynomial zonotopes: a novel set representation for reachability analysis. Trans. Autom. Control 66(9), 4043–4058 (2021)

  26. Makino, K., Berz, M.: Taylor models and other validated functional inclusion methods. Int. J. Pure and Appl. Math. 4(4), 379–456 (2003)

  27. Mukherjee, D., et al.: A survey of robot learning strategies for human-robot collaboration in industrial settings. Robot. Comput.-Integr. Manuf. 73 (2022)

28. Müller, M.N., et al.: PRIMA: precise and general neural network certification via multi-neuron convex relaxations. Proc. ACM Program. Lang. 6(POPL) (2022). Article 43

29. Müller, M.N., et al.: The third international verification of neural networks competition (VNN-COMP 2022): summary and results. arXiv:2212.10376 (2022)

  30. Pulina, L., Tacchella, A.: Challenging SMT solvers to verify neural networks. AI Commun. 25(2), 117–135 (2012)

  31. Raghunathan, A., Steinhardt, J., Liang, P.: Semidefinite relaxations for certifying robustness to adversarial examples. In: Proceedings of the International Conference on Neural Information Processing Systems, pp. 10900–10910 (2018)

  32. Riedmiller, M., Montemerlo, M., Dahlkamp, H.: Learning to drive a real car in 20 minutes. In: Proceedings of the International Conference on Frontiers in the Convergence of Bioscience and Information Technologies, pp. 645–650 (2007)

  33. Schilling, C., Forets, M., Guadalupe, S.: Verification of neural-network control systems by integrating Taylor models and zonotopes. In: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 8169–8177 (2022)

  34. Singh, G., et al.: Fast and effective robustness certification. In: Proceedings of the International Conference on Advances in Neural Information Processing Systems (2018)

35. Singh, G., et al.: An abstract domain for certifying neural networks. Proc. ACM Program. Lang. 3(POPL) (2019). Article 41

  36. Singh, G., et al.: Beyond the single neuron convex barrier for neural network certification. In: Proceedings of the International Conference on Advances in Neural Information Processing Systems (2019)

  37. Tjeng, V., Xiao, K.Y., Tedrake, R.: Evaluating robustness of neural networks with mixed integer programming. In: Proceedings of the International Conference on Learning Representations (2019)

  38. Tran, H.D., et al.: Parallelizable reachability analysis algorithms for feed-forward neural networks. In: Proceedings of the International Conference on Formal Methods in Software Engineering, pp. 51–60 (2019)

39. Tran, H.D., et al.: Safety verification of cyber-physical systems with reinforcement learning control. Trans. Embed. Comput. Syst. 18(5s) (2019). Article 105

  40. Tran, H.D., et al.: Star-based reachability analysis of deep neural networks. In: Proceedings of the International Symposium on Formal Methods, pp. 670–686 (2019)

41. Tran, H.D., et al.: NNV: the neural network verification tool for deep neural networks and learning-enabled cyber-physical systems. In: Proceedings of the International Conference on Computer Aided Verification, pp. 3–17 (2020)

42. Vincent, J.A., Schwager, M.: Reachable polyhedral marching (RPM): a safety verification algorithm for robotic systems with deep neural network components. In: Proceedings of the International Conference on Robotics and Automation, pp. 9029–9035 (2021)

  43. Wang, S., et al.: Formal security analysis of neural networks using symbolic intervals. In: Proceedings of the USENIX Security Symposium, pp. 1599–1614 (2018)

  44. Wang, S., et al.: Beta-CROWN: efficient bound propagation with per-neuron split constraints for neural network robustness verification. In: Proceedings of the International Conference on Neural Information Processing Systems (2021)

  45. Weng, L., et al.: Towards fast computation of certified robustness for ReLU networks. In: Proceedings of the International Conference on Machine Learning, pp. 5276–5285 (2018)

  46. Xiang, W., et al.: Reachable set estimation and safety verification for piecewise linear systems with neural network controllers. In: Proceedings of the American Control Conference, pp. 1574–1579 (2018)

  47. Xu, K., et al.: Fast and complete: enabling complete neural network verification with rapid and massively parallel incomplete verifiers. In: Proceedings of the International Conference on Learning Representations (2021)

  48. Yang, X., et al.: Reachability analysis of deep ReLU neural networks using facet-vertex incidence. In: Proceedings of the International Conference on Hybrid Systems: Computation and Control (2021). Article 18

  49. Zhang, H., et al.: Efficient neural network robustness certification with general activation functions. In: Proceedings of the International Conference on Neural Information Processing Systems, pp. 4944–4953 (2018)

Acknowledgements

We gratefully acknowledge the financial support from the project justITSELF funded by the European Research Council (ERC) under grant agreement No 817629, from DIREC - Digital Research Centre Denmark, and from the Villum Investigator Grant S4OS. In addition, this material is based upon work supported by the Air Force Office of Scientific Research and the Office of Naval Research under award numbers FA9550-19-1-0288, FA9550-21-1-0121, FA9550-23-1-0066 and N00014-22-1-2156. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the United States Air Force or the United States Navy.

Author information

Corresponding author

Correspondence to Niklas Kochdumper.

Appendix A

We now provide the proof for Prop. 2. According to Def. 2, the one-dimensional polynomial zonotope \(\mathcal{P}\mathcal{Z}= \langle c,G,G_I,E \rangle _{PZ}\) is defined as

$$\begin{aligned} \begin{aligned} \mathcal{P}\mathcal{Z}&= \bigg \{ c + \underbrace{\sum _{i=1}^h \bigg ( \prod _{k=1}^p \alpha _k ^{E_{(k,i)}} \bigg ) G_{(i)}}_{d(\alpha )} + \underbrace{\sum _{j=1}^{q} \beta _j G_{I(j)}}_{z(\beta )} ~ \bigg | ~ \alpha _k, \beta _j \in [-1,1] \bigg \} \\&= \big \{ c + d(\alpha ) + z(\beta )~\big |~ \alpha ,\beta \in [-\textbf{1},\textbf{1}] \big \}, \end{aligned} \end{aligned}$$
(10)

where \(\alpha = [\alpha _1~\dots ~\alpha _p]^T\) and \(\beta = [\beta _1~\dots ~\beta _q]^T\). To compute the image through the quadratic function \(g(x)\) we require the expressions \(d(\alpha )^2\), \(d(\alpha ) z(\beta )\), and \(z(\beta )^2\), which we derive first. For \(d(\alpha )^2\) we obtain

$$\begin{aligned} \begin{aligned} d(\alpha )^2&= \bigg ( \sum _{i=1}^h \bigg ( \prod _{k=1}^p \alpha _k ^{E_{(k,i)}} \bigg ) G_{(i)} \bigg ) \bigg ( \sum _{j=1}^h \bigg ( \prod _{k=1}^p \alpha _k ^{E_{(k,j)}} \bigg ) G_{(j)} \bigg ) \\&= \sum _{i=1}^h \sum _{j=1}^h \bigg ( \prod _{k=1}^p \alpha _k ^{E_{(k,i)} + E_{(k,j)}} \bigg ) G_{(i)} G_{(j)} \\&= \sum _{i=1}^h \bigg ( \prod _{k=1}^p \alpha _k ^{2 E_{(k,i)}} \bigg ) G_{(i)}^2 + \sum _{i=1}^{h-1} \sum _{j=i+1}^h \bigg ( \prod _{k=1}^p \underbrace{\alpha _k ^{E_{(k,i)} + E_{(k,j)}}}_{\alpha _k^{\widehat{E}_{i(k,j)}}} \bigg ) 2 \underbrace{G_{(i)} G_{(j)}}_{\widehat{G}_{i(j)}} \\&\overset{\begin{array}{c} (9)\\ \vspace{-2pt} \end{array}}{=} \sum _{i=1}^{h(h+1)/2} \bigg ( \prod _{k=1}^p \alpha _k ^{\widehat{ E}_{(k,i)}} \bigg ) \widehat{G}_{(i)}, \end{aligned} \end{aligned}$$
(11)

for \(d(\alpha )z(\beta )\) we obtain

$$\begin{aligned} \begin{aligned} d(\alpha )z(\beta )&= \bigg ( \sum _{i=1}^h \bigg ( \prod _{k=1}^p \alpha _k ^{E_{(k,i)}} \bigg ) G_{(i)} \bigg ) \bigg ( \sum _{j=1}^q \beta _j G_{I(j)} \bigg ) \\&= \sum _{i=1}^h \sum _{j=1}^q \underbrace{\bigg ( \beta _j \prod _{k=1}^p \alpha _k ^{E_{(k,i)}} \bigg )}_{\beta _{q + (i-1)h+j}} G_{(i)} G_{I(j)} \overset{\begin{array}{c} (9) \\ \vspace{-2pt} \end{array}}{=} \sum _{i = 1}^{hq} \beta _{q + i} \, \overline{G}_{(i)}, \end{aligned} \end{aligned}$$
(12)

and for \(z(\beta )^2\) we obtain

$$\begin{aligned} z(\beta )^2&= \bigg ( \sum _{i=1}^q \beta _i G_{I(i)} \bigg ) \bigg ( \sum _{j=1}^q \beta _j G_{I(j)} \bigg ) = \sum _{i=1}^q \sum _{j=1}^q \beta _i \beta _j \, G_{I(i)} G_{I(j)} \nonumber \\&= \sum _{i=1}^q \beta _i^2 G_{I(i)}^2 + \sum _{i=1}^{q-1} \sum _{j=i+1}^q \beta _i \beta _j \, 2 \, G_{I(i)} G_{I(j)} \nonumber \\&= 0.5 \sum _{i=1}^q G_{I(i)}^2 + \sum _{i=1}^q \underbrace{(2 \beta _i^2 - 1)}_{\beta _{(h+1)q+i}} 0.5 \, G_{I(i)}^2 + \sum _{i=1}^{q-1} \sum _{j=i+1}^q \underbrace{\beta _i \beta _j}_{\beta _{a(i,j)}} 2 \, \underbrace{G_{I(i)} G_{I(j)}}_{\check{G}_{i(j)}}\nonumber \\&\overset{\begin{array}{c} (9)\\ \vspace{-2pt} \end{array}}{=} 0.5 \sum _{i=1}^q G_{I(i)}^2 + \sum _{i=1}^{q(q+1)/2} \beta _{(h+1)q + i} \, \check{G}_{(i)}, \end{aligned}$$
(13)

where the function a(i, j) maps indices i, j to a new index:

$$\begin{aligned} a(i,j) = (h+2)q + j-i + \sum _{k=1}^{i-1} (q-k). \end{aligned}$$
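As a quick sanity check, \(a(i,j)\) sends the \(q(q-1)/2\) pairs \(i < j\) to consecutive, distinct indices starting at \((h+2)q + 1\). A minimal sketch (the function name and the chosen values of h and q are ours, not from the paper):

```python
def a(i, j, h, q):
    # index mapping from the proof: pair (i, j) with i < j <= q
    # receives a fresh independent-factor index
    return (h + 2) * q + j - i + sum(q - k for k in range(1, i))

h, q = 3, 5
idx = [a(i, j, h, q) for i in range(1, q) for j in range(i + 1, q + 1)]
# the q*(q-1)/2 pairs map onto consecutive, distinct indices
assert idx == list(range((h + 2) * q + 1, (h + 2) * q + q * (q - 1) // 2 + 1))
```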

In (12) and (13), we replaced the expressions \(\beta _j \prod _{k=1}^p \alpha _k^{E_{(k,i)}}\), \(2\beta _i^2 -1\), and \(\beta _i\beta _j\), which contain polynomial terms of the independent factors \(\beta \), with new independent factors; this yields an enclosure due to the loss of dependency. The substitution is possible since

$$\begin{aligned} \beta _j \prod _{k=1}^p \alpha _k^{E_{(k,i)}} \in [-1,1],~~ 2\beta _i^2 -1 \in [-1,1],~~ \text {and} ~~\beta _i\beta _j \in [-1,1]. \end{aligned}$$

Finally, we obtain for the image

$$\begin{aligned} \begin{aligned}&\big \{ g(x)~\big |~ x \in \mathcal{P}\mathcal{Z} \big \} = \big \{a_1\,x^2 + a_2\,x + a_3~ \big | ~ x \in \mathcal{P}\mathcal{Z} \big \}\\&~ \\ \overset{\begin{array}{c} (10) \\ \vspace{-2pt} \end{array}}{=}&\big \{ a_1( c + d(\alpha ) + z(\beta ))^2 + a_2(c + d(\alpha ) + z(\beta )) + a_3~\big | ~ \alpha ,\beta \in [-\textbf{1},\textbf{1}] \big \}\\&~ \\ =&\big \{ a_1c^2 + a_2c + a_3+ (2 a_1c + a_2) d(\alpha ) + a_1d(\alpha )^2\\&~~ + (2 a_1c + a_2) z(\beta ) + 2 a_1d(\alpha )z(\beta ) + a_1z(\beta )^2 ~ \big | ~ \alpha ,\beta \in [-\textbf{1},\textbf{1}] \big \}\\&~ \\ \overset{\begin{array}{c} (11),(12),(13)\\ \vspace{-2pt} \end{array}}{\subseteq }&\bigg \langle a_1c^2 + a_2c + a_3+ 0.5 \, a_1\sum _{i=1}^q G_{I(i)}^2, \big [(2 a_1c + a_2)G ~~ a_1\widehat{G} \big ], \\&~~~~~~~~~~~~~~~~~~~~~ \big [ (2 a_1c + a_2)G_I ~~ 2a_1\overline{G} ~~a_1\check{G} \big ], \big [E ~~ \widehat{E} \big ] \bigg \rangle _{PZ} \overset{\begin{array}{c} (8) \\ \vspace{-2pt} \end{array}}{=} \langle c_q,G_q,G_{I,q},E_q \rangle _{PZ}, \end{aligned} \end{aligned}$$

which concludes the proof.
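To make the construction concrete, the following sketch assembles the enclosure \(\langle c_q, G_q, G_{I,q}, E_q \rangle _{PZ}\) from the proof for a one-dimensional polynomial zonotope. All names are ours; exponent vectors are stored as one list per dependent generator (the columns of E), and the monomial compaction from (8)/(9) is omitted, which is sound but may leave duplicate exponent vectors.

```python
def quad_image_pz(c, G, GI, E, a1, a2, a3):
    """Enclose {a1*x^2 + a2*x + a3 | x in <c, G, GI, E>_PZ} for a
    one-dimensional polynomial zonotope, following Eqs. (10)-(13)."""
    h, q = len(G), len(GI)
    # dependent part: (2*a1*c + a2)*d(alpha) plus a1*d(alpha)^2  (Eq. 11)
    Gq = [(2 * a1 * c + a2) * g for g in G]
    Eq = [list(e) for e in E]
    for i in range(h):                      # squared terms G_i^2
        Gq.append(a1 * G[i] * G[i])
        Eq.append([2 * ei for ei in E[i]])
    for i in range(h):                      # cross terms 2*G_i*G_j
        for j in range(i + 1, h):
            Gq.append(a1 * 2 * G[i] * G[j])
            Eq.append([E[i][k] + E[j][k] for k in range(len(E[i]))])
    # independent part: Eqs. (12) and (13), with the polynomial terms in
    # beta substituted by new independent factors
    GIq = [(2 * a1 * c + a2) * gi for gi in GI]
    GIq += [2 * a1 * G[i] * GI[j] for i in range(h) for j in range(q)]
    GIq += [a1 * 0.5 * gi * gi for gi in GI]          # from 2*beta_i^2 - 1
    GIq += [a1 * 2 * GI[i] * GI[j]                    # from beta_i*beta_j
            for i in range(q) for j in range(i + 1, q)]
    # constant offset from Eq. (13) folded into the new center
    cq = a1 * c * c + a2 * c + a3 + 0.5 * a1 * sum(gi * gi for gi in GI)
    return cq, Gq, GIq, Eq

# example: enclose {x^2 | x in <1, [1 2], [0.5], [[1],[2]]>_PZ}
cq, Gq, GIq, Eq = quad_image_pz(1.0, [1.0, 2.0], [0.5], [[1], [2]], 1.0, 0.0, 0.0)
# cq == 1.125, with h + h + h(h-1)/2 = 5 dependent generators
```

Sampling x from the original set and checking \(a_1x^2 + a_2x + a_3\) against the interval hull of the returned set (center plus/minus the sum of absolute generator values) confirms the enclosure property on this example.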

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Kochdumper, N., Schilling, C., Althoff, M., Bak, S. (2023). Open- and Closed-Loop Neural Network Verification Using Polynomial Zonotopes. In: Rozier, K.Y., Chaudhuri, S. (eds) NASA Formal Methods. NFM 2023. Lecture Notes in Computer Science, vol 13903. Springer, Cham. https://doi.org/10.1007/978-3-031-33170-1_2

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-33170-1_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-33169-5

  • Online ISBN: 978-3-031-33170-1

  • eBook Packages: Computer Science, Computer Science (R0)
