
Model-Agnostic Reachability Analysis on Deep Neural Networks

  • Conference paper
Advances in Knowledge Discovery and Data Mining (PAKDD 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13935)

Abstract

Verification plays an essential role in the formal analysis of safety-critical systems. Most current verification methods have specific requirements when working on Deep Neural Networks (DNNs): they either target one particular network category, e.g., Feedforward Neural Networks (FNNs), or networks with specific activation functions, e.g., ReLU. In this paper, we develop a model-agnostic verification framework, called DeepAgn, and show that it can be applied to FNNs, Recurrent Neural Networks (RNNs), or a mixture of both. Under the assumption of Lipschitz continuity, DeepAgn analyses the reachability of DNNs based on a novel optimisation scheme with a global convergence guarantee. It does not require access to the network’s internal structures, such as layers and parameters. Through reachability analysis, DeepAgn can tackle several well-known robustness problems, including computing the maximum safe radius for a given input and generating the ground-truth adversarial example. We also empirically demonstrate that DeepAgn handles a broader class of deep neural networks, including both FNNs and RNNs with very deep layers and millions of neurons, with greater capability and efficiency than other state-of-the-art verification approaches. Our tool is available at https://github.com/TrustAI/DeepAgn.
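
To make the core idea concrete, below is a minimal sketch, not the authors' implementation, of the Lipschitz-based black-box principle behind DeepAgn: given only query access to a scalar network output f (for example, the confidence of one class along a line through the input ball) and a valid Lipschitz constant K, a Piyavskii-Shubert-style global optimisation brackets the reachable minimum of f to within any accuracy eps, which is the kind of global convergence guarantee the paper relies on. DeepAgn itself uses an adaptive nested scheme over multi-dimensional inputs; this one-dimensional Python sketch, with illustrative names such as lipschitz_min, only shows the convergence mechanism.

    import heapq

    def lipschitz_min(f, a, b, K, eps=1e-3, max_evals=10000):
        """Bracket min f on [a, b] to within eps, assuming
        |f(x) - f(y)| <= K * |x - y| (Lipschitz continuity).
        Returns (upper bound, lower bound) on the global minimum."""
        fa, fb = f(a), f(b)
        best = min(fa, fb)  # tightest upper bound on the minimum so far

        # On [l, r], the Lipschitz underestimator
        # max(fl - K*(x - l), fr - K*(r - x)) attains its minimum value
        # (fl + fr - K*(r - l)) / 2 at x = (l + r)/2 + (fl - fr)/(2K).
        def lower_bound(l, fl, r, fr):
            return 0.5 * (fl + fr - K * (r - l))

        # Min-heap of subintervals keyed by their lower bound.
        heap = [(lower_bound(a, fa, b, fb), a, fa, b, fb)]
        evals = 2
        while heap and evals < max_evals:
            lb, l, fl, r, fr = heapq.heappop(heap)
            if best - lb <= eps:
                return best, lb  # global minimum bracketed within eps
            # Evaluate f where the underestimator is lowest, then split.
            m = 0.5 * (l + r) + (fl - fr) / (2.0 * K)
            fm = f(m)
            evals += 1
            best = min(best, fm)
            heapq.heappush(heap, (lower_bound(l, fl, m, fm), l, fl, m, fm))
            heapq.heappush(heap, (lower_bound(m, fm, r, fr), m, fm, r, fr))
        return best, (heap[0][0] if heap else best)

    # Example: min of a black-box Lipschitz function on [-1, 1].
    ub, lb = lipschitz_min(lambda x: abs(x - 0.3) + 0.1, -1.0, 1.0, K=1.0)

Maximising f (the other end of the reachable interval) amounts to running the same routine on -f, and a maximum safe radius could then be approached by bisecting on the perturbation radius while the lower bound on the classification margin stays positive; again, this is an illustrative reading of the approach, not the paper's algorithm.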

Author information

Corresponding author

Correspondence to Wenjie Ruan.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 903 KB)

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zhang, C., Ruan, W., Wang, F., Xu, P., Min, G., Huang, X. (2023). Model-Agnostic Reachability Analysis on Deep Neural Networks. In: Kashima, H., Ide, T., Peng, W.C. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2023. Lecture Notes in Computer Science, vol. 13935. Springer, Cham. https://doi.org/10.1007/978-3-031-33374-3_27

  • DOI: https://doi.org/10.1007/978-3-031-33374-3_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-33373-6

  • Online ISBN: 978-3-031-33374-3

  • eBook Packages: Computer Science, Computer Science (R0)
