Abstract
Neural networks are widely used in safety-critical systems, yet such systems remain at risk because of adversarial examples. Formal verification can ensure the security of neural networks to some extent; however, since neural network verification is NP-hard, existing verification algorithms still do not scale to large networks. To address this, we propose TBFV-INN, a new framework for verifying and improving neural networks. First, we present a testing-based neural network pruning algorithm that executes each test case on the network and records the execution path it activates. Second, based on these paths, the pruning algorithm divides the original network into several sub-networks. Finally, each sub-network is checked with a verification algorithm, and a data set is constructed to retrain the network, thereby ensuring that every sub-network is reliable on a particular input-output interval. A case study demonstrates the feasibility of the framework.
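To make the "execution path" notion concrete: in a ReLU network, the set of neurons that fire for a given input identifies the linear region that input falls into, and test cases sharing an activation pattern exercise the same region. Below is a minimal NumPy sketch of this idea, written for illustration only and not taken from the paper; the function names (`forward_with_path`, `group_by_path`) and the representation of a path as a boolean activation mask are our assumptions, not TBFV-INN's actual algorithm.

```python
import numpy as np

def forward_with_path(weights, biases, x):
    """Run a ReLU feed-forward network on input x, recording per hidden
    layer which neurons fire (a boolean mask). In a piecewise-linear
    network, this activation pattern is the execution path of x."""
    path, a = [], x
    for W, b in zip(weights[:-1], biases[:-1]):
        z = W @ a + b
        active = z > 0                   # which ReLU units are active
        path.append(active)
        a = np.where(active, z, 0.0)     # apply ReLU
    out = weights[-1] @ a + biases[-1]   # linear output layer
    return out, path

def group_by_path(weights, biases, test_cases):
    """Group test inputs by activation pattern: inputs sharing a path
    lie in the same linear region and would map to the same sub-network."""
    groups = {}
    for x in test_cases:
        _, path = forward_with_path(weights, biases, x)
        key = tuple(bool(v) for v in np.concatenate(path))
        groups.setdefault(key, []).append(x)
    return groups
```

Grouping test cases by this key is one plausible way to derive the sub-networks the abstract describes; the paper's actual pruning and partitioning procedure may differ in its details.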
Acknowledgements
This work was supported by JST SPRING, Grant Number JPMJSP2132.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Liu, H., Liu, S., Liu, A., Fang, D., Xu, G. (2023). Verifying and Improving Neural Networks Using Testing-Based Formal Verification. In: Liu, S., Duan, Z., Liu, A. (eds) Structured Object-Oriented Formal Language and Method. SOFL+MSVL 2022. Lecture Notes in Computer Science, vol 13854. Springer, Cham. https://doi.org/10.1007/978-3-031-29476-1_11