
DeepCert: Verification of Contextually Relevant Robustness for Neural Network Image Classifiers

  • Conference paper
  • Computer Safety, Reliability, and Security (SAFECOMP 2021)

Part of the book series: Lecture Notes in Computer Science, volume 12852

Abstract

We introduce DeepCert, a tool-supported method for verifying the robustness of deep neural network (DNN) image classifiers to contextually relevant perturbations such as blur, haze, and changes in image contrast. While the robustness of DNN classifiers has been the subject of intense research in recent years, the solutions delivered by this research focus on verifying DNN robustness to small perturbations in the images being classified, with perturbation magnitude measured using established \(L_p\) norms. This is useful for identifying potential adversarial attacks on DNN image classifiers, but cannot verify DNN robustness to contextually relevant image perturbations, which are typically not small when expressed with \(L_p\) norms. DeepCert addresses this underexplored verification problem by supporting: (1) the encoding of real-world image perturbations; (2) the systematic evaluation of contextually relevant DNN robustness, using both testing and formal verification; (3) the generation of contextually relevant counterexamples; and, through these, (4) the selection of DNN image classifiers suitable for the operational context (i) envisaged when a potentially safety-critical system is designed, or (ii) observed by a deployed system. We demonstrate the effectiveness of DeepCert by showing how it can be used to verify the robustness of DNN image classifiers built for two benchmark datasets (‘German Traffic Sign’ and ‘CIFAR-10’) to multiple contextually relevant perturbations.
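To make the idea of encoding real-world perturbations concrete, the sketch below shows, in NumPy, one common way such perturbations can be parameterised, together with a test-based sweep over the perturbation magnitude ε. This is a minimal illustration of the general technique, not the DeepCert implementation: haze is modelled as a convex blend of the image with a uniform haze colour, contrast reduction as scaling pixel values towards a midpoint, and the classifier wrapper predict_label is a hypothetical placeholder.

    import numpy as np

    def apply_haze(image, epsilon, haze_colour=1.0):
        # Convex blend of the image with a uniform haze colour.
        # epsilon = 0 leaves the image unchanged; epsilon = 1 gives pure haze.
        # Pixel values are assumed to lie in [0, 1].
        return (1.0 - epsilon) * image + epsilon * haze_colour

    def apply_contrast(image, epsilon, midpoint=0.5):
        # Reduce contrast by pulling pixel values towards a midpoint;
        # epsilon = 1 collapses the image to uniform grey.
        return midpoint + (1.0 - epsilon) * (image - midpoint)

    def max_tolerated_epsilon(predict_label, image, true_label, perturb, steps=100):
        # Test-based estimate of the largest perturbation magnitude for
        # which the classifier still returns the true label.
        # predict_label: hypothetical wrapper mapping an image array to a class index.
        tolerated = 0.0
        for epsilon in np.linspace(0.0, 1.0, steps + 1):
            if predict_label(perturb(image, epsilon)) != true_label:
                break  # label flipped at this magnitude; stop the sweep
            tolerated = epsilon
        return tolerated

A formal-verification counterpart would replace this point-wise sweep with verifier queries over continuous ranges of ε (for example, by encoding the blended pixel values as linear constraints for a neural-network verifier such as Marabou), which is what distinguishes verified robustness bounds from the tested estimate above.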


Keywords

Deep neural network · Contextual robustness


Acknowledgements

This research has received funding from the Assuring Autonomy International Programme project ‘Assurance of Deep-Learning AI Techniques’ and the UKRI project EP/V026747/1 ‘Trustworthy Autonomous Systems Node in Resilience’.

Author information


Correspondence to Colin Paterson.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Paterson, C., Wu, H., Grese, J., Calinescu, R., Păsăreanu, C.S., Barrett, C. (2021). DeepCert: Verification of Contextually Relevant Robustness for Neural Network Image Classifiers. In: Habli, I., Sujan, M., Bitsch, F. (eds) Computer Safety, Reliability, and Security. SAFECOMP 2021. Lecture Notes in Computer Science, vol 12852. Springer, Cham. https://doi.org/10.1007/978-3-030-83903-1_5


  • DOI: https://doi.org/10.1007/978-3-030-83903-1_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-83902-4

  • Online ISBN: 978-3-030-83903-1

  • eBook Packages: Computer Science, Computer Science (R0)
