RoMA: A Method for Neural Network Robustness Measurement and Assessment

  • Conference paper
  • Published in: Neural Information Processing (ICONIP 2022)
  • Part of the book series: Communications in Computer and Information Science (CCIS, volume 1791)

Abstract

Neural network models have become the leading solution for a wide variety of tasks, such as classification and natural language processing. However, their reliability is undermined by adversarial inputs: inputs generated by adding tiny perturbations to correctly classified inputs, for which the neural network produces erroneous results. In this paper, we present a new method called Robustness Measurement and Assessment (RoMA), which measures the robustness of a neural network model against such adversarial inputs. Specifically, RoMA determines the probability that a random input perturbation causes a misclassification. The method allows us to provide formal guarantees regarding the expected frequency of errors that a trained model will encounter after deployment. The type of robustness assessment afforded by RoMA is inspired by state-of-the-art certification practices and could constitute an important step toward integrating neural networks into safety-critical systems.
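
The abstract characterizes robustness probabilistically: the chance that a randomly perturbed input is misclassified. As a rough illustration of what estimating such a quantity can look like, the sketch below performs plain Monte Carlo sampling of bounded perturbations and reports the empirical error rate; it is not the RoMA algorithm from the paper, and the names (`model`, `x`, `epsilon`, `n_samples`) are hypothetical placeholders.

```python
import numpy as np

def estimate_misclassification_probability(model, x, true_label,
                                           epsilon=0.04, n_samples=10_000,
                                           seed=0):
    """Estimate the probability that a random perturbation of `x`,
    drawn uniformly from an L-infinity ball of radius `epsilon`,
    is misclassified by `model`.

    `model` is any callable mapping a batch of inputs to class scores.
    This is a generic Monte Carlo sketch, not the method from the paper.
    """
    rng = np.random.default_rng(seed)
    errors = 0
    for _ in range(n_samples):
        # Draw a random perturbation inside the allowed radius.
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        perturbed = np.clip(x + delta, 0.0, 1.0)  # keep inputs in a valid range
        predicted = int(np.argmax(model(perturbed[None, ...])))
        if predicted != true_label:
            errors += 1
    # The empirical error rate estimates the misclassification probability.
    return errors / n_samples
```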


Notes

  1. The term confidence is sometimes used to represent the reliability of the DNN as a whole; this is not our intention here.


Acknowledgments

We thank Dr. Pavel Grabov from Tel Aviv University for his valuable comments and support.

Author information

Corresponding author: Natan Levy.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Levy, N., Katz, G. (2023). RoMA: A Method for Neural Network Robustness Measurement and Assessment. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Communications in Computer and Information Science, vol 1791. Springer, Singapore. https://doi.org/10.1007/978-981-99-1639-9_8

  • DOI: https://doi.org/10.1007/978-981-99-1639-9_8

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-1638-2

  • Online ISBN: 978-981-99-1639-9

  • eBook Packages: Computer Science, Computer Science (R0)
