HashC: Making DNNs’ Coverage Testing Finer and Faster

  • Conference paper
Dependable Software Engineering. Theories, Tools, and Applications (SETTA 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13649)


Abstract

Though Deep Neural Networks (DNNs) have been widely deployed and have achieved great success in many domains, they raise severe safety and reliability concerns. To provide testing evidence for DNNs' reliable behaviors, various coverage testing techniques inspired by traditional software testing have been proposed. However, the coverage criteria in these techniques are either not fine enough to capture subtle behaviors of DNNs or too time-consuming to be applied to large-scale DNNs. In this paper, we develop a coverage testing framework named HashC, which makes mainstream coverage criteria (e.g., NC and KMNC) much finer. Meanwhile, HashC reduces the time complexity of combinatorial coverage testing from polynomial time to linear time. Our experiments show that (1) the HashC criteria are finer than existing mainstream coverage criteria, and (2) HashC greatly accelerates combinatorial coverage testing and can handle the testing of large-scale DNNs.
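To make the complexity claim above concrete, the sketch below shows one way hash-based coverage bookkeeping can work (a minimal illustration under assumed details: the binary thresholding, the per-layer view, and all function names are ours, not the authors' exact construction). Each test input's activation pattern is reduced to a single SHA-1 digest (the hash function also mentioned in note 2), and coverage is the number of distinct digests, so the per-input cost is linear in the number of neurons rather than requiring the enumeration of neuron combinations.

```python
import hashlib

import numpy as np


def activation_signature(activations, threshold=0.0):
    """Reduce one input's activation vector to a single SHA-1 digest.

    The binary thresholding below is an illustrative abstraction of an
    activation pattern, not the paper's exact encoding.
    """
    pattern = (np.asarray(activations) > threshold).astype(np.uint8)  # one O(n) pass over the neurons
    return hashlib.sha1(pattern.tobytes()).hexdigest()


def hashed_coverage(batch_activations, threshold=0.0):
    """Count distinct activation patterns in a test suite via hashing.

    Each input contributes one set insertion, so the total cost is linear
    in (#inputs x #neurons) instead of enumerating neuron combinations.
    """
    return len({activation_signature(a, threshold) for a in batch_activations})


# Toy example: three inputs over a 4-neuron layer; the first two share a pattern.
acts = [[0.2, -1.0, 0.5, 0.0],
        [0.3, -0.4, 0.9, -0.1],   # same sign pattern as the first input
        [-0.3, 0.7, 0.1, -0.2]]
print(hashed_coverage(acts))      # -> 2 distinct hashed patterns
```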


Notes

  1. \(\binom{n}{k}\) denotes \(\frac{n!}{k!(n-k)!}\), the number of \(k\)-combinations of \(n\) elements; a worked instance of the resulting growth is given after these notes.

  2. The hash function SHA-1 is used in this paper because it is computationally cheaper than other cryptographic hash functions.

  3. Due to limited space, we only show the coverage scores of \(i\)-MNISTs (\(i = 1000, 2000, 4000, 6000, 8000, 10000\)) in this paper.
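For a concrete sense of this growth (an illustrative calculation of ours, not a figure reported in the paper): pairwise (\(t = 2\)) combinatorial coverage over a layer with \(n = 100\) neurons already has to track \(\binom{100}{2} = \frac{100 \cdot 99}{2} = 4950\) neuron pairs, and \(t\)-way coverage grows as \(O(n^t)\) in general, whereas the hash-based bookkeeping sketched after the abstract touches each input's \(n\) activations only once, i.e., in linear time.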


Acknowledgement

This research was sponsored by the National Natural Science Foundation of China under Grant No. 62172019 and the CCF-Huawei Formal Verification Innovation Research Plan.

Author information

Corresponding author

Correspondence to Meng Sun.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Sun, W., Xue, X., Lu, Y., Sun, M. (2022). HashC: Making DNNs’ Coverage Testing Finer and Faster. In: Dong, W., Talpin, JP. (eds) Dependable Software Engineering. Theories, Tools, and Applications. SETTA 2022. Lecture Notes in Computer Science, vol 13649. Springer, Cham. https://doi.org/10.1007/978-3-031-21213-0_1


  • DOI: https://doi.org/10.1007/978-3-031-21213-0_1


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21212-3

  • Online ISBN: 978-3-031-21213-0

  • eBook Packages: Computer Science, Computer Science (R0)
