5W+1H Static Analysis Report Quality Measure

  • Maxim Menshchikov
  • Timur Lepikhin
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 779)

Abstract

Modern development best practices rank static analysis high among quality assurance methods. Static analyzers point out errors and help improve software quality. However, the quality of the reports themselves is rarely evaluated, if at all. In this paper we generalize analyzer output messages and explore ways to improve the reliability of comparison results. We introduce informational value as a measure of report quality with respect to the 5W questions (What, When, Where, Who, Why) and the 1H question (How To Fix), formulate and verify a hypothesis about its independence from generic quality measures, suggest a methodology for incorporating it into static analysis benchmarking, and present observations from testing that may help tool developers choose a direction towards more understandable reports.
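
As a rough illustration of the idea only (the abstract does not give the paper's concrete formula, so the equal weighting and field names below are assumptions, not the authors' definition), one might score a report by the fraction of 5W+1H questions it answers:

# Illustrative sketch: scores a static analysis report by how many of the
# 5W+1H questions it answers. The dictionary keys and the equal weighting
# are hypothetical; the paper's actual metric may differ.

QUESTIONS = ("what", "when", "where", "who", "why", "how_to_fix")

def informational_value(report: dict) -> float:
    """Fraction of 5W+1H questions the report answers (0.0 to 1.0)."""
    answered = sum(1 for q in QUESTIONS if report.get(q))
    return answered / len(QUESTIONS)

# Example: a report naming the defect (What), its location (Where), and a
# remediation hint (How To Fix) answers 3 of 6 questions.
report = {
    "what": "NULL pointer dereference",
    "where": "parser.c:120",
    "how_to_fix": "check the return value of malloc() before use",
}
print(informational_value(report))  # 0.5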

Keywords

Static analysis · Report quality measure · Understandability of reports · Informational value

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. Saint Petersburg State University, St. Petersburg, Russia
