Comparing Bug Finding Tools with Reviews and Tests

  • Stefan Wagner
  • Jan Jürjens
  • Claudia Koller
  • Peter Trischberger
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3502)


Bug finding tools can find defects in software source code using automated static analysis. This automation may reduce the time spent on other testing and review activities. To exploit it, we need a clear understanding of how the defects found by bug finding tools relate to the defects found by other techniques. This paper describes a case study, based on several projects mainly from an industrial environment, that analyses these interrelationships. The main finding is that the bug finding tools predominantly find defects different from those found by testing, but a subset of the defects found by reviews. The defect types that the tools can detect are analysed in more detail. A combination of the techniques is therefore most advisable, provided that the high number of false positives produced by the tools can be tolerated.
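As a hedged illustration, not drawn from the paper's data, the following sketch shows the kind of defect pattern that static bug finding tools such as FindBugs or PMD typically report in Java code: comparing strings with `==`, which tests reference identity rather than content equality. The class and method names are hypothetical.

```java
// Hypothetical example of a bug pattern flagged by static analysis tools.
public class DefectExample {

    // Defect: '==' on String compares object references, not content.
    // Static bug finders report this pattern without running the code.
    static boolean sameLabelBuggy(String a, String b) {
        return a == b; // flagged: use equals() instead
    }

    // Corrected version using content equality.
    static boolean sameLabelFixed(String a, String b) {
        return a != null && a.equals(b);
    }

    public static void main(String[] args) {
        // Two distinct String objects with equal content.
        String x = new String("ok");
        String y = new String("ok");
        System.out.println(sameLabelBuggy(x, y)); // false: different objects
        System.out.println(sameLabelFixed(x, y)); // true: equal content
    }
}
```

Because such patterns are detected purely from the source text, the tool may also flag intentional identity comparisons, which is one source of the false positives the study discusses.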


Model Check · True Positive · Defect Type · Error Handling · Real Defect
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© IFIP International Federation for Information Processing 2005

Authors and Affiliations

  • Stefan Wagner (1)
  • Jan Jürjens (1)
  • Claudia Koller (1)
  • Peter Trischberger (2)
  1. Institut für Informatik, Technische Universität München, Garching, Germany
  2. O2 Germany, Munich, Germany
