Argus: Online Statistical Bug Detection

  • Long Fei
  • Kyungwoo Lee
  • Fei Li
  • Samuel P. Midkiff
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3922)


Statistical debugging is a powerful technique for identifying bugs that do not violate programming rules or program invariants. Previously known statistical debugging techniques are offline bug isolation (or localization) techniques: the program dumps data during its execution, and an offline statistical analysis then discovers differences between passing and failing executions. These differences identify potential bug sites. Offline techniques suffer from three limitations: (i) a large number of executions is needed to provide data, (ii) each execution must be labelled as passing or failing, and (iii) they are postmortem techniques and therefore cannot raise an alert at runtime when a bug symptom occurs. In this paper, we present an online statistical bug detection tool called Argus. Argus constructs statistics at runtime using a sliding window over the program execution, is capable of detecting bugs in a single execution, and can raise an alert at runtime when bug symptoms occur. Moreover, it eliminates the requirement of labelling all executions as passing or failing. We present experimental results on the Siemens bug benchmark showing that Argus detects 102 out of 130 bugs. We also introduce optimization techniques that greatly improve Argus's detection power and control the false alarm rate when only a small number of executions are available. Argus generates more precise bug reports than the best known bug localization techniques.
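The paper itself defines Argus's exact statistics; as a rough illustration of the general idea the abstract describes — maintaining a sliding window of recent observations at a program point and alerting online when a new observation deviates sharply from the window's statistics — the following sketch may help. The class name, the window size, the warm-up count, and the 4-sigma threshold are all illustrative assumptions, not Argus's actual design or parameters.

```python
from collections import deque
import math

class SlidingWindowDetector:
    """Illustrative online anomaly detector (not Argus's algorithm):
    keeps a sliding window of recent observations at one program point
    and flags values that deviate sharply from the window's statistics."""

    def __init__(self, window_size=100, threshold=4.0):
        # deque with maxlen evicts the oldest observation automatically,
        # giving a sliding window over the execution
        self.window = deque(maxlen=window_size)
        self.threshold = threshold  # alert if |z-score| exceeds this

    def observe(self, value):
        """Check `value` against the current window, then add it.
        Returns True when an alert should be raised at runtime."""
        alert = False
        if len(self.window) >= 30:  # warm-up: need enough samples first
            n = len(self.window)
            mean = sum(self.window) / n
            var = sum((x - mean) ** 2 for x in self.window) / n
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.threshold:
                alert = True
        self.window.append(value)
        return alert
```

In this sketch, one detector would be attached to each monitored program point (e.g. a loop trip count or a variable's value), so a single failing execution can trigger an alert the moment its behavior diverges from the recent window — no corpus of labelled passing/failing runs is required.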


Keywords: False alarm · False alarm rate · Runtime overhead · Programming language design · Single execution



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Long Fei (1)
  • Kyungwoo Lee (1)
  • Fei Li (2)
  • Samuel P. Midkiff (1)

  1. School of Electrical and Computer Engineering, Purdue University, West Lafayette, USA
  2. School of Industrial Engineering, Purdue University, West Lafayette, USA
