Eclat: Automatic Generation and Classification of Test Inputs

  • Carlos Pacheco
  • Michael D. Ernst
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3586)

Abstract

This paper describes a technique that selects, from a large set of test inputs, a small subset likely to reveal faults in the software under test. The technique takes a program or software component, plus a set of correct executions, for example observations of the software running properly, or an existing test suite that a user wishes to enhance. From these correct executions, the technique first infers an operational model of the software's behavior. Inputs whose operational pattern of execution differs from the model in specific ways are then flagged as suggestive of faults. The flagged inputs are further reduced by selecting only one input per operational pattern. The result is a small portion of the original inputs, those the technique deems most likely to reveal faults. The technique can thus also be seen as an error-detection technique.
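As a rough illustration of the selection step, the sketch below (not Eclat's actual algorithm; the `square` method and the range-based model are invented for the example) infers a trivial operational model, namely the observed range of a method's return value on correct runs, flags candidate inputs whose behavior falls outside that range, and keeps only one input per distinct violation pattern:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: a minimal stand-in for model-based input
// selection. The "operational model" here is just the min/max of the
// return value observed on correct runs.
public class SelectionSketch {
    // Hypothetical method under test.
    public static int square(int x) { return x * x; }

    // Returns one suspicious candidate input per violation pattern.
    public static Map<String, Integer> selectSuspicious(int[] correctInputs,
                                                        int[] candidates) {
        // 1. Infer the model from the correct executions.
        int min = Integer.MAX_VALUE, max = Integer.MIN_VALUE;
        for (int x : correctInputs) {
            int r = square(x);
            min = Math.min(min, r);
            max = Math.max(max, r);
        }
        // 2. Classify candidates: a result outside [min, max] deviates
        //    from the model and is flagged as suggestive of a fault.
        Map<String, Integer> oneKeptPerPattern = new LinkedHashMap<>();
        for (int x : candidates) {
            int r = square(x);
            String pattern = (r < min ? "below-min" : "")
                           + (r > max ? "above-max" : "");
            // 3. Keep only the first candidate exhibiting each pattern.
            if (!pattern.isEmpty()) oneKeptPerPattern.putIfAbsent(pattern, x);
        }
        return oneKeptPerPattern;
    }

    public static void main(String[] args) {
        // Model learned from {1,2,3,4} covers results [1, 16]; of the
        // candidates, 100 and 200 exceed it, and only one is kept.
        System.out.println(selectSuspicious(new int[] {1, 2, 3, 4},
                                            new int[] {2, 100, 200, 3}));
        // prints {above-max=100}
    }
}
```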

The paper describes two additional techniques that complement test input selection. One is a technique for automatically producing an oracle (a set of assertions) for a test input from the operational model, thus transforming the test input into a test case. The other is a classification-guided test input generation technique that also makes use of operational models and patterns. When generating inputs, it filters out code sequences that are unlikely to contribute to legal inputs, improving the efficiency of its search for fault-revealing inputs.
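The filtering idea behind the generation technique can be sketched as follows (an invented miniature, not Eclat's implementation): call sequences are grown incrementally, and any extension that already behaves illegally, here modeled as popping an empty stack, is discarded so that only legal prefixes are reused as building blocks:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: grow call sequences one operation at a
// time, discarding extensions that behave illegally, so the search
// spends its effort on sequences that can still become legal inputs.
public class GenerationSketch {
    // Extend every legal prefix with every operation, keeping only the
    // extensions that still execute legally.
    public static List<List<String>> grow(List<List<String>> legalPrefixes,
                                          String[] ops) {
        List<List<String>> next = new ArrayList<>();
        for (List<String> prefix : legalPrefixes) {
            for (String op : ops) {
                List<String> candidate = new ArrayList<>(prefix);
                candidate.add(op);
                // Discard candidates that already behave illegally.
                if (executesLegally(candidate)) next.add(candidate);
            }
        }
        return next;
    }

    // Replays a sequence of "push"/"pop" operations against a counter
    // model of a stack; popping an empty stack is the illegal behavior.
    public static boolean executesLegally(List<String> seq) {
        int depth = 0;
        for (String op : seq) {
            if (op.equals("push")) depth++;
            else if (--depth < 0) return false;  // pop on empty stack
        }
        return true;
    }

    public static void main(String[] args) {
        List<List<String>> seqs = new ArrayList<>();
        seqs.add(new ArrayList<>());  // start from the empty sequence
        for (int round = 0; round < 3; round++) {
            seqs = grow(seqs, new String[] {"push", "pop"});
            System.out.println("round " + round + ": " + seqs.size() + " legal sequences");
        }
    }
}
```

With the pruning in place, the pool holds 1, then 2, then 3 legal sequences over the three rounds, rather than the 2, 4, and 8 that unfiltered enumeration would produce.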

We have implemented these techniques in the Eclat tool, which generates unit tests for Java classes. Eclat's input is a set of classes to test and an example program execution, such as a passing test suite. Eclat's output is a set of JUnit test cases, each containing a potentially fault-revealing input and a set of assertions, at least one of which fails. In our experiments, Eclat successfully generated inputs that exposed fault-revealing behavior, and we have used it to reveal real errors in programs. The inputs it selects as fault-revealing are an order of magnitude more likely to reveal a fault than the full set of generated inputs.
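For a sense of the output's shape only, a generated test might look like the hypothetical example below; `BoundedStack` is an invented class rather than one from the paper's experiments, and plain boolean checks stand in for JUnit's assertion methods. On a genuinely fault-revealing input, at least one model-derived check would fail:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical illustration of the shape of a generated test: a short
// input (a sequence of calls) followed by oracle checks derived from
// the operational model.
public class EclatStyleTest {
    // Invented stand-in for a class under test.
    static class BoundedStack {
        private final Deque<Integer> items = new ArrayDeque<>();
        void push(int x) { items.push(x); }
        int pop() { return items.pop(); }
        int size() { return items.size(); }
    }

    // The generated test case: construct the input, then check
    // model-derived properties of the resulting state.
    public static boolean testPushThenPop() {
        BoundedStack s = new BoundedStack();
        s.push(7);                            // the generated input
        int popped = s.pop();
        return popped == 7 && s.size() == 0;  // the generated oracle
    }

    public static void main(String[] args) {
        System.out.println(testPushThenPop() ? "pass" : "fail");
    }
}
```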



Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Carlos Pacheco
  • Michael D. Ernst

  MIT Computer Science and Artificial Intelligence Lab, The Stata Center, Cambridge, USA
