Mutually Enhancing Test Generation and Specification Inference

  • Tao Xie
  • David Notkin
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2931)

Abstract

Generating effective tests and inferring likely program specifications are both difficult and costly problems. We propose an approach that mutually enhances generated tests and inferred specifications by iteratively applying each in a feedback loop. In particular, we infer likely specifications from the executions of existing tests and use these specifications to guide automatic test generation. The existing tests, together with the newly generated ones, are then used to infer new specifications in the subsequent iteration. The iterative process continues until no new test violates the specifications inferred in the previous iteration. Inferred specifications guide test generation to focus on particular program behavior, reducing the scope of analysis, and newly generated tests improve the inferred specifications. During each iteration, the generated tests that violate inferred specifications are collected for inspection; such violating tests have a high probability of exposing faults or exercising new program behavior. Our hypothesis is that this feedback loop can mutually enhance test generation and specification inference.
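
The iteration the abstract describes is essentially a fixed-point computation. The following is a minimal sketch of that loop, assuming hypothetical components infer_specs (for example, a Daikon-style dynamic invariant detector), generate_tests (a specification-guided test generator), and violates (a checker that runs a test against the inferred specifications); none of these names or signatures come from the paper.

    from typing import Callable, List, Sequence, Set, Tuple

    # Hypothetical component signatures (illustrative only, not from the paper):
    #   infer_specs(tests)        -> set of likely specifications
    #   generate_tests(specs)     -> new tests guided by those specifications
    #   violates(test, specs)     -> True if running the test breaks a spec

    def feedback_loop(
        existing_tests: Sequence[str],
        infer_specs: Callable[[Sequence[str]], Set[str]],
        generate_tests: Callable[[Set[str]], List[str]],
        violates: Callable[[str, Set[str]], bool],
    ) -> Tuple[Set[str], List[str]]:
        """Iterate spec inference and test generation until no newly
        generated test violates the previously inferred specifications."""
        tests: List[str] = list(existing_tests)
        flagged: List[str] = []     # violating tests, collected for inspection
        specs = infer_specs(tests)  # likely specs from existing test executions
        while True:
            new_tests = generate_tests(specs)  # spec-guided test generation
            violating = [t for t in new_tests if violates(t, specs)]
            if not violating:
                # Fixed point: no new test violates the previous iteration's
                # specifications, so the loop terminates.
                return specs, flagged
            flagged.extend(violating)   # likely to expose faults or new behavior
            tests.extend(new_tests)     # old and new tests feed the next round
            specs = infer_specs(tests)  # refine specs in the subsequent iteration

Passing the inference, generation, and checking steps in as callables keeps the sketch independent of any particular tool; in the paper's setting these roles would be played by a dynamic invariant detector and a specification-based test generator.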

Keywords

Model Checking, Test Generation, Finite State Machine, Test Input, Specification Inference

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Tao Xie
  • David Notkin

Department of Computer Science and Engineering, University of Washington, Seattle, USA
