Discovering Likely Method Specifications

  • Nikolai Tillmann
  • Feng Chen
  • Wolfram Schulte
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4260)

Abstract

Software specifications are of great use for more rigorous software development. They are useful for formal verification and automated testing, and they improve program understanding. In practice, specifications often do not exist, and developers write software in an ad-hoc fashion. We describe a new way to automatically infer specifications from code. Our approach infers a likely specification for any method such that the method’s behavior, i.e., its effect on the state and possible result values, is summarized and expressed in terms of some other methods. We use symbolic execution to analyze and relate the behaviors of the considered methods. In our experience, the resulting likely specifications are compact and human-understandable. They can be examined by the user, used as input to program verification systems, or used as input to test generation tools for validation. We implemented the technique for .NET programs in a tool called Axiom Meister. It inferred concise specifications for base classes of the .NET platform and found flaws in the design of a new library.



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Nikolai Tillmann (1)
  • Feng Chen (2)
  • Wolfram Schulte (1)
  1. Microsoft Research, Redmond, USA
  2. University of Illinois at Urbana-Champaign, Urbana, USA