Testing First-Order Logic Axioms in Program Verification

  • Ki Yung Ahn
  • Ewen Denney
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6143)

Abstract

Program verification systems based on automated theorem provers rely on user-provided axioms to verify domain-specific properties of code. However, formulating axioms correctly, that is, so that they faithfully formalize properties of the intended mathematical interpretation, is non-trivial in practice, and unsoundness can be difficult to avoid or even to detect. Moreover, it is hard to judge the soundness of axioms from the output of the provers themselves, since they typically do not give counterexamples. We adopt the idea of model-based testing to help axiom authors discover errors in their axiomatizations. To test the validity of axioms, users define a computational model of the axiomatized logic by giving interpretations to its function symbols and constants in a simple declarative programming language. We have developed an axiom testing framework that helps automate model definition and test generation using off-the-shelf tools for meta-programming, property-based random testing, and constraint solving. We have used our tool to test the axioms of AutoCert, a program verification system that has been applied to verify aerospace flight code using a first-order axiomatization of navigational concepts, and were able to find counterexamples for a number of axioms.
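
To make the approach concrete, here is a minimal sketch in Haskell of how such model-based axiom testing can look, using the QuickCheck library for property-based random testing (one of the off-the-shelf tools mentioned above). The axiom, the vector domain, and the interpretations of norm and scale are hypothetical illustrations, not the actual AutoCert axiomatization:

  -- Minimal sketch of model-based axiom testing (hypothetical example).
  import Test.QuickCheck

  -- Computational model: interpret the theory's function symbols as
  -- Haskell functions over a concrete domain (here, 2-D vectors).
  type Vec = (Double, Double)

  norm :: Vec -> Double
  norm (x, y) = sqrt (x * x + y * y)

  scale :: Double -> Vec -> Vec
  scale k (x, y) = (k * x, k * y)

  -- A plausible-looking but unsound axiom:
  --   forall k, v. norm(scale(k, v)) = k * norm(v)
  -- (A sound version needs abs k on the right-hand side.)
  prop_scaleNorm :: Double -> Vec -> Property
  prop_scaleNorm k v = counterexample (show (lhs, rhs)) (abs (lhs - rhs) < 1e-6)
    where lhs = norm (scale k v)
          rhs = k * norm v

  main :: IO ()
  main = quickCheck prop_scaleNorm

Running this immediately reports a counterexample with a negative scale factor, exposing the missing absolute value in the axiom. This is exactly the kind of feedback a theorem prover will not give: the prover either silently fails to prove goals that depend on the unsound axiom or, worse, proves too much.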

Keywords

Model-based testing · Program verification · Automated theorem proving · Property-based testing · Constraint solving

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Ki Yung Ahn (1, 2)
  • Ewen Denney (3)
  1. Portland State University, Portland, USA
  2. Mission Critical Technologies, Inc. / NASA Ames Research Center, Moffett Field, USA
  3. Stinger Ghaffarian Technologies, Inc. / NASA Ames Research Center, Moffett Field, USA
