A Learning-Based Approach to Unit Testing of Numerical Software

  • Karl Meinke
  • Fei Niu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6435)


We present an application of learning-based testing to the problem of automated test case generation (ATCG) for numerical software. Our approach uses n-dimensional polynomial models as an algorithmically learned abstraction of the SUT which supports n-wise testing. Test cases are iteratively generated by applying a satisfiability algorithm to first-order program specifications over real closed fields and iteratively refined piecewise polynomial models.
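The iterative loop described above can be sketched in a few lines. The sketch below is illustrative only: it uses a one-dimensional SUT in place of the paper's n-dimensional case, a hypothetical injected fault (a small systematic drift), a toy specification, and a dense scan of the learned model where the paper applies a decision procedure for first-order formulas over real closed fields.

```python
import numpy as np

def sut(x):
    # Hypothetical SUT: intended to compute x**2, but an injected fault
    # adds a small systematic drift of 1e-3 * x.
    return x * x + 1e-3 * x

def spec_holds(x, y):
    # Toy first-order requirement: |f(x) - x**2| <= 1e-6 on [0, 1].
    return abs(y - x * x) <= 1e-6

def learning_based_test(n_iters=100, degree=3, seed=0):
    rng = np.random.default_rng(seed)
    xs = list(rng.uniform(0.0, 1.0, 4))       # initial observations of the SUT
    ys = [sut(x) for x in xs]
    for _ in range(n_iters):
        # 1. Learn a polynomial abstraction of the SUT from the observations.
        model = np.polynomial.Polynomial.fit(xs, ys, deg=min(degree, len(xs) - 1))
        # 2. Search the *model* for a specification violation. A dense scan
        #    stands in here for a satisfiability check over real closed fields.
        grid = np.linspace(0.0, 1.0, 513)
        scores = np.abs(model(grid) - grid * grid)
        best = int(np.argmax(scores))
        if scores[best] > 1e-6:
            candidate = float(grid[best])      # model counterexample -> test case
        else:
            candidate = rng.uniform(0.0, 1.0)  # no counterexample: random test
        # 3. Execute the generated test case on the real SUT.
        y = sut(candidate)
        if not spec_holds(candidate, y):
            return candidate                   # concrete failing input found
        xs.append(candidate)                   # 4. Refine the model and iterate.
        ys.append(y)
    return None
```

The key design point is that the expensive search for a counterexample runs against the cheap learned model, and the real SUT is executed only on the candidate inputs the model proposes; every execution, passing or failing, becomes training data for the next refinement.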

We benchmark the performance of our iterative ATCG algorithm against iterative random testing, and empirically analyse its performance in finding injected errors in numerical codes. Our results show that for software with small errors or a long mean time to failure, learning-based testing is increasingly more efficient than iterative random testing.
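The iterative random testing baseline used in such a comparison can be sketched as follows. The SUT, the width of the fault band, and the tolerance here are illustrative assumptions, not values taken from the paper:

```python
import random

def faulty_sqrt(x):
    # Hypothetical SUT: a square-root routine with an injected error on a
    # narrow input band, modelling a fault with a long mean time to failure.
    y = x ** 0.5
    if 0.50 < x < 0.52:
        y += 1e-3
    return y

def meets_spec(x, y):
    # Toy requirement: squaring the result must reproduce the input.
    return abs(y * y - x) <= 1e-6

def iterative_random_test(n_max=100_000, seed=1):
    # Draw inputs uniformly until the specification fails or the budget runs
    # out; returns the number of SUT executions needed to expose the fault.
    rng = random.Random(seed)
    for k in range(1, n_max + 1):
        x = rng.uniform(0.0, 1.0)
        if not meets_spec(x, faulty_sqrt(x)):
            return k, x
    return n_max, None
```

With a fault band of width 0.02, uniform random testing needs on the order of 50 executions on average to expose the error; narrowing the band lengthens the mean time to failure, which is the regime in which the paper reports learning-based testing becoming increasingly more efficient than this baseline.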


Keywords: Model Check · Local Model · Unit Test · System Under Test · Numerical Program



Copyright information

© IFIP International Federation for Information Processing 2010

Authors and Affiliations

  • Karl Meinke¹
  • Fei Niu¹

  1. Royal Institute of Technology, Stockholm, Sweden
