A Little Language for Testing

  • Alex Groce
  • Jervis Pinto
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9058)

Abstract

The difficulty of writing test harnesses is a major obstacle to the adoption of automated testing and model checking. Languages designed for harness definition are usually tied to a particular tool and unfamiliar to programmers; moreover, such languages can limit expressiveness. Writing a harness directly in the language of the software under test (SUT) makes it hard to change testing algorithms, offers no support for common testing idioms, and tends to produce repetitive, hard-to-read code. This makes harness generation a natural fit for an unusual kind of domain-specific language (DSL). This paper defines a template scripting testing language, TSTL, and shows how it can be used to produce succinct, readable definitions of state spaces. The concepts underlying TSTL are demonstrated in Python but are not tied to it.
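To make the abstract's point concrete, the sketch below is a hypothetical hand-written random-testing harness in Python (the SUT, a list used as a stack checked against a reference model, is invented for illustration and does not come from the paper). Note how the value pool, the guarded actions, and the checked property are all encoded ad hoc in control flow; this is exactly the repetitive structure a harness DSL like TSTL aims to declare succinctly instead.

```python
import random

def random_stack_test(seed, steps=100):
    """One random test sequence against a trivial SUT (list as stack),
    checked step-by-step against a reference model."""
    rng = random.Random(seed)
    stack, model = [], []              # SUT and reference model
    for _ in range(steps):
        action = rng.choice(["push", "pop"])
        if action == "push":
            v = rng.randint(0, 20)     # ad hoc "value pool"
            stack.append(v)
            model.append(v)
        elif action == "pop" and stack:   # guard: only pop when non-empty
            assert stack.pop() == model.pop()
        assert stack == model          # property, re-checked every step
    return len(stack)

# Driver: run a handful of random test sequences.
for s in range(10):
    random_stack_test(s)
```

Every new action or pool added to a harness written this way means more branches, more guards, and more duplicated bookkeeping; a DSL can generate this scaffolding from a few declarative lines.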

Keywords

Model Checking · Automated Testing · Testing Algorithms · Beam Search · Mars Science Laboratory
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. School of Electrical Engineering and Computer Science, Oregon State University, Corvallis, USA
