Generating Test Suites with Augmented Dynamic Symbolic Execution

  • Konrad Jamrozik
  • Gordon Fraser
  • Nikolai Tillmann
  • Jonathan de Halleux
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7942)


Abstract

Unit test generation tools typically aim at one of two objectives: to explore the program behavior in order to exercise automated oracles, or to produce a representative test set that can be used to manually add oracles or as a regression test set. Dynamic symbolic execution (DSE) can efficiently explore all simple paths through a program, exercising automated oracles such as assertions or code contracts. However, its original intention was not to produce representative test sets. Although DSE tools like Pex can retain subsets of the tests seen during the exploration, customer feedback revealed that users expect different values than those produced by Pex, and sometimes also more than one value for a given condition or program path. This threatens the applicability of DSE in scenarios without automated oracles. Indeed, even though all paths might be covered by DSE, the resulting tests are usually not sensitive enough to make a good regression test suite. In this paper, we present augmented dynamic symbolic execution, which aims to produce representative test sets with DSE by augmenting path conditions with additional conditions that enforce target criteria such as boundary or mutation adequacy, or logical coverage criteria. Experiments with our Apex prototype demonstrate that the resulting test cases can detect up to 30% more seeded defects than those produced with Pex.
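The core idea of augmented DSE can be illustrated with a toy sketch. The names and the brute-force "solver" below are purely illustrative (they are not the Apex or Pex API): given a path condition, augmented DSE conjoins extra constraints that enforce a target criterion, such as boundary adequacy (the input sits on the edge of the branch condition) or weak mutation adequacy (the input distinguishes the original condition from a mutant), before asking the constraint solver for an input.

```python
def solve(constraint, domain=range(-100, 101)):
    """Toy 'solver': brute-force search over a small integer domain.
    A real DSE engine would hand the constraint to an SMT solver instead."""
    for x in domain:
        if constraint(x):
            return x
    return None

# Path condition explored by plain DSE for a branch like `if x > 10: ...`
path_cond = lambda x: x > 10

# Plain DSE: any value satisfying the path condition covers the path.
any_input = solve(path_cond)

# Augmented DSE, boundary adequacy: additionally require the input to lie
# on the boundary of the condition (the branch is taken, but would not be
# for the adjacent value).
on_point = solve(lambda x: path_cond(x) and not path_cond(x - 1))

# Augmented DSE, weak mutation adequacy: require an input on which the
# original condition and a mutant (here `x >= 10`) disagree, so a test
# with an oracle on this path can kill the mutant.
mutant_cond = lambda x: x >= 10
kill_input = solve(lambda x: path_cond(x) != mutant_cond(x))
```

Each augmentation reuses the same path condition; only the extra conjunct changes, which is why the approach layers naturally on top of an existing DSE exploration.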


Keywords: Test Suite, Mutation Testing, Path Condition, Coverage Criterion, Symbolic Execution





Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Konrad Jamrozik, Saarland University, Saarbruecken, Germany
  • Gordon Fraser, University of Sheffield, UK
  • Nikolai Tillmann, Microsoft Research, USA
  • Jonathan de Halleux, Microsoft Research, USA
