
Survey on test data generation tools

An evaluation of white- and gray-box testing tools for C#, C++, Eiffel, and Java

  • Regular Paper
  • Published in: International Journal on Software Tools for Technology Transfer

Abstract

Automating the process of software testing is a very popular research topic and of real interest to industry. Test automation can take place at different levels, e.g., test execution, test case generation, and test data generation. This survey gives an overview of state-of-the-art test data generation tools, both academic and commercial, focusing on white- and gray-box techniques. The list of existing tools was filtered with respect to public availability, maturity, and activity. The remaining seven tools, i.e., AgitarOne, CodePro AnalytiX, AutoTest, C++test, Jtest, Randoop, and Pex, are briefly introduced and their evaluation results are summarized. For the evaluation we defined 31 benchmark tests that check the tools' ability to generate test data satisfying a given specification: 24 benchmarks over primitive types and 7 benchmarks over non-primitive types with more complex specifications. Most of the commercial tools implement a test data strategy that reuses constant values found in the method under test, or values derived from them by simple mathematical operations; this strategy turns out to be very effective. In general, all tools that combine multiple techniques perform very well. For example, Pex primarily uses constraint solving, but falls back on random-based techniques in cases where the constraint solver reaches its limitations. Notably, the two commercial tools AgitarOne and Pex, which combine multiple approaches to test data generation, pass all 31 tests. This survey reflects the status in 2011.
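The constant-seeding strategy mentioned in the abstract can be sketched as follows. This is an illustrative minimal sketch, not code from any of the surveyed tools: the method `classify` and the `±1` mutation offsets are hypothetical, chosen to show why values harvested from the method under test and slightly perturbed tend to exercise boundary branches.

```java
import java.util.ArrayList;
import java.util.List;

public class ConstantSeeding {
    // Hypothetical method under test: its branches compare against
    // the literal constants 42 and 100.
    static String classify(int x) {
        if (x == 42) return "answer";
        if (x > 100) return "large";
        return "other";
    }

    // Derive candidate test data from constants found in the method,
    // slightly modified by simple arithmetic. The constant itself hits
    // equality branches; c-1 and c+1 probe either side of boundaries.
    static List<Integer> candidates(int... constantsInMethod) {
        List<Integer> values = new ArrayList<>();
        for (int c : constantsInMethod) {
            values.add(c);
            values.add(c - 1);
            values.add(c + 1);
        }
        return values;
    }

    public static void main(String[] args) {
        // The six candidates derived from {42, 100} cover all three
        // branches of classify without any constraint solving.
        for (int v : candidates(42, 100)) {
            System.out.println(v + " -> " + classify(v));
        }
    }
}
```

Note that a purely random strategy would almost never hit `x == 42` in the 32-bit integer range, which is why seeding from in-method constants is so effective in practice.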





Acknowledgments

The authors wish to thank the anonymous referees for their detailed and constructive feedback, which helped to improve the paper. The research herein was partially conducted within the competence network Softnet Austria (http://www.soft-net.at) and funded by the Austrian Federal Ministry of Economics (bm:wa), the province of Styria, the Steirische Wirtschaftsförderungsgesellschaft mbH (SFG), and the city of Vienna through the Center for Innovation and Technology (ZIT).

Author information

Corresponding author

Correspondence to Stefan J. Galler.


About this article

Cite this article

Galler, S.J., Aichernig, B.K. Survey on test data generation tools. Int J Softw Tools Technol Transfer 16, 727–751 (2014). https://doi.org/10.1007/s10009-013-0272-3

