Retrofitting Unit Tests for Parameterized Unit Testing

  • Suresh Thummalapenta
  • Madhuri R. Marri
  • Tao Xie
  • Nikolai Tillmann
  • Jonathan de Halleux
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6603)

Abstract

Recent advances in software testing have introduced parameterized unit tests (PUTs), which accept parameters, unlike conventional unit tests (CUTs), which do not. PUTs offer greater fault-detection capability than CUTs, since PUTs describe the behavior of methods under test for all test arguments. In practice, existing applications often include manually written CUTs. Given these CUTs, natural questions arise: can these CUTs be retrofitted as PUTs to leverage the benefits of PUTs, and what costs and benefits are involved in retrofitting CUTs as PUTs? To address these questions, in this paper we conduct an empirical study to investigate whether existing CUTs can be retrofitted as PUTs with feasible effort while achieving the benefits of PUTs in terms of additional fault-detection capability and code coverage. We also propose a methodology, called test generalization, that helps systematically retrofit existing CUTs as PUTs. Our results on three real-world open-source applications (≈ 4.6 KLOC) show that the retrofitted PUTs detect 19 new defects not detected by the existing CUTs, and increase branch coverage by 4% on average (with a maximum increase of 52% for one class under test and 10% for one application under analysis) with feasible effort.
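
To make the CUT-versus-PUT distinction concrete, below is a minimal C# sketch (illustrative only: the Stack example and all identifiers are ours, not taken from the paper) showing a conventional unit test alongside the parameterized unit test obtained by generalizing it, in the Pex style the paper builds on. The hard-coded test datum is promoted to a method parameter, and the assertions become properties that must hold for all arguments.

    using System.Collections.Generic;
    using Microsoft.Pex.Framework;                      // Pex attributes: [PexClass], [PexMethod]
    using Microsoft.VisualStudio.TestTools.UnitTesting; // MSTest attributes and Assert

    [TestClass, PexClass]
    public partial class StackTests
    {
        // Conventional unit test (CUT): one fixed input (3) and fixed expected outputs.
        [TestMethod]
        public void PushCUT()
        {
            var stack = new Stack<int>();
            stack.Push(3);
            Assert.AreEqual(3, stack.Peek());
            Assert.AreEqual(1, stack.Count);
        }

        // Retrofitted parameterized unit test (PUT): the concrete value becomes a
        // parameter, so the assertions now state properties that must hold for
        // every argument, not just for 3.
        [PexMethod]
        public void PushPUT(int value)
        {
            var stack = new Stack<int>();
            stack.Push(value);
            Assert.AreEqual(value, stack.Peek());
            Assert.AreEqual(1, stack.Count);
        }
    }

Given such a PUT, a dynamic-symbolic-execution tool such as Pex explores the method and synthesizes concrete arguments (emitting one closed CUT per explored path), which is how a single PUT can subsume many hand-written CUTs and reach branches that the original fixed inputs missed.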

Keywords

Unit Test, Symbolic Execution, Test Generalization, Code Coverage, Branch Coverage

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Suresh Thummalapenta (1)
  • Madhuri R. Marri (1)
  • Tao Xie (1)
  • Nikolai Tillmann (2)
  • Jonathan de Halleux (2)

  1. Department of Computer Science, North Carolina State University, Raleigh, USA
  2. Microsoft Research, Redmond, USA
