Augmenting Automatically Generated Unit-Test Suites with Regression Oracle Checking

Part of the Lecture Notes in Computer Science book series (LNCS, volume 4067)


A test case consists of two parts: a test input that exercises the program under test and a test oracle that checks the correctness of the test execution. A test oracle often takes the form of executable assertions, as in the JUnit testing framework. Manually written test cases are valuable for exposing faults in the current program version and regression faults in future versions, but they are often insufficient for assuring high software quality. We can then use an existing test-generation tool to produce new test inputs that augment the existing test suite. Without specifications, however, these automatically generated test inputs often lack test oracles for exposing faults. In this paper, we develop an automatic approach, and its supporting tool Orstra, for augmenting an automatically generated unit-test suite with regression oracle checking; the augmented test suite has an improved capability of guarding against regression faults. Orstra first executes the test suite and collects the object states of the class under test that the suite exercises. On the collected object states, Orstra creates assertions that assert the behavior of those states; on executed observer methods (public methods with non-void returns), it also creates assertions on their return values. Later, when the class is changed, the augmented test suite is executed to check whether any assertion violations are reported. We have evaluated Orstra by augmenting automatically generated tests for eleven subjects taken from a variety of sources. The experimental results show that an automatically generated test suite's fault-detection capability can be effectively improved after augmentation by Orstra.
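To make the idea concrete, the sketch below illustrates the kind of augmentation the abstract describes. `IntStack` is a hypothetical class, not one of the paper's subjects, and plain runtime checks stand in for JUnit assertions so the sketch stays self-contained; the "recorded" values in comments represent return values Orstra would capture from a run against the original class version.

```java
// Hypothetical class under test: a small bounded integer stack.
class IntStack {
    private final int[] elems = new int[10];
    private int size = 0;
    public void push(int v) { elems[size++] = v; }
    public int pop() { return elems[--size]; }
    // Observer methods: public, non-void return values.
    public int size() { return size; }
    public boolean isEmpty() { return size == 0; }
}

public class IntStackTest {
    // Stand-in for a JUnit assertion.
    static void check(boolean cond) {
        if (!cond) throw new AssertionError("regression oracle violated");
    }

    // As a tool might generate it: exercises the class, checks nothing.
    static void testGenerated() {
        IntStack s = new IntStack();
        s.push(3);
        s.push(7);
        s.pop();
    }

    // After augmentation: the same input sequence, with assertions on
    // observer return values recorded from the original version's run.
    static void testAugmented() {
        IntStack s = new IntStack();
        check(s.isEmpty());      // recorded: true
        s.push(3);
        s.push(7);
        check(s.size() == 2);    // recorded: 2
        check(s.pop() == 7);     // recorded: 7
        check(!s.isEmpty());     // recorded: false
    }

    public static void main(String[] args) {
        testGenerated();
        testAugmented();
        System.out.println("augmented suite passed");
    }
}
```

If a later change to `IntStack` alters any of these observable behaviors, `testAugmented` fails where `testGenerated` would still pass silently, which is exactly the improved regression-fault detection the paper measures.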


Keywords: Test Suite · Java Modeling Language · Method Invocation · Test Oracle · Regression Fault





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Tao Xie, Department of Computer Science, North Carolina State University, Raleigh, USA
