Software Quality Journal, Volume 12, Issue 3, pp. 185–210

Selecting a Cost-Effective Test Case Prioritization Technique

  • Sebastian Elbaum
  • Gregg Rothermel
  • Satya Kanduri
  • Alexey G. Malishevsky

Abstract

Regression testing is an expensive testing process used to validate modified software and detect whether new faults have been introduced into previously tested code. To reduce the cost of regression testing, software testers may prioritize their test cases so that those that are more important, by some measure, are run earlier in the regression testing process. One goal of prioritization is to increase a test suite's rate of fault detection. Previous empirical studies have shown that several prioritization techniques can significantly improve rate of fault detection, but they have also shown that the effectiveness of these techniques varies considerably with the attributes of the programs, test suites, and modifications being considered. This variation makes it difficult for a practitioner to choose an appropriate prioritization technique for a given testing scenario. To address this problem, we analyze the fault detection rates that result from applying several different prioritization techniques to several programs and modified versions. The results of our analyses provide insights into which types of prioritization techniques are and are not appropriate under specific testing scenarios, and into the conditions under which they are or are not appropriate. Our analysis approach can also be used by other researchers or practitioners to determine the prioritization techniques appropriate to other workloads.

Keywords: test case prioritization, regression testing, empirical studies
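In this line of work, rate of fault detection is quantified with the APFD metric (the weighted average of the percentage of faults detected over the life of the test suite). As a minimal, self-contained illustration of the kind of technique being compared, the Python sketch below pairs a greedy "total statement coverage" prioritizer, one of the simplest techniques in the family studied, with an APFD computation. The function names and toy data are hypothetical and are not the authors' implementation.

    # Illustrative sketch only: a greedy total-coverage prioritizer and
    # the APFD metric used in the prioritization literature to measure
    # rate of fault detection. Names and data are hypothetical, not the
    # authors' implementation.

    def prioritize_total_coverage(coverage):
        """Order tests by how many statements each covers (descending).

        coverage: dict mapping test id -> set of covered statement ids.
        """
        return sorted(coverage, key=lambda t: len(coverage[t]), reverse=True)

    def apfd(order, faults_detected, num_faults):
        """APFD = 1 - (sum of first-detection positions) / (n * m) + 1 / (2n),

        where n is the number of tests, m the number of faults, and the
        sum ranges over each fault's earliest-detecting test position.
        Assumes every fault is detected by at least one test in `order`.
        """
        n, m = len(order), num_faults
        first_position = {}
        for position, test in enumerate(order, start=1):
            for fault in faults_detected.get(test, ()):
                first_position.setdefault(fault, position)
        return 1 - sum(first_position.values()) / (n * m) + 1 / (2 * n)

    # Toy scenario: three tests, two seeded faults.
    coverage = {"t1": {1, 2, 3, 4}, "t2": {1, 2}, "t3": {3}}
    faults = {"t2": {"f1"}, "t3": {"f2"}}
    order = prioritize_total_coverage(coverage)  # ['t1', 't2', 't3']
    print(order, apfd(order, faults, num_faults=2))  # APFD = 0.333...

A common refinement studied in this literature is "additional" coverage prioritization, which repeatedly selects the test covering the most statements not yet covered by already-chosen tests; feedback of that kind often yields orderings that expose faults earlier and thus achieve higher APFD values.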



Copyright information

© Kluwer Academic Publishers 2004

Authors and Affiliations

  • Sebastian Elbaum (1)
  • Gregg Rothermel (1)
  • Satya Kanduri (1)
  • Alexey G. Malishevsky (2)

  1. Department of Computer Science and Engineering, University of Nebraska–Lincoln, USA
  2. Department of Computer Science, Oregon State University, USA
