Test Case Prioritization Using Online Fault Detection Information

  • Mohsen Laali
  • Huai Liu
  • Margaret Hamilton
  • Maria Spichkova
  • Heinz W. Schmidt
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9695)

Abstract

The rapid evolution of software necessitates effective fault detection within increasingly restricted execution times. To improve the effectiveness of the regression testing required for extensive fault detection, test cases have to be prioritized: test cases with a higher chance of revealing faults are executed earlier in the sequence, providing faster feedback and allowing more faults to be fixed sooner. Various prioritization techniques have been proposed based on information from the offline (static) test execution history of previous versions of the software. In this paper, we propose a family of new test case prioritization techniques that utilize online (dynamic) information about the locations of previously revealed faults to guide the detection of further faults. Our empirical studies demonstrate that the new techniques are more effective than existing traditional test case prioritization techniques.
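The core idea of online (dynamic) prioritization can be illustrated with a minimal sketch: after each test executes, the remaining tests are re-ranked using the code locations exercised by tests that have already revealed faults. The sketch below is an assumption-laden illustration, not the paper's actual algorithm; the names prioritize_online, coverage, and run_test are hypothetical, and it assumes a per-test coverage map is available.

    # Hedged sketch of online test case prioritization; the paper's
    # concrete techniques may differ in how fault information is used.
    def prioritize_online(tests, coverage, run_test):
        # tests: list of test ids
        # coverage: dict mapping test id -> set of covered code locations
        # run_test: callable(test_id) -> True if the test reveals a fault
        remaining = set(tests)
        fault_locations = set()  # locations covered by fault-revealing tests
        order = []
        while remaining:
            if fault_locations:
                # Online step: prefer tests whose coverage overlaps the
                # locations associated with previously revealed faults.
                nxt = max(remaining,
                          key=lambda t: len(coverage[t] & fault_locations))
            else:
                # No fault information yet: fall back to a traditional
                # total-coverage ordering.
                nxt = max(remaining, key=lambda t: len(coverage[t]))
            remaining.remove(nxt)
            order.append(nxt)
            if run_test(nxt):  # the test failed, i.e. revealed a fault
                fault_locations |= coverage[nxt]
        return order

The distinguishing design choice is that prioritization decisions are revised during test execution, rather than fixed in advance from historical data alone.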

Keywords

Software testing · Regression testing · Test case prioritization · Online test prioritization


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Mohsen Laali (1)
  • Huai Liu (1)
  • Margaret Hamilton (1)
  • Maria Spichkova (1)
  • Heinz W. Schmidt (1)

  1. RMIT University, Melbourne, Australia
