Using Behaviour Inference to Optimise Regression Test Sets

  • Ramsay Taylor
  • Mathew Hall
  • Kirill Bogdanov
  • John Derrick
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7641)

Abstract

Where a software component is updated or replaced, regression testing is required. Regression test sets can contain considerable redundancy, especially where no formal regression test set exists and the new component must instead be compared against patterns of behaviour derived from in-use log data from the previous version. Previous work has applied search-based techniques such as Genetic Algorithms to minimise test sets, but these relied on code-coverage metrics to select test cases. Recent work has demonstrated the advantage of behaviour inference as a test adequacy metric. This paper presents a multi-objective search-based technique that uses behaviour inference as the fitness metric. The resulting test sets are evaluated using mutation testing, and it is demonstrated that a considerably reduced test set can be found that retains all of the fault-finding capability of the complete set.
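The abstract describes trading off test-suite size against behavioural adequacy as a multi-objective search, with candidate suites compared by Pareto dominance (as in NSGA-II, ref. 5). The following sketch is illustrative only: the test names, transition labels, and coverage map are hypothetical stand-ins for the paper's inferred state-machine adequacy metric, and exhaustive enumeration replaces the genetic search, which only makes sense at this toy scale.

```python
from itertools import combinations

# Hypothetical data: which state-machine transitions each test exercises.
# In the paper, adequacy comes from behaviour inference over log data;
# here a fixed coverage map stands in for that metric.
COVERAGE = {
    "t1": {"a", "b"},
    "t2": {"b", "c"},
    "t3": {"c"},
    "t4": {"a", "b", "c"},
}
ALL_TRANSITIONS = {"a", "b", "c"}

def objectives(subset):
    """Two minimisation objectives: suite size, and transitions left uncovered."""
    covered = set().union(*(COVERAGE[t] for t in subset)) if subset else set()
    return (len(subset), len(ALL_TRANSITIONS - covered))

def dominates(p, q):
    """p Pareto-dominates q: no worse on every objective, better on at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(candidates):
    """Non-dominated candidates, i.e. the first front in NSGA-II terms."""
    return [s for s in candidates
            if not any(dominates(objectives(o), objectives(s))
                       for o in candidates if o != s)]

# Enumerate every subset of the test pool (a GA would sample this space instead).
tests = sorted(COVERAGE)
candidates = [frozenset(c) for r in range(len(tests) + 1)
              for c in combinations(tests, r)]
front = pareto_front(candidates)
# The front contains the empty suite (size 0, nothing covered) and {"t4"}
# (size 1, full coverage): the single-test suite preserving all behaviour.
```

A real run would then pick the knee of the front that retains full adequacy at minimal size; here that is the one-test suite `{"t4"}`.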

Keywords

Genetic Algorithm · State Machine · Pareto Front · Test Suite · Behaviour Inference

References

  1. StateChum, http://statechum.sourceforge.net/ (accessed March 14, 2012)
  2. Armstrong, J., Virding, R., Wikström, C., Williams, M.: Concurrent Programming in ERLANG. Prentice Hall (1996)
  3. Biswas, S., Mall, R., Satpathy, M., Sukumaran, S.: Regression Test Selection Techniques: A Survey. Informatica 35(3), 289–321 (2011)
  4. Chow, T.S.: Testing Software Design Modeled by Finite-State Machines. IEEE Transactions on Software Engineering 4(3), 178–187 (1978)
  5. Deb, K., Agrawal, S., Pratap, A., Meyarivan, T.: A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6(2), 182–197 (2002)
  6. Fraser, G., Walkinshaw, N.: Behaviourally Adequate Software Testing. In: Proceedings of the Fifth International Conference on Software Testing, Verification and Validation (ICST) (2012)
  7. Guo, Q., Derrick, J., Taylor, R.: Mutation Testing of Erlang Distributed and Concurrent Applications. In: Automated Software Engineering (submitted, 2012)
  8. Hamlet, R.G.: Testing Programs with the Aid of a Compiler. IEEE Transactions on Software Engineering 3, 279–290 (1977)
  9. Harman, M., Jones, B.F.: Search-Based Software Engineering. Information & Software Technology 43(14), 833–839 (2001)
  10. Clark, J.A., Dan, H., Hierons, R.M.: Semantic Mutation Testing. Science of Computer Programming (2011) (in press)
  11. Lang, K.J., Pearlmutter, B.A., Price, R.A.: Results of the Abbadingo One DFA Learning Competition and a New Evidence-Driven State Merging Algorithm. In: Honavar, V.G., Slutzki, G. (eds.) ICGI 1998. LNCS (LNAI), vol. 1433, pp. 1–12. Springer, Heidelberg (1998)
  12. Lin, J.-W., Huang, C.-Y.: Analysis of Test Suite Reduction with Enhanced Tie-Breaking Techniques. Information & Software Technology 51(4), 679–690 (2009)
  13. Mansour, N., El-Fakih, K.: Simulated Annealing and Genetic Algorithms for Optimal Regression Testing. Journal of Software Maintenance 11(1), 19–34 (1999)
  14. Pradel, M., Bichsel, P., Gross, T.R.: A Framework for the Evaluation of Specification Miners Based on Finite State Machines. In: ICSM, pp. 1–10. IEEE Computer Society (2010)
  15. Walkinshaw, N., Bogdanov, K.: Automated Comparison of State-Based Software Models in Terms of Their Language and Structure. ACM Transactions on Software Engineering and Methodology 22(2) (2012)
  16. Walkinshaw, N., Bogdanov, K., Holcombe, M., Salahuddin, S.: Reverse Engineering State Machines by Interactive Grammar Inference. In: Proceedings of the 14th Working Conference on Reverse Engineering (WCRE). IEEE (2007)
  17. Weyuker, E.J.: Assessing Test Data Adequacy through Program Inference. ACM Transactions on Programming Languages and Systems 5(4), 641–655 (1983)
  18. Xu, W., Huang, L., Fox, A., Patterson, D., Jordan, M.: Experience Mining Google's Production Console Logs. In: Proceedings of the 2010 Workshop on Managing Systems via Log Analysis and Machine Learning Techniques, p. 5. USENIX Association (2010)
  19. Yoo, S., Harman, M.: Using Hybrid Algorithm for Pareto Efficient Multi-Objective Test Suite Minimisation. Journal of Systems and Software 83(4), 689–701 (2010)

Copyright information

© IFIP International Federation for Information Processing 2012

Authors and Affiliations

  • Ramsay Taylor (1)
  • Mathew Hall (1)
  • Kirill Bogdanov (1)
  • John Derrick (1)

  1. Department of Computer Science, The University of Sheffield, UK