Automatic Test Data Generation by Multi-objective Optimisation
This paper presents a technique for automated test data generation applicable to both procedural and object-oriented programs. During generation, the test cases are optimised to maximise structural code coverage while simultaneously minimising the number of test cases required. To cope with these two inherently conflicting goals, hybrid self-adaptive, multi-objective evolutionary algorithms are applied. Our approach relies on a preliminary step that automatically instruments the source code so that the relevant data flow information is recorded at runtime. Using exclusively the insight gained in this way, test data sets are successively enhanced towards the goals stated above. Finally, the efficiency of the generated test set is evaluated in terms of its fault detection capability by means of mutation testing. In addition, the coverage percentage actually achieved is determined by taking into account the results of a static data flow analysis of the system under test. Thanks to the dramatic decrease in the effort required to generate and verify test cases, the technique presented here allows the V&V phase of complex, safety-relevant software to be substantially improved. Preliminary experimental results gained so far are reported in the paper.
Keywords: testing, data flow, evolutionary algorithms, automated test data generation, object-oriented software, mutation testing
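The trade-off between maximising data flow coverage and minimising test-suite size described in the abstract is the kind of comparison typically resolved by Pareto dominance in multi-objective evolutionary algorithms. The following sketch illustrates that comparison under stated assumptions; the class and field names (`Candidate`, `coveredPairs`, `suiteSize`) are hypothetical and not taken from the paper.

```java
// Hypothetical sketch: comparing two candidate test suites under the
// two conflicting objectives from the abstract -- covered data flow
// pairs (to maximise) and number of test cases (to minimise).
public class ParetoSketch {

    static class Candidate {
        final int coveredPairs; // data flow (def-use) pairs covered; maximise
        final int suiteSize;    // number of test cases; minimise

        Candidate(int coveredPairs, int suiteSize) {
            this.coveredPairs = coveredPairs;
            this.suiteSize = suiteSize;
        }
    }

    // Pareto dominance: a dominates b iff a is no worse in both
    // objectives and strictly better in at least one.
    static boolean dominates(Candidate a, Candidate b) {
        boolean noWorse = a.coveredPairs >= b.coveredPairs
                       && a.suiteSize <= b.suiteSize;
        boolean strictlyBetter = a.coveredPairs > b.coveredPairs
                              || a.suiteSize < b.suiteSize;
        return noWorse && strictlyBetter;
    }

    public static void main(String[] args) {
        Candidate a = new Candidate(40, 10); // more coverage, fewer tests
        Candidate b = new Candidate(35, 12);
        System.out.println(dominates(a, b)); // prints "true"
    }
}
```

In a multi-objective evolutionary algorithm, candidates that are dominated by no other member of the population form the Pareto front from which the next generation is bred; the sketch only shows the pairwise comparison underlying that selection.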