Evolving Rules for Action Selection in Automated Testing via Genetic Programming - A First Approach

  • Anna I. Esparcia-Alcázar
  • Francisco Almenar
  • Urko Rueda
  • Tanja E. J. Vos
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10200)

Abstract

Tools that perform automated software testing via the user interface rely on an action selection mechanism that, at each step of the testing process, decides what to do next. This mechanism is often based on random choice, a practice commonly referred to as monkey testing. In this work we evaluate a first approach to genetic programming (GP) for action selection that involves evolving IF-THEN-ELSE rules; we carry out experiments and compare the results with those obtained by random selection and by \(\mathcal{Q}\)-learning, a reinforcement learning technique. Three applications are used as Software Under Test (SUT) in the experiments: two proprietary desktop applications and one open-source web application. Statistical analysis is used to compare the three action selection techniques on the three SUTs; for this, a number of metrics are used that remain valid even when access to the source code is not available and testing is only possible via the GUI. Even at this preliminary stage, the analysis shows the potential of GP to evolve action selection mechanisms.
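The abstract does not spell out how an evolved IF-THEN-ELSE rule is represented or what state information it may inspect. As a minimal illustrative sketch only (the Action class, its kind and times_executed attributes, and the particular condition are assumptions for the example, not the authors' implementation), such a rule can be read as a condition over the actions currently available in the GUI that picks a preferred subset when the condition holds and otherwise falls back to random (monkey) selection:

    import random

    # Hypothetical representation of a GUI action exposed by the testing tool;
    # the attribute names are illustrative only.
    class Action:
        def __init__(self, kind, times_executed=0):
            self.kind = kind                      # e.g. "click", "type", "drag"
            self.times_executed = times_executed  # how often this action was run so far

    # A hand-written example of the kind of IF-THEN-ELSE rule GP could evolve:
    # IF there are unexplored click actions THEN pick one of them,
    # ELSE fall back to uniform random (monkey) selection.
    def select_action(available_actions):
        unexplored_clicks = [a for a in available_actions
                             if a.kind == "click" and a.times_executed == 0]
        if unexplored_clicks:
            return random.choice(unexplored_clicks)
        return random.choice(available_actions)

In a GP setting, the condition and the two branches would be evolved rather than hand-written, with fitness derived from GUI-level testing metrics such as those mentioned above.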

Keywords

Automated testing via the GUI · Action selection for testing · Testing metrics · Genetic Programming

Notes

Acknowledgments

This work was partially funded by project SHIP (SMEs and HEIs in Innovation Partnerships, ref: EACEA/A2/UHB/CL 554187).

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Anna I. Esparcia-Alcázar (1)
  • Francisco Almenar (1)
  • Urko Rueda (1)
  • Tanja E. J. Vos (1)

  1. Research Center on Software Production Methods (PROS), Universitat Politècnica de València, Valencia, Spain