Automated Testing of Web Applications with TESTAR

Lessons Learned Testing the Odoo Tool
  • Francisco Almenar
  • Anna I. Esparcia-Alcázar
  • Mirella Martínez
  • Urko Rueda
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9962)


The TESTAR tool was originally conceived to perform automated testing of desktop applications via their Graphical User Interface (GUI). Starting from the premise that source code is not available, TESTAR automatically selects actions based only on information derived from the GUI and in this way generates test sequences on the fly. In this work we extend its use to web applications and carry out experiments using the Odoo open source management software as the system under test. We also introduce novel metrics to evaluate the performance of testing with TESTAR, which remain valid when the source code is inaccessible and testing is only possible via the GUI. We compare results obtained for two types of action selection mechanisms, based on random choice and \(\mathcal {Q}\)-learning with different parameter settings. Statistical analysis shows the superiority of the latter given an adequate choice of parameters; furthermore, the results point to interesting areas for improvement.
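The abstract contrasts random action selection with \(\mathcal {Q}\)-learning. The sketch below illustrates, in broad strokes, how a \(\mathcal {Q}\)-learning action selector over GUI actions can work; the class name, the reward scheme (a reward that decays with how often an action has been executed, favouring unexplored parts of the GUI), and the parameter values are illustrative assumptions, not TESTAR's actual implementation.

```python
import random

class QLearningSelector:
    """Illustrative Q-learning action selector for GUI testing.

    States and actions are opaque identifiers (e.g. hashes of the
    current GUI state and of the available widget actions).
    """

    def __init__(self, max_reward=10.0, discount=0.95):
        self.q = {}                   # (state, action) -> Q value
        self.max_reward = max_reward  # optimistic initial value draws the
                                      # tester towards unexplored actions
        self.discount = discount
        self.executed = {}            # (state, action) -> execution count

    def select(self, state, actions):
        # Greedy choice on current Q values; ties broken at random,
        # so an untrained selector behaves like a random tester.
        values = [self.q.get((state, a), self.max_reward) for a in actions]
        best = max(values)
        candidates = [a for a, v in zip(actions, values) if v == best]
        return random.choice(candidates)

    def update(self, state, action, next_state, next_actions):
        # Reward decays with the number of times the action was executed,
        # steering future selections towards less-visited actions.
        n = self.executed.get((state, action), 0) + 1
        self.executed[(state, action)] = n
        reward = self.max_reward / n
        future = max((self.q.get((next_state, a), self.max_reward)
                      for a in next_actions), default=0.0)
        self.q[(state, action)] = reward + self.discount * future
```

The two parameters (maximum reward and discount factor) correspond to the kind of settings whose choice the paper's statistical analysis investigates: with an optimistic initial value, actions executed often accumulate lower Q values and are selected less, biasing exploration.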


Keywords: Automated GUI testing · Testing metrics · Testing web applications · \(\mathcal {Q}\)-learning



This work was partially funded by projects SHIP (SMEs and HEIs in Innovation Partnerships, ref: EACEA/A2/UHB/CL 554187) and PERTEST (TIN2013-46928-C3-1-R).



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Francisco Almenar
  • Anna I. Esparcia-Alcázar
  • Mirella Martínez
  • Urko Rueda

  1. Research Center on Software Production Methods (PROS), Universitat Politècnica de València, Valencia, Spain
