Automated GUI performance testing

Software Quality Journal

Abstract

A significant body of prior work has devised approaches for automating the functional testing of interactive applications. However, little work exists for automatically testing their performance. Performance testing imposes additional requirements upon GUI test automation tools: the tools have to be able to replay complex interactive sessions, and they have to avoid perturbing the application’s performance. We study the feasibility of using five Java GUI capture and replay tools for GUI performance test automation. Besides confirming the severity of the previously known GUI element identification problem, we also describe a related problem, the temporal synchronization problem, which is of increasing importance for GUI applications that use timer-driven activity. We find that most of the tools we study have severe limitations when used for recording and replaying realistic sessions of real-world Java applications and that all of them suffer from the temporal synchronization problem. However, we find that the most reliable tool, Pounder, causes only limited perturbation and thus can be used to automate performance testing. Based on an investigation of Pounder’s approach, we further improve its robustness and reduce its perturbation. Finally, we demonstrate in a set of case studies that the conclusions about perceptible performance drawn from manual tests still hold when using automated tests driven by Pounder. Besides the significance of our findings to GUI performance testing, the results are also relevant to capture and replay-based functional GUI test automation approaches.
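
To make the temporal synchronization problem concrete, here is a minimal Java capture/replay sketch. It is our own illustration, not the approach of any of the five tools studied, and the class and method names are hypothetical. The capture phase records mouse clicks with wall-clock timestamps; the replay phase re-injects them after the recorded delays, which silently assumes the application reaches the same GUI state at the same relative times. An application with timer-driven activity (an animation, an auto-refreshing view) can violate that assumption, so a coordinate-based click may land on the wrong GUI element.

    // Hypothetical capture/replay sketch; requires Java 16+ (records).
    import java.awt.AWTEvent;
    import java.awt.EventQueue;
    import java.awt.Robot;
    import java.awt.Toolkit;
    import java.awt.event.InputEvent;
    import java.awt.event.MouseEvent;
    import java.util.ArrayList;
    import java.util.List;

    public class NaiveCaptureReplay {
        // One recorded interaction: screen position and wall-clock time.
        record Click(int x, int y, long timeMillis) {}

        private final List<Click> recording = new ArrayList<>();

        // Capture phase: observe all mouse events dispatched by the toolkit.
        void startCapture() {
            Toolkit.getDefaultToolkit().addAWTEventListener(e -> {
                if (e.getID() == MouseEvent.MOUSE_PRESSED) {
                    MouseEvent m = (MouseEvent) e;
                    recording.add(new Click(m.getXOnScreen(), m.getYOnScreen(),
                                            System.currentTimeMillis()));
                }
            }, AWTEvent.MOUSE_EVENT_MASK);
        }

        // Replay phase: re-inject the clicks, preserving the recorded delays.
        // This is where the temporal synchronization problem appears: fixed
        // delays assume the GUI is in the same state as during capture, which
        // fails if timers have meanwhile moved or replaced components.
        void replay() throws Exception {
            Robot robot = new Robot();
            long previous = recording.isEmpty() ? 0 : recording.get(0).timeMillis();
            for (Click c : recording) {
                Thread.sleep(c.timeMillis() - previous); // recorded delay
                previous = c.timeMillis();
                robot.mouseMove(c.x(), c.y());
                robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
                robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);
                // A more robust replayer would, in addition, wait until the
                // event dispatch thread has drained the events triggered by
                // this click before injecting the next one:
                EventQueue.invokeAndWait(() -> { /* EDT idle at this point */ });
            }
        }
    }

Note that even the EventQueue.invokeAndWait barrier in this sketch only guarantees that previously queued events have been dispatched; it cannot anticipate pending javax.swing.Timer firings, which is why synchronizing on the event queue alone remains insufficient for timer-driven applications.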

Notes

  1. The artifacts used in this methodology are available at http://www.sape.inf.usi.ch/pounder and will be submitted to the Community Event-based Testing collection at http://www.comet.unl.edu/.

  2. http://www.abbot.sourceforge.net/.

  3. http://www.jacareto.sourceforge.net/.

  4. http://www.pounder.sourceforge.net/.

  5. http://www.marathontesting.com/.

  6. http://www.jfcunit.sourceforge.net/.

  7. http://www.junit.sourceforge.net/.

  8. The confidence intervals are often so tight that the hatch patterns essentially disappear under the average curve.

  9. Information about and links to our contributions to Pounder are available at http://www.sape.inf.usi.ch/pounder.

References

  • Adamoli, A., & Hauswirth, M. (2010). Trevis: A context tree visualization & analysis framework and its use for classifying performance failure reports. In SoftVis ’10: Proceedings of the ACM symposium on software visualization.

  • Alsmadi, I. (2008). The utilization of user sessions in testing. In ICIS ’08: Proceedings of the seventh IEEE/ACIS international conference on computer and information science (ICIS 2008) (pp. 581–585). Washington, DC, USA: IEEE Computer Society.

  • Belli, F. (2001). Finite-state testing and analysis of graphical user interfaces. In Proceedings of the international symposium on software reliability engineering (ISSRE 2001) (p. 34). Washington, DC, USA: IEEE Computer Society.

  • Blackburn, S. M., Garner, R., Hoffmann, C., Khang, A. M., McKinley, K. S., Bentzur, R., et al. (2006). The DaCapo benchmarks: Java benchmarking development and analysis. In OOPSLA ’06: Proceedings of the 21st annual ACM SIGPLAN conference on object-oriented programming systems, languages, and applications (pp. 169–190). New York, NY, USA: ACM.

  • Brooks, P. A., & Memon, A. M. (2007). Automated GUI testing guided by usage profiles. In ASE ’07: Proceedings of the twenty-second IEEE/ACM international conference on automated software engineering (pp. 333–342). New York, NY, USA: ACM.

  • Brooks, P. A., Robinson, B. P., & Memon, A. M. (2009). An initial characterization of industrial graphical user interface systems. In Proceedings of the international conference on software testing, verification, and validation (ICST) (pp. 11–20). Washington, DC, USA: IEEE Computer Society.

  • Chang, T.-H., Yeh, T., & Miller, R. C. (2010). GUI testing using computer vision. In Proceedings of the 28th international conference on human factors in computing systems, CHI ’10 (pp. 1535–1544). New York, NY, USA: ACM.

  • Chinnapongse, V., Lee, I., Sokolsky, O., Wang, S., & Jones, P. L. (2009). Model-based testing of GUI-driven applications. In Proceedings of the 7th IFIP WG 10.2 international workshop on software technologies for embedded and ubiquitous systems, SEUS ’09 (pp. 203–214). Berlin, Heidelberg: Springer-Verlag.

  • de Oliveira, D. A. S., Crandall, J. R., Wassermann, G., Felix Wu, S., Su, Z., & Chong, F. T. (2006). ExecRecorder: VM-based full-system replay for attack analysis and system recovery. In ASID ’06: Proceedings of the 1st workshop on architectural and system support for improving software dependability (pp. 66–71). New York, NY, USA: ACM.

  • van Deursen, A., & Mesbah, A. (2010). Research issues in the automated testing of Ajax applications. In Proceedings of the 36th conference on current trends in theory and practice of computer science, SOFSEM ’10 (pp. 16–28). Berlin, Heidelberg: Springer-Verlag.

  • El Ariss, O., Xu, D., Dandey, S., Vender, B., McClean, P., & Slator, B. (2010). A systematic capture and replay strategy for testing complex GUI-based Java applications. In Proceedings of the 2010 seventh international conference on information technology: New generations, ITNG ’10 (pp. 1038–1043). Washington, DC, USA: IEEE Computer Society.

  • Elbaum, S., Karre, S., & Rothermel, G. (2003). Improving web application testing with user session data. In Proceedings of the 25th international conference on software engineering, ICSE ’03 (pp. 49–59). Washington, DC, USA: IEEE Computer Society.

  • Elbaum, S., Rothermel, G., Karre, S., & Fisher, M., II. (2005). Leveraging user-session data to support web application testing. IEEE Transactions on Software Engineering, 31, 187–202.

  • Georges, A., Buytaert, D., & Eeckhout, L. (2007). Statistically rigorous Java performance evaluation. In OOPSLA ’07: Proceedings of the 22nd annual ACM SIGPLAN conference on object-oriented programming systems and applications (pp. 57–76). New York, NY, USA: ACM.

  • Grechanik, M., Xie, Q., & Fu, C. (2009). Maintaining and evolving GUI-directed test scripts. In ICSE ’09: Proceedings of the 2009 IEEE 31st international conference on software engineering (pp. 408–418). Washington, DC, USA: IEEE Computer Society.

  • Hackner, D. R., & Memon, A. M. (2008). Test case generator for GUITAR. In Companion of the 30th international conference on software engineering, ICSE Companion ’08 (pp. 959–960). New York, NY, USA: ACM.

  • Jovic, M., Adamoli, A., Zaparanuks, D., & Hauswirth, M. (2010). Automating performance testing of interactive Java applications. In AST ’10: Proceedings of the 5th workshop on automation of software test (pp. 8–15). New York, NY, USA: ACM.

  • Jovic, M., & Hauswirth, M. (2008). Measuring the performance of interactive applications with listener latency profiling. In PPPJ ’08: Proceedings of the 6th international symposium on principles and practice of programming in Java (pp. 137–146). New York, NY, USA: ACM.

  • Kasik, D. J., & George, H. G. (1996). Toward automatic generation of novice user test scripts. In Proceedings of the SIGCHI conference on human factors in computing systems: Common ground, CHI ’96 (pp. 244–251). New York, NY, USA: ACM.

  • Li, P., Huynh, T., Reformat, M., & Miller, J. (2007). A practical approach to testing GUI systems. Empirical Software Engineering, 12, 331–357.

  • Li, K., & Wu, M. (2004). Effective GUI testing automation: Developing an automated GUI testing tool. Alameda, CA, USA: SYBEX Inc.

  • Lindvall, M., Rus, I., Donzelli, P., Memon, A., Zelkowitz, M., Betin-Can, A., et al. (2007). Experimenting with software testbeds for evaluating new technologies. Empirical Software Engineering: An International Journal, 12(4), 417–444.

  • Liu, H., Jin, H., Liao, X., Hu, L., & Yu, C. (2009). Live migration of virtual machine based on full system trace and replay. In HPDC ’09: Proceedings of the 18th ACM international symposium on high performance distributed computing (pp. 101–110). New York, NY, USA: ACM.

  • Liu, C.-H., Kung, D. C., Hsia, P., & Hsu, C.-T. (2000a). Object-based data flow testing of web applications. In Proceedings of the first Asia-Pacific conference on quality software, APAQS ’00 (pp. 7–16). Washington, DC, USA: IEEE Computer Society.

  • Liu, C.-H., Kung, D. C., Hsia, P., & Hsu, C.-T. (2000b). Structural testing of web applications. In Proceedings of the 11th international symposium on software reliability engineering (pp. 84–96). Washington, DC, USA: IEEE Computer Society.

  • Lowell, C., & Stell-Smith, J. (2003). Successful automation of GUI-driven acceptance testing. In Proceedings of the 4th international conference on extreme programming and agile processes in software engineering, XP ’03 (pp. 331–333). Berlin, Heidelberg: Springer-Verlag.

  • Lucca, G. D., Fasolino, A., & Faralli, F. (2002). Testing web applications. In Proceedings of the international conference on software maintenance (ICSM’02) (pp. 310–319). Washington, DC, USA: IEEE Computer Society.

  • Marchetto, A., Ricca, F., & Tonella, P. (2008a). A case study-based comparison of web testing techniques applied to Ajax web applications. International Journal on Software Tools for Technology Transfer, 10, 477–492.

  • Marchetto, A., Tonella, P., & Ricca, F. (2008b). State-based testing of Ajax web applications. In ICST (pp. 121–130). IEEE Computer Society.

  • McMaster, S., & Memon, A. (2008). Call-stack coverage for GUI test suite reduction. IEEE Transactions on Software Engineering, 34, 99–115.

  • McMaster, S., & Memon, A. M. (2009). An extensible heuristic-based framework for GUI test case maintenance. In TESTBEDS ’09: Proceedings of the first international workshop on TESTing techniques & experimentation benchmarks for event-driven software.

  • Memon, A. M. (2008). Automatically repairing event sequence-based GUI test suites for regression testing. ACM Transactions on Software Engineering and Methodology, 18(2), 1–36.

  • Memon, A. M., Pollack, M. E., & Soffa, M. L. (2001). Hierarchical GUI test case generation using automated planning. IEEE Transactions on Software Engineering, 27, 144–155.

  • Memon, A., Nagarajan, A., & Xie, Q. (2005). Automating regression testing for evolving GUI software. Journal of Software Maintenance and Evolution: Research and Practice, 17, 27–64.

  • Memon, A. M., & Xie, Q. (2005). Studying the fault-detection effectiveness of GUI test cases for rapidly evolving software. IEEE Transactions on Software Engineering, 31, 884–896.

  • Mesbah, A., & van Deursen, A. (2009). Invariant-based automatic testing of Ajax user interfaces. In Proceedings of the 31st international conference on software engineering, ICSE ’09 (pp. 210–220). Washington, DC, USA: IEEE Computer Society.

  • Meszaros, G. (2003). Agile regression testing using record & playback. In Companion of the 18th annual ACM SIGPLAN conference on object-oriented programming, systems, languages, and applications, OOPSLA ’03 (pp. 353–360). New York, NY, USA: ACM.

  • Mitchell, A., & Power, J. F. (2004). An approach to quantifying the run-time behaviour of Java GUI applications. In WISICT ’04: Proceedings of the winter international symposium on information and communication technologies (pp. 1–6). Trinity College Dublin.

  • Mu, B., Zhan, M., & Hu, L. (2009). Design and implementation of GUI automated testing framework based on XML. In Proceedings of the 2009 WRI world congress on software engineering—Vol. 04, WCSE ’09 (pp. 194–199). Washington, DC, USA: IEEE Computer Society.

  • Mytkowicz, T., Diwan, A., Hauswirth, M., & Sweeney, P. F. (2009). Producing wrong data without doing anything obviously wrong! In ASPLOS ’09: Proceedings of the 14th international conference on architectural support for programming languages and operating systems (pp. 265–276). New York, NY, USA: ACM.

  • Mytkowicz, T., Diwan, A., Hauswirth, M., & Sweeney, P. F. (2010). Evaluating the accuracy of Java profilers. In PLDI ’10: Proceedings of the 2010 ACM SIGPLAN conference on programming language design and implementation (pp. 187–197). New York, NY, USA: ACM.

  • Narayanasamy, S., Pokam, G., & Calder, B. (2005). BugNet: Continuously recording program execution for deterministic replay debugging. In ISCA ’05: Proceedings of the 32nd annual international symposium on computer architecture (pp. 284–295). Washington, DC, USA: IEEE Computer Society.

  • Nguyen, D. H., Strooper, P., & Suess, J. G. (2010). Model-based testing of multiple GUI variants using the GUI test generator. In Proceedings of the 5th workshop on automation of software test, AST ’10 (pp. 24–30). New York, NY, USA: ACM.

  • Ricca, F., & Tonella, P. (2001). Analysis and testing of web applications. In Proceedings of the 23rd international conference on software engineering, ICSE ’01 (pp. 25–34). Washington, DC, USA: IEEE Computer Society.

  • Ronsse, M., & Bosschere, K. D. (1999). RecPlay: A fully integrated practical record/replay system. ACM Transactions on Computer Systems, 17(2), 133–152.

  • Ronsse, M., De Bosschere, K., Christiaens, M., de Kergommeaux, J. C., & Kranzlmüller, D. (2003). Record/replay for nondeterministic program executions. Communications of the ACM, 46(9), 62–67.

  • Ruiz, A., & Price, Y. W. (2007). Test-driven GUI development with TestNG and Abbot. IEEE Software, 24, 51–57.

  • Ruiz, A., & Price, Y. W. (2008). GUI testing made easy. In Proceedings of the testing: Academic & industrial conference—practice and research techniques (pp. 99–103). Washington, DC, USA: IEEE Computer Society.

  • Sampath, S. (2004). Towards defining and exploiting similarities in web application use cases through user session analysis. In Proceedings of the second international workshop on dynamic analysis.

  • Shehady, R. K., & Siewiorek, D. P. (1997). A method to automate user interface testing using variable finite state machines. In Proceedings of the 27th international symposium on fault-tolerant computing (FTCS ’97), FTCS ’97 (p. 80). Washington, DC, USA: IEEE Computer Society.

  • Steven, J., Chandra, P., Fleck, B., & Podgurski, A. (2000). jRapture: A Capture/Replay tool for observation-based testing. SIGSOFT Software Engineering Notes, 25(5), 158–167.

  • Strecker, J., & Memon, A. M. (2008). Relationships between test suites, faults, and fault detection in GUI testing. In ICST ’08: Proceedings of the first international conference on software testing, verification, and validation. Washington, DC, USA: IEEE Computer Society.

  • Sun, Y., & Jones, E. L. (2004). Specification-driven automated testing of GUI-based Java programs. In Proceedings of the 42nd annual Southeast regional conference, ACM-SE 42 (pp. 140–145). New York, NY, USA: ACM.

  • Sun Microsystems. (2004). Java Virtual Machine Tool Interface (JVMTI), http://www.java.sun.com/j2se/1.5.0/docs/guide/jvmti.

  • Silva, J. C., Saraiva, J., & Campos, J. C. (2009). A generic library for GUI reasoning and testing. In Proceedings of the 2009 ACM symposium on applied computing, SAC ’09 (pp. 121–128). New York, NY, USA: ACM.

  • White, L., & Almezen, H. (2000). Generating test cases for GUI responsibilities using complete interaction sequences. In Proceedings of the international symposium on software reliability engineering (ISSRE 2000) (p. 110). Washington, DC, USA: IEEE Computer Society.

  • Xie, Q. (2006). Developing cost-effective model-based techniques for GUI testing. In Proceedings of the 28th international conference on software engineering, ICSE ’06 (pp. 997–1000). New York, NY, USA: ACM.

  • Xie, Q., & Memon, A. M. (2007). Designing and comparing automated test oracles for GUI-based software applications. ACM Transactions on Software Engineering and Methodology.

  • Xie, Q., & Memon, A. M. (2008). Using a pilot study to derive a GUI model for automated testing. ACM Transactions on Software Engineering and Methodology, 18, 7:1–7:35.

  • Yuan, X., Cohen, M. B., & Memon, A. M. (2009). Towards dynamic adaptive automated test generation for graphical user interfaces. In Proceedings of the IEEE international conference on software testing, verification, and validation workshops (pp. 263–266). Washington, DC, USA: IEEE Computer Society.

  • Yang, J.-T., Huang, J.-L., Wang, F.-J., & Chu, W. C. (1999). An object-oriented architecture supporting web application testing. In 23rd international computer software and applications conference, COMPSAC ’99 (pp. 122–127). Washington, DC, USA: IEEE Computer Society.

  • Yuan, X., & Memon, A. M. (2007). Using GUI run-time state as feedback to generate test cases. In ICSE ’07: Proceedings of the 29th international conference on software engineering (pp. 396–405). Washington, DC, USA: IEEE Computer Society.

  • Yuan, X., & Memon, A. M. (2010). Generating event sequence-based test cases using GUI runtime state feedback. IEEE Transactions on Software Engineering, 36, 81–95.

  • Zaparanuks, D., & Hauswirth, M. (2010). Characterizing the design and performance of interactive Java applications. In ISPASS (pp. 23–32). IEEE Computer Society.

Acknowledgments

This work has been conducted in the context of the Binary Translation and Virtualization cluster of the EU HiPEAC Network of Excellence. It has been funded by the Swiss National Science Foundation under grant number 125259.

Author information

Correspondence to Andrea Adamoli.

Additional information

This article is an extended version of our AST 2010 paper (Jovic et al. 2010).

About this article

Cite this article

Adamoli, A., Zaparanuks, D., Jovic, M. et al. Automated GUI performance testing. Software Qual J 19, 801–839 (2011). https://doi.org/10.1007/s11219-011-9135-x

