Visual vs. DOM-Based Web Locators: An Empirical Study

  • Maurizio Leotta
  • Diego Clerissi
  • Filippo Ricca
  • Paolo Tonella
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8541)

Abstract

Automation in Web testing has been successfully supported by DOM-based tools that allow testers to program the interactions of their test cases with the Web application under test. More recently, a new generation of visual tools has been proposed, in which a test case interacts with the Web application by recognising the images of the widgets to be acted upon and by asserting the expected visual appearance of the result.
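For concreteness, the two styles of interaction can be sketched as follows, using Selenium WebDriver for the DOM-based style and the Sikuli Java API for the visual style (representative tools of each family, not necessarily the ones evaluated here); the URL, element id, and image file names are illustrative assumptions:

    // A minimal sketch of the two locator styles; the URL, element id,
    // and image file names are hypothetical.
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.sikuli.script.FindFailed;
    import org.sikuli.script.Screen;

    public class LocatorStyles {

        // DOM-based: widgets are located through the page structure.
        static void domBasedLogin(WebDriver driver) {
            driver.get("http://example.com/login");
            driver.findElement(By.id("username")).sendKeys("john");
            driver.findElement(By.xpath("//input[@type='submit']")).click();
        }

        // Visual: widgets are located by matching screenshot fragments.
        static void visualLogin(Screen screen) throws FindFailed {
            screen.type("username_field.png", "john"); // image of the text field
            screen.click("login_button.png");          // image of the button
        }

        public static void main(String[] args) throws FindFailed {
            domBasedLogin(new FirefoxDriver());
            visualLogin(new Screen());
        }
    }

Even in this toy example the trade-off is visible: the DOM-based version depends on ids and structure that can change without any visible effect, while the visual version depends on rendered pixels that can change without any structural edit.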

In this paper, we first discuss the inherent robustness of locators created following the visual and the DOM-based approaches; we then empirically compare a visual and a DOM-based tool, taking into account both the cost of initial test suite development from scratch and the cost of test suite maintenance during code evolution. Since visual tools are known to be computationally demanding, we also measure test suite execution time.

Results indicate that DOM-based locators are generally more robust than visual ones, and that DOM-based test cases can be developed from scratch and evolved at lower cost. DOM-based test cases also execute faster. However, depending on the specific features of the Web application under test and on its expected evolution, visual locators may sometimes be the better choice (e.g., when the visual appearance is more stable than the structure).
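To illustrate that last point with a hypothetical evolution scenario (not taken from the paper's experimental objects): suppose a release wraps a button in a new container without altering its appearance. A structure-dependent absolute XPath breaks, an id-based locator survives because the attribute is preserved, and a visual locator survives because the rendered pixels are unchanged:

    // Version 1: <form><input id="search" type="submit" value="Search"/></form>
    // Version 2: <form><div class="toolbar">
    //              <input id="search" type="submit" value="Search"/></div></form>
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.sikuli.script.FindFailed;
    import org.sikuli.script.Screen;

    class LocatorRobustness {
        static void clickSearch(WebDriver driver, Screen screen) throws FindFailed {
            // Absolute XPath: breaks in version 2 (an extra <div> enters the path).
            driver.findElement(By.xpath("/html/body/form/input")).click();
            // Id-based DOM locator: survives, since the id is unchanged.
            driver.findElement(By.id("search")).click();
            // Visual locator: survives, since the button's pixels are unchanged.
            screen.click("search_button.png");
        }
    }

Conversely, a pure restyling (new skin, same markup) breaks the visual locator while leaving both DOM-based ones intact, which is exactly the application-dependent trade-off noted above.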


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Maurizio Leotta (1)
  • Diego Clerissi (1)
  • Filippo Ricca (1)
  • Paolo Tonella (2)

  1. DIBRIS, Università di Genova, Italy
  2. Fondazione Bruno Kessler, Trento, Italy
