Automated Software Engineering, Volume 24, Issue 1, pp 189–231

Continuous validation of performance test workloads

  • Mark D. Syer
  • Weiyi Shang
  • Zhen Ming Jiang
  • Ahmed E. Hassan

DOI: 10.1007/s10515-016-0196-8

Cite this article as:
Syer, M.D., Shang, W., Jiang, Z.M. et al. Autom Softw Eng (2017) 24: 189. doi:10.1007/s10515-016-0196-8

Abstract

The rise of large-scale software systems poses many new challenges for the software performance engineering field. Failures in these systems are often associated with performance issues rather than with feature bugs. Therefore, performance testing has become essential to ensuring the problem-free operation of these systems. However, the performance testing process faces a major challenge: evolving field workloads, in terms of evolving feature sets and usage patterns, often lead to “outdated” tests that are not reflective of the field. Hence, performance analysts must continually validate whether their tests are still reflective of the field. Such validation may be performed by comparing execution logs from the test and the field. However, the size and unstructured nature of execution logs make such a comparison infeasible without automated support. In this paper, we propose an automated approach to validate whether a performance test resembles the field workload and, if not, to determine how they differ. Performance analysts can then update their tests to eliminate such differences, hence creating more realistic tests. We perform six case studies on two large systems: one open-source system and one enterprise system. Our approach identifies differences between performance tests and the field with a precision of 92%, compared to only 61% for the state of the practice and 19% for a conventional statistical comparison.

Keywords

Performance testing · Continuous testing · Workload characterization · Workload comparison · Execution logs

Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  • Mark D. Syer (1)
  • Weiyi Shang (1)
  • Zhen Ming Jiang (2)
  • Ahmed E. Hassan (1)
  1. Software Analysis and Intelligence Lab (SAIL), School of Computing, Queen’s University, Kingston, Canada
  2. Department of Electrical Engineering & Computer Science, York University, Toronto, Canada
