
Performance Regression Unit Testing: A Case Study

  • Vojtěch Horký
  • František Haas
  • Jaroslav Kotrč
  • Martin Lacina
  • Petr Tůma
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8168)

Abstract

Including performance tests as a part of unit testing is technically more difficult than including functional tests: besides the usual challenges of performance measurement, specifying and testing the correctness conditions is also more complex. In earlier work, we proposed a formalism for expressing these conditions, the Stochastic Performance Logic. In this paper, we evaluate our formalism in the context of performance unit testing of JDOM, an open-source project for working with XML data. We focus on the ability to capture and test developer assumptions, and on the practical behavior of the built-in hypothesis testing when the formal assumptions of the tests are not met.
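
The following sketch illustrates the kind of condition such a performance unit test encodes; it is an illustrative example only, not the actual Stochastic Performance Logic tooling or the JDOM test suite. A JUnit test measures two hypothetical variants of a name-checking routine (stand-ins for JDOM Verifier methods) and applies Welch's t-test [17] to their timings. The workload, sample size, and critical value are assumptions, and a real harness would also control warm-up and measurement noise.

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class VerifierPerformanceTest {

        // Hypothetical stand-ins for two Verifier variants under comparison;
        // an actual test would invoke the JDOM Verifier methods instead.
        private static boolean checkNameOld(String name) {
            for (int i = 0; i < name.length(); i++)
                if (!Character.isLetterOrDigit(name.charAt(i))) return false;
            return true;
        }

        private static boolean checkNameNew(String name) {
            return name.chars().allMatch(Character::isLetterOrDigit);
        }

        // Collect per-invocation execution times in nanoseconds.
        // (No warm-up here; a real harness would discard early samples.)
        private static double[] measure(Runnable workload, int samples) {
            double[] times = new double[samples];
            for (int i = 0; i < samples; i++) {
                long start = System.nanoTime();
                workload.run();
                times[i] = System.nanoTime() - start;
            }
            return times;
        }

        private static double mean(double[] xs) {
            double sum = 0;
            for (double x : xs) sum += x;
            return sum / xs.length;
        }

        private static double variance(double[] xs, double m) {
            double sum = 0;
            for (double x : xs) sum += (x - m) * (x - m);
            return sum / (xs.length - 1);
        }

        // Welch's t statistic for two samples with unequal variances [17].
        private static double welchT(double[] a, double[] b) {
            double se = Math.sqrt(variance(a, mean(a)) / a.length
                    + variance(b, mean(b)) / b.length);
            return (mean(a) - mean(b)) / se;
        }

        @Test
        public void newVariantIsNotSlower() {
            double[] baseline = measure(() -> checkNameOld("someElementName"), 10000);
            double[] current = measure(() -> checkNameNew("someElementName"), 10000);
            // One-sided test at roughly the 5% level: 1.65 approximates the
            // critical value of the t distribution for large sample sizes.
            assertTrue("performance regression detected",
                    welchT(current, baseline) < 1.65);
        }
    }

In the approach evaluated in the paper, such comparisons are expressed declaratively as Stochastic Performance Logic formulas rather than coded by hand as above.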

Keywords

Stochastic Performance Logic · Regression testing · Performance testing · Unit testing · Performance evaluation

References

  1. Bergmann, V.: ContiPerf 2 (2013), http://databene.org/contiperf.html
  2. Bulej, L., Bures, T., Keznikl, J., Koubkova, A., Podzimek, A., Tuma, P.: Capturing Performance Assumptions using Stochastic Performance Logic. In: Proc. ICPE 2012. ACM (2012)
  3. Clark, M.: JUnitPerf (2013), http://www.clarkware.com/software/JUnitPerf
  4. Foo, K., Jiang, Z.M., Adams, B., Hassan, A., Zou, Y., Flora, P.: Mining performance regression testing repositories for automated performance analysis. In: Proc. QSIC 2010. IEEE (2010)
  5. Ghaith, S., Wang, M., Perry, P., Murphy, J.: Profile-based, load-independent anomaly detection and analysis in performance regression testing of software systems. In: Proc. CSMR 2013. IEEE (2013)
  6. Heger, C., Happe, J., Farahbod, R.: Automated root cause isolation of performance regressions during software development. In: Proc. ICPE 2013. ACM (2013)
  7. JDOM (2013), http://www.jdom.org
  8. hunterhacker/jdom [Git] (2013), https://github.com/hunterhacker/jdom
  9. hunterhacker/jdom: Verifier performance (2013), https://github.com/hunterhacker/jdom/wiki/Verifier-Performance
  10. JUnit (2013), http://junit.org
  11. Kalibera, T., Bulej, L., Tůma, P.: Benchmark Precision and Random Initial State. In: Proc. SPECTS 2005. SCS (2005)
  12. Kalibera, T., Tůma, P.: Precise Regression Benchmarking with Random Effects: Improving Mono Benchmark Results. In: Horváth, A., Telek, M. (eds.) EPEW 2006. LNCS, vol. 4054, pp. 63–77. Springer, Heidelberg (2006)
  13. Oliveira, A., Petkovich, J.-C., Reidemeister, T., Fischmeister, S.: DataMill: Rigorous performance evaluation made easy. In: Proc. ICPE 2013. ACM (2013)
  14. Porter, A., Yilmaz, C., Memon, A.M., Schmidt, D.C., Natarajan, B.: Skoll: A process and infrastructure for distributed continuous quality assurance. IEEE Trans. Softw. Eng. 33(8), 510–525 (2007)
  15. Puchko, T.: Retrotranslator (2013), http://retrotranslator.sourceforge.net
  16.
  17. Welch, B.L.: The Generalization of Student’s Problem when Several Different Population Variances are Involved. Biometrika 34(1/2), 28–35 (1947)

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Vojtěch Horký (1)
  • František Haas (1)
  • Jaroslav Kotrč (1)
  • Martin Lacina (1)
  • Petr Tůma (1)

  1. Department of Distributed and Dependable Systems, Faculty of Mathematics and Physics, Charles University in Prague, Prague 1, Czech Republic
