Automated Software Engineering, Volume 24, Issue 1, pp. 139–187

Unit testing performance with Stochastic Performance Logic

  • Lubomír Bulej
  • Tomáš Bureš
  • Vojtěch Horký
  • Jaroslav Kotrč
  • Lukáš Marek
  • Tomáš Trojánek
  • Petr Tůma

Abstract

Unit testing is an attractive quality management tool in the software development process; however, practical obstacles make it difficult to use unit tests for performance testing. We present Stochastic Performance Logic, a formalism for expressing performance requirements, together with interpretations that facilitate performance evaluation in the unit test context. The formalism and the interpretations are implemented in a performance testing framework and evaluated in multiple experiments, demonstrating the ability to identify performance differences in realistic unit test scenarios.
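
To illustrate the general idea of a performance assertion in a unit test, the following self-contained Java sketch expresses the assumption that a tested method is at most a given factor slower than a baseline method and checks it with a one-sided Welch's t-test on measured execution times. This is only an illustration under our own assumptions: the class SplSketch, the helper assertAtMostTimesSlower, and the fixed critical value are hypothetical and do not reproduce the actual API or statistical machinery of the SPL framework described in the article.

    import java.util.Arrays;

    // Minimal illustrative sketch, NOT the actual SPL framework API: a unit-test
    // style performance assertion checking the hypothesis "the measured method is
    // at most FACTOR times slower than a baseline method" with a one-sided
    // Welch's t-test on collected execution times.
    public class SplSketch {

        // Collects per-invocation execution times in nanoseconds.
        static double[] measure(Runnable workload, int samples) {
            double[] times = new double[samples];
            for (int i = 0; i < samples; i++) {
                long start = System.nanoTime();
                workload.run();
                times[i] = System.nanoTime() - start;
            }
            return times;
        }

        static double mean(double[] xs) {
            return Arrays.stream(xs).average().orElse(Double.NaN);
        }

        static double variance(double[] xs) {
            double m = mean(xs);
            return Arrays.stream(xs).map(x -> (x - m) * (x - m)).sum() / (xs.length - 1);
        }

        // Throws AssertionError when the one-sided Welch's t-test rejects the
        // hypothesis mean(measured) <= factor * mean(baseline). The critical value
        // 1.645 approximates a one-sided test at alpha = 0.05 for large samples; a
        // real implementation would use the t distribution with Welch-Satterthwaite
        // degrees of freedom.
        static void assertAtMostTimesSlower(double[] measured, double[] baseline, double factor) {
            double m1 = mean(measured), v1 = variance(measured);
            // Scaling the baseline samples by `factor` scales their variance by factor^2.
            double m2 = factor * mean(baseline), v2 = factor * factor * variance(baseline);
            double t = (m1 - m2) / Math.sqrt(v1 / measured.length + v2 / baseline.length);
            if (t > 1.645) {
                throw new AssertionError("performance assumption violated, t = " + t);
            }
        }

        public static void main(String[] args) {
            // Hypothetical workloads standing in for the tested and baseline methods.
            Runnable tested   = () -> { double s = 0; for (int i = 0; i < 10_000; i++) s += Math.sqrt(i); };
            Runnable baseline = () -> { double s = 0; for (int i = 0; i < 10_000; i++) s += Math.sqrt(i); };

            double[] measuredTimes = measure(tested, 50);
            double[] baselineTimes = measure(baseline, 50);

            // Performance assumption: tested is at most 2x slower than baseline.
            assertAtMostTimesSlower(measuredTimes, baselineTimes, 2.0);
            System.out.println("performance assumption holds");
        }
    }

Expressing the requirement relative to a baseline method rather than as an absolute time bound keeps such an assertion portable across machines, which is in the spirit of the relative performance assumptions the abstract describes.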

Keywords

Performance evaluation · Unit testing · Java

Acknowledgments

We gratefully acknowledge the contribution of our colleagues František Haas, Jaroslav Keznikl, Alena Koubková, Martin Lacina and Andrej Podzimek, who were authors of the conference papers that this article combines and extends. Portions of this work were previously published under the copyright of the corresponding publishers and partially supported by the EU Project 257414 ASCENS and the GAČR project P202/10/J042 FERDINAND. This work was partially supported by the COST CZ (LD) Project LD15051 and by the Charles University institutional funding (SVV).


Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  1. Department of Distributed and Dependable Systems, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic
  2. Faculty of Informatics, University of Lugano, Lugano, Switzerland
