Efficient Experiment Selection in Automated Software Performance Evaluations

  • Dennis Westermann
  • Rouven Krebs
  • Jens Happe
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6977)

Abstract

The performance of today’s enterprise applications is influenced by a variety of parameters across different layers. Thus, evaluating the performance of such systems is a time- and resource-consuming process. The number of possible parameter combinations and configurations requires many experiments in order to derive meaningful conclusions. Although many tools for automated performance testing are available, controlling experiments and analyzing results still requires considerable manual effort. In this paper, we apply statistical model inference techniques, namely Kriging and MARS (Multivariate Adaptive Regression Splines), to adaptively select experiments. Our approach automatically selects and conducts experiments based on the accuracy observed for the models inferred from the currently available data. We validated the approach using an industrial ERP scenario. The results demonstrate that we can automatically infer a prediction model with a mean relative error of 1.6% using only 18% of the measurement points in the configuration space.
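
The abstract describes an adaptive loop: fit a statistical model (Kriging or MARS) to the measurements collected so far, check its accuracy, and run further experiments only while the model is not yet accurate enough. The following Python snippet is a minimal sketch of that idea, not the authors' implementation: scikit-learn's GaussianProcessRegressor stands in for the Kriging model (MARS is omitted), and the configuration space, the measure_response_time stub, and the 1% error threshold are illustrative assumptions.

# Minimal sketch of accuracy-driven experiment selection (illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def measure_response_time(config):
    """Placeholder for running one performance experiment (hypothetical)."""
    users, heap_mb = config
    return 50 + 0.8 * users + 2000.0 / heap_mb  # synthetic stand-in

# Candidate configuration space: (concurrent users, heap size in MB).
candidates = np.array([(u, h) for u in range(10, 210, 10)
                              for h in range(256, 2305, 256)], dtype=float)
rng = np.random.default_rng(0)

# Small random initial design plus a held-out validation set.
initial = rng.choice(len(candidates), size=8, replace=False)
measured = {i: measure_response_time(candidates[i]) for i in initial}
validation = {i: measure_response_time(candidates[i])
              for i in rng.choice(len(candidates), size=10, replace=False)}

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(50):  # budget of additional experiments
    X = candidates[list(measured)]
    y = np.array(list(measured.values()))
    gp.fit(X, y)

    # Stop once the mean relative error on the validation points is small enough.
    val_idx = list(validation)
    pred = gp.predict(candidates[val_idx])
    actual = np.array(list(validation.values()))
    mre = np.mean(np.abs(pred - actual) / actual)
    if mre < 0.01:
        break

    # Otherwise measure the unmeasured candidate with the largest predictive uncertainty.
    unmeasured = [i for i in range(len(candidates)) if i not in measured]
    _, std = gp.predict(candidates[unmeasured], return_std=True)
    nxt = unmeasured[int(np.argmax(std))]
    measured[nxt] = measure_response_time(candidates[nxt])

print(f"used {len(measured)} of {len(candidates)} configurations, MRE = {mre:.3f}")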

Keywords

Experiment Selection · Software Performance · Multivariate Adaptive Regression Splines · Mean Relative Error · Enterprise Application

References

  1. Balsamo, S., Di Marco, A., Inverardi, P., Simeoni, M.: Model-Based Performance Prediction in Software Development: A Survey. IEEE Transactions on Software Engineering 30(5), 295–310 (2004)
  2. Becker, S., Koziolek, H., Reussner, R.: The Palladio component model for model-driven performance prediction. Journal of Systems and Software 82, 3–22 (2009)
  3. Courtois, M., Woodside, M.: Using regression splines for software performance analysis and software characterization. In: Proceedings of the 2nd International Workshop on Software and Performance (WOSP 2000), September 17–20, pp. 105–114. ACM Press, New York (2000)
  4. De Smith, M.J., Goodchild, M.F., Longley, P.A.: Geospatial Analysis: A Comprehensive Guide to Principles, Techniques and Software Tools. Troubador Publishing
  5. Denaro, G., Polini, A., Emmerich, W.: Early performance testing of distributed software applications. SIGSOFT Software Engineering Notes 29(1), 94–103 (2004)
  6. Fioukov, A.V., Hammer, D.K., Obbink, H., Eskenazi, E.M.: Performance prediction for software architectures. In: Proceedings of PROGRESS 2002 Workshop (2002)
  7. Friedman, J.H.: Multivariate adaptive regression splines. Annals of Statistics 19(1), 1–141 (1991)
  8. Gorton, I., Liu, A.: Performance Evaluation of Alternative Component Architectures for Enterprise JavaBean Applications. IEEE Internet Computing 7(3), 18–23 (2003)
  9. Groenda, H.: Certification of software component performance specifications. In: Proceedings of the Workshop on Component-Oriented Programming (WCOP 2009), pp. 13–21 (2009)
  10. Happe, J., Westermann, D., Sachs, K., Kapová, L.: Statistical inference of software performance models for parametric performance completions. In: Heineman, G.T., Kofron, J., Plasil, F. (eds.) QoSA 2010. LNCS, vol. 6093, pp. 20–35. Springer, Heidelberg (2010)
  11. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd edn. Springer Series in Statistics. Springer, Heidelberg (2009)
  12. Jin, Y., Tang, A., Han, J., Liu, Y.: Performance evaluation and prediction for legacy information systems. In: Proceedings of ICSE 2007, pp. 540–549. IEEE CS, Washington (2007)
  13. Jung, G., Pu, C., Swint, G.: Mulini: An Automated Staging Framework for QoS of Distributed Multi-Tier Applications. In: ASE Workshop on Automating Service Quality (2007)
  14. Koziolek, H.: Performance evaluation of component-based software systems: A survey. Performance Evaluation (in press, corrected proof, 2009)
  15. Kraft, S., Pacheco-Sanchez, S., Casale, G., Dawson, S.: Estimating service resource consumption from response time measurements. In: Proceedings of VALUETOOLS 2009. ACM, New York (2009)
  16. Krige, D.G.: A Statistical Approach to Some Basic Mine Valuation Problems on the Witwatersrand. Journal of the Chemical, Metallurgical and Mining Society of South Africa 52(6), 119–139 (1951)
  17. Kumar, D., Zhang, L., Tantawi, A.: Enhanced inferencing: Estimation of a workload dependent performance model. In: Proceedings of VALUETOOLS 2009 (2009)
  18. Li, J., Heap, A.D.: A review of spatial interpolation methods for environmental scientists. Geoscience Australia, Canberra (2008)
  19. Miller, B.P., Callaghan, M.D., Cargille, J.M., Hollingsworth, J.K., Irvin, R.B., Karavanic, K.L., Kunchithapadam, K., Newhall, T.: The Paradyn parallel performance measurement tool. Computer 28, 37–46 (1995)
  20. Mos, A., Murphy, J.: A framework for performance monitoring, modelling and prediction of component oriented distributed systems. In: Proceedings of the 3rd International Workshop on Software and Performance (WOSP 2002), pp. 235–236. ACM, New York (2002)
  21. Motulsky, H.J., Ransnas, L.A.: Fitting curves to data using nonlinear regression: a practical and nonmathematical review (1987)
  22. Pacifici, G., Segmuller, W., Spreitzer, M., Tantawi, A.: Dynamic estimation of CPU demand of web traffic. In: Proceedings of VALUETOOLS 2006, p. 26. ACM, New York (2006)
  23. Pebesma, E.J.: Multivariable geostatistics in S: the gstat package. Computers and Geosciences 30, 683–691 (2004)
  24. Reussner, R., Sanders, P., Prechelt, L., Müller, M.S.: SKaMPI: A detailed, accurate MPI benchmark. In: Alexandrov, V.N., Dongarra, J. (eds.) PVM/MPI 1998. LNCS, vol. 1497, pp. 52–59. Springer, Heidelberg (1998)
  25. Sacks, J., Welch, W.J., Mitchell, T.J., Wynn, H.P.: Design and analysis of computer experiments. Statistical Science 4, 409–423 (1989)
  26. Sankarasetty, J., Mobley, K., Foster, L., Hammer, T., Calderone, T.: Software performance in the real world: personal lessons from the performance trauma team. In: Cortellessa, V., Uchitel, S., Yankelevich, D. (eds.) WOSP, pp. 201–208. ACM, New York (2007)
  27. SAP: SAP Standard Application Benchmarks (March 2011), http://www.sap.com/solutions/benchmark
  28. Schneider, T.: SAP Performance Optimization Guide: Analyzing and Tuning SAP Systems. Galileo Press, Bonn (2006)
  29. Switzer, P.: Kriging. John Wiley and Sons Ltd., Chichester (2006)
  30. Thakkar, D., Hassan, A.E., Hamann, G., Flora, P.: A framework for measurement based performance modeling. In: Proceedings of the 7th International Workshop on Software and Performance (WOSP 2008), pp. 55–66. ACM, New York (2008)
  31. Tobler, W.: A computer movie simulating urban growth in the Detroit region. Economic Geography 46(2), 234–240 (1970)
  32. Westermann, D., Happe, J.: Performance Cockpit: Systematic Measurements and Analyses. In: Proceedings of the 2nd ACM/SPEC International Conference on Performance Engineering (ICPE 2011). ACM, New York (2011)
  33. Westermann, D., Happe, J.: Software Performance Cockpit (March 2011), http://www.softwareperformancecockpit.org/
  34. Westermann, D., Happe, J., Hauck, M., Heupel, C.: The performance cockpit approach: A framework for systematic performance evaluations. In: Proceedings of the 36th EUROMICRO SEAA 2010. IEEE CS, Los Alamitos (2010)
  35. Woodside, C.M., Vetland, V., Courtois, M., Bayarov, S.: Resource function capture for performance aspects of software components and sub-systems. In: Dumke, R.R., Rautenstrauch, C., Schmietendorf, A., Scholz, A. (eds.) WOSP 2000 and GWPESD 2000. LNCS, vol. 2047, pp. 239–256. Springer, Heidelberg (2001)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Dennis Westermann (1)
  • Rouven Krebs (1)
  • Jens Happe (1)
  1. SAP Research, Karlsruhe, Germany
