Validating Model-Driven Performance Predictions on Random Software Systems

  • Vlastimil Babka
  • Petr Tůma
  • Lubomír Bulej
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6093)

Abstract

Software performance prediction methods are typically validated by taking an appropriate software system, performing both performance predictions and performance measurements for that system, and comparing the results. Because the validation involves manual steps, it is feasible only for a small number of systems.

To significantly increase the number of systems on which software performance prediction methods can be validated, and thus strengthen the validation, we propose an approach in which the systems are generated together with their models and the validation runs without manual intervention. We describe the approach in detail and present initial results demonstrating both its benefits and its issues.
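The core idea can be illustrated as an automated loop: generate a random system together with its performance model, obtain a prediction from the model and a measurement from the system, and compare the two. The sketch below is a toy Python rendition under assumed names (generate_random_system, predict, measure) and a trivial cost model; it is not the paper's actual generator or prediction method.

    """Toy sketch of the automated validation loop: generate random
    systems with their models, predict, measure, compare. All names
    and the trivial cost model are illustrative assumptions."""

    import random
    import statistics
    import time

    def generate_random_system(rng):
        """Return a toy 'system' (a busy-loop workload) together with
        its 'model' (the nominal per-request cost in seconds)."""
        cost = rng.uniform(0.001, 0.01)  # nominal service demand
        def system():  # the generated 'implementation'
            end = time.perf_counter() + cost
            while time.perf_counter() < end:
                pass
        return system, cost

    def predict(model_cost, requests=100):
        """Trivial model solution: total time = demand x requests."""
        return model_cost * requests

    def measure(system, requests=100):
        """Execute the generated system and time it."""
        start = time.perf_counter()
        for _ in range(requests):
            system()
        return time.perf_counter() - start

    def validate(num_systems=10, seed=42):
        """Compare prediction against measurement on each random
        system, with no manual intervention."""
        rng = random.Random(seed)
        errors = []
        for _ in range(num_systems):
            system, model_cost = generate_random_system(rng)
            predicted = predict(model_cost)
            measured = measure(system)
            errors.append(abs(predicted - measured) / measured)
        return errors

    if __name__ == "__main__":
        errs = validate()
        print(f"mean relative prediction error: {statistics.mean(errs):.1%}")

In the actual approach, the generated artifacts are full component-based systems and their performance models rather than single workloads, but the loop structure, and the fact that it runs unattended over many generated systems, is the same.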

Keywords

performance modeling, performance validation, MDD



Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Vlastimil Babka (1)
  • Petr Tůma (1)
  • Lubomír Bulej (1, 2)

  1. Department of Distributed and Dependable Systems, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic
  2. Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, Czech Republic