
A DSL-Based Framework for Performance Assessment

  • Hamid El Maazouz
  • Guido Wachsmuth
  • Martin Sevenich
  • Dalila Chiadmi
  • Sungpack Hong
  • Hassan Chafi
Conference paper
Part of the Learning and Analytics in Intelligent Systems book series (LAIS, volume 7)

Abstract

Performance assessment is an essential verification practice in both research and industry for software quality assurance. Experiment setups for performance assessment tend to be complex: a typical experiment has to be run across a variety of hardware configurations, software versions, system settings, and input parameters. Typical approaches to performance assessment are script-based. They do not document all variants explicitly, which makes it hard to analyze and reproduce experiment results correctly, and they tend to be monolithic, which makes it hard to extend experiment setups systematically and to reuse features such as result storage and analysis consistently across experiments. In this paper, we present a generic approach and a DSL-based framework for performance assessment. The DSL helps the user to specify and organize the variants of an experiment setup explicitly. The Runtime module of our framework executes experiments and stores their results together with the corresponding setups in a database. Database queries provide easy access to the results of previous experiments and support correct analysis of results in the context of their experiment setups. Furthermore, we describe operations for common problems in performance assessment, such as outlier detection. At Oracle, we have successfully instantiated the framework and use it for nightly performance assessment of PGX [6, 12], a toolkit for parallel graph analytics.
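
The excerpt does not show the DSL itself. Purely as a hypothetical illustration of the workflow the abstract describes (explicit variant declarations, a runtime that executes every combination, and results stored together with their setups for later querying), the following minimal Python sketch may help; every name and value in it, such as the setup dictionary and run_once, is an assumption for illustration and is not taken from the paper or from PGX.

    # Hypothetical sketch (not the paper's actual DSL or runtime): an experiment
    # setup is the cross product of explicitly declared variants; each run is
    # stored together with its setup so results can be queried in context later.
    import itertools
    import sqlite3
    import statistics
    import time

    # Explicit variant declarations (illustrative values only).
    setup = {
        "dataset": ["graph_small.csv", "graph_large.csv"],
        "threads": [1, 8],
    }

    def run_once(dataset, threads):
        """Stand-in workload; a real runtime would invoke the system under test."""
        start = time.perf_counter()
        sum(range(100_000 * threads))  # placeholder work
        return time.perf_counter() - start

    # Store each result together with the setup that produced it.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE results (dataset TEXT, threads INTEGER, seconds REAL)")
    for dataset, threads in itertools.product(*setup.values()):
        for _ in range(3):  # repeated runs for stability
            db.execute("INSERT INTO results VALUES (?, ?, ?)",
                       (dataset, threads, run_once(dataset, threads)))

    # Queries give access to results in the context of their setup, e.g. the
    # median runtime per variant (a simple guard against outliers; the paper's
    # own outlier-detection operations are not reproduced here).
    for dataset, threads in itertools.product(*setup.values()):
        rows = [r[0] for r in db.execute(
            "SELECT seconds FROM results WHERE dataset = ? AND threads = ?",
            (dataset, threads))]
        print(dataset, threads, round(statistics.median(rows), 6))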

Keywords

MDE · DSL · Syntax · Compiler · Performance · Experiment Design · Assessment · Automation

References

  1. Barve, Y., Shekhar, S., Khare, S., Bhattacharjee, A., Gokhale, A.: UPSARA: a model-driven approach for performance analysis of cloud-hosted applications. In: 2018 IEEE/ACM 11th International Conference on Utility and Cloud Computing (UCC), pp. 1–10. IEEE (2018)
  2. Bianculli, D., Binder, W., Drago, M.L.: SOABench: performance evaluation of service-oriented middleware made easy. In: 2010 ACM/IEEE 32nd International Conference on Software Engineering, vol. 2, pp. 301–302. IEEE (2010)
  3. Ferme, V., Pautasso, C.: Towards holistic continuous software performance assessment. In: Proceedings of the 8th ACM/SPEC International Conference on Performance Engineering Companion, pp. 159–164. ACM (2017)
  4. Grønli, T.-M., Ghinea, G.: Meeting quality standards for mobile application development in businesses: a framework for cross-platform testing. In: 2016 49th Hawaii International Conference on System Sciences (HICSS), pp. 5711–5720. IEEE (2016)
  5. Hong, S., Chafi, H., Sedlar, E., Olukotun, K.: Green-Marl: a DSL for easy and efficient graph analysis. ACM SIGARCH Comput. Archit. News 40(1), 349–362 (2012)
  6. Hong, S., Depner, S., Manhardt, T., van der Lugt, J., Verstraaten, M., Chafi, H.: PGX.D: a fast distributed graph processing engine. In: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, p. 58. ACM (2015)
  7. Kats, L.C.L., Vermaas, R., Visser, E.: Integrated language definition testing: enabling test-driven language development. ACM SIGPLAN Not. 46(10), 139–154 (2011)
  8. Kats, L.C.L., Visser, E.: The Spoofax language workbench: rules for declarative specification of languages and IDEs. ACM SIGPLAN Not. 45, 444–463 (2010)
  9. Kersten, M.L., Kemper, A., Markl, V., Nica, A., Poess, M., Sattler, K.-U.: Tractor pulling on data warehouses. In: Proceedings of the Fourth International Workshop on Testing Database Systems, p. 7. ACM (2011)
  10. Martin, R.C.: Clean Code: A Handbook of Agile Software Craftsmanship. Pearson Education, London (2009)
  11. Ousterhout, J.K.: Scripting: higher level programming for the 21st century. Computer 31(3), 23–30 (1998)
  12. Raman, R., van Rest, O., Hong, S., Wu, Z., Chafi, H., Banerjee, J.: PGX.ISO: parallel and efficient in-memory engine for subgraph isomorphism. In: Proceedings of Workshop on GRAph Data management Experiences and Systems, pp. 1–6. ACM (2014)
  13. Schmidt, D.C.: Model-driven engineering. IEEE Computer 39(2), 25 (2006)
  14. da Silveira, M.B., et al.: Canopus: a domain-specific language for modeling performance testing (2016)
  15. Van Deursen, A., Klint, P., Visser, J.: Domain-specific languages: an annotated bibliography. ACM SIGPLAN Not. 35(6), 26–36 (2000)
  16. van Rest, O., Hong, S., Kim, J., Meng, X., Chafi, H.: PGQL: a property graph query language. In: Proceedings of the Fourth International Workshop on Graph Data Management Experiences and Systems, p. 7. ACM (2016)
  17. Wienke, J., Wigand, D., Koster, N., Wrede, S.: Model-based performance testing for robotics software components. In: 2018 Second IEEE International Conference on Robotic Computing (IRC), pp. 25–32. IEEE (2018)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Hamid El Maazouz 1, 2
  • Guido Wachsmuth 2
  • Martin Sevenich 2
  • Dalila Chiadmi 1
  • Sungpack Hong 2
  • Hassan Chafi 2
  1. Ecole Mohammadia d’ingénieurs, Rabat, Morocco
  2. Oracle Labs, Redwood Shores, USA
