From Performance to Dependability Benchmarking: A Mandatory Path

  • Marco Vieira
  • Henrique Madeira
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5895)

Abstract

Work on performance benchmarking started long ago. Ranging from simple benchmarks that target a specific system or component to complex benchmarks for entire infrastructures, performance benchmarks have helped improve successive generations of systems. However, the fact that most systems nowadays must guarantee high availability and reliability makes it necessary to shift the focus from measuring pure performance to measuring both performance and dependability. Research on dependability benchmarking started at the beginning of this decade and has already led to the proposal of several benchmarks. However, no dependability benchmark has yet achieved the status of a real benchmark endorsed by a standardization body or corporation. In this paper we argue that standardization bodies must shift focus and start including dependability metrics in their benchmarks. We present an overview of the state of the art on dependability benchmarking and define a set of research needs and challenges that must be addressed to establish real dependability benchmarks.

Keywords

Benchmarking · Dependability · Performance · Metrics



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Marco Vieira (1)
  • Henrique Madeira (1)
  1. CISUC, Department of Informatics Engineering, University of Coimbra, Portugal
