How to Advance TPC Benchmarks with Dependability Aspects

  • Raquel Almeida
  • Meikel Poess
  • Raghunath Nambiar
  • Indira Patil
  • Marco Vieira
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6417)

Abstract

Transactional systems are at the core of most organizations' information systems. Although it is generally acknowledged that failures in these systems often have a significant impact on both the revenue and the reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still focus on reporting raw performance. Every TPC benchmark must pass a set of dependability-related tests (to verify the ACID properties), but not all benchmarks require measuring how the system performs under these tests. While TPC-E measures the recovery time from some system failures, TPC-H and TPC-C only require that such recovery be functionally correct. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that systems should nowadays be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, by defining a generic specification that could be adjoined to any TPC benchmark.

Keywords

Industry standard benchmarks · ACID properties · Durability · Dependability



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Raquel Almeida (1)
  • Meikel Poess (2)
  • Raghunath Nambiar (3)
  • Indira Patil (4)
  • Marco Vieira (1)
  1. CISUC - Department of Informatics Engineering, University of Coimbra, Portugal
  2. Oracle Corporation, Redwood Shores, USA
  3. Cisco Systems, Inc., San Jose, USA
  4. Hewlett Packard Company, Cupertino, USA
