Benchmarking Using Basic DBMS Operations

  • Alain Crolotte
  • Ahmad Ghazal
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6417)

Abstract

The TPC-H benchmark has proved successful in the decision-support area. Many commercial database vendors and their hardware partners have used it to demonstrate the superiority and competitive edge of their products. Over time, however, TPC-H has become less representative of industry trends as vendors keep tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model currently comprises 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. It is based on the TPC-H data model because of its maturity and well-understood data-generation capability. We also propose metrics for evaluating single-system performance and for comparing two systems. Finally, we illustrate the effectiveness of this model with experimental results comparing two systems under different conditions.
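To make the idea concrete, the sketch below times two "basic operation" probes of the kind the abstract describes, a full scan and a grouped aggregation, against an in-memory stand-in for a TPC-H-style LINEITEM table, and computes one plausible cross-system comparison metric. The table contents, the two probe queries, and the geometric-mean speedup metric are all illustrative assumptions, not the paper's actual XMarq queries or metric definitions; the real benchmark populates its tables with the TPC-H dbgen generator.

```python
import math
import sqlite3
import time

def timed(conn, sql):
    """Run a query to completion and return its wall-clock time in seconds."""
    start = time.perf_counter()
    conn.execute(sql).fetchall()
    return time.perf_counter() - start

# In-memory stand-in for a TPC-H-style LINEITEM table (synthetic data;
# the actual benchmark uses dbgen-generated data at a chosen scale factor).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lineitem ("
             "l_orderkey INTEGER, l_quantity REAL, "
             "l_extendedprice REAL, l_returnflag TEXT)")
conn.executemany(
    "INSERT INTO lineitem VALUES (?, ?, ?, ?)",
    [(i, i % 50 + 1.0, (i % 50 + 1) * 10.0, "NRA"[i % 3])
     for i in range(100_000)])

# Two probes in the spirit of the paper's basic operations:
scan_t = timed(conn, "SELECT COUNT(*) FROM lineitem WHERE l_quantity > 25")
agg_t = timed(conn, "SELECT l_returnflag, SUM(l_extendedprice) "
                    "FROM lineitem GROUP BY l_returnflag")

def geomean_speedup(times_a, times_b):
    """Geometric mean of per-query speedups of system B over system A
    (a hypothetical comparison metric, not the paper's definition)."""
    ratios = [a / b for a, b in zip(times_a, times_b)]
    return math.exp(sum(map(math.log, ratios)) / len(ratios))

print(f"scan: {scan_t:.4f}s  aggregation: {agg_t:.4f}s")
print(f"speedup example: {geomean_speedup([1.0, 4.0], [0.5, 2.0]):.1f}x")
```

The geometric mean is used here rather than an arithmetic mean so that no single query dominates the aggregate figure, a concern the metric-selection literature cited below also discusses.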

Keywords

Benchmark Model, Star Schema, Aggregation Query, Secondary Index, Index Access

References

  1. TPC Benchmark D and H (Decision Support), Transaction Processing Performance Council, http://www.tpc.org
  2. Gao, K., Pandis, I.: Implementation of TPC-H and TPC-C toolkits, CS and ECE Departments, Carnegie Mellon University, http://www.cs.cmu.edu/~ipandis/courses/15823/project_final_paper.pdf
  3. Ghazal, A., Seid, D., Ramesh, B., Crolotte, A., Koppuravuri, M., Vinod, G.: Dynamic plan generation for parameterized queries. In: Proceedings of the 35th SIGMOD International Conference on Management of Data, pp. 909–916 (2009)
  4. Crolotte, A.: Issues in Metric Selection and the TPC-D Single Stream Power, http://www.tpc.org
  5. Crolotte, A.: Issues in Benchmark Metric Selection. In: Nambiar, R.O., Poess, M. (eds.) TPCTC 2009. LNCS, vol. 5895, pp. 146–152. Springer, Heidelberg (2009)
  6. The Data Warehouse Challenge, Specification document revision 1.4.c, Data Challenge Inc. (April 9, 1998)
  7. O’Neil, P., O’Neil, E., Chen, X., Revilak, S.: The Star Schema Benchmark and Augmented Fact Table Indexing. In: Nambiar, R.O., Poess, M. (eds.) TPCTC 2009. LNCS, vol. 5895, pp. 237–252. Springer, Heidelberg (2009)
  8. Huppler, K.: The Art of Building a Good Benchmark. In: Nambiar, R.O., Poess, M. (eds.) TPCTC 2009. LNCS, vol. 5895, pp. 18–30. Springer, Heidelberg (2009)
  9. Ballinger, C.: Relevance of the TPC-D Benchmark Queries: The Questions You Ask Every Day, NCR Parallel Systems, http://www.tpc.org/information/other/articles/TPCDart_0197.asp
  10. Poess, M., Floyd, C.: New TPC Benchmarks for Decision Support and Web Commerce. In: SIGMOD, pp. 64–71 (2000)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Alain Crolotte (1)
  • Ahmad Ghazal (1)
  1. Teradata Corporation, El Segundo