Benchmarks for Transaction and Analytical Processing Systems

Anja Bog

Chapter in the In-Memory Data Management Research book series (IMDM)

Abstract

As presented in Chap. 1, the goal of this thesis is to analyze and compare the behavior of databases in mixed workload scenarios as a basis for evaluating logical database design decisions. Benchmarks provide a method for such comparisons: a benchmark is “a standardized problem or test that serves as a basis for evaluation or comparison (as of computer system performance)” [141].

Bibliography

  5. T.L. Anderson, The hypermodel benchmark, in Proceedings of the 2nd International Conference on Extending Database Technology: Advances in Database Technology, EDBT ’90, Venice (Springer, New York, 1990), pp. 317–331
  8. D. Bausch, I. Petrov, A. Buchmann, On the performance of database query processing algorithms on flash solid state disks, in Proceedings of the 2011 22nd International Workshop on Database and Expert Systems Applications, DEXA ’11, Toulouse (IEEE Computer Society, Washington, D.C., 2011), pp. 139–144
  10. D. Bitton, C. Turbyfill, Design and analysis of multi-user benchmarks for database systems. Technical report, Cornell University, Ithaca, 1984
  11. D. Bitton, D.J. DeWitt, C. Turbyfill, Benchmarking database systems: a systematic approach, in Proceedings of the 9th International Conference on Very Large Data Bases, Florence (Morgan Kaufmann, San Francisco, 1983), pp. 8–19
  20. M. Böhm, D. Habich, W. Lehner, U. Wloka, DIPBench toolsuite: a framework for benchmarking integration systems, in ICDE, Cancun, ed. by G. Alonso, J.A. Blakeley, A.L.P. Chen (IEEE, 2008), pp. 1596–1599
  23. M.J. Carey, D.J. DeWitt, J.F. Naughton, The OO7 benchmark. ACM SIGMOD Rec. 22(2), 12–21 (1993)
  25. R.G.G. Cattell, J. Skeen, Object operations benchmark. ACM Trans. Database Syst. 17(1), 1–31 (1992)
  26. E. Cecchet, G. Candea, A. Ailamaki, Middleware-based database replication: the gaps between theory and practice, in SIGMOD ’08: Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, Vancouver (ACM, New York, 2008), pp. 739–752
  44. Coglin Mill, RODIN data asset management high performance extract/transform/load benchmark. Report, June 2002. Retrieved from http://www.coglinmill.com/pdf_files/RODIN%20i890%20Benchmark%20June%202002.pdf. Last accessed 15 June 2012
  45. R. Cole, F. Funke, L. Giakoumakis, W. Guy, A. Kemper, S. Krompass, H. Kuno, R. Nambiar, T. Neumann, M. Poess, K.-U. Sattler, M. Seibold, E. Simon, F. Waas, The mixed workload CH-BenCHmark, in Proceedings of the Fourth International Workshop on Testing Database Systems, DBTest ’11, Athens (ACM, New York, 2011), pp. 8:1–8:6
  49. J. Darmont, M. Schneider, Object-oriented database benchmarks, in Advanced Topics in Database Research, vol. 1, ed. by K. Siau (IGI, Hershey, 2002), pp. 34–57
  52. D.J. DeWitt, The Wisconsin benchmark: past, present, and future, in Database and Transaction Processing System Performance Handbook, ed. by J. Gray (Morgan Kaufmann, San Mateo, 1991)
  55. J. Dittrich, A. Jindal, Towards a one size fits all database architecture, in Outrageous Ideas and Vision Track, 5th Biennial Conference on Innovative Data Systems Research (CIDR 11), Asilomar, 2011. Online proceedings. Retrieved from http://www.cidrdb.org/cidr2011/program.html. Last accessed 15 June 2012
  56. D. Dominguez-Sal, N. Martinez-Bazan, V. Muntes-Mulero, P. Baleta, J.L. Larriba-Pey, A discussion on the design of graph database benchmarks, in Proceedings of the Second TPC Technology Conference on Performance Evaluation, Measurement and Characterization of Complex Systems, TPCTC ’10, Singapore (Springer, Berlin/Heidelberg, 2011), pp. 25–40
  58. S. Elnaffar, P. Martin, B. Schiefer, S. Lightstone, Is it DSS or OLTP: automatically identifying DBMS workloads. J. Intell. Inf. Syst. 30(3), 249–271 (2008). doi:10.1007/s10844-006-0036-6
  61. C.D. French, “One size fits all” database architectures do not work for DSS, in Proceedings of the 1995 ACM SIGMOD International Conference on Management of Data, SIGMOD ’95, San Jose (ACM, New York, 1995), pp. 449–450
  62. C.D. French, Teaching an OLTP database kernel advanced data warehousing techniques, in Proceedings of the Thirteenth International Conference on Data Engineering, ICDE ’97, Washington, D.C. (IEEE Computer Society, 1997), pp. 194–198
  64. F. Funke, A. Kemper, S. Krompass, H. Kuno, T. Neumann, A. Nica, M. Poess, M. Seibold, Metrics for measuring the performance of the mixed workload CH-BenCHmark, in Proceedings of the 3rd TPC Technology Conference on Performance Evaluation and Benchmarking (TPC TC), Seattle, 2011
  65. F. Funke, A. Kemper, T. Neumann, HyPer-sonic combined transaction AND query processing. PVLDB 4(12), 1367–1370 (2011)
  66. F. Funke, A. Kemper, T. Neumann, Benchmarking hybrid OLTP & OLAP database systems, in GI-Fachtagung Datenbanksysteme für Business, Technologie und Web (BTW), Kaiserslautern, ed. by T. Härder, W. Lehner, B. Mitschang, H. Schöning, H. Schwarz. Volume 180 of LNI (GI, 2011), pp. 390–409
  67. J. Garcia, Role of in-memory analytics in big data analysis. Technology Evaluation Centers (TEC) article, Mar 2012
  74. J. Gray (ed.), The Benchmark Handbook for Database and Transaction Systems, 2nd edn. (Morgan Kaufmann, San Mateo, 1993)
  80. M. Grund, J. Krüger, H. Plattner, A. Zeier, P. Cudre-Mauroux, S. Madden, HYRISE: a main memory hybrid storage engine. Proc. VLDB Endow. 4(2), 105–116 (2010)
  91. P. Helland, If you have too much data, then “good enough” is good enough. Commun. ACM 54(6), 40–47 (2011)
  94. T. Hogan, Overview of TPC benchmark E: the next generation of OLTP benchmarks, in Performance Evaluation and Benchmarking, ed. by R. Nambiar, M. Poess. Volume 5895 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2009), pp. 84–98
  95. W.W. Hsu, A.J. Smith, H.C. Young, Characteristics of production database workloads and the TPC benchmarks. IBM Syst. J. 40(3), 781–802 (2001)
  96. K. Huppler, The art of building a good benchmark, in Performance Evaluation and Benchmarking, ed. by R. Nambiar, M. Poess. Volume 5895 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2009), pp. 18–30
  98. IBM, IBM solidDB. Product website, n.d. Retrieved from http://www-01.ibm.com/software/data/soliddb/soliddb/. Last accessed 15 June 2012
  99. IBM Software Group Information Management, Telecommunication application transaction processing (TATP) benchmark description. Version 1.0, Mar 2009. Retrieved from http://tatpbenchmark.sourceforge.net. Last accessed 15 June 2012
  103. W.H. Inmon, The operational data store. Inf. Manag. Mag. (1998)
  116. A. Kemper, T. Neumann, HyPer: a hybrid OLTP & OLAP main memory database system based on virtual memory snapshots, in Proceedings of the 2011 IEEE 27th International Conference on Data Engineering, ICDE ’11, Hannover (IEEE Computer Society, Washington, D.C., 2011), pp. 195–206
  124. J. Krueger, M. Grund, A. Zeier, H. Plattner, Enterprise application-specific data management, in Proceedings of the 2010 14th IEEE International Enterprise Distributed Object Computing Conference, EDOC ’10, Vitoria (IEEE Computer Society, Washington, D.C., 2010), pp. 131–140
  125. J. Krüger, C. Kim, M. Grund, N. Satish, D. Schwalb, J. Chhugani, H. Plattner, P. Dubey, A. Zeier, Fast updates on read-optimized databases using multi-core CPUs. Proc. VLDB Endow. 5(1), 61–72 (2011)
  131. L. Liu, M.T. Özsu (eds.), Encyclopedia of Database Systems (Springer, New York/London, 2009)
  140. McObject, eXtremeDB real-time embedded database. Product website, 2012. Retrieved from http://www.mcobject.com/embedded_database_products. Last accessed 15 June 2012
  141. Merriam-Webster, Benchmark, in Merriam-Webster.com, 2011. Retrieved from http://www.merriam-webster.com/dictionary/benchmark. Last accessed 15 June 2012
  144. E. Mills, G. Shamshoian, M. Blazek, P. Naughton, R. Seese, W. Tschudi, D. Sartor, The business case for energy management in high-tech industries. Energy Effic. 1, 5–20 (2008)
  148. R.O. Nambiar, M. Poess, The making of TPC-DS, in Proceedings of the 32nd International Conference on Very Large Data Bases, VLDB ’06, ed. by U. Dayal, K.-Y. Whang, D.B. Lomet, G. Alonso, G.M. Lohman, M.L. Kersten, S.K. Cha, Y.-K. Kim (VLDB Endowment, Seoul, 2006), pp. 1049–1058
  150. R. Nambiar, N. Wakou, F. Carman, M. Majdalany, Transaction Processing Performance Council (TPC): state of the council 2010, in Performance Evaluation, Measurement and Characterization of Complex Systems, ed. by R. Nambiar, M. Poess. Volume 6417 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2011), pp. 1–9
  156. P.E. O’Neil, E.J. O’Neil, X. Chen, The star schema benchmark (SSB), Jan 2007. Retrieved from http://www.cs.umb.edu/~poneil/StarSchemaB.pdf. Last accessed 15 June 2012
  159. Oracle, Oracle TimesTen in-memory database. Product website, n.d. Retrieved from http://www.oracle.com/us/products/database/timesten-066524.html. Last accessed 15 June 2012
  160. Oracle, Oracle applications benchmark. Benchmark website, n.d. Retrieved from http://www.oracle.com/us/solutions/benchmark/apps-benchmark/index-166919.html. Last accessed 15 June 2012
  164. I. Petrov, G. Almeida, A. Buchmann, U. Graef, Building large storage based on flash disks, in Proceedings of the First International Workshop on Accelerating Data Management Systems Using Modern Processor and Storage Architectures, ADMS 2010, Singapore, 2010
  165. I. Petrov, R. Gottstein, T. Ivanov, D. Bausch, A. Buchmann, Page size selection for OLTP databases on SSD RAID storage. J. Inf. Data Manag. 2(1), 11 (2011)
  167. H. Plattner, SanssouciDB: an in-memory database for processing enterprise workloads, in 14. GI-Fachtagung Datenbanksysteme für Business, Technologie und Web (BTW), ed. by T. Härder, W. Lehner, B. Mitschang, H. Schöning, H. Schwarz. Volume 180 of LNI, Kaiserslautern (GI, 2011), pp. 2–21
  169. M. Poess, C. Floyd, New TPC benchmarks for decision support and web commerce. ACM SIGMOD Rec. 29(4), 64–71 (2000)
  170. M. Poess, B. Smith, L. Kollar, P. Larson, TPC-DS, taking decision support benchmarking to the next level, in Proceedings of the 2002 ACM SIGMOD International Conference on Management of Data, SIGMOD ’02, Madison (ACM, New York, 2002), pp. 582–587
  176. U. Röhm, OLAP with a database cluster, in Database Technologies: Concepts, Methodologies, Tools, and Applications, ed. by J. Erickson (IGI Global, Hershey, 2009), pp. 829–846
  177. K. Sachs, S. Kounev, J. Bacon, A. Buchmann, Performance evaluation of message-oriented middleware using the SPECjms2007 benchmark. Perform. Eval. 66(8), 410–434 (2009)
  178. SAP, SAP standard application benchmarks, n.d. Retrieved from http://www.sap.com/solutions/benchmark/index.epx. Last accessed 15 June 2012
  179. SAP, SAP in-memory computing. Product website, n.d. Retrieved from http://www.sap.com/solutions/technology/in-memory-computing-platform/hana/overview/index.epx. Last accessed 15 June 2012
  190. V. Sikka, F. Färber, W. Lehner, S.K. Cha, T. Peh, C. Bornhövd, Efficient transaction processing in SAP HANA database: the end of a column store myth, in Proceedings of the 2012 International Conference on Management of Data, SIGMOD ’12, Scottsdale (ACM, New York, 2012), pp. 731–742
  193. Standard Performance Evaluation Corporation, SPEC – power and performance, user guide, SPECpower_ssj2008 V1.11, Sept 2011. Retrieved from http://www.spec.org/power/docs/SPECpower_ssj2008-User_Guide.pdf. Last accessed 15 June 2012
  194. Standard Performance Evaluation Corporation (SPEC), Server efficiency rating tool (SERT) design document beta-1, Sept 2011. Retrieved from http://www.spec.org/sert/. Last accessed 15 June 2012
  195. Standard Performance Evaluation Corporation, Corporation website, 2012. Retrieved from http://www.spec.org/. Last accessed 15 June 2012
  197. M. Stonebraker, A new direction for TPC?, in Performance Evaluation and Benchmarking, ed. by R. Nambiar, M. Poess. Volume 5895 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2009), pp. 11–17
  205. A. Thomasian, Performance analysis of database systems, in Performance Evaluation: Origins and Directions, ed. by G. Haring, C. Lindemann, M. Reiser. Volume 1769 of Lecture Notes in Computer Science (Springer, Berlin/Heidelberg, 2000), pp. 305–327
  207. Transaction Processing Performance Council, TPC benchmark W (Web commerce). Specification, version 1.8, Feb 2002. Retrieved from http://www.tpc.org/tpcw/. Last accessed 15 June 2012
  208. Transaction Processing Performance Council, TPC benchmark DS (decision support). Draft specification, revision 32, 2005. Retrieved from http://tpc.org/tpcds. Last accessed 15 June 2012
  209. Transaction Processing Performance Council, TPC benchmark C. Standard specification, revision 5.11, Feb 2010. Retrieved from http://tpc.org/tpcc. Last accessed 15 June 2012
  210. Transaction Processing Performance Council, TPC benchmark E. Standard specification, version 1.12.0, June 2010. Retrieved from http://tpc.org/tpce. Last accessed 15 June 2012
  211. Transaction Processing Performance Council, TPC-energy specification. Standard specification, version 1.2.0, June 2010. Retrieved from http://www.tpc.org/tpc_energy. Last accessed 15 June 2012
  212. Transaction Processing Performance Council, TPC benchmark H (decision support). Standard specification, revision 2.14.0, Feb 2011. Retrieved from http://tpc.org/tpch. Last accessed 15 June 2012
  213. Transaction Processing Performance Council, Council website, 2012. Retrieved from http://tpc.org. Last accessed 15 June 2012
  215. P. Vassiliadis, A. Karagiannis, V. Tziovara, A. Simitsis, Towards a benchmark for ETL workflows, in Proceedings of the Fifth International Workshop on Quality in Databases, QDB, Vienna, 2007, ed. by V. Ganti, F. Naumann, pp. 49–60
  216. M. Vieira, H. Madeira, A dependability benchmark for OLTP application environments, in Proceedings of the 29th International Conference on Very Large Databases, VLDB ’03, ed. by J.C. Freytag, P.C. Lockemann, S. Abiteboul, M.J. Carey, P.G. Selinger, A. Heuer (VLDB Endowment, Berlin, 2003), pp. 742–753
  217. M. Vieira, H. Madeira, From performance to dependability benchmarking: a mandatory path, in Performance Evaluation and Benchmarking, ed. by R. Nambiar, M. Poess (Springer, Berlin/Heidelberg, 2009), pp. 67–83
  218. M. Vieira, H. Madeira, K. Sachs, S. Kounev, Resilience benchmarking, in Resilience Assessment and Evaluation of Computing Systems, ed. by A. Avritzer, A. van Moorsel, M. Vieira, K. Wolter (Springer, Berlin/Heidelberg, 2012), pp. 283–301
  219. VoltDB, The NewSQL database for high velocity applications. Product website, n.d. Retrieved from http://voltdb.com/products-services. Last accessed 15 June 2012
  222. M. Winslett, David DeWitt speaks out: on rethinking the CS curriculum, why the database community should be proud, why query optimization doesn’t work, how supercomputing funding is sometimes very poorly spent, how he’s not a good coder and isn’t smart enough to do DB theory, and more. ACM SIGMOD Rec. 31(2), 50–62 (2002)
  224. L. Wyatt, B. Caufield, D. Pol, Principles for an ETL benchmark, in Performance Evaluation and Benchmarking, ed. by R. Nambiar, M. Poess (Springer, Berlin/Heidelberg, 2009), pp. 183–198
  226. N. Yuhanna, M. Gilpin, D. D’Silva, TPC benchmarks don’t matter anymore: features and cost are key factors when choosing a DBMS. Forrester Research, Mar 2009

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

Anja Bog, Hasso Plattner Institute, University of Potsdam, Potsdam, Germany