
Benchmarks for Transaction and Analytical Processing Systems


Part of the book series: In-Memory Data Management Research ((IMDM))

Abstract

As presented in Chap. 1, the goal of this thesis is to analyze and compare the behavior of databases in mixed workload scenarios as a basis for evaluating logical database design decisions. Benchmarks provide a standardized method for such comparisons: a benchmark is "a standardized problem or test that serves as a basis for evaluation or comparison (as of computer system performance)" [141].
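The mixed-workload comparison the abstract describes can be illustrated with a minimal, hypothetical benchmark harness. The schema, workload mix, and timing metrics below are illustrative assumptions (not taken from the thesis); the sketch interleaves short transactional writes with analytical aggregation scans against an in-memory SQLite database and times each class of operation separately:

```python
import sqlite3
import time

def run_mixed_workload(conn, n_txn=100, n_analytics=10):
    """Interleave transactional inserts with analytical scans,
    returning the accumulated wall-clock time spent in each class.

    Illustrative sketch: the 'sales' table and the 90/10 OLTP/OLAP
    mix are arbitrary assumptions, not a standardized benchmark.
    """
    cur = conn.cursor()
    cur.execute(
        "CREATE TABLE IF NOT EXISTS sales "
        "(id INTEGER PRIMARY KEY, region TEXT, amount REAL)"
    )
    txn_time = olap_time = 0.0
    for i in range(n_txn):
        # Transactional part: single-row insert, committed immediately.
        t0 = time.perf_counter()
        cur.execute(
            "INSERT INTO sales (region, amount) VALUES (?, ?)",
            ("north" if i % 2 else "south", float(i)),
        )
        conn.commit()
        txn_time += time.perf_counter() - t0

        # Analytical part: full-table aggregation, run periodically.
        if i % (n_txn // n_analytics) == 0:
            t0 = time.perf_counter()
            cur.execute("SELECT region, SUM(amount) FROM sales GROUP BY region")
            cur.fetchall()
            olap_time += time.perf_counter() - t0
    return txn_time, olap_time

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    txn, olap = run_mixed_workload(conn)
    print(f"OLTP total: {txn:.4f}s, OLAP total: {olap:.4f}s")
```

A real benchmark of this kind additionally fixes the data scale, the query set, and the reported metrics in a specification so that results from different systems are comparable; the point of the sketch is only the structure of a mixed workload: short writes whose latency matters, interleaved with scans whose cost grows with the data.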




Copyright information

© 2014 Springer-Verlag Berlin Heidelberg


Bog, A. (2014). Benchmarks for Transaction and Analytical Processing Systems. In: Benchmarking Transaction and Analytical Processing Systems. In-Memory Data Management Research. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-38070-9_3
