Performance Characterization and Benchmarking: Traditional to Big Data

Lecture Notes in Computer Science, Volume 8904, pp. 82–96


Towards an Extensible Middleware for Database Benchmarking

  • David Bermbach, Information Systems Engineering Group, TU Berlin
  • Jörn Kuhlenkamp, Information Systems Engineering Group, TU Berlin
  • Akon Dey, School of Information Technologies, University of Sydney (corresponding author)
  • Sherif Sakr, King Saud Bin Abdulaziz University for Health Sciences
  • Raghunath Nambiar, Cisco Systems, Inc.




Today’s database benchmarks are each designed to evaluate a particular type of database. Furthermore, popular benchmarks, such as those from the TPC, come without a ready-to-use implementation, forcing benchmark users to build the benchmarking tool from scratch. As a result, there is no single framework that can be used to compare arbitrary database systems, primarily because designing and implementing distributed benchmarking tools is complex.

In this paper, we describe our vision of a middleware for database benchmarking that eliminates the complexity and difficulty of designing and running arbitrary benchmarks: the workload specification and the interface mappers for the system under test become nothing more than configuration properties of the middleware. We also sketch an architecture for this benchmarking middleware and describe its main components and their requirements.
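To make the configuration-driven vision concrete, the following is a minimal sketch of what such a middleware core could look like: the user supplies only a workload specification (as configuration properties) and an interface mapper for the system under test, and the middleware drives the benchmark. All names and the config schema here are illustrative assumptions, not the authors' actual design.

```python
# Hypothetical sketch: a benchmark middleware whose workload and system
# adapter are pure configuration. The names (KeyValueMapper, run_benchmark,
# the config keys) are invented for illustration only.
import random


class KeyValueMapper:
    """Interface mapper: translates the middleware's generic read/write
    operations into calls against a concrete system under test.
    Here the 'system' is just an in-memory dict stub."""

    def __init__(self):
        self.store = {}

    def read(self, key):
        return self.store.get(key)

    def write(self, key, value):
        self.store[key] = value


def run_benchmark(config, mapper):
    """Middleware core: interprets the workload specification given as
    configuration properties and issues operations through the mapper."""
    random.seed(config.get("seed", 0))
    reads = writes = 0
    for i in range(config["operation_count"]):
        key = f"key{i % config['record_count']}"
        if random.random() < config["read_proportion"]:
            mapper.read(key)
            reads += 1
        else:
            mapper.write(key, f"value{i}")
            writes += 1
    return {"reads": reads, "writes": writes}


# The workload is nothing but configuration properties:
config = {
    "operation_count": 1000,
    "record_count": 100,
    "read_proportion": 0.95,  # read-heavy workload
    "seed": 42,
}
stats = run_benchmark(config, KeyValueMapper())
```

Swapping the system under test would then mean swapping the mapper class in the configuration, while the workload specification stays untouched, which is exactly the decoupling the abstract argues for.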