Parameter Curation for Benchmark Queries
In this paper we consider the problem of generating parameters for benchmark queries so that they exhibit stable runtime behavior despite being executed on datasets (real-world or synthetic) with skewed data distributions and value correlations. We show that uniform random sampling of substitution parameters is not well suited for such benchmarks, since it results in unpredictable runtime behavior of queries. We present our approach, Parameter Curation, whose goal is to select parameter bindings that yield consistently low-variance intermediate result sizes throughout the query plan. Our solution is illustrated with IMDB data and the recently proposed LDBC Social Network Benchmark (SNB).
Keywords: Query optimizer · System under test · Parameter binding · Query plan · Runtime behavior