VCWC: A Versioning Competition Workflow Compiler
System competitions evaluate solvers and compare state-of-the-art implementations on benchmark sets in a dedicated and controlled computing environment, usually comprising multiple machines. Recent initiatives aim at establishing best practices in computer science evaluations, in particular identifying measures for ensuring repeatability, avoiding common pitfalls, and introducing appropriate tools. For instance, Asparagus [1] focuses on maintaining benchmarks and their instances. Other well-known tools such as Runlim (http://fmv.jku.at/runlim/) and Runsolver help to limit resources and to measure CPU time and memory usage of solver runs. Other systems are tailored to the specific needs of particular communities: the ASP Competition evaluation platform used for the Third ASP Competition 2011 [4], which is not publicly accessible, implements a framework for running an ASP competition. Another, more general platform is StarExec [13], which aims to provide a generic framework for competition maintainers. The last two systems are similar in spirit, but each has restrictions that limit its general applicability: the StarExec platform does not support generic solver input and has no scripting support, while the ASP Competition evaluation platform does not support fault-tolerant execution of instance runs. Moreover, benchmark statistics and rankings can only be computed after all solver runs on all benchmark instances have been completed.
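To make the role of tools like Runlim and Runsolver concrete, the following is a minimal sketch of how a competition harness might enforce per-run resource limits and record CPU time and memory usage of a solver process. It does not use either tool's actual interface; the limits, the solver command, and the file name `instance.lp` are illustrative assumptions, and the sketch relies only on the standard POSIX `resource` and `subprocess` modules.

```python
import resource
import subprocess

CPU_LIMIT_S = 600          # hypothetical per-run CPU-time limit (seconds)
MEM_LIMIT_BYTES = 4 << 30  # hypothetical per-run address-space limit (4 GiB)

def limit_resources():
    # Applied in the child process before the solver starts, analogous to
    # the limits that tools such as Runlim or Runsolver enforce.
    resource.setrlimit(resource.RLIMIT_CPU, (CPU_LIMIT_S, CPU_LIMIT_S))
    resource.setrlimit(resource.RLIMIT_AS, (MEM_LIMIT_BYTES, MEM_LIMIT_BYTES))

def run_solver(cmd):
    # Start the solver with the limits installed, wait for it, and collect
    # aggregate resource usage of the finished child.
    proc = subprocess.Popen(cmd, preexec_fn=limit_resources)
    proc.wait()
    usage = resource.getrusage(resource.RUSAGE_CHILDREN)
    return {
        "exit_code": proc.returncode,
        "cpu_time_s": usage.ru_utime + usage.ru_stime,  # user + system CPU time
        "max_rss_kib": usage.ru_maxrss,                 # peak resident set size (KiB on Linux)
    }

if __name__ == "__main__":
    # Hypothetical invocation; replace with an actual solver and instance.
    print(run_solver(["./solver", "instance.lp"]))
```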
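The closing remark about statistics and rankings being available only after all runs have finished suggests incremental aggregation as run results arrive. The sketch below illustrates one way such incremental ranking could be organized; the scoring scheme (solved instances first, ties broken by total CPU time) is a common competition convention and an assumption here, not the scoring actually used by VCWC.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class RunResult:
    solver: str
    instance: str
    solved: bool
    cpu_time_s: float

class IncrementalRanking:
    """Keeps per-solver aggregates up to date as individual run results
    arrive, so a preliminary ranking is available before the whole
    benchmark set has been executed."""

    def __init__(self):
        self.solved = defaultdict(int)
        self.total_cpu = defaultdict(float)

    def add(self, result: RunResult):
        # Register the solver even for unsolved runs so it appears in the ranking.
        self.solved[result.solver] += 1 if result.solved else 0
        if result.solved:
            self.total_cpu[result.solver] += result.cpu_time_s

    def ranking(self):
        # More solved instances ranks higher; ties broken by lower total CPU time.
        return sorted(self.solved, key=lambda s: (-self.solved[s], self.total_cpu[s]))
```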
- 1. Asparagus Web-based Benchmarking Environment, http://asparagus.cs.uni-potsdam.de/
- 3. Calimeri, F., Ianni, G., Krennwallner, T., Ricca, F.: The Answer Set Programming Competition. AI Mag. 33(4), 114–118 (2012)
- 4. Calimeri, F., Ianni, G., Ricca, F.: The third open answer set programming competition. Theor. Pract. Log. Prog., FirstView, 1–19 (2012), doi:10.1017/S1471068412000105
- 5. Couvares, P., Kosar, T., Roy, A., Weber, J., Wenger, K.: Workflow Management in Condor. In: Workflows for e-Science, pp. 357–375. Springer (2007)
- 6. Collaboratory on Experimental Evaluation of Software and Systems in Computer Science (2012), http://evaluate.inf.usi.ch/
- 7. The software of the seventh international planning competition (IPC) (2011), http://www.plg.inf.uc3m.es/ipc2011-deterministic/FrontPage/Software
- 8. Järvisalo, M., Le Berre, D., Roussel, O., Simon, L.: The International SAT Solver Competitions. AI Mag. 33(1), 89–92 (2012)
- 9. Klebanov, V., Beckert, B., Biere, A., Sutcliffe, G. (eds.): Proceedings 1st Int'l Workshop on Comparative Empirical Evaluation of Reasoning Systems, vol. 873. CEUR-WS.org (2012)
- 10. Papadimitriou, C.H.: Computational Complexity. Addison-Wesley (1994)
- 11. Peschiera, C., Pulina, L., Tacchella, A.: Designing a solver competition: the QBFEVAL'10 case study. In: Workshop on Evaluation Methods for Solvers, and Quality Metrics for Solutions (EMS+QMS) 2010. EPiC, vol. 6, pp. 19–32. EasyChair (2012)
- 13. Stump, A., Sutcliffe, G., Tinelli, C.: Introducing StarExec: a cross-community infrastructure for logic solving. In: Klebanov et al. (eds.) [9], p. 2