VCWC: A Versioning Competition Workflow Compiler

  • Günther Charwat
  • Giovambattista Ianni
  • Thomas Krennwallner
  • Martin Kronegger
  • Andreas Pfandler
  • Christoph Redl
  • Martin Schwengerer
  • Lara Katharina Spendier
  • Johannes Peter Wallner
  • Guohui Xiao
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8148)

Abstract

System competitions evaluate solvers and compare state-of-the-art implementations on benchmark sets in a dedicated, controlled computing environment, usually comprising multiple machines. Recent initiatives such as [6] aim at establishing best practices for evaluations in computer science, in particular by identifying measures that ensure repeatability, by excluding common pitfalls, and by introducing appropriate tools. For instance, Asparagus [1] focuses on maintaining benchmarks and instances thereof. Other well-known tools such as Runlim (http://fmv.jku.at/runlim/) and Runsolver [12] help to limit resources and to measure CPU time and memory usage of solver runs. Further systems are tailored to the specific needs of particular communities: the evaluation platform of the 3rd ASP Competition 2011 [4], which is not publicly accessible, implements a framework for running an ASP competition. A more general platform is StarExec [13], which aims at providing a generic framework for competition maintainers. The last two systems are similar in spirit, but each has restrictions that limit its general applicability: StarExec provides no support for generic solver input and no scripting support, while the ASP Competition evaluation platform does not support fault-tolerant execution of instance runs. Moreover, with the latter, benchmark statistics and rankings can only be computed after all solver runs on all benchmark instances have completed.
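
As a rough illustration of the per-run resource control that tools like Runlim and Runsolver provide, and that a competition workflow must orchestrate for every solver/instance pair, the following sketch limits the CPU time and memory of a single solver invocation and records its resource usage. It is not part of VCWC or of either tool: the limit values, the Python/POSIX approach, and the reported fields are illustrative assumptions, and production tools such as Runsolver additionally sample /proc to track entire process trees.

    import resource
    import subprocess
    import sys

    # Illustrative limits; not the values used by VCWC, Runlim, or Runsolver.
    CPU_LIMIT_SECONDS = 600        # cap on CPU time per solver run
    MEMORY_LIMIT_BYTES = 4 << 30   # 4 GiB address-space cap

    def apply_limits():
        # Runs in the child process just before the solver starts (POSIX only).
        resource.setrlimit(resource.RLIMIT_CPU,
                           (CPU_LIMIT_SECONDS, CPU_LIMIT_SECONDS))
        resource.setrlimit(resource.RLIMIT_AS,
                           (MEMORY_LIMIT_BYTES, MEMORY_LIMIT_BYTES))

    def run_limited(solver_cmd):
        """Run one solver invocation under the limits and report its usage."""
        child = subprocess.Popen(solver_cmd, preexec_fn=apply_limits)
        child.wait()
        usage = resource.getrusage(resource.RUSAGE_CHILDREN)
        return {
            "exit_code": child.returncode,
            "cpu_seconds": usage.ru_utime + usage.ru_stime,  # user + system time
            "max_rss_kib": usage.ru_maxrss,                  # peak resident set (KiB on Linux)
        }

    if __name__ == "__main__":
        # Example: python run_limited.py ./solver instance.lp
        print(run_limited(sys.argv[1:]))

The sketch covers only a single run; scheduling many such runs across machines, tolerating failures, and aggregating the collected statistics are workflow-level concerns of the kind discussed above.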


References

  1. Asparagus Web-based Benchmarking Environment, http://asparagus.cs.uni-potsdam.de/
  2. Barrett, C., Deters, M., de Moura, L., Oliveras, A., Stump, A.: 6 Years of SMT-COMP. J. Autom. Reasoning 50(3), 243–277 (2013)
  3. Calimeri, F., Ianni, G., Krennwallner, T., Ricca, F.: The Answer Set Programming Competition. AI Mag. 33(4), 114–118 (2012)
  4. Calimeri, F., Ianni, G., Ricca, F.: The third open answer set programming competition. Theory Pract. Log. Program., FirstView, 1–19 (2012), doi:10.1017/S1471068412000105
  5. Couvares, P., Kosar, T., Roy, A., Weber, J., Wenger, K.: Workflow Management in Condor. In: Workflows for e-Science, pp. 357–375. Springer (2007)
  6. Collaboratory on Experimental Evaluation of Software and Systems in Computer Science (2012), http://evaluate.inf.usi.ch/
  7. The software of the seventh international planning competition (IPC) (2011), http://www.plg.inf.uc3m.es/ipc2011-deterministic/FrontPage/Software
  8. Järvisalo, M., Le Berre, D., Roussel, O., Simon, L.: The International SAT Solver Competitions. AI Mag. 33(1), 89–92 (2012)
  9. Klebanov, V., Beckert, B., Biere, A., Sutcliffe, G. (eds.): Proceedings of the 1st Int'l Workshop on Comparative Empirical Evaluation of Reasoning Systems. CEUR Workshop Proceedings, vol. 873. CEUR-WS.org (2012)
  10. Papadimitriou, C.H.: Computational Complexity. Addison-Wesley (1994)
  11. Peschiera, C., Pulina, L., Tacchella, A.: Designing a solver competition: the QBFEVAL'10 case study. In: Workshop on Evaluation Methods for Solvers, and Quality Metrics for Solutions (EMS+QMS) 2010. EPiC, vol. 6, pp. 19–32. EasyChair (2012)
  12. Roussel, O.: Controlling a solver execution with the runsolver tool. J. Sat. 7, 139–144 (2011)
  13. Stump, A., Sutcliffe, G., Tinelli, C.: Introducing StarExec: a cross-community infrastructure for logic solving. In: Klebanov et al. (eds.) [9], p. 2
  14. Sutcliffe, G.: The TPTP problem library and associated infrastructure. J. Autom. Reasoning 43(4), 337–362 (2009)
  15. Thain, D., Tannenbaum, T., Livny, M.: Distributed Computing in Practice: The Condor Experience. Concurrency Computat. Pract. Exper. 17(2-4), 323–356 (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Günther Charwat (1)
  • Giovambattista Ianni (2)
  • Thomas Krennwallner (1)
  • Martin Kronegger (1)
  • Andreas Pfandler (1)
  • Christoph Redl (1)
  • Martin Schwengerer (1)
  • Lara Katharina Spendier (1)
  • Johannes Peter Wallner (1)
  • Guohui Xiao (1)

  1. Institute of Information Systems, Vienna University of Technology, Vienna, Austria
  2. Dipartimento di Matematica e Informatica, Università della Calabria, Rende, Italy
