
VCWC: A Versioning Competition Workflow Compiler

  • Conference paper

Part of the Lecture Notes in Computer Science book series (LNAI, volume 8148)


System competitions evaluate solvers and compare state-of-the-art implementations on benchmark sets in a dedicated and controlled computing environment, usually comprising multiple machines. Recent initiatives such as [6] aim at establishing best practices for computer science evaluations, in particular by identifying measures that ensure repeatability, by excluding common pitfalls, and by introducing appropriate tools. For instance, Asparagus [1] focuses on maintaining benchmarks and instances thereof. Other well-known tools such as Runlim and Runsolver [12] help to limit resources and to measure CPU time and memory usage of solver runs. Yet other systems are tailored to the specific needs of specific communities: the not publicly accessible evaluation platform of the 3rd ASP Competition 2011 [4] implements a framework for running an ASP competition. A more general platform is StarExec [13], which aims at providing a generic framework for competition maintainers. The last two systems are similar in spirit, but each has restrictions that reduce the possibility of general usage: the StarExec platform provides no support for generic solver input and no scripting support, while the ASP Competition evaluation platform lacks support for fault-tolerant execution of instance runs. Moreover, benchmark statistics and rankings can only be computed after all solver runs on all benchmark instances have completed.
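Tools in the Runlim/Runsolver family enforce per-run resource limits on a solver process and report its CPU time and memory usage. As a rough illustration of that idea (this is not the actual code of either tool; the function and parameter names here are invented), a POSIX solver process can be confined and measured like this:

```python
import resource
import subprocess

def run_limited(cmd, cpu_seconds, mem_bytes):
    """Run `cmd` under CPU-time and address-space limits and report
    the accumulated CPU time of child processes.

    Hypothetical sketch in the spirit of runlim/runsolver, POSIX only.
    """
    def set_limits():
        # Applied in the child just before exec: hard-limit CPU time
        # and virtual memory for this solver run.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    proc = subprocess.run(cmd, preexec_fn=set_limits,
                          capture_output=True, text=True)
    # User + system CPU time of all waited-for children so far.
    usage = resource.getrusage(resource.RUSAGE_CHILDREN)
    return proc.returncode, usage.ru_utime + usage.ru_stime

if __name__ == "__main__":
    rc, cpu = run_limited(["true"], cpu_seconds=10, mem_bytes=512 * 1024**2)
    print(rc, cpu)
```

A real watchdog such as Runsolver additionally samples memory periodically and handles whole process trees; the sketch above only shows the basic limit-and-measure pattern.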


Keywords

  • Benchmark Server
  • International Planning Competition
  • Script Support
  • Competition Track
  • Versioning Competition

These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

This research is supported by the Austrian Science Fund (FWF) project P20841 and P24090.





References

  1. Asparagus Web-based Benchmarking Environment,

  2. Barrett, C., Deters, M., Moura, L., Oliveras, A., Stump, A.: 6 years of SMT-Comp. J. Autom. Reasoning 50(3), 243–277 (2013)

  3. Calimeri, F., Ianni, G., Krennwallner, T., Ricca, F.: The Answer Set Programming Competition. AI Mag. 33(4), 114–118 (2012)

  4. Calimeri, F., Ianni, G., Ricca, F.: The third open answer set programming competition. Theor. Pract. Log. Prog., FirstView, 1–19 (2012), doi:10.1017/S1471068412000105

  5. Couvares, P., Kosar, T., Roy, A., Weber, J., Wenger, K.: Workflow Management in Condor. In: Workflows for e-Science, pp. 357–375. Springer (2007)

  6. Collaboratory on Experimental Evaluation of Software and Systems in Computer Science (2012),

  7. The software of the seventh international planning competition (IPC) (2011),

  8. Järvisalo, M., Le Berre, D., Roussel, O., Simon, L.: The International SAT Solver Competitions. AI Mag. 33(1), 89–92 (2012)

  9. Klebanov, V., Beckert, B., Biere, A., Sutcliffe, G. (eds.): Proceedings 1st Int’l Workshop on Comparative Empirical Evaluation of Reasoning Systems, vol. 873. (2012)

  10. Papadimitriou, C.H.: Computational complexity. Addison-Wesley (1994)

  11. Peschiera, C., Pulina, L., Tacchella, A.: Designing a solver competition: the QBFEVAL’10 case study. In: Workshop on Evaluation Methods for Solvers, and Quality Metrics for Solutions (EMS+QMS) 2010. EPiC, vol. 6, pp. 19–32. EasyChair (2012)

  12. Roussel, O.: Controlling a solver execution with the runsolver tool. J. Sat. 7, 139–144 (2011)

  13. Stump, A., Sutcliffe, G., Tinelli, C.: Introducing StarExec: a cross-community infrastructure for logic solving. In: Klebanov, et al. (eds.) [9], p. 2

  14. Sutcliffe, G.: The TPTP problem library and associated infrastructure. J. Autom. Reasoning 43(4), 337–362 (2009)

  15. Thain, D., Tannenbaum, T., Livny, M.: Distributed Computing in Practice: The Condor Experience. Concurrency Computat. Pract. Exper. 17(2-4), 323–356 (2005)



Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Charwat, G. et al. (2013). VCWC: A Versioning Competition Workflow Compiler. In: Cabalar, P., Son, T.C. (eds) Logic Programming and Nonmonotonic Reasoning. LPNMR 2013. Lecture Notes in Computer Science (LNAI), vol 8148. Springer, Berlin, Heidelberg.

  • DOI:

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-40563-1

  • Online ISBN: 978-3-642-40564-8

  • eBook Packages: Computer Science (R0)