Abstract

Constraint satisfaction algorithms are often benchmarked on hard, random problems. There are, however, many reasons for wanting a larger class of problems in our benchmark suites. For example, we may wish to benchmark algorithms on more realistic problems, to run competitions, or to study the impact of modelling and problem reformulation. Whilst a benchmark library brings many such benefits, there are also several potential pitfalls. For example, if the library is small, we run the risk of over-fitting our algorithms to it. Even if the library is large, certain problem features may be rare or absent. A model benchmark library should be easy to find and easy to use. It should contain as diverse and large a set of problems as possible. It should be easy to extend, and as comprehensive and up to date as possible. It should also be independent of any particular constraint solver, and contain neither just hard nor just easy problems.

Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Ian P. Gent (1)
  • Toby Walsh (1)

  1. Department of Computer Science, University of Strathclyde, Glasgow, United Kingdom