A Constraint-Based Tool for Generating Benchmark Instances
Tags: Constraint Modelling, Generating Benchmarks and Invited Talk
Abstract:
Benchmarking is fundamental for assessing the relative performance of alternative solving approaches. For an informative benchmark, we often need a sufficient quantity of instances with different levels of difficulty and the ability to explore subsets of the instance space to detect performance discrimination among solvers. In this talk, I will present AutoIG, an automated tool for generating benchmark instances for constraint solvers. AutoIG supports generating two types of instances: graded instances (i.e., solvable at a certain difficulty level by a solver) and discriminating instances (i.e., favouring one solver over another). The usefulness of the tool in benchmarking is demonstrated via an application to five problems taken from the MiniZinc Challenges. Our experiments show that the large number of instances found by AutoIG can provide more detailed insights into the performance of the solvers than a ranking alone. Cases where a solver is weak or even faulty can be detected, providing valuable information to solver developers. Moreover, discriminating instances can reveal parts of the instance space where a generally weak solver actually performs well relative to others, and therefore could be useful as part of an algorithm portfolio.
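To make the two instance types concrete, the following Python sketch classifies a single instance from hypothetical solver runtimes. The solver names, time limits, and thresholds are illustrative assumptions only and do not reflect AutoIG's actual criteria.

    # Illustrative sketch (not AutoIG's implementation): deciding whether one
    # instance counts as "graded" or "discriminating" from assumed runtimes.

    TIMEOUT = 1200.0  # assumed time limit in seconds; a timeout means "too hard"
    runtimes = {"solver_a": 37.2, "solver_b": 1180.5}  # hypothetical data

    def is_graded(runtime, lower=10.0, upper=TIMEOUT):
        # Graded: the solver finishes within a target difficulty window,
        # i.e. the instance is neither trivial nor out of reach.
        return lower <= runtime < upper

    def is_discriminating(rt_favoured, rt_other, factor=10.0):
        # Discriminating: the favoured solver is at least `factor` times
        # faster than the other solver (the factor is an assumption).
        return rt_other >= factor * rt_favoured

    print(is_graded(runtimes["solver_a"]))                                # True
    print(is_discriminating(runtimes["solver_a"], runtimes["solver_b"]))  # True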