Tags: experiments, meta-data library, problem fingerprinting
Abstract:
Experimental data and benchmarks play a crucial role in the development of new algorithms and SAT solver implementations. Besides enabling the comparison and evaluation of solvers, they provide the basis for all kinds of experiments, for formulating hypotheses, and for testing them. Currently it is hard to assemble benchmark sets with prescribed properties, even though some initiatives for setting up benchmark databases have been undertaken and the SAT Competitions provide a ``standardized'' collection of instances. Moreover, the origin of SAT instances is often unclear, and benchmark collections may contain duplicates. In this paper we suggest a scheme for storing meta-data about SAT instances and provide a framework for collecting, assessing and distributing benchmark meta-data.
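
To make the duplicate problem concrete, the following is a minimal sketch of how an instance fingerprint could be computed: hashing a DIMACS CNF file after stripping comments and normalizing whitespace, so that trivially reformatted copies of the same instance map to the same key under which meta-data can then be stored. The normalization rules and the choice of MD5 are illustrative assumptions, not necessarily the paper's exact scheme.

```python
# Sketch: fingerprinting a CNF instance for duplicate detection.
# Assumption: two files are "the same instance" if they differ only
# in comments and whitespace; the actual scheme may normalize more.
import hashlib

def cnf_fingerprint(path: str) -> str:
    """Return a hex digest identifying a DIMACS CNF file up to
    comment lines and whitespace differences."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for line in f:
            line = line.strip()
            # Skip empty lines and DIMACS comment lines ("c ...").
            if not line or line.startswith(b"c"):
                continue
            # Collapse runs of whitespace so reformatted copies of
            # the same instance produce identical hashes.
            h.update(b" ".join(line.split()))
            h.update(b"\n")
    return h.hexdigest()

# Usage: key all meta-data (origin, family, measured properties)
# by this digest; colliding digests flag candidate duplicates.
# print(cnf_fingerprint("example.cnf"))
```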