Overview

We elaborate upon the composition of representative benchmark sets for evaluating and improving the performance of Boolean constraint solvers, in the context of Satisfiability Testing (SAT) and Answer Set Programming (ASP). Starting from a thorough analysis of current practice, we isolate a set of desiderata for guiding the development of a parametrizable benchmark selection algorithm. We evaluate this algorithm with respect to three different instance-hardness distributions and show empirically that a normal distribution centered on hard instances leads to the best results. In particular, we empirically demonstrate that optimizing solvers on the obtained benchmark selection leads to better configurations than are obtainable from the vast original set of benchmark instances.
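For illustration, the following is a minimal Python sketch of distribution-based benchmark selection as described above: instances are drawn without replacement, weighted by a normal density over a normalized hardness score, so that selection is biased toward hard instances. The function names (select_benchmarks, normal_weight), the parameter values, and the Efraimidis-Spirakis sampling scheme are assumptions for the sketch, not the paper's actual algorithm or parametrization.

    import math
    import random

    def normal_weight(hardness, mu, sigma):
        # Unnormalized Gaussian density centered at the target hardness mu.
        return math.exp(-((hardness - mu) ** 2) / (2 * sigma ** 2))

    def select_benchmarks(instances, k, mu=0.8, sigma=0.1, seed=0):
        # instances: list of (name, hardness) pairs, hardness normalized
        # to [0, 1] (e.g., mean solver runtime scaled by the cutoff time).
        # Weighted sampling without replacement via Efraimidis-Spirakis
        # keys: each instance gets key u ** (1 / w); the k largest win.
        rng = random.Random(seed)
        keyed = []
        for name, hardness in instances:
            w = max(normal_weight(hardness, mu, sigma), 1e-12)
            keyed.append((rng.random() ** (1.0 / w), name, hardness))
        keyed.sort(reverse=True)
        return [(name, hardness) for _, name, hardness in keyed[:k]]

    # Example: pick 10 of 100 synthetic instances, biased toward hard ones.
    pool = [("inst%03d" % i, i / 99) for i in range(100)]
    for name, hardness in select_benchmarks(pool, k=10):
        print(name, round(hardness, 2))

Sampling without replacement keeps the selected set free of duplicates while still respecting the target hardness distribution; varying mu and sigma shifts and widens the band of hardness from which instances are preferentially drawn.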

Download

Sets