How Much Should This Symbol Weigh? A GNN-Advised Clause Selection

Clause selection is a crucial choice point in modern saturation-based automatic theorem provers. A standard heuristic advises selecting small clauses, i.e., clauses with few symbol occurrences. More generally, we can prefer clauses with a small weighted symbol occurrence count, where each symbol's occurrence count is multiplied by a corresponding symbol weight. Traditionally, a human domain expert would supply the symbol weights. In this paper, we propose a system based on a graph neural network (GNN) that learns to predict these symbol weights to boost clause selection on an arbitrary first-order logic problem. Our experiments show that advising the automatic theorem prover Vampire on the first-order fragment of TPTP using a trained neural network allows the prover to solve 2.9% more problems than weighting symbols uniformly.
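As an illustration of the clause-weighting scheme the abstract describes, the following sketch computes a clause's weighted symbol occurrence count: each symbol's occurrence count is multiplied by its weight, and symbols without a learned weight fall back to a uniform default. The function and argument names are our own illustrative choices, not Vampire's actual API.

```python
from collections import Counter

def clause_weight(symbol_occurrences, symbol_weights, default_weight=1.0):
    """Weighted symbol occurrence count of a clause.

    symbol_occurrences: the symbols of the clause, one entry per occurrence
    symbol_weights: mapping from symbol to its (e.g. GNN-predicted) weight;
                    symbols absent from the mapping get default_weight
    """
    counts = Counter(symbol_occurrences)
    return sum(symbol_weights.get(sym, default_weight) * n
               for sym, n in counts.items())

# Symbols occurring in a hypothetical clause, e.g. f(f(a), g(b)):
clause = ["f", "f", "g", "a", "b"]

# With uniform weights the measure reduces to the plain symbol count:
print(clause_weight(clause, {}))                      # 5.0
# Learned weights make some symbols "heavier" than others:
print(clause_weight(clause, {"f": 2.0, "a": 0.5}))    # 2*2.0 + 1 + 0.5 + 1 = 6.5
```

A clause-selection heuristic would then prefer clauses minimizing this quantity, so lowering a symbol's weight makes clauses containing it more likely to be selected.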