
09:30 | On the Hardness of SAT with Community Structure
SPEAKER: Nathan Mull
ABSTRACT. Recent attempts to explain the effectiveness of Boolean satisfiability (SAT) solvers based on conflict-driven clause learning (CDCL) on large industrial benchmarks have focused on the concept of community structure. Specifically, industrial benchmarks have been empirically found to have good community structure, and experiments seem to show a correlation between such structure and the efficiency of CDCL. However, in this paper we establish hardness results suggesting that community structure is not sufficient to explain the success of CDCL in practice. First, we formally characterize a property shared by a wide class of metrics capturing community structure, including “modularity”. Next, we show that SAT instances with good community structure according to any metric with this property remain NP-hard. Finally, we also prove that, with high probability, random unsatisfiable modular instances generated from the “pseudo-industrial” community attachment model of Giráldez-Cru and Levy have exponentially long resolution proofs. Such instances are therefore hard for CDCL on average, indicating that actual industrial instances easily solved by CDCL may have some other relevant structure not captured by this model.
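The modularity mentioned in the abstract is, for the variable incidence graph of a CNF formula, the standard Newman measure (the paper's "property" covers a wider class of metrics). A minimal sketch, with illustrative function names and DIMACS-style signed literals:

```python
from itertools import combinations

def vig_edges(clauses):
    """Variable incidence graph: connect variables that share a clause."""
    edges = set()
    for clause in clauses:
        variables = {abs(lit) for lit in clause}  # drop literal polarity
        for u, v in combinations(sorted(variables), 2):
            edges.add((u, v))
    return edges

def modularity(edges, community):
    """Newman modularity Q of a vertex partition (community: var -> label)."""
    m = len(edges)
    deg, intra = {}, {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
        if community[u] == community[v]:
            intra[community[u]] = intra.get(community[u], 0) + 1
    q = 0.0
    for c in set(community.values()):
        in_c = intra.get(c, 0)                   # edges inside community c
        deg_c = sum(d for x, d in deg.items() if community[x] == c)
        q += in_c / m - (deg_c / (2 * m)) ** 2   # observed minus expected
    return q
```

For two variable-disjoint clauses split into their natural two communities, this yields Q = 0.5, the "good community structure" regime the abstract refers to.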

10:00 | Trade-offs Between Time and Memory in a Tighter Model of CDCL SAT Solvers
SPEAKER: unknown
ABSTRACT. A long line of research has studied the power of conflict-driven clause learning (CDCL) and how it compares to the resolution proof system in which it searches for proofs. It has been shown that CDCL can polynomially simulate resolution, even when the learning scheme is adversarially chosen, as long as it is asserting. However, the simulation only works under the assumption that no learned clauses are ever forgotten, and the polynomial blow-up is significant. Moreover, the simulation requires very frequent restarts, whereas the power of CDCL with less frequent restarts, or none at all, remains poorly understood. With a view towards obtaining results with tighter relations between CDCL and resolution, we introduce a more fine-grained model of CDCL that captures not only time but also memory usage and number of restarts. We show how previously established strong size-space trade-offs for resolution can be transformed into equally strong trade-offs between time and memory usage for CDCL, where the upper bounds hold for CDCL without any restarts using the standard 1UIP clause learning scheme, and the (in some cases tightly matching) lower bounds hold for arbitrarily frequent restarts and arbitrary clause learning schemes.

11:00 | On Stronger Calculi for QBFs
SPEAKER: Uwe Egly
ABSTRACT. Quantified Boolean formulas (QBFs) generalize propositional formulas by admitting quantification over propositional variables. QBFs can be viewed as (restricted) formulas of first-order predicate logic, and easy translations of QBFs into first-order formulas exist. We analyze different translations and show that first-order resolution combined with such translations can polynomially simulate well-known deduction concepts for QBFs. Furthermore, we extend QBF calculi with the ability to instantiate a universal variable by an existential variable of smaller level. Combining such an enhanced calculus with the propositional extension rule results in a calculus with a universal quantifier rule which essentially introduces propositional formulas for universal variables. In this way, one can mimic a very general and strong universal quantifier rule known from Gentzen sequent systems.

11:30 | On Q-Resolution and DPLL QBF Solving
SPEAKER: Mikolas Janota
ABSTRACT. Q-resolution and its variations provide the underlying proof systems for DPLL-based QBF solvers. While (long-distance) Q-resolution models a CDCL QBF solver, it is not known whether the converse also holds. This paper provides a negative answer to this question. This contrasts with SAT solving, where CDCL solvers have been shown to simulate resolution.
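The basic Q-resolution rule discussed in this and the following talk combines resolution on an existential pivot with universal reduction; a minimal sketch of one derivation step, ignoring the long-distance variant (the level and polarity maps are illustrative bookkeeping, not part of any solver API):

```python
def universal_reduce(clause, level, is_existential):
    """Drop universal literals deeper than every existential literal in the clause."""
    exist_levels = [level[abs(l)] for l in clause if is_existential[abs(l)]]
    max_e = max(exist_levels, default=0)
    return frozenset(l for l in clause
                     if is_existential[abs(l)] or level[abs(l)] < max_e)

def q_resolve(c1, c2, pivot, level, is_existential):
    """One Q-resolution step on an existential pivot, followed by reduction."""
    assert is_existential[pivot] and pivot in c1 and -pivot in c2
    resolvent = (set(c1) | set(c2)) - {pivot, -pivot}
    # Tautological resolvents (x and -x both present) are disallowed.
    if any(-l in resolvent for l in resolvent):
        return None
    return universal_reduce(resolvent, level, is_existential)
```

For the prefix ∃x ∀u ∃y, resolving (x ∨ u ∨ y) with (¬y) on y leaves (x ∨ u), and universal reduction then removes u, since it is quantified after every remaining existential literal.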

12:00 | Q-Resolution with Generalized Axioms
SPEAKER: Florian Lonsing
ABSTRACT. Q-resolution is a proof system for quantified Boolean formulas (QBFs) in prenex conjunctive normal form (PCNF) which underlies search-based QBF solvers with clause and cube learning. Traditional Q-resolution calculi come with two kinds of axioms. The clause axiom states that any clause appearing in the given QBF can be derived in the calculus. The cube axiom allows the derivation of cubes which are propositional implicants of the CNF part of the given QBF. In practice, a search-based QBF solver applies the axioms at the leaves of the search tree that is implicitly constructed during a solver run. To overcome the restricted deductive power of the traditional axioms, we extend the Q-resolution calculus with generalized axioms. The generalized axioms allow clauses and cubes to be derived earlier in the search, thus pruning the search tree and resulting in an exponentially more powerful Q-resolution calculus. We implemented a variant of the Q-resolution calculus with generalized axioms in the QBF solver DepQBF. Thereby, we apply integrated SAT solving and resource-bounded QBF preprocessing during the search to heuristically detect potential axiom applications. Experiments with application benchmarks indicate a substantial performance improvement and motivate future work.

14:00 | Satisfiability Testing, a Disruptive Technology in Program Verification
SPEAKER: David Monniaux

15:30 | A SAT Approach to Branchwidth
SPEAKER: unknown
ABSTRACT. Branch decompositions provide a prominent method for structurally decomposing a hypergraph, one which also applies to propositional CNF formulas and other combinatorial structures. The width of a branch decomposition provides a measure of how well it decomposes the given object. Many hard computational problems (such as TSP or propositional model counting) become tractable on instances that admit a branch decomposition of small width. A bottleneck for all these algorithmic applications is the space requirement, which is exponential in the width of the given branch decomposition. Hence it is crucial to first compute a branch decomposition whose width is as small as possible. We propose a SAT approach to finding branch decompositions of small width. The core of our approach is an efficient SAT encoding which determines with a single SAT call whether a given hypergraph admits a branch decomposition of width at most k. For our encoding we developed a novel partition-based characterization of branch decompositions. However, encoding size presents a firm barrier to scalability. In order to break through this barrier, we developed a novel heuristic approach in which the SAT encoding is used to locally improve a given candidate decomposition until a fixed point is reached.
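The width being minimized here can be computed directly for a given decomposition: each edge of the decomposition tree bipartitions the hyperedges, and its width is the number of vertices shared by the two sides. A minimal sketch (the nested-pair tree representation is illustrative; the paper's SAT encoding instead decides whether width at most k is achievable):

```python
def leaves(t):
    """Hyperedge indices at the leaves of a nested-pair decomposition tree."""
    return [t] if isinstance(t, int) else leaves(t[0]) + leaves(t[1])

def branch_width(tree, hyperedges):
    """Width of a branch decomposition given as nested pairs of edge indices."""
    all_idx = set(leaves(tree))

    def verts(idxs):
        return set().union(*(hyperedges[i] for i in idxs)) if idxs else set()

    width = 0

    def visit(t):
        nonlocal width
        if isinstance(t, int):
            side = {t}
        else:
            visit(t[0])
            visit(t[1])
            side = set(leaves(t))
        # Vertices touched by hyperedges on both sides of this tree edge.
        boundary = verts(side) & verts(all_idx - side)
        width = max(width, len(boundary))

    visit(tree)
    return width
```

For the 4-cycle with edges {1,2}, {2,3}, {3,4}, {4,1} and the decomposition ((0,1),(2,3)), every tree edge has a boundary of two vertices, so the width is 2, matching the known branchwidth of a cycle.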

16:00 | Improved Static Symmetry Breaking for SAT
SPEAKER: Jo Devriendt
ABSTRACT. An effective SAT preprocessing technique is the construction of symmetry breaking formulas: auxiliary clauses that guide a SAT solver away from needless exploration of symmetric subproblems. However, during the past decade, state-of-the-art SAT solvers have rarely incorporated symmetry breaking. This suggests that the reduction of the search space does not outweigh the overhead incurred by detecting symmetry and constructing symmetry breaking formulas. We investigate three methods to construct more effective symmetry breaking formulas. The first simply improves the encoding of symmetry breaking formulas. The second detects special symmetry subgroups for which complete symmetry breaking formulas exist. The third infers binary symmetry breaking clauses for a symmetry group as a whole rather than longer clauses for individual symmetries. We implement these methods in a symmetry breaking preprocessor and verify their effectiveness on both hand-picked problems and the 2014 SAT Competition benchmark set. Our experiments indicate that our symmetry breaking preprocessor improves the current state of the art in static symmetry breaking for SAT and has a sufficiently low overhead to improve the performance of modern SAT solvers on hard combinatorial instances.
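The symmetry breaking formulas in question are typically lex-leader constraints: for each symmetry σ, clauses forcing the assignment to be lexicographically no larger than its σ-image. A minimal sketch of the standard chain encoding with "equal so far" auxiliary variables (this is the classical construction the paper improves on; variable numbering is illustrative):

```python
def lex_leader_clauses(perm, n, aux_start):
    """
    Chain encoding of the lex-leader constraint x <= perm(x) for one symmetry.
    perm maps each variable in 1..n to its image; aux_start is the first free
    variable id for the auxiliary chain. Returns clauses as lists of signed
    DIMACS-style literals.
    """
    clauses = []
    e_prev = None  # e_0 is the constant 'true'
    for i in range(1, n + 1):
        x, y = i, perm[i]
        e_i = aux_start + i - 1
        # Under 'equal so far', enforce x_i <= x_perm(i):
        clauses.append([-x, y] if e_prev is None else [-e_prev, -x, y])
        # Define e_i <-> e_prev AND (x_i <-> x_perm(i)):
        if e_prev is not None:
            clauses.append([-e_i, e_prev])
        clauses.append([-e_i, -x, y])
        clauses.append([-e_i, x, -y])
        head = [e_i] if e_prev is None else [e_i, -e_prev]
        clauses.append(head + [x, y])
        clauses.append(head + [-x, -y])
        e_prev = e_i
    return clauses
```

For the swap symmetry on two variables, the resulting clauses admit an extension to the auxiliary variables exactly for the assignments 00, 01, and 11, and rule out 10, the lex-larger member of the symmetric pair {01, 10}.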

16:30 | Learning Rate Based Branching Heuristic for SAT Solvers
SPEAKER: unknown
ABSTRACT. In this paper, we propose a framework for viewing solver branching heuristics as optimization algorithms whose objective is to maximize the learning rate, defined as the propensity of variables to generate learnt clauses. By viewing online variable selection in SAT solvers as an optimization problem, we can leverage a wide variety of optimization algorithms, especially from machine learning, to design effective branching heuristics. In particular, we model the variable selection optimization problem as an online multi-armed bandit, a special case of reinforcement learning, to learn branching variables such that the learning rate of the solver is maximized. We develop a branching heuristic that we call learning rate branching, or LRB, based on a well-known multi-armed bandit algorithm called exponential recency weighted average, and implement it as part of MiniSat and CryptoMiniSat. We extend the LRB technique with two additional novel ideas that improve the learning rate by accounting for the reason side rate and exploiting locality. The resulting LRB branching heuristic is shown to be faster than the VSIDS and conflict history-based (CHB) branching heuristics on 1975 application and hard combinatorial instances from the 2009 to 2014 SAT Competitions. We also show that CryptoMiniSat with LRB solves more instances than with VSIDS. To the best of our knowledge, the VSIDS branching heuristic and its variants had been the dominant class of such heuristics for SAT solvers since 2001, until we introduced this novel class of machine learning based branching heuristics.
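The exponential recency weighted average at the heart of this heuristic is a one-line update: each variable's score moves a step toward its most recently observed reward. A simplified sketch (class and method names are illustrative; the actual LRB also decays the step size over time and adds the reason-side-rate and locality extensions mentioned above):

```python
class ErwaBandit:
    """Exponential recency weighted average scores, in the style of LRB."""

    def __init__(self, variables, alpha=0.4):
        self.alpha = alpha                       # step size of the update
        self.q = {v: 0.0 for v in variables}     # per-variable score

    def reward(self, v, r):
        # Move v's score toward the observed reward (its learning rate).
        self.q[v] = (1 - self.alpha) * self.q[v] + self.alpha * r

    def pick(self, unassigned):
        # Branch on the unassigned variable with the highest score.
        return max(unassigned, key=lambda v: self.q[v])
```

Because recent rewards are weighted exponentially more than old ones, the heuristic adapts quickly when the solver moves to a different part of the search space, much as VSIDS does via periodic activity decay.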