
09:00-10:30 Session 20: Dependency QBF (SAT 2016)
Location: Auditorium Labri
Dependency Schemes for DQBF
SPEAKER: Ralf Wimmer

ABSTRACT. Dependency schemes make it possible to identify variable independencies in QBFs and DQBFs. For QBF, several dependency schemes have been proposed, which differ in the number of independencies they are able to identify. In this paper, we analyze the spectrum of dependency schemes that were proposed for QBF. It turns out that only some of them are sound for DQBF. For the sound ones we provide correctness proofs; for the others, counterexamples. Moreover, we add dependency schemes not mentioned in the literature so far in order to characterize exactly the border between the dependency schemes that are sound for DQBF and those that are not. Experiments show that a significant number of dependencies can either be added to or removed from a formula without changing its truth value, while significantly increasing the flexibility for modifying the representation.

Lifting QBF Resolution Calculi to DQBF

ABSTRACT. We examine the existing Resolution systems for quantified Boolean formulas (QBF) and answer the question which of these calculi can be lifted to the more powerful Dependency QBFs (DQBF). An interesting picture emerges: While for QBF we have the strict chain of proof systems Q-Resolution < IR-calc < IRM-calc, the situation is quite different in DQBF. Q-Resolution and likewise universal Resolution are too weak: they are not complete. IR-calc has the right strength: it is sound and complete. IRM-calc is too strong: it is not sound any more, and the same applies to long-distance Resolution. Conceptually, we use the relation of DQBF to EPR and explain our new DQBF calculus based on IR-calc as a subsystem of FO-Resolution.

Long Distance Q-Resolution with Dependency Schemes

ABSTRACT. Resolution proof systems for quantified Boolean formulas (QBFs) provide a formal model for studying the limitations of state-of-the-art search-based QBF solvers, which use these systems to generate proofs. In this paper, we define a new proof system that combines two such proof systems: Q-resolution with generalized universal reduction according to a dependency scheme and long distance Q-resolution. We show that the resulting proof system is sound for the reflexive resolution-path dependency scheme—in fact, we prove that it admits strategy extraction in polynomial time. As a special case, we obtain soundness and polynomial-time strategy extraction for long distance Q-resolution with universal reduction according to the standard dependency scheme. We report on experiments with a configuration of DepQBF that generates proofs in this system.

10:30-11:00 Coffee Break
11:00-12:30 Session 21: Complexity 2 (SAT 2016)
Location: Auditorium Labri
Satisfiability via Smooth Pictures

ABSTRACT. A picture is an r × c matrix M whose entries are taken from some set of symbols Γ. Given a set Γ′ of colored versions of the symbols in Γ, and given two relations H, V ⊆ Γ′ × Γ′, the picture satisfiability problem consists in determining whether the entries of M can be colored in such a way that all horizontal and vertical constraints are satisfied. This problem can easily be shown to be NP-complete. In this work we introduce the notion of f(c)-smooth pictures. Intuitively, a smooth picture is a picture whose space of solutions can be represented efficiently by a finite automaton. Our main contribution is an algorithm that determines in time poly(f(c), c) whether an f(c)-smooth picture is satisfiable. With each picture M one can associate in a very natural way a formula F_M which is satisfiable if and only if M is satisfiable. In our second contribution we construct an infinite family of pictures M_1, M_2, ... such that M_n is polynomially smooth, but such that F_{M_n} requires exponential-size Frege proofs. This shows that there are families of pictures for which our algorithm performs exponentially better than CDCL-based solvers. On the other hand, we construct an explicit family of pictures which cause our algorithm to operate in exponential time.
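
The automaton view sketched in the abstract can be illustrated with a small brute-force row-scanning sketch (names and the picture encoding below are our own illustration, not the paper's algorithm): the states are the valid colorings of a single row, and the transfer step keeps a row coloring only if some coloring of the previous row is vertically compatible with it. This is exponential in the number of columns c; the paper's contribution is precisely to do better when the picture is f(c)-smooth.

```python
from itertools import product

def picture_sat(M, colors, H, V):
    """Decide picture satisfiability by scanning rows with a
    transfer automaton whose states are valid colorings of one row.
    M: r x c matrix of symbols; colors: dict mapping each symbol to
    its colored versions; H, V: sets of allowed (left, right) and
    (top, bottom) pairs of colored symbols.  Brute force: exponential
    in c, linear in the number of rows."""
    r, c = len(M), len(M[0])

    def row_colorings(row):
        # All colorings of one row satisfying the horizontal relation H.
        for combo in product(*(colors[s] for s in row)):
            if all((combo[j], combo[j + 1]) in H for j in range(c - 1)):
                yield combo

    reachable = set(row_colorings(M[0]))
    for i in range(1, r):
        nxt = set()
        for combo in row_colorings(M[i]):
            # Keep a row coloring if some previous-row coloring is
            # vertically compatible in every column.
            if any(all((prev[j], combo[j]) in V for j in range(c))
                   for prev in reachable):
                nxt.add(combo)
        reachable = nxt
        if not reachable:
            return False
    return bool(reachable)
```

For example, with one symbol that has two colored versions and relations requiring adjacent cells to differ, a 2 × 2 picture is satisfiable by a checkerboard coloring; with only one colored version available it is not.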

The Normalized Autocorrelation Length of Random Max r-Sat Converges in Probability to (1-1/2^r)/r
SPEAKER: Yochai Twitto

ABSTRACT. We show that the so-called normalized autocorrelation length of random Max r-Sat converges in probability to (1-1/2^r)/r, where r is the number of literals in a clause. We also show that the correlation between the numbers of clauses satisfied by a random pair of assignments at distance d = cn, 0 ≤ c ≤ 1, converges in probability to ((1-c)^r - 1/2^r)/(1-1/2^r). The former quantity is of interest in the area of landscape analysis as a way to better understand problems and assess their hardness for local search heuristics. In a recent result, it has been shown that it may be calculated in polynomial time for any instance, and its mean value over all instances was discussed. Our results are based on a study of the variance of the number of clauses satisfied by a random assignment, and of the covariance of the numbers of clauses satisfied by a random pair of assignments at an arbitrary distance. As part of this study, closed-form formulas for the expected value and variance of the latter two quantities are provided. Note that all results are relevant to random r-Sat as well.
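
The two limiting formulas from the abstract are easy to evaluate, and can be sanity-checked by Monte Carlo sampling on a single random instance. The sketch below (function names, instance parameters, and the sampling scheme are ours, not the paper's) estimates the correlation between the numbers of clauses satisfied by random assignment pairs at Hamming distance cn:

```python
import random

def autocorr_length_limit(r):
    # Limiting normalized autocorrelation length: (1 - 1/2^r) / r.
    return (1 - 2.0 ** -r) / r

def pair_correlation_limit(r, c):
    # Limiting correlation at distance c*n: ((1-c)^r - 1/2^r) / (1 - 1/2^r).
    return ((1 - c) ** r - 2.0 ** -r) / (1 - 2.0 ** -r)

def estimate_pair_correlation(n, r, m, c, samples=2000, seed=0):
    """Monte Carlo estimate on one random Max r-Sat instance with n
    variables and m clauses: sample assignment pairs at Hamming
    distance round(c*n) and compute the empirical correlation of
    their satisfied-clause counts."""
    rng = random.Random(seed)
    clauses = [[(v, rng.random() < 0.5)
                for v in rng.sample(range(n), r)] for _ in range(m)]

    def sat_count(a):
        return sum(any(a[v] == sign for v, sign in cl) for cl in clauses)

    d = round(c * n)
    xs, ys = [], []
    for _ in range(samples):
        a = [rng.random() < 0.5 for _ in range(n)]
        b = a[:]
        for v in rng.sample(range(n), d):
            b[v] = not b[v]          # flip exactly d positions
        xs.append(sat_count(a))
        ys.append(sat_count(b))
    mx, my = sum(xs) / samples, sum(ys) / samples
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / samples
    vx = sum((x - mx) ** 2 for x in xs) / samples
    vy = sum((y - my) ** 2 for y in ys) / samples
    return cov / (vx * vy) ** 0.5
```

Note the pleasant special case c = 1/2 with r = 3: (1/2)^3 cancels 1/2^3, so the limiting correlation is exactly zero.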

Tight Upper Bound on Splitting by Linear Combinations for Pigeonhole Principle

ABSTRACT. The usual DPLL algorithm uses splittings (branchings) on single Boolean variables. We consider an extension to allow splitting on linear combinations mod 2, which yields a search tree called a linear splitting tree.
We prove that the pigeonhole principle has linear splitting trees of size 2^{O(n)}. This is near-optimal, since Itsykson and Sokolov [2014] proved a 2^{Ω(n)} lower bound. It improves on the size 2^{Θ(n log n)} required for splitting on single variables; thus the pigeonhole principle exhibits a gap between linear splitting and the usual splitting on single variables. This is of particular interest since the pigeonhole principle is not based on linear constraints. We further prove that the perfect matching principle has splitting trees of size 2^{O(n)}.

12:30-14:00 Lunch Break
15:00-15:30 Coffee Break
15:30-17:30 Session 23: SAT and beyond (SAT 2016)
Location: Auditorium Labri
Extreme Cases in SAT Problems

ABSTRACT. With the increasing performance of SAT solvers, many distinct problems, coming from very disparate fields, are added to the pool of Application problems regularly used to rank solvers. These problems are also widely used to measure the positive impact of any new idea. We show in this paper that a number of these problems exhibit extreme behaviors that any SAT solver must deal with. We show that, by adding a few specialized indicators, we can let Glucose choose between four strategies to obtain an important improvement on the set of the hardest problems from all the competitions between 2002 and 2013 included. Moreover, once the SAT solver has been specialized, we show that a new restart polarity policy can improve the results even further. Without the first specialization step mentioned above, this new and effective policy would have been judged inefficient. Our final Glucose is capable of solving 20% more problems than the original one, while also speeding up UNSAT answers.

Predicate elimination for preprocessing in first-order theorem proving

ABSTRACT. Preprocessing plays a major role in efficient propositional reasoning but has been less studied in first-order theorem proving. In this paper we propose a predicate elimination procedure which can be used as a preprocessing step in first-order theorem proving. We describe how this procedure is implemented in iProver and show that many problems in the TPTP library can be simplified using this procedure.
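
The propositional ancestor of this idea, Davis-Putnam variable elimination, gives a feel for what predicate elimination does at the first-order level. The sketch below (our own illustration, not iProver's implementation) removes a variable from a CNF by replacing all clauses mentioning it with their resolvents:

```python
def eliminate_var(clauses, v):
    """Davis-Putnam elimination of variable v (a positive int) from a
    CNF given as lists of non-zero DIMACS-style integer literals.
    The result is equisatisfiable with the input and mentions
    neither v nor -v."""
    pos = [c for c in clauses if v in c]
    neg = [c for c in clauses if -v in c]
    rest = [c for c in clauses if v not in c and -v not in c]
    resolvents = []
    for p in pos:
        for q in neg:
            # Resolve p and q on v, dropping both pivot literals.
            r = sorted(set(l for l in p + q if l not in (v, -v)))
            if not any(-l in r for l in r):   # discard tautologies
                resolvents.append(r)
    return rest + resolvents
```

For instance, eliminating variable 1 from (x1 ∨ x2) ∧ (¬x1 ∨ x3) leaves the single resolvent (x2 ∨ x3); resolving (x1 ∨ x2) against (¬x1 ∨ ¬x2) produces only a tautology, so the result is the empty clause set.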

Finding Finite Models in Multi-Sorted First Order Logic

ABSTRACT. This work extends the existing MACE-style finite model finding approach to multi-sorted first-order logic. The existing approach iteratively assumes increasing domain sizes and encodes the resulting ground problem as a SAT problem. When moving to the multi-sorted setting, each sort may have a different domain size, leading to an explosion in the search space. This paper focuses on methods to tame that search space. The key approach adds additional information to the SAT encoding to suggest which domains should be grown. Evaluation of an implementation of these techniques in the Vampire theorem prover shows that they dramatically reduce the search space and that this is an effective approach to finding finite models in multi-sorted first-order logic.

MCS Extraction with Sublinear Oracle Calls

ABSTRACT. Given an inconsistent set of constraints, an often studied problem is to compute an irreducible subset of the constraints which, if relaxed, make the remaining constraints consistent. In the case of unsatisfiable propositional formulas in conjunctive normal form, such irreducible sets of constraints are referred to as Minimal Correction Subsets (MCSes). A number of efficient algorithms have been proposed in recent years, which exploit a wide range of insights into the MCS extraction problem. One open question is to find the best worst-case number of calls to a SAT oracle, when the calls to the oracle are kept simple, and given reasonable definitions of simple oracle calls. This paper develops novel algorithms for computing MCSes which, in specific settings, are guaranteed to require asymptotically fewer than linear calls to a SAT oracle, where the oracle calls can be viewed as simple. The experimental results, obtained on existing problem instances, demonstrate that the new algorithms contribute to improving the state of the art.
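
As a baseline for comparison, the classical linear-search algorithm computes one MCS with one oracle call per clause; the paper's contribution is to get below that linear count. A toy sketch of the baseline, with a brute-force stand-in for the SAT oracle (names and encoding are ours):

```python
from itertools import product

def brute_sat(clauses, n):
    """Toy SAT oracle: brute-force search over all assignments of n
    variables.  Clauses are lists of non-zero DIMACS-style ints."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in cl)
               for cl in clauses):
            return True
    return False

def basic_mcs(clauses, n):
    """Linear-search MCS extraction: grow a satisfiable subset clause
    by clause; the indices of the clauses left out form one MCS.
    Uses exactly one oracle call per clause -- the linear baseline
    that sublinear algorithms improve on."""
    sat_subset, mcs = [], []
    for i, cl in enumerate(clauses):
        if brute_sat(sat_subset + [cl], n):
            sat_subset.append(cl)
        else:
            mcs.append(i)
    return mcs
```

On the formula (x1) ∧ (¬x1) ∧ (x1 ∨ x2), the scan keeps the first clause, rejects the second, and keeps the third, so {¬x1} (index 1) is reported as an MCS: removing it makes the remaining clauses consistent, and adding it back makes them inconsistent again.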