09:00 | PRESENTER: Olaf Beyersdorff ABSTRACT. To date, we know only a few handcrafted quantified Boolean formulas (QBFs) that are hard for central QBF resolution systems such as Q-Res and QU-Res, and only one specific QBF family separating Q-Res from QU-Res. Here we provide a general method to construct hard formulas for Q-Res and QU-Res. The construction uses simple propositional formulas (e.g., minimally unsatisfiable formulas) in combination with easy QBF gadgets ($\Sigma_2^b$ formulas without constant winning strategies). This leads to a host of new hard formulas, including new classes of hard random QBFs. We further present generic constructions for formulas separating Q-Res and QU-Res, and for separating Q-Res and long-distance Q-Res. |
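To make the gadget condition concrete, a textbook-style example (not necessarily one of the gadgets used in the paper) of a false $\Sigma_2^b$ formula without a constant winning strategy is

$$\Phi \;=\; \exists x\,\forall u\;(x \leftrightarrow u).$$

$\Phi$ is false, since the universal player can always answer $u := \neg x$; but against either constant strategy $u \equiv 0$ or $u \equiv 1$, the existential player wins by setting $x$ to the same constant, so the only winning strategy for the universal player is non-constant.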
09:30 | Should decisions in QCDCL follow prefix order? PRESENTER: Benjamin Böhm ABSTRACT. Quantified conflict-driven clause learning (QCDCL) is one of the main solving approaches for quantified Boolean formulas (QBF). One of the differences between QCDCL and propositional CDCL is that QCDCL typically follows the prefix order of the QBF when making decisions. We investigate an alternative model for QCDCL solving where decisions can be made in arbitrary order. The resulting system QCDCL-ANY is still sound and terminating, but does not necessarily allow learning asserting clauses or cubes. To address this potential drawback, we additionally introduce two subsystems that guarantee to always learn asserting clauses (QCDCL-UNI-ANY) and asserting cubes (QCDCL-EXI-ANY), respectively. We model all four approaches as formal proof systems and show that QCDCL-UNI-ANY is exponentially better than QCDCL on false formulas, whereas QCDCL-EXI-ANY is exponentially better than QCDCL on true QBFs. Technically, this involves constructing specific QBF families and showing lower and upper bounds in the respective proof systems. We complement our theoretical study with initial experiments that confirm our theoretical findings. |
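The difference between the two decision schemes can be sketched in a few lines; the following is our own illustrative sketch, not the paper's formal proof systems:

```python
import random

# prefix: list of (quantifier, variable) pairs in prefix order,
# e.g. [('forall', 1), ('exists', 2)]; assignment: set of assigned variables

def decide_prefix_order(prefix, assignment):
    # classical QCDCL: only the leftmost unassigned variable may be decided
    for _, var in prefix:
        if var not in assignment:
            return var
    return None

def decide_any_order(prefix, assignment):
    # QCDCL-ANY: any unassigned variable is a legal decision
    unassigned = [var for _, var in prefix if var not in assignment]
    return random.choice(unassigned) if unassigned else None
```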
10:00 | Changing Partitions in Rectangle Decision Lists ABSTRACT. Rectangle decision lists are a form of decision lists that were recently shown to have applications in the proof complexity of certain OBDD-based QBF solvers. We consider a version of rectangle decision lists with changing partitions, which corresponds to QBF solvers that may change the variable order of the OBDDs they produce. We show that allowing even a single partition change generally leads to exponentially more succinct decision lists. More generally, we show that there is a succinctness hierarchy: for every $k\in \mathbb{N}$, when going from $k$ partition changes to $k+1$, there are functions that can be represented exponentially more succinctly. As an application, we show a similar hierarchy for OBDD-based QBF solvers, which demonstrates that such solvers could in principle be far more efficient if they were allowed to change the variable orders of the OBDDs they construct. |
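The following small evaluator illustrates, in our own formalisation with hypothetical rule encodings, what a rectangle decision list with per-rule partitions computes: a rule fires if the input's projections onto the rule's two blocks lie in the rule's respective sets, and the first firing rule decides the output.

```python
def eval_rdl(rules, x):
    # each rule: (block_a, set_a, block_b, set_b, output), where block_a and
    # block_b are index tuples partitioning the positions of x; with changing
    # partitions, each rule may use a different split
    for block_a, set_a, block_b, set_b, out in rules:
        proj_a = tuple(x[i] for i in block_a)
        proj_b = tuple(x[i] for i in block_b)
        if proj_a in set_a and proj_b in set_b:  # x lies in the rectangle
            return out
    return 0  # default value if no rule fires

rules = [((0, 1), {(1, 0)}, (2, 3), {(0, 0), (1, 1)}, 1),  # partition {0,1}|{2,3}
         ((0, 2), {(0, 0)}, (1, 3), {(1, 1)}, 1)]          # partition {0,2}|{1,3}
print(eval_rdl(rules, (1, 0, 1, 1)))  # first rule fires -> 1
```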
12:10 | Towards a SAT Encoding for Quantum Circuits PRESENTER: Robert Wille ABSTRACT. Satisfiability testing (SAT) techniques are well established in classical computing, where they are used to solve a broad variety of problems, e.g., in the design of classical circuits and systems. Analogously to the classical realm, quantum algorithms are usually modelled as circuits, and similar design tasks need to be tackled. Thus, it is natural to ask whether these design tasks in the quantum realm can also be approached using SAT techniques. To the best of our knowledge, no SAT formulation for arbitrary quantum circuits exists, and it is unknown whether such an approach is feasible at all. In this work, we define a propositional SAT encoding that, in principle, can be applied to arbitrary quantum circuits. However, we show that, due to the inherent complexity of representing quantum states, constructing such an encoding is not feasible in general. Therefore, we establish general criteria for determining the feasibility of the proposed encoding and identify classes of quantum circuits fulfilling these criteria. We explicitly demonstrate how the proposed encoding can be applied to the class of Clifford circuits as a representative. Finally, we empirically demonstrate the applicability and efficiency of the proposed encoding for Clifford circuits. With these results, we lay the foundation for extending the ongoing success of SAT in classical circuit and system design to quantum circuits. |
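The abstract does not spell out which structure makes Clifford circuits tractable, but a standard candidate is the stabilizer formalism (Gottesman-Knill): an $n$-qubit stabilizer state is determined by $n$ Pauli generators, i.e., polynomially many bits, instead of $2^n$ amplitudes. A minimal sketch of the phase-free generator updates (our own illustration, not the paper's encoding):

```python
# A Pauli string over n qubits as two bit lists: x[i] and z[i] give the
# X and Z parts on qubit i (global phases omitted for brevity).

def apply_h(pauli, q):
    # conjugation by H swaps X and Z on qubit q
    x, z = pauli
    x[q], z[q] = z[q], x[q]

def apply_s(pauli, q):
    # conjugation by S maps X -> Y (= XZ up to phase) and fixes Z
    x, z = pauli
    z[q] ^= x[q]

def apply_cnot(pauli, c, t):
    # conjugation by CNOT: X on the control propagates to the target,
    # Z on the target propagates to the control
    x, z = pauli
    x[t] ^= x[c]
    z[c] ^= z[t]

# two generators of the 2-qubit state |00>: Z on qubit 0 and Z on qubit 1
gens = [([0, 0], [1, 0]), ([0, 0], [0, 1])]
for g in gens:                 # run the circuit H(0); CNOT(0,1) on both
    apply_h(g, 0)
    apply_cnot(g, 0, 1)
print(gens)                    # XX and ZZ: a Bell state, still 2n bits each
```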
Lunch will be held in Taub lobby (CP, LICS, ICLP) and in The Grand Water Research Institute (KR, FSCD, SAT).
14:00 | SAT Preprocessors and Symmetry ABSTRACT. Exploiting symmetries is an indispensable approach for solving certain classes of difficult SAT instances. Numerous techniques for the use of symmetry in SAT have evolved over the past few decades. But no matter how symmetries are used precisely, they have to be detected first. We investigate how to detect more symmetry, faster. The initial idea is to reap the benefits of SAT preprocessing for symmetry detection. As it turns out, applying an off-the-shelf preprocessor before handling symmetry does not work: the preprocessor can haphazardly remove symmetry from formulas, severely impeding symmetry exploitation. Our main contribution is a theoretical framework that captures the relationship between SAT preprocessing techniques and symmetry. Based on this, we create a symmetry-aware preprocessor that can be applied safely before handling symmetry. We then demonstrate that applying the preprocessor not only substantially decreases symmetry detection and breaking times, but also uncovers hidden symmetry not detectable in the original instances. The novelty of our approach lies in the fact that we use SAT techniques for symmetry detection, yielding a new application-specific approach to symmetry detection for SAT. |
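For background, symmetry detection for SAT is typically done by reducing the CNF to a colored graph whose automorphisms correspond to syntactic symmetries of the formula, and then calling a graph-automorphism tool such as saucy or bliss. A minimal sketch of that standard reduction (our naming, and without the usual optimizations such as special handling of binary clauses):

```python
def cnf_to_colored_graph(num_vars, clauses):
    # vertices 0..2n-1 are literals (2i for x_{i+1}, 2i+1 for its negation);
    # vertices 2n.. are clauses; colors keep literals and clauses apart
    edges, colors = [], []
    for v in range(num_vars):
        edges.append((2 * v, 2 * v + 1))  # consistency edge between x and -x
        colors += [0, 0]
    for ci, clause in enumerate(clauses):
        cv = 2 * num_vars + ci
        colors.append(1)
        for lit in clause:
            lv = 2 * (abs(lit) - 1) + (1 if lit < 0 else 0)
            edges.append((cv, lv))        # clause-literal incidence
    return edges, colors

edges, colors = cnf_to_colored_graph(3, [[1, -2], [2, 3], [-1, -3]])
```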
14:30 | PRESENTER: Jakob Bach ABSTRACT. Hard combinatorial problems such as propositional satisfiability are ubiquitous. The holy grail is a solution method that shows good performance on all problem instances. However, new approaches emerge regularly, some of which are complementary to existing solvers in that they run faster on some instances but not on many others. While portfolios, i.e., sets of solvers, have been touted as useful, putting together such portfolios also needs to be efficient. In particular, it remains an open question how well portfolios can exploit the complementarity of solvers. This paper features a comprehensive analysis of portfolios of recent SAT solvers, namely those from the SAT Competitions 2020 and 2021. We determine optimal portfolios with exact and approximate approaches and study the impact of the portfolio size $k$ on performance. We also investigate how effective off-the-shelf prediction models are for instance-specific solver recommendations. One result is that the portfolios found with an approximate approach are as good as the optimal solution in practice. We also observe that marginal returns decrease very quickly with larger $k$, and that our prediction models do not yield better performance beyond very small portfolio sizes. |
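Under the usual oracle model, where each instance is credited to its fastest portfolio member, exact and greedy portfolio selection can be sketched as follows; this is our own minimal rendering with hypothetical data, not the paper's implementation:

```python
from itertools import combinations

def portfolio_cost(runtimes, portfolio):
    # runtimes[i][s]: runtime of solver s on instance i (timeouts as a penalty)
    return sum(min(row[s] for s in portfolio) for row in runtimes)

def best_k_portfolio(runtimes, k):
    # exact: enumerate all k-subsets of solvers
    solvers = range(len(runtimes[0]))
    return min(combinations(solvers, k),
               key=lambda p: portfolio_cost(runtimes, p))

def greedy_portfolio(runtimes, k):
    # approximate: repeatedly add the solver with the largest marginal gain
    chosen = []
    for _ in range(k):
        rest = [s for s in range(len(runtimes[0])) if s not in chosen]
        chosen.append(min(rest, key=lambda s: portfolio_cost(runtimes, chosen + [s])))
    return chosen

runtimes = [[10, 1, 9], [1, 10, 9], [5, 5, 1]]  # hypothetical 3x3 matrix
print(best_k_portfolio(runtimes, 2), greedy_portfolio(runtimes, 2))
```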
15:00 | SAT-based Leximax Optimisation Algorithms PRESENTER: Mikolas Janota ABSTRACT. In many real-world problems, the goal is to optimise several objective functions at once. Usually, however, there is no single optimal objective vector; instead, there are many optimal objective vectors, known as Pareto-optima. Finding all Pareto-optima is computationally expensive, and their number can be too large for a user to analyse. A compromise can be made by defining an optimisation criterion that integrates all objective functions. In this paper we propose several SAT-based algorithms to solve multi-objective optimisation problems using the leximax criterion. The leximax criterion yields a Pareto-optimal solution with a small trade-off between the objective functions, which is suitable for problems without priorities between the objective functions. Experimental results on the Multi-Objective Package Upgradeability Optimisation problem show that the SAT-based algorithms outperform the Integer Linear Programming (ILP) approach when using non-commercial ILP solvers. Additionally, experimental results on selected instances from the MaxSAT Evaluation, adapted to the multi-objective domain, show that our approach outperforms the ILP approach using commercial solvers. |
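Leximax compares objective vectors by their worst components first: sort each vector in non-increasing order and compare lexicographically. A few lines make this precise (our illustration, independent of the paper's algorithms):

```python
def leximax_key(objs):
    # the non-increasingly sorted objective vector; leximax minimises this
    # tuple lexicographically: first the worst value, then the second worst, ...
    return tuple(sorted(objs, reverse=True))

candidates = [(3, 5, 2), (4, 4, 2), (5, 1, 1)]
print(min(candidates, key=leximax_key))  # (4, 4, 2): the most balanced optimum
```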
16:00 | PRESENTER: Christoph Jabs ABSTRACT. We explore a MaxSAT-based approach to bi-objective optimization. Bi-objective optimization refers to the task of finding so-called Pareto-optimal solutions in terms of two objective functions. Bi-objective optimization problems naturally arise in various real-world settings. For example, in the context of learning interpretable representations, such as decision rules, from data, one wishes to balance two objectives: the classification error and the size of the representation. Our approach is generally applicable to bi-objective optimization problems that admit propositional encodings. The approach makes heavy use of incremental SAT and draws inspiration from modern MaxSAT solving approaches. In particular, we describe several variants of the approach that arise from different approaches to MaxSAT solving. In addition to computing a single representative solution for each point of the Pareto front, the approach allows for enumerating all Pareto-optimal solutions. We empirically compare the efficiency of the approach to recent competing approaches, showing its practical benefits in the contexts of learning interpretable classification rules and bi-objective set covering. |
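For readers unfamiliar with the terminology: a solution is Pareto-optimal if no other solution is at least as good in both objectives and strictly better in one. A tiny filter over explicit objective points (illustration only; the paper's algorithms work with incremental SAT oracles rather than enumerated points):

```python
def pareto_front(points):
    # keep exactly the non-dominated points (minimisation in both objectives)
    def dominates(q, p):
        return q != p and q[0] <= p[0] and q[1] <= p[1]
    return [p for p in points if not any(dominates(q, p) for q in points)]

print(pareto_front([(1, 4), (2, 2), (3, 3), (4, 1)]))  # (3, 3) is dominated
```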
16:30 | PRESENTER: Andreas Niskanen ABSTRACT. Boolean satisfiability (SAT) solvers allow for incremental computations, which is key to the efficient iterative employment of SAT solvers in complex decision and optimization procedures, including maximum satisfiability (MaxSAT) solvers. However, enabling incremental computations on the level of constraint optimization remains a noticeable challenge. While incremental computations have been identified to have great potential in speeding up MaxSAT-based approaches to various real-world optimization problems, enabling incremental computations in MaxSAT remains largely unexplored. In this work, we contribute towards making incremental MaxSAT solving a reality. Firstly, building on the IPASIR interface for incremental SAT solving, we propose the IPAMIR interface for implementing incremental MaxSAT solvers and for developing applications that make use of incremental MaxSAT. Secondly, we extend our recent adaptation of the implicit-hitting-set-based MaxHS MaxSAT solver into a fully-fledged incremental MaxSAT solver that implements the IPAMIR specification in full, and detail in particular how, in addition to weight changes, assumptions are enabled without losing incrementality. Thirdly, we provide further empirical evidence on the benefits of incremental MaxSAT solving under assumptions. |
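Since IPAMIR, like IPASIR, is a C interface, the following is only a hypothetical Python-flavoured rendering of the workflow it supports (persistent hard clauses, weighted soft literals, per-call assumptions), with a deliberately naive brute-force solve; all names here are ours:

```python
from itertools import product

class ToyIncrementalMaxSAT:
    """Toy stand-in for an IPAMIR-style incremental MaxSAT solver."""
    def __init__(self):
        self.hard, self.soft, self.assumps = [], [], []
    def add_hard(self, clause):            # clauses persist across solve calls
        self.hard.append(clause)
    def add_soft_lit(self, lit, weight):   # falsifying lit costs weight
        self.soft.append((lit, weight))
    def assume(self, lit):                 # assumptions hold for one call only
        self.assumps.append(lit)
    def solve(self):                       # returns the optimal cost
        lits = [l for c in self.hard for l in c] + [l for l, _ in self.soft]
        nv = max(abs(l) for l in lits + self.assumps)
        best = None
        for bits in product([False, True], repeat=nv):
            val = lambda l: bits[abs(l) - 1] ^ (l < 0)
            if all(any(val(l) for l in c) for c in self.hard) \
                    and all(val(l) for l in self.assumps):
                cost = sum(w for l, w in self.soft if not val(l))
                best = cost if best is None else min(best, cost)
        self.assumps = []                  # cleared after each call
        return best

s = ToyIncrementalMaxSAT()
s.add_hard([1, 2])                         # x1 or x2
s.add_soft_lit(-1, 2)                      # prefer x1 false (weight 2)
s.add_soft_lit(-2, 1)                      # prefer x2 false (weight 1)
print(s.solve())                           # 1: satisfy the clause via x2
s.assume(1)                                # next call only: force x1 true
print(s.solve())                           # 2: the cheaper option is blocked
```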
17:00 | Analysis of Core-Guided MaxSAT Using Cores and Correction Sets PRESENTER: Nina Narodytska ABSTRACT. Core-guided solvers are among the best-performing algorithms for solving maximum satisfiability problems. These solvers perform a sequence of relaxations of the formula, increasing the lower bound on the optimal solution at each relaxation step. In addition, the relaxations allow generating a large set of minimal cores (MUSes) of the original formula. However, the properties of these cores have not been investigated. In contrast, maximum-hitting-set-based (maxhs) solvers also extract a set of cores, and these cores possess useful properties, e.g., the size of the minimal hitting set of the discovered cores equals the optimum when a maxhs solver terminates. In this work, we analyze minimal cores and minimum correction sets (MinCSes) of the input formula and its subformulas that core-guided solvers produce during execution. We demonstrate that the set of MUSes that a core-guided algorithm discovers possesses the same key properties as the cores extracted by maxhs solvers. For instance, we prove that the size of a minimum hitting set of the discovered cores equals the optimal cost. We also show that core-guided solvers discover all MinCSes of special subformulas of the input formula. We discuss theoretical and practical implications of our results. |
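The maxhs-side property referenced here is the implicit-hitting-set duality: writing $\mathcal{K}$ for the set of all cores of a MaxSAT instance with soft clauses $S$ and cost function $c$,

$$\mathrm{OPT} \;=\; \min\{\, c(H) \;:\; H \subseteq S,\ H \cap C \neq \emptyset \text{ for all } C \in \mathcal{K} \,\},$$

i.e., the optimal cost equals the minimum cost of a hitting set of the cores. The paper's result is that the cores discovered along a core-guided run already suffice for this equality.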