FLOC 2022: FEDERATED LOGIC CONFERENCE 2022
SAT ON TUESDAY, AUGUST 2ND

08:30-09:00 Coffee & Refreshments
09:00-10:30 Session 44G: QBF-1
09:00
Classes of Hard Formulas for QBF Resolution
PRESENTER: Olaf Beyersdorff

ABSTRACT. To date, we know only a few handcrafted quantified Boolean formulas (QBFs) that are hard for central QBF resolution systems such as Q-Res and QU-Res, and only one specific QBF family to separate Q-Res and QU-Res. Here we provide a general method to construct hard formulas for Q-Res and QU-Res. The construction uses simple propositional formulas (e.g. minimally unsatisfiable formulas) in combination with easy QBF gadgets (\Sigma_2^b formulas without constant winning strategies). This leads to a host of new hard formulas, including new classes of hard random QBFs. We further present generic constructions for formulas separating Q-Res and QU-Res, and for separating Q-Res and long-distance-Q-Res.

09:30
Should decisions in QCDCL follow prefix order?
PRESENTER: Benjamin Böhm

ABSTRACT. Quantified conflict-driven clause learning (QCDCL) is one of the main solving approaches for quantified Boolean formulas (QBF). One of the differences between QCDCL and propositional CDCL is that QCDCL typically follows the prefix order of the QBF for making decisions. We investigate an alternative model for QCDCL solving where decisions can be made in arbitrary order. The resulting system QCDCL-ANY is still sound and terminating, but does not necessarily always allow learning asserting clauses or cubes. To address this potential drawback, we additionally introduce two subsystems that guarantee to always learn asserting clauses (QCDCL-UNI-ANY) and asserting cubes (QCDCL-EXI-ANY), respectively. We model all four approaches by formal proof systems and show that QCDCL-UNI-ANY is exponentially better than QCDCL on false formulas, whereas QCDCL-EXI-ANY is exponentially better than QCDCL on true QBFs. Technically, this involves constructing specific QBF families and showing lower and upper bounds in the respective proof systems. We complement our theoretical study with initial experiments that confirm our theoretical findings.

10:00
Changing Partitions in Rectangle Decision Lists

ABSTRACT. Rectangle decision lists are a form of decision lists that were recently shown to have applications in the proof complexity of certain OBDD-based QBF-solvers. We consider a version of rectangle decision lists with changing partitions, which corresponds to QBF-solvers that may change the variable order of the OBDDs they produce. We show that even allowing a single partition change generally leads to exponentially more succinct decision lists. More generally, we show that there is a succinctness hierarchy: for every $k\in \mathbb{N}$, when going from $k$ partition changes to $k+1$, there are functions that can be represented exponentially more succinctly. As an application, we show a similar hierarchy for OBDD-based QBF-solvers, which implies that such solvers could in principle be far more efficient if changing the variable orders of the constructed OBDDs is allowed.
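For orientation, a decision list is an ordered sequence of (test, output) rules evaluated top to bottom, and a rectangle with respect to a variable partition (X1, X2) accepts exactly a product set A × B of assignments to the two blocks. The sketch below is an illustrative reading of these notions (not code from the paper); the rules, bit encodings, and sample input are made up for the example.

```python
def eval_rectangle_dl(rules, default, left_bits, right_bits):
    """Evaluate a rectangle decision list over a fixed partition.

    rules : list of ((A, B), output) where A is a set of accepted left-block
            assignments and B a set of accepted right-block assignments,
            so the test accepts exactly the rectangle A x B.
    The first rule whose rectangle contains the input decides the output."""
    for (A, B), out in rules:
        if left_bits in A and right_bits in B:
            return out
    return default

# Toy example on the partition ({x1, x2} | {x3}); assignments as bit tuples.
rules = [
    (({(0, 0)}, {(0,), (1,)}), 1),     # x1 = x2 = 0, any x3            -> 1
    (({(0, 1), (1, 0)}, {(1,)}), 0),   # exactly one of x1, x2 and x3=1 -> 0
]
print(eval_rectangle_dl(rules, default=1, left_bits=(0, 1), right_bits=(1,)))  # -> 0
```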

10:30-11:00 Coffee Break
11:10-12:10 Session 48: Keynote
11:10
Information Structures for Privacy and Fairness

ABSTRACT. The increasingly pervasive use of big data and machine learning is raising various ethical issues, in particular privacy and fairness. In this talk, I will discuss some frameworks to understand and mitigate these issues, focusing on iterative methods coming from information theory and statistics. In the area of privacy protection, differential privacy (DP) and its variants are the most successful approaches to date. One of the fundamental issues of DP is how to reconcile the loss of information that it implies with the need to preserve the utility of the data. In this regard, a useful tool to recover utility is the Iterative Bayesian Update (IBU), an instance of the famous Expectation-Maximization method from statistics. I will show that the IBU, combined with the metric version of DP, outperforms the state of the art, which is based on algebraic methods combined with the Randomized Response mechanism, widely adopted by the Big Tech industry (Google, Apple, Amazon, ...). Furthermore, I will discuss a surprising duality between the IBU and one of the methods used to enhance metric DP, namely the Blahut-Arimoto algorithm from Rate-Distortion Theory. Finally, I will discuss the issue of biased decisions in machine learning, and will show that the IBU can also be applied in this domain to ensure a fairer treatment of disadvantaged groups.
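The Iterative Bayesian Update mentioned in the abstract is an EM-style fixed-point iteration. The following is a minimal sketch of its textbook form (not the speaker's implementation), assuming a known noise channel with channel[x][y] = P(report y | true value x); the toy numbers are made up.

```python
import numpy as np

def ibu(observed, channel, iters=100):
    """Iterative Bayesian Update: estimate the distribution of true values
    from the empirical distribution of noisy reports.

    observed : length-m array, empirical distribution over reported values y
    channel  : n x m array, channel[x][y] = P(report y | true value x)"""
    n = channel.shape[0]
    p = np.full(n, 1.0 / n)                     # uniform prior over true values
    for _ in range(iters):
        q = np.maximum(p @ channel, 1e-12)      # current estimate of P(y)
        posterior = (p[:, None] * channel) / q  # posterior[x][y] = P(x | y)
        p = posterior @ observed                # EM step: average posterior over reports
        p /= p.sum()                            # guard against numerical drift
    return p

# Toy example: two true values, reports flipped with probability 0.3.
channel = np.array([[0.7, 0.3],
                    [0.3, 0.7]])
observed = np.array([0.58, 0.42])               # empirical report frequencies
print(ibu(observed, channel))                   # converges to roughly [0.7, 0.3]
```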

 

Brief Bio: Catuscia Palamidessi is Director of Research at INRIA Saclay (since 2002), where she leads the team COMETE. She was a Full Professor at the University of Genova, Italy (1994-1997), and at Penn State University, USA (1998-2002). Palamidessi's research interests include Privacy, Machine Learning, Fairness, Secure Information Flow, Formal Methods, and Concurrency. In 2019 she obtained an ERC Advanced Grant to conduct research on Privacy and Machine Learning. She has been PC chair of various conferences, including LICS and ICALP, and PC member of more than 120 international conferences. She is on the editorial boards of several journals, including IEEE Transactions on Dependable and Secure Computing, Mathematical Structures in Computer Science, TheoretiCS, the Journal of Logical and Algebraic Methods in Programming, and Acta Informatica. She serves on the Executive Committees of ACM SIGLOG, CONCUR, and CSL.

http://www.lix.polytechnique.fr/~catuscia/

12:10-12:40 Session 49C: Application of SAT to Quantum Circuits
12:10
Towards a SAT Encoding for Quantum Circuits
PRESENTER: Robert Wille

ABSTRACT. Satisfiability Testing (SAT) techniques are well-established in classical computing, where they are used to solve a broad variety of problems, e.g., in the design of classical circuits and systems. Analogous to the classical realm, quantum algorithms are usually modelled as circuits and similar design tasks need to be tackled. Thus, it is natural to ask whether these design tasks in the quantum realm can also be approached using SAT techniques. To the best of our knowledge, no SAT formulation for arbitrary quantum circuits exists and it is unknown whether such an approach is feasible at all. In this work, we define a propositional SAT encoding that, in principle, can be applied to arbitrary quantum circuits. However, we show that due to the inherent complexity of representing quantum states, constructing such an encoding is not feasible in general. Therefore, we establish general criteria for determining the feasibility of the proposed encoding and identify classes of quantum circuits fulfilling these criteria. We explicitly demonstrate how the proposed encoding can be applied to the class of Clifford circuits as a representative. Finally, we empirically demonstrate the applicability and efficiency of the proposed encoding for Clifford circuits. With these results, we lay the foundation for carrying the ongoing success of SAT in classical circuit and system design over to quantum circuits.
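One reason Clifford circuits are a natural target for a propositional encoding is that their simulation state, a stabilizer tableau, is purely Boolean and each gate updates it with XOR/AND operations (the Aaronson-Gottesman rules). The sketch below illustrates those update rules directly in Python as background intuition; it is not the encoding proposed in the paper, and only the stabilizer rows of the tableau are tracked.

```python
def apply_h(x, z, r, q):
    """Hadamard on qubit q (Aaronson-Gottesman tableau update):
    for every row i: r_i ^= x_iq & z_iq, then swap x_iq and z_iq."""
    for i in range(len(r)):
        r[i] ^= x[i][q] & z[i][q]
        x[i][q], z[i][q] = z[i][q], x[i][q]

def apply_cnot(x, z, r, c, t):
    """CNOT with control c and target t:
    r_i ^= x_ic & z_it & (x_it ^ z_ic ^ 1); x_it ^= x_ic; z_ic ^= z_it."""
    for i in range(len(r)):
        r[i] ^= x[i][c] & z[i][t] & (x[i][t] ^ z[i][c] ^ 1)
        x[i][t] ^= x[i][c]
        z[i][c] ^= z[i][t]

# Bell-state preparation from |00>: stabilizers Z1, Z2 become X1X2, Z1Z2.
x = [[0, 0], [0, 0]]
z = [[1, 0], [0, 1]]
r = [0, 0]
apply_h(x, z, r, 0)
apply_cnot(x, z, r, 0, 1)
print(x, z, r)   # [[1, 1], [0, 0]] [[0, 0], [1, 1]] [0, 0]
```

Since every update is an XOR or AND of existing bits, the tableau variables before and after each gate can be related by CNF constraints, which is what makes a SAT encoding of Clifford circuits feasible.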

12:30-14:00 Lunch Break

Lunch will be held in Taub lobby (CP, LICS, ICLP) and in The Grand Water Research Institute (KR, FSCD, SAT).

14:00-15:30 Session 50H: SAT
14:00
SAT Preprocessors and Symmetry

ABSTRACT. Exploitation of symmetries is an indispensable approach to solve certain classes of difficult SAT instances. Numerous techniques for the use of symmetry in SAT have evolved over the past few decades. But no matter how symmetries are used precisely, they have to be detected first. We investigate how to detect more symmetry, faster. The initial idea is to reap the benefits of SAT preprocessing for symmetry detection. As it turns out, applying an off-the-shelf preprocessor before handling symmetry does not work: the preprocessor can haphazardly remove symmetry from formulas, severely impeding symmetry exploitation.

Our main contribution is a theoretical framework that captures the relationship between SAT preprocessing techniques and symmetry. Based on this, we create a symmetry-aware preprocessor that can be applied safely before handling symmetry. We then demonstrate that applying the preprocessor not only substantially decreases symmetry detection and breaking times, but also uncovers hidden symmetry not detectable in the original instances. The novelty of our approach lies in the fact that we make use of SAT techniques for symmetry detection, yielding a new application-specific approach to symmetry detection for SAT.
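To make the underlying notion concrete: a permutation of the literals is a (syntactic) symmetry of a CNF formula if it maps the clause set onto itself, and removing or rewriting clauses can destroy such symmetries. A small illustrative sketch (the toy clauses, the permutation encoding, and the removed clause are chosen for the example, not taken from the paper):

```python
def apply_perm(clause, perm):
    """Apply a literal permutation given as {var: (target_var, sign)}."""
    out = []
    for lit in clause:
        tgt, sign = perm[abs(lit)]
        out.append(sign * (1 if lit > 0 else -1) * tgt)
    return frozenset(out)

def is_symmetry(cnf, perm):
    """A permutation is a syntactic symmetry iff it maps the clause set to itself."""
    clauses = {frozenset(c) for c in cnf}
    return {apply_perm(c, perm) for c in cnf} == clauses

# Swapping x1 and x2 is a symmetry of this formula ...
cnf = [(1, 2), (-1, 3), (-2, 3), (-3,)]
swap12 = {1: (2, 1), 2: (1, 1), 3: (3, 1)}
assert is_symmetry(cnf, swap12)
# ... but if a simplification step removes, say, the clause (-2, 3),
# the symmetry disappears from the resulting formula:
assert not is_symmetry([(1, 2), (-1, 3), (-3,)], swap12)
```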

14:30
A Comprehensive Study of k-Portfolios of Recent SAT Solvers
PRESENTER: Jakob Bach

ABSTRACT. Hard combinatorial problems such as propositional satisfiability are ubiquitous. The holy grail is a solution method that shows good performance on all problem instances. However, new approaches emerge regularly, some of which are complementary to existing solvers in that they run faster on some instances but not on many others. While portfolios, i.e., sets of solvers, have been touted as useful, putting together such portfolios also needs to be efficient. In particular, it remains an open question how well portfolios can exploit the complementarity of solvers. This paper features a comprehensive analysis of portfolios of recent SAT solvers, namely the ones from the SAT Competitions 2020 and 2021. We determine optimal portfolios with exact and approximate approaches and study the impact of the portfolio size k on performance. We also investigate how effective off-the-shelf prediction models are for instance-specific solver recommendations. One result is that the portfolios found with an approximate approach are as good as the optimal solution in practice. We also observe that marginal returns decrease very quickly with larger k, and that our prediction models do not yield better performance beyond very small portfolio sizes.
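For intuition, a k-portfolio is typically evaluated by its virtual-best runtime: on each instance, only the best solver in the portfolio counts. The sketch below shows this evaluation together with a simple greedy marginal-gain construction; the abstract does not specify which exact and approximate methods the paper uses, so this is only an illustrative baseline with made-up runtimes.

```python
def portfolio_runtime(runtimes, portfolio):
    """Virtual-best runtime: per instance, take the best solver in the portfolio."""
    return sum(min(row[s] for s in portfolio) for row in runtimes)

def greedy_portfolio(runtimes, k):
    """Greedily add the solver giving the largest marginal runtime reduction."""
    n_solvers = len(runtimes[0])
    chosen = []
    for _ in range(k):
        best = min(
            (s for s in range(n_solvers) if s not in chosen),
            key=lambda s: portfolio_runtime(runtimes, chosen + [s]),
        )
        chosen.append(best)
    return chosen

# runtimes[i][s] = runtime of solver s on instance i (e.g. PAR-2 scores)
runtimes = [[10.0, 3.0, 7.0],
            [2.0, 9.0, 8.0],
            [6.0, 5.0, 1.0]]
print(greedy_portfolio(runtimes, k=2))   # -> [2, 0] for this toy data
```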

15:00
SAT-based Leximax Optimisation Algorithms
PRESENTER: Mikolas Janota

ABSTRACT. In many real-world problems, the goal is to optimise several objective functions. However, there is usually no single optimal objective vector. Instead, there are many optimal objective vectors, known as Pareto-optima. Finding all Pareto-optima is computationally expensive, and the number of Pareto-optima can be too large for a user to analyse. A compromise can be made by defining an optimisation criterion that integrates all objective functions.

In this paper we propose several SAT-based algorithms to solve multi-objective optimisation problems using the leximax criterion. The leximax criterion is used to obtain a Pareto-optimal solution with a small trade-off between the objective functions, which is suitable in problems where there is an absence of priorities between the objective functions. Experimental results on the Multi-Objective Package Upgradeability Optimisation problem show that the SAT-based algorithms are able to outperform the Integer Linear Programming (ILP) approach when using non-commercial ILP solvers. Additionally, experimental results on selected instances from the MaxSAT evaluation adapted to the multi-objective domain show that our approach outperforms the ILP approach using commercial solvers.
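The leximax criterion orders objective vectors by comparing their values sorted in non-increasing order, so the worst objective is minimised first, then the second worst, and so on. A minimal sketch of this comparison (the paper's SAT-based optimisation algorithms themselves are not reproduced here, and the candidate vectors are made up):

```python
def leximax_key(objectives):
    """Sort the objective vector in non-increasing order.
    Comparing these keys lexicographically yields the leximax order:
    smaller keys are better, i.e. minimise the worst objective first."""
    return tuple(sorted(objectives, reverse=True))

assert leximax_key((3, 3)) < leximax_key((4, 1))   # (3, 3) preferred: worst 3 < worst 4
assert leximax_key((5, 2)) < leximax_key((5, 3))   # worst tied at 5, second worst decides

candidates = [(4, 1), (3, 3), (5, 0)]
print(min(candidates, key=leximax_key))            # -> (3, 3)
```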

15:30-16:00 Coffee Break
16:00-17:30 Session 54H: MaxSAT
16:00
MaxSAT-Based Bi-Objective Boolean Optimization
PRESENTER: Christoph Jabs

ABSTRACT. We explore a MaxSAT-based approach to bi-objective optimization. Bi-objective optimization refers to the task of finding so-called Pareto-optimal solutions in terms of two objective functions. Bi-objective optimization problems naturally arise in various real-world settings. For example, in the context of learning interpretable representations, such as decision rules, from data, one wishes to balance two objectives: the classification error and the size of the representation. Our approach is generally applicable to bi-objective optimization problems that allow for propositional encodings. The approach makes heavy use of incremental SAT and draws inspiration from modern MaxSAT solving approaches. In particular, we describe several variants of the approach which arise from different approaches to MaxSAT solving. In addition to computing a single representative solution for each point of the Pareto front, the approach allows for enumerating all Pareto-optimal solutions. We empirically compare the efficiency of the approach to recent competing approaches, showing the practical benefits of our approach in the contexts of learning interpretable classification rules and bi-objective set covering.
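For readers less familiar with the terminology, Pareto optimality for two minimisation objectives has a compact definition. The following illustrative sketch (independent of the paper's MaxSAT machinery, with made-up (error, size) pairs) filters a set of objective vectors down to its Pareto front:

```python
def dominates(a, b):
    """a dominates b (for minimisation) if a is no worse in both objectives
    and strictly better in at least one."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(points):
    """Keep exactly the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# e.g. (classification error, rule size) pairs for candidate decision rules
points = [(3, 10), (5, 4), (4, 7), (6, 4), (3, 12)]
print(pareto_front(points))   # -> [(3, 10), (5, 4), (4, 7)]
```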

16:30
Incremental Maximum Satisfiability
PRESENTER: Andreas Niskanen

ABSTRACT. Boolean satisfiability (SAT) solvers allow for incremental computations, which is key to the efficient iterative employment of SAT solvers in developing complex decision and optimization procedures, including maximum satisfiability (MaxSAT) solvers. However, enabling incremental computations on the level of constraint optimization remains a noticeable challenge. While incremental computations have been identified to have great potential in speeding up MaxSAT-based approaches for solving various real-world optimization problems, enabling incremental computations in MaxSAT remains largely unexplored. In this work, we contribute towards making incremental MaxSAT solving a reality. Firstly, building on the IPASIR interface for incremental SAT solving, we propose the IPAMIR interface for implementing incremental MaxSAT solvers and for developing applications making use of incremental MaxSAT. Secondly, we expand our recent adaptation of the implicit hitting set based MaxHS MaxSAT solver to a fully-fledged incremental MaxSAT solver that implements the IPAMIR specification in full, and detail in particular how, in addition to weight changes, assumptions are enabled without losing incrementality. Thirdly, we provide further empirical evidence on the benefits of incremental MaxSAT solving under assumptions.
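The assumption-based incremental pattern that IPASIR standardises, and that IPAMIR lifts to MaxSAT, can be illustrated with any incremental SAT binding. The sketch below assumes the third-party PySAT package (python-sat); the selector variables 10 and 11 and the clauses are made up for the example, and the IPAMIR C interface itself is not reproduced here.

```python
from pysat.solvers import Glucose3

solver = Glucose3()
solver.add_clause([1, 2])        # hard clause
solver.add_clause([-1, -10])     # soft clause (-1) guarded by selector 10
solver.add_clause([-2, -11])     # soft clause (-2) guarded by selector 11

# Enforce both soft clauses via assumptions; the same solver object is reused
# across calls, so learned clauses are kept between them.
if not solver.solve(assumptions=[10, 11]):
    print(solver.get_core())     # failed assumptions, e.g. [10, 11] (order may vary)

# Drop one soft clause simply by omitting its selector from the assumptions.
print(solver.solve(assumptions=[10]))   # True: satisfiable without the second soft clause
solver.delete()
```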

17:00
Analysis of Core-Guided MaxSAT Using Cores and Correction Sets
PRESENTER: Nina Narodytska

ABSTRACT. Core-guided solvers are among the best-performing algorithms for solving maximum satisfiability problems. These solvers perform a sequence of relaxations of the formula, increasing the lower bound on the optimal solution at each relaxation step. In addition, the relaxations allow generating a large set of minimal cores (MUSes) of the original formula. However, the properties of these cores have not been investigated. In contrast, implicit hitting set-based (MaxHS-style) solvers also extract a set of cores that possess a number of useful properties, e.g., the size of a minimum hitting set of the discovered cores equals the optimum when such a solver terminates.

In this work, we analyze the minimal cores and minimum correction sets (MinCSes) of the input formula and its sub-formulas that core-guided solvers produce during execution. We demonstrate that the set of MUSes that a core-guided algorithm discovers possesses the same key properties as the cores extracted by MaxHS-style solvers. For instance, we prove that the size of a minimum hitting set of the discovered cores equals the optimal cost. We also show that the algorithm discovers all MinCSes of special subformulas of the input formula. We discuss theoretical and practical implications of our results.
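For contrast with the core-guided algorithms analysed here, the implicit hitting set (MaxHS-style) loop whose properties the abstract refers to can be stated in a few lines. The sketch below assumes unit weights, a hypothetical sat_solve oracle, and a brute-force minimum hitting set; real solvers use an ILP or a dedicated hitting-set algorithm.

```python
from itertools import combinations

def min_hitting_set(cores, universe):
    """Smallest set of soft-clause indices intersecting every core (brute force)."""
    for k in range(len(universe) + 1):
        for subset in combinations(universe, k):
            if all(set(core) & set(subset) for core in cores):
                return set(subset)

def ihs_maxsat(hard, soft, sat_solve):
    """Implicit hitting set loop (MaxHS-style), unit weights.
    sat_solve(hard, active) is a hypothetical SAT oracle; `active` are indices
    into `soft`. It returns (True, model) if hard plus those soft clauses are
    satisfiable, else (False, core) with core a subset of `active`."""
    cores = []
    while True:
        hs = min_hitting_set(cores, range(len(soft)))
        active = [i for i in range(len(soft)) if i not in hs]
        sat, info = sat_solve(hard, active)
        if sat:
            # On termination, the optimal cost equals the size of the minimum
            # hitting set of the collected cores -- the property cited above.
            return len(hs), info
        cores.append(info)   # collect the new core and iterate
```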

17:30-18:30 Session 55: Logic Lounge
17:30
Thinking Fast and Slow in AI

ABSTRACT. Current AI systems lack several important human capabilities, such as adaptability, generalizability, self-control, consistency, common sense, and causal reasoning. We believe that existing cognitive theories of human decision making, such as the thinking fast and slow theory, can provide insights on how to advance AI systems towards some of these capabilities. In this talk, I will present the work done by IBM and collaborators in this space, including the definition of a general architecture that is based on fast/slow solvers and a metacognitive component. I will then present experimental results on the behavior of an instance of this architecture, for AI systems that make decisions about navigating in a constrained environment. The results will show how combining the fast and slow decision modalities allows the system to evolve over time and gradually pass from slow to fast thinking with enough experience, and that this greatly improves decision quality, resource consumption, and efficiency.

Bio:

Francesca Rossi is an IBM Fellow and the IBM AI Ethics Global Leader. She is a computer scientist with over 30 years of experience in AI research. Before joining IBM, she was a professor of computer science at the University of Padova, Italy, for 20 years. Her research interests focus on artificial intelligence, including constraint reasoning, preferences, multi-agent systems, computational social choice, and collective decision making. She is also interested in ethical issues in the development and behavior of AI systems, in particular for decision support systems for group decision making. She is a fellow of both AAAI and EurAI, has been president of IJCAI and Editor in Chief of the Journal of AI Research, and will be the next president of AAAI from July 2022. She co-leads the IBM AI Ethics Board and actively participates in many global multi-stakeholder initiatives on AI ethics. She is a member of the board of directors of the Partnership on AI and the industry representative on the steering committee of the Global Partnership on AI.