ABSTRACT. Isabelle LLVM is a tool to export verified LLVM code from Isabelle HOL. It shallowly embeds a fragment of LLVM into a state-error monad in Isabelle HOL. A code generator produces actual LLVM text, which can be linked against C/C++ programs. Correctness of programs is proved with a separation-logic-based reasoning infrastructure and a stepwise refinement framework built on top of it. Recently added features include structure types with self-pointers, union types, floating-point numbers, and parallel calls.

In this talk, we will give an overview of Isabelle LLVM and the associated verified software development process. We will, in particular, focus on parallelism.
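The state-error monad mentioned in the abstract can be illustrated with a small sketch. This is our own toy rendering in Python, not Isabelle-LLVM's actual HOL encoding: a computation is a function from a memory state to either a success triple or an error, so memory errors (the analogue of LLVM undefined behaviour) propagate through sequencing.

```python
# Toy state-error monad (our illustration, not Isabelle-LLVM's encoding):
# a computation maps a state to ('ok', result, new_state) or ('error', msg).

def ret(x):
    """Embed a pure value into the monad."""
    return lambda s: ("ok", x, s)

def bind(m, f):
    """Sequence two computations, propagating errors."""
    def run(s):
        r = m(s)
        if r[0] == "error":
            return r
        _, x, s2 = r
        return f(x)(s2)
    return run

def load(addr):
    """Read a memory cell; a dangling address is an error, not a crash."""
    def run(s):
        if addr not in s:
            return ("error", "invalid load")
        return ("ok", s[addr], s)
    return run

def store(addr, v):
    """Write a memory cell, returning the updated state."""
    def run(s):
        s2 = dict(s)
        s2[addr] = v
        return ("ok", (), s2)
    return run

# increment the cell at address 0
prog = bind(load(0), lambda x: store(0, x + 1))
print(prog({0: 41}))   # ('ok', (), {0: 42})
print(prog({}))        # ('error', 'invalid load')
```

Verification conditions for such programs are then stated over this monad, which is what the separation-logic infrastructure reasons about.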

A Linear Parallel Algorithm to Compute Strong and Branching Bisimilarity

ABSTRACT. The most efficient way to calculate strong bisimilarity is by finding the relational coarsest partition of a transition system. We provide the first linear-time algorithm to calculate strong bisimulation using parallel random access machines (PRAMs). More precisely, with n states, m transitions and |Act| <= m action labels, we provide an algorithm for max(n,m) processors that calculates strong bisimulation in time O(n + |Act|) and space O(n + m). A GPU implementation shows that the linear time bound is achievable on contemporary hardware. We also extend this algorithm to handle branching bisimulation, using n^2 processors in time O(n + |Act|).
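To make the problem being solved concrete, here is a sequential partition-refinement sketch of the relational coarsest partition computation (the PRAM algorithm of the talk is far more involved; this only illustrates what it computes). States, transitions, and the example system are our own.

```python
# Sequential partition refinement for strong bisimulation (illustration
# only; the talk's contribution is a linear-time PRAM version).
# States are 0..n-1; transitions are (src, action, dst) triples.

def strong_bisimulation(n, transitions):
    block = [0] * n                     # start with a single block
    while True:
        # signature of a state: the set of (action, successor-block) pairs
        sig = [frozenset((a, block[t]) for (s, a, t) in transitions if s == q)
               for q in range(n)]
        # refine: states stay together iff same block and same signature
        keys = {}
        new_block = []
        for q in range(n):
            k = (block[q], sig[q])
            new_block.append(keys.setdefault(k, len(keys)))
        if new_block == block:
            return block                # stable: coarsest partition reached
        block = new_block

# a -x-> b -y-> d  and  a -x-> c -y-> d : states b and c are bisimilar
ts = [(0, "x", 1), (0, "x", 2), (1, "y", 3), (2, "y", 3)]
blocks = strong_bisimulation(4, ts)
print(blocks[1] == blocks[2])   # True
print(blocks[0] == blocks[1])   # False
```

Each refinement round is where parallelism enters: the signatures of all states (and hence all transitions) can be computed independently, which is what the max(n,m)-processor bound exploits.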

ABSTRACT. The past two decades have witnessed an unprecedented improvement in the runtime performance of SAT solvers owing to clever software engineering and creative design of data structures. Yet most entries in the annual SAT competition retain the core architecture of MiniSat, which was designed from the perspective of single-core CPU architectures. On the other hand, since 2005 there has been a significant shift to heterogeneous architectures owing to the impending end of Dennard scaling. Consequently, it is no coincidence that recent breakthroughs in computer science have made significant use of the opportunities offered by such heterogeneous architectures. In this talk, I will discuss our novel multi-threaded CDCL-based framework, called GpuShareSat, designed to take advantage of CPU+GPU architectures.

The core underlying principle of our approach is to divide the tasks between the CPU and the GPU so as to achieve the best of both worlds. To demonstrate the practical efficiency of our framework, we augment the state-of-the-art multi-threaded solver glucose-syrup with GpuShareSat. Our empirical analysis demonstrates that glucose-syrup augmented with GpuShareSat solves 22 more instances than glucose-syrup alone on the SAT 2020 instances.

Dealing With Uncertainty Between Peers and Having the Best of Times With Distributed Systems

ABSTRACT. Diminishing increases in CPU frequency over recent years have led to an increasing demand for SAT and QBF solvers that use multiple cores at once when solving formulas. Multiple solving paradigms exist to parallelize these solvers, each suited to different problem families. One of them is Cube and Conquer, which splits formulas into sub-problems by applying assumptions. This lends itself well to parallel solving, as many sub-problems may be solved at the same time. Distributing such a solver is conceptually easy, but requires significant implementation effort that is mostly identical across solvers. To this end we created Paracooba 2, a framework for solvers based on splitting their (sub-)problems. Paracooba 2 provides the multithreading and the distribution over multiple hosts, letting a solving module be implemented in an easier, more abstract environment. In this talk, we share details about this implementation, some networking-related and conceptual challenges that arose during development, and how we overcame them.

12:30-14:00 Lunch Break

Lunches will be held in Taub hall and in The Grand Water Research Institute.

Goéland: A Concurrent Tableau-Based Theorem Prover

ABSTRACT. We describe Goéland, an automated theorem prover for first-order logic that relies on a concurrent search procedure to find tableau proofs, with concurrent processes corresponding to individual branches of the tableau. Since branch closure may require instantiating free variables shared across branches, processes communicate via channels to exchange information about substitutions used for closure. We present the proof-search procedure and its implementation, as well as experimental results obtained on problems from the TPTP library.
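The communication pattern described above can be sketched schematically. Goéland itself is written in Go with channels; the Python sketch below, with invented branches and substitutions, only illustrates the coordination problem: each branch process reports the substitutions that would close its branch, and because free variables are shared across branches, a single substitution must be acceptable to every branch.

```python
# Schematic sketch of branch processes exchanging closing substitutions
# over a channel (our illustration; branches and substitutions are made up,
# and no real tableau expansion happens here).
import queue
import threading

def branch_worker(branch_id, closing_subs, channel):
    # a real prover would search the branch for complementary literals;
    # here each branch simply knows which substitutions close it
    for sub in closing_subs:
        channel.put((branch_id, sub))
    channel.put((branch_id, None))      # signal: branch exhausted

def prove(branches):
    channel = queue.Queue()
    for i, subs in enumerate(branches):
        threading.Thread(target=branch_worker, args=(i, subs, channel)).start()
    candidates = [set() for _ in branches]
    done = 0
    while done < len(branches):
        bid, sub = channel.get()
        if sub is None:
            done += 1
            continue
        candidates[bid].add(sub)
        # shared free variables: one substitution must close every branch
        if all(sub in c for c in candidates):
            return sub
    common = set.intersection(*candidates)
    return min(common) if common else None

# branch 0 closes under {X := a} or {X := b}; branch 1 only under {X := a}
print(prove([[("X", "a"), ("X", "b")], [("X", "a")]]))   # ('X', 'a')
```

The interesting design point, which the sketch mirrors, is that no branch can commit to a closing substitution locally; agreement has to be negotiated through the channel.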

Structural Parameter Based Parallel Algorithms for Partially Weighted MaxSAT

ABSTRACT. The most general incarnation of the maximum satisfiability problem is the partially weighted version, in which we are given a propositional formula in conjunctive normal form with weights assigned to some of the clauses. The goal is to find an assignment that satisfies all unweighted clauses while the sum of the weights of the satisfied weighted clauses is maximized. The usual approach to identifying tractable fragments of this problem is via structural parameters, that is, properties of the underlying incidence graph of the input formula.
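The objective just defined can be made concrete with a small brute-force illustration (the example formula and weights are ours): hard, unweighted clauses are mandatory, and among the assignments satisfying them we maximize the total weight of satisfied soft clauses.

```python
# Brute-force partially weighted MaxSAT, for illustration only.
# Literal k means variable k, -k its negation.
from itertools import product

def pw_maxsat(n_vars, hard, soft):
    """hard: list of clauses; soft: list of (clause, weight) pairs."""
    best = None
    for bits in product([False, True], repeat=n_vars):
        assign = {v + 1: bits[v] for v in range(n_vars)}
        holds = lambda cl: any(assign[abs(l)] == (l > 0) for l in cl)
        if not all(holds(c) for c in hard):
            continue                     # hard clauses are mandatory
        value = sum(w for cl, w in soft if holds(cl))
        if best is None or value > best:
            best = value
    return best                          # None if the hard part is UNSAT

hard = [[1, 2]]                          # x1 or x2 must hold
soft = [([-1], 3), ([-2], 2), ([1, 2], 1)]
print(pw_maxsat(2, hard, soft))          # 4: x1=False, x2=True earns 3+1
```

Structural parameters of the incidence graph (clauses and variables as vertices, occurrences as edges) are what make instances of this problem tractable despite the exponential search this sketch performs.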

In this talk I will define well-known structural parameters and briefly review the fact that the partially weighted maximum satisfiability problem is fixed-parameter tractable when parameterized by any of them. I will give a short introduction to the framework of parallel parameterized complexity and discuss how these results transfer to the parallel setting. We will see that ever higher levels of parallelization become achievable for ever tighter structural parameters, revealing a fundamental difference from the sequential setting. In contrast to existing parallel meta-theorems, all presented algorithms are constructive (they output an optimal solution).

ABSTRACT. Hard combinatorial problems such as propositional satisfiability are ubiquitous. The holy grail is a solution method that works well on all instances of the problem. However, new approaches emerge regularly, some of which complement existing solution methods in that they are faster only on some instances but not on many others. The question arises as to how well portfolios can exploit the complementarity of solution approaches, and when one can speak of complementary approaches at all. To answer these and related questions, we have created an extensive data collection, the GBD Benchmark Database. We have used GBD to manage and distribute benchmarks and to organize and evaluate SAT competitions; in addition, GBD is presently used in various data science projects related to benchmarking and portfolios of SAT solvers. In my talk, I will briefly present the current state of GBD and summarize some of our most recent results on solver portfolio analysis. I will encourage the use of benchmark meta-data in the evaluation of SAT experiments and explain why the pursuit of good portfolios and the pursuit of stable solvers are not necessarily opposing goals.