FLOC 2022: FEDERATED LOGIC CONFERENCE 2022
PROGRAM FOR THURSDAY, AUGUST 11TH

08:30-09:00 Coffee & Refreshments
09:00-10:30 Session 120A: SAT/SMT Solving (DSV)
Location: Ullmann 300
09:00
UCLID5: Multi-Modal Formal Modeling, Verification, and Synthesis

ABSTRACT. UCLID5 is a tool for the multi-modal formal modeling, verification, and synthesis of systems. It enables one to tackle verification problems for heterogeneous systems such as combinations of hardware and software, or those that have multiple, varied specifications, or systems that require hybrid modes of modeling. A novel aspect of UCLID5 is an emphasis on the use of syntax-guided and inductive synthesis to automate steps in modeling and verification. In this talk we will present new developments in the UCLID5 tool including new language features, integration with new techniques for syntax-guided synthesis and satisfiability solving, support for hyperproperties and combinations of axiomatic and operational modeling, demonstrations on new problem classes, and a robust implementation.

09:45
Democratizing SAT Solving

ABSTRACT. Boolean satisfiability is a fundamental problem in computer science with a wide range of applications including planning, configuration management, design and verification of software/hardware systems. The annual SAT competition continues to witness impressive improvements in the performance of the winning SAT solvers largely thanks to the development of new heuristics arising out of intensive collaborative research in the SAT community. Modern SAT solvers achieve scalability and robustness with complex heuristics that are challenging to understand and explain. Consequently, the development of new algorithmic insights has been largely restricted to experts in the SAT community.

I will describe our project that boldly aims to democratise the design of SAT solvers. In particular, our project, called CrystalBall, seeks to develop a framework to provide white-box access to the execution of a SAT solver that can aid both SAT solver developers and users to synthesize algorithmic heuristics for modern SAT solvers. We view modern conflict-driven clause learning (CDCL) solvers as a composition of classifiers and regressors for different tasks such as branching, clause memory management, and restarting, and we aim to provide a data-driven automated heuristic design mechanism that can allow experts in domains outside the SAT community to contribute to the development of SAT solvers.

09:00-10:30 Session 120B (FRIDA)
Location: Ullmann 102
09:00
Preserving Hyperproperties when Using Concurrent Objects

ABSTRACT. Linearizability, a consistency condition for concurrent objects, is known to preserve trace properties. This suffices for modular usage of concurrent objects in applications, deriving their safety properties from the abstract object they implement. However, other desirable properties, like average complexity and information leakage, are not trace properties. These *hyperproperties* are not preserved by linearizable concurrent objects, especially when randomization is used. This talk will discuss formal ways to specify concurrent objects that preserve hyperproperties and their relation with verification methods like forward / backward simulation. We will show that certain concurrent objects cannot satisfy such specifications, and describe ways to mitigate these limitations.

This is joint work with Constantin Enea and Jennifer Welch.

09:45
On Direct and Indirect Information in Distributed Protocols

ABSTRACT. I will discuss the role and uses of direct and indirect information in distributed protocols. In particular, I will describe a protocol based on indirect information whose formal verification is offered as an interesting challenge.

09:00-10:30 Session 120C (GuttmanFest)
Location: Ullmann 200
09:00
Security Protocols as Choreographies
PRESENTER: Alessandro Bruni
09:30
How to Explain Security Protocols to Your Children
09:50
Secure Key Management Policies in Strand Spaces
PRESENTER: Riccardo Focardi
10:10
Verifying a Blockchain-Based Remote Debugging Protocol for Bug Bounty
PRESENTER: Pierpaolo Degano
09:00-10:30 Session 120D: Model Counting: Applications (MC)
Location: Ullmann 302
09:00
Quantifying Software Reliability via Model-Counting
PRESENTER: Samuel Teuber

ABSTRACT. Critical software should be verified. But how do we handle the situation where a proof of functional correctness cannot be established? In this case, an assessment of the software is required to estimate the risk of using it.

In this talk, we present our work on the quantification of C program reliability. Our formal approach measures the reliability of a program in terms of its adherence to a provided functional specification. Our implementation can handle bounded C programs precisely where the functionality is specified through assumptions and assertions in the source code. Following a preparatory program transformation, we use a combination of techniques for software bounded model checking and model counting to obtain quantitative results. This approach allows us to count and categorize the possible runs of the software, permitting the computation of the software's reliability as the share of failing program runs. We evaluated our implementation on a representative set of 24 (deterministic and non-deterministic) C programs and showed differences in performance between an exact and an approximate model counter. The evaluation also led to the publication of a total of 123 benchmark instances for projected model counting.

After summarizing the results of our paper, we briefly present future pathways to increase the efficiency and applicability of our approach.
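
As a concrete, if naive, illustration of the counting view (not the authors' toolchain; all names are invented for this sketch), the reliability of a bounded program can be computed by enumerating the admissible inputs and taking the share of non-failing runs. The talk's approach replaces enumeration with software bounded model checking plus model counting.

```python
# Illustrative only: brute-force enumeration over a bounded input domain.

def program(x: int) -> int:
    # toy stand-in for a C routine computing |x| on 4-bit two's complement;
    # negation overflows for x = -8 (a deliberate bug)
    return ((-x if x < 0 else x) + 8) % 16 - 8

def assumption(x: int) -> bool:
    return -8 <= x < 8                # the specified input domain

def assertion(x: int, result: int) -> bool:
    return result == abs(x)           # the functional specification

admissible = failing = 0
for x in range(-8, 8):
    if assumption(x):
        admissible += 1
        failing += not assertion(x, program(x))

print("reliability =", 1 - failing / admissible)   # 15/16 = 0.9375
```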

09:30
Stochastic Constraint Optimisation with Applications in Network Analysis
PRESENTER: Anna Latour

ABSTRACT. We present an extensive study of methods for exactly solving stochastic constraint (optimisation) problems (SCPs). Each of these methods combines weighted model counting (WMC) with constraint satisfaction and optimisation. They use knowledge compilation to address the model counting problem; subsequently, either a constraint programming (CP) solver or mixed integer programming (MIP) solver is used to solve the overall SCP. We first explore a method that is generic and decomposes constraints on probability distributions into a multitude of smaller, local constraints. For the second method, we show that many probability distributions in real-world SCPs have a monotonic property, and how to exploit that property to develop a global constraint. We discuss theoretical (dis)advantages of each method, with a focus on methods that employ ordered binary decision diagrams (OBDDs) to encode the probability distributions, and then evaluate their performances experimentally. To configure the space of parameters of these approaches, we propose to use the framework of programming by optimisation (PbO). The results show that a CP-based pipeline obtains the best performance on well-known data mining and frequent itemset mining problems.
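
For readers new to WMC, the following minimal Python sketch computes a weighted model count by explicit enumeration; the methods above replace enumeration with knowledge compilation (e.g., into OBDDs) and couple the count with CP or MIP solving.

```python
from itertools import product

def wmc(n, clauses, weight):
    """Weighted model count by enumeration (illustration only).
    clauses: list of lists of DIMACS-style literals;
    weight: dict literal -> weight."""
    total = 0.0
    for bits in product([False, True], repeat=n):
        assign = {v + 1: bits[v] for v in range(n)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            w = 1.0
            for v, b in assign.items():
                w *= weight[v if b else -v]
            total += w
    return total

# P(x1) = 0.7, P(x2) = 0.4; weight of the models of (x1 or x2)
w = {1: 0.7, -1: 0.3, 2: 0.4, -2: 0.6}
print(wmc(2, [[1, 2]], w))   # 1 - 0.3*0.6 = 0.82
```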

10:00
Rushing and Strolling among Answer Sets - Navigation Made Easy

ABSTRACT. Answer set programming (ASP) is a popular declarative programming paradigm with a wide range of applications in artificial intelligence. Oftentimes, when modeling an AI problem with ASP, and in particular when we are interested in more than a simple search for optimal solutions, an actual solution, the differences between solutions, or the number of solutions of the ASP program matter: for example, when a user aims to identify a specific answer set according to her needs, or requires the total number of diverging solutions to comprehend probabilistic applications such as reasoning in medical domains. For such tasks, only certain problem-specific and handcrafted encoding techniques are available to navigate the solution space of ASP programs, which is oftentimes not enough. In this paper, we propose a formal and general framework for interactive navigation toward desired subsets of answer sets, analogous to faceted browsing. Our approach enables the user to explore the solution space by consciously zooming in or out of sub-spaces of solutions at a certain configurable pace. We show that weighted faceted navigation is computationally hard. Finally, we provide an implementation of our approach that demonstrates the feasibility of our framework for incomprehensible solution spaces.

09:00-10:30 Session 120E (NSV)
Location: Ullmann 308
09:00
Recent advances for hybrid systems verification with HyPro

ABSTRACT. Hybrid systems are systems with mixed discrete-continuous behavior. Despite intensive research during the last decades to offer algorithms and tools for the reachability analysis of hybrid systems, the available technologies still suffer from scalability problems. Though scalability can be increased at the cost of precision, it is not easy to find the right balance that leads to successful verification in acceptable time. In this talk we give a general introduction to hybrid systems and methods for their reachability analysis, introduce our HyPro tool, and report on some recent work to speed up the computations while still maintaining sufficient precision.

10:00
MLTL Multi-Type: A Logic for Reasoning about Traces of Different Types

ABSTRACT. Modern cyber-physical systems (CPS) operate in complex systems of systems that must seamlessly work together to control safety- or mission-critical functions. Capturing specifications in LTL enables verification and validation of CPS requirements, yet an LTL formula specification can imply unrealistic assumptions, such as that all of the traces populating the variables in the formula are of type Boolean and agree on a standard time step. To achieve formal verification of CPS systems of systems, we need to write validate-able requirements that reason over (sub-)system traces of different types, such as traces from different timescales, levels of abstraction, or traces with complex relationships to each other that populate variables in the same formula. Validation includes both transparency for human validation and tractability for automated validation, e.g., since CPS often run on resource-limited embedded systems. Specifications for correctness of numerical algorithms for CPS need to be able to describe global properties with precise representations of local components. Therefore, we introduce Mission-time Linear Temporal Logic Multi-Type (MLTLM), a logic building on MLTL, to enable writing clear, formal requirements over finite traces (e.g., sensor signals, local computations) of different types, cleanly coordinating the temporal logic and trace relationship considerations without significantly increasing the complexity of logical analysis, e.g., model checking, satisfiability, runtime verification. We explore the common scenario of CPS systems of systems operating over different timescales, including a detailed analysis with a publicly-available implementation of MLTLM.

09:00-10:30 Session 120F (PAAR)
Location: Ullmann 205
09:00
Training ENIGMAs, CoPs, and other thinking creatures

ABSTRACT. The talk will mainly discuss several recent AI/TP systems and experiments that combine reasoning with learning in positive feedback loops. I will try to describe the settings, the datasets that may today contain millions of proofs, the interesting parts of the algorithms, and some of the practical challenges when building learning-guided systems that actually prove theorems in real time. Time permitting, I will also discuss the outlook, interesting future directions and initial experiments related to them.

10:00
QuAPI: Adding Assumptions to Non-Assuming SAT & QBF Solvers

ABSTRACT. This paper introduces QuAPI, an easy-to-use library for enabling efficient assumption-based reasoning for arbitrary SAT or QBF solvers. Users of QuAPI only need access to an executable of these solvers. The only requirement on the solvers is that they use standard C library functions, either wrapped or directly, for reading input files in the standard DIMACS or QDIMACS format. No source code or library access is necessary, not even for API-less solvers. Our library allows applications to benefit from assumptions, and thus from incremental solving matching the native support of a standard incremental interface such as IPASIR, but without the need to implement it. This feature is particularly valuable for applications of QBF solving, as most state-of-the-art solvers do not support assumption-based reasoning, with some not providing any API at all. We present the architecture of QuAPI and discuss crucial implementation details in depth. Our extensive performance evaluation shows substantial speedup compared to using unmodified solver binaries on similar incremental reasoning tasks.
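
The semantic effect of assumptions on a single query can be emulated by adding unit clauses to the DIMACS input, as in the Python sketch below. This is only a one-shot approximation of what the paper does: QuAPI does not rewrite input files but intercepts the solver's C-library reads, which also keeps the solver process alive across queries. The solver binary name here is a placeholder.

```python
import subprocess, tempfile

def solve_under_assumptions(dimacs_path, assumptions, solver="./minisat"):
    """One-shot emulation: each assumption literal becomes a unit clause."""
    with open(dimacs_path) as f:
        lines = f.readlines()
    h = next(i for i, l in enumerate(lines) if l.startswith("p cnf"))
    n_vars, n_clauses = map(int, lines[h].split()[2:])
    lines[h] = f"p cnf {n_vars} {n_clauses + len(assumptions)}\n"
    units = [f"{lit} 0\n" for lit in assumptions]
    with tempfile.NamedTemporaryFile("w", suffix=".cnf", delete=False) as tmp:
        tmp.writelines(lines[:h + 1] + units + lines[h + 1:])
        path = tmp.name
    # common convention for SAT solver binaries: exit code 10 = SAT, 20 = UNSAT
    return subprocess.run([solver, path]).returncode == 10
```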

09:00-10:30 Session 120G: Parallel algorithms (PDAR)
Location: Ullmann 303
09:00
Isabelle LLVM Parallel

ABSTRACT. Isabelle LLVM is a tool to export verified LLVM code from Isabelle HOL. It shallowly embeds a fragment of LLVM into a state-error-monad in Isabelle HOL. A code generator produces actual LLVM text, which can be linked against C/C++ programs. Correctness of programs is proved by a separation logic based reasoning infrastructure, and a stepwise refinement framework built on top of it. Recently added features include structure types with self-pointers, union-types, floating point numbers, and parallel calls.

In this talk, we will give an overview of Isabelle LLVM and the associated verified software development process. We will, in particular, focus on parallelism.

09:45
A Linear Parallel Algorithm to Compute Strong and Branching Bisimilarity

ABSTRACT. The most efficient way to calculate strong bisimilarity is by finding the relational coarsest partition of a transition system. We provide the first linear-time algorithm to calculate strong bisimulation using parallel random access machines (PRAMs). More precisely, with n states, m transitions and |Act| <= m action labels, we provide an algorithm for max(n,m) processors that calculates strong bisimulation in time O(n + |Act|) and space O(n + m). A GPU implementation shows that the linear time-bound is achievable on contemporary hardware. We also extended this algorithm to handle branching bisimulation using n^2 processors in time O(n + |Act|).
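
The specification being parallelised can be stated as a naive refinement loop. The Python sketch below is only the sequential, quadratic version of the relational coarsest partition computation; it does not attempt to reproduce the paper's PRAM/GPU algorithm.

```python
def strong_bisimulation_classes(states, transitions):
    """transitions: set of (source, action, target) triples.
    Refine one block into finer blocks until every pair of states in a
    block has the same set of (action, target-block) signatures."""
    partition = {s: 0 for s in states}
    while True:
        signature = {
            s: frozenset((a, partition[t]) for (src, a, t) in transitions
                         if src == s)
            for s in states
        }
        blocks = {}
        for s in states:
            blocks.setdefault((partition[s], signature[s]), len(blocks))
        new_partition = {s: blocks[(partition[s], signature[s])] for s in states}
        if new_partition == partition:
            return partition
        partition = new_partition

# two states are strongly bisimilar iff they end in the same block
classes = strong_bisimulation_classes(
    {"p", "q", "r"}, {("p", "a", "r"), ("q", "a", "r")})
print(classes["p"] == classes["q"])   # True
```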

09:00-10:30 Session 120H: Invited talk #1 (SMT)
Location: Ullmann 311
09:00
Local Search for Bit-Precise Reasoning and Beyond

ABSTRACT. Reasoning about quantifier-free bit-vector constraints in Satisfiability Modulo Theories (SMT) has been an ongoing challenge for many years, especially for large bit-widths. The current state of the art for bit-precise reasoning is a technique called bit-blasting, where bit-vector constraints are eagerly translated into propositional logic (SAT). Bit-blasting is very efficient in practice but does not generally scale well for large bit-widths due to the fact that the translation is in general exponential in the size of the input formula, which potentially (and in practice) overwhelms the underlying SAT solver. For these instances, we need alternative approaches for bit-precise reasoning that do not rely on translations to the SAT level. In this talk, I will present such an alternative approach, a propagation-based local search procedure, which relies on propagating target values from top-level constraints towards the inputs while utilizing so-called invertibility conditions. Invertibility conditions precisely characterize when bit-vector constraints are invertible, a core concept of our approach. Our procedure is, as expected for local search, incomplete in the sense that it can only determine satisfiability, but it was shown to be effective on hard satisfiable instances, in particular in combination with bit-blasting in a sequential portfolio setting. I will talk about the strengths and potential weaknesses of this approach, and how to address these weaknesses. I will further give some insight into how we identified invertibility conditions for bit-vector operators by utilizing syntax-guided synthesis techniques. I will also present more applications of invertibility conditions, even outside of local search bit-vector reasoning. First, we have embedded invertibility conditions into quantifier instantiations in a counterexample-guided procedure for quantified bit-vectors. Second, we have provided invertibility conditions for a majority of operators in the theory of floating-point arithmetic, which will allow us to lift our local search procedure to quantifier-free floating-point reasoning in the future.
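
As a small taste of invertibility conditions, the Python sketch below brute-forces, at bit-width 4, one condition from the invertibility-condition literature: "exists x. (x & s) = t" holds exactly when "(t & s) = t". The brute-force check is purely illustrative; the point of the conditions is that they avoid any such search.

```python
W = 4  # bit-width for the exhaustive check

for s in range(2 ** W):
    for t in range(2 ** W):
        invertible = any((x & s) == t for x in range(2 ** W))
        condition = (t & s) == t
        assert invertible == condition

print("IC for x & s = t confirmed at bit-width", W)
```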

10:00
Trail saving in SMT
PRESENTER: Milan Banković

ABSTRACT. In this paper we discuss and evaluate the method of trail saving on backjumps in CDCL(T)-based SMT solvers. The method was originally proposed for CDCL-based SAT solvers at the SAT conference in 2020, showing a positive impact on solving SAT instances. Since a SAT solver tends to follow a similar path down the search tree after a backjump, saving the retracted portion of the trail enables speeding up the inference after the backjump by copying the saved inferred literals to the assertion trail, instead of re-propagating them by the unit-propagation mechanism. A similar behaviour may be expected where SMT solvers are concerned, but since the process of theory propagation within SMT solvers is usually even more expensive, the trail saving technique may potentially have an even more significant impact in the case of SMT instances. Although the experimental evaluation given in this paper shows some potential of the approach, the obtained results are generally mixed, and depend greatly on the chosen benchmark family, even within the same theory. Further analysis might be needed in order to better understand the behaviour of the method and its effects on the entire solving process.
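
A rough, propositional-level sketch of the trail-saving idea follows; the names and data layout are invented for the illustration, and actual CDCL(T) implementations differ in many details.

```python
# On a backjump, stash the retracted suffix of the trail; afterwards,
# re-install saved literals whose reason clauses still propagate instead
# of rediscovering them via (unit or theory) propagation.

def backjump(trail, level_start, target_level, saved):
    cut = level_start[target_level + 1]      # first trail slot to retract
    saved[:] = trail[cut:]                   # save (literal, reason) pairs
    del trail[cut:]

def replay_saved(trail, assignment, saved, still_propagates):
    for literal, reason in saved:
        if reason is not None and still_propagates(reason, assignment):
            assignment[abs(literal)] = literal > 0
            trail.append((literal, reason))
        else:
            break                # diverged; fall back to normal propagation
```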

09:00-10:30 Session 120I (SYNT)
Location: Ullmann 201
09:00
Inductive and Parameter Synthesis to Find POMDP Controllers
10:00
Towards Synthesis in Superposition
PRESENTER: Petra Hozzová

ABSTRACT. We present our ongoing developments in synthesizing recursion-free programs using the superposition reasoning framework in first-order theorem proving. Given a first-order formula as a program specification, we use a superposition-based theorem prover to establish the validity of this formula and, along this process, synthesize a function that meets the specification. To this end, we modify the rules of the superposition calculus to synthesize program fragments corresponding to individual clauses derived during the proof search. If a proof is found, we extract a program based on the found (correctness) proof. We implemented our approach in the first-order theorem prover Vampire and successfully evaluated it on a few examples.

10:15
LTL Synthesis with Transformer Neural Networks
PRESENTER: Frederik Schmitt

ABSTRACT. This extended abstract reports on the current advances in using neural networks for the end-to-end synthesis of AIGER circuits from LTL specifications. Novel neural network architectures open many possibilities but suffer from the lack of sufficient amounts of training data. We report on a method to generate large amounts of additional training data, which is sufficiently close to human-written specifications. Hierarchical Transformers trained on this synthetic data solve many problems from the synthesis competitions, out-of-distribution examples, and instances where Strix timed out. This extended abstract is based on findings published in "Neural Circuit Synthesis from Specification Patterns" (NeurIPS, 2021) and "Teaching Temporal Logics to Neural Networks" (ICLR, 2021).

09:00-10:30 Session 120J: SMT & Automated Deduction (University Level) (ThEdu)
Location: Ullmann 306
09:00
On Exams with the Isabelle Proof Assistant

ABSTRACT. We present an approach for testing student learning outcomes in a course on automated reasoning using the Isabelle proof assistant. The approach allows us to test both general understanding of formal proofs in various logical proof systems and understanding of proofs in the higher-order logic of Isabelle/HOL in particular. The use of Isabelle enables almost automatic grading of large parts of the exam. We explain our approach through a number of example problems, and explain why we believe that each of the kinds of problems we have selected is an adequate measure of our intended learning outcomes.

09:30
Invited Talk: Satisfiability Modulo Theories in an Undergraduate Class

ABSTRACT. Satisfiability Modulo Theories (SMT) solvers are widely used in various applications, most prominently in the verification of hardware and software. They are focused on the task of checking the satisfiability of first-order formulas over a wide range of background theories. Currently, mainstream SMT solvers offer easy-to-use APIs and textual interfaces, and are accessible even to users with little background in logic. Nevertheless, their algorithms are based on interesting and varied theoretical subjects.

For these reasons, SMT as a field is a good candidate for introducing automated reasoning and logic to students with little or no background in logic: its wide applicability provides great motivation to study it, the solvers' accessibility offers hands-on exercises, and the logical foundations contain a wide variety of topics of different levels and depth.

In this talk I will describe my experience in teaching an automated reasoning class focused on SMT. I will describe my original hypotheses and plans before the class, what happened during the class, and some reflections that occurred in the process.

09:00-10:00 Session 120K (WOLVERINE)

Keynote presentation

Location: Ullmann 202
09:00
Artificial Intelligence

ABSTRACT. TBA

09:00-10:30 Session 120L (WST)
Location: Ullmann 309
09:00
Tuple Interpretations and Applications to Higher-Order Runtime Complexity
PRESENTER: Deivid Vale

ABSTRACT. Tuple interpretations are a class of algebraic interpretations that subsumes both polynomial and matrix interpretations, as it does not impose simple termination and allows non-linear interpretations. They were developed in the context of higher-order rewriting to study the derivational complexity of algebraic functional systems. In this short paper, we continue our journey to study the complexity of higher-order TRSs by tailoring tuple interpretations to deal with innermost runtime complexity.

09:30
A transitive HORPO for curried systems
PRESENTER: Liye Guo

ABSTRACT. The higher-order recursive path ordering is one of the oldest, but still very effective, methods to prove termination of higher-order TRSs. A limitation of this ordering is that it is not transitive (and its transitive closure is not computable). We will present a transitive variation of HORPO. Unlike previous HORPO definitions, this method can be used directly on terms in curried notation.

10:00
Approximating Relative Match-Bounds
PRESENTER: Dieter Hofbauer

ABSTRACT. We present a simple method for obtaining automata that over-approximate match-heights for the set of right-hand sides of forward closures of a given string rewrite system. The method starts with an automaton that represents height-indexed copies of right-hand sides, and inserts epsilon transitions only. Rules that have no redex path at the highest level are relatively match-bounded, and thus terminating relative to the others, on the starting language. For height bound 0, we obtain a generalisation of right-barren string rewriting. An implementation of our method quickly proves termination of 590 out of 595 benchmarks in TPDB/SRS_Standard/ICFP_2010.

09:05-10:25 Session 121 (iPRA)
Location: Ullmann 203
09:05
Uniform Interpolants and Model Completions in Formal Verification of Infinite-State Systems

ABSTRACT. In the last two decades, Craig interpolation has emerged as an essential tool in formal verification, where first-order theories are used to express constraints on the system, such as on the datatypes manipulated by programs. Interpolants for such theories are largely exploited as an efficient method to approximate the reachable states of the system and for invariant synthesis. In this talk, we report recent results on a stronger form of interpolation, called uniform interpolation, and its connection with the notion of model completion from model-theoretic algebra. In addition, we discuss how uniform interpolants can be used in the context of formal verification of infinite-state systems to develop effective techniques for computing the reachable states in an exact way. Finally, we present some results about the transfer of uniform interpolants to theory combinations. We argue that methods based on uniform interpolation are particularly effective and computationally efficient when applied to safety verification of the so-called data-aware processes: these are systems where the control flow of a (business) process can interact with a data storage.
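
As background, the notion can be stated as follows (one standard formulation from the covers / uniform interpolation literature; the notation below is ours):

```latex
% Uniform interpolant (cover): a single formula that serves as an
% interpolant for every consequence at once.
Given a theory $T$ and a quantifier-free formula
$\varphi(\underline{e},\underline{y})$, a quantifier-free formula
$\psi(\underline{y})$ is a \emph{uniform interpolant of $\varphi$ with
respect to $\underline{e}$} iff
(i) $\varphi \models_T \psi$, and
(ii) for every quantifier-free $\theta(\underline{y},\underline{z})$
such that $\varphi \models_T \theta$, we have $\psi \models_T \theta$.
```

Ordinary Craig interpolation produces a separate interpolant for each consequence; the uniform interpolant is the strongest of these and is independent of the particular consequence.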

10:05
Interpolation via Finitely Subdirectly Irreducible Algebras
PRESENTER: Wesley Fussner

ABSTRACT. We present a number of algebraic results that facilitate the study of the deductive interpolation property and closely-related metalogical properties for algebraizable deductive systems. We prove that, when V is a congruence-distributive variety, V has the congruence extension property if and only if the class of finitely subdirectly irreducible members of V has the congruence extension property. When a deductive system is algebraized by V, this provides a description of local deduction theorems in terms of algebraic models. Further, we prove that for a variety V with the congruence extension property such that the class of finitely subdirectly irreducibles is closed under subalgebras, V has a one-sided amalgamation property (equivalently, since V is a variety, the amalgamation property) if and only if the class of finitely subdirectly irreducibles has this property. When V is the algebraic counterpart of a deductive system, this yields a characterization of the deductive interpolation property in terms of algebraic semantics. We announce a similar result for the transferable injections property, and prove that possession of all these properties is decidable for finitely generated varieties satisfying certain conditions. Finally, as a case study, we describe the subvarieties of a notable variety of BL-algebras that have the amalgamation property.

09:10-10:30 Session 122: Isabelle 1 (Isabelle)
Location: Ullmann 101
09:10
Welcome
09:20
Auxiliary tools for Combinatorics on Words
PRESENTER: Stepan Holub

ABSTRACT. Parts of human proofs which are discharged by formulations like "it is easy to check", "clearly" or "one can see" often require nontrivial effort in formalization. We discuss several tools we use to automate such routine tasks in our library of results in combinatorics on words.

09:35
Transfer and reversal of lists
PRESENTER: Martin Raška

ABSTRACT. Every statement about lists naturally has a reversed version (changing prefix to suffix, for example). We want to obtain such a reversed variant automatically in Isabelle. We have developed a custom attribute for this task in our library of combinatorics on words. In this contribution, we discuss our alternative attempts to use the Transfer package to achieve this goal, and the related difficulties.

10:00
Oracle Integration of Floating-Point Solvers with Isabelle
PRESENTER: Olle Torstensson

ABSTRACT. Sledgehammer, a component of the interactive proof assistant Isabelle, aims to increase proof automation by automatically discharging proof goals with the help of external provers. Among these provers are a group of satisfiability modulo theories (SMT) solvers with support for the SMT-LIB input language. Despite existing formalizations of IEEE floating-point arithmetic in both Isabelle/HOL and SMT-LIB, Sledgehammer employs an abstract translation of floating-point types and constants, depriving the SMT solvers of the opportunity to make use of their dedicated decision procedures for floating-point arithmetic.

We show that, by extending the translation from the language of Isabelle/HOL into SMT-LIB with an interpretation of floating-point types and constants, floating-point reasoning in SMT solvers can be made available to Isabelle. Our main contribution is a description and implementation of such an extension. An evaluation of the extended translation shows a significant increase in Sledgehammer's success rate on proof goals involving floating-point operations. In particular, this enhancement enables Sledgehammer to prove more non-trivial goals -- thereby increasing proof automation for floating-point arithmetic in Isabelle.

09:15-10:30 Session 123A (ARQNL)
Location: Ullmann 104
09:15
Advances and Challenges in the Development and Application of Forgetting Tools

ABSTRACT. Forgetting enables users to create compact views of knowledge bases of logical formulas over a user-specified signature by performing inferences on the symbols outside this signature. These views are easier to navigate, analyse and query, and are reusable as stand-alone knowledge bases in other settings. In this presentation I will give an overview of our work on the development and application of forgetting methods for description and modal logics. After an introduction to the idea of forgetting and the two forms of forgetting (deductive and semantic forgetting), I will discuss current advances and challenges in automating and applying forgetting.

09:15-10:30 Session 123B: Keynote Presentation (FCS)
Location: Ullmann 310
09:15
Computer-aided Verification and Automated Synthesis of Cryptographic Protocols

ABSTRACT. In this talk we summarize our recent efforts in developing efficient high-assurance implementations of advanced cryptographic protocols such as secure multi-party computation (MPC) and zero-knowledge (ZK) proof systems. Such high-assurance implementations are automatically extracted from formally verified protocol specifications in EasyCrypt. We show how such automatically extracted implementations do not suffer from a significant performance overhead (as one may initially expect them to), even though they do not incorporate natural performance-boosting optimizations such as parallelism.

09:30-10:30 Session 124A: Keynote (FMBC)
Location: Ullmann 307
09:30
MEV-freedom, in DeFi and beyond

ABSTRACT. Maximal Extractable Value (MEV) refers to a class of recent attacks on public blockchains, where adversaries with the power to reorder, drop or insert transactions in a block can "extract" value from user transactions in the mempool. Empirical research has shown that mainstream DeFi protocols, such as Automated Market Makers and Lending Pools, are massively targeted by MEV attacks. This has detrimental effects on their users, on transaction fees, and on the congestion of blockchain networks. Despite the growing knowledge of MEV attacks on blockchain protocols, an exact definition is still missing. Indeed, formally defining these attacks is an essential prerequisite to the design of provably secure, MEV-free blockchain protocols. In this talk, we propose a formal definition of MEV, based on a general, abstract model of blockchains and smart contracts. We then introduce MEV-freedom, a property enjoyed by contracts resistant to MEV attacks. We validate this notion by rigorously proving that Automated Market Makers and Lending Pools are not MEV-free. We finally discuss how to design MEV-free contracts.

09:30-10:30 Session 124B: Keynote Talk (PERR)

Speakers: Chaked R. J. Sayedoff and Ofer Strichman, Technion, Israel

Location: Ullmann 305
09:30
Regression verification of unbalanced recursive functions with multiple calls

ABSTRACT. Given two programs p1 and p2, typically two versions of the same program, the goal of regression verification is to mark pairs of functions from p1 and p2 that are equivalent, given a definition of equivalence. The most common definition is that of partial equivalence, namely that the two functions emit the same output if they are fed with the same input and they both terminate. The strategy used by the Regression Verification Tool (RVT) is to progress bottom up on the call graphs of p1, p2, abstract those functions that were already proven to be equivalent with uninterpreted functions, turn loops into recursion, and abstract the recursive calls also with uninterpreted functions. This enables it to create verification conditions in the form of small programs that are loop- and recursion-free. This method works well for recursive functions as long as they are in sync, and typically fails otherwise. In this work we study the problem of proving equivalence when the two recursive functions are not in sync. Effectively we extend previous work that studied this problem for functions with a single recursive call site, to the general case. We also introduce a method for detecting automatically the unrolling that is necessary for making two recursive functions synchronize, when possible. We show examples of pairs of functions with multiple recursive calls that can now be proven equivalent with our method, but cannot be proven equivalent with any other automated verification system.
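
A toy illustration of the out-of-sync phenomenon (not an example from the paper, and in Python rather than C): both functions below compute the same value, but g consumes two recursion steps per call, so an equivalence proof that abstracts the recursive calls with uninterpreted functions can only succeed after unrolling f once. Detecting such unrollings automatically is what the work above addresses.

```python
def f(n: int) -> int:
    # sum 1 + 2 + ... + n, one recursion step at a time
    if n <= 0:
        return 0
    return n + f(n - 1)

def g(n: int) -> int:
    # same sum, but two steps per recursive call (out of sync with f)
    if n <= 0:
        return 0
    if n == 1:
        return 1
    return n + (n - 1) + g(n - 2)

# partially equivalent: same output on all shared terminating inputs
assert all(f(n) == g(n) for n in range(100))
```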

10:30-11:00 Coffee Break
11:00-12:30 Session 125A (ARQNL)
Location: Ullmann 104
11:00
Towards a Coq formalization of a quantified modal logic

ABSTRACT. We present a Coq formalization of the Quantified Reflection Calculus with one modality, or QRC1. This is a decidable, strictly positive, and quantified modal logic previously studied for its applications in proof theory. The highlights are a deep embedding of QRC1 in the Coq proof assistant, a mechanization of the notion of Kripke model with varying domains and a formalization of the soundness theorem. We focus on the design decisions inherent to the formalization and the insights that led to new and simplified proofs.

11:30
Reasoning in Non-normal Modal Description Logics
PRESENTER: Andrea Mazzullo

ABSTRACT. Non-normal modal logics, interpreted on neighbourhood models which generalise the usual relational semantics, have found application in several areas, such as epistemic, deontic, and coalitional reasoning. We present here preliminary results on reasoning in a family of modal description logics obtained by combining ALC with non-normal modal operators. First, we provide a framework of terminating, correct, and complete tableau algorithms to check satisfiability of formulas in such logics with the semantics based on varying domains. We then investigate the satisfiability problems in fragments of these languages obtained by restricting the application of modal operators to formulas only, and interpreted on models with constant domains, providing tight complexity results.

12:00
Intuitionistic derivability in Anderson's variant of the ontological argument

ABSTRACT. Anderson's emendation of Gödel's ontological proof is known as a variant that does not entail modal collapse, that is, derivability of □A ↔ A for all formulas A. This variant of the axiomatization is here investigated as a case study of intuitionistic derivability using natural deduction. The system for higher-order modal logic simulates a varying domain semantics on the object level, in a manner that seems to have been intended by Anderson. As will be shown, intuitionistic derivability is limited to conditional statements where G(x) is assumed, whereas the derivability of G(x) itself is proved to be impossible. The required classical proof of ◇∃x. G(x) can be compared to the compatibility argument of Scott's version, which used a form of indirect proof.

11:00-12:30 Session 125B: Model Checking (I) (DSV)
Location: Ullmann 300
11:00
Eldarica and TriCera: towards an open verification framework

ABSTRACT. This talk will give a hands-on introduction to the software verification infrastructure consisting of the Horn solver Eldarica and the C model checker TriCera. After giving a high-level overview, the presenter will provide examples of how the tools can be used or extended for prototyping research ideas. Both tools are open-source and implemented in Scala.

Eldarica is a Horn solver that supports, among others, Horn clauses over the theories of integers, algebraic data-types, bit-vectors, arrays and the novel theory of heaps that simplifies encoding programs with heaps. Input clauses go through several preprocessing stages such as slicing, reachability analysis and inlining before being passed on to the main CEGAR engine that attempts to construct an abstract reachability graph that witnesses their satisfiability.

TriCera is a model checker for C programs that encodes input programs and specifications into a set of Horn clauses that can then be input to a Horn solver such as Eldarica or Z3/Spacer. TriCera mainly uses Eldarica for this purpose due to the tight integration of the tools and the theory of heaps. TriCera accepts most programs written in C11, and provides several other features useful for researchers:
- the declaration and usage of uninterpreted predicates, which can be seen as a form of inline assembler for Horn clauses and simplify experimenting with various encodings,
- automatic inference of loop invariants and function contracts,
- parsing of ACSL function contracts,
- timing constraints in C programs (similar to UPPAAL timed automata),
- concurrency.

11:45
C2C-trans as a design methodology for software verification tools

ABSTRACT. I will report on a number of tools we have designed and implemented for the analysis of multi-threaded C programs. Our solutions share the common pattern of rewriting the original program into an equivalent non-deterministic sequential C program that is then fed to an existing well-performing model-checking tool for the analysis. Our translations have been optimized to take maximum advantage of the back-end technology used for the analysis. Using this paradigm we have designed a few bug-finding approaches that use back-ends based on bounded model checking, as well as approaches to prove the correctness of programs. I will also report on recent approaches we are developing for distributed bug-finding algorithms that exploit the large number of processors provided by computer clusters or cloud computing platforms.

11:00-11:45 Session 125C: Blockchain and Real-world Security (FCS)
Location: Ullmann 310
11:00
A Preview to HoRStify: Sound Security Analysis of Smart Contracts
PRESENTER: Sebastian Holler

ABSTRACT. The cryptocurrency Ethereum is the most widely used execution platform for smart contracts. Smart contracts are distributed applications, which govern financial assets and, hence, can implement advanced financial instruments, such as decentralized exchanges or autonomous organizations (DAOs). Their financial nature makes smart contracts an attractive attack target, as demonstrated by numerous exploits on popular contracts resulting in financial damage of millions of dollars. This omnipresent attack hazard motivates the need for sound static analysis tools, which assist smart contract developers in eliminating contract vulnerabilities prior to deployment. Vulnerability assessment that is sound and insightful for EVM contracts is a formidable challenge because contracts execute low-level bytecode in a largely unknown and potentially hostile execution environment. So far, there exists no provably sound automated analyzer that allows for the verification of security properties based on program dependencies, even though prevalent attack classes fall into this category. In this work, we present HoRStify, the first automated analyzer for dependency properties of Ethereum smart contracts based on sound static analysis. HoRStify grounds its soundness proof on a formal proof framework for static program slicing that we instantiate to the semantics of EVM bytecode. We demonstrate that HoRStify is flexible enough to soundly verify the absence of famous attack classes such as timestamp dependency and, at the same time, performant enough to analyze real-world smart contracts.

11:15
Automatic Fair Exchanges

ABSTRACT. In a decentralized environment, exchanging resources requires users to bargain until an agreement is found. Moreover, human agreements involve a combination of collaborative and selfish behavior and often induce circularity, complicating the evaluation of exchange requests. We introduce MuAC, a policy language that allows users to state in isolation under which conditions they are open to grant their resources and what they require in return. In MuAC, exchange requests are evaluated automatically with the guarantee that the only exchanges that will take place are those that mutually satisfy users’ conditions. Moreover, MuAC can be used as an enforcement mechanism to prevent users from cheating. As a proof of concept, we implement a blockchain smart contract that allows users to exchange their non-fungible tokens.

11:30
Towards Unlinkable Smartcard-based Payments: an extended abstract
PRESENTER: Semen Yurkov

ABSTRACT. The most prevalent smart card-based payment method, EMV, currently offers no privacy to its users. Transaction details and the card number are sent in clear text, enabling the profiling and tracking of cardholders. As public awareness of privacy issues grows and legislative initiatives like GDPR, which aim to regulate unwanted data collection, emerge, we investigate the possibility of making payments anonymous towards eavesdroppers and terminals without compromising the intended security guarantees of EMV. In this extended abstract, we offer an enhanced payment protocol where no information allowing the identification of the card is transferred to the terminal.

11:00-12:30 Session 125D: Formal Methods for Smart Contracts (FMBC)
Location: Ullmann 307
11:00
Finding smart contract vulnerabilities with ConCert's property-based testing framework
PRESENTER: Eske Hoy Nielsen

ABSTRACT. We provide three detailed case studies of vulnerabilities in smart contracts, and show how property-based testing would have found them: 1. the Dexter1 token exchange; 2. the iToken; 3. the ICO of Brave's BAT token. The last example is, in fact, new, and was missed in the auditing process.

We have implemented this testing in ConCert, a general executable model/specification of smart contract execution in the Coq proof assistant. ConCert contracts can be used to generate verified smart contracts in Tezos' LIGO and Rust. We thus show the effectiveness of combining formal verification and property-based testing of smart contracts.
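
The flavour of such property-based tests can be conveyed with the Python hypothesis library run against a toy ledger model; the actual work tests ConCert models inside Coq, so everything below is a stand-in for illustration only (it requires hypothesis to be installed).

```python
from hypothesis import given, strategies as st

def transfer(balances, sender, receiver, amount):
    """Toy token-transfer step; invalid transfers are rejected unchanged."""
    if amount < 0 or balances.get(sender, 0) < amount:
        return balances
    new = dict(balances)
    new[sender] = new.get(sender, 0) - amount
    new[receiver] = new.get(receiver, 0) + amount
    return new

@given(st.dictionaries(st.integers(0, 5), st.integers(0, 100)),
       st.integers(0, 5), st.integers(0, 5), st.integers(-10, 110))
def test_total_supply_conserved(balances, sender, receiver, amount):
    # the invariant: no transfer may create or destroy tokens
    after = transfer(balances, sender, receiver, amount)
    assert sum(after.values()) == sum(balances.values())

test_total_supply_conserved()   # hypothesis runs many random cases
```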

11:30
Automatic generation of attacker contracts in Solidity

ABSTRACT. Smart contracts on the Ethereum blockchain continue to suffer from well-publicised errors, leading to a lack of confidence in the whole Ethereum contract platform. A particular example is the very well-known smart contract reentrancy vulnerability, which continues to be exploited. In this article, we present a method that, provided with a smart contract which may be vulnerable to such a reentrancy attack, attempts to automatically derive an "attacker" contract which can be used to successfully attack the vulnerable contract. The method uses property-based testing to generate, semi-randomly, large numbers of such potential attacker contracts and then checks whether any of them is a successful attacker.

12:00
Designing EUTxO smart contracts as communicating state machines: the case of simulating accounts

ABSTRACT. Designing smart contracts in UTXO blockchains is often harder and less intuitive than in account-based ledger models. We present a novel way of structuring such applications, making use of a multi-threaded message-passing architecture built using the state machine paradigm for EUTxO contracts. We study this approach in the broader context of comparing account-based and UTxO ledger functionality. Specifically, we develop a specification for a simple account system, resembling that of account-based ledgers. We then present and compare a number of smart contracts we developed according to this specification: a naive implementation (which maintains all accounts monolithically in one contract), an optimized version using Merkle tries, a direct-transfer multi-threaded implementation, and finally culminating in a multi-threaded version with message-passing.

We argue that the multi-threaded approach allows for maximal parallelism by treating each account as an individual thread. We then assert that restricting state machine communication to message-passing, where messages are special kinds of UTxO entries, allows for maximal concurrency by making account state updates fully independent of each other, thereby also reducing static analysis complexity. We conjecture that due to its robust concurrency, parallelization, and reduced memory use, the multi-threaded message-passing architecture would serve as a useful template for a broader class of contracts which includes accounts and extensions thereof.

12:15
FAT CAT: Formal Acceptance Testing of Contracts for Administering Tokens
PRESENTER: Aurélien Saue

ABSTRACT. This extended abstract describes ongoing work on a compliance test suite for FA2, a token standard for the Tezos blockchain. By embedding the test suite in the proof assistant Coq, we can prove the completeness of our suite, that is that any FA2-compliant contract must pass the test suite.

11:00-12:30 Session 125E (FRIDA)
Location: Ullmann 102
11:00
Quorum Tree Abstractions of Consensus Protocols

ABSTRACT. Distributed algorithms solving agreement problems like consensus or state machine replication are essential components of modern fault-tolerant distributed services. They are also notoriously hard to understand and reason about. Their complexity stems from the different assumptions on the environment they operate with, i.e., process or network link failures, Byzantine failures etc. In this talk, I will describe a novel abstract representation of the dynamics of such protocols which focuses on quorums of responses (votes) to a request (proposal) that form during a run of the protocol. We show that, focusing on such quorums, a run of a protocol can be viewed as working over a tree structure where different branches represent different possible outcomes of the protocol, the goal being to stabilize on the choice of a fixed branch. This abstraction resembles the description of recent protocols used in Blockchain infrastructures, e.g., the protocols supporting Bitcoin or HotStuff. We show that this abstraction supports reasoning about the safety of various algorithms, e.g., Paxos, PBFT, Raft, and HotStuff, in a uniform way. In general, it provides a novel induction-based argument for proving that such protocols are safe. This is joint work with Berk Cirisci and Suha Orhun Mutluergil.

11:45
What's Decidable about Causally Consistent Shared Memory?

ABSTRACT. While causal consistency is one of the most fundamental consistency models weaker than sequential consistency, the decidability of safety verification for (finite-state) concurrent programs running under causally consistent shared-memories is still unclear. We establish the decidability of this problem for two standard and well-studied variants of causal consistency. To do so, for each of the variants, we develop an equivalent "lossy" operational semantics, and show that it constitutes a well-structured transition system, which enables decidable verification. The two novel semantics are based on similar key observations, which, we believe, may also be of independent use in the investigation of weakly consistent shared memory models and their verification. Interestingly, our results are in contrast to the undecidability of this problem under the Release/Acquire fragment of the C/C++11 memory model, which forms another variant of a causally consistent memory that, in terms of allowed outcomes, lies strictly between the two models we study. Nevertheless, all these variants coincide for write/write-race-free programs, which implies the decidability of verification for such programs under Release/Acquire. (Joint work with Udi Boker, partly presented at PLDI'20)

11:00-12:40 Session 125F (GuttmanFest)
Location: Ullmann 200
11:00
On the Complexity of Verification of Time-Sensitive Distributed Systems
11:30
Protocol Analysis with Time and Space
PRESENTER: Santiago Escobar
12:00
Probabilistic annotations for protocol models
12:20
Securing Node-RED Applications
11:00-12:30 Session 125G: Isabelle 2 (Isabelle)
Location: Ullmann 101
11:00
A Verified Implementation of B-trees in Isabelle/HOL
PRESENTER: Niels Mündler

ABSTRACT. In this paper we present the verification of an imperative implementation of the ubiquitous B+-Tree data structure in the interactive theorem prover Isabelle/HOL. The implementation supports membership test, insertion and range queries with efficient binary search for intra-node navigation. The imperative implementation is verified in two steps: an abstract set interface is refined to an executable but inefficient purely functional implementation which is further refined to the efficient imperative implementation.

11:30
On Axiomatic Systems for Classical Propositional Logic

ABSTRACT. We describe selected axiomatic systems for classical propositional logic and show how Isabelle/HOL helps investigate such systems. We consider systems based on implication and falsity, implication and negation, and disjunction and negation.

11:50
On Termination for Hybrid Tableaux

ABSTRACT. I sketch preliminary work on formalizing the termination of a tableau calculus for basic hybrid logic, whose soundness and completeness have already been formally verified. The formalization includes some crucial theorems, but omits others.

12:10
Lessons of Teaching Formal Methods with Isabelle

ABSTRACT. We present our recent experiences teaching two courses on formal methods which use Isabelle/Pure and Isabelle/HOL. We explain our overall approach and our experiences with implementing beginner-friendly tools to help students understand how Isabelle works. We also describe the issues that we often see students struggle with when attempting to learn how to program and prove theorems using Isabelle, and suggest ideas for potential solutions to some of these concerns.

11:00-12:30 Session 125H: Model Counting: Applications & Experiments (MC)
Location: Ullmann 302
11:00
Randomized Synthesis for Diversity and Cost Constraints with Control Improvisation
PRESENTER: Andreas Gittis

ABSTRACT. In many synthesis problems, it can be essential to generate implementations which not only satisfy functional constraints but are also randomized to improve variety, robustness, or unpredictability. The recently-proposed framework of control improvisation (CI) provides techniques for the correct-by-construction synthesis of randomized systems subject to hard and soft constraints. However, prior work on CI has focused on qualitative specifications, whereas in robotic planning and other areas we often have quantitative quality metrics which can be traded against each other. For example, a designer of a patrolling security robot might want to know by how much the average patrol time needs to be increased in order to ensure that a particular aspect of the robot's route is sufficiently diverse and hence unpredictable. In this paper, we enable this type of application by generalizing the CI problem to support quantitative soft constraints which bound the expected value of a given cost function, and randomness constraints which enforce diversity of the generated traces with respect to a given label function. We establish the basic theory of labelled quantitative CI problems, and develop efficient algorithms for solving them when the specifications are encoded by finite automata. We also provide an approximate improvisation algorithm based on constraint solving for any specifications encodable as Boolean formulas. We demonstrate the utility of our problem formulation and algorithms with experiments applying them to generate diverse near-optimal plans for robotic planning problems.
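
A fully explicit toy instance of the labelled quantitative CI setup is sketched below; real instances are specified by automata and Boolean formulas rather than finite lists, and all names and numbers here are made up. Randomizing uniformly across labels maximizes label diversity; the expected-cost constraint is then checked against the resulting distribution.

```python
import random

traces = {                       # trace -> (label, cost)
    "abc": ("north", 3.0), "abd": ("north", 5.0),
    "xyz": ("south", 4.0), "xyw": ("south", 2.0),
}
by_label = {}
for t, (label, _) in traces.items():
    by_label.setdefault(label, []).append(t)

# candidate improviser: uniform over labels, uniform within each label
prob = {t: 1 / (len(by_label) * len(by_label[traces[t][0]])) for t in traces}
assert abs(sum(prob.values()) - 1) < 1e-9

expected_cost = sum(p * traces[t][1] for t, p in prob.items())
print("E[cost] =", expected_cost)    # must stay below the given cost bound
print("improvisation:", random.choices(list(prob), weights=prob.values())[0])
```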

11:20
Generating Random Weighted Model Counting Instances: An Empirical Analysis with Varying Primal Treewidth

ABSTRACT. Weighted model counting (WMC) is an extension of propositional model counting with applications to probabilistic inference and other areas of artificial intelligence. In recent experiments, WMC algorithms are shown to perform similarly overall but with significant differences on specific subsets of benchmarks. A good understanding of the differences in the performance of algorithms requires identifying key characteristics that favour some algorithms over others. In this paper, we introduce a random model for WMC instances with a parameter that influences primal treewidth—the parameter most commonly used to characterise the difficulty of an instance. We then use this model to experimentally compare the performance of WMC algorithms c2d, Cachet, d4, DPMC, and miniC2D on random instances. We show that the easy-hard-easy pattern is different for algorithms based on dynamic programming and algebraic decision diagrams (ADDs) than for all other solvers. We also show how all WMC algorithms scale exponentially with respect to primal treewidth and how this scalability varies across algorithms and densities. Finally, we demonstrate how the performance of ADD-based algorithms changes depending on how much determinism or redundancy there is in the numerical values of weights.

11:50
Extending Partially Directed Graphs and Determining the Size of their Markov Equivalence Class

ABSTRACT. In this talk I will present an overview of recent advances on two fundamental problems in graphical causal analysis. In contrast to logical inference – where we reason about what is certainly true – we reason in statistical inference about what is likely to be true. Causal knowledge is usually represented in the form of directed acyclic graphs that allow an intuitive and mathematically sound formalism. Since these graphs are a priori unknown, the first task is to learn them from data – in particular, one has to decide whether there is a causal explanation at all. The standard approach due to Verma and Pearl for this task works in two phases: First, a partially directed graph is learned using independence tests on the data; Second, the partially directed graph is extended (or oriented) to a Markov equivalent directed acyclic graph (DAG). While the first stage is well understood, the second part was open for a long time – it was believed that it may be possible to orient a graph in time O(n+m), but the best upper bound was just O(n⁴). In this talk I will sketch recent work with Wienöbst and Liśkiewicz, in which we settled the time complexity of the problem and proved that it is actually Θ(n³) under reasonable complexity-theoretic assumptions.

Since the statistical properties of the data are maintained by a number of different DAGs, the sketched process does not unambiguously identify the underlying DAG, but just a member of the class of Markov equivalent DAGs. It is therefore important to understand this equivalence class rather than a single DAG alone and, hence, the second fundamental problem is to determine the size of a Markov equivalence class. Since the size of the equivalence class can be exponential in the number of vertices, previous strategies based on exhaustive search, recursive partitioning, or dynamic programming on a tree decomposition all required exponential time. In the second part of the talk I present the high-level idea of a novel polynomial-time algorithm that solves the counting problem using the tree decomposition combined with various structural properties of chordal graphs – avoiding the traditional dynamic program on such decompositions.
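
For reference, here is a Python sketch of the classic extension procedure in the style of Dor and Tarsi, whose naive cost motivates the complexity question above; this is not the talk's Θ(n³) algorithm. Repeatedly pick a vertex with no outgoing directed edges whose undirected neighbours are adjacent to all its other neighbours, orient its undirected edges inwards, and remove it.

```python
def extend_pdag(vertices, directed, undirected):
    """directed: set of (u, v) arcs u->v; undirected: set of frozensets.
    Returns a full orientation (set of arcs) or None if none exists."""
    v_left, d, u = set(vertices), set(directed), set(undirected)
    orientation = set(d)
    while v_left:
        for x in v_left:
            if any(a == x for (a, b) in d):
                continue                          # x must be a sink
            und_nbrs = {y for e in u if x in e for y in e - {x}}
            nbrs = und_nbrs | {a for (a, b) in d if b == x}
            if all(frozenset({y, z}) in u or (y, z) in d or (z, y) in d
                   for y in und_nbrs for z in nbrs - {y}):
                orientation |= {(y, x) for y in und_nbrs}
                v_left.remove(x)
                d = {(a, b) for (a, b) in d if x not in (a, b)}
                u = {e for e in u if x not in e}
                break
        else:
            return None                           # no consistent extension
    return orientation

# 1 -> 2 plus undirected 2 - 3 extends to the DAG 1 -> 2 -> 3
print(extend_pdag({1, 2, 3}, {(1, 2)}, {frozenset({2, 3})}))
```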

11:00-12:30 Session 125I (NSV)
Location: Ullmann 308
11:00
Combining learning-based methods and temporal logic specifications for designing controllers for unknown environments

ABSTRACT. Reinforcement learning (RL) is a popular technique in the AI and robotics domains to obtain control policies for autonomous agents operating in uncertain environments. The basic setup of RL is as follows: in any given state, the autonomous agent takes an action, and the environment stochastically decides the next state for the agent and also gives it a reward. Rewards are defined locally, and are associated with the agent transitions or states. RL algorithms attempt to find control policies that maximize the cumulative reward obtained by the agent. Recent work in these domains has focused on model-free RL, where a further challenge is that the model of the environment is unknown. A crucial element of RL is reward design: ill-designed rewards can make the agents learn undesirable or unsafe control policies – unacceptable in safety-critical systems. In the formal methods community, techniques like reactive synthesis have focused on correct-by-construction design of control policies given a model of the environment and formal specifications in a suitable temporal logic. In this talk, we will look at some recent work on how we can leverage powerful specification formalisms with model-free learning-based methods to get control policies that are safer, and in some cases, accompanied with safety guarantees.
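
To fix the setup described above, here is a minimal tabular Q-learning loop on a toy chain environment (entirely our illustration; the environment, rewards, and hyperparameters are made up, and the talk concerns far richer settings):

```python
# Tabular Q-learning on a 5-state chain: states 0..4, goal at state 4.
import random

random.seed(1)
n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.1, 0.95, 0.2
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(s, a):
    """Stochastic environment: the intended move succeeds 90% of the time."""
    move = (1 if a == 1 else -1) if random.random() < 0.9 else 0
    s2 = min(max(s + move, 0), n_states - 1)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)   # locally defined reward

for episode in range(500):
    s = 0
    while s != n_states - 1:
        a = random.randrange(n_actions) if random.random() < eps \
            else max(range(n_actions), key=lambda i: Q[s][i])
        s2, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])  # TD update
        s = s2

print([max(range(n_actions), key=lambda i: Q[s][i]) for s in range(n_states)])
```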

12:00
Verified Numerical Methods for Ordinary Differential Equations
PRESENTER: Ariel Kellison

ABSTRACT. Ordinary differential equations (ODEs) are used to model the evolution of the state of a system over time. They are ubiquitous in the physical sciences and are often used in computational models with safety-critical applications. For critical computations, numerical solvers for ODEs that provide useful guarantees of their accuracy and correctness are required, but do not always exist in practice. In this work, we demonstrate how to use the Coq proof assistant to verify that a C program correctly and accurately finds the solution to an ODE initial value problem (IVP). Our verification framework is modular, and concisely disentangles the high-level mathematical properties expected of the system being modeled from the low-level behavior of a particular C program. Our approach relies on the construction of two simple functional models in Coq: a floating-point valued functional model for analyzing the intermediate-level behavior of the program, and a real-valued functional model for analyzing the high-level mathematical properties of the system being modeled by the IVP. Our final result is a proof that the floating-point solution returned by the C program is an accurate solution to the IVP, with a good quantitative bound. Our framework assumes only the operational semantics of C and of IEEE-754 floating point arithmetic.
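
For orientation, the kind of computation whose verified counterpart the paper studies can be sketched as a forward-Euler IVP solver (our toy example; the paper's verified C program and its error analysis are substantially more involved):

```python
import math

def euler(f, t0, y0, h, steps):
    """Approximate the IVP y' = f(t, y), y(t0) = y0 with fixed step size h."""
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)              # one Euler step; local error is O(h^2)
        t += h
    return y

# y' = -y, y(0) = 1 has exact solution exp(-t); observe the global O(h) error.
approx = euler(lambda t, y: -y, 0.0, 1.0, h=0.001, steps=1000)
print(approx, math.exp(-1.0), abs(approx - math.exp(-1.0)))
```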

11:00-12:30 Session 125J (PAAR)
Location: Ullmann 205
11:00
A Two-Watched Literal Scheme for First-Order Logic
PRESENTER: Lorenz Leutgeb

ABSTRACT. The two-watched literal scheme for propositional logic is a core component of any efficient CDCL (Conflict Driven Clause Learning) implementation. The family of SCL (Clause Learning from Simple Models) calculi also learns clauses with respect to a partial model assumption built by decisions and propagations similar to CDCL. We show that the well-known two-watched literal scheme can be lifted to SCL for first-order logic.
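
For readers unfamiliar with the propositional scheme being lifted, the following is a compact sketch of classical two-watched-literal unit propagation (a standard textbook construction in our own rendering, not the first-order SCL lifting):

```python
from collections import defaultdict

def value(lit, assignment):
    v = assignment.get(abs(lit))
    return None if v is None else (v if lit > 0 else not v)

class TwoWatch:
    """Clauses are lists of non-zero ints (DIMACS-style); length >= 2 assumed."""
    def __init__(self, clauses):
        self.clauses = [list(c) for c in clauses]
        self.watch = defaultdict(list)            # literal -> watching clause indices
        for i, c in enumerate(self.clauses):
            for lit in c[:2]:                     # watch the first two literals
                self.watch[lit].append(i)

    def assign(self, lit, assignment):
        """Make `lit` true; return implied unit literals, or 'conflict'."""
        assignment[abs(lit)] = lit > 0
        units = []
        watchers, self.watch[-lit] = self.watch[-lit], []
        for i in watchers:
            c = self.clauses[i]
            if c[0] == -lit:                      # normalise: falsified watch at c[1]
                c[0], c[1] = c[1], c[0]
            if value(c[0], assignment) is True:   # clause already satisfied
                self.watch[-lit].append(i)
                continue
            for j in range(2, len(c)):            # search for a replacement watch
                if value(c[j], assignment) is not False:
                    c[1], c[j] = c[j], c[1]
                    self.watch[c[1]].append(i)
                    break
            else:                                 # none found: unit or conflict
                self.watch[-lit].append(i)
                other = value(c[0], assignment)
                if other is False:
                    return "conflict"             # (a full solver would now backtrack)
                if other is None:
                    units.append(c[0])
        return units
```

The invariant is that a clause is revisited only when one of its two watched literals becomes false; for example, `TwoWatch([[1, 2], [-1, 2, 3]]).assign(-2, {})` makes the first clause unit and returns `[1]`.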

11:30
Lazy Paramodulation in Practice
PRESENTER: Cezary Kaliszyk

ABSTRACT. Tableaux-based provers work well for certain classes of first-order formulas and are better suited for integration with machine learning techniques. However, the tableaux techniques developed so far perform considerably worse than saturation-based techniques on first-order formulas with equality. In this paper, we present the first implementation of Lazy Paramodulation, which allows applying ordering constraints in connection tableaux proof search without the exponential blowup in the number of clauses characteristic of Brand's modifications. We implement the original Lazy Paramodulation calculus and propose a variant of Brand's modification (called LP-modification), which is based on the same ideas as Lazy Paramodulation, avoids the exponential blowup, and is just a preprocessing step for the standard Connection Tableaux calculus. We demonstrate the impact of both Lazy Paramodulation and the LP-modification on proof search on a benchmark of TPTP library problems.

12:00
Empirical Properties of Term Orderings for Superposition

ABSTRACT. Term orderings play a critical role in most modern automated theorem proving systems, in particular for systems dealing with equality. In this paper we report on a set of experiments with the superposition theorem prover E, comparing the performance of the system under Knuth-Bendix Ordering and Lexicographic Path Ordering with a number of different precedence and weight generation schemes. Results indicate that relative performance under a short time limit seems to be a good predictor for longer run times and that KBO generally outperforms LPO. Also, if other strategy elements (especially clause selection) are independent of the ordering, performance of a given ordering instance with one strategy is a decent predictor of performance with a different strategy.
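
For intuition about how such an ordering comparison proceeds, here is a toy KBO restricted to ground terms, where the variable condition is vacuous (the symbols, weights, and precedence are our own made-up examples):

```python
# Ground-term KBO: compare weights first, then head precedence, then
# arguments lexicographically. Terms are (symbol, [arguments]) pairs.
WEIGHT = {"f": 1, "g": 1, "a": 1, "b": 1}
PREC = {"a": 0, "b": 1, "g": 2, "f": 3}          # precedence: f > g > b > a

def w(t):
    sym, args = t
    return WEIGHT[sym] + sum(w(s) for s in args)

def kbo_greater(s, t):
    """s >_kbo t for ground terms."""
    if w(s) != w(t):
        return w(s) > w(t)
    fs, sargs = s
    ft, targs = t
    if fs != ft:
        return PREC[fs] > PREC[ft]
    for si, ti in zip(sargs, targs):             # lexicographic on arguments
        if si != ti:
            return kbo_greater(si, ti)
    return False

f = lambda *args: ("f", list(args))
g = lambda *args: ("g", list(args))
a, b = ("a", []), ("b", [])
print(kbo_greater(f(a, b), g(g(a))))             # equal weight 3, so f > g decides
```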

11:00-12:30 Session 125K: Parallel SAT solving (PDAR)
Location: Ullmann 303
11:00
Leveraging GPUs for Parallel SAT Solving

ABSTRACT. The past two decades have witnessed an unprecedented improvement in the runtime performance of SAT solvers owing to clever software engineering and the creative design of data structures. Yet, most entries in the annual SAT competition retain the core architecture of MiniSat, which was designed from the perspective of single-core CPU architectures. On the other hand, since 2005, there has been a significant shift to heterogeneous architectures owing to the impending end of Dennard scaling. Consequently, it is no coincidence that recent breakthroughs in computer science have significantly utilized the opportunities offered by such heterogeneous architectures. In this talk, I will discuss our novel multi-threaded CDCL-based framework, called GpuShareSat, designed to take advantage of CPU+GPU architectures.

The core underlying principle of our approach is to divide the tasks among CPU and GPU so as to attempt to achieve the best of both worlds. To demonstrate the practical efficiency of our framework, we augment the state-of-the-art multi-threaded solver glucose-syrup with GpuShareSat. Our empirical analysis demonstrates that glucose-syrup augmented with GpuShareSat solves 22 more instances than glucose-syrup alone on SAT 2020 instances.

11:45
Dealing With Uncertainty Between Peers and Having the Best of Times With Distributed Systems

ABSTRACT. Diminishing increases in CPU frequency over the last years have led to increasing demands on SAT & QBF solvers to use multiple cores at once for solving formulas. Multiple solving paradigms exist to parallelize these solvers, each suited for different problem families. One of them is Cube and Conquer, which tries to split formulas into sub-problems by applying assumptions. This lends itself well to parallel solving, as many sub-problems may be solved at the same time. Distributing such a solver is conceptually easy, but requires significant implementation effort that is mostly identical for different solvers. For this we created Paracooba 2, a framework for solvers based on splitting their (sub-)problems. Paracooba 2 provides the multithreading and distribution over multiple hosts, letting a solving module be implemented in an easier and abstracted environment. In this talk, we share details about this implementation, some networking-related and conceptual challenges that arose during development, and how we have overcome them.
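
The Cube and Conquer idea mentioned above can be sketched in a few lines (a toy illustration with a hypothetical formula and a brute-force "conquer" step; real implementations split via lookahead and solve cubes with CDCL solvers):

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

CLAUSES = [[1, 2, 3], [-1, -2], [-1, -3], [2, -3], [-2, 3]]   # toy CNF over x1..x3
N = 3

def solve(assumptions):
    """Exhaustively check the formula under fixed assumptions (the conquer step)."""
    fixed = {abs(l): l > 0 for l in assumptions}
    free = [v for v in range(1, N + 1) if v not in fixed]
    for bits in product([False, True], repeat=len(free)):
        model = {**fixed, **dict(zip(free, bits))}
        if all(any(model[abs(l)] == (l > 0) for l in c) for c in CLAUSES):
            return model
    return None

if __name__ == "__main__":
    cubes = [[s1 * 1, s2 * 2] for s1 in (1, -1) for s2 in (1, -1)]  # split on x1, x2
    with ProcessPoolExecutor() as pool:
        for model in pool.map(solve, cubes):   # sub-problems solved in parallel
            if model is not None:
                print("SAT:", model)
                break
        else:
            print("UNSAT")
```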

11:00-12:30 Session 125L: Research Papers (PERR)
Location: Ullmann 305
11:00
Compositional Approaches to Games in Model Checking

ABSTRACT. Games are an important tool in formal verification techniques such as model checking. Therefore, efficient solutions for games are essential in formal verification. Compositionality is the property that reasoning about a large system can be reduced to reasoning about its constituent parts, and thus it often enables efficient divide-and-conquer-type algorithms.

We present a compositional solution of parity games. This is achieved by adding open ends to the usual notion of parity games. Our compositional solution is based on compact closed categories and compact closed functors between them.

We also present a compositional solution of mean payoff games. We have conducted experiments targeting mean payoff games. The experimental results show that the compositionality property of our approach is advantageous, especially when the target mean payoff game is a large one built from well-separated component mean payoff games.

11:30
From Bounded Checking to Verification of Equivalence via Symbolic Up-to Techniques (Extended Abstract)
PRESENTER: Yu-Yang Lin

ABSTRACT. We present a bounded equivalence verification technique for higher-order programs with local state. This technique combines fully abstract symbolic environmental bisimulations similar to symbolic game semantics, novel up-to techniques, and lightweight state invariant annotations. This yields an equivalence verification technique with no false positives or negatives. The technique is bounded-complete, in that all inequivalences are automatically detected given large enough bounds. Moreover, several hard equivalences are proved automatically or after being annotated with state invariants. We realise the technique in a tool prototype called Hobbit and benchmark it with an extensive set of new and existing examples. Hobbit can prove many classical equivalences including all Meyer and Sieber examples.

12:00
Extending a Lemma Generation Approach for Rewriting Induction on Logically Constrained Term Rewriting Systems
PRESENTER: Kasper Hagens

ABSTRACT. One approach to proving equivalence of two functions in imperative programming languages is to automatically convert the functions to an equivalent Logically Constrained Term Rewriting System (LCTRS), for which equivalence can be proved using rewriting induction, either fully automatically or guided by the user. We report on ongoing work that builds on this approach by exploring three new lemma generation techniques.

Note: this is a cross submission of a paper with the same title submitted to WPTE.

11:00-12:30 Session 125M: Arithmetics and higher-order reasoning (SMT)
Location: Ullmann 311
11:00
Bit-Precise Reasoning via Int-Blasting
PRESENTER: Yoni Zohar

ABSTRACT. The state of the art for bit-precise reasoning in the context of Satisfiability Modulo Theories (SMT) is a SAT-based technique called bit-blasting where the input formula is first simplified and then translated to an equisatisfiable propositional formula. The main limitation of this technique is scalability, especially in the presence of large bit-widths and arithmetic operators. We introduce an alternative technique, which we call int-blasting, based on a translation to an extension of integer arithmetic rather than propositional logic. We present several translations, discuss their differences, and evaluate them on benchmarks that arise from the verification of rewrite rule candidates for bit-vector solving, as well as benchmarks from SMT-LIB. We also provide preliminary results on 35 benchmarks that arise from smart contract verification. The evaluation shows that this technique is particularly useful for benchmarks with large bit-widths and can solve benchmarks that the state of the art cannot.
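
The gist of the translation can be illustrated as follows (our simplified sketch, not the paper's actual translations): a width-w bit-vector term is modelled as an integer in [0, 2^w), and bit-vector operations become integer arithmetic modulo 2^w.

```python
W = 8                        # bit-width; terms range over [0, 2^W)
MOD = 1 << W

def bvadd(a, b): return (a + b) % MOD
def bvmul(a, b): return (a * b) % MOD
def bvneg(a):    return (MOD - a) % MOD
def bvnot(a):    return MOD - 1 - a            # ~a = 2^W - 1 - a

# A rewrite-rule candidate of the kind mentioned above: ~a + 1 == -a.
assert all(bvadd(bvnot(a), 1) == bvneg(a) for a in range(MOD))
print("~a + 1 == -a holds for all width-8 values")
```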

11:20
An SMT Approach for Solving Polynomials over Finite Fields
PRESENTER: Thomas Hader

ABSTRACT. In this extended abstract we present our work on solving non-linear polynomial systems over finite fields. Given a formula over (in-)equality constraints of polynomials over finite fields, we developed an automated search procedure that checks satisfiability of the polynomial system, that is, the existence of an assignment of the polynomial variables to values from the finite field such that the constraints are satisfied. We have designed a Model Constructing Satisfiability (MCSat) style search procedure with two different approaches for explanation functions. We have implemented our procedure and compared its performance to state-of-the-art approaches.
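
For a sense of the decision problem (a naive brute-force reference, emphatically not the MCSat-style procedure of the paper):

```python
from itertools import product

p = 7                                            # a small prime field GF(7)
constraints = [
    lambda x, y: (x * x + y * y - 1) % p == 0,   # x^2 + y^2 = 1
    lambda x, y: (x - y) % p != 0,               # x != y
]

solutions = [(x, y) for x, y in product(range(p), repeat=2)
             if all(c(x, y) for c in constraints)]
print(solutions)                                 # SAT iff this list is non-empty
```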

11:40
On Satisfiability of Polynomial Equations over Large Prime Fields
PRESENTER: Lucas Vella

ABSTRACT. The importance of zkSNARKs is ever increasing in the cryptocurrency and smart contracts ecosystem. Due to the significant threat of financial loss that a bug represents in such applications, there is also a surge of interest in the formal verification of zkSNARK programs. These programs are ultimately encoded as a system of polynomial equations over large finite prime fields, and to prove statements about such a program is to prove statements about its system of equations. In this ongoing work we investigate algebraic techniques with the goal of building a mixed algebraic-SMT decision procedure for satisfiability in a new theory of polynomials over large prime fields. Ideally, the new theory and decision procedure could be implemented in existing SMT solvers as well as in a standalone tool, in order to perform verification tasks over real-world applications of zkSNARKs.

12:00
Goose: A Meta Solver for Deep Neural Network Verification
PRESENTER: Joseph Scott

ABSTRACT. The verification of deep neural networks is a recent algorithmic challenge that has attracted significant interest, resulting in a wide array of complete and incomplete solvers that draw on diverse techniques. As is typical in hard search and optimization problems, no one solver is expected to be the fastest on all inputs. While this insight has been leveraged to boost Boolean Satisfiability (SAT), for instance, by combining or tuning solvers, it is yet to lead to a leap in the neural network verification domain.

Towards this goal, we present Goose, a meta-solver for deep neural network verification. Goose's architecture supports a wide variety of complete and incomplete decision procedures and leverages three key meta-solving techniques to improve efficiency: algorithm selection, probabilistic satisfiability inference, and time interval deepening. Using Goose we observe a 47.3% improvement in PAR-2 score across more than 800 benchmarks and 14 solvers from VNN-COMP '21.

11:00-12:30 Session 125N (SYNT)
Location: Ullmann 201
11:00
Beyond Counterexamples: Satisfiability and Synthesis Modulo Oracles

ABSTRACT. In classic program synthesis algorithms, such as counterexample-guided inductive synthesis (CEGIS), the algorithm alternates between a synthesis phase and an oracle (verification) phase. Many (if not most) synthesis algorithms use a white-box oracle based on satisfiability modulo theories (SMT) solvers to provide counterexamples. But what if a white-box oracle is either not available or not easy to work with?

In this talk, I will present a framework for solving a general class of oracle-guided synthesis problems which we term synthesis modulo oracles (SyMO). In this setting, oracles are black boxes with a query-response interface defined by the synthesis problem. This allows us to lift synthesis to domains where using an SMT solver as a verifier is not practical.

I will formalise what it means for a problem to be satisfiable modulo oracles, and present algorithms for solving Satisfiability Modulo Theories and Oracles (SMTO) and Synthesis Modulo Oracles (SyMO). I will demonstrate some prototype case studies for SyMO and SMTO, and show how the use of oracles allows us to solve problems beyond the abilities of classic SMT and synthesis solvers.
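
The following toy loop illustrates the black-box query-response interface (entirely our illustration: the grammar, oracle, and names are hypothetical and much simpler than SyMO):

```python
def make_oracle(secret):
    """Black-box oracle: returns a counterexample (input, output) pair or None."""
    def oracle(candidate):
        for x in range(-50, 50):               # sampled black-box validation
            if candidate(x) != secret(x):
                return (x, secret(x))
        return None
    return oracle

def synthesize(oracle):
    examples = []                              # accumulated counterexamples
    for c in range(100):                       # toy grammar: f(x) = x + c
        candidate = lambda x, c=c: x + c
        if not all(candidate(x) == y for x, y in examples):
            continue                           # inconsistent with past examples
        cex = oracle(candidate)
        if cex is None:
            return c                           # verified against the oracle
        examples.append(cex)
    return None

print("synthesised f(x) = x +", synthesize(make_oracle(lambda x: x + 13)))
```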

12:00
Complexity of Relational Query Synthesis
PRESENTER: Aalok Thakkar

ABSTRACT. The synthesis of relational queries from input-output examples has been studied in the context of inductive logic programming. Analyzing the computational complexity of the fundamental problem can significantly impact the development of synthesis tools.

This paper focuses on a fragment of relational queries that corresponds exactly to positive Datalog, which supports features such as conjunction, disjunction, and recursion, and does not interpret constants. We establish that the synthesis of such queries from input-output examples is coNP-complete. We conclude with the implications of these results for existing synthesis tools and outline some directions for future work.
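
To make the query fragment concrete: the classic transitive-closure query is expressible in positive Datalog, and its naive bottom-up evaluation over example facts (our illustration) takes a few lines:

```python
# path(x, y) :- edge(x, y).
# path(x, y) :- edge(x, z), path(z, y).
edge = {("a", "b"), ("b", "c"), ("c", "d")}

path = set(edge)                               # first (non-recursive) rule
while True:                                    # naive fixpoint for the recursive rule
    new = {(x, y) for (x, z) in edge for (z2, y) in path if z == z2} - path
    if not new:
        break
    path |= new

print(sorted(path))    # input-output examples of this shape drive the synthesis
```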

12:15
Interactive Debugging of Concurrent Programs under Relaxed Memory Models
PRESENTER: Subhajit Roy

ABSTRACT. In this work we present Gambit, an interactive debugging and synthesis environment for the analysis and repair of concurrency bugs under diverse relaxed memory models. Gambit analyses a controlled execution of a program through a theorem prover to answer queries regarding different behaviours due to alternate thread interleavings or reorderings under other relaxed memory models. It also allows automatic synthesis of fences and atomic blocks that repair concurrency bugs under different memory models. We have evaluated Gambit on multiple programs and found that it responds to debug queries quickly (about 1 second on average).

11:00-12:30 Session 125P: Proof Tree Builder & Rule Based ATP (ThEdu)
Location: Ullmann 306
11:00
A Proof Tree Builder for Sequent Calculus and Hoare Logic

ABSTRACT. We have developed a web-based graphical proof assistant, the Proof Tree Builder, that lets you apply rules upwards from the initial goal in sequent calculus and Hoare logic for a simple imperative language. We equipped our tool with a basic proof automation feature and Z3 integration. Students in the automated reasoning course at Princeton University used our tool and found it intuitive. The Proof Tree Builder is available online at https://proof-tree-builder.github.io.
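
As an example of the upward rule application the tool supports, consider the standard Hoare-logic assignment axiom closing a goal in one step (a textbook instance, not taken from the tool):

```latex
% Assignment axiom, applied upwards: the goal closes in one step because the
% precondition is exactly the postcondition with x+1 substituted for x.
\[
  \dfrac{}{\{\, x+1=2 \,\}\; x := x+1 \;\{\, x=2 \,\}} \;\textsc{Assign}
\]
```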

11:30
Rule Based Geometry Automated Theorem Provers
PRESENTER: Pedro Quaresma

ABSTRACT. Geometry automated theorem provers now have what may be considered a long history. Since the early attempts, linked to artificial intelligence, synthetic provers based on inference rules and forward chaining reasoning have been considered the best suited for education. Using an appropriate set of rules and forward chaining, they can more easily mimic the expected behaviour of a student developing a proof. We describe an implementation of the geometry deductive database method, discussing its merits and demerits where application in education is concerned.

11:00-12:00 Session 125Q (WOLVERINE)

Table hosts present an overview of the topics that will be discussed at their tables.

Location: Ullmann 202
11:00
Safe Reinforcement Learning

ABSTRACT. TBA

11:20
Explainable Artificial Intelligence

ABSTRACT. TBA

11:40
Robustness of Neural Networks

ABSTRACT. TBA

11:00-12:30 Session 125R (WST)
Location: Ullmann 309
11:00
CeTA — Efficient Certification of Termination Proofs

ABSTRACT. We present recent developments in CeTA w.r.t. the efficient certification of termination proofs. Here, efficiency is considered in several respects. First, we present a case study on minimizing the formalization effort, namely when trying to verify the classical path orders by viewing them as instances of the weighted path order. Second, we argue that the efficient generation of certificates within termination tools requires more powerful certifiers, in particular when considering the usage of SAT- or SMT-solvers. Third, we present recent developments aimed at improving the execution time of CeTA.

12:00
Hydra Battles and AC Termination
PRESENTER: Aart Middeldorp

ABSTRACT. We encode the Battle of Hercules and Hydra as a rewrite system with AC symbols. Its termination is proved by a new termination criterion for AC rewriting.

11:00-12:20 Session 125S (iPRA)
Location: Ullmann 203
11:00
Interpolation and Completeness in the Modal Mu-Calculus

ABSTRACT. From a proof-theoretic perspective, the idea that interpolation is tied to provability is a natural one. Thinking about Craig interpolation, if a ‘nice’ proof of a valid implication ϕ→ψ is available, one may succeed in defining an interpolant by induction on the proof-tree, starting from leaves and proceeding to the implication at the root. This method has recently been applied even to fixed point logics admitting cyclic proofs. In contrast, for uniform interpolation, there is no single proof to work from but a collection of proofs to accommodate: a witness to each valid implication ϕ→ψ where the vocabulary of ψ is constrained. Working over a set of prospective proofs and relying on the structural properties of sequent calculus is the essence of Pitts' seminal result on uniform interpolation for intuitionistic logic.

In this talk we will look at how Pitts' technique can be adapted to derive uniform interpolation for the propositional modal μ-calculus. For this we introduce the notion of an interpolation template, a finite (cyclic) derivation tree in a sequent calculus based on the Jungteerapanich-Stirling annotated proof system. Uniform interpolants arise from encoding the structure of interpolation templates within the syntax of the μ-calculus. We will conclude with a somewhat surprising corollary of the interpolation method via cyclic proofs: a straightforward proof of completeness for Kozen's finitary axiomatisation of the μ-calculus.
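
For reference, the Craig interpolation property discussed above is the following standard statement (not specific to the talk):

```latex
% Craig interpolation: every valid implication has an interpolant over the
% shared vocabulary of the two formulas.
\[
  \models \varphi \to \psi
  \;\Longrightarrow\;
  \exists\, \theta :\;
  \models \varphi \to \theta, \quad
  \models \theta \to \psi, \quad
  \mathrm{voc}(\theta) \subseteq \mathrm{voc}(\varphi) \cap \mathrm{voc}(\psi)
\]
```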

12:00
When iota Meets lambda
PRESENTER: Michal Zawidzki

ABSTRACT. In this note we present a sound, complete, and cut-free tableau calculus for a logic formalising a Russell-style theory of definite descriptions (DD), with the iota-operator used to construct DD, the lambda-operator forming predicate abstracts, and DD treated as genuine terms with a restricted right of residence. We show that in this setting we are able to overcome problems typical for Russell's original theory, such as scoping difficulties or undesired inconsistencies. We prove the Craig interpolation property for the proposed theory, which, through the Beth definability property, allows us to check whether an individual constant from a signature has a DD-counterpart under a given theory.

12:00-12:30 Session 126A: Secure Hardware (FCS)
Location: Ullmann 310
12:00
Unveiling Security through Obscurity Approach of Intel TDX Remote Attestation

ABSTRACT. Intel Trust Domain Extensions (TDX) is the next-generation confidential computing offering of Intel. One of the most critical processes of Intel TDX is the remote attestation mechanism. Since remote attestation bootstraps trust in remote applications, any vulnerability in the attestation mechanism can impact the security of an application. Hence, we investigate the use of formal methods to ensure the correctness of the attestation mechanisms. The symbolic security analysis of the remote attestation protocol in Intel TDX reveals a number of subtle inconsistencies in the specification of Intel TDX that could potentially lead to design and implementation errors as well as attacks. These inconsistencies have been reported to Intel, and Intel is in the process of updating the specifications. We also explain how formal specification and verification using ProVerif could help avoid these flaws and attacks.

12:15
pi_RA: A pi-calculus for verifying protocols that use remote attestation
PRESENTER: Emiel Lanckriet

ABSTRACT. Remote attestation (RA) allows authentication of software components on untrusted systems by relying on a root of trust. Over the last years, the use of devices supporting RA has been rising. Researchers have developed several formal models to reason about RA; however, these models tend to stay fairly close to the implementations found in hardware. In this paper we devise a model, called pi_RA, that supports RA at a high level of abstraction by treating it as a cryptographic primitive in a variant of the applied pi-calculus.

To demonstrate the use of pi_RA, we use it to formalise and analyse the security of MAGE, an SGX-based framework that allows mutual attestation of multiple enclaves. This analysis is done by proving the security of a compiler from a language with named actors, pi_Actor, to pi_RA; the compiler incorporates MAGE to resolve problems with mutual attestation.

12:00-12:30 Session 126B (WOLVERINE)

Round-table discussions -- part I

Location: Ullmann 202
12:30-14:00Lunch Break

Lunches will be held in Taub hall and in The Grand Water Research Institute.

14:00-15:30 Session 127A (ARQNL)
Location: Ullmann 104
14:00
Do Lawyers Use Automated Reasoning?

ABSTRACT. Laws must be understood and their application must be explained and justified. As such, they seem to be the perfect use case for (non-classical) automated reasoning. Indeed, there is much academic research in legal knowledge representation and reasoning. Nevertheless, there are no commercial reasoning-based applications, apart from legal expert systems. In this talk, I will survey a personal three-year journey towards a commercial application, as well as lessons learned from failures. I will touch on various topics, such as knowledge representation and validation, reasoning and justification, and academic vs. business perspectives.

15:00
(Re)moving Quantifiers to Simplify Parameterised Boolean Equation Systems

ABSTRACT. We investigate the simplification of parameterised Boolean equation systems (PBESs), a first-order fixpoint logic, by means of quantifier manipulation. Whereas the occurrence of quantifiers in a PBES can significantly increase the effort required to solve it, i.e., compute its semantics, existing syntactic transformations have little to no support for quantifiers. We resolve this by proposing a static analysis algorithm that identifies which quantifiers may be moved between equations such that their scope is reduced. This syntactic transformation can drastically reduce the effort required to solve a PBES. Additionally, we identify an improvement to the computation of guards, which can be used to strengthen our static analysis.

14:00-15:30 Session 127B: Model Checking (II) and Synthesis (DSV)
Location: Ullmann 300
14:00
Solving Constrained Horn Clauses Lazily and Incrementally

ABSTRACT. Constrained Horn Clauses (CHCs) are a fragment of First Order Logic (FOL) that has gained a lot of attention in recent years. One of the main reasons for the rising interest in CHCs is the ability to reduce many verification problems to satisfiability of CHCs. For example, program verification can naturally be described as the satisfiability of CHCs modulo a background theory such as linear arithmetic and arrays. To this end, CHC-solvers can be used as the back-end for different verification tools, as they allow separating the generation of verification conditions from the decision procedure that decides whether the verification conditions are correct.

In this talk, we present a novel framework, called LazyHorn, that is aimed at solving the satisfiability problem of CHCs modulo a background theory. The framework is driven by the idea that a set of CHCs can be solved in parts, making it an easier problem for the CHC-solver. Furthermore, solving a set of CHCs can benefit from an interpretation revealed by the solver for its subsets. LazyHorn is lazy in that it gradually extends the set of checked CHCs, as needed. It is also incremental in its use of a CHC solver, supplying it with an interpretation, obtained for previously checked subsets. We have implemented an efficient instance of the framework that is restricted to a fragment of CHCs called linear CHCs.

14:45
Interaction Models vs. Formal Models: Synthesis Co-Design

ABSTRACT. Program synthesis is the problem of generating a program to satisfy a specification of user intent. Since these specifications are usually partial, this means searching a space of candidate programs for one that exhibits the desired behavior. A lion’s share of the work on program synthesis focuses on new ways to perform the search, but allowing the user to communicate intent remains a major challenge.

This talk focuses on the complex relationship between user intent, specifications, and interaction model. In the domain of program synthesis, we narrow our focus by targeting a more specific category of users. Focusing on any subgroup of users allows making assumptions on both the input the user can generate for the synthesizer and the output they can consume. In the case of developers, concepts that are part of the programmer’s life such as code review and read-eval-print loops (REPL) can be leveraged for interactions with a synthesizer. The talk describes two synthesis-based tools that both leverage and cater to programmers as their users.

These assumptions also drive ways in which the hard problems of formal methods can be delegated to incomplete assistants: (bounded resource) solvers are a community favorite, and ML models are also gaining traction as an approach for deciding a hard subproblem. A third option, less sound than a solver and less decisive than a model, is a human: delegating parts of the task back to the user ensures that they are solved to the user's satisfaction. But how to make the user work for the tool, rather than the other way around, is a critical issue that must be addressed at the intersection of intent, specification, and interaction model.

Synthesizer design shows us how these elements can be at odds with each other, in particular the tension between the interface and state of the art synthesis techniques. Synthesis is, at best, a computationally hard problem, and many existing program synthesis tools and techniques were designed around the synthesizer and its internals, their user interface dictated by the needs of the synthesizer. We demonstrate the process of concurrently building both the synthesizer and its intended user-facing tool as a way to search for a balance of the needs of the interaction model (and, implicitly, the user) and the algorithm.

14:00-14:45 Session 127C: Analysis for Security and Privacy (FCS)
Location: Ullmann 310
14:00
VeNuS: Neural Network Robustness Specifications via Verifier-guided Optimization
PRESENTER: Anan Kabaha

ABSTRACT. Analyzing the robustness of neural networks is crucial for trusting them. The vast majority of existing work focuses on networks' robustness to epsilon-ball neighborhoods, but these cannot capture complex robustness properties. In this work, we propose VeNuS, a general approach and a system for computing maximally non-uniform robust neighborhoods that optimize a target function (e.g., neighborhood size). The key idea is to let a verifier guide our search by providing the directions in which a neighborhood can expand while remaining robust. We show two approaches to compute these directions. The first approach treats the verifier as a black-box and is thus applicable to any verifier. The second approach extends an existing verifier with the ability to compute the gradient of its analysis results. We evaluate VeNuS on various models and show it computes neighborhoods with 10^407x more distinct images and 1.28x larger average diameters than the epsilon-ball ones. Compared to another non-uniform robustness analyzer, VeNuS computes neighborhoods with 10^314x more distinct images and 2.49x larger average diameters. We further show that the neighborhoods that VeNuS computes enable one to understand robustness properties of networks which cannot be obtained using the standard epsilon-balls. We believe this is a step towards understanding global robustness.

14:15
Compositional Higher-order Declassification Using Logical Relations
PRESENTER: Jan Menz

ABSTRACT. To ensure that programs do not accidentally leak sensitive data, we need to formally specify desirable information-flow properties, including declassification properties, of programs and prove that programs abide by them. Compositionality is a desirable property of such security specifications. In the higher-order setting, compositionality of such definitions is, however, not trivial. For programs involving intensional forms of declassification, i.e., the deliberate release of private information based on the internal state of the program, most past specifications of security in the higher-order setting are not compositional. We argue, with a concrete example, that \emph{logical relations} provide an excellent basis for compositional definitions of declassification-based information security in the higher-order setting, including intensional forms of declassification.

14:30
Robust Indistinguishability
PRESENTER: Adam O'Neill

ABSTRACT. We introduce a new privacy notion, \emph{robust indistinguishability} (RI), parameterized by variables $\varepsilon,\alpha$. We show $(\varepsilon,\alpha)$-RI is \emph{inequivalent} to $(\varepsilon',\delta)$-differential privacy (DP) for \emph{every} $(\varepsilon',\delta)$ unless $\alpha = 0$, in which case it is equivalent to DP with $\delta = 0$ and a different $\varepsilon$. Therefore, RI relaxes pure DP along a novel axis compared to $(\varepsilon',\delta)$-DP.

More precisely, we formulate $(\varepsilon,\alpha)$-RI for a randomized mechanism $F$ as a game wherein a source algorithm chooses two neighboring datasets $x_0,x_1$ and a set of outcomes $Y$ with cumulative weight at least $\alpha$. The adversary is then given a sample of random variable $F(x_b)$ where $b$ is a random bit, \emph{conditioned on the outcome of $F$ being in $Y$}. We measure the advantage of the adversary as twice the probability it guesses $b$ minus 1. When $\alpha =1$, this is simply statistical (aka.~total variation) distance between $F(x_1)$ and $F(x_0)$. We stress that to avoid well-known pitfalls of using statistical distance as a privacy notion, RI conditions on the outcome of the mixture distribution $F(x_b)$. RI captures privacy in ``distinguishability games'' that are often implicitly used in applications of DP.

We show foundational results about RI as well as applications. We exhibit an ``optimal'' RI adversary, which is sometimes efficient. Then we show that RI satisfies composition, post-processing, and group privacy laws. We also show that the Laplace mechanism requires smaller noise magnitude under RI for a fixed $\varepsilon'$ with $\alpha = 1$ vs.~$\alpha = 0$. We also show RI bounds the adversary's advantage in various attack modalities in privacy-preserving machine learning. Finally, we analyze database shuffling under RI for i.i.d.~database entries.
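
As a reference point for the Laplace-mechanism comparison above, here is a minimal sketch of the standard Laplace mechanism for a sensitivity-1 counting query (illustrative only; the RI-specific noise calibration is the paper's contribution and is not reproduced here):

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling: X = -scale * sgn(u) * ln(1 - 2|u|), u ~ U(-1/2, 1/2).
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(data, predicate, epsilon):
    """Counting query (sensitivity 1) with Laplace(1/epsilon) noise."""
    true_count = sum(1 for x in data if predicate(x))
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(0)
print(noisy_count(range(100), lambda x: x % 2 == 0, epsilon=0.5))
```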

14:00-15:15 Session 127D: Formal Methods for Blockchain Protocols (FMBC)
Location: Ullmann 307
14:00
Proofgold: Blockchain for Formal Methods
PRESENTER: Cezary Kaliszyk

ABSTRACT. Proofgold is a peer-to-peer cryptocurrency making use of formal logic. Users can publish theories and then develop a theory by publishing documents with definitions, conjectures, and proofs. The blockchain records the theories and their state of development (e.g., which theorems have been proven and when). Two of the main theories are a form of classical set theory (for formalizing mathematics) and an intuitionistic theory of higher-order abstract syntax (for reasoning about syntax with binders). We give example definitions and theorems published in the Proofgold blockchain to demonstrate how the Proofgold network can be used to support formalization efforts. We have also significantly modified the open source Proofgold Core client software to create a faster, more stable and more efficient client, Proofgold Lava. Two important changes concern the cryptography code and the database code, and we discuss these improvements.

14:30
A Domain Specific Language for Testing Consensus Implementations

ABSTRACT. Achieving consistency in modern large-scale and fault-tolerant distributed systems often relies on intricate consensus protocols. Ensuring the reliability of implementations of such protocols remains a significant challenge because of the enormous number of exceptional conditions that may arise in production. We propose a methodology and a tool called Netrix for testing such implementations that aims to exploit the programmer's knowledge to improve coverage, enables robust bug reproduction, and can be used in regression testing across different versions of an implementation. As a case study and evaluation, we apply our tool to a popular proof-of-stake blockchain protocol, Tendermint, which relies on a Byzantine consensus algorithm, and to a popular benign consensus algorithm, Raft. We were able to identify deviations of the implementation from the protocol specification and verify corrections on an updated implementation.

14:45
Determinism of ledger updates

ABSTRACT. Ensuring deterministic behaviour in distributed blockchain ledger design matters to end users because it allows for locally predictable fees, smart contract evaluation outcomes, and updates to other ledger-tracked data. In this work we begin by defining an abstract interface of ledgers and its update procedure, which gives us the ability to make formal comparisons of different ledger designs across various properties. We use this model as a basis for formalizing and studying several properties colloquially classified as determinism of ledgers.

We identify a stronger and a weaker definition of determinism, providing simple but illustrative examples. We situate both versions of determinism in the context of the theory of changes, and conjecture what constraints on the derivation of ledger update functions are sufficient and necessary for the two definitions. We additionally discuss substates of a ledger state, which we refer to as threads, and outline how particular threads can remain deterministic while the full ledger may not be.

We discuss how these ideas can be applied to realistic ledgers' designs and architectures, and analyze a nuanced example of non-determinism in an existing UTxO ledger with the tools we have developed.
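
To make the abstract-interface idea concrete, here is a minimal deterministic ledger sketch (our own toy illustration; the paper's interface and determinism definitions are considerably richer): the state is a balance map and the update is a fold over transactions.

```python
from functools import reduce

def update(state, tx):
    """Apply one transaction; an invalid (overdraft) tx leaves the state unchanged."""
    balances = dict(state)
    src, dst, amount = tx
    if balances.get(src, 0) >= amount:
        balances[src] = balances.get(src, 0) - amount
        balances[dst] = balances.get(dst, 0) + amount
    return balances

genesis = {"alice": 10}
txs = [("alice", "bob", 4), ("bob", "carol", 1), ("alice", "bob", 100)]
# Determinism (the strong reading): the same genesis and the same transaction
# sequence always yield the same final state.
print(reduce(update, txs, genesis))            # {'alice': 6, 'bob': 3, 'carol': 1}
```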

15:00
Human and machine-readable models of state machines for the Cardano ledger
PRESENTER: Andre Knispel

ABSTRACT. Cardano is a third-generation cryptocurrency developed by IOG, whose nodes consist of a network layer, a consensus layer, and a ledger layer. The ledger tracks and validates financial transactions.

The ledger team at IOG has been successful in using a combination of an abstract specification of the ledger, modeled as a small-step operational semantics and written in LaTeX, pen-and-paper proofs, and property based testing using QuickCheck to support the implementation of this critical component of the system. The specification serves as a design document and reference for the team, and also for other members of the Cardano ecosystem. However, LaTeX provides no scope or type checking of the model, and there is a tendency for the spec to get out of sync with the rapidly changing implementation. To mitigate both of these problems, and improve on what we already have, we are developing a specification in Agda which is both human- and machine-readable. Scope and type checking of this specification will provide higher assurance and easier maintenance than the current approach. Additionally, we derive a reference implementation from this model via meta-programming, which can be used for conformance testing against the implementation. Last but not least, we can perform machine-checked proofs of key properties.

14:00-15:30 Session 127E (FRIDA)
Location: Ullmann 102
14:00
Using Lightweight Formal Methods to Validate a Key-Value Storage Node in Amazon S3

ABSTRACT. This talk reports our experience applying lightweight formal methods to validate the correctness of ShardStore, a new key-value storage node implementation for the Amazon S3 cloud object storage service. By "lightweight formal methods" we mean a pragmatic approach to verifying the correctness of a production storage node that is under ongoing feature development by a full-time engineering team. We do not aim to achieve full formal verification, but instead emphasize automation, usability, and the ability to continually ensure correctness as both software and its specification evolve over time. Our approach decomposes correctness into independent properties, each checked by the most appropriate tool, and develops executable reference models as specifications to be checked against the implementation. Our work has prevented 16 issues from reaching production, including subtle crash consistency and concurrency problems, and has been extended by non-formal-methods experts to check new features and properties as ShardStore has evolved.

14:45
Eliminating Message Counters in Threshold Automata

ABSTRACT. Threshold automata were introduced to give a formal semantics to distributed algorithms in a way that supports automated verification. While transitions in threshold automata are guarded by conditions over the number of globally sent messages, conditions in the pseudocode descriptions of distributed algorithms are usually formulated over the number of locally received messages. In this talk, we present an automated method to close the gap between these two representations. We propose threshold automata with guards over the number of received messages and present abstractions into guards over the number of sent messages, by eliminating the receive message counters. Our approach allows us for the first time to fully automatically verify models of synchronous, asynchronous, and randomized distributed algorithms that are in one-to-one correspondence with their pseudocode.

14:00-15:30 Session 127F (GuttmanFest)
Location: Ullmann 200
14:00
Joshua Guttman: Pioneering Strand Spaces
14:20
Cryptographic Protocol Analysis and Compilation Using CPSA and Roletran
14:40
Principles of Remote Sattestation
15:00
Searching for Selfie in TLS 1.3 with the Cryptographic Protocol Shapes Analyzer
PRESENTER: Prajna Bhandary
14:00-15:30 Session 127G: Isabelle 3 (Isabelle)
Location: Ullmann 101
14:00
Isabelle/VSCode and Electron/Node.js as emerging Isabelle technologies

ABSTRACT. This is a technology preview of the forthcoming release Isabelle2022, which is anticipated for October 2022. It will include a fully integrated Isabelle/VSCode Prover IDE based on a bundled distribution of VSCodium, which is an open-source branch of Microsoft's VSCode project. VSCode is based on Electron, which is a desktop application framework with Chromium browser and Node.js environment. Careful patching and repackaging of VSCodium exposes Electron and Node.js for other Isabelle applications. Thus we gain access to high-end web technologies for Isabelle/PIDE browsing and editing, for example a cross-platform PDF viewer with custom URLs for Isabelle/PIDE commands.

14:30
Towards Accessible Formal Mathematics with ISAC and Isabelle/VSCode
PRESENTER: Bernhard Stöger

ABSTRACT. We revisit the pending problem of accessibility and usability of mathematics for visually impaired individuals. The context is formal mathematics as represented by the proof assistant Isabelle, with the particular Prover IDE front-end Isabelle/VSCode. The broader context of software for mathematics education is provided by ISAC. Leveraging the work that Microsoft and Google have invested into accessibility of VSCode and its underlying Chromium engine, we hope to deliver a properly accessible Mathematics Working Environment (MAWEN) within the Isabelle ecosystem. An important part of this research is to figure out where standard text editing techniques end, and where specific mathematics editing starts, especially calculations and proofs.

15:00
A Linter for Isabelle: Implementation and Evaluation
PRESENTER: Fabian Huch

ABSTRACT. In interactive theorem proving, formalization quality is a key factor for maintainability and re-usability of developments and can also impact proof-checking performance. Commonly, anti-patterns that cause quality issues are known to experienced users. However, in many theorem prover systems, there are no automatic tools to check for their presence and make less experienced users aware of them. We attempt to fill this gap in the Isabelle environment by developing a linter as a publicly available add-on component. The linter offers basic configurability, extensibility, Isabelle/jEdit integration, and a standalone command-line tool. We uncovered 480 potential problems in Isabelle/HOL, 14016 in other formalizations of the Isabelle distribution, and an astonishing 59573 in the AFP. With a specific lint bundle for AFP submissions, we found that submission guidelines were violated in 1595 cases. We set out to alleviate problems in Isabelle/HOL and solved 168 of them so far; we found that high-severity lints corresponded to actual problems most of the time, individual users often made the same mistakes in many places, and that solving those problems retrospectively amounts to a substantial amount of work. In contrast, solving these problems interactively for new developments usually incurs only little overhead, as we found in a quantitative user survey with 22 participants (less than a minute for more than 60% of participants). We also found that a good explanation of problems is key to the users’ ease of solving these problems (correlation coefficient 0.48), and their satisfaction with the end result (correlation coefficient 0.62).

14:00-15:30 Session 127H: Algorithms & Complexity (MC)
Location: Ullmann 302
14:00
Heuristic computation of exact treewidth - some improvements and more experimental evaluations

ABSTRACT. This talk would be a follow-up to my paper at SEA 2022 with essentially the same title. I would report on some developments on my part since the submission of that work to SEA, and would try to make the presentation as informative as possible for potential users of treewidth algorithms. I also hope to get feedback from the audience on which features of treewidth solvers are desirable from the users' point of view.

14:30
Counting Complexity for Projected Reasoning in Abstract Argumentation

ABSTRACT. In this talk, we consider counting and projected model counting of extensions in abstract argumentation for various semantics.

When asking for projected counts we are interested in counting the number of extensions of a given argumentation framework while multiple extensions that are identical when restricted to the projected arguments count as only one projected extension.

We establish classical complexity results and parameterized complexity results when the problems are parameterized by the treewidth of the undirected argumentation graph. In particular, we show complexity dichotomies for counting credulous extensions (#.coNP vs. #.P) as well as for the projected variant, where the complexity is lifted by one level (#.NP vs. #.\Sigma^P_2).

To obtain upper bounds for counting projected extensions, we introduce novel algorithms that exploit small treewidth of the undirected argumentation graph of the input instance by dynamic programming (DP). Our algorithms run in time double or triple exponential in the treewidth depending on the considered semantics.

Finally, we take the exponential time hypothesis (ETH) into account and establish tight lower bounds for bounded-treewidth algorithms for counting extensions and projected extensions.

This talk presents joint work with Johannes Fichte and Markus Hecher that was published at AAAI 2019, together with some recent advances on preferred semantics.

15:00
A Faster Algorithm for Propositional Model Counting Parameterized by Incidence Treewidth

ABSTRACT. The propositional model counting problem (#SAT) is known to be fixed-parameter tractable (FPT) when parameterized by the width k of a given tree decomposition of the incidence graph. The running time of the fastest known FPT algorithm contains the exponential factor 4^k. We improve this factor to 2^k by utilizing fast algorithms for computing the zeta transform and covering product of functions representing partial model counts, thereby achieving the same running time as FPT algorithms that are parameterized by the less general treewidth of the primal graph. Our new algorithm is asymptotically optimal unless the Strong Exponential Time Hypothesis (SETH) fails.
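
The key subroutine behind the improved 2^k factor, the fast zeta transform over the subset lattice, is short enough to sketch (our illustration; the paper applies it to functions representing partial model counts):

```python
# Fast zeta transform: from f, compute g(S) = sum of f(T) over all subsets T of S,
# for every S, in k * 2^k additions instead of enumerating all subset pairs.
def zeta(f, k):
    g = list(f)                          # f is indexed by bitmask over k elements
    for i in range(k):
        for mask in range(1 << k):
            if mask & (1 << i):
                g[mask] += g[mask ^ (1 << i)]
    return g

f = [1, 2, 3, 4]                         # k = 2: f({}), f({0}), f({1}), f({0,1})
print(zeta(f, 2))                        # [1, 3, 4, 10]
```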

14:00-15:30 Session 127I (NSV)
Location: Ullmann 308
14:00
Motion planning with temporal logic constraints and preferences

ABSTRACT. Motion planning algorithms are a vital part of autonomous robots that — simply put — turn tasks and constraints into sequences of motions that the robot shall follow. In its most basic form, motion planning asks how to reach a destination while avoiding obstacles. In this talk, we discuss motion planning allowing for more sophisticated tasks, constraints and preferences. We will discuss how to express these in temporal logics, and integrated with sampling-based motion planning algorithms. We will specifically focus on using Signal Temporal Logic (STL) as a powerful specification language with quantitative semantics and discuss what provable guarantees the planning algorithms can provide. Throughout the talk, we will showcase robotics applications, such as autonomous driving and exploration of unknown environments with UAVs.

15:00
Neural Network Precision Tuning Using Stochastic Arithmetic
PRESENTER: Quentin Ferro

ABSTRACT. Neural networks can be costly in terms of memory and execution time. Reducing their cost has become an objective, especially when integrated in an embedded system with limited resources. A possible solution consists in reducing the precision of their neurons parameters. In this article, we present how to use auto-tuning on neural networks to lower their precision while keeping an accurate output. To do so, we use a floating-point auto-tuning tool on different kinds of neural networks. We show that, to some extent, we can lower the precision of several neural network parameters without compromising the accuracy requirement.
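
The core accept/reject check behind such tuning can be illustrated naively (our sketch, assuming NumPy is available; the paper's stochastic-arithmetic-based tooling estimates accuracy far more rigorously than this single-input comparison):

```python
# Lower the precision of a toy layer's parameters and keep the change only if
# the output error stays within a tolerance.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))            # float64 "trained" weights
x = rng.normal(size=64)
reference = np.tanh(W @ x)               # full-precision output

for dtype in (np.float32, np.float16):
    y = np.tanh((W.astype(dtype) @ x.astype(dtype)).astype(np.float64))
    err = np.max(np.abs(y - reference))
    print(dtype.__name__, "max abs error:", err, "acceptable:", err < 1e-2)
```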

14:00-15:30 Session 127J (PAAR)
Location: Ullmann 205
14:00
Exploring Partial Models with SCL
PRESENTER: Simon Schwarz

ABSTRACT. The family of SCL (Clause Learning from Simple Models) calculi learns clauses with respect to a partial model assumption, similar to CDCL (Conflict Driven Clause Learning). The partial model always consists of ground first-order literals and is built by decisions and propagations. In contrast to propositional logic where propagation chains are limited, in first-order logic they can become infinite. Therefore, the SCL family does not require exhaustive propagation and the size of the partial model is always finitely bounded. Any partial model not leading to a conflict constitutes a model for the respective finitely bounded ground clause set. We show that all potential partial models can be explored as part of the SCL calculus for first-order logic without equality and that any overall model is an extension of a partial model considered.

14:30
On SGGS and Horn Clauses

ABSTRACT. SGGS (Semantically-Guided Goal-Sensitive reasoning) is a refutationally complete theorem-proving method that offers first-order conflict-driven reasoning and is model complete in the limit. This paper investigates the behavior of SGGS on Horn clauses, which are widely used in declarative programming, knowledge representation, and verification. We show that SGGS generates the least Herbrand model of a set of definite clauses, and that SGGS terminates on Horn clauses if and only if hyperresolution does. We report on experiments applying the SGGS prototype prover Koala to Horn problems, with promising performances especially on satisfiable inputs.
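
The least Herbrand model mentioned above is the least fixpoint of the immediate-consequence operator T_P; for ground definite clauses it can be computed in a few lines (our toy illustration, unrelated to SGGS's actual machinery):

```python
clauses = [
    ([], "p(a)"),                   # fact:  p(a).
    (["p(a)"], "q(a)"),             # rule:  q(a) :- p(a).
    (["q(a)", "p(a)"], "r(a)"),     # rule:  r(a) :- q(a), p(a).
    (["s(b)"], "t(b)"),             # rule whose body is never derivable
]

model = set()
while True:                          # iterate T_P up to its least fixpoint
    derived = {head for body, head in clauses if all(b in model for b in body)}
    if derived <= model:
        break
    model |= derived
print(sorted(model))                 # ['p(a)', 'q(a)', 'r(a)']
```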

15:00
Exploring Representation of Horn clauses using GNNs
PRESENTER: Chencheng Liang

ABSTRACT. In recent years, the application of machine learning in program verification, and the embedding of programs to capture semantic information, has been recognized as an important problem by many research groups. Learning program semantics from raw source code is challenging due to the complexity of real-world programming language syntax, and due to the difficulty of reconstructing long-distance relational information that is only implicitly represented in programs via identifiers. Addressing the first point, we consider Constrained Horn Clauses (CHCs) as a standard representation of program verification problems, providing a simple and programming-language-independent syntax. For the second challenge, we explore graph representations of CHCs, and propose a new Relational Hypergraph Neural Network (R-HyGNN) architecture to learn program features.

We introduce two different graph representations of CHCs. One is called constraint graph, and emphasizes syntactic information of CHCs by translating the symbols and their relations in CHCs as typed nodes and binary edges, respectively, and constructing the constraints as abstract syntax trees. The second one is called control- and data-flow hypergraph (CDHG), and emphasizes semantic information of CHCs by constructing the control and data flow through ternary hyperedges.

We then propose a new GNN architecture, R-HyGNN, extending Relational Graph Convolutional Networks (by Schlichtkrull et al.), to handle hypergraphs. To evaluate the ability of R-HyGNN to extract semantic information from programs, we use R-HyGNN to train models on the two graph representations, and on five proxy tasks with increasing difficulty, using benchmarks from CHC-COMP 2021 as training data. The most difficult proxy task requires the model to predict the occurrence of clauses in counter-examples (CEs), which subsumes satisfiability of CHCs. CDHG achieves 90.59% accuracy in this task. Furthermore, R-HyGNN has perfect predictions on one of the graphs consisting of more than 290 clauses. Overall, our experiments indicate that R-HyGNN can capture intricate program features for guiding verification problems.

14:00-15:30 Session 127K: Parallel Theorem Proving & SAT solving (PDAR)
Location: Ullmann 303
14:00
Goéland : A Concurrent Tableau-Based Theorem Prover

ABSTRACT. We describe Goéland, an automated theorem prover for first-order logic that relies on a concurrent search procedure to find tableau proofs, with concurrent processes corresponding to individual branches of the tableau. Since branch closure may require instantiating free variables shared across branches, processes communicate via channels to exchange information about substitutions used for closure. We present the proof-search procedure and its implementation, as well as experimental results obtained on problems from the TPTP library.

14:45
Structural Parameter Based Parallel Algorithms for Partially Weighted MaxSAT

ABSTRACT. The most general incarnation of the maximum satisfiability problem is the partially weighted version, in which we are given a propositional formula in conjunctive normal form with weights assigned to some of the clauses. The goal is to find an assignment that satisfies all unweighted clauses while the sum of the satisfied weighted clauses is maximized. The usual approach to identifying tractable fragments of this problem is via structural parameters, that is, properties of the underlying incidence graph of the input formula.

In this talk I will define well-known structural parameters and briefly review the fact that the partially weighted maximum satisfiability problem is fixed-parameter tractable when parameterized by each of them. I will give a short introduction to the framework of parallel parameterized complexity and discuss how these results transfer to the parallel setting. We will see that ever higher levels of parallelization are achievable for ever tighter structural parameters – revealing a fundamental difference from the sequential setting. In contrast to existing parallel meta-theorems, all presented algorithms are constructive (they output an optimal solution).
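
As a reference semantics for the problem defined above (a brute-force sketch for toy sizes; the talk's algorithms instead exploit structural parameters):

```python
from itertools import product

HARD = [[1, 2], [-1, -2]]                     # unweighted clauses: exactly one of x1, x2
SOFT = [([1], 5), ([2], 3), ([-1, 2], 4)]     # (clause, weight) pairs
N = 2

def sat(clause, model):
    return any(model[abs(l)] == (l > 0) for l in clause)

best = None
for bits in product([False, True], repeat=N):
    model = dict(zip(range(1, N + 1), bits))
    if all(sat(c, model) for c in HARD):      # hard clauses must hold
        score = sum(w for c, w in SOFT if sat(c, model))
        if best is None or score > best[0]:
            best = (score, model)
print(best)                                   # (7, {1: False, 2: True})
```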

14:00-15:00 Session 127L: Research Papers (PERR)
Location: Ullmann 305
14:00
Implementing a Path Based Equivalence Checker for Petri net based Models of Programs

ABSTRACT. User written programs, when transformed by optimizing and parallelizing compilers, can be incorrect, if the compiler is not trusted. So, establishing the validity of these transformations is a crucial and challenging task. For program verification, the PRES+ (Petri net Representation of Embedded Systems) is now well accepted as a model to capture the data and control flow of a program. In this paper, an efficient path based equivalence checking method using a simple PRES+ model (which is easier to generate from a program) for validating several optimizing and parallelizing transformations is proposed. The experimental results demonstrate the efficiency of the method.

14:30
Generating Mutation Tests Using an Equivalence Prover

ABSTRACT. Creating test cases and assessing their quality is a necessary but time-consuming part of software development. Mutation Testing facilitates this process by introducing small syntactic changes into the program and thereby creating mutants. The quality of test cases is then measured by their ability to differentiate between those mutants and the original program. Previous studies have also explored techniques to automatically generate test inputs from program mutants. However, the symbolic approaches to this problem do not handle unbounded loops within the program. In this paper, we present a method using the equivalence prover LLRêve to generate test cases from program mutants. We implemented the method as a pipeline and evaluated it on example programs from the learning platform c4learn as well as programs from the library diet libc. Our results have shown that this method takes up to multiple hours to run on these programs, but the generated test cases are able to detect half of the known errors. As an extension to the method, we also present mutations which introduce an additional parameter into the program. We then show on an example that a finite set of test cases can be sufficient to kill the mutant for all values of the parameter.
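
As a toy illustration of the goal, not of LLRêve's symbolic method: the sketch below brute-forces an input on which an invented mutant diverges from the original program; the equivalence prover replaces this search with a proof-or-counterexample query that also copes with unbounded loops.

    def original(x: int) -> int:
        return 2 * x          # reference implementation

    def mutant(x: int) -> int:
        return x * x          # invented mutation: '*' in place of '+'

    def find_killing_input(f, g, search_range=range(-100, 101)):
        """Return an input on which f and g differ, i.e. a test case
        that 'kills' the mutant, or None if none is found."""
        for x in search_range:
            if f(x) != g(x):
                return x
        return None

    x = find_killing_input(original, mutant)
    assert x is not None
    print(f"killing test: input={x}, expected={original(x)}, mutant={mutant(x)}")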

14:00-15:30 Session 127M: Tools (SMT)
Location: Ullmann 311
14:00
NORMA: a tool for the analysis of Relay-based Railway Interlocking Systems
PRESENTER: Anna Becchi

ABSTRACT. We present Norma, a tool for the modeling and analysis of Relay-based Railway Interlocking Systems (RRIS). Norma is the result of a research project funded by the Italian Railway Network to support the reverse engineering and migration to computer-based technology of legacy RRIS. The frontend fully supports the graphical modeling of Italian RRIS, with a palette of over two hundred basic components, stubs to abstract RRIS subcircuits, and requirements in terms of formal properties. The internal component-based representation is translated into highly optimized Timed nuXmv models, and supports various syntactic and semantic checks based on formal verification, simulation and test-case generation. Norma is experimentally evaluated, demonstrating its practical support for modelers and the effectiveness of the underlying optimizations.

Note: this paper was accepted at TACAS22.

14:20
cvc5: A Versatile and Industrial-Strength SMT Solver
PRESENTER: Andres Noetzli

ABSTRACT. cvc5 is the latest SMT solver in the cooperating validity checker series and builds on the successful code base of CVC4. This paper serves as a comprehensive system description of cvc5's architectural design and highlights the major features and components introduced since CVC4 1.8. We evaluate cvc5's performance on all benchmarks in SMT-LIB and provide a comparison against CVC4 and Z3.

14:40
The VMT-LIB Language and Tools

ABSTRACT. We present VMT-LIB, a language for the representation of verification problems of invariant and linear-time temporal properties on infinite-state symbolic transition systems. VMT-LIB is developed with the goal of facilitating the interoperability and exchange of benchmark problems among different verification tools. The VMT-LIB language is an extension of the standard SMT-LIB language for SMT solvers, from which it inherits the clean semantics and the many available resources. In this paper we describe the syntax and semantics of VMT-LIB, and present a set of open-source tools to work with the language.

15:00
Murxla: A Modular and Highly Extensible API Fuzzer for SMT Solvers
PRESENTER: Mathias Preiner

ABSTRACT. SMT solvers are highly complex pieces of software with performance, robustness, and correctness as key requirements. Complementing traditional testing techniques for these solvers with randomized stress testing has been shown to be quite effective. Recent work has showcased the value of input fuzzing for finding issues, but this approach typically does not comprehensively test a solver’s API. Previous work on model-based API fuzzing was tailored to a single solver and a small subset of SMT-LIB. We present Murxla, a comprehensive, modular, and highly extensible model-based API fuzzer for SMT solvers. Murxla randomly generates valid sequences of solver API calls based on a customizable API model, with full support for the semantics and features of SMT-LIB. It is solver-agnostic but extensible to allow for solver-specific testing and supports option fuzzing, cross-checking with other solvers, translation to SMT-LIBv2, and SMT-LIBv2 input fuzzing. Our evaluation confirms its efficacy in finding issues in multiple state-of-the-art SMT solvers.

Note: This paper is accepted at CAV 2022.
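
A toy analogue of model-based API fuzzing with cross-checking (Murxla targets real SMT solver APIs; the API, the bug and all names here are invented): generate random call sequences that are valid according to a simple API model, run them against two implementations, and report the trace on divergence.

    import random

    class RefStack:                 # reference implementation
        def __init__(self): self.s = []
        def push(self, v): self.s.append(v)
        def pop(self): return self.s.pop() if self.s else None

    class BuggyStack:               # "solver under test" with a planted bug
        def __init__(self): self.s = []
        def push(self, v): self.s.append(v)
        def pop(self): return self.s[-1] if self.s else None  # forgets to remove

    def fuzz(seed, steps=50):
        rng = random.Random(seed)
        a, b, trace = RefStack(), BuggyStack(), []
        for _ in range(steps):
            # The "API model": push needs an argument, pop is always legal.
            if rng.random() < 0.6:
                v = rng.randint(0, 9)
                a.push(v); b.push(v); trace.append(f"push({v})")
            else:
                trace.append("pop()")
                if a.pop() != b.pop():
                    return trace    # divergence found; minimization omitted
        return None

    print(fuzz(seed=1))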

14:00-15:30 Session 127N (SYNT)
Location: Ullmann 201
14:00
Reactive Synthesis of LTL specifications with Rich Theories
PRESENTER: Andoni Rodriguez

ABSTRACT. Reactive synthesis is the problem of deciding whether a temporal LTL specification admits a system that models the specification, where the variables are split into variables controlled by the environment and variables controlled by the system. We address the problem of the reactive synthesis of specifications that use predicates from rich theories—including arithmetical ones.

In this paper, we introduce an approach that transforms the non-Boolean specification into an equi-realizable Boolean specification that can be discharged to off-the-shelf Boolean synthesis tools. Our method consists of (1) substituting the theory literals by Boolean variables, and (2) computing an additional Boolean formula that captures the dependencies between the new variables.

We require that the theory admits a decidable ∃∗∀∗ fragment, which covers many theories supported by modern SMT solvers. The main expressive limitation is that variables from the theories cannot be compared directly at different instants. We evaluated a prototype, implemented in Python using the Z3 solver, against a collection of arithmetical specifications.
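
A sketch of step (2) using the Z3 Python API, with invented literals: implications that hold in the theory among the abstracted literals become Boolean constraints on the fresh variables.

    from z3 import Int, Implies, Not, Solver, unsat

    x = Int("x")  # a theory variable; names and literals are invented
    literals = {"b1": x > 2, "b2": x > 0, "b3": x < 1}

    def valid(phi):
        """phi is valid iff its negation is unsatisfiable."""
        s = Solver()
        s.add(Not(phi))
        return s.check() == unsat

    # Valid implications among the abstracted literals become Boolean
    # constraints (here: b1 -> b2) conjoined to the Booleanized spec.
    deps = [(n1, n2)
            for n1, l1 in literals.items()
            for n2, l2 in literals.items()
            if n1 != n2 and valid(Implies(l1, l2))]
    print(deps)  # [('b1', 'b2')]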

14:20
Regex+: Synthesizing Regular Expressions from Positive Examples

ABSTRACT. Regular expressions are a popular target for programming by example (PBE) systems, which seek to learn regexes from user-provided examples. Synthesizing from only positive examples remains an unsolved challenge, as the unrestricted search space makes it difficult to avoid over- and under-generalizing. Prior work has approached this in two ways: search-based techniques which require extra input, such as user feedback and/or a natural language description, and neural techniques. The former puts an extra burden on the user, while the latter requires large representative training data sets which are almost nonexistent for this domain. To tackle this challenge we present Regex+, a search-based synthesizer that infers regexes from just a few positive examples. Regex+ avoids over/under-generalization by using minimum description length (MDL) learning, adapted to version space algebras in order to efficiently search for an optimal regex according to a compositional MDL ranking function. Our evaluation shows that Regex+ more than triples the accuracy of existing neural and search-based regex synthesizers on benchmarks with only positive examples.
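
A crude sketch of the MDL ranking (the actual synthesizer searches version space algebras; the candidates, alphabet and cost model here are invented): score each regex by its own length plus the bits needed to single out each example among the same-length strings it accepts, so that both over-general and over-fitted regexes score badly.

    import re, math
    from itertools import product

    examples = ["ab1", "ab2", "ab9"]
    alphabet = sorted(set("".join(examples)) | set("cd37"))
    candidates = [r"...", r"ab\d", r"ab(1|2|9)", r"ab1|ab2|ab9"]

    def data_cost(pattern, ex):
        """Bits to single out `ex` among all same-length strings the
        regex accepts (a crude stand-in for an MDL data code)."""
        rx = re.compile(pattern)
        matches = sum(1 for s in product(alphabet, repeat=len(ex))
                      if rx.fullmatch("".join(s)))
        return math.log2(matches)

    def mdl(pattern):
        model_bits = 8 * len(pattern)   # crude model code: 8 bits/char
        data_bits = sum(data_cost(pattern, e) for e in examples)
        return model_bits + data_bits

    best = min(candidates, key=mdl)
    print(best, {p: round(mdl(p), 1) for p in candidates})

Running it selects ab\d: the wildcard regex pays in data bits, while the enumerating regexes pay in model bits.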

14:40
Inferring Environment Assumptions in Model Refinement

ABSTRACT. Model Refinement is a novel approach to reactive program synthesis that iteratively refines an over-approximating model of a desired system behavior by eliminating undesired behaviors. In contrast to many current automata-based approaches to reactive synthesis, it does not require a finite state space or user-supplied templates. Instead, it symbolically computes the required invariant by solving a system of definite constraints. The original work on model refinement, however, assumed that both the assumptions on the environment (in an Assume-Guarantee setting) and the constraints on the system variables necessary to guarantee the required behavior were fixed and known. Sometimes, though, the designer of a system has some intended behavior and wishes to know what the minimal assumptions are on the environment under which the system can guarantee the required behavior; or to know what the constraints are on the system variables under known environment assumptions. In other words, we wish to solve a parametric model refinement problem. Our contribution in this paper is to show how such a problem can be solved when the constraints are assumed to be an interval of the form m…n.

15:00
A Framework for Transforming Specifications in Reinforcement Learning

ABSTRACT. Reactive synthesis algorithms allow automatic construction of policies to control an environment modeled as a Markov Decision Process (MDP) that are optimal with respect to high-level temporal logic specifications. However, they assume that the MDP model is known a priori. Reinforcement Learning (RL) algorithms, in contrast, are designed to learn an optimal policy when the transition probabilities of the MDP are unknown, but require the user to associate local rewards with transitions. The appeal of high-level temporal logic specifications has motivated research to develop RL algorithms for synthesis of policies from specifications. To understand the techniques, and nuanced variations in their theoretical guarantees, in the growing body of resulting literature, we develop a formal framework for defining transformations among RL tasks with different forms of objectives. We define the notion of sampling-based reduction to transform a given MDP into another one which can be simulated even when the transition probabilities of the original MDP are unknown. We formalize the notions of preservation of optimal policies, convergence, and robustness of such reductions. We then use our framework to restate known results, establish new results to fill in some gaps, and identify open problems. In particular, we show that certain kinds of reductions from LTL specifications to reward-based ones do not exist, and prove the non-existence of RL algorithms with PAC-MDP guarantees for safety specifications.

15:20
SYNTCOMP Results

14:00-15:30 Session 127P: Proofs in Education (High-School Level) (ThEdu)
Location: Ullmann 306
14:00
Invited Talk: Computer-assisted proofs and automated methods in Mathematics Education

ABSTRACT. After a short survey of the developments of CAS, DGS and other useful technologies, such as the internet, we show their implications for Mathematics Education, within the broader frame of STEAM Education. In particular, we discuss the transformation of Mathematics Education into an exploration-discovery-conjecture-proof scheme. This scheme fits well into the so-called 4 C's of 21st Century Education.

15:00
A Rule-Based Theorem Prover: An Introduction to Proofs at 7th Year
PRESENTER: Pedro Quaresma

ABSTRACT. The introduction of automated deduction systems in secondary schools faces several bottlenecks. Apart from problems related to the curricula and the teachers, the dissonance between the outcomes of Geometry Automated Theorem Provers and the normal practice of conjecturing and proving in schools is a major barrier to a wider use of such tools in an educational environment. Since the early implementations of Geometry Automated Theorem Provers as applications of Artificial Intelligence methods, synthetic provers based on inference rules and forward-chaining reasoning have been considered better suited for educational purposes. Choosing an appropriate set of rules and an automated method that can use those rules is a major challenge. We discuss one such rule set and its implementation using the geometry deductive databases method. The approach is tested on a set of geometric conjectures suitable as goals for a 7th-year class.
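
A minimal forward-chaining engine over two sound geometric rules (transitivity of parallelism, and transport of perpendicularity along parallels); the deductive databases method used in the talk is considerably richer, and the rule set and facts below are invented.

    # facts: ("para", a, b) means line a is parallel to line b;
    # ("perp", a, b) means a is perpendicular to b. Both are symmetric.
    facts = {("para", "r", "s"), ("para", "s", "t"), ("perp", "r", "u")}

    def step(facts):
        new = set()
        for (p, a, b) in list(facts):
            new.add((p, b, a))                      # symmetry
            for (q, c, d) in list(facts):
                if b == c and a != d:
                    if p == q == "para":
                        new.add(("para", a, d))     # para transitivity
                    if (p, q) == ("perp", "para"):
                        new.add(("perp", a, d))     # perp transported
        return new - facts

    # Naive fixpoint: apply the rules until nothing new is derivable.
    while (delta := step(facts)):
        facts |= delta

    print(("para", "r", "t") in facts, ("perp", "u", "t") in facts)  # True True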

14:00-14:30 Session 127Q (WOLVERINE)

Round-table discussions -- part II

Location: Ullmann 202
14:00-15:30 Session 127R (WST)
Location: Ullmann 309
14:00
A Calculus for Modular Non-Termination Proofs by Loop Acceleration
PRESENTER: Carsten Fuhs

ABSTRACT. Recently, a calculus to combine various techniques for loop acceleration in a modular way has been introduced [Frohn, TACAS 2020]. We show how to adapt this calculus for proving non-termination. An empirical evaluation demonstrates the applicability of our approach.
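
A tiny sketch of the underlying idea, using the Z3 Python API on an invented loop: acceleration gives a closed form for the k-th iterate, and non-termination from a state follows if the loop guard provably holds for every k >= 0.

    from z3 import Int, ForAll, Implies, Solver, Not, unsat

    x0, k = Int("x0"), Int("k")

    # Loop (invented):  while x > 0: x := x + 1
    # Acceleration yields the closed form  x_k = x0 + k.
    guard_at_k = x0 + k > 0

    # Certificate: from any state with x0 > 0 the guard holds forever.
    claim = ForAll(k, Implies(k >= 0, guard_at_k))

    s = Solver()
    s.add(x0 > 0, Not(claim))
    print("non-termination certified:", s.check() == unsat)  # True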

14:30
Deciding Termination of Uniform Loops with Polynomial Parameterized Complexity
PRESENTER: Jürgen Giesl

ABSTRACT. In previous work we showed that for so-called triangular weakly non-linear loops over rings S like Z, Q, or R, the question of termination can be reduced to the existential fragment of the first-order theory of S. For loops over R, our reduction implies decidability of termination. For loops over Z and Q, it proves semi-decidability of non-termination. In this paper, we show that there is an important class of linear loops where our decision procedure results in an efficient procedure for termination analysis, i.e., where the parameterized complexity of deciding termination is polynomial.

15:00
Improved Automatic Complexity Analysis of Integer Programs
PRESENTER: Nils Lommen

ABSTRACT. In former work, we developed an approach for automatic complexity analysis of integer programs, based on an alternating modular inference of upper runtime and size bounds for program parts. In this paper, we show how recent techniques to improve automated termination analysis of integer programs (like the generation of multiphase-linear ranking functions and control-flow refinement) can be integrated into our approach for the inference of runtime bounds. Our approach is implemented in the complexity analysis tool KoAT.

14:00-15:20 Session 127S (iPRA)
Location: Ullmann 203
14:00
Interpolants and Transformational Proof Systems

ABSTRACT. Proof-based interpolation can be thought of as a proof transformation that localizes a proof by introducing a cut or lemma. The hope is that such a lemma will be generally useful, for example in synthesizing an inductive proof. Usually, we think of interpolants as a service provided by an automated prover to some higher-level reasoner such as a program verifier. In this talk, however, we will consider interpolation as a proof transformation to be applied during proof search. To motivate this idea, we will first consider CDCL as a transformational proof system, with conflict clauses generated by an interpolating transformation rule. Then we move from ground to quantified clauses. By adding one proof search rule and one interpolating transformation rule, we obtain a stratified CHC solver. Another transformation allows us to obtain more general conflict clauses using interpolation. The proof transformation view allows us to tightly integrate higher-level proof strategies with CDCL. This presents engineering challenges, but has the potential to produce a class of efficient solvers that can exploit the structure of problem instances.

15:00
Interpolation and SAT-Based Model Checking Revisited: Adoption to Software Verification
PRESENTER: Philipp Wendler

ABSTRACT. Interpolation-based model checking (McMillan, 2003) is a formal-verification algorithm, which was originally devised to verify safety properties of finite-state transition systems. The algorithm is state-of-the-art in hardware model checking. It derives interpolants from unsatisfiable BMC queries, and collects them to construct an overapproximation of the set of reachable states. Unlike other formal-verification algorithms, such as k-induction or PDR, which have been extended to handle infinite-state systems and investigated for program analysis, McMillan’s interpolation-based model checking algorithm from 2003 has received little attention in the software-verification community. This paper closes this significant, 19-year-old gap in knowledge by adopting the algorithm to software verification. We implemented it in the framework CPAchecker, and evaluated the implementation against other state-of-the-art software-verification techniques over the largest publicly available benchmark suite of C safety-verification tasks. The evaluation demonstrates that interpolation-based model checking is competitive among other algorithms in terms of both the number of solved verification tasks and the run-time efficiency. Our results might have important implications for software verification, because researchers and developers now have a richer set of approaches to choose from.

15:30-16:00Coffee Break
16:00-17:30 Session 131A (ARQNL)
Location: Ullmann 104
16:00
Advancing Automated Theorem Proving for the Modal Logics D and S5

ABSTRACT. Prefixes and an additional prefix unification can be used when performing proof search in some popular non-classical logics. The paper presents two techniques that optimize this approach for the first-order modal logics D and S5. The paper describes implementations of both techniques and presents a practical evaluation that shows the resulting performance improvement.

16:30
Automated verification of deontic correspondences in Isabelle/HOL - First results
PRESENTER: Xavier Parent

ABSTRACT. We report our first results regarding the automated verification of deontic correspondences (and related matters) in Isabelle/HOL, analogous to what has been achieved for the modal logic cube.

17:00
Solving QMLTP Problems by Translation to Higher-order Logic
PRESENTER: Geoff Sutcliffe

ABSTRACT. This paper describes an evaluation of ATP systems on the QMLTP library of first-order modal logic problems. Principally, the problems are translated to higher-order logic in the TPTP languages using an embedding approach, and solved using higher-order logic ATP systems. Additionally, the results from native modal logic ATP systems are considered, and compared with those from the embedding approach. The conclusions are that (i) the embedding process is reliable and successful, (ii) the choice of backend ATP system can significantly impact the performance of the embedding approach, (iii) native modal logic ATP systems outperform the embedding approach over the QMLTP problems, and (iv) the embedding approach can cope with a wider range of modal logics than the native modal systems considered.

16:00-17:30 Session 131B: Industrial Use of Formal Methods (DSV)
Location: Ullmann 300
16:00
Formal Verification of Ethereum Smart Contracts: SMT-Based Approaches and Challenges

ABSTRACT. Ethereum Smart Contracts have become a widely adopted technology to build Decentralized Financial Applications (DeFis) and as such, they already hold 200 billion USD. Consequently, bugs in smart contracts can be exploited by malicious users and can lead to losses at the scale of millions or even billions of USD. To prevent such situations from happening, there has been a growing interest in developing scalable formal verification techniques for this domain. In this talk, we will specifically focus on the architecture of the Certora Verification Tool, i.e. an industrial-standard automated prover technology to verify smart contracts. Besides a brief overview of the overall tool architecture, we will focus mainly on the SMT-based subroutines, including features such as a domain-specific CEGAR methodology, a distributed solver-portfolio approach, and techniques to share and exploit information between individual SMT solvers.

16:45
Achieving Verified Cloud Authorization

ABSTRACT. Amazon Web Services (AWS) authorizes over 43 trillion requests per day [1]. AWS has a custom language for specifying access control policies and a corresponding authorization engine. For each request, the authorization engine determines whether access should be granted or denied based on the relevant policies. In this talk, we describe our experience verifying the authorization engine in the context of a large, actively developed code base where many developers are not familiar with formal methods. We use Dafny, a verification-aware programming language, to specify authorization at a high level, implement it efficiently, and prove properties of the algorithm.

Authorization executes on a JVM, so we decided to compile the Dafny implementation to Java. To earn trust with developers, we insisted at the start that any generated code should be reviewable by anyone familiar with Java.  Dafny's built-in Java compiler does not generate readable code, so we wrote a custom compiler that generates idiomatic Java from our Dafny implementation. The Java code is reviewed in the same way as every other piece of code at Amazon.

Beyond human code review of the generated code, we use a multi-layered approach to ensure the correctness of the custom compiler: We verify compiler passes in Dafny, but also use techniques such as standard software testing and fuzzing.  Additionally, our compiler generates program annotations which let us leverage the capabilities of well-established software analysis tools such as CheckerFramework, to establish properties of the generated code such as null-correctness.

[1] https://aws.amazon.com/blogs/aws/happy-10th-birthday-aws-identity-and-access-management/

16:00-16:45 Session 131C: Formal Methods for 2nd Layers/off-chain Protocols (FMBC)
Location: Ullmann 307
16:00
Multi: a Formal Playground for Smart Multi-contract interaction
PRESENTER: Martin Ceresa

ABSTRACT. Blockchains are maintained by a network of participants, miner nodes, that run algorithms designed to collectively maintain a distributed machine tolerant to Byzantine attacks. From the point of view of users, blockchains provide the illusion of centralized computers that perform trustable, verifiable computations, where all computations are deterministic and the results cannot be manipulated or undone. Every blockchain is equipped with a crypto-currency. Programs running on blockchains are called smart contracts and are written in special-purpose programming languages with deterministic semantics. Each transaction begins with an invocation from an external user to a smart contract. Smart contracts have local storage and can call other contracts, and, more importantly, they store, send and receive cryptocurrency. Once installed in a blockchain, the code of a smart contract cannot be modified. Therefore, it is very important to guarantee that contracts are correct before deployment. However, the resulting ecosystem makes it very difficult to reason about program correctness, since smart contracts can be executed by malicious users, or malicious smart contracts can be designed to exploit other contracts that call them. Many attacks and bugs are caused by unexpected interactions between multiple contracts: the attacked contract and unknown code that performs the exploit. Moreover, there is very aggressive competition between different blockchains to expand their user base. Ideas are implemented fast, and blockchains compete to offer and adopt new features quickly. In this paper, we propose a formal playground that allows reasoning about multi-contract interactions and is extensible to incorporate new features, study their behaviour, and ultimately prove properties before features are incorporated into the real blockchain. We implement a model of computation that models the execution platform, abstracts the internal code of each individual contract, and focuses on contract interactions. Even though our Coq implementation is still a work in progress, we show how many features, existing or proposed, can be used to reason about multi-contract interactions.

16:30
Automating Security Analysis of Off-Chain Protocols
PRESENTER: Sophie Rain

ABSTRACT. Game-theoretic approaches provide new ways to model and formally prove security properties of off-chain protocols. For complex protocols, carrying out such formal proofs is a cumbersome and error-prone task. We describe our ongoing efforts for automating the security analysis of off-chain protocols. We encode the game-theoretic protocol model, together with its security properties, as universally quantified formulas, and use SMT solving to enforce these properties.

16:00-17:30 Session 131E: Isabelle 4 (Isabelle)
Location: Ullmann 101
16:00
Gale-Shapley Verified

ABSTRACT. This paper presents a detailed verification of the Gale-Shapley algorithm for stable matching (or marriage). The verification proceeds by stepwise refinement. The initial steps are on the level of imperative programs, ending in a linear time algorithm. An executable functional program is obtained in a last step. The emphasis is on the stepwise refinement and the required invariants.
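
For reference, the algorithm under verification in executable form; this unverified Python sketch only fixes the algorithm being discussed, whereas the paper's refinement chain ends in a verified functional program.

    def gale_shapley(men_prefs, women_prefs):
        """Men propose; each woman keeps her best proposer so far.
        Returns a stable matching as a dict woman -> man."""
        # rank[w][m] = position of m in w's preference list (lower = better)
        rank = {w: {m: i for i, m in enumerate(ps)}
                for w, ps in women_prefs.items()}
        free = list(men_prefs)             # men not yet matched
        next_idx = {m: 0 for m in men_prefs}
        match = {}                         # woman -> man
        while free:
            m = free.pop()
            w = men_prefs[m][next_idx[m]]  # m's best not-yet-proposed woman
            next_idx[m] += 1
            if w not in match:
                match[w] = m
            elif rank[w][m] < rank[w][match[w]]:
                free.append(match[w])      # w trades up; old partner freed
                match[w] = m
            else:
                free.append(m)             # w rejects m; he stays free
        return match

    men = {"A": ["x", "y"], "B": ["y", "x"]}
    women = {"x": ["B", "A"], "y": ["A", "B"]}
    print(gale_shapley(men, women))  # {'y': 'B', 'x': 'A'}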

16:30
From P != NP to monotone circuits of super-polynomial size

ABSTRACT. We report on our work to develop an Isabelle formalization of a proof by Lev Gordeev. That proof aims at showing that the NP-complete problem CLIQUE is not contained in P, since any Boolean circuit that solves CLIQUE will have super-polynomial size.

Initially, minor flaws were identified that could quickly be repaired by mild adjustments of definitions and statements. However, there were also more serious problems, for which we contacted Gordeev several times; he then proposed more substantial changes to the definitions and statements. In every iteration, Isabelle quickly pointed us to the proof steps that needed adjustment, without requiring a tedious manual recheck of all proofs.

Although the overall proof is still in a broken state, the problems are restricted to the parts that handle negations in circuits. Consequently, the super-polynomial lower bound is valid for monotone circuits: if any such circuit solves CLIQUE for graphs with $m$ vertices, then the size of the circuit is at least $\sqrt[7]{m}^{\sqrt[8]{m}}$ for sufficiently large $m$.

17:00
Automating Kantian Ethics in Isabelle: A Case Study

ABSTRACT. As we grant artificial intelligence increasing power and independence in contexts like healthcare, policing, and driving, AI faces moral dilemmas but lacks the tools to solve them. Warnings from regulators, philosophers, and computer scientists about the dangers of unethical artificial intelligence have spurred interest in automated ethics—i.e., the development of machines that can perform ethical reasoning. However, previous work in automated ethics rarely engages with existing philosophical literature. Philosophically sophisticated ethical theories are necessary for nuanced and reliable judgements, but faithfully translating these complex ethical theories from natural language to the rigid syntax of a computer program poses technical and philosophical challenges. In this paper, I present an implementation of automated Kantian ethics in Isabelle that is faithful to the Kantian philosophical tradition. I formalize Kant's categorical imperative in Carmo and Jones's Dyadic Deontic Logic, implement this formalization in Isabelle, and develop a testing framework to evaluate how well my implementation coheres with expected properties of Kantian ethics, as established in the literature. I present philosophical insights that I derived using this implementation of automated ethics to motivate computational ethics, or the use of automated tools like Isabelle to spark philosophical progress. Finally, I document my experience implementing automated ethics in Isabelle, despite my lack of prior experience using interactive theorem provers. This work demonstrates that Isabelle can be used to implement sophisticated ethical reasoning.

16:00-17:30 Session 131F: Sampling and First Order (MC)
Location: Ullmann 302
16:00
Recursive Solutions to First-Order Model Counting
PRESENTER: Paulius Dilkas

ABSTRACT. First-order model counting (FOMC) is a computational problem that asks to count the models of a sentence in first-order logic. Despite being around for more than a decade, practical FOMC algorithms are still unable to compute functions as simple as a factorial. We argue that the capabilities of FOMC algorithms are severely limited by their inability to express arbitrary recursive computations. To enable arbitrary recursion, we relax the restrictions that typically accompany domain recursion and generalise circuits used to express a solution to an FOMC problem to graphs that may contain cycles. To this end, we enhance the most well-established (weighted) FOMC algorithm ForcLift with new compilation rules and an algorithm to check whether a recursive call is feasible. These improvements allow us to find efficient solutions to counting fundamental structures such as injections and bijections.

16:30
Scalable Uniform Sampling via Efficient Knowledge Compilation
PRESENTER: Yong Lai

ABSTRACT. Knowledge compilation is concerned with compiling representation languages into target languages that support a wide range of tractable operations arising from diverse areas of computer science. Tractable target compilation languages are usually obtained by restrictions on the internal nodes ($\land$ or $\lor$) of the NNF. Recently, a new representation language, CCDD, which introduces new restrictions on conjunction nodes to capture equivalent literals, was proposed. We show that CCDD supports uniform sampling in polytime. We present algorithms and a compiler to compile propositional formulas expressed in CNF into CCDD. Experiments over a large set of benchmarks show that our compilation times are better, with smaller representations, than state-of-the-art Decision-DNNF, SDD and OBDD[AND] compilers. We apply our techniques to uniform sampling and develop a uniform sampler for CNF. Our empirical evaluation demonstrates that our uniform sampler can solve 780 instances while the prior state of the art solved only 648, a significant improvement of 132 instances.

17:00
Designing Samplers is Easy: The Boon of Testers
PRESENTER: Priyanka Golia

ABSTRACT. Given a formula F, the problem of uniform sampling seeks to sample solutions of F uniformly at random. Uniform sampling is a fundamental problem with a wide variety of applications. The computational intractability of uniform sampling has led to the development of several samplers that are heuristic and are not accompanied by a theoretical analysis of their distribution. Recently, Chakraborty and Meel (2019) designed the first scalable sampling tester, Barbarik, based on a grey-box sampling technique, for testing whether the distribution from which a given sampler draws is close to uniform or far from it. While the theoretical analysis of Barbarik provides only unconditional soundness guarantees, the empirical evaluation of Barbarik did show its success in determining that some of the off-the-shelf samplers were far from uniform.

The availability of Barbarik has the potential to spur the development of samplers and testing techniques such that developers can design sampling methods that are accepted by Barbarik even if these samplers are not amenable to a detailed mathematical analysis. In this paper, we present a realization of this promise. Building on the flexibility offered by CryptoMiniSat, we design a sampler, CMSGen, that hits a sweet spot between quality of distribution and runtime performance. In particular, CMSGen achieves significant runtime improvements over existing samplers. We conduct two case studies and demonstrate that using CMSGen leads to significant runtime improvements in the context of combinatorial testing and functional synthesis.
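
A naive illustration of what a uniformity tester checks (Barbarik itself uses a far more sample-efficient grey-box technique; the solution set, sampler bias and tolerance here are invented): draw samples from the sampler under test and compare empirical frequencies against the uniform distribution.

    import random
    from collections import Counter

    solutions = ["00", "01", "10"]   # toy solution set of some formula F

    def biased_sampler():
        # Stand-in for an off-the-shelf heuristic sampler under test.
        return random.choices(solutions, weights=[0.5, 0.3, 0.2])[0]

    N = 100_000
    freq = Counter(biased_sampler() for _ in range(N))

    # L1 distance between empirical and uniform distribution; a tester
    # accepts if this falls below a tolerance (the 0.1 is invented).
    l1 = sum(abs(freq[s] / N - 1 / len(solutions)) for s in solutions)
    print("L1 distance:", round(l1, 3),
          "-> reject" if l1 > 0.1 else "-> accept")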

16:00-17:00 Session 131G (NSV)
Location: Ullmann 308
16:00
Adversarial Testing and Repair for Neural Network based Control Systems using Optimization

ABSTRACT. In this presentation, we first introduce an adversarial test generation (falsification) framework for control systems with Neural Networks (NN) in the loop based on optimal control theory. We experimentally demonstrate that our framework outperforms black-box system testing methods. Then, we present a new approach for the NN repair problem. Given a set of adversarial samples, the NN repair problem seeks to modify the weights of the NN so that the adversarial samples are removed. We formulate and solve the NN repair problem as a Mixed Integer Quadratic Program (MIQP) that adjusts the weights of a single layer subject to the given safety constraints. We demonstrate the application of our framework on control problems.

16:00-17:30 Session 131H (PAAR)
Location: Ullmann 205
16:00
Optimal Strategy Schedules for Everyone

ABSTRACT. Parametrization of a theorem prover is critical for solving a problem. A specific parametrization is called a strategy, and the best strategy usually differs from problem to problem. Strategy scheduling tries multiple strategies within a time limit. veriT-schegen is a toolbox for working with strategy schedules. At its core is a simple tool that uses integer programming to generate optimal schedules automatically. Another tool translates the schedules into shell scripts. Hence, the toolbox can be used with any theorem prover. The generated schedules solve more problems than schedules generated in a greedy manner. While generating optimal schedules is NP-hard, the generation time is short in practice.
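
The core optimization can be sketched as a small integer program (a simplified model with invented data, written with the PuLP library rather than the toolbox's own encoding): choose at most one time slice per strategy so that the slices fit the overall budget and the number of covered problems is maximal.

    import pulp

    # Invented data: solve_times[s][p] = seconds strategy s needs for
    # problem p (absent = not solved within any limit).
    solve_times = {"s1": {"p1": 2, "p2": 9}, "s2": {"p2": 3, "p3": 4}}
    problems = ["p1", "p2", "p3"]
    slices, budget = [2, 4, 9], 10

    prob = pulp.LpProblem("schedule", pulp.LpMaximize)
    x = {(s, t): pulp.LpVariable(f"x_{s}_{t}", cat="Binary")
         for s in solve_times for t in slices}
    y = {p: pulp.LpVariable(f"y_{p}", cat="Binary") for p in problems}

    prob += pulp.lpSum(y.values())                       # solved problems
    prob += pulp.lpSum(t * x[s, t] for (s, t) in x) <= budget
    for s in solve_times:                                # one slice each
        prob += pulp.lpSum(x[s, t] for t in slices) <= 1
    for p in problems:                                   # p needs a cover
        prob += y[p] <= pulp.lpSum(
            x[s, t] for (s, t) in x
            if solve_times[s].get(p) is not None and solve_times[s][p] <= t)

    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    print([(s, t) for (s, t), v in x.items() if v.value() == 1])
    # [('s1', 2), ('s2', 4)]: all three problems solved within 10 seconds

This is essentially a budgeted set-cover formulation, which is why optimal schedule generation is NP-hard while remaining fast in practice at realistic sizes.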

16:30
The Vampire Approach to Induction
PRESENTER: Marton Hajdu

ABSTRACT. We discuss practical aspects of automating inductive reasoning in the first-order superposition prover Vampire. We explore solutions for reasoning over inductively defined datatypes; generating, storing and applying induction schema instances; and directly integrating inductive reasoning into the saturation reasoning loop of Vampire. Our techniques significantly improve the performance of Vampire despite the inherent difficulty of automated induction. We expect our exposition to be useful when implementing induction in saturation-style provers, and to stimulate further discussion and advances in the area.

17:00
Reuse of Introduced Symbols in Automatic Theorem Provers
PRESENTER: Michael Rawson

ABSTRACT. Automatic theorem provers may introduce fresh function or predicate symbols for various reasons. Sometimes, such symbols can be reused. We describe a simple form of symbol reuse in the first-order system Vampire, investigate its practical effect, and propose future schemes for more aggressive reuse.

16:00-17:30 Session 131I: Solver diversification & Discussion (PDAR)
Location: Ullmann 303
16:00
Beyond Portfolios of Stable Solvers

ABSTRACT. Hard combinatorial problems such as propositional satisfiability are ubiquitous. The holy grail is a solution method that works well on all instances of the problem. However, new approaches emerge regularly, some of which complement existing solution methods in that they are faster on some instances but not on many others. The question arises as to how well portfolios can exploit the complementarity of solution approaches, and when one can speak of complementary approaches at all. To answer these and related questions, we have created an extensive data collection, the GBD Benchmark Database. We have used GBD to manage and distribute benchmarks and to organize and evaluate SAT competitions; in addition, GBD is presently used in various data science projects related to benchmarking and portfolios of SAT solvers. In my talk, I will briefly present the current state of GBD and summarize some of our most recent results related to solver portfolio analysis. I will encourage the use of benchmark meta-data in the evaluation of SAT experiments and explain why the pursuit of good portfolios and the pursuit of stable solvers are not necessarily opposing goals.

16:00-17:00 Session 131J: Invited Talk (PERR)

Andrzej Murawski (University of Oxford)

Location: Ullmann 305
16:00
Compositional relational reasoning via operational game semantics

ABSTRACT. We use operational game models as a guide to develop relational techniques for establishing contextual equivalences with respect to contexts drawn from a hierarchy of four call-by-value higher-order languages, with either general or ground-type references and with either call/cc or no control operator. In game semantics, the differences between the contexts can be captured by the absence or presence of the O-visibility and O-bracketing conditions.

The proposed technique, which we call Kripke Normal-Form Bisimulations, combines insights from normal-form bisimulation and Kripke-style logical relations with game semantics. In particular, the role of the heap is abstracted away using Kripke-style world transition systems. The differences between the four kinds of contexts manifest themselves through simple local conditions that can be shown to correspond to O-visibility and O-bracketing, where applicable.

The technique is sound and complete by virtue of correspondence with operational game semantics. Moreover, the framework developed in the paper sheds new light on other related developments, such as backtracking and private transitions in Kripke logical relations, which can be related to specific phenomena in game models.

This is joint work with Guilhem Jaber (Universite de Nantes).

16:00-17:00 Session 131L (SYNT)
Location: Ullmann 201
16:00
SyGuS-IF + SemGuS Results
16:10
Future Work and Open Challenges Panel – Authors+Chairs
16:00-17:00 Session 131N (WST)

Business Meeting (WST 2022)

Location: Ullmann 309
16:00-17:20 Session 131P (iPRA)
Location: Ullmann 203
16:00
Application of Interpolation in Networks

ABSTRACT. This lecture is about networks consisting of a directed graph in which each vertex is equipped with a language with the Craig interpolation property, along with a knowledge base (a set of sentences) in that language. The knowledge bases grow over time according to some set of rules, including the following: for any edge between two vertices, any sentence in the first knowledge base that is in the common language can be added to the second knowledge base (intuitively, the first vertex sends a message to the second). We survey a variety of results and problems that arise in this framework. A central question is: under which conditions will a sentence eventually become provable from the knowledge base at a given vertex?

17:00
Interpolants and Interference

ABSTRACT. Interference-based proof systems are necessary for proof generation in inprocessing SAT solvers. Unfortunately, their structure and semantic invariants are incompatible with standard recursive interpolant systems. Recent research on interference connects it with inference in more expressive logics. This circumvents some of the roadblocks in the quest for interpolant extraction from interference-based proofs, but also raises questions about the applicability of infeasibility results in this setting.

18:00-19:30Workshop Dinner (at the Technion, Taub Terrace Floor 2) - Paid event