FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
PROGRAM FOR MONDAY, JULY 9TH

09:00-10:30 Session 46: FLoC Plenary Lecture: Peter O'Hearn (FLoC)
Location: Maths LT1
09:00
Continuous Reasoning: Scaling the Impact of Formal Methods

ABSTRACT. Formal reasoning about programs is one of the oldest and most fundamental research directions in computer science. It has also been one of the most elusive. There has been a tremendous amount of valuable research in formal methods, but rarely have formal reasoning techniques been deployed as part of the development process of large industrial codebases.

This talk describes work in continuous reasoning, where formal reasoning about a (changing) codebase is done in a fashion which mirrors the iterative, continuous model of software development that is increasingly practiced in industry. We suggest that advances in continuous reasoning will allow formal reasoning to scale to more programs, and more programmers. We describe our experience using continuous reasoning with large, rapidly changing codebases at Facebook, and we describe open problems and directions for research for the scientific community.

A paper with the same title, accompanying this talk, appears in the LICS'18 proceedings.

10:30-11:00 Coffee Break
11:00-12:30 Session 47A: Security protocols I (CSF)
Location: Maths LT2
11:00
An extensive formal analysis of multi-factor authentication protocols

ABSTRACT. Passwords are still the most widespread means for authenticating users, even though they have been shown to create huge security problems. This has motivated the use of additional authentication mechanisms in so-called multi-factor authentication protocols. In this paper we define a detailed threat model for this kind of protocol: while in classical protocol analysis attackers control the communication network, we take into account that many communications are performed over TLS channels, that computers may be infected by different kinds of malware, that attackers could perform phishing, and that humans may omit some actions. We formalize this model in the applied pi calculus and perform an extensive analysis and comparison of several widely used protocols, namely variants of Google 2-Step and FIDO's U2F. The analysis is completely automated, systematically generating all combinations of threat scenarios for each of the protocols and using the ProVerif tool for automated protocol analysis. Our analysis highlights weaknesses and strengths of the different protocols, and allows us to suggest several small modifications of the existing protocols which are easy to implement, yet improve their security in several threat scenarios.
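
As a toy illustration of the scenario-generation step, the Python sketch below enumerates the Cartesian product of a few threat dimensions; the dimension names are hypothetical, and in the actual analysis each scenario would instantiate an applied pi calculus model to be checked with ProVerif.

    from itertools import product

    # Hypothetical threat dimensions; the paper's threat model is richer.
    THREAT_DIMENSIONS = {
        "network":  ["passive", "active"],         # attacker control of the network
        "tls":      ["trusted", "compromised"],    # status of TLS channels
        "malware":  ["none", "display", "input"],  # kind of device infection
        "phishing": [False, True],                 # can the attacker phish?
        "human":    ["careful", "omits_actions"],  # may the user omit actions?
    }

    def all_scenarios():
        """Yield every combination of threat assumptions as a dictionary."""
        keys = list(THREAT_DIMENSIONS)
        for values in product(*(THREAT_DIMENSIONS[k] for k in keys)):
            yield dict(zip(keys, values))

    for scenario in all_scenarios():
        # Each scenario would be compiled into a ProVerif model and verified.
        print(scenario)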

11:30
Composition Theorems for CryptoVerif and Application to TLS 1.3

ABSTRACT. We present composition theorems for security protocols, to compose a key exchange protocol and a symmetric-key protocol that uses the exchanged key. Our results rely on the computational model of cryptography and are stated in the framework of the tool CryptoVerif. They support key exchange protocols that guarantee injective or non-injective authentication. They also allow random oracles shared between the composed protocols. To our knowledge, they are the first composition theorems for key exchange stated for a computational protocol verification tool, and also the first to allow such flexibility.

As a case study, we apply our composition theorems to a proof of TLS 1.3 Draft-18. This work fills a gap in a previous paper that informally claims a compositional proof of TLS 1.3, without formally justifying it.

12:00
A Cryptographic Look at Multi-Party Channels

ABSTRACT. Cryptographic channels aim to enable authenticated and confidential communication over the Internet. The general understanding seems to be that providing security in the sense of authenticated encryption for every (unidirectional) point-to-point link suffices to achieve this goal. As recently shown (in FSE17/ToSC17), however, even in the bidirectional case just requiring the two unidirectional links to provide security independently of each other does not lead to a secure solution in general. Informally, the reason for this is that the increased interaction in bidirectional communication may be exploited by an adversary. The same argument applies, a fortiori, in a multi-party setting where several users operate concurrently and the communication develops in more directions. In the cryptographic literature, however, the targeted goals for group communication in terms of channel security are still unexplored.

Applying the methodology of provable security, we fill this gap by (i) defining exact (game-based) authenticity and confidentiality goals for broadcast communication and (ii) showing how to achieve them. Importantly, our security notions also account for the causal dependencies between exchanged messages, thus naturally extending the bidirectional case where causal relationships are automatically captured by preserving the sending order. On the constructive side we propose a modular and yet efficient protocol that, assuming only reliable point-to-point links between users, leverages (non-cryptographic) broadcast and standard cryptographic primitives to a full-fledged broadcast channel that provably meets the security notions we put forth.

11:00-12:30 Session 47B: Linear Logic (FSCD)
11:00
Proof nets for bi-intuitionistic linear logic

ABSTRACT. Bi-Intuitionistic Linear Logic (BILL) is an extension of Intuitionistic Linear Logic with a par, dual to the tensor, and subtraction, dual to linear implication. It is the logic of categories with a monoidal closed and a monoidal co-closed structure that are related by linear distributivity, a strength of the tensor over the par. It conservatively extends Full Intuitionistic Linear Logic (FILL), which includes only the par.

We give proof nets for the multiplicative, unit-free fragment MBILL-. Correctness is by local rewriting in the style of Danos contractibility. This rewrite relation yields sequentialization into a relational sequent calculus that extends the existing one for FILL. We give a second, geometric correctness condition via Danos-Regnier switching, and demonstrate composition both inductively and as a one-off global operation.

11:30
Unique perfect matchings and proof nets

ABSTRACT. This paper establishes a bridge between linear logic and mainstream graph theory, building on previous work by Retoré (2003). We show that the problem of correctness for MLL+Mix proof nets is equivalent to the problem of uniqueness of a perfect matching. By applying matching theory, we obtain new results for MLL+Mix proof nets: a linear-time correctness criterion, a quasi-linear sequentialization algorithm, and a characterization of the sub-polynomial complexity of the correctness problem. We also use graph algorithms to compute the dependency relation of Bagnol et al. (2015) and the kingdom ordering of Bellin (1997), and relate them to the notion of blossom which is central to combinatorial maximum matching algorithms.
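
To make the matching-theoretic statement concrete, the following brute-force Python check decides uniqueness of a perfect matching on small graphs (purely illustrative; the paper's point is precisely that much better algorithms from matching theory apply):

    from itertools import combinations

    def perfect_matchings(vertices, edges):
        """Enumerate perfect matchings of an undirected graph by brute force."""
        n = len(vertices)
        for candidate in combinations(edges, n // 2):
            covered = [v for e in candidate for v in e]
            if len(set(covered)) == n:  # every vertex covered exactly once
                yield candidate

    def has_unique_perfect_matching(vertices, edges):
        count = 0
        for _ in perfect_matchings(vertices, edges):
            count += 1
            if count > 1:
                return False
        return count == 1

    # A 4-cycle has two perfect matchings; a single edge has exactly one.
    print(has_unique_perfect_matching(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)]))  # False
    print(has_unique_perfect_matching(range(2), [(0, 1)]))                          # True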

12:00
Lifting Coalgebra Modalities and IMELL Model Structure to Eilenberg-Moore Categories

ABSTRACT. A categorical model of the multiplicative and exponential fragments of intuitionistic linear logic (IMELL), known as a linear category, is a symmetric monoidal closed category with a monoidal coalgebra modality (also known as a linear exponential comonad). Inspired by R. Blute and P. Scott's work on categories of modules of Hopf algebras as models of linear logic, we study Eilenberg-Moore categories of monads as models of IMELL. We define an IMELL lifting monad on a linear category as a Hopf monad -- in the Bruguieres, Lack, and Virelizier sense -- with a mixed distributive law over the monoidal coalgebra modality. As our main result, we show that the linear category structure lifts to Eilenberg-Moore categories of IMELL lifting monads. We explain how monoids in the Eilenberg-Moore category of the monoidal coalgebra modality can induce IMELL lifting monads and provide sources for such monoids. Along the way, we also define mixed distributive laws of bimonads over coalgebra modalities, and lift differential category structure to Eilenberg-Moore categories of exponential lifting monads.

11:00-12:00 Session 47C: ITP Invited Talk: John Harrison (ITP)
Location: Blavatnik LT1
11:00
Mike Gordon: Tribute to a Pioneer in Theorem Proving and Formal Verification

ABSTRACT. Prof. Michael J. C. Gordon, FRS was a great pioneer in both
computer-aided formal verification and interactive theorem proving.
His own work and that of his students helped to explore and map out
these new fields and in particular the fruitful connections between
them. His seminal HOL theorem prover not only gave rise to many
successors and relatives, but was also the framework in which many new
ideas and techniques in theorem proving and verification were explored
for the first time. Mike's untimely death in August 2017 was a tragedy
first and foremost for his family, but was felt as a shocking loss too
by many of us who felt part of his extended family of friends, former
students and colleagues throughout the world. Mike's intellectual
example as well as his unassuming nature and personal kindness will
always be something we treasure. In my talk here I will present an
overall perspective on Mike's life and the whole arc of his
intellectual career. I will also spend time looking ahead, for
the research themes he helped to establish are still vital and
exciting today in both academia and industry.

11:00-12:40 Session 47D (LICS)
Location: Maths LT1
11:00
Definable decompositions for graphs of bounded linear cliquewidth

ABSTRACT. We prove that for every positive integer k, there exists an MSO_1-transduction that given a graph of linear cliquewidth at most k outputs, nondeterministically, some clique decomposition of the graph of width bounded by a function of k. A direct corollary of this result is the equivalence of the notions of CMSO_1-definability and recognizability on graphs of bounded linear cliquewidth.

11:20
Parameterized circuit complexity of model-checking on sparse structures

ABSTRACT. We prove that for every class $C$ of graphs with effectively bounded expansion, given a first-order sentence $\varphi$ and an $n$-element structure $A$ whose Gaifman graph belongs to $C$, the question whether $\varphi$ holds in $A$ can be decided by a family of AC-circuits of size $f(\varphi)\cdot n^c$ and depth $f(\varphi)+c\log n$, where $f$ is a computable function and $c$ is a universal constant. This places the model-checking problem for classes of bounded expansion in the parameterized circuit complexity class $paraAC^1$. On the route to our result we prove that the basic decomposition toolbox for classes of bounded expansion, including orderings with bounded weak coloring numbers and low treedepth decompositions, can be computed in $paraAC^1$.

11:40
Sequential Relational Decomposition
SPEAKER: Dror Fried

ABSTRACT. The concept of decomposition in computer science and engineering is considered a fundamental component of computational thinking and is prevalent in the design of algorithms, software construction, hardware design, and more. We propose a simple and natural formalization of sequential decomposition, in which a task is decomposed into two sequential sub-tasks, with the first sub-task executed before the second. These tasks are specified by means of input/output relations. We define and study decomposition problems, which ask whether a given specification can be sequentially decomposed. Our main result is that decomposition itself is a difficult computational problem. More specifically, we study decomposition problems in three settings: where the input task is specified explicitly, by means of Boolean circuits, and by means of automatic relations. We show that in the first setting decomposition is NP-complete, in the second setting it is NEXPTIME-complete, and in the third setting there is evidence to suggest that it is undecidable. Our results indicate that the intuitive idea of decomposition as a system-design approach requires further investigation. In particular, we show that adding a human to the loop by asking for a decomposition hint lowers the complexity of decomposition problems considerably.

12:00
A parameterized halting problem, the linear time hierarchy, and the MRDP theorem
SPEAKER: Yijia Chen

ABSTRACT. The complexity of the parameterized halting problem for nondeterministic Turing machines p-Halt is known to be related to the question of whether there are logics capturing various complexity classes [Chen and Flum, 2012]. Among others, if p-Halt is in para-AC^0, the parameterized version of the circuit complexity class AC^0, then AC^0, or equivalently, (+,×)-invariant FO, has a logic. Although it is widely believed that p-Halt ∉ para-AC^0, we show that the problem is hard to settle by establishing a connection to the question in classical complexity of whether NE ⊈ LINH. Here, LINH denotes the linear time hierarchy.

On the other hand, we suggest an approach toward proving NE ⊈ LINH using bounded arithmetic. More specifically, we demonstrate that if the much celebrated MRDP (for Matiyasevich-Robinson-Davis-Putnam) theorem can be proved in a certain fragment of arithmetic, then NE ⊈ LINH. Interestingly, central to this result is a para-AC^0 lower bound for the parameterized model-checking problem for FO on arithmetical structures.

12:20
Regular and First Order List Functions

ABSTRACT. We define two classes of functions, called regular (respectively, first-order) list functions, which manipulate objects such as lists, lists of lists, pairs of lists, lists of pairs of lists, etc. The definition is in the style of regular expressions: the functions are constructed by starting with some basic functions (e.g. projections from pairs, or head and tail operations on lists) and putting them together using four combinators (most importantly, composition of functions). Our main results are that first-order list functions are exactly the same as first-order transductions, under a suitable encoding of the inputs; and the regular list functions are exactly the same as MSO-transductions.
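
The combinator style can be sketched in Python; the particular basic functions and combinator names below are simplified assumptions for illustration, not the paper's exact definitions.

    # Basic functions.
    def head(xs):  return xs[0]
    def tail(xs):  return xs[1:]
    def proj1(p):  return p[0]
    def proj2(p):  return p[1]

    # Combinators.
    def compose(f, g):
        """Sequential composition: apply f, then g."""
        return lambda x: g(f(x))

    def mapf(f):
        """Lift f to act elementwise on a list."""
        return lambda xs: [f(x) for x in xs]

    # Example: the first component of the head of a list of pairs.
    first_of_head = compose(head, proj1)
    print(first_of_head([(1, "a"), (2, "b")]))  # 1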

11:00-12:40 Session 47E (LICS)
Location: Maths LT3
11:00
A theory of linear typings as flows on 3-valent graphs

ABSTRACT. Building on recently established enumerative connections between lambda calculus and the theory of embedded graphs (or "maps"), this paper develops an analogy between typing (of lambda terms) and coloring (of maps). Our starting point is the classical notion of an abelian group-valued "flow" on an abstract graph (Tutte, 1954). Typing a linear lambda term may be naturally seen as constructing a flow (on an embedded 3-valent graph with boundary) valued in a more general algebraic structure consisting of a preordered set equipped with an "implication" operation and unit satisfying composition, identity, and unit laws. Interesting questions and results from the theory of flows (such as the existence of nowhere-zero flows) may then be re-examined from the standpoint of lambda calculus and logic. For example, we give a characterization of when the local flow relations (across vertices) may be categorically lifted to a global flow relation (across the boundary), proving that this holds just in case the underlying map has the orientation of a lambda term. We also develop a basic theory of rewriting of flows that suggests topological meanings for classical completeness results in combinatory logic, and introduce a polarized notion of flow, which draws connections to the theory of proof-nets in linear logic and to bidirectional typing.

11:20
Cellular Cohomology in Homotopy Type Theory

ABSTRACT. We present a development of cellular cohomology in homotopy type theory. Cohomology associates to each space a sequence of abelian groups capturing part of its structure, and has the advantage over homotopy groups that, for many common spaces, these abelian groups are easier to compute. Cellular cohomology is a special kind of cohomology designed for cell complexes: these are built in stages by attaching spheres of progressively higher dimension, and cellular cohomology defines the groups out of the combinatorial description of how spheres are attached. Our main result is that for finite cell complexes, a wide class of cohomology theories (including the ones defined through Eilenberg-MacLane spaces) can be calculated via cellular cohomology. This result was formalized in the Agda proof assistant.

11:40
Free Higher Groups in Homotopy Type Theory
SPEAKER: Nicolai Kraus

ABSTRACT. Given a type A in homotopy type theory (HoTT), we define the free infinity-group on A as the higher inductive type FA with constructors [unit : FA], [cons : A -> FA -> FA], and conditions saying that every cons(a) is an auto-equivalence on FA. Assuming that A is a set (i.e. satisfies the principle of unique identity proofs), we are interested in whether FA is a set as well, a question closely related to an open problem in the HoTT book [Ex. 8.2]. In this paper, we prove an approximation to this question, namely that the fundamental groups of FA are trivial.

12:00
Higher Groups in Homotopy Type Theory

ABSTRACT. We present a development of the theory of higher groups, including infinity groups and connective spectra, in homotopy type theory. An infinity group is simply the loops in a pointed, connected type, where the group structure comes from the structure inherent in the identity types of Martin-Löf type theory. We investigate ordinary groups from this viewpoint, as well as higher dimensional groups and groups that can be delooped more than once. A major result is the stabilization theorem, which states that if an n-type can be delooped n+2 times, then it has the structure of an infinite loop type. Most of the results have been formalized in the Lean proof assistant.

12:20
Strong Sums in Focused Logic

ABSTRACT. A useful connective that has not previously been made to work in focused logic is the strong sum, a form of dependent sum that is eliminated by projection rather than pattern matching. This makes strong sums powerful, but it also creates a problem adapting them to focusing: The type of the right projection from a strong sum refers to the term being projected from, but due to the structure of focused logic, that term is not available.

In this work we confirm that strong sums can be viewed as a negative connective in focused logic. The key is to resolve strong sums' dependencies eagerly, before projection can see them, using a notion of selfification adapted from module type systems. We validate the logic by proving cut admissibility and identity expansion. All the proofs are formalized in Coq.

11:00-12:15 Session 47F: SAT Invited Talk: Christoph Scholl (SAT)
11:00
Welcome to SAT 2018
11:05
Dependency Quantified Boolean Formulas: An Overview of Solution Methods and Applications

ABSTRACT. Dependency quantified Boolean formulas (DQBFs), a generalization of quantified Boolean formulas (QBFs), have received considerable attention in research in recent years. Here we give an overview of the solution methods developed for DQBF so far. The exposition is complemented by a discussion of various applications that can be handled with DQBF solving.
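
For orientation, a DQBF makes the dependency set of each existential variable explicit. A standard textbook-style example (not taken from the talk) is

    \forall x_1 \forall x_2 \exists y_1(x_1) \exists y_2(x_2). \varphi(x_1, x_2, y_1, y_2)

where y_1 may depend only on x_1 and y_2 only on x_2, a dependency pattern that no linearly ordered QBF prefix can express.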

12:00-12:30 Session 48 (ITP)
Location: Blavatnik LT1
12:00
Efficient Mendler-Style Lambda-Encodings in Cedille
SPEAKER: Denis Firsov

ABSTRACT. It is common to model inductive datatypes as least fixed points of functors. We show that within the Cedille type theory we can relax functoriality constraints and generically derive an induction principle for Mendler-style lambda-encoded inductive datatypes, which arise as least fixed points of covariant schemes where the morphism lifting is defined only on identities. Additionally, we implement a destructor for these lambda-encodings that runs in constant time. As a result, we can define lambda-encoded natural numbers with an induction principle and a constant-time predecessor function so that the normal form of a numeral requires only linear space. The paper also includes several more advanced examples.
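
As background to the constant-time destructor, the Python sketch below shows why the predecessor is the classical pain point for Church-style encodings: Kleene's pairing trick takes time linear in the numeral. This is context only, not the paper's Mendler-style construction.

    # Church numerals as Python lambdas.
    zero = lambda s: lambda z: z
    succ = lambda n: lambda s: lambda z: s(n(s)(z))

    def church(k):
        n = zero
        for _ in range(k):
            n = succ(n)
        return n

    def to_int(n):
        return n(lambda x: x + 1)(0)

    def pred_int(n):
        # Kleene's trick: fold with pairs (prev, cur); O(n) steps, in contrast
        # to the constant-time destructor derived in the paper.
        prev, _cur = n(lambda p: (p[1], p[1] + 1))((0, 0))
        return prev

    print(to_int(church(5)), pred_int(church(5)))  # 5 4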

12:30-14:00 Lunch Break
14:00-15:00 Session 49A: CSF Invited Talk: Srini Devadas (CSF)
Location: Maths LT2
14:00
Sanctum: Towards an Open-Source, Formally Verified Secure Processor

ABSTRACT. Architectural isolation can be used to secure computation on a remote secure processor holding a private key, even when the privileged software is potentially malicious; this approach was recently deployed in Intel's Software Guard Extensions (SGX). This talk will first describe the Sanctum secure processor architecture, which offers the same promise as SGX, namely strong provable isolation of software modules running concurrently and sharing resources, but protects against an important class of additional software attacks that infer private information by exploiting resource sharing.

The talk will then describe a verification methodology based on a trusted abstract platform (TAP) that formally models idealized enclaves and a parameterized adversary. Machine-checked proofs show that the TAP satisfies the three key security properties needed for secure remote execution: integrity, confidentiality and secure measurement. Machine-checked proofs also show that SGX and Sanctum are refinements of the TAP under certain parameterizations of the adversary, demonstrating that these systems implement secure enclaves for the stated adversary models.

Biography:

Srini Devadas is the Webster Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT) where he has been on the faculty since 1988. Devadas's research interests span Computer-Aided Design (CAD), computer security and computer architecture. He is a Fellow of the IEEE and ACM. He has received a 2014 IEEE Computer Society Technical Achievement award, the 2015 ACM/IEEE Richard Newton technical impact award, and the 2017 IEEE Wallace McDowell award for his research. Devadas is a MacVicar Faculty Fellow and an Everett Moore Baker teaching award recipient, considered MIT's two highest undergraduate teaching honors.

14:00-15:00 Session 49B: FSCD Invited talk: Peter Selinger (FSCD)
14:00
Challenges in quantum programming languages

ABSTRACT. In this talk, I will give an overview of some recent progress and current challenges in the design of quantum programming languages. Unlike classical programs, which can in principle be debugged by stopping the program at critical moments and examining the contents of variables, quantum programs are not amenable to traditional debugging because the state of a quantum system cannot usually be examined in a meaningful way. Therefore, we need other methods for ensuring the correctness of quantum programs, such as formal verification. For this reason, I advocate the use of strongly typed, functional programming languages for quantum computing. As far as functional quantum programming languages are concerned, there is currently a relatively wide gap between theory and practice. On the one hand, we have languages with strong theoretical foundations, such as the quantum lambda calculus, which operate at a relatively low level of abstraction and lack many features that would be useful to practical quantum programmers. On the other hand, we have practical functional quantum programming languages such as Quipper, which is implemented as an embedded language in Haskell, has many high-level features, and has been used in large-scale projects, but lacks a theoretical basis and a strong type system. We have recently attempted to narrow this gap through a family of languages called Proto-Quipper, which are designed to offer Quipper-like features while having sound theoretical foundations. I will give an overview of Quipper and its most useful features, report on the progress we made with formalizing fragments of Quipper, and outline several of the still remaining challenges.

14:00-15:30 Session 49C (ITP)
Location: Blavatnik LT1
14:00
Formalizing Implicative Algebras in Coq

ABSTRACT. We present a Coq formalization of Alexandre Miquel's implicative algebras, which aim at providing a general algebraic framework for the study of classical realizability models. We first give a self-contained presentation of the underlying implicative structures, which roughly consist of a complete lattice equipped with a binary law representing the implication. We then explain how these structures can be turned into models by adding separators, giving rise to the so-called implicative algebras. Additionally, we show how they generalize Boolean and Heyting algebras as well as the usual algebraic structures used in the analysis of classical realizability.

14:30
Software Tool Support for Modular Reasoning in Modal Logics of Actions
SPEAKER: Samuel Balco

ABSTRACT. We present a software tool for reasoning in and about propositional sequent calculi for modal logics of actions. As an example, we implement the display calculus D.EAK of dynamic epistemic logic. The tool generates embeddings of the calculus in the theorem prover Isabelle for formalising proofs about D.EAK. Integrating propositional reasoning in D.EAK with inductive reasoning in Isabelle, we verify in Isabelle the solution of the muddy children puzzle for any number of muddy children. There is also a set of meta-tools that allows us to adapt the software for a wide variety of user-defined calculi.

15:00
The Coinductive Formulation of Common Knowledge
SPEAKER: Colm Baston

ABSTRACT. We study the coinductive formulation of common knowledge in type theory. We formalise both the traditional relational semantics and an operator semantics, similar in form to the epistemic system S5, but at the level of events on possible worlds rather than as a logical derivation system. We have two major new results. Firstly, the operator semantics is equivalent to the relational semantics: we discovered that this requires a new hypothesis of semantic entailment on operators, not known in previous literature. Secondly, the coinductive version of common knowledge is equivalent to the traditional transitive closure on the relational interpretation. All results are formalised in the proof assistants Agda and Coq.
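
For orientation, in one standard presentation (paraphrased here, not quoted from the paper), common knowledge of an event \varphi is the greatest fixed point

    C\varphi = E(\varphi \wedge C\varphi)

where E is the "everybody knows" operator; the coinductive formulation in type theory captures exactly such a greatest fixed point.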

14:00-15:40 Session 49D (LICS)
Location: Maths LT1
14:00
A modal mu perspective on solving parity games in quasipolynomial time.

ABSTRACT. We present a new quasi-polynomial algorithm for solving parity games. It is based on a new bisimulation-invariant measure of complexity for parity games, called the register-index, which captures the complexity of the priority assignment. For fixed parameter k, the class of games with register-index bounded by k is solvable in polynomial time.

We show that the register-index of parity games of size n is bounded by O(log n) and derive a quasi-polynomial algorithm. Finally, we give the first descriptive-complexity account of the quasi-polynomial solvability of parity games: the winning regions of parity games with p priorities and register-index k are described by a modal μ-calculus formula whose complexity, as measured by its alternation depth, depends on k rather than p.

14:20
A pseudo-quasi-polynomial algorithm for solving mean-payoff parity games
SPEAKER: Laure Daviaud

ABSTRACT. In a mean-payoff parity game, one of the two players aims both to achieve a qualitative parity objective and to minimize a quantitative long-term average of payoffs (aka. mean payoff). The game is zero-sum and hence the aim of the other player is to either foil the parity objective or to maximize the mean payoff. Our main technical result is a pseudo-quasi-polynomial algorithm for solving mean-payoff parity games. All algorithms for the problem that have been developed for over a decade have a pseudo-polynomial and an exponential factor in their running times; in the running time of our algorithm the latter is replaced with a quasi-polynomial one. Our main conceptual contributions are the definitions of strategy decompositions for both players, and a notion of progress measures for mean-payoff parity games that generalizes both parity and energy progress measures. The former provides normal forms for and succinct representations of winning strategies, and the latter enables the application to mean-payoff parity games of the order-theoretic machinery that underpins a recent quasi-polynomial algorithm for solving parity games.

14:40
Rational Synthesis Under Imperfect Information

ABSTRACT. In this paper, we study the rational synthesis problem for multi-player non-zero-sum games played on finite graphs for omega-regular objectives. Rationality is formalized by the concept of Nash equilibrium (NE). Contrary to previous works, we consider here the more general and more practically relevant case where players are imperfectly informed. In sharp contrast with the perfect information case, NE are not guaranteed to exist in this more general setting. This motivates the study of the NE existence problem. We show that this problem is ExpTime-C for parity objectives in the two-player case (even if both players are imperfectly informed) and undecidable for more than 2 players. We then study the rational synthesis problem and show that the problem is also ExpTime-C for two imperfectly informed players and undecidable for more than 3 players. As the rational synthesis problem considers a system (Player 0) playing against a rational environment (composed of k players), we also consider the natural case where only Player 0 is imperfectly informed about the state of the environment (and the environment is considered as perfectly informed). In this case, we show that the ExpTime-C result holds when k is arbitrary but fixed. We also analyse the complexity when k is part of the input.

15:00
Playing with Repetitions in Data Words Using Energy Games
SPEAKER: M. Praveen

ABSTRACT. We introduce two-player games which build words over infinite alphabets, and we study the problem of checking the existence of winning strategies. These games are played by two players, who take turns in choosing valuations for variables ranging over an infinite data domain, thus generating multi-attributed data words. The winner of the game is specified by formulas in the Logic of Repeating Values, which can reason about repetitions of data values in infinite data words. We prove that it is undecidable to check if one of the players has a winning strategy, even in very restrictive settings. However, we prove that if one of the players is restricted to choose valuations ranging over the Boolean domain, the games are effectively equivalent to single-sided games on vector addition systems with states (in which one of the players can change control states but cannot change counter values), known to be decidable and effectively equivalent to energy games.

Previous works have shown that the satisfiability problem for various variants of the logic of repeating values is equivalent to the reachability and coverability problems in vector addition systems. Our results raise this connection to the level of games, augmenting further the associations between logics on data words and counter systems.

15:20
Compositional game theory
SPEAKER: Jules Hedges

ABSTRACT. We introduce open games as a compositional foundation of economic game theory. A compositional approach potentially allows methods of game theory and theoretical computer science to be applied to large-scale economic models for which standard economic tools are not practical. An open game represents a game played relative to an arbitrary environment and to this end we introduce the concept of coutility, which is the utility generated by an open game and returned to its environment. Open games are the morphisms of a symmetric monoidal category and can therefore be composed by categorical composition into sequential move games and by monoidal products into simultaneous move games. Open games can be represented by string diagrams which provide an intuitive but formal visualisation of the information flows. We show that a variety of games can be faithfully represented as open games in the sense of having the same Nash equilibria and off-equilibrium best responses.
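
A heavily stripped-down Python sketch of the sequential-composition plumbing follows; strategies and equilibria are omitted, and the lens-like packaging is an assumption of this illustration rather than the paper's definition.

    class OpenGame:
        """Toy 'open game': a forward play map plus a backward coutility map."""
        def __init__(self, play, coplay):
            self.play = play      # X -> Y
            self.coplay = coplay  # (X, R) -> S: return utility to the context

        def then(self, other):
            """Sequential (categorical) composition: our output feeds `other`."""
            return OpenGame(
                play=lambda x: other.play(self.play(x)),
                coplay=lambda x, r: self.coplay(x, other.coplay(self.play(x), r)),
            )

    # Two trivial stages composed sequentially; utility flows backwards.
    double = OpenGame(lambda x: 2 * x, lambda x, r: r)
    shift = OpenGame(lambda y: y + 1, lambda y, r: r)
    game = double.then(shift)
    print(game.play(3))              # 7
    print(game.coplay(3, "payoff"))  # 'payoff', passed back unchanged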

14:00-15:40 Session 49E (LICS)
Location: Maths LT3
14:00
Concurrency and Probability: Removing Confusion, Compositionally
SPEAKER: Roberto Bruni

ABSTRACT. Assigning a satisfactory truly concurrent semantics to Petri nets with confusion and distributed decisions is a long-standing problem, especially if one wants to resolve decisions by drawing from some probability distribution. Here we propose a general solution based on a recursive, static decomposition of (occurrence) nets in loci of decision, called structural branching cells (s-cells). Each s-cell exposes a set of alternatives, called transactions. Our solution transforms a given Petri net into another net whose transitions are the transactions of the s-cells and whose places are those of the original net, with some auxiliary structure for bookkeeping. The resulting net is confusion-free, and thus conflicting alternatives can be equipped with probabilistic choices, while nonintersecting alternatives are purely concurrent and their probability distributions are independent. The validity of the construction is witnessed by a tight correspondence with the recursively stopped configurations of Abbes and Benveniste. Some advantages of our approach are that: i) s-cells are defined statically and locally in a compositional way; ii) our resulting nets exhibit the complete concurrency property.

14:20
ReLoC: A Mechanised Relational Logic for Fine-Grained Concurrency
SPEAKER: Dan Frumin

ABSTRACT. We present ReLoC: a logic for proving refinements of programs in a language with higher-order state, fine-grained concurrency, polymorphism and recursive types. The core of our logic is a judgement e ≾ e' : τ, which expresses that a program e refines a program e' at type τ. In contrast to earlier work on refinements for languages with higher-order state and concurrency, ReLoC provides type- and structure-directed rules for manipulating this judgement, whereas previously, such proofs were carried out by unfolding the judgement into its definition in the model. These more abstract proof rules make it simpler to carry out refinement proofs.

Moreover, we introduce logically atomic relational specifications: a novel approach for relational specifications for compound expressions that take effect at a single instant in time. We demonstrate how to formalise and prove such relational specifications in ReLoC, allowing for more modular proofs.

ReLoC is built on top of the expressive concurrent separation logic Iris, allowing us to leverage features of Iris such as invariants and ghost state. We provide a mechanisation of our logic in Coq, which does not just contain a proof of soundness, but also tactics for interactively carrying out refinement proofs. We have used these tactics to mechanise several examples, which demonstrates the practicality and modularity of our logic.

14:40
Eager Functions as Processes
SPEAKER: Adrien Durier

ABSTRACT. We study Milner's encoding of the call-by-value lambda-calculus in the pi-calculus. We show that, by tuning the encoding to two subcalculi of the pi-calculus (Internal pi and Asynchronous Local pi), the equivalence on lambda-terms induced by the encoding coincides with Lassen's eager normal form bisimilarity, extended to handle eta-equality. As behavioural equivalence in the pi-calculus we consider contextual equivalence and barbed congruence. We also extend the results to preorders.

A crucial technical ingredient in the proofs is the recently-introduced technique of unique solutions of equations, further developed in this paper. In this respect, the paper also intends to be an extended case study on the applicability and expressiveness of the technique.

15:00
Quasi-Open Bisimilarity with Mismatch is Intuitionistic
SPEAKER: Ki Yung Ahn

ABSTRACT. Quasi-open bisimilarity is the coarsest notion of bisimilarity for the pi-calculus that is also a congruence. This work extends quasi-open bisimilarity to handle mismatch (guards with inequalities). This minimal extension of quasi-open bisimilarity allows fresh names to be manufactured to provide constructive evidence that an inequality holds. The extension of quasi-open bisimilarity is canonical and robust --- coinciding with open barbed bisimilarity (an objective notion of bisimilarity congruence) and characterised by an intuitionistic variant of an established modal logic. The more famous open bisimilarity is also considered, for which the coarsest extension for handling mismatch is identified. Applications to symbolic equivalence checking and symbolic model checking are highlighted, e.g., for verifying privacy properties. Theorems and examples are mechanised using the proof assistant Abella.

15:20
Causal Computational Complexity of Distributed Processes

ABSTRACT. This paper studies the complexity of pi-calculus processes with respect to the quantity of transitions caused by an incoming message. First we propose a typing system for integrating Bellantoni and Cook's characterisation of polynomially-bound recursive functions into Deng and Sangiorgi's typing system for termination. We then define the computational complexity of distributed messages based on Degano and Priami's causal semantics, which identifies the dependency between interleaved transitions. Next we apply a syntactic flow analysis to typable processes to ensure the computational bound of distributed messages. We prove that our analysis is decidable for a given process; sound, in the sense that it guarantees that the total number of messages causally dependent on an input request received from the outside is bounded by a polynomial in the content of this request; and complete, which means that each polynomial recursive function can be computed by a typable process.

14:00-15:30 Session 49F: MaxSAT (SAT)
14:00
Approximately Propagation Complete and Conflict Propagating Constraint Encodings

ABSTRACT. The effective use of satisfiability (SAT) solvers requires problem encodings that make good use of the reasoning techniques employed in such solvers, such as unit propagation and clause learning. Propagation completeness has been proposed as a useful property for constraint encodings as it maximizes the utility of unit propagation. Experimental results on using encodings with this property in the context of satisfiability modulo theory (SMT) solving have however remained inconclusive, as such encodings are typically very large, which increases the bookkeeping work of solvers.

In this paper, we introduce approximate propagation completeness and approximate conflict propagation as novel SAT encoding property notions. While approximate propagation completeness is a generalization of classical propagation completeness, (approximate) conflict propagation is a new concept for reasoning about how early conflicts can be detected by a SAT solver. Both notions together span a hierarchy of encoding quality choices, with classical propagation completeness as a special case. We show how to compute approximately propagation complete and conflict propagating constraint encodings with a minimal number of clauses using a reduction to MaxSAT. To evaluate the effect of such encodings, we give results on applying them in a case study.
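
For reference, unit propagation itself, the reasoning mechanism both notions are about, fits in a few lines of Python; an encoding is propagation complete when every literal entailed by the clauses and a partial assignment is discovered this way.

    def unit_propagate(clauses, assignment):
        """Clauses as lists of DIMACS-style integer literals; assignment as a set."""
        assignment = set(assignment)
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                if any(lit in assignment for lit in clause):
                    continue  # clause already satisfied
                unassigned = [l for l in clause if -l not in assignment]
                if not unassigned:
                    return None  # conflict: every literal is falsified
                if len(unassigned) == 1:  # unit clause forces its literal
                    assignment.add(unassigned[0])
                    changed = True
        return assignment

    # x1 and (x1 -> x2): propagation alone derives both x1 and x2.
    print(unit_propagate([[1], [-1, 2]], set()))  # {1, 2}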

14:30
Dynamic Polynomial Watchdog Encoding for Solving Weighted MaxSAT
SPEAKER: Tobias Paxian

ABSTRACT. In this paper we present a novel cardinality constraint encoding for solving the weighted MaxSAT problem with iterative SAT-based methods, based on the Polynomial Watchdog (PW) CNF encoding for Pseudo-Boolean (PB) constraints. The watchdog of the PW encoding indicates whether the bound of the PB constraint holds. In our approach, we lift this static watchdog concept to a dynamic one, allowing an incremental convergence to the optimal result. Consequently, we formulate and implement a SAT-based algorithm for our new Dynamic Polynomial Watchdog (DPW) encoding which can be applied for solving the MaxSAT problem. Furthermore, we introduce three fundamental optimizations of the PW encoding, also suited for the original version, leading to significantly smaller encodings.

Our experimental results show that our encoding and algorithm are competitive with state-of-the-art encodings as utilized in QMaxSAT (3rd place in the last MaxSAT Evaluation 2017). Our encoding dominates two of the QMaxSAT encodings, and at the same time is able to solve unique instances. We integrated our new encoding into QMaxSAT and adapted the heuristic to choose between the only remaining encoding of QMaxSAT and our approach. This combined version solves 19 (4%) more instances in overall 30% less run time on the benchmark set of the MaxSAT Evaluation 2017. Moreover, for the instances solved by both solvers, our encoding is 2x faster than all employed encodings of QMaxSAT used in the evaluation.

15:00
Solving MaxSAT with Bit-Vector Optimization

ABSTRACT. We explore the relationships between two closely related optimization problems: MaxSAT and Optimization Modulo Bit-Vectors (OBV). Given a bit-vector or a propositional formula F and a target bit-vector T, Unweighted Partial MaxSAT maximizes the number of satisfied bits in T, while OBV maximizes the value of T. 

We propose a new OBV-based Unweighted Partial MaxSAT algorithm. Our resulting solver, Mrs. Beaver, outscores the state-of-the-art solvers when run with the settings of the Incomplete-60-Second-Timeout Track of the MaxSAT Evaluation 2017.

Mrs. Beaver is the first MaxSAT algorithm designed to be incremental in the following sense: it can be re-used across multiple invocations with different hard assumptions and target bit-vectors. We provide experimental evidence showing that enabling incrementality in MaxSAT significantly improves the performance of a MaxSAT-based Boolean Multilevel Optimization (BMO) algorithm when solving a new, critical industrial BMO application: cleaning up weak design-rule violations during the Physical Design stage of Computer-Aided Design.
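
The difference between the two objectives can be seen on a toy brute-force example (purely illustrative; the bit names and the side constraint are made up):

    from itertools import product

    count_sat = lambda m, T: sum(m[b] for b in T)  # MaxSAT: number of satisfied bits
    bv_value = lambda m, T: sum(m[b] << i for i, b in enumerate(reversed(T)))  # OBV

    # Hypothetical constraint: the two top bits t2 and t1 cannot both be 1.
    models = [dict(zip(("t2", "t1", "t0"), bits))
              for bits in product((0, 1), repeat=3)
              if not (bits[0] and bits[1])]
    T = ["t2", "t1", "t0"]  # t2 is the most significant bit

    print(max(models, key=lambda m: count_sat(m, T)))  # two bits set: t1 = t0 = 1
    print(max(models, key=lambda m: bv_value(m, T)))   # value 5: t2 = t0 = 1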

15:00-15:30 Session 50A: Attack trees (CSF)
Location: Maths LT2
15:00
Guided design of attack trees: a system-based approach

ABSTRACT. Attack trees are a well-recognized formalism for security modeling and analysis, but in this work we tackle a problem that has not yet been addressed by the security or formal methods community, namely the guided design of attack trees. The objective of the framework presented in this paper is to support a security expert in the process of designing a pertinent attack tree for a given system. In contrast to most existing approaches for attack trees, our framework contains an explicit model of the real system to be analyzed, formalized as a transition system that may contain quantitative information. The leaves of our attack trees are labeled with reachability goals in the transition system, and the attack tree semantics is expressed in terms of traces of the system. The main novelty of the proposed framework is that we start with an attack tree which is not fully refined and, by exhibiting paths in the system that are optimal with respect to the quantitative information, we are able to suggest to the security expert which parts of the tree contribute to optimal attacks and should therefore be developed further. Such useful parts of the tree are determined by solving a satisfiability problem in propositional logic.

15:00-15:30 Session 50B: Quantum Computing (FSCD)
15:00
A diagrammatic axiomatisation of fermionic quantum circuits

ABSTRACT. We introduce the fermionic ZW calculus, a string-diagrammatic language for fermionic quantum computing (FQC). After defining a fermionic circuit model, we present the basic components of the calculus, together with their interpretation, and show how the main physical gates of interest in FQC can be represented in the language. We then list our axioms, and derive some additional equations. We prove that the axioms provide a complete equational axiomatisation of the monoidal category whose objects are quantum systems of finitely many local fermionic modes, with operations that preserve or reverse the parity (number of particles mod 2) of states, and the tensor product, corresponding to the composition of two systems, as monoidal product. We achieve this through a procedure that rewrites any diagram in a normal form. We conclude by showing, as an example, how the statistics of a fermionic Mach-Zehnder interferometer can be calculated in the diagrammatic language.

15:30-16:00 Coffee Break
16:00-17:30 Session 51A: CSF 5 minutes talks (CSF)

Short talks by attendees. 

The 5-minute talk schedule is available here.

This is a fun session in which you can describe work in progress, crazy-sounding ideas, interesting questions and challenges, research proposals, or anything else within reason! You can use 2-3 slides, or you can just speak without slides.

Location: Maths LT2
16:00-18:00 Session 51B: Corrado Böhm Memorial (FSCD)
16:00
ALGORAND: A Truly Distributed Ledger

ABSTRACT. A distributed ledger is a tamperproof sequence of data that can be read and augmented by everyone. Distributed ledgers stand to revolutionize the way a democratic society operates. They secure all kinds of traditional transactions, such as payments, asset transfers, and titling, in the exact order in which they occur; and they enable totally new transactions, such as cryptocurrencies and smart contracts. They can remove intermediaries and usher in a new paradigm for trust. As currently implemented, however, distributed ledgers cannot achieve their enormous potential.

Algorand is an alternative, democratic, and efficient distributed ledger. Unlike prior ledgers based on ‘proof of work’, it dispenses with ‘miners’. Indeed, Algorand requires only a negligible amount of computation. Moreover, its transaction history does not ‘fork’ with overwhelming probability: i.e., Algorand guarantees the finality of all transactions.

Finally, Algorand enjoys flexible self-governance. By using its hallmark propose-and-agree process, Algorand can correct its course as necessary or desirable, without any ‘hard forks’.

17:00
Corrado Böhm: the white magician in programming and its semantics

ABSTRACT. Several results of Corrado Böhm will be presented that have made programming more transparent and efficient.

  1. Self-compiling. In his PhD thesis, Corrado carefully presented a program that could translate itself to machine code. This resulted in the bootstrapping technique by which a compiler is used to compile itself.
  2. Eliminating the go-to. With Giuseppe Jacopini, Corrado showed that jumps in programming can be avoided. This resulted in structured programming.
  3. The foundation of functional programming. Corrado was one of the first to realize that the computational model of the lambda calculus can be used for programming, by introducing the CUCH machine.
  4. Fine structure of lambda terms. In a paper in Italian, Corrado studied which lambda terms cannot be equated. This resulted in a deep analysis of lambda models.
  5. A simple self-evaluator in lambda calculus: E = <<K,S,C>>, where K, S, C are the well-known combinators and <M_1,...,M_n> = \lambda z. z M_1 ... M_n. Here, unexpectedly, the initials of Stephen Cole Kleene appear, who constructed the first self-evaluator in the 1930s. This has the flavor of a magic trick!
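
As a small worked example of the tupling notation in item 5: projecting the i-th component out of a tuple takes just a couple of beta steps (for closed components M_j),

    <M_1,...,M_n> (\lambda x_1 ... x_n. x_i)
      = (\lambda z. z M_1 ... M_n) (\lambda x_1 ... x_n. x_i)
      -> (\lambda x_1 ... x_n. x_i) M_1 ... M_n
      ->> M_i
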
16:00-18:00 Session 51C (ITP)
Location: Blavatnik LT1
16:00
Understanding Parameters of Deductive Verification: an Empirical Investigation of KeY

ABSTRACT. As formal verification of software systems is a complex task comprising many algorithms and heuristics, modern theorem provers offer numerous parameters that are to be selected by a user to control how a piece of software is verified. Evidently, the number of parameters even increases with each new release. One challenge is that default parameters are often insufficient to close proofs automatically and are not optimal in terms of verification effort. The verification phase thus becomes barely accessible to non-experts, who typically must follow a time-consuming trial-and-error strategy to choose the right parameters for even trivial pieces of software. To aid users of deductive verification, we apply machine learning techniques to empirically investigate which parameters and combinations thereof impair or improve provability and verification effort. We exemplify our procedure on the deductive verification system KeY 2.6.1 and formulate 38 hypotheses, of which only two have been invalidated. We identified parameters that exhibit a trade-off between high provability and low verification effort, enabling the possibility to prioritize the selection of a parameter for either direction. Our insights give tool builders a better understanding of their control parameters and constitute a stepping stone towards automated deductive verification and better applicability of verification tools for non-experts.

16:30
Boosting the Reuse of Formal Specifications

ABSTRACT. Advances in theorem proving have enabled the emergence of a variety of formal developments that, over the years, have resulted in large corpora of formalizations. For example, the NASA PVS Library is a collection of 55 formal developments written in the Prototype Verification System (PVS) over a period of almost 30 years and containing more than 28,000 proofs. Unfortunately, the simple accumulation of formal developments does not guarantee their reusability. In fact, in formal systems with very expressive specification languages, it is often the case that a particular conceptual object is defined in different ways. This paper presents a technique to establish sound connections between formal definitions. Such connections support the possibility of (partial) borrowing of proved results from one formal description into another, improving the reusability of formal developments. The technique is described using concepts from the field of universal algebra and algebraic specification. The technique is illustrated with concrete examples taken from formalizations available in the NASA PVS Library.

17:00
Formalization of a Polymorphic Subtyping Algorithm
SPEAKER: Jinxu Zhao

ABSTRACT. Modern functional programming languages such as Haskell support sophisticated forms of type-inference, even in the presence of higher-order polymorphism. Central to such advanced forms of type-inference is an algorithm for polymorphic subtyping. This paper formalizes an algorithmic specification for polymorphic subtyping in the Abella theorem prover. The algorithmic specification is shown to be decidable, and sound and complete with respect to Odersky and Laufer's well-known declarative formulation of polymorphic subtyping. While the meta-theoretical results are not new, as far as we know our work is the first to mechanically formalize them. Moreover, our algorithm differs from those currently in the literature by using a novel approach based on worklist judgements. Worklist judgements simplify the propagation of information required by the unification process during subtyping. Furthermore they enable a simple formulation of the meta-theoretical properties, which can be easily encoded in theorem provers.

17:30
A Formal Equational Theory for Call-by-Push-Value

ABSTRACT. Establishing that two programs are contextually equivalent is hard, yet essential for reasoning about semantics preserving program transformations such as compiler optimizations. We adapt Lassen's normal form bisimulations technique to establish the soundness of equational theories for both an untyped call-by-value lambda calculus and a variant of Levy's call-by-push-value language. We demonstrate that our equational theory significantly simplifies the verification of optimizations.

16:00-18:00 Session 51D (LICS)
Location: Maths LT1
16:00
One Theorem to Rule Them All: A Unified Translation of LTL into ω-Automata

ABSTRACT. We present a unified translation of LTL formulas into deterministic Rabin automata, limit-deterministic Büchi automata, and nondeterministic Büchi automata. The translations yield automata of asymptotically optimal size (double or single exponential, respectively). All three translations are derived from one single Master Theorem of purely logical nature. The Master Theorem decomposes the language of a formula into a positive boolean combination of languages that can be translated into ω-automata by elementary means. In particular, the breakpoint, Safra, and ranking constructions used in other translations are not needed.

16:20
A Simple and Optimal Complementation Algorithm for Büchi Automata
SPEAKER: Joel Allred

ABSTRACT. Complementation of Büchi automata is well known for being complex, as Büchi automata in general are nondeterministic. In the worst case, a state-space growth of $O((0.76n)^n)$ cannot be avoided. Experimental results suggest that complementation algorithms perform better on average when they are structurally simple.

In this paper, we present a simple algorithm for complementing Büchi automata, operating directly on subsets of states, structured into state-set tuples (similar to slices), and producing a deterministic automaton. The second step in the construction is then a complementation procedure that resembles the straightforward complementation algorithm for deterministic Büchi automata, the latter algorithm actually being a special case of our construction. Finally, we prove our construction to be optimal, i.e. having an upper bound in $O((0.76n)^n)$, and furthermore calculate the $0.76$ factor in a novel exact way.

16:40
The State Complexity of Alternating Automata

ABSTRACT. This paper studies the complexity of languages of finite words using automata theory. To go beyond the class of regular languages, we consider infinite automata and the notion of state complexity defined by Karp. We look at alternating automata as introduced by Chandra, Kozen and Stockmeyer: such machines run independent computations on the word and gather their answers through boolean combinations.

We devise a lower bound technique relying on boundedly generated lattices of languages, and give two applications of this technique. The first is a hierarchy theorem, stating that there are languages of arbitrarily high polynomial alternating state complexity, and the second is a linear lower bound on the alternating state complexity of the prime numbers written in binary. This second result strengthens a result of Hartmanis and Shank from 1968, which implies an exponentially worse lower bound for the same model.

17:00
Automaton-Based Criteria for Membership in CTL

ABSTRACT. Computation Tree Logic (CTL) is widely used in formal verification; however, unlike linear temporal logic (LTL), its connection to automata over words and trees is not yet fully understood. Moreover, the long-sought connection between LTL and CTL is still missing: it is not known whether their common fragment is decidable, and there are very limited necessary conditions and sufficient conditions for checking whether an LTL formula is definable in CTL.

We provide sufficient conditions and necessary conditions for LTL formulas and omega-regular languages to be expressible in CTL. The conditions are automaton-based; we first tighten the automaton characterization of CTL to the class of Hesitant Alternating Linear Tree Automata (HLT), and then derive the conditions by relating the cycles of a word automaton for a given omega-regular language to the cycles of a potentially equivalent HLT.

The new conditions allow us to simplify proofs of known results on languages that are definable, or not, in CTL, as well as to prove new results. In particular, they allow us to refute a conjecture by Clarke and Draghicescu from 1988 regarding a condition for a CTL* formula to be expressible in CTL.

17:20
Separability by piecewise testable languages and downward closures beyond subwords

ABSTRACT. We introduce a flexible class of well-quasi-orderings (WQOs) on words that generalizes the ordering of (not necessarily contiguous) subwords. Each such WQO induces a class of piecewise testable languages (PTLs) as Boolean combinations of upward closed sets. In this way, a range of regular language classes arises as PTLs. Moreover, each of the WQOs guarantees regularity of all downward closed sets. We consider two problems. First, we study which (perhaps non-regular) language classes permit a decision procedure to decide whether two given languages are separable by a PTL with respect to a given WQO. Second, we want to effectively compute downward closures with respect to these WQOs. Our first main result is that, for each of the WQOs, under mild assumptions, both problems reduce to the simultaneous unboundedness problem (SUP) and are thus solvable for many powerful system classes. In the second main result, we apply the framework to show decidability of separability of regular languages by $\mathcal{B}\Sigma_1[<, \mathsf{mod}]$, a fragment of first-order logic with modular predicates.
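
The archetypal instance of such a WQO is the scattered-subword ordering; a short Python check (a standard idiom, included only to fix intuitions) is:

    def is_subword(u, w):
        it = iter(w)
        return all(c in it for c in u)  # each character of u occurs, in order, in w

    print(is_subword("ab", "xaxxb"))  # True
    print(is_subword("ba", "xaxxb"))  # False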

17:40
Regular Transducer Expressions for Regular Transformations over infinite words
SPEAKER: Vrunda Dave

ABSTRACT. Functional MSO transductions, deterministic two-way transducers, as well as streaming string transducers are all equivalent models for regular functions. In this paper, we show that every regular function, either on finite words or on infinite words, captured by a deterministic two-way transducer, can be described with a regular transducer expression (RTE). For infinite words, the transducer uses Muller acceptance and omega-regular look-ahead. RTEs are constructed from constant functions using the combinators if-then-else (deterministic choice), Hadamard product, and unambiguous versions of the Cauchy product, the 2-chained Kleene-iteration and the 2-chained omega-iteration. Our proof works for transformations of both finite and infinite words, extending the result on finite words of Alur et al. in LICS'14. In order to construct an RTE associated with a deterministic two-way Muller transducer with look-ahead, we introduce the notion of transition monoid for such two-way transducers, where the look-ahead is captured by some backward deterministic Büchi automaton. Then, we use an unambiguous version of Imre Simon's famous forest factorization theorem in order to derive a "good" (omega-)regular expression for the domain of the two-way transducer. "Good" expressions are unambiguous, and Kleene-plus as well as omega-iterations are used only on subexpressions corresponding to idempotent elements of the transition monoid. The combinator expressions are finally constructed by structural induction on the "good" (omega-)regular expression describing the domain of the transducer.
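
As a hedged reminder of the product combinators named above (standard definitions in the style of Alur et al., not quoted from the paper): the Hadamard product runs both functions on the whole input, while the unambiguous Cauchy product splits it,

\[
(f\odot g)(w) = f(w)\cdot g(w), \qquad
(f\cdot g)(w) = f(u)\cdot g(v) \ \text{ for the unique factorization } w=uv \text{ with } u\in\mathrm{dom}(f),\ v\in\mathrm{dom}(g);
\]

unambiguity of the factorization is what keeps the Cauchy product well defined.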

16:00-18:00 Session 51E (LICS)
Location: Maths LT3
16:00
Enriching a Linear/Non-linear Lambda Calculus: A Programming Language for String Diagrams

ABSTRACT. Linear/non-linear (LNL) models, as described by Benton, soundly model an LNL term calculus and LNL logic closely related to intuitionistic linear logic. Every such model induces a canonical enrichment that we show soundly models an LNL lambda calculus for string diagrams, introduced by Rios and Selinger (with primary application in quantum computing). Our abstract treatment of this language leads to simpler concrete models compared to those presented so far. We also extend the language with general recursion and prove soundness. Finally, we present an adequacy result for the diagram-free fragment of the language, which corresponds to a modified version of Benton and Wadler's adjoint calculus with recursion.

16:20
An algebraic theory of Markov processes
SPEAKER: Giorgio Bacci

ABSTRACT. Markov processes are a fundamental model of probabilistic transition systems and are the underlying semantics of probabilistic programs. We give an algebraic axiomatization of Markov processes using the framework of quantitative equational reasoning introduced in LICS 2016. We present the theory in a structured way using work of Hyland et al. on combining monads. We take the interpolative barycentric algebras of LICS 2016, which capture the Kantorovich metric, and combine them with a theory of contractive operators to give the required axiomatization of Markov processes for both discrete and continuous state spaces. This work, apart from its intrinsic interest, shows how one can extend the general notion of combining effects to the quantitative setting.
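
The metric captured by the interpolative barycentric algebras is the Kantorovich distance, which by the classical duality can be written as (a standard fact, recalled for context)

\[
\mathcal{K}(\mu,\nu)=\sup\Bigl\{\Bigl|\int f\,d\mu-\int f\,d\nu\Bigr| \;:\; f\ \text{1-Lipschitz}\Bigr\},
\]

and in the quantitative equational framework equations are indexed by distances, with $s=_\varepsilon t$ read as "the interpretations of $s$ and $t$ are at distance at most $\varepsilon$".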

16:40
Boolean-Valued Semantics for Stochastic Lambda-Calculus
SPEAKER: Radu Mardare

ABSTRACT. The ordinary untyped lambda-calculus has a set-theoretic model proposed in two related forms by Scott and Plotkin in the 1970s. Recently Scott saw how to extend such $\lambda$-calculus models using random variables in a standard way. However, to do reasoning and to add further features, it is better to interpret the construction in a higher-order Boolean-valued model theory using the standard measure algebra. In this paper we develop the semantics of an extended stochastic lambda-calculus suitable for a simple probabilistic programming language, and we exhibit a number of key equations satisfied by the terms of our example language. The terms are interpreted using a continuation-style semantics along with an additional argument, an infinite sequence of coin tosses that serves as a source of randomness. The construction of the model requires a subtle measure-theoretic analysis of the space of coin-tossing sequences. We also introduce a fixed-point operator as a new syntactic construct, as beta-reduction turns out not to be sound for all terms in our semantics. Finally, we develop a new notion of equality between terms valued by elements of the measure algebra, allowing one to reason about terms that may not be equal almost everywhere. We hope this provides a new framework for reasoning about probabilistic programs and their properties of higher type.
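
A toy Python rendering of this "extra argument" idea (purely illustrative; the names and representation are assumptions, not the paper's construction): each probabilistic choice consumes one toss from an infinite coin sequence threaded through the evaluation:

    import random

    def coin_stream(seed=0):
        """An infinite sequence of fair coin tosses: the single source of
        randomness threaded through evaluation."""
        rng = random.Random(seed)
        while True:
            yield rng.randint(0, 1)

    def flip(coins, then_branch, else_branch):
        # each probabilistic choice consumes exactly one toss; the branches
        # share whatever remains of the same sequence
        return then_branch(coins) if next(coins) else else_branch(coins)

    # e.g. a geometric random value: number of tosses before the first head
    def geometric(coins, n=0):
        return flip(coins, lambda c: n, lambda c: geometric(c, n + 1))

    print(geometric(coin_stream()))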

17:00
Sound up-to techniques and Complete abstract domains

ABSTRACT. Abstract interpretation is a method to automatically find invariants of programs or pieces of code whose semantics is given via least fixed-points. Up-to techniques have been introduced as enhancements of coinduction, an abstract principle to prove properties expressed as greatest fixed-points.

While abstract interpretation is always sound by definition, the soundness of up-to techniques needs some ingenuity to be proven. For completeness, the situation is reversed: up-to techniques are always complete, while abstract domains are not.

In this work we show that, under reasonable assumptions, there is an evident connection between sound up-to techniques and complete abstract domains.
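
The two proof principles being related can be stated side by side (standard formulations, recalled for orientation):

\[
F(X)\sqsubseteq X \implies \mathrm{lfp}\,F\sqsubseteq X
\qquad\qquad
R\sqsubseteq B(f(R)) \implies R\sqsubseteq \mathrm{gfp}\,B \ \ \text{(for a sound up-to function } f\text{)}.
\]

Abstract interpretation over-approximates the least fixed-point on the left; a sound up-to function $f$ strengthens the coinduction principle on the right.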

17:20
Every λ-Term is Meaningful for the Infinitary Relational Model

ABSTRACT. Infinite types and formulas are known to have curious and unsound behaviors. For instance, they make it possible to type Ω, the self-application term applied to itself, and thus do not ensure any form of normalization/productivity. Moreover, in most infinitary frameworks, it is not difficult to define a type R that can be assigned to every λ-term. However, these observations do not say much about what coinductive (i.e. infinitary) type grammars are able to provide: it is for instance very difficult to know what types (besides R) can be assigned to a given term in this setting. We begin with a discussion on the expressivity of different forms of infinite types. Then, using the resource-awareness of sequential intersection types (system S) and tracking, we prove that infinite types are able to characterize the order (arity) of every λ-term and that, in the infinitary extension of the relational model, every term has a "meaning", i.e. a non-empty denotation. From the technical point of view, we must deal with the total lack of productivity guarantees for typable terms: we do so by importing methods inspired by first-order model theory.
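
For instance, a single recursive type equation already types the paradigmatic non-normalizing term (a standard observation, included to illustrate the unsoundness mentioned above):

\[
\mathsf{R}=\mathsf{R}\to\mathsf{R}
\quad\Longrightarrow\quad
\lambda x.\,x\,x : \mathsf{R}
\quad\text{and}\quad
\Omega=(\lambda x.\,x\,x)(\lambda x.\,x\,x) : \mathsf{R}.
\]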

17:40
Probabilistic Böhm Trees and Probabilistic Separation

ABSTRACT. We study the notion of observational equivalence in the call-by-name probabilistic lambda-calculus, where two terms are said to be observationally equivalent if, under any context, their head reductions converge with the same probability. Our goal is to generalise the separation theorem to this probabilistic setting. To do so, we define probabilistic Böhm trees and probabilistic Nakajima trees, and we mix the well-known Böhm-out technique with some new techniques to manipulate and separate probability distributions.

16:00-18:00 Session 51F: CDCL (SAT)
16:00
Using Combinatorial Benchmarks to Probe the Reasoning Power of Pseudo-Boolean Solvers
SPEAKER: Jan Elffers

ABSTRACT. We study cdcl-cuttingplanes, Open-WBO, and Sat4j, three successful solvers from the Pseudo-Boolean Competition 2016, and evaluate them by performing experiments on crafted benchmarks designed to be trivial for the cutting planes (CP) proof system underlying pseudo-Boolean (PB) proof search, yet potentially tricky for PB solvers. Our results demonstrate severe shortcomings in state-of-the-art PB solving techniques. Despite the fact that our benchmarks have linear-size tree-like CP proofs, the solvers often perform quite badly even for very small instances. In our analysis, this shows that solvers need to explore stronger methods of pseudo-Boolean reasoning within cutting planes. We make an empirical observation from the competition data that many of the easy crafted instances are also infeasible over the rational numbers, or have small strong backdoors to PB instances without rational solutions. This raises the intriguing question of whether the existence of such backdoors can be correlated with easiness/hardness. However, for some of our constructed benchmark families even rationally infeasible instances are completely beyond reach. This indicates that PB solvers need to get better not only at Boolean reasoning but also at linear programming. Finally, we compare CP-based solvers with CDCL and MIP solvers. For those of our benchmarks where the natural CNF encodings admit efficient resolution proofs, we see that the CDCL-based solver Open-WBO is orders of magnitude faster than the CP-based solvers cdcl-cuttingplanes and Sat4j (though it seems very sensitive to the ordering of the input). The MIP solver Gurobi, meanwhile, beats all of these solvers across the board. These experimental results point to several crucial challenges in the quest for more efficient pseudo-Boolean solvers, and we also believe that a further study of our benchmarks could shed important light on the potential and limitations of current state-of-the-art PB solving.

16:30
Machine Learning-based Restart Policy for CDCL SAT Solvers
SPEAKER: Jia Liang

ABSTRACT. Restarts are a critically important heuristic in most modern conflict-driven clause-learning (CDCL) SAT solvers. The precise reason as to why and how restarts enable CDCL solvers to scale efficiently remains obscure. In this paper we address this question, and provide some answers that enabled us to design a new effective machine learning-based restart policy. Specifically, we provide evidence that restarts improve the quality of learnt clauses as measured by one of the best-known clause quality metrics, namely, literal block distance (LBD). More precisely, we show that more frequent restarts decrease the LBD of learnt clauses, which in turn improves solver performance. We also note that too many restarts can be harmful because of the computational overhead of rebuilding the search tree from scratch too frequently. With this tradeoff in mind, between learning better clauses and the computational overhead of rebuilding the search tree, we introduce a new machine learning-based restart policy that predicts the quality of the next learnt clause based on the history of previously learnt clauses. The restart policy erases the solver’s search tree during its run if it predicts that the quality of the next learnt clause is below some dynamic threshold determined by the solver’s history on the given input. Our machine learning-based restart policy is based on two observations gleaned from our study of LBDs of learnt clauses. First, we discover that high LBD percentiles can be approximated with z-scores of the normal distribution. Second, we find that LBDs, viewed as a sequence, are correlated, and hence the LBDs of past learnt clauses can be used to predict the LBD of future ones. With these observations in place, and techniques to exploit them, our new restart policy is shown to be effective over a large benchmark suite from the SAT Competitions 2014 to 2017.
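
As a rough illustration of the prediction idea, consider the following hypothetical simplification in Python (not the authors' exact policy): predict the next LBD from the correlated history with an exponential moving average, and request a restart when the prediction crosses a dynamic, z-score-style threshold computed from the run so far:

    # Hypothetical simplification of an LBD-prediction restart policy.
    class LbdRestartPolicy:
        def __init__(self, alpha=0.2, z=1.0, warmup=50):
            self.alpha, self.z, self.warmup = alpha, z, warmup
            self.ema = None                            # prediction of the next LBD
            self.n, self.mean, self.m2 = 0, 0.0, 0.0   # Welford running statistics

        def on_learnt_clause(self, lbd):
            """Record a learnt clause's LBD; return True to request a restart."""
            self.n += 1
            d = lbd - self.mean                  # update long-term mean/variance
            self.mean += d / self.n
            self.m2 += d * (lbd - self.mean)
            # exploit the correlation of the LBD sequence: EMA as predictor
            self.ema = lbd if self.ema is None else \
                self.alpha * lbd + (1 - self.alpha) * self.ema
            if self.n < self.warmup:             # do not trust early statistics
                return False
            std = (self.m2 / (self.n - 1)) ** 0.5
            # high predicted LBD = low predicted quality, so restart
            return self.ema > self.mean + self.z * std

A real implementation would call on_learnt_clause from the CDCL loop right after conflict analysis.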

17:00
Chronological Backtracking
SPEAKER: Vadim Ryvchin

ABSTRACT. Non-Chronological Backtracking (NCB) has been implemented in every modern CDCL SAT solver since the original CDCL solver GRASP. NCB’s importance has never been questioned. This paper argues that NCB is not always helpful. We show how one can implement the alternative to NCB, Chronological Backtracking (CB), in a modern SAT solver. We demonstrate that CB improves the performance of the winner of the latest SAT Competition, Maple-LCM-Dist, and the winner of the latest MaxSAT Evaluation, Open-WBO.
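
The difference between the two backtracking schemes fits in a few lines (a schematic Python sketch under simplifying assumptions; the clause representation is hypothetical):

    def backtrack_level(clause_levels, current_level, chronological):
        """Decision level to return to after learning an asserting clause;
        clause_levels are the decision levels of the clause's literals."""
        if chronological:                        # CB: undo only the latest level
            return current_level - 1
        levels = sorted(set(clause_levels), reverse=True)
        return levels[1] if len(levels) > 1 else 0   # NCB: assertion level

In both cases the learnt clause is kept; the schemes differ only in how much of the trail is unwound.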

17:30
Centrality-Based Improvements to CDCL Heuristics
SPEAKER: Sima Jamali

ABSTRACT. There are many reasons to think that SAT solvers should be able to exploit formula structure, but no standard techniques in modern CDCL solvers make explicit use of structure. We describe modifications to modern decision and clause-deletion heuristics that exploit formula structure by using variable centrality. We show that these improve the performance of Maple LCM Dist, the winning solver from the Main Track of the 2017 SAT Competition. In particular, using centrality in clause deletion results in solving 9 more formulas from the 2017 Main Track. We also look at a number of measures of solver performance and learned-clause quality to see how the changes affect solver execution.
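
One plausible way to compute such centrality scores from a CNF formula, sketched in Python with networkx (illustrative only; the paper's exact graph construction and centrality measure may differ):

    from itertools import combinations
    import networkx as nx

    def variable_centrality(clauses):
        """Betweenness centrality of variables in the primal graph of a CNF
        (clauses as lists of DIMACS-style integer literals; two variables
        are adjacent iff they co-occur in some clause)."""
        g = nx.Graph()
        for clause in clauses:
            vs = {abs(lit) for lit in clause}
            g.add_nodes_from(vs)
            g.add_edges_from(combinations(sorted(vs), 2))
        # exact betweenness is costly; a solver would approximate by
        # sampling (networkx: betweenness_centrality(g, k=...))
        return nx.betweenness_centrality(g)

The scores could then bias decision-variable bumping or protect clauses containing high-centrality variables from deletion.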

19:00-21:30 FLoC reception at Ashmolean Museum (FLoC)

FLoC reception at Ashmolean Museum. Drinks and canapés available from 7pm (pre-booking via FLoC registration system required; guests welcome).