09:00 | SPEAKER: Christian Doczkal ABSTRACT. We give a formal and constructive proof in Coq/Ssreflect of the known result that the graphs of treewidth two are exactly those that do not admit K4 as a minor. This result is a milestone towards a formal proof of the recent result that isomorphism of treewidth-two graphs can be finitely axiomatized. The proof is based on a function extracting terms from K4-free graphs in such a way that the interpretation of an extracted term yields a treewidth-two graph isomorphic to the original graph. |
09:30 | SPEAKER: Simon Jantsch ABSTRACT. We present a formalization of a translation from LTL formulae to generalized Büchi Automata in the HOL4 theorem prover. Translations from temporal logics to automata are at the core of model checking algorithms based on automata-theoretic techniques. The translation we verify proceeds in two steps: it produces Very Weak Alternating Automata at an intermediate stage, and then ultimately produces a generalized Büchi Automaton. After verifying both transformations, we also encode both of these automata models using a generic, functional graph type, and use the CakeML compiler to generate fully verified machine code implementing the translation. |
10:00 | Verification of PCP-Related Computational Reductions in Coq SPEAKER: Yannick Forster ABSTRACT. We formally verify several computational reductions concerning the Post correspondence problem (PCP) using the proof assistant Coq. Our verification includes a reduction of the halting problem for Turing machines to string rewriting, a reduction of string rewriting to PCP, and reductions of PCP to the intersection problem and the palindrome problem for context-free grammars. |
09:00 | Species, Profunctors and Taylor Expansion Weighted by SMCC--A Unified Framework for Modelling Nondeterministic, Probabilistic and Quantum Programs-- SPEAKER: Takeshi Tsukada ABSTRACT. Motivated by a tight connection between Joyal's combinatorial species and quantitative models of linear logic, this paper introduces \emph{weighted generalised species} (or \emph{weighted profunctors}), where weights are morphisms of a given symmetric monoidal closed category (SMCC). For each SMCC W, we show that the category of W-weighted profunctors is a Lafont category, a categorical model of linear logic with exponential. As a model of programming languages, the construction of this paper gives a unified framework that induces adequate models of nondeterministic, probabilistic, algebraic and quantum programming languages by appropriately choosing the weight SMCC. |
09:20 | Logical paradoxes in quantum computation ABSTRACT. The precise features of quantum theory enabling quantum computational advantage are unclear. Contextuality has emerged as a promising hypothesis: e.g. the magic states needed to practically achieve quantum computation are contextual. Strong contextuality, as defined by Abramsky-Brandenburger, is an extremal form of contextuality describing systems that exhibit logically paradoxical behaviour. After introducing number-theoretic techniques for constructing exotic quantum paradoxes, we give large families of strongly contextual magic states that are optimal in the sense that they enable deterministic injection of gates of the Clifford hierarchy. We thereby bolster a refinement of the resource theory of contextuality that emphasises the computational power of logical paradoxes. |
09:40 | SPEAKER: Renaud Vilmart ABSTRACT. We introduce the first complete and approximatively universal diagrammatic language for quantum mechanics. We make the ZX-Calculus, a diagrammatic language introduced by Coecke and Duncan, complete for the so-called Clifford+T quantum mechanics by adding two new axioms to the language. The completeness of the ZX-Calculus for Clifford+T quantum mechanics was one of the main open questions in categorical quantum mechanics. We prove the completeness of the Clifford+T ZX-Calculus using the recently studied ZW-Calculus, a calculus dealing with integer matrices. We also prove that this fragment of the ZX-Calculus represents exactly all the matrices over some finite dimensional extension of the ring of dyadic rationals. |
10:00 | SPEAKER: Kang Feng Ng ABSTRACT. Categorical quantum mechanics places finite-dimensional quantum theory in the context of compact closed categories, with an emphasis on diagrammatic reasoning. In this framework, two equational diagrammatic calculi have been proposed for pure-state qubit quantum computing: the ZW calculus, developed by Coecke, Kissinger and the first author for the purpose of qubit entanglement classification, and the ZX calculus, introduced by Coecke and Duncan to give an abstract description of complementary observables. Neither calculus, however, provided a complete axiomatisation of their model. In this paper, we present extended versions of ZW and ZX, and show their completeness for pure-state qubit theory, thus solving two major open problems in categorical quantum mechanics. First, we extend the original ZW calculus to represent states and linear maps with coefficients in an arbitrary commutative ring, and prove completeness by a strategy that rewrites all diagrams into a normal form. We then extend the language and axioms of the original ZX calculus, and show their completeness for pure-state qubit theory through a translation between ZX and ZW specialised to the field of complex numbers. This translation expands the one used by Jeandel, Perdrix, and Vilmart to derive an axiomatisation of the approximately universal Clifford+T fragment; restricting the field of complex numbers to a suitable subring, we obtain an alternative axiomatisation of the same theory. |
10:20 | SPEAKER: Renaud Vilmart ABSTRACT. The ZX-Calculus is a graphical language for diagrammatic reasoning in quantum mechanics and quantum information theory. An axiomatisation has recently been proven to be complete for an approximatively universal fragment of quantum mechanics, the so-called Clifford+T fragment. We focus here on the expressive power of this axiomatisation beyond Clifford+T quantum mechanics. We consider the full pure qubit quantum mechanics, and mainly prove two results: (i) First, the axiomatisation for Clifford+T quantum mechanics is also complete for all equations involving a certain kind of linear diagrams. The linearity of the diagrams reflects the phase group structure, an essential feature of the ZX-calculus. In particular, all the axioms of the ZX-calculus involve linear diagrams. (ii) We also show that the axiomatisation for Clifford+T is not complete in general, but can be completed by adding a single (non-linear) axiom, providing a simpler axiomatisation of the ZX-calculus for pure quantum mechanics than the one recently introduced by Ng & Wang. |
09:00 | Quantitative Behavioural Reasoning for Higher-order Effectful Programs: Applicative Distances ABSTRACT. This paper studies the quantitative refinement of Abramsky's applicative similarity and bisimilarity in the context of a generalisation of Fuzz, a call-by-value $\lambda$-calculus with a linear type system that can express program sensitivity, enriched with algebraic operations \emph{à la} Plotkin and Power. To do so, a general, abstract framework for studying behavioural relations taking values over quantales is defined according to Lawvere's analysis of generalised metric spaces. Barr's notion of relator (or lax extension) is then extended to quantale-valued relations, adapting and extending results from the field of monoidal topology. Abstract notions of quantale-valued effectful applicative similarity and bisimilarity are then defined and proved to be a compatible generalised metric (in the sense of Lawvere) and pseudometric, respectively, under mild conditions. |
09:20 | A Generalized Modality for Recursion ABSTRACT. Nakano's *later* modality allows types to express that the output of a function does not immediately depend on its input, and thus that computing its fixpoint is safe. This idea, guarded recursion, has proved useful in various contexts, from functional programming with infinite data structures to formulations of step-indexing internal to type theory. Categorical models have revealed that the later modality corresponds in essence to a simple reindexing of the discrete time scale. Unfortunately, existing guarded type theories suffer from significant limitations for programming purposes. These limitations stem from the fact that the later modality is not expressive enough to capture precise input-output dependencies of functions. As a consequence, guarded type theories reject many productive definitions. Combining insights from guarded type theories and synchronous programming languages, we propose a new modality for guarded recursion. This modality can apply to a type any sufficiently well-behaved reindexing of the time scale. We call such reindexings *time warps*. Several modalities from the literature, including later, correspond to fixed time warps, and thus arise as special cases of ours. We integrate our modality into a typed lambda-calculus. We equip this calculus with an operational semantics, as well as an adequate denotational semantics in the topos of trees, a standard categorical model for guarded recursion. Building on top of categorical ideas, we describe an abstract type-checking algorithm whose completeness entails the coherence of both semantics. |
09:40 | A functional interpretation with state ABSTRACT. We present a new variant of Goedel's functional interpretation in which extracted programs, rather than being pure terms of system T, interact with a global state. The purpose of the state is to store relevant information about the underlying mathematical environment. Because the validity of extracted programs can depend on the validity of the state, this offers us an alternative way of dealing with the contraction problem. Furthermore, this new formulation of the functional interpretation gives us a clear semantic insight into the computational content of proofs, and provides us with a way of improving the efficiency of extracted programs. |
10:00 | SPEAKER: Wai-Tak Cheung ABSTRACT. The popular library TensorFlow (TF) has familiarised the mainstream machine-learning community with programming language concepts such as dataflow computing and automatic differentiation. Additionally, it has introduced some genuinely new syntactic and semantic programming concepts. In this paper we study one such new concept: a subtle but important feature of TensorFlow, the ability to extract and manipulate the state of a computation graph. This feature allows the convenient specification of parameterised models by freeing the programmer of much of the bureaucratic burden of managing them, while still permitting the use of generic, model-independent, search and optimisation algorithms. We study this new language feature, which we call `graph abstraction', in the context of the call-by-value lambda calculus, using the recently developed Dynamic Geometry of Interaction formalism. We give a simple type system guaranteeing the safety of graph abstraction, and we also show the safety of critical language properties such as garbage collection and the beta law. The semantic model also suggests that the feature could be implemented in a general-purpose functional language reasonably efficiently. |
10:20 | Degrees of Relatedness - A Unified Framework for Parametricity, Irrelevance, Ad Hoc Polymorphism, Intersections, Unions and Algebra in Dependent Type Theory SPEAKER: Andreas Nuyts ABSTRACT. Dependent type theory allows us to write programs and to prove properties about those programs in the same language. However, some properties do not require much proof, as they are evident from a program's implementation, e.g. if a polymorphic program is not ad hoc but relationally parametric, then we get parametricity theorems for free. If we want to safely shortcut proofs by relying on the evident good behaviour of a program, then we need a type-level guarantee that the program is indeed well-behaved. This can be achieved by annotating function types with a modality describing the behaviour of functions. We consider a dependent type system with modalities for relational parametricity, irrelevance (i.e. type-checking time erasability of an argument) and ad hoc polymorphism. The interplay of three modalities and dependent types raises a number of questions. For example: If a term depends on a variable with a given modality, then how should its type depend on it? Are all modalities always applicable, e.g. should we consider parametric functions from the booleans to the naturals? Do we need any further modalities in order to properly reason about these three? We develop a type system, based on a denotational presheaf model, that answers these questions. The core idea is to equip every type with a number of relations --- just equality for small types, but more for larger types --- and to describe function behaviour by saying how functions act on those relations. The system has modality-aware equality judgements (ignoring irrelevant parts) and modality-aware proving operators (for proving free theorems) which are even self-applicable. It also supports sized types, some form of intersection and union types, and parametric quantification over algebraic structures. We prove soundness in a denotational presheaf model. |
09:00 | ABSTRACT. Separation logic (SL) has gained widespread popularity as a formal foundation of tools that analyze and verify heap-manipulating programs. Its great asset lies in its assertion language, which can succinctly express how data structures are laid out in memory, and its discipline of local reasoning, which mimics human intuition about how to prove heap programs correct. This talk discusses approaches to automated reasoning in separation logics using SMT solvers. I will present fragments of SL that admit reductions to decidable first-order theories. I will also discuss incomplete approaches based on combinations of SL-style proof-theoretic reasoning and SMT techniques for some nonstandard models of SL. These techniques have been implemented in the deductive program verifier GRASShopper. |
10:00 | SPEAKER: Pascal Fontaine ABSTRACT. Formal methods applications often rely on SMT solvers to automatically discharge proof obligations. SMT solvers handle quantified formulas using incomplete heuristic techniques like E-matching, and often resort to model-based quantifier instantiation (MBQI) when these techniques fail. This paper revisits enumerative instantiation, a technique that considers instantiations based on exhaustive enumeration of ground terms. Although simple, we argue that enumerative instantiation can supplement other instantiation techniques and be a viable alternative to MBQI for valid proof obligations. We first present a stronger Herbrand Theorem, better suited as a basis for the instantiation loop used in SMT solvers; it furthermore requires considering fewer instances than classical Herbrand instantiation. Based on this result, we present different strategies for combining enumerative instantiation with other instantiation techniques in an effective way. The experimental evaluation shows that the implementation of these new techniques in the SMT solver CVC4 leads to significant improvements in several benchmark libraries, including many stemming from verification efforts. |
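To illustrate the basic ingredient of the approach, exhaustive enumeration of ground terms, rather than the paper's algorithm or CVC4's implementation, here is a small Python sketch; the signature, predicate name and quantified template are made up for the example.

```python
# Minimal sketch of exhaustive ground-term enumeration, the ingredient behind
# enumerative instantiation. Signature and quantified template are hypothetical.
from itertools import islice

CONSTANTS = ["a", "b"]
UNARY_FUNS = ["f", "g"]

def ground_terms():
    """Enumerate the Herbrand universe breadth-first: a, b, f(a), g(a), f(b), ..."""
    frontier = list(CONSTANTS)
    while True:
        yield from frontier
        frontier = [f"{fun}({t})" for t in frontier for fun in UNARY_FUNS]

def instances(template, var, n):
    """First n ground instances of a universally quantified template."""
    return [template.replace(var, t) for t in islice(ground_terms(), n)]

print(instances("P(x) => P(f(x))", "x", 6))
```

A solver would add such ground instances to its ground abstraction round by round, interleaved with other instantiation strategies, which is the kind of combination the paper studies.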
09:30 | Symbolic Side-Channel Analysis for Probabilistic Programs SPEAKER: Pasquale Malacaria ABSTRACT. In this paper we describe symbolic side-channel analysis techniques for detecting and quantifying information leakage, given in terms of Shannon and Min Entropy. Measuring the precise leakage is challenging due to the randomness and noise often present in program executions and side-channel observations. We account for this noise by introducing additional (symbolic) program inputs which are interpreted {\em probabilistically}, using symbolic execution with {\em parameterized} model counting. We also explore an approximate sampling approach for increased scalability. In contrast to typical Monte Carlo techniques, our approach works by sampling symbolic paths, representing multiple concrete paths, and uses pruning to accelerate computation and guarantee convergence to the optimal results. The key novelty of our approach is to provide bounds on the leakage that are provably under- and over-approximating the real leakage. We implemented the techniques in the Symbolic PathFinder tool and we demonstrate them on Java programs. |
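For orientation, the two leakage measures named in the abstract are usually defined as follows (standard information-theoretic definitions, stated here only for reference; the paper's precise setting may differ), for a secret $S$ with prior $p$ and side-channel observation $O$:

```latex
% Standard Shannon- and min-entropy leakage (stated for orientation only).
\[
  H(S) = -\sum_{s} p(s)\,\log_2 p(s),
  \qquad
  H_\infty(S) = -\log_2 \max_{s} p(s),
\]
\[
  \mathcal{L}_{\mathrm{Shannon}} = H(S) - H(S \mid O),
  \qquad
  \mathcal{L}_{\min} = H_\infty(S) - H_\infty(S \mid O).
\]
```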
10:00 | Secure compilation of side-channel countermeasures: the case of cryptographic “constant-time” SPEAKER: Vincent Laporte ABSTRACT. Software-based countermeasures provide effective mitigation against side-channel attacks, often with minimal efficiency and deployment overheads. Their effectiveness is often amenable to rigorous analysis: specifically, several popular countermeasures can be formalized as information flow policies, and correct implementation of the countermeasures can be verified with state-of-the-art analysis and verification techniques. However, in absence of further justification, the guarantees only hold for the language (source, target, or intermediate representation) on which the analysis is performed. We consider the problem of preserving side-channel countermeasures by compilation for cryptographic “constant-time”, a popular countermeasure against cache-based timing attacks. We present a general method, based on the notion of 2-simulation, for proving that a compilation pass preserves the constant-time countermeasure. Using the Coq proof assistant, we verify the correctness of our method and of several representative instantiations. |
10:00 | Term-Graph Anti-Unification SPEAKER: Temur Kutsia ABSTRACT. We study anti-unification for possibly cyclic, unranked term-graphs and develop an algorithm which computes a minimal complete set of least general generalizations for them. For bisimilar graphs the algorithm computes the join in the lattice generated by a functional bisimulation. In addition, we consider the case where the graph edges are not ordered (modeled by commutativity). These results generalize anti-unification for ranked and unranked terms to the corresponding term-graphs, and also solve anti-unification problems for rational terms and dags. Our results open a way to widen anti-unification based code clone detection techniques from a tree representation to a graph representation of the code. |
11:00 | A little more conversation, a little less action, a lot more satisfaction: Global states in ProVerif. SPEAKER: Vincent Cheval ABSTRACT. ProVerif is a popular tool for the fully automatic analysis of security protocols, offering very good support to detect flaws or prove security. One exception is the case of protocols with global states such as counters, tables, or more generally, memory cells. ProVerif fails to analyse such protocols, due to its internal abstraction. Our key idea is to devise a generic transformation of the security properties queried to ProVerif. We prove the soundness of our transformation and implement it in a front-end, GBVerif. Our experiments show that our front-end (combined with ProVerif) outperforms the few existing tools, both in terms of efficiency and protocol coverage. We successfully apply our tool to a dozen protocols from the literature, yielding the first fully automatic proofs of a security API and of a payment protocol from the literature. |
11:30 | Automated Unbounded Verification of Stateful Cryptographic Protocols with Exclusive OR SPEAKER: Jannik Dreier ABSTRACT. Exclusive-or (XOR) operations are common in cryptographic protocols, in particular in RFID protocols and electronic payment protocols. Although there are numerous applications, due to the inherent complexity of faithful models of XOR, there is only limited tool support for the verification of cryptographic protocols using XOR. The Tamarin prover is a state-of-the-art verification tool for cryptographic protocols in the symbolic model. In this paper, we improve the underlying theory and the tool to deal with an equational theory modeling XOR operations. The XOR theory can be freely combined with all equational theories previously supported, including user-defined equational theories. This makes Tamarin the first tool to support simultaneously this large set of equational theories, protocols with global mutable state, an unbounded number of sessions, and complex security properties including observational equivalence. We demonstrate the effectiveness of our approach by analyzing several protocols that rely on XOR, in particular multiple RFID-protocols, where we can identify attacks as well as provide proofs. |
12:00 | SPEAKER: Andreas Viktor Hess ABSTRACT. There are several typing results that, for certain classes of protocols, show it is without loss of attacks to restrict the intruder to sending only well-typed messages. So far, all these typing results hold only for relatively simple protocols that do not keep a state beyond single sessions, excluding stateful protocols that, e.g., maintain long-term databases. Recently, several verification tools for stateful protocols have been proposed, e.g., Set-pi, AIF-omega, and SAPIC/Tamarin, but for none of these a typing result has been established. The main contribution of this paper is a typing result, for a large class of stateful protocols, based on a symbolic protocol model. We illustrate how to connect several formalisms for stateful protocols to this symbolic model. Finally, we discuss how the conditions of our typing result apply to existing protocols, or can be achieved by minor modifications. |
11:00 | Fixed-Point Constraints for Nominal Equational Unification SPEAKER: Daniele Nantes-Sobrinho ABSTRACT. We propose a new axiomatisation of the alpha-equivalence relation for nominal terms, based on a primitive notion of fixed-point constraint. We show that the standard freshness relation between atoms and terms can be derived from the more primitive notion of permutation fixed-point, and use this result to prove the correctness of the new alpha-equivalence axiomatisation. This gives rise to a new notion of nominal unification, where solutions for unification problems are pairs of a fixed-point context and a substitution. This notion of nominal unifier behaves better than the standard notion based on freshness contexts: nominal unification remains finitary in the presence of equational theories such as commutativity, whereas it becomes infinitary when unifiers are expressed using freshness contexts. |
11:30 | Higher-Order Equational Pattern Anti-Unification SPEAKER: David Cerna ABSTRACT. We consider anti-unification for simply typed lambda terms in associative, commutative, and associative-commutative theories and develop a sound and complete algorithm which takes two lambda terms and computes their generalizations in the form of higher-order patterns. The problem is finitary: the minimal complete set of generalizations contains finitely many elements. We define the notion of optimal solution and investigate special fragments of the problem for which the optimal solution can be computed in linear or polynomial time. |
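For readers new to anti-unification, the following Python sketch computes Plotkin-style least general generalizations of plain first-order terms; it is only the classical baseline, without the typed lambda terms, A/C/AC theories, or higher-order patterns handled in the paper.

```python
# First-order anti-unification (Plotkin's lgg): terms are nested tuples
# ("f", t1, ..., tn) for applications and plain strings for constants.
# A classical baseline only, not the paper's higher-order/equational algorithm.
def lgg(s, t, store=None):
    """Least general generalization of two first-order terms."""
    if store is None:
        store = {}                     # disagreement pair -> generalization variable
    if s == t:
        return s
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        # Same head symbol and arity: generalize argument-wise.
        return (s[0],) + tuple(lgg(a, b, store) for a, b in zip(s[1:], t[1:]))
    # Disagreement: reuse the same fresh variable for the same pair of subterms.
    if (s, t) not in store:
        store[(s, t)] = f"X{len(store)}"
    return store[(s, t)]

# lgg of f(a, g(a)) and f(b, g(b)) is f(X0, g(X0)).
print(lgg(("f", "a", ("g", "a")), ("f", "b", ("g", "b"))))
```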
12:00 | SPEAKER: Manfred Schmidt-Schauss ABSTRACT. Automated deduction in higher-order program calculi, where properties of transformation rules are demanded, or confluence or other equational properties are requested, can often be done by syntactically computing overlaps (critical pairs) of reduction rules and transformation rules. Since higher-order calculi have alpha-equivalence as fundamental equivalence, the reasoning procedure must deal with it. We define ASD1-unification problems, which are higher-order equational unification problems employing variables for atoms, expressions and contexts, with additional distinct-variable constraints, and which have to be solved w.r.t. alpha-equivalence. Our proposal is to extend nominal unification to solve these unification problems. We succeeded in constructing the nominal unification algorithm NomUnifyASC. We show that NomUnifyASC is sound and complete for this problem class, and outputs a set of unifiers with constraints in nondeterministic polynomial time if the final constraints are satisfiable. We also show that solvability of the output constraints can be decided in NEXPTIME, and, for a fixed number of context-variables, in NP. For terms without context-variables and atom-variables, NomUnifyASC runs in polynomial time, is unitary, and extends the classical problem by permitting distinct-variable constraints. |
11:00 | SPEAKER: Simon Wimmer ABSTRACT. We present a lightweight framework in Isabelle/HOL for the automatic verified (functional or imperative) memoization of recursive functions. Our tool constructs a memoized version of the recursive function and proves a correspondence theorem between the two functions. A number of simple techniques allow us to achieve bottom-up computation and space-efficient memoization. The framework’s utility is demonstrated on a number of classic dynamic programming problems. |
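The transformation being verified can be pictured in ordinary Python (an unverified analogue only; the Isabelle/HOL tool constructs the memoized function and proves the correspondence rather than testing it):

```python
# Unverified Python analogue of what the framework automates in Isabelle/HOL:
# a recursive specification, a memoized version, a bottom-up version, and the
# correspondence between them (here merely tested, not proved).
from functools import lru_cache

def fib_spec(n):                       # naive recursive specification
    return n if n < 2 else fib_spec(n - 1) + fib_spec(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):                       # top-down memoization
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_bottom_up(n):                  # bottom-up, space-efficient variant
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert all(fib_spec(n) == fib_memo(n) == fib_bottom_up(n) for n in range(20))
```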
11:30 | ABSTRACT. Code generated from a verified formalisation typically runs faster when it uses machine words instead of a syntactic representation of integers. This paper presents a library for Isabelle/HOL that links the existing formalisation of words to the machine words that the four target languages of Isabelle/HOL’s code generator provide. Our design ensures that (i) Isabelle/HOL machine words can be mapped soundly and efficiently to all target languages despite the differences in the APIs; (ii) they can be used uniformly with the three evaluation engines in Isabelle/HOL, namely code generation, normalisation by evaluation, and term rewriting; and (iii) they blend in with the existing formalisations of machine words. Several large-scale formalisation projects use our library to speed up their generated code. To validate the unverified link between machine words in the logic and those in the target languages, we extended Isabelle/HOL with a general-purpose testing facility that compiles test cases expressed within Isabelle/HOL to the four target languages and runs them with the most common implementations of each language. When we applied this to our library of machine words, we discovered miscomputations in the 64-bit word library of one of the target-language implementations. |
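The semantic gap the library bridges, between the logic's unbounded integers and the target languages' fixed-width words, can be illustrated with a toy Python model of 64-bit unsigned arithmetic (an illustration only, unrelated to the library's actual code):

```python
# Toy model of 64-bit unsigned machine-word arithmetic via masking, contrasted
# with unbounded integers; it only illustrates the gap a sound mapping between
# logical words and target-language machine words has to respect.
WIDTH = 64
MASK = (1 << WIDTH) - 1

def word_add(a, b):
    return (a + b) & MASK              # wraps on overflow

def word_mul(a, b):
    return (a * b) & MASK

big = MASK                             # 2^64 - 1, the largest 64-bit word
print(big + 1)                         # unbounded: 18446744073709551616
print(word_add(big, 1))                # machine word: wraps to 0
print(word_mul(1 << 32, 1 << 32))      # 2^64 truncated to 0
```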
12:00 | ABSTRACT. We present a short implementation of the well-known Tortoise and Hare cycle-finding algorithm attributed to Robert W. Floyd in the constructive setting of axiom-free Coq. This algorithm is interesting from a constructive perspective because it is both very simple and potentially non-terminating (depending on the input). To overcome potential non-termination, we encode the given termination argument (there exists a cycle) into a bar inductive predicate that we use as a termination certificate for the Coq implementation. From this development, we extract the standard OCaml implementation of this algorithm. We generalize the method to Floyd's full algorithm, which computes the entry point and the period of the cycle of the iterated sequence, when they exist. We also consider the case of the more efficient Brent's algorithm for computing the period only. Again, the extracted code corresponds to the standard OCaml implementations of these algorithms. |
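For reference, an ordinary (unverified) Python rendering of Floyd's full algorithm, returning the entry point and period of the cycle of the iterated sequence under the same precondition that a cycle exists, looks like this:

```python
# Unverified Python rendering of Floyd's full tortoise-and-hare algorithm:
# given f and x0 whose iterated sequence x0, f(x0), f(f(x0)), ... is eventually
# cyclic, return (mu, lam) = (index of the cycle entry point, cycle period).
def floyd(f, x0):
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:            # phase 1: find a meeting point in the cycle
        tortoise, hare = f(tortoise), f(f(hare))
    mu, tortoise = 0, x0               # phase 2: locate the cycle entry point
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    lam, hare = 1, f(tortoise)         # phase 3: measure the period
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return mu, lam

# Any function on a finite set is eventually cyclic from any start point.
print(floyd(lambda x: (x * x + 1) % 255, 3))
```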
11:00 | Effective use of SMT solvers for Program Equivalence Checking through Invariant Sketching and Query Decomposition SPEAKER: Shubhani Gupta ABSTRACT. Program equivalence checking is a fundamental problem in computer science with applications to translation validation and automatic synthesis of compiler optimizations. Modern equivalence checkers employ SMT solvers to discharge proof obligations generated by their equivalence checking algorithm. Equivalence checkers also involve algorithms to infer invariants that relate the intermediate states of the two programs being compared for equivalence. We present a new algorithm, called {\em invariant-sketching} that allows the inference of the required invariants through the generation of counter-examples using SMT solvers. We also present an algorithm, called {\em query-decomposition} that allows effective use of SMT solvers for application to equivalence checking. Both invariant-sketching and query-decomposition help us prove equivalence across program transformations that could not be handled by previous equivalence checking algorithms. |
11:30 | SPEAKER: Ahmed Irfan ABSTRACT. Incremental linearization is a conceptually simple, yet effective, technique that we have recently proposed for solving SMT problems over nonlinear real arithmetic constraints. In this paper, we show how the same approach can be applied successfully also to the harder case of nonlinear integer arithmetic problems. We describe in detail our implementation of the basic ideas inside the MathSAT SMT solver, and evaluate its effectiveness with an extensive experimental analysis over all nonlinear integer benchmarks in SMT-LIB. Our results show that MathSAT is very competitive with (and often outperforms) state-of-the-art SMT solvers based on alternative techniques. |
11:00 | Building Better Bit-Blasting for Floating-Point Problems ABSTRACT. An effective approach to handling the theory of floating-point is to reduce it to the theory of bit-vectors. Implementing the required encodings is complex and error-prone, and requires a deep understanding of floating-point hardware. This paper presents SymFPU, a library of encodings that can be included in solvers. It also includes a verification argument for its correctness, and experimental results showing that its use in CVC4 outperforms all previous tools. As well as significantly improved performance and correctness, it is hoped this will give a simple route to adding support for the theory of floating-point. |
11:30 | SPEAKER: Peter Backeman ABSTRACT. Reasoning about complex SMT theories, for instance bit-vectors, floating-point arithmetic, or strings, is still quite challenging. Approximations offer a means of mapping a complex theory into a simpler one, and attempting to reconstruct models or proofs in the original theory afterwards. UppSAT is an approximating abstract SMT solver, based on the systematic approximation refinement framework. The framework can be instantiated using an approximation and a back-end SMT solver. Implemented in Scala, UppSAT is designed with easy and flexible specification of approximations in mind. We discuss the structure of approximations in UppSAT and the components needed for their specification. Because approximation components can be defined relatively independently, they can be flexibly combined to obtain many different flavours of approximation. In this extended abstract we discuss what kinds of approximations can be expressed in UppSAT, along with design choices that enable the modular mix-and-match specification of approximations. Finally, we also outline ideas for several new approximations and strategies which we are currently working on (with first results expected to be available at the workshop). |
12:00 | SPEAKER: Albin Coquereau ABSTRACT. Alt-Ergo is an SMT solver jointly developed by Université Paris-Sud and the OCamlPro company. The first version was released in 2006. Since then, its architecture has been continuously adapted for proving formulas generated by software development frameworks. As type systems with polymorphism arise naturally in such platforms, the design of Alt-Ergo has been guided (and constrained) by a native -- and non SMT-LIB compliant -- input language for a polymorphic first-order logic. In this paper, we present the latest version of Alt-Ergo, its architecture and main features. The main recent work is support for a conservative polymorphic extension of the SMT-LIB 2 standard. We measure Alt-Ergo's performance with this new frontend on a set of benchmarks coming from the deductive program verification systems Frama-C, SPARK 2014, Why3 and Atelier-B, as well as from the SMT-LIB benchmarks library. |
12:00 | On computability and tractability for infinite sets SPEAKER: Mikołaj Bojańczyk ABSTRACT. We propose a definition for computable functions on hereditarily definable sets. Such sets are possibly infinite data structures that can be defined using a fixed underlying logical structure, such as (N,=). We show that, under suitable assumptions on the underlying structure, a programming language called definable while programs captures exactly the computable functions. Next, we introduce a complexity class called fixed-dimension polynomial time, which intuitively speaking describes polynomial computation on hereditarily definable sets. We show that this complexity class contains all functions computed by definable while programs with suitably defined resource bounds. Proving the converse inclusion would prove that Choiceless Polynomial Time with Counting captures order-invariant polynomial time on finite graphs. |
12:20 | Definable Ellipsoid Method, Sums-of-Squares Proofs, and the Isomorphism Problem SPEAKER: Joanna Ochremiak ABSTRACT. The ellipsoid method is an algorithm that solves the (weak) feasibility and linear optimization problems for convex sets by making oracle calls to their (weak) separation problem. We observe that the previously known method to show that this reduction can be done in fixed-point logic with counting (FPC) for linear and semidefinite programs applies to any family of explicitly bounded convex sets. We use this observation to show that the exact feasibility problem for semidefinite programs is expressible in the infinitary version of FPC. As a corollary we get that, for the isomorphism problem, the Lasserre/Sums-of-Squares semidefinite programming hierarchy of relaxations collapses to the Sherali-Adams linear programming hierarchy, up to a small loss in the degree. |
12:00 | SPEAKER: Sean Weaver ABSTRACT. Set membership filters are used as a primary test for whether large sets contain a given element. The most common such filter is the Bloom filter. Most pertinent to this article is the recently introduced Satisfiability (SAT) filter. This article proposes the XOR-Satisfiability (XORSAT) filter, a variant of the SAT filter based on random k-XORSAT. Experimental results show that this new filter can be more than 99% efficient (i.e., achieves the information-theoretic limit) while also having a query speed comparable to the standard Bloom filter, making it practical for use with very large data sets. |
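For context, the classical baseline mentioned in the abstract is the Bloom filter; a minimal Python sketch is shown below (illustrative only, not the SAT or XORSAT construction from the paper).

```python
# Minimal Bloom filter, the classical baseline the SAT/XORSAT filters are
# compared against: k hash positions are set on insertion and checked on query;
# queries may return false positives but never false negatives.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1 << 16, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8 + 1)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add("alice")
print("alice" in bf, "bob" in bf)      # True, and almost certainly False
```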
AGENDA of FSCD18 General Meeting
- Welcome by Steering Committee Chair, Luke Ong
- Report of FSCD18 PC Chair, Helene Kirchner
- Report of FSCD18 Conference Chair, Paula Severi
- Progress Report of FSCD19 (PC Chair: Herman Geuvers; Conference Chair: Jakob Rehof)
- Election of two Steering Committee members
- Proposal to host FSCD20 (colocating with IJCAR2020) in Paris: Stefano Guerrini and Giulio Manzonetto
- AOB
- Handover to new SC Chair (2018-2021), Delia Kesner.
14:00 | SPEAKER: Benjamin Sherman ABSTRACT. Though many safety-critical software systems use floating point to represent real-world input and output, programmers usually have idealized versions in mind that compute with real numbers. Significant deviations from the ideal can cause errors and jeopardize safety. Some programming systems implement exact real arithmetic, which resolves this matter but complicates others, such as decision making. In these systems, it is impossible to compute (total and deterministic) discrete decisions based on connected spaces such as R. We present programming-language semantics based on constructive topology with variants allowing nondeterminism and/or partiality. Either nondeterminism or partiality suffices to allow computable decision making on connected spaces such as R. We then introduce pattern matching on spaces, a language construct for creating programs on spaces, generalizing pattern matching in functional programming, where patterns need not represent decidable predicates and also may overlap or be inexhaustive, giving rise to nondeterminism or partiality, respectively. Nondeterminism and/or partiality also yield formal logics for constructing approximate decision procedures. We implemented these constructs in the Marshall language for exact real arithmetic. |
14:20 | A Theory of Register Monitors SPEAKER: Thomas Ferrère ABSTRACT. The task of a monitor is to watch, at run-time, the execution of a reactive system, and signal the occurrence of an error in the observed sequence of events. While finite-state monitors have been studied extensively, in practice, monitoring software also makes use of unbounded memory. We define a model of automata equipped with integer-valued registers which can execute only a bounded number of instructions between consecutive events, and thus can form the theoretical basis for the study of infinite-state monitors. We classify these {\em register monitors} according to the number $k$ of available registers, and the type of register instructions. In stark contrast to the theory of computability for register machines, we prove that for every $k>1$, monitors with $k+1$ counters (with instruction set $\langle +1,= \rangle$) are strictly more expressive than monitors with $k$ counters: there is a safety $\omega$-language $L_k$ such that the finite prefixes not in $L_k$ can be recognized on-line with $k+1$ but not with $k$ counters. We also show that adder monitors (with instruction set $\langle 1,+,= \rangle$) are strictly more expressive than counter monitors, but are complete for monitoring all computable safety $\omega$-languages for $k=6$. {\em Real-time monitors} are further required to signal the occurrence of an error as soon as it occurs. The expressiveness hierarchy for counter monitors carries over to real-time monitors. We then show that 2 adders cannot simulate 3 counters in real-time. Finally, we show that real-time adder monitors with inequalities are at least as expressive as the model of real-time Turing machines studied in the 1960s. |
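To give a concrete (if toy) picture of on-line monitoring with registers, the Python sketch below watches an event stream with two integer counters and signals a safety violation as soon as it occurs; it is only meant to convey the flavour of register monitors, not the restricted instruction sets or the constructions studied in the paper.

```python
# Toy two-counter monitor for the safety property "at every point, the number
# of 'release' events never exceeds the number of 'acquire' events". Signals an
# error as soon as the property is violated; a flavour of register monitoring
# only, not the paper's precise machine model.
def monitor(events):
    acquires = releases = 0
    for i, ev in enumerate(events):
        if ev == "acquire":
            acquires += 1
        elif ev == "release":
            releases += 1
        if releases > acquires:        # violation signalled on-line
            return f"error at position {i}"
    return "ok"

print(monitor(["acquire", "release", "release", "acquire"]))   # error at position 2
print(monitor(["acquire", "acquire", "release", "release"]))   # ok
```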
14:40 | SPEAKER: Dominik Velan ABSTRACT. Vector Addition Systems with States (VASS) provide a well-known and fundamental model for the analysis of concurrent processes, parametrized systems, and are also used as abstract models of programs in resource bound analysis. In this paper we study the problem of obtaining asymptotic bounds on the termination time of a given VASS. In particular, we focus on the practically important case of obtaining polynomial bounds on termination time. Our main contributions are as follows: First, we present a polynomial-time algorithm for deciding whether a given VASS has a linear asymptotic complexity. We also show that if a complexity of a VASS is not linear, it is at least quadratic. Second, we classify VASS according to quantitative properties of their cycles. We show that certain singularities in these properties are the key reason for non-polynomial asymptotic complexity of VASS. In absence of singularities, we show that the asymptotic complexity is always polynomial and of the form $\Theta(n^k)$, for some integer $k\leq d$. We present a polynomial-time algorithm computing the optimal $k$. For general VASS, the same algorithm, which is based on a complete technique for the construction of ranking functions in VASS, produces a valid lower bound, i.e. a $k$ such that the termination complexity is $\Omega(n^k)$. Our results are based on new insights into the geometry of VASS dynamics, which hold the potential for further applicability to VASS analysis. |
15:00 | SPEAKER: Brandon Bohrer ABSTRACT. Information-flow security is important to the safety and privacy of cyber-physical systems (CPSs) across many domains: information leakage can both violate user privacy and provide information that supports further attacks. CPSs face the challenge that information can flow both in discrete cyber channels and in continuous real-valued physical channels ranging from time to physical flow of resources. We call these hybrid information flows and introduce dHL, the first logic for verifying these flows in hybrid-dynamical models of CPSs. We achieve verification of hybrid information flows by extending differential dynamic logic (dL) for hybrid-dynamical systems with hybrid-logical features for explicit representation and relation of program states. By verifying hybrid information flows, we ensure security even under a strong attacker model wherein an attacker can observe time and physical values continuously. We present a Hilbert-style proof calculus for dHL, prove it sound, and compare the expressive power of dHL with dL. We demonstrate dHL's abilities by developing a hybrid system model of the smart electrical grid FREEDM. We verify that the naive model has a previously-unknown information flow vulnerability and verify that a revised model resolves the vulnerability. To the best of our knowledge, this is both the first information flow proof for hybrid information flows and the first for a hybrid-dynamical model. We discuss applications of hybrid information flow to a range of critical systems. |
15:20 | SPEAKER: André Platzer ABSTRACT. We prove the completeness of an axiomatization for differential equation invariants. First, we show that the differential equation axioms in differential dynamic logic are complete for all algebraic invariants. Our proof exploits differential ghosts, which introduce additional variables that can be chosen to evolve freely along new differential equations. Cleverly chosen differential ghosts are the proof-theoretical counterpart of dark matter. They create new hypothetical state, whose relationship to the original state variables satisfies invariants that did not exist before. The reflection of these new invariants in the original system enables its analysis. We then show that extending the axiomatization with existence and uniqueness axioms makes it complete for all local progress properties, and further extension with a real induction axiom makes it complete for all real arithmetic invariants. This yields a parsimonious axiomatization, which serves as the logical foundation for reasoning about invariants of differential equations. Moreover, our approach is purely axiomatic, and so the axiomatization is suitable for sound implementation in foundational theorem provers. |
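A standard small example from the dL literature (not necessarily one used in this paper) shows how a differential ghost creates the "new hypothetical state" mentioned above. To prove that $x > 0$ is invariant along $x' = -x$, introduce a ghost variable $y$ with $y' = y/2$; the product $x\,y^2$ is then constant:

```latex
% Differential ghost example: x > 0 is invariant along x' = -x.
% With the ghost dynamics y' = y/2, the quantity x*y^2 is an equational
% invariant of the augmented system:
\[
  \frac{d}{dt}\bigl(x\,y^2\bigr)
  \;=\; x'\,y^2 + 2\,x\,y\,y'
  \;=\; -x\,y^2 + x\,y^2
  \;=\; 0 .
\]
% Since \exists y\,(x y^2 = 1) is equivalent to x > 0 over the reals, the
% ghost can be existentially eliminated, recovering the original invariant.
```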
14:00 | Probabilistic stable functions on discrete cones are power series. ABSTRACT. We study the category Cstabm of measurable cones and measurable stable functions—a denotational model of a higher-order language with continuous probabilities and full recursion. We look at Cstabm as a model for discrete probabilities, by showing the existence of a full, faithful and cartesian closed functor which embeds probabilistic coherence spaces—a fully abstract denotational model of a higher-order language with full recursion and discrete probabilities—into Cstabm. The proof is based on a generalization of Bernstein's theorem in real analysis, allowing stable functions between some cones to be seen as generalized power series. |
14:20 | Allegories: decidability and graph homomorphisms SPEAKER: Valeria Vignudelli ABSTRACT. Allegories were introduced by Freyd and Scedrov; they form a fragment of Tarski’s calculus of relations. We show that their equational theory is decidable by characterising it in terms of a specific class of graph homomorphisms. We actually do so for an extension of allegories which we prove to be conservative, namely allegories with top. This generalisation makes it possible to exploit the correspondence between terms and K4-free graphs, for which isomorphism was known to be finitely axiomatisable. |
14:40 | Syntax and Semantics for Operations with Scopes SPEAKER: Maciej Piróg ABSTRACT. Motivated by the problem of separating syntax from semantics in programming with algebraic effects and handlers, we propose a categorical model of abstract syntax with so-called scoped operations. As a building block of a term, a scoped operation is not merely a node in a tree, as it can also encompass a whole part of the term (a scope). Some examples from the area of programming are given by the operation "catch" for handling exceptions, in which the part in the scope is the code that may raise an exception, or the operation "once", which selects a single solution from a nondeterministic computation. A distinctive feature of such operations is their behaviour under program composition, that is, syntactic substitution. Our model is based on what Ghani et al. call the monad of explicit substitutions, defined using the initial-algebra semantics in the category of endofunctors. We also introduce a new kind of multi-sorted algebras, called scoped algebras, which serve as interpretations of syntax with scopes. In generality, scoped algebras are given in the style of the presheaf formalisation of syntax with binders of Fiore et al. As the main technical result, we show that our monad indeed arises from free objects in the category of scoped algebras. Importantly, we show that our results are immediately applicable. In particular, we show a Haskell implementation together with some practical, real-life examples. |
15:00 | SPEAKER: Fabio Zanasi ABSTRACT. Symmetric monoidal categories have become ubiquitous as a formal environment for the analysis of compound systems in a compositional, resource-sensitive manner using the graphical syntax of string diagrams. Recently, reasoning with string diagrams has been implemented concretely via double-pushout (DPO) hypergraph rewriting. The hypergraph representation has the twin advantages of being convenient for mechanisation and of completely absorbing the structural laws of symmetric monoidal categories, leaving just the domain-specific equations explicit in the rewriting system. In many applications across different disciplines (linguistics, concurrency, quantum computation, control theory, ...) the structural component appears to be richer than just the symmetric monoidal structure, as it includes one or more Frobenius algebras. In this work we develop a DPO rewriting formalism which is able to absorb multiple Frobenius structures, thus sensibly simplifying diagrammatic reasoning in the aforementioned applications. As a proof of concept, we use our formalism to describe an algorithm which computes the reduced form of a diagram of the theory of interacting bialgebras using a simple rewrite strategy. |
15:20 | Ribbon tensorial logic ABSTRACT. We introduce a topologically-aware version of tensorial logic, called ribbon tensorial logic. To every proof of the logic, we associate a ribbon tangle which tracks the flow of tensorial negations inside the proof. The translation is functorial: it is performed by exhibiting a correspondence between the notion of dialogue category in proof theory and the notion of ribbon category in knot theory. Our main result is that the translation is also faithful: two proofs are equal modulo the equational theory of tensorial logic if and only if the associated ribbon tangles are equal up to topological deformation. This ``proof-as-tangle'' theorem may be understood at the same time as a coherence theorem for ribbon dialogue categories, and as a mathematical foundation for topological game semantics. |
14:00 | SPEAKER: Oleg Zaikin ABSTRACT. We present ALIAS, a modular tool aimed at finding backdoors for hard SAT instances. Here, by a backdoor for a specific SAT solver and SAT formula, we mean a set of the formula's variables such that instantiating them in all possible ways yields a family of subformulas whose total solving time is smaller than that of the original formula. For a particular backdoor, the tool uses the Monte-Carlo algorithm to estimate the runtime of a solver when partitioning an original problem via said backdoor. Thus, the problem of finding a backdoor is viewed as a black-box optimization problem. The tool's modular structure allows state-of-the-art SAT solvers and black-box optimization heuristics to be employed. In practice, for a number of hard SAT instances, the tool made it possible to solve them much faster than with state-of-the-art multithreaded SAT solvers. |
14:30 | SPEAKER: Joao Marques-Silva ABSTRACT. Boolean satisfiability (SAT) solvers are at the core of efficient approaches for solving a vast multitude of practical problems. Moreover, albeit targeting an NP-complete problem, SAT solvers are increasingly used for tackling problems beyond NP. Despite the success of SAT in practice, modeling with SAT and more importantly implementing SAT-based problem solving solutions is often a difficult and error-prone task. This paper proposes the PySAT toolkit, which enables fast Python-based prototyping using SAT oracles and SAT-related technology. PySAT provides a simple API for working with a few state-of-the-art SAT oracles and also integrates a number of cardinality constraint encodings, all aiming at simplifying the prototyping process. Experimental results presented in the paper show that PySAT-based implementations can be as efficient as those written in a low-level language. |
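As a flavour of the intended prototyping style, a minimal PySAT session might look as follows; this is a small sketch based on PySAT's documented solver interface, and available solver names and options depend on the installed version.

```python
# Minimal PySAT prototyping sketch (based on PySAT's documented solver API;
# solver names and options depend on the installed version). Clauses are lists
# of non-zero integers in the usual DIMACS convention.
from pysat.solvers import Glucose3

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
with Glucose3(bootstrap_with=[[1, 2], [-1, 3], [-2, -3]]) as solver:
    if solver.solve():
        print("satisfiable:", solver.get_model())
    # Incremental use: solve again under an assumption without re-adding clauses.
    print("with x3 assumed false:", solver.solve(assumptions=[-3]))
```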
15:00 | SPEAKER: Luca Pulina ABSTRACT. We consider the problem of binary image generation with given properties. This problem arises in a number of practical applications, including generation of an artificial porous medium for an electrode of lithium-ion batteries, for composite materials, etc. A generated image represents a porous medium. As such, it is subject to two sets of constraints: topological constraints on the structure and process constraints on the physical process over this structure. To perform image generation we need to define a mapping from a porous medium to its physical process parameters. For a given geometry of a porous medium, this mapping can be done by solving a partial differential equation (PDE). However, embedding a PDE solver into the search procedure is computationally expensive. We use a binarized neural network to approximate a PDE solver. This allows us to encode the entire problem as a logical formula. Our main contribution is that, for the first time, we show that this problem can be tackled using decision procedures. Our experiments show that our model is able to produce random constrained images that satisfy both topological and process constraints. |
14:00 | Invited talk: Verifying Learners and Learning Verifiers |
15:00 | SPEAKER: Gergely Kovásznai ABSTRACT. In our previous papers, we investigated several aspects of applying Optimization Modulo Theories (OMT) solvers to Wireless Sensor Networks (WSNs). None of the solvers we used in our experiments scaled enough for WSNs of common size in practice. This is particularly true when investigating additional dependability and security constraints on WSNs of high density. In this paper, we propose an idea for speeding up the OMT solving process by taking into consideration some resources in the systems and by applying regression analysis on those resource values. For instance, in WSNs, the electrical charge in the batteries of sensor nodes can be considered a resource that is consumed as the network approaches its maximal lifetime. Another example is the knapsack problem, where the remaining capacity of the knapsack can be used as such a resource. We show how to integrate this idea into search algorithms in the OMT framework and introduce a new OMT solver called Puli. We present experiments with Puli on WSN and knapsack benchmarks, which show remarkable improvements in the number of solved instances as well as in computation time compared to existing solvers. Furthermore, we show that further significant improvement can be realized on so-called monotonous problems, such as WSN optimization, for which Puli can generate more precise assertions. Puli is a work-in-progress prototype that we plan to upgrade soon to an official, publicly available release. |
15:00 | SPEAKER: Christina Kohl ABSTRACT. Proof terms are a useful concept for reasoning about computations in term rewriting. Human calculation with proof terms is tedious and error-prone. We present ProTeM, a new tool that offers support for manipulating proof terms that represent multisteps in left-linear rewrite systems. |
15:00 | SPEAKER: Zarathustra Goertzel ABSTRACT. Watchlist (also hint list) is a mechanism that allows related proofs to guide a proof search for a new conjecture. This mechanism has been used with the Otter and Prover9 theorem provers, both for interactive formalizations and for human-assisted proving of open conjectures in small theories. In this work we explore the use of watchlists in large theories coming from first-order translations of large ITP libraries, aiming at improving hammer-style automation by smarter internal guidance of the ATP systems. In particular, we (i) design watchlist-based clause evaluation heuristics inside the E ATP system, and (ii) develop new proof guiding algorithms that load many previous proofs inside the ATP and focus the proof search using a dynamically updated notion of proof matching. The methods are evaluated on a large set of problems coming from the Mizar library, showing significant improvement of E’s standard portfolio of strategies, and also of the previous best set of strategies invented for Mizar by evolutionary methods. |
16:00 | SPEAKER: Julian Nagele |
16:00 | Verified Tail Bounds for Randomized Programs SPEAKER: Joseph Tassarotti ABSTRACT. We mechanize in Coq a theorem by Karp, along with several extensions, that provide an easy to use "cookbook" method for verifying tail bounds of randomized algorithms, much like the traditional "Master Theorem" gives bounds for deterministic algorithms. We apply these results to several examples: the number of comparisons performed by QuickSort, the span of parallel QuickSort, the height of randomly generated binary search trees, and the number of rounds needed for a distributed leader election protocol. Because the constants involved in our symbolic bounds are concrete, we are able to use them to derive numerical probability bounds for various input sizes for these examples. |
16:30 | SPEAKER: Manuel Eberl ABSTRACT. This work is a case study of the formal verification and complexity analysis of some famous probabilistic data structures and algorithms in the proof assistant Isabelle/HOL: – the expected number of comparisons in randomised Quicksort – the average-case analysis of deterministic Quicksort – the expected shape of an unbalanced random Binary Search Tree – the expected shape of a Treap The last two have, to our knowledge, never been analysed in a theorem prover before and the last one is particularly interesting because the analysis involves continuous distributions. The verification builds on the existing probability and measure theory in Isabelle/HOL. Algorithms are shallowly embedded and expressed in the Giry monad, which allows for a very natural and high-level presentation. |
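For orientation, the first of the listed quantities is a classical one: the expected number of comparisons $C_n$ of randomised Quicksort on $n$ distinct elements satisfies the textbook recurrence and closed form below (standard facts, stated only to indicate the kind of statement being formalised):

```latex
% Textbook recurrence and closed form for the expected number of comparisons
% of randomised Quicksort on n distinct elements.
\[
  C_0 = 0, \qquad
  C_n = (n-1) + \frac{2}{n}\sum_{k=0}^{n-1} C_k ,
\]
\[
  C_n = 2(n+1)H_n - 4n \;\sim\; 2 n \ln n ,
  \qquad H_n = \sum_{i=1}^{n} \tfrac{1}{i}.
\]
```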
16:00 | Award Ceremony SPEAKER: Christoph M. Wintersteiger ABSTRACT. Presentation of the best paper and best student paper awards. |
16:05 | SPEAKER: Ruben Martins |
16:33 | SPEAKER: Luca Pulina |
17:01 | Sparkle SAT Challenge 2018 SPEAKER: Chuan Luo |
17:29 | SAT Competition 2018 SPEAKER: Marijn Heule |
16:00 | SPEAKER: Leonardo Alt ABSTRACT. Ethereum smart contracts are programs that run inside a public distributed database called a blockchain. These smart contracts are used to handle tokens of value, can be accessed and analyzed by everyone and are immutable once deployed. Those characteristics make it imperative that smart contracts are bug-free at deployment time, hence the need to verify them formally. In this paper we describe our current efforts in building an SMT-based formal verification module within the compiler of Solidity, a popular language for writing smart contracts. The tool is seamlessly integrated into the compiler, where during compilation, the user is automatically warned of and given counterexamples for potential arithmetic overflow/underflow, unreachable code, trivial conditions, and assertion fails. We present how the component currently translates a subset of Solidity into SMT statements using different theories, and discuss future challenges such as multi-transaction and state invariants. |
16:30 | SMT Solving Modulo Tableau and Rewriting Theories SPEAKER: Guillaume Bury ABSTRACT. We propose an automated theorem prover that combines an SMT solver with tableau calculus and rewriting. Tableau inference rules are used to unfold propositional content into clauses while atomic formulas are handled using satisfiability decision procedures as in traditional SMT solvers. To deal with quantified first order formulas, we use metavariables and perform rigid unification modulo equalities and rewriting, for which we introduce an algorithm based on superposition, but where all clauses contain a single atomic formula. Rewriting is introduced along the lines of deduction modulo theory, where axioms are turned into rewrite rules over both terms and propositions. Finally, we assess our approach over a benchmark of problems in the set theory of the B method. |
17:00 | SPEAKER: Andrew Reynolds ABSTRACT. In this paper, we explore a development paradigm where rewrite rules are suggested to the SMT solver developer using syntax-guided enumeration. We capitalize on the recent advances in enumerative syntax-guided synthesis (SyGuS) techniques for efficiently enumerating terms in a grammar of interest, and novel sampling techniques for testing equivalence between terms. We present our preliminary experience with this feature in the SMT solver CVC4, showing its impact on its rewriting capabilities using several internal metrics, and its subsequent impact on solving bit-vector and string constraints in applications. |
16:30 | SPEAKER: Roy Overbeek ABSTRACT. Like termination, confluence is a central property of rewrite systems. Unlike for termination, however, there exists no known complexity hierarchy for confluence. In this paper we investigate whether the decreasing diagrams technique can be used to obtain such a hierarchy. The decreasing diagrams technique is one of the strongest and most versatile methods for proving confluence of abstract reduction systems, it is complete for countable systems, and it has many well-known confluence criteria as corollaries. So what makes decreasing diagrams so powerful? In contrast to other confluence techniques, decreasing diagrams employs a labelling of the steps $\to$ with labels from a well-founded order in order to conclude confluence of the underlying unlabelled relation. Hence it is natural to ask how the size of the label set influences the strength of the technique. In particular, what class of abstract reduction systems can be proven confluent using decreasing diagrams restricted to $1$~label, $2$ labels, $3$ labels, and so on? Surprisingly, we find that two labels suffice for proving confluence for every abstract rewrite system having the cofinality property, thus in particular for every confluent, countable system. We also show that this result stands in sharp contrast to the situation for commutation of rewrite relations, where the hierarchy does not collapse. Finally, as a background theme, we discuss the logical issue of first-order definability of the notion of confluence. |
17:00 | Confluence of Prefix-Constrained Rewrite Systems SPEAKER: Andrianarivelo Nirina ABSTRACT. Prefix-constrained rewriting is a strict extension of context-sensitive rewriting. We study the confluence of prefix-constrained rewrite systems, which are composed of rules of the form L:l -> r where L is a regular string language that defines the allowed rewritable positions. The usual notion of Knuth-Bendix's critical pair needs to be extended using regular string languages, and the convergence of all critical pairs is not enough to ensure local confluence. Thanks to an additional restriction we get local confluence, and then confluence for terminating systems, which makes the word problem decidable. Moreover we present an extended Knuth-Bendix completion procedure, to transform a non-confluent prefix-constrained rewrite system into a confluent one. |
17:30 | Coherence of Gray categories via rewriting SPEAKER: Simon Forest ABSTRACT. Over the recent years, the theory of rewriting has been extended in order to provide systematic techniques to show coherence results for strict higher categories. Here, we investigate a further generalization to low-dimensional weak categories, and consider in detail the first non-trivial case: presentations of tricategories. By a general result, those are equivalent to the stricter Gray categories, for which we introduce a notion of rewriting system, as well as associated tools: Tietze transformations, critical pairs, termination orders, etc. We show that a finite rewriting system admits a finite number of critical pairs and, as a variant of Newman's lemma in our context, that a convergent rewriting system is coherent, meaning that two parallel 3-cells are necessarily equal. This is illustrated on rewriting systems corresponding to various well-known structures in the context of tricategories (monoids, adjunctions, Frobenius monoids). Finally, we discuss generalizations in arbitrary dimension. |
The ADHS18 banquet will be at Balliol College, Oxford. Drinks reception from 7:45pm, to be seated by 8:15 (pre-booking via FLoC registration system required; guests welcome).