FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
PROGRAM FOR TUESDAY, JULY 10TH

09:00-10:30 Session 52A: Cryptographic primitives (CSF)
Location: Maths LT2
09:00
Self-Guarding Cryptographic Protocols against Algorithm Substitution Attacks

ABSTRACT. We put forward the notion of self-guarding cryptographic protocols as a countermeasure to algorithm substitution attacks. Such self-guarding protocols can prevent undesirable leakage by subverted algorithms if one has the guarantee that the system has been working properly in an initialization phase. Unlike detection-based solutions, they thus proactively thwart attacks; unlike reverse firewalls, they do not assume an online external party.

We present constructions of basic primitives for (public-key and private-key) encryption and for signatures. We also argue that the model captures attacks with malicious hardware tokens and show how to self-guard a PUF-based key exchange protocol.

09:30
Formal Security Proof of CMAC and its Variants

ABSTRACT. The CMAC standard, when initially proposed by Iwata and Kurosawa as OMAC1, was equipped with a complex game-based security proof. Following recent advances in formal verification for game-based security proofs, we formalize a proof of unforgeability for CMAC in EasyCrypt. A side effect of this proof is a set of improvements to the EasyCrypt libraries. This formal proof obtains security bounds very similar to Iwata and Kurosawa’s for CMAC, but also proves secure a number of intermediate constructions of independent interest, including ECBC, FCBC and XCBC. This work represents one more step towards obtaining a reliable set of independently verifiable evidence for the security of international cryptographic standards.

10:00
Backdoored Hash Functions: Immunizing HMAC and HKDF

ABSTRACT. Security of cryptographic schemes is traditionally measured as the inability of resource-constrained adversaries to violate a desired security goal. The security argument usually relies on a sound design of the underlying components. Arguably, one of the most devastating failures of this approach can be observed when considering adversaries such as intelligence agencies that can influence the design, implementation, and standardization of cryptographic primitives. While the most prominent example of cryptographic backdoors is NIST's Dual_EC_DRBG, believing that such attempts have ended there would be naive.

Security of many cryptographic tasks, such as digital signatures, pseudorandom generation, and password protection, crucially relies on the security of hash functions. In this work, we consider the question of how backdoors can endanger security of hash functions and, especially, if and how we can thwart such backdoors. We particularly focus on immunizing arbitrarily backdoored versions of HMAC (RFC 2104) and the hash-based key derivation function HKDF (RFC 5869), which are widely deployed in critical protocols such as TLS. We give evidence that the weak pseudorandomness property of the compression function in the hash function is in fact robust against backdooring. This positive result allows us to build a backdoor-resistant pseudorandom function, i.e., a variant of HMAC, and we show that HKDF can be immunized against backdoors at little cost. Unfortunately, we also argue that safeguarding unkeyed hash functions against backdoors is presumably hard.
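For reference, HMAC (RFC 2104), one of the two constructions being immunized, builds a MAC from an unkeyed hash function $H$: with $K'$ the key padded to the block length and $\mathit{opad}$, $\mathit{ipad}$ fixed one-block constants,

$\mathrm{HMAC}(K, m) = H\big((K' \oplus \mathit{opad}) \,\|\, H((K' \oplus \mathit{ipad}) \,\|\, m)\big).$

The positive result above exploits the fact that the inner compression function retains weak pseudorandomness even when backdoored.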

09:00-10:30 Session 52B: Types (FSCD)
09:00
Call-by-name Gradual Type Theory
SPEAKER: Max New

ABSTRACT. Gradually typed languages, including Typed Racket, Typescript, Thorn and Reticulated Python, facilitate interoperability between statically and dynamically typed code, by checking static types when available and applying dynamic type checks when not. However, almost all existing research studies gradually typed languages using operational semantics, designed in an ad hoc manner. Furthermore, in the operational setting, questions of program equivalence and other relational properties are difficult to study and for the most part ignored.

In this paper, we propose a type-theoretic and category-theoretic semantics for gradual typing, in the form of gradual type theory, a logic and type theory for (call-by-name) gradual typing. To define the central constructions of gradual typing (the dynamic type, type casts and type error) in a type-theoretic fashion, we extend the theory of types and terms to include gradual type and term precision, internalizing notions of "more dynamic" into the type theory and then using these to characterize the constructions of gradual typing uniquely. This includes a novel specification for casts in terms of type and term precision. Combined with the ordinary extensionality ($\eta$) principles that type theory provides, we show that most of the standard operational behavior of casts in a gradually typed language is in fact uniquely determined by our design constraints. This provides a semantic justification for the definitions of casts and also shows that non-standard definitions of casts must violate these principles. We explore a call-by-name type theory in this paper, because it is a simple setting with the necessary extensionality principles, leaving call-by-value to future work.

On the category-theoretic side, we show that our type theory is an internal language of a certain class of double categories called equipments (in fact, we will use only preorder categories, which are double categories where one direction is posetal), which provides the right algebraic structure with which to interpret precision and casts in gradual typing. We apply this categorical semantics to give a general construction of models of gradual typing, which provides a semantic analogue of Findler and Felleisen's definitions of contracts, and generalizes Dana Scott's domain-theoretic models of dynamic typing.

09:30
The clocks they are adjunctions: Denotational semantics for Clocked Type Theory
SPEAKER: Bassel Mannaa

ABSTRACT. Clocked Type Theory (CloTT) is a type theory for guarded recursion useful for programming with coinductive types, encoding productivity in types, and for reasoning about advanced programming language features using an abstract form of step-indexing. CloTT has previously been shown to enjoy a number of syntactic properties including strong normalisation, canonicity and decidability of type checking. In this paper we present a denotational semantics for CloTT useful, e.g., for studying future extensions of CloTT with constructions such as path types.

The main challenge for constructing this model is to model the notion of ticks used in CloTT for coinductive reasoning about coinductive types. We build on a category previously used to model guarded recursion, but in this category there is no object of ticks, so tick-assumptions in a context cannot be modelled using standard tools. Instead we show how ticks can be modelled using adjoint functors, and how to model the tick constant using a semantic substitution.

10:00
Internal Universes in Models of Homotopy Type Theory
SPEAKER: Ian Orton

ABSTRACT. We show that universes of fibrations in various models of homotopy type theory have an essentially global character: they cannot be described in the internal language of the presheaf topos from which the model is constructed. We get around this problem by extending the internal language with a modal operator for expressing properties of global elements. In this setting we show how to construct a universe that classifies the Cohen-Coquand-Huber-Mörtberg (CCHM) notion of fibration from their cubical sets model, starting from the assumption that the interval is tiny - a property that the interval in cubical sets does indeed have. This leads to a completely internal development of models of homotopy type theory within what we call crisp type theory.

09:00-10:00 Session 52C: ITP Invited Talk: Dan Grayson (ITP)
Location: Blavatnik LT1
09:00
Voevodsky's Work on Formalization of Proofs and the Foundations of Mathematics

ABSTRACT. A consistent thread running through the three decades of Voevodsky's work is
the application of the ideas of homotopy theory in new and surprising ways,
first to motives, and then to formalization of proofs and the foundations of
mathematics.  I will present the story of the latter development, focusing on
the points of interest to mathematicians.

09:00-10:00 Session 52D: LICS Invited Talk: Ursula Martin (LICS)
Location: Maths LT1
09:00
On Diversity

ABSTRACT. FLoC's policy is that "The open exchange of ideas and the freedom of thought and expression are central to the values and goals of FLoC. They require an environment that recognizes the inherent worth of every person and group. They flourish in communities that foster mutual understanding and embrace diversity." Hardly controversial: we are surely more likely to solve the world's big problems if we draw on as wide a variety of people and talents as we can, and create an environment where they can be their best selves. But that doesn't mean that change is easy. I'll draw on the history of computing in Oxford and elsewhere to look at the power of intellectual and human diversity, why embracing it is hard, and how we can make progress.

From 1st July 2018 Ursula Martin is a Professor in the School of Informatics at the University of Edinburgh, with affiliations to Oxford's Mathematical Institute and Wadham College. She is a Fellow of the Royal Academy of Engineering and of the Royal Society of Edinburgh. Her research, initially in algebra, logic and the use of computers to create mathematical proofs, now focuses on wider social and cultural approaches to understanding the success and impact of foundational research in computer science and mathematics. She is the author of a recent book on the mathematics of Ada Lovelace.

09:00-10:30 Session 52E: Model Counting (SAT)
09:00
Fast Sampling of Perfectly Uniform Satisfying Assignments

ABSTRACT. Validation of complex digital circuitry primarily relies upon constrained-random simulation. However, it is intractable to simulate all satisfying assignments (models). Rather, sufficient verification coverage is achieved by selecting test stimuli uniformly at random from the set of valid models. The current state-of-the-art witness generators use hashing to perform this sampling near-uniformly. We present a new, perfectly uniform witness generation algorithm based on the exact model counter sharpSAT and reservoir sampling. In experiments across hundreds of SAT benchmarks, our algorithm is faster than the state of the art by factors ranging from ten to over a hundred thousand.
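For readers unfamiliar with reservoir sampling, the classical building block named above, the following minimal Python sketch (Algorithm R, run over a hypothetical stream of models; the authors' tool instead couples the technique with the exact counter sharpSAT) selects k items uniformly from a stream of unknown length in a single pass:

    import random

    def reservoir_sample(models, k):
        """Return k items chosen uniformly at random from an iterable
        of unknown length, using O(k) memory and one pass."""
        reservoir = []
        for i, m in enumerate(models):
            if i < k:
                reservoir.append(m)
            else:
                # Keep the new item with probability k/(i+1), evicting a
                # uniformly chosen resident; each of the first i+1 items
                # then sits in the reservoir with equal probability.
                j = random.randint(0, i)
                if j < k:
                    reservoir[j] = m
        return reservoir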

09:30
Fast and Flexible Probabilistic Model Counting

ABSTRACT. We present a probabilistic model counter that can trade off running time with approximation accuracy. As in several previous works, the number of models of a formula is estimated by adding random parity constraints (equations). Specifically, the binary logarithm of the number of models is approximated by the number of random constraints needed to eliminate all models. One key difference with prior works is that the systems of parity equations used by our algorithm are the parity check matrices of Low Density Parity Check (LDPC) error-correcting codes. As a result, the equations tend to be much shorter, often containing fewer than 10 variables each, making the search for models far more tractable. The price paid for computational tractability is that their corresponding statistical properties are not as good as when (much longer) constraints are used. We show how one can deal with this issue and derive rigorous approximation guarantees by performing more solver invocations.
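To illustrate the parity-constraint idea in isolation, here is a brute-force Python sketch over a generic predicate (a toy under stated assumptions: the paper uses sparse LDPC parity-check matrices and a SAT solver rather than enumeration):

    import itertools, random

    def estimate_log2_models(sat, n_vars, trials=9):
        """Median, over several trials, of the number of random parity
        (XOR) constraints needed to eliminate all models of sat."""
        def ok(a, cs):
            return sat(a) and all(sum(a[v] for v in vs) % 2 == b
                                  for vs, b in cs)
        results = []
        for _ in range(trials):
            cs = []
            while any(ok(a, cs)
                      for a in itertools.product((0, 1), repeat=n_vars)):
                # Each variable joins the constraint with probability 1/2;
                # the parity bit b is chosen uniformly.
                vs = [v for v in range(n_vars) if random.random() < 0.5]
                cs.append((vs, random.randint(0, 1)))
            results.append(len(cs))
        return sorted(results)[trials // 2]

For example, estimate_log2_models(lambda a: a[0] or a[1], 2) typically returns 1 or 2, consistent with the three models of x OR y.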

10:00
Exploiting Treewidth for Projected Model Counting and its Limits
SPEAKER: Markus Hecher

ABSTRACT. In this paper, we introduce a novel algorithm to solve projected model counting (PMC). PMC asks to count solutions of a Boolean formula with respect to a given set of projected variables, where multiple solutions that are identical when restricted to the projected variables count as only one solution.

Our algorithm exploits bounded primal or incidence treewidth of the input instance. It runs in time $O(2^{2^{k+4}} n^2)$, where k is the treewidth and n is the input size of the instance. In other words, we obtain that the problem PMC is fixed-parameter tractable when parameterized by treewidth. Further, we take the exponential time hypothesis (ETH) into consideration and establish lower bounds for bounded-treewidth algorithms for PMC, which shows that the runtime bounds of our algorithm are tight.

10:00-10:30 Session 53A (ITP)
Location: Blavatnik LT1
10:00
CalcCheck: a Proof Checker for Teaching the “Logical Approach to Discrete Math”

ABSTRACT. For calculational proofs as they are propagated by Gries and Schneider's textbook classic "A Logical Approach to Discrete Math" (LADM), automated proof checking is feasible and can provide useful feedback to students acquiring and practicing basic proof skills. We report on the CalcCheck system, which implements a proof checker for a mathematical language that resembles the rigorous but informal mathematical style of LADM so closely that students very quickly recognise the system not as an obstacle, but as an aid, and realise that the problem is finding proofs.

Students interact with this through the web-application front-end CalcCheckWeb, which provides some assistance for proof entry, but intentionally no assistance for proof finding. Upon request, the system displays, side-by-side with the student input, a version of that input annotated with the results of checking each step for correctness.

CalcCheckWeb has now been used twice for teaching this course, and students have been solving exercises and submitting assignments, midterms, and final exams on the system; for examinations, there is the option to disable proof checking and leave only syntax checking enabled. CalcCheck also performed the grading, with very limited human overriding necessary.
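To give a flavour of the calculational style being checked, here is a hypothetical LADM-style fragment (our illustration, not one from the system's libraries); CalcCheck verifies that each step is justified by the cited law:

    Proving  p ∧ q  ≡  q ∧ p ∧ p:
        p ∧ q
      ≡ ⟨ Symmetry of ∧ ⟩
        q ∧ p
      ≡ ⟨ Idempotency of ∧ ⟩
        q ∧ p ∧ p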

10:00-10:40 Session 53B (LICS)
Location: Maths LT1
10:00
Polynomial Invariants for Affine Programs
SPEAKER: Amaury Pouly

ABSTRACT. We exhibit an algorithm to compute the strongest polynomial (or algebraic) invariants that hold at each program location of a given affine program (i.e., a program having only non-deterministic (as opposed to conditional) branching, and all of whose assignments are given by affine expressions). Our main tool is an algebraic result of independent interest: given a finite set of rational square matrices of the same dimension, we show how to compute the Zariski closure of the semigroup that they generate.

10:20
An answer to the Gamma question

ABSTRACT. We answer in this paper an open question (known as the "Gamma question"), related to the recent notion of coarse computability, which stems from complexity theory. The question was formulated by Andrews, Cai, Diamondstone, Jockusch and Lempp in "Asymptotic density, computable traceability and 1-randomness" (2016, Fundamenta Mathematicae). The Gamma value of an oracle set measures to what extent each set computable with the oracle is approximable in the sense of density by a computable set. The closer to 1 this value is, the closer the oracle is to being computable. The Gamma question asks whether this value can be strictly in between 0 and 1/2.
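In symbols (recalling, as we understand them, the definitions from the cited paper): writing $\underline{\rho}(S) = \liminf_{n\to\infty} |S \cap \{0,\dots,n-1\}|/n$ for lower density, one sets $\gamma(B) = \sup\{\underline{\rho}(\{n : B(n) = C(n)\}) : C \text{ computable}\}$ and $\Gamma(A) = \inf\{\gamma(B) : B \leq_T A\}$; the question asks whether $\Gamma(A)$ can lie strictly between 0 and 1/2.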

In this paper, we pursue some work initiated by Monin and Nies in "A unifying approach to the Gamma question" (2015, LICS). Using notions from computability theory, developed by Monin and Nies, together with some basic techniques from the field of error-correcting codes, we are able to give a negative answer to this question.

The proof we give also provides an answer to a related question, asked by Denis Hirschfeldt in the expository paper "Some questions in computable mathematics" (2017, Computability and Complexity). We also solve the Gamma problem for bases other than 2, answering another question of Monin and Nies.

10:30-11:00 Coffee Break
11:00-12:30 Session 54A: Secure computation (CSF)
Location: Maths LT2
11:00
Computer-aided proofs for multiparty computation with active security

ABSTRACT. Secure multi-party computation (MPC) is a general cryptographic technique that allows distrusting parties to compute a function of their individual inputs while revealing only the output of the function. It has found applications in areas such as auctioning, email filtering, and secure teleconferencing.

Given its importance, it is crucial that the protocols are specified and implemented correctly. In the programming-language community it has become good practice to use computer proof assistants to verify correctness proofs. In the field of cryptography, EasyCrypt is the state-of-the-art proof assistant. It provides an embedded language for probabilistic programming, together with a specialized logic, embedded into an ambient general-purpose higher-order logic. It allows us to conveniently express cryptographic properties. EasyCrypt has been used successfully on many applications, including public-key encryption, signatures, garbled circuits and differential privacy. Here we show for the first time that it can also be used to prove security for an n-party MPC protocol against a *malicious* adversary.

We formalize additive and replicated secret sharing schemes and apply them to Maurer's MPC protocol for secure addition and multiplication. Our method extends to general polynomial functions. We follow the insight from EasyCrypt that security proofs can often be reduced to proofs about program equivalence, a topic that is well understood in the verification of programming languages. In particular, we show that in the passive case the non-interference-based (NI) definition is equivalent to a standard game-based security definition. For the active case, we provide a new non-interference-based alternative to the usual simulation-based cryptographic definition.
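As background, the additive scheme being formalized can be sketched in a few lines of Python (the textbook construction, not the EasyCrypt development; secure multiplication additionally requires interaction between the parties, which this sketch omits):

    import random

    P = 2**61 - 1  # a public prime modulus (illustrative choice)

    def share(x, n):
        """Split secret x into n additive shares that sum to x mod P;
        any n-1 of them are uniformly random and reveal nothing."""
        shares = [random.randrange(P) for _ in range(n - 1)]
        shares.append((x - sum(shares)) % P)
        return shares

    def reconstruct(shares):
        return sum(shares) % P

    # Secure addition: each party adds its shares of x and y locally,
    # with no communication at all.
    x_sh, y_sh = share(10, 3), share(32, 3)
    z_sh = [(a + b) % P for a, b in zip(x_sh, y_sh)]
    assert reconstruct(z_sh) == 42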

11:30
Enforcing ideal-world leakage bounds in real-world secret sharing MPC frameworks
SPEAKER: Hugo Pacheco

ABSTRACT. We give a language-based security treatment of domain-specific languages and compilers for secure multi-party computation, a cryptographic paradigm that enables collaborative computation over encrypted data. Computations are specified in a core imperative language, as if they were intended to be executed by a trusted third party, and are formally verified against an information-flow policy modelling (an upper bound on) their leakage. This allows non-experts to assess the impact of performance-driven, authorized disclosure of intermediate values.

Specifications are then compiled into multi-party protocols. We formalize protocol security using (distributed) probabilistic information-flow and prove that compilation is security-preserving: protocols do not leak more than allowed by the source policy. The proof exploits a natural but previously missing correspondence between simulation-based cryptographic proofs and (composable) probabilistic non-interference.

Finally, we extend our framework to justify leakage cancelling, a domain-specific optimization that allows one to first write an efficiently computable specification that fails to meet the allowed leakage upper bound, and then to apply a probabilistic pre-processing that brings the overall leakage to within the acceptable range.

12:00
Symbolic security of garbled circuits
SPEAKER: Baiyu Li

ABSTRACT. We present the first computationally sound symbolic analysis of Yao's garbled circuit construction for secure two-party computation. Our results include an extension of the symbolic language for cryptographic expressions from previous work on computationally sound symbolic analysis, and a soundness theorem for this extended language. We then demonstrate how the extended language can be used to formally specify not only the garbled circuit construction, but also the formal (symbolic) simulator required by the definition of security. The correctness of the simulation is proved in a purely syntactical way, within the symbolic model of cryptography, and then translated into a concrete computational indistinguishability statement via our general computational soundness theorem. We also implement our symbolic security framework and the garbling scheme in Haskell, and our experiments show that the symbolic analysis performs well and can be completed within several seconds even for large circuits that are useful for real-world applications.

11:00-12:30 Session 54B: Types (FSCD)
11:00
A Unifying Framework for Type Inhabitation
SPEAKER: Sabine Broda

ABSTRACT. In this paper we define a framework to address different kinds of problems related to type inhabitation, such as type checking, the emptiness problem, generation of inhabitants and counting, in a uniform way. Our framework uses an alternative representation for types, called the pre-grammar of the type, on which different methods for these problems are based. Furthermore, we define a scheme for a decision algorithm that, for particular instantiations of the parameters, can be used to show different inhabitation-related problems to be in PSPACE.

11:30
Cumulative Inductive Types in Coq

ABSTRACT. In order to avoid well-known paradoxes associated with self-referential definitions, higher-order dependent type theories stratify the theory using a countably infinite hierarchy of universes (also known as sorts), Type{0} : Type{1} : .... Such type systems are called cumulative if for any type A we have that A : Type{i} implies A : Type{i+1}. The Predicative Calculus of Inductive Constructions (pCIC), which forms the basis of the Coq proof assistant, is one such system.

In this paper we present the Predicative Calculus of Cumulative Inductive Constructions (pCuIC) which extends the cumulativity relation to inductive types. We discuss cumulative inductive types as present in Coq 8.7 and their application to definitional translations.

12:00
Index-Stratified Types

ABSTRACT. We present Tores, a language for logical reasoning which utilizes indexed types and flexible (co)recursion principles to allow encoding of metatheoretic proofs. We particularly target the encoding of proofs using the technique of logical relations. The novel features we introduce are well-founded Mendler-style (co)recursion over indexed data types together with a form of recursion over objects in the index language to build new types. The latter, which we call index-stratified types, are analogous to the concept of large elimination in dependently typed languages. These features combined allow us to encode sophisticated case studies such as normalization for lambda calculi and normalization by evaluation. We prove the soundness of Tores as a programming and proof language via the key theorems of subject reduction and termination.

11:00-12:30 Session 54C (ITP)
Location: Blavatnik LT1
11:00
A Formalization of the LLL Basis Reduction Algorithm

ABSTRACT. The LLL basis reduction algorithm was the first polynomial-time algorithm to compute a reduced basis of a given lattice, and hence also a short vector in the lattice. It thereby approximates an NP-hard problem, with an approximation quality that depends solely on the dimension of the lattice, not on the lattice itself. The algorithm has several applications in number theory, computer algebra and cryptography.

In this paper, we develop the first mechanized soundness proof of the LLL algorithm using Isabelle/HOL. We additionally integrate one application of LLL, namely a verified factorization algorithm for integer polynomials which runs in polynomial time.

11:30
A Formally Verified Solver for Homogeneous Linear Diophantine Equations

ABSTRACT. In this work we are interested in minimal complete sets of solutions for homogeneous linear diophantine equations. Such equations naturally arise during AC-unification, that is, unification in the presence of associative and commutative symbols. Minimal complete sets of solutions are required to compute AC-critical pairs. We present a verified solver for homogeneous linear diophantine equations that we formalized in Isabelle/HOL. Our work provides the basis for formalizing AC-unification and will eventually enable the certification of automated AC-confluence and AC-completion tools.

12:00
A Coq Tactic for Equality Learning in Linear Arithmetic

ABSTRACT. Coq provides linear arithmetic tactics such as omega or lia. Currently, these tactics either fully prove the goal in progress or fail. We propose to improve this behavior: when the goal is not provable in linear arithmetic, we inject into the hypotheses new equalities discovered from the linear inequalities. These equalities may help other Coq tactics to discharge the goal. In other words, we apply, in interactive proofs, one of the seminal ideas of SMT solving: combining tactics by exchanging equalities.
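For instance (our illustration of the idea, not an example from the paper): from hypotheses $x \leq y$ and $y \leq x$, the goal $f\,x = f\,y$ is out of reach of linear arithmetic alone, but injecting the learned equality $x = y$ lets a congruence tactic close it.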

The paper describes how we have implemented equality learning in a new Coq tactic, dealing with linear arithmetic over rationals. It also illustrates how this tactic interacts with other Coq tactics.

11:00-12:40 Session 54D (LICS)
Location: Maths LT1
11:00
On the number of types in sparse graphs

ABSTRACT. We prove that for every class of graphs $C$ which is nowhere dense, as defined by Nesetril and Ossona de Mendez, and for every first-order formula $\phi(\bar x,\bar y)$, whenever one draws a graph $G\in C$ and a subset of its nodes $A$, the number of subsets of $A^{|\bar y|}$ which are of the form $\{\bar v\in A^{|\bar y|} : G\models\phi(\bar u,\bar v)\}$ for some valuation $\bar u$ of $\bar x$ in $G$ is bounded by $O(|A|^{|\bar x|+\epsilon})$, for every $\epsilon>0$. This provides optimal bounds on the VC-density of first-order definable set systems in nowhere dense graph classes.

We also give two new proofs of upper bounds on quantities in nowhere dense classes which are relevant for their logical treatment. Firstly, we provide a new proof of the fact that nowhere dense classes are uniformly quasi-wide, implying explicit, polynomial upper bounds on the functions relating the two notions. Secondly, we give a new combinatorial proof of a result of Adler and Adler stating that every nowhere dense class of graphs is stable. In contrast to the previous proofs of the above results, our proofs are completely finitistic and constructive, and yield explicit and computable upper bounds on quantities related to uniform quasi-wideness (margins) and stability (ladder indices).

11:20
Model-Theoretic Characterizations of Boolean and Arithmetic Circuit Classes of Small Depth
SPEAKER: Anselm Haak

ABSTRACT. In this paper we give a characterization of both Boolean and arithmetic circuit classes of logarithmic depth in the vein of descriptive complexity theory, i.e., the Boolean classes $\textrm{NC}^1$, $\textrm{SAC}^1$ and $\textrm{AC}^1$ as well as their arithmetic counterparts $\#\textrm{NC}^1$, $\#\textrm{SAC}^1$ and $\#\textrm{AC}^1$. We build on Immerman's characterization of constant-depth polynomial-size circuits by formulae of first-order logic, i.e., $\textrm{AC}^0 = \textrm{FO}$, and augment the logical language with an operator for defining relations in an inductive way. Considering slight variations of the new operator, we obtain uniform characterizations of the three just mentioned Boolean classes. The arithmetic classes can then be characterized by functions counting winning strategies in semantic games for formulae characterizing languages in the corresponding Boolean class.

11:40
Tree depth, quantifier elimination, and quantifier rank
SPEAKER: Yijia Chen

ABSTRACT. For a class K of graphs we consider the following three statements. (i) K has bounded tree-depth. (ii) First-order logic FO has an effective generalized quantifier elimination on K. (iii) The parameterized model checking for FO on K is in para-FO (or equivalently, in para-AC^0). We prove that (i) => (ii) and (ii) <=> (iii). All three statements are equivalent if K is closed under taking subgraphs, but not in general.

By a result due to Elberfeld et al., monadic second-order logic (MSO) and FO have the same expressive power on every class of graphs of bounded tree-depth. Hence the implication (i) => (iii) holds for MSO, too; it is the analogue of Courcelle's Theorem for tree-depth (instead of tree-width) and para-AC^0 (instead of FPT). In Elberfeld et al. it was already shown that model checking for a fixed MSO property on a class of graphs of bounded tree-depth is in AC^0.

12:00
Wreath Products of Distributive Forest Algebras
SPEAKER: Michael Hahn

ABSTRACT. It is an open problem whether definability in Propositional Dynamic Logic (PDL) on forests is decidable. Based on an algebraic characterization by Bojańczyk et al. (2012) in terms of forest algebras, Straubing (2013) described an approach to PDL based on a k-fold iterated distributive law. A proof that all languages satisfying such a k-fold iterated distributive law are in PDL would settle decidability of PDL. We solve this problem in the case k = 2: all languages recognized by forest algebras satisfying a 2-fold iterated distributive law are in PDL. Furthermore, we show that this class is decidable. This provides a novel nontrivial decidable subclass of PDL, and demonstrates the viability of the proposed approach to deciding PDL in general.

12:20
MSO Queries on Trees: Enumerating Answers under Updates Using Forest Algebras

ABSTRACT. We investigate efficient view maintenance for MSO-definable queries over trees or, more precisely, efficient enumeration of answers to MSO-definable queries over trees which are subject to local updates.

We exhibit an algorithm that uses an O(n) preprocessing phase and enumerates answers with O(log(n)) delay between them. When the tree is updated, the algorithm can avoid repeating expensive preprocessing and restart the enumeration phase within O(log(n)) time. This improves over previous results that require O(log^2(n)) time after updates and have O(log^2(n)) delay.

Our algorithms and complexity results in the paper are presented in terms of node-selecting tree automata representing the MSO queries. To present our algorithm, we introduce a balancing scheme for parse trees of forest algebra formulas that is of independent interest.

11:00-12:40 Session 54E (LICS)
Location: Maths LT3
11:00
Impredicative Encodings of (Higher) Inductive Types
SPEAKER: Sam Speight

ABSTRACT. The impredicative $\forall$ operation in Girard's system F permits encodings of various inductive types, such as the natural numbers. However, these types fail to satisfy the relevant $\eta$-rules, and so, in dependent type theory, lack dependent elimination rules. We work in Martin-Löf type theory with an impredicative universe. Using ideas from homotopy type theory, we refine the impredicative encodings so that the dependent elimination rules do hold. We construct a type of natural numbers as an initial algebra, which arises as a subtype of the System F encoding. We also encode some higher inductive types, such as the unit circle $S^1$.
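The System F encoding referred to above is the standard impredicative Church encoding of the naturals:

$\mathbb{N} := \forall X.\,(X \to X) \to X \to X, \qquad \bar{n} = \Lambda X.\,\lambda s.\,\lambda z.\,s^{n}(z).$

This type supports iteration but, as the abstract notes, fails the $\eta$-rule and hence lacks dependent elimination; the refinement carves out the subtype on which induction does hold.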

11:20
Guarded Computational Type Theory

ABSTRACT. Nakano's later modality can be used to specify and define recursive functions which are causal or synchronous; in concert with a notion of clock variable, it is possible to also capture the broader class of productive (co)programs. Until now, it has been difficult to combine these constructs with dependent types in a way that preserves the operational meaning of type theory and admits a simple hierarchy of universes.

We present an operational account of guarded dependent type theory with clocks called Guarded Computational Type Theory, featuring a novel clock intersection connective that enjoys the clock irrelevance principle, as well as a predicative hierarchy of universes which does not require any indexing in clock contexts. Guarded Computational Type Theory is simultaneously a programming language with a rich specification logic, as well as a computational metalanguage that can be used to develop semantics of other languages and logics.

11:40
Work Analysis with Resource-Aware Session Types
SPEAKER: Ankush Das

ABSTRACT. While there exist several successful techniques for supporting programmers in deriving static resource bounds for sequential code, analyzing the resource usage of message-passing concurrent processes poses additional challenges. To meet these challenges, this article presents an analysis for statically deriving worst-case bounds on the total work performed by message-passing processes. To decompose interacting processes into components that can be analyzed in isolation, the analysis is based on novel resource-aware session types, which describe protocols and resource contracts for inter-process communication. A key innovation is that both messages and processes carry potential to share and amortize cost while communicating. To symbolically express resource usage in a setting without static data structures and intrinsic sizes, resource contracts describe bounds that are functions of interactions between processes. Resource-aware session types combine standard binary session types and type-based amortized resource analysis in a linear type system. This type system is formulated for a core session-type calculus of the language SILL and proved sound with respect to a multiset-based operational cost semantics that tracks the total number of messages that are exchanged in a system. The effectiveness of the analysis is demonstrated by analyzing standard examples from amortized analysis and the literature on session types and by a comparative performance analysis of different concurrent programs implementing the same interface.

12:00
A General Framework for Relational Parametricity

ABSTRACT. Reynolds’ original theory of relational parametricity intended to capture the idea that polymorphically typed System F programs preserve all relations between inputs. But as Reynolds himself later showed, his theory can only be formalized in a meta-theory with an impredicative universe, such as Martin-Löf Type Theory. A number of more abstract treatments of relational parametricity have since appeared; however, as we show, none of these truly generalize Reynolds’ original theory, in the sense of having it as a direct instance. Indeed, they all require certain strictness conditions that Reynolds’ theory does not satisfy. To correct this, we develop an abstract framework for relational parametricity that does deliver Reynolds’ theory as a direct instance in a natural way. This framework is parametric and uniform with respect to a choice of meta-theory, which allows us to obtain the well-known PER model of Longo and Moggi as a direct instance in a natural way as well. Moreover, we demonstrate on a concrete example that our notion of parametricity also encompasses proof-relevant parametric models, which does not seem to be the case for the well-known definitions. Our framework is thus both descriptive, in that it accounts for well-known models, and prescriptive, in that it identifies properties that good models of relational parametricity should satisfy. It is constructed using the new notion of a split λ2-fibration with isomorphisms, introduced in this paper, which relaxes certain strictness requirements on split λ2-fibrations. Our main theorem is a generalization of Seely’s classical construction of sound models for System F from split λ2-fibrations: we prove that the canonical model of System F induced by every split λ2-fibration with isomorphisms validates System F’s entire equational theory on the nose, independently of the parameterizing meta-theory.

12:20
Classical realizability as a classifier for nondeterminism

ABSTRACT. We show how the language of Krivine's classical realizability may be used to specify various forms of nondeterminism and relate them with properties of realizability models. More specifically, we introduce an abstract notion of multi-evaluation relation which allows us to finely describe various nondeterministic behaviours. This defines a hierarchy of computational models, ordered by their degree of nondeterminism, similar to Sazonov's degrees of parallelism. What we show is a duality between the structure of the characteristic boolean algebra of a realizability model and the degree of nondeterminism in its underlying computational model.

11:00-12:00 Session 54F: SAT Invited Talk: Rahul Santhanam (SAT)
11:00
Modelling SAT

ABSTRACT. SAT is a fundamental computational problem and there are several approaches to modelling its complexity: exact algorithms, proof complexity and random constraint satisfaction. I will briefly survey these approaches, and point out their respective strengths and weaknesses. I will identify certain questions that do not seem to be answerable within any individual approach.

This suggests the need for a more interdisciplinary perspective; I will mention some promising results in this direction.

12:30-14:00 Lunch Break
14:00-15:30 Session 55A: Knowledge and hyperproperties (CSF)
Location: Maths LT2
14:00
The Complexity of Monitoring Hyperproperties

ABSTRACT. We study the runtime verification (RV) complexity of hyperproperties expressed in the temporal logic HyperLTL as a means to inspect a system with respect to a set of security policies. The monitor analyzes trace logs organized by common prefixes that form a finite tree-shaped Kripke structure, or by common prefixes and suffixes that form a finite acyclic Kripke structure. Unlike the RV techniques for trace-based languages, where monitors are often memoryless, to monitor a hyperproperty as the system execution(s) evolve, the monitor has to repeatedly check a growing Kripke structure that records the traces observed so far. This calls for a rigorous complexity analysis of RV for hyperproperties.

We show that for trees, the complexity in the size of the Kripke structure is L-complete independently of the number of quantifier alternations in the HyperLTL formula. For acyclic Kripke structures, the complexity is PSPACE-complete (in the level of the polynomial hierarchy that corresponds to the number of quantifier alternations). The combined complexity in the size of the Kripke structure and the length of the HyperLTL formula is PSPACE-complete for both trees and acyclic Kripke structures, and is as low as NC for the relevant case of trees and alternation-free HyperLTL formulas. Thus, the size and shape of the Kripke structure as well as the formula have significant impact on the efficiency of monitoring.

14:30
Knowledge-based Security of Dynamic Secrets for Reactive Programs

ABSTRACT. Scripts on webpages could steal sensitive user data. Much work has been done, both in modeling and implementation, to enforce information flow control (IFC) of webpages to mitigate such attacks. It is common to model scripts running in an IFC mechanism as a reactive program. However, this model does not account for dynamic script behavior such as user action simulation, new DOM element generation, or new event handler registration, which could leak information. In this paper, we investigate how to secure sensitive user information, while maintaining the flexibility of declassification, even in the presence of active attackers: those who can perform the aforementioned actions. Our approach extends prior work on secure multi-execution with stateful declassification by treating script-generated content specially to ensure that declassification policies cannot be manipulated by it. We use a knowledge-based progress-insensitive definition of security and prove that our enforcement mechanism is sound. We further prove that our enforcement mechanism is precise and has robust declassification (i.e., active attackers cannot learn more than their passive counterparts).

15:00
Assuming you know: epistemic semantics of relational annotations for expressive flow policies
SPEAKER: David Naumann

ABSTRACT. Many high-level security requirements are about the allowed flow of information in programs, but are difficult to make precise because they involve selective downgrading. Quite a few mutually incompatible and ad-hoc approaches have been proposed for specifying and enforcing downgrading policies. Prior surveys of these approaches have not provided a unifying technical framework. Notions from epistemic logic have emerged as a good approach to policy semantics but are considerably removed from well-developed static and dynamic enforcement techniques. We develop a unified framework for expressing, giving meaning to, and enforcing information downgrading policies that subsumes many previously known approaches. It builds on commonly known and widely deployed concepts and techniques, especially static and dynamic assertion checking. These concepts should make information flow accessible and enable developers without special training to specify precise policies. The unified framework allows one to directly compare different policy specification styles and to enforce them by leveraging existing tools.

14:00-15:00 Session 55B: FSCD Invited talk: Valeria Vignudelli (FSCD)
14:00
Proof techniques for program equivalence in probabilistic higher-order languages

ABSTRACT. While the theory of functional higher-order languages, starting from lambda-calculi, is a well-established research field, it is only in recent years that the operational semantics of higher-order languages with probabilistic operators has started to be extensively studied. A fundamental notion in the semantics of programming languages is that of program equivalence. In higher-order languages, program equivalence is generally formalized as a contextual equivalence, which can be hard to prove directly. This has motivated the study of proof techniques for contextual equivalence, from inductive ones, such as logical relations, to coinductive ones, mainly in the form of bisimulations. In this talk I will discuss proof techniques for program equivalence in languages combining higher-order and probabilistic features. Several operational methods, traditionally developed in a deterministic setting, have been adapted to probabilistic higher-order languages. I will discuss these proof methods and focus on bisimulation-based techniques, showing how probabilities, combined with different language features, force a number of modifications to the definition of bisimulation.

14:00-15:30 Session 55C (ITP)
Location: Blavatnik LT1
14:00
Backwards and Forwards with Separation Logic

ABSTRACT. The use of Hoare logic in combination with weakest preconditions and strongest postconditions is a standard tool for program verification, known as backward and forward reasoning. In this paper we extend these techniques to allow backward and forward reasoning for separation logic. While the former is derived directly from the standard operators of separation logic, the latter uses a new operator. We implement our framework in the interactive theorem prover Isabelle/HOL, and enable automation with several interactive proof tactics.

14:30
Reification by Parametricity
SPEAKER: Jason Gross

ABSTRACT. We present a new strategy for performing reification in Coq. That is, we show how to generate first-class abstract syntax trees from "native" terms of Coq's logic, suitable as inputs to verified proof procedures in the *proof by reflection* style. Our new strategy, based on the pattern tactic, is simple, short, and fast. We survey the existing methods of reification, describing various design choices and tricks that can be used to speed them up, as well as various limitations. Our strategy is not a good fit, for example, when a term must be reified without performing βιζ reduction. We describe the results of benchmarking 18 variants of reification, in addition to our own, finding that our own reification outperforms 16 of these methods in all cases, and one additional method in some cases; the fastest method of reification we tested is writing an OCaml plugin. Our method is the most concise of the strategies we considered, requiring only two to four lines of Ltac (beyond lists of the identifiers to reify and their reified variants) to reify a term. Additionally, our strategy automatically provides error messages which are no less helpful than Coq's own error messages.

15:00
A Coq Formalisation of SQL's Execution Engines

ABSTRACT. In this article, we use the Coq proof assistant to specify and verify the low-level layer of SQL's execution engines. To reach our goals, we first design a high-level Coq specification for data and data-centric operators intended to capture their essence. We then provide two Coq implementations of our specification. The first one, the physical algebra, consists of the low-level operators found in systems such as PostgreSQL or Oracle. The second, SQL algebra, is an extended relational algebra that provides a semantics for SQL. Last, we formally relate the physical algebra and SQL algebra. By proving that the physical algebra implements SQL algebra, we give high-level assurance that physical-algebra and SQL-algebra expressions enjoy the same semantics. All this yields the first, to the best of our knowledge, formalisation in Coq of the low-level primitives that constitute the main part of SQL execution engines.

14:00-15:20 Session 55D (LICS)
Location: Maths LT1
14:00
A van Benthem Theorem for Fuzzy Modal Logic
SPEAKER: Paul Wild

ABSTRACT. We present a fuzzy (or quantitative) version of the van Benthem theorem, which characterizes propositional modal logic as the bisimulation-invariant fragment of first-order logic. Specifically, we consider a first-order fuzzy predicate logic along with its modal fragment, and show that the first-order formulas that are non-expansive w.r.t. the natural notion of bisimulation distance are exactly those that can be approximated by modal formulas.

14:20
Riesz Modal Logic with Threshold Operators

ABSTRACT. We present a sound and complete axiomatization of the Riesz modal logic extended with one inductively defined operator which allows the definition of threshold operators. This logic is capable of interpreting the bounded fragment of probabilistic CTL over discrete and continuous Markov chains.

14:40
Logics for Word Transductions with Synthesis
SPEAKER: Nathan Lhote

ABSTRACT. We introduce a logic, called LT, to express properties of transductions, i.e. binary relations from input to output (finite) words. In LT, the input/output dependencies are modelled via an origin function which associates to any position of the output word the input position from which it originates. LT is well suited to express relations (which are not necessarily functional), and can express all regular functional transductions, i.e. transductions definable for instance by deterministic two-way transducers. Despite its high expressive power, LT has decidable satisfiability and equivalence problems, with tight non-elementary and elementary complexities, depending on the specific representation of LT-formulas. Our main contribution is a synthesis result: from any transduction R defined in LT, it is possible to synthesise a regular functional transduction f such that for all input words u in the domain of R, f is defined and (u,f(u)) is in R. As a consequence, we obtain that a functional transduction is regular iff it is LT-definable. We also investigate the algorithmic and expressiveness properties of several extensions of LT, and make explicit a correspondence between transductions and data words. As a side result, we obtain a new decidable logic for data words.

15:00
Type-two polynomial-time and restricted lookahead

ABSTRACT. This paper provides an alternate characterization of type-two polynomial-time computability, with the goal of making second-order complexity theory more approachable. We rely on the usual oracle machines to model programs with subroutine calls. In contrast to previous results, the use of higher-order objects as running times is avoided, either explicitly or implicitly. Instead, regular polynomials are used. This is achieved by refining the notion of oracle-polynomial-time introduced by Cook. We impose a further restriction on the oracle interactions to force feasibility. Both the restriction and its purpose are very simple: it is well known that Cook's model allows polynomial-depth iteration of functional inputs with no restrictions on size, and thus does not guarantee that polynomial-time computability is preserved. To mend this we restrict the number of lookahead revisions, that is, the number of times a query can be asked that is bigger than any of the previous queries. We prove that this leads to a class of feasible functionals and that all feasible problems can be solved within this class if one is allowed to separate a task into efficiently solvable subtasks. Formally put: the closure of our class under lambda-abstraction and application includes all feasible operations. We also revisit the very similar class of strongly poly-time computable operators previously introduced by Kawamura and Steinberg. We prove it to be strictly included in our class and, somewhat surprisingly, to have the same closure property. This can be attributed to properties of the limited recursion operator: it is not strongly poly-time computable, but decomposes into two such operations and lies in our class.

14:00-15:20 Session 55E (LICS)
Location: Maths LT3
14:00
LMSO: A Curry-Howard Approach to Church's Synthesis via Linear Logic
SPEAKER: Pierre Pradic

ABSTRACT. We propose LMSO, a proof system inspired by Linear Logic, as a proof-theoretical framework to extract finite-state stream transducers from linear-constructive proofs of omega-regular specifications. We advocate LMSO as a stepping stone toward semi-automatic approaches to Church's synthesis combining computer-assisted proofs with automatic decision procedures. LMSO is correct in the sense that it comes with an automata-based realizability model in which proofs are interpreted as finite-state stream transducers. It is moreover complete, in the sense that every solvable instance of Church's synthesis leads to a linear-constructive proof of the formula specifying the synthesis problem.

14:20
A Logical Account for Linear Partial Differential Equations

ABSTRACT. Differential Linear Logic (DiLL), introduced by Ehrhard and Regnier, extends linear logic with a notion of linear approximation of proofs. While DiLL is a classical logic, classical models of it in which this notion of differentiation corresponds to the usual one of functional analysis were missing. We solve this issue by constructing a model, without higher order, based on nuclear topological vector spaces and distributions with compact support. This interpretation sheds new light on the rules of DiLL, as we are able to understand them as the computational steps for the resolution of linear partial differential equations. We thus introduce D-DiLL, a deterministic refinement of DiLL with a D-exponential, for which we exhibit a cut-elimination procedure and a categorical semantics. We recover the rules of DiLL as a special case. For any linear partial differential operator D with constant coefficients, we construct a model of D-DiLL where the D-exponential represents the space of distributions on spaces of functions f such that Dg = f, and where cut-elimination resolves the equation, that is, computes g from f.

14:40
Unification nets: canonical proof net quantifiers

ABSTRACT. Proof nets for MLL (unit-free Multiplicative Linear Logic) are concise graphical representations of proofs which are canonical in the sense that they abstract away syntactic redundancy such as the order of non-interacting rules. We argue that Girard’s extension to MLL1 (first-order MLL) fails to be canonical because of redundant existential witnesses, and present canonical MLL1 proof nets called unification nets without them. For example, while there are infinitely many cut-free Girard nets ∀xPx ⊢ ∃xPx, one per arbitrary choice of witness for ∃x, there is a unique cut-free unification net, with no specified witness.

Redundant existential witnesses cause Girard’s MLL1 nets to suffer from severe complexity issues: (1) cut elimination is non-local and exponential-time (and -space), and (2) some sequents require exponentially large cut-free Girard nets. Unification nets solve both problems: (1) cut elimination is local and linear-time, and (2) a cut-free unification net is only a linear factor larger than its underlying sequent. Since some unification nets are exponentially smaller than corresponding Girard nets and sequent proofs, technical delicacy is required to ensure correctness is polynomial-time (quadratic).

These results extend beyond MLL1 via a broader methodological insight: for canonical quantifiers, the standard parallel/sequential dichotomy of proof nets fails; an implicit/explicit witness dichotomy is also needed. Work in progress extends unification nets to additives and uses them to extend combinatorial proofs [Proofs without syntax, Annals of Mathematics, 2006] to classical first-order logic.

15:00
Around Classical and Intuitionistic Linear Logics

ABSTRACT. We revisit many aspects of the syntactic relations between (variants of) classical linear logic (LL) and (variants of) intuitionistic linear logic (ILL) in the propositional setting.

On the one hand, we study different (parametric) "negative" translations from LL to ILL: their expressiveness, the relations with extensions of LL and their use in the proof theory of LL (cut elimination and focusing). In particular, this bridges the intuitionistic restriction on sequents (at most one conclusion) and the focusing property of linear logic. On the other hand, we generalise the known partial results about conservativity of LL over ILL, leading for example to a conservativity proof for LL over tensor logic (TL).

14:00-15:30 Session 55F: QBF I (SAT)
14:00
Circuit-based Search Space Pruning in QBF

ABSTRACT. This paper describes the algorithm implemented in the QBF solver CQESTO, which placed second in the non-CNF track of last year's QBF competition. The algorithm is inspired by the CNF-based solver QESTO. Just like QESTO, CQESTO invokes a SAT solver in a black-box fashion. However, it operates directly on the circuit representation of the formula. The paper analyzes the individual operations that the solver performs on the circuit.

14:30
Symmetries for QBF
SPEAKER: Manuel Kauers

ABSTRACT. While symmetries are well understood for Boolean formulas and successfully exploited in practical SAT solving, little is known about symmetries in quantified Boolean formulas (QBF). There are some works introducing adaptations of propositional symmetry-breaking techniques for QBF solving, with a theory covering only very specific parts of QBF symmetries. We present a general framework based on symmetric groups that gives a concise characterization of symmetries in QBF. Our framework handles universal and existential symmetries in a dual manner and gives rise to a natural generalization of a known symmetry-breaking technique from SAT.

15:00
Local Soundness for QBF Calculi
SPEAKER: Martin Suda

ABSTRACT. We develop new semantics for resolution-based calculi for Quantified Boolean Formulas, covering both the CDCL-derived calculi and the expansion-derived ones. The semantics is centred around the notion of a partial strategy for the universal player and allows us to show, in a local, inference-by-inference manner, that these calculi are sound. It also helps us understand some less intuitive concepts, such as the role of tautologies in long-distance resolution or the meaning of the "star" in the annotations of IRM. Furthermore, we show that a clause of any of these calculi can be interpreted, in the spirit of the Curry-Howard correspondence, as a specification of the corresponding partial strategy. The strategy is total, i.e. winning, when specified by the empty clause.

15:00-15:30 Session 56: Types (FSCD)
15:00
A Syntax for Higher Inductive-Inductive Types

ABSTRACT. Higher inductive-inductive types (HIITs) generalise inductive types of dependent type theories in two directions. On the one hand they allow the simultaneous definition of multiple sorts that can be indexed over each other. On the other hand they support equality constructors, thus generalising higher inductive types of homotopy type theory. An example which makes use of both features is the well-typed syntax of type theory where conversion rules are given as equality constructors. In this paper we propose a general definition of HIITs using a domain-specific type theory. A context in this small type theory encodes a HIIT by listing the type formation rules and constructors. The type of the elimination principle and its beta-rules are computed from the context using a variant of the syntactic logical relation translation. We show that for indexed W-types and various examples of HIITs the computed elimination principles are the expected ones. Showing that the thus specified HIITs exist is left as future work. The type theory specifying HIITs was formalised in Agda together with the syntactic translations. A Haskell implementation converts the types of sorts and constructors into valid Agda code which postulates the elimination principles and computation rules.

15:30-16:00 Coffee Break
15:40-16:40 Session 57A: Complexity (FSCD)
15:40
Counting Environments and Closures

ABSTRACT. Environments and closures are two of the main ingredients of evaluation in lambda-calculus. A closure is a pair consisting of a lambda-term and an environment, whereas an environment is a list of lambda-terms assigned to free variables. In this paper we investigate some dynamic aspects of evaluation in lambda-calculus considering the quantitative, combinatorial properties of environments and closures. Focusing on two classes of environments and closures, namely the so-called plain and closed ones, we consider the problem of their asymptotic counting and effective random generation. We provide an asymptotic approximation of the number of both plain environments and closures of size n. Using the associated generating functions, we construct effective samplers for both classes of combinatorial structures. Finally, we discuss the related problem of asymptotic counting and random generation of closed environments and closures.
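The counting itself can be made concrete with a small dynamic program. The size model below (a de Bruijn index k has size k+1, abstractions, applications and list cells each add 1, pairing is free) is an assumption made for illustration, not necessarily the paper's convention:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def terms(n):
        """Plain lambda-terms of size n in de Bruijn notation."""
        if n <= 0:
            return 0
        t = 1                        # the unique index of size n (k = n-1)
        t += terms(n - 1)            # abstraction: λM
        t += sum(terms(i) * terms(n - 1 - i) for i in range(1, n - 1))  # application: M N
        return t

    @lru_cache(maxsize=None)
    def envs(n):
        """Plain environments (lists of terms) of size n."""
        if n == 0:
            return 1                 # the empty environment
        return sum(terms(i) * envs(n - 1 - i) for i in range(1, n))

    def closures(n):
        """Plain closures: pairs (term, environment) of total size n."""
        return sum(terms(i) * envs(n - i) for i in range(1, n + 1))

    print([terms(n) for n in range(1, 8)])      # 1, 2, 4, 9, 22, 57, 154
    print([closures(n) for n in range(1, 8)])   # 1, 2, 5, 13, 35, 97, 275

The generating-function approach in the paper refines exactly such recurrences into asymptotic estimates and Boltzmann-style samplers.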

16:10
Term rewriting characterisation of LOGSPACE for finite and infinite data

ABSTRACT. We show that LOGSPACE is characterised by finite orthogonal tail-recursive cons-free constructor term rewriting systems. This result is non-trivial, because in contrast to previous work on characterising LOGSPACE by tail-recursive cons-free programs we do not impose any fixed evaluation strategy. We provide a LOGSPACE algorithm which computes constructor normal forms. We then use this algorithm in the proof of our main result: that simple stream term rewriting systems characterise LOGSPACE-computable stream functions as defined by Ramyaa and Leivant. This result concerns characterising logarithmic-space computation on infinite streams by means of infinitary rewriting.
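For flavour, here is a minimal orthogonal, tail-recursive, cons-free system (our illustration, not taken from the paper): no right-hand side builds a new constructor term, yet the system computes the last element of a non-empty list.

  last(cons(x, ys))   → aux(x, ys)
  aux(x, nil)         → x
  aux(x, cons(y, ys)) → aux(y, ys)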

15:40-16:40 Session 57B (LICS)
Location: Maths LT1
15:40
Distribution-based objectives for Markov Decision Processes
SPEAKER: Blaise Genest

ABSTRACT. We consider distribution-based objectives for Markov Decision Processes (MDP). This class of objectives gives rise to an interesting trade-off between full and partial information. As with full observation, the strategy in the MDP can depend on the state of the system; but, as with partial information, the strategy needs to account for all states at the same time.

In this paper, we focus on two safety problems that arise naturally in this context, namely, existential and universal safety. Given an MDP A and a closed and convex polytope H of probability distributions over the states of A, the existential safety problem asks whether there exists some distribution ∆ in H and a strategy of A, such that starting from ∆ and repeatedly applying this strategy keeps the distribution forever in H. The universal safety problem asks whether for all distributions in H, there exists such a strategy of A which keeps the distribution forever in H. We prove that both problems are decidable, with tight complexity bounds: we show that existential safety is PTIME-complete, while universal safety is co-NP-complete.

Further, we compare these results with the existential and universal safety problems for Rabin's probabilistic finite-state automata (PFA), the subclass of Partially Observable MDPs that have zero observations. Compared to MDPs, strategies of PFAs are not state-dependent. In sharp contrast to the PTIME result, we show that existential safety for PFAs is undecidable. On the other hand, it turns out that universal safety for PFAs is decidable in EXPTIME, with a co-NP lower bound. Finally, we show that an alternate representation of the input polytope allows us to improve the complexity of universal safety for MDPs and PFAs.
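The distribution-based dynamics is easy to make concrete: a memoryless strategy induces a Markov matrix, and the safety question is whether the orbit of the initial distribution stays inside H. A minimal numeric sketch (the MDP, strategy and polytope below are made up for illustration):

    import numpy as np

    # Two states, two actions: P[a, s, t] = Pr(t | s, a)
    P = np.array([[[0.9, 0.1],
                   [0.2, 0.8]],
                  [[0.5, 0.5],
                   [0.0, 1.0]]])

    sigma = np.array([[1.0, 0.0],     # state 0: always action 0
                      [0.5, 0.5]])    # state 1: mix both actions

    def step(delta, P, sigma):
        """delta'(t) = Σ_s Σ_a delta(s) · σ(s,a) · P(t | s, a)."""
        M = np.einsum('sa,ast->st', sigma, P)   # induced Markov matrix
        return delta @ M

    # Safety polytope H = {d : d(1) ≤ 0.6}, intersected with the simplex
    in_H = lambda d: d[1] <= 0.6 + 1e-12

    delta = np.array([0.8, 0.2])
    for _ in range(5):
        print(delta, in_H(delta))
        delta = step(delta, P, sigma)

Deciding whether some strategy keeps the distribution in H forever (rather than checking finitely many steps for one fixed strategy, as above) is exactly where the PTIME and co-NP results of the paper come in.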

16:00
Stochastic Shortest Paths and Weight-Bounded Properties in Markov Decision Processes

ABSTRACT. The paper deals with finite-state Markov decision processes (MDPs) with integer weights assigned to each state-action pair. New algorithms are presented to classify end components according to their limiting behavior with respect to the accumulated weights. These algorithms are used to provide solutions for two types of fundamental problems for integer-weighted MDPs. First, a polynomial-time algorithm for the classical stochastic shortest path problem is presented, generalizing known results for special classes of weighted MDPs. Second, qualitative probability constraints for weight-bounded (repeated) reachability conditions are addressed. Among other results, it is shown that the problem to decide whether a disjunction of weight-bounded reachability conditions holds almost surely under some scheduler belongs to NP ∩ coNP, is solvable in pseudo-polynomial time and is at least as hard as solving two-player mean-payoff games, while the corresponding problem for universal quantification over schedulers is solvable in polynomial time.
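For orientation, the classical stochastic shortest path problem asks for the minimal expected accumulated weight until a goal state is reached. On well-behaved instances it can be approximated by textbook value iteration; the sketch below (a toy MDP of our own) shows only this standard method, not the paper's end-component classification:

    # states 0 and 1; state 2 is the absorbing goal
    trans = {
        (0, 'a'): [(1, 1.0)],
        (0, 'b'): [(2, 0.5), (0, 0.5)],
        (1, 'a'): [(2, 1.0)],
    }
    weight = {(0, 'a'): 2, (0, 'b'): 3, (1, 'a'): 1}
    acts = {0: ['a', 'b'], 1: ['a']}

    V = {0: 0.0, 1: 0.0, 2: 0.0}          # V[goal] stays 0
    for _ in range(100):
        for s in (0, 1):
            V[s] = min(weight[(s, a)] + sum(p * V[t] for t, p in trans[(s, a)])
                       for a in acts[s])
    print(V)   # converges to V[0] = 3.0 (via 'a'), V[1] = 1.0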

16:20
Conditional Value-at-Risk for Reachability and Mean Payoff in Markov Decision Processes

ABSTRACT. We present the conditional value-at-risk (CVaR) in the context of Markov chains and Markov decision processes with reachability and mean-payoff objectives. CVaR quantifies risk by means of the expectation of the worst p-quantile. As such it can be used to design risk-averse systems. We consider not only CVaR constraints, but also introduce their conjunction with expectation constraints and quantile constraints (value-at-risk, VaR). We derive lower and upper bounds on the computational complexity of the respective decision problems and characterize the structure of the strategies in terms of memory and randomization.
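Concretely, for a level p, VaR_p is the p-quantile of the payoff distribution and CVaR_p is the expectation over the worst p-fraction of outcomes. A standard empirical estimator, included only to make the definitions tangible:

    import numpy as np

    def var_cvar(payoffs, p):
        """Empirical VaR_p and CVaR_p of a payoff distribution (lower = worse)."""
        xs = np.sort(np.asarray(payoffs))          # ascending: worst outcomes first
        k = max(1, int(np.ceil(p * len(xs))))      # size of the worst p-fraction
        return xs[k - 1], xs[:k].mean()            # (VaR_p, CVaR_p), CVaR_p ≤ VaR_p

    rng = np.random.default_rng(0)
    payoffs = rng.normal(10.0, 3.0, size=100_000)  # pretend these are run payoffs
    v, c = var_cvar(payoffs, 0.05)
    print(f"VaR_0.05 ≈ {v:.2f}, CVaR_0.05 ≈ {c:.2f}")   # ≈ 5.07 and ≈ 3.81

A system optimised for expectation alone may accept rare catastrophic runs; constraining CVaR_p rules these out, which is why the paper studies conjunctions of expectation, VaR and CVaR constraints.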

15:40-16:40 Session 57C (LICS)
Location: Maths LT3
15:40
What's in a game? A theory of game models

ABSTRACT. Game semantics is a rich and successful class of denotational models for programming languages. Most game models feature a rather intuitive setup, yet surprisingly difficult proofs of such basic results as associativity of composition of strategies. We set out to unify these models into a basic abstract framework for game semantics: game settings. Our main contribution is the generic construction, for any game setting, of a category of games and strategies. Furthermore, we extend the framework to deal with innocence, and prove that innocent strategies form a subcategory. We finally show that our constructions cover many concrete cases, mainly among the early models and the very recent sheaf-based ones.

16:00
An Asynchronous Soundness Theorem for Concurrent Separation Logic

ABSTRACT. Concurrent separation logic (CSL) is a specification logic for concurrent imperative programs with shared memory and locks. In this paper, we develop a concurrent and interactive account of the logic inspired by asynchronous game semantics. To every program C, we associate a pair of asynchronous transition systems [C]S and [C]L which describe the operational behavior of the Code when confronted with its Environment, both at the level of machine states (S) and of machine instructions and locks (L). We then establish that every derivation tree π of a judgment Γ ⊢ {P}C{Q} defines a winning and asynchronous strategy [π] with respect to both asynchronous semantics [C]S and [C]L. From this, we deduce an asynchronous soundness theorem for CSL, which states that the canonical map L : [C]S → [C]L from the stateful semantics [C]S to the stateless semantics [C]L satisfies a basic fibrational property. We advocate that this fibrational property provides a clean and conceptual explanation for the usual soundness theorem of CSL, including the absence of data races.
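For orientation, the characteristic proof rule of CSL is parallel composition, shown here in its standard form (∗ is the separating conjunction; the usual side conditions on variable interference are elided):

  Γ ⊢ {P₁} C₁ {Q₁}    Γ ⊢ {P₂} C₂ {Q₂}
  ─────────────────────────────────────
     Γ ⊢ {P₁ ∗ P₂} C₁ ∥ C₂ {Q₁ ∗ Q₂}

It is derivations built from such rules that the paper reinterprets as winning asynchronous strategies.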

16:20
The concurrent game semantics of Probabilistic PCF
SPEAKER: Hugo Paquet

ABSTRACT. We define a new games model of Probabilistic PCF (PPCF) by enriching with probability the thin concurrent games with symmetry recently introduced by Castellan et al. This model supports two interpretations of PPCF, one sequential and one parallel. We make the case for this model by exploiting the causal structure of probabilistic concurrent strategies. First, we show that the strategies obtained from PPCF programs have a deadlock-free interaction, and therefore deduce that there is an interpretation-preserving functor from our games to the probabilistic relational model recently proved fully abstract by Ehrhard et al. It follows that our model is intensionally fully abstract. Finally, we propose a definition of probabilistic innocence and prove a finite definability result, leading to a second (independent) proof of full abstraction.

16:00-16:30 Session 58A: Blockchain (CSF)
Location: Maths LT2
16:00
KEVM: A Complete Formal Semantics of the Ethereum Virtual Machine

ABSTRACT. A developing field of interest for the distributed systems and applied cryptography communities is that of smart contracts: self-executing financial instruments that synchronize their state, often through a blockchain. One such smart contract system that has seen widespread practical adoption is Ethereum, which has grown to a market capitalization of 100 billion USD and clears in excess of 500,000 transactions daily. Unfortunately, the rise of these technologies has been marred by a series of costly bugs and exploits. Increasingly, the Ethereum community has turned to formal methods and rigorous program analysis tools. This trend holds great promise due to the relative simplicity of smart contracts and the bounded-time deterministic execution inherent to the Ethereum Virtual Machine (EVM). Here we present KEVM, an executable formal specification of the EVM's bytecode stack-based language, built with the K Framework and designed to serve as a solid foundation for further formal analyses. We empirically evaluate the correctness and performance of KEVM using the official Ethereum test suite. To demonstrate usability, we present several extensions of the semantics and verify two implementations of the ERC20 Standard Token, written in different languages, against the ERC20 specification. These results are encouraging for the executable-semantics approach to language prototyping and specification.
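To fix intuitions about the object being specified: the EVM is a stack machine over 256-bit words with modular arithmetic. The toy interpreter below is ours and deliberately simplified (KEVM itself is a K-framework specification, not Python); it only shows the general shape of such a semantics:

    WORD = 2 ** 256                       # EVM words are 256-bit

    def run(code):
        """Execute a tiny straight-line bytecode program; return the stack."""
        stack, pc = [], 0
        while pc < len(code):
            op = code[pc]
            if op == 'PUSH':              # push the immediate operand
                pc += 1
                stack.append(code[pc] % WORD)
            elif op == 'ADD':             # modular addition, as in the EVM
                a, b = stack.pop(), stack.pop()
                stack.append((a + b) % WORD)
            elif op == 'STOP':
                break
            else:
                raise ValueError(f'unknown opcode {op}')
            pc += 1
        return stack

    print(run(['PUSH', 1, 'PUSH', 2, 'ADD', 'STOP']))   # [3]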

16:00-17:00 Session 58B (ITP)
Location: Blavatnik LT1
16:00
An Agda Formalization of Üresin & Dubois' Asynchronous Fixed-Point Theory
SPEAKER: Ran Zmigrod

ABSTRACT. In this paper we describe an Agda-based formalization of results from Üresin and Dubois' ``Parallel Asynchronous Algorithms for Discrete Data''. That paper investigates a large class of iterative algorithms that can be transformed into asynchronous processes. In their model, each node asynchronously performs partial computations and communicates results to other nodes using unreliable channels. Üresin and Dubois provide sufficient conditions on iterative algorithms that guarantee convergence to unique fixed points for the associated asynchronous iterations. Proving such sufficient conditions for an iterative algorithm is often dramatically simpler than reasoning directly about an asynchronous implementation. These results are used extensively in the literature of distributed computation, making formal verification worthwhile.

Our Agda library provides users with a collection of sufficient conditions, some of which mildly relax assumptions made in the original paper. Our primary application has been reasoning about the correctness of network routing protocols. To do so we have derived a new sufficient condition based on the ultrametric theory of Alexander Gurney; this was needed to model the complex, policy-rich routing protocol that maintains global connectivity in the Internet.
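The gap between the synchronous and asynchronous views can be illustrated with a toy contracting operator, one of the kinds of sufficient condition such theories cover. The operator and schedule below are made up; real asynchrony would also allow stale reads and message loss:

    import random

    C = [1.0, 2.0, 3.0]

    def F(x):
        """A contracting operator on three 'nodes': x ↦ x/2 + C."""
        return [x[i] / 2 + C[i] for i in range(3)]

    random.seed(0)
    x = [0.0, 0.0, 0.0]
    for _ in range(200):                 # asynchronous schedule:
        i = random.randrange(3)          # one node updates at a time,
        x[i] = F(x)[i]                   # the others keep their old values
    print(x)                             # ≈ [2.0, 4.0, 6.0], the unique fixed point

The point of the sufficient conditions is precisely that every fair schedule of this kind converges to the same fixed point as the synchronous iteration.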

16:30
Towards Certified Meta-Programming with Typed Template-Coq

ABSTRACT. Template-Coq is a plugin for Coq, originally implemented by Malecha, which provides a reifier for Coq terms and global declarations, as represented in the Coq kernel, as well as a denotation command. Initially, it was developed for the purpose of writing functions on Coq’s AST in Gallina. Recently, it was used in the CertiCoq certified compiler project, as its front-end language, to derive parametricity properties, and to extract Coq terms to a CBV λ-calculus. However, the syntax lacked semantics, be it typing semantics or operational semantics, which should reflect, as formal specifications in Coq, the semantics of Coq’s type theory itself. The tool was also rather bare-bones, providing only rudimentary quoting and unquoting commands. We generalize it to handle the entire Calculus of Inductive Constructions, as implemented by Coq, including the kernel’s declaration structures for definitions and inductives, and we implement a monad for general manipulation of Coq’s logical environment. We demonstrate how this setup allows the definition of many kinds of general-purpose plugins, whose correctness can be readily proved in the system itself and which can be run efficiently after extraction. We give a few examples of implemented plugins, including a parametricity translation. We also advocate the use of Template-Coq as a foundation for higher-level tools.

16:00-17:00 Session 58C: QBF II (SAT)
16:00
QBF as an Alternative to Courcelle's Theorem
SPEAKER: Valia Mitsou

ABSTRACT. We propose reductions to quantified Boolean formulas (QBF) as a new approach to showing fixed-parameter linear algorithms for problems parameterized by treewidth. We demonstrate the feasibility of this approach by giving new algorithms for several well-known problems from artificial intelligence that are, in general, complete for the second level of the polynomial hierarchy. By reduction from QBF we show that all resulting algorithms are essentially optimal in their dependence on the treewidth. The problems that we consider were already known to be fixed-parameter linear by using Courcelle's theorem or dynamic programming, but we argue that our approach has clear advantages over these techniques: on the one hand, in contrast to Courcelle's theorem, we get concrete and tight guarantees for the runtime dependence on the treewidth; on the other hand, we avoid tedious dynamic programming and, after showing some normalization results for CNF-formulas, our upper bounds often boil down to a few lines.

16:30
Polynomial-Time Validation of QCDCL Certificates
SPEAKER: Tomáš Peitl

ABSTRACT. Quantified Boolean Formulas (QBFs) offer compact encodings of problems arising in areas such as verification and synthesis. These applications typically require that QBF solvers not only decide whether an input formula is true or false but also output a witnessing certificate. State-of-the-art QBF solvers based on Quantified Conflict-Driven Constraint Learning (QCDCL) can emit Q-resolution proofs, from which in turn certificates can be extracted. The correctness of a certificate generated in this way is validated by substituting it into the matrix of the input QBF and using a SAT solver to check that the resulting propositional formula (the validation formula) is unsatisfiable. This final check is often the most time-consuming part of the entire certification workflow.

We propose a new validation method that does not require a SAT call and provably runs in polynomial time. It uses the Q-resolution proof from which the given certificate was extracted to directly generate a (propositional) proof of the validation formula in the RUP format, which can be verified by a proof checker such as DRAT-trim. Experiments with a prototype implementation show a robust, albeit modest, increase in the number of successfully validated certificates compared to validation with a SAT solver.
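The underlying propositional notion is simple: a clause C has the RUP property with respect to a CNF F if unit propagation on F together with the negation of C derives a conflict. A minimal checker, sketching the standard technique rather than DRAT-trim itself (literals are signed integers, clauses are lists):

    def propagate(clauses, assignment):
        """Exhaustive unit propagation; return False iff a conflict is reached."""
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                if any(l in assignment for l in clause):
                    continue                                  # clause already satisfied
                free = [l for l in clause if -l not in assignment]
                if not free:
                    return False                              # clause falsified: conflict
                if len(free) == 1:
                    assignment.add(free[0])                   # unit clause: force literal
                    changed = True
        return True

    def has_rup(clauses, clause):
        """C has RUP w.r.t. F iff F ∧ ¬C propagates to a conflict."""
        return not propagate(clauses, {-l for l in clause})

    F = [[1, 2], [1, -2], [-1, 3]]
    print(has_rup(F, [3]))   # True: ¬3 forces ¬1, then 2, then falsifies [1, -2]

Checking each clause of a RUP proof in sequence takes polynomial time overall, which is the property the proposed validation method exploits.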

17:00-18:30 Session 59: FLoC Public Lecture: Stuart Russell (FLoC)

Doors open at 4:30 pm; please be seated by 4:50 pm (attendance is free of charge and all are welcome; please register).

17:00
Unifying Logic and Probability: the BLOG Language

ABSTRACT. Logic and probability are ancient subjects whose unification holds significant potential for the field of artificial intelligence. The BLOG (Bayesian LOGic) language provides a way to write probability models using syntactic and semantic devices from first-order logic. In modern parlance, it is a relational, open-universe probabilistic programming language that allows one to define probability distributions over the entire space of first-order model structures that can be constructed given the constant, function, and predicate symbols of the program. In this public lecture, Stuart Russell will describe the language mainly through examples and cover its application to monitoring the Comprehensive Nuclear-Test-Ban Treaty.