FLOC 2018: FEDERATED LOGIC CONFERENCE 2018
PROGRAM FOR SUNDAY, JULY 8TH

09:00-10:30 Session 34A (COALG)
09:00
The Method of Coalgebra: Exercises in Coinduction
09:45
Coalgebra meets Convexity in Probabilistic Systems
09:00-10:30 Session 34B (Coq)
Location: Maths LT2
09:00
Verifying Distributed Systems

ABSTRACT. TBA

10:00
Procrastination, A proof engineering technique

ABSTRACT. We present a small Coq library for collecting side conditions and deferring their proof.

09:00-10:00 Session 34C: DCM/ITRS joint invited talk (DCM and ITRS)
09:00
Quantitative types: from Foundations to Applications

ABSTRACT. Quantitative techniques are emerging in different areas of computer science, such as model checking, logic, and automata theory, to face the challenge of today's resource-aware computation.

In this talk we give a clean theoretical understanding of the use of resources in quantitative systems, and we introduce different alternatives, suitable for a wide range of powerful models of computation, such as pattern matching, control operators and infinitary computations.

We survey some interesting results related to those typing systems such as:
(1) characterization of operational properties,
(2) computation of exact measures for reduction sequences and normal forms,
(3) inhabitation problems, and
(4) observational equivalence.

09:00-10:30 Session 34D (Domains13)
Location: Maths LT1
09:00
Invited Talk: Approximating partial by total: fixpoint characterizations of back-and-forth equivalences

ABSTRACT. Model comparison games such as Ehrenfeucht-Fraisse, pebble and bisimulation games play a central role in finite and infinite model theory, modal logic and concurrency theory.

We show how such games can be understood in terms of comonads on the category of relational structures and homomorphisms. This leads to novel syntax-free characterizations of the equivalences induced by a wide range of logics, and coalgebraic characterizations of key combinatorial parameters such as tree-width and tree-depth.

This in turn leads to a novel analysis of back-and-forth equivalences. Instead of considering partial isomorphisms with back- and forth-extension properties, we show how they can be characterized in a uniform fashion in terms of sets of coKleisli morphisms for the associated comonads, satisfying a local invertibility condition. This leads in turn to a fixpoint characterization.

An interesting feature of this characterization is that we are approximating partial maps by (sets of) total ones, and replacing extension properties with a greatest fixpoint computation.

09:40
Diffeological Spaces and Semantics for Differential Programming

ABSTRACT. Differential programming, particularly systems based on Automatic Differentiation, has recently become a popular method to approach high-dimensional optimisation and integration problems found in machine learning and computational statistics. Examples of systems relying on these techniques are TensorFlow and Stan. The semantics of such systems, however, still remains rather poorly understood. In particular, it is unclear how differentiation should interact with more traditional computer science concepts like recursion. For these purposes, we are exploring a rich semantic framework for differential programming based on generalised smooth spaces known as diffeological spaces. We demonstrate how to construct Fiore's axiomatic domain theory in this setting.

10:05
Domain Theory for Intensional Computation

ABSTRACT. Intensional computation supports generic queries of internal structure as well as the extensional computation familiar from lambda-calculus and combinatory logic. This paper introduces a denotational semantics for the simplest intensional calculus, namely SF-calculus. Its domain has a basis given by the normal forms of the calculus, extended with a least element. Their functionality is revealed by a mapping from the domain to its function space that is determined by the reduction rules of the calculus. Conversely, step-functions correspond to simple pattern-matching functions. These are representable in SF-calculus, which yields a right inverse to the mapping above. However, this semantics fails for traditional combinatory logic (SK-calculus) as the corresponding mapping into the function space does not have a right inverse. This is because combinatory logic cannot represent enough pattern-matching functions, and so cannot support intensional computation. This result complements the syntactic proof that SF-calculus is more expressive than SK-calculus.
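
As a concrete reminder of how SF-calculus computes, the following Haskell sketch models its two reduction schemes. The factorisation behaviour of F follows the standard presentation of the calculus; the head-reduction strategy and all names are assumptions made here for illustration only, not material from the talk.

```haskell
-- Minimal sketch of SF-calculus reduction (illustration only).
data SF = S | F | App SF SF deriving (Show, Eq)

-- View a term as its head together with the list of its arguments.
spine :: SF -> (SF, [SF])
spine (App t u) = let (h, as) = spine t in (h, as ++ [u])
spine t         = (t, [])

-- Factorable forms: an operator applied to fewer than three arguments is either
-- an atom (S or F itself) or a compound p q that F can take apart.
data Factor = Atom | Compound SF SF

classify :: SF -> Maybe Factor
classify t = case spine t of
  (h, as) | (h == S || h == F) && length as < 3 ->
    Just $ case as of
      [] -> Atom
      _  -> Compound (foldl App h (init as)) (last as)
  _ -> Nothing

-- One head-reduction step, if the head redex is ready to fire.
step :: SF -> Maybe SF
step t = case spine t of
  (S, x : y : z : rest) -> Just (foldl App (App (App x z) (App y z)) rest)
  (F, o : m : n : rest) -> case classify o of
    Just Atom           -> Just (foldl App m rest)                 -- F o m n -> m
    Just (Compound p q) -> Just (foldl App (App (App n p) q) rest) -- F (p q) m n -> n p q
    Nothing             -> Nothing   -- o must be reduced first; omitted in this sketch
  _ -> Nothing

-- e.g. step (foldl App F [S, S, F]) == Just S    (F S M N -> M, with M = S, N = F)
```

The ability of F to decompose a compound into its components p and q is exactly the structural inspection that, according to the abstract, SK-calculus cannot express.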

09:00-10:30 Session 34E: Invited talk & IoT security (FCS)
09:00
A Formal Approach to Cyber-Physical Attacks

ABSTRACT. We apply formal methods to lay and streamline theoretical foundations to reason about Cyber-Physical Systems (CPSs) and cyber-physical attacks. We focus on integrity and DoS attacks to sensors and actuators of CPSs, and on the timing aspects of these attacks. Our contributions are threefold: (1) we define a hybrid process calculus to model both CPSs and cyber-physical attacks; (2) we define a threat model of cyber-physical attacks and provide the means to assess attack tolerance/vulnerability with respect to a given attack; (3) we formalise how to estimate the impact of a successful attack on a CPS and investigate possible quantifications of the success chances of an attack. We illustrate definitions and results by means of a non-trivial engineering application.

10:00
LISA: Predicting the Impact of DoS Attacks on Real-World Low Power IoT Systems

ABSTRACT. Organizations and researchers alike have widely recognised the multiple advantages of adopting Network Intrusion Detection Systems (NIDS) as the norm for monitoring against DoS attacks on their systems. Although effective, implementation within the Internet of Things (IoT) is complicated as the setups and protocols used vary, necessitating data collection that is bespoke to an individual system. Standard approaches used to train NIDS include: 1) using a database of known attacks, or 2) testing systems to create a "benchmark" behaviour and flagging any anomaly as a potential attack. It is not feasible to establish a benchmark behaviour in dynamic IoT systems where devices may constantly shift, new devices might join and behaviours might change. The IoT is by its very nature ubiquitous and therefore time-consuming to benchmark, so we focus on the first approach. This approach has its own drawbacks, amplified for IoT systems: 1) collecting data unique to a system and for each attack is time-consuming, and 2) some system changes can require collecting the data, or part of it, from scratch (e.g. interactive smart homes where devices can change frequently).

To address these limitations we present a novel modelling approach which we call Lightweight IoT System under Attack (LISA) to represent the effects of Power Drain and DDoS attacks on IoT systems. We begin with a precise formalization of the properties of IoT devices using measurements from the real system and run verification on the model to ensure matching behaviour. We then model specific attacks on the systems and generate synthetic datasets.

09:00-10:30 Session 34F (GS)
Location: Maths LT3
09:00
The Geometry of Parallelism: Probabilistic and Quantum Effects

ABSTRACT. We present a Geometry of Interaction model for higher-order computation which has the ability to model commutative effects in a parallel setting, and in particular to capture probabilistic and quantum effects. The model comes with a multi-token machine, a proof net system, and a PCF-style language. Being based on a rewrite system equipped with a memory, it has a concrete nature which makes it well suited for building low-level operational descriptions of higher-order languages. A success of this approach is to give (essentially for free) an adequate model for a fully-fledged quantum programming language in which entanglement, duplication, and recursion are all available.

A question which is key to our results is the following: What do the notions of confluence and convergence become when a parallel rewrite system has both probabilistic choice and the possibility of non-termination?

Joint work with Ugo Dal Lago, Benoit Valiron, Akira Yoshimizu

09:30
The Geometry of Concurrent Interaction: Handling Multiple Ports by Way of Multiple Tokens

ABSTRACT. We introduce a geometry of interaction model for Mazza's multiport interaction combinators, a graph-theoretic formalism which is able to faithfully capture concurrent computation as embodied by process algebras like the π-calculus. The introduced model is based on token machines in which not one but multiple tokens are allowed to traverse the underlying net at the same time. We prove soundness and adequacy of the introduced model. The former is proved as a simulation result between the token machines one obtains along any reduction sequence. The latter is obtained by a fine analysis of convergence, both in nets and in token machines.

Joint work with Ryo Tanaka and Akira Yoshimizu

10:00
Coalgebras and Higher-Order Computation: a GoI Approach

ABSTRACT. Girard's geometry of interaction (GoI) can be seen as a compositional compilation method from functional programs to sequential machines, where tokens move around and express interactions between (parts of) programs. Intrigued by the combination of abstract theory and concrete dynamics in GoI, our line of work (with Naohiko Hoshino, Koko Muroya and Toshiki Kataoka) has aimed at exploiting, in GoI, results from the theory of coalgebra---a categorical abstraction of state-based transition systems that has found its use principally in concurrency theory. Such reinforced connection between higher-order computation and state-based dynamics is made possible thanks to an elegant categorical axiomatization of GoI by Abramsky, Haghverdi and Scott, where traced monoidal categories are identified to be essential in modeling interaction. In the talk I shall lay out these basic ideas, together with some of our results on: GoI semantics for a quantum programming language; and extension of GoI with algebraic effects.

09:00-10:30 Session 34G: Opening and Keynote (GraMSec)
09:00
Opening remarks and introductions
09:10
Intrusion Tolerance in Complex Cyber Systems

ABSTRACT. We will discuss intrusion tolerance as a desirable property of cyber systems and discuss the relationship between intrusion tolerance and resilience. Intrusion-tolerant complex systems maintain certain security properties even when components of those systems are compromised. We will examine some ways to quantify intrusion tolerance using graphical models of complex cyber systems with a focus on the misuse of authentication credentials and the exploitation of trust relationships. Finally, we will provide some examples of the impact of this analysis on real-world policy decisions.

10:10
Disclosure Analysis of SQL Workflows

ABSTRACT. In the context of business process management, the implementation of data minimization requirements requires that analysts are able to assert what private data each worker is able to access, not only directly via the inputs of the tasks they perform in a business process, but also indirectly via the chain of tasks that lead to the production of these inputs. In this setting, this paper presents a technique which, given a workflow that transforms a set of input tables into a set of output tables via a set of inter-related SQL statements, determines what information from each input table is disclosed by each output table, and under what conditions this disclosure occurs. The result of this disclosure analysis is a summary representation of the possible computations leading from the inputs of the workflow to a given output thereof.

09:00-10:30 Session 34I (LMW)
Location: Maths Boardroom
09:00
How to get the most out of a conference like FLOC?

ABSTRACT. An important aspect of research is to attend conferences: it allows you to learn and stay up to date on recent progress, meet like-minded colleagues, build a research and collaboration network, promote your own research, catch up with old friends and make new ones. Yet, attending a conference like FLOC can also be intimidating and overwhelming. In this talk, I will discuss some concrete strategies and share some advice. 

09:30
Asking questions to yourself and others

ABSTRACT. A good way to interact with research material (research articles and research talks) is to ask questions -- to yourself first, and then to others. Formulating questions can help you understand the work, is essential in reviewing an article, and will miraculously prevent you from falling asleep (or into your mailbox) during a talk. What is a good question? What is not a good question? What's a good time and a good way to deliver the question? This will be covered in this talk which will also include, of course, a large part of answers to your questions.

09:00-10:30 Session 34J (Linearity/TLLA)
09:00
Towards a Functional Language for Species of Structures

ABSTRACT. Species of structures were first introduced by Joyal as a unified framework for the theory of generating series in enumerative combinatorics. In 1988, Girard introduced normal functors as a model of pure lambda-calculus where terms are interpreted as infinite series with sets as coefficients (which correspond to a special case of Joyal's species). Fiore presented a generalized definition that both encompasses Joyal's species and constitutes a model of differential linear logic.

Since species encode families of labelled structures, much work has been done to investigate their connection with algebraic data types. We want to explore an alternative viewpoint of seeing them as terms in an extension of lambda-calculus motivated by the relationship between species and differential models of linear logic and use the combinatorial intuition as a guide in the design of the syntax. The next step would be to study methods of resolution of differential equations in the setting of generalized species with the ultimate goal being to establish in linear logic a combinatorial interpretation of the obtained differential calculus.

09:20
How to count linear and affine closed lambda terms?

ABSTRACT. Affine lambda-terms are lambda-terms in which each bound variable occurs at most once and linear lambda-terms are lambda-terms in which each bound variable occurs once and only once. In this paper we count the number of affine closed lambda-terms of size n, linear closed lambda-terms of size n, affine closed beta-normal forms of size n and linear closed beta-normal forms of size n, for several measures of the size of lambda-terms. From these formulas, we show how we can derive programs for generating all the terms of size n for each class. The foundation of all of this is a specific data structure, made of contexts in which one counts all the holes at each level of abstraction by lambdas.
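
To make the enumeration concrete, here is a small Haskell sketch that generates all closed linear terms of a given size. The size measure used here (variables, abstractions and applications each count 1) is an assumption chosen for illustration; the paper considers several measures and derives counting formulas rather than this naive generator.

```haskell
-- Hypothetical illustration: enumerate closed linear lambda-terms by size.
data Term = Var Int | Lam Int Term | App Term Term deriving (Show, Eq)

-- All ways to split a context into two disjoint parts (for applications,
-- the free variables are distributed between the two subterms).
splits :: [a] -> [([a], [a])]
splits []       = [([], [])]
splits (x : xs) = [ (x:l, r) | (l, r) <- splits xs ] ++ [ (l, x:r) | (l, r) <- splits xs ]

-- lin fresh ctx n: linear terms of size n using every variable of ctx exactly once.
lin :: Int -> [Int] -> Int -> [Term]
lin _ ctx 1 = [ Var v | [v] <- [ctx] ]        -- a variable consumes a singleton context
lin fresh ctx n
  | n < 1     = []
  | otherwise =
      [ Lam fresh b | b <- lin (fresh + 1) (fresh : ctx) (n - 1) ]   -- the bound variable must be used
   ++ [ App t u | (cl, cr) <- splits ctx
                , k <- [1 .. n - 2]
                , t <- lin fresh cl k
                , u <- lin fresh cr (n - 1 - k) ]
      -- (bound-variable names may repeat across independent subterms; harmless for counting)

closedLinear :: Int -> [Term]
closedLinear = lin 0 []
-- e.g. closedLinear 2 == [Lam 0 (Var 0)]: the identity is the only closed linear term of size 2.
```

Switching to affine terms amounts to also allowing the bound variable to be dropped; counting rather than listing follows the same recursion on sizes and contexts.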

09:55
Entropy and Complexity Lower Bounds

ABSTRACT. Finding lower bounds in complexity theory has proven to be an extremely difficult task. Nowadays, only one research direction is commonly acknowledged to have the ability to solve current open problems, namely Mulmuley's Geometric Complexity Theory programme. Relying on heavy techniques from algebraic geometry, it stemmed from a first lower bound result for a variant of Parallel Random Access Machines.

We analyse this original proof, together with two other proofs of lower bounds, from a semantics point of view, interpreting programs as graphings -- generalizations of dynamical systems. We show that, abstracted to a more general setting that exploits the classic notion of topological entropy, these three proofs share the same structure. This reformulation recentres the proofs around dynamical aspects, relegating the use of algebraic geometry to a model-driven choice rather than a key concept of the method.

09:00-10:30 Session 34K (MSFP)
09:00
Polynomial models of type theory

ABSTRACT. Polynomials (also known as containers) represent datatypes which, like polynomial functions, can be expressed using sums and products. Extending this analogy, I will describe the category of polynomials in terms of sums and products for fibrations. This category arises from a distributive law between the pseudomonad ‘freely adding’ indexed sums to a fibration, and its dual adding indexed products. A fibration with sums and products is essentially the structure defining a categorical model of dependent type theory. I will show how the process of adding sums to such a fibration is an instance of a general 'gluing' construction for building new models from old ones. In particular we can obtain new models of type theory in categories of polynomials. Finally, I will explore the properties of other type formers in these models, and consider which logical principles are and are not preserved by the construction.
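
For readers less familiar with the terminology, the "sums and products" reading of a polynomial (container) can be summed up in one standard equation; the list example given after it is an assumption added here purely for illustration.

\[
  \llbracket S \lhd P \rrbracket(X) \;=\; \sum_{s : S}\; \prod_{p : P(s)} X \;=\; \sum_{s : S} \big(P(s) \to X\big)
\]

For instance, lists arise from shapes S = the natural numbers (the length) with positions P(n) = Fin(n), so that ⟦S ◁ P⟧(X) is isomorphic to List(X).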

10:00
Some No-Go Theorems for Distributive Laws (extended abstract)
SPEAKER: Maaike Zwart

ABSTRACT. While papers showing examples of distributive laws are abundant, papers containing the opposite result - that certain distributive laws cannot exist - are rarely seen. In fact, the only example known to us is in 'Distributing Probability over Nondeterminism' by Varacca and Winskel, which contains a proof by Plotkin that the probability monad does not distribute over the powerset monad. In 2008, Manes and Mulry posed the question of whether the list monad distributes over itself. We solve this question, proving that both the list monad and powerset monad do not distribute over themselves, while there is precisely one distributive law for the multiset monad over itself.
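
As a reminder (in one common convention; directions and naming vary in the literature), a distributive law of a monad (T, η^T, μ^T) over a monad (S, η^S, μ^S) is a natural transformation λ : TS ⇒ ST satisfying Beck's four axioms, and it is the non-existence of such a λ for particular choices of S and T that the paper establishes:

\[
\begin{aligned}
\lambda \circ \eta^T S &= S\eta^T, &
\lambda \circ T\eta^S &= \eta^S T,\\
\lambda \circ \mu^T S &= S\mu^T \circ \lambda T \circ T\lambda, &
\lambda \circ T\mu^S &= \mu^S T \circ S\lambda \circ \lambda S.
\end{aligned}
\]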

09:00-10:30 Session 34L: Joint QBF / Proof Complexity session (PC and QBF)
Location: Maths L5
09:00
Lower Bound Techniques for QBF Proof Systems
10:00
Towards the Semantics of QBF Clauses

ABSTRACT. Towards the Semantics of QBF Clauses

09:00-10:30 Session 34N (WiL)
Location: Blavatnik LT2
09:00
Logic and Software Engineering: Are We Nearly There Yet?

ABSTRACT. Logic has been acknowledged as potentially important to software engineering for as long as the latter field has existed. Yet, in practice, most software today is built without explicit reliance on logic, and often by people who do not think of themselves as having any knowledge of formal logic. I will claim that this is unlikely to change. Does that mean logic is not useful to software engineering? Of course not: but it means its role may be implicit more than explicit, or concealed in language and tool design. I will discuss some of the ways I have seen logic turn up in model-driven software development, in particular, and try to point out some interesting challenges for the future.

10:00
First-Order vs. Second-Order Encodings for LTLf-to-Automata Translation
SPEAKER: Shufang Zhu

ABSTRACT. Translating formulas of Linear Temporal Logic (LTL) over finite traces, or LTLf, to symbolic Deterministic Finite Automata (DFA) plays an important role not only in LTLf synthesis, but also in synthesis for Safety LTL formulas. The translation is enabled by using MONA, a sophisticated tool for symbolic DFA construction from logic specifications. Recent works used a first-order encoding of LTLf formulas to translate LTLf to First Order Logic (FOL), which is then fed to MONA to construct the symbolic DFA. This encoding was shown to perform well, but other encodings have not been studied. Specifically, the natural question of whether second-order (MSO) encoding, which has significantly simpler quantificational structure, can outperform first-order (FOL) encoding remained open.

In this paper we meet this challenge and study MSO encoding for LTLf formulas. We introduce a specific MSO encoding, and show that this encoding and its simplicity indeed allow more potential than FOL for optimization, thus benefiting symbolic DFA construction. We then show by empirical evaluations that, surprisingly, the FOL encoding outperforms in practice the MSO encodings. We analyze the results and explain how to improve MONA in order to allow the MSO encoding to outperform the FOL encoding.
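
For readers unfamiliar with these encodings, the first-order view can be glimpsed from the standard semantics of Until over a finite trace with positions 0, ..., last; this is generic background, not the paper's exact MONA input.

\[
  (\varphi\,\mathcal{U}\,\psi)(x) \;\equiv\; \exists y.\; \big(x \le y \le \mathit{last} \,\wedge\, \psi(y) \,\wedge\, \forall z.\,(x \le z < y \rightarrow \varphi(z))\big)
\]

The MSO encoding instead introduces second-order variables standing for the sets of positions at which subformulas hold, which is roughly where its simpler quantificational structure comes from.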

10:15
A New Linear Logic for Deadlock-Free Session-Typed Processes (Talk Abstract)
SPEAKER: Ornela Dardha

ABSTRACT. We propose a new type system for deadlock-free session-typed π-calculus, a core concurrent programming language, by integrating for the first time, two separate lines of work. The first is the propositions-as-types approach by Caires and Pfenning, which provides a linear logic foundation for session types and guarantees deadlock-freedom. The second is Kobayashi’s approach in which types are annotated with priorities for detecting cyclic dependencies between communication operations. The outcome is a new and more expressive variant of classical linear logic with a proof assignment that gives a session type system with Kobayashi-style priorities.

09:15-10:30 Session 35: Invited Talk (PARIS)
Location: Maths L4
09:15
Riding Modal Cycles

ABSTRACT. Cyclic and ill-founded proofs are an important alternative to proofs by induction. In cyclic proofs infinite paths are permitted as long as they follow the unfolding pattern characterising the validity of the inductive formulæ involved. It has been shown recently that, over modal logic, cyclic proofs coincide with proofs by explicit induction. This result was used to establish sound and complete cut-free axiomatisations for the modal mu-calculus, an extension of propositional modal logic which captures the essence of inductive and co-inductive reasoning. In this talk we will discuss the approach via cyclic modal proofs and their virtues as a proof system in their own right.

09:30-10:30 Session 36A (HoTT/UF)
Location: Blavatnik LT1
09:30
Injective types and searchable types in univalent mathematics

ABSTRACT. We characterize the injective types as the retracts of exponential powers of the universes. The injective (n+1)-types are the retracts of the universes of n-types, and in particular the injective sets are the retracts of powersets. We apply this to construct searchable types, with the property that every decidable subset has an infimum in a well-founded, transitive, extensional order. This applies Yoneda-style machinery for types seen as ∞-groupoids.

09:30-10:30 Session 36B (WPTE)
09:30
A framework for graph rewriting.

ABSTRACT. Rewriting with graphs enjoys a long history in computer science, graphs being used to represent not only data structures, but also program structures, and even computational models. Termination and confluence techniques have been elaborated for various generalizations of trees, such as rational trees, directed acyclic graphs, lambda terms and lambda graphs. But the design of tools for rewriting arbitrary graphs has made little progress beyond various categorical definitions dating from the mid seventies and the study of the particular families of graphs listed above. This paper describes the generalization of term rewriting techniques to a general class of multigraphs that we call drags, viz. finite, directed, ordered (multi-)graphs. To this end, we develop a rich algebra of drags which allows us to view graph rewriting in exactly the same way as we view term rewriting.

Joint work with Nachum Dershowitz

10:00-10:30 Session 37 (DCM)
10:00
Models of Computation that Conserve Data

ABSTRACT. Conservation of data is an idea to build programming languages and hardware that support fixed-space programs: data cannot be created or destroyed. In this paper we give some motivations for this work and discuss some related ideas such as linearity and reversibility. We then give some examples of models of computation that can be adapted to work in this way. We conclude by discussing some ongoing work in this area.

10:30-11:00 Coffee Break
11:00-12:30 Session 38A (COALG)
11:00
Coalgebra and Automated Reasoning
11:45
Coalgebraic Tools for Randomness-Conserving Protocols

ABSTRACT. We propose a coalgebraic model for constructing and reasoning about state-based protocols that implement efficient reductions among random processes. We provide basic tools that allow efficient protocols to be constructed in a compositional way and analyzed in terms of the tradeoff between latency and loss of entropy. We show how to use these tools to construct various entropy-conserving reductions between processes. (Joint work with Matvey Soloviev)
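
A classical example of the kind of reduction between random processes in question, given here purely as an illustration and not claimed to be an example from the talk, is the von Neumann trick turning a biased coin into fair bits, sketched in Haskell:

```haskell
-- Von Neumann's trick: from independent flips of a coin with unknown bias,
-- produce unbiased bits.  HT and TH are equally likely, so keep the first flip
-- of such a pair; discard HH and TT.  The discarded pairs are where entropy is
-- lost, the kind of tradeoff (latency vs. loss of entropy) the talk's tools quantify.
vonNeumann :: [Bool] -> [Bool]
vonNeumann (a : b : rest)
  | a /= b    = a : vonNeumann rest
  | otherwise = vonNeumann rest
vonNeumann _  = []
```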

11:00-12:30 Session 38B (Coq)
Location: Maths LT2
11:00
Classical Analysis with Coq
SPEAKER: Cyril Cohen

ABSTRACT. In this talk we present an ongoing effort to develop a Coq formal library about elementary real analysis, in a firmly classical setting. Almost all existing proof assistants on the market have been used to define real numbers, and to investigate the formalization of real, and sometimes also complex, analysis. A survey by Boldo et al. reviews the different approaches and the breadth of the existing developments. In particular, the Coq standard library provides an axiomatization of real numbers with a classical flavor, and the Coquelicot external library is a conservative extension thereof. At the time of writing, these libraries however cover far less material than their analogues in the HOL ecosystem, including Harrison’s HOL Light library and its translation to Isabelle/HOL.

The present work is yet another attempt at providing a library for classical analysis in Coq. The motivation is twofold. First, the library relies on stronger classical axioms, so as to get closer to the logical formalism used in classical mathematics. In particular, this impacts the formalization of compactness-related facts. Second, the library is designed along the formalization methodology put into practice in the Mathematical Components libraries. The latter libraries are essentially geared towards algebra and this work aims at providing an extension for topics in analysis. However, we incorporated a significant subset of the Coquelicot library. The main original contributions lie in the effort put into the infrastructure of the library: automation, notations, etc. The library is still rather incomplete, but it has already proved useful for a few applications.

11:30
What is the Foreign Function Interface of the Coq Programming Language?

ABSTRACT. I propose a talk to open a discussion about this topic. My extended abstract details the motivation and the context. Briefly, oracles are a key ingredient in the success of CompCert. Such an oracle is untrusted foreign code whose outputs are checked by certified code. However, in CompCert, oracles are currently invoked through an unsafe FFI. I will illustrate some pitfalls of this FFI and propose how to overcome them. Moreover, I will conjecture that by using an adequate FFI, we can derive "theorems for free" à la Wadler in Coq from the OCaml type of polymorphic oracles, and thus discharge a part of the certification onto the OCaml typechecker. However, my proposal raises more issues than it solves: in other words, it opens a new topic of research.

12:00
Preliminary Report on the Yalla Library

ABSTRACT. Yet Another deep embedding of Linear Logic in Coq

We present some results and comments around the ongoing development of the Yalla library which provides a deep embedding of linear logic in Coq relying on an explicit exchange rule.

11:00-12:30 Session 38C (DCM)
11:00
A syntactic model of mutation and aliasing

ABSTRACT. Traditionally, semantic models of imperative languages use an auxiliary structure which mimics memory. In this way, ownerships and other encapsulation properties need to be reconstructed from the graph structure of such global memory. We present an alternative "syntactic" model where memory is encoded as part of the program rather than as a separate resource. This means that execution can be modelled by just rewriting source code terms, as in semantic models for functional programs. Formally, this is achieved by the block construct, introducing local variable declarations, which play the role of memory when their initializing expressions have been evaluated. In this way, we obtain a language semantics which is more abstract, and directly represents at the syntactic level constraints on aliasing, allowing simpler reasoning about related properties. We illustrate this advantage by expressing in the calculus the "capsule" property, characterizing an isolated portion of memory, which cannot be reached through external references. Capsules can be safely moved to another location in the memory, without introducing sharing. We state that the syntactic model can be encoded in the conventional one, hence efficiently implemented, and outline the proof that the dynamic semantics are equivalent.

11:30
Pointing to Private Names

ABSTRACT. Scoped channels, in the pi-calculus, are not nameable, as they are bound and subject to alpha-renaming. For program analysis purposes, however, to identify properties of these channels, it is necessary to talk about them. We present herein a method for uniquely identifying scoped channels.

12:00
A Category Theoretic Interpretation of Gandy’s Principles for Mechanisms
SPEAKER: Joseph Razavi

ABSTRACT. Based on Gandy's principles for models of computation we give category-theoretic axioms describing locally deterministic updates to finite objects. Rather than fixing a particular category of states, we describe what properties such a category should have. The computation is modelled by a functor that encodes updating the computation, and we give an abstract account of such functors. We show that every updating functor satisfying our conditions is computable.

11:00-12:30 Session 38D (Domains13)
Location: Maths LT1
11:00
Invited Talk: A domain theory for quasi-Borel spaces and statistical probabilistic programming

ABSTRACT. I will describe ongoing work investigating a convenient category of pre-domains and a probabilistic powerdomain construction suitable for statistical probabilistic programming semantics, as used in statistical modelling and machine learning. Specifically, we provide (1) a cartesian closed category; (2) whose objects are (pre)-domains; and (3) a commutative monad for probabilistic choice and Bayesian conditioning. Jones and Plotkin have shown that conditions (2)--(3) hold when one restricts attention to continuous domains, and Jung and Tix have proposed to search for a suitable category of continuous domains possessing all three properties (1)--(3), a question that remains open to date.

I propose an alternative direction, considering spaces with separate, but compatible, measure-theoretic and domain-theoretic structures. On the domain-theoretic side, we require posets with suprema of countably increasing chains (omega-cpos). On the measure-theoretic side, we require a quasi-Borel space (qbs) structure, a recently introduced algebraic structure suitable for modelling higher-order probability theory. There are three equivalent characterisations of this category given by: imposing an order-theoretic separatedness condition on countable-preserving omega-cpo-valued presheaves; internal omega-cpos in the quasi-topos of quasi-Borel spaces; and an essentially algebraic presentation. The category of these omega-qbses validates Fiore and Plotkin's axiomatic domain theory, yielding semantics for recursive types. To conclude, I will describe a commutative powerdomain construction given by factorising the Lebesgue integral from the space of random elements to the space of sigma-linear integration operators.

11:40
Abstractness of Continuation Semantics for Asynchronous Concurrency

ABSTRACT. The full abstractness problem was raised by Robin Milner and the difficulty of designing fully abstract semantics is well-known. A denotational semantics is said to be fully abstract if it is correct and complete. The correctness condition is usually easy to establish. However, it may be difficult to establish the completeness condition. In this paper we study the abstractness of denotational models designed with continuation semantics for concurrency (CSC). Due to the interplay between denotations and continuations, the full abstractness problem seems to be even more difficult in continuation semantics. Although several papers employ continuations in the denotational design of concurrent languages, we are not aware of any full abstractness result for a concurrent language designed with continuations. Therefore we introduce a new criterion that we name weak abstractness (which is specific to the CSC technique): it preserves the correctness condition, but relaxes the completeness condition of the classic full abstractness criterion. Weak abstractness may be useful for a wide class of denotational models designed with continuations in which full abstractness is difficult (or impossible) to achieve. To illustrate this approach we present a denotational model for a simple asynchronous concurrent language designed with continuations over metric spaces. We show that this denotational semantics is weakly abstract with respect to a corresponding operational model.

12:05
Algebra semantics of recursive computation types

ABSTRACT. Giving algebra semantics for recursive computation types in a language with effects has been an open problem. We explain how to solve it by expanding the category of cppo algebras to Fiore's category in which the maps are lax homomorphisms.

11:00-12:30 Session 38E: Formal Modelling & Analysis (FCS)
11:00
An Expressive, Flexible and Uniform Logical Formalism for Attribute-based Access Control
SPEAKER: Jiaming Jiang

ABSTRACT. Attribute-based access control (ABAC) is a general access control (AC) model that subsumes numerous earlier AC models. Its increasing popularity stems from the intuitive generic structure of granting permissions based on domain-dependent attributes of users, subjects, objects, and other entities in the system. Multiple formal and informal languages have been developed to express policies in terms of such attributes.

The utility of ABAC policy languages is potentially undermined without a properly formalized underlying model. The high-level structure in a majority of ABAC models consists of sets and sets of sets, expressions that demand that the reader unpack multiple levels of sets and tokens to determine what things mean. The resulting reduced readability potentially endangers correct expression and reduces maintainability and validation. These problems could be multiplied with models that employ nonuniform representations of actions and their governing policies.

In this paper, we address these problems by recasting the high-level structure of ABAC models in a logical formalism that treats all types of actions uniformly. Our formalism uses a simple variant of description logics to model the high-level structure, and function-free first-order logic with equality to represent and reason about the policies. Use of description logics for model formalizations, including hierarchies of types of entities and attributes, is a promise of improved usability, compared with existing ABAC models, in specifying the relationships between and requirements on domain-dependent attributes. Our formal model provides improved flexibility in supporting a variety of different requirements depending on the domain. Specifically, we will discuss how to modify the model if time plays a role in authorizing a requested action, if different policies would potentially arrive at conflicting decisions, and if default and exception rules are in application.
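
To illustrate the intended style of policy representation, a function-free first-order policy might grant read access to a patient record as follows; the rule below is a hypothetical example written for this program page, not one taken from the paper.

\[
  \mathit{permit}(u, \mathit{read}, o) \;\leftarrow\; \mathit{hasRole}(u, \mathit{physician}) \;\wedge\; \mathit{treats}(u, p) \;\wedge\; \mathit{recordOf}(o, p)
\]

Here hasRole, treats and recordOf are the kind of domain-dependent attribute relations that the description-logic layer of the model is meant to organise.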

11:30
Proving physical proximity using symbolic models

ABSTRACT. For many modern applications, such as contactless payment and keyless systems, ensuring physical proximity is a security goal of paramount importance. Formal methods have proved their usefulness when analysing standard security protocols. However, existing results and tools do not apply to, e.g., distance bounding, which aims to ensure physical proximity between two entities. This is due in particular to the fact that existing models do not represent the locations of the participants in a faithful way, and the fact that transmission of messages takes time.

In this paper, we propose several reduction results: when looking for an attack, it is actually sufficient to consider a simple scenario involving at most four participants located at some specific locations. An interesting consequence of our reduction results is that it allows one to reuse ProVerif, an automated tool developed for analysing standard security protocols. As an application, we analyse several distance bounding protocols, as well as a contactless payment protocol.

12:00
Statistical Model Checking of Guessing and Timing Attacks on Distance-bounding Protocols

ABSTRACT. Distance-bounding (DB) protocols were proposed to thwart relay attacks on proximity-based access control systems. In a DB protocol, the verifier computes an upper bound on the distance to the prover by measuring the time needed for a signal to travel to the prover and back. DB protocols are, however, vulnerable to distance fraud, in which a dishonest prover is able to manipulate the distance bound computed by an honest verifier. Despite their conceptual simplicity, devising a formal characterization of DB protocols and distance fraud attacks that is amenable to automated formal analysis is non-trivial, primarily because of their real-time and probabilistic nature. In this work, we present a framework, based on rewriting logic, for formally analyzing different forms of distance-fraud, including recently identified timing attacks. We introduce a generic, real-time and probabilistic model of DB protocols and use it to (mechanically) verify false-acceptance and false-rejection probabilities under various settings and attacker models through statistical model checking with Maude and PVeStA. Using this framework, we first accurately confirm known results and then define and quantitatively evaluate new guessing-ahead attack strategies that would otherwise be difficult to analyze manually.
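
The bound at the heart of any DB protocol (standard in the distance-bounding literature, and independent of this paper's specific rewriting-logic model) is computed from the measured round-trip time:

\[
  d(V, P) \;\le\; \frac{c \cdot \big(t_{\mathrm{receive}} - t_{\mathrm{send}} - t_{\mathrm{proc}}\big)}{2}
\]

where c is the propagation speed of the signal and t_proc the prover's (claimed) processing time; distance fraud amounts to manipulating the terms on the right, for instance by answering before the challenge has fully arrived or by under-reporting the processing time.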

11:00-12:30 Session 38F (GS)
Location: Maths LT3
11:00
Making parallel-or deterministic again: intensional full abstraction for por

ABSTRACT. Although Plotkin’s parallel-or is inherently deterministic, it has a non-deterministic interpretation in games based on (prime) event structures – in which an event has a unique causal history – because they do not directly support disjunctive causality. General event structures can express disjunctive causality and have a more permissive notion of determinism, but do not support hiding. We show that (structures equivalent to) deterministic general event structures do support hiding, and construct a new category of games based on them with a deterministic interpretation of an affine variant of PCF extended with parallel-or. We then exploit this deterministic interpretation to give a relaxed notion of determinism (observable determinism) on the plain event structures model. Putting this together with our previously introduced concurrent notions of well-bracketing and innocence, we obtain an intensionally fully abstract model for our language.

This is joint work with Simon Castellan and Glynn Winskel, presented at FSCD 2017.

11:30
Generalised Species of Plays

ABSTRACT. In many game models, including the AJM and HO/N models, a strategy can be represented as a subset of the set of all plays of an appropriate type. The collection of plays is, however, not a mere set but one equipped with symmetry, which every strategy should respect. Actually the set of plays of a game in the AJM model is explicitly associated with an equivalence relation. As for the HO/N model, Melliès studied an equivalence relation on plays that every innocent strategy respects.

In this talk, we study the situation by using ideas from combinatorics, namely generalised species and profunctors. We show that the collection of all plays of a given simple type can be seen as a generalised species or a profunctor, and that this mapping induces a (pseudo)functor from the game model to the bicategory of profunctors.

(This is joint work with Kazuyuki Asada and C.-H. Luke Ong.)

12:00
Probabilistic strategies

ABSTRACT. This talk will review probabilistic concurrent strategies (based on event structures), their definition and properties, and indicate issues and topics for future work.

11:00-12:30 Session 38G: Technical Papers (GraMSec)
11:00
A state machine system for insider threat detection
SPEAKER: Haozhe Zhang

ABSTRACT. The risk from insider threats is rising significantly, yet the majority of organizations are ill-prepared to detect and mitigate them. Research has focused on providing rule-based detection systems or anomaly detection tools which use features indicative of malicious insider activity. In this paper we propose a system complementary to the aforementioned approaches. Based on theoretical advances in describing attack patterns for insider activity, we design and validate a state-machine system that can effectively combine policies from rule-based systems and alerts from anomaly detection systems to create attack patterns that insiders follow to execute an attack. We validate the system in terms of effectiveness and scalability by applying it on ten synthetic scenarios. Our results show that the proposed system allows analysts to craft novel attack patterns and detect insider activity while requiring minimum computational time and memory.

11:45
Combining Bayesian Networks and Fishbone Diagrams to Distinguish between Intentional Attacks and Accidental Technical Failures

ABSTRACT. Because of modern societies' dependence on industrial control systems, adequate response to system failures is essential. In order to take appropriate measures, it is crucial for operators to be able to distinguish between intentional attacks and accidental technical failures. However, adequate decision support for this matter is lacking. In this paper, we use Bayesian Networks (BNs) to distinguish between intentional attacks and accidental technical failures, based on contributory factors and observations (or test results). To facilitate knowledge elicitation, we use extended fishbone diagrams for discussions with experts, and then translate those into the BN formalism. We demonstrate the methodology using an example in a case study from the water management domain.

11:00-12:30 Session 38H (HoTT/UF)
Location: Blavatnik LT1
11:00
A Yoneda lemma-formulation of the univalence axiom
11:30
Robust Notions of Contextual Fibrancy

ABSTRACT. We address two problems. First, in existing cubical models of HoTT, fibrancy of the dependent function type requires fibrancy of both domain and codomain. This is in contrast to what we see in category theory, where a functor space is already a category if the codomain is a category; the domain can be merely a quiver. Secondly, in a two-level type system that distinguishes between fibrant and potentially non-fibrant types, it is often not possible to provide an internal fibrant replacement operation as it does not commute with substitution. We identify a condition that is sufficient to address these problems, and observe that this condition is more easily met when considering notions of contextual fibrancy.

12:00
Cohesive Covering Theory
11:00-12:30 Session 38J (ITRS)
11:00
Natural Deduction and Normalization Proofs for the Intersection Type Discipline

ABSTRACT. Refining and extending previous work by Retoré (1994), we develop a systematic approach to intersection types via natural deduction. We show how a step of beta reduction can be seen as performing, at the level of typing derivations, Prawitz reductions in parallel. Then we derive, as immediate consequences of subject reduction, the main theorems about normalization for intersection types: for system D, strong normalization; for system DΩ, termination of leftmost reduction for terms typable without Ω.
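
For reference, the characteristic rules of the intersection type discipline in their usual presentation read as follows (standard formulation, recalled here as background; the talk's contribution is the natural-deduction reformulation of such systems):

\[
  \frac{\Gamma \vdash M : A \qquad \Gamma \vdash M : B}{\Gamma \vdash M : A \cap B}\,(\cap I)
  \qquad
  \frac{\Gamma \vdash M : A \cap B}{\Gamma \vdash M : A}\,(\cap E_1)
  \qquad
  \frac{\Gamma \vdash M : A \cap B}{\Gamma \vdash M : B}\,(\cap E_2)
\]

with, in system DΩ, the additional axiom Γ ⊢ M : Ω for every term M.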

11:30
Strong normalization of simple types through uniform intersection types.

ABSTRACT. A new proof of strong normalization for simple type assignment for λ-calculus is obtained, through a translation from this system to a system of uniform intersection types, which is equivalent to it in typing power and whose strong normalization property can easily be proved by induction on derivations.

12:00
Towards a Semantic Measure of the Execution Time in Call-by-Value lambda-Calculus

ABSTRACT. We investigate the possibility of a semantic account of the execution time (i.e. the number of βv-steps leading to the normal form, if any) for the shuffling calculus, an extension of Plotkin's untyped call-by-value lambda-calculus. For this purpose, we use a linear logic based denotational model that can be seen as a non-idempotent intersection type system: relational semantics. Our investigation is inspired by similar ones for linear logic proof-nets and untyped call-by-name lambda-calculus. We first prove a qualitative result: a (possibly open) term is normalizable for weak reduction (which does not reduce under abstractions) if and only if its interpretation is not empty. We then show that the size of type derivations can be used to measure the execution time. Finally, we show that, differently from the case of linear logic and call-by-name lambda-calculus, the quantitative information enclosed in type derivations does not lift to types (i.e. to the interpretation of terms). In order to get a genuine semantic measure of execution time in a call-by-value setting, we conjecture that a refinement of its syntax and operational semantics is needed.

11:00-12:30 Session 38K (Linearity/TLLA)
11:00
Infinitary Proof Theory: the Multiplicative Additive Case

ABSTRACT. Infinitary and regular proofs are commonly used in fixed point logics. Being natural intermediate devices between semantics and traditional finitary proof systems, they are commonly found in completeness arguments, automated deduction, verification, etc. However, their proof theory is surprisingly underdeveloped. In particular, very little is known about the computational behavior of such proofs through cut elimination. Taking such aspects into account has unlocked rich developments at the intersection of proof theory and programming language theory. One would hope that extending this to infinitary calculi would lead, e.g., to a better understanding of recursion and corecursion in programming languages. Structural proof theory is notably based on two fundamental properties of a proof system: cut elimination and focalization. The first one is only known to hold for restricted (purely additive) infinitary calculi, thanks to the work of Santocanale and Fortier; the second one has never been studied in infinitary systems. In this paper, we consider the infinitary proof system µMALL∞ for multiplicative and additive linear logic extended with least and greatest fixed points, and prove these two key results. We thus establish µMALL∞ as a satisfying computational proof system in itself, rather than just an intermediate device in the study of finitary proof systems.

12:00
Models of Linear Logic based on the Schwartz epsilon product
SPEAKER: Marie Kerjean

ABSTRACT. In this talk we want to present the recent results of [DK]. We construct several smooth classical denotational models of Linear Logic: they are smooth as non-linear proofs are interpreted as infinitely differentiable functions, and they feature an involutive linear negation. The starting point of this work consists in noticing that the multiplicative disjunction ⅋ corresponds to the well-known Schwartz' epsilon product. Requiring its associativity then asks for a completeness notion, while the linear involutive negation is ensured by considering a good topology (the Arens topology) on the dual, ensuring that the linear involutive negation works as an orthogonality relation.

11:00-12:30 Session 38L (MSFP)
11:00
Formalizing Constructive Quantifier Elimination in Agda

ABSTRACT. In this paper a constructive formalization of quantifier elimination is presented, based on a classical formalization by Tobias Nipkow (2008, "Reflecting Quantifier Elimination for Linear Arithmetic"). The formalization is implemented and verified in the programming language/proof assistant Agda. It is shown that, as in the classical case, the ability to eliminate a single existential quantifier may be generalized to full quantifier elimination and consequently a decision procedure. The latter is shown to have strong properties under a constructive metatheory, such as the generation of witnesses and counterexamples. Finally, this is demonstrated on a minimal theory on the natural numbers.

11:45
Relating Idioms, Arrows and Monads from Monoidal Adjunctions

ABSTRACT. We revisit once again the connection between three notions of computation: monads, arrows and idioms (also called applicative functors). We employ monoidal categories of finitary functors and profunctors on finite sets as models of these notions of computation, and develop the connections between them through adjunctions. As a result, we obtain a categorical version of Lindley, Yallop and Wadler’s characterisation of monads and idioms as arrows satisfying an isomorphism.
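
One direction of the connection being revisited can be recalled concretely in Haskell: every idiom (applicative functor) gives rise to an arrow of "static" computations. This is the standard construction underlying Lindley, Yallop and Wadler's characterisation, shown here only as background; the talk's categorical treatment via monoidal adjunctions is not reflected in this snippet.

```haskell
import Prelude hiding (id, (.))
import Control.Category (Category (..))
import Control.Arrow (Arrow (..))

-- Static arrows: computations over an applicative functor f that only consult
-- their input at the very end, via a pure function stored inside f.
newtype Static f a b = Static (f (a -> b))

instance Applicative f => Category (Static f) where
  id                  = Static (pure id)
  Static g . Static h = Static ((.) <$> g <*> h)

instance Applicative f => Arrow (Static f) where
  arr f            = Static (pure f)
  first (Static g) = Static (first <$> g)   -- 'first' on plain functions
```

Roughly, Lindley, Yallop and Wadler's result says that an arrow A arises from an idiom in this way exactly when A a b is naturally isomorphic to A () (a -> b); this is the isomorphism the abstract refers to.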

11:00-12:30 Session 38M: Contributed talks (PARIS)
Location: Maths L4
11:00
LMSO: A Curry-Howard Approach to Church’s Synthesis via Linear Logic (speaker: Pierre PRADIC)
SPEAKER: Pierre Pradic

ABSTRACT. We propose LMSO, a proof system inspired by Linear Logic, as a proof-theoretical framework to extract finite-state stream transducers from linear-constructive proofs of omega-regular specifications. We advocate LMSO as a stepping stone toward semi-automatic approaches to Church’s synthesis combining computer assisted proofs with automatic decision procedures. LMSO is correct in the sense that it comes with an automata-based realizability model in which proofs are interpreted as finite-state stream transducers. It is moreover complete, in the sense that every solvable instance of Church’s synthesis problem leads to a linear-constructive proof of the formula specifying the synthesis problem.

11:45
Coinductive Uniform Proofs: An Extended Abstract (speaker: Yue LI)
SPEAKER: Yue Li

ABSTRACT. We propose a coinductive extension of Miller et al.'s framework of uniform proof as a machinery for formulating and proving coinductive invariants arising from first-order Horn clause logic programming. It helps the study of coinductive logic programming.

11:00-12:30 Session 38N: Joint QBF / Proof Complexity session (PC and QBF)
Location: Maths L5
11:00
Size, Cost and Capacity: A Semantic Technique for Hard Random QBFs
11:30
Short Proofs in QBF Expansion
11:50
The Symmetry Rule for Quantified Boolean Formulas
SPEAKER: Martina Seidl
12:10
Discussion
11:00-12:30 Session 38Q (WPTE)
11:00
Automating the Diagram Method to Prove Correctness of Program Transformations

ABSTRACT. Our recently developed LRSX Tool implements a technique to automatically prove the correctness of program transformations in higher-order program calculi which may permit recursive let-bindings. The focused notion of correctness for program transformations is invariance with respect to the observational semantics of programs. The so-called diagram method is automated by combining unification, matching, and reasoning on alpha-renamings on the higher-order meta-language, and automating induction proofs via an encoding into termination problems of term rewrite systems. We explain the techniques, we illustrate the usage of the tool, and we report on experiments.

11:30
Optimizing Space of Parallel Processes

ABSTRACT. The paper is a contribution to exploring and analyzing space-improvements in the concurrent and functional process-calculus CHF. Space-improvements are defined as a generalization from a deterministic pure functional language. The main part of the paper is a polynomial algorithm for space optimizations of parallel independent processes. Applications of this algorithm are: (i) affirmation of space-improving transformations for particular classes of program transformations; (ii) support of an interpreter-based method for refuting space-improvements; and (iii) use as a stand-alone offline optimizer for space (or similar resources) of parallel processes.

12:00
On Transforming Functions Accessing Global Variables into Logically Constrained Term Rewriting Systems

ABSTRACT. A usual idea of transforming imperative programs over the integers into rewriting systems is to transform programs into transition systems where some auxiliary function symbols are introduced to represent intermediate states. In transforming a function calling itself or others, we allow an auxiliary symbol corresponding to a function call to have an extra argument where the called function is executed, and values stored in global variables are passed to the called function as its extra arguments, and the caller receives the possibly updated values together with returned values of the called function. Such a mechanism for global variables performs well under sequential execution, but is not enough to adapt to parallel execution. In this paper, we propose a transformation of imperative programs with function calls and global variables into logically constrained term rewriting systems that represent actions of the whole execution environment with call stacks. More precisely, we prepare a function symbol for the whole environment of execution, which stores global variables and a call stack as its arguments. For a function call, we push the frame to the stack and pop it after the execution. Any running frame is executed at the top of the stack, and statements accessing global variables are represented by rewrite rules for the environment symbol.
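
As a purely hypothetical illustration of the idea (the rule format and names below are ours, not the paper's), consider a global variable g and a function inc that increments it and returns the new value. The environment symbol carries g and the call stack, and a call by main is represented by pushing and later popping a frame:

\[
\begin{array}{lll}
\mathsf{env}(g,\ \mathsf{main}_1(x)::st) & \to & \mathsf{env}(g,\ \mathsf{inc}::\mathsf{main}_1(x)::st) \quad \text{(push the frame for the call)}\\
\mathsf{env}(g,\ \mathsf{inc}::st) & \to & \mathsf{env}(g+1,\ \mathsf{ret}(g+1)::st) \quad \text{(the body of inc updates the global)}\\
\mathsf{env}(g,\ \mathsf{ret}(r)::\mathsf{main}_1(x)::st) & \to & \mathsf{env}(g,\ \mathsf{main}_2(x,r)::st) \quad \text{(pop and resume the caller)}
\end{array}
\]

The running frame is always at the top of the stack, and only the environment symbol touches the globals, which is what makes the representation robust under parallel execution.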

11:00-12:40 Session 38R (WiL)
Location: Blavatnik LT2
11:00
On Expanding Standard Notions of Constructivity

ABSTRACT. Brouwer developed the notion of mental constructions based on his view of mathematical truth as experienced truth. These constructions extend the traditional practice of constructive mathematics, and we believe they have the potential to provide a broader and deeper foundation for various constructive theories. We here illustrate mental constructions in two well studied theories – computability theory and plane geometry – and discuss the resulting extended mathematical structures. Further, we demonstrate how these notions can be embedded in an implemented formal framework, namely the constructive type theory of the Nuprl proof assistant. Additionally, we point out several similarities in both the theory and implementation of the extended structures.

11:15
Can every real proof be represented by a formal proof? The Hilbert-Gentzen thesis

ABSTRACT. The thesis that any mathematical proof can be formalized was advocated by (among others) Hilbert and Gentzen around a century ago. The Hilbert-Gentzen thesis says: “Every real proof can be represented by a formal proof.” This article will go through several possible interpretations of this thesis, to be able to understand and discuss it fairly. To be able to fully understand the reasoning that separates the different interpretations, there will be a discussion of the different arguments for why formalisation has the status it currently has within mathematics. The H-G thesis has held a firm place in the foundations and philosophy of mathematics since these thoughts were first expressed a century ago. However the recent “informal proofs debate” gives us several arguments against the thesis, arguing that it is false and that it should be rejected. This talk will (1) define the thesis with help from the context in which it was introduced, (2) give the main arguments for accepting it, (3) give the main arguments for refuting it, as stated by M. Marfori, C. Cellucci, K. Gödel, and F. Tanswell, and then (4) evaluate the strength and scope of the arguments and counterarguments. This talk will be a summarization and discussion of the current status of the Hilbert-Gentzen thesis, and the conclusion will be that the thesis overestimates the value of formalisation and that the thesis should be rejected.

11:30
A semantical view of sequent based systems

ABSTRACT. In this work, we explore proof-theoretical connections between sequent, nested and labelled calculi. In particular, we show a semantical characterisation of intuitionistic, normal and non-normal modal logics for all these systems, via a case-by-case translation from labelled nested to labelled sequent systems.

11:45
Decomposing labelled proof theory for intuitionistic modal logic
SPEAKER: Sonia Marin

ABSTRACT. We present a labelled deduction for intuitionistic modal logic that comes with two relation symbols, one for the accessible world relation associated with the Kripke semantics for modal logics, and one for the preorder relation associated with the Kripke semantics for intuitionistic logic. Thus, our labelled system is in close correspondence to the birelational Kripke models.

12:00
About the unification type of topological logics over Euclidean spaces

ABSTRACT. We introduce a new inference problem for topological logics, the unifiability problem. Our main result is that, within the context of the mereotopology of all regular closed polygons of the real plane, unifiable formulas always have finite complete sets of unifiers.

11:30-12:30 Session 39 (LMW)
Location: Maths Boardroom
11:30
Can Logic Make Us Better People?

ABSTRACT. In the medieval European university, logic was counted as one of the three foundational subjects of the liberal arts, which were reckoned to be the essential knowledge for living life as a free human being. The Naiyyayikas of ancient India held logic in even higher regard, regarding it as one of the fundamental instruments for achieving enlightenment.

In the modern day, logic does not hold such an exalted position in our collective imagination. It is regarded as a dry and technical subject, whose details are of interest only to scientific specialists. Still, logic has achieved an extraordinary level of sophistication, and is an essential tool that informs much research in computer science.

Given the overwhelming impact of the computer upon society, it is worth considering again the question of how, and whether, logic can help us live ethical lives. In this talk, I will explore the kinds of ethical questions one can expect to find in modern logic and computer science research, and reflect upon the different answers that modern logicians have given over the decades (both to our credit and discredit).

12:00
Research as a collaborative effort

ABSTRACT. As society becomes more and more connected, so does research. Research in logic and computer science is changing, becoming more collaborative and open. This often contrasts with young researchers' need to build their own research. In this talk, starting from my personal experience, I will provide some advice on how to survive in collaborative research.

12:30-14:00 Lunch Break
14:00-15:30 Session 40A (COALG)
14:00
Coalgebras and Kleisli Maps for Probability
14:45
Coalgebraic Logics: From Branching Time to Linear Time
14:00-15:30 Session 40B (Coq)
Location: Maths LT2
14:00
A Coq mechanised formal semantics for real life SQL queries: Formally reconciling SQL and (extended) relational algebra.

ABSTRACT. TBA

15:00
Discussion with the Coq development team
14:00-15:30 Session 40C (DCM)
14:00
On Higher-Order Probabilistic Computation.

ABSTRACT. Ideas and tools from probability theory are extremely useful in various areas of theoretical computer science, and become essential in fields like cryptography, in which being able to flip a fair coin is a necessity rather than a choice. Recently, there has been a growing interest in studying the role of higher-order functions in probabilistic computation. We give some hints about why this indeed deserves to be studied, focusing on cryptography and machine learning. We then overview the metatheory of a probabilistic lambda-calculus, highlighting the (striking) differences with that of the pure, deterministic lambda-calculus.
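
As a concrete illustration of higher-order probabilistic computation, here is a minimal Haskell sketch of a discrete probability monad together with a function that takes a randomized test as an argument; the names (Dist, coin, amplify) are our own and are not taken from the talk.

-- A discrete distribution: a weighted list of outcomes.
newtype Dist a = Dist { runDist :: [(a, Rational)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [ (f x, p) | (x, p) <- xs ]

instance Applicative Dist where
  pure x = Dist [(x, 1)]
  Dist fs <*> Dist xs = Dist [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

instance Monad Dist where
  Dist xs >>= k = Dist [ (y, p * q) | (x, p) <- xs, (y, q) <- runDist (k x) ]

-- A fair coin flip.
coin :: Dist Bool
coin = Dist [(True, 1/2), (False, 1/2)]

-- A higher-order probabilistic function: it receives a randomized
-- predicate, runs it three times, and returns the majority verdict.
amplify :: (a -> Dist Bool) -> a -> Dist Bool
amplify test x = do
  a <- test x
  b <- test x
  c <- test x
  pure (length (filter id [a, b, c]) >= 2)

-- Example: amplifying a test that is nothing more than a coin flip.
example :: Dist Bool
example = amplify (const coin) ()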

15:00
Towards a Formal System for Topological Quantum Computation

ABSTRACT. We study Topological Quantum Computing (TQC) from the perspective of computability theory, with the aim of defining a formal system which is able to capture the computational features of TQC. We discuss the mathematical model for TQC, namely Modular Tensor Categories, and their suitability for the construction of a domain of denotational objects similar to the Scott domain of the lambda calculus. This leads us to believe that a formalism similar to the classical lambda calculus can be defined also for TQC.

14:00-15:05 Session 40D (Domains13)
Location: Maths LT1
14:00
Invited Talk: Topology and Domain Theory Interfaces

ABSTRACT. Over the history of its development, domain theory has strongly influenced various developments in topology, particularly non-Hausdorff topology, and conversely topological notions and theories have significantly impacted domain theory. One might almost speak of a mathematical symbiosis. In this presentation we trace the historical development stream of various of these interfaces. We begin by highlighting some of the significant developments concerning those topological spaces arising from equipping a dcpo with its Scott topology. Such spaces have been interesting test cases up until very recently for such topological concepts as sobriety, well-filteredness, coherence, and related topological notions.

Monotone convergence spaces are a useful topological generalization of these topological dcpo examples, as well as being generalizations of sober spaces. Just as general spaces have a sobrification, they have a monotone convergence space completion.

Another fruitful line of investigation has been exploring the relation between topological spaces and their lattices $\mathcal{O}(X)$ of open sets. For example the distributive continuous lattices resp.\ completely distributive lattices are precisely those lattices that can be represented as $\mathcal{O}(X)$ for some locally compact sober space resp.\ for some continuous domain equipped with the Scott topology. Very recent results on the Ho-Zhao problem fit into this category, as well as into the study of the Scott topology on dcpos.

Cartesian-closed categories are important for modeling higher types. This led originally to the search for cartesian-closed categories of continuous domains, but later led to more general notions of compactly generated spaces, QCB-spaces (quotients of countably based spaces), and equilogical spaces among other developments.

Connections with classical general topology arise by considering refinements of a given non-Hausdorff topology. One standard refinement is the patch topology, which adds complements of compact saturated sets to the set of open sets. The result is particularly pleasing for the stably compact spaces, for which one obtains a compact partially ordered space in the sense of Nachbin (in particular, a Hausdorff space). The continuous domains with stably compact Scott topology have powerdomains that are again stably compact. In the case of dcpos, a standard refinement of the Scott topology is the Lawson topology, which has also proved a useful topological tool.

Finally one should mention quasimetric spaces. Mike Smyth introduced a notion of completeness for such spaces that generalizes both directed completeness of dcpos and the traditional notion of completeness for metric spaces. The theory has undergone quite extensive development up to the current time.

14:40
A Domain-theoretic Skorohod’s Theorem

ABSTRACT. We outline a domain-theoretic version of Skorohod's Theorem, a basic result in stochastic process theory. We state an analog of the theorem for domains, outline the proof of this result, and show how it implies the original result of Skorohod. We also propose several applications for the results.

14:00-15:30 Session 40E: Cryptography (FCS)
14:00
A Study on the Preservation of Cryptographic Constant-Time Security in the CompCert Compiler

ABSTRACT. Cryptographic constant-time programming is an established coding discipline used in cryptography to secure programs against timing attacks. Most, if not all, cryptography libraries try to adhere to this coding style. The C programming language is oftentimes considered a portable assembly, and is hence used by a great number of cryptography libraries. However, what is executed by the hardware is actual assembly, not C. One can thus wonder whether security properties are preserved through compilation, as even formally verified compilers only ensure preservation of observable behaviors.

We present in this paper how to derive a natural framework for proving preservation of cryptographic constant-time security from simulation-based proofs of compiler correctness. We also give insights into how this could be adapted to CompCert.

14:30
Homomorphisms and Minimality for Enrich-by-Need Security Analysis

ABSTRACT. A cryptographic protocol can be deployed in a variety of environments, but existing methods of protocol analysis focus only on the protocol, without being sensitive to assumptions about these environments.

We present LPA, a tool which analyzes protocols in context. LPA uses two programs, cooperating with each other: CPSA, a well-known system for protocol analysis, and Razor, a model-finder based on SMT technology. Our analysis follows the enrich-by-need paradigm, in which models of protocol execution are generated and examined.

The choice of which models to generate is important, and we develop a careful motivation and evaluation of LPA's strategy of building minimal models. In fact ``minimality'' can be defined with respect to either of two preorders, namely the homomorphism preorder and the embedding preorder (i.e. the preorder of injective homomorphisms); we discuss the merits of each. Our main technical contributions are algorithms for building homomorphism-minimal models and for generating a set-of-support for the models of a theory, in each case by scripting interactions with an SMT solver.

15:00
A Homotopical Approach to Cryptography

ABSTRACT. We present a new direction for the formal specification of cryptographic schemes using types. In this approach, we specify a cryptographic protocol using the tools of homotopy type theory. Homotopy type theory adds the notion of higher inductive type and univalence axiom to Martin-Löf’s intensional type theory. A higher inductive type allows us to introduce constructors for paths and higher-dimensional paths in addition to points. The paths are then identified by equivalences in the universe through univalence. A higher inductive type can act as a front-end mapped to a concrete cryptographic implementation in the universe. By having a higher inductive type front-end, we can encode domain-specific laws of the cryptographic implementation as higher-dimensional paths. Due to univalence and functoriality, the path structure will be preserved in the mapping and realized by equivalence in the universe. Using this model we can achieve various guarantees on the correctness of the cryptographic implementation.

14:00-15:30 Session 40F (GS)
Location: Maths LT3
14:00
How to denotational an operational semantics

ABSTRACT. This talk will examine the role of game semantics as a bridge between operational and denotational semantics, combining the computational insights of the former with the good mathematical properties of the latter, especially compositionality.

14:30
What's in a game?

ABSTRACT. While most game models feature a rather intuitive setup, they also come with surprisingly difficult proofs of such basic results as associativity of composition of strategies.  We propose a basic abstract framework for game semantics, (innocent) game settings, which enables the generic construction of a category of games and (innocent) strategies, thus unifying a number of concrete cases.

15:00
Game Semantics 25 years on

ABSTRACT. Game semantics has been developed since the 1990's as a denotational paradigm capturing observational equivalence of functional languages with imperative features. While initially introduced for PCF variants, the theory can nowadays express effectful languages ranging from ML fragments and Java programs to C-like code. In this talk we present recent advances in devising game models for effectful computation. Central in this approach is the use of names for representing, in an abstract fashion, different notions and effects, such as references, higher-order values and polymorphism. Moreover, the use of names facilitates the presentation of game moves as method calls and returns, thus transforming games from a denotational semantics to an operational/trace semantics for open terms.

14:00-15:30 Session 40G: Technical Papers (GraMSec)
14:00
Tendrils of Crime: Visualizing the Diffusion of Stolen Bitcoins
SPEAKER: Mansoor Ahmed

ABSTRACT. As cryptocurrencies enter the lives of ordinary people, cybercriminals start using them to facilitate crime. Europol last year estimated that about 3-4% of crime proceeds are laundered through the use of cryptocurrencies and that this figure is rising steadily. There are two main reasons: lack of regulation and anonymity. Recently, Anderson et al. showed that it is both computationally and legally possible to follow the 'taint' that a stolen coin leaves as it propagates through the network. In this paper we present new visualisation mechanisms for taint propagation in Bitcoin that display how cybercriminals launder money.

14:45
Deciding the Non-Emptiness of Attack trees

ABSTRACT. We define and study the decision problem of the emptiness of an attack tree. This decision problem reflects the natural question of knowing whether some attack scenario described by the tree holds in a given model of the system to defend. We establish accurate complexity bounds, ranging from NP-completeness for arbitrary trees down to NLOGSPACE-completeness for trees with no occurrence of the AND operator. Additionally, if the input system to defend has a succinct description, we show that the emptiness problem is PSPACE-complete.

14:00-15:30 Session 40H (HoTT/UF)
Location: Blavatnik LT1
14:00
Cubical Computational Type Theory
14:30
The definitional symmetric cubical structure of types in type theory with equality defined by abstraction over an interval
15:00
Ordered Cubes
14:00-15:30 Session 40I: Member Talk, Discussion (IFIP WG 1.6)
14:00
Informal Report on FSCD
14:30
Discussion on Reviewing Culture in TCS
15:15
Discussion on IFIP
14:00-15:30 Session 40J (ITRS)
14:00
Intersection Types for Unboundedness Problems (ITRS invited talk)

ABSTRACT. Intersection types were originally developed as an extension of simple types, but they can also be used for refining simple types. In this talk we concentrate on the latter option; more precisely, on the use of intersection types in the context of higher-order recursion schemes. Intersection types have been used in this context for several purposes, such as model-checking, pumping, and transformations of schemes. After an overview, I will show how intersection types can be used to solve problems concerning unboundedness of some quantities. An example of such a problem is: given a higher-order recursion scheme generating an infinite tree, decide whether for every number n there is a path in this tree containing at least n occurrences of a fixed letter #. Notice that this is not a regular property of the tree, so it cannot be decided using standard model-checking methods.

15:00
Gradual Intersection Types
SPEAKER: Pedro Ângelo

ABSTRACT. Gradual typing integrates dynamic and static types. Since its introduction, it has been successfully applied to several extensions of simple types, including subtyping, parametric polymorphism and substructural types. This work studies its application to intersection type systems. We introduce a new approach to define a gradual version of the original intersection type system of Coppo and Dezani. We then present a new operational semantics for terms typed with static and dynamic intersection types, which enables dynamic type casts and identifies the causes of type errors in a framework inspired by the blame calculus. Finally we prove several properties of our system, including a correctness criterion and soundness of the extension of the original gradual type system.

14:00-15:30 Session 40K (LMW)
Location: Maths Boardroom
14:00
Relational verification of probabilistic programs

ABSTRACT. 2-safety properties go beyond the traditional formulation of program verification by considering pairs of traces (instead of single traces). In the probabilistic setting, verification of 2-safety properties is conveniently achieved using probabilistic relational program logics such as pRHL. The talk will highlight a fruitful connection between pRHL and probabilistic couplings, a well-known tool for analyzing Markov chains, and discuss applications to cryptography and differential privacy.
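
For readers meeting the notion for the first time, the coupling concept the talk relies on can be stated in one line (the standard definition, not specific to the talk):

\[
\mu \in \mathcal{D}(A \times B) \text{ is a coupling of } \mu_1 \in \mathcal{D}(A) \text{ and } \mu_2 \in \mathcal{D}(B) \text{ iff } \pi_1(\mu) = \mu_1 \text{ and } \pi_2(\mu) = \mu_2.
\]

For instance, the diagonal distribution assigning probability 1/2 to each pair (b, b) of equal booleans is a coupling of two fair coins, and its existence witnesses that the two coins are identically distributed.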

14:30
Constrained Horn clauses as a basis of automatic program verification: the higher-order case

ABSTRACT. We introduce constrained Horn clauses in higher-order logic, and study satisfiability and related decision problems motivated by the automatic verification of higher-order programs. Although satisfiable systems of higher-order clauses in the standard semantics do not generally have least models, by viewing these systems as a kind of monotone logic programs, we show that there are non-standard semantics that do satisfy the least model property. Moreover the respective satisfiability problems in the standard and non-standard semantics are inter-reducible. With a view to exploiting the remarkable efficiency of SMT solvers, we survey recent developments in the algorithmic solution of higher-order Horn systems by reduction to first order, and discuss related problems.
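
To give a feel for the objects involved, here is a small illustrative system of higher-order constrained Horn clauses over linear integer arithmetic (our own example, not taken from the talk); the higher-order variable f ranges over integer relations:

\begin{align*}
\mathit{Add}\,x\,y\,z &\leftarrow z = x + y\\
\mathit{Iter}\,f\,n\,a\,r &\leftarrow n \le 0 \wedge r = a\\
\mathit{Iter}\,f\,n\,a\,r &\leftarrow n > 0 \wedge f\,n\,a\,b \wedge \mathit{Iter}\,f\,(n-1)\,b\,r\\
\mathit{false} &\leftarrow \mathit{Iter}\,\mathit{Add}\,n\,0\,r \wedge n \ge 0 \wedge r < n
\end{align*}

Satisfiability of such a system amounts to finding interpretations of Add and Iter that make all clauses valid, which here verifies that iterating addition of the counter values from n down to 1 yields a result no smaller than n.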

15:00
Rewriting and Applications

ABSTRACT. In this talk I will briefly present my research area, focusing on recent results on nominal rewriting and including some mentoring aspects.

14:00-15:30 Session 40L (Linearity/TLLA)
14:00
Generalized generalized species of structure and resource modalities

ABSTRACT. We return to a construction carried out previously, where we claimed that simply-typed approximations = intersection type derivations, in the precise sense that we built a categorical equivalence between specific type systems (encompassing all well-known intersection type systems used to characterize normalization) and simply-typed approximations, realized as approximation functors arising from the translation of the language into linear logic. By studying these functors, we claim that their main feature is that they map the exponential of linear logic to what can reasonably be called a resource modality, corresponding to linear, affine or cartesian intersection types. We therefore present the story under the slightly different, and less syntactic, slogan: intersection type system = multiplicative linear logic + resource modality. These resource modalities are linked with well-known systems; in particular, generalized species of structure can be seen as a strictification of the Kleisli category of the linear resource modality. The study of the links between these different resource modalities can shed new light on the extensional collapse, paving the way for a study of this collapse for dynamic semantics (such as the Geometry of Interaction and ordered combinatory algebras, used to account for forcing and realizability).

14:20
Benchmarking Linear Logic Translations

ABSTRACT. Benchmarking automated theorem proving (ATP) systems using standardized problem sets is a well-established method for measuring their performance. However, the availability of such libraries for non-classical logics is very limited. In this work we seek to start a discussion of benchmarks for Girard’s linear logic and some of its variants. For some quick bootstrapping of the collection of problems, we use translations of the collection of Kleene’s intuitionistic theorems in the traditional monograph Introduction to Metamathematics. We analyze four different translations of intuitionistic logic into linear logic and compare their proofs using linear logic based provers with focusing.
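
For orientation, the best known of these translations is Girard's (call-by-name) embedding of intuitionistic logic into linear logic; its key clauses are standard, although the work compares several alternatives:

\[
(A \to B)^{\circ} = {!}A^{\circ} \multimap B^{\circ}, \qquad
(A \wedge B)^{\circ} = A^{\circ} \mathbin{\&} B^{\circ}, \qquad
(A \vee B)^{\circ} = {!}A^{\circ} \oplus {!}B^{\circ},
\]

with atoms translated to themselves, and a sequent Γ ⊢ A provable intuitionistically exactly when !Γ° ⊢ A° is provable in linear logic. Different choices of where to place the exponentials give translations with very different proof-search behaviour, which is precisely what benchmarking can expose.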

14:55
Coherent interaction graphs: a nondeterministic geometry of interaction for MLL

ABSTRACT. We introduce the notion of coherent graphs, and show how those can be used to define dynamic semantics for Multiplicative Linear Logic. The models thus obtained are finite in several respects (finite graphs, finite generation of types) and thereby improve on previous constructions by Seiller. We also discuss how the added notion of coherence can be used to introduce non-determinism.

14:00-15:30 Session 40M (MSFP)
14:00
Ornamentation put into practice in ML

ABSTRACT. Ornaments have been introduced as a means to lift functions operating on some datatype into new functions operating on an ornamentation of this datatype, i.e., a variant of this datatype carrying additional information. A typical example consists in viewing lists as an ornamentation of natural numbers and then lifting addition into concatenation, with the user providing just the missing parts of the code. Ornaments were first introduced in Agda and have been extensively studied in a categorical setting.

We have been exploiting and adapting the idea of ornamentation in the context of ML. Here, we are interested in syntactic liftings where the lifted function is related to the original one as in the general setting but with a closer correspondence: on the one hand, the behavior (and ideally the computation steps) of the lifted function should coincide with the behavior of the original function, except for the additional user-provided code; on the other hand, the lifted code should be as close as possible to the code the user would have manually written.

Our approach is to first synthesize a generic version of the base program that can then be specialized to any ornamentation of the base version (including the base version as a particular case), and finally simplified. While the generic version of ML base code requires some form of dependent types to typecheck, we ensure by construction, using staged computation and dependent types, that each instantiation to concrete ornaments can be simplified back into ML code.

We use parametricity to show the correspondence between the base and lifted versions. We verified on (small) examples that the lifted code is often close to hand-written code.

This approach to ornamentation can be extended to perform disornamentation---with new design challenges and interesting applications. Ornamentation and disornamentation are just two examples of a more general type-based approach to code refactoring and evolution.
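
To fix ideas, the running example of lists as an ornament of natural numbers can be sketched in a few lines of Haskell (an illustration of the general idea only; the actual development targets ML and synthesizes the lifted code rather than asking the user to write it):

-- Natural numbers, and lists viewed as natural numbers whose successor
-- constructor is decorated with an element: the ornament.
data Nat    = Zero | Succ Nat
data List a = Nil  | Cons a (List a)

-- Forgetting the decoration recovers the underlying number (the length).
forget :: List a -> Nat
forget Nil         = Zero
forget (Cons _ xs) = Succ (forget xs)

-- The base function on the base datatype ...
add :: Nat -> Nat -> Nat
add Zero     n = n
add (Succ m) n = Succ (add m n)

-- ... and its lifting to the ornamented datatype: the recursion skeleton
-- is the same, and the only genuinely new code is the choice of which
-- element decorates each Cons.
append :: List a -> List a -> List a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)

-- Coherence of the lifting with the base function:
--   forget (append xs ys) == add (forget xs) (forget ys)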

15:00
Profunctor Optics and the Yoneda Lemma

ABSTRACT. Profunctor optics are a neat and composable representation of bidirectional data accessors, including lenses, and their dual, prisms. The profunctor representation exploits higher-order functions and higher-kinded type constructor classes. The relationship between the profunctor representation and the familiar representation in terms of "getter" and "setter" functions is not at all obvious. We derive the former from the latter, making the relationship clear. It turns out to be a fairly direct application of the Yoneda Lemma, arguably the most important result in category theory.
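
The distance between the two representations can be made precise in a few lines of Haskell; the definitions below are the standard ones, lightly simplified, and the derivation the paper carries out is the passage from lens (the getter/setter side) to the polymorphic Lens type (the profunctor side):

{-# LANGUAGE RankNTypes #-}

class Profunctor p where
  dimap :: (s -> a) -> (b -> t) -> p a b -> p s t

class Profunctor p => Strong p where
  first' :: p a b -> p (a, c) (b, c)

-- The profunctor representation of lenses.
type Lens s t a b = forall p. Strong p => p a b -> p s t

-- A getter/setter pair determines a profunctor lens.
lens :: (s -> a) -> (s -> b -> t) -> Lens s t a b
lens get set = dimap (\s -> (get s, s)) (\(b, s) -> set s b) . first'

-- Instantiating p with plain functions recovers the "modify" operation.
instance Profunctor (->) where
  dimap f g h = g . h . f

instance Strong (->) where
  first' f (a, c) = (f a, c)

over :: Lens s t a b -> (a -> b) -> s -> t
over l = l

-- Example: a lens onto the first component of a pair.
_1 :: Lens (a, c) (b, c) a b
_1 = lens fst (\(_, c) b -> (b, c))

-- over _1 (+1) (1, "x")  ==  (2, "x")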

14:00-15:30 Session 40N: Invited tutorial (second part) (PARIS)
Location: Maths L4
14:00
An Introduction to Cyclic Proofs

ABSTRACT. Cyclic proofs have recently been gaining in popularity, both as a proof-theoretic tool for studying logical systems, and as a paradigm for automatic proof search.  Over two hour-long talks at the PARIS workshop I hope to give an introduction to cyclic proofs for the uninitiated, to explore some of their applications, to outline their relationships to other techniques and, perhaps, to clear up some common misconceptions.

14:00-15:30 Session 40P (QBF)
Location: Maths L5
14:00
Preprocessing for (D)QBF
SPEAKER: Ralf Wimmer
14:20
Initial study on the effect of different QBF solvers and preprocessors on link-information sensitive QBF encodings for Abstract Dialectical Frameworks
SPEAKER: Uwe Egly
14:40
Tractability results for structured quantified CNF-formulas via knowledge compilation

ABSTRACT. We show how knowledge compilation can be used as a tool for QBF. More precisely, we show that one can apply quantification to certain data structures used in knowledge compilation; in combination with the fact that restricted classes of CNF-formulas can be compiled into these data structures, this can be used to show fixed-parameter tractability results for QBF.

15:00
A Grounder From Second-Order Logic To QBF

ABSTRACT. Recent solver research has developed powerful QBF solvers. Alas, we know of few tools that provide a modelling language on a higher level, translating this to QBF. This is surprising, as in the closely related field of SAT solvers, research has gone hand in hand with the development of such systems. This extended abstract on work in progress reports on a system that allows the use of second-order logic as a high-level modelling language and that grounds (translates) models written in such a language to a QBF formula. We provide an example encoding, outline the grounding process and propose further research and experiments.
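
To illustrate what grounding from second-order logic to QBF does, consider a toy independence-style property over a fixed finite graph (our own example, not the system's input syntax); each second-order atom S(v) becomes a propositional variable s_v and the first-order quantifiers are expanded over the finite domain:

\[
\exists S\; \forall x \forall y\, \big(E(x,y) \rightarrow \neg(S(x) \wedge S(y))\big)
\quad\leadsto\quad
\exists s_{v_1} \cdots \exists s_{v_n}\; \bigwedge_{(u,v) \in E} (\neg s_u \vee \neg s_v).
\]

With a single block of existential second-order quantifiers the result is a SAT instance; it is alternation of second-order quantifiers (for example \(\exists S\, \forall T\, \varphi\)) that produces genuinely quantified Boolean formulas.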

15:20
PrideMM: QBFEVAL Benchmarks (QBFEval18 Contribution)
SPEAKER: Radu Grigore
15:25
Chess Solving and Composing (QBFEval18 Contribution)
14:00-15:30 Session 40Q: Two-player games (SR)
14:00
Infinite-Duration Richman Bidding Games
SPEAKER: Guy Avni

ABSTRACT. Two-player games on graphs are widely studied in formal methods as they model the interaction between a system and its environment. The game is played by moving a token throughout a graph to produce an infinite path. There are several common modes to determine how the players move the token through the graph; e.g., in turn-based games the players alternate turns in moving the token. We study the bidding mode of moving the token, which, to the best of our knowledge, has never been studied in infinite-duration games. Both players have separate budgets, which sum up to $1$. In each turn, a bidding takes place. Both players submit bids simultaneously, and a bid is legal if it does not exceed the available budget. The winner of the bidding pays his bid to the other player and moves the token. Reachability bidding games, called Richman games, have been studied in [LLPU96, LLPSU99]. There, a central question is the existence and computation of threshold budgets; namely, a value $t \in [0,1]$ such that if Player 1's budget exceeds $t$, he can win the game, and if Player 2's budget exceeds $1-t$, he can win the game. We focus on parity and mean-payoff games. We show the existence of threshold budgets and show that the complexity of finding them coincides with the $NP \cap coNP$ complexity of reachability bidding games. The solution for mean-payoff consists of our most technically challenging contribution, where we construct optimal strategies for the players while extending and generalizing the probabilistic connection that was known for reachability bidding games.
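
For the reachability (Richman) case recalled above, the threshold budgets are known in the cited literature to admit a simple local characterization: writing t(v) for the threshold at vertex v, with (roughly) t = 0 at the vertex Player 1 tries to reach and t = 1 at vertices from which that vertex cannot be reached,

\[
t(v) \;=\; \tfrac{1}{2}\Big(\,\min_{u \in E(v)} t(u) \;+\; \max_{u \in E(v)} t(u)\Big),
\]

so the threshold at a vertex is the average of the best and worst thresholds among its successors. The work summarized above establishes and exploits analogous thresholds in the parity and mean-payoff settings.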

14:40
Solving Parity Games: Explicit vs Symbolic

ABSTRACT. In this paper we provide a broad investigation of the symbolic approach for solving Parity Games. Specifically, we implement in a fresh tool, called SymPGSolver, four symbolic algorithms to solve Parity Games and compare their performances to the corresponding explicit versions for different classes of games. By means of benchmarks, we show that for random games, even for constrained random games, explicit algorithms actually perform better than symbolic algorithms. The situation changes, however, for structured games, where symbolic algorithms seem to have the advantage. This suggests that when evaluating algorithms for parity-game solving, it would be useful to have real benchmarks and not only random benchmarks, as the common practice has been.

14:00-15:30 Session 40R (WPTE)
14:00
Reducing Total Correctness to Partial Correctness by a Transformation of the Language Semantics

ABSTRACT. We give a language-parametric solution to the problem of total correctness, by automatically reducing it to the problem of partial correctness, under the assumption that a variant that decreases with each program step in a well-founded order is provided. Our approach assumes that the programming language semantics is given as a rewrite theory. We implement a prototype on top of the RMT tool and we show that it works in practice on a number of examples.

14:30
On Transforming Narrowing Trees into Regular Tree Grammars Generating Ranges of Substitutions
SPEAKER: Naoki Nishida

ABSTRACT. The grammar representation of a narrowing tree for a syntactically deterministic conditional term rewriting system and a pair of terms is a regular tree grammar that generates expressions for substitutions obtained by all possible innermost-narrowing derivations that start with the pair and end with non-narrowable terms. In this paper, under a certain syntactic condition, we show a transformation of the grammar representation of a narrowing tree into another regular tree grammar that generates the ranges of substitutions generated by the grammar representation. In our previous work, such a transformation is restricted to the ranges w.r.t. a given single variable, and thus, the usefulness is limited. We extend the previous transformation by representing the range of a substitution as a tuple of terms, which is obtained by the coding for finite trees.

15:00
Verification of Ada Programs with AdaHorn

ABSTRACT. We propose AdaHorn, a model checker for verification of Ada programs wrt. correctness properties. AdaHorn translates Ada programs together with their related correctness properties into a set of Constrained Horn Clauses which are solved by well-known SMT solvers such as Eldarica and PDR-Z3. We propose a set of Ada programs inspired by C programs from the competition SV-COMP, and use them to compare the effectiveness of AdaHorn and GNATProve, a well-known static analyzer for Ada programs. Our experiments show that AdaHorn outperforms GNATProve by yielding correct results in more cases than GNATProve.
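
For readers unfamiliar with the encoding, a loop with an assertion is translated into constrained Horn clauses of roughly the following shape (a generic sketch of the CHC approach, not AdaHorn's actual output). For the program "x := 0; i := 0; while i < n loop x := x + i; i := i + 1; end loop; pragma Assert (x >= 0);" one obtains:

\begin{align*}
x = 0 \wedge i = 0 &\rightarrow \mathit{Inv}(x, i, n)\\
\mathit{Inv}(x, i, n) \wedge i < n \wedge x' = x + i \wedge i' = i + 1 &\rightarrow \mathit{Inv}(x', i', n)\\
\mathit{Inv}(x, i, n) \wedge i \ge n \wedge x < 0 &\rightarrow \mathit{false}
\end{align*}

A Horn-clause solver then searches for an interpretation of Inv (here, for instance, x ≥ 0 ∧ i ≥ 0) that makes all three clauses valid, which constitutes a proof of the assertion.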

14:00-15:30 Session 40S (WiL)
Location: Blavatnik LT2
14:00
POPLMark Reloaded: Mechanizing Logical Relations Proofs

ABSTRACT. Mechanizing formal systems, given via axioms and inference rules, together with proofs about them plays an important role in establishing trust in formal developments. Over the past decade, the POPLMark challenge popularized the use of proof assistants in mechanizing the metatheory of programming languages. Focusing on the meta-theory of System F with subtyping, it allowed the programming languages community to survey existing techniques to represent and reason about syntactic structures with binders and promote the use of proof assistants. Today, mechanizing proofs is a stable fixture in the daily life of programming languages researchers.

As a follow-up to the POPLMark Challenge, we propose a new collection of benchmarks that use proofs by logical relations. Such proofs are now used to attack problems in the theory of complex language models, with applications to issues in equivalence of programs, compiler correctness, representation independence and even more intensional properties such as non-interference, differential privacy and secure multi-language inter-operability. Yet, they remain challenging to mechanize. In this talk, we focus on one particular challenge problem, namely strong normalization of a simply-typed lambda-calculus with a proof by Kripke-style logical relations. We will advocate a modern view of this well-understood problem by formulating our logical relation on well-typed terms. Using this case study, we share some of the lessons learned tackling this challenge problem in Beluga, a proof environment that supports higher-order abstract syntax encodings, first-class contexts and first-class substitutions. We also discuss and highlight similarities, strategies, and limitations in other proof assistants when tackling this challenge problem. We hope others will be motivated to submit solutions! The goal of this talk is to engage the community in discussions on what support in proof environments is needed to truly bring mechanized metatheory to the masses.

This is joint work with Andreas Abel (Chalmers), Aliya Hameer (McGill), Alberto Momigliano (Univ. of Milan), Steven Schäfer (Univ. Saarbrücken), Kathrin Stark (Univ. Saarbrücken)
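
For reference, the logical relation at the heart of this challenge problem is the usual reducibility-style one; in one common formulation (ignoring the Kripke-style worlds that the benchmark adds), it is defined by induction on types:

\begin{align*}
\mathcal{R}_{\iota} &= \{\, t \mid t \text{ is strongly normalizing} \,\}\\
\mathcal{R}_{A \to B} &= \{\, t \mid \forall s \in \mathcal{R}_{A}.\; t\, s \in \mathcal{R}_{B} \,\}
\end{align*}

The proof then shows that every well-typed term inhabits the relation at its type and that every term in the relation is strongly normalizing; mechanizing this argument exercises exactly the binder, context and substitution infrastructure the benchmark is designed to probe.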

15:00
Towards a Playground for Logicians

ABSTRACT. Proving properties about proof systems is a necessary task to make sure that they are sound and satisfy basic criteria for the application being considered. Nevertheless, it is a tedious and error-prone task. In this talk, I will present on-going work on the development of a trustworthy system that helps with the verification of proof calculi's properties. This system allows for a natural specification of sequent-style calculi and visualization of rules in LaTeX. The user is also able to construct simple proof trees. Moreover, it can be used to check identity expansion, cut-elimination and permutation lemmas with the click of a button. Our goal is to free researchers from having to write down big and repetitive proofs, and move towards the use of mechanized proofs when meta-theory of proof systems is concerned.

15:15
Non-well-founded proof system for Transitive Closure Logic
SPEAKER: Liron Cohen

ABSTRACT. Transitive closure logic is obtained by a modest addition to first-order logic that affords enormous expressive power. Most importantly, it provides a uniform way of capturing finitary inductive definitions. Thus, particular induction principles do not need to be added to the logic; instead, all induction schemes are available within a single, unified language. We here present a non-well-founded proof system for transitive closure logic which is complete for the standard semantics. This system captures implicit induction, and its soundness is underpinned by the principle of infinite descent. We also consider its subsystem of cyclic proofs.

14:30-15:30 Session 41 (PC)
Location: Maths L6
14:30
Complexity of expander-based reasoning and the power of monotone proofs

ABSTRACT. One of the main successes in circuit complexity is the strong lower bounds on the complexity of monotone circuits. By analogy, one might expect that studying monotone reasoning would lead to similar lower bounds in proof complexity. Yet surprisingly, Atserias, Galesi and Pudlak have given a general quasipolynomial simulation of the sequent calculus LK by its monotone fragment MLK. Moreover, their techniques give a polynomial simulation, provided properties of AKS sorting networks can be formalized inside LK. Such a formalization was obtained in 2011 by Jerabek, assuming the existence of expander graphs is provable in LK.

Several major results in complexity theory such as undirected graph reachability in logspace (Reingold, Rozenman-Vadhan) and monotone formulas for sorting (Ajtai-Komlos-Szemeredi sorting networks) are based on properties of expander graphs. But what is the complexity of such proofs? Many of the existing expander constructions rely on computationally non-trivial algebraic concepts for the analysis, such as the spectral gap, even when the constructions themselves are combinatorial.

In this work, we show that existence of expanders of arbitrary size can be proven using NC^1 reasoning. We give a fully combinatorial analysis of an iterative construction of expanders using replacement product, powering and tensoring, and formalize this analysis in the bounded arithmetic system VNC^1. Combined with Atserias, Galesi, Pudlak'2002 and Jerabek'2011, this completes the proof that monotone LK is as powerful as LK for proving monotone sequents.

Joint work with Sam Buss, Valentine Kabanets and Michal Koucky.

15:00
Towards theories for positive polynomial time and monotone proofs with extension

ABSTRACT. We propose arithmetic theories that link systems in monotone proof complexity to classes in monotone computational complexity. In particular, we focus on the case of polynomial time, for which monotone function classes and monotone proof systems have recently been proposed. We complete the proof-complexity-theoretic account of this subject by proposing an accompanying logical theory, in the usual sense of 'bounded arithmetic'.

15:30-16:00 Coffee Break
16:00-18:00 Session 42A (COALG)
16:00
Martingale-Based Methods for Reachability Probabilities: Excitements and Afterthoughts about Coalgebras
16:45
Coalgebra Dreams
16:00-18:00 Session 42B (Coq)
Location: Maths LT2
16:00
ComplCoq: Rewrite Hint Construction with Completion Procedures

ABSTRACT. We present a Coq plugin for performing completion procedures on rewriting rules given by users. The result is added in a hint rewrite database which works with an automated rewriting tactic, so it enables us to write proof scripts more simply.

16:30
Towards a formalization of the guard condition

ABSTRACT. We present a translation of guarded recursive functions in Coq to well-founded recursive functions using only case analysis eliminators and the eliminator of the accessibility predicate. This work-in-progress is a possible first step towards a formalization of Coq's guard condition. We also present an idea to extend the guarded recursion to handle inductive-inductive definitions.

17:00
Teaching Your Rooster to Crow in C

ABSTRACT. Coq's notation system is both extremely powerful and confusingly ad-hoc. While powerful enough to pretty-print abstract syntax trees in most domain-specific languages, how to do so does not seem to be common knowledge. Typical questions arising from such an endeavor might include "How do I pick notation levels?", "Why are these notations clashing?", "Which things should be marked as symbols?", "How do I use boxes in `format`?", and "How do I get parentheses to show up (only) where I want them to?" This interactive presentation aims to serve as a guide to these questions and more, by demonstrating and explaining how to pretty-print subsets of C using only Coq's `Notation` mechanism.

17:30
Proof Construction by Tactic Learning

ABSTRACT. We present some early work being done to utilize Artificial Intelligence for proof search in the Coq theorem prover. In a similar vein to the TacticToe project for HOL4, we are working on a system that finds proofs of goals on the tactic level, by learning from previous tactic scripts. Learning on the level of tactics has several advantages over more low-level approaches. First, this allows for much coarser proof steps, meaning that during proof search more complicated proofs can be found. Second, it allows for the usage of custom-built, domain-specific tactics that were previously defined and used in the development. This will allow for better performance of the system in very specialized domains. The rest of this abstract describes the required components of our system. Since a number of technical issues need to be addressed, we hope to solicit feedback from the Coq developers at the workshop.

16:00-16:30 Session 42C (DCM)
16:00
Generic Graph Semantics

ABSTRACT. The goal of these ongoing investigations is a more general, more flexible, and more precise generic language for describing algorithms than ordinary sequential abstract state machines. More general, since it will incorporate greater nondeterminism. More flexible, because it allows sequences of assignments in a single step. More precise, on account of the exact control over the order in which values are accessed and the number of times each value is accessed.

16:00-17:15 Session 42D (Domains13)
Location: Maths LT1
16:00
Frames and frame relations

ABSTRACT. We follow two threads in Dana Scott's mathematics to study frames in a different light. First, injectivity is an important idea. Second, relational reasoning can get at functional behavior (via, for example, approximable maps). These will permit us to situate frames in larger ambient categories in which constructions arise from the combination of injectivity and relational reasoning. In particular, the assembly of a frame comes about as being isomorphic to a sublocale \(\mathcal{Q}(L)\) of the frame of endo-relations on a given frame. We prove this by showing directly that \(\mathcal{Q}(L)\) has the universal property of the assembly.

From independent discoveries (Bruns and Lakser \cite{BL70}; Horn and Kimura \cite{HK71}), we know that frames are precisely the injective (meet) semilattices. Simply knowing this does not get us very far in studying frames \emph{qua} frames. But the full subcategory of \(\textsf{SL}\) consisting of injective semilattices comes closer. A semilattice map \(h\colon M\to L\) between two frames can be viewed ``approximably'' as a relation \(R_h\subseteq L\times M\) defined by \(x\leq h(y)\). Such a relation is closed under weakening: \(x\leq x'\), \(x' \mathrel{R_h}y'\) and \(y'\leq y\) implies \(x\mathrel{R_h}y\). It is a subframe of \(L\times M\). Any such relation, called a \emph{frame relation}, determines a semilattice homomorphism. So the category \(\overline{\textsf{Frm}}\) of frames and frame relations is opposite to the full subcategory of \(\textsf{SL}\) consisting of injective semilattices.

In \(\overline{\textsf{Frm}}\), an adjoint pair of morphisms, i.e., an adjoint pair of frame relations corresponds to an adjoint pair of semilattice homomorphisms. So by adjunction, the lower adjoint determines a semilattice homomorphism preserving all joins. Thus the category \(\textsf{Map}(\textsf{Frm})\), consisting of frames and adjoint pairs of frame relations \((\check f\dashv \hat f)\), is isomorphic to the category \(\textsf{Frm}\).

A subobject (a sublocale) is then an isomorphism class of extremal epimorphisms in this category. One can check that a sublocale of a frame \(L\) corresponds exactly to an idempotent, reflexive frame relation.

What we have said so far is mostly known, at least in folklore. Nevertheless, this points toward studying frames directly via frame relations, as they are (dual to) the natural morphisms of injective semilattices. To illustrate the point, we construct the assembly of a frame via frame relations, showing that its familiar universal property comes about naturally from this approach.

Between any two posets \(P\) and \(Q\), the weakening relations from \(P\) to \(Q\) obviously form a completely distributive lattice, hence a frame. If \(P\) is a meet semilattice and \(Q\) is a join semilattice, then the Heyting structure is especially easy to define; \(x\mathrel{(\Phi\to\Psi)}y\) if and only if for all \(w\) and \(z\) \(w\mathrel{\Phi}z\) implies \(w\wedge x\mathrel{\Psi} y\vee z\). One uses this to show that if \(L\) and \(M\) are frames, then the collection of all frame relations \(\overline{\textsf{Frm}}(L,M)\) is in fact a sublocale of the frame of all weakening relations. But composition of frame relations preserves only finite meets, and not arbitrary joins. So it is natural to consider the homsets as injective semilattices and not as frames, per se. Though we continue to call them frames, this suggests that we are actually studying the injective semilattice enriched category of injective semilattices.

For a frame \(L\), the frame \(\overline{\textsf{Frm}}(L,L)\) includes the reflexive, idempotent frame relations. These, as we have noted, correspond to sublocales of \(L\). Clearly, they are closed under arbitrary intersections. And a simple calculation using the Heyting arrow shows that they constitute a sublocale of \(\overline{\textsf{Frm}}(L,L)\), denoted by \(\mathcal{Q}(L)\). Moreover, \(L\) embeds in \(\mathcal{Q}(L)\) via the map sending \(a\in L\) to the frame relation \(x\mathrel{\Gamma_a}y\) if and only if \(x\leq y\wedge a\). Note that \(\Gamma_a\) corresponds to the closed sublocale of \(a\). The relation corresponding to the open sublocale is \(x\mathrel{\Upsilon_a}y\) if and only if \(a\vee x\leq y\).

Any adjoint pair \(\check f\dashv \hat f\) of frame relations between \(L\) and \(M\) extends by composition with the embedding of \(L\) in \(\mathcal{Q}(L)\) to a pair of frame relations \(f^*,f_*\) between \(\mathcal{Q}(L)\) and \(M\). These \emph{extend} \(\check f\) and \(\hat f\) in a natural way, and furthermore \(f^*;f_* \leq 1_M\). It is natural to ask when these are in fact adjoint.

The frame \(M\) has a \emph{center} relation: \(a\prec^*_M b\) if and only if there is some complemented \(w\) so that \(a\leq w\leq b\). Now if \(f=(\check f\dashv \hat f)\) is an adjoint pair of frame relations so that \(\check f;\hat f\subseteq \mathord{\prec^*_M}\), then \(f^*\dashv f_*\) is the unique adjoint pair extending \(f\) along \(\Gamma\). Conversely, using the injectivity of \(M\) and \(\mathcal{Q}(L)\), if \(f^*\) is adjoint to \(f_*\), then \(\check f;\hat f\subseteq \mathord{\prec^*_M}\). Hence, \(\mathcal{Q}(L)\) has the universal property that any adjoint pair of frame relations from \(L\) to \(M\) that sends \(L\) to the center of \(M\) factors uniquely through \(\Gamma\).

\begin{references}{99}

\bibitem{BL70} Bruns, G. and Lakser, H. Injective hulls of semilattices. \emph{Canadian Mathematics Bulletin}, \textbf{13} (1970), 115--118. \bibitem{HK71} Horn, A. and Kimura, N. The category of semilattices. \emph{Algebra Universalis}, \textbf{1} (1971), 26--38.

\end{references}

16:25
Extending Stone Duality to Relations

ABSTRACT. The importance of, on the one hand, categories of relations and, on the other hand, Stone Duality does not seem to be in need of much justification in a workshop on domain theory. The purpose of this presentation is to explain how both can be combined: in other words, given a concrete duality in which arrows are structure-preserving functions, we extend it to a duality of the corresponding categories of relations (Thm 1). We will use this approach to give a reconstruction of domain logics for non-zero dimensional spaces (Thm 2), based on Scott's idea of moving from algebraic to continuous lattices by splitting certain idempotents.

16:50
Applications of entailments: de Groot duality

ABSTRACT. We study interaction between de Groot duality and the semantic domains of various choices (i.e. powerlocale constructions) on stably compact locales. We give a simple point-free account of the duality on several powerlocale constructions. This is done via the representation of a stably compact locale by means of Scott’s entailment relation with an idempotent relation due to Coquand and Zhang [Theoret. Comput. Sci., 305(1-3):77–84, 2003].

16:00-18:00 Session 42E (GS)
Location: Maths LT3
16:00
Continuous probability distributions in concurrent games
SPEAKER: Hugo Paquet

ABSTRACT. Probabilistic strategies were first introduced by Danos and Harmer in a probabilistic extension of Hyland-Ong games. They work particularly well for modelling probabilistic programs with state: the model is fully abstract for a probabilistic variant of Idealised Algol. Recently, concurrent games were introduced as an alternative framework for game semantics, based on event structures, a fundamental model for concurrent processes. Concurrent games can also be enriched with probability, using a notion of probabilistic event structure, and this makes possible an analysis of Probabilistic PCF (including an intensional full abstraction result).

However all the above do not readily support continuous distributions, making those models unsatisfactory for practical probabilistic languages, in which such distributions are essential. Vakar and Ong have recently announced a generalisation of the Danos-Harmer model, which supports continuous distributions. They also describe an application to a stateful probabilistic language. Here we present a new probabilistic concurrent games model in which strategies are equipped to support arbitrary distributions on the real numbers. We rely on methods of measure theory and introduce measurable event structures, which generalise event structures and form the basis for a model of measurable concurrent games. Then one may adjoin probability in the form of a family of stochastic kernels. We illustrate the model by giving semantics to a higher-order, affine version of Probabilistic PCF with continuous distributions on the reals.

16:30
A definable game semantics for the linear quantum lambda-calculus
SPEAKER: Marc De Visme

ABSTRACT. In previous work, presented at GaLoP 2018, we defined an adequate game semantics for the linear quantum lambda calculus. To do this we drew inspiration from two lines of work. On the one hand, our account of the dynamics of computation came from the concurrent games on event structures of Winskel et al., and their extension with probabilistic annotations (which we extend conservatively). On the other hand, our formulation of quantum states was inspired by the work of Pagani, Selinger and Valiron on static denotational semantics for the quantum λ-calculus. For games, we used event structures annotated by Hilbert spaces, and we annotated the strategies by operators on those Hilbert spaces.

In this work, we present a notion of (sequential) quantum innocence, leading to a definability result with respect to the linear quantum lambda-calculus. Quantum innocence imposes a compatibility between quantum annotations and the causal structure of strate- gies, ensuring that they can be reproduced by terms. We obtain a fully abstract model for the linear quantum λ-calculus. This is not the first fully abstract model in the linear case, see [8] by Selinger and Valiron. However, there is not yet any fully abstract model for the non linear case. Following the work on event structures with symmetry in by Castellan , Clairambault, Paquet and Winskel for probabilistic PCF, we expect to be able to extend it into a fully abstract model for the (full) quantum λ-calculus.

Interestingly, the work on definability lead us to a more abstract definition of quantum games and strategies, relying only weakly on the properties of quantum annotations. Though this is work in progress, it seems to lead to a fully abstract games model for a linear λ-calculus parametrised by a symmetric monoidal category.

17:00
Resource-Tracking Concurrent Games

ABSTRACT. In the study of programming languages, denotational semantics is used to abstract away from (some) intermediate steps of computation, in particular by hiding/internally handling the process of composition of λ-calculi. Although these abstractions are very useful to help reasoning about program behaviours such as program termination, they are often too strong to be able to reflect some quantitative information, or meta-data, such as permissions, time, or energy consumption, that can be of interest for program analysis and optimisation.

In this work we present a framework of annotated concurrent games based on event structures that generally allows for keeping track of meta data, that is, information whose value may vary with the execution flow but cannot modify it. More precisely, for a given equational theory, we build a compact closed category of concurrent games and strategies in which Player moves come equipped with expressions that may only depend on variables representing Opponent moves from their causal history. These expressions may indicate resources required to reach these events, for instance in terms of permissions or time. The free variables corresponding to Opponent moves in these expressions reflect the fact that resource usage may depend on what Opponent already requires; they are instantiated while performing the composition of the strategy with its environment.

In a second step, we use this new framework to give a denotational semantics of an affine IPA-style language with explicit time consumption (or any resource that fits in a concurrent monoid (R, ⋆, ;, 0, ≤) with distributive law a; (b ⋆ c) = (a; b) ⋆ (a; c)) in the spirit of Ghica’s slot games. We show that this interpretation is sound with respect to the usual interleaving-based operational semantics, and (unlike slot games) with respect to its extension allowing non-interfering parallel computation. Even with this extension, however, adequacy does not hold: in essence, our semantics predicts resource usage for the most parallel execution possible, with no scheduling or allocation constraints.

17:30
Automata Theory and Game Semantics

ABSTRACT. I will give a survey of various classes of automata that have been used to capture the game semantics of higher-­order programs and, consequently, obtain decidability results for contextual equivalence.

16:00-18:00 Session 42F: Technical Papers and Closing Remarks (GraMSec)
16:00
The Attacker Does not Always Hold the Initiative: Attack Trees with External Refinement
SPEAKER: Ross Horne

ABSTRACT. Attack trees provide a structure to an attack scenario, where disjunctions represent choices decomposing attacker's goals into smaller subgoals. This paper investigates the nature of choices in attack trees. For some choices, the attacker has the initiative, but for other choices either the environment or an active defender decides. A semantics for attack trees combining both types of choice is expressed in linear logic and connections with extensive-form games are highlighted. The linear logic semantics defines a specialisation preorder enabling trees, not necessarily equal, to be compared in such a way that all strategies are preserved.

16:45
On Linear Logic, Functional Programming, and Attack Trees

ABSTRACT. This paper has two main contributions. The first is a new linear logical semantics of causal attack trees in four-valued truth tables. Our semantics is very simple and expressive, supporting specializations, and combines in an interesting way the ideal and filter semantics of causal attack trees. Our second contribution is Lina, a new embedded (in Haskell) domain-specific functional programming language for conducting threat analysis using attack trees. Lina has many benefits over existing tools; for example, Lina allows one to specify attack trees very abstractly, which provides the ability to develop libraries of attack trees. Furthermore, Lina is compositional, allowing one to break down complex attack trees into smaller ones that can be reasoned about and analyzed incrementally. Lina also supports automatically proving properties of attack trees, such as equivalences and specializations, using Maude and the semantics introduced in this paper.
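
To give a flavour of what an attack-tree DSL embedded in Haskell can look like, here is a minimal sketch of attack trees with a parameterized interpretation; the datatype and interface below are illustrative only and are not Lina's actual API.

-- Attack trees over base actions of type a, with OR and AND branching.
data AttackTree a
  = Base a
  | Or  (AttackTree a) (AttackTree a)
  | And (AttackTree a) (AttackTree a)
  deriving Show

-- Interpret a tree by saying how to combine OR branches, how to combine
-- AND branches, and how to evaluate a base action.
interpret :: (b -> b -> b)   -- combine OR branches
          -> (b -> b -> b)   -- combine AND branches
          -> (a -> b)        -- evaluate a base action
          -> AttackTree a -> b
interpret orOp andOp val = go
  where
    go (Base a)  = val a
    go (Or  l r) = orOp  (go l) (go r)
    go (And l r) = andOp (go l) (go r)

-- One instantiation: cheapest attack, given a cost for each base action.
cheapest :: (a -> Double) -> AttackTree a -> Double
cheapest cost = interpret min (+) cost

example :: AttackTree String
example =
  Or (Base "bribe insider")
     (And (Base "steal badge") (Base "pick lock"))

-- cheapest (const 10) example  ==  10.0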

17:30
Closing remarks and discussions
16:00-18:00 Session 42G (HoTT/UF)
Location: Blavatnik LT1
16:00
(Truncated) Simplicial Models of Type Theory
16:30
Towards the syntax and semantics of higher dimensional type theory
16:00-16:30 Session 42I (ITRS)
16:00
Intersection Subtyping with Constructors

ABSTRACT. We study the question of extending the BCD intersection type system with additional type constructors. On the typing side, we focus on adding the usual rules for product types. On the subtyping side, we consider a generic way of defining a subtyping relation on families of types which include intersection types. We recover the BCD subtyping relation by considering the particular case where the type constructors are intersection, omega and arrow. We obtain an extension of BCD subtyping to product types as another instance. We show how the preservation of typing by both reduction and expansion is satisfied in all the considered cases. Our approach benefits from a ``sub-formula property'' of the proposed presentation of the subtyping relation.
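
For reference, the core of the BCD subtyping relation that the abstract generalizes consists of the following standard axioms and rules:

\begin{gather*}
\sigma \le \sigma \qquad
\sigma \le \omega \qquad
\omega \le \omega \to \omega \qquad
\sigma \cap \tau \le \sigma \qquad
\sigma \cap \tau \le \tau \qquad
(\sigma \to \tau) \cap (\sigma \to \rho) \le \sigma \to (\tau \cap \rho)\\[2pt]
\frac{\sigma \le \tau \quad \tau \le \rho}{\sigma \le \rho} \qquad
\frac{\sigma \le \tau \quad \sigma \le \rho}{\sigma \le \tau \cap \rho} \qquad
\frac{\sigma' \le \sigma \quad \tau \le \tau'}{\sigma \to \tau \le \sigma' \to \tau'}
\end{gather*}

The generic presentation in the talk recovers exactly this relation when the available constructors are intersection, omega and arrow, and yields its product-type extension as another instance.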

16:00-18:00 Session 42J (LMW)
Location: Maths Boardroom
16:00
Panel Discussion

ABSTRACT. Nick Benton, Marco Gaboardi, Cynthia Kop, Alexandra Silva will participate in the discussion.

16:00-18:00 Session 42K (Linearity/TLLA)
16:00
A shared memory semantics for session types

ABSTRACT. [joint work with Klaas Pruiksma]

Session types were designed to express communication behavior between concurrent processes. They were later discovered to correspond to propositions in linear logic, where sequent proofs describe processes, and cut reduction is the basis for message-passing computation. In this talk we provide a new operational interpretation of the linear sequent calculus in terms of shared memory, with some surprising twists when compared to the message-passing semantics.

17:00
A semantic conjecture on second-order MLL and its complexity consequences (work in progress)

ABSTRACT. Semantic evaluation has proven to be a valuable tool to prove computational complexity results on monomorphic type systems. Can we apply it in presence of impredicative polymorphism? The linearity and stratification at work in light logics might make it possible.

17:20
Formalization of Automated Trading Systems in a Concurrent Linear Framework
SPEAKER: Sharjeel Khan

ABSTRACT. We present a declarative and modular specification of an automated trading system (ATS) in the concurrent linear framework CLF. We implemented it in Celf, a CLF type checker which also supports executing CLF specifications. We outline the verification of a representative property of trading systems using generative grammars, an approach to reasoning about CLF specifications.

16:00-17:30 Session 42L (MSFP)
16:00
Backward induction for repeated games

ABSTRACT. We present a method of backward induction for computing approximate subgame perfect Nash equilibria of infinitely repeated games with discounted payoffs. It uses the selection monad transformer, combined with the searchable set monad viewed as a notion of 'topologically compact' nondeterminism, and a simple model of computable real numbers. This is the first application of Escardó and Oliva's theory of higher-order sequential games to games of imperfect information, in which, besides its mathematical elegance, lazy evaluation does nontrivial work for us compared with a traditional game-theoretic analysis. Since a full theoretical understanding of this method is lacking (and appears to be very hard), we consider this an 'experimental' paper heavily inspired by theoretical ideas. We use the famous Iterated Prisoner's Dilemma as a worked example.
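
As a hint of the machinery involved, the selection monad at the heart of Escardó and Oliva's approach can be written down in a few lines of Haskell. The sketch below is illustrative only (the paper uses the monad transformer version together with the searchable set monad); Sel, argmax, and the example game are names chosen here, not the paper's code.

  import Data.List (maximumBy)
  import Data.Ord  (comparing)
  import Control.Monad (ap)

  -- The selection monad: a 'selection function' picks an element,
  -- judged against a continuation that scores candidates.
  newtype Sel r a = Sel { runSel :: (a -> r) -> a }

  instance Functor (Sel r) where
    fmap f (Sel e) = Sel (\k -> f (e (k . f)))

  instance Applicative (Sel r) where
    pure x = Sel (const x)
    (<*>)  = ap

  instance Monad (Sel r) where
    Sel e >>= f = Sel (\k ->
      let choose x = runSel (f x) k
      in  choose (e (k . choose)))

  -- A selection function over a (non-empty) finite move set,
  -- maximising the score assigned by the continuation.
  argmax :: Ord r => [a] -> Sel r a
  argmax moves = Sel (\k -> maximumBy (comparing k) moves)

  -- Sequencing selections plays out a finite sequential game; e.g. two
  -- binary moves maximising their sum:
  --   runSel (sequence [argmax [0,1], argmax [0,1]]) sum == [1,1]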

16:45
Everybody's Got To Be Somewhere

ABSTRACT. This literate Agda paper gives a nameless co-de-Bruijn representation of generic (meta)syntax with binding. It owes much to the work of Sato et al. on representing variable binding by mapping variable use sites. The key to any nameless representation of syntax is how it indicates the variables we choose to use and thus, implicitly, those we neglect. The business of selecting is what we revisit here with care, ensuring that all free variables are used. The definition leads to a new, structurally recursive construction of simultaneous hereditary substitution, given generically for a universe of syntaxes with metavariables.

16:00-17:30 Session 42M: Contributed talks (PARIS)
Location: Maths L4
16:00
Transitive Closure Logic: Infinitary and Cyclic Proof Systems
SPEAKER: Reuben Rowe

ABSTRACT. Transitive closure logic is a known extension of first-order logic obtained by introducing a transitive closure operator. While other extensions of first-order logic with inductive definitions are a priori parametrized by a set of inductive definitions, the addition of the transitive closure operator uniformly captures all finitary inductive definitions. We present an infinitary proof system for transitive closure logic which is an infinite descent-style counterpart to the existing (explicit induction) proof system for the logic. We show that, as for similar systems for first-order logic with inductive definitions, our infinitary system is complete for the standard semantics and subsumes the explicit system. Moreover, the uniformity of the transitive closure operator allows semantically meaningful complete restrictions to be defined using simple syntactic criteria.
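
For orientation, the transitive closure operator has the following standard semantics (generic notation, which may differ slightly from the talk's):

  \[
    (TC_{x,y}\,\varphi)(s,t) \;\Longleftrightarrow\;
    \exists n \geq 1\;\exists a_0,\dots,a_n\,
      \big(a_0 = s \;\wedge\; a_n = t \;\wedge\; \textstyle\bigwedge_{i<n} \varphi(a_i,a_{i+1})\big),
  \]

so that, for example, $(TC_{x,y}\,E(x,y))(s,t)$ expresses reachability along the edge relation $E$ of a graph.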

16:45
On the Logical Complexity of Cyclic Arithmetic

ABSTRACT. We study the logical complexity of proofs in cyclic arithmetic (CA), as introduced in Simpson '17, in terms of the quantifier alternations of the formulae occurring in them. Writing CΣ_n for (the logical consequences of) cyclic proofs containing only Σ_n formulae, our main result is that IΣ_{n+1} and CΣ_n prove the same Π_{n+1} theorems, for n > 0. Furthermore, due to the 'uniformity' of our method, we also show that CA and PA proofs differ only elementarily in size.

The inclusion IΣ_{n+1} ⊆ CΣ_n is obtained by proof theoretic techniques, relying on normal forms and structural manipulations of Peano arithmetic (PA) proofs. It improves upon the natural result that IΣ_n ⊆ CΣ_n.

The inclusion CΣ_n ⊆ IΣ_{n+1} is obtained by calibrating the approach of Simpson '17 with recent results on the reverse mathematics of Büchi’s theorem (Kolodziejczyk, Michalewski, Pradic & Skrzypczak '16), and specialising to the case of cyclic proofs.

These results improve upon the bounds on proof complexity and logical complexity implicit in Simpson '17 and Berardi & Tatsuta '17.

This talk will be based on the following work: http://www.anupamdas.com/wp/log-comp-cyc-arith/

16:00-17:50 Session 42N (PC)
Location: Maths L6
16:00
A Comparison of Algebraic and Semi-Algebraic Proof Systems
17:00
Optimization over the Boolean Hypercube via Sums of Nonnegative Circuit Polynomials
SPEAKER: Adam Kurpisz

ABSTRACT. Various key problems from theoretical computer science can be expressed as polynomial optimization problems over the boolean hypercube. One particularly successful way to prove complexity bounds for these types of problems is based on sums of squares (SOS) as nonnegativity certificates. In this article, we initiate the study of optimization problems over the boolean hypercube via a recent, alternative certificate called sums of nonnegative circuit polynomials (SONC). We show that key results for SOS-based certificates remain valid: first, for polynomials which are nonnegative over the $n$-variate boolean hypercube with constraints of degree $d$, there exists a SONC certificate of degree at most $n+d$. Second, if there exists a degree-$d$ SONC certificate for nonnegativity of a polynomial over the boolean hypercube, then there also exists a short degree-$d$ SONC certificate that includes at most $n^{O(d)}$ nonnegative circuit polynomials. Finally, we show certain differences between the SOS and SONC cones: namely, we prove that, in contrast to SOS, the SONC cone is not closed under affine transformations of variables, and that for SONC there is no equivalent of Putinar's Positivstellensatz. We discuss these results from both an algebraic and an optimization perspective.

17:30
On Dual-Rail Based MaxSAT Solving

ABSTRACT. This work overviews recent results on dual-rail based MaxSAT solving, including polynomial upper bounds on the refutation of PHP and 2PHP formulae with core-guided MaxSAT solvers and MaxSAT resolution, as well as their relative efficiency compared to general resolution and cutting planes.
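
Roughly, the dual-rail encoding splits each propositional variable into one variable per polarity; the sketch below shows only this splitting step and is not a complete description of the MaxSAT reduction studied in the work. For each variable $x$ of the input CNF, introduce fresh variables $p_x$ ("$x$ is true") and $n_x$ ("$x$ is false"); replace every positive literal $x$ by $p_x$ and every negative literal $\neg x$ by $n_x$, and add the hard clause

  \[
    (\neg p_x \vee \neg n_x)
  \]

forbidding both rails from being set simultaneously. The MaxSAT objective is then imposed on top of this encoding via suitable soft clauses over the $p_x$ and $n_x$ variables.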

16:00-18:00 Session 42P (QBF)
Location: Maths L5
16:00
Understanding and Extending Incremental Determinization for 2QBF
SPEAKER: Markus Rabe
16:20
Portfolio-Based Algorithm Selection for Circuit QBFs
SPEAKER: Tomáš Peitl

ABSTRACT. Quantified Boolean Formulas (QBFs) are a generalization of propositional formulae that admits succinct encodings of verification and synthesis problems. Given that modern QBF solvers are based on different architectures with complementary performance characteristics, a portfolio-based approach to QBF solving is particularly promising.

We define a natural set of features of circuit QBFs and show that they can be used to construct portfolio-based algorithm selectors of state-of-the-art circuit QBF solvers that are close to the virtual best solver. We further demonstrate that most of this performance can be achieved using surprisingly small subsets of cheaply computable and intuitive features.

16:40
DepQBF (QBFEval'18 Contribution)
16:45
RAReQS, QFUN, QESTO (QBFEval'18 Contribution)
16:50
Qute (QBFEval'18 Contribution)
16:55
QBFEval 2018
17:20
Discussion
SPEAKER: Hubie Chen
16:00-18:00 Session 42Q: Strategy Logic (SR)
16:00
Quantifying Bounds in Strategy Logic

ABSTRACT. Program synthesis automatically constructs programs from specifications. Strategy Logic is a powerful specification language whose goal is to give theoretical foundations for program synthesis in a multi-agent setting. One limitation of Strategy Logic is that it is purely qualitative. For instance it cannot specify quantitative properties of executions such as "every request is quickly granted", or quantitative properties of trees such as "most executions of the system terminate". In this work, we extend Strategy Logic to include quantitative aspects in a way that can express bounds on "how quickly" and "how many". We define Prompt Strategy Logic, which encompasses Prompt LTL (itself an extension of LTL with a prompt eventuality temporal operator), and we define Bounded-Outcome Strategy Logic which has a bounded quantifier on paths. We supply a general technique, based on the study of automata with counters, that solves the model-checking problems for both these logics.
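
To illustrate the kind of bound involved (standard Prompt-LTL notation, used here only for illustration), "every request is quickly granted" asks for a single bound $k$, uniform along the play, rather than a per-request eventuality:

  \[
    \exists k \in \mathbb{N}.\;\;
    \mathbf{G}\big(\mathit{request} \rightarrow \mathbf{F}_{\leq k}\,\mathit{grant}\big),
  \]

whereas the plain LTL formula $\mathbf{G}(\mathit{request} \rightarrow \mathbf{F}\,\mathit{grant})$ allows the waiting times to grow without bound along a single execution.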

16:40
Strategy Logic with Imperfect Information

ABSTRACT. We introduce an extension of Strategy logic for the imperfect-information setting, called SLii, and study its model-checking problem. As this logic naturally captures multi-player games with imperfect information, the problem turns out to be undecidable. We introduce a syntactical class of "hierarchical instances" for which, intuitively, as one goes down the syntactic tree of the formula, strategy quantifications are concerned with finer observations of the model. We prove that model-checking SLii restricted to hierarchical instances is decidable. This result, because it allows for complex patterns of existential and universal quantification on strategies, greatly generalises previous ones, such as decidability of multi-player games with imperfect information and hierarchical observations, and decidability of distributed synthesis for hierarchical systems.

17:20
Dependences in Strategy Logic

ABSTRACT. Strategy Logic (SL) is a very expressive temporal logic for specifying and verifying properties of multi-agent systems: in SL, one can quantify over strategies, assign them to agents, and express LTL properties of the resulting plays. Such a powerful framework has two drawbacks: first, model checking SL has non-elementary complexity; second, the exact semantics of SL is rather intricate, and may not correspond to what is expected. In this paper, we focus on strategy dependences in SL, by tracking how existentially-quantified strategies in a formula may (or may not) depend on other strategies selected in the formula, revisiting the approach of [Mogavero et al., Reasoning about strategies: On the model-checking problem, 2014]. We explain why elementary dependences, as defined by Mogavero et al., do not exactly capture the intended concept of behavioral strategies. We address this discrepancy by introducing timeline dependences, and exhibit a large fragment of SL for which model checking can be performed in 2EXPTIME under this new semantics.
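
As a reminder of the syntax in which these dependences arise (schematic SL notation, used here only for illustration), an SL formula quantifies over strategies and binds them to agents before stating an LTL property, e.g.

  \[
    \forall y.\, \exists x.\, (a, x)(b, y)\; \mathbf{G}\,\neg\mathit{crash},
  \]

read: for every strategy $y$ of agent $b$ there is a strategy $x$ for agent $a$ such that the resulting play never reaches a crash. The dependence question is how much of $y$ the witness for $x$ may inspect: the whole strategy, or only its behaviour along the history produced so far, as in the behavioral and timeline readings discussed in the talk.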

16:00-17:00 Session 42R (WiL)
Location: Blavatnik LT2
16:00
On the Combination of Resolution and SAT Procedures for Modal Theorem-Proving

ABSTRACT. In recent work we have proposed a calculus for the multimodal logic K, where clauses are labelled by the modal level at which they occur. The calculus has been implemented, and the evaluation shows that the prover performs well if the propositional symbols are uniformly distributed over the modal levels. However, by construction, the set of propositional clauses is always satisfiable. We are investigating the combination of SAT solvers with our prover, in order to extract relevant information that could help reduce the time currently spent during saturation.

16:15
Embedding Intuitionistic Modal Logics in Higher-Order Logic. An Easy Task?

ABSTRACT. Intuitionistic logics have been shown to constitute the foundations for many practical applications. For example, they serve as the basis for state-of-the-art proof assistants like Coq. Nevertheless, they have also been controversially discussed. David Hilbert wrote: "Taking the principle of excluded middle from the mathematician would be the same, say, as proscribing the telescope to the astronomer or to the boxer the use of his fists."[4] Indeed, this property of intuitionistic logic gives rise to plenty of new challenges. When trying to embed intuitionistic logics in higher-order logic, monotonicity has to be taken into account. Furthermore, it is desirable to prove a set of frame conditions equivalent to the axioms used to build different intuitionistic logics. This talk will give a short overview of the challenges that arise from the absence of the principle of excluded middle and discuss possible solutions.

16:30
Towards a Logical Framework for Latent Variable Modelling

ABSTRACT. Modelling in psychometrics has become increasingly reliant on computer software; at the same time, many offline decisions that a researcher makes remain unrecorded and, perhaps, grounded in nothing more than the researcher’s intuition or best guess. The aim of this paper is to set out a logic that accounts for, and guides the recording of, decision procedures in psychometric research practice. Such a logic is informed by the integration of three systematic viewpoints under a view of constraints-based practice: i) bounded rationality (Gigerenzer & Goldstein, 1996); ii) axiomatic measurement theory, with specific focus on handling measurement uncertainty (Suppes, 2002, 2006, 2016); and iii) a constructive approach to mathematical set theory (Ferreiros, 2016). The integration of these three viewpoints under an overall perspective characterised by inference from the best systematisation (Rescher, 2016) is reviewed and compared to current researcher practices, with particular reference to the problems for the psychometric and biometric sciences revealed in the outcomes of the Reproducibility Project (Open Science Collaboration, 2012). Our conclusion for the overall system is that the constraints characterised by constructive mathematics offer unique tools to researcher communities in accounting for their decision procedures, and a proposal for a software tool that handles the decision protocol is presented. In characterising the decision procedures, we also explore the way that rough set theory (Pawlak, 1998) is integrated into decision procedures to provide insight into database fields or variables that hold some import but may otherwise remain hidden in research outcome reporting.

16:30-17:30 Session 43 (DCM and ITRS)
16:30
Polyadic approximations and intersection types (ITRS/DCM joint invited talk)

ABSTRACT. In his paper introducing linear logic, Girard briefly mentioned how full linear logic could be seen, in an informal sense, as the limit of its purely linear fragment. Several ways of expanding and formalizing this idea have been developed through the years, including what I call polyadic approximations. In this talk, I will show how these are deeply related to intersection type systems, via a general construction that is sufficiently broad to allow recovering every well-known intersection type system and its normalization properties, as well as introducing new type systems in wildly disparate contexts, with applications ranging from deadlock-freedom in the pi-calculus to a type-theoretic proof of the Cook-Levin theorem.

(Part of this work was developed jointly with Luc Pellissier and Pierre Vial).

17:00-18:00 Session 44: Priorities for Diversity in Logic in Computer Science (round table discussion) (WiL)

Round table panel discussion, where attendees will be joined by leading figures in the FLoC community to discuss "Priorities for Diversity in Logic for Computer Science".

Location: Blavatnik LT2
18:30-20:30 Women in Logic reception (WiL)

Women in Logic reception and buffet supper at Wadham College (contact Ursula.Martin@maths.ox.ac.uk for more details).

Location: Wadham College