ABSTRACT. Designing programs that do not leak confidential information continues to be a challenge. Part of the difficulty arises when partial information leaks are inevitable, so that design interventions can only limit, rather than eliminate, their impact.
We show, by example, how to gain a better understanding of the consequences of information leaks by modelling what adversaries might be able to do with any leaked information. Our presentation is based on the theory of Quantitative Information Flow, but takes an experimental approach to explore potential vulnerabilities in program designs. We make use of the tool Kuifje to interpret a small programming language in a probabilistic semantics that supports quantitative information flow analysis.
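To make the quantitative view concrete, here is a minimal Haskell sketch of the underlying measure; it is our illustration, not Kuifje's actual interface. A leak is modelled as a probabilistic channel from secrets to observables, and we compare the adversary's chance of guessing the secret before and after observing the leak (prior versus posterior Bayes vulnerability).

  import qualified Data.Map.Strict as M

  type Dist a      = M.Map a Double   -- probability distribution
  type Channel s o = s -> Dist o      -- the leak: secret to observable

  -- Prior vulnerability: probability that the adversary's best single guess is right.
  priorV :: Dist s -> Double
  priorV = maximum . M.elems

  -- Posterior vulnerability: expected best-guess probability after seeing the output.
  posteriorV :: Ord o => Dist s -> Channel s o -> Double
  posteriorV prior ch =
    sum [ maximum [ p * M.findWithDefault 0 o (ch s) | (s, p) <- M.toList prior ]
        | o <- M.keys (M.unionsWith (+) [ ch s | s <- M.keys prior ]) ]

For a uniform two-valued secret and a channel that reveals it completely, posteriorV rises to 1 from a priorV of 0.5, quantifying the damage that a design intervention would need to limit.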
ABSTRACT. Equational reasoning is one of the most important tools of functional programming. To facilitate its application to monadic programs, Gibbons and Hinze have proposed a simple axiomatic approach using laws that characterise the computational effects without exposing their implementation details. At the same time, Plotkin and Pretnar have proposed algebraic effects and handlers, a mechanism of layered abstraction by which effects can be implemented in terms of other effects.
This paper performs a case study that connects these two strands of research. We consider two ways in which the nondeterminism and state effects can interact: the high-level semantics where every nondeterministic branch has a local copy of the state, and the low-level semantics where a single sequentially threaded state is global to all branches.
We give a monadic account of the folklore technique of handling local state in terms of global state, provide a novel axiomatic characterisation of global state and prove that the handler satisfies Gibbons and Hinze's local state axioms by means of a novel combination of free monads and contextual
equivalence. We also provide a model for global state that is necessarily non-monadic.
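As a rough illustration of the two interactions, and deliberately not the paper's formal development, the Haskell sketch below contrasts a local-state monad, in which every nondeterministic branch carries its own copy of the state, with a global-state type in which a single state is threaded through all branches (left as a bare function type here, since the abstract notes that the global-state model is necessarily non-monadic).

  import Control.Monad.State (StateT (..), get, put)

  -- Local state: each nondeterministic branch owns a copy of the state.
  type Local s a = StateT s [] a          -- isomorphic to s -> [(a, s)]

  -- Global state: one state threaded sequentially through all branches.
  -- (No Monad instance is claimed for this type.)
  type Global s a = s -> ([a], s)

  -- A branching program: both branches increment a counter and add it to
  -- their branch label.
  prog :: Local Int Int
  prog = do
    branch <- StateT (\s -> [(0, s), (1, s)])   -- nondeterministic choice
    n <- get
    put (n + 1)
    pure (n + branch)

  -- Under the local-state reading each branch sees the counter at 0:
  --   runStateT prog 0 == [(0,1),(1,1)]
  -- whereas a global-state reading threads the first branch's increment into
  -- the second, yielding results 0 and 2 instead.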
Certification of Breadth-First Algorithms by Extraction
ABSTRACT. By using pointers, breadth-first algorithms are very easy to
implement efficiently in imperative languages. Implementing
them with the same bounds on execution time in purely
functional style can be challenging, as explained in Okasaki's
ICFP 2000 paper, which restricts the problem to binary trees
but considers numbering rather than just traversal.
Okasaki's solution is modular and factors out the problem
of implementing queues (FIFOs) with worst-case constant
time operations. We certify those FIFO-based breadth-first
algorithms on binary trees by extracting them from fully
specified Coq terms, given an axiomatic description of FIFOs.
In addition, we axiomatically characterize the strict and
total order on branches that captures the nature of breadth-first
traversal and propose alternative characterizations of
breadth-first traversal of forests. We also propose efficient
certified implementations of FIFOs by extraction, either with
pairs of lists (with amortized constant time operations) or
triples of lazy lists (with worst-case constant time operations),
thus obtaining, by extraction, certified breadth-first
algorithms with optimal bounds on execution time.
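To fix intuitions, here is a small, uncertified Haskell sketch of the FIFO-based scheme the abstract describes (the certified Coq development is, of course, the paper's actual artefact): breadth-first traversal of a binary tree driven by a queue, instantiated with the classic pair-of-lists FIFO whose operations take amortized constant time.

  data Tree a = Leaf | Node (Tree a) a (Tree a)

  -- Pair-of-lists FIFO: amortized O(1) push and pop.
  type Fifo a = ([a], [a])

  emptyQ :: Fifo a
  emptyQ = ([], [])

  push :: a -> Fifo a -> Fifo a
  push x (front, back) = (front, x : back)

  pop :: Fifo a -> Maybe (a, Fifo a)
  pop ([], [])     = Nothing
  pop ([], back)   = pop (reverse back, [])
  pop (x : f, b)   = Just (x, (f, b))

  -- Breadth-first traversal: pop a tree, emit its root, push its children.
  bft :: Tree a -> [a]
  bft t = go (push t emptyQ)
    where
      go q = case pop q of
        Nothing               -> []
        Just (Leaf, q')       -> go q'
        Just (Node l x r, q') -> x : go (push r (push l q'))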
ABSTRACT. Common programming tools, like compilers, debuggers, and IDEs, crucially rely on the ability to analyse program code to reason about its behaviour and properties. There has been a great deal of work on verifying compilers and static analyses, but far less on verifying dynamic analyses such as program slicing. Recently, a new mathematical framework for slicing was introduced in which forward and backward slicing are dual in the sense that they constitute a Galois connection. This paper formalizes forward and backward dynamic slicing algorithms for a simple imperative programming language, and formally verifies their duality using the Coq proof assistant.
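For readers unfamiliar with the term, the duality in question is the standard order-theoretic one, stated here schematically in our notation rather than the paper's: monotone maps f : A -> B and g : B -> A between the lattices of slices form a Galois connection precisely when

  \[ f(a) \sqsubseteq_B b \iff a \sqsubseteq_A g(b) \qquad \text{for all } a \in A,\ b \in B, \]

with forward and backward slicing playing the roles of the two adjoints.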
Self-Certifying Railroad Diagrams, or: How to Teach Nondeterministic Finite Automata
ABSTRACT. Regular expressions can be visualized using railroad or syntax
diagrams. The construction does not depend on fancy artistic
skills. Rather, a diagram can be systematically constructed through
simple, local transformations due to Manna. We argue that the result
can be seen as a nondeterministic finite automaton with
ε-transitions. Despite its simplicity, the construction has a number
of pleasing characteristics: the number of states and the number of
edges are linear in the size of the regular expression; due to sharing
of sub-automata and auto-merging of states, the resulting automaton is
often surprisingly small. The proof of correctness relies on the
notion of a subfactor. In fact, Antimirov's subfactors (partial
derivatives) appear as target states of non-ε-transitions, suggesting
a smooth path to nondeterministic finite automata without
ε-transitions. Antimirov's subfactors, in turn, provide a fine-grained
analysis of Brzozowski's factors (derivatives), suggesting a smooth
path to deterministic finite automata. We believe that this makes a
good story line for introducing regular expressions and automata.
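As a pointer for the classroom, the Haskell sketch below (ours, not the paper's) computes Antimirov's partial derivatives, which the abstract identifies as the target states of the non-ε-transitions: the partial derivative of an expression with respect to a character is a finite set of residual expressions, each of which serves as a state of an ε-free nondeterministic automaton.

  import Data.Set (Set)
  import qualified Data.Set as Set

  data Re = Empty | Eps | Sym Char | Alt Re Re | Seq Re Re | Star Re
    deriving (Eq, Ord, Show)

  nullable :: Re -> Bool
  nullable Empty     = False
  nullable Eps       = True
  nullable (Sym _)   = False
  nullable (Alt r s) = nullable r || nullable s
  nullable (Seq r s) = nullable r && nullable s
  nullable (Star _)  = True

  -- Partial derivative of r with respect to c: a set of residual expressions.
  pderiv :: Char -> Re -> Set Re
  pderiv _ Empty     = Set.empty
  pderiv _ Eps       = Set.empty
  pderiv c (Sym a)   = if a == c then Set.singleton Eps else Set.empty
  pderiv c (Alt r s) = pderiv c r `Set.union` pderiv c s
  pderiv c (Seq r s) = Set.map (`Seq` s) (pderiv c r)
                         `Set.union` (if nullable r then pderiv c s else Set.empty)
  pderiv c (Star r)  = Set.map (`Seq` Star r) (pderiv c r)

Iterating pderiv over the alphabet reaches only finitely many expressions, which yields the ε-free nondeterministic automaton the abstract alludes to.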
ABSTRACT. While simple equational reasoning is adequate for the calculation of many algorithms from their functional specifications, it is not
up to the task of dealing with others, particularly those specified as optimisation problems. One approach is to replace functions by relations,
and equational reasoning by reasoning about relational inclusion. But
such a wholesale approach means one has to adopt a new and sometimes
subtle language to argue about the properties of relational expressions.
A more modest proposal is to generalise our powers of specification by
allowing certain nondeterministic, or multi-valued, functions, and to reason about refinement instead. Such functions will not appear in any final
code. Refinement calculi have been studied extensively over the years
and our aim in this article is just to explore the issues in a simple setting
and to justify the axioms of refinement using the semantics suggested by
Morris and Bunkenburg.
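As a toy rendering of the idea, ours rather than the article's calculus, one can model a multi-valued specification as a set-valued function and read refinement pointwise: every answer the implementation returns must be one the specification permits.

  import Data.List (minimumBy)
  import Data.Ord  (comparing)

  -- Specification: some element of minimal cost (multi-valued on ties).
  minSpec :: Ord b => (a -> b) -> [a] -> [a]
  minSpec cost xs = [ x | x <- xs, all (\y -> cost x <= cost y) xs ]

  -- Implementation: one deterministic choice among the allowed answers.
  minImpl :: Ord b => (a -> b) -> [a] -> a
  minImpl cost = minimumBy (comparing cost)

  -- Pointwise refinement check on a given (non-empty) input:
  refinesOn :: (Ord b, Eq a) => (a -> b) -> [a] -> Bool
  refinesOn cost xs = minImpl cost xs `elem` minSpec cost xs

  -- e.g. refinesOn snd [('a',2),('b',1),('c',1)] == True

Here minSpec plays the role of a nondeterministic function that appears only in the specification, while minImpl is the kind of deterministic function that survives into final code.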
ABSTRACT. We introduce setoid type theory, an intensional type theory with a proof-irrelevant universe of propositions and an equality type satisfying function extensionality, propositional extensionality and a definitional computation rule for transport. We justify the rules of setoid type theory by a syntactic translation into a pure type theory with a universe of propositions. We conjecture that our syntax is complete with regard to this translation.
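Schematically, and in our notation rather than the paper's exact rules, the two extensionality principles that the equality type is required to satisfy are

  \[ \mathsf{funext} : \Bigl(\prod_{x:A} f\,x =_{B} g\,x\Bigr) \to f =_{A \to B} g
     \qquad
     \mathsf{propext} : (P \leftrightarrow Q) \to P =_{\mathsf{Prop}} Q \]

together with a transport operation that computes definitionally on such equalities.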